\begin{document} \begin{frontmatter} \title{A step towards proving de Polignac's Conjecture} \author{J. Sellers} \begin{abstract} Consider the set of all natural numbers that are co-prime to the primes less than or equal to a given prime. Given a consecutive pair of numbers in that set with an arbitrary even gap, we prove there exists an unbounded number of actual prime pairs with that same gap. This conditional proof of de Polignac's conjecture constitutes a proof for a range of known gaps, but the full conjecture requires an additional proof that such number pairs exist for all even gaps. \end{abstract} \end{frontmatter} \section{Introduction} French mathematician Alphonse de Polignac conjectured in 1849 that: "Every even number is the difference of two consecutive primes in infinitely many ways."\cite{dP,LD} The subsumed twin prime conjecture is better known and is considered older, but its origin is not otherwise documented. de Polignac's conjecture, a generalization to arbitrary even gaps, is taken as the earliest documented statement that is inclusive of the twin prime conjecture. Work on prime gaps has application to both de Polignac's conjecture and the twin prime conjecture, but the twin prime conjecture appears to have been the primary goal of most work. Maynard \cite{JM1} gives an excellent overview of approaches to the twin prime conjecture. The earliest result comes in the work of Hardy and Littlewood \cite{HL}, who proposed a prime pair counting function using a modified assumption about the Riemann Hypothesis to characterize the density of prime pairs. Sieve theory has made the most significant recent progress. It was originally proposed by Brun \cite{VB} as a modified form of the sieve of Eratosthenes and applied to the Goldbach conjecture; his most significant result proved that the sum of the reciprocals of the twin primes converges.
Sieve theory was further developed by Selberg \cite{AS} and has since made significant advances, applying the work of Bombieri, Friedlander, and Iwaniec \cite{BFI1,BFI2,BFI3} on the distribution of primes in arithmetic progressions and then the results of Goldston, Pintz, and Yildirim \cite{GPY} on primes in tuples. This culminated in the work of Zhang \cite{YZ}, who combined these approaches and proved the existence of a finite, though very large, limit on gaps for which there are infinitely many prime pairs. His method was subsequently modified to reduce the gap limit significantly, to 246 \cite{JM2, poly1, poly2}. Those latter approaches formulated sieves using a product of linear functions chosen to ensure finding at least two prime numbers in an infinite number of tuples of fixed finite size. Therefore, while this has produced significant progress, it does not demonstrate a result for prime pairs of a specific gap and is known to have inherent limitations for reducing the gap limit further. The primary difference in this paper is that we work in the realm of relative primes rather than attempting to deal with primes directly, because relative primes are more easily predicted. The set of numbers prime to all $P\le P_{k}$ includes the set of all prime numbers greater than $P_{k}$ and all composite numbers whose prime factors are all greater than $P_{k}$. All of these fall in the two arithmetic progressions $6n+5$ and $6n+7$. All such relative primes that are not among these composite numbers are actual prime numbers. The difficulty in predicting prime numbers derives from the inability to order composite numbers beyond $P_{k+1}^2<P_{k+1}P_{k+2}$ without knowing their actual values. However, we do know that all numbers less than $P_{k+1}^2$ that are prime to all $P\le P_{k}$ are actual prime numbers. In that domain our results are applicable to actual prime numbers. The various combinations of prime factors $P\le P_{k}$ repeat identically in successive sequences of $P_{k}\#$ numbers.
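The claim that every number below $P_{k+1}^2$ that is prime to all $P\le P_{k}$ is an actual prime can be checked directly. The following Python sketch (a sanity check, not part of the formal development; the helper names are our own) verifies it for $P_{k}=7$, $P_{k+1}=11$, along with the observation that such numbers lie in the progressions $6n+5$ and $6n+7$:

```python
def is_prime(n):
    """Trial-division primality test; ample for this small range."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Numbers prime to all P <= P_4 = 7, below P_5^2 = 121.
small_primes = [2, 3, 5, 7]
relative_primes = [n for n in range(5, 11**2)
                   if all(n % p for p in small_primes)]

# Below P_5^2 every such number is an actual prime, and each lies in one
# of the progressions 6n+5 or 6n+7 (i.e., is congruent to 5 or 1 mod 6).
assert all(is_prime(n) for n in relative_primes)
assert all(n % 6 in (1, 5) for n in relative_primes)
```

The first composite number prime to all $P\le 7$ is $11^2=121$, which is exactly why the check succeeds on this range.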
Using this, we define prospective primes, numbers prime to all $P\le P_{k}$ for some $P_{k}$, among which all prime numbers greater than $P_{k}$ must occur. We then apply a formulaic approach for the specification of prospective primes in successively larger sets of $P_{k}\# \rightarrow P_{k+1}\#$ numbers. We see that gaps between consecutive prospective primes propagate predictably between successively larger sets, whereas gaps between actual primes do not. This allows us to assess their distribution directly and prove they exist in a range where they must also be actual prime pairs of a given gap. This work represents an extension of \cite{JS}, which addressed only twin primes, to gaps of arbitrary even numbers. In this approach there are two parts to proving de Polignac's conjecture. Part one, shown in this work, proves that given any consecutive prospective prime pair of even gap $g$, there exists an unbounded number of actual prime pairs with gap $g$. The second part, partially addressed in this work, requires one to prove there exists a pair of consecutive prospective primes for any arbitrary even gap. We show that such gaps occur between consecutive prospective primes for $g=P_{k}\pm 1$ and $g=P_{k+1}-P_{k}$ for all $P_{k}$; however, to complete the proof of de Polignac's conjecture one must show that such gaps exist for all even numbers.
\section{Definitions and framework}

$P$ = generic prime number

$P_{k} = k^{th}$ prime number $(P_{1}=2)$

$P_{k}\#= \prod_{i=1}^{k}P_{i}$

$S_{k}:=\left\{N: 5\le N \le 4+ P_{k}\#\right\}; \; N\in \mathbb{N}$

$S_{k}^{(m)}:=\left\{N: 5+mP_{k-1}\#\le N \le 4+(m+1)P_{k-1}\#\right\}$; where:
\[ 0\le m\le P_{k}-1; \quad S_{k}^{(m)}\subset S_{k};\quad S_{k}^{(0)}=S_{k-1} \]
\[ \cup_{m=0}^{P_{k}-1}S_{k}^{(m)}=S_{k} \quad \& \quad S_{k}^{(m)}\cap S_{k}^{(m')}= \begin{cases} \emptyset &\textrm{if} \quad m\ne m'\\ S_{k}^{(m)} &\textrm{if}\quad m= m' \end{cases} \]

$\widetilde{P}_{\{k\}}=$ unspecified prospective prime number in $S_{k}$:
\[ \forall{P} \left[P|\widetilde{P}_{\{k\}}\longrightarrow P> P_{k} \right] \]

$\widetilde{P} =$ generic prospective prime; prime to all $P\le P_{l}$ for unspecified $P_{l}$

$\widetilde{\mathbb{P}}_{k}:=\left\{\widetilde{P}_{\{k\}}\in S_{k}\right\}$, the set of all prospective primes in $S_{k}$

$\widetilde{\mathbb{P}}_{k}^{(m)}:=\left\{\widetilde{P}_{\{k\}}\in S_{k}^{(m)}\right\}$, the set of all prospective primes in subset $S_{k}^{(m)}$

$(\widetilde{PgP})=$ generic prospective prime pair with gap $g$

$(\widetilde{PgP})_{k}=$ generic prospective prime pair with gap $g$ in $S_{k}$

\begin{definition} Two prospective prime numbers $\widetilde{P}_{\{k\}} < \widetilde{P}_{\{k\}}'$ are considered consecutive when there is no prospective prime number between them, i.e.: \[ \forall{N} \left[ \left(\widetilde{P}_{\{k\}}<N<\widetilde{P}_{\{k\}}'\right)\longrightarrow \left(P|N\rightarrow P\le P_{k}\right)\right] \] \end{definition}

When we refer to prospective prime pairs we always mean consecutive prospective prime pairs.
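The definitions above are easy to instantiate computationally. The following Python sketch (helper names are our own; not part of the formal development) builds $S_{4}$, its subsets, and $\widetilde{\mathbb{P}}_{4}$, and checks that the number of prospective primes in one full period of length $P_{k}\#$ equals Euler's totient $\varphi(P_{k}\#)=\prod_{i=1}^{k}(P_{i}-1)$:

```python
from math import prod

def first_k_primes(k):
    """The first k primes by trial division (P_1 = 2)."""
    ps = []
    n = 2
    while len(ps) < k:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

k = 4
P = first_k_primes(k)                 # [2, 3, 5, 7]
primorial = prod(P)                   # P_4# = 210

# S_k = {N : 5 <= N <= 4 + P_k#}
S_k = range(5, 5 + primorial)

# Prospective primes in S_k: the numbers prime to every P <= P_k.
pros = [n for n in S_k if all(n % p for p in P)]

# One full period of length P_k# contains phi(P_k#) = prod (P_i - 1) such
# numbers; this matches two starting values (5 and 7) with P_j - 1 allowed
# values of m_j for each j >= 3, i.e. 2 * prod_{j>=3}(P_j - 1).
assert len(pros) == prod(p - 1 for p in P)

# The P_k subsets S_k^{(m)} are consecutive blocks of length P_{k-1}#
# and partition S_k.
block = prod(P[:-1])                  # P_3# = 30
subsets = [range(5 + m * block, 5 + (m + 1) * block) for m in range(P[-1])]
assert sum(len(s) for s in subsets) == len(S_k)
```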
Prospective prime numbers, prime to all $P\le P_{k}$, have the form: \begin{equation}\label{E:genproprime} \widetilde{P}_{\{k\}}=\left(\begin{array}{c} 5 \\ 7 \end{array}\right)+\sum_{j=3}^{k}m_{j}P_{j-1}\# \end{equation} For $\widetilde{P}_{\{k\}}\in S_{k}$, $m_{k}$ is constrained by $0\le m_{k} \le P_{k}-1$. In addition, for each $j$ two values of $m_{j}$, corresponding separately to the 5 and the 7 in (\ref{E:genproprime}), are disallowed to avoid a result divisible by $P_{j}$.\footnote{If we allow all values $m_{j} \ge 0$, then (\ref{E:genproprime}) represents the progressions $6n+5$ and $6n+7$.} This is best handled iteratively, as follows. Going from $S_{k}\rightarrow S_{k+1}$ we get: \begin{equation}\label{E: nextproprime} \widetilde{P}_{\{k+1\}}=\widetilde{P}_{\{k\}}+m_{k+1}P_{k}\# \qquad 0 \le m_{k+1} \le P_{k+1}-1 \end{equation} $\widetilde{P}_{\{k+1\}}$ remains prime to $P\le P_{k}$ and will be prime to $P_{k+1}$ as long as we insist $P_{k+1} \nmid \widetilde{P}_{\{k+1\}}$, enforced by $m_{k+1}\ne \widehat{m}_{k+1}$, where:\footnote{(\ref{E: defmhat}) follows from (\ref{E: nextproprime}) letting $\widetilde{P}_{\{k+1\}}\bmod{P_{k+1}}=0$.} \begin{equation}\label{E: defmhat} \widehat{m}_{k+1}=\frac{\alpha P_{k+1}-\widetilde{P}_{\{k\}}\bmod{P_{k+1}}}{\left(P_{k}\#\right)\bmod{P_{k+1}}} \end{equation} and where $\alpha$ is the smallest integer such that $\widehat{m}_{k+1}$ is an integer $\le P_{k+1}-1$. Also, \[ \widetilde{P}_{\{k\}}\bmod{P_{k+1}}=0 \longleftrightarrow \alpha =0. \] One can see from (\ref{E: defmhat}) that the values of $\widehat{m}_{k+1}$ are distinct for $\widetilde{P}_{\{k\}}$ belonging to distinct residue classes $\bmod{P_{k+1}}$, and all $\widetilde{P}_{\{k\}}$ in the same residue class $\bmod{P_{k+1}}$ have the same value of $\widehat{m}_{k+1}$.
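Equation (\ref{E: defmhat}) determines the unique disallowed value $\widehat{m}_{k+1}$; equivalently, it solves $\widetilde{P}_{\{k\}}+mP_{k}\#\equiv 0 \pmod{P_{k+1}}$, which can be computed with a modular inverse. The following Python sketch (our own reformulation, taking $P_{k}=5$ and the prospective prime $11\in S_{3}$ as an arbitrary example) confirms that exactly one value of $m$ is disallowed:

```python
K_PRIMORIAL = 30          # P_3# = 2*3*5
P_NEXT = 7                # P_4, playing the role of P_{k+1}

def m_hat(pt):
    """The unique 0 <= m < P_{k+1} with P_{k+1} | pt + m*P_k#.

    This is the disallowed value of the m-hat condition, computed with a
    modular inverse instead of searching for the integer alpha."""
    return (-pt) * pow(K_PRIMORIAL, -1, P_NEXT) % P_NEXT

pt = 11                   # an arbitrary prospective prime in S_3
m = m_hat(pt)
for m_trial in range(P_NEXT):
    candidate = pt + m_trial * K_PRIMORIAL
    if m_trial == m:
        assert candidate % P_NEXT == 0    # disallowed: divisible by P_{k+1}
    else:
        assert candidate % P_NEXT != 0    # stays prime to P_{k+1}
```

Here $\widehat{m}=5$: indeed $11+5\cdot 30=161=7\cdot 23$, while the other six values of $m$ all yield numbers prime to $7$.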
Note that: \begin{equation}\label{E: proprimeinsub} \widetilde{P}_{\{k+1\}}=\widetilde{P}_{\{k\}}+m_{k+1}P_{k}\# \in S_{k+1}^{(m_{k+1})} \end{equation} Therefore each prospective prime number in $S_{k}$ generates one prospective prime number in all but one subset of $S_{k+1}$, the one disallowed subset being $S_{k+1}^{(\widehat{m}_{k+1})}$. It follows from (\ref{E: proprimeinsub}) that if $m'>m$, $\widetilde{P}_{\{k\}}\in S_{k}^{(m)}$, and $\widetilde{P}_{\{k\}}'\in S_{k}^{(m')}$, then $\widetilde{P}_{\{k\}}<\widetilde{P}_{\{k\}}'$. Therefore, consecutive prospective primes can only occur within a subset or between the largest prospective prime in one subset and the least prospective prime in the next sequential subset. It is also important to know that prospective primes given by (\ref{E:genproprime}) are unique, in accordance with the following lemma. \begin{lemma}\label{L: uniqueness} Given \[ \widetilde{P}_{\{k\}}=\left(\begin{array}{c} 5\\7 \end{array}\right)+\sum_{j=3}^{k}m_{j}P_{j-1}\# \] and \[ \widetilde{P}_{\{k\}}'=\left(\begin{array}{c} 5\\7 \end{array}\right)+\sum_{j=3}^{k}m_{j}'P_{j-1}\# \] where $0\le m_{j},m_{j}' \le P_{j}-1$. Then, \[ \widetilde{P}_{\{k\}}=\widetilde{P}_{\{k\}}' \longleftrightarrow m_{j}=m_{j}' \quad \textrm{for}\quad 3\le j \le k \quad \textrm{and both either start with 5 or both with 7} \] \end{lemma} \begin{proof} Setting $\widetilde{P}_{\{k\}}'=\widetilde{P}_{\{k\}}$ gives: \[ \sum_{j=3}^{k}(\pm\Delta m_{j})P_{j-1}\#=\left(\begin{array}{c} 0\\2 \end{array}\right) \] where the zero applies if $\widetilde{P}_{\{k\}}$ and $\widetilde{P}_{\{k\}}'$ both start with 5 or both start with 7, and the 2 applies if one starts with 5 and the other starts with 7. Every term on the left-hand side is a multiple of $P_{2}\#=6$, so the smallest nonzero absolute value the left-hand side can take is $6$. Therefore the value $2$ cannot be attained, and the only solution is $\sum_{j=3}^{k}(\pm\Delta m_{j})P_{j-1}\#=0$, with $\Delta m_{j}=0$ for all $j$.
\end{proof} \section{Prospective prime pairs with gap $g$} We call prospective prime numbers, prime to all $P\le P_{k}$, consecutive if there are no numbers prime to all $P\le P_{k}$ between them.\footnote{Consecutive prime numbers may be taken as consecutive prospective prime numbers, but only if there are no prospective prime numbers between them.} Gaps between consecutive prospective primes both propagate unchanged and increase when generating prospective primes via (\ref{E: nextproprime}). Increases occur due to the supplemental condition $m_{k+1}\ne \widehat{m}_{k+1}$. For example, let $\widetilde{P}_{\{k\}}<\widetilde{P}_{\{k\}}'< \widetilde{P}_{\{k\}}''$ be three consecutive prospective prime numbers in $S_{k}$, with gaps $g=\widetilde{P}_{\{k\}}'-\widetilde{P}_{\{k\}}$ and $g'=\widetilde{P}_{\{k\}}''-\widetilde{P}_{\{k\}}'$. Then Equation~(\ref{E: nextproprime}) gives the following numbers in $S_{k+1}$ which remain prime to $P\le P_{k}$: \[ \widetilde{P}_{\{k+1\}}=\widetilde{P}_{\{k\}}+m_{k+1}P_{k}\# \] \[ \widetilde{P}_{\{k+1\}}'=\widetilde{P}_{\{k\}}'+m_{k+1}'P_{k}\# \] \[ \widetilde{P}_{\{k+1\}}''=\widetilde{P}_{\{k\}}''+m_{k+1}''P_{k}\# \] In cases where $m_{k+1}=m_{k+1}'=m_{k+1}''$ the gaps remain at $g$ and $g'$. However, we must consider the disallowed cases given by the supplemental condition (\ref{E: defmhat}), which is necessary so that the corresponding numbers in $S_{k+1}$ are prime to $P\le P_{k+1}$. Note that $\widehat{m}_{k+1}$, $\widehat{m}_{k+1}'$, and $\widehat{m}_{k+1}''$ are distinct from each other unless $g\bmod{P_{k+1}}=0$, $g'\bmod{P_{k+1}}=0$, or $(g+g')\bmod{P_{k+1}}=0$.
Then given that there are $P_{k+1}-1$ valid values for each, there are the following cases when $\widehat{m}_{k+1}$, $\widehat{m}_{k+1}'$, and $\widehat{m}_{k+1}''$ are distinct:\footnote{$g^{?}$ represents an unspecified gap, which is the gap from the disallowed prospective prime to the next larger or smaller prospective prime, respectively.}\\ \begin{enumerate} \item \underline{$m_{k+1}=m_{k+1}'=m_{k+1}'' \ne \widehat{m}_{k+1},\widehat{m}_{k+1}', \widehat{m}_{k+1}''$:} Yields $P_{k+1}-3$ cases where both gaps are preserved, because all three of the corresponding prospective primes are allowed in the corresponding subsets: \[ \widetilde{P}_{\{k\}}\leftarrow g \rightarrow \widetilde{P}_{\{k\}}'\leftarrow g' \rightarrow \widetilde{P}_{\{k\}}'' \quad\Longrightarrow\quad \widetilde{P}_{\{k+1\}}\leftarrow g \rightarrow \widetilde{P}_{\{k+1\}}'\leftarrow g' \rightarrow \widetilde{P}_{\{k+1\}}'' \] \item \underline{$m_{k+1}=m_{k+1}'=m_{k+1}'' = \widehat{m}_{k+1}$:} Yields 1 case where only the second gap is preserved, because $\widetilde{P}_{\{k+1\}}$ is disallowed in $S_{k+1}^{(\widehat{m}_{k+1})}$: \[ \widetilde{P}_{\{k\}}\leftarrow g \rightarrow \widetilde{P}_{\{k\}}'\leftarrow g' \rightarrow \widetilde{P}_{\{k\}}''\quad \Longrightarrow \quad \leftarrow g^{?} + g \rightarrow \widetilde{P}_{\{k+1\}}'\leftarrow g' \rightarrow \widetilde{P}_{\{k+1\}}'' \] \item \underline{$m_{k+1}=m_{k+1}'=m_{k+1}'' = \widehat{m}_{k+1}'$:} Yields 1 case where the two gaps merge, because $\widetilde{P}_{\{k+1\}}'$ is disallowed in $S_{k+1}^{(\widehat{m}_{k+1}')}$: \[ \widetilde{P}_{\{k\}}\leftarrow g \rightarrow \widetilde{P}_{\{k\}}'\leftarrow g' \rightarrow \widetilde{P}_{\{k\}}'' \quad\Longrightarrow\quad \widetilde{P}_{\{k+1\}}\leftarrow g + g' \rightarrow \widetilde{P}_{\{k+1\}}'' \] \item \underline{$m_{k+1}=m_{k+1}'=m_{k+1}'' = \widehat{m}_{k+1}''$:} Yields 1 case where only the first gap is preserved, because $\widetilde{P}_{\{k+1\}}''$ is disallowed in
$S_{k+1}^{(\widehat{m}_{k+1}'')}$: \[ \widetilde{P}_{\{k\}}\leftarrow g \rightarrow \widetilde{P}_{\{k\}}'\leftarrow g' \rightarrow \widetilde{P}_{\{k\}}'' \quad\Longrightarrow\quad \widetilde{P}_{\{k+1\}}\leftarrow g \rightarrow \widetilde{P}_{\{k+1\}}'\leftarrow g' + g^{?} \rightarrow \] \end{enumerate} One can see from this that if $\widehat{m}_{k+1}$, $\widehat{m}_{k+1}'$, $\widehat{m}_{k+1}''$ are not distinct, then case 1 would have $P_{k+1}-2$ cases if any two are equal and the third is distinct, and would have $P_{k+1}-1$ cases if all three were equal. Another important point from this example is why it is necessary to track prospective prime numbers rather than actual prime numbers. Consider the case in the above example where $\widetilde{P}_{\{k\}}=P_{\{k\}}$ and $\widetilde{P}_{\{k\}}''=P_{\{k\}}''$ are actual consecutive prime numbers. It is possible then that either one or both of $\widetilde{P}_{\{k+1\}}$ and $\widetilde{P}_{\{k+1\}}''$ may not be prime. If they are both prime, it is possible that $\widetilde{P}_{\{k+1\}}'$ may also be prime. In these cases the gaps are not propagated unchanged and $\widetilde{P}_{\{k+1\}}$ and $\widetilde{P}_{\{k+1\}}''$ are not consecutive prime numbers. However, in the case of consecutive prospective prime numbers there are always predictable cases where the gaps are preserved and the prospective prime numbers remain consecutive. This is independent of whether the prospective prime numbers are prime or not. Consider, for example, the consecutive prime numbers in $S_{4}$, $113$ and $127$. While they are consecutive primes, they are not consecutive prospective primes because $121=11^2$ between them is a prospective prime in $S_{4}$, i.e., prime to $P\le 7$. Table~\ref{T: example1} shows how these three numbers propagate into $S_{5}$ along with their associated gaps. \begin{table}[h] \caption{ The table shows the propagation of consecutive prime numbers 113 and 127 from $S_{4}$ into $S_{5}$.
The gap between prime numbers is only preserved in $S_{5}$ in cases where the intermediate prospective prime, 121, does not generate an actual prime and where the corresponding prospective primes generated by 113 and 127 are actual primes. }\label{T: example1} \begin{tabular}{|p{9mm}||p{6mm}|p{6mm}|p{6mm}|p{6mm}|p{6mm}|p{8mm}|p{8mm}|p{8mm}|p{8mm}|p{8mm}|p{8mm}|} \multicolumn{12}{c}{$\widetilde{\mathbb{P}}_{5}^{(m)}=\widetilde{\mathbb{P}}_{4} +m\cdot 210 \qquad \textbf{bold} = \neg P \qquad \widehat{m}=\neg \widetilde{P}$}\\ \hline m= & \; 0 & \; 1 & \; 2 & \; 3 &\; 4 & \: 5 & \: 6 & \: 7 & \: 8 & \: 9 & \; 10 \\ \hline 113 &113&\textbf{323} & 533 & 743 & 953 & 1163 & 1373 & 1583 & $\; \widehat{m}$ & 2003& 2213 \\ \hline $\:\; g$ & & \;8 & \;8 & \;8 & \; 8 & \;8 & \;8 & \; 8 & & \; 8 & \: 8 \\ \hline 121 &$\; \widehat{m}$ & 331 & 541 & 751 & \textbf{961} & 1171 & 1381 & \textbf{1591}& 1801 & 2011 & 2221 \\ \hline $\:\; g'$ & & \; 6 & \; 6 & \; 6 &\; 6 & & \; 6&\; 6 & \; 6 & \; 6 & \; 6 \\ \hline 127 &127 & 337 & 547 & 757 & 967 & $\; \widehat{m}$& \textbf{1387} & 1597& \textbf{1807} & 2017 & \textbf{2227} \\ \hline $g+g'$ & 14 & & & & 14 & & & 14 & & & \\ \hline \end{tabular} \end{table} The lesson here is that determining whether a prospective prime is an actual prime in a given subset is not as straightforward as predicting whether a prospective prime is present or disallowed in that subset as determined by $\widehat{m}$. 
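Table~\ref{T: example1} can be reproduced mechanically. The Python sketch below (helper names are our own) recomputes the rows for 113, 121, and 127, marking the disallowed subset $\widehat{m}$ and bracketing generated numbers that are not actual primes:

```python
def is_prime(n):
    """Trial-division primality test; ample for this range."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

P4_PRIMORIAL = 210      # P_4# = 2*3*5*7
P5 = 11                 # P_5

def m_hat(pt):
    """The unique disallowed m with P_5 | pt + m*P_4# (the m-hat condition)."""
    return (-pt) * pow(P4_PRIMORIAL, -1, P5) % P5

# 113 and 127 are consecutive primes in S_4, but not consecutive prospective
# primes: 121 = 11^2 lies between them and is prime to all P <= 7.
for pt in (113, 121, 127):
    row = []
    for m in range(P5):
        if m == m_hat(pt):
            row.append("m^")                       # disallowed subset
        else:
            n = pt + m * P4_PRIMORIAL
            row.append(str(n) if is_prime(n) else "[%d]" % n)  # [..] = not prime
    print(pt, " ".join(row))
```

The disallowed columns come out at $\widehat{m}=8$, $0$, and $5$ for 113, 121, and 127 respectively, matching the table.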
\subsection{Propagation of prospective prime pairs with gap $g$} \begin{theorem}\label{T: noprotpp} Given a set $S_{l}=\left\{N: 5 \le N \le 4+P_{l}\#\right\}$ containing a pair of consecutive prospective prime numbers $(\widetilde{P}_{\{l\}},\widetilde{P}_{\{l\}}')$ with gap $\widetilde{P}_{\{l\}}'-\widetilde{P}_{\{l\}}=g$, and given any prime number $P_{k} > P_{l}$, let $\mathring{n}_{k}^{g}$ be the number of prospective prime pairs $\left(\widetilde{P}_{\{k\}}, \widetilde{P}_{\{k\}}'\right)$ with gap $g$ in $S_{k}=\left\{N: 5 \le N \le 4+P_{k}\#\right\}$ that are derived from that prospective prime pair in $S_{l}$. Then \[ \mathring{n}_{k}^{g} = \prod_{i=l+1}^{k}(P_{i}-2)\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k}\frac{(P_{i}-1)}{(P_{i}-2)} \] \end{theorem} \begin{proof} Given a consecutive prospective prime pair with gap $g$ in $S_{j}$, $\left(\widetilde{P}_{\{j\}},\widetilde{P}_{\{j\}}'\right)$, we can define prospective prime pairs with gap $g$ in $S_{j+1}$ by: \begin{align}\label{E: defproprima-1} \widetilde{P}_{\{j+1\}}=\widetilde{P}_{\left\{j\right\}}+m_{j+1}P_{j}\#\\ \widetilde{P}_{\{j+1\}}'=\widetilde{P}_{\left\{j\right\}}'+m_{j+1}P_{j}\#\notag \end{align} with the supplementary conditions: $0\le m_{j+1} \le P_{j+1}-1$, $m_{j+1}\ne \widehat{m}_{j+1}$ and $m_{j+1}\ne \widehat{m}_{j+1}'$ where: \begin{align}\label{E: suppconda}\notag \widehat{m}_{j+1}&=\frac{\alpha P_{j+1}-\widetilde{P}_{\left\{j\right\}}\bmod{P_{j+1}}}{\left(P_{j}\#\right)\bmod{P_{j+1}}} \\ \\ \notag \widehat{m}_{j+1}'&=\frac{\alpha' P_{j+1}-\widetilde{P}_{\left\{j\right\}}'\bmod{P_{j+1}}}{\left(P_{j}\#\right)\bmod{P_{j+1}}} \end{align} Given $0\le m_{j+1} \le P_{j+1}-1$, one can see that both $\widetilde{P}_{\{j+1\}}$ and $\widetilde{P}_{\{j+1\}}'$ are prime to all $P\le P_{j}$.
Then the other supplementary conditions guarantee that $\widetilde{P}_{\{j+1\}}$ and $\widetilde{P}_{\{j+1\}}'$ are both prime to $P_{j+1}$, and therefore they are a prospective prime pair with gap $g$ in $S_{j+1}$. Given $\widetilde{P}_{\{j\}}'=\widetilde{P}_{\{j\}}+g$, (\ref{E: suppconda}) gives: \begin{align}\label{E: disalloweddiff}\notag \widehat{m}_{j+1}'&=\frac{\alpha' P_{j+1}-(\widetilde{P}_{\left\{j\right\}}+g)\bmod{P_{j+1}}}{\left(P_{j}\#\right)\bmod{P_{j+1}}} \\ &=\widehat{m}_{j+1} + \frac{\Delta\alpha \cdot P_{j+1}-g\bmod{P_{j+1}}}{\left(P_{j}\#\right)\bmod{P_{j+1}}} \end{align} where $\Delta \alpha$ is modified from $\alpha'-\alpha$ to account for separating out $g$ in the mod function, and is chosen as the least integer to make the second term an integer. Consider the case where $g\bmod{P_{j+1}}=0$; then: \[ \widetilde{P}_{\{j\}}'\bmod{P_{j+1}}=( \widetilde{P}_{\{j\}}+g)\bmod{P_{j+1}}=\widetilde{P}_{\{j\}}\bmod{P_{j+1}} \] In that case, there is only one disallowed subset in $S_{j+1}$, so $(\widetilde{P}_{\{j\}},\widetilde{P}_{\{j\}}')$ generates $P_{j+1}-1$ prospective prime pairs with gap $g$ in $S_{j+1}$. If $g\bmod{P_{j+1}}\ne 0$ then $m_{j+1}$ has $P_{j+1}-2$ allowed values and the prospective prime pair $(\widetilde{P}_{\{j\}},\widetilde{P}_{\{j\}}')$ generates $P_{j+1}-2$ distinct prospective prime pairs with gap $g$ in $S_{j+1}$.
By the same procedure, those prospective prime pairs in $S_{j+1}$ each generate prospective prime pairs with gap $g$ in $S_{j+2}$: \begin{align}\label{E: defproprima-2} \widetilde{P}_{\{j+2\}}=\widetilde{P}_{\{j+1\}}+m_{j+2}P_{j+1}\#\\ \widetilde{P}_{\{j+2\}}'=\widetilde{P}_{\{j+1\}}'+m_{j+2}P_{j+1}\#\notag \end{align} with the supplementary conditions: $0\le m_{j+2} \le P_{j+2}-1$, $m_{j+2}\ne \widehat{m}_{j+2}$ and $m_{j+2}\ne \widehat{m}_{j+2}'$ where: \begin{align}\label{E: suppconda-2}\notag \widehat{m}_{j+2}&=\frac{\alpha_{j+2} P_{j+2}-\widetilde{P}_{\{j+1\}}\bmod{P_{j+2}}}{\left(P_{j+1}\#\right)\bmod{P_{j+2}}} \\ \\ \notag \widehat{m}_{j+2}'&=\frac{\alpha_{j+2}' P_{j+2}-\widetilde{P}_{\{j+1\}}'\bmod{P_{j+2}}}{\left(P_{j+1}\#\right)\bmod{P_{j+2}}} \end{align} Again, $\widehat{m}_{j+2}$ and $\widehat{m}_{j+2}'$ are distinct unless $g\bmod{P_{j+2}}=0$, in which case the corresponding prospective prime pair in $S_{j+1}$ generates $P_{j+2}-1$ instead of $P_{j+2}-2$ prospective prime pairs with gap $g$ in $S_{j+2}$. Furthermore, we know from Lemma~\ref{L: uniqueness} that the prospective primes generated in this process are distinct, so that the prime pairs are also distinct pairs. Then, following the same process of successively generating prospective prime pairs of gap $g$ in larger sets, in going from $S_{j}$ to $S_{j+1}$ each prospective prime pair with gap $g$ in $S_{j}$ generates $P_{j+1}-1$ distinct prospective prime pairs of gap $g$ in $S_{j+1}$ if $P_{j+1}$ is a factor of $g$, and otherwise generates $P_{j+1}-2$ distinct prospective prime pairs of gap $g$ in $S_{j+1}$. Therefore, in going from $S_{l}$ to $S_{k}$, the number of prospective prime pairs with gap $g$ in $S_{k}$ that are generated from a prospective prime pair with gap $g$ in $S_{l}$ is given by $ \prod_{i=l+1}^{k}(P_{i}-2)\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k}\frac{(P_{i}-1)}{(P_{i}-2)}$.
Therefore we have: \[ \mathring{n}_{k}^{g} =\prod_{i=l+1}^{k}(P_{i}-2)\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k}\frac{(P_{i}-1)}{(P_{i}-2)} \] \end{proof} Assuming there exists a set $S_{l}$ that contains at least one prospective prime pair with gap $g$, if that set contains $n_{l}^{g}$ such prospective prime pairs, then the actual number of prospective prime pairs with gap $g$ in $S_{k}$, $k>l$, derived from those $n_{l}^{g}$ prospective prime pairs is: \begin{equation}\label{E: totalprimepairs} n_{k}^{g}\ge n_{l}^{g}\cdot \mathring{n}_{k}^{g} \end{equation} The equal sign holds if $g=2$, because prospective twin primes can all be generated from the single twin prime $(5,7)\in S_{2}$ using (\ref{E: nextproprime}) and (\ref{E: defmhat}), giving: \cite{JS} \[ n_{k}^{2}=\prod_{i=3}^{k}(P_{i}-2) \] The formulas in Theorem~\ref{T: noprotpp} and (\ref{E: totalprimepairs}) will generally represent a minimum when considering the total prospective prime pairs with gap $g>2$ in a set. This occurs because new larger gaps are always generated in going to a larger set because of the supplemental condition (\ref{E: defmhat}). \subsection{Distribution of prospective prime pairs with gap $g$}\label{S: distribution} We define $(\widetilde{PgP})_{j}=\left( \widetilde{P}_{\{j\}}, \widetilde{P}_{\{j\}}' \right)$ as a generic prospective prime pair with gap $g$ in $S_{j}$. In the following lemmas we assume there exists a set $S_{l}$ with at least one prospective prime pair with gap $g$. In the lemmas, the indices $j$ and $k$ are assumed to have values $>l+2$. \begin{lemma}\label{L: distrofSj+1} The set of $(\widetilde{PgP})_{j+1}\in S_{j+1}$ generated from a single $(\widetilde{PgP})_{j}\in S_{j}$ has each $(\widetilde{PgP})_{j+1}$ distributed to a distinct subset of $S_{j+1}$. Furthermore, if $g\bmod{P_{j+1}}=0$ they are distributed one each to all but one subset of $S_{j+1}$, and if $g\bmod{P_{j+1}}\ne 0$ they are distributed one each to all but two subsets of $S_{j+1}$.
\end{lemma} \begin{proof} Let $(\widetilde{PgP})_{j+1}=\left( \widetilde{P}_{\{j+1\}}, \widetilde{P}_{\{j+1\}}' \right)$ be a prospective prime pair with gap $g$ in $S_{j+1}$ generated from $(\widetilde{PgP})_{j}$, where: \begin{equation}\label{E: PgP 1} (\widetilde{PgP})_{j+1}=(\widetilde{PgP})_{j}+m_{j+1}P_{j}\# \end{equation} This represents two separate equations, relating $\widetilde{P}_{\{j+1\}}$ to $\widetilde{P}_{\{j\}}$ and $\widetilde{P}_{\{j+1\}}'$ to $\widetilde{P}_{\{j\}}'$, both using the same value of $m_{j+1}$, where: \[ 0 \le m_{j+1} \le P_{j+1}-1 \] and where additionally: \begin{align}\label{E: PgPms}\notag m_{j+1}\ne& \widehat{m}_{j+1} =\frac{\alpha_{j+1} P_{j+1}-\widetilde{P}_{\{j\}}\bmod{P_{j+1}}}{\left(P_{j}\#\right)\bmod{P_{j+1}}}\\ \textrm{and}&\\ \notag m_{j+1}\ne& \widehat{m}_{j+1}'= \frac{\alpha_{j+1}' P_{j+1}-\widetilde{P}_{\{j\}}'\bmod{P_{j+1}}}{\left(P_{j}\#\right)\bmod{P_{j+1}}} \end{align} where $\alpha_{j+1}$ and $\alpha_{j+1}'$ represent the lowest integer values yielding integer solutions for $\widehat{m}_{j+1}$ and $\widehat{m}_{j+1}'$. Given subsets of $S_{j+1}$: \begin{equation}\label{E: subsetdef} S_{j+1}^{(m)}=\left\{N: 5+mP_{j}\#\le N\le 4+(m+1)P_{j}\#\right\} \end{equation} one can see that: \begin{equation}\label{E: PgP 2} (\widetilde{PgP})_{j+1}=(\widetilde{PgP})_{j}+m_{j+1}P_{j}\#\in S_{j+1}^{(m_{j+1})} \end{equation} where $0\le m_{j+1}\le P_{j+1}-1$. Therefore a fixed $(\widetilde{PgP})_{j}\in S_{j}$ generates one prospective prime pair with gap $g$ into each allowed subset of $S_{j+1}$. The disallowed subsets of $S_{j+1}$ are given by (\ref{E: PgPms}) and are $S_{j+1}^{(\widehat{m}_{j+1})}$ and $S_{j+1}^{(\widehat{m}_{j+1}')}$. These will be the same single disallowed subset if $g\bmod{P_{j+1}}=0$, because then \[\widetilde{P}_{\{j\}}'\bmod{P_{j+1}}=\left(\widetilde{P}_{\{j\}}+g\right)\bmod{P_{j+1}}=\widetilde{P}_{\{j\}}\bmod{P_{j+1}} \]
Therefore, each $(\widetilde{PgP})_{j}\in S_{j}$ generates one corresponding $(\widetilde{PgP})_{j+1}$ into all but one or two of the $P_{j+1}$ subsets of $S_{j+1}$ respectively, depending on whether $g\bmod{P_{j+1}}=0$ or not. \end{proof} \begin{lemma}\label{L: distroinSj+2} Given the set of $(\widetilde{PgP})_{j+2}\in S_{j+2}$ generated by a single $(\widetilde{PgP})_{j}\in S_{j}$, the disallowed subsets $S_{j+2}^{(\widehat{m})}$ corresponding to the two components of each $(\widetilde{PgP})_{j+2}$ are separately distinct. \end{lemma} \begin{proof} Consider the set of $(\widetilde{PgP})_{j+1}\in S_{j+1}$ generated from the same $(\widetilde{PgP})_{j}\in S_{j}$, which we represent as $\left\{(\widetilde{PgP})_{j+1}\right\}_{(\widetilde{PgP})_{j}}$. The $(\widetilde{PgP})_{j+1} \in \left\{(\widetilde{PgP})_{j+1}\right\}_{(\widetilde{PgP})_{j}}$ are distributed in $S_{j+1}$ as given by Lemma~\ref{L: distrofSj+1}, one each to all but one or two subsets of $S_{j+1}$. Now consider the set of $(\widetilde{PgP})_{j+2}$ generated by the set $\left\{(\widetilde{PgP})_{j+1}\right\}_{(\widetilde{PgP})_{j}}$. We represent this set as: \begin{equation}\label{E: PGPinJ+2} \left\{(\widetilde{PgP})_{j+2}\right\}_{(\widetilde{PgP})_{j}}= \left\{(\widetilde{PgP})_{j+1}\right\}_{(\widetilde{PgP})_{j}}+ m_{j+2}P_{j+1}\# \end{equation} where we consider that the second term on the right is added to both components of each member of the set represented as the first term on the right.
We have supplementary conditions: \[ 0\le m_{j+2} \le P_{j+2}-1 \quad \textrm{and}\quad m_{j+2}\ne \widehat{m}_{j+2}, \widehat{m}_{j+2}' \] where, given $(\widetilde{PgP})_{j+1}=( \widetilde{P}_{\{j+1\}}, \widetilde{P}_{\{j+1\}}')$: \begin{align}\label{E: suppconda-2b}\notag \widehat{m}_{j+2}&=\frac{\alpha_{j+2} P_{j+2}-\widetilde{P}_{\{j+1\}}\bmod{P_{j+2}}}{\left(P_{j+1}\#\right)\bmod{P_{j+2}}} \\ \\ \notag \widehat{m}_{j+2}'&=\frac{\alpha_{j+2}' P_{j+2}-\widetilde{P}_{\{j+1\}}'\bmod{P_{j+2}}}{\left(P_{j+1}\#\right)\bmod{P_{j+2}}} \end{align} These represent two distinct disallowed subsets in $S_{j+2}$ unless $g\bmod{P_{j+2}}=0$, in which case there is only one disallowed subset. By definition, each $(\widetilde{PgP})_{j+1}\in \left\{(\widetilde{PgP})_{j+1}\right\}_{(\widetilde{PgP})_{j}}$ is generated using the same $(\widetilde{PgP})_{j}$. Therefore, from Equations~(\ref{E: suppconda-2b}) we have: \begin{align}\label{E: suppconda-2c}\notag \widehat{m}_{j+2}&=\frac{\beta_{j+2} P_{j+2}-\widetilde{P}_{\{j\}}-m_{j+1}\left(P_{j}\#\right)\bmod{P_{j+2}}}{\left(P_{j+1}\#\right)\bmod{P_{j+2}}} \\ \\ \notag \widehat{m}_{j+2}'&=\frac{\beta_{j+2}' P_{j+2}-\widetilde{P}_{\{j\}}-g\bmod{P_{j+2}}-m_{j+1}\left(P_{j}\#\right)\bmod{P_{j+2}}}{\left(P_{j+1}\#\right)\bmod{P_{j+2}}} \end{align} where we use $\beta$ instead of $\alpha$ to represent possible changes to the integer values given the breakout of the mod arguments; they are still the lowest integer values making $\widehat{m}_{j+2}$ and $\widehat{m}_{j+2}'$ integers. One can see that for a given $\widetilde{P}_{\{j\}}$ and fixed $P_{j+2}$ the only variable in each of the equations in (\ref{E: suppconda-2c}) is $m_{j+1}$.
According to Lemma~\ref{L: distrofSj+1} each $(\widetilde{PgP})_{j+1}\in \left\{(\widetilde{PgP})_{j+1}\right\}_{(\widetilde{PgP})_{j}}$ has a unique corresponding value of $m_{j+1}$, and therefore the values of $\widehat{m}_{j+2}$ and $\widehat{m}_{j+2}'$ are separately distinct corresponding to the values of $m_{j+1}$. Therefore the disallowed subsets for each component of \[(\widetilde{PgP})_{j+2} \in \left\{(\widetilde{PgP})_{j+2}\right\}_{(\widetilde{PgP})_{j}} \] namely $S_{j+2}^{(\widehat{m}_{j+2})}$ and $S_{j+2}^{(\widehat{m}_{j+2}')}$, are separately distinct. \end{proof} \begin{lemma}\label{L: delamhat} The separation of disallowed subsets corresponding to the two components of each $(\widetilde{PgP})_{k}\in S_{k}$ is a constant in $S_{k}$. \end{lemma} \begin{proof} Using (\ref{E: suppconda-2b}) with $k=j+2$ and $\widetilde{P}_{\{k-1\}}'=\widetilde{P}_{\{k-1\}}+g$ gives: \begin{equation}\label{E: fixeddelta} \Delta \widehat{m}_{k}=\frac{\Delta\alpha P_{k}-g\bmod{P_{k}}}{\left(P_{k-1}\#\right)\bmod{P_{k}}} \end{equation} where all quantities on the right-hand side of (\ref{E: fixeddelta}) are fixed given $S_{k}$. \end{proof} \begin{lemma}\label{L: distrofPgPinSj_2} Given the set $\left\{(\widetilde{PgP})_{j+2}\right\}_{(\widetilde{PgP})_{j}}$ of $(\widetilde{PgP})_{j+2}\in S_{j+2}$ generated by a single $(\widetilde{PgP})_{j}\in S_{j}$, each subset $S_{j+2}^{(m)}$ contains a minimum of $P_{j+1}-4$ of the $(\widetilde{PgP})_{j+2}\in \left\{(\widetilde{PgP})_{j+2}\right\}_{(\widetilde{PgP})_{j}}$. \end{lemma} \begin{proof} Restating (\ref{E: PGPinJ+2}): \[ \left\{(\widetilde{PgP})_{j+2}\right\}_{(\widetilde{PgP})_{j}}= \left\{(\widetilde{PgP})_{j+1}\right\}_{(\widetilde{PgP})_{j}}+ m_{j+2}P_{j+1}\# \] Lemma~\ref{L: distrofSj+1} gives that the $(\widetilde{PgP})_{j+1}\in \left\{(\widetilde{PgP})_{j+1}\right\}_{(\widetilde{PgP})_{j}} $ are distributed one to a subset across all but one or two subsets of $S_{j+1}$.
That means there are at least $P_{j+1}-2$ distinct $(\widetilde{PgP})_{j+1}\in \left\{(\widetilde{PgP})_{j+1}\right\}_{(\widetilde{PgP})_{j}}$. Applying Lemma~\ref{L: distrofSj+1} individually to each $(\widetilde{PgP})_{j+1}$ shows that the corresponding $(\widetilde{PgP})_{j+2}$ are distributed one per subset across all but one or two subsets of $S_{j+2}$. This is true for each of the $P_{j+1}-2$ instances of $(\widetilde{PgP})_{j+1}$. Lemma~\ref{L: distroinSj+2} says that the disallowed subsets of $S_{j+2}$ are separately distinct for the lesser and greater components of the resulting $(\widetilde{PgP})_{j+2}$. Therefore none of the $(\widetilde{PgP})_{j+2}$ have the same disallowed subset corresponding to their lesser components, and the same holds for their greater components. It is possible, however, for the disallowed subsets of $S_{j+2}$ to be the same for the opposite components of two $(\widetilde{PgP})_{j+2} \in \left\{(\widetilde{PgP})_{j+2}\right\}_{(\widetilde{PgP})_{j}}$. This can occur when: \[ (\widetilde{PgP})_{j+1}=(\widetilde{PgP})_{j+1}' \pm (nP_{j+2} + g) \] This can only occur if $g\bmod{P_{j+2}}\ne 0$; i.e., where the corresponding $(\widetilde{PgP})_{j+1}$ has two disallowed subsets when generating prospective prime pairs in $S_{j+2}$. This means that a subset of $S_{j+2}$ can have at most two exclusions of $(\widetilde{PgP})_{j+2}$, and therefore there are at least $P_{j+1}-4$ of the $(\widetilde{PgP})_{j+2}$ in each subset of $S_{j+2}$. \end{proof} With these results we have the following theorem. \begin{theorem}\label{T: distroPgP} Let the set $S_{l}=\left\{N: 5 \le N \le 4+P_{l}\# \right\}$ contain at least one prospective prime pair with gap $g$. Then for $k>l+2$, consider the set $S_{k}=\left\{N: 5 \le N \le 4+P_{k}\#\right\}$ with its $P_{k}$ subsets: \[S_{k}^{(m)}=\left\{N: 5+mP_{k-1}\# \le N \le 4+(m+1)P_{k-1}\#\right\} \] $0 \le m \le P_{k}-1$.
Then if $\mathring{n}_{S_{k}^{(m)}}^{g}$ is the number of prospective prime pairs with gap $g$ in each subset $S_{k}^{(m)}\in S_{k}$ generated from a prospective prime pair with gap $g$ in $S_{l}$, then: \[ \mathring{n}_{S_{k}^{(m)}}^{g}\ge \mathring{n}_{k-2}^{g}(P_{k-1}-4) = (P_{k-1}-4)\prod_{i=l+1}^{k-2}(P_{i}-2)\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)} \] \end{theorem} \begin{proof} Given Lemma~\ref{L: distrofPgPinSj_2} we know that for each $(\widetilde{PgP})_{k-2} \in S_{k-2}$ we have a minimum of $P_{k-1}-4$ prospective prime pairs with gap $g$ in each of the subsets $S_{k}^{(m)}$. Then using Theorem~\ref{T: noprotpp} we know there are $\mathring{n}_{k-2}^{g}=\prod_{i=l+1}^{k-2}(P_{i}-2)\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)}$ prospective prime pairs in $S_{k-2}$. Putting these two results together we get: \begin{align}\label{E: primespairsinsub}\notag \mathring{n}_{S_{k}^{(m)}}^{g} &\ge \mathring{n}_{k-2}^{g} \cdot (P_{k-1}-4) \\ &= (P_{k-1}-4) \prod_{i=l+1}^{k-2}(P_{i}-2)\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)} \end{align} \end{proof} \begin{corollary} For sufficiently large $P_{k}$: \[ \mathring{n}_{S_{k}^{(m)}}^{g}\ge\mathring{n}_{k-1}^{g}-2\mathring{n}_{k-2}^{g} \] \end{corollary} \begin{proof} We can also write the inequality~(\ref{E: primespairsinsub}) as: \begin{align}\label{E: thmversion} \notag \mathring{n}_{S_{k}^{(m)}}^{g} &\ge (P_{k-1}-4) \prod_{i=l+1}^{k-2}(P_{i}-2)\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)} \\ \notag &=[(P_{k-1}-2)-2] \prod_{i=l+1}^{k-2}(P_{i}-2)\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)} \\ \notag &=\left[\prod_{i=l+1}^{k-1}(P_{i}-2)-2\prod_{i=l+1}^{k-2}(P_{i}-2)\right]\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)}\\ &= \begin{cases} \mathring{n}_{k-1}^{g}-2\mathring{n}_{k-2}^{g}&\textrm{if}\quad P_{k-1} \nmid g \\ \frac{(P_{k-1}-2)}{(P_{k-1}-1)}\mathring{n}_{k-1}^{g}-2\mathring{n}_{k-2}^{g} &\textrm{if}\quad P_{k-1}|g \end{cases} \end{align} Note that by choosing $P_{k}$ sufficiently large, e.g., $P_{k}>P_{\boldsymbol{\pi}\left(P_{l}\#\right)}>g$, only the first case in (\ref{E: thmversion}) applies. \end{proof} \begin{corollary}\label{C: asymtoticlimit} Given the minimum distribution of $(\widetilde{PgP})_{k}$ across the subsets of $S_{k}$ as in Theorem~\ref{T: distroPgP}, that minimum asymptotically approaches the average distribution of $(\widetilde{PgP})_{k}$ to subsets of $S_{k}$: \[ \min{(\mathring{n}_{S_{k}^{(m)}}^{g})}\longrightarrow\frac{ \mathring{n}_{k}^{g}}{P_{k}}\quad \textrm{as} \quad k\longrightarrow \infty \] \end{corollary} \begin{proof} Given that $S_{k}$ has $P_{k}$ subsets, $S_{k}^{(m)}$, the stated minimum number of prospective prime pairs in each subset generated for each $(\widetilde{PgP})_{l}$ accounts for \[ P_{k} \cdot(P_{k-1}-4)\prod_{i=l+1}^{k-2}(P_{i}-2)\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)} \] of the $\mathring{n}_{k}^{g}$ total prospective prime pairs in $S_{k}$ for each $(\widetilde{PgP})_{l}$. Therefore the fraction of the total represented by the minimum is: \begin{align}\notag \frac{P_{k} \cdot \min{(\mathring{n}_{S_{k}^{(m)}}^{g})}}{\mathring{n}_{k}^{g}} =& \frac{P_{k}(P_{k-1}-4)\prod_{i=l+1}^{k-2}(P_{i}-2)\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)}}{\prod_{i=l+1}^{k}(P_{i}-2)\cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)}}\\ \notag =& \frac{P_{k}(P_{k-1}-4)}{(P_{k}-2)(P_{k-1}-2)}\quad\textrm{then letting}\quad \Delta=P_{k}-P_{k-1}\\ \notag &=\frac{1-\frac{\Delta +2}{P_{k}-2}}{1-\frac{\Delta +2}{P_{k}}}<1 \end{align} Therefore the ratio, which is less than $1$, approaches $1$ as $k$ gets large, proving the corollary.
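As an illustrative numerical check (not part of the formal argument), the ratio $P_{k}(P_{k-1}-4)/\left[(P_{k}-2)(P_{k-1}-2)\right]$ can be evaluated for actual primes. The following Python sketch, assuming only a naive trial-division prime generator, confirms that the ratio stays below $1$ while increasing toward $1$:

```python
def first_primes(count):
    """Return the first `count` primes by trial division (adequate for small counts)."""
    ps = []
    n = 2
    while len(ps) < count:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

ps = first_primes(1000)  # ps[k-1] corresponds to P_k, with P_1 = 2

def ratio(k):
    # P_k (P_{k-1} - 4) / ((P_k - 2)(P_{k-1} - 2)): minimum over average allotment
    pk, pk_1 = ps[k - 1], ps[k - 2]
    return pk * (pk_1 - 4) / ((pk - 2) * (pk_1 - 2))

for k in (10, 100, 1000):
    print(k, ratio(k))  # stays below 1 and increases toward 1
```

For instance, at $k=10$ ($P_{10}=29$, $P_{9}=23$) the ratio is $551/567\approx 0.97$, and it exceeds $0.999$ well before $k=1000$.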
\end{proof} Corollary~\ref{C: asymtoticlimit} means that, when we consider the distribution of prospective prime pairs with gap $g$ in $S_{k}$, there is no systematic allotment of more prospective prime pairs to one or a few subsets, and overall the difference in allotments averages out. Therefore we can say that prospective prime pairs with gap $g$ are fairly evenly distributed between the subsets of $S_{k}$. Additionally, while each individual $(\widetilde{PgP})_{k-1}$ in a given subset of $S_{k-1}$ does not contribute to all subsets of $S_{k}$, collectively they do. To prove this we need to determine the contribution: $S_{k-1}^{(m)}\longrightarrow S_{k}^{(m')}$. \begin{lemma}\label{L: equaldistro} Given the set $S_{l}=\left\{N: 5 \le N \le 4+P_{l}\# \right\}$ containing at least one prospective prime pair with gap $g$, then for $k>l+4$, each subset $S_{k-1}^{(m)}\subset S_{k-1}$ generates a minimum of $(P_{k-2}-6)\cdot \mathring{n}_{k-3}^{g}$ prospective prime pairs $(\widetilde{PgP})_{k}$ into each subset $S_{k}^{(m')}\subset S_{k}$. \end{lemma} \begin{proof} From Lemma~\ref{L: distrofPgPinSj_2} each subset $S_{k-1}^{(m)}$ contains a minimum of $P_{k-2}-4$ of $(\widetilde{PgP})_{k-1}\in \left\{(\widetilde{PgP})_{k-1}\right\}_{(\widetilde{PgP})_{k-3}}$. These can be expressed as: \[ (\widetilde{PgP})_{k-1}=(\widetilde{PgP})_{k-3}+m_{k-2}P_{k-3}\#+mP_{k-2}\# \] where they are distinguished by $P_{k-2}-4$ distinct values of $m_{k-2}$. Then the contribution of these to $S_{k}^{(m')}$ is: \[ (\widetilde{PgP})_{k}=(\widetilde{PgP})_{k-3}+m_{k-2}P_{k-3}\#+mP_{k-2}\#+m'P_{k-1}\#\in S_{k}^{(m')} \] These are again distinguished by the $P_{k-2}-4$ distinct values of $m_{k-2}$, since we consider $m$ and $m'$ as constants, corresponding to two arbitrary subsets of $S_{k-1}$ and $S_{k}$ respectively.
Then we know from Lemma \ref{L: distroinSj+2} that for each value of $m_{k-2}$, each corresponding to a single $(\widetilde{PgP})_{k-2}\in S_{k-2}^{(m_{k-2})}$, the disallowed subsets for each component of the resulting $(\widetilde{PgP})_{k}$ are separately distinct. But as discussed in the proof of Lemma \ref{L: distrofPgPinSj_2}, the disallowed subsets for the opposite components of two $(\widetilde{PgP})_{k}$ may be the same. Therefore at most two of the $(\widetilde{PgP})_{k}$ may be disallowed in subset $S_{k}^{(m')}$, leaving a minimum of $P_{k-2}-6$ prospective prime pairs with gap $g$ in $S_{k}^{(m')}$ that are generated by such prospective prime pairs in $S_{k-1}^{(m)}$. Therefore, given the existence of $S_{l}$ prescribed by the statement of the lemma, and given Theorem~\ref{T: noprotpp}, we have: \[ (P_{k-2}-6)\cdot \mathring{n}_{k-3}^{g} \] as the minimum contribution of $S_{k-1}^{(m)}$ to $S_{k}^{(m')}$. \end{proof} \begin{lemma}\label{L: uniformdistro} With respect to minimum distributions of prospective prime pairs with gap $g$, the contribution of $S_{k-1}^{(m)}$ to $S_{k}^{(m')}$ in the process of generating prospective prime pairs into $S_{k}$ from $S_{k-1}$ is asymptotically uniform across all subsets $m$ and $m'$.
\end{lemma} \begin{proof} Lemma~\ref{L: equaldistro} gives the minimum contributions of prospective prime pairs with gap $g$ from subset $S_{k-1}^{(m)}$ to subset $S_{k}^{(m')}$ as: \[ (P_{k-2}-6)\cdot \mathring{n}_{k-3}^{g} \] Given that there are $P_{k-1}$ subsets in $S_{k-1}$, the total contribution from all subsets of $S_{k-1}$ is, at a minimum: \[ P_{k-1}\cdot(P_{k-2}-6)\cdot \mathring{n}_{k-3}^{g} \] Then we know the minimum distribution of prospective prime pairs with gap $g$ from $S_{k-1}$ to each subset of $S_{k}$ is given by Theorem~\ref{T: distroPgP} as: \[ \mathring{n}_{S_{k}^{(m)}}^{g}\ge \mathring{n}_{k-2}^{g}(P_{k-1}-4) \] Taking the ratio of the minimum subset-to-subset contribution to the minimum contribution from set to subset gives: \begin{align}\notag \frac{P_{k-1}\cdot(P_{k-2}-6)\cdot \mathring{n}_{k-3}^{g}}{\mathring{n}_{k-2}^{g}(P_{k-1}-4)}=&\frac{P_{k-1}\cdot(P_{k-2}-6)}{(P_{k-2}-2)(P_{k-1}-4)}\\ \notag =&\frac{P_{k-2}-6}{P_{k-2}-6+4\left(1-\frac{P_{k-2}}{P_{k-1}}\right)+\frac{8}{P_{k-1}}} \end{align} The ratio is less than $1$ and clearly tends to $1$ for large $k$, proving the lemma. \end{proof} \section{Prime pairs with gap $g$} The foregoing results now allow the following theorem that proves the existence of actual prime pairs with gap $g$ given prospective prime pairs with gap $g$. \begin{theorem}\label{T: numtwinprimes} Given a set $S_{r}=\left\{N: 5 \le N \le 4+P_{r}\#\right\}$ containing at least one prospective prime pair with gap $g$: $(\widetilde{PgP})_{r}$.
Pick $l\ge r$ and define $P_{k}=P_{\boldsymbol{\pi}\left(\sqrt{P_{l}\#}\right)}$, and let $\mathring{n}_{P_{k}\rightarrow P_{k+1}^2}^{g}$ be the number of prime pairs with gap $g$ between $P_{k}$ and $P_{k+1}^2$ that are generated from $(\widetilde{PgP})_{r}$. Then: \[ \mathring{n}_{P_{k}\rightarrow P_{k+1}^{2}}^{g}\ge \mathring{n}_{l}^{g}\cdot \prod_{j=l}^{k-1}\frac{(P_{j}-4) }{(P_{j}-2)} \cdot \prod_{\substack{i=l\\ P_{i}|g }}^{k-1} \frac{(P_{i}-2)}{(P_{i}-1)} \] where, as in Theorem~\ref{T: noprotpp}, \[ \mathring{n}_{l}^{g}=\prod_{i=r+1}^{l}\left(P_{i}-2\right)\cdot\prod_{\substack{i=r+1 \\ P_{i}|g }}^{l}\frac{P_{i}-1}{P_{i}-2} \] is the number of prospective prime pairs with gap $g$ in $S_{l}$ that are derived from each such prospective prime pair in $S_{r}$. \end{theorem} \begin{proof} Given $l$ and $P_{k}=P_{\boldsymbol{\pi}\left(\sqrt{P_{l}\#}\right)}$ consider the set of sequential natural numbers $S_{k}=\left\{5 \longrightarrow 4+P_{k}\#\right\}$. We will show that $S_{k}$ always contains prospective prime pairs, $(\widetilde{PgP})_{k} \in S_{k}$ prime to all $P \le P_{k}$ where $P_{k}< (\widetilde{PgP})_{k} < P_{k+1}^2$, and consequently those $(\widetilde{PgP})_{k}=(PgP)_{k}$ are actual prime pairs with gap $g$ and the number of such prime pairs meets the stated minimum. Note that while $P_{l}\#+1$ is the largest prospective prime number in $S_{l}=\left\{5 \longrightarrow 4+P_{l}\#\right\}$ in that it is prime to all $P \le P_{l}$, it cannot be the square of a prime number.\footnote{Any prime number $>3$ has the form $6n\pm1$ and its square is then $36n^2\pm 12n +1$. Then equating $P_{l}\#+1$ to that square gives $6n^2\pm2n=\frac{P_{l}\#}{6}$.
This cannot hold because the left side is even and the right is odd.} Therefore, with the definition of $P_{k}$ we have: \[ P_{k}^2 < P_{l}\# \longrightarrow P_{k}^2 \in S_{l} \] and given $P_{k+1}^2=\left(P_{\boldsymbol{\pi}\left(\sqrt{P_{l}\#}\right)+1}\right)^2$, we have: \[ P_{l}\# < P_{k+1}^2 < P_{l+1}\#\longrightarrow P_{k+1}^2 \in S_{l+1}\quad \& \quad P_{k+1}^2 \notin S_{l} \] Note that $P_{k+1}$ is the smallest prime number whose square is greater than $4+P_{l}\#$ and $P_{k}$ is the largest prime number whose square is less than $P_{l}\#$. Therefore all prospective prime numbers and prospective prime pairs in $S_{l}$ are less than $P_{k+1}^2$. It remains to show that some $(\widetilde{PgP})_{l}$ are greater than $P_{k}$ and are prime to all $P \le P_{k}$, which means some $(\widetilde{PgP})_{l}=(\widetilde{PgP})_{k}$ and, being less than $P_{k+1}^2$, are therefore actual prime pairs with gap $g$. In doing this we will show the inequality for $\mathring{n}_{P_{k}\rightarrow P_{k+1}^2}^{g}$ holds. To prove the theorem we must show there are some $(\widetilde{PgP})_{l}=(\widetilde{PgP})_{k}$. Given that \[(\widetilde{PgP})_{l} \in S_{l}=S_{l+1}^{(0)}\subset S_{l+2}^{(0)}\subset \cdots \subset S_{k}^{(0)}, \] this requires $m_{j}=0$ at each stage of: $(\widetilde{PgP})_{k}=(\widetilde{PgP})_{l}+\sum_{j=l+1}^{k}m_{j}P_{j-1}\#$. We know that $S_{l+1}^{(0)}$ contains a minimum number of prospective prime pairs with gap $g$, represented as $\min(\mathring{n}_{S_{l+1}^{(0)}}^{g})$ and given by Theorem~\ref{T: distroPgP}, which are prime to $P\le P_{l+1}$, and since $m_{l+1}=0$, $(\widetilde{PgP})_{l+1}=(\widetilde{PgP})_{l}$. Then given $S_{l+2}^{(0)}=S_{l+1}$ we know again from Theorem~\ref{T: distroPgP} that $S_{l+2}^{(0)}$ has a minimum number of prospective prime pairs with gap $g$, represented as $\min(\mathring{n}_{S_{l+2}^{(0)}}^{g})$, which are prime to $P\le P_{l+2}$.
However all subsets of $S_{l+1}$ have contributed prospective prime pairs with gap $g$ to $S_{l+2}^{(0)}$ and we need only consider those contributed by $S_{l+1}^{(0)}$. Lemmas~\ref{L: equaldistro} and \ref{L: uniformdistro} showed that all subsets of $S_{l+1}$ contribute the same minimum number of prospective prime pairs to all subsets of $S_{l+2}$ and that the contributions remain uniform asymptotically for large $l$. Then the fraction of prospective prime pairs with gap $g$ in $S_{l+2}^{(0)}$ generated from $(\widetilde{PgP})_{l}=(\widetilde{PgP})_{l+1} \in S_{l+1}^{(0)}$ is therefore given by: \[ \frac{\min\left(\mathring{n}_{S_{l+1}^{(0)}}^{g}\right)}{\mathring{n}_{l+1}^{g}}\min\left(\mathring{n}_{S_{l+2}^{(0)}}^{g}\right) = \textrm{minimum number of}\quad (\widetilde{PgP})_{l+2}=(\widetilde{PgP})_{l} \] Then we have $\min\left(\mathring{n}_{S_{l+3}^{(0)}}^{g}\right)$ prospective prime pairs, $(\widetilde{PgP})_{l+3}\in S_{l+3}^{(0)}$ derived from all $(\widetilde{PgP})_{l+2}\in S_{l+2}$. The fraction of those derived from the set of $(\widetilde{PgP})_{l+2}=(\widetilde{PgP})_{l}\in S_{l+2}^{(0)}$ is: \begin{multline}\notag \frac{\min\left(\mathring{n}_{S_{l+1}^{(0)}}^{g}\right)}{\mathring{n}_{l+1}^{g}}\cdot\frac{\min\left(\mathring{n}_{S_{l+2}^{(0)}}^{g}\right)}{\mathring{n}_{l+2}^{g}}\cdot \min\left(\mathring{n}_{S_{l+3}^{(0)}}^{g}\right)\\ = \textrm{minimum number of}\quad (\widetilde{PgP})_{l+3}=(\widetilde{PgP})_{l} \end{multline} Carrying this process forward up to the number of $(\widetilde{PgP})_{k}=(\widetilde{PgP})_{l}$, where then $P_{k}<(\widetilde{PgP})_{l}\le P_{k+1}^2$, gives: \begin{equation}\label{E: numtleqg} \mathring{n}_{P_{k}\rightarrow P_{k+1}^{2}}^{g} \ge \min\left(\mathring{n}_{S_{k}^{(0)}}^{g}\right) \prod_{j=l+1}^{k-1}\frac{\min\left(\mathring{n}_{S_{j}^{(0)}}^{g}\right)}{\mathring{n}_{j}^{g}} \end{equation} Expanding this using Theorem~\ref{T: distroPgP} and Theorem~\ref{T: noprotpp} we get: \begin{align}\label{E: abc}\notag
\mathring{n}_{P_{k}\rightarrow P_{k+1}^{2}}^{g}\ge& (P_{k-1}-4) \prod_{i=r+1}^{k-2}(P_{i}-2)\cdot \prod_{\substack{i=r+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)}\cdot \\ \notag &\cdot \prod_{j=l+1}^{k-1}\frac{(P_{j-1}-4) \prod_{i=r+1}^{j-2}(P_{i}-2)\cdot \prod_{\substack{i=r+1\\ P_{i}|g }}^{j-2}\frac{(P_{i}-1)}{(P_{i}-2)}}{\prod_{i=r+1}^{j}(P_{i}-2)\cdot \prod_{\substack{i=r+1\\ P_{i}|g }}^{j}\frac{(P_{i}-1)}{(P_{i}-2)}}\\ \notag =& (P_{k-1}-4) \prod_{i=r+1}^{k-2}(P_{i}-2)\cdot \prod_{j=l+1}^{k-1}\frac{(P_{j-1}-4) \prod_{i=r+1}^{j-2}(P_{i}-2) }{\prod_{i=r+1}^{j}(P_{i}-2)} \cdot\\ \notag &\cdot \prod_{\substack{i=r+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)}\cdot\prod_{j=l+1}^{k-1} \frac{\prod_{\substack{i=r+1\\ P_{i}|g }}^{j-2}\frac{(P_{i}-1)}{(P_{i}-2)}}{\prod_{\substack{i=r+1\\ P_{i}|g }}^{j}\frac{(P_{i}-1)}{(P_{i}-2)}} \\ \notag =& (P_{k-1}-4) \prod_{i=r+1}^{l}(P_{i}-2)\cdot \prod_{i=l+1}^{k-2}(P_{i}-2)\cdot \prod_{j=l+1}^{k-1}\frac{(P_{j-1}-4) }{(P_{j}-2)(P_{j-1}-2)}\\ \notag &\cdot \prod_{\substack{i=r+1\\ P_{i}|g }}^{k-2}\frac{(P_{i}-1)}{(P_{i}-2)}\cdot \prod_{\substack{j=l+1\\ P_{j}|g }}^{k-1} \frac{(P_{j}-2)}{(P_{j}-1)}\frac{(P_{j-1}-2)}{(P_{j-1}-1)} \\ \notag =&\prod_{i=r+1}^{l}(P_{i}-2)\cdot \prod_{\substack{i=r+1\\ P_{i}|g }}^{l}\frac{(P_{i}-1)}{(P_{i}-2)}\cdot\prod_{j=l}^{k-1}\frac{(P_{j}-4)}{(P_{j}-2)}\cdot \prod_{\substack{i=l\\ P_{i}|g }}^{k-1} \frac{(P_{i}-2)}{(P_{i}-1)}\\ &=\mathring{n}_{l}^{g}\cdot \prod_{j=l}^{k-1}\frac{(P_{j}-4) }{(P_{j}-2)} \cdot \prod_{\substack{i=l\\ P_{i}|g }}^{k-1} \frac{(P_{i}-2)}{(P_{i}-1)} \end{align} This is clearly a positive function and we want to show it is a monotonically increasing function with values greater than $1$.
To do this we look at the case for $l\rightarrow l+1$ and $k\rightarrow k'=\boldsymbol{\pi}(\sqrt{P_{l+1}\#})$: \begin{align}\notag \mathring{n}_{P_{k'}\rightarrow P_{k'+1}^{2}}^{g}&\ge\mathring{n}_{l+1}^{g}\cdot \prod_{j=l+1}^{k'-1}\frac{(P_{j}-4) }{(P_{j}-2)} \cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k'-1} \frac{(P_{i}-2)}{(P_{i}-1)}\\ \notag &=\prod_{i=r+1}^{l+1}(P_{i}-2)\prod_{\substack{i=r+1\\ P_{i}|g }}^{l+1} \frac{(P_{i}-2)}{(P_{i}-1)}\cdot \prod_{j=l+1}^{k'-1}\frac{(P_{j}-4) }{(P_{j}-2)} \cdot \prod_{\substack{i=l+1\\ P_{i}|g }}^{k'-1} \frac{(P_{i}-2)}{(P_{i}-1)}\\ \notag &=\mathring{n}_{l}^{g}\cdot(P_{l+1}-2)\cdot\left(\frac{P_{l+1}-2}{P_{l+1}-1}\right)_{P_{l+1}|g}\cdot \frac{(P_{l}-2)}{(P_{l}-4)}\cdot\prod_{i=l}^{k-1}\frac{(P_{i}-4)}{(P_{i}-2)}\cdot\\ \notag &\cdot\prod_{i=k}^{k'-1}\frac{(P_{i}-4)}{(P_{i}-2)}\cdot\left(\frac{P_{l}-1}{P_{l}-2}\right)_{P_{l}|g}\cdot\prod_{\substack{i=l\\ P_{i}|g }}^{k-1} \frac{(P_{i}-2)}{(P_{i}-1)}\cdot\prod_{\substack{i=k\\ P_{i}|g }}^{k'-1} \frac{(P_{i}-2)}{(P_{i}-1)}\\ \notag &=\mathring{n}_{P_{k}\rightarrow P_{k+1}^{2}}^{g}\cdot (P_{l+1}-2)\cdot\left(\frac{P_{l+1}-2}{P_{l+1}-1}\right)_{P_{l+1}|g}\cdot \frac{(P_{l}-2)}{(P_{l}-4)}\cdot\\ \notag &\cdot\prod_{i=k}^{k'-1}\frac{(P_{i}-4)}{(P_{i}-2)}\cdot\left(\frac{P_{l}-1}{P_{l}-2}\right)_{P_{l}|g}\cdot\prod_{\substack{i=k\\ P_{i}|g }}^{k'-1} \frac{(P_{i}-2)}{(P_{i}-1)}\\ \notag &=\mathring{n}_{P_{k}\rightarrow P_{k+1}^{2}}^{g}\cdot (P_{l+1}-2)\cdot \frac{(P_{l}-2)}{(P_{l}-4)}\cdot\prod_{i=k}^{k'-1}\frac{(P_{i}-4)}{(P_{i}-2)}\cdot\\ \notag &\cdot\left(\frac{P_{l+1}-2}{P_{l+1}-1}\right)_{P_{l+1}|g}\cdot\left(\frac{P_{l}-1}{P_{l}-2}\right)_{P_{l}|g}\cdot\prod_{\substack{i=k\\ P_{i}|g }}^{k'-1} \frac{(P_{i}-2)}{(P_{i}-1)} \end{align} If we choose $l$ sufficiently large so that no prime $P\ge P_{l}$ divides $g$, we can ignore the second line of products, giving: \begin{align}\label{E: successiveterms} \mathring{n}_{P_{k'}\rightarrow P_{k'+1}^{2}}^{g} &\ge
\mathring{n}_{P_{k}\rightarrow P_{k+1}^{2}}^{g}\cdot (P_{l+1}-2)\cdot \frac{(P_{l}-2)}{(P_{l}-4)}\cdot\prod_{i=k}^{k'-1}\frac{(P_{i}-4)}{(P_{i}-2)} \end{align} Then the last product factor gives: \begin{equation}\label{E: approxcase} \prod_{i=k}^{k'-1}\frac{(P_{i}-4)}{(P_{i}-2)}=\prod_{i=k}^{k'-1}\left(1-\frac{2}{P_{i}-2}\right) \ge 1-\frac{2(k'-k)}{P_{k}} \end{equation} Then given $k'=\boldsymbol{\pi}(\sqrt{P_{l+1}\#})$, we have: \[ k'\approx\frac{\sqrt{P_{l+1}\#}}{\ln{\sqrt{P_{l+1}\#}}} =\frac{\sqrt{P_{l+1}}\sqrt{P_{l}\#}}{\ln{\sqrt{P_{l+1}}}+\ln{\sqrt{P_{l}\#}}} \] Ignoring $\ln{\sqrt{P_{l+1}}}$ relative to $\ln{\sqrt{P_{l}\#}}$ and noting that $k\approx \frac{\sqrt{P_{l}\#}}{\ln{\sqrt{P_{l}\#}}}$, gives: \[ k'\approx \sqrt{P_{l+1}}\cdot k \] Using this in (\ref{E: approxcase}) gives: \begin{align}\label{E: approxfinal} \prod_{i=k}^{k'-1}\frac{(P_{i}-4)}{(P_{i}-2)} &\ge 1-\frac{2\sqrt{P_{l+1}}}{\ln{P_{k}}}\ge 1-\frac{2\sqrt{P_{l+1}}}{\ln{\sqrt{P_{l+1}\#}}} \end{align} Therefore $\prod_{i=k}^{k'-1}\frac{(P_{i}-4)}{(P_{i}-2)}$, while remaining $<1$, is a monotonically increasing function asymptotically approaching $1$.
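The product structure of $\mathring{n}_{l}^{g}$ used throughout this argument can be checked directly for small primorials. The following Python sketch (an illustration, not part of the proof) counts the residues $n$ in one period of length $P_{k}\#$ with both $n$ and $n+g$ coprime to $P_{k}\#$, and compares the count against the assumed closed form $\prod_{2<p\le P_{k}}(p-2)$, with $(p-1)$ replacing $(p-2)$ whenever $p\mid g$:

```python
from math import gcd, prod

def count_pairs(primes, g):
    """Count n in one primorial period with n and n+g both coprime to the primorial."""
    primorial = prod(primes)
    return sum(1 for n in range(5, 5 + primorial)
               if gcd(n, primorial) == 1 and gcd(n + g, primorial) == 1)

def formula(primes, g):
    # prod over odd p of (p - 2), with (p - 1) replacing (p - 2) when p divides g
    result = 1
    for p in primes:
        if p == 2:
            continue
        result *= (p - 1) if g % p == 0 else (p - 2)
    return result

# Exhaustive check for the primorials 30 and 210 and several even gaps:
for primes in ([2, 3, 5], [2, 3, 5, 7]):
    for g in (2, 4, 6, 8):
        assert count_pairs(primes, g) == formula(primes, g)
```

For example, modulo $30$ there are $3$ twin positions ($11,17,29$) matching $(3-2)(5-2)=3$, and modulo $210$ there are $(3-2)(5-2)(7-2)=15$.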
The approximation (\ref{E: approxfinal}) is conservative,\footnote{The approximation used in (\ref{E: approxfinal}) allows negative values for small $l$, but is positive for $l\ge 8$, while the term being approximated clearly always has a positive value.} and using it for the last term in (\ref{E: successiveterms}) gives, for example: \[ l= 9:\qquad \mathring{n}_{P_{k'}\rightarrow P_{k'+1}^{2}}^{g} \ge 1.4\cdot \mathring{n}_{P_{k}\rightarrow P_{k+1}^{2}}^{g} \] \[ l= 10:\qquad \mathring{n}_{P_{k'}\rightarrow P_{k'+1}^{2}}^{g} \ge 4.5 \cdot\mathring{n}_{P_{k}\rightarrow P_{k+1}^{2}}^{g} \] \[ l= 15:\qquad \mathring{n}_{P_{k'}\rightarrow P_{k'+1}^{2}}^{g} \ge 18.7\cdot\mathring{n}_{P_{k}\rightarrow P_{k+1}^{2}}^{g} \] \end{proof} Given Theorem~\ref{T: numtwinprimes} we can prove the following theorem: \begin{theorem}\label{T: main} Given a set $S_{r}=\left\{N: 5 \le N \le 4+P_{r}\#\right\}$ containing at least one prospective prime pair with gap $g$, then given any number $M$ there is always a prime pair with gap $g$ greater than $M$. \end{theorem} \begin{proof} Pick integer $l>r$ so that $P_{k}=P_{\boldsymbol{\pi}\left(\sqrt{P_{l}\#}\right)}>M$. Then we know from Theorem~\ref{T: numtwinprimes} that there is always a prime pair with gap $g$ greater than $P_{k}$. \end{proof} \section{Prime gaps for which de Polignac's conjecture holds} Given Theorem~\ref{T: numtwinprimes} we need only show the existence of a set $S_{k}$ containing a pair of consecutive prospective prime numbers with a specific gap $g$ to prove de Polignac's conjecture holds for that gap. \begin{lemma}\label{L: primegaps} Given any prime number $P_{k}>3$, then $P_{k}$ and $P_{k+1}$ are consecutive prospective prime numbers in $S_{k-1}$. \end{lemma} \begin{proof} Consider the set $S_{k-1}=\left\{N: 5 \le N \le 4+P_{k-1}\#\right\}$ and its subset of prospective prime numbers, $\widetilde{\mathbb{P}}_{k-1}$.
Then we know that all prospective prime numbers in $\widetilde{\mathbb{P}}_{k-1}$ that are less than $P_{k}^2$ are actual prime numbers. For $P_{k}>3$, we have: \[ P_{k+1}<P_{k-1}\#+4\quad \textrm{and consequently}\quad P_{k},P_{k+1} \in \widetilde{\mathbb{P}}_{k-1} \] and because $P_{k},P_{k+1}<P_{k}^2$, any prospective prime number between them must also be an actual prime number. But $P_{k}$ and $P_{k+1}$ are consecutive prime numbers, so there can be no prospective prime numbers between them, and they are consecutive prospective prime numbers as well as consecutive actual prime numbers in $S_{k-1}$. \end{proof} The following theorem follows directly from Theorem~\ref{T: main} together with Lemma~\ref{L: primegaps}. \begin{theorem}\label{T: depolignacPkpm2} For all $P_{k}>3$ there exist infinitely many consecutive prime pairs with gaps $g=P_{k+1}-P_{k}$. \end{theorem} Now consider the gaps between subsets, where we use the following definitions: \begin{definition} \[ \widetilde{P}_{\{k\}}^{(m)<}:=\min \left\{ \widetilde{P}\in S_{k}^{(m)} \right\} \] \[ \widetilde{P}_{\{k\}}^{(m)>}:=\max \left\{ \widetilde{P}\in S_{k}^{(m)} \right\} \] \end{definition} Then the subset gap is defined as: \begin{definition} \[ g_{\Delta_{SS_{k}}}:=\widetilde{P}_{\{k\}}^{(m)<}-\widetilde{P}_{\{k\}}^{(m-1)>} \] \end{definition} \begin{lemma}\label{L: subsetgaps} Given set $S_{k}=\left\{N: 5 \le N \le 4+P_{k}\#\right\}$ and its $P_{k}$ subsets $S_{k}^{(m)}=\left\{N: 5+m\cdot P_{k-1}\# \le N \le 4+(m+1)\cdot P_{k-1}\#\right\}$ with $P_{k}-1$ associated gaps, $g_{\Delta_{SS_{k}}}$, then: \begin{equation}\notag g_{\Delta_{SS_{k}}}= \begin{cases} P_{k}-1 \quad \textrm{for $P_{k}-2$ gaps}\\ P_{k}+1 \quad \textrm{for one gap} \end{cases} \end{equation} \end{lemma} \begin{proof} The smallest prospective prime in $S_{k-1}$ is $P_{k}$ and the largest two prospective primes in $S_{k-1}$ are $\widetilde{P}_{k-1}^{\pm}:=P_{k-1}\#\pm 1$.
For $S_{k-1}\rightarrow S_{k}$ we use (\ref{E: nextproprime}) subject to the supplementary condition (\ref{E: defmhat}) to generate prospective primes in $S_{k}$. Note that given $P_{k}\in \widetilde{\mathbb{P}}_{k-1}$, then for $\widetilde{P}_{k}=P_{k}+mP_{k-1}\#$ all values of $m$ except $m=0$ are allowed, making $P_{k+1}$ the least prospective prime in the zeroth subset of $\widetilde{\mathbb{P}}_{k}$, and making $P_{k}+mP_{k-1}\#$ the least prospective prime in all other subsets of $\widetilde{\mathbb{P}}_{k}$. Therefore we have: \begin{equation}\label{E: firstpprime} \widetilde{P}_{\{k\}}^{(m)<}= \begin{cases} P_{k+1}&\textrm{for}\quad m=0 \\ P_{k}+m P_{k-1}\#&\textrm{for}\quad 1\le m\le P_{k}-1 \end{cases} \end{equation} and \begin{equation}\label{E: lastpprime} \widetilde{P}_{\{k\}}^{(m)>}= \begin{cases} (m +1)P_{k-1}\# +1 &\textrm{if}\quad m \ne \widehat{m}^+\\ (m +1)P_{k-1}\# -1 &\textrm{if}\quad m= \widehat{m}^+ \end{cases} \end{equation} $\widehat{m}^+$ represents the disallowed subset for $\widetilde{P}_{\{k\}}=\widetilde{P}_{k-1}^{+}+mP_{k-1}\#$, which however is allowed for $\widetilde{P}_{k-1}^{-}=\widetilde{P}_{k-1}^{+}-2$, where: \[ \widehat{m}^+=\frac{\alpha P_{k}-\widetilde{P}_{k-1}^{+}\bmod{P_{k}}}{P_{k-1}\#\bmod{P_{k}}} \] With regard to (\ref{E: lastpprime}) note that $\widehat{m}^+\ne P_{k}-1$ because the maximum value $m=P_{k}-1$ is always allowed for $\widetilde{P}_{k-1}^{\pm}$, where using it in (\ref{E: nextproprime}) gives: \[ \widetilde{P}_{k-1}^{\pm}+(P_{k}-1)P_{k-1}\# =P_{k-1}\#\pm 1 +(P_{k}-1)P_{k-1}\# =P_{k}\#\pm 1 =\widetilde{P}_{k}^{\pm} \] Therefore, $\widehat{m}^+$ associated with $\widetilde{P}_{k-1}^{+}$ can only take a value in the range $0$ to $P_{k}-2$, corresponding to the greatest prospective prime in one of the subsets of $\widetilde{\mathbb{P}}_{k}$.
This leaves one subset of $\widetilde{\mathbb{P}}_{k}$, namely $\widetilde{\mathbb{P}}_{k}^{(\widehat{m}^+)}$, $0\le \widehat{m}^+\le P_{k}-2$, where $(\widehat{m}^++1)P_{k-1}\#-1$ is the greatest prospective prime, while $(m+1)P_{k-1}\#+1$ is the greatest prospective prime in the remainder of the subsets. Therefore, there are $P_{k}-2$ cases where: \[ g_{\Delta_{SS_{k}}}=\widetilde{P}_{\{k\}}^{(m)<}-\widetilde{P}_{\{k\}}^{(m-1)>}= (P_{k}+mP_{k-1}\#)-( mP_{k-1}\# +1)=P_{k}-1 \] $1\le m\le P_{k}-1$, and $m-1 \ne \widehat{m}^+$; and one case where: \[ g_{\Delta_{SS_{k}}}=\widetilde{P}_{\{k\}}^{(\widehat{m}^++1)<}-\widetilde{P}_{\{k\}}^{(\widehat{m}^+)>}= (P_{k}+(\widehat{m}^++1)P_{k-1}\#)-( (\widehat{m}^++1)P_{k-1}\# -1)=P_{k}+1 \] \end{proof} \begin{corollary}\label{C: twogapvalues} Every set $S_{k}$ has at least $P_{k}-2$ prospective prime pairs with gap $g=P_{k}-1$ and at least one prospective prime pair with gap $g=P_{k}+1$. \end{corollary} \begin{proof} This follows directly from Lemma~\ref{L: subsetgaps}, recognizing that gaps between subsets are gaps between prospective prime pairs. The ``at least'' follows because, internal to the subsets, there are prospective prime pairs with gaps that may be the same as or may differ from the subset gaps. \end{proof} The following theorem follows directly from Theorem~\ref{T: main} and Corollary~\ref{C: twogapvalues}. \begin{theorem}\label{T: depolignacPkpm1} For all $P_{k}$ there exist infinitely many consecutive prime pairs with gaps $g=P_{k} \pm 1$. \end{theorem} \end{document}
\begin{document} \title{Conditions for the classicality of the center of mass of many-particle quantum states} \author{Xavier Oriols} \ead{[email protected]} \address{Departament d'Enginyeria Electr\`onica, Universitat Aut\`onoma de Barcelona, 08193-Bellaterra (Barcelona), Spain} \author{Albert Benseny} \address{Quantum Systems Unit, Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904-0495, Japan} \begin{abstract} We discuss the conditions for the classicality of quantum states with a very large number of identical particles. By treating the center of mass as a Bohmian particle, we show that it follows a classical trajectory when the distribution of the Bohmian positions in just one experiment is always equal to the marginal distribution of the quantum state in physical space. This result can also be interpreted as a unique-experiment generalization of the well-known Ehrenfest theorem. We also demonstrate that the classical trajectory of the center of mass is fully compatible with a conditional wave function solution of a classical non-linear Schr\"odinger equation. Our work shows clear evidence for a quantum-classical inter-theory unification and opens new possibilities for practical quantum computations with decoherence. \end{abstract} \maketitle \section{Introduction} Since the beginning of quantum theory a century ago, the study of the frontier between classical and quantum mechanics has been a constant topic of debate~\cite{landau,Ballentine90,herbert,Zurek03,Giulini96,diosi,schlosshauer14,Dieter}. Despite great efforts, the quantum-to-classical transition still remains blurry and certainly much more puzzling and intriguing than, for example, the frontier between classical mechanics and relativity. The relativistic equations of motion just tend to the classical ones when the velocities are much slower than the speed of light~\cite{herbert}. 
The difficulties in finding a simple explanation for the classical-to-quantum transition have their roots in the so-called \emph{measurement problem} that requires getting rid of quantum superpositions~\cite{Dieter,maudlin,bohm66}. Possible quantum states of a particle are represented by vectors in a Hilbert space. Linear combinations of them, for example a superposition of macroscopically distinguishable states, also correspond to valid states of the Hilbert space. However, such a superposition of states is not always compatible with measurements~\cite{bohm66,nikolic}. The measurement problem can be formulated as the impossibility for a physical quantum theory (in empirical agreement with experiments) to satisfy simultaneously the following three assumptions~\cite{maudlin}. First, the wave function always evolves deterministically according to the linear and unitary Schr\"odinger equation. Second, a measurement always finds the physical system in a localized state, not in a superposition of macroscopically distinguishable states. Third, the wave function is a complete description of a quantum system. Different physical theories appear depending on which assumption is ignored~\cite{herbert}. The first type of solutions argues that the unitary and linear evolution of the Schr\"odinger equation is not always valid. For instance, in the instantaneous collapse theories~\cite{bassi13} (like the GRW interpretation~\cite{ghirardi86}), a new stochastic equation is used that breaks the superposition principle at a macroscopic level, while still keeping it at a microscopic one~\cite{bassi13}. Another possibility is substituting the linear Schr\"odinger equation by a non-linear collapse law only when a measurement is performed~\cite{landau,bohr20}.
This is the well-known orthodox (or Copenhagen) solution, and most of the attempts to reach a quantum-to-classical transition have been developed under this last approach~\cite{Zurek03,Giulini96,diosi,schlosshauer14,Dieter,kim,Brukner,Yang13}. A second type of solution ignores the assumption that a measurement always finds the physical system in a localized state. One then assumes that there are different worlds where different states of the superposition are found. This is the many worlds solution~\cite{Everett57,wallace12,saunders}, in which the famous Schr\"odinger's cat is found alive in one world and dead in another. Explanations of the quantum-to-classical transition have also been attempted within this interpretation~\cite{saunders}. There is a final kind of solution that assumes that the wave function alone does not provide a complete description of the quantum state, i.e., additional elements (hidden variables) are needed. The most widespread of these approaches is Bohmian mechanics~\cite{bohm66,bohm52,bell66,Holland93,Oriols12,ABM_review,durr13}, where, in addition to the wave function, well-defined trajectories are needed to define a complete (Bohmian) quantum state. In a spatial superposition of two disjoint states in a single-particle system, only the one whose support contains the position of the particle becomes relevant for the dynamics. Previous attempts to study the quantum-to-classical transition with Bohmian mechanics mainly focused on single-particle problems~\cite{durr13,sevensteps,Salvador13,Allori09}. In this paper, we generalize such works by analyzing when the center of mass of a many-particle quantum system follows a classical trajectory. The use of the center of mass for establishing the classicality of a quantum state has some promising advantages. The first one is related to the description of the initial conditions.
Fixing the initial position and velocity of a classical particle seems unproblematic, while it is forbidden for a quantum particle due to the uncertainty principle~\cite{landau,bohr20}. The use of the center of mass relaxes this contradiction: it is reasonable to expect that two experiments with the same preparation for the wave function will give quite similar values for the initial position and velocity of the center of mass when a large number of particles is considered, although the microscopic distribution of all (Bohmian) particles will be quite different in each experiment. The second advantage is that it provides a \emph{natural} coarse-grained definition of a classical trajectory that coexists with the underlying microscopic quantum reality. One can reasonably expect that the Bohmian trajectory of the center of mass of a large number of particles can follow a classical trajectory, without implying that each individual particle becomes classical. Therefore, the use of the center of mass allows a definition of the quantum-to-classical transition, while keeping a pure quantum behavior for each individual particle. This article is structured as follows. We begin by studying the conditions under which the center of mass of a quantum state behaves classically. We then present a type of wave function that always fulfills these conditions, and show the equation that guides the wave function of the center of mass. Next, we discuss examples of quantum states whose center of mass does not behave classically. To finish, we summarize the main results, contextualize them within previous approaches and comment on further extensions of this work.
\section{Conditions for a classical center of mass} \label{sec:conditions} \subsection{Evolution of the center of mass in an ensemble of identical experiments} \label{ensemble} Throughout the article, we will consider a quantum system composed of $N$ particles of mass $m$ governed by the wave function $\Psi(\vec r_1,\ldots,\vec r_N,t)$ solution of the many-particle non-relativistic Schr\"odinger equation, \begin{equation} \rmi \hbar \frac{\partial \Psi}{\partial t} = \left( -\frac{\hbar^2}{2m} \sum_{i = 1}^N \nabla^2_i + V\right) \Psi, \label{mpscho} \end{equation} where $\vec r_i$ is the position of the $i$-th particle, $\nabla^2_i$ its associated Laplacian operator, and the potential $V=V(\vec r_1,\ldots,\vec r_N,t)$ contains an external and an interparticle component, \begin{equation} V=\sum_{i=1}^{N} V_{\rm ext}(\vec r_i)+\frac{1}{2}\sum_{i=1}^{N}\sum_{{f=1; i\neq f}}^{N}V_{\rm int}(\vec r_i-\vec r_f) . \label{potential} \end{equation} In particular, we are interested in the evolution of one specific degree of freedom, the center of mass, defined as \begin{equation} \vec{r}_{\rm cm} = \frac{1}{N} \sum_{i=1}^{N}\vec r_i . \label{cm} \end{equation} Our aim in this paper is to analyze under which circumstances the observable associated to the operator $\vec{r}_{\rm cm}$ follows a classical trajectory in a unique experiment. We first consider an ensemble of experiments realized with the same (prepared) wave function, whose average ensemble value of the center of mass is given by \begin{equation} \label{cmev} \langle \vec{r}_{\rm cm}\rangle(t)=\int d^3\vec{r}_1 \ldots \int d^3\vec{r}_N |\Psi(\vec r_1,\ldots,\vec r_N,t)|^2 \vec{r}_{\rm cm}. 
\end{equation} From Ehrenfest's theorem~\cite{ehrenfest27}, it is well known that the time derivative of $\langle \vec{r}_{\rm cm}\rangle$ is \begin{equation} \label{eren1} \frac{d \langle \vec{r}_{\rm cm}\rangle}{dt}=\frac{1}{N} \sum_{i=1}^{N} \frac{d \langle \vec r_i \rangle}{dt}=\frac{1}{N m} \sum_{i=1}^{N} \langle \vec p_i \rangle=\frac{\langle \vec{p}_{\rm cm} \rangle}{m} . \end{equation} We can follow the same procedure for the time derivative of the momentum of the center of mass; since $V_{\rm int}$ depends only on the relative coordinates $\vec r_i - \vec r_f$, the inter-particle forces cancel in pairs and only the external potential survives, \begin{equation} \label{eren2} \frac{d \langle \vec{p}_{\rm cm}\rangle}{dt}=\frac{1}{N} \sum_{i=1}^{N} \frac{d \langle \vec p_i \rangle}{dt}= -\frac{1}{N} \sum_{i=1}^{N} \langle \nabla_i V_{\rm ext}(\vec r_i) \rangle . \end{equation} When the spatial extent of the many-particle wave function is much smaller than the variation length-scale of the potential, we can assume $\langle \nabla_i V_{\rm ext}(\vec r_i) \rangle = \nabla V_{\rm ext}( \langle \vec{r}_{\rm cm} \rangle)$, and write \begin{equation} \label{eren3} m \frac{d^2 \langle \vec{r}_{\rm cm}\rangle}{dt^2}= - \nabla V_{\rm ext}( \langle \vec{r}_{\rm cm} \rangle) . \end{equation} This classical behavior of the average $\langle \vec{r}_{\rm cm}\rangle$ is a very well-known result~\cite{landau,Ballentine90,ehrenfest27}. The types of $V_{\rm ext}$ that satisfy the condition $\langle \nabla_i V_{\rm ext}(\vec r_i) \rangle = \nabla V_{\rm ext}( \langle \vec{r}_{\rm cm} \rangle)$ will be further discussed later. \subsection{Evolution of the center of mass in a unique experiment} \label{evolution} In order to satisfy our classical intuition, we need to certify that the observable associated to $\vec{r}_{\rm cm}$ follows a classical trajectory in each experiment (not in an average over several experiments). This problem could be analyzed within the orthodox formalism~\cite{Zurek03,Giulini96,schlosshauer14,kim,Brukner,Yang13,lusanna}.
The typical approach would be to construct a reduced density matrix of the center of mass by tracing out the rest of the degrees of freedom, interpreted as the environment. The effect of decoherence, i.e. the entanglement between the environment and the system, then leads to a diagonal (or nearly diagonal) density matrix. Finally, after invoking the collapse law, one obtains the observable result for the operator $\vec{r}_{\rm cm}$ by selecting one element of the diagonal at each measuring time. In this work, however, we will approach the problem using Bohmian mechanics~\cite{bohm52,Holland93,Oriols12,ABM_review,durr13}. This alternative formalism will allow us to reach the quantum-to-classical transition without dealing with the reduced density matrix and without specifying the collapse law (this law is not needed in the Bohmian postulates~\cite{Holland93,Oriols12,durr13}). As indicated in the introduction, in Bohmian mechanics, a quantum state is completely described by two elements: the many-particle wave function $\Psi(\vec r_1,\ldots,\vec r_N,t)$ solution of the usual Schr\"odinger equation and the trajectory $\{\vec r^j_i(t)\}$ of each $i=1 \ldots N$ particle. Hereafter, each Bohmian quantum state will refer to a wave function and to a particular set of trajectories labeled by the superindex $j$ that corresponds to a unique experiment. The velocity of each particle is given by \begin{equation} \label{velo} \vec v^j_i(t)=\frac{d \vec r^j_i(t)}{dt}=\frac {\vec J_i(\vec r^j_1(t),\ldots,\vec r^j_N(t),t)}{|\Psi(\vec r^j_1(t),\ldots,\vec r^j_N(t),t)|^2} , \end{equation} where $\vec J_i = \hbar \mathop{\rm Im}(\Psi^* \nabla_i \Psi)/m$. Thus, the configuration of particles reproduces all quantum features while evolving ``choreographed'' by the wave function~\cite{Oriols12,ABM_review,durr13,velocity,Marian16}.
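As an illustration of the guidance law \eref{velo}, the following Python sketch (our own illustrative addition, not part of the formal development; units $m=\hbar=1$ as in the numerical examples below) evaluates the Bohmian velocity $v=J/|\psi|^2$ by finite differences for a single-particle Gaussian packet with carrier momentum $k_0$. For a real envelope modulated by $e^{ik_0x}$, the velocity field is uniform, $v=\hbar k_0/m$:

```python
import cmath
import math

HBAR = 1.0  # units with m = hbar = 1, as in the numerical examples
MASS = 1.0

def psi(x, x0=0.0, sigma=1.0, k0=10.0):
    """Single-particle Gaussian packet with carrier momentum k0 (unnormalized)."""
    return math.exp(-((x - x0) ** 2) / (2.0 * sigma ** 2)) * cmath.exp(1j * k0 * x)

def bohmian_velocity(x, dx=1e-6):
    """v = J/|psi|^2 with J = (hbar/m) Im(psi* dpsi/dx), by central difference."""
    p = psi(x)
    dpdx = (psi(x + dx) - psi(x - dx)) / (2.0 * dx)
    current = (HBAR / MASS) * (p.conjugate() * dpdx).imag
    return current / abs(p) ** 2

v = bohmian_velocity(0.3)  # uniform velocity field: v = hbar*k0/m = 10
```

The same formula applies in the many-particle case, with $\nabla_i$ acting on the full configuration $(\vec r^j_1(t),\ldots,\vec r^j_N(t))$.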
By construction, Bohmian predictions are as uncertain as the orthodox ones~\cite{FNL2016b}: it is not possible to know the initial positions in a particular experiment (unless the wave function is a position eigenstate). The best we can know about the particle positions in the $j$-experiment, $\{\vec r^j_i(t)\}$, is that they are found in locations where the wave function has a reasonable presence probability. In particular, the set of positions in $M$ different experiments (prepared with the same wave function) are distributed according to \begin{equation} \label{QE} |\Psi(\vec r_1,\ldots,\vec r_N,t)|^2 = \lim_{M \rightarrow \infty} \frac{1}{M} \sum_{j=1}^{M} \prod_{i=1}^N \delta\left(\vec r_i-\vec r^j_i(t)\right). \end{equation} If the set of $N$ positions follows this distribution at some time $t_0$, it is easy to demonstrate that \eref{QE} will also be satisfied at any other time $t$, provided that the many-particle wave function evolves according to \eref{mpscho} and that the particles move according to \eref{velo}. This property is known as equivariance~\cite{conditional} and it is key for the empirical equivalence between Bohmian mechanics and other quantum theories. Equation \eref{QE} says that Born's law is always satisfied by counting particles~\cite{bohm52,Holland93,ABM_review,durr13} and that quantum results are unpredictable~\cite{FNL2016b}. Several authors assume as a postulate of the Bohmian theory that the initial configuration of particles satisfies \eref{QE}, while others argue that it is just a consequence of being in a ``typical'' Universe~\cite{Callender,conditional}\footnote{In principle, one could postulate \eref{QE} (at some initial time) in the Bohmian theory in the same way that Born's law is a postulate in the orthodox theory. However, some authors argue that this is not necessary~\cite{Callender}.
Probably the most accepted view against taking \eref{QE} as a postulate comes from the seminal work by D\"urr, Goldstein, and Zangh\`i~\cite{conditional}, where the equivariance in any system is discussed from the initial configurations of (Bohmian) particles in the Universe. If Bohmian mechanics is used to describe the wave function of the whole Universe, then the wave function associated to any (sub)system is an effective (conditional) wave function of the universal one. Using typicality arguments, D\"urr \etal showed that the overwhelming majority of possible selections of initial positions of particles in the Universe will satisfy the condition \eref{QE} in a subsystem~\cite{conditional}. Other authors~\cite{valentini} have attempted to dismiss \eref{QE} as a postulate by showing that any initial configuration of Bohmian particles will relax, after some time, to a distribution very close to \eref{QE} for a subsystem. }. After selecting the initial positions of the particles from \eref{QE} in a unique $j$-experiment, we can then define the trajectory for the center of mass of the Bohmian quantum state associated to such $j$-experiment as \begin{equation} \vec{r}_{\rm cm}^j(t) = \frac{1}{N} \sum_{i=1}^{N}\vec r^j_i(t) . \label{cmue} \end{equation} As discussed above, in general $\vec{r}_{\rm cm}^j(t)\neq \vec{r}_{\rm cm}^{h}(t)$ for any two different experiments $j$ and $h$, because the Bohmian positions have an intrinsic uncertainty coming from \eref{QE}. \subsection{Classical center of mass in a unique experiment} \label{conditions} A classical trajectory for the center of mass $\vec{r}_{\rm cm}^j(t)$ of a quantum state in a unique experiment is obtained when the following two conditions are satisfied: \begin{itemize} \item \textbf{Condition 1} --- For the overwhelming majority of experiments associated to the same wave function, the same trajectory for the center of mass is obtained.
That is to say, for (almost) any two different experiments $j$ and $h$ we obtain $\vec{r}_{\rm cm}^j (t) = \vec{r}_{\rm cm}^{h}(t)$. \item \textbf{Condition 2} --- The spatial extent of the (many-particle) wave function in each direction is much smaller than the variation length-scale of the external potential $V_{\rm ext}$. \end{itemize} According to condition 1, since $\vec{r}_{\rm cm}^{j}(t)=\vec{r}_{\rm cm}^{j_0}(t)$ for all $M$ experiments, the empirical evaluation of $\langle \vec{r}_{\rm cm}\rangle$ will be equal to the trajectory of the center of mass $\vec{r}_{\rm cm}^{j_0}(t)$ in a unique experiment: \begin{equation} \label{condition2} \langle \vec{r}_{\rm cm}\rangle(t)=\lim_{M \rightarrow \infty} \frac {1}{M} \sum_{j=1}^M \vec{r}_{\rm cm}^j(t) = \vec{r}_{\rm cm}^{j_0}(t) . \end{equation} Moreover, we notice that $\vec{r}_{\rm cm}^{j}(t)$ in such a quantum state has the same well-defined initial conditions (position and velocity) as in the overwhelming majority of experiments. While condition 1 might seem very restrictive, we will show in what follows that quantum states that satisfy it are more natural than expected when the number of particles is very large. A better understanding of condition 2 can be found from a Taylor expansion of the external potential $V_{\rm ext}(\vec r_i)$ in \eref{eren2}. One can easily realize that the condition $\langle \nabla V_{\rm ext}(\vec r_i) \rangle = \nabla V_{\rm ext}( \langle \vec r_i \rangle)$ is directly satisfied by constant, linear or quadratic potentials. Whether $V_{\rm ext}$ can be approximated by a potential with such a dependence requires a discussion of its physical meaning. $V_{\rm ext}(\vec r_i)$ in \eref{potential} describes the interaction of particle $i$ with some distant ``source'' particles located elsewhere. Moreover, the fact that this potential is felt identically by all $N$ system particles (i.e.
$V_{\rm ext}(\vec r_i)$ is a single particle potential) is due to the large distance between our system and the potential sources. We can then assume that $V_{\rm ext}$ is generated by some kind of long-range force, such as the electromagnetic or gravitational ones. Such external long-range potentials will usually have a small spatial variation along the support of $\Psi(\vec r_1,\ldots,\vec r_N,t)$, and a linear or quadratic approximation for $V_{\rm ext}$ would seem enough in most macroscopic scenarios. In any case, scenarios where higher orders of the series expansion of $V_{\rm ext}$ are relevant are possible in the laboratory. Then, if condition 1 is applicable, it will guarantee a unique trajectory $\langle \vec{r}_{\rm cm}\rangle(t)=\vec{r}_{\rm cm}^{j_0}(t)$ in all experiments with well-defined initial conditions; however, its acceleration will not be given only by the gradient of $V_{\rm ext}$, but will also depend on the wave function. \section{Quantum states with a classical center of mass} \label{sec:fullofparticles} \subsection{Quantum state full of identical particles} \label{example2} We define here a type of quantum state with a very large number of indistinguishable particles (either fermions or bosons) that we name \emph{quantum state full of identical particles}. We will show that the center of mass of these states always follows a classical trajectory. Our definition will revolve around the concept of marginal probability distribution, i.e., the spatial distribution of the $i$th particle independently of the positions of the rest of the particles, \begin{equation} D(\vec r_i,t) = \int\ldots\int |\Psi(\vec r_1,\ldots,\vec r_N,t)|^2 \prod_{{f=1; f\neq i}}^N d^3\vec{r}_f . \label{margdef1} \end{equation} Empirically, this distribution can be calculated from a very large number $M$ of experiments as \begin{equation} D(\vec r_i,t) =\lim_{M \rightarrow \infty} \frac{1}{M} \sum_{j=1}^M \delta( \vec r_i-\vec r_i^j(t)) .
\label{margdef2} \end{equation} Since our definition of a quantum state full of identical particles always involves indistinguishable particles, the subindex $i$ is superfluous, and all particles will have the same marginal distribution. We notice that, while all Bohmian particles $\vec r_i(t)$ are ontologically distinguishable (through the index $i$), the Bohmian dynamical laws, Eqs. \eref{mpscho} and \eref{velo}, ensure that they are empirically indistinguishable\footnote{ The empirical indistinguishability of the Bohmian trajectories means that the $\vec r_2$-observable computed from $\vec r^j_2(t)$ is identical to the $\vec r_1$-observable computed from $\vec r^j_1(t)$. This property can be easily understood from the symmetry of the wave function, see also Refs.~\cite{Holland93,durr13,identical}. Consider a set of trajectories $\{\vec r^j_1(t),\vec r^j_2(t),\ldots,\vec r^j_N(t)\}$ assigned to an experiment $j$. We construct another set of trajectories $\{\vec r^h_1(t),\vec r^h_2(t),\ldots,\vec r^h_N(t)\}$ whose initial conditions are $\vec r^h_1(0)=\vec r^j_2(0)$ and $\vec r^h_2(0)=\vec r^j_1(0)$, while $\vec r^h_i(0)=\vec r^j_i(0)$ for $i=3,\ldots,N$. Due to the symmetry of the wave function (and of the velocity \eref{velo}), $\vec r^h_1(t)=\vec r^j_2(t)$ and $\vec r^h_2(t)=\vec r^j_1(t)$ (the rest of the trajectories are identical in $j$ and $h$). Any observable related to $\vec r_1$ (or $\vec r_2$) is evaluated over an ensemble of different experiments. For each $j$-element of the ensemble, we can construct its corresponding $h$-set of trajectories and evaluate the $\vec r_2$-observable using $\vec r_2^h(t)$ instead of $\vec r_2^j(t)$. By construction, since $\vec r^h_2(t)=\vec r^j_1(t)$, the $\vec r_2$-observable is identical to the $\vec r_1$-observable.}.
We define a quantum state full of identical particles as a state whose distribution of the positions of the $N$ particles in just one experiment is always equal to the marginal distribution of a unique variable obtained from averaging over different experiments, \begin{equation} D(\vec r,t) = \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{i=1}^N \delta(\vec r-\vec r_i^{j_0}(t)) = \lim_{M \rightarrow \infty} \frac{1}{M} \sum_{j=1}^M \delta( \vec r-\vec r_i^j(t)). \label{mar1} \end{equation} For the practical application of this definition in systems with a finite (but very large) number of particles, one can impose that the condition in \eref{mar1} has to be satisfied for the overwhelming majority of experiments, see \ref{app:error}. The selection of the initial positions of the particles, $\vec r_1^{j_0}(0),\vec r_2^{j_0}(0)\ldots \vec r_N^{j_0}(0)$, in a single experiment (labeled here $j_0$) can be done from \eref{QE}. One starts by selecting $\vec r_1^{j_0}(0)$ (independently of the rest of the positions). Then, $\vec r_2^{j_0}(0)$ is selected conditioned on the fact that $\vec r_1^{j_0}(0)$ has already been selected. This procedure is repeated until the last position, $\vec r_N^{j_0}(0)$, is selected conditioned on all previously selected positions.
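The sequential selection just described can be sketched numerically. The following Python toy (our illustration, using a hypothetical correlated two-particle $|\Psi|^2$ on a grid, not a state from this paper) first draws $\vec r_1^{j_0}(0)$ from the marginal and then $\vec r_2^{j_0}(0)$ from the conditional distribution given the first draw:

```python
import math
import random

random.seed(7)
GRID = [0.1 * i for i in range(-30, 31)]  # coarse 1D grid for each coordinate

def joint(x1, x2):
    """Toy unnormalized |Psi(x1, x2)|^2 with inter-particle correlation."""
    return math.exp(-((x1 - x2) ** 2)) * math.exp(-((x1 + x2) ** 2) / 4.0)

# step-1 weights: marginal of x1 with x2 summed out (the grid analogue
# of integrating |Psi|^2 over the remaining coordinates)
W1 = [sum(joint(a, b) for b in GRID) for a in GRID]

def sample_pair():
    # select the first position from its marginal ...
    x1 = random.choices(GRID, weights=W1)[0]
    # ... then the second from the conditional distribution given x1
    w2 = [joint(x1, b) for b in GRID]
    x2 = random.choices(GRID, weights=w2)[0]
    return x1, x2

pairs = [sample_pair() for _ in range(2000)]
mx = sum(p[0] for p in pairs) / len(pairs)
my = sum(p[1] for p in pairs) / len(pairs)
cov = sum((p[0] - mx) * (p[1] - my) for p in pairs) / len(pairs)
sx = math.sqrt(sum((p[0] - mx) ** 2 for p in pairs) / len(pairs))
sy = math.sqrt(sum((p[1] - my) ** 2 for p in pairs) / len(pairs))
corr = cov / (sx * sy)  # positive: the x1-x2 correlation survives sampling
```

For a separable $|\Psi|^2$ the conditional weights of the second step would not depend on $x_1$, which is the situation of Example 1 below.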
The probability distribution for selecting the position $\vec r_i^{j_0}(0)$, when the previous positions $\vec r_1^{j_0}(0),\ldots ,\vec r_{i-1}^{j_0}(0)$ are already selected, can be defined from a combination of conditional and marginal probabilities as \begin{equation} D^{j_0,i}(\vec r_i,0) = \frac{\bar D^i(\vec r_1^{j_0}(0),\ldots ,\vec r_{i-1}^{j_0}(0),\vec r_{i},0)}{\int \bar D^i(\vec r_1^{j_0}(0),\ldots ,\vec r_{i-1}^{j_0}(0),\vec r_{i},0) d\vec r_{i}} , \label{marcon1} \end{equation} with \begin{equation} \fl \bar D^i(\vec r_1,\ldots ,\vec r_{i},0) = \int\ldots\int |\Psi(\vec r_1,\ldots ,\vec r_{i},\ldots,\vec r_N,0)|^2 d^3\vec{r}_{i+1}\ldots d^3\vec{r}_N . \label{marcon2} \end{equation} By construction, the probability distribution function in \eref{mar1} has a total probability equal to unity. By contrast, a normalization constant is explicitly included in the definition of \eref{marcon1} to ensure that it is a probability distribution function properly normalized to unity. In particular, for any $j_0$-experiment, we get $D^{j_0,1}(\vec r_1,0) \equiv D(\vec r_1,0)$ and $D^{j_0,N}(\vec r_N,0) \propto |\Psi(\vec r_1^{j_0}(0),\ldots,\vec r_{N-1}^{j_0}(0),\vec r_N,0)|^2$. Therefore, a quantum state full of identical particles can be alternatively defined as a wave function for which the global distribution of the $i=1,\ldots,N$ particles in a unique $j_0$-experiment, constructed from \eref{marcon1} and \eref{marcon2}, is equal to $D(\vec r,0)$ in \eref{margdef1} for the overwhelming majority of experiments. A trivial example of a quantum state full of identical particles is one where the corresponding distribution for selecting the $i=1,\ldots,N$ particles in the overwhelming majority of experiments satisfies $D^{j_0,i}(\vec r_i,0) = D(\vec r_i,0)$.
The equivalence between both expressions in \eref{mar1} implies the equivalence between two sets of positions: first, the positions of particle $i_0$ in $M$ different experiments, $\{\vec r_{i_0}^j(t)\}$ for $j=1,\ldots,M$, and, second, the positions of the $N$ particles in the same $j_0$-experiment, $\{\vec r_i^{j_0}(t)\}$ for $i=1,\ldots,N$. Because of this equivalence, any position in the second set, say $\vec r_i^{j_0}(t)$, is equal to some position in the first set, $\vec r_{i_0}^j(t)$: every position in one set has an identical counterpart in the other set. Therefore, since the exchange of positions of identical particles does not change their velocities~\cite{identical}, we obtain that $\vec v^{j_0}_i=\vec v_{i_0}^j$, which implies that $\vec r_i^{j_0}(t) =\vec r_{i_0}^j(t)$ at any time. Therefore, we conclude that if \eref{mar1} is satisfied at a particular time, such as $t=0$, then the quantum state will be full of identical particles at any other time. At this point, using \eref{mar1} for any time $t$, we can certify that the trajectory of the center of mass of a quantum state full of identical particles satisfies \begin{eqnarray} \fl \vec{r}_{\rm cm}^{j_0}(t) = \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \vec r^{j_0}_i(t) = \lim_{M\to\infty} \frac{1}{M} \sum_{j=1}^{M} \vec r^j_{i_0}(t) \nonumber \\ = \lim_{N,M\to\infty} \frac {1} {N} \sum_{i=1}^{N} \frac{1}{M} \sum_{j=1}^{M} \vec r^j_i(t) = \lim_{N,M\to\infty} \frac{1}{M} \sum_{j=1}^{M} \frac {1} {N} \sum_{i=1}^{N} \vec r^j_i(t) \nonumber \\ = \lim_{M\to\infty} \frac{1}{M} \sum_{j=1}^{M} \vec{r}_{\rm cm}^j = \langle \vec{r}_{\rm cm} \rangle(t), \label{qsfip} \end{eqnarray} where we have used that \begin{equation} \vec{r}_{\rm cm}^{j_0}(t)=\int \vec r\; D(\vec r,t) d\vec r \label{cmfromd} \end{equation} with $D(\vec r,t)$ given by any of the two expressions in \eref{mar1}.
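The exchange of sums in \eref{qsfip} can be checked on a synthetic ensemble. The Python sketch below is our own illustration of the simplest situation, in which every position is an independent draw from the same marginal $D$ (a hypothetical stand-in for a state full of identical particles): the center of mass of a single experiment then matches the ensemble average of one particle, and its spread over experiments shrinks as $\sigma/\sqrt N$:

```python
import random
import statistics

random.seed(1)
N = 400   # particles per experiment
M = 400   # experiments
SIGMA = 1.0

# r[j][i]: position of particle i in experiment j; in this toy every position
# is an independent draw from the same marginal D (a Gaussian here)
r = [[random.gauss(0.0, SIGMA) for _ in range(N)] for _ in range(M)]

cm_one_experiment = sum(r[0]) / N                    # (1/N) sum_i r_i^{j0}
ensemble_avg_particle = sum(rj[0] for rj in r) / M   # (1/M) sum_j r_{i0}^j

# dispersion of the center of mass over experiments: ~ SIGMA / sqrt(N)
cm_per_experiment = [sum(rj) / N for rj in r]
cm_spread = statistics.pstdev(cm_per_experiment)
```

In an interacting or entangled state the positions are not independent, but the equality of the two sets of positions implied by \eref{mar1} plays the same role.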
In summary, a quantum state full of identical particles satisfies condition 1, and, if condition 2 also holds, its center of mass will follow a classical trajectory. The argument we have presented here is for a system of indistinguishable particles. For a macroscopic object composed of several types of particles, we can apply the same reasoning and obtain a classical center of mass for each subsystem of particles of the same type, such that the global center of mass is also classical. \subsection{Example 1: Many-particle quantum state with a unique single-particle wave function} \label{example1} Here we show the simplest example of a quantum state full of identical particles. We consider an $N$-particle wave function given by \begin{equation} \Psi(\vec r_{1},\ldots, \vec r_{N},t) =\prod_{i=1}^{N} \psi (\vec r_i,t) . \label{example11} \end{equation} It corresponds, for example, to a system of non-interacting bosons, all with the same single-particle wave function $\psi(\vec r,t)$, solution of a single-particle Schr\"odinger equation under the external potential $V_{\rm ext}(\vec r)$. The quantum state in the $j$-experiment is completed with the set of trajectories $\{\vec r_i^j(t)\}$ for $i=1,\ldots,N$ selected according to $|\Psi|^2$. Since \eref{example11} corresponds to a separable system, each position $\vec r_i^j(0)$ has to be selected according to its own probability distribution in \eref{marcon1} and \eref{marcon2}, with $D^{j_0,i}(\vec r_i,0) =|\psi(\vec r_i,0)|^2$. The marginal distribution in \eref{margdef1} satisfies $D(\vec r_i,0)=|\psi(\vec r_i,0)|^2$, which is exactly the same distribution mentioned above for selecting the particles. Therefore, this quantum state trivially satisfies \eref{mar1} when $N \rightarrow \infty$, i.e. $D^{j_0,i}(\vec r,0) = D(\vec r,0)$. As a result, the (Bohmian) trajectory of the center of mass will follow a classical trajectory when condition 2 about $V_{\rm ext}$ is also satisfied.
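For the product state \eref{example11}, the defining property can be verified directly: integrating $|\Psi|^2$ over all but one coordinate returns $|\psi|^2$. A minimal grid check in Python for $N=3$ (our illustration; the Gaussian $\psi$ below is a hypothetical stand-in for any normalized single-particle state):

```python
import math

DX = 0.2
GRID = [-3.0 + DX * i for i in range(31)]  # covers essentially all of |psi|^2

def psi2(x):
    """|psi(x)|^2 for a normalized Gaussian single-particle state."""
    return math.exp(-x * x) / math.sqrt(math.pi)

def marginal(x1):
    """Integrate |Psi|^2 = psi2(x1) psi2(x2) psi2(x3) over x2 and x3."""
    return sum(
        psi2(x1) * psi2(x2) * psi2(x3) * DX * DX
        for x2 in GRID
        for x3 in GRID
    )

# the x2 and x3 integrals are ~1, so the marginal collapses to psi2 itself
err = max(abs(marginal(x) - psi2(x)) for x in (-1.0, 0.0, 0.7))
```

The same factorization is what makes the conditional distributions $D^{j_0,i}$ independent of the previously selected positions.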
\subsubsection*{Numerical example} \begin{figure} \caption{ (a) Evolution of a quantum wave packet with a potential $V_{\rm ext}(x)=2 x$. The initial wave function is a Gaussian wave packet of width $\sigma = 1$, centered around $x_0=-15$, and an initial positive velocity $k_0 = 10$. (b) Quantum trajectories corresponding to the dynamics in (a); with the average shown as a dashed black line. Units are $m=\hbar=1$. } \label{fig:Q1} \end{figure} \begin{figure} \caption{ Same as \fref{fig:Q1} but for the evolution of an initial Gaussian wave packet with $x_0=0$, $\sigma=1$ and $k_0=0$, in a potential $V_{\rm ext}(x)=x^2/2$. } \label{fig:Q2} \end{figure} For simplicity, we consider a 1D physical space to numerically test the properties of the above state. As the initial single-particle wave function we select a wave packet of the form \begin{equation} \psi(x,0)= \frac{1}{\sqrt{\sigma\sqrt{\pi}}} \exp\left( -\frac{(x-x_0)^2}{2 \sigma^2} \right) \exp(i k_0 x) , \label{eq:wf} \end{equation} with $\sigma$ the dispersion of the wave-packet, $x_0$ the initial position and $k_0$ the initial momentum. Then, since the particles are independently selected, the central limit theorem~\cite{central} ensures that the center of mass of the quantum state will be normally distributed with a dispersion $\sigma_{\rm cm}=\sigma/\sqrt{N} \rightarrow 0$, confirming that the center of mass has the same well-defined position in all experiments (see \ref{app:error}). In the first example in \fref{fig:Q1} we use a linear potential $V_{\rm ext}(x) = 2 x$ emulating a particle in free fall under a gravity force. The quantum wave packet increases its width over time and its center follows a typical parabolic movement. The second example in \fref{fig:Q2} corresponds to a harmonic potential $V_{\rm ext}(x)=x^2/2$.
In this case, because the wave function corresponds to the ground state of the quantum harmonic oscillator, it does not show any dynamics and the trajectories remain static at their initial positions. In any case, the center of mass (dashed black line in \fref{fig:Q2}) corresponds to the classical trajectory at the position of the minimum of the harmonic potential with zero velocity. Now, we confirm the classicality of the center of mass of a quantum state defined by \eref{example11} using simpler arguments. Since there is no correlation between different trajectories $x_i^j(t)$, the Bohmian trajectories plotted in figures~\ref{fig:Q1} and \ref{fig:Q2} can be interpreted in two different ways. The first interpretation is the one explained above, where they correspond to different $i=1,\ldots,N$ trajectories in the same experiment described by the many-particle wave function given by \eref{example11}. In this case, the average value of the trajectories (dashed black lines in figures~\ref{fig:Q1}(b) and \ref{fig:Q2}(b)) is understood as the trajectory of the center of mass in that particular experiment. The second interpretation is that the trajectories correspond to different experiments of a single-particle system defined by the wave function $\psi(x,t)$. In this interpretation, $\langle x_{\rm cm} \rangle$ corresponds to a classical trajectory (for large enough $N$ and $V_{\rm ext}$ satisfying condition 2), as shown by Ehrenfest's theorem~\cite{ehrenfest27} discussed in \sref{ensemble}. Since the trajectories in both interpretations are mathematically identical, we conclude that the (Bohmian) trajectory of the center of mass in a unique experiment follows a classical trajectory, $x_{\rm cm}^j(t)=\langle x_{\rm cm} \rangle$, as anticipated in the discussion above on how these quantum states satisfy the condition in \eref{mar1}, i.e. $D^{j,i}(x,0) = D(x,0)$.
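Condition 2 on $V_{\rm ext}$ can also be probed numerically. The sketch below (our illustration, not tied to the figures' parameters) compares $\langle V'_{\rm ext}(x)\rangle$ with $V'_{\rm ext}(\langle x\rangle)$ for a Gaussian density: the identity is exact for the linear and harmonic potentials used above, while it fails for a quartic potential, whose average force also feels the packet width:

```python
import math

def gauss_avg(f, mu=1.0, s=1.0, n=4001, span=8.0):
    """<f(x)> over a normal density N(mu, s^2), midpoint rule."""
    a = mu - span * s
    h = 2.0 * span * s / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        total += f(x) * math.exp(-((x - mu) ** 2) / (2.0 * s * s)) * h
    return total / (s * math.sqrt(2.0 * math.pi))

MU = 1.0
lin_gap = abs(gauss_avg(lambda x: 2.0) - 2.0)                        # V = 2x
harm_gap = abs(gauss_avg(lambda x: x) - MU)                          # V = x^2/2
quart_gap = abs(gauss_avg(lambda x: 4.0 * x ** 3) - 4.0 * MU ** 3)   # V = x^4
```

For the quartic case, $\langle 4x^3\rangle = 4(\mu^3+3\mu s^2) \neq 4\mu^3$: the correction grows with the spatial extent $s$ of the wave function, which is exactly what condition 2 excludes.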
\subsection{Example 2: Many-particle quantum state with exchange and inter-particle interactions} \label{examfop} In the following we consider a more general example of a quantum state full of identical particles, with exchange and inter-particle interactions. We consider here a quantum wave function $\Psi$ which, at time $t=0$, is built from permutations of $N$ single-particle wave functions, $\psi _{i}(\vec r,0)$. We define $\Psi(\vec r_{1},\ldots, \vec r_{N},0)$ as \begin{equation} \Psi(\vec r_{1},\ldots, \vec r_{N},0) = \sum_{\vec p \in S_N}\prod_{i=1}^{N} \psi _{p_i}(\vec r_i,0) s_{\vec p}, \label{exch0} \end{equation} where $\vec p=\{p_1,p_2,\ldots,p_N\}$ is an element of the set $S_N$ of $N!$ permutations of $N$ elements. The term $s_{\vec p} = \pm 1$ is the sign of the permutation for fermions, while $s_{\vec p}=1$ for bosons. A global normalization constant has been omitted because it will be irrelevant. In particular, we consider that the single-particle wave functions $\psi _{i}(\vec r,0)$ and $\psi _{f}(\vec r,0)$ are either identical or without spatial overlap. For any $\vec r$ and $\psi _{f}(\vec r,0)$, we have \begin{eqnarray} \label{defexch1} \psi _{f}(\vec r,0)=\psi _{i}(\vec r,0) \qquad &\forall f \in N_i, \nonumber\\ \psi _{f}(\vec r,0) \psi _{i}(\vec r,0) \simeq 0 \qquad &\forall f \notin N_i, \label{defexch2} \end{eqnarray} where $N_i$ is the subset of wave functions identical to $\psi _{i}(\vec r,0)$. We now check if the quantum state defined by Eqs. \eref{exch0} and \eref{defexch2} is a quantum state full of identical particles.
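The permutation sum in \eref{exch0} can be made concrete for a small system. The following Python sketch (our illustration with $N=3$ hypothetical non-overlapping packets; the brute-force $N!$ sum is only viable for tiny $N$) builds the bosonic $\Psi$, exhibits its exchange symmetry, and shows the suppression of configurations that leave one packet empty, as implied by \eref{defexch2}:

```python
import cmath
import math
from itertools import permutations

CENTERS = [-6.0, 0.0, 6.0]   # far apart -> negligible spatial overlap
MOMENTA = [1.0, -2.0, 0.5]

def packet(x, c, k):
    """Gaussian packet centered at c with carrier momentum k (sigma = 1)."""
    return math.exp(-((x - c) ** 2) / 2.0) * cmath.exp(1j * k * x)

def Psi(xs):
    """Bosonic wave function: sum over all permutations, s_p = 1."""
    total = 0.0 + 0.0j
    for p in permutations(range(len(xs))):
        term = 1.0 + 0.0j
        for i, x in enumerate(xs):
            term *= packet(x, CENTERS[p[i]], MOMENTA[p[i]])
        total += term
    return total

one_per_packet = abs(Psi([-6.1, 0.2, 5.9]))   # one particle in each packet
swapped = abs(Psi([0.2, -6.1, 5.9]))          # exchange of particles 1 and 2
empty_packet = abs(Psi([-6.1, -5.9, 0.2]))    # nobody near x = +6: suppressed
```

Only configurations with one particle per (occupied) packet have appreciable amplitude, which is why the marginal reduces to $\alpha\sum_i|\psi_i|^2$ in the derivation that follows.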
The initial modulus squared of the wave function in \eref{exch0} can be written as \begin{equation} |\Psi|^2= \sum_{\vec p,\vec p' \in S_N}\prod_{i=1}^{N} \psi_{p_i}(\vec r_i,0) \psi^*_{p'_i}(\vec r_i,0) s_{\vec p} s_{\vec p'}, \label{modulexch} \end{equation} and the marginal distribution for each particle is then given from \eref{margdef1} as \begin{equation} D(\vec r,0)= \sum_{\vec p,\vec p' \in S_N} \psi_{p_1}(\vec r,0) \psi^*_{p'_1}(\vec r,0) \prod_{i=2}^{N} d_{p_i,p'_i} s_{\vec p} s_{\vec p'}, \label{exch2} \end{equation} with the matrix element $d_{i,f}$ defined as \begin{equation} d_{i,f} = \int \psi_{i}(\vec r,0) \psi^*_{f}(\vec r,0) d^3\vec r . \label{exch3} \end{equation} Because of \eref{defexch2}, $d_{i,f} = 1$ for all $f \in N_i$ and $d_{i,f} \simeq 0$ for $f \notin N_i$. Then, only the summands in \eref{exch2} with all the terms $d_{i,f}=1$ are different from zero, and we can rewrite $D(\vec r,0)$ as \begin{equation} D(\vec r,0)= \alpha \left( \sum_{i=1}^N |\psi_{i}(\vec r,0)|^2 \right), \label{exch22} \end{equation} where $\alpha$ is the product of the number of permutations of each $N_i$, ensuring a properly normalized distribution in \eref{mar1}. On the other hand, the selection of the $N$ positions in a unique experiment $\{\vec r_i^j(0)\}$ has to satisfy \eref{QE}. The selection of the first particle $\vec r_1^j(0)$ (independently of all other particles) is given by \eref{exch22}. To select the second particle $\vec r_2^j(0)$, one needs to take into account the already selected $\vec r_1^j(0)$.
In general, according to the definitions \eref{marcon1} and \eref{marcon2} and using \eref{modulexch}, \eref{exch2} and \eref{exch3}, the selection of the position $\vec r_{m}^j(0)$ as a function of the previous $m-1$ positions $\vec r_{1}^j(0),\ldots,\vec r_{m-1}^j(0)$ is given by the distribution \begin{equation} \fl D^{j,m}(\vec r,0) = \sum_{\vec p,\vec p' \in S_N} \left( \prod_{k=1}^{m-1}w^j_{k,p_k,p'_k}\right) \psi_{p_m}(\vec r,0) \psi^*_{p'_m}(\vec r,0) \left( \prod_{i=m+1}^{N} d_{p_i,p'_i}\right) s_{\vec p} s_{\vec p'} , \label{exch4} \end{equation} with the matrix element $w^j_{k,p_k,p'_k}$ defined as \begin{equation} w^j_{k,p_k,p'_k} =\psi_{p_k}(\vec r_k^j(0),0) \psi^*_{p'_k}(\vec r_k^j(0),0) . \label{exch5} \end{equation} For each position $\vec r^j_k(0)$, because of \eref{defexch2}, there is a set $N_i$ of wave functions for which $w^j_{k,i,f}=|\psi _{i}(\vec r^j_k(0),0)|^2$ for any $f \in N_i$, while $w^j_{k,i,f} \simeq 0$ for any $f \notin N_i$. Again, we can assume that only the summands with the products $w^j_{k,i,f}=|\psi _{i}(\vec r^j_k(0),0)|^2$ and $d_{i,f}=1$ will remain different from zero in \eref{exch4}, giving $\psi_{i}(\vec r,0) \psi^*_{f}(\vec r,0)=|\psi_{i}(\vec r,0)|^2$. We can then rewrite $D^{j,m}(\vec r,0)$ as \begin{eqnarray} D^{j,m}(\vec r,0) = \beta_m \left( \sum_{i=1}^N |\psi_{i}(\vec r,0)|^2 \right) \label{exch6} \\ \beta_m = \alpha \sum_{\vec p \in S_{m-1}} \prod_{k=1}^{m-1} |\psi _{p_k}(\vec r^j_k(0),0)|^2 . \end{eqnarray} Again, the parameter $\beta_m$ is irrelevant because the selection of the particles can be done through an expression of $D^{j,m}(\vec r,0)$ properly normalized to unity, where only the dependence on $\vec r$ matters. In summary, for the quantum state defined by Eqs.~\eref{exch0} and \eref{defexch2}, plus a set of trajectories $\{\vec r_i^j(0)\}$, we conclude that the (normalized versions of the) distributions $D(\vec r,0)$ in \eref{exch22} and $D^{j,m}(\vec r,0)$ in \eref{exch6} for any $m$ are identical.
Therefore, we are dealing with a quantum state full of identical particles whose center of mass follows a classical trajectory. As we have demonstrated in \sref{example2}, whether $\Psi(\vec r_{1},\ldots, \vec r_{N},t)$ fulfills the condition in \eref{mar1} or not has to be tested at a single time. Since we have shown that \eref{exch0} is a quantum state full of identical particles at $t=0$, we conclude that any quantum state whose wave function $\Psi(\vec r_{1},\ldots, \vec r_{N},t)$ is a solution of the many-particle Schr\"odinger equation in \eref{mpscho}, with or without external $V_{\rm ext}$ or inter-particle $V_{\rm int}$ potentials, and whose initial state is defined by Eqs.~\eref{exch0} and \eref{defexch2}, is a quantum state full of identical particles when $N \rightarrow \infty$. \subsubsection*{Numerical example} \begin{figure} \caption{ (a) Simulation with $N=20$ distinguishable particles: particle trajectories (thin lines), quantum center of mass trajectory (dashed black line), classical center of mass trajectory (solid orange line). (b) Same as (a) but for indistinguishable particles. (c) Relative error between the classical and quantum center of mass trajectories for 1 particle (black solid line) or $N$ distinguishable (light orange) or indistinguishable (dark blue) particles. From thin to thick lines: $N=4$, 8, 12, 16, and 20 particles. } \label{fig:one} \end{figure} In what follows we investigate this system numerically. We will show that the center of mass of the quantum state effectively tends to the classical result even for a quite small number of particles. The evolution of the initial wave function in \eref{exch0} in the limit $N \rightarrow \infty$ is numerically intractable. We will therefore consider a finite number of non-interacting bosons in a 1D space and test whether the center of mass tends to a classical trajectory as $N$ increases.
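The statistical idea behind this test can be previewed with a quick sampling sketch (this is not the simulation of the paper; the bimodal distribution and all numerical parameters are placeholder choices): if, in a unique experiment, the $N$ positions fill the marginal distribution, the center of mass is almost the same in every experiment, with a spread that shrinks as $N$ grows.

```python
import math
import random

random.seed(1)

def sample_position():
    # one particle position drawn from a placeholder marginal distribution D(x):
    # an equal-weight mixture of two Gaussians (a stand-in for sum_i |psi_i|^2 / N)
    center = -5.0 if random.random() < 0.5 else 5.0
    return random.gauss(center, 1.0)

def center_of_mass(n):
    # a "unique experiment": n independent positions selected from D
    return sum(sample_position() for _ in range(n)) / n

def spread(n, experiments=200):
    # standard deviation of the center of mass over repeated experiments
    cms = [center_of_mass(n) for _ in range(experiments)]
    mean = sum(cms) / len(cms)
    return math.sqrt(sum((c - mean) ** 2 for c in cms) / len(cms))

# the center of mass becomes sharper (more experiment-independent) as n grows
assert spread(1000) < spread(10)
```

The quantitative $1/\sqrt{N}$ decrease of this spread is discussed in \ref{app:error}.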
Each single-particle wave function $\psi_i(x_{i},t)$ is a solution of a single-particle Schr\"odinger equation under the potential $V_{\rm ext}$. Therefore, the bosonic many-particle wave function can be written at any time $t$ as \begin{equation} \Psi(x_{1},\ldots, x_{N},t) = \sum_{\vec p \in S_N}\prod_{i=1}^{N} \psi _{p_i}(x_i,t) . \label{exch} \end{equation} For comparison, we also consider the same state in \eref{exch}, but without exchange interaction, \begin{equation} \Psi(x_{1},\ldots, x_{N},t) = \prod_{i=1}^{N} \psi_i(x_{i},t). \label{no-ex} \end{equation} In particular, we will consider each of the $\psi_i$ in Eqs.~\eref{exch} and \eref{no-ex} to be a sum of two initially separated Gaussian wave packets with opposite central momenta, ensuring that they collide at a later time, \begin{equation} \fl \psi _i (x_j,0 ) = \frac{\exp {\left( {ik_{iL} x_j } \right)}}{{2\left( {\pi \sigma^2 } \right)^{1/4} }} \exp{\left( {- \frac{{\left( {x_j - x_{iL} } \right)^2 }}{{2\sigma^2 }}} \right)} +\frac{\exp {\left( {ik_{iR} x_j } \right)}}{{2\left( {\pi \sigma^2 } \right)^{1/4} }} \exp{\left( {- \frac{{\left( {x_j - x_{iR} } \right)^2 }}{{2\sigma^2 }}} \right)}. \label{gausiana} \end{equation} Here $x_{iL}$ and $x_{iR}$ are the centers of two (non-overlapping) Gaussian wave packets, with respective central momenta $k_{iL}$ and $k_{iR}$, and spatial dispersion $\sigma=15$ nm. Each of the wave functions has different random values of $x_{iL}$, $x_{iR}$, $k_{iL}$, and $k_{iR}$. These wave functions are evolved using the Schr\"odinger equation with an external potential $V_{\rm ext}$ corresponding to a constant electric field of $3.3 \times 10^5$ V/m. We show in \fref{fig:one}(a,b), for the cases with and without exchange interaction, the evolution of the quantum trajectories (thin lines). We plot their quantum center of mass (dashed black line) computed from \eref{cmue} for $N=20$.
We also plot the classical center of mass (solid orange line), computed from a Newtonian trajectory with the same initial position and velocity as the quantum center of mass. We notice that the Bohmian trajectories for states with exchange interaction do not cross in the physical space. This is a well-known property~\cite{identical} that obviously remains valid even if the center of mass becomes classical. Moreover, in \fref{fig:one}(c) we show the difference between the quantum and classical centers of mass for different values of $N$, with and without exchange interaction (see \ref{app:error} for a discussion of the error of a quantum state full of identical particles when a large, but finite, number of particles is considered). We see that the quantum center of mass $x_{\rm cm}(t)$ becomes more and more classical as $N$ grows, and that indistinguishability reduces the quantum non-classical effects faster than the case without exchange interaction. These results can be interpreted in a simple way: a unique experiment with $N$ distinguishable particles represents effectively only one experiment, while a unique experiment with $N$ indistinguishable particles represents, in fact, $N!$ different experiments, each one with the initial (Bohmian) positions interchanged. This explains why the latter center of mass becomes more similar to that given by the Ehrenfest theorem, which involves an infinite number of experiments. \subsection{Wave equation for the center of mass} \label{wavequation} While the description of a classical state requires only a trajectory, a complete Bohmian quantum state requires a wave function plus trajectories. Moreover, because of its exponential complexity, solutions of the Schr\"odinger equation in the whole many-particle configuration space are not accessible.
However, an equation describing the evolution of a wave function associated with the center of mass of a quantum state full of identical particles will help to certify that a classical center of mass behavior is fully compatible with a \emph{pure} quantum state. In addition, such an equation will provide an accessible numerical framework to analyze practical quantum systems under decoherence. One route towards this equation starts from the reduced density matrix of the center of mass and assumes some kind of collapse. Alternatively, as mentioned throughout the paper, we will follow a Bohmian procedure which allows the construction of such a wave equation for the center of mass through the use of the (Bohmian) conditional wave function~\cite{conditional,Oriols07,norsen}. To simplify the derivations, in the following we restrict ourselves to a 1D physical space. We define the center of mass of our $N$-particle state, $x_{\rm cm}$, and a set of relative coordinates, $\vec{y}=\{y_2,\ldots,y_N\}$, as \begin{eqnarray} x_{\rm cm} = \frac{1}{N} \sum_{i=1}^{N}x_i , \label{eq:var_chg} \\ y_j = x_j - \frac{(\sqrt{N}x_{\rm cm}+x_1)} {\sqrt{N} + 1} . \end{eqnarray} With these substitutions, the 1D version of the Schr\"odinger equation (cf. Eq.~\eref{mpscho}) can be rewritten as \begin{equation} \label{eq:condi} \rmi \hbar \frac{\partial \Psi}{\partial t} = \left( - \frac{\hbar^2}{2 M_{\rm cm}}\frac{\partial^2}{\partial x_{\rm cm}^2} -\frac{\hbar^2}{2m} \sum_{i = 2}^N \frac{\partial^2}{\partial y_i^2} + V \right) \Psi , \end{equation} where $M_{\rm cm} \equiv N m$ and $\Psi \equiv \Psi(x_{\rm cm},\vec{y},t)$ is the many-particle wave function in the new coordinates. The coordinates $\vec{y}$ in \eref{eq:var_chg} are chosen such that no crossed terms appear in the Laplacian of \eref{eq:condi}, see \ref{app:centerofmass}.
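The absence of crossed terms can also be checked numerically. The sketch below (an illustration, not part of the original derivation) compares the particle-coordinate Laplacian $\sum_i \partial^2/\partial x_i^2$, obtained by finite differences, with $(1/N)\,\partial^2/\partial x_{\rm cm}^2 + \sum_{i=2}^N \partial^2/\partial y_i^2$ evaluated analytically for a Gaussian test function; the value $N=3$, the test point, and the step $h$ are arbitrary choices.

```python
import math

N = 3  # small illustrative case; the identity holds for any N

def to_cm_coords(x):
    # the change of variables above: x_cm plus the relative coordinates y_j
    u = sum(x) / N
    r = math.sqrt(N)
    return u, [x[j] - (r * u + x[0]) / (r + 1) for j in range(1, N)]

def psi(x):
    # smooth test function written in the (x_cm, y) coordinates
    u, y = to_cm_coords(x)
    return math.exp(-(u * u + sum(t * t for t in y)) / 2.0)

def lap_particles(x, h=1e-3):
    # sum_i d^2 psi/dx_i^2 by central finite differences in particle coordinates
    out = 0.0
    for i in range(N):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        out += (psi(xp) - 2.0 * psi(x) + psi(xm)) / (h * h)
    return out

# for the Gaussian test function, d^2/du^2 gives (u^2 - 1)*psi, and similarly
# for each y_i; the x_cm contribution carries the 1/N weight seen above
x = [0.3, -0.7, 1.1]
u, y = to_cm_coords(x)
analytic = ((u * u - 1.0) / N + sum(t * t - 1.0 for t in y)) * psi(x)
assert abs(lap_particles(x) - analytic) < 1e-5  # no crossed terms left over
```

If the coordinates did mix, the finite-difference Laplacian would pick up crossed contributions and the two numbers would disagree.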
Notice that the many-particle Schr\"odinger equation in \eref{eq:condi} is, in general, non-separable because of the potential $V$ defined in \eref{appoten}. Hereafter, we derive the wave equation for the conditional wave function of the center of mass~\cite{conditional,Oriols07,norsen}, defined as $\psi_{\rm cd}(x_{\rm cm},t) \equiv \Psi(x_{\rm cm},\vec{y}^j(t),t)$ and associated with the $j$-experiment. By construction, the velocity (and therefore the trajectory) of the center of mass only depends on the spatial derivatives along $x_{\rm cm}$~\cite{conditional,norsen}. Therefore, $x_{\rm cm}^j(t)$ can be equivalently computed from either $\psi_{\rm cd}$ or $\Psi$. Following Ref.~\cite{Oriols07}, \eref{eq:condi} can be written in the conditional form \begin{eqnarray} \fl \label{eq:conditional} \rmi \hbar \frac{\partial \psi_{\rm cd}}{\partial t} = - \frac{\hbar^2}{2 M_{\rm cm}} \frac{\partial^2 \psi_{\rm cd}}{\partial x_{\rm cm}^2} -\left.\frac{\hbar^2}{2m} \sum_{i = 2}^N \frac{\partial^2 \Psi(x_{\rm cm},\vec{y},t)}{\partial y_i^2} \right|_{\vec{y}^j(t)} \nonumber \\ - \left.\rmi\hbar \sum_{i = 2}^N v_i^j(t) \frac{\partial \Psi(x_{\rm cm},\vec{y},t)}{\partial y_i}\right|_{\vec{y}^j(t)}+ V_{\rm cm}(x_{\rm cm}) \psi_{\rm cd}, \end{eqnarray} where $V_{\rm cm}(x_{\rm cm}) = NV_{\rm ext}(x_{\rm cm})$. See \ref{app:centerofmass} for how the term $V$ in the many-particle Schr\"odinger equation \eref{eq:condi} is translated into the term $V_{\rm cm}$ in the conditional wave equation \eref{eq:conditional}.
By inserting the polar decompositions of the full and conditional wave functions, $\Psi \equiv R \exp(\rmi S/\hbar)$ and $\psi_{\rm cd} \equiv R_{\rm cd} \exp(\rmi S_{\rm cd}/\hbar)$, into \eref{eq:conditional}, one can then derive a continuity-like equation, \begin{eqnarray} 0= \frac{\partial R_{\rm cd}^2}{\partial t} + \frac {\partial}{\partial x_{\rm cm}} \left( R_{\rm cd}^2 \frac{\partial S_{\rm cd}}{\partial x_{\rm cm}} \frac {1} {M_{\rm cm}} \right) +J|_{\vec{y}^j(t)} , \label{condition4} \\ J= \hbar \sum_{i=2}^N \left [ \frac {\partial R^2}{\partial y_i} v_i^j(t) - \frac {\partial}{\partial y_i} \left ( \frac{1}{m} R^2 \frac {\partial S}{\partial y_i} \right ) \right ] , \label{condition4bis} \end{eqnarray} plus a quantum Hamilton--Jacobi-like equation, \begin{eqnarray} 0= \frac{\partial S_{\rm cd}}{\partial t} + \frac {1} {2 M_{\rm cm}} \left(\frac{\partial S_{\rm cd}}{\partial x_{\rm cm}} \right)^2+ V_{\rm cm} +G|_{\vec{y}=\vec{y}^j(t)} , \label{condition5} \\ G=Q_{\rm cm} + \sum_{i=2}^N \left ( \frac{1}{2 m} \left ( \frac {\partial S}{\partial y_i} \right)^2 + Q_i - v_i^j(t) \frac {\partial S}{\partial y_i} \right) . \label{condition5bis} \end{eqnarray} These equations involve the quantum potentials \begin{eqnarray} \label{eq:quantum_cm1} Q_{\rm cm} = Q_{\rm cm}(x_{\rm cm},\vec{y},t)=-\frac{\hbar^2}{2 M_{\rm cm} R}\frac{\partial^2 R}{\partial x_{\rm cm}^2 } , \\ \label{eq:quantum_cm2} Q_{i} = Q_{i}(x_{\rm cm},\vec{y},t)=-\frac{\hbar^2}{2mR}\frac{\partial^2 R}{\partial y_i^2 } , \end{eqnarray} and the (non-local) velocity fields \begin{eqnarray} v_{\rm cm} = v_{\rm cm}(x_{\rm cm},\vec{y},t) = \frac{1}{M_{\rm cm}} \frac{\partial S}{\partial x_{\rm cm}} ,\label{vtcm} \\ v_i = v_i(x_{\rm cm},\vec{y},t) = \frac{1}{m}\frac{\partial S}{\partial y_i} . \end{eqnarray} The behavior of the quantum Hamilton--Jacobi equation \eref{condition5} would be classical if the effect of the ``potential'' $G$ could be ignored.
Therefore, the key point in our demonstration is to show that $G$ in \eref{condition5bis} fulfills \begin{equation} \left.\frac{\partial G}{\partial x_{\rm cm}}\right|_{\vec{y}=\vec{y}^j(t)} = 0, \label{G0} \end{equation} for a quantum state full of identical particles. The first part of this proof is showing that \begin{equation} \fl \left.\frac {\partial}{\partial x_{\rm cm}} \sum_{i=2}^N \left ( \frac{1}{2 m} \left ( \frac {\partial S}{\partial y_i} \right)^2 - v_i^j(t) \frac {\partial S}{\partial y_i} \right)\right|_{\vec{y}^j(t)} = \left.\sum_{i=2}^N \left ( \frac{1}{m} \frac {\partial S}{\partial y_i} \frac {\partial^2 S}{\partial x_{\rm cm} \partial y_i} - v_i^j(t) \frac {\partial^2 S}{\partial x_{\rm cm} \partial y_i} \right)\right|_{\vec{y}^j(t)} = 0 , \end{equation} where we have used that ${\partial S}/{\partial y_i}$ depends on $x_{\rm cm}$ while $v_i^j(t)$ does not, and that, evaluated along the trajectory, $(1/m)\,{\partial S}/{\partial y_i}$ is precisely the Bohmian velocity $v_i^j(t)$. The second part of the proof is showing that \begin{equation} \left[ \frac {\partial}{\partial x_{\rm cm}} \left(Q_{\rm cm}+\sum_{i=2}^N Q_i \right)\right]_{\vec{y}^j(t)} = 0 . \label{extra3} \end{equation} Up to here, all equations involve only the $j$-experiment. Since we know from \sref{evolution} that any other trajectory of the center of mass associated with the $k$-experiment will satisfy $x_{\rm cm}^k(t)=x_{\rm cm}^j(t) \equiv x_{\rm cm}(t)$, the shape of the potential term in \eref{extra3} for the $j$-experiment must also be equal to that of any other $k$-experiment. Therefore, we substitute \eref{extra3} by an average over an ensemble of experiments, \begin{equation} \fl \left[ \frac {\partial}{\partial x_{\rm cm}} \left(Q_{\rm cm}+ \sum_{i=2}^N Q_i \right)\right]_{\vec{y}^j(t)}= \frac {1} {M} \sum_{k=1}^M \left[ \frac {\partial}{\partial x_{\rm cm}} \left( Q_{\rm cm}+\sum_{i=2}^N Q_i\right)\right]_{x_{\rm cm}^k(t),\vec{y}^k(t)} . \label{clasical2} \end{equation} Since the trajectories $x_{\rm cm}^k(t)$ and $\vec{y}^k(t)$ in the r.h.s.
are selected according to \eref{QE}, we can substitute the sum in \eref{clasical2} by an integral weighted by $R^2$, \begin{eqnarray} \fl \frac{1}{M} \sum_{k=1}^M \left[ \frac {\partial}{\partial x_{\rm cm}} \left( Q_{\rm cm}+\sum_{i=2}^N Q_i\right)\right]_{x_{\rm cm}^k(t),\vec{y}^k(t)}\nonumber\\= \int\limits_{x_{\rm cm}}\int\limits_{y_2}\ldots\int\limits_{y_N} R^2 \frac{\partial}{\partial x_{\rm cm}} \left( Q_{\rm cm}+\sum_{i=2}^N Q_i \right ) dx_{\rm cm} dy_2\ldots dy_N . \label{condition3new} \end{eqnarray} For each term $Q_i$ we have that \begin{equation} \fl \int\limits_{x_{\rm cm}} R^2(x_{\rm cm},\vec{y}) \frac{\partial Q_i(x_{\rm cm},\vec{y})}{\partial x_{\rm cm}} d x_{\rm cm} = \frac{\hbar^2}{2m} \left[ \int\limits_{x_{\rm cm}} \frac{\partial R}{\partial x_{\rm cm}} \frac{\partial^2 R}{\partial y_i^2} dx_{\rm cm} -\int\limits_{x_{\rm cm}} R \frac{\partial^3 R}{\partial x_{\rm cm} \partial y_i^2} dx_{\rm cm} \right] . \label{demo32} \end{equation} Integrating the first term by parts in $x_{\rm cm}$ (assuming that $R$ vanishes for $x_{\rm cm}\rightarrow\pm \infty$) shows that it equals $-\int_{x_{\rm cm}} R \,\partial^3 R/(\partial x_{\rm cm} \partial y_i^2)\, dx_{\rm cm}$, so the bracket in \eref{demo32} reduces to $-2\int_{x_{\rm cm}} R \,\partial^3 R/(\partial x_{\rm cm} \partial y_i^2)\, dx_{\rm cm}$. This remaining term vanishes once the integral over $y_i$ in \eref{condition3new} is performed: an integration by parts in $y_i$ rewrites it as the derivative with respect to $x_{\rm cm}$ of $\int_{y_i} (\partial R/\partial y_i)^2 dy_i$, whose integral over $x_{\rm cm}$ is a vanishing boundary term. A similar argument shows that the term with $Q_{\rm cm}$ in \eref{condition3new} is also zero (in that case, from the $x_{\rm cm}$ integration alone). The fact that \eref{condition3new} vanishes can be anticipated by noting that integrals of this type on the whole configuration space also appear (and are zero) in the derivation of Ehrenfest's theorem when the polar form of the wave function is used. We have just demonstrated that the (conditional) wave equation of the center of mass of a quantum state full of identical particles implies \eref{G0}. In this case, the Hamilton--Jacobi equation in \eref{condition5} depends only on $S_{\rm cd}$, not on $R_{\rm cd}$.
Therefore, the velocity of the center of mass, \begin{equation} v_{\rm cm}= \frac{1}{M_{\rm cm}} \frac{\partial S_{\rm cd}}{ \partial x_{\rm cm}} , \end{equation} and its trajectory can be computed from \eref{condition5} independently of \eref{condition4}. Moreover, once the ``potential'' $G$ is ignored, \eref{condition5} is analogous to the (classical) Hamilton--Jacobi equation, from which one can derive a Schr\"odinger-like equation \begin{equation} \rmi \hbar \frac{\partial \psi_{\rm cd}}{\partial t}= \left( -\frac{\hbar^2}{2 M_{\rm cm}} \frac{\partial^2}{\partial x_{\rm cm}^2} + V_{\rm cm} - Q_{\rm cm} \right) \psi_{\rm cd} . \label{condgen} \end{equation} In the derivation of this wave equation, we have also used \eref{condition4}. The exact shape of the term $J$ in \eref{condition4} is irrelevant for computing the velocity of the center of mass (which only depends on \eref{condition5}), and we have set $J=0$ in order to deal with a conditional wave function of unit norm. This equation is also known as the (non-linear) classical Schr\"odinger wave equation~\cite{richardson_nonlinear_2014,Oriols12,nikolic}. A study of the dynamics associated with this equation can be found in Ref.~\cite{FNL2016}. We emphasize that the correlations between $x_{\rm cm}$ and the rest of the $y_i$ present in \eref{eq:condi} are included through the non-linear term $-Q_{\rm cm}$ in the conditional equation of motion \eref{condgen}. \subsubsection*{Numerical examples} \begin{figure} \caption{ (a) Evolution of a classical wave packet subjected to the potential $V_{\rm cm}(x)=2 x$. The initial wave function is a Gaussian wave packet of width $\sigma = 1$, centered around $x_0=-15$, with an initial positive velocity $k_0 = 10$. (b) Trajectories corresponding to these dynamics. Units are $M_{\rm cm}=\hbar=1$. } \label{fig:C1} \end{figure} \begin{figure} \caption{ Same as \fref{fig:C1} but in a potential $V_{\rm cm}(x)=x^2/2$.
The initial Gaussian wave function has $x_0=-2$, $\sigma=0.2$, and $k_0=0$. } \label{fig:C2} \end{figure} In order to illustrate the previous derivation, in what follows we solve the (non-linear) classical Schr\"odinger wave equation in \eref{condgen}. We show in \fref{fig:C1} the evolution of a wave packet under the potential $V_{\rm cm}(x) = 2 x$. One can see that the classical wave packet preserves its shape, and its corresponding trajectories are the expected classical parabolic ones. This contrasts with the simulation of the same initial quantum wave packet in \fref{fig:Q1}, which expanded over time. Another simulation is shown in \fref{fig:C2}, in this case for a harmonic potential with a narrow initial wave packet displaced from the origin. As expected from the classical behavior, the trajectories oscillate around the origin, while the wave packet maintains its narrow shape. We emphasize that the initial wave packet has to reflect the fact that the probability distribution of the center of mass is very sharp~\cite{FNL2016}. \section{Quantum states without a classical center of mass} \label{sec:macroscopic} There are certainly many examples of quantum states whose center of mass does not behave classically~\cite{Giulini96, schlosshauer14,Oriol11,Zeilinger}. In the following we discuss two paradigmatic examples. \subsection{Single-particle states} For single-particle states, the center of mass in a unique experiment is the Bohmian position of the particle itself. Moreover, it cannot satisfy condition 1 because different experiments will provide different results. Therefore, the center of mass of a quantum system with one (or a few) particles cannot follow our classical intuition. Let us analyze the problems appearing when Bohmian mechanics is used to study the quantum-to-classical transition for single-particle states.
By inserting $\psi = R \exp(\rmi S/\hbar)$ into the single-particle Schr\"odinger equation one arrives at a quantum continuity equation \begin{equation} \frac {\partial R^2} {\partial t} + \frac {\partial } {\partial x} \left( \frac {R^2} {m} \frac{\partial S}{\partial x} \right) =0, \label{conti} \end{equation} plus a quantum Hamilton--Jacobi equation~\cite{bohm52} given by \begin{equation} \label{hamilton} \frac{\partial S}{\partial t} + \frac {1} {2m} \left(\frac{\partial S}{\partial x} \right)^2 +V_{\rm ext} + Q = 0. \end{equation} It can be easily demonstrated that \eref{conti} and \eref{hamilton} give a Newton-like equation for the (Bohmian) trajectories~\cite{bohm52,Oriols12}, \begin{equation} \label{newton} m \frac {d v(x^j(t),t)} {dt} = \left[ -\frac {\partial} {\partial x} \left(V_{\rm ext} + Q \right) \right]_{x = x^j(t)}. \end{equation} It has been argued~\cite{Rosen64} that a classical (Newtonian) trajectory could be obtained from \eref{newton} by just adding a \emph{new} condition, \begin{equation} \frac {\partial Q} {\partial x} = 0. \label{qzero} \end{equation} The problem with this statement is that the classical state given by $x^j(t)$ is not compatible with a quantum state given by the same trajectory $x^j(t)$ and a wave function $\psi$. The reason for this incompatibility is that such a $\psi$ does not, in general, exist: the wave function would have to satisfy, at each position, three equations, Eqs.~\eref{conti}, \eref{hamilton} and \eref{qzero}, with only two unknowns, $R$ and $S$. Another single-particle approach to reach classical dynamics is to interpret the potential $V_{\rm ext}$ as an additional unknown, which allows one to define some (exotic) systems where the trajectory and the wave function belong to a state which is simultaneously classical and quantum~\cite{Mako02}. The simplest example is a plane wave with a constant $R=1$, giving $Q=0$.
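For completeness, this last case can be checked explicitly (a worked example added here for illustration). For a free plane wave, $\psi=\exp[\rmi(kx-Et/\hbar)]$, one has $R=1$ and $S=\hbar k x-Et$, so that

```latex
\begin{equation*}
Q = -\frac{\hbar^2}{2mR}\frac{\partial^2 R}{\partial x^2} = 0 ,
\qquad
\frac{\partial S}{\partial t}
+ \frac{1}{2m}\left(\frac{\partial S}{\partial x}\right)^2
= -E + \frac{\hbar^2 k^2}{2m} .
\end{equation*}
```

With $V_{\rm ext}=0$ and $E=\hbar^2k^2/2m$, Eqs.~\eref{conti}, \eref{hamilton} and \eref{qzero} are simultaneously satisfied, and the Bohmian trajectories $x^j(t)=x^j(0)+(\hbar k/m)t$ are uniform classical motions.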
However, even these particular compatible solutions have some unphysical features in disagreement with our classical intuition. The initial positions of the Bohmian trajectories $x^j(t)$ associated with these systems obviously have to be selected according to the distribution $|\psi|^2$ obtained from \eref{QE}. This means that different initial positions are obtained in different experiments. For the plane wave, the particle can depart from anywhere at the initial time, contradicting our classical intuition of well-defined initial positions. On the contrary, we have shown in \sref{sec:fullofparticles} that a quantum state full of identical particles is compatible with a center of mass following a classical trajectory. The reason why both classical and quantum states are compatible in our case is that the condition in \eref{G0} is satisfied in a \emph{natural} way by a quantum state full of identical particles (without imposing any condition on $V_{\rm ext}$). In addition, the classical trajectory of the center of mass of such states directly implies that its initial position and velocity do not change when the experiment is repeated. \subsection{Many-particle states} \label{exotic} Our definition of a quantum state full of identical particles discussed in \sref{example2} is quite \emph{natural} when the number of particles becomes very large. However, we define here a quantum state with a large number $N$ of particles whose strong correlations prevent it from satisfying our requirements for a quantum state full of identical particles. One can think of wave functions of identical particles which make it impossible for a unique experiment to fill the whole support of the marginal distribution. Macroscopic quantum many-particle superpositions~\cite{Oriol11,Zeilinger,Oriol11a} will not satisfy the condition in \eref{mar1}, and therefore we do not expect a classical behavior for their center of mass, even when $N\to\infty$.
An extreme example would be the superposition of two separated wave packets (a Schr\"odinger-cat-like state) such as \begin{equation} \Psi(x_1,\ldots,x_N) = \frac{1}{\sqrt{2}} \left(\prod_{i=1}^N \phi(x_i-x_L) +\prod_{i=1}^N \phi(x_i-x_R) \right). \label{mar17} \end{equation} We assume that $\phi(x)$ is a (properly normalized) wave packet centered around $x=0$, whose support is much smaller than the distance $x_R-x_L$ between the two wave packets, so that the overlap between $\phi(x_i-x_L)$ and $\phi(x_i-x_R)$ is zero. The wave function in \eref{mar17} only allows for two kinds of quantum states. The first one corresponds to the wave function above plus all particles around $x_L$. The second one corresponds to the same wave function plus all particles around $x_R$. In order to see this from the point of view of the probability distributions, we calculate the marginal probability distribution of this state, using \eref{margdef1}, \begin{equation} D(x,0)=\frac{1}{2} \left(|\phi(x-x_L)|^2+|\phi(x-x_R)|^2\right). \end{equation} Therefore, the first particle position in the $j$-experiment has equal probability to be either $x^j_1(0) \approx x_L$ or $x^j_1(0) \approx x_R$. If, for instance, it is $x^j_1(0) \approx x_L$, then, using \eref{marcon1} and \eref{marcon2}, the second particle is selected according to $D^{j,2}(x,0)=|\phi(x-x_L)|^2$, and it will also be $x^j_2(0) \approx x_L$. In fact, all subsequent particles are located around $x_L$ because \eref{marcon1} and \eref{marcon2} show that $D^{j,i}(x_i,0)=|\phi(x_i-x_L)|^2$ for $i>1$. Similarly, if in another experiment the first particle is $x^j_1(0) \approx x_R$, then all particles will be around $x^j_i(0)\approx x_R$. It is obvious, then, that in this case $D(x,0)\neq D^{j,i}(x,0)$ in all experiments.
This is because the marginal distribution for this state has non-zero support around both $x_L$ and $x_R$, while the quantum state in any experiment involves only particles at the left or only particles at the right, but never particles at both sides. We discuss here why the center of mass of a quantum state like the one in \eref{mar17} can show quantum interference. Although the marginal distribution has support on both sides, in a particular experiment the Bohmian trajectories associated with this state will be present on only one side, say the left support. Thus, the dynamics of the center of mass is associated only with the particles in the left support of the wave function. However, (classically unexpected) interferences could appear later if the left wave function overlaps and interferes with the right one (empty of particles), thus modifying the velocities of the particles. On the contrary, in the numerical example of \sref{examfop}, where the marginal distribution also has two separated supports, such (classically unexpected) interferences will not appear because it is a quantum state full of identical particles. Bohmian trajectories will always fill up both the left and right supports, and the center of mass will always be an average over all (left and right) particles. If the left and right supports are large enough to be macroscopically distinguishable, we will \emph{see} two classical particles, described by the center of mass of the left and right Bohmian particles, respectively. The trajectories of these centers of mass will correspond to the elastic collision between classical particles. We conclude that quantum states whose supports are partially empty of particles are required to observe effects against our classical intuition. \section{Conclusions} \label{sec:conc} In summary, by using the peculiar properties of the center of mass interpreted as a Bohmian particle, we have provided a \emph{natural} route to explain the quantum-to-classical transition.
We have defined a quantum state full of identical particles as a state whose distribution of the Bohmian positions in a unique experiment is always equal to the marginal distribution. The center of mass of such states satisfies our classical intuition in the sense that, first, its initial position and velocity are perfectly fixed when experiments are repeated (prepared with the same wave function) and, second, it follows a classical trajectory. We emphasize that only the center of mass behaves classically, while the rest of the microscopic degrees of freedom can and will show quantum dynamics. In this sense, the quantum-to-classical transition appears due to the \emph{natural} coarse-grained description provided by the center of mass. Due to the compatibility between Bohmian and orthodox results~\cite{bohm52,Holland93,Oriols12,ABM_review}, the arguments in this paper can be equivalently derived with orthodox arguments. The Bohmian route explored here avoids dealing with the reduced density matrix and the collapse law. It is commonly accepted in orthodox approaches that decoherence plays a relevant role in the quantum-to-classical transition, and this work does not contradict that view. One can see that the center of mass (our open system) is strongly entangled with the rest of the degrees of freedom of the macroscopic object (the environment). Notice, from the definition of the potential in \eref{appoten}, that the many-particle Schr\"odinger equation in \eref{eq:condi} is, in general, non-separable. Without this entanglement, we would not arrive at the classical (dispersionless) wave equation in \sref{wavequation}, but at a single-particle Schr\"odinger equation with the typical spreading of wave packets. Notice that the original Schr\"odinger equation is linear, while the classical version is non-linear, breaking the superposition principle.
A paradigmatic example of the role of decoherence in destroying superpositions (and avoiding wave packet spreading) was initially presented by Zurek using the example of Hyperion, a chaotically tumbling moon of Saturn~\cite{decoh1,decoh2,decoh3,decoh4}. He estimated that, without decoherence, within 20 years the quantum state of Hyperion would evolve into a highly nonlocal coherent superposition of macroscopically distinguishable orientations. It is important to emphasize that, in our work, the environment of the center of mass of Hyperion would consist of $N \approx 10^{44}$ particles, which would be responsible for the decoherence of the center of mass. The conclusions in this paper for a quantum state full of identical particles, derived for an infinite number of particles, can be translated to a macroscopic system with a very large but finite number of particles whenever the error defined in \ref{app:error} remains smaller than some predetermined measuring accuracy. In particular, for the two numerical examples of this paper, the central limit theorem~\cite{central} ensures that the center of mass of a quantum state full of identical particles with a finite number of particles tends to the exact classical value as $N$ grows. Finally, an explanation of why we have ignored the measurement apparatus throughout this article is in order. It is well known that the Bohmian formalism does not include any collapse law; instead, one has to include the interaction between the system and a measuring apparatus. We have ignored this interaction because we are only dealing with a classical object measured by a classical apparatus. Both the classical object and the classical measuring apparatus are in a quantum state full of identical particles whose centers of mass follow classical trajectories $\vec r_{\rm s,cm}(t)$ and $\vec r_{\rm a,cm}(t)$, respectively. Then, the interaction between the system and the apparatus, i.e.
between $\vec r_{\rm s,cm}(t)$ and $\vec r_{\rm a,cm}(t)$, is unproblematic and can be ignored if the classical measurement is assumed not to perturb the classical macroscopic object. On the contrary, the present work cannot be directly applied to the measurement of a quantum system in general. Obviously, many quantum systems cannot be described by a quantum state full of identical particles when different experiments (with identical wave function preparation) provide different measured results. Nevertheless, a straightforward generalization of the present work can explain why the measuring apparatus (entangled with the quantum system) presents a classical behavior, with its macroscopic pointer (in fact, its center of mass) following a classical trajectory. \ack We would like to thank David Tena for fruitful discussions. This work has been partially supported by the Fondo Europeo de Desarrollo Regional (FEDER) and the ``Ministerio de Ciencia e Innovaci\'{o}n'' through the Spanish Project TEC2015-67462-C2-1-R, the Generalitat de Catalunya (2014 SGR-384), the European Union's Horizon 2020 research and innovation program under grant agreement No 696656, and the Okinawa Institute of Science and Technology Graduate University. \appendix \section{Evolution of the error of the center of mass for a quantum state full of identical particles with a finite number of particles} \label{app:error} The definition of a quantum state full of identical particles in \eref{mar1} of the text requires, in principle, $N \to \infty$. Let us now study the properties of a quantum state with a finite number, $N_F$, of particles that becomes a quantum state full of identical particles when $N_F \to \infty$. We use the subscript $F$ in $N_F$ as a reminder that the number of particles is finite.
In particular, the selection of the initial positions of the trajectories associated with this new quantum state with only $N_F$ particles also follows \eref{marcon1} and \eref{marcon2}. Once the $N_F$ particles are selected, we can distribute them following \begin{equation} C^{j_0,F}(\vec r,t) = \frac{1}{N_F} \sum_{i=1}^{N_F} \delta(\vec r-\vec r_i^{j_0}(t)) , \label{disF} \end{equation} and define their center of mass as \begin{equation} \vec{r}_{\rm cm}^{j_0,F}(t) = \int d\vec r \; \vec r\; C^{j_0,F}(\vec r,t)=\frac{1}{N_F}\sum_{i=1}^{N_F} \vec r_i^{j_0}(t) . \label{cmf} \end{equation} Notice again that $\vec{r}_{\rm cm}^{j_0,F}(t)\neq \vec{r}_{\rm cm}^{j_0}(t)$ because we are dealing here with a finite number of particles $N_F$, while we know that $\vec{r}_{\rm cm}^{j_0}(t)=\langle \vec{r}_{\rm cm} \rangle(t)$. The error resulting from comparing this center of mass $\vec{r}_{\rm cm}^{j_0,F}(t)$ with the one obtained for $N_F\to\infty$ can be estimated as \begin{equation} \mathop{\rm Err}(t) = \left|\langle \vec{r}_{\rm cm} \rangle (t)-\vec{r}_{\rm cm}^{j_0,F} (t)\right| . \label{error} \end{equation} As indicated in \eref{qsfip}, $\langle \vec{r}_{\rm cm}\rangle (t)$ is independent of the experiment, but $\vec{r}_{\rm cm}^{j_0,F} (t)$ in \eref{cmf} varies between experiments due to quantum randomness. To further develop expression \eref{error}, let us now assume that the selections of all the $\vec r_i^{j_0}(t)$ are independent, i.e., that we select each $\vec r_i^{j_0}(t)$ according to $D(\vec r_i,t)$. This is exactly the case in the two numerical examples explained in sections \ref{example1} and \ref{examfop}.
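Under this independence assumption, the $1/N_F$ scaling of the center-of-mass fluctuations can be verified with a short simulation (a sketch only; the uniform distribution and all parameters are placeholder choices standing in for $D$):

```python
import random

random.seed(2)

N_F = 400            # finite number of particles (placeholder value)
M = 2000             # number of repeated experiments
SIGMA2 = 1.0 / 12.0  # variance of the uniform distribution on [0, 1)

# each experiment: N_F independent positions drawn from D (here uniform),
# then their average, i.e. the center of mass of a finite-N_F experiment
cms = [sum(random.random() for _ in range(N_F)) / N_F for _ in range(M)]

mean = sum(cms) / M
var = sum((c - mean) ** 2 for c in cms) / M

# the variance of the center of mass over experiments scales as sigma^2/N_F
assert abs(var - SIGMA2 / N_F) < 0.3 * SIGMA2 / N_F
```

The same scaling holds for any distribution with finite variance, which is the content of the central limit theorem invoked next.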
The center of mass in \eref{cmf} corresponds to a sequence of independent and identically distributed random variables $\vec r_i$ drawn from a distribution $D(\vec r_i,t)$ with a mean value given by $\langle \vec{r}_{\rm cm}\rangle (t)=\int \vec r\; D(\vec r,t) d\vec r$ and with a finite variance given by \begin{equation} \sigma^2(t)= \int (\vec r-\langle \vec{r}_{\rm cm}\rangle (t))^2\; D(\vec r,t) d\vec r . \end{equation} We know from the central limit theorem~\cite{central} that the distribution of $\vec{r}_{\rm cm}^{j_0,F}(t)$ in different experiments given by \eref{cmf} follows a normal distribution as $N_F$ grows, with mean value and variance \begin{eqnarray} \vec{r}_{\rm cm}^{j_0,F}(t)= \int d\vec r \; \vec r\; C^{j_0,F}(\vec r,t) \approx \langle \vec{r}_{\rm cm}\rangle (t) , \\ \int d\vec r \; (\vec r-\langle \vec{r}_{\rm cm}\rangle (t))^2\; C^{j_0,F}(\vec r,t) \approx \frac{\sigma(t)^2}{N_F} . \label{limvar} \end{eqnarray} These results are valid for any initial distribution $D(\vec r_i,t)$ as long as $N_F$ is large enough.\\ The error in expression \eref{error} can now be rewritten in terms of the probability of getting a difference between $\langle \vec{r}_{\rm cm} \rangle (t)$ and $\vec{r}_{\rm cm}^{j_0,F} (t)$ smaller than a given error, $\mathop{\rm Err}$, \begin{equation} {\mathcal P}\left(\left|\langle \vec{r}_{\rm cm} \rangle-\vec{r}_{\rm cm}^{j_0,F}\right|<\mathop{\rm Err}\right) =2 \Phi_N\left(\frac{\sqrt{N_F} \mathop{\rm Err}}{\sigma}\right)-1 \label{error1} \end{equation} where $\Phi_N(x)$ is the cumulative distribution function of the standard normal distribution, \begin{equation} \Phi_N(x)=\int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} \exp(-t^2/2) dt , \end{equation} and we have used its property $\Phi_N(x)+\Phi_N(-x)=1$. 
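As a numerical aside, the probability formula \eref{error1} can be checked against a direct simulation. The following sketch is our own construction: a uniform distribution on $[0,1]$ stands in for $D(\vec r,t)$, so that $\mu=1/2$ and $\sigma=1/\sqrt{12}$, and the Monte Carlo estimate is compared with the central-limit prediction.

```python
import numpy as np
from math import erf, sqrt

# Empirical check of Eq. (error1): for i.i.d. samples the probability
# P(|mean - mu| < Err) approaches 2*Phi_N(sqrt(N_F)*Err/sigma) - 1.
rng = np.random.default_rng(1)
N_F, n_exp, err = 500, 5000, 0.01
mu, sigma = 0.5, 1 / sqrt(12)

means = rng.random((n_exp, N_F)).mean(axis=1)    # one mean per "experiment"
p_mc = float(np.mean(np.abs(means - mu) < err))  # Monte Carlo estimate

Phi_N = lambda x: 0.5 * (1 + erf(x / sqrt(2)))   # standard normal CDF
p_clt = 2 * Phi_N(sqrt(N_F) * err / sigma) - 1   # prediction of Eq. (error1)
assert abs(p_mc - p_clt) < 0.03
```

The agreement improves as $N_F$ grows, as expected from \eref{limvar}.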
If we require, for example, the difference $\langle \vec{r}_{\rm cm} \rangle-\vec{r}_{\rm cm}^{j_0,F}$ to be smaller than $\mathop{\rm Err}=0.005\sigma$ with a probability of ${\mathcal P}\left(\left|\langle \vec{r}_{\rm cm} \rangle-\vec{r}_{\rm cm}^{j_0,F}\right|<0.005\sigma\right)=0.98$, then we get that the number of particles $N_F$ has to be equal to or larger than \begin{equation} N_F \ge \frac{\left(\Phi_N^{-1}(0.99)\right)^2}{0.005^2} \simeq 2 \times 10^5 . \label{error2} \end{equation} In summary, if we consider $0.005\sigma$ an acceptable error for $\vec{r}_{\rm cm}^{j_0,F}$, then we are sure that $98\%$ of the experiments with our quantum state with a number of particles $N_F\gtrsim 2\times 10^5$ satisfy the fixed error. As a more realistic example, let us consider a macroscopic system with a number of particles equal to a mole of matter, i.e. $N_F=6\times10^{23}$ particles. In addition, we require that the value of $\vec{r}_{\rm cm}^{j_0,F}$ always gives the classical value, i.e., that only once in $M_F = 2\times10^{12}$ experiments does the value of $\vec{r}_{\rm cm}^{j_0,F}$ exceed a fixed value of $\mathop{\rm Err}$. Then, we can compute the required error by solving the relation ${\mathcal P}=1 - 1/M_F$ in \eref{error1} as \begin{equation} \frac{\mathop{\rm Err}}{\sigma} =\frac{\Phi_N^{-1}(1-10^{-12})}{\sqrt{N_F}} \simeq 9\times10^{-12} . \label{error3} \end{equation} In summary, for a quantum state with a number of particles typical of a macroscopic system, i.e. $N_F=6\times10^{23}$, the error of $\vec{r}_{\rm cm}^{j_0,F}$ is always smaller than $\mathop{\rm Err} \approx10^{-11} \sigma$ (except in one experiment every $M_F=2\times10^{12}$). The time evolution of the error in \eref{error} can be obtained once we know the particular time-dependence of the variance of $D(x,t)$. 
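Before turning to the time dependence, the two estimates \eref{error2} and \eref{error3} can be reproduced directly. The sketch below uses Python's \texttt{statistics.NormalDist}; note that solving ${\mathcal P}=1-1/M_F$ exactly gives the argument $1-1/(2M_F)$ rather than the rounded $1-10^{-12}$, which leads to the same $\simeq 9\times10^{-12}$ figure.

```python
from math import sqrt
from statistics import NormalDist

inv_cdf = NormalDist().inv_cdf        # Phi_N^{-1}, the inverse normal CDF

# Example 1, Eq. (error2): Err = 0.005*sigma with probability 0.98,
# i.e. Phi_N(x) = 0.99, requires about 2 x 10^5 particles.
N_F_min = (inv_cdf(0.99) / 0.005) ** 2
assert 2.0e5 < N_F_min < 2.4e5

# Example 2, Eq. (error3): a mole of particles and one failure in
# M_F = 2e12 experiments; P = 1 - 1/M_F gives Phi_N(x) = 1 - 1/(2*M_F).
N_F, M_F = 6e23, 2e12
err_over_sigma = inv_cdf(1 - 1 / (2 * M_F)) / sqrt(N_F)
assert 8e-12 < err_over_sigma < 1.1e-11     # ~9 x 10^-12
```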
For example, in the case of $D(x,t)$ given by the modulus square of a Gaussian wave packet in free space, the standard deviation is given (for long times) by \begin{equation} \sigma(t) = \sigma_0 \sqrt{1 + \left( \frac{\hbar t}{2 m \sigma_0^2} \right)^2} \approx \frac{\hbar t}{2 m \sigma_0} . \label{sigma} \end{equation} For example, assuming an initial spatial dispersion $\sigma_0=100$ nm and a mole of carbon atoms (each of mass $m=2\times10^{-26}$ kg), after $t = 1$ year of classical evolution, the absolute error in \eref{error3} is given by $\mathop{\rm Err} (t) \simeq 9\times10^{-12} \sigma(t) \simeq 8$ $\mu$m. In summary, in the overwhelming majority of experiments (all $M_F=2\times10^{12}$ experiments except one), the error in the center of mass after one year of evolution, between the exact value (with $N\to\infty$) and the approximate center of mass (with $N_F=6\times10^{23}$) for the described quantum state, is smaller than 10 $\mu$m. Certainly, in this example $\mathop{\rm Err}(t)$ grows with time due to the intrinsic expansion of a free wave packet. However, we want to emphasize that our classical intuition is based on crystalline materials where particles have an ordered structure due to their attractive interactions. Thus, classical objects (i.e., their particles) will tend to remain much more localized than in the above example. These interactions will also introduce correlations among the different particles and, in principle, the assumption that the selections of all $\vec r_i^{j_0}(t)$ are independent might not seem fully rigorous. However, one can argue that in a realistic classical system, with $N_F\simeq6\times10^{23}$ interacting particles, the selection of the first, say, $N_F/100$ particles with the procedure in \eref{marcon1} and \eref{marcon2} will be roughly independent. This is due to the selection of points in a huge (and basically empty) configuration space of $3 N_F \sim 10^{24}$ dimensions. 
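The numbers in the wave-packet example above follow from \eref{sigma} combined with the relative error of \eref{error3}; a short sketch with the constants as quoted in the text:

```python
# Spreading of a free Gaussian wave packet, Eq. (sigma), combined with
# the relative error Err/sigma ~ 9e-12 from Eq. (error3).
hbar = 1.0546e-34      # J s
m = 2.0e-26            # kg, mass of one carbon atom
sigma0 = 100e-9        # m, initial spatial dispersion
t = 3.156e7            # s, one year

sigma_t = sigma0 * (1 + (hbar * t / (2 * m * sigma0**2)) ** 2) ** 0.5
err_t = 9e-12 * sigma_t
assert 7e5 < sigma_t < 9e5      # the packet spreads to ~8 x 10^5 m
assert 6e-6 < err_t < 9e-6      # Err(t) ~ 8 micrometres
```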
Only the selection of the last particles will be influenced by the non-negligible correlations with the previous ones. \section{Wave equation for the center of mass coordinates} \label{app:centerofmass} Our aim here is to find a change of coordinates in the 1D many-particle Schr\"odinger equation, cf. 1D version of Eq.~\eref{mpscho}, with the usual definition of the center of mass, \begin{equation} \label{eq:ap_chg1} x_{\rm cm} = \frac{1}{N} \sum_{i=1}^N x_i , \end{equation} and without cross terms appearing in the Laplacian. The additional set of $N-1$ coordinates can be written as \begin{equation} \label{eq:ap_chg2} y_j = \sum_{i=1}^N \alpha^{(j)}_{i} x_i \qquad \textrm{for} \ j=2,\ldots,N , \end{equation} and the $\alpha^{(j)}_{i}$ will be fixed by the condition that cross terms do not appear in the Laplacian \begin{equation} \sum_{i=1}^N \frac{\partial^2 \psi}{\partial x_i^2} = \frac{1}{N} \frac{\partial^2 \psi}{\partial x_{\rm cm}^2} +\sum_{j=2}^N \frac{\partial^2 \psi}{\partial y_j^2} . \label{eq:laplacian} \end{equation} Substituting Eqs.~\eref{eq:ap_chg1} and \eref{eq:ap_chg2} into the l.h.s. of \eref{eq:laplacian}, one obtains \begin{equation} \fl \sum_{i=1}^N \frac{\partial^2 \psi}{\partial x_i^2} = \frac{1}{N} \frac{\partial^2 \psi}{\partial x_{\rm cm}^2} + \frac{2}{N} \sum_{k=2}^N \left[ \frac{\partial^2 \psi}{\partial x_{\rm cm} \partial y_k} \sum_{i=1}^N \alpha^{(k)}_{i} \right] + \sum_{k=2}^N \sum_{j=2}^N \left[ \frac{\partial^2 \psi}{\partial y_j \partial y_k} \sum_{i=1}^N \alpha^{(j)}_{i} \alpha^{(k)}_{i} \right] . \label{eq:whatwehave} \end{equation} Comparing this with \eref{eq:laplacian} we see that the conditions for our change of variables are \begin{equation} \fl \label{eq:base_conds} 0 = \sum_{i=1}^N \alpha^{(j)}_{i} , \quad 1 = \sum_{i=1}^N \Big(\alpha^{(j)}_{i}\Big)^2 , \quad 0 = \sum_{i=1}^N \alpha^{(j)}_{i} \alpha^{(k)}_{i} \ \textrm{for} \ j \neq k . 
\end{equation} We propose a change of variables with the following structure (using $x_1$ separately as we only need $N-1$ variables besides the center of mass): \begin{equation} \fl y_j = a x_j + b x_{\rm cm} + c x_1 = a x_j + \frac{b}{N} \sum_{i=1}^N x_i + c x_1 \quad \Rightarrow \quad \alpha^{(j)}_{k} = a \, \delta_{jk} + \frac{b}{N} + c \, \delta_{1k} \end{equation} We impose conditions \eref{eq:base_conds} in order to get the following system \begin{eqnarray} \fl 0 = \sum_{i=1}^N \alpha^{(j)}_{i} = a + b + c , \\ \fl 1 = \sum_{i=1}^N \Big(\alpha^{(j)}_{i}\Big)^2 = \left(c + \frac{b}{N}\right)^2 + \left(a + \frac{b}{N}\right)^2 + (N-2) \left(\frac{b}{N}\right)^2 , \\ \fl 0 = \sum_{i=1}^N \alpha^{(j)}_{i} \alpha^{(k)}_{i} = \left(c + \frac{b}{N}\right)^2 + \frac{2 b}{N} \left(a + \frac{b}{N}\right) + (N-3) \left(\frac{b}{N}\right)^2 . \end{eqnarray} This can be solved to yield the variable changes in Eq.~\eref{eq:var_chg} and the final many-particle Schr\"odinger equation in \eref{eq:condi}. Now, in order to see how the term $V$ in the many-particle Schr\"odinger equation \eref{eq:condi} is translated into the term $V_{\rm cm}$ in the conditional wave function \eref{eq:conditional}, we invert \eref{eq:var_chg} to obtain \begin{equation} x_1 = x_{\rm cm}-\frac{1}{\sqrt{N}} \sum_{i=2}^{N} y_i , \label{eq:var_chg_inv} \qquad x_j = x_{\rm cm} +y_j - \frac{1} {\sqrt{N} +N} \sum_{i=2}^{N} y_i. \end{equation} We can now rewrite the potential \eref{potential} as: \begin{eqnarray} \fl V(x_{\rm cm}, \vec{y}) = V_{\rm ext}\left(x_{\rm cm}-\frac{1}{\sqrt{N}} \sum_{i=2}^{N} y_i\right) + \sum_{j=2}^{N} V_{\rm ext}\left(x_{\rm cm} +y_j - \frac{1} {\sqrt{N} +N} \sum_{i=2}^{N} y_i \right)\\ + \frac{1}{2}\sum_{j=2}^{N}V_{\rm int}\left(-\frac{1}{1+\sqrt{N}} \sum_{i=2}^{N} y_i-y_j \right) + \frac{1}{2}\sum_{i=2}^N\sum_{{j=2; i\neq j}}^{N}V_{\rm int}(y_i-y_j)\nonumber \label{appoten} \end{eqnarray} The terms $V_{\rm int}$ have no dependence on $x_{\rm cm}$. 
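As an aside, the change of variables can be verified numerically. The sketch below (the matrix encoding is ours) builds the linear map from $(x_{\rm cm}, y_2, \ldots, y_N)$ to $(x_1, \ldots, x_N)$ given by \eref{eq:var_chg_inv}, inverts it, and checks that the resulting coefficients $\alpha^{(j)}_i$ satisfy the conditions \eref{eq:base_conds} and that the first row of the forward map reproduces $x_{\rm cm}$.

```python
import numpy as np

# Encode the inverse change of variables (var_chg_inv) as a matrix M,
# then invert it to recover the forward coefficients alpha^(j)_i of
# y_j = sum_i alpha^(j)_i x_i and test the conditions (base_conds).
N = 8
rN = np.sqrt(N)
M = np.zeros((N, N))                   # columns: x_cm, y_2, ..., y_N
M[:, 0] = 1.0                          # every x_i contains x_cm once
M[0, 1:] = -1.0 / rN                   # x_1 = x_cm - (1/sqrt(N)) sum y_i
M[1:, 1:] = np.eye(N - 1) - 1.0 / (rN + N)   # x_j = x_cm + y_j - sum/(sqrt(N)+N)

T = np.linalg.inv(M)                   # forward map (x_1..x_N) -> (x_cm, y_2..y_N)
assert np.allclose(T[0], 1.0 / N)      # first row is the center of mass
alpha = T[1:]
assert np.allclose(alpha.sum(axis=1), 0.0)          # sum_i alpha^(j)_i = 0
assert np.allclose(alpha @ alpha.T, np.eye(N - 1))  # orthonormal rows
```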
Therefore, when considering the conditional wave function of the center of mass with $\vec y=\vec y(t)$ in \eref{appoten}, they will just become a purely time-dependent potential. Their only effect will then be a purely time-dependent phase in the wave function, which can be neglected in the computation of the conditional equation of motion of the center of mass. Each of the other two terms $V_{\rm ext}$ in \eref{appoten} has a dependence on $x_{\rm cm}$ plus a dependence on $\sum_{i=2}^{N} y_i$. We perform a Taylor expansion around $x_{\rm cm}$ \begin{equation} \fl V_{\rm ext}(x_{\rm cm}+\Delta x) = V_{\rm ext}(x_{\rm cm}) + \left.\frac{\partial V_{\rm ext}(x)}{\partial x}\right|_{x=x_{\rm cm}} \Delta x + \left. \frac{1}{2} \frac{\partial^2 V_{\rm ext}(x)}{\partial x^2}\right|_{x=x_{\rm cm}} \Delta x^2 +\ldots . \label{taylor} \end{equation} In order to simplify the expressions, we define \begin{equation} \beta(x_{\rm cm}) = \left.\frac{\partial V_{\rm ext}(x)}{\partial x}\right|_{x=x_{\rm cm}} , \qquad \gamma(x_{\rm cm}) = \frac{1}{2} \left.\frac{\partial^2 V_{\rm ext}(x)}{\partial x^2}\right|_{x=x_{\rm cm}} . \end{equation} This allows us to rewrite the part of the potential that depends on $x_{\rm cm}$ as \begin{eqnarray} \fl V_{\rm ext}\left(x_{\rm cm}-\frac{1}{\sqrt{N}} \sum_{i=2}^{N} y_i\right) + \sum_{j=2}^{N} V_{\rm ext}\left(x_{\rm cm} +y_j - \frac{1} {\sqrt{N} +N} \sum_{i=2}^{N} y_i \right)\\ = N V_{\rm ext}(x_{\rm cm}) + \beta(x_{\rm cm}) \left( 1 - \frac{1}{\sqrt{N}} - \frac{N-1} {\sqrt{N} +N} \right) \sum_{i=2}^{N} y_i \nonumber \\ + \gamma(x_{\rm cm}) \left[ \sum_{j=2}^{N} y_j^2 + \left( \frac{1}{N} + \frac{N-1} {(\sqrt{N} +N)^2} - \frac{2} {\sqrt{N} +N} \right) \left(\sum_{j=2}^{N} y_j \right)^2 \right] +\ldots \nonumber \end{eqnarray} We see that the factor of $\beta(x_{\rm cm})$ is zero, i.e. 
$1- \frac{1}{\sqrt{N}} - \frac{N-1} {\sqrt{N} +N} = 0$, and that the coefficient of $\left(\sum_{j=2}^{N} y_j \right)^2$ in the $\gamma(x_{\rm cm})$ term also vanishes, $\frac{1}{N} + \frac{N-1} {(\sqrt{N} +N)^2} - \frac{2} {\sqrt{N} +N} = 0$, so we arrive at \begin{eqnarray} \fl V_{\rm ext}\left(x_{\rm cm}-\frac{1}{\sqrt{N}} \sum_{i=2}^{N} y_i\right) + \sum_{j=2}^{N} V_{\rm ext}\left(x_{\rm cm} +y_j - \frac{1} {\sqrt{N} +N} \sum_{i=2}^{N} y_i \right)=\nonumber\\ =N V_{\rm ext}(x_{\rm cm}) + \gamma(x_{\rm cm}) \sum_{i=2}^{N} y_i^2 + \ldots \end{eqnarray} The $\gamma(x_{\rm cm})$ in the second term and the higher orders still have, in principle, some $x_{\rm cm}$ spatial dependence. We now invoke condition 2 (see \sref{evolution}), which assumes a quadratic approximation for the (long range) external potential, with a negligible dependence of $\gamma$ on $x_{\rm cm}$. This means that $\gamma(x_{\rm cm})=\gamma$ and that the higher-order derivatives of the Taylor expansion vanish. Under such conditions, when calculating the conditional wave function of the center of mass at $\vec y(t)$, the term $\gamma \sum_{i=2}^{N}y_i^2(t)$ can be neglected as a purely time-dependent term (as happened for the previously discussed $V_{\rm int}$ terms). Therefore, we finally get the external potential of the equation of motion of the conditional wave function of the center of mass \begin{equation} \sum_{j=1}^NV_{\rm ext}(x_j) \Big|_{\vec y=\vec y(t)} = N\; V_{\rm ext}(x_{\rm cm}) \equiv V_{\rm cm}(x_{\rm cm}) . \label{appoten8} \end{equation} The same simple potential can be exactly recovered for a quadratic external potential $V_{\rm ext}(x)=\alpha+\beta x+ \gamma x^2$ with constant $\alpha$, $\beta$, and $\gamma$. Notice that our derivation above demands a more relaxed condition on $V_{\rm ext}$, as it only requires that this shape (constant $\gamma$) holds along the extension of the object in physical space. \section*{References} \end{document}
\begin{document} \title{On the complementary quantum capacity of the depolarizing channel} \date{October 2, 2018} \author{Debbie Leung} \affiliation{Institute for Quantum Computing and Department of Combinatorics and Optimization, University of Waterloo} \orcid{orcid.org/0000-0003-3750-2648} \author{John Watrous} \affiliation{Institute for Quantum Computing and School of Computer Science, University of Waterloo} \orcid{orcid.org/0000-0002-4263-9393} \maketitle \begin{abstract} The qubit depolarizing channel with noise parameter $\eta$ transmits an input qubit perfectly with probability $1-\eta$, and outputs the completely mixed state with probability $\eta$. We show that its \emph{complementary channel} has \emph{positive} quantum capacity for all $\eta>0$. Thus, we find that there exists a single-parameter family of channels having the peculiar property of retaining positive quantum capacity even as the outputs of these channels approach a fixed state independent of the input. Comparisons with other related channels, and implications for the difficulty of studying the quantum capacity of the depolarizing channel, are discussed. \end{abstract} \section{Introduction} It is a fundamental problem in quantum information theory to determine the capacity of quantum channels to transmit quantum information. The \emph{quantum capacity} of a channel is the optimal rate at which one can transmit quantum data with high fidelity through that channel when an asymptotically large number of channel uses is made available. In the classical setting, the capacity of a classical channel to transmit classical data is given by Shannon's noisy coding theorem \cite{Shannon48}. 
Although the error correcting codes that allow one to approach the capacity of a channel may involve increasingly large block lengths, the capacity expression itself is a simple, \emph{single letter formula} involving an optimization over input distributions maximizing the input/output mutual information over \emph{one} use of the channel. In the quantum setting, analyses inspired by the classical setting have been performed \cite{Lloyd97,Shor02,Devetak05}, and an expression for the quantum capacity has been found. However, the capacity expression involves an optimization similar to the classical setting not for a single channel use, but for an increasingly large number of channel uses. The optimum value for $n$ copies of the channel leads to the so-called \emph{$n$-shot coherent information} of the channel, but little is known in general about how the $n$-shot coherent information grows with $n$. (Reference \cite{DiVincenzoSS98} showed that the coherent information can be superadditive for some channels, so the one-shot coherent information does not generally provide an expression for the quantum capacity of a quantum channel.) Consequently, the quantum capacity is unknown for many quantum channels of interest. Furthermore, \cite{DiVincenzoSS98} showed that the $n$-shot coherent information of a channel can increase from zero to a positive quantity as $n$ increases, and reference \cite{Cubitt2015unbounded} showed that given any positive integer $n$, there is a channel whose $n$-shot coherent information is zero but whose quantum capacity is nevertheless positive. Moreover, no algorithm is known to determine if a quantum channel has zero or positive quantum capacity. On the other hand, some partial characterizations are known \cite{bennett1997-erasure,bruss1998-symmetric,Peres96, 2000-Horodecci-BE-channel,Smith-Smolin-2012-incapacity}. 
For several well-known families of quantum channels that can be characterized by noise parameters, the quantum capacity is proved to be zero within moderately noisy regimes, well before the channel output becomes constant and independent of the input. In this paper, we show that any complementary channel to the qubit depolarizing channel has positive quantum capacity (in fact, positive one-shot coherent information) unless the output is exactly constant. This is in sharp contrast with the superficially similar qubit depolarizing channel and erasure channel, whose capacities vanish when the analogous noise parameter is roughly half-way between the completely noiseless and noisy extremes. Prior to this work, it was not known (to our knowledge) that a family of quantum channels could retain positive quantum capacity while approaching a channel whose output is a fixed state, independent of the channel input. We hope that this example, in which the quantum capacity fails to vanish, will shed light on a better characterization of when a channel has no quantum capacity. Another consequence of our result concerns the quantum capacity of low-noise depolarizing channels. Watanabe \cite{Watanabe2012} showed that if a given channel's complementary channels have no quantum capacity, then the original channel must have quantum capacity equal to its private classical capacity. Furthermore, if the complementary channels have no classical private capacity, then the quantum and private capacities are given by the one-shot coherent information. Our result shows that Watanabe's results cannot be applied to the qubit depolarizing channel. Very recently, \cite{LeditzkyLS17} established tight upper bounds on the difference between the one-shot coherent information and the quantum and private capacities of a quantum channel, although whether or not Watanabe's conclusion holds exactly for the depolarizing channel remains open. 
In the remainder of the paper, we review background information concerning quantum channels, quantum capacities, and relevant results on a few commonly studied families of channels, and then prove our main results. \section{Preliminaries} Given a sender (Alice) and a receiver (Bob), one typically models quantum communication from Alice to Bob as being sent through a quantum channel $\Phi$. We will associate the input and output systems with finite-dimensional complex Hilbert spaces $\mathcal{A}$ and $\mathcal{B}$, respectively. In general, we write $\mathcal{L}(\mathcal{X},\mathcal{Y})$ to denote the space of linear operators from $\mathcal{X}$ to $\mathcal{Y}$, for finite-dimensional complex Hilbert spaces $\mathcal{X}$ and $\mathcal{Y}$, and we write $\mathcal{L}(\mathcal{X})$ to denote $\mathcal{L}(\mathcal{X},\mathcal{X})$. For two operators $X,Y \in \mathcal{L}(\mathcal{X})$, we use $\ip{X}{Y}$ to denote the Hilbert-Schmidt inner product $\op{Tr}(X^{\ast}Y)$, where $X^{\ast}$ denotes the adjoint of $X$. We also write $\mathcal{D}(\mathcal{X})$ to denote the set of positive semidefinite, trace one operators (i.e., density operators) acting on $\mathcal{X}$. A quantum channel $\Phi$ from Alice to Bob is a completely positive, trace-preserving linear map of the form \begin{equation} \Phi:\mathcal{L}(\mathcal{A})\rightarrow\mathcal{L}(\mathcal{B}) \,. \end{equation} There exist several well-known characterizations of quantum channels. 
The first one we need is given by the Stinespring representation, in which a channel $\Phi$ is described as \begin{equation} \label{eq:Stinespring-Phi} \Phi(\rho) = \op{Tr}_{\mathcal{E}} (A \rho A^{\ast}), \end{equation} where $\mathcal{E}$ is a finite-dimensional complex Hilbert space representing an ``environment'' system, $A\in\mathcal{L}(\mathcal{A},\mathcal{B}\otimes\mathcal{E})$ is an isometry (i.e., a linear operator satisfying $A^{\ast} A = \mathds{1}$), and $\op{Tr}_{\mathcal{E}}:\mathcal{L}(\mathcal{B}\otimes\mathcal{E})\rightarrow\mathcal{L}(\mathcal{B})$ denotes the partial trace over the space $\mathcal{E}$. In this context, the isometry $A$ is sometimes known as an isometric extension of $\Phi$, and is uniquely determined up to left multiplication by an isometry acting on $\mathcal{E}$. For a channel $\Phi$ with a Stinespring representation \eqref{eq:Stinespring-Phi}, the channel $\Psi$ of the form $\Psi:\mathcal{L}(\mathcal{A})\rightarrow\mathcal{L}(\mathcal{E})$ that is given by \begin{equation} \Psi(\rho) = \op{Tr}_{\mathcal{B}} (A \rho A^{\ast}) \end{equation} is called a \emph{complementary channel} to $\Phi$. Owing to this degree of freedom in the Stinespring representation, a complementary channel of $\Phi$ is uniquely determined up to an isometry on the final output. A channel $\Psi$ that is complementary to $\Phi$ may be viewed as representing information that leaks to the environment when $\Phi$ is performed. The second type of representation we need is a Kraus representation \begin{equation} \Phi(\rho) = \sum_{k = 1}^N A_k \rho A_k^{\ast} \,, \end{equation} where the operators $A_1,\ldots,A_N \in\mathcal{L}(\mathcal{A},\mathcal{B})$ (called Kraus operators) satisfy \begin{equation} \sum_{k = 1}^N A_k^{\ast} A_k = \mathds{1} \,. 
\end{equation} The \emph{coherent information} of a state $\rho\in\mathcal{D}(\mathcal{A})$ through a channel $\Phi:\mathcal{L}(\mathcal{A})\rightarrow\mathcal{L}(\mathcal{B})$ is defined as \begin{equation} \operatorname{I}_{\textup{\tiny C}}(\rho;\Phi) = \operatorname{H}(\Phi(\rho)) - \operatorname{H}(\Psi(\rho)) \,, \end{equation} for any channel $\Psi$ complementary to $\Phi$, where $\operatorname{H}(\sigma) = - \op{Tr}(\sigma \log \sigma)$ denotes the von~Neumann entropy of a density operator $\sigma$. Note that the coherent information is independent of the choice of the complementary channel $\Psi$. The coherent information of $\Phi$ is given by the maximum over all inputs \begin{equation} \operatorname{I}_{\textup{\tiny C}}(\Phi) = \max_{\rho \in \mathcal{D}(\mathcal{A})} \operatorname{I}_{\textup{\tiny C}}(\rho;\Phi) \,. \end{equation} The $n$-shot coherent information of $\Phi$ is $\operatorname{I}_{\textup{\tiny C}}(\Phi^{\otimes n})$. The \emph{quantum capacity theorem} \cite{Lloyd97,Shor02,Devetak05} states that the quantum capacity of $\Phi$ is given by the expression \begin{equation} \operatorname{Q}(\Phi) = \lim_{n\rightarrow\infty} \frac{\operatorname{I}_{\textup{\tiny C}}(\Phi^{\otimes n})}{n} \,. \label{eq:qcap} \end{equation} The $n$-shot coherent information $\operatorname{I}_{\textup{\tiny C}}(\Phi^{\otimes n})$ of a channel $\Phi$ is trivially lower-bounded by $n$ times the coherent information $\operatorname{I}_{\textup{\tiny C}}(\Phi)$, and therefore the coherent information $\operatorname{I}_{\textup{\tiny C}}(\Phi)$ provides a lower-bound on the quantum capacity of $\Phi$. The \emph{qubit depolarizing channel} with noise parameter $\eta$, denoted by $\Phi_\eta$, takes a qubit state $\rho \in \mathcal{D}(\mathbb{C}^2)$ to itself with probability $1-\eta$, and replaces it with a random output with probability $\eta$: \begin{equation} \Phi_{\eta}(\rho) = (1 - \eta) \, \rho + \eta \, \frac{\mathds{1}}{2} \,. 
\end{equation} One Kraus representation of $\Phi_{\eta}$ is \begin{equation} \Phi_{\eta}(\rho) = (1 - \varepsilon) \, \rho + \frac{\varepsilon}{3} \bigl(\sigma_1 \, \rho \, \sigma_1 + \sigma_2 \, \rho \, \sigma_2 + \sigma_3 \, \rho \, \sigma_3 \bigr), \end{equation} where $\varepsilon = 3\eta/4$, and \begin{equation} \sigma_1 = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix},\quad \sigma_2 = \begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix}, \quad\text{and}\quad \sigma_3 = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} \end{equation} denote the Pauli operators. A Stinespring representation of $\Phi_{\eta}$ that corresponds naturally to this Kraus representation is \begin{equation} \Phi_\eta(\rho) = \op{Tr}_{\mathcal{E}} \bigl(A_{\varepsilon} \rho A_{\varepsilon}^{\ast}\bigr) \end{equation} for the isometric extension \begin{equation} A_{\varepsilon} = \sqrt{1 - \varepsilon}\, \mathds{1} \otimes \ket{0} + \sqrt{\frac{\varepsilon}{3}} \bigl( \sigma_1 \otimes \ket{1} + \sigma_2 \otimes \ket{2} + \sigma_3 \otimes \ket{3}\bigr). 
\end{equation} The complementary channel $\Psi_{\eta}$ to $\Phi_{\eta}$ determined by this Stinespring representation is given by \begin{equation} \Psi_{\eta}(\rho) = \begin{pmatrix} 1- \varepsilon & \sqrt{\frac{\varepsilon(1-\varepsilon)}{3}} \ip{\sigma_1}{\rho} & \sqrt{\frac{\varepsilon(1-\varepsilon)}{3}} \ip{\sigma_2}{\rho} & \sqrt{\frac{\varepsilon(1-\varepsilon)}{3}} \ip{\sigma_3}{\rho} \\[2mm] \sqrt{\frac{\varepsilon(1-\varepsilon)}{3}} \ip{\sigma_1}{\rho} & \frac{\varepsilon}{3} & - \frac{i \varepsilon}{3} \ip{\sigma_3}{\rho} & \frac{i \varepsilon}{3} \ip{\sigma_2}{\rho} \\[2mm] \sqrt{\frac{\varepsilon(1-\varepsilon)}{3}} \ip{\sigma_2}{\rho} & \frac{i \varepsilon}{3} \ip{\sigma_3}{\rho} & \frac{\varepsilon}{3} & - \frac{i \varepsilon}{3} \ip{\sigma_1}{\rho} \\[2mm] \sqrt{\frac{\varepsilon(1-\varepsilon)}{3}} \ip{\sigma_3}{\rho} & - \frac{i \varepsilon}{3} \ip{\sigma_2}{\rho} & \frac{i \varepsilon}{3} \ip{\sigma_1}{\rho} & \frac{\varepsilon}{3} \end{pmatrix}\!. \label{eq:epolarizing} \end{equation} We call this complementary channel the \emph{epolarizing channel}. Note that when $\eta \approx 0$, the channel $\Phi_\eta$ is nearly noiseless, while $\Psi_\eta$ is very noisy, and the opposite holds when $\eta \approx 1$. We will use the expressions above to calculate a lower-bound on the coherent information $\operatorname{I}_{\textup{\tiny C}}(\Psi_{\eta})$, which provides a lower-bound on the quantum capacity of the epolarizing channel $\Psi_\eta$. \section{Main result} \begin{theorem} \label{theorem:positive-complementary-coherent-information} Let $\Phi_{\eta}$ be the qubit depolarizing channel with noise parameter $\eta \in [0,1]$. Any complementary channel to $\Phi_{\eta}$ has positive coherent information when $\eta > 0$. \end{theorem} \begin{proof} The coherent information is independent of the choice of the complementary channel, so it suffices to focus on the choice $\Psi_{\eta}$ described in \eqref{eq:epolarizing}. 
Taking \begin{equation} \label{eq:input} \rho = \begin{pmatrix} 1 - \delta & 0\\ 0 & \delta \end{pmatrix} \end{equation} yields $\ip{\sigma_1}{\rho} = 0$, $\ip{\sigma_2}{\rho} = 0$, and $\ip{\sigma_3}{\rho} = 1 - 2\delta$, and therefore \begin{equation} \Psi_{\eta}(\rho) = \begin{pmatrix} (1- \varepsilon) & 0 & 0 & \sqrt{\frac{\varepsilon(1-\varepsilon)}{3}} (1 - 2\delta) \\[2mm] 0 & \frac{\varepsilon}{3} & -\frac{i \varepsilon}{3} (1 - 2\delta) & 0 \\[2mm] 0 & \frac{i \varepsilon}{3} (1 - 2\delta) & \frac{\varepsilon}{3} & 0 \\[2mm] \sqrt{\frac{\varepsilon(1-\varepsilon)}{3}} (1 - 2\delta) & 0 & 0 & \frac{\varepsilon}{3} \end{pmatrix}. \end{equation} A closed-form expression for the entropy of $\Psi_{\eta}(\rho)$ is not difficult to obtain; however for our purpose it suffices to lower bound $H(\Psi_{\eta}(\rho))$ with the following simple argument. Define the state \begin{equation} \xi = \begin{pmatrix} (1-\varepsilon) & 0 & 0 & \sqrt{\frac{\varepsilon(1-\varepsilon)}{3}}\\[2mm] 0 & \frac{\varepsilon}{3} & -\frac{i\varepsilon}{3} (1-2\delta) & 0 \\[2mm] 0 & \frac{i\varepsilon}{3} (1-2\delta) & \frac{\varepsilon}{3} & 0 \\[2mm] \sqrt{\frac{\varepsilon(1-\varepsilon)}{3}} & 0 & 0 & \frac{\varepsilon}{3} \end{pmatrix}\,, \end{equation} and note that \begin{equation} \Psi_{\eta}(\rho) = (1-\delta) \, \xi + \delta \, U \xi U^{\ast} \end{equation} where $U$ is diagonal with diagonal entries $(1,1,1,-1)$. As the von~Neumann entropy is concave and invariant under unitary conjugations, it follows that $H(\Psi_{\eta}(\rho)) \geq H(\xi)$. Finally, $\xi$ has eigenvalues \begin{equation} \biggl\{ 1-\frac{2\varepsilon}{3},0,\frac{2\varepsilon(1-\delta)}{3}, \frac{2\varepsilon\delta}{3}\biggr\} = \biggl\{ 1-\frac{\eta}{2},0,\frac{\eta(1-\delta)}{2}, \frac{\eta\delta}{2}\biggr\} \end{equation} and entropy \begin{equation} \operatorname{H}(\xi) = \frac{\eta}{2} \operatorname{H}_2(\delta) + \operatorname{H}_2\Bigl(\frac{\eta}{2}\Bigr). 
\label{eq:eveentropy} \end{equation} On the other hand, \begin{equation} \Phi_{\eta}(\rho) = \begin{pmatrix} (1 - \eta)(1 - \delta) + \frac{\eta}{2} & 0\\ 0 & (1 - \eta) \, \delta + \frac{\eta}{2} \end{pmatrix}, \end{equation} and therefore \begin{equation} \operatorname{H}\bigl(\Phi_{\eta}(\rho)\bigr) = \operatorname{H}_2\Bigl((1 - \eta)\,\delta + \frac{\eta}{2}\Bigr). \label{eq:bobentropy} \end{equation} By the mean value theorem, one has \begin{equation} \operatorname{H}_2\Bigl((1 - \eta)\,\delta + \frac{\eta}{2}\Bigr) - \operatorname{H}_2\Bigl(\frac{\eta}{2}\Bigr) = (1 - \eta)\,\delta \, \bigl(\log(1-\mu) -\log(\mu) \bigr) \end{equation} for some choice of $\mu$ satisfying $\eta/2 \leq \mu \leq (1 - \eta)\,\delta + \eta/2$, and therefore \begin{equation} \operatorname{H}\bigl(\Phi_{\eta}(\rho)\bigr) \leq \operatorname{H}_2\Bigl(\frac{\eta}{2}\Bigr) + (1 - \eta)\,\delta \log\Bigl(\frac{2}{\eta}\Bigr). \end{equation} Therefore, the coherent information of $\rho$ through $\Psi_{\eta}$ is lower-bounded as follows: \begin{equation} \begin{multlined} \operatorname{I}_{\textup{\tiny C}}(\rho;\Psi_\eta) = \operatorname{H}(\Psi_\eta(\rho)) - \operatorname{H}(\Phi_\eta(\rho))\\ \geq \operatorname{H}(\xi) - \operatorname{H}(\Phi_\eta(\rho)) \geq \frac{\eta}{2}\operatorname{H}_2(\delta) - (1{-}\eta)\,\delta\log\Bigl(\frac{2}{\eta}\Bigr). \end{multlined} \end{equation} We solve the inequality where the rightmost expression is strictly positive. The values of $\delta$ for which strict positivity holds includes the interval \begin{equation} 0 < \delta \leq 2^{-\frac{2(1 - \eta)}{\eta}\log\bigl(\frac{2}{\eta}\bigr)}, \end{equation} which completes the proof. \end{proof} Note that one can obtain a closed-form expression of $\operatorname{I}_{\textup{\tiny C}}(\rho;\Psi_\eta)$ for $\rho$ given by \eqref{eq:input}. Furthermore, this input is optimal due to the symmetry of $\Psi_\eta$. 
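As a numerical illustration of the proof, the sketch below (the sample values $\eta=1/2$ and $\delta=0.05$ are our choice, not from the text) builds $\Psi_\eta(\rho)$ from \eqref{eq:epolarizing} for the input \eqref{eq:input} and checks that the exact coherent information dominates the analytic lower bound.

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy (base 2) of a density matrix."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-15]
    return float(-(lam * np.log2(lam)).sum())

def H2(p):
    return entropy(np.diag([p, 1 - p]))   # binary entropy

eta, delta = 0.5, 0.05                    # sample noise and input parameters
eps, z = 3 * eta / 4, 1 - 2 * delta       # z = <sigma_3, rho>
s = np.sqrt(eps * (1 - eps) / 3)

psi_out = np.array([                      # Psi_eta(rho), Eq. (epolarizing)
    [1 - eps, 0,              0,               s * z],
    [0,       eps / 3,        -1j * eps/3 * z, 0],
    [0,       1j * eps/3 * z, eps / 3,         0],
    [s * z,   0,              0,               eps / 3]])
phi_out = np.diag([(1 - eta) * (1 - delta) + eta / 2,
                   (1 - eta) * delta + eta / 2])

ic = entropy(psi_out) - entropy(phi_out)  # I_C(rho; Psi_eta)
bound = eta / 2 * H2(delta) - (1 - eta) * delta * np.log2(2 / eta)
assert ic >= bound > 0                    # positive coherent information
```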
Therefore, the actual coherent information of $\Psi_\eta$ can be obtained by optimizing $\operatorname{I}_{\textup{\tiny C}}(\rho;\Psi_\eta)$ over $\delta$. This method does not extend to the calculation of the $n$-shot coherent information, nor the asymptotic quantum capacity of $\Psi_\eta$. \section{Comparisons with some well-known families of channels} The \emph{qubit erasure channel} with noise parameter $\eta \in [0,1]$, denoted by $\Xi_\eta$, takes a single qubit state $\rho \in \mathcal{D}(\mathbb{C}^2)$ to itself with probability $1-\eta$, and replaces it by an error symbol orthogonal to the input space with probability $\eta$. The quantum capacity of the erasure channel is known and is given by $\operatorname{Q}(\Xi_\eta) = \max(0,1-2\eta)$ \cite{bennett1997-erasure}. We can relate the depolarizing channel, the erasure channel, and the epolarizing channel as follows. Let each of $\mathcal{A}, \mathcal{S}_1, \mathcal{S}_2, \mathcal{G}_1, \mathcal{G}_2$ denote a qubit system. Consider an isometry \begin{equation} A \in \mathcal{L}(\mathcal{A}, \mathcal{S}_1 \otimes \mathcal{S}_2 \otimes \mathcal{G}_1 \otimes \mathcal{G}_2 \otimes \mathcal{A}) \end{equation} acting on a pure qubit state $\ket{\psi} \in \mathcal{A}$ as \begin{equation} \ket{\psi}_{\mathcal{A}} \mapsto \left[ \ket{0}\bra{0}_{\mathcal{S}_1} \otimes \mathds{1}_{\mathcal{A} \mathcal{G}_1} + \ket{1}\bra{1}_{\mathcal{S}_1} \otimes \textsc{swap}_{\mathcal{A} \mathcal{G}_1} \right] \ket{s}_{\mathcal{S}_1 \mathcal{S}_2} \ket{\Phi}_{\mathcal{G}_1 \mathcal{G}_2} \ket{\psi}_{\mathcal{A}}, \end{equation} where $\ket{s} = \sqrt{1-\eta} \, \ket{00} + \sqrt{\eta} \, \ket{11}$ and $\ket{\Phi} = \frac{1}{\sqrt{2}} ( \ket{00} + \ket{11})$, and where the subscripts denote the pertinent systems. The isometry can be interpreted as follows. 
System $\mathcal{A}$ (the input space) initially contains the input state $\ket{\psi}_{\mathcal{A}}$, while a system $\mathcal{G}_1$ (which represents a ``garbage'' space) is initialized to a completely mixed state. The input is swapped with the garbage if and only if a measurement of the $\mathcal{S}_1$ system (which represents a ``syndrome'') causes the state $\ket{s}$ of $\mathcal{S}_1\mathcal{S}_2$ to collapse to $\ket{11}$. Finally, each of the depolarizing, erasure, and epolarizing channels can be generated by discarding a subset of the systems as follows: \begin{equation} \label{eq:all3} \begin{aligned} \Phi_\eta(\rho) & = \op{Tr}_{\mathcal{S}_1 \otimes \mathcal{S}_2 \otimes \mathcal{G}_1 \otimes \mathcal{G}_2} (A \rho A^{\ast})\,,\\ \Xi'_\eta(\rho) & = \op{Tr}_{\mathcal{S}_2 \otimes \mathcal{G}_1 \otimes \mathcal{G}_2} (A \rho A^{\ast})\,,\\ \Psi'_\eta(\rho) & = \op{Tr}_{\mathcal{A}} (A \rho A^{\ast}) \,. \end{aligned} \end{equation} To be more precise, the channel $\Xi'_\eta$ in \eqref{eq:all3} is related to the channel $\Xi_\eta$ described earlier by an isometry---for all relevant purposes, $\Xi'_\eta$ and $\Xi_\eta$ are equivalent. Likewise, $\Psi'_\eta$ is equivalent to $\Psi_\eta$ in \eqref{eq:epolarizing}. If we ignore the precise value of $\eta$, the systems $\mathcal{A}$ and $\mathcal{G}_1$ carry qualitatively similar information. Furthermore, the additional garbage system $\mathcal{G}_2$ is irrelevant. So, the three families of channels are distinguished by which syndrome systems are available in the output: none for the depolarizing channel, both for the epolarizing channel, and one for the erasure channel.
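The first identity in \eqref{eq:all3} can be checked directly. The NumPy sketch below (an illustration we supply, not code from the paper) builds the isometry $A$ on the five qubits $\mathcal{S}_1,\mathcal{S}_2,\mathcal{G}_1,\mathcal{G}_2,\mathcal{A}$ and verifies that discarding all syndrome and garbage systems yields $(1-\eta)\rho + \eta\,\mathds{1}/2$, in agreement with the matrix for $\Phi_\eta(\rho)$ displayed earlier:

```python
import numpy as np

def isometry(eta):
    """A: |psi>_A -> (controlled-SWAP) |s>_{S1 S2} |Phi>_{G1 G2} |psi>_A,
    with qubit (axis) ordering S1, S2, G1, G2, A."""
    s = np.zeros(4)
    s[0], s[3] = np.sqrt(1 - eta), np.sqrt(eta)        # |s> on S1 S2
    phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # |Phi> on G1 G2
    cols = []
    for psi in np.eye(2):
        state = np.kron(np.kron(s, phi), psi).reshape([2] * 5)
        out = state.copy()
        # If S1 = |1>, swap G1 and A; inside the S1 = 1 slice these
        # systems sit at axes 1 and 3.
        out[1] = np.swapaxes(state[1], 1, 3)
        cols.append(out.reshape(-1))
    return np.column_stack(cols)  # 32 x 2 matrix with A† A = identity

def keep_A(eta, rho):
    """Phi_eta(rho) = Tr_{S1 S2 G1 G2}(A rho A†): trace out qubits 0-3."""
    A = isometry(eta)
    big = (A @ rho @ A.conj().T).reshape([2] * 10)
    return np.einsum('abcdeabcdf->ef', big)

rho = np.array([[0.6, 0.1 + 0.2j], [0.1 - 0.2j, 0.4]])
for eta in (0.0, 0.3, 1.0):
    assert np.allclose(keep_A(eta, rho), (1 - eta) * rho + eta * np.eye(2) / 2)
```

The check makes the interpretation above concrete: on the $\ket{00}$ syndrome branch the input passes through untouched, while on the $\ket{11}$ branch the input is replaced by $\mathcal{G}_1$'s half of a maximally entangled pair, i.e. the completely mixed state.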
These different possibilities cause significant differences in the noise parameter ranges for which the quantum capacity vanishes \cite{bennett1997-erasure,DiVincenzoSS98}: \begin{equation} \begin{aligned} \operatorname{Q}(\Phi_\eta) = 0 & \quad \text{if} \;\; 1/3 \leq \eta \leq 1 \,,\\ \operatorname{Q}(\Xi_\eta) = 0 & \quad \text{iff} \;\; 1/2 \leq \eta \leq 1 \,,\\ \operatorname{Q}(\Psi_\eta) = 0 & \quad \text{iff} \;\; \eta = 0 \,. \end{aligned} \end{equation} In particular, when $\eta \approx 0$, the syndrome state carries very little information and only interacts weakly with the input---and yet having all shares of it in the output keeps the quantum capacity of the epolarizing channel positive. The syndrome systems therefore carry qualitatively significant information that is quantitatively negligible. Despite recent results in \cite{LeditzkyLS17}, the extent to which this phenomenon is relevant to an understanding of the capacity of the depolarizing channel is a topic for further research. We also note that the qubit amplitude damping channel (see \cite{NC00}) has vanishing quantum capacity if and only if the noise parameter satisfies $1/2 \leq \eta \leq 1$, which is similar to the erasure channel (while the output only approaches a constant as $\eta \rightarrow 1$). The dephasing channel (see below) does not take the input to a constant for all noise parameters. \section{Extension to other channels} A mixed Pauli channel on one qubit can be described by a Kraus representation \begin{equation} \Theta(\rho) = (1-p_1-p_2-p_3) \; \rho + p_1 \, \sigma_1 \, \rho \, \sigma_1 + p_2 \, \sigma_2 \, \rho \, \sigma_2 + p_3 \, \sigma_3 \, \rho \, \sigma_3 \,, \end{equation} for $p_1,p_2,p_3\geq 0$ satisfying $p_1 + p_2 + p_3\leq 1$. For example, a \emph{dephasing channel} can be described in this way by taking $p_1 = p_2 = 0$ and $p_3 \in [0,1]$. In this case the quantum capacity is known to equal $1-H_2(p_3)$, which is positive except when $p_3 = 1/2$.
Any complementary channel of such a dephasing channel must have zero quantum capacity. If at least $3$ of the $4$ probabilities $(1-p_1-p_2-p_3),p_1, p_2, p_3$ are positive, a generalization of our main result demonstrates that a complementary channel of $\Theta$ has positive coherent information, as is proved below, so the phenomenon exhibited by the depolarizing channel is not an isolated instance. It is an interesting open problem to determine which mixed unitary channels in higher dimensions, meaning those channels having a Kraus representation in which every Kraus operator is a positive scalar multiple of a unitary operator, have complementary channels with positive capacity. (It follows from the work of \cite{CubittRS08} that every mixed unitary channel with commuting Kraus operators is degradable, and therefore must have zero complementary capacity.) \begin{theorem} \label{theorem:mixed-pauli} Consider the mixed Pauli channel on one qubit described by \begin{equation} \Theta(\rho) = p_0 \; \rho + p_1 \, \sigma_1 \, \rho \, \sigma_1 + p_2 \, \sigma_2 \, \rho \, \sigma_2 + p_3 \, \sigma_3 \, \rho \, \sigma_3 \,, \end{equation} where $p_0,p_1,p_2,p_3\geq 0$, $p_0+p_1+p_2+p_3=1$. If three or more of these probabilities are nonzero, then any complementary channel to $\Theta$ has positive coherent information. \end{theorem} \begin{proof}[Proof of Theorem~\ref{theorem:mixed-pauli}] The proof is similar to that of Theorem~\ref{theorem:positive-complementary-coherent-information}. We can assume without loss of generality that $p_0 \geq p_1 \geq p_2 \geq p_3$, by redefining the basis of the output space if necessary. A convenient choice of the isometric extension is \begin{equation} A = \sum_{i=0}^3 \sqrt{p_i} \, \sigma_i \otimes \ket{i} \,, \end{equation} where $\sigma_0 = \mathds{1}$. This gives a complementary channel $\Theta^c$ acting as \begin{equation} \Theta^c(\rho) = \begin{pmatrix} p_0\!\! & \sqrt{p_0 p_1} \, \ip{\sigma_1}{\rho}\!\!
& \sqrt{p_0 p_2} \, \ip{\sigma_2}{\rho}\!\! & \sqrt{p_0 p_3} \, \ip{\sigma_3}{\rho}\!\! \\[2mm] \sqrt{p_0 p_1} \, \ip{\sigma_1}{\rho} & p_1\!\! & -{i} \sqrt{p_1 p_2} \, \ip{\sigma_3}{\rho}\!\! & i \sqrt{p_1 p_3} \, \ip{\sigma_2}{\rho} \\[2mm] \sqrt{p_0 p_2} \, \ip{\sigma_2}{\rho}\!\! & i \sqrt{p_1 p_2} \, \ip{\sigma_3}{\rho}\!\! & p_2\!\! & -{i} \sqrt{p_2 p_3} \, \ip{\sigma_1}{\rho}\! \\[2mm] \sqrt{p_0 p_3} \, \ip{\sigma_3}{\rho}\!\! & -{i} \sqrt{p_1 p_3} \, \ip{\sigma_2}{\rho}\!\! & i \sqrt{p_2 p_3} \, \ip{\sigma_1}{\rho}\!\! & p_3\!\! \end{pmatrix}. \end{equation} We choose the following parametrization to simplify the analysis. Let $p_1 = p > 0$, $p_2 = \alpha p$ where $0 < \alpha \leq 1$, and $\eta' = 2(1+\alpha)p$. We will see that the parameter $\eta'$ enters the current proof in a way that is similar to the noise parameter $\eta$ for the depolarizing channel in the proof of Theorem~\ref{theorem:positive-complementary-coherent-information}. Once again, we take \begin{equation} \rho = \begin{pmatrix} 1 - \delta & 0\\ 0 & \delta \end{pmatrix} \end{equation} so $\ip{\sigma_1}{\rho} = 0$, $\ip{\sigma_2}{\rho} = 0$, and $\ip{\sigma_3}{\rho} = 1 - 2\delta$, and therefore \begin{equation} \Theta^c(\rho) = \begin{pmatrix} p_0 & 0 & 0 & \sqrt{p_0 p_3} (1 - 2\delta) \\[2mm] 0 & p_1 & -{i} \sqrt{p_1 p_2} (1 - 2\delta) & 0 \\[2mm] 0 & i \sqrt{p_1 p_2} (1 - 2\delta) & p_2 & 0 \\[2mm] \sqrt{p_0 p_3} (1 - 2\delta) & 0 & 0 & p_3 \end{pmatrix}. \end{equation} The entropy of $\Theta^c(\rho)$ is at least the entropy of the state \begin{equation} \xi' = \begin{pmatrix} p_0 & 0 & 0 & \sqrt{p_0 p_3} \\[2mm] 0 & p_1 & -i\sqrt{p_1 p_2} (1 - 2\delta) & 0 \\[2mm] 0 & i\sqrt{p_1 p_2} (1 - 2\delta) & p_2 & 0 \\[2mm] \sqrt{p_0 p_3} & 0 & 0 & p_3 \end{pmatrix}. \end{equation} The submatrix at the four corners gives rise to the eigenvalues $\{p_0 + p_3,0\} = \{1-\frac{\eta'}{2},0\}$ as in the proof of Theorem~\ref{theorem:positive-complementary-coherent-information}. 
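Before analyzing the middle block, both the matrix for $\Theta^c(\rho)$ and the corner-block spectrum can be reproduced numerically from the isometric extension $A = \sum_i \sqrt{p_i}\,\sigma_i\otimes\ket{i}$. The sketch below is our own illustration (tracing out the channel output and keeping the environment):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [s0, s1, s2, s3]

def theta_c(p, rho):
    """Theta^c(rho) = Tr_out(A rho A†) for A = sum_i sqrt(p_i) sigma_i ⊗ |i>."""
    A = sum(np.sqrt(p[i]) * np.kron(paulis[i], np.eye(4)[:, i:i + 1])
            for i in range(4))                    # maps C^2 -> C^2 ⊗ C^4
    big = (A @ rho @ A.conj().T).reshape(2, 4, 2, 4)
    return np.einsum('aiaj->ij', big)             # trace out the output qubit

p = [0.4, 0.3, 0.2, 0.1]                          # p0 >= p1 >= p2 >= p3
delta = 0.2
rho = np.diag([1 - delta, delta]).astype(complex)
M = theta_c(p, rho)
z = 1 - 2 * delta
expected = np.array(
    [[p[0], 0, 0, np.sqrt(p[0] * p[3]) * z],
     [0, p[1], -1j * np.sqrt(p[1] * p[2]) * z, 0],
     [0, 1j * np.sqrt(p[1] * p[2]) * z, p[2], 0],
     [np.sqrt(p[0] * p[3]) * z, 0, 0, p[3]]])
assert np.allclose(M, expected)

# The delta-independent corner block of xi' has spectrum {p0 + p3, 0}.
corner = np.array([[p[0], np.sqrt(p[0] * p[3])],
                   [np.sqrt(p[0] * p[3]), p[3]]])
assert np.allclose(sorted(np.linalg.eigvalsh(corner)), [0, p[0] + p[3]])
```

The entry-wise agreement reflects the identity $(\Theta^c(\rho))_{ij} = \sqrt{p_i p_j}\operatorname{Tr}(\sigma_j\sigma_i\rho)$, which produces exactly the factors $\ip{\sigma_k}{\rho}$ in the displayed matrix.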
Meanwhile, the middle block can be rewritten as \begin{equation} \frac{\eta'}{2} \begin{pmatrix} \frac{1}{1+\alpha} & \frac{i\sqrt{\alpha}}{1+\alpha}(1-2\delta) \\[2mm] \frac{-i\sqrt{\alpha}}{1+\alpha}(1-2\delta) & \frac{\alpha}{1+\alpha} \end{pmatrix} = \frac{\eta'}{2} \begin{pmatrix} \frac{1}{2} + \frac{\cos (2\theta)}{2} & \frac{i \sin (2 \theta)}{2} (1-2\delta) \\[2mm] \frac{-i \sin (2 \theta)}{2} (1-2\delta) & \frac{1}{2} - \frac{\cos (2\theta)}{2} \end{pmatrix} \, , \label{eq:midblock} \end{equation} where \begin{align} \frac{1}{1+\alpha} = \cos^2(\theta) = \frac{1}{2} + \frac{\cos(2\theta)}{2}\,,\\ \frac{\alpha}{1+\alpha} = \sin^2 (\theta) = \frac{1}{2} - \frac{\cos (2\theta)}{2} \,,\\ \frac{\sqrt{\alpha}}{1+\alpha} = \sin(\theta)\cos(\theta) = \frac{\sin (2 \theta)}{2} \,, \end{align} and $0< \theta \leq \frac{\pi}{2}$. From equation \eqref{eq:midblock}, the eigenvalues of the middle block can be evaluated as \begin{equation} \frac{\eta'}{2}\biggl\{\frac{1+r}{2},\frac{1-r}{2}\biggr\} \end{equation} where \begin{equation} r^2 = \cos^2 (2 \theta) + (1-2 \delta)^2 \sin^2(2 \theta) = 1-4\delta \sin^2 (2 \theta) + 4 \delta^2 \sin^2 (2 \theta) \,. \end{equation} If we define the variable $\delta'$ to satisfy the equation \begin{equation} \delta (1-\delta) \sin^2 (2 \theta) = \delta' (1-\delta'), \end{equation} then $r = 1-2\delta'$ and the two eigenvalues are \begin{equation} \biggl\{\frac{\eta'(1-\delta')}{2},\frac{\eta'\delta'}{2}\biggr\}. \end{equation} Altogether, the spectrum of $\xi'$ is \begin{equation} \biggl\{ 1-\frac{\eta'}{2},0,\frac{\eta'(1-\delta')}{2}, \frac{\eta'\delta'}{2}\biggr\}, \end{equation} which has the same form as the spectrum of $\xi$ in the proof of Theorem~\ref{theorem:positive-complementary-coherent-information}, and the entropy of $\xi'$ is analogous to \eqref{eq:eveentropy}, \begin{equation} \operatorname{H}(\xi') = \frac{\eta'}{2} \operatorname{H}_2(\delta') + \operatorname{H}_2\biggl(\frac{\eta'}{2}\biggr). 
\end{equation} On the other hand, $\Theta(\rho)$ has exactly the same expression as $\Phi_{\eta'}(\rho)$ and the entropy of $\Theta(\rho)$ is analogous to \eqref{eq:bobentropy}, \begin{equation} \operatorname{H}\bigl(\Theta(\rho)) = \operatorname{H}_2\biggl((1 - \eta')\,\delta + \frac{\eta'}{2}\biggr). \end{equation} Following arguments similar to the proof of Theorem~\ref{theorem:positive-complementary-coherent-information}, the coherent information of $\rho$ through $\Theta^c$ is lower-bounded as follows: \begin{equation} \operatorname{I}_{\textup{\tiny C}}(\rho;\Theta^c) = \operatorname{H}(\Theta^c(\rho)) - \operatorname{H}(\Theta(\rho)) \; \geq \; \operatorname{H}(\xi') - \operatorname{H}(\Theta(\rho)) \; \geq \; \frac{\eta'}{2}\operatorname{H}_2(\delta') - (1{-}\eta')\,\delta\log\Bigl(\frac{2}{\eta'}\Bigr). \end{equation} The first term depends on $\delta'$ while the second depends on $\delta$. However, \begin{equation} \delta (1-\delta) \sin^2 (2 \theta) = \delta' (1-\delta'), \end{equation} and $\sin^2 (2 \theta)$ is a positive constant determined by $\alpha = p_2/p_1$, so $\delta'$ tends to zero with $\delta$, and for sufficiently small $\delta$ the lower bound above is strictly positive. \end{proof} \section{Conclusion} We have shown that any complementary channel to the qubit depolarizing channel has positive quantum capacity unless its output is exactly constant. This gives an example of a family of channels whose outputs approach a constant, yet retain positive quantum capacity. We also point out a crucial difference between the epolarizing channel and the related depolarizing and erasure channels. We hope these observations will shed light on what may or may not cause the quantum capacity of a channel to vanish. Our work also rules out the possibility that Watanabe's results \cite{Watanabe2012} can be applied directly to show that the low-noise depolarizing channel has quantum capacity given by the $1$-shot coherent information.
Very recently, \cite{LeditzkyLS17} established tight upper bounds on the difference between the one-shot coherent information and the quantum and private capacities of a quantum channel. While our results do not have direct implications to these capacities of $\Phi_{\eta}$, we hope they provide insights for further investigations beyond the bounds established in \cite{LeditzkyLS17}. \end{document}
\begin{document} \title{\rm New cubic fourfolds with odd-degree \\ unirational parametrizations} \author{Kuan-Wen Lai} \address{Department of Mathematics, Brown University, Providence, RI} \email{[email protected]} \begin{abstract} We prove that the moduli space of cubic fourfolds $\mathcal{C}$ contains a divisor $\mathcal{C}_{42}$ whose general member has a unirational parametrization of degree 13. This result follows from a thorough study of the Hilbert scheme of rational scrolls and an explicit construction of examples. We also show that $\mathcal{C}_{42}$ is uniruled. \end{abstract} \maketitle \DeclareRobustCommand{\gobblefour}[4]{} \newcommand*{\SkipTocEntry}{\addtocontents{toc}{\gobblefour}} \setcounter{tocdepth}{1} \tableofcontents \parskip = 5pt \section*{Introduction} Let $X$ be a smooth projective variety of dimension $n$ over $\varmathbb{C}$. We say that $X$ has a \emph{degree $\varrho$ unirational parametrization} if there is a dominant rational map $\rho:\varmathbb{P}^n\dashrightarrow X$ with $\deg\rho=\varrho$. Such a parametrization implies that the smallest positive integer $N$ which allows the rational equivalence \begin{equation}\label{decDiag} N\Delta_X\equiv N\{x\times X\}+Z\quad\mbox{in}\quad {\rm CH}^n(X\times X) \end{equation} would divide $\varrho$, where $x\in X$ and $Z$ is a cycle supported on $X\times Y$ for some divisor $Y\subset X$. The relation (\ref{decDiag}) for arbitrary integer $N$ is called a \emph{decomposition of the diagonal} of $X$, and it is called an \emph{integral} decomposition of the diagonal if $N=1$. (\cite{BS83}. See also \cite[Chap. 3]{Voi14}.) This paper studies the unirationality of \emph{cubic fourfolds}, i.e. smooth cubic hypersurfaces in $\varmathbb{P}^5$ over $\varmathbb{C}$. Let ${\rm Hdg}^4(X,\varmathbb{Z}):=H^4(X,\varmathbb{Z})\cap H^2(\Omega_X^2)$ be the group of integral Hodge classes of degree 4 for a cubic fourfold $X$. 
In the coarse moduli space of cubic fourfolds $\mathcal{C}$, the Noether-Lefschetz locus \[ \left\{X\in\mathcal{C}\,:\,{\rm rk}\left({\rm Hdg}^4(X,\varmathbb{Z})\right)\geq 2\right\} \] is a countably infinite union of irreducible divisors $\mathcal{C}_d$ indexed by $d\geq8$ and $d\equiv 0,2$ (mod 6). Here $\mathcal{C}_d$ consists of the \emph{special cubic fourfolds} which admit a rank-2 saturated sublattice of discriminant $d$ in ${\rm Hdg}^4(X,\varmathbb{Z})$ \cite{Has00}. Because the integral Hodge conjecture is valid for cubic fourfolds \cite[Th. 1.4]{Voi13}, $X\in\mathcal{C}$ is special if and only if there is an algebraic surface $S\subset X$ not homologous to a complete intersection. Voisin \cite[Th. 5.6]{Voi15} proves that a special cubic fourfold of discriminant $d\equiv 2$ (mod 4) admits an integral decomposition of the diagonal. Because every cubic fourfold has a unirational parametrization of degree 2 \cite[Example 18.19]{Har95}, it is natural to ask whether they have odd degree unirational parametrizations. For a general $X\in\mathcal{C}_d$ with $d=14$, 18, 26, 30, and 38, the examples constructed by Nuer \cite{Nue15} combined with an algorithm by Hassett \cite[Prop. 38]{Has16} support the expectation. In this paper, we improve the list by solving the case $d=42$. \begin{thm}\label{mainThm} A generic $X\in\mathcal{C}_{42}$ has a degree 13 unirational parametrization. \end{thm} Recall that a variety $Y$ is \emph{uniruled} if there is a variety $Z$ and a dominant rational map $Z\times\varmathbb{P}^1\dashrightarrow Y$ which doesn't factor through the projection to $Z$. As a byproduct of the proof of Theorem \ref{mainThm}, we also prove that \begin{thm} $\mathcal{C}_{42}$ is uniruled. 
\end{thm} \subsection*{Strategy of Proof} When $d=2(n^2+n+1)$ with $n\geq2$ and $X\in\mathcal{C}_d$ is general, the Fano variety of lines $F_1(X)$ is isomorphic to the Hilbert scheme of two points $\Sigma^{[2]}$, where $\Sigma$ is a K3 surface polarized by a primitive ample line bundle of degree $d$. \cite[Th. 6.1.4]{Has00} The isomorphism $F_1(X)\cong\Sigma^{[2]}$ implies that $X$ contains a family of two-dimensional rational scrolls parametrized by $\Sigma$. Indeed, the divisor $\Delta\subset\Sigma^{[2]}$ parametrizing the non-reduced subschemes can be naturally identified with the projectivization of the tangent bundle of $\Sigma$. Each fiber of this $\varmathbb{P}^1$-bundle induces a smooth rational curve in $F_1(X)$ through the isomorphism, hence corresponds to a rational scroll in $X$. Let $S\subset X$ be one such scroll. Since $S$ is rational, its symmetric square $W={\rm Sym}^2S$ is also rational. A generic element $s_1+s_2\in W$ spans a line $l(s_1,s_2)$ not contained in $X$, so there is a rational map \[\begin{array}{cccc} \rho:&W&\dashrightarrow&X\\ &s_1+s_2&\mapsto&x \end{array}\] where $l(s_1,s_2)\cap X=\{s_1,s_2,x\}$. By \cite[Prop. 38]{Has16}, this map becomes a unirational parametrization if $S$ has isolated singularities. Moreover, its degree is odd as long as $4\nmid d$. Discriminant $d=42$ corresponds to the case $n=4$ above. Note that $4\nmid d=42$. Thus a generic $X\in\mathcal{C}_{42}$ admits an odd degree unirational parametrization once we prove that \begin{thm}\label{mainStep} A generic $X\in\mathcal{C}_{42}$ contains a degree-9 rational scroll $S$ which has 8 double points and is smooth otherwise. \end{thm} Here a \emph{double point} means a \emph{non-normal ordinary double point}. It's a point where the surface has two branches that meet transversally. The idea in proving Theorem \ref{mainStep} is as follows: Degree-9 scrolls in $\varmathbb{P}^5$ form a component $\mathcal{H}_9$ in the associated Hilbert scheme.
Let $\mathcal{H}_9^8\subset\mathcal{H}_9$ parametrize scrolls with 8 isolated singularities. By definition (See Section \ref{sect:HilbSS}) an element $\overline{S}\in\mathcal{H}_9^8$ is non-reduced. We use $S$ to denote its underlying variety. Let $U_{42}\subset|\mathcal{O}_{\varmathbb{P}^5}(3)|$ be the locus of special cubic fourfolds with discriminant 42. Consider the incidence variety \[ \mathcal{Z} = \left\{(\overline{S},X)\in\mathcal{H}_9^8\times U_{42}:S\subset X\right\}. \] Then there is a diagram \[\xymatrix{ &\mathcal{Z}\ar_{p_1}[ld]\ar^{p_2}[rd]&&\\ \mathcal{H}_9^8&&U_{42}\ar[r]&\mathcal{C}_{42} }\] Theorem \ref{mainStep} is proved by showing that $p_2$ is dominant. Two main ingredients in the proof are: \begin{itemize} \item Constructing an explicit example. \item Estimating the dimension of the Hilbert scheme parametrizing singular scrolls. \end{itemize} Section \ref{sect:pSS} provides an introduction of rational scrolls and the basic properties required in the proof. We construct an example in Section \ref{sect:constr} and then prove the main results in Section \ref{sect:C42}. The general description about the Hilbert schemes $\mathcal{H}_9^8\subset\mathcal{H}_9$ and the estimate of the dimensions are left to Section \ref{sect:HilbSS}. Throughout the paper we will frequently deal with the rational map \[ \Lambda_Q: \varmathbb{P}^{D+1}\dashrightarrow\varmathbb{P}^N \] defined as the projection from some $(D-N)$-plane $Q$. Here $D$ and $N$ are positive integers such that $D+1\geq N\geq 3$. We will assume $D\geq N\geq 5$ when we are studying singular scrolls. \noindent {\bf Acknowledgments:} I am grateful to my advisor, Brendan Hassett, for helpful suggestions and his constant encouragement. I also appreciate the helpful comments from Nicolas Addington. I'd like to give my special thanks to the referee who points out a mistake in an earlier version of the paper and provides suggestions on how to fix it. 
I am grateful for the support of the National Science Foundation through DMS-1551514. \section{Preliminary: rational scrolls}\label{sect:pSS} We provide a brief review of rational scrolls and introduce necessary terminologies and lemmas in this section. \subsection{Hirzebruch surfaces} Let $m$ be a nonnegative integer, and let $\mathscr{E}$ be a rank two locally free sheaf on $\varmathbb{P}^1$ isomorphic to $\mathcal{O}_{\varmathbb{P}^1}\oplus\mathcal{O}_{\varmathbb{P}^1}(m)$. The Hirzebruch surface $\varmathbb{F}_m$ is defined to be the associated projective space bundle $\varmathbb{P}(\mathscr{E})$. Let $f$ be the divisor class of a fiber, and let $g$ be the divisor class of a section, i.e. the divisor class associated with Serre's twisting sheaf $\mathcal{O}_{\varmathbb{P}(\mathscr{E})}(1)$. The Picard group of $\varmathbb{F}_m$ is freely generated by $f$ and $g$, and the intersection pairing is given by \[\begin{array}{c|cc} &f &g\\ \hline f &0 &1\\ g &1 &m. \end{array}\] The canonical divisor is $K_{\varmathbb{F}_m}=-2g+(m-2)f$. Let $a$ and $b$ be two integers, and let $h=ag+bf$ be a divisor on $\varmathbb{F}_m$. The ampleness and the very ampleness for $h$ are equivalent on $\varmathbb{F}_m$, and it happens if and only if $a>0$ and $b>0$. \cite[Chapter V, \S2.18]{Har77} \begin{lemma}\label{hHir} Suppose the divisor $ag+bf$ is ample. We have \[\begin{array}{ccl} h^0\left(\varmathbb{F}_m\,,\,ag+bf\right)&=&(a+1)\left(\frac{1}{2}am+b+1\right),\\ h^i\left(\varmathbb{F}_m\,,\,ag+bf\right)&=&0\quad\mbox{for all}\enspace i>0. \end{array}\] \end{lemma} These formulas appear in several places in the literature with slightly different details depending on the contexts, for example \cite[Prop. 2.3]{Laf02}, \cite[p.543]{BBF04}, and \cite[Lemma 2.6]{Cos06}. It can be proved by induction on the integers $a$ and $b$ or by applying the projection formula to the bundle map $\pi:\varmathbb{F}_m\rightarrow\varmathbb{P}^1$. 
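The projection-formula route mentioned above can be made concrete: since $\pi_*\mathcal{O}_{\varmathbb{F}_m}(ag+bf)\cong\bigoplus_{j=0}^{a}\mathcal{O}_{\varmathbb{P}^1}(jm+b)$, one gets $h^0 = \sum_{j=0}^{a}(jm+b+1)$. The following short script (an arithmetic consistency check we supply, with our own function names) confirms that this sum agrees with the closed form in Lemma \ref{hHir}, and that the scroll case $a=1$, $b=u$ gives $h^0 = D+2$, matching the fact that $S_{u,v}$ spans $\varmathbb{P}^{D+1}$:

```python
from fractions import Fraction

def h0_sum(m, a, b):
    """h^0 via the pushforward pi_* O(ag + bf) = ⊕_{j=0}^{a} O_{P^1}(jm + b)."""
    return sum(j * m + b + 1 for j in range(a + 1))

def h0_closed(m, a, b):
    """Closed form from Lemma: (a + 1)(am/2 + b + 1)."""
    return (a + 1) * (Fraction(a * m, 2) + b + 1)

# Agreement over the ample range a > 0, b > 0.
for m in range(0, 6):
    for a in range(1, 6):
        for b in range(1, 6):
            assert h0_closed(m, a, b) == h0_sum(m, a, b)

# Scroll case: h = g + uf on F_{v-u} gives h^0 = D + 2 with D = u + v.
u, v = 2, 5
assert h0_sum(v - u, 1, u) == (u + v) + 2
```

Using `Fraction` avoids any floating-point issue when $am$ is odd, where the factor $am/2$ is fractional but the product is always an integer.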
\subsection{Deformations of Hirzebruch surfaces} $\varmathbb{F}_m$ admits a deformation to $\varmathbb{F}_{m-2k}$ for all $m>2k\geq0$. More precisely, there exists a holomorphic family $\tau:\mathcal{F}\rightarrow\varmathbb{C}$ such that $\mathcal{F}_0\cong\varmathbb{F}_m$ and $\mathcal{F}_t\cong\varmathbb{F}_{m-2k}$ for $t\neq0$. The family can be written down explicitly by the equation \begin{equation}\label{defHir} \mathcal{F} = \left\{{x_0}^my_1-{x_1}^my_2+t{x_0}^{m-k}{x_1}^ky_0=0\right\} \subset\varmathbb{P}^1\times\varmathbb{P}^2\times\varmathbb{C}, \end{equation} where $\left(\left[x_0,x_1\right],\left[y_0,y_1,y_2\right],t\right)$ is the coordinate of $\varmathbb{P}^1\times\varmathbb{P}^2\times\varmathbb{C}$. \cite[p.205]{BPV84} Generally, $\varmathbb{F}_m$ admits an analytic versal deformation with a base manifold of dimension $h^1\left(\varmathbb{F}_m,T_{\varmathbb{F}_m}\right)$ by the following lemma. \begin{lemma}\label{absDef}\cite[Lemma 1. and Theorem 4.]{Sei92} There is a natural isomorphism $H^1\left(\varmathbb{F}_m,T_{\varmathbb{F}_m}\right)\cong H^1\left(\varmathbb{P}^1,\mathcal{O}_{\varmathbb{P}^1}(-m)\right)$. We also have $H^2\left(\varmathbb{F}_m,T_{\varmathbb{F}_m}\right)=0$. \end{lemma} Let $\mathcal{E}=\mathcal{O}_{\varmathbb{P}^1}\oplus\mathcal{O}_{\varmathbb{P}^1}(m)$ be the underlying locally free sheaf of $\varmathbb{F}_m$. It is straightforward to compute that ${\rm Ext^1}_{\varmathbb{P}^1}\left(\mathcal{E},\mathcal{E}\right)\cong H^1\left(\varmathbb{P}^1,\mathcal{O}_{\varmathbb{P}^1}(-m)\right)$, so there is a natural isomorphism \begin{equation}\label{natIsom} H^1\left(\varmathbb{F}_m,T_{\varmathbb{F}_m}\right)\cong{\rm Ext^1_{\varmathbb{P}^1}}\left(\mathcal{E},\mathcal{E}\right), \end{equation} by Lemma \ref{absDef}.
The elements of the group ${\rm Ext^1}_{\varmathbb{P}^1}\left(\mathcal{E},\mathcal{E}\right)$ are in one-to-one correspondence with the deformations of $\mathcal{E}$ over the dual numbers $D_t\cong\frac{\varmathbb{C}[t]}{(t^2)}$. \cite[Th. 2.7]{Har10} Thus (\ref{natIsom}) says that the infinitesimal deformation of $\varmathbb{F}_m$ can be identified with the infinitesimal deformation of its underlying locally free sheaf. Every element in $ {\rm Ext^1}_{\varmathbb{P}^1}\left(\mathcal{E},\mathcal{E}\right)\cong {\rm Ext^1}_{\varmathbb{P}^1}\left(\mathcal{O}_{\varmathbb{P}^1}(m),\mathcal{O}_{\varmathbb{P}^1}\right) $ is represented by a short exact sequence \[ 0\rightarrow\mathcal{O}_{\varmathbb{P}^1}\rightarrow\mathcal{O}_{\varmathbb{P}^1}(k)\oplus\mathcal{O}_{\varmathbb{P}^1}(m-k) \rightarrow\mathcal{O}_{\varmathbb{P}^1}(m)\rightarrow0 \] for some $k$ satisfying $m>2k\geq0$. By tracking the construction of the correspondence in \cite[Th. 2.7]{Har10}, the above sequence corresponds to a coherent sheaf $\mathscr{E}$ on $\varmathbb{P}^1\times D_t$, flat over $D_t$, such that $\mathscr{E}_0\cong\mathcal{E}$ and $\mathscr{E}_t\cong\mathcal{O}_{\varmathbb{P}^1}(k)\oplus\mathcal{O}_{\varmathbb{P}^1}(m-k)$ for $t\neq0$. So it induces a flat family $\mathcal{F}$ of Hirzebruch surfaces over $D_t$ such that $\mathcal{F}_0\cong\varmathbb{F}_m$ and $\mathcal{F}_t\cong\varmathbb{F}_{m-2k}$ for $t\neq0$. \subsection{Rational normal scrolls} Let $u$ and $v$ be positive integers with $u\leq v$ and let $N=u+v+1$. Let $P_1$ and $P_2$ be complementary linear subspaces of dimensions $u$ and $v$ in $\varmathbb{P}^N$. Choose rational normal curves $C_1\subset P_1$, $C_2\subset P_2$, and an isomorphism $\varphi:C_1\rightarrow C_2$. Then the union of the lines $\bigcup_{p\in C_1}\overline{p\,\varphi(p)}$ forms a smooth surface $S_{u,v}$ called a \emph{rational normal scroll of type} $(u,v)$. The line $\overline{p\,\varphi(p)}$ is called a \emph{ruling}. 
When $u<v$, we call the curve $C_1\subset S_{u,v}$ the \emph{directrix} of $S_{u,v}$. A rational normal scroll of type $(u,v)$ is uniquely determined up to projective isomorphism. In particular, each $S_{u,v}$ is projectively equivalent to the one given by the parametric equation \begin{equation}\label{stdRNS} \begin{array}{ccc} \varmathbb{C}^2 &\longrightarrow &\varmathbb{P}^N\\ (s,t) &\longmapsto &(1,s,...,s^u,t,st,...,s^vt). \end{array} \end{equation} One can check by this expression that a hyperplane section of $S_{u,v}$ which doesn't contain a ruling is a rational normal curve of degree $u+v$. It easily follows that $S_{u,v}$ has degree $D=u+v$. The rulings of $S_{u,v}$ form a rational curve in $\varmathbb{G}(1,N)$, the Grassmannian of lines in $\varmathbb{P}^N$. By using (\ref{stdRNS}), we can parametrize this curve as \begin{equation}\label{paraRul} \begin{array}{ccc} \varmathbb{C} &\longrightarrow &\varmathbb{G}(1,N)\\ s &\longmapsto &\left(\begin{array}{cccccccc} 1&s&...&s^u&0&0&...&0\\ 0&0&...&0&1&s&...&s^v \end{array}\right), \end{array} \end{equation} where the matrix on the right represents the line spanned by the row vectors. The embedding $S_{u,v}\subset\varmathbb{P}^N$ can be seen as the Hirzebruch surface $\varmathbb{F}_{v-u}$ embedded in $\varmathbb{P}^N$ through the complete linear system $|g+uf|$. Conversely, every nondegenerate, irreducible and smooth surface of degree $D$ in $\varmathbb{P}^{D+1}$ isomorphic to $\varmathbb{F}_{v-u}$ must be $S_{u,v}$.\cite[p.522-525]{GH94} It's not hard to compute that $H^1\left(T_{\varmathbb{P}^N}|_{\varmathbb{F}_{v-u}}\right)=0$ under the above embedding. Combining this with the rigidity result, it implies that every abstract deformation of $\varmathbb{F}_{v-u}$ can be lifted to an embedded deformation as a family of rational normal scrolls in $\varmathbb{P}^N$.
\cite[Remark 20.2.1]{Har10} We conclude this as the following lemma: \begin{lemma}\label{embDef} For $m>2k\geq0$, let $\mathcal{F}$ be an abstract deformation of Hirzebruch surfaces such that $\mathcal{F}_0\cong\varmathbb{F}_m$ and $\mathcal{F}_t\cong\varmathbb{F}_{m-2k}$ for $t\neq0$. Then $\mathcal{F}$ can be realized as an embedded deformation $\mathcal{S}$ in $\varmathbb{P}^{D+1}$ with $\mathcal{S}_0\cong S_{u,v}$ and $\mathcal{S}_t\cong S_{u+k,v-k}$ for $t\neq0$, where $D=u+v$, and $u\leq v$ are any positive integers satisfying $v-u=m$. \end{lemma} \subsection{Rational scrolls} \begin{defn} We call a surface $S\subset\varmathbb{P}^N$ a rational scroll (or a scroll) of type $(u,m+u)$ if it is the image of a Hirzebruch surface $\varmathbb{F}_m$ through a birational morphism defined by an $N$-dimensional subsystem $\textfrak{d}\subset|g+uf|$ for some $u>0$. \end{defn} Equivalently, $S\subset\varmathbb{P}^N$ is a rational scroll of type $(u,v)$ either if it is a rational normal scroll $S_{u,v}$, or if it is the projection image of $S_{u,v}\subset\varmathbb{P}^{D+1}$ from a $(D-N)$-plane disjoint from $S_{u,v}$. Here $D=u+v$ is the degree of $S_{u,v}$ as well as the degree of $S$. In the latter case, we also call a line on $S$ a ruling if its preimage is a ruling on $S_{u,v}$. The following lemma computes the cohomology groups of the normal bundle for an arbitrary embedding of a Hirzebruch surface into a projective space. \begin{lemma}\label{h0h1} Let $\iota:\varmathbb{F}_m\hookrightarrow\varmathbb{P}^N$ be an embedding with image $S$ and $\iota^*\mathcal{O}_{\varmathbb{P}^N}(1)\cong\mathcal{O}_S(h)$, where $h=ag+bf$ with $a>0$ and $b>0$. Let $N_{S/\varmathbb{P}^N}$ be the normal bundle of $S$ in $\varmathbb{P}^N$, then \[ h^0(S,N_{S/\varmathbb{P}^N})= (N+1)(a+1)(\frac{am}{2}+b+1)-7 \] and $h^i(S,N_{S/\varmathbb{P}^N})=0,\forall i>0$. 
Especially, if $S$ is a smooth scroll of degree $D$, then the formula for $h^0$ reduces to \[ h^0(S,N_{S/\varmathbb{P}^N})= (N+1)(D+2)-7. \] \end{lemma} \begin{proof} The short exact sequence \begin{equation}\label{h0h11} 0\rightarrow T_S\rightarrow T_{\varmathbb{P}^N}|_{S}\rightarrow N_{S/\varmathbb{P}^N}\rightarrow0 \end{equation} has the long exact sequence \[\begin{array}{ccccccccc} 0&\rightarrow&H^0(S,T_S)&\rightarrow&H^0(S,T_{\varmathbb{P}^N}|_{S})&\rightarrow&H^0(S,N_{S/\varmathbb{P}^N})&&\\ &\rightarrow&H^1(S,T_S)&\rightarrow&H^1(S,T_{\varmathbb{P}^N}|_{S})&\rightarrow&H^1(S,N_{S/\varmathbb{P}^N})&&\\ &\rightarrow&H^2(S,T_S)&\rightarrow&H^2(S,T_{\varmathbb{P}^N}|_{S})&\rightarrow&H^2(S,N_{S/\varmathbb{P}^N})&\rightarrow&0. \end{array}\] In order to calculate the dimensions on the right, we need the dimensions in the first two columns. For the middle column, we can restrict the Euler exact sequence $$0\rightarrow\mathcal{O}_{\varmathbb{P}^N}\rightarrow\mathcal{O}_{\varmathbb{P}^N}(1)^{\oplus(N+1)}\rightarrow T_{\varmathbb{P}^N}\rightarrow0$$ to $S$ and obtain \[ 0\rightarrow\mathcal{O}_S\rightarrow\mathcal{O}_S(h)^{\oplus(N+1)}\rightarrow T_{\varmathbb{P}^N}|_S\rightarrow0. \footnote{Tor$_i^{\mathcal{O}_{\varmathbb{P}^N}}(\mathcal{O}_S,\mathscr{F})=0$ for all $i>0$ and every locally free sheaf $\mathscr{F}$, so the Euler exact sequence stays exact after the restriction.} \] Lemma \ref{hHir} confirms that $h^i(S,\mathcal{O}_S(h))=0$ for $i>0$, so we have \[ 0\rightarrow H^0(S,\mathcal{O}_S)\rightarrow H^0(S,\mathcal{O}_S(h))^{\oplus(N+1)}\rightarrow H^0(S,T_{\varmathbb{P}^N}|_S)\rightarrow0 \] from the associated long exact sequence while the other terms are all vanishing. It follows that \[\begin{array}{rcl} h^0(S,T_{\varmathbb{P}^N}|_S) &=& (N+1)h^0(S,\mathcal{O}_S(h)) - h^0(S,\mathcal{O}_S)\\ &\stackrel{\ref{hHir}}{=}& (N+1)(a+1)(\frac{am}{2}+b+1)-1.
\end{array}\] For the first column, one can use the Hirzebruch-Riemann-Roch formula to compute that $\chi(T_S)=6$. We also have $h^2(S,T_S)=0$ by Lemma \ref{absDef}. Thus $h^0(S,T_S)-h^1(S,T_S)=\chi(T_S)=6$. Collecting the above results, the long exact sequence for (\ref{h0h11}) now becomes \[\begin{array}{ccccccccc} 0&\rightarrow&H^0(S,T_S)&\rightarrow&H^0(S,T_{\varmathbb{P}^N}|_{S})&\rightarrow&H^0(S,N_{S/\varmathbb{P}^N})&&\\ &\rightarrow&H^1(S,T_S)&\rightarrow&0&\rightarrow&H^1(S,N_{S/\varmathbb{P}^N})&&\\ &\rightarrow&0&\rightarrow&0&\rightarrow&H^2(S,N_{S/\varmathbb{P}^N})&\rightarrow&0. \end{array}\] Therefore we have $h^i(S,N_{S/\varmathbb{P}^N})=0,\forall i>0$, and \[\begin{array}{rcl} h^0(S,N_{S/\varmathbb{P}^N}) &=& h^0(S,T_{\varmathbb{P}^N}|_{S})-\chi(T_S)\\ &=& (N+1)(a+1)(\frac{am}{2}+b+1)-7. \end{array}\] When $S$ is a rational scroll, we have $h=g+bf$. Then the formula is obtained by inserting $a=1$ and $D=h^2=m+2b$ into the equation. \end{proof} \subsection{Isolated singularities on rational scrolls} The singularities on a rational scroll all arise from the projection by definition, so we assume $D\geq N$. We also assume $N\geq 5$. Let $S\subset\varmathbb{P}^N$ be a rational scroll under $S_{u,v}\subset\varmathbb{P}^{D+1}$ and let $q:S_{u,v}\rightarrow S$ be the projection. A point $p\in S$ is singular if and only if one of the following situations occurs \begin{itemize} \item There are two distinct rulings $l$, $l'\subset S_{u,v}$ such that $p\in q(l)\cap q(l')$. \item There is a ruling $l\subset S_{u,v}$ such that $p\in q(l)$ and the map $q$ is ramified at $l$. \end{itemize} Suppose that $S$ has isolated singularities, i.e. the singular locus of $S$ has dimension zero. Then each singular point is set-theoretically the intersection of two or more rulings. Let $m$ be the number of rulings which pass through any of the singular points. Then the number of singularities on $S$ is counted as ${m\choose 2}$.
Note that $S_{u,v}$ is cut out by quadrics, so every secant line intersects $S_{u,v}$ in exactly two points transversally. Let $T(S_{u,v})\subset S(S_{u,v})$ be the tangent and the secant varieties of $S_{u,v}$, respectively. Then every $x\in S(S_{u,v})-T(S_{u,v})$ satisfies one of the following two conditions \begin{enumerate} \item The point $x$ lies on one and only one secant line. \item\label{sec2} The point $x$ lies on two secant lines. Let $Z_2\subset S(S_{u,v})$ denote the union of such points. \end{enumerate} \begin{lemma}\label{noMoreSec} The subset $Z_2$ is nonempty if and only if $u=2$. In this situation, $Z_2\cong\varmathbb{P}^2$ and each $x\in Z_2$ lies on infinitely many secant lines. \end{lemma} \begin{proof} We retain the notation used in constructing $S_{u,v}$ throughout the proof. Let $x\in Z_2$ be any point. First we claim that the intersection points of $S_{u,v}$ with the union of the two secants described in (\ref{sec2}) lie on four distinct rulings. The intersection points cannot lie on only two rulings because any two distinct rulings are linearly independent.\cite[Exercise 8.21]{Har95} If they lie on three rulings, then the projection to $P_2$ would be a trisecant line of $C_2$. But this is impossible because $C_2$ has degree $v\geq\lceil\frac{D}{2}\rceil\geq2$. Hence the claim holds. The claim provides a rational normal curve $C\subset S_{u,v}$ (either sectional or residual) of degree $\geq u$ passing through the four intersection points.\cite[Example 8.17]{Har95} This imposes a non-trivial linear relation on four distinct points on $C$, which forces $C$ to be either a line or a conic. If $C$ is a line then $C$ coincides with the two secant lines, which is impossible. Hence $C$ must be a conic. It follows that $u\leq\deg C\leq2$. If $u=1$, then the conic $C$ would dominate $C_2$ through the projection from $P_1$. However, this cannot happen since $C_2$ has degree $v=D-u\geq4$. Hence $u=2$. In this case, $C$ can only be the directrix since $u<v$.
It follows that $Z_2$ coincides with the 2-plane spanned by $C$ and each point of $Z_2$ lies on infinitely many secants. Conversely, $u=2$ implies that $Z_2$ contains the 2-plane spanned by $C_2$. By the same argument as above they coincide and every $x\in Z_2$ lies on infinitely many secants. \end{proof} Assume that $S$ is the projection of $S_{u,v}$ from a $(D-N)$-plane $Q\subset\varmathbb{P}^{D+1}$. \begin{cor}\label{isoSingEq} The scroll $S$ is singular along $r$ points if and only if $Q$ intersects $S(S_{u,v})$ in $r$ points away from $T(S_{u,v})\cup Z_2$. \end{cor} \begin{proof} Assume that $S$ has isolated singularities. Recall that the number $r$ counts the pairs $(l,l')$ of distinct rulings on $S_{u,v}$ such that $q(l)$ intersects $q(l')$ in one point. (Different pairs might intersect in the same point.) Equivalently, $r$ counts the secant lines joining such pairs $l$, $l'$ that meet $Q$. By Lemma \ref{noMoreSec}, each $x\in S(S_{u,v})$ away from $T(S_{u,v})\cup Z_2$ lies on a unique secant. Thus $r$ equals the number of intersection points of $Q$ with $S(S_{u,v})$ away from $T(S_{u,v})\cup Z_2$. \end{proof} Finally, we provide a criterion for $S$ to have isolated singularities when $u=1$. It will be used in the proof of Proposition \ref{fourSq}. \begin{prop}\label{noCSing} Assume $u=1$. If $Q\cap T(S_{1,v})=\emptyset$, then $S$ has isolated singularities. \end{prop} \begin{proof} If $Q$ intersects $S(S_{1,v})$ in finitely many points, then the proposition follows from Corollary \ref{isoSingEq}. Assume $Q\cap S(S_{1,v})$ contains a curve $\Gamma$. We are going to show that $\Gamma$ intersects $T(S_{1,v})$ nontrivially, which contradicts our hypothesis. Let $f$ be the fiber class of $S_{1,v}$. Then the linear system $|f|$ parametrizes the rulings of $S_{1,v}$. For distinct $l,l'\in|f|$, the linear span of $l$ and $l'$ is a 3-space $P_{l,l'}\subset S(S_{1,v})$.
Consider the incidence correspondence \[ \varmathbb{S} = \{(x,l+l')\in\varmathbb{P}^{D+1}\times|2f|:x\in P_{l,l'}\}. \] Observe that $\varmathbb{S}$ is a $\varmathbb{P}^3$-bundle over $|2f|\cong\varmathbb{P}^2$ via the second projection \[ p_2:\varmathbb{S}\rightarrow|2f|. \] On the other hand, the image of $\varmathbb{S}$ under the first projection \[ p_1:\varmathbb{S}\rightarrow\varmathbb{P}^{D+1} \] is the secant variety $S(S_{1,v})$. Consider the diagonal \[ \Delta:=\{2l:l\in|f|\}\subset|2f|. \] It's easy to see that the tangent variety $T(S_{1,v})\subset S(S_{1,v})$ is the image of $p_2^{-1}(\Delta)$ via the first projection. If $\Gamma\not\subset P_{l,l'}$ for all $l+l'$, then the curve $p_1^{-1}(\Gamma)$ is mapped under $p_2$ to a curve in $|2f|$ which intersects $\Delta$ nontrivially. It follows that $\Gamma\cap T(S_{1,v})\neq\emptyset$. Suppose $\Gamma\subset P_{l,l'}$ for some $l+l'$. The directrix $C_1$ is a line in $P_{l,l'}$ by hypothesis, so we have \[ T(S_{1,v})\cap P_{l,l'} = P\cup P' \] where $P$ and $P'$ are the 2-planes spanned by $C_1$ together with $l$ and with $l'$, respectively. So $\Gamma$ and $T(S_{1,v})$ have a nontrivial intersection in $P_{l,l'}$. \end{proof} \section{Construction of singular scrolls in $\varmathbb{P}^5$}\label{sect:constr} This section provides a construction of singular scrolls in $\varmathbb{P}^5$ of type $(1,v)$ with isolated singularities. The construction relates the existence of such singular scrolls to the solvability of a four-square equation as follows: \begin{prop}\label{fourSq} Assume $v\geq4$. There exists a rational scroll in $\varmathbb{P}^5$ of type $(1,v)$ with isolated singularities which has at least $r$ singularities if there are four odd integers $a\geq b\geq c\geq d>0$ satisfying \begin{enumerate} \item $8r+4=a^2+b^2+c^2+d^2$, \item $a+b+c\leq 2v-3$. \end{enumerate} \end{prop} We use the construction to produce an explicit example which can be manipulated by a computer algebra system.
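The numerical condition in Proposition \ref{fourSq} is easy to test by brute force. The following sketch (our own illustration, not part of the construction; the helper name is ours) lists all solutions for $r=8$ and $v=8$, the case of the degree-9 scroll considered below. For instance, $(a,b,c,d)=(7,3,3,1)$ corresponds under the change of variables $a=2k_1-1,\dots,d=2k_4-1$ to $(k_1,\dots,k_4)=(4,2,2,1)$, giving $r={4\choose2}+{2\choose2}+{2\choose2}+{1\choose2}=8$.

```python
from itertools import combinations_with_replacement

def four_square_solutions(r, v):
    """All odd a >= b >= c >= d > 0 with 8r+4 = a^2+b^2+c^2+d^2 and a+b+c <= 2v-3."""
    target = 8 * r + 4
    odds = range(1, int(target**0.5) + 2, 2)
    sols = []
    # combinations_with_replacement yields non-decreasing tuples (d, c, b, a)
    for d_, c_, b_, a_ in combinations_with_replacement(odds, 4):
        if a_**2 + b_**2 + c_**2 + d_**2 == target and a_ + b_ + c_ <= 2 * v - 3:
            sols.append((a_, b_, c_, d_))
    return sols

# r = 8 singularities on a scroll of type (1,8): 8*8 + 4 = 68
print(four_square_solutions(8, 8))  # finds (7,3,3,1) and (5,5,3,3)
```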
With the help of a computer we prove that \begin{prop}\label{SinX} There is a degree-9 rational scroll $S\subset\varmathbb{P}^5$ which has eight isolated singularities and is smooth otherwise such that \begin{enumerate} \item\label{SinX1} $h^0(\varmathbb{P}^5,\mathcal{I}_S(3))=6$, where $\mathcal{I}_S$ is the ideal sheaf of $S$ in $\varmathbb{P}^5$. \item\label{SinX2} $S$ is contained in a smooth cubic fourfold $X$. \item\label{SinX3} $S$ deforms in $X$ to the first order as a two-dimensional family. \item\label{SinX4} $S$ is also contained in a singular cubic fourfold $Y$. \end{enumerate} \end{prop} We introduce the construction first and prove Proposition \ref{SinX} at the end. Recall that, with a fixed rational normal scroll $S_{1,v}\subset\varmathbb{P}^{D+1}$, every degree $D$ scroll $S\subset\varmathbb{P}^5$ of type $(1,v)$ is the projection of $S_{1,v}$ from $P^\perp$ for some $P\in\varmathbb{G}(5,D+1)$. \subsection{Plane $k$-chains} Let $k$ be a positive integer. It can be proved by induction that $k$ distinct lines in a projective space intersect in at most ${k\choose2}$ points counted with multiplicity, and the maximal number is attained exactly when the $k$ lines span a 2-plane. \begin{defn} Let $k\geq1$ be an integer. We call the union of $k$ distinct lines which span a 2-plane a plane $k$-chain. Let $W\subset\varmathbb{P}^N$ be the union of a finite number of lines. A plane $k$-chain in $W$ is called maximal if it is not a subset of a plane $k'$-chain in $W$ for some $k'>k$. \end{defn} Let $S\subset\varmathbb{P}^5$ be a singular scroll with isolated singularities.
There is a subset $W\subset S$ consisting of a finite number of rulings defined by \begin{equation}\label{rulConfig} W = \bigcup l:\mbox{$l$ is a ruling passing through a singular point on $S$.} \end{equation} Since $W$ consists of finitely many rulings, it can be expressed as \[ W = \bigcup_{i=1}^n K_i: \mbox{$K_i$ is a maximal plane $k_i$-chain with $k_i\geq2$.} \] If two plane $k$-chains share more than one line, then they must lie on the same 2-plane. In particular, they cannot both be maximal. Therefore, for distinct maximal plane $k$-chains $K_i$ and $K_j$ in $W$, we have either $K_i\cap K_j=\emptyset$ or $K_i\cap K_j=l$ for a single ruling $l$. It follows that the number of singularities on $S$ equals $\sum_{i=1}^n {k_i\choose2}$ since a plane $k$-chain contributes ${k\choose2}$ singularities. Let $l_1,...,l_k\subset S_{1,v}$ be $k$ distinct rulings which span a subspace $P_{l_1,...,l_k}\subset\varmathbb{P}^{D+1}$. The images of the rulings form a plane $k$-chain on $S$ through projection if and only if $P_{l_1,...,l_k}$ is projected onto a 2-plane in $\varmathbb{P}^5$. Parametrize the rulings as in (\ref{paraRul}) with $l_j=l_j(s_j)$, $j=1,...,k$. Then $P_{l_1,...,l_k}$ is spanned by the row vectors of the following $(k+2)\times(D+2)$ matrix \begin{equation}\label{P1tok} P(s_1,...,s_k)= \left(\begin{array}{cccccc} 1&s_1&0&0&...&0\\ 1&s_2&0&0&...&0\\ 0&0&1&s_1&...&s_1^v\\ &&\vdots&&&\\ 0&0&1&s_k&...&s_k^v \end{array}\right). \end{equation} The projection $S_{1,v}\rightarrow S$ is restricted from a linear map \[ \Lambda: \varmathbb{P}^{D+1}\dashrightarrow\varmathbb{P}^5. \] Suppose $\Lambda$ is represented by a $(D+2)\times6$ matrix \[ \Lambda = \left(\begin{array}{cccccc} v_1&v_2&v_3&v_4&v_5&v_6\end{array}\right), \] where $v_1,...,v_6$ are column vectors of length $D+2$. Then $P_{l_1,...,l_k}$ is projected onto a 2-plane if and only if the $(k+2)\times6$ matrix \[ P(s_1,...,s_k)\cdot\Lambda \] has rank three.
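The rank criterion can be illustrated numerically. The sketch below is our own illustration over the reals, with toy values $u=1$, $v=4$, $k=2$ and rulings at $s=(1,2)$, none of which are taken from the text: a generic $\Lambda$ gives ${\rm rk}(P\cdot\Lambda)=4$, while forcing the center of projection to meet the 3-space spanned by the two rulings drops the rank to three, i.e. the two rulings are projected into a common 2-plane.

```python
import numpy as np

rng = np.random.default_rng(0)

# P(s1, s2) for the scroll S_{1,4} (so D = 5), rulings at s = 1 and s = 2.
P = np.array([
    [1, 1, 0, 0, 0, 0, 0],
    [1, 2, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 1, 1],    # (1, s, ..., s^4) at s = 1
    [0, 0, 1, 2, 4, 8, 16],   # (1, s, ..., s^4) at s = 2
], dtype=float)

# A generic projection keeps the 3-space spanned by the two rulings intact.
Lam_generic = rng.standard_normal((7, 6))
assert np.linalg.matrix_rank(P @ Lam_generic) == 4

# Take w in the row space of P and let the columns of Lambda span the
# hyperplane w^perp; then the center of projection meets the 3-space.
w = P[0] + P[2]
_, _, Vt = np.linalg.svd(w.reshape(1, -1))
Lam_special = Vt[1:].T          # 7 x 6, columns orthogonal to w
assert np.linalg.matrix_rank(P @ Lam_special) == 3
```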
\subsection{Control the number of singularities} Let $r$ be a non-negative integer. We introduce a method to find a projection $\Lambda$ which maps $S_{1,v}$ to a singular scroll $S$ with isolated singularities. The method allows us to control the number of singularities so that it is bounded below by $r$. For simplicity, we consider only the cases when the configuration $W\subset S$ defined in (\ref{rulConfig}) consists of four disjoint maximal plane $k$-chains. We start by picking distinct rulings on $S_{1,v}$ and produce four matrices $P_1$, $P_2$, $P_3$, and $P_4$ as in (\ref{P1tok}). Suppose $P_i$ consists of $k_i$ rulings. Note that $P_i$ contributes ${k_i\choose 2}$ singularities if its rulings are mapped to a plane $k_i$-chain. Thus we also assume that \begin{equation}\label{singNum} r={k_1\choose2}+{k_2\choose2}+{k_3\choose2}+{k_4\choose2}. \end{equation} Here we allow $k_i=1$, which means that $P_i$ consists of a single ruling and thus contributes no singularity. Consider $\Lambda = \left(\begin{array}{cccccc} v_1&v_2&v_3&v_4&v_5&v_6\end{array}\right)$ as an unknown. Let $P$ be the 5-plane spanned by $v_1,...,v_6$. We are going to construct $\Lambda$ satisfying \begin{enumerate} \item\label{ld1} ${\rm rk}\left(P_i\cdot\Lambda\right)=3,\quad i=1,2,3,4$. \item\label{ld2} $P^\perp\,\cap\,T(S_{1,v})=\emptyset$. \end{enumerate} Note that (\ref{ld1}) makes the number of isolated singularities $\geq r$, while (\ref{ld2}) ensures that no curve singularity occurs. We divide the construction into two steps: \noindent\emph{Step 1. Find $v_1$, $v_2$, $v_3$ and $v_4$ to satisfy (\ref{ld1}).} Consider each $P_i$ as a linear map by multiplication from the left. We look for linearly independent vectors $v_1$, $v_2$, $v_3$ and $v_4$ such that, for each $P_i$, three of them lie in the kernel while the remaining one does not. The four vectors arranged in this way contribute exactly one to the rank of each $P_i\cdot\Lambda$.
In the next step, $v_5$ and $v_6$ will be general vectors in $\varmathbb{P}^{D+1}$ satisfying some open conditions. This contributes two more to the rank of each $P_i\cdot\Lambda$, which makes (\ref{ld1}) hold. Under the standard parametrization for $S_{1,v}\subset\varmathbb{P}^{D+1}$, the underlying vector space of $\varmathbb{P}^{D+1}$ can be decomposed as $A\oplus B$ with $A$ representing the first two coordinates and $B$ representing the last $v+1$ coordinates. With this decomposition, the matrix $P$ in (\ref{P1tok}) can be decomposed into two Vandermonde matrices \[ P^A= \left(\begin{array}{cc} 1&s_1\\ 1&s_2 \end{array}\right) \quad\mbox{and}\quad P^B= \left(\begin{array}{cccc} 1&s_1&...&s_1^v\\ &\vdots&&\\ 1&s_k&...&s_k^v \end{array}\right). \] Note that $\ker P=\ker P^B$. So we can search for the vectors inside $\ker P^B$. In our situation, we have four matrices ${P_1}^B$, ${P_2}^B$, ${P_3}^B$, ${P_4}^B$ with kernels $\ker{P_1}^B$, $\ker{P_2}^B$, $\ker{P_3}^B$, and $\ker{P_4}^B$, respectively. By the assumption $k_i\leq v$ and the fact that a Vandermonde matrix has full rank, each $\ker {P_i}^B$ is a codimension $k_i$ subspace of $B$. Now we want to pick $v_1,...,v_4$ from $B$ such that each $\ker{P_i}^B$ contains exactly three of the four vectors, i.e. we want \begin{equation}\label{vec3-1} \left|\,\ker{P_i}^B\cap\{v_1,v_2,v_3,v_4\}\,\right|=3,\quad\mbox{for}\enspace i=1,2,3,4. \end{equation} One way to satisfy (\ref{vec3-1}) is to pick $v_i$ from \begin{equation}\label{ker3-1} \left(\bigcap_{j\neq i}\ker{P_j}^B\right)-\ker{P_i}^B,\quad\mbox{for}\enspace i=1,2,3,4. \end{equation} The sets in (\ref{ker3-1}) are nonempty if and only if \[ \dim\left(\ker{P_\alpha}^B\cap\ker{P_\beta}^B\cap\ker{P_\gamma}^B\right)\geq1 \] for all distinct $\alpha,\beta,\gamma\in\{1,2,3,4\}$. This is equivalent to \begin{equation}\label{4v} k_\alpha+k_\beta+k_\gamma\leq v,\enspace\mbox{for distinct}\;\alpha,\beta,\gamma\in\{1,2,3,4\}.
\end{equation} So we have to include (\ref{4v}) as one of our assumptions. \noindent\emph{Step 2. Adjust $v_1,...,v_4$ and then pick $v_5$ and $v_6$ to satisfy (\ref{ld2}).} \begin{lemma} Let ${v_i}^\perp$ be the hyperplane in $\varmathbb{P}^{D+1}$ orthogonal to $v_i$. The four vectors $v_1,...,v_4$ can be chosen generically so that $\bigcap_{i=1}^4{v_i}^\perp$ intersects $T(S_{1,v})$ only in the directrix of $S_{1,v}$. \end{lemma} \begin{proof} Parametrize the rational normal curve $C=S_{1,v}\cap\varmathbb{P}(B)$ by \[ \theta(s)=(0,0,1,s,...,s^v). \] Then the standard parametrization (\ref{stdRNS}) can be written as \[ (1,s,0,...,0)+t\theta(s). \] Let $a$ and $b$ be the parameters for the tangent plane over each point. Then the tangent variety $T(S_{1,v})$ has the parametric equation \[\begin{array}{l} (1,s,0,...,0)+t\theta(s)+a\left[(0,1,0,...,0)+t\frac{d\theta}{ds}(s)\right]+b\theta(s)\\ = (1,s+a,0,...,0)+(t+b)\theta(s)+ta\frac{d\theta}{ds}(s). \end{array}\] Each point on $T(S_{1,v})$ lying in $\bigcap_{i=1}^4{v_i}^\perp$ is a common zero of the equations \begin{equation}\label{viIntTan} (t+b)\left(\theta(s)\cdot v_i\right)+ta\left(\frac{d\theta}{ds}(s)\cdot v_i\right)=0,\quad i=1,2,3,4. \end{equation} By considering $(t+b)$ and $ta$ as variables, (\ref{viIntTan}) becomes a system of linear equations given by the matrix \[\left(\begin{array}{cccc} \theta(s)\cdot v_1&\theta(s)\cdot v_2&\theta(s)\cdot v_3&\theta(s)\cdot v_4\\ \theta'(s)\cdot v_1&\theta'(s)\cdot v_2&\theta'(s)\cdot v_3&\theta'(s)\cdot v_4 \end{array}\right).\] The matrix fails to have full rank at a parameter $s$ exactly when there exist $\alpha,\beta\in\varmathbb{C}$, $\alpha\beta\neq0$, such that \begin{equation}\label{tanOfCurve} \left(\alpha\theta(s)+\beta\theta'(s)\right)\cdot v_i=0,\quad i=1,2,3,4. \end{equation} Note that (\ref{tanOfCurve}) has a solution if and only if $\bigcap_{i=1}^4{v_i}^\perp$ and the tangent variety $T(C)$ of $C$ intersect each other.
One can choose $v_2,v_3$ and $v_4$ generically from (\ref{ker3-1}) so that $\bigcap_{i=2}^4{v_i}^\perp$ is disjoint from $C$. This forces $\bigcap_{i=2}^4{v_i}^\perp$ to intersect $T(C)$ in either the empty set or finitely many points. By the properties of a rational normal curve, the hyperplane orthogonal to a point on $C$ contains no subspace that remains invariant as the point is perturbed. Hence, after a suitable perturbation of the chosen rulings, one can choose $v_1$ from (\ref{ker3-1}) such that $\left(\bigcap_{i=1}^4{v_i}^\perp\right)\cap T(C)=\emptyset$. As a result, the equations in (\ref{viIntTan}) become independent, so the solutions are $t=b=0$ or $a=0$, $t=-b$. Both solutions form the directrix of $S_{1,v}$. \end{proof} With the above adjustment, we can pick $v_5$ and $v_6$ generically in $\varmathbb{P}^{D+1}$ so that the $(D-5)$-plane $Q = {v_1}^\perp\cap...\cap{v_6}^\perp$ has no intersection with $T(S_{1,v})$. Note that the projection defined by $\Lambda$ is the same as the projection from $Q$. By Proposition \ref{noCSing}, this projection produces a rational scroll with isolated singularities. \begin{prop} There exists a rational scroll in $\varmathbb{P}^5$ of type $(1,v)$ with isolated singularities which has at least $r$ singularities if there are four positive integers $k_1\geq k_2\geq k_3\geq k_4$ satisfying (\ref{singNum}) and (\ref{4v}): \[ r={k_1\choose 2}+{k_2\choose 2}+{k_3\choose 2}+{k_4\choose 2} \quad\mbox{and}\quad k_1+k_2+k_3\leq v. \] \end{prop} Proposition \ref{fourSq} is obtained by expanding the binomial coefficients followed by the change of variables $a=2k_1-1$, $b=2k_2-1$, $c=2k_3-1$, $d=2k_4-1$. \subsection{Proof of Proposition \ref{SinX}} In the following we exhibit an explicit example which can be manipulated by a computer algebra system in characteristic zero. The main program used in our work is {\sc Singular} \cite{DGPS}. Consider $\varmathbb{P}^{10}$ with homogeneous coordinates ${\bf x}=(x_0,...,x_{10})$.
We define the rational normal scroll $S_{1,8}$ by the $2\times2$ minors of the matrix \[\left( \begin{array}{ccccccccc} x_0&x_2&x_3&x_4&x_5&x_6&x_7&x_8&x_9\\ x_1&x_3&x_4&x_5&x_6&x_7&x_8&x_9&x_{10} \end{array} \right).\] In order to project $S_{1,8}$ onto a rational scroll whose singular locus is zero-dimensional and consists of at least eight singular points, we use the method introduced previously to construct a projection \[ \Lambda = \arraycolsep=2.5pt \left(\begin{array}{c} v_1\\v_2\\v_3\\v_4\\v_5\\v_6\end{array}\right)^T= \left(\begin{array}{rrrrrrrrrrr} 0&0&0&120&-34&-203&91&70&-56&13&-1\\ 0&0&2880&5184&-2372&-2196&633&261&-63&-9&2\\ 0&0&0&480&304&-510&-339&30&36&0&-1\\ 0&0&0&144&36&-196&-49&56&14&-4&-1\\ 1&0&1&1&1&1&1&1&1&1&1\\ 0&1&1&0&0&0&0&0&0&0&0 \end{array}\right)^T \] Let ${\bf z}=(z_0,...,z_5)$ be the coordinates for $\varmathbb{P}^5$. Then the projection $\varmathbb{P}^{10}\dashrightarrow\varmathbb{P}^5$ defined by $\Lambda$ can be written explicitly as \[ {\bf z} = {\bf x}\cdot\Lambda. \] Let $S$ be the image of $S_{1,8}$ under the projection. Due to the limits of the author's computer, we check over the finite field of order 31 that $S$ has eight singularities and is smooth otherwise. On the other hand, the double point formula implies that $S$ has eight double points if the singular locus is isolated. Hence the singular locus of $S$ consists of eight double points in characteristic zero, as required. The generators of the ideal of $S$ contain six cubics, so property (\ref{SinX1}) is confirmed. Properties (\ref{SinX2}) and (\ref{SinX4}) can be easily checked by examining the linear combinations of those cubics. The final step is to verify property (\ref{SinX3}). Let $X\subset\varmathbb{P}^5$ be a smooth cubic containing $S$. Let $F_1(S)$ and $F_1(X)$ denote the Fano varieties of lines on $S$ and $X$, respectively. Then it is equivalent to show that $F_1(S)$ deforms in $F_1(X)$ to the first order with dimension two.
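Two quick sanity checks on the matrix above can be carried out numerically; the sketch below is our own and only verifies linear-algebra properties of $\Lambda$ (full rank, and that $v_1,\dots,v_4$ lie in the summand $B$ as in Step 1), not the singularity count itself.

```python
import numpy as np

# Rows v1..v6 of Lambda^T, copied from the display above.
V = np.array([
    [0, 0, 0, 120, -34, -203, 91, 70, -56, 13, -1],
    [0, 0, 2880, 5184, -2372, -2196, 633, 261, -63, -9, 2],
    [0, 0, 0, 480, 304, -510, -339, 30, 36, 0, -1],
    [0, 0, 0, 144, 36, -196, -49, 56, 14, -4, -1],
    [1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
])
Lam = V.T  # 11 x 6: defines the projection P^10 --> P^5

# Lambda must have full rank 6 to map onto P^5.
assert np.linalg.matrix_rank(Lam) == 6

# Step 1 of the construction: v1..v4 lie in B, i.e. their first two
# coordinates vanish.
assert not Lam[:2, :4].any()
```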
Let $\varmathbb{G}(1,5)$ be the Grassmannian of lines in $\varmathbb{P}^5$. Every element ${\bf b}\in\varmathbb{G}(1,5)$ is parametrized by a $2\times6$ matrix \begin{equation}\label{coorGrass} \left(\begin{array}{c} {\bf b}_1\\{\bf b}_2 \end{array}\right) = \left(\begin{array}{cccccc} b_{10}&b_{11}&b_{12}&b_{13}&b_{14}&b_{15}\\ b_{20}&b_{21}&b_{22}&b_{23}&b_{24}&b_{25} \end{array}\right) \end{equation} where ${\bf b}_1$ and ${\bf b}_2$ are two vectors which span the line ${\bf b}$. Let $P_X=P_X({\bf z})$ be the homogeneous polynomial defining $X$. Let $V$ be the 6-dimensional linear space underlying $\varmathbb{P}^5$. Consider $P_X$, via polarization, as a symmetric trilinear function on $V\oplus V\oplus V$. Then $F_1(X)\subset\varmathbb{G}(1,5)$ is cut out by the four equations \begin{equation}\label{FanoX} P_X({\bf b}_1,{\bf b}_1,{\bf b}_1),\; P_X({\bf b}_1,{\bf b}_1,{\bf b}_2),\; P_X({\bf b}_1,{\bf b}_2,{\bf b}_2),\; P_X({\bf b}_2,{\bf b}_2,{\bf b}_2). \end{equation} Consider the Fano variety of lines on $S_{1,8}$ as a rational curve $\varmathbb{P}^1\subset\varmathbb{G}(1,10)$ parametrized by \[Q=\left( \begin{array}{ccccccccccc} r&s&0&0&0&0&0&0&0&0&0\\ 0&0&r^8&r^7s&r^6s^2&r^5s^3&r^4s^4&r^3s^5&r^2s^6&rs^7&s^8 \end{array} \right)\] where $(r,s)$ are the homogeneous coordinates for $\varmathbb{P}^1$. Then $F_1(S)\subset\varmathbb{G}(1,5)$ is defined by the parametric equation \[ R=Q\cdot\Lambda. \] Now consider a $2\times6$ matrix $dR$ whose first row consists of arbitrary linear forms on $\varmathbb{P}^1$ while the second row consists of arbitrary 8-forms. The coefficients of those forms introduce $2\cdot6+9\cdot6=66$ variables $c_1,...,c_{66}$. Then an arbitrary first-order deformation of $F_1(S)$ in $\varmathbb{G}(1,5)$ is given by \[ R+dR. \] Inserting $R+dR$ into (\ref{FanoX}) gives four polynomials in $r$ and $s$ whose coefficients are polynomials in $c_1,...,c_{66}$. The linear parts of the coefficients form a system of linear equations in $c_1,...,c_{66}$ whose associated matrix has rank 53.
The first-order deformations of $F_1(S)$ in $F_1(X)$ then appear as solutions of the system. In addition to the 53 constraints contributed by the above linear equations, we also have \begin{itemize} \item 4 constraints from the ${\rm GL}(2)$ action on the coordinates (\ref{coorGrass}). \item 3 constraints from the automorphism group of $\varmathbb{P}^1$. \item 4 constraints from rescaling the four equations (\ref{FanoX}). \end{itemize} So $F_1(S)$ deforms in $F_1(X)$ to the first order with dimension $66-53-4-3-4=2$. \section{Special cubic fourfolds of discriminant 42}\label{sect:C42} This section proves that a generic special cubic fourfold $X\in\mathcal{C}_{42}$ has a unirational parametrization of odd degree and that $\mathcal{C}_{42}$ is uniruled. We end with a discussion of the difficulty of generalizing our method to higher discriminants. \subsection{The space of singular scrolls} The Zariski closure of the locus of degree-9 scrolls forms a component $\mathcal{H}_9$ in the associated Hilbert scheme. Let $\mathcal{H}_9^8\subset\mathcal{H}_9$ be the closure of the locus parametrizing scrolls with 8 isolated singularities. By Proposition \ref{fourSq} and Theorem \ref{singCodim} we have the estimate: \begin{cor} $\mathcal{H}_9^8$ has codimension at most 8 in $\mathcal{H}_9$. \end{cor} Note that $\mathcal{H}_9^8$ parametrizes non-reduced schemes by definition. In the following, we use an overline to specify an element $\overline{S}\in\mathcal{H}_9^8$ and denote by $S$ its underlying reduced subscheme. Let $U\subset|\mathcal{O}_{\varmathbb{P}^5}(3)|$ be the locus parametrizing smooth cubic fourfolds. Define \[ \mathcal{Z}=\left\{(\overline{S},X)\in\mathcal{H}_9^8\times U:S\subset X\right\}. \] By Proposition \ref{SinX} there exists $(\overline{S},X)\in\mathcal{Z}$ such that $S$ has isolated singularities and $X$ is smooth.
The right projection $p_2:\mathcal{Z}\rightarrow U$ factors through $U_{42}$, the preimage of $\mathcal{C}_{42}$ in $U$. Indeed, by definition $S$ is the image of a rational normal scroll $F\subset\varmathbb{P}^{10}$ under a projection. Let $\epsilon:F\rightarrow X$ be the composition of the projection followed by the inclusion into $X$. Let ${[S]_X}^2$ be the self-intersection of $S$ in $X$. Then the number of singularities $D_{S\subset X}=8$ on $S$ satisfies the double point formula \cite[Th. 9.3]{Ful98}: \[ D_{S\subset X}= \frac{1}{2}\left({[S]_X}^2-\epsilon^*c_2(T_X)+c_1(T_F)\cdot\epsilon^*c_1(T_X)-c_1(T_F)^2+c_2(T_F) \right). \] Using this formula one gets ${[S]_X}^2=41$. Let $h_X$ be the hyperplane class of $X$. Then the intersection table for $X$ is \[\begin{array}{c|cc} &h_X^2 &S\\ \hline h_X^2 &3 &9\\ S &9 &41. \end{array}\] So $X$ has discriminant $3\cdot41-9^2=42$. \subsection{Odd degree unirational parametrizations} \begin{thm}\label{dom} Consider the diagram \[ \xymatrix{ &\mathcal{Z}\ar_{p_1}[ld]\ar^{p_2}[rd]&&\\ \mathcal{H}_9^8&&U_{42}\ar[r]&\mathcal{C}_{42} } \] \begin{enumerate} \item\label{dom1} $\mathcal{Z}$ dominates $U_{42}$. Therefore a general $X\in\mathcal{C}_{42}$ contains a degree-9 rational scroll with 8 isolated singularities which is smooth otherwise. \item $\mathcal{C}_{42}$ is uniruled. \end{enumerate} \end{thm} \begin{proof} Let $(\overline{S},X)\in\mathcal{Z}$ be a pair satisfying Proposition \ref{SinX}. Then \[ h^0(\varmathbb{P}^5,\mathcal{I}_S(3))=6. \] On the other hand, the short exact sequence \[ 0\rightarrow\mathcal{I}_S(3)\rightarrow\mathcal{O}_{\varmathbb{P}^5}(3)\rightarrow\mathcal{O}_S(3)\rightarrow0 \] implies that \begin{equation}\label{h0I(3)} h^0(\varmathbb{P}^5,\mathcal{I}_S(3))\geq h^0(\varmathbb{P}^5,\mathcal{O}_{\varmathbb{P}^5}(3))-h^0(S,\mathcal{O}_S(3)). \end{equation} Let $F\subset\varmathbb{P}^{10}$ be the preimage scroll of $S$.
Then $H^0(S,\mathcal{O}_S(3))$ consists of the sections in $H^0(F,\mathcal{O}_F(3))$ which do not separate the two preimage points of any singular point. We have $h^0(F,\mathcal{O}_F(3))=58$ by Lemma \ref{hHir}, so $h^0(S,\mathcal{O}_S(3))=58-8=50$. So the right hand side of (\ref{h0I(3)}) equals $56-50=6$. Thus $h^0(\varmathbb{P}^5,\mathcal{I}_S(3))$ attains its minimal value 6. The left projection $p_1:\mathcal{Z}\rightarrow\mathcal{H}_9^8$ has fiber $\varmathbb{P} H^0(\varmathbb{P}^5,\mathcal{I}_S(3))$ over each $\overline{S}\in\mathcal{H}_9^8$. Because the fiber dimension is an upper-semicontinuous function, there is an open subset $V\subset\mathcal{H}_9^8$ containing $\overline{S}$ such that $\mathcal{Z}$ is a $\varmathbb{P}^5$-bundle over $V$. We have $\dim\mathcal{H}_9=59$ by Proposition \ref{SSHilb}. Hence $\dim\mathcal{H}_9^8\geq59-8=51$ by Theorem \ref{singCodim}. Thus $\mathcal{Z}$ has dimension at least $51+5=56$ in a neighborhood of $(\overline{S},X)$. By Proposition \ref{SinX} (\ref{SinX3}), $\mathcal{Z}$ has fiber dimension at most 2 over an open subset of $p_2\left(\mathcal{Z}\right)$ which contains $X$. Hence $p_2\left(\mathcal{Z}\right)$ has dimension at least $56-2=54$ in a neighborhood of $X$. On the other hand, $U_{42}$ is an irreducible divisor in $U$. In particular, $U_{42}$ has dimension 54. So $\mathcal{Z}$ must dominate $U_{42}$. Next we prove the uniruledness of $\mathcal{C}_{42}$. We already know that $\mathcal{Z}$ has an open dense subset $\mathcal{Z}^\circ$ isomorphic to a $\varmathbb{P}^5$-bundle over $V\subset\mathcal{H}_9^8$. If we can prove that the composition $\mathcal{Z}^\circ\xrightarrow{p_2}U_{42}\rightarrow\mathcal{C}_{42}$ does not factor through this bundle map, then we are done. Let $(\overline{S},X)\in\mathcal{Z}^\circ$ be the pair as before. By Proposition \ref{SinX}, $S$ is also contained in a singular cubic $Y$. Assume instead that the map $\mathcal{Z}^\circ\rightarrow\mathcal{C}_{42}$ factors through the bundle map.
Then all of the cubics in $\varmathbb{P} H^0(\varmathbb{P}^5,\mathcal{I}_S(3))$ would be in the same $\varmathbb{P}{\rm GL}(6)$-orbit. In particular, the smooth cubic $X$ and the singular cubic $Y$ would be isomorphic, but this is impossible. \end{proof} \begin{prop}\label{varrho}\cite[Prop. 38]{Has16} \cite[Prop. 7.4]{HT01} Let $X$ be a cubic fourfold and $S\subset X$ be a rational surface. Suppose $S$ has isolated singularities and smooth normalization, with invariants $D=\deg S$, section genus $g_H$, and self-intersection $\left<S,S\right>_X$. If \begin{equation}\label{varrhoIneq} \varrho=\varrho(S,X):=\frac{D(D-2)}{2}+(2-2g_H)-\frac{\langle S,S\rangle_X}{2}>0, \end{equation} then $X$ admits a unirational parametrization $\rho:\varmathbb{P}^4\dashrightarrow X$ of degree $\varrho$. \end{prop} \begin{cor} A general $X\in\mathcal{C}_{42}$ has a unirational parametrization of degree 13. \end{cor} \begin{proof} By Theorem \ref{dom} (\ref{dom1}), a general cubic fourfold $X\in\mathcal{C}_{42}$ contains a degree-9 scroll $S$ having 8 isolated singularities, with $\langle S,S\rangle_X=41$ and $g_H=0$. Thus $\varrho=\frac{9\cdot7}{2}+2-\frac{41}{2}=13$ by Proposition \ref{varrho}. \end{proof} \subsection{Problems in higher discriminants} Let $\Delta\subset\Sigma^{[2]}$ denote the divisor parametrizing non-reduced subschemes. Recall that it is a $\varmathbb{P}^1$-bundle over $\Sigma$. Its fibers correspond to smooth rational curves of degree $2n+1$ in $F_1(X)$, where the polarization on $F_1(X)$ is induced from $\varmathbb{G}(1,5)$. Each rational scroll $S\subset X$ induced by these rational curves has the intersection product \[\begin{array}{c|cc} &h_X^2 &S\\ \hline h_X^2 &3 &2n+1\\ S &2n+1 &2n^2+2n+1, \end{array}\] where $h_X$ is the hyperplane section class of $X$. One can compute by the double point formula that $S$ has $n(n-2)$ singularities provided that they are all isolated. \cite[Prop.
7.2]{HT01} In order to obtain an odd degree unirational parametrization for a generic member in $\mathcal{C}_d$ by Proposition \ref{varrho}, we need the existence of a degree $2n+1$ scroll $S\subset\varmathbb{P}^5$ with isolated singularities which has $n(n-2)$ singularities and is contained in a cubic fourfold $X$. We also need an estimate on the dimension of the associated Hilbert scheme $\mathcal{H}_{2n+1}^{n(n-2)}$ which contains $S$. Section \ref{sect:constr} develops a method to find such $S$, but the existence of a cubic fourfold $X$ containing $S$ requires examination with a computer. This works well for $n=4$ because in this case a generic such $S$ is contained in a cubic hypersurface. However, the same phenomenon may fail when $n\geq5$. Indeed, the Hilbert scheme $\mathcal{H}_{2n+1}^{n(n-2)}$ of degree $2n+1$ scrolls with $n(n-2)$ singularities satisfies $\dim\mathcal{H}_{2n+1}^{n(n-2)}\geq-n^2+14n+11$ by Theorem \ref{singCodim} and Proposition \ref{SSHilb}. When $5\leq n\leq8$, we have $\dim\mathcal{H}_{2n+1}^{n(n-2)}\geq55$, the dimension of the space of cubic hypersurfaces in $\varmathbb{P}^5$, so a generic $S\in\mathcal{H}_{2n+1}^{n(n-2)}$ is not contained in a cubic fourfold. We don't know what happens when $n\geq 9$, but working in this range involves tedious trial and error. \noindent{\bf Question.} Assume $n\geq2$. Let $S\subset\varmathbb{P}^5$ be a degree $2n+1$ rational scroll which has $n(n-2)$ isolated singularities and is smooth otherwise. When is $S$ contained in a cubic fourfold? \section{The Hilbert scheme of rational scrolls}\label{sect:HilbSS} Let $N\geq3$ be an integer. The Hilbert polynomial $P_S$ for a degree $D$ smooth surface $S\subset\varmathbb{P}^N$ has the following form \[ P_S(x) = \frac{1}{2}Dx^2+\left(\frac{1}{2}D+1-\pi\right)x+1+p_a, \] where $\pi$ is the genus of a generic hyperplane section and $p_a$ is the arithmetic genus of $S$. \cite[V, Ex 1.2]{Har77} We are interested in the case when $S$ is a rational scroll.
In this case $\pi=p_a=0$, so \[ P_S(x) = \frac{D}{2}x^2+\left(\frac{D}{2}+1\right)x+1. \] Every smooth surface with the same Hilbert polynomial also has $\pi=p_a=0$ and is thus rational. We denote by ${\rm Hilb}_{P_S}(\varmathbb{P}^N)$ the Hilbert scheme of subschemes in $\varmathbb{P}^N$ with Hilbert polynomial $P_S$. The closure of the locus parametrizing degree $D$ scrolls forms a component $\mathcal{H}_D\subset{\rm Hilb}_{P_S}(\varmathbb{P}^N)$. We study this space by stratifying it according to the types of the scrolls. Recall that, by fixing a rational normal scroll $S_{u,v}\subset\varmathbb{P}^{D+1}$ where $D=u+v$, a rational scroll $S\subset\varmathbb{P}^N$ of type $(u,v)$ is either $S_{u,v}$ itself or the image of $S_{u,v}$ projected from a disjoint $(D-N)$-plane. We define $\mathcal{H}_{u,v}\subset\mathcal{H}_D$ as the closure of the subset consisting of smooth rational scrolls of type $(u,v)$. In this section, we will first show that \begin{prop}\label{SSHilb} Assume $D+1\geq N\geq 3$. \begin{enumerate} \item\label{SSHilb1} $\mathcal{H}_D$ is generically smooth of dimension $(N+1)(D+2)-7$. \item\label{SSHilb2} $\mathcal{H}_{u,v}$ is unirational of dimension $(D+2)N+2u-4-\delta_{u,v}$, \end{enumerate} where $\delta_{u,v}$ is the Kronecker delta. We also have \begin{enumerate}\setcounter{enumi}{2} \item\label{SSHilb3} $\mathcal{H}_{u,v}\subset\mathcal{H}_{u+k,v-k}$ for $0\leq 2k<v-u$, and $\mathcal{H}_{\lfloor\frac{D}{2}\rfloor,\lceil\frac{D}{2}\rceil}=\mathcal{H}_D$. \end{enumerate} \end{prop} When $D+1=N$, a generic element of $\mathcal{H}_{u,v}$ is projectively equivalent to a fixed rational normal scroll $S_{u,v}\subset\varmathbb{P}^{D+1}$. In this case $\mathcal{H}_{u,v}$ is birational to the quotient of $\varmathbb{P}{\rm GL}(D+2)$ by the stabilizer of $S_{u,v}$. When $D\geq N$, a generic element in $\mathcal{H}_{u,v}$ is the projection of $S_{u,v}$ from a $(D-N)$-plane.
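As a quick numerical cross-check of the Hilbert polynomial above (our own sketch, not part of the argument): $P_S(1)=D+2$, consistent with a degree-$D$ rational normal scroll spanning $\varmathbb{P}^{D+1}$, and for $D=9$ we get $P_S(3)=58$, matching the value $h^0(F,\mathcal{O}_F(3))=58$ used in Section \ref{sect:C42} (the higher cohomology vanishes by Lemma \ref{hHir}).

```python
from fractions import Fraction

def hilb_scroll(D, k):
    """P_S(k) = (D/2)k^2 + (D/2 + 1)k + 1 for a degree-D rational scroll."""
    half = Fraction(D, 2)
    return half * k * k + (half + 1) * k + 1

# A degree-D rational normal scroll spans P^{D+1}: P_S(1) = D + 2.
assert all(hilb_scroll(D, 1) == D + 2 for D in range(3, 12))

# For D = 9, P_S(3) = 58 = h^0(F, O_F(3)) for the degree-9 scroll F.
assert hilb_scroll(9, 3) == 58
```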
Note that $\mathcal{H}_{u,v}$ also records the scrolls equipped with embedded points along their singular loci. Such elements occur when the $(D-N)$-plane meets the secant variety of $S_{u,v}$. We denote by $\mathcal{H}_{u,v}^r\subset\mathcal{H}_{u,v}$ the closure of the subset parametrizing the schemes such that the singular locus of each of the underlying varieties consists of $\geq r$ isolated singularities. Let $\mathcal{H}_D^r\subset\mathcal{H}_D$ denote the union of $\mathcal{H}_{u,v}^r$ over all possible types. The main goal of this section is to prove the following theorem. \begin{thm}\label{singCodim} Assume $D\geq N\geq 5$, and assume the existence of a degree $D$ rational scroll with isolated singularities in $\varmathbb{P}^N$ which has at least $r$ singularities. Suppose $rN\leq(D+2)^2-1$; then $\mathcal{H}_D^r$ has codimension at most $r(N-4)$ in $\mathcal{H}_D$. In particular, when $r=1$, $\mathcal{H}_D^1$ is unirational of codimension exactly $N-4$. \end{thm} \subsection{The component of rational scrolls} Here we give a general picture of the component $\mathcal{H}_D$ and also prove Proposition \ref{SSHilb}. Note that Proposition \ref{SSHilb} (\ref{SSHilb1}) follows immediately from Lemma \ref{h0h1}. As mentioned before, $\mathcal{H}_{u,v}$ is birational to $\varmathbb{P}{\rm GL}(D+2)$ when $D+1=N$. In order to study the case $D\geq N$, we introduce the \emph{projective Stiefel variety}. \begin{defn}\label{pStfl} Let $V_{N+1}(\varmathbb{C}^{D+2})={\rm GL(D+2)}/{\rm GL(D-N+1)}$ be the homogeneous space of $(N+1)$-frames in $\varmathbb{C}^{D+2}$. The group $\varmathbb{C}^*$ acts on $V_{N+1}(\varmathbb{C}^{D+2})$ by rescaling, which induces a geometric quotient $\varmathbb{V}(N,D+1)$ that we call a projective Stiefel variety.
\end{defn} $\varmathbb{V}(N,D+1)$ has a fiber structure over $\varmathbb{G}(N,D+1)$: \[\begin{array}{cccl} \varmathbb{P}{\rm GL}(N+1)&\hookrightarrow&\varmathbb{V}(N,D+1)&\\ &&\downarrow{\scriptstyle p}&\\ &&\varmathbb{G}(N,D+1)&. \end{array}\] An element $\Lambda\in\varmathbb{V}(N,D+1)$ over $P\in\varmathbb{G}(N,D+1)$ can be expressed as a $(D+2)\times(N+1)$-matrix \[ \Lambda = \left(\begin{array}{cccc} v_1&v_2&\dots&v_{N+1}\end{array}\right)_{(D+2)\times(N+1)} \] up to rescaling, where $v_1,...,v_{N+1}$ are column vectors which form a basis of the underlying vector space of $P$. In particular, each $\Lambda\in\varmathbb{V}(N,D+1)$ naturally defines a projection $\cdot\Lambda:\varmathbb{P}^{D+1}\dashrightarrow\varmathbb{P}^N$ by multiplying the coordinates from the right. Let $S_{u,v}\subset\varmathbb{P}^{D+1}$ be the rational normal scroll given by the standard parametrization (\ref{stdRNS}). When $D\geq N$, every rational scroll in $\mathcal{H}_{u,v}$ is the image of $S_{u,v}$ under the projection defined by some $\Lambda\in\varmathbb{V}(N,D+1)$. So there is a dominant rational map \begin{equation}\label{grassH} \begin{array}{cccc} \pi=\pi(S_{u,v}):&\varmathbb{V}(N,D+1)&\dashrightarrow&\mathcal{H}_{u,v}\\ &\Lambda&\longmapsto&S_{u,v}\cdot\Lambda, \end{array} \end{equation} where $S_{u,v}\cdot\Lambda$ is the rational scroll given by the parametric equation \[\begin{array}{ccc} \varmathbb{C}^2 &\longrightarrow &\varmathbb{P}^N\\ (s,t) &\longmapsto &(1,s,...,s^u,t,st,...,s^vt)\cdot\Lambda. \end{array}\] \begin{proof}[Proof of Proposition \ref{SSHilb} (\ref{SSHilb2})] Both $\varmathbb{P}{\rm GL}(D+2)$ and $\varmathbb{V}(N,D+1)$ are rational quasi-projective varieties, so $\mathcal{H}_{u,v}$ is unirational either when $D+1=N$ or $D\geq N$ by the above construction. The formula for the dimension of $\mathcal{H}_{u,v}$ holds by \cite[Lemma 2.6]{Cos06}. 
\end{proof} \begin{proof}[Proof of Proposition \ref{SSHilb} (\ref{SSHilb3})] By Lemma \ref{embDef}, there exists an embedded deformation $\mathcal{S}$ in $\varmathbb{P}^{D+1}$ over the dual numbers $D_t=\frac{\varmathbb{C}[t]}{(t^2)}$ with $\mathcal{S}_0\cong S_{u,v}$ and $\mathcal{S}_t\cong S_{u+k,v-k}$ for $t\neq0$. For every rational scroll $S\in\mathcal{H}_{u,v}$, we can find a $\Lambda\in\varmathbb{V}(N,D+1)$ such that $S=S_{u,v}\cdot\Lambda$. Then $\mathcal{S}\cdot\Lambda$ defines an infinitesimal deformation of $S$ to a rational scroll of type $(u+k,v-k)$, which forces the inclusion $\mathcal{H}_{u,v}\subset\mathcal{H}_{u+k,v-k}$ to hold. When $(u,v)=(\lfloor\frac{D}{2}\rfloor,\lceil\frac{D}{2}\rceil)$, i.e. when $u=v$ or $u=v-1$, we have $\dim\mathcal{H}_D=\dim\mathcal{H}_{u,v}=(N+1)(D+2)-7$ by Proposition \ref{SSHilb} (\ref{SSHilb1}) and (\ref{SSHilb2}). Because $\mathcal{H}_D=\bigcup_{u+v=D}\mathcal{H}_{u,v}$, we must have $\mathcal{H}_{\lfloor\frac{D}{2}\rfloor,\lceil\frac{D}{2}\rceil}=\mathcal{H}_D$. \end{proof} \subsection{Projections that produce one singularity} We are ready to study the locus in $\mathcal{H}_D$ which parametrizes singular scrolls. Assume $D\geq N\geq 5$. Let us start by studying the projections that produce one singularity. \begin{proof}[Notations \& Facts]\renewcommand{\qedsymbol}{} Let $K$ and $L$ be any linear subspaces of $\varmathbb{P}^{D+1}$. \begin{enumerate} \item We use the same symbol to denote a projective space and its underlying vector space. The dimension always means the projective dimension. \item When $K\subset L$, we write $K^{\perp L}$ for the orthogonal complement of $K$ in $L$. When $L=\varmathbb{P}^{D+1}$, we write $K^\perp$ instead of $K^{\perp\varmathbb{P}^{D+1}}$. \item $K+L$ means the space spanned by $K$ and $L$. We write it as $K\oplus L$ if $K\cap L=\{0\}$, and write it as $K\oplus_\perp L$ if $K$ and $L$ are orthogonal to each other.
\end{enumerate} The following two relations can be derived by linear algebra. \begin{equation}\label{perpSum}(K\cap L)^\perp = K^\perp+L^\perp.\end{equation} \begin{equation}\label{perpIn}(K\cap L)^{\perp K} = (K\cap L)^\perp\cap K.\end{equation} \end{proof} \begin{defn} Let $l$ and $l'$ be a pair of distinct rulings on $S_{u,v}$, and let $P_{l,\,l'}$ be the 3-plane spanned by them. We define $\sigma(l,l')$ to be a subvariety of $\varmathbb{G}(N,D+1)$ by \[ \sigma(l,l') = \left\{P\in\varmathbb{G}(N,D+1)\,:\,\dim(P\cap P_{l,\,l'}^\perp)\geq N-3\right\}. \] \end{defn} \begin{lemma}\label{sgll} Let $p:\varmathbb{V}(N,D+1)\rightarrow\varmathbb{G}(N,D+1)$ be the bundle map. Then $p^{-1}(\,\sigma(l,l')\,)\subset\varmathbb{V}(N,D+1)$ consists of the projections which produce singularities by making $l$ and $l'$ intersect. \end{lemma} \begin{proof} Let $P\in\varmathbb{G}(N,D+1)$ and $\Lambda\in p^{-1}(P)$ be arbitrary. The target space of the projection map $\cdot\Lambda$ is actually $P$. Let $L\subset\varmathbb{P}^{D+1}$ be any linear subspace; then the image $L\cdot\Lambda$ is identical to $(P^\perp+L)\cap P$. On the other hand, (\ref{perpSum}) and (\ref{perpIn}) imply that $ (P\cap L^\perp)^{\perp P} = (P\cap L^\perp)^\perp\cap P = (P^\perp+L)\cap P. $ Therefore, \[\begin{array}{l} N-1= \dim P-1 = \dim (P\cap L^\perp) + \dim(P\cap L^\perp)^{\perp P}\\ = \dim (P\cap L^\perp) + \dim\left((P^\perp+L)\cap P\right) = \dim (P\cap L^\perp) + \dim\left(L\cdot\Lambda\right). \end{array}\] With $L= P_{l,\,l'}$, the equation implies that \[\begin{array}{rcl} \dim(P\cap P_{l,\,l'}^\perp)\geq N-3&\Leftrightarrow& \dim\left(P_{l,\,l'}\cdot\Lambda\right)\leq2. \end{array}\] It follows that \[ p^{-1}(\,\sigma(l,l')\,) = \left\{\Lambda\in\varmathbb{V}(N,D+1)\,:\,\dim(P_{l,\,l'}\cdot\Lambda)\leq2\right\}.
\] The image $P_{l,\,l'}\cdot\Lambda\subset\varmathbb{P}^N$ lies in a plane if and only if $l$ and $l'$ intersect each other after the projection $\cdot\Lambda:\varmathbb{P}^{D+1}\dashrightarrow\varmathbb{P}^N$. As a consequence, every $\Lambda\in p^{-1}(\,\sigma(l,l')\,)$ defines a projection which produces a singularity by making $l$ and $l'$ intersect. \end{proof} \subsection{The geometry of the variety $\boldsymbol{\sigma(l,l')}$} The properties of the singular scroll locus that we are interested in are the unirationality and the dimension. As a preliminary, we describe here the geometry of the variety $\sigma(l,l')$, which immediately yields the rationality of $\sigma(l,l')$ and also allows us to find its dimension easily. Instead of studying $\sigma(l,l')$ alone, we obtain a clearer picture by considering, more generally, the linear subspaces in $\varmathbb{P}^{D+1}$ which satisfy a certain intersection condition. Fix a $(D-3)$-plane $L\subset\varmathbb{P}^{D+1}$. For every $j\geq0$, we define \begin{equation}\label{sb} \sigma_j(L) = \left\{\,P\in\varmathbb{G}(N,D+1):\dim(P\cap L)\geq N-4+j\,\right\}. \end{equation} For example, $\sigma_0(L)=\varmathbb{G}(N,D+1)$, and $\sigma_1(P_{l,\,l'}^\perp)=\sigma(l,l')$. Note that $P\subset L$ or $L\subset P$ if $j\geq\min\left(4\,,\,D-N+1\right)$ in (\ref{sb}), so we have \[\begin{array}{lrl} \sigma_j(L)\supsetneq\sigma_{j+1}(L) &{\rm if}&0\leq j<\min\left(4\,,\,D-N+1\right),\\ \sigma_j(L)=\sigma_{j+1}(L) &{\rm if}&j\geq\min\left(4\,,\,D-N+1\right). \end{array}\] Define $\sigma^\circ_j(L) = \left\{P\in\varmathbb{G}(N,D+1)\,:\,\dim(P\cap L)=N-4+j\right\}$; then \[\begin{array}{lrl} \sigma^\circ_j(L)=\sigma_{j}(L)-\sigma_{j+1}(L) &{\rm if}&0\leq j<\min\left(4\,,\,D-N+1\right),\\ \sigma^\circ_j(L)=\sigma_{j}(L) &{\rm if}&j=\min\left(4\,,\,D-N+1\right). \end{array}\] \begin{lemma}\label{blSgm} Assume $1\leq j<\min\left(4\,,\,D-N+1\right)$; then $\sigma_j(L)$ is singular along $\sigma_{j+1}(L)$ and smooth otherwise.
The singularity can be resolved by a $\varmathbb{G}(3-j\,,\,D-N+4-j)$-bundle over $\varmathbb{G}(N-4+j\,,\,D-3)$. In particular, $\sigma_j(L)$ is rational with codimension $j(N-3+j)$ in $\varmathbb{G}(N,D+1)$. \end{lemma} \begin{proof} We define ${\bf G}_j(L)$ to be the fiber bundle \[\begin{array}{ccc} \varmathbb{G}(3-j\,,\,D-N+4-j)&\,\hookrightarrow\,&{\bf G}_j(L)\\ &&\downarrow\\ &&\varmathbb{G}(N-4+j\,,\,L) \end{array}\] by taking $\varmathbb{G}(3-j\,,\,Q^\perp)$ as the fiber over $Q\in\varmathbb{G}(N-4+j\,,\,L)$. Evidently ${\bf G}_j(L)$ is smooth and rational. We denote an element of ${\bf G}_j(L)$ by $(Q,R)$, where $Q$ belongs to the base and $R$ belongs to the fiber over $Q$. In the following, we will construct a birational morphism from ${\bf G}_j(L)$ to $\sigma_j(L)$, which immediately determines the rationality and the codimension. Then we will study the singular locus by analyzing the tangent cone to $\sigma_j(L)$ at a point on $\sigma_{j+1}(L)$. \noindent\emph{Step 1. A birational morphism from ${\bf G}_j(L)$ to $\sigma_j(L)$.} Every $P\in\sigma^\circ_j(L)$ can be decomposed as $P=(P\cap L)\oplus_\perp(P\cap L)^{\perp P}$. Because $P\cap L\in\varmathbb{G}(N-4+j\,,\,L)$ and $(P\cap L)^{\perp P}$ is a $(3-j)$-plane in $(P\cap L)^\perp$, this induces a morphism \[\begin{array}{cccc} \iota:&\sigma^\circ_j(L)&\longrightarrow&{\bf G}_j(L)\\ &P&\longmapsto&\left(P\cap L\,,\,(P\cap L)^{\perp P}\right). \end{array}\] On the other hand, $Q\oplus_\perp R\in\sigma_{j}(L)$ for every $(Q,R)\in{\bf G}_j(L)$ since $\dim(Q\cap L)=N-4+j$ by definition. Thus there is a morphism \begin{equation}\label{blSg} \begin{array}{cccc} \epsilon:&{\bf G}_j(L)&\longrightarrow&\sigma_j(L)\\ &(Q,R)&\longmapsto&Q\oplus_\perp R. \end{array} \end{equation} Clearly, the composition $\epsilon\circ\iota$ is the same as the inclusion $\sigma^\circ_j(L)\subset\sigma_j(L)$. Therefore $\epsilon$ is a birational morphism.
The smoothness and rationality of ${\bf G}_j(L)$ imply that $\sigma^\circ_j(L)$ is smooth and that $\sigma_j(L)$ is rational. Moreover, \[\begin{array}{l} \dim\sigma_j(L) = \dim{\bf G}_j(L)\\ =(4-j)(D-N+1)+(N-3+j)(D-N+1-j)\\ =(N+1)(D-N+1)-j(N-3+j)\\ =\dim\varmathbb{G}(N,D+1)-j(N-3+j). \end{array}\] Hence $\sigma_j(L)$ has codimension $j(N-3+j)$ in $\varmathbb{G}(N,D+1)$. \noindent\emph{Step 2. The tangent cones to $\sigma_j(L)$.} Choose any $P\in\sigma_j(L)$ and fix a $\phi\in T_P\varmathbb{G}(N,D+1)\cong{\rm Hom}\left(P,P^\perp\right)$. Let $T_P{\sigma_j(L)}$ be the tangent cone to $\sigma_j(L)$ at $P$. By definition, $\phi\in T_P{\sigma_j(L)}$ if and only if the condition $\dim(P\cap L)\geq N-4+j$ is preserved when $P$ moves infinitesimally in the direction of $\phi$, which is equivalent to the condition that $P\cap L$ has a subspace $Q$ of dimension $N-4+j$ such that $\phi(Q)\subset L$. Consider the decomposition \[ P^\perp = (P^\perp\cap L)\oplus_\perp(P^\perp\cap L)^{\perp P^\perp}. \] Define \[ \Gamma:{\rm Hom}\left(P,P^\perp\right)\rightarrow {\rm Hom}\left(P\cap L,(P^\perp\cap L)^{\perp P^\perp}\right) \] to be the composition of the restriction to $P\cap L$ followed by the right projection of the above decomposition. For any subspace $Q\subset P\cap L$, $\phi(Q)\subset L$ if and only if $\phi(Q)\subset P^\perp\cap L$, if and only if $Q\subset\ker\Gamma(\phi)$. So $P\cap L$ has a subspace $Q$ of dimension $N-4+j$ such that $\phi(Q)\subset L$ if and only if the (projective) dimension of $\ker\Gamma(\phi)$ is at least $N-4+j$. Therefore, \begin{equation}\label{TSgker} T_P{\sigma_j(L)} = \left\{\phi\in{\rm Hom}\left(P,P^\perp\right):\dim\left(\ker\Gamma(\phi)\right)\geq N-4+j\right\}. \end{equation} Note that $\sigma_j(L)$ is the disjoint union of $\sigma^\circ_{j+k}(L)$ for all $k$ satisfying \[0\leq k\leq\min\left(4\,,\,D-N+1\right)-j.\] Assume $P\in\sigma^\circ_{j+k}(L)$, i.e.
$\dim(P\cap L) = N-4+j+k$, then (\ref{TSgker}) is equivalent to \begin{equation}\label{TSgrk} T_P{\sigma_j(L)} = \left\{\phi\in{\rm Hom}\left(P,P^\perp\right):{\rm rk}\,\Gamma(\phi)\leq k\right\}. \end{equation} When $k=0$, the constraint becomes ${\rm rk}\,\Gamma(\phi)=0$, so $T_P{\sigma_j(L)} = \ker\Gamma$ is a vector space. This reflects the fact that $\sigma_j(L)$ is smooth on $\sigma^\circ_j(L)$ for all $j$. On the other hand, from the inequality \[ \dim(P\cap L)+\dim(P^\perp\cap L)\leq\dim(L)-1, \] we get \[\begin{array}{l} \dim(P^\perp\cap L)\leq\dim(L)-\dim(P\cap L)-1\\ = (D-3)-(N-4+j+k)-1=D-N-j-k.\\ \end{array}\] It follows that \[\begin{array}{l} \dim\left((P^\perp\cap L)^{\perp P^\perp}\right) = \dim(P^\perp)-\dim(P^\perp\cap L)-1\\ \geq (D-N)-(D-N-j-k)-1 = j+k-1. \end{array}\] So $\dim\left((P^\perp\cap L)^{\perp P^\perp}\right)\geq j+k-1\geq k$ once $k\geq1$. Under this condition, a linear combination of rank-$k$ members of ${\rm Hom}\left(P\cap L,(P^\perp\cap L)^{\perp P^\perp}\right)$ can have rank exceeding $k$. So $T_P{\sigma_j(L)}$ cannot be a vector space, and thus $P$ is a singular point of $\sigma_j(L)$. \end{proof} Recall that $\sigma(l,l')=\sigma_1(P_{l,\,l'}^\perp)$, so Lemma \ref{blSgm} implies that \begin{cor}\label{sgCodim} $\sigma(l,l')$ is rational with codimension $N-2$ in $\varmathbb{G}(N,D+1)$. \end{cor} \subsection{Families of the projections} The singularities we have studied are those produced by the intersection of a fixed pair of distinct rulings. Now we are going to make use of the variety $\sigma(l,l')$ to control multiple singularities. Let ${\varmathbb{P}^1}^{[2]}$ be the Hilbert scheme of two points on $\varmathbb{P}^1$ and $U\subset{\varmathbb{P}^1}^{[2]}$ be the open subset parametrizing reduced subschemes. On the rational normal scroll $S_{u,v}$, the set of $r$ pairs of distinct rulings \[ \left\{ (l_1+{l_1}',..., l_r+{l_r}') : l_i\neq{l_i}'\;\forall i \right\} \] is parametrized by $U^{\times r}$.
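The codimension bookkeeping behind Lemma \ref{blSgm} and Corollary \ref{sgCodim} can also be checked mechanically; the sketch below (an illustration added here, not part of the argument) verifies the identity $\dim{\bf G}_j(L)=\dim\varmathbb{G}(N,D+1)-j(N-3+j)$ over many concrete values.

```python
# Illustrative sketch: mechanically verify the codimension count of
# Lemma blSgm, i.e. that the resolving bundle G_j(L) has dimension
# dim G(N, D+1) - j(N-3+j).

def dim_grass(k, n):
    # projective dimension of the Grassmannian G(k, n) of k-planes in P^n
    return (k + 1) * (n - k)

for N in range(5, 12):
    for D in range(N, 25):
        for j in range(1, min(4, D - N + 1)):
            # fiber G(3-j, D-N+4-j) over the base G(N-4+j, D-3)
            dim_bundle = (dim_grass(3 - j, D - N + 4 - j)
                          + dim_grass(N - 4 + j, D - 3))
            assert dim_bundle == dim_grass(N, D + 1) - j * (N - 3 + j)

# Corollary sgCodim is the case j = 1: codimension 1*(N-3+1) = N-2.
assert all(1 * (N - 3 + 1) == N - 2 for N in range(5, 12))
```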
Let $\Sigma_r$ be a subset of $U^{\times r}\times\varmathbb{G}(N,D+1)$ defined by \[ \Sigma_r = \left\{ (l_1+{l_1}',...,l_r+{l_r}', P)\in U^{\times r}\times\varmathbb{G}(N,D+1)\,:\,P\in\bigcap_{i=1}^r\sigma(l_i,{l_i}')\right\}. \] Let $p_1$ be the left projection and $p_2$ the right projection. Then there is a diagram \[\begin{array}{ccccc} \bigcap_{i=1}^r\sigma(l_i,{l_i}')&\subset&\Sigma_r&\xrightarrow{\scriptstyle p_2}&\varmathbb{G}(N,D+1)\\ \downarrow&&\!\!\!{\scriptstyle p_1}\downarrow\enspace&&\\ (l_1+{l_1}',...,l_r+{l_r}')&\in&\enspace {\rm U}^{\times r}.\!\!&& \end{array}\] By Lemma \ref{sgll}, the image $p_2(\Sigma_r)$ consists of the $N$-planes such that the projections to them produce at least $r$ singularities. By the diagram above and Corollary \ref{sgCodim}, the codimension of $\Sigma_r$ in $U^{\times r}\times\varmathbb{G}(N,D+1)$ is at most $r(N-2)$, so $\dim\Sigma_r\geq\dim\varmathbb{G}(N,D+1)-r(N-4)$. When $r=1$, $\Sigma_1$ is rational of dimension exactly $\dim\varmathbb{G}(N,D+1)-(N-4)$. Our goal is to compute the dimension of $p_2(\Sigma_r)$, so we need to know whether $p_2$ is generically finite onto its image. It turns out that the condition below is sufficient (see Lemma \ref{SgrCodim}): \begin{equation}\label{ass} \parbox[c]{9.5cm}{ There exists a rational scroll with isolated singularities $S\subset\varmathbb{P}^N$ of type $(u,v)$ which has at least $r$ singularities. } \end{equation} By considering $S$ as the projection of $S_{u,v}$ from $P^\perp$ for some $N$-plane $P$, we can apply Corollary \ref{isoSingEq} to translate (\ref{ass}) into the equivalent statement: \begin{equation}\tag{\ref{ass}'}\label{ass'} \parbox[c]{9.5cm}{ There exists an $N$-plane $P$ such that $P^\perp$ intersects $S(S_{u,v})$ in $\geq r$ points away from $T(S_{u,v})\cup Z_2$. } \end{equation} \begin{prop}\label{assHold} (\ref{ass}) holds for $r\leq D-N+1$. \end{prop} \begin{proof} By \cite[Prop. 2.2]{CM96} and \cite[Example 19.10]{Har95}, $\deg\left(S(S_{u,v})\right)={D-2\choose 2}$.
Since $\dim\left(S(S_{u,v})\right)=5$ and $T(S_{u,v})\cup Z_2$ forms a proper closed subvariety of $S(S_{u,v})$, we can use Bertini's theorem to choose a $(D-4)$-plane $R$ which intersects $S(S_{u,v})$ in ${D-2\choose 2}$ points outside $T(S_{u,v})\cup Z_2$. It is easy to check that ${D-2\choose 2}\geq D-N+1$. Thus we can choose $D-N+1$ of the intersection points to span a $(D-N)$-plane $Q\subset R$. Then $P=Q^\perp$ satisfies the hypothesis. \end{proof} Unfortunately, Proposition \ref{assHold} does not cover the case $D=9$, $N=5$ and $r=8$ in our proof of the unirationality of discriminant 42 cubic fourfolds. In the following, we estimate the dimension of $p_2(\Sigma_r)$ under the assumption (\ref{ass}) and leave the construction of examples to Section \ref{sect:constr}. \begin{lemma}\label{SgrCodim} Suppose (\ref{ass}) holds. Then $p_2(\Sigma_r)$ has codimension $\leq r(N-4)$ in $\varmathbb{G}(N,D+1)$. When $r=1$, $p_2(\Sigma_1)$ is rational of codimension exactly $N-4$. \end{lemma} \begin{proof} Let $l$ and $l'$ be distinct rulings on $S_{u,v}$. We write $P_{l,\,l'}$ for the 3-plane spanned by them. Note that $S(S_{u,v}) = \overline{\,\bigcup_{l\neq l'} P_{l,\,l'}\,}$. Let $P$ be an $N$-plane satisfying (\ref{ass'}). Then there exist $r$ pairs of distinct rulings $\left(l_1,{l_1}'\right),...,\left(l_r,{l_r}'\right)$ such that $P^\perp$ and $P_{l_i,\,{l_i}'}$ intersect in exactly one point for each pair $\left(l_i,{l_i}'\right)$. This implies that $\dim\left(P^\perp+P_{l_i,\,{l_i}'}\right)\leq D-N+3$ for all $i$, which is equivalent to $\dim\left(P\cap P_{l_i,\,{l_i}'}^\perp\right)\geq N-3$ for all $i$ by (\ref{perpSum}). Hence $P\in\bigcap_{i=1}^r\sigma\left(l_i,{l_i}'\right)$, i.e. $P$ belongs to the image of $p_2$.
Suppose $P^\perp$ intersects $S(S_{u,v})$ in $m$ points; then the $\left(l_1+{l_1}',...,l_r+{l_r}'\right)$ in the preimage of $P$ is unique up to the choice of $r$ of the $m$ pairs, the reordering of the $r$ pairs, and the transpositions of the rulings in a pair. Hence $p_2$ is generically finite with $\deg p_2={m\choose r}\cdot r!$. In particular, $\Sigma_r$ and $p_2(\Sigma_r)$ have the same dimension. From $\dim\left(S(S_{u,v})\right)=5$ and our assumption that $N\geq5$, we are able to choose a $(D-N)$-plane which intersects $S(S_{u,v})$ in exactly one point, and this point can be prescribed arbitrarily. Therefore, we can find $P$ so that $P^\perp$ intersects $S(S_{u,v})$ in one point outside $T(S_{u,v})\cup Z_2$. This provides an example of (\ref{ass}) for $m=r=1$. It follows that $p_2$ has degree one, and the image is rational since $\sigma\left(l_1,{l_1}'\right)$ is rational by Corollary \ref{sgCodim}. \end{proof} \subsection{Proof of Theorem \ref{singCodim}} \begin{lemma}\label{singuvCodim} Assume $D\geq N\geq 5$, and assume the existence of a degree $D$ rational scroll $S\subset\varmathbb{P}^N$ with isolated singularities which has at least $r$ singularities. Then $\mathcal{H}_{u,v}^r$ has codimension at most $r(N-4)$ in $\mathcal{H}_{u,v}$. For $r=1$, $\mathcal{H}_{u,v}^1$ is unirational of codimension exactly $N-4$. \end{lemma} \begin{proof} We have the following diagram \[\xymatrix{ &\varmathbb{V}(N,D+1)\ar_{p}[d]\ar@{-->}[r]^{\qquad\pi}&\mathcal{H}_{u,v}\\ \Sigma_r\ar[r]^{p_2\qquad}&\varmathbb{G}(N,D+1).& }\] By definition, $\mathcal{H}_{u,v}^r = \pi\left(p^{-1}\left(p_2(\Sigma_r)\right)\right)$. By Lemma \ref{SgrCodim}, $p_2(\Sigma_1)$ is rational, which implies that $\mathcal{H}_{u,v}^1$ is unirational. It is clear that $p_2(\Sigma_r)$ and $p^{-1}\left(p_2(\Sigma_r)\right)$ have the same codimension.
On the other hand, $p^{-1}\left(p_2(\Sigma_r)\right)$ and $\pi^{-1}\left(\pi\left(p^{-1}\left(p_2(\Sigma_r)\right)\right)\right)$ have the same dimension, since both contain an open dense subset consisting of the projections which generate $r$ singularities, so the codimension of $p^{-1}\left(p_2(\Sigma_r)\right)$ is the same as that of its image under $\pi$. Therefore, $p_2(\Sigma_r)$ and $\mathcal{H}_{u,v}^r$ have the same codimension in their own ambient spaces, and the result follows from Lemma \ref{SgrCodim}. \end{proof} Lemma \ref{singuvCodim} is the special case of Theorem \ref{singCodim} when restricted to the locus of a particular type on the Hilbert scheme. The next lemma shows that a general $S\in\mathcal{H}_D^r$ deforms equisingularly between different types under the assumption $rN\leq(D+2)^2-1$. Hence the dimension estimate made by Lemma \ref{singuvCodim} can be extended regardless of the type. \begin{lemma}\label{equiDim} Assume (\ref{ass}) and $rN\leq(D+2)^2-1$; then $\mathcal{H}_{u,v}^r = \mathcal{H}_{u,v}\cap\mathcal{H}_{u+k,v-k}^r$ for $0\leq 2k<v-u$. \end{lemma} \begin{proof} It is trivial that $\mathcal{H}_{u,v}^r\supset\mathcal{H}_{u,v}\cap\mathcal{H}_{u+k,v-k}^r$. To prove that $\mathcal{H}_{u,v}^r\subset\mathcal{H}_{u,v}\cap\mathcal{H}_{u+k,v-k}^r$, it is sufficient to show that a generic element in $\mathcal{H}_{u,v}^r$ deforms equisingularly to an element in $\mathcal{H}_{u+k,v-k}$. If $(u,v)=\left(\lfloor\frac{D}{2}\rfloor,\lceil\frac{D}{2}\rceil\right)$ then there is nothing to prove, so we assume $(u,v)\neq\left(\lfloor\frac{D}{2}\rfloor,\lceil\frac{D}{2}\rceil\right)$. The elements satisfying (\ref{ass}) form an open dense subset of $\mathcal{H}_{u,v}^r$. Let $S\in\mathcal{H}_{u,v}^r$ be one of them, and assume $S$ is the image of $F\cong S_{u,v}\subset\varmathbb{P}^{D+1}$ projected from some $(D-N)$-plane $Q$. By hypothesis, $F$ has $r$ secants $\gamma_1$, ..., $\gamma_r$ incident to $Q$.
Assume $\gamma_j\cap F=\{x_j,y_j\}$ for $j=1,...,r$. $H^1\left(T_{\varmathbb{P}^{D+1}}|_F\right)=0$ by Lemma \ref{embDef}, so the short exact sequence $ 0\rightarrow T_F\rightarrow T_{\varmathbb{P}^{D+1}}|_F\rightarrow N_{F/\varmathbb{P}^{D+1}}\rightarrow0 $ induces the exact sequence \[ 0\rightarrow H^0\left(T_F\right)\rightarrow H^0\left(T_{\varmathbb{P}^{D+1}}|_F\right) \rightarrow H^0\left(N_{F/\varmathbb{P}^{D+1}}\right)\rightarrow H^1\left(T_F\right)\rightarrow0. \] By Lemma \ref{absDef}, $h^1\left(F,T_F\right)=h^1\left(\varmathbb{P}^1,\mathcal{O}_{\varmathbb{P}^1}(u-v)\right)=v-u-1$, the same as the codimension of $\mathcal{H}_{u,v}$ in $\mathcal{H}_D$; thus a deformation normal to $\mathcal{H}_{u,v}$ is induced from an element in $H^1\left(T_F\right)$. In order to prove that the deformation is equisingular, it is sufficient to prove that for every $\mathcal{F}\in H^1\left(T_F\right)$ and every lift $\mathcal{S}\in H^0\left(N_{F/\varmathbb{P}^{D+1}}\right)$ of $\mathcal{F}$, there exists $\alpha\in H^0\left(T_{\varmathbb{P}^{D+1}}|_F\right)$ such that the vectors $\mathcal{S}(x_j)+\alpha(x_j)\in T_{\varmathbb{P}^{D+1},x_j}$ and $\mathcal{S}(y_j)+\alpha(y_j)\in T_{\varmathbb{P}^{D+1},y_j}$ keep $\gamma_j$ in contact with $Q$ for $j=1,...,r$, so that $\mathcal{S}+\alpha$ is a lift of $\mathcal{F}$ representing an embedded deformation which preserves the incidence of the $r$ secants to $Q$. Note that for arbitrary $p\in\varmathbb{P}^{D+1}$, the tangent space $T_{\varmathbb{P}^{D+1},p}\cong{\rm Hom}\left(p,p^\perp\right)\cong p^\perp$ can be considered as a subspace of $\varmathbb{P}^{D+1}$. We identify a point in $\varmathbb{P}^{D+1}$ with its underlying vector. Let $\gamma=\gamma_j$ for some $j$, and let $\{x,y\}=\gamma\cap S_{u,v}$ with $x=(x_0,...,x_{D+1})$ and $y=(y_0,...,y_{D+1})$.
The condition that $\mathcal{S}(x)+\alpha(x)$ and $\mathcal{S}(y)+\alpha(y)$ keep $\gamma$ in contact with $Q$ is equivalent to the condition that the set of vectors consisting of $x+\mathcal{S}(x)+\alpha(x)$, $y+\mathcal{S}(y)+\alpha(y)$ and the basis of $Q$ is linearly dependent. One can compute that $h^0\left(T_{\varmathbb{P}^{D+1}}|_F\right)=(D+2)^2-1$ by the Euler exact sequence $ 0\rightarrow\mathcal{O}_F\rightarrow\mathcal{O}_F(1)^{\oplus(D+2)}\rightarrow T_{\varmathbb{P}^{D+1}}|_F\rightarrow0 $ and Lemma \ref{hHir}. Let $e_1$, ..., $e_{(D+2)^2-1}$ be a basis of $H^0\left(T_{\varmathbb{P}^{D+1}}|_F\right)$; we write the evaluation of $e_i$ at $p$ as $e_i(p)=(e_i(p)_0,...,e_i(p)_{D+1})$. Let $\alpha=\sum_{i\geq1}\alpha_ie_i$, $\mathcal{S}(x)=\sum_{i\geq1}c_ie_i(x)$ and $\mathcal{S}(y)=\sum_{i\geq1}d_ie_i(y)$, and also write $Q=\left(q_{i,j}\right)$ as a $(D-N+1)\times(D+2)$-matrix. Then the dependence condition is equivalent to the condition that the $(D-N+3)\times(D+2)$-matrix \[ A_\gamma =\left(\begin{array}{c} \alpha_0x+\alpha_0\mathcal{S}(x)+\alpha(x)\\ \alpha_0y+\alpha_0\mathcal{S}(y)+\alpha(y)\\ Q \end{array}\right)\\ \] \[ =\left(\begin{array}{ccc} \alpha_0x_0+\sum\left(\alpha_0c_i+\alpha_i\right)e_i(x)_0&...& \alpha_0x_{D+1}+\sum\left(\alpha_0c_i+\alpha_i\right)e_i(x)_{D+1}\\ \alpha_0y_0+\sum\left(\alpha_0d_i+\alpha_i\right)e_i(y)_0&...& \alpha_0y_{D+1}+\sum\left(\alpha_0d_i+\alpha_i\right)e_i(y)_{D+1}\\ q_{0,0}&...&q_{0,D+1}\\ \vdots&&\vdots\\ q_{D-N,0}&...&q_{D-N,D+1} \end{array}\right) \] has rank at most $D-N+2$. Here we homogenize the first two rows by $\alpha_0$, so that the matrix defines a morphism \[\begin{array}{cccc} {\bf A}_\gamma:&\varmathbb{P}\left(\varmathbb{C}\oplus H^0\left(T_{\varmathbb{P}^{D+1}}|_F\right)\right)\cong\varmathbb{P}^{(D+2)^2-1}& \rightarrow&\varmathbb{P}^{(D-N+3)(D+2)-1}\\ &(\alpha_0,...,\alpha_{(D+2)^2-1})&\mapsto&A_\gamma.
\end{array}\] Let $M_{D-N+2}\subset\varmathbb{P}^{(D-N+3)(D+2)-1}$ be the determinantal variety of matrices of rank at most $D-N+2$. Then ${{\bf A}_\gamma}^{-1}\left(M_{D-N+2}\right)\subset\varmathbb{P}\left(\varmathbb{C}\oplus H^0\left(T_{\varmathbb{P}^{D+1}}|_F\right)\right)$ is an irreducible and nondegenerate subvariety of codimension $N$, whose locus outside $\alpha_0=0$ parametrizes those $\alpha\in H^0\left(T_{\varmathbb{P}^{D+1}}|_F\right)$ such that $\mathcal{S}+\alpha$ preserves the incidence between $\gamma$ and $Q$. It follows that the intersection $\bigcap_{j=1}^r{{\bf A}_{\gamma_j}}^{-1}\left(M_{D-N+2}\right)$ is nonempty by the hypothesis $rN\leq(D+2)^2-1$. Moreover, it is not contained in the hyperplane $\alpha_0=0$ for a generic $S\in\mathcal{H}_{u,v}^r$. Indeed, if this does not hold, then the limit case $\gamma_1=...=\gamma_r$ would also lie inside the hyperplane $\alpha_0=0$. However, the intersection in that case is a multiple of a nondegenerate variety, a contradiction. As a result, for a generic $S\in\mathcal{H}_{u,v}^r$ we can find $\alpha$ from $\bigcap_{j=1}^r{{\bf A}_{\gamma_j}}^{-1}\left(M_{D-N+2}\right)$ which lies on $\{\alpha_0=1\}=H^0\left(T_{\varmathbb{P}^{D+1}}|_F\right)$, so that $\mathcal{S}+\alpha$ preserves the incidence condition between $\gamma_1$, ..., $\gamma_r$ and $Q$. \end{proof} Now we are ready to finish the proof of Theorem \ref{singCodim}. Note that $\mathcal{H}_D^r=\bigcup_{u+v=D}\mathcal{H}_{u,v}^r$. By Lemma \ref{equiDim}, \[ \bigcup_{u+v=D}\mathcal{H}_{u,v}^r = \bigcup_{u+v=D}\left(\mathcal{H}_{u,v}\cap\mathcal{H}_{\lfloor\frac{D}{2}\rfloor,\lceil\frac{D}{2}\rceil}^r\right) = \mathcal{H}_D\cap\mathcal{H}_{\lfloor\frac{D}{2}\rfloor,\lceil\frac{D}{2}\rceil}^r = \mathcal{H}_{\lfloor\frac{D}{2}\rfloor,\lceil\frac{D}{2}\rceil}^r.
\] Therefore $\mathcal{H}_D^r=\mathcal{H}_{\lfloor\frac{D}{2}\rfloor,\lceil\frac{D}{2}\rceil}^r$, and the result follows from Lemma \ref{singuvCodim} with $(u,v) = (\lfloor\frac{D}{2}\rfloor,\lceil\frac{D}{2}\rceil)$. \end{document}
\begin{document} \title{Effects of single-qubit quantum noise on entanglement purification} \author{Giuliano Benenti\inst{1,2}, Sara Felloni\inst{3}, and Giuliano Strini\inst{4}} \institute{Center for Nonlinear and Complex Systems, Universit\`a degli Studi dell'Insubria, Via Valleggio 11, 22100 Como, Italy \and Istituto Nazionale per la Fisica della Materia, Unit\`a di Como and Istituto Nazionale di Fisica Nucleare, Sezione di Milano \and Dipartimento di Matematica, Universit\`a degli Studi di Milano, via Saldini 50, 20133 Milano, Italy \and Dipartimento di Fisica, Universit\`a degli Studi di Milano, via Celoria 16, 20133 Milano, Italy} \titlerunning{Effects of single-qubit quantum noise on entanglement purification} \authorrunning{G. Benenti, S. Felloni, and G. Strini} \date{Received: September 19, 2005} \abstract{We study the stability under quantum noise effects of the quantum privacy amplification protocol for the purification of entanglement in quantum cryptography. We assume that the E91 protocol is used by two communicating parties (Alice and Bob) and that the eavesdropper Eve uses the isotropic Bu\v{z}ek-Hillery quantum copying machine to extract information. Entanglement purification is then operated by Alice and Bob by means of the quantum privacy amplification protocol and we present a systematic numerical study of the impact of all possible single-qubit noise channels on this protocol. We find that both the qualitative behavior of the fidelity of the purified state as a function of the number of purification steps and the maximum level of noise that can be tolerated by the protocol strongly depend on the specific noise channel. 
These results provide valuable information for experimental implementations of the quantum privacy amplification protocol.} \PACS{ {03.65.Yz} Decoherence; open systems; quantum statistical methods \and {03.67.Hk} Quantum communication \and {03.67.Dd} Quantum cryptography } \maketitle \section{Introduction} \label{sec:intro} A central problem of quantum communication is how to reliably transmit quantum information through a noisy quantum channel. The carriers of information (the qubits) unavoidably interact with the external world, leading to phenomena such as decoherence and absorption. In particular, if a member of a maximally entangled EPR (Einstein-Podolsky-Rosen) pair is transmitted from a sender (known as Alice) to a receiver (Bob) through a quantum channel, then noise in the channel can degrade the amount of entanglement of the pair. This problem is of primary importance for entanglement-based quantum cryptography. Indeed, in the idealized E91 protocol \cite{E91} Alice and Bob share a large number of maximally entangled states. Entanglement purification techniques exist \cite{Bennett,Bennett2}. In particular, they have been applied to quantum cryptography: in Ref.~\cite{DEJMPS} an iterative quantum privacy amplification (QPA) protocol was proposed that eliminates entanglement with an eavesdropper by creating a small number of nearly perfect (pure) EPR states out of a large number of partially entangled states. This protocol is based on so-called LOCC, that is, local quantum operations (quantum gates and measurements performed by Alice and Bob on their own qubits) supplemented by classical communication. Under realistic conditions, the quantum operations themselves are unavoidably affected by errors and introduce a certain amount of noise. A first study of the impact of these errors on the QPA protocol was made in Ref.~\cite{briegel} and conditions for the security of QPA were found.
However, the noise model considered in \cite{briegel} was not the most general one. In particular, error channels such as amplitude damping or thermal excitation were not considered. Studies of the impact of noise on the stability of quantum computation and communication are of primary importance for the practical implementation of quantum information protocols. In this paper, for the first time, {\it all} single-qubit quantum noise channels are studied and compared, and their different impact on the quantum privacy amplification protocol is elucidated. Errors acting on a single qubit are most conveniently described in the Bloch sphere picture: quantum noise acting on a single qubit is characterized by 12 parameters, associated to rotations, deformations and displacements of the Bloch sphere. We study in detail the effects of these different errors and show that they impact {\it very differently} on the QPA algorithm. In particular, errors producing a displacement of the Bloch sphere are very dangerous. These results provide valuable information for experimentalists: indeed, knowing which noise channels are most dangerous helps steer experiments towards implementations for which these channels have negligible impact. The paper is organized as follows. The eavesdropper's attack strategy is described in Sec.~\ref{sec:Eve}. Here we assume that the eavesdropper Eve attacks the qubits sent by Alice to Bob by means of the quantum copying machine of Bu\v{z}ek and Hillery \cite{buzekhillery}. As a result, Alice and Bob share partially entangled pairs. Each pair is now entangled with the environment (Eve's qubits) and described by a density operator. The QPA protocol, reviewed in Sec.~\ref{sec:QPA}, can be used to purify entanglement and, as a consequence, reduce the entanglement with any outside system to arbitrarily low values (a maximally entangled EPR pair is a pure state, automatically deentangled from the outside world).
We then consider the effects of noise acting on the purification protocol. The most general single-qubit noise channels are discussed in Sec.~\ref{sec:Blocherrors}. We model each noise channel by means of equivalent quantum circuits, from which the usual Kraus representation and the transformation (rotation, deformation or displacement) of the Bloch sphere coordinates can be derived. The impact of these errors on the entanglement purification is discussed in Sec.~\ref{sec:noisyQPA}. Finally, in Sec.~\ref{sec:conc} we present our conclusions. \section{Eavesdropping} \label{sec:Eve} We assume that Alice has at her disposal a source of EPR pairs and sends a member of each pair to Bob. The eavesdropper Eve wants, on one hand, to extract as much information as possible from the transmitted qubits and, on the other hand, to keep her intrusion as inconspicuous as possible to Alice and Bob. Isotropic cloning by means of the Bu\v{z}ek-Hillery machine \cite{buzekhillery} is the most natural way to meet these two requirements. We also note that isotropy is necessary only in the case in which Alice and Bob use a six-state protocol, that is, the measurements are performed along the $x$, $y$ and $z$ axes of the Bloch sphere. The isotropy condition may be relaxed when Alice and Bob use a four-state protocol: they measure only along $x$ and $z$, and Eve knows the measurement axes. In this case, it would be sufficient for Eve to send Bob qubits that reproduce as faithfully as possible the $x$ and $z$ coordinates, with no constraint on $y$. We have also studied this case (non-isotropic cloning) but, for the sake of simplicity, do not report it in this paper. In the following we assume that, as shown in Fig.~\ref{figBHcrypto}, Eve attacks the qubits sent by Alice using the Bu\v{z}ek-Hillery machine \cite{buzekhillery}.
The two bottom qubits in Fig.~\ref{figBHcrypto} are prepared by Eve in the state \begin{equation} |\Phi\rangle = \alpha |00\rangle+\beta|01\rangle+\gamma|10\rangle+ \delta|11\rangle \label{betgamdel} \end{equation} and we assume that $\alpha,\beta,\gamma,\delta$ are real parameters. Let us call $\rho_B$ and $\rho_E$ the density matrices describing the final states of Bob's qubit and Eve's qubit. As we have said, we assume isotropy, that is, if we call $(x,y,z)$ the coordinates of the qubit sent from Alice to Bob before eavesdropping, then the Bloch sphere coordinates $(x_B,y_B,z_B)$ and $(x_E,y_E,z_E)$ associated to $\rho_B$ and $\rho_E$ are such that $x_B/x=y_B/y=z_B/z\equiv R_B$ and $x_E/x=y_E/y=z_E/z\equiv R_E$. As shown in Appendix~\ref{app:isotropiccloning}, these conditions are fulfilled for \begin{equation} \beta=\frac{\alpha}{2}-\sqrt{\frac{1}{2}-\frac{3}{4}\alpha^2},\quad \gamma=0,\quad \delta=\frac{\alpha}{2}+\sqrt{\frac{1}{2}-\frac{3}{4}\alpha^2}. \label{betadeltabh} \end{equation} It can be checked by direct computation (see again Appendix~\ref{app:isotropiccloning}) that in this case $(x_B,y_B,z_B)=2\alpha\delta(x,y,z)$ and $(x_E,y_E,z_E)=2\alpha\beta(x,y,z)$. Since the ratios $R_B=2\alpha\delta$ and $R_E=2\alpha\beta$ must be real and nonnegative, we obtain $\frac{1}{\sqrt{2}}\le \alpha \le \frac{2}{\sqrt{6}}$. The ratios $R_B\equiv x_B/x=y_B/y=z_B/z$ and $R_E\equiv x_E/x=y_E/y=z_E/z$ are shown in Fig.~\ref{bobeveratios}. It can be seen that the two limiting cases $\alpha=\frac{1}{\sqrt{2}}$ and $\alpha=\frac{2}{\sqrt{6}}$ correspond to no intrusion ($x_B=x,y_B=y,z_B=z$) and maximum intrusion ($x_E=x_B,y_E=y_B,z_E=z_B$), respectively. In the first case, the qubit sent from Alice to Bob is not attacked.
In the latter case Eve makes two imperfect identical copies of the original qubit (symmetric Bu\v{z}ek-Hillery machine), that is $\rho_E=\rho_B$: in this way Eve both optimizes the information obtained about the transmitted state and minimizes the modification of the qubit received by Bob. The degree of Eve's intrusion is therefore conveniently measured by the intrusion parameter \begin{equation} f_\alpha=\frac{\alpha-\frac{1}{\sqrt{2}}}{\frac{2}{\sqrt{6}}- \frac{1}{\sqrt{2}}}, \end{equation} with $0\le f_\alpha \le 1$. \begin{figure} \caption{Top: quantum circuit representing the intrusion (by means of the Bu\v{z}ek-Hillery copying machine) of the eavesdropper Eve in the E91 protocol. The density matrices $\rho_A$, $\rho_B$ and $\rho_E$ represent the states of Alice's qubit, Bob's qubit and Eve's qubit after tracing over all other qubits. Bottom: decomposition of the unitary transformation $W$ in four CNOT gates. By definition, CNOT$|x\rangle|y\rangle= |x\rangle|y\oplus x\rangle$, with $x,y=0,1$ and $\oplus$ indicating addition modulo 2. The first ($x$) qubit in the CNOT gate acts as a control (full circle in the figure) and the second ($y$) as a target qubit ($\oplus$ symbol). Here and in the following circuits, any sequence of logic gates must be read from the left (input) to the right (output). From bottom to top, qubits run from the least significant to the most significant.} \label{figBHcrypto} \end{figure} \begin{figure} \caption{Ratios $R_B$ (solid line) and $R_E$ (dashed line) for the isotropic Bu\v{z}ek-Hillery copying machine versus the parameter $\alpha$.} \label{bobeveratios} \end{figure} \section{Quantum privacy amplification} \label{sec:QPA} We assume that Alice and Bob purify entanglement by means of the QPA protocol \cite{DEJMPS}. This is an iterative procedure, which we briefly review in what follows. At each iteration, the EPR pairs are combined in groups of two. 
The following steps are then taken for each group (see Fig.~\ref{figDEJMPS}): \begin{itemize} \item Alice applies to her qubits a $\frac{\pi}{2}$ rotation about the $x$-axis of the Bloch sphere, described by the unitary matrix \begin{equation} U= R_x\left(\frac{\pi}{2}\right)= \frac{1}{\sqrt{2}}\left[ \begin{array}{cc} 1 & -i \\ -i & 1 \end{array} \right]. \end{equation} \item Bob applies to his qubits the inverse operation \begin{equation} V= U^{-1}= R_x\left(-\frac{\pi}{2}\right)= \frac{1}{\sqrt{2}}\left[ \begin{array}{cc} 1 & i \\ i & 1 \end{array} \right]. \end{equation} \item Both Alice and Bob perform a CNOT gate (defined in the caption of Fig.~\ref{figBHcrypto}) using their members of the two EPR pairs. \item They measure the polarizations $\sigma_z$ of the two target qubits. \item Alice and Bob compare the measurement outcomes by means of a public classical communication channel. If the outcomes coincide, the control pair is kept for the next iteration and the target pair discarded. Otherwise, both pairs are discarded. \end{itemize} \begin{figure} \caption{Schematic drawing of the QPA entanglement purification scheme. Note that the density matrix $\rho_{AB}^\prime$ describes the two top qubits only when the detectors $D_0$ and $D_1$ give the same outcome.} \label{figDEJMPS} \end{figure} In order to illustrate the working of the QPA procedure, let us consider the special case in which the initial mixed pairs are described by the density matrix $\rho_{AB}$ obtained from the ideal EPR state $|\phi^+\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$ after application of the Bu\v{z}ek-Hillery copying machine with intrusion parameter $f_\alpha$. 
After application of the unitary transformation $W$ in Fig.~\ref{figBHcrypto} the overall state of the four-qubit system becomes \begin{equation} \begin{array}{l} \frac{1}{\sqrt{2}}(\alpha|0000\rangle+\beta|0101\rangle+\gamma|0110\rangle +\delta|0011\rangle \\ +\alpha|1111\rangle+\beta|1010\rangle+\gamma|1001\rangle +\delta|1100\rangle). \end{array} \end{equation} After tracing over Eve's two qubits, we obtain \begin{equation} \rho_{AB}=\frac{1}{2} \left[ \begin{array}{cccc} \alpha^2+\delta^2 & 0 & 0 & 2\alpha\delta \\ 0 & \beta^2+\gamma^2 & 2\beta\gamma & 0 \\ 0 & 2\beta\gamma & \beta^2+\gamma^2 & 0 \\ 2\alpha\delta & 0 & 0 & \alpha^2+\delta^2 \end{array} \right]. \end{equation} We note that this state is diagonal in the so-called Bell basis $\{|\phi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|00\rangle\pm |11\rangle), |\psi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|01\rangle\pm |10\rangle)\}$. Indeed, we have \begin{equation} \begin{array}{c} \rho_{AB}= A |\phi^+ \rangle\langle \phi^+ |+ B |\phi^- \rangle\langle \phi^- |\\\\ +C |\psi^+ \rangle\langle \psi^+ |+ D |\psi^- \rangle\langle \psi^- |, \end{array} \label{belldiagonal} \end{equation} where $A=\frac{1}{2}(\alpha+\delta)^2$, $B=\frac{1}{2}(\alpha-\delta)^2$, $C=\frac{1}{2}(\beta+\gamma)^2$ and $D=\frac{1}{2}(\beta-\gamma)^2$. The quantum circuit in Fig.~\ref{figDEJMPS} maps the state $\rho_{AB}$ of the control pair, in the case in which it is not discarded, onto another state $\rho_{AB}^\prime$ diagonal in the Bell basis. Namely, $\rho_{AB}^\prime$ can be expressed in the form (\ref{belldiagonal}), provided that new coefficients $(A^\prime,B^\prime,C^\prime,D^\prime)$ are used instead of $(A,B,C,D)$: \begin{equation} \begin{array}{c} A^\prime=\frac{A^2+D^2}{N},\quad B^\prime=\frac{2AD}{N},\\\\ C^\prime=\frac{B^2+C^2}{N},\quad D^\prime=\frac{2BC}{N}, \end{array} \label{DEJMPSmap} \end{equation} where $N=(A+D)^2+(B+C)^2$ is the probability that Alice and Bob obtain coinciding outcomes in the measurement of the target qubits. 
Note that map (\ref{DEJMPSmap}) is nonlinear as a consequence of the strong nonlinearity of the measurement process. The fidelity after the purification procedure is given by \begin{equation} F = \langle \phi^+ | \rho_{AB}^\prime | \phi^+ \rangle \label{fido} \end{equation} (note that $F=A^\prime$). This quantity measures the probability that the control qubits would pass a test for being in the state $|\phi^+\rangle$. Map (\ref{DEJMPSmap}) can be iterated, the goal being to drive the fidelity to one. It is possible to prove \cite{macchiavello98} that this map converges to the target point $A=1,B=C=D=0$ for all initial states (\ref{belldiagonal}) with $A>\frac{1}{2}$. This means that, when this condition is satisfied and a sufficiently large number of initial pairs is available, Alice and Bob can distill asymptotically pure EPR pairs. Note that the quantum privacy amplification procedure is rather wasteful, since at least half of the pairs (the target pairs) are lost at every iteration. This means that to extract one pair close to the ideal EPR state after $n$ steps we need at least $2^n$ mixed pairs at the beginning. However, the number of required initial pairs can be significantly larger, since pairs must be discarded when Alice and Bob obtain different measurement outcomes. We therefore compute the survival probability $P(n)$, measuring the probability that an $n$-step QPA protocol is successful. More precisely, if $p_i$ is the probability that Alice and Bob obtain coinciding outcomes at step $i$, we have \begin{equation} P(n)=\prod_{i=1}^n p_i. \label{survp} \end{equation} The efficiency $\xi(n)$ of the algorithm is given by the number of pure EPR pairs obtained divided by the number of initial impure EPR pairs. We have \begin{equation} \xi(n)=\frac{P(n)}{2^n}. \end{equation} Both the fidelity and the survival probability are shown in Fig.~\ref{DEJMPSpure}.
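For concreteness, map (\ref{DEJMPSmap}) is straightforward to iterate numerically. The following sketch (our illustrative code, not part of the original work; the helper names are ours) starts from the Bell-diagonal coefficients of Eq.~(\ref{belldiagonal}) for a given intrusion parameter $f_\alpha$ and tracks both the fidelity $F$ and the survival probability $P(n)$:

```python
import numpy as np

# Illustrative sketch (ours, not the authors' code): iterate the QPA map
# starting from the Bell-diagonal state produced by the isotropic
# Buzek-Hillery attack, tracking the fidelity F = A and the survival
# probability P(n).

def initial_coefficients(f_alpha):
    """Bell-diagonal coefficients (A, B, C, D) of rho_AB for a given
    intrusion parameter f_alpha (with gamma = 0)."""
    alpha = 1/np.sqrt(2) + f_alpha*(2/np.sqrt(6) - 1/np.sqrt(2))
    root = np.sqrt(0.5 - 0.75*alpha**2)
    beta, gamma, delta = alpha/2 - root, 0.0, alpha/2 + root
    return (0.5*(alpha + delta)**2, 0.5*(alpha - delta)**2,
            0.5*(beta + gamma)**2, 0.5*(beta - gamma)**2)

def qpa_step(A, B, C, D):
    """One iteration of the nonlinear QPA map; also returns the
    probability N of coinciding measurement outcomes."""
    N = (A + D)**2 + (B + C)**2
    return (A**2 + D**2)/N, 2*A*D/N, (B**2 + C**2)/N, 2*B*C/N, N

def run_qpa(f_alpha, n):
    """Fidelity F = A and survival probability P(n) after n steps."""
    A, B, C, D = initial_coefficients(f_alpha)
    P = 1.0
    for _ in range(n):
        A, B, C, D, p = qpa_step(A, B, C, D)
        P *= p
    return A, P

F, P = run_qpa(0.95, 6)
print(1 - F, P)  # fast convergence of F to 1; P saturates
```

For $f_\alpha=0.95$ this sketch gives an initial infidelity $1-F\approx 1.57\times 10^{-1}$ and a survival probability saturating near $0.60$, in line with the values quoted in the text and in Fig.~\ref{DEJMPSpure}.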
The different curves of this figure correspond to values of the intrusion parameter from $f_\alpha=0.05$ (weak intrusion) to $f_\alpha=0.95$ (strong intrusion). It can be seen that the convergence of the QPA protocol is fast: the fidelity deviates from the ideal case $F=1$ by less than $10^{-7}$ in no more than $n=6$ map iterations. Moreover, the survival probability is quite high: it saturates to $P_\infty\equiv \lim_{n\to\infty} P(n)= 0.60$ for $f_\alpha=0.95$, $P_\infty=0.94$ for $f_\alpha=0.5$ and $P_\infty=0.9995$ for $f_\alpha=0.05$. \begin{figure} \caption{Deviation $1-F$ of the fidelity $F$ from the ideal case $F=1$ (top) and survival probability $P$ (bottom) as a function of the number of iterations $n$ of map (\ref{DEJMPSmap}). The different curves correspond to the intrusion parameter $f_\alpha=0.95$ (dashed line), $0.5$ (dot-dashed line) and $0.05$ (solid line).} \label{DEJMPSpure} \end{figure} \section{Single qubit errors} \label{sec:Blocherrors} In any realistic implementation of the QPA protocol, errors acting on the purification operations are unavoidable. For the sake of simplicity, we limit ourselves to errors affecting a single qubit. Nevertheless, we would like to stress that a complete treatment of the effects of all possible single-qubit noise channels on the QPA algorithm is provided in this paper. We need 12 parameters to characterize a generic quantum noise operation acting on a single qubit \cite{chuang}. Each parameter describes a particular noise channel (such as bit flip, phase flip or amplitude damping) and can be most conveniently visualized as associated to rotations, deformations and displacements of the Bloch sphere. In the following we provide, for each noise channel, (i) the Kraus representation, (ii) the transformation of the Bloch sphere coordinates, and (iii) an equivalent quantum circuit leading to a unitary representation in an extended Hilbert space.
A great advantage of these equivalent quantum circuits is that the evolution of the reduced density matrix describing the single-qubit system is automatically guaranteed to be completely positive. \begin{itemize} \item {\it Rotations of the Bloch sphere} - Rotations through an angle $\theta$ about an arbitrary axis directed along the unit vector ${\bf n}$ are given by the operator \cite{qcbook} \begin{equation} R_n(\theta)=\left(\cos\frac{\theta}{2}\right)I- i\left(\sin\frac{\theta}{2}\right){\bf n}\cdot \mbox{\boldmath$\sigma$}, \end{equation} where $\mbox{\boldmath$\sigma$}=(\sigma_x,\sigma_y,\sigma_z)$, $\sigma_x,\,\sigma_y$ and $\sigma_z$ being the Pauli matrices. The quantum circuit representing rotations is shown in Fig.~\ref{rotation}. Any generic rotation can be obtained by composing rotations about the axes $x$, $y$ and $z$. Let us write as an example the transformation of the Bloch sphere coordinates associated to a rotation through an angle $\theta$ about the $z$-axis: \begin{equation} \left\{ \begin{array}{l} x^\prime=(\cos\theta)x - (\sin \theta)y,\\ y^\prime=(\sin\theta)x + (\cos \theta)y,\\ z^\prime=z \end{array} \right. \end{equation} \begin{figure} \caption{Quantum circuit representing a rotation through an angle $\theta$ about the ${\bf n}$-axis.} \label{rotation} \end{figure} \item {\it Deformations of the Bloch sphere} - The well known bit flip, phase flip and bit-phase flip channels correspond to deformations of the Bloch sphere into an ellipsoid. An equivalent quantum circuit implementing the bit-flip channel is shown in Fig.~\ref{bitflipcircuit}. Note that a single auxiliary qubit, initially prepared in the state $|\psi\rangle= \cos\frac{\theta}{2}|0\rangle+\sin\frac{\theta}{2}|1\rangle$ (with $0\le\theta\le \pi$) is sufficient to obtain a unitary representation of this noise channel. 
The corresponding Kraus representation is defined by the Kraus operators \begin{equation} F_0=\left(\cos\frac{\theta}{2}\right) I, \;\;\; F_1=\left(\sin\frac{\theta}{2}\right) \sigma_x. \end{equation} The quantum operation \begin{equation} \rho^\prime = \sum_k F_k \rho F_k^\dagger,\;\;\; (\sum_k F_k^\dagger F_k=I), \label{kraus} \end{equation} maps the Bloch sphere into an ellipsoid with $x$ as symmetry axis: \begin{equation} \left\{ \begin{array}{l} x^\prime=x,\\ y^\prime=(\cos \theta) y,\\ z^\prime=(\cos \theta) z. \end{array} \right. \end{equation} The phase flip and bit-phase flip channels are obtained from quantum circuits analogous to Fig.~\ref{bitflipcircuit}, after substitution of $\sigma_x$ with $\sigma_z$ and $\sigma_y$, respectively. In the phase flip channel the Bloch sphere is mapped into an ellipsoid with $z$ as symmetry axis, while in the bit-phase flip channel the symmetry axis is $y$. \begin{figure} \caption{Quantum circuit implementing the bit flip channel.} \label{bitflipcircuit} \end{figure} \item {\it Displacements of the Bloch sphere} - A displacement of the center of the Bloch sphere must be accompanied by a deformation of the sphere. This is necessary if $\rho^\prime$ is to remain a valid density matrix: the Bloch vector ${\bf r}$ associated to any density matrix must have length $r$ such that $0\le r\le 1$. This condition can be fulfilled as follows. Let us consider, for instance, a displacement of the center of the Bloch sphere along the $+z$-direction, so that the new center is $(0,0,1-b)$, with $0 < b < 1$. We also assume that the Bloch sphere is deformed into an ellipsoid with $z$ as symmetry axis: \begin{equation} \frac{x^2+y^2}{a^2} + \frac{[z-(1-b)]^2}{b^2}=1. \label{ellipse} \end{equation} Imposing a higher-order tangency of this ellipsoid to the Bloch sphere $x^2+y^2+z^2=1$, we obtain $b=a^2$.
If we define $a=\cos\theta$ ($0<\theta<\pi/2$), then Eq.~(\ref{ellipse}) becomes \begin{equation} \frac{x^2+y^2}{\cos^2\theta} + \frac{(z-\sin^2\theta)^2} {\cos^4\theta}=1. \label{ellipse2} \end{equation} Note that this equation corresponds to the minimum deformation required to the Bloch sphere in order to displace its center along the $z$-axis by $1-b=\sin^2\theta$. The graphic visualization of the mapping of the Bloch sphere onto an ellipsoid with displaced center is shown in Fig.~\ref{fig:ellipse}. \begin{figure} \caption{Visualization of the minimum deformation required to displace the center of the Bloch sphere along the $z$-axis. The horizontal axis can be any axis in the $(x,y)$ plane.} \label{fig:ellipse} \end{figure} The mapping of the Bloch sphere onto the ellipsoid (\ref{ellipse2}) can be obtained by means of the simple equivalent circuit drawn in Fig.~\ref{fig:ampdamp}. This circuit leads to a single-qubit quantum operation known as the amplitude damping channel. It is described by the Kraus operators \begin{equation} F_0= \left[ \begin{array}{cc} 1 & 0\\ 0 & \cos\theta \end{array} \right], \quad F_1= \left[ \begin{array}{cc} 0 & \sin\theta \\ 0 & 0 \end{array} \right]. \end{equation} The corresponding transformation of the Bloch sphere coordinates is \begin{equation} \left\{ \begin{array}{l} x^\prime=(\cos \theta) x,\\ y^\prime=(\cos \theta) y,\\ z^\prime=\sin^2 \theta + (\cos^2 \theta) z. \end{array} \right. \end{equation} \begin{figure} \caption{Quantum circuit implementing a displacement of the Bloch sphere along the $+z$ direction. Note that $\theta^\prime\equiv \pi/2-\theta$.} \label{fig:ampdamp} \end{figure} While displacements of the center of the Bloch sphere along the positive direction of the $z$-axis can be seen as representative of zero temperature dissipation, thermal excitations are instead associated to displacements along the $-z$-direction. 
The equivalent quantum circuit describing thermal excitations is shown in Fig.~\ref{fig:thermal}. It leads to the Kraus operators \begin{equation} F_0= \left[ \begin{array}{cc} \cos\theta & 0\\ 0 & 1 \end{array} \right], \quad F_1= \left[ \begin{array}{cc} 0 & 0 \\ \sin\theta & 0 \end{array} \right] \end{equation} and to the Bloch sphere coordinate transformation \begin{equation} \left\{ \begin{array}{l} x^\prime=(\cos \theta) x,\\ y^\prime=(\cos \theta) y,\\ z^\prime=-\sin^2 \theta + (\cos^2 \theta) z. \end{array} \right. \end{equation} \begin{figure} \caption{Quantum circuit implementing a displacement of the Bloch sphere along the $-z$ direction. The unitary transformation $D$ corresponds to the boxed part of the circuit in Fig.~\ref{fig:ampdamp}. The $\oplus$ symbol stands for the NOT gate ($|0\rangle\to |1\rangle,\; |1\rangle\to|0\rangle$).} \label{fig:thermal} \end{figure} We also consider displacements of the Bloch sphere along the directions $\pm x$ and $\pm y$. The equivalent quantum circuit is drawn in Fig.~\ref{fig:dispxy}. A displacement along $\pm x$ takes place when the unitary transformation $U$ in Fig.~\ref{fig:dispxy} is described by the matrix \begin{equation} U=\frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 1 & \pm 1\\ \mp 1 & 1 \end{array} \right]. \end{equation} The corresponding Kraus operators and the transformation of the Bloch sphere coordinates are \begin{equation} \begin{array}{c} F_0= \frac{1}{2}\left[ \begin{array}{cc} 1+\cos\theta & \pm (1-\cos\theta)\\ \pm (1-\cos\theta) & 1+\cos\theta \end{array} \right], \\\\ F_1= \frac{1}{2}\left[ \begin{array}{cc} \mp \sin\theta & \sin\theta \\ -\sin\theta & \pm \sin\theta \end{array} \right], \end{array} \end{equation} \begin{equation} \left\{ \begin{array}{l} x^\prime=\pm \sin^2 \theta + (\cos^2 \theta) x,\\ y^\prime=(\cos \theta) y,\\ z^\prime=(\cos \theta) z. \end{array} \right. 
\end{equation} For a displacement along $\pm y$ we have \begin{equation} U=\frac{1}{\sqrt{2}} \left[ \begin{array}{cc} 1 & \pm i\\ \pm i & 1 \end{array} \right], \end{equation} \begin{equation} \begin{array}{c} F_0= \frac{1}{2}\left[ \begin{array}{cc} 1+\cos\theta & \pm i (1-\cos\theta)\\ \mp i (1-\cos\theta) & 1+\cos\theta \end{array} \right], \\\\ F_1= \frac{1}{2}\left[ \begin{array}{cc} \pm i \sin\theta & \sin\theta \\ \sin\theta & \mp i \sin\theta \end{array} \right], \end{array} \end{equation} \begin{equation} \left\{ \begin{array}{l} x^\prime=(\cos \theta) x,\\ y^\prime=\pm \sin^2 \theta + (\cos^2 \theta) y,\\ z^\prime=(\cos \theta) z. \end{array} \right. \end{equation} \begin{figure} \caption{Quantum circuit implementing a displacement of the Bloch sphere along the $\pm x$ or $\pm y$ directions. The unitary transformation $D$ corresponds to the boxed part of the circuit in Fig.~\ref{fig:ampdamp}.} \label{fig:dispxy} \end{figure} \end{itemize} We have given a geometric interpretation of 9 out of the 12 parameters describing a generic single-qubit quantum operation (3 are associated to rotations about the axes $x$, $y$ or $z$, 3 to displacements along the same axes, and 3 to deformations of the Bloch sphere into an ellipsoid, with $x$, $y$ or $z$ as symmetry axes). The remaining 3 parameters correspond to deformations of the Bloch sphere into an ellipsoid with symmetry axis along an arbitrary direction. Since these deformations can be obtained by combining the 9 quantum operations studied above, for small errors it is sufficient to consider only 9 parameters. \section{Impact of noise on entanglement purification} \label{sec:noisyQPA} We discuss the impact of the 9 noise channels described in the previous section on the QPA algorithm. We present numerical data for the case in which quantum noise acts on the top qubit in Fig.~\ref{figDEJMPS} after the $U$-rotation.
However, we point out that very similar results are obtained when noise acts on one of the other three qubits in the same figure. Data are obtained by iteration of a four-qubit noisy quantum map, with input state $\rho_{AB}\otimes \rho_{AB}$ and output state (for the first two qubits) $\rho_{AB}^\prime$ \cite{footnote}. We measure the quality of the purified EPR pair by the fidelity $F$, defined in Eq.~(\ref{fido}). Moreover, we compute the survival probability $P(n)$, defined in Eq.~(\ref{survp}), measuring the probability that an $n$-step QPA protocol is successful. We note that the following symmetries in the effect of errors are observed for the QPA algorithm: (i) rotations through an angle $+\theta$ or $-\theta$ have the same impact; (ii) displacements along the positive or the negative direction of a given axis have the same effect; (iii) rotations about the $x$ axis and deformations with $x$ as symmetry axis (bit flip channel) have the same effect; the same observation applies to the axes $y$ and $z$ as well. The main result of our studies is the demonstration that the sensitivity of the quantum privacy amplification protocol to errors strongly depends on the kind of noise. Two main distinct behaviors are observed: (i) the fidelity is continuously improved by increasing the number of purification steps; (ii) the fidelity saturates to a value $F<1$ after a finite number of steps, so that any further iteration is useless \cite{largeepsilon}. As examples of behaviors of kinds (i) and (ii) we show the bit-flip channel in Fig.~\ref{fidobitflip} (for error strength $\theta=10^{-1}$) and the displacement along $x$ in Fig.~\ref{fidodisplacementx} ($\theta=10^{-3}$). In both figures, the survival probability $P(n)$ can also be seen. Note that, for these sufficiently small error strengths, the values of $P(n)$ shown in Fig.~\ref{fidobitflip} and Fig.~\ref{fidodisplacementx} are not very far from those of the ideal protocol (see Fig.~\ref{DEJMPSpure}).
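As a cross-check of the channel definitions entering such simulations, the Kraus representations of Sec.~\ref{sec:Blocherrors} can be verified numerically to satisfy the completeness relation of Eq.~(\ref{kraus}) and to reproduce the quoted Bloch-coordinate transformations. A minimal sketch (our illustrative code; helper names are ours), here for four representative channels:

```python
import numpy as np

# Cross-check (illustrative code, ours): verify that the Kraus operators of
# four of the channels of Sec. 4 are trace preserving (sum_k F_k^dag F_k = I)
# and reproduce the stated transformations of the Bloch coordinates.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def bloch_to_rho(x, y, z):
    # rho = (I + x*sx + y*sy + z*sz)/2
    return 0.5*(I2 + x*sx + y*sy + z*sz)

def rho_to_bloch(rho):
    # inverse map: x = 2 Re rho_10, y = 2 Im rho_10, z = rho_00 - rho_11
    return (2*rho[1, 0].real, 2*rho[1, 0].imag, (rho[0, 0] - rho[1, 1]).real)

def apply_channel(kraus, rho):
    return sum(F @ rho @ F.conj().T for F in kraus)

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
ch, sh = np.cos(theta/2), np.sin(theta/2)

channels = {
    # name: (Kraus operators, expected map on (x, y, z))
    "bit flip": ([ch*I2, sh*sx],
                 lambda x, y, z: (x, c*y, c*z)),
    "amplitude damping": ([np.array([[1, 0], [0, c]], dtype=complex),
                           np.array([[0, s], [0, 0]], dtype=complex)],
                          lambda x, y, z: (c*x, c*y, s**2 + c**2*z)),
    "thermal excitation": ([np.array([[c, 0], [0, 1]], dtype=complex),
                            np.array([[0, 0], [s, 0]], dtype=complex)],
                           lambda x, y, z: (c*x, c*y, -s**2 + c**2*z)),
    "displacement +x": ([0.5*np.array([[1 + c, 1 - c], [1 - c, 1 + c]], dtype=complex),
                         0.5*np.array([[-s, s], [-s, s]], dtype=complex)],
                        lambda x, y, z: (s**2 + c**2*x, c*y, c*z)),
}

r = (0.3, -0.4, 0.5)  # an arbitrary Bloch vector inside the sphere
for name, (kraus, expected) in channels.items():
    assert np.allclose(sum(F.conj().T @ F for F in kraus), I2), name
    out = rho_to_bloch(apply_channel(kraus, bloch_to_rho(*r)))
    assert np.allclose(out, expected(*r)), name
print("all four channel maps verified")
```

The same check applies, mutatis mutandis, to the remaining channels (phase flip, bit-phase flip, rotations, and displacements along $-x$ and $\pm y$).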
\begin{figure} \caption{Same as in Fig.~\ref{DEJMPSpure} but for the bit flip channel at $\theta=10^{-1}$.} \label{fidobitflip} \end{figure} \begin{figure} \caption{Same as in Fig.~\ref{DEJMPSpure} but for the noise channel corresponding to a displacement along the $x$-axis of the Bloch sphere, at $\theta=10^{-3}$.} \label{fidodisplacementx} \end{figure} It is important to point out that not only is the behavior of $F(n)$ qualitatively different depending on the noise channel, but the tolerable noise strength is also channel-dependent. To give a concrete example, we show in Fig.~\ref{DEJMPSnoisy} the deviation $1-F$ of the fidelity from the ideal value $F=1$ as a function of the noise strength $\theta$. Data are obtained after $n=5$ iterations of the QPA protocol, in the case of strong intrusion by Eve ($f_\alpha=0.95$), for the bit flip, phase flip and amplitude damping (displacement along $z$) channels. In the noiseless case we start from $1-F=1.57\times 10^{-1}$ and improve the fidelity to $1-F=8.20\times 10^{-6}$ after $n=5$ iterations of the quantum privacy amplification protocol. Even though all noise channels degrade the performance of the protocol, the level of noise that can be safely tolerated strongly depends on the specific channel. For instance, it is clear from Fig.~\ref{DEJMPSnoisy} that the QPA protocol is much more resilient to bit flip and amplitude damping errors than to phase flip errors.
\begin{figure} \caption{Deviation $1-F$ of the fidelity $F$ from the ideal case $F=1$ as a function of the noise strength $\theta$, after $n=5$ steps of the quantum privacy amplification protocol, for $f_\alpha=0.95$, bit flip (circles), phase flip (squares) and amplitude damping (triangles) channels.} \label{DEJMPSnoisy} \end{figure} A further confirmation of the very different impact of the various noise channels is provided by Table~\ref{tablemaxerr}, which gives, at $f_\alpha=0.95$, the value of $\theta$ such that $1-F=10^{-4}$ after $n=5$ map iterations. This gives an estimate of the maximum level of error tolerable for each noise channel. It is interesting to remark that displacements of the Bloch sphere along $x$ and $y$ are much more dangerous than displacements along $z$. We note that the value $1-F=10^{-4}$ has been chosen just for convenience; the same conclusions are obtained for other values of $1-F$. We also point out that, as shown in Table~\ref{tablemaxerr}, it is possible to achieve very good fidelities in a small number of purification steps even for quite large errors $\theta \sim 10^{-1}\gg 1-F$ affecting the QPA protocol.
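Thresholds of the kind collected in Table~\ref{tablemaxerr} can be located by bisection on the noise strength. The sketch below (ours) illustrates only the procedure: for simplicity it replaces the circuit-level channels by a crude depolarizing admixture applied to the Bell-diagonal coefficients once per step, so the number it returns is not one of the table entries.

```python
import numpy as np

# Toy illustration (ours) of threshold extraction by bisection. CAVEAT:
# instead of the circuit-level channels of the paper, a depolarizing
# admixture of weight p = sin^2(theta/2) is applied to the Bell-diagonal
# coefficients once per purification step; only the search procedure is
# illustrated, not the table's values.

def initial_coeffs(f_alpha=0.95):
    alpha = 1/np.sqrt(2) + f_alpha*(2/np.sqrt(6) - 1/np.sqrt(2))
    root = np.sqrt(0.5 - 0.75*alpha**2)
    beta, delta = alpha/2 - root, alpha/2 + root
    # A, B, C, D of the Bell-diagonal initial state (gamma = 0)
    return (0.5*(alpha + delta)**2, 0.5*(alpha - delta)**2,
            0.5*beta**2, 0.5*beta**2)

def qpa_step(A, B, C, D):
    N = (A + D)**2 + (B + C)**2
    return (A**2 + D**2)/N, 2*A*D/N, (B**2 + C**2)/N, 2*B*C/N

def infidelity(theta, n=5, f_alpha=0.95):
    """1 - F after n purification steps with toy noise of strength theta."""
    p = np.sin(theta/2)**2
    coeffs = initial_coeffs(f_alpha)
    for _ in range(n):
        coeffs = qpa_step(*coeffs)
        coeffs = tuple((1 - p)*v + p/4 for v in coeffs)  # toy noise
    return 1 - coeffs[0]

def threshold(target=1e-4, lo=0.0, hi=0.5, iters=60):
    """Bisect for the noise strength at which 1 - F crosses `target`."""
    assert infidelity(lo) < target < infidelity(hi)
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if infidelity(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

print(threshold())  # toy threshold; not comparable to Table 1
```

With the full four-qubit noisy map in place of the toy model, the same kind of search yields channel-dependent thresholds such as those listed in Table~\ref{tablemaxerr}.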
\begin{table} \begin{center} \begin{tabular}{|l|c|} \hline Noise channel & $\theta$ \\ \hline Rotation about $x$ & $1.55\times 10^{-1}$\\ Rotation about $y$ & $2.69\times 10^{-1}$\\ Rotation about $z$ & $1.92\times 10^{-2}$\\ Bit flip & $1.55\times 10^{-1}$\\ Bit-phase flip & $2.69\times 10^{-1}$\\ Phase flip & $1.92\times 10^{-2}$\\ Displacement along $x$ & $1.91\times 10^{-2}$\\ Displacement along $y$ & $1.91\times 10^{-2}$\\ Displacement along $z$ & $1.27\times 10^{-1}$\\ \hline \end{tabular} \end{center} \caption{Value of the noise strength $\theta$ such that $1-F=10^{-4}$ after $n=5$ iterations of the purification protocol, at $f_\alpha=0.95$.} \label{tablemaxerr} \end{table} \section{Conclusions} \label{sec:conc} We have performed a systematic study of the effects of the different single-qubit noise channels on the quantum privacy amplification protocol. Our results show the very different impact of the various noise channels on the QPA algorithm. In particular, we have distinguished between cases where it is possible to drive the fidelity arbitrarily close to one and others in which the fidelity saturates to a value different from one. Another important feature that emerges from our investigations is the strong dependence of the maximum noise strength tolerable by the QPA protocol on the noise channel. This is a valuable piece of information for experimental implementations. For instance, the fact that the QPA protocol is much less sensitive to displacements along $z$ than along $x$ or $y$ suggests choosing the $z$-axis along ``the direction of noise''. We can then choose the axes $x$ and $y$ to minimize other noise effects. Finally, we remark that studies like the present one, taking into account all possible single-qubit quantum noise channels, promise to give useful insights in the field of quantum computation as well. One of us (G.B.) acknowledges support by EU (IST-FET EDIQIP contract) and NSA-ARDA (ARO contract No. DAAD19-02-1-0086).
\appendix \section{Isotropic cloning} \label{app:isotropiccloning} Let us first consider the case in which the initial state of Bob's qubit is pure, $|\psi\rangle=\mu|0\rangle+\nu|1\rangle$, where $\mu,\nu$ are complex numbers, with $|\mu|^2+|\nu|^2=1$. The unitary transformation $W$ in Fig.~\ref{figBHcrypto} maps the state $|\psi\rangle|\Phi\rangle$ (where $|\Phi\rangle$ is given by Eq.~(\ref{betgamdel})) onto the state \begin{equation} \begin{array}{c} |\Psi\rangle= \mu(\alpha|000\rangle+\beta|101\rangle+\gamma|110\rangle +\delta|011\rangle) \\ +\nu(\alpha|111\rangle+\beta|010\rangle+\gamma|001\rangle +\delta|100\rangle). \end{array} \end{equation} We then obtain the density matrix $\rho_B$ after tracing the density matrix $|\Psi\rangle\langle\Psi|$ over Eve's qubit and the ancillary qubit. We have \begin{equation} \rho_B=\left[ \begin{array}{cc} |\mu|^2(\alpha^2+\delta^2) & 2\mu\nu^\star\alpha\delta\\ +|\nu|^2(\beta^2+\gamma^2) & +2\mu^\star\nu\beta\gamma\\ &\\ 2\mu^\star\nu\alpha\delta & |\mu|^2(\beta^2+\gamma^2)\\ +2\mu\nu^\star\beta\gamma & +|\nu|^2(\alpha^2+\delta^2) \end{array} \right]. \end{equation} In the same way we obtain the density matrix $\rho_E$ after tracing over Bob's qubit and the ancillary qubit: \begin{equation} \rho_E=\left[ \begin{array}{cc} |\mu|^2(\alpha^2+\beta^2) & 2\mu\nu^\star\alpha\beta\\ +|\nu|^2(\gamma^2+\delta^2) & +2\mu^\star\nu\gamma\delta\\ &\\ 2\mu^\star\nu\alpha\beta & |\mu|^2(\gamma^2+\delta^2)\\ +2\mu\nu^\star\gamma\delta & +|\nu|^2(\alpha^2+\beta^2) \end{array} \right]. \end{equation} Let us call $(x,y,z)$, $(x_B,y_B,z_B)$ and $(x_E,y_E,z_E)$ the Bloch sphere coordinates corresponding to $|\psi\rangle\langle\psi|$, $\rho_B$ and $\rho_E$. We have \begin{equation} \mu\nu^\star=\frac{1}{2}(x-iy),\quad |\mu|^2=\frac{1}{2}(1+z),\quad |\nu|^2=\frac{1}{2}(1-z). 
\end{equation} After setting $\gamma=0$, we obtain \begin{equation} \left\{ \begin{array}{l} \frac{1}{2}(x_B-iy_B)=(\rho_B)_{01}=(x-iy)\alpha\delta,\\\\ \frac{1}{2}(1+z_B)=(\rho_B)_{00}= \frac{1}{2}(1+z)(\alpha^2+\delta^2)+\frac{1}{2}(1-z)\beta^2, \end{array} \right. \end{equation} which imply \begin{equation} \left\{ \begin{array}{l} x_B=2\alpha\delta x,\\ y_B=2\alpha\delta y,\\ z_B=(\alpha^2+\delta^2-\beta^2)z. \end{array} \right. \end{equation} The state $\rho_B$ is an isotropic cloning of $|\psi\rangle\langle\psi|$ when $R_B=x_B/x=y_B/y=z_B/z$. Therefore we obtain \begin{equation} \left\{ \begin{array}{l} 2\alpha\delta=\alpha^2+\delta^2-\beta^2,\\ \alpha^2+\beta^2+\delta^2=1, \end{array} \right. \end{equation} so that \begin{equation} \delta=\frac{\alpha}{2}\pm\sqrt{\frac{1}{2}-\frac{3}{4}\alpha^2}. \label{deltabh} \end{equation} In the same way we obtain \begin{equation} \left\{ \begin{array}{l} x_E=2\alpha\beta x,\\ y_E=2\alpha\beta y,\\ z_E=(\alpha^2+\beta^2-\delta^2)z. \end{array} \right. \end{equation} Isotropic cloning ($R_E=x_E/x=y_E/y=z_E/z$) is obtained when \begin{equation} \left\{ \begin{array}{l} 2\alpha\beta=\alpha^2+\beta^2-\delta^2,\\ \alpha^2+\beta^2+\delta^2=1, \end{array} \right. \end{equation} so that \begin{equation} \beta=\frac{\alpha}{2}\pm\sqrt{\frac{1}{2}-\frac{3}{4}\alpha^2}. \label{betabh} \end{equation} Note that, if we choose the plus sign in (\ref{deltabh}), then the minus sign has to be taken in (\ref{betabh}) in order that the normalization condition $\alpha^2+\beta^2+\delta^2=1$ is satisfied. This choice corresponds to Eq.~(\ref{betadeltabh}). Note that the cloning is also isotropic in the case in which the initial state $\rho$ of Bob's qubit is mixed. In this case we can write $\rho=\sum_i p_i \rho_i$, where the $\rho_i=|\psi_i\rangle\langle\psi_i|$ are pure states. 
The Bloch vector ${\bf r}$ associated to $\rho$ is the weighted sum of the Bloch vectors ${\bf r}_i$ associated to the density matrices $\rho_i$: ${\bf r}=\sum_i p_i {\bf r}_i$. Since we have seen that for pure initial states $({\bf r}_i)_B= R_B {\bf r}_i$ and $({\bf r}_i)_E= R_E {\bf r}_i$, then ${\bf r}_B=\sum_i p_i ({\bf r}_i)_B= R_B {\bf r}$ and ${\bf r}_E=\sum_i p_i ({\bf r}_i)_E= R_E {\bf r}$. \end{document}
\begin{document} \begin{title} {Finite type annular ends for harmonic functions} \end{title} \vskip .2in \begin{author} {William H. Meeks III\thanks{This material is based upon work supported by the NSF under Award No. DMS-1309236. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF.} \and Joaqu\'\i n P\' erez\thanks{Research supported in part by the MINECO/FEDER grant no. MTM2014-52368-P. }} \end{author} \maketitle \begin{abstract} In this paper we describe the notion of an annular end of a Riemann surface being of finite type with respect to some harmonic function and prove some theoretical results relating the conformal structure of such an annular end to the level sets of the harmonic function. As an application of these results, we obtain important information on the conformal type of any properly immersed minimal surface $M$ in $\mathbb{R}^3$ with compact boundary and which intersects some plane transversely in a finite number of arcs; in particular, such an $M$ is a parabolic Riemann surface. This information is applied by the authors in~\cite{mpe3} to classify the asymptotic behavior of annular ends of a complete embedded minimal annulus with compact boundary in terms of the flux vector along its boundary. In the present paper, we apply this information to understand and characterize properly immersed minimal surfaces in $\mathbb{R}^3$ of finite total curvature, in terms of their intersections with two nonparallel planes. \noindent{\it Mathematics Subject Classification:} Primary 53A10, Secondary 49Q05, 53C42 \noindent{\it Key words and phrases:} Minimal surface, finite type, harmonic function, parabolic Riemann surface, hyperbolic Riemann surface, angular limits. 
\end{abstract} \section{Introduction.} Given a nonconstant harmonic function $f\colon M\to \mathbb{R}$ on a noncompact Riemann surface with compact boundary, we say that an annular end\footnote{A proper subdomain $E\subset M$ is an {\it annular end} if it is homeomorphic to $\mathbb{S}^1\times [0,1)$.} $E$ of $M$ has {\it finite type for} $f$ if for some $t_0\in \mathbb{R}$, the one-complex $f^{-1}(t_0)\cap E$ has a finite number of ends. Observe that if $E'$ is an annular subend of $E$, then $E'$ has finite type for $f$ if and only if $E$ has finite type for $f$. Thus, in the sequel we will assume that $f$ has no critical points in the boundary of $E$. Note that $f^{-1}(t_0)$ might fail to be smooth: at each critical point $p$ of $f$ lying in $f^{-1}(t_0)$, this level set consists locally of an equiangular system of curves crossing at $p$; also note that $f^{-1}(t_0)$ cannot bound a compact subdomain in $E-\partial E$ by the maximum principle. These observations imply that $f^{-1}(t_0)\cap E$ has a finite number of ends if and only if $f^{-1}(t_0)\cap E$ has a finite number of components and a finite number of crossing points. When $E$ is an annular end of finite type for $f$, we will prove several results on the level sets of $f|_E$ depending on the conformal type of $E$. In order to describe these results, we first fix some notation. For $R\in [0,1)$, let $A(R,1)=\{z\in \mathbb{C} \mid R< |z|\leq 1\}$, $\overline{A}(R,1) =\{z\in \mathbb{C} \mid R\leq |z|\leq 1\}$, $\partial_R=\{z\in\mathbb{C} \mid |z|=R\}$ and $\partial_1=\{ z\in\mathbb{C} \mid |z|=1\}$. Thus, the closure $\overline{A}(0,1)$ in $\mathbb{C}$ is the closed unit disk $\overline{\mathbb{D} }$ and $A(0,1)=\overline{\mathbb{D}}-\{ 0\} $. 
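For instance, a model example of finite type (included here only to illustrate the definitions) is obtained by fixing an integer $k\geq 1$ and taking the harmonic function $f(z)=\mbox{Re}(z^{-k})$ on $A(0,1)$. Writing $z=re^{i\theta }$, we have $f(z)=r^{-k}\cos (k\theta )$, so the level set $f^{-1}(0)$ consists of the $2k$ radial arcs $\{ \theta =\frac{\pi }{2k}+\frac{m\pi }{k}\} $, $m=0,1,\ldots ,2k-1$, each limiting to $z=0$; hence $f^{-1}(0)$ has exactly $2k$ ends, and the sign of $f$ alternates in the $2k$ sectors determined by consecutive arcs. Moreover, $f=\mbox{Re}(F)$ with $F(z)=z^{-k}$, and so the one-form
\[
\omega =df+i\, df^*=dF=-k\, z^{-k-1}\, dz
\]
extends to a meromorphic one-form on $\overline{\mathbb{D} }$ with a pole of order $k+1$ at $z=0$.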
If $0<R<1$ and $F\colon A(R,1)\to \mathbb{C} $ is a holomorphic function, then $F$ is said to have {\it angular limit $F(\xi )\in \mathbb{C} \cup \{ \infty \} $ at $\xi \in \partial _R$} if $\lim _{z\to \xi ,z\in S}F(z)$ exists and equals $F(\xi )$ for every angular sector $S\subset A(R,1)$ centered at $\xi $, whose median line is the radial arc $[1,2] \xi$, with small radius $t\in (0,1-R)$ and total angle $2{\alpha} $, $0<{\alpha} <\frac{\pi }{2}$, see Figure~\ref{fig1}. This definition of having angular limit can be directly extended to (real valued) harmonic functions. \begin{figure} \caption{Angular sector in $A(R,1)$, centered at $\xi \in \partial _R$.} \label{fig1} \end{figure} We can now state our main result. \begin{theorem} \label{th1.1} Suppose $f\colon M\to \mathbb{R}$ is a nonconstant harmonic function and $E$ is an annular end of $M$ of finite type for $f$. Then: \begin{enumerate} \item If $E$ is conformally $A(0,1)$, then the holomorphic one-form $\omega=df+idf^*$ on $A(0,1)$ extends to a meromorphic one-form $\widetilde{\omega }$ on the closed unit disk $\overline{\mathbb{D}}$ (here $f^*$ denotes the locally defined conjugate harmonic function of $f$). Furthermore: \begin{enumerate} \item If for some $t_0\in \mathbb{R} $ the one-complex $f^{-1}(t_0)\cap E$ has $2k$ ends (note that the number of ends is always even since $f$ alternates values greater and smaller than $t_0$ in consecutive components of the complement of $f^{-1}(t_0)\cap E$ around $z=0$) and $\widetilde{\omega }$ has a pole at $z=0$, then this pole is of order $k+1$. In particular, for every $t\in \mathbb{R}$, the level set $f^{-1}(t)$ has exactly $2k$ ends. \item $\widetilde{\omega }$ is holomorphic if and only if $f$ is bounded on $E$ (in which case $f$ admits a well-defined harmonic conjugate function on $\overline{\mathbb{D} }$). \item If $\int _{{\alpha} }|df|<\infty $ for some end representative ${\alpha} $ of $f^{-1}(t_0)\cap E$, then $f$ is bounded on $E$. 
\end{enumerate} \item If $E$ is conformally $A(R,1)$ for some $R\in (0,1)$, then: \begin{enumerate} \item $f$ has angular limits almost everywhere on $\partial_R$. \item Given $t_0 \in \mathbb{R}$ such that $f^{-1}(t_0)\cap E$ has a finite number of ends, then the limit set of each end of $f^{-1}(t_0)\cap E$ consists of a single point in $\partial_R$. In particular, the closure in $A(R,1)$ of at least one component of $\{z\in A(R,1) \mid f(z)\neq t_0\}$ is hyperbolic\footnote{A noncompact Riemann surface $\Sigma$ with boundary is {\it hyperbolic } if its boundary fails to have full harmonic measure (equivalently, bounded harmonic functions on $\Sigma $ are not determined by their boundary values); otherwise, $\Sigma$ is called {\it parabolic}.}. \end{enumerate} \end{enumerate} \end{theorem} In the special case that the flux $\int_{\partial _1}\frac{\partial f}{\partial r}\, ds$ vanishes, then item~2 of Theorem~\ref{th1.1} follows from the proof of Theorem 7.1 in~\cite{mr7}. Theorem~\ref{th1.1} is motivated by applications of it to minimal surface theory. For specific applications in~\cite{mpe3}, we will need the following corollary to Theorem~\ref{th1.1}. We remark that L\'opez~\cite{lop4} obtained related results of Picard type for minimal surfaces under the assumption that the second sentence of Corollary~\ref{c1.2} holds, and that Alarc\'on and L\'opez~\cite{allo1} also applied some of the main results in this paper. \begin{corollary} \label{c1.2} Suppose $X=(x_1,x_2,x_3)\colon A(R,1)\to \mathbb{R}^3$ is a proper, conformal minimal immersion and that for some horizontal plane $P\subset \mathbb{R}^3$, the one-complex $X^{-1}(P)$ has a finite number $2k$ of ends. Then $R=0$ and the holomorphic height differential $dx_3 + idx_3^*$ extends meromorphically to $\overline{\mathbb{D}}$ with a pole of order $k+1$. In particular, for every horizontal plane $P'\subset \mathbb{R}^3$, the level set $X^{-1}(P')$ has exactly $2k$ ends. 
Furthermore, after replacing $A(0,1)$ by $A(0,R')=\{z\in \mathbb{C} \mid 0<|z|\leq R'\}$ for some $R' \in (0,1)$, the Gauss map of the induced minimal immersion $X|_{A(0,R')}$ is never vertical. Hence, on $A(0,R')$, the meromorphic Gauss map of $X$ can be expressed as $g(z)=z^ne^{H(z)}$ for some $n\in \mathbb{Z}$ and for some holomorphic function $H\colon A(0, R')\to \mathbb{C}$. Furthermore, the following three statements are equivalent: \begin{enumerate} \item $X$ has finite total curvature. \item $H$ is bounded on $A(0,R')$. \item There are two nonparallel planes $P_1, P_2$ such that each of these planes individually intersects the surface transversely in a finite number of immersed curves (if $X$ is not flat, then we can replace ``there are two'' by ``for all'' in this statement). \end{enumerate} Additionally, if the flux $\int _{{\alpha} }\frac{\partial x_3}{\partial \eta }ds$ is finite for some end representative ${\alpha} $ of $X^{-1}(P)$, then $X$ has finite total curvature and $X(A(R,1))$ is asymptotic to $P$ with finite multiplicity. \end{corollary} \section{Preliminaries on complex analysis.} In this section we recall the statements of two classical theorems in the theory of functions of one complex variable that we will apply to prove Theorem~\ref{th1.1}. \begin{theorem}[Plessner~\cite{ple1}] \label{thmplessner} If $F$ is a meromorphic function in the open unit disk $\mathbb{D} $, then, for almost all $\xi \in \partial \mathbb{D}$, either $F$ has a finite angular limit at $\xi$ or $F(S)$ is dense in $\mathbb{C} \cup \{ \infty \} $, for every angular sector $S=\{ z\in \mathbb{D} \mid |\arg (1-\overline{z}\xi )|<{\alpha} , |z-\xi |<t\} $ centered at $\xi $ of radius $t$, aperture angle $2{\alpha} $ (here $0<{\alpha} <\frac{\pi}{2}$) and median line $[0,\xi ]$. \end{theorem} \begin{theorem}[Privalov~\cite{priv1}] \label{thmPrivalov} Let $F$ be a meromorphic function in $\mathbb{D} $. 
If $F$ has angular limit~$0$ in a subset of positive measure in $\partial \mathbb{D} $, then $F$ vanishes identically. \end{theorem} \section{The proof of Theorem~\ref{th1.1}.} Suppose $t_0\in \mathbb{R}$ and $f\colon A(R,1)\to \mathbb{R}$ is a nonconstant harmonic function with $f^{-1}(t_0)$ having $n+1$ ends. The ends of $f^{-1}(t_0)$ can be represented by a family $\{{\alpha}_0,{\alpha}_1,\ldots, {\alpha}_n\}$ of pairwise disjoint proper arcs in $A(R,1)$. Our first observation in proving Theorem~\ref{th1.1} is given in the following assertion. \begin{assertion} \label{a2.1} There exists a simple closed analytic curve $\beta\colon \mathbb{S}^1\to A(R,1)$ which is topologically parallel to $\partial_1$ and which intersects $f^{-1}(t_0)$ transversely in $n+1$ points $p_0={\alpha}_0\cap \beta, p_1={\alpha}_1\cap \beta, \ldots, p_n={\alpha}_n\cap \beta $. Furthermore, after replacing $A(R,1)$ by the closure of the subdomain of $A(R,1)-\beta$ which is disjoint from $\partial_1$, then $f^{-1}(t_0)$ consists of $n+1$ disjoint proper arcs representing the ends of $f^{-1}(t_0)$. \end{assertion} \begin{proof} This assertion follows immediately from elementary separation properties of curves and the conformal classification of annular domains. \end{proof} By Assertion~\ref{a2.1}, after replacing $A(R,1)$ by a subend and parameterizing this subend by some $A(R',1)$, we may assume that $f\colon A(R,1)\to \mathbb{R}$ is harmonic and analytic (up to and including the boundary $\partial _1$) and $f^{-1}(t_0)$ is a finite collection $\{{\alpha}_0, {\alpha}_1, \ldots, {\alpha}_n\}$ of pairwise disjoint, properly embedded arcs transverse to $\partial_1$, and each arc ${\alpha}_i$ has its starting point $p_i$ in $\partial_1$, $i=0,1, \ldots, n$. We can also assume that $\{{\alpha}_0,{\alpha}_1,\ldots, {\alpha}_n\}$ are cyclically ordered in a counterclockwise manner. 
\begin{assertion} \label{a2.2} Suppose that $R>0$ and that the limit set $L(f^{-1}(t_0))\subset \partial_R$ of $f^{-1}(t_0)$ is not equal to $\partial_R$. Then, item~2 in the statement of Theorem~\ref{th1.1} holds. \end{assertion} \begin{proof} Let $\sigma$ be a compact embedded arc in $\overline{A}(R,1)- \bigcup^n_{i=0}{\alpha}_i$ with one end point in $\partial_1$ and the other end point in $\partial_R$ (note that $\sigma $ exists since $\partial _R-L(f^{-1}(t_0))\neq \mbox{\O }$). Let $\sigma({\varepsilon})$ be a small, open regular neighborhood of $\sigma$ in $\overline{A}(R,1)-\bigcup^n_{i=0}{\alpha}_i$ which is at a positive distance from $\bigcup_{i=0}^n{\alpha}_i$ and chosen so that $\partial [A(R,1)-\sigma({\varepsilon})]$ is a smooth connected arc. By the conformal classification of annular domains and boundary regularity of holomorphic functions, there exists a conformal diffeomorphism (see Figure \ref{fig2}) \[ \eta\colon \Delta=\left\{ z=x+iy\in \mathbb{C} \mid |z|<1,\ y \geq 0\right\}\to [A(R,1)-\sigma({\varepsilon})]. \] Consider the induced harmonic function $\widehat{f}=f\circ \eta \colon \Delta\to \mathbb{R}$. Since $\Delta$ is simply connected, $\widehat{f}$ admits a well-defined harmonic conjugate function $\widehat{f}^*$. Hence, the function $F=\widehat{f}+i\widehat{f}^*\colon \Delta\to\mathbb{C}$ is holomorphic. \begin{figure} \caption{$\eta $ conformally parameterizes the shaded region on the right by a half disk. In fact, we will show that ${\alpha} _0$ with an interval limit set does not occur.} \label{fig2} \end{figure} As $f$ does not have critical points in $f^{-1}(t_0)$, $F$ restricted to any of the finite number of components $\eta ^{-1}({\alpha}_i)$ of $\widehat{f}^{-1}(t_0)$ ($0\leq i\leq n$) monotonically parameterizes an interval on the line \[ L_{t_0}=\{ w=u+iv\in \mathbb{C} \mid u=t_0\} . 
\] The end points on these intervals on $L_{t_0}$ form a finite subset, and thus, it is possible to find a compact interval ${\gamma} $ in $L_{t_0}$ which is disjoint from the end points in the intervals in $F(\widehat{f}^{-1}(t_0))$. Therefore, $F^{-1}({\gamma})$ is compact. Hence, after replacing $A(R,1)$ by a subend, we will assume that $F^{-1}({\gamma})=\mbox{\O }$. Consider the restriction $F|_{\mbox{Int}(\Delta )}\colon \mbox{Int}(\Delta )\to \mathbb{C}-{\gamma} $ to the interior of $\Delta $. $F|_{\mbox{Int}(\Delta )}$ is essentially bounded in the sense that its image is contained in a domain conformally equivalent to an open subset of the unit disk via the Riemann mapping theorem. By Plessner's Theorem (Theorem~\ref{thmplessner}), $F|_{\mbox{Int}(\Delta )}$ has angular limits almost everywhere on $(\mathbb{S}^1)^+=\{ z\in \mathbb{C} \mid |z|=1, y\geq 0\} $, and thus, $f$ has angular limits almost everywhere on $\partial _R-\sigma ({\varepsilon} )$. Clearly, by taking smaller and smaller neighborhoods $\sigma ({\varepsilon} )$, we conclude that item~(a) in statement~2 of Theorem~\ref{th1.1} holds in this case. We next describe the limit set $L_i$ of $\eta ^{-1}({\alpha}_i)$, $i=0,1,\ldots ,n$. Clearly $L_i\subset (\mathbb{S}^1)^+$. Suppose that for $i$ fixed, $\eta ^{-1}({\alpha} _i)$ has two distinct limit points $q_1,q_2\in (\mathbb{S}^1)^+$. We claim that in this case, the subarc $I$ of $(\mathbb{S}^1)^+$ whose extrema are $q_1,q_2$ is entirely contained in $L_i$. Arguing by contradiction, suppose that there exists $s\in \mbox{Int}(I)$ which is not a limit point of $\eta ^{-1}({\alpha}_i)$. Then there exists a small ${\delta} >0$ such that the radial arc $[1-{\delta} ,1 ]s \subset \Delta $ is disjoint from $\eta ^{-1}({\alpha} _i)$. 
As $\eta ^{-1}({\alpha} _i)$ is proper in $\Delta $ and disjoint from $[1-{\delta} ,1 ]s$, it eventually lies in one of the two connected components, say $A$, obtained by removing $[1-{\delta} ,1 ]s$ from the ${\delta} $-neighborhood of $(\mathbb{S}^1)^+$ in $\Delta$. In particular, $L_i$ fails to contain the point in $\{q_1, \, q_2\}$ which does not lie in $\overline{A}$, which is a contradiction. This proves our claim, and therefore, $L_i$ is either a compact subarc of $(\mathbb{S}^1)^+$ or a point. We claim that all the limit sets $L_i$ reduce to points. Suppose, on the contrary, that some $L_i$ is a subarc of $(\mathbb{S} ^1)^+$ with nonempty interior Int$(L_i)$. Then almost everywhere on Int$(L_i)$, the holomorphic function $F$ has angular limits which correspond to the end point $*\in L_{t_0}\cup \{ \infty \} $ of $F(\eta ^{-1}({\alpha}_i))$. In this case, $F$ would have the constant angular limit $*$ on a set of positive measure of $(\mathbb{S}^1)^+$, thereby contradicting Privalov's Theorem~\cite{priv1} since $F$ is not constant. This contradiction implies that $L_i$ is a single point in $(\mathbb{S}^1)^+$ for all $i=0,1,\ldots, n$, and so, each end of $f^{-1}(t_0)$ has a unique limit point in $\partial_R$. Since we are assuming $R>0$, the harmonic measure of $\partial _R$ is positive, which clearly implies item~(b) in statement~2 of Theorem~\ref{th1.1}. Now the proof of Assertion~\ref{a2.2} is complete. \end{proof} Recall that the collection $\{ {\alpha} _0,{\alpha} _1,\ldots ,{\alpha} _n\} =f^{-1}(t_0)$ of pairwise disjoint arcs is cyclically ordered. For $k=0,1,\ldots, n$, let $D_k$ be the union of ${\alpha} _k\cup {\alpha} _{k+1}$ with the component of $A(R,1)-f^{-1}(t_0)$ whose boundary contains the arcs ${\alpha}_k, {\alpha}_{k+1}$ (in the case of $k=n$ we identify ${\alpha} _0$ with ${\alpha} _{n+1}$). Note that $\partial D_k$ is a connected open arc, and $D_k$ is simply connected. 
We will call each $D_k$ a (topological) {\it sector} of $A(R,1)$. In the next result we study the conformal character of the sectors $D_k$. \begin{assertion} \label{a2.3} Suppose that $R>0$ and that the limit set $L(f^{-1}(t_0))$ equals $\partial_R$. Then, for each $k=0,1,\ldots ,n$ the (topological) sector $D_k$ is conformally equivalent to the closed upper halfplane in $\mathbb{C} $ (the conformal diffeomorphism between $D_k$ and $\{ a+bi \mid b\geq 0 \}$ fails to be conformal only at the starting points $p_k, \, p_{k+1}$ of ${\alpha}_k, \, {\alpha}_{k+1}$). \end{assertion} \begin{proof} The argument is again by contradiction. Assume that for some $k=0,1,\ldots, n$ the assertion fails. By the Uniformization Theorem and boundary regularity of holomorphic maps, there exists a bijective map $\phi _k\colon \Delta =\{z=x+iy \in \mathbb{C} \mid |z|<1, \ y\geq 0\} \to D_k$ which is a conformal immersion except at the two points of $\partial \Delta$ that correspond to the end points $p_k, \, p_{k+1}$ of ${\alpha}_k$ and ${\alpha}_{k+1}$ in $\partial _1$. Since $\phi _k$ is a bounded holomorphic function, Theorem~\ref{thmplessner} ensures that we can find distinct points $\xi _1, \xi _2\in (\mathbb{S} ^1)^+$ such that $\phi _k$ has an angular limit at $\xi _i$, $i=1,2$. Consider a pair of smooth disjoint arcs ${\beta} _1,{\beta} _2\subset \Delta $, each joining a point of $\phi _k^{-1}[(\partial D_k\cap \partial _1)-\{ p_k,p_{k+1}\} ]\subset \partial \Delta \cap \{ y=0\} $ to one of the points $\xi _1,\xi _2$, and transverse to $(\mathbb{S}^1)^+$. Let $D'_k$ be the subdomain of $\Delta $ whose boundary contains both ${\beta} _1,{\beta} _2$, see Figure~\ref{fig2a}. 
\begin{figure} \caption{The shaded domain on the right is bounded by curves with distinct limiting end points.} \label{fig2a} \end{figure} Note that the (single) limit point of $\phi _k({\beta} _1)$ is different from the limit point of $\phi _k({\beta} _2)$: otherwise for every point $\xi '$ in the open subarc of $(\mathbb{S}^1)^+\subset \partial \Delta $ with extrema $\xi_1,\xi _2$, the angular limit of $\phi _k$ at $\xi '$ exists and is equal to the common limit point of $\phi _k({\beta} _i)$, $i=1,2$. By Privalov's Theorem (Theorem~\ref{thmPrivalov}), this would lead to a contradiction since $\phi _k$ is not constant. Also note that $\phi _k(D'_k)$ is a subdomain of $A(R,1)$ whose boundary consists of $\phi _k({\beta} _1)$, $\phi _k({\beta} _2)$, an arc contained in $\partial _1$ and an open arc ${\delta} \subset \partial_R$, and that every point of $\delta $ is at a positive distance from the domain $A(R,1)-D_k$. Therefore, $\delta $ is disjoint from the limit set of $f^{-1}(t_0)$, which contradicts one of our hypotheses. Now the proof of the assertion is complete. \end{proof} We continue with our proof of Theorem~\ref{th1.1} under the assumption that $R>0$. Let $\Sigma$ be the flat surface with boundary $A(R,1)-{\alpha}_0$, and let $\overline{\Sigma}$ denote the simply connected flat surface with boundary obtained after attaching to $\Sigma $ two ``copies'' of ${\alpha} _0$. Note that $\overline{\Sigma}$ has connected boundary, which consists of the arc $\partial_1-{\alpha}_0$ together with the two copies of ${\alpha}_0$. For each $k\in \{0,1,\ldots,n\}$, we will consider the ``lift'' $\widehat{D}_k$ of the topological sector $D_k$ to $\overline{\Sigma}$. Let $\widehat{f}\colon \overline{\Sigma} \to \mathbb{R}$ be the harmonic function given by the ``lift'' of $f$ to $\overline{\Sigma}$ (clearly $\widehat{f}$ takes equal values at corresponding points in the copies of ${\alpha} _0$ in $\partial \overline{\Sigma}$). 
Since $\overline{\Sigma}$ is simply connected, $\widehat{f}$ admits a well-defined harmonic conjugate function $\widehat{f}^*$ on $\overline{\Sigma }$, and so, the function $F=\widehat{f}+i\widehat{f}^*\colon\overline{\Sigma}\to \mathbb{C} $ is holomorphic. Note that each sector $\widehat{D}_k\subset \overline{\Sigma}$ has its image $F(\widehat{D}_k)$ in one of the two closed halfspaces \[ \mathbb{C}^+(t_0)=\{w=u+iv\in\mathbb{C} \mid u\geq t_0\} , \quad \mathbb{C}^{-}(t_0)=\{u+iv \mid u\leq t_0\} . \] Let $L_{t_0}=\{u+iv\in \mathbb{C} \mid u=t_0\}$ and note that $F^{-1}(L_{t_0})$ is the finite ordered set of curves $\{\widehat{{\alpha}}_0, \widehat{{\alpha}}_1, \widehat{{\alpha}}_2, \ldots, \widehat{{\alpha}}_n, \widehat{{\alpha}}'_0\}$ corresponding to the cyclically ordered set of arcs $\{{\alpha}_0,{\alpha}_1,\ldots, {\alpha}_n\}=f^{-1}(t_0)$; also note that $\partial \widehat{D}_n$ contains the arcs $\widehat{{\alpha}}_n$ and $\widehat{{\alpha}}_0'$ in its boundary. After reindexing, we can assume that $F(\widehat{D}_k)\subset \mathbb{C}^+(t_0)$ for $k$ even and $F(\widehat{D}_k)\subset \mathbb{C}^-(t_0)$ for $k$ odd. For $k=0,1,\ldots ,n$, let $w_k\in L_{t_0}\cup \{ \infty \} $ be the end point of the half-open arc $F|_{\widehat{{\alpha} }_k}$, and let $w_{n+1}\in L_{t_0}\cup \{ \infty \} $ be the corresponding end point of $F|_{\widehat{{\alpha} }'_0}$. \begin{assertion} \label{ass3.4bis} If $R>0$, then item~2 of Theorem~\ref{th1.1} holds. \end{assertion} \begin{proof} By Assertion~\ref{a2.2}, we only need to arrive at a contradiction provided that $L(f^{-1}(t_0))=\partial _R$. So assume $L(f^{-1}(t_0))=\partial _R$. We first check that with the notation above, then $w_k=w_{k+1}$ for all $k=0,1,\ldots ,n$. Consider the restriction $F|_{\widehat{D}_k}$, which is a holomorphic function whose image is contained in, say, $\mathbb{C} ^+(t_0)$. 
Since we are assuming $L(f^{-1}(t_0))=\partial _R$, Assertion~\ref{a2.3} ensures that there exists a bijective map $\psi _k\colon \{ a+ib\in \mathbb{C} \mid b\geq 0\} \to \widehat{D}_k$ which is conformal everywhere on the closed upper halfplane except at the two points $q_k,q_{k+1}$ in $\{ b=0\} $ which correspond to the starting points $\widehat{p}_k,\widehat{p}_{k+1}$ of $\widehat{{\alpha} }_k,\widehat{{\alpha} }_{k+1}$, respectively. Schwarz's reflection principle applied to the restriction of $F\circ \psi _k$ to $\{ a+ib \mid b\geq 0\} -[q_k,q_{k+1}]$ (here $[q_k,q_{k+1}]$ denotes the closed interval in the real axis with extrema $q_k,q_{k+1}$) produces a holomorphic function $\widetilde{F}_k \colon \mathbb{C} -[ q_k,q_{k+1}]\to \mathbb{C} $ such that \begin{itemize} \item $\widetilde{F}_k$ extends continuously to the metric completion ${\mathcal C}$ of $\mathbb{C} -[ q_k,q_{k+1}]$ (note that ${\mathcal C}$ is conformally $A(0,1)$). We denote these extensions by the same symbols $\widetilde{F}_k$. \item $\widetilde{F}_k$ maps $\mathbb{R} -[ q_k,q_{k+1}]$ into the line $L_{t_0}$ and maps each of the two copies of the interval $[ q_k,q_{k+1}]$, when considered inside the boundary of ${\mathcal C}$, into the union of $F(\partial \widehat{D}_k \cap \partial _1)$ and its reflected image with respect to $L_{t_0}$. Furthermore, the preimage by $\widetilde{F}_k$ of every point in $L_{t_0}$ consists of at most two points in the real line $\{ b=0\} $, and for some point in $L_{t_0}$, its preimage by $\widetilde{F}_k$ consists of at most one point. \end{itemize} By Picard's theorem, $\widetilde{F}_k$ extends meromorphically across $\infty $, and its extension is a conformal diffeomorphism from a neighborhood $U_k$ of $\infty $ in $\mathbb{C} \cup \{ \infty \} $ into its image. 
In particular, the limit point $w_k$ of $\widetilde{F}_k\left( \psi _k^{-1}(\widehat{{\alpha} }_k) \right) $ equals the limit point $w_{k+1}$ of $\widetilde{F}_k\left( \psi _k^{-1}(\widehat{{\alpha} }_{k+1}) \right) $, as desired. We now consider the special case where $w_0=\infty $. In this case, the pullback by $\widetilde{F}_k|_{U_k}$ of the complete flat metric $|dw|^2$ is a complete flat metric on $U_k$. Furthermore, for each $k$ the equality $F^*|dw|^2= (\psi _k^{-1})^*\left( \widetilde{F}_k|_{U_k\cap \{ b\geq 0\} }\right) ^*|dw|^2$ holds on some end representative $E_k$ of $D_k$. Therefore $F$ induces under pullback a complete flat metric on $\cup _{k=0}^nE_k$, which is an end representative of $\overline{\Sigma }$. Clearly this complete flat metric on this end representative of $\overline{\Sigma }$ descends to a complete flat metric on an end representative of $A(R,1)$ when $w_0=\infty $. This contradicts the assumption that $R$ is positive (because any such complete flat annulus has quadratic area growth by a direct application of the Gauss-Bonnet formula together with the first and second variation of area formulas, and the fact that annular ends with at most quadratic area growth are parabolic, see~\cite{gri1}). On the other hand if $w_0$ is finite, then the arguments in the last paragraph apply to the holomorphic function $\frac{1}{\widetilde{F}_k-w_0}$ and lead to a similar contradiction. This finishes the proof of Assertion~\ref{ass3.4bis}. \end{proof} In order to complete the proof of Theorem~\ref{th1.1} it remains to demonstrate item~1 of the theorem. So from now on suppose $R=0$. 
As we did just after Assertion~\ref{a2.1}, we can assume that $f\colon A(0,1)\to \mathbb{R}$ is harmonic and analytic (up to and including the boundary $\partial _1$) and $f^{-1}(t_0)$ is a cyclically ordered finite collection $\{{\alpha}_0, {\alpha}_1, \ldots, {\alpha}_n\}$ of pairwise disjoint, properly embedded arcs transverse to $\partial_1$, and each arc has its starting point in $\partial_1$ and limits to $z=0$. Also the arguments just before Assertion~\ref{a2.3} lead us to define the topological sectors $D_k$, $k=0,1,\ldots ,n$, each one being the union of ${\alpha} _k\cup {\alpha} _{k+1}$ with the component of $A(0,1)-f^{-1}(t_0)$ whose boundary contains the arcs ${\alpha}_k, {\alpha}_{k+1}$ (with ${\alpha} _0={\alpha} _{n+1}$ if $k=n$), and with an arc in $\partial _1$. Note that these sectors $D_k$ are still parabolic in our new setting of $R=0$, since the conformal structure of the annulus $A(0,1)$ is parabolic and the sectors $D_k$ are then proper subdomains of a parabolic surface. Repeating the arguments before Assertion~\ref{ass3.4bis}, we cut $A(0,1)$ along ${\alpha} _0$ and then reattach the cutting curve twice to obtain a simply connected surface $\overline{\Sigma }$, a holomorphic function $F=\widehat{f}+i\widehat{f}^*\colon \overline{\Sigma }\to \mathbb{C} $ and a finite, ordered set of arcs $\{\widehat{{\alpha}}_0, \widehat{{\alpha}}_1, \widehat{{\alpha}}_2, \ldots, \widehat{{\alpha}}_n, \widehat{{\alpha}}'_0\}$ which correspond to $\{{\alpha}_0,{\alpha}_1,\ldots, {\alpha}_n\}=f^{-1}(t_0)\subset A(0,1)$. By the arguments in the proof of Assertion~\ref{ass3.4bis}, the holomorphic differential $dF$ on $\overline{\Sigma }$ descends to the holomorphic differential $\omega =df+idf^*$ on $A(0,1)$, which extends to a meromorphic differential $\widetilde{\omega }$ on $\overline{\mathbb{D} }=\overline{A}(0,1)$. 
In order to prove item~1(b), note that if $\widetilde{\omega }$ is holomorphic at $0\in \overline{A}(0,1)$, then $d\widehat{F}$ is holomorphic at $\infty $ (with the same notation as in the proof of Assertion~\ref{ass3.4bis}). This implies that $F$ is holomorphic at $0$ and so, $f$ is bounded on $E$. Conversely, if $f$ is bounded then $F$ is bounded as well, which implies $\widetilde{\omega }=dF$ is holomorphic. Finally we prove item~1(c) of Theorem~\ref{th1.1}. Since length$({\alpha} )=\int _{{\alpha} }|*df|=\int _{{\alpha} }|df|<\infty $, the common limit point $w_0=\ldots =w_n=w_{n+1}$ corresponds to a finite point. In this case, the arguments in the last paragraph demonstrate that $f$ is bounded. \begin{remark} {\rm Suppose $R>0$ and $f\colon A(R,1)\to \mathbb{R}$ is a nonconstant harmonic function with angular limits almost everywhere on $\partial_R$. Then the arguments in the proof of Theorem~\ref{th1.1} can also be applied to demonstrate the following result: For every $t\in \mathbb{R}$, each proper arc (piecewise smooth) in the boundary of a component of $\{z\in A(R,1) \mid f(z)\geq t\}$ or of $\{z\in A(R,1) \mid f(z)\leq t\}$ has a unique limit point in $\partial_R$. In particular, each nonlimit end of the $1$-complex $f^{-1}(t)$ has a unique limit point in $\partial_R$. } \end{remark} \section{The proof of Corollary~\ref{c1.2}.} In this section, we will apply the following theorem of Collin, Kusner, Meeks and Rosenberg~\cite{ckmr1} to prove Corollary~\ref{c1.2}. \begin{theorem} \label{t3.1} If $X\colon \Sigma \to \mathbb{R}^3$ is a properly immersed minimal surface with boundary (possibly empty) which is contained in a halfspace of $\mathbb{R}^3$, then $\Sigma$ is parabolic. \end{theorem} Let $R\in [0,1)$. Suppose $X\colon A(R,1)\to \mathbb{R}^3$ is a conformal, proper minimal immersion such that $X^{-1}(P)$ has a finite number of ends for some horizontal plane $P\subset \mathbb{R}^3$ at height $t_0\in \mathbb{R} $. We claim that $R=0$. 
If instead $R>0$, then $A(R,1)$ is an annular end of finite type for $x_3$, and some component of $x_3^{-1}(( -\infty ,t_0])$ or $x_3^{-1}([t_0,\infty ))$ is hyperbolic by item~2 of Theorem~\ref{th1.1}. But such a component must be parabolic by Theorem~\ref{t3.1}, since $X$ restricted to this component is a properly immersed minimal surface contained in a halfspace of $\mathbb{R}^3$. Hence $R=0$. By item~1 of Theorem~\ref{th1.1}, the holomorphic one-form $dx_3+idx_3^*$ extends to a meromorphic one-form on $\overline{\mathbb{D} }=\overline{A}(0,1)$. By regularity of the induced metric, the meromorphic Gauss map $g\colon A(0,1)\to \mathbb{C} \cup \{ \infty \} $ of $X$ achieves the values $0,\infty $, corresponding to the north and south poles of $\mathbb{S}^2$ and equal to the unit normals of $P$, only a finite number of times. Hence, $g$ misses $0,\infty $ on $A(0,R')$ for some $R'\in (0,1]$. For some $k\in \mathbb{Z} $, $z^{-k}g$ induces the zero map from $\pi_1(A(0,R'))$ to $\pi_1(\mathbb{C} - \{0\})$ and thus, by elementary covering space theory, $z^{-k}g(z)=e^{H(z)}$ for some holomorphic function $H$ on $A(0,R')$. This completes the proof of the main statement of Corollary~\ref{c1.2}. We next prove the equivalence between items 1--3 of Corollary~\ref{c1.2}. The only implication which is not obvious is that 3 implies 1, so assume 3 holds. The main statement of Corollary~\ref{c1.2} applied to each of the planes $P_1, P_2$ implies that on some end $A_{R_1}=\{ z\in A(0,1)\mid 0<|z|\leq R_1<1\} $ of $A(0,1)$, the Gauss map $g$ misses the four values corresponding to the two pairs of antipodal points of $\mathbb{S}^2$ which are orthogonal to $P_1$ or $P_2$. Since $A_{R_1}$ is conformally a punctured disk, Picard's theorem can be applied to $g$ and gives that $g$ extends across the puncture to a meromorphic function on $\overline{\mathbb{D}}=\overline{A(0,1)}$. Hence, $X$ has finite total curvature.
Finally suppose that $\int _{{\alpha} }\frac{\partial x_3}{\partial \eta }\, ds$ is finite for some end representative of $X^{-1}(P)$. Then item~1(b) of Theorem~\ref{th1.1} implies that $x_3$ is bounded on the minimal end $E=X(A(0,1))$. In particular, $E$ is contained in a horizontal slab. In this setting, Lemma~2.2 in~\cite{ckmr1} ensures that $E$ has quadratic area growth. Since the Gaussian curvature of $E$ is nonpositive, a standard application of the first and second variation formulas for area implies that $E$ has finite total curvature. In this situation, it is well known that $E$ is asymptotic to $P$ with finite multiplicity. Now Corollary~\ref{c1.2} is proved. \center{William H. Meeks, III at [email protected]\\ Mathematics Department, University of Massachusetts, Amherst, MA 01003} \center{Joaqu\'\i n P\'{e}rez at [email protected]\\ Department of Geometry and Topology and Institute of Mathematics (IEMath-GR), University of Granada, 18071, Granada, Spain} \end{document}
\begin{document} \title{Vertex Operator Superalgebras and Odd Trace Functions} \begin{abstract} We begin by reviewing Zhu's theorem on modular invariance of trace functions associated to a vertex operator algebra, as well as a generalisation by the author to vertex operator superalgebras. This generalisation involves objects that we call `odd trace functions'. We examine the case of the $N=1$ superconformal algebra. In particular we compute an odd trace function in two different ways, and thereby obtain a new representation theoretic interpretation of a well known classical identity due to Jacobi concerning the Dedekind eta function. \end{abstract} \section{Introduction} One of the most significant theorems in the theory of vertex operator algebras (VOAs) is the modular-invariance theorem of Zhu \cite{Zhu}. The theorem states that under favourable circumstances the graded dimensions of certain modules over a VOA are modular forms for the group $SL_2(\mathbb{Z})$. The favourable circumstances are that the VOA be rational, $C_2$-cofinite, and be graded by integer conformal weights (we define all terms in Section \ref{definitions} and state Zhu's theorem fully in Section \ref{zhusection} below). Numerous generalisations of Zhu's theorem have appeared in the literature: to twisted modules over VOAs \cite{DLMorbifold}, to vertex operator superalgebras (VOSAs) and their twisted modules \cite{DZ}, \cite{DZ2} (see also \cite{Jordan}), to intertwining operators for VOAs \cite{Huang}, \cite{M2}, to twisted intertwining operators \cite{Y}, and to non rational VOAs \cite{Mnonrational}. In \cite{meCMP} the present author relaxed the assumption of integer conformal weights of $V$ to allow arbitrary rational conformal weights. This work was carried out in the setting of twisted modules over a rational $C_2$-cofinite VOSA. 
Actually it is worth noting that in that paper the condition of $C_2$-cofiniteness was also relaxed slightly, allowing applications to some interesting examples such as affine VOAs at admissible level. One of the features of \cite{meCMP} is the appearance of odd trace functions (see Section \ref{meat} for the definition) which are to be included alongside the more usual (super)trace functions in order to achieve modular invariance. Although similar in many ways, these odd traces differ from (super)traces in that they act nontrivially on odd elements of a vector superspace, whereas the (super)trace must always vanish on such elements. The results of \cite{meCMP} are reviewed in Section \ref{meat} (for simplicity in the special case of Ramond twisted modules). In the present work, in Section \ref{ex4}, we compute an odd trace function for a particular example: the $N=1$ superconformal minimal model of central charge $c = -21/4$. We evaluate the odd trace function on the superconformal generator (which is an odd element of conformal weight $3/2$), using the strong constraint of its modular invariance. The odd trace function in question equals the weight $3/2$ modular form $\eta(\tau)^3$, where $\eta(\tau)$ is the well known Dedekind eta function. We then give a different proof of this equality (up to an ambiguity of signs) using a BGG resolution and some simple combinatorics. The result is a representation theoretic interpretation of the classical identity \begin{align*} \eta(\tau)^3 = q^{1/8} \sum_{n \in \mathbb{Z}} (4n+1) q^{n(2n+1)} \end{align*} similar in spirit, but a little different, to the celebrated proof coming from the affine Weyl-Kac denominator identity (\cite{IDLA} Chapter 12). \emph{Acknowledgements:} I would like to warmly thank the organisers of the conference `Lie Superalgebras' at Universit\`{a} di Roma Sapienza where this work was presented. Also to express my gratitude to IMPA and to the IHES where the writing of this paper was completed. 
\section{Definitions}\label{definitions} For us a \underline{vertex superalgebra} \cite{Kac}, \cite{FBZ} is a quadruple $V, {\left|0\right>}, T, Y$ where $V$ is a vector superspace, ${\left|0\right>} \in V$ an even vector, $T : V \rightarrow V$ an even linear map, and $Y : V \otimes V \rightarrow V((z))$, denoted $u \otimes v \mapsto Y(u, z)v = \sum_{n \in \mathbb{Z}} u_{(n)}v z^{-n-1}$, is also even. These data are to satisfy the following axioms. \begin{itemize} \item The unit identities $Y({\left|0\right>}, z) = I_V$ and $Y(u, z){\left|0\right>}|_{z=0} = u$. \item The translation invariance identity $Y(Tu, z) = \partial_z Y(u, z)$. \item The Cousin property that the three expressions \[ Y(u, z)Y(v, w) x \quad \quad p(u, v) Y(v, w) Y(u, z) x, \quad \text{and} \quad Y(Y(u, z-w)v, w) x, \] which are elements of $V((z))((w))$, $V((w))((z))$, and $V((w))((z-w))$, are images of a single element of $V[[z, w]][z^{-1}, w^{-1}, (z-w)^{-1}]$ under natural inclusions into those three spaces. \end{itemize} An equivalent definition, more convenient for some applications, is the following. A vertex superalgebra is a triple $V, {\left|0\right>}, Y$ where these data are as above, but satisfy the following axioms. \begin{itemize} \item The unit identities ${\left|0\right>}_{(n)}u = \delta_{n, -1} u$, $u_{(-1)}{\left|0\right>} = u$ and $u_{(n)}{\left|0\right>} = 0$ for $n > 0$. \item The Borcherds identity (also known, in a different notation, as the Jacobi identity) \[ B(u, v, x; m, k, n) = 0 \quad \text{for all $u, v, x \in V$, $m, k, n \in \mathbb{Z}$}, \] where \begin{align*} B(u, v, x; m, k, n) &= \sum_{j \in \mathbb{Z}_+} \binom{m}{j} (u_{(n+j)}v)_{(m+k-j)} x \\ &- \sum_{j \in \mathbb{Z}_+} (-1)^j \binom{n}{j} \left[ u_{(m+n-j)} v_{(k+j)} - (-1)^n p(u, v) v_{(k+n-j)} u_{(m+j)} \right] x. \end{align*} \end{itemize} A \underline{vertex algebra} is a purely even vertex superalgebra. Let $V$ be a vertex superalgebra. 
A $V$-module is a vector superspace $M$ with a vertex operation $Y^M : V \otimes M \rightarrow M((z))$ such that \begin{align}\label{moduleborcherds} Y^M({\left|0\right>}, z) = I_M, \quad \text{and} \quad B(u, v, x; m, k, n) = 0 \end{align} for all $u, v \in V$, $x \in M$, $m, k, n \in \mathbb{Z}$. For the present theory we require the extra structure of a \underline{conformal vector}. This is a vector $\omega \in V$ such that its modes $L_n = \omega_{(n-1)} \in \en V$ furnish $V$ with a representation of the Virasoro algebra, i.e., \[ [L_m, L_n] = (m-n) L_{m+n} + \delta_{m, -n} \frac{m^3-m}{12} c \] (here $c \in \mathbb{C}$ is an invariant of $V$ called the central charge), $L_0$ is diagonal with real eigenvalues bounded below, and $L_{-1} = T$. We call a vertex superalgebra with conformal vector a \underline{vertex operator superalgebra} or VOSA, and we use the term VOA to distinguish the purely even case. A $V$-module $M$ is called a \underline{positive energy} module if $L_0 \in \en M$ acts diagonally with eigenvalues bounded below. In particular $V$ is a positive energy $V$-module. The eigenvalues of $L_0 \in \en V$ are called \underline{conformal weights}, and if $L_0 u = \Delta u$ we write \[ Y(u, z) = \sum_{n \in \mathbb{Z}} u_{(n)} z^{-n-1} = \sum_{n \in -\Delta + \mathbb{Z}} u_n z^{-n-\Delta} \] (so that $u_n = u_{(n-\Delta+1)}$). The \underline{zero mode} $u_0 \in \en M$ attached to $u \in V$ is special because it commutes with $L_0$ and thus preserves the eigenspaces of the latter. A VOSA $V$ is said to be \underline{rational} if its category of positive energy modules is semisimple, i.e., it contains finitely many irreducible objects, and any object is isomorphic to a direct sum of irreducible ones. The condition of $C_2$-cofiniteness is an important finiteness condition of vertex (super)algebras introduced by Zhu. We say that $V$ is \underline{$C_2$-cofinite} if \[ \dim \left( V / V_{(-2)}V \right) < \infty. 
\] \section{The Theorem of Zhu}\label{zhusection} Now we come to the theorem of Zhu \cite{Zhu}. \begin{thm}[Zhu] \label{zhuthm} Let $V$ be a VOA (i.e., purely even VOSA) such that: \begin{itemize} \item $V$ is rational, \item the conformal weights of $V$ lie in $\mathbb{Z}_+$, \item $V$ is $C_2$-cofinite. \end{itemize} We associate to each irreducible positive energy module $M$, and $u \in V$, the series \[ S_M(u, \tau) = \tr_M u_0 q^{L_0 - c/24}, \] convergent for $q = e^{2\pi i \tau}$ of modulus less than $1$. There is a grading $V = \oplus_{\nabla \in \mathbb{Z}_+} V_{[\nabla]}$ such that for $u \in V_{[\nabla]}$ the span $\mathcal{C}(u)$ of the (finitely many) functions $S_M(u, \tau)$ defined above is modular invariant of weight $\nabla$, i.e., \[ (c\tau + d)^\nabla f\left(\frac{a\tau+b}{c\tau+d}\right) \in \mathcal{C}(u) \quad \text{for all $f(\tau) \in \mathcal{C}(u)$ and $\slmat \in SL_2(\mathbb{Z})$}. \] \end{thm} Here is an outline of the proof of Zhu's theorem. \begin{enumerate} \item Introduce a space $\mathcal{C}$ of maps $S(u, \tau) : V \times \mathcal{H} \rightarrow \mathbb{C}$ linear in $V$ and holomorphic in $\mathcal{H} = \{\tau \in \mathbb{C} | \text{Im}{\tau} > 0\}$ satisfying certain axioms; this space is called the `conformal block' of $V$ or the space of conformal blocks of $V$. The definition of $\mathcal{C}$ can be understood in terms of elliptic curves and their moduli \cite{FBZ}. \item It is automatic from its definition that $\mathcal{C}$ admits an action of the group $SL_2(\mathbb{Z})$, namely, \[ [S \cdot A](u, \tau) = (c\tau + d)^{-\nabla_u} S(u, A\tau), \] where $\nabla$ is the grading mentioned above. \item It is proved by direct calculation that $\tr_M u_0^M q^{L_0 - c/24}$ is a conformal block (at least as a formal power series).
\item Using the $C_2$-cofiniteness condition, one shows that any fixed $S \in \mathcal{C}$ satisfies some differential equation, and consequently is expressible as a power series in $q$ (whose coefficients are linear maps $V \rightarrow \mathbb{C}$). \item \label{zhustep} The lowest order coefficient $C_0 : V \rightarrow \mathbb{C}$ in the series expansion factors to a certain quotient $\zhu(V)$ of $V$. This quotient has the structure of a unital associative algebra, and $C_0$ is symmetric, i.e., $C_0(ab) = C_0(ba)$. \item There is a natural bijection \[ \text{irreducible positive energy $V$-modules} \longleftrightarrow \text{irreducible $\zhu(V)$-modules}, \] and so if $V$ is rational, $\zhu(V)$ is finite dimensional semisimple. Thus we can write $C_0 = \sum_N \alpha_N \tr_N$ where the sum is over irreducible $\zhu(V)$-modules and $\alpha_N \in \mathbb{C}$. \item Write the corresponding sum $\sum_N \alpha_N S_M$ where $M$ is the $V$-module associated to $N$. Subtract this conformal block from $S$. \item One can repeat the process and show that $S$ is exhausted by trace functions in a finite number of steps. \end{enumerate} \section{Generalisation to the Supersymmetric Case, and to Rational Conformal Weights}\label{meat} Results described in this section are drawn from \cite{meCMP}. Many examples of interest, especially in the supersymmetric case, are graded by noninteger conformal weights. So it is first necessary (and of independent interest) to relax the condition of integer conformal weights in Theorem \ref{zhuthm}. Therefore let $V$ be a VOA whose conformal weights lie in $\mathbb{Q}$ (and are bounded below) rather than in $\mathbb{Z}_+$, and which satisfies the other conditions of the theorem. Then the trace functions $S_M(u, \tau)$ and their span $\mathcal{C}(u)$ are defined as before. There exists a certain \emph{rational} grading $V = \oplus_{\nabla \in \mathbb{Q}} V_{[\nabla]}$ in place of the usual integer grading. 
It is then true that for $u \in V_{[\nabla]}$ the space $\mathcal{C}(u)$ is invariant under the weight $\nabla$ action\footnote{The precise definition of the action involves choices of roots of unity in general. See \cite{meCMP} for details.}, not of $SL_2(\mathbb{Z})$, but of its congruence subgroup \begin{align*} \Gamma_1(N) = \{\slmat \in SL_2(\mathbb{Z}) | b \equiv 0 \bmod N, \,\text{and}\,\, a \equiv d \equiv 1 \bmod N\}. \end{align*} Here $N$ is the least common multiple of the denominators of conformal weights of vectors in $V$. This number is finite because of the condition of $C_2$-cofiniteness. It is possible to achieve invariance under the whole of $SL_2(\mathbb{Z})$ by altering our definition of $V$-module. Define a \underline{Ramond twisted} $V$-module to be a vector superspace $M$ together with fields \[ Y^M(u, z) = \sum_{n \in -\Delta_u + \mathbb{Z}} u_{(n)} z^{-n-1} = \sum_{n \in \mathbb{Z}} u_n z^{-n-\Delta_u} \] satisfying (\ref{moduleborcherds}) for all $m \in -\Delta_u + \mathbb{Z}$, $k \in -\Delta_v + \mathbb{Z}$, $n \in \mathbb{Z}$. Notice that the ranges of indices of the modes are modified so that $u \in V$ always possesses integrally graded modes $u_n \in \en M$, and in particular always possesses a zero mode. Let us call a VOA \underline{Ramond rational} if its category of positive energy Ramond twisted modules is semisimple. Let $V$ be a Ramond rational, $C_2$-cofinite VOA with rational conformal weights bounded below. Attach the trace function \[ S_M(u, \tau) = \tr_M u_0 q^{L_0 - c/24} \] to $u \in V$ and $M$ an irreducible positive energy Ramond twisted $V$-module, and let $\mathcal{C}(u)$ be the span of all such trace functions. Then for $u \in V_{[\nabla]}$ the space $\mathcal{C}(u)$ is invariant under the weight $\nabla$ action of the full modular group $SL_2(\mathbb{Z})$. The previous paragraph stated a result for VOAs.
Upon passage from VOAs to VOSAs, one might expect the claim to hold with trace functions simply replaced by supertrace functions $S_M(u, \tau) = \str_M u_0 q^{L_0 - c/24}$. This would be true, except for an interesting subtlety which can be traced to Step \ref{zhustep} of the proof outline given in Section \ref{zhusection}. In the present situation it is appropriate to replace the usual Zhu algebra with a certain superalgebra (which we also refer to as the Zhu algebra and denote $\zhu(V)$) introduced in the necessary level of generality in \cite{DK}. If $V$ is Ramond rational then $\zhu(V)$ is finite dimensional and semisimple. The lowest coefficient $C_0$ of the series expansion of a conformal block now descends to a supersymmetric function on $\zhu(V)$, i.e., $C_0(ab) = p(a, b)C_0(ba)$. The classification of pairs $(A, \varphi)$, where $A$ is a finite dimensional simple superalgebra (over $\mathbb{C}$) and $\varphi$ is a supersymmetric function on $A$, is as follows. \begin{itemize} \item $A = \en(N)$ for some finite dimensional vector superspace $N$, and $\varphi$ is a scalar multiple of $\str_N$. \item $A = \en(P)[\theta] / (\theta^2 - 1)$ where $P$ is a vector space and $\theta$ is an odd indeterminate, and $\varphi$ is a scalar multiple of $a \mapsto \tr_P(a \theta)$. \end{itemize} The first case is the analogue of the usual Wedderburn theorem. The superalgebra of the second case is known as the \underline{queer superalgebra} and is often denoted $Q_n$ (where $n = \dim P$). Clearly we have \[ Q_n \cong \left\{ \twobytwo X Y Y X | X, Y \in \text{Mat}_n(\mathbb{C}) \right\}, \] and the supersymmetric function on $Q_n$, unique up to a scalar factor, is $\left(\begin{smallmatrix}{X}&{Y}\\{Y}&{X}\\ \end{smallmatrix}\right) \mapsto \tr Y$, which is known as the \underline{odd trace}. Roughly speaking, modular invariance will hold for $\mathcal{C}(u)$ defined as the span of supertrace functions together with appropriate analogues of odd trace functions.
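The supersymmetry of the odd trace on $Q_n$ can also be verified by direct matrix computation. The following sketch (an aside, not part of the original text; all names are illustrative) encodes an element of $Q_n$ as a pair $(X, Y)$ of $n \times n$ matrices standing for the block matrix above, and checks $\varphi(ab) = p(a, b)\varphi(ba)$ on random homogeneous elements:

```python
# Sketch: numerical check that the odd trace on the queer superalgebra Q_n
# is supersymmetric, phi(ab) = p(a,b) phi(ba).  An element of Q_n is encoded
# as a pair (X, Y) of n x n matrices, standing for the block matrix
# [[X, Y], [Y, X]]; (X, 0) is even and (0, Y) is odd.
import random

n = 2

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

def mul(a, b):
    # product in Q_n: (X1, Y1)(X2, Y2) = (X1 X2 + Y1 Y2, X1 Y2 + Y1 X2)
    (X1, Y1), (X2, Y2) = a, b
    return (matadd(matmul(X1, X2), matmul(Y1, Y2)),
            matadd(matmul(X1, Y2), matmul(Y1, X2)))

def phi(a):
    # the odd trace: [[X, Y], [Y, X]] -> tr(Y)
    X, Y = a
    return sum(Y[i][i] for i in range(n))

def rand_mat():
    return [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]

zero = [[0] * n for _ in range(n)]

random.seed(0)
for _ in range(100):
    for pa in (0, 1):           # parity of a
        for pb in (0, 1):       # parity of b
            a = (rand_mat(), zero) if pa == 0 else (zero, rand_mat())
            b = (rand_mat(), zero) if pb == 0 else (zero, rand_mat())
            sign = -1 if pa == 1 and pb == 1 else 1
            assert phi(mul(a, b)) == sign * phi(mul(b, a))
print("odd trace on Q_%d is supersymmetric" % n)
```

The only nontrivial case is the mixed one, where the check reduces to cyclicity of the ordinary trace; on pairs of equal parity both sides vanish, since $\varphi$ kills even elements and the product of two odd elements is even.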
More precisely: \begin{defn}\label{queertracedefn} Let $V$ be a Ramond rational, $C_2$-cofinite VOSA with rational conformal weights bounded below. Let $A$ be a simple component of $\zhu(V)$, $N$ the corresponding unique $\mathbb{Z}_2$-graded irreducible module, and $M$ the corresponding irreducible positive energy Ramond twisted $V$-module. If $A \cong \en(P)[\theta]/(\theta^2-1)$ is queer then let $\Theta : M \rightarrow M$ denote the lift to $M$ of the map $\theta : N \rightarrow N$ of multiplication by $\theta$. In this case we define the \underline{odd trace function} \[ S_M(u, \tau) = \tr_M u_0 \Theta q^{L_0 - c/24}. \] If $A$ is not queer then we define the \underline{supertrace function} \[ S_M(u, \tau) = \str_M u_0 q^{L_0 - c/24}. \] \end{defn} Now we can state the main theorem: it is Theorem 1.3 of \cite{meCMP} applied to the special case of untwisted characters of Ramond twisted $V$-modules. \begin{thm}[\cite{meCMP}]\label{mythm} Let $V$ be a VOSA as in Definition \ref{queertracedefn}, and let $\mathcal{C}(u)$ be the span of the supertrace functions and odd trace functions attached to all irreducible positive energy Ramond twisted $V$-modules. There exists a grading $V = \oplus_{\nabla \in \mathbb{Q}} V_{[\nabla]}$ such that for $u \in V_{[\nabla]}$ the space $\mathcal{C}(u)$ is invariant under the weight $\nabla$ action of $SL_2(\mathbb{Z})$. \end{thm} In the next two sections we examine some examples of odd trace functions. \section{Example: The Neutral Free Fermion} This example is considered more fully in \cite{meCMP}; the interested reader may refer there for further details. We consider the Lie superalgebras $A^\text{tw}$ (resp. $A^\text{untw}$) \[ (\oplus_{n} \mathbb{C} \psi_n) \oplus \mathbb{C} 1 \] where the direct sum ranges over $n \in 1/2 + \mathbb{Z}$ (resp. $n \in \mathbb{Z}$). Here the vector $1$ is even, $\psi_n$ is odd. The commutation relations in both cases are \[ [\psi_{m}, \psi_{n}] = \delta_{m, -n} 1.
\] We introduce the Fock module \[ V = U(A^\text{tw}) \otimes_{U(A^\text{tw}_+)} \mathbb{C} {\left|0\right>}, \] where \[ A^\text{tw}_+ = \mathbb{C} 1 \oplus (\oplus_{n \geq 1/2} \mathbb{C} \psi_n) \] and $\mathbb{C} {\left|0\right>}$ is the $A^\text{tw}_+$-module on which $1$ acts as the identity and $\psi_n$ acts trivially. It is well known \cite{Kac} that $V$ can be given the structure of a VOSA. The Virasoro element is $\omega = \frac{1}{2} \psi_{-3/2}\psi_{-1/2}{\left|0\right>}$ with central charge $c = 1/2$. The vector $\psi = \psi_{-1/2}{\left|0\right>}$ has conformal weight $1/2$ and associated vertex operator \[ Y(\psi, z) = \sum_{n \in 1/2 + \mathbb{Z}} \psi_n z^{-n-1/2}. \] Ramond twisted $V$-modules are, in particular, modules over the untwisted Lie superalgebra $A^\text{untw}$. In fact the unique irreducible positive energy Ramond twisted $V$-module is \[ M = U(A^\text{untw}) \otimes_{U(A^\text{untw}_+)} (\mathbb{C} v + \mathbb{C} \overline{v}) \] where \[ A^\text{untw}_+ = \mathbb{C} 1 \oplus (\oplus_{n \geq 0} \mathbb{C} \psi_n) \] and $\mathbb{C} v + \mathbb{C} \overline{v}$ is the $A^\text{untw}_+$-module on which $1$ acts as the identity, $\psi_n$ acts trivially for $n > 0$, $\psi_0 v = \overline{v}$ and $\psi_0 \overline{v} = v/2$. We note that $V$ is $C_2$-cofinite and Ramond rational (as well as being rational). The (Ramond) Zhu algebra of $V$ is explicitly isomorphic to the queer superalgebra $Q_1 = \mathbb{C}[\theta] / (\theta^2 = 1)$ via the map $[{\left|0\right>}] \mapsto 1$, $[\psi] \mapsto \sqrt{2} \theta$. Thus the lowest graded piece $M_0 = \mathbb{C} v + \mathbb{C} \overline{v}$ of $M$ is a $Q_1$-module. The corresponding odd trace function is \[ S_M(u, \tau) = \tr_M u_0 \Theta q^{L_0-c/24} \] where $\Theta : M \rightarrow M$ is as in Definition \ref{queertracedefn}. 
By unwinding that definition we see that $\Theta : \mathbf{m} w \mapsto \mathbf{m} \psi_0 w$, where $w$ is $v$ or $\overline{v}$, and the monomial $\mathbf{m} \in U(A^\text{untw} / A^\text{untw}_+)$. The odd trace function $S_M(u, \tau)$ vanishes on $u = {\left|0\right>}$ (indeed on all even vectors of $V$), but it is nonvanishing on the odd vector $\psi$ (which is pure of Zhu weight $1/2$). Therefore $S_M(\psi, \tau)$ must be a modular form on $SL_2(\mathbb{Z})$ of weight $1/2$ (with a possible multiplier system). Indeed one may verify that $\psi_0 \Theta$ acts as $(-1)^{\text{length}(\mathbf{m})}$ on the monomial vector $\mathbf{m} w$, and so \begin{align*} S_M(\psi, \tau) &= q^{-c/24} q^{L_0|_{M_0}} (1 - q) (1 - q^2) \cdots \\ &= q^{1/24} \prod_{n=1}^\infty (1-q^n) = \eta(\tau) \end{align*} (here we have used that $c = 1/2$, and that $L_0|_{M_0} = 1/16$). We have recovered the well known Dedekind eta function $\eta(\tau)$, which is indeed a modular form on $SL_2(\mathbb{Z})$ of weight $1/2$. \section{Example: The $N=1$ Superconformal Algebra} \label{ex4} First we recall the definition of the Neveu-Schwarz Lie superalgebra $\NS^{\text{tw}}$, and its Ramond-twisted variant $\NS^{\text{untw}}$ (which is often called the Ramond superalgebra). \begin{defn} As vector superspaces the Lie superalgebras $\NS^{\text{tw}}$ (resp. $\NS^{\text{untw}}$) are \[ (\oplus_{n \in \mathbb{Z}} \mathbb{C} L_n) \oplus (\oplus_{m} \mathbb{C} G_m) \oplus \mathbb{C} C \] where the direct sum ranges over $m \in 1/2 + \mathbb{Z}$ (resp. $m \in \mathbb{Z}$). Here $C$ and $L_n$ are even, $G_m$ is odd. The commutation relations in both cases are \begin{align} \label{NS} \begin{split} [L_m, L_n] &= (m-n)L_{m+n} + \frac{m^3-m}{12} \delta_{m, -n} C, \\ [G_m, L_n] &= (m - \frac{n}{2}) G_{m+n}, \\ [G_m, G_n] &= 2L_{m+n} + \frac{1}{3}(m^2 - \frac{1}{4}) \delta_{m, -n} C, \end{split} \end{align} with $C$ central.
\end{defn} As usual we introduce the Verma $\NS^{\text{tw}}$-module \[ M^{\text{tw}}(c, h) = U(\NS^{\text{tw}}) \otimes_{U(\NS^{\text{tw}}_+)} \mathbb{C} v_{c, h} \] where \begin{align*} \NS^{\text{tw}}_+ = \mathbb{C} C + \oplus_{n \geq 0} \mathbb{C} L_n + \oplus_{m \geq 1/2} \mathbb{C} G_m, \end{align*} and $\mathbb{C} v_{c, h}$ is the $\NS^{\text{tw}}_+$-module on which $C$ acts by $c \in \mathbb{C}$, $L_0$ acts by $h \in \mathbb{C}$, and higher modes act trivially. It is well known \cite{Kac}, \cite{KacWang} that the quotient $\text{NS}^c = M^\text{tw}(c, 0) / U(\NS^{\text{tw}}) G_{-1/2} v_{c, 0}$ is a VOSA of central charge $c$, as is the irreducible quotient $\text{NS}_c$. We shall also require the (generalised) Verma $\NS^{\text{untw}}$-modules \[ M(c, h) = U(\NS^{\text{untw}}) \otimes_{U(\NS^{\text{untw}}_+)} S_{c, h} \] and their irreducible quotients $L(c, h)$ (we omit the superscript $\text{untw}$) where \begin{align*} \NS^{\text{untw}}_+ = \mathbb{C} C + \oplus_{n \geq 0} \mathbb{C} L_n + \oplus_{m \geq 0} \mathbb{C} G_m, \end{align*} and $S_{c, h}$ is the $\NS^{\text{untw}}_+$-module characterised by: \begin{align*} \begin{array}{ll} \text{$S_{c, h} = \mathbb{C} v_{c, h}$ with $G_0 v_{c, h} = 0$} & \text{if $h = c/24$}, \\ \text{$S_{c, h} = \mathbb{C} v_{c, h} + \mathbb{C} G_0 v_{c, h}$} & \text{if $h \neq c/24$}, \\ \end{array} \end{align*} with $C = c$, $L_0 = h$, and positive modes acting trivially in both cases. A clear summary of the representations of $\text{NS}^c$ and $\text{NS}_c$ can be found in \cite{supermilas}. Here we focus on the Ramond twisted representations and shall often omit the adjective `Ramond twisted'. Generically $\text{NS}_c = \text{NS}^c$ is irreducible and all the $\NS^{\text{untw}}$-modules $L(c, h)$ acquire the structure of positive energy $\text{NS}_c$-modules.
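The special values of the central charge for which $\text{NS}_c$ is rational, recalled in the next paragraph, can be tabulated by a direct computation. The following sketch (an aside, not part of the original text) implements the formulas for $c_{p,p'}$ and $h_{r,s}$ as stated there, and specializes to $(p, p') = (2, 8)$:

```python
# Sketch: tabulate the N=1 minimal model data c_{p,p'} and h_{r,s},
# using the formulas recalled in the text, with exact rational arithmetic.
from fractions import Fraction
from math import gcd

def central_charge(p, pp):
    # c_{p,p'} = (3/2) * (1 - 2 (p'-p)^2 / (p p'))
    return Fraction(3, 2) * (1 - Fraction(2 * (pp - p) ** 2, p * pp))

def lowest_weights(p, pp):
    # h_{r,s} = ((r p' - s p)^2 - (p' - p)^2) / (8 p p') + 1/16,
    # for 1 <= r <= p-1, 1 <= s <= p'-1 with r - s odd
    return sorted({Fraction((r * pp - s * p) ** 2 - (pp - p) ** 2, 8 * p * pp)
                   + Fraction(1, 16)
                   for r in range(1, p) for s in range(1, pp)
                   if (r - s) % 2 == 1})

p, pp = 2, 8
# admissibility: p' - p even and gcd((p'-p)/2, p) = 1
assert (pp - p) % 2 == 0 and gcd((pp - p) // 2, p) == 1
print(central_charge(p, pp), lowest_weights(p, pp))
```

For $(p, p') = (2, 8)$ this prints $c = -21/4$ together with the two weights $-7/32$ and $-3/32$, matching the modules considered below.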
For certain values of $c$ though, $\text{NS}_c$ is a nontrivial quotient of $\text{NS}^c$ and the irreducible positive energy $\text{NS}_c$-modules are finite in number and are all of the form $L(c, h)$. In fact, $\text{NS}_c$ is a (Ramond) rational VOSA when \begin{align*} c = c_{p, p'} = \frac{3}{2}\left(1 - \frac{2(p'-p)^2}{pp'}\right) \end{align*} for $p, p' \in \mathbb{Z}_{>0}$ with $p < p'$, $p' - p \in 2\mathbb{Z}$ and $\gcd(\frac{p'-p}{2}, p) = 1$. In this case the irreducible positive energy $\text{NS}_c$-modules are precisely the $\NS^{\text{untw}}$-modules $L(c, h)$ where \begin{align*} h = h_{r, s} = \frac{(rp'-sp)^2 - (p'-p)^2}{8pp'} + \frac{1}{16} \end{align*} for $1 \leq r \leq p-1$ and $1 \leq s \leq p'-1$ with $r-s$ odd. Let $c$ be one of these special values from now on. The irreducible $\zhu(\text{NS}_c)$-modules are precisely the lowest graded pieces of the modules $L(c, h)$ introduced above. The lowest graded piece is of dimension $1$ if $h = c/24$ (there is clearly at most one such module for any fixed value of $c$), and is of dimension $1|1$ if $h \neq c/24$. It is known that $\zhu(\text{NS}_c)$ is supercommutative (it is a quotient of $\zhu(\text{NS}^c) \cong \mathbb{C}[x, \theta] / (\theta^2 - x + c/24)$ where $x$ is even and $\theta$ odd). Therefore the simple components of $\zhu(\text{NS}_c)$ with the $1|1$-dimensional modules are all copies of the queer superalgebra $Q_1$, and the component with the $1$-dimensional module (if it exists) is $\mathbb{C}$. Let us consider the case $c = -21/4$ (so $p=2$, $p'=8$) for which the two irreducible positive energy modules are $M_i = L(c, h_i)$ where $h_1 = -3/32$ and $h_2 = -7/32 = c/24$. The first of these is the unique queer module. Theorem \ref{mythm} tells us that the supertrace function $S_{M_2}(u, \tau)$ and the odd trace function $S_{M_1}(u, \tau)$ together span an $SL_2(\mathbb{Z})$-invariant space whose weight is the Zhu weight of $u$. Assume further that $u \in V$ is odd. 
Then $S_{M_2}(u, \tau)$ vanishes as the supertrace of an odd element. But $S_{M_1}(u, \tau)$ need not vanish, and it will be a modular form (with a multiplier system). Unwinding Definition \ref{queertracedefn} we see that $\Theta : \mathbf{m} v_{c, h} \mapsto \mathbf{m} G_0 v_{c, h}$ where $\mathbf{m} \in U(\NS^{\text{untw}} / \NS^{\text{untw}}_+)$, and \[ S_{M_1}(u, \tau) = \tr_{M_1} u_0 \Theta q^{L_0 - c/24}. \] The VOSA $\text{NS}_c$ possesses a distinguished element $\nu = G_{-3/2}{\left|0\right>}$ of conformal weight $3/2$; it satisfies $\nu_0 = G_0$. It turns out that $\nu$ is of pure Zhu weight $3/2$ and so by the above remarks \[ F(\tau) := \tr_{M_1} G_0 \Theta q^{L_0 - c/24} \] is a modular form of weight $3/2$. On the top level of $M_1$, $G_0 \Theta = G_0^2 = h - c/24 = 1/8$, so the top level contribution to $F(\tau)$ is $\frac{1}{4}q^{1/8}$. This is already enough information to determine $F(\tau)$ completely. The cube of the Dedekind eta function is $q^{1/8}$ times an ordinary power series in $q$, so the quotient $f(\tau) = F(\tau) / \eta(\tau)^3$ is a holomorphic modular form of weight $0$ for $SL_2(\mathbb{Z})$, possibly with a multiplier system. Since the $q$-series of $f$ has integer powers of $q$ we have $f(T\tau) = f(\tau)$, and since $S^2$ acts trivially on $\mathcal{H}$ we have only the possibilities $f(S\tau) = \pm f(\tau)$. But $(ST)^3$ also acts trivially on $\mathcal{H}$, so if $S$ acted by $-1$ on $f$ we would have $f(\tau) = f((ST)^3\tau) = -f(\tau)$, a contradiction. Hence $f(S\tau) = f(\tau)$, so $f$ is a genuine holomorphic modular form of weight $0$ on $SL_2(\mathbb{Z})$ and is therefore constant; comparing leading coefficients gives $f(\tau) = 1/4$. Thus \begin{align} \label{calcofS} F(\tau) = \eta(\tau)^3 / 4. \end{align} We next compute $F(\tau)$ using representation theory. We obtain (up to some undetermined signs) the following well known classical identity of Jacobi \begin{align}\label{jaceta} \eta(\tau)^3 = q^{1/8} \sum_{n \in \mathbb{Z}} (4n+1) q^{n(2n+1)}.
\end{align} We begin by considering the trace of $G_0 \Theta q^{L_0}$ on the Verma module $M(c, h)$. The action of $G_0 \Theta$ on the monomial \begin{align*} \mathbf{m} v = L_{m_1} \cdots L_{m_s} G_{n_1} \cdots G_{n_t} v, \end{align*} where $m_1 \leq \ldots \leq m_s \leq -1$, $n_1 < \ldots < n_t \leq -1$, and $v$ is $v_{c, h}$ or $G_0 v_{c, h}$, looks like \[ \mathbf{m} v \mapsto G_0 \mathbf{m} G_0 v = (-1)^t \mathbf{m} G_0^2 v + \text{reduced terms}. \] Reduced terms resulting from a single use of the commutation relations are of the same length as $\mathbf{m}$, but contain different numbers of the symbols $L$ and $G$. Reduced terms resulting from more than one use of the commutation relations are strictly shorter than $\mathbf{m}$. Therefore none of these terms contributes to the trace. Consider monomials $\mathbf{m}$ as above with a fixed value of $N = \sum_{i=1}^s m_i + \sum_{j=1}^t n_j$. A simple generating function argument shows that if $N < 0$ then the number of such monomials with $t$ even is the same as the number with $t$ odd. Thus the only nonzero term in $\tr G_0 \Theta q^{L_0}$ is the leading term. It is known that $L(c = -21/4, h_0 = -3/32)$ has a BGG resolution \[ 0 \leftarrow L(c, h_0) \leftarrow M(c, h_0) \leftarrow M(c, h_1) \oplus M(c, h_{-1}) \leftarrow M(c, h_2) \oplus M(c, h_{-2}) \leftarrow \cdots, \] where $h_n = -3/32 + n(2n+1)$ for all $n \in \mathbb{Z}$, and that all $M(c, h_k)$ are naturally embedded in $M(c, h_0)$ \cite{IoharaKoga}. We therefore identify each Verma module with its image in $M(c, h_0)$. The trace we seek is given as an alternating sum over the terms in the resolution. From this we see already that the only nonzero coefficients in the $q$-expansion of $\eta(\tau)^3$ must be for powers $q^{1/8 + n(2n+1)}$. We can also easily determine the coefficients up to a sign.
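The identity (\ref{jaceta}) can be confirmed to any finite order by elementary power series arithmetic. The following sketch (an aside, not part of the original text) compares truncations of $\prod_{n\geq 1}(1-q^n)^3$ and $\sum_{n\in\mathbb{Z}}(4n+1)q^{n(2n+1)}$, which is (\ref{jaceta}) after dividing both sides by $q^{1/8}$:

```python
# Sketch: check Jacobi's identity
#   eta(tau)^3 = q^(1/8) * sum_{n in Z} (4n+1) q^(n(2n+1))
# as an identity of truncated q-series, after dividing by q^(1/8):
#   prod_{n >= 1} (1 - q^n)^3 = sum_{n in Z} (4n+1) q^(n(2n+1)).
N = 100  # work modulo q^N

def mul_trunc(a, b):
    """Product of two power series (coefficient lists), truncated at q^N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

# left-hand side: prod_{n=1}^{N-1} (1 - q^n)^3 mod q^N
lhs = [1] + [0] * (N - 1)
for n in range(1, N):
    factor = [0] * N
    factor[0], factor[n] = 1, -1
    for _ in range(3):
        lhs = mul_trunc(factor, lhs)

# right-hand side: sum of (4n+1) q^(n(2n+1)) over n in Z with n(2n+1) < N
rhs = [0] * N
for n in range(-N, N + 1):
    e = n * (2 * n + 1)
    if 0 <= e < N:
        rhs[e] += 4 * n + 1

assert lhs == rhs
print(lhs[:11])  # [1, -3, 0, 5, 0, 0, -7, 0, 0, 0, 9]
```

Raising the truncation order $N$ checks more coefficients; the nonzero coefficients $1, -3, 5, -7, 9, \ldots$ indeed occur exactly at the exponents $n(2n+1)$.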
Indeed we have $\Theta^2 = h_0 - c/24 = 1/8$, while the operator $G_0|_{M(c, h_k)}$ preserves the top piece $S_k$ of $M(c, h_k)$ and squares to $h_k - c/24$. Therefore the operator $(G_0 \Theta)|_{S_k}$ (which is diagonal on the $1|1$-dimensional space $S_k$) squares to \[ (h_k-c/24)/8 = \left[(4k+1)/8\right]^2. \] This matches perfectly with (\ref{jaceta}). To determine the signs of the coefficients directly it seems to be necessary to know some further information about the singular vectors; it would be nice to find a simpler derivation. Of course similar arguments may be applied to other rational $\text{NS}_c$ and their modules. If $L_0$ happens to take the value $c/24$ on one of the levels of a module then the arguments potentially become more intricate. \end{document}
\begin{document} \title{Topological cell decomposition and dimension theory in $P$-minimal fields} \begin{abstract} This paper addresses some questions about dimension theory for $P$-minimal structures. We show that, for any definable set $A$, the dimension of $\overline{A}\setminus A$ is strictly smaller than the dimension of $A$ itself, and that $A$ has a decomposition into definable, pure-dimensional components. This is then used to show that the intersection of finitely many definable dense subsets of $A$ is still dense in $A$. As an application, we obtain that any definable function $f: D \subseteq K^m \to K^n$ is continuous on a dense, relatively open subset of its domain $D$, thereby answering a question originally posed by Haskell and Macpherson. In order to obtain these results, we show that $P$-minimal structures admit a type of cell decomposition, using a topological notion of cells inspired by real algebraic geometry. \end{abstract} \section{Introduction} Inspired by the successes of $o$-minimality \cite{drie-1998} in real algebraic geometry, Haskell and Macpherson \cite{hask-macp-1997} set out to create a $p$-adic counterpart, a project which resulted in the notion of $P$-minimality. One of their achievements was to build a theory of dimension for definable sets which is in many ways similar to the $o$-minimal case. Still, some questions remained open. The theorem below is one of the main results of this paper. It gives a positive answer to one of the questions raised at the end of their paper (Problem 7.5). We will assume $K$ to be a $P$-minimal expansion of a $p$-adically closed field with value group $|K|$. When we say definable, we mean definable (with parameters) in a $P$-minimal structure. \begin{theorem-nonum}[Quasi-Continuity] Every definable function $f$ with domain $X\subseteq K^m$ and values in $K^n$ (resp. $|K|^n$) is continuous on a definable set $U$ which is dense and open in $X$, and $\dim (X\setminus U)<\dim X$.
\end{theorem-nonum} Haskell and Macpherson already included a slightly weaker version of the above result in Remark~5.5 of their paper \cite{hask-macp-1997}, under the additional assumption that $K$ has definable Skolem functions. However, they only gave a sketch of the proof, leaving out some details which turned out to be more subtle than expected. The authors agreed with us that some statements in the original proof required further clarification. One of the motivations for writing this paper was to remedy this, and also to show that the assumption of Skolem functions could be removed. This seemed worthwhile given that the result had already proven to be a useful tool for deducing other properties about the dimension of definable sets in $P$-minimal structures. For example, in \cite{kuij-leen-2015} one of the authors showed how the Quasi-Continuity Theorem would imply the next result. \begin{theorem-nonum}[Small Boundaries] Let $A$ be a non-empty definable subset of $K^m$. Then it holds that $\dim(\overline{A}\setminus A)<\dim A$. \end{theorem-nonum} That both theorems are very much related is further illustrated by the approach in this paper: we will first prove the Small Boundaries Property, and use it to derive the Quasi-Continuity Property. The tool used to prove these results is a `topological cell decomposition', which we consider to be the second main result of this paper. \begin{theorem-nonum}[Topological Cell Decomposition] For every definable function $f$ from $X\subseteq K^m$ to $K^n$ (resp. $|K|^n$) there exists a good t\--cell decomposition ${\cal A}$ of $X$, such that for every $A\in{\cal A}$, the restriction $f_{|A}$ of $f$ to $A$ is continuous. \end{theorem-nonum} The notions of `t\--cell' and `good t-cell decomposition' were originally introduced by Mathews, whose paper \cite{math-1995} has been a major source of inspiration for us. 
They are analogous to a classical notion of cells coming from real algebraic geometry (see for example the definition of cells in \cite{boch-cost-roy-1987}). Exact definitions will be given in the next section. \\\\ By now, there exist many cell decomposition results for $P$-minimal structures, which can be quite different in flavour, depending on their aims and intended level of generality. Historically, the most influential result is probably Denef's cell decomposition for semi-algebraic sets \cite{dene-1986} (which in turn was inspired by Cohen's work \cite{cohe-1969}). This has inspired adaptations to the sub-analytic context by Cluckers \cite{cluc-2004}, and to multi-sorted semi-algebraic structures by Pas \cite{pas-1990}. Results like \cite{cubi-leen-2015, mour-2009, darn-halu-2015-tmp} give generalizations of Denef-style cell decomposition. Note that full generality is hard to achieve: whereas \cite{cubi-leen-2015} works for all $P$-minimal structures without restriction, it is somewhat weaker than these more specialized results. On the other hand, \cite{mour-2009, darn-halu-2015-tmp} are closer to the results cited above, but require some restrictions on the class of $P$\--minimal fields under consideration. A somewhat different result is the Cluckers-Loeser cell decomposition \cite{cluc-loes-2007} for $b$-minimal structures. Each of these decompositions has its own strengths and weaknesses. The topological cell decomposition proposed here seems to be the best for our purposes, since it is powerful enough to fill the remaining lacunae in the dimension theory of definable sets over $P$\--minimal fields, without restriction. The rest of this paper will be organized as follows. In section~\ref{se:notation}, we recall some definitions and known results, and we set the notation for the remaining sections.
In section~\ref{se:cell-dec}, we will prove the t\--cell decomposition theorem (Theorem~\ref{th:M-cell-prep}) and deduce the Small Boundaries Property (Theorem~\ref{th:dim-boundary}) as a corollary. Finally, in section~\ref{se:pure-dim}, we prove the Quasi-Continuity Property (Theorem~\ref{th:dense-cont}). The key ingredient of this proof is the following result (see Theorem~\ref{th:dense-int}), which is also interesting in its own right. \begin{theorem-nonum} Let $A_1,\dots,A_r\subseteq A$ be a family of definable subsets of $K^m$. If the union of the $A_k$'s has non-empty interior in $A$, then at least one of them has non-empty interior in $A$. \end{theorem-nonum} Note that the above statement shows that, if $B_1,\dots,B_r$ are definable subsets which are dense in $A$, then their intersection $B_1\cap\cdots\cap B_r$ will also be dense in $A$. Indeed, a definable subset is dense in $A$ if and only if its complement in $A$ has empty interior inside $A$. \paragraph{Acknowledgement} The authors would like to thank Raf Cluckers for encouraging this collaboration and for helpful discussions. The research leading to these results has received funding from the European Research Council, ERC Grant nr. 615722, MOTMELSUM, 2014--2019. During the preparation of this paper, the third author was a Postdoctoral Fellow of the Fund for Scientific Research - Flanders (FWO). \section{Notation and prerequisites} \label{se:notation} Let $K$ be a $p$-adically closed field, i.e., elementarily equivalent to a $p$-adic field, and $K^*=K\setminus\{0\}$. We use multiplicative notation for the $p$\--valuation, which we then denote by $|\,.\,|$, so $|ab|=|a||b|$, $|a+b|\leqslant\max(|a|,|b|)$, and so on\footnote{ Compared with additive notation this reverses the order: $|a|\leqslant|b| \Leftrightarrow v(b)\leqslant v(a)$.}\!. For every set $X\subseteq K$ we will use the notation $|X|$ for the image of $X$ by the valuation.
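To make the multiplicative notation concrete, here is a small computational illustration over $\mathbb{Q}$ with $p=3$ (the helper functions \texttt{p\_val} and \texttt{p\_abs} are ad hoc names of ours, not anything from the literature):

```python
from fractions import Fraction

def p_val(x, p):
    """p-adic valuation v_p(x) of a nonzero rational x."""
    x = Fraction(x)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def p_abs(x, p):
    """Multiplicative p-adic absolute value |x| = p^(-v_p(x))."""
    if Fraction(x) == 0:
        return Fraction(0)
    return Fraction(p) ** (-p_val(x, p))

p = 3
a, b = Fraction(9), Fraction(1, 3)
print(p_abs(a, p), p_abs(b, p))                          # 1/9 and 3
print(p_abs(a * b, p) == p_abs(a, p) * p_abs(b, p))      # multiplicativity
print(p_abs(a + b, p) <= max(p_abs(a, p), p_abs(b, p)))  # ultrametric bound
print(p_abs(a + b, p) == max(p_abs(a, p), p_abs(b, p)))  # equality: |a| != |b|
```

Note the final line: whenever $|a|\neq|b|$ the ultrametric inequality is an equality, a standard fact about ultrametric absolute values that is used implicitly in several of the ball arguments below.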
A natural way to extend the valuation to $K^m$ is by putting \[\|(x_1,\dots,x_m)\|:=\max_{i\leqslant m}\{|x_i|\}.\] This induces a topology, with balls \[B(x,\rho):=\{y\in K^m\mathrel{:}\|x-y\|<\rho\}\] as basic open sets, where $x\in K^m$ and $\rho\in |K^*|$. For every $X\subseteq K^m$, write $\overline{X}$ for the closure of $X$ and $\mathop{\rm Int}\nolimits X$ for the interior of $X$ (inside $K^m$). The relative interior of a subset $A$ of $X$ inside $X$, that is $X\setminus\overline{X\setminus A}$, is denoted $\mathop{\rm Int}\nolimits_X A$. Let us now recall the definition of $P$-minimality: \begin{definition} Let $\mathcal{L}$ be a language extending the ring language $\mathcal{L}_{\text{ring}}$. A structure $(K, \mathcal{L})$ is said to be $P$-minimal if, for every structure $(K', \mathcal{L})$ elementarily equivalent to $(K,\mathcal{L})$, the $\mathcal{L}$-definable subsets of $K'$ are semi-algebraic. \end{definition} In this paper, we always work in a $P$-minimal structure $(K,{\cal L})$. Abusing notation, we simply denote it as $K$. The word definable means definable using parameters in $K$. A set $S \subseteq K^m\times|K|^n$ is said to be definable if the related set $\{(x,y) \in K^m\times K^n \mathrel{:} (x,|y_1|, \ldots, |y_n|)\in S\}$ is definable. A function $f$ from $X\subseteq K^m$ to $K^n$ (or to $|K|^n$) is definable if its graph is a definable set. For every such function, let ${\cal C}(f)$ denote the set \begin{displaymath} {\cal C}(f):=\big\{a\in X\mathrel{:} f\mbox{ is continuous on a neighbourhood of $a$ in }X\big\}. \end{displaymath} It is easy to see that this is a definable set. We will use the following notation for the fibers of a set. For any set $X\subseteq K^{m}$, the subsets $I = \{i_1, \ldots, i_r\}$ of $\{1, \ldots, m\}$ induce projections $\pi_I:K^m\to K^r$ (onto the coordinates listed in $I$). Given an element $y\in K^r$, the fiber $X_{y,I}$ denotes the set $\pi_I^{-1}(y)\cap X$.
In most cases, we will drop the sub-index $I$ and simply write $X_y$ instead of $X_{y,I}$ when the projection $\pi_I$ is clear from the context. In particular, when $S\subseteq K^{m+n}$ and $x\in K^m$, we write $S_x$ for the fiber with respect to the projection onto the first $m$ coordinates. \\\\ One can define a strict order on the set of non-empty definable subsets of $K^m$, by putting \begin{center} $B\ll A$ \quad $\Leftrightarrow$ \quad $B\subseteq A$ and $B$ lacks interior in $A$. \end{center} The rank of $A$ for this order is denoted $D(A)$. It is defined by induction: $D(A)\geqslant 0$ for every non-empty set $A$, and $D(A)\geqslant d+1$ if there is a non-empty definable set $B\ll A$ such that $D(B)\geqslant d$. Then $D(A)=d$ if $D(A)\geqslant d$ and $D(A)\ngeqslant d+1$. By convention $D(\emptyset)=-\infty$. \\\\ The notion of dimension used by Haskell and Macpherson in \cite{hask-macp-1997} (which they denoted as $\mathop{\rm topdim} A$) is defined as follows: \begin{definition} \label{def:dimension} The \textbf{dimension} of a set $A \subset K^m$ (denoted as $\dim A$) is the maximal integer $r$ for which there exists a subset $I$ of $\{1,\dots,m\}$ such that $\pi_I^m(A)$ has non-empty interior in $K^r$, where $\pi_I^m:K^m\to K^r$ is defined by \begin{displaymath} \pi_I^m:(x_1,\dots,x_m)\mapsto (x_{i_1},\dots,x_{i_r}) \end{displaymath} with $i_1<\cdots<i_r$ an enumeration of $I$. \end{definition} We will omit the super-index $m$ in $\pi_I^m$ when it is clear from the context, and put $\dim\emptyset=-\infty$. Given a set $S\subseteq K^{m+1}$, $\pi^{m+1}_{\{1,\dots,m\}}(S)$ is simply denoted $\widehat{S}$. Note that by $P$-minimality, if $A \subseteq K^m$ is a definable set and $\dim A =0$, then $A$ is a finite set. Also, $\dim A = m$ if and only if $A$ has non-empty interior. 
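As a simple illustration of Definition~\ref{def:dimension} (our example, not taken from \cite{hask-macp-1997}), consider a curve in $K^2$:

```latex
Let $A = \{(x, x^2) \mathrel{:} x \in K\} \subseteq K^2$. Taking
$I = \{1\}$ gives $\pi_I^2(A) = K$, which is open in $K^1$, so
$\dim A \geqslant 1$. On the other hand, $A$ has empty interior in
$K^2$: every ball around a point $(x, x^2)$ contains points $(x, y)$
with $y \neq x^2$. Hence $\dim A = 1$.
```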
\\\\ Let us now recall some of the properties of this dimension that were already proven by Haskell and Macpherson in \cite{hask-macp-1997}: {\sl \begin{description} \item[\HM {\bf 1}] Given definable sets $A_1,\dots,A_r\subseteq K^m$, it holds that $\dim A_1\cup\cdots\cup A_r=\max(\dim A_1,\dots,\dim A_r)$. (Theorem~3.2) \item[\HM {\bf 2}] For every definable function $f:X\subseteq K^m\to |K|$, $\dim X\setminus{\cal C}(f)<m$. (Theorem~3.3 and Remark~3.4 (rephrased)) \item[\HM {\bf 3}] For every definable function $f:X\subseteq K^m\to K$, $\dim X\setminus{\cal C}(f)<m$. (Theorem~5.4) \end{description}} \noindent Recall that a complete theory $T$ satisfies the Exchange Principle if the model-theoretic algebraic closure for $T$ does so. In every model of a theory satisfying the Exchange Principle, there is a well-behaved notion of dimension for definable sets, which is called the model-theoretic rank. Haskell and Macpherson showed that {\sl \begin{description} \item[\HM {\bf 4}] The model-theoretic algebraic closure for $\mathop{\rm Th}(K)$ satisfies the Exchange Principle. (Corollary~6.2) \item[\HM {\bf 5}] For every definable $X\subseteq K^m$, $\dim X$ coincides with the model-theoretic $\mathop{\rm rk} X$. (Theorem~6.3) \end{description} } The following Additivity Property (Lemma~\ref{le:additivity} below) is known to hold for the model-theoretic rank $\mathop{\rm rk}$ in theories satisfying the Exchange Principle. For a proof, see Lemma~9.4 in \cite{math-1995}. Hence, theorems \HM 4 and \HM 5 imply that $\dim$ also satisfies the Additivity Property. This fact was not explicitly stated by Haskell and Macpherson in \cite{hask-macp-1997}, and seems to have been somewhat overlooked until now. It plays a crucial role in our proof of Theorem~\ref{th:M-cell-prep}, and hence throughout this paper. \begin{lemma}[Additivity Property]\label{le:additivity} Let $S \subseteq K^{m+n}$ be a definable set.
For $d \in \{ -\infty, 0,1, \ldots, n\}$, write $S(d)$ for the set \[ S(d) := \{ a \in K^m \mathrel{:} \dim S_a = d \}.\] Then $S(d)$ is definable and \[ \dim \bigcup_{a \in S(d)} S_a= \dim(S(d)) + d.\] \end{lemma} \noindent Combining this with the first point \HM1, it follows easily that $\dim$ is a dimension function in the sense of van den Dries \cite{drie-1989}. \\\\ Haskell and Macpherson also proved that $P$-minimal structures are \textbf{model-theoretically bounded} (also known as ``algebraically bounded'' or also that ``they eliminate the $\exists^\infty$ quantifier''), \textit{i.e.}, for every definable set $S\subseteq K^{m+1}$ such that all the fibers of the projection of $S$ onto $K^m$ are finite, there exists an integer $N\geqslant 1$ such that all of them have cardinality $\leqslant N$. \\\\ While it is not known whether general $P$-minimal structures admit definable Skolem functions, we do have the following weaker version for coordinate projections with finite fibers. \begin{lemma} Let $S \subseteq K^{m+1}$ be a definable set. Assume that all fibers $S_x$ with respect to the projection onto the first $m$ coordinates are finite. Then there exists a definable function $\sigma: \widehat{S} \to K^{m+1}$ such that $\sigma(x)\in S$ and $\pi^{m+1}_{\{1,\dots,m\}}(\sigma(x)) = x$ for every $x \in \widehat{S}$. \end{lemma} \begin{proof} In Lemma 7.1 of \cite{dene-1984}, Denef shows that this is true on the condition that the fibers are not only finite, but uniformly bounded. (The original lemma was stated for semi-algebraic sets, but the same proof holds for general $P$-minimal structures.) Since uniformity is guaranteed by model-theoretic boundedness, the lemma follows. \end{proof} From this it follows by an easy induction that \begin{corollary}[Definable Finite Choice] \label{cor:choice} Let $f: X \subseteq K^m \to K^n$ be a definable function. Assume that for every $y \in f(X)$, $f^{-1}(y)$ is finite.
Then there exists a definable function $\sigma: f(X) \to X$, such that \[\big(\sigma\circ f(x), f(x)\big) \in \text{Graph}(f)\] for all $x \in X$. \end{corollary} Using the coordinate projections from Definition \ref{def:dimension}, we will now give a definition of t\--cells and t\--cell decomposition: \begin{definition} A set $C \subseteq K^m$ is a {\bf topological cell} (or {\bf t\--cell} for short) if there exists some (non-unique) $I\subseteq\{1,\dots,m\}$ such that $\pi_I^m$ induces a homeomorphism from $C$ to a non-empty open set. \end{definition} In particular, every non-empty open subset of $K^m$ is a t\--cell, and the only finite t\--cells in $K^m$ are the points. For any definable set $X\subseteq K^m$, a {\bf t\--cell decomposition} is a partition ${\cal A}$ of $X$ into finitely many t\--cells. We say that the t\--cell decomposition is {\bf good} if, moreover, each t\--cell in ${\cal A}$ is either open in $X$ or lacks interior in $X$. \section{Topological cell decomposition} \label{se:cell-dec} Recall that $K$ is a $P$-minimal expansion of a $p$-adically closed field. We will first show that every set definable in such a structure admits a decomposition into t\--cells: \begin{lemma}\label{le:M-cell-dec} Every definable set $X\subseteq K^m$ has a good t\--cell decomposition. \end{lemma} \begin{proof} Put $d=\dim X$ and let $e=e(X)$ be the number of subsets $I$ of $\{1,\dots,m\}$ for which $\pi_I(X)$ has non-empty interior in $K^d$. The proof goes by induction on pairs $(d,e)$ (in lexicographic order). The result is obvious for $d\leqslant 0$, so let us assume that $1\leqslant d$, and that the result is proved for smaller pairs. Let $I\subseteq\{1,\dots,m\}$ be such that $\pi_I(X)$ has non-empty interior in $K^d$. For every $y$ in $\mathop{\rm Int}\nolimits\pi_I(X)$, we write $X_y$ for the fiber with respect to the projection $\pi_I$.
For every integer $i\geqslant 1$ let $W_i$ be the set \[W_i := \{y\in\mathop{\rm Int}\nolimits\pi_I(X) \mathrel{:} \mathop{\rm Card} X_y = i\}.\] By model-theoretic boundedness, there is an integer $N\geqslant 1$ such that $W_i$ is empty for every $i> N$. We let ${\cal I}$ denote the set of indices $i$ for which $W_i$ has non-empty interior in $K^d$. For each $i\in{\cal I}$, Definable Finite Choice (Corollary~\ref{cor:choice}) induces a definable function \[\sigma_i:=(\sigma_{i,1},\dots,\sigma_{i,i}): \mathop{\rm Int}\nolimits W_i \to K^{mi},\] such that $X_y=\{\sigma_{i,j}(y)\}_j$ for every {$y\in \mathop{\rm Int}\nolimits W_i$}. Put $V_i:={\cal C}(\sigma_i)$, and $C_{i,j}:=\sigma_{i,j}(V_i)$. Notice that $C_{i,j}$ is a t\--cell for every $i\in{\cal I}$ and $j\leqslant i$. Indeed, the restrictions of $\pi_I$ and $\sigma_{i,j}$ are reciprocal homeomorphisms between $C_{i,j}$ and the open set $V_i$. We show that each $C_{i,j}$ is open in $X$. Fix $i \in \mathcal{I}$ and $j \leqslant i$. Let $x_0$ be an element of $C_{i,j}$ and $y_0=\pi_I(x_0)$, so that $x_0 = \sigma_{i,j}(y_0)$. By construction, $\bigcup_k C_{i,k}=\pi_I^{-1}(V_i)\cap X$ is open in $X$ (because $V_i={\cal C}(\sigma_i)$ is open in $\mathop{\rm Int}\nolimits W_i$, hence open in $K^d$). So there is $\rho\in|K^\times|$ such that $B(x_0,\rho)\cap X$ is contained in $\bigcup_k C_{i,k}$. Let $\varepsilon$ be defined as \begin{displaymath} \varepsilon:=\min_{k\neq j}\|\sigma_{i,k}(y_0)-\sigma_{i,j}(y_0)\| = \min_{k\neq j}\|\sigma_{i,k}(y_0)-x_0\|. \end{displaymath} Because $\sigma_i$ is continuous on the open set $V_i$, there exists $\delta$ such that \begin{displaymath} B(y_0, \delta) \subseteq \sigma_i^{-1}(B(\sigma_i(y_0),\rho)) \subseteq V_i, \end{displaymath} and such that for all $y \in B(y_0, \delta)$, we have that \begin{displaymath} \|\sigma_i(y) - \sigma_i(y_0)\|< \varepsilon. \end{displaymath} Making $\delta$ smaller if necessary, we may assume that $\delta < \min\{\varepsilon, \rho\}$.
We will show that $B(x_0, \delta) \cap X \subseteq C_{i,j}$. Let $x$ be in $B(x_0, \delta) \cap X$, and put $y:= \pi_I(x)$. Since $\delta < \rho$, we know that there must exist $k$ such that $x = \sigma_{i,k}(y)$. Assume that $k \neq j$. Since $\delta < \varepsilon$, we now have that \begin{eqnarray*} \| \sigma_{i,k}(y) - \sigma_{i,k}(y_0)\| & \leqslant & \| \sigma_i(y) - \sigma_i(y_0)\|\\ & < & \varepsilon \\ & \leqslant & \| \sigma_{i,j}(y_0) - \sigma_{i,k}(y_0)\| \\ & = & \| \sigma_{i,k}(y) - \sigma_{i,j}(y_0) \|\\ & = & \| \sigma_{i,k}(y) - x_0\|, \end{eqnarray*} but this means that $\sigma_{i,k}(y) \not \in B(x_0, \delta)$, and hence we can conclude that $x = \sigma_{i,j}(y) \in C_{i,j}$. Given that each $C_{i,j}$ is a t\--cell which is open in $X$, it remains to show the result for $Z:=X\setminus (\bigcup_{i\in{\cal I}, j\leqslant i} C_{i,j})$. We will check that $\pi_I(Z)$ has empty interior (in $K^d$), or equivalently that $\dim\pi_I(Z)< d$. Note that $\pi_I(Z)$ is a disjoint union $A_1 \sqcup A_2 \sqcup A_3$, where $A_1:= \pi_I(X)\setminus\mathop{\rm Int}\nolimits\pi_I(X)$, $A_2 := \mathop{\rm Int}\nolimits\pi_I(X)\setminus\bigcup_{i\leqslant N}W_i$, and $A_3$ is the set \begin{displaymath} A_3:= \bigg(\bigcup_{i\in{\cal I}} \big(W_i\setminus\mathop{\rm Int}\nolimits W_i\big) \cup \big(\mathop{\rm Int}\nolimits W_i\setminus V_i\big)\bigg) \cup \bigcup_{i\notin{\cal I}}W_i. \end{displaymath} By \HM 1 it suffices to check that each of these parts has dimension $<d$. Clearly $A_1$ has empty interior, hence dimension $<d$. For every $y$ in $A_2$, the fiber $X_y$ is infinite, hence $A_2$ must have dimension $<d$ by the Additivity Property. Next, we need to check that $A_3$ also has dimension smaller than $d$. By \HM 1, it is sufficient to do this for each part separately. The set $W_i\setminus \mathop{\rm Int}\nolimits W_i$ has empty interior for every $i\in{\cal I}$, and hence dimension $<d$. 
For $i\in {\cal I}$, $\mathop{\rm Int}\nolimits W_i\setminus V_i$ has dimension $<d$ by \HM 3. And finally, $W_i$ has empty interior for every $i\notin{\cal I}$ by definition of ${\cal I}$, hence dimension $<d$. So $\dim \pi_I(Z)<d$ by \HM 1, hence $\pi_I(Z)$ has empty interior. {A fortiori,} the same holds for $\pi_I(Z_1)$ and $\pi_I(Z_2)$ where $Z_1:=\mathop{\rm Int}\nolimits_X Z$ and $Z_2:=Z\setminus\mathop{\rm Int}\nolimits_X Z$. This implies that, for each $k\in\{1,2\}$, either $\dim Z_k< d$, or $\dim Z_k=d$ and $e(Z_k)<e$. Hence, the induction hypothesis applies to each $Z_k$ separately and gives a good partition $(D_{k,l})_{l\leqslant l_k}$ of $Z_k$. Since $Z_1$ is open in $X$ and $Z_2$ has empty interior in $X$, the sets $D_{k,l}$ will also be either open in $X$, or have empty interior in $X$. It follows that the family consisting of the t\--cells $C_{i,j}$ and $D_{k,l}$ forms a good t\--cell decomposition of $X$. \end{proof} We will now show that this decomposition can be chosen in such a way as to ensure that definable functions are piecewise continuous, which is one of the main theorems of this paper. \begin{theorem}[Topological Cell Decomposition]\label{th:M-cell-prep} For every definable function $f$ from $X\subseteq K^m$ to $K^n$ (or to $|K|^n$) there exists a good t\--cell decomposition ${\cal C}$ of $X$, such that for every $C\in{\cal C}$ the restriction $f_{|C}$ of $f$ to $C$ is continuous. \end{theorem} \begin{proof} We prove the result for functions $f:X\subseteq K^m\to K^n$, by induction on pairs $(m,d)$ where $d=\dim X$. Our claim is obviously true if $m=0$ or $d\leqslant 0$, so let us assume that $1\leqslant d\leqslant m$ and that the theorem holds for smaller pairs. Note that it suffices to prove the result for each coordinate function $f_i$ of $f:= (f_1,\dots,f_n)$ separately. Indeed, suppose the theorem is true for the functions $f_i: X \to K$.
This means that, for each $1\leqslant i\leqslant n$, there exists a good t\--cell decomposition ${\cal C}_i$ of $X$ adapted to $f_i$. It is then easy, by means of Lemma~\ref{le:M-cell-dec}, to build a common, finer good t\--cell decomposition of $X$ having the required property simultaneously for each $f_i$, and hence for $f$. Thus, we may as well assume that $n=1$. Consider the set $X\setminus\mathop{\rm Int}\nolimits{\cal C}(f)$, which can be partitioned as $ A_1 \sqcup A_2$, where $A_1 := X\setminus{\cal C}(f)$ and $A_2:={\cal C}(f)\setminus\mathop{\rm Int}\nolimits{\cal C}(f)$. It follows from \HM 3 that $\dim A_1 <m$. Also, $\dim A_2<m$ since it has empty interior (inside $K^m$), and therefore the union, $X\setminus\mathop{\rm Int}\nolimits{\cal C}(f)$, has dimension $<m$ by \HM 1. Hence, by throwing away $\mathop{\rm Int}\nolimits{\cal C}(f)$ if necessary (which is a definable open set contained in $X$, hence a t\--cell open in $X$ if non-empty), we may assume that $\dim X<m$. Using Lemma~\ref{le:M-cell-dec}, one can obtain a good t\--cell decomposition $(X_j)_{j\in J}$ of $X$. For each $j\in J$, we get a subset $I_j$ of $\{1,\dots,m\}$, an open set $U_j \subseteq K^{d_j}$ (with $d_j=\dim X_j<m$), and a definable map $\sigma_j:U_j\to X_j$. These maps $\sigma_j$ can be chosen in such a way that $\sigma_j$ and the restriction of $\pi_{I_j}$ to $X_j$ are reciprocal homeomorphisms. Now apply the induction hypothesis to each of the functions $f\circ\sigma_j$ to get a good t\--cell decomposition ${\cal C}_j$ of $X_j$. Putting ${\cal C}=\bigcup_{j\in J}{\cal C}_j$ then gives the conclusion for $f$. The proof for functions $f:X\subseteq K^m\to|K|^n$ is similar, the main difference being that one needs to use \HM 2 instead of \HM 3. \end{proof} \begin{remark}\label{re:M-cell-dens} With the notation of Theorem~\ref{th:M-cell-prep}, let $U$ be the union of the cells in ${\cal C}$ which are open in $X$. 
Clearly $U\subseteq{\cal C}(f)$ and $X\setminus U$ is the union of the other cells in ${\cal C}$, each of which lacks interior in $X$. To conclude that ${\cal C}(f)$ is dense in $X$, it remains to check that this union still has empty interior in $X$. This will be the subject of section~\ref{se:pure-dim}. \end{remark} The Topological Cell Decomposition Theorem is a strict analogue of the Cell Decomposition Property (CDP) considered by Mathews in the more general context of $t$-minimal structures. In his paper, Mathews showed that the CDP holds in general for such structures, if a number of rather restrictive conditions hold (e.g., he assumes that the theory of a structure has quantifier elimination), see Theorem 7.1 in \cite{math-1995}. Because of these restrictions, we could not simply refer to this general setting for a proof of the CDP for $P$-minimal structures. Further results from Mathews' paper justify why proving Theorem \ref{th:M-cell-prep} is worth the effort. In Theorem~8.8 of \cite{math-1995} he shows that, if the CDP and the Exchange Principle are satisfied for a $t$-minimal structure with a Hausdorff topology, then several classical notions of ranks and dimensions, including $D$ and $\dim$, coincide for its definable sets. Because of Theorem~\ref{th:M-cell-prep} and \HM 4, we can now apply the observation from Theorem 8.8 to $P$-minimal fields, to get that \begin{corollary}\label{co:rk-dim-D} For every definable set $A\subseteq K^m$, $\dim A=D(A)$. \end{corollary} The Small Boundaries Property then follows easily. \begin{theorem}[Small Boundaries Property]\label{th:dim-boundary} For every definable set $A\subseteq K^m$, one has that $\dim(\overline{A}\setminus A)<\dim A$. \end{theorem} \begin{proof} First note that $D(\overline{A}\setminus A)<D(\overline{A})$, since $\overline{A}\setminus A$ has empty interior in $\overline{A}$. This means that $\dim(\overline{A}\setminus A)<\dim\overline{A}$ by Corollary~\ref{co:rk-dim-D}.
Applying \HM 1, we get that $\dim \overline{A}=\dim A$, and therefore $\dim(\overline{A}\setminus A)<\dim A$. \end{proof} \section{Relative interior and pure components} \label{se:pure-dim} Given a definable set $A\subseteq K^m$ and $x\in K^m$, let $\dim(A,x)$ denote the smallest $k\in{\mathbf N}\cup\{-\infty\}$ for which there exists a ball $B\subseteq K^m$ centered at $x$, such that $\dim A\cap B=k$ (see for example~\cite{boch-cost-roy-1987}). Note that $\dim(A,x)=-\infty$ if and only if $x\notin \overline{A}$. We call this the {\bf local dimension} of $A$ at $x$. $A$ is said to be {\bf pure dimensional} if it has the same local dimension at every point $x\in A$. \begin{claim}\label{cl:pure-dim-basic} Let $S\subseteq K^m$ be a definable set of pure dimension $d$. \begin{enumerate} \item\label{it:pur-dim-dense} Every definable set dense in $\overline{S}$ has pure dimension $d$. \item\label{it:pur-dim-int} For every definable set $Z\subseteq S$, $Z$ has empty interior in $S$ if and only if $\dim Z<\dim S$. \end{enumerate} \end{claim} \begin{proof} Let $X\subseteq \overline{S}$ be a definable set dense in $\overline{S}$. Consider a ball $B$ with center $x\in X$. Then $B\cap S$ is non-empty, and therefore we have that $\dim B\cap S=d$. Moreover, it is easy to see that $B\cap X$ is dense in $B\cap S$, which implies that $\dim B\cap X=d$ as well, by the Small Boundaries Property and \HM 1. This proves the first part. Let us now prove the second point. If $Z$ has empty interior in $S$, this means that $S\setminus Z$ is dense in $S$, and hence $Z$ is contained in $\overline{(S\setminus Z)}\setminus (S\setminus Z)$. But then $\dim Z<\dim (S\setminus Z)$ by the Small Boundaries Property, and therefore $\dim Z<\dim S$. Conversely, if $Z$ has non-empty interior inside $S$, there exists a ball $B$ centered at a point $z\in Z$ such that $B\cap S\subseteq Z$. By the purity of $S$, $\dim B\cap S=d$, and hence $\dim Z\geqslant d$. Since $Z\subseteq S$, this implies that $\dim Z=d$.
\end{proof} \noindent For every non-negative integer $k$, we put \[ \Delta_k(A):= \{ a \in A \mid \dim(A,a) =k \},\] and we write $C_k(A)$ for the topological closure of $\Delta_k(A)$ inside $A$. It is easy to see that $\Delta_k(A)$ is pure dimensional, and of dimension $k$ if the set is non-empty. By part~\ref{it:pur-dim-dense} of Claim~\ref{cl:pure-dim-basic}, the same holds for $C_k(A)$. Moreover, since $C_k(A)$ is closed in $A$, one can check that it is actually the largest definable subset of $A$ with pure dimension $k$ (if it is non-empty). For this reason, we call the sets $C_k(A)$ the {\bf pure dimensional components} of $A$. \begin{remark}\label{re:Delta-closure} If $\dim(A,x)<k$ for some $x\in A$, then there exists a ball $B$ centered at $x$ for which $\dim B\cap A<k$. Such a ball must be disjoint from $C_k(A)$, because $C_k(A)$ either has pure dimension $k$ or is empty. But then $C_k(A)$ is disjoint from every $\Delta_l(A)$ with $l<k$, which means that it must be contained in the union of the $\Delta_l(A)$ with $l\geqslant k$. \end{remark} \begin{lemma}\label{le:dim-CkCl} For every definable set $A\subseteq K^m$ and every $k$, one has that \begin{displaymath} \dim \left(C_k(A)\cap \bigcup_{l\neq k} C_l(A)\right) < k. \end{displaymath} \end{lemma} \begin{proof} By \HM 1, it suffices to check that $\dim C_k(A)\cap C_l(A)<k$ for every $l\neq k$. This is obvious when $l<k$, since in these cases $C_l(A)$ already has dimension $l$ or is empty. Hence we may assume that $l>k$. Using Remark~\ref{re:Delta-closure}, one gets that \begin{displaymath} C_k(A)\cap C_l(A)\subseteq C_k(A)\cap\bigcup_{i>k}\Delta_i(A) = \bigcup_{i>k}C_k(A)\cap \Delta_i(A). \end{displaymath} Using \HM 1 again, it now remains to check that $\dim C_k(A)\cap\Delta_i(A)<k$ whenever $i>k$. But since $\Delta_i(A)$ is disjoint from $\Delta_k(A)$, we find that $C_k(A)\cap\Delta_i(A)\subseteq C_k(A)\setminus \Delta_k(A)$. This concludes the proof because of the Small Boundaries Property.
\end{proof} \begin{lemma}\label{le:Ck-int-vide} Let $Z\subseteq A\subseteq K^m$ be definable sets. Then $Z$ has empty interior inside $A$ if and only if $\dim Z\cap C_k(A)<k$ for every $k$. \end{lemma} \begin{proof} For every $k$, we will consider the set \[D_k(A)=A\setminus\bigcup_{l\neq k}C_l(A).\] Clearly, this set is open in $A$ and contained in $\Delta_k(A)$. We claim that $D_k(A)$ is also dense in $C_k(A)$. Indeed, $C_k(A)$ is either empty or has pure dimension $k$. The first case is obvious, so assume that $C_k(A)$ has pure dimension $k$. By part~\ref{it:pur-dim-int} of Claim~\ref{cl:pure-dim-basic}, it suffices to check that $C_k(A)\setminus D_k(A)$ has dimension $<k$. But this follows from Lemma~\ref{le:dim-CkCl}, so our claim holds. If $Z$ has non-empty interior in $A$, there exists $z\in Z$ and $r\in|K^\times|$, such that $B(z,r)\cap A\subseteq Z$. If we put $k:=\dim(A,z)$, then $z\in \Delta_k(A)$. Since $D_k(A)$ is dense in $\Delta_k(A)$, the set $D_k(A)\cap B(z,r)$ is non-empty. Pick a point $z'$ in this intersection. Because $D_k(A)$ is open in $A$, there exists $r'\in|K^\times|$ such that $B(z',r')\cap A\subseteq D_k(A)$ and $r' \leqslant r$. But then \[ B(z',r')\cap D_k(A) \subseteq B(z',r)\cap A = B(z,r)\cap A\subseteq Z, \] and $B(z',r')\cap D_k(A)$ is non-empty since it contains $z'$. This shows that $Z\cap D_k(A)$ has non-empty interior inside $D_k(A)$. Since $D_k(A)$ is open in $A$ (and hence in $C_k(A)$), $Z\cap D_k(A)$ has non-empty interior inside $ C_k(A)$ as well. Because $C_k(A)$ is pure dimensional, part~\ref{it:pur-dim-int} of Claim~\ref{cl:pure-dim-basic} implies that $\dim (Z\cap C_k(A))=k$. Conversely, assume that $\dim (Z\cap C_k(A))=k$ for some $k$. By the Small Boundaries Property, one has that $\dim (C_k(A)\setminus D_k(A))<k$. From this, we can deduce that $\dim (Z\cap D_k(A))=k$, using \HM 1. 
The purity of $D_k(A)$ and part~\ref{it:pur-dim-int} of Claim~\ref{cl:pure-dim-basic} then imply that $Z\cap D_k(A)$ has non-empty interior in $D_k(A)$, and hence in $A$ (since $D_k(A)$ is open in $A$). A fortiori, $Z$ itself has non-empty interior in $A$. \end{proof} We can now prove the results which were the aim of this section. \begin{theorem}\label{th:dense-int} Let $A_1,\dots,A_r\subseteq A$ be a finite family of definable subsets of $K^m$. If their union has non-empty interior in $A$, then at least one of them has non-empty interior in $A$. In particular, a piece $A_i$ has non-empty interior in $A$ if $\dim A_i\cap C_k(A)=k$ for some $k$. \end{theorem} \begin{proof} If $Z:=A_1\cup\cdots\cup A_r$ has non-empty interior in $A$, then $\dim (Z\cap C_k(A))=k$ for some $k$ by Lemma~\ref{le:Ck-int-vide}. Then by \HM 1, $\dim (A_i\cap C_k(A))=k$ for some $i$ and some $k$, and thus $A_i$ has non-empty interior in $A$ by Lemma~\ref{le:Ck-int-vide}. \end{proof} \begin{theorem}\label{th:dense-cont} Every definable function $f$ from $X\subseteq K^m$ to $K^n$ (resp. $|K|^n$) is continuous on a definable set $U$ which is dense and open in $X$, and $\dim (X\setminus U)<\dim X$. \end{theorem} \begin{proof} The existence of $U$, dense and open in $X$, on which $f$ is continuous, follows from Theorems~\ref{th:M-cell-prep} and \ref{th:dense-int} by Remark~\ref{re:M-cell-dens}. That $\dim (X\setminus U)<\dim X$ then follows from the Small Boundaries Property. \end{proof} \end{document}
\begin{document} \title{Rarefaction pulses for the Nonlinear Schr\"odinger Equation\ in the transonic limit} \begin{abstract} We investigate the properties of finite energy travelling waves to the nonlinear Schr\"odinger equation with nonzero conditions at infinity for a wide class of nonlinearities. In space dimension two and three we prove that travelling waves converge in the transonic limit (up to rescaling) to ground states of the Kadomtsev-Petviashvili equation. Our results generalize an earlier result of F. B\'ethuel, P. Gravejat and J-C. Saut for the two-dimensional Gross-Pitaevskii equation, and provide a rigorous proof to a conjecture by C. Jones and P. H. Roberts about the existence of an upper branch of travelling waves in dimension three. \end{abstract} \ \\ \noindent {\bf Keywords.} Nonlinear Schr\"odinger equation, Gross-Pitaevskii equation, Kadomtsev-Petviashvili equation, travelling waves, ground state.\\ \noindent {\bf MSC (2010)} Main: 35C07, 35B40, 35Q55, 35Q53. Secondary: 35B45, 35J20, 35J60, 35Q51, 35Q56, 35Q60. \tableofcontents \section{Introduction} We consider the nonlinear Schr\"odinger equation in $\mathbb R^N$ \be \tag{NLS} i \frac{\partial \Psi}{\partial t} + \Delta \Psi + F(|\Psi|^2) \Psi = 0 \ee with the condition $|\Psi(t,x)| \to r_0$ as $ | x | \to \infty$, where $r_0>0$ and $F(r_0^2) =0$. This equation arises as a relevant model in many physical situations, such as the theory of Bose-Einstein condensates, superfluidity ({see} \cite{Cos}, \cite{G}, \cite{IS}, \cite{JR}, \cite{JPR} and the surveys \cite{RB}, \cite{AHMNPTB}) or as an approximation of the Maxwell-Bloch system in Nonlinear Optics ({cf.} \cite{KL}, \cite{KivPeli}). When $F(\varrho) = 1- \varrho$, the corresponding (NLS) equation is called the Gross-Pitaevskii equation and is a common model for Bose-Einstein condensates. 
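Before stating our assumptions, let us recall for orientation the formal hydrodynamic form of (NLS); this is a standard computation, recorded here only for the reader's convenience. Writing $\Psi = \sqrt{\varrho}\, {\sf e}^{i \theta}$ in any region where $\Psi \neq 0$, equation (NLS) becomes
$$ \frac{\partial \varrho}{\partial t} + 2\, {\rm div} \left( \varrho \nabla \theta \right) = 0 , \qquad \qquad \frac{\partial \theta}{\partial t} + |\nabla \theta|^2 - F(\varrho) - \frac{\Delta ( \sqrt{\varrho} )}{\sqrt{\varrho}} = 0 . $$
Linearizing around the constant state $\varrho = r_0^2$, $\theta = 0$, one finds $\partial_t (\delta \varrho) + 2 r_0^2 \Delta \theta = 0$ and $\partial_t \theta = F'(r_0^2)\, \delta \varrho$ to leading order, hence $\partial_t^2 (\delta \varrho) = -2 r_0^2 F'(r_0^2)\, \Delta (\delta \varrho)$: small perturbations of the constant state propagate with the speed ${\mathfrak c}_s = \sqrt{-2 r_0^2 F'(r_0^2)}$, the speed of sound at infinity that appears throughout the paper.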
The so-called ``cubic-quintic'' (NLS), where $$ F(\varrho) = - \alpha_1 + \alpha_3 \varrho - \alpha_5 \varrho^2 $$ for some positive constants $\alpha_1 $, $\alpha_3 $ and $\alpha_5$ and $F $ has two positive roots, is also of high interest in Physics (see, {e.g.}, \cite{BP}). In Nonlinear Optics, the nonlinearity $F$ can take various forms (cf. \cite{KL}), for instance \be \label{nonlin} F(\varrho) = - \alpha \varrho^\nu - \beta \varrho^{2\nu}, \quad \quad F(\varrho) = - \alpha \Big( 1 - \frac{1}{ (1+ \frac{\varrho}{\varrho_0} )^\nu} \Big), \quad \quad F(\varrho) = - \alpha \varrho \Big( 1 + \gamma \, {\rm tanh} ( \frac{\varrho^2 - \varrho_0^2}{\sigma^2} ) \Big), \qquad \mbox{etc.,} \ee where $\alpha$, $\beta$, $\gamma $, $\nu$, $\sigma > 0$ are given constants (the second formula, for instance, was proposed to take into account saturation effects). It is therefore important to allow the nonlinearity to be as general as possible. The travelling wave solutions propagating with speed $c$ in the $x_1$-direction are the solutions of the form $ \Psi (x,t) = U(x_1 - ct, x_2, \dots, x_N)$. The profile $U$ satisfies the equation \be \tag{TW$_c$} - i c \partial_{x_1} U+ \Delta U + F(|U|^2) U = 0. \ee They are supposed to play an important role in the dynamics of (NLS). Since $(U, c)$ is a solution of (TW$_c$) if and only if $(\overline{U}, -c)$ is also a solution, we may assume that $c \geq 0$. The nonlinearities we consider are general, and we will merely make use of the following assumptions:\\ \noindent {\bf (A1)} The function $F$ is continuous on $[0,+\infty)$, of class $\mathcal{C}^1$ near $r_0^2$, $ F(r_0^2) = 0 $ and $ F'(r_0^2) <0$. \noindent {\bf (A2)} There exist $C >0$ and $p_0 \in [1 , \frac{2}{N-2}) $ ($p_0 < \infty$ if $N=2$) such that $|F(\varrho)| \leq C ( 1 + \varrho^{p_0} )$ for all $\varrho \geq 0$. 
\noindent {\bf (A3)} There exist $C_0>0$, $\alpha_0>0$ and $ \varrho_0 > r_0 $ such that $ F(\varrho) \leq - C_0 \varrho^{\alpha_0}$ for all $\varrho \geq \varrho_0$.\\ Assumptions (A1) and ((A2) or (A3)) are sufficient to guarantee the existence of travelling waves. However, in order to get some sharp results we will sometimes need more information about the behavior of $F$ near $ r_0^2$, so we will replace (A1) by \noindent {\bf (A4)} The function $F$ is continuous on $[0,+\infty)$, of class $\mathcal{C}^2$ near $r_0^2$, with $ F(r_0^2) = 0 , $ $ F'(r_0^2) <0 $ and $$ F(\varrho) = F(r_0^2) + F'(r_0^2) (\varrho-r_0^2) + \frac12 F''(r_0^2) ( \varrho-r_0^2)^2 + \mathcal{O}((\varrho-r_0^2)^3) \quad \quad \mbox{ as } \quad \varrho \to r_0^2 . $$ If $F$ is $\mathcal{C}^2$ near $r_0^2$, we define, as in \cite{CM1}, \begin{eqnarray} \label{Gamma} \Gamma = 6 - \frac{4r_0^4}{{\mathfrak c}_s^2} F''(r_0^2) . \end{eqnarray} The coefficient $\Gamma$ is positive for the Gross-Pitaevskii nonlinearity ($F(\varrho) = 1 - \varrho$) as well as for the cubic-quintic Schr\"odinger equation. However, for the nonlinearity $F(\varrho) = b {\sf e}^{-\varrho/ \alpha} - a $, where $\alpha>0$ and $0 < a < b$ (which arises in nonlinear optics and takes into account saturation effects, see \cite{KL}), we have $ \Gamma = 6 + 2 \ln(a/b) $, so that $ \Gamma$ can take any value in $(-\infty, 6)$, including zero. The coefficient $\Gamma$ may also vanish for some polynomial nonlinearities (see \cite{C1d} for some examples and for the study of travelling waves in dimension one in that case). In this paper we shall be concerned only with the nondegenerate case $\Gamma \not = 0$. {\bf Notation and function spaces.} For $ x = ( x_1, x_2, \dots, x_N) \in \mathbb R^N$, we denote $x = ( x_1, x_{\perp})$, where $ x_{\perp} = ( x_2, \dots, x_N) \in \mathbb R^{N-1}$.
Given a function $f$ defined on $ \mathbb R^N$, we denote $ \nabla _{x_{\perp}} f = ( \frac{ \partial f}{\partial x_2}, \dots, \frac{ \partial f}{\partial x_N}).$ We will write $ \Delta_{x_{\perp} }= \frac{ \partial^2}{\partial x_2^2 } + \dots + \frac{ \partial^2}{\partial x_N^2 }$. By ``$f (t) \sim g(t)$ as $ t \to t_0$'' we mean $ \lim_{t \to t_0 } \frac{f(t)}{ g(t) } = 1$. We denote by $\mathscr{F} $ the Fourier transform, defined by $ \mathscr{F} (f) (\xi )= \displaystyle \int_{\mathbb R^N} {\sf e}^{-i x\cdot\xi } f(x) \, dx $ whenever $ f \in L^1 ( \mathbb R^N)$. Unless otherwise stated, the $L^p$ norms are computed on the whole space $\mathbb R^N$. We fix an odd function $ \chi : \mathbb R \to \mathbb R $ such that $\chi(s) = s $ for $0 \leq s \leq 2 r_0$, $ \chi(s) = 3 r_0$ for $s \geq 4 r_0 $ and $ 0 \leq \chi' \leq 1$ on $\mathbb R_+$. As usual, we denote $ \dot{H}^1(\mathbb R^N) = \{ h \in L_{loc}^1(\mathbb R^N) \; | \; \nabla h \in L^2(\mathbb R^N) \}$. We define the Ginzburg-Landau energy of a function $ \psi \in \dot{H}^1(\mathbb R^N) $ by $$ E_{\rm GL} (\psi) = \int_{\mathbb R^N} |\nabla \psi|^2 + (\chi^2(| \psi|)-r_0^2)^2 \ dx . $$ We will use the function space $$ \mathcal{E} = \left\{ \psi \in \dot{H}^{1} (\mathbb R^N) \; \big| \; \chi^2 ( |\psi| ) - r_0^2 \in L^2(\mathbb R^N) \right\} = \left\{ \psi \in \dot{H}^{1} (\mathbb R^N) \; \big| \; E_{\rm GL}(\psi ) < \infty \right\} . $$ The basic properties of this space have been discussed in the Introduction of \cite{CM1}. We will also consider the space $$ \mathcal{X} = \left\{ u \in \mathcal{D}^{1,2} (\mathbb R^N) \; \big| \; \chi^2 ( |r_0 -u| ) - r_0^2 \in L^2(\mathbb R^N) \right\}, $$ where $ \mathcal{D}^{1,2} (\mathbb R^N)$ is the completion of $ \mathcal{C}_c^{\infty} (\mathbb R^N)$ for the norm $ \| u \|_{\mathcal{D}^{1,2}} = \| \nabla u \|_{L^2(\mathbb R^N)}$.
If $ N \geq 3$ it can be proved that $ \mathcal{E} = \{ \alpha ( r_0 - u) \; \big| \; \, u \in \mathcal{X} , \; \alpha \in \mathbb C, \; |\alpha | = 1\}.$ {\bf Hamiltonian structure. } The flow associated to (NLS) formally preserves the energy $$ E(\psi) = \int_{\mathbb R^N} |\nabla \psi|^2 + V(|\psi|^2) \ dx , $$ where $V$ is the antiderivative of $-F$ which vanishes at $r_0^2$, that is ${ V(s) = \int_s^{r_0^2} F(\varrho) \ d \varrho }$, as well as the momentum. The momentum (with respect to the direction of propagation $x_1$) is a functional $Q$ defined on $ \mathcal{E}$ (or, alternatively, on $ \mathcal{X}$) in the following way. Denoting by $ \langle \cdot , \cdot \rangle$ the standard scalar product in $ \mathbb C$, it has been proven in \cite{CM1} and \cite{Maris} that for any $ \psi \in {\mathcal E} $ we have $ \langle i \frac{\partial \psi}{\partial {x_1} } , \psi \rangle \in \mathcal{Y} + L^1(\mathbb R^N) $, where $\mathcal{Y} = \{ \frac{ \partial h}{ \partial{x_1} } \; | \; h \in \dot{H}^1(\mathbb R^N) \}$ and $ \mathcal{Y}$ is endowed with the norm $ \| \partial_{x_1} h \|_{\mathcal{Y}} = \| \nabla h \|_{L^2(\mathbb R^N)}$. It is then possible to define the linear continuous functional $L$ on $ \mathcal{Y} + L^1(\mathbb R^N) $ by $$ L \left( \frac{ \partial h}{\partial{x_1}} + \Theta \right) = \int_{\mathbb R^N} \Theta (x) \ dx \qquad \mbox{ for any } \frac{ \partial h}{\partial{x_1}} \in \mathcal{Y} \mbox{ and } \Theta \in L^1( \mathbb R^N). $$ The momentum (with respect to the direction $x_1$) of a function $ \psi \in \mathcal{E} $ is $ Q( \psi ) = L \left( \langle i \frac{\partial \psi}{\partial {x_1} } , \psi \rangle \right). $ \\ If $ \psi \in {\mathcal E} $ does not vanish, it can be lifted in the form $\psi = \rho {\sf e}^{i\phi}$ and we have \be \label{momentlift} Q(\psi) = \int_{\mathbb R^N} (r_0^2 - \rho^2) \frac{\partial \phi}{\partial x_1} \ dx .
\ee Any solution $U \in \mathcal{E}$ of (TW$_c$) is a critical point of the functional $E _c = E + cQ$ and satisfies the standard Pohozaev identities (see Proposition 4.1 p. 1091 in \cite{M2}): \be \label{Pohozaev} \left\{\begin{array}{ll} P_c(U) = 0, \qquad \mbox{ where } \displaystyle{ P_c(U) = E(U) + cQ(U) - \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_\perp} U|^2 \ dx, } \qquad \mbox{ and} \\ \\ \displaystyle{ E(U) = 2 \int_{\mathbb R^N} |\partial_{x_1} U|^2 \ dx }. \end{array}\right. \ee We denote \be \label{Cc} \mathscr{C} _c= \{ \psi \in \mathcal{E} \; \big| \; \psi \mbox{ is not constant and } P_c( \psi ) = 0 \}. \ee Using the Madelung transform $ \Psi = \sqrt{ \varrho } {\sf e} ^{ i \theta}$ (which makes sense in any domain where $ \Psi \neq 0$), equation (NLS) can be put into a hydrodynamical form. In this context one may compute the associated speed of sound at infinity (see, for instance, the introduction of \cite{M2}): $$ {\mathfrak c}_s = \sqrt{ - 2 r_0^2 F'(r_0^2) } > 0 . $$ Under general assumptions it was proved that finite energy travelling waves to (NLS) with speed $c$ exist if and only if $|c| < {\mathfrak c}_s$ (see \cite{M2, Maris}). Let us recall the existence results of nontrivial travelling waves that we use. \begin{theo} [\cite{CM1}] \label{th2dposit} Let $N = 2$ and assume that the nonlinearity $F$ satisfies (A2) and (A4) and that $ \Gamma \not = 0 $. (a) Suppose moreover that $V$ is nonnegative on $[0, \infty)$. Then for any $q \in (-\infty , 0 )$ there exists $U\in \mathcal{E}$ such that $Q(U)= q$ and $$ E(U) = \inf \{ E(\psi) \; \big| \; \psi \in \mathcal{E}, \; Q(\psi) = q \} . $$ (b) Without any assumption on the sign of $V$, there is $ q_{\infty} >0$ such that for any $ q \in (- q_{\infty}, 0 )$ there is $U\in \mathcal{E}$ satisfying $Q(U)= q$ and $$ E(U) = \inf \Big\{ E(\psi) \; \big| \; \psi \in \mathcal{E}, \; Q(\psi) = q , \; \int_{\mathbb R^2} V(|\psi |^2 ) \, dx > 0 \Big\} .
$$ For any $U$ satisfying (a) or (b) there exists $ c = c(U) \in (0,{\mathfrak c}_s)$ such that $U$ is a nonconstant solution to {\rm (TW$_{c(U)}$)}. Moreover, if $ Q( U_1 ) < Q( U_2 ) <0 $ we have $0 < c(U_1) < c(U_2 ) < {\mathfrak c}_s $ and $c(U ) \to {\mathfrak c}_s $ as $q \to 0$. \end{theo} \begin{theo} [\cite{CM1}] \label{th2d} Let $N = 2$. Assume that the nonlinearity $F$ satisfies (A2) and (A4) and that $ \Gamma \not = 0 $. Then there exists $ 0 < k_{\infty} \leq \infty $ such that for any $ k \in (0, k_{\infty})$, there is $ \mathcal{U} \in \mathcal{E} $ such that $ \displaystyle{\int_{\mathbb R^2} |\nabla \mathcal{U} |^2 \ dx = k}$ and $$ \int_{\mathbb R^2} V(| \mathcal{U} |^2) \ d x + Q( \mathcal{U} ) = \inf \left\{ \int_{\mathbb R^2} V(|\psi|^2) \ d x + Q(\psi) \; \Big| \; \psi \in \mathcal{E}, \; \int_{\mathbb R^2} |\nabla \psi|^2 \ dx = k \right\} . $$ For any such $ \mathcal{U} $ there exists $c = c( \mathcal{U} ) \in (0, {\mathfrak c}_s)$ such that the function $ U(x) = \mathcal{U} ( x / c) $ is a solution to {\rm (TW$_c$)}. Moreover, if $ \mathcal{U}_1$, $\mathcal{U}_2$ are as above and $\displaystyle \int_{\mathbb R^2} |\nabla \mathcal{U}_1|^2 \, dx < \int_{\mathbb R^2} |\nabla \mathcal{U}_2|^2 \, dx $, then ${\mathfrak c}_s > c(\mathcal{U}_1) > c (\mathcal{U}_2) > 0$ and we have $ c( \mathcal{U} ) \to {\mathfrak c}_s $ as $ k \to 0 $. \end{theo} \begin{theo} [\cite{Maris}] \label{thM} Assume that $N \geq 3$ and the nonlinearity $F$ satisfies (A1) and (A2). Then for any $0 < c < {\mathfrak c}_s $ there exists a nonconstant $ \mathcal{U} \in \mathcal{E} $ such that $ P_c( \mathcal{U} ) = 0 $ and $ E(\mathcal{U}) + c Q (\mathcal{U}) = \displaystyle \inf_{ \psi \in \mathscr{C} _c} ( E( \psi )+ cQ( \psi )).$ If $N \geq 4$, any such $\mathcal{U} $ is a nontrivial solution to {\rm (TW$_c$)}. 
If $ N = 3$, for any $\mathcal{U}$ as above there exists $\sigma > 0 $ such that $U(x) = \mathcal{U} ( x_1 , \sigma x_\perp ) \in \mathcal{E} $ is a nontrivial solution to {\rm (TW$_c$)}. \end{theo} If (A3) holds it was proved that there is $C_0 >0$, depending only on $F$, such that for any $ c \in (0, {\mathfrak c}_s)$ and for any solution $U\in \mathcal{E}$ to (TW$_c$) we have $|U| \leq C_0$ in $ \mathbb R^N$ (see Proposition 2.2 p. 1079 in \cite{M2}). If (A3) is satisfied but (A2) is not, one can modify $F$ in a neighborhood of infinity in such a way that the modified nonlinearity $ \tilde{F}$ satisfies (A2) and (A3) and $ F = \tilde{F}$ on $[0, 2 C_0]$. Then the solutions of (TW$_c$) are the same as the solutions of (TW$_c$) with $F$ replaced by $\tilde{F}$. Therefore all the existence results above hold if (A2) is replaced by (A3); however, the minimizing properties hold only if we replace throughout $F$ and $V$ by $\tilde{F}$ and $\tilde{V}$, respectively, where $\tilde{V}(s) = \displaystyle \int_s^{ r_0 ^2} \tilde{F}( \tau ) \, d \tau$. The above results provide, under various assumptions, travelling waves to (NLS) with speed close to the speed of sound $ {\mathfrak c}_s$. We will study the behavior of travelling waves in the transonic limit $c \to {\mathfrak c}_s$ in each of the previous situations. \subsection{Convergence to ground states for (KP-I)} In the transonic limit, the travelling waves are expected to be rarefaction pulses close, up to a rescaling, to ground states of the Kadomtsev-Petviashvili I (KP-I) equation. We refer to \cite{JR} in the case of the Gross-Pitaevskii equation ($F(\varrho)= 1 - \varrho$) in space dimension $N=2$ or $N=3$, and to \cite{KL}, \cite{KAL}, \cite{KivPeli} in the context of Nonlinear Optics. 
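Since the sign of the coefficient $\Gamma$ plays a decisive role in what follows, let us verify the values quoted after \eqref{Gamma}; this is an elementary computation, recorded here only for the reader's convenience. For the Gross-Pitaevskii nonlinearity $F(\varrho) = 1 - \varrho$ we have $r_0 = 1$, ${\mathfrak c}_s^2 = -2 r_0^2 F'(r_0^2) = 2$ and $F''(r_0^2) = 0$, hence
$$ \Gamma = 6 - \frac{4 r_0^4}{{\mathfrak c}_s^2} \, F''(r_0^2) = 6 . $$
For the saturating nonlinearity $F(\varrho) = b \, {\sf e}^{-\varrho/\alpha} - a$ with $\alpha > 0$ and $0 < a < b$, the condition $F(r_0^2) = 0$ gives $r_0^2 = \alpha \ln (b/a)$, then $F'(r_0^2) = -a/\alpha$, $F''(r_0^2) = a/\alpha^2$ and ${\mathfrak c}_s^2 = 2 r_0^2 a / \alpha$, so that
$$ \Gamma = 6 - \frac{4 r_0^4 \, \alpha}{2 r_0^2 \, a} \cdot \frac{a}{\alpha^2} = 6 - \frac{2 r_0^2}{\alpha} = 6 + 2 \ln (a/b) , $$
in agreement with the value given in the introduction; in particular $\Gamma$ vanishes precisely when $b = {\sf e}^3 a$.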
In our setting, the (KP-I) equation associated to (NLS) is \be \tag{KP-I} 2 \partial_\tau \zeta + \Gamma \zeta \partial_{z_1} \zeta - \frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1}^3 \zeta + \Delta_{z_\perp} \partial^{-1}_{z_1} \zeta = 0 , \ee where $\Delta_{z_\perp} = \displaystyle{\sum_{j=2}^N \partial_{z_j}^2}$ and the coefficient $\Gamma$ is related to the nonlinearity $F$ by (\ref{Gamma}). The (KP-I) flow preserves (at least formally) the $L^2$ norm $$ \int_{\mathbb R^N} \zeta^2 \ dz $$ and the energy $$ \mathscr{E} (\zeta) = \int_{\mathbb R^N} \frac{1}{{\mathfrak c}_s^2}\, (\partial_{z_1} \zeta)^2 + |\nabla_{z_\perp} \partial_{z_1}^{-1} \zeta|^2 + \frac{\Gamma}{3} \, \zeta^3 \ dz . $$ A solitary wave of speed $ 1 / (2{\mathfrak c}_s^2) $, moving to the left in the $z_1$ direction, is a particular solution of (KP-I) of the form $\zeta(\tau,z) = \mathcal{W}(z_1 + \tau/ (2{\mathfrak c}_s^2), z_\perp)$. The profile $\mathcal{W}$ solves the equation \be \tag{SW} \frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1} \mathcal{W} + \Gamma \mathcal{W} \partial_{z_1} \mathcal{W} - \frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1}^3 \mathcal{W} + \Delta_{z_\perp} \partial^{-1}_{z_1} \mathcal{W} = 0 . \ee Equation (SW) has no nontrivial solution in the degenerate linear case $\Gamma = 0$ or in space dimension $N \geq 4$ (see Theorem 1.1 p. 214 in \cite{dBSIHP} or the beginning of section \ref{proofGS}). If $\Gamma \not = 0$, since the nonlinearity is homogeneous, one can construct solitary waves of any (positive) speed just by using the scaling properties of the equation. The solutions of (SW) are critical points of the associated action $$ \mathscr{S} (\mathcal{W}) = \mathscr{E} (\mathcal{W}) + \frac{1}{{\mathfrak c}_s^2} \int_{\mathbb R^N} \mathcal{W}^2\ dz .
$$ The natural energy space for (KP-I) is $ \mathscr{Y} (\mathbb R^N)$, which is the closure of $\partial_{z_1} \mathcal{C}^\infty_c(\mathbb R^N)$ for the (squared) norm $$ \| \mathcal{W}\|_{\mathscr{Y} (\mathbb R^N)}^2 = \int_{\mathbb R^N} \frac{1}{{\mathfrak c}_s^2}\, \mathcal{W}^2 + \frac{1}{{\mathfrak c}_s^2}\, (\partial_{z_1} \mathcal{W})^2 + |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz . $$ From the anisotropic Sobolev embeddings (see \cite{BIN}, p. 323) it follows that $\mathscr{S}$ is well-defined and is a continuous functional on $\mathscr{Y}(\mathbb R^N)$ for $N=2$ and $N=3$. Here we are not interested in arbitrary solitary waves for (KP-I), but only in {\it ground states.} A ground state of (KP-I) with speed $1/(2{\mathfrak c}_s^2)$ (or, equivalently, a ground state of (SW)) is a nontrivial solution of (SW) which minimizes the action $\mathscr{S}$ among all solutions of (SW). We shall denote $\mathscr{S}_{\rm min}$ the corresponding action: $$ \mathscr{S}_{\rm min} = \inf \Big\{ \mathscr{S} (\mathcal{W} ) \; \Big| \; \mathcal{W} \in \mathscr{Y}(\mathbb R^N) \setminus \{ 0 \}, \ \mathcal{W} \ {\rm solves \ (SW)} \Big\} . $$ The existence of ground states (with speed $1/(2{\mathfrak c}_s^2)$) for (KP-I) in dimensions $N=2$ and $N=3$ follows from Lemma 2.1 p. 1067 in \cite{dBSSIAM}. In dimension $N=2$, we may use the variational characterization provided by Lemma 2.2 p. 78 in \cite{dBS}: \begin{theo} [\cite{dBS}] \label{gs2d} Assume that $N=2$ and $\Gamma \not = 0$. There exists $\mu >0$ such that the set of solutions to the minimization problem \be \label{minimiz} {\mathscr{M}(\mu) } = \inf \left\{ \mathscr{E} (\mathcal{W})\; \Big| \; \mathcal{W} \in \mathscr{Y}(\mathbb R^2), \ \int_{\mathbb R^2} \frac{1}{{\mathfrak c}_s^2}\, \mathcal{W}^2 \ dz = \mu \right\} , \ee is precisely the set of ground states of {\rm (KP-I)} and it is not empty. 
Moreover, any sequence $ (\mathcal{W} _n)_{n \geq 1} \subset \mathscr{Y}(\mathbb R^2)$ such that $ \displaystyle \int_{\mathbb R^2} \frac{1}{{\mathfrak c}_s^2}\, \mathcal{W}_n^2 \ dz \to \mu $ and $\mathscr{E} (\mathcal{W}_n) \to {\mathscr{M}(\mu) }$ contains a convergent subsequence in $\mathscr{Y}(\mathbb R^2)$ (up to translations). Finally, we have $$ \mu = \frac{3}{2} \, \mathscr{S}_{\rm min} \quad \quad \quad {\it and} \quad \quad \quad {\mathscr{M}(\mu) } = - \frac{1}{2} \mathscr{S}_{\rm min} . $$ \end{theo} We emphasize that this characterization of ground states is specific to the two-dimensional case. Indeed, since $ \mathscr{E} $ and the $L^2$ norm are conserved by (KP-I), it implies the orbital stability of the set of ground states for (KP-I) if $N=2$ ({cf.} \cite{dBS}). On the other hand, it is known that this set is orbitally unstable if $N=3$ (see \cite{Liu}). In the three-dimensional case we need the following result, which shows that ground states are minimizers of the action under a Pohozaev type constraint. Notice that any solution of (SW) in $\mathscr{Y}(\mathbb R^N)$ satisfies the Pohozaev identity $$ \int_{\mathbb R^N} \frac{1}{{\mathfrak c}_s^2}\, (\partial_{z_1} \mathcal{W})^2 + |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 + \frac{\Gamma}{3}\, \mathcal{W}^3 + \frac{1}{{\mathfrak c}_s^2}\, \mathcal{W}^2 \ dz = \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz , $$ which is (formally) obtained by multiplying (SW) by $z_\perp \cdot \nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}$ and integrating by parts (see Theorem 1.1 p. 214 in \cite{dBSIHP} for a rigorous justification). 
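Let us record, as a purely formal consistency check (the computation below is not part of the results above and the integrations by parts are not justified here), how the constants in Theorem \ref{gs2d} are related. For a solution $\mathcal{W}$ of (SW) in dimension $N=2$, set
$$ I_1 = \int_{\mathbb R^2} \frac{1}{{\mathfrak c}_s^2}\, \mathcal{W}^2 \ dz, \qquad I_2 = \int_{\mathbb R^2} \frac{1}{{\mathfrak c}_s^2}\, (\partial_{z_1} \mathcal{W})^2 \ dz, \qquad I_3 = \int_{\mathbb R^2} |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz, \qquad I_4 = \frac{\Gamma}{3} \int_{\mathbb R^2} \mathcal{W}^3 \ dz . $$
Multiplying (SW) by $\partial_{z_1}^{-1} \mathcal{W}$ and integrating by parts gives $I_1 + I_2 + I_3 + \frac32 I_4 = 0$; the Pohozaev identity above (with $N=2$) reads $I_1 + I_2 - I_3 + I_4 = 0$; and differentiating $\lambda \mapsto \mathscr{S}(\mathcal{W}(\lambda z_1, z_\perp)) = \lambda^{-1} (I_1 + I_4) + \lambda I_2 + \lambda^{-3} I_3$ at $\lambda = 1$ yields $-I_1 + I_2 - 3 I_3 - I_4 = 0$. Solving this linear system gives $I_2 = 2 I_3$, $I_4 = -4 I_3$ and $I_1 = 3 I_3$, hence
$$ \mathscr{S}(\mathcal{W}) = I_1 + I_2 + I_3 + I_4 = 2 I_3 , \qquad I_1 = \frac32\, \mathscr{S}(\mathcal{W}) , \qquad \mathscr{E}(\mathcal{W}) = \mathscr{S}(\mathcal{W}) - I_1 = -\frac12\, \mathscr{S}(\mathcal{W}) , $$
in agreement with $\mu = \frac32 \mathscr{S}_{\rm min}$ and $\mathscr{M}(\mu) = -\frac12 \mathscr{S}_{\rm min}$ when $\mathcal{W}$ is a ground state.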
Taking into account how travelling wave solutions to (NLS) are constructed in Theorem \ref{thM} above, in the case $N =3$ we consider the minimization problem \be \label{miniGS} \mathscr{S}_* = \inf \Big\{ \mathscr{S} (\mathcal{W}) \; \Big| \; \mathcal{W} \in \mathscr{Y}(\mathbb R^3) \setminus \{ 0 \}, \ \mathscr{S} (\mathcal{W}) = \int_{\mathbb R^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz \Big\} . \ee Our first result shows that in space dimension $N=3$ the ground states (with speed $1/(2{\mathfrak c}_s^2)$) of (KP-I) are the solutions of the minimization problem (\ref{miniGS}). \begin{theo} \label{gs} Assume that $ N = 3 $ and $ \Gamma \not = 0 $. Then $ \mathscr{S}_* > 0 $ and the problem (\ref{miniGS}) has minimizers. Moreover, $\mathcal{W}_0$ is a minimizer for the problem (\ref{miniGS}) if and only if there exist a ground state $ \mathcal{W} $ for {\rm (KP-I)} (with speed $1/(2{\mathfrak c}_s^2)$) and $\sigma> 0$ such that $ \mathcal{W}_0 (z) = \mathcal{W}(z_1, \sigma z_\perp) $. In particular, we have $ \mathscr{S}_* = \mathscr{S}_{\rm min} $. Furthermore, let $(\mathcal{W}_n)_{n \geq 1} \subset \mathscr{Y}(\mathbb R^3)$ be a sequence satisfying: (i) There are positive constants $ m_1, m_2 $ such that $ m_1 \leq \displaystyle \int_{\mathbb R^3} \mathcal{W}_n ^2 + (\partial _{z_1} \mathcal{W}_n )^2 \, dz \leq m_2$. (ii) $ \displaystyle \int_{\mathbb R^3} \frac{ 1}{{\mathfrak c}_s ^2} \mathcal{W}_n ^2 + \frac{ 1}{{\mathfrak c}_s ^2} (\partial _{z_1} \mathcal{W}_n )^2 + \frac{ \Gamma}{3 } \mathcal{W}_n ^3 \, dz \to 0$ as $ n \to \infty$. 
(iii) $ \displaystyle \liminf_ {n \to \infty} \mathscr{S}( \mathcal{W}_n) \leq \mathscr{S}_*.$ \noindent Then there exist $\sigma>0$, $ \mathcal{W} \in \mathscr{Y}(\mathbb R^3) \setminus \{ 0 \}$, a subsequence $ ( \mathcal{W}_{n_j} )_{j \geq 0}$, and a sequence $(z^j)_{j\geq 0} \subset \mathbb R^3$ such that $z \mapsto \mathcal{W}( z_1 , \sigma^{-1} z_\perp)$ is a ground state for {\rm (KP-I)} with speed $1/(2{\mathfrak c}_s^2)$ and $$ \mathcal{W}_{n_j} ( \cdot - z^j ) \to \mathcal{W} \quad \quad \quad {\it in} \quad \mathscr{Y}(\mathbb R^3) . $$ \end{theo} We will study the behavior of travelling waves to (TW$_c$) in the transonic limit $c \nearrow {\mathfrak c}_s$ in space dimension $N=2$ and $N=3$ under the assumption $\Gamma \not = 0$ (so that (KP-I) has nontrivial solitary waves). For $ 0 < \varepsilon < {\mathfrak c}_s$, we define $c(\varepsilon)>0$ by $$ c(\varepsilon) = \sqrt{{\mathfrak c}_s^2 - \varepsilon^2} . $$ As already mentioned, in this asymptotic regime the travelling waves are expected to be close to ``the'' ground state of (KP-I) (to the best of our knowledge, the uniqueness of this solution up to translations has not been proven yet). Let us give the formal derivation of this result, which follows the arguments given in \cite{JR} for the Gross-Pitaevskii equation in dimensions $N=2$ and $N=3$.
We insert the ansatz \be \label{ansatzKP} U(x) = r_0 \Big( 1 + \varepsilon^2 A_\varepsilon (z) \Big) \exp \big( i\varepsilon \varphi_\varepsilon (z) \big) \quad \quad \quad \mbox{ where } z_1 = \varepsilon x_1, \quad z_\perp = \varepsilon^2 x_\perp \ee in (TW$_{c(\varepsilon)}$), cancel the phase factor and separate the real and imaginary parts to obtain the system \be \label{MadTW} \left\{\begin{array}{ll} \displaystyle{ - c(\varepsilon) \partial_{z_1} A_\varepsilon + 2 \varepsilon^2 \partial_{z_1} \varphi_\varepsilon \partial_{z_1} A_\varepsilon + 2 \varepsilon^4 \nabla_{z_\perp} \varphi_\varepsilon \cdot \nabla_{z_\perp} A_\varepsilon + (1 + \varepsilon^2 A_\varepsilon) \Big( \partial_{z_1}^2 \varphi_\varepsilon + \varepsilon^2 \Delta_{z_\perp} \varphi_\varepsilon \Big)} = 0 \\ \ \\ \displaystyle{ - c(\varepsilon) \partial_{z_1} \varphi_\varepsilon + \varepsilon^2 (\partial_{z_1} \varphi_\varepsilon)^2 + \varepsilon^4 |\nabla_{z_\perp} \varphi_\varepsilon |^2 - \frac{1}{\varepsilon^2} F\Big( r_0^2 ( 1 +\varepsilon^2 A_\varepsilon )^2 \Big) - \varepsilon^2 \frac{ \partial_{z_1}^2 A_\varepsilon + \varepsilon^2 \Delta_{z_\perp} A_\varepsilon}{1+\varepsilon^2 A_\varepsilon}} = 0. \end{array}\right. \ee Formally, if $A_\varepsilon \to A$ and $\varphi_\varepsilon \to \varphi$ as $\varepsilon \to 0$ in some reasonable sense, then to the leading order we obtain $ - {\mathfrak c}_s \partial_{z_1} A +\partial^2_{z_1} \varphi = 0 $ for the first equation in \eqref{MadTW}. Since $F$ is of class $\mathcal{C}^2$ near $r_0^2$, using the Taylor expansion $$ F\Big( r_0^2 ( 1 +\varepsilon^2 A_\varepsilon )^2 \Big) = F(r_0^2) - {\mathfrak c}_s^2 \varepsilon^2 A_\varepsilon + \mathcal{O}(\varepsilon^4) $$ with $F(r_0^2)=0$ and $ {\mathfrak c}_s^2 = -2 r_0^2 F'(r_0^2) $, the second equation in \eqref{MadTW} implies $ - {\mathfrak c}_s \partial_{z_1} \varphi + {\mathfrak c}_s^2 A = 0$. 
In both cases, we obtain the constraint \be \label{prepar} {\mathfrak c}_s A = \partial_{z_1} \varphi . \ee We multiply the first equation in \eqref{MadTW} by $c(\varepsilon) / {\mathfrak c}_s^2 $ and we apply the operator $ \frac{1}{ {\mathfrak c}_s ^2} \partial_{z_1} $ to the second one, then we add the resulting equalities. Using the Taylor expansion $$ F\Big( r_0^2 (1+ \alpha)^2 \Big) = - {\mathfrak c}_s^2 \alpha - \Big( \frac{{\mathfrak c}_s^2}{2} - 2 r_0^4 F''(r_0^2) \Big) \alpha^2 + F_3(\alpha), \quad \quad \quad {\rm where} \ F_3(\alpha) = \mathcal{O}(\alpha^3) \ {\rm as} \ \alpha \to 0 , $$ we get \begin{align} \label{desing} \frac{{\mathfrak c}_s^2 - c^2(\varepsilon) }{\varepsilon^2 {\mathfrak c}_s^2} \, \partial_{z_1} A_\varepsilon & \ - \frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1} \Big( \frac{\partial_{z_1}^2 A_\varepsilon + \varepsilon^2 \Delta_{z_\perp} A_\varepsilon}{1 + \varepsilon^2 A_\varepsilon} \Big) + \frac{c(\varepsilon) }{ {\mathfrak c}_s^2} ( 1 + \varepsilon^2 A_\varepsilon ) \Delta_{z_\perp} \varphi_\varepsilon \nonumber \\ & + \Big\{ 2 \frac{c(\varepsilon)}{{\mathfrak c}_s^2} \partial_{z_1} \varphi_\varepsilon \partial_{z_1} A_\varepsilon + \frac{c(\varepsilon)}{{\mathfrak c}_s^2}\, A_\varepsilon \partial^2_{z_1} \varphi_\varepsilon + \frac{1}{{\mathfrak c}_s^2} \, \partial_{z_1} \left( ( \partial_{z_1} \varphi_\varepsilon )^2 \right) + \Big[ \frac{1}{2} - 2 r_0^4 \frac{F''(r_0^2)}{{\mathfrak c}_s^2} \Big] \partial_{z_1} ( A_\varepsilon^2 ) \Big\} \nonumber \\ & = - 2 \varepsilon^2 \frac{c(\varepsilon)}{{\mathfrak c}_s^2} \nabla_{z_\perp} \varphi_\varepsilon \cdot \nabla_{z_\perp} A_\varepsilon - \frac{\varepsilon^2}{{\mathfrak c}_s^2} \partial_{z_1} \left( \, |\nabla_{z_\perp} \varphi_\varepsilon |^2 \right) - \frac{1}{{\mathfrak c}_s^2 \varepsilon^4}\, \partial_{z_1} \left( F_3(\varepsilon^2 A_\varepsilon) \right) . 
\end{align} If $A_\varepsilon \to A$ and $\varphi_\varepsilon \to \varphi$ as $\varepsilon \to 0$ in a suitable sense, we have ${\mathfrak c}_s^2 - c^2(\varepsilon) = \varepsilon^2$ and $\partial_{z_1}^{-1} A = \varphi/{\mathfrak c}_s$ by \eqref{prepar}, and then \eqref{desing} gives $$ \frac{1}{{\mathfrak c}_s^2}\, \partial_{z_1} A - \frac{1}{{\mathfrak c}_s^2} \, \partial^3_{z_1} A + \Gamma A \partial_{z_1} A + \Delta_{z_\perp} \partial_{z_1}^{-1} A = 0 , $$ which is (SW).\\ The main result of this paper is as follows. \begin{theo} \label{res1} Let $N \in \{ 2, 3 \}$ and assume that the nonlinearity $F$ satisfies (A2) and (A4) with $ \Gamma \neq 0$. Let $(U_n, c_n)_{n \geq 1}$ be any sequence such that $U_n \in \mathcal{E}$ is a nonconstant solution of (TW$_{c_n}$), $ c_n \in (0, {\mathfrak c}_s )$, $ c_n \to {\mathfrak c}_s $ as $ n \to \infty $, and one of the following situations occurs: (a) $N=2$ and $U_n$ minimizes $E$ under the constraint $Q = Q(U_n)$, as in Theorem \ref{th2dposit} (a) or (b). (b) $N=2$ and $U_n (c_n \cdot) $ minimizes the functional $ I (\psi ) : = Q( \psi ) + \displaystyle \int_{\mathbb R^N} V(|\psi |^2) \, dx $ under the constraint $ \displaystyle \int_{ \mathbb R^N} |\nabla \psi |^2 \, dx = \int_{ \mathbb R^N} |\nabla U_n |^2 \, dx$, as in Theorem \ref{th2d}. (c) $N=3$ and $U_n$ minimizes $E_{c_n} = E + c_n Q$ under the constraint $P_{c_n} =0$, as in Theorem \ref{thM}.
Then there exists $ n _0 \in \mathbb N$ such that $ |U_n| \geq r_0 /2$ in $ \mathbb R^N$ for all $ n \geq n_0$ and, denoting $ \varepsilon_n = \sqrt{ {\mathfrak c}_s ^2 - c_n ^2}$ (so that $ c_n = c( \varepsilon_n)$), we have \begin{eqnarray} \label{energy} E(U_n) \sim - {\mathfrak c}_s Q (U_n) \sim r_0^2 {\mathfrak c}_s^4 (7-2N) \mathscr{S}_{\rm min} \Big( {\mathfrak c}_s^2 - c_n^2 \Big)^{\frac{5-2N}{2}} = r_0^2 {\mathfrak c}_s^4 (7-2N) \mathscr{S}_{\rm min} \varepsilon_n^{5-2N} \end{eqnarray} and \begin{eqnarray} \label{Ec} E(U_n) + c_n Q (U_n) \sim {\mathfrak c}_s^2 r_0^2 \mathscr{S}_{\rm min} \varepsilon_n^{7-2N} \qquad \mbox{ as } n \to \infty. \end{eqnarray} Moreover, $U_n$ can be written in the form $$ U_n(x) = r_0 \Big( 1 + \varepsilon_n^2 A_n (z) \Big) \exp \big( i \varepsilon_n \varphi_n (z) \big) , \quad \quad \quad \mbox{ where } \quad z_1 = \varepsilon_n x_1, \quad z_\perp = \varepsilon_n^2 x_\perp , $$ and there exist a subsequence $(U_{n_k}, c_{n_k})_{k \geq 1}$, a ground state $\mathcal{W}$ of {\rm (KP-I)} and a sequence $ (z^k )_{k \geq 1} \subset \mathbb R^N$ such that, denoting $ \tilde{A}_k = A_{n_k}( \cdot - z^k)$, $ \tilde{\varphi}_k = \varphi_{n_k } (\cdot - z^k)$, for any $1 < p < \infty$ we have $$ \tilde{A}_k \to \mathcal{W}, \qquad \partial_{z_1} \tilde{A}_k \to \partial_{z_1} \mathcal{W}, \qquad \partial_{z_1} \tilde{\varphi}_ k \to {\mathfrak c}_s \mathcal{W} \quad \mbox{\it and} \quad \partial_{z_1} ^2 \tilde{\varphi}_ k \to {\mathfrak c}_s \partial_{z_1} \mathcal{W} \quad {\it in} \quad W^{1,p}(\mathbb R^N) \mbox{ as } k \to \infty. $$ \end{theo} As already mentioned, if $F$ satisfies (A3) and (A4) it is possible to modify $F$ in a neighborhood of infinity such that the modified nonlinearity $\tilde{F}$ also satisfies (A2) and such that (TW$_c$) has the same solutions as the corresponding equation with $\tilde{F}$ in place of $F$. Then one may use Theorems \ref{th2dposit}, \ref{th2d} and \ref{thM} to construct travelling waves for (NLS).
It is obvious that Theorem \ref{res1} above also applies to the solutions constructed in this way. Let us mention that in the case of the Gross-Pitaevskii nonlinearity $F(\varrho) = 1-\varrho$ and in dimension $N=2$, F. B\'ethuel, P. Gravejat and J-C. Saut proved in \cite{BGS1} the same type of convergence for the solutions constructed in \cite{BGS2}. Those solutions are global minimizers of the energy with prescribed momentum, which makes it possible to derive {\it a priori} bounds: for instance, their energy is small. In fact, if $V$ is nonnegative and $N=2$, Theorem \ref{th2dposit} provides travelling wave solutions with speed $\simeq {\mathfrak c}_s$ for $|q|$ small and the proof of Theorem \ref{res1} is quite similar to \cite{BGS1}, and therefore we will focus on the other cases. However, if the potential $V$ achieves negative values, the minimization of the energy under the constraint of fixed momentum on the whole space $ \mathcal{E}$ is no longer possible, hence the approach in Theorem \ref{th2d} or the local minimization approach in Theorem \ref{th2dposit} $(b)$. In dimension $N=3$ (even for the Gross-Pitaevskii nonlinearity $F(\varrho) = 1-\varrho$), the travelling waves we deal with have high energy and momentum and are {\it not} minimizers of the energy at fixed momentum (which are the vortex rings, see \cite{BOS}). In particular, we have to show that the $U_n$'s are vortexless ($|U_n|\geq r_0 /2$). For the Gross-Pitaevskii nonlinearity, Theorem \ref{res1} provides a rigorous proof of the existence of the upper branch in the so-called Jones-Roberts curve in dimension three (\cite{JR}). This upper branch was conjectured on the basis of formal expansions and numerical simulations (the latter limited, however, to moderate momentum). In dimension $N=3$, the solutions on this upper branch are expected to be unstable (see \cite{BR}), and these rarefaction pulses should evolve by creating vortices (cf. \cite{B}).\\ It is also natural to investigate the one-dimensional case.
Firstly, the (KP-I) equation has to be replaced by the (KdV) equation \be \tag{KdV} 2 \partial_\tau \zeta + \Gamma \zeta \partial_{z} \zeta - \frac{1}{{\mathfrak c}_s^2} \, \partial_{z}^3 \zeta = 0 , \ee and (SW) becomes $$ \frac{1}{{\mathfrak c}_s^2}\, \partial_z \mathcal{W} + \Gamma \mathcal{W} \partial_{z} \mathcal{W} - \frac{1}{{\mathfrak c}_s^2} \, \partial_{z}^3 \mathcal{W} = 0 . $$ If $ \Gamma \not = 0 $, the only nontrivial travelling wave for (KdV) (up to space translations) is given by $$ {\rm w} (z) = - \frac{3}{{\mathfrak c}_s^2 \Gamma \cosh^2(z/2)} , $$ and there holds $$ \mathscr{S}({\rm w}) = \int_{\mathbb R} \frac{1}{{\mathfrak c}_s^2}\, (\partial_{z} {\rm w} )^2 + \frac{\Gamma}{3} \, {\rm w}^3 \ dz + \frac{1}{{\mathfrak c}_s^2} \int_{\mathbb R} {\rm w}^2\ dz = \int_{\mathbb R} \frac{2}{{\mathfrak c}_s^2}\, (\partial_{z} {\rm w} )^2 \ dz = \frac{48}{5 {\mathfrak c}_s^6 \Gamma^2} . $$ The following result, which corresponds to Theorem \ref{res1} in dimension $N=1$, was proved in \cite{C1d} by using ODE techniques. \begin{theo} [\cite{C1d}] \label{res2} Let $N = 1 $ and assume that $F$ satisfies (A4) with $ \Gamma \not = 0 $. Then, there are $\delta>0 $ and $ 0 < \mathfrak{c}_0 < {\mathfrak c}_s $ with the following properties. For any $ \mathfrak{c}_0 \leq c < {\mathfrak c}_s $, there exists a solution $U_c$ to {\rm (TW$_{c}$)} satisfying $ \| \, |U_c| - r_0 \|_{L^\infty(\mathbb R)} \leq \delta$. Moreover, for $ \mathfrak{c}_0 \leq c < {\mathfrak c}_s $ any nonconstant solution $u$ of {\rm (TW$_{c}$)} verifying $ \| \, |u| - r_0 \|_{L^\infty(\mathbb R)} \leq \delta$ is of the form $ u(x) = {\sf e}^{i\theta} U_c (x-\xi)$ for some $ \theta \in \mathbb R $ and $ \xi \in \mathbb R$. 
The map $U_c$ can be written in the form $$ U_c(x) = r_0 \left( 1 + \varepsilon^2 A_\varepsilon (z) \right) \exp ( i \varepsilon \varphi_\varepsilon (z) ) , \qquad \mbox{ where } z = \varepsilon x \quad \mbox{ and } \quad \varepsilon = \sqrt{{\mathfrak c}_s^2 - c^2} $$ and for any $1 \leq p \leq \infty$, $$ \partial_{z} \varphi_\varepsilon \to {\mathfrak c}_s {\rm w} \quad \quad \quad {\it and} \quad \quad \quad A_\varepsilon \to {\rm w} \quad {\it in} \quad W^{1,p}(\mathbb R) \quad \quad \quad {\it as} \quad \varepsilon \to 0 . $$ Finally, as $ \varepsilon \to 0 $, $$ E(U_{c(\varepsilon)} ) \sim - {\mathfrak c}_s Q (U_{c(\varepsilon)} ) \sim 5 r_0^2 {\mathfrak c}_s^4 \mathscr{S}( {\rm w}) \Big( {\mathfrak c}_s^2 - c^2(\varepsilon) \Big)^{\frac{3}{2}} = \varepsilon^3 \frac{48 r_0^2 }{{\mathfrak c}_s^2 \Gamma^2} $$ and $$ E(U_{c(\varepsilon)} ) + c(\varepsilon) Q (U_{c(\varepsilon)} ) \sim {\mathfrak c}_s^2 r_0^2 \mathscr{S}( {\rm w}) \varepsilon^{5} = \frac{48 r_0^2 }{5 {\mathfrak c}_s^4 \Gamma^2} \varepsilon^5. $$ \end{theo} \begin{rem} \rm In the one-dimensional case it can be easily shown that the mapping $(\mathfrak{c}_0 , {\mathfrak c}_s) \ni c \mapsto ( A_c - r_0 , \partial_z \phi ) \in W^{1,p}(\mathbb R) $, where $ U_c = A_c \exp( i \phi ) $, is continuous for every $1 \leq p \leq \infty$. \end{rem} A natural question is to investigate the dynamical counterparts of Theorems \ref{res1} and \ref{res2}. 
If $\Psi_\varepsilon^0$ is an initial datum for (NLS) of the type $$ \Psi_\varepsilon^0 (x) = r_0 \Big( 1 + \varepsilon^2 A_\varepsilon^0 (z) \Big) \exp \Big( i \varepsilon \varphi_\varepsilon^0(z) \Big) , $$ with $z=(z_1,z_\perp) = (\varepsilon x_1, \varepsilon^2 x_\perp)$ and $ {\mathfrak c}_s A_\varepsilon^0 \simeq \partial_{z_1} \varphi_\varepsilon^0$, we use, at time $t>0$, the following ansatz for $\Psi_\varepsilon$, with functions $A_\varepsilon$, $\varphi_\varepsilon$ depending on $(\tau,z)$: $$ \Psi_\varepsilon (t,x) = r_0 \Big( 1 + \varepsilon^2 A_\varepsilon (\tau,z)\Big) {\sf e}^{i \varepsilon \varphi_\varepsilon(\tau,z) } , \quad \quad \quad \tau = {\mathfrak c}_s \varepsilon^3 t , \quad z_1 = \varepsilon ( x_1 - {\mathfrak c}_s t) , \quad z_\perp = \varepsilon^2 x_\perp . $$ Similar computations imply that, for times $\tau$ of order one (that is $t$ of order $\varepsilon^{-3}$), we have $ {\mathfrak c}_s A_\varepsilon \simeq \partial_{z_1} \varphi_\varepsilon$ and $A_\varepsilon$ converges to a solution of the (KP-I) equation. This (KP-I) asymptotic dynamics for the Gross-Pitaevskii equation in dimension $N=3$ is formally derived in \cite{BR} and is used to investigate the linear instability of the solitary waves of speed close to $ {\mathfrak c}_s = \sqrt{2} $. The one-dimensional analogue, where the (KP-I) equation has to be replaced by the corresponding Korteweg-de Vries equation, can be found in \cite{ZK} and \cite{KAL}. The rigorous mathematical proofs of these regimes have been provided in \cite{CR2} in arbitrary space dimension and for a general nonlinearity $F$ (the coefficient $\Gamma$ might even vanish), respectively in \cite{BGSS} for the one-dimensional Gross-Pitaevskii equation by using the complete integrability of the equation (more precisely, the existence of sufficiently many conservation laws).
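The explicit one-dimensional objects recalled above lend themselves to a symbolic consistency check. The following is a small sketch using sympy (the symbols \verb+cs+ and \verb+Gamma+ stand for ${\mathfrak c}_s$ and $\Gamma$); it verifies that ${\rm w}$ solves the once-integrated travelling-wave equation for (KdV) and that $\mathscr{S}({\rm w}) = \frac{48}{5 {\mathfrak c}_s^6 \Gamma^2}$:

```python
# Symbolic sanity check (sympy) of the explicit (KdV) travelling wave:
# w(z) = -3/(cs^2*Gamma*cosh(z/2)^2) solves the travelling-wave equation
# integrated once in z (using decay at infinity), and its action is
# S(w) = int_R 2/cs^2 (w')^2 dz = 48/(5 cs^6 Gamma^2).
import sympy as sp

z = sp.symbols('z', real=True)
cs, Gamma = sp.symbols('c_s Gamma', positive=True)

w = -3 / (cs**2 * Gamma * sp.cosh(z / 2)**2)

# once-integrated travelling-wave ODE:  w/cs^2 + (Gamma/2) w^2 - w''/cs^2 = 0
ode = w / cs**2 + Gamma * w**2 / 2 - sp.diff(w, z, 2) / cs**2
assert sp.simplify(ode) == 0

# the action S(w) reduces to int 2/cs^2 (w')^2 dz by the 1d identities
S = sp.integrate(2 / cs**2 * sp.diff(w, z)**2, (z, -sp.oo, sp.oo))
assert sp.simplify(S - 48 / (5 * cs**6 * Gamma**2)) == 0
```

This also confirms the value $E(U_{c(\varepsilon)}) + c(\varepsilon) Q(U_{c(\varepsilon)}) \sim {\mathfrak c}_s^2 r_0^2 \mathscr{S}({\rm w}) \varepsilon^5$ quoted in Theorem \ref{res2}.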
\subsection{Scheme of the proof of Theorem \ref{res1}} In case $(a)$ there is a direct proof of Theorem \ref{res1} which is quite similar to the one in \cite{BGS1}. Moreover, it follows from Proposition 5.12 in \cite{CM1} that if $(U_n, c_n)$ satisfies $(a)$ then it also satisfies $(b)$, so it suffices to prove Theorem \ref{res1} in cases $(b)$ and $(c)$. The first step is to give sharp asymptotics for the quantities minimized in \cite{CM1} and \cite{Maris} in order to prove the existence of travelling waves, namely to estimate $$ I_{\rm min} (k) = \inf \Big\{ \int_{\mathbb R^2} V(|\psi|^2) \ d x + Q(\psi ) \; \big\vert \; \psi \in \mathcal{E}, \ \int_{\mathbb R^2} |\nabla \psi|^2 \ dx = k \Big\} \qquad \mbox{ as } k \to 0$$ and $$ T_c = \inf \Big\{ E(\psi ) + c Q (\psi ) \; \big\vert \; \psi \in \mathcal{E}, \; \psi \mbox{ is not constant, } E(\psi) + c Q (\psi) = \int_{\mathbb R^3} | \nabla_{x_\perp} \psi |^2 \ dx \Big\} \qquad \mbox{ as } c \to {\mathfrak c}_s.$$ These bounds are obtained by plugging test functions with the ansatz \eqref{ansatzKP} into the corresponding minimization problems, where $(A_\varepsilon,\varphi_\varepsilon ) \simeq (A, {\mathfrak c}_s^{-1} \partial_{z_1}^{-1} A )$ and $A$ is a ground state for (KP-I). A similar upper bound for $I_{\rm min}(k)$ was already a crucial point in \cite{CM1} to rule out the dichotomy of minimizing sequences. \begin{prop} \label{asympto} Assume that $F$ satisfies (A2) and (A4) with $ \Gamma \not = 0 $. Then: (i) If $N=2$, we have as $ k \to 0 $ $$ I_{\rm min} (k) \leq - \frac{k}{{\mathfrak c}_s^2} - \frac{4 k^3}{27 r_0^4 {\mathfrak c}_s^{12} \mathscr{S}^2_{\rm min}} + \mathcal{O}(k^5) .
$$ (ii) If $ N = 3 $, the following upper bound holds as $\varepsilon \to 0$ (that is, as $c(\varepsilon) \to {\mathfrak c}_s$): $$ T_{c(\varepsilon)} \leq {\mathfrak c}_s^2 r_0^2 \mathscr{S}_{\rm min} ( {\mathfrak c}_s^2 - c^2(\varepsilon) )^{\frac{1}{2}} + \mathcal{O}\Big( ( {\mathfrak c}_s^2 - c^2(\varepsilon) )^{\frac{3}{2}} \Big) = {\mathfrak c}_s^2 r_0^2 \mathscr{S}_{\rm min} \varepsilon + \mathcal{O}(\varepsilon^3) . $$ \end{prop} The second step is to derive upper bounds for the energy and the momentum. In space dimension three (case $(c)$) this is tricky. Indeed, if $U_c$ is a minimizer of $E_c$ under the constraint $ P_c = 0$, the only information we have is about $ T_c = \displaystyle \int_{\mathbb R^N} |\nabla _{x_{\perp}} U_c |^2 \, dx $ (see the first identity in (\ref{Pohozaev})). In particular, we have no {\it a priori} bounds on $\displaystyle \int_{\mathbb R^N} \Big| \frac{ \partial U_c}{\partial x_1 } \Big|^2 \, dx$, $Q(U_c)$ and the potential energy $\displaystyle \int_{\mathbb R^N} V(|U_c|^2) \, dx$. Using an averaging argument we infer that there is a sequence $(U_n, c_n)$ for which we have "good" bounds on the energy and the momentum. Then we prove a rigidity property of "good sequences": any sequence $(U_n, c_n)$ that satisfies the "good bounds" has a subsequence that satisfies the conclusion of Theorem \ref{res1}. This rigid behavior, combined with the existence of a sequence with "good bounds" and a continuation argument, allows us to conclude that Theorem \ref{res1} holds for {\it any } sequence $(U_n, c_n)$ with $ c_n \to {\mathfrak c}_s$ (as in (c)). More precisely, we will prove: \begin{prop} \label{monoto} Let $N \geq 3$ and assume that $F$ satisfies (A1) and (A2). Then: (i) For any $ c \in (0, {\mathfrak c}_s )$ and any minimizer $ U $ of $E_c$ in $ \mathscr{C} _c $ we have $Q(U) <0$. (ii) The function $ (0, {\mathfrak c}_s) \ni c \longmapsto T_c \in \mathbb R_+$ is decreasing, and thus has a derivative almost everywhere.
(iii) The function $ c \longmapsto T_c $ is left continuous on $(0, {\mathfrak c}_s)$. If it has a derivative at $c_0$, then for any minimizer $U_0$ of $ E_{c_0}$ under the constraint $P_{c_0} = 0$, scaled so that $U_0$ solves {\rm (TW}$_{c_0}${\rm)}, there holds $$ \frac{d T_c}{dc}_{|c=c_0} = Q(U_0) . $$ (iv) Let $ c_0 \in ( 0 , {\mathfrak c}_s )$. Assume that there is a sequence $ (c_n) _{n \geq 1}$ such that $ c_n > c_0$, $ c_n \to c_0$ and for any $n$ there is a minimizer $ U_n \in \mathcal{E}$ of $E_{c_n}$ on $ \mathscr{C} _{c_n}$ which solves (TW$_{c_n}$) and the sequence $(Q(U_n))_{n \geq 1 }$ is bounded. Then $ c \longmapsto T_c$ is continuous at $ c_0$. (v) Let $0 < c_1 <c_2 < {\mathfrak c}_s$. Let $U_i $ be minimizers of $E_{c_i} $ on $ \mathscr{C}_{c_i}$, $ i =1,2$, such that $ U_i$ solves {\rm (TW}$_{c_i}${\rm)}. Denote $ q_1 = Q(U_1)$ and $q_2 = Q(U_2).$ Then we have $$ \frac{ T_{c_1}^2}{q_1^2} - c_1 ^2 \geq \frac{ T_{c_2}^2}{q_2^2} - c_2 ^2 . $$ (vi) If $N=3$, $F$ verifies (A4) and $\Gamma \not = 0$, there exist a constant $C>0$ and a sequence $\varepsilon_n \to 0$ such that for any minimizer $U_n \in \mathcal{E}$ of $ E_{c(\varepsilon_n)} $ on $ \mathscr{C}_{c(\varepsilon_n)}$ which solves {\rm (TW}$_{c(\varepsilon_n)}${\rm)} we have $$ E(U_n) \leq \frac{C}{\varepsilon_n} \qquad \mbox{ and } \qquad |Q(U_n) | \leq \frac{C}{\varepsilon_n} . $$ \end{prop} \begin{prop} \label{convergence} Assume that $N=3$, (A2) and (A4) hold and $ \Gamma \neq 0$. Let $( U_n, \varepsilon _n)_{n \geq 1}$ be a sequence such that $ \varepsilon_n \to 0$, $U_n$ minimizes $E_{c(\varepsilon_n)}$ on $ \mathscr{C} _{c(\varepsilon_n)}$, satisfies {\rm (TW}$_{c(\varepsilon_n)}${\rm)} and there exists a constant $C>0$ such that $$ E(U_n) \leq \frac{C}{\varepsilon_n} \qquad \mbox{ and } \qquad |Q(U_n) | \leq \frac{C}{\varepsilon_n} \qquad \mbox{ for all } n. $$ Then there is a subsequence of $(U_n, c( \varepsilon _n))_{n \geq 1}$ which satisfies the conclusion of Theorem \ref{res1}. 
\end{prop} \begin{prop} \label{global3} Let $ N =3$ and suppose that (A2) and (A4) hold with $ \Gamma \neq 0$. There are $ K > 0$ and $ \varepsilon _* > 0 $ such that for any $ \varepsilon \in ( 0, \varepsilon _* )$ and for any minimizer $U$ of $E_{c(\varepsilon)} $ on $ \mathscr{C}_{c (\varepsilon)}$ scaled so that $U$ satisfies {\rm (TW}$_{c(\varepsilon )}${\rm)} we have $$ E(U) \leq \frac{K}{\varepsilon} \qquad \mbox{ and } \qquad |Q(U) | \leq \frac{K}{\varepsilon}. $$ \end{prop} It is now obvious that the proof of Theorem \ref{res1} in the three-dimensional case follows directly from Propositions \ref{convergence} and \ref{global3} above. The most difficult and technical point in the above program is to prove Proposition \ref{convergence}. Let us describe our strategy to carry out that proof, as well as the proof of Theorem \ref{res1} in the two-dimensional case. Once we have a sequence of travelling waves to (NLS) with "good bounds" on the energy and the momentum and speeds that tend to $ {\mathfrak c}_s$, we need to show that those solutions do not vanish and can be lifted. We recall the following result, which is a consequence of Lemma 7.1 in \cite{CM1}: \begin{lem} [\cite{CM1}] \label{liftingfacile} Let $ N \geq 2 $ and suppose that the nonlinearity $F$ satisfies (A1) and ((A2) or (A3)). Then for any $ \delta > 0 $ there is $ M( \delta)> 0$ such that for all $ c \in [0, {\mathfrak c}_s]$ and for all solutions $ U \in \mathcal{E}$ of {\rm (TW}$_c${\rm )} such that $ \| \nabla U \| _{L^2( \mathbb R^N)} < M( \delta )$ we have $$ \| \, |U| - r_0 \|_{L^\infty(\mathbb R^N)} \leq \delta . $$ \end{lem} In the two-dimensional case the lifting properties follow immediately from Lemma \ref{liftingfacile}. 
However, in dimension $N=3$, for travelling waves $U_{c(\varepsilon)} $ which minimize $E_{c(\varepsilon)}$ on $ \mathscr{C} _{c(\varepsilon)} $ the quantity $ \Big\| \frac{ \partial U_{c(\varepsilon)}}{\partial x_1 } \Big\|_{L^2} ^2 $ is large, of order $\simeq \varepsilon^{-1}$ as $ \varepsilon \to 0$. We give a lifting result for those solutions, based on the fact that $ \| \nabla _{x_{\perp}} U_{c(\varepsilon)} \|_{L^2} ^2= \frac{N-1}{2} T_{c(\varepsilon)} $ is sufficiently small. \begin{prop} \label{lifting} We consider a nonlinearity $F$ satisfying (A1) and ((A2) or (A3)). Let $ U \in \mathcal{E}$ be a travelling wave to {\rm (NLS)} of speed $ c \in [0, {\mathfrak c}_s ]$. (i) If $N \geq 3$, for any $0 < \delta < r_0$ there exists $ \mu = \mu ( \delta ) > 0 $ such that $$ \Big\| \frac { \partial U}{\partial x_1} \Big\|_{L^2( \mathbb R^N)} \cdot \|\nabla _{x_{\perp}} U \| _{L^2( \mathbb R^N)}^{N-1} \leq \mu( \delta) \qquad \mbox{ implies } \qquad \| \, |U| - r_0 \|_{L^\infty(\mathbb R^N)} \leq \delta . $$ (ii) If $ N \geq 4$ and, moreover, (A3) holds or $\displaystyle \Big\| \frac { \partial U}{\partial x_1} \Big\|_{L^2( \mathbb R^N)} \cdot \|\nabla _{x_{\perp}} U \|_{L^2( \mathbb R^N)} ^{N-1} \leq 1$, then for any $ \delta > 0 $ there is $ m(\delta) > 0$ such that $$ \int_{\mathbb R^N} |\nabla _{x_{\perp}} U|^2 \, dx \leq m( \delta) \qquad \mbox{ implies } \qquad \| \, |U| - r_0 \|_{L^\infty(\mathbb R^N)} \leq \delta . $$ \end{prop} As an immediate consequence, the three-dimensional travelling wave solutions provided by Theorem \ref{thM} have modulus close to $r_0$ (hence do not vanish) as $ c \to {\mathfrak c}_s$: \begin{cor} \label{sanszero} Let $N=3$ and consider a nonlinearity $F$ satisfying (A2) and (A4) with $\Gamma \not= 0$. 
Then, the travelling wave solutions $U_{c(\varepsilon)} $ to (NLS) provided by Theorem \ref{thM} which satisfy an additional bound $E(U_{c(\varepsilon)} ) \leq \frac{C}{\varepsilon}$ (with $C$ independent of $ \varepsilon$) verify $$ \| \, |U_{c(\varepsilon)}| - r_0 \|_{L^\infty(\mathbb R^3)} \to 0 \quad \quad \mbox{ as } \quad \varepsilon \to 0. $$ In particular, for $\varepsilon$ sufficiently close to $ 0$ we have $|U_{c(\varepsilon)}| \geq r_0 /2 $ in $\mathbb R^3$. \end{cor} \noindent {\it Proof.} By the second identity in (\ref{Pohozaev}) we have $$ \int_{\mathbb R^3} \Big|\frac{\partial U_{c(\varepsilon)}}{\partial x_1} \Big|^2 \, dx = \frac 12 E ( U_{c(\varepsilon)} ) \leq \frac{C}{\varepsilon}. $$ Moreover, the first identity in \eqref{Pohozaev} and Proposition \ref{asympto} $(ii)$ imply $$ \int_{\mathbb R^3} |\nabla _{x_{\perp}} U_{c(\varepsilon)} |^2 \, dx = E_{c(\varepsilon)}(U_{c(\varepsilon)}) = T_{c(\varepsilon)} \leq C \varepsilon . $$ Hence $ \Big\| \frac { \partial U_{c(\varepsilon)}}{\partial x_1} \Big\|_{L^2( \mathbb R^3)} \|\nabla _{x_{\perp}} U_{c(\varepsilon)} \| _{L^2( \mathbb R^3)}^{2} \leq C \sqrt{\varepsilon} $ and the result follows from Proposition \ref{lifting} $(i)$. \ $\Box$ \\ We now give some properties of the two-dimensional travelling wave solutions provided by Theorem \ref{th2d}. \begin{prop} \label{prop2d} Let $N = 2 $ and assume that $F$ verifies (A2) and (A4) with $\Gamma \not = 0$.
Then there exist constants $C_1, \, C_2, \, C_3 , \, C_4>0$ and $ 0 < k_* < k_\infty$ such that all travelling wave solutions $U_k$ provided by Theorem \ref{th2d} with $ 0 < k = \displaystyle \int_{\mathbb R^2} |\nabla U_k|^2 \ dx < k_* $ satisfy $|U_k| \geq r_0 / 2 $ in $\mathbb R^2$, \be \label{estim2d} C_1 k \leq - Q(U_k) \leq C_2 k, \qquad C_1 k \leq \int_{\mathbb R^2} V(|U_k|^2) \, dx \leq C_2 k, \qquad C_1 k \leq \int_{\mathbb R^2} (\chi ^2(|U_k|) - r_0 ^2) ^2\, dx \leq C_2 k \ee and have a speed $c(U_k) = \sqrt{{\mathfrak c}_s^2 - \varepsilon_k^2}$ satisfying \be \label{kifkif} C_3 k \leq \varepsilon_k \leq C_4 k . \ee \end{prop} At this stage, we know that the travelling waves provided by Theorems \ref{th2d} and \ref{thM} do not vanish if their speed is sufficiently close to $ {\mathfrak c}_s$. Using the above lifting results, we may write such a solution $U_c$ in the form \be \label{ansatz} U_c (x) = \rho (x) {\sf e}^{i\phi(x)} = r_0 \sqrt{1+\varepsilon^2 \mathcal{A}_{\varepsilon}(z) }\ {\sf e}^{i\varepsilon \varphi_{\varepsilon} (z)}, \quad \quad \mbox{ where } \quad \varepsilon = \sqrt{ {\mathfrak c}_s ^2 - c^2}, \quad z_1 = \varepsilon x_1 , \ z_\perp = \varepsilon^2 x_\perp , \ee and we use the same scaling as in \eqref{ansatzKP}. The advantage of writing the modulus in this way (and not as in \eqref{ansatzKP}) is simply that it streamlines the algebra and yields expressions similar to those in \cite{BGS1}. Since $\mathcal{A} _{\varepsilon } = 2 A_{\varepsilon} + \varepsilon^2 A_{\varepsilon}^2$, bounds in Sobolev spaces for $\mathcal{A} _{\varepsilon} $ imply similar Sobolev bounds for $A_{\varepsilon }$ and conversely. We shall now find Sobolev bounds for $\mathcal{A} _{\varepsilon} $ and $\varphi _{\varepsilon } $.
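The elementary relation $\mathcal{A}_{\varepsilon} = 2 A_{\varepsilon} + \varepsilon^2 A_{\varepsilon}^2$ between the two ways of writing the modulus can be verified symbolically; a minimal sympy sketch (with \verb+eps+ and \verb+A+ as placeholders for $\varepsilon$ and $A_\varepsilon$):

```python
# Check (sympy) that the two ansatz forms for the modulus agree:
# r0*sqrt(1 + eps^2*Acal) = r0*(1 + eps^2*A)  forces  Acal = 2A + eps^2*A^2,
# so Sobolev bounds for Acal and for A are mutually equivalent.
import sympy as sp

eps, A = sp.symbols('varepsilon A', real=True)

# solving sqrt(1 + eps^2 * Acal) = 1 + eps^2 * A for Acal:
Acal = ((1 + eps**2 * A)**2 - 1) / eps**2
assert sp.simplify(Acal - (2 * A + eps**2 * A**2)) == 0
```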
It is easy to see that (TW$_{c}$) is equivalent to the following system for the phase $ \varphi$ and the modulus $ \rho$ (in the original variable $x$): \be \label{phasemod} \left\{\begin{array}{l} \displaystyle{ c \frac{\partial}{\partial {x_1}} (\rho^2 - r_0^2 ) = 2 \mbox{div} ( \rho^2 \nabla \phi ) }, \\ \\ \displaystyle{ \Delta \rho - \rho |\nabla \phi|^2 + \rho F(\rho^2) = - c \rho \frac{\partial \phi}{\partial {x_1}} } . \end{array}\right. \ee Multiplying the second equation by $ 2 \rho$, we write (\ref{phasemod}) in the form \be \label{phasemod2} \left\{\begin{array}{l} 2 \mbox{div} ( (\rho^2 - r_0 ^2) \nabla \phi ) - \displaystyle{ c \frac{\partial}{\partial {x_1}} (\rho^2 - r_0^2 ) } = - 2 r_0 ^2 \Delta \phi , \\ \\ \displaystyle{ \Delta (\rho^2 - r_0 ^2) - 2 |\nabla U_c|^2 + 2 \rho^2 F(\rho^2) + 2c( \rho ^2 - r_0 ^2) \frac{\partial \phi}{\partial {x_1}} = - 2 c r_0^2 \frac{\partial \phi}{\partial {x_1}} } . \end{array}\right. \ee Let $\eta = \rho^2 - r_0 ^2$. We apply the operator $ \displaystyle - 2c \frac{ \partial }{\partial x_1}$ to the first equation in (\ref{phasemod2}) and we take the Laplacian of the second one, then we add the resulting equalities to get \be \label{fond} \left[ \Delta^2 - {\mathfrak c}_s^2 \Delta + c^2 \frac{ \partial ^2}{\partial x_1^2} \right] \eta = \Delta \left( 2 |\nabla U_c|^2 - 2 c \eta \frac{\partial \phi}{\partial x_1} - 2 \rho ^2 F( \rho ^2) - {\mathfrak c}_s^2 \eta \right) + 2 c \frac{\partial}{ \partial x_1} (\mbox{div} (\eta \nabla \phi ) ). \ee Since ${\mathfrak c}_s^2 = - 2 r_0^2 F'(r_0^2)$, using the Taylor expansion $$ 2 (s + r_0^2) F( s + r_0^2) + {\mathfrak c}_s^2 s = - \frac{ {\mathfrak c}_s ^2}{r_0^2} \left( 1 - \frac{ r_0^4 F'' (r_0^2)}{{\mathfrak c}_s ^2} \right) s^2 + r_0 ^2 \tilde{F}_3(s), $$ where $ \tilde{F}_3(s) = \mathcal{O}(s^3)$ as $s \to 0$, we see that the right-hand side in \eqref{fond} is at least quadratic in $(\eta , \phi) $. 
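The quadratic coefficient in the Taylor expansion just used can be checked symbolically. Below is a small sympy sketch, modelling $F$ near $r_0^2$ by its Taylor polynomial; the symbols \verb+F1+, \verb+F2+, \verb+F3+ stand for the values of $F'$, $F''$, $F'''$ at $r_0^2$ (recall $F(r_0^2)=0$ and ${\mathfrak c}_s^2 = -2r_0^2 F'(r_0^2)$):

```python
# Symbolic check (sympy) of the expansion of G(s) = 2(s+r0^2)F(s+r0^2) + cs^2 s:
# with cs^2 = -2 r0^2 F'(r0^2) and F(r0^2) = 0, G has no constant or linear
# term, and its quadratic coefficient is -(cs^2/r0^2)(1 - r0^4 F''(r0^2)/cs^2).
import sympy as sp

s, r0, F1, F2, F3 = sp.symbols('s r_0 F_1 F_2 F_3', real=True)

# Taylor polynomial of F(s + r0^2) around s = 0 (F1, F2, F3 = F', F'', F''' at r0^2)
F = F1 * s + F2 / 2 * s**2 + F3 / 6 * s**3
cs2 = -2 * r0**2 * F1                      # sound speed squared

G = sp.expand(2 * (s + r0**2) * F + cs2 * s)

assert G.coeff(s, 0) == 0 and G.coeff(s, 1) == 0
claimed = -cs2 / r0**2 * (1 - r0**4 * F2 / cs2)
assert sp.simplify(G.coeff(s, 2) - claimed) == 0
```

The cubic and higher-order terms are absorbed into $r_0^2 \tilde{F}_3(s) = \mathcal{O}(s^3)$, consistent with the expansion above.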
Then we perform a scaling and pass to the variable $ z = ( \varepsilon x_1, \varepsilon^2 x_{\perp})$ (where $ \varepsilon = \sqrt{ {\mathfrak c}_s ^2 - c^2}$), so that \eqref{fond} becomes \begin{align} \label{Fonda} \Big\{ \partial_{z_1}^4 - \partial_{z_1}^2 - {\mathfrak c}_s^2 \Delta_{z_\perp} + 2 \varepsilon ^2 \partial_{z_1}^2 \Delta_{z_\perp} + \varepsilon^4 \Delta^2_{z_\perp} \Big\} \mathcal{A} _{\varepsilon} = & \ \mathcal{R} _{\varepsilon}, \end{align} where $\mathcal{R} _{\varepsilon}$ contains terms at least quadratic in $(\mathcal{A} _{\varepsilon} ,\varphi _{\varepsilon})$: \begin{align*} \mathcal{R} _{\varepsilon} = & \ \{ \partial_{z_1}^2 + \varepsilon ^2 \Delta_{z_\perp} \} \Big[ 2(1 + \varepsilon^2 \mathcal{A} _{\varepsilon}) \Big( (\partial_{z_1} \varphi _{\varepsilon} )^2 + \varepsilon ^2 |\nabla_{z_\perp} \varphi _{\varepsilon} |^2 \Big) + \varepsilon^2 \frac{(\partial_{z_1} \mathcal{A} _{\varepsilon} )^2 + \varepsilon ^2 |\nabla_{z_\perp} \mathcal{A} _{\varepsilon} |^2}{2(1+ \varepsilon ^2 \mathcal{A} _{\varepsilon} )} \Big] \\ & \ - 2 c \varepsilon ^2 \Delta_{z_\perp} ( \mathcal{A} _{\varepsilon} \partial_{z_1} \varphi _{\varepsilon}) + 2 c \varepsilon ^2 \displaystyle \sum_{j=2}^N \partial_{z_1} \partial_{z_j} ( \mathcal{A} _{\varepsilon} \partial_{z_j} \varphi _{\varepsilon} ) \\ & \ + \{ \partial_{z_1}^2 + \varepsilon ^2 \Delta_{z_\perp} \} \Big[ {\mathfrak c}_s^2 \Big( 1 - \frac{r_0^4F''(r_0^2)}{{\mathfrak c}_s^2} \Big) \mathcal{A} _{\varepsilon}^2 - \frac{1}{\varepsilon ^4} \tilde{F}_3(r_0^2 \varepsilon ^2 \mathcal{A} _{\varepsilon}) \Big] . \end{align*} In the two-dimensional case, uniform bounds (with respect to $\varepsilon$) in Sobolev spaces have been derived in \cite{BGS1} by using \eqref{Fonda} and a bootstrap argument. 
This technique is based upon the fact that some kernels related to the linear part in \eqref{Fonda}, such as $$ \mathscr{F}^{-1} \Big( \frac{\xi_1^2}{ \xi_1^4 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_{\perp}|^2 + 2 \varepsilon ^2 \xi_1^2 |\xi_{\perp}|^2 + \varepsilon ^4 |\xi_{\perp}|^4 } \Big) \quad \quad {\rm and} \quad \quad \mathscr{F}^{-1} \Big( \frac{\varepsilon ^2 |\xi_\perp|^2}{ \xi_1^4 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_{\perp}|^2 + 2 \varepsilon ^2 \xi_1^2 |\xi_{\perp}|^2 + \varepsilon ^4 |\xi_{\perp}|^4 } \Big) $$ are bounded in $L^p(\mathbb R^2)$ for $p$ in some interval $[2, \bar{p})$, uniformly with respect to $\varepsilon$. However, this is no longer true in dimension $N=3$: the above-mentioned kernels are not in $L^2(\mathbb R^3)$ (but their Fourier transforms are uniformly bounded), and from the analysis in \cite{G}, the kernel $$ \mathscr{F}^{-1} \Big( \frac{\xi_1^2}{ \xi_1^4 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_{\perp}|^2} \Big) $$ is presumably too singular near the origin to be in $L^p( \mathbb R^3) $ if $p \geq 5/3$. This lack of integrability of the kernels makes the analysis in the three-dimensional case much more difficult than in the case $ N=2$. One of the main difficulties in the three-dimensional case is to prove that for $ \varepsilon $ sufficiently small, $ \mathcal{A}_{\varepsilon}$ is uniformly bounded in $L^p$ for some $ p >2$. To do this we use a suitable decomposition of $\mathcal{A} _{\varepsilon }$ in the Fourier space (see the proof of Lemma \ref{Grenouille} below). Then we improve the exponent $p$ by using a bootstrap argument, combining the iterative argument in \cite{BGS1} (which uses the quadratic nature of $\mathcal{R} _{\varepsilon}$ in \eqref{Fonda}) and the appropriate decomposition of $\mathcal{A} _{\varepsilon}$ in the Fourier space. This leads to some $L^p$ bound with $p> 3 = N$. Once this bound is proved, the proof of the $W^{1,p}$ bounds follows the scheme in \cite{BGS1}.
We get: \begin{prop} \label{Born} Under the assumptions of Theorem \ref{res1}, there is $\varepsilon_0 >0 $ such that $ \mathcal{A}_{\varepsilon} \in W^{4, p}(\mathbb R^N)$ and $ \nabla \varphi _{\varepsilon } \in W^{3, p}(\mathbb R^N)$ for all $ \varepsilon \in ( 0, \varepsilon_0)$ and all $ p \in ( 1, \infty)$. Moreover, for any $ p \in (1, \infty ) $ there exists $C_{p} >0$ such that for all $ \varepsilon \in (0, \varepsilon _0) $ \begin{eqnarray} \label{goodestimate} \| \mathcal{A}_{\varepsilon } \|_{L^p} + \| \nabla \mathcal{A}_{\varepsilon } \|_{L^p} + \| \partial^2_{z_1} \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon ^2 \| \nabla_{z_\perp}^2 \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p \qquad \mbox{ and } \end{eqnarray} \begin{eqnarray} \label{goodestimate2} \begin{array}{l} \| \partial _{z_1} \varphi _{\varepsilon } \|_{L^p} + \varepsilon \| \nabla_{z_{\perp }} \varphi _{\varepsilon } \| _{L^p} + \| \partial _{z_1}^2 \varphi _{\varepsilon } \|_{L^p} + \varepsilon \| \nabla_{z_{\perp }} \partial _{z_1} \varphi _{\varepsilon } \| _{L^p} + \varepsilon ^2 \| \nabla_{z_{\perp }} ^2 \varphi _{\varepsilon } \| _{L^p} \\ \\ + \| \partial _{z_1}^3 \varphi _{\varepsilon } \|_{L^p} + \varepsilon \| \nabla_{z_{\perp }} \partial _{z_1} ^2 \varphi _{\varepsilon }\| _{L^p} + \varepsilon ^2 \| \nabla_{z_{\perp }} ^2 \partial _{z_1} \varphi _{\varepsilon }\| _{L^p} \leq C_p. \end{array} \end{eqnarray} The estimate \eqref{goodestimate} is also valid with $A_{\varepsilon }$ instead of $ \mathcal{A}_{\varepsilon }$. \end{prop} Once these bounds are established, the estimates in Proposition \ref{asympto} show that $({\mathfrak c}_s^{-1} \partial_{z_1} \varphi_n )_{n\geq 0}$ is a minimizing sequence for the problem \eqref{minimiz} if $N=2$, respectively for the problem (\ref{miniGS}) if $N=3$.
Since Theorems \ref{gs2d} and \ref{gs} provide compactness properties for minimizing sequences, we get (pre)compactness of $ ( {\mathfrak c}_s^{-1} \partial_{z_1} \varphi_n )_{n\geq 0}$ in $\mathscr{Y}(\mathbb R^N) \hookrightarrow L^2(\mathbb R^N)$, and then we complete the proof of Theorem \ref{res1} by standard interpolation in Sobolev spaces. \subsection{On the higher-dimensional case} It is natural to ask what happens in the transonic limit in dimension $N\geq 4$. Firstly, it should be noticed that even for the Gross-Pitaevskii nonlinearity the problem is critical if $N=4$ and supercritical in higher dimensions, hence Theorem \ref{thM} does not apply directly. The first crucial step is to investigate the behaviour of $T_c$ as $c \to {\mathfrak c}_s$. In particular, in order to be able to use Proposition \ref{lifting} to show that the solutions are vortexless in this limit, we would need to prove that $T_c \to 0$ as $c \to {\mathfrak c}_s$. We have not been able to prove (or disprove) this in dimensions $N=4$ and $N=5$, except for the case $ \Gamma = 0$. Quite surprisingly, for nonlinearities satisfying (A3) and (A4) (this is the case for both the Gross-Pitaevskii and the cubic-quintic nonlinearity), this is not true in dimensions higher than $5$, as shown by the following \begin{prop} \label{dim6} Suppose that $F$ satisfies (A3) and (A4) (and $\Gamma$ is arbitrary). If $ N \geq 6$, there exists $\delta >0 $ such that for any $ 0 \leq c \leq {\mathfrak c}_s$ and for any nonconstant solution $U \in \mathcal{E}$ of {\rm (TW$_{c}$)}, we have $$ E(U) + c Q(U) \geq \delta . $$ In particular, $$ \inf_{0 < c < {\mathfrak c}_s} T_c > 0 . $$ The same conclusion holds if $ N \in \{ 4, 5 \}$ provided that $ \Gamma = 0$. \end{prop} Therefore we do not know if the solutions constructed in Theorem \ref{thM} (for a subcritical nonlinearity) may vanish or not as $c\to {\mathfrak c}_s$ if $N\geq 6$.
On the other hand we can show, in any space dimension $N\geq 4$, that we cannot scale the solutions in order to have compactness and convergence to a localized and nontrivial object in the transonic limit as soon as the quantity $E+cQ$ tends to zero. \begin{prop} \label{vanishing} Let $N\geq 4$ and suppose that $F$ satisfies (A2), (A3) and (A4) (and $\Gamma$ is arbitrary). Assume that there exists a sequence $(U_n, c_n)$ such that $ c_n \in (0, {\mathfrak c}_s]$, $U_n \in \mathcal{E}$ is a nonconstant solution of {\rm (TW$_{c_n}$)} and $E_{c_n}(U_n) \to 0$ as $n\to \infty$. Then, for $n$ large enough, there exist $\alpha_n,\beta_n, \lambda_n, \sigma_n \in \mathbb R$, $ A_n \in H^1( \mathbb R^N)$ and $ \varphi_n \in \dot{H}^1( \mathbb R^N)$ uniquely determined such that $$ U_n(x) = r_0 \Big( 1 + \alpha_n A_n (z) \Big) \exp \Big( i \beta_n \varphi_n (z) \Big) , \quad \quad \quad \mbox{ where } \quad z_1 = \lambda_n x_1, \quad z_\perp = \sigma_n x_\perp , $$ $$ \alpha_n \to 0 \qquad \mbox{ and } \qquad \| A_n \|_{L^\infty(\mathbb R^N)} = \| A_n \|_{L^2(\mathbb R^N)} = \| \partial_{z_1} \varphi_n \|_{ L^2(\mathbb R^N)} = \| \nabla_{z_\perp} \varphi_n \|_{L^2(\mathbb R^N)} = 1 . $$ Moreover, we have $c_n \to {\mathfrak c}_s$ and $$ \| \partial_{z_1} A_n \|_{L^2(\mathbb R^N)} \to 0 \qquad \mbox{ as } n \to +\infty. $$ \end{prop} Consequently, even if one could show that $T_c \to 0$ as $ c \to {\mathfrak c}_s $ in space dimension $4$ or $5$, we would not have a nontrivial limit (after rescaling) of the corresponding rarefaction pulses. \section{Three-dimensional ground states for (KP-I) \label{proofGS}} We recall the anisotropic Sobolev inequality (see \cite{BIN}, p.
323): for $N \geq 2$ and for any $2 \leq p < \frac{2(2N-1)}{2N-3}$, there exists $C=C(p,N)$ such that for all $\Theta \in \mathcal{C}_c^\infty(\mathbb R^N)$ we have \be \label{Pastis} \| \partial_{z_1} \Theta \|_{L^p(\mathbb R^N)} \leq C \| \partial_{z_1} \Theta \|_{L^2(\mathbb R^N)}^{1 - \frac{(2N-1)(p-2)}{2p}} \| \partial^2_{z_1} \Theta \|_{L^2(\mathbb R^N)}^{\frac{N(p-2)}{2p}} \| \nabla_{z_\perp} \Theta \|_{L^2(\mathbb R^N)}^{\frac{(N-1)(p-2)}{2p}} . \ee This shows that the energy $\mathscr{E}$ is well-defined on $\mathscr{Y}(\mathbb R^N)$ if $N=2$ or $N=3$. By (\ref{Pastis}) and the density of $\partial_{z_1} C_c^{\infty}(\mathbb R^3)$ in $\mathscr{Y}(\mathbb R^3)$ we get for any $ w \in \mathscr{Y}(\mathbb R^3)$: \be \label{SobAnis} \| w \|_{L^3(\mathbb R^3)} \leq C \| w \|_{L^2(\mathbb R^3)}^{\frac16} \| \partial_{z_1} w \|_{L^2(\mathbb R^3)}^{\frac12} \| \nabla_{z_\perp} \partial_{z_1}^{-1} w \|_{L^2(\mathbb R^3)}^{\frac13} . \ee On the other hand, the following identities hold for any solution $\mathcal{W} \in \mathscr{Y} (\mathbb R^N)$ of (SW): \be \label{identites} \left\{\begin{array}{ll} \displaystyle{ \int_{\mathbb R^N} \frac{1}{{\mathfrak c}_s^2}\, (\partial_{z_1} \mathcal{W})^2 + |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 + \frac{\Gamma}{2}\, \mathcal{W}^3 + \frac{1}{{\mathfrak c}_s^2}\, \mathcal{W}^2 \ dz = 0} \\ \ \\ \displaystyle{ \int_{\mathbb R^N} \frac{-1}{{\mathfrak c}_s^2}\, (\partial_{z_1} \mathcal{W})^2 +3 |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 + \frac{\Gamma}{3}\, \mathcal{W}^3 + \frac{1}{{\mathfrak c}_s^2}\, \mathcal{W}^2 \ dz } = 0 \\ \ \\ \displaystyle{ \int_{\mathbb R^N} \frac{1}{{\mathfrak c}_s^2}\, (\partial_{z_1} \mathcal{W})^2 + |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 + \frac{\Gamma}{3}\, \mathcal{W}^3 + \frac{1}{{\mathfrak c}_s^2}\, \mathcal{W}^2 \ dz = \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz }. \end{array}\right. 
\ee The first identity is obtained by multiplying (SW) by $\partial_{z_1}^{-1} \mathcal{W}$ and integrating, whereas the two other equalities are the Pohozaev identities associated to the scalings in the $z_1$ and $z_\perp$ variables respectively. Formally, they are obtained by multiplying (SW) by $z_1 \mathcal{W}$ and $z_\perp \cdot \nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}$ respectively and integrating by parts (see \cite{dBSIHP} for a complete justification). Combining the equalities in \eqref{identites} we get \be \label{Ident} \left\{\begin{array}{c} \displaystyle{ \int_{\mathbb R^N} \frac{1}{{\mathfrak c}_s^2} \, (\partial_{z_1} \mathcal{W} )^2 \ dz = \frac{N}{N-1} \int_{\mathbb R^N} | \nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz } \\ \\ \displaystyle{ \frac{\Gamma}{6} \int_{\mathbb R^N} \mathcal{W}^3 \ dz = - \frac{2}{N-1} \int_{\mathbb R^N} | \nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz } \\ \\ \displaystyle{ \int_{\mathbb R^N} \frac{1}{{\mathfrak c}_s^2} \, \mathcal{W}^2 \ dz = \frac{7-2N}{N-1} \int_{\mathbb R^N} | \nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}|^2 \ dz }. \end{array}\right. \ee Notice that for $N\geq 4$ we have $ 7-2N <0$ and the last equality implies $\mathcal{W}=0$. We recall the following results about the ground states of (SW) and the compactness of minimizing sequences in $ \mathscr{Y}(\mathbb R^3)$. \begin{Lemma} [\cite{dBSIHP}, \cite{dBSSIAM}] \label{gs3} Let $ N=3$ and $ \Gamma \neq 0$. (i) For $ \lambda \in \mathbb R^*$, denote $I_{\lambda } = \inf \Big\{ \| w \|_{\mathscr{Y} (\mathbb R^3)} ^2 \; | \; \displaystyle \int_{\mathbb R^3} w^3 (z)\, dz = \lambda \Big\}.$ Then for any $ \lambda \in \mathbb R^*$ we have $ I_{\lambda } > 0$ and there is $ w_{\lambda } \in \mathscr{Y}(\mathbb R^3)$ such that $\displaystyle \int_{\mathbb R^3} w_{\lambda} ^3 (z)\, dz = \lambda$ and $\| w _{\lambda} \|_{\mathscr{Y} (\mathbb R^3)} ^2 = I_{\lambda }$. 
Moreover, any sequence $(w_n)_{n \geq 1} \subset \mathscr{Y}(\mathbb R^3)$ such that $\displaystyle \int_{\mathbb R^3} w_n^3 (z)\, dz \to \lambda $ and $\| w _{n} \|_{\mathscr{Y} (\mathbb R^3)} ^2 \to I_{\lambda }$ has a subsequence that converges in $ \mathscr{Y}(\mathbb R^3)$ (up to translations) to a minimizer of $ I_{\lambda }$. (ii) There is $ \lambda^* \in \mathbb R^*$ such that $ w^* \in \mathscr{Y}(\mathbb R^3)$ is a ground state for {\rm (SW)} (that is, minimizes the action $ \mathscr{S}$ among all solutions of {\rm (SW)}) if and only if $w^*$ is a minimizer of $I_{\lambda ^*}$. \end{Lemma} The first part of Lemma \ref{gs3} is a consequence of the proof of Theorem 3.2 p. 217 in \cite{dBSIHP} and the second part follows from Lemma 2.1 p. 1067 in \cite{dBSSIAM}. {\it Proof of Theorem \ref{gs}.} Given $ w \in \mathscr{Y}(\mathbb R^3)$ and $ \sigma > 0$, we denote $P(w) = \displaystyle \int_{\mathbb R^3} \frac{1}{{\mathfrak c}_s ^2} w^2 + \frac{1}{{\mathfrak c}_s ^2} |\partial_{z_1} w|^2 + \frac{\Gamma}{3} w^3 \, dz$ and $ w_{\sigma} (z) = w( z_1, \frac{ z_{\perp}}{\sigma} )$. It is obvious that $$ \begin{array}{c} \displaystyle \int_{\mathbb R^3} w_{\sigma }^p \, dz = \sigma ^2 \int_{\mathbb R^3} w^p \, dz , \qquad \int_{\mathbb R^3} |\partial _{z_1} (w_{\sigma })|^2 \, dz = \sigma ^2 \int_{\mathbb R^3} |\partial_{z_1}w|^2 \, dz \qquad \mbox{ and } \\ \\ \displaystyle \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} (w_{\sigma })|^2 \, dz = \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w|^2 \, dz . \end{array} $$ Let $w^*$ be a ground state for (SW) (the existence of $ w^*$ is guaranteed by Lemma \ref{gs3} above). Since $ w^*$ satisfies (\ref{identites}), we have $ P(w^*) = 0 $ and $ \mathscr{S} (w^*) = \displaystyle \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w^*|^2 \, dz.$ Consider $ w \in \mathscr{Y}(\mathbb R^3)$ such that $ w \neq 0$ and $ P(w) = 0$.
Then $ \displaystyle \frac{ \Gamma}{3} \int_{\mathbb R^3} w^3 \, dz = - \frac{1}{{\mathfrak c}_s ^2} \int_{\mathbb R^3} w^2 + | \partial _{z_1} w|^2 \, dz < 0 $ and it is easy to see that there is $ \sigma > 0 $ such that $\displaystyle \int_{\mathbb R^3} w_{\sigma }^3 \, dz = \int_{\mathbb R^3} (w ^*)^3 \, dz = \lambda ^*$. From Lemma \ref{gs3} it follows that $\| w_{\sigma} \|_{\mathscr{Y}(\mathbb R^3)} ^2\geq \| w^* \|_{\mathscr{Y}(\mathbb R^3)} ^2, $ that is $$ \frac{ \sigma ^2} { {\mathfrak c}_s ^2} \int_{\mathbb R^3} w^2 + |\partial _{z_1} w|^2 \, dz + \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w |^2 \, dz \geq \frac{ 1} { {\mathfrak c}_s ^2} \int_{\mathbb R^3} (w^*)^2 + |\partial _{z_1} w^*|^2 \, dz + \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w^*|^2 \, dz . $$ Since $ P(w) = 0 $ and $ P(w^*) = 0$ we have $$ \frac{ \sigma ^2} { {\mathfrak c}_s ^2} \int_{\mathbb R^3} w^2 + |\partial _{z_1} w|^2 \, dz = - \sigma ^2 \frac { \Gamma}{3} \int_{\mathbb R^3} w^3 \, dz = - \frac { \Gamma}{3} \int_{\mathbb R^3} (w^*)^3 \, dz = \frac{1}{ {\mathfrak c}_s ^2} \int_{\mathbb R^3} (w^*)^2 + |\partial _{z_1} w^*|^2 \, dz $$ and the previous inequality gives $ \displaystyle \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w|^2 \, dz \geq \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w^*|^2 \, dz , $ that is $ \mathscr{S}(w) \geq \mathscr{S}( w^*)$. So far we have proved that the set $ {\mathcal P} = \{ w \in \mathscr{Y}(\mathbb R^3) \; | \; w \neq 0, \; P(w) = 0 \}$ is not empty and any ground state $w^*$ of (SW) minimizes the action $ \mathscr{S}$ in this set. It is then clear that for any $ \sigma > 0$, $ w_{\sigma }^* $ also belongs to $ {\mathcal P}$ and minimizes $ \mathscr{S}$ on $ {\mathcal P}$. Conversely, let $ w \in {\mathcal P} $ be such that $ \mathscr{S}(w) = \mathscr{S}_*$. Let $w^*$ be a ground state for (SW).
It is clear that $ \displaystyle \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w|^2 \, dz = \mathscr{S}_* = \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} w^*|^2 \, dz $. As above, there is a unique $ \sigma >0 $ such that $\displaystyle \int_{\mathbb R^3} w_{\sigma}^3 \, dz = \int_{\mathbb R^3} (w^*)^3 \, dz = \lambda ^*$ and then we have $\displaystyle \int_{\mathbb R^3} w_{\sigma } ^2 + |\partial _{z_1} w_{\sigma }|^2 \, dz = \int_{\mathbb R^3} (w^*)^2 + |\partial _{z_1} w^*|^2 \, dz $. We find $ \| w_{\sigma }\|_{\mathscr{Y}(\mathbb R^3)}^2 = \| w^*\|_{\mathscr{Y}(\mathbb R^3)}^2 = I_{\lambda ^*} $, thus $ w_{\sigma }$ is a minimizer for $ I_{\lambda ^*} $ and Lemma \ref{gs3} (ii) implies that $ w_{\sigma }$ is a ground state for (SW). Let $(\mathcal{W}_n)_{n \geq 1} $ be a sequence satisfying $(i)$, $(ii)$ and $(iii)$. We have $ P(\mathcal{W}_n ) \to 0 $ and $$ \frac{\Gamma}{3} \int_{\mathbb R^3} \mathcal{W}_n^3 \, dz = P(\mathcal{W}_n) - \frac{1}{{\mathfrak c}_s ^2} \int_{\mathbb R^3} \mathcal{W}_n ^2 + | \partial _{z_1} \mathcal{W}_n |^2 \, dz \in \left[ \frac{ - 2 m_2}{{\mathfrak c}_s ^2}, - \frac{ m_1}{2 {\mathfrak c}_s ^2} \right] \qquad \mbox{ for all $n$ sufficiently large.} $$ We infer that there are $ n_0 \in \mathbb N$, $ \underline{ \sigma}, \bar{\sigma} > 0 $ and a sequence $(\sigma _n )_{ n \geq n_0}\subset [ \underline{ \sigma}, \bar{\sigma} ] $ such that $ \displaystyle \int_{\mathbb R^3} ( (\mathcal{W}_n)_{\sigma _n}) ^3 \, dz = \lambda ^*$ for all $ n \geq n_0$. 
Moreover, $$ \begin{array}{rcl} \| (\mathcal{W}_n)_{\sigma _n} \| _{\mathscr{Y}(\mathbb R^3)} ^2 & = & \displaystyle \frac{ \sigma _n ^2}{{\mathfrak c}_s ^2} \int_{\mathbb R^3} \mathcal{W}_n ^2 + | \partial _{z_1} \mathcal{W}_n |^2 \, dz + \int_{\mathbb R^3} |\nabla_{z_{\perp}} \partial_{z_1}^{-1} \mathcal{W}_n|^2 \, dz \\ \\ & = & \displaystyle \sigma _n ^2 \left( P(\mathcal{W}_n) - \frac{ \Gamma}{3} \int_{\mathbb R^3} \mathcal{W}_n ^3 \right) + ( \mathscr{S}( \mathcal{W} _n) - P(\mathcal{W}_n) ) \\ & = & \displaystyle ( \sigma _n ^2 - 1) P(\mathcal{W}_n) + \mathscr{S}( \mathcal{W} _n) - \frac{ \Gamma}{3} \int_{\mathbb R^3} (\mathcal{W}_n)_{\sigma _n} ^3 \, dz. \end{array} $$ Passing to the limit in the above equality we get $$ \displaystyle \liminf_{n \to \infty } \| (\mathcal{W}_n)_{\sigma _n} \| _{\mathscr{Y}(\mathbb R^3)} ^2 = \liminf_{n \to \infty } \mathscr{S}( \mathcal{W} _n) - \frac{ \Gamma}{3} \lambda ^* \leq \mathscr{S}_* - \frac{ \Gamma}{3} \lambda ^* = \mathscr{S}( w^*) - \frac{ \Gamma}{3} \int_{\mathbb R^3} (w^*)^3\, dz = \| w^* \|_{\mathscr{Y}(\mathbb R^3)} ^2 = I_{\lambda ^*}. $$ Hence there is a subsequence of $((\mathcal{W}_n)_{\sigma _n})_{n \geq 1}$ which is a minimizing sequence for $I_{\lambda ^*}$. Using Lemma \ref{gs3} we infer that there exist a subsequence $(n_j)_{ j \geq 1}$ such that $ \sigma_{n _j} \to \sigma \in [\underline{\sigma}, \bar{\sigma}]$, a sequence $ (z_j)_{j \geq 1} \subset \mathbb R^3$ and a minimizer $ \mathcal{W}$ of $ I_{\lambda ^*}$ (hence a ground state for (SW)) such that $(\mathcal{W}_{n_j})_{\sigma_{n_j}} ( \cdot - z_j) \to \mathcal{W}$ in $ \mathscr{Y}(\mathbb R^3)$. It is then straightforward that $ \mathcal{W}_{n_j}( \cdot - z_j) \to \mathcal{W}_{\frac{1}{\sigma}}$ in $ \mathscr{Y}(\mathbb R^3)$. 
\ $\Box$ \\ We may give an alternative proof of Theorem \ref{gs} which does not rely directly on the analysis in \cite{dBSIHP}, \cite{dBSSIAM} by following the strategy of \cite{Maris}, which can be adapted to our problem with minor modifications. \section{Proof of Theorem \ref{res1}} \subsection{Proof of Proposition \ref{asympto}} \label{preuveasympto} For given real-valued functions $A_\varepsilon$ and $\varphi_\varepsilon$, we consider the mapping $$ U_{\varepsilon}(x) = |U_{\varepsilon}| (x) {\sf e}^{i\phi(x)} = r_0 \Big( 1+ \varepsilon^2 A_\varepsilon(z) \Big) {\sf e}^{i\varepsilon \varphi_\varepsilon(z)} , \quad \quad \mbox{ where } \quad z=(z_1,z_\perp) = ( \varepsilon x_1 , \varepsilon^2 x_\perp) . $$ It is obvious that $ U_{\varepsilon } \in \mathcal{E}$ provided that $ A_{\varepsilon} \in H^1 ( \mathbb R^N)$ and $ \nabla \varphi _{\varepsilon} \in L^2( \mathbb R^N)$. If $\varepsilon$ is small and $A_\varepsilon$ is uniformly bounded in $\mathbb R^N$, $U_{\varepsilon}$ does not vanish and the momentum $Q(U_{\varepsilon})$ is given by $$ Q(U_{\varepsilon}) = - \int_{\mathbb R^N} ( |U_{\varepsilon}|^2 - r_0^2 ) \frac{\partial \phi}{\partial x_1} \ dx = - \varepsilon^{5-2N} r_0^2 \int_{\mathbb R^N} \Big( 2 A_\varepsilon + \varepsilon^2 A_\varepsilon^2 \Big) \frac{\partial \varphi_\varepsilon}{\partial z_1} \ dz , $$ while the energy of $U_{\varepsilon }$ is \begin{align*} E(U_{\varepsilon} ) = & \ \int_{\mathbb R^N} |\nabla U_{\varepsilon}|^2 + V(|U_{\varepsilon}|^2) \ dx \\ = & \ \varepsilon^{5-2N} r_0^2 \int_{\mathbb R^N} (\partial_{z_1} \varphi_\varepsilon)^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2 + \varepsilon^2 |\nabla_{z_\perp} \varphi_\varepsilon|^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2 + \varepsilon^2 (\partial_{z_1} A_\varepsilon)^2 + \varepsilon^4 |\nabla_{z_\perp} A_\varepsilon|^2 \\ & \hspace{2cm} + {\mathfrak c}_s^2 A_\varepsilon^2 + \varepsilon^2 {\mathfrak c}_s^2 \Big( 1 - \frac{4r_0^4}{3{\mathfrak c}_s^2} F''(r_0^2) \Big)
A_\varepsilon^3 + \frac{{\mathfrak c}_s^2}{\varepsilon^4} V_4 \Big( \varepsilon^2 A_\varepsilon \Big) \ dz , \end{align*} where we have used the Taylor expansion \be \label{V} V \Big( r_0^2 (1 + \alpha)^2 \Big) = r_0^2 \Big\{ {\mathfrak c}_s^2 \alpha^2 + {\mathfrak c}_s^2 \Big( 1 - \frac{4r_0^4}{3{\mathfrak c}_s^2} F''(r_0^2) \Big) \alpha^3 + {\mathfrak c}_s^2 V_4(\alpha) \Big\} = r_0 ^2 {\mathfrak c}_s ^2 \Big\{ \alpha ^2 + \Big(\frac{\Gamma}{3} - 1 \Big) \alpha ^3 + V_4 ( \alpha ) \Big\} \ee with $ V_4(\alpha) = \mathcal{O}(\alpha^4) $ as $\alpha \to 0 . $ Consequently, with ${\mathfrak c}_s^2 = c^2(\varepsilon) + \varepsilon^2$ we get \begin{align} \label{develo} E_{c(\varepsilon)} (U_{\varepsilon }) & = \ E(U_{\varepsilon} ) + c(\varepsilon) Q (U_{\varepsilon}) \nonumber \\ & = \varepsilon^{5-2N} r_0^2 \int_{\mathbb R^N} (\partial_{z_1} \varphi_\varepsilon)^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2 + \varepsilon^2 |\nabla_{z_\perp} \varphi_\varepsilon|^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2 + \varepsilon^2 (\partial_{z_1} A_\varepsilon)^2 + \varepsilon^4 |\nabla_{z_\perp} A_\varepsilon|^2 \nonumber \\ & \hspace{1cm} + {\mathfrak c}_s^2 A_\varepsilon^2 + \varepsilon^2 {\mathfrak c}_s^2 \Big( 1 - \frac{4r_0^4}{3{\mathfrak c}_s^2} F''(r_0^2) \Big) A_\varepsilon^3 + \frac{{\mathfrak c}_s^2}{\varepsilon^4} V_4 \Big( \varepsilon^2 A_\varepsilon \Big) - c(\varepsilon) \Big( 2 A_\varepsilon + \varepsilon^2 A_\varepsilon^2 \Big) \partial_{z_1} \varphi_\varepsilon \ dz \nonumber \\ & = \varepsilon^{7-2N} r_0^2 \int_{\mathbb R^N} \frac{1}{\varepsilon^2} \Big( \partial_{z_1} \varphi_\varepsilon - c(\varepsilon) A_\varepsilon \Big)^2 + (\partial_{z_1} \varphi_\varepsilon)^2 ( 2 A_\varepsilon + \varepsilon^2 A_\varepsilon^2 ) + |\nabla_{z_\perp} \varphi_\varepsilon|^2 ( 1 + \varepsilon^2 A_\varepsilon )^2 + (\partial_{z_1} A_\varepsilon)^2 \nonumber \\ & \hspace{1cm} + \varepsilon^2 |\nabla_{z_\perp} A_\varepsilon|^2 + A_\varepsilon^2 + {\mathfrak c}_s^2 \Big( 
1 - \frac{4r_0^4}{3{\mathfrak c}_s^2} F''(r_0^2) \Big) A_\varepsilon^3 + \frac{{\mathfrak c}_s^2}{\varepsilon^6} V_4( \varepsilon^2 A_\varepsilon) - c(\varepsilon) A_\varepsilon^2 \partial_{z_1} \varphi_\varepsilon \ dz . \end{align} Since the first term in the last integral is penalised by $\varepsilon^{-2}$, in order to get sharp estimates on $E_{c(\varepsilon)} $ one needs $\partial_{z_1} \varphi_\varepsilon \simeq c(\varepsilon) A_\varepsilon$. Let $N=3$. By Theorem \ref{gs}, there exists a ground state $ A \in \mathscr{Y}(\mathbb R^3)$ for (SW). It follows from Theorem 4.1 p. 227 in \cite{dBSSIAM} that $ A \in H^s(\mathbb R^3)$ for any $s \in \mathbb N$. Let $ \varphi = {\mathfrak c}_s \partial_{z_1 }^{-1} A$. We use \eqref{develo} with $A_\varepsilon(z) = \frac{\lambda {\mathfrak c}_s}{ c(\varepsilon) } A( \lambda z_1 , z_\perp)$ and $\varphi_\varepsilon(z) = \varphi( \lambda z_1 , z_\perp)$. For $\varepsilon>0$ small and $\lambda \simeq 1$ (to be chosen later) we define $$ U_\varepsilon (x) = |U_\varepsilon| (x) {\sf e}^{i\phi_\varepsilon(x)} = r_0 \Big( 1+ \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A(z) \Big) {\sf e}^{i\varepsilon \varphi(z)} , \quad \quad \quad \mbox{ where } \qquad z=(z_1,z_\perp) = ( \varepsilon \lambda x_1 , \varepsilon^2 x_\perp) . $$ Notice that $U_{\varepsilon}$ does not vanish if $\varepsilon$ is sufficiently small. 
Since $\partial_{z_1} \varphi = {\mathfrak c}_s A$, we have $ \partial_{z_1} \varphi_\varepsilon (z)= \lambda \partial_{z_1} \varphi ( \lambda z_1, z_{\perp} )= \lambda {\mathfrak c}_s A (\lambda z_1, z_{\perp} )= c(\varepsilon) A_\varepsilon (z) $ and therefore \begin{align*} \lambda E_{c(\varepsilon)} (U_\varepsilon) = & \ {\mathfrak c}_s^2 r_0^2 \varepsilon \int_{\mathbb R^3} \lambda^3 \frac{{\mathfrak c}_s}{c(\varepsilon)} A^2 \Big( 2 A + \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A^2 \Big) + \lambda^2 |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2 \Big( 1 + \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big)^2 + \frac{\lambda^4}{c^2(\varepsilon)} (\partial_{z_1} A)^2 \nonumber \\ & \hspace{2cm} + \varepsilon^2 \frac{\lambda^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2 + \frac{\lambda^2}{c^2(\varepsilon)} A^2 + \frac{{\mathfrak c}_s^3}{c^3(\varepsilon)} \lambda^3 \Big( 1-\frac{4r_0^4}{3{\mathfrak c}_s^2} F''(r_0^2) \Big) A^3 + \frac{1}{\varepsilon^6} V_4\Big( \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big) \nonumber \\ & \hspace{2cm} - \lambda^3 \frac{{\mathfrak c}_s}{c(\varepsilon)} A^3 \ dz \nonumber \\ = & \ {\mathfrak c}_s^2 r_0^2 \varepsilon \int_{\mathbb R^3} \lambda^3 \frac{{\mathfrak c}_s}{c(\varepsilon)} \Big( 1 + \frac{{\mathfrak c}_s^2}{c^2(\varepsilon)} \Big[ \frac{\Gamma}{3} - 1 \Big] \Big) A^3 + \lambda^2 |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2 \Big( 1 + \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big)^2 + \frac{\lambda^4}{c^2(\varepsilon)} (\partial_{z_1} A)^2 \nonumber \\ & \hspace{2cm} + \frac{\lambda^2}{c^2(\varepsilon)} A^2 + \varepsilon^2 \frac{\lambda^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2 + \varepsilon^2 \lambda^4 \frac{{\mathfrak c}_s^2}{c^2(\varepsilon)} A^4 + \frac{1}{ \varepsilon^6} V_4\Big( \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big) \ dz . 
\end{align*} On the other hand, \begin{align*} \lambda \int_{\mathbb R^3} |\nabla_{\perp} U_\varepsilon|^2 \ dx = & \, r_0^2 \varepsilon \int_{\mathbb R^3} |\nabla_{z_\perp} \varphi|^2 \Big( 1+ \varepsilon^2 \lambda \frac{{\mathfrak c}_s}{c(\varepsilon)} A \Big)^2 + \varepsilon^2 \lambda^2 \frac{{\mathfrak c}_s^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2 \ dz \\ = & \, {\mathfrak c}_s^2 r_0^2 \varepsilon \int_{\mathbb R^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2 \Big( 1+ \varepsilon^2 \lambda \frac{{\mathfrak c}_s}{c(\varepsilon)} A \Big)^2 + \varepsilon^2 \frac{\lambda^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2\ dz . \end{align*} Hence $U_\varepsilon $ satisfies the constraint $P_{c(\varepsilon )} (U_{\varepsilon}) = 0 $ (or equivalently $ \displaystyle E_{c(\varepsilon)} (U_\varepsilon ) = \int_{\mathbb R^3} |\nabla_{\perp} U_\varepsilon |^2 \ dx $) if and only if $G(\lambda, \varepsilon ^2) = 0$, where \begin{align*} G (\lambda , \varepsilon^2 ) = & \, \int_{\mathbb R^3} \lambda^3 \frac{{\mathfrak c}_s}{c(\varepsilon)} \Big( 1 + \frac{{\mathfrak c}_s^2}{c^2(\varepsilon)} \Big[ \frac{\Gamma}{3} - 1 \Big] \Big) A^3 + \lambda^2 |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2 \Big( 1 + \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big)^2 + \frac{\lambda^4}{c^2(\varepsilon)} (\partial_{z_1} A)^2 \nonumber \\ & \hspace{2cm} + \frac{\lambda^2}{c^2(\varepsilon)} A^2 + \varepsilon^2 \frac{\lambda^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2 + \varepsilon^2 \lambda^4 \frac{{\mathfrak c}_s^2}{c^2(\varepsilon)} A^4 + \frac{1}{\varepsilon^6} V_4\Big( \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} \lambda A \Big) \ dz \\ & \ - \int_{\mathbb R^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2 \Big( 1+ \varepsilon^2 \lambda \frac{{\mathfrak c}_s}{c(\varepsilon)} A \Big)^2 + \varepsilon^2 \frac{\lambda^2}{c^2(\varepsilon)} |\nabla_{z_\perp} A|^2\ dz . \end{align*} Denote $\epsilon = \varepsilon^2$. 
Since $A$ is a ground state for (SW), it satisfies the Pohozaev identities \eqref{identites}. The last of these identities is $ \mathscr{S}(A) = \displaystyle \int_{\mathbb R^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} A |^2 \ dz $, or equivalently $$ G( \lambda = 1, \epsilon = 0 ) = 0 . $$ A straightforward computation using \eqref{Ident} gives $$ \frac{\partial G}{\partial \lambda}_{|(\lambda = 1, \epsilon = 0)} = \int_{\mathbb R^3} \Gamma A^3 + 2 |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2 + \frac{4}{{\mathfrak c}_s^2} (\partial_{z_1} A)^2 + \frac{2}{{\mathfrak c}_s^2} A^2 \ dz = 3 \int_{\mathbb R^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2 \ dz \not = 0 . $$ Then the implicit function theorem implies that there exists a function $\epsilon \longmapsto \lambda(\epsilon) = 1 + \mathcal{O} ( \epsilon) = 1 + \mathcal{O}(\varepsilon^2)$ such that for all $\epsilon$ sufficiently small we have $G(\lambda(\epsilon),\epsilon ) = 0$, that is, $U_{\varepsilon} $ satisfies the Pohozaev identity $P_{c(\varepsilon)}(U_{\varepsilon}) = 0$. Choosing $ \lambda = \lambda ( \varepsilon ^2)$ and taking into account the last identity in \eqref{identites}, we find $$ T_{c(\varepsilon)} \leq E_{c(\varepsilon)}(U_\varepsilon) = \int_{\mathbb R^3} |\nabla_{\perp} U_\varepsilon |^2 \ dx = {\mathfrak c}_s^2 r_0^2 \varepsilon \int_{\mathbb R^3} |\nabla_{z_\perp} \partial_{z_1}^{-1} A|^2 \ dz + \mathcal{O}(\varepsilon^3) = {\mathfrak c}_s^2 r_0^2 \varepsilon \mathscr{S}_{\rm min} + \mathcal{O}(\varepsilon^3) $$ and the proof of $(ii)$ is complete. Next we turn our attention to the case $N=2$. Let $A = {\mathfrak c}_s^{-1} \partial_{z_1} \varphi \in \mathscr{Y}(\mathbb R^2) $ be a ground state of (SW). The existence of $A$ is given by Theorem \ref{gs2d}. By Theorem 4.1 p. 227 in \cite{dBSIHP} we have $A \in H^s(\mathbb R^2)$ for all $s \in \mathbb N$.
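For later reference, let us record the values that the identities \eqref{Ident} take when $N = 2$; here $A$ is the two-dimensional ground state fixed above and we denote for brevity $b = \displaystyle \int_{\mathbb R^2} (\partial_{z_2} \partial_{z_1}^{-1} A)^2 \, dz$ (a shorthand used only in this remark):
$$
\frac{1}{{\mathfrak c}_s^2} \int_{\mathbb R^2} (\partial_{z_1} A)^2 \, dz = 2 b , \qquad \frac{\Gamma}{6} \int_{\mathbb R^2} A^3 \, dz = - 2 b , \qquad \frac{1}{{\mathfrak c}_s^2} \int_{\mathbb R^2} A^2 \, dz = 3 b ,
$$
and the last identity in \eqref{identites} gives $ \mathscr{S}(A) = 2 b $. These values are used repeatedly in the expansions below.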
For $\varepsilon$ small, we define the map $$ U_\varepsilon(x) = |U_\varepsilon| (x) {\sf e}^{i\phi_\varepsilon(x)} = r_0 \Big( 1+ \varepsilon^2 \frac{{\mathfrak c}_s}{c(\varepsilon)} A(z) \Big) {\sf e}^{i\varepsilon \varphi(z)} , \quad \quad \quad \mbox{ where } \qquad z=(z_1,z_2) = ( \varepsilon x_1 , \varepsilon^2 x_2) . $$ From the above computations and \eqref{Ident} we have \begin{align*} k_\varepsilon = & \ \int_{\mathbb R^2} |\nabla U_\varepsilon|^2 \ dx = r_0^2 \varepsilon \int_{\mathbb R^2} (\partial_{z_1} \varphi_\varepsilon)^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2 + \varepsilon^2 (\partial_{z_1} A_\varepsilon)^2 + \varepsilon^2 (\partial_{z_2} \varphi_\varepsilon)^2 \Big( 1 + \varepsilon^2 A_\varepsilon \Big)^2 + \varepsilon^4 (\partial_{z_2} A_\varepsilon)^2 \ dz \\ = & \ r_0^2 {\mathfrak c}_s^2 \varepsilon \int_{\mathbb R^2} A^2 \Big( 1 + \frac{\varepsilon^2 {\mathfrak c}_s}{c(\varepsilon)} A \Big)^2 + \frac{\varepsilon^2}{c^2(\varepsilon)} (\partial_{z_1} A)^2 + \varepsilon^2 (\partial_{z_2} \partial_{z_1}^{-1} A)^2 \Big( 1 + \frac{\varepsilon^2 {\mathfrak c}_s}{c(\varepsilon)} A \Big)^2 + \frac{\varepsilon^4}{c^2(\varepsilon)} (\partial_{z_2} A)^2 \ dz \\ = & \ r_0^2 {\mathfrak c}_s^2 \Big\{ \varepsilon \int_{\mathbb R^2} A^2 \ dz + \varepsilon^3 \int_{\mathbb R^2} \Big( 2 A^3 + \frac{(\partial_{z_1} A)^2}{{\mathfrak c}_s^2} + (\partial_{z_2} \partial_{z_1}^{-1} A)^2 \Big) \ dz + \mathcal{O}(\varepsilon^5) \Big\} \\ = & \ r_0^2 {\mathfrak c}_s^2 \Big\{ \varepsilon \frac32 {\mathfrak c}_s ^2 \mathscr{S}(A) + \varepsilon^3 \Big( 2 - \frac{12}{\Gamma} - \frac12 \Big) \mathscr{S}(A) + \mathcal{O}(\varepsilon^5) \Big\} \end{align*} It is easy to see that $\varepsilon \mapsto k_\varepsilon$ is a smooth increasing diffeomorphism from an interval $[0,\bar{\varepsilon}]$ onto an interval $[0,\bar{k}= \bar{k}_{\bar{\varepsilon}}]$, and that $ \varepsilon = \displaystyle{\frac{k_\varepsilon}{r_0^2 {\mathfrak c}_s^2 \| A \|_{L^2}^2}} + 
\mathcal{O}(k_\varepsilon^3) = \frac{k_{\varepsilon }}{\frac 32 r_0 ^2 {\mathfrak c}_s ^4 \mathscr{S} (A) } + \mathcal{O}(k_{\varepsilon}^3) $ as $\varepsilon \to 0$. Moreover, denoting $U_{\varepsilon}^\sigma (x) = U_{\varepsilon}(x/ \sigma)$ we have $$ \int_{\mathbb R^2} |\nabla U_\varepsilon^\sigma|^2 \ dx = \int_{\mathbb R^2} |\nabla U_\varepsilon|^2 \ dx $$ because $N=2$. Using the test function $U_\varepsilon^\sigma$, it follows that $$ I_{\rm min} (k_\varepsilon) \leq I(U_\varepsilon^\sigma ) \qquad \mbox{ for any } \sigma >0 . $$ Since $Q(U_{\varepsilon}) < 0$, the mapping $$ \sigma \longmapsto I(U_\varepsilon^\sigma ) = Q( U _\varepsilon^\sigma) + \int_{\mathbb R^2} V(|U_\varepsilon^\sigma|^2) \ dx = \sigma Q(U_\varepsilon) + \sigma^2 \int_{\mathbb R^2} V(|U_\varepsilon|^2) \ dx $$ achieves its minimum at $ \sigma_0 = \displaystyle{\frac{- Q(U_\varepsilon)}{\displaystyle{2 \int_{\mathbb R^2} V(|U_\varepsilon|^2)}}} > 0 $, and the minimum value is $ I(U_\varepsilon^{\sigma_0}) = \displaystyle \frac{- \displaystyle Q^2(U_\varepsilon)}{ \displaystyle{4\int_{\mathbb R^2} V(|U_\varepsilon|^2) \ dx}}. $ Hence $$ I_{\rm min} (k_\varepsilon) \leq I(U_\varepsilon^{\sigma_0} ) = \frac{- Q^2(U_\varepsilon)}{ \displaystyle{4\int_{\mathbb R^2} V(|U_\varepsilon|^2) \ dx}} . 
$$ Using (\ref{V}) and (\ref{Ident}) we find $$ \begin{array}{rcl} \displaystyle \int_{\mathbb R^2} V(|U_\varepsilon|^2) \ dx & = & \displaystyle {\mathfrak c}_s^2 r_0^2 \varepsilon \int_{\mathbb R^2} A^2 + \varepsilon^2 \Big( \frac{\Gamma}{3} - 1 \Big) A^3 + \frac{1}{\varepsilon^4} V_4 ( \varepsilon^2 A ) \ dz \\ \\ & = & \displaystyle\frac 32 {\mathfrak c}_s ^4 r_0 ^2 \mathscr{S}(A) \varepsilon - {\mathfrak c}_s ^2 r_0 ^2 \Big( \frac{\Gamma}{3} - 1 \Big) \frac{ 6}{\Gamma} \mathscr{S}(A) \varepsilon ^3 + \mathcal{O}(\varepsilon^5) \end{array} $$ and \begin{align*} Q(U_\varepsilon) = - \varepsilon r_0^2 {\mathfrak c}_s \int_{\mathbb R^2} \Big( 2 A^2 + \varepsilon^2 A^3 \Big) \ dz = -3 r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}(A) \varepsilon + r_0 ^2 {\mathfrak c}_s \frac{ 6}{\Gamma} \mathscr{S}(A) \varepsilon^3. \end{align*} Finally we obtain $$ \begin{array}{l} \displaystyle I_{\rm min} (k_\varepsilon) +\displaystyle \frac{ k_{\varepsilon}}{{\mathfrak c}_s ^2} \leq \frac{- \displaystyle Q^2(U_\varepsilon)}{ \displaystyle{4 \int_{\mathbb R^2} V(|U_\varepsilon|^2) \ dx}} + \frac{1}{{\mathfrak c}_s^2} \displaystyle \int_{\mathbb R^2} |\nabla U_{\varepsilon} |^2 \, dx \\ \\ \displaystyle = - \frac{\left(-3 {\mathfrak c}_s ^2 + \frac{6}{\Gamma} \varepsilon^2 \right) ^2 r_0 ^4 {\mathfrak c}_s ^2 \mathscr{S}^2(A) \varepsilon^2} {4 \Big[ \frac 32 {\mathfrak c}_s ^2 - \Big( 2 - \frac{6}{\Gamma} \Big) \varepsilon^2 + \mathcal{O}(\varepsilon^4) \Big]r_0 ^2 {\mathfrak c}_s ^2 \mathscr{S}(A) \varepsilon } + \left[ \frac 32 r_0 ^2 {\mathfrak c}_s ^2 \varepsilon + r_0 ^2 \Big( \frac 32 - \frac{12}{\Gamma} \Big) \varepsilon^3 + \mathcal{O}( \varepsilon ^5 ) \right] \mathscr{S}(A) \\ \\ = \displaystyle - \frac{ \left(3 r_0 ^2 {\mathfrak c}_s ^2 \varepsilon ^3 + \mathcal{O}( \varepsilon ^5) \right) \mathscr{S}(A) } {2 \left[3 {\mathfrak c}_s ^2 - \left( 4 - \frac{12}{\Gamma} \right) \varepsilon^2 + \mathcal{O}( \varepsilon^4) \right] } = - \frac 12 r_0 ^2 \mathscr{S}(A) \varepsilon ^3 
+ \mathcal{O}(\varepsilon^5) \\ \\ \displaystyle = - \frac 12 r_0 ^2 \mathscr{S}(A) \left[ \frac{ k_{\varepsilon}}{\frac 32 r_0 ^2 {\mathfrak c}_s ^4 \mathscr{S}(A) } + \mathcal{O}(k_{\varepsilon}^3) \right]^3 + \mathcal{O} \left( \left(\frac{ k_{\varepsilon}}{\frac 32 r_0 ^2 {\mathfrak c}_s ^4 \mathscr{S}(A) } + \mathcal{O}(k_{\varepsilon}^3) \right)^5 \right) = \frac {-4k_{\varepsilon} ^3 }{27 r_0 ^4 {{\mathfrak c}_s } ^{12} \mathscr{S}_{\rm min}^2 } + \mathcal{O}(k_{\varepsilon}^5 ). \end{array} $$ Since $ \varepsilon \longmapsto k_{\varepsilon}$ is a diffeomorphism from $[0,\bar{\varepsilon}]$ onto $[0,\bar{k}]$, Proposition \ref{asympto} (i) is proven. \ $\Box$ \subsection{Proof of Proposition \ref{monoto}} Given a function $ f $ defined on $ \mathbb R^N$ and $ a, \, b > 0$, we denote $ f_{a, b}(x) = f( \frac{ x_1}{a}, \frac{ x_{\perp}}{b}).$ By Proposition 2.2 p. 1078 in \cite{M2}, any solution of (TW$_{c}$) belongs to $W_{loc}^{2, p }(\mathbb R^N)$ for all $ p \in [2, \infty)$, hence to $C^{1, \alpha }(\mathbb R^N)$ for all $ \alpha \in (0, 1)$. ($i$) Let $ U $ be a minimizer of $ E_c= E + cQ$ on $ \mathscr{C} _c$ (where $\mathscr{C} _c$ is as in \eqref{Cc}) such that $ U $ solves (TW$_c$). Then $U $ satisfies the Pohozaev identities \eqref{Pohozaev}. If $ Q( U ) > 0$, let $ \tilde{ U }(x) = U ( - x_1, x_{\perp})$, so that $ Q( \tilde{ U }) = - Q( U ) < 0$ and $ P_c( \tilde{ U } ) = P_c( U ) - 2 c Q( U ) = - 2cQ( U ) < 0$. Since for any function $ \phi \in {\mathcal E} $ we have \be \label{Pca} P_c( \phi _{a, 1}) = \frac 1a \int_{\mathbb R^N} \Big| \frac{ \partial \phi }{\partial x_1} \Big| ^2 \, dx + a \frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla_{x_{\perp}} \phi |^2 \, dx + c Q( \phi ) + a \int_{\mathbb R^N} V( |\phi |^2) \, dx, \ee we see that there is $ a_0 \in (0, 1) $ such that $ P_c( \tilde{U}_{a_0 ,1 }) = 0$.
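Let us briefly indicate the computation behind \eqref{Pca}. Since $ \phi_{a,1}(x) = \phi ( \frac{x_1}{a}, x_{\perp})$, the change of variables $ y_1 = \frac{x_1}{a}$ gives $ dx = a \, dy$ and $ \frac{\partial \phi_{a,1}}{\partial x_1}(x) = \frac 1a \frac{\partial \phi}{\partial x_1}(y)$, hence
$$
\int_{\mathbb R^N} \Big| \frac{ \partial \phi_{a,1} }{\partial x_1} \Big| ^2 \, dx = \frac 1a \int_{\mathbb R^N} \Big| \frac{ \partial \phi }{\partial x_1} \Big| ^2 \, dx , \qquad \int_{\mathbb R^N} |\nabla_{x_{\perp}} \phi_{a,1} |^2 \, dx = a \int_{\mathbb R^N} |\nabla_{x_{\perp}} \phi |^2 \, dx , \qquad \int_{\mathbb R^N} V( |\phi_{a,1} |^2) \, dx = a \int_{\mathbb R^N} V( |\phi |^2) \, dx ,
$$
while $ Q( \phi_{a,1}) = Q( \phi )$ because the momentum density contains exactly one derivative with respect to $x_1$, so the Jacobian $a$ is compensated by the factor $\frac 1a$.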
We infer that $$ T_c \leq E_c( \tilde{U}_{a_0, 1} ) = \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_{\perp}} \tilde{U}_{a_0, 1} |^2 \, dx = a_0 \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_{\perp}} U |^2 \, dx = a_0 E_c( U) = a_0 T_c, $$ contradicting the fact that $ T_c > 0$. Thus $ Q( U ) \leq 0$. Assume that $ Q( U ) = 0. $ From the identities (\ref{Pohozaev}) with $ Q( U ) = 0$ we get \be \label{pr} \displaystyle \int_{\mathbb R^N} \Big| \frac{ \partial U}{\partial x_1} \Big| ^2 \, dx = - \frac{1}{N-2} \int_{\mathbb R^N} V(|U |^2) \, dx \qquad \mbox{ and } \qquad \displaystyle \int_{\mathbb R^N} | \nabla _{x_{\perp}} U | ^2 \, dx = - \frac{N-1}{N-2} \int_{\mathbb R^N} V(|U |^2) \, dx . \ee Since $ U \in {\mathcal E} $ and $ U $ is not constant, necessarily $ \displaystyle \int_{\mathbb R^N} V(|U |^2) \, dx = - (N-2) \int_{\mathbb R^N} \Big| \frac{ \partial U}{\partial x_1} \Big| ^2 \, dx <0$ and this implies that the potential $V$ must achieve negative values. Then it follows from Theorem 2.1 p. 100 in \cite{brezis-lieb} that there is $ \tilde{\psi }_0 \in {\mathcal E} $ such that $\displaystyle \int_{\mathbb R^N} |\nabla \tilde{\psi}_0 |^2 \, dx = \inf \Big\{ \int_{\mathbb R^N} |\nabla \phi |^2 \, dx \; \Big| \; \phi \in \mathcal{E}, \; \int_{\mathbb R^N} V(|\phi |^2) \, dx = -1 \Big\}.$ Using Theorem 2.2 p. 102 in \cite{brezis-lieb} we see that there is $ \sigma > 0$ such that, denoting $ \psi _0 = (\tilde{\psi }_0)_{\sigma, \sigma}$ and $ - v_0 = \displaystyle \int_{\mathbb R^N} V(|\psi _0|^2) \, dx = - \sigma ^N$, we have $ \Delta \psi _0 + F( |\psi _0 |^2 ) \psi _0 = 0 $ in $ \mathbb R^N$. Hence $\psi _0 $ solves (TW$_0$) and $$ \displaystyle \int_{\mathbb R^N} |\nabla {\psi}_0 |^2 \, dx = \inf \Big\{ \int_{\mathbb R^N} |\nabla \phi |^2 \, dx \; \Big| \; \phi \in \mathcal{E}, \; \int_{\mathbb R^N} V(|\phi |^2) \, dx = - v_0 \Big\}. 
$$ Since all minimizers of this problem solve (TW$_{0}$) (after possibly rescaling), we know that they are $C^1$ in $\mathbb R^N$ and then Theorem 2 p. 314 in \cite{MarisARMA} implies that they are all radially symmetric (after translation). In particular, we have $Q (\psi _0 ) = 0 $ and $ \displaystyle \int_{\mathbb R^N} \Big| \frac{ \partial \psi _0}{\partial x_j} \Big| ^2 \, dx = \frac 1N \displaystyle \int_{\mathbb R^N} |\nabla \psi _0 | ^2 \, dx $ for $ j = 1, \dots, N$. By Lemma 2.4 p. 104 in \cite{brezis-lieb} we know that $ \psi _0 $ satisfies the Pohozaev identity $ \displaystyle \int_{\mathbb R^N} |\nabla \psi _0 | ^2 \, dx = \frac{ N}{N-2} v_0. $ It follows that $P_c( \psi _0 ) = 0 $, hence $ \psi _0 \in \mathscr{C}_c $ and we infer that $ E_c ( \psi _0 ) \geq T_c$, that is $ \displaystyle \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_{\perp}} \psi _0 |^2 \, dx \geq \frac{2}{N-1} \displaystyle \int_{\mathbb R^N} |\nabla_{x_{\perp}} U |^2 \, dx$. Taking into account \eqref{pr} and the radial symmetry of $\psi _0$, this gives $ v_0 \geq - \displaystyle \int_{\mathbb R^N} V( |U |^2 ) \, dx$. On the other hand, by scaling it is easy to see that $ \psi _0 $ is a minimizer of the functional $ \phi \longmapsto \| \nabla \phi \|_{L^2( \mathbb R^N)} ^2 $ in the set $ {\mathcal P} = \Big\{ \phi \in \mathcal{E} \; \Big| \; \displaystyle \int_{\mathbb R^N} |\nabla \phi | ^2 \, dx = - \frac{N}{N-2} \int_{\mathbb R^N} V(|\phi |^2) \, dx \Big\}$. By \eqref{pr} we have $ U \in {\mathcal P}$, hence $\| \nabla U \|_{L^2( \mathbb R^N)} ^2 \geq \| \nabla \psi _0\|_{L^2( \mathbb R^N)} ^2$ and consequently $ - \displaystyle \int_{\mathbb R^N} V( |U |^2 ) \, dx \geq v_0$.
Thus $\| \nabla U \|_{L^2( \mathbb R^N)} ^2 = \| \nabla \psi _0\|_{L^2( \mathbb R^N)} ^2$, $ \displaystyle \int_{\mathbb R^N} V( |U |^2 ) \, dx = \int_{\mathbb R^N} V( |\psi _0 |^2 ) \, dx$ and $ U $ minimizes $ \| \nabla \cdot \|_{L^2(\mathbb R^N)}^2$ in the set $\Big\{ \phi \in \mathcal{E} \; \Big| \; \displaystyle \int_{\mathbb R^N} V( |\phi |^2 ) \, dx = - v_0 \Big\}$. By Theorem 2.2 p. 103 in \cite{brezis-lieb}, $ U $ solves the equation $ \Delta U + \lambda F(|U |^2) U = 0 $ in $ {\mathcal D} '(\mathbb R^N)$ for some $ \lambda>0$ and using the Pohozaev identity associated to this equation we see that $ \lambda = 1$, hence $ U $ solves (TW$_0$). Since $U $ also solves (TW$_c$) for some $ c >0$ and $ \frac{\partial U}{\partial x_1}$ is continuous, subtracting the two equations gives $ i c \frac{\partial U}{\partial x_1} = 0$, hence $ \frac{\partial U}{\partial x_1} = 0 $ in $ \mathbb R^N$. Together with the fact that $ U \in \mathcal{E}$, this implies that $ U $ is constant, a contradiction. Therefore we cannot have $ Q( U ) = 0$ and we conclude that $Q(U ) < 0$. ($ii$) Fix $ c_0 \in (0, {\mathfrak c}_s)$ and let $ U_0 \in \mathcal{E}$ be a minimizer of $ E_{c_0}$ on $ \mathscr{C} _{c_0}$, as given by Theorem \ref{thM}. It follows from \eqref{Pca} that $P_c ((U_0)_{a, 1} ) = \frac 1a R_{c, U_0}(a), $ where \be \label{Rca} R_{c, U_0}(a) = \int_{\mathbb R^N} \Big| \frac{ \partial U_0}{\partial x_1} \Big|^2 \, dx + a c Q( U_0) + a^2 \left[ \frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla _{x_{\perp} } U_0 |^2 \, dx + \int_{\mathbb R^N} V(|U_0|^2) \, dx \right] \ee is a polynomial in $a$ of degree at most 2. It is clear that $R_{c, U_0} (0 ) > 0$, $R_{c_0, U_0} (1) = P_{c_0} (U_0) = 0 $ and for any $ c > c_0 $ we have $R_{c, U_0}(1) = P_{c_0}(U_0) + (c - c_0) Q(U_0) < 0 $ because $ Q(U_0) <0$. Hence there is a unique $ a(c) \in (0,1)$ such that $R_{c, U_0} (a(c)) = 0$, which means $P_{c}((U_0)_{a(c), 1}) =0$. 
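The identity $P_c((U_0)_{a,1}) = \frac 1a R_{c, U_0}(a)$ simply records the scaling of each term of $P_c$ under the dilation $x_1 \mapsto x_1/a$:
$$ \int_{\mathbb R^N} \Big| \frac{\partial (U_0)_{a,1}}{\partial x_1} \Big|^2 \, dx = \frac 1a \int_{\mathbb R^N} \Big| \frac{\partial U_0}{\partial x_1} \Big|^2 \, dx, \qquad Q((U_0)_{a,1}) = Q(U_0), $$
$$ \int_{\mathbb R^N} |\nabla_{x_{\perp}} (U_0)_{a,1}|^2 \, dx = a \int_{\mathbb R^N} |\nabla_{x_{\perp}} U_0|^2 \, dx, \qquad \int_{\mathbb R^N} V(|(U_0)_{a,1}|^2) \, dx = a \int_{\mathbb R^N} V(|U_0|^2) \, dx. $$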
We infer that \be \label{ac} T_c \leq E_c( (U_0)_{a(c), 1}) = \frac{2}{N-1} \int_{\mathbb R^N} |\nabla _{x_{\perp} } (U_0)_{a(c), 1} |^2 \, dx = a(c) \frac{2}{N-1} \int_{\mathbb R^N} |\nabla _{x_{\perp} } U_0 |^2 \, dx = a(c) T_{c_0} . \ee Since $ a(c) \in (0,1)$, we have proved that $T_c < T_{c_0}$ whenever $ c_0 \in (0, {\mathfrak c}_s ) $ and $ c \in (c_0, {\mathfrak c}_s)$, thus $ c \longmapsto T_c$ is decreasing. By a well-known result of Lebesgue, the function $ c \longmapsto T_c $ has a derivative a.e. ($iii$) Notice that \eqref{ac} holds whenever $c_0$, $U_0$ are as above and $a(c)$ is a positive root of $R_{c, U_0}$. Using the Pohozaev identities \eqref{Pohozaev} we find $$ 2\int_{\mathbb R^N} \Big| \frac{ \partial U_0}{\partial x_1} \Big|^2 \, dx = \frac{2}{N-1} \int_{\mathbb R^N} |\nabla _{x_{\perp} } U_0 |^2 \, dx - c_0 Q (U_0) = T_{c_0} - c_0 Q( U_0) \qquad \mbox{ and then} $$ \be \label{ps} \frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla _{x_{\perp} } U_0 |^2 \, dx + \int_{\mathbb R^N} V(|U_0|^2) \, dx = - c_0 Q( U_0) - \int_{\mathbb R^N} \Big| \frac{ \partial U_0}{\partial x_1} \Big|^2 \, dx = - \frac 12 c_0 Q(U_0) - \frac 12 T_{c_0} . \ee We now distinguish two cases: $R_{c, U_0}$ has degree one or two. Case $(a)$: If $ \displaystyle \frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla _{x_{\perp} } U_0 |^2 \, dx + \int_{\mathbb R^N} V(|U_0|^2) \, dx = 0$, then $R_{c, U_0}$ has degree one and we have $\displaystyle \int_{\mathbb R^N} \Big| \frac{ \partial U_0}{\partial x_1} \Big|^2 \, dx + c_0 Q( U_0) =0$ because $ P_{c_0}(U_0) = 0$. Since $R_{c, U_0}$ is an affine function, we find $a(c) = \frac{ c_0}{c}$ for all $ c > 0$, hence $ a( c_0) = 1$. Moreover, the left-hand side in \eqref{ps} is zero, thus we have $ c_0 Q(U_0) + T_{c_0} = 0$ and consequently $ a'( c_0) = - \frac{ 1}{c_0} = \frac{Q(U_0)}{T_{c_0}}$. 
Case $(b)$: If $ \displaystyle \frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla _{x_{\perp} } U_0 |^2 \, dx + \int_{\mathbb R^N} V(|U_0|^2) \, dx \not = 0$, $ R_{c, U_0} $ has degree two, and the discriminant of this second-order polynomial is equal to $$ \Delta _{c, U_0} = ( c^2 - c_0 ^2) Q^2 ( U_0) + T_{c_0}^2. $$ Consequently $R_{c, U_0}$ has real roots as long as $ ( c^2 - c_0 ^2) Q^2 ( U_0) + T_{c_0}^2 \geq 0$. It is easy to see that if there are real roots, at least one of them is positive. Indeed, $R_{c, U_0}(0) > 0 > R_{c, U_0}'(0) $. If $ \Delta _{c, U_0} \geq 0 $, regardless of the sign of the leading coefficient $ \frac{N-3}{N-1} \int_{\mathbb R^N} |\nabla _{x_{\perp} } U_0 |^2 \, dx + \int_{\mathbb R^N} V(|U_0|^2) \, dx $, the smallest positive root $a(c)$ of $ R_{c, U_0} $ is given by the formula \be \label{root} a(c) = \frac{- c Q( U_0) - \sqrt{ ( c^2 - c_0 ^2) Q^2 ( U_0) + T_{c_0}^2}}{- c_0 Q(U_0) - T_{c_0}} = \frac{ - c_0 Q( U_0) + T_{c_0}}{- cQ(U_0) + \sqrt{ ( c^2 - c_0 ^2) Q^2 ( U_0) + T_{c_0}^2}} . \ee Therefore, the function $ c \longmapsto a(c)$ is defined on the interval $ [\tilde{c}_0, \infty )$ where $ \tilde{c}_0 = \sqrt{ c_0 ^2 - \frac{ T_{c_0}^2}{Q^2(U_0)}}< c_0$, it is differentiable on $ (\tilde{c}_0, \infty )$ and $ a( c_0) = 1$. Moreover, a straightforward computation gives $ a'( c_0) = \frac{Q(U_0)}{T_{c_0}}$. Note that in Case $(a)$, the last expression in \eqref{root} is equal to $ \frac{c_0}{c} $, which is then indeed $ a(c)$. By \eqref{ac} we have $ T_c \leq a(c) T_{c_0}$ and passing to the limit we get $ \displaystyle \lim_{c \to c_0,\, c < c_0} T_c \leq \lim_{c \to c_0, \, c < c_0} a(c) T_{c_0} = T_{c_0}$. Since $ c \longmapsto T_c$ is decreasing, $T_c > T_{c_0} $ for $ c < c_0 $ and we see that it is left continuous at $ c_0$. 
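For completeness, here are the two computations used in Case $(b)$. Writing $R_{c, U_0}(a) = A + a c Q(U_0) + a^2 B$, the identities above give $A = \displaystyle \int_{\mathbb R^N} \Big| \frac{\partial U_0}{\partial x_1} \Big|^2 \, dx = \frac 12 \big( T_{c_0} - c_0 Q(U_0) \big)$ and, by \eqref{ps}, $B = - \frac 12 \big( c_0 Q(U_0) + T_{c_0} \big)$, hence
$$ \Delta_{c, U_0} = c^2 Q^2(U_0) - 4 A B = c^2 Q^2(U_0) + \big( T_{c_0} - c_0 Q(U_0) \big) \big( T_{c_0} + c_0 Q(U_0) \big) = (c^2 - c_0^2) Q^2(U_0) + T_{c_0}^2 . $$
Moreover, differentiating the last expression in \eqref{root} and using that the square root equals $T_{c_0}$ at $c = c_0$, we find
$$ a'(c_0) = - \frac{ - c_0 Q(U_0) + T_{c_0}}{\big( - c_0 Q(U_0) + T_{c_0} \big)^2} \Big( - Q(U_0) + \frac{c_0 Q^2(U_0)}{T_{c_0}} \Big) = \frac{Q(U_0) \big( T_{c_0} - c_0 Q(U_0) \big)}{T_{c_0} \big( T_{c_0} - c_0 Q(U_0) \big)} = \frac{Q(U_0)}{T_{c_0}} . $$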
Moreover, we have $$ \frac{ T_c - T_{c_0}}{c - c_0 } \leq \frac{ a(c) - a(c_0)}{c - c_0} T_{c_0 } \quad \mbox{ for } c > c_0, \quad \mbox{ respectively } \quad \frac{ T_c - T_{c_0}}{c - c_0 } \geq \frac{ a(c) - a(c_0)}{c - c_0} T_{c_0 } \quad \mbox{ for } c \in [\tilde{c}_0, c_0). $$ Passing to the limit in the above inequalities we obtain, since $ a'( c_0) = \frac{Q(U_0)}{T_{c_0}}$ in Cases $(a)$ and $(b)$, $$ \limsup_{ c \to c_0, \, c > c_0} \frac{ T_c - T_{c_0}}{c - c_0 } \leq a'( c_0) T_{c_0} = Q(U_0), \qquad \mbox{ respectively } \qquad \liminf_{ c \to c_0, \, c < c_0} \frac{ T_c - T_{c_0}}{c - c_0 } \geq a'( c_0) T_{c_0} = Q(U_0). $$ It is then clear that if $ c \longmapsto T_c$ is differentiable at $ c_0$, necessarily $ \displaystyle \frac{d T_c}{dc}_{|c=c_0} = Q(U_0) .$ ($iv$) Fix $ c_* \in ( c_0, {\mathfrak c}_s)$. Passing to a subsequence we may assume that $ c_0 < c_n < c_*$ for all $n$ and $ Q( U_n ) \to - q _0 \leq 0$. Then $ T_{c_0 } > T_{c_n} > T_{c_*} > 0 $ and $ (c_0 ^2 - c_n ^2) Q^2( U_n) + T_{c_n}^2 > ( c_0 ^2 - c_n ^2) Q^2( U_n) + T_{c_*}^2 > 0$ for all sufficiently large $n$. Hence for large $n$ we may use \eqref{ac} and \eqref{root} with $( c_n, c_0)$ instead of $ ( c_0, c)$ and we get $$ T_{c_0} \leq \frac{ - c_n Q( U_n) + T_{c_n}}{- c_0 Q(U_n) + \sqrt{ ( c_0^2 - c_n ^2) Q^2 ( U_n) + T_{c_n}^2}} T_{c_n}. $$ Since $T_{c_n}$ has a positive limit, passing to the limit as $ n \to \infty $ in the above inequality and using the monotonicity of $ c \longmapsto T_c$ we get $ \displaystyle T_{c_0} \leq \liminf_{ n \to \infty} T_{c_n} = \liminf_{c \to c_0, \, c > c_0} T_c$. This and the fact that $ T_c$ is decreasing and left continuous imply that $T_c$ is continuous at $ c_0$. ($v$) Let $ 0 < c_1 < c_2 < {\mathfrak c}_s $ and $U_1, \; U_2,$ $ q_1 = Q(U_1)< 0 $, $ q_2 = Q(U_2) < 0$ be as in Proposition \ref{monoto} ($v$). If $ c_1 ^2 \leq c_2 ^2 - \frac {T_{c_2}^2}{q_2^2}$, the inequality in Proposition \ref{monoto} ($v$) obviously holds. 
From now on we assume that $ c_1 ^2 > c_2 ^2 - \frac {T_{c_2}^2}{q_2^2} $. The two discriminants $ \Delta_{c_2 , U_1} = ( c_2^2 - c_1^2 ) q_1^2 + T_{c_1}^2 $ and $ \Delta_{c_1 , U_2} = ( c_1^2 - c_2^2 ) q_2^2 + T_{c_2}^2 $ are positive: since $ 0 < c_1 < c_2 $ for the first one, and by the assumption $ c_1 ^2 > c_2 ^2 - \frac {T_{c_2}^2}{q_2^2}$ for the second one. Therefore, we may use \eqref{ac} and \eqref{root} with the couples $(c_1, c_2)$, respectively $(c_2, c_1)$ instead of $(c_0, c)$ to get $$ T_{c_2} \leq \frac{ - c_1 q_1 + T_{c_1}}{- c_2 q_1 + \sqrt{ (c_2 ^2 - c_1 ^2 ) q_1 ^2 + T_{c_1}^2}} T_{c_1}, \qquad \mbox{ respectively } \qquad T_{c_1} \leq \frac{ - c_2 q_2 + T_{c_2}}{ - c_1 q_2 + \sqrt{ (c_1 ^2 - c_2 ^2 ) q_2 ^2 + T_{c_2}^2}} T_{c_2}. $$ Since $ T_{c_i} > 0 $, we must have $$ \frac{ - c_1 q_1 + T_{c_1}}{ - c_2 q_1 + \sqrt{ (c_2 ^2 - c_1 ^2 ) q_1 ^2 + T_{c_1}^2}} \cdot \frac{ - c_2 q_2 + T_{c_2}}{ - c_1 q_2 + \sqrt{ (c_1 ^2 - c_2 ^2 ) q_2 ^2 + T_{c_2}^2}} \geq 1. $$ We set $ y_1 = - \frac{ T_{c_1}}{c_1 q_1} > 0 $, and recast this inequality as \be \label{ineqmagique} \frac{ 1 + y_1}{\frac{c_2}{c_1} + \sqrt{ \frac{ c_2^2}{c_1^2} - 1 + y_1^2}} \geq \frac{ - c_1 q_2 + \sqrt{ (c_1 ^2 - c_2 ^2 ) q_2 ^2 + T_{c_2}^2}}{ - c_2 q_2 + T_{c_2}} = \frac{ 1 + \sqrt{ 1 - \frac{c_2^2}{c_1^2} + \frac{T_{c_2}^2}{c_1^2 q_2^2} }}{ \frac{c_2}{c_1} - \frac{T_{c_2}}{c_1 q_2} } . \ee Denoting, for $y \in \mathbb R$, $ g (y) = \displaystyle \frac{ 1 + y}{ \frac{c_2}{c_1} + \sqrt{ \frac{c_2^2}{c_1^2} - 1 + y^2}} $, \eqref{ineqmagique} is exactly $$ g \Big( - \frac{ T_{c_1}}{c_1 q_1} \Big) = g (y_1) \geq g \Big( \sqrt{ 1 - \frac{c_2^2}{c_1^2} + \frac{T_{c_2}^2}{c_1^2 q_2^2} } \Big) . 
$$ If we show that $g$ is increasing, then we obtain $$ - \frac{ T_{c_1}}{c_1 q_1} \geq \sqrt{ 1 - \frac{c_2^2}{c_1^2} + \frac{T_{c_2}^2}{c_1^2 q_2^2} } , \quad \quad \quad {\rm or} \quad \quad \quad \frac{ T_{c_1}^2}{q_1^2} - c_1 ^2 \geq \frac{ T_{c_2}^2}{q_2^2} - c_2 ^2 , $$ which is the desired inequality. To check that $g$ is increasing, we simply compute $$ g' (y) = \displaystyle \frac{ \displaystyle \frac{c_2^2}{c_1^2} - 1 + \displaystyle \frac{c_2}{c_1} \sqrt{ \displaystyle \frac{c_2^2}{c_1^2} - 1 + y^2} - y}{ \Big( \displaystyle \frac{c_2}{c_1} + \sqrt{ \frac{c_2^2}{c_1^2} - 1 + y^2} \Big)^2 \sqrt{ \frac{c_2^2}{c_1^2} - 1 + y^2}} , $$ which is positive since $ \frac{c_2}{c_1} > 1 $ and $ \sqrt{ \frac{c_2^2}{c_1^2} - 1 + y^2} > |y| $. ($vi$) Since $ c \longmapsto - T_c$ is increasing, by a well-known result of Lebesgue this map is differentiable a.e., the function $ c \longmapsto \frac{ d T_c}{d c}$ belongs to $L_{loc}^1( 0, {\mathfrak c}_s)$ and for any $ 0 < c_1 < c_2 < {\mathfrak c}_s $ we have $ \displaystyle \int_ {c_1 }^{c_2} - \frac{ d T_c}{d c} \, dc \leq -T_{c_2} + T_{c_1}. $ We recall that $ c( \varepsilon ) = \sqrt{ {\mathfrak c}_s ^2 - \varepsilon^2}$ for all $ \varepsilon \in ( 0, {\mathfrak c}_s )$. If $N=3$, (A2) and (A4) hold and $ \Gamma \neq 0$, by Proposition \ref{asympto} ($ii$) there is $ K > 0 $ such that $T_{c(\varepsilon)} \leq K \varepsilon $ for all sufficiently small $ \varepsilon$. Thus for $ n \in \mathbb N$ large we have $$ \int_{c(2/n)}^{c(1/n)} - \frac{d T_c}{dc} \ dc \leq T_{c(2/n)} - T_{c(1/n)} \leq T_{c(2/n)} \leq \frac{2K}{n} . $$ Hence there exists $ c_n \in (c(2/n) , c(1/n))$ such that $c \mapsto T_c $ is differentiable at $ c_n $ and $$ - \frac{d T_c}{dc}_{|c=c_n} \leq \frac{1}{c(\frac 1n)- c(\frac 2n) } \cdot \frac{2K}{n} \leq K' n . $$ Let $\varepsilon_n = \sqrt{{\mathfrak c}_s^2 - c_n^2} $, so that $c(\varepsilon_n) = c_n$. 
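In the last estimate we used that the gap $c(\frac 1n) - c(\frac 2n)$ is of order $n^{-2}$: indeed,
$$ c\Big( \frac 1n \Big) - c\Big( \frac 2n \Big) = \frac{c^2(\frac 1n) - c^2(\frac 2n)}{c(\frac 1n) + c(\frac 2n)} = \frac{3}{n^2 \big( c(\frac 1n) + c(\frac 2n) \big)} \geq \frac{3}{2 {\mathfrak c}_s n^2}, $$
so that $\displaystyle \frac{1}{c(\frac 1n) - c(\frac 2n)} \cdot \frac{2K}{n} \leq \frac{4 K {\mathfrak c}_s}{3}\, n$, and we may take $K' = \frac{4 K {\mathfrak c}_s}{3}$.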
Since $ c(2/n) \leq c_n \leq c(1/n) $, we have $ \frac1n \leq \varepsilon_n \leq \frac2n , $ so that $ \varepsilon _n \to 0 $ as $ n \to \infty$. Let $U_n$ be a minimizer of $ E_{c_n} $ on $ \mathscr{C} _{c_n}$, scaled so that $U_n$ solves (TW$_{c_n}$). From ($i$) and ($iii$) we get $$ |Q(U_n) | = - Q( U_n) = - \frac{d T_c}{dc}_{|c=c_n } \leq K'n \leq \frac{ 2K'}{\varepsilon _n} . $$ Since $ E(U_n) + c_n Q(U_n) = T_{c_n } = \mathcal{O}(\varepsilon_n) $, it follows that $$ E(U_n)\leq - c_n Q( U_n) + T_{c_n} \leq \frac{K''}{ \varepsilon_n} $$ and the proof is complete. \ $\Box$ \subsection{Proof of Proposition \ref{global3}} We postpone the proof of Proposition \ref{convergence} and we prove Proposition \ref{global3}. Let $ (\varepsilon_n)_{n \geq 1}$ be the sequence given by Proposition \ref{monoto} ($vi$). For each $n$ let $ U_n \in \mathcal{E}$ be a minimizer of $ E_{c_n}$ on $ \mathscr{C} _{c_n}$ which solves (TW$_{c_n}$). Passing to a subsequence if necessary and using Proposition \ref{convergence}, we may assume that $ (\varepsilon_n)_{n \geq 1}$ is strictly decreasing, that $(\varepsilon _n , U_n)_{n \geq 1}$ satisfies the conclusion of Theorem \ref{res1} and \be \label{en} \frac 12 r_0 ^2 {\mathfrak c}_s ^4 \mathscr{S}_{\rm min} \frac{ 1}{\varepsilon _n } < E( U_n) < 2 r_0 ^2 {\mathfrak c}_s ^4 \mathscr{S}_{\rm min} \frac{ 1}{\varepsilon _n }, \ee \be \label{mom} \frac 12 r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}_{\rm min} \frac{ 1}{\varepsilon _n } < - Q( U_n) < 2 r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}_{\rm min} \frac{ 1}{\varepsilon _n } \qquad \mbox{ for all } n. \ee We shall argue by contradiction. 
More precisely, we shall prove by contradiction that there exists $ \varepsilon _* > 0 $ such that for any $ \varepsilon \in ( 0, \varepsilon _* )$ and for any minimizer $U$ of $E_{c(\varepsilon)} $ on $ \mathscr{C}_{c (\varepsilon)}$ scaled so that $U$ satisfies (TW$_{c(\varepsilon )}$), we have $$ |Q(U) | \leq \frac{5r_0^2 {\mathfrak c}_s^3 \mathscr{S}_{\rm min}}{\varepsilon} . $$ In view of Proposition \ref{asympto} ($ii$), we then infer that $$ E(U) = T_{c(\varepsilon)} - c(\varepsilon) Q(U) \leq \frac{K}{\varepsilon} $$ for some constant $ K $ depending only on $ r_0$, $ {\mathfrak c}_s $ and $ \mathscr{S}_{\rm min} $, which is the desired result. We thus assume that there exist infinitely many $n$'s such that there is $ \tilde{\varepsilon} _n \in ( \varepsilon _n, \varepsilon _{n-1})$ and there is a minimizer $ \tilde{U}_n $ of $ E_{c( \tilde{\varepsilon}_n)}$ on $ \mathscr{C} _{c( \tilde{\varepsilon}_n)}$ which satisfies (TW$_{c( \tilde{\varepsilon}_n)}$) and \be \label{mauvais} |Q(\tilde{U}_n )| = - Q(\tilde{U}_n ) > 5 r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}_{\rm min} \frac{ 1}{\tilde{\varepsilon} _n }. \ee Passing again to a subsequence of $(\varepsilon_n)_{n \geq 1}$, we may assume that \eqref{mauvais} holds for all $ n \geq 1$. Then for each $ n \in \mathbb N^*$ we define $$ \begin{array}{rcl} I_n & = & \Big\{ \varepsilon \in ( \varepsilon _n, \varepsilon_{n-1}) \; \Big| \; \mbox{ for all } \varepsilon ' \in [ \varepsilon_n, \varepsilon ] \mbox{ and for any minimizer } U_{\varepsilon'} \mbox{ of } E_{c(\varepsilon')} \mbox{ on } \mathscr{C} _{c(\varepsilon')} \\ & & \mbox{ which solves (TW$_{c(\varepsilon')}$) there holds } | Q( U_{\varepsilon'})| \leq 4 r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}_{\rm min} \cdot \frac{ 1}{ \varepsilon' } \Big\} \end{array} $$ and $$ \varepsilon_n^{\#} = \sup I_n. 
$$ By Proposition \ref{monoto} ($v$), for $ \varepsilon'\in ( \varepsilon _n , {\mathfrak c}_s)$ and for any minimizer $ U_{\varepsilon'}$ of $E_{c(\varepsilon')}$ on $ \mathscr{C} _{c(\varepsilon')} $ which solves (TW$_{c(\varepsilon')}$) we have $$ \frac{ T_{c( \varepsilon')}^2}{Q^2(U_{\varepsilon'})} + (\varepsilon')^2 \geq \frac{ T_{c( \varepsilon _n)}^2}{Q^2(U_{n})} + \varepsilon_n ^2, $$ which can be written as $\displaystyle \frac{Q^2(U_{\varepsilon'})}{ T_{c( \varepsilon')}^2} \leq \frac{Q^2(U_{n})}{T_{c( \varepsilon _n)}^2 + ( \varepsilon _n ^2 - (\varepsilon ')^2) Q^2( U_n) } \; $ and this gives \be \label{estimate1} (\varepsilon')^2 Q^2(U_{\varepsilon'}) \leq \frac{(\varepsilon')^2 Q^2(U_{n}) T_{c( \varepsilon')}^2}{T_{c( \varepsilon _n)}^2 + ( \varepsilon _n ^2 - (\varepsilon ')^2) Q^2( U_n) }. \ee The mapping $ \varepsilon \longmapsto T_{c(\varepsilon)}$ is right continuous (because $ c \longmapsto T_c$ is left continuous) and using \eqref{mom} we find $$ \lim_{\varepsilon'\to \varepsilon_n, \, \varepsilon'> \varepsilon _n} \frac{(\varepsilon')^2 Q^2(U_{n}) T_{c( \varepsilon')}^2}{T_{c( \varepsilon _n)}^2 + ( \varepsilon _n ^2 - (\varepsilon ')^2) Q^2( U_n) } = \varepsilon_n ^2 Q^2 ( U_n) < ( 2 r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}_{\rm min} )^2. $$ Thus all $ \varepsilon '\in ( \varepsilon_n, \varepsilon_{n-1}) $ sufficiently close to $ \varepsilon_n$ belong to $I_n$. In particular, $ I_n$ is not empty. On the other hand, \eqref{mauvais} implies that any $ \varepsilon'\in ( \tilde{\varepsilon }_n , \varepsilon_{n-1}) $ does not belong to $ I_n$, hence $\varepsilon_n ^{\#} = \sup I_n \in ( \varepsilon _n, \tilde{\varepsilon}_n ]\subset ( \varepsilon_n, \varepsilon_{n-1}).$ Let $ U_n^{\#}$ be a minimizer of $E_{c( \varepsilon_n^{\#})}$ on $\mathscr{C} _{c( \varepsilon_n^{\#})} $ which solves (TW$_{c( \varepsilon_n^{\#})}$). 
We claim that \be \label{claim1} | Q(U_n^{\#}) | = 4 r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}_{\rm min} \frac{1}{\varepsilon_n^{\#} } . \ee Indeed, proceeding as in \eqref{estimate1} we have for any $ \varepsilon'\in ( \varepsilon_n, \varepsilon_n^{\#})$ and any minimizer $ U_{\varepsilon'} $ of $E_{c(\varepsilon')}$ on $ \mathscr{C} _{c(\varepsilon')} $ which satisfies (TW$_{c(\varepsilon')}$) \be \label{estimate2} (\varepsilon_n^{\#})^2 Q^2(U_n^{\#}) \leq \frac{\left( \frac{\varepsilon_n^{\#}}{\varepsilon' } \right)^2 (\varepsilon')^2 Q^2(U_{\varepsilon '}) T_{c( \varepsilon_n^{\#})}^2} {T_{c( \varepsilon ')}^2 + \left( 1 - \left( \frac{\varepsilon_n^{\#}}{\varepsilon'} \right)^2\right)(\varepsilon ')^2 Q^2( U_{\varepsilon'}) }. \ee Notice that $ (\varepsilon')^2 Q^2(U_{\varepsilon '}) \leq ( 4 r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}_{\rm min} )^2 $ because $ \varepsilon' \in I_n$. In particular, $ Q(U_{\varepsilon'})$ is bounded for $ \varepsilon' \in ( \varepsilon_n, \, \varepsilon_n^{\#}).$ Since $ c( \varepsilon') \searrow c( \varepsilon_n^{\#}) $ as $ \varepsilon' \nearrow \varepsilon_n^{\#}$, Proposition \ref{monoto} ($iv$) implies that $ c \longmapsto T_c$ is continuous at $c( \varepsilon_n^{\#})$. Then passing to $\displaystyle \liminf $ as $ \varepsilon' \nearrow \varepsilon_n^{\#}$ in \eqref{estimate2} we get $(\varepsilon_n^{\#})^2 Q^2(U_n^{\#}) \leq ( 4 r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}_{\rm min} )^2 $. We conclude that $ \varepsilon_n^{\#} \in I_n$. Next, for any $ \varepsilon' \in ( \varepsilon_n^{\#}, {\mathfrak c}_s )$ and any minimizer $ U_{\varepsilon'} $ of $E_{c(\varepsilon')}$ on $ \mathscr{C} _{c(\varepsilon')} $ that solves (TW$_{c(\varepsilon')}$), inequality \eqref{estimate1} holds with $ \varepsilon_n^{\#} $ and $ U_n^{\#}$ instead of $ \varepsilon_n$ and $ U_n$, respectively. The limit of the right-hand side as $ \varepsilon' \searrow \varepsilon _n^{\#} $ is $(\varepsilon_n^{\#})^2 Q^2(U_n^{\#})$. 
If $\varepsilon_n^{\#} | Q(U_n^{\#}) | < 4 r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}_{\rm min} $, as above we infer that there is $ \delta_n > 0 $ such that $[ \varepsilon_n^{\#}, \varepsilon_n^{\#} + \delta _n ] \subset I_n$, contradicting the fact that $ \varepsilon_n^{\#} = \sup I_n$. The claim \eqref{claim1} is thus proved. Now we turn our attention to the sequence $(\varepsilon_n^{\#}, U_n^{\#})_{n\geq 1}$. It is clear that $ \varepsilon_n ^{\#} \to 0 $ (because $ \varepsilon_n^{\#} \in (\varepsilon_n, \varepsilon_{n-1})$). By Proposition \ref{asympto} ($ii$) there is $ K > 0 $ such that $$ E ( U_n^{\#} ) + c(\varepsilon_n^{\#}) Q ( U_n^{\#} ) = E_{c(\varepsilon_n^{\#})} ( U_n^{\#} )= T_{c(\varepsilon_n^{\#})} \leq K \varepsilon_n^{\#} $$ and using \eqref{claim1} we find $ | E ( U_n^{\#} )| \leq \frac{K'}{\varepsilon_n^{\#} }$ for some constant $ K'> 0 $ and for all $n$ sufficiently large. Hence we may use Proposition \ref{convergence} and we infer that there is a subsequence $(\varepsilon_{n_k}^{\#}, U_{n_k}^{\#})_{k\geq 1}$ which satisfies the conclusion of Theorem \ref{res1}. In particular, we have $$ \lim_{k \to \infty} \varepsilon_{n_k}^{\#} | Q( U_{n_k}^{\#}) | = r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}_{\rm min} $$ and this contradicts the fact that $U_{n_k}^{\#} $ satisfies \eqref{claim1}. Proposition \ref{global3} is thus proven. \ $\Box$ \subsection{Proof of Proposition \ref{lifting}} \label{preuvelifting} ($i$) Since $ U \in \mathcal{E}$, we have $ |U| - r_0 \in H^1( \mathbb R^N)$ (see the Introduction of \cite{CM1}) and then $ \Big| \displaystyle \frac{ \partial }{\partial x_i} ( |U | - r_0 ) \Big| \leq \Big| \frac{ \partial U}{\partial x_i} \Big| $ a.e. in $ \mathbb R^N$. It is well-known (see, for instance, \cite{brezis} p. 164) that for any $ \phi \in H^1( \mathbb R^N)$ there holds $$ \| \phi \|_{L^{2^*}(\mathbb R^N)} \leq C_S \prod _{i =1}^N \Big\| \frac{\partial \phi}{\partial x_i} \Big\| _{L^2( \mathbb R^N)}^{\frac 1N}. 
$$ We infer that \be \label{sobo} \| \, |U| - r_0 \|_{L^{2^*}(\mathbb R^N)} \leq C_S \prod _{i =1}^N \Big\| \frac{\partial U}{\partial x_i} \Big\| _{L^2( \mathbb R^N)}^{\frac 1N} \leq C_S \Big\| \frac{\partial U}{\partial x_1} \Big\| _{L^2( \mathbb R^N)}^{\frac 1N} \cdot \| \nabla _{x_{\perp}} U \| _{L^2( \mathbb R^N)}^{\frac{N-1}{N}}. \ee Assume first that (A2) holds. If $\Big\| \frac{\partial U}{\partial x_1} \Big\| _{L^2( \mathbb R^N)} \cdot \| \nabla _{x_{\perp}} U \| _{L^2( \mathbb R^N)}^{N-1} \leq 1$, from \eqref{sobo} we get $ \| \, |U| - r_0 \|_{L^{2^*}(\mathbb R^N)} \leq C_S$. Let $ \tilde{U} (x) = e^{-\frac{i c x_1}{2}} U(x) $. Then $ \tilde{U} \in H_{loc}^1( \mathbb R^N)$ and $ \tilde{U}$ solves the equation $$ \Delta \tilde{U} + \left( \frac{c^2}{4} + F(|\tilde{U} |^2) \right) \tilde{U} = 0 \qquad \mbox{ in } \mathbb R^N. $$ Since $ \| \tilde{U} \|_{L^{2^*} (B(x, 1))} \leq C$ for any $ x \in \mathbb R^N$ and for some constant $C>0$, using the above equation and a standard bootstrap argument (which works thanks to (A2)), we infer that $ \| \tilde{U} \|_{W^{2,p} (B(x, \frac{1}{2^{n_0}}))} \leq \tilde{C}_p$ for some $ n_0 \in \mathbb N$, $ \tilde{C}_p > 0$ and for any $ x \in \mathbb R^N$ and any $ p \in [2, \infty)$. This clearly implies $ \| {U} \|_{W^{2,p} (B(x, \frac{1}{2^{n_0}}))} \leq {C_p}$ for any $ x \in \mathbb R^N$ and any $ p \in [2, \infty)$. In particular, using the Sobolev embedding we see that there is $ L>0 $ (independent of $U$) such that $ \| \nabla U \|_{L^{\infty}( \mathbb R^N)} \leq L$. Fix $ \delta > 0$. 
If there is $ x_0 \in \mathbb R^N$ such that $ | \, |U(x_0)| - r_0 | \geq \delta$, we infer that $ | \, |U(x)| - r_0 | \geq \frac{\delta}{2}$ for any $ x \in B( x_0, \frac{ \delta}{2L})$ and consequently \be \label{low} \| \, |U| - r_0 \|_{L^{2^*}(\mathbb R^N)} \geq \frac{\delta}{2} \left({\mathcal L} ^N \left( B( x_0, \frac{ \delta}{2L}) \right) \right)^{\frac{1}{2^*}} = \frac{\delta}{2} \left(\frac{\delta}{2L} \right)^{\frac{N}{2^*}} \left({\mathcal L} ^N( B(0,1)) \right)^{\frac{1}{2^*}} . \ee Let $ \mu ( \delta) = \min \left( 1, \frac{\delta}{2} \left(\frac{\delta}{2L} \right)^{\frac{N}{2^*}} \left( {\mathcal L} ^N( B(0,1)) \right)^{\frac{1}{2^*}} \right).$ From \eqref{sobo} and \eqref{low} we infer that $|\, |U(x) | - r_0 | < \delta $ for any solution $ U \in \mathcal{E}$ of (TW$_c$) satisfying $ \Big\| \frac{\partial U}{\partial x_1} \Big\| _{L^2( \mathbb R^N)} \cdot \| \nabla _{x_{\perp}} U \| _{L^2( \mathbb R^N)}^{N-1} \leq \mu ( \delta).$ If (A3) holds, it follows from the proof of Proposition 2.2 p. 1078-1080 in \cite{M2} that there is $ L >0$, independent of $U$, such that $ \| \nabla U \|_{L^{\infty}( \mathbb R^N)} \leq L$. The rest of the proof is as above. ($ii$) By Proposition 2.2 p. 1078 in \cite{M2} we know that $ U \in W_{loc}^{2,p}(\mathbb R^N)$ for any $ p \in [2, \infty)$. In particular, $U \in C^1( \mathbb R^N)$. As in the proof of ($i$) we see that there is $ L > 0$, independent of $U$, such that $ \| \nabla U \|_{L^{\infty}( \mathbb R^N)} \leq L$. Fix $ \delta > 0$ and assume that there is $ x^0 = ( x_1^0, \dots, x_N^0)$ such that $ | \, |U(x^0)| - r_0 | \geq \delta$. Then we have $ | \, |U(x)| - r_0 | \geq \frac{\delta}{2}$ for any $ x \in B( x^0 , \frac{ \delta}{2L})$ and, in particular, $|\, | U(x_1, x_2^0, \dots, x_N^0)| - r_0 | \geq \frac{ \delta}{2}$ for any $ x_1 \in [x_1^0 - \frac{ \delta}{2L}, x_1^0 + \frac{ \delta}{2L}]$. 
We infer that $|\, | U(x_1, x_{\perp})| - r_0 | \geq \frac{\delta}{4} $ for any $ x_1 \in [x_1^0 - \frac{ \delta}{2L}, x_1^0 + \frac{ \delta}{2L}]$ and any $ x_{\perp} \in B_{\mathbb R^{N-1}} ( x_{\perp}^0, \frac{ \delta}{4L}).$ Consequently $$ \begin{array}{l} \| \, |U( x_1, \cdot ) | - r_0 \|_{L^{\frac{2(N-1)}{N-3}}(\mathbb R^{N-1})} \geq \frac{ \delta}{4} \left( {\mathcal L}^{N-1} \left( B_{\mathbb R^{N-1}}\left(x_{\perp}^0, \frac{\delta}{4L}\right) \right) \right)^{\frac{N-3}{2(N-1)}} \\ \\ \geq \frac{ \delta}{4} \left(\frac{\delta }{4L} \right)^{\frac{N-3}{2}} \left( {\mathcal L}^{N-1} ( B_{\mathbb R^{N-1}}(0, 1)) \right)^{\frac{N-3}{2(N-1)}} = K \delta ^{ \frac{N-1}{2}} \end{array} $$ for all $ x_1 \in [x_1^0 - \frac{ \delta}{2L}, x_1^0 + \frac{ \delta}{2L}]$. Using the Sobolev inequality in $ \mathbb R^{N-1}$ we get for $ x_1 \in \left[x_1^0 - \frac{ \delta}{2L}, x_1^0 + \frac{ \delta}{2L}\right], $ $$ \int_{\mathbb R^{N-1}} |\nabla_{x_{\perp}} U( x_1, x_{\perp}) |^2 \, dx _{\perp} \geq \frac{1}{\tilde{C}_S^2} \| \, |U( x_1, \cdot ) | - r_0 \|_{L^{\frac{2(N-1)}{N-3}}(\mathbb R^{N-1})} ^2 \geq \frac{K^2}{\tilde{C}_S^2} \delta ^{N-1} . $$ Integrating the above inequality on $[x_1^0 - \frac{ \delta}{2L}, x_1^0 + \frac{ \delta}{2L}]$ we obtain $ \| \nabla_{x_{\perp} } U \|_{ L^2 (\mathbb R^N )} ^2 \geq \frac{K^2}{L \tilde{C}_S^2} \delta ^{N} = K_1 \delta ^{N} .$ We conclude that if $\| \nabla_{x_{\perp} } U \|_{ L^2 (\mathbb R^N )} ^2 <\min(1, K_1 \delta ^{N}) $, then necessarily $ | \, |U| - r_0 | < \delta$ in $ \mathbb R^N$. 
\ $\Box$ \subsection{Proof of Proposition \ref{prop2d}} It follows from Lemma 4.1 in \cite{CM1} that there are $ k_0 > 0$, $ C_1, C_2 > 0$ such that for all $ \psi \in \mathcal{E}$ with $\displaystyle \int_{\mathbb R^2} | \nabla \psi |^2 \, dx \leq k_0$ we have \be \label{kifkifpot} C_1 \int_{\mathbb R^2} ( \chi^2(|\psi|) - r_0^2 )^2 \ dx \leq \int_{\mathbb R^2} V(|\psi|^2) \ dx \leq C_2 \int_{\mathbb R^2} ( \chi^2(|\psi|) - r_0^2 )^2 \ dx . \ee We recall that in space dimension two, nontrivial solutions $U_k$ to (TW$_c$) have been constructed in Theorem \ref{th2d} by considering the minimization problem $$ \mbox{ minimize } I(\psi) = Q( \psi ) + \int_{\mathbb R^2} V(| \psi |^2) \, dx \quad \mbox{ in } \mathcal{E} \; \mbox{ under the constraint } \int_{\mathbb R^2} | \nabla \psi |^2 \, dx = k. \eqno{({\mathcal I} _k)} $$ If $\mathcal{U}_k$ is a minimizer for $({\mathcal I} _k)$, there is $ c_k > 0$ such that $U_k = (\mathcal{U}_k)_{c_k, c_k}$ solves (TW$_{c_k}$) and minimizes $E_{c_k} = E + c_k Q$ in the set $ \Big\{ \psi \in \mathcal{E} \; \Big| \; \displaystyle \int_{\mathbb R^2} | \nabla \psi |^2 \, dx = k \Big\}$. Moreover, we have $ c_k \to {\mathfrak c}_s $ as $ k \to 0$. Lemma \ref{liftingfacile} implies that $|U_k| \to r_0$ uniformly on $ \mathbb R^2$ as $ k \to 0$; in particular, there is $ k_1 > 0 $ such that if $ k \in ( 0, k_1) $, we have $|U_k| \geq \frac{r_0}{2} $ in $ \mathbb R^2$. From the Pohozaev identities \eqref{Pohozaev} we get $c_k Q( U_k) + 2 \displaystyle \int_{\mathbb R^2} V(|U_k|^2 )\, dx = 0$, and this gives \be \label{scaling2} I_{\rm min}(k) = I( \mathcal{U} _k) = \frac{ 1}{c_k} Q(U_k) + \frac{ 1}{c_k^2 } \int_{\mathbb R^2} V(|U_k|^2 )\, dx = \frac{ 1}{2c_k} Q(U_k) = - \frac{ 1}{c_k^2 } \int_{\mathbb R^2} V(|U_k|^2 )\, dx. \ee By Lemma 5.2 in \cite{CM1} there is $ k_2 > 0 $ such that $ - \frac{2k}{{\mathfrak c}_s ^2} \leq I_{\rm min}(k) \leq - \frac{k}{{\mathfrak c}_s ^2} $ for all $ k \in (0, k_2)$. 
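We note that the identity $c_k Q(U_k) + 2 \displaystyle \int_{\mathbb R^2} V(|U_k|^2) \, dx = 0$ used above is the two-dimensional case of the combination of the Pohozaev identities recalled in the proof of Lemma \ref{tools} below, namely
$$ (N-2) \int_{\mathbb R^N} |\nabla U|^2 \, dx + N \int_{\mathbb R^N} V(|U|^2) \, dx + c (N-1) Q(U) = 0, $$
which for $N = 2$ reduces to $c Q(U) + 2 \displaystyle \int_{\mathbb R^2} V(|U|^2) \, dx = 0$.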
Since $ c_k \to {\mathfrak c}_s $ as $ k \to 0$, the estimates \eqref{estim2d} follow directly from \eqref{kifkifpot} and \eqref{scaling2}. It remains to prove \eqref{kifkif}. By Proposition \ref{asympto}, there is $ \mu _0 > 0 $ such that for $k$ sufficiently small we have $ I_{\rm min} (k) \leq - \frac{k}{{\mathfrak c}_s^2} - \mu_0 k^3 .$ By scaling we have $$ \frac{1}{c_k^2} \Big( E_{c_k}(U_k) - \int_{\mathbb R^2} |\nabla U_k|^2 \ dx \Big) = \frac{1}{c_k^2} \Big( c_k Q(U_k) + \int_{\mathbb R^2} V(|U_k|^2) \ dx \Big) = I(\mathcal{U}_k) = I_{\rm min}(k) \leq - \frac{k}{{\mathfrak c}_s^2} - \mu_0 k^3 . $$ Since ${\mathfrak c}_s^2 - c_k^2 = \varepsilon _k^2 $ and $\displaystyle \int_{\mathbb R^2} |\nabla U_k|^2 \ dx = k$, we get \be \label{ecusson} E_{c_k}(U_k) \leq k \Big( 1 - \frac{c _k^2}{{\mathfrak c}_s^2} \Big) - \mu_0 c_k^2 k^3 = \frac{k\varepsilon_k^2}{{\mathfrak c}_s^2} - \mu_0 c_k ^2 k^3 . \ee The second Pohozaev identity \eqref{Pohozaev} yields $E_{c_k}(U_k) = 2 \displaystyle \int_{\mathbb R^2} |\partial_{2} U_k |^2 \ dx \geq 0$, thus $ 0 \leq k \Big( \frac{\varepsilon_k^2}{{\mathfrak c}_s^2} - \mu_0 c_k ^2 k^2 \Big) $ and this implies $$ \frac{\varepsilon_k^2}{{\mathfrak c}_s^2} \geq \mu_0 c_k^2 k^2 . $$ Since $c_k \geq {\mathfrak c}_s/2$ for $k$ small, the left-hand side inequality in \eqref{kifkif} follows. In order to prove the second inequality in \eqref{kifkif}, we need the following lemma. In the case of the Gross-Pitaevskii nonlinearity, this result follows from Lemma 2.12 p. 597 in \cite{BGS1}. In the case of general nonlinearities, it was proved in \cite{CM1}. \begin{lem} [\cite{BGS1, CM1}] \label{tools} Let $N\geq 2$. 
There is $ \beta _* > 0$ such that any solution $ U = \rho e^{ i \phi} \in {\mathcal E}$ of (TW$_c$) verifying $ r_0 - \beta_* \leq \rho \leq r_0 + \beta _*$ satisfies the identities \be \label{blancheneige} E(U) + c Q(U) = \frac{2}{N} \int_{\mathbb R^N} |\nabla \rho |^2 \ dx \qquad \mbox{ and } \ee \be \label{grincheux} 2 \int_{\mathbb R^N} \rho^2 |\nabla \phi |^2 \ dx = c \int_{\mathbb R^N} ( \rho^2 - r_0^2 ) \partial_1 \phi \ dx = - c Q(U) . \ee Furthermore, there exist $a_1, a_2 > 0$ such that \be \label{balai} a_1 \| \rho ^2 - r_0 ^2 \|_{L^2(\mathbb R^N)} \leq \| \nabla U \|_{L^2( \mathbb R^N)} \leq a_2 \| \rho ^2 - r_0 ^2 \|_{L^2(\mathbb R^N)}. \ee \end{lem} \noindent {\it Proof.} Identity \eqref{grincheux} is Lemma 7.3 ($i$) in \cite{CM1}. Formally, it follows by multiplying the first equation in \eqref{phasemod} by $ \phi $ and integrating by parts over $\mathbb R^N$; see \cite{CM1} for a rigorous justification. Combining the two Pohozaev identities in \eqref{Pohozaev}, we have $$ (N-2) \int_{\mathbb R^N} |\nabla U|^2 \ dx + N \int_{\mathbb R^N} V(|U|^2) \ dx + c ( N-1 ) Q(U) = 0 . $$ Using that $|\nabla U|^2 = |\nabla \rho|^2 + \rho^2 |\nabla \phi|^2 $, we infer from \eqref{grincheux} \begin{align*} N(E(U) + c Q(U) ) = 2 \int_{\mathbb R^N} |\nabla U|^2 \ dx + c Q(U) = & \ 2 \int_{\mathbb R^N} |\nabla \rho|^2 \ dx + \Big( 2 \int_{\mathbb R^N} \rho^2 |\nabla \phi|^2 \ dx + c Q(U) \Big) \\ = & \ 2 \int_{\mathbb R^N} |\nabla \rho|^2 \ dx , \end{align*} and this establishes \eqref{blancheneige}. The estimate \eqref{balai} has been proven in \cite{CM1} (see inequality (7.17) there). \ $\Box$ We come back to the proof of Proposition \ref{prop2d}. We write $ U_k = \rho e^{ i \phi}$ and we denote $ \eta = \rho ^2 - r_0 ^2$, so that $ \rho$, $\phi $ and $ \eta$ satisfy \eqref{phasemod}$-$\eqref{fond} (with $c_k$ instead of $c$). 
Taking the Fourier transform of \eqref{fond} we get \be \label{fondfou} \begin{array}{rcl} \widehat{\eta} ( \xi) & = & \displaystyle \frac{ |\xi|^2}{|\xi |^4 + {\mathfrak c}_s ^2 |\xi |^2 - c_k ^2 \xi _1^2 } \mathscr{F} \left( -2 |\nabla U_k|^2 + 2 c_k \eta \frac{ \partial \phi}{\partial x_1} + 2 \rho ^2 F( \rho ^2) + {\mathfrak c}_s ^2 \eta \right) \\ \\ & & \displaystyle - 2 c_k \sum_{j=1}^N \frac{ \xi _1 \xi _j }{|\xi |^4 + {\mathfrak c}_s ^2 |\xi |^2 - c_k ^2 \xi _1^2 } \mathscr{F} \left( \eta \frac{ \partial \phi}{\partial x_j} \right). \end{array} \ee It is easy to see that $ 2 \rho ^2 F( \rho ^2) + {\mathfrak c}_s ^2 \eta = \mathcal{O} ( (\rho ^2 - r_0 ^2) ^2) = \mathcal{O} ( \eta^2)$, hence $$ \| \mathscr{F} \left( 2 \rho ^2 F( \rho ^2) + {\mathfrak c}_s ^2 \eta \right) \|_{L^{\infty} ( \mathbb R^N)} \leq \| 2 \rho ^2 F( \rho ^2) + {\mathfrak c}_s ^2 \eta \|_{L^1( \mathbb R^N)} \leq C \| \eta \|_{L^2( \mathbb R^N)} ^2. $$ Since $ r_0 - \beta_* < |U_k| < r_0 + \beta_* $ if $ k $ is sufficiently small and $ |\nabla U_k|^2 = |\nabla \rho |^2 + \rho^2 |\nabla \phi |^2$, using \eqref{balai} we get $$ \Big\| \mathscr{F} \left( \eta \frac{ \partial \phi}{\partial x_j} \right) \Big\| _{L^{\infty}(\mathbb R^N)} \leq \Big\| \eta \frac{ \partial \phi}{\partial x_j} \Big\| _{L^{1}(\mathbb R^N)} \leq \| \eta \|_{L^2(\mathbb R^N)} \Big\| \frac{ \partial \phi}{\partial x_j} \Big\| _{L^{2}(\mathbb R^N)} \leq C \| \eta \|_{L^2(\mathbb R^N)} ^2 $$ and $ \| \mathscr{F} ( |\nabla U_k| ^2) \|_{L^{\infty}(\mathbb R^N)} \leq \| \nabla U_k \|_{L^2(\mathbb R^N)}^2 \leq C \| \eta \|_{L^2(\mathbb R^N)} ^2 .$ Coming back to \eqref{fondfou} we discover $$ | \widehat{ \eta } ( \xi ) | \leq C \| \eta \|_{L^2(\mathbb R^N)} ^2 \cdot \frac{ |\xi|^2}{|\xi |^4 + {\mathfrak c}_s ^2 |\xi |^2 - c_k ^2 \xi _1^2 } . 
$$ Using Plancherel's formula and the above estimate we find \be \label{planchereta} \| \eta \|_{L^2(\mathbb R^N)} ^2 = \frac{1}{(2 \pi )^N} \int_{\mathbb R^N} |\widehat{\eta} ( \xi )|^2\, d \xi \leq C \| \eta \|_{L^2(\mathbb R^N)} ^4 \int_{\mathbb R^N} \frac{ |\xi|^4}{ (|\xi |^4 + {\mathfrak c}_s ^2 |\xi |^2 - c_k ^2 \xi _1^2) ^2 } \, d \xi. \ee If $N=2$, a straightforward computation using polar coordinates gives (see the proof of (2.59) p. 598 in \cite{BGS2}): $$ \int_{\mathbb R^2} \frac{ |\xi|^4}{ (|\xi |^4 + {\mathfrak c}_s ^2 |\xi |^2 - c_k ^2 \xi _1^2) ^2 } \, d \xi = \frac{ \pi}{{\mathfrak c}_s \sqrt{ {\mathfrak c}_s ^2 - c_k^2}} = \frac{ \pi}{{\mathfrak c}_s \varepsilon _k}. $$ From \eqref{planchereta} we get $ \| \eta \|_{L^2(\mathbb R^2)} ^2 \leq \frac{C}{\varepsilon _k} \| \eta \|_{L^2(\mathbb R^2)} ^4$ and taking into account \eqref{balai} we infer that $\varepsilon _k \leq C \| \eta \|_{L^2(\mathbb R^2)} ^2 \leq \tilde{C} \| \nabla U_k \|_{L^2( \mathbb R^2)} ^2 = \tilde{C} k. $ \ $\Box$ Notice that at this stage, we have only upper bounds on the energy of travelling waves, and we will have to prevent convergence towards the trivial solution to (SW). This will be done with the help of the following result. It was proven in \cite{BGS2} in the case of the Gross-Pitaevskii nonlinearity (see Proposition 2.4 p. 595 there). We extend the proof to general nonlinearities. \begin{lem} \label{minoinf} Let $N \geq 2$ and assume that (A1) holds and $F$ is twice differentiable at $ r_0^2$. There is $C>0$, depending only on $N$ and on $F$, such that any travelling wave $ U \in \mathcal{E}$ of {\rm (NLS)} of speed $ c \in [0, {\mathfrak c}_s]$ such that $ \frac{r_0}{2} \leq|U| \leq \frac{ 3 r_0}{2}$ satisfies $$ \| \, |U| - r_0 \|_{L^{\infty}(\mathbb R^N)} \geq C( {\mathfrak c}_s ^2 - c^2) = C \varepsilon^2 (U).
$$ \end{lem} \noindent {\it Proof.} Let $U \in \mathcal{E}$ be a travelling wave such that $ \frac{r_0}{2} \leq|U| \leq \frac{ 3 r_0}{2}$ in $ \mathbb R^N$. Then $U \in W_{loc}^{2,p}(\mathbb R^N)$, $ \nabla U \in W^{1,p}(\mathbb R^N)$ for all $ p \in [2, \infty)$ (see Proposition 2.2 p. 1078-1079 in \cite{M2}), and $U$ admits a lifting $U = \rho e^{i \phi}$, where $ \rho$ and $ \phi$ satisfy \eqref{phasemod}. Since $ U \in \mathcal{E}$ we have $ \rho ^2 - r_0^2 \in H^1( \mathbb R^N)$ and then it is easy to see that $ \frac{ \rho ^2 - r_0^2}{\rho } \in H^1( \mathbb R^N)$. Multiplying the second equation in \eqref{phasemod} by $ \frac{ \rho ^2 - r_0^2}{\rho } $ and integrating by parts we get \be \label{ident1} \int_{\mathbb R^N} \left( 1 + \frac{r_0^2}{\rho^2} \right) |\nabla \rho |^2 \, dx + \int_{\mathbb R^N} ( \rho ^2 - r_0^2) |\nabla \phi |^2 - ( \rho ^2 - r_0^2) F( \rho ^2) - c ( \rho ^2 - r_0^2) \frac{ \partial \phi }{\partial x_1} \, dx = 0. \ee Denote $ \delta = \| \, |U| - r_0 \|_{L^{\infty}(\mathbb R^N)} = \| \rho - r_0 \|_{L^{\infty}(\mathbb R^N)} .$ We have \be \label{inegradrho} \int_{\mathbb R^N} \left( 1 + \frac{r_0^2}{\rho^2} \right) |\nabla \rho |^2 \, dx \geq \left( 1 + \frac{r_0^2}{(r_0 + \delta )^2} \right) \int_{\mathbb R^N} |\nabla \rho |^2 \, dx \qquad \mbox{ and } \ee \be \label{majophase} \Big| \int_{\mathbb R^N} ( \rho ^2 - r_0^2) |\nabla \phi |^2 \, dx \Big| \leq \int_{\mathbb R^N} \frac{| \rho ^2 - r_0^2 |}{\rho^2} \rho ^2 |\nabla \phi |^2 \, dx \leq \frac{ 2 r_0 \delta + \delta ^2 }{(r_0 - \delta)^2} \int_{\mathbb R^N} |\nabla U |^2 \, dx. \ee There is $ \tilde{C} > 0 $ such that $| F( s^2) - F'( r_0 ^2) ( s^2 - r_0 ^2) | \leq \tilde{C} ( s^2 - r_0 ^2)^2 $ for all $ s \in [\frac{r_0}{2}, \frac{ 3r_0}{2}]$. 
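The existence of $\tilde{C}$ uses only the second-order differentiability of $F$ at $r_0^2$ together with a compactness argument on the interval $[\frac{r_0}{2}, \frac{3r_0}{2}]$; the following sketch (ours, under the standing assumptions on $F$, in particular $F(r_0^2) = 0$) records the two cases.

```latex
% A sketch (ours) of why such a constant $\tilde{C}$ exists; recall $F(r_0^2) = 0$.
% Write $t = s^2 \in [r_0^2/4, \, 9r_0^2/4]$ and $t_0 = r_0^2$.
% Near $t_0$: since $F$ is twice differentiable at $t_0$, Taylor's formula in Peano
% form yields $\delta_0 > 0$ such that
%   $| F(t) - F'(t_0)(t - t_0) | \leq ( |F''(t_0)| + 1 ) (t - t_0)^2$ if $|t - t_0| \leq \delta_0$.
% Away from $t_0$: if $|t - t_0| \geq \delta_0$, with
%   $M = \sup \{ | F(\tau) - F'(t_0)(\tau - t_0) | \; : \; \tau \in [r_0^2/4, \, 9r_0^2/4] \} < \infty$
% (finite since $F$ is continuous on this interval), we get
%   $| F(t) - F'(t_0)(t - t_0) | \leq M \leq ( M / \delta_0^2 ) (t - t_0)^2 .$
% One may therefore take $\tilde{C} = \max ( |F''(t_0)| + 1, \, M / \delta_0^2 )$.
```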
Remember that $ - F'( r_0 ^2) = 2 a^2 $ and $ {\mathfrak c}_s = 2 a r_0$, thus \be \label{60} - ( \rho ^2 - r_0 ^2) F( \rho ^2) \geq - F'( r_0 ^2) ( \rho ^2 - r_0 ^2) ^2 - \tilde{C} | \rho ^2 - r_0 ^2 |^3 \geq \left( 2 a^2 - \tilde{C} ( 2 r _0 \delta + \delta ^2 ) \right) (\rho ^2 - r_0 ^2) ^2. \ee Using \eqref{grincheux} and \eqref{momentlift}, then \eqref{ident1} and \eqref{inegradrho}-\eqref{60} we get $$ \begin{array}{l} \displaystyle - 2 c Q( U) = 2 \int_{\mathbb R^N} \rho ^2 |\nabla \phi |^2 \, dx + c \int_{\mathbb R^N} ( \rho ^2 - r _0 ^2) \frac{ \partial \phi}{\partial x_1 } \, dx \\ \\ \displaystyle = 2 \int_{\mathbb R^N} \rho ^2 |\nabla \phi |^2 \, dx + \int_{\mathbb R^N} \left( 1 + \frac{r_0^2}{\rho^2} \right) |\nabla \rho |^2 \, dx + \int_{\mathbb R^N} ( \rho ^2 - r_0^2) |\nabla \phi |^2 - ( \rho ^2 - r_0^2) F( \rho ^2) \, dx \\ \\ \displaystyle \geq 2 \int_{\mathbb R^N} \rho ^2 |\nabla \phi |^2 \, dx + \int_{\mathbb R^N} \left( 1 + \frac{r_0^2}{(r_0 + \delta )^2} \right) |\nabla \rho |^2 - \frac{ 2 r_0 \delta + \delta ^2 }{(r_0 - \delta)^2} |\nabla U |^2 + \left( 2 a^2 - \tilde{C} ( 2 r _0 \delta + \delta ^2 ) \right) (\rho ^2 - r_0 ^2) ^2 \, dx \end{array} $$ and we infer that there exists $K>0$, depending only on $F$, such that \be \label{61} -2 c Q(U) \geq 2 ( 1 - K \delta ) \int_{\mathbb R^N} |\nabla U |^2 + a^2 ( \rho ^2 - r_0 ^2)^2\, dx. 
\ee On the other hand, using \eqref{momentlift} we have \be \label{62} \begin{array}{l} \displaystyle - Q( U) = \frac{ 2 a r_0}{ {\mathfrak c}_s} \int_{\mathbb R^N} ( \rho ^2 - r _0 ^2) \frac{ \partial \phi}{\partial x_1 } \, dx \leq \frac{1}{{\mathfrak c}_s} \int_{\mathbb R^N} r_0 ^2 \Big| \frac{ \partial \phi}{\partial x_1 } \Big| ^2 + a ^2 ( \rho ^2 - r_0 ^2)^2\, dx \\ \\ \displaystyle \leq \frac{1}{{\mathfrak c}_s} \int_{\mathbb R^N} \frac{ r_0 ^2}{(r_0 - \delta )^2} \rho ^2 \Big| \frac{ \partial \phi}{\partial x_1 } \Big| ^2 + a^2 ( \rho ^2 - r_0 ^2)^2\, dx \leq \frac{1}{{\mathfrak c}_s} \frac{ r_0 ^2}{(r_0 - \delta )^2} \int_{\mathbb R^N} |\nabla U |^2 + a^2 ( \rho ^2 - r_0 ^2)^2\, dx. \end{array} \ee Since $U$ is not constant we have $ \displaystyle \int_{\mathbb R^N} |\nabla U |^2 + a^2 ( \rho ^2 - r_0 ^2)^2\, dx > 0$ and comparing \eqref{61} and \eqref{62} we get $$ \frac{c}{{\mathfrak c}_s } \frac{ r_0 ^2}{(r_0 - \delta )^2} \geq 1 - K \delta. $$ If $ \delta > \frac{1}{2K}$ the conclusion of Lemma \ref{minoinf} holds because $ \varepsilon (U)$ is bounded. Otherwise, the previous inequality is equivalent to $ \frac{ r_0 ^2}{(r_0 - \delta )^2} \frac{1}{ 1 - K \delta} \geq \frac{ {\mathfrak c}_s}{\sqrt{ {\mathfrak c}_s ^2 - \varepsilon^2(U) }}. $ There are $ K_1, \; K_2 > 0 $ such that $ \frac{ r_0 ^2}{(r_0 - \delta )^2} \frac{1}{ 1 - K \delta} \leq 1 + K_1 \delta $ and $ \frac{ {\mathfrak c}_s}{\sqrt{ {\mathfrak c}_s ^2 - \varepsilon^2 }} \geq 1 + K_2 \varepsilon^2 $ for all $ \delta \in [0, \frac{1}{2K} ] $ and all $ \varepsilon \in [0, {\mathfrak c}_s )$ and we infer that $ 1 + K_1 \delta \geq 1 + K_2 \varepsilon ^2(U)$, that is $ \delta = \| \, | U| - r_0 \|_{L^{\infty} ( \mathbb R^N )} \geq \frac{K_2}{K_1} \varepsilon^2(U)$. 
\ $\Box$ \subsection{Initial bounds for $\boldsymbol{\mathcal{A}_{\varepsilon}}$} Let $U_c \in \mathcal{E}$ be a travelling wave to (NLS) of speed $ c$ provided by Theorem \ref{th2dposit} or Theorem \ref{th2d} if $N=2$, respectively by Theorem \ref{thM} if $N=3$, such that $\frac{ r_0}{2} \leq |U_c| \leq \frac{ 3r_0}{2} $ in $ \mathbb R^N$. As in \eqref{ansatz}, we write $U_c(x) = \rho(x) e^{ i \phi(x)} = r_0 \sqrt{1+\varepsilon^2 \mathcal{A}_{\varepsilon}(z) }\ {\sf e}^{i\varepsilon \varphi_{\varepsilon} (z)}, $ where $ \varepsilon = \sqrt{ {\mathfrak c}_s ^2 - c^2}, $ $z_1 = \varepsilon x_1 ,$ $ \ z_\perp = \varepsilon^2 x_\perp .$ According to Proposition 2.2 p. 1078-1079 in \cite{M2} we have $$ \| U_c \|_{C_b^1( \mathbb R^N)} \leq C \qquad \mbox{ and } \qquad \| \nabla U_c \|_{W^{1,p}(\mathbb R^N)} \leq C_p \quad \mbox{ for } p \in [2, \infty) . $$ By scaling, we obtain the initial (rough) estimates \be \label{bourrinska} \| \mathcal{A} _{\varepsilon} \|_{L^{\infty }} \leq \frac{C}{\varepsilon ^2}, \quad \| \partial _{z_1} \mathcal{A} _{\varepsilon} \|_{L^{\infty }} \leq \frac{C}{\varepsilon ^3}, \quad \| \nabla _{z_{\perp}} \mathcal{A} _{\varepsilon} \|_{L^{\infty }} \leq \frac{C}{\varepsilon ^4}, \quad \| \partial_{z_1} \varphi _{\varepsilon} \|_{L^{\infty }} \leq \frac{C}{\varepsilon ^2}, \quad \| \nabla _{z_{\perp}} \varphi_{\varepsilon} \|_{L^{\infty }} \leq \frac{C}{\varepsilon ^3} \ee and \be \label{bourrinSKF} \Big\| \frac{ \partial ^2 \mathcal{A}_{\varepsilon}}{\partial z_1 ^2} \Big\| _{L^p} \leq C_p \varepsilon^{-4 + \frac{2N-1}{p}}, \qquad \Big\| \frac{ \partial ^2 \mathcal{A}_{\varepsilon}}{\partial z_1 \partial z_j } \Big\| _{L^p} \leq C_p \varepsilon^{-5 + \frac{2N-1}{p}}, \qquad \Big\| \frac{ \partial ^2 \mathcal{A}_{\varepsilon}}{\partial z_j \partial z_k} \Big\| _{L^p} \leq C_p \varepsilon^{-6 + \frac{2N-1}{p}} \ee for any $ p \in [2, \infty) $ and all $ j, k \in \{ 2, \dots, N \}.$ We have: \begin{lem} \label{BornEnergy} Assume that (A2) and (A4) are
satisfied and $ \Gamma \neq 0$. Let $U_c$ be a solution to {\rm (TW$_{c}$)} provided by Theorem \ref{th2d} if $N=2$, respectively by Theorem \ref{thM} if $N=3$ and let $ \varepsilon = \sqrt{{\mathfrak c}_s ^2 - c^2}$. If $N=3$ we assume moreover that $E(U_c) \leq \frac{K}{\varepsilon}$, where $K$ does not depend on $ \varepsilon$. There exist $ \varepsilon _0 > 0 $ and $ C > 0 $ (depending only on $F$, $N$, $K$) such that $U_c$ admits a lifting as in \eqref{ansatz} whenever $ \varepsilon \in (0, \varepsilon _0)$ and the following estimate holds: $$ \int_{\mathbb R^N} | \partial_{z_1} \varphi_{\varepsilon} |^2 + |\nabla_{z_\perp} \varphi_{\varepsilon}|^2 + \mathcal{A}_{\varepsilon} ^2 + | \partial_{z_1} \mathcal{A}_{\varepsilon } |^2 + \varepsilon^2 | \nabla_{z_\perp} \mathcal{A}_{\varepsilon} |^2 \ dz \leq C . $$ \end{lem} \noindent {\it Proof.} If $ N =2$ it follows from Theorem \ref{th2d} that $ k = \displaystyle \int_{\mathbb R^2} |\nabla U_c |^2 \, dx $ is small if $ \varepsilon $ is small. Using Lemma \ref{liftingfacile} in the case $N=2$, respectively Corollary \ref{sanszero} if $N=3$, we infer that $|U_c|$ is arbitrarily close to $ r_0 $ if $ \varepsilon $ is sufficiently small and then it is clear that we have a lifting as in \eqref{ansatz}. We will repeatedly use the fact that there is a constant $C$ depending only on $F$ such that $$ C |\partial_j U_c|^2 \geq |\partial_j (\rho^2)|^2 + |\partial_j \phi|^2 \qquad \mbox{ for } 1 \leq j \leq N. $$ In view of the Taylor expansion of $V$ near $r_0^2$, for $\varepsilon$ sufficiently close to $ 0$ (so that $|U_c|$ is sufficiently close to $r_0$) we have $$ V(|U_c|^2) \geq C (|U_c| - r_0)^2 . $$ By scaling, we infer that for some $\delta_1 >0$ depending only on $F$ there holds $$ E(U_c) = \int_{\mathbb R^N} |\nabla U_c|^2 + V(|U_c|^2) \ dx \geq \delta_1 \varepsilon^{5-2N} \int_{\mathbb R^N} | \partial_{z_1} \varphi_{\varepsilon} |^2 + \mathcal{A}_{\varepsilon}^2 \ dz . 
$$ In the case $ N =2$ it follows from Proposition \ref{prop2d} that $E(U_c) \leq C \varepsilon $ for some $ C $ independent of $ \varepsilon $. In the case $ N =3$ we use the assumption $E(U_c) \leq \frac{K}{\varepsilon}$. In both cases the previous inequality implies that \be \label{bourrin2} \int_{\mathbb R^N} | \partial_{z_1} \varphi_{\varepsilon} |^2 + \mathcal{A}_{\varepsilon}^2 \ dz \leq C . \ee We have $E_{c} (U_c) = T_{c} = \mathcal{O}(\varepsilon )$ if $ N=3 $ by Proposition \ref{asympto} $(ii)$, respectively $E_{c} (U_c) = \mathcal{O}( k \varepsilon ^2 ) = \mathcal{O}(\varepsilon ^3)$ by \eqref{ecusson} and \eqref{kifkif} in the case $ N =2$. From the Pohozaev identity $P_c(U_c) = 0$ (see \eqref{Pohozaev}) we deduce $$ \frac{2r_0^2 \varepsilon^{7-2N}}{N-1} \int_{\mathbb R^N} |\nabla_{z_\perp} \varphi_{\varepsilon}|^2 + \varepsilon ^2 |\nabla_{z_\perp} \mathcal{A}_{\varepsilon}|^2 \ dz \leq C \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{\perp} U_c|^2 \ dx = C E_{c} (U_c) = \mathcal{O}(\varepsilon ^{7-2N}) . $$ Thus we get \be \label{bourrin3} \int_{\mathbb R^N} |\nabla_{z_\perp} \varphi_{\varepsilon}|^2 + \varepsilon ^2 |\nabla_{z_\perp} \mathcal{A}_{\varepsilon}|^2\ dz \leq C . \ee Furthermore, by scaling the identity \eqref{blancheneige} in Lemma \ref{tools} we obtain $$ r_0^2 \varepsilon ^{7-2N} \int_{\mathbb R^N} |\partial_{z_1} \mathcal{A}_{\varepsilon } |^2 \ dz \leq C \int_{\mathbb R^N} |\partial_{x_1} \rho |^2 \ dx \leq C \int_{\mathbb R^N} |\nabla \rho |^2 \ dx = C \frac{N}{2} E_{c } (U_c) = \mathcal{O} (\varepsilon ^{7-2N} ) , $$ so that \be \label{bourrin4} \int_{\mathbb R^N} |\partial_{z_1} \mathcal{A}_ {\varepsilon} |^2 \ dz \leq C . \ee Gathering \eqref{bourrin2}, \eqref{bourrin3} and \eqref{bourrin4} yields the desired inequality. \ $\Box$ \\ Using the above estimates, we shall find $L^q$ bounds for $\mathcal{A}_{\varepsilon}$. 
The proof is based on equation \eqref{Fonda}, that is $$ \Big\{ \partial_{z_1}^4 - \partial_{z_1}^2 - {\mathfrak c}_s^2 \Delta_{z_\perp} + 2 \varepsilon ^2 \partial_{z_1}^2 \Delta_{z_\perp} + \varepsilon ^4 \Delta^2_{z_\perp} \Big\} \mathcal{A}_{\varepsilon} = \mathcal{R}_{\varepsilon} , \eqno{\mbox{\eqref{Fonda}}} $$ where \begin{align*} \mathcal{R} _{\varepsilon} = & \ \{ \partial_{z_1}^2 + \varepsilon ^2 \Delta_{z_\perp} \} \Big[ 2(1 + \varepsilon^2 \mathcal{A} _{\varepsilon}) \Big( (\partial_{z_1} \varphi _{\varepsilon} )^2 + \varepsilon ^2 |\nabla_{z_\perp} \varphi _{\varepsilon} |^2 \Big) + \varepsilon^2 \frac{(\partial_{z_1} \mathcal{A} _{\varepsilon} )^2 + \varepsilon ^2 |\nabla_{z_\perp} \mathcal{A} _{\varepsilon} |^2}{2(1+ \varepsilon ^2 \mathcal{A} _{\varepsilon} )} \Big] \\ & \ - 2 c \varepsilon ^2 \Delta_{z_\perp} ( \mathcal{A} _{\varepsilon} \partial_{z_1} \varphi _{\varepsilon}) + 2 c \varepsilon ^2 \displaystyle \sum_{j=2}^N \partial_{z_1} \partial_{z_j} ( \mathcal{A} _{\varepsilon} \partial_{z_j} \varphi _{\varepsilon} ) \\ & \ + \{ \partial_{z_1}^2 + \varepsilon ^2 \Delta_{z_\perp} \} \Big[ {\mathfrak c}_s^2 \Big( 1 - \frac{r_0^4F''(r_0^2)}{{\mathfrak c}_s^2} \Big) \mathcal{A} _{\varepsilon}^2 - \frac{1}{\varepsilon ^4} \tilde{F}_3(r_0^2 \varepsilon ^2 \mathcal{A} _{\varepsilon}) \Big] \end{align*} and we recall that $ \tilde{F}_3(\alpha) = \mathcal{O}(\alpha^3)$ as $\alpha \to 0$. Let $$ D_{\varepsilon} (\xi) = \xi_1^4 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_\perp|^2 + 2 \varepsilon^2 \xi_1^2 |\xi_\perp|^2 + \varepsilon^4 |\xi_\perp|^4 = ( \xi _1 ^2 + \varepsilon ^2|\xi _{\perp} |^2)^2 + \xi _1 ^2 + {\mathfrak c}_s ^2 |\xi _{\perp}|^2.
$$ We will consider the following kernels: $$ \mathcal{K}^1_{\varepsilon } (z) = \mathscr{F}^{-1} \Big( \frac{\xi_1^2}{D_{\varepsilon} (\xi)} \Big) , \quad \quad \mathcal{K}^\perp_{\varepsilon} (z) = \mathscr{F}^{-1} \Big( \frac{|\xi_\perp|^2}{D_{\varepsilon}(\xi)} \Big) \quad \quad {\rm and} \quad \quad \mathcal{K}^{1,j}_{\varepsilon} (z) = \mathscr{F}^{-1} \Big( \frac{\xi_1 \xi_j}{D_{\varepsilon}(\xi)} \Big) , \quad j = 2, \dots, N.$$ Then we may rewrite \eqref{Fonda} as a convolution equation \be \label{Henry} \mathcal{A}_{\varepsilon} = \Big( \mathcal{K}^1_{\varepsilon} + \varepsilon ^2 \mathcal{K}^\perp_{\varepsilon} \Big) * G_{\varepsilon} + 2 c \varepsilon ^2 \mathcal{K}_{\varepsilon}^{\perp} * (\mathcal{A}_{\varepsilon} \partial_{z_1} \varphi_{\varepsilon}) - 2 c(\varepsilon) \varepsilon ^2 \sum_{j=2}^N \mathcal{K}^{1,j}_{\varepsilon} * (\mathcal{A}_{\varepsilon} \partial_{z_j} \varphi_{\varepsilon}) , \ee where \begin{align*} G_{\varepsilon} = & \ (1 + \varepsilon^2 \mathcal{A}_{\varepsilon}) \Big( (\partial_{z_1} \varphi_{\varepsilon})^2 + \varepsilon ^2 |\nabla_{z_\perp} \varphi_{\varepsilon}|^2 \Big) + \varepsilon ^2 \frac{(\partial_{z_1} \mathcal{A}_{\varepsilon} )^2 + \varepsilon^2 |\nabla_{z_\perp} \mathcal{A}_{\varepsilon}|^2} {4(1+ \varepsilon ^2 \mathcal{A}_{\varepsilon} )} \\ & \ + \frac{{\mathfrak c}_s^2}{4} ( \Gamma - 2 ) \mathcal{A}_{\varepsilon} ^2 - \frac{1}{\varepsilon ^4} \tilde{F}_3(r_0^2 \varepsilon ^2 \mathcal{A}_{\varepsilon} ) . \end{align*} \begin{lem} \label{Grenouille} The following estimates hold for $N=2$, $3$ and $\varepsilon$ small enough: (i) For all $ 2 \leq p \leq \infty $ we have $ \| \partial_{z_1} \mathcal{A}_{\varepsilon} \|_{L^p} + \varepsilon \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon} \|_{L^p} \leq C \varepsilon ^{\frac{6}{p}-3} $. 
(ii) There exists $C>0 $ such that $ \| \mathcal{A}_{\varepsilon} \|_{L^{3q}} \leq C \varepsilon ^{- \frac23 } \| \mathcal{A}_{\varepsilon} \|^{\frac23}_{L^{2q}}$ for any $ 1 \leq q \leq \infty$. (iii) If $N=3$, for any $ 2 \leq p < 8/3 $ there is $ C_p > 0 $ such that $ \| \mathcal{A}_{\varepsilon} \|_{L^p(\mathbb R^3)} \leq C_p $. (iv) If $N=2$, for any $ 2 \leq p < 4 $ there is $ C_p > 0 $ such that $ \| \mathcal{A}_{\varepsilon} \|_{L^p(\mathbb R^2)} \leq C_p $. \end{lem} \noindent {\it Proof.} For $(i)$, it suffices to notice that the estimate is true for $p=2$ by Lemma \ref{BornEnergy} and for $p=\infty$ by \eqref{bourrinska}, therefore it holds for any $2 \leq p \leq \infty$ by interpolation. For $(ii)$ we just interpolate the exponent $3q$ between $2q$ and $\infty$ and we use \eqref{bourrinska}: $$ \| \mathcal{A}_{\varepsilon} \|_{L^{3q}} \leq \| \mathcal{A}_{\varepsilon} \|_{L^{2q}}^{ \frac 23 } \| \mathcal{A}_{\varepsilon} \|_{L^{\infty}}^{ \frac 13 } \leq C \varepsilon ^{ -\frac 23 } \| \mathcal{A}_{\varepsilon} \|_{L^{2q}}^{ \frac 23 } . $$ Next we prove $(iii)$. As already mentioned, a uniform $L^p$ bound (for $2\leq p \leq 8/3$) on the kernels $\mathcal{K}^1_{\varepsilon}$, $ \varepsilon ^2 \mathcal{K}^\perp_{\varepsilon}$ and $ \varepsilon ^2 \mathcal{K}^{1,j}_ {\varepsilon} $ is established in \cite{BGS1} by using a Sobolev estimate. Unfortunately this is no longer possible in dimension $N=3$. We thus rely on a suitable decomposition of $ \mathcal{A}_{\varepsilon} $ in the Fourier space. Some terms are controlled by using the energy bounds in Lemma \ref{BornEnergy}, the others by using \eqref{Henry}. We consider a set of parameters $\alpha$, $\beta$, $\gamma \in (1,2)$ and $\nu > 5/2 $ with $ \alpha \geq \beta $ and $\alpha \geq \gamma $ (to be fixed later). 
For $ \varepsilon \in (0,1)$, let $$ \begin{array}{c} E^I = \{ \xi \in \mathbb R^N \; \big| \; |\xi _{\perp} | < 1 \}, \quad E^{II} = \{ \xi \in \mathbb R^N \; \big| \; |\xi_\perp| > \varepsilon ^{-\alpha} \}, \quad E^{III} = \{ \xi \in \mathbb R^N \; \big| \; \varepsilon ^{-\beta} \leq |\xi_\perp| \leq \varepsilon ^{-\alpha},\, |\xi_1| < 1 \}, \\ \\ E^{IV} = \{ \xi \in \mathbb R^N \; \big| \; \varepsilon ^{-\gamma} \leq |\xi_\perp| \leq \varepsilon ^{-\alpha}, \, 1 \leq |\xi_1|^\nu \leq |\xi_\perp| \}, \qquad E^{V} = \{ \xi \in \mathbb R^N \; \big| \; 1 \leq |\xi_\perp| \leq \varepsilon ^{-\alpha}, \, |\xi_1|^\nu > |\xi_\perp| \}, \\ \\ E^{VI} = \{ \xi \in \mathbb R^N \; \big| \; 1 \leq |\xi_\perp| < \varepsilon ^{-\beta}, \, |\xi_1| < 1 \}, \qquad E^{VII} = \{ \xi \in \mathbb R^N \; \big| \; 1 \leq |\xi_\perp| < \varepsilon ^{-\gamma}, \, 1 \leq |\xi_1|^\nu \leq |\xi_\perp| \}. \end{array} $$ It is easy to see that the sets $E^I, \dots , E^{VII}$ are disjoint and cover $ \mathbb R^N$. For $J \in \{ I, \dots , VII \} $ we denote $ \mathcal{A}_{\varepsilon} ^J = \mathscr{F} ^{-1} (\widehat{\mathcal{A} } _{\varepsilon} \mathbf{1}_{E^J})$, so that $ \mathcal{A}_{\varepsilon} = \mathcal{A}_{\varepsilon} ^I + \dots + \mathcal{A}_{\varepsilon}^{VII} $, and we estimate each term separately. For $ \mathcal{A}_{\varepsilon} ^{I} $ we use $$ \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon }^{I} \|_{L^2} = \| \xi_{\perp} \widehat{\mathcal{A}}_{\varepsilon} {\bf 1}_{\{ |\xi_\perp| < 1 \} } \|_{L^2} \leq \| \widehat{\mathcal{A}}_{\varepsilon } {\bf 1}_{ \{ |\xi_\perp|\leq 1 \} } \|_{L^2} \leq \| \widehat{\mathcal{A}}_{\varepsilon } \|_{L^2} = \| \mathcal{A}_{\varepsilon } \|_{L^2} \leq C . $$ By Lemma \ref{BornEnergy}, $\mathcal{A}_{\varepsilon} $ and $ \partial_{z_1} \mathcal{A}_{\varepsilon } $ are uniformly bounded in $L^2$, thus we have $$ \| \mathcal{A}_ {\varepsilon} ^{I} \|_{L^2} + \| \partial_{z_1} \mathcal{A}_{\varepsilon } ^{I} \|_{L^2} \leq C .
$$ Hence $ \mathcal{A}_{\varepsilon } ^{I} $ is uniformly bounded in $H^1$, and using the Sobolev embedding we deduce \be \label{Timide1} \forall \; 2 \leq p \leq 6, \quad \quad \quad \| \mathcal{A}_{\varepsilon } ^{I} \|_{L^p} \leq C . \ee We will use the Riesz-Thorin theorem to bound $\mathcal{A}_{\varepsilon }^{II}$: if $1<q = \frac{p}{p-1} <2$ is the conjugate exponent of $ p \in (2, \infty) $, there holds $$ \| \mathcal{A}_{\varepsilon } ^{II} \|_{L^p} \leq C \| \widehat{\mathcal{A}}_{\varepsilon } ^{II} \|_{L^q} . $$ Thus it suffices to bound $ \| \widehat{\mathcal{A}}_{\varepsilon } ^{II} \|_{L^q} $. Using the H\"older inequality with exponents $\frac{2}{q} $ and $\frac{2}{2-q} $, we have \begin{align*} \| \widehat{\mathcal{A}}_{\varepsilon } ^{II} \|_{L^q}^q = & \ \int_{\mathbb R^3} \Big( (|\xi_1| + \varepsilon |\xi_\perp|) |\widehat{\mathcal{A}}_{\varepsilon } | \Big)^q \times \frac{{\bf 1}_{\{ |\xi_\perp| > \varepsilon ^{-\alpha} \} } }{(|\xi_1| + \varepsilon |\xi_\perp|)^q } \, d \xi \\ \leq & \ \| (|\xi_1| + \varepsilon |\xi_\perp|) \widehat{\mathcal{A}}_{\varepsilon } \|_{L^2}^q \left( \int_{\mathbb R^3} \frac{ {\bf 1}_{\{ |\xi_\perp| \geq \varepsilon ^{-\alpha} \} } }{(|\xi_1| + \varepsilon |\xi_\perp|)^{\frac{2q}{2-q} }} \, d \xi \right)^{\frac{2-q}{2} } \\ \leq & \ C_q ( \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^2} + \varepsilon \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^2})^q \left( \int_{\varepsilon ^{-\alpha}}^{\infty} \frac{R \, dR}{ (\varepsilon R)^{\frac{3q-2}{2-q}}} \right)^{\frac{2-q}{2} }. \end{align*} (We have computed the integral in $\xi_1$ and we used cylindrical coordinates for the third line.) Provided that $\frac{3q-2}{2-q}> 2$ (or, equivalently, $ q > 6/5 $), the last integral in $R$ is $$ C (q) \varepsilon ^{-\frac{3q-2}{2-q}} \times \varepsilon ^{\alpha \frac{5q-6}{2-q}} \leq C_q $$ as soon as $\alpha \geq \frac{3q-2}{5q-6} = \frac{2+p}{6-p}$, that is $ p \leq 6 - \frac{8}{\alpha + 1 }$.
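The passage from $\alpha \geq \frac{3q-2}{5q-6}$ to $p \leq 6 - \frac{8}{\alpha + 1}$ is elementary algebra with $q = \frac{p}{p-1}$; the following snippet (a side check only, not part of the argument; the sample exponents are ours) verifies the two identities in exact rational arithmetic.

```python
from fractions import Fraction

# Side check of the exponent algebra used above. With q = p/(p-1) the conjugate
# exponent of p (2 < p < 6, hence q > 6/5), we verify on sample rationals that
#   (3q - 2)/(5q - 6) == (2 + p)/(6 - p),
# and that the threshold alpha = (2 + p)/(6 - p) corresponds exactly to the
# saturation of the bound p <= 6 - 8/(alpha + 1).
for p in (Fraction(5, 2), Fraction(8, 3), Fraction(3)):
    q = p / (p - 1)                       # conjugate exponent
    assert q > Fraction(6, 5)
    assert (3 * q - 2) / (5 * q - 6) == (2 + p) / (6 - p)
    alpha = (2 + p) / (6 - p)             # threshold value of alpha
    assert p == 6 - Fraction(8) / (alpha + 1)
print("exponent identities verified")
```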
Notice that $2<6- \frac{8}{\alpha+1} <6 $ because $\alpha > 1 $. By Lemma \ref{BornEnergy} we get \be \label{Timide2} \forall \; 2 \leq p \leq 6 - \frac{8}{\alpha+1} , \quad \quad \quad \| \mathcal{A}_{ \varepsilon } ^{II} \|_{L^p} \leq C(\alpha) . \ee Using similar arguments, we have \begin{align*} \| \mathcal{A}_{\varepsilon } ^{III} \|_{L^p}^q \leq & \ C \| \widehat{\mathcal{A}}_{\varepsilon} ^{III} \|_{L^q}^q \\ = & \ C \int_{\mathbb R^3} \Big( \varepsilon |\xi_\perp| \cdot |\widehat{\mathcal{A}}_{\varepsilon } | \Big)^q \times \frac{{\bf 1}_{ \{ \varepsilon ^{-\beta} \leq |\xi_\perp| \leq \varepsilon ^{-\alpha} , \, |\xi_1| < 1 \} } } {(\varepsilon |\xi_\perp|)^q } \, d \xi \\ \leq & \ C ( \varepsilon \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^2} )^q \left( \int_{\mathbb R^3} \frac{ {\bf 1}_{ \{ \varepsilon ^{-\beta} \leq |\xi_\perp| \leq \varepsilon ^{-\alpha} , \, |\xi_1| \leq 1 \} } }{(\varepsilon |\xi_\perp|)^{\frac{2q}{2-q} }} \, d \xi \right)^{\frac{2-q}{2} } \\ \leq & \ C_q \left( \varepsilon ^{-\frac{2q}{2-q} } \int_{\varepsilon ^{-\beta}}^{\varepsilon ^{-\alpha}} \frac{dR}{R^{\frac{4q-4}{2-q} + 1}} \right)^{\frac{2-q}{2} } \leq C_q \end{align*} if $\beta \frac{4q-4}{2-q} - \frac{2q}{2-q} \geq 0 $, that is $ 2 \beta \geq \frac{q}{q-1} = p $. Consequently, \be \label{Timide3} \forall \; 2 \leq p \leq 2 \beta , \quad \quad \quad \| \mathcal{A}_{\varepsilon }^{III} \|_{L^p} \leq C(\beta) .
\ee Similarly we get a bound for $ \mathcal{A}_{\varepsilon } ^{IV}$: \begin{align*} \| \mathcal{A}_{\varepsilon } ^{IV} \|_{L^p}^q \leq & \ C ( \varepsilon \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^2} )^q \left( \int_{\mathbb R^3} \frac{ {\bf 1}_{ \{ \varepsilon ^{-\gamma} \leq |\xi_\perp| \leq \varepsilon ^{-\alpha} ,\, 1 \leq |\xi_1|^\nu \leq |\xi_\perp| \} } }{(\varepsilon |\xi_\perp|)^{\frac{2q}{2-q} }} \, d \xi \right)^{\frac{2-q}{2} } \\ \leq & \ C_q \left( \varepsilon ^{-\frac{2q}{2-q} } \int_{\varepsilon ^{-\gamma}}^{\varepsilon ^{-\alpha}} \frac{R^{\frac{1}{\nu}} \, dR}{R^{\frac{4q-4}{2-q} + 1}} \right)^{\frac{2-q}{2} } \leq C_q \end{align*} provided that $\gamma \frac{4q-4}{2-q} - \frac{2q}{2-q} - \frac{\gamma}{\nu} \geq 0 $, which is equivalent to $ p \leq \frac{2\gamma (2 \nu + 1)}{2\nu+\gamma} $ (notice that $\frac{2\gamma (2 \nu + 1)}{2\nu+\gamma} > 2$ because $\gamma > 1$). Therefore, \be \label{Timide4} \forall \; 2 \leq p \leq \frac{2\gamma (2 \nu + 1)}{2\nu+\gamma} , \quad \quad \quad \| \mathcal{A}_{\varepsilon } ^{IV} \|_{L^p} \leq C(\nu) .
\ee We use the fact that $\| \partial_{z_1} \mathcal{A}_{\varepsilon} \|_{L^2}$ is bounded independently of $ \varepsilon$ (see part $(i)$) in order to estimate $\mathcal{A}_{\varepsilon }^V$: \begin{align*} \| \mathcal{A}_{\varepsilon } ^{V} \|^q_{L^p} \leq & \ C \| \widehat{\mathcal{A}}_{\varepsilon } ^{V} \|^q_{L^q} \\ = & \ C \int_{\mathbb R^3 } |\xi_1 \widehat{\mathcal{A}}_{\varepsilon } |^q \times \frac{{\bf 1}_{\{ 1 \leq |\xi_\perp | \leq \varepsilon ^{-\alpha}, \, |\xi_\perp| < |\xi_1|^\nu \}}}{|\xi_1|^q} \, d \xi \\ \leq & \ C \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^2}^{q} \left( \int_{\mathbb R^3} \frac{{\bf 1}_{\{ 1 \leq |\xi_\perp | \leq \varepsilon ^{-\alpha}, \, |\xi_\perp| \leq |\xi_1|^\nu \}}}{|\xi_1|^{\frac{2q}{2-q}}} \, d \xi \right)^{\frac{2-q}{2}} \\ \leq & \ C \left( \int_1^{\varepsilon ^{-\alpha} } \frac{ R \, dR}{R^{(\frac{2q}{2-q}-1)/\nu}} \right)^{\frac{2-q}{2}} , \end{align*} by using cylindrical coordinates in the fourth line. We have $\frac{2q}{2-q} > 1$ for $q \in [1, 2)$ and the last integral is bounded independently of $\varepsilon $ as soon as $\frac{1}{\nu} \left( \frac{2q}{2-q} - 1 \right) > 2 $, that is $ p < \frac{4\nu +2}{2\nu-1} $. It is obvious that $ \frac{4\nu +2}{2\nu-1} > 2$ for $\nu > 1/2$. As a consequence, we get \be \label{Timide5} \forall \; 2 \leq p < \frac{4\nu +2}{2\nu-1} , \quad \quad \quad \| \mathcal{A}_{\varepsilon } ^{V} \|_{L^p} \leq C(p) . \ee We use the convolution equation \eqref{Henry} to estimate $\mathcal{A}_{\varepsilon } ^{VI}$ and $\mathcal{A}_{\varepsilon } ^{VII}$. 
Applying the Fourier transform to \eqref{Henry} we obtain the pointwise bound \begin{align*} |\widehat{\mathcal{A}}_{\varepsilon } (\xi) | = & \ \Big| \Big( \widehat{\mathcal{K}}^1_{\varepsilon } + \varepsilon ^2 \widehat{\mathcal{K}}^\perp_{\varepsilon } \Big) \widehat{G}_{\varepsilon } + 2 c(\varepsilon ) \widehat{\mathcal{K}}^{\perp }_{\varepsilon } \mathscr{F} ( \mathcal{A}_{\varepsilon } \partial_{z_1} \varphi_{\varepsilon } ) - 2 c(\varepsilon ) \varepsilon ^2 \sum_{j=2}^N \widehat{\mathcal{K}}^{1,j}_{\varepsilon } \mathscr{F} (\mathcal{A}_{\varepsilon } \partial_{z_j} \varphi_{\varepsilon }) \Big| \\ \leq & \ C \Big( |\widehat{\mathcal{K}}^1_{\varepsilon }| + \varepsilon ^2 |\widehat{\mathcal{K}}^\perp_{\varepsilon }| + \varepsilon ^2 \sum_{j=2}^N |\widehat{\mathcal{K}}^{1,j}_{\varepsilon }| \Big) \Big( \| \widehat{G}_{\varepsilon } \|_{L^\infty} + \| \mathscr{F} ( \mathcal{A}_{\varepsilon } \partial_{z_1} \varphi_{\varepsilon } ) \|_{L^\infty} + \sum_{j=2}^N \| \mathscr{F} ( \mathcal{A}_{\varepsilon } \partial_{z_j} \varphi_{\varepsilon } ) \|_{L^\infty} \Big) . \end{align*} The estimates in Lemma \ref{BornEnergy} and the boundedness of $\mathscr{F} : L^1 \to L^\infty$ imply that the second factor is bounded independently of $\varepsilon$. Therefore \be \label{Atchoum} |\widehat{\mathcal{A}}_{\varepsilon } (\xi) | \leq C \Big( |\widehat{\mathcal{K}}^1_{\varepsilon }| + \varepsilon ^2 |\widehat{\mathcal{K}}^\perp_{\varepsilon }| + \varepsilon ^2 \sum_{j=2}^N |\widehat{\mathcal{K}}^{1,j}_{\varepsilon }| \Big) \leq C \frac{\xi_1^2 + \varepsilon ^2 |\xi_\perp|^2 + \varepsilon ^2 |\xi_1| \cdot |\xi_\perp|}{D_{\varepsilon }(\xi)} \leq C \frac{\xi_1^2 + \varepsilon ^2 |\xi_\perp|^2 }{D_{\varepsilon }(\xi)} \ee because $ 2 \varepsilon ^2 |\xi_1| \cdot |\xi_\perp| \leq \xi_1^2 + \varepsilon ^4 |\xi_\perp|^2$. 
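As a numerical illustration of the last two inequalities in \eqref{Atchoum} (a sanity check only; the grid, the sample sound speed and all names are ours), one can verify on a sample grid that the expanded and factored forms of $D_{\varepsilon}$ coincide and that the numerator bound holds with constant $3/2$ once $\varepsilon \leq 1$.

```python
# Pure-Python grid check (ours, not part of the proof):
# (a) the expanded and factored expressions for D_eps(xi) given above agree;
# (b) xi1^2 + eps^2*pp^2 + eps^2*|xi1|*pp <= (3/2)*(xi1^2 + eps^2*pp^2) for eps <= 1,
#     which encodes the inequality 2*eps^2*|xi1|*pp <= xi1^2 + eps^4*pp^2 used above.
cs = 1.0  # sample value of the sound speed (ours)

def D_expanded(x1, pp, eps):
    return x1**4 + x1**2 + cs**2 * pp**2 + 2 * eps**2 * x1**2 * pp**2 + eps**4 * pp**4

def D_factored(x1, pp, eps):
    return (x1**2 + eps**2 * pp**2) ** 2 + x1**2 + cs**2 * pp**2

for eps in (0.1, 0.5, 1.0):
    for x1 in [k * 0.5 for k in range(-20, 21)]:
        for pp in [k * 0.5 for k in range(0, 21)]:
            d1, d2 = D_expanded(x1, pp, eps), D_factored(x1, pp, eps)
            assert abs(d1 - d2) < 1e-9 * (1.0 + d2)
            num = x1**2 + eps**2 * pp**2 + eps**2 * abs(x1) * pp
            assert num <= 1.5 * (x1**2 + eps**2 * pp**2) + 1e-12
print("kernel symbol checks passed")
```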
If $ \xi \in E^{VI}$ we have $ |\xi_1| \leq 1 $ and $ 1 \leq |\xi_\perp| \leq \varepsilon^{-\beta} \leq \varepsilon^{-2}$ (because $\beta < 2$), hence there is some constant $C$ depending only on ${\mathfrak c}_s$ such that $$ C |\xi_\perp|^2 \geq D_{\varepsilon }(\xi) = \xi_1^4 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_\perp|^2 + 2 \varepsilon ^2 \xi_1^2 |\xi_\perp|^2 + \varepsilon ^4 |\xi_\perp|^4 \geq \frac{|\xi_\perp|^2}{C} . $$ Using the Riesz-Thorin theorem with exponents $2 < p < \infty$ and $q=p/(p-1) \in (1,2)$ as well as \eqref{Atchoum} we find \begin{align*} \| \mathcal{A}_{\varepsilon } ^{VI} \|_{L^p}^q \leq & \ C \| \widehat{\mathcal{A}}_{\varepsilon } ^{VI} \|_{L^q}^q \\ \leq & \ C \int_{\mathbb R^3} {\bf 1}_{ \{ 1 \leq |\xi_\perp| \leq \varepsilon ^{-\beta}, \, |\xi_1| \leq 1 \}} \frac{( \xi_1^2 + \varepsilon ^2 |\xi_\perp|^2 )^q}{ |\xi_\perp|^{2q} } \ d \xi \\ \leq & \ C \int_{\mathbb R^3} {\bf 1}_{ \{ 1 \leq |\xi_\perp| \leq \varepsilon ^{-\beta}, \, |\xi_1| \leq 1 \} } \left( \frac{\xi_1^{2q}}{ |\xi_\perp|^{2q} } + \varepsilon ^{2q} \right) \ d \xi \\ \leq & \ C \int_{|\xi_\perp|\geq 1} \frac{d\xi_\perp}{|\xi_\perp|^{2q} } + C \varepsilon ^{2q -2\beta} \leq C_q \end{align*} provided that $q> 1$ and $ q \geq \beta $. We have $ q \geq \beta $ if and only if $ p \leq \frac{\beta}{\beta-1}$. It is obvious that $\frac{\beta}{\beta-1}>2$ because $ 1 < \beta < 2$. Hence we obtain \be \label{Timide6} \forall \; 2 \leq p \leq \frac{\beta}{\beta-1} , \quad \quad \quad \| \mathcal{A}_{\varepsilon }^{VI} \|_{L^p} \leq C(\beta) . \ee In order to estimate $\mathcal{A}_{\varepsilon } ^{VII}$ we notice that for $ \xi \in E^{VII} $ we have $ 1 \leq |\xi_\perp| \leq \varepsilon ^{-\gamma}$ and $ 1 \leq |\xi_1|^\nu \leq |\xi_\perp|$, thus $|\xi_1|^2 \leq |\xi_\perp| \leq \varepsilon ^{-2}$ because $\nu \geq 5/2 > 2 $ and $ \gamma \leq 2$. 
Hence there exists $C > 0$ depending only on ${\mathfrak c}_s$ such that $$ C |\xi_\perp|^2 \geq D_{\varepsilon} (\xi) = \xi_1^4 + \xi_1^2 + {\mathfrak c}_s^2 |\xi_\perp|^2 + 2 \varepsilon ^2 \xi_1^2 |\xi_\perp|^2 + \varepsilon ^4 |\xi_\perp|^4 \geq \frac{|\xi_\perp|^2}{C} . $$ Using \eqref{Atchoum} we get \begin{align*} \| \mathcal{A}_{\varepsilon }^{VII} \|_{L^p}^q \leq & \ C \int_{\mathbb R^3} {\bf 1}_{ \{ 1 \leq |\xi_\perp| \leq \varepsilon ^{-\gamma}, \, 1 \leq |\xi_1|^\nu \leq |\xi_\perp| \}} \left( \frac{\xi_1^{2q}}{ |\xi_\perp|^{2q} } + \varepsilon ^{2q} \right) \, d \xi \\ \leq & \ C \int_{|\xi_\perp|\geq 1} \frac{|\xi_\perp|^{\frac{2q+1}{\nu}}}{|\xi_\perp|^{2q} } \, d\xi_\perp + C \varepsilon ^{2q} \int_1^{\varepsilon ^{-\gamma}} R^{1+\frac{1}{\nu}} \, dR \leq C_q \end{align*} provided that $2q - \frac{2q+1}{\nu} > 2 $ and $2q - \gamma (2 + \frac{1}{\nu} ) \geq 0 $. These inequalities are equivalent to $ p < \frac{2\nu +1}{3}$ and $ p \leq \frac{\gamma(2\nu +1)}{\gamma(2\nu +1) - 2 \nu}$, respectively. Since $\nu > 5/2$, we have $ \frac{2\nu +1}{3} > 2 $ and $ \frac{4\nu}{2\nu+1} > 5/3 $ and $ \frac{\nu}{\nu-1} < 5/3 $. It is easy to see that $ \frac{\gamma(2\nu +1)}{\gamma(2\nu +1)-2\nu} > 2$ if and only if $ \gamma < \frac{4\nu}{2\nu+1} $, and that $ \frac{\gamma(2\nu +1)}{\gamma(2\nu +1)-2\nu} > \frac{2\nu +1}{3}$ if and only if $ \gamma < \frac{\nu}{\nu-1} $. Hence \be \label{Timide7} \left\{ \begin{array}{ll} \displaystyle{\forall \; 1 \leq \gamma \leq \frac{\nu}{\nu-1}, \quad \forall \; 2 \leq p < \frac{2\nu +1}{3}} , & \quad \quad \quad \| \mathcal{A}_{\varepsilon }^{VII} \|_{L^p} \leq C(p,\nu) \\ \\ \displaystyle{ \forall \; \frac{\nu}{\nu-1} < \gamma \leq \frac53, \quad \forall \; 2 \leq p \leq \frac{\gamma(2\nu +1)}{\gamma(2\nu +1)-2\nu} ,} & \quad \quad \quad \| \mathcal{A}_{\varepsilon }^{VII} \|_{L^p} \leq C(\gamma,\nu) . \end{array} \right. \ee We now choose the parameters $\alpha$, $\beta$, $\gamma$ and $\nu$. 
In view of \eqref{Timide3} and \eqref{Timide6}, we fix $\beta = 3/2$, so that $ 2 \beta = \beta/(\beta-1) = 3 $. We set $\alpha = 5/3 > 3/2 = \beta $. Then by \eqref{Timide1}, \eqref{Timide2}, \eqref{Timide3} and \eqref{Timide6} it follows that $$ \forall \; 2 \leq p \leq 3, \quad \quad \quad \| \mathcal{A}_{\varepsilon }^{I} \|_{L^p} + \| \mathcal{A}_{\varepsilon }^{II} \|_{L^p} + \| \mathcal{A}_{\varepsilon }^{III} \|_{L^p} + \| \mathcal{A}_{\varepsilon }^{VI} \|_{L^p} \leq C . $$ For the other terms, we notice that in the case $1 \leq \gamma \leq \frac{\nu}{\nu-1}$ we have $$ \frac{2\gamma (2 \nu + 1)}{2\nu+\gamma} \leq \frac{4 \nu + 2}{2\nu- 1} , $$ with equality if $\gamma = \frac{\nu}{\nu-1}$. We also observe that $$ \frac{2\nu +1}{3} < \frac{4 \nu + 2}{2\nu-1} < \frac83 \quad {\rm if} \quad \nu < \frac72, \quad \quad \quad {\rm respectively} \quad \quad \quad \frac83 < \frac{4 \nu + 2}{2\nu-1} < \frac{2\nu +1}{3} \quad {\rm if} \quad \nu > \frac72 . $$ Then we fix $\nu = 7 / 2$ and $\gamma = \frac{\nu}{\nu-1} = 7 / 5 < 5 / 3 $ and using \eqref{Timide4}, \eqref{Timide5} and \eqref{Timide7} we obtain $$ \forall \; 2 \leq p < \frac83, \quad \quad \quad \| \mathcal{A}_{\varepsilon }^{IV} \|_{L^p} + \| \mathcal{A}_{\varepsilon }^{V} \|_{L^p} + \| \mathcal{A}_{\varepsilon }^{VII} \|_{L^p} \leq C . $$ This concludes the proof of $(iii)$. $(iv)$ We use the same inequalities as in the three-dimensional case with $ 1 < \nu < 3$ and $\alpha$, $\beta$, $\gamma \in (1,2)$ satisfying $\beta \leq \alpha$ and $ \gamma \leq \alpha$. 
We get $$ \begin{array}{llll} \displaystyle{ \forall \; 2 \leq p < \infty}, & \quad \| \mathcal{A}_{\varepsilon }^{I} \|_{L^p} \leq C_p; & \quad \quad \quad \displaystyle{ \forall \; 2 \leq p \leq 4 \alpha - 2}, & \quad \| \mathcal{A}_{\varepsilon } ^{II} \|_{L^p} \leq C_p; \\ \\ \displaystyle{ \forall \; 2 \leq p \leq \frac{2\beta}{2 - \beta},} & \quad \| \mathcal{A}_{\varepsilon }^{III} \|_{L^p} \leq C(\beta); & \quad \quad \quad \displaystyle{ \forall \; 2 \leq p \leq \frac{2\gamma(\nu+1)}{\gamma + \nu(2-\gamma)},} & \quad \| \mathcal{A}_{\varepsilon }^{IV} \|_{L^p} \leq C(\gamma, \nu) ; \\ \\ \displaystyle{ \forall \; 2 \leq p < 2\frac{\nu+1}{\nu-1},} & \quad \| \mathcal{A}_{\varepsilon }^{V} \|_{L^p} \leq C_p ; & \quad \quad \quad \displaystyle{ \forall \; 2 \leq p < \infty,} & \quad \| \mathcal{A}_{\varepsilon } ^{VI} \|_{L^p} \leq C_p \end{array} $$ and $$ \forall \; 1 \leq \gamma \leq \frac{\nu}{\nu-1}, \quad \forall \; 2 \leq p < \frac{\nu +1}{3-\nu} , \quad \quad \quad \| \mathcal{A}_{\varepsilon }^{VII} \| _{L^p} \leq C_p . $$ Then we choose $$ \beta = \frac43, \quad \quad \quad \alpha = \frac53 , \quad \quad \quad \nu = 3^- , \quad \quad \quad \gamma = \frac{\nu}{\nu-1} = \frac32^+ ,$$ so that $ \alpha > \beta $ and $ \alpha > \gamma$. We infer that $$ \forall \; 2 \leq p < 4 , \quad \quad \quad \| \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p . $$ This completes the proof in the case $N=2$.\ $\Box$ \subsection{Proof of Proposition \ref{Born}} We first recall the Fourier multiplier properties of the kernels $\mathcal{K}_{\varepsilon }^1, $ $ \mathcal{K}_{\varepsilon }^{\perp}$ and $\mathcal{K}_{\varepsilon}^{1,j }$. We skip the proof since it is the same as in section 5.2 in \cite{BGS1} and does not depend on the space dimension $N$. \begin{lem} \label{Multiply} Let $1 < q < \infty$. 
There exists $C_q>0$ (depending also on ${\mathfrak c}_s$) such that for any $ \varepsilon \in (0, 1 )$, any $2 \leq j \leq N$ and $h \in L^q$ we have \begin{align*} & \| \mathcal{K}^1_{\varepsilon } \star h \|_{L^q} \\ & + \| \partial_{z_1} \mathcal{K}^1_{\varepsilon } \star h \|_{L^q} + \| \nabla_{z_\perp} \mathcal{K}^1_{\varepsilon } \star h \|_{L^q} \\ & + \| \partial_{z_1}^2 \mathcal{K}^1_{\varepsilon } \star h \|_{L^q} + \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \mathcal{K}^1_{\varepsilon } \star h \|_{L^q} + \varepsilon ^2 \| \nabla_{z_\perp}^2 \mathcal{K}^1_{\varepsilon} \star h \|_{L^q} \leq C_q \| h \|_{L^q} , \end{align*} \begin{align*} & \| \mathcal{K}^\perp_{\varepsilon } \star h \|_{L^q} \\ & + \varepsilon \| \partial_{z_1} \mathcal{K}^\perp_{\varepsilon } \star h \|_{L^q} + \varepsilon ^2 \| \nabla_{z_\perp} \mathcal{K}^\perp_{\varepsilon } \star h \|_{L^q} \\ & + \varepsilon ^2 \| \partial_{z_1}^2 \mathcal{K}^\perp_{\varepsilon } \star h \|_{L^q} + \varepsilon ^3 \| \partial_{z_1} \nabla_{z_\perp} \mathcal{K}^\perp_{\varepsilon } \star h \|_{L^q} + \varepsilon ^4 \| \nabla_{z_\perp}^2 \mathcal{K}^\perp_{\varepsilon } \star h \|_{L^q} \leq C_q \| h \|_{L^q} \end{align*} and \begin{align*} & \| \mathcal{K}^{1,j}_ {\varepsilon } \star h \|_{L^q} \\ & + \| \partial_{z_1} \mathcal{K}^{1,j}_{\varepsilon } \star h \|_{L^q} + \varepsilon \| \nabla_{z_\perp} \mathcal{K}^{1,j}_{\varepsilon } \star h \|_{L^q} \\ & + \varepsilon \| \partial_{z_1}^2 \mathcal{K}^{1,j}_{\varepsilon } \star h \|_{L^q} + \varepsilon ^2 \| \partial_{z_1} \nabla_{z_\perp} \mathcal{K}^{1,j}_{\varepsilon } \star h \|_{L^q} + \varepsilon ^3 \| \nabla_{z_\perp}^2 \mathcal{K}^{1,j}_{\varepsilon } \star h \|_{L^q} \leq C_q \| h \|_{L^q}. \end{align*} \end{lem} The proof of \eqref{goodestimate} is then divided into 5 Steps. 
\noindent {\bf Step 1.} There is $ \varepsilon _ 1 > 0 $ and for any $1 < q < \infty $ there exists $C_q$ (depending also on $F$) such that for all $\varepsilon \in (0, \varepsilon _1)$, \begin{align*} \| \mathcal{A}_{\varepsilon } \|_{L^q} + & \ \| \nabla_z \mathcal{A}_{\varepsilon } \|_{L^q} + \| \partial^2_{z_1} \mathcal{A}_{\varepsilon } \|_{L^q} + \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^q} + \varepsilon ^2 \| \nabla_{z_\perp}^2 \mathcal{A}_{\varepsilon } \|_{L^q} \nonumber \\ & \leq C_q \Big( \| \mathcal{A}_{\varepsilon } \|^2_{L^{2q}} + \varepsilon ^2 \Big[ \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^{2q}} + \varepsilon \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^{2q}} \Big]^2 \Big) . \end{align*} The proof is very similar to that of Lemma 6.2 p. 268 in \cite{BGS1} and thus is only sketched. Indeed, if $ U = \rho e ^{ i \phi }$ is a finite energy solution to (TW$_c$) such that $ \frac{ r_0 }{2 } \leq \rho \leq 2 r _0$ then the first equation in \eqref{phasemod} can be written as $$ 2 r_0 ^2 \Delta \phi = c \frac{\partial }{\partial x_1} ( \rho ^2 - r_0 ^2) - 2 \mbox{div}\left( (\rho ^2 - r_0 ^2) \nabla \phi \right) $$ and this gives $$ 2 r_0 ^2 \frac{ \partial \phi}{\partial x_j } = c R_j R_1 ( \rho ^2 - r_0 ^2) - 2 \sum_{k = 1}^{N} R_j R_k \left( ( \rho ^2 - r_0 ^2 ) \frac{ \partial \phi}{\partial x_k } \right), $$ where $R_k$ is the Riesz transform (defined by $ R_k f = \mathscr{F} ^{-1} \left( \frac{ i \xi _k}{|\xi |} \widehat{f} \right)$). It is well-known that the Riesz transform maps continuously $ L^p(\mathbb R^N)$ into $ L^p(\mathbb R^N)$ for $ 1 < p < \infty$. 
From the above we infer that for any $ q \in (1, \infty) $ and any $ j \in \{ 1, \dots, N \}$ we have $$ \Big\| \frac{ \partial \phi}{\partial x_j } \Big\|_{L^q} \leq C(q) \| \rho ^2 - r_0 ^2 \|_{L^q} + C(q) \sum_{k=1}^N \Big\| (\rho ^2 - r_0 ^2 ) \frac{ \partial \phi}{\partial x_k } \Big\|_{L^q} \leq C(q) \| \rho ^2 - r_0 ^2 \|_{L^q} + C(q) \| \rho ^2 - r_0 ^2 \|_{L^{\infty } } \| \nabla \phi \|_{L^q} $$ and this implies $$ \| \nabla \phi \|_{L^q} \leq C(q) \| \rho ^2 - r_0 ^2 \|_{L^q} + C(q) \| \rho ^2 - r_0 ^2 \|_{L^{\infty } } \| \nabla \phi \|_{L^q}. $$ If $\| \rho ^2 - r_0 ^2 \|_{L^{\infty } } $ is sufficiently small we get $ \| \nabla \phi \| _{L^q} \leq \tilde{C} (q) \| \rho ^2 - r_0 ^2 \|_{L^q} \leq K(q) \| \rho - r_0 \| _{L^q}.$ By scaling, this estimate implies that for $1 < q < \infty$, \begin{eqnarray} \label{phasestimate1} \| \partial_{z_1} \varphi_{\varepsilon } \|_{L^q} + \varepsilon \| \nabla_{z_\perp} \varphi_{\varepsilon } \|_{L^q} \leq C_q \| \mathcal{A}_{\varepsilon } \|_{L^q} . \end{eqnarray} Hence, by H\"older's inequality and Lemma \ref{Grenouille} $(ii)$, \begin{align*} \| G_{\varepsilon } \|_{L^q} \leq & \ C_q \Big( \| \mathcal{A}_{\varepsilon } \|^2_{L^{2q}} + \varepsilon ^2 \| \mathcal{A}_{\varepsilon } \|^3_{L^{3q}} + \varepsilon ^2 \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|^2_{L^{2q}} + \varepsilon ^4 \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|^2_{L^{2q}} \Big) \\ \leq & \ C_q \Big( \| \mathcal{A}_{\varepsilon } \|^2_{L^{2q}} + \varepsilon ^2 \Big[ \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^{2q}} + \varepsilon \| \nabla_{z_\perp} \mathcal{A}_ {\varepsilon } \|_{L^{2q}} \Big]^2 \Big) . \end{align*} Taking derivatives up to order $2$ in \eqref{Henry}, the conclusion then follows from Lemma \ref{Multiply}. \noindent {\bf Step 2.} Let $N=3$. 
There is $ \varepsilon _2 > 0$ and for any $1 < p < 3/2 $ there exists $C_p$ (also depending on $F$) such that for any $\varepsilon \in (0, \varepsilon _2)$ there holds \begin{align*} \| \mathcal{A}_{\varepsilon } \|_{L^p} + \| \nabla \mathcal{A}_{\varepsilon } \|_{L^p} + \| \partial^2_{z_1} \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon ^2 \| \nabla_{z_\perp}^2 \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p . \end{align*} If $1 \leq q \leq 3/2$, we have by Lemma \ref{Grenouille} $(i)$ $$ \varepsilon \Big[ \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^{2q}} + \varepsilon \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^{2q}} \Big] \leq C . $$ Thus for $ 1 < q \leq 3/2$ we infer from Step 1 that \begin{align} \label{Garulfo} \| \mathcal{A}_{\varepsilon } \|_{L^q} + \| \nabla_z \mathcal{A}_{\varepsilon } \|_{L^q} + & \ \| \partial^2_{z_1} \mathcal{A}_{\varepsilon } \|_{L^q} + \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^q} + \varepsilon ^2 \| \nabla_{z_\perp}^2 \mathcal{A}_{\varepsilon } \|_{L^q} \leq C_q + C_q \| \mathcal{A}_{\varepsilon } \|^2_{L^{2q}} . \end{align} If $ 1 < p < 4/3 $, we use \eqref{Garulfo} combined with Lemma \ref{Grenouille} $(iii)$ with exponent $ 2p \in [2,8/3)$ to get \begin{align} \label{Prof} \| \mathcal{A}_{\varepsilon } \|_{L^p} + \| \nabla_z \mathcal{A}_{\varepsilon } \|_{L^p} + & \, \| \partial^2_{z_1} \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon ^2 \| \nabla_{z_\perp}^2 \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p . \end{align} This proves Step 2 for $1 < p < 4/3$. In dimension $N=3$, the Sobolev inequality does not enable us to improve the $L^q$ integrability of $\mathcal{A}_{\varepsilon }$ to some $q>8/3$. 
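Indeed, \eqref{Prof} provides uniform $W^{1,p}$ bounds only for $p < 4/3$, and the corresponding range of Sobolev exponents satisfies $$ \sup_{1 < p < \frac43} \frac{3p}{3-p} = \frac{3 \cdot \frac43}{3 - \frac43} = \frac{12}{5} < \frac83 . $$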
We thus rely on the decomposition of $\mathcal{A}_{\varepsilon }$ as $ \mathcal{A}_{\varepsilon } = \mathcal{A}_{\varepsilon }^{I} + \mathcal{A}_{\varepsilon }^{II} + \mathcal{A}_{\varepsilon }^{III} + \mathcal{A}_{\varepsilon }^{IV} + \mathcal{A}_{\varepsilon }^{V} + \mathcal{A}_{\varepsilon }^{VI} + \mathcal{A}_{\varepsilon }^{VII} , $ exactly as in Lemma \ref{Grenouille}. We choose $\alpha = 5/3$, $\beta = 3/2$. By the estimates in the proof of Lemma \ref{Grenouille} $(iii)$ we have then $$ \forall \; 2 \leq p \leq 3, \quad \quad \quad \| \mathcal{A}_{\varepsilon }^{I} \|_{L^p} + \| \mathcal{A}_{\varepsilon }^{II} \|_{L^p} + \| \mathcal{A}_{\varepsilon }^{III} \|_{L^p} + \| \mathcal{A}_{\varepsilon }^{VI} \|_{L^p} \leq C . $$ It remains to bound $\mathcal{A}_{\varepsilon }^{IV}$, $\mathcal{A}_{\varepsilon }^{V}$ and $\mathcal{A}_{\varepsilon }^{VII}$ in $L^{3^-}$. In view of \eqref{Timide5}, we choose $\nu = 5/2$, so that $\frac{4\nu+2}{2\nu-1} = 3$, and thus $$ \forall \; 2 \leq p < 3, \quad \quad \quad \| \mathcal{A}_{\varepsilon }^V \|_{L^p} \leq C_p . $$ We cancel out $\mathcal{A}_{\varepsilon }^{IV}$ by taking $\gamma = 5/3 = \alpha$. Next we turn our attention to the ``bad term'' $\mathcal{A}_{\varepsilon }^{VII}$. By \eqref{Prof} we get $$ \forall \; 1 < p < \frac43 , \quad \quad \quad \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p , $$ hence, by the Riesz-Thorin theorem, $$ \forall \; 4 < r < \infty , \quad \quad \quad \| \xi_{\perp} \widehat{\mathcal{A}}_{\varepsilon } \|_{L^r} = \| \mathscr{F}( \nabla_{z_\perp} \mathcal{A}_{\varepsilon } ) \|_{L^r} \leq C_r . 
$$ Consequently, for $4 < r < \infty$, $ 2 < p < \infty $ and $q= p / (p-1) \in (1,2)$, using once again the Riesz-Thorin theorem and the H\"older inequality with exponents $\frac{r}{q} $ and $ \frac{r}{r-q} $ we get \begin{align*} \| \mathcal{A}_{\varepsilon }^{VII} \|_{L^p}^q \leq & \ C \| \widehat{\mathcal{A}}_{\varepsilon }^{VII} \|_{L^q}^q \\ = & \ C \int_{\mathbb R^3} ( |\xi_\perp| \cdot |\widehat{\mathcal{A}}_ {\varepsilon } | )^q \times \frac{{\bf 1}_{ \{ 1 \leq |\xi_\perp| \leq \varepsilon ^{-\gamma},\, 1 \leq |\xi_1|^\nu \leq |\xi_\perp| \} }}{|\xi_\perp|^q} \ d \xi \\ \leq & \ C \| \xi_\perp \widehat{\mathcal{A}}_{\varepsilon } \|_{L^r}^{q} \Big( \int_{\mathbb R^3} \frac{{\bf 1}_{ \{ 1 \leq |\xi_\perp| \leq \varepsilon ^{-\gamma},\, 1 \leq |\xi_1|^\nu \leq |\xi_\perp| \} }}{|\xi_\perp|^{\frac{rq}{r-q}}} \ d \xi \Big)^{\frac{r-q}{r}} \\ \leq & \ C_{r,q} \Big( \int_1^{\varepsilon ^{-\gamma}} \frac{R^{1+\frac{1}{\nu}}}{R^{\frac{rq}{r-q}}} \ d R \Big)^{\frac{r-q}{r}} \leq C_{r,q} \end{align*} provided that $ \frac{rq}{r-q} > 2 + \frac{1}{\nu} = 12 / 5 $. Now let $ 2 \leq p < 3 $ be fixed, so that $ 3/2 < q \leq 2 $. Since $q \longmapsto \frac{4q}{4-q} $ is increasing on $ (3/2 , 2 ] $, we have $\frac{4q}{4-q} > 12 / 5$, and moreover $ \frac{rq}{r-q} \to \frac{4q}{4-q} $ as $r \to 4$. Hence we may choose $r > 4$ such that $ \frac{rq}{r-q} > 2 + \frac{1}{\nu} = 12 / 5 $. As a consequence, we have $$ \forall \; 2 \leq p < 3, \quad \quad \quad \| \mathcal{A}_{\varepsilon }^{VII} \|_{L^p} \leq C_p . $$ Collecting the above estimates for $\mathcal{A}_{\varepsilon }^{I} $, ... , $\mathcal{A}_{\varepsilon }^{VII}$ we deduce $$ \forall \; 2 \leq p < 3, \quad \quad \quad \| \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p . 
$$ Then we use once again \eqref{Garulfo} with exponent $ p/2 \in (1,3/2) $ to infer that Step 2 holds for $ 1 < p < 3/2 $.\\ In order to be able to use Step 1 with some $q > 3/2$, we need to prove that $\mathcal{A}_{\varepsilon }$, $\varepsilon \partial_{z_1} \mathcal{A}_{\varepsilon }$ and $ \varepsilon^2 \nabla_{z_\perp} \mathcal{A}_{\varepsilon }$ are uniformly bounded in $L^p$ for some $p>3$. This is what we will prove next. \noindent {\bf Step 3.} If $N=3$, the following bounds hold: $$ \left\{ \begin{array}{ll} \displaystyle{ \forall \; 2 \leq p < 15/4 = 3.75 }, & \quad \quad \quad \| \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p ; \\ \\ \displaystyle{ \forall \; 2 \leq p < 18/5 = 3.6 }, & \quad \quad \quad \varepsilon \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p ; \\ \\ \displaystyle{ \forall \; 2 \leq p < 18/5 = 3.6 }, & \quad \quad \quad \varepsilon ^2 \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p . \end{array} \right. $$ Fix $ r \in (3, \infty)$, $ p \in (2, \infty )$ and let $q= p/(p-1) \in (1,2)$ be the conjugate exponent of $p$. By the Riesz-Thorin theorem and the H\"older inequality with exponents ${\frac{r}{q}} $ and $ {\frac{r}{r-q}}$ we have \begin{align} \label{sorciere} \| \mathcal{A}_{\varepsilon } \|_{L^p}^q \leq & \ C \| \widehat{\mathcal{A}}_{\varepsilon } \|_{L^q}^q \nonumber \\ = & \ C \int_{\mathbb R^3} \Big[ (1 +|\xi_1|^2 + |\xi_\perp|) \cdot |\widehat{\mathcal{A}}_{\varepsilon }| \Big]^q \times \frac{d \xi}{(1 + |\xi_1|^2 + |\xi_\perp|)^q} \nonumber \\ \leq & \ C \Big( \| \widehat{\mathcal{A}}_{\varepsilon } \|_{L^r} +\| \xi_1^2 \widehat{\mathcal{A}}_{\varepsilon } \|_{L^r} + \| \xi_\perp \widehat{\mathcal{A}}_{\varepsilon } \|_{L^r} \Big)^{q} \Big( \int_{\mathbb R^3} \frac{ d \xi}{( 1+ |\xi_1|^2 + |\xi_\perp|)^{\frac{rq}{r-q}}} \Big)^{\frac{r-q}{r}} . 
\end{align} We bound the first parenthesis using again the Riesz-Thorin theorem: since $ r \in(3, \infty) $, its conjugate exponent $ r/(r-1) $ belongs to $ (1,3/2) $ and then Step 2 holds for the exponent $ r$ instead of $p$, hence \begin{align*} \| \widehat{\mathcal{A}}_{\varepsilon } \|_{L^r} + \| \xi_1^2 \widehat{\mathcal{A}}_{\varepsilon } \|_{L^r} + \| \xi_\perp \widehat{\mathcal{A}}_{\varepsilon } \|_{L^r} = & \ \| \mathscr{F}( \mathcal{A}_{\varepsilon }) \|_{L^r} + \| \mathscr{F}( \partial_{z_1}^2 \mathcal{A}_{\varepsilon }) \|_{L^r} + \| \mathscr{F}( \nabla_{z_\perp} \mathcal{A}_{\varepsilon } ) \|_{L^r} \\ \leq & \ C \Big( \| \mathcal{A}_{\varepsilon } \|_{L^{\frac{r}{r-1}}} + \| \partial_{z_1}^2 \mathcal{A}_{\varepsilon } \|_{L^{\frac{r}{r-1}}} + \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^{\frac{r}{r-1}}} \Big) \leq C_r . \end{align*} Next, we compute using cylindrical coordinates \begin{align*} \int_{\mathbb R^3} & \ \frac{ d \xi }{( 1+ |\xi_1|^2 + |\xi_\perp|)^{\frac{rq}{r-q}}} \\ & \leq 4 \pi \Big[ \int_0^1 \int_0^{+\infty} \frac{ R dR }{(1+R)^{\frac{rq}{r-q}}} \ d \xi_1 + \int_1^{+\infty} \int_0^{\xi_1^2} \frac{ R dR }{\xi_1^{\frac{2rq}{r-q}}} \ d \xi_1 + \int_1^{+\infty} \int_{\xi_1^2}^{+\infty} \frac{ R dR }{R^{\frac{rq}{r-q}}} \ d \xi_1 \Big] \\ & \leq 4 \pi \Big[ \int_0^{+\infty} \frac{ R dR }{(1+R)^{\frac{rq}{r-q}}} + \frac12 \int_1^{+\infty} \frac{\xi_1^4}{\xi_1^{\frac{2rq}{r-q}}} \ d \xi_1 + \frac{1}{\frac{rq}{r-q}-2} \int_1^{+\infty} \frac{d \xi_1}{\xi_1^{2(\frac{rq}{r-q}-2)} } \Big] . \end{align*} The integrals in the last line are finite provided that $\frac{rq}{r-q} > 2$ (for the first integral), $\frac{2rq}{r-q} > 5$ (for the second integral) and $ 2(\frac{rq}{r-q}-2) > 1$ (for the third integral), hence their sum is finite if $\frac{rq}{r-q} > 5/2 $. Note that $\frac{rq}{r-q} \to \frac{3q}{3-q} $ as $r \to 3$ and $ \frac{3q}{3-q} > 5/2 $ for $ q \in (\frac{15}{11}, 3)$. 
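For the reader's convenience, the threshold $15/11$ and its conjugate exponent $15/4$ (which appears in the statement of Step 3) are related through the elementary equivalences $$ \frac{3q}{3-q} > \frac52 \; \Longleftrightarrow \; 6q > 15 - 5q \; \Longleftrightarrow \; q > \frac{15}{11} \; \Longleftrightarrow \; p = \frac{q}{q-1} < \frac{15}{4} . $$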
If $ 2 < p < 15 / 4 = 3.75$ we have $ 15 / 11 < q < 2 $ and we may choose $ r > 3 $ (and $r$ close to $3$) such that $\frac{rq}{r-q} > 5/2 $. Then it follows from the two estimates above that $$ \forall \; 2 \leq p < \frac{15}{4} , \quad \quad \quad \| \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p . $$ Now we turn our attention to the bound on $\varepsilon \partial_{z_1} \mathcal{A}_{\varepsilon }$. Let $r \in ( 1, \frac 32)$, $ q \in [2, \infty)$ and $ s \in (r,q)$. We use the estimates in Step 2 for $ \Big\| \frac{ \partial ^2 \mathcal{A}_{\varepsilon }}{\partial z_i \partial z_j} \Big\|_{L^r}$ and \eqref{bourrinSKF} with $N=3$ for $ \Big\| \frac{ \partial ^2 \mathcal{A}_{\varepsilon }}{\partial z_i \partial z_j} \Big\|_{L^q}$, then we interpolate to get \begin{eqnarray} \label{81a} \Big\| \frac{ \partial ^2 \mathcal{A}_{\varepsilon }}{\partial z_1 ^2} \Big\|_{L^s} + \varepsilon \Big\| \frac{ \partial ^2 \mathcal{A}_{\varepsilon }}{\partial z_1 \partial z_j} \Big\|_{L^s} + \varepsilon ^2 \Big\| \nabla_{\perp} ^2 \mathcal{A} _{\varepsilon} \Big\|_{L^s} \leq C_{r, q} \varepsilon^{\left(-4 + \frac{2N-1}{q} \right) \frac{ 1 - \frac rs }{1 - \frac rq}}. \end{eqnarray} If $ s \in (r,3)$, from the Sobolev inequality and the above estimate we obtain \begin{eqnarray} \label{81} \| \partial_{z_1} \mathcal{A} _{\varepsilon} \|_{L^{\frac{3s}{3-s}}} \leq C_s \| \partial_{z_1} ^2 \mathcal{A}_{\varepsilon } \|_{L^s} ^{\frac 13} \| \partial _{z_1} \nabla_{\perp} \mathcal{A}_{\varepsilon } \|_{L^s} ^{\frac 23} \leq C_{s,r,q} \varepsilon ^{-\frac 23} \varepsilon^{\left(-4 + \frac{5}{q} \right) \frac{ 1 - \frac rs }{1 - \frac rq}}. \end{eqnarray} We have $ - \frac 23 + \left(-4 + \frac{5}{q} \right) \frac{ 1 - \frac rs }{1 - \frac rq} \to - \frac{14}{3} + \frac{4r}{s} $ as $ q \to \infty$ uniformly with respect to $ r \in [1, \frac 32]$ and $ s \in [1,3]$. 
If $ 1 < s < \frac{18}{11} \approx 1.636$ we have $ - \frac{14}{3} + \frac{4r}{s} \to - \frac{14}{3} + \frac 6s > -1$ as $ r \to \frac 32$. For any fixed $ s \in (1, \frac{18}{11})$ we may choose $q$ sufficiently large and $ r \in (1,\frac 32)$ sufficiently close to $ \frac 32$ such that $- \frac 23 + \left(-4 + \frac{5}{q} \right) \frac{ 1 - \frac rs }{1 - \frac rq} > -1$. Since $\frac{3s}{3-s} \nearrow \frac{18}{5} $ as $ s \nearrow \frac{18}{11}$, from \eqref{81} we get $$ \forall \; p \in \left(1, \frac {18}{5} \right), \qquad \quad \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p \varepsilon^{-1}. $$ Let $r \in ( 1, \frac 32)$, $ q \in [3, \infty)$ and $ s \in (r,3)$. Using the Sobolev inequality and \eqref{81a} we have $$ \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^{\frac{3s}{3-s}}} \leq C_s \| \partial_{z_1} \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^s}^{\frac13} \| \nabla^2_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^s}^{\frac23} \leq C_{s,r,q} \varepsilon ^{-\frac 53} \varepsilon^{\left(-4 + \frac{5}{q} \right) \frac{ 1 - \frac rs }{1 - \frac rq}}. $$ Proceeding as above we infer that $$ \forall \; 1 < p < 18 / 5, \quad \quad \quad \varepsilon ^2 \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p . $$ \noindent {\bf Step 4.} Conclusion in the case $N=3$. Fix $ 1 < p < 9 / 5 = 1.8 $. 
Since $ 2 < 2p < 18/ 5 < 15/4 $, we may use Step 1 (with $p$ instead of $q$) and Step 3 to deduce that \begin{align} \label{princecharmant} \| \mathcal{A}_{\varepsilon } \|_{L^p} + & \ \| \nabla_z \mathcal{A}_{\varepsilon } \|_{L^p} + \| \partial^2_{z_1} \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon ^2 \| \nabla_{z_\perp}^2 \mathcal{A}_{\varepsilon } \|_{L^p} \nonumber \\ & \leq C_p \Big( \| \mathcal{A}_{\varepsilon } \|^2_{L^{2p}} + \Big[ \varepsilon \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^{2p}} + \varepsilon ^2 \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^{2p}} \Big]^2 \Big) \leq C_p . \end{align} Hence \eqref{goodestimate} holds for $p \in (1, \; 9/5)$. In particular, by the Sobolev embedding $ W^{1,p} \hookrightarrow L^{\frac{3p}{3-p}}$ with $ 1 < p < 9/5 $ we have $$ \forall \, 1 < q < 9/2 = 4.5 , \quad \quad \quad \| \mathcal{A}_{\varepsilon } \|_{L^q} \leq C_q . $$ On the other hand, for any $1 < p < 9/5 $, $$ \varepsilon \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{W^{1,p}} = \varepsilon \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^{p}} + \varepsilon \| \partial^2_{z_1} \mathcal{A}_{\varepsilon } \|_{L^{p}} + \varepsilon \| \nabla_{z_\perp} \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^{p}} \leq C_p \quad \quad {\rm and} \quad \quad \varepsilon ^2 \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{W^{1,p}} \leq C_p , $$ hence by the Sobolev embedding, $$ \forall \; 1 < q < 9/2 = 4.5 , \quad \quad \quad \varepsilon \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^q} + \varepsilon ^2 \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^q} \leq C_q . $$ Thus we may apply Step 1 again to infer that \eqref{princecharmant} holds now for $1 < p < 9/4 = 2.25 $. 
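The Sobolev exponents driving this bootstrap can be checked directly: the map $ p \longmapsto \frac{3p}{3-p}$ is increasing on $(1,3)$ and $$ \frac{3p}{3-p} \Big\vert_{p = \frac95} = \frac{27/5}{6/5} = \frac92 , \quad \quad \quad \frac{3p}{3-p} \Big\vert_{p = \frac94} = \frac{27/4}{3/4} = 9 , \quad \quad \quad \frac{3p}{3-p} \to \infty \ \mbox{ as } \ p \to 3 . $$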
By the Sobolev embedding $ W^{1,p} \hookrightarrow L^{\frac{3p}{3-p}}$, we deduce as before that $$ \forall \; 1 < q < 9, \quad \quad \quad \| \mathcal{A}_{\varepsilon } \|_{L^q} + \varepsilon \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^q} + \varepsilon ^2 \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^q} \leq C_q . $$ Applying Step 1, we infer that \eqref{princecharmant} holds for any $1 < p < 9/2 $. Since $ 9/2 > 3 $, the Sobolev embedding yields $$ \forall \; 1 < p \leq \infty, \quad \quad \quad \| \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon ^2 \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p , $$ and the conclusion follows using again Step 1. \noindent {\bf Step 5.} Conclusion in the case $N=2$. The proof of \eqref{goodestimate} in the two-dimensional case is much easier: for any $1 < p < \frac 32$, we have by Step 1 and Lemma \ref{Grenouille} $(i)$ and $(iv)$ $$ \| \mathcal{A}_{\varepsilon } \|_{L^p} + \| \nabla_z \mathcal{A}_{\varepsilon } \|_{L^p} + \| \partial^2_{z_1} \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon \| \partial_{z_1} \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^p} + \varepsilon ^2 \| \nabla_{z_\perp}^2 \mathcal{A}_{\varepsilon } \|_{L^p} \leq C_p . $$ Thus, by the Sobolev embedding $W^{1,p} (\mathbb R^2 ) \hookrightarrow L^{\frac{2p}{2-p}} (\mathbb R^2 ) $, \begin{eqnarray} \label{85} \forall \; 1 < q < 6, \quad \quad \quad \| \mathcal{A}_{\varepsilon } \|_{L^q} \leq C_q \quad \quad \quad {\rm and} \quad \quad \quad \varepsilon \Big[ \| \partial_{z_1} \mathcal{A}_{\varepsilon } \|_{L^{q}} + \varepsilon \| \nabla_{z_\perp} \mathcal{A}_{\varepsilon } \|_{L^{q}} \Big] \leq C_q . \end{eqnarray} Applying Step 1 once again, we infer that \eqref{princecharmant} holds for any $ p \in (1, 3)$. Since $ 3 >2$, the Sobolev embedding implies that \eqref{85} holds for any $ q \in (1, \infty]$. 
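The two-dimensional numerology behind this bootstrap is simpler: $$ \frac{2p}{2-p} \Big\vert_{p = \frac32} = \frac{3}{1/2} = 6 , $$ and as soon as \eqref{princecharmant} holds for some $p > 2$, the embedding $W^{1,p}(\mathbb R^2) \hookrightarrow L^{\infty}(\mathbb R^2)$ applies.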
Repeating the argument we get the desired conclusion. \\ Since $A_{\varepsilon } = \varepsilon^{-2} ( \sqrt{1+\varepsilon ^2 \mathcal{A}_{\varepsilon }} - 1 )$, uniform bounds on $ A_{\varepsilon }$ and its derivatives up to order 2 follow immediately from \eqref{goodestimate}. It remains to prove \eqref{goodestimate2}. The uniform bounds on $\partial_{z_1} \varphi_{\varepsilon }$ and $ \varepsilon \nabla_{z_\perp} \varphi_ {\varepsilon }$ follow from \eqref{phasestimate1} and \eqref{goodestimate}. Let $ U = \rho {\sf e} ^{i \phi } $ be a finite energy solution to (TW$_c$). From the first equation in \eqref{phasemod} we have $$ 2 \rho ^2 \Delta \phi = c \frac{ \partial }{ \partial x_1 } ( \rho ^2 - r_0 ^2) - 2 \nabla ( \rho ^2) \cdot \nabla \phi. $$ If $ \rho \geq \frac{ r_0}{2} $ and $ c \in (0, {\mathfrak c}_s)$, using the properties of the Riesz transform we get for any $ j, k \in \{ 1, \dots, N \}$ and any $ q \in (1, \infty)$ $$ \Big\| \frac{ \partial ^2 \phi}{\partial x_j \partial x_k } \Big\|_{L^q} = \| R_j R_k ( \Delta \phi ) \|_{L^q} \leq C \| \Delta \phi \|_{L^q} \leq C \Big\| \frac{ \partial }{ \partial x_1 } ( \rho ^2 - r_0 ^2) \Big\| _{L^q} + C \| \nabla ( \rho ^2) \cdot \nabla \phi \|_{L^q}. 
$$ In the case $ U = U_{\varepsilon}$, $ \rho (x) = r_0 \sqrt{ 1 + \varepsilon ^2 \mathcal{A}_{\varepsilon} (z) }$, $\phi (x) = \varepsilon \varphi _{\varepsilon} (z)$, using \eqref{goodestimate} and \eqref{phasestimate1} we get $$ \Big\| \frac{ \partial ^2 \phi}{\partial x_j \partial x_k } \Big\|_{L^q} \leq C \varepsilon^{ 3 - \frac{2N -1}{q} } \Big\| \frac{ \partial \mathcal{A}_{\varepsilon}}{\partial z _1} \Big\| _{L^q} + C \varepsilon^{ 5 - \frac{2N -1}{q}} \Big\| \frac{ \partial \mathcal{A}_{\varepsilon}}{\partial z _1} \cdot \frac{ \partial \varphi_{\varepsilon }}{\partial z_1} \Big\| _{L^q} + C \varepsilon^{7 - \frac{2N -1}{q}} \sum_{ j =2}^N \Big\| \frac{ \partial \mathcal{A}_{\varepsilon}}{\partial z _j} \cdot \frac{ \partial \varphi_{\varepsilon }}{\partial z_j} \Big\| _{L^q} \leq C_q \varepsilon^{ 3 - \frac{2N -1}{q} }. $$ By scaling we find for $ j,k \in \{2, \dots, N \}$, \begin{eqnarray} \label{phasestimate3} \Big\| \frac{ \partial ^2 \varphi _{\varepsilon}}{\partial z_1 ^2} \Big\|_{L^q} + \varepsilon \Big\| \frac{ \partial ^2 \varphi _{\varepsilon}}{\partial z_1 \partial z_j } \Big\|_{L^q} + \varepsilon ^2 \Big\| \frac{ \partial ^2 \varphi _{\varepsilon}}{\partial z_j \partial z_k } \Big\|_{L^q} \leq C_q. \end{eqnarray} By assumption (A4) there is $ \delta > 0$ such that $F$ is $C^2$ on $ (\, ( r_0 -2 \delta )^2, ( r_0 + 2 \delta )^2) $. Let $ U = \rho {\sf e} ^{ i \phi }$ be a solution to (TW$_c$) such that $ r_0 - \delta \leq \rho \leq r_0 + \delta$. Differentiating (TW$_c$) and using standard elliptic regularity theory it is not hard to see that $ U \in W_{loc}^{4, p} ( \mathbb R^N)$ and $ \nabla U \in W^{3, p}(\mathbb R^N)$ for any $ p \in (1, \infty) $ (see the proof of Proposition 2.2 (ii) p. 1079 in \cite{M2}). We infer that $ \nabla \rho, \, \nabla \phi \in W^{3, p} ( \mathbb R^N)$ for $ p \in (1, \infty) $. 
Differentiating the first equation in \eqref{phasemod} with respect to $ x_1 $ we find \begin{eqnarray} \label{deriveq} c \frac{ \partial ^2}{\partial x_1 ^2} \left( \rho ^2 - r_0 ^2\right) = 2 \nabla \left( \frac{ \partial ( \rho ^2)}{\partial x_1 } \right) \cdot \nabla \phi + 2 \nabla ( \rho ^2) \cdot \nabla \left( \frac{ \partial \phi}{\partial x_1} \right) + 2 \frac{ \partial ( \rho ^2)}{\partial x_1 } \Delta \phi + 2 \rho ^2 \Delta \left( \frac{ \partial \phi}{\partial x_1} \right). \end{eqnarray} If $ U = U_{\varepsilon}$, $ \rho (x) = r_0 \sqrt{ 1 + \varepsilon ^2 \mathcal{A}_{\varepsilon } (z) } $ and $ \phi (x) = \varepsilon \varphi_{\varepsilon} (x)$, we perform a scaling and then we use \eqref{goodestimate}, \eqref{phasestimate1} and \eqref{phasestimate3} to get, for $ 1 < q < \infty$ and all $ \varepsilon $ sufficiently small, $$ \begin{array}{c} \displaystyle \Big\| \frac{ \partial ^2}{\partial x_1 ^2} \left( \rho ^2 - r_0 ^2 \right) \Big\|_{L^q} = \varepsilon^{ 4 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial ^2 \mathcal{A}_{\varepsilon}}{\partial z_1 ^2 } \Big\|_{L^q} \leq C_q \varepsilon^{ 4 + \frac{1 - 2N}{q}}, \\ \\ \displaystyle \Big\| \frac{ \partial ^2 ( \rho ^2)}{\partial x_1 ^2 } \cdot \frac{\partial \phi}{\partial x_1} \Big\|_{L^q} \leq \Big\| \frac{ \partial ^2 ( \rho ^2)}{\partial x_1 ^2 } \Big\|_{L^{2q}} \Big\| \frac{\partial \phi}{\partial x_1} \Big\|_{L^{2q}} = \varepsilon^{ 6 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial^2 \mathcal{A}_{\varepsilon}}{\partial z_1 ^2} \Big\|_{L^{2q}} \Big\| \frac{\partial \varphi _{\varepsilon}}{\partial z_1} \Big\|_{L^{2q}} \leq C_q \varepsilon^{ 6 + \frac{1 - 2N}{q}}, \\ \\ \displaystyle \Big\| \frac{ \partial ^2 ( \rho ^2)}{\partial x_1 \partial x_{k} } \cdot \frac{\partial \phi}{\partial x_k} \Big\|_{L^q} \leq \Big\| \frac{ \partial ^2 ( \rho ^2)}{\partial x_1 \partial x_k } \Big\|_{L^{2q}} \Big\| \frac{\partial \phi}{\partial x_k} \Big\|_{L^{2q}} = \varepsilon^{ 8 + \frac{1 - 2N}{q}} \Big\| \frac{ 
\partial^2 \mathcal{A}_{\varepsilon}}{\partial z_1 \partial z_k} \Big\|_{L^{2q}} \Big\| \frac{\partial \varphi _{\varepsilon}}{\partial z_k} \Big\|_{L^{2q}} \leq C_q \varepsilon^{ 6 + \frac{1 - 2N}{q}}, \\ \\ \displaystyle \Big\| \frac{ \partial ( \rho ^2)}{\partial x_1 } \cdot \frac{\partial ^2 \phi}{\partial x_1 ^2 } \Big\|_{L^q} \leq \Big\| \frac{ \partial ( \rho ^2)}{\partial x_1 } \Big\|_{L^{2q}} \Big\| \frac{\partial ^2 \phi}{\partial x_1 ^2 } \Big\|_{L^{2q}} = \varepsilon^{ 6 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial \mathcal{A}_{\varepsilon}}{\partial z_1 } \Big\|_{L^{2q}} \Big\| \frac{\partial ^2 \varphi _{\varepsilon}}{\partial z_1 ^2 } \Big\|_{L^{2q}} \leq C_q \varepsilon^{ 6 + \frac{1 - 2N}{q}}, \\ \\ \displaystyle \Big\| \frac{ \partial ( \rho ^2)}{\partial x_k } \cdot \frac{\partial ^2 \phi}{\partial x_1 \partial x_k } \Big\|_{L^q} \leq \Big\| \frac{ \partial ( \rho ^2)}{\partial x_k } \Big\|_{L^{2q}} \Big\| \frac{\partial ^2 \phi}{\partial x_1 \partial x_k } \Big\|_{L^{2q}} = \varepsilon^{ 8 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial \mathcal{A}_{\varepsilon}}{\partial z_k } \Big\|_{L^{2q}} \Big\| \frac{\partial ^2 \varphi _{\varepsilon}}{\partial z_1 \partial z_k } \Big\|_{L^{2q}} \leq C_q \varepsilon^{ 7 + \frac{1 - 2N}{q}}, \\ \\ \displaystyle \Big\| \frac{ \partial ( \rho ^2)}{\partial x_1 } \Big\|_{L^q} = \varepsilon^{ 3 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial \mathcal{A}_{\varepsilon}}{\partial z_1} \Big\|_{L^q} \leq C_q \varepsilon^{ 3 + \frac{1 - 2N}{q}}, \\ \\ \displaystyle \Big\| \frac{ \partial ^2 \phi }{\partial x_1 ^2 } \Big\|_{L^q} = \varepsilon^{ 3 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial ^2 \varphi_{\varepsilon}}{\partial z_1 ^2 } \Big\|_{L^q} \leq C_q \varepsilon^{ 3 + \frac{1 - 2N}{q}} \qquad \mbox{ and } \qquad \Big\| \frac{ \partial ^2 \phi }{\partial x_k ^2 } \Big\|_{L^q} = \varepsilon^{ 5 + \frac{1 - 2N}{q}} \Big\| \frac{ \partial ^2 \varphi_{\varepsilon}}{\partial z_k ^2 } \Big\|_{L^q} \leq C_q \varepsilon^{ 3 + \frac{1 - 
2N}{q}} . \end{array} $$ Hence $ \| \Delta \phi \|_{L^q} \leq C_q \varepsilon^{ 3 + \frac{1 - 2N}{q}} $ and then $\displaystyle \Big\| \frac{ \partial (\rho ^2)}{\partial x_1} \cdot \Delta \phi \Big\|_{L^q} \leq C_q \varepsilon^{ 6 + \frac{1 - 2N}{q}} $. From \eqref{deriveq} and the above estimates we infer that $\displaystyle \Big\| \Delta \left(\frac{ \partial \phi }{\partial x_1} \right) \Big\|_{L^q} \leq C_q \varepsilon^{ 4 + \frac{1 - 2N}{q}} $. As before, this implies $\displaystyle \Big\| \frac{ \partial ^3 \phi}{\partial x_1 \partial x_i \partial x_j } \Big\|_{L^q} \leq C_q \varepsilon^{ 4 + \frac{1 - 2N}{q}} $ for any $ i, j \in \{ 1, \dots, N \}$. By scaling we find $$ \Big\| \frac{ \partial ^3 \varphi _{\varepsilon}}{\partial z_1 ^3} \Big\|_{L^q} + \varepsilon \Big\| \nabla_{z_{\perp}} \frac{ \partial ^2 \varphi _{\varepsilon}}{\partial z_1 ^2} \Big\|_{L^q} + \varepsilon ^2 \Big\| \nabla_{z_{\perp}} ^2 \frac{ \partial \varphi _{\varepsilon}}{\partial z_1 } \Big\|_{L^q} \leq C_q. $$ Then \eqref{goodestimate2} follows from the last estimate, \eqref{phasestimate1} and \eqref{phasestimate3}. \ $\Box$ \subsection{Proof of Proposition \ref{convergence}} Let $(U_n, \varepsilon _n)_{n \geq 1}$ be a sequence as in Proposition \ref{convergence}. We denote $ c_n = \sqrt{{\mathfrak c}_s ^2 - \varepsilon _n ^2}$. By Corollary \ref{sanszero} we have $ \| \, |U_n| - r_0 \| _{L^{\infty}(\mathbb R^3 )} \to 0 $ as $ n \to \infty$, hence $|U_n| \geq \frac{ r_0}{2}$ in $ \mathbb R^3$ for all sufficiently large $n$, say $ n \geq n_0$. For $ n \geq n_0$ we have a lifting as in Theorem \ref{res1} or in \eqref{ansatz}, that is $$ U_n (x) = \rho _n (x) {\sf e}^{i\phi _n(x)} =r_0 \left( 1 + \varepsilon _n ^2A_n(z) \right) {\sf e}^{i \varepsilon _n \varphi _n(z) } = r_0 \sqrt{1+\varepsilon_n ^2 \mathcal{A}_{n }(z) }\ {\sf e}^{i\varepsilon _n \varphi_{n } (z)}, $$ $ \mbox{where } z_1 = \varepsilon _n x_1 , \; z_\perp = \varepsilon_n ^2 x_\perp . 
$ Let $\mathcal{W}_n = \partial_{z_1} \varphi_n / {\mathfrak c}_s $. Our aim is to show that $(\mathcal{W}_n )_{n \geq n_0}$ is a minimizing sequence for $\mathscr{S}_*$ in the sense of Theorem \ref{gs}. To that purpose we expand the functional $E_{c_n} (U_n)$ in terms of the (KP-I) action of $\mathcal{W}_n$. Recall that by \eqref{develo} we have \begin{align*} E_{c_n} (U_n) = & \ \varepsilon_n r_0^2 \int_{\mathbb R^3} \frac{1}{\varepsilon_n^2} \Big( \partial_{z_1} \varphi_n - c_n A_n \Big)^2 + (\partial_{z_1} \varphi_n)^2 ( 2 A_n + \varepsilon_n^2 A_n^2 ) + |\nabla_{z_\perp} \varphi_n |^2 ( 1 + \varepsilon_n^2 A_n )^2 \nonumber \\ & \hspace{2cm} + (\partial_{z_1} A_n)^2 + \varepsilon_n^2 |\nabla_{z_\perp} A_n|^2 + A_n^2 + {\mathfrak c}_s^2 \Big( \frac{\Gamma}{3} - 1 \Big) A_n^3 + \frac{{\mathfrak c}_s^2}{\varepsilon_n^6} V_4( \varepsilon_n^2 A_n) \nonumber \\ & \hspace{2cm} - c_n A_n^2 \partial_{z_1} \varphi_n \ dz . \end{align*} By Proposition \ref{Born}, $(A_n)_{n \geq n_0 }$ is bounded in $W^{1,p}(\mathbb R^N)$ for all $ p \in (1, \infty)$, hence it is bounded in $L^{\infty }(\mathbb R^3)$. Since $ F( r_0 ^2 ( 1 + \varepsilon ^2 A_{\varepsilon } )) = F( r_0 ^2) - {\mathfrak c}_s ^2 \varepsilon ^2 A_{\varepsilon } + \mathcal{O} ( \varepsilon ^4 A_{\varepsilon } ^2 ) = - c^2( \varepsilon) \varepsilon ^2 A_{\varepsilon } - \varepsilon^4 A_{\varepsilon } + \mathcal{O} ( \varepsilon ^4 A_{\varepsilon } ^2 ) $, from the second equation in \eqref{MadTW}, Lemma \ref{BornEnergy} and Proposition \ref{Born} we get \begin{eqnarray} \label{approx1} \| \partial _{z_1} \varphi _{n } - c _n A_{n } \|_{L^2} = \mathcal{O}(\varepsilon_n ^2). \end{eqnarray} In particular, we have $ \displaystyle \int_{\mathbb R^3} \frac{1}{\varepsilon_n^2} \Big( \partial_{z_1} \varphi_n - c_n A_n \Big)^2 \, dz = \mathcal{O}(\varepsilon_n ^2) $ as $ n \to \infty$. 
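Here the second equality in the Taylor expansion of $ F( r_0 ^2 ( 1 + \varepsilon ^2 A_{\varepsilon } )) $ above is a purely algebraic consequence of the relation $ c^2( \varepsilon ) = {\mathfrak c}_s ^2 - \varepsilon ^2 $: $$ - {\mathfrak c}_s ^2 \varepsilon ^2 A_{\varepsilon } = - \left( c^2( \varepsilon ) + \varepsilon ^2 \right) \varepsilon ^2 A_{\varepsilon } = - c^2( \varepsilon ) \varepsilon ^2 A_{\varepsilon } - \varepsilon ^4 A_{\varepsilon }, $$ the quadratic remainder term being unchanged by this substitution.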
By Proposition \ref{Born}, $ \partial_{z_1} \varphi _n \in W^{2, p} (\mathbb R^N)$ for $ p \in (1, \infty)$. Integrating by parts we have $$ \int_{\mathbb R^N} ( \partial _{z_1} A _n ) ^2 - \frac{( \partial ^2_{z_1} \varphi _n ) ^2}{ c_n^2 }\, dz = - \int_{\mathbb R^N} \left( A _n - \frac{ \partial _{z_1} \varphi _n}{ c_n }\right) \left( \partial _{z_1} ^2 A _n + \frac{ \partial ^3_{z_1} \varphi _n}{ c _n }\right)\, dz $$ From the above identity, the Cauchy-Schwarz inequality, \eqref{approx1} and Proposition \ref{Born} we get $$ \Big\vert \int_{\mathbb R^N} ( \partial _{z_1} A _n ) ^2 - \frac{ ( \partial_{z_1} ^2\varphi _n )^2}{{\mathfrak c}_s ^2} \, dz \Big\vert \leq \left( \frac{1}{ c_n ^2 } - \frac{1}{{\mathfrak c}_s ^2} \right) \int_{\mathbb R^N} (\partial_{z_1} ^2 \varphi _n )^2 \, dz + \Big\| A_n - \frac{ \partial _{z_1} \varphi _n}{ c_n } \Big\|_{L^2} \Big\| \partial _{z_1} ^2 A _n + \frac{ \partial ^3_{z_1} \varphi _n}{ c _n } \Big\|_{L^2} = \mathcal{O}( \varepsilon _n ^2). $$ Similarly, using \eqref{approx1}, H\"older's inequality and Proposition \ref{Born} we find $$ \begin{array}{l} \displaystyle \Big| \int_{\mathbb R^3} A_n^2 - \frac{(\partial_{z_1} \varphi_n)^2}{{\mathfrak c}_s^2} \, dz \Big| + \Big| \int_{\mathbb R^3} A_n^3 - \frac{(\partial_{z_1} \varphi_n)^3}{{\mathfrak c}_s^3} \, dz \Big| \\ \\ \displaystyle + \Big| \int_{\mathbb R^3} A_n^2 \partial_{z_1} \varphi_n - \frac{(\partial_{z_1} \varphi_n)^3}{{\mathfrak c}_s^2} \, dz \Big| + \Big| \int_{\mathbb R^3} A_n ( \partial_{z_1} \varphi _n )^2 - \frac{(\partial_{z_1} \varphi_n)^3}{{\mathfrak c}_s} \, dz \Big| = \mathcal{O}(\varepsilon_n ^2) . 
\end{array} $$ Since $(A_n)_{n \geq n_0}$ is bounded in $L^{\infty}(\mathbb R^3)$, using Lemma \ref{BornEnergy} we find $$ \int_{\mathbb R^3 } |\nabla_{z_\perp} \varphi_n |^2 ( 1 + \varepsilon_n^2 A_n )^2 \, dz = \int_{\mathbb R^3 } |\nabla_{z_\perp} \varphi_n |^2 \, dz + \mathcal{O}( \varepsilon _n ^2) = {\mathfrak c}_s ^2 \int_{\mathbb R^3 } |\nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W} _n |^2 \, dz + \mathcal{O}( \varepsilon _n ^2) . $$ Recall that $ V_4( \alpha ) = \mathcal{O}(\alpha ^4) $ as $ \alpha \to 0$, hence Proposition \ref{Born} implies that $$ \int_{\mathbb R^3 } \varepsilon _n ^2 A_n ^2 (\partial_{z_1} \varphi _n )^2 + \varepsilon_n^2 |\nabla_{z_\perp} A_n|^2 + \frac{{\mathfrak c}_s^2}{\varepsilon_n^6} V_4( \varepsilon_n^2 A_n) \, dz = \mathcal{O} (\varepsilon _n ^2). $$ Inserting the above estimates into \eqref{develo} we obtain \begin{eqnarray} \label{ecun} \frac{E_{c(\varepsilon_n)} (U_n)}{{\mathfrak c}_s^2 r_0^2 \varepsilon_n} = \int_{\mathbb R^3} \Big| \nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}_n \Big|^2 + \frac{1}{{\mathfrak c}_s^2}\, (\partial_{z_1} \mathcal{W}_n )^2 + \frac{\Gamma}{3}\, \mathcal{W}_n^3 + \frac{1}{{\mathfrak c}_s^2}\, \mathcal{W}_n^2 \, dz + \mathcal{O}(\varepsilon_n ^2) = \mathscr{S} ( \mathcal{W}_n ) + \mathcal{O}(\varepsilon_n ^2) . \end{eqnarray} From the above estimate and the upper bound on $E_{c_n} (U_n) = T_{c_n}$ given by Proposition \ref{asympto} $(ii)$ we infer that $$ \mathscr{S} ( \mathcal{W}_n ) = \frac{E_{c(\varepsilon_n)} (U_n)}{{\mathfrak c}_s^2 r_0^2 \varepsilon_n} + \mathcal{O}(\varepsilon_n ^2) = \frac{ T_{c_n}}{{\mathfrak c}_s^2 r_0^2 \varepsilon_n} + \mathcal{O}(\varepsilon_n ^2) \leq \mathscr{S}_{\rm min} + \mathcal{O}(\varepsilon_n ^2) = \mathscr{S}_{*} + \mathcal{O}(\varepsilon_n ^2) . 
$$ Similarly we have $$ \int_{\mathbb R^3} |\nabla _{x_{\perp}} U_n |^2 \, dx = r_0 ^2 \varepsilon _n \int_{\mathbb R^3} ( 1 + \varepsilon _n ^2 A_n) ^2 |\nabla _{z_{\perp}} \varphi _n |^2 + \varepsilon _n ^2 |\nabla _{z_{\perp}} A_n |^2 \, dz = r_0 ^2 {\mathfrak c}_s ^2 \varepsilon _n \int_{\mathbb R^3} \Big| \nabla_{z_\perp} \partial_{z_1}^{-1} \mathcal{W}_n \Big|^2 \, dz + \mathcal{O}( \varepsilon_n ^3). $$ Since $U_n$ satisfies the Pohozaev identity $\displaystyle E_{c_n}(U_n) = \int_{\mathbb R^3} | \nabla_{x_\perp} U_n |^2 \ dx $, comparing the above equation to the expression of $E_{c_n}(U_n) $ in \eqref{ecun} we find $$ \int_{\mathbb R^3} \frac{1}{{\mathfrak c}_s^2}\, (\partial_{z_1} \mathcal{W}_n )^2 + \frac{\Gamma}{3}\, \mathcal{W}_n^3 + \frac{1}{{\mathfrak c}_s^2}\, \mathcal{W}_n^2 \ dz = \mathcal{O}(\varepsilon _n^2) . $$ In order to apply Theorem \ref{gs}, we have to check that there is $ m_1 >0 $ such that for all $n$ sufficiently large there holds $$ \int_{\mathbb R^3} \mathcal{W}_n^2 + (\partial_{z_1} \mathcal{W}_n)^2 \ dz \geq m_1 . $$ By Lemma \ref{minoinf}, there exist $ k>0 $ (depending only on $F$) and $ n_1 \geq n_0$ such that $$ \forall n \geq n_1, \quad \| A_n \|_{L^\infty} \geq k . $$ Since $A_n$ tends to $0$ at infinity, after a translation we may assume that $$ |A_n(0)| = \| A_n \|_{L^\infty} \geq k . $$ By Proposition \ref{Born} we know that for all $ p \in (1, \infty)$ there is $ C_p > 0 $ such that $ \| A_{n } \|_{W^{1, p } } \leq C_p$ for any $ n \geq n_0$. Then Morrey's inequality (see e.g. Theorem IX.12 p. 166 in \cite{brezis}) implies that for any $ \alpha \in (0,1)$ there is $ C_{\alpha } > 0$ such that for all $ n \geq n_0$ and all $x, y \in \mathbb R^3$ we have $|A_{n }(x) - A_{n }(y) | \leq C_{\alpha } |x -y |^{\alpha}.$ We infer that $ |A_n| \geq k/2 $ in $B_r(0)$ for some $r>0$ independent of $n$, hence there is $m_1 > 0$ such that $$ \| A_n \|_{L^2} \geq \| A_n \|_{L^2(B_r(0))} \geq 2 m_1. 
$$ From \eqref{approx1} it follows that $\| \mathcal{W} _n - A_n \|_{L^2} \to 0 $ as $ n \to \infty$, hence $$ \| \mathcal{W}_n \|_{L^2} \geq \| \mathcal{W}_n \|_{L^2(B_r(0))} \geq m_1 \qquad \mbox{ for all } n \mbox{ sufficiently large.} $$ Then Theorem \ref{gs} implies that there exist $\mathcal{W} \in \mathscr{Y}(\mathbb R^3)$, a subsequence of $(\mathcal{W}_n)_{n \geq n_0}$ (still denoted $(\mathcal{W}_n)_{n \geq n_0}$), and a sequence $(z^n)_{n\geq n_0} \subset \mathbb R^3$ such that $$ \mathcal{W}_n ( \cdot - z^n ) \to \mathcal{W} \quad \quad \quad {\rm in} \quad \mathscr{Y}(\mathbb R^3) . $$ Moreover, there is $ \sigma > 0 $ such that $ z \longmapsto \mathcal{W}(z_1, \frac{1}{\sigma } z_{\perp})$ is a ground state (with speed $1/(2 {\mathfrak c}_s ^2)$) of (KP-I). We will prove that $ \sigma = 1$. Let $ x^n = \left( \frac{z_1^n}{\varepsilon _n}, \frac{ z_{\perp}^n}{\varepsilon_n ^2} \right).$ We denote $ \tilde{\mathcal{W}}_n = \mathcal{W}_n( \cdot - z^n)$, $ \tilde{A}_n = A_n (\cdot - z ^n)$, $ \tilde{ \varphi}_n = \varphi_n (\cdot - z^n)$, $\tilde{U}_n = U_n (\cdot -x^n).$ It is obvious that $\tilde{U}_n$ satisfies (TW$_{c_n}$) and all the previous estimates hold with $ \tilde{A}_n$, $ \tilde{ \varphi}_n $ and $\tilde{U}_n$ instead of $ A_n$, $\varphi_n$ and $U_n$, respectively. Since $ \tilde{\mathcal{W}}_n = \frac{1}{{\mathfrak c}_s } \partial_{z_1} \tilde{\varphi}_n$ and $ \tilde{\mathcal{W}}_n \to \mathcal{W}$ in $\mathscr{Y}(\mathbb R^3)$, we have \begin{eqnarray} \label{conv1} \partial_{z_1} \tilde{\varphi}_n \to {\mathfrak c}_s \mathcal{W}, \qquad \quad \partial_{z_1}^2 \tilde{\varphi}_n \to {\mathfrak c}_s \partial_{z_1} \mathcal{W} \qquad \mbox{ and } \qquad \nabla_{z _{\perp}} \tilde{\varphi}_n \to {\mathfrak c}_s \nabla_{z _{\perp}} \partial_{z_1}^{-1} \mathcal{W} \qquad \mbox{ in } L^2(\mathbb R^3). 
\end{eqnarray} Integrating by parts, then using the Cauchy-Schwarz inequality, Proposition \ref{Born} and \eqref{approx1} we find $$ \begin{array}{l} \displaystyle \int_{\mathbb R^3} \Big| \partial _{z_1}^2 \tilde{\varphi} _n - c_n \partial_{z_1} \tilde{A}_n \Big| ^2 \, dz = - \int_{\mathbb R^3} ( \partial _{z_1} \tilde{\varphi} _n - c_n \tilde{A}_n ) ( \partial _{z_1}^3 \tilde{\varphi} _n - c_n \partial_{z_1}^2 \tilde{A}_n ) \, dz \\ \\ \leq \| \partial _{z_1} \tilde{\varphi} _n - c_n \tilde{A}_n \|_{L^2} \|\partial _{z_1}^3 \tilde{\varphi} _n - c_n \partial_{z_1}^2 \tilde{A}_n \|_{L^2} = \mathcal{O}(\varepsilon_n ^2), \end{array} $$ hence $\| \partial _{z_1}^2 \tilde{\varphi} _n - c_n \partial_{z_1} \tilde{A}_n \|_{L^2} = \mathcal{O}(\varepsilon _n) \to 0$. Since $ c_n \to {\mathfrak c}_s$, from \eqref{approx1} and \eqref{conv1} we get \begin{eqnarray} \label{conv2} \tilde{A}_n \to \mathcal{W} \qquad \mbox{ and } \qquad \partial_{z_1} \tilde{A}_n \to \partial_{z_1} \mathcal{W} \qquad \mbox{ in } L^2( \mathbb R^3) \quad \mbox{ as } n \to \infty. \end{eqnarray} It is obvious that $ \tilde{A}_n$, $\tilde{\varphi}_n $ and $ \varepsilon _n$ satisfy \eqref{desing}. Let $ \psi \in C_c^{\infty}(\mathbb R^3)$. We multiply \eqref{desing} by $ \psi $, integrate by parts, then pass to the limit as $ n \to \infty$. We use Proposition \ref{Born}, \eqref{conv1} and \eqref{conv2} and after a straightforward computation we discover that $ \mathcal{W}$ satisfies the equation (SW) in ${\mathcal D} '( \mathbb R^3 )$. This implies that necessarily $ \sigma = 1$ and $ \mathcal{W} $ is a ground state of speed $1/(2 {\mathfrak c}_s ^2)$ to (KP-I). In particular, $ \mathcal{W} $ satisfies the Pohozaev identities \eqref{identites} and \eqref{Ident}. 
Since $ \tilde{\mathcal{W}}_n \to \mathcal{W}$ in $ \mathscr{Y}(\mathbb R^3) $, we have $ \mathscr{S} ( \mathcal{W}_n) = \mathscr{S} ( \tilde{\mathcal{W}}_n) \to \mathscr{S} ( \mathcal{W}) $ and \eqref{ecun} implies $$ \frac{ E_{c( \varepsilon _n)} ( U_n) }{{\mathfrak c}_s ^2 r_0 ^2 \varepsilon _n } = \mathscr{S} ( \mathcal{W} _n) + \mathcal{O}( \varepsilon_n ^2) = \mathscr{S}( \mathcal{W}) + o(1) = \mathscr{S}_{\rm min} + o(1), $$ that is \eqref{Ec} holds. Using the expression for the momentum in \eqref{momentlift}, then \eqref{conv1}, \eqref{conv2}, Proposition \ref{Born} and the Pohozaev identities \eqref{identites} and \eqref{Ident} we get $$ - \frac{ \varepsilon_n}{r_0 ^2 {\mathfrak c}_s ^3 } Q(U_n) = \frac{ \varepsilon_n}{r_0 ^2 {\mathfrak c}_s ^3 } \int_{\mathbb R^3} ( \rho _n ^2 - r_0 ^2) \frac { \partial \phi_n}{\partial x_1 } \, dx = \frac{1}{{\mathfrak c}_s ^3} \int_{\mathbb R^3} ( 2 A_n (z) + \varepsilon _n ^2 A_n ^2 (z) ) \frac{ \partial \varphi _n}{ \partial z_1} (z) \, dz \longrightarrow \frac{2}{{\mathfrak c}_s ^2} \int_{\mathbb R^3} \mathcal{W}^2 (z) \, dz = \mathscr{S}( \mathcal{W}) . $$ Hence $ - {\mathfrak c}_s Q(U_n) \sim r_0 ^2 {\mathfrak c}_s ^4 \mathscr{S}_{\rm min} \varepsilon_n ^{-1} $ as $ n \to \infty$. Together with \eqref{Ec} this implies that $(U_n)_{n\geq n_0 } $ satisfies \eqref{energy}. By Proposition \ref{Born} we know that $ (\tilde{A}_n)_{n \geq n_0}$, $ (\partial_{z_1} \tilde{A}_n)_{n \geq n_0}$, $ (\partial_{z_1} \tilde{\varphi }_n)_{n \geq n_0}$ and $ (\partial_{z_1} ^2 \tilde{\varphi }_n)_{n \geq n_0}$ are bounded in $L^p(\mathbb R^3) $ for $1 < p < \infty$. 
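The passage to the limit in the momentum integral above can be detailed as follows: after the translation by $z^n$ (which does not change the integrals), \eqref{conv1} and \eqref{conv2} give $ A_n \to \mathcal{W}$ and $ \partial_{z_1} \varphi _n \to {\mathfrak c}_s \mathcal{W}$ in $L^2(\mathbb R^3)$, while $ \varepsilon _n ^2 A_n ^2 \partial_{z_1} \varphi _n \to 0 $ in $L^1(\mathbb R^3)$ by the $L^p$ bounds in Proposition \ref{Born}, so that $$ \int_{\mathbb R^3} ( 2 A_n + \varepsilon _n ^2 A_n ^2 ) \frac{ \partial \varphi _n}{ \partial z_1} \, dz \longrightarrow \int_{\mathbb R^3} 2 \mathcal{W} \cdot {\mathfrak c}_s \mathcal{W} \, dz = 2 {\mathfrak c}_s \int_{\mathbb R^3} \mathcal{W} ^2 \, dz ; $$ dividing by $ {\mathfrak c}_s ^3 $ and using the Pohozaev identities \eqref{identites} and \eqref{Ident} gives the stated limit.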
From \eqref{conv1}, \eqref{conv2} and standard interpolation in $L^p $ spaces we find as $ n \to \infty$ \begin{eqnarray} \label{conv3} \tilde{A}_n \to \mathcal{W}, \qquad \partial_{z_1} \tilde{A}_n \to \partial_{z_1} \mathcal{W}, \qquad \partial_{z_1} \tilde{\varphi }_n \to {\mathfrak c}_s \mathcal{W} \quad \mbox{ and } \quad \partial_{z_1}^2 \tilde{\varphi }_n \to {\mathfrak c}_s \partial_{z_1} \mathcal{W} \quad \mbox{ in } L^p \end{eqnarray} for any $ p \in (1, \infty)$. Proceeding as in \cite{BGS1} (see Lemma 4.6 p. 262 and Proposition 6.1 p. 266 there) one can prove that for any multiindex $\alpha \in \mathbb N^N$ with $ |\alpha | \leq 2$, the sequences $ (\partial ^{\alpha } \tilde{A}_n)_{n \geq n_0}$, $ (\partial ^{\alpha } \partial_{z_1} \tilde{A}_n)_{n \geq n_0}$, $ ( \partial ^{\alpha } \partial_{z_1} \tilde{\varphi }_n)_{n \geq n_0}$ and $ (\partial ^{\alpha } \partial_{z_1} ^2 \tilde{\varphi }_n)_{n \geq n_0}$ are bounded in $L^p(\mathbb R^3) $ for $1 < p < \infty$. Then by interpolation we see that \eqref{conv3} holds in $W^{1,p}(\mathbb R^3) $ for all $ p \in (1, \infty)$. \ $\Box$ \subsection{Proof of Theorem \ref{res1} completed in the case $\boldsymbol{N=2}$} Assume that $ N =2$. Let $ (U_n, c_n )$ be a sequence of travelling waves to (NLS) satisfying assumption (b) in Theorem \ref{res1} such that $ c_n \to {\mathfrak c}_s$ as $ n \to \infty$. Let $ \varepsilon _n = \sqrt{{\mathfrak c}_s ^2 - c_n ^2}$. By Theorem \ref{th2dposit} we have $ \displaystyle \int_{\mathbb R^2} | \nabla U_n |^2 \, dx \to 0 $ as $ n \to \infty$ and then Lemma \ref{liftingfacile} implies that $\| \, | U_n | - r_0 \|_{L^{\infty}} \to 0 $; in particular, for $n$ sufficiently large we have a lifting $U_n (x) = \rho_n(x) {\sf e}^{i \phi _n (x)} = r_0 \Big( 1 + \varepsilon_n^2 A_n (z) \Big) {\sf e} ^{ i\varepsilon _n \varphi_n (z) } $ as in \eqref{ansatzKP} and the conclusion of Proposition \ref{Born} holds for $A_n$ and $ \varphi _n$. 
As in the proof of Proposition \ref{convergence} we obtain \begin{eqnarray} \label{approx2} \| \partial _{z_1} \varphi _{n } - c _n A_{n } \|_{L^2} = \mathcal{O}(\varepsilon_n ^2) \qquad \mbox{ and } \qquad \| \partial _{z_1} ^2 \varphi _{n } - c _n \partial_{z_1} A_{n } \|_{L^2} = \mathcal{O}(\varepsilon_n ) \qquad \mbox{ as } n \to \infty. \end{eqnarray} Let $ k_n = \displaystyle \int_{\mathbb R^2} |\nabla U_n(x)|^2 \, dx$. We denote $\mathcal{W}_n = {\mathfrak c}_s^{-1} \partial_{z_1} \varphi_n $. By \eqref{approx2} we have $\| \mathcal{W}_n - A_n \|_{L^2} = \mathcal{O}(\varepsilon_n^2)$. As in the proof of Proposition \ref{convergence} we find $ \displaystyle \Big| \int_{\mathbb R^2} (\partial_{z_1} A_n )^2 - (\partial_{z_1} \mathcal{W} _n )^2 \, dz \Big| = \Big\vert \int_{\mathbb R^2} ( \partial _{z_1} A _n ) ^2 - \frac{ ( \partial_{z_1} ^2\varphi _n )^2}{{\mathfrak c}_s ^2} \, dz \Big\vert = \mathcal{O}( \varepsilon _n ^2).$ Using \eqref{approx2} and Proposition \ref{Born} we get \begin{align} \label{kn} k_n = & \ \int_{\mathbb R^2} |\nabla U_n|^2 \ dx = \varepsilon_n r_0^2 \int_{\mathbb R^2} (\partial_{z_1} \varphi_n)^2 (1 + \varepsilon_n^2 A_n)^2 + \varepsilon_n^2 (\partial_{z_1} A_n)^2 + \varepsilon_n^2 (\partial_{z_2} \varphi_n)^2 (1 + \varepsilon_n^2 A_n)^2 + \varepsilon_n^4 (\partial_{z_2} A_n)^2 \ dz \nonumber \\ = & \ \varepsilon_n r_0^2 \int_{\mathbb R^2} (\partial _{z_1} \varphi_n)^2 \ dz + \varepsilon_n^3 r_0^2 \int_{\mathbb R^2} \Big( 2 A_n (\partial_{z_1} \varphi_n ) ^2 + (\partial_{z_1} A_n )^2 + ( \partial_{z_2} \varphi _n )^2 \Big) \ dz + \mathcal{O}(\varepsilon_n^5) \nonumber \\ = & \ \varepsilon_n r_0^2 {\mathfrak c}_s^2 \int_{\mathbb R^2} \mathcal{W}_n^2 \ dz + \varepsilon_n^3 r_0^2 {\mathfrak c}_s^2 \int_{\mathbb R^2} \Big( 2 \mathcal{W}_n^3 + \frac{1}{{\mathfrak c}_s^2} ( \partial_{z_1} \mathcal{W}_n )^2 + ( \partial_{z_2}\partial_{z_1}^{-1} \mathcal{W}_n )^2 \Big) \ dz + \mathcal{O}(\varepsilon_n^5) . 
\end{align} Inverting this expansion we find the following expression of $ \varepsilon _n$ in terms of $ k_n$: \be \label{pepsi} \varepsilon_n = \frac{k_n}{r_0^2 {\mathfrak c}_s^2 \| \mathcal{W}_n \|_{L^2}^2 } - \frac{k_n^3}{r_0^6 {\mathfrak c}_s^6 \| \mathcal{W}_n \|_{L^2}^8} \int_{\mathbb R^2} \Big( 2 \mathcal{W}_n^3 + \frac{1}{{\mathfrak c}_s^2} ( \partial_{z_1} \mathcal{W}_n )^2 + ( \partial_{z_2}\partial_{z_1}^{-1} \mathcal{W}_n )^2 \Big) \ dz + \mathcal{O}(k_n^5) . \ee Recall that the mapping $U_n (c_n \cdot )$ is a minimizer of the functional $I(\psi ) = Q( \psi)+ \displaystyle \int_{\mathbb R^2} V( |\psi |^2) \, dx $ under the constraint $\displaystyle \int_{\mathbb R^2} |\nabla \psi |^2 \, dx = k_n$. Using this information, Proposition \ref{asympto} $(i)$, the fact that $c_n^2 = {\mathfrak c}_s^2 - \varepsilon_n^2 $ and \eqref{pepsi} we get \begin{align} \label{lapubelle} c_n Q(U_n) + \int_{\mathbb R^2} V(|U_n|^2) \ dx = & \, c_n ^2 I(U_n (c_n \cdot ) ) = c_n^2 I_{\rm min} (k_n) \nonumber \\ \leq & \, c_n^2 \left( - \frac{k_n}{{\mathfrak c}_s^2} - \frac{4 k_n^3}{27 r_0^4 {\mathfrak c}_s^{12} \mathscr{S}^2_{\rm min}} + \mathcal{O}(k_n^5) \right) \nonumber \\ = & \, - k_n + \frac{k_n^3}{r_0^4 {\mathfrak c}_s^6 \| \mathcal{W}_n \|_{L^2}^4} - \frac{4 k_n^3}{27 r_0^4 {\mathfrak c}_s^{10} \mathscr{S}_{\rm min}^2} + \mathcal{O}(k_n^5) . \end{align} Moreover, using the Taylor expansion \eqref{V}, we find $$ \int_{\mathbb R^2} V(|U_n|^2) \ dx = r_0^2 {\mathfrak c}_s^2 \varepsilon_n \int_{\mathbb R^2} \Big( A_n^2 + \varepsilon_n^2 \Big[ \frac{\Gamma}{3} - 1 \Big] A_n^3 + \frac{V_4(\varepsilon_n^2 A_n)}{\varepsilon_n^4} \Big) \ dz $$ and by \eqref{momentlift} we have $$ Q(U_n) = - \varepsilon_n r_0^2 \int_{\mathbb R^2} \Big( 2 A_n + \varepsilon_n^2 A_n^2 \Big) \frac{\partial \varphi_n}{\partial z_1} \ dz . 
$$ Taking into account \eqref{approx2} and the equality $c_n^2 = {\mathfrak c}_s^2 - \varepsilon_n^2$, then using the expansion \eqref{pepsi} of $ \varepsilon _n$ in terms of $ k_n$ we get \begin{align} \label{99} c_n Q(U_n) + & \ \int_{\mathbb R^2} V(|U_n|^2) \ dx \nonumber \\ = & \ r_0^2 {\mathfrak c}_s^2 \left( \varepsilon_n \int_{\mathbb R^2} \Big( - 2 A_n \mathcal{W}_n + A_n^2 \Big) \ dz + \varepsilon_n^3 \int_{\mathbb R^2} \Big( - A_n^2 \mathcal{W}_n + \Big[ \frac{\Gamma}{3} - 1 \Big] A_n^3 +\frac{1}{{\mathfrak c}_s^2} A_n \mathcal{W}_n \Big) \ dz + \mathcal{O}(\varepsilon_n^5) \right) \nonumber \\ = & \ r_0^2 {\mathfrak c}_s^2 \left( \varepsilon_n \| \mathcal{W}_n - A_n \|_{L^2}^2 - \varepsilon_n \int_{\mathbb R^2} \mathcal{W}_n^2 \ dz + \varepsilon_n^3 \int_{\mathbb R^2} \Big[ \frac{\Gamma}{3} - 2 \Big] \mathcal{W}_n^3 + \frac{\mathcal{W}_n^2}{{\mathfrak c}_s^2} \ dz + \mathcal{O}(\varepsilon_n^5) \right) \nonumber \\ = & \ r_0^2 {\mathfrak c}_s^2 \left( - \varepsilon_n \int_{\mathbb R^2} \mathcal{W}_n^2 \ dz + \varepsilon_n^3 \int_{\mathbb R^2} \Big[ \frac{\Gamma}{3} - 2 \Big] \mathcal{W}_n^3 + \frac{\mathcal{W}_n^2}{{\mathfrak c}_s^2} \ dz + \mathcal{O}(\varepsilon_n^5) \right) \nonumber \\ = & \ - k_n + \frac{k_n^3}{ r_0^4 {\mathfrak c}_s^4 \| \mathcal{W}_n \|_{L^2}^6} \mathscr{S}(\mathcal{W}_n) + \mathcal{O}(k_n^5) . 
\end{align} Inserting \eqref{99} into \eqref{lapubelle} we discover $$ \frac{k_n^3}{ r_0^4 {\mathfrak c}_s^4 \| \mathcal{W}_n \|_{L^2}^6} \mathscr{S}(\mathcal{W}_n) + \mathcal{O}(k_n^5) \leq\frac{k_n^3}{r_0^4 {\mathfrak c}_s^6 \| \mathcal{W}_n \|_{L^2}^4} - \frac{4 k_n^3}{27 r_0^4 {\mathfrak c}_s^{10} \mathscr{S}_{\rm min}^2} + \mathcal{O}(k_n^5) , $$ that is $$ \mathscr{S}(\mathcal{W}_n) \leq \frac{1}{{\mathfrak c}_s^2} \| \mathcal{W}_n \|_{L^2}^2 - \frac{4}{27 {\mathfrak c}_s^{6} \mathscr{S}_{\rm min}^2} \| \mathcal{W}_n \|_{L^2}^6 + \mathcal{O}(k_n^2) $$ or equivalently \be \label{topbelle} \mathscr{E} (\mathcal{W}_n) = \mathscr{S}(\mathcal{W}_n) - \frac{1}{{\mathfrak c}_s^2} \int_{\mathbb R^2} \mathcal{W}_n^2 \ d z \leq - \frac{1}{2 \mathscr{S}_{\rm min}^2} \Big( \frac{2}{3} \Big)^3 \cdot \Big( \frac{1}{{\mathfrak c}_s^{2}} \| \mathcal{W}_n \| _{L^2}^2 \Big)^3 + \mathcal{O}(k_n^2) . \ee As in the proof of Proposition \ref{convergence}, it follows from Lemma \ref{minoinf} and Proposition \ref{Born} that there are some positive constants $ m_1, \, m_2 $ such that $$m_1 \leq \| \mathcal{W}_n \|_{L^2}^2 \leq m_2 \qquad \mbox{ for all sufficiently large } n. $$ Denote $ \lambda_n = \frac{\| \mathcal{W}_n \|_{L^2}^2}{{\mathfrak c}_s^2} .$ Passing to a subsequence if necessary we may assume that $ \lambda _n \to \lambda $, where $ \lambda \in (0,+\infty)$. Let $$ {\mathcal{W}}_n ^{\#} (z) = \frac{\mu^2}{\lambda_n^2} \mathcal{W}_n \Big( \frac{\mu}{\lambda_n} z_1,\frac{\mu^2}{\lambda_n^2} z_2 \Big), $$ where $\mu$ is as in Theorem \ref{gs2d}. Then ${ \mathcal{W}}_n ^{\# }$ satisfies $$ \int_{\mathbb R^2} \frac{1}{{\mathfrak c}_s^2} \, ({\mathcal{W}}_n^{\#}) ^2 \ dz = \frac{\mu}{\lambda_n} \int_{\mathbb R^2} \frac{1}{{\mathfrak c}_s^2} \, {\mathcal{W}}_n^2 \ dz = \mu \quad \quad \quad {\rm and} \quad \quad \quad \mathscr{E} ({\mathcal{W}}_n^{\# }) = \frac{\mu^3}{\lambda_n^3} \mathscr{E} (\mathcal{W}_n) . 
$$ Plugging this into \eqref{topbelle} and recalling that $ \mu = \frac32 \mathscr{S}_{\rm min}$, we infer that $$ \mathscr{E} ({\mathcal{W}}_n^{\#}) = \frac{\mu^3}{\lambda_n^3} \mathscr{E} (\mathcal{W}_n) \leq - \frac{1}{2 \mathscr{S}_{\rm min}^2} \Big( \frac{2\mu }{3} \Big)^3 + \mathcal{O}(k_n^2) = - \frac{1}{2} \mathscr{S}_{\rm min} + \mathcal{O}(k_n^2) . $$ Therefore $({\mathcal{W}}_n^{\#})_{n \geq n_0} $ is a minimizing sequence for \eqref{minimiz}. By Theorem \ref{gs2d} we infer that there exist a subsequence of $ ({\mathcal{W}}_n^{\#} )_ {n \geq n_0}$, still denoted $ ({\mathcal{W}}_n^{\#})_{n \geq n_0} $, a sequence $ (z^n )_{n \geq n_0} = ( z_1^n, z_2^n)_{n \geq n_0} \subset \mathbb R^2 $ and a ground state $\mathcal{W}$ (with speed $1/(2{\mathfrak c}_s^2)$) of (KP-I) such that $ {\mathcal{W}}_n^{\#} ( \cdot - z^n) \longrightarrow \mathcal{W}$ strongly in $\mathscr{Y}(\mathbb R^2) $ as $ n \to \infty$. Let $ x^n = \left( \frac{ \mu}{\varepsilon _n \lambda _n } z_1^n, \frac{ \mu ^2 }{\varepsilon _n ^2 \lambda _n ^2 } z_2^n \right)$ and $ \tilde{U}_n = U_n ( \cdot - x^n)$, $ \tilde{A}_n (z) = A_n \left( z_1 - \frac{ \mu}{\lambda _n } z_1 ^n, z_2 - \frac{ \mu ^2}{\lambda _n ^2} z_2 ^n \right)$, $ \tilde{ \varphi }_n (z) = \varphi_n \left( z_1 - \frac{ \mu}{\lambda _n } z_1 ^n, z_2 - \frac{ \mu ^2}{\lambda _n ^2} z_2 ^n \right)$, $ \tilde{ \mathcal{W} }_n (z) = \mathcal{W}_n \left( z_1 - \frac{ \mu}{\lambda _n } z_1 ^n, z_2 - \frac{ \mu ^2}{\lambda _n ^2} z_2 ^n \right)$. We denote $ \tilde{ \mathcal{W}}(z)= \frac{ \lambda ^2}{\mu ^2} \mathcal{W} ( \frac{ \lambda }{\mu} z_1, \frac{ \lambda ^2}{\mu ^2} z_2)$. 
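Note that the two scaling relations satisfied by $ {\mathcal{W}}_n ^{\#} $ above simply come from the change of variables $ y_1 = \frac{\mu}{\lambda_n} z_1$, $ y_2 = \frac{\mu^2}{\lambda_n^2} z_2$, whose Jacobian gives $ dz = \frac{\lambda_n ^3}{\mu ^3} \, dy $; for instance, $$ \int_{\mathbb R^2} ({\mathcal{W}}_n^{\#})^2 (z) \, dz = \frac{\mu ^4}{\lambda _n ^4} \cdot \frac{\lambda _n ^3}{\mu ^3} \int_{\mathbb R^2} \mathcal{W}_n ^2 (y) \, dy = \frac{\mu}{\lambda _n} \int_{\mathbb R^2} \mathcal{W}_n ^2 \, dy , $$ and each of the three terms in $ \mathscr{E}$ scales by the factor $ \frac{\mu ^3}{\lambda _n ^3}$.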
It is obvious that $ \tilde{U}_n (x) = r_0 \left( 1 + \varepsilon _n ^2\tilde{A}_n (z) \right) {\sf e} ^{ i \varepsilon _n \tilde{\varphi }_n (z )}$ is a solution to (TW$_{c_n}$) with the same properties as $ U_n$ and the functions $ \tilde{A}_n$, $ \tilde{\varphi}_n$, $ \tilde{\mathcal{W} }_n$ satisfy the same estimates as $A_n$, $ \varphi _n$ and $ \mathcal{W} _n$, respectively. Moreover, we have $ \tilde{\mathcal{W} }_n = \frac{1}{{\mathfrak c}_s } \partial_{z_1} \tilde{\varphi} _n$ and $ \tilde{\mathcal{W} }_n \longrightarrow \tilde{ \mathcal{W}}$ strongly in $\mathscr{Y}(\mathbb R^2) $ as $ n \to \infty$. It is clear that $ \tilde{A}_n$, $\tilde{\varphi}_n $ and $ \varepsilon _n$ satisfy \eqref{desing}. For any fixed $ \psi \in C_c^{\infty}(\mathbb R^2)$ we multiply \eqref{desing} by $ \psi $, integrate by parts, then pass to the limit as $ n \to \infty$. Proceeding as in the proof of Proposition \ref{convergence} we find that $ \tilde{\mathcal{W} }$ satisfies equation (SW) in $ {\mathcal D}'( \mathbb R^2)$. We know that $ \mathcal{W}$ also solves (SW) and comparing the equations for $ \mathcal{W} $ and $ \tilde{\mathcal{W}}$ we infer that $ \left( \frac{ \lambda ^3}{\mu ^3} - \frac{ \lambda ^5}{\mu ^5} \right) \partial_{z_1} \mathcal{W} = 0 $ in $ \mathbb R^2$. Since $ \partial_{z_1} \mathcal{W} \not= 0$, $ \lambda > 0 $ and $ \mu > 0$, we have necessarily $ \lambda = \mu$, that is $ \tilde{\mathcal{W}} = \mathcal{W}$. In particular, we have $ \mathscr{S} ( \mathcal{W} _n) = \mathscr{S} ( \tilde{\mathcal{W}}_n) \longrightarrow \mathscr{S} ( \mathcal{W}) = \mathscr{S}_{\rm min} $ as $ n \to \infty$. Since $ \displaystyle \int_ {\mathbb R^2} |\nabla U_n |^2 \, dx = k_n$, using \eqref{99} and \eqref{kn} we get $$ E(U_n) + c_n Q( U_n) = \frac{k_n^3}{ r_0^4 {\mathfrak c}_s^4 \| \mathcal{W}_n \|_{L^2}^6} \mathscr{S}(\mathcal{W}_n) + \mathcal{O}(k_n^5) \sim \varepsilon_n ^3 r_0 ^2 {\mathfrak c}_s ^2 \mathscr{S}_{\rm min} \qquad \mbox{ as } n \to \infty . 
$$ Hence \eqref{Ec} holds. As in the proof of Proposition \ref{convergence} we have $$ \begin{array}{l} \displaystyle Q( U_n) = - \int_{\mathbb R^2} (\rho_n ^2 - r_0 ^2 ) \frac{ \partial \phi _n}{\partial x_1} \, dx = - r_0 ^2 \varepsilon _n \int_{\mathbb R^2} ( 2 A_n (z) + \varepsilon _n ^2 A_n ^2 (z) ) \frac{ \partial \varphi _n}{ \partial z_1} (z) \, dz \\ \\ \displaystyle \sim -2 r_0 ^2 {\mathfrak c}_s \varepsilon_n \int_{\mathbb R^2} \mathcal{W}^2 (z) \, dz = - 3 r_0 ^2 {\mathfrak c}_s ^3 \mathscr{S}( \mathcal{W}) \varepsilon _n. \end{array} $$ The above computation and \eqref{Ec} imply \eqref{energy}. Finally, the convergence in \eqref{conv3} as well as the similar property in $W^{1,p} (\mathbb R^2)$ are proven exactly as in the three dimensional case. \ $\Box$ \section{The higher dimensional case} \subsection{Proof of Proposition \ref{dim6}} We argue by contradiction. Suppose that the assumptions of Proposition \ref{dim6} hold and there is a sequence $(U_n)_{n \geq 1} \subset \mathcal{E} $ of nonconstant solutions to (TW$_{c_n}$) such that $E_{c_n}(U_n) \to 0$ as $n \to +\infty$. By Proposition \ref{lifting} $(ii)$ we have $|U_n| \to r_0 > 0 $ uniformly in $\mathbb R^N$. Hence for $n$ sufficiently large we have the lifting $ U_n(x) = \rho_n(x) {\sf e}^{i\phi_n(x)} . $ We write $$ \mathcal{B}_{n } = \frac{|U_n| }{r_0 } - 1 , \qquad \mbox{ so that } \qquad \rho _n = r_0( 1 + \mathcal{B} _n) \qquad \mbox{ and } \qquad \mathcal{B} _n \to 0 \quad \mbox{ as } n \to \infty. $$ Recall that $U_n$ satisfies the Pohozaev identities \eqref{Pohozaev}. The identity $P_{c_n} (U_n) = 0 $ can be written as $$ \int_{\mathbb R^N} \Big| \frac{ \partial U_n}{\partial x_1} \Big|^2 + \frac{N-3}{N-1} |\nabla _{x_{\perp}} U_n |^2 \, dx + c_n Q(U_n) + \int_{\mathbb R^N} V(|U_n| ^2) \, dx = 0 . 
$$ Using the formula \eqref{momentlift} for $Q(U_n)$ and the Taylor expansion \eqref{V} for $V( r_0 ^2 ( 1 + \mathcal{B}_n)^2 ) $ we get \begin{align*} r_0 ^2 \int_{\mathbb R^N} & \Big| \frac{ \partial \mathcal{B} _n}{\partial x_1} \Big|^2 + ( 1 + \mathcal{B} _n) ^2 \Big| \frac{ \partial \phi _n}{\partial x_1} \Big|^2 + \frac{N-3}{N-1} |\nabla _{x_{\perp}} \mathcal{B} _n |^2 + \frac{N-3}{N-1} ( 1 + \mathcal{B}_n )^2 |\nabla _{x_{\perp}} \phi _n |^2 \\ & - c_n ( 2 \mathcal{B}_n + \mathcal{B} _n ^2 ) \frac{ \partial \phi _n}{\partial x_1} + {\mathfrak c}_s ^2 \left( \mathcal{B}_n ^2 + \Big( \frac{ \Gamma }{3} -1 \Big) \mathcal{B}_n ^3 + V_4 (\mathcal{B} _n) \right) \, dx = 0, \end{align*} where $V_4 ( \alpha ) = \mathcal{O}( \alpha ^4) $ as $ \alpha \to 0$. After rearranging terms, the above equality yields \begin{align*} & \int_{\mathbb R^N} ( \partial_{x_1} \phi_n - c_n \mathcal{B} _n )^2 + (\partial_{x_1} \mathcal{B} _n )^2 + \frac{N-3}{N-1} |\nabla_{x_\perp} \phi_n|^2 ( 1 + \mathcal{B} _n )^2 + \frac{N-3}{N-1} |\nabla_{x_\perp} \mathcal{B} _n|^2 + \varepsilon_n^2 \mathcal{B} _n ^2 \ dx \nonumber \\ & = - \int_{\mathbb R^N} (\partial_{x_1} \phi_n)^2 ( 2 \mathcal{B} _n + \mathcal{B}_n ^2 ) + {\mathfrak c}_s^2 \Big( \frac{\Gamma}{3} - 1 \Big) \mathcal{B}_n^3 + {\mathfrak c}_s^2 V_4( \mathcal{B}_n ) - c_n \mathcal{B}_n ^2 \partial_{x_1} \phi_n \ dx \nonumber \\ & = - \Big[ \frac{\Gamma}{3} {\mathfrak c}_s^2 - \varepsilon_n^2 \Big] \int_{\mathbb R^N} \mathcal{B}_n ^3 \ dx - {\mathfrak c}_s^2 \int_{\mathbb R^N} V_4( \mathcal{B}_n ) \ dx - \int_{\mathbb R^N} (\partial_{x_1} \phi_n)^2 \mathcal{B}_n ^2 \ dx \nonumber \\ & \quad \quad + \int_{\mathbb R^N} \mathcal{B}_n \Big( ( \partial_{x_1} \phi_n - c_n \mathcal{B}_n )^2 -3 c_n \mathcal{B}_n (\partial_{x_1} \phi_n - c_n \mathcal{B}_n ) \Big) \ dx \end{align*} and this can be written as \begin{eqnarray} \label{Dev3} \begin{array}{l} \displaystyle \int_{\mathbb R^N} ( \partial_{x_1} \phi_n - c_n \mathcal{B} _n )^2 + 
(\partial_{x_1} \mathcal{B} _n )^2 + \frac{N-3}{N-1} |\nabla_{x_\perp} \phi_n|^2 ( 1 + \mathcal{B} _n )^2 + \frac{N-3}{N-1} |\nabla_{x_\perp} \mathcal{B} _n|^2 + \varepsilon_n^2 ( 1 - \mathcal{B}_n ) \mathcal{B} _n ^2 \ dx \\ \\ = \displaystyle - \frac{\Gamma}{3} {\mathfrak c}_s^2 \int_{\mathbb R^N} \mathcal{B}_n ^3 \ dx - {\mathfrak c}_s^2 \int_{\mathbb R^N} V_4( \mathcal{B}_n ) \ dx - \int_{\mathbb R^N} (\partial_{x_1} \phi_n)^2 \mathcal{B}_n ^2 \ dx \\ \\\quad \quad \displaystyle + \int_{\mathbb R^N} \mathcal{B}_n \Big( ( \partial_{x_1} \phi_n - c_n \mathcal{B}_n )^2 -3 c_n \mathcal{B}_n (\partial_{x_1} \phi_n - c_n \mathcal{B}_n ) \Big) \ dx . \end{array} \end{eqnarray} For $n$ sufficiently large we have $ \frac 12 \mathcal{B}_n ^2 \leq ( 1 - \mathcal{B}_n ) \mathcal{B} _n ^2 \leq \frac 32 \mathcal{B} _n ^2$ and then all the terms in the left-hand side of \eqref{Dev3} are nonnegative. We will find an upper bound for the right-hand side of \eqref{Dev3}. First we notice that the third integral there is nonnegative, so the corresponding term (which appears with a minus sign) may be dropped. Since $\mathcal{B} _n \to 0$ in $L^\infty$ and $V_4(\alpha) = \mathcal{O}(\alpha^4) $ as $\alpha \to 0$, we have \begin{eqnarray} \label{good1} \Big| {\mathfrak c}_s^2 \int_{\mathbb R^N} V_4( \mathcal{B} _n) \ dx \Big| \leq C \| \mathcal{B}_n \|_{L^4} ^4 \leq C \| \mathcal{B}_n \|_{L^\infty} \| \mathcal{B}_n \|_{L^3}^3 . \end{eqnarray} Using the fact that $\| \mathcal{B}_n \|_{L^\infty} \leq 1/4$ for $n$ large enough and the inequality $2ab \leq a^2 + b^2$, we get \begin{eqnarray} \label{good2} \int_{\mathbb R^N} \mathcal{B}_n \Big( ( \partial_{x_1} \phi_n - c_n \mathcal{B}_n )^2 - 3 c_n \mathcal{B}_n (\partial_{x_1} \phi_n - c_n \mathcal{B}_n ) \Big) \ dx \leq \frac12 \int_{\mathbb R^ N} ( \partial_{x_1} \phi_n - c_n \mathcal{B}_n )^2 \ dx + 9 {\mathfrak c}_s^2 \int_{\mathbb R^ N} \mathcal{B}_n ^4 \ dx . \end{eqnarray} It is easy to see that $ \mathcal{B} _n \in H^1 ( \mathbb R^N)$ (see the Introduction of \cite{CM1}). 
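For the reader's convenience, here is the algebra behind \eqref{good2}. Writing $ X_n = \partial_{x_1} \phi_n - c_n \mathcal{B}_n $ (a shorthand used only in this verification), the bound $ \| \mathcal{B}_n \|_{L^\infty} \leq 1/4 $ gives $ \displaystyle \int_{\mathbb R^N} \mathcal{B}_n X_n ^2 \, dx \leq \frac 14 \int_{\mathbb R^N} X_n ^2 \, dx $, while $ 2ab \leq a^2 + b^2 $ with $ a = \frac{|X_n|}{2} $ and $ b = 3 c_n \mathcal{B}_n ^2 $ yields $$ - 3 c_n \int_{\mathbb R^N} \mathcal{B}_n ^2 X_n \, dx \leq \int_{\mathbb R^N} 2 \cdot \frac{|X_n|}{2} \cdot 3 c_n \mathcal{B}_n ^2 \, dx \leq \frac 14 \int_{\mathbb R^N} X_n ^2 \, dx + 9 c_n ^2 \int_{\mathbb R^N} \mathcal{B}_n ^4 \, dx ; $$ adding the two estimates and using $ c_n ^2 \leq {\mathfrak c}_s ^2 $ gives \eqref{good2}.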
We recall the critical Sobolev embedding: for any $ h \in H^1 ( \mathbb R^N) $ (with $N\geq 3$) there holds \begin{eqnarray} \label{Sobol} \| h \|_{L^{\frac{2N}{N-2}}} \leq C \| \partial_{x_1} h \|_{L^2}^{\frac{1}{N}} \| \nabla_{x_\perp} h \|_{L^2}^{\frac{N-1}{N}} . \end{eqnarray} Assume first that $ N \geq 6$. Then $ 2^* = \frac{ 2N}{N-2} \leq 3$. Using the Sobolev embedding \eqref{Sobol} and the fact that $\| \mathcal{B} _n \| _{L^{\infty}} $ is bounded we get \begin{eqnarray} \label{good3} \| \mathcal{B} _n \|_{L^3} ^3 \leq \| \mathcal{B} _n \|_{L^{\infty }}^{ 3 - 2^*} \| \mathcal{B} _n \|_{L^{2^*}}^{ 2^*} \leq C \| \partial_{x _1} \mathcal{B} _n \| _{ L^2 } ^{\frac{2^*}{N}} \| \nabla_{x_{\perp}} \mathcal{B} _n \| _{L^2}^{ \frac{ 2^*(N-1)}{N}} . \end{eqnarray} Using the inequalities $ \| \mathcal{B}_n \|_{L^4}^4 \leq \| \mathcal{B}_n \|_{L^\infty} \| \mathcal{B}_n \|_{L^3}^3 $ and $1+\mathcal{B}_n \geq 1/2$ for $n$ large, we deduce from \eqref{Dev3} that \begin{eqnarray} \label{sobolomenthe} \int_{\mathbb R^N} ( \partial_{x_1} \phi_n - c_n \mathcal{B} _n )^2 + (\partial_{x_1} \mathcal{B} _n )^2 + |\nabla_{x_\perp} \phi_n|^2 + |\nabla_{x_\perp} \mathcal{B} _n |^2 + \varepsilon_n^2 \mathcal{B} _n ^2 \ dx \leq C \| \mathcal{B} _n \| _{L^3}^3 . \end{eqnarray} From \eqref{sobolomenthe} and \eqref{good3} we obtain \be \label{sobolomenthos} \| \nabla_{x_\perp} \phi_n \| _{L^2}^2 + \| \partial_{x_1} \mathcal{B} _n \|_{L^2}^2 + \| \nabla_{x_\perp} \mathcal{B}_n \| _{L^2}^2 \leq C \| \mathcal{B}_n \|_{L^3}^3 \leq C \| \partial_{x_1} \mathcal{B}_n \| _{L^2}^{\frac{2}{N-2}} \| \nabla_{x_\perp} \mathcal{B}_n \| _{L^2}^{\frac{2N-2}{N-2}}. \ee Assume now that ($ N=4$ or $N=5$) and $ \Gamma \neq 0$. 
From \eqref{Dev3}, \eqref{good1} and \eqref{good2} we get \begin{eqnarray} \label{sobolomenthe1} \int_{\mathbb R^N} ( \partial_{x_1} \phi_n - c_n \mathcal{B} _n )^2 + (\partial_{x_1} \mathcal{B} _n )^2 + |\nabla_{x_\perp} \phi_n|^2 + |\nabla_{x_\perp} \mathcal{B} _n |^2 + \varepsilon_n^2 \mathcal{B} _n ^2 \ dx \leq C \| \mathcal{B} _n \| _{L^4}^4 . \end{eqnarray} We have $ 2^* = 4 $ if $ N =4$ and $ 2^* = \frac{10}{3} < 4$ if $ N=5$. By the Sobolev embedding we have \begin{eqnarray} \label{good4} \| \mathcal{B} _n \|_{L^4 }^4 \leq \| \mathcal{B}_n \|_{L^\infty} ^{ 4 - 2^* } \| \mathcal{B}_n \|_{L^{2^*}}^{2^*} \leq C \| \mathcal{B}_n \|_{L^{2^*}}^{2^*} \leq C \| \partial_{x _1} \mathcal{B} _n \| _{ L^2 } ^{\frac{2^*}{N}} \| \nabla_{x_{\perp}} \mathcal{B} _n \| _{L^2}^{ \frac{ 2^*(N-1)}{N}} . \end{eqnarray} The two inequalities above give \be \label{sobolomenthos1} \| \nabla_{x_\perp} \phi_n \| _{L^2}^2 + \| \partial_{x_1} \mathcal{B} _n \|_{L^2}^2 + \| \nabla_{x_\perp} \mathcal{B}_n \| _{L^2}^2 \leq C \| \mathcal{B}_n \|_{L^4}^4 \leq C \| \partial_{x_1} \mathcal{B}_n \| _{L^2}^{\frac{2}{N-2}} \| \nabla_{x_\perp} \mathcal{B}_n \| _{L^2}^{\frac{2N-2}{N-2}}. \ee From either \eqref{sobolomenthos} or \eqref{sobolomenthos1} we obtain $$ \| \partial_{x_1} \mathcal{B} _n \|_{L^2}^2 \leq C \| \partial_{x_1} \mathcal{B}_n \| _{L^2}^{\frac{2}{N-2}} \| \nabla_{x_\perp} \mathcal{B}_n \| _{L^2}^{\frac{2N-2}{N-2}}, $$ which gives $ \| \partial_{x_1} \mathcal{B} _n \|_{L^2}^ {\frac{2N -6}{N-2}} \leq C \| \nabla_{x_\perp} \mathcal{B}_n \| _{L^2}^{\frac{2N-2}{N-2}}$, or equivalently \begin{eqnarray} \label{good5} \| \partial_{x_1} \mathcal{B} _n \|_{L^2} \leq C \| \nabla_{x_\perp} \mathcal{B}_n \| _{L^2}^{\frac{N-1}{N-3}}. 
\end{eqnarray} Now we plug \eqref{good5} into \eqref{sobolomenthe} or \eqref{sobolomenthe1} to discover $$ \| \nabla_{x_{\perp}} \mathcal{B}_n \| _{L^2}^ 2 \leq C \| \partial_{x_1} \mathcal{B}_n \| _{L^2}^{\frac{2}{N-2}} \| \nabla_{x_{\perp}} \mathcal{B}_n \| _{L^2}^{\frac{2N-2}{N-2}} \leq C \| \nabla_{x_{\perp}} \mathcal{B}_n \| _{L^2}^{\frac{2(N-1)}{N-3}}. $$ Since $ \frac{2(N-1)}{N-3} > 2$ we infer that there is a constant $ m > 0 $ such that $\| \nabla_{x_{\perp}} \mathcal{B}_n \| _{L^2} \geq m$ for all sufficiently large $n$. On the other hand $U_n$ satisfies the Pohozaev identity $ P_{c_n }(U_n) = 0$, hence for large $n$ we have $$ E_{c_n} (U_n) = \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x_{\perp} } U_n |^2 \, dx \geq \frac{2}{N-1} r_0 ^2 \int_{\mathbb R^N} |\nabla_{x_{\perp} } \mathcal{B} _n |^2 \, dx \geq \frac{1}{N-1} r_0 ^2 m^2. $$ This contradicts the assumption that $E_{c_n} (U_n) \to 0 $ as $ n \to \infty$. The proof of Proposition \ref{dim6} is complete. \ $\Box$ \begin{rem} \rm We do not know whether $T_{c}$ tends to zero or not as $c \to {\mathfrak c}_s $ if $N=4$ or $N=5$ and $\Gamma \neq 0$. \end{rem} \subsection{Proof of Proposition \ref{vanishing}} Let $ N \geq 4 $ and let $ (U_n,c_n)_{n \geq 1} $ be a sequence of nonconstant, finite energy solutions of (TW$_{c_n}$) such that $ E_{c_n}(U_n) \to 0$. By Proposition \ref{lifting} $(ii)$ we have $|U_n| \to r_0 > 0 $ uniformly in $\mathbb R^N$, hence for $n$ sufficiently large we may write $$ U_n(x) = \rho_n(x) {\sf e}^{i\phi_n(x)} = r_0 \Big( 1 + \alpha_n A_n (z) \Big) \exp \Big( i \beta_n \varphi_n (z) \Big) \quad \quad \quad \mbox{ where } z_1 = \lambda_n x_1, \quad z_\perp = \sigma_n x_\perp , $$ and $ \alpha_n = \frac{1}{r_0} \| \rho_n - r_0 \|_{ L^{\infty}} \to 0 $.
Using the Pohozaev identity $ P_{c_n} (U_n) = 0$ and \eqref{blancheneige} we have $$ \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_{x _{\perp} } U_n (x) |^2 \, dx = E(U_n) + c_n Q(U_n) = \frac{2}{N} \int_{\mathbb R^N} |\nabla \rho _n|^2 \ dx. $$ Since $U_n \in \mathcal{E} $ and $ U_n$ is not constant, we have $ \displaystyle \int_{\mathbb R^N} |\nabla_{x _{\perp} } U_n (x) |^2 \, dx > 0$ and the above identity implies that $ \rho _n $ is not constant. The equality $ E( U_n) + c_n Q( U_n) = \displaystyle \frac{2}{N} \int_{\mathbb R^N} |\nabla \rho _n|^2 \ dx$ can be written as $$ \left( 1 - \frac 2N \right) \int_{\mathbb R^N} |\nabla \rho _n|^2 \, dx + \int_{\mathbb R^N} \rho _n ^2 |\nabla \phi _n|^2 \, dx + c_n Q( U_n) + \int _{\mathbb R^N} V(\rho _n ^2) \, dx = 0. $$ Since $ \rho _n \to r_0 $ uniformly in $ \mathbb R^N$ as $ n \to \infty$, for $ n$ large we have $ V( \rho _n ^2) \geq 0 $ and from the last identity we infer that $ \displaystyle 0 > c_n Q( U_n) = \int_{\mathbb R^N} (r_0 ^2 - \rho _n ^2) \frac{ \partial \phi _n}{\partial x_1} \, dx $, which implies $ \| \partial _{x_1 } \phi _n \|_{L^2} > 0$. We must have $ \| \nabla _{x_{\perp}} \phi _n \|_{L^2} > 0 $ (otherwise $\phi _n $ would depend only on $ x_1$, contradicting the fact that $ \displaystyle \int_{\mathbb R^N} |\nabla \phi _n|^2 \, dx $ is finite). The choice of $ \alpha _n$ implies $ \| A_n \| _{L^{\infty}} =1 $.
Since $A_n$, $\partial_{z_1} \varphi _n$ and $\nabla_{z_\perp} \varphi_n $ are nonzero, by scaling it is easy to see that \be \label{normal} \| A_n \|_{L^2} = \| \partial_{z_1} \varphi_n \|_{ L^2} = \| \nabla_{z_\perp} \varphi_n \|_{L^2} = 1 \ee if and only if $$ \lambda_n \sigma_n^{N-1} = \frac{\| \, |U_n| - r_0 \|_{L^\infty}^2}{\| \, |U_n| - r_0 \|_{L^2}^2} , \quad \quad \lambda_n \beta_n = \| \partial_{x_1} \phi_n \|_{L^2} \frac{\| \, |U_n| - r_0 \|_{L^\infty}}{\| \, |U_n| - r_0 \|_{L^2}} , \quad \quad \beta_n \sigma_n = \| \nabla_{x_\perp} \phi_n \|_{L^2} \frac{\| \, |U_n| - r_0 \|_{L^\infty}}{\| \, |U_n| - r_0 \|_{L^2}} . $$ Since $N\geq 3$, the above equalities allow us to compute $\lambda_n$, $\beta _n$ and $ \sigma_n$. Hence the scaling parameters $(\alpha_n,\beta_n, \lambda_n,\sigma_n)$ are uniquely determined if \eqref{normal} holds and $ \| A_n \| _{L^{\infty}} =1 $. The Pohozaev identity $ P_{c_n }(U_n) = 0 $ gives \begin{align} \label{Dev2} & \int_{\mathbb R^N} \lambda_n^2 \beta_n^2 (\partial_{z_1} \varphi_n)^2 \Big( 1 + \alpha_n A_n \Big)^2 + \alpha_n^2 \lambda_n^2 (\partial_{z_1} A_n)^2 \nonumber \\ & + \frac{N-3}{N-1} \beta_n^2 \sigma_n^2 |\nabla_{z_\perp} \varphi_n|^2 \Big( 1 + \alpha_n A_n \Big)^2 + \frac{N-3}{N-1} \alpha_n^2 \sigma_n^2 |\nabla_{z_\perp} A_n|^2 + \frac{1}{r_0 ^2} V\Big(r_0^2(1 + \alpha_n A_n)^2 \Big)\ dz \nonumber \\ & \hspace{1cm} = 2 c_n \int_{\mathbb R^N} 2 \lambda_n \alpha_n \beta_n A_n \partial_{z_1} \varphi_n + \lambda_n \alpha_n^2 \beta_n A_n^2 \partial_{z_1} \varphi_n \ dz . \end{align} By \eqref{normal}, the right-hand side of \eqref{Dev2} is $\mathcal{O}(\lambda_n \alpha_n \beta_n)$. Since $\alpha_n \to 0$ and $\| A_n \|_{L^\infty}=1$, for $n$ large enough we have $1+\alpha_n A_n \geq 1/2$, and by \eqref{V} we get $V(r_0^2(1 + \alpha_n A_n)^2 ) \geq \frac 12 r_0 ^2 {\mathfrak c}_s ^2 \alpha _n ^2 A_n ^2$.
If $N\geq 3$ all the terms in the left-hand side of \eqref{Dev2} are non-negative and we infer that $$ \int_{\mathbb R^N} \lambda_n^2 \beta_n^2 (\partial_{z_1} \varphi_n)^2 + \alpha_n^2 A_n^2 \ dz = \mathcal{O}(\lambda_n \alpha_n \beta_n). $$ From the normalization \eqref{normal} it follows that $$ \lambda_n^2 \beta_n^2 = \mathcal{O}(\lambda_n \alpha_n \beta_n), \quad \quad \quad {\rm and} \quad \quad \quad \alpha_n^2 = \mathcal{O}(\lambda_n \alpha_n \beta_n), $$ which yields \be \label{tutu1} C_1 \leq \frac{\lambda_n \beta_n}{\alpha_n} \leq C_2 \qquad \mbox{ for some } C_1, \, C_2 > 0. \ee Let $\theta _n = \frac{ \lambda _n \beta _n}{\alpha _n}$. We use the Taylor expansion \eqref{V} for the potential $V$, multiply \eqref{Dev2} by $\frac{1}{\alpha_n^2}$ and write the resulting equality in the form \begin{align*} & \int_{\mathbb R^N} \Big( \theta_n \partial_{z_1} \varphi_n - c_n A_n \Big)^2 + \lambda_n^2 (\partial_{z_1} A_n)^2 + \frac{N-3}{N-1} \frac{\theta_n^2 \sigma_n^2}{\lambda_n^2} |\nabla_{z_\perp} \varphi_n|^2 \Big( 1 + \alpha_n A_n \Big)^2 + \frac{N-3}{N-1} \sigma_n^2 |\nabla_{z_\perp} A_n|^2 \nonumber \\ & \hspace{2cm} + ({\mathfrak c}_s^2-c_n^2) A_n^2 \ dz \nonumber \\ & = - \int_{\mathbb R^N} \theta_n^2 \alpha_n (\partial_{z_1} \varphi_n)^2 \Big( 2 A_n + \alpha_n A_n^2 \Big) + {\mathfrak c}_s^2 \alpha_n \Big( \frac{\Gamma}{3} - 1 \Big) A_n^3 + {\mathfrak c}_s^2 \frac{V_4( \alpha_n A_n)}{\alpha_n^2} - 2 c_n \theta_n \alpha_n A_n^2 \partial_{z_1} \varphi_n \ dz . \end{align*} By \eqref{normal} and \eqref{tutu1} the right-hand side of the above equality is $\mathcal{O}(\alpha_n)$. If $N \geq 3$ all the terms in the left-hand side are nonnegative. In particular, we get $ \displaystyle ({\mathfrak c}_s^2 - c_n^2 ) \int_{\mathbb R^N} A_n ^2 \, dz = {\mathfrak c}_s^2 - c_n^2 = \mathcal{O}(\alpha_n) , $ so that $c_n \to {\mathfrak c}_s$. 
Assuming that $N \geq 4$, we also infer that $$ \int_{\mathbb R^N} \lambda_n^2 (\partial_{z_1} A_n)^2 + \frac{\sigma_n^2}{\lambda_n^2} |\nabla_{z_\perp} \varphi_n|^2 \ dz = \mathcal{O}(\alpha_n ) . $$ Together with \eqref{normal} and \eqref{tutu1}, this implies \be \label{tutu2} \frac{\sigma_n^2}{\lambda_n^2} = \mathcal{O}(\alpha_n) \quad \quad \quad {\rm and} \quad \quad \quad \int_{\mathbb R^N} (\partial_{z_1} A_n)^2 \ dz = \mathcal{O} \Big( \frac{\alpha_n}{\lambda_n^{2}} \Big) . \ee The Pohozaev identity $P_{c_n}(U_n) = 0$ and \eqref{normal} imply that for each $n$ such that $ 1 + \alpha_n A_n \geq \frac 12 $ we have \begin{align} \label{final1} E_{c_n}(U_n) = & \ \frac{2}{N-1} \int_{\mathbb R^N} |\nabla_\perp U_n|^2 \ dx \nonumber\\ \nonumber \\ = & \ \frac{2 r_0 ^2 }{(N-1)\lambda_n \sigma_n^{N-1}} \int_{\mathbb R^N} \beta_n^2 \sigma_n^2 |\nabla_{z_\perp} \varphi_n|^2 \Big(1+\alpha_n A_n \Big)^2 + \alpha_n^2 \sigma_n^2 |\nabla_{z_\perp} A_n|^2 \ dz \nonumber \\ \nonumber \\ \geq & \ \frac{r_0 ^2 \alpha_n^2 \theta _n^2 }{2(N-1) \lambda_n^3 \sigma_n^{N-3}} \int_{\mathbb R^N} |\nabla_{z_\perp} \varphi_n|^2 \ dz \geq \frac{\alpha_n^2}{C \lambda_n^3 \sigma_n^{N-3}}. \end{align} However, in view of \eqref{tutu2} we have \begin{eqnarray} \label{final2} \frac{\alpha_n^2}{\lambda_n^3 \sigma_n^{N-3}} = \frac{\alpha_n^2}{\lambda_n^N ( \sigma_n / \lambda_n )^{N-3}} \geq \Big( \frac{\alpha_n}{\lambda_n^2} \Big)^{N/2} \frac{\alpha_n^2}{C \alpha_n^{N/2} \alpha_n^{(N-3)/2}} = \Big( \frac{\alpha_n}{\lambda_n^2} \Big)^{N/2} \frac{1}{C \alpha_n^{(2N-7)/2}} . \end{eqnarray} Notice that $ \alpha_n^{(2N-7)/2} \to 0 $ as $\alpha_n \to 0$ because $N \geq 4$. The fact that $E_{c_n} (U_n) \longrightarrow 0 $, \eqref{final1} and \eqref{final2} imply that $ \frac{\alpha_n}{\lambda_n^2} \to 0 $ {\rm as} $\ n \to +\infty . 
$ Then using \eqref{tutu2} we find $$ \int_{\mathbb R^N} (\partial_{z_1} A_n)^2 \ dz = \mathcal{O} \Big( \frac{\alpha_n}{\lambda_n^{2}} \Big) \to 0 $$ and the proof is complete. \ $\Box$ \noindent {\bf Acknowledgement:} We gratefully acknowledge the support of the French ANR (Agence Nationale de la Recherche) under Grant ANR JC { ArDyPitEq}. \end{document}
\begin{document} \title{Integers Represented as a Sum of Primes and Powers of Two} \author{D.R. Heath-Brown and J.-C. Puchta\\ Mathematical Institute, Oxford} \date{} \maketitle \section{Introduction} It was shown by Linnik [10] that there is an absolute constant $K$ such that every sufficiently large even integer can be written as a sum of two primes and at most $K$ powers of two. This is a remarkably strong approximation to the Goldbach Conjecture. It gives us a very explicit set $\cl{K}(x)$ of integers $n\le x$ of cardinality only $O((\log x)^K)$, such that every sufficiently large even integer $N\le x$ can be written as $N=p+p'+n$, with $p,p'$ prime and $n\in\cl{K}(x)$. In contrast, if one tries to arrange such a representation using an interval in place of the set $\cl{K}(x)$, all known results would require $\cl{K}(x)$ to have cardinality at least a positive power of $x$. Linnik did not establish an explicit value for the number $K$ of powers of 2 that would be necessary in his result. However, such a value has been computed by Liu, Liu and Wang [12], who found that $K=54000$ is acceptable. This result was subsequently improved, firstly by Li [8] who obtained $K=25000$, then by Wang [18], who found that $K=2250$ is acceptable, and finally by Li [9] who gave the value $K=1906$. One can do better if one assumes the Generalized Riemann Hypothesis, and Liu, Liu and Wang [13] showed that $K=200$ is then admissible. The object of this paper is to give a rather different approach to this problem, which leads to dramatically improved bounds on the number of powers of 2 that are required for Linnik's theorem. \begin{theorem} Every sufficiently large even integer is a sum of two primes and exactly 13 powers of 2. \end{theorem} \begin{theorem} Assuming the Generalized Riemann Hypothesis, every sufficiently large even integer is a sum of two primes and exactly 7 powers of 2. 
\end{theorem} We understand that Ruzsa and Pintz have, in work in preparation, given an independent proof of Theorem 2, and have established a version of Theorem 1 requiring only 8 powers of 2. Indeed, already in 2000, Pintz had announced the values $K=12$ unconditionally, and $K=10$ on the Generalized Riemann Hypothesis. Although we have not seen an account of this work, we understand that our approach is different in a number of respects. We should also report that Elsholtz, in unpublished work, has shown that one can obtain $K=12$ in Theorem 1, by a variant of our method. He does this by improving our constant $2.7895$ in (25) to $2.96169$, by using $D=21$, and replacing our estimate (41) for $C_2$ by $C_2\le 1.992$. Previous workers have based their line of attack on a proof of Linnik's theorem due to Gallagher [3]. Let $\varpi$ be a small positive constant. Set \begin{equation} S(\alpha)=\sum_{\varpi N<p\le N}e(\alpha p), \end{equation} where $e(x):=\exp(2\pi ix)$, and \[T(\alpha)=\sum_{1\le\nu\le L}e(\alpha 2^{\nu}),\;\;\; L=[\frac{\log N/2K}{\log 2}].\] As in earlier proofs of Linnik's Theorem we shall use estimates for ${\rm meas}(\cl{A}_{\lambda})$, where \[\cl{A}_{\lambda}=\{\alpha\in[0,1]: |T(\alpha)|\ge\lambda L\}.\] In \S 7 we shall bound ${\rm meas}(\cl{A}_{\lambda})$ by a new method, suggested to us by Professor Keith Ball. This provides the following estimates. \begin{lemma} We have \[{\rm meas}(\cl{A}_{\lambda})\ll N^{-E(\lambda)}\] with $E(0.722428)>1/2$ and $E(0.863665)>109/154$. \end{lemma} We are extremely grateful to Professor Ball for suggesting his alternative approach to us. An earlier version of this paper used a completely different technique to bound $E(\lambda)$ and showed that one can take \[E(\lambda)\geq 0.822\lambda^2 +o(1)\] as $N\rightarrow\infty$. This sufficed to establish Theorems 1 and 2 with 24 and 9 powers of 2 respectively.
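Lemma 1 is an asymptotic statement in $N$, but the scarcity of $\alpha$ with $|T(\alpha)|\ge\lambda L$ is already visible numerically. The following sketch (an illustration only, with a small ad hoc $L$ and grid size chosen for speed) estimates the measure of $\cl{A}_{\lambda}$ on a uniform grid:

```python
import math, cmath

# Grid estimate of meas(A_lambda) for T(alpha) = sum_{1 <= nu <= L} e(alpha * 2^nu).
# L = 20 and M = 20000 are toy choices, not the parameters of the paper.
L = 20
lam = 0.722428               # the first threshold appearing in Lemma 1
M = 20000                    # grid points in [0, 1)

def T(alpha):
    return sum(cmath.exp(2j * math.pi * alpha * 2 ** nu) for nu in range(1, L + 1))

count = sum(1 for k in range(M) if abs(T(k / M)) >= lam * L)
frac = count / M
print(frac)
```

The exceptional grid points cluster near dyadic rationals $a/2^j$ with small $j$, where the phases $\alpha 2^{\nu}$ nearly align; away from these, $|T(\alpha)|$ has square-root size.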
For comparison with Lemma 1, the best bound for $E(\lambda)$ in the literature is due to Liu, Liu and Wang [11; Lemma 3], and states that \[E(1-\eta)\le 1-F(\frac{2+\sqrt{2}}{4}\eta)-F(1-\frac{2+\sqrt{2}}{4}\eta) +o(1)\] for $\eta<(7e)^{-1}$, where $F(x)=x(\log x)/(\log 2)$. The estimate provided by Lemma 1 will be injected into the circle method, where it will be crucial in bounding the minor arc contribution. On the major arcs we shall improve on Gallagher's analysis so as to show that hypothetical zeros close to $\sigma=1$ play no r\^{o}le. Thus, in contrast to previous workers, we will have no need for explicit numerical zero-free regions for $L$-functions. Naturally this produces a considerable simplification in the computational aspects of our work. Thus it is almost entirely the values of the constants in Lemma 1 which determine the number of powers of 2 appearing in Theorems 1 and 2. The paper naturally divides into two parts, one of which involves the circle method and zeros of $L$-functions, and the other of which is devoted to the proof of Lemma 1. We begin with the former. One remark about notation is in order. At various stages in the proof, numerical upper bounds on $\varpi$ will be required. Since we shall always take $\varpi$ to be sufficiently small, we shall assume that any such bound is satisfied. Moreover, since $\varpi$ is to be thought of as fixed, we will allow the implied constants in the $O(\ldots)$ and $\ll$ notations to depend on $\varpi$. \section{The Major Arcs} We shall follow the method of Gallagher [3; \S 1] closely. 
We choose a parameter $P$ in the range $1\le P\le N^{2/5}$ and define the major arcs ${\mathfrak M}$ as the set of $\alpha\in[0,1]$ for which there exist $a\in\mathbb{Z}$ and $q\in \mathbb{N}$ such that $q\le P$ and \[|\alpha-\frac{a}{q}|\le\frac{P}{qN}.\] If $\chi$ is a character to modulus $q$, we write \[c_n(\chi)=\sum_{a=1}^{q}\chi(a)e(\frac{an}{q})\] and \[\tau(\chi)=\sum_{a=1}^{q}\chi(a)e(\frac{a}{q}).\] Moreover we put \[A(\chi,\beta)=\sum_{\varpi N<p\le N}\chi(p)e(\beta p)\] and \[I_{n,s}(\chi,\chi')=\int_{-P/sN}^{P/sN}A(\chi,\beta) A(\chi',\beta)e(-\beta n)d\beta.\] If $\chi$ is a character to a modulus $r|q$ we also write $\chi_q$ for the induced character modulo $q$, and if $\chi,\chi'$ are characters to moduli $r$ and $r'$ respectively, we set \[J_n(\chi,\chi')=\twosum{q\le P}{[r,r']|q}\frac{1}{\phi(q)^2} c_n(\chi_q\chi'_q)\tau(\overline{\chi_q})\tau(\overline{\chi'_q}) I_{n,q}(\chi,\chi').\] Then, by a trivial variant of the argument leading to Gallagher [3; (3)], we find that \begin{equation} \int_{{\mathfrak M}}S(\alpha)^2 e(-\alpha n)d\alpha= \sum_{\chi,\chi'}J_n(\chi,\chi')+O(P^{5/2}), \end{equation} for any integer $n$, the sum being over primitive characters $\chi,\chi'$ to moduli $r,r'$ for which $[r,r']\le P$. In what follows we shall take $1\le n\le N$. To estimate the contribution from a particular pair of characters $\chi,\chi'$ we put \[A_q(\chi)=\{\int_{-P/qN}^{P/qN}|A(\chi,\beta)|^2 d\beta\}^{1/2}\] and \[C_n(\chi,\chi')=\twosum{q\le P}{[r,r']|q}\frac{1}{\phi(q)^2} |c_n(\chi_q\chi'_q)\tau(\overline{\chi_q})\tau(\overline{\chi'_q})|.\] Note that what Gallagher calls $||A(\chi)||$ is our $A_1(\chi)$. We have $A_q(\chi)\le A_m(\chi)$ whenever $m\le q$. Then, as in Gallagher [3; (4)] we find \begin{equation} |J_n(\chi,\chi')|\le C_n(\chi,\chi')A_{[r,r']}(\chi)A_{[r,r']}(\chi'). \end{equation} It is in bounding $C_n(\chi,\chi')$ that there is a loss in Gallagher's argument. Let $r''$ be the conductor of $\chi\chi'$, and write $m=[r,r']$. 
Moreover, for any positive integers $a$ and $n$ we write \[a_n=\frac{a}{(a,n)}.\] Then Gallagher shows that \[C_n(\chi,\chi')\le (rr'r'')^{1/2}\sum_{q\le P,\,m|q}(\phi(q)\phi(q_n))^{-1},\] where $q/m$ is square-free and coprime to $m$. Moreover we have $r''|m_n$. It follows that \[C_n(\chi,\chi')\le \frac{(rr'r'')^{1/2}}{\phi(m)\phi(m_n)} \sum_{(s,m)=1}\mu^2 (s)/\phi(s)\phi(s_n).\] The sum on the right is \[\prod_{p\, | \hspace{-1.1mm}/\, mn}(1+\frac{1}{(p-1)^2}) \prod_{p|n, p\, | \hspace{-1.1mm}/\, m}(1+\frac{1}{(p-1)})\ll \prod_{p|n, p\, | \hspace{-1.1mm}/\, m}\frac{p}{(p-1)},\] and \[\frac{m}{\phi(m)}\prod_{p|n, p\, | \hspace{-1.1mm}/\, m}\frac{p}{(p-1)}\le \frac{n}{\phi(n)}\frac{m_n}{\phi(m_n)}.\] We therefore deduce that \[C_n(\chi,\chi')\ll \frac{(rr'r'')^{1/2}}{m}\frac{m_n}{\phi^2 (m_n)} \frac{n}{\phi(n)}.\] Now if $p^e||r$ and $p^f||r'$, then $p^{|e-f|}|r''$, since $r''$ is the conductor of $\chi\chi'$. (Here the notation $p^e||r$ means, as usual, that $p^e|r$ and $p^{e+1}\, | \hspace{-1.1mm}/\, r$.) We therefore set \begin{equation} h=(r,r')\;\;\;\mbox{and}\;\;\; r=hs,\; r'=hs', \end{equation} so that $ss'|r''$ and $m=hss'$. Since \[\frac{m_n}{\phi^2(m_n)}\ll m_n ^{\varpi-1}\] we therefore have \[\frac{(rr'r'')^{1/2}}{m}\frac{m_n}{\phi^2 (m_n)}\ll (ss')^{-1/2}{r''}^{1/2}m_n^{\varpi-1}.\] Now, using the bounds $r''\le m_n$ and $ss'\le r''$, we find that \begin{eqnarray*} \frac{(rr'r'')^{1/2}}{m}\frac{m_n}{\phi^2 (m_n)}&\ll& (ss')^{-1/2}{r''}^{1/2}{r''}^{\varpi-1}\\ &=&(ss')^{-1/2}{r''}^{\varpi-1/2}\\ &\ll & (ss')^{\varpi-1}. \end{eqnarray*} Alternatively, using only the fact that $m_n\ge r''$, we have \begin{eqnarray*} \frac{(rr'r'')^{1/2}}{m}\frac{m_n}{\phi^2 (m_n)}&\ll & (ss')^{-1/2}m_n^{1/2}m_n^{\varpi-1}\\ &\ll & m_n^{\varpi-1/2}. \end{eqnarray*} These estimates produce \[C_n(\chi,\chi')\ll \min\{(ss')^{\varpi-1}\,,\,m_n^{\varpi-1/2}\} \frac{n}{\phi(n)}.\] On combining this with the bounds (2) and (3) we deduce the following result.
\begin{lemma} Suppose that $P\le N^{2/5-\varpi}$. Then \[\int_{{\mathfrak M}}S(\alpha)^2 e(-\alpha n)d\alpha=J_n(1,1)+ O(\frac{n}{\phi(n)}S_n)+O(N^{1-\varpi}),\] where \[S_n=\sum_{\chi,\chi'}A_{[r,r']}(\chi) A_{[r,r']}(\chi')\min\{(ss')^{\varpi-1}\,,\,m_n^{-1/3}\},\] the sum being over primitive characters, not both principal, of moduli $r,r'$, with $[r,r']\le P$. \end{lemma} We have next to consider $A_m(\chi)$. According to the argument of Montgomery and Vaughan [15; \S 7] we have \[A_m(\chi)\ll N^{1/2}\max_{\varpi N<x\le N}\max_{0<h\le x}(h+mN/P)^{-1} |\sum_{x}^{x+h}\chi(p)|.\] Note that we have firstly taken account of the restriction in (1) to primes $p>\varpi N$, and secondly replaced $(h+N/P)^{-1}$, as it occurs in Montgomery and Vaughan, by the smaller quantity $(h+mN/P)^{-1}$. The argument of [15; \S 7] clearly allows this. By partial summation we have \[\sum_{x}^{x+h}\chi(p)\ll (\log x)^{-1}\max_{0<j\le h}\sum_{x}^{x+j}\chi(p)\log p.\] Moreover, a standard application of the `explicit formula' for $\psi(x,\chi)$ produces the estimate \[\sum_{x}^{x+j}\chi(p)\log p\ll N^{1/2+3\varpi}(\log N)^2+ \sum_{\rho}|\frac{(x+j)^{\rho}}{\rho}-\frac{x^{\rho}}{\rho}|,\] where the sum over $\rho$ is for zeros of $L(s,\chi)$ in the region \[\beta\ge\frac{1}{2}+3\varpi,\;\;\;|\gamma|\le N.\] When $\chi$ is the trivial character we shall include the pole $\rho=1$ amongst the `zeros'. Since $j\le h$ and \[\frac{(x+j)^{\rho}}{\rho}-\frac{x^{\rho}}{\rho}\ll \min\{jN^{\beta-1}\,,\,N^{\beta}|\gamma|^{-1}\},\] we find that \[A_m(\chi)\ll \frac{P}{m}N^{4\varpi}+\frac{N^{1/2}}{\log N} \max_{0<h\le N}(h+mN/P)^{-1} \sum_{\rho}N^{\beta-1}\min\{h\,,\,N|\gamma|^{-1}\}.\] However we have \[\min\{\frac{h}{h+H}\,,\,\frac{A}{h+H}\}\le\min\{1\,,\,\frac{A}{H}\}\] whenever $h,H,A>0$. Applying this with $H=mN/P$ and $A=N|\gamma|^{-1}$, we deduce that \begin{equation} A_m(\chi)\ll \frac{P}{m}N^{4\varpi}+\frac{N^{1/2}}{\log N} \sum_{\rho}N^{\beta-1}\min\{1\,,\,Pm^{-1}|\gamma|^{-1}\}.
\end{equation} \section{The Sum $S_n$} In order to investigate the sum $S_n$ we decompose the available ranges for $r,r'$ and the corresponding zeros $\rho,\rho'$ into (overlapping) ranges \begin{equation} \left\{\begin{array}{cc}R\le r\le RN^{\varpi},&\;\;\;R'\le r'\le R'N^{\varpi},\\ T-1\le |\gamma|<TN^{\varpi},&\;\;\; T'-1\le |\gamma'|<T'N^{\varpi}. \end{array}\right. \end{equation} Clearly $O(1)$ such ranges suffice to cover all possibilities, so it is enough to consider the contribution from a fixed range of the above type. Throughout this section we shall follow the convention that $\rho=1$ is to be included amongst the `zeros' corresponding to the trivial character. Let $N(\sigma,\chi,T)$ denote, as usual, the number of zeros $\rho$ of $L(s,\chi)$ in the region $\beta\ge \sigma$, $|\gamma|\le T$, and let $N(\sigma,r,T)$ be the sum of $N(\sigma,\chi,T)$ for all characters $\chi$ of conductor $r$. Since \[N^{\beta-1}=N^{3\varpi-1/2}+\int_{1/2+3\varpi}^{\beta}N^{\sigma-1} (\log N)d\sigma\] for $\beta\ge 1/2+3\varpi$, we find that \begin{equation} \sum_{\rho}N^{\beta-1}\ll N^{6\varpi-1/2}RT+I(r)\log N, \end{equation} where the sum is over zeros of $L(s,\chi)$ for all $\chi$ of conductor $r$, subject to $T-1\le |\gamma|\le TN^{\varpi}$, and where \[I(r)=\int_{1/2+3\varpi}^{1}N^{\sigma-1}N(\sigma,r,TN^{\varpi})d\sigma.\] In view of the minimum occurring in (5) it is convenient to set \[m(R,T)=\min(1,\frac{P}{RT}).\] We now insert (7) into (5) so that, for given $r,r'$, the range (6) contributes to \[\sum_{\chi\!\!\!\pmod{r}}A_m(\chi)\] a total \begin{eqnarray} &\ll& \phi(r)\frac{P}{m}N^{4\varpi}+\frac{N^{1/2}}{\log N}m(R,T)N^{6\varpi-1/2}RT +N^{1/2}m(R,T)I(r)\nonumber\\ &\ll& PN^{6\varpi}+N^{1/2}m(R,T)I(r).
\end{eqnarray} Similarly, for the double sum \[\sum_{\chi\!\!\!\pmod{r}}\sum_{\chi'\!\!\!\pmod{r'}}A_m(\chi)A_m(\chi')\] the contribution is \begin{equation} \begin{array}{ll}\ll & P^{2}N^{12\varpi}+PN^{1/2+6\varpi}m(R,T)I(r)\\ &{}+PN^{1/2+6\varpi}m(R',T')I(r')+Nm(R,T)m(R',T')I(r)I(r'). \end{array} \end{equation} We then sum over $r,r'$ using the following lemma. \begin{lemma} Let \[\max_{r\le R}N(\sigma,r,T)=N_1(R),\;\;\; \max_{r'\le R'}N(\sigma',r',T')=N_1(R'),\] and \[\sum_{r\le R}N(\sigma,r,T)=N_2(R),\;\;\; \sum_{r'\le R'}N(\sigma',r',T')=N_2(R').\] In the notation of (4) we have \begin{eqnarray} \lefteqn{\sum_{r\le R}\sum_{r'\le R'} N(\sigma,r,T)N(\sigma',r',T')(ss')^{\varpi-1}}\hspace{2cm}\\ &\ll & \{N_1(R)N_2(R)N_1(R')N_2(R')\}^{1/2+2\varpi},\nonumber \end{eqnarray} for $1/2\le\sigma,\sigma'\le 1$. Moreover, if \[P\le N^{45/154-4\varpi},\] then \begin{equation} \sum_{r,r'}m(R,T)m(R',T') N(\sigma,r,TN^{\varpi})N(\sigma',r',T'N^{\varpi})(ss')^{\varpi-1}, \end{equation} \begin{equation}\ll N^{(1-\varpi)(1-\sigma)+(1-\varpi)(1-\sigma')} \end{equation} for $1/2+3\varpi\le\sigma,\sigma'\le 1$, where the summation is for $R\le r\le RN^{\varpi}$ and $R'\le r'\le R'N^{\varpi}$. \end{lemma} We shall prove this at the end of this section. Henceforth we shall assume that $P\le N^{45/154-4\varpi}$. For suitable values of $\eta$ in the range \begin{equation} 0\le\eta\le \log\log N \end{equation} we shall define $\cl{B}(\eta)$ to be the set of characters $\chi$ of conductor $r\le P$, for which the function $L(s,\chi)$ has at least one zero in the region \[\beta>1-\frac{\eta}{\log N},\;\;\; |\gamma|\le N.\] According to our earlier convention the trivial character is always in $\cl{B}(\eta)$. 
Now, if we restrict attention to pairs $\chi,\chi'$ for which $\chi\not\in\cl{B}(\eta)$ we have \begin{eqnarray*} \lefteqn{\sum_{R\le r\le RN^{\varpi}}\sum_{R'\le r'\le R'N^{\varpi}} Nm(R,T)m(R',T')I(r)I(r')(ss')^{\varpi-1}}\hspace{3cm}\\ &\ll & \int_{1/2+3\varpi}^{1-\eta/\log N}\int_{1/2+3\varpi}^{1} N^{1-\varpi(1-\sigma)-\varpi(1-\sigma')}d\sigma' d\sigma\\ &\ll & N^{1-\varpi\eta/\log N}(\log N)^{-2}\\ &=& e^{-\varpi\eta}N(\log N)^{-2}. \end{eqnarray*} Terms for which $\chi\in\cl{B}(\eta)$ but $\chi'\not\in\cl{B}(\eta)$ may be handled similarly. This concludes our discussion of the final term in (9) for the time being. To handle the third term in (9) we use the zero density estimate \begin{equation} \sum_{r\le R}N(\sigma,r,T)\ll (R^2 T)^{\kappa(\sigma)(1-\sigma)}, \end{equation} where \begin{equation} \kappa(\sigma)=\left\{\begin{array}{cc} \frac{3}{2-\sigma}+\varpi, & \frac{1}{2}\le\sigma\le\frac{3}{4}\\ \frac{12}{5}+\varpi, & \frac{3}{4}\le\sigma\le 1.\end{array}\right. \end{equation} This follows from results of Huxley [5], Jutila [7; Theorem 1] and Montgomery [14; Theorem 12.2]. For each fixed value of $r'$ we have \begin{eqnarray*} \sum_{r} (ss')^{\varpi-1}&\le& \sum_{h|r'} (r'/h)^{\varpi-1}\sum_{s\le P/h}s^{\varpi-1}\\ &\ll &\sum_{h|r'} (r'/h)^{\varpi-1}(P/h)^{\varpi}\\ &\ll & N^{\varpi}. 
\end{eqnarray*} The contribution of the third term in (9) to $S_n$ is therefore \[\ll PN^{1/2+5\varpi}m(R',T')\sum_{r'}I(r').\] However the bound (14) shows that \[m(R',T')\sum_{r'}N(\sigma,r',T'N^{\varpi})\ll \min\{1\,,\,\frac{P}{R'T'}\} ({R'}^2 N^{2\varpi}T'N^{\varpi})^{\kappa(\sigma)(1-\sigma)}.\] Since \[0\le \kappa(\sigma)(1-\sigma)\le 1\] in the range $1/2+\varpi\le\sigma\le 1$, this is \[\ll (P^2 N^{3\varpi})^{\kappa(\sigma)(1-\sigma)}.\] Moreover, if $P\le N^{45/154-4\varpi}$, then \[(P^2 N^{3\varpi})^{\kappa(\sigma)(1-\sigma)}N^{\sigma-1}\le N^{f(\sigma)}\] with \begin{eqnarray*} f(\sigma)&=&(\frac{45}{77}\kappa(\sigma)-1)(1-\sigma)\\ &\le& (\frac{45}{77}\{\frac{12}{5}+\varpi\}-1)(1-\sigma)\\ &\le& (\frac{31}{77}+\varpi)(1-\sigma)\\ &\le& (\frac{31}{77}+\varpi)\frac{1}{2}\\ &\le& \frac{31}{154}+\varpi. \end{eqnarray*} It follows that the contribution of the third term in (9) to $S_n$ is \[\ll PN^{1/2+6\varpi}.N^{31/154+\varpi}\ll N^{1-\varpi}.\] The second term may of course be handled similarly. Finally we deal with the first term of (9), which produces a contribution to $S_n$ which is \begin{eqnarray*} &\ll & P^{2}N^{12\varpi}\sum_{r,r'}(ss')^{\varpi-1}\\ &\ll & P^{2}N^{12\varpi}\sum_{ss'h\le P}(ss')^{\varpi-1}\\ &\ll & P^{2}N^{12\varpi}\sum_{ss'\le P}P(ss')^{\varpi-2}\\ &\ll & P^{3}N^{12\varpi}\\ &\ll & N^{1-\varpi}, \end{eqnarray*} for $P\le N^{45/154-4\varpi}$. We summarize our conclusions thus far as follows. \begin{lemma} If $P\le N^{45/154-4\varpi}$ then \[S_n\le \sum_{\chi,\chi'\in\cl{B}(\eta)}A_m(\chi)A_m(\chi')m_n^{-1/3} +O(e^{-\varpi\eta}N(\log N)^{-2}).\] \end{lemma} To handle the characters in $\cl{B}(\eta)$ we use the zero-density estimate \begin{equation} N(\sigma,r,T)\ll (rT)^{\kappa(\sigma)(1-\sigma)}, \end{equation} with $\kappa(\sigma)$ given by (15). This also follows from work of Huxley [5], Jutila [7; Theorem 1] and Montgomery [14; Theorem 12.1].
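The rational bookkeeping in the estimate of $f(\sigma)$ above can be checked mechanically. Ignoring the $\varpi$ terms, $\kappa(\sigma)\le 12/5$ and $P\le N^{45/154}$ give $f(\sigma)\le\frac{31}{77}(1-\sigma)\le\frac{31}{154}$, and the total exponent of $N$ in $P\cdot N^{1/2}\cdot N^{31/154}$ then falls strictly below $1$, which is what yields the bound $\ll N^{1-\varpi}$:

```python
from fractions import Fraction as F

kappa = F(12, 5)            # kappa(sigma) on [3/4, 1], with the varpi term dropped
theta = F(45, 154)          # P <= N^theta (up to the -4*varpi correction)

slope = 2 * theta * kappa - 1        # coefficient of (1 - sigma) in f(sigma)
f_max = slope * F(1, 2)              # f is largest at sigma = 1/2
total = theta + F(1, 2) + f_max      # exponent of N in P * N^(1/2) * N^(f_max)

print(slope, f_max, total)           # 31/77 31/154 153/154
```

Exact rational arithmetic avoids any doubt about floating-point rounding in such exponent computations.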
Thus \begin{eqnarray*} m(R,T)N(\sigma,r,TN^{\varpi})&\ll & \min\{1\,,\,\frac{P}{RT}\}(rTN^{\varpi})^{\kappa(\sigma)(1-\sigma)}\\ &\ll &(PN^{2\varpi})^{\kappa(\sigma)(1-\sigma)}\\ &\ll &(PN^{2\varpi})^{(12/5+\varpi)(1-\sigma)}\\ &\ll &N^{(1-\varpi)(1-\sigma)} \end{eqnarray*} for $P\le N^{45/154-4\varpi}$. We deduce that \[m(R,T)I(r)\ll (\log N)^{-1}.\] It follows from (8) that \[A_m(\chi)\ll N^{1/2}(\log N)^{-1}.\] We also note that \[\#\cl{B}(\eta)\ll\sum_{r}N(1-\frac{\eta}{\log N},r,N)\ll (P^2 N)^{3\eta/\log N}\ll e^{6\eta},\] by (14), since $\kappa(\sigma)\le 3$ for all $\sigma$. We therefore have the following facts. \begin{lemma} If $\chi\in\cl{B}(\eta)$, we have $A_m(\chi)\ll N^{1/2}(\log N)^{-1}$. Moreover, we have $\#\cl{B}(\eta)\ll e^{6\eta}$. \end{lemma} We end this section by establishing Lemma 3. We shall suppose, as we may by the symmetry, that \begin{equation} N_2(R)N_1(R')\le N_2(R')N_1(R). \end{equation} Let $U\ge 1$ be a parameter whose value will be assigned in due course, see (18).
For those terms of the sum (10) in which $ss'\ge U$ we plainly have a total \[\le \sum_{r\le R}\sum_{r'\le R'}N(\sigma,r,T)N(\sigma',r',T')U^{\varpi-1} \ll N_2(R)N_2(R')U^{\varpi-1}.\] On the other hand, when $ss'<U$ we observe that, for fixed $s,s'$ we have \begin{eqnarray*} \sum_{h}N(\sigma,hs,T)N(\sigma',hs',T')&\ll& \sum_{h}N(\sigma,hs,T)N_1(R')\\ &\ll& \sum_{r}N(\sigma,r,T)N_1(R')\\ &\ll& N_2(R)N_1(R').\\ \end{eqnarray*} On summing over $s$ and $s'$ we therefore obtain a total \[\ll N_2(R)N_1(R')\sum_{ss'\le U}(ss')^{\varpi-1}\ll N_2(R)N_1(R')U^{2\varpi}.\] It follows that the sum (10) is \[\ll N_2(R)\{N_2(R')U^{2\varpi-1}+N_1(R')U^{2\varpi}\}.\] We therefore choose \begin{equation} U=N_2(R')/N_1(R'), \end{equation} whence the sum (10) is \begin{eqnarray*} &\ll& N_2(R)N_1(R')U^{2\varpi}\\ &\ll& N_2(R)N_1(R')\{N_1(R)N_2(R)N_1(R')N_2(R')\}^{2\varpi}\\ &\ll& \{N_2(R)N_1(R')N_2(R')N_1(R)\}^{1/2} \{N_1(R)N_2(R)N_1(R')N_2(R')\}^{2\varpi}\\ \end{eqnarray*} in view of (17). This produces the required bound. To establish (12) we shall bound $N_1(R)$ and $N_1(R')$ using (16). Moreover to handle $N_2(R)$ and $N_2(R')$ we shall use the estimate \[\sum_{r\le R}N(\sigma,r,T)\ll \left\{\begin{array}{cc} (R^2 T)^{\kappa(\sigma)(1-\sigma)},\; &\; \frac{1}{2}+\varpi\le\sigma\le \frac{23}{38}\\ (R^2 T^{6/5})^{\lambda(1-\sigma)},\; &\; \frac{23}{38}<\sigma\le 1,\end{array}\right.\] where \[\lambda=\frac{20}{9}+\varpi.\] This follows from (14) and (15) along with Heath-Brown [4; Theorem 2] and Jutila [7; Theorem 1]. We now see that the sum (11) may be estimated as \begin{equation} \ll m(R,T)R^aT^c.m(R',T'){R'}^b{T'}^d. 
N^{e}, \end{equation} say, where \[a=\left\{\begin{array}{cc} 3\kappa(\sigma)(1-\sigma)(\frac{1}{2}+2\varpi), \; &\; \frac{1}{2}+3\varpi\le\sigma\le \frac{23}{38}\\ \{\kappa(\sigma)+2\lambda\}(1-\sigma)(\frac{1}{2}+2\varpi),\; &\; \frac{23}{38}<\sigma\le 1,\end{array}\right.\] and \[c=\left\{\begin{array}{cc} 2\kappa(\sigma)(1-\sigma)(\frac{1}{2}+2\varpi) ,\; &\; \frac{1}{2}+3\varpi\le\sigma\le \frac{23}{38}\\ \{\kappa(\sigma)+6\lambda/5\}(1-\sigma)(\frac{1}{2}+2\varpi),\; &\; \frac{23}{38}<\sigma\le 1,\end{array}\right.\] and similarly for $b$ and $d$. Moreover we may take \[e=6\varpi(1-\sigma)+6\varpi(1-\sigma').\] It therefore follows that $0\le c,d< 1$, whence (19) is maximal for $T=P/R$ and $T'=P/R'$. Similarly we have $a\ge c$ and $b\ge d$. Thus, after substituting $T=P/R$ and $T'=P/R'$ in (19), the resulting expression is increasing with respect to $R$ and $R'$, and hence is maximal when $R=R'=P$. We therefore see that (19) is \[\ll P^{a+b}N^e.\] Finally one can check that \[(\frac{45}{154}-4\varpi)a\le (1-7\varpi)(1-\sigma),\] and similarly for $b$. This suffices to establish the bound (12) for $P\le N^{45/154-4\varpi}$. \section{Summation Over Powers of 2} In this section we consider the major arc integral \[\int_{{\mathfrak M}}S(\alpha)^2 T(\alpha)^K e(-\alpha N)d\alpha,\] where we now assume $N$ to be even. According to Lemmas 2 and 4 we have \begin{eqnarray} \int_{{\mathfrak M}}S(\alpha)^2 T(\alpha)^K e(-\alpha N)d\alpha&=& \Sigma_0+O(e^{-\varpi\eta}N(\log N)^{-2}\Sigma_1)\nonumber\\ &&\hspace{1cm}+O(N(\log N)^{-2}\Sigma_2), \end{eqnarray} where \[\Sigma_0=\sum_{n} J_n(1,1),\] \[\Sigma_1=\sum_{n}\frac{n}{\phi(n)}\] and \[\Sigma_2=\sum_{\chi,\chi'\in\cl{B}(\eta)}\sum_{n} \frac{n}{\phi(n)}m_n^{-1/3}.\] In each case the sum over $n$ is for values \begin{equation} n=N-\sum_{j=1}^{K}2^{\nu_j}. \end{equation} We begin by considering the main term $\Sigma_0$.
We put \[T(\beta)=\sum_{\varpi N<m\le N}\frac{e(\beta m)}{\log m}\] and \[R(\beta)=S(\beta)-T(\beta).\] We also set \[||R||=\int_{-P/N}^{P/N}|R(\beta)|^2 d\beta\] and \[J(n)=\twosum{\varpi N<m_1,m_2\le N}{m_1+m_2=n}(\log m_1)^{-1}(\log m_2)^{-1}.\] Then, as in Gallagher [3; (11)], we have \begin{eqnarray} J_n(1,1)&=&J(n)\cl{S}(n) +O(N(\log N)^{-2}\frac{n}{\phi(n)}d(n)\frac{\log P}{P})\nonumber\\ &&\hspace{1cm}+O(\frac{n}{\phi(n)}\{N^{1/2}(\log N)^{-1}||R||+||R||^2\}), \end{eqnarray} where \[\cl{S}(n)=\prod_{p|n}(\frac{p}{p-1})\prod_{p\, | \hspace{-1.1mm}/\, n}(1-\frac{1}{(p-1)^2}).\] In analogy to (5) we have \[||R||\ll PN^{4\varpi}+\frac{N^{1/2}}{\log N} \sum_{\rho}N^{\beta-1}\min\{1\,,\,P|\gamma|^{-1}\},\] where the sum over $\rho$ is for zeros of $\zeta(s)$ in the region \[\beta\ge\frac{1}{2}+3\varpi,\;\;\;|\gamma|\le N.\] We split the range for $|\gamma|$ into $O(1)$ overlapping intervals \[T-1\le |\gamma|\le TN^{\varpi},\] and find, as in (8) that each range contributes \[\ll PN^{4\varpi}+N^{1/2}\min\{1\,,\,\frac{P}{T}\} \{N^{6\varpi-1/2}T+\int_{1/2+3\varpi}^{1}N^{\sigma-1}N(\sigma,1,TN^{\varpi})d\sigma\}\] to $||R||$. Using the case $R=1$ of (14), together with Vinogradov's zero-free region \[\sigma\ge 1-\frac{c_0}{(\log T)^{3/4}(\log\log T)^{3/4}}\] (see Titchmarsh [16; (6.15.1)]), we find that this gives \[||R||\ll N^{1/2}(\log N)^{-10},\] say, for $P\le N^{45/154-4\varpi}$. The error terms in (22) are therefore $O(N(\log N)^{-9})$. We also note that \begin{eqnarray*} J(n)&=&(\log N)^{-2}\#\{m_1,m_2:\varpi N<m_1,m_2\le N,\,m_1+m_2=n\}\\ &&\hspace{3cm}+O(N(\log N)^{-3})\\ &=&(\log N)^{-2}R(n)+O (N(\log N)^{-3}), \end{eqnarray*} where \[R(n)=\left\{\begin{array}{cc} 2N-n,& (1+\varpi)N\le n\le 2N,\\ n-2\varpi N,& 2\varpi N\le n\le (1+\varpi)N,\\ 0, & \mbox{otherwise}.\end{array}\right.\] In particular, we have $R(N-m)=(1-2\varpi)N+O(m)$ for $1\le m\le N$.
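The function $R(n)$ is simply the number of lattice points on the segment $m_1+m_2=n$ with $\varpi N<m_1,m_2\le N$, up to $O(1)$ boundary terms. A brute-force comparison (ours; the parameter values $N=1000$, $\varpi=0.1$ are illustrative only):

```python
N, varpi = 1000, 0.1          # illustrative parameters only
lo = int(varpi * N)           # m runs over the integers in (varpi*N, N]

def R(n):
    # the piecewise-linear function R(n) from the text
    if (1 + varpi) * N <= n <= 2 * N:
        return 2 * N - n
    if 2 * varpi * N <= n <= (1 + varpi) * N:
        return n - 2 * varpi * N
    return 0

for n in range(2 * N + 2):
    count = sum(1 for m1 in range(lo + 1, N + 1) if lo + 1 <= n - m1 <= N)
    assert abs(count - R(n)) <= 2     # agreement up to O(1) edge effects
```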
Since \[\cl{S}(n)\ll\frac{n}{\phi(n)}\ll\log\log N, \] we find, on taking $n$ of the form (21), that \[\sum_{n}J(n)\cl{S}(n)=(1-2\varpi)N(\log N)^{-2}\sum_{n}\cl{S}(n)+O(N(\log N)^{K-5/2})\] for $K\ge 2$, whence \[\Sigma_0=(1-2\varpi)N(\log N)^{-2}\sum_{n}\cl{S}(n)+ O(N(\log N)^{K-5/2}).\] Since the numbers $n$ are all even, we have \[\cl{S}(n)=2C_0\prod_{p|n, p\not=2}\frac{p-1}{p-2}=2C_0\sum_{d|n}k(d),\] where \begin{equation} C_0=\prod_{p\not=2}(1-\frac{1}{(p-1)^2}) \end{equation} and $k(d)$ is the multiplicative function defined by taking \begin{equation} k(p^e)=\left\{\begin{array}{cc} 0,\; & p=2\;\mbox{or}\;e\ge 2,\\ (p-2)^{-1},\;& \mbox{otherwise.} \end{array}\right. \end{equation} For any odd integer $d$ we shall define $\varepsilon(d)$ to be the order of 2 in the multiplicative group modulo $d$, and we shall set \[H(d;N,K)=\#\{(\nu_1,\ldots,\nu_K): 1\le\nu_i\le\varepsilon(d),\, d|N-\sum 2^{\nu_i}\}.\] Then for any fixed $D$ we have \begin{eqnarray*} \sum_{n}\cl{S}(n)&=&2C_0\sum_{d}k(d)\#\{n:d|n\}\\ &\ge&2C_0\sum_{d\le D}k(d)\#\{n:d|n\}\\ &\ge&2C_0\sum_{d\le D}k(d)H(d;N,K)[L/\varepsilon(d)]^K\\ &\ge&\{1+O((\log N)^{-1})\}2C_0L^K \sum_{d\le D}k(d)H(d;N,K)\varepsilon(d)^{-K}. \end{eqnarray*} We shall take $D=5$. We trivially have $\varepsilon(1)=1$ and $H(1;N,K)=1$ for all $N$ and $K$. When $d=3$ or $d=5$ the powers of 2 run over all non-zero residues modulo $d$, and it is an easy exercise to check that \[H(d;N,K)=\left\{\begin{array}{cc} \frac{1}{d}\{(d-1)^K-(-1)^K\}, & d\, | \hspace{-1.1mm}/\, N\\ \frac{1}{d}\{(d-1)^K+(-1)^K (d-1)\}, & d|N.\end{array}\right.\] Thus if $K\ge 7$ we have \[H(3;N,K)\varepsilon(3)^{-K}\ge \frac{1}{3}(1-2^{-6})\] and \[H(5;N,K)\varepsilon(5)^{-K}\ge \frac{1}{5}(1-4^{-6}),\] whence \[2\sum_{d\le D}k(d)H(d;N,K)\varepsilon(d)^{-K}\ge 2.7895\] for any choice of $N$. We therefore conclude that \begin{equation} \Sigma_0\ge 2.7895(1-2\varpi)C_0N(\log N)^{-2}L^K+O(N(\log N)^{K-5/2}), \end{equation} providing that $K\ge 9$. 
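The closing numerical bound can be double-checked mechanically. The following script (ours; all names illustrative) confirms $\varepsilon(3)=2$, $\varepsilon(5)=4$, and verifies the lower bound $2.7895$ for every $K\ge 7$, whichever of $d=3,5$ divide $N$:

```python
def order2(d):
    """epsilon(d): multiplicative order of 2 modulo the odd number d."""
    o, x = 1, 2 % d
    while x != 1:
        x = 2 * x % d
        o += 1
    return o

def H(d, K, divides):
    """H(d;N,K) for d = 3, 5, from the displayed formula."""
    if divides:                                        # the case d | N
        return ((d - 1) ** K + (-1) ** K * (d - 1)) / d
    return ((d - 1) ** K - (-1) ** K) / d

k = {3: 1.0, 5: 1.0 / 3.0}      # k(3) = 1/(3-2), k(5) = 1/(5-2)
assert order2(3) == 2 and order2(5) == 4

worst = min(
    2 * (1 + k[3] * H(3, K, d3) / order2(3) ** K
           + k[5] * H(5, K, d5) / order2(5) ** K)
    for K in range(7, 40)
    for d3 in (False, True)
    for d5 in (False, True)
)
assert worst >= 2.7895          # attained at K = 7 with 3 | N and 5 | N
```

The minimum is $2(1+\frac{1}{3}(1-2^{-6})+\frac{1}{15}(1-4^{-6}))=2.78955\ldots$, as claimed.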
To bound $\Sigma_1$ we note that \[\frac{n}{\phi(n)}\ll\prod_{p|n,\,p\not=2}(1+\frac{1}{p})= \sum_{q|n,\,2\, | \hspace{-1.1mm}/\, q}\frac{\mu^2(q)}{q}.\] We deduce that \[\Sigma_1\ll \sum_{q\le N,\,2\, | \hspace{-1.1mm}/\, q}\frac{\mu^2(q)}{q}\#\{n:\,q|n\}.\] However, if $q$ is odd, then \[\#\{\nu:0\le\nu\le L,\,2^{\nu}\equiv m\!\!\!\pmod{q}\}\ll 1+\frac{L}{\varepsilon(q)}.\] It follows that \[\#\{n:\,q|n\}\ll L^{K-1}+L^K /\varepsilon(q),\] whence \[\Sigma_{1}\ll (\log N)^K+ (\log N)^K\sum_{q\le N,\,2\, | \hspace{-1.1mm}/\, q}\frac{\mu^2(q)}{q\varepsilon(q)}.\] To bound the final sum we call on the following simple result of Gallagher [3; Lemma 4] \begin{lemma} We have \[\sum_{\varepsilon(q)\le x}\frac{\mu^2(q)}{\phi^2(q)}q\ll\log x.\] \end{lemma} From this we deduce that \begin{equation} \sum_{x/2<\varepsilon(q)\le x}\frac{\mu^2(q)}{q\varepsilon(q)}\ll\frac{\log x}{x}. \end{equation} We take $x$ to run over powers of $2$ and sum the resulting bounds to deduce that \[\sum_{q\le N,\,2\, | \hspace{-1.1mm}/\, q}\frac{\mu^2(q)}{q\varepsilon(q)}\ll 1,\] and hence that \begin{equation} \Sigma_{1}\ll (\log N)^K. \end{equation} Turning now to $\Sigma_2$, we fix a particular pair of characters $\chi,\chi'\in\cl{B}(\eta)$, and investigate \[\sum_{n}\frac{n}{\phi(n)}m_n^{-1/3}=\Sigma_2(\chi,\chi'),\] say. Let $m=[r,r']$ as usual, and write $m=2^{\mu}f$, with $f$ odd. 
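The counting bound above for powers of $2$ in a fixed residue class is elementary, since $2^\nu \bmod q$ is periodic with period $\varepsilon(q)$. A brute-force check (ours, over odd $q<200$ with $L=100$) shows it even holds with implied constant $1$:

```python
def order2(d):
    """epsilon(d): multiplicative order of 2 modulo the odd number d."""
    o, x = 1, 2 % d
    while x != 1:
        x = 2 * x % d
        o += 1
    return o

L = 100
for q in range(3, 200, 2):
    eps = order2(q)
    for m in range(q):
        hits = sum(1 for nu in range(L + 1) if pow(2, nu, q) == m)
        # the text's bound  #{nu} << 1 + L/eps(q), here with constant 1
        assert hits <= 1 + L / eps
```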
Put $g=(f,n)$ so that \begin{equation} m_n\ge f_n=f/g, \end{equation} and consider \[\sum_{g|n}\frac{n}{\phi(n)}.\] As before we have \[\frac{n}{\phi(n)}\ll\sum_{q|n,\,2\, | \hspace{-1.1mm}/\, q}\frac{\mu^2(q)}{q}.\] Terms $q$ with $q\ge d(n)$ can contribute at most $1$ in total, so that in fact \[\frac{n}{\phi(n)}\ll\sum_{q|n,\,2\, | \hspace{-1.1mm}/\, q, q\le d(n)}\frac{\mu^2(q)}{q}.\] Thus, if \[D=\max_{1\le n\le N}d(n),\] we deduce as before that \begin{eqnarray*} \sum_{g|n}\frac{n}{\phi(n)}&\ll&\sum_{q\le D,\,2\, | \hspace{-1.1mm}/\, q} \frac{\mu^2(q)}{q}\#\{n:\,[g,q]|n\}\\ &\ll &\sum_{q\le D,\,2\, | \hspace{-1.1mm}/\, q}\frac{\mu^2(q)}{q}\{(\log N)^{K-1}+\frac{(\log N)^K}{\varepsilon([g,q])}\}. \end{eqnarray*} Here we note that \[\sum_{q\le D}q^{-1}\ll \log D\ll\frac{\log N}{\log\log N}.\] To deal with the remaining terms let $\xi$ be a positive parameter. Then \begin{eqnarray*} \sum_{\varepsilon(q)>\xi}\frac{\mu^2(q)}{q\varepsilon([g,q])}&\le & \sum_{\varepsilon(q)>\xi}\frac{\mu^2(q)}{q\varepsilon(q)}\\ &\ll& \frac{\log{\xi}}{\xi}, \end{eqnarray*} by (26). If $\varepsilon(q)\le \xi$ we note that \begin{equation} q\le 2^{\varepsilon(q)}-1,\;\; \mbox{for}\;\;q>1, \end{equation} so that $q\le 2^{\xi}$. Thus \begin{eqnarray*} \sum_{\varepsilon(q)\le\xi}\frac{\mu^2(q)}{q\varepsilon([g,q])}&\le & \sum_{q\le 2^{\xi}}\frac{\mu^2(q)}{q\varepsilon(g)}\\ &\le & \frac{\xi}{\varepsilon(g)}. 
\end{eqnarray*} On choosing $\xi=\sqrt{\varepsilon(g)}$ we therefore conclude that \[\sum_{2\, | \hspace{-1.1mm}/\, q}\frac{\mu^2(q)}{q\varepsilon([g,q])}\ll \frac{\log\varepsilon(g)}{\sqrt{\varepsilon(g)}},\] and hence that \[\sum_{g|n}\frac{n}{\phi(n)}\ll (\log N)^K \{(\log\log N)^{-1}+\varepsilon(g)^{-1/3}\}.\] It follows from (29) that $\varepsilon(g)\gg\log g$, and we now conclude that \[\sum_{g|n}\frac{n}{\phi(n)}\ll (\log N)^K \{(\log\log N)^{-1}+(\log g)^{-1/3}\}.\] We now observe from (28) that \[\Sigma_2(\chi,\chi')\le \sum_{n}\frac{n}{\phi(n)}(\frac{f}{(f,n)})^{-1/3}.\] Let $\tau\ge 1$ be a parameter to be fixed in due course. Then terms in which $(f,n)\le f/\tau$ contribute \[\le \tau^{-1/3}\sum_{n}\frac{n}{\phi(n)}=\tau^{-1/3}\Sigma_1 \ll \tau^{-1/3}(\log N)^K,\] by (27). The remaining terms contribute \begin{eqnarray*} &\le&\sum_{g|f,\,g\ge f/\tau}(f/g)^{-1/3}\sum_{g|n}\frac{n}{\phi(n)}\\ &\ll& \sum_{g|f,\,g\ge f/\tau}(f/g)^{-1/3} (\log N)^K \{(\log\log N)^{-1}+(\log g)^{-1/3}\}\\ &\ll& \sum_{g|f,\,g\ge f/\tau} (\log N)^K \{(\log\log N)^{-1}+(\log f)^{-1/3}\}\\ &\ll& \sum_{j|f,\,j\le \tau} (\log N)^K \{(\log\log N)^{-1}+(\log f)^{-1/3}\}\\ &\ll& \tau(\log N)^K \{(\log\log N)^{-1}+(\log f)^{-1/3}\}. \end{eqnarray*} We deduce that \[\Sigma_2(\chi,\chi')\ll\tau^{-1/3}(\log N)^K+ \tau(\log N)^K \{(\log\log N)^{-1}+(\log f)^{-1/3}\}.\] We therefore choose \[\tau=\{(\log\log N)^{-1}+(\log f)^{-1/3}\}^{-3/4},\] whence \begin{equation} \Sigma_2(\chi,\chi')\ll (\log N)^K\{(\log\log N)^{-1/4}+(\log f)^{-1/12}\}. \end{equation} In order to bound $f$ from below we note that, since $\chi,\chi'$ are not both trivial, we may suppose that $\chi$, say, is non-trivial. We then use a result of Iwaniec [6;~Theorem~2]. This shows that if $L(\beta+i\gamma,\chi)=0$, with $|\gamma|\le N$, and $\chi$ of conductor $r\le N$, then either $\chi$ is real, or \[1-\beta\gg \{\log d+(\log N\log\log N)^{3/4}\}^{-1},\] where $d$ is the product of the distinct prime factors of $r$. 
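The lower bound $\varepsilon(g)\gg\log g$ used above comes from (29): since $2^{\varepsilon(q)}\equiv 1\pmod{q}$ we have $q\mid 2^{\varepsilon(q)}-1$. A quick numerical confirmation (ours) over odd $q\le 2000$:

```python
from math import log2

def order2(d):
    """epsilon(d): multiplicative order of 2 modulo the odd number d."""
    o, x = 1, 2 % d
    while x != 1:
        x = 2 * x % d
        o += 1
    return o

for q in range(3, 2001, 2):
    e = order2(q)
    assert (2 ** e - 1) % q == 0     # q divides 2^eps(q) - 1 ...
    assert q <= 2 ** e - 1           # ... which is inequality (29),
    assert e >= log2(q + 1)          # ... whence eps(q) >> log q
```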
In our application we clearly have $f\ge d/2$, so that if $\chi$, say, is in $\cl{B}(\eta)$ we must have \[\frac{\eta}{\log N}\gg \{\log f+(\log N\log\log N)^{3/4}\}^{-1}\] if $\chi$ is not real. Thus, if we insist that $\eta\le (\log N)^{1/5}$ it follows that either \[\log f\gg\eta^{-1}\log N\gg (\log N)^{4/5},\] or $\chi$ is real. Of course if $\chi$ is real we will have $16\, | \hspace{-1.1mm}/\, r$, whence $f\gg r$. Moreover we will also have \[(\log N)^{-4/5}\gg\frac{\eta}{\log N}\gg 1-\beta\gg r^{\varpi-1/2},\] so that $f\gg r\gg(\log N)^{3/2}$. Thus in either case we find that $\log f\gg\log\log N$, so that (30) yields \[ \Sigma_2(\chi,\chi')\ll (\log N)^K(\log\log N)^{-1/12}.\] In view of the bound for $\#\cl{B}(\eta)$ given in Lemma 5, we conclude that \begin{equation} \Sigma_2\ll e^{12\eta}(\log N)^K(\log\log N)^{-1/12}. \end{equation} We may now insert the bounds (25), (27) and (31) into (20) to deduce that \begin{eqnarray*} \int_{{\mathfrak M}}S(\alpha)^2 T(\alpha)^K e(-\alpha N)d\alpha &\ge &2.7895(1-2\varpi)C_0N(\log N)^{-2}L^K\\ &&\hspace{2mm}{}+O(N(\log N)^{K-5/2})\\ &&\hspace{4mm}{}+O(e^{-\varpi\eta}N(\log N)^{K-2})\\ &&\hspace{6mm}{}+O(e^{12\eta}N(\log N)^{K-2}(\log\log N)^{-1/12}). \end{eqnarray*} We therefore define $\eta$ by taking \[e^{\eta}=(\log\log N)^{1/145},\] so that $\eta$ satisfies the condition (13), and conclude as follows. \begin{lemma} If $P\le N^{45/154-4\varpi}$ and $K\ge 9$ we have \[\int_{{\mathfrak M}}S(\alpha)^2 T(\alpha)^K e(-\alpha N)d\alpha \ge 2.7895(1-3\varpi)C_0N(\log 2)^{-2}L^{K-2}\] for large enough $N$. \end{lemma} \section{A Mean Square Estimate} In this section we shall estimate the mean square \[J({\mathfrak m})=\int_{{\mathfrak m}}|S(\alpha)T(\alpha)|^2 d\alpha,\] where ${\mathfrak m}=[0,1]\setminus{\mathfrak M}$ is the set of minor arcs.
Instead of this integral, previous researchers have worked with the larger integral \[J=\int_{0}^1 |S(\alpha)T(\alpha)|^2 d\alpha.\] Thus it was shown by Li [9; Lemma 6], building on work of Liu, Liu and Wang [13; Lemma 4] that \[J\le (24.95+o(1))\frac{C_0}{\log^2 2}N,\] In this section we shall improve on this bound, and give a lower bound for the corresponding major arc integral \[J({\mathfrak M})=\int_{{\mathfrak M}}|S(\alpha)T(\alpha)|^2 d\alpha.\] By subtraction we shall then obtain our bound for $J({\mathfrak m})$. We begin by observing that \[J=\sum_{\mu,\nu\le L}r(2^{\mu}-2^{\nu}),\] where \[r(n)=\#\{\varpi N<p_1,p_2\le N: n=p_1-p_2\}.\] Moreover, by Theorem 3 of Chen [2] we have \[r(n)\le C_0 C_1 h(n)\frac{N}{(\log N)^2},\] for $n\not=0$ and $N$ sufficiently large, where $C_0$ is given by (23), \begin{equation} C_1=7.8342, \end{equation} and \[h(n)=\prod_{p|n,\,p>2}(\frac{p-1}{p-2}).\] Observe that our notation for the constants that occur differs from that used by Liu, Liu and Wang, and by Li. Since $h(2^{\mu}-2^{\nu})=h(2^{\mu-\nu}-1)$ for $\mu>\nu$ we conclude, as in Liu, Liu and Wang [13; \S 3] and Li [9; \S 4] that \begin{equation} \sum_{\mu\not=\nu\le L}r(2^{\mu}-2^{\nu})\le 2C_0 C_1\frac{N}{(\log N)^2} \sum_{1\le l\le L}(L-l)h(2^l-1), \end{equation} while the contribution for $\mu=\nu$ is $L\pi(N)-L\pi(\varpi N)\le LN(\log N)^{-1}$, for large $N$. Now \[h(n)=\sum_{d|n}k(d),\] where $k(d)$ is the multiplicative function defined in (24). Thus \begin{eqnarray*} \sum_{1\le j\le J}h(2^j-1)&=& \sum_{d=1}^\infty k(d)\#\{j\le J: d|2^j-1\}\\ &=&\sum_{d=1}^\infty k(d)[\frac{J}{\varepsilon(d)}]. \end{eqnarray*} However $[\theta]=\theta+O(\theta^{1/2})$ for any real $\theta>0$, whence \begin{equation} \sum_{1\le j\le J}h(2^j-1)=C_2 J+O(J^{1/2}) \end{equation} with \begin{equation} C_2=\sum_{d=1}^\infty \frac{k(d)}{\varepsilon(d)}. 
\end{equation} Here we use the observation that the sum \[\sum_{d=1}^\infty \frac{k(d)}{\varepsilon(d)^{1/2}}\] is convergent, since Lemma 6 implies that \begin{equation} \sum_{x/2<\varepsilon(d)\le x}\frac{k(d)}{\varepsilon(d)^{1/2}}\ll x^{-1/2}\sum_{x/2<\varepsilon(d)\le x}\frac{\mu^2(d)d}{\phi^2(d)}\ll \frac{\log x}{x^{1/2}} \end{equation} for any $x\geq 2$. We may now use partial summation in conjunction with (34) to deduce that \[\sum_{1\le l\le L}(L-l)h(2^l-1)=C_2\frac{L^2}{2}+O(L^{3/2}),\] Thus, using (33) we reach the following result. \begin{lemma} We have \[J\le \{\frac{C_0 C_1 C_2}{\log^2 2}+\frac{1}{\log 2}+o(1)\}N,\] with the constants given by (23), (32) and (35). \end{lemma} We now turn to the integral $J({\mathfrak M})$. According to Lemma 3.1 of Vaughan [17], if \[|\alpha-\frac{a}{q}|\le\frac{\log x}{x},\;\;\;(a,q)=1,\] and $q\le 2\log x$, we have \[\sum_{p\le x}e(\alpha p)\log p=\frac{\mu(q)}{\phi(q)}v(\alpha-\frac{a}{q}) +O(x(\log x)^{-3}),\] with \[v(\beta)=\sum_{m\le x}e(\beta m).\] It follows by partial summation that \[S(\alpha)=\frac{\mu(q)}{\phi(q)}w(\alpha-\frac{a}{q}) +O(N(\log N)^{-4}),\] with \[w(\beta)=\sum_{\varpi N<m\le N}\frac{e(\beta m)}{\log m},\] providing that \begin{equation} |\alpha-\frac{a}{q}|\le\frac{\log N}{N},\;\;\;(a,q)=1 \end{equation} and $q\le\log N$. Then if $\mathfrak{a}$ denotes the set of $\alpha\in[0,1]$ for which such $a,q$ exist, we easily compute that \begin{eqnarray*} J(\mathfrak{M})&\ge&J(\mathfrak{a})\\ &=& \int_{{\mathfrak a}}|\frac{\mu(q)}{\phi(q)}w(\alpha-\frac{a}{q}) T(\alpha)|^2 d\alpha+O(N(\log N)^{-1}), \end{eqnarray*} where, for each $\alpha\in\mathfrak{a}$, we have taken $a/q$ to be the unique rational satisfying (37). By partial summation we have \[w(\beta)\ll (||\beta||\log N)^{-1},\] whence \[\int_{-(\log N)/N}^{(\log N)/N}|w(\beta)T(\frac{a}{q}+\beta)|^2 d\beta =\int_{-1/2}^{1/2}|w(\beta)T(\frac{a}{q}+\beta)|^2 d\beta+O(N(\log N)^{-1}). 
\] It follows that \[J(\mathfrak{a})= \sum_{q\le\log N}\sum_{(a,q)=1}\frac{\mu^2(q)}{\phi^2(q)} \int_{0}^{1}|w(\beta)T(\frac{a}{q}+\beta)|^2 d\beta+ O(N(\log N)^{-1}\log\log N).\] The integral on the right is \[\sum_{0\le\mu,\nu\le L}e(a(2^{\mu}-2^{\nu})/q)S(2^{\mu}-2^{\nu}),\] where \begin{eqnarray*} S(n)&=& \twosum{\varpi N<m_1,m_2\le N}{m_1-m_2=n}(\log m_1)^{-1}(\log m_2)^{-1}\\ &=&(\log N)^{-2}\#\{m_1,m_2:\varpi N<m_1,m_2\le N,\,m_1-m_2=n\}\\ &&\hspace{3cm}+O(N(\log N)^{-3})\\ &=&(\log N)^{-2}\max\{N(1-\varpi)-|n|\,,\,0\}+O (N(\log N)^{-3}). \end{eqnarray*} Thus \begin{equation} S(n)=(1-\varpi)N(\log N)^{-2}+O(|n|(\log N)^{-2})+O (N(\log N)^{-3}) \end{equation} for $n\ll N$. On summing over $a$ we now obtain \[J(\mathfrak{a})= \sum_{0\le\mu,\nu\le L}\sum_{q\le\log N}\frac{\mu^2(q)}{\phi^2(q)} c_q(2^{\mu}-2^{\nu})S(2^{\mu}-2^{\nu})+O(N(\log N)^{-1}\log\log N),\] where $c_q(n)$ is the Ramanujan sum. When $q$ is square-free we have $c_q(n)=\mu(q)\mu((q,n))\phi((q,n))$. Thus the error terms in (38) make a total contribution $O(N(\log N)^{-1}\log\log N)$ to $J(\mathfrak{a})$. Moreover \[\mu^2(q)c_q(n)=\mu(q)\sum_{d|(q,n)}\mu(d)d,\] whence \[\sum_{0\le\mu,\nu\le L}\mu^2(q)c_q(n)=\mu(q)\sum_{d|q}\mu(d)d \#\{\mu,\nu:\,1\le\mu,\nu\le L,\,d|2^{\mu}-2^{\nu}\}.\] If $d$ is odd we have \[\#\{\mu,\nu:\,1\le\mu,\nu\le L,\,d|2^{\mu}-2^{\nu}\}=L^2\varepsilon(d)^{-1}+O(L),\] while if $d$ is even, of the form $2e$ with $e$ odd, we have \[\#\{\mu,\nu:\,1\le\mu,\nu\le L,\,d|2^{\mu}-2^{\nu}\}=L^2\varepsilon(e)^{-1}+O(L).\] The error terms contribute $O(N(\log N)^{-1}\log\log N)$ to $J(\mathfrak{a})$, by (38), so that \[J(\mathfrak{a})=\frac{(1-\varpi)N}{(\log N)^2}L^2 \sum_{q\le\log N}\frac{\mu(q)}{\phi^2(q)}\sum_{d|q}\mu(d)d\varepsilon(d)^{-1} +O(N(\log N)^{-1}\log\log N),\] where $\varepsilon(d)$ is to be interpreted as $\varepsilon(e)$ when $d=2e$. 
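The evaluation of the Ramanujan sum for square-free $q$ used above is classical; the following cross-check (ours, purely illustrative) compares it with the defining sum over reduced residues:

```python
import cmath
from math import gcd

def mobius(n):
    """Moebius function by trial division."""
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            res = -res
        d += 1
    return -res if n > 1 else res

def phi(n):
    """Euler's totient, by direct count (small n only)."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def ramanujan(q, n):
    """c_q(n) straight from the definition: sum over (a,q) = 1."""
    s = sum(cmath.exp(2j * cmath.pi * a * n / q)
            for a in range(1, q + 1) if gcd(a, q) == 1)
    return round(s.real)

for q in range(1, 31):
    if mobius(q) == 0:               # the identity is for square-free q
        continue
    for n in range(1, 41):
        g = gcd(q, n)
        assert ramanujan(q, n) == mobius(q) * mobius(g) * phi(g)
```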
Now \begin{eqnarray} \sum_{q\le\log N}\frac{\mu(q)}{\phi^2(q)}\sum_{d|q}\frac{\mu(d)d}{\varepsilon(d)}&=& \sum_{d\le\log N}\frac{\mu(d)d}{\varepsilon(d)} \twosum{q\le\log N}{d|q}\frac{\mu(q)}{\phi^2(q)}\nonumber\\ &=&\sum_{d\le\log N}\frac{\mu(d)d}{\varepsilon(d)} \sum_{j\le (\log N)/d}\frac{\mu(jd)}{\phi^2(jd)}\nonumber\\ &=&\sum_{d\le\log N}\frac{\mu^2(d)d}{\varepsilon(d)\phi^2(d)} \twosum{j\le (\log N)/d}{(j,d)=1}\frac{\mu(j)}{\phi^2(j)}\nonumber\\ &=&\sum_{d\le\log N}\frac{\mu^2(d)d}{\varepsilon(d)\phi^2(d)} \{\twosum{j=1}{(j,d)=1}^{\infty}\frac{\mu(j)}{\phi^2(j)} +O(\frac{d}{\log N})\}\nonumber\\ &=&\sum_{d\le\log N}\frac{\mu^2(d)d}{\varepsilon(d)\phi^2(d)} \prod_{p\, | \hspace{-1.1mm}/\, d}\{1-(p-1)^{-2}\}\nonumber\\ &&\hspace{1cm}+ O((\log N)^{-1}\sum_{d\le\log N}\frac{\mu^2(d)d^2}{\varepsilon(d)\phi^2(d)}). \end{eqnarray} If $d=2e$ with $e$ odd, we have \[\frac{\mu^2(d)d}{\varepsilon(d)\phi^2(d)} \prod_{p\, | \hspace{-1.1mm}/\, d}\{1-(p-1)^{-2}\}=2C_0 k(e)/\varepsilon(d),\] while if $d$ is odd we have \[\prod_{p\, | \hspace{-1.1mm}/\, d}\{1-(p-1)^{-2}\}=0,\] since the factor with $p=2$ vanishes. Moreover \[\sum_{d\gg\log N}\frac{k(d)}{\varepsilon(d)}\ll \frac{\log N}{\log\log N}\] by Lemma 6, applied as in (36). The leading term in (39) is therefore $2C_0 C_2+o(1)$, with $C_0$ and $C_2$ as in (23) and (35). To bound the error term we use Lemma 6, which shows that \[\twosum{X<d\le 2X}{x<\varepsilon(d)\le 2x}\frac{\mu^2(d)d^2}{\varepsilon(d)\phi^2(d)} \ll\frac{X\log x}{x}.\] According to (29) we must have $x\gg\log X$, so on summing as $x$ runs over powers of $2$ we obtain \[\sum_{X<d\le 2X}\frac{\mu^2(d)d^2}{\varepsilon(d)\phi^2(d)} \ll\frac{X\log\log X}{\log X}.\] Now, summing as $X$ runs over powers of $2$ we conclude that \[\sum_{d\le\log N}\frac{\mu^2(d)d^2}{\varepsilon(d)\phi^2(d)}\ll \frac{(\log N)(\log\log\log N)}{\log\log N}.\] We may therefore summarize our results as follows. 
\begin{lemma} We have \[ J(\mathfrak{M})\ge \{\frac{2(1-\varpi)C_0 C_2}{\log^2 2}+o(1)\}N,\] and hence \[J(\mathfrak{m})\le \{\frac{C_0(C_1-2+2\varpi)C_2}{\log^2 2} +\frac{1}{\log 2}+o(1)\}N,\] by Lemma 8. \end{lemma} It remains to compute the constants. We readily find \[\prod_{2<p\le 200000}(1-(p-1)^{-2})=0.6601...\] Since \[\prod_{p>K}(1-(p-1)^{-2})\ge\prod_{n=K}^{\infty}(1-n^{-2}) =1-K^{-1},\] we deduce that \begin{equation} C_0\ge 0.999995\times0.6601\ge 0.66. \end{equation} However the estimation of $C_2$ is more difficult. We set \[m=\prod_{e\le x}(2^e-1)\] and \[s(x)=\sum_{\varepsilon(d)\le x}k(d),\] whence \begin{eqnarray*} s(x)&\le&\sum_{d|m}k(d)\\ &=& h(m)\\ &=&\prod_{p|m,\,p>2}(\frac{p-1}{p-2})\\ &\le & \prod_{p>2}(\frac{(p-1)^2}{p(p-2)})\prod_{p|m}(\frac{p}{p-1})\\ &=& C_0^{-1}\frac{m}{\phi(m)}. \end{eqnarray*} Moreover we have $m/\phi(m)\le e^{\gamma}\log x$ for $x\ge 9$, as shown by Liu, Liu and Wang [13; (3.9)]. It then follows that \begin{eqnarray*} C_2&=&\int_{1}^{\infty}s(x)\frac{dx}{x^2}\\ &=&\int_{1}^{M}s(x)\frac{dx}{x^2}+\int_{M}^{\infty}s(x)\frac{dx}{x^2}\\ &\le&\sum_{\varepsilon(d)\le M}\int_{\varepsilon(d)}^{M}k(d)\frac{dx}{x^2}+ C_0^{-1}e^{\gamma}\int_{M}^{\infty}\log x\frac{dx}{x^2}\\ &\le &\sum_{\varepsilon(d)<M}k(d)(\frac{1}{\varepsilon(d)}-\frac{1}{M})+ 2.744(\frac{1+\log M}{M}) \end{eqnarray*} for any integer $M\ge 9$. We now set \[\sum_{\varepsilon(d)=e}k(d)=\kappa(e)\] so that \[\sum_{e|d}\kappa(e)=\sum_{\varepsilon(e)|d}k(e).\] However $\varepsilon(e)|d$ if and only if $e|2^d-1$. Thus \[\sum_{e|d}\kappa(e)=\sum_{e|2^d-1}k(e)=h(2^d-1).\] We therefore deduce that \[\kappa(e)=\sum_{d|e}\mu(e/d)h(2^d-1).\] This enables us to compute \[\sum_{\varepsilon(d)<M}k(d)(\frac{1}{\varepsilon(d)}-\frac{1}{M})= \sum_{m<M}\kappa(m)(\frac{1}{m}-\frac{1}{M})\] by using information on the prime factorization of $2^d-1$ for $d<M$. 
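The numerical work described here is readily reproduced. The sketch below (ours; all function names illustrative) recomputes the partial product for $C_0$ and evaluates $\sum_{m<M}\kappa(m)(1/m-1/M)$ for $M=20$, factorising $2^d-1$ for $d<20$ by trial division:

```python
from math import isqrt, log

def primes_upto(n):
    s = bytearray([1]) * (n + 1)
    s[0:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if s[i]:
            s[i * i :: i] = bytearray(len(s[i * i :: i]))
    return [i for i in range(2, n + 1) if s[i]]

# the partial product defining C_0, over 2 < p <= 200000
c0 = 1.0
for p in primes_upto(200000):
    if p > 2:
        c0 *= 1 - 1 / (p - 1) ** 2
assert 0.6601 < c0 < 0.6602
assert 0.999995 * c0 >= 0.66          # the bound (40)

def h(n):
    """h(n) = product over odd primes p | n of (p-1)/(p-2)."""
    r, d = 1.0, 3                      # the arguments 2^d - 1 are odd
    while d * d <= n:
        if n % d == 0:
            r *= (d - 1) / (d - 2)
            while n % d == 0:
                n //= d
        d += 2
    return r * (n - 1) / (n - 2) if n > 1 else r

def mobius(n):
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            res = -res
        d += 1
    return -res if n > 1 else res

def kappa(m):
    """kappa(m) = sum over d | m of mu(m/d) h(2^d - 1)."""
    return sum(mobius(m // d) * h(2 ** d - 1)
               for d in range(1, m + 1) if m % d == 0)

M = 20
total = sum(kappa(m) * (1 / m - 1 / M) for m in range(1, M))
assert 1.665 < total < 1.667                        # the value 1.6659...
assert total + 2.744 * (1 + log(M)) / M < 2.2142    # the bound for C_2
```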
In particular, taking $M=20$ we find that \[\sum_{m<20}\kappa(m)(\frac{1}{m}-\frac{1}{20})=1.6659\ldots,\] and hence that \begin{equation} C_2\le\sum_{m<20}\kappa(m)(\frac{1}{m}-\frac{1}{20})+ 2.744(\frac{1+\log 20}{20})=2.2141\ldots \end{equation} For comparison with this upper bound for $C_2$ we note that \[C_2\ge\sum_{d\le 10000}k(d)/\varepsilon(d)=1.9326\ldots\] This latter figure is probably closer to the true value, but the discrepancy is small enough for our purposes. From (32), (40) and (41) we calculate that \[(C_1-2)C_2+C_0^{-1}\log 2\le 13.967,\] so that Lemma 9 yields the following bound. \begin{lemma} We have \[J(\mathfrak{m})\le \{13.968+o(1)\}C_0\frac{N}{\log^2 2}.\] \end{lemma} \section{Completion of the Proof} Let $R(N)$ denote the number of representations of $N$ as a sum of two primes and $K$ powers of $2$ in the ranges under consideration, so that \[R(N)=\int_0^1S(\alpha)^2 T(\alpha)^K e(-\alpha N)d\alpha.\] To estimate the minor arc contribution to $R(N)$ we first bound $S(\alpha)$. According to Theorem 3.1 of Vaughan [17] we have \[\sum_{p\le x}e(\alpha p)\log p\ll (\log x)^4\{xq^{-1/2}+x^{4/5}+x^{1/2}q^{1/2}\}\] if $|\alpha-a/q|\le q^{-2}$ with $(a,q)=1$. Thus if $\alpha\in\mathfrak{m}$ we may take $P\ll q\ll N/P$ to deduce that \[S(\alpha)\ll (\log N)^3\{N^{4/5}+NP^{-1/2}\}.\] Taking $P=N^{45/154-4\varpi}$, we obtain \[S(\alpha)\ll N^{263/308+3\varpi}.\] If one assumes the Generalized Riemann Hypothesis, we may apply Lemma 12 of Baker and Harman [1], which implies that \[\sum_{n\le x}\Lambda(n)e((\frac{a}{q}+\beta) n)\ll (\log x)^2\{q^{-1}\min(x,|\beta|^{-1})+x^{1/2}q^{1/2}+x(q|\beta|)^{1/2}\}\] when $|\beta|\le x^{-1/2}$. It follows by partial summation that \[S(\frac{a}{q}+\beta)\ll (\log N)\{q^{-1}\min(N,|\beta|^{-1}) +N^{1/2}q^{1/2}+N(q|\beta|)^{1/2}\}\] for $|\beta|\le N^{-1/2}$. 
According to Dirichlet's Approximation Theorem, we can find $a$ and $q$ with \[|\alpha-\frac{a}{q}|\le\frac{1}{qN^{1/2}},\;\;\;q\le N^{1/2}.\] Thus \[S(\alpha)\ll (\log N)N^{3/4}\] unless $q\le N^{1/4}$ and $|\alpha-a/q|\le q^{-1}N^{-3/4}$. Since $\alpha\in\mathfrak{m}$ and $P=N^{45/154-4\varpi}\ge N^{1/4}$, these latter conditions cannot hold. We therefore conclude that \[S(\alpha)\ll N^{\theta+o(1)}\] for $\alpha\in\mathfrak{m}$, where we take $\theta=263/308$ in general, and $\theta=3/4$ under the Generalized Riemann Hypothesis. We now have \begin{eqnarray*} \int_{{\mathfrak m}\cap\cl{A}_{\lambda}}S(\alpha)^2 T(\alpha)^K e(-\alpha N)d\alpha&\ll&{\rm meas}(\cl{A}_{\lambda})N^{2\theta+o(1)}L^K\\ &\ll& N^{-E(\lambda)+2\theta+o(1)}\\ &\ll&N, \end{eqnarray*} providing that $E(\lambda)>2\theta-1$. Thus, according to Lemma 1, we may take $\lambda=0.863665$ unconditionally, and $\lambda=0.722428$ under the Generalized Riemann Hypothesis. It remains to consider the set ${\mathfrak m}\setminus\cl{A}_{\lambda}$. Here we have \begin{eqnarray*} |\int_{{\mathfrak m}\setminus\cl{A}_{\lambda}}S(\alpha)^2 T(\alpha)^K e(-\alpha N)d\alpha|&\le& (\lambda L)^{K-2} \int_{{\mathfrak m}}|S(\alpha)T(\alpha)|^2 d\alpha\\ &\le& (\lambda L)^{K-2} 13.968\frac{C_0}{\log^2 2} N. \end{eqnarray*} Finally we compare this with the estimate for the major arc integral, given by Lemma 7, and conclude that \[\int_0^1 S(\alpha)^2 T(\alpha)^K e(-\alpha N)d\alpha >0\] providing that $N$ is large enough, $\varpi$ is small enough, and \[ 13.968\lambda^{K-2} < 2.7895. \] When $\lambda=0.863665$ this is satisfied for $K>12.991$, so that $K=13$ is admissible. Similarly, when $\lambda=0.722428$ one can take any $K>6.995$, so that $K=7$ is admissible. This completes the proof of our theorems, subject to Lemma 1. \section{Proof of Lemma 1} In this section we shall prove Lemma 1. We shall again use $\varpi$ to denote a small positive constant. 
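Before turning to Lemma 1, we note that the numerical conditions just used can be rechecked directly. A short script (ours) verifying the constant arising from Lemma 10 and the admissibility of $K=13$ and $K=7$:

```python
from math import log
from fractions import Fraction

# the constant of Lemma 10: (C1 - 2)*C2 + log(2)/C0 <= 13.967 < 13.968
C1, C2_upper, C0_lower = 7.8342, 2.2141, 0.66
assert (C1 - 2) * C2_upper + log(2) / C0_lower < 13.968

def admissible(K, lam):
    # the closing requirement 13.968 * lam**(K-2) < 2.7895
    return 13.968 * lam ** (K - 2) < 2.7895

assert not admissible(12, 0.863665) and admissible(13, 0.863665)  # unconditional
assert not admissible(6, 0.722428) and admissible(7, 0.722428)    # under GRH

# consistency of the exponents: 2*theta - 1 = 109/154 when theta = 263/308
assert 2 * Fraction(263, 308) - 1 == Fraction(109, 154)
```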
We shall allow the constants implied by the $O(\ldots)$ and $\ll$ notations to depend on $\varpi$, although sometimes we shall mention the dependence explicitly for emphasis. As mentioned in the introduction, the method we shall adopt was suggested to us by Professor Keith Ball, and is based on the martingale method for proving exponential inequalities in probability theory. It is convenient to work with \[T_L(\alpha)=T(\alpha/2)=\sum_{0\le n\le L-1}e(\alpha 2^n)\] in place of $T(\alpha)$. Clearly we have \[{\rm meas}\{\alpha\in[0,1]: |T_L(\alpha)|\ge\lambda L\}= {\rm meas}(\cl{A}_{\lambda}).\] Let $M=1+[2\pi/\varpi]$ and suppose that $|T_L(\alpha)|\ge\lambda L$ with $\arg(T_L(\alpha))=\phi$. Write $m=[M\phi/2\pi]$ and $\rho_m=e(-m/M)$. Then \[|e^{-i\phi}-\rho_m|\le |\phi-\frac{2\pi m}{M}|\le \frac{2\pi}{M}\le\varpi,\] whence \begin{eqnarray*} {\rm Re}(\rho_m T_L(\alpha))&\ge& {\rm Re}(e^{-i\phi}T_L(\alpha))- \varpi|T_L(\alpha)|\\ &=&(1-\varpi)|T_L(\alpha)|\\ &\ge&(1-\varpi)\lambda L. \end{eqnarray*} It follows that \begin{eqnarray*} \lefteqn{{\rm meas}\{\alpha\in[0,1]: |T_L(\alpha)|\ge\lambda L\}} \hspace{2cm}\\ &\le& \sum_{m=0}^{M-1}{\rm meas}\{\alpha\in[0,1]: {\rm Re}(\rho_m T_L(\alpha)) \ge(1-\varpi)\lambda L\}\\ &\ll_{\varpi}&\sup_{|\rho|=1}{\rm meas}\{\alpha\in[0,1]: {\rm Re}(\rho T_L(\alpha))\ge(1-\varpi)\lambda L\}. \end{eqnarray*} We now set \[S(\xi,\rho,L)=\int_{0}^{1}\exp\{\xi{\rm Re}(\rho T_L(\alpha))\}d\alpha,\] for an arbitrary real $\xi>0$, whence \[S(\xi,\rho,L)\ge \exp\{\xi(1-\varpi)\lambda L\} {\rm meas}\{\alpha\in[0,1]: {\rm Re}(\rho T_L(\alpha))\ge(1-\varpi)\lambda L\}.\] It therefore follows that \begin{equation} {\rm meas}(\cl{A}_{\lambda})\ll \exp\{-\xi(1-\varpi)\lambda L\} \sup_{|\rho|=1}S(\xi,\rho,L). \end{equation} For any integer $h$, we have $T_L(\alpha)=T_{L-h}(2^h\alpha)+T_h(\alpha)$. 
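The geometric step above, which replaces $e^{-i\phi}$ by the nearest $M$-th root of unity, is elementary; a numerical sanity check (ours, with the illustrative value $M=64$ standing in for $1+[2\pi/\varpi]$):

```python
import cmath
import random

random.seed(0)
M = 64                         # stands in for M = 1 + [2*pi/varpi]
for _ in range(10000):
    z = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    if abs(z) < 1e-6:
        continue
    phi = cmath.phase(z) % (2 * cmath.pi)
    m = int(M * phi / (2 * cmath.pi))
    rho = cmath.exp(-2j * cmath.pi * m / M)
    # the chord is no longer than the arc, which is at most 2*pi/M
    assert abs(cmath.exp(-1j * phi) - rho) <= 2 * cmath.pi / M + 1e-12
    # hence Re(rho * z) >= (1 - 2*pi/M)|z|, the (1 - varpi)|T_L| step
    assert (rho * z).real >= (1 - 2 * cmath.pi / M) * abs(z) - 1e-9
```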
Moreover, for any function $f$ we have \[\int_0^1 f(\alpha)d\alpha = \frac{1}{2^h}\int_0^{1} \sum_{r=0}^{2^h-1}f(\frac{\beta}{2^h}+\frac{r}{2^h})d\beta.\] It therefore follows that \[S(\xi,\rho,L)=\frac{1}{2^h}\int_0^{1}\sum_{r=0}^{2^h-1} \exp\{\xi{\rm Re}(\rho T_{L-h}(\beta+r))\} \exp\{\xi{\rm Re}(\rho T_{h}(\frac{\beta+r}{2^h}))\}d\beta.\] Since $T(\alpha)$ has period $1$ this becomes \[\int_0^{1}\exp\{\xi{\rm Re}(\rho T_{L-h}(\beta))\}\frac{1}{2^h} \sum_{r=0}^{2^h-1}\exp\{\xi{\rm Re}(\rho T_{h}(\frac{\beta+r}{2^h}))\} d\beta.\] If we now set \begin{equation} F(\xi,h)=\sup_{\beta\in[0, 1],\, |\rho|=1}\frac{1}{2^h} \sum_{r=0}^{2^h-1}\exp\{\xi{\rm Re}(\rho T_h(\frac{\beta + r}{2^h}))\} \end{equation} we deduce that \[S(\xi,\rho,L)\le S(\xi,\rho,L-h)F(\xi,h).\] Using this inductively we find that \[S(\xi,\rho,L)\le S(\xi,\rho,L-nh)F(\xi,h)^n,\] and taking $n=[L/h]$ we deduce that \[S(\xi,\rho,L)\ll_{\xi,h} F(\xi,h)^n\ll_{\xi,h} F(\xi,h)^{L/h}.\] When we combine this with (42) we deduce that \[{\rm meas}(\cl{A}_{\lambda})\ll_{\xi,h,\varpi} \exp\{-\xi(1-\varpi)\lambda L\} F(\xi,h)^{L/h}.\] It follows that we may take \[E(\lambda)=\frac{\xi\lambda}{\log 2}-\frac{\log F(\xi,h)}{h\log 2}- \frac{\varpi}{\log 2}\] for any $h\in\mathbb{N}$, any $\xi>0$ and any $\varpi>0$. We proceed to show that the supremum in (43) occurs at $\beta=0$ and $\rho=1$, whence \begin{equation} F(\xi,h)=\frac{1}{2^h}\sum_{r=0}^{2^h-1} \exp\{\xi{\rm Re}(T_h(\frac{r}{2^h}))\}. \end{equation} Since \[{\rm Re}(\rho T_h(\frac{\beta+r}{2^h}))=\frac{1}{2} \{\rho T_h(\frac{\beta+r}{2^h})+\overline{\rho}\,T_h(\frac{-\beta-r}{2^h})\},\] we find that \begin{eqnarray*} \lefteqn{\sum_{r=0}^{2^h-1}\exp\{\xi{\rm Re}(\rho T_h(\frac{\beta+r}{2^h}))\}} \hspace{2cm}\\ & = & \sum_{n=0}^\infty\frac{1}{2^n\cdot n!}\sum_{r=0}^{2^h-1} \xi^n \left(\rho T_h(\frac{\beta + r}{2^h}) + \overline{\rho}\, T_h(\frac{-\beta-r}{2^h})\right)^n. 
\end{eqnarray*} However \[\sum_{r=0}^{2^h-1} \left(\rho T_h(\frac{\beta + r}{2^h}) + \overline{\rho}\, T_h(\frac{-\beta-r}{2^h})\right)^n = \sum_{m=0}^n \left(\begin{array}{c}n\\m\end{array}\right) \rho^{2m-n} S(n,m,h,\beta),\] where \begin{equation} S(n,m,h,\beta)=\sum_{r=0}^{2^h-1} T_h(\frac{\beta+r}{2^h})^m T_h(\frac{-\beta-r}{2^h})^{n-m}. \end{equation} It follows that \begin{equation} F(\xi,h)\le \frac{1}{2^h}\sup_{\beta\in[0,1]} \sum_{n=0}^\infty\frac{1}{2^n\cdot n!} \xi^n \sum_{m=0}^n {n\choose m} |S(n,m,h,\beta)|. \end{equation} We now expand the powers of $T_h$ occurring in (45), and perform the summation over $r$. We then see that $S(n,m,h,\beta)$ is a sum of terms \[2^h e(\beta(2^{a_1}+\ldots+2^{a_m}-2^{b_1}-\ldots-2^{b_{n-m}})/2^h),\] over integer values $a_i,b_j$ between 0 and $h-1$, subject to the condition \[2^{a_1}+\ldots+2^{a_m}\equiv 2^{b_1}+\ldots+2^{b_{n-m}}\pmod{2^h}.\] It is now apparent that $|S(n,m,h,\beta)|\le S(n,m,h,0)$, whence (46) yields \begin{eqnarray*} F(\xi,h)&\le &\frac{1}{2^h}\sum_{n=0}^\infty\frac{1}{2^n\cdot n!} \xi^n \sum_{m=0}^n {n\choose m} S(n,m,h,0)\\ &=&\frac{1}{2^h}\sum_{r=0}^{2^h-1} \exp\{\xi{\rm Re}(T_h(\frac{r}{2^h}))\}. \end{eqnarray*} The assertion (44) now follows. Hence it remains to compute $F(\xi,h)$ using (44) and optimize for $\xi$ in (42). We have carried out the computations for $h=16$. Comparing the results for this value with the outcome for smaller values of $h$, it appears that the potential improvements obtainable by choosing $h$ larger than 16 are only slight. After taking suitable care over rounding errors we find that we may take $\xi=1.181$ to get \[E(0.863665)>\frac{109}{154}+10^{-8}\] and $\xi=0.905$ to get \[E(0.722428)>\frac{1}{2}+10^{-8}.\] Using Mathematica 4.1 on a PC, computing the values $T_{16}(r/2^{16})$ for the integers $0\leq r\leq 2^{16}-1$ took about 7 minutes, and summing these values up to obtain $F(\xi, h)$ took 24 seconds for each of the two values of $\xi$. D.R.
Heath-Brown Mathematical Institute, 24-29, St.Giles', Oxford OX1 3LB, ENGLAND [email protected] J.-C. Puchta Mathematical Institute, 24-29, St.Giles', Oxford OX1 3LB, ENGLAND [email protected] \end{document}
\begin{document} \title{The Schur functor on tensor powers} \author{Kay Jin Lim} \author{Kai Meng Tan} \address{Department of Mathematics, National University of Singapore, Block S17, 10 Lower Kent Ridge Road, Singapore 119076.} \email[K. J. Lim]{[email protected]} \email[K. M. Tan]{[email protected]} \date{February 2011} \thanks{Supported by MOE Academic Research Fund R-146-000-135-112.} \thanks{2010 {\em Mathematics Subject Classification.} 20G43, 20C30} \begin{abstract} Let $M$ be a left module for the Schur algebra $S(n,r)$, and let $s \in \mathbb{Z}^+$. Then $M^{\otimes s}$ is a $(S(n,rs), F\sym{s})$-bimodule, where the symmetric group $\sym{s}$ on $s$ letters acts on the right by place permutations. We show that the Schur functor $f_{rs}$ sends $M^{\otimes s}$ to the $(F\sym{rs},F\sym{s})$-bimodule $F\sym{rs} \otimes_{F(\sym{r} \wr \sym{s})} ((f_rM)^{\otimes s} \otimes F\sym{s})$. As a corollary, we obtain the image under the Schur functor of the Lie power $L^s(M)$, exterior power $\boldsymbol{/\!\backslash}^s(M)$ of $M$ and symmetric power $S^s(M)$. \end{abstract} \maketitle \section{Introduction} \label{S:intro} The representations of general linear groups and symmetric groups are classical objects of study. Following the work by Schur in 1901, there is an important connection between the polynomial representations of general linear groups and the representations of symmetric groups via the Schur functor. In this short article, we examine the images of tensor powers, Lie powers, symmetric powers and exterior powers under the Schur functor. Our motivation comes from our study of the Lie module $\operatorname{\mathrm{Lie}}(s)$ of the symmetric group $\sym{s}$ on $s$ letters. This may be defined as the left ideal of the group algebra $F\sym{s}$ generated by the Dynkin-Specht-Wever element $$ \upsilon_s = (1-c_2)\dotsm (1-c_s), $$ where $c_k$ is the descending $k$-cycle $(k, k-1, \dotsc, 1)$ (note that we compose the elements of $\sym{s}$ from right to left). 
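A classical identity for the Dynkin-Specht-Wever element, which underlies the idempotency of $\upsilon_s/s$ in characteristic $0$, is $\upsilon_s^2 = s\,\upsilon_s$. The following sketch (Python, purely illustrative and not part of the paper; all function names are ours) realises the convention above, with permutations composed from right to left, and verifies the identity for small $s$:

```python
def compose(p, q):
    """(pq)(i) = p(q(i)): apply q first, then p (right-to-left, as in the text).
    A permutation on {1,...,s} is a tuple t with t[i-1] equal to the image of i."""
    return tuple(p[q[i] - 1] for i in range(len(p)))

def descending_cycle(k, s):
    """c_k = (k, k-1, ..., 1): sends j -> j-1 for 2 <= j <= k and sends 1 -> k."""
    img = list(range(1, s + 1))
    for j in range(2, k + 1):
        img[j - 1] = j - 1
    img[0] = k
    return tuple(img)

def algebra_mult(x, y):
    """Multiply two group-algebra elements, stored as dicts {permutation: coefficient}."""
    out = {}
    for p, a in x.items():
        for q, b in y.items():
            r = compose(p, q)
            out[r] = out.get(r, 0) + a * b
    return {p: c for p, c in out.items() if c != 0}

def dsw(s):
    """The Dynkin-Specht-Wever element v_s = (1 - c_2)(1 - c_3)...(1 - c_s)."""
    e = tuple(range(1, s + 1))
    v = {e: 1}
    for k in range(2, s + 1):
        v = algebra_mult(v, {e: 1, descending_cycle(k, s): -1})
    return v

# v_3 = 1 - c_3 - c_2 + c_2 c_3 has 4 terms, and v_3^2 = 3 v_3.
v3 = dsw(3)
assert len(v3) == 4
assert algebra_mult(v3, v3) == {p: 3 * c for p, c in v3.items()}
```

In characteristic $0$ this makes $\upsilon_s/s$ an idempotent generating $\operatorname{\mathrm{Lie}}(s)$ as a direct summand of $F\sym{s}$.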
This module is also the image under the Schur functor of the Lie power $L^s(V)$, which is the homogeneous part of degree $s$ of the free Lie algebra on an $n$-dimensional vector space $V$ with $n \geq s$. In our study, we found that the knowledge of the image of $L^s(M)$ (where $M$ is a general $S(n,r)$-module; note that $V$ is naturally an $S(n,1)$-module) under the Schur functor will be most useful. For example, the formula we provide here is used by Bryant and Erdmann \cite{BE} to understand the summands of $\operatorname{\mathrm{Lie}}(s)$ lying in non-principal blocks. It is also used by Erdmann and the authors \cite{ELT} in their study of the complexity of $\operatorname{\mathrm{Lie}}(s)$. As we shall see, the image of $L^s(M)$ under the Schur functor can be easily obtained as a corollary by understanding the image under the Schur functor of the tensor power $M^{\otimes s}$ as a $(F\sym{rs},F\sym{s})$-bimodule. The latter result can be regarded as a refinement of a special case of \cite[2.5, Lemma]{DE}, although our proof is independent of their result. Besides obtaining the image of $L^s(M)$ under the Schur functor from this refinement, we can also get those of the symmetric powers $S^s(M)$ and exterior powers $\boldsymbol{/\!\backslash}^s(M)$. Our proofs are fairly elementary. The organisation of the paper is as follows: in the next section, we give a background on Schur algebras and a summary of the results we need. We then proceed in Section \ref{S:main} to state and prove our main results. \section{Schur algebras} We briefly discuss Schur algebras and the results we need in this section. The reader may refer to \cite{G} for more details. Throughout, we fix an infinite field $F$ of arbitrary characteristic. Let $n,r \in \mathbb{Z}^+$. 
The Schur algebra $S(n,r)$ has a distinguished set $\{ \xi_{\alpha} \mid \alpha \in \Lambda(n,r)\}$ of pairwise orthogonal idempotents which sum to 1, where $\Lambda(n,r)$ is the set of compositions of $r$ with $n$ parts \cite[(2.3d)]{G}. Thus each left $S(n,r)$-module $M$ has a vector space decomposition $$ M = \bigoplus_{\alpha \in \Lambda(n,r)} \xi_\alpha M. $$ We write $M^{\alpha}$ for $\xi_{\alpha}M$, and call it the $\alpha$-weight space of $M$. Let $\mathrm{GL}_n(F)$ be the general linear group. There is a surjective algebra homomorphism $e_r : F\mathrm{GL}_n(F) \to S(n,r)$ \cite[(2.4b)(i)]{G}. If $s$ is another positive integer, and $M_1$ and $M_2$ are left $S(n,r)$- and $S(n,s)$-modules respectively, then $M_1 \otimes_F M_2$ can be endowed with a natural left $S(n,r+s)$-module structure, which satisfies \begin{equation*} \label{E:tensor} e_{r+s}(g) (m_1 \otimes m_2) = (e_r(g)m_1) \otimes (e_s(g)m_2) \end{equation*} for all $g \in \mathrm{GL}_n(F)$, $m_1 \in M_1$ and $m_2 \in M_2$. The weight spaces of $M_1 \otimes_F M_2$ can be described \cite[(3.3c)]{G} in terms of the weight spaces of $M_1$ and $M_2$, as follows: \begin{equation} \label{E:weight} (M_1 \otimes_F M_2)^{\gamma} = \bigoplus_{\substack{\alpha \in \Lambda(n,r) \\ \beta \in \Lambda(n,s) \\ \alpha+\beta = \gamma}} M_1^{\alpha} \otimes_F M_2^{\beta}. \end{equation} (Here, and hereafter, if $\alpha = (\alpha_1,\dotsc, \alpha_n) \in \Lambda(n,r)$ and $\beta = (\beta_1,\dotsc, \beta_n) \in \Lambda(n,s)$, then $\alpha + \beta = (\alpha_1 + \beta_1,\dotsc, \alpha_n + \beta_n) \in \Lambda(n,r+s)$.) The symmetric group $\sym{n}$ on $n$ letters acts on $\Lambda(n,r)$ by place permutation: $\tau \cdot (\alpha_1,\dotsc, \alpha_n) = (\alpha_{\tau^{-1}(1)},\dotsc, \alpha_{\tau^{-1}(n)})$. We also view $\sym{n}$ as the subgroup of $\mathrm{GL}_n(F)$ consisting of permutation matrices. Thus, $\sym{n}$ also acts naturally on left $S(n,r)$-modules via $e_r$. 
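The index set $\Lambda(n,r)$ and the place-permutation action of $\sym{n}$ on it are straightforward to enumerate by machine. The following sketch (Python, illustrative only; the function names are ours) lists $\Lambda(n,r)$, checks that its size is the number of weak compositions $\binom{r+n-1}{n-1}$, and confirms that each $\tau \in \sym{n}$ permutes $\Lambda(n,r)$:

```python
from itertools import permutations
from math import comb

def Lambda(n, r):
    """Enumerate the set Λ(n, r): all compositions of r into n non-negative parts."""
    if n == 1:
        return [(r,)]
    return [(a,) + rest for a in range(r + 1) for rest in Lambda(n - 1, r - a)]

def place_permute(tau, alpha):
    """tau · (α_1, ..., α_n) = (α_{τ^{-1}(1)}, ..., α_{τ^{-1}(n)}).
    tau is a tuple with tau[i-1] = τ(i)."""
    inv = [0] * len(tau)
    for i, t in enumerate(tau, start=1):
        inv[t - 1] = i
    return tuple(alpha[inv[j] - 1] for j in range(len(tau)))

n, r = 3, 4
lam = Lambda(n, r)
# |Λ(n, r)| equals the number of weak compositions, C(r + n - 1, n - 1).
assert len(lam) == comb(r + n - 1, n - 1) == 15
# Each τ in S_n permutes Λ(n, r), matching the weight-space picture above.
for tau in permutations(range(1, n + 1)):
    assert sorted(place_permute(tau, a) for a in lam) == sorted(lam)
```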
We have the following lemma: \begin{lem}\label{L:iso} Let $n,r \in \mathbb{Z}^+$, and let $M$ be a left $S(n,r)$-module. Let $\sigma \in \sym{n}$ and $\alpha = (\alpha_1,\dotsc, \alpha_n) \in \Lambda(n,r)$. \begin{enumerate} \item[(i)] $e_r(\sigma)$ maps $M^{\alpha}$ bijectively onto $M^{\sigma \cdot \alpha}$. \item[(ii)] If $\sigma(i) = i$ for all $i$ such that $\alpha_i \ne 0$, then $e_r(\sigma)$ acts as identity on $M^{\alpha}$. (Equivalently, if $\sigma_1(i) = \sigma_2(i)$ for all $i$ such that $\alpha_i \ne 0$, then $e_r(\sigma_1)m = e_r(\sigma_2)m$ for all $m \in M^{\alpha}$.) \end{enumerate} \end{lem} \begin{proof} Part (i) is (3.3a) of \cite{G} (and its proof). For part (ii), it follows from the definition of $e_r$ in \cite[\S2.4]{G} that $e_r(\sigma) m = \xi_{\sigma\mathbf{i},\mathbf{i}}m$ for $m$ lying in a weight space associated to $\mathbf{i}$, so that $e_r(\sigma) m = m$ when $m \in M^{\alpha}$, and $\sigma$ satisfies the condition in (ii). \end{proof} In the case where $n \geq r$, let $$\omega_r = (\underbrace{1,\dotsc, 1}_{r \text{ times}}, \underbrace{0,\dotsc, 0}_{n-r \text{ times}}) \in \Lambda(n,r).$$ The subalgebra $\xi_{\omega_r} S(n,r) \xi_{\omega_r}$ of $S(n,r)$ is isomorphic to $F\sym{r}$ \cite[(6.1d)]{G}. This induces the Schur functor $f_r : {}_{S(n,r)}\textbf{mod} \to {}_{F\sym{r}}\textbf{mod}$ which sends a left $S(n,r)$-module $M$ to its weight space $M^{\omega_r}$. The $\sym{r}$-action on $f_rM = M^{\omega_r}$ is that via $e_r$ and viewing $\sym{r}$ as a subgroup of $\mathrm{GL}_n(F)$ via the embedding $\sym{r} \subseteq \sym{n} \subseteq \mathrm{GL}_n(F)$, i.e. if $m \in f_rM$ and $\sigma \in \sym{r}$, then \begin{equation*} \label{E:symaction} \sigma \cdot m = e_r(\sigma) m. \end{equation*} \section{Main results} \label{S:main} Let $M$ be a left $S(n,r)$-module, and let $s \in \mathbb{Z}^+$.
The $s$-fold tensor product $M^{\otimes s}$ is then a left $S(n,rs)$-module, and it also admits another commuting right action of $\sym{s}$ by place permutations, i.e.\ $(m_1 \otimes \dotsb \otimes m_s) \cdot \sigma = m_{\sigma(1)} \otimes \dotsb \otimes m_{\sigma(s)}$ where $m_1,\dotsc, m_s \in M$, $\sigma \in \sym{s}$. As such, if $n \geq rs$, then $f_{rs} M^{\otimes s}$ is a $(F\sym{rs}, F\sym{s})$-bimodule. On the other hand, $(f_r M)^{\otimes s}$ is a left $F(\sym{r} \wr \sym{s})$-module via $$ (\sigma_1,\dotsc,\sigma_s) \tau \cdot (m_1 \otimes \dotsb \otimes m_s) = \sigma_1 m_{\tau^{-1}(1)} \otimes \dotsb \otimes \sigma_s m_{\tau^{-1}(s)}$$ and we can make it into a $(F(\sym{r} \wr \sym{s}), F\sym{s})$-bimodule by allowing $\sym{s}$ to act trivially on its right, while $F\sym{s}$ is naturally a $(F\sym{s},F\sym{s})$-bimodule and we can make it into a $(F(\sym{r} \wr \sym{s}), F\sym{s})$-bimodule by allowing $(\sym{r})^s$ to act trivially on its left. Thus, $(f_r M)^{\otimes s} \otimes_F F\sym{s}$ is a $(F(\sym{r} \wr \sym{s}), F\sym{s})$-bimodule via the diagonal action. For each $1\leq i\leq s$ and $\sigma\in \sym{r}$, we write $\sigma[i]\in \sym{rs}$ for the permutation sending $(i-1)r+j$ to $(i-1)r+\sigma(j)$ for each $1\leq j\leq r$, and fixing everything else pointwise; also, let $\sym{r}[i] = \{ \sigma[i] \mid \sigma \in \sym{r} \}$. For $\tau\in \sym{s}$, we write $\tau^{[r]}\in\sym{rs}$ for the permutation sending $(i-1)r+j$ to $(\tau(i)-1)r+j$ for each $1\leq i\leq s$ and $1\leq j \leq r$; also, let $\sym{s}^{[r]} = \{ \tau^{[r]} \mid \tau \in \sym{s} \}$. We identify $\sym{r}\wr \sym{s}$ with the subgroup $(\prod_{i=1}^s \sym{r} [i])\sym{s}^{[r]}$ of $\sym{rs}$. With the above understanding, we have the following result. \begin{thm} \label{T:main} Let $n, r,s \in \mathbb{Z}^+$ with $n \geq rs$, and let $M$ be an $S(n,r)$-module. 
Then $$ f_{rs} M^{\otimes s} \cong \operatorname{Ind}_{\sym{r} \wr \sym{s}}^{\sym{rs}} ((f_r M)^{\otimes s} \otimes_F F\sym{s})$$ as $(F\sym{rs}, F\sym{s})$-bimodules. \end{thm} \begin{proof} Firstly, by \eqref{E:weight}, \begin{equation} \label{E:weight2} f_{rs} M^{\otimes s} = (M^{\otimes s})^{\omega_{rs}} = \bigoplus_{(\alpha^{[1]}, \dotsc, \alpha^{[s]}) \in \Lambda} M^{\alpha^{[1]}} \otimes_F \dotsb \otimes_F M^{\alpha^{[s]}}, \end{equation} where $\Lambda = \{ (\alpha^{[1]}, \dotsc, \alpha^{[s]}) \mid \alpha^{[i]} \in \Lambda(n,r)\ \forall i,\ \sum_{i=1}^s \alpha^{[i]} = \omega_{rs} \}$. Also, \begin{align*} \operatorname{Ind}_{\sym{r} \wr \sym{s}}^{\sym{rs}} ((f_r M)^{\otimes s} \otimes_F F\sym{s}) &= F\sym{rs} \otimes_{F(\sym{r} \wr \sym{s})} ((f_r M)^{\otimes s} \otimes_F F\sym{s}) \\ &= \bigoplus_{t \in T} t \otimes ((f_r M)^{\otimes s} \otimes 1), \end{align*} where $T$ is a fixed set of left coset representatives of $\prod_{i=1}^s \sym{r}[i]$ in $\sym{rs}$. The symmetric group $\sym{rs}$ acts naturally and transitively on the set $$\Omega = \left\{ (A_1,\dotsc, A_s) \mid |A_i| = r\ \forall i,\ \bigcup_{i=1}^s A_i = \{ 1,\dotsc, rs\} \right\},$$ with $\prod_{i=1}^s \sym{r}[i]$ being the stabiliser of $(\{1,\dotsc, r\},\dotsc, \{(s-1)r + 1,\dotsc, rs\})$. As such, the function $\theta : T \to \Omega$ defined by $$t \mapsto (\{t(1),\dotsc, t(r)\},\dotsc, \{t((s-1)r+1), \dotsc, t(rs)\})$$ is a bijection. On the other hand, each $r$-element subset $A$ of $\{1,\dotsc, rs\}$ corresponds naturally to a distinct element $\alpha_A = ((\alpha_A)_1,\dotsc, (\alpha_A)_n) \in \Lambda(n,r)$ defined by $(\alpha_A)_j = 1$ if $j \in A$, and $0$ otherwise. 
This induces a bijection $\chi: \Omega \to \Lambda$ defined by $$ (A_1,\dotsc, A_s) \mapsto (\alpha_{A_1}, \dotsc, \alpha_{A_s}).$$ For each $t \in T$ and $i = 1,\dotsc, s$, let $\tau_{t,i}$ be any fixed element of $\sym{rs}$ satisfying $\tau_{t,i}(j) = t((i-1)r + j)$ for $j = 1, \dotsc, r$ (we shall see below that how $\tau_{t,i}$ acts on other points is immaterial for our purposes). Let $\alpha^{[i]} = \tau_{t,i} \cdot \omega_r$; then $\alpha^{[i]} = \alpha_{A_i}$, where $$ A_i = \{ \tau_{t,i}(1), \dotsc, \tau_{t,i}(r) \} = \{ t((i-1)r + 1), \dotsc, t(ir) \}. $$ Thus $(\alpha^{[1]},\dotsc, \alpha^{[s]}) = \chi(\theta (t)) \in \Lambda$. Let \begin{align*} \phi_t : t \otimes ((f_r M)^{\otimes s} \otimes 1) &\to M^{\alpha^{[1]}} \otimes_F \dotsb \otimes_F M^{\alpha^{[s]}} \\ t \otimes ((x_1 \otimes \dotsb \otimes x_s) \otimes 1) &\mapsto e_r(\tau_{t,1}) x_1 \otimes \dotsb \otimes e_r(\tau_{t,s}) x_s \quad (x_1, \dotsc, x_s \in f_rM). \end{align*} Since each $\tau_{t,i}$ maps $f_r M = M^{\omega_r}$ bijectively onto $M^{\tau_{t,i} \cdot \omega_r} = M^{\alpha^{[i]}}$ by Lemma \ref{L:iso}(i), we see that $\phi_t$ is bijective. Now let $$\phi = \bigoplus_{t \in T} \phi_t : \operatorname{Ind}_{\sym{r} \wr \sym{s}}^{\sym{rs}} ((f_r M)^{\otimes s} \otimes_F F\sym{s}) \to f_{rs} M^{\otimes s}.$$ This is well-defined and bijective by \eqref{E:weight2} (note that $\chi \circ \theta : T \to \Lambda$ is a bijection). Let $g \in \sym{rs}$ and $h \in \sym{s}$. Then if $t \in T$ and $x_1,\dotsc, x_s \in f_rM$, we have \begin{align*} g (t \otimes ((x_1 \otimes \dotsb \otimes x_s) \otimes 1)) h &= gth^{[r]} \otimes ((x_{h(1)} \otimes \dotsb \otimes x_{h(s)}) \otimes 1) \\ &= t' \otimes ((e_r(g_1)x_{h(1)} \otimes \dotsb \otimes e_r(g_s)x_{h(s)}) \otimes 1), \end{align*} where $gth^{[r]} = t' \prod_{i=1}^s g_i[i]$ with $t' \in T$ and $g_i \in \sym{r}$ for all $i$. Thus it is sent by $\phi$ to $e_r(\tau_{t',1})e_r(g_1) x_{h(1)} \otimes \dotsb \otimes e_r(\tau_{t',s})e_r(g_s) x_{h(s)}$. 
Note that \begin{multline*} \tau_{t',i}(j) = t'((i-1)r + j) = gth^{[r]} (\prod_{i=1}^s g_i[i])^{-1} ((i-1)r + j) \\ = gth^{[r]} ((i-1)r + g_i^{-1}(j)) = gt((h(i)-1)r + g_i^{-1}(j)) = g\tau_{t,h(i)}g_i^{-1}(j), \end{multline*} so that $e_r(\tau_{t',i})e_r(g_i) x_{h(i)} = e_r(g\tau_{t,h(i)}g_i^{-1})e_r(g_i) x_{h(i)}$ by Lemma \ref{L:iso}(ii). Hence, \begin{align*} \phi(g (t \otimes ((x_1 \otimes \dotsb \otimes x_s) \otimes 1)) h) &= e_r(g) e_r(\tau_{t,h(1)}) x_{h(1)} \otimes \dotsb \otimes e_r(g) e_r(\tau_{t,h(s)}) x_{h(s)} \\ &= e_{rs}(g) (e_r(\tau_{t,1})x_1 \otimes \dotsb \otimes e_r(\tau_{t,s})x_s) h \\ &= g(\phi(t \otimes ((x_1 \otimes \dotsb \otimes x_s) \otimes 1))) h. \end{align*} Thus $\phi$ is an $(F\sym{rs},F\sym{s})$-bimodule isomorphism. \end{proof} The $s$-th Lie power $L^s(M)$, the $s$-th exterior power $\boldsymbol{/\!\backslash}^s(M)$ and the $s$-th symmetric power $S^s(M)$ of the left $S(n,r)$-module $M$ may be defined as follows: \begin{align*} L^s(M) &= (M^{\otimes s}) \upsilon_s; \\ \boldsymbol{/\!\backslash}^s(M) &= (M^{\otimes s}) (\sum_{\sigma \in \sym{s}} \operatorname{sgn}(\sigma) \sigma); \\ S^s(M) &= M^{\otimes s} \otimes_{F\sym{s}} F. \end{align*} Here, $\upsilon_s$ is the Dynkin-Specht-Wever element mentioned in Section \ref{S:intro}, and $\operatorname{sgn}$ is the signature representation of $\sym{s}$. \begin{cor} \label{C:isom} Let $n, r,s \in \mathbb{Z}^+$ with $n \geq rs$, and $M$ be an $S(n,r)$-module. Then \begin{align*} f_{rs} L^s(M) &\cong \operatorname{Ind}_{\sym{r} \wr \sym{s}}^{\sym{rs}} ((f_r M)^{\otimes s} \otimes_F \operatorname{\mathrm{Lie}}(s)); \\ f_{rs} \boldsymbol{/\!\backslash}^s(M) &\cong \operatorname{Ind}_{\sym{r} \wr \sym{s}}^{\sym{rs}} ((f_r M)^{\otimes s} \otimes_F \operatorname{sgn}); \\ f_{rs} S^s(M) &\cong \operatorname{Ind}_{\sym{r} \wr \sym{s}}^{\sym{rs}} (f_r M)^{\otimes s} \end{align*} as left $F\sym{rs}$-modules.
\end{cor} \begin{proof} Post-multiplying both sides of the isomorphism in Theorem \ref{T:main} by $\upsilon_s$ and by $\sum_{\sigma \in \sym{s}} \operatorname{sgn}(\sigma) \sigma$ yields the first two isomorphisms. The third isomorphism is obtained by taking the tensor product with $F$ over $F\sym{s}$ on the right of both sides of the same isomorphism. \end{proof} \end{document}
\begin{document} \title{Efficient bounds on quantum communication rates {\it via} their reduced variants} \author{Marcin L. Nowakowski and Pawel Horodecki\footnote{Electronic address: [email protected]}} \affiliation{Faculty of Applied Physics and Mathematics, Gdańsk University of Technology, 80-952 Gdańsk, Poland} \affiliation{National Quantum Information Centre of Gdańsk, Andersa 27, 81-824 Sopot, Poland} \pacs{03.67.-a, 03.67.Hk} \begin{abstract} We investigate one-way communication scenarios in which Bob, by manipulating his parts, can transfer some subsystem to the environment. We define reduced versions of quantum communication rates and prove new upper bounds on the one-way quantum secret key rate, distillable entanglement and quantum channel capacity by means of their reduced variants. It is shown that in some cases these bounds drastically improve the estimates. \end{abstract} \maketitle Recent years have seen enormous advances in quantum information theory, which is now well established as a basis for quantum computation and communication. Much work \cite{BennettDiVincenzo, BennettSmolin, Barnum3, Barnum4, DevetakW1, DevetakW2, DevetakW3} has been devoted to understanding how to operate on quantum states and distill entanglement, enabling quantum data processing, and how to establish secure quantum communication between two or more parties. One of the central problems of the field of quantum communication is to estimate the efficiency of protocols that establish secure communication between the involved parties or distill quantum entanglement \cite{Renner, KHPH, KHPH2, DevetakW1, DevetakW2, DevetakW3, Smith}. The simplest communication scenarios are those that do not use a classical side channel or use it only in a one-way setup.
The challenge for the present theory is to determine good bounds on quantities such as the secret key rate, the quantum channel capacity and the distillable entanglement of a quantum state, which allow one to estimate the communication capabilities. In this paper we provide efficient upper bounds that avoid massive overestimation of communication rates. We are inspired by classical information theory and the theory of entanglement measures, where so-called reduced quantities have been used \cite{Renner, KHPH, DiVincenzo}. Here we consider two pairs of quantities, the private capacity $\mathcal{P}$ and the one-way quantum secret key $K_{\rightarrow}$, and the one-way quantum channel capacity $\mathcal{Q}_{\rightarrow}$ and the one-way distillable entanglement $D_{\rightarrow}$, providing new efficient upper bounds. We prove that in some cases the bounds explicitly show that the corresponding quantity is relatively small compared to the sender and receiver systems. The main method is again the fact that all the above quantities vanish on certain classes of systems. Moreover, we introduce 'defect' parameters $\Delta$ for the reduced quantities, resulting from the possible transfer of subsystems on the receiver's side, which are (sub)additive and hence can be exploited in the case of composite systems and regularization. \textit{\bf Reduced one-way secret key.} A secret key is a quantum resource allowing two parties, Alice and Bob, private communication over a public channel. In an ideal scenario they generate a pair of maximally correlated, secure classical bit-strings such that Eve, representing the adversary, is not able to extract any sensible information from further communication between Alice and Bob. In this section we elaborate on the generation of a one-way secret key from a tripartite quantum state shared by the parties with Eve; this means Alice and Bob can use only protocols consisting of local operations and one-way public communication.
We propose a new reduced measure of the one-way secret key which in many cases simplifies the analysis of one-way security of quantum states. To derive new observations about the one-way quantum secret key we utilize in this section fundamental information notions involving entropy \cite{Entropy} and quantum mutual information \cite{MInformation}, which play a vital role in quantum information theory. We state a new result, an upper bound on the Holevo function \cite{HolevoFunction} $\chi(\cdot)$: \textbf{Observation 1.}\label{reducedholevo2} \textit{For any ensemble of density matrices $\mathfrak{A}=\{\lambda_{i}, \rho^{i}_{BB'}\}$ with average density matrix $\rho_{BB'}=\sum_{i}\lambda_{i}\rho^{i}_{BB'}$ there holds: \begin{equation}\label{reducedholevo} \chi(\rho_{BB'}) \leq \chi(\rho_{B}) + 2S(\rho_{B'}) \end{equation}} \textit{Proof. } Using the Araki-Lieb inequality and the concavity of quantum entropy we can easily show that: \begin{eqnarray*}\label{LHS1} &&|S(\rho_{BB'})-\sum_{i}\lambda_{i}S(\rho^{i}_{BB'})-S(\rho_{B})+\sum_{i}\lambda_{i}S(\rho^{i}_{B})| \\ &\leq&|S(\rho_{BB'})-S(\rho_{B})|+ |\sum_{i}\lambda_{i}S(\rho^{i}_{BB'})-\sum_{i}\lambda_{i}S(\rho^{i}_{B})|\\ &\leq&S(\rho_{B'})+\sum_{i}\lambda_{i}S(\rho^{i}_{B'})\leq 2S(\rho_{B'}) \end{eqnarray*} which completes the proof. $\Box$ One can use a general tripartite pure state $\rho_{ABE}$ to generate a secret key between Alice and Bob \cite{DevetakW1, DevetakW2}. Alice chooses a particular strategy to perform a quantum measurement (POVM) described by $Q=(Q_{x})_{x \in \cal X}$, which leads to: $\widetilde{\rho}_{ABE}=\sum_{x}|x\rangle\langle x|_{A} \otimes Tr_{A}(\rho_{ABE}(Q_{x})\otimes I_{BE})$.
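Observation 1 can be sanity-checked numerically on random ensembles. The sketch below (Python with NumPy, illustrative only; all function names are ours) draws a random ensemble of two-qubit $BB'$ states and verifies $\chi(\rho_{BB'}) \leq \chi(\rho_{B}) + 2S(\rho_{B'})$; entropies are computed in bits, but the inequality is independent of the logarithm base.

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy S(rho) in bits; zero eigenvalues are skipped."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def trace_out(rho, keep, dims=(2, 2)):
    """Partial trace on C^d1 (x) C^d2; keep = 0 keeps B, keep = 1 keeps B'."""
    d1, d2 = dims
    r = rho.reshape(d1, d2, d1, d2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def random_state(d, rng):
    """Random density matrix generated from a complex Ginibre matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m).real

def holevo(probs, states):
    """Holevo quantity chi of the ensemble {p_i, rho_i}."""
    avg = sum(p * s for p, s in zip(probs, states))
    return entropy(avg) - sum(p * entropy(s) for p, s in zip(probs, states))

rng = np.random.default_rng(7)
probs = [0.2, 0.3, 0.5]
states_BBp = [random_state(4, rng) for _ in probs]     # ensemble on B (x) B'
states_B = [trace_out(s, keep=0) for s in states_BBp]  # marginals on B
rho_Bp = trace_out(sum(p * s for p, s in zip(probs, states_BBp)), keep=1)

lhs = holevo(probs, states_BBp)
rhs = holevo(probs, states_B) + 2 * entropy(rho_Bp)
assert lhs <= rhs + 1e-9   # the bound of Observation 1
```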
Therefore, starting from many copies of $\rho_{ABE}$ we obtain many copies of the cqq-state $\widetilde{\rho}_{ABE}$, and we restate the theorem defining the one-way secret key $K_{\rightarrow}$: \textbf{Theorem 1.}\cite{DevetakW1} \textit{For every state $\rho_{ABE}$, $K_{\rightarrow}(\rho) = \lim_{n\rightarrow\infty}\frac{K_{\rightarrow}^{(1)}(\rho^{\otimes n})}{n}$, with $K_{\rightarrow}^{(1)}(\rho)=\max_{Q,T|X} I(X:B|T) - I(X:E|T)$ where the maximization is over all POVMs $Q=(Q_{x})_{x \in \cal X}$ and channels $R$ such that $T=R(X)$, and the information quantities refer to the state: $\omega_{TABE}=\sum_{t,x} R(t|x)P(x) |t\rangle\langle t|_{T}\otimes |x\rangle\langle x|_{A} \otimes Tr_{A}(\rho_{ABE}(Q_{x})\otimes I_{BE}).$ The range of the measurement $Q$ and of the random variable $T$ may be assumed to be bounded as follows: $|T|\leq d^{2}_{A}$ and $|\cal X|\leq d^{2}_{A}$, where $T$ can be taken to be a (deterministic) function of $\cal X$. } In the following we define a modified version of the one-way secret key rate $K_{\rightarrow}$, based on the results of \cite{Renner,KHPH} for the reduced intrinsic information and the reduced entanglement measure. \textbf{Definition 1.} \textit{For the one-way secret key rate $K_{\rightarrow}^{(1)}(\rho_{AB})$ of a bipartite state $\rho_{AB}\in B(\cal{H}_{A}\otimes \cal{H}_{B})$ shared between Alice and Bob, the reduced one-way secret key rate $K_{\rightarrow}^{(1)}\downarrow(\rho_{AB})$ is defined as: \begin{equation} K_{\rightarrow}^{(1)}\downarrow(\rho_{AB})=\inf_{\cal{U}}[K_{\rightarrow}^{(1)}(\cal{U}(\rho_{AB}))+\Delta_{K_{\rightarrow}} ] \end{equation} where $\cal{U}$ denotes unitary operations on Bob's system with a possible transfer of subsystems from Bob to Eve, i.e. $\cal{U}(\rho_{AB})=Tr_{B'}(I\otimes \cal{U})\rho_{ABB'}$.
$\Delta_{K_{\rightarrow}} =4 S (\rho_{B'})$ denotes the defect parameter related to the increase of entropy produced by the transfer of the $B'$-subsystem from Bob's side to Eve.} The reduced one-way secret key rate is an upper bound on $K_{\rightarrow}$, which we now prove for every cqq-state $\rho$: \textbf{Theorem 2.} \textit{For every cqq-state $\rho_{ABE}$ there holds: \begin{equation} K_{\rightarrow}(\rho)=\lim_{n\rightarrow\infty} \frac {K_{\rightarrow}^{(1)}(\rho^{\otimes n})}{n} \leq K_{\rightarrow}\downarrow(\rho) \end{equation} where $K_{\rightarrow}\downarrow(\rho)=\lim_{n\rightarrow\infty} \frac { K_{\rightarrow}^{(1)}\downarrow(\rho^{\otimes n})}{n}$. Particularly, for the identity operation $\cal{U}=id$ on Bob's side one obtains: $K_{\rightarrow}(\rho_{ABB'}) \leq K_{\rightarrow}(\rho_{AB})+ 4S(\rho_{B'})$.} To prove this theorem one can start by showing how the formula behaves for the one-copy secret key: \textbf{Lemma 2.} \textit{ For every cqq-state $\rho_{ABE}$ there holds: \begin{equation}\label{lemma2} K_{\rightarrow}^{(1)}(\rho) \leq K_{\rightarrow}^{(1)}\downarrow(\rho) \end{equation}} \begin{proof} Since \[ \left\lbrace \begin{array}{l} I(A:B|C)=S(AC)+S(BC)-S(ABC)-S(C)\\ I(A:E|C)=S(AC)+S(EC)-S(AEC)-S(C)\\ \end{array} \right. \] we have: \[ K_{\rightarrow}^{(1)}(\rho)=\max_{Q,C|A}[S(BC)-S(ABC)-S(EC)+S(AEC)] \] To prove the thesis of this lemma it suffices to show that: \begin{equation}\label{key1} K_{\rightarrow}^{(1)}(\rho_{A(BB')E})\leq K_{\rightarrow}^{(1)}(\rho_{AB(B'E)})+4S(B') \end{equation} since in the case of application of $\cal{U}$ without discarding the subsystem $B'$ one obtains equality. We denote by $\rho_{AB(B'E)}$ the state after the transfer of the $B'$-subsystem to the environment.
For (\ref{key1}) we can omit the maximization that is performed on both sides of the inequality, representing an application of a chosen 1-LOCC protocol distilling a secret key, which yields: \begin{eqnarray*} &&S(BB'C)-S(ABB'C)-S(EC)+S(AEC) \leq \\ &&S(BC)-S(ABC)-S(B'EC)+S(AB'EC)+ 4S(B') \end{eqnarray*} It is easy to note that the application of unitary operations on Bob's side does not change the inequality, due to the unitary invariance of the von Neumann entropy. To simplify the proof one can decompose this inequality into the following two inequalities: \begin{equation}\label{key2} \left\lbrace \begin{array}{l} S(BB'C)-S(ABB'C)\leq S(BC)-S(ABC) + 2S(B')\\ S(B'EC)-S(AB'EC)\leq S(EC)-S(AEC) + 2S(B')\\ \end{array} \right. \end{equation} or equivalently, using the assumption that the initial state is of cqq-type and that 'A' represents a classical distribution, we can rewrite the first inequality in the form: \begin{eqnarray*} &&S(\sum_{i}p_{i}\rho_{i}^{BB'})-H(p_{i})-\sum_{i}p_{i}S(\rho_{i}^{BB'})-S(\sum_{i}p_{i}\rho_{i}^{B})\\ &&+H(p_{i})+\sum_{i}p_{i}S(\rho_{i}^{B})\leq 2S(B') \end{eqnarray*} and similarly for the second inequality, which results in the more compact form: \[ \left\lbrace \begin{array}{l} \chi(\sum_{i}p_{i}\rho_{i}^{BB'C})-\chi(\sum_{i}p_{i}\rho_{i}^{BC})\leq 2S(B') \\ \chi(\sum_{i}p_{i}\rho_{i}^{B'EC})-\chi(\sum_{i}p_{i}\rho_{i}^{EC})\leq 2S(B') \\ \end{array} \right. \] However, the above was proved in Observation 1, which completes the proof. \end{proof} Finally, we extend this result to the asymptotic regime, proving Theorem 2. \begin{proof} To prove Theorem 2 it suffices to notice that (\ref{lemma2}) holds under 1-LOCC and an arbitrarily chosen $\mathcal{U}$ for any $\rho_{n}=\rho^{\otimes n}$.
Moreover, the existence of the defect parameter $\Delta_{K_{\rightarrow}}$ enables regularization of the reduced one-way secret key rate, since in the asymptotic regime, after the application of unitary operations on Bob's side, one can apply subadditivity of entropy to estimate the entropy of the transferred $B'$ part, which implies $K_{\rightarrow}(\rho_{ABB'}) \leq K_{\rightarrow}(\rho_{AB})+ 4S(\rho_{B'})$. \end{proof} It is interesting that our results reflect E-nonlockability of the secret key rate \cite{Ekert}, which means that the rate cannot be locked with information on Eve's side. Monogamy of entanglement has been used to prove that in some parameter region the quantum depolarizing channel has zero capacity even if it does not destroy entanglement \cite{Bruss}, which is a particular application of symmetric extendibility of states to the evaluation of the quantum channel capacity. The following examples show applications of the concept: \textit{Example 1.} As an example of application of Theorem 2 we present a state which, after discarding a small $B'$ part on Bob's side, becomes a symmetric extendible state \cite{MNPH}. This example is especially important since the presented state does not possess \cite{MNPH2} any symmetric extendible component in its decomposition into symmetric and non-symmetric parts; thus, one cannot use the method of \cite{Lutk3} to find an upper bound on $K_{\rightarrow}$ by means of linear optimization. Let us consider a bipartite quantum state shared between Alice and Bob on the Hilbert space $\cal{H}_{A}\otimes\cal{H}_{B}\cong \cal{C}^{d+2}\otimes\cal{C}^{d+2}$: \begin{equation} \rho_{AB}=\frac{1}{2} \left[ \begin{array}{cccc} \Upsilon_{AB} & 0 & 0 & \cal A\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ \cal A^{\dagger} & 0 & 0 & \Upsilon_{AB}\\ \end{array} \right] \end{equation} where $\cal A$ is an arbitrarily chosen operator such that $\rho_{AB}$ represents a correct quantum state.
This matrix is represented in the computational basis $|00\rangle, |01\rangle, |10\rangle, |11\rangle$ held by Alice and Bob and possesses a singlet-like structure. Whenever one party (Alice or Bob) measures the state, the state decoheres and the off-diagonal elements vanish, which leads to a symmetric extendible state \cite{MNPH}: \begin{equation} \Upsilon_{AB}=\frac{d}{2d-1}P_{+}+\frac{1}{2d-1}\sum^{d-1}_{i=1}|i\;0\rangle\langle i\;0| \end{equation} from which neither entanglement nor a secret key can be distilled by means of 1-LOCC \cite{D1,D2,Lutk3,MNPH}. Therefore, applying Theorem 2 one derives $K_{\rightarrow}(\Upsilon_{AB})=0$ and $K_{\rightarrow}(\rho_{AB})\leq K_{\rightarrow}\downarrow(\rho_{AB})=4$. \textit{Example 2.} Let us consider a graph state \cite{Hein} $|\mathcal{G}\rangle$ of a $(3n+1)$-qubit system associated with a mathematical graph $\mathcal{G}= \{\mathcal{V},\mathcal{E}\}$, composed of a set $\mathcal{V}$ of $3n+1$ vertices and a set $\mathcal{E}$ of edges $\{i,j\}$ connecting each vertex $i$ with some other vertex $j$: \begin{equation} |\mathcal{G}\rangle=\bigotimes_{{i,j}\in\mathcal{E}}CZ_{ij}|\mathcal{G}_{0}\rangle \end{equation} where the $3n+1$ qubits are initialized in the product state $|\mathcal{G}_{0}\rangle=\bigotimes_{i\in\mathcal{V}}|\psi_{i}\rangle$ with $|\psi_{i}\rangle=|0_{i}\rangle+|1_{i}\rangle$. Afterwards, one applies a maximally entangling controlled-Z (CZ) gate to all pairs $\{i,j\}$ of qubits joined by an edge: $CZ_{ij}=|0_{i}0_{j}\rangle\langle0_{i}0_{j}| + |0_{i}1_{j}\rangle\langle0_{i}1_{j}|+|1_{i}0_{j}\rangle\langle 1_{i}0_{j}|-|1_{i}1_{j}\rangle\langle1_{i}1_{j}|$. If Alice takes no more than $n$ qubits from the graph system, which she will use to establish communication with Bob, who uses another $n$ qubits of this graph state, then they will not be able by any means to establish secure one-way communication.
This results from the fact that the state $\rho^{AB}_{2n}$ (with $n$ qubits on Alice's side and $n$ qubits on Bob's side) is symmetric extendible to a state $\rho^{AB}_{3n}$, which means that $K_{\rightarrow}(\rho^{AB}_{2n})=0$. A natural symmetric extension of $\rho^{AB}_{2n}$ is the state $\rho^{AB}_{3n}=Tr_{B'}|\mathcal{G}\rangle\langle\mathcal{G}|$ resulting from tracing out an arbitrarily chosen qubit $B'$ from the graph $\mathcal{G}$. However, if Alice takes $n$ qubits and Bob takes $n+1$ qubits from the graph system, the resulting state $\rho^{AB}_{2n+1}$ is not symmetric extendible anymore. For example, for $n=2$ this state has the spectral representation: \begin{equation}\label{state1} \rho^{AB}_{2n+1}=\frac{1}{2}(|\phi_{0}\rangle\langle\phi_{0}|+|\phi_{1}\rangle\langle\phi_{1}|) \end{equation} where $|\phi_{0}\rangle=|0_{A}\rangle|0_{B}\rangle+|1_{A}\rangle|1_{B}\rangle$, $|\phi_{1}\rangle=|0_{A}\rangle|1_{B}\rangle-|1_{A}\rangle|0_{B}\rangle$ and $\{|0\rangle_{A}=|00-01-10-11\rangle_{A},|1\rangle_{A}=|00+01+10-11\rangle_{A}, |0\rangle_{B}=|001+010+100-111\rangle_{B},|1\rangle_{B}=|000-011-101-110\rangle_{B}\}$. This state is isomorphic to a two-qubit bipartite state and meets the condition \cite{Lutk1, Lutk2} for $\cal{C}^{2}\otimes\cal{C}^{2}$ Bell-diagonal states to be symmetric extendible: $4\sqrt{det(\rho_{AB})} \geq Tr(\rho^{2}_{AB})-\frac{1}{2}$. One can easily show the isomorphism of $\rho^{AB}_{2n+1}$, for any $n$, with a two-qubit bipartite state of the structure (\ref{state1}). Thus, for the one-way secret key of the state there holds: $K_{\rightarrow}(\rho^{AB}_{2n+1}) \leq K_{\rightarrow}\downarrow(\rho^{AB}_{2n+1})=4$, since after discarding one qubit $B'$ on Bob's side his system would become symmetric extendible.
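In the two-qubit picture to which $\rho^{AB}_{2n+1}$ is isomorphic, the quoted symmetric-extendibility criterion can be evaluated directly. The sketch below (Python with NumPy, illustrative only) builds $\frac{1}{2}(|\phi_{0}\rangle\langle\phi_{0}|+|\phi_{1}\rangle\langle\phi_{1}|)$ from normalized Bell-type vectors and checks $4\sqrt{det(\rho_{AB})} \geq Tr(\rho^{2}_{AB})-\frac{1}{2}$, which here holds with equality, since both sides vanish:

```python
import numpy as np

# Normalized versions of the vectors entering the spectral representation above:
# |phi_0> = (|00> + |11>)/sqrt(2), |phi_1> = (|01> - |10>)/sqrt(2),
# written in the product basis |00>, |01>, |10>, |11>.
phi0 = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
phi1 = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = 0.5 * (np.outer(phi0, phi0) + np.outer(phi1, phi1))

# Two-qubit Bell-diagonal symmetric-extendibility criterion quoted in the text.
# rho has eigenvalues (1/2, 1/2, 0, 0), so det(rho) = 0 and Tr(rho^2) = 1/2.
lhs = 4 * np.sqrt(max(np.linalg.det(rho).real, 0.0))
rhs = np.trace(rho @ rho).real - 0.5
assert lhs >= rhs - 1e-9   # criterion satisfied (equality case: 0 >= 0)
```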
\textit{\bf An upper bound on quantum channel capacity.} The best known definition of the one-way quantum channel capacity $\mathcal{Q}_{\rightarrow}(\Lambda)$ \cite{Bennett2, Barnum3} is expressed as an asymptotic regularization of coherent information: $\mathcal{Q}_{\rightarrow}(\Lambda)=\lim_{n\rightarrow \infty}\frac{1}{n}\sup_{\rho_{n}} I_{c}(\rho_{n}, \Lambda^{\otimes n})$, with parallel use of $n$ copies of the channel $\Lambda$. The coherent information for a channel $\Lambda$ and a source state $\sigma$ transferred through the channel is defined as $I_{c}(\sigma, \Lambda)=I^{B}(I\otimes \Lambda)(|\Psi\rangle\langle\Psi|)$, where $\Psi$ is a pure state with reduction $\sigma$; the coherent information of a bipartite state $\rho_{AB}$ shared between Alice and Bob is defined as $I^{B}(\rho_{AB})=S(B)-S(AB)$. In what follows we will use the notation $I_{c}(A \rangle B)=I^{B}(\rho_{AB})$. \textbf{Observation 1. }\textit{For a bipartite state $\rho_{ABB'}\in B(\cal{H}_{A}\otimes \cal{H}_{B}\otimes \cal{H}_{B'})$ shared between Alice and Bob (B and B' system) there holds:} \begin{equation} I_{c}(A \rangle BB')\leq I_{c}(A \rangle B) + 2S(B') \end{equation} \textit{Proof.} By subadditivity of entropy, $S(BB')\leq S(B)+S(B')$, and by the Araki-Lieb inequality, $|S(AB)-S(B')|\leq S(ABB')$; hence the left hand side can be bounded as follows: $S(BB')-S(ABB')\leq S(B)+S(B')-S(AB)+ S(B')=I_{c}(A \rangle B) + 2S(B')$, which completes the proof.
$\Box$ Motivated by the reduced secret key rate and the above observation, we now derive a reduced version of the quantum channel capacity and show that it is a good bound on the quantum channel capacity: \textbf{Definition 4.} \textit{For a one-way quantum channel $\Lambda_{BB'}:B(\cal{H}_{BB'})\rightarrow B(\cal{H}_{\widetilde{B}\widetilde{B'}})$ the reduced one-way quantum channel capacity is defined as: \begin{equation} \mathcal{Q}_{\rightarrow}^{(1)}\downarrow(\Lambda_{BB'}) = \inf_{\cal{U}}[\mathcal{Q}_{\rightarrow}^{(1)}(\cal{U}(\Lambda_{B}))+ \Delta_{\mathcal{Q}_{\rightarrow}}] \end{equation} where $\cal{U}$ denotes unitary operations on Bob's system with a possible transfer of subsystems from Bob to Eve after the action of the channel $\Lambda_{BB'}$, i.e. $\cal{U}(\Lambda_{B}(\rho_{B}))=Tr_{B'}\cal{U}\Lambda_{BB'}(\rho_{BB'})$. $\Delta_{\mathcal{Q}_{\rightarrow}}=2 \sup_{\rho_{BB'}}S(Tr_{B}\Lambda_{BB'}(\rho_{BB'}))$ denotes the defect parameter related to the increase of entropy produced by the transfer of the B'-subsystem from Bob's side to Eve.} \textbf{Theorem 3. }\textit{For any one-way quantum channel $\Lambda_{BB'}:B(\cal{H}_{BB'})\rightarrow B(\cal{H}_{\widetilde{B}\widetilde{B'}})$ there holds: \begin{equation} \mathcal{Q}_{\rightarrow}(\Lambda_{BB'}) \leq \mathcal{Q}_{\rightarrow}\downarrow(\Lambda_{BB'}) \end{equation} where $\mathcal{Q}_{\rightarrow}\downarrow(\Lambda_{BB'})=\lim_{n}\mathcal{Q}_{\rightarrow}^{(1)}\downarrow(\Lambda_{BB'}^{\otimes n })/n$ denotes the reduced quantum capacity. In particular, for the identity operation $\cal{U}=id$ on Bob's side one obtains: $\mathcal{Q}_{\rightarrow}(\Lambda_{BB'}) \leq \mathcal{Q}_{\rightarrow}(\Lambda_{B})+ 2\sup_{\rho_{BB'}}S(Tr_{B}\Lambda_{BB'}(\rho_{BB'}))$}. To prove this inequality for the regularized quantum capacity and its reduced version, it is sufficient to derive the lemma below for the single-copy case, in analogy to the lemma for the one-way secret key rate above: \textbf{Lemma 4. 
}\textit{For any one-way quantum channel $\Lambda_{BB'}:B(\cal{H}_{BB'})\rightarrow B(\cal{H}_{\widetilde{B}\widetilde{B'}})$ there holds: \begin{equation} \mathcal{Q}_{\rightarrow}^{(1)}(\Lambda_{BB'}) \leq \mathcal{Q}_{\rightarrow}^{(1)}\downarrow(\Lambda_{BB'}) \end{equation} } \textit{Proof.} The proof of this lemma follows directly from Observation 1: for a state $\rho_{BB'}$ maximizing the coherent information on the left hand side of the observation, the above formula holds also for a possible transfer of B' to the environment. It is worth recalling that the action of a unitary operator on a state does not change its entropy and, as a result, does not change the coherent information for any partition of the system.$\Box$ Further, one can complete the proof of the theorem in the asymptotic regime: \textit{Proof.} To prove the inequality of Theorem 3 asymptotically, it suffices to notice that the statements of Lemma 4 hold also for an arbitrarily chosen state $\rho_{n}=\rho^{\otimes n}$. Now we can prove that $\mathcal{Q}_{\rightarrow}(\Lambda_{BB'}) \leq \mathcal{Q}_{\rightarrow}(\Lambda_{B})+ \Delta_{\mathcal{Q}_{\rightarrow}}$. Let $\rho^{BB'}_{n}$ be a state maximizing $\mathcal{Q}_{\rightarrow}(\Lambda_{BB'})$ as an asymptotic regularization of coherent information, i.e. $\mathcal{Q}_{\rightarrow}(\Lambda_{BB'})=\lim_{n\rightarrow \infty}\frac{1}{n}I_{c}(\rho^{BB'}_{n}, \Lambda_{BB'}^{\otimes n})$, which one can represent as $I_{c}(A \rangle BB')$ via the aforementioned Choi-Jamiolkowski isomorphism between states and channels. Based on Observation 1, one can immediately derive for the maximizing state $\rho^{BB'}_{n}$: $\frac{1}{n}I_{c}(A \rangle BB') \leq \frac{1}{n}[I_{c}(A \rangle B) +2S(\rho^{B'}_{n})]$ where $I_{c}(A \rangle B)=I_{c}(Tr_{B'}\rho^{BB'}_{n},\Lambda^{\otimes n}_{B})$ and $\rho^{B'}_{n}=Tr_{B}\Lambda_{BB'}^{\otimes n}(\rho^{BB'}_{n})$.
However, if there exists a state $\sigma^{B}_{n}$ for which $I_{c}(\sigma^{B}_{n},\Lambda^{\otimes n}_{B}) > I_{c}(Tr_{B'}\rho^{BB'}_{n},\Lambda^{\otimes n}_{B})$, then the right hand side of the inequality in the lemma can only be larger than in the case of the chosen state $\rho^{BB'}_{n}$, which completes the proof. Finally, as in the aforementioned proof for the key, subadditivity of entropy can be applied to verify that in the case of the regularized reduced secret key the defect parameter cannot be larger than $\Delta_{\mathcal{Q}_{\rightarrow}}=2 \sup_{\rho_{BB'}}S(Tr_{B}\Lambda_{BB'}(\rho_{BB'}))$, since $\sup_{\rho_{BB'}^{n}}S(Tr_{B^{n}}\Lambda_{BB'}^{\otimes n }(\rho_{BB'}^{n}))\leq n\sup_{\rho_{BB'}}S(Tr_{B}\Lambda_{BB'}(\rho_{BB'}))$. $\Box$ \textit{Example 3.} As an example we will use the aforementioned graph state from Example 2 and search for the one-way channel capacity of a channel $\Lambda_{BB'}$ that is isomorphic, via the Choi-Jamiolkowski isomorphism, to the state $\rho^{ABB'}_{2n+1}=(I\otimes \Lambda_{BB'})|\Psi\rangle\langle\Psi|$. As above, after discarding the one-qubit system $B'$ the state would become symmetric extendible, which implies $Q_{\rightarrow}(\Lambda_{B})=0$. Therefore, we obtain $Q_{\rightarrow}(\Lambda_{BB'})\leq 2$. The power of the above results appears especially in the application of Lemma 3 to any channel reducible to an anti-degradable channel, whose Choi-Jamiolkowski representation is symmetric extendible \cite{Lutk1}, or to channels reducible to degradable channels, which have known capacity \cite{Smith1}. \textit{\bf Dual picture for one-way distillable entanglement and private information.} Our results for the one-way secret key and quantum channel capacity lead immediately to similar reduced formulas for private information and one-way distillation quantities.
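Before turning to the dual picture, Observation 1, on which the bounds above rest, can be sanity-checked numerically on a random three-qubit pure state (the helpers below are ad hoc sketches, not from the paper):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def partial_trace(rho, keep, dims):
    """Reduced state on the subsystems listed in `keep` (0-indexed)."""
    n = len(dims)
    t = rho.reshape(tuple(dims) * 2)
    m = n
    for i in sorted(set(range(n)) - set(keep), reverse=True):
        t = np.trace(t, axis1=i, axis2=i + m)   # trace out subsystem i
        m -= 1
    d = int(np.prod([dims[i] for i in keep]))
    return t.reshape(d, d)

# Random pure state of three qubits A, B, B'
dims = (2, 2, 2)
rng = np.random.default_rng(7)
v = rng.normal(size=8) + 1j * rng.normal(size=8)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())

lhs = entropy(partial_trace(rho, [1, 2], dims)) - entropy(rho)      # I_c(A>BB')
rhs = (entropy(partial_trace(rho, [1], dims))
       - entropy(partial_trace(rho, [0, 1], dims))                  # I_c(A>B)
       + 2 * entropy(partial_trace(rho, [2], dims)))                # + 2 S(B')
assert lhs <= rhs + 1e-9
```

For a pure tripartite state the check reduces to $S(A)\leq S(B)+S(B')$, i.e. plain subadditivity, so the inequality always holds here; mixed states follow the same pattern.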
The private capacity \cite{DevetakW3, DevetakW4} $\mathcal{P}(\Lambda)$ of a quantum channel is equal to the regularization of the private information $\mathcal{P}^{(1)}(\Lambda)=\max_{X,\rho_{x}^{A}}(I(X,B)-I(X,E))$, with maximization over classical random variables $X$ and input quantum states $\rho_{x}^{A}$ depending on the value of $X$. Absorbing $T$ into the variable $X$ in Theorem 1 leads to the definitions of private information and private capacity \cite{DevetakW4}; thus, following Lemma 3, we can derive an upper bound on private information and private capacity via their reduced counterparts: \textbf{Definition 5.} \textit{For a one-way quantum channel $\Lambda_{BB'}:B(\cal{H}_{BB'})\rightarrow B(\cal{H}_{\widetilde{B}\widetilde{B'}})$ the reduced private information is defined as: \begin{equation} \mathcal{P}^{(1)}\downarrow(\Lambda_{BB'}) = \inf_{\cal{U}}[\mathcal{P}^{(1)}(\cal{U}(\Lambda_{B}))+ \Delta_{P}] \end{equation} where $\cal{U}$ denotes unitary operations on Bob's system with a possible transfer of subsystems from Bob to Eve, i.e. $\cal{U}(\Lambda_{B}(\rho_{B}))=Tr_{B'}\cal{U}\Lambda_{BB'}(\rho_{BB'})$. $\Delta_{P}=4S(\rho_{B'})$ denotes the defect parameter related to the increase of entropy produced by the transfer of the B'-subsystem from Bob's side to Eve.} \textbf{Theorem 4. }\textit{For a one-way quantum channel $\Lambda_{BB'}:B(\cal{H}_{BB'})\rightarrow B(\cal{H}_{\widetilde{B}\widetilde{B'}})$ there holds: \begin{equation} \mathcal{P}(\Lambda_{BB'}) \leq \mathcal{P}\downarrow(\Lambda_{BB'}) \end{equation} where $\mathcal{P}\downarrow(\Lambda_{BB'})=\lim_{n}\mathcal{P}^{(1)}\downarrow(\Lambda_{BB'}^{\otimes n })/n$ denotes the reduced private capacity. In particular, for the identity operation $\cal{U}=id$ on Bob's side one obtains: $\mathcal{P}(\Lambda_{BB'}) \leq \mathcal{P}(\Lambda_{B})+ 4S(\rho_{B'})$} The proof can be conducted in analogy to Theorem 2.
and Lemma 3; however, for the regularization of the reduced private information it is crucial to derive the lemma below for the one-copy case: \textbf{Lemma 5. }\textit{For every one-way quantum channel $\Lambda_{BB'}:B(\cal{H}_{BB'})\rightarrow B(\cal{H}_{\widetilde{B}\widetilde{B'}})$ there holds: \begin{equation} \mathcal{P}^{(1)}(\Lambda_{BB'}) \leq \mathcal{P}^{(1)}\downarrow(\Lambda_{BB'}) \end{equation} } \textit{Proof.} To prove this lemma it suffices to absorb the variable $T$ into $X$ in Theorem 1 to obtain the definition of private information, and to conduct the proof in analogy to the proof of Lemma 2 for a channel $\Lambda_{BB'}$ and a chosen state $\rho$ sent through it. $\Box$ We can now propose a new bound on distillation of entanglement by means of one-way LOCC. This result is based on the observation \cite{DevetakW3, DevetakW4} that the one-way distillable entanglement $D_{\rightarrow}$ of a state $\rho_{AB}$ can be represented as the regularization of the one-copy formula $D^{(1)}_{\rightarrow}(\rho_{AB})=\max_{\textbf{T}}\sum_{l=1}^{L}\lambda_{l}I_{c}(A\rangle B)_{\rho_{l}}$, where the maximization is over quantum instruments $\textbf{T} = (T_{1}, \dots , T_{L})$ on Alice's system, $\lambda_{l}=TrT_{l}(\rho_{A})$, each $T_{l}$ is assumed to have one Kraus operator $T_{l}(\rho)=A_{l}\rho A_{l}^{\dag}$, and $\rho_{l}=\frac{1}{\lambda_{l}}(T_{l}\otimes id)\rho_{AB}$. Based on the results of Observation 1 and Lemma 3,
we derive a general formula for the bound on one-way distillable entanglement applying the reduced quantity: \textbf{Definition 6.} \textit{For a bipartite state $\rho_{ABB'}\in B(\cal{H}_{A}\otimes \cal{H}_{B}\otimes \cal{H}_{B'})$ shared between Alice and Bob (B and B' system) the reduced one-way distillable entanglement is defined as: \begin{equation} D_{\rightarrow}^{(1)}\downarrow(\rho_{ABB'}) = \inf_{\cal{U}}[D_{\rightarrow}^{(1)}(\cal{U}(\rho_{AB}))+ \Delta_{D_{\rightarrow}}] \end{equation} where $\cal{U}$ denotes unitary operations on Bob's system with a possible transfer of subsystems from Bob to Eve, i.e. $\cal{U}(\rho_{AB})=Tr_{B'}(I\otimes \cal{U})\rho_{ABB'}$. $\Delta_{D_{\rightarrow}}=2S(\rho_{B'})$ denotes the defect parameter related to the increase of entropy produced by the transfer of the B'-subsystem from Bob's side to Eve.} \textbf{Theorem 5. }\textit{For a bipartite state $\rho_{ABB'}\in B(\cal{H}_{A}\otimes \cal{H}_{B}\otimes \cal{H}_{B'})$ shared between Alice and Bob (B and B' system) there holds: \[ D_{\rightarrow}(\rho_{ABB'}) \leq D_{\rightarrow}\downarrow(\rho_{ABB'}) \] where $\Delta_{D_{\rightarrow}}=2S(\rho_{B'})$ and $D_{\rightarrow}\downarrow(\rho_{ABB'})=\lim_{n}D_{\rightarrow}^{(1)}\downarrow(\rho_{ABB'}^{\otimes n })/n$ denotes the regularized version of the one-copy reduced one-way distillable entanglement. In particular, for the identity operation $\cal{U}=id$ on Bob's side one obtains: $D_{\rightarrow}(\rho_{ABB'}) \leq D_{\rightarrow}(\rho_{AB})+ 2S(\rho_{B'})$.} The proof of this theorem can be conducted in analogy to the previous proofs for the bounds on the one-way secret key and quantum channel capacity. The inequality is an immediate implication of the following lemma for the one-copy formula: \textbf{Lemma 6. 
}\textit{For every bipartite state $\rho_{ABB'}$ there holds: \begin{equation} D_{\rightarrow}^{(1)}(\rho_{ABB'}) \leq D_{\rightarrow}^{(1)}\downarrow(\rho_{ABB'}) \end{equation} } \textit{Proof.} It suffices to use the results of Observation 1 to notice that, for a chosen set of instruments $\textbf{T}$ on Alice's side used in the calculation of $D_{\rightarrow}^{(1)}(\rho_{ABB'})$, the inequality holds as an extension of the inequality from Observation 1 by the multipliers $\lambda_{l}$ on both sides. Moreover, if in the calculation of $D_{\rightarrow}^{(1)}(\rho_{AB})$ there exists a set $\textbf{T'}$ achieving a larger value than $\textbf{T}$, then the right hand side of the inequality can only be greater. $\Box$ It is crucial to notice that the `defect' parameters $\Delta$ for the reduced quantities are subadditive and hence can be exploited in the case of composite systems and regularization: \textit{\bf Corollary. }\textit{For the reduced quantities of $\{K_{\rightarrow},\mathcal{P}, \mathcal{Q}_{\rightarrow}, D_{\rightarrow}\}$ for composite systems there holds: $\Delta_{X}(\rho\otimes\sigma)\leq\Delta_{X}(\rho)+\Delta_{X}(\sigma)$ and $\Delta_{Y}(\Lambda\otimes\Gamma)\leq\Delta_{Y}(\Lambda)+\Delta_{Y}(\Gamma)$, where $X=\{K_{\rightarrow},D_{\rightarrow}\}$ stands for states and $Y=\{\mathcal{Q}_{\rightarrow},\mathcal{P}\}$ for channels, respectively. } To prove the above corollary it suffices to use subadditivity of entropy for composite systems, since Bob can act with a unitary operation before he discards some part of his subsystem. This property of the parameters enables regularization in the asymptotic regime of the reduced quantities for large systems $\rho^{\otimes n}$. \textit{Example 4. 
Activable multi-qubit bound entangled states.} As an example illustrating this bound we consider an activable bound entangled state $\rho_{II}$ \cite{Dur}, which is distillable if the parties (Alice and Bob) form two groups containing between $40\%$ and $60\%$ of all parties of the system in the state $\rho_{II}$. If Alice or Bob possesses less than $40\%$ of the system, or the system is shared among more than two parties, then the state becomes undistillable. For a large number of particles this state can manifest features characteristic of `macroscopic entanglement' with no `microscopic entanglement'. To define the state, let us consider the family $\rho_{N}$ of N-qubit states: $\rho_{N}=\sum_{\sigma=\pm}\lambda_{0}^{\sigma}|\Psi_{0}^{\sigma}\rangle\langle\Psi_{0}^{\sigma}|+ \sum_{k\neq0}\lambda_{k}(|\Psi_{k}^{+}\rangle\langle\Psi_{k}^{+}|+|\Psi_{k}^{-}\rangle\langle\Psi_{k}^{-}|)$ where $|\Psi_{k}^{\pm}\rangle=\frac{1}{\sqrt{2}}(|k_{1}k_{2}\ldots k_{N-1}0\rangle\pm|\overline{k}_{1}\overline{k}_{2}\ldots \overline{k}_{N-1}1\rangle)$ are GHZ-like states with $k=k_{1}k_{2}\ldots k_{N-1}$ being a chain of $N-1$ bits and $\overline{k}_{i}$ denoting the negation of the bit $k_{i}$; thus, the state is parameterized by $2^{N-1}$ coefficients. Let us now consider a bipartite splitting $\mathcal{P}$ where Alice takes $0.6N$ of the qubits and Bob takes the other $0.4N$ qubits. We can immediately show that $D_{\rightarrow}(\rho_{II})\leq - 2(\lambda_{0}^{\pm}+2\sum_{k}\lambda_{k})\log(\lambda_{0}^{\pm}+2\sum_{k}\lambda_{k})$, since when Bob transfers one qubit to the environment we obtain an undistillable state, $D_{\leftrightarrow}(\rho_{N-1})=0$. It is noticeable that this bound holds even for a large macroscopic system with $N\rightarrow\infty$. It can be easily shown that the same method yields an upper bound on the one-way quantum channel capacity $Q_{\rightarrow}$.
\textit{\bf Conclusions.} In this paper we proposed reduced versions of several quantum quantities: the reduced one-way secret key, the reduced distillable entanglement, and the corresponding reduced capacities. We showed that in some cases they provide bounds on the non-reduced versions, drastically simplifying their estimation. This is especially evident in the case of states of large systems, as supported by the examples. An open problem is whether they can be applied to the non-additivity problem of quantum channel capacities and the quantum secret key \cite{Smith, Smith1}. Further, it is not known whether they have analogues in general quantum networks and whether the bounds can be improved by a better estimation of the defect parameters. \textit{\bf Acknowledgments.} The authors thank Michal Horodecki for critical comments on this paper. This work was supported by the Ministry of Science and Higher Education grant No. N202 231937. Part of this work was done in the National Quantum Information Center of Gdansk. \end{document}
\begin{document} \title{Chromatic index determined by fractional chromatic index} \author{Guantao Chen$^{a}$, Yuping Gao$^{b,a}$, Ringi Kim$^{c}$, Luke Postle$^c$, Songling Shan$^{d}$\\ {\xiaowuhao $^{a}$ Department of Mathematics and Statistics, Georgia State University, Atlanta, GA\,30303, USA}\\ {\xiaowuhao $^{b}$ School of Mathematics and Statistics, Lanzhou University, Lanzhou, Gansu 730000, China}\\ {\xiaowuhao $^c$ University of Waterloo, Waterloo, ON, N2L 3G1, Canada}\\ {\xiaowuhao $^{d}$ Department of Mathematics, Vanderbilt University, Nashville, TN\, 37240, USA}} \date{} \maketitle \begin{abstract}Given a graph $G$, possibly with multiple edges but no loops, denote by $\Delta$ the maximum degree, $\mu$ the multiplicity, $\chi'$ the chromatic index and $\chi_f'$ the fractional chromatic index of $G$, respectively. It is known that $\Delta\le \chi_f' \le \chi' \le \Delta + \mu$, where the upper bound is a classic result of Vizing. While deciding the exact value of $\chi'$ is a classic NP-complete problem, $\chi_f'$ can be computed in polynomial time. In fact, it is known that if $\chi_f' > \Delta$ then $\chi_f'= \max \frac{|E(H)|}{\lfloor |V(H)|/2\rfloor}$, where the maximum is taken over all induced subgraphs $H$ of $G$. Gupta\,(1967), Goldberg\,(1973), Andersen\,(1977), and Seymour\,(1979) conjectured that $\chi'=\lceil\chi_f'\rceil$ if $\chi'\ge \Delta+2$, which is commonly referred to as Goldberg's conjecture. 
It has been shown that Goldberg's conjecture is equivalent to the following conjecture of Jakobsen: For any positive integer $m$ with $m\ge 3$, every graph $G$ with $\chi'>\frac{m}{m-1}\Delta+\frac{m-3}{m-1}$ satisfies $\chi'=\lceil\chi_f'\rceil$. Jakobsen's conjecture has been verified for $m$ up to 15 by various researchers over the last four decades. We use an extended form of a Tashkinov tree to show that it is true for $m\le 23$. With the same technique, we show that if $\chi' \geq\Delta+\sqrt[3]{\Delta/2}$ then $\chi'=\lceil\chi_f'\rceil$. The previous best known result, obtained by Scheide and, independently, by Chen, Yu and Zang, covers graphs with $\chi'> \Delta +\sqrt{\Delta/2}$. Moreover, we show that Goldberg's conjecture holds for graphs $G$ with $\Delta\leq 23$ or $|V(G)|\leq 23$. \end{abstract} \emph{\indent \textbf{Keywords}.} Edge chromatic index; Fractional chromatic index; Critical graph; Tashkinov tree; Extended Tashkinov tree \section{Introduction} Graphs considered in this paper may contain multiple edges but no loops. Let $G$ be a graph and $\Delta:=\Delta(G)$ be the maximum degree of $G$. A (proper) {\it $k$-edge-coloring} $\varphi$ of $G$ is a mapping from $E(G)$ to $\{1, 2, \cdots, k\}$ (whose elements are called colors) such that no two adjacent edges receive the same color. The {\it chromatic index} $\chi' :=\chi'(G)$ is the least integer $k$ such that $G$ has a $k$-edge-coloring. In graph edge-coloring, the central question is to determine the chromatic index $\chi'$ of a graph. We refer to the book~\cite{StiebSTF-Book} of Stiebitz, Scheide, Toft and Favrholdt and the elegant survey~\cite{McDonaldSurvey15} of McDonald for literature on the recent progress in graph edge-coloring. Clearly, $\chi' \ge \Delta$. On the other hand, Vizing showed that $\chi' \le \Delta + \mu$, where $\mu := \mu(G)$ is the multiplicity of $G$. However, determining the exact value of $\chi'$ is a very difficult problem. 
Holyer~\cite{Holyer81} showed that the problem is NP-hard even when restricted to simple cubic graphs. To estimate $\chi'$, the notion of fractional chromatic index is introduced. A {\it fractional edge coloring} of $G$ is a non-negative weighting $w(\cdot)$ of the set $\mathcal{M}(G)$ of matchings in $G$ such that, for every edge $e\in E(G)$, $\sum_{M\in \mathcal{M}: e\in M} w(M) =1$. Clearly, such a weighting $w(\cdot)$ exists. The {\it fractional chromatic index} $\chi_f' := \chi'_f(G)$ is the minimum total weight $\sum_{M\in \mathcal{M}} w(M)$ over all fractional edge colorings of $G$. By the definitions, we have $\chi' \ge \chi'_f\ge \Delta $. It follows from Edmonds' characterization of the matching polytope~\cite{Edmonds65} that $\chi_f'$ can be computed in polynomial time and \[ \chi'_f = \max\left\{ \frac{|E(H)|}{\lfloor |V(H)|/2\rfloor} \, : \, \mbox{ $H\subseteq G$ with $|V(H)|\ge 3$ } \right\} \, \mbox{if $\chi_f' > \Delta$}. \] It is not difficult to show that the above maximum can be restricted to induced subgraphs $H$ with an odd number of vertices. So, in the case of $\chi'_f > \Delta$, we have $$\lceil \chi_f'\rceil=\max\left\{\left\lceil \frac{2|E(H)|}{|V(H)|-1}\right\rceil: \mbox{induced\ subgraphs\ $H\subseteq G$\ with\ $|V(H)| \ge 3$ \ and\ odd} \,\right\}.$$ A graph $G$ is called {\it elementary} if $\chi' = \lceil \chi_f' \rceil$. Gupta\,(1967)~\cite{Gupta67}, Goldberg\,(1973)~\cite{Goldberg}, Andersen\,(1977)~\cite{Andersen}, and Seymour\,(1979)~\cite{Seymour} independently made the following conjecture, which is commonly referred to as {\it Goldberg's conjecture}. \begin{CON}\label{con:GAS} For any graph $G$, if $\chi' \ge \Delta+2$ then $G$ is elementary. \end{CON} An immediate consequence of Conjecture~\ref{con:GAS} is that $\chi'$ can be computed in polynomial time for graphs with $\chi' \ge \Delta +2$. 
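The displayed formula for $\lceil \chi_f'\rceil$ can be evaluated directly by brute force on small multigraphs. The sketch below (an illustrative helper, not from the paper; it enumerates all odd vertex subsets rather than using Edmonds' polynomial-time method) reproduces the value on Shannon's extremal triangle, where each edge has multiplicity $\mu$ and $\chi'=\chi_f'=3\mu$:

```python
from itertools import combinations

def ceil_frac_index(vertices, edges, max_degree):
    """ceil(chi'_f) via the displayed formula: maximize
    ceil(2|E(H)| / (|V(H)|-1)) over induced subgraphs H with |V(H)| >= 3 odd.
    `edges` is a multiset of endpoint pairs; parallel edges are repeated."""
    best = max_degree                       # chi'_f >= Delta always
    for r in range(3, len(vertices) + 1, 2):
        for H in combinations(vertices, r):
            Hset = set(H)
            m = sum(1 for (u, v) in edges if u in Hset and v in Hset)
            best = max(best, -(-2 * m // (r - 1)))   # ceiling division
    return best

# Shannon's extremal example: a triangle with every edge of multiplicity mu
mu = 4
tri = [(0, 1), (1, 2), (0, 2)] * mu         # Delta = 2*mu, chi' = 3*mu
print(ceil_frac_index([0, 1, 2], tri, 2 * mu))  # -> 12
```

On a 5-cycle (where $\Delta=2$ but $\chi'=3$) the same routine returns 3, witnessed by the whole odd cycle as the maximizing induced subgraph.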
So the NP-complete problem of computing the chromatic index lies in determining whether $\chi' = \Delta$, $\Delta +1$, or $\ge \Delta +2$, which strengthens Vizing's classic result $\chi'\leq \Delta+\mu$ tremendously when $\mu$ is large. By the classic result $\chi'\le \frac{3\Delta}{2}$ of Shannon~\cite{Shannon49}, for every $\Delta$ there exists a least positive number $\zeta$ such that if $\chi'> \Delta + \zeta$ then $G$ is elementary. Conjecture~\ref{con:GAS} indicates that $\zeta \le 1$. Asymptotically, Kahn~\cite{Kahn96} showed $\zeta= o(\Delta)$. Scheide~\cite{Scheide-2010}, and Chen, Yu, and Zang~\cite{CYZ-2011} independently proved that $\zeta \le \sqrt{\Delta/2}$. In this paper, we show that $\zeta \le \sqrt[3]{\Delta/2}-1$, as stated below. \begin{THM}\label{THM:cubic} For any graph $G$, if $\chi'\geq\Delta+\sqrt[3]{\Delta/2}$, then $G$ is elementary. \end{THM} Jakobsen~\cite{Jakobsen73} conjectured that $\zeta \le 1 + \frac{\Delta -2}{m-1}$ for every integer $m\geq 3$, which gives a reformulation of Conjecture~\ref{con:GAS} as stated below. \begin{CON}\label{con:Jm} Let $m$ be an integer with $m\ge 3$ and $G$ be a graph. If $\chi'>\frac{m}{m-1}\Delta+\frac{m-3}{m-1}$, then $G$ is elementary. \end{CON} Since $\frac{m}{m-1}\Delta+\frac{m-3}{m-1}$ decreases as $m$ increases, it is sufficient to prove Jakobsen's conjecture for all odd integers $m$ (in fact, for any infinite sequence of positive integers). It has been confirmed for $m\le 15$ by a series of papers over the last 40 years: \begin{itemize} \item {\bf $m=5$:} Three independent proofs given by Andersen~\cite{Andersen} (1977), Goldberg~\cite{Goldberg} (1973), and S{\o}rensen\,(unpublished, page 158 in \cite{StiebSTF-Book}), respectively. \item {\bf $m=7$:} Two independent proofs given by Andersen~\cite{Andersen} (1977) and S{\o}rensen (unpublished, page 158 in \cite{StiebSTF-Book}), respectively. 
\item {\bf $m=9$:} By Goldberg~\cite{Goldberg-1984} (1984). \item {\bf $m=11$:} Two independent proofs given by Nishizeki and Kashiwagi~\cite{Nishizeki-Kashiwagi-1990} (1990) and by Tashkinov~\cite{Tashkinov-2000} (2000), respectively. \item {\bf $m=13$:} By Favrholdt, Stiebitz and Toft~\cite{FavrST06} (2006). \item {\bf $m=15$:} By Scheide~\cite{Scheide-2010} (2010). \end{itemize} In this paper, we show that Jakobsen's conjecture is true up to $m=23$. \begin{THM}\label{THM:Jm19} If $G$ is a graph with $\chi'>\frac{23}{22}\Delta+\frac{20}{22}$, then $G$ is elementary. \end{THM} \begin{COR}\label{COR:Delta23} If $G$ is a graph with $\Delta\le 23$ or $|V(G)|\le 23$, then $\chi'\le \max\{\Delta+1,\lceil\chi'_f\rceil\}$. \end{COR} Note that in Corollary~\ref{COR:Delta23}, $|V(G)| \le 23$ does not imply $\Delta \le 23$, as $G$ may have multiple edges. The remainder of this paper is organized as follows. In Section 2, we introduce some definitions and notation for edge-colorings, Tashkinov trees, and several known results which are useful for the proofs of Theorems~\ref{THM:cubic} and~\ref{THM:Jm19}; in Section 3, we give an extension of Tashkinov trees and prove several properties of the extended Tashkinov trees; and in Section 4, we prove Theorem~\ref{THM:cubic}, Theorem~\ref{THM:Jm19} and Corollary \ref{COR:Delta23} based on the results in Section 3. \section{Preliminaries} \subsection{Basic definitions and notation} Let $G$ be a graph with vertex set $V$ and edge set $E$. Denote by $|G|$ and $||G||$ the number of vertices and the number of edges of $G$, respectively. For any two sets $X, Y\subseteq V$, denote by $E(X, Y)$ the set of edges with one end in $X$ and the other one in $Y$ and denote by $\partial(X) := E(X, V-X)$ the boundary edge set of $X$, that is, the set of edges with exactly one end in $X$. Moreover, let $E(x, y) := E(\{x\}, \{y\})$ and $E(x) := \partial(\{x\})$. Denote by $G[X]$ the subgraph induced by $X$ and $G-X$ the subgraph induced by $V(G)-X$. 
Moreover, let $G-x = G-\{x\}$. For any subgraph $H$ of $G$, we let $G[H] = G[V(H)]$ and $\partial(H) = \partial(V(H))$. Let $V(e)$ be the set of the two ends of an edge $e$. A path $P$ is usually denoted by an alternating sequence $P=(v_0, e_1, v_1,\cdots, e_p,v_p)$ with $V(P)=\{v_0,\cdots, v_p\}$ and $E(P)=\{e_1,\cdots, e_p\}$ such that $e_i\in E_G(v_{i-1}, v_i)$ for $1\le i\le p$. The path $P$ defined above is called a $(v_0,v_p)$-path. For any two vertices $u,v \in V(P)$, denote by $uPv$ or $vPu$ the unique subpath connecting $u$ and $v$. If $u$ is an end of $P$, then we obtain a {\it linear order $\preceq_{(u, P)}$} of the vertices of $P$ in a natural way such that $x\preceq_{(u,P)}y$ if $x\in V(uPy)$. The set of all $k$-edge-colorings of a graph $G$ is denoted by $\mathcal{C}^k(G)$. Let $\varphi\in \mathcal{C}^k(G)$. For any color $\alpha$, let $E_{\alpha} =\{e\in E \ : \ \varphi(e) =\alpha\}$. More generally, for each subgraph $H\subseteq G$, let $$E_{\alpha}(H)=\{e\in E(H)\ :\ \varphi(e)=\alpha\}.$$ For any two distinct colors $\alpha$ and $\beta$, denote by $G_{\varphi}(\alpha,\beta)$ the subgraph of $G$ induced by $E_{\alpha} \cup E_{\beta}$. The components of $G_{\varphi}(\alpha,\beta)$ are called {\it $(\alpha,\beta)$-chains}. Clearly, each $(\alpha,\beta)$-chain is either a path or a cycle of edges alternately colored with $\alpha$ and $\beta$. For each $(\alpha, \beta)$-chain $P$, let $\varphi/P$ denote the $k$-edge-coloring obtained from $\varphi$ by exchanging colors $\alpha$ and $\beta$ on $P$, that is, for each $e\in E$, $$ \varphi/P\ (e) = \left\{ \begin{array}{ll} \varphi(e), & \hbox{$e\notin E(P)$;} \\ \beta, & \hbox{$e\in E(P)$ and $\varphi(e)=\alpha$;} \\ \alpha, & \hbox{$e\in E(P)$ and $\varphi(e)=\beta$.} \end{array} \right. $$ For any $ v\in V$, let $P_v(\alpha,\beta,\varphi)$ denote the unique $(\alpha,\beta)$-chain containing $v$. 
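The chain-switching operation $\varphi/P$ described above can be sketched as follows (an illustrative helper, not from the paper): find the connected component of the $(\alpha,\beta)$-colored subgraph containing $v$ by breadth-first search, then exchange the two colors on its edges.

```python
from collections import deque

def kempe_swap(edges, colors, v, alpha, beta):
    """Return the coloring phi/P obtained by exchanging alpha and beta on the
    (alpha,beta)-chain P_v(alpha,beta,phi).  `edges[i] = (u, w)`; `colors[i]`
    is the color of edge i; parallel edges are allowed."""
    reach, queue, chain = {v}, deque([v]), set()
    while queue:                       # BFS over edges colored alpha or beta
        x = queue.popleft()
        for i, (u, w) in enumerate(edges):
            if colors[i] in (alpha, beta) and x in (u, w):
                chain.add(i)
                y = w if x == u else u
                if y not in reach:
                    reach.add(y)
                    queue.append(y)
    new = list(colors)
    for i in chain:                    # exchange the two colors on the chain
        new[i] = beta if colors[i] == alpha else alpha
    return new

# A path 0-1-2 with colors 1,2: the (1,2)-chain through 0 is the whole path
print(kempe_swap([(0, 1), (1, 2)], [1, 2], 0, 1, 2))  # -> [2, 1]
```

Since each $(\alpha,\beta)$-chain is a path or cycle alternating between the two colors, the swap always yields another proper coloring, which is what makes this recoloring step so useful.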
Notice that, for any two vertices $u, \, v\in V$, either $P_u(\alpha,\beta,\varphi)=P_v(\alpha,\beta,\varphi)$ or $P_u(\alpha,\beta,\varphi)\cap P_v(\alpha,\beta,\varphi)=\emptyset$. For any $v\in V$, let $\varphi(v) :=\{\varphi(e)\,: e\in E(v)\}$ denote the set of colors present at $v$ and $ \overline{\varphi}(v)$ the set of colors not assigned to any edge incident to $v$, which are called {\it missing} colors at $v$. For any vertex set $X\subseteq V$, let $\varphi (X) = \cup_{x\in X} \varphi(x)$ and $\overline{\varphi}(X) = \cup_{x\in X} \overline{\varphi}(x)$ be the sets of colors present and missing at some vertex of $X$, respectively. For any edge set $F\subseteq E$, let $\varphi (F) = \cup_{e\in F} \varphi(e)$. \subsection{Elementary sets and closed sets} Let $G$ be a graph. An edge $e\in E(G)$ is called {\it critical} if $\chi'(G-e) < \chi'(G)$, and the graph $G$ is called {\it critical} if $\chi'(H) < \chi'(G)$ for any proper subgraph $H\subseteq G$. A graph $G$ is called {\it $k$-critical} if it is critical and $\chi'(G) = k+1$. In the proofs, we will consider a graph $G$ with $\chi'(G) = k+1 \ge \Delta +2$, a critical edge $e\in E(G)$, and a coloring $\varphi\in \mathcal{C}^k(G-e)$. We call them together a {\it $k$-triple} $(G, e, \varphi)$. \begin{DEF} Let $G$ be a graph and $e\in E(G)$ such that $\mathcal{C}^k(G-e)\ne \emptyset$ and let $\varphi \in \mathcal{C}^k(G-e)$. Let $X \subseteq V(G)$ contain the two ends of $e$. \begin{itemize} \item We call $X$ {\it elementary} \emph{(}with respect to $\varphi$\emph{)} if all missing color sets $\overline{\varphi}(x)$ \emph{(}$x\in X$\emph{)} are mutually disjoint. \item We call $X$ {\it closed} \emph{(}with respect to $\varphi$\emph{)} if $\varphi(\partial(X))\cap \overline{\varphi}(X)=\emptyset$, i.e., no missing color of $X$ appears on the edges in $\partial(X)$. 
If additionally, each color in $\varphi(X)$ appears at most once in $\partial(X)$, we call $X$ {\it strongly closed} \emph{(}with respect to $\varphi$\emph{)}. \end{itemize} \end{DEF} Moreover, we call a subgraph $H\subseteq G$ {\it elementary}, {\it closed}, and {\it strongly closed} if $V(H)$ is elementary, closed, and {strongly closed}, respectively. If a vertex set $X\subseteq V(G)$ containing two ends of $e$ is both elementary and strongly closed, then $|X|$ is odd and $k= \frac{2(|E(G[X])|-1)}{|X| -1}$, so $k +1=\left\lceil\frac{2|E(G[X])|}{|X| -1}\right\rceil=\lceil \chi_f' \rceil$. Therefore, if $V(G)$ is elementary then $G$ is elementary, i.e., $\chi'(G) =k+1 =\lceil \chi_f'\rceil$. \subsection{Tashkinov trees} \begin{DEF}\label{Def:Tashkinov-tree} A {\it Tashkinov tree} of a $k$-triple $(G, e, \varphi)$ is a tree $T$, denoted by $T=(e_1, e_2, \cdots, e_p)$, induced by a sequence of edges $e_1=e$, $e_2$, $\dots$, $e_p$ such that for each $i\ge 2$, $e_i$ is a boundary edge of the tree induced by $\{e_1, e_2, \cdots, e_{i-1}\}$ and $\varphi(e_i)\in \overline{\varphi}\left(V\left(\bigcup\limits_{j=1}^{i-1}e_j\right)\right)$. \end{DEF} For each $e_j\in \{e_1,\cdots, e_p\}$, we denote by $Te_j$ the subtree $T[\{e_1,\cdots, e_j\}]$ and denote by $e_jT$ the subgraph induced by $\{e_j,\cdots,e_p\}$. For each edge $e_i$ with $i \ge 2$, the end of $e_i$ in $Te_{i-1}$ is called the {\it in-end} of $e_i$ and the other one is called the {\it out-end} of $e_i$. Algorithmically, a Tashkinov tree is obtained incrementally from $e$ by adding a boundary edge whose color is missing in the previous tree. Vizing-fans (stars) (used in the proof of Vizing's classic theorem~\cite{Vizing64}) and Kierstead-paths (used in ~\cite{Kierstead84}) are special Tashkinov trees. \begin{THM}\label{THM:TashOrigi}$[$Tashkinov $\cite{Tashkinov-2000}$ $]$ For any given $k$-triple $(G, e, \varphi)$ with $k\geq \Delta+1$, all Tashkinov trees are elementary. 
\end{THM} For a graph $G$, a Tashkinov tree is associated with an edge $e\in E(G)$ and a $k$-edge-coloring of $G-e$ with $k \ge \Delta +1$. We distinguish the following three different types of maximality. \begin{DEF}\label{LEM:T-Property} Let $(G,e, \varphi)$ be a $k$-triple with $k \ge \Delta +1$, and $T$ be a Tashkinov tree of $(G,e,\varphi)$. \begin{itemize} \item We call $T$ {\it $(e,\varphi)$-maximal} if there is no Tashkinov tree $T^*$ of $(G, e, \varphi)$ containing $T$ as a proper subtree, and denote by $\mathcal{T}_{e, \varphi}$ the set of all $(e,\varphi)$-maximal Tashkinov trees. \item We call $T$ {\it $e$-maximal} if there is no Tashkinov tree $T^*$ of a $k$-triple $(G, e, \varphi^*)$ containing $T$ as a proper subtree, and denote by $\mathcal{T}_e$ the set of all $e$-maximal Tashkinov trees. \item We call $T$ {\it maximum} if $|T|$ is maximum over all Tashkinov trees of $G$, and denote by $\mathcal{T}$ the set of all maximum Tashkinov trees. \end{itemize} \end{DEF} Let $T$ be a Tashkinov tree of a $k$-triple $(G, e, \varphi)$. Then, $T$ is $(e, \varphi)$-maximal if and only if $V(T)$ is closed. Moreover, the vertex sets are the same for all $T\in \mathcal{T}_{e, \varphi}$. We call colors in $\varphi(E(T))$ {\it used} and colors not in $\varphi(E(T))$ {\it unused} on $T$, call an unused missing color in $\overline{\varphi}(V(T))$ a {\it free color} of $T$ and denote the set of all free colors of $T$ by $\Gamma^f(T)$. For each color $\alpha$, let $E_{\alpha}(\partial(T))$ denote the set of edges with color $\alpha$ in boundary $\partial(T)$. A color $\alpha$ is called a {\it defective color} of $T$ if $|E_{\alpha}(\partial(T))|\ge 2$. The set of all defective colors of $T$ is denoted by $\Gamma^d(T)$. Note that if $T\in \mathcal{T}_{e, \varphi}$, then $V(T)$ is strongly closed if and only if $T$ does not have any defective colors. The following corollary follows immediately from the fact that a maximal Tashkinov tree is elementary and closed. 
\begin{COR}\label{COR-(e,phi)Max} For each $T\in \mathcal{T}_{e, \varphi}$, the following properties hold. \begin{itemize} \item[$(1)$] $|T|$ is odd and $|T| \ge 3$. \item[$(2)$] For any two missing colors $\alpha,\beta\in \overline{\varphi}(V(T))$, we have $P_u(\alpha,\beta,\varphi)=P_v(\alpha,\beta,\varphi)$, where $u$ and $v$ are the unique vertices in $V(T)$ such that $\alpha\in \overline{\varphi}(u)$ and $\beta\in \overline{\varphi}(v)$, respectively. Furthermore, $V(P_u(\alpha,\beta,\varphi))\subseteq V(T)$. \item[$(3)$] For every defective color $\delta\in \Gamma^d (T)$, $|E_{\delta}(\partial(T))|$ is odd and $|E_{\delta}(\partial(T))|\ge 3$. \item[$(4)$] There are at least four free colors. More specifically, $$|\Gamma^f(T)|\ge |T|(k-\Delta)+2-|\varphi(E(T))|\ge |T|+2-(|T|-2) \ge 4.$$ \end{itemize} \end{COR} The following lemma was given in~\cite{StiebSTF-Book}. \begin{LEM}\label{LEM:PassAll} Let $T\in \mathcal{T}_{e}$ be a Tashkinov tree of a $k$-triple $(G, e, \varphi)$ with $k\geq \Delta+1$. For any free color $\gamma\in \Gamma^f(T)$ and any $\delta \notin \overline{\varphi}(V(T))$, the $(\gamma, \delta)$-chain $P_u(\gamma, \delta, \varphi)$ contains all edges in $E_{\delta}(\partial(T))$, where $u$ is the unique vertex of $T$ missing color $\gamma$. \end{LEM} \proof Otherwise, consider the coloring $\varphi_1 = \varphi/P_u(\gamma, \delta, \varphi)$. Since $\delta$ and $\gamma$ are both unused on $T$ with respect to $\varphi$, $T$ is still a Tashkinov tree and $\delta\in \overline{\varphi}_1(u)$ is a missing color of $T$ with respect to $\varphi_1$. But some edge in $E_{\delta}(\partial(T))$ keeps the color $\delta$ under $\varphi_1$, so $T$ can be further extended, which gives a contradiction to $T$ being an $e$-maximal tree. \qed Following the notation in Lemma~\ref{LEM:PassAll}, we consider the case of $\delta$ being a defective color. Then $P :=P_u(\gamma,\delta,\varphi)$ is a path with $u$ as one end. Since $u$ is the unique vertex in $T$ missing $\gamma$ by Theorem~\ref{THM:TashOrigi}, the other end of $P$ is not in $T$.
In the linear order $\preceq_{(u,P)}$, the last vertex $v$ with $v\in V(T)\cap V(P)$ is called an {\it exit vertex} of $T$. Applying Lemma~\ref{LEM:PassAll}, Scheide~\cite{Scheide-2010} obtained the following result. \begin{LEM}\label{LEM:exit-color} Let $T\in \mathcal{T}_e$ be a Tashkinov tree of a $k$-triple $(G, e,\varphi)$ with $k \ge \Delta +1$. If $v$ is an exit vertex of $T$, then every missing color in $\overline{\varphi}(v)$ must be used on $T$. \end{LEM} Let $T\in \mathcal{T}_{e, \varphi}$ be a Tashkinov tree of $(G, e, \varphi)$ and $V(e) =\{x, y\}$. By keeping an odd number of vertices in each step of growing a Tashkinov tree from $e$, Scheide~\cite{Scheide-2010} showed that there is another $T^*\in \mathcal{T}_{e, \varphi}$, named a {\it balanced Tashkinov tree}, such that $V(T^*) = V(T)$, where $T^*$ is constructed incrementally from $e$ by the following steps: \begin{itemize} \item {\bf Adding a path:} Pick two missing colors $\alpha$ and $\beta$ with $\alpha \in \overline{\varphi}(x)$ and $\beta\in \overline{\varphi}(y)$, and let $T^*:= \{e\}\cup (P_x(\alpha, \beta, \varphi)-y)$, where $P_x(\alpha, \beta, \varphi)$ is the $(\alpha, \beta)$-chain containing both $x$ and $y$. \item {\bf Adding edges by pairs:} Repeatedly pick two boundary edges $f_1$ and $f_2$ of $T^*$ with $\varphi(f_1) = \varphi(f_2) \in \overline{\varphi}(V(T^*))$ and redefine $T^* := T^*\cup \{f_1, f_2\}$ until $T^*$ is closed. \end{itemize} The path $P_x(\alpha, \beta, \varphi)$ in the above definition is called the {\it trunk} of $T^*$ and $h(T^*):=|V(P_x(\alpha, \beta, \varphi))|$ is called the {\it height} of $T^*$. \begin{LEM}\label{LEM:balanced-T} {\em [Scheide~\cite{Scheide-2010}]} Let $G$ be a $k$-critical graph with $k \ge \Delta +1$ and $T\in \mathcal{T}$ be a balanced Tashkinov tree of a $k$-triple $(G, e, \varphi)$ with $h(T)$ being maximum. Then, $h(T)$ is odd and $h(T)\ge 3$. Moreover, if $h(T) =3$ then $G$ is elementary.
\end{LEM} \begin{COR}\label{COR:T-order-exit-vertex} Let $G$ be a non-elementary $k$-critical graph with $k\ge \Delta +1$ and $T\in \mathcal{T}$ be a balanced Tashkinov tree of a $k$-triple $(G, e, \varphi)$ with $h(T)$ being maximum. Then $|T|\ge 2(k-\Delta)+1$. \end{COR} \proof Since $G$ is not elementary, $T$ is not strongly closed with respect to $\varphi$. There is an exit vertex $v$ by Lemma~\ref{LEM:PassAll}, so $\overline{\varphi}(v) \subseteq \varphi(E(T))$ by Lemma~\ref{LEM:exit-color}. Since $G$ is not elementary, Lemma~\ref{LEM:balanced-T} gives $h(T) \ge 5$; since $T$ is balanced, each used color is assigned to at least two edges of $E(T)$. Thus, \[ |T| = ||T|| +1 \ge 2 |\overline{\varphi}(v)| +1 \ge 2(k-\Delta) +1. \qed \] Working on balanced Tashkinov trees, Scheide proved the following result. \begin{LEM}\label{LEM:Small-T}{\em [Scheide~\cite{Scheide-2010}]} Let $G$ be a $k$-critical graph with $k \ge \Delta +1$. If $|T| <11$ for all Tashkinov trees $T$, then $G$ is elementary. \end{LEM} \section{An extension of Tashkinov trees} \subsection{Definitions and basic properties} In this section, we always assume that $G$ is a {\it non-elementary} $k$-critical graph with $k\ge \Delta+1$ and $T_0\in \mathcal{T}$ is a maximum Tashkinov tree of $G$. Moreover, we assume that $T_0$ is a Tashkinov tree of the $k$-triple $(G, e, \varphi)$. \begin{DEF}\label{DEF:stable} Let $\varphi_1,\varphi_2\in \mathcal{C}^k(G-e)$ and $H\subseteq G$ such that $e\in E(H)$. We say that $H$ is {\it $(\varphi_1,\varphi_2)$-stable} if $\varphi_1(f) = \varphi_2(f)$ for every $f\in E(G[V(H)])\cup \partial(H)$, that is, $\varphi_1(f)\neq \varphi_2(f)$ implies that $f\in E(G-V(H))$. \end{DEF} Following the definition, if a Tashkinov tree $T_0$ of $(G,e,\varphi_1)$ is $(\varphi_1,\varphi_2)$-stable, then it is also a Tashkinov tree of $(G, e,\varphi_2)$.
Moreover, the sets of missing colors, used colors, and free colors of $T_0$ are the same in both colorings $\varphi_1$ and $\varphi_2$. The following definition of {\it connecting edges} will play a critical role in our extension based on a maximum Tashkinov tree. \begin{DEF}\label{DEF:Conn} Let $H\subseteq G$ be a subgraph such that $T_0\subseteq H$. A color $\delta$ is called a {\it defective color} of $H$ if $H$ is closed, $\delta\not\in \overline{\varphi}(V(H))$ and $|E_{\delta}(\partial(H))| \ge 2$. Moreover, an edge $f\in \partial(H)$ is called a {\it connecting edge} if $\delta :=\varphi(f)$ is a defective color of $H$ and there is a missing color $\gamma\in \overline{\varphi}(V(T_0)) -\varphi(E(H))$ of $T_0$ such that the following two properties hold. \begin{itemize} \item The $(\gamma, \delta)$-chain $P_u(\gamma, \delta, \varphi)$ contains all edges in $E_{\delta}(\partial(H))$, where $u$ is the unique vertex in $V(T_0)$ such that $\gamma \in \overline{\varphi} (u)$; \item Along the linear order $\preceq_{(u, P_u(\gamma, \delta, \varphi))}$, $f$ is the first boundary edge on $P_u(\gamma, \delta, \varphi)$ with color $\delta$. \end{itemize} \end{DEF} In the above definition, we call the successor $f^s$ of $f$ along $\preceq_{(u, P_u(\gamma, \delta, \varphi))}$ the {\it companion} of $f$, $(f,f^s)$ a {\it connecting edge pair} and $(\delta,\gamma)$ a {\it connecting color pair}. Since $P_u(\gamma, \delta, \varphi)$ contains all edges in $E_{\delta}(\partial(H))$ and the chain alternates between the colors $\delta$ and $\gamma$, we have that $f^s$ is not incident to any vertex in $H$ and $\varphi(f^s) = \gamma$.
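Schematically, writing $x$ for the end of $f$ in $H$ and $y$, $z$ for the other ends of $f$ and $f^s$ (these vertex names are used only in this picture), a connecting edge pair sits on the chain as
\[
P_u(\gamma, \delta, \varphi): \quad u \;\stackrel{\cdots}{\longrightarrow}\; x \;\stackrel{f\,(\delta)}{\longrightarrow}\; y \;\stackrel{f^s\,(\gamma)}{\longrightarrow}\; z \;\stackrel{\cdots}{\longrightarrow}\;\cdots
\]
with $x\in V(H)$ and $y, z\notin V(H)$; indeed, since $\gamma$ is missing at a vertex of the closed subgraph $H$, no edge of $\partial(H)$ is colored $\gamma$, so the chain leaves $H$ only through $\delta$-colored boundary edges.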
\begin{DEF}\label{DEF:ETT} We call a tree $T$ an {\bf Extension of a Tashkinov Tree (ETT)} of $(G, e, \varphi)$ based on $T_0$ if $T$ is incrementally obtained from $T:=T_0$ by repeatedly adding edges to $T$ according to the following two operations subject to $\Gamma^f(T_0) - \varphi(E(T)) \ne \emptyset$: \begin{itemize} \item {\bf ET0:} If $T$ is closed, add a connecting edge pair $(f, f^s)$, where $\varphi(f)$ is a defective color and $\varphi(f^s)\in \Gamma^f(T_0) - \varphi(E(T))$, and rename $T:=T\cup \{f,f^{s}\}$. \item {\bf ET1:} Otherwise, add an edge $f\in \partial(T)$ with $\varphi(f) \in \overline{\varphi}(V(T))$ being a missing color of $T$, and rename $T:=T\cup \{f\}$. \end{itemize} \end{DEF} Note that the above extension algorithm ends once $\Gamma^f(T_0)\subseteq \varphi(E(T))$. Let $T$ be an ETT of $(G, e, \varphi)$. Since $T$ is defined incrementally from $T_0$, the edges added to $T$ follow a linear order $\prec_{\ell}$. Along the linear order $\prec_{\ell}$, for any initial subsequence $S$ of $E(T)$, $T_0\cup S$ induces a tree; we call it a {\it premier segment} of $T$ provided that whenever a connecting edge is in $S$, its companion is also in $S$. Let $f_1, \, f_2, \dots, f_{m+1}$ be all connecting edges with $f_1\prec_{\ell} f_2\prec_{\ell} \dots \prec_{\ell} f_{m+1}$. For each $1\le i\le m+1$, let $T_{i-1}$ be the premier segment induced by $T_0$ and the edges before $f_i$ in the ordering $\prec_{\ell}$. Clearly, we have $T_0\subset T_1 \subset T_2 \subset \dots \subset T_m \subset T$. We call $T_i$ a {\it closed segment} of $T$ for each $0\le i\le m$, $T_0\subset T_1 \subset T_2 \subset \dots \subset T_m \subset T$ the {\it ladder of $T$}, and $T$ an {\it ETT with $m$ rungs}. We use $m(T)$ to denote the number of rungs of $T$. For each edge $f\in E(T)$ with $f\ne e$, following the linear order $\prec_{\ell}$, the end of $f$ is called the {\it in-end} if it is in $T$ before $f$ and the other one is called the {\it out-end} of $f$.
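In summary (this is only a restatement of the definition), each closed segment $T_i$ is obtained from $T_{i-1}$ by one application of {\bf ET0} followed by a (possibly empty) run of {\bf ET1} additions until the tree is closed again:
\[
T_0 \;\stackrel{(f_1,\, f_1^{s})}{\longrightarrow}\; T_1 \;\stackrel{(f_2,\, f_2^{s})}{\longrightarrow}\; \cdots \;\stackrel{(f_m,\, f_m^{s})}{\longrightarrow}\; T_m \;\stackrel{(f_{m+1},\, f_{m+1}^{s})}{\longrightarrow}\; T,
\]
where each arrow adds the indicated connecting edge pair via {\bf ET0} and then the subsequent {\bf ET1} edges; the run after the last pair $(f_{m+1}, f_{m+1}^{s})$ need not end with a closed tree.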
For any edge $f\in E(T)$, the subtree induced by $T_0$, $f$ and all its predecessors is called an $f$-{\it segment} and denoted by $Tf$. Let $\mathbb{T}$ denote the set of all ETTs based on $T_0$. We now define a binary relation $\prec_t$ on $\mathbb{T}$: for two $T, T^*\in \mathbb{T}$, we write $T\prec_t T^*$ if either $T=T^*$ or there exists $s$ with $1\le s\le \min\{m+1,m^*+1\}$ such that $T_h = T^*_h$ for every $0 \le h< s$ and $T_s \subsetneq T^*_s$, where $T_0 \subset T_1 \subset \dots \subset T_s \subset \dots \subset T_{m} \subset T_{m+1}(=T)$ and $T_0^*(=T_0) \subset T^*_1\subset \dots \subset T^*_s \subset \dots \subset T^*_{m^*+1} (=T^*)$ are the ladders of $T$ and $T^*$, respectively. Notice that in this definition, we only consider the relations of $T_h$ and $T^*_h$ for $h\leq s$. Clearly, for any three ETTs $T$, $T'$ and $T^*$, $T\prec_t T'$ and $T' \prec_t T^*$ give $T\prec_t T^*$. So, $\mathbb{T}$ together with $\prec_t$ forms a poset, which is denoted by $(\mathbb{T}, \prec_t)$. \begin{LEM}\label{LEM:Max} In the poset $(\mathbb{T}, \prec_t)$, if $T$ is a maximal tree over all ETTs with at most $|T|$ vertices, then any premier segment $T'$ of $T$ is also a maximal tree over all ETTs with at most $|T'|$ vertices. \end{LEM} \proof Suppose on the contrary: {\it there is a premier segment $T'$ of $T$ and an ETT $T^*$ with $|T^*| \le |T'|$ and $T'\prec_t T^*$}. We assume that $T' \ne T^*$. Let $T_0\subset T_1\subset \dots \subset T_{m'}\subset T'$ and $T_0 \subset T_1^* \subset \dots \subset T_{m^*}^*\subset T^*$ be the ladders of $T'$ and $T^*$, respectively. Since $T'\prec_t T^*$, there exists $s$ with $1\le s\le \min\{m'+1, m^*+1\}$ such that $T_j = T^*_j$ for each $0\le j\le s-1$ and $T_s \subsetneq T^*_s$, where $T'_{m'+1}=T'$ and $T_{m^*+1}^* = T^*$. Since $|T^*| \le |T'|$, we have $s <m'+1$; otherwise $T' = T_{s} \subsetneq T^*_{s} \subseteq T^*$ would give $|T'| < |T^*|$. Since $T'$ is a premier segment of $T$, $T_0\subset T_1\subset \dots \subset T_{m'}$ is a part of the ladder of $T$.
So, we have $T\prec_t T^*$, giving a contradiction to the maximality of $T$. \qed \begin{LEM}\label{LEM:FixConn} Let $T$ be a maximal ETT in $(\mathbb{T}, \prec_t)$ over all ETTs with at most $|T|$ vertices, and let $T_0 \subset T_1 \subset \cdots \subset T_m \subset T$ be the ladder of $T$. Suppose $T$ is an ETT of $(G,e,\varphi_1)$. Then for every $\varphi_2 \in \mathcal{C}^k(G-e)$ such that $T_m$ is $(\varphi_1,\varphi_2)$-stable, $T_m$ is an ETT of $(G, e, \varphi_2)$. Furthermore, if $T_m$ is elementary, then for every $\gamma \in \Gamma^f(T_0) - \varphi_1(E(T_m))$ and $\delta \not \in \overline{\varphi}_1(V(T_m))$, $P_u(\gamma, \delta,\varphi_2) \supseteq \partial_{\delta}(T_m)$ where $u \in V(T_0)$ such that $\gamma \in \overline{\varphi}_1(u)$. \end{LEM} \proof Suppose on the contrary: let $T$ be a counterexample to Lemma~\ref{LEM:FixConn} with minimum number of vertices. Let $T_0 \subset \cdots \subset T_m \subset T$ be the ladder of $T$ and let $\varphi_1,\varphi_2 \in \mathcal{C}^k(G-e)$ be two edge colorings such that $T$ is an ETT of $(G,e,\varphi_1)$, $T_m$ is $(\varphi_1,\varphi_2)$-stable and either \begin{itemize} \item[(1)] $T_m$ is not an ETT of $(G,e,\varphi_2)$ or \item[(2)] $T_m$ is elementary and there exist $\gamma \in \Gamma^f(T_0) - \varphi_1(E(T_m))$ and $\delta \not \in \overline{\varphi}_1(V(T_m))$ such that $P_u(\gamma, \delta,\varphi_2) \not\supseteq \partial_{\delta}(T_m)$ where $u \in V(T_0)$ such that $\gamma \in \overline{\varphi}_1(u)$. \end{itemize} By the minimality of $T$, we observe that $|T|=|T_m|+2$. Furthermore, since $T_0 \in \mathcal{T}$ is a maximum Tashkinov tree of $G$, it follows that $m\ge 1$ by Lemma~\ref{LEM:PassAll}. First, we show that (1) does not hold, in other words, $T_m$ is an ETT of $(G,e,\varphi_2)$. 
Since colors for edges incident to vertices in $T_m$ are the same in both $\varphi_1$ and $\varphi_2$, we only need to show that each connecting edge pair in coloring $\varphi_1$ is still a connecting edge pair in coloring $\varphi_2$. For $0\le j \le m-1$, let $(f_j,f_j^s)$ be the connecting edge pair of $T_j$ and let $(\delta_j,\gamma_j)$ be the corresponding connecting color pair with respect to $\varphi_1$. Since $T_{j+1}$ is $(\varphi_1,\varphi_2)$-stable and an ETT of $(G,e,\varphi_1)$ and $T_{j+1} \subsetneq T$, by the minimality of $T$, it follows that $P_{u_j}(\gamma_j,\delta_j,\varphi_2)$ contains $\partial_{\delta_j}(T_j)$, where $u_j$ is the unique vertex in $V(T_0)$ with $\gamma_j \in \overline{\varphi}_1(u_j)$. Moreover, since $T_{j+1}$ is $(\varphi_1,\varphi_2)$-stable, it follows that $f_j$ is the first boundary edge on $P_{u_j}(\gamma_j,\delta_j,\varphi_2)$ with color $\delta_j$, with $f_j^s$ its companion. So $(f_j,f_j^s)$ is still a connecting edge pair in $\varphi_2$. We point out that the chains $P_{u_j}(\gamma_j,\delta_j,\varphi_1)$ and $P_{u_j}(\gamma_j, \delta_j, \varphi_2)$ may be different. Thus (2) holds, i.e., there exist $\gamma \in \Gamma^f(T_0) - \varphi_1(E(T_m))$ and $\delta \not \in\overline{\varphi}_1(V(T_m))$ such that $P_u(\gamma, \delta,\varphi_2) \not\supseteq \partial_{\delta}(T_m)$. Let $P=P_u(\gamma, \delta,\varphi_2)$. Since $T_m$ is both elementary and closed and $u$ is one of the two ends of $P$, the other end of $P$ must be in $V \setminus V (T_m)$. So, $E(P)\cap E_{\delta}(\partial(T_m)) \neq \emptyset$. Since $P$ does not contain all edges in $E_{\delta}(\partial(T_m))$, there is another $(\gamma, \delta)$-chain $Q$ with $E(Q) \cap E_{\delta}(\partial(T_m))\neq \emptyset$. Let $\varphi_3 := \varphi_2/Q$ be a coloring of $G- e$ obtained from $\varphi_2$ by interchanging colors assigned on $E(Q)$. Let $(f,f^s)$ be the connecting edge pair of $T_{m-1}$, and $T'=T_{m-1} \cup \{f,f^s\}$. We claim that $E(T') \cap E(Q) =\emptyset$.
By the minimality of $T$, $P$ contains every edge of $E_{\delta}(\partial(T_{m-1}))$, and so $E(T_{m-1}) \cap E(Q)=\emptyset$. If $\varphi_2(f) \neq \delta$, then $f \not \in E(Q)$ since also $\varphi_2(f)\neq \gamma$ (as $\gamma \notin \varphi_2(E(T_m))$); and if $\varphi_2(f)=\delta$, then $f \in E(P)$, so $f\not \in E(Q)$. Thus $f \not \in E(Q)$. Lastly, $\varphi_2(f^s) \neq \delta$ since $\varphi_2(f^s)\in \overline{\varphi}_2(V(T_m))$ while $\delta \notin \overline{\varphi}_2(V(T_m))$, and $\varphi_2(f^s) \neq \gamma$ since $\gamma \not \in \varphi_2(E(T_m))$, so $f^s \not \in E(Q)$. Observe that $T'$ is an ETT of $(G,e,\varphi_1)$ with ladder $T_0 \subset \cdots \subset T_{m-1}$ and is $(\varphi_1,\varphi_3)$-stable. Moreover $|T'| \le |T_m| <|T|$. Therefore, by the minimality of $T$, $T_{m-1}$ is an ETT of $(G,e,\varphi_3)$, and because we do not use any edge in $Q$ when we extend $T_{m-1}$ to $T_m$, $T_m$ is also an ETT of $(G,e,\varphi_3)$ which is not closed. However, this contradicts the maximality of $T$. \qed In Lemma \ref{LEM:FixConn}, by taking $\varphi_1=\varphi_2$, we easily obtain the following lemma. \begin{LEM} \label{LEM:EXconn} Let $T$ be a maximal ETT in $(\mathbb{T}, \prec_t)$ over all ETTs with at most $|T|$ vertices, and let $T_0\subset T_1\subset \dots \subset T_m\subset T$ be the ladder of $T$. Suppose $T$ is an ETT of $(G,e,\varphi)$. If $T_m$ is elementary and $\Gamma^f(T_0) -\varphi(E(T))\ne \emptyset$, then for any $\gamma\in \Gamma^f(T_0) -\varphi(E(T))$ and $\delta\notin \overline{\varphi}(V(T_m))$, $P_u(\gamma, \delta, \varphi) \supset E_{\delta}(\partial (T_i))$ for every $i$ with $0\le i\le m$, where $u\in V(T_0)$ such that $\gamma\in \overline{\varphi}(u)$. \end{LEM} \begin{LEM}\label{LEM:ETT>F} For every ETT $T$ of $(G,e,\varphi)$ based on $T_0$, if $T$ is elementary such that $|\Gamma^f(T_0)| >m(T)$ and $ |E(T) - E(T_0)|-m(T) < |\overline{\varphi}(V(T_0))|$, then there exists an ETT $T^*$ containing $T$ as a premier segment. \end{LEM} \proof Let $T$ be an ETT of $(G,e,\varphi)$ and $m = m(T)$.
Since $\varphi(f_i) \notin \overline{\varphi}(V(T_0))$ for each connecting edge $f_i$, where $i\in \{1,2,\cdots,m\}$, we have $|\varphi(E(T)-E(T_0))\cap \overline{\varphi}(V(T_0))| \le |E(T) - E(T_0)| -m < |\overline{\varphi}(V(T_0))|$. So, $\overline{\varphi}(V(T_0)) - \varphi(E(T)-E(T_0)) \ne \emptyset$. Let $\gamma\in\overline{\varphi}(V(T_0)) - \varphi(E(T)-E(T_0))$. We may assume $\gamma \notin \varphi(E(T_0))$, i.e., $\gamma \in \Gamma^f(T_0)$; otherwise we argue as follows. Since $m < |\Gamma^f(T_0)|$, there exists a color $\beta\in \Gamma^f(T_0) -\{\gamma_1, \gamma_2, \dots, \gamma_m\}$, where $\gamma_1, \gamma_2, \dots, \gamma_m$ are the connecting colors of $T$. Since $T_0$ is closed, a $(\beta, \gamma)$-chain is either in $G[V(T_0)]$ or vertex disjoint from $T_0$. Let $\varphi_1$ be obtained from $\varphi$ by interchanging $\beta$ and $\gamma$ for edges in $E_{\beta}(G-V(T_0))\cup E_{\gamma}(G-V(T_0))$. Clearly, $T_0$ is $(\varphi, \varphi_1)$-stable. So, $T$ is also an ETT of $(G, e, \varphi_1)$. Since $\gamma\notin \varphi(E(T)-E(T_0))$, we have $\beta\notin \varphi_1(E(T))$, so we may replace $\gamma$ and $\varphi$ by $\beta$ and $\varphi_1$. We can apply {\bf ET0} and {\bf ET1} to extend $T$ to a larger tree $T^*$ unless $T$ is closed and does not have a connecting edge. In this case, $T$ is both elementary and closed. Since $G$ itself is not elementary, $T$ is not strongly closed. Thus, $T$ has a defective color $\delta$.
Since $T$ does not have a connecting edge, $P_v(\gamma, \delta, \varphi)$ does not contain all edges of $E_{\delta}(\partial(T))$, where $v\in V(T_0)$ is the unique vertex with $\gamma\in \overline{\varphi}(v)$. Let $Q$ be another $(\gamma, \delta)$-chain containing some edges in $E_{\delta}(\partial(T))$ and let $\varphi_2= \varphi/Q$. By Lemma~\ref{LEM:EXconn}, $Q$ is disjoint from $T_m$, where $T_m$ is the largest closed segment of $T$. So, $T_m$ is $(\varphi, \varphi_2)$-stable. By Lemma~\ref{LEM:FixConn}, $T_m$ is an ETT of $(G, e, \varphi_2)$, which in turn gives that $T$ is also an ETT of $(G, e, \varphi_2)$. Since some edge of $\partial(T)$ is colored $\gamma\in \overline{\varphi}_2(V(T))$ under $\varphi_2$, applying {\bf ET1} we extend $T$ to a larger ETT $T^*$, which contains $T$ as a premier segment. \qed \subsection{The major result} The following result is fundamental for both Theorems~\ref{THM:cubic} and \ref{THM:Jm19}. \begin{THM}\label{LEM:Elem} Let $G$ be a $k$-critical graph with $k\ge \Delta +1$ and $T$ be a maximal ETT over all ETTs with at most $|T|$ vertices in the poset $(\mathbb{T}, \prec_t)$. Suppose $T$ is an ETT of $(G,e,\varphi)$. If $|E(T) - E(T_0)| -m(T) < |\overline{\varphi}(V(T_0))|-1$ and $m(T) < |\Gamma^f(T_0)|-1$, then $T$ is elementary. \end{THM} \proof Suppose on the contrary: {\it let $T$ be a counterexample to Theorem~\ref{LEM:Elem} with the minimum number of vertices.} We assume that $(G, e, \varphi)$ is the triple in which $T$ is an ETT. By Theorem~\ref{THM:TashOrigi}, we have $T\supsetneq T_0$. For any premier segment $T'$ of $T$, by Lemma~\ref{LEM:Max}, $T'$ is maximal over all ETTs with at most $|T'|$ vertices. Additionally, following the definition, we can verify that $|E(T') -E(T_0)| -m(T') \le |E(T) - E(T_0)| -m(T)$ and $m(T')\leq m(T)$. So, every premier segment of $T$ satisfies the conditions of Theorem~\ref{LEM:Elem}. Hence, by the minimality of $|T|$, Theorem~\ref{LEM:Elem} holds for all premier segments of $T$ that are proper subtrees of $T$. Let $T_0\subset T_1\subset \dots \subset T_m\subset T$ be the ladder of $T$.
Since $T$ is not elementary, there exist two distinct vertices $v_1, v_2\in V(T)$ and a color $\alpha \in \overline{\varphi}(v_1)\cap \overline{\varphi}(v_2)$. For each connecting edge $f_i$ with $1\le i \le m$, let $(\delta_i, \gamma_{\delta_i})$ denote the corresponding color pair, where $\varphi(f_i) = \delta_i$. According to the definition of ETT, $\gamma_{\delta_1}, \gamma_{\delta_2}, \dots, \gamma_{\delta_m}$ are pairwise distinct while $\delta_1$, $\delta_2$, $\dots$, $\delta_m$ may not be. Let $L=\{\gamma_{\delta_1}, \gamma_{\delta_2}, \dots, \gamma_{\delta_m}\}$. In the paper \cite{CYZ-2011} by Chen et al., the condition $\overline{\varphi}(v)\not\subseteq L$ is needed for every $v\in V(T)-V(T_0)$. In the following proof, we remove this constraint. We make the following assumption. {\flushleft \bf Assumption 1:} We assume that over all colorings in $\mathcal{C}^k (G-e)$ such that $T$ is a minimum counterexample, the coloring $\varphi\in \mathcal{C}^k(G-e)$ is one such that $|\overline{\varphi}(V(T_0)) -(\varphi(E(T)-E(T_0))\cup\{\alpha\})|$ is minimum. The following claim states that we can use other missing colors of $T_0$ before using free colors of $T_0$ except those in $L$. \begin{CLA}\label{CLA:usedfirst} We may assume that if $\varphi(E(T)-E(T_0))\cap (\Gamma^f(T_0) -(L\cup\{\alpha\})) \ne \emptyset$, then $\varphi(E(T)-E(T_0)) \supset \overline{\varphi}(V(T_0)) -\Gamma^f(T_0)$. \end{CLA} \proof Assume that there is a color $\gamma \in \varphi(E(T)-E(T_0))\cap (\Gamma^f(T_0) -(L\cup\{\alpha\}))$ and there is a color $\beta\in (\overline{\varphi}(V(T_0)) -\Gamma^f(T_0)) - \varphi(E(T)-E(T_0))$. Since $T_0$ is closed, a $(\beta, \gamma)$-chain is either in $G[V(T_0)]$ or disjoint from $V(T_0)$. Let $\varphi_1$ be obtained from $\varphi$ by interchanging colors $\beta$ and $\gamma$ on all $(\beta, \gamma)$-chains disjoint from $V(T_0)$. It is readily seen that $T_0$ is $(\varphi, \varphi_1)$-stable.
Since both $\gamma$ and $\beta$ are in $\overline{\varphi}(V(T_0))-L$, $T$ is also an ETT of $(G, e, \varphi_1)$. In coloring $\varphi_1$, we still have $\gamma\in\Gamma^{f}(T_0)-(L\cup \{\alpha\})$ and $\beta\in \overline{\varphi}_1(V(T_0))-\Gamma^{f}(T_0)$. However, $\gamma$ is not used on $T-T_0$ while $\beta$ is used. Additionally, Assumption 1 holds since $|\overline{\varphi}(V(T_0)) -(\varphi(E(T)-E(T_0))\cup\{\alpha\})| = |\overline{\varphi}_1(V(T_0)) -(\varphi_1(E(T)-E(T_0))\cup\{\alpha\})|$. By repeatedly applying this argument, we show that Claim~\ref{CLA:usedfirst} holds. \qed Since $m(T) < |\Gamma^f(T_0)|-1$, we have $\Gamma^f(T_0) - (L\cup\{\alpha\}) \ne \emptyset$. Since $|E(T)-E(T_0)| -m(T) < |\overline{\varphi}(V(T_0))|-1$, we have $\overline{\varphi}(V(T_0)) -(\varphi(E(T)-E(T_0))\cup\{\alpha\}) \ne \emptyset$. By Claim~\ref{CLA:usedfirst}, we have the following claim. \begin{CLA}\label{CLA:Non0} We may assume that $\Gamma^f(T_0) - (\varphi(E(T))\cup \{\alpha\}) \ne \emptyset$. \end{CLA} We complete the proof by considering two cases, according to the type of the last operation used in extending $T_0$ to $T$. {\flushleft \bf Case 1:} The last operation is {\bf ET0}, i.e., the two edges in the connecting edge pair $(f, f^s)$ are the last two edges in $T$ following the linear order $\prec_{\ell}$. Let $x$ be the in-end of $f$, $y$ be the out-end of $f$ (in-end of $f^s$), and $z$ be the out-end of $f^s$. In this case, we have $V(T)=V(T_m)\cup\{y, z\}$. Let $\delta = \varphi(f)$ be the defective color and $\gamma_{\delta}\in \Gamma^f(T_0) -\varphi(E(T_m))$ such that $f$ is the first edge in $\partial(T_m)$ along $P:=P_u(\gamma_{\delta}, \delta, \varphi)$ with color $\delta$, where $u\in V(T_0)$ such that $\gamma_{\delta}\in \overline{\varphi}(u)$. Recall that $v_1$ and $v_2$ are the two vertices in $T$ such that $\alpha \in \overline{\varphi}(v_1)\cap\overline{\varphi}(v_2)$. Since $T_m$ is elementary, we have $\{v_1, v_2\}\cap \{y, z\} \ne \emptyset$.
We consider the following three subcases, each leading to a contradiction. {\bf \noindent Subcase 1.1: $\{v_1, v_2\} = \{y, z\}$}. Assume, without loss of generality, $y=v_1$ and $z=v_2$. Since $f^s$ is the successor of $f$ along the linear order $\preceq_{(u,P)}$, $\varphi(f^s) = \gamma_{\delta}$. So, $f^s$ is an $(\alpha, \gamma_{\delta})$-chain. Let $\varphi_1 = \varphi/f^s$, a coloring obtained from $\varphi$ by changing the color on $f^s$ from $\gamma_{\delta}$ to $\alpha$. Then $T_m$ is $(\varphi,\varphi_1)$-stable. By Lemma \ref{LEM:FixConn}, $T_m$ is an ETT of $(G,e,\varphi_1)$ and $\gamma_{\delta}$ is missing at $y$ in $\varphi_1$, which in turn gives that $P_u(\gamma_{\delta},\delta,\varphi_1)=uPy$ contains only one edge $f\in E_{\delta}(\partial(T_m))$, giving a contradiction to Lemma \ref{LEM:EXconn}. {\bf \noindent Subcase 1.2: $\alpha\in (\overline{\varphi}(y)-\overline{\varphi}(z))\cap \overline{\varphi}(V(T_{m}))$.} Since $\delta, \gamma_{\delta} \in \varphi(y)$ and $\alpha\in \overline{\varphi}(y)$, $\alpha\notin\{\delta, \gamma_{\delta}\}$. We may assume that $\alpha\in \Gamma^f(T_0)-\varphi(E(T))$. Otherwise, let $\beta\in\Gamma^f(T_0)-\varphi(E(T))$ and consider the $(\alpha,\beta)$-chain $P_1:=P_y(\alpha,\beta,\varphi)$. Since $\alpha,\beta\in \overline{\varphi}(V(T_{m}))$ and $V(T_{m})$ is closed with respect to $\varphi$ by the assumption, we have $V(P_1)\cap V(T_{m})=\emptyset$. Let $\varphi_1=\varphi/P_1$. Since $\{\alpha,\beta\}\cap\{\delta, \gamma_{\delta}\}=\emptyset$, we have $f^s\notin E(P_1)$. Hence $T_{m}$ is $(\varphi, \varphi_1)$-stable, which gives that $T_m$ is an ETT of $(G, e,\varphi_1)$, so is $T$. The assumption is then justified since $\beta\in\overline{\varphi}_1(y) \cap (\Gamma^f(T_0)-\varphi_1(E(T)))$, so we may replace $\alpha$ by $\beta$. Consider the $(\alpha,\gamma_{\delta})$-chain $P_2:=P_y(\alpha, \gamma_{\delta},\varphi)$. Since $\alpha, \gamma_{\delta}\in \overline{\varphi}(V(T_0))$ and $T_m$ is closed, $V(P_2)\cap V(T_{m})=\emptyset$. Let $\varphi_2=\varphi/P_2$.
Clearly, $T_m$ is $(\varphi, \varphi_2)$-stable. By Lemma~\ref{LEM:FixConn}, $T_m$ is an ETT of $(G, e, \varphi_2)$, so is $T$. Then $P_{u}(\gamma_{\delta},\delta,\varphi_2)$ is the subpath of $P_u(\gamma_{\delta}, \delta, \varphi)$ from $u$ to $y$. So, it does not contain all edges in $E_{\delta}(\partial(T_{m}))$, which gives a contradiction to Lemma~\ref{LEM:EXconn}. {\bf \noindent Subcase 1.3: $\alpha\in (\overline{\varphi}(z)-\overline{\varphi}(y))\cap \overline{\varphi}(V(T_{m}))$.} Since $P_u(\gamma_{\delta}, \delta, \varphi)$ contains all the edges in $E_{\delta}(\partial(T_{m}))$ and $\alpha\in \overline{\varphi}(z)$, we have $\alpha\notin\{\delta, \gamma_{\delta}\}$. Following an argument similar to that given in Subcase 1.2, we may assume that $\alpha\in\Gamma^{f}(T_0)-\varphi(E(T))$. Let $v$ be the unique vertex in $V(T_0)$ with $\alpha\in \overline{\varphi}(v)$. Let $\beta\in \overline{\varphi}(y)$, $P_v:=P_{v}(\alpha,\beta,\varphi)$, $P_y:=P_{y}(\alpha,\beta,\varphi)$ and $P_z:=P_{z}(\alpha,\beta,\varphi)$. We claim that $P_v=P_y$. Suppose, on the contrary, that $P_v\neq P_y$. By Lemma \ref{LEM:EXconn}, $E(P_v)\supset E_{\beta}(\partial(T_{m}))$. Therefore, $V(P_y)\cap V(T_{m})=\emptyset$. Let $\varphi_1=\varphi/P_y$. In $(G, e, \varphi_1)$, $T$ is an ETT and $\alpha\in \overline{\varphi}_1(y)\cap\overline{\varphi}_1(V(T_{0}))$. This leads back to either Subcase 1.1 or Subcase 1.2. Hence, $P_v=P_y$, and it is vertex-disjoint from $P_z$. Let $\varphi_2=\varphi/P_z$. By Lemma \ref{LEM:EXconn}, $E(P_v)\supset E_{\beta}(\partial(T_m))$. So, $V(P_z)\cap V(T_m)=\emptyset$, which in turn gives that $T$ is an ETT of $(G, e, \varphi_2)$ and $\beta\in \overline{\varphi}_2(y)\cap\overline{\varphi}_2(z)$. This leads back to Subcase 1.1. {\flushleft \bf Case 2:} The last edge $f$ is added to $T$ by {\bf ET1}. Let $y$ and $z$ be the in-end and out-end of $f$, respectively, and let $T' =T-z$. Clearly, $T'$ is a premier segment of $T$ and $T_m \subsetneq T'$.
In this case, since $T'$ is elementary, we may assume that $z=v_2$, i.e., $\alpha\in \overline{\varphi}(z)\cap \overline{\varphi}(v_1)$ and $v_1\in V(T')$. Recall that $v_1$ and $v_2$ are the two vertices in $T$ such that $\alpha\in\overline{\varphi}(v_1)\cap\overline{\varphi}(v_2)$. \begin{CLA}\label{CLA:OneComp} Let $\gamma\in \Gamma^f(T_0)$ and $\beta\in \overline{\varphi}(V(T'))$ be two colors, let $u\in V(T_0)$ be the vertex with $\gamma\in \overline{\varphi}(u)$ and $v\in V(T')$ the vertex with $\beta\in \overline{\varphi}(v)$, and denote by $e_v\in E(T)$ the edge containing $v$ as the out-end. If $e_v\prec_{\ell} e^*$ for every $e^*\in E(T)$ with $\varphi(e^*)=\gamma$, then $u$ and $v$ are on the same $(\beta, \gamma)$-chain. \end{CLA} \proof Since $T_m$ is both elementary and closed, $u$ and $v$ are on the same $(\beta, \gamma)$-chain if $v\in V(T_m)$. Suppose $v\in V(T) -V(T_m)$ and, on the contrary, $P_u:=P_u(\gamma, \beta, \varphi)$ and $P_v:=P_v(\gamma, \beta, \varphi)$ are vertex disjoint. By Lemma~\ref{LEM:EXconn}, $E(P_u)\supset E_{\beta}(\partial(T_m))$, so $V(P_v)\cap V(T_m) = \emptyset$. Let $\varphi_1 = \varphi/P_v$ be the coloring obtained by interchanging the colors $\beta$ and $\gamma$ on $P_v(\gamma, \beta, \varphi)$. Clearly, $T_m$ is $(\varphi, \varphi_1)$-stable. By Lemma~\ref{LEM:FixConn}, $T_m$ is an ETT of $(G, e, \varphi_1)$. As $e_v\prec_{\ell} e^*$ for every $e^*\in E(T)$ with $\varphi(e^*)=\gamma$, we can extend $T_m$ to $Te_{v}$ such that $Te_{v}$ is still an ETT of $(G,e,\varphi_1)$. But, in the coloring $\varphi_1$, $\gamma\in \overline{\varphi}_1(u)\cap \overline{\varphi}_1(v)$, which gives a contradiction to the minimality of $|T|$. \qed \begin{CLA}\label{CLA:a-free} We may assume $\alpha\in \Gamma^f(T_0)-\varphi(E(T_{m}))$. \end{CLA} \proof Otherwise, by Claim~\ref{CLA:Non0}, let $\gamma\in \Gamma^f(T_0) -(\varphi(E(T))\cup\{\alpha\})$.
Let $\varphi_1$ be obtained from $\varphi$ by interchanging colors $\alpha$ and $\gamma$ for edges in $E_{\alpha}(G-V(T_m))\cup E_{\gamma}(G-V(T_m))$. Since $T_m$ is closed, $\varphi_1$ exists. Clearly, $T_m$ is $(\varphi,\varphi_1)$-stable. By Lemma~\ref{LEM:FixConn}, $T_m$ is an ETT of $(G, e, \varphi_1)$, so is $T$. In the coloring $\varphi_1$, $\gamma\in \overline{\varphi}_1(z)$ but is not used on $T_m$. \qed

Applying Claim~\ref{CLA:Non0} again if necessary, we may assume that both Claim~\ref{CLA:Non0} and Claim~\ref{CLA:a-free} hold. Recall that $z$ is the out-end of $f$ and $y$ is the in-end of $f$, and $\alpha \in \overline{\varphi}(v_1)\cap \overline{\varphi}(z)$.

{\bf \noindent Subcase 2.1:} $y\in V(T') - V(T_m)$, i.e., $f\notin \partial(T_m)$.

\begin{CLA}\label{CLA:a-used}
Color $\alpha$ is used in $E(T -T_m)$, i.e., $\alpha\in \varphi(E(T-T_m))$.
\end{CLA}

\proof Suppose on the contrary that $\alpha \notin \varphi(E(T-T_m))$. By Claim~\ref{CLA:a-free}, we may assume that $\alpha \notin \varphi(E(T_m))$, so $\alpha \notin \varphi(E(T))$. Let $\varphi(f)=\theta$ and $\beta \in \overline{\varphi}(y)$ be a missing color of $y$. We consider the following two cases according to whether $y$ is the last vertex of $T'=T-z$.

We first assume that $y$ is the last vertex of $T'$. Let $P_{v_1}:=P_{v_1}(\alpha,\beta,\varphi)$, $P_y:=P_y(\alpha,\beta,\varphi)$ and $P_z:=P_z(\alpha,\beta,\varphi)$ be $(\alpha, \beta)$-chains containing vertices $v_1$, $y$ and $z$, respectively. By Claim~\ref{CLA:OneComp}, we have $P_{v_1}=P_y$, so it is disjoint from $P_z$. By Lemma~\ref{LEM:EXconn}, $E(P_{v_1})\supset E_{\beta}(\partial(T_m))$, so $V(P_z)\cap V(T_m) = \emptyset$. Let $\varphi_1 = \varphi/P_z$ be the coloring obtained from $\varphi$ by interchanging colors $\alpha$ and $\beta$ on $P_z$. Since $\alpha\notin \varphi(E(T-T_m))$ and $\beta\in \overline{\varphi}(y)-\overline{\varphi}(V(T'))$, $\beta\not\in \varphi_1(E(T-T_m))$. Clearly, $T_m$ is $(\varphi, \varphi_1)$-stable.
By Lemma~\ref{LEM:FixConn}, $T_m$ is an ETT of $(G, e, \varphi_1)$, so is $T$. In the coloring $\varphi_1$, $\theta = \varphi_1(f)$ and $f$ itself is a $(\beta, \theta)$-chain. Let $\varphi_2 = \varphi_1/f$ be the coloring obtained from $\varphi_1$ by changing color $\theta$ to $\beta$ on $f$. Since $f$ is disjoint from $T_m$, we can verify that $T$ is an ETT of $(G, e, \varphi_2)$ by applying Lemma~\ref{LEM:FixConn}. Since $f$ is not a connecting edge, $\theta\in \overline{\varphi}(V(T'))$, which in turn shows that $T'$ is not elementary with respect to $\varphi_2$, giving a contradiction to the minimality of $|T|$.

We now assume that $y$ is not the last vertex of $T'$, and let $x$ be the last one. Recall $\theta = \varphi(f)$. If $\theta\in\varphi(x)$, then $T-x$ is not an elementary ETT of $(G,e,\varphi)$, which contradicts the minimality of $|T|$. Hence we assume $\theta\in\overline{\varphi}(x)$. Clearly, $\alpha \in \varphi(x)$. Let $P_{v_1}:=P_{v_1}(\alpha,\theta,\varphi)$, $P_{x}:=P_{x}(\alpha,\theta,\varphi)$ and $P_z:=P_z(\alpha,\theta,\varphi)$ be $(\alpha, \theta)$-chains containing vertices $v_1$, $x$ and $z$, respectively. By Claim \ref{CLA:OneComp}, we have $P_{v_1}=P_{x}$, which is disjoint from $P_z$. Furthermore, Lemma \ref{LEM:EXconn} implies that $E(P_{v_1})\supset E_\theta(\partial(T_m))$; together with the assumption that $\alpha\in\Gamma^f(T_0)$, this gives $V(P_z)\cap V(T_m)=\emptyset$. Let $\varphi_1=\varphi /P_z$ be the coloring obtained from $\varphi$ by interchanging colors $\alpha$ and $\theta$ along $P_z$. Observe that, since $\theta\in\overline{\varphi}(x)$, the color $\theta$ is used only on $f$ among the edges in $E(T-(T_m\cup \partial(T_m)))$; moreover, $f$ is colored by $\alpha$ in $\varphi_1$. Clearly, $T_m$ is $(\varphi,\varphi_1)$-stable. By Lemma \ref{LEM:FixConn}, $T_m$ is an ETT of $(G,e,\varphi_1)$, so is $T$. By Claim \ref{CLA:Non0}, let $\gamma \in \Gamma^f(T_0)-(\varphi_1(E(T))\cup \{\theta\})$. Say $\gamma \in \overline\varphi(v_2)$ for $v_2 \in V(T_0)$.
By Claim \ref{CLA:OneComp}, the $(\gamma, \theta)$-chain $P^{'}_{v_2}:=P_{v_2}(\gamma,\theta,\varphi_1)$ is the same as $P^{'}_{x}:=P_{x}(\gamma,\theta,\varphi_1)$, hence it is disjoint from $P^{'}_{z}:=P_{z}(\gamma,\theta,\varphi_1)$. Now we consider $T_{zx}$ obtained from $T$ by switching the order of adding vertices $x$ and $z$. Clearly, $T_{zx}$ is an ETT of $(G,e,\varphi_1)$ since $f$ is colored by $\alpha$ in $\varphi_1$. Similarly, by Claim \ref{CLA:OneComp}, the $(\gamma, \theta)$-chain $P^{'}_{v_2}$ is the same as $P^{'}_{z}$. Since $P^{'}_{v_2}$ cannot be both equal to and disjoint from $P^{'}_{z}$, we reach a contradiction. \qed

We now prove the following claim, which gives a contradiction to {\bf Assumption 1} and completes the proof of this subcase.

\begin{CLA}\label{CLA:min1}
There is a coloring $\varphi_1\in \mathcal{C}^k(G-e)$ such that $T$ is a non-elementary ETT of $(G, e, \varphi_1)$, $T_m$ is $(\varphi, \varphi_1)$-stable, and $|\overline{\varphi}_1(V(T_0))\cap \varphi_1(E(T)-E(T_0))| > |\overline{\varphi}(V(T_0))\cap \varphi(E(T)-E(T_0))|$.
\end{CLA}

\proof Following the linear order $\prec_{\ell}$, let $e_1$ be the first edge in $E(T-T_m)$ with $\varphi(e_1) = \alpha$, and let $y_1$ be the in-end of $e_1$. Pick a missing color $\beta_1\in \overline{\varphi}(y_1)$. Note that, since $\varphi(e_1) = \alpha$ and $\alpha \in \Gamma^f(T_0) - \varphi(E(T_m))$, $e_1\notin \partial(T_m)$. Hence $y_1\in V(T) - V(T_m)$. Let $P_{v_1}:=P_{v_1}(\alpha,\beta_1,\varphi)$, $P_{y_1}:=P_{y_1}(\alpha,\beta_1,\varphi)$, and $P_z:=P_z(\alpha,\beta_1,\varphi)$ be $(\alpha, \beta_1)$-chains containing $v_1$, $y_1$ and $z$, respectively. By Claim~\ref{CLA:OneComp}, $P_{v_1} = P_{y_1}$, which in turn shows that it is disjoint from $P_z$. By Lemma~\ref{LEM:EXconn}, $E(P_{v_1})\supset E_{\beta_1}(\partial(T_m))$, so $V(P_z)\cap V(T_m) = \emptyset$. Consider the coloring $\varphi_1 = \varphi/P_z$. Since $V(P_z)\cap V(T_m) = \emptyset$, $T_m$ is $(\varphi, \varphi_1)$-stable.
By Lemma~\ref{LEM:FixConn}, $T_m$ is an ETT of $(G, e, \varphi_1)$. Since $e_1$ is the first edge colored with $\alpha$ along $\prec_{\ell}$, we have that $e_1 \prec_{\ell} e^*$ for all edges $e^*$ colored with $\beta_1$. So, $T$ is an ETT of $(G, e, \varphi_1)$. Note that $e_1\in E(P_{y_1}) = E(P_{v_1})$, which in turn gives $\varphi_1(e_1) = \alpha$. We also note that $\beta_1\in \overline{\varphi}_1(z)\cap \overline{\varphi}_1(y_1)$. By Claim~\ref{CLA:Non0}, there is a color $\gamma \in \Gamma^f(T_0)-\varphi(E(T))$. Let $u\in V(T_0)$ such that $\gamma\in \overline{\varphi}(u)$. Let $Q_u:= P_u(\gamma, \beta_1, \varphi_1)$, $Q_{y_1}:=P_{y_1}(\gamma, \beta_1, \varphi_1)$ and $Q_z:=P_z (\gamma, \beta_1, \varphi_1)$ be $(\gamma, \beta_1)$-chains containing $u$, $y_1$ and $z$, respectively. By Claim~\ref{CLA:OneComp}, $Q_u = Q_{y_1}$, so $Q_u$ and $Q_z$ are disjoint. By Lemma~\ref{LEM:EXconn}, $E(Q_u)\supset E_{\beta_1}(\partial(T_m))$, so $V(Q_z)\cap V(T_m) = \emptyset$. Let $\varphi_2 = \varphi_1/Q_z$ be the coloring obtained from $\varphi_1$ by interchanging colors on $Q_z$. Since $V(Q_z)\cap V(T_m) = \emptyset$, $T_m$ is an ETT of $(G, e, \varphi_2)$. Since $\gamma\in \overline{\varphi}(V(T_0)) -\varphi(E(T))$, $T_m$ can be extended to $T$ as an ETT in $\varphi_2$. Since $\gamma\in \overline{\varphi}_2(z)\cap \overline{\varphi}_2(u)$, by Claim~\ref{CLA:a-used}, we have $\gamma\in \varphi_2(E(T-T_m))$. Since $e_1\in Q_{y_1} = Q_u$, the color $\alpha$ assigned to $e_1$ is unchanged. Thus,
\[
\overline{\varphi}_2(V(T_0))\cap \varphi_2(E(T)-E(T_0)) \supseteq ( \overline{\varphi}(V(T_0))\cap \varphi(E(T)-E(T_0)))\cup\{\gamma\},
\]
and $\alpha \in \overline{\varphi}(V(T_0))\cap \varphi(E(T))$. So, Claim~\ref{CLA:min1} holds. \qed

{\bf \noindent Subcase 2.2:} $y\in V(T_{m})$, i.e., $f\in \partial(T_m)$.

The following two claims are similar to Claims~\ref{CLA:a-used} and \ref{CLA:min1} in Subcase 2.1, and they lead to a contradiction to {\bf Assumption 1}.
Their proofs are similar to those of the previous two claims; however, for completeness, we give the details.

\begin{CLA}\label{CLA:a-used2}
Color $\alpha$ is used in $E(T-T_m)$, i.e., $\alpha\in \varphi(E(T-T_m))$.
\end{CLA}

\proof Suppose, on the contrary, that $\alpha \notin \varphi(E(T-T_m))$. By Claim~\ref{CLA:a-free}, we may assume that $\alpha \notin \varphi(E(T_m))$, so $\alpha \notin \varphi(E(T))$. Let $\varphi(f)=\theta$. As $f\in \partial(T_m)$ is not a connecting edge and $T_m$ is closed, we know that there exists $w\in V(T-T_m)$ such that $\theta\in \overline{\varphi}(w)$. Consider the $(\alpha,\theta)$-chain $P_{v_1}:=P_{v_1}(\alpha,\theta,\varphi)$. By Lemma~\ref{LEM:EXconn}, $E(P_{v_1}) \supset E_{\theta}(\partial(T_m))$. So, $f\in E(P_{v_1})$ and $z$ is the other end of $P_{v_1}$. Then, $P_{w}:=P_{w}(\alpha,\theta,\varphi)$ is disjoint from $P_{v_1}$, which in turn shows $V(P_{w})\cap V(T_m) = \emptyset$. Let $\varphi_1 = \varphi/P_{w}$. Since $V(P_{w})\cap V(T_m) = \emptyset$, $T_m$ is $(\varphi, \varphi_1)$-stable. By Lemma~\ref{LEM:FixConn}, $T_m$ is an ETT of $(G, e, \varphi_1)$. Since $\alpha$ is not used in $T-T_m$, $T_m$ can be extended to $T'$ as an ETT of $(G, e, \varphi_1)$. Note that $\alpha\in \overline{\varphi}_1(v_1)\cap \overline{\varphi}_1(w)$. So, $T'$ is not elementary, which gives a contradiction to the minimality of $|T|$. \qed

\begin{CLA}\label{CLA:min2}
There is a coloring $\varphi_1\in \mathcal{C}^k(G-e)$ such that $T$ is a non-elementary ETT of $(G, e, \varphi_1)$, $T_m$ is $(\varphi, \varphi_1)$-stable, and $|\overline{\varphi}_1(V(T_0))\cap \varphi_1(E(T)-E(T_0))| > |\overline{\varphi}(V(T_0))\cap \varphi(E(T)-E(T_0))|$.
\end{CLA}

\proof Following the linear order $\prec_{\ell}$, let $e_1$ be the first edge in $E(T-T_m)$ with $\varphi(e_1) = \alpha$, and let $y_1$ be the in-end of $e_1$. Pick a missing color $\beta_1\in \overline{\varphi}(y_1)$.
Since $\varphi(e_1) = \alpha\in \overline{\varphi}(V(T_0))$ and $T_m$ is closed, $e_1\notin \partial(T_m)$. Hence, $y_1\in V(T) - V(T_m)$. Let $P_{v_1}:=P_{v_1}(\alpha,\beta_1,\varphi)$, $P_{y_1}:=P_{y_1}(\alpha,\beta_1,\varphi)$, and $P_z:=P_z(\alpha,\beta_1,\varphi)$ be $(\alpha, \beta_1)$-chains containing $v_1$, $y_1$ and $z$, respectively. By Claim~\ref{CLA:OneComp}, $P_{v_1} = P_{y_1}$, which in turn shows that it is disjoint from $P_z$. By Lemma~\ref{LEM:EXconn}, $E(P_{v_1})\supset E_{\beta_1}(\partial(T_m))$, so $V(P_z)\cap V(T_m) = \emptyset$. Consider the coloring $\varphi_1 = \varphi/P_z$. Since $V(P_z)\cap V(T_m) = \emptyset$, $T_m$ is $(\varphi, \varphi_1)$-stable. By Lemma~\ref{LEM:FixConn}, $T_m$ is an ETT of $(G, e, \varphi_1)$. Since $e_1$ is the first edge colored with $\alpha$ along $\prec_{\ell}$, we have that $e_1 \prec_{\ell} e^*$ for all edges $e^*$ with $\varphi_1(e^*) = \beta_1$. So, $T$ is an ETT of $(G, e, \varphi_1)$. Note that $e_1\in E(P_{y_1}) = E(P_{v_1})$, which in turn gives $\varphi_1(e_1) = \alpha$. We also note that $\beta_1\in\overline{\varphi}_1(z)\cap \overline{\varphi}_1(y_1)$. By Claim~\ref{CLA:Non0}, there is a color $\gamma \in \Gamma^f(T_0)-\varphi(E(T))$. Let $u\in V(T_0)$ such that $\gamma\in \overline{\varphi}(u)$. Let $Q_u:= P_u(\gamma, \beta_1, \varphi_1)$, $Q_{y_1}:=P_{y_1}(\gamma, \beta_1, \varphi_1)$ and $Q_z:=P_z (\gamma, \beta_1, \varphi_1)$ be $(\gamma, \beta_1)$-chains containing $u$, $y_1$ and $z$, respectively. By Claim~\ref{CLA:OneComp}, $Q_u = Q_{y_1}$, so $Q_u$ and $Q_z$ are disjoint. By Lemma~\ref{LEM:EXconn}, $E(Q_u)\supset E_{\beta_1}(\partial(T_m))$, so $V(Q_z)\cap V(T_m) = \emptyset$. Let $\varphi_2 = \varphi_1/Q_z$ be the coloring obtained from $\varphi_1$ by interchanging colors on $Q_z$. Since $V(Q_z)\cap V(T_m) = \emptyset$, $T_m$ is an ETT of $(G, e, \varphi_2)$. Since $\gamma\in \overline{\varphi}(V(T_0)) -\varphi(E(T))$, $T_m$ can be extended to $T$ as an ETT in $\varphi_2$.
Since $\gamma\in \overline{\varphi}_2(z)\cap \overline{\varphi}_2(u)$, by Claim~\ref{CLA:a-used}, we have $\gamma\in \varphi_2(E(T-T_m))$. Since $e_1\in Q_{y_1} = Q_u$, we have $\varphi_2(e_1) = \varphi_1(e_1) = \alpha$. Thus,
\[
\overline{\varphi}_2(V(T_0))\cap \varphi_2(E(T)-E(T_0)) \supseteq (\overline{\varphi}(V(T_0))\cap \varphi(E(T) -E(T_0)))\cup\{\gamma\},
\]
and $\alpha\in \overline{\varphi}(V(T_0))\cap\varphi(E(T))$. So, Claim~\ref{CLA:min2} holds. \qed

We now complete the proof of Theorem~\ref{LEM:Elem}. \qed

Combining Theorem~\ref{LEM:Elem} and Lemma~\ref{LEM:ETT>F}, we obtain the following result.

\begin{COR}\label{COR:Main}
Let $G$ be a $k$-critical graph with $k\ge \Delta+1$. If $G$ is not elementary, then there is an ETT $T$ based on $T_0\in \mathcal{T}$ with $m$-rungs such that $T$ is elementary and
\[
|T| \ge |T_0| -2+ \min\{m + |\overline{\varphi}(V(T_0))|, 2(|\Gamma^f(T_0)|-1) \}.
\]
\end{COR}

\section{Proofs of Theorems~\ref{THM:cubic} and ~\ref{THM:Jm19} }

\subsection{Proof of Theorem~\ref{THM:cubic}}

Clearly, we only need to prove Theorem~\ref{THM:cubic} for critical graphs.

\begin{THM}\label{THM:cubic-2}
If $G$ is a $k$-critical graph with $k\geq\Delta+\sqrt[3]{\Delta/2}$, then $G$ is elementary.
\end{THM}

\proof Suppose on the contrary that $G$ is not elementary. By Corollary~\ref{COR:Main}, let $T$ be an ETT of a $k$-triple $(G, e, \varphi)$ based on $T_0\in \mathcal{T}$ with $m$-rungs such that $V(T)$ is elementary and
\[
|T| \ge |T_0| -2+ \min\{m + |\overline{\varphi}(V(T_0))|, 2(|\Gamma^f(T_0)|-1) \}.
\]
Since $m\ge 1$ and $|\overline{\varphi}(V(T_0))| \ge (k-\Delta) |T_0| +2$, we have $|T_0| -2 + m +|\overline{\varphi}(V(T_0))| \ge (k-\Delta +1)|T_0| +1$. Following Scheide~\cite{Scheide-2010}, we may assume that $T_0$ is a balanced Tashkinov tree with height $h(T_0) \ge 5$.
So, $|\varphi(E(T_0))| \le \frac{|T_0| -1} 2$, which in turn gives
\[
|\Gamma^f(T_0)| = |\overline{\varphi}(V(T_0))| - |\varphi(E(T_0))| \ge (k-\Delta -\frac 12)|T_0| +\frac{5}{2}.
\]
Hence
\[
|T_0| -2 + 2(|\Gamma^f(T_0)|-1) \ge 2(k -\Delta)|T_0| +1 \ge (k-\Delta +1)|T_0| +1.
\]
Therefore, in any case, we have the following inequality:
\begin{equation}\label{EQ:T>}
|T| \ge (k -\Delta +1) |T_0| +1.
\end{equation}
By Corollary~\ref{COR:T-order-exit-vertex}, $|T_0| \ge 2(k-\Delta) +1$. Following~(\ref{EQ:T>}), we get the inequality below.
\begin{equation}\label{EQ:T>2}
|T| \ge (k-\Delta +1)(2(k-\Delta) +1) + 1= 2(k-\Delta)^2 + 3(k-\Delta) + 2.
\end{equation}
Since $T$ is elementary, we have $k \ge |\overline{\varphi}(V(T))| \ge (k-\Delta)|T| +2$. Combining this with (\ref{EQ:T>2}), we get the following inequality.
\[
k \ge 2(k-\Delta)^3 + 3(k - \Delta)^2 + 2(k-\Delta) + 2.
\]
Setting $t = k-\Delta$, this reads $\Delta + t \ge 2t^3 + 3t^2 + 2t + 2$, i.e., $\Delta \ge 2t^3 + 3t^2 + t + 2 > 2t^3$, so $t < \sqrt[3]{\Delta/2}$. That is, $k < \Delta +\sqrt[3]{\Delta/2}$, giving a contradiction to $k\geq\Delta +\sqrt[3]{\Delta/2}$. \qed

\subsection{Proofs of Theorem~\ref{THM:Jm19} and Corollary \ref{COR:Delta23}}

We will need the following observation from~\cite{StiebSTF-Book}. For completeness, we give its proof here.

\begin{LEM}\label{LEM:elmentary-Jm}
Let $s\ge 2$ be a positive integer and $G$ be a $k$-critical graph with $k> \frac{s}{s-1}\Delta+\frac{s-3}{s-1}$. For any edge $e\in E(G)$, if $X\subseteq V(G)$ is an elementary set with respect to a coloring $\varphi\in \mathcal{C}^k(G-e)$ such that $V(e)\subseteq X$, then $|X|\le s-1$.
\end{LEM}

\proof Otherwise, assume $|X| \ge s$. Since $X$ is elementary, $k \ge |\overline{\varphi}(X)| \ge (k-\Delta)|X| +2 \ge s(k-\Delta) +2$, which in turn gives
\[
\Delta \ge (s-1)(k-\Delta) +2 > (\Delta +(s-3)) +2 = \Delta + s -1 > \Delta,
\]
a contradiction. \qed

Clearly, to prove Theorem \ref{THM:Jm19}, it is sufficient to restrict our consideration to critical graphs.
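As a small numerical sanity check (not part of the proof; the values of $\Delta$ below are arbitrary), the final implication in the proof of Theorem~\ref{THM:cubic-2} can be tested directly: every integer $k$ satisfying $k \ge 2(k-\Delta)^3 + 3(k-\Delta)^2 + 2(k-\Delta) + 2$ indeed satisfies $k < \Delta + \sqrt[3]{\Delta/2}$.

```python
# Check: whenever k >= 2t^3 + 3t^2 + 2t + 2 with t = k - Delta >= 1,
# we must have k < Delta + (Delta/2)^(1/3).
for Delta in (50, 200, 1000, 5000):      # illustrative maximum degrees
    bound = Delta + (Delta / 2) ** (1 / 3)
    for t in range(1, 100):              # t = k - Delta
        k = Delta + t
        if k >= 2 * t**3 + 3 * t**2 + 2 * t + 2:
            assert k < bound, (Delta, k, bound)
print("implication holds for all sampled Delta")
```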
\begin{THM}\label{THM:Jm19-2}
If $G$ is a $k$-critical graph with $k >\frac{23}{22}\Delta+\frac{20}{22}$, then $G$ is elementary.
\end{THM}

\proof Suppose, on the contrary, that $G$ is not elementary. By Corollary~\ref{COR:Main}, let $T$ be an ETT of a $k$-triple $(G, e, \varphi)$ based on $T_0\in \mathcal{T}$ with $m$-rungs such that $V(T)$ is elementary and
\[
|T| \ge |T_0| -2+ \min\{m + |\overline{\varphi}(V(T_0))|, 2(|\Gamma^f(T_0)|-1) \}.
\]
By Lemma~\ref{LEM:elmentary-Jm}, $|T| \le 22$. We will show that $|T| \ge 23$, which yields a contradiction. By Lemma~\ref{LEM:Small-T}, we have $|T_0| \ge 11$. Since $G$ is not elementary, $V(T_0)$ is not strongly closed, so $T\supsetneq T_0$. In particular, we have $m\ge 1$. Since $e\in E(T_0)$, we have $|\overline{\varphi}(V(T_0))| \ge |T_0| +2$. Thus,
\begin{equation}\label{EQ:>23}
|T_0| -2+ m + |\overline{\varphi}(V(T_0))| \ge 2|T_0| +1 \ge 2\times 11 +1 = 23.
\end{equation}
Following Scheide~\cite{Scheide-2010}, we may assume that $T_0$ is a balanced Tashkinov tree with height $h(T_0) \ge 5$, which in turn gives $|\varphi(E(T_0))| \le (|T_0| -1)/2$. So, $|\Gamma^f(T_0)| \ge |T_0| +2 - (|T_0|-1)/2 \ge (|T_0| +5)/2$. Thus,
\begin{equation} \label{EQ:>25}
|T_0| -2 +2(|\Gamma^f(T_0)|-1) \ge 2|T_0| +1 \ge 23.
\end{equation}
Combining (\ref{EQ:>23}) and (\ref{EQ:>25}), we get $|T| \ge 23$, giving a contradiction. \qed

We now give a proof of Corollary \ref{COR:Delta23}; recall that it is stated as follows.

\begin{COR}\label{COR:Delta23-2}
If $G$ is a graph with $\Delta\leq 23$ or $|G|\leq 23$, then $\chi'\leq\max\{\Delta+1,\lceil\chi'_f\rceil\}$.
\end{COR}

\proof We assume that $G$ is critical; otherwise, we prove the corollary for a critical subgraph of $G$ instead. If $\Delta\leq 23$, then $\left\lfloor\frac{23}{22}\Delta+\frac{20}{22}\right\rfloor=\left\lfloor\Delta+\frac{\Delta+20}{22}\right\rfloor\leq \Delta+1$. If $\chi'\leq \Delta+1$, we are done.
Otherwise, we assume that $\chi'\geq \Delta+2\geq \frac{23}{22}\Delta+\frac{20}{22}$. By Theorem \ref{THM:Jm19}, we have $\chi'=\lceil \chi_{f}'\rceil$.

Assume now that $|G|\leq23$. If $\chi'\leq\Delta+1$, then we are done. Otherwise, $\chi'=k+1$ for some integer $k\geq \Delta+1$. By Corollary \ref{COR:Main}, let $T$ be an ETT of a $k$-triple $(G, e, \varphi)$ based on $T_0\in \mathcal{T}$ with $m$-rungs such that $V(T)$ is elementary and
\[
|T| \ge |T_0| -2+ \min\{m + |\overline{\varphi}(V(T_0))|, 2(|\Gamma^f(T_0)|-1) \}.
\]
By Lemma~\ref{LEM:Small-T}, we have $|T_0| \ge 11$. Suppose that $G$ is not elementary. Then $V(T_0)$ is not strongly closed, so $T\supsetneq T_0$. In particular, we have $m\ge 1$. Since $e\in E(T_0)$, we have $|\overline{\varphi}(V(T_0))| \ge |T_0| +2$. Thus,
\begin{equation}\label{EQ:>23-2}
|T_0| -2+ m + |\overline{\varphi}(V(T_0))| \ge 2|T_0| +1 \ge 2\times 11 +1 = 23.
\end{equation}
Following Scheide~\cite{Scheide-2010}, we may assume that $T_0$ is a balanced Tashkinov tree with height $h(T_0) \ge 5$, which in turn gives $|\varphi(E(T_0))| \le (|T_0| -1)/2$. So, $|\Gamma^f(T_0)| \ge |T_0| +2 - (|T_0|-1)/2 \ge (|T_0| +5)/2$. Thus,
\begin{equation} \label{EQ:>25-2}
|T_0| -2 +2(|\Gamma^f(T_0)|-1) \ge 2|T_0| +1 \ge 23.
\end{equation}
Combining (\ref{EQ:>23-2}) and (\ref{EQ:>25-2}), we get $|T| \ge 23$. Then $|G|\geq |T|\geq 23$, so $|G|=23$ and $V(G)=V(T)$. Since $V(T)$ is elementary, $G$ is elementary, giving a contradiction. \qed

\end{document}
\begin{document} \title{Numerical Bayesian quantum-state assignment for a three-level quantum system \\ II. Average-value data with a constant, a Gaussian-like, and a Slater prior} \author{\firstname{A.} \surname{M{\aa}nsson}} \email{[email protected]} \author{\firstname{P. G. L.} \surname{Porta Mana}} \author{\firstname{G.} \surname{Bj\"{o}rk}} \affiliation{Kungliga Tekniska H\"ogskolan, Isafjordsgatan 22, SE-164\,40 Stockholm, Sweden} \date{14 January 2007} \begin{abstract} This paper offers examples of concrete numerical applications of Bayesian quantum-state assignment methods to a three-level quantum system. The statistical operator\ assigned on the evidence of various measurement data and kinds of prior knowledge is computed partly analytically, partly through numerical integration (in eight dimensions) on a computer. The measurement data consist in the average of outcome values of $N$ identical von~Neumann projective measurements performed on $N$ identically prepared three-level systems. In particular the large-$N$ limit will be considered. Three kinds of prior knowledge are used: one represented by a plausibility distribution constant in respect of the convex structure of the set of statistical operator s; another one represented by a prior studied by Slater, which has been proposed as the natural measure on the set of statistical operators; the last prior is represented by a Gaussian-like distribution centred on a pure statistical operator, and thus reflecting a situation in which one has useful prior knowledge about the likely preparation of the system. The assigned statistical operators obtained with the first two kinds of priors are compared with the one obtained by Jaynes' maximum entropy method for the same measurement situation. In the companion paper the case of measurement data consisting in absolute frequencies is considered. 
\end{abstract}

\pacs{03.67.-a,02.50.Cw,02.50.Tt,05.30.-d,02.60.-x}

\maketitle

\section{Introduction}
\label{sec:intro}

In this paper we continue our two-part study \citep{maanssonetal2006} with examples of concrete numerical applications of Bayesian quantum-state assignment methods to a three-level quantum system. Since we consider the same scenario as in the first paper, to avoid repeating ourselves we refer the reader to that paper for a more detailed and complete account of the motivations, explanations, discussions and references on the background, theory, formulas, nomenclature, etc., used in this paper.

The main difference between the two papers lies in the type of measurement data considered. In the first paper the measurement data consisted in absolute frequencies of the outcomes of $N$ identical von~Neumann projective measurements performed on $N$ identically prepared three-level systems. Here we consider the same measurement situation, but the measurement data are instead in the form of an average of values associated to the measurement outcomes, in particular $1$, $0$, and $-1$. The statistical operator encoding the average-value data and prior knowledge is computed partly numerically and partly analytically in the limit $N\to\infty$, for a constant prior probability distribution as well as for two different kinds of non-constant priors, and for different average-value data.

A reason for studying data of this kind, other than the obvious one that it may have been given to us in this form, is that it constitutes an example of more complex data than mere absolute frequencies. It is also interesting to study this particular kind of data since it enables us to compare our assigned statistical operators with those obtained by instead using Jaynes' maximum entropy method \cite{jaynes1957b} for the same measurement situation.
The reason for doing this is that we want to investigate whether or not this method could be seen as a special case of Bayesian quantum-state assignment, and if so, try to find the prior that would lead to the same statistical operator\ as the one obtained by using the maximum entropy method.

\section{The present study}
\label{sec:presstud}

In this paper we study data $D$ and prior knowledge $I$ of the following kind:
\begin{itemize}
\item The measurement data $D$ consist in the average of $N$ outcome values of $N$ instances of the same measurement performed on $N$ identically prepared systems. The measurement is represented by the extreme positive-\bd operator-\bd valued measure\ ({i.e.}, non-degenerate `von~Neumann measurement') having three possible distinct outcomes $\set{\text{`1'}, \text{`2'}, \text{`3'}}$ represented by the eigenprojectors $\set{\ketbra{1}{1}, \ketbra{0}{0}, \ketbra{-1}{-1}}$, where the eigenprojectors are labelled by their associated outcome values $\{1,0,-1\}$, respectively. We consider the limiting case of very large $N$.

\item Three different kinds of prior knowledge $I$ are used. Two of them, $I_\text{co}$ and $I_\text{ga}$, are the same as those given in the first paper, i.e.\ a prior plausibility distribution
\begin{equation} \label{eq:first_prior_rho}
p(\bm{\rho} \mathpunct{|} I_\text{co})\, \mathrm{d}\bm{\rho} = g_\text{co}(\bm{\rho})\, \mathrm{d}\bm{\rho} \propto \mathrm{d}\bm{\rho},
\end{equation}
which is constant in respect of the convex structure of the set of statistical operators, in the sense explained in~\citep[\S~3,4]{maanssonetal2006}; and a spherically symmetric, Gaussian-like prior distribution
\begin{equation}\label{eq:second_prior_rho}
p(\bm{\rho} \mathpunct{|} I_\text{ga})\, \mathrm{d}\bm{\rho} = g_\text{ga}(\bm{\rho})\, \mathrm{d}\bm{\rho} \propto \exp\biggl\{ -\frac{\tr[(\bm{\rho} -\hat{\bm{\rho}})^2]}{s^2} \biggr\}\, \mathrm{d}\bm{\rho},
\end{equation}
centred on the statistical operator\ $\hat{\bm{\rho}}$.
The latter prior expresses some kind of knowledge that leads us to assign a higher plausibility to regions in the vicinity of $\hat{\bm{\rho}}$. For this prior we consider two examples, when $\hat{\bm{\rho}}=\ketbra{1}{1}$ and $\hat{\bm{\rho}}=\ketbra{0}{0}$.\footnote{ Note that the case $\hat{\bm{\rho}}=\ketbra{-1}{-1}$ is equivalent to the case with $\hat{\bm{\rho}}=\ketbra{1}{1}$.} The third kind of prior knowledge, $I_\text{sl}$, is represented by the prior plausibility distribution
\begin{equation}\label{eq:slater_prior_rho}
p(\bm{\rho} \mathpunct{|} I_\text{sl})\, \mathrm{d}\bm{\rho} = g_\text{sl}(\bm{\rho})\, \mathrm{d}\bm{\rho} \propto (\det{\bm{\rho}})^{2d+1}\, \mathrm{d}\bm{\rho},
\end{equation}
the so-called ``Slater prior'' for a $d$-level system, which has been proposed as a candidate for the appropriate measure on the set of statistical operators~\citep{slater1995b}.
\end{itemize}

The paper is organised as follows: In \S~\ref{sec:scen_gen_case} we present the reasoning leading to the statistical-operator-assignment formulae in the case of average-value data, for finite $N$ and in the limit $N\to\infty$. We arrived at the same formulae (as special cases of formulae applicable to generic, not necessarily quantum-theoretical systems) in a series of papers~\citep{portamanaetal2006,portamanaetal2006b,portamanaetal2006c}. In \S~\ref{sec:exstasn3level} we present the particular case studied in this paper, give the statistical-operator-assignment formulae in this case, introduce the Bloch vector parametrisation, present the calculations by symmetry arguments and by numerical integration, discuss the results and in some cases compare them with those obtained by the maximum entropy method. Finally, the last section summarises and discusses the main points and results.
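To make the non-constant priors concrete, the following sketch (ours, purely illustrative; the width $s$ and the example operators are arbitrary choices) evaluates the unnormalised densities $g_\text{ga}$ of eq.~\eqref{eq:second_prior_rho} and $g_\text{sl}$ of eq.~\eqref{eq:slater_prior_rho} for a three-level system:

```python
import numpy as np

d = 3                                   # three-level system
rho_hat = np.diag([1.0, 0.0, 0.0])      # pure centre |1><1| in the measurement basis
s = 0.5                                 # width of the Gaussian-like prior (arbitrary)

def g_gauss(rho):
    """Unnormalised Gaussian-like prior density, eq. (second_prior_rho)."""
    diff = rho - rho_hat
    return float(np.exp(-np.trace(diff @ diff).real / s**2))

def g_slater(rho):
    """Unnormalised Slater prior density (det rho)^(2d+1), eq. (slater_prior_rho)."""
    return float(np.linalg.det(rho).real ** (2 * d + 1))

rho = np.diag([0.5, 0.3, 0.2])          # an example statistical operator
print(g_gauss(rho), g_slater(rho))      # both strictly positive for this rho
```

Note that the Slater density vanishes on singular (in particular, pure) statistical operators, while the Gaussian-like density peaks at $\hat{\bm{\rho}}$.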
\section{Statistical operator assignment}
\label{sec:scen_gen_case}

\subsection{General case}
\label{sec:scen_gen_case_fin}

Again we assume there is a preparation scheme that produces quantum systems always in the same `condition' --- the same `state' --- where each condition is associated with a statistical operator. Suppose we come to know that $N$ measurements, represented by the $N$ positive-\bd operator-\bd valued measures $\set{\bm{\varEpsilon}^{(k)}_\mu \colon \mu = 1, \dotsc, r_k}$, $k= 1, \dotsc, N$, are or have been performed on $N$ systems for which our knowledge $I$ holds. In this paper we analyse the case in which the data is an average of a number of outcome values, and it is therefore natural to limit ourselves to the situation in which the $N$ measurements are all instances of the same measurement. Thus, for all $k= 1, \dotsc, N$, $\set{\bm{\varEpsilon}^{(k)}_\mu} = \set{\bm{\varEpsilon}_\mu}$. Let us say that the outcomes $i_1, \dotsc, i_k, \dotsc, i_N$ are or were obtained. Since every outcome is associated to an outcome value $m_i$, the average of all outcome values is
\begin{equation}
\bar{m} \equiv \sum_{k=1}^{N} m_{i_k}/N.
\end{equation}
We will consider the general situation in which the data $D$ consists in the knowledge that the average value $\bar{m}$ in $N$ repetitions of the measurement lies in a set $\varUpsilon$:
\begin{equation}
D \mathrel{\hat{=}} [\bar{m}\in\varUpsilon].
\end{equation}
Such data arise when the measurement is affected by uncertainties and is moreover ``coarse-grained'' for practical purposes, so that one obtains not precise average values but rather a region of possible ones. On the evidence of $D$ we can update the prior plausibility distribution $g(\bm{\rho})\,\mathrm{d}\bm{\rho}\coloneqq p(\bm{\rho} \mathpunct{|} I)\,\mathrm{d}\bm{\rho}$.
By the rules of plausibility theory\footnote{We do not explicitly write the prior knowledge $I$ whenever the statistical operator\ appears on the conditional side of the plausibility; {i.e.}, $p(\cdot \mathpunct{|} \bm{\rho}) \coloneqq p(\cdot \mathpunct{|} \bm{\rho}, I)$.} we have
\begin{gather} \label{eq:update_prior}
p(\bm{\rho} \mathpunct{|} D \land I)\,\mathrm{d}\bm{\rho} = \frac{ p(D \mathpunct{|} \bm{\rho})\, g(\bm{\rho})\, \mathrm{d}\bm{\rho} } {\int_{\mathbb{S}} p(D \mathpunct{|} \bm{\rho})\, g(\bm{\rho})\, \mathrm{d}\bm{\rho}},
\end{gather}
where $\mathbb{S}$ is the set of all statistical operators. The plausibility of obtaining a particular sequence of outcomes is
\begin{gather} \label{eq:many_meas2}
p\bigl(\bm{\varEpsilon}_{i_1}, \dotsc, \bm{\varEpsilon}_{i_N} \mathpunct{|} \bm{\rho}\bigr) = \tprod_{i=1}^r [\tr\bigl\{\bm{\varEpsilon}_i \bm{\rho} \bigr\}]^{N_i},
\end{gather}
with the convention, here and in the following, that only factors with $N_i>0$ are to be multiplied over, and where we have used that $\tr\bigl\{\bm{\varEpsilon}_i \bm{\rho} \bigr\}=p\bigl(\bm{\varEpsilon}_i \mathpunct{|} \bm{\rho}\bigr)$ and $(N_1,\dotsc,N_r)\eqqcolon\bar{N}$ are the absolute frequencies of appearance of the $r$ possible outcomes (naturally, $N_i \geqslant 0$ and ${\textstyle\sum}_i N_i = N$). Since the exact order of the sequence of outcomes is unimportant and only the absolute frequencies of appearance $\bar{N}$ matter, the plausibility of the absolute frequencies $\bar{N}$ in $N$ measurements is
\begin{gather} \label{eq:pnxfirstpaper}
p\bigl(\bar{N} \mathpunct{|} \bm{\rho} \bigr) = N! \prod_{i=1}^r \frac{[\tr\bigl\{\bm{\varEpsilon}_i \bm{\rho} \bigr\}]^{N_i}}{N_i!}.
\end{gather}
Define $\mathbb{N}_N^r$ as the set of all absolute frequencies $\bar{N}$, for fixed $N$ and $r$.
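As a quick numerical illustration of eq.~\eqref{eq:pnxfirstpaper} (an illustrative sketch of ours; the example statistical operator below is an arbitrary choice), one can check that the multinomial plausibilities sum to unity over all of $\mathbb{N}_N^r$:

```python
import numpy as np
from math import factorial

# Eigenprojectors |1><1|, |0><0|, |-1><-1| in the measurement eigenbasis.
E = [np.diag(v).astype(float) for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]

# An arbitrary example statistical operator (Hermitian, positive, unit trace).
rho = np.array([[0.5, 0.1, 0.0],
                [0.1, 0.3, 0.0],
                [0.0, 0.0, 0.2]])

q = [np.trace(Ei @ rho).real for Ei in E]   # outcome plausibilities tr{E_i rho}

def p_freq(Nbar):
    """Multinomial plausibility of the absolute frequencies Nbar."""
    N = sum(Nbar)
    p = float(factorial(N))
    for Ni, qi in zip(Nbar, q):
        p *= qi**Ni / factorial(Ni)
    return p

N, r = 6, 3
freqs = [(a, b, N - a - b) for a in range(N + 1) for b in range(N + 1 - a)]
total = sum(p_freq(f) for f in freqs)       # sum over all of N_N^r
print(total)
```

By the multinomial theorem the printed total equals $({\textstyle\sum}_i q_i)^N = 1$ up to rounding.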
By the rules of plausibility theory we then have that \begin{gather} \label{eq:postinv} p\bigl(D \mathpunct{|} \bm{\rho} \bigr) = \sum_{\bar{N}\in\mathbb{N}_N^r} p\bigl(D|\bar{N} \land \bm{\rho}) p\bigl(\bar{N}|\bm{\rho}). \end{gather} Given that we know $\bar{N}$, we can with certainty tell if $\bar{N}$ corresponds to an average value \begin{equation} \bar{m} \equiv \sum_{i=1}^{r} N_i m_i/N \end{equation} that belongs to the set $\varUpsilon$, and knowledge of the statistical operator $\bm{\rho}$ is here irrelevant. We thus have that $p\bigl(D|\bar{N} \land \bm{\rho})=p\bigl(\bar{m}\in\varUpsilon|\bar{N})=1$ if $\bar{N}\in\phi_{\varUpsilon}$ and $0$ otherwise, where we have defined \begin{gather} \phi_{\varUpsilon} \coloneqq \{\bar{N}\in\mathbb{N}_N^r \mathpunct{|} {\textstyle\sum}_i N_i m_i/N \in \varUpsilon\}. \end{gather} Using this together with equations \eqref{eq:pnxfirstpaper} and \eqref{eq:postinv} we obtain: \begin{gather} p(D|\bm{\rho}) = \sum_{\bar{N}\in\phi_{\varUpsilon}} p(\bar{N}|\bm{\rho}) = N! \sum_{\bar{N}\in\phi_{\varUpsilon}} {\prod_{i=1}^{r}} \frac{[\tr\{\bm{\varEpsilon}_{i}\,\bm{\rho} \}]^{N_i}}{N_i!}. \end{gather} Inserting this into equation \eqref{eq:update_prior} we finally obtain: \begin{gather} p(\bm{\rho}|D \land I)\, \mathrm{d}\bm{\rho} = \frac{\displaystyle \sum_{\bar{N}\in\phi_{\varUpsilon}} \Bigl({\prod_{i=1}^{r}} \frac{[\tr\{\bm{\varEpsilon}_{i}\,\bm{\rho}\}]^{N_i}}{N_i!}\Bigr) g(\bm{\rho})\,\mathrm{d}\bm{\rho}} {\displaystyle \sum_{\bar{N}\in\phi_{\varUpsilon}}\int\limits_{\mathbb{S}} \Bigl({\prod_{i=1}^{r}} \frac{[\tr\{\bm{\varEpsilon}_{i}\,\bm{\rho}\}]^{N_i}}{N_i!}\Bigr) g(\bm{\rho})\,\mathrm{d}\bm{\rho}}. \end{gather} We saw in the first paper that generic knowledge $\tilde{I}$ can be represented by or ``encoded in'' a unique statistical operator: \begin{gather} \bm{\rho}_{\tilde{I}} \coloneqq \int_{\mathbb{S}} \bm{\rho}\, p(\bm{\rho} \mathpunct{|} \tilde{I})\, \mathrm{d}\bm{\rho}. 
\label{eq:def_rho_I} \end{gather} The statistical operator\ encoding the joint knowledge $D \land I$ is thus given by \begin{gather} \label{eq:genassso} \bm{\rho}_{D \land I} = \frac{\displaystyle\sum_{\bar{N}\in\phi_{\varUpsilon}}\int\limits_{\mathbb{S}} \bm{\rho}\,\Bigl({\prod_{i=1}^{r}} \frac{[\tr\{\bm{\varEpsilon}_{i}\bm{\rho}\}]^{N_i}}{N_i!}\Bigr) g(\bm{\rho})\,\mathrm{d}\bm{\rho}} {\displaystyle\sum_{\bar{N}\in\phi_{\varUpsilon}}\int\limits_{\mathbb{S}} \Bigl({\prod_{i=1}^{r}} \frac{[\tr\{\bm{\varEpsilon}_{i}\bm{\rho}\}]^{N_i}}{N_i!}\Bigr) g(\bm{\rho})\,\mathrm{d}\bm{\rho}}. \end{gather} \subsection{Large-$N$ limit} \label{sec:scen_gen_case_infty} Let us now summarise some results obtained in~\citep{portamanaetal2006c} for the case of very large $N$. Consider the general situation in which each data set $D_N$ consists of the knowledge that the relative frequencies $\bm{f}\equiv(f_i):=(N_i/N)$ lie in a region $\varPhi_N=\{\bm{f}\mathpunct{|}[\sum_if_i\,m_i]\in\varUpsilon_N\}$, where $\varUpsilon_N$ is a region in which the average values lie (being such that $\varPhi_N$ has a non-empty interior and its boundary has measure zero with respect to the prior plausibility measure). Mathematically we want to see what form the state-assignment formulae take in the limit $N \to \infty$. Consider a sequence of data sets $\set{D_N}_{N=1}^\infty$ with corresponding sequences of regions $\set{\varUpsilon_N}_{N=1}^\infty$ and $\set{\varPhi_N}_{N=1}^\infty$, and assume the regions converge (in a topological sense specified in~\citep{portamanaetal2006c}) to regions ${\varUpsilon_\infty}$ and ${\varPhi_\infty}$ (the latter also with non-empty interior and with boundary of measure zero), respectively. Given that the statistical operator\ is $\bm{\rho}$, the plausibility distribution for the outcomes is \begin{equation} \label{eq:plaus_out_N} \bm{\zq}(\bm{\rho}) \equiv \bigl(q_i(\bm{\rho})\bigr)\quad \text{with} \quad q_i(\bm{\rho}) \coloneqq \tr(\bm{\varEpsilon}_i \bm{\rho}).
\end{equation} In \citep{portamanaetal2006c} it is shown that \begin{multline} \label{eq:post_N} p(\bm{\rho} \mathpunct{|} D_N \land I)\, \mathrm{d}\bm{\rho} \propto \begin{cases} 0,& \text{if $\bm{\zq}(\bm{\rho}) \not\in {\varPhi_\infty}$}, \\ p(\bm{\rho} \mathpunct{|} I)\, \mathrm{d}\bm{\rho}, & \text{if $\bm{\zq}(\bm{\rho}) \in {\varPhi_\infty}$}, \end{cases} \\ \text{as $N \to \infty$}. \end{multline} Further it is also shown that if ${\varUpsilon_\infty}$ degenerates into a single average value $\bar{m}$, the expression above becomes\footnote{Note that we have here, with abuse of notation, written $p[\bm{\rho} \mathpunct{|} \bar{m} \land I]$ instead of the more correct form $p[\bm{\rho} \mathpunct{|} (\bar{m}=\bar{m}^*) \land I]$, to avoid introducing another variable $\bar{m}^*$ for the average value data.} \begin{equation} \label{eq:post_N_delt} p[\bm{\rho} \mathpunct{|} \bar{m} \land I]\, \mathrm{d}\bm{\rho} \propto p(\bm{\rho} \mathpunct{|} I)\, \deltaup[{\textstyle\sum}_iq_i(\bm{\rho})\,m_i-\bar{m}]\,\mathrm{d}\bm{\rho}. \end{equation} This is an intuitively satisfying result, since in the limit when $N\to\infty$ we would expect that it is only those statistical operators $\bm{\rho}$ whose expectation value ${\textstyle\sum}_iq_i(\bm{\rho})\,m_i$ is equal to the measured average value $\bar{m}$ that could have been the case. The data single out a set of statistical operators, and these are then given weight according to the prior $p(\bm{\rho}\mathpunct{|} I)\,\mathrm{d}\bm{\rho} = g(\bm{\rho})\,\mathrm{d}\bm{\rho}$, specified by us. 
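The collapse of the posterior in equation \eqref{eq:post_N} can be illustrated by a toy computation (Python; the relative frequencies and the two candidate outcome distributions below are hypothetical): the log-plausibility gap between a statistical operator whose distribution $\bm{\zq}(\bm{\rho})$ reproduces the observed relative frequencies and one whose distribution does not grows linearly in $N$, so the mismatching operator is exponentially suppressed.

```python
from math import log

f = (0.45, 0.35, 0.20)        # observed relative frequencies (hypothetical)

# two candidate outcome distributions q(rho): one reproducing f, one not
q_match = (0.45, 0.35, 0.20)
q_off = (0.30, 0.30, 0.40)

def loglik(N, q):
    """log of prod_i q_i^{N f_i}, the dominant factor of p(N-bar | rho)."""
    return N * sum(fi * log(qi) for fi, qi in zip(f, q))

# the gap grows linearly in N: the mismatching operator loses all
# posterior weight as N -> infinity, as in eq. (post_N)
gaps = [loglik(N, q_match) - loglik(N, q_off) for N in (10, 100, 1000)]
```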
By normalising the posterior plausibility distribution in equation~\eqref{eq:post_N_delt}, the assigned statistical operator in equation~\eqref{eq:def_rho_I} is then given by \begin{gather} \label{eq:ass_op_ninfty} \bm{\rho}_{D \land I} = \frac{\displaystyle \int\limits_{\mathbb{S}} \bm{\rho}\,g(\bm{\rho})\,\deltaup[{\textstyle\sum}_iq_i(\bm{\rho})\,m_i-\bar{m}]\,\mathrm{d}\bm{\rho}} {\displaystyle \int\limits_{\mathbb{S}} g(\bm{\rho})\,\deltaup[{\textstyle\sum}_iq_i(\bm{\rho})\,m_i-\bar{m}]\,\mathrm{d}\bm{\rho}}. \end{gather} \section{An example of state assignment for a three-level system} \label{sec:exstasn3level} \subsection{Three-level case} \label{sec:exstasn3level_3levcase} We will now consider the particular case studied in this paper. The preparation scheme concerns three-level quantum systems; the corresponding set of statistical operators will be denoted by ${\mathbb{S}_3}$. We are going to consider the case when the number of measurements $N$ is very large and in the limit goes to infinity. The $N$ measurements are all instances of the same measurement, namely a non-degenerate projection-valued measurement (often called `von~Neumann measurement'). Thus, for all $k= 1, \dotsc, N$, $\set{\bm{\varEpsilon}^{(k)}_\mu} = \set{\bm{\varEpsilon}_\mu} \coloneqq \set{\zone, \zzero, \zmone}$, where the projectors, labelled by the particular outcome values $(m_1,m_2,m_3)=(1,0,-1)$ we have chosen to consider here, define an orthonormal basis in Hilbert space. All relevant operators will, quite naturally and advantageously, be expressed in this basis. We have for example that $q_{\mu}(\bm{\rho})=\tr(\bm{\varEpsilon}_{\mu} \bm{\rho}) = \rho_{\mu \mu}$, the $\mu$th diagonal element of $\bm{\rho}$ in the chosen basis. As data we are given that the average of the measurement outcome values is $\bar{m}$ (more precisely in the sense that ${\varUpsilon_\infty}$ degenerates into a single average value $\bar{m}$).
\subsection{Bloch vector parametrisation and symmetries} \label{sec:exstasn3level_blvecsym} We will be using the same parametrisation of the statistical operators as in the companion paper, i.e.\ in terms of Bloch vectors $\bm{x}$. For a three-level system the Bloch vector expansion of a statistical operator\ $\bm{\rho}(\bm{x})$ is given by: \begin{equation} \label{eq:blvexp} \bm{\rho}(\bm{x}) = \frac{1}{3} \bm{I}_3 + \frac{1}{2}\textstyle{\sum}_{j=1}^{8} x_j \bm{\lambda}_j, \end{equation} where \begin{equation} \label{eq:blvcoef} x_i = \tr\{\bm{\lambda}_i\,\bm{\rho}(\bm{x})\} \equiv \expe{\bm{\lambda}_i}_{\bm{\rho}(\bm{x})}. \end{equation} The Gell-Mann operators $\bm{\lambda}_i$ are Hermitian and can therefore be regarded as observables. Note that our von~Neumann measurement corresponds to the observable \begin{equation} \bm{\lambda}_3\equiv\zone + 0\, \zzero - \zmone.\label{eq:lambda_3-explic} \end{equation} Hence, given a statistical operator\ $\bm{\rho}(\bm{x})$, the following holds for the expectation value of the outcome values $\{1,0,-1\}$ for this particular measurement: \begin{gather} \expe{\bm{\lambda}_3}_{\bm{\rho}(\bm{x})} = {\textstyle\sum}_iq_i(\bm{x})\,m_i = \rho_{11}(\bm{x})-\rho_{33}(\bm{x}) = x_3. \end{gather} Equation~\eqref{eq:post_N_delt} thus becomes \begin{equation} \label{eq:post_N_delt_bvp} p[\bm{x} \mathpunct{|} \bar{m} \land I]\, \mathrm{d}\bm{x} \propto g(\bm{x})\, \deltaup(x_3-\bar{m})\,\mathrm{d}\bm{x}, \end{equation} and the assigned statistical operator in equation~\eqref{eq:ass_op_ninfty} assumes the form \begin{gather} \label{eq:ass_op_ninfty_b} \bm{\rho}_{\bar{m} \land I} = \frac{\displaystyle \int\limits_{{\mathbb{B}_8}} \bm{\rho}(\bm{x})\,g(\bm{x})\,\deltaup(x_3-\bar{m})\,\mathrm{d}\bm{x}} {\displaystyle \int\limits_{{\mathbb{B}_8}} g(\bm{x})\,\deltaup(x_3-\bar{m})\,\mathrm{d}\bm{x}}, \end{gather} where ${\mathbb{B}_8}$ is the set of all three-level Bloch vectors.
This can be rewritten in a form especially suited for numerical integration by computer, which we shall use hereafter: \begin{gather} \label{eq:statopass} \bm{\rho}_{\bar{m} \land I} = \frac{\displaystyle\int\limits_{{\mathbb{C}_8}} \bm{\rho}(\bm{x}) g(\bm{x})\deltaup(x_3-\bar{m})\,\chi_{\mathbb{B}_8}(\bm{x})\,\mathrm{d}\bm{x}} {\displaystyle\int\limits_{{\mathbb{C}_8}} g(\bm{x})\deltaup(x_3-\bar{m})\,\chi_{\mathbb{B}_8}(\bm{x})\,\mathrm{d}\bm{x}}, \end{gather} where $\chi_{\mathbb{B}_8}(\bm{x})$ is the characteristic function of the set ${\mathbb{B}_8}\subset{\mathbb{C}_8} \coloneqq \clcl{-1,1}^7 \times \Bigl\lclose-\tfrac{2}{\sqrt{3}}, \tfrac{1}{\sqrt{3}} \Bigr\rclose$. Using the Bloch vector expansion in equation~\eqref{eq:blvexp} we see that by computing the following set of integrals we have determined $\bm{\rho}_{\bar{m} \land I}$: \begin{equation} \label{eq:integrals} L_i[\bar{m},I] \coloneqq \int\limits_{{\mathbb{C}_8}} x_i\, g(\bm{x})\, \deltaup(x_3-\bar{m})\,\chi_{\mathbb{B}_8}(\bm{x})\,\mathrm{d}\bm{x}, \end{equation} where $i\in\{1,\dotsc,8\}$, and \begin{equation} \label{eq:integralsz} Z[\bar{m},I] \coloneqq \int\limits_{{\mathbb{C}_8}} g(\bm{x})\, \deltaup(x_3-\bar{m})\,\chi_{\mathbb{B}_8}(\bm{x})\,\mathrm{d}\bm{x}, \end{equation} where the dependence on the average value and prior knowledge is indicated within brackets. The assigned statistical operator will then be given by \begin{equation} \label{eq:genso} \bm{\rho}_{\bar{m} \land I} \,=\, \frac{1}{3} \bm{I}_3 + \frac{1}{2} \sum_{i=1}^{8} \frac{L_i[\bar{m},I]}{Z[\bar{m},I]} \bm{\lambda}_i. \end{equation} One sees directly from equations \eqref{eq:integrals} and \eqref{eq:integralsz} that $L_3[\bar{m},I]/Z[\bar{m},I] = \bar{m}$ ($Z[\bar{m},I]$ can never vanish, its integrand being positive and never identically naught).
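For a constant prior, the integrals above can be estimated by a plain Monte Carlo sketch (Python with NumPy; the sample sizes and the labelling of the off-diagonal Gell-Mann-type operators are our own choices, with the two diagonal operators fixed to match the conventions above): the delta function fixes $x_3=\bar{m}$, the characteristic function $\chi_{\mathbb{B}_8}$ becomes a positive-semidefiniteness test, and $L_8/Z$ reduces to the mean of $x_8$ over the accepted samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gell-Mann-type operators in the measurement eigenbasis (1, 0, -1).
# The two diagonal ones match the conventions of this paper; the labelling
# of the off-diagonal ones is our own choice.
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0] = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
lam[1] = [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]]
lam[2] = np.diag([1.0, 0.0, -1.0])                 # lambda_3: outcome values
lam[3] = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]
lam[4] = [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]]
lam[5] = [[0, 0, 0], [0, 0, 1], [0, 1, 0]]
lam[6] = [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]]
lam[7] = np.diag([1.0, -2.0, 1.0]) / np.sqrt(3)    # lambda_8 convention here

def rho_of_x(x):
    """Bloch-vector expansion of eq. (blvexp)."""
    return np.eye(3) / 3 + 0.5 * sum(xj * lj for xj, lj in zip(x, lam))

def L8_over_Z(mbar, n_samples=50_000):
    """Monte Carlo estimate of L_8[mbar, const] / Z[mbar, const]: the delta
    fixes x_3 = mbar, chi_{B_8} becomes a positivity test, and the ratio
    is the mean of x_8 over the accepted samples."""
    kept = []
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0, size=8)
        x[2] = mbar
        x[7] = rng.uniform(-2 / np.sqrt(3), 1 / np.sqrt(3))
        if np.linalg.eigvalsh(rho_of_x(x))[0] >= 0:    # chi_{B_8}(x)
            kept.append(x[7])
    return float(np.mean(kept))
```

Estimates obtained this way are limited by plain Monte Carlo accuracy; the computations reported below use quasi-Monte Carlo integration instead.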
For the same reasons already accounted for in the first paper we will not try to determine $\bm{\rho}_{\bar{m} \land I}$ exactly, but will, as there, compute it with a combination of symmetry considerations of ${\mathbb{B}_8}$ and numerical integration. For all three kinds of prior knowledge considered in this paper the same symmetry arguments used in the companion paper also hold here, so again we have that $L_i[\bar{m},I]/Z[\bar{m},I]=0$ for all $i\neq3,8$ and any average value $-1 \leq \bar{m} \leq 1$. The assigned Bloch vector is thus given by $(0,0,\bar{m},0,0,0,0,L_8[\bar{m},I]/Z[\bar{m},I])$. This means that $\bm{\rho}_{\bar{m} \land I}$ lies in the $(x_3,x_8)$-plane and has, in the chosen eigenbasis, the diagonal matrix form \begin{equation} \label{eq:rhomatrix_diag} \bm{\rho}_{\bar{m} \land I}= \begin{pmatrix} \frac{1}{3}+\frac{\bar{m}}{2}+ \frac{L_8[\bar{m},I]}{2\sqrt{3} Z[\bar{m},I]} & 0 &0 \\ 0 & \frac{1}{3}-\frac{L_8[\bar{m},I]}{\sqrt{3} Z[\bar{m},I]} & 0 \\ 0 & 0 & \frac{1}{3}-\frac{\bar{m}}{2}+ \frac{L_8[\bar{m},I]}{2\sqrt{3} Z[\bar{m},I]} \end{pmatrix}. \end{equation} \subsection{Numerical integration, results and the maximum entropy method} \label{sec:exstasn3level_numintresme} We have used numerical integration\footnote{Using quasi-Monte Carlo integration in Mathematica 5.2 on a PC (Pentium~$4$ processor, $3$~GHz). The computation times are given in figures~\ref{fig:Fig_const} to~\ref{fig:Fig_gauss0}, and for more details on the numerical integration we again refer the reader to the companion paper~\citep{maanssonetal2006}.} to compute $L_8[\bar{m},I]/Z[\bar{m},I]$ for different prior knowledge and different values of $\bar{m}$.
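As an elementary consistency check of the diagonal form above (plain Python; the value substituted for the ratio $L_8[\bar{m},I]/Z[\bar{m},I]$ is hypothetical), the matrix has unit trace and satisfies $\rho_{11}-\rho_{33}=\bar{m}$ whatever that ratio is:

```python
from math import sqrt

def rho_diag(mbar, u):
    """Diagonal of rho_{mbar ^ I} from eq. (rhomatrix_diag),
    with u standing for the ratio L_8[mbar, I] / Z[mbar, I]."""
    return (1/3 + mbar/2 + u/(2*sqrt(3)),
            1/3 - u/sqrt(3),
            1/3 - mbar/2 + u/(2*sqrt(3)))

d = rho_diag(0.4, 0.1)   # hypothetical values of mbar and the ratio
# sum(d) = 1 and d[0] - d[2] = 0.4, independently of u
```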
The result for a constant prior density is shown in figure~\ref{fig:Fig_const}, where the blue curve (with bars indicating the numerical-integration uncertainties) is the Bloch vector corresponding to $\bm{\rho}_{\bar{m} \land I_\text{co}}$ plotted for different values of $x_3=\bar{m}$.\footnote{\label{ftn:symm} Note that we have for all three kinds of priors considered in this paper computed $L_8[\bar{m},I]/Z[\bar{m},I]$ only for non-negative values of $x_3=\bar{m}$, since by using the symmetry operation $(x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8) \mapsto (x_6,x_7,-x_3,x_4,-x_5,x_1,x_2,x_8)$ one can show that $L_8[\bar{m},I]/Z[\bar{m},I]$ is invariant under a sign change of $\bar{m}$. Further, we have not computed $L_8[\bar{m},I]/Z[\bar{m},I]$ for $\bar{m}=\pm 1$, since it follows from \citep[eq.~17]{maanssonetal2006} that $L_8[\bar{m},I]/Z[\bar{m},I]=1/\sqrt{3}$ is the only possibility in this case (which one also realises by looking at the figures).} It is interesting to compare $\bm{\rho}_{\bar{m} \land I_\text{co}}$ with the statistical operator obtained by the maximum entropy method \cite{jaynes1957b} for the measurement situation we are considering here. Given the expectation value $\expe{\bm{M}}$ of a Hermitian operator $\bm{M}$, corresponding to an observable $M$, the maximum entropy method assigns the statistical operator to the system that maximises the von Neumann entropy $S\coloneqq-\tr\{\bm{\rho}\ln{\bm{\rho}}\}$ and satisfies the constraint $\tr\{\bm{\rho}\bm{M}\}=\expe{\bm{M}}$. Having obtained an average value $\bar{M}$ from many instances of the same measurement performed on identically prepared systems, one conventionally sets $\expe{\bm{M}}=\bar{M}$. In our case the operator $\bm{M}$ would be identified as the Hermitian operator $\bm{\lambda}_3$ and $\bar{M}$ as $\bar{m}$. 
Hence the maximum entropy method corresponds here to an assignment of the statistical operator that maximises $S$ among all statistical operators satisfying $\expe{\bm{\lambda}_3}=\bar{m}$, and this statistical operator is given by \begin{gather} \bm{\rho}_{ME} \coloneqq \frac{\displaystyle \mathrm{e}^{-\mu(\bar{m})\, \bm{\lambda}_3}}{\displaystyle \tr\{\mathrm{e}^{-\mu(\bar{m}) \,\bm{\lambda}_3}\}}, \end{gather} where \begin{gather} \mu(\bar{m}) \coloneqq \ln\Biggl\{\frac{\displaystyle -\bar{m}+\sqrt{4-3\bar{m}^2}}{\displaystyle 2(\bar{m}+1)}\Biggr\}. \end{gather} This can be compared with the statistical operator $\bm{\rho}_{\bar{m} \land I}$ obtained instead by Bayesian quantum-state assignment; expressed in general form as in equation~\eqref{eq:ass_op_ninfty_b}, it is seen to be given by a weighted sum, with weight $g(\bm{x})\,\mathrm{d}\bm{x}$, of all statistical operators with $\expe{\bm{\lambda}_3}=x_3=\bar{m}$. In the case of a constant prior one sees from figure~\ref{fig:Fig_const} that $\bm{\rho}_{\bar{m} \land I_\text{co}}$ is in general different from $\bm{\rho}_{ME}$ (the red curve [without bars]). This means for instance that, if the maximum entropy method is a special case of Bayesian quantum-state assignment, the statistical operator obtained by the former method corresponds to a non-constant prior probability distribution $g(\bm{x})\,\mathrm{d}\bm{x}$ on ${\mathbb{B}_8}$ in the latter method. This conclusion in itself is perhaps not so surprising, but it raises an interesting question: does there exist a (non-constant on ${\mathbb{B}_8}$) prior distribution $g(\bm{\rho})\,\mathrm{d}\bm{\rho}$ such that Bayesian quantum-state assignment in general yields the same assigned statistical operator as the maximum entropy method?
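It can be checked numerically that the expression for $\mu(\bar{m})$ quoted above indeed enforces the constraint $\expe{\bm{\lambda}_3}=\bar{m}$ (a short Python sketch, using $\bm{\lambda}_3=\operatorname{diag}(1,0,-1)$ in the chosen eigenbasis):

```python
from math import exp, log, sqrt

def mu(mbar):
    """mu(mbar) of the maximum-entropy solution quoted above."""
    return log((-mbar + sqrt(4 - 3 * mbar**2)) / (2 * (mbar + 1)))

def rho_me_diag(mbar):
    """Diagonal of rho_ME = exp(-mu lambda_3) / tr{exp(-mu lambda_3)},
    with lambda_3 = diag(1, 0, -1)."""
    w = (exp(-mu(mbar)), 1.0, exp(mu(mbar)))
    Z = sum(w)
    return tuple(wi / Z for wi in w)

d = rho_me_diag(0.5)
# the constraint holds: d[0] - d[2] = <lambda_3> = 0.5
```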
A strong candidate is the ``Bures prior'' which has been proposed as the natural measure on the set of all statistical operators (see e.g.~\citep{byrdetal2001,slater2001,slater2001b,slater1999b,slater1996c}), but unfortunately it turns out to be difficult to do numerical integrations on it due to its complicated functional form, so we have not computed the assigned statistical operator in this case. Another interesting candidate is the ``Slater prior''~\citep{slater1995b}, which has also been suggested to be the natural measure on the set of all statistical operators, and the computed assigned statistical operator in this case is shown in figure~\ref{fig:Fig_slater}. One can see directly from the figure that although it is similar to the curve obtained by the maximum entropy method, we have found them to differ.\\ The computed assigned statistical operators for the Gaussian-like prior, centred on the projectors $\hat{\bm{\rho}}=\ketbra{1}{1}$ and $\hat{\bm{\rho}}=\ketbra{0}{0}$ with ``breadth'' $s=1/4$, are shown in figures \ref{fig:Fig_gauss1} and \ref{fig:Fig_gauss0}, respectively. Apart from being symmetric under a sign change of $\bar{m}$, as has already been noted in footnote~\ref{ftn:symm}, one can also show that $L_8[\bar{m},I_\text{ga}]/Z[\bar{m},I_\text{ga}]$ does not depend on the $\hat{x}_3$-coordinate of the statistical operator the prior is centred on. \section{Conclusions} \label{sec:conclusions} This was the second paper in a two-part study where the Bayesian quantum-state assignment method has been applied to a three-level system, showing that the numerical implementation is possible and simple in principle. This paper should not only be of theoretical interest but also be of use to experimentalists involved in state estimation.
We have analysed the situation where we are given the average of outcome values from $N$ repetitions of identical von~Neumann projective measurements performed on $N$ identically prepared three-level systems, when the number of repetitions $N$ becomes very large. From this measurement data together with different kinds of prior knowledge of the preparation, a statistical operator can be assigned to the system. By a combination of symmetry arguments and numerical integration we computed the assigned statistical operator for different average values and for a constant, and also for two examples of a non-constant, prior probability distribution.\\ The results were also compared with those obtained by the maximum entropy method. An interesting question is whether there exists a prior probability distribution that gives rise to an assigned statistical operator which is in general identical to the one given by the maximum entropy method, i.e.\ whether the maximum entropy method can be seen as a special case of Bayesian quantum-state assignment. In the case of a constant and a ``Slater prior'' on the Bloch vector space of a three-level system we saw that the assigned statistical operator did not agree with the one given by the maximum entropy method. It would therefore be interesting to try other kinds of priors, in particular ``special'' priors like the Bures one.\\ The generalisation of the present study to data involving different kinds of measurement is straightforward. Of course, in the general case one has to numerically determine a greater number of parameters (the $L_j[\bar{m},I]$) and therefore compute a greater number of integrals. \paragraph{Post scriptum:} During the preparation of this manuscript, P.
Slater kindly informed us that some of the integrals numerically computed here and in the previous paper can in fact be calculated analytically, using \emph{cylindrical algebraic decomposition}~\citep{arnonetal1984,davenportetal1987_t1993, mishra1993,jirstrand1995,brown2001b} with a parametrisation introduced by Bloore~\citep{bloore1976}; {cf.}~Slater~\citep{slater2006c}. This is true, {e.g.}, for the integrals involving the constant and Slater's priors. By this method Slater has also proven the exact validity of eq.~(52) of our previous paper~\citep{maanssonetal2006}. We plan to use and discuss more extensively this method in later versions of these papers. \begin{figure*}\label{fig:Fig_const} \end{figure*} \begin{figure*}\label{fig:Fig_slater} \end{figure*} \begin{figure*}\label{fig:Fig_gauss1} \end{figure*} \begin{figure*}\label{fig:Fig_gauss0} \end{figure*} \end{document}
\begin{document} \keywords{Poset, associahedron, cyclohedron, realization, configuration space, compactification} \begin{abstract} Given any connected poset $P$, we give a simple realization of Galashin's poset associahedron $\mathscr A(P)$ as a convex polytope in $\mathbb R^P.$ The realization is inspired by the description of $\mathscr A(P)$ as a compactification of the configuration space of order-preserving maps~$P \to \mathbb{R}.$ In addition, we give an analogous realization for Galashin's affine poset cyclohedra. \end{abstract} \title{A Realization of Poset Associahedra} \section{Introduction} Given a finite connected poset $P$, the poset associahedron $\mathscr A(P)$ is a simple, convex polytope of dimension $|P|-2$ introduced by Galashin~\cite{galashin2021poset}. Poset associahedra arise as a natural generalization of Stasheff's associahedra~\cite{haiman1984constructing, petersen2015Eulerian, StasheffCyclohedron, tamari1954monoides}, and were originally discovered by considering compactifications of the configuration space of order-preserving maps~${P\to\mathbb R.}$ These compactifications are generalizations of the Axelrod\nobreakdash--Singer compactification of the configuration space of points on a line~\cite{axelrod1994chern, lambrechts2010associahedron, sinha2004manifold}. Galashin constructed poset associahedra by performing stellar subdivisions on the polar dual of Stanley's \emph{order polytope}~\cite{stanley1986two}, but did not provide an explicit realization. Various poset associahedra and cyclohedra have already been studied including \emph{permutohedra}, \emph{associahedra}, \emph{operahedra}~\cite{laplante2022diagonal}, \emph{type B permutohedra}~\cite{fomin2005root}, and \emph{cyclohedra}~\cite{bott1994self}. Poset associahedra bear resemblance to graph associahedra, where the face lattice of each is described by a \emph{tubing criterion.} However, neither class is a subset of the other. 
When Carr and Devadoss introduced graph associahedra in~\cite{carr2006coxeter}, they distinguished between \emph{bracketings} and \emph{tubings} of a path, where the idea of bracketings does not naturally extend to any simple graph. In the case of poset associahedra, the idea of bracketings \emph{does} extend to every connected poset. Galashin~\cite{galashin2021poset} also introduces \emph{affine posets,} and analogous \emph{affine order polytopes} and \emph{affine poset cyclohedra}. In this paper, we provide a simple realization of poset associahedra and affine poset cyclohedra as an intersection of half-spaces, inspired by the compactification description and by a similar realization of graph associahedra due to Devadoss~\cite{devadoss2009realization}. In independent work~\cite{mantovani2023Poset}, Mantovani, Padrol, and Pilaud found a realization of poset associahedra as sections of graph associahedra. The authors of~\cite{mantovani2023Poset} also generalize from posets to oriented building sets (which combine a building set with an oriented matroid). \pagebreak \section{Background} \subsection{Poset Associahedra} We start by defining the poset associahedron. \begin{definition} \label{def:tubings} Let $(P, \preceq)$ be a finite poset. We make the following definitions: \begin{itemize} \item A subset $\tau \subseteq P$ is \emph{connected} if it is connected as an induced subgraph of the Hasse diagram of $P$. \item $\tau \subseteq P$ is \emph{convex} if whenever $a, c \in \tau$ and $b \in P$ such that $a \preceq b \preceq c$, then $b \in \tau$. \item A \emph{tube} of $P$ is a connected, convex subset $\tau \subseteq P$ such that $2 \le |\tau|$. \item A tube $\tau$ is \emph{proper} if $|\tau| \le |P|-1.$ \item Two tubes $\sigma, \tau \subseteq P$ are \emph{nested} if $\sigma \subseteq \tau$ or $\tau \subseteq \sigma.$ Tubes $\sigma$ and $\tau$ are \emph{disjoint} if $\tau \cap \sigma = \emptyset$.
\item For disjoint tubes $\sigma, \tau$ we say $\sigma \prec \tau$ if there exists $a \in \sigma, b \in \tau$ such that $a \prec b.$ \item A \emph{proper tubing} $T$ of $P$ is a set of proper tubes of $P$ such that any pair of tubes is nested or disjoint and the relation $\prec$ extends to a partial order on $T$. That is, whenever $\tau_1, \dots, \tau_k \in T$ with $\tau_1 \prec \dots \prec \tau_k$ then $\tau_k \not\prec \tau_1$. This is referred to as the \emph{acyclic tubing condition.} \item A proper tubing $T$ is \emph{maximal} if it is maximal by inclusion on the set of all proper tubings. \end{itemize} \begin{figure} \caption{Examples and non-examples of proper tubings.} \label{fig:tubing_examples} \end{figure} Figure \ref{fig:tubing_examples} shows examples and non-examples of proper tubings. \end{definition} \begin{definition} For a finite poset $P$, the \emph{poset associahedron} $\mathscr A(P)$ is a simple, convex polytope of dimension $|P|-2$ whose face lattice is isomorphic to the set of proper tubings ordered by reverse inclusion. That is, if $F_T$ is the face corresponding to $T$, then $F_S \subset F_T$ if one can make $S$ from $T$ by adding tubes. Vertices of $\mathscr A(P)$ correspond to maximal tubings of $P$. \end{definition} We realize poset associahedra as an intersection of half-spaces. Let $P$ be a finite poset and let $n = |P|$. We work in the ambient space $\mathbb R^P_{\Sigma = 0}$, the space of real\nobreakdash-valued functions on $P$ that sum to $0$. For a subset $\tau \subseteq P$, define a linear function $\alpha_\tau$ on $\mathbb R^P_{\Sigma = 0}$ by $$\alpha_\tau(p) := \sum\limits_{\substack{i \prec\mathrel{\mkern-5mu}\mathrel{\cdot} j \\ i, j \in \tau}} p_j - p_i.$$ Here the sum is taken over all covering relations contained in $\tau$.
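The definitions above can be made concrete in a few lines of Python (the diamond poset below is our own toy example): we enumerate the tubes of a four-element poset from its covering relations and evaluate $\alpha_\tau$ on an order-preserving map.

```python
from itertools import combinations

# toy poset: the diamond 0 < 1, 0 < 2, 1 < 3, 2 < 3,
# specified by its covering relations (the edges of the Hasse diagram)
covers = [(0, 1), (0, 2), (1, 3), (2, 3)]
elements = range(4)

def leq(a, b):
    """a <= b in the poset, via reachability along covering relations."""
    return a == b or any(i == a and leq(j, b) for i, j in covers)

def is_tube(tau):
    """Connected (in the Hasse diagram) convex subset with >= 2 elements."""
    tau = set(tau)
    if len(tau) < 2:
        return False
    convex = all(b in tau for a in tau for c in tau for b in elements
                 if leq(a, b) and leq(b, c))
    comp, frontier = set(), {min(tau)}
    while frontier:                     # flood-fill inside tau
        comp |= frontier
        frontier = {y for i, j in covers for x, y in ((i, j), (j, i))
                    if x in comp and y in tau and y not in comp}
    return convex and comp == tau

def alpha(tau, p):
    """alpha_tau(p): sum of p_j - p_i over covering relations inside tau."""
    return sum(p[j] - p[i] for i, j in covers if i in tau and j in tau)

tubes = [set(t) for k in (2, 3, 4)
         for t in combinations(elements, k) if is_tube(t)]
# the diamond has 7 tubes: the four edges, {0,1,2}, {1,2,3}, and P itself

p = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}   # an order-preserving map
a = alpha({0, 1, 2}, p)                 # (1 - 0) + (2 - 0) = 3.0
```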
We define the half-space $h_\tau$ and the hyperplane $H_\tau$ by $$\begin{aligned} h_\tau &:= \{p \in \mathbb R^P_{\Sigma = 0} \mid \alpha_\tau(p) \ge n^{2|\tau|}\} &\text{ and } \\H_\tau &:= \{p \in \mathbb R^P_{\Sigma = 0} \mid \alpha_\tau(p) = n^{2|\tau|}\}. & \end{aligned}$$ The following is our main result in the finite case: \begin{theorem}\label{thm:main_thm_finite} If $P$ is a finite, connected poset, the intersection of $H_P$ with $h_\tau $ for all proper tubes $\tau$ gives a realization of $\mathscr A(P)$. \end{theorem} \subsection{Affine Poset Cyclohedra} Now we describe affine poset cyclohedra. \begin{definition} An \emph{affine poset} of \emph{order} $n \ge 1$ is a poset $\tilde P = (\Z, \preceq)$ such that: \begin{enumerate} \item For all $i \in \Z, i \preceq i+n$; \item $\tilde P$ is $n$-periodic: For all $i, j \in \Z, i \preceq j \Leftrightarrow i + n \preceq j + n$; \item $\tilde P$ is \emph{strongly connected}: for all $i, j \in \Z$, there exists $k \in \Z$ such that $i \preceq j + kn$. \end{enumerate} The \emph{order} of $\tilde P$ is denoted $|\tilde P| := n$. \end{definition} Following Galashin~\cite{galashin2021poset}, we give analogous versions of Definition \ref{def:tubings}. We give them only where they differ from the finite case. \begin{definition} Let $\tilde P = (\Z, \preceq)$ be an affine poset. \begin{itemize} \item A \emph{tube} of $\tilde P$ is a connected, convex subset $\tau \subseteq \tilde P$ such that $2 \le |\tau|$ and either $\tau = \tilde P$ or $\tau$ has at most one element in each residue class modulo $n$. \item A collection of tubes $T$ is \emph{$n$-periodic} if for all $\tau \in T, k \in \Z$, $\tau + kn \in T$. \item A \emph{proper tubing} $T$ of $\tilde P$ is an $n$-periodic set of proper tubes of $\tilde P$ that satisfies the acyclic tubing condition and such that any pair of tubes is nested or disjoint.
\end{itemize} Figure \ref{fig:aff_claw} gives an example of an affine poset of order $4$ and a maximal tubing of that poset. \begin{figure} \caption{ An affine poset of order $4$ and a maximal tubing} \label{fig:aff_claw} \end{figure} \end{definition} \begin{definition} For an affine poset $\tilde P$, the \emph{affine poset cyclohedron} $\mathscr C(\tilde P)$ is a simple, convex polytope of dimension $|\tilde P|-1$ whose face lattice is isomorphic to the set of proper tubings ordered by reverse inclusion. Vertices of $\mathscr C(\tilde P)$ correspond to maximal tubings of $\tilde P$. \end{definition} We also realize affine poset cyclohedra as an intersection of half-spaces. Let $\tilde P$ be an affine poset and let $n = |\tilde P|$. Fix some constant $c \in \R^+$. We define the space of \emph{affine maps} $\R^{\tilde P}$ as the set of bi-infinite sequences $\mathbf{\tilde x} = (\tilde x_i)_{i \in \Z}$ such that $\tilde x_i = \tilde x_{i+n}+c$ for all $i \in \Z$. Let $\mathcal C \subset \R^{\tilde P}$ be the subspace consisting of all constant maps. We work in the ambient space $\R^{\tilde P}/\mathcal C$ where the constant $c$ in the definition of affine maps is given by $c = n^{2(n+1)}$. For a finite subset $\tau \subseteq \tilde P$, define a linear function $\alpha_\tau$ on $\mathbb R^{\tilde P}/\mathcal C$ by $$\alpha_\tau(\mathbf{\tilde x}) := \sum\limits_{\substack{i \prec\mathrel{\mkern-5mu}\mathrel{\cdot} j \\ i, j \in \tau}} \tilde x_j - \tilde x_i.$$ Again, the sum is taken over all covering relations contained in $\tau$. We define the half-space $h_\tau$ and the hyperplane $H_\tau$ by $$\begin{aligned} h_\tau &:= \{p \in \mathbb R^{\tilde P}/{\mathcal C} \mid \alpha_\tau(p) \ge n^{2|\tau|}\} &\text{ and } \\H_\tau &:= \{p \in \mathbb R^{\tilde P}/{\mathcal C} \mid \alpha_\tau(p) = n^{2|\tau|}\}. & \end{aligned}$$ \begin{remark}\label{re:nperiodic} Observe that for any tube $\tau$ and $k \in \Z$, $h_\tau = h_{\tau + kn}$.
\end{remark} The following is our main result in the affine case: \begin{theorem}\label{thm:main_thm_affine} If $\tilde P$ is an affine poset, the intersection of $h_\tau $ for all proper tubes $\tau$ gives a realization of $\mathscr C(\tilde P)$. \end{theorem} \subsection{An interpretation of tubings} When $P$ is a chain, $\mathscr A(P)$ recovers the classical associahedron. There is a simple interpretation of proper tubings that explains all of the conditions above in terms of \emph{generalized words.} We can understand the classical associahedron as follows: Let $P = (\{1, ..., n\}, \le)$ be a chain. We can think of the chain as a word we want to multiply together with the rule that two elements can be multiplied if they are connected by an edge. A maximal tubing of $P$ is a way of disambiguating the order in which one performs the multiplication. If a pair of adjacent elements $x$ and $y$ have a pair of brackets around them, they contract along the edge connecting them and replace $x$ and $y$ by their product. \begin{figure} \caption{ Multiplication of a word and of a generalized word} \label{fig:generalized_words} \end{figure} Similarly, we can understand the Hasse diagram of an arbitrary poset $P$ as a \emph{generalized word} we would like to multiply together. Again, we are allowed to multiply two elements if they are connected by an edge, but when multiplying elements, we contract along the edge connecting them and then take the transitive reduction of the resulting directed graph. That is, we identify the two elements and take the resulting quotient poset. A maximal tubing is again a way of disambiguating the order of the multiplication. See Figure \ref{fig:generalized_words} for an illustration of this multiplication. This perspective is discussed in relation to operahedra in~\cite[Section 2.1]{laplante2022diagonal} when the Hasse diagram of $P$ is a rooted tree. 
\section{Configuration spaces and compactifications} We turn our attention to the relationship between poset associahedra and configuration spaces. For a poset $P$, the \emph{order cone} $$\mathscr L(P) := \{p \in \mathbb R^P_{\Sigma = 0} \mid p_i \le p_j \text{ for all } i \preceq j\}$$ is the set of order-preserving maps $P \to \mathbb R$ whose values sum to $0$. Fix a constant $c \in \mathbb R^+$. The \emph{order polytope,} first defined by Stanley~\cite{stanley1986two} and extended by Galashin~\cite{galashin2021poset}, is the $(|P|-2)$-dimensional polytope $$\mathscr O(P) := \{p \in \mathscr L(P) \mid \alpha_P(p) = c\}.$$ \begin{remark} \label{re:stanley} When $P$ is \emph{bounded}, that is, has a unique maximum $\hat 1$ and minimum $\hat 0$, this construction is projectively equivalent to Stanley's order polytope where we replace the conditions of the coordinates summing to $0$ and $\alpha_P(p) = c$ with the conditions $p_{\hat 0} = 0$ and $p_{\hat 1} = 1$, see~\cite[Remark~2.5]{galashin2021poset}. \end{remark} Galashin~\cite{galashin2021poset} obtains the poset associahedra by an alternative compactification of $\mathscr O^\circ(P)$, the interior of $\mathscr O(P)$. We describe this compactification informally, as it serves as motivation for the realization in Theorem \ref{thm:main_thm_finite}. A point is on the boundary of $\mathscr O(P)$ when any of the inequalities in the order cone achieves equality. The faces of $\mathscr O(P)$ are in bijection with proper tubings of $P$ such that all tubes are disjoint. Let $T$ be such a tubing. If $p$ is in the face corresponding to $T$ and $\tau \in T$ then $p_i = p_j$ for $i, j \in \tau$. We can think of the point $p$ in the face corresponding to $T$ as being ``what happens in $\mathscr O(P)$'' when for each $\tau \in T$, the coordinates are infinitesimally close. However, by taking all coordinates in $\tau$ to be equal, we lose information about their relative ordering.
In $\mathscr A(P)$, we still think of the coordinates in $\tau$ as being infinitesimally close, but we are still interested in their configuration. Upon zooming in, this is parameterized by the order polytope of the subposet $(\tau, \preceq)$. We iterate this process, allowing points in $\tau$ to be infinitesimally closer, and so on. We illustrate this in Figure \ref{fig:compactification}. This idea is a common explanation of the Axelrod\nobreakdash--Singer compactification of $\mathscr O^\circ(P)$ when $P$ is a chain, see~\cite{axelrod1994chern, lambrechts2010associahedron, sinha2004manifold}. \begin{figure} \caption{ A vertex in $\mathscr O(P)$ vs. $\mathscr A(P).$} \label{fig:compactification} \end{figure} The idea of the realization in Theorem \ref{thm:main_thm_finite} is to replace the notions of \emph{infinitesimally close} and \emph{infinitesimally closer} with being \emph{exponentially close} and \emph{exponentially closer.} For $p \in \mathscr L(P)$, $\alpha_\tau$ acts as a measure of how close the coordinates of $p|_\tau$ are. We can make this precise with the following definition and lemma. \begin{definition}\label{def:diameter} For $S \subseteq P$ and $p \in \mathbb R^P$, define the \emph{diameter} of $p$ relative to $S$ by $$\operatorname{diam}_S(p) = \max\limits_{i, j \in S} |p_i - p_j|.$$ That is, $\operatorname{diam}_S(p)$ is the diameter of $\{p_i : i \in S\}$ as a subset of $\mathbb R$. \end{definition} \begin{lemma} \label{le:alpha_bound} Let $\tau \subseteq P$ be a tube and let $p \in \mathscr L(P)$. Then $$\operatorname{diam}_\tau(p) \le \alpha_\tau(p) \le \frac{n^2}{4}\operatorname{diam}_\tau(p).$$ \end{lemma} \begin{proof} By the triangle inequality and as $\tau$ is connected, $\operatorname{diam}_\tau(p) \le \alpha_\tau(p)$.
For the other inequality, $$\begin{aligned} \alpha_\tau(p) &= \sum\limits_{\substack{i \prec\mathrel{\mkern-5mu}\mathrel{\cdot} j \\i, j \in \tau}} p_j - p_i \\&\le \sum\limits_{\substack{i \prec\mathrel{\mkern-5mu}\mathrel{\cdot} j \\i, j \in \tau}} \operatorname{diam}_\tau(p) \\&\le \frac{1}{4}n^2 \operatorname{diam}_\tau(p) \end{aligned}$$ The inequality in the last line comes from the fact that there are at most $\frac{n^2}{4}$ covering relations in $P$, which follows from Mantel's Theorem and the fact that Hasse diagrams are triangle\nobreakdash-free. \end{proof} In particular, for $p \in \mathscr L(P)$, if $p \in H_\tau$, then $\{p_i \mid i \in \tau\}$ is clustered tightly together compared to any tube containing $\tau$. If $p \in h_\tau$, then $\{p_i \mid i \in \tau\}$ is spread far apart compared to any tube contained in $\tau$. \section{Realizing poset associahedra} We are now prepared to prove Theorem \ref{thm:main_thm_finite}. Define $$\mathscr A(P) := \bigcap\limits_{\sigma \subset P} h_\sigma \cap H_P$$ where the intersection is over all tubes of $P$. Note that $\mathscr A(P) \subseteq \mathscr L(P)$ as if $i \prec\mathrel{\mkern-5mu}\mathrel{\cdot} j$ is a covering relation, then for $p \in h_{\{i, j\}}$, $p_i \le p_j$. Theorem \ref{thm:main_thm_finite} follows as a result of three lemmas: \begin{lemma}\label{le:vertices} If $T$ is a maximal tubing, then $$v^T := \bigcap\limits_{\tau \in T \cup \{P\}} H_\tau $$ is a point. \end{lemma} \begin{lemma}\label{le:incompatible} If $T$ is a collection of tubes that do not form a proper tubing, then $$\bigcap\limits_{\tau \in T} H_\tau \cap \mathscr A(P) = \emptyset.$$ \end{lemma} \begin{lemma}\label{le:interior} If $T$ is a maximal tubing and $\tau \notin T$ is a proper tube, then $\alpha_\tau(v^T) > n^{2|\tau|}.$ That is, $v^T$ lies in the interior of $h_\tau$. \end{lemma} Lemma \ref{le:vertices} follows from a standard induction argument. 
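The two-sided bound in Lemma \ref{le:alpha_bound} is easy to probe numerically. The Python sketch below (illustrative only; the diamond poset $\hat 0 < a, b < \hat 1$ and the sampled points are hypothetical test data) checks $\operatorname{diam}_\tau(p) \le \alpha_\tau(p) \le \frac{n^2}{4}\operatorname{diam}_\tau(p)$:

```python
# Check diam_tau(p) <= alpha_tau(p) <= (n^2/4) * diam_tau(p) on the diamond
# poset with the covering relations listed below.  Illustrative sketch only;
# the poset and the random order-preserving points are hypothetical test data.
import random

covers = [("bot", "a"), ("bot", "b"), ("a", "top"), ("b", "top")]
n = 4  # number of elements in the tube

def alpha(p):
    """Sum of p_j - p_i over covering relations i <. j (the linear form alpha_tau)."""
    return sum(p[j] - p[i] for i, j in covers)

def diam(p):
    return max(p.values()) - min(p.values())

random.seed(0)
for _ in range(100):
    lo, hi = sorted(random.uniform(-1, 1) for _ in range(2))
    # an order-preserving point: bot <= a, b <= top
    p = {"bot": lo, "top": hi,
         "a": random.uniform(lo, hi), "b": random.uniform(lo, hi)}
    assert diam(p) <= alpha(p) <= (n * n / 4) * diam(p) + 1e-12
```

On the diamond, $\alpha_\tau(p) = 2(p_{\hat 1} - p_{\hat 0}) = 2\operatorname{diam}_\tau(p)$, safely inside the bound $n^2/4 = 4$.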
\begin{proof}[Proof of Lemma \ref{le:incompatible}] If $T$ is not a collection of tubes that form a proper tubing, then at least one of the following two cases holds: \begin{enumerate}[label = (\arabic*)] \item There is a pair of non-nested and non-disjoint tubes $\tau_1, \tau_2$ in $T$. \item There is a sequence of disjoint tubes $\tau_1, ..., \tau_k$ such that $\tau_1 \prec \dots \prec \tau_k \prec \tau_1$. \end{enumerate} The idea of the proof is as follows: For $S \subseteq P$, define the \emph{convex hull} of $S$ as $$\operatorname{conv}(S) := \{b \in P \mid \exists a, c \in S : a \le b \le c \}.$$ Observe that if $p \in \mathscr L(P),$ then $\operatorname{diam}_S(p) \le \operatorname{diam}_{\operatorname{conv}(S)}(p)$. Take $\sigma = \operatorname{conv}(\tau_1 \cup \dots \cup \tau_k)$. One can show that $\sigma$ is a tube, so Lemma \ref{le:alpha_bound} tells us that for each $\tau_i$, $\operatorname{diam}_{\tau_i}(p)$ is very small compared to $n^{2|\sigma|}$. As the tubes either intersect or are cyclic, one can show this forces $\operatorname{diam}_{\sigma}(p)$ to also be small, so $\alpha_\sigma(p) < n^{2|\sigma|}$. More concretely, suppose that $$p \in \bigcap H_{\tau_i} \cap \mathscr L(P) .$$ Note that for all $i$, $|\sigma| \ge |\tau_i| + 1$ and $\operatorname{diam}_{\tau_i}(p) \le n^{2(|\sigma|-1)}$. In case (1), let $a,b \in \sigma$. There exists some $x \in \tau_1 \cap \tau_2$, so $$\begin{aligned} |p_a - p_b| &\le |p_a - p_x| + |p_x - p_b| \\&\le \operatorname{diam}_{\tau_1}(p) + \operatorname{diam}_{\tau_2}(p) \\ &\le 2n^{2(|\sigma|-1)} \\&<n^{2|\sigma|} . \end{aligned}$$ Hence $\operatorname{diam}_\sigma(p) < n^{2|\sigma|}$, so by Lemma \ref{le:alpha_bound}, $p \notin h_\sigma$. Now we move to case (2). Suppose there is a sequence of disjoint tubes $\tau_1, ..., \tau_k$ such that for each $i$ there exists $x_i, y_i \in \tau_i$ where $x_i \prec y_{i+1}$ where we take the indices $\text{mod }{k}$.
Then: $$\begin{aligned} p_{y_i} - \operatorname{diam}_{\tau_i}(p) &\le p_{x_i} \\p_{x_i} &\le p_{y_{i+1}} \\p_{y_{i+1}} &\le p_{x_{i+1}} + \operatorname{diam}_{\tau_{i+1}}(p) \end{aligned}$$ Furthermore, since $\tau_i \tand \tau_{i+1}$ are disjoint, $|\tau_i| \le |\sigma| - 2$ and $\operatorname{diam}_{\tau_i}(p) \le n^{2(|\sigma|-2)}$. Combining these we get $$p_{y_i} \le p_{y_{i+1}} + 2n^{2(|\sigma|-2)}.$$ Then we have: $$\begin{aligned} p_{y_1} & \le p_{y_{i}} + 2in^{2(|\sigma|-2)} & \text{ and }\\ p_{y_{i}} + 2in^{2(|\sigma|-2)} &\le p_{y_1} + 2(k+1) n^{2(|\sigma|-2)}. \end{aligned}$$ These yield $$\begin{aligned} p_{y_1} - p_{y_i} &\le 2in^{2(|\sigma|-2)} & \text{ and }\\ p_{y_i} - p_{y_1} &\le 2(k - i+1) n^{2(|\sigma|-2)}. \end{aligned}$$ As $i, k-i+1 \le k \le \frac{n}{2}$, we have $|p_{y_1} - p_{y_i}| \le n^{2(|\sigma|-1)}$. Finally, if $z_i \in \tau_i, z_j \in \tau_j$, then $$\begin{aligned} |p_{z_i} - p_{z_j}| &\le |p_{z_i} - p_{y_i}| +|p_{y_i} - p_{y_1}| + |p_{y_1} - p_{y_j}| + |p_{y_j} - p_{z_j}| \\ &\le 4 n^{2(|\sigma|-1)} \\ &< n^{2|\sigma|}. \end{aligned}$$ Hence $\operatorname{diam}_{\sigma} (p) < n^{2|\sigma|}$, and by Lemma \ref{le:alpha_bound}, $p \notin h_\sigma$. \end{proof} \begin{proof}[Proof of Lemma \ref{le:interior}] Let $T$ be a maximal tubing of $P$ and let $\tau \notin T$ be a tube. Define the \emph{convex hull} of $\tau$ \emph{relative} to $T$ by $$\operatorname{conv}_T(\tau) := \min\{\sigma \in T \mid \tau \subset \sigma\}.$$ Let $\sigma = \operatorname{conv}_T(\tau)$. $T$ partitions $\sigma$ into a lower set $A$ and an upper set $B$ where $A$ and $B$ are either tubes or singletons. Furthermore, $A$ and $B$ both intersect $\tau$. See Figure \ref{fig:Interior_Lemma_Sketch} for an example illustrating this. \begin{figure} \caption{An example illustrating the proof of Lemma \ref{le:interior}.} \label{fig:Interior_Lemma_Sketch} \end{figure} The idea of the proof is as follows: Let $p = v^T$.
By Lemma \ref{le:alpha_bound}, $\operatorname{diam}_A(p)$ and $\operatorname{diam}_B(p)$ are both very small compared to $\operatorname{diam}_\sigma(p)$. Then for any $a \in A, b \in B$, $|p_a - p_b|$ must be large. As $\tau$ intersects both $A$ and $B$, $\operatorname{diam}_\tau(p)$ must be large and hence $p \in h_\tau$. See Figure \ref{fig:Interior_Lemma_Diameter} for an illustration of this. More precisely, we show that for any $i \in A, j \in B$, $p_j - p_i > (n^2)^{|\tau|}$, which implies that $p$ lies in the interior of $h_\tau$. \begin{figure} \caption{ If $\operatorname{diam}_A(p)$ and $\operatorname{diam}_B(p)$ are small and $\operatorname{diam}_\sigma(p)$ is large, then $\operatorname{diam}_\tau(p)$ is large.} \label{fig:Interior_Lemma_Diameter} \end{figure} Observe that: $$\sum\limits_{x \prec\mathrel{\mkern-5mu}\mathrel{\cdot} y} p_y - p_x = \underbrace{\sum\limits_{\substack{x \prec\mathrel{\mkern-5mu}\mathrel{\cdot} y \\ x, y \in A}} (p_y - p_x)}_{\substack{\le (n^2)^{|\sigma|-1}\\ < \frac18 (n^2)^{|\sigma|}}}+ \underbrace{\sum\limits_{\substack{x \prec\mathrel{\mkern-5mu}\mathrel{\cdot} y \\ x, y \in B}} (p_y - p_x )}_{\substack{\le (n^2)^{|\sigma|-1}\\ < \frac18 (n^2)^{|\sigma|}}}+ \sum\limits_{\substack{x \prec\mathrel{\mkern-5mu}\mathrel{\cdot} y \\ x \in A, y \in B}} (p_y - p_x).$$ Fix $i \in A \tand j \in B$. By Lemma \ref{le:alpha_bound}, for any $x \in A, y \in B$, $$\begin{aligned} p_y - p_x &\le p_j - p_i + \operatorname{diam}_A(p) + \operatorname{diam}_B(p) \\& \le p_j - p_i + 2n^{2(|\sigma|-1)}.
\end{aligned}$$ Again, noting that the number of covering relations in $\sigma$ is at most $\frac{n^2}{4}$ we obtain: $$\begin{aligned} \sum\limits_{\substack{x \prec\mathrel{\mkern-5mu}\mathrel{\cdot}_\sigma y \\ x \in A, y \in B}} (p_y - p_x) &\le \sum\limits_{\substack{x \prec\mathrel{\mkern-5mu}\mathrel{\cdot}_\sigma y \\ x \in A, y \in B}} (p_j - p_i + 2(n^2)^{|\sigma|-1}) \\&\le \frac{n^2}{4}\left( p_j - p_i + 2(n^2)^{|\sigma|-1} \right) \\&= \frac{n^2}{4}(p_j - p_i) + \frac12(n^2)^{|\sigma|}. \end{aligned}$$ Combining all of this we get: $$ \begin{aligned} \sum\limits_{x \prec\mathrel{\mkern-5mu}\mathrel{\cdot}_\sigma y} p_y - p_x &= (n^2)^{|\sigma|} \\& < \frac{1}{8} (n^2)^{|\sigma|} + \frac{1}{8} (n^2)^{|\sigma|} + \frac{1}{2} (n^2)^{|\sigma|} + \frac{n^2}{4}(p_j - p_i) \\&\le \frac{3}{4}(n^2)^{|\sigma|} + \frac{n^2}{4}(p_j - p_i) \end{aligned} $$ Then $(n^2)^{|\sigma|-1} < (p_j - p_i)$ and as $|\tau| \le |\sigma| - 1$, $p$ is in the interior of $h_\tau$. \end{proof} \begin{remark} A similar approach for realizing graph associahedra is taken by Devadoss~\cite{devadoss2009realization}. For a graph $G=(V,E)$, Devadoss realizes the graph associahedron of $G$ by taking the supporting hyperplane for a graph tube $\tau$ to be $$\left\{p \in \mathbb R^V \mid \sum\limits_{i \in \tau} p_i = 3^{|\tau|}\right\}.$$ One difference is that Devadoss realizes graph associahedra by cutting off slices of a simplex whereas we cut off slices of an order polytope. When the Hasse diagram of $P$ is a tree, the poset associahedron is combinatorially equivalent to the graph associahedron of the line graph of the Hasse diagram. In this case, the two realizations have linearly equivalent normal fans. If the Hasse diagram of $P$ is a path graph, then both realizations have linearly equivalent normal fans to the realization of the associahedron due to Shnider and Sternberg~\cite{StasheffCyclohedron}. 
\end{remark} \section{Realizing affine poset cyclohedra} The proofs in the affine case are nearly identical to the finite case with some additional technical components. The similarity comes from the fact that Lemma \ref{le:alpha_bound} still applies. We highlight where the proofs are different. Let $\tilde P$ be an affine poset of order $n$. Define $$\begin{aligned} \mathscr C(\tilde P) &:= \bigcap\limits_{\sigma \subset \tilde P} h_\sigma & \text{ and }\\ \mathscr L(\tilde P) &:= \{p \in \mathbb R^{\tilde P}/\mathcal C \mid p_i \le p_j \text{ for all } i \preceq j\}. & \end{aligned}$$ where the intersection is over all tubes of $\tilde P$. Note that $\mathscr C(\tilde P) \subseteq \mathscr L(\tilde P)$ as if $i \prec\mathrel{\mkern-5mu}\mathrel{\cdot} j$ is a covering relation, then for $p \in h_{\{i, j\}}$, $p_i \le p_j$. Theorem \ref{thm:main_thm_affine} follows as a result of three lemmas: \begin{lemma}\label{le:vertices_affine} If $T$ is a maximal tubing, then $$v^T := \bigcap\limits_{\tau \in T} H_\tau $$ is a point. \end{lemma} \begin{lemma}\label{le:incompatible_affine} If $T$ is a collection of tubes that do not form a proper tubing, then $$\bigcap\limits_{\tau \in T} H_\tau \cap \mathscr C(\tilde P) = \emptyset.$$ \end{lemma} \begin{lemma}\label{le:interior_affine} If $T$ is a maximal tubing and $\tau \notin T$ is a proper tube, then $\alpha_\tau(v^T) > n^{2|\tau|}.$ That is, $v^T$ lies in the interior of $h_\tau$. \end{lemma} \begin{proof}[Proof of Lemma \ref{le:vertices_affine}] Let $T$ be a maximal tubing and take any $\sigma \in T$ such that $|\sigma| = n$. Then restricting to $\tilde P|_{\sigma}$, Lemma \ref{le:vertices} implies that $$\bigcap\limits_{\substack{\tau \in T \\ \tau\subseteq \sigma}} H_\tau$$ is a point.
However, as $T$ is $n$-periodic, $$\bigcap\limits_{\substack{\tau \in T \\ \tau\subseteq \sigma}} H_\tau = \bigcap\limits_{\substack{\tau \in T}} H_\tau .$$ \end{proof} \begin{proof}[Proof of Lemma \ref{le:incompatible_affine}] By Remark \ref{re:nperiodic}, we can assume $T$ is $n$-periodic. The proof is almost identical to the proof of Lemma \ref{le:incompatible}. Recall that $$\mathscr L(\tilde P) = \{p \in \mathbb R^{\tilde P}/\mathcal C \mid p_i \le p_j \text{ for all } i \preceq j\}$$ and note that $$\mathscr L(\tilde P) \subseteq \bigcap\limits_{\substack{i, j \in \tilde P \\ i \prec\mathrel{\mkern-5mu}\mathrel{\cdot} j}} h_{\{i, j\}}.$$ Let $$p \in \bigcap H_{\tau_i} \cap \mathscr L(\tilde P) .$$ We again break into two cases: \begin{enumerate}[label = (\arabic*)] \item There is a pair of non-nested and non-disjoint tubes $\tau_1, \tau_2$ in $T$. \item All tubes in $T$ are pairwise nested or disjoint and there is a sequence of disjoint tubes $\tau_1, ..., \tau_k$ such that $\tau_1 \prec \dots \prec \tau_k \prec \tau_1$. \end{enumerate} The only difference in the proof occurs in case (1). Here, it is possible that there exists $x \in \tau_1 \cup \tau_2$ such that $x + n \in \tau_1 \cup \tau_2$ as well. In this case, the proof of Lemma \ref{le:incompatible} still implies that $\operatorname{diam}_{\tau_1 \cup \tau_2}(p) \le \operatorname{diam}_{\tau_1}(p) + \operatorname{diam}_{\tau_2}(p) \le 2n^{2n}$. However, $|p_{x+n} - p_{x}| = n^{2(n+1)} > 2n^{2n}$, a contradiction. \end{proof} \begin{proof}[Proof of Lemma \ref{le:interior_affine}] Let $T$ be a maximal tubing and $\tau \notin T$ be a proper tube. Let $p = v^T$. We claim that $\alpha_\tau(p) > n^{2|\tau|}.$ The only difference from the proof of Lemma \ref{le:interior} is that $\tau$ may not be contained by any tube in $T$ so $\operatorname{conv}_T(\tau)$ may not be well-defined. In this case, there exists $A \in T$ such that $|A| = n$, $A \cap \tau \neq \emptyset, \tand (A+n) \cap \tau \neq \emptyset$.
Here, $(A+n)$ acts the same as $B$ in the finite case, except the argument is much simpler. Let $i \in A \cap \tau, j \in (A+n) \cap \tau$. Observe that $\operatorname{diam}_A(p), \operatorname{diam}_{(A+n)}(p) \le n^{2n}$ and that $i + n \in (A+n)$. Since $j, i+n \in (A+n)$, we have $p_j \ge p_{i+n} - n^{2n}$, so $$\begin{aligned} p_j - p_i &\ge (p_{i+n} - n^{2n}) - p_i \\&= n^{2(n+1)} - n^{2n} \\&> n^{2|\tau|}. \end{aligned}$$ Hence $\operatorname{diam}_{\tau}(p) > n^{2|\tau|}$ and by Lemma \ref{le:alpha_bound}, $\alpha_\tau(p) > n^{2|\tau|}$. \end{proof} \section{Remarks and Questions} \begin{remark} Let $(P, \preceq)$ be a bounded poset. In Remark \ref{re:stanley}, we discuss how $\mathscr O(P)$ can be realized as the set of all $p \in \R^P$ such that $p_{\hat 0} = 0$, $p_{\hat 1} = 1$, and $p_i \le p_j$ whenever $i \preceq j$. We can similarly realize $\mathscr A(P)$ as follows: Fix $0 < \varepsilon < \frac{1}{n^2}$. For a proper tube $\tau \subset P$, let $$h'_\tau = \{p \in \R^P \mid \alpha_\tau(p) < \varepsilon^{n-|\tau|}\}.$$ Then $\mathscr A(P)$ is realized as the intersection over all $h'_\tau$ with the hyperplanes $$\{p_{\hat 0} = 0\} \tand \{p_{\hat 1} = 1\}.$$ Letting $\varepsilon \to 0$, we obtain $\mathscr O(P)$ as a limit of $\mathscr A(P)$ as shown in Figure~\ref{fig:CubeLimit}. \begin{figure} \caption{ $\mathscr O(P)$ as a limit of $\mathscr A(P)$} \label{fig:CubeLimit} \end{figure} \end{remark} \begin{remark} The key piece to the realizations in Theorems \ref{thm:main_thm_finite} and \ref{thm:main_thm_affine} is the linear form $\alpha_\tau$, where $\alpha_\tau$ acts as an approximation of $\operatorname{diam}_\tau$. In particular, let $\tau$ be a tube and let $p \in \mathscr L(P)$. Then: \begin{itemize} \item $\alpha_\tau(p) \ge 0$. \item $\alpha_\tau(p) = 0 \Leftrightarrow p|_{\tau}$ is constant. \item If $\sigma \subseteq \tau$ is a tube, then $\alpha_\sigma(p) \le \alpha_\tau(p)$. \end{itemize} However, there are many other options for choice of $\alpha_\tau$ that could fill this role.
Some other options include: \begin{enumerate} \item Sum over all pairs $i \prec j$ in $\tau$. $$\alpha_\tau(p) = \sum\limits_{\substack{i \prec j \\ i, j \in \tau}} p_j - p_i.$$ \item Let $A \tand B$ be the set of minima and maxima of the restriction $P|_{\tau}$ respectively. $$\alpha_\tau(p) = \sum\limits_{\substack{i \prec j \\ i \in A, j \in B}} p_j - p_i.$$ \item Fix a spanning tree $T$ in the Hasse diagram of $\tau$. Let $E = \{(i, j) \mid i \prec\mathrel{\mkern-5mu}\mathrel{\cdot}_T j\}$ be the set of edges in $T$. $$\alpha_\tau(p) = \sum\limits_{(i, j) \in E} p_j - p_i.$$ An advantage of this option is that we would have $$\operatorname{diam}_\tau(p) \le \alpha_\tau(p) \le (n-1)\operatorname{diam}_\tau(p).$$ \end{enumerate} A similar realization can be obtained for each choice of $\alpha_\tau$. \end{remark} \begin{question} \label{que:h_stat} Recall that for a simple $d$-dimensional polytope $P$, the $f$-vector and $h$-vector of $P$ are given by $(f_0, \dots, f_d)$ \tand $(h_0, \dots, h_d)$ where $f_i$ is the number of $i$-dimensional faces and $$ \sum\limits_{i = 0}^d f_i t^i = \sum\limits_{i = 0}^d h_i (t+1)^i. $$ Postnikov, Reiner, and Williams~\cite{postnikov2008faces} found a statistic on maximal tubings of graph associahedra of chordal graphs where $$\sum\limits_{T} t^{\operatorname{stat}(T)} = \sum h_i t^i.$$ In particular, they define a map $T \mapsto w_T$ from maximal tubings of a graph on $n$ vertices to the set of permutations $S_n$ such that $\operatorname{stat}(T) = \operatorname{des}(w_T)$, the number of descents of $w_T$. It would be interesting to find a similar statistic on maximal tubings of poset associahedra. For a simple polytope $P$, one can orient the edges of $P$ according to a generic linear form and take $\operatorname{stat}(v) = \operatorname{outdegree}(v)$~\cite[\S 8.2]{ziegler2012lectures}. It may be possible to use our realization to find the desired statistic. \end{question} \end{document}
\begin{document} \begin{center} \large {THE CHIRAL OSCILLATOR AND ITS APPLICATIONS IN QUANTUM THEORY} \end{center} \vskip 2cm R. Banerjee\footnote{E-mail:[email protected]}\\ S. N. Bose National Centre for Basic Sciences\\ Block JD, Sector III, Calcutta 700091, India\\ \vskip .5cm \noindent and\\ \vskip .5cm \noindent Subir Ghosh\\ Dinabandhu Andrews College\\ Garia, West Bengal, India.\\ \vskip 2cm \noindent Abstract\\ The fundamental importance of the chiral oscillator is elaborated. Its quantum invariants are computed. As an application the Zeeman effect is analysed. We also show that the chiral oscillator is the most basic example of a duality invariant model, simulating the effect of the familiar electric-magnetic duality. It is well known that the Harmonic Oscillator (HO) pervades our understanding of quantum mechanical as well as field theoretical models in various contexts. An interesting thrust in this direction was recently made in \cite{rprl} where the quantum invariants of the HO were computed. It was also opined that this approach could be used for developing a technique \cite{rjmp} to study interacting and time dependent (open) systems. In this paper we argue that, in some instances, the Chiral Oscillator (CO) instead of the usual HO captures the essential physics of the problem. This is tied to the fact that the CO simulates the left-right symmetry. Consequently the CO has a decisive role in those cases where this symmetry is significant. The CO is first systematically derived from the HO and the issue of symmetries is clarified. Indeed, it is explicitly shown that the decomposition of the HO leads to a pair of left-right symmetric CO's. The soldering of these oscillators to reobtain the HO is an instructive exercise. Following the methods of \cite{rprl,rjmp}, the quantum invariants of the CO's are computed and their connection with the HO invariant is illuminated. 
As an application, the Zeeman splitting \cite{pk} for the Hydrogen atom electron energy levels under the influence of a constant magnetic field is studied. The interaction of the atom with a time-dependent magnetic field, constituting an open system, can also be analysed from the general expressions. In a completely different setting we show that the CO is the most basic example of a duality invariant theory \cite{az}. By reexpressing the computations in a suggestive electromagnetic notation, the mapping of this duality with Maxwell's electromagnetic duality is clearly established. The Lagrangean for the one dimensional HO is given by \begin{equation} L={M\over 2}(\dot {x}^2-\omega^2x^2). \label{eqlho} \end{equation} To obtain the CO, the basic step is to convert (\ref{eqlho}) into a first order form by introducing an auxiliary variable $\Lambda$ in a symmetrised form, \begin{equation} L={M\over 2}(\Lambda\dot x-x\dot{\Lambda}-{\Lambda^2}-\omega^2x^2). \label{eqlf} \end{equation} There are now two distinct classes for relabelling these variables corresponding to proper and improper rotations generated by the matrices with determinant $\pm1$, \[ \left ( \begin{array}{c} x\\ {{\Lambda}\over{\omega}}\\ \end{array} \right )=\left ( \begin{array}{cc} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta \end{array} \right )\left ( \begin{array}{c} x_1 \\ x_2\\ \end{array}\right ), ~~ \left ( \begin{array}{c} x\\ {{\Lambda}\over{\omega}}\\ \end{array} \right )=\left ( \begin{array}{cc} \sin\phi & \cos\phi\\ \cos\phi & -\sin\phi \end{array} \right )\left ( \begin{array}{c} x_1 \\ x_2\\ \end{array}\right ) \] leading to the structures, \begin{equation} L_{\pm}={M\over 2}(\pm\omega\epsilon_{\alpha\beta}x_\alpha \dot{x}_\beta -\omega^2x_\alpha^2), \label{eqlco} \end{equation} where $\alpha=1,2$ is an internal index with $\epsilon_{12}=1$.
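The relabellings above can be checked numerically. The following sketch (illustrative only, with $M = \omega = 1$ and $\epsilon_{12} = 1$; which chirality is produced by the proper versus the improper rotation is a matter of convention) substitutes both rotations into the first-order Lagrangean and compares with $L_\pm$:

```python
# Numerical check (M = omega = 1): substituting the proper / improper rotation
# into the first-order Lagrangean L = (1/2)(Lam*xdot - x*Lamdot - Lam^2 - x^2)
# reproduces the two chiral Lagrangeans.  Illustrative sketch; the sampled
# values are arbitrary test data.
import math, random

def L_first_order(x, xdot, lam, lamdot):
    return 0.5 * (lam * xdot - x * lamdot - lam**2 - x**2)

def L_chiral(sign, x1, x2, x1dot, x2dot):
    # L_pm = (1/2)(pm*(x1*x2dot - x2*x1dot) - x1^2 - x2^2), with eps_{12} = 1
    return 0.5 * (sign * (x1 * x2dot - x2 * x1dot) - x1**2 - x2**2)

random.seed(1)
for _ in range(50):
    x1, x2, x1dot, x2dot = (random.uniform(-2, 2) for _ in range(4))
    th = random.uniform(0, 2 * math.pi)
    c, s = math.cos(th), math.sin(th)
    # proper rotation (determinant +1): x = c*x1 + s*x2, Lam = -s*x1 + c*x2
    Lp = L_first_order(c*x1 + s*x2, c*x1dot + s*x2dot,
                       -s*x1 + c*x2, -s*x1dot + c*x2dot)
    # improper rotation (determinant -1): x = s*x1 + c*x2, Lam = c*x1 - s*x2
    Li = L_first_order(s*x1 + c*x2, s*x1dot + c*x2dot,
                       c*x1 - s*x2, c*x1dot - s*x2dot)
    assert abs(Lp - L_chiral(-1, x1, x2, x1dot, x2dot)) < 1e-9
    assert abs(Li - L_chiral(+1, x1, x2, x1dot, x2dot)) < 1e-9
```

With these conventions the proper rotation produces one chirality and the improper rotation the other; relabelling $x_1 \leftrightarrow x_2$ exchanges the two, so the assignment of signs is purely conventional.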
The basic Poisson brackets of the above model are read off from the symplectic structure, \begin{equation} \{x_\alpha ,x_\beta \}_{\pm}=\mp{1\over{\omega M}}\epsilon_{\alpha\beta}. \label{eqbr} \end{equation} The corresponding Hamiltonians are, \begin{equation} H_{\pm}={{M\omega^2}\over 2}(x_1^2+x_2^2) =\tilde{H}. \label{eqhco} \end{equation} The above Lagrangeans in (\ref{eqlco}) are interpreted as two bi-dimensional CO's rotating in either a clockwise or an anti-clockwise sense. A simple way to verify this property is to look at the spectrum of the angular momentum operator, \begin{equation} \omega J_{\pm}=\omega\epsilon_{\alpha\beta} x_\alpha p_\beta=\pm{1\over 2}M\omega^2x_\alpha^2 =\pm\tilde{H}, \label{eqJ} \end{equation} where $\tilde{H}$ is defined above. To complete the picture it is desirable to show the mechanism of combining the left and right CO's to reproduce the usual HO. This is achieved by the soldering technique \cite{adc,abc} introduced recently. Let us then begin with two {\it independent} chiral Lagrangeans $L_+(x)$ and $L_-(y)$. Consider the following gauge transforms, $\delta x_\alpha=\delta y_\alpha =\eta_\alpha$ under which $$ \delta L_{\pm}(z)=M\omega\epsilon_{\alpha\beta}\eta_\alpha (\pm\dot{z}_\beta +\omega\epsilon_{\beta\gamma}z_\gamma),~~~z=x,y. $$ Introduce a new variable $B_\alpha$, which will effect the soldering, transforming as, $\delta B_\alpha =\epsilon_{\alpha\beta}\eta_\beta$. This new Lagrangean \begin{equation} L=L_{+}(x)+L_{-}(y)-M\omega B_\alpha(\dot{x}_\alpha +\omega\epsilon_{\alpha\beta}x_\beta-\dot{y}_\alpha +\omega\epsilon_{\alpha\beta}y_\beta), \label{eqlinv} \end{equation} is invariant under the above transformations. Eliminating $B_\alpha$ by the equations of motion, we obtain the final soldered Lagrangean, $$L(w)={M\over 4}(\dot{w}^2_{\alpha}-\omega^2 w^2_\alpha),$$ which is no longer a function of $x$ and $y$ independently, but only on their gauge invariant combination, $w_\alpha=x_\alpha -y_\alpha$.
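As a consistency check of the chiral interpretation in (\ref{eqJ}), the first-order equations of motion $\dot x_\alpha = \mp\omega\epsilon_{\alpha\beta}x_\beta$ following from (\ref{eqlco}) can be integrated numerically. The sketch below (illustrative only, with $M = \omega = 1$; not part of the original analysis) verifies that the two oscillators conserve $\tilde H$ while rotating in opposite senses:

```python
# Integrate the chiral equations of motion  xdot_alpha = -/+ eps_{alpha beta} x_beta
# (M = omega = 1) with RK4 and check: (i) H~ = (x1^2 + x2^2)/2 is conserved,
# (ii) the two signs rotate in opposite senses (sign of x1*x2dot - x2*x1dot).
# Illustrative sketch only.

def rhs(sign, x):
    x1, x2 = x
    # CO+: (x1dot, x2dot) = (-x2, +x1);  CO-: (+x2, -x1)
    return (-sign * x2, sign * x1)

def rk4_step(sign, x, h):
    def add(a, b, c): return (a[0] + c * b[0], a[1] + c * b[1])
    k1 = rhs(sign, x)
    k2 = rhs(sign, add(x, k1, h / 2))
    k3 = rhs(sign, add(x, k2, h / 2))
    k4 = rhs(sign, add(x, k3, h))
    return (x[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def energy(x):
    return 0.5 * (x[0] ** 2 + x[1] ** 2)

for sign in (+1, -1):
    x = (1.0, 0.0)
    for _ in range(1000):
        x = rk4_step(sign, x, 0.01)
    assert abs(energy(x) - 0.5) < 1e-8           # H~ conserved along the orbit
    x1dot, x2dot = rhs(sign, x)
    L_ang = x[0] * x2dot - x[1] * x1dot
    assert sign * L_ang > 0                      # opposite senses of rotation
```

The sign of $x_1\dot x_2 - x_2\dot x_1$ matches the sign of $J_\pm$ in (\ref{eqJ}).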
The soldered Lagrangean just corresponds to a bi-dimensional simple harmonic oscillator. Thus, by starting from two distinct Lagrangeans containing the opposite aspects of chiral symmetry, it is feasible to combine them into a single Lagrangean. The connection between the CO and HO is now used to obtain the invariants of the former by exploiting known results \cite{rprl} for the latter. For the positive CO, \begin{equation} I^+={1\over 2}\tan^{-1}({x_1}^{-1}x_2)+ {1\over 2}\tan^{-1}(x_2{x_1}^{-1}), \label{eqco+} \end{equation} is the invariant, while $I^-$ is given by interchanging $x_1$ and $x_2$. Note that non-commutativity of the variables has already been taken into account. Incorporating the ``soldering'' prescription \cite{abc} whereby we were able to construct a bi-dimensional oscillator from the two CO's, another quantum invariant can also be obtained, \begin{equation} I^+(x_1,x_2) \oplus I^-(y_1,y_2)=I(x_1-y_1,x_2-y_2), \label{eqsol} \end{equation} where the right hand side of the equation is a simple sum of two terms, obtained by substituting $x_1-y_1,~~M(\dot x_1-\dot y_1)/2$ and $x_2-y_2,~~M(\dot x_2-\dot y_2)/2$ in place of $x$ and $p$ in the corresponding expression for HO in \cite{rprl}. We stress that the above invariant operators are independent as they pertain to completely different systems and were not present in the literature so far. In the next section, we will put the CO invariants into direct use in interacting and open quantum systems by considering the Zeeman effect. Let us consider the simplistic Bohr model of the Hydrogen atom, where the (non-relativistic) electron is moving in the presence of a repulsive centrifugal barrier and the attractive Coulomb potential. The effective central potential has a well-like structure and we consider the standard HO approximation about the potential minimum. The excitations are the HO states above the minimum.
Hence the electron, at a particular stationary state, is approximated to an oscillator with a frequency $\omega$, obtained from the effective potential seen by the atomic electron without the magnetic field. This yields $\omega=(Me^4)/l^3$, with $l=Mr^2\dot{\phi}$ being the angular momentum, when expressed in plane polar coordinates. In the presence of a magnetic field ${\bf B}$, the motion of the electron can be broken into components parallel and perpendicular to ${\bf B}$. The Lorentz force acting on the electron affects the motion in the normal plane of ${\bf B}$ only, the motion being two rotational modes in the clockwise and anti-clockwise sense, or more succinctly two CO's of opposite chirality. In this setup, ${\bf B}$ splits the original level into three levels, one of frequency $\omega$ remaining unchanged and the other two frequencies changed to $\omega\pm(eB)/(2Mc)$ \cite{pk}. This clearly shows that there is a redundancy in the number of degrees of freedom in treating the electron as a HO, whereas the CO representation is more elegant and economical whenever the degeneracy between the right and left movers is lifted such as in the presence of a magnetic field. The Hamiltonian of a charged HO in an axially symmetric magnetic field is, $${\bf A}={1\over 2}B(t){\bf k}\times{\bf r},~~{\bf B}(t)=\nabla\times{\bf A} =B(t){\bf k}$$ $$H={1\over{2M}}({\bf p}-e{\bf A})^2 +{1\over 2}M\omega^2r^2 ={1\over{2M}}({p_1}^2+{p_2}^2)+{1\over 2}M\omega^2({x_1}^2+{x_2}^2)$$ \begin{equation} +{{eB(t)}\over{2Mc}}(x_2p_1-x_1p_2) +{{e^2}\over{8Mc^2}}{B(t)}^2({x_1}^2+{x_2}^2).
\label{eqcho} \end{equation} For the semi-classical reasoning (regarding the Zeeman effect) to hold, $\mid{\bf B}\mid$ must be small in the sense that the radius of gyration $r=~(cMv)/(eB)=~(cl)/(eBr)$, which simplifies to $r=~\sqrt{(cl)/(eB)}=~\sqrt{(nc\hbar)/(eB)}$, is much larger than the Bohr radius of the (Hydrogen) atom \cite{yk} $r_{Bohr}=~\hbar^2/(Me^2).$ This condition is expressed as \begin{equation} {{\hbar^3B}\over{cM^2e^3}}\ll 1. \label{eqsb} \end{equation} In our Hamiltonian (\ref{eqcho}), this condition will hold if \begin{equation} \mid{1\over 2}M\omega^2({x_1}^2+{x_2}^2)\mid\gg \mid{{eB(t)}\over{2Mc}}(x_2p_1-x_1p_2)\mid. \label{eqsm} \end{equation} To verify this, substitute $\omega=~Me^4/l^3 $ and $({x_1}^2+{x_2}^2)=~r_{Bohr}^2$ in the left hand side, and $(x_2p_1-x_1p_2)=~l$ in the right hand side. This reproduces (\ref{eqsb}). The quadratic $B$-term in (\ref{eqcho}) is still smaller. The above structure of the Hamiltonian is very similar to the model of a charged particle in a specified electromagnetic field, considered in \cite{rprl, rjmp}. The idea there is to look for the invariants of the full interacting Hamiltonian, and to construct eigenstates of the invariant operator. The solutions of the time dependent Schr\"odinger equation are related uniquely to these eigenstates via a time dependent phase, $$ \mid\lambda,k,t>_{Sch}=e^{i\alpha_{\lambda k}(t)}\mid\lambda,k,t>_{I},~~~~ I(t)\mid\lambda,k,t>_{I}=\lambda \mid\lambda,k,t>_{I},$$ satisfying, $$i\hbar{{d\alpha_{\lambda k}}\over{dt}}=<\lambda,k\mid_I(i\hbar{{\partial} \over{\partial t}}-H)\mid\lambda,k>_I.$$ Next we define, $$H_o={1\over{2M}}({p_1}^2+{p_2}^2)+{1\over 2}M\omega^2({x_1}^2+{x_2}^2)$$ and the rest of the $B$-dependent terms appearing in (\ref{eqcho}) as small perturbations. In the framework of \cite{rpla}, the invariant operator is also expressible as a power series in the small parameter $B\hbar^3/(cM^2e^3)$ and the zeroth order invariant $I_0$ is identical to $H_o$.
Hence the eigenstates of $H_o$ and $I_o$ will be the same and $\mid\lambda,k,t>_I= exp(-i(n+{1\over 2})\omega t)\mid\lambda,k>_I$. As in the conventional scenario, the total energy is also expressed as a series with the zeroth term being $(n+{1\over 2})\hbar\omega$. Thus we will compute the $B$-dependent corrections only by the scheme of \cite{rjmp}, which actually comprises the task of calculating the phase $\alpha_{\lambda k}$. Here the CO's will come into play. As we have already established the connection between the results of HO and CO models, we simply replace the HO variables by the CO ones in the final result. From the symplectic structure, the following identifications are consistent, \begin{equation} CO^+:~~\{{x_1}^+,~{x_2}^+\}=-{1\over{\omega M}} ,~~\to p_1\equiv -\omega M{x_2}^+,~p_2\equiv \omega M{x_1}^+, \label{px+} \end{equation} \begin{equation} CO^-:~~\{{x_1}^-,~{x_2}^-\}={1\over{\omega M}} ,~~\to p_1\equiv \omega M{x_2}^-,~p_2\equiv -\omega M{x_1}^-. \label{px-} \end{equation} Introducing these in (\ref{eqcho}), we get, \begin{equation} H_{\pm}={{M\omega^2}\over 2}({x_1}^2+{x_2}^2)\left(1+{{e^2B^2}\over {4M^2c^2\omega^2}}\mp{{eB}\over{Mc\omega}}\right). \label{cco} \end{equation} The above splitting in the energy is one of our main results. This underlines the economy in the CO formulation since one CO is sufficient to obtain the correct results. Obviously it is easier to work with fewer degrees of freedom in cases of more complicated systems. Essentially this change in the relative sign of the linear $B$ term can also be interpreted as a consequence of the opposite angular momenta of the CO's, as demonstrated before.
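The linear and quadratic $B$ terms combine with $\omega^2$ into perfect squares, $\left(\omega \mp {eB\over 2Mc}\right)^2 = \omega^2 \mp {eB\omega\over Mc} + {e^2B^2\over 4M^2c^2}$, which is exactly the pattern of the shifted frequencies $\omega \pm (eB)/(2Mc)$ in the semi-classical argument. A quick numerical confirmation (illustrative only; the constants below are arbitrary test values):

```python
# Check the exact algebraic identity behind the Zeeman-shifted frequencies:
# (omega -/+ e*B/(2*M*c))**2 == omega**2 -/+ e*B*omega/(M*c) + (e*B/(2*M*c))**2.
# The constants are arbitrary illustrative values, not physical data.
e, B, M, c, omega = 1.3, 0.7, 2.1, 137.0, 5.0

larmor = e * B / (2 * M * c)          # the Larmor shift eB/(2Mc)
for sign in (+1, -1):
    lhs = (omega - sign * larmor) ** 2
    rhs = omega**2 - sign * e * B * omega / (M * c) + larmor**2
    assert abs(lhs - rhs) < 1e-12
```

To lowest order in $B$ the effective frequency squared is $\omega^2 \mp eB\omega/(Mc) \approx (\omega \mp eB/(2Mc))^2$, reproducing the normal Zeeman shift.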
This brings us to the desired expression for the phase for the two CO's, \begin{equation} \alpha_{jn}^{\pm}=\mp[n+(j+{1\over 2})]{e\over{Mc}}\int^t dt'[{1\over 2} B(t')-\rho^{-2}(t')], \label{eqpco} \end{equation} where the quantum numbers $j$ and $n$ are explained in \cite{rjmp} and $\rho(t')$ satisfies the equation, $$({{Mc}\over e})^2\ddot{\rho}+{{B^2(t)}\over 2}\rho -\rho^{-3}=0.$$ Considering the simplest case, the normal Zeeman effect, where $B$ is constant, we find a time-independent solution for $\rho$: $\rho^2=\pm{\sqrt 2}/B$. When $\rho^2=-{\sqrt 2}/B$ is substituted in (\ref{eqpco}), the standard Zeeman level splitting is reproduced, \begin{equation} E_n^{\pm}=(n+{1\over 2})\hbar\omega\pm[n+(j+{1\over 2})]{{eB}\over{Mc}}. \label{eqzee} \end{equation} On the other hand, $\rho^2={\sqrt 2}/B$ reveals no shift in the energy eigenvalue. Clearly this is reminiscent of the fact that the energy of the mode parallel to ${\bf B}$ remains unaffected. For a time-dependent magnetic field, one has to obtain the appropriate solution for $\rho$. Inserting this in (\ref{eqpco}), it is then possible to obtain the solutions of the corresponding Schr\"odinger equation. We next show the possibility of interpreting the CO as a prototype of a duality invariant model characteristic of the electric-magnetic duality \cite{az}. For convenience, we set $M=~\omega=~1$ in (\ref{eqlho}). Introduce the change of variables $E=\dot x,~~~B=x$, so that \begin{equation} \dot B -E=0 \label{eqeb} \end{equation} is identically satisfied. In these variables, the Lagrangian (\ref{eqlho}) and the corresponding equation of motion are expressed as \begin{equation} L={1\over 2}(E^2-B^2), ~~~\dot E+B=0.
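As a small check on the constant-field case, the time-independent solution quoted above can be confirmed numerically. The sketch below (an illustration with hypothetical variable names) verifies that the real root $\rho^2={\sqrt 2}/B$ solves the auxiliary equation once $\ddot\rho=0$; the root $\rho^2=-{\sqrt 2}/B$ satisfies the same quartic $\rho^4=2/B^2$ only formally.

```python
# With rho constant the auxiliary equation reduces to
# (B^2/2) rho - rho^{-3} = 0, i.e. rho^4 = 2/B^2.
B = 3.0
rho = (2 ** 0.5 / B) ** 0.5          # the real root, rho^2 = sqrt(2)/B
residual = (B**2 / 2) * rho - rho**-3
assert abs(residual) < 1e-9
```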
\label{eqem} \end{equation} It is simple to observe that the transformations, \footnote{Note that these are the discrete cases $(\theta=~\pm{{\pi}\over 2})$ for a general $SO(2)$ rotation matrix parametrised by the angle $\theta$.} $E\rightarrow \pm B;~B\rightarrow\pm E$, swap the equation of motion in (\ref{eqem}) with the identity (\ref{eqeb}), although the Lagrangian (\ref{eqem}) is not invariant. The similarity with the corresponding analysis in Maxwell theory is quite striking, with $x$ and $\dot x$ simulating the roles of the magnetic and electric fields, respectively. There is a duality between the equation of motion and the ``Bianchi'' identity (\ref{eqeb}), which is not manifested in the Lagrangian. Let us now consider the Lagrangian for the CO, \begin{equation} L_{\pm}={1\over 2}(\pm\epsilon_{\alpha\beta}x_\alpha\dot{x}_\beta -x^{2}_\alpha) ={1\over 2}(\pm\epsilon_{\alpha\beta}B_\alpha E_\beta -B^{2}_\alpha). \label{eqdu} \end{equation} These chiral Lagrangians are manifestly invariant under the duality transformations, \begin{equation} x_\alpha\rightarrow~R^{+}_{\alpha\beta}(\theta)x_\beta. \label{eqxr} \end{equation} Thus, the CO's represent a quantum mechanical example of a duality invariant model. Indeed, the expressions for $L_{\pm}$ given in the second line of (\ref{eqdu}) closely resemble the analogous structure for the Maxwell theory deduced in \cite{bw}. The generator of the infinitesimal symmetry transformation is given by, $Q=x_\alpha x_\alpha/2,$ so that the complete transformations (\ref{eqxr}) are generated by, $$x_\alpha\rightarrow~x_{\alpha}'=e^{-i\theta Q}x_\alpha e^{i\theta Q} =~R^{+}_{\alpha\beta}(\theta)x_\beta.$$ This follows by exploiting the basic bracket of the theory given in (\ref{eqbr}). To conclude, certain interesting properties of the CO were illustrated. A systematic method of obtaining this oscillator from the usual simple HO was given.
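The asserted invariance of $L_{\pm}$ is easy to check numerically. The following sketch (an illustration with made-up variable names; it assumes the convention $\epsilon_{12}=+1$ and that the $SO(2)$ rotation (\ref{eqxr}) acts identically on $x_\alpha$ and $\dot x_\alpha$) evaluates the chiral Lagrangian before and after a random rotation:

```python
import math
import random

random.seed(0)

def L(x1, x2, v1, v2, chi):
    # chiral Lagrangian L_{+-} = (1/2)(chi*(x1*v2 - x2*v1) - x1^2 - x2^2)
    return 0.5 * (chi * (x1 * v2 - x2 * v1) - x1**2 - x2**2)

def rotate(a, b, theta):
    # SO(2) rotation acting on the pair (a, b)
    return (math.cos(theta) * a - math.sin(theta) * b,
            math.sin(theta) * a + math.cos(theta) * b)

for _ in range(200):
    x1, x2, v1, v2 = (random.uniform(-2, 2) for _ in range(4))
    theta = random.uniform(0, 2 * math.pi)
    y = rotate(x1, x2, theta)      # rotated coordinates
    w = rotate(v1, v2, theta)      # rotated velocities
    for chi in (+1, -1):
        assert abs(L(*y, *w, chi) - L(x1, x2, v1, v2, chi)) < 1e-9
```

Both the antisymmetric term $x_1\dot x_2-x_2\dot x_1$ and the quadratic term $x_1^2+x_2^2$ are separately rotation invariant, which is what the check confirms pointwise.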
It was also shown that the distinct left and right components of the CO were combined by the soldering formalism \cite{adc, abc} to yield a bi-dimensional HO. In this way the symmetries of the model were highlighted. The importance of the CO lies in the fact that in some cases it has a more concrete and decisive role than the usual simple HO in illuminating the basic physics of the problem. This was particularly well seen in the derivation of the Zeeman splitting by exploiting the perturbation theory technique based on quantum invariant operators \cite{rprl, rjmp}. An explicit computation of the quantum invariants for the CO was also performed. Apart from the study of the Zeeman effect, such CO invariants can find applications in other quantum mechanical examples, particularly where a left-right symmetry is significant. Another remarkable feature of the present analysis has been the elucidation of the fundamental nature of the duality symmetry currently in vogue in quantum field theory and string theory \cite{az, bw}. It was shown that the CO was a duality symmetric model, in contrast to the usual HO. Expressed in the ``electromagnetic'' notation, this difference was seen to be the origin of the presence or absence of duality symmetry in electrodynamics. It may be remarked that the explicit demonstration of duality symmetry in a quantum mechanical world is nontrivial since conventional analysis \cite{az} considers two types of duality invariance confined to either $D=4k$ or $D=4k+2$ dimensions, thereby leaving no room for $D=1$. Nevertheless, since most field theoretical problems can be understood on the basis of the HO, it is reassuring to note that the origin of electromagnetic duality invariance is also contained in a variant of the HO: the chiral oscillator. Our study clearly reveals that the CO complements the usual HO in understanding and solving various problems in quantum theory. \end{document}
\begin{document} \begin{center} {\Large Six Signed Petersen Graphs, and their Automorphisms } {\large Thomas Zaslavsky}\\[0.5cm] Department of Mathematical Sciences, \\ Binghamton University (SUNY), \\ Binghamton, NY 13902-6000, U.S.A. \\ E-mail: {\tt [email protected]} \\ \today \end{center} \hrule \noindent{\textbf{Keywords:} Signed graph; Petersen graph; balance; switching; frustration; clusterability; switching automorphism; proper graph coloring } \noindent{\textbf{2010 Mathematics Subject Classifications:} Primary 05C22; Secondary 05C15, 05C25, 05C30} \begin{quote} { \emph{Abstract.} Up to switching isomorphism there are six ways to put signs on the edges of the Petersen graph. We prove this by computing switching invariants, especially frustration indices and frustration numbers, switching automorphism groups, chromatic numbers, and numbers of proper 1-colorations, thereby illustrating some of the ideas and methods of signed graph theory. We also calculate automorphism groups and clusterability indices, which are not invariant under switching. In the process we develop new properties of signed graphs, especially of their switching automorphism groups. } \end{quote} \setcounter{tocdepth}{1} \tableofcontents \section{Introduction}\label{intro} The Petersen graph $P$ is a famous example and counterexample in graph theory, making it an appropriate subject for a book (see \cite{TPG}). With signed edges it makes a fascinating example of many aspects of signed graph theory as well. There are $2^{15}$ ways to put signs on the edges of $P$, but in many respects only six of them are essentially different. We show how and why that is true as we develop basic properties of these six signed Petersens. \begin{figure} \caption{$P$, the Petersen graph.} \label{F:P} \end{figure} The fundamental property of signed graphs is balance. A signed graph is \emph{balanced} if all its circles (circuits, cycles, polygons) have positive sign product. 
Harary introduced signed graphs and balance \cite{NB} (though they were implicit in K\"onig \cite[\S X.3]{Konig}). Cartwright and Harary used them to model social stress in small groups of people in social psychology \cite{CH}. Subsequently signed graphs have turned out to be valuable in many other areas, some of which we shall allude to later. The opposite of balance is frustration. Most signatures of a graph are unbalanced, but they can be made balanced by deleting (or, equivalently, negating) edges. The smallest number of edges whose deletion makes the graph balanced is the \emph{frustration index}, a number which is implicated in certain questions of social psychology (\cite{PsL, MSB} et al.) and spin-glass physics (\cite{Toulouse, BMRU} et al.). We find the frustration indices of all signed Petersen graphs (Theorem \ref{T:fr}). The second basic property of signed graphs is switching equivalence. Switching is a way of turning one signature of a graph into another, without changing circle signs. Many, perhaps most properties of signed graphs are unaltered by switching, the frustration index being a notable example. The first of our main theorems is that there are exactly six equivalence classes of signatures of $P$ under the combination of switching and isomorphism (Theorem \ref{T:types}). Figure \ref{F:types} shows a representative of each switching isomorphism class. In each representative the negative edges form a smallest set whose deletion makes the signed Petersen balanced. Hence, we call them \emph{minimal signatures} of $P$ (see Theorem \ref{T:fr}). Because there are only six switching isomorphism classes of signatures, the frustration index of every signature of $P$ can be found from those of the minimal signatures. \begin{figure} \caption{The six switching isomorphism types of signed Petersen graph.
Solid lines are positive; dashed lines are negative.} \label{F:types} \end{figure} The second main theorem, which occupies the bulk of this paper, is a computation of the automorphism and switching automorphism groups of the six minimal signatures (Theorem \ref{T:aut}). An automorphism has the obvious definition: it is a graph automorphism that preserves edge signs. This group is not invariant under switching. It is not even truly signed-graphic, for as concerns automorphisms a signed graph is merely an edge 2-colored graph. The proper question for signed graphs regards the combination of switching with an automorphism of the underlying graph. The group of switching automorphisms of a signed graph is, by its definition, invariant under switching, so just six groups are needed to know them all. Some of the groups are trivial, but one is so complicated that it takes pages to describe it thoroughly. Isomorphic minimal signatures may not be equivalent under the action of the switching group. The number of switching inequivalent signatures of a given minimal isomorphism type is deducible from the order of the switching automorphism group (Section \ref{orbit}). Two further properties are treated more concisely. First, a signed graph can be colored by signed colors. That leads to two chromatic numbers, depending on whether or not the intrinsically signless color 0 is accepted. The chromatic numbers are invariant under switching (and isomorphism); thus they help to distinguish the six minimal signatures by showing their inequivalence under switching isomorphism (Theorem \ref{T:col}). The two chromatic numbers are aspects of two chromatic polynomials, but we make no attempt to compute those polynomials, as they have degree 10. Finally, we take a brief excursion into a natural generalization of balance called \emph{clusterability} (Section \ref{clust}). 
This, like the automorphism group, is not switching invariant, but it has attracted considerable interest, most recently in connection with the organization of data (cf.\ \cite{Bansal} et al.), and has complex properties that have been but lightly explored. Signed graphs, signed Petersens in particular, have other intriguing aspects that we do not treat. Two are mentioned in the concluding section but they hardly exhaust the possibilities. \section{Graphs and Signs}\label{defs} We write $V$ and $E$ for the vertex and edge sets of a graph $\Gamma$ or signed graph $\Sigma$, except when they may be confused with the same sets of another graph. The complement of $X \subseteq V$ is $X^c := V \setm X$. The (open) neighborhood of a vertex $v$ is $N(v)$; the closed neighborhood is $N[v] = N(v) \cup \{v\}$. A \emph{cut} in a graph is a set $\del X := \{uv \in E : u \in X \text{ and } v \notin X\}$ where $X \subseteq V$. We call two or more substructures of a graph, such as edge sets or vertex sets, \emph{automorphic} if there are graph automorphisms under which any one is carried to any other. A \emph{signed graph} is a pair $\Sigma := (\Gamma,\sigma)$ where $\Gamma = (V,E)$ is a graph and $\sigma: E \to \signs$ is a \emph{signature} that labels each edge positive or negative. Hence, a \emph{signed Petersen graph} is $(P, \sigma)$. Two examples are $+P := (P,+)$, where every edge is positive, and $-P := (P,-)$, where every edge is negative. The \emph{underlying graph} of $\Sigma$ is $\Sigma$ without the signs, denoted by $|\Sigma|$. We say $\Sigma$ is \emph{homogeneous} if it is all positive or all negative, and \emph{heterogeneous} otherwise; so $+P$ and $-P$ are the homogeneous signed Petersens. The set of positive edges of $\Sigma$ is $E^+$, that of negative edges is $E^-$; $\Sigma^+$ and $\Sigma^-$ are the corresponding (unsigned) graphs $(V,E^+)$ and $(V,E^-)$. The \emph{negation} of $\Sigma$ is $-\Sigma = (\Gamma,-\sigma)$, the same graph with all signs reversed. 
A compact notation for a signed Petersen graph with negative edge set $S$ is $P_S$. The sign of a circle (i.e., a cycle, circuit, or polygon) $C$ is $\sigma(C) :=$ the product of the signs of the edges in $C$. The most essential fact about a signed graph usually is not, as one might think, the edge sign function itself, but only the set $\cC^+(\Sigma)$ of circles that have positive sign. If this set consists of all circles we call the signed graph \emph{balanced}. Such a signed graph is equivalent to its unsigned underlying graph in most ways. We call $\Sigma$ \emph{antibalanced} if $-\Sigma$ is balanced. \begin{prop}[{Harary \cite{NB}}]\label{P:balance} $\Sigma$ is balanced if and only if $V$ can be divided into two sets so all positive edges are within a set and all negative edges are between the sets. \end{prop} We say `divided' rather than `partitioned' because one set may be empty. If that is so, the signature is all positive. \section{Petersen Structure}\label{structure} The Petersen graph $P$ is the complement of the line graph of $K_5$: $P = \overline{L(K_5)}$. Thus, its vertices $v_{ij}$ are in one-to-one correspondence with the ten unordered pairs from the set $\5$ and its fifteen edges are all the pairs $v_{ij}v_{kl}$ such that $\{i,j\} \cap \{k,l\} = \eset$. (For legibility, in subscripts we often omit the $v$ of vertex names.) We usually write $V$ and $E$ for $V(P)$ and $E(P)$ when discussing the Petersen graph as there can be no confusion with the vertex and edge sets of a general graph. For use later we want structural information about $P$. As $P$ has edge connectivity 3, the smallest cut has three edges. The automorphism group $\Aut P$ is well known to be the symmetric group $\fS_5$ with action on $V$ induced by the permutations of the set $\5$. Writing $\fS_T$ for the group of permutations of the set $T$, we identify $\Aut P$ with $\fS_{\5}$. 
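The structural facts above are easy to verify by machine. The following sketch (Python, with hypothetical helper names) builds $P$ from the description $P=\overline{L(K_5)}$, i.e.\ as the Kneser-type graph whose vertices are the $2$-subsets of $\{1,\dots,5\}$ with edges joining disjoint pairs, and checks the order, size, regularity, girth, and the independent sets $X_m$:

```python
from collections import deque
from itertools import combinations

# vertices v_ij <-> 2-subsets of {1,...,5}; edges join disjoint subsets
V = [frozenset(p) for p in combinations(range(1, 6), 2)]
adj = {v: {w for w in V if w != v and not (v & w)} for v in V}
E = [frozenset({v, w}) for v, w in combinations(V, 2) if w in adj[v]]

assert len(V) == 10 and len(E) == 15
assert all(len(adj[v]) == 3 for v in V)          # P is cubic

def dist(a, b, skip=None):
    # BFS distance from a to b, optionally ignoring the edge `skip`
    seen, q = {a}, deque([(a, 0)])
    while q:
        x, d = q.popleft()
        if x == b:
            return d
        for y in adj[x]:
            if y not in seen and frozenset({x, y}) != skip:
                seen.add(y)
                q.append((y, d + 1))

# girth: shortest cycle through an edge uv is 1 + dist(u, v) avoiding uv
girth = min(dist(*tuple(e), skip=e) + 1 for e in E)
assert girth == 5

def independent(S):
    return all(w not in adj[v] for v, w in combinations(S, 2))

assert not any(independent(S) for S in combinations(V, 5))
Xm = [v for v in V if 5 in v]                    # X_5 = {v_15, v_25, v_35, v_45}
assert len(Xm) == 4 and independent(Xm)
# deleting X_5 leaves the three-edge matching M_{3(5)}
rest = [v for v in V if 5 not in v]
assert sum(1 for v, w in combinations(rest, 2) if w in adj[v]) == 3
```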
We use the same symbol for a permutation of $\5$ and the corresponding automorphism of $P$, as there is little danger of confusion. $\Aut P$ carries any oriented path of length 3 to any other; hence it is also transitive on pairs of adjacent edges (distance 1) and on pairs of edges at distance 2. (For these properties see, e.g., \cite[Section 4.4]{AGT}.) Furthermore, $\Aut P$ carries any nonadjacent vertex pair to any other. The maximum size of a set of independent vertices in $P$ is four. Each maximum independent vertex set has the form $X_m := \big\{ v_{im} : i \in \5 \setm m \big\}$, and any three vertices in $X_m$ determine $m$. For any $m,n \in \5$, $X_m$ and $X_n$ are automorphic. An independent set of three vertices is either the neighborhood of a vertex, or a subset of a maximum independent set $X_m$. Deleting an independent vertex set leaves a connected graph except that $P \setm N(v) = C_6 \cupdot K_1$ and $P \setm X_m$ is a three-edge matching. If $|W|=3$ and $W \subset X_m$, then $P \setm W$ is a tree consisting of three paths of length 2 with one endpoint in common. \subsection{Hexagons}\label{hex} Each hexagon is $E(P \setm N[v])$ for a vertex $v$. Thus there is a one-to-one correspondence between vertices and hexagons; we write $H_v = H_{lm}$ for the hexagon that corresponds to $v = v_{lm}$. The stabilizer of $H_{lm}$ is $\fS_{\{l,m\}} \times \fS_{\5 \setm \{l,m\}}$. A hexagon is determined by any two of its edges that have distance 2. Furthermore, any two hexagons are automorphic. \subsection{Matchings}\label{matchings} We need to know all automorphism types of a matching in $P$. Let $M_k$ denote a matching of $k$ edges. Matchings of 1 edge are obviously all automorphic. Let $M_{2,d}$ denote a pair of edges at distance $d=2$ or $3$. Any 2-edge matching is an $M_{2,2}$ or $M_{2,3}$. All $M_{2,2}$ matchings are automorphic because $\Aut P$ is transitive on paths of length 3. 
All $M_{2,3}$ matchings are automorphic; for the proof see the treatment of $M_{3,3}$. An $M_5$ can only be a cut between two pentagons, since $P \setm M_5$ is a 2-factor and $P$ is non-Hamiltonian. All are clearly automorphic. A matching of 4 edges leaves two vertices unmatched. If they are adjacent, $M_4 = M_5\ \setm$ edge; all such matchings are automorphic. If they are nonadjacent, say they are $v_{ik}$ and $v_{jk}$ in Figure \ref{F:m3types}. Then $M_4$ consists of $a$ and one of the two $M_{3,2}$'s in $H_{lm}$. Call this type of matching $M_4'$. Interpreting $M_4'$ as one of the matchings in $H_{lm}$ together with one of the edges incident with $v_{lm}$, it is easy to see that all matchings of type $M_4'$ are automorphic. Consequently, there are two automorphism classes of 4-edge matchings. There are four nonautomorphic kinds of 3-edge matching $M_3$. First we describe them; then we prove there are no other kinds. By $M_{3,3}$ we mean a set of three edges, each pair having the same distance $3$. Each $M_{3,3}$ has the form $$M_{3(m)} := E(P \setm X_m) = \big\{v_{ij}v_{kl}: \{i,j,k,l\} = \5 \setm m \big\}.$$ There are five such edge sets, one for each $m \in \5$; they partition $E(P)$. Obviously, all the $M_{3(m)}$'s are automorphic. (An $M_{2,3}$ lies in a unique $M_{3,3}$, since the $M_{2,3}$ determines the value of $m$. That implies there are 15 different $M_{2,3}$'s.) Permuting $\5 \setm m$ permutes the edges of $M_{3(m)}$; it follows that any $M_{2,3}$ is automorphic to any other. We define $M_{3,2}$ to consist of alternate edges of a hexagon, say $H_{lm}$, which we call the \emph{principal hexagon} of the three edges. There are two such sets for each hexagon, hence 20 $M_{3,2}$'s in all, and they are all automorphic to each other. The notation $M_{3,2}$ reflects the fact that the edges in the matching all have distance two from each other. Each $M_{2,2}$ is contained in a unique hexagon, hence in a unique $M_{3,2}$; thus, there are 60 $M_{2,2}$'s.
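Several of the counts above, and the unique-hexagon property, can be confirmed by brute force. In the sketch below (illustrative helper names; vertex labels as in Section \ref{structure}), edge distance means distance in the line graph, so adjacent edges are at distance $1$:

```python
from itertools import combinations

V = [frozenset(p) for p in combinations(range(1, 6), 2)]
adj = {v: {w for w in V if w != v and not (v & w)} for v in V}
E = [frozenset({v, w}) for v, w in combinations(V, 2) if w in adj[v]]

def edist(e, f):
    # distance in the line graph of P; adjacent edges are at distance 1
    seen, frontier, d = {e}, {e}, 0
    while f not in seen:
        frontier = {g for g in E
                    if g not in seen and any(x & g for x in frontier)}
        seen |= frontier
        d += 1
    return d

# the ten hexagons, found as induced 2-regular 6-vertex subgraphs
hexagons = []
for S in combinations(V, 6):
    edges = {frozenset({v, w}) for v, w in combinations(S, 2) if w in adj[v]}
    if len(edges) == 6 and all(sum(v in g for g in edges) == 2 for v in S):
        hexagons.append(edges)
assert len(hexagons) == 10

disjoint = [(e, f) for e, f in combinations(E, 2) if not (e & f)]
pairs2 = [{e, f} for e, f in disjoint if edist(e, f) == 2]
pairs3 = [{e, f} for e, f in disjoint if edist(e, f) == 3]
assert len(pairs3) == 15                    # the 15 different M_{2,3}'s
# every M_{2,2} lies in exactly one hexagon
assert all(sum(p <= h for h in hexagons) == 1 for p in pairs2)

# the five M_{3,3}'s, i.e. the sets M_{3(m)}
m33 = [T for T in combinations(E, 3)
       if all(not (e & f) and edist(e, f) == 3
              for e, f in combinations(T, 2))]
assert len(m33) == 5

# the 20 M_{3,2}'s: two alternate-edge matchings per hexagon
m32 = {frozenset(T) for h in hexagons for T in combinations(tuple(h), 3)
       if all(not (e & f) for e, f in combinations(T, 2))}
assert len(m32) == 20
```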
There is another way to form a matching of three edges at distance 2 from one another. In a pentagon $v_{ij}v_{kl}v_{mi}v_{jk}v_{lm}v_{ij}$ take the edges $e=v_{kl}v_{im}$ and $f=v_{jl}v_{km}$ and the edge $a=v_{ij}v_{lm}$. We call this type $M_3'$. Another view of $M_3'$ is as $M_5\ \setm$ two edges. All matchings of type $M_3'$ are automorphic but they are not automorphic to any $M_{3,2}$ because $e,f,a$ do not lie in a hexagon. A fourth type of 3-edge matching, call it $M_{3,2/3}$, consists of $e$, $f$, and $b=v_{jk}v_{lm}$. The distances of these edges are 2, except that $b$ and $f$ have distance 3. All $M_{3,2/3}$'s are automorphic, but the distance pattern proves an $M_{3,2/3}$ is not automorphic to any other type. \begin{lem}\label{L:m3types} Every $3$-edge matching in $P$ is an $M_{3,3}$, an $M_{3,2}$, an $M_3'$, or an $M_{3,2/3}$. \end{lem} \begin{proof} Let $M_3$ be a 3-edge matching. If its edges are all at distance 3 from each other, then $M_3$ can only be $M_{3,3}$, as two edges at distance 3 have a unique edge at the same distance from both. \begin{figure} \caption{The four kinds of $3$-edge matching in $P$.} \label{F:m3types} \end{figure} If $M_3$ contains edges $e,f$ at distance 2, there are four potential third edges up to the symmetry that interchanges $e$ and $f$ (see Figure \ref{F:m3types}). Choosing $g$ for $M_3$, the hexagon $H_{lm}$ contains $M_3$ so we have $M_{3,2}$. Choosing $a$ for $M_3$, the pentagon $v_{ij}v_{kl}v_{mi}v_{jk}v_{lm}v_{ij}$ shows we have $M_3'$. Choosing $b$, we have $M_{3,2/3}$. Choosing $c$, we have $M_3'$ again with the pentagon $v_{im}v_{jk}v_{il}v_{km}v_{jl}v_{im}$.
\end{proof} \section{Switching}\label{sw} Two signed graphs, $\Sigma_1 = (\Gamma_1,\sigma_1)$ and $\Sigma_2 = (\Gamma_2,\sigma_2)$, are \emph{switching equivalent} (written $\Sigma_1 \sim \Sigma_2$) if $\Gamma_1 = \Gamma_2$ and there is a function $\zeta: V_1 \to \signs$ (a \emph{switching function}) such that $\sigma_2(vw) = \zeta(v)\sigma_1(vw)\zeta(w)$ for every edge $vw$. We write $\sigma_2 = \sigma_1^\zeta$ and $\Sigma_2 = \Sigma_1^\zeta$; that is, we write the switched signature or graph as if we were conjugating in a group---and indeed switching is a graphical generalization of conjugation. Another way to state switching is to switch a vertex set $X \subseteq V$ (the connection is that $X = \zeta\inv(-)$); that means negating the sign of every edge in the cut $\del X$. Then we write $\Sigma^X = (\Gamma,\sigma^X)$ for the switched graph. The switching function $\zeta_X$ is defined by $\zeta_X(v) := +$ if $v \notin X$ and $-$ if $v \in X$. Switching functions multiply pointwise: $(\zeta\eta)(v) = \zeta(v)\eta(v)$. Multiplication corresponds to set sum (symmetric difference) of switching sets: $\zeta_X \zeta_Y = \zeta_{X \oplus Y}$. The group of switching functions is $\signs^V$. We write $\varepsilon$ for its identity element, the all-positive switching function. Certain switching functions have no effect on $\Sigma$; that is, the action of $\signs^V$ on a signature has a kernel, $$ \fK_\Gamma := \{\zeta : \Sigma^\zeta = \Sigma\} = \{\zeta : \zeta \text{ is constant on each component of } \Gamma\}. $$ The kernel is independent of the signature, in fact, of everything except the partition of $V$ into vertex sets of connected components of $\Gamma$. 
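The mechanics of switching are easy to exercise by machine. The sketch below (illustrative names; vertex labels as in Section \ref{structure}) checks that switching a set $X$ negates exactly the cut $\del X$, that switching $X$ twice restores the signature, that switching all of $V$ lies in the kernel, and, anticipating Lemma \ref{L:switching} below, that pentagon signs are untouched:

```python
import random
from itertools import combinations

random.seed(1)
V = [frozenset(p) for p in combinations(range(1, 6), 2)]
adj = {v: {w for w in V if w != v and not (v & w)} for v in V}
E = [frozenset({v, w}) for v, w in combinations(V, 2) if w in adj[v]]

# the twelve pentagons of P, as induced 2-regular 5-vertex subgraphs
pentagons = []
for S in combinations(V, 5):
    edges = {frozenset({v, w}) for v, w in combinations(S, 2) if w in adj[v]}
    if len(edges) == 5 and all(sum(v in f for f in edges) == 2 for v in S):
        pentagons.append(edges)
assert len(pentagons) == 12

def switch(sigma, X):
    # negate exactly the edges of the cut \del X
    return {f: (-s if len(f & X) == 1 else s) for f, s in sigma.items()}

def sign(sigma, circle):
    s = 1
    for f in circle:
        s *= sigma[f]
    return s

sigma = {f: random.choice([+1, -1]) for f in E}   # a random signature
X = set(random.sample(V, 4))
tau = switch(sigma, X)

assert switch(tau, X) == sigma            # switching X is an involution
assert switch(sigma, set(V)) == sigma     # switching all of V: the kernel
assert all(sign(sigma, C) == sign(tau, C) for C in pentagons)
```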
The quotient group is the \emph{switching group} of $\Gamma$, written $$\Sw\Gamma := \signs^V/\fK_\Gamma.$$ The element of this group that corresponds to a switching function $\zeta$ is $\bar\zeta$, but for simplicity of notation, we often use the same symbol $\zeta$ without the bar when it should not cause confusion. We say $\Sigma_1$ and $\Sigma_2$ are \emph{isomorphic} (written $\Sigma_1 \cong \Sigma_2$) if there is a graph isomorphism $\psi: \Gamma_1 \to \Gamma_2$ that preserves edge signs, i.e., $\sigma_2((vw)^\psi) = \sigma_1(vw)$ for every edge. (As we are restricting to simple graphs, $\psi$ can be treated as a bijection $V_1 \to V_2$ and $(vw)^\psi = v^\psi w^\psi$.) They are \emph{switching isomorphic} (written $\Sigma_1 \simeq \Sigma_2$) if $\Sigma_2$ is isomorphic to a switching of $\Sigma_1$; that is, there are a graph isomorphism $\psi: \Gamma_1 \to \Gamma_2$ and a switching function $\zeta : V_1 \to \signs$ such that $\sigma_2((vw)^\psi) = \sigma_1^\zeta(vw)$ for every edge. \begin{lem}[\rm\cite{Soz, CSG}] \label{L:switching} Switching preserves circle signs. Conversely, if two signatures of $\Gamma$ have the same circle signs, then one is a switching of the other. \end{lem} For instance, $\Sigma$ is balanced if and only if it is switching equivalent to the all-positive signature. Because of this lemma, switching-equivalent signed graphs are in most ways the same. Lemma \ref{L:switching} shows that switching isomorphism is a true isomorphism: not of graphs or signed graphs, but of the structure on signed graphs consisting of the underlying graph and the class of positive circles, i.e., of the pair $(|\Sigma|,\cC^+(\Sigma))$ (which constitutes a type of `biased graph' \cite{BG1}). Switching equivalence and switching isomorphism are equivalence relations on signed graphs. An equivalence class under switching equivalence is a \emph{switching equivalence class} of signed graphs. 
An equivalence class under switching isomorphism is a \emph{switching isomorphism class}. (Many writers say `switching equivalence' when they mean `switching isomorphism', but I find it better to separate the two concepts.) \section{Switching Isomorphism Types}\label{swisom} The most patently obvious signatures of the Petersen graph are $+P$ and $-P$. Two more are $P_1$, which has only one negative edge, and its negative $-P_1$, with only one positive edge. Two more signatures are $P_{2,d}$ where $d=2,3$, which have two negative edges at distance $d$; and the last two that mainly concern us are $P_{3,d}$ for $d=2,3$, which have three negative edges, all at distance $d$; in $P_{3,2}$ the negative edges must be alternate edges of a hexagon. In terms of our classification of matchings, $P_{k,d} := P_{M_{k,d}}$, that is, $E^-(P_{k,d}) = M_{k,d}$. These signed graphs are illustrated in Figure \ref{F:types}. \begin{thm}\label{T:types} There are exactly six signed Petersen graphs up to switching isomorphism. They are $+P \simeq -P_{3,3}$, $P_1 \simeq -P_{2,3}$, $P_{2,2} \simeq -P_{2,2}$, $P_{2,3} \simeq -P_1$, $P_{3,2} \simeq -P_{3,2}$, and $P_{3,3} \simeq -P$. \end{thm} \begin{proof} The first step is to establish the switching equivalences stated in the theorem. To switch $-P$ to $P_{3,3}$, switch an independent set $X=X_m$ of four vertices; this negates $\del X$ leaving three negative edges, which have distance 3. If we begin with $-P_1$ with positive edge $uv$, by choosing $X$ to contain neither $u$ nor $v$ we get $uv \notin \del X$ so, after switching, $uv$ retains its sign; therefore $(-P_1)^X = P_{2,3}$. To switch $-P_{3,2}$, where the positive edges belong to a hexagon $H_v$, switch $N[v]$. That negates all edges except those of $H_v$, giving $P_{3,2}$ whose negative edges are the originally negative edges of the hexagon. To switch $-P_{2,2}$, note that the two positive edges $e$ and $f$, having distance 2, lie in a unique pentagon $J$. 
Switch the three vertices of $J$ that are not incident to $e$ and the two vertices outside $J$ that are adjacent to $e$. The result is $P_{2,2}$. For the rest of the theorem we need two more steps. First, we must prove that every signed Petersen graph belongs to the switching isomorphism class of one of the six types $+P,\, P_1,\, P_{k,d}$ listed in the theorem. That is implied by Theorem \ref{T:fr}. Second, we must show that none of the six types is switching isomorphic to any other. The second step follows from the calculation of \emph{invariants} of the six switching isomorphism classes, by which we mean numbers or other objects that are the same for every element of a switching isomorphism class. Relevant invariants are the numbers $c_5^-$ and $c_6^-$ of negative circles of lengths 5 and 6 (Theorem \ref{T:circ}), the frustration index $l$ (Theorem \ref{T:fr}), and the switching automorphism groups (Theorem \ref{T:aut}). The six classes must be distinct because no two have all the same invariants. In fact, any two of $c_5^-$, $c_6^-$, and $l$ suffice to distinguish them; and the switching automorphism groups, though more difficult to find, suffice by themselves. \end{proof} \section{Circle Signs}\label{circ} Lemma \ref{L:switching} leads to an effective method of distinguishing switching isomorphism classes, by comparing the numbers of negative circles of each length. \begin{thm}\label{T:circ} The numbers of negative pentagons and hexagons in each of the six signed Petersen graphs of Theorem \ref{T:types} are those listed in Table \ref{Tb:circ}.
\end{thm} \begin{table}[htb] \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|} \hline $(P,\sigma)$ \vstrut{15pt}&\hbox to 3em{ $+P$ } &\hbox to 3em{ $P_1$ } &\hbox to 3em{ $P_{2,2}$ } &{$P_{2,3} \simeq -P_1$} &\hbox to 3em{ $P_{3,2}$ } &{$P_{3,3} \simeq -P$} \\[3pt] \hline \vstrut{15pt}Negative $C_5$'s &0 &4 &6 &8 &6 &12 \\ \vstrut{15pt}Negative $C_6$'s &0 &4 &6 &4 &10 &0 \\[2pt] \hline \end{tabular} \end{center} \caption{The numbers of negative pentagons and hexagons in each switching isomorphism type.} \label{Tb:circ} \end{table} \begin{proof} The Petersen graph has $c_5=12$ pentagons and $c_6=10$ hexagons. The number of cases to consider is lessened if we notice that negating $(P,\sigma)$ leaves the number $c_6^-(P,\sigma)$ of negative hexagons the same but complements the number $c_5^-(P,\sigma)$ of negative pentagons to $c_5^-(P,-\sigma) = 12-c_5^-(P,\sigma)$. For $+P$ both numbers are 0, and the values for $-P$ follow. In $P_1$ there are as many negative pentagons, or hexagons, as the number of each that lie on a fixed edge $e$. There are four ways to add an edge at each end of $e$ to get a path of length 3, and each such path completes uniquely to a pentagon or hexagon. Thus, $c_5^-(P_1)=c_6^-(P_1) =4$. The numbers for $-P_1$ are immediate. If we now take an edge $f$ at distance 2 from $e$, the number of negative $k$-gons equals $2(c_k^-(P_1) - d_k)$ where $d_k$ is the number of $k$-gons that contain both $e$ and $f$. It is easy to see that $d_5=d_6=1$. (Use the 3-path transitivity of $P$, by which under the symmetries of $P$ there is only one orbit of pairs of edges at distance 2.) It follows that $c_5^-(P_{2,2})=c_6^-(P_{2,2})=6$. For an $f$ at distance 3 from $e$ there is a similar calculation. However, $f$ cannot lie in a common pentagon with $e$, so now $d_5=0$. The value of $d_6$ is not quite obvious. There are four ways to form a path of length 3 by extending $e$ at each end. 
Inspection reveals that two of these paths cannot be completed to a hexagon on $f$, but the other two can be completed uniquely. Thus, $d_6=2$. We conclude that $c_5^-(P_{2,3})=8$ and $c_6^-(P_{2,3})=4$. \end{proof} \section{Frustration}\label{fr} The proofs of Theorems \ref{T:types} and \ref{P:chromatic} make use of the measurement of imbalance by edges or vertices. The \emph{frustration index} $l(\Sigma) :=$ the smallest number of edges whose deletion leaves a balanced signed graph. It is equivalent to finding the largest number of edges in a balanced subgraph of $\Sigma$, which is the signed-graph equivalent of the maximum cut problem in an unsigned graph; in fact, $l(-\Gamma) =$ the smallest number of edges whose complement is bipartite. The \emph{frustration number} (or \emph{vertex frustration number}) $l_0(\Sigma)$ is the smallest number of vertices whose deletion leaves a balanced signed graph. Its complement, $|V| - l_0$, is the largest order of a balanced subgraph. For an all-negative graph, $l_0(-\Gamma)$ is the smallest number of vertices whose deletion leaves a bipartite graph. \subsection{Frustration index}\label{frindex} The frustration index is the most significant way to measure how unbalanced a signed graph is. For instance, in social psychology $l(\Sigma)$ is the minimum number of relations that must change to achieve balance. In the non-ferromagnetic Ising model of spin glass theory the frustration index determines the ground state energy of the spin glass. (Frustration index was called `complexity' by Abelson and Rosenberg \cite{PsL}, who introduced the idea, and `line index of balance' by Harary; my name for it was inspired by the picturesque terminology of Toulouse \cite{Toulouse}.) Harary \cite{MSB} proved that $l(\Sigma) =$ the smallest number of edges whose negation or deletion makes the signed graph balanced. (Negating an edge is equivalent to deleting it, so one can delete or negate the edges in any combination.) 
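The counts in Table \ref{Tb:circ} can be confirmed by exhaustive enumeration. The sketch below (illustrative names; Kneser labels as in Section \ref{structure}) fixes one representative negative edge set for each switching isomorphism type and counts the negative pentagons and hexagons:

```python
from itertools import combinations

V = [frozenset(p) for p in combinations(range(1, 6), 2)]
adj = {v: {w for w in V if w != v and not (v & w)} for v in V}

def cycles(k):
    # induced k-cycles of P, found as 2-regular induced subgraphs
    out = []
    for S in combinations(V, k):
        edges = {frozenset({v, w}) for v, w in combinations(S, 2) if w in adj[v]}
        if len(edges) == k and all(sum(v in f for f in edges) == 2 for v in S):
            out.append(edges)
    return out

pent, hexa = cycles(5), cycles(6)
assert len(pent) == 12 and len(hexa) == 10

def ed(a, b):
    # the edge v_a v_b, e.g. ed((1,2),(3,4)) is v_12 v_34
    return frozenset({frozenset(a), frozenset(b)})

reps = {  # one minimal representative per switching isomorphism type
    '+P':   set(),
    'P1':   {ed((1,2),(3,4))},
    'P2,2': {ed((1,2),(3,4)), ed((1,4),(3,5))},
    'P2,3': {ed((1,2),(3,4)), ed((1,3),(2,4))},
    'P3,2': {ed((1,3),(2,4)), ed((1,5),(2,3)), ed((1,4),(2,5))},
    'P3,3': {ed((1,2),(3,4)), ed((1,3),(2,4)), ed((1,4),(2,3))},
}

def neg_counts(neg):
    # (#negative pentagons, #negative hexagons) for negative edge set `neg`
    return tuple(sum(len(neg & C) % 2 for C in cs) for cs in (pent, hexa))

table = {k: neg_counts(neg) for k, neg in reps.items()}
assert table == {'+P': (0, 0), 'P1': (4, 4), 'P2,2': (6, 6),
                 'P2,3': (8, 4), 'P3,2': (6, 10), 'P3,3': (12, 0)}
```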
An edge set whose deletion leaves a balanced graph is called a \emph{balancing set} (of edges); thus, $l(\Sigma) =$ the size of a minimum balancing set. \begin{lem}[implicit in {\cite{BMRU}}]\label{L:swfr} Switching does not change $l(\Sigma)$. Indeed, $l(\Sigma) = \min_\zeta |E^-(\Sigma^\zeta)|$, the minimum number of negative edges in a switching of $\Sigma$. \end{lem} That is, a signed graph has the smallest number of negative edges in its switching equivalence class if and only if $|E^-(\Sigma)| = l(\Sigma)$. Let us call $\Sigma$ \emph{minimal} if it satisfies this equation. By Lemma \ref{L:swfr} we can distinguish switching isomorphism classes by their having different frustration indices. This helps to prove the six signed $P$'s are not switching isomorphic. \begin{thm}\label{T:fr} There are precisely the following six isomorphism types of minimal signed Petersen graph: $+P$, $P_1$, $P_{2,2}$, $P_{3,2}$, $P_{2,3}$, and $P_{3,3}$. Each is the unique minimal isomorphism type in its switching isomorphism class. The frustration indices of the six types are as stated in Table \ref{Tb:fr}. \end{thm} To find the frustration index of any signature of $P$, switch it to be minimal and consult the table. As computing the frustration index is NP-hard (its restriction to all-negative signatures is equivalent to the well-known NP-complete maximum-cut problem), that may not be so easy, but in small examples like the Petersen graph Lemma \ref{L:cutfr} is a great help.
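Though no part of the proofs depends on it, $P$ is small enough that Lemma \ref{L:swfr}, Table \ref{Tb:circ}, and Table \ref{Tb:fr} can all be checked by exhaustive computation: there are only $2^{10}$ switchings, and only $\binom{10}{5}+\binom{10}{6}$ candidate pentagons and hexagons. A minimal sketch in Python; the Kneser-graph construction of $P$ and the two signatures tested (the all-negative signature, which switches to $P_{3,3}$, and a signature with a single negative edge, a copy of $P_1$) are illustrative choices:

```python
from itertools import combinations, product

# The Petersen graph P as the Kneser graph K(5,2): vertices are the
# 2-subsets of {0,...,4}, adjacent exactly when disjoint.
V = [frozenset(c) for c in combinations(range(5), 2)]
E = [(a, b) for a, b in combinations(V, 2) if not (a & b)]
adj = {v: {w for e in E for x, w in (e, e[::-1]) if x == v} for v in V}

def frustration_index(sigma):
    """l(Sigma) = minimum, over all 2^10 switchings, of the number of negative edges."""
    best = len(E)
    for signs in product((1, -1), repeat=len(V)):
        z = dict(zip(V, signs))
        best = min(best, sum(1 for a, b in E if z[a] * sigma[(a, b)] * z[b] < 0))
    return best

def circles(k):
    """All k-gons (k = 5 or 6).  Since P has girth 5, every pentagon and
    hexagon is induced, hence is a k-set inducing a 2-regular subgraph."""
    return [set(S) for S in combinations(V, k)
            if all(len(adj[v] & set(S)) == 2 for v in S)]

def negative_circles(sigma, k):
    """Count the k-gons carrying an odd number of negative edges."""
    return sum(1 for S in circles(k)
               if sum(sigma[e] < 0 for e in E if set(e) <= S) % 2 == 1)

minus_P = {e: -1 for e in E}                          # -P, switches to P_{3,3}
P1_like = {e: (-1 if e == E[0] else 1) for e in E}    # one negative edge

print(len(circles(5)), len(circles(6)))                            # 12 10
print(negative_circles(P1_like, 5), negative_circles(P1_like, 6))  # 4 4
print(frustration_index(minus_P), frustration_index(P1_like))      # 3 1
```

The printed values agree with $c_5=12$ and $c_6=10$, with the row of $P_1$ in Table \ref{Tb:circ}, and with $l(P_{3,3})=3$ and $l(P_1)=1$ in Table \ref{Tb:fr}.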
\begin{table}[htb] \begin{center} \begin{tabular}{|r||c|c|c|c|c|c|} \hline \vstrut{15pt}$(P,\sigma)$ &\hbox to 2em{\,$+P$} &\hbox to 2em{\;\,$P_1$} &\hbox to 2em{\,$P_{2,2}$} &\hbox to 2em{\,$P_{2,3}$} &\hbox to 2em{\,$P_{3,2}$} &\hbox to 2em{\,$P_{3,3}$} \\[3pt] \hline \vstrut{15pt}$l(P,\sigma)$ &0 &1 &2 &2 &3 &3 \\[2pt] \hline \end{tabular} \end{center} \caption{The frustration index of each switching isomorphism type.} \label{Tb:fr} \end{table} \begin{proof} First we show that every signature of $P$ switches to one of the six. \begin{lem}\label{L:cutfr} If every cut in $\Sigma$ has at least as many positive as negative edges, then $l(\Sigma) = |E^-|$. If some cut has more negative than positive edges, then $l(\Sigma) < |E^-|$. \end{lem} \begin{proof} If $|E^-(X,X^c)| > |E^+(X,X^c)|$, then switching $X$ reduces the number of negative edges. If $|E^-(X,X^c)| \leq |E^+(X,X^c)|$ for every $X$, then no switching can reduce the number of negative edges; so $l(\Sigma) = |E^-|$ by Lemma \ref{L:swfr}. \end{proof} Lemma \ref{L:cutfr} has a pleasing effect on a cubic graph. \begin{cor}\label{C:cubicfr} In any minimal signature of a cubic graph the negative edges are a matching. \end{cor} Thus, we need only examine all the automorphism types of matchings in $P$ from Section \ref{structure}. Let $E^- = M_k$ where $0 \leq k \leq 5$. Matchings of 0 or 1 edge are trivial: $\Sigma$ is minimal. When $k=2$, $E^- = M_{2,2}$ or $M_{2,3}$ so we have $P_{2,2}$ or $P_{2,3}$. When $E^- = M_5$, switching the vertices of one of the pentagons separated by $E^-$ makes all edges positive, which is $+P$. When $E^- = M_5\ \setm$ edge or $E^- = M_3' = M_5\ \setm$ 2 edges, the same switching gives $P_1$ or $P_{2,2}$, respectively. For $E^- = M_4'$ switch $\{v_{kl},v_{ij},v_{km},v_{jl}\}$. This also results in $P_{2,2}$. The last case is $E^- = M_{3,2/3}$. Here we switch $\{v_{jk},v_{jl},v_{im}\}$, getting $P_{2,3}$. 
This proves that every signature is switching isomorphic to one of the six basic types. It remains to show that each of the six types is actually minimal. We have shown that a signature in which $E^-$ is a matching is not minimal if it is not one of the six. Thus, if no two of the six are switching isomorphic, each must be the unique minimal element of its switching isomorphism class. The switching invariants $c_5^-$, $c_6^-$, and $l$ are more than enough to prove that none of the six can switch to any other. Thus, the theorem is proved. \end{proof} \begin{cor}\label{C:min} In each switching equivalence class and in each switching isomorphism class of signed Petersen graphs there is exactly one minimal isomorphism type. \end{cor} The corollary cannot say that there is a unique minimal signature in each switching equivalence class, because that is false. In the switching equivalence class of $-P$ the unique minimal isomorphism type is $P_{3,3}$, but the exact choice of the three negative edges is not unique. The number of minimal graphs in that switching equivalence class equals the number of sets of three edges all at distance 3, which is 5. It is a remarkable fact that not just some but every switching equivalence class, and every switching isomorphism class, of signed Petersens has only one minimal signature up to isomorphism. It is not surprising that some switching equivalence classes have this property, but that all do is. By way of contrast, $K_n$ (with $n\geq4$) has some switching equivalence classes with unique minimal elements, either absolutely or only up to isomorphism, and some with multiple minimal members. In the class of the signature $K_n(e)$, which has exactly one negative edge $e$, clearly the only minimal signed graph is $K_n(e)$. 
In the class of $-K_n$ the minimal elements are all the signatures of $K_n$ where the positive edges form a cut of maximum size, i.e., where $V(K_n)$ is partitioned into two sets whose sizes differ by at most 1 \cite{Petersdorf}. There are many such signatures and all are switching equivalent to $-K_n$; but they are all isomorphic. Now assume $n=2r+1\geq5$ and consider one more signature, where the negative edges are $e_{1i}$ for $i=2,3,\ldots,r+1$ and $e_{2,3}$. Here $E^-$ is a connected subgraph. This signature is minimal in its switching equivalence class by Lemma \ref{L:cutfr}. Switching $v_1$, the negative edges are $e_{1i}$ for $i=r+2,r+3,\ldots,2r+1$ and $e_{2,3}$. The number of negative edges is unchanged, but they now form a disconnected subgraph. Thus, this switching equivalence class contains (at least) two minimal graphs that are not isomorphic. We see that for $K_n$ there are switching equivalence classes whose minimal graph is unique, those in which the minimal graph is unique only up to isomorphism, and those with nonisomorphic minimal members. Thus, the behavior of the Petersen signatures is not totally ordinary. I suspect it is unusual but the truth is that no one knows whether, in regard to the uniqueness of either minimal signatures or isomorphism types of minimal signatures in either their switching equivalence or isomorphism class, most graphs resemble $K_n$ or $P$. \subsection{Frustration number}\label{frno} The (vertex) frustration number has been less deeply explored than the frustration index, perhaps because it seems less suitable to the social psychology model and is certainly less relevant to spin glass theory. Besides, it appears to be less subtle in distinguishing between different signatures of a graph, because most graphs have fewer vertices than edges. Nevertheless, we find a use for it in counting colorations in Section \ref{col}. \begin{lem}\label{L:swfrno} Switching does not change $l_0(\Sigma)$. 
Moreover, $l_0(\Sigma) \leq l(\Sigma)$ in every signed graph. \end{lem} \begin{proof} The first part is obvious from Lemma \ref{L:switching} because imbalance depends only on the set of negative circles. The second part follows from the fact that, if we delete one endpoint from each edge of a minimum balancing edge set, we get a balanced subgraph by deleting at most $l$ vertices. \end{proof} \begin{thm}\label{T:frno} The frustration numbers of signed Petersen graphs are given in Table \ref{Tb:frno}. All have frustration number equal to the frustration index. \end{thm} \begin{table}[htb] \begin{center} \begin{tabular}{|r||c|c|c|c|c|c|} \hline \vstrut{15pt}$(P,\sigma)$ &\hbox to 2em{\,$+P$} &\hbox to 2em{\;\,$P_1$} &\hbox to 2em{\,$P_{2,2}$} &\hbox to 2em{\,$P_{2,3}$} &\hbox to 2em{\,$P_{3,2}$} &\hbox to 2em{\,$P_{3,3}$} \\[3pt] \hline \vstrut{15pt}$l_0(P,\sigma)$ &0 &1 &2 &2 &3 &3 \\[2pt] \hline \end{tabular} \end{center} \caption{The frustration number of each switching isomorphism type.} \label{Tb:frno} \end{table} \begin{proof} Consult Figure \ref{F:types}. The values for $+P$ and $P_1$ are obvious. A signature that has two vertex-disjoint negative pentagons cannot have $l_0<2$; if the frustration index is $2$, as in $P_{2,2}$ and $P_{2,3}$, then $l_0 = 2$, since $l_0 \leq l$ by Lemma \ref{L:swfrno}. In $P_{3,2}$ the negative pentagons are the inner star and all pentagons with two outer edges. To achieve balance we must delete an inner vertex. Deleting one such vertex $v$ gives $P_{3,2}\setm v$, which is a subdivision of $K_4$ in which the paths corresponding to two opposite edges in $K_4$ are negative and the paths that correspond to other edges in $K_4$ are positive. Every circle in $P_{3,2}\setm v$ that corresponds to a triangle of $K_4$ is negative. It is impossible to make this graph balanced by deleting only one more vertex, since every vertex of the subdivision misses at least one of the four negative triangles; hence $l_0(P_{3,2}) \geq 3$, and equality follows from $l_0 \leq l = 3$. Because $P_{3,3}$ is antibalanced, every pentagon is negative. That means a vertex set whose deletion makes for balance must cover all the pentagons.
No two vertices can do that, as one can verify by inspecting adjacent and nonadjacent pairs; but any vertex neighborhood $N(v)$ does. Hence, $l_0(P_{3,3}) = 3$. Comparing Tables \ref{Tb:fr} and \ref{Tb:frno} shows that $l_0 = l$ in every case. \end{proof} One can easily see that $l_0=l$ is not true in general. However, I verified that equality holds for every signature of $K_4$ or $K_{3,3}$. I hesitantly propose: \begin{conj}\label{Cj:cubic} For every signed cubic graph $\Sigma$, $l_0(\Sigma) = l(\Sigma).$ \end{conj} \section{Automorphisms and Orbits}\label{aut} In this section we develop a general theory of switching automorphism groups of signed graphs. Then we compute the automorphism and, more importantly, switching automorphism groups of the six basic signed Petersen graphs and their negatives. Lastly we apply that information to find the number of isomorphic but switching-inequivalent copies of each of the six basic signatures. We regard an automorphism of $\Gamma$ as a permutation of $V$ and we write actions as superscripts, so products are read from left to right. \subsection{Automorphisms and switching automorphisms of signed graphs}\label{genaut} An automorphism of a signed graph is an isomorphism with itself; that is, it is an automorphism of the underlying graph that preserves edge signs. A \emph{switching automorphism} of a signed graph is a switching isomorphism with itself. (As with switching isomorphisms, cf.\ near Lemma \ref{L:switching}, switching automorphisms really are automorphisms: of the biased graph $(|\Sigma|,\cC^+(\Sigma))$.) The group of automorphisms is $\Aut(\Sigma)$ and that of switching automorphisms is $\SwAut(\Sigma)$. \subsubsection{Automorphisms}\label{plainaut} As concerns automorphisms, a signed graph is just a graph whose edges are colored with two colors; an automorphism is a color-preserving graph automorphism.
There is not much to say except the following: \begin{prop}\label{P:autgroup} For a signed graph $\Sigma = (\Gamma,\sigma)$, $$\Aut\Sigma = \Aut\Gamma \cap \Aut\Sigma^+ = \Aut\Gamma \cap \Aut\Sigma^- = \Aut\Sigma^+ \cap \Aut\Sigma^-.$$ \end{prop} \subsubsection{Switching permutations and switching automorphisms}\label{swaut} Switching automorphisms are more complicated; to treat them we need precise definitions and notation. We begin with the action of automorphisms of $\Gamma$ upon signatures: $$ \sigma^\alpha(v^\alpha w^\alpha) := \sigma(vw), $$ and $\Sigma^\alpha := (\Gamma, \sigma^\alpha).$ The action of an automorphism on a switching function is similar: $$ \zeta^\alpha(v^\alpha) := \zeta(v). $$ This leads to the commutation law \begin{equation} \zeta \alpha = \alpha \zeta^\alpha, \label{E:commutation} \end{equation} because $$\sigma^{\zeta \alpha}(v^\alpha w^\alpha) = (\sigma^\zeta)^\alpha(v^\alpha w^\alpha) = \sigma^\zeta(vw) = \zeta(v) \sigma(vw) \zeta(w)$$ while $$\sigma^{\alpha \zeta^\alpha}(v^\alpha w^\alpha) = (\sigma^\alpha)^{\zeta^\alpha}(v^\alpha w^\alpha) = {\zeta^\alpha}(v^\alpha) \sigma^\alpha(v^\alpha w^\alpha) {\zeta^\alpha}(w^\alpha) = \zeta(v) \sigma(vw) \zeta(w).$$ Rewriting \eqref{E:commutation} as $\alpha\inv \zeta \alpha = \zeta^\alpha$, we see that the action of $\alpha$ is that of conjugation, as the notation suggests. Rewriting it in terms of $\zeta_X$ we obtain the important equation \begin{equation} (\zeta_X)^\alpha = \zeta_{X^\alpha}, \label{E:commutationset} \end{equation} since $\zeta_X^\alpha(v^\alpha) = \zeta_X(v) = \zeta_{X^\alpha}(v^\alpha).$ Now we can define a preliminary group to the switching automorphism group. The ground set is $\signs^V \times \Aut\Gamma$, whose elements we call, for lack of a better name, \emph{switching permutations of\/ $\Gamma$}, because when they act on a signature of $\Gamma$ they switch signs and permute the vertices. 
A \emph{switching permutation of\/ $\Sigma$} is any $\zeta\gamma \in \signs^V \times \Aut\Gamma$ such that $\Sigma^{\zeta\gamma} = \Sigma$. The multiplication rule is $$(\zeta,\alpha)(\eta,\beta) = (\zeta\eta^{\alpha\inv}, \alpha\beta).$$ Because $\signs^V$ and $\Aut\Gamma$ embed naturally into $\signs^V \times \Aut\Gamma$ as $\signs^V \times \{\id\}$ and $\{\eps\} \times \Aut\Gamma$, we regard them as subgroups of $\signs^V \times \Aut\Gamma$ and write the element $(\zeta,\alpha)$ as a product, $\zeta\alpha$. The equation of multiplication is given by the next lemma. \begin{lem}\label{L:multinvset} The product of switching permutations $\zeta_X\gamma$ and $\zeta_Y\xi$, where $\zeta_X, \zeta_Y \in \signs^V$ and $\gamma, \xi \in \Aut \Gamma$, is given by \begin{equation} \zeta_X\gamma \cdot \zeta_Y\xi = \zeta_X\zeta_{Y^{\gamma\inv}} \cdot \gamma\xi. \label{E:multset} \end{equation} The inverse of a switching permutation is \begin{equation} (\zeta_X\gamma)\inv = \zeta_{X^{\gamma}} \gamma\inv. \label{E:invset} \end{equation} \end{lem} \begin{proof} The product formula is a restatement of the previous equations. We verify the inversion formula with a short calculation: $$ \zeta_{X^{\gamma}} \gamma\inv \cdot \zeta_X\gamma = \zeta_{X^{\gamma}} \zeta_{X^{\gamma}} \cdot\gamma\inv \gamma = \zeta_{X^{\gamma} \oplus X^{\gamma}} \,\id = \eps\,\id $$ by \eqref{E:commutationset}. \end{proof} The commutation laws \eqref{E:commutation} and \eqref{E:commutationset} imply that the conjugate of a switching function by an automorphism is another switching function. Consequently, $\signs^V$ is a normal subgroup. That makes the group of switching permutations a semidirect product of $\signs^V$ and $\Aut\Gamma$, so we write it as $\signs^V \rtimes \Aut\Gamma$. We write $p_A$ for the projection onto $\Aut\Gamma$. The action of $\signs^V \rtimes \Aut\Gamma$ on signed graphs $(\Gamma,\sigma)$ has kernel $\fK_\Gamma \times \{\id\}$. 
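The sign bookkeeping in Lemma \ref{L:multinvset} and the commutation law \eqref{E:commutation} are easy to check numerically. The following is a minimal sketch in Python, not part of the development: it acts on signatures of $K_5$ (chosen so that every vertex permutation is a graph automorphism) and verifies that composing the actions of two switching permutations agrees with the action of their product; the dictionary encodings of switching functions and permutations are illustrative choices.

```python
import random
from itertools import combinations

random.seed(1)
V = list(range(5))
E = [(u, v) for u, v in combinations(V, 2)]   # K_5: every permutation is an automorphism

def act(sigma, zeta, alpha):
    """The right action sigma^{(zeta,alpha)} = (sigma^zeta)^alpha."""
    inv = {alpha[v]: v for v in V}                                     # alpha^{-1}
    key = lambda u, v: (u, v) if (u, v) in sigma else (v, u)
    switched = {e: zeta[e[0]] * sigma[e] * zeta[e[1]] for e in sigma}  # sigma^zeta
    return {e: switched[key(inv[e[0]], inv[e[1]])] for e in sigma}     # then ^alpha

def compose(zeta, alpha, eta, beta):
    """The rule (zeta,alpha)(eta,beta) = (zeta * eta^{alpha^{-1}}, alpha beta)."""
    zeta2 = {v: zeta[v] * eta[alpha[v]] for v in V}  # eta^{alpha^{-1}}(v) = eta(v^alpha)
    ab = {v: beta[alpha[v]] for v in V}              # products read left to right
    return zeta2, ab

for _ in range(200):
    sigma = {e: random.choice((1, -1)) for e in E}
    zeta = {v: random.choice((1, -1)) for v in V}
    eta = {v: random.choice((1, -1)) for v in V}
    alpha = dict(enumerate(random.sample(V, len(V))))
    beta = dict(enumerate(random.sample(V, len(V))))
    zeta2, ab = compose(zeta, alpha, eta, beta)
    # acting by (zeta,alpha), then by (eta,beta), equals acting by their product
    assert act(act(sigma, zeta, alpha), eta, beta) == act(sigma, zeta2, ab)
print("multiplication rule verified")
```

The same encoding verifies the inversion formula \eqref{E:invset} by checking that composing an element with its proposed inverse fixes every signature.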
The quotient group is the \emph{switching automorphism group of\/ $\Gamma$}, $$ \SwAut\Gamma := \big( \signs^V \rtimes \Aut\Gamma \big) / \big( \fK_\Gamma \times \{\id\} \big). $$ Since $\Sw\Gamma$ can be identified with the normal subgroup $\Sw\Gamma \times \{\id\}$, $\fK_\Gamma$ with $\fK_\Gamma \times \{\id\}$, and $\Aut\Gamma$ with the subgroup $\{\bareps\} \times \Aut\Gamma$, the switching automorphism group of $\Gamma$ is a semidirect product, $$ \SwAut\Gamma = \Sw\Gamma \rtimes \Aut\Gamma, $$ which projects onto $\Aut\Gamma$ by a mapping $\barp_A$. We refer to elements of $\SwAut\Gamma$ as \emph{switching automorphisms of\/ $\Gamma$}. (That is a slight abuse of terminology since they do not actually switch $\Gamma$; they switch signatures of $\Gamma$.) A switching automorphism of $\Gamma$ can be written in several equivalent ways. As a member of $\big( \signs^V \rtimes \Aut\Gamma \big) / \big( \fK_\Gamma \times \{\id\} \big)$ it is $\overline{(\zeta,\alpha)} = \overline{\zeta\alpha}$. As a member of $\Sw\Gamma \rtimes \Aut\Gamma$ it is $(\bar\zeta,\alpha) = \bar\zeta\alpha$. By the natural embeddings $\overline{\zeta\,\id} = \bar\zeta\,\id = \bar\zeta$ and $\overline{\eps\alpha} = \bar\eps\alpha = \alpha$. In particular, the identity element of $\SwAut\Gamma$ is $\overline{\eps\,\id} = \bareps\,\id = \id$. Lemma \ref{L:multinvset} applies in $\SwAut\Gamma$ simply by putting a bar over the switching functions. (Sometimes we omit the bar, as it is obvious which element of $\SwAut\Gamma$ is meant by $\zeta\alpha$.) The switching automorphism group of $\Gamma$ contains the switching automorphism group of each signed graph $\Sigma = (\Gamma,\sigma)$. The latter group is $$ \SwAut\Sigma := \{ \bar\zeta\alpha : \alpha \in \Aut\Gamma \text{ such that } \alpha: \Sigma^\zeta \cong \Sigma \}. $$ That is, $\alpha$ must be an isomorphism from the switched signed graph to the original signed graph. 
This group projects into $\Aut\Gamma$ by the mapping $\barp_A|_{\SwAut\Sigma}$, which for simplicity we also write as $\barp_A$. We identify $\Aut\Sigma$ with the subgroup $\{\bareps\alpha \in \SwAut\Gamma: \alpha \in \Aut\Sigma\}$. Note that a switching permutation of $\Sigma$ is any switching permutation of $\Gamma$ such that $\bar\zeta\gamma \in \SwAut\Sigma$. \subsubsection{Automorphisms and switching automorphisms}\label{autswaut} Now we can state relationships amongst the automorphisms and switching automorphisms of $\Sigma$ and the automorphisms of $\Gamma$. \begin{prop}\label{P:autgroups} As a function from $\SwAut\Sigma$ to $\Aut\Gamma$, $\barp_A$ is a monomorphism. The groups satisfy $\Aut\Sigma \leq \barp_A(\SwAut\Sigma) \leq \Aut\Gamma$. \end{prop} \begin{proof} It is obvious that $\barp_A$ is a homomorphism. To prove it is injective we examine a switching function $\zeta$ such that $\zeta\,\id$ is a switching automorphism. That means $\Sigma^\zeta = \Sigma$, in other words, $\zeta \in \fK_\Gamma$. But that means the only element of the form $\bar\zeta\,\id$ in $\SwAut\Sigma$ is the trivial one, $\bareps\,\id$. Hence, $\barp_A$ is injective. The relationships of the groups are now obvious. \end{proof} Another relationship makes an obvious but valuable lemma. \begin{lem}\label{L:stab+-} The automorphisms of $\Sigma$ are the automorphisms of $|\Sigma|$ that stabilize $\Sigma^+$, or equivalently $\Sigma^-$. \end{lem} Switching automorphisms of homogeneously signed graphs are not very interesting in themselves. \begin{prop}\label{P:homoaut} The automorphisms and the switching automorphisms of a homogeneous signature, $+\Gamma$ or $-\Gamma$, are the automorphisms of the underlying graph. \end{prop} \begin{proof} This follows at once from Lemma \ref{L:stab+-}. \end{proof} A heterogeneously signed graph, to the contrary, is likely to have switching automorphisms that are not automorphisms of the signed graph. 
We see this in most, though not all, of the heterogeneous signatures of $P$. Switching can change the automorphism group drastically. Fortunately, the isomorphism type of the switching automorphism group is invariant under switching. In addition, negations need not be considered separately. \begin{prop}\label{P:negaut} $\Aut(-\Sigma) = \Aut(\Sigma)$ and $\SwAut(-\Sigma) = \SwAut(\Sigma)$. Also, $\SwAut(\Sigma^\zeta) \cong \SwAut(\Sigma)$ by the mapping $\bar\eta\gamma \mapsto \bar\zeta\bar\eta\gamma$. \end{prop} \begin{proof} The first statement is immediate from Lemma \ref{L:stab+-}. The second follows from considering how a switching automorphism acts. The pair $(\zeta,\alpha)$ is a switching automorphism of $\Sigma$ if and only if $\Sigma^\zeta \cong \Sigma$, the isomorphism being via $\alpha$. This means that the same graph automorphism is an automorphism both of $(\Sigma^\zeta)^+ \cong \Sigma^+$ and of $(\Sigma^\zeta)^- \cong \Sigma^-$. It follows that $\zeta\alpha$ is a switching automorphism of $-\Sigma$ under exactly the same conditions as it is a switching automorphism of $\Sigma$. For the third statement we simply write down the action of $\bar\eta\alpha$: it converts $\Sigma^\zeta$ to $(\Sigma^{\zeta})^{\eta\alpha} = \Sigma^{\bar\zeta\bar\eta\alpha}$. \end{proof} \begin{cor}\label{C:autpart} Switching $\Sigma$ does not change the automorphisms in the switching automorphism group: $\barp_A(\SwAut\Sigma^\zeta) = \barp_A(\SwAut\Sigma)$ for any switching function $\zeta$. \end{cor} \begin{proof} Examine the mapping in Proposition \ref{P:negaut}. \end{proof} Suppose $\zeta\alpha$ is a switching automorphism. Since $(\Sigma^\zeta)^- \cong \Sigma^-$, the switching cannot change the number of negative edges. As switching means negating the signs of edges in a cut, the cut must have equally many positive and negative edges.
Thus we have a necessary condition for a switching automorphism: \begin{prop}\label{P:swautcut} If $\zeta_X\alpha$ is a switching automorphism of $\Sigma$, then $\del X$ has equally many edges of each sign. \qed \end{prop} \subsubsection{Coset representation}\label{cosetrep} We treat multiplication in a switching automorphism group $\SwAut\Sigma$ through the left cosets of $\Aut\Sigma$. Choose a system $\bar R$ of representatives of the cosets and a system $R$ of representatives $\zeta_X\gamma_X \in \signs^V \rtimes \Aut\Gamma$ of the elements $\bar\zeta_X\gamma_X \in \bar R$. Then $\SwAut\Sigma$ is the disjoint union of the left $\bar R$-cosets of $\Aut\Sigma$: \begin{equation} \SwAut\Sigma = \bigcup_{\zeta_X\gamma_X \in R} \bar\zeta_X\gamma_X \Aut\Sigma . \label{E:swautrep} \end{equation} Thus we have two levels of representation: a switching automorphism $\bar\zeta_X\gamma_X$ representing each coset, and a switching permutation $\zeta_X\gamma_X$ to represent each $\bar\zeta_X\gamma_X \in \bar R$. Note that $\zeta_X$ and $\zeta_{X^c} = -\zeta_{X}$ are equally valid representatives of $\bar\zeta_X$; thus we can choose $X$ so that $|X|\leq\frac12|V|$. \begin{prop}\label{P:cosetsw} The following three statements about two switching automorphisms, $\bar\zeta_X\gamma$ and $\bar\zeta_Y\xi \in \SwAut\Sigma$, are equivalent. \begin{enumerate}[{\rm(i)}] \item They belong to the same coset of $\Aut\Sigma$ in $\SwAut\Sigma$. \item They have the same switching operation, $\bar\zeta_X = \bar\zeta_Y$. \item $\gamma$ and $\xi$ belong to the same coset of $\Aut\Sigma$ in $\Aut\Gamma$. \end{enumerate} \end{prop} \begin{proof} The switching automorphisms are in the same coset $\iff$ there is an $\alpha \in \Aut\Sigma$ such that $\bar\zeta_X\gamma = \bar\zeta_Y\xi \alpha$. Because $\SwAut\Sigma \subseteq \SwAut\Gamma$ and $\bar p_A$ is a monomorphism, this implies (iii) $\gamma = \xi\alpha \in \xi \Aut\Sigma$ and (ii) $\bar\zeta_X = \bar\zeta_Y$. 
Now suppose (ii), i.e., there are cosets $\bar\zeta_X\gamma\Aut\Sigma$ and $\bar\zeta_X\xi\Aut\Sigma$ with the same switched set $X$. Then $(\bar\zeta_X\gamma)\inv(\bar\zeta_X\xi) \in \SwAut\Sigma$. Simplifying, $(\bar\zeta_X\gamma)\inv(\bar\zeta_X\xi) = \gamma\inv \bar\zeta_X\inv \bar\zeta_X \xi = \gamma\inv \xi$. Thus, $\gamma\inv \xi$ is an element of $\SwAut\Sigma$ with trivial switching part, so $\gamma\inv \xi \in \Aut\Sigma$, which implies (iii). Finally, suppose (iii), i.e., $\xi = \gamma\alpha$ with $\alpha \in \Aut\Sigma$. Then $\bar\zeta_Y\xi\alpha\inv = \bar\zeta_Y\gamma \in \SwAut\Sigma$, which has the same projection under $\barp_A$ as $\bar\zeta_X\gamma$. As in the first part of the proof, this implies (ii) $\bar\zeta_X = \bar\zeta_Y$ and consequently $\bar\zeta_Y\xi = \bar\zeta_X\gamma\alpha \in \bar\zeta_X\gamma \Aut\Sigma$, which is (i). \end{proof} \begin{cor}\label{C:cosetrepsw} Each left coset representative $\bar\zeta_X\gamma_X \in \bar R$ has a different switching function $\bar\zeta_X$. \end{cor} By Corollary \ref{C:cosetrepsw}, $X$ determines $\gamma_X$; thus, we define $$\rho_X := \zeta_X\gamma_X := \text{the unique element of } R \text{ that has switching set } X.$$ Also, define $\rho_{X^c} = \zeta_{X^c}\gamma_{X^c}$. Then $\bar\rho_X = \bar\rho_{X^c}$ because $\bar\zeta_X = \overline{-\zeta_X} = \bar\zeta_{X^c}$. Thus, assuming $\Sigma$ is connected, each $\bar\rho \in \bar R$ has two associated switching sets, $X$ and $X^c$, each of which serves equally well to represent $\bar\rho$. (There are only two because $\fK_\Gamma = \{\pm\eps\}$.) The task now is to express the product of switching automorphisms in terms of coset representatives. In the next subsection we do that for the more complicated signed Petersen examples by setting up multiplication tables for $R$, which combine with a general formula to give all products. Here we explain the format of such tables and obtain the general product formula.
The product of representatives, $\zeta_X\gamma_X \cdot \zeta_Y\xi$, has the form $(\zeta_U\gamma_U)\nu$ where $\zeta_U\gamma_U \in R$ and the permutation $\nu \in \Aut\Sigma$ is a correction due to the fact that the product of representatives need not be a representative itself. We need formulas for $U$ and $\nu$ in terms of $R$. (The application to $\bar R$ consists merely of placing bars over the switching functions.) For simplicity we assume $\Sigma$ is connected, to ensure that $\bar\zeta_U$ is represented only by $\zeta_U$ or $\zeta_{U^c} = -\zeta_U$. \begin{prop}\label{P:genmult} Assume $\Sigma = (\Gamma,\sigma)$ is connected. For switching automorphisms $(\bar\zeta_X\gamma_X)\alpha$ and $(\bar\zeta_Y\gamma_Y)\beta$, where $\zeta_X\gamma_X, \zeta_Y\gamma_Y \in R$ and $\alpha, \beta \in \Aut\Sigma$, there is the multiplication formula \begin{equation} (\zeta_X\gamma_X)\alpha \cdot (\zeta_Y\gamma_Y)\beta = (\pm\zeta_U\gamma_U) \nu \cdot \alpha\beta, \label{E:genmult} \end{equation} where $U = X \oplus Y^{\alpha\inv\gamma_X\inv}$, $\gamma_U$ and the sign are determined by $\pm\zeta_U\gamma_U \in R$, and $\nu = \gamma_U\inv \gamma_X\gamma_Y^{\alpha\inv} \in \Aut\Sigma$. \end{prop} \begin{proof} Most of the proof is a calculation: \begin{align*} (\zeta_{X} \gamma_X)\alpha \cdot (\zeta_Y \gamma_Y)\beta &= (\zeta_{X} \gamma_X) (\zeta_Y^{\alpha\inv}\gamma_Y^{\alpha\inv}) \cdot \alpha\beta \\ &= (\zeta_{X} \gamma_X) (\zeta_{Y^{\alpha\inv}}\gamma_Y^{\alpha\inv}) \cdot \alpha\beta \\ &= (\zeta_{X}\zeta_{Y^{\alpha\inv}}^{\gamma_X\inv}) (\gamma_X\gamma_Y^{\alpha\inv}) \cdot \alpha\beta \\ &= (\zeta_{X}\zeta_{Y^{\alpha\inv\gamma_X\inv}}) (\gamma_X\gamma_Y^{\alpha\inv}) \cdot \alpha\beta \\ &= \zeta_{X \oplus Y^{\alpha\inv\gamma_X\inv}} (\gamma_X\gamma_Y^{\alpha\inv}) \cdot \alpha\beta. 
\end{align*} By Corollary \ref{C:cosetrepsw}, $\bar\zeta_U$ determines $\gamma_U \in \Aut\Gamma$ such that $\bar\zeta_U\gamma_U \in \bar R$; consequently, \begin{align*} (\zeta_{X} \gamma_X)\alpha \cdot (\zeta_Y \gamma_Y)\beta &= (\pm\zeta_{U}\gamma_U) (\gamma_U\inv \gamma_X\gamma_Y^{\alpha\inv}) \cdot \alpha\beta . \end{align*} The sign is determined by whether $U := X \oplus Y^{\alpha\inv\gamma_X\inv}$ or its complement is the set $U'$ switched by the representative $\zeta_{U'}\gamma_{U'} \in R$. In the former case $U' = U$ and the sign is $+$, while in the latter case $U' = U^c$, which introduces the minus sign. $U'$ must be one or the other because switching any other set will give some edge in a spanning tree a different sign. The reason $\nu \in \Aut\Sigma$ is that, by the definition of $U$, $(\bar\zeta_{X} \gamma_X)\alpha \cdot (\bar\zeta_Y \gamma_Y)\beta \in \bar\zeta_U\gamma_U \Aut\Sigma$. Thus, $\nu \cdot \alpha\beta \in \Aut\Sigma$, which entails that $\nu \in \Aut\Sigma$. \end{proof} Ideally, to use Equation \eqref{E:genmult} in conjunction with the multiplication table of $R$, one first finds $Y' := Y^{\alpha\inv}$, then looks up the product $(\pm\rho_U) \nu = \rho_X \rho_{Y'}$ in the table and combines with $\alpha\beta$. (It is not necessary to find $Y^{\alpha\inv\gamma_X\inv}$ or $U$.) For this method to work, $R$ should be closed under conjugation by $\Aut\Sigma$. With $\Sigma = P_{3,2}$ and $P_{3,3}$ one can choose $R$ suitably; that is, so it is a union of orbits of $\Aut\Sigma$ acting on $\SwAut\Sigma$. However, it may not always be possible to choose such an ideal system of representatives. \begin{question}\label{Q:systemofreps} Does a system of representatives $\bar R$ that is closed under conjugation by $\Aut\Sigma$ exist for every signed graph? \end{question} A necessary condition for such a system is that, if $(\zeta_X\gamma_X)^\alpha$ is in the same coset as $\zeta_X\gamma_X$, then it must equal $\zeta_X\gamma_X$.
Thus $\gamma_X$ should commute with every automorphism $\alpha$ of $\Sigma$ for which $\bar\zeta_{X^\alpha} = \bar\zeta_X$ (equivalently when $\Sigma$ is connected, $X^\alpha = X$ or $X^c$). \subsection{Petersen automorphisms and switching automorphisms}\label{petaut} Here we find the automorphism and switching automorphism groups of the six minimal signed Petersen graphs and their negations. Between them they have six automorphism groups and six switching automorphism groups, but only four abstract types of switching automorphism group. By Proposition \ref{P:negaut} the negative signature, $(P,-\sigma)$, has exactly the same groups as does $(P,\sigma)$, and furthermore $\SwAut(P_{3,3}) \cong \SwAut(-P)$ and $\SwAut(P_{2,3}) \cong \SwAut(-P_1)$. By Proposition \ref{P:homoaut}, both groups of $+P$ and of $-P$ equal $\Aut(P) = \fS_5$. Thus, as abstract groups we have five automorphism groups and three switching automorphism groups to discover; but there are five switching automorphism groups to find as explicit subgroups of $\SwAut P$. \begin{thm}\label{T:aut} The abstract automorphism and switching automorphism groups of the minimal signed Petersen graphs and their negatives are as shown in Table \ref{Tb:aut}. As subgroups of $\SwAut P$ they are shown in Table \ref{Tb:autexact}. \end{thm} \begin{table}[htb] \begin{center} \begin{tabular}{|c||c|c|} \hline \vstrut{15pt}$(P,\sigma)$ &$\Aut(P,\sigma)$ &$\SwAut(P,\sigma)$ \\[3pt] \hline \vstrut{15pt}$+P$, $-P$ &$\fS_5$ &$\fS_5$ \\[3pt] \vstrut{15pt}$P_1$, $-P_1$ &$\fD_4$ &$\fD_4$ \\[3pt] \vstrut{15pt}$P_{2,2}$, $-P_{2,2}$ &$\fZ_2$ &$\fV_4$ \\[6pt] \vstrut{15pt}$P_{2,3}$, $-P_{2,3}$ &$\fD_4$ &$\fD_4$ \\[3pt] \vstrut{15pt}$P_{3,2}$, $-P_{3,2}$ &$\fS_3$ &$\fA_5$ \\[3pt] \vstrut{15pt}$P_{3,3}$, $-P_{3,3}$ &$\fS_4$ &$\fS_5$ \\[3pt] \hline \end{tabular} \end{center} \caption{The automorphism and switching automorphism groups of the minimal signed Petersens and their negatives.
$\fS_k$, $\fA_k$, $\fD_k$, and $\fZ_k$ are the symmetric and alternating groups on $k$ letters, the dihedral group of a $k$-gon, and the cyclic group of order $k$. $\fV_4$ is the Klein four-group.} \label{Tb:aut} \end{table} \begin{table}[htb] \begin{center} \begin{tabular}{|l||c|c|} \hline \vstrut{15pt}$(P,\sigma)$ &$\Aut(P,\sigma)$ &$\SwAut(P,\sigma)$ \\[3pt] \hline\hline \vstrut{16pt}$+P$, $-P$ &$\fS_{\5}$ &$\{\bareps\}\times\fS_{\5}$ \\[5pt] \hline \vstrut{20pt}\parbox{4cm}{$P_1$, $-P_1$ with \\ $E^-=\{v_{ij}v_{kl}\}$} &$\big\langle(ij),(ikjl)\big\rangle$ &$\{\bareps\}\times\big\langle(ij),(ikjl)\big\rangle$ \\[10pt] \hline \vstrut{20pt}\parbox{4cm}{$P_{2,2}$, $-P_{2,2}$ with \\ $E^-=\{v_{il}v_{jm},v_{kl}v_{im}\}$} &$\big\langle(jk)(lm)\big\rangle$ &$\big\langle \bareps(jk)(lm), \zeta_{\{{jm},{kl}\}}(jl)(km) \big\rangle$ \\[10pt] \hline \vstrut{20pt}\parbox{4cm}{$P_{2,3}$, $-P_{2,3}$ with \\ $E^-=\{v_{ik}v_{jl},v_{il}v_{jk}\}$} &$\big\langle(ij),(ikjl)\big\rangle$ &$\{\bareps\}\times\big\langle(ij),(ikjl)\big\rangle$ \\[10pt] \hline \vstrut{20pt}\parbox{5.2cm}{$P_{3,2}$, $-P_{3,2}$ with \\ $E^-=\{v_{il}v_{jm},v_{kl}v_{im},v_{jl}v_{km}\}$} &$\big(\fS_{\{i,j,k\}}\times\fS_{\{l,m\}}\big)^+$ &See Equation \eqref{E:P32set} \\[10pt] \hline \vstrut{20pt}\parbox{4.8cm}{$P_{3,3}$, $-P_{3,3}$ with \\ $E^-=\{v_{ij}v_{kl},v_{ik}v_{jl},v_{il}v_{jk}\}$} &$\fS_{\{i,j,k,l\}}$ &See Equation \eqref{E:P33setgen} \\[10pt] \hline \end{tabular} \end{center} \caption{The exact groups corresponding to specific negative edge sets. $i,j,k,l,m$ are the five elements of $\5$, in any order. For $\fG \leq \fS_n$, $\fG^+$ denotes the set of even permutations in $\fG$. $\zeta_X$ is the switching function that switches $X \subseteq V$ (with $ij$ denoting vertex $v_{ij}$ for readability).} \label{Tb:autexact} \end{table} We preface the proof with a structural lemma. \begin{lem}\label{L:cutswitch} Let $(P,\sigma)$ be a minimal signature of $P$.
Suppose $\del X$ is a cut that contains equally many edges of each sign, as when $X$ is switched in a switching automorphism. Then \begin{enumerate}[{\rm(a)}] \item $|\del X|=4$, $X=V(e_0)$ for some edge $e_0$, and $(P,\sigma) = P_{k,2}$ for $k=2$ or $3$, or \item $|\del X|=6$, $X = V(Q)$ for a path $Q$ of order $4$, and $(P,\sigma) = P_{3,2}$, or \item $|\del X|=6$, $X = N[v]$ for some vertex $v$, and $(P,\sigma) = P_{3,3}$. \end{enumerate} \end{lem} Note that Lemma \ref{L:cutswitch} does not apply to a switching automorphism in which there is no switching. \begin{proof} Suppose the subgraph $P{:}X$ induced on $X$, with edge set $E{:}X$, is disconnected; then $\del X$ is the disjoint union of two or more cuts, hence it has at least 6 edges. As $(P,\sigma)$ is minimal, there are no more than three negative edges, so the balanced cut $\del X$ has at most six edges; hence $|\del X| = 6$ and, since each component of $P{:}X$ contributes a cut of at least three edges, $X$ consists of two nonadjacent vertices. Then $\del X$ does not contain three independent edges; by Corollary \ref{C:cubicfr} this case is impossible. Therefore $P{:}X$ is connected, so $|\del X| = 3|X| - 2|E{:}X|$. As $|\del X|$ is even, this implies $|X|$ is even, so we may assume $|X| \leq 4$ (switching by $X^c$ instead of $X$ if necessary). Then $P{:}X$ is acyclic; being connected, it is a tree. Consequently $|E{:}X| = |X|-1$ and we deduce that $|\del X| = |X|+2$. If the cut has four edges, $|X| = 2$; so $X = V(e_0)$ for some edge $e_0$ and $\del X$ consists of the four edges adjacent to $e_0$. Amongst them the largest distance is 2. It follows that $(P,\sigma) = P_{k,2}$ as in (a). If the cut has six edges, $|X|=4$. $P{:}X$ is a tree which may be either a path $Q$ of order $4$ or a vertex star. If it is a path $Q$, then $X=V(Q)$ and the six edges of $\del X$ contain no three edges at distance 3 from one another. Hence, $d=2$ and we have (b). If $P{:}X$ is a vertex star, $X=N[v]$ for some $v \in V$. In this case $d=3$, for it is not possible to choose three edges in $\del X$ whose distances are all 2. Thus, we are in case (c).
\end{proof} \begin{proof}[Proof of Theorem \ref{T:aut}] In the course of the proof we establish many important facts about the groups, in particular multiplication tables for the most complicated ones, $\SwAut P_{3,2}$ and $\SwAut P_{3,3}$. The proofs of these facts could not easily be separated from that of the main theorem so it seemed best, though unconventional, to incorporate them all including their formal statements into one large proof. In order to keep the reader (and the author) from getting lost, the proof is divided into subsections treating different aspects. The groups of $+P$ follow from Proposition \ref{P:homoaut}. We take up the others in turn. \subsubsection{Signatures of type $P_1$.} \label{typeP1} The automorphism group of $P_1$ is the stabilizer of an edge in $\Aut P$. Suppose $P_1$ has negative edge $e=v_{ij}v_{kl}$; i.e., it is $P_{\{e\}}$. An automorphism $\alpha$ can fix the two endpoints of $e$; then it is in the four-element group generated by $(ij)$ and $(kl)$. Or, it can exchange the endpoints; this is done, for instance, by the permutation $(ikjl)$. The group $\big\langle(ij),(kl),(ikjl)\big\rangle$ is the dihedral group of a square with corners labelled, in circular order, $i,k,j,l$; it is generated by $(ij)$ and $(ikjl)$. Due to Proposition \ref{P:swautcut} and the fact that no cut in $P$ has fewer than three edges, there are no switching automorphisms of $P_1$ other than its automorphisms. \subsubsection{Signatures of type $P_{2,d}$.} \label{typeP2d} We write $P_{\ef}$ for $P_{2,d}$ with negative edges $e$ and $f$. An automorphism of $P_{\ef}$ preserves $\ef$. In $P_{2,3}$ there is a unique third edge $g$ at distance $3$ from $e$ and $f$, so that $e$, $f$, $g$ form a matching $M_{3(m)}$. As any edge in $M_{3(m)}$ determines the whole matching, an automorphism of $P$ that stabilizes $\ef$ must fix $g$, and vice versa. Thus, $\Aut P_{\ef} = \Aut P_{\{g\}}$. In $P_{2,2} = P_{\ef}$, $e$ and $f$ are at distance 2 in a hexagon $H_{lm}$.
The hexagon is uniquely determined by $\ef$. There is a unique edge $g$ at distance 2 from $e$ and $f$ in $H$. Let $e=v_{il}v_{jm}$, $f=v_{kl}v_{im}$, and $g=v_{jl}v_{km}$. Since an automorphism $\alpha$ of $P_{\ef}$ preserves distance, the adjacent vertices $v_{jm}, v_{kl}$ of $E^-$ are either fixed or interchanged, and the remaining vertices $v_{il}, v_{im}$ are also fixed or interchanged. This implies that $i$ is fixed under $\alpha$, so $\alpha$, if not the identity, transposes $l$ and $m$, and consequently $\alpha = \id$ or $(jk)(lm)$. Hence, $\Aut P_{\ef} = \big\langle(jk)(lm)\big\rangle \cong \fZ_2$, the cyclic group of order 2. Now let us examine possible switching automorphisms $\zeta_X\gamma$ of $P_{\ef} = P_{2,d}$ for $d=2,3$. By Lemma \ref{L:cutswitch}, a nontrivial switching set $X$ requires $|\del X| = 4$ and $P_{\ef} = P_{2,2}$. It follows that a nontrivial switching of $P_{2,3}$ cannot be isomorphic to $P_{2,3}$, so $\SwAut P_{2,3} = \{\bareps\}\times\Aut P_{2,3}$. There is a nontrivial switching by $X = \{v_{jm},v_{kl}\}$ forming new negative edges $e' = v_{jm}v_{ik}$ and $f' = v_{kl}v_{ij}$, so $\gamma$ must fix $i$ and transpose either $j,l$ and $k,m$ or else $j,m$ and $k,l$. Thus, $\gamma = (jl)(km)$ or $(jm)(kl)$. We conclude that $$ \SwAut P_{\ef} = \{ \bareps\,\id,\ \bareps(jk)(lm),\ \zeta_{\{{jm},{kl}\}}(jl)(km),\ \zeta_{\{{jm},{kl}\}}(jm)(kl) \}. $$ \subsubsection{Signatures of type $P_{3,d}$.} \label{typeP3d} The next groups are those of $P_{3,d} = P_{\efg}$ for $d=2,3$. For each distance $d$ choose the same negative edges $e,f,g$ as in the previous analyses of $P_{2,d}$. In $P_{3,2}$ the negative edges lie in the hexagon $H = H_{lm} = P \setm N[v_{lm}]$. In $P_{3,3}$ the negative edges are $e=v_{ij}v_{kl},\ f=v_{ik}v_{jl},\ g=v_{il}v_{jk}$, so $E^- = M_{3(m)}$. We begin with the automorphism groups. To determine $\Aut P_{3,2}$, note that the hexagon containing $e,f,g$ is $H_v = P \setm N[v]$ for $v=v_{lm}$.
An automorphism $\alpha$ of $P_{\efg}$ must fix $v$ and thus must fix or exchange $l$ and $m$. It can also permute the other indices $i,j,k$. Suppose $\alpha$ fixes $l$ and $m$. As the vertices of $H_v$, in order, are $v_{li},v_{mk},v_{lj},v_{im},v_{lk},v_{jm}$, with vertex indices alternating between $l$ and $m$, and as $\alpha$ must preserve the set $\efg$, it must rotate $H_v$ by a multiple of one-third of a full rotation. That means it permutes $i,j,k$ cyclically, so it is a power of $(ijk)$. Now suppose $\alpha$ exchanges $l$ with $m$. Then it reverses the direction of $H_v$, so in order to leave $\efg$ invariant it must fix one of $e,f,g$ and one of $i,j,k$; thus, $\alpha = (ij)(lm)$, $(ik)(lm)$, or $(jk)(lm)$. The conclusion is that $\alpha$ is an even permutation of $\5$ and is an element of $\fS_{\{l,m\}} \times \fS_{\{i,j,k\}}$. Thus, $\Aut P_{\efg} = (\fS_{\{l,m\}}\times\fS_{\{i,j,k\}})^+$, the superscript $+$ denoting even permutations only. As the factor $(lm)$ is predictable by evenness given the $\fS_{\{i,j,k\}}$ part of an automorphism, $\Aut P_{3,2} \cong \fS_3$. The automorphism group of $P_{3,3}$ is determined by the fact that the negative edge set $\efg = M_{3(m)}$. An automorphism permutes $e,f,g$, whence it permutes $i,j,k,l$ arbitrarily and fixes $m$. Thus, $\Aut P_{\efg} = \fS_{\{i,j,k,l\}} \cong \fS_4$. Now we examine the switching automorphism groups. We assume $P_{3,d} = P_{\efg}$ switches by $X$ to $P_{\{e',f',g'\}}$. Lemma \ref{L:cutswitch} presents three cases to consider. \emph{In Case (a)}, $d=2$ so the three negative edges lie in the hexagon $H_v$. As switching changes two edges from negative to positive, this resembles the case of $P_{2,2}$, but now there are three possible switching sets $X$, namely $X = \{w,x\}$ for each positive edge $wx$ in $H_v$. \begin{figure}\label{F:sw2} \end{figure} Switching $X$ gives a $P_{3,2}$ with negative edge set $\{e',f',g'\} \subseteq H_u$. 
The vertex $u$ can be described in terms of the 3-edge path in $H_v$ centered upon $wx$: there is a unique pentagon containing this path, and $u$ is its one vertex not in $H_v$. It follows that each different edge $wx$ yields a different principal hexagon after switching. Now suppose $X = \{v_{jm},v_{kl}\}$; then $u = v_{jk}$ and $P_{\efg}^X$ is isomorphic to $P_{\efg}$ by the even permutation $\gamma_X:=(jm)(kl)$. Similarly, each of the other two switching sets $X$ gives $P_{\efg}^X$ which is isomorphic to $P_{\efg}$ by an even permutation. It follows from Proposition \ref{P:cosetsw} that each different $\zeta_X\gamma_X$ belongs to a different left coset of $\Aut P_{\efg}$ in $\SwAut P_{\efg}$. Thus we have three cosets besides $\Aut P_{\efg}$ itself. The three coset representatives are a single orbit of the action of $\Aut P_{\efg}$ on $\SwAut P_{\efg}$. To prove this we may point to symmetry or we may compute the action on a coset representative $\bar\zeta_X\gamma_X$, or rather on the switching permutation $\zeta_X\gamma_X$. The argument from symmetry is that each switching automorphism is obtained from one of them, say $\bar\zeta_{jm,kl} (jm)(kl)$, by rotating Figure \ref{F:sw2} through $120^\circ$ once or twice. The rotation is carried out by the permutation $(kji)$. As for a double transposition, say $(jk)(lm) \in \Aut P_{\efg}$, applying it reflects the figure across a line parallel to $v_{jk}v_{lm}$ and therefore does not change the switching automorphism $\bar\zeta_{jm,kl} (jm)(kl)$; the other double transpositions similarly fix the other switching automorphisms. For the computational proof, first, the action of powers of $(ijk)$: \begin{equation} \begin{aligned}{} [ \zeta_{jm,kl} (jm)(kl) ] ^{(ijk)} &= \zeta_{km,il} (km)(il), \\ [ \zeta_{jm,kl} (jm)(kl) ]^{(kji)} &= \zeta_{im,jl} (im)(jl). \label{E:P32aorbit} \end{aligned} \end{equation} This shows the chosen representatives are in one orbit. 
Next, the action of $(jk)(lm)$: \begin{align*} [\zeta_{jm,kl}(jm)(kl)]^{(jk)(lm)} &= \zeta_{jm,kl}(jm)(kl). \end{align*} As $\Aut P_{\efg} = \langle(ijk)\rangle \cup (jk)(lm) \langle(ijk)\rangle$, this proves there are no other switching permutations in the orbit. The computational proof gives the slightly stronger result that the switching permutations, not only the switching automorphisms, are a whole orbit of $\Aut P_{\efg}$. \emph{In Case (b)}, $d=2$ and $P{:}X$ is a path $wxyz$. Again $e,f,g$ are alternating edges on $H_v$. Given $H_v$, we need to know which sets $X = \{w,x,y,z\}$ are possible. To determine that, we reverse the question; we fix $X$ and ask which hexagons $H_v$ are possible. (There are 60 paths of length 3, but as $\Aut P$ is transitive on them, there is only one type.) Since $e,f,g \in \del X$, it must be true that $|H_v \cap \del X| = 3$. One finds by checking every vertex of $P$ that only two hexagons $H_v$ have this property; the vertices $v$ are the neighbors of $x$ and $y$ in $X^c$. By choice of notation, we may assume $v$ is adjacent to $y$. \begin{figure} \caption{Switching the four vertices of a path in $P_{3,2}$ for Case (b). Left: $P_{\efg}$, before switching $X = \{w,x,y,z\}$. The principal hexagon $H_v$ is the outer hexagon. Heavy lines indicate the cut $\del X$. Right: $P_{\efg}^X$, after switching. Heavy lines indicate the new principal hexagon $H_u$ and dotted lines mark the new negative edges.} \label{F:sw4path} \end{figure} Now we can describe the relationship between the path $wxyz$ and $P_{\efg}$. The path begins with the positive edge $wx$ of $H_{v}$, which is followed by $y \notin V(H_v)$, and then ends at $z$ in $H_v$. The original negative edges $e,f,g$ are the alternating triple in $H_v$ that excludes $wx$. The vertex $y$ is the unique neighbor of $x$ that is not on $H_v$, and then $z$ is the neighbor of $y$ other than $x$ and $v$. Thus, there are six possible paths for $wxyz$. Once we choose $w$ and $x$, the rest is determined.
After switching $X=\{w,x,y,z\}$ we again have three negative edges on a hexagon; this hexagon is $H_u$ where $u$ is the neighbor of $x$ along $H_v$ other than $w$. $H_v \cap H_u$ is the 2-edge path from $w$ to $z$ in $H_v$; the first edge is one of $e,f,g$ and hence positive (after switching), while the next, call it $e'$, is negative. The negative edge set of $P_{\efg}^X$ consists of $e'$ and the edges $f', g'$ at distance 2 from it along $H_u$. Thus, $P_{\efg}$ switches to $P_{\{e',f',g'\}}$. To find a permutation $\alpha$ by which $P_{\efg}^X$ is isomorphic to $P_{\efg}$, we need only examine one case, because each path $wxyz$ maps to any other, $w'x'y'z'$, by the unique automorphism of $P_{\efg}$ which carries $(w,x)$ to $(w',x')$. Let $e=v_{il}v_{jm}$, $f=v_{kl}v_{im}$, and $g=v_{jl}v_{km}$, so $v = v_{lm}$, and let the path $wxyz = v_{jm}v_{kl}v_{ij}v_{km}$. Then $u = v_{im}$. The even permutation $(iml)$ is one choice for the desired isomorphism. The switching automorphism of $P_{\efg}$ is $\bar\zeta_{\{{jm},{kl},{ij},{mk}\}} (iml)$. (In the notation of Section \ref{cosetrep} this is $\bar\rho_{\{{jm},{kl},{ij},{mk}\}}$.) These six switching automorphisms are another orbit of $\Aut P_{\efg}$ acting on $\SwAut P_{\efg}$. The proof by symmetry is contained in the observation that the six paths are automorphic under the automorphism group. We show the computational proof in order to demonstrate that the switching permutations are also a single orbit of $\Aut P_{\efg}$.
We compute the nontrivial actions on one of the switching permutations: \begin{equation} \begin{aligned}{} [ \zeta_{\{{jm},{kl},{ij},{mk}\}} (iml) ]^{(ijk)} &= \zeta_{\{{km},{il},{jk},{mi}\}} (jml), \\ [ \zeta_{\{{jm},{kl},{ij},{mk}\}} (iml) ]^{(kji)} &= \zeta_{\{{im},{jl},{ki},{mj}\}} (kml), \\ [ \zeta_{\{{jm},{kl},{ij},{mk}\}} (iml) ]^{(jk)(lm)} &= \zeta_{\{{kl},{jm},{ik},{lj}\}} (mil), \\ [ \zeta_{\{{jm},{kl},{ij},{mk}\}} (iml) ]^{(ij)(lm)} &= \zeta_{\{{il},{km},{ji},{lk}\}} (mjl), \\ [ \zeta_{\{{jm},{kl},{ij},{mk}\}} (iml) ]^{(ik)(lm)} &= \zeta_{\{{jl},{im},{kj},{li}\}} (mkl). \label{E:P32borbit} \end{aligned} \end{equation} This displays all six switching permutations of $P_{\efg}$. \emph{In Case (c)}, $X = N[v]$, $\efg = M_{3(m)} := \big\{v_{ij}v_{kl}: \{i,j,k,l\} = \5 \setm m \big\}$, and $\Aut P_{\efg} = \fS_{\5\setm m}$. The complement of $V(M_{3(m)})$ is $X_m$. Any vertex in $X_m$ can be taken as $v$; choosing $v = v_{im}$, $\zeta_{N[v_{im}]} (im)$ is a switching automorphism of $P_{3,3}$. This is the only way to switch $P_{\efg}$ for a switching automorphism, so \begin{equation} \SwAut P_{3,3} = \fS_{\5\setm m} \ \cup \bigcup_{i \in \5 \setm m} \zeta_{N[v_{im}]} (im) \fS_{\5\setm m}. \label{E:P33setgen} \end{equation} Therefore, we may rewrite Equation \eqref{E:P33setgen} as $$ \SwAut P_{3,3} \ = \bigcup_{\alpha \in \fS_{\5\setm m}} [\zeta_{N[v_{im}]} (im)]^\alpha \ \fS_{\5\setm m}, $$ where $i \neq m$ is fixed. \begin{figure} \caption{Switching the closed neighborhood $X=N[v]$ of a totally positive vertex in $P_{3,3}$ for Case (c). The original negative edges $e,f,g$ are dashed; the new ones after switching, $e',f',g'$, are dotted. The heavy lines show the cut $\del X$.} \label{F:sw4star} \end{figure} Much as with $P_{3,2}$, the switching permutations and switching automorphisms of $P_{3,3}$ are whole orbits of the actions of $\Aut P_{\efg}$ on switching permutations and switching automorphisms of $P$.
This is obvious both pictorially, as $\Aut P_{\efg}$ permutes $\5 \setm \{m\}$ and therefore $X_m$, and computationally, as $N[v_{im}]^\alpha = N[v_{i^\alpha m}]$ so $[ \zeta_{N[v_{im}]} (im) ]^\alpha = \zeta_{N[v_{i^\alpha m}]} (i^\alpha m)$. \subsubsection{The structure of $\SwAut P_{3,2}$.} \label{structureP32} A switching automorphism of $P_{3,2}$, if not an automorphism, falls under Case (a) or Case (b). Thus, \begin{equation} \begin{aligned} \SwAut P_{3,2} = \Aut P_{3,2} \ &\cup \bigcup_{\lambda \in \langle(ijk)\rangle} [\zeta_{\{jm,kl\}} (jm)(kl)]^\lambda \Aut P_{3,2} \\ &\cup \bigcup_{\mu \in \Aut P_{3,2}} [\zeta_{\{{jm},{kl},{ij},{mk}\}} (il)(jk)]^\mu \Aut P_{3,2} . \end{aligned} \label{E:P32setgen} \end{equation} That tells us the set $\SwAut P_{3,2}$ but to know the group we need the rules for multiplication and for how to determine, for each permutation in $p_A(\SwAut P_{3,2})$, which switching must be done before the permutation to get a switching automorphism. The description is simplified if we fix the $P_{3,2}$ by choosing a specific negative edge set. Our choice for $E^-$ is $\{e=v_{14}v_{25},\ f=v_{34}v_{15},\ g=v_{24}v_{35}\} \subseteq H_{45}$. (That is, we are setting $i,j,k = 1,2,3$ and $l,m = 4,5$.) To describe the group we fix two switching sets, $$ W := \{v_{15},v_{24}\} \quad \text{ and } \quad Z := \{v_{34},v_{25},v_{13},v_{24}\}, $$ and corresponding switching permutations, $$ \upsilon_W := \zeta_W (15)(24) \quad \text{ and } \quad \omega_Z := \zeta_Z (145). $$ ($W$ is a switching set of the form $X = \{v_{im},v_{jl}\}$ from Case (a), and $Z$ is one of the form $X = \{v_{jm}, v_{kl}, v_{ij}, v_{km}\}$ from Case (b), after a suitable reassignment of the letters $i,j,k,l,m$. The permutation part is what was called $\gamma_W$ and $\gamma_Z$; as before, it is partly arbitrary since it is determined only up to right multiplication by elements of $\Aut P_{3,2}$.)
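These choices can be checked by a routine direct computation. For instance, switching $W$ negates the cut $$ \del W = \{v_{34}v_{15},\ v_{15}v_{23},\ v_{24}v_{35},\ v_{13}v_{24}\}, $$ which contains the negative edges $f$ and $g$; the switched signature therefore has negative edge set $\{v_{14}v_{25},\ v_{15}v_{23},\ v_{13}v_{24}\}$, and $(15)(24)$ carries this set back to $E^-$: $$ v_{14}v_{25} \mapsto v_{25}v_{14} = e, \qquad v_{15}v_{23} \mapsto v_{15}v_{34} = f, \qquad v_{13}v_{24} \mapsto v_{35}v_{24} = g. $$ A like computation verifies $\omega_Z$.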
For the systems of representatives in Proposition \ref{P:genmult} we choose $$ R := \{ \varepsilon\,\id \} \cup \{ \upsilon_W^\lambda : \lambda \in \langle(123)\rangle \} \cup \{ \omega_Z^\mu : \mu\in\Aut P_{3,2} \} , $$ which we may do because the coset representatives constitute three orbits of $\Aut P_{3,2}$ as shown in Section \ref{typeP3d}, and $\bar R := \{\bar\zeta_X\gamma_X : \zeta_X\gamma_X \in R\}$. As in Cases (a) and (b), $W^{(12)(45)} = W$ and $\rho_X^\mu = \rho_{X^\mu}$ for any $\rho_X = \zeta_X\gamma_X \in R$ and $\mu \in \Aut P_{3,2}$, so $R$ is closed under the action of $\Aut P_{3,2}$. The sets $W^\mu$ and $Z^\mu$ are found in Table \ref{Tb:P32WX}. \begin{table}[hbdp] \begin{center} \begin{tabular}{|c||c|c||c|c|} \hline \vstrut{15pt} $\mu$ &$W^\mu$ &$\upsilon_{W^\mu}$ &$Z^\mu$ &$\omega_{Z^\mu}$ \\[6pt] \hline\hline \vstrut{18pt} $\id$ & $W = \{v_{15},v_{24}\}$ &$\zeta_{W} (15)(24)$ &$Z = \{v_{34},v_{25},v_{13},v_{24}\}$ &$\zeta_{Z} (145)$ \\[6pt] \hline \vstrut{18pt} $(123)$ &$\{v_{25},v_{34}\}$ &$\zeta_{W}^{(123)} (25)(34)$ &$\{v_{14},v_{35},v_{12},v_{34}\}$ &$\zeta_{Z}^{(123)} (245)$ \\[6pt] \hline \vstrut{18pt} $(321)$ &$\{v_{35},v_{14}\}$ &$\zeta_{W}^{(321)} (35)(14)$ &$\{v_{24},v_{15},v_{23},v_{14}\}$ &$\zeta_{Z}^{(321)} (345)$ \\[6pt] \hline\hline \vstrut{18pt} $(12)(45)$ &$\{v_{15},v_{24}\}$ &$\upsilon_{W}$ &$\{v_{35},v_{14},v_{23},v_{15}\}$ &$\zeta_{Z}^{(12)(45)} (542)$ \\[6pt] \hline \vstrut{18pt} $(23)(45)$ &$\{v_{14},v_{35}\}$ &$\upsilon_{W}^{(321)}$ &$\{v_{25},v_{34},v_{12},v_{35}\}$ &$\zeta_{Z}^{(23)(45)} (541)$ \\[6pt] \hline \vstrut{18pt} $(13)(45)$ &$\{v_{34},v_{25}\}$ &$\upsilon_{W}^{(123)}$ &$\{v_{15},v_{24},v_{13},v_{25}\}$ &$\zeta_{Z}^{(13)(45)} (543)$ \\[6pt] \hline \end{tabular} \end{center} \caption{The transforms $W^\mu$ and $Z^\mu$ and associated switching automorphisms, for $\mu \in \Aut P_{3,2}$.
Recall that $\upsilon_W^{\mu}=\upsilon_{W^\mu}$ and $\omega_Z^{\mu}=\omega_{Z^\mu}$.} \label{Tb:P32WX} \end{table} The switching set $X$ associated with $\bar\rho \in \bar R$ is uniquely determined if we insist that $|X| \leq 4$. (That is how we chose $R$.) Thus, we are representing $\SwAut P_{3,2}$ as the disjoint union of the left $\bar R$-cosets of $\Aut P_{3,2}$: \begin{equation} \SwAut P_{3,2} = \bigcup_{\zeta_X\gamma_X \in R} \bar\zeta_X\gamma_X \Aut P_{3,2} . \label{E:P32set} \end{equation} Note again that $\zeta_X$ and $-\zeta_X = \zeta_{X^c}$ are equally valid representatives of $\bar\zeta_X$; this fact helps to calculate and interpret the multiplication tables we provide for $\SwAut P_{3,2}$. The product $(\bar\zeta_X\gamma_X)\alpha \cdot (\bar\zeta_Y\gamma_Y)\beta$ of any two switching automorphisms is completely specified by Proposition \ref{P:genmult}. To find the product follow this procedure: \begin{enumerate} \item Set $Y' = Y^{\alpha\inv}$ and $\zeta_{Y'} = \zeta_Y^{\alpha\inv}$. Then $\zeta_{Y'}\gamma_{Y'}$ is an element of $R$ because $R$ is closed under the action of $\Aut P_{3,2}$. \item Calculate $U = X \oplus Y'^{\gamma_X\inv}$ or $(X \oplus Y'^{\gamma_X\inv})^c$, the former if $|X \oplus Y'^{\gamma_X\inv}| \leq 4$ and the latter otherwise. (The transform by $\gamma_X\inv$ comes from moving $\zeta_{Y'}$ leftward past $\gamma_X$.) \item Find $\zeta_U\gamma_U \in R$ to determine $\gamma_U$. \item The product is $(\bar\zeta_{U}\gamma_U) (\gamma_U\inv \gamma_X\gamma_{Y'}) \cdot \alpha\beta$, which lies in the coset $(\bar\zeta_{U}\gamma_U) \Aut P_{3,2}$. \item The product $(\zeta_X\gamma_X)\alpha \cdot (\zeta_Y\gamma_Y)\beta$ in $\signs \times \Aut P$, if desired, is $\pm (\zeta_{U}\gamma_U) (\gamma_U\inv \gamma_X\gamma_{Y'}) \cdot \alpha\beta$, with the positive sign if $|X \oplus Y'^{\gamma_X\inv}| \leq 4$ and the negative sign if not. \end{enumerate} Steps (2) and (3) can be combined by using Tables \ref{Tb:P32mult-upsi-omega}--\ref{Tb:P32mult-omega}, which give the products of elements $\zeta_X\gamma_X, \ \zeta_{Y'}\gamma_{Y'} \in R$.
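Here is one sample product, worked out by the same moves as the examples following the tables; it reproduces an entry of Table \ref{Tb:P32mult-upsi-omega}: \begin{align*} \upsilon_W \omega_Z &= \zeta_W (15)(24) \cdot \zeta_Z (145) = \zeta_W \zeta_{Z^{[(15)(24)]\inv}} (15)(24)(145) \\ &= \zeta_{\{{15},{24}\} \oplus \{{23},{14},{35},{24}\}} (542) = \zeta_{Z^{(12)(45)}} (542) = \omega_Z^{(12)(45)}. \end{align*}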
\begin{table}[ht] \begin{center} \begin{tabular}{|c||c|c|c|} \hline \vstrut{15pt} $\cdot$ &\hbox to 2cm{ $\upsilon_W$ } &\hbox to 2cm{ $\upsilon_W^{(123)}$ } &\hbox to 2cm{ $\upsilon_W^{(321)}$ } \\[6pt] \hline\hline \vstrut{18pt} $\upsilon_W$ &$\bareps\,\id$ &$\omega_Z^{(321)} (123)$ &$\omega_Z^{(13)(45)} (321)$ \\[6pt] \hline \vstrut{18pt} $\upsilon_W^{(123)}$ &$\omega_Z^{(12)(45)} (321)$ &$\bareps\,\id$ &$\omega_Z (123)$ \\[6pt] \hline \vstrut{18pt} $\upsilon_W^{(321)}$ &$\omega_Z^{(123)} (123)$ &$\omega_Z^{(23)(45)} (321)$ &$\bareps\,\id$ \\[6pt] \hline \end{tabular} \begin{tabular}{|c||c|c|c||c|c|c|} \hline \vstrut{15pt} $\cdot$ &$\omega_Z$ &$\omega_Z^{(123)}$ &$\omega_Z^{(321)}$ \\[6pt] \hline\hline \vstrut{18pt} $\upsilon_W$ &$\omega_Z^{(12)(45)}$ &$\omega_Z^{(123)} (12)(45)$ &$\upsilon_W^{(123)} (321)$ \\[6pt] \hline \vstrut{18pt} $\upsilon_W^{(123)}$ &$\upsilon_W^{(321)} (321)$ &$\omega_Z^{(13)(45)}$ &$\omega_Z^{(321)} (23)(45)$ \\[6pt] \hline \vstrut{18pt} $\upsilon_W^{(321)}$ &$\omega_Z (13)(45)$ &$\upsilon_W (321)$ &$\omega_Z^{(13)(45)}$ \\[6pt] \hline \end{tabular} \begin{tabular}{|c||c|c|c|} \hline \vstrut{15pt} $\cdot$ &$\omega_Z^{(12)(45)}$ &$\omega_Z^{(23)(45)}$ &$\omega_Z^{(13)(45)}$ \\[6pt] \hline\hline \vstrut{18pt} $\upsilon_W$ &$\omega_Z$ &$-\omega_Z^{(23)(45)} (12)(45)$ &$\upsilon_W^{(23)(45)} (123)$ \\[6pt] \hline \vstrut{18pt} $\upsilon_W^{(123)}$ &$-\omega_Z^{(12)(45)} (23)(45)$ &$\upsilon_W (123)$ &$\omega_Z$ \\[6pt] \hline \vstrut{18pt} $\upsilon_W^{(321)}$ &$\upsilon_W^{(13)(45)} (123)$ &$\omega_Z^{(321)}$ &$-\omega_Z^{(13)(45)} (13)(45)$ \\[6pt] \hline \end{tabular} \end{center} \caption{The multiplication table of elements of $\signs \times \Aut P$ that represent coset representatives of the second kind times the second and third kinds in $\SwAut P_{3,2}$.} \label{Tb:P32mult-upsi-omega} \end{table} \begin{table}[hbt] \begin{center} \begin{tabular}{|c||c|c|c||c|c|c|} \hline \vstrut{15pt} $\cdot$ &$\upsilon_W$ &$\upsilon_W^{(123)}$ 
&$\upsilon_W^{(321)}$ \\[6pt] \hline\hline \vstrut{18pt} $\omega_Z$ &$-\omega_Z^{(12)(45)} (12)(45)$ &$\upsilon_W^{(123)} (321)$ &$\omega_Z^{(13)(45)}$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(123)}$ &$\omega_Z^{(23)(45)}$ &$-\omega_Z^{(13)(45)} (23)(45)$ &$\upsilon_W^{(321)} (321)$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(321)}$ &$\upsilon_W (321)$ &$\omega_Z^{(12)(45)}$ &$-\omega_Z^{(23)(45)} (13)(45)$ \\[6pt] \hline\hline \vstrut{18pt} $\omega_Z^{(12)(45)}$ &$-\omega_Z (12)(45)$ &$\omega_Z^{(321)}$ &$\upsilon_W^{(321)} (123)$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(23)(45)}$ &$\omega_Z^{(123)}$ &$\upsilon_W^{(123)} (123)$ &$-\omega_Z^{(321)} (13)(45)$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(13)(45)}$ &$\upsilon_W (123)$ &$-\omega_Z^{(123)} (23)(45)$ &$\omega_Z$ \\[6pt] \hline \end{tabular} \end{center} \caption{The multiplication table of elements of $\signs \times \Aut P$ that represent coset representatives of the third kind times the second kind in $\SwAut P_{3,2}$.} \label{Tb:P32mult-omega-upsi} \end{table} \begin{table}[hbt] \begin{center} \begin{tabular}{|c||c|c|c||c|c|c|} \hline \vstrut{15pt} $\cdot$ &$\omega_Z$ &$\omega_Z^{(123)}$ &$\omega_Z^{(321)}$ \\[6pt] \hline\hline \vstrut{18pt} $\omega_Z$ &$\omega_Z^{(23)(45)}$ &$\upsilon_W$ &$-\upsilon_W^{(321)}(13)(45)$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(123)}$ &$-\upsilon_W(12)(45)$ &$\omega_Z^{(12)(45)}$ &$\upsilon_W^{(123)}$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(321)}$ &$\upsilon_W^{(321)}$ &$-\upsilon_W^{(123)}(23)(45)$ &$\omega_Z^{(13)(45)}$ \\[6pt] \hline\hline \vstrut{18pt} $\omega_Z^{(12)(45)}$ &$-\omega_Z^{(23)(45)}(12)(45)$ &$\bareps\,\id$ &$-\omega_Z^{(13)(45)}(23)(45)$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(23)(45)}$ &$\bareps\,\id$ &$\omega_Z^{(12)(45)}(12)(45)$ &$\omega_Z^{(13)(45)}(13)(45)$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(13)(45)}$ &$-\omega_Z^{(23)(45)}(13)(45)$ &$-\omega_Z^{(12)(45)}(23)(45)$ &$\bareps\,\id$ \\[6pt] \hline \end{tabular} \begin{tabular}{|c||c|c|c||c|c|c|} 
\hline \vstrut{15pt} $\cdot$ &$\omega_Z^{(12)(45)}$ &$\omega_Z^{(23)(45)}$ &$\omega_Z^{(13)(45)}$ \\[6pt] \hline\hline \vstrut{18pt} $\omega_Z$ &$-\omega_Z^{(123)}(12)(45)$ &$\bareps\,\id$ &$-\omega_Z^{(321)}(13)(45)$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(123)}$ &$\bareps\,\id$ &$-\omega_Z(12)(45)$ &$-\omega_Z^{(321)}(23)(45)$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(321)}$ &$-\omega_Z^{(123)}(23)(45)$ &$-\omega_Z(23)(45)$ &$\bareps\,\id$ \\[6pt] \hline\hline \vstrut{18pt} $\omega_Z^{(12)(45)}$ &$\omega_Z^{(123)}$ &$\upsilon_W^{(12)(45)}$ &$-\upsilon_W^{(13)(45)}(23)(45)$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(23)(45)}$ &$-\upsilon_W^{(12)(45)}(12)(45)$ &$\omega_Z$ &$\upsilon_W^{(13)(45)}$ \\[6pt] \hline \vstrut{18pt} $\omega_Z^{(13)(45)}$ &$\upsilon_W^{(12)(45)}$ &$-\upsilon_W^{(23)(45)}(13)(45)$ &$\omega_Z^{(321)}$ \\[6pt] \hline \end{tabular} \end{center} \caption{The multiplication table of elements of $\signs \times \Aut P$ that represent coset representatives of the third kind in $\SwAut P_{3,2}$.} \label{Tb:P32mult-omega} \end{table} To illustrate the calculations involved in preparing the multiplication tables for $P_{3,2}$ we solve three representative cases. \begin{exam}\label{X:P32a} For the first two examples we compute the product of $\omega_Z$ times two other switching permutations in $R$. First, \begin{align*} \omega_Z \omega_Z &= \zeta_{\{{34},{25},{13},{24}\}}(145) \cdot \zeta_{\{{34},{25},{13},{24}\}}(145) \\ &= \zeta_{\{{34},{25},{13},{24}\}} \zeta_{\{{34},{25},{13},{24}\}^{(145)\inv}} (145) (145) \\ &= \zeta_{\{{34},{25},{13},{24}\}} \zeta_{\{{31},{24},{53},{21}\}} (541) = \zeta_{\{{34},{25},{13},{24}\} \oplus \{{31},{24},{53},{21}\}} (541) \\ &= \zeta_{\{{34},{25},{53},{21}\}} (541) = \zeta_{Z^{(23)(45)}} (541) = \omega_{Z}^{(23)(45)} . \end{align*} Next, a more complicated example involving complementation of the switching set and a residual permutation that is an automorphism of $P_{3,2}$. 
\begin{align*} \omega_Z \omega_Z^{(321)} &= \zeta_{\{{34},{25},{13},{24}\}}(145) \cdot \zeta_{\{{24},{15},{23},{14}\}}(345) \\ &= \zeta_{\{{34},{25},{13},{24}\}} \zeta_{\{{24},{15},{23},{14}\}^{(145)\inv}} (145)(345) \\ &= \zeta_{\{{34},{25},{13},{24}\}} \zeta_{\{{21},{54},{23},{51}\}} (15)(34) = \zeta_{\{{34},{25},{13},{24}\} \oplus \{{12},{45},{23},{15}\}} (15)(34) \\ &= -\zeta_{\{14,35\}} (15)(34) = [-\zeta_{\{14,35\}} (14)(35)] \cdot [(14)(35)]\inv(15)(34) \\ &= -\upsilon_{\{14,35\}} \cdot (35)(14)(15)(34) = -\upsilon_{W}^{(321)} (13)(45). \end{align*} \end{exam} \begin{exam}\label{X:P32b} We use Example \ref{X:P32a} to compute left multiplication by a transform of $\omega_Z$. \begin{align*} \omega_Z^{(321)} \omega_Z^{(123)} &= \big[ \omega_Z \omega_Z^{(123)(321)\inv} \big]^{(321)} = \big[ \omega_Z \omega_Z^{(321)} \big]^{(321)} , \intertext{which by Example \ref{X:P32a}} &= \big[ -\upsilon_{W}^{(321)} (13)(45) \big]^{(321)} = -\upsilon_{W}^{(123)} (32)(45) . \end{align*} \end{exam} By explicitly inverting the isomorphism $\barp_A : \SwAut P_{3,2} \to \fA_5: \bar\zeta\xi \mapsto \xi$ we can say, for any $\xi \in \fA_5$, exactly which switching function $\zeta_X\gamma_X$ should be associated with it. \begin{prop}\label{P:P32aut-sw} For a permutation $\xi \in \fA_5$, the corresponding switching automorphism of $P_{3,2}$ is $\bar\zeta_X\xi = \bar\zeta_X\gamma_X\alpha \in \bar\zeta_X\gamma_X \Aut P_{3,2}$ where $\zeta_X\gamma_X \in R$ is given by $$ \zeta_X\gamma_X = \begin{cases} \zeta_\eset \,\id = \eps\,\id &\text{ if } \{4,5\}^{\xi\inv} = \{4,5\}, \\ \zeta_{\{{34},{25},{13},{24}\}^\lambda}(i45) &\text{ if } \{4,5\}^{\xi\inv} = \{i,4\}, \text{ where } \lambda = (123)^{i-1}, \\ \zeta_{\{{25},{34},{12},{35}\}^\lambda}(54i) &\text{ if } \{4,5\}^{\xi\inv} = \{i,5\}, \text{ where } \lambda = (123)^{i-1}, \\ \zeta_{\{i5,j4\}} (i5)(j4) &\text{ if } \{4,5\}^{\xi\inv} = \{i,j\} \subset \3, \text{ where } j = i^{(123)} , \end{cases} $$ and $\alpha = \gamma_X\inv \xi$.
\end{prop} \begin{proof} The question is to find the vertex set $X$ such that $\gamma_X$, of $\zeta_X\gamma_X \in R$, satisfies $\gamma_X\alpha = \xi$ for some $\alpha \in \Aut P_{3,2}$; in other words, $\gamma_X\inv\xi = \alpha \in \Aut P_{3,2}$. By this definition of $\alpha$, $\{4,5\}^{\xi\inv} = \{4,5\}^{\alpha\inv\gamma_X\inv}$. But $\{4,5\}$ is invariant under $\Aut P_{3,2}$. Therefore, $\{4,5\}^{\xi\inv} = \{4,5\}^{\gamma_X\inv}$, which depends only on the coset of $\Aut P_{3,2}$ to which $\xi$ belongs. In other words, we need only consider the case $\alpha = \id$, which means we need only examine the permutations $\xi = \gamma_X$. Now the proposition follows easily by inspection of the ten cases of $\gamma_X$. A better method is to show that the proposition for one $X$ implies it for all $X^\lambda$. Replacing $X$ by $X^\lambda$, $$ \{4,5\}^{(\gamma_{X^\lambda})\inv} = \{4,5\}^{(\gamma_{X}^\lambda)\inv} = \{4,5\}^{\lambda\inv(\gamma_{X})\inv\lambda} = (\{4,5\}^{(\gamma_{X})\inv})^\lambda, $$ the last equality because $\lambda = (123)^p$ fixes $\{4,5\}$. Supposing that $\{4,5\}^{\gamma_X\inv} = \{4,5\}$, $\{i,4\}$, $\{i,5\}$, or $\{i,i^{(123)}\}$, we deduce that $\{4,5\}^{(\gamma_{X^\lambda})\inv} = \{4,5\}$, $\{i^\lambda,4\}$, $\{i^\lambda,5\}$, or $\{i^\lambda,(i^\lambda)^{(123)}\}$, respectively. That proves the claim for $\lambda = (123)^p$. Thus, we need only check the proposition's validity for $X = \eset, Z, Z^{(12)(45)}, \text{ and } W$, which is easier than checking all ten $X$'s. \end{proof} A natural question is whether $\SwAut P_{3,2}$ can be written as a product of subgroups, $\fH \cdot \Aut P_{3,2}$ where $\fH \cap \Aut P_{3,2} = \{\id\}$, or in other words whether there exists a system of left coset representatives that is a subgroup. It cannot, for it is known that no subgroup of $\fA_5$ of order 6 has such a complementary subgroup.
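As a quick illustration of Proposition \ref{P:P32aut-sw} (a sample evaluation, included for concreteness): for $\xi = (12345)$ we have $\{4,5\}^{\xi\inv} = \{3,4\}$, so the second case applies with $i = 3$ and $\lambda = (123)^{2} = (321)$, giving $$ \zeta_X\gamma_X = \zeta_{Z^{(321)}} (345) = \omega_Z^{(321)}, \qquad \alpha = (345)\inv(12345) = (123) \in \Aut P_{3,2}, $$ and indeed $\gamma_X\alpha = (345)(123) = (12345) = \xi$.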
\subsubsection{The structure of $\SwAut P_{3,3}$.} \label{structureP33} We know the set $\SwAut P_{3,3}$ but for a full description we need the rule of multiplication and the rule for inverting the projection $p_A$. It is easier to do this if we fix $m$, so we assume $m=5$. Then $E^- = M_{3(5)}$, $\Aut P_{3,3} = \fS_{\4}$, and \begin{equation} \SwAut P_{3,3} = \fS_{\4} \cup \bigcup_{j=1}^{4} \bar\zeta_{N[j5]} (j5) \fS_{\4}. \label{E:P33set} \end{equation} An element of the group has the form $\beta$ or $\bar\zeta_{N[j5]} (j5) \beta$ for $\beta \in \fS_{\4}$ and $j \in \4$. To compute a product refer to Table \ref{Tb:P33mult}. \begin{table}[hbt] \begin{center} \begin{tabular}{|c||c|c|} \hline \vstrut{15pt} Left $\cdot $Top &$\beta$ &$\bar\zeta_{N[j5]} (j5) \beta$\\[6pt] \hline\hline \vstrut{18pt} $\alpha$ &$\alpha\beta$ &$\bar\zeta_{N[j^{\alpha\inv}5]} (j^{\alpha\inv}5) \alpha\beta$\\[6pt] \hline \vstrut{30pt} $\bar\zeta_{N[i5]} (i5) \alpha$ &$\bar\zeta_{N[i5]} (i5) \alpha\beta$ &$\begin{cases} \alpha\beta &\text{ if } j = i^\alpha \\[5pt] \bar\zeta_{N[j^{\alpha\inv}5]} (i j^{\alpha\inv}5) \alpha\beta &\text{ if } j \neq i^\alpha \end{cases}$\\[20pt] \hline \end{tabular} \end{center} \caption{The multiplication table of $\SwAut P_{3,3}$ with negative edge set $M_{3(5)} = \{v_{12}v_{34},v_{13}v_{24},v_{14}v_{23}\}$. $i,j \in \4$ and $\alpha, \beta \in \fS_{\4}$.} \label{Tb:P33mult} \end{table} The second product column in Table \ref{Tb:P33mult} requires proof, for which the main step is this computation (done for a switching permutation $\zeta\alpha$ and consequently the same for the switching automorphism $\bar\zeta\alpha$): \begin{align*} \alpha \cdot \zeta_{N[j5]} (j5) &= \zeta_{N[j^{\alpha\inv}5]} \alpha (j5) = \zeta_{N[j^{\alpha\inv}5]} (j^{\alpha\inv}5) \cdot \alpha. \end{align*} That gives the first product. 
For the second we continue the calculation, first when $j=i^\alpha$: \begin{align*} \zeta_{N[i5]} (i5) \alpha \cdot \zeta_{N[i^\alpha5]} (i^\alpha5) &= \zeta_{N[i5]} (i5) \zeta_{N[i5]} (i5) \alpha = \zeta_{N[i5]} \zeta_{N[5i]} (i5) (i5) \alpha = \alpha ; \end{align*} second when $j \neq i^\alpha$: \begin{align*} \zeta_{N[i5]} (i5) \alpha \cdot \zeta_{N[j5]} (j5) &= \zeta_{N[i5]} (i5) \zeta_{N[j^{\alpha\inv}5]} (j^{\alpha\inv}5) \alpha \\ &= \zeta_{N[i5]} \zeta_{N[j^{\alpha\inv}i]} (i5) (j^{\alpha\inv}5) \alpha = -\zeta_{N[j^{\alpha\inv}5]} (i j^{\alpha\inv} 5)\alpha , \end{align*} because ${N[pq]} \oplus {N[qr]} ={N[pr]^c}$, whence $\zeta_{N[pq]} \zeta_{N[qr]} = \zeta_{N[pr]^c} = -\zeta_{N[pr]}$. Every permutation $\xi \in \fS_{\5}$ is the projection of a unique element $\bar\zeta_X\gamma_X\cdot\alpha \in \SwAut P_{3,3}$ belonging to the coset $\bar\zeta_X\gamma_X\Aut P_{3,3}$. The following formulas give $\zeta_X$, $\gamma_X$, and $\alpha$ in terms of $\xi$, thereby inverting $p_A$. Let $\zeta_X\gamma_X\alpha := p_A\inv(\xi)$. Then $\zeta_X\gamma_X$ identifies the coset of $\fS_{\4}$, and $\alpha$ identifies the element of $\fS_{\4}$ that gives $\xi$. \begin{align} (\zeta_X, \gamma_X, \alpha) &= \begin{cases} (\eps, \id, \xi) &\text{ if $5$ is fixed by } \xi, \\ (\zeta_{N[5^{\xi\inv} 5]}, (5^{\xi\inv} 5), (5^{\xi\inv} 5) \xi) &\text{ if $5$ is not fixed.} \end{cases} \end{align} (Note that $(5^{\xi\inv} 5)\xi$ in cycle form is $\xi$ with $5$ deleted from whichever cycle it is in. Also note that if we interpret $N[kk]$ as the empty set, so $\zeta_{N[kk]}$ is $\varepsilon$, and $(55)$ as the trivial cycle $(5)$, then the first line is subsumed in the second line.) \subsubsection{The end of the proof}\label{endautproof} That concludes the proof of Theorem \ref{T:aut}. \end{proof} \subsection{Orbits and copies}\label{orbit} There are two ways signed graphs $\Sigma$ and $\Sigma'$ based on the same graph $\Gamma$ can be isomorphic. 
They may have the same set of positive circles, which (by Lemma \ref{L:switching}) is the same as saying they are switching equivalent; then for many purposes they are essentially the same. The other possibility is that they belong to different switching equivalence classes; in other words, their positive circles are not the same ones even though they correspond under an automorphism of $\Gamma$. From the automorphism and switching automorphism groups we can deduce the number of signatures of $\Gamma$ that are isomorphic to $\Sigma$ and also the number that are switching inequivalent to $\Sigma$ and to each other, i.e., the number of switching equivalence classes of signatures isomorphic to $\Sigma$. There is a nice bonus to this: we get an interpretation of the part of $\Aut \Gamma$ that does not belong to $\barp_A(\SwAut\Sigma)$. Apply any automorphism $\gamma \in \Aut\Gamma$ to $\Sigma$. Then $\Sigma^\gamma \sim \Sigma$ if and only if $\gamma \in \barp_A(\SwAut\Sigma)$. That means $\Sigma^\gamma$ for $\gamma \notin \barp_A(\SwAut\Sigma)$, while isomorphic to $\Sigma$, belongs to a different switching equivalence class. A fine example is $\SwAut P_{3,2}$, whose projection is the alternating group $\fA_5$. Any single transposition changes $(P,\sigma) \cong P_{3,2}$ to an inequivalent $(P,\sigma')$, but there is one that is simplest. In the notation of Table \ref{Tb:autexact}, it is $(lm)$. This permutation preserves the hexagon $H_{lm}$ that contains $E^-$ while reversing the signs of the hexagon's edges. Whether there are such distinguished permutations to change one switching equivalence class of $P_1$, $P_{2,2}$, or $P_{2,3}$ to another is not known. The number of different isomorphic (but possibly switching equivalent) copies of a particular signature $\Sigma$ is the size of the orbit of $\Sigma$ under the action of $\Aut\Gamma$, which equals $|\!\Aut\Gamma| / |\!\Aut\Sigma|$.
The number of different copies that are not switching equivalent, i.e., the number of switching equivalence classes of signatures isomorphic to $\Sigma$, is $|\!\Aut\Gamma| / |\!\SwAut\Sigma|$, the size of the orbit of the switching equivalence class of $\Sigma$ under $\Aut\Gamma$. For instance, $|\!\Aut P_1| = |\!\SwAut P_1| = |\fD_4| = 8$; $|\!\Aut P| / |\!\Aut P_1| = |\!\Aut P| / |\!\SwAut P_1| = 5!/8 = 15$; and (obviously) there are $|E| = 15$ ways to have one negative edge, none of which is switching equivalent to any other. \begin{table}[htb] \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|} \hline \vstrut{15pt}$(P,\sigma)$ &$+P$, $-P$ &$P_1$, $-P_1$ &$P_{2,2}$, $-P_{2,2}$ &$P_{2,3}$, $-P_{2,3}$ &$P_{3,2}$, $-P_{3,2}$ &$P_{3,3}$, $-P_{3,3}$ \\[3pt] \hline \vstrut{15pt}\# copies &$1$ &$15$ &$60$ &$15$ &$20$ &$5$ \\[2pt] \hline \vstrut{15pt}\# [copies] &$1$ &$15$ &$30$ &$15$ &$2$ &$1$ \\[2pt] \hline \end{tabular} \end{center} \caption{The number of different signatures of $P$ that are isomorphic to each minimal signed Petersen graph and its negative (`copies'); and the number of switching equivalence classes of such signatures (`[copies]').} \label{Tb:count} \end{table} \section{Coloring}\label{col} A \emph{coloration} (in full, \emph{proper $k$-coloration}, where $k \geq 0$) of a signed graph is a function $\kappa: V \to \{0, \pm1, \pm2, \ldots, \pm k\}$ such that if $vw$ is an edge, then $\kappa(w) \neq \sigma(vw)\kappa(v)$. The \emph{chromatic number} $\chi(\Sigma)$ is the smallest $k$ such that there is a proper $k$-coloration of $\Sigma$. A signed graph has a second chromatic number, the \emph{zero-free chromatic number} $\chi^*(\Sigma)$; it is the smallest $k$ such that there is a proper $k$-coloration of $\Sigma$ that does not use the color $0$. As the color $0$ can be replaced by $+(k+1)$ to turn a coloration into a zero-free coloration, $\chi^*(\Sigma) = \chi(\Sigma)$ or $\chi(\Sigma) + 1$. The chromatic numbers pair with chromatic polynomials.
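These definitions are easy to make concrete by brute force. A minimal Python sketch (my own illustration, not from the text; vertex labels are arbitrary) computes both chromatic numbers of the all-negative triangle $-K_3$:

```python
from itertools import product

def is_proper(coloring, edges, sign):
    # kappa(w) != sigma(vw) * kappa(v) for every edge vw
    return all(coloring[w] != sign[(v, w)] * coloring[v] for v, w in edges)

def colorations(n, k, edges, sign, zero_free=False):
    # all proper k-colorations with signed colors {0, +-1, ..., +-k}
    colors = [c for c in range(-k, k + 1) if not (zero_free and c == 0)]
    return [kappa for kappa in product(colors, repeat=n)
            if is_proper(kappa, edges, sign)]

# The all-negative triangle -K3: every edge has sign -1.
edges = [(0, 1), (1, 2), (0, 2)]
sign = {e: -1 for e in edges}

# chi = smallest k admitting a proper k-coloration;
# chi* = smallest k admitting a zero-free one.
chi = next(k for k in range(4) if colorations(3, k, edges, sign))
chi_star = next(k for k in range(1, 4)
                if colorations(3, k, edges, sign, zero_free=True))
```

Here $k=0$ fails (the all-zero coloration violates every negative edge, since $0 = -0$), while with $k=1$ there are proper colorations, and even zero-free ones (color every vertex $+1$), so $\chi(-K_3) = \chi^*(-K_3) = 1$.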
The \emph{chromatic polynomial} of $\Sigma$ is the function $\chi_\Sigma(2k+1) :=$ the number of proper $k$-colorations, and the \emph{zero-free chromatic polynomial} is $\chi^*_\Sigma(2k) :=$ the number that are zero free. (One can prove these functions are monic polynomials of degree $|V|$ by any method that establishes the chromatic polynomial $\chi_\Gamma(y)$ of an ordinary graph; see \cite{SGC}. There is another connection: $\chi_\Gamma(y) = \chi_{+\Gamma}(y) = \chi^*_{+\Gamma}(y)$.) \begin{prop}\label{P:chromaticsw} The chromatic numbers and the chromatic polynomials of a signed graph are invariant under switching and isomorphism. \end{prop} \begin{proof} Isomorphism invariance is obvious. For switching invariance, consider a proper coloration $\kappa$. A switching function $\zeta$ acts on $\kappa$ by transforming it to $\kappa^\zeta(v) := \zeta(v)\kappa(v)$. The condition for a coloration to be proper, $\kappa(w) \neq \kappa(v)\sigma(vw)$, when multiplied by $\zeta(w)$, takes the form $$ \kappa^\zeta(w) = \kappa(w)\zeta(w) \neq \kappa(v)\sigma(vw)\zeta(w) = [\kappa(v)\zeta(v)] [\zeta(v)\sigma(vw)\zeta(w)] = \kappa^\zeta(v)\sigma^\zeta(vw). $$ Thus, $\kappa^\zeta$ is a proper coloration of $\Sigma^\zeta$ if and only if $\kappa$ is a proper coloration of $\Sigma$. This establishes a bijection between proper colorations of $\Sigma$ and of $\Sigma^\zeta$ and hence the proposition. \end{proof} \subsection{Chromatic numbers}\label{chronum} The chromatic numbers are weak invariants; they are nearly the same for all signatures of $P$. \begin{thm}\label{T:col} The chromatic and zero-free chromatic numbers of signed Petersen graphs are as in Table \ref{Tb:col}. 
\end{thm} \begin{table}[htb] \begin{center} \begin{tabular}{|r||c|c|c|c|c|c|} \hline $(P,\sigma)$ \vstrut{15pt}&\hbox to 2em{\,$+P$} &\hbox to 2em{\;\;$P_1$} &\hbox to 2em{\,$P_{2,2}$} &\hbox to 2em{\,$P_{2,3}$} &\hbox to 2em{\,$P_{3,2}$} &\hbox to 2em{\,$P_{3,3}$} \\[3pt] \hline \vstrut{15pt}$\chi(P,\sigma)$ &1 &1 &1 &1 &1 &1 \\ \vstrut{15pt}$\chi^*(P,\sigma)$ &2 &2 &2 &2 &2 &1 \\[2pt] \hline \end{tabular} \end{center} \caption{The chromatic numbers of signed Petersen graphs.} \label{Tb:col} \end{table} To find the chromatic numbers of any $(P,\sigma)$, switch it into one of the minimal forms and look it up in Table \ref{Tb:col}. Note that $+P \simeq -P_{3,3}$, $P_1 \simeq -P_{2,3}$, $P_{2,2} \simeq -P_{2,2}$, $P_{2,3} \simeq -P_1$, $P_{3,2} \simeq -P_{3,2}$, and $P_{3,3} \simeq -P$. We prepare for the proof of Theorem \ref{T:col} with definitions and a lemma. By a \emph{signed color} we mean $0$ or $+i$ or $-i$ for $i>0$. For consistency with the definition of chromatic numbers, when coloring a signed graph we call $\pm1$ a single \emph{unsigned color} and we do not count 0 as an unsigned color. Thus, the counting of unsigned colors on signed graphs is very different from that on unsigned graphs. We can color an unsigned graph with signed colors but each has to be counted separately; for example, $0,+1,-1$ are three colors when coloring an unsigned graph. Note that the endpoints of a negative edge may have the same signed color as long as that color is not 0. \emph{Contracting} a graph $\Gamma$ by an edge set $S$ means one shrinks each connected component of the spanning subgraph $(V,S)$ to a vertex. The contracted graph is written $\Gamma/S$. (Technically, a vertex $W$ of $\Gamma/S$ is a subset of $V$ consisting of the vertices of one component of $(V,S)$; they are the vertices that are coalesced into one by the shrinking.) The edges of $S$ are deleted. Another edge becomes a loop if its endpoints belong to the same component of $(V,S)$. 
We say that an original vertex that is a component of $(V,S)$ remains a vertex of $\Gamma/S$. Any other vertex of $\Gamma/S$ results from coalescing two or more original vertices; we say it \emph{results from contraction} to distinguish it from remaining original vertices. \begin{lem}\label{L:3sgdcolors} Let $\Sigma$ be a signed graph and let $m \geq 1$. \begin{enumerate}[{\rm(a)}] \item Suppose $\chi(|\Sigma|/E^-(\Sigma)) \leq 2m$. Then $\chi(\Sigma) \leq \chi^*(\Sigma) \leq m$. \label{L:3sgdcolors-m} \item Suppose $|\Sigma|/E^-(\Sigma)$ can be colored with the colors $0, \pm1, \ldots, \pm m$ in such a way that no vertex resulting from contraction gets the color $0$. Then $\chi(\Sigma) \leq m$ and $\chi^*(\Sigma) \leq m+1$. \label{L:3sgdcolors-m0} \item If $\chi(|\Sigma|/E^-(\Sigma)) \leq 2$ and $\Sigma$ has at least one edge, then $\chi(\Sigma) = \chi^*(\Sigma) = 1$. \label{L:3sgdcolors-2} \item If $\chi(|\Sigma|/E^-(\Sigma)) = 3$, then $\chi^*(\Sigma) = 2$. \label{L:3sgdcolors-3} \end{enumerate} \end{lem} \begin{proof} (\ref{L:3sgdcolors-m}) Color $|\Sigma|/E^-$ with the colors $\pm1, \ldots, \pm m$. This coloration can be pulled back to $\Sigma$, because the vertices that are contracted into $W$ can all be given the signed color of $W$. Thereby we see that $\Sigma$ needs at most $m$ unsigned colors, without using the color 0. (\ref{L:3sgdcolors-m0}) Color $|\Sigma|/E^-(\Sigma)$ as specified. This coloration can be pulled back to $\Sigma$, because the vertices that are contracted into $W$ can all be given the signed color of $W$. Thereby we see that $\Sigma$ needs at most $m$ unsigned colors if $0$ is permitted but it may need $m+1$ if $0$ is excluded. (\ref{L:3sgdcolors-2}) When the contraction is bipartite, assign color $+1$ to one color class and $-1$ to the other. 
Pulling this coloration back to $\Sigma$ yields a zero-free coloration, from which the chromatic numbers follow---as long as there is at least one edge in $\Sigma$ so one cannot color every vertex $0$. (\ref{L:3sgdcolors-3}) From (\ref{L:3sgdcolors-m}) we conclude that $\chi^*(\Sigma) \leq 2$. Trying to color $\Sigma$ using only $\pm1$, the endpoints of a negative edge must have the same signed color; therefore, such a coloration of $\Sigma$ can only be a pullback of a 2-coloration of $|\Sigma|/E^-$, which does not exist. Hence, there is no coloration of $\Sigma$ using only one unsigned color without $0$, and therefore $\chi^*(\Sigma) = 2$. \end{proof} \begin{proof}[Proof of Theorem \ref{T:col}] The chromatic number of $P$ itself is 3 \cite{TPG}. Thus, $+P$ needs exactly three signed colors, which may be $0,+1,-1$ if $0$ is used and otherwise must be, for example, $+1,-1,+2$. The only bipartite contraction is $P/E^-(P_{3,3})$; it can be colored with $+1,-1$, so $P_{3,3}$ can be colored using $\pm1$. (One can more easily see this by coloring the switching-isomorphic graph $-P$.) The other contractions need three or four signed colors. $P/E^-(P_{2,d})$ ($d=2,3$) has chromatic number 3, and since there are just two contracted vertices they can get nonzero signed colors; it follows that $P_{2,d}$ is colorable with signed colors $\pm1,0$, no contraction vertex being colored 0. Therefore, $\chi(P_{2,d}) = 1$ and $\chi^*(P_{2,d}) = 2$. The same reasoning holds for $P_1$, where there is one contracted vertex. The most complicated contraction is $P/E^-(P_{3,2})$. It has a triangle composed of contracted vertices, so its chromatic number is 3 but there does not exist a coloration with colors $\pm1,0$ in which no contracted vertex has color 0. However, one can color $P_{3,2}$ directly using $\pm1,0$. The hexagon that contains all negative edges should be colored alternately $+1$ and $0$. 
The vertices adjacent to the hexagon get color $-1$ and the remaining vertex is colored $0$ or $+1$. Thus, $\chi(P_{3,2}) = 1$ and $\chi^*(P_{3,2}) = 2$. \end{proof} \subsection{Coloration counts}\label{colcount} A more refined coloring invariant, the chromatic polynomial, does differ for different signatures of $P$, and most likely the zero-free chromatic polynomials differ as well. Since the polynomials have degree 10, computing them is too large a project for us. ($\chi_P(y)$ is known; perhaps it is possible to imitate the technique for calculating it in \cite[Additional Result 12c]{Biggs2}.) I propose that the number of proper $k$-colorations for any $k \geq 1$, and also the number of zero-free proper $k$-colorations for any $k \geq 2$, is a distinguishing invariant. We prove this for proper 1-colorations. \begin{thm}\label{P:chromatic} Any two signatures of the Petersen graph that are not switching isomorphic have different chromatic polynomials and in particular they have different numbers $\chi_{(P,\sigma)}(3)$ of proper $1$-colorations. \end{thm} \begin{conj}\label{Cj:chromatic} (a) Two signed Petersen graphs that are not switching isomorphic have different zero-free chromatic polynomials; in particular they have different numbers $\chi_{(P,\sigma)}^*(4)$ of zero-free proper $2$-colorations. (b) For any $\mu \geq 2$, the six values $\chi_{(P,\sigma)}(2\mu+1)$ are different for each switching isomorphism class of sign functions, and so are the six values $\chi^*_{(P,\sigma)}(2\mu)$. \end{conj} We will establish Theorem \ref{P:chromatic} by investigating $\chi_{(P,\sigma)}(3) - \chi_{+P}(3)$ with the aid of several general lemmas and formulas. Calculating the difference gives the actual value, because $$ \chi_{+P}(3) = \chi_P(3) = 120. $$ A proof depends on the fact that every 3-coloration of $P$ has the same form as every other, under graph automorphisms and permutations of the colors.
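The value $\chi_P(3) = 120$ can be confirmed by exhaustive search over all $3^{10}$ colorations. A Python sketch (my own check, not part of the proof; it uses the Kneser-graph model of $P$, with vertices the 2-subsets of $\{1,\dots,5\}$ and edges between disjoint pairs):

```python
from itertools import combinations, product

# Petersen graph as the Kneser graph K(5,2): vertices are 2-subsets of
# {1,...,5}, adjacent exactly when disjoint.
V = list(combinations(range(1, 6), 2))
E = [(u, v) for u, v in combinations(V, 2) if not set(u) & set(v)]

# For the all-positive signature +P, a proper 1-coloration with signed
# colors {-1, 0, +1} is the same as an ordinary proper 3-coloring of P.
count = 0
for kappa in product([-1, 0, 1], repeat=10):
    col = dict(zip(V, kappa))
    if all(col[u] != col[v] for u, v in E):
        count += 1
```

The loop over the $59049$ assignments finds exactly $120$ proper colorations.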
In a coloration define a \emph{head vertex} to be a vertex whose neighbors have only one color. Each proper $3$-coloration of $P$ has a unique head vertex; and there are $12$ such colorations for each head vertex. (To prove this, examine the two ways to 3-color $N[v]$ where $v$ is the head vertex. We omit the details.) To color with a given head vertex, one chooses its color, then chooses the neighborhood color, then colors the uncolored hexagon with the two non-neighborhood colors. One concludes that $\chi_P(3) = 120$. We begin preparing for the proof of Theorem \ref{P:chromatic} with the balanced expansion formula of \cite[Theorem 1.1]{CISG}, which states that for any signed graph $\Sigma = (\Gamma,\sigma)$, \begin{equation} \chi_{\Sigma}(2\mu+1) = \sum_{\substack{W\subseteq V:\\ W \text{ independent}}} \chi_{\Sigma\setm W}^*(2\mu). \label{E:balexp} \end{equation} (The proof is easy, by counting colorations according to the set $W$ with color $0$.) Applying this to the difference of $\Sigma$ and $+\Gamma$, \begin{align*} \chi_{\Sigma}(2\mu+1) - \chi_{+\Gamma}(2\mu+1) &= \sum_{\substack{W\subseteq V:\\ W \text{ independent}}} \chi^*_{\Sigma\setm W}(2\mu) - \chi^*_{+\Gamma\setm W}(2\mu). \end{align*} The term of $W$ disappears if $\Sigma\setm W$ is balanced; thus, \begin{align} \chi_{\Sigma}(2\mu+1) - \chi_{\Gamma}(2\mu+1) &= \sum_{\substack{W\subseteq V:\\ W \text{ independent,}\\ \Sigma\setm W \text{ unbalanced}}} \chi^*_{\Sigma\setm W}(2\mu) - \chi_{\Gamma\setm W}(2\mu), \label{E:coldiff} \end{align} since $\chi^*_{+\Gamma}(y) = \chi_{\Gamma}(y)$. Observe that \begin{equation} \chi^*_\Sigma(2) = \begin{cases} 2^{c(\Sigma)} &\text{ if } \Sigma \text{ is antibalanced}, \\ 0 &\text{ if it is not}. \end{cases} \label{E:2col} \end{equation} To prove this, suppose a zero-free, proper 1-coloration exists. 
Since there are only the two signed colors $+1$ and $-1$, a negative edge must have the same color at both ends and a positive edge must have oppositely signed colors at its ends. Taking the bipartition of $V$ into sets of vertices with the same sign, that means a positive edge in $-\Sigma$ has both ends in the same part and a negative edge has ends in opposite parts. Hence, $-\Sigma$ is balanced and $\Sigma$ is antibalanced. If $\Sigma$ is antibalanced, there are two choices of color in each component. \begin{lem}\label{L:bipartbalanti} If $\Sigma$ has two of the properties of balance, antibalance, and bipartiteness, then it has the third property as well. \end{lem} \begin{proof} Balance means every circle is positive. Antibalance means every even circle is positive and every odd circle is negative. In a bipartite signed graph, balance and antibalance are equivalent. In any signed graph, the conjunction of balance and antibalance implies there are no odd circles. \end{proof} Now we can further simplify Equation \eqref{E:coldiff} when $\mu=1$. By Lemma \ref{L:bipartbalanti} there are three possibilities: $\Gamma\setm W$ may be bipartite with $\Sigma\setm W$ not antibalanced, $\Sigma\setm W$ may be antibalanced but nonbipartite, or it may be nonbipartite and not antibalanced. Then by Equation \eqref{E:2col}, \begin{align} \begin{aligned} \chi_{\Sigma}(3) - \chi_{\Gamma}(3) &= \sum_{\substack{W\subseteq V:\\ W \text{ independent,}\\ \Sigma\setm W \text{ antibalanced and not bipartite}}} 2^{c(\Gamma\setm W)} \\ &\quad - \sum_{\substack{W\subseteq V:\\ W \text{ independent,}\\ \Sigma\setm W \text{ bipartite and not antibalanced}}} 2^{c(\Gamma\setm W)}. \end{aligned} \label{E:3diff} \end{align} \begin{proof}[Proof of Theorem \ref{P:chromatic}] We use a formula deduced from Equation \eqref{E:3diff}. 
For $k=0,1,2$, let \begin{align*} \alpha_k(\Sigma) := &\text{ the number of independent sets } X \subseteq V \text{ of } k \text{ vertices such that } \Sigma\setm X \text{ is balanced.} \end{align*} \begin{lem}\label{L:petdiff} For a signed Petersen graph, \begin{equation} \chi_{(P,\sigma)}(3) - \chi_{+P}(3) = 2\alpha_0(-(P,\sigma)) + 2\alpha_1(-(P,\sigma)) + 2\alpha_2(-(P,\sigma)) - 4 c_6^-(P,\sigma) . \label{E:petdiffsimp} \end{equation} \end{lem} \begin{proof} By Section \ref{structure}, either $|W| \leq 1$, $W$ is a pair of nonadjacent vertices, or $W = N(v)$ for some vertex $v$. In the first two cases $P\setm W$ is connected and nonbipartite. In the last case it is bipartite. Suppose $(P,\sigma)\setm W$ is antibalanced and not bipartite. Because $P\setm W$ is not bipartite, $|W|\leq 2$. Therefore, $P\setm W$ is connected and the term of $W$ contributes $2$ to the first summation if $(P,\sigma)\setm W$ is antibalanced, $0$ otherwise. The respective contributions of $W$ of size $0, 1, 2$ are $2\alpha_0(-(P,\sigma))$, $2\alpha_1(-(P,\sigma))$, and $2\alpha_2(-(P,\sigma))$. Suppose $P\setm W$ is bipartite and not antibalanced. Here $W = N(v)$ so $P\setm W = H_v \cupdot K_1$. Because $(P,\sigma)\setm W$ is not antibalanced, the term of $W$ contributes $4$ to the second summation. Each hexagon lies in $P\setm W$ for a unique $W = N(v)$. Since the contribution of each negative hexagon to \eqref{E:petdiffsimp} is $-4$, the total contribution of all negative hexagons is $-4 c_6^-(P,\sigma)$. \end{proof} It remains to evaluate the $\alpha_k$, as $c_6^-$ is given by Table \ref{Tb:circ}. The results are in Table \ref{Tb:3diff} along with the values of $\chi_{(P,\sigma)}(3) - \chi_{+P}(3)$ and $\chi_{(P,\sigma)}(3)$.
\begin{table}[hbt] \begin{center} \begin{tabular}{|c||c|c|c|c|c|c|} \hline $(P,\sigma)$ \vstrut{15pt}&\hbox to 3em{ $+P$ } &\hbox to 3em{ $P_1$ } &\hbox to 3em{ $P_{2,2}$ } &{$P_{2,3} \simeq -P_1$} &\hbox to 3em{ $P_{3,2}$ } &{$P_{3,3} \simeq -P$} \\[3pt] \hline\hline \vstrut{15pt}$\alpha_0(P,\sigma)$ &1 &0 &0 &0 &0 &0 \\[2pt] \vstrut{15pt}$\alpha_1(P,\sigma)$ &10 &2 &0 &0 &0 &0 \\[2pt] \vstrut{15pt}$\alpha_2(P,\sigma)$ &30 &14 &6 &4 &0 &0 \\[4pt] \hline \vstrut{15pt}$c_6^-(P,\sigma)$ &0 &4 &6 &4 &10 &0 \\[4pt] \hline\hline \vstrut{15pt}$\chi_{(P,\sigma)}(3) - \chi_{+P}(3)$ &0 &$-8$ &$-12$ &16 &$-40$ &82 \\[4pt] \hline \vstrut{15pt}$\chi_{(P,\sigma)}(3)$ &120 &112 &108 &136 &80 &202 \\[4pt] \hline \end{tabular} \end{center} \caption{The numbers necessary to prove Theorem \ref{P:chromatic}.} \label{Tb:3diff} \end{table} Some of the values $\alpha_k$ are not obvious. For $P_{3,2}$ and $-P$, all $\alpha_k = 0$ because $l_0 > 2$ (Theorem \ref{T:frno}). $\alpha_1(P_1) = 2$ because any edge is the intersection of two pentagons, hence only by deleting an endpoint of the negative edge can we balance $P_1$. $\alpha_1(P_{2,2}) = \alpha_1(P_{2,3}) = 0$ because each graph has $l_0 > 1$. That leaves $\alpha_2$ of $P_{2,2},$ $P_1$, and $-P_1 \simeq P_{2,3}$. Consider deleting a nonadjacent vertex pair from $P_{2,2} \simeq -P_{2,2}$. Suppose the negative edges are $v_{15}v_{34}$ and $v_{23}v_{45}$ (see Figure \ref{F:P}). We get balance by deleting one endpoint of each edge, ignoring $\{v_{15},v_{23}\}$ because those vertices are adjacent; that is three vertex pairs. If we switch $v_{15}$ and $v_{23}$ first so the negative edges are $v_{15}v_{24}$ and $v_{23}v_{14}$, we find three more ways to get balance. Thus, the obvious approach gives six balancing sets. These are all.
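This count of six can be cross-checked by machine before the hand proof. A Python sketch (my own verification, not from the text; it uses the Kneser labeling $v_{ij}$ of $P$ and the negative edges $v_{15}v_{34}$ and $v_{23}v_{45}$ named above, and tests balance by propagating vertex signs):

```python
from itertools import combinations

# Petersen graph as Kneser graph K(5,2); v_ij is the pair frozenset({i,j}).
V = [frozenset(p) for p in combinations(range(1, 6), 2)]
E = [(u, v) for u, v in combinations(V, 2) if not u & v]

# Negative edges of (this copy of) P_{2,2}: v_15 v_34 and v_23 v_45.
neg = {frozenset([frozenset({1, 5}), frozenset({3, 4})]),
       frozenset([frozenset({2, 3}), frozenset({4, 5})])}
sigma = {frozenset([u, v]): (-1 if frozenset([u, v]) in neg else 1)
         for u, v in E}

def balanced(verts):
    # Harary's criterion: balanced iff vertex signs s exist with
    # sigma(uv) = s(u)s(v); propagate signs component by component.
    s = {}
    for root in verts:
        if root in s:
            continue
        s[root] = 1
        stack = [root]
        while stack:
            u = stack.pop()
            for v in verts:
                e = frozenset([u, v])
                if e in sigma:
                    want = s[u] * sigma[e]
                    if v not in s:
                        s[v] = want
                        stack.append(v)
                    elif s[v] != want:
                        return False
    return True

adj = {frozenset([u, v]) for u, v in E}
alpha2 = sum(1 for X in combinations(V, 2)
             if frozenset(X) not in adj and balanced(set(V) - set(X)))
```

The search over all nonadjacent vertex pairs finds exactly the six balancing sets.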
To prove that, we list four negative pentagons forming two vertex-disjoint pairs: $$A := v_{24}v_{15}v_{34}v_{12}v_{35} \text{ and } A' := v_{14}v_{23}v_{45}v_{13}v_{25},$$ and $$B := v_{14}v_{23}v_{45}v_{12}v_{35} \text{ and } B' := v_{34}v_{15}v_{24}v_{13}v_{25}.$$ We need one vertex from each pair, which means (Case 1) one from $A \cap B = \{v_{12},v_{35}\}$ and one from $A' \cap B' = \{v_{13},v_{25}\}$, or else (Case 2) one from $A \cap B' = \{v_{24},v_{15},v_{34}\}$ and one from $A' \cap B = \{v_{14},v_{23},v_{45}\}$. The two other negative pentagons are $$C := v_{15}v_{23}v_{45}v_{13}v_{24} \text{ and } D := v_{34}v_{15}v_{23}v_{14}v_{25} .$$ Case 1 cannot cover both of these. In Case 2, we can take any pair except $v_{24}v_{45}$, $v_{34}v_{14}$, or (because they are adjacent) $v_{15}v_{23}$. Therefore, $\alpha_2(P_{2,2}) = 6$. Next, consider $P_1$ with negative edge $v_{15}v_{23}$. The obvious pairs are $v_{15}$ and any non-neighbor, and $v_{23}$ and any of its non-neighbors; that is 12 pairs. Two pairs that are less obvious are $\{v_{24},v_{34}\}$ and $\{v_{14},v_{54}\}$, which eliminate all circles on $v_{15}$ and $v_{23}$, respectively. To show there are no other possible pairs we list the negative pentagons: $$ D, \ C, \ v_{12}v_{34}v_{15}v_{23}v_{45}, \ v_{14}v_{35}v_{24}v_{13}v_{25}. $$ If a pair excludes $v_{15}$ and $v_{23}$ it needs one vertex from each of the following triples: $$ v_{12}v_{34}v_{45}, \ v_{45}v_{13}v_{24}, \ v_{14}v_{35}v_{24}, \ v_{14}v_{25}v_{34}, $$ in which nonconsecutive sets are disjoint. The possible pairs then are $v_{45}v_{14}$ and $v_{24}v_{34}$. Thus, $\alpha_2(P_1) = 14.$ Finally, consider $-P_1 \sim P_{2,3}$ with (after switching $X_4$) negative edges $e := v_{12}v_{35}$ and $f := v_{13}v_{25}$. The obvious pairs are one from $e$ and one from $f$. They are the only ones possible. As with $P_{2,2}$, $A, A', B, B'$ are negative and we have two cases. Case 1 gives the four obvious vertex pairs. 
Case 2 is impossible, because it fails to cover every negative pentagon, which is every pentagon that does not contain the edge $v_{15}v_{23}$. Hence, $\alpha_2(P_{2,3}) = 4$. The values of $\chi_{(P,\sigma)}(3) - \chi_{+P}(3)$ and $\chi_{(P,\sigma)}(3)$ follow from Lemma \ref{L:petdiff}. (I also calculated $\chi_{P_1}(3)$ and $\chi_{-P}(3)$ directly, confirming the values $112$ and $202$.) They are different for each switching isomorphism type; that proves the theorem. \end{proof} Theorem \ref{P:chromatic} suggests a problem. \begin{question}\label{Q:chromatic} Is it possible for two switching-nonisomorphic signatures of the same graph to have the same chromatic polynomial? Can they have the same zero-free chromatic polynomial? \end{question} It is not possible for a 2-regular graph. \begin{prop}\label{P:2regchromatic} Two different, switching nonisomorphic signatures of the same $2$-regular graph have different chromatic polynomials and different zero-free chromatic polynomials. \end{prop} \begin{proof} It suffices to consider a circle $C_l$ with two signatures, $\sigma_0$ in which it is positive and $\sigma_1$ in which it is negative. It is well known that $\chi_{C_l}(y) = (y-1) \big[ (y-1)^{l-1} - (-1)^{l-1} \big]$; thus, $$ \chi_{(C_l,\sigma_0)}(y) = \chi^*_{(C_l,\sigma_0)}(y) = (y-1) \big[ (y-1)^{l-1} - (-1)^{l-1} \big] . $$ To calculate the polynomials of $\Sigma_1 := (C_l,\sigma_1)$ we apply the matroid theory of \cite{SG, SGC}. By \cite[Theorem 5.1]{SG} the matroid $G(\Sigma_1)$ is the free matroid $F_l$ on $l$ points, whose characteristic polynomial is $\sum_A (-1)^{|A|}y^{l-|A|}$, summed over all flats, i.e., all subsets of $E$; thus it equals $(y-1)^l$. By \cite[Theorem 2.4]{SGC}, $\chi_{\Sigma_1}(y)$ equals the characteristic polynomial of $F_l$. For $\chi^*_{\Sigma_1}(y)$ we sum only over balanced sets $A$; since the only unbalanced flat is $E$, $\chi^*_{\Sigma_1}(y) = (y-1)^l - (-1)^l$.
\end{proof} A possible approach to Question \ref{Q:chromatic} may be through the geometrical interpretation of signed-graph coloring in \cite[Section 5]{IOP}. \section{Clusterability}\label{clust} A signed graph $\Sigma$ is called \emph{clusterable} if its vertices can be partitioned into sets, called clusters, so that each edge within a cluster is positive and each edge between two clusters is negative. Such a partition is a \emph{clustering} of $\Sigma$. By Proposition \ref{P:balance} balance is clusterability with at most two clusters. Clusterability is the other property we discuss, besides the automorphism group, that is not invariant under switching. Davis proposed it as a possibly more realistic alternative to balance as an ideal state of a social group \cite{Davis}, and he proved: \begin{prop}\label{P:clust} A signed graph is clusterable if and only if no circle has exactly one negative edge. \end{prop} Clusterability of signed graphs has recently taken on new life in the field of knowledge and document classification under the name `correlation clustering' \cite{Bansal}. \newcommand\clun{\operatorname{clu}} There are (at least) two ways to measure clusterability. When $\Sigma$ is clusterable, the smallest possible number of clusters is the \emph{cluster number} $\clun(\Sigma)$. Even if a signed graph is inclusterable, it becomes clusterable when enough edges are deleted; the smallest such number is the \emph{inclusterability index} $Q(\Sigma)$. \begin{thm}\label{T:clcontraction} The cluster number of a signed graph is $\clun(\Sigma) = \chi(|\Sigma|/E^+(\Sigma))$. $\Sigma$ is clusterable if and only if $|\Sigma|/E^+(\Sigma)$ has no loops. \end{thm} Thus an all-positive signed graph is a cluster by itself: $\clun(+\Gamma) = 1$. For an all-negative signed graph, $\clun(-\Gamma) = \chi(\Gamma)$. \begin{proof} In the contraction $\Gamma' := |\Sigma|/E^+$, let $[v] \in V'$ denote the vertex corresponding to $v \in V$. 
Suppose $\Sigma$ has a clustering $\pi = \{V_1, \ldots, V_k\}$ into $k$ parts (with each $V_i$ nonempty). That means, first, that all positive edges are contained within $V_i$'s, so each $[v]$ is contained within a set $V_i$. Furthermore, two vertices $[u], [v] \in V'$ that lie within the same $V_i$ are nonadjacent, since $E' = E^-$ and no negative edges are within $V_i$. Therefore the function $\kappa: V \to \{1,2,\ldots,k\}$ defined by $\kappa(v) = i$ if $[v] \subseteq V_i$ is a (proper) coloration of $\Gamma'$, and furthermore every color is used at one or more vertices. ($\kappa$ is determined by $\pi$ only up to permutations of the colors.) Conversely, if $\kappa'$ is a (proper) coloration of $\Gamma'$ using exactly $k$ colors, say with color set $\{1,2,\ldots,k\}$, let $V_i := \{v \in V: \kappa'([v]) = i\}$. That implies $\Gamma'$ has no loops and that every color is applied to a vertex, so no $V_i$ is empty. Then in $\Sigma$, no negative edge can lie within a set $V_i$ and, because every positive edge of $\Sigma$ is within a set $[v]$, it lies inside a $V_i$. Hence, $\pi = \{V_1, \ldots, V_k\}$ is a clustering of $\Sigma$ into $k$ clusters. Consequently, clusterings of $\Sigma$ coincide (modulo permuting the colors) with $k$-colorations of $\Gamma'$ that use all $k$ colors, for any $k$. The theorem follows immediately. \end{proof} Observe that $|\Sigma|/E^+(\Sigma) = |\Sigma|/E^-(-\Sigma)$. Thus, the contraction used here in connection with $\Sigma$ is the same one used in Theorem \ref{T:col} in connection with $-\Sigma$. To supplement Davis's criterion for clusterability---that is, for zero inclusterability index---we state a criterion for unit index. The proof is a simple check. \begin{prop}\label{P:clust1} $Q(\Sigma) = 1$ if and only if there is a circle with exactly one negative edge and there is an edge common to all such circles. 
\end{prop} \begin{thm}\label{T:clust} The clusterabilities of the minimal signed Petersen graphs and their negatives are as stated in Table \ref{Tb:clust}. \end{thm} \begin{table}[hbt] \begin{center} \begin{tabular}{|r||c|c|c|c|c|c|c|c|c|c|c|c|} \hline $(P,\sigma)$ \vstrut{15pt} &\hbox to 2em{\,$+P$} &\hbox to 2em{\,$-P$} &\hbox to 2em{\;\;$P_1$} &\hbox to 2em{$-P_1$} &\hbox to 2em{\,$P_{2,2}$} &\hbox to 2.4em{$-P_{2,2}$} &\hbox to 2em{\,$P_{2,3}$} &\hbox to 2.4em{$-P_{2,3}$} &\hbox to 2em{\,$P_{3,2}$} &\hbox to 2.4em{$-P_{3,2}$} &\hbox to 2em{\,$P_{3,3}$} &\hbox to 2.4em{$-P_{3,3}$} \\[2pt] \hline \vstrut{15pt}$\clun(P,\sigma)$ &1 &3 &-- &3 &-- &3 &-- &3 &-- &4 &-- &2 \\ \vstrut{15pt}$Q(P,\sigma)$ &0 &0 &1 &0 &2 &0 &2 &0 &3 &0 &3 &0 \\[2pt] \hline \end{tabular} \end{center} \caption{The clusterability measures of the minimal signed Petersen graphs and their negatives. A dash denotes an inclusterable signature.} \label{Tb:clust} \end{table} \begin{proof} The cluster numbers are obvious for $+P$, which is balanced, and $P_1, P_{2,2}, P_{2,3}, P_{3,2}, P_{3,3}$, all of which violate Davis's criterion for clusterability. The negatives of these graphs are clusterable; their cluster numbers follow from Theorem \ref {T:clcontraction}. Specifically: The contraction $P/E^+(-P_{2,2})$ has a triangle and is easy to color in 3 colors; thus, $\clun(-P_{2,2}) = 3$. The more complex graph $P/E^+(-P_{3,2})$ consists of three triangles overlapping at vertices---which require three colors arranged so that the three divalent vertices have different colors---and one more vertex adjacent to the divalent vertices; therefore, the chromatic number is 4. That gives $\clun(-P_{3,2}) = 4$. The contraction $P/E^+(-P_{3,3}) = K_{3,4}$. Thus, $\clun(-P_{3,3}) = 2$. The contraction $P/E^+(-P_{2,3})$ is $K_{3,4}$ with one vertex split, forming a $C_5$. As the contraction is nonbipartite, $\clun(-P_{2,3}) > 2$, but as only one vertex was split, only one more color is needed. 
The fact that clusterability is equivalent to having inclusterability index 0 leaves five signatures with positive inclusterability index. Clearly, $Q(\Sigma) \leq |E^-|$. That implies $Q(P_1) = 1$. Proposition \ref{P:clust1} implies that the other inclusterability indices are at least 2, since in each $P_{k,d}$ there are two edge-disjoint circles containing exactly one negative edge each. Consequently, $Q(P_{2,2}) = Q(P_{2,3}) = 2$. In each of $P_{3,2}$ and $P_{3,3}$, all the pentagons with one edge on the outer pentagon in Figure \ref{F:3neg-clust} have exactly one negative edge. Call them the \emph{sharp pentagons}; there are five of them, one containing each edge of the outer pentagon. To make the signed graph clusterable we must eliminate (at least) all sharp pentagons; thus, we have to remove at least one edge from each one. Any two sharp pentagons have just one edge in common, and no three of them have a common edge, so each deleted edge eliminates at most two sharp pentagons. Therefore, to eliminate all five sharp pentagons one has to delete at least three edges. It follows that $Q(P_{3,2}) = Q(P_{3,3}) = 3$. \end{proof} \begin{figure} \caption{Signed Petersen graphs with three negative edges. Each sharp pentagon has one negative edge.} \label{F:3neg-clust} \end{figure} As clusterability is not a switching invariant, the data in Table \ref{Tb:clust} are not sufficient to describe all signatures of the Petersen graph. The number of inequivalent clustering problems equals the number of nonisomorphic edge 2-colorations of $P$, which is large. That makes it interesting to ask about the maximum inclusterability of $P$, defined as the maximum inclusterability index of any signature. \begin{thm}\label{T:maxclust} The largest inclusterability index of any signed Petersen graph is $3$. \end{thm} \begin{proof} Several of the signatures in Table \ref{Tb:clust} attain inclusterability 3, so the problem is to prove no higher value is possible. We begin with two general observations. First, every signed graph satisfies \begin{equation} \Sigma' \subseteq \Sigma \implies Q(\Sigma') \leq Q(\Sigma).
\label{E:subclust} \end{equation} Second, here are properties of general graphs and cubic graphs. \begin{lem}\label{L:cutclust} If the underlying graph of a signed graph $\Sigma$ has a cut with more negative than positive edges, then $Q(\Sigma) < |E^-|$. \end{lem} \begin{proof} Suppose there is a cut $\del X$ with more negative than positive edges, and let $S$ consist of the positive edges of $\del X$ together with all negative edges outside $\del X$. In the remaining graph $\Sigma \setm S$ the negative edges form a cut, so $\Sigma \setm S$ is clusterable; but since $\del X$ has more negative than positive edges, $|S| < |E^-|$, whence $Q(\Sigma) < |E^-|$. \end{proof} \begin{prop}\label{P:cubicmaxclust} Let $\Gamma$ be a graph whose maximum degree is at most $3$. The maximum inclusterability index of any signature is attained only by signatures in which the negative edge set is a matching. \end{prop} \begin{proof} This follows from Lemma \ref{L:cutclust} by examining the vertex cuts $\del\{v\}$ in a signature that maximizes inclusterability. \end{proof} \emph{Proof of Theorem \ref{T:maxclust}, continued.} We may assume that $(P,\sigma)$ is a signed Petersen graph that has maximum inclusterability and that $E^-$ is a matching. A matching in $P$ has at most 5 edges. The matchings were classified in Section \ref{matchings}. If $E^-$ has 5 edges, it separates two pentagons. Since $(P,\sigma) \setm E^-$ is all positive, $(P,\sigma)$ is clusterable with two clusters that are the vertex sets of the pentagons of $P \setm E^-$. Suppose, then, that $E^-$ is a matching with 4 edges. Lemma \ref{L:cutclust} applies when $E^-$ is an $M_5$ minus an edge, with $X = V(C)$ where $C$ is one of the pentagons separated by $M_5$. If the matching is $M_4'$, there is a hexagon $H_{lm}$ with three negative edges and the fourth negative edge $d$ is incident with $v_{lm}$ (Figure \ref{F:m3types}). The two negative edges at distance 2 from $d$, together with $d$, are part of an $M_5$ that is a 5-edge cut with three negative edges.
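The matching cases can also be confirmed computationally; the following Python sketch (a sanity check only, not part of the proof; vertex numbering and helper names are ours) brute-forces the inclusterability index, using the elementary fact that a signed graph is clusterable exactly when no negative edge joins two vertices lying in the same component of its positive subgraph. It confirms that every 5-edge (perfect) matching signature is clusterable and that every 4-edge matching signature has $Q\leq 3$:

```python
from itertools import combinations

# Petersen graph: outer pentagon 0..4, spokes, inner pentagram 5..9
edges = [(i, (i + 1) % 5) for i in range(5)] \
      + [(i, i + 5) for i in range(5)] \
      + [(i + 5, (i + 2) % 5 + 5) for i in range(5)]

def component_finder(edge_set):
    """Union-find over the 10 vertices; returns a root-finding function."""
    parent = list(range(10))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_set:
        parent[find(u)] = find(v)
    return find

def clusterable(pos, neg):
    # clusterable iff no negative edge lies inside a positive component
    find = component_finder(pos)
    return all(find(u) != find(v) for u, v in neg)

def Q(neg):
    """Inclusterability index: fewest deleted edges leaving a clusterable graph."""
    neg = set(neg)
    for k in range(len(edges) + 1):
        for S in map(set, combinations(edges, k)):
            pos = [e for e in edges if e not in neg and e not in S]
            if clusterable(pos, [e for e in neg if e not in S]):
                return k

def is_matching(M):
    ends = [v for e in M for v in e]
    return len(ends) == len(set(ends))

perfect = [M for M in combinations(edges, 5) if is_matching(M)]
assert len(perfect) == 6                   # the 6 perfect matchings of P
assert all(Q(M) == 0 for M in perfect)     # 5-edge E^-: clusterable
assert all(Q(M) <= 3 for M in combinations(edges, 4)
           if is_matching(M))              # 4-edge matchings: Q < 4
```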
It follows that $Q(P,\sigma) < 4$ when $E^-$ is a 4-edge matching, so the theorem is proved. \end{proof} \section{Other Aspects}\label{other} The signed Petersen graphs have other properties that we intend to treat elsewhere. For instance, we can establish the smallest surface in which each $(P,\sigma)$ can be embedded so that a circle is orientable if and only if it is positive (this is called \emph{orientation embedding}). This embeddability, by its definition, is a property of switching isomorphism classes, so there are just six cases. The only signature that embeds in the projective plane is $P_{2,3}$; as $P$ is nonplanar, every other signature of $P$ embeds only in a higher nonorientable surface (if not balanced) or in the torus (if balanced). Another aspect is the relationship between $(P,\sigma)$ and its signed covering graph (the `derived graph' of \cite[Section 9]{Biggs2}), in which each vertex of $P$ splits into a pair, $+v$ and $-v$, and edges double as well, with positive edges connecting vertices of the same sign and negative edges connecting vertices of opposite sign. The switching automorphisms of the signed graph are closely related to the fibered automorphisms of the signed covering. As $\Aut\Sigma$ is not invariant under switching, there is a very large number of possible automorphism groups of signed Petersen graphs: as many as there are nonisomorphic sets of signatures with negatives paired together. (We should pair $\Aut(\Sigma)$ with $\Aut(-\Sigma)$ because they have the same automorphisms by Proposition \ref{P:negaut}. Sometimes the two members of the pair are isomorphic. Table \ref{Tb:aut} shows examples.) The number of such sets is unknown. Switching and the switching automorphism group generalize from the sign group to any group $\fG$.
A \emph{gain graph} is a graph whose edges are labelled invertibly by elements of $\fG$; this means that, if $\varphi(e)$ is the gain of oriented edge $e$ and $e\inv$ is $e$ in the opposite orientation, then $\varphi(e\inv) = \varphi(e)\inv$. Gain graphs and switching over arbitrary groups were introduced in \cite{BG1}. Many of the basic properties of switching automorphisms should extend to the general case, though some, such as the simple description of the switching kernel $\fK$, may depend on having an abelian group, and some (at least, the property that the gain is independent of direction) require a group of exponent 2. This brief description is just an outline; a complete theory of switching automorphisms over an arbitrary gain group, and its application to examples, are open problems. \end{document}
\begin{document} \begin{frontmatter} \title{Variational approach to the existence of solutions for non-instantaneous impulsive differential equations with perturbation} \author[Yao]{Wangjin Yao} \address[Yao]{School of Mathematics and Finance, Putian University, Putian, 351100, P.R. China} \author[Dong]{Liping Dong} \address[Dong]{College of Mathematics and Informatics, Fujian Normal University, Fuzhou, 350117, P.R. China} \author[Zeng]{Jing Zeng\corref{cor}} \address[Zeng]{College of Mathematics and Informatics, Fujian Key Laboratory of Mathematical Analysis and Applications (FJKLMAA), Fujian Normal University, Fuzhou, 350117, P.R. China} \cortext[cor]{Corresponding author, email address: [email protected]. The author is supported by the National Science Foundation of China (Grant No. 11501110) and Fujian Natural Science Foundation (Grant No. 2018J01656).} \begin{abstract} In this paper, we study the existence of solutions for second-order non-instantaneous impulsive differential equations with a perturbation term. By a variational approach, we show that the problem has at least one solution under the assumptions that the nonlinearities are super-quadratic at infinity and sub-quadratic at the origin. \end{abstract} \begin{keyword} Non-instantaneous impulsive differential equation \sep Mountain pass theorem \sep Perturbation term \end{keyword} \end{frontmatter} \section{Introduction} In this paper, we consider the following problem: \begin{equation}\label{eq1} \left\{ {\begin{array}{l} -u''(t)=D_{x}F_{i}(t,u(t)-u(t_{i+1}))+p(t),\quad t\in(s_{i},t_{i+1}],~i=0,1,2,...,N,\\ u'(t)=\alpha_{i},\qquad \qquad \qquad \qquad \qquad \qquad ~~\quad t\in(t_{i},s_{i}],~i=1,2,...,N,\\ u'(s_{i}^{+})=u'(s_{i}^{-}),\qquad \qquad \qquad \qquad \qquad~ \quad i=1,2,...,N,\\ u(0)=u(T)=0, u'(0)=\alpha_{0}, \end{array}} \right. \end{equation} where $0=s_{0}<t_{1}<s_{1}<t_{2}<s_{2}<...<t_{N}<s_{N}<t_{N+1}=T$.
Since the impulses start abruptly at the points $t_{i}$ and keep the derivative constant on the finite time intervals $(t_{i},s_{i}]$, we set $u'(s_{i}^{\pm})=\lim_{s\rightarrow s_{i}^{\pm}}u'(s)$. The $\alpha_{i} \ (i=1,...,N)$ are constants, and $p(t):(s_{i},t_{i+1}]\rightarrow \mathbb{R}$ belongs to $L^{2}(s_{i},t_{i+1}] \ (i=0,1,...,N)$. Mathematical models of real-world phenomena in which discontinuous jumps occur lead to impulsive differential equations. Non-instantaneous impulsive differential equations are related, for example, to hemodynamical equilibrium. Hence, it is important to study non-instantaneous impulsive differential equations with a perturbation term, such as $p(t)$ in \eqref{eq1}. As far as we know, the study of equation \eqref{eq1} was initiated by Hern\'andez and O'Regan in \cite{8}. In \eqref{eq1}, the action starts abruptly at the points $t_{i}$ and remains during a finite time interval $(t_{i}, s_{i}]$. Obviously, it is a natural generalization of the following classical instantaneous impulsive differential equation: \begin{equation}\label{eq55} \left\{ {\begin{array}{l} -u''(t)=f(t,u(t)),\quad t\in[0, T],\\ u'(t_{i}^{+})-u'(t_{i}^{-})=I_i(u(t_i)),\qquad i=1,2,...,N,\\ u(0)=u(T)=0. \end{array}} \right. \end{equation} Many classical methods can be used to study non-instantaneous impulsive differential equations, such as the theory of analytic semigroups and fixed-point theory \cite{6,7,12,13}. For some recent works on this type of equation, we refer the readers to \cite{1,4,5,10,11,15,16,17}. Variational methods can also be used to study some impulsive differential equations. Bai-Nieto \cite{2} studied the following linear problem and obtained the existence and uniqueness of weak solutions.
\begin{equation*}\label{eq2} \left\{ {\begin{array}{l} -u''(t)=\sigma_{i}(t),\quad t\in(s_{i},t_{i+1}], i=0,1,2,...,N,\\ u'(t)=\alpha_{i},\quad t\in(t_{i},s_{i}], i=1,2,...,N,\\ u'(s_{i}^{+})=u'(s_{i}^{-}),\quad i=1,2,...,N,\\ u(0)=u(T)=0 , u'(0)=\alpha_{0}, \end{array}} \right. \end{equation*} where $\sigma_{i}\in L^{2}((s_{i},t_{i+1}),\mathbb{R})$ and $\alpha_{i} \ (i=0,...,N)$ are constants. By a variational method, Bai-Nieto-Wang \cite{3} obtained at least two distinct nontrivial weak solutions of the problem: \begin{equation*}\label{eq3} \left\{ {\begin{array}{l} -u''(t)=D_{x}F_{i}(t,u(t)-u(t_{i+1})),\quad t\in(s_{i},t_{i+1}], i=0,1,2,...,N,\\ u'(t)=\alpha_{i},\quad t\in(t_{i},s_{i}], i=1,2,...,N,\\ u'(s_{i}^{+})=u'(s_{i}^{-}),\quad i=1,2,...,N,\\ u(0)=u(T)=0 , u'(0)=\alpha_{0}, \end{array}} \right. \end{equation*} where $D_{x}F_{i}(t,x)$ is the derivative of $F_{i}(t,x)$ with respect to $x$, $i=0,1,2,...,N.$ Zhang-Yuan \cite{18} considered the following equation with a perturbation term $p(t)$ and obtained infinitely many weak solutions: \begin{equation*}\label{eq4} \left\{ {\begin{array}{l} -u''(t)+\lambda u(t)=f(t,u(t))+p(t), \quad a.e.~t\in[0,T],\\ \bigtriangleup u'(t_{i})=I_{i}(u(t_{i})),\quad i=1,...,N,\\ u(0)=u(T)=0, \end{array}} \right. \end{equation*} where $f: [0, T]\times\mathbb{R}\rightarrow \mathbb{R}$ is continuous, the impulsive functions $I_{i}:\mathbb{R}\rightarrow \mathbb{R} \ (i=1, 2, . . . ,N)$ are continuous and $p(t):[0, T]\rightarrow \mathbb{R}$ belongs to $L^{2}[0, T]$. Motivated by the work of \cite{2,3,18}, we obtain a weak solution of problem \eqref{eq1} by a variational method. Our main result is a natural extension of \cite{3}. We denote by $D_{x}F_{i}(t,x)$ the derivative of $F_{i}(t,x)$ with respect to $x \ (i=0, 1, ..., N)$. We assume that $F_{i}(t,x)$ is measurable in $t$ for every $x\in \mathbb{R}$ and continuously differentiable in $x$ for $a.e.\ t\in (s_{i},t_{i+1}]$.
We assume that $\lambda_{1}$ is the first eigenvalue of the problem: \begin{equation}\label{eq5} \left\{ {\begin{array}{l} \displaystyle -u''(t)=\lambda u(t), \quad t\in[0,T],\\ \displaystyle u(0)=u(T)=0. \end{array}} \right. \end{equation} Our assumptions are: \begin{description} \item[$(H1)$] There exist $\alpha \in C(\mathbb{R}^{+},\mathbb{R}^{+})$ and $ b \in L^{1}(s_{i},t_{i+1};\mathbb{R}^{+})$ such that $$|F_{i}(t,x)|\leq \alpha(|x|)b(t),~|D_{x}F_{i}(t,x)|\leq \alpha(|x|)b(t),$$ for all $x\in \mathbb{R}$, where $F_{i}(t,0)=0$ for $a.e.\ t\in(s_{i},t_{i+1}) ~(i=0,1,2,...,N)$. \item[$(H2)$] There exist constants $\mu_{i}>2$ such that $0<\mu_{i}F_{i}(t,x)\leq xD_{x}F_{i}(t,x)$ for $a.e.\ t\in (s_{i},t_{i+1}], ~x\in \mathbb{R}\backslash \{0\} \ (i=0,1,2,...,N).$ \item[$(H3)$] $\sum\limits_{i=0}^{N}\sqrt{t_{i+1}-s_{i}}\|p\|_{L^{2}}<M,$ where $M=\frac{1}{8\beta^{2}}-\frac{1}{2}\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|-\sum\limits _{i=0}^{N}\int_{s_{i}}^{t_{i+1}}M_{i}(t)dt,~\beta=(T\lambda_{1})^{-\frac{1}{2}}+T^{\frac{1}{2}},~M_{i}(t):=\max \limits_{|x|=1}F_{i}(t,x)~(i=0,1,2,...,N).$ \end{description} \begin{remark} The constant $M$ in $(H3)$ originates from the proof of Theorem \ref{th1}. \end{remark} \begin{theorem} \label {th1} Suppose that $(H1)$-$(H3)$ hold. Then problem \eqref{eq1} has at least one weak solution. \end{theorem} The article is organized as follows: In Section 2, we present some basic knowledge and preliminary results. In Section 3, we prove Theorem \ref{th1}. \section{Preliminaries} In this section, we present some preliminary results which will be used in the proof of our result. \begin{definition}\label{del}{\bf (\cite{9}, (PS) condition)} Let $E$ be a real Banach space and $I\in C^{1}(E, \mathbb{R})$. $I$ is said to satisfy the Palais-Smale condition on $E$ if any sequence $\{u_{k}\}\subset E$ for which $I(u_{k})$ is bounded and $I'(u_{k})\rightarrow0$ as $k\rightarrow\infty$ possesses a convergent subsequence in $E$.
\end{definition} \begin{theorem}\label{th2}{\bf(\cite{14}, Mountain Pass Theorem)} Let $E$ be a real Banach space and $I\in C^{1}(E,\mathbb{R})$ satisfy the $(PS)$ condition with $I(0)=0$. If $I$ satisfies the following conditions: \begin{description} \item[$(1)$] there exist constants $\rho,\alpha >0$, such that $I|_{\partial B_{\rho}}\geq \alpha$; \item[$(2)$] there exists an $e\in E\backslash B_{\rho}$, such that $I(e)\leq 0$, \end{description} then $I$ possesses a critical value $c\geq \alpha$. Moreover, $c$ is characterized as $$c=\inf \limits_{g\in \Gamma}\max \limits_{s\in [0,1]}I(g(s)),$$ where $$\Gamma=\{g\in C([0,1],E)|~g(0)=0,g(1)=e\}.$$ \end{theorem} Next, we recall the well-known Poincar\'e inequality $$\int_{0}^{T}|u|^{2}dt\leq\frac{1}{\lambda_{1}}\int_{0}^{T}|u'|^{2}dt, ~u\in H_{0}^{1}(0,T),$$ where $\lambda_{1}$ is given in \eqref{eq5}. In the Sobolev space $H_{0}^{1}(0,T)$, we consider the inner product $(u,v)=\int_{0}^{T}u'(t)v'(t)dt,$ which induces the norm $\|u\|=\left(\int_{0}^{T}|u'(t)|^{2}dt\right)^{\frac{1}{2}}.$ In $L^{2}[0,T]$ and $C[0,T]$, we define the norms: $$\|u\|_{L^{2}}=\left(\int_{0}^{T}|u(t)|^{2}dt\right)^{\frac{1}{2}},~~\|u\|_{\infty}= \max\limits_{t\in [0,T]}|u(t)|.$$ By the Mean Value Theorem and the H\"older inequality, for any $u\in H_{0}^{1}(0,T)$, we have \begin{equation}\label{eq106} \|u\|_{\infty}\leq\beta\|u\|, \end{equation} where $\beta=(T\lambda_{1})^{-\frac{1}{2}}+T^{\frac{1}{2}},$ and $\lambda_{1}$ is given in \eqref{eq5}. Taking $v\in H_{0}^{1}(0,T)$, multiplying \eqref{eq1} by $v$ and integrating from $0$ to $T$, we obtain \begin{equation*}\label{eq6} \begin{split} \int_{0}^{T}u''vdt=&\int_{0}^{t_{1}}u''vdt+\sum\limits_{i=1}^{N}\int_{t_{i}}^{s_{i}}u''vdt+\sum\limits_{i=1}^{N-1}\int_{s_{i}}^{t_{i+1}}u''vdt+\int_{s_{N}}^{T}u''vdt\\ =&-\int_{0}^{T}u'v'dt+\sum\limits_{i=1}^{N}[u'(t_{i}^{-})-u'(t_{i}^{+})]v(t_{i})+\sum\limits_{i=1}^{N}[u'(s_{i}^{-})-u'(s_{i}^{+})]v(s_{i}).
\end{split} \end{equation*} By \eqref{eq1}, \begin{equation}\label{eq7} \begin{split} \int_{0}^{T}u''vdt=&-\int_{0}^{T}u'v'dt+\sum\limits_{i=1}^{N}[\alpha_{i-1}-\alpha_{i}]v(t_{i})\\ &-\sum\limits_{i=0}^{N-1}\left(\int_{s_{i}}^{t_{i+1}}(D_{x}F_{i}(t,u(t)-u(t_{i+1}))+p(t))dt\right)v(t_{i+1}). \end{split} \end{equation} On the other hand, \begin{equation}\label{eq8} \begin{split} \int_{0}^{T}u''vdt=&-\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}(D_{x}F_{i}(t,u(t)-u(t_{i+1}))+p(t))vdt+\sum\limits_{i=1}^{N}\int_{t_{i}}^{s_{i}}\frac{d}{dt}[\alpha_{i}]vdt\\ =&-\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}(D_{x}F_{i}(t,u(t)-u(t_{i+1}))+p(t))vdt. \end{split} \end{equation} Thus, it follows from $v(t_{N+1})=v(T)=0$, \eqref{eq7} and \eqref{eq8} that \begin{equation}\label{eq9} \begin{split} -\int_{0}^{T}u'v'dt+\sum\limits_{i=1}^{N}[\alpha_{i-1}-\alpha_{i}]v(t_{i})=&-\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}(D_{x}F_{i}(t,u(t)-u(t_{i+1}))\\ &+p(t))(v(t)-v(t_{i+1}))dt. \end{split} \end{equation} A weak solution to \eqref{eq1} is a function $u\in H_{0}^{1}(0,T)$ such that \eqref{eq9} holds for any $v\in H_{0}^{1}(0,T)$. Consider the functional $I:~H_{0}^{1}(0,T)\rightarrow \mathbb{R},$ \begin{equation}\label{eq10} \begin{split} I(u)=&\displaystyle\frac{1}{2}\int_{0}^{T}|u'|^{2}dt-\sum\limits_{i=1}^{N}(\alpha_{i-1}-\alpha_{i})u(t_{i})\\ &-\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}p(t)(u(t)-u(t_{i+1}))dt-\sum\limits_{i=0}^{N}\varphi_{i}(u), \end{split} \end{equation} where $\varphi_{i}(u):=\displaystyle\int_{s_{i}}^{t_{i+1}}F_{i}(t,u(t)-u(t_{i+1}))dt.$ For $u$ and $v$ fixed in $H_{0}^{1}(0,T)$ and $\lambda\in[-1, 1]$, by \eqref{eq106}, we have \begin{equation}\label{eq50} |u(t)-u(t_{i+1})|\leq2\|u\|_{\infty}\leq2\beta\|u\|.
\end{equation} Hence $$|u(t)-u(t_{i+1})+\lambda\theta(v(t)-v(t_{i+1}))|\leq2\beta(\|u\|+\|v\|),~\text{for} ~\theta\in(0,1),$$ and for $a.e.$ $t\in (s_{i},t_{i+1}]$, \begin{align*} \begin{split} &\lim\limits_{\lambda\rightarrow0}\frac{1}{\lambda}\left[F_{i}(t,u(t)-u(t_{i+1})+\lambda(v(t)-v(t_{i+1})))-F_{i}(t,u(t)-u(t_{i+1}))\right]\\ =&D_{x}F_{i}(t,u(t)-u(t_{i+1}))(v(t)-v(t_{i+1})). \end{split} \end{align*} By $(H1)$, \eqref{eq50} and the Mean Value Theorem, we obtain \begin{equation*} \begin{split} &\left|\frac{1}{\lambda}\left[F_{i}(t,u(t)-u(t_{i+1})+\lambda(v(t)-v(t_{i+1})))-F_{i}(t,u(t)-u(t_{i+1}))\right]\right|\\ =&\bigg|D_{x}F_{i}(t,u(t)-u(t_{i+1})+\lambda\theta(v(t)-v(t_{i+1})))(v(t)-v(t_{i+1}))\bigg|\\ \leq&\max \limits_{z\in[0,2\beta(\|u\|+\|v\|)]}\alpha(z)2\beta\|v\|b(t)\in L^{1}(s_{i},t_{i+1};\mathbb{R}^{+}). \end{split} \end{equation*} Lebesgue's Dominated Convergence Theorem shows that \begin{equation*}\label{eq11} (\varphi_{i}'(u),v)=\int_{s_{i}}^{t_{i+1}}D_{x}F_{i}(t,u(t)-u(t_{i+1}))(v(t)-v(t_{i+1}))dt. \end{equation*} Moreover, $\varphi_{i}'(u)$ is continuous. So $I\in C^{1}(H_{0}^{1}(0,T),\mathbb{R})$ and \begin{equation}\label{eq12} \begin{split} I'(u)v=&\int_{0}^{T}u'v'dt+\sum\limits_{i=1}^{N}[\alpha_{i-1}-\alpha_{i}]v(t_{i})\\ &-\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}\left(D_{x}F_{i}(t,u(t)-u(t_{i+1}))+p(t)\right)(v(t)-v(t_{i+1}))dt. \end{split} \end{equation} Then the critical points of $I$ are precisely the weak solutions of problem \eqref{eq1}.
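As a quick numerical sanity check of the Poincar\'e inequality and the embedding constant $\beta$ above (illustrative only: the choice $T=1$, the test function and the grid size are ours, not part of the argument), one can verify both inequalities for the first Dirichlet eigenfunction $u(t)=\sin(\pi t/T)$, for which the Poincar\'e inequality is in fact an equality:

```python
import math

T = 1.0
lam1 = (math.pi / T) ** 2              # first eigenvalue of -u'' = lam*u, u(0)=u(T)=0
beta = (T * lam1) ** -0.5 + T ** 0.5   # the constant in ||u||_inf <= beta ||u||

n = 100_000
h = T / n
grid = [i * h for i in range(n + 1)]
u  = [math.sin(math.pi * t / T) for t in grid]                 # eigenfunction
du = [(math.pi / T) * math.cos(math.pi * t / T) for t in grid]

norm_H1  = math.sqrt(h * sum(v * v for v in du))  # ||u|| = (int |u'|^2 dt)^(1/2), Riemann sum
norm_L2  = math.sqrt(h * sum(v * v for v in u))
norm_inf = max(abs(v) for v in u)

# Poincare inequality (an equality for the first eigenfunction), up to
# discretization error:
assert norm_L2 ** 2 <= norm_H1 ** 2 / lam1 + 1e-3
# Sup-norm embedding ||u||_inf <= beta * ||u||:
assert norm_inf <= beta * norm_H1
```

Here $\|u\|_{L^2}^2=T/2$ and $\|u\|^2/\lambda_1=T/2$ coincide, while $\beta\|u\|\approx 2.9$ comfortably dominates $\|u\|_\infty=1$.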
\begin{lemma} \label{le2} {\bf(\cite{3})} If assumption $(H2)$ holds, then for each $i=0,1,2,...,N$, there exist $M_{i},m_{i},b_{i}\in L^{1}(s_{i}, t_{i+1})$ which are almost everywhere positive such that $$F_{i}(t,x)\leq M_{i}(t)|x|^{\mu_{i}},~for ~a.e.~t\in(s_{i}, t_{i+1}],~and~|x|\leq1,$$ and $$F_{i}(t,x)\geq m_{i}(t)|x|^{\mu_{i}}-b_{i}(t),~for ~a.e.~t\in(s_{i}, t_{i+1}],~and~x\in\mathbb{R},$$ where $m_{i}(t):=\min \limits_{|x|=1}F_{i}(t,x)$, $M_{i}(t):=\max \limits_{|x|=1}F_{i}(t,x),~a.e.~t\in(s_{i},t_{i+1}].$ \end{lemma} \begin{remark} Lemma \ref{le2} implies that $F_{i}(t,x)\ (i=0,1, ..., N)$ are super-quadratic at infinity and sub-quadratic at the origin. \end{remark} \begin{lemma}\label{le3} Suppose that $(H1)$ and $(H2)$ hold. Then $I$ satisfies the (PS) condition. \end{lemma} \noindent{\bf Proof:} Let $\{u_{k}\}\subset H_{0}^{1}(0,T)$ be such that $\{I (u_{k})\}$ is bounded and $\lim \limits_{k\rightarrow \infty}I'(u_{k})=0$. By \eqref{eq106}, \begin{equation}\label{eq17} |\sum\limits_{i=1}^{N}(\alpha_{i-1}-\alpha_{i})u(t_{i})|\leq\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|\|u\|_{\infty}\leq\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|\beta\|u\|. \end{equation} Thus there exists a constant $C_{1}>0$ such that $$|I(u_{k})|\leq C_{1},~\|I'(u_{k})\|\leq C_{1}.$$ First, we prove that $\{u_{k}\}$ is bounded.
Let $\mu:=\min\{\mu_{i}:i=0,1,2,...,N\}$; by \eqref{eq10}, \eqref{eq17} and $(H2)$, we obtain \begin{equation*} \begin{split} \int_{0}^{T}|u_{k}'|^{2}dt=& 2I(u_{k})+2\sum\limits_{i=1}^{N}(\alpha_{i-1}-\alpha_{i})u_{k}(t_{i})\\ &+2\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}p(t)(u_{k}(t)-u_{k}(t_{i+1}))dt\\ &+2\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}F_{i}(t,u_{k}(t)-u_{k}(t_{i+1}))dt,\\ \leq& 2C_{1}+2\beta\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|\|u_{k}\|\\ &+2\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}p(t)(u_{k}(t)-u_{k}(t_{i+1}))dt\\ &+\frac{2}{\mu}\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}D_{x}F_{i}(t,u_{k}(t)-u_{k}(t_{i+1}))(u_{k}(t)-u_{k}(t_{i+1}))dt, \end{split} \end{equation*} which, combined with \eqref{eq12}, yields \begin{align*} (1-\frac{2}{\mu})\|u_{k}\|^{2}\leq&2C_{1}+(2+\frac{2}{\mu})\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|\beta\|u_{k}\|-\frac{2}{\mu}I'(u_{k})u_{k}\\ &+(2-\frac{2}{\mu})\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}p(t)(u_{k}(t)-u_{k}(t_{i+1}))dt,\\ \leq& 2C_{1}+(2+\frac{2}{\mu})\beta\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|\|u_{k}\|+\frac{2}{\mu}C_{1}\|u_{k}\|\\ &+2(2-\frac{2}{\mu})\beta\sum\limits_{i=0}^{N}\|u_{k}\|\sqrt{t_{i+1}-s_{i}}\|p\|_{L^{2}}. \end{align*} Since $\mu>2$, it follows that $\{u_{k}\}$ is bounded in $H_{0}^{1}(0,T)$. Therefore, there exists a subsequence, still denoted by $\{u_{k}\}\subset H_{0}^{1}(0,T)$, such that \begin{equation*} \begin{split} &u_{k}\rightharpoonup u, ~~ \text{in} ~H_{0}^{1}(0,T),\\ &u_{k} \rightarrow u, ~~\text{in} ~L^{2}(0,T),\\ &u_{k} \rightarrow u, ~~\text{uniformly in} ~[0,T],~~\text{as} ~k\rightarrow \infty. \end{split} \end{equation*} We have \begin{equation*} \begin{split} |u_{k}(t)-u_{k}(t_{i+1})-u(t)+u(t_{i+1})|\leq& |u_{k}(t)-u(t)|+|u(t_{i+1})-u_{k}(t_{i+1})|\\ \leq& 2\|u_{k}-u\|_{\infty}\rightarrow 0, \quad \text{as}~~ k\rightarrow \infty.
\end{split} \end{equation*} It follows that \begin{equation*} \begin{split} \sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}&(D_{x}F_{i}(t,u_{k}(t)-u_{k}(t_{i+1}))-D_{x}F_{i}(t,u(t)-u(t_{i+1})))\\ \cdot&(u_{k}(t)-u_{k}(t_{i+1})-u(t)+u(t_{i+1}))dt\rightarrow 0, \end{split} \end{equation*} and $$|\langle I'(u_{k})-I'(u),u_{k}-u\rangle|\leq\|I'(u_{k})-I'(u)\|\|u_{k}-u\|\rightarrow 0.$$ Moreover, we obtain \begin{equation*} \begin{split} &\langle I'(u_{k})-I'(u),u_{k}-u\rangle\\ =&\|u_{k}-u\|^{2}-\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}(D_{x}F_{i}(t,u_{k}(t)-u_{k}(t_{i+1}))\\ &-D_{x}F_{i}(t,u(t)-u(t_{i+1})))(u_{k}(t)-u_{k}(t_{i+1})-u(t)+u(t_{i+1}))dt, \end{split} \end{equation*} so $\|u_{k}-u\|\rightarrow 0$ as $k\rightarrow +\infty$. That is, $\{u_{k}\}$ converges strongly to $u$ in $H_{0}^{1}(0,T)$. Thus, $I$ satisfies the (PS) condition. $\Box$ \section{Proof of theorem} \noindent{\bf Proof of Theorem \ref{th1}} We have $I(0)=0$ and $I\in C^{1}(H_{0}^{1}(0,T),\mathbb{R})$. By Lemma \ref{le3}, $I$ satisfies the (PS) condition. By Lemma \ref{le2} and \eqref{eq50}, we have \begin{equation*} \begin{split} \int_{s_{i}}^{t_{i+1}}F_{i}(t,u(t)-u(t_{i+1}))dt&\leq\int_{s_{i}}^{t_{i+1}}M_{i}(t)|u(t)-u(t_{i+1})|^{\mu_{i}}dt\\ &\leq\int_{s_{i}}^{t_{i+1}}M_{i}(t)|2\beta\|u\||^{\mu_{i}}dt, \end{split} \end{equation*} and $$\sum\limits_{i=1}^{N}(\alpha_{i-1}-\alpha_{i})u(t_{i})\leq\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|\|u\|_{\infty}\leq\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|\beta\|u\|,$$ $$\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}p(t)(u(t)-u(t_{i+1}))dt\leq\sum\limits_{i=0}^{N}2\beta\|u\|\sqrt{t_{i+1}-s_{i}}\|p\|_{L^{2}}.$$ By \eqref{eq10}, \begin{equation}\label{eq19} \begin{split} I(u)\geq&\frac{1}{2}\|u\|^{2}-\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|\beta\|u\|-\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}M_{i}(t)|2\beta\|u\||^{\mu_{i}}dt\\ &-\sum\limits_{i=0}^{N}2\beta\|u\|\sqrt{t_{i+1}-s_{i}}\|p\|_{L^{2}}.
\end{split} \end{equation} Take $\|u\|=\frac{1}{2\beta}$; then $|u(t)-u(t_{i+1})|\leq 1$, so \begin{equation*} \begin{split} &\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|\beta\|u\|\leq\frac{1}{2}\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|,\\ &\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}M_{i}(t)|2\beta\|u\||^{\mu_{i}}dt\leq\sum\limits _{i=0}^{N}\int_{s_{i}}^{t_{i+1}}M_{i}(t)dt,\\ &\sum\limits_{i=0}^{N}2\beta\|u\|\sqrt{t_{i+1}-s_{i}}\|p\|_{L^{2}}\leq\sum\limits_{i=0}^{N}\sqrt{t_{i+1}-s_{i}}\|p\|_{L^{2}}. \end{split} \end{equation*} Hence, \begin{align*} I(u)=&\frac{1}{2}\|u\|^{2}-\sum\limits_{i=1}^{N}(\alpha_{i-1}-\alpha_{i})u(t_{i})\\ &-\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}p(t)(u(t)-u(t_{i+1}))dt-\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}F_{i}(t,u(t)-u(t_{i+1}))dt,\\ \geq&\frac{1}{2}\|u\|^{2}-\frac{1}{2}\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|-\sum\limits _{i=0}^{N}\int_{s_{i}}^{t_{i+1}}M_{i}(t)dt-\sum\limits_{i=0}^{N}\sqrt{t_{i+1}-s_{i}}\|p\|_{L^{2}}. \end{align*} By $(H3)$, $I(u)\geq M-\sum\limits_{i=0}^{N}\sqrt{t_{i+1}-s_{i}}\|p\|_{L^{2}}>0$ whenever $\|u\|=\frac{1}{2\beta}$, so $I$ satisfies condition (1) of Theorem \ref{th2} with $\rho=\frac{1}{2\beta}$. Let $\xi>0$ and let $w\in H_{0}^{1}(0,T)$ with $\|w\|=1$ be chosen so that $w$ is not constant on $[0, t_{1}]$. By Lemma \ref{le2}, \begin{equation*} \begin{split} \int_{s_{i}}^{t_{i+1}}F_{i}(t,(w(t)-w(t_{i+1}))\xi)dt\geq & \left(\int_{s_{i}}^{t_{i+1}}m_{i}(t)|w(t)-w(t_{i+1})|^{\mu_{i}}dt\right)\xi^{\mu_{i}}\\&-\int_{s_{i}}^{t_{i+1}}b_{i}(t)dt. \end{split} \end{equation*} Let $W_{i}:=\int_{s_{i}}^{t_{i+1}}m_{i}(t)|w(t)-w(t_{i+1})|^{\mu_{i}}dt$; then $$0\leq W_{i}\leq(2\beta)^{\mu_{i}}\int_{s_{i}}^{t_{i+1}}m_{i}(t)dt.$$ Moreover, $W_{0}>0$. Indeed, suppose that $\int_{0}^{t_{1}}m_{0}(t)|w(t)-w(t_{1})|^{\mu_{0}}dt=0$. Since $m_{0}(t)$ is positive almost everywhere, this gives $w(t)=w(t_{1})$ for $a.e.~t\in[0,t_{1}]$, a contradiction with the choice of $w$.
By \eqref{eq10}, we obtain \begin{align*} I(\xi w)=&\frac{1}{2}\xi^{2}\|w\|^{2}-\sum\limits_{i=1}^{N}(\alpha_{i-1}-\alpha_{i})w(t_{i})\xi-\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}p(t)(w(t)-w(t_{i+1}))\xi dt\\ &-\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}F_{i}(t,(w(t)-w(t_{i+1}))\xi)dt,\\ \leq&\frac{1}{2}\xi^{2}+\sum\limits_{i=1}^{N}|\alpha_{i-1}-\alpha_{i}|\beta\xi+2\beta\xi\sum\limits_{i=0}^{N}\sqrt{t_{i+1}-s_{i}}\|p\|_{L^{2}}-\sum\limits_{i=0}^{N}W_{i}\xi^{\mu_{i}}\\ &+\sum\limits_{i=0}^{N}\int_{s_{i}}^{t_{i+1}}b_{i}(t)dt. \end{align*} Since $\mu_{i}>2$, the above inequality implies that $I(\xi w)\rightarrow -\infty$ as $\xi \rightarrow \infty$; that is, there exists a $\xi\in \mathbb{R}\backslash \{0\}$ such that $\|\xi w\|>\frac{1}{2\beta}$ and $I(\xi w)\leq0$. Taking $e=\xi w$, condition (2) of Theorem \ref{th2} is satisfied, so $I$ possesses a critical point, which is a weak solution of \eqref{eq1}. The proof of Theorem \ref{th1} is completed. $\Box$ \noindent \bfseries Acknowledgments \mdseries Jing Zeng is supported by the National Science Foundation of China (Grant No. 11501110) and Fujian Natural Science Foundation (Grant No. 2018J01656). \noindent \bfseries References \mdseries \end{document}
\begin{document} \title{Local Gradient Estimates for Second-Order Nonlinear Elliptic and Parabolic Equations by the Weak Bernstein's Method } \author{G.Barles\thanks{Institut Denis Poisson (UMR CNRS 7013) Université de Tours, Université d'Orléans, CNRS. Parc de Grandmont 37200 Tours, France. Email: [email protected] \newline \indent This work was partially supported by the project ANR MFG (ANR-16-CE40-0015-01) funded by the French National Research Agency } } \maketitle \noindent {\bf Key-words}: Second-order elliptic and parabolic equations, gradient bounds, weak Bernstein's method, viscosity solutions. \\ {\bf MSC}: 35D10 35D40, 35J15 35K10 \begin{abstract} {\footnotesize In the theory of second-order, nonlinear elliptic and parabolic equations, obtaining local or global gradient bounds is often a key step for proving the existence of solutions but it may be even more useful in many applications, for example to singular perturbation problems. The classical Bernstein's method is a well-known tool to obtain these bounds but, in most cases, it has the defect of providing only a priori estimates. The ``weak Bernstein's method'', based on viscosity solutions' theory, is an alternative way to prove the global Lipschitz regularity of solutions together with some estimates, but it is not so easy to perform in the case of local bounds. The aim of this paper is to provide an extension of the ``weak Bernstein's method'' which allows one to prove local gradient bounds with reasonable technicalities.} \end{abstract} The classical Bernstein's method is a well-known tool for obtaining gradient estimates for solutions of second-order, elliptic and parabolic equations (cf. Caffarelli and Cabr\'e \cite{CClivre}, Gilbarg and Trudinger \cite{GT} (Chap.~15) and Lions \cite{LB}).
The underlying idea is very simple: if $\Omega$ is a domain in $\mathbb R^N$ and $u : \Omega \to \mathbb R$ is a smooth solution of $$ -\Delta u = 0 \quad\hbox{in }\Omega\; ,$$ where $\Delta$ denotes the Laplacian in $\mathbb R^N$, then $w:=|Du|^2$ satisfies $$ -\Delta w \leq 0 \quad\hbox{in }\Omega\; .$$ Indeed, $-\Delta w = -2|D^2u|^2 - 2\, Du \cdot D(\Delta u) = -2|D^2u|^2 \leq 0$. The gradient bound is deduced from this property by using the Maximum Principle if one knows that $Du$ is bounded on $\partial \Omega$, and this bound on the boundary is usually the consequence of the existence of barrier functions. Of course this strategy, consisting in showing that $w:=|Du|^2$ is a {\em subsolution} of an elliptic equation and then using the Maximum Principle, can be applied to far more general equations but it has a clear defect: in order to justify the above computations, the solution has to be $C^3$ and, since it is rare that the solution has such regularity, the classical Bernstein's method provides, in general, only {\em a priori estimates}; then one has to find a suitable approximation of the equation, with smooth enough solutions, to actually obtain the gradient bound. In 1990, this difficulty was partially overcome by the weak Bernstein's method, whose idea is even simpler: if one looks at the maximum of the function $$(x,y) \mapsto u(x)-u(y)-L|x-y| \quad\hbox{in }\overline \Omega \times \overline \Omega\; ,$$ and if one can prove that it is achieved only for $x=y$ for $L$ large enough, then $|Du|\leq L$. Surprisingly, as it is explained in the introduction of \cite{B-wb}, the computations and structure conditions which are needed to obtain this bound are the same (or almost the same, with tiny differences) as for the classical Bernstein's method. Of course, the main advantage of the weak Bernstein's method is that it does not require $u$ to be smooth since there is no differentiation of $u$, and it can even be used in the framework of viscosity solutions. Problem solved?
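Before addressing that question, here is a tiny numerical illustration of the opening computation (entirely optional; the harmonic function, grid and tolerances are our own illustrative choices): for $u(x,y)=x^2-y^2$ one has $w=|Du|^2=4x^2+4y^2$, so $\Delta w=16\geq 0$, i.e. $-\Delta w\leq 0$. Centered finite differences reproduce this essentially exactly, since $u$ and $w$ are quadratic polynomials:

```python
# Check -Laplacian(w) <= 0 for w = |Du|^2, where u(x, y) = x^2 - y^2 is harmonic.
h = 0.01

def u(x, y):
    return x * x - y * y

def w(x, y):
    # |Du|^2 via centered differences (exact here, since u is quadratic)
    ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
    uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
    return ux * ux + uy * uy

def lap_w(x, y):
    # five-point finite-difference Laplacian of w
    return (w(x + h, y) + w(x - h, y) + w(x, y + h) + w(x, y - h)
            - 4.0 * w(x, y)) / (h * h)

pts = [(0.1 * i, 0.1 * j) for i in range(-5, 6) for j in range(-5, 6)]
assert all(lap_w(x, y) >= -1e-6 for x, y in pts)            # -Laplacian(w) <= 0
assert all(abs(lap_w(x, y) - 16.0) < 1e-5 for x, y in pts)  # here Laplacian(w) = 16
```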
Not completely, because the weak Bernstein's method is not easy to use if one looks for local bounds instead of global bounds. In fact, in order to get such local gradient bounds, the only possible way seems to be to multiply the solution by a cut-off function and to look for a gradient bound for this new function. Unfortunately, this new function satisfies a rather complicated equation where the derivatives of the cut-off function appear at different places, and the computations become rather technical. The classical Bernstein's method also faces similar difficulties but, at least in some cases, succeeds in providing these local bounds in a not too complicated way. The aim of this article is to describe a slight improvement of the weak Bernstein's method which allows one to obtain local gradient bounds in a simpler way, ``simpler'' meaning that the technicalities are as reduced as possible, although some are unavoidable. This improvement is based on an idea of P.~Cardaliaguet~\cite{C1} which dramatically simplifies a matrix analysis that is the keystone of \cite{B-wb}, but which also allows this extension to local bounds. To present our result, we consider second-order, possibly degenerate, elliptic equations which we write in the general form \begin{equation}\label{GFNLE} F(x, u, D u, D^2u) = 0 \quad\hbox{in }\Omega\; , \end{equation} where $\Omega$ is a domain of $\mathbb R^N$ and $F :\Omega \times \mathbb R \times \mathbb R^N \times {\mathcal S}^N \to \mathbb R $ is a locally Lipschitz continuous function, ${\mathcal S}^N$ denotes the space of $N \times N$ symmetric matrices, the solution $u$ is a real-valued function defined on $\Omega$, and $Du, D^2u$ denote respectively its gradient and Hessian matrix. We assume that $F$ satisfies the (degenerate) ellipticity condition: for any $(x,r,p)\in\Omega \times \mathbb R \times \mathbb R^N$ and for any $X,Y\in{\mathcal S}^N$, $$ F(x,r,p,X) \leq F(x,r,p,Y)\quad\hbox{if }X\geq Y.
$$ Our results consist of several general ``structure conditions'' on $F$ under which one has a local gradient bound, depending or not on the local oscillation of $u$ and on the uniform ellipticity of the equation. We also consider the parabolic case, for which we give a structure condition on the equation allowing us to prove a local gradient bound, depending on the local oscillation of $u$, where ``local'' means both in space and time. In the stationary framework, we focus in particular on the following example \begin{equation}\label{PartEqn} -\Delta u + |Du|^m = f(x) \quad\hbox{in }\Omega\; , \end{equation} where $m>1$ and $f \in W^{1,\infty}_{loc}(\Omega)$, which is a particular case for which the classical Bernstein's method provides a local bound (independent of the oscillation of $u$) in a rather easy way, while this is not the case for the weak Bernstein's method. We conclude this introduction with two remarks. The first one concerns the ``structure conditions'' on $F$ on which our results are based. In \cite{B-wb}, it is pointed out that, in general, the equation we consider does not satisfy these structure conditions and we have to make a change of unknown function $v=\psi(u)$, choosing $\psi$ in order that the new equation for $v$ satisfies them. Obviously, the same remark is true here and we provide an example where such a change allows us to obtain the desired gradient bound. But, contrary to \cite{B-wb}, we are not going to study the effect of such changes in a more systematic way. The second remark concerns the method we are going to present: the results we obtain are based on several choices made at several places and, in particular, in the estimates of the terms we have to handle.
Clearly, many variants are possible and we have just tried to convince the reader that, actually, the technicalities are really ``reasonable'', as we claim in the abstract.\\ \noindent{\bf Acknowledgement:} the author would like to thank the anonymous referees whose remarks led to significant improvements in the readability of this article. \section{Some preliminary results}\label{prelim} In this section, we construct the functions we use in the proof of our main result. To do so, we introduce $\mathcal{K}$, the class of continuous functions $\chi:[0,+\infty)\to [0,+\infty)$ such that $\chi(t)=0$ if $t\leq 1$, $\chi$ is increasing on $[1,+\infty[$, $\chi(t)\leq {\tilde K}(\chi)t^\beta$ for $t\geq 1$, for some $0<\beta < 1/2$ and some constant ${\tilde K}(\chi)>0$, and $$ \int_1^{+\infty}\frac{dt}{t\chi(t)}<+\infty .$$ The first ingredient we use below is a smooth function $\varphi : [0,1[ \to \mathbb R$ such that $\varphi(0)=0$, $\varphi'(0)=1 \leq \varphi'(t)$ for any $t\in [0,1[$, with $\varphi(t) \to +\infty$ as $t\to 1^-$, and which solves the ODE $\varphi''(t)= K_1\varphi'(t) \chi(\varphi'(t))$ for some constant $K_1>0$. In fact the existence of such a function is classical, using that $$ \int_1^{\varphi'(t)} \frac{ds}{s\chi(s)} = K_1 t\, ,$$ and by choosing $K_1=\int_1^{+\infty} \frac{ds}{s\chi(s)}$ we already see that $\varphi' (t) \to +\infty$ as $t\to 1^-$. Moreover $$ \int_{\varphi'(t)}^{+\infty} \frac{ds}{s\chi(s)} = K_1 (1-t) \; ,$$ and therefore, for $t$ close enough to $1$, $$ K_1 (1-t) \geq [{\tilde K}(\chi)]^{-1}\int_{\varphi'(t)}^{+\infty} \frac{ds}{s^{1+\beta}}= [{\tilde K}(\chi)\beta]^{-1}\varphi'(t)^{-\beta}\; .$$ This means that $$\varphi'(t) \geq \left(\frac{K_1 (1-t)}{[{\tilde K}(\chi)\beta]^{-1}}\right)^{-1/\beta} \; ,$$ and therefore $\varphi'(t)$ is not integrable at $1$ since $1/\beta>2$. Hence we have $\varphi(t) \to +\infty$ as $t\to 1^-$.
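As a concrete example, let us check that the prototype used later for Equation \eqref{PartEqn}, namely $\chi(t)=(t-1)^\beta$ for $t\geq 1$ and $\chi(t)=0$ for $t\leq 1$ with $0<\beta<1/2$, belongs to $\mathcal{K}$. Indeed $\chi$ is continuous, increasing on $[1,+\infty[$, satisfies $\chi(t)\leq t^\beta$ for $t\geq 1$ (so that one may take ${\tilde K}(\chi)=1$), and
\begin{align*}
\int_1^{+\infty}\frac{dt}{t\chi(t)} &= \int_1^{2}\frac{dt}{t(t-1)^\beta} + \int_2^{+\infty}\frac{dt}{t(t-1)^\beta}\\
&\leq \int_1^{2}\frac{dt}{(t-1)^\beta} + 2^\beta \int_2^{+\infty}\frac{dt}{t^{1+\beta}}<+\infty\; ,
\end{align*}
the first integral being finite since $\beta<1$ and the second since $\beta>0$, using $t-1\geq t/2$ for $t\geq 2$.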
On the other hand, given $x_0 \in \mathbb R^N$ and $R>0$, we use below a smooth function $C: B(x_0,3R/4) \to \mathbb R$ such that $C(z)= 1$ on $B(x_0,R/4)$, $C(z) \geq 1$ in $ B(x_0,3R/4)$ and $C(z)\to +\infty$ when $z\to \partial B(x_0,3R/4)$, and with $$ \frac{|D^2C(x)|}{C(x)} , \frac{|DC(x)|^2}{[C(x)]^2} \leq K_2(R) [\chi(C(x))]^2\; ,$$ where $\chi$ is a function in the class $\mathcal{K}$. If $C_1$ is a function which satisfies the above properties for $x_0=0$ and $R=1$, we see that we can choose $C$ as $$ C(x)=C_1\left(\frac{x-x_0} R\right)\; ,$$ and therefore $K_2(R)$ behaves like $R^{-2}K_2(1)$. To build $C_1$, we first solve $$ \psi'' (t) = K_3 \psi (t)[\chi(\psi (t))]^2, \; \psi(0)=1,\; \psi'(0)=0\; ,$$ for some constant $K_3$ to be chosen later on. Multiplying the equation by $2 \psi'(t)$ and integrating, we obtain that $$ \psi'(t) = F(\psi(t))\; ,$$ where $$ [F(\tau)]^2= 2K_3 \int_1^\tau s[\chi(s)]^2 ds\; .$$ Again we look for a function $\psi$ such that $\psi(t) \to +\infty$ as $t\to 1^{-}$ and, to do so, the following condition should hold $$ \int_1^{+\infty} \frac{d\tau}{F(\tau)} < +\infty\; .$$ But, since $\chi$ is increasing, $$ [F(\tau)]^2 \geq 2K_3 \int_{\tau/2}^\tau s[\chi(s)]^2 ds\geq \; 2K_3 [\tau/2 \chi(\tau/2)]^2 ,$$ and since $\tau \mapsto \chi(\tau/2)$ is in $\mathcal{K}$, we have the result for $F$, and then for $\psi$ by choosing the constant $K_3$ appropriately. Moreover $$ [F(\tau)]^2 \leq 2K_3 (\tau-1) \tau[\chi(\tau)]^2 \leq 2K_3 [\tau \chi(\tau)]^2 \; ,$$ and therefore $$ \psi'(t) \leq (2K_3)^{1/2} \psi(t) \chi(\psi(t))\; .$$ Finally, we can extend $\psi$ by setting $\psi(t)=1$ for $t\leq 0$, and the equation satisfied by $\psi$ shows that we define in that way a $C^2$-function on $(-\infty,1)$. With such a $\psi$, the construction of $C_1$ is easy: we may choose $$ C_1(x):= \psi \bigl(4(|x| - 1/2)\bigr)\quad \hbox{for }x\in B(0,3/4) ,$$ and define $C$ from $C_1$ as above.
We notice that, because of the properties of $\psi$, $\dfrac{|DC(x)|}{[C(x)]^2}$ remains bounded on $B(x_0,3R/4)$ and is an $O(R^{-1})$, a property that we will use later on. \section{The Main Result} In the statement of our main result below, for the sake of clarity, we are going to drop the arguments of the partial derivatives of $F$ and to simply denote by $F_s$ the quantity $\dfrac{\partial F}{\partial s} (x,r,p,M)$ for $s=x,r,p,M$. Actually these arguments are $(x,r,p,M)$ everywhere. Our result is the following \begin{theorem} \label{main}Assume that $F$ is a locally Lipschitz function on $\Omega \times \mathbb R \times \mathbb R^N \times {\mathcal S}^N$ which satisfies: $F(x,r,p,M)$ is Lipschitz continuous in $M$ and $$ F_M(x,r,p,M) \leq 0 \;\hbox{and}\; F_r(x,r,p,M) \geq 0\quad\hbox{a.e. in }\Omega \times \mathbb R \times \mathbb R^N \times {\mathcal S}^N\; ,$$ and let $u\in C(\Omega)$ be a solution of (\ref{GFNLE}).\\ (i) {\bf (Uniformly elliptic equation with coercive gradient dependence: estimates which are independent of the oscillation of $u$)} Assume that there exist a function $\chi \in \mathcal{K}$ and $0<\eta\leq1$ such that, for any $K>0$, there exists $L= L(F,K)$ large enough such that $$ -(1+\eta)|F_x| |p| (1+K\chi(\eta |p|)) - K |F_p| |p|^2 \left(1+K\chi(\eta |p| )\right) \chi(\eta |p| ) - \dfrac1{1+\eta}F_M\cdot M^2 $$ $$ \geq \eta + K \bigl( |p| \left(1+K\chi(\eta |p| )\right) \chi(\eta |p| )\bigr)^2 \; \hbox{a.e.}, $$ in the set $$\{(x,r,p,M);\ |F(x,r,p,M)| \leq K \eta |p|[1 + K\chi(\eta|p|)]+\eta\; ,\; |p|\geq L\}\; .$$ If $\overline{B(x_0,R)} \subset \Omega$ then $u$ is Lipschitz continuous in $B(x_0,R/2)$ and $|Du| \leq {\bar L} $ in $B(x_0,R/2)$, where $\bar L$ depends only on $F$ and $R$.\\ (ii) {\bf (Uniformly elliptic equation with coercive gradient dependence: estimates depending on the oscillation of $u$)} Assume that there exist a function $\chi \in \mathcal{K}$ and $0<\eta\leq1$ small enough such that, for any $K>0$,
there exists $L= L(F,K)$ large enough such that $$-(1+\eta) |F_x||p| - K|F_p| |p|^2\chi(\eta |p|) - \frac{1}{1+\eta} F_M\cdot M^2 \geq \eta + K |p|^2\chi(\eta |p|)^{2}\; \hbox{a.e.},$$ in the set $\{(x,r,p,M);\ |F(x,r,p,M)| \leq K |p|+\eta \; ,\; |p|\geq L\}$. If $\overline{B(x_0,R)} \subset \Omega$ then $u$ is Lipschitz continuous in $B(x_0,R/2)$ and $|Du| \leq {\bar L} $ in $B(x_0,R/2)$, where $\bar L$ depends on $F$, $R$ and $osc_R (u)$, the oscillation of $u$ on $ \overline{B(x_0,R)}$.\\ (iii) {\bf (Non-uniformly elliptic equation: estimates depending on the oscillation of $u$)} Assume that there exist a function $\chi \in \mathcal{K}$ and $0< \eta \leq 1$ small enough such that, for any $K>0$, there exists $L=L(F,K)$ large enough such that $$ -(1+\eta) |F_x| |p| +(1-\eta)^2 F_r|p|^2 - K|F_p| |p|^2\chi(\eta |p|)- \frac{1}{1+\eta} F_M\cdot M^2$$ $$ \geq \eta + K |p|^2\chi(\eta |p|)^{2}\; \hbox{a.e.},$$ in the set $\{(x,r,p,M);\ |F(x,r,p,M)| \leq K |p|+\eta \; ,\; |p|\geq L\}$. If $\overline{B(x_0,R)} \subset \Omega$ then $u$ is Lipschitz continuous in $B(x_0,R/2)$ and $|Du| \leq {\bar L} $ in $B(x_0,R/2)$, where $\bar L$ depends on $F$, $R$ and $osc_R (u)$. \end{theorem} { As an application we consider Equation (\ref{PartEqn}): in order to have a gradient estimate which is independent of the oscillation of $u$, i.e. Result (i) in Theorem~\ref{main}, the idea is to choose $\chi(t)=(t-1)^\beta$ for $t\geq 1$ with $0<\beta <1/2$ and $\gamma:=1+2\beta < m$. The most important point is that, for large $|p|$, the constraint on $F$ reads $$|F(x,r,p,M)| \leq K\eta |p|(1+ K(\eta |p|)^{\beta})+\eta$$ and therefore $|F(x,r,p,M)|$ behaves as $K^2(\eta |p|)^{1+\beta}$ if $|p|$ is large enough. Since $1+\beta<m$, this implies that, for such $(x,r,p,M)$, $$ {\rm Tr}(M)\geq \frac12 |p|^m - ||f||_{L^{\infty}(B(x_0,R))}\; .$$ But, by the Cauchy-Schwarz inequality, $$ {\rm Tr}(M)\leq C(N)[{\rm Tr}(M^2)]^{1/2}\; .$$ Therefore the term $-F_M\cdot M^2$ behaves like $|p|^{2m}$.
For the other terms, we have, for large $|p|$ \begin{enumerate} \item the term $|F_x| |p| (1+K\chi(\eta |p|)) $ behaves like $|p|^{1+\beta}=|p|^{\gamma-\beta}$; \item the term $|F_p| |p|^2 \left(1+K\chi(\eta |p| )\right) \chi(\eta |p| )$ behaves like $|p|^{m+1+2\beta}=|p|^{m+\gamma}$; \item the term $K ||F_M||_\infty \bigl( |p| \left(1+K\chi(\eta |p| )\right) \chi(\eta |p| )\bigr) ^2$ behaves like $|p|^{2(1+2\beta)}=|p|^{2\gamma}$. \end{enumerate} Since $\gamma < m$, the term $-F_M\cdot M^2$ clearly dominates all the other terms as $|p|$ tends to $+\infty$; therefore we have the gradient bound, since the assumption holds for any $0<\eta \leq 1$. Moreover the classical case ($m=1$) can also be treated under the assumptions of Result~(ii). In this example, it is also clear that we can replace the term $|Du|^m$ by a term $H(Du)$ where $H$ satisfies: there exists $\chi \in \mathcal{K}$ such that $$\frac{|p|\chi(|p|)}{H(p)}\to 0 \quad\hbox{as } |p|\to +\infty\; , $$ and $$ \frac{|H_p|(|p|\chi(|p|))^2}{[H(p)]^2}\to 0 \quad\hbox{as } |p|\to +\infty\; .$$} \ \\ In the case of non-uniformly elliptic equations, the gradient bound necessarily comes from the $F_r|p|^2$-term. We consider the equation \begin{equation}\label{PartEqnNUN} -{\rm Tr}(A(x)D^2 u) + |Du|^m = f(x) \quad\hbox{in }\Omega\; , \end{equation} where $m>1$ and $f $ is locally bounded and Lipschitz continuous; concerning $A$, we use the classical assumption: $A(x)=\sigma(x)\cdot \sigma^T(x)$ for some bounded, Lipschitz continuous function $\sigma$, where $\sigma^T(x)$ denotes the transpose matrix of $\sigma(x)$. In order to obtain a local gradient bound for $u$, a change of variable is necessary: assuming (without loss of generality) that $u\geq 1$ at least in the ball $\overline{B(x_0,R)}$, we can use the change $u=\exp(v)$.
The equation satisfied by $v$ is $$ -{\rm Tr}(A(x)D^2 v) -A(x)Dv\cdot Dv+ \exp((m-1)v)|Dv|^m = \exp(-v)f(x) \quad\hbox{in }\Omega\; , $$ and the aim is now to apply Theorem~\ref{main}-(iii) to get the gradient bound for $v$ (hence for $u$). The computation of the different terms gives $$ F_r(x,r,p,M)= (m-1)\exp((m-1)r)|p|^m + \exp(-r)f(x)\; ,$$ $$ F_x(x,r,p,M)= -{\rm Tr}(A_x(x)M)-A_x(x)p\cdot p-\exp(-r)f_x (x)\; ,$$ $$F_p(x,r,p,M)= -2A(x)p +m\exp((m-1)r)|p|^{m-2}p\; ,$$ $$ - F_M(x,r,p,M)M^2= {\rm Tr}(A(x)M^2)\; .$$ We first use the Cauchy-Schwarz inequality and the assumption on $A$ to deduce that, for any $\eta>0$, $$ |{\rm Tr}(A_x(x) M)| |p|\leq \frac{1}{1+\eta} {\rm Tr}(A(x)M^2)+ O((|\sigma_x||p|)^2)\; ;$$ this control of the first term in $F_x(x,r,p,M)$ is the only use of the term $- F_M(x,r,p,M)M^2$. Therefore the $F_r(x,r,p,M)|p|^2$-term, which behaves like $|p|^{m+2}$ if $m>1$, has to control the terms $$ (A_x(x)p\cdot p)\,|p|=O(|p|^3)\; ,\; \exp(-v)f_x (x) |p|=O(|p|)\; ,\; 2A(x)p\cdot p =O(|p|^2)\; .$$ We now have to consider the $F_p$-term and the term $K \bigl(|p|\chi (\eta |p|)\bigr)^{2}$ in the right-hand side. Notice that, for the time being, we have chosen neither $\chi$ nor $\eta$. The $F_p$-term behaves as $|p|^{\max(1,m-1)}$ and therefore $|F_p| |p|^2\chi(\eta |p|)$ behaves as $|p|^{\max(3,m+1)}\chi(\eta |p|)$. On the other hand, $K \bigl(|p|\chi (\eta |p|)\bigr)^{2}$ behaves as $|p|^2[\chi(\eta |p|)]^2$. If we choose $\chi \in \mathcal{K}$ with $\beta$ small enough, because of the growth restriction $\chi(t)\leq {\tilde K}(\chi)t^\beta$ at infinity, these two terms are controlled by the $F_r|p|^2$-one. Therefore Theorem~\ref{main} (iii) applies. It is worth pointing out that, in this last example, we do not use the fact that the assumption has to hold only in the set $\{(x,r,p,M);\ |F(x,r,p,M)| \leq K |p|+\eta \; ,\; |p|\geq \bar L\}$, a fact which is going to be (almost) the general case in the parabolic setting.
{ \section{Proof of Theorem~\ref{main}} We start by proving (i): the aim is to prove that, for any $x\in B(x_0,R/4)$, $D^+u(x)$ is bounded with an explicit bound. This will provide the desired gradient bound. We recall that $$ D^+u(x)=\{p\in\mathbb R^N:\ u(x+h)\leq u(x)+p\cdot h+o(|h|) \ \text{ as } h\to 0\}.$$ To do so, we consider on $$\Gamma_L :=\{(x,y) \in B(x_0,3R/4) \times B(x_0,R) : LC(x)(|x-y| +\alpha)<1\}$$ the following function $$ \Phi(x,y)= u(x)-u(y) - \varphi\left (LC(x)(|x-y| +\alpha)\right )\; ,$$ where \begin{itemize} \item $L\geq \max(1,4/R)$ is a constant which is our future gradient bound (and therefore has to be chosen large enough), \item the functions $\varphi$ and $C$ are built in Section~\ref{prelim}, \item $\alpha >0$ is a small constant which will eventually tend to $0$. \end{itemize} We remark that the above function achieves its maximum in the open set $\Gamma_L$: indeed, if $(x,y) \in \Gamma_L$, we have $ LC(x)\alpha<1$ and therefore $x\in \overline{B(x_0,R')}$ for some $R'<3R/4$. Moreover $LC(x)|x-y|<1$ implies $|x-y|<L^{-1}$ and, since $L> 4/R$, this implies $y\in \overline{B(x_0,R'+R/4)}$ and $R'+R/4<R$. Therefore, clearly $\Phi(x,y) \to - \infty$ if $(x,y)\to \partial \Gamma_L$. Next we argue by contradiction: if, for some $L$, this maximum is achieved for any $\alpha$ at $({\bar x}_\alpha,{\bar y}_\alpha)$ with ${\bar x}_\alpha={\bar y}_\alpha$, then $\Phi({\bar x}_\alpha,{\bar x}_\alpha)=- \varphi(LC({\bar x}_\alpha)\alpha)$ and therefore necessarily ${\bar x}_\alpha \in B(x_0,R/4)$ by the maximality property and the form of $C$. Moreover, for any $x,y$, $$u(x)-u(y) - \varphi(LC(x)(|x-y| +\alpha))\leq -\varphi(L\alpha)\; ,$$ and if this is true for a fixed $L$ and any $\alpha$, letting $\alpha \to 0$ implies that, for any $x,y$, $$u(x)-u(y) - \varphi(LC(x)|x-y| )\leq 0\; .$$ Choosing $x\in B(x_0,R/4)$, we have $$ u(y)-u(x) \geq - \varphi(L|x-y| )\; ,$$ and this inequality implies that any element in $D^+u(x)$ has a norm which is less than $L$, which is what we wanted to prove.
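The last implication is worth spelling out: since $\varphi(0)=0$ and $\varphi'(0)=1$, we have $\varphi(s)=s+o(s)$ as $s\to 0^+$. Hence, if $p\in D^+u(x)$, the definition of $D^+u(x)$ combined with the inequality $u(y)-u(x)\geq -\varphi(L|x-y|)$ applied with $y=x+h$ gives
$$ p\cdot h \geq u(x+h)-u(x) - o(|h|) \geq -\varphi(L|h|)- o(|h|)= -L|h|+o(|h|)\; ,$$
and choosing $h=-t\,p/|p|$ with $t\to 0^+$ yields $-t|p|\geq -Lt +o(t)$, hence $|p|\leq L$.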
Notice that, by using slightly more complicated arguments, the same conclusion holds if, for some $L$, we have ${\bar x}_\alpha-{\bar y}_\alpha \to 0$ when $\alpha \to 0$. Therefore, we may assume without loss of generality that, for any fixed $L$, the maximum points $({\bar x}_\alpha,{\bar y}_\alpha)$ of $\Phi$ satisfy not only ${\bar x}_\alpha\neq {\bar y}_\alpha$ for $\alpha$ small enough, but also that ${\bar x}_\alpha-{\bar y}_\alpha$ is bounded away from $0$ when $\alpha \to 0$. We are going to prove that this is a contradiction for $L$ large enough. For the sake of simplicity of notations, we omit the index $\alpha$ in all the quantities which depend on $\alpha$ (actually they also depend on $L$). In particular, we denote by $(x,y)$ a maximum point of $\Phi$ and we set $t=LC(x)(|x-y| +\alpha)$ and $$ p= \varphi'(t)LC(x) \frac{(x-y)}{|x-y|} \; ,\; q= \varphi'(t)LDC(x) (|x-y|+\alpha)\; .$$ By a classical result of the User's guide (cf. Crandall, Ishii and Lions \cite{users}), there exist matrices $X,Y \in {\mathcal S}^N$ such that $(p+q,X)\in \overline{D^{2,+}}u(x)$, $(p,Y)\in \overline{D^{2,-}}u(y)$, for which the following viscosity inequalities hold $$ F(x,u(x), p+q,X) \leq 0\; ,\; F(y,u(y), p,Y) \geq 0\; .$$ Moreover the matrices $X,Y$ satisfy, for any $\varepsilon >0$, $$ -\left(\frac1\varepsilon + ||A||\right) I_{2N} \leq \left(\begin{array}{cc}X & 0 \\0 & -Y\end{array}\right)\leq A + \varepsilon A^2\; ,$$ where, if $\psi(x,y)= \varphi(LC(x)(|x-y| +\alpha))$, $A=D^2\psi(x,y)$ and $||A||=\max\{|\lambda|:\ \hbox{$\lambda $ is an eigenvalue of $A$}\}$. Since $\varepsilon>0$ is arbitrary and since we are going to use only the second of the above inequalities, we may choose a sufficiently small $\varepsilon$ in order that the term $\varepsilon A^2$ becomes negligible. Using this remark, we argue below assuming that $\varepsilon=0$ in order to simplify the exposition.
With this convention, the matrices $X,Y$ satisfy, for any $r,s \in \mathbb R^N$, \begin{equation}\label{fu} Xr\cdot r - Ys \cdot s \leq \gamma_1|r-s|^2+2\gamma_2 |r-s||r|+\gamma_3|r|^2\; , \end{equation} where $$ \gamma_1= \frac{\varphi'(t)LC(x)}{|x-y|}+ \varphi''(t)(LC(x))^2\; ,$$ $$ \gamma_2= \varphi'(t)L|DC(x)|+ \varphi''(t)L^2|DC(x)|C(x)(|x-y|+\alpha)\; ,$$ $$ \gamma_3= \varphi'(t)\frac{|D^2C(x)|}{C(x)}t+ \varphi''(t)\frac{|DC(x)|^2}{[C(x)]^2}t^2\; .$$ Easy manipulations show that $$ \gamma_2 \leq \gamma_1\frac{|DC(x)|}{C(x)}(|x-y|+\alpha) + o_\alpha(1)\leq \gamma_1 K_2^{1/2}\chi(C(x))(|x-y|+\alpha)+ o_\alpha(1)\; ,$$ $$ \gamma_3 \leq \gamma_1 K_2[\chi(C(x))]^2(|x-y|+\alpha)^2+ o_\alpha(1)\; ,$$ where the $o_\alpha(1)$-terms come from terms of the form $\alpha/|x-y|$. Again, for the sake of clarity, we are going to drop these terms, which play no role in the end. By the Cauchy-Schwarz inequality, we deduce that, using the $\eta$ appearing in the assumption, \begin{equation}\label{si} Xr\cdot r - Ys \cdot s \leq (1+\eta)\gamma_1|r-s|^2+ B(R,\eta)\gamma_1 [\chi(C(x))]^2(|x-y|+\alpha)^2|r|^2\; , \end{equation} where $B(R,\eta)=(1+\eta^{-1})K_2$ depends on $R$ through $K_2$ and therefore is an $O(R^{-2})$ if $\eta$ is fixed. Coming back to $p$ and $q$, we also have $$ |q| = |p|\frac{|DC(x)|}{C(x)} (|x-y|+\alpha) \leq |p|\frac{|DC(x)|}{L [C(x)]^2} \leq O((RL)^{-1})|p|\; ,$$ since $LC(x)(|x-y|+\alpha)\leq 1$, $C\geq 1$ everywhere and since $\dfrac{|DC(x)|}{ [C(x)]^2}$ is an $O(R^{-1})$. In order to have simpler formulas, we denote below by $\varpi_1$ any quantity which is an $O((RL)^{-1})$. Now we arrive at the key point of the proof: by \eqref{fu}, choosing $r=0$, we have $-Y\leq \gamma_1I_N$ where $I_N$ is the identity matrix in $\mathbb R^N$.
Therefore the matrix $\displaystyle I_N+[(1+\eta)\gamma_1]^{-1}Y$ is invertible and, rewriting \eqref{si} as $$ Xr\cdot r \leq Ys \cdot s + (1+\eta)\gamma_1|r-s|^2+ B(R,\eta)\gamma_1 [\chi(C(x))]^2(|x-y|+\alpha)^2|r|^2 \; ,$$ we can take the infimum in $s$ in the right-hand side and we end up with $$X \leq Y(I_N+\frac{1}{(1+\eta)\gamma_1}Y)^{-1}+B(R,\eta)\gamma_1 [\chi(C(x))]^2(|x-y|+\alpha)^2I_N\; .$$ Setting $\tilde Y:= Y(I_N+\frac{1}{(1+\eta)\gamma_1}Y)^{-1}$, this implies that we have $(p+q,\tilde Y+B(R,\eta)\gamma_1 [\chi(C(x))]^2(|x-y|+\alpha)^2I_N)\in \overline{D^{2,+}}u(x)$, $(p,Y)\in \overline{D^{2,-}}u(y)$ and then, using the Lipschitz continuity of $F$ in $M$, we have the viscosity inequalities $$ F(x,u(x), p+q,\tilde Y) \leq ||F_M||_\infty B(R,\eta)\gamma_1 [\chi(C(x))]^2(|x-y|+\alpha)^2\; ,$$ $$ F(y,u(y), p,Y) \geq 0\;.$$ Next we introduce the function $$ g(\tau):= F(X(\tau), U(\tau), P(\tau), Z(\tau))-\tau||F_M||_\infty B(R,\eta)\gamma_1 [\chi(C(x))]^2(|x-y|+\alpha)^2 \; ,$$ where $$ X(\tau) = \tau x+(1-\tau)y\; ,\; U(\tau)= \tau u(x)+(1-\tau)u(y)\; ,\; P(\tau)=p+\tau q\; ,$$ $$ Z(\tau) = Y(I_N+\frac{\tau}{(1+\eta)\gamma_1}Y)^{-1}\; .$$ From now on, in order to simplify the exposition, we are going to argue as if $F$ were $C^1$: the case when $F$ is just locally Lipschitz continuous follows from tedious but standard approximation arguments. The above viscosity inequalities read $g(0)\geq 0$ and $g(1)\leq 0$: if we can show that the $C^1$-function $g$ satisfies $g'(\tau)>0$ if $g(\tau)=0$, we would have a contradiction.
Therefore we compute \begin{eqnarray*} g'(\tau) &=& F_x\cdot(x-y)+F_r (u(x)-u(y)) + F_p\cdot q + F_M\cdot Z'(\tau)\\ &&-||F_M||_\infty B(R,\eta)\gamma_1 [\chi(C(x))]^2(|x-y|+\alpha)^2\; , \end{eqnarray*} and using that $F_r \geq 0$ and $Z'(\tau)=-((1+\eta)\gamma_1)^{-1}[Z(\tau)]^2$, we are led to \begin{eqnarray*} g'(\tau) &\geq & (\gamma_1)^{-1}\biggl\{-|F_x| \gamma_1 |x-y|- \gamma_1 |F_p| |q| - \dfrac1{1+\eta}F_M\cdot [Z(\tau)]^2\\ && -B(R,\eta) ||F_M||_\infty (\gamma_1)^2 [\chi(C(x))]^2(|x-y|+\alpha)^2\biggr\}. \end{eqnarray*} Before estimating the different terms inside the brackets, we point out that, contrary to \cite{B-wb} where $Z(\tau)$ was given by $\tau X + (1-\tau)Y$ and where we had to prove an inequality between $X-Y$ and $-[Z(\tau)]^2$, here this inequality comes for free because of the form of $Z(\tau)$: this is the key idea of Cardaliaguet \cite{C1}. Now we estimate the terms $\gamma_1 |x-y|$, $\gamma_1 |q|$ and $\gamma_1 \chi(C(x))(|x-y|+\alpha)$ in terms of $|P(\tau)|$ in order to be able to use the assumptions on $F$. Using that $LC(x)(|x-y|+\alpha) \leq 1$ and the properties of $\varphi$, we have \begin{eqnarray*} \gamma_1|x-y| & \leq & \varphi'(t)LC(x)+ \varphi''(t)(LC(x))^2|x-y|\\ & \leq & \varphi'(t)LC(x) + K_1\varphi'(t)\chi(\varphi'(t)) LC(x)\\ & \leq & |P(\tau)| (1+\varpi_1)\left(1 + K_1\chi(\varphi'(t)) \right)\\ & \leq & |P(\tau)| (1+\varpi_1)\left(1+K_1 \chi(L^{-1}|P(\tau)| (1+\varpi_1))\right)\; . \end{eqnarray*} Indeed, recalling the estimate on $|q|$, $ \varphi'(t)LC(x)=|p|=|P(\tau)|(1+\varpi_1\tau)$ and, on the other hand, since $C\geq 1$, we have $$\chi(\varphi'(t))\leq \chi(L^{-1}|p|)\leq \chi(L^{-1}|P(\tau)| (1+\varpi_1)).$$ From now on, we are going to assume that $L$ is chosen large enough in order to have $L^{-1} (1+\varpi_1)\leq \eta$ and, since $R$ is fixed, $|\varpi_1| \leq \eta$. Notice that these constraints on $L$ depend only on $R$ and $\eta$, hence on $R$ and $F$.
Using this choice, the above estimate of $\chi(\varphi'(t))$ -- and we can argue in the same way for $\chi(C(x))$ -- takes the simple form \begin{equation}\label{kest} \chi(\varphi'(t)), \chi(C(x)) \leq \chi(\eta |P(\tau)|). \end{equation} This leads to the simpler estimate $$ \gamma_1|x-y| \leq |P(\tau)| (1+\eta)(1+K_1\chi(\eta |P(\tau)|))\; .$$ In the same way, since we can take $\alpha$ as small as we want and $|x-y|$ is bounded away from $0$, one has $$ \gamma_1 (|x-y| +\alpha) \leq |P(\tau)| (1+\eta)\left(1+K_1\chi(\eta |P(\tau)|)\right) +o_\alpha(1)\; .$$ This allows us to estimate the $F_p$-term, namely \begin{align*} \gamma_1 |q| & \leq \gamma_1 |p| \frac{|DC(x)|}{C(x)}(|x-y| +\alpha)\\ &\leq |P(\tau)|^2 (1+\eta)^2 \left(1+K_1\chi(\eta |P(\tau)| )\right)K_2^{1/2} \chi(C(x))+o_\alpha(1),\\ & \leq K_2^{1/2} |P(\tau)|^2 (1+\eta)^2 \left(1+K_1\chi(\eta |P(\tau)| )\right) \chi(\eta |P(\tau)| )+o_\alpha(1)\; . \end{align*} Finally, by the same estimates, $$\gamma_1 \chi(C(x)) (|x-y|+\alpha) \leq |P(\tau)| (1+\eta)\left(1+K_1\chi(\eta |P(\tau)| )\right)\chi(\eta |P(\tau)| ) +o_\alpha(1)\; .$$ We end up with \begin{eqnarray*} g'(\tau) &\geq & (\gamma_1)^{-1}\biggl\{-|F_x| |P(\tau)| (1+\eta)(1+K_1\chi(\eta |P(\tau)|) ) \\ && \phantom{(\gamma_1)^{-1}\biggl\{} - |F_p| K_2^{1/2} |P(\tau)|^2 (1+\eta)^2 \left(1+K_1\chi(\eta |P(\tau)| )\right) \chi(\eta |P(\tau)| ) \\ && \phantom{(\gamma_1)^{-1}\biggl\{} - \dfrac1{1+\eta}F_M\cdot [Z(\tau)]^2\\ && -B(R,\eta) ||F_M||_\infty \bigl( |P(\tau)| (1+\eta)\left(1+K_1\chi(\eta |P(\tau)| )\right) \chi(\eta |P(\tau)| )\bigr) ^2\biggr\} \\ && + o_\alpha(1). \end{eqnarray*} On the other hand, in order to take into account the constraint $g(\tau)= 0$, we have to estimate $\gamma_1 [\chi(C(x))]^2 (|x-y|+\alpha)^2$.
Since $|x-y|$ is bounded away from $0$ and $LC(x)(|x-y|+\alpha)\leq 1$, we have \begin{eqnarray*} \gamma_1(|x-y|+\alpha)^2 & \leq & \varphi'(t)+ \varphi''(t) +o_\alpha (1)\\ & \leq & \varphi'(t)[1 + K_1\chi(\varphi'(t))]+ o_\alpha (1)\\ & \leq & (1+\eta) \frac{|P(\tau)|}{LC(x)}[1 + K_1\chi(\varphi'(t))]+o_\alpha (1)\\ & \leq & (1+\eta) \frac{|P(\tau)|}{LC(x)}[1 + K_1\chi(\eta|P(\tau)| )]+o_\alpha (1). \end{eqnarray*} But $\dfrac{[\chi(C(x))]^2}{C(x)} \leq [\tilde K(\chi)]^2$ and therefore $$ \gamma_1 [\chi(C(x))]^2 (|x-y|+\alpha)^2 \leq \eta(1+\eta)[\tilde K(\chi)]^2 |P(\tau)|[1 + K_1\chi(\eta |P(\tau)| )]+o_\alpha (1).$$ This implies $$ |F(X(\tau), U(\tau), P(\tau), Z(\tau))|\leq \eta(1+\eta)[\tilde K(\chi)]^2||F_M||_\infty B(R,\eta) |P(\tau)|[1 + K_1\chi(\eta|P(\tau)|)]+o_\alpha (1)\; , $$ while $$ |P(\tau)| \geq (1-\eta)L\; .$$ The conclusion follows by applying the assumption on $F$ for $L$ large enough and $\alpha$ small enough in order that the $o_\alpha(1)$-terms are controlled by the $\eta$-terms. Taking $L$ large enough depending on $\eta$ and $R$, we have a contradiction and the proof of (i) is complete. Now we turn to the proof of (ii), where we choose $\varphi(t)=t$ and $$\Gamma'_L :=\{(x,y) \in B(x_0,3R/4) \times B(x_0,R) : LC(x)(|x-y| +\alpha)\leq osc_R (u)\}\; .$$ The proof follows the same arguments, except that the fact that $\varphi''(t)\equiv 0$ allows different estimates on the $\gamma_i$, $i=1,2,3$, because several terms no longer appear. We denote by $\varpi_2$ any quantity of the form $O(osc_R (u)(RL)^{-1})$ and we choose $L$ large enough in order to have $|\varpi_2|\leq \eta$ for any of these terms and $L^{-1} \leq \eta/(1+\eta)$. We notice that, here, the constraints on $L$ depend not only on $R$ and $\eta$ but also on $osc_R (u)$. We have $p= LC(x) \dfrac{(x-y)}{|x-y|}$ and therefore $$ |q|= L\,|DC(x)| (|x-y|+\alpha)= |p| \frac{|DC(x)|}{C^2}\frac{LC(x)(|x-y|+\alpha)}{L}=\varpi_2|p|\leq \eta |p|\; ,$$ since $\dfrac{|DC(x)|}{C^2}$ is an $O(R^{-1})$.
Using this inequality and taking into account our choice of $L$, it is easy to check that (\ref{kest}) still holds. Moreover we have $$ \gamma_1= \frac{LC(x)}{|x-y|}\; ,\; \gamma_2= L\,|DC(x)|\; , \; \gamma_3= L |D^2C(x)|(|x-y|+\alpha) \; .$$ We still have analogous estimates on $\gamma_1, \gamma_2,\gamma_3$: $$ \gamma_2 = \gamma_1\frac{|DC(x)|}{C(x)}|x-y| \leq \gamma_1 K_2^{1/2}\chi(C(x))|x-y|\; ,$$ $$ \gamma_3 \leq \gamma_1 K_2[\chi(C(x))]^2(|x-y|+\alpha)^2\; .$$ The proof is then done in the same way as in the first case, with the computation of $ g'(\tau)$ and then with the estimates of the different terms \begin{eqnarray*} g'(\tau) &\geq & (\gamma_1)^{-1}\left\{-|F_x| \gamma_1 |x-y|-\gamma_1 |F_p| |q| - \frac{1}{1+\eta}F_M\cdot [Z(\tau)]^2\right.\\ && \left . -B(R,\eta) ||F_M||_\infty (\gamma_1)^2 [\chi(C(x))]^2(|x-y|+\alpha)^2\right\}\; . \end{eqnarray*} But here $$\gamma_1 |x-y|=|p| \leq |P(\tau)| (1+\eta)\; ,$$ and in the same way, \begin{align*} \gamma_1 |q| & = \frac{LC(x)}{|x-y|} L |DC(x)|(|x-y| +\alpha)\\ &\leq |p|^2 \dfrac{|DC(x)|}{C(x)}(1+o_\alpha(1))\\ & \leq K_2^{1/2} (1+\eta)^2 |P(\tau)|^2 \chi(\eta |P(\tau)| )+o_\alpha(1)\; , \end{align*} and $$\gamma_1 \chi(C(x)) (|x-y|+\alpha) \leq (1+\eta)|P(\tau)|\chi(\eta|P(\tau)|)+o_\alpha (1)\; .$$ We end up with \begin{eqnarray*} g'(\tau) &\geq & (\gamma_1)^{-1}\biggl\{-|F_x| (1+\eta)|P(\tau)| - K_2^{1/2} (1+\eta)^2|F_p| |P(\tau)|^2 \chi(\eta |P(\tau)| ) \\ && -\frac{1}{1+\eta} F_M\cdot [Z(\tau)]^2 - B(R,\eta) ||F_M||_\infty (1+\eta)^2 |P(\tau)|^2[\chi(\eta|P(\tau)|)]^2 \\ && +o_\alpha (1) \biggr\}\; .
\end{eqnarray*} On the other hand, for the constraint $g(\tau)= 0$, we have \begin{align*} \gamma_1[\chi(C(x))]^2(|x-y|+\alpha)^2 &= |p|\frac{[\chi(C(x))]^2}{C(x)}\dfrac{LC(|x-y|+\alpha)^2}{|x-y|}\\ &\leq (1+\eta)[\tilde K(\chi)]^2 |P(\tau)|(1+\varpi_2)(1+o_\alpha(1))\\ &\leq (1+\eta)^2[\tilde K(\chi)]^2|P(\tau)|+o_\alpha(1)\; , \end{align*} and $$ |P(\tau)|\geq LC(x) (1-\eta)\geq L(1-\eta)\; .$$ Hence \begin{equation}\label{F-prop} |F(X(\tau), U(\tau), P(\tau), Z(\tau))| \leq B(R,\eta)||F_M||_\infty (1+\eta)^2 |P(\tau)| + o_\alpha(1)\; . \end{equation} The conclusion follows as in the first case by applying the assumption on $F$ for $L$ large enough and $\alpha$ small enough for which we have a contradiction. For the proof of (iii), we keep the same test-function and the same set $\Gamma'_L$ but since we are not expecting the gradient bound to come from the same term in $g'(\tau)$, we are going to change the strategy in our computation of $g'(\tau)$ by keeping the $F_r$-term. Using that $F_r \geq 0$ and $$ u(x)-u(y)\geq LC(x)(|x-y|+\alpha)= \frac{|p |^2}{\gamma_1}(1+o_\alpha(1))\; ,$$ we obtain \begin{eqnarray*} g'(\tau)& = & F_x\cdot(x-y)+F_r (u(x)-u(y)) + F_p\cdot q + F_M\cdot Z'(\tau)\\ &&-||F_M||_\infty B(R,\eta)\gamma_1 [\chi(C(x))]^2(|x-y|+\alpha)^2\; ,\\ &\geq& (\gamma_1)^{-1}\left\{F_x\cdot p+ F_r |p|^2 - \gamma_1 |F_p| |q| - \frac{1}{1+\eta}F_M\cdot [Z(\tau)]^2\right.\\ && \left . -B(R,\eta) ||F_M||_\infty (\gamma_1)^2 [\chi(C(x))]^2(|x-y|+\alpha)^2+o_\alpha(1)\right\}\; . \end{eqnarray*} This computation is close to the one given in \cite{B-wb} if there is no localization term ($C\equiv1$). 
Since $|P(\tau)| (1-\eta)\leq |p| \leq |P(\tau)| (1+\eta)$ and using analogous estimates as above, we are led to \begin{eqnarray*} g'(\tau) &\geq & (\gamma_1)^{-1}\biggl\{-(1+\eta) |F_x| |P(\tau)| +(1-\eta)^2 F_r |P(\tau)|^2\\ && - K_2^{1/2} (1+\eta)^2|F_p| |P(\tau)|^2 \chi(\eta |P(\tau)| )- \frac{1}{1+\eta}F_M\cdot [Z(\tau)]^2 \\ && -B(R,\eta) ||F_M||_\infty (1+\eta)^2 [\tilde K(\chi)]^2 |P(\tau)|^2[\chi(\eta|P(\tau)|)]^2 \biggr\}+o_\alpha (1) \; . \end{eqnarray*} On the other hand, the constraint $g(\tau)= 0$ still implies (\ref{F-prop}) and we also conclude by choosing $L$ large enough and $\alpha$ small enough.} \section{The parabolic case} In this section, we consider evolution equations of the general form \begin{equation}\label{GFNLP} u_t + F(x, t, u, D u, D^2u) = 0 \quad\hbox{in }\Omega \times (0,T)\; , \end{equation} and the aim is to provide a local gradient bound where ``local'' means both local in space and time. As a consequence, we will have to provide a localization in time as well; a second main difference is that we will not be able to use the fact that the equation holds, since the $u_t$-term carries no information in general, and therefore the assumptions on $F$ have to hold for any $x, t, r, p, M$ and not only for those for which $F(x, t, r, p, M)$ is close to $0$. \begin{theorem} \label{mainP}{\bf (Estimates for non-uniformly parabolic equations: estimates depending on the oscillation of $u$)}\\ Assume that $F$ is a locally Lipschitz function on $\Omega \times (0,T) \times \mathbb R \times \mathbb R^N \times {\mathcal S}^N$ which satisfies: $F(x,t,r,p,M)$ is Lipschitz continuous in $M$ and $$ F_M(x,t,r,p,M) \leq 0 \quad\hbox{a.e. in }\Omega \times (0,T)\times \mathbb R \times \mathbb R^N \times {\mathcal S}^N \; ,$$ and let $u\in C(\Omega\times (0,T))$ be a solution of (\ref{GFNLP}).
Assume that there exist a function $\chi \in \mathcal{K}$ and $0<\eta\leq 1$ such that, for any $K>0$, there exists $L=L(\eta,K)$ large enough such that, for $|p|\geq L$, we have $F_r(x,t,r,p,M)\geq 0$ and $$ -(1+\eta) |F_x| |p| (1+\chi(\eta |p|)) - K |F_p| |p|^2 \left(1+\chi(\eta |p| )\right) \chi(\eta |p| )- \frac{1}{1+\eta} F_M\cdot M^2$$ $$ \geq \eta + K |p|^2\biggl( \chi((1+\eta) |p|)+ \chi(\eta |p|)^{2}\biggr)\; \hbox{a.e.}$$ If $\overline{B(x_0,R)} \subset \Omega$ and $\delta>0$, then $u$ is Lipschitz continuous in $x$ in $B(x_0,R/2)\times [\delta, T-\delta]$ and $|Du| \leq {\bar L} $ in $B(x_0,R/2)\times [\delta, T-\delta]$, where $\bar L$ depends on $F$, $R$, $\delta$ and the oscillation of $u$ in $B(x_0,R)\times (\delta/2,T-\delta]$. \end{theorem} It is worth pointing out that the assumptions of Theorem~\ref{mainP} are rather close to those of Theorem~\ref{main}-(iii) and the same computations provide a gradient bound for the evolution equation \begin{equation}\label{PartEqnNUN-P} u_t-{\rm Tr}(A(x)D^2 u) + |Du|^m = f(x) \quad\hbox{in }\Omega\times (0,T)\; , \end{equation} if $m>1$. \noindent{\bf Proof of Theorem~\ref{mainP} :} We argue as in the proof of Theorem~\ref{main}-(iii), except that here $L=L(t)$ with $L(t) \to +\infty$ as $t \to (\delta/2)^+$. We still choose $\varphi(t)=t$ and we denote by $\Gamma'_L$ the subset of points $(x,y,t) \in B(x_0,3R/4) \times B(x_0,R)\times (\delta/2,T-\delta]$ such that $$ L(t)C(x)(|x-y| +\alpha)\leq osc_{R,\delta} (u)\; ,$$ where $osc_{R,\delta} (u)$ denotes the oscillation of $u$ in $B(x_0,R)\times (\delta/2,T-\delta]$.
We consider maximum points $(x,y,t) \in \Gamma'_L$ of the function $$(x,y,t)\mapsto u(x,t)-u(y,t) - L(t)C(x)(|x-y| +\alpha)\; ,$$ and, if $x\neq y$, we are led to the viscosity inequalities $$ a+ F(x,t, u(x,t), p+q,X) \leq 0\; ,\; b+F(y,t,u(y,t), p,Y) \geq 0\; ,$$ where $(a,p+q,X)\in D^{2,+}u(x,t)$, $(p,Y)\in D^{2,-}u(y,t)$ and $$a-b \geq L'(t)C(x)(|x-y|+\alpha).$$ As in the proof of Theorem~\ref{main}, the second inequality holds for $\tilde Y$ as well and, subtracting these inequalities, we have $$ L'(t)C(x)(|x-y|+\alpha)+ F(x,t,u(x,t), p+q,X) -F(y,t,u(y,t), p,\tilde Y)\leq 0\; .$$ Then, with the notations of the proof of Theorem~\ref{main}, we introduce $$ g(\tau) := F(X(\tau), t, U(\tau), P(\tau), Z(\tau))-\tau ||F_M||_\infty B(R,\eta)\gamma_1 [\chi(C(x))]^2(|x-y|+\alpha)^2$$ $$ +\tau L'(t)C(x)(|x-y|+\alpha) \; .$$ Here we have no information on the signs of $g(0)$ and $g(1)$; we only know that $g(1)-g(0)\leq 0$. Therefore, in order to obtain a contradiction, we have to show that $g'(\tau) >0$ for any $0\leq\tau\leq1$, provided we choose a function $L(\cdot)$ such that $L(t)$ is large enough for any $t\in (\delta/2,T-\delta]$. The computation of $g'(\tau)$ and the estimates are done as above; we just have to estimate the new term $L'(t)C(x)(|x-y|+\alpha)$, which is multiplied by $\gamma_1$ when we put it inside the bracket. We have $$ \gamma_1 L'(t)C(x)(|x-y|+\alpha) = L(t) L'(t)[C(x)]^2 (1+o_\alpha(1)) \; ,$$ and we choose $L$ as the solution of the ODE $$ L'(t)=-k_T L(t)\chi(L(t))\; , \quad L(T-\delta) = L_T \;\hbox{(large enough)}\; .$$ By choosing $k_T>0$ properly, we have $L((\delta/2)^+)=+\infty$ (notice that $k_T$ decreases when $L_T$ increases). Since $L(t)\leq |p|\leq (1+\eta) |P(\tau)|$, we have $$L(t) L'(t)[C(x)]^2 \geq -k_T |P(\tau)|^2 \chi((1+\eta)|P(\tau)|)\; .$$ Using this estimate, the conclusion follows as above: applying the assumption on $F$ with $K$ large enough and $\alpha$ small enough, we reach a contradiction by taking $L_T$ large enough. 
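The choice of $k_T$ can be made explicit by integrating the ODE. The following short computation is a sketch; it assumes that $\int^{+\infty} \frac{ds}{s\,\chi(s)}$ converges, which we take as implicit in the properties of $\chi$.

```latex
% Separation of variables in L'(t) = -k_T L(t)\chi(L(t)), L(T-\delta)=L_T:
\[
\frac{dL}{L\,\chi(L)} = -k_T\, dt
\quad\Longrightarrow\quad
\int_{L_T}^{L(t)} \frac{ds}{s\,\chi(s)} = k_T\,(T-\delta-t)\; .
\]
% Hence L(t) tends to +infinity as t tends to (delta/2)^+ as soon as
\[
k_T\,\Bigl(T-\frac{3\delta}{2}\Bigr) = \int_{L_T}^{+\infty} \frac{ds}{s\,\chi(s)}\; ,
\]
% and this choice indeed makes k_T decrease when L_T increases.
```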
\end{document}
\begin{document} \title{The graphs with the max-Mader-flow-min-multiway-cut property} \begin{abstract} We are given a graph $G$, an independent set $\mathcal{S} \subset V(G)$ of \emph{terminals}, and a function $w:V(G) \to \mathbb{N}$. We want to know if the maximum $w$-packing of vertex-disjoint paths with extremities in $\mathcal{S}$ is equal to the minimum weight of a vertex-cut separating $\mathcal{S}$. We call \emph{Mader-Mengerian} the graphs with this property for each independent set $\mathcal{S}$ and each weight function $w$. We give a characterization of these graphs in terms of forbidden minors, as well as a recognition algorithm and a simple algorithm to find maximum packings of paths and minimum multicuts in those graphs. \end{abstract} \section{Introduction} Given a graph $G=(V,E)$, a set $\mathcal{S} \subset V$ with $|\mathcal{S}| \geq 2$ inducing a stable set is called a set of \emph{terminals}. An \emph{$\mathcal{S}$-path} is a path having distinct ends in $\mathcal{S}$, but inner nodes in $V \setminus \mathcal{S}$. A set ${\mathcal{P}}$ of $\mathcal{S}$-paths is a \emph{packing of vertex-disjoint $\mathcal{S}$-paths} (since there is no risk of confusion, we will use the shorter term \emph{packing of $\mathcal{S}$-paths} within this paper) if two paths in ${\mathcal{P}}$ do not have a vertex in common in $V \setminus \mathcal{S}$. We are looking for a maximum number $\nu(G,\mathcal{S})$ of $\mathcal{S}$-paths in a packing. An \emph{${\mathcal{S}}$-cut} is a set of vertices in $V \setminus \mathcal{S}$ that disconnects all the pairs of vertices in $\mathcal{S}$ (that is, a blocker of the $\mathcal{S}$-paths). We are looking for an ${\mathcal{S}}$-cut with a minimum number $\kappa(G,\mathcal{S})$ of vertices. The following inequality holds for any graph $G$ and any ${\mathcal{S}}\subseteq V(G)$: $\nu(G,\mathcal{S})\leq \kappa(G,\mathcal{S})$, as any $\mathcal{S}$-path intersects any $\mathcal{S}$-cut. 
Note that if $|\mathcal{S}| = 2$ the equality always holds, this being Menger's theorem on vertex-disjoint undirected $(s,t)$-paths. This paper deals with graphs for which $\nu(G,{\mathcal{S}})=\kappa(G,{\mathcal{S}})$ for any set $\mathcal{S}$ of terminals. Actually, we try to characterize a stronger property associated with a weighted version of these two optimization problems. Consider the following system with variables $x\in \mathbb{R}^{V\setminus \mathcal{S}}_+$: \begin{equation}\label{eqn:blocking} x(P)\geq 1 \mbox{ for every } {\mathcal{S}}\mbox{-path } P \end{equation} An integral vector $x$ minimizing $wx$ over~\eqref{eqn:blocking} is necessarily a $\{0,1\}$-vector and is the characteristic vector of a minimum $\mathcal{S}$-cut. Dually, an integral vector $y$ optimal for the dual of minimizing $wx$ over~\eqref{eqn:blocking} is necessarily a maximum $w$-packing of $\mathcal{S}$-paths. Hence, if~\eqref{eqn:blocking} is a TDI system, the minimum $w$-capacity of an $\mathcal{S}$-vertex-cut is equal to the value of a maximum $w$-packing of ${\mathcal{S}}$-paths. \begin{figure} \caption{The net.} \label{fig:net} \end{figure} As an example, consider the graph of Figure~\ref{fig:net}, called the \emph{net}. Let $\mathcal{S}$ be the set of square vertices. A maximum integral packing of $\mathcal{S}$-paths ($w=1$) contains only one path, while any $\mathcal{S}$-cut must contain at least two vertices. More precisely, there is a fractional packing of $\mathcal{S}$-paths of value $\frac{3}{2}$ (by taking each $\mathcal{S}$-path of length $3$ with value $\frac{1}{2}$), and a fractional $\mathcal{S}$-cut with the same value (by taking $x(v) = \frac{1}{2}$ for all $v \notin \mathcal{S}$). Motivated by the following property, we call \emph{Mader-Mengerian} the graphs for which the system \eqref{eqn:blocking} is TDI for every set ${\mathcal{S}}$ of terminals. 
\begin{property}\label{lemma:perfect} Given a graph $G$ and a set of terminals $\mathcal{S}$, the following conditions are equivalent: \begin{enumerate} \item The system~\eqref{eqn:blocking} is TDI, \item The polyhedron defined by~\eqref{eqn:blocking} is integral, \item The optimum value of maximizing $w^Tx$ subject to~\eqref{eqn:blocking} is integral (if finite) for all $w \in \{0,1,+\infty\}^{V}$. \end{enumerate} \end{property} The proof of this property is postponed to Section~\ref{sec:bipartite}, where the stronger Lemma~\ref{lemma:main} is proved. We already know that the net is not Mader-Mengerian. Our main result (Theorem~\ref{th:bad-graphs}) is a description of the Mader-Mengerian graphs in terms of forbidden minors. However, we do not use the usual minor operations (edge deletion and edge contraction), but \emph{ad-hoc} operations on vertices. Our proof implies an algorithm (Lemma~\ref{lemma:main}) to find maximum $w$-packings of paths in Mader-Mengerian graphs and minimum vertex multicuts for a given set of terminals. We also give a characterization of the pairs $(G,\mathcal{S})$ for which the system~\eqref{eqn:blocking} is TDI (Theorem~\ref{th:signed}). One of our most surprising results is that $G$ is Mader-Mengerian if and only if the system~\eqref{eqn:blocking} is TDI for every independent set $\mathcal{S}$ of cardinality $3$. This implies (with Lemma~\ref{lemma:main}) a polynomial algorithm to recognize Mader-Mengerian graphs. Finding a minimum $\mathcal{S}$-cut is an NP-complete problem, even if $|\mathcal{S}|=3$~\cite{papaseym}. In fact,~\cite{papaseym} deals with edge-cuts (that is, sets of edges disconnecting $\mathcal{S}$), but one may observe that $\mathcal{S}$-edge-cuts in a graph $G$ correspond to vertex-cuts in the line-graph of the graph obtained from $G$ by adding one leaf to each vertex in $\mathcal{S}$. 
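The gap on the net can also be checked mechanically. Below is a small brute-force Python sketch (the graph encoding and function names are ours, not from the paper) that enumerates the $\mathcal{S}$-paths, packings and cuts of the net and confirms $\nu = 1 < 2 = \kappa$:

```python
from itertools import combinations

# The net: a triangle {a, b, c} with a pendant terminal attached to each
# corner (encoding ours; the graph is an adjacency-set dictionary).
ADJ = {
    "a": {"b", "c", "sa"}, "b": {"a", "c", "sb"}, "c": {"a", "b", "sc"},
    "sa": {"a"}, "sb": {"b"}, "sc": {"c"},
}
S = {"sa", "sb", "sc"}  # the terminals form an independent set

def s_paths(adj, S):
    """All S-paths: distinct ends in S, inner vertices outside S."""
    found = []
    def extend(path):
        for w in adj[path[-1]]:
            if w in S:
                if w != path[0]:
                    found.append(path + [w])
            elif w not in path:
                extend(path + [w])
    for s in S:
        extend([s])
    # every path is discovered once per orientation; keep one copy
    return [p for p in found if p[0] < p[-1]]

def inner(p):
    return set(p[1:-1])

paths = s_paths(ADJ, S)

# nu: largest set of S-paths pairwise disjoint on inner vertices
nu = max(len(P)
         for k in range(len(paths) + 1)
         for P in combinations(paths, k)
         if all(inner(p).isdisjoint(inner(q)) for p, q in combinations(P, 2)))

# kappa: smallest set of non-terminals meeting every S-path
nonterm = [v for v in ADJ if v not in S]
kappa = min(len(C)
            for k in range(len(nonterm) + 1)
            for C in combinations(nonterm, k)
            if all(inner(p) & set(C) for p in paths))

print(nu, kappa)  # 1 2
```

Any two $\mathcal{S}$-paths of the net use at least two of the three triangle vertices each, so they must collide, while no single vertex meets all $\mathcal{S}$-paths.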
Finding a maximum packing of disjoint paths is a classical problem in graph theory, even if it was mainly studied for edge-disjoint (or arc-disjoint) paths. Menger~\cite{menger} gave the first significant result, stating that when $|\mathcal{S}|=2$, the maximum number of disjoint $\mathcal{S}$-paths is equal to the minimum cardinality of an $(s,t)$-cut, both in the edge-disjoint and vertex-disjoint cases. This result was further developed by Ford and Fulkerson~\cite{fordfulkerson} into what became network flow theory. When there are more than two terminals, the results are however closer to matching theory than to network flows. Gallai~\cite{gallai} first proved a min-max theorem for packings of fully-disjoint $\mathcal{S}$-paths (that is, even the ends of the paths must be disjoint), and his result was then strengthened by Mader~\cite{mader} for inner-disjoint paths with ends in different parts of a partition of the terminals. Mader's theorem implies the following: \begin{theorem}[Mader, 1978] Let $G$ be a graph and $\mathcal{S}$ an independent set of $G$. Then, \[ \nu(G,\mathcal{S}) = \min |U_0| + \sum_{i=1}^k \left\lfloor \frac{b_{U_0}(U_i)}{2} \right\rfloor \] where the minimum ranges over all the partitions $U_0,\ldots,U_k$ of $V \setminus \mathcal{S}$, such that each $\mathcal{S}$-path intersects either $U_0$ or $E(U_i)$ for some $1 \leq i \leq k$. Here, $b_{U_0}(X) := |\{v \in X~:~N(v) \setminus (X \cup U_0) \neq \emptyset\}|$. \end{theorem} In the light of Mader's theorem, we are looking for graphs that admit a much simpler characterization: $\nu(G,\mathcal{S}) = \min |U|$, where the minimum ranges over sets $U$ such that each $\mathcal{S}$-path intersects $U$. A practical reason for looking for these graphs is that Mader's theorem relies on matching theory, while our result will only use Menger's theorem, that is, flow theory. 
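Menger's vertex-disjoint theorem is exactly what a vertex-capacitated maximum flow computes. The sketch below is our own illustration (not the paper's algorithm): it splits every non-terminal vertex into an in/out pair joined by a unit-capacity arc and augments along BFS paths.

```python
from collections import deque, defaultdict

def max_vertex_disjoint_paths(adj, s, t):
    """Maximum number of inner-vertex-disjoint (s,t)-paths, via max-flow
    with vertex splitting: v becomes (v,'in') -> (v,'out') of capacity 1."""
    INF = 10 ** 9
    cap = defaultdict(int)
    nbr = defaultdict(set)
    def arc(u, v, c):
        cap[(u, v)] += c
        nbr[u].add(v); nbr[v].add(u)   # residual arcs in both directions
    inn = lambda v: v if v in (s, t) else (v, "in")
    out = lambda v: v if v in (s, t) else (v, "out")
    for v in adj:
        if v not in (s, t):
            arc(inn(v), out(v), 1)     # unit vertex capacity
        for w in adj[v]:
            arc(out(v), inn(w), INF)   # edges are never the bottleneck
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q:                        # BFS for an augmenting path
            u = q.popleft()
            for v in nbr[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t
        while parent[v] is not None:    # push one unit of flow along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# two internally disjoint (s,t)-paths exist here:
g1 = {"s": {"a", "b"}, "a": {"s", "t"}, "b": {"s", "t"}, "t": {"a", "b"}}
# every (s,t)-path goes through the bottleneck x:
g2 = {"s": {"a", "b"}, "a": {"s", "x"}, "b": {"s", "x"},
      "x": {"a", "b", "t"}, "t": {"x"}}
print(max_vertex_disjoint_paths(g1, "s", "t"),
      max_vertex_disjoint_paths(g2, "s", "t"))  # 2 1
```

This reduction is the "smaller graph" flow computation alluded to later for Mader-Mengerian graphs.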
As a consequence, algorithms for finding an optimal packing of $\mathcal{S}$-paths in Mader-Mengerian graphs are simpler and more efficient than those for general graphs. Mader's theorem has been recently extended by Chudnovsky et al.~\cite{chudnovskyetal}, and by Gyula Pap~\cite{pap}. Let us mention a similar result for edge-disjoint paths, which was proved by Cherkasky~\cite{cherkasky} and Lovász~\cite{lovasz}: \begin{theorem}[Cherkasky, Lovász, 1977] For any inner Eulerian graph $G$, the maximum number of edge-disjoint $\mathcal{S}$-paths is equal to $\frac{1}{2} \sum_{s \in \mathcal{S}} \lambda_s$, where $\lambda_s$ is the minimum cardinality of a cut between $s$ and $\mathcal{S} - s$. \end{theorem} This has been later extended by Karzanov and Lomonosov~\cite{karzanosov}, who proved the Locking Theorem. These results explain when the maximum packing of edge-disjoint $\mathcal{S}$-paths has a characterization in terms of minimum cuts. \section{Vertex minors and skew minors} Given a graph $G=(V,E)$ and $v\in V$, \emph{deleting} $v$ in $G$ means considering the graph $G-v$ induced by $V-v$, that is: $$G-v:=(V - v, E \setminus \delta_G(v))$$ \emph{Contracting} $v$ means considering the graph $G / v$ obtained by removing $v$ and replacing its neighborhood by a clique: $$G/v:=(V - v, (E \setminus \delta_G(v)) \cup \{wx \mid w, x \in N_G(v),\, w \neq x\})$$ For $e=xy \in E$, \emph{contracting} $e$ means considering the graph $G / e$ obtained by identifying the end-nodes $x$ and $y$ of $e$: $$G/e:=(V - y, (E \setminus \delta_G(y)) \cup \{xz \mid z \in N_G(y),\, z \neq x\})$$ A graph obtained from $G$ by any sequence of vertex deletions and vertex contractions is a \emph{vertex-minor} of $G$. A graph obtained from $G$ by any sequence of vertex deletions, vertex contractions and edge contractions is a \emph{skew-minor} of $G$. Vertex-minors can also be described in the following way: \begin{proposition} Let $G$ be a graph, and $G'$ be a vertex-minor of $G$. 
Let $D$ be the set of vertices deleted and $C$ be the set of vertices contracted to get $G'$ from $G$. Then, $u, v \in V(G')$ are adjacent in $G'$ if and only if there is a path with extremities $u$ and $v$ in $G$ whose inner nodes are in $C$.\qed \end{proposition} This immediately implies: \begin{lemma}\label{lemma:commutativity} Vertex-deletions and vertex-contractions commute.\qed \end{lemma} By definition, for a class of graphs, being closed under skew minors implies being closed under vertex minors, which in turn implies being closed under induced subgraphs. Several important classes of graphs are closed under skew minors. Among them: \begin{definition}$\quad$ \begin{itemize} \item[-] The \emph{interval graphs} are the intersection graphs of intervals of the real line. \item[-] The \emph{chordal graphs} are the intersection graphs of subtrees of a tree. Equivalently, a graph is chordal if each of its cycles of length at least $4$ has a chord. \item[-] The \emph{cocomparability graphs} are the graphs whose complement is the underlying graph of a partially ordered set. \item[-] The \emph{Asteroidal-Triple-free (AT-free) graphs} are the graphs without asteroidal triple. A stable set $S$ of cardinality $3$ is an \emph{asteroidal triple} of $G$ if there is no $x \in S$ such that $S-x$ is contained in a connected component of $G - (x \cup N(x))$. \item[-] The \emph{$P_k$-free graphs}, for $k \in \mathbb{N}$, are the graphs with no induced path of length at least $k$. \end{itemize} \end{definition} The following proposition is left as an exercise: \begin{proposition}\label{lemma:closeness} Interval graphs, chordal graphs, cocomparability graphs, AT-free graphs and $P_k$-free graphs are closed under skew minors.\qed \end{proposition} The following lemma explains why we are interested in the vertex-minor operations. \begin{lemma} Given a graph $G$ and a set of terminals $\mathcal{S}$, if the system~\eqref{eqn:blocking} is TDI, then it is also TDI for any vertex-minor of $G$. 
\end{lemma} \begin{proof} Deleting $v \in V \setminus {\mathcal{S}}$ corresponds to setting $w_v=0$. Contracting $v \in V \setminus {\mathcal{S}}$ corresponds to setting $w_v=+\infty$. \end{proof} \section{Integrality of the blocker of S-paths}\label{sec:bipartite} For a given graph $G$ and a set $\mathcal{S}$ of terminals, we construct an auxiliary graph $G_{\mathcal{S}}$ as follows. First, note that if a non-terminal vertex $v$ is adjacent to two terminals $s$ and $t$, we may assume that the maximum packing for a weight function $w$ contains $w(v)$ copies of the length-$2$ path $s,v,t$, and that the minimum $\mathcal{S}$-cut contains $v$. Hence, we first delete every non-terminal vertex adjacent to two or more terminals. We may also assume that no $\mathcal{S}$-path of a maximum packing contains two vertices of $N_G(s)$ for some terminal $s$ (by taking chordless paths). Therefore, if $G - N_G(s)$ contains a component disjoint from $\mathcal{S}$, we can delete all its vertices. From now on, we will always suppose that: \begin{itemize} \item[$(i)$] $G$ has no vertex adjacent to two distinct terminals, \item[$(ii)$] for each $s \in \mathcal{S}$, every component of $G - N_G(s)$ intersects $\mathcal{S}$, \item[$(iii)$] $G$ has no edge whose ends are both adjacent to the same terminal. \end{itemize} Then we consider the set $N = N_{G}(\mathcal{S})$ of vertices adjacent to $\mathcal{S}$; $N$ is the vertex set of $G_{\mathcal{S}}$. We delete the terminals, and contract the vertices in $V - (N \cup \mathcal{S})$. Then we remove all the edges whose ends are adjacent to the same terminal in $G$ (the contraction of a path of a maximum packing would not use these edges). This gives $G_{\mathcal{S}}$. By construction, this graph is $|\mathcal{S}|$-partite, each part being the neighborhood of one terminal. 
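The construction of $G_{\mathcal{S}}$ and the bipartiteness test it feeds into can be sketched as follows (a Python sketch with our own names and graph encoding; it assumes that conditions $(i)$-$(iii)$ above already hold):

```python
from collections import deque

def auxiliary_graph(adj, S):
    """Build G_S, assuming conditions (i)-(iii) of the text already hold:
    a, b in N are adjacent iff they see different terminals and some
    (a, b)-path of G has all its inner vertices outside S and N."""
    # term[v]: the unique terminal seen by v (unique by condition (i))
    term = {v: next(s for s in adj[v] if s in S)
            for v in adj if v not in S and any(s in S for s in adj[v])}
    N = set(term)
    inside = set(adj) - S - N
    comp = {}                       # components of G - (S union N)
    for v in inside:
        if v not in comp:
            comp[v] = v
            q = deque([v])
            while q:
                u = q.popleft()
                for w in adj[u]:
                    if w in inside and w not in comp:
                        comp[w] = v
                        q.append(w)
    edges = set()
    for a in N:                     # direct edges of G between N-vertices
        for b in adj[a]:
            if b in N and term[a] != term[b]:
                edges.add(frozenset((a, b)))
    touch = {}                      # N-vertices seen by each component
    for v in inside:
        for w in adj[v]:
            if w in N:
                touch.setdefault(comp[v], set()).add(w)
    for group in touch.values():    # edges through a contracted component
        for a in group:
            for b in group:
                if term[a] != term[b]:
                    edges.add(frozenset((a, b)))
    return N, edges

def is_bipartite(vertices, edges):
    nbr = {v: set() for v in vertices}
    for e in edges:
        a, b = tuple(e)
        nbr[a].add(b); nbr[b].add(a)
    color = {}
    for v in vertices:              # BFS 2-coloring of each component
        if v not in color:
            color[v] = 0
            q = deque([v])
            while q:
                u = q.popleft()
                for w in nbr[u]:
                    if w not in color:
                        color[w] = 1 - color[u]
                        q.append(w)
                    elif color[w] == color[u]:
                        return False
    return True

# For the net (triangle a, b, c with pendant terminals), G_S is a triangle:
NET = {"a": {"b", "c", "sa"}, "b": {"a", "c", "sb"}, "c": {"a", "b", "sc"},
       "sa": {"a"}, "sb": {"b"}, "sc": {"c"}}
print(is_bipartite(*auxiliary_graph(NET, {"sa", "sb", "sc"})))  # False
```

On the net, $G_{\mathcal{S}}$ is an odd cycle, matching the fractional gap exhibited in the introduction.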
Note that $a, b \in N$ are adjacent in $G_{\mathcal{S}}$ if $a$ and $b$ are not adjacent to a common terminal, and there is an $(a,b)$-path in $G$ whose inner vertices are outside $\mathcal{S} \cup N_G(\mathcal{S})$. \begin{figure}\label{fig:auxiliary} \end{figure} \begin{lemma}\label{lemma:main} Given a graph $G$ and a set of terminals $\mathcal{S}$, the system~\eqref{eqn:blocking} is TDI if and only if the auxiliary graph $G_{\mathcal{S}}$ is bipartite. \end{lemma} \begin{proof} Assume that $G_{\mathcal{S}}$ is not bipartite. Let $C^\ast$ be an induced odd cycle of $G_{\mathcal{S}}$. We define a weight vector $w \in \{0,1,+\infty\}^{V(G)}$ as follows: \begin{equation}\label{eqn:auxiliary} w_v:= \left\{\begin{array}{ll} 1 & \textrm{if } v \in C^* \\ 0 & \textrm{if } v \in V(G_{\mathcal{S}}) \setminus C^* \\ +\infty & \textrm{otherwise} \end{array}\right. \end{equation} To every edge $uv$ of $C^\ast$, we can associate an $\mathcal{S}$-path of $G$ intersecting $N$ exactly in $u$ and $v$. Then a maximum fractional $w$-packing of ${\mathcal{S}}$-paths is given by taking $1/2$ for each of these paths, and a minimum fractional $\mathcal{S}$-cut of $G$ is given by $1/2$ on every node of $C^\ast$ and $1$ on the other vertices of $N$. The optimum value of the corresponding pair of dual linear programs is then $|V(C^\ast)|/2$, hence the polyhedron defined by~\eqref{eqn:blocking} is not integral. Suppose now that $G_{\mathcal{S}}$ is bipartite, with bipartition $(A,B)$. Let $H$ be the graph obtained by deleting $\mathcal{S}$ and adding two new non-adjacent vertices $s_a$ and $s_b$, adjacent respectively to $A$ and $B$. Let $P$ be a chordless $(s_a,s_b)$-path in $H$. Let $\{a,b\} := N \cap V(P)$. We can associate a unique path $\hat{P}$ of $G$ to $P$, by replacing its extremities by terminals of $G$ (because each vertex of $N$ is adjacent to a unique terminal). We show that $\hat{P}$ cannot be a cycle. Let $Q = V(\hat{P}) \setminus (\mathcal{S} \cup N)$. 
If $Q$ is empty, $\hat{P}$ is clearly not a cycle because, in $H$, the neighborhood of a terminal is a stable set. Otherwise, $Q$ is contained in a component $C$ of $G \setminus (N \cup \mathcal{S})$. $C$ is adjacent to $N_G(s)$ and $N_G(t)$ for two distinct terminals $s$ and $t$ by condition $(ii)$. We can suppose that $a \in N_G(s)$. $\hat{P}$ is a cycle only if $b \in N_G(s)$. But if this were the case, then for $c \in N_G(t)$ adjacent to $C$, $a,c,b$ would be a path in $H$, hence $a$ and $b$ would be in the same part of the bipartition $(A,B)$, a contradiction. Since $\hat{P}$ is not a cycle, it is an $\mathcal{S}$-path. By applying the vertex-disjoint version of Menger's theorem to $H$, $\nu(G,w,\mathcal{S}) = \kappa(G,w,\mathcal{S})$ for any $w\in \mathbb{Z}^{V\setminus\mathcal{S}}$. \end{proof} \section{A forbidden minor characterization}\label{sec:minor-charac} In this section, we find a characterization of Mader-Mengerian graphs by excluded vertex-minors. We start from the proof of Lemma~\ref{lemma:main}, where we showed that if a graph is not Mader-Mengerian, its auxiliary graph has an odd cycle. In the auxiliary graph construction, we perform vertex-minor operations plus deletions of edges between two vertices adjacent to the same terminal. It follows that a graph that is not Mader-Mengerian contains a vertex-minor $G$ of the following form. $G$ is a graph obtained by taking an odd cycle $C$ together with the terminals adjacent to $C$. Each vertex of $C$ is adjacent to exactly one terminal, called the \emph{representant} of this vertex. We color the vertices depending on their representants: each representant gets a distinct color, and each other vertex has the color of its representant. A color is thus a set of vertices adjacent to some terminal, plus this terminal. Two consecutive vertices of the odd cycle have distinct colors, while the extremities of each chord share the same color. Let $\mathcal{A}_n$ be the class of graphs obtained in this way with $n$ terminals. 
The chain of lemmas and proofs leading to the following result is presented in the Appendix. \begin{theorem}\label{th:bad-graphs} Let $G$ be a graph. The system~\eqref{eqn:blocking} is TDI for every stable set $\mathcal{S}$ if and only if $G$ does not contain a vertex minor in $\mathcal{A}_3$. \end{theorem} \begin{proof} Direct consequence of Lemmas~\ref{lemma:odd-colors},~\ref{lemma:7colors} and~\ref{lemma:5colors}. \end{proof} \begin{corollary} System~\eqref{eqn:blocking} is TDI for every stable set $\mathcal{S}$ of $G$ if and only if it is TDI for every stable set $\mathcal{S}$ of cardinality $3$ of $G$.\qed \end{corollary} This gives a polynomial-time recognition algorithm for the related class of graphs, in combination with Lemma~\ref{lemma:main}: we only have to check, for each independent subset of three vertices, whether the associated auxiliary graph is bipartite. Another important consequence is that the class of graphs for which system~\eqref{eqn:blocking} is TDI for every stable set is large. Indeed, it contains at least the asteroidal-triple-free graphs: \begin{corollary} For every asteroidal-triple-free graph, the system~\eqref{eqn:blocking} is TDI. \end{corollary} \begin{proof} Follows from Theorem~\ref{th:bad-graphs} and Proposition~\ref{lemma:closeness}, as every graph in $\mathcal{A}_3$ contains an asteroidal triple, namely the set of terminals. \end{proof} To conclude this section on vertex-minors, we prove that there is an infinite number of minimal graphs to exclude. \begin{lemma}\label{lemma:infinite} If $n=3$ and each color class induces a clique, then $G$ is a minimal excluded graph. \end{lemma} \begin{proof} Let $U, V, W \subset V(G)$ be the three colors of $G$, and let $u$, $v$, $w$ be the three terminals of a minimal excluded minor $G' = G - D / C$. The distance between two terminals in $G'$ is at least $3$; in particular, they cannot be adjacent. 
If $x, y \in V(G')$ and $xy \in E(G)$ then $xy \in E(G')$, thus $u$, $v$ and $w$ have distinct colors in $G$, say $u \in U$, $v \in V$, $w \in W$. Let $U'$, $V'$ and $W'$ be the color classes of $u$, $v$ and $w$ respectively in $G'$. Every vertex adjacent to $u$ in $G'$ must be in the same color class $U'$ as $u$ in $G'$, proving that $U \setminus (D \cup C) \subset U'$. Because the color classes are a partition of the vertex set, we have equality, $U' = U \setminus (C \cup D)$, and similarly for $V'$ and $W'$. Suppose $C$ is not empty, and let $x \in C$. We may assume $x \in U$. If $x$ is the representant of $U$, then $G' = G - (D + x) / (C - x)$. Else, if $x$ is not the representant of $U$, $x$ has exactly two neighbors $y$ and $z$ outside $U$. Because $u$, $v$ and $w$ must be at distance at least $3$ from each other in $G'$, $y, z$ must be in $D$. Then we also have that $G' = G - (D + x) / (C - x)$. Hence $G' = G - (C \cup D)$. But then, as the set of edges between colors of $G'$ must be a cycle, $G' = G$, proving that $G$ is minimal. \end{proof} \section{Minimal skew-minors exclusion}\label{sec:skewminor} A skew-minor of a graph $G$ is any graph obtained from $G$ via the following operations: vertex deletion, vertex contraction and edge contraction. Note that Mader-Mengerian graphs are not closed under edge contraction: by inflating one of the central vertices of the net we get a Mader-Mengerian graph, while contracting the inflated edge gives back the net. However, we can get a simple sufficient condition for the integrality of system~\eqref{eqn:blocking} based on skew-minors: \begin{theorem}\label{coro:rocket} Any graph $G$ is either Mader-Mengerian or contains a net or a rocket as a skew minor. \end{theorem} \begin{figure} \caption{The rocket} \label{fig:rocket} \end{figure} \section{When the set of terminals is fixed} Our arguments apply when we want to find the pairs $(G,\mathcal{S})$, $\mathcal{S} \subset V(G)$, for which the system~\eqref{eqn:blocking} is TDI. 
Up to now, we have only looked at graphs $G$ for which we have TDIness for every set of terminals. To deal with a fixed set of terminals, we define another notion of vertex-minor, the \emph{signed vertex-minor}, defined on pairs $(G,\mathcal{S})$. Signed vertex-minors are defined like vertex-minors, except that the set of terminals of the minor must be a subset of the terminals of the original graph. More precisely, $(H,\mathcal{S}')$ is a signed vertex-minor of $(G,\mathcal{S})$ if $H$ is a vertex-minor of $G$ and $\mathcal{S}' \subseteq \mathcal{S}$. Recall that $\mathcal{A}_3$ is the class of graphs built from a three-colored odd cycle, by adding a terminal for each color, and chords with extremities of the same color. We define similarly the class $\overline{\mathcal{A}_3}$ of pairs $(G,\mathcal{S})$, where $G \in \mathcal{A}_3$ and $\mathcal{S}$ is the set of the three terminals in the construction of $G$. This setting does not affect Lemma~\ref{lemma:main}, and the following theorem, close to Theorem~\ref{th:bad-graphs}, can be deduced by the same proof. Indeed, the proofs in Section~\ref{sec:minor-charac} never create new terminals when considering vertex-minors, and hence are still valid for signed vertex-minors. \begin{theorem}\label{th:signed} Let $G$ be a graph and $\mathcal{S}$ a set of terminals in $G$. The system~\eqref{eqn:blocking} is TDI if and only if $(G,\mathcal{S})$ does not have a signed vertex-minor in $\overline{\mathcal{A}_3}$.\qed \end{theorem} \begin{corollary} The system~\eqref{eqn:blocking} is TDI for $(G,\mathcal{S})$ if and only if it is TDI for every $(G,\mathcal{S}')$, with $\mathcal{S'} \subseteq \mathcal{S}$, $|\mathcal{S}'| = 3$.\qed \end{corollary} Moreover, all the graphs of $\overline{\mathcal{A}_3}$ are minimal graphs by signed vertex-minors for which system~\eqref{eqn:blocking} is not TDI. Indeed, a potential minor would have the same set of terminals. 
Moreover, if we contract a vertex, then its two consecutive vertices in the odd cycle become adjacent to two terminals, hence must be deleted. Hence, the minor must be obtained without vertex contraction, and the minimality follows easily. \section{Conclusion} We studied the pairs $(G,\mathcal{S})$ of graphs and subsets of terminals for which the minimum cost of an $\mathcal{S}$-vertex-cut is equal to the maximum value of a packing of $\mathcal{S}$-paths. We proved that this property for a given $\mathcal{S}$ is polynomially checkable, as it reduces to the bipartiteness of an auxiliary graph. Moreover, if this property is true, the minimum $\mathcal{S}$-cut and maximum path-packing problems can be solved by finding a maximum vertex-capacitated flow in a smaller graph. We proved that if $(G,\mathcal{S})$ does not satisfy this property, then there exists $\mathcal{S}' \subseteq \mathcal{S}$ with $|\mathcal{S}'|=3$ such that $(G,\mathcal{S}')$ does not satisfy it either. Moreover, each pair in $\overline{\mathcal{A}_3}$ is a minimal signed vertex-minor obstruction. Concerning the graphs satisfying the min-max formula for every $\mathcal{S}$, we proved that they can be recognized in polynomial time and that the list of vertex-minor obstructions is infinite, but we were unable to provide an explicit description of this list. We believe that this list is hard to obtain, and somewhat ugly. We also proved that this class of graphs is interesting, as it contains the asteroidal-triple-free graphs. \begin{appendix} \section*{Appendix to section~\ref{sec:minor-charac}} The \emph{distance in $C$} between two vertices $u$ and $v$ is the minimum number of arcs in one of the two $(u,v)$-paths in $C$. We denote by $d_C(u,v)$ this minimum. We say that $u$ and $v$ are \emph{consecutive} if $d_C(u,v) = 1$. We denote by $\repr(u)$ the representant of a vertex $u$. We say that a vertex of $C$ is \emph{bicolored} if its two neighbors in $C$ have distinct colors. 
Two colors are \emph{adjacent} if there is an edge in $C$ whose ends have these two colors. Note that the net is a forbidden minor of Mader-Mengerian graphs, and is minimal. We try to find other forbidden minors that do not have a net as vertex-minor. For a graph $H$, we say that $G$ is $H$-free if $H$ is not a vertex minor of $G$. \begin{lemma}\label{lemma:local-bicolor} Let $u$ be a bicolored vertex. Let $v$ be a vertex of the same color as $u$. Then, either $G$ contains a net, or every vertex consecutive to $v$ has the color of a vertex consecutive to $u$. \end{lemma} \begin{figure} \caption{Illustration for Lemma~\ref{lemma:local-bicolor}.} \label{fig:lemma1} \end{figure} \begin{figure} \caption{Illustration for Lemma~\ref{lemma:bicolor}.} \label{fig:bicolor} \end{figure} \begin{proof} Let $u_1$ and $u_2$ be the two vertices adjacent to $u$ in $C$, and suppose, by contradiction, that $v$ has a neighbor $v'$ in $C$ such that $u_1$, $u_2$ and $v'$ have distinct colors. First, suppose that $d_C(u,v) \geq 3$. There are two cases. If $d_C(v',u) \geq 3$ (Figure~\ref{fig:lemma1}, $a$), let $G'$ be the graph obtained by contracting $u$ and $\repr(u)$ and by deleting all the vertices except $\repr(u_1)$, $\repr(u_2)$, $u_1$, $u_2$ and $v$. $G'$ is a net (Figure~\ref{fig:lemma1}, $b$). If $d_C(v',u) = 2$ (Figure~\ref{fig:lemma1}, $c$), we may assume $v'u_1 \in E(C)$. Let $G'$ be the graph obtained from $G$ by contracting $u$, $v$ and $\repr(u)$ and deleting every other vertex except $v'$, $\repr(v')$, $\repr(u_1)$, $\repr(u_2)$, $u_1$ and $u_2$. Then $G'$ is a net (Figure~\ref{fig:lemma1}, $d$). Now suppose that $d_C(u,v) = 2$. We may assume that $u_1$ is adjacent to $v$. Then the graph obtained from $G$ by contracting $\repr(u)$ and deleting every vertex except $u$, $v$, $u_1$, $u_2$, $v'$ and $\repr(u_1)$, is a net. \end{proof} \begin{lemma}\label{lemma:bicolor} Every color is adjacent to at most two other colors, or $G$ contains a net vertex-minor. \end{lemma} \begin{proof} Let $R$ be any color. 
By iteratively applying Lemma~\ref{lemma:local-bicolor}, if there is a vertex of color $R$ whose two consecutive vertices have distinct colors, then either $G$ contains a net, or $R$ is adjacent to exactly two colors. Otherwise, each vertex of color $R$ is consecutive to two vertices of the same color. Suppose that there are three vertices $u_1$, $u_2$, $u_3$ in $C$ of color $R$, such that their neighbors have three different colors. Let $v_1$, $v_2$ and $v_3$ be the vertices following $u_1$, $u_2$, $u_3$ respectively in $C$ (Figure~\ref{fig:bicolor}, $a$). Then, by contracting $u_1$, $u_2$, $u_3$, $\repr(u_1)$ and deleting all the vertices except $v_1$, $v_2$, $v_3$ and their representants, we obtain a net (Figure~\ref{fig:bicolor}, $b$). \end{proof} From now on, we suppose that $G$ does not have a net minor. We define the \emph{graph of colors}, whose vertices are the colors, with the adjacency relation introduced above. By Lemma~\ref{lemma:bicolor}, the graph of colors has maximum degree two. By connectedness, it is either a cycle or a path. We index the colors from $1$ to $n$, following the order defined by the path or the cycle. Thus, each edge of $C$ has extremities of colors $i$ and $i+1$, or $1$ and $n$. We have the following immediate consequence. \begin{lemma}\label{lemma:odd-colors} Let $G$ be net-free. The number $n$ of colors is odd, and the graph of colors is a cycle. \end{lemma} \begin{proof} Suppose not. Then $C$ has a proper $2$-coloring (following the parity of the colors), and thus is even, contradicting the assumption that $C$ is odd. \end{proof} \begin{lemma}\label{lemma:bicolored-vertices} Let $G$ be net-free. Every color contains a bicolored vertex. \end{lemma} \begin{figure} \caption{Illustration for Lemma~\ref{lemma:bicolored-vertices}, each ellipse represents a color.} \label{fig:bicolored-vertices} \end{figure} \begin{proof} Without loss of generality, it is sufficient to prove that there is a bicolored vertex of color $1$. 
Let $U$ be the set of vertices of color $1$ adjacent to a vertex of color $n$, and let $U'$ be the set of the other vertices of color $1$. Let $W$ be the set of vertices of $C$ having an odd color, minus $U'$, and let $B$ be its complement in $V(C)$ (see Figure~\ref{fig:bicolored-vertices}). $B$ does not contain two consecutive vertices of $C$, and $(W,B)$ cannot be a proper $2$-coloring of $C$, since $C$ is odd. Thus there is an edge in $C$ with both endpoints in $W$. But this can only be an edge between $U$ and color $n$, hence there is a bicolored vertex in $U$. \end{proof} \begin{lemma}\label{lemma:7colors} If $G$ is net-free, the number $n$ of colors is at most $5$. \end{lemma} \begin{figure} \caption{Illustration for Lemma~\ref{lemma:7colors}. The representant of each color is drawn as a square.} \label{fig:7colors} \end{figure} \begin{proof} By contradiction. Let $u$, $v$ and $w$ be bicolored vertices of colors $1$, $3$ and $5$ respectively (see Figure~\ref{fig:7colors}). Contract every vertex of the other colors, and delete every remaining vertex except $u$, $v$, $w$ and their representants. If $n > 5$, the $6$-vertex graph obtained in this way is a net. \end{proof} \begin{lemma}\label{lemma:bicol-consec} If $G$ is net-free and $n=5$, there are no two consecutive bicolored vertices in $C$. \end{lemma} \begin{proof} By contradiction. Let $u$ be a bicolored vertex of color $1$, and $v$ its bicolored neighbor of color $2$. Let $w$ be a bicolored vertex of color $4$. Then, the graph obtained by contracting the vertices of colors $3$ and $5$, and deleting all the vertices of colors $1$, $2$ and $4$ except $u$, $v$, $w$ and their representants, is a net. \end{proof} \begin{lemma}\label{lemma:parity} Suppose $G$ is net-free. The number of edges between any two color classes is zero or odd. \end{lemma} \begin{proof} Choose two adjacent colors, and remove from $C$ every edge between these two colors. Every path thus obtained has both of its endpoints in the same color class, or in the two chosen colors.
So every path has an even length. As $C$ is odd, this proves that we removed an odd number of edges. \end{proof} \begin{lemma}\label{lemma:odd-sequence} Suppose $G$ is net-free. If $n=5$, then between any two given adjacent colors there is a maximal sequence of consecutive edges in $C$ of length $2k+1$, for some $k \geq 1$. \end{lemma} \begin{proof} Consider the subpaths of $C$ obtained by keeping only the edges between colors $1$ and $2$. Either there is a net minor, or each of these paths has length at least $2$ by Lemma~\ref{lemma:bicol-consec}. Then, by Lemma~\ref{lemma:parity}, there is a path of odd length, proving the lemma. \end{proof} \begin{lemma}\label{lemma:5colors} If $n=5$, then $G$ is not a minimal excluded graph for the vertex-minor relation. \end{lemma} \begin{figure} \caption{Illustration for Lemma~\ref{lemma:5colors}} \label{fig:5colors} \end{figure} \begin{proof} If $G$ contains a net minor, it is clearly not minimal. Suppose it does not. Let $u_1$, $v_1$, $u_2$, \ldots $u_k$, $v_k$ be a maximum subpath of $C$ of odd length between colors $1$ and $2$. By Lemma~\ref{lemma:odd-sequence}, we have $k \geq 2$. Let $w$ be the vertex of color $5$ adjacent to $u_1$, and $w'$ the vertex of color $3$ adjacent to $v_k$. Let $s$ be a bicolored vertex of color $4$ (see Figure~\ref{fig:5colors}, $a$). Consider the graph obtained by contracting the vertices of colors $3$ and $5$, and deleting all the other vertices except $u_1$, $v_1$, $u_2$, \ldots $u_k$, $v_k$, $s$, $\repr(u_1)$, $\repr(u_2)$ and $\repr(s)$. This graph (Figure~\ref{fig:5colors}, $b$) is composed of a cycle of length $2k+1$ plus three terminals $\repr(u_1)$, $\repr(u_2)$ and $\repr(s)$. It clearly satisfies the condition for being an excluded graph. Thus $G$ is not minimal. \end{proof} \end{appendix} \end{document}
\begin{document} \author{Giorgio Ottaviani, Alicia Tocino} \address{Dipartimento di Matematica e Informatica ``Ulisse Dini'', University of Florence, Italy} \email{[email protected], [email protected]} \title{\textbf{Best rank k approximation for binary forms}} \begin{abstract} In the tensor space $\mathrm{Sym}^d \mathbb{R}^2$ of binary forms we study the best rank $k$ approximation problem. The critical points of the best rank $1$ approximation problem are the eigenvectors and it is known that they span a hyperplane. We prove that the critical points of the best rank $k$ approximation problem lie in the same hyperplane. \end{abstract} \maketitle \section{Introduction} The symmetric tensor space $\mathrm{Sym}^dV$, with $V=\mathbb{R}^2$ (resp. $V=\mathbb{C}^2$), contains real (resp. complex) binary forms, which are homogeneous polynomials in two variables. The forms which can be written as $v^d$, with $v\in V$, correspond to polynomials which are the $d$-th power of a linear form; these have rank one. We denote by $C_d\subset \mathrm{Sym}^dV$ the variety of forms of rank one. The $k$-secant variety $\sigma_k(C_d)$ is the closure of the set of forms which can be written as $\sum_{i=1}^k\lambda_iv_i^d$ with $\lambda_i\in\mathbb{R}$ (resp. $\lambda_i\in\mathbb{C}$). We say that a nonzero rank $1$ tensor is a critical rank one tensor for $f\in \mathrm{Sym}^d V$ if it is a critical point of the distance function from $f$ to the variety of rank $1$ tensors. Critical rank one tensors are important to determine the best rank one approximation of $f$, in the setting of optimization \cite{FriTam, Lim, Stu}. Critical rank one tensors may be written as $\lambda v^d$ with $\lambda\in\mathbb{C}$ and $v\cdot v=1$, where the latter is the Euclidean scalar product. The corresponding vector $v\in V$ has been called a tensor eigenvector, independently by Lim and Qi \cite{Lim, Qi}.
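For intuition, the matrix case $d=2$ can be checked numerically. The sketch below is our illustration, not part of the paper, and the matrix entries are an arbitrary choice; it uses the fact that for a unit vector $v=(\cos t,\sin t)$ the squared distance from $f$ to the line spanned by $v^2$ is $\|f\|^2-\lambda(t)^2$ with $\lambda(t)=v^Tfv$, so the nonzero critical rank one tensors sit exactly at the critical points of the Rayleigh quotient, i.e. at the eigenvector directions.

```python
from math import cos, sin, atan2, pi

# Illustrative d = 2 check (not from the paper): for the symmetric matrix
# f = [[a, b], [b, c]], i.e. the quadratic form a*x^2 + 2b*xy + c*y^2,
# the critical rank-one tensors are lambda * v^2 with v an eigenvector of f.
a, b, c = 2.0, 1.5, -1.0          # arbitrary illustrative entries

def rayleigh(t):
    """lambda(t) = v(t)^T f v(t) for the unit vector v(t) = (cos t, sin t)."""
    v1, v2 = cos(t), sin(t)
    return a*v1*v1 + 2*b*v1*v2 + c*v2*v2

def drayleigh(t, h=1e-6):
    """Central-difference derivative of lambda(t); it vanishes exactly at the
    eigenvector directions, which are the critical rank-one directions."""
    return (rayleigh(t + h) - rayleigh(t - h)) / (2*h)

# Eigenvector angles of [[a, b], [b, c]] satisfy tan(2t) = 2b / (a - c).
t_eig = 0.5 * atan2(2*b, a - c)
critical_angles = [t_eig, t_eig + pi/2]   # the two orthogonal eigendirections
```

The derivative vanishes at the two eigendirections and not at an intermediate angle, matching the fact that a symmetric $2\times 2$ matrix has exactly $d=2$ critical rank one tensors.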
In this paper we concentrate on critical rank one tensors $\lambda v^d$, which live in $\mathrm{Sym}^dV$ (not in $V$ like the eigenvectors), for a better comparison with critical rank $k$ tensors, see Definition \ref{def:kcritical}. There are exactly $d$ critical rank one tensors (counting with multiplicities) for any $f$ different from $c(x^2+y^2)^{d/2}$ (with $d$ even), while there are infinitely many critical rank one tensors for $f=(x^2+y^2)^{d/2}$ (see Prop. \ref{prop:eigendisc}). The critical rank one tensors for $f$ are contained in the hyperplane $H_f$ (called the singular space, see \cite{OP}), which is orthogonal to the vector $D(f)=yf_x-xf_y$. We review this statement at the beginning of \S \ref{sec:singularspace}. The main result of this paper is the following extension of the previous statement to critical rank $k$ tensors, for any $k\ge 1$. \begin{thm}\label{thm:main} Let $f\in \mathrm{Sym}^d\mathbb{C}^2$. i) All critical rank $k$ tensors for $f$ are contained in the hyperplane $H_f$, for any $k\ge 1$. ii) Any critical rank $k$ tensor for $f$ may be written as a linear combination of the critical rank $1$ tensors for $f$. \end{thm} Theorem \ref{thm:main} follows after Theorem \ref{mainTheorem} and Proposition \ref{prop:main2}. Note that Theorem \ref{thm:main} may be applied to the best rank $k$ approximation of $f$, which turns out to be contained in $H_f$ and may then be written as a linear combination of the critical rank $1$ tensors for $f$. This statement may be seen as a weak extension of the Eckart-Young Theorem to tensors. Indeed, in the case of matrices, the best rank $k$ approximation is exactly the sum of the first $k$ critical rank one tensors, by the Eckart-Young Theorem, see \cite{OP}. The polynomial $f$ itself may be written as a linear combination of its critical rank $1$ tensors, see Corollary \ref{cor:corf}; this statement may be seen as a {\it spectral decomposition for $f$}.
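As a concrete numerical illustration of part i) in the case $k=1$ (the quartic below is our own hypothetical example, not one taken from the paper): for $f=x^4+y^4$ one has $D(f)=yf_x-xf_y=4x^3y-4xy^3$, which vanishes on the unit circle exactly at the $d=4$ eigenvector directions, where $\nabla f(v)$ is parallel to $v$.

```python
from math import sqrt, isclose

# Hypothetical example (not from the paper): f(x, y) = x^4 + y^4, so d = 4.
# v is an eigenvector iff f . v^{d-1} = lambda*v, i.e. (1/d) grad f(v) is
# parallel to v; for binary forms this is D(f) = y*f_x - x*f_y = 0 at v.
def grad_f(vx, vy):
    return 4*vx**3, 4*vy**3

def D_f(vx, vy):
    """D(f) = y*f_x - x*f_y evaluated at v = (vx, vy)."""
    gx, gy = grad_f(vx, vy)
    return vy*gx - vx*gy

def eigenvalue(vx, vy):
    """For a unit eigenvector, lambda = <f, v^d> = (1/d) grad f(v) . v."""
    gx, gy = grad_f(vx, vy)
    return (gx*vx + gy*vy) / 4

s = sqrt(2) / 2
eigvecs = [(1.0, 0.0), (0.0, 1.0), (s, s), (s, -s)]   # the d = 4 directions
```

All four directions annihilate $D(f)$ (with eigenvalues $1,1,\frac12,\frac12$), while a generic unit vector such as $(3/5,4/5)$ does not; this matches the count of exactly $d$ critical rank one tensors.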
All these statements may be generalized to the larger class of tensors, not necessarily symmetric, in any dimension, see \cite{DOT}. In \S\ref{sec:lastreal} we report on some numerical experiments regarding the number of real critical rank $2$ tensors in $\mathrm{Sym}^4\mathbb{R}^2$. \section{Preliminaries} Let $V=\mathbb{R}^2$ equipped with the Euclidean scalar product. The associated quadratic form has the coordinate expression $x^2+y^2$, with respect to the orthonormal basis $x, y$. The scalar product can be extended to a scalar product on the tensor space $\mathrm{Sym}^dV$ of binary forms, which is $SO(V)$-invariant. For powers $l^d$, $m^d$ where $l, m\in V$, we set $\langle l^d, m^d\rangle : = \langle l, m\rangle^d$ and by linearity this defines the scalar product on the whole $\mathrm{Sym}^dV$ (see Lemma \ref{lema:scalarproduct}). Denote as usual $\left\|{f}\right\|=\sqrt{\langle f, f\rangle}$. For binary forms which split in the product of linear forms we have the formula \begin{equation}\label{eq:decomp}\langle l_1l_2\cdots l_d, m_1m_2\cdots m_d\rangle = \frac{1}{d!}\sum_{\sigma}\langle l_1,m_{\sigma(1)}\rangle \langle l_2,m_{\sigma(2)}\rangle \cdots \langle l_d,m_{\sigma(d)}\rangle \end{equation} The powers $l^d$ are exactly the tensors of rank one in $\mathrm{Sym}^dV$; they form a cone $C_d$ over the rational normal curve. The sums $l_1^d+\ldots +l_k^d$ are the tensors of rank $\le k$, with equality when the number of summands is minimal. The closure of the set of tensors of rank $\le k$, both in the Euclidean and in the Zariski topology, is a cone $\sigma_kC_d$, which is the $k$-secant variety of $C_d$. The Euclidean distance function $d(f,g)=\left\|f-g\right\|$ is our objective function. The optimization problem we are interested in is, given a real $f$, to minimize $d(f,g)$ under the constraint that $g\in \left(\sigma_kC_d\right)_\mathbb{R}$. This is equivalent to minimizing the squared function $d^2(f,g)$, which has the advantage of being algebraic.
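The defining rule $\langle l^d,m^d\rangle=\langle l,m\rangle^d$ can be sanity-checked in coordinates via the formula $\langle f,g\rangle=\sum_{i}\binom{d}{i}a_ib_i$ proved below in Lemma \ref{lema:scalarproduct}; the following sketch is ours, and the two linear forms are arbitrary choices.

```python
from math import comb

# Write (alpha*x + beta*y)^d = sum_i C(d,i) * a_i * x^i * y^(d-i), so that
# a_i = alpha^i * beta^(d-i); then <f, g> = sum_i C(d,i) * a_i * b_i should
# equal <alpha*x + beta*y, alpha'*x + beta'*y>^d.  Forms chosen arbitrarily.
def power_coeffs(alpha, beta, d):
    return [alpha**i * beta**(d - i) for i in range(d + 1)]

def scalar_product(a, b, d):
    return sum(comb(d, i) * a[i] * b[i] for i in range(d + 1))

d = 5
a = power_coeffs(2, -1, d)    # f = (2x - y)^5
b = power_coeffs(3, 4, d)     # g = (3x + 4y)^5
lhs = scalar_product(a, b, d)
rhs = (2*3 + (-1)*4)**d       # <2x - y, 3x + 4y>^5 = (6 - 4)^5
```

Both sides evaluate to $2^5=32$; the same routine also confirms $\|x^d\|=1$.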
The number of complex critical points of the square distance function $d^2$ is called the Euclidean distance degree (EDdegree \cite{DHOST}) of $\sigma_kC_d$ and has been computed for small values of $k, d$ in the rightmost chart in Table 4.1 of \cite{OSS}. We do not know a closed formula for these values, although \cite[Theorem 3.7]{OSS} computes them in the case of a general quadratic distance function, not $SO(2)$-invariant. \section{Critical points of the distance function} Let us recall the notion of eigenvector for symmetric tensors (see \cite{Lim, Qi},\cite[Theorem 4.4]{OP}). \begin{defn}\label{def:eigentensor} Let $f\in \mathrm{Sym}^d V$. We say that a nonzero rank $1$ tensor is a critical rank one tensor for $f$ if it is a critical point of the distance function from $f$ to the variety of rank $1$ tensors. It is convenient to write a critical rank one tensor in the form $\lambda v^d$ with $\left\|{v}\right\|=1$; in this way $v$ is defined up to $d$-th roots of unity and is called an eigenvector of $f$ with eigenvalue $\lambda$. \end{defn} \begin{remark} Let $d=2$ and let $f$ be a symmetric matrix. All the critical rank one tensors of $f$ have the form $\lambda v^2$, where $v$ is a classical eigenvector of norm $1$ for the symmetric matrix $f$, with eigenvalue $\lambda$. \end{remark} \begin{lem}\label{lem:eigentensor} Given $f\in\mathrm{Sym}^d V$, the point $\lambda v^d$ of rank $1$, with $\left\|{v}\right\|=1$, is a critical rank one tensor for $f$ if and only if $\langle f,v^{d-1}w\rangle= \lambda \langle v,w\rangle$ $\forall w\in V$, which can be written (identifying $V$ with $V^\vee$ according to the Euclidean scalar product) as $$f\cdot v^{d-1}= \lambda v,$$ with $\lambda=\langle f, v^d\rangle $. \end{lem} \begin{proof} The property of being a critical point is equivalent to $f-\lambda v^d$ being orthogonal to $v^{d-1}w$ $\forall w\in V$, which gives $\langle f, v^{d-1}w\rangle =\langle \lambda v^d,v^{d-1}w\rangle$ $\forall w\in V$.
The right-hand side is $\left\|{v}\right\|^{2d-2} \lambda \langle v,w\rangle=\lambda \langle v,w\rangle$, as we wanted. Setting $w=v$ we get $\langle f, v^{d}\rangle =\lambda$. \end{proof} On the other hand, eigenvectors correspond to critical points of the function $f(x,y)$ restricted to the circle $S^1=\{(x,y)|x^2+y^2=1\}$ (\cite{Lim, Qi}). By the Lagrange multiplier method, we can compute the eigenvectors of $f$ as the normalized solutions $(x,y)$ of: \begin{equation}\label{eq1} \mathrm{rank} \begin{bmatrix} f_{x} & f_{y} \\ x & y \end{bmatrix} \leq 1 \end{equation} These correspond to the roots of the discriminant polynomial $D(f)=yf_x-xf_y$. $D$ is a well-known differential operator which satisfies the Leibniz rule, i.e. $D(fg)=D(f)g+fD(g)$ $\forall f, g\in \mathrm{Sym}^d V$. For any $l=ax+by\in V$ denote $l^\perp=D(l)=-bx+ay$. Note that $\langle l,l^\perp\rangle =0$. We have the following: \begin{prop}\label{prop:eigendisc} Consider $f(x,y)\in\mathrm{Sym}^dV$: \begin{itemize} \item If $v$ is an eigenvector of $f$ then $D(v)=v^\perp$ is a linear factor of $D(f)$. \item Assume that $D(f)$ splits as a product of distinct linear factors and $v^\perp|D(f)$; then $\frac{v}{\left\|{v}\right\|}$ is an eigenvector of $f$. \end{itemize} \end{prop} We postpone the proof until after Prop. \ref{prop:critical}. Now let us distinguish some cases in terms of $D(f)$ (see Theorem $2.7$ of \cite{ASS}): \begin{itemize} \item if $d$ is odd: $D(f)= 0$ if and only if $f= 0$; in particular $D:\mathrm{Sym}^dV\rightarrow \mathrm{Sym}^dV$ is an isomorphism. \item if $d$ is even: $D(f)=0$ if and only if $f=c(x^2+y^2)^{d/2}$ for some $c\in\mathbb{R}$. We will determine the eigenvectors in this case in Lemma \ref{lemma2}. The image of $D:\mathrm{Sym}^dV\rightarrow \mathrm{Sym}^dV$ is the space orthogonal to $f=(x^2+y^2)^{d/2}$.
\end{itemize} \begin{lem} (\cite{LS}, Section $2$)\label{lema:scalarproduct} Suppose $f=\sum_{i=0}^{d} \binom{d}{i} a_i x^iy^{d-i}$ and $g=\sum_{i=0}^{d} \binom{d}{i} b_i x^iy^{d-i}$. Then we get: \begin{equation}\label{eq:scalar} \langle f,g\rangle:=\sum_{i=0}^{d} \binom{d}{i} a_ib_i \end{equation} where $\langle\,,\,\rangle$ is the scalar product defined in the introduction. \end{lem} \begin{proof} By linearity we may assume $f=(\alpha x+\beta y)^d$ and $g=(\alpha'x+\beta' y)^d$. The right-hand side of (\ref{eq:scalar}) gives $$\langle f,g\rangle=\sum_{i=0}^{d} \binom{d}{i}(\alpha\alpha')^i(\beta \beta')^{d-i}=(\alpha\alpha'+\beta\beta')^d$$ which agrees with $\langle \alpha x+\beta y,\alpha' x+\beta' y\rangle^d$. \end{proof} \begin{lem}\label{remark:2} Let $f=(x^2+y^2)^{d/2}\in \mathrm{Sym}^d V$ with $d$ even, and $v=\alpha x+\beta y\in V$, $v\neq 0$; then $\langle v^d,f\rangle=\left\|{v}\right\|^d$. \end{lem} \begin{proof} By applying (\ref{eq:decomp}) with a grain of salt (e.g. decomposing $x^2+y^2$ into two conjugate linear factors) we get $$\langle v^d,f\rangle=\langle (x^2+y^2),v^2\rangle^{d/2} = (\alpha^2+\beta^2)^{d/2}=\left\|{v}\right\|^d.$$ \end{proof} \begin{lem}\label{lemma2} If $f=(x^2+y^2)^{d/2}\in \mathrm{Sym}^d V$ then, for every nonzero $v\in V$, $\langle f, v^{d-1}w\rangle=\left\|{v}\right\|^{d-2}\langle v, w\rangle$. In particular every vector $v$ of norm $1$ is an eigenvector of $f$ with eigenvalue $1$. \end{lem} \begin{proof} As in Lemma \ref{remark:2} we get $$\langle f, v^{d-1}w\rangle = {\langle (x^2+y^2),v^2\rangle}^{d/2-1} \langle (x^2+y^2),vw\rangle = \left\|{v}\right\|^{d-2}\langle v, w\rangle.$$ The second part follows by putting $w=v$ and equating with Lemma \ref{remark:2}. We get $\langle f, v^{d-1}w\rangle=\langle v^d,f\rangle\langle v, w\rangle$ only in the case $\left\|{v}\right\|=1$. \end{proof} \begin{remark} Lemma \ref{lemma2} extends the fact that every vector of norm $1$ is an eigenvector of the identity matrix with eigenvalue $1$.
The geometric interpretation of this lemma is that the $2$-dimensional cone of rank $1$ degree $d$ binary forms cuts any sphere centered at $(x^2+y^2)^{d/2}$ in a curve. \end{remark} \begin{lem}\label{lem} The normal space at $l^d\in C_d$ coincides with $\left(l^\perp\right)^2\cdot \mathrm{Sym}^{d-2}V$. \end{lem} \begin{proof} The tangent space at $l^d$ is spanned by $l^{d-1}V$ and has dimension $2$. The elements in $\left(l^\perp\right)^2\cdot \mathrm{Sym}^{d-2}V$ are orthogonal to the tangent space; moreover the dimension of this space is the expected one, $d-1$. \end{proof} \begin{defn}\label{def:kcritical} We say that $g\in\mathrm{Sym}^dV$ is a critical rank $k$ tensor for $f$ if it is a critical point of the distance function $d(f,\_)$ restricted to $\sigma_kC_d$. \end{defn} \begin{prop}\label{prop:critical} Let $2k\le d$. A polynomial $g=\sum_{i=1}^k \mu_i l_i^d \in \sigma_kC_d$ is a critical rank $k$ tensor for $f$ if and only if there exists $h\in\mathrm{Sym}^{d-2k}V$ such that \begin{equation}\label{eq} f=\sum_{i=1}^k \mu_i l_i^d +h\cdot \prod_{i=1}^k\left(l_i^\perp\right)^2 \end{equation} \end{prop} \begin{proof} By the Terracini Lemma, the tangent space at the point $g\in\sigma_kC_d$ is given by the sum of the $k$ tangent spaces at $l_i^d=(a_ix+b_iy)^d$. By Lemma \ref{lem} the normal space at each $l_i^d$ is given by $\left(l_i^\perp\right)^2\cdot \mathrm{Sym}^{d-2}V$. Hence, the normal space at $g$ is given by the intersection of the $k$ normal spaces, which consists of the polynomials $\prod_{i=1}^k\left(l_i^\perp\right)^2 \cdot h$ where $h\in\mathrm{Sym}^{d-2k}V$. Now suppose that $g$ is a critical rank $k$ tensor for $f$. This means that $f-g$ is in the normal space. Hence, $f-g$ is of the form $\prod_{i=1}^k\left(l_i^\perp\right)^2 \cdot h$ for some $h\in\mathrm{Sym}^{d-2k}V$. Conversely, if $(\ref{eq})$ holds, then $f-g$ belongs to the normal space at $g$ by the construction of the normal space.
\end{proof} \begin{proof} [Proof of Prop. \ref{prop:eigendisc}] If $v$ is an eigenvector of $f$ then $\langle f,v^d\rangle v^d$ is a critical rank $1$ tensor for $f$ (by Lemma \ref{lem:eigentensor}). By Prop. \ref{prop:critical}, $f=\langle f,v^d\rangle v^d+h \left(v^\perp\right)^2$ where $h\in\mathrm{Sym}^{d-2}V$. Applying the operator $D$ to $f$ we get by the Leibniz rule, since $D(v)=v^{\perp}$ and $D(v^{\perp})=-v$: $$D(f)=\langle f,v^d\rangle dv^{d-1}v^\perp+D(h)\left(v^\perp\right)^2-2vv^\perp h\Longrightarrow v^\perp|D(f)$$ Conversely, since we assume that $D(f)$ has $d$ distinct linear factors, the $d$ distinct eigenvectors of $f$ account for all the linear factors of $D(f)$. \end{proof} This proposition is connected to Theorem $2.5$ of \cite{LS}. \section{The singular space}\label{sec:singularspace} In \cite{OP} the singular space $H_f$ was introduced as the hyperplane orthogonal to $D(f)=yf_x-xf_y$. It follows from Prop. \ref{prop:eigendisc} that the critical rank $1$ tensors for $f$ belong to $H_f$ (since the eigenvectors of $f$ can be computed as the solutions of (\ref{eq1}), which coincide with the roots of $D(f)$ for binary forms), see \cite[Def. 5.3]{OP}. It is worthwhile to give a direct proof that the critical rank $1$ tensors for $f$ belong to $H_f$, the hyperplane orthogonal to $D(f)$, based on Prop. \ref{prop:critical}. Let $\mu l^d$ be a critical rank $1$ tensor for $f$; then by Prop. \ref{prop:critical} there exists $h\in\mathrm{Sym}^{d-2}V$ such that $f= \mu l^d +h\left(l^\perp\right)^2$. We have to prove $\langle D(f), l^d\rangle =0$, which follows immediately from (\ref{eq:decomp}) since $l^{\perp}$ divides $D(f)$ by Prop. \ref{prop:eigendisc}. \begin{lem}\label{lem:lmperp} Let $l, m\in V$. Then $\langle l^\perp, m\rangle +\langle m^\perp, l\rangle =0$. \end{lem} \begin{proof} Straightforward. \end{proof} Our main result is \begin{thm}\label{mainTheorem} The critical points of the form $\sum_{i=1}^{k}\mu_i l_i^d$ of the distance function $d(f,-)$ restricted to $\sigma_kC_d$ belong to $H_f$.
\end{thm} \begin{proof} Given a decomposition $f= \sum_{i=1}^k \mu_i l_i^d +h\cdot \prod_{i=1}^k\left(l_i^\perp\right)^2$, with $h\in\mathrm{Sym}^{d-2k}V$, we compute \begin{equation}\label{eq:3sum} D(f)=d\sum_{i=1}^k \mu_il_i^{\perp}l_i^{d-1}-\sum_{i=1}^k2l_il_i^{\perp}\prod_{j\neq i}^k\left(l_j^\perp\right)^2h+D(h)\prod_{i=1}^k\left(l_i^\perp\right)^2\end{equation} and we have to prove \begin{equation}\label{eq:dfli}\langle D(f),\sum_{j=1}^k \mu_j l_j^d\rangle =0.\end{equation} We compute separately the contribution of the three summands in (\ref{eq:3sum}) to the scalar product with $\mu_j l_j^d$. We have for the first summand $$\langle \left(\sum_{i=1}^k\mu_i l_i^{\perp}l_i^{d-1}\right), \mu_j l_j^d\rangle = \sum_{i=1}^k\mu_i\mu_j\langle l_i^\perp, l_j\rangle\langle l_i, l_j\rangle^{d-1}$$ Summing over $j$ we get zero: the terms with $i=j$ vanish since $\langle l_i^\perp,l_i\rangle=0$, while the terms with $i\neq j$ cancel in pairs by Lemma \ref{lem:lmperp}, the factor $\mu_i\mu_j\langle l_i, l_j\rangle^{d-1}$ being symmetric in $i$ and $j$. We have for the second summand $$\langle\left(\sum_{i=1}^kl_il_i^{\perp}\prod_{p\neq i}^k\left(l_p^\perp\right)^2h\right),l_j^d \rangle = 0,$$ since by (\ref{eq:decomp}) every term contains the vanishing factor $\langle l_j^\perp, l_j\rangle$: for $i=j$ it comes from $l_j^\perp$, and for $i\neq j$ from $\left(l_j^\perp\right)^2$. We have for the third summand $$\langle\left(D(h)\prod_{i=1}^k\left(l_i^\perp\right)^2\right), l_j^d\rangle = 0$$ for the same reason. Summing up, this proves (\ref{eq:dfli}) and then the thesis. \end{proof} \begin{example} If $f=x^3y+2y^4$ then there are $6$ critical points of the form $l_1^4+l_2^4$, together with $x^3y$, which lies on the tangent line to $C_4$ at $x^4$. The latter cannot be written as $l_1^4+l_2^4$, and indeed it has rank $4$. \end{example} \section{The scheme of eigenvectors for binary forms} Let $f\in \mathrm{Sym}^d V$ be a symmetric tensor, with $\mathrm{dim} V=2$. We denote by $Z$ the scheme defined by the polynomial $D(f)$, embedded in $\mathbb{P}(\mathrm{Sym}^d V)$ by the $d$-Veronese embedding of $\mathbb{P} V$ (see \cite{AEKP} for the case of matrices). \begin{prop}\label{prop:main2} $\langle Z \rangle = H_f$.
\end{prop} \begin{proof} $(i)$ If $D(f)$ has $d$ distinct roots then it is known that $\langle Z \rangle\subseteq H_f$, since $H_f$ is the hyperplane orthogonal to $D(f)$ (Theorem \ref{mainTheorem} with $k=1$). $(ii)$ Now let us suppose that $D(f)$ has multiple roots but $f\neq (x^2+y^2)^{d/2}$. We show that $\langle Z\rangle\subseteq H_f$ by a limit argument. For every tensor $f$ such that $f\neq 0$ and $f\neq(x^2+y^2)^{d/2}$ there exists a sequence $(f_n)$ such that $f_n\rightarrow f$ and $D(f_n)$ has distinct roots for all $n$. Then $H_{f_n}\rightarrow H_f$, because the differential operator is continuous. Moreover $H_{f_n}$ is a hyperplane for all $n$. On the other hand, by definition we have that $\langle Z_{f_n}\rangle$ is the span of the roots of $D(f_n)$. When $f_n$ goes to the limit we get that $\langle Z_{f_n}\rangle\rightarrow \langle Z\rangle$. Hence, $\langle Z\rangle\subseteq H_f$. $(iii)$ In the case that $f=(x^2+y^2)^{d/2}$ with $d$ even, by Lemma \ref{lemma2} we know that every unit vector is an eigenvector and $H_f$ is the ambient space. Hence, $\langle Z\rangle=H_f$. We prove now that $\mathrm{dim} \langle Z \rangle=\mathrm{dim} H_f$ in cases $(i)$ and $(ii)$. Since $\mathcal{I}_{Z,\mathbb{P}^1}=\mathcal{O}_{\mathbb{P}^1}(-d)$, $$\mathrm{codim}\langle Z\rangle=h^0(\mathcal{I}_{Z,\mathbb{P}^1}(d))=h^0(\mathcal{O}_{\mathbb{P}^1}(-d+d))=h^0(\mathcal{O}_{\mathbb{P}^1})=1$$ which coincides with the codimension of $H_f$. \def\niente{ First we suppose that $D(f)$ has $d$ distinct roots. It is known that $\langle Z \rangle\subseteq H_f$, since $H_f$ is the hyperplane orthogonal to $D(f)$. We prove that the equality holds by showing that $\mathrm{dim} \langle Z \rangle=\mathrm{dim} H_f$. If we embedded the $d$ distinct eigenvectors $v_1,\ldots,v_d$ into the rational normal curve of degree $d$, $C_d$, it turns out $d$ independent elements $v_1^d,\ldots,v_d^d$.
The embedding of each of the eigenvectors in $C_d$ is of the form: $$(1:v_i)\mapsto (1:v_i:v_i^2:\ldots:v_i^d)\quad i=1,\ldots,d$$ We know that these points are independent by using \textit{Vandermonde determinant} since \begin{equation*} \mathrm{det} \begin{bmatrix} 1 & v_1 & v_1^2 & \ldots & v_1^d\\ 1 & v_2 & v_2^2 & \ldots & v_2^d\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & v_d & v_d^2 & \ldots & v_d^d \end{bmatrix} =\prod_{i<j}(v_j-v_i)\neq 0 \end{equation*} Hence, the dimension of each of the spaces are equal. Let us suppose that $D(f)$ has multiple roots but $f\neq (x^2+y^2)^{d/2}$. First we show that $\langle Z\rangle\subseteq H_f$ by a limit argument. For every tensor $f$ such that $f\neq 0$ and $f\neq(x^2+y^2)^{d/2}$ there exists a sequence $(f_n)$ such that $f_n\rightarrow f$ and $D(f_n)$ has distinct roots for all $n$. Then, $H_{f_n}\rightarrow H_f$ because the differential operator is continuous. Moreover $H(f_n)$ is a hyperplane for all $n$. On the other hand, by definition we have that $\langle Z_{f_n}\rangle$ is the spanned of the roots of $D(f_n)$. When $f_n$ goes to the limit we get that $\langle Z_{f_n}\rangle\rightarrow \langle Z\rangle$. Hence, $\langle Z\rangle\subseteq H_f$. Now we prove the other inclusion. Suppose that we have $r$ distinct eigenvectors $v_1,\ldots,v_r$ with multiplicities $m_1,\ldots,m_r$ and $m_1+\ldots +m_r=d$. If we embedded the $r$ distinct eigenvectors into the rational normal curve of degree $d$, $C_d$, it turns out $r$ independent elements $v_1^d,\ldots,v_r^d$. 
The embedding of each of the eigenvectors in $C_d$ is of the form: $$(1:v_i)\mapsto (1:v_i:v_i^2:\ldots:v_i^{d-1}:v_i^d)\quad i=1,\ldots,r$$ We consider also its derivatives: $$(1:v_i)\mapsto (0:1:2v_i:\ldots:(d-1)v_i^{d-2}:dv_i^{d-1})$$ $$(1:v_i)\mapsto (0:0:2:\ldots:(d-1)(d-2)v_i^{d-3}:d (d-1)v_i^{d-2})$$ $$\vdots$$ $$(1:v_i)\mapsto (0:0:0:\ldots:\binom{d-1}{m_i-1}v_i^{d-m_i}:\binom{d}{m_i-1}v_i^{d+1-m_i})$$ We know that these points are independent by using the \textit{Confluent Vandermonde determinant} since, \begin{equation*} \mathrm{det} \begin{bmatrix} 1 & v_1 & v_1^2 & v_1^3 & \ldots & v_1^{d-1} & v_1^d\\ 0 & 1 & 2v_1 & 3 v_1^2 & \ldots & (d-1)v_1^{d-2} & dv_1^{d-1}\\ 0 & 0 & 2 & 6 v_1 & \ldots & (d-1)(d-2)v_1^{d-3} & d (d-1)v_1^{d-2}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & v_2 & v_2^2 & v_2^3 & \ldots & v_2^{d-1} & v_2^d\\ 0 & 1 & 2v_2 & 3 v_2^2 & \ldots & (d-1)v_2^{d-2} & dv_2^{d-1}\\ 0 & 0 & 2 & 6 v_2 & \ldots & (d-1)(d-2)v_2^{d-3} & d (d-1)v_2^{d-2}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & v_r & v_r^2 & v_r^3 & \ldots & v_r^{d-1} & v_r^d\\ 0 & 1 & 2v_r & 3 v_r^2 & \ldots & (d-1)v_r^{d-2} & dv_r^{d-1}\\ 0 & 0 & 2 & 6 v_r & \ldots & (d-1)(d-2)v_r^{d-3} & d (d-1)v_r^{d-2}\\ \vdots & \vdots & \vdots & \ddots & \vdots \end{bmatrix} =\prod_{1\leq i<j\leq r}(v_j-v_i)^{m_i m_j}\neq 0 \end{equation*} Hence, the dimension of each of the spaces are equal. Finally, if $f=(x^2+y^2)^{d/2}$ with $d$ even, then by Lemma \ref{lemma2} we know that every nonzero vector is an eigenvector and $H_f$ is the ambient space. Hence, $\langle Z\rangle=H_f$.} \end{proof} As a consequence we obtain the following corollary, which may be seen as a {\it Spectral Decomposition} of any binary form $f$. \begin{cor}\label{cor:corf} Any binary form $f\in \mathrm{Sym}^d V$ with $\mathrm{dim} V=2$ can be written as a linear combination of the critical rank one tensors for $f$. 
\end{cor} The previous statement holds even in the special case $d$ even and $f=(x^2+y^2)^{d/2}$, since from \cite[Theorem 9.5]{Rez} there exists $c_d\in{\mathbb R}$ such that the following decomposition holds $\forall\phi\in{\mathbb R}$ $$(x^2+y^2)^{d/2}=c_d\sum_{k=0}^{d/2}\left[\cos(\frac{2k\pi}{d+2}+\phi)x+\sin(\frac{2k\pi}{d+2}+\phi)y\right]^d$$ In this decomposition the summands on the right-hand side correspond to $(d+2)/2$ consecutive vertices of a regular $(d+2)$-gon. In the $d=2$ case, the Spectral Theorem asserts that any binary quadratic form $f\in\mathrm{Sym}^2\mathbb{R}^2$ can be written as a sum of its rank one critical tensors. This statement fails for $d\ge 3$, as can already be checked on the examples $f=x^d+y^d$ for $d\ge 3$, where only two among the $d$ rank one critical tensors are used, namely $x^d$ and $y^d$, and the coefficients of the remaining $d-2$ rank one critical tensors in the Spectral Decomposition of $f$ are zero. \section{Real critical rank $2$ tensors for binary quartics} \label{sec:lastreal} We recall the following result by M. Maccioni. \begin{thm}\label{thm:maccioni}(Maccioni, \cite[Theorem 1]{Mac}) Let $f$ be a binary form. $$\# \text{ real roots of } f \leq \# \text{ real critical rank 1 tensors for } f$$ The inequality is sharp; moreover it is the only constraint between the number of real roots and the number of real critical rank $1$ tensors, beyond parity mod $2$. \end{thm} As a consequence, as was first proved in \cite{ASS}, hyperbolic binary forms (i.e. forms with only real roots) have all their critical rank $1$ tensors real. We attempted to extend Theorem \ref{thm:maccioni} to rank $2$ critical tensors. Our description is not yet complete, and we report on some numerical experiments in the space $\mathrm{Sym}^4\mathbb{R}^2$. From these experiments it seems that the constraints on the number of real rank $2$ critical tensors are weaker than for rank $1$ critical tensors.
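The decomposition of $(x^2+y^2)^{d/2}$ recalled above (from \cite[Theorem 9.5]{Rez}) can be verified numerically; the sketch below is ours, evaluates the right-hand side sum for $d=4$ at a few arbitrarily chosen points, and checks that its ratio to $(x^2+y^2)^{d/2}$ is a constant independent of the point and of $\phi$ (so that $c_4=8/9$).

```python
from math import cos, sin, pi, isclose

# Numerical check (ours, not from the paper) of the identity
# (x^2+y^2)^(d/2) = c_d * sum_{k=0}^{d/2} [cos(t_k)*x + sin(t_k)*y]^d,
# where t_k = 2k*pi/(d+2) + phi and phi is arbitrary.
def rhs_sum(d, X, Y, phi):
    return sum((cos(2*k*pi/(d+2) + phi)*X + sin(2*k*pi/(d+2) + phi)*Y)**d
               for k in range(d//2 + 1))

d, phi = 4, 0.3                                  # phi is an arbitrary choice
points = [(0.7, -1.2), (2.0, 0.5), (-0.3, 0.9)]  # arbitrary sample points
ratios = [rhs_sum(d, X, Y, phi) / (X*X + Y*Y)**(d//2) for X, Y in points]
# All ratios agree, giving c_4 = 1/ratio = 8/9.
```

For $d=2$ the same routine returns $X^2+Y^2$ exactly, i.e. $c_2=1$, recovering the Spectral Theorem case.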
For quartic binary forms the computation of the critical rank $2$ tensors is easier, since the dual variety of the secant variety $\sigma_2(C_4)$ is given by the quartics which are squares, which form a smooth variety. The number of complex critical rank $2$ tensors for a general binary form of degree $d$ was conjectured in \cite{OSS} to be $\frac{3}{2}d^2-\frac{9}{2}d+1$. For $d=4$ this number is $7$, which can be confirmed by a symbolic computation on a random rational binary quartic. In conclusion, for a general binary quartic there are $4$ complex critical rank $1$ tensors and $7$ complex critical rank $2$ tensors. The following table reports some computations done for the case of binary quartic forms, by testing several different quartics. The appearance of ``yes'' in the last column means that we have found an example of a binary quartic with the prescribed number of distinct and simple real roots, real rank $1$ critical tensors and real critical rank $2$ tensors. Note that we have not found any quartic with the maximum number of seven real rank $2$ critical tensors; we wonder whether they exist. \begin{center} \begin{tabular}{c | c| c |c| c} &\#\text{real roots}& \#\text{real critical rank 1 tensors} & \#\text{real critical rank 2 tensors} & \\ \hline & $0$ & $2$ & $3$ & yes\\ & $2$ & $2$ & $3$ & yes\\ & $0$ & $2$ & $5$ & yes\\ & $2$ & $2$ & $5$ & yes\\ & $0$ & $4$ & $3$ & yes\\ & $2$ & $4$ & $3$ & yes\\ & $4$ & $4$ & $3$ & yes\\ & $0$ & $4$ & $5$ & yes\\ & $2$ & $4$ & $5$ & yes\\ & $4$ & $4$ & $5$ & ?\\ & * & * & 7 & ? \end{tabular} \end{center} \end{document}
\begin{document} \title[Generalized surface quasi-geostrophic equations] {Dissipative models generalizing the 2D Navier-Stokes and the surface quasi-geostrophic equations} \author[Dongho Chae, Peter Constantin and Jiahong Wu]{Dongho Chae$^{1}$, Peter Constantin$^{2}$ and Jiahong Wu$^{3}$} \address{$^1$ Department of Mathematics, Sungkyunkwan University, Suwon 440-746, Korea} \address{$^2$Department of Mathematics, University of Chicago, 5734 S. University Avenue, Chicago, IL 60637, USA.} \address{$^3$Department of Mathematics, Oklahoma State University, 401 Mathematical Sciences, Stillwater, OK 74078, USA.} \email{[email protected]} \email{[email protected]} \email{[email protected]} \begin{abstract} This paper is devoted to the global (in time) regularity problem for a family of active scalar equations with fractional dissipation. Each component of the velocity field $u$ is determined by the active scalar $\theta$ through $\mathcal{R} \Lambda^{-1} P(\Lambda) \theta$ where $\mathcal{R}$ denotes a Riesz transform, $\Lambda=(-\Delta)^{1/2}$ and $P(\Lambda)$ represents a family of Fourier multiplier operators. The 2D Navier-Stokes vorticity equations correspond to the special case $P(\Lambda)=I$ while the surface quasi-geostrophic (SQG) equation to $P(\Lambda) =\Lambda$. We obtain the global regularity for a class of equations for which $P(\Lambda)$ and the fractional power of the dissipative Laplacian are required to satisfy an explicit condition. In particular, the active scalar equations with any fractional dissipation and with $P(\Lambda)= (\log(I-\Delta))^\gamma$ for any $\gamma>0$ are globally regular. 
\end{abstract} \maketitle \section{Introduction} \label{intr} \setcounter{equation}{0} This paper is devoted to the dissipative active scalar equation \begin{equation} \label{general} \left\{ \begin{array}{l} \partial_t \theta + u\cdot\nabla \theta + \kappa (-\Delta)^\alpha \theta=0, \quad x\in \mathbb{R}^d, \, t>0, \\ u = (u_j), \quad u_j = \mathcal{R}_l \Lambda^{-1} P(\Lambda)\, \theta,\quad 1\le j, \,l\le d, \end{array} \right. \end{equation} where $\kappa>0$ and $\alpha>0$ are parameters, $\theta =\theta(x,t)$ is a scalar function of $x\in \mathbb{R}^{d}$ and $t\ge 0$, $u$ denotes a velocity field with each of its components $u_j$ ($1\le j\le d$) given by a Riesz transform $\mathcal{R}_l$ applied to $\Lambda^{-1} P(\Lambda)\, \theta$. Here the operators $\Lambda = (-\Delta)^{\frac12}$, $P(\Lambda)$ and $\mathcal{R}_l$ are defined through their Fourier transforms, $$ \widehat{\Lambda f}(\xi) = |\xi| \widehat{f}(\xi), \quad \widehat{P(\Lambda) f}(\xi) = P(|\xi|) \widehat{f}(\xi), \quad \widehat{\mathcal{R}_l f}(\xi)= \frac{i\,\xi_l}{|\xi|}\, \widehat{f}(\xi), $$ where $1\le l\le d$ is an integer, $\widehat{f}$ or $\mathcal{F}(f)$ denotes the Fourier transform, $$ \widehat{f}(\xi) = \mathcal{F}(f)(\xi) =\frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} e^{-i x\cdot \xi} f(x)\,dx. $$ We are primarily concerned with the global (in time) regularity issue concerning solutions of (\ref{general}) with a given initial data \begin{equation} \label{IC} \theta(x,0) =\theta_0(x), \quad x\in \mathbb{R}^d. \end{equation} \vskip .1in A special example of (\ref{general}) is the 2D active scalar equation \begin{equation} \label{general2d} \left\{ \begin{array}{l} \partial_t \theta + u\cdot\nabla \theta + \kappa (-\Delta)^\alpha \theta=0, \quad x\in \mathbb{R}^2, \, t>0, \\ u = \nabla^\perp \psi\equiv (-\partial_{x_2}\psi, \partial_{x_1} \psi), \quad \Delta \psi = P(\Lambda)\, \theta \end{array} \right. 
\end{equation} which includes as special cases the 2D Navier-Stokes vorticity equation \begin{equation}\label{euler} \left\{ \begin{array}{l} \partial_t \omega + u \cdot \nabla \omega-\nu \Delta \omega =0,\\ u =\nabla^\perp \psi, \quad \Delta\psi=\omega \end{array} \right. \end{equation} and the dissipative surface quasi-geostrophic (SQG) equation \begin{equation}\label{SQG} \left\{ \begin{array}{l} \partial_t \theta + u \cdot \nabla \theta + \kappa (-\Delta)^\alpha \theta= 0,\\u=\nabla^\perp \psi, \quad -\Lambda\psi = \theta. \end{array} \right. \end{equation} There are numerous studies on the Navier-Stokes equations and the global regularity in the 2D case has long been established (see e.g. \cite{ConF}, \cite{DoGi} and \cite{MaBe}). The SQG equation models the dynamics of the potential temperature $\theta$ of the 3D quasi-geostrophic equations on the 2D horizontal boundaries and is useful in modeling atmospheric phenomena such as frontogenesis (see e.g. \cite{CMT}, \cite{MaTa} and \cite{Pe}). The SQG equation (inviscid or dissipative) is also mathematically important. As detailed in \cite{CMT}, the behavior of its strongly nonlinear solutions is strikingly analogous to that of the potentially singular solutions of the 3D incompressible Navier-Stokes and the Euler equations. The global regularity issue concerning the SQG equation has recently been studied very extensively and much important progress has been made (see e.g.
\cite{AbHm}, \cite{Bae}, \cite{Bar}, \cite{Blu}, \cite{CaS}, \cite{CV}, \cite{CaFe}, \cite{Ch}, \cite{ChJDE}, \cite{Cha}, \cite{Cha2}, \cite{Cha4}, \cite{CCCF}, \cite{ChL}, \cite{Cham}, \cite{CMZ1}, \cite{Chen}, \cite{Con}, \cite{CCW}, \cite{CIW}, \cite{CLS}, \cite{CMT}, \cite{CNS}, \cite{CWnew1}, \cite{CWnew2}, \cite{Cor}, \cite{CC}, \cite{CoFe1}, \cite{CoFe2}, \cite{CoFe3}, \cite{CFMR}, \cite{Dab}, \cite{DHLY}, \cite{DoCh}, \cite{Dong}, \cite{DoDu}, \cite{DoLi0}, \cite{DoLi}, \cite{DoPo}, \cite{DoPo2}, \cite{FPV}, \cite{FrVi}, \cite{Gil}, \cite{HPGS}, \cite{HmKe}, \cite{HmKe2}, \cite{Ju}, \cite{Ju2}, \cite{KhTi}, \cite{Ki1}, \cite{Ki2}, \cite{Kinew1}, \cite{KN1}, \cite{KN2}, \cite{KNV}, \cite{Li}, \cite{LiRo}, \cite{Maj}, \cite{MaBe}, \cite{MaTa}, \cite{Mar1}, \cite{Mar2}, \cite{Mar3}, \cite{MarLr}, \cite{May}, \cite{MayZ}, \cite{MiXu}, \cite{Mi}, \cite{NiSc}, \cite{OhYa1}, \cite{Pe}, \cite{ReDr}, \cite{Res}, \cite{Ro1}, \cite{Ro2}, \cite{Sch}, \cite{Sch2}, \cite{Si}, \cite{Si2}, \cite{Sta}, \cite{WaJi}, \cite{WaZh}, \cite{Wu97}, \cite{Wu2}, \cite{Wu01}, \cite{Wu02}, \cite{Wu3}, \cite{Wu4}, \cite{Wu41}, \cite{Wu31}, \cite{Wu77}, \cite{Yu}, \cite{Yuan}, \cite{YuanJ}, \cite{Zha0}, \cite{Zha}, \cite{Zhou}, \cite{Zhou2}). In particular, the global regularity for the critical case $\alpha=1/2$ has been successfully established (\cite{CV}, \cite{KNV}). The situation in the supercritical case $\alpha<1/2$ is only partially understood at the time of writing. The results in \cite{CWnew1}, \cite{CWnew2} and \cite{DoPo} imply that any solution of the supercritical SQG equation can develop potential finite time singularity only in the regularity window between $L^\infty$ and $C^\delta$ with $\delta<1-2\alpha$. Several very recent preprints on the supercritical case also revealed some very interesting properties of the supercritical dissipation (\cite{Bar}, \cite{Dab}, \cite{Kinew1}, \cite{Si}). 
\vskip .1in Our goal here is to establish the global regularity of (\ref{general}) for more general operators $P$. In particular, we are interested in the global regularity of the intermediate equations between the 2D Navier-Stokes equation and the supercritical SQG equation. This paper is a continuation of our previous study on the inviscid counterpart of (\ref{general}) (\cite{ChCW}). The consideration here is restricted to $P$ satisfying the following condition. \begin{assume} \label{P_con} The symbol $P=P(|\xi|)$ assumes the following properties: \begin{enumerate} \item $P$ is continuous on $\mathbb{R}^d$ and $P\in C^\infty(\mathbb{R}^d\setminus\{0\})$; \item $P$ is radially symmetric; \item $P=P(|\xi|)$ is nondecreasing in $|\xi|$; \item There exist two constants $C$ and $C_0$ such that \begin{equation*} \sup_{2^{-1} \le |\eta| \le 2}\, \left|(I-\Delta_\eta)^n \,P(2^j |\eta|)\right| \le C\, P(C_0 \, 2^j) \end{equation*} for any integer $j$ and $n=1,2,\cdots, 1+ \left[\frac{d}{2}\right]$. \end{enumerate} \end{assume} We remark that (4) in Condition \ref{P_con} is a very natural condition on symbols of Fourier multiplier operators and is similar to the main condition in the Mihlin-H\"{o}rmander Multiplier Theorem (see e.g. \cite[p.96]{St}). For notational convenience, we also assume that $P\ge 0$. Some special examples of $P$ are \begin{eqnarray*} && P(\xi) = \left(\log(1 +|\xi|^2)\right)^\gamma \quad\mbox{with $\gamma\ge 0$}, \\ && P(\xi) = \left(\log(1+\log(1 +|\xi|^2))\right)^\gamma \quad\mbox{with $\gamma\ge 0$}, \\ && P(\xi) = |\xi|^\beta \quad\mbox{with $\beta\ge 0$},\\ && P(\xi) = (\log(1 +|\xi|^2))^\gamma\,|\xi|^\beta \quad\mbox{with $\gamma\ge 0$ and $\beta\ge 0$}. \end{eqnarray*} \vskip .1in As in the study of the Navier-Stokes and the Euler equations, the quantity $\|\nabla u\|_{L^\infty}$ plays a crucial role in the global regularity issue. 
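\vskip .1in We remark in passing that (4) in Condition \ref{P_con} is readily verified for the examples listed above. For instance, for the pure power $P(|\xi|)=|\xi|^\beta$ with $\beta\ge 0$ we have
$$
(I-\Delta_\eta)^n\, P(2^j|\eta|) = 2^{j\beta}\, (I-\Delta_\eta)^n\, |\eta|^\beta,
$$
and on the annulus $2^{-1}\le |\eta|\le 2$ the function $|\eta|^\beta$, together with its derivatives up to order $2n$, is bounded by a constant depending only on $\beta$ and $n$. Consequently,
$$
\sup_{2^{-1} \le |\eta| \le 2}\, \left|(I-\Delta_\eta)^n \,P(2^j |\eta|)\right| \le C\, 2^{j\beta} = C\, P(2^j),
$$
so (4) holds with $C_0=1$. The logarithmic examples can be checked in the same manner.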
In our previous work on the inviscid counterpart of (\ref{general}), we established bounds for the building blocks $\|\nabla \Delta_j u\|_{L^q}$ and $\|\nabla S_N u\|_{L^q}$ for $1\le q\le \infty$. More precisely, the following theorem is proven in \cite{ChCW}. \begin{thm} \label{nablau} Let $u: \mathbb{R}^d\to \mathbb{R}^d$ be a vector field. Assume that $u$ is related to a scalar $\theta$ by $$ (\nabla u)_{jk} = \mathcal{R}_l \mathcal{R}_m\, P(\Lambda) \, \theta, $$ where $1\le j,k,l,m\le d$, $(\nabla u)_{jk}$ denotes the $(j,k)$-th entry of $\nabla u$, $\mathcal{R}_l$ denotes the Riesz transform, and $P$ obeys Condition \ref{P_con}. Then, for any integers $j\ge 0$ and $N\ge 0$, \begin{eqnarray} \|S_N \nabla u\|_{L^p} &\le& C_{p,d}\, P(C_0 2^N)\,\|S_{N} \theta\|_{L^p}, \quad 1<p<\infty, \label{bound1} \\ \|\Delta_j \nabla u\|_{L^q} &\le& C_d\, P(C_0 2^j)\,\|\Delta_j \theta\|_{L^q}, \quad 1\le q\le \infty, \label{bound2} \\ \|S_N \nabla u\|_{L^\infty} &\le& C_d\,\|\theta\|_{L^1\cap L^\infty} + C_d\, N\,P(C_0 2^N)\,\|S_{N+1}\theta\|_{L^\infty}, \label{bound3} \end{eqnarray} where $C_{p,d}$ is a constant depending on $p$ and $d$ only and the constants $C_d$ depend on $d$ only. \end{thm} With the aid of these bounds, we were able to show in \cite{ChCW} that (\ref{general}) with $\kappa=0$ and $P(\Lambda)=\left(\log(1+\log(1 -\Delta))\right)^\gamma$ for $0\le \gamma\le 1$ has a unique global (in time) solution in the Besov space $B^s_{q,\infty}(\mathbb{R}^d)$ with $d<q\le \infty$ and $s>1$. In addition, a regularity criterion is also provided in \cite{ChCW} for (\ref{general}) with $P(\Lambda) = \Lambda^\beta$ for $0\le \beta\le 1$. Our goal here is to extend our study to cover more general operators when we turn on the dissipation. Indeed we are able to establish the global existence and uniqueness for a very general family of symbols. Before stating the result, we introduce the extended Besov spaces.
In the definition below, $\mathcal{S}'$ denotes the class of tempered distributions and $\Delta_j$ with $j\ge -1$ denotes the standard Fourier localization operator. The notation $\Delta_j$, $S_N$ and the Besov spaces are now quite standard and can be found in several books and many papers (see e.g. \cite{BL}, \cite{Che}, \cite{RuSi}, \cite{Tr}). They can also be found in Appendix A of \cite{ChCW}. \begin{define} Let $s\in \mathbb{R}$ and $1\le q,r\le \infty$. Let $A=\{A_j\}_{j\ge -1}$ with $A_j\ge 0$ be a nondecreasing sequence. The extended Besov space $B^{s,A}_{q,r}$ consists of $f\in \mathcal{S}'(\mathbb{R}^d)$ satisfying $$ \|f\|_{B^{s,A}_{q,r}} \equiv \left\|2^{s A_j}\, \|\Delta_j f\|_{L^q(\mathbb{R}^d)} \right\|_{l^r} \,< \infty. $$ \end{define} Obviously, when $A_j = j+1$, $B^{s,A}_{q,r}$ becomes the standard inhomogeneous Besov space $B^s_{q,r}$. When $A_j =o(j+1)$ as $j\to \infty$, $B^{s,A}_{q,r}$ is a less regular class than the corresponding Besov space $B^s_{q,r}$; we will refer to these spaces as sub-Besov spaces. When $j=o(A_j)$ as $j\to \infty$, $B^{s,A}_{q,r}$ is a more regular class than $B^s_{q,r}$; we will refer to these spaces as super-Besov spaces. \vskip .1in With these definitions at our disposal, our main theorem can be stated as follows. \begin{thm} \label{main1} Consider the dissipative active scalar equation (\ref{general}) with $\kappa>0$, $\alpha>0$ and $P(\xi)$ satisfying Condition \ref{P_con}. Let $s>1$, $2\le q\le \infty$ and $A=\{A_j\}_{j\ge -1}$ be a nondecreasing sequence with $A_j\ge 0$. Let $\theta_0 \in L^1(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d) \cap B^{s,A}_{q,\infty}(\mathbb{R}^d)$. Assume either the velocity $u$ is divergence-free or the solution $\theta$ is bounded in $L^1(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ for all time.
If there exists a constant $C$ such that for all $j\ge -1$, \begin{equation}\label{sb} \sum_{k\ge j-1, k\ge -1} \frac{2^{s A_{j-2}}\,P(2^{k+1})}{2^{s A_k}\,P(2^{j+1})} < C \end{equation} and \begin{equation}\label{decay} \kappa^{-1}\, 2^{s(A_j-A_{j-2})}\, (j+2) P(2^{j+2})\,2^{-2\alpha j} \to 0\quad \mbox{as} \quad j\to \infty, \end{equation} then (\ref{general}) has a unique global solution $\theta$ satisfying $$ \theta \in L^\infty\left([0,\infty); B^{s,A}_{q,\infty}(\mathbb{R}^d)\right). $$ \end{thm} We single out two special consequences of Theorem \ref{main1}. In the case when \begin{equation}\label{PA} P(|\xi|) = \left(\log(1+|\xi|^2)\right)^\gamma,\,\, \gamma\ge 0\quad\mbox{and}\quad A_j=(j+1)^b\quad\mbox{for some $b\le 1$}, \end{equation} (\ref{sb}) is trivially satisfied and the condition in (\ref{decay}) reduces to \begin{equation}\label{ed} 2^{s((j+1)^b - j^b)}\, (j+2)^{1+\gamma} 2^{-2\alpha j} \to 0 \quad \mbox{as} \quad j\to \infty, \end{equation} which is obviously satisfied for any $\alpha>0$, since the exponent $s((j+1)^b - j^b)$ remains bounded for $b\le 1$. We thus obtain the following corollary. \begin{cor} Consider the dissipative Log-Euler equation \begin{equation} \label{Log-Euler} \left\{ \begin{array}{l} \partial_t \theta + u\cdot\nabla \theta + \kappa (-\Delta)^\alpha\theta =0, \\ u = \nabla^\perp \psi, \quad \Delta \psi = \left(\log(1 -\Delta)\right)^\gamma\, \theta \end{array} \right. \end{equation} with $\kappa>0$, $\alpha>0$ and $\gamma\ge 0$. Assume that $\theta_0$ satisfies $$ \theta_0 \in Y \equiv L^1(\mathbb{R}^2)\cap L^\infty(\mathbb{R}^2) \cap B^{s,A}_{q,\infty}(\mathbb{R}^2) $$ with $s>1$, $2 \le q \le \infty$ and $A$ given in (\ref{PA}). Then (\ref{Log-Euler}) has a unique global solution $\theta$ satisfying $$ \theta \in L^\infty\left([0,\infty); Y\right). $$ \end{cor} The assumption that $A_j =(j+1)^b$ with $b\le 1$ corresponds to the Besov and the sub-Besov spaces. We can also consider the solutions of (\ref{Log-Euler}) in super-Besov spaces by taking $A_j = (j+1)^b$ for $b>1$.
It is easy to see that (\ref{ed}) remains valid if $s\, b <2\alpha$. Therefore (\ref{Log-Euler}) with $2\alpha >s\, b$ has a global solution in the super-Besov space $B^{s,A}_{q,\infty}$ with $A_j=(j+1)^b$ for $b>1$. \vskip .1in Another very important special case is when \begin{equation}\label{ajj} A_j =j+1, \quad P(\xi) = |\xi|^\beta (\log(1 +|\xi|^2))^\gamma\, \quad\mbox{with $\gamma\ge 0$ and $0\le \beta < 2\alpha\le 1$}. \end{equation} Then again (\ref{sb}) is obviously satisfied and (\ref{decay}) reduces to $$ 2^{2s}\, (j+2)^{1+\gamma}\, 2^{(\beta-2\alpha)j} \to 0 \quad \mbox{as}\,\,j\to \infty, $$ which is clearly true since $\beta<2\alpha$. That is, the following corollary holds. \begin{cor} \label{sss} Consider the active scalar equation \begin{equation} \label{BG} \left\{ \begin{array}{l} \partial_t \theta + u\cdot\nabla \theta + \kappa (-\Delta)^\alpha\theta=0, \\ u = \nabla^\perp \psi, \quad \Delta \psi = \Lambda^\beta\,\left(\log(1 -\Delta)\right)^\gamma\, \theta \end{array} \right. \end{equation} with $\kappa>0$, $\alpha>0$, $0\le \beta< 2\alpha\le 1$ and $\gamma\ge 0$. Assume the initial data $\theta_0\in Y\equiv L^1(\mathbb{R}^2)\cap L^\infty(\mathbb{R}^2) \cap B^{s,A}_{q,\infty}(\mathbb{R}^2)$ with $s>1$, $2\le q\le \infty$ and $A_j$ given by (\ref{ajj}). Then (\ref{BG}) has a unique global solution $\theta$ satisfying $$ \theta \in L^\infty\left([0,\infty); Y\right). $$ \end{cor} Again we could have studied the global solutions of (\ref{BG}) in a super-Besov space $B^{s,A}_{q,\infty}$ with, say, $A_j =(j+1)^b$ for $b>1$. Of course we need to put more restrictions on $\alpha$. When $\gamma=0$, (\ref{BG}) becomes \begin{equation} \label{GBG} \left\{ \begin{array}{l} \partial_t \theta + u\cdot\nabla \theta + \kappa (-\Delta)^\alpha\theta=0, \\ u = \nabla^\perp \psi, \quad \Delta \psi = \Lambda^\beta\, \theta, \end{array} \right. \end{equation} which we call the generalized SQG equation.
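\vskip .1in The restriction $\beta<2\alpha$ in Corollary \ref{sss} reflects the natural scaling of (\ref{GBG}). Since the map $\theta\mapsto u$ in (\ref{GBG}) is given by a Fourier multiplier operator homogeneous of degree $\beta-1$, a direct computation shows that if $\theta$ solves (\ref{GBG}), then so does
$$
\theta_\lambda(x,t) = \lambda^{2\alpha-\beta}\, \theta(\lambda x, \lambda^{2\alpha} t)
$$
for any $\lambda>0$. In particular, $\|\theta_\lambda\|_{L^\infty} = \lambda^{2\alpha-\beta}\,\|\theta\|_{L^\infty}$, so the $L^\infty$ norm, which is under control thanks to the maximum principle, is invariant under this rescaling precisely when $\beta=2\alpha$. In this sense $\beta=2\alpha$ is critical, $\beta<2\alpha$ subcritical and $\beta>2\alpha$ supercritical; for $\beta=1$ this recovers the familiar criticality of $\alpha=\frac12$ for the SQG equation.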
Corollary \ref{sss} does not cover the case when $\beta=2\alpha$, namely the modified SQG equation. The global regularity of the modified SQG equation with any $L^2$ initial data has previously been obtained in \cite{CIW}. In the supercritical case when $\beta>2\alpha$, the global regularity issue for (\ref{GBG}) is open. In particular, the global regularity issue for the supercritical SQG equation ($\beta=1$ and $2\alpha<1$) remains an outstanding open problem. \vskip .1in Following the ideas in \cite{Cha} and \cite{CMT}, we approach the global regularity issue of (\ref{GBG}) in the supercritical case $\beta>2\alpha$ by considering the geometry of the level curves of its solution. We present a geometric-type criterion for the regularity of solutions of (\ref{GBG}). This sufficient condition controls the regularity of solutions in terms of the space-time integrability of $|\nabla^\bot \theta|$ and the regularity of the direction field $\xi=\nabla^\bot\theta/|\nabla^\bot\theta|$ (the unit tangent vector to a level curve of $\theta$). \begin{thm} \label{crit3} Consider (\ref{GBG}) with $\kappa > 0$, $\alpha>0$ and $0\le \beta\le 1$. Let $\theta$ be the solution of (\ref{GBG}) corresponding to the initial data $\theta_0 \in H^m(\mathbb{R}^2)$ with $m>2$. Let $T>0$. Suppose there exist $\sigma \in (0,1)$, $q\in (\frac{2}{1+\beta-\sigma} , \infty]$, $p_1\in (1, \infty]$, $p_2 \in (1, \frac{2}{1+\sigma-\beta} )$ and $r_1, r_2 \in [1, \infty]$ such that the following hold. \begin{eqnarray}\label{con220} \xi\in L^{r_1}(0,T; \mathcal{\dot{F}}^\sigma_{p_1,q} (\mathbb R^2)) \quad \mbox{and} \quad \nabla^\bot \theta \in L^{r_2} (0, T; L^{p_2} (\mathbb R^2 ))\\ \mbox{with}\qquad \frac{1}{p_1} + \frac{1}{p_2} + \frac{\alpha}{r_1} +\frac{\alpha}{r_2} \leq \alpha+\frac12(1+\sigma-\beta) .\nonumber \end{eqnarray} Then $\theta$ remains in $H^m (\mathbb R^2)$ on $[0,T]$.
In particular, when $p_1=r_1=q=\infty$, (\ref{con220}) becomes \begin{eqnarray*} \xi \in L^\infty(0,T; C^\sigma (\mathbb R^2)) \quad\mbox{and}\quad \nabla^\perp \theta \in L^{r_2} (0, T; L^{p_2} (\mathbb R^2 ))\quad \\ \mbox{with}\qquad \frac{1}{p_2} +\frac{\alpha}{r_2} \leq \alpha+\frac12(1+\sigma-\beta). \end{eqnarray*} \end{thm} Here $\dot{\mathcal{F}}^s_{p,q} (\mathbb{R}^2)$ denotes a homogeneous Triebel-Lizorkin type space. For $0\le s\le 1$, $1\le p\le \infty$ and $1\le q\le \infty$, $\dot{\mathcal{F}}^s_{p,q}$ contains functions such that the following semi-norm is finite, $$ \|f\|_{\dot{\mathcal{F}}^s_{p,q}} = \left\{ \begin{array}{ll} \displaystyle \left\|\left(\int \frac{|f(x+y)-f(x)|^q}{|y|^{n+sq}}\, dy\right)^{\frac1q}\right\|_{L^p}, \quad & \mbox{if $q<\infty$},\\\\ \displaystyle \left\|\sup_{y\not =0} \frac{|f(x+y)-f(x)|}{|y|^s}\right\|_{L^p}, \quad & \mbox{if $q=\infty$}. \end{array} \right. $$ We note that if we set $\beta=1$ in Theorem \ref{crit3}, then it reduces to Theorem 1.2 of \cite{Cha}. \vskip .1in The rest of this paper is divided into two sections. Section \ref{proofmain} proves Theorem \ref{main1} while Section \ref{geocri} derives the geometric regularity criterion stated in Theorem \ref{crit3}. \vskip .4in \section{Proof of Theorem \ref{main1}} \label{proofmain} This section is devoted to the proof of Theorem \ref{main1}, which involves Besov space techniques and the bounds stated in Theorem \ref{nablau}. In addition, lower bound estimates associated with the fractional dissipation are also used. \vskip .1in \begin{proof}[Proof of Theorem \ref{main1}] The proof is divided into two main parts. The first part establishes the global (in time) {\it a priori} bound on solutions of (\ref{general}) while the second part briefly describes the construction of a unique local (in time) solution. \vskip .1in For notational convenience, we write $Y= L^1(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d) \cap B^{s,A}_{q,\infty}(\mathbb{R}^d)$.
The first part derives the global bound, for any $T>0$, \begin{equation}\label{bdd} \|\theta(\cdot,t)\|_{B^{s,A}_{q,\infty}} \le C(T, \|\theta_0\|_{Y}) \quad\mbox{for}\quad t\le T \end{equation} and we distinguish between two cases: $q<\infty$ and $q=\infty$. The dissipative term is handled differently in these two cases. \vskip .1in We start with the case when $q<\infty$. When the velocity field $u$ is divergence-free, $\theta_0\in L^1 \cap L^\infty$ implies the corresponding solution $\theta$ of (\ref{general}) satisfies the {\it a priori} bound \begin{equation} \label{mmm} \|\theta(\cdot,t)\|_{L^1\cap L^\infty} \le \|\theta_0\|_{L^1\cap L^\infty}, \quad t \ge 0. \end{equation} When $u$ is not divergence-free, (\ref{mmm}) is assumed. The divergence-free condition is not used in the rest of the proof. \vskip .1in Let $j\ge -1$ be an integer. Applying $\Delta_j$ to (\ref{general}) and following a standard decomposition, we have \begin{equation}\label{base1} \partial_t \Delta_j \theta + \kappa (-\Delta)^\alpha \Delta_j \theta = J_1 + J_2 + J_3 +J_4 +J_5, \end{equation} where \begin{eqnarray} J_{1} &=& - \sum_{|j-k|\le 2} [\Delta_j, S_{k-1}(u)\cdot\nabla] \Delta_k \theta, \label{j1t}\\ J_{2} &=& - \sum_{|j-k|\le 2} (S_{k-1}(u) - S_j(u)) \cdot \nabla \Delta_j\Delta_k \theta, \label{j2t}\\ J_3 &=& - S_j(u) \cdot\nabla \Delta_j\theta, \label{j3t}\\ J_{4} &=& - \sum_{|j-k|\le 2} \Delta_j (\Delta_k u \cdot \nabla S_{k-1} (\theta)), \label{j4t}\\ J_{5} &=& -\sum_{k\ge j-1}\Delta_j (\widetilde{\Delta}_k u\cdot\nabla \Delta_k \theta) \label{j5t} \end{eqnarray} with $\widetilde{\Delta}_k = \Delta_{k-1} + \Delta_k + \Delta_{k+1}$. We multiply (\ref{base1}) by $\Delta_j\theta |\Delta_j \theta|^{q-2}$ and integrate in space. 
Integrating by parts in the term associated with $J_3$, we obtain \begin{eqnarray*} -\int_{\mathbb{R}^d} \left(S_j (u) \cdot\nabla \Delta_j\theta\right) \,\Delta_j\theta |\Delta_j \theta|^{q-2} \,dx &=& \frac1q \, \int_{\mathbb{R}^d} (\nabla\cdot S_j u) |\Delta_j \theta|^q \,dx\\ &=& \int_{\mathbb{R}^d} \widetilde{J_3}\, |\Delta_j \theta|^{q-1}\,dx, \end{eqnarray*} where $\widetilde{J_3}$ is given by $$ \widetilde{J_3} = \frac1q (\nabla\cdot S_j u) |\Delta_j \theta|. $$ Applying H\"{o}lder's inequality, we have \begin{eqnarray} && \frac1q\,\frac{d}{dt} \|\Delta_j \theta\|^q_{L^q} + \kappa \int \Delta_j \theta |\Delta_j \theta|^{q-2}(-\Delta)^\alpha \Delta_j\theta\,dx \label{root1}\\ && \qquad\qquad \qquad \le \left(\|J_1\|_{L^q} + \|J_2\|_{L^q} + \|\widetilde{J_3}\|_{L^q} + \|J_4\|_{L^q} + \|J_5\|_{L^q}\right) \|\Delta_j \theta\|_{L^q}^{q-1}. \nonumber \end{eqnarray} For $j\ge 0$, we have the lower bound (see \cite{CMZ1} and \cite{Wu31}) \begin{equation}\label{low} \int \Delta_j \theta |\Delta_j \theta|^{q-2}(-\Delta)^\alpha \Delta_j\theta \ge C\, 2^{2\alpha j}\,\|\Delta_j \theta\|_{L^q}^q. \end{equation} For $j=-1$, this lower bound is invalid. Still we have \begin{equation}\label{pos} \int \Delta_j \theta |\Delta_j \theta|^{q-2}(-\Delta)^\alpha \Delta_j\theta \ge 0. \end{equation} Attention is paid to the case $j\ge 0$ first. Inserting (\ref{low}) in (\ref{root1}) leads to $$ \frac{d}{dt} \|\Delta_j \theta\|_{L^q} + \kappa \, 2^{2\alpha j}\, \|\Delta_j \theta\|_{L^q} \le \|J_1\|_{L^q} + \|J_2\|_{L^q} + \|\widetilde{J_3}\|_{L^q} + \|J_4\|_{L^q} + \|J_5\|_{L^q}. $$ By a standard commutator estimate, $$ \|J_1\|_{L^q} \le C \sum_{|j-k|\le 2} \|\nabla S_{k-1} u\|_{L^\infty} \|\Delta_k \theta\|_{L^q}. $$ By H\"{o}lder's and Bernstein's inequalities, $$ \|J_2\|_{L^q} \le C\, \|\nabla \widetilde{\Delta}_j u\|_{L^\infty} \, \|\Delta_j \theta\|_{L^q}. $$ Clearly, $$ \|\widetilde{J_3}\|_{L^q} \le C\,\|\nabla\cdot S_j u\|_{L^\infty} \, \|\Delta_j \theta\|_{L^q}. 
$$ For $J_4$ and $J_5$, we have \begin{eqnarray*} \|J_4\|_{L^q} &\le& \sum_{|j-k|\le 2} \|\Delta_k u\|_{L^\infty} \, \|\nabla S_{k-1} \theta\|_{L^q},\\ \|J_5\|_{L^q} &\le& \sum_{k\ge j-1} \,\|\widetilde{\Delta}_k u\|_{L^\infty} \| \Delta_k \nabla \theta\|_{L^q} \\ &\le& C\, \sum_{k\ge j-1} \|\nabla \widetilde{\Delta}_k u\|_{L^\infty}\, \|\Delta_k \theta\|_{L^q}. \end{eqnarray*} These terms can be further bounded as follows. By Theorem \ref{nablau}, \begin{eqnarray*} \|\nabla S_k u\|_{L^\infty} & \le & \|\theta_0\|_{L^1\cap L^\infty} + C k\,P(2^{k+1})\|S_{k+1} \theta\|_{L^\infty}\\ &\le& \|\theta_0\|_{L^1\cap L^\infty} + C k\,P(2^{k+1}) \|\theta_0\|_{L^\infty}. \end{eqnarray*} Thus, \begin{eqnarray*} \|J_1\|_{L^q} &\le& C\, \|\theta_0\|_{L^1\cap L^\infty} \sum_{|j-k|\le 2} (1+ C k\,P(2^{k+1})) 2^{-s A_k} \, 2^{s A_k} \|\Delta_k \theta\|_{L^q}\\ &\le& C\, 2^{-s A_j}\,\|\theta_0\|_{L^1\cap L^\infty} \|\theta\|_{B^{s,A}_{q,\infty}}\,\sum_{|j-k|\le 2}(1+ C k\,P(2^{k+1})) 2^{s(A_j-A_k)}. \end{eqnarray*} Since $A_j$ is a nondecreasing function of $j$, \begin{equation}\label{ajk} 2^{s (A_j -A_k)} \le 2^{s (A_j-A_{j-2})} \quad\mbox{for}\quad |k-j|\le 2, \end{equation} where we have adopted the convention that $A_l\equiv 0$ for $l<-1$. Consequently, $$ \|J_1\|_{L^q} \le C\, 2^{-s A_{j-2}}\,\|\theta_0\|_{L^1\cap L^\infty} \|\theta\|_{B^{s,A}_{q,\infty}}\,\left(1+ (j+2)P(2^{j+2})\right). $$ Clearly, $\|J_2\|_{L^q}$ and $\|\widetilde{J_3}\|_{L^q}$ admit the same bound as $\|J_1\|_{L^q}$. By Bernstein's inequality and Theorem \ref{nablau}, \begin{eqnarray*} \|J_4\|_{L^q} &\le& C\,\sum_{|j-k| \le 2} \|\nabla \Delta_k u\|_{L^q}\, \|S_{k-1} \theta\|_{L^\infty} \\ &\le& C\,\|\theta\|_{L^\infty} \sum_{|j-k| \le 2} P(2^{k+1}) \|\Delta_k \theta\|_{L^q}. \end{eqnarray*} By (\ref{ajk}), we have $$ \|J_4\|_{L^q} \le C\, 2^{-s A_{j-2}}\,\|\theta_0\|_{L^\infty}\, \|\theta\|_{B^{s,A}_{q,\infty}}\,P(2^{j+2}).
$$ By Theorem \ref{nablau}, \begin{eqnarray*} \|J_5\|_{L^q} &\le& C\, \sum_{k\ge j-1} P(2^{k+1}) \|\widetilde{\Delta}_k \theta\|_{L^\infty}\|\Delta_k \theta\|_{L^q}\\ &\le& C\, \|\theta_0\|_{L^\infty} \sum_{k\ge j-1} P(2^{k+1}) \|\Delta_k \theta\|_{L^q}\\ &\le& C\, \|\theta_0\|_{L^\infty} 2^{-s A_{j-2}}\, P(2^{j+1}) \|\theta\|_{B^{s,A}_{q,\infty}}\, \sum_{k\ge j-1} \frac{2^{s A_{j-2}}}{P(2^{j+1})}\, \frac{P(2^{k+1})}{2^{s A_k}} \end{eqnarray*} By (\ref{sb}), $$ \|J_5\|_{L^q} \le C\, \|\theta_0\|_{L^\infty} 2^{-s A_{j-2}}\, P(2^{j+1}) \|\theta\|_{B^{s,A}_{q,\infty}}. $$ Collecting all the estimates, we have, for $j\ge 0$, \begin{eqnarray*} \frac{d}{dt} \|\Delta_j \theta\|_{L^q} + \kappa \,2^{2\alpha j}\, \|\Delta_j \theta\|_{L^q} &\le& C\, 2^{-s A_{j-2}}\,\|\theta_0\|_{L^1\cap L^\infty}\\ && \,\times \|\theta\|_{B^{s,A}_{q,\infty}}\,\left(1+ (j+2)P(2^{j+2})\right). \end{eqnarray*} That is, $$ \frac{d}{dt} \left(e^{\kappa 2^{2\alpha j} t} \|\Delta_j \theta\|_{L^q}\right) \le C\, e^{\kappa 2^{2\alpha j} t}2^{-s A_{j-2}}\,\|\theta_0\|_{L^1\cap L^\infty} \|\theta\|_{B^{s,A}_{q,\infty}}\,\left(1+ (j+2)P(2^{j+2})\right). $$ Integrating in time and multiplying by $2^{s A_j}\cdot e^{ -\kappa 2^{2\alpha j} t}$, we obtain, for $j\ge 0$, \begin{equation}\label{aj1} 2^{s A_j}\,\|\Delta_j \theta\|_{L^q} \le 2^{s A_j}\,e^{ -\kappa 2^{2\alpha j} t}\|\Delta_j \theta_0\|_{L^q} + K_j, \end{equation} where \begin{eqnarray*} K_j = C\,\|\theta_0\|_{L^1\cap L^\infty} \,\left(1+ (j+2)P(2^{j+2})\right) 2^{s (A_j-A_{j-2})}\int_0^t e^{-\kappa 2^{2\alpha j} (t-\tau)} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}\,d\tau. \end{eqnarray*} To further the estimates, we fix $t_0\le T$ and let $t\le t_0$. 
It is easy to see that $K_j$ admits the upper bound \begin{eqnarray*} K_j &\le & C\,\|\theta_0\|_{L^1\cap L^\infty} \,\left(1+ (j+2)P(2^{j+2})\right) 2^{s (A_j-A_{j-2})} \\ && \quad \times \frac{1}{\kappa 2^{2\alpha j}} \left(1-e^{-\kappa 2^{2\alpha j} t} \right)\, \sup_{0\le \tau \le t_0} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}. \end{eqnarray*} According to (\ref{decay}), there exists an integer $j_0$ such that, for $j\ge j_0$, \begin{equation}\label{bdd1} K_j\le \frac12 \sup_{0\le \tau \le t_0} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}. \end{equation} For $0\le j\le j_0$, \begin{equation}\label{bdd2} K_j \le C\,\|\theta_0\|_{L^1\cap L^\infty} \,\left(1+ (j_0+2)P(2^{j_0+2})\right) \max_{0\le j\le j_0} 2^{s(A_j-A_{j-2})} \int_0^t \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}\,d\tau. \end{equation} We now turn to the case when $j=-1$. By combining (\ref{base1}) and (\ref{pos}) and estimating $\|J_1\|_{L^q}$ through $\|J_5\|_{L^q}$ in a similar fashion as in the case $j\ge 0$, we obtain \begin{equation}\label{aj2} \|\Delta_{-1} \theta(t)\|_{L^q} \le \|\Delta_{-1} \theta(0)\|_{L^q} + C\,\|\theta_0\|_{L^1\cap L^\infty}\,\int_0^t \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}\, d\tau. \end{equation} Putting (\ref{aj1}) and (\ref{aj2}) together, we find, for any $j\ge -1$, \begin{equation}\label{aj3} 2^{s A_j}\,\|\Delta_j \theta\|_{L^q} \le \|\theta_0\|_{B^{s,A}_{q,\infty}} + K_j, \end{equation} where $K_j$ obeys the bound (\ref{bdd1}) for $j\ge j_0$ and the bound in (\ref{bdd2}) for $-1\le j<j_0$.
Applying $\sup_{j\ge -1}$ to (\ref{aj3}) and using the simple fact that $$ \sup_{j\ge -1} K_j \le \sup_{j\ge j_0} K_j + \sup_{-1 \le j < j_0} K_j, $$ we obtain \begin{eqnarray*} \|\theta(t)\|_{B^{s,A}_{q,\infty}} &\le& \|\theta_0\|_{B^{s,A}_{q,\infty}} + \frac12 \sup_{0\le \tau\le t_0} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}} + C(\theta_0, j_0) \int_0^t \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}\,d\tau, \end{eqnarray*} where $$ C(\theta_0, j_0) = C\,\|\theta_0\|_{L^1\cap L^\infty} \,\left(1+ (j_0+2)P(2^{j_0+2})\right) \max_{0\le j\le j_0} 2^{s(A_j-A_{j-2})}. $$ Now taking the supremum over $t\in[0,t_0]$, we obtain $$ \sup_{0\le \tau\le t_0} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}} \le 2\,\|\theta_0\|_{B^{s,A}_{q,\infty}} + C(\theta_0, j_0) \int_0^{t_0} \|\theta(\tau)\|_{B^{s,A}_{q,\infty}}\,d\tau. $$ Gronwall's inequality then implies (\ref{bdd}) for any $t\le t_0\le T$. This finishes the case when $q<\infty$. \vskip .1in We now turn to the case when $q=\infty$. For $j\ge 0$, applying $\Delta_j$ yields $$ \partial_t \Delta_j \theta + S_j u \cdot \nabla (\Delta_j \theta) + \kappa (-\Delta)^\alpha \Delta_j \theta = J_1 + J_2 + J_4 +J_5 $$ where $J_1$, $J_2$, $J_4$ and $J_5$ are as defined in (\ref{j1t}), (\ref{j2t}), (\ref{j4t}) and (\ref{j5t}), respectively. According to Lemma \ref{localm} below, we have \begin{equation} \label{mmmjjj} \partial_t \|\Delta_j \theta\|_{L^\infty} + C\, 2^{2\alpha j} \|\Delta_j\theta\|_{L^\infty} \le \|J_1\|_{L^\infty} + \|J_2\|_{L^\infty} + \|J_4\|_{L^\infty} + \|J_5\|_{L^\infty}. \end{equation} The terms on the right can be estimated similarly as in the case when $q<\infty$. For $j=-1$, (\ref{mmmjjj}) is replaced by $$ \partial_t \|\Delta_{-1} \theta\|_{L^\infty} \le \|J_1\|_{L^\infty} + \|J_2\|_{L^\infty} + \|J_4\|_{L^\infty} + \|J_5\|_{L^\infty}. $$ The rest of the proof for this case is then very similar to the case $q<\infty$ and we thus omit further details.
\vskip .1in We briefly describe the construction of a local solution of (\ref{general}) and prove its uniqueness. The solution is constructed through the method of successive approximation. More precisely, we consider a successive approximation sequence $\{\theta^{(n)}\}$ satisfying \begin{equation}\label{succ} \left\{ \begin{array}{l} \theta^{(1)} = S_2 \theta_0, \\ \\ u^{(n)} = (u^{(n)}_j), \quad u^{(n)}_j = \mathcal{R}_l \Lambda^{-1} P(\Lambda) \theta^{(n)},\\ \\ \partial_t \theta^{(n+1)} + u^{(n)} \cdot\nabla \theta^{(n+1)} + \kappa (-\Delta)^\alpha\theta^{(n+1)} = 0,\\ \\ \theta^{(n+1)}(x,0) = S_{n+2} \theta_0 \end{array} \right. \end{equation} and show that $\{\theta^{(n)}\}$ converges to a solution of (\ref{general}). It suffices to prove the following properties of $\{\theta^{(n)}\}$: \begin{enumerate} \item[i)] There exists $T_1>0$ such that $\theta^{(n)}$ is bounded uniformly in $B^{s,A}_{q,\infty}$ for any $t\in[0,T_1]$, namely $$ \|\theta^{(n)}(\cdot,t)\|_{B^{s,A}_{q,\infty}} \le C_1 \|\theta_0\|_{Y}, \quad t\in [0,T_1], $$ where $C_1$ is a constant independent of $n$. \item[ii)] There exists $T_2>0$ such that $\eta^{(n+1)} = \theta^{(n+1)}- \theta^{(n)}$ is a Cauchy sequence in $B^{s-1,A}_{q,\infty}$, $$ \|\eta^{(n)}(\cdot,t)\|_{B^{s-1,A}_{q,\infty}} \le C_2\, 2^{-n}, \quad t\in [0, T_2], $$ where $C_2$ is independent of $n$ and depends on $T_2$ and $\|\theta_0\|_Y$ only. \end{enumerate} Since the essential ingredients in the proof of i) and ii) have appeared in proving the {\it a priori} bound, we omit the details. The uniqueness can be established by estimating the difference of any two solutions in $B^{s-1,A}_{q,\infty}$. A similar argument as in the proof of ii) would yield the desired result. This completes the proof of Theorem \ref{main1}. \end{proof} \vskip .1in We have used the following lemma in the proof of Theorem \ref{main1}. It is obtained in \cite{WaZh}. \begin{lemma} \label{localm} Let $j\ge 0$ be an integer. 
Let $\theta$, $u$ and $f$ be smooth functions solving the equation $$ \partial_t \Delta_j \theta + u\cdot\nabla \Delta_j \theta + \kappa (-\Delta)^\alpha \Delta_j \theta =f, $$ where $\kappa>0$ is a parameter. Assume that $\Delta_j \theta$ vanishes at infinity. Then, there exists a constant $C$ independent of $\theta$, $u$, $f$ and $j$ such that $$ \partial_t \|\Delta_j \theta\|_{L^\infty} + C\, 2^{2\alpha j} \|\Delta_j \theta\|_{L^\infty} \le \|f\|_{L^\infty}. $$ \end{lemma} \vskip .3in \section{Geometric regularity criterion} \label{geocri} \vskip .06in In this section we prove Theorem \ref{crit3}. For this we recall the following Serrin-type criterion, which was proved for $\beta=1$ in \cite[Theorem 1.1]{Cha} and obviously remains true in our case $\beta\in [0, 1]$. \begin{thm}\label{crit30} Let $\theta (x,t)$ be a solution of (\ref{GBG}) with initial data $\theta_0\in H^m (\mathbb{R}^2)$ with $m>2$. Let $T>0$. If there are indices $p,r$ with $\frac{1}{\alpha}<p<\infty$ and $1<r<\infty$ such that \begin{equation}\label{thm1} \nabla^\bot \theta \in L^r (0,T; L^p (\mathbb R^2 )) \quad \mbox{ with}\quad \frac{1}{p} +\frac{\alpha}{r}\leq \alpha, \end{equation} then $\theta$ remains in $H^m (\mathbb R^2)$ on $[0,T]$. \end{thm} \vskip .1in \begin{proof}[Proof of Theorem \ref{crit3}] Since the proof is similar to that of Theorem 1.2 in \cite{Cha}, we will be brief here, mostly pointing out the essential changes. Let $p$ be an integer of the form $p=2^k$, where $k$ is a positive integer, satisfying \begin{equation}\label{first} \frac{1}{\alpha} \leq p <\infty.
\end{equation} We apply $\nabla^\bot$ to (\ref{GBG}), take the $L^2 (\mathbb R^2)$ inner product of the resulting equation with \newline $\nabla^\bot \theta (x,t) |\nabla^\bot \theta (x,t)|^{p-2}$, and then substitute $u=-\nabla^\bot \Lambda^{-2+\beta} \theta $ to obtain \begin{eqnarray}\label{ve} \lefteqn{\frac{1}{p} \frac{d}{dt} \|\nabla^\bot \theta(t)\|_{L^p} ^p +\kappa\int (\Lambda ^{2\alpha}\nabla^\bot \theta )\cdot \nabla^\bot \theta |\nabla^\bot \theta |^{p-2} dx}\hspace{0.0in}\nonumber \\ &&=\int (\nabla^\bot \theta \cdot \nabla )u \cdot \nabla^\bot \theta |\nabla^\bot \theta |^{p-2}dx\nonumber \\ && = \int \int [\nabla \theta (x,t)\cdot \hat{y} ] [\nabla^\bot \theta (x+y,t)\cdot \nabla \theta (x,t )]\frac{dy}{|y|^{1+\beta}} |\nabla^\bot \theta (x,t)|^{p-2}dx \nonumber \\ &&:=I, \end{eqnarray} where the integral with respect to $y$ on the right-hand side is understood in the sense of the principal value. The dissipation term can be estimated as \begin{eqnarray}\label{dis} \lefteqn{\kappa\int (\Lambda ^{2\alpha}\nabla^\bot \theta )\cdot \nabla^\bot \theta |\nabla^\bot \theta |^{p-2} dx \geq \frac{\kappa}{p} \int \left|\Lambda ^{\alpha} |\nabla^\bot \theta |^{\frac{p}{2}} \right|^2 dx }\hspace{.2in}\nonumber \\ &\geq& \frac{\kappa C_\alpha}{p} \left(\int |\nabla^\bot \theta |^{\frac{p}{1-\alpha}} dx \right)^{1-\alpha}=\frac{\kappa C_\alpha}{p}\|\nabla^\bot \theta \|_{L^{\frac{p}{1-\alpha}}} ^p, \end{eqnarray} where we used Lemma 2.4 of \cite{CC} and the embedding $L^2_{\alpha} (\mathbb R^2)\hookrightarrow L^{\frac{2}{1-\alpha}} (\mathbb R^2)$. Next, we estimate $I$ as follows.
\begin{eqnarray*} \lefteqn{I=\int\int (\xi ^ \bot (x,t)\cdot \hat{y} ) [\xi (x+y,t) \cdot \xi^\bot (x,t)]|\nabla^\bot \theta (x+y, t)| \frac{dy}{|y|^{1+\beta}} |\nabla^\bot \theta (x,t ) |^p dx}\nonumber \hspace{.0in}\\ &&=\int\int (\xi ^\bot (x,t)\cdot \hat{y} ) [\xi (x+y,t) -\xi(x,t) ]\cdot\xi^\bot (x,t)|\nabla^\bot \theta (x+y, t)| \frac{dy}{|y|^{1+\beta}} |\nabla^\bot\theta (x,t ) |^p dx \nonumber\\ &&\leq \int\int |\xi (x+y,t) -\xi(x,t) | |\nabla^\bot \theta (x+y,t)|\frac{dy}{|y|^{\frac{2+(\beta-1 +s)q}{q} +\frac{2-sq'}{q'}}} |\nabla^\bot \theta (x,t) |^p dx \nonumber\\ &&\leq \int \left(\int \frac{|\xi (x+y,t) -\xi(x,t) |^q}{|y|^{2+(\beta-1+s) q}} dy\right)^{\frac{1}{q}} \left( \int \frac{|\nabla^\bot \theta (x+y,t)|^{q'}}{ |y|^{2-s q'}} dy \right)^{\frac{1}{q'}} |\nabla^\bot \theta |^p dx \nonumber \\ &&\leq \|\xi \|_{\mathcal{\dot{F}}^{\sigma} _{p_1 , q}} \left\|\{I_{s q'} ( |\nabla^\bot \theta |^{q'}) \}^{\frac{1}{q'}} \right\|_{L^{\tilde{p}_2}} \|\nabla^\bot \theta\|^p _{L^{p_3}},\nonumber \\ \end{eqnarray*} where we used the fact $\xi(x,t) \cdot\xi^\bot (x,t)=0$ in the second equality, and H\"{o}lder's inequality in the second and the third inequalities with the exponents satisfying \begin{equation}\label{pcon} \frac{1}{p_1} +\frac{1}{\tilde{p}_2} +\frac{p}{p_3}=1, \qquad \frac{1}{q} +\frac{1}{q'} =1, \end{equation} and $I_{a} (\cdot ) $, $0<a <2$, is the operator defined by the Riesz potential. We also set \begin{equation}\label{sigma} \sigma=\beta-1 +s \end{equation} in the last inequality. 
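For the reader's convenience, we record the form of the Hardy--Littlewood--Sobolev step used below (our reconstruction of the argument in \cite{Cha}, with the exponents fixed in (\ref{pcon})). In $\mathbb{R}^2$ the Riesz potential satisfies, for $0<a<2$,
$$
\|I_a f\|_{L^{\tilde r}} \le C\, \|f\|_{L^{r}}, \qquad \frac{1}{\tilde r}=\frac{1}{r}-\frac{a}{2},
$$
and applying this with $f=|\nabla^\bot \theta|^{q'}$ and $a=sq'$ gives
$$
\left\|\{I_{s q'} ( |\nabla^\bot \theta |^{q'}) \}^{\frac{1}{q'}} \right\|_{L^{\tilde{p}_2}}
=\left\|I_{s q'} ( |\nabla^\bot \theta |^{q'}) \right\|_{L^{\tilde{p}_2/q'}}^{\frac{1}{q'}}
\le C\, \|\nabla^\bot \theta\|_{L^{p_2}}, \qquad \frac{1}{\tilde{p}_2}=\frac{1}{p_2}-\frac{s}{2}.
$$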
After this, we apply the Hardy--Littlewood--Sobolev inequality and Young's inequality to estimate $I$, similarly to the proof of Theorem 1.2 of \cite{Cha}, and deduce \begin{equation}\label{last1} \frac{d}{dt} \|\nabla^\bot \theta(t)\|_{L^p} ^p +\frac{\kappa C_\alpha }{2} \| \nabla^\bot \theta(t) \|_{L^{\frac{p}{1-\alpha}}} ^p \leq C\|\xi (t)\|_{\mathcal{\dot{F}}^{\sigma} _{p_1 , q}} ^Q\|\nabla^\bot \theta (t)\|_{L^{p_2}} ^Q \|\nabla^\bot \theta (t)\|_{L^p} ^p, \end{equation} where we set \begin{equation}\label{Q} Q=\frac{2\alpha p_1p_2}{(2\alpha+s )p_1 p_2 -2p_1 -2p_2}, \end{equation} which needs to satisfy \begin{equation}\label{indices} \frac{1}{r_1}+\frac{1}{r_2}\leq \frac{1}{Q}. \end{equation} We note that (\ref{indices}) is equivalent to $$ \frac{1}{p_1} + \frac{1}{p_2} + \frac{\alpha}{r_1} +\frac{\alpha}{r_2} \leq \alpha+\frac12(1+\sigma-\beta) $$ after substituting $Q$ and $s$ from (\ref{Q}) and (\ref{sigma}) respectively into (\ref{indices}). Since $$ \int_0 ^T \|\xi (t)\|_{\mathcal{\dot{F}}^{\sigma} _{p_1 , q}} ^Q\|\nabla^\bot \theta (t)\|_{L^{p_2}} ^Q dt \leq \left(\int_0 ^T\|\xi (t)\|_{\mathcal{\dot{F}}^{\sigma} _{p_1 , q}} ^{r_1} dt\right)^{\frac{Q}{r_1}} \left(\int_0 ^T\|\nabla^\bot \theta (t)\|_{L^{p_2}} ^{r_2} dt\right)^{\frac{Q}{r_2}} <\infty $$ by our hypothesis, the inequality (\ref{last1}) leads us to $$\int_0 ^T \| \nabla^\bot \theta \|_{L^{\frac{p}{1-\alpha}}} ^p dt <\infty. $$ Now applying Theorem \ref{crit30}, we conclude the proof. \end{proof} \vskip .4in \end{document}
\begin{document} \setlength{\textwidth}{126mm} \setlength{\textheight}{180mm} \setlength{\parindent}{0mm} \setlength{\parskip}{2pt plus 2pt} \frenchspacing \pagestyle{myheadings} \markboth{Dimitar Mekerov}{Lie groups as $4$-dimensional nonintegrable almost product manifolds} \newtheorem{thm}{Theorem}[section] \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{cor}[thm]{Corollary} \newtheorem{probl}[thm]{Problem} \newtheorem{defn}{Definition}[section] \newtheorem{rem}{Remark}[section] \newtheorem{exa}{Example} \newcommand{\ddx}{\frac{\partial}{\partial x^i}} \newcommand{\ddy}{\frac{\partial}{\partial y^i}} \newcommand{\ddu}{\frac{\partial}{\partial u^i}} \newcommand{\ddv}{\frac{\partial}{\partial v^i}} \newfont{\w}{msbm9 scaled\magstep1} \newcommand{\norm}[1]{\left\Vert#1\right\Vert ^2} \newcommand{\nJ}[1]{\norm{\nabla J_{#1}}} \newcommand{\thmref}[1]{Theorem~\ref{#1}} \newcommand{\propref}[1]{Proposition~\ref{#1}} \newcommand{\secref}[1]{\S\ref{#1}} \newcommand{\lemref}[1]{Lemma~\ref{#1}} \newcommand{\dfnref}[1]{Definition~\ref{#1}} \title{Lie groups as $4$-dimensional Riemannian or pseudo-Riemannian almost product manifolds with nonintegrable structure} \author{Dimitar Mekerov} \maketitle {\small {\it Abstract.} A Lie group as a 4-dimensional pseudo-Riemannian manifold is
considered. This manifold is equipped with an almost product structure and a Killing metric in two ways. In the first case a Riemannian almost product manifold with nonintegrable structure is obtained, and in the second case -- a pseudo-Riemannian one. Each belongs to a 4-parametric family of manifolds, which are characterized geometrically. {\it Mathematics Subject Classification (2000):} 53C15, 53C50 \\ {\it Key words:} almost product manifold, Lie group, Riemannian metric, pseudo-Riemann\-ian metric, nonintegrable structure, Killing metric} \section{Preliminaries} Let $M$ be a differentiable manifold with a tensor field $P$ of type $(1,1)$ and a Riemannian metric $g$ such that \begin{equation}\label{1.1} P^2=id,\quad g(Px,Py)=g(x,y) \end{equation} for arbitrary $x$, $y$ of the algebra $\mathfrak{X}(M)$ of the smooth vector fields on $M$. The tensor field $P$ is called an \emph{almost product structure}. The manifold $(M,P,g)$ is called a \emph{Riemannian} (\emph{pseudo-Riemannian}, resp.) \emph{almost product manifold}, if $g$ is a Riemannian (pseudo-Riemannian, resp.) metric. If ${\rm tr}{P}=0$, then $(M,P,g)$ is an even-dimensional manifold. The classification of Riemannian almost product manifolds from \cite{StGr:connect} is made with respect to the tensor field $F$ of type (0,3), defined by \begin{equation}\label{1.2} F(x,y,z)=g\left(\left(\nabla_x P\right)y,z\right), \end{equation} where $\nabla$ is the Levi-Civita connection of $g$. The tensor $F$ has the following properties: \[ F(x,y,z)=F(x,z,y)=-F(x,Py,Pz),\quad F(x,y,Pz)=-F(x,Py,z). \] In the case when $g$ is a pseudo-Riemannian metric, the same classification is valid for pseudo-Riemannian almost product manifolds, too. In this classification the condition \begin{equation}\label{sigma} F(x,y,z)+F(y,z,x)+F(z,x,y)=0 \end{equation} defines the class $\mathcal{W}_3$, which is the only one of the three basic classes $\mathcal{W}_1$, $\mathcal{W}_2$ and $\mathcal{W}_3$ with nonintegrable structure $P$.
The class $\mathcal{W}_0$, defined by the condition $F(x,y,z)=0$, is contained in the other classes. For this class $\nabla P=0$ and therefore it is an analogue of the class of K\"ahlerian manifolds in the almost Hermitian geometry. The curvature tensor field $R$ is defined by $R(x,y)z=\nabla_x \nabla_y z - \nabla_y \nabla_x z - \nabla_{[x,y]}z$ and the corresponding tensor field of type $(0,4)$ is determined by $R(x,y,z,w)=g(R(x,y)z,w)$. Let $\{e_i\}$ be a basis of the tangent space $T_pM$ at a point $p\in M$ and $g^{ij}$ be the components of the inverse matrix of $g$ with respect to $\{e_i\}$. Then the Ricci tensor $\rho$ and the scalar curvature $\tau$ are defined as follows \begin{equation}\label{1.3} \rho(y,z)=g^{ij}R(e_i,y,z,e_j), \end{equation} \begin{equation}\label{1.4} \tau=g^{ij}\rho(e_i,e_j). \end{equation} The square norm of $\nabla P$ is defined by \begin{equation}\label{1.5} \norm{\nabla P}=g^{ij}g^{ks}g\left(\left(\nabla_{e_i}P\right)e_k,\left(\nabla_{e_j}P\right)e_s\right). \end{equation} It is clear that $\nabla P=0$ implies $\norm{\nabla P}=0$, but the converse implication is not always true in the pseudo-Riemannian case. We shall call a pseudo-Riemannian almost product manifold an \emph{isotropic $P$-manifold} if $\norm{\nabla P}=0$. The Weyl tensor on a $2n$-dimensional pseudo-Riemannian manifold ($n\geq 2$) is \begin{equation}\label{1.6} W=R-\frac{1}{2n-2}\left(\psi_1(\rho)-\frac{\tau}{2n-1}\pi_1\right), \end{equation} where \[ \begin{array}{l} \psi_1(\rho)(x,y,z,w)=g(y,z)\rho(x,w)-g(x,z)\rho(y,w) \\ \phantom{\psi_1(\rho)(x,y,z,w)}+\rho(y,z)g(x,w)-\rho(x,z)g(y,w); \\ \pi_1(x,y,z,w)=g(y,z)g(x,w)-g(x,z)g(y,w). \\ \end{array} \] Moreover, for $n\geq 2$ the Weyl tensor $W$ is zero if and only if the manifold is \emph{conformally flat}. If $\alpha$ is a non-degenerate 2-plane spanned by vectors $x, y \in T_pM, p\in M$, then its sectional curvature is \begin{equation}\label{1.7} k(\alpha)=\frac{R(x,y,y,x)}{\pi_1(x,y,y,x)}.
\end{equation} \section{A Lie group as a 4-dimensional pseudo-Rie\-mannian manifold with Killing metric} Let $V$ be a real 4-dimensional vector space with a basis $\{E_i\}$. Let us consider a structure of a Lie algebra determined by commutators $[E_i,E_j]=C_{ij}^k E_k$, where $C_{ij}^k$ are structure constants satisfying the anti-commutativity condition $C_{ij}^k=-C_{ji}^k$ and the Jacobi identity $C_{ij}^k C_{ks}^l+C_{js}^k C_{ki}^l+C_{si}^k C_{kj}^l=0$. Let $G$ be the associated connected Lie group and $\{X_i\}$ be a global basis of left invariant vector fields which is induced by the basis $\{E_i\}$ of $V$. Then we have the decomposition \begin{equation}\label{2.1} [X_i,X_j]=C_{ij}^k X_k. \end{equation} Let us consider the manifold $(G,g)$, where $g$ is a metric determined by the conditions \begin{equation}\label{2.2} \begin{array}{c} g(X_1,X_1)=g(X_2,X_2)=g(X_3,X_3)=g(X_4,X_4)=1, \\[4pt] g(X_i,X_j)=0\quad \text{for}\quad i\neq j \\ \end{array} \end{equation} or by the conditions \begin{equation}\label{2.3} \begin{array}{c} g(X_1,X_1)=g(X_2,X_2)=-g(X_3,X_3)=-g(X_4,X_4)=1, \\[4pt] g(X_i,X_j)=0\quad \text{for}\quad i\neq j. \\ \end{array} \end{equation} Obviously, $g$ is a Riemannian metric if it is determined by \eqref{2.2} and $g$ is a pseudo-Riemannian metric of signature (2,2) if it is determined by \eqref{2.3}. The metric $g$ on the group $G$ is called a \emph{Killing metric} \cite{Hel} if the following condition is valid: \begin{equation}\label{2.5} g([X,Y],Z)+g([X,Z],Y)=0, \end{equation} where $X$, $Y$, $Z$ are arbitrary vector fields. If $g$ is a Killing metric, then according to the proof of Theorem~2.1 in \cite{MaGrMe-4} the manifold $(G,g)$ is \emph{locally symmetric}, i.e. $\nabla R=0$.
Moreover, the components of $\nabla$ and $R$ are respectively \begin{equation}\label{2.6} \nabla_{ij}=\nabla_{X_i} X_j=\frac{1}{2}[X_i,X_j], \end{equation} \begin{equation}\label{2.7} R_{ijks}=R(X_i,X_j,X_k,X_s)=-\frac{1}{4}g\left([X_i,X_j],[X_k,X_s]\right). \end{equation} \section{A Lie group as a Riemannian almost product manifold with Killing metric and nonintegrable structure} In this section we consider a Riemannian manifold $(G,P,g)$ with a metric $g$ determined by \eqref{2.2} and a structure $P$ defined as follows \begin{equation}\label{3.1} PX_1=X_3,\quad PX_2=X_4,\quad PX_3=X_1,\quad PX_4=X_2. \end{equation} Obviously, $P^2=\mathrm{id}$. Moreover, \eqref{2.2} and \eqref{3.1} imply \begin{equation}\label{3.2} g(PX_i,PX_j)=g(X_i,X_j). \end{equation} Therefore, $(G,P,g)$ is a Riemannian almost product manifold. For the manifold $(G,P,g)$ we assume that $g$ is a Killing metric. Then $(G,P,g)$ is locally symmetric. From \eqref{2.6} we obtain \begin{equation}\label{3.3} \left( \nabla_{X_i} P \right)X_j=\frac{1}{2}\bigl([X_i,PX_j]-P[X_i,X_j]\bigr). \end{equation} Then, according to \eqref{1.2}, for the components of $F$ we have \begin{equation}\label{3.4} F_{ijk}=\frac{1}{2}g\bigl([X_i,PX_j]-P[X_i,X_j],X_k\bigr). \end{equation} Hence, having in mind \eqref{2.2}, \eqref{3.1} and \eqref{3.2}, we get \begin{equation}\label{3.5} F_{ijk}+F_{jki}+F_{kij}=0, \end{equation} i.e. $(G,P,g)$ belongs to the class $\mathcal{W}_3$. According to \eqref{2.5}, we have \begin{equation}\label{3.6} g\bigl([X_i,X_j],X_i\bigr)=g\bigl([X_i,X_j],X_j\bigr)=0.
\end{equation} Then the following decomposition is valid \begin{equation}\label{3.7} \begin{array}{ll} [X_1,X_2]= C_{12}^3 X_3 +C_{12}^4 X_4,\qquad & [X_2,X_3]= C_{23}^1 X_1 +C_{23}^4 X_4,\\[4pt] [X_1,X_3]= C_{13}^2 X_2 +C_{13}^4 X_4,\qquad & [X_2,X_4]= C_{24}^1 X_1 +C_{24}^3 X_3,\\[4pt] [X_1,X_4]= C_{14}^2 X_2 +C_{14}^3 X_3,\qquad & [X_3,X_4]= C_{34}^1 X_1 +C_{34}^2 X_2.\\[4pt] \end{array} \end{equation} Now we apply again \eqref{2.5} using \eqref{3.7}. So we obtain \begin{equation}\label{3.8} \begin{array}{ll} [X_1,X_2]= \lambda_1 X_3 +\lambda_2 X_4,\qquad & [X_2,X_3]= \lambda_1 X_1 +\lambda_3 X_4,\\[4pt] [X_1,X_3]= -\lambda_1 X_2 +\lambda_4 X_4,\qquad & [X_2,X_4]= \lambda_2 X_1 -\lambda_3 X_3,\\[4pt] [X_1,X_4]= -\lambda_2 X_2 -\lambda_4 X_3,\qquad & [X_3,X_4]= \lambda_4 X_1 +\lambda_3 X_2,\\[4pt] \end{array} \end{equation} where $\lambda_1=C_{12}^3$, $\lambda_2=C_{12}^4$, $\lambda_3=C_{23}^4$, $\lambda_4=C_{13}^4$. We verify immediately that the Jacobi identity is satisfied in this case. Let the conditions \eqref{3.8} be satisfied for a Riemannian almost product manifold $(G,P,g)$ with structure $P$ and metric $g$, determined by \eqref{3.1} and \eqref{2.2}, respectively. Then we verify directly that $g$ is a Killing metric. Therefore, the following theorem is valid. \begin{thm}\label{thm-3.1} Let $(G,P,g)$ be a 4-dimensional Riemannian almost product manifold, where $G$ is the connected Lie group with an associated Lie algebra, determined by a global basis $\{X_i\}$ of left invariant vector fields, and $P$ and $g$ are the almost product structure and the Riemannian metric, determined by \eqref{3.1} and \eqref{2.2}, respectively. Then $(G,P,g)$ is a $\mathcal{W}_3$-manifold with a Killing metric $g$ iff $G$ belongs to the 4-parametric family of Lie groups, determined by \eqref{3.8}. \end{thm} From this point on, until the end of this section we shall consider the Riemannian almost product manifold $(G,P,g)$ determined by the conditions of \thmref{thm-3.1}. 
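As a sample of how the relations \eqref{3.8} are derived (an illustrative computation added here for clarity; it is not part of the original argument), let us apply the Killing condition \eqref{2.5} to the triples $(X_2,X_1,X_3)$ and $(X_1,X_2,X_3)$. Using \eqref{2.2} and \eqref{3.7}, we get
\[
0=g\bigl([X_2,X_1],X_3\bigr)+g\bigl([X_2,X_3],X_1\bigr)=-C_{12}^3+C_{23}^1,\qquad
0=g\bigl([X_1,X_2],X_3\bigr)+g\bigl([X_1,X_3],X_2\bigr)=C_{12}^3+C_{13}^2,
\]
i.e. $C_{23}^1=\lambda_1$ and $C_{13}^2=-\lambda_1$, in agreement with \eqref{3.8}; the remaining relations follow in the same way.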
Using \eqref{3.4}, \eqref{3.8}, \eqref{3.1} and \eqref{3.2}, we obtain the following nonzero components of the tensor $F$: \begin{equation}\label{3.9} \begin{array}{l} F_{211}=-F_{233}=2F_{134}=2F_{323}=-2F_{112}=-2F_{314}=\lambda_1,\\[4pt] F_{144}=-F_{122}=2F_{212}=2F_{423}=-2F_{234}=-2F_{414}=\lambda_2,\\[4pt] F_{322}=-F_{344}=2F_{214}=2F_{434}=-2F_{223}=-2F_{412}=\lambda_3,\\[4pt] F_{433}=-F_{411}=2F_{141}=2F_{321}=-2F_{132}=-2F_{334}=\lambda_4.\\[4pt] \end{array} \end{equation} The other nonzero components of $F$ are obtained from the property $F_{ijk}=F_{ikj}$. Let $N$ be the Nijenhuis tensor on $(G,P,g)$, i.e. \[ N_{ij}=[X_i,X_j]+P[PX_i,X_j]+P[X_i,PX_j]-[PX_i,PX_j]. \] According to \eqref{3.1} and \eqref{3.8}, for the square norm $\norm{N}=N_{ik}N_{js}g^{ij}g^{ks}$ of $N$ we get \begin{equation}\label{3.10} \norm{N}=32\left(\lambda_1^2+\lambda_2^2+\lambda_3^2+\lambda_4^2\right). \end{equation} For the square norm of $\nabla P$, using \eqref{1.5}, \eqref{2.2} and \eqref{3.3}, we obtain \begin{equation}\label{3.11} \norm{\nabla P}=4\left(\lambda_1^2+\lambda_2^2+\lambda_3^2+\lambda_4^2\right).
\end{equation} From \eqref{2.7}, having in mind \eqref{2.2} and \eqref{3.8}, we obtain the following nonzero components of the curvature tensor $R$: \begin{equation}\label{3.12} \begin{array}{ll} R_{1221}=\frac{1}{4}\left(\lambda_1^2+\lambda_2^2\right),\quad & R_{1331}=\frac{1}{4}\left(\lambda_1^2+\lambda_4^2\right),\\[4pt] R_{1441}=\frac{1}{4}\left(\lambda_2^2+\lambda_4^2\right),\quad & R_{2332}=\frac{1}{4}\left(\lambda_1^2+\lambda_3^2\right),\\[4pt] R_{2442}=\frac{1}{4}\left(\lambda_2^2+\lambda_3^2\right),\quad & R_{3443}=\frac{1}{4}\left(\lambda_3^2+\lambda_4^2\right),\\[4pt] R_{1341}=R_{2342}=\frac{1}{4}\lambda_1\lambda_2,\quad & R_{3123}=R_{4124}=\frac{1}{4}\lambda_3\lambda_4,\\[4pt] R_{1231}=R_{4234}=\frac{1}{4}\lambda_2\lambda_4,\quad & R_{2142}=R_{3143}=\frac{1}{4}\lambda_1\lambda_3,\\[4pt] R_{1241}=R_{3243}=-\frac{1}{4}\lambda_1\lambda_4,\quad & R_{2132}=R_{4134}=-\frac{1}{4}\lambda_2\lambda_3.\\[4pt] \end{array} \end{equation} The other nonzero components of $R$ are obtained from the properties $R_{ijks}=R_{ksij}$ and $R_{ijks}=-R_{jiks}=-R_{ijsk}$. From \eqref{1.3}, having in mind \eqref{2.2}, we obtain the components $\rho_{ij}=\rho(X_i,X_j)$ of the Ricci tensor $\rho$.
The nonzero components of $\rho$ are: \begin{equation}\label{3.13} \begin{array}{c} \begin{array}{ll} \rho_{11}=\frac{1}{2}\left(\lambda_1^2+\lambda_2^2+\lambda_4^2\right),\quad & \rho_{22}=\frac{1}{2}\left(\lambda_1^2+\lambda_2^2+\lambda_3^2\right),\\[4pt] \rho_{33}=\frac{1}{2}\left(\lambda_1^2+\lambda_3^2+\lambda_4^2\right),\quad & \rho_{44}=\frac{1}{2}\left(\lambda_2^2+\lambda_3^2+\lambda_4^2\right),\\[4pt] \end{array} \\[4pt] \begin{array}{lll} \rho_{12}=\frac{1}{2}\lambda_3\lambda_4,\quad & \rho_{13}=-\frac{1}{2}\lambda_2\lambda_3,\quad & \rho_{14}=\frac{1}{2}\lambda_1\lambda_3,\\[4pt] \rho_{23}=\frac{1}{2}\lambda_2\lambda_4,\quad & \rho_{24}=-\frac{1}{2}\lambda_1\lambda_4,\quad & \rho_{34}=\frac{1}{2}\lambda_1\lambda_2.\\[4pt] \end{array} \end{array} \end{equation} The other nonzero components of $\rho$ are obtained from the property $\rho_{ij}=\rho_{ji}$. For the scalar curvature $\tau$, using \eqref{1.4}, we obtain \begin{equation}\label{3.14} \tau=\frac{3}{2}\left(\lambda_1^2+\lambda_2^2+\lambda_3^2+\lambda_4^2\right). \end{equation} From \eqref{1.6}, having in mind \eqref{2.2}, \eqref{3.12}, \eqref{3.13} and \eqref{3.14}, we get for the Weyl tensor $W=0$. Then $(G,P,g)$ is a conformally flat manifold. 
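Let us note, as a direct consistency check added here for the reader, that \eqref{3.14} also follows by tracing \eqref{3.13} with the metric \eqref{2.2}, for which $g^{ij}=\delta^{ij}$:
\[
\tau=\rho_{11}+\rho_{22}+\rho_{33}+\rho_{44}
=\frac{1}{2}\cdot 3\left(\lambda_1^2+\lambda_2^2+\lambda_3^2+\lambda_4^2\right)
=\frac{3}{2}\left(\lambda_1^2+\lambda_2^2+\lambda_3^2+\lambda_4^2\right).
\]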
For the sectional curvatures $k_{ij}=k(\alpha_{ij})$ of basic 2-planes $\alpha_{ij}=(X_i,X_j)$, according to \eqref{1.7}, \eqref{3.12} and \eqref{2.2}, we have: \begin{equation}\label{3.15} \begin{array}{ll} k_{12}=\frac{1}{4}\left(\lambda_1^2+\lambda_2^2\right),\quad & k_{13}=\frac{1}{4}\left(\lambda_1^2+\lambda_4^2\right),\\[4pt] k_{14}=\frac{1}{4}\left(\lambda_2^2+\lambda_4^2\right),\quad & k_{23}=\frac{1}{4}\left(\lambda_1^2+\lambda_3^2\right),\\[4pt] k_{24}=\frac{1}{4}\left(\lambda_2^2+\lambda_3^2\right),\quad & k_{34}=\frac{1}{4}\left(\lambda_3^2+\lambda_4^2\right).\\[4pt] \end{array} \end{equation} The obtained geometric characteristics of the considered manifold are summarized in the following \begin{thm}\label{thm-3.2} Let $(G,P,g)$ be the 4-dimensional Riemannian almost product manifold where $G$ is the Lie group determined by \eqref{3.8}, and the structure $P$ and the metric $g$ are determined by \eqref{3.1} and \eqref{2.2}, respectively. Then \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item $(G,P,g)$ is a locally symmetric $\mathcal{W}_3$-manifold with Killing metric $g$ and zero Weyl tensor; \item The nonzero components of the basic tensor $F$, the curvature tensor $R$ and the Ricci tensor $\rho$ are \eqref{3.9}, \eqref{3.12} and \eqref{3.13}, respectively; \item The square norms of the Nijenhuis tensor $N$ and $\nabla P$ are \eqref{3.10} and \eqref{3.11}, respectively; \item The scalar curvature $\tau$ and the sectional curvatures $k_{ij}$ of the basic 2-planes are \eqref{3.14} and \eqref{3.15}, respectively. \end{enumerate} \end{thm} Let us remark that the 2-planes $\alpha_{13}$ and $\alpha_{24}$ are \emph{$P$-invariant 2-planes}, i.e. $P\alpha_{13}=\alpha_{13}$, $P\alpha_{24}=\alpha_{24}$. The 2-planes $\alpha_{12}$, $\alpha_{14}$, $\alpha_{23}$, $\alpha_{34}$ are \emph{totally real 2-planes}, i.e.
$\alpha_{12}\perp P\alpha_{12}$, $\alpha_{14}\perp P\alpha_{14}$, $\alpha_{23}\perp P\alpha_{23}$, $\alpha_{34}\perp P\alpha_{34}$. Then the equalities \eqref{3.15} imply the following \begin{thm}\label{thm-3.3} Let $(G,P,g)$ be the 4-dimensional Riemannian almost product manifold where $G$ is the Lie group determined by \eqref{3.8}, and the structure $P$ and the metric $g$ are determined by \eqref{3.1} and \eqref{2.2}, respectively. Then \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item $(G,P,g)$ is of constant $P$-invariant sectional curvatures iff \[\lambda_1^2+\lambda_4^2=\lambda_2^2+\lambda_3^2;\] \item $(G,P,g)$ is of constant totally real sectional curvatures iff \[\lambda_1^2=\lambda_4^2,\qquad \lambda_2^2=\lambda_3^2.\] \end{enumerate} \end{thm} \section{A Lie group as a pseudo-Riemannian almost product manifold with Killing metric and nonintegrable structure} In this section we consider a pseudo-Riemannian manifold $(G,P,g)$ with a metric $g$ determined by \eqref{2.3} and a structure $P$ defined as follows \begin{equation}\label{4.1} PX_1=X_1,\quad PX_2=X_2,\quad PX_3=-X_3,\quad PX_4=-X_4. \end{equation} Obviously, $P^2=\mathrm{id}$. Moreover, \eqref{2.3} and \eqref{4.1} imply \begin{equation}\label{4.2} g(PX_i,PX_j)=g(X_i,X_j). \end{equation} Therefore, $(G,P,g)$ is a pseudo-Riemannian almost product manifold. For the manifold $(G,P,g)$ we assume that $g$ is a Killing metric. Then $(G,P,g)$ is locally symmetric and the equalities \eqref{2.6}, \eqref{2.7}, \eqref{3.3} and \eqref{3.4} are valid. From \eqref{2.3}, \eqref{4.1} and \eqref{4.2} we obtain \eqref{3.5}, i.e. $(G,P,g)$ is a $\mathcal{W}_3$-manifold. Now, the equalities \eqref{3.6} and \eqref{3.7} are also satisfied.
According to \eqref{2.5}, from \eqref{3.7} we obtain \begin{equation}\label{4.3} \begin{array}{ll} [X_1,X_2]= \lambda_2 X_3 -\lambda_1 X_4,\qquad & [X_2,X_3]= -\lambda_2 X_1 -\lambda_3 X_4,\\[4pt] [X_1,X_3]= \lambda_2 X_2 +\lambda_4 X_4,\qquad & [X_2,X_4]= \lambda_1 X_1 +\lambda_3 X_3,\\[4pt] [X_1,X_4]= -\lambda_1 X_2 -\lambda_4 X_3,\qquad & [X_3,X_4]= -\lambda_4 X_1 +\lambda_3 X_2,\\[4pt] \end{array} \end{equation} where $\lambda_1=C_{24}^1$, $\lambda_2=C_{12}^3$, $\lambda_3=C_{24}^3$, $\lambda_4=C_{13}^4$. We verify immediately that the Jacobi identity is satisfied in this case. Let the conditions \eqref{4.3} be satisfied for a pseudo-Riemannian almost product manifold $(G,P,g)$ with structure $P$ and metric $g$ determined by \eqref{4.1} and \eqref{2.3}, respectively. Then we verify directly that $g$ is a Killing metric. Therefore, the following theorem is valid. \begin{thm}\label{thm-4.1} Let $(G,P,g)$ be a 4-dimensional pseudo-Riemannian almost product manifold, where $G$ is the connected Lie group with an associated Lie algebra, determined by a global basis $\{X_i\}$ of left invariant vector fields, and $P$ and $g$ are the almost product structure and the pseudo-Riemannian metric, determined by \eqref{4.1} and \eqref{2.3}, respectively. Then $(G,P,g)$ is a $\mathcal{W}_3$-manifold with a Killing metric $g$ iff $G$ belongs to the 4-parametric family of Lie groups, determined by \eqref{4.3}. \end{thm} From this point on, until the end of this section we shall consider the pseudo-Riemannian almost product manifold $(G,P,g)$ determined by the conditions of \thmref{thm-4.1}. In a way analogous to the previous section, we obtain some geometric characteristics of $(G,P,g)$.
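Let us illustrate, by a sample computation added here for clarity, how the signature of \eqref{2.3} changes the signs in \eqref{4.3} compared with \eqref{3.8}. Applying \eqref{2.5} to the triple $(X_1,X_2,X_3)$ and using $g(X_3,X_3)=-1$, we get
\[
0=g\bigl([X_1,X_2],X_3\bigr)+g\bigl([X_1,X_3],X_2\bigr)=-C_{12}^3+C_{13}^2,
\]
i.e. $C_{13}^2=C_{12}^3=\lambda_2$, in agreement with \eqref{4.3}, whereas in the Riemannian case \eqref{2.2} the same triple gives $C_{13}^2=-C_{12}^3$.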
We obtain the following nonzero components of the tensor $F$: \begin{equation}\label{4.4} \begin{array}{ll} F_{124}=-F_{214}=\lambda_1, \qquad & F_{213}=-F_{123}=\lambda_2,\\[4pt] F_{423}=-F_{324}=\lambda_3, \qquad & F_{314}=-F_{413}=\lambda_4.\\[4pt] \end{array} \end{equation} The other nonzero components of $F$ are obtained from the property $F_{ijk}=F_{ikj}$. The square norms of the Nijenhuis tensor $N$ and $\nabla P$ are respectively: \begin{equation}\label{4.5} \norm{N}=24\left(\lambda_1^2+\lambda_2^2-\lambda_3^2-\lambda_4^2\right), \end{equation} \begin{equation}\label{4.6} \norm{\nabla P}=-4\left(\lambda_1^2+\lambda_2^2-\lambda_3^2-\lambda_4^2\right). \end{equation} The nonzero components of the curvature tensor $R$ and the Ricci tensor $\rho$ are respectively: \begin{equation}\label{4.7} \begin{array}{ll} R_{1221}=-\frac{1}{4}\left(\lambda_1^2+\lambda_2^2\right),\quad & R_{1331}=\frac{1}{4}\left(\lambda_2^2-\lambda_4^2\right),\\[4pt] R_{1441}=-\frac{1}{4}\left(\lambda_1^2-\lambda_4^2\right),\quad & R_{2332}=\frac{1}{4}\left(\lambda_2^2-\lambda_3^2\right),\\[4pt] R_{2442}=\frac{1}{4}\left(\lambda_1^2-\lambda_3^2\right),\quad & R_{3443}=\frac{1}{4}\left(\lambda_3^2+\lambda_4^2\right),\\[4pt] R_{1341}=R_{2342}=-\frac{1}{4}\lambda_1\lambda_2,\quad & R_{2132}=-R_{4134}=\frac{1}{4}\lambda_1\lambda_3,\\[4pt] R_{1231}=-R_{4234}=\frac{1}{4}\lambda_1\lambda_4,\quad & R_{2142}=-R_{3143}=\frac{1}{4}\lambda_2\lambda_3,\\[4pt] R_{1241}=-R_{3243}=\frac{1}{4}\lambda_2\lambda_4,\quad & R_{3123}=R_{4124}=\frac{1}{4}\lambda_3\lambda_4;\\[4pt] \end{array} \end{equation} \begin{equation}\label{4.8} \begin{array}{c} \begin{array}{ll} \rho_{11}=-\frac{1}{2}\left(\lambda_1^2+\lambda_2^2-\lambda_4^2\right),\quad & \rho_{22}=-\frac{1}{2}\left(\lambda_1^2+\lambda_2^2-\lambda_3^2\right),\\[4pt] \rho_{33}=\frac{1}{2}\left(\lambda_2^2-\lambda_3^2-\lambda_4^2\right),\quad & \rho_{44}=\frac{1}{2}\left(\lambda_1^2+\lambda_3^2-\lambda_4^2\right),\\[4pt] \end{array} \\[4pt]
\begin{array}{lll} \rho_{12}=-\frac{1}{2}\lambda_3\lambda_4,\quad & \rho_{13}=\frac{1}{2}\lambda_1\lambda_3,\quad & \rho_{14}=\frac{1}{2}\lambda_2\lambda_3,\\[4pt] \rho_{23}=\frac{1}{2}\lambda_1\lambda_4,\quad & \rho_{24}=\frac{1}{2}\lambda_2\lambda_4,\quad & \rho_{34}=-\frac{1}{2}\lambda_1\lambda_2.\\[4pt] \end{array} \end{array} \end{equation} The other nonzero components of $R$ and $\rho$ are obtained from the properties $R_{ijks}=R_{ksij}$, $R_{ijks}=-R_{jiks}=-R_{ijsk}$ and $\rho_{ij}=\rho_{ji}$. The scalar curvature is \begin{equation}\label{4.9} \tau=-\frac{3}{2}\left(\lambda_1^2+\lambda_2^2-\lambda_3^2-\lambda_4^2\right). \end{equation} For the Weyl tensor we get $W=0$. Then $(G,P,g)$ is a conformally flat manifold. The sectional curvatures $k_{ij}=k(\alpha_{ij})$ of basic 2-planes $\alpha_{ij}=(X_i,X_j)$ are: \begin{equation}\label{4.10} \begin{array}{ll} k(\alpha_{13})=-\frac{1}{4}\left(\lambda_2^2-\lambda_4^2\right),\quad & k(\alpha_{24})=-\frac{1}{4}\left(\lambda_1^2-\lambda_3^2\right),\\[4pt] k(\alpha_{12})=-\frac{1}{4}\left(\lambda_1^2+\lambda_2^2\right),\quad & k(\alpha_{14})=-\frac{1}{4}\left(\lambda_1^2-\lambda_4^2\right),\\[4pt] k(\alpha_{23})=-\frac{1}{4}\left(\lambda_2^2-\lambda_3^2\right),\quad & k(\alpha_{34})=\frac{1}{4}\left(\lambda_3^2+\lambda_4^2\right).\\[4pt] \end{array} \end{equation} Since $\alpha_{ij}=P\alpha_{ij}$, all basic 2-planes are $P$-invariant. It is easy to check that in this case $(G,P,g)$ does not admit constant $P$-invariant sectional curvatures. The obtained geometric characteristics of the considered manifold are summarized in the following \begin{thm}\label{thm-4.2} Let $(G,P,g)$ be the 4-dimensional pseudo-Riemannian almost product manifold where $G$ is the Lie group determined by \eqref{4.3}, and the structure $P$ and the metric $g$ are determined by \eqref{4.1} and \eqref{2.3}, respectively.
Then \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item $(G,P,g)$ is a locally symmetric conformally flat $\mathcal{W}_3$-manifold with Killing metric $g$; \item The nonzero components of the basic tensor $F$, the curvature tensor $R$ and the Ricci tensor $\rho$ are \eqref{4.4}, \eqref{4.7} and \eqref{4.8}, respectively; \item The square norms of the Nijenhuis tensor $N$ and $\nabla P$ are \eqref{4.5} and \eqref{4.6}, respectively; \item The scalar curvature $\tau$ and the sectional curvatures $k_{ij}$ of the basic 2-planes are \eqref{4.9} and \eqref{4.10}, respectively. \end{enumerate} \end{thm} The last theorem implies immediately the following \begin{cor} Let $(G,P,g)$ be the 4-dimensional pseudo-Riemannian almost product manifold where $G$ is the Lie group determined by \eqref{4.3}, and the structure $P$ and the metric $g$ are determined by \eqref{4.1} and \eqref{2.3}, respectively. Then the following propositions are equivalent: \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item $(G,P,g)$ is an isotropic $P$-manifold; \item $(G,P,g)$ is a scalar flat manifold; \item The Nijenhuis tensor is isotropic; \item The condition $\lambda_1^2+\lambda_2^2-\lambda_3^2-\lambda_4^2=0$ is valid. \end{enumerate} \end{cor} \textit{Dimitar Mekerov\\ University of Plovdiv\\ Faculty of Mathematics and Informatics \\ Department of Geometry\\ 236 Bulgaria Blvd.\\ Plovdiv 4003\\ Bulgaria \\ e-mail: [email protected]} \end{document}
\begin{document} \title{Nonlinear quantum Rabi model in trapped ions} \author{Xiao-Hang Cheng} \affiliation{Department of Physics, Shanghai University, 200444 Shanghai, People's Republic of China} \affiliation{Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain} \author{I\~{n}igo Arrazola} \affiliation{Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain} \author{Julen S. Pedernales} \affiliation{Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain} \affiliation{Institute for Theoretical Physics and IQST, Albert-Einstein-Allee 11, Universit\"at Ulm, D-89069 Ulm, Germany} \author{Lucas Lamata} \affiliation{Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain} \author{Xi Chen} \affiliation{Department of Physics, Shanghai University, 200444 Shanghai, People's Republic of China} \author{Enrique Solano} \affiliation{Department of Physics, Shanghai University, 200444 Shanghai, People's Republic of China} \affiliation{Department of Physical Chemistry, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain} \affiliation{IKERBASQUE, Basque Foundation for Science, Maria Diaz de Haro 3, 48013 Bilbao, Spain} \begin{abstract} We study the nonlinear dynamics of trapped-ion models far away from the Lamb-Dicke regime. This nonlinearity induces a blockade on the propagation of quantum information along the Hilbert space of the Jaynes-Cummings and quantum Rabi models. We propose to use this blockade as a resource for the dissipative generation of high-number Fock states. Also, we compare the linear and nonlinear cases of the quantum Rabi model in the ultrastrong and deep strong coupling regimes. Moreover, we propose a scheme to simulate the nonlinear quantum Rabi model in all coupling regimes. 
This can be done via off-resonant nonlinear red and blue sideband interactions in a single trapped ion, yielding applications as a dynamical quantum filter. \end{abstract} \pacs{03.67.Ac, 03.67.Lx, 37.10.Ty, 42.50.Ct, 37.10.Vz} \maketitle \section{Introduction} Proposed in 1936 by I. I. Rabi~\cite{Rabi}, the most fundamental interaction between a two-level atom and a classical light field, the semiclassical Rabi model, has played an important role in both physics and mathematics~\cite{Rabi80, analytic Rabi}. Under the rotating wave approximation (RWA), its fully quantized form, the quantum Rabi model (QRM), reduces to the Jaynes-Cummings model (JCM)~\cite{JCM}, which is analytically solvable~\cite{Wolfgang}. This model describes the basic interaction in trapped ions~\cite{ion review}, superconducting circuits~\cite{superconducting1}, and cavity quantum electrodynamics~\cite{CQED}, when the systems are in the regime where the ratio of the coupling strength $g$ to the mode frequency $\nu$ is smaller than approximately $0.1$~\cite{structure}. On the other hand, in the ultrastrong coupling (USC) regime, $g/\nu \in (0.1, 1)$~\cite{USC circuits, USC2}, and the deep strong coupling (DSC) regime $(g/\nu > 1)$~\cite{DSC, DSC2, spectral}, the counter-rotating terms neglected in the JCM have to be taken into account. The QRM is a fruitful physical model with applications in condensed matter, quantum optics, and quantum information processing. In fact, the QRM has been investigated in many contexts, such as quantum phase transitions~\cite{QPT1, ground state, QPT2}, the dissipative QRM~\cite{SDE Rabi}, generalized QRMs~\cite{GQRM1, GQRM2, GQRM3, GQRM4}, multiparticle QRMs~\cite{two qubits Rabi, MultiQRM1, MultiQRM2, MultiQRM3}, and quantum thermodynamics~\cite{heat engine}, among others.
Furthermore, proposals and experimental realizations of the QRM in different quantum simulators such as optical lattices~\cite{ultracold atoms}, circuit QED~\cite{Rabi circuits}, as well as trapped ions~\cite{ion Rabi, ion Rabi exp, cross cavity Rabi,PueblaNJP} have been put forward. Reference~\cite{ion Rabi} introduced an analog method for the simulation of different regimes of the QRM, otherwise inaccessible to experimentation from first principles. These ideas have recently been demonstrated in the lab \cite{ion Rabi exp}. As one of the most controllable quantum systems, trapped ions play an important role in diverse proposals for quantum simulations~\cite{ion Rabi, ion Rabi exp, trapped ion3, spin models, QFT, fermion lattice, Holstein model, fermionic and bosonic models, qs dirac, nature, Klein Paradox, majorana1, KihwanMajo, parity, Arrazola2016,Cheng2017}. However, most of these works are based on the condition that the system be in the Lamb-Dicke (LD) regime. In this regime, the size of the motional wavepacket of the ion is much smaller than the wavelength of the external laser driving, such that the effective coupling between the internal and the vibrational degrees of freedom, generated by the laser field, can be approximated to first order~\cite{ion review}. This condition can also be expressed as $\eta\sqrt{\langle (a+a^{\dag})^2 \rangle}\ll 1$, where $a^{\dag}$($a$) is the creation (annihilation) operator associated with the quantum vibrational mode of the ion along a certain direction $x$ and $\eta=k\sqrt{\hbar/2 M\nu}$ is the LD parameter in this direction, with $k$ the wave number of the external laser field, $M$ the mass of the ion and $\nu$ the frequency of the harmonic potential. In this article, we study the nonlinear behaviour of a single trapped ion when it is far away from the LD regime.
In the past, research beyond the LD regime was mainly focused on the nonlinear JCM~\cite{Vogel95,MatosFilho96,MatosFilho96_2,Stevens98}, but has also been studied for its implications in laser cooling~\cite{Morigi97,Morigi99,Foster09} or for its possible applications to simulate Franck-Condon physics~\cite{Hu11}. To set the stage for the subsequent analysis, we first briefly review the JCM and take this as a reference to show the difference with the nonlinear JCM. The appearance of nonlinear terms in the Hamiltonian suppresses the collapses and revivals for a coherent state evolution typical of the linear case. Later on, we investigate how the nonlinear anti-Jaynes-Cummings model, which appears as the counterpart of the nonlinear JCM, can be combined with controlled depolarizing noise to generate arbitrary $n$-phonon Fock states. Moreover, the latter could in principle be done without precise control of pulse durations or shapes, and without the requirement of a previous high-fidelity preparation of the motional ground state. Furthermore, we propose the quantum simulation of the nonlinear quantum Rabi model by simultaneous off-resonant nonlinear Jaynes-Cummings and anti-Jaynes-Cummings interactions. Finally, we also point out the possibility for the nonlinear quantum Rabi model to act as a motional state filter. \section{Jaynes-Cummings Models in Trapped Ions} The Hamiltonian describing a laser-cooled two-level ion trapped in a harmonic potential and driven by a monochromatic laser field can be expressed as ($\hbar=1$) \begin{equation}\label{IonHamil} H=\frac{\omega_0}{2}\sigma_z+\nu a^{\dag}a+\frac{\Omega}{2}\sigma^x[e^{i(\eta(a+a^\dag)-\omega t+\phi)}+{\rm H.c.}], \end{equation} where $\omega_0$ is the two-level transition frequency, $\sigma_z,\sigma^x$ are Pauli matrices associated to this two-level system, $\Omega$ is the Rabi frequency, $\omega$ is the driving laser frequency, and $\phi$ is the phase of the laser field.
In the Lamb-Dicke regime, moving to an interaction picture with respect to $H_0=\frac{\omega_0}{2}\sigma_z+\nu a^{\dag}a$, and after the application of the so-called optical RWA, the Hamiltonian in Eq.(\ref{IonHamil}) can be written as~\cite{ion review} \begin{equation}\label{LDregime} H_{\rm int}^{\rm LD}=\frac{\Omega}{2}\sigma^+[1+i\eta(ae^{-i\nu t}+a^\dag e^{i\nu t})]e^{i(\phi-\delta t)}+{\rm H.c.}, \end{equation} where $\delta=\omega-\omega_0$ is the laser detuning and the condition $\eta \ll1$ allows one to keep only the zeroth- and first-order terms in the expansion of $\exp{[i\eta(a+a^\dag)]}$. When $\delta=-\nu$ and $\Omega\ll \nu$, after applying the vibrational RWA, the dynamics of such a system is described by the Jaynes-Cummings Hamiltonian, $H_{\rm JC}=i g (\sigma^+ a - \sigma^-a^{\dag})$, where $g=\eta\Omega/2$ and $\phi=0$. This JCM is analytically solvable and generates population exchange between states $|\!\downarrow,n\rangle \leftrightarrow |\!\uparrow,n\!-\!1\rangle$ with rate $\Omega_{n,n-1}=\eta\Omega\sqrt{n}$. On the other hand, when the detuning is chosen to be $\delta=\nu$, the effective model is instead described by the anti-JCM $H_{\rm aJC}=i g (\sigma^+ a^\dag - \sigma^-a)$, which generates population transfer between states $|\!\downarrow,n\rangle \leftrightarrow |\!\uparrow,n\!+\!1\rangle$ with rate $\Omega_{n,n+1}=\eta\Omega\sqrt{n+1}$. When the trapped-ion system is beyond the Lamb-Dicke regime, the simplification of the exponential term described above is not justified and Eq.(\ref{LDregime}) reads \begin{eqnarray}\label{BLDregime} H_{\rm int}=\frac{\Omega}{2}\sigma^+ e^{i\eta(a^\dag e^{i\nu t}+a e^{-i\nu t})-i(\delta t-\phi)}+{\rm H.c.}\label{IntHam}.
\end{eqnarray} When $\delta=-\nu$ and $\Omega \ll \nu$, after applying the vibrational RWA, the effective Hamiltonian describing the system is given by the nonlinear Jaynes-Cummings model~\cite{Vogel95}, which can be expressed as \begin{eqnarray} H_{\rm nJC}=ig[\sigma^+ f_1(\hat{n}) a - \sigma^- a^\dag f_1(\hat{n})], \end{eqnarray} where the nonlinear function $f_1$~\cite{Vogel95} is given by \begin{equation}\label{NLfunc} f_1(\hat{n})=e^{-\eta^2/2}\sum_{l=0}^{\infty}\frac{(-\eta^2)^l}{l!(l+1)!}a^{\dag l} a^l, \end{equation} with $a^{\dag l} a^l=\hat{n}!/(\hat{n}-l)!$. The dynamics of this model can also be solved analytically, and, as in the linear JCM, yields population exchange between states $|\!\downarrow,n\rangle \leftrightarrow |\!\uparrow,n\!-\!1\rangle$, but in this case with a rate $\tilde{\Omega}_{n,n-1}= |f_1(n\!-\!1)|\Omega_{n,n-1}=\eta\Omega\sqrt{n} |f_1(n\!-\!1)|$, where $f_1(n)$ corresponds to the value of the diagonal operator $f_1$ evaluated on the Fock state $|n\rangle$, i.e. $f_1(n)\equiv\langle f_1(\hat{n})\rangle_n$. If the detuning in Eq.(\ref{BLDregime}) is chosen to be $\delta=\nu$, and $\Omega \ll \nu$, then the application of the vibrational RWA yields the nonlinear anti-JCM, \begin{eqnarray} H_{\rm naJC}=ig[\sigma^+a^\dag f_1(\hat{n}) - \sigma^- f_1(\hat{n})a ], \end{eqnarray} which, like the linear anti-JCM, generates population exchange between states $|\!\downarrow,n\rangle \leftrightarrow |\!\uparrow,n\!+\!1\rangle$ with rate $\tilde{\Omega}_{n,n+1}=|f_1(n)|\Omega_{n,n+1}=\eta\Omega\sqrt{n+1} |f_1(n)|$. The nonlinear function $f_1$ depends on the LD parameter $\eta$ and on the Fock state $| n \rangle$ on which it is acting. The LD regime is then recovered when $\eta\sqrt{n}\ll 1$. In this regime, $|f_1(n)|\approx1$, and thus the dynamics reduce to those of the linear models.
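For concreteness, the diagonal values $f_1(n)$ entering these rates are straightforward to evaluate numerically, since on a Fock state $|n\rangle$ the sum in Eq.~(\ref{NLfunc}) truncates at $l=n$. The following Python sketch (helper names are ours) reproduces the behaviour used in the remainder of the paper: the sign changes of $f_1$ for $\eta=0.5$, and the values of $\eta$ at which $f_1$ vanishes on a prescribed Fock state.

```python
import math

def f1(n, eta):
    # Diagonal element of the nonlinear operator on Fock state |n>:
    # f1(n) = exp(-eta^2/2) * sum_{l=0}^{n} (-eta^2)^l / (l!(l+1)!) * n!/(n-l)!
    # (terms with l > n vanish, so the infinite sum truncates exactly).
    s = sum((-eta**2) ** l / (math.factorial(l) * math.factorial(l + 1))
            * (math.factorial(n) // math.factorial(n - l))
            for l in range(n + 1))
    return math.exp(-eta**2 / 2) * s

def zero_eta(n, step=0.01):
    # Smallest eta > 0 with f1(n, eta) = 0, by coarse scan plus bisection.
    lo = step
    while f1(n, lo) * f1(n, lo + step) > 0:
        lo += step
    hi = lo + step
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f1(n, lo) * f1(n, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# In the LD regime (eta*sqrt(n) << 1) the linear models are recovered: f1 ~ 1.
print(f1(3, 0.01))                               # ~1
# For eta = 0.5, f1 changes sign close to n = 14 and n = 48.
print([round(f1(n, 0.5), 4) for n in (13, 14, 15)])
# LD parameters quoted in the text for barriers at n = 7, 10, 17:
print(zero_eta(7), zero_eta(10), zero_eta(17))   # ~0.679, ~0.578, ~0.452
```

This is a plain numerical check, not the paper's trapped-ion simulation; it only evaluates the closed-form operator entries.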
\begin{figure} \caption{(color online) (a) Logarithm of the absolute value of the operator $f_1(\hat{n})$ evaluated for different Fock states $|n\rangle$ and LD parameters $\eta$. Dark (blue) regions represent cases where $f_1(\hat{n})|n\rangle\approx 0$. (b) Nonlinear function $f_1(n)$ for a fixed value of the LD parameter $\eta=0.5$ (oscillating blue curve). Zero value (horizontal orange line).} \label{NLZeros} \end{figure} Beyond the LD regime the nonlinear function $f_1$, which has an oscillatory behaviour both in $n\in\mathbb{N}$ and $\eta\in\mathbb{R}$, needs to be taken into account. In Fig.~\ref{NLZeros}a, we plot the logarithm of the absolute value of $f_1(n,\eta)$ for different values of $n$ and $\eta$, where the darker regions represent lower values of $\log{(|f_1(n,\eta)|)}$, i.e., values for which $f_1\approx 0$. This oscillatory behaviour can also be seen in Fig.~\ref{NLZeros}b, where we plot the value of $f_1$ as a function of the Fock state number $n$ for $\eta=0.5$. For this specific case, we can see that the function is close to zero around $n=14$ and $n=48$, meaning that for $\eta=0.5$, the rate of population exchange between $|\!\downarrow,15\rangle \leftrightarrow |\!\uparrow,14\rangle$ and $|\!\downarrow,49\rangle \leftrightarrow |\!\uparrow,48\rangle$ states in the nonlinear JCM will vanish. The same will happen to the exchange rate between $|\!\downarrow,14\rangle \leftrightarrow |\!\uparrow,15\rangle$ and $|\!\downarrow,48\rangle \leftrightarrow |\!\uparrow,49\rangle$ states for the nonlinear anti-JCM. We observe approximate collapses and revivals for an initial coherent state with an average number of photons of $|\alpha|^2=30$ by evolving with the JCM, as shown in Ref.~\cite{CollapseRevival}; see Fig.~\ref{JaynesCollapse}a. Here, we plot $\langle\sigma^z(t)\rangle=\langle \psi(t)|\sigma^z|\psi(t)\rangle$ for a state that evolves according to the JCM.
Comparing the same case for the nonlinear JCM with $\eta=0.5$, as depicted in Fig.~\ref{JaynesCollapse}b, we observe that in the latter case the collapses and revivals vanish, and the dynamics is more irregular. This seems natural given that the phenomenon of revival takes place whenever the most significant components of the quantum state, after some evolution time, turn out to oscillate in phase again, which may be less likely if the dynamics is nonlinear. Notice that we let the case of the nonlinear JCM evolve for a longer time, since the nonlinear function $f_1$ effectively slows down the evolution. \begin{figure} \caption{(color online) Average value of the $\sigma_z$ operator versus time for a coherent initial state $|\alpha=\sqrt{30}\rangle$ after (a) linear JC and (b) nonlinear JC evolution, both with the same coupling strength $g$ and $\eta=0.5$ for the nonlinear case. As shown in (a), there exists an approximate collapse and subsequent revival in the JCM dynamics, while for the nonlinear JCM this is not the case. } \label{JaynesCollapse} \end{figure} \section{Fock State generation with Dissipative nonlinear anti-JCM} In this section we study the possibility of using the dynamics of the nonlinear anti-Jaynes-Cummings model introduced in the previous section to, along with depolarizing noise, generate high-number Fock states in a dissipative manner. In particular, the depolarizing noise that we consider corresponds to the spontaneous relaxation of the internal two-level system of the ion. Such a dissipative process, combined with the dynamics of the JCM in the LD regime (linear JCM), is routinely exploited in trapped-ion setups for the implementation of sideband cooling techniques. It is worth mentioning that the effect of nonlinearities on sideband cooling protocols, which arise outside the LD regime, has also been studied~\cite{cooling1, beyond LD}.
Our method works as follows: we start in the ground state of both the motional and the internal degrees of freedom $|\!\downarrow,0\rangle$ (as we will show later, our protocol works as well when we are outside the motional ground state, as long as the population of Fock states higher than the target Fock state is negligible). Acting with the nonlinear anti-JC Hamiltonian, we induce a population transfer from state $|\!\downarrow,0\rangle$ to state $|\!\uparrow,1\rangle$, while at the same time, the depolarizing noise transfers population from $|\!\uparrow, 1\rangle$ to $|\!\downarrow, 1\rangle$. The simultaneous action of both processes will ``heat'' the motional state, progressively transferring the population of the system from one Fock state to the next one. Eventually, all the population will be accumulated in state $|\!\downarrow,n\rangle$, where, if $f_1(n)=0$, a blockade of the propagation of population through the chain of Fock states occurs, as the transfer rate between states $|\!\downarrow,n\rangle$ and $|\!\uparrow,n+1\rangle$ vanishes, $\tilde{\Omega}_{n,n+1}=0$. We point out that the condition $f_1(n)=0$ can always be achieved by tuning the LD parameter to a suitable value, i.e., for every Fock state $|n\rangle$ with $n>0$, there exists a value of the LD parameter $\eta$ for which $f_1(n,\eta)=0$. As an example, we choose the LD parameter $\eta=0.4518$, for which $f_1(17)=0$, and simulate our protocol using the master equation \begin{eqnarray}\label{LabMasterEq} \dot{\rho}=&&-i[H_{\rm naJC},\rho]+\Gamma_{m} L(\sigma^{-})\rho, \end{eqnarray} where $\Gamma_{m}=2g$ is the decay rate of the internal state, and the Lindblad superoperator acts on $\rho$ as ${L(\hat{X})\rho=(2\hat{X}\rho \hat{X}^{\dag}-\hat{X}^{\dag} \hat{X}\rho-\rho \hat{X}^{\dag} \hat{X})/2}$. \begin{figure} \caption{(color online) (a) The nonlinear function $f_1$ evaluated at different Fock states $n$, for the case of $\eta=0.4518$ (decreasing blue curve).
Zero value (horizontal orange line). For this value of the LD parameter, $f_1|17\rangle=0$. (b) Phonon statistics of the initial thermal state with $\langle n \rangle=1$. (c) Time evolution of the average value of the number operator $\hat{n}$ starting from the state in (b) and following the evolution for the preparation of Fock state $| 17 \rangle$, that is, during a nonlinear anti-JCM evolution with spontaneous decay of the two-level system. (d) Phonon statistics at the end of the protocol, $t=100\times2\pi/g$, with all the population concentrated in Fock state $|17\rangle$. } \label{ajcdecay} \end{figure} In Fig.~\ref{ajcdecay} we numerically show how our protocol is able to generate the motional Fock state $ |17 \rangle$, starting from a thermal state $\rho_T=\sum_{k=0}^{\infty} \frac{\langle n\rangle^k}{(\langle n\rangle+1)^{k+1}}|k\rangle\langle k|$, with $\langle n\rangle=1$. In other words, one can obtain large final Fock states starting from an imperfectly cooled motional state, by a suitable tuning of the LD parameter. As an advantage of our method compared to previous approaches~\cite{Meekhof96}, we do not need fine control over the Rabi frequencies or pulse durations, given that the whole wavefunction, for an arbitrary initial state with motional components smaller than $n$, will converge to the target Fock state $|n\rangle$. We want to point out that this protocol relies only on the precision to which the LD parameter can be set, which in turn depends on the precision to which the wave number $k$ and the trap frequency $\nu$ can be controlled. These parameters enjoy a great stability in trapped-ion setups~\cite{Johnson16}, and therefore we deem the generation of high-number Fock states a promising application of the nonlinear anti-JCM dynamics. \section{Nonlinear Quantum Rabi Model} Here we propose to implement the nonlinear quantum Rabi model (NQRM) in all its parameter regimes via the use of the Hamiltonian in Eq.(\ref{IntHam}).
We consider off-resonant first-order red- and blue-sideband drivings with the same coupling $\Omega$ and corresponding detunings $\delta_r$, $\delta_b$. The interaction Hamiltonian after the optical RWA reads~\cite{ion review, ion Rabi}, \begin{eqnarray} H_{\rm int} = \sum\limits_{n=r,b}\frac{\Omega}{2}\sigma^+e^{i\eta(a^{\dag}e^{i \nu t}+a e^{-i \nu t})}e^{-i(\delta_n t-\phi_n)}+{\rm H.c.}, \end{eqnarray} where $\omega_r=\omega_0-\nu+\delta_r$ and $\omega_b=\omega_0+\nu+\delta_b$, with $\delta_r,\delta_b\ll \nu \ll \omega_0$ and $\Omega \ll \nu$. We consider the system beyond the Lamb-Dicke regime and set the laser field phases to $\phi_{r,b}=0$. If we invoke the vibrational RWA, i.e. neglect terms that rotate with frequencies in the order of $\nu$, the remaining terms read \begin{equation} H_{\rm int}=ig\sigma^+\big(f_1ae^{-i \delta_r t}+a^{\dag}f_1e^{-i \delta_b t}\big)+{\rm H.c.}, \end{equation} where $g=\eta\Omega/2$ and $f_1\equiv f_1(\hat{n})$ was introduced in Eq.~(\ref{NLfunc}). The latter corresponds to an interaction picture Hamiltonian of the NQRM with respect to the free Hamiltonian $H_0=\frac{1}{4}(\delta_b+\delta_r)\sigma_z +\frac{1}{2}(\delta_b-\delta_r)a^\dag a$. Therefore, undoing the interaction picture transformation, we have \begin{equation}\label{NQRM} H_{\rm nQRM}=\frac{\omega_0^{\rm R}}{2}\sigma_z+\omega^{\rm R} a^{\dag}a+i g (\sigma^+ - \sigma^-)(f_1a+a^{\dag}f_1), \end{equation} where $\omega_0^{\rm R}=-\frac{1}{2}(\delta_r+\delta_b)$ and $\omega^{\rm R}=\frac{1}{2}(\delta_r-\delta_b)$. Equation~(\ref{NQRM}) represents the general form of the NQRM, where $\omega_0^{\rm R}$ is the level splitting of the simulated two level system, $\omega^{\rm R}$ is the frequency of the simulated bosonic mode and $g$ is the coupling strength between them, which in turn will be modulated by the nonlinear function $f_1(\hat{n},\eta)$. The different regimes of the NQRM will be characterized by the relation among these four parameters. 
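As a quick consistency check of this mapping, the simulated NQRM parameters follow from the two detunings and the Rabi frequency alone; a minimal Python sketch (the function name is ours) with the DSC parameter set quoted below in the text:

```python
from math import pi

def nqrm_params(delta_r, delta_b, eta, Omega):
    # Simulated NQRM parameters from the mapping in the text:
    # omega0_R = -(delta_r + delta_b)/2, omega_R = (delta_r - delta_b)/2, g = eta*Omega/2
    omega0_R = -(delta_r + delta_b) / 2
    omega_R = (delta_r - delta_b) / 2
    g = eta * Omega / 2
    return omega0_R, omega_R, g

# DSC example used in the text:
# delta_r = -delta_b = 2*pi*11.31 kHz, eta = 0.67898, Omega = 2*pi*133.26 kHz
w0, w, g = nqrm_params(2 * pi * 11.31e3, -2 * pi * 11.31e3, 0.67898, 2 * pi * 133.26e3)
print(w0, w / (2 * pi), g / (2 * pi), g / w)  # 0.0, 11310 Hz, ~45240 Hz, g/w ~ 4
```

With $\delta_r=-\delta_b=2\pi\times11.31$ kHz, $\eta=0.67898$ and $\Omega=2\pi\times133.26$ kHz one indeed obtains $\omega_0^{\rm R}=0$, $\omega^{\rm R}=2\pi\times11.31$ kHz and $g\approx2\pi\times45.24$ kHz, i.e. $g/\omega^{\rm R}\approx4$.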
First, in the LD regime or $\eta\sqrt{\langle (a+a^{\dag})^2 \rangle}\ll 1$, Eq.~(\ref{NQRM}) can be approximated to the linear QRM~\cite{ion Rabi}. Beyond the LD regime, in a parameter regime where $|\omega^{\rm R}-\omega_0^{\rm R}|\ll g\ll|\omega^{\rm R}+\omega_0^{\rm R}|$, the RWA can be applied. This would imply neglecting terms that rotate at frequency $\omega^{\rm R}+\omega_0^{\rm R}$ in an interaction picture with respect to $H_0$, leading to the nonlinear JCM studied previously in this article. On the other hand, the nonlinear anti-JCM would be recovered in a regime where $|\omega^{\rm R}+\omega_0^{\rm R}|\ll g\ll|\omega^{\rm R}-\omega_0^{\rm R}|$. It is worth mentioning that the latter is only possible if the frequency of the two-level system and the frequency of the mode have opposite sign. The USC and DSC regimes are defined as $0.1\lesssim g/\omega^{\rm R} \lesssim 1$ and $g/\omega^{\rm R}\gtrsim1$ respectively, and in these regimes the RWA does not hold anymore. \begin{figure} \caption{(color online) (a) Fidelity with respect to the initial state $P(t)=|\langle\psi_0|\psi(t)\rangle|^2$ versus time. As initial state we choose $|0, \rm g\rangle$ and the evolution occurs under the NQRM with LD parameter $\eta=0.67898$, where $f_1|7\rangle=0$, $g/\omega^{\rm R}=4$ and $\omega_0^{\rm R}=0$. (b) Phonon statistics at different times for the NQRM evolved from the initial state $|0, \rm g\rangle$. The propagation of population through Fock states stops at $|n=7\rangle$, with Fock states of $n>7$ never getting populated. } \label{NonlinearFockProSta} \end{figure} As an example, here we investigate the NQRM in the DSC regime with initial Fock state $|0, \rm g\rangle$, where $|0\rangle$ is the ground-state of the bosonic mode, and $|\rm g \rangle$ stands for the ground state of the effective two-level system. In Fig.~\ref{NonlinearFockProSta}, we study the case for $\eta=0.67898$, where $f_1|7\rangle=0$, $g/\omega^{\rm R}=4$ and $\omega_0^{\rm R}=0$. 
More specifically, a quantum simulation of the model in this regime can be achieved with the following parameters: $\delta_r=2\pi\times11.31$kHz, $\delta_b=-2\pi\times11.31$kHz, $g=2\pi\times45.24$kHz and $\Omega=2\pi\times 133.26$kHz. In Ref.~\cite{DSC}, it was shown that the linear QRM exhibits collapses and revivals and a round trip of the phonon-number wavepacket along the chain of Fock states, when in the DSC regime. Here, we observe that in the nonlinear case, Fig.~\ref{NonlinearFockProSta}, collapses and revivals do not present the same clear structure, having a more irregular evolution. Most interestingly, the system dynamics never surpasses Fock state $|n\rangle$, for which $f_1(n)=0$. Regarding the simulated regime of the nonlinear QRM, we point out that the nonlinear term also contributes to the coupling strength. Therefore, to keep the NQRM in the DSC regime, the ratio $g/\omega^{\rm R}$ should be larger than that for the linear QRM since $f_1(n)< 1$ for all $n$. Summarizing, our result illustrates that the Hilbert space is effectively divided into two subspaces by the NQRM, namely those spanned by Fock states below and above Fock state $| n \rangle$. We denote the Fock number $n$, where $f_1|n\rangle=0$, as ``the barrier'' of the NQRM. \begin{figure} \caption{(color online) (a) Overlap of the instantaneous state with the initial state $P(t)=|\langle\psi_0|\psi(t)\rangle|^2$ versus time, for a coherent initial state $|\alpha\!=\!1,\rm g\rangle$ evolving under the linear QRM. Collapses and revivals are observed, as expected in the DSC regime of the linear QRM. (b) Phonon statistics at different times, where we see the round trip of a phonon number wavepacket.
} \label{LinearCohProSta} \end{figure} \begin{figure} \caption{(color online) (a) Overlap with the initial state $P(t)=|\langle\psi_0|\psi(t)\rangle|^2$ versus time, for initial state $|\alpha\!=\!1,\rm g\rangle$ evolving under the NQRM with LD parameter $\eta=0.57838$, where $f_1|10\rangle=0$, $g/\omega^{\rm R}=3.7$ and $\omega_0^{\rm R}=0$. (b) Phonon statistics at different times for the NQRM evolved from the initial state $|\alpha\!=\!1,\rm g\rangle$. The Fock state $|10\rangle$ is never surpassed because $f_1|10\rangle=0$. } \label{NonlinearCohProSta} \end{figure} To benchmark the effect of the barrier, we also provide simulations starting from an initial coherent state with $\alpha=1$ whose average phonon number is $\langle n \rangle=|\alpha|^2=1$, and make the comparison between the QRM and the NQRM in the DSC regime. For the parameter regime $g/\omega^{\rm R}=2$ and $\omega_0^{\rm R}=0$, the fidelity with respect to the initial coherent state in the linear QRM displays periodic collapses and full revivals, as can be seen in Fig.~\ref{LinearCohProSta}(a). In Fig.~\ref{LinearCohProSta}(b), we observe a round trip of the phonon-number wave packet, similarly to what was shown in Ref.~\cite{DSC} for the case of the linear QRM starting from a Fock state. The NQRM, on the other hand, has an associated dynamics that is aperiodic and more irregular, as shown in Fig.~\ref{NonlinearCohProSta}, and never crosses the motional barrier produced by the corresponding $f_1(n)=0$. Therefore, it can be employed as a motional filter, which is determined by the location of the barrier with respect to the initial state distribution. Here, by filter we mean that population transfer to Fock states above a given threshold can be prevented. For the simulation we choose the LD parameter $\eta=0.57838$, for which $f_1|10\rangle=0$; this barrier lies far from the center of the distribution of the initial coherent state, as well as from most of its width.
The simulated parameter regime corresponds to the DSC regime with $g/\omega^{\rm R}=3.7$ and $\omega_0^{\rm R}=0$. This case could also be simulated with trapped ions with detunings of $\delta_r=2\pi\times11.31$kHz and $\delta_b=-2\pi\times11.31$kHz, and a Rabi frequency of $\Omega=2\pi\times 133.26$kHz. As for the corresponding case with initial Fock state $|0,\rm g\rangle$, the evolution of the NQRM in the coherent state case, depicted in Fig.~\ref{NonlinearCohProSta}, never exceeds the barrier. \section{Conclusions} We have proposed the implementation of nonlinear QRMs in arbitrary coupling regimes with trapped-ion analog quantum simulators. The nonlinear term that appears in our model is characteristic of the region beyond the Lamb-Dicke regime. This nonlinear term causes the blockade of motional propagation at $|n\rangle$, whenever $f_1(\hat{n})|n\rangle=0$. In order to compare our models with standard linear quantum Rabi models, we have plotted the evolution of the population of the internal degrees of freedom of the ion evolving under the linear JCM and the nonlinear JCM, and observed that for the latter the collapses and revivals disappear. Also, we have proposed a method for generating large Fock states in a dissipative manner, making use of the nonlinear anti-JCM and the spontaneous decay of the two-level system. Finally, we have studied the dynamics of the linear and nonlinear full QRM in the DSC regime and noticed that the nonlinear case can act as a motional filter. Our work sheds light on the field of nonlinear QRMs implemented with trapped ions, and suggests plausible applications. {\it Acknowledgements.}---The authors acknowledge support from NSFC (11474193), the Shuguang Program (14SG35), the Program for Eastern Scholar, the Basque Government with PhD grant PRE-2015-1-0394 and grant IT986-16, Ram\'{o}n y Cajal Grant RYC-2012-11391, MINECO/FEDER FIS2015-69983-P, and Chinese Scholarship Council (201506890077). \end{document}
\begin{document} \title{Generating permutations with a given major index} \begin{abstract} In [S. Effler, F. Ruskey, A CAT algorithm for listing permutations with a given number of inversions, {\it I.P.L.}, 86/2 (2003)] the authors give an algorithm, which appears to be CAT, for generating permutations with a given major index. In the present paper we give a new algorithm for generating a Gray code for subexcedant sequences. We show that this algorithm is CAT and derive from it a CAT generating algorithm for a Gray code for permutations with a given major index. \end{abstract} \section{Introduction} We present the first guaranteed constant average time generating algorithm for permutations with a fixed index. First we give a co-lex order generating algorithm for bounded compositions. Changing its generating order and specializing it to particular classes of compositions, we derive a generating algorithm for a Gray code for fixed-weight subexcedant sequences; after some improvements we obtain an efficient version of this last algorithm. The generated Gray code has the remarkable property that two consecutive sequences differ in at most three adjacent positions and by a bounded amount in these positions. Finally, applying a bijection introduced in \cite{Vaj_11} between subexcedant sequences and permutations with a given index, we derive the desired algorithm, where consecutive generated permutations differ by at most three transpositions. Often, Gray code generating algorithms can be re-expressed more simply as algorithms with the same time complexity that generate the same class of objects, but in a different ({\em e.g.}, lexicographical) order. This is not the case in our construction: the {\em Grayness} of the generated subexcedant sequences is critical in the construction of the efficient algorithm generating permutations with a fixed index.
A {\em statistic} on the set $\frak S_n$ of length $n$ permutations is an association of an element of $\mathbb{N}$ to each permutation in $\frak S_n$. For $\pi\in\frak S_n$ the {\em major index}, ${\scriptstyle \mathsf{MAJ}}$, is a statistic defined by (see, for example, \cite[Section 10.6]{Lothaire_83}) $$ \displaystyle {\scriptstyle \mathsf{MAJ}}\, \pi = \mathop{\sum_{1\leq i <n}}_{\pi_i>\pi_{i+1}} i. $$ \begin{De} For two integers $n$ and $k$, an {\it $n$-composition of $k$} is an $n$-sequence $\bsb{c}=c_1c_2\ldots c_n$ of non-negative integers with $\sum_{i=1}^n c_i=k$. For an $n$-sequence $\bsb{b}=b_1b_2\ldots b_n$, $\bsb{c}$ is said to be {\it $\bsb{b}$-bounded} if $0\leq c_i\leq b_i$ for all $i$, $1\leq i\leq n$. \end{De} In this context $b_1b_2\ldots b_n$ is called a {\it bounding sequence} and we will consider only bounding sequences with either $b_i>0$ or $b_i=b_{i-1}=\ldots =b_1=0$ for all $i$, $1\leq i\leq n$. Clearly, $b_i=0$ is equivalent to fixing $c_i=0$. We denote by $C(k,n)$ the set of all $n$-compositions of $k$, and by $C^{\bsb{b}}(k,n)$ the set of $\bsb{b}$-bounded $n$-compositions of $k$; and if $b_i\geq k$ for all $i$, then $C^{\bsb{b}}(k,n)=C(k,n)$. \begin{De} A {\it subexcedant sequence} $\bsb{c}=c_1c_2\ldots c_n$ is an $n$-sequence with $0\leq c_i\leq i-1$, for all $i$; and $\sum_{i=1}^nc_i$ is called the {\it weight} of $\bsb{c}$. \end{De} We denote by $S(k,n)$ the set of length $n$ and weight $k$ subexcedant sequences, and clearly $S(k,n)=C^{\bsb{b}}(k,n)$ with $\bsb{b}=0\,1\,2\,\ldots \,(n-1)$. \section{Generating fixed weight subexcedant sequences} We give three generating algorithms, and the third one efficiently generates combinatorial objects in bijection with permutations having a fixed index: \begin{itemize} \item {\tt Gen\_Colex} generates the set $C^{\bsb{b}}(k,n)$ of bounded compositions in co-lex order (defined later).
\item {\tt Gen1\_Gray}, which is obtained from {\tt Gen\_Colex} by: \begin{itemize} \item changing its generating order, and \item restricting it to the bounding sequence $\bsb{b}=01\ldots (n-1)$. \end{itemize} It produces a Gray code for the set $S(k,n)$, and it can be seen as the definition of this Gray code. \item {\tt Gen2\_Gray} is an efficient version of {\tt Gen1\_Gray}. \end{itemize} Finally, in Section~\ref{gen_perms}, regarding the subexcedant sequences in $S(k,n)$ as McMahon permutation codes (defined in Section~\ref{sec_Mc_code}), a constant average time generating algorithm for a Gray code for the set of permutations of length $n$ with major index equal to $k$ is obtained. \subsection{Algorithm {\tt Gen\_Colex}} This algorithm generates $C^{\bsb{b}}(k,n)$ in {\it co-lex order}, which is defined as: $c_1c_2\ldots c_n$ precedes $d_1d_2\ldots d_n$ in co-lex order if $c_nc_{n-1}\ldots c_1$ precedes $d_nd_{n-1}\ldots d_1$ in lexicographical order. Its worst-case time complexity is $O(k)$ per composition. For a set of bounded compositions $C^{\bsb{b}}(k,n)$, an {\it increasable position} (with respect to $C^{\bsb{b}}(k,n)$) in a sequence $c_1c_2\ldots c_n\notin C^{\bsb{b}}(k,n)$ is an index $i$ such that: \begin{itemize} \item $c_1=c_2=\ldots =c_{i-1}=0$, and \item there is a composition $d_1d_2\ldots d_n\in C^{\bsb{b}}(k,n)$ with $c_i<d_i$ and $c_{i+1}=d_{i+1}$, $c_{i+2}=d_{i+2}$, \dots, $c_n=d_n$. \end{itemize} For example, for $C^{01233}(3,5)$ the increasable positions are underlined in the following sequences: $0\,0\,\underline{0}\,\underline{1}\,0$ and $0\,\underline{0}\,2\,0\,0$. Indeed, the first two positions in $0\,0\,0\,1\,0$ are not increasable since there is no composition in $C^{01233}(3,5)$ with the suffix $0\,1\,0$; and the third position in $0\,0\,2\,0\,0$ is not increasable because $2$ is the maximal value in this position.
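The definition of an increasable position can be checked by brute force on small instances; the Python sketch below (helper names are ours, and it uses exhaustive enumeration rather than the paper's machinery) recovers the two examples above:

```python
from itertools import product

def compositions(k, b):
    # Brute-force C^b(k, n): all b-bounded n-compositions of k.
    return [c for c in product(*(range(x + 1) for x in b)) if sum(c) == k]

def increasable_positions(c, k, b):
    # Indices i (1-based) that are increasable in c with respect to C^b(k, n):
    # c_1 = ... = c_{i-1} = 0, and some d in C^b(k, n) has d_i > c_i and the
    # same suffix d_{i+1} ... d_n = c_{i+1} ... c_n.
    cs = compositions(k, b)
    res = []
    for i in range(len(c)):
        if any(c[j] != 0 for j in range(i)):   # nonzero prefix: no later
            break                              # position can be increasable
        if any(d[i] > c[i] and d[i + 1:] == c[i + 1:] for d in cs):
            res.append(i + 1)
    return res

# Examples from the text, for C^{01233}(3, 5):
b = (0, 1, 2, 3, 3)
print(increasable_positions((0, 0, 0, 1, 0), 3, b))  # [3, 4]
print(increasable_positions((0, 0, 2, 0, 0), 3, b))  # [2]
```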
Clearly, if $\ell<r$ are two increasable positions in $\bsb{c}$, then each $i$, $\ell<i<r$, is also an increasable position in $\bsb{c}$ (unless $b_i=0$). Here is the sketch of the co-lex order generating procedure for $C^{\bsb{b}}(k,n)$: \begin{itemize} \item[$\bullet$] initialize $\bsb{c}$ by the length $n$ sequence $0\,0\,\ldots\, 0$; \item[$\bullet$] for each increasable position $i$ in $\bsb{c}$, increase $c_i$ by one and call recursively the generating procedure if the obtained sequence $\bsb{c}$ is not a composition in $C^{\bsb{b}}(k,n)$, and output it otherwise. \end{itemize} The complexity of the obtained algorithm is $O(k)$ per generated composition, and so it is inefficient. Indeed, too many nodes in the generating tree induced by this algorithm have degree one. Algorithm {\tt Gen\_Colex} in Figure \ref{algo_colex} avoids some of these nodes. We will identify a node in a generating tree by the corresponding value of the sequence $\bsb{c}$; and a {\it redundant node} in the generating tree induced by the previously sketched algorithm is a node with a unique successor which differs in the same position from both its ancestor and its successor. For example, in Figure \ref{two_tree} (a) the redundant nodes are: $0\,0\,0\,1$, $0\,0\,0\,2$, $0\,0\,1\,3$, $0\,0\,2\,3$ and $0\,1\,3\,3$. These nodes occur when, for a given suffix, the smallest value allowed in an increasable position in the current sequence $\bsb{c}$ is not $1$, and this position is necessarily $\ell$, the leftmost increasable one. Algorithm {\tt Gen\_Colex} avoids redundant nodes by setting $c_{\ell}$ directly to its minimal value $e=k-\sum_{j=1}^{\ell-1}b_j$ (and $\sum_{j=1}^{i}b_j$ can be computed for each $i$, $1\leq i\leq n$, in a pre-processing step). For example, in Figure \ref{two_tree} (b) there are no redundant nodes.
However, in the generating tree induced by {\tt Gen\_Colex} there still remain arbitrary-length sequences of successive nodes with a unique successor; they are avoided in procedure {\tt Gen2\_Gray}. Algorithm {\tt Gen\_Colex} is given in Figure \ref{algo_colex}, where $\ell$ is the leftmost increasable position in the current sequence $\bsb{c}$ and $r$ is the leftmost non-zero position in $\bsb{c}$; thus the rightmost increasable position in $\bsb{c}$ is $r$ if $c_r<b_r$ and $r-1$ otherwise ($b_1b_2\ldots b_n$ being the bounding sequence). The main call is {\tt Gen\_Colex($k$,$n$)} and initially $\bsb{c}$ is $0\,0\,\ldots\, 0$. (As previously, in this algorithm the function $k\mapsto \min\{s\,|\,\sum_{j=1}^s b_j\geq k\}$ can be computed and stored in an array in a pre-processing step.) The induced generating tree for the call {\tt Gen\_Colex($4$,$5$)} is given in Figure \ref{fig_2_trees} (a). \begin{figure} \caption{ The path from the root $0\,0\,0\,0\,0$ to the composition $0\,1\,2\,3\,3\in C^{01234}(9,5)$: (a) before deleting redundant nodes (in boldface); and (b) in the generating tree induced by the call of {\tt Gen\_Colex($9,5$)} where redundant nodes are avoided. } \label{two_tree} \end{figure} \begin{figure} \caption{ Algorithm {\tt Gen\_Colex}.} \label{algo_colex} \end{figure} \begin{figure} \caption{ (a): The tree induced by the call of {\tt Gen\_Colex($4$,$5$)} with $\bsb{b}=0\,1\,2\,3\,4$, and (b): that induced by {\tt Gen1\_Gray($4$,$5$)}. Terminal nodes are in bold-face. } \label{fig_2_trees} \end{figure} \subsection{Algorithm {\tt Gen1\_Gray}} This algorithm is defined in Figure \ref{algos_Gen1_Gray} and is derived from {\tt Gen\_Colex}: the order of recursive calls is changed according to a direction (parameter $dir$), and it is specialized to bounding sequences $\bsb{b}=0\,1\,2\,\ldots\, (n-1)$, so that it produces subexcedant sequences. It has the same time complexity as {\tt Gen\_Colex} and we will show that it produces a Gray code.
The call of {\tt Gen1\_Gray} with $dir=0$ produces, in order, a recursive call with $dir=0$, then $r-\ell$ calls in the {\tt for} statement with $dir$ equal successively to: \begin{itemize} \item $0,1,\ldots,0,1$, if $r-\ell$ is even, and \item $1,0,\ldots,1,0,1$, if $r-\ell$ is odd. \end{itemize} In any case, the value of $dir$ corresponding to the last call is $1$. The call of {\tt Gen1\_Gray} with $dir=1$ produces the same operations as previously but in reverse order, and in each recursive call the value of $dir$ is replaced by $1-dir$. Thus, the call of {\tt Gen1\_Gray} with $dir=1$ produces, in order, $r-\ell$ calls in the {\tt for} statement with $dir$ equal alternately to $0,1,0,\ldots$, then a last call with $dir=1$. See Figure \ref{fig_2_trees} (b) for an example of the generating tree induced by this procedure. Let $\mathcal{S}(k,n)$ be the {\it ordered list} for $S(k,n)$ generated by the call {\tt Gen1\_Gray($k$,$n$,$0$)}; it is easy to see that $\mathcal{S}(k,n)$ is suffix partitioned, that is, sequences with the same suffix are contiguous; and Theorem \ref{main_th} shows that $\mathcal{S}(k,n)$ is a Gray code. For a sequence $\bsb{c}$, a $k\geq 1$ and $dir\in \{0,1\}$ we denote by $\mathrm{first}(k;dir;\bsb{c})$ and $\mathrm{last}(k;dir;\bsb{c})$ the first and last subexcedant sequences produced by the call of {\tt Gen1\_Gray$(k,r,dir)$} if the current sequence is $\bsb{c}$, where $r$ is the position of the leftmost non-zero value in $\bsb{c}$. In particular, if $\bsb{c}=0\,0\,\ldots\,0$, then $\mathrm{first}(k;0;\bsb{c})$ is the first sequence in $\mathcal{S}(k,n)$, and $\mathrm{last}(k;0;\bsb{c})$ the last one.
\begin{Rem}$ $ \label{rem_2_points} \begin{enumerate} \label{rev_01_rem_reverse} \item For a sequence $\bsb{c}$, the list produced by the call {\tt Gen1\_Gray$(k,r,0)$} is the reverse of the list produced by the call {\tt Gen1\_Gray$(k,r,1)$}, and with the previous notations we have \begin{eqnarray*} \mathrm{last}(k;dir;\bsb{c})=\mathrm{first}(k;1-dir;\bsb{c}), \end{eqnarray*} for $dir\in\{0,1\}$. \item Since the bounding sequence is $\bsb{b}=0\,1\,\ldots\, (n-1)$ it follows that, for $\bsb{c}=0\,0\,\ldots\, 0\,c_ic_{i+1}\ldots c_n$, $c_i\neq 0$, $\mathrm{first}(k;0;\bsb{c})$ is \begin{itemize} \item $a_1a_2\ldots a_{i-1}c_ic_{i+1}\ldots c_n$ if $k\leq\sum_{j=1}^{i-1}(j-1)=\frac{(i-1)\cdot(i-2)}{2}$, where $a_1a_2\ldots a_{i-1}$ is the smallest sequence, in co-lex order, in $S(k,i-1)$, \item $a_1a_2\ldots a_ic_{i+1}\ldots c_n$ if $k>\frac{(i-1)\cdot(i-2)}{2}$, where $a_1a_2\ldots a_i$ is the smallest sequence, in co-lex order, in $S(k+c_i,i)$. \end{itemize} \end{enumerate} \end{Rem} \begin{figure} \caption{ Algorithm {\tt Gen1\_Gray}, the Gray code counterpart of {\tt Gen\_Colex} specialized to subexcedant sequences.} \label{algos_Gen1_Gray} \end{figure} Now we introduce the notion of close sequences. Roughly speaking, two sequences are close if they differ in at most three adjacent positions and by a bounded amount in these positions. Definition \ref{3_tuple} below formally defines this notion, and Theorem \ref{main_th} shows that consecutive subexcedant sequences generated by {\tt Gen1\_Gray} are close. Let $\bsb{s}=s_1s_2\ldots s_n$ and $\bsb{t}=t_1t_2\ldots t_n$ be two subexcedant sequences of the same weight which differ in at most three adjacent positions, and let $p$ be the rightmost of them (notice that necessarily $p\geq 3$). The {\it difference} between $\bsb{s}$ and $\bsb{t}$ is the $3$-tuple $$ (a_1,a_2,a_3)=(s_{p-2}-t_{p-2},s_{p-1}-t_{p-1},s_p-t_p).
$$ Since $\bsb{s}$ and $\bsb{t}$ have the same weight it follows that $a_1+a_2+a_3=0$; and we denote by $-(a_1,a_2,a_3)$ the tuple $(-a_1,-a_2,-a_3)$. \begin{De} \label{3_tuple} Two sequences $\bsb{s}$ and $\bsb{t}$ in $S(k,n)$ are {\it close} if: \begin{itemize} \item $\bsb{s}$ and $\bsb{t}$ differ in at most three adjacent positions, and \item if $(a_1,a_2,a_3)$ is the difference between $\bsb{s}$ and $\bsb{t}$, then $$ (a_1,a_2,a_3)\in \{\pm(0,1,-1),\pm(0,2,-2),\pm(1,-2,1),\pm(1,-3,2),\pm(1,1,-2),\pm(1,0,-1)\}. $$ \end{itemize} \end{De} Even if the second point of this definition sounds somewhat arbitrary, it turns out that consecutive sequences generated by algorithm {\tt Gen1\_Gray} are close under this definition, and our generating algorithm for permutations with a given major index in Section \ref{gen_perms} is based on it. \begin{Exa} The following sequences are close: $0\underline{12}01$ and $0\underline{03}01$; $010\underline{03}$ and $010\underline{21}$; $0\underline{020}1$ and $0\underline{101}1$; $01\underline{132}$ and $01\underline{204}$; the positions where the sequences differ are underlined. In contrast, the following sequences are not close: $0\underline{0211}$ and $0\underline{1030}$ (they differ in more than three positions); $01\underline{201}$ and $01\underline{030}$ (their difference $3$-tuple is not among those specified). \end{Exa} \begin{Rem} \label{rem_inter} If $\bsb{s}$ and $\bsb{t}$ are two close subexcedant sequences in $S(k,n)$, then there are at most two `intermediate' subexcedant sequences $\bsb{s'}$, $\bsb{s''}$ in $S(k,n)$ such that the differences between $\bsb{s}$ and $\bsb{s'}$, between $\bsb{s'}$ and $\bsb{s''}$, and between $\bsb{s''}$ and $\bsb{t}$ are $\pm(1,-1,0)$. \end{Rem} \begin{Exa} \label{un_example} Let $\bsb{s}=0\,1\,0\,1\,1\,1$ and $\bsb{t}=0\,0\,2\,0\,1\,1$ be two sequences in $S(4,6)$.
Then $\bsb{s}$ and $\bsb{t}$ are close since their difference is $(1,-2,1)$, and there is one `intermediate' sequence $\bsb{s'}=0\,0\,1\,1\,1\,1$ in $S(4,6)$ such that \begin{itemize} \item the difference between $\bsb{s}$ and $\bsb{s'}$ is $(1,-1,0)$, and \item the difference between $\bsb{s'}$ and $\bsb{t}$ is $(-1,1,0)$. \end{itemize} \end{Exa} A consequence of Remark \ref{rev_01_rem_reverse}.2 is: \begin{Rem}$ $ \label{heredit} If $\bsb{s}$ and $\bsb{t}$ are close subexcedant sequences and $m$ is an integer such that both $\bsb{u}=\mathrm{first}(m;0;\bsb{s})$ and $\bsb{v}=\mathrm{first}(m;0;\bsb{t})$ exist, then $\bsb{u}$ and $\bsb{v}$ are also close. \end{Rem} \begin{The} \label{main_th} Two consecutive sequences in $S(k,n)$ generated by the algorithm {\tt Gen1\_Gray} are close. \end{The} \begin{proof} Let $\bsb{s}$ and $\bsb{t}$ be two consecutive sequences generated by the call of {\tt Gen1\_Gray($k$,$n$,$0$)}. Then there is a sequence $\bsb{c}=c_1c_2\ldots c_n$ and a recursive call of {\tt Gen1\_Gray} acting on $\bsb{c}$ (referred to later as the {\it root call} for $\bsb{s}$ and $\bsb{t}$) which produces, in the {\tt for} statement, two calls such that $\bsb{s}$ is the last sequence produced by the first of them and $\bsb{t}$ the first produced by the second of them. By Remark \ref{rev_01_rem_reverse}.1 it is enough to prove that $\bsb{s}$ and $\bsb{t}$ are close when their root call has direction $0$. Let $\ell$ and $r$, $\ell\neq r$, be the leftmost and the rightmost increasable positions in $\bsb{c}$ (and so $c_1=c_2=\ldots =c_{r-1}=0$, and possibly $c_r=0$); and let $i$ and $i+1$ be the positions where $\bsb{c}$ is modified by the root call in order to eventually produce $\bsb{s}$ and $\bsb{t}$. Also we denote $m=k-\sum_{j=1}^n c_j$ and $e=m-\frac{\ell\cdot (\ell-1)}{2}$. We will give the shape of $\bsb{s}$ and $\bsb{t}$ according to the following four cases.
\begin{enumerate} \item $i=\ell$ and $r-\ell$ is even, \item $i=\ell$ and $r-\ell$ is odd, \item $i\neq\ell$ and the call corresponding to $i$ in the {\tt for} statement of the root call has direction $0$ (and so that corresponding to $i+1$ has direction $1$), \item $i\neq\ell$ and the call corresponding to $i$ in the {\tt for} statement of the root call has direction $1$ (and so that corresponding to $i+1$ has direction $0$). \end{enumerate} \noindent Case 1. \begin{eqnarray*} \bsb{s} & = & \mathrm{last}(m-e;0;00\ldots e c_{\ell+1}\ldots c_n)\\ & = & \mathrm{first}(m-e;1;00\ldots e c_{\ell+1}\ldots c_n)\\ & = & \left\{ \begin {array}{lcc} \mathrm{first} (m-e-(\ell-2);0;00\ldots (\ell-2)ec_{\ell+1} \ldots c_n) & {\rm if} & e=\ell-1\\ \mathrm{first} (m-e-(\ell-2);0;00\ldots (\ell-3)(e+1)c_{\ell+1}\ldots c_n) & {\rm if} & e<\ell-1, \end {array} \right. \end{eqnarray*} and \begin{eqnarray*} \bsb{t} & = & \mathrm{first} (m-1;0;00\ldots (c_{\ell+1}+1)\ldots c_n)\\ & = & \mathrm{first} (m-e;0;00\ldots (e-1)(c_{\ell+1}+1)\ldots c_n)\\ & = & \mathrm{first} (m-e-(\ell-2);0;00\ldots (\ell-2)(e-1)(c_{\ell+1}+1)\ldots c_n). \end{eqnarray*} \noindent Case 2. In this case $\bsb{s}$ is the same as in the previous case and \begin{eqnarray*} \bsb{t} & = & \mathrm{first}(m-1;1;00\ldots 0(c_{\ell+1}+1)\ldots c_n) \\ & = & \left\{ \begin {array}{lcc} \mathrm{first} (m-2;0;00\ldots 0 (c_{\ell+1}+2)\ldots c_n) & {\rm if} & c_{\ell+1}+2\leq \ell\\ \mathrm{first} (m-e;0;00\ldots 0(e-1)(c_{\ell+1}+1)\ldots c_n) & {\rm if} & c_{\ell+1}+2>\ell \end {array} \right.\\ & = & \left\{ \begin {array}{lcc} \mathrm{first} (m-e-(\ell-2);0;00\ldots 0(\ell-2)(e-2)(c_{\ell+1}+2)\ldots c_n) & {\rm if} & c_{\ell+1}+2\leq \ell\\ \mathrm{first} (m-e-(\ell-2);0;00\ldots (\ell-2)(e-1)(c_{\ell+1}+1)\ldots c_n) & {\rm if} & c_{\ell+1}+2>\ell. \end {array} \right. \end{eqnarray*} \noindent Case 3. 
In this case $c_i=0$ and \begin{eqnarray*} \bsb{s} & = & \mathrm{last} (m-1;0;00\ldots 01c_{i+1}\ldots c_n)\\ & = & \mathrm{last} (m-2;1;00\ldots 02c_{i+1}\ldots c_n)\\ & = & \mathrm{first} (m-2;0;00\ldots 02c_{i+1}\ldots c_n), \end{eqnarray*} and \begin{eqnarray*} \bsb{t} & = & \mathrm{first} (m-1;1;00\ldots 0(c_{i+1}+1)\ldots c_n)\\ & = & \left\{ \begin {array}{lcc} \mathrm{first} (m-2;0;00\ldots 0(c_{i+1}+2)\ldots c_n) & {\rm if} & c_{i+1}+2\leq i\\ \mathrm{first} (m-2;0;00\ldots 1(c_{i+1}+1)\ldots c_n) & {\rm if} & c_{i+1}+2> i. \end {array} \right. \end{eqnarray*} \noindent Case 4. As previously, $c_i=0$ and \begin{eqnarray*} \bsb{s} & = & \mathrm{last} (m-1;1;00\ldots 01c_{i+1}\ldots c_n)\\ & = & \mathrm{first} (m-1;0;00\ldots 01c_{i+1}\ldots c_n), \end{eqnarray*} and $$\bsb{t}=\mathrm{first} (m-1;0;00\ldots 00(c_{i+1}+1)\ldots c_n). $$ Finally, by Remark \ref{heredit} it follows that in each of the four cases $\bsb{s}$ and $\bsb{t}$ are close, and the statement holds. \end{proof} As a byproduct of the previous theorem and Remark \ref{rem_2_points}.2 we have \begin{Rem} \label{boure} If $\bsb{s}=s_1s_2\ldots s_n$ and $\bsb{t}=t_1t_2\ldots t_n$ are two consecutive sequences generated by {\tt Gen1\_Gray} and $p$ is the rightmost position where they differ, then $s_1s_2\ldots s_{p-2}$ and $t_1t_2\ldots t_{p-2}$ are the smallest, in co-lex order, sequences in $S(x,p-2)$ and $S(y,p-2)$, respectively, with $x=s_1+s_2+\ldots +s_{p-2}$ and $y=t_1+t_2+\ldots +t_{p-2}$. Remark that $s_1s_2\ldots s_{p-2}=t_1t_2\ldots t_{p-2}$, and so $x=y$, if $\bsb{s}$ and $\bsb{t}$ differ in two (adjacent) positions. 
\end{Rem} \subsection{Algorithm {\tt Gen2\_Gray}} \begin{figure} \caption{ Four successive q-terminal nodes in the generating tree induced by the call {\tt Gen1\_Gray}(11,7,0) which generates the list $\mathcal{S}(11,7)$.} \label{q_terminal_n} \end{figure} Since the generating tree induced by the call of {\tt Gen1\_Gray} still contains arbitrary-length branches of nodes of degree one, it has poor time complexity. Here we show how some of these nodes can be avoided in order to obtain the efficient generating algorithm {\tt Gen2\_Gray} presented in Figure \ref{Algo_Gen2Gray}. A {\it quasi-terminal node} ({\it q-terminal node} for short) in the tree induced by a generating algorithm is defined recursively: a q-terminal node is either a terminal node (a node with no successor) or a node with only one successor which in turn is a q-terminal node. The q-terminal nodes occur for the calls of {\tt Gen1\_Gray($k,r,dir$)} when $k=\frac{r(r-1)}{2}$. See Figure~\ref{q_terminal_n} for an example. The key improvement made by {\tt Gen2\_Gray} consists in its last parameter $p$, which gives the rightmost position where the current sequence differs from its previous one in the list $\mathcal{S}(k,n)$; {\tt Gen2\_Gray} stops recursing after three successive q-terminal calls. Thus, {\tt Gen2\_Gray} generates only suffixes of the form $c_{p-2}c_{p-1}c_{p}\ldots c_n$; see Table \ref{list_pref} for an example. Since two consecutive sequences in the Gray code $\mathcal{S}(k,n)$ differ in at most three adjacent positions, these suffixes are enough to efficiently generate $\mathcal{S}(k,n)$, and to generate (in Section \ref{gen_perms}) a Gray code for the set of length $n$ permutations having major index equal to $k$. Now we explain how the parameter $p$ propagates through recursive calls. A non-terminal call of {\tt Gen2\_Gray} produces one or several calls.
The first of them (corresponding to a left child in the generating tree) inherits the value of the parameter $p$ from its parent call; in the other calls the value of this parameter is the rightmost position where the current sequence differs from the previously generated one; this value is $i$ if $dir=0$ and $i+1$ if $dir=1$. So, each call keeps in the last parameter $p$ the rightmost position where the current generated sequence differs from its previous one in the list ${\mathcal S}(k,n)$. Procedure {\tt Gen2\_Gray} avoids producing more than three successive q-terminal calls. For convenience, initially $p=0$. The last two parameters of procedure {\tt Gen2\_Gray}, $p$ and $u$, which are also output by it, are used by procedure {\tt Update\_Perm} in Section \ref{gen_perms} in order to generate permutations with a given major index; $u$ keeps the value of $c_1+c_2+\ldots +c_p$, and for convenience, initially $u=0$. Even though we will not make use of it later, we sketch below an algorithm for efficiently generating the list ${\mathcal S}(k,n)$: \begin{itemize} \item initialize $\bsb{d}$ by the first sequence in $\mathcal{S}(k,n)$, i.e., the smallest sequence in $S(k,n)$ in co-lex order, or equivalently, the largest one in lexicographical order, and $\bsb{c}$ by $0\,0\,\ldots\, 0$, \item run {\tt Gen2\_Gray($k,n,0,0,0)$} and for each $p$ output by it update $\bsb{d}$ as: $d[p-2]:=c[p-2]$, $d[p-1]:=c[p-1]$, $d[p]:=c[p]$.
\end{itemize} \begin{figure} \caption{ Algorithm {\tt Gen2\_Gray}.} \label{Algo_Gen2Gray} \end{figure} \begin{table} \begin{center} \begin{tabular}{|r|c|c||r|c|c|} \hline sequence & $p$ & permutation & sequence & $p$ & permutation\\ \hline $0\,1\,2\,1\,0\,0$ & & $2\,1\,4\,3\,5\,6$ & $0\,1\,\underline{0\,0\,1}\,2$ & $5$ & $5\,3\,6\,1\,2\,4$ \\ $0\,\underline{1\,0\,3}\,0\,0$ & $4$ & $3\,2\,4\,1\,5\,6$ & $\underline{0\,0\,1}\,0\,1\,2$ & $3$ & $6\,3\,5\,1\,2\,4$ \\ $\underline{0\,0\,1}\,3\,0\,0$ & $3$ & $4\,2\,3\,1\,5\,6$ & $0\,\underline{0\,0\,1}\,1\,2$ & $4$ & $1\,3\,5\,6\,2\,4$ \\ $0\,\underline{0\,2\,2}\,0\,0$ & $4$ & $4\,1\,3\,2\,5\,6$ & $0\,0\,\underline{0\,0\,2}\,2$ & $5$ & $2\,3\,5\,6\,1\,4$ \\ $\underline{0\,1\,1}\,2\,0\,0$ & $3$ & $3\,1\,4\,2\,5\,6$ & $0\,0\,0\,\underline{0\,0\,4}$ & $6$ & $3\,4\,5\,6\,1\,2$ \\ $0\,1\,\underline{2\,0\,1}\,0$ & $5$ & $2\,1\,5\,3\,4\,6$ & $0\,0\,0\,\underline{0\,1\,3}$ & $6$ & $2\,4\,5\,6\,1\,3$\\ $0\,\underline{1\,1\,1}\,1\,0$ & $4$ & $3\,1\,5\,2\,4\,6$ & $0\,0\,\underline{0\,1\,0}\,3$ & $5$ & $1\,4\,5\,6\,2\,3$ \\ $\underline{0\,0\,2}\,1\,1\,0$ & $3$ & $5\,1\,3\,2\,4\,6$ & $0\,\underline{0\,1\,0}\,0\,3$ & $4$ & $6\,4\,5\,1\,2\,3$ \\ $0\,\underline{0\,0\,3}\,1\,0$ & $4$ & $1\,2\,3\,5\,4\,6$ & $\underline{0\,1\,0}\,0\,0\,3$ & $3$ & $5\,4\,6\,1\,2\,3$ \\ $0\,\underline{0\,1\,2}\,1\,0$ & $4$ & $5\,2\,3\,1\,4\,6$ & $0\,1\,0\,\underline{0\,2\,1}$ & $6$ & $4\,3\,6\,1\,2\,5$ \\ $\underline{0\,1\,0}\,2\,1\,0$ & $3$ & $3\,2\,5\,1\,4\,6$ & $\underline{0\,0\,1}\,0\,2\,1$ & $3$ & $6\,3\,4\,1\,2\,5$ \\ $0\,1\,\underline{0\,0\,3}\,0$ & $5$ & $4\,3\,5\,1\,2\,6$ & $0\,\underline{0\,0\,1}\,2\,1$ & $4$ & $1\,3\,4\,6\,2\,5$ \\ $\underline{0\,0\,1}\,0\,3\,0$ & $3$ & $5\,3\,4\,1\,2\,6$ & $0\,0\,\underline{0\,0\,3}\,1$ & $5$ & $2\,3\,4\,6\,1\,5$ \\ $0\,\underline{0\,0\,1}\,3\,0$ & $4$ & $1\,3\,4\,5\,2\,6$ & $0\,0\,\underline{0\,2\,1}\,1$ & $5$ & $1\,2\,4\,6\,3\,5$ \\ $0\,0\,\underline{0\,0\,4}\,0$ & $5$ & $2\,3\,4\,5\,1\,6$ & 
$0\,\underline{0\,1\,1}\,1\,1$ & $4$ & $6\,2\,4\,1\,3\,5$ \\ $0\,0\,\underline{0\,2\,2}\,0$ & $5$ & $1\,2\,4\,5\,3\,6$ & $\underline{0\,1\,0}\,1\,1\,1$ & $3$ & $4\,2\,6\,1\,3\,5$ \\ $0\,\underline{0\,1\,1}\,2\,0$ & $4$ & $5\,2\,4\,1\,3\,6$ & $0\,\underline{0\,2\,0}\,1\,1$ & $4$ & $6\,1\,4\,2\,3\,5$ \\ $\underline{0\,1\,0}\,1\,2\,0$ & $3$ & $4\,2\,5\,1\,3\,6$ & $\underline{0\,1\,1}\,0\,1\,1$ & $3$ & $4\,1\,6\,2\,3\,5$\\ $0\,\underline{0\,2\,0}\,2\,0$ & $4$ & $5\,1\,4\,2\,3\,6$ & $0\,1\,\underline{1\,1\,0}\,1$ & $5$ & $3\,1\,6\,2\,4\,5$ \\ $\underline{0\,1\,1}\,0\,2\,0$ & $3$ & $4\,1\,5\,2\,3\,6$ & $\underline{0\,0\,2}\,1\,0\,1$ & $3$ & $6\,1\,3\,2\,4\,5$ \\ $0\,1\,1\,\underline{0\,0\,2}$ & $6$ & $5\,1\,6\,2\,3\,4$ & $0\,\underline{0\,0\,3}\,0\,1$ & $4$ & $1\,2\,3\,6\,4\,5$\\ $\underline{0\,0\,2}\,0\,0\,2$ & $3$ & $6\,1\,5\,2\,3\,4$ & $0\,\underline{0\,1\,2}\,0\,1$ & $4$ & $6\,2\,3\,1\,4\,5$\\ $0\,\underline{0\,0\,2}\,0\,2$ & $4$ & $1\,2\,5\,6\,3\,4$ & $\underline{0\,1\,0}\,2\,0\,1$ & $3$ & $3\,2\,6\,1\,4\,5$ \\ $0\,\underline{0\,1\,1}\,0\,2$ & $4$ & $6\,2\,5\,1\,3\,4$ & $0\,\underline{1\,2\,0}\,0\,1$ & $4$ & $2\,1\,6\,3\,4\,5$ \\ $\underline{0\,1\,0}\,1\,0\,2$ & $3$ & $5\,2\,6\,1\,3\,4$ & & & \\ \hline \end{tabular} \end{center} \caption{ \label{list_pref}The subexcedant sequences generated by the call of {\tt Gen1\_Gray($4,6,0$)} and their corresponding length $6$ permutations with major index equal to $4$; the descent set of these permutations is either $\{1,3\}$ or $\{4\}$. The three entries $c_{p-2}$, $c_{p-1}$, $c_p$ updated by the call of {\tt Gen2\_Gray($4,6,0,0,0$)} are underlined, where $p$ is the rightmost position where a subexcedant sequence differs from its predecessor.
} \end{table} \subsubsection*{Analysis of {\tt Gen2\_Gray}} For a call of {\tt Gen2\_Gray($k$,$r$,$dir$,$p$,$u$)} necessarily $k\leq\frac{r(r-1)}{2}$, and if $k>0$ and \begin{itemize} \item $k\leq \frac{(r-1)(r-2)}{2}$, then this call produces at least two recursive calls, \item $\frac{(r-1)(r-2)}{2}<k<\frac{r(r-1)}{2}$, then this call produces a unique recursive call (of the form {\tt Gen2\_Gray($k'$,$r$,$\cdot$,$\cdot$,$\cdot$)}, with $k'=k-\frac{(r-1)(r-2)}{2}$), which in turn produces two calls, \item $k=\frac{r(r-1)}{2}$, then this call is a q-terminal call. \end{itemize} Since the procedure {\tt Gen2\_Gray} stops after three successive q-terminal calls, with a slight modification of Ruskey and van Baronaigien's \cite{Roe_93} {\it `CAT'} principle (see also \cite{Rus_00}) it follows that {\tt Gen2\_Gray} runs in constant amortized time. \section{The McMahon code of a permutation} \label{sec_Mc_code} Here we present the bijection $\psi:S(n)\rightarrow \frak S_n$ introduced in \cite{Vaj_11}, where $S(n)$ denotes the set of all length $n$ subexcedant sequences; it has the following properties: \begin{itemize} \item the image through $\psi$ of $S(k,n)$ is the set of permutations in $\frak S_n$ with major index $k$, \item $\psi$ is a `Gray code preserving bijection' (see Theorem \ref{sigma_t_a}), \item if $\bsb{s}$ and $\bsb{t}$, the McMahon codes of $\sigma$ and $\tau$, are close, then $\tau$ is easily computed from $\sigma$ and from the difference between $\bsb{s}$ and $\bsb{t}$. \end{itemize} In the next section we apply $\psi$ in order to construct, from the Gray code list $\mathcal{S}(k,n)$, a list of the permutations in $\frak S_n$ with major index equal to $k$. Let permutations act on indices, i.e., for $\sigma=\sigma_1\,\sigma_2\, \ldots \,\sigma_n$ and $\tau=\tau_1\,\tau_2\, \ldots \,\tau_n$ two permutations in $\frak S_n$, $\sigma\cdot\tau=\sigma_{\tau_1}\,\sigma_{\tau_2}\, \ldots \,\sigma_{\tau_n}$.
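A one-line Python sketch (our notation, with permutations stored in one-line notation as lists) makes this action-on-indices convention unambiguous:

```python
def compose(sigma, tau):
    """The product sigma . tau when permutations act on indices:
    its j-th entry is sigma at position tau_j (entries are 1-based
    values, lists are 0-indexed)."""
    return [sigma[t - 1] for t in tau]
```

For instance, $\sigma=3\,1\,2$ and $\tau=2\,3\,1$ are mutually inverse under this convention: $\sigma\cdot\tau$ is the identity $1\,2\,3$.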
For a fixed integer $n$, let $k$ and $u$ be two integers, $0\leq k<u\leq n$, and define $[\hspace{-0.3mm}[ u,k ]\hspace{-0.3mm}]\in\frak S_n$ as the permutation obtained after $k$ right circular shifts of the length-$u$ prefix of the identity in $\frak S_n$. In two line notation $$ [\hspace{-0.3mm}[ u,k ]\hspace{-0.3mm}]= \left( \begin{array}{cccccccccc} 1 & 2 & \cdots & k & k+1 & \cdots & u & u+1 &\cdots & n \\ u-k+1 & u-k+2 & \cdots & u & 1 &\cdots & u-k & u+1 &\cdots & n \end{array} \right). $$ For example, in $\frak S_5$ we have: $[\hspace{-0.3mm}[ 3,1]\hspace{-0.3mm}]=\underline{3\,1\,2}\,4\,5$, $[\hspace{-0.3mm}[ 3,2]\hspace{-0.3mm}]=\underline{2\,3\,1}\,4\,5$ and $[\hspace{-0.3mm}[ 5,3]\hspace{-0.3mm}]=\underline{3\,4\,5\,1\,2}$ (the rotated elements are underlined). Let $\psi:S(n)\rightarrow \frak S_n $ be the function defined by \begin{equation} \label{def_psi} \begin{array}{ccl} \psi(t_1t_2\ldots t_n) & = & [\hspace{-0.3mm}[ n,t_n]\hspace{-0.3mm}]\cdot [\hspace{-0.3mm}[ n-1,t_{n-1}]\hspace{-0.3mm}]\cdot\ldots\cdot [\hspace{-0.3mm}[ i,t_i]\hspace{-0.3mm}]\cdot \ldots \cdot[\hspace{-0.3mm}[ 2,t_2]\hspace{-0.3mm}]\cdot [\hspace{-0.3mm}[ 1,t_1 ]\hspace{-0.3mm}] \\ & = & \displaystyle \prod_{i=n}^1[\hspace{-0.3mm}[ i,t_i]\hspace{-0.3mm}]. \end{array} \end{equation} \begin{Lem}[\cite{Vaj_11}]$ $ \begin{enumerate} \item The function $\psi$ defined above is a bijection. \item For every $\bsb{t}=t_1t_2\ldots t_n\in S(n)$, we have ${\scriptstyle \mathsf{MAJ}} \prod_{i=n}^1[\hspace{-0.3mm}[ i,t_i]\hspace{-0.3mm}]=\sum_{i=1}^nt_i$. \end{enumerate} \end{Lem} The first point of the previous lemma says that every permutation $\pi\in\frak S_n$ can be uniquely written as $\prod_{i=n}^1[\hspace{-0.3mm}[ i,t_i]\hspace{-0.3mm}]$ for some $t_i$'s, and the subexcedant sequence $t_1t_2\ldots t_n$ is called the {\it McMahon code} of $\pi$. 
As a consequence of the second point of this lemma we have: \begin{Rem} The restriction of $\psi$ to $S(k,n)$ is a bijection onto the set of permutations in $\frak S_n$ with major index equal to $k$. \end{Rem} \begin{Exa} The permutation $\pi =5\,2\,1\,6\,4\,3\in \frak S_6$ can be obtained from the identity by the following prefix rotations: $$1\,2\,3\,4\,5\,6 \overset{[\hspace{-0.3mm}[ 6,3 ]\hspace{-0.3mm}]}{\longrightarrow} 4\,5\,6\,1\,2\,3 \overset{[\hspace{-0.3mm}[ 5,4 ]\hspace{-0.3mm}]}{\longrightarrow} 5\,6\,1\,2\,4\,3 \overset{[\hspace{-0.3mm}[ 4,2 ]\hspace{-0.3mm}]}{\longrightarrow} 1\,2\,5\,6\,4\,3 \overset{[\hspace{-0.3mm}[ 3,2 ]\hspace{-0.3mm}]}{\longrightarrow} 2\,5\,1\,6\,4\,3 \overset{[\hspace{-0.3mm}[ 2,1 ]\hspace{-0.3mm}]}{\longrightarrow} 5\,2\,1\,6\,4\,3 \overset{[\hspace{-0.3mm}[ 1,0 ]\hspace{-0.3mm}]}{\longrightarrow} 5\,2\,1\,6\,4\,3, $$ so $$ \pi= [\hspace{-0.3mm}[ 6,3]\hspace{-0.3mm}]\cdot[\hspace{-0.3mm}[ 5,4 ]\hspace{-0.3mm}]\cdot[\hspace{-0.3mm}[ 4,2 ]\hspace{-0.3mm}]\cdot[\hspace{-0.3mm}[ 3,2 ]\hspace{-0.3mm}]\cdot[\hspace{-0.3mm}[ 2,1 ]\hspace{-0.3mm}]\cdot[\hspace{-0.3mm}[ 1,0 ]\hspace{-0.3mm}],$$ and thus $$ {\scriptstyle \mathsf{MAJ}}\ \pi =3+4+2+2+1+0=12. $$ \end{Exa} Theorem \ref{sigma_t_a} below states that if the McMahon codes of two permutations differ in two adjacent positions, by $1$ and $-1$ in these positions, then these permutations differ by the transposition of two entries. Before proving this theorem we need the following two propositions, where the transposition $\langle u, v\rangle$ denotes the permutation $\pi$ (of convenient length) with $\pi(i)=i$ for all $i$, except $\pi(u)=v$ and $\pi(v)=u$.
\begin{Pro} \label{first_trans} Let $n$, $u$ and $v$ be three integers, $n\geq 3$, $0\leq u\leq n-2$, $1\leq v\leq n-2$, and $\sigma,\tau\in\frak S_n$ defined by: \begin{itemize} \item $\sigma= [\hspace{-0.3mm}[ n,u]\hspace{-0.3mm}]\ \cdot [\hspace{-0.3mm}[ n-1,v]\hspace{-0.3mm}]$, and \item $\tau = [\hspace{-0.3mm}[ n,u+1]\hspace{-0.3mm}] \cdot [\hspace{-0.3mm}[ n-1,v-1]\hspace{-0.3mm}]$. \end{itemize} Then $$ \tau=\sigma\cdot\langle n, v\rangle. $$ \end{Pro} \begin{proof} First, remark that: \begin{itemize} \item $[\hspace{-0.3mm}[ n,u+1]\hspace{-0.3mm}]$ is a right circular shift of $[\hspace{-0.3mm}[ n,u]\hspace{-0.3mm}]$, and \item $[\hspace{-0.3mm}[ n-1,v-1]\hspace{-0.3mm}]$ is a left circular shift of the first $(n-1)$ entries of $[\hspace{-0.3mm}[ n-1,v]\hspace{-0.3mm}]$, \end{itemize} and so $\sigma(i)=\tau(i)$ for all $i$, $1\leq i\leq n$, except for $i=n$ and $i=v$. \end{proof} \begin{Exa} For $n=7$, $u=4$ and $v=3$ we have \begin{itemize} \item $\sigma=[\hspace{-0.3mm}[ n,u ]\hspace{-0.3mm}]\cdot [\hspace{-0.3mm}[ n-1,v ]\hspace{-0.3mm}]=[\hspace{-0.3mm}[ 7,4]\hspace{-0.3mm}]\cdot [\hspace{-0.3mm}[ 6,3]\hspace{-0.3mm}]=7\,1\,2\,4\,5\,6\,3$, \item $\tau= [\hspace{-0.3mm}[ n,u+1]\hspace{-0.3mm}]\cdot [\hspace{-0.3mm}[ n-1,v-1]\hspace{-0.3mm}]=[\hspace{-0.3mm}[ 7,5]\hspace{-0.3mm}]\cdot [\hspace{-0.3mm}[ 6,2]\hspace{-0.3mm}]= 7\,1\,3\,4\,5\,6\,2$, \item $\langle n,v\rangle=\langle 7,3\rangle$, \end{itemize} and $\tau=\sigma\cdot \langle n,v\rangle$. \end{Exa} \begin{Pro} \label{before_th} If $\pi\in\frak S_n$ and $\langle u,v\rangle$ is a transposition in $\frak S_n$, then $$\pi^{-1}\cdot \langle u,v\rangle\cdot\pi= \langle\pi^{-1}(u),\pi^{-1}(v)\rangle.$$ \end{Pro} \begin{proof} Indeed, $(\pi^{-1}\cdot \langle u,v\rangle\cdot\pi)(i)=i$, for all $i$, except for $i=\pi^{-1}(u)$ and $i=\pi^{-1}(v)$.
\end{proof} \begin{The} \label{sigma_t_a} Let $\sigma$ and $\tau$ be two permutations in $\frak S_n $, $n\geq 3$, and $\bsb{s}=s_1s_2\ldots s_n$ and $\bsb{t}=t_1t_2\ldots t_n$ their McMahon codes. If there is an $f$, $2\leq f\leq n-1$, such that $t_i=s_i$ for all $i$, except $t_f=s_f-1$ and $t_{f+1}=s_{f+1}+1$, then $\tau$ and $\sigma$ differ by a transposition. More precisely, $$ \tau=\sigma \cdot \langle \alpha^{-1}(u), \alpha^{-1}(v)\rangle $$ where $$ \alpha=\prod_{i=f-1}^{1}[\hspace{-0.3mm}[ i,s_i ]\hspace{-0.3mm}]=\prod_{i=f-1}^{1}[\hspace{-0.3mm}[ i,t_i ]\hspace{-0.3mm}], $$ and $u=f+1$, $v=s_f$. \end{The} \begin{proof} $ $ \begin{itemize} \item $\tau=\prod_{i=n}^{1}[\hspace{-0.3mm}[ i,t_i ]\hspace{-0.3mm}]$, and so $\tau\cdot\alpha^{-1}=\prod_{i=n}^{f}[\hspace{-0.3mm}[ i,t_i ]\hspace{-0.3mm}]$, and \item $\sigma=\prod_{i=n}^{1}[\hspace{-0.3mm}[ i,s_i ]\hspace{-0.3mm}]$, and $\sigma\cdot\alpha^{-1}=\prod_{i=n}^{f}[\hspace{-0.3mm}[ i,s_i ]\hspace{-0.3mm}]$. \end{itemize} But, by Proposition \ref{first_trans}, $$ \prod_{i=n}^{f}[\hspace{-0.3mm}[ i,t_i ]\hspace{-0.3mm}]= \prod_{i=n}^{f}[\hspace{-0.3mm}[ i,s_i ]\hspace{-0.3mm}]\cdot \langle f+1,s_f\rangle $$ or, equivalently, $$ \tau\cdot\alpha^{-1}=\sigma\cdot\alpha^{-1}\cdot\langle f+1,s_f\rangle, $$ and by Proposition \ref{before_th}, the result holds. \end{proof} The previous theorem says that $\sigma$ and $\tau$ `have a small difference' provided that their McMahon codes, $\bsb{s}$ and $\bsb{t}$, do so. Actually, we need that $\bsb{s}$ and $\bsb{t}$ be consecutive sequences in the list $\mathcal{S}(k,n)$ and that they have a more particular shape (see Remark \ref{boure}). In this context, permutations having minimal McMahon code play a particular role. It is routine to check the following proposition (see Figure \ref{Fig_3} for an example).
\begin{Pro} \label{alpha_n_k} Let $n$ and $k$ be two integers, $0<k\leq\frac{n(n-1)}{2}$; let $\bsb{a}=a_1a_2\ldots a_n$ be the smallest subexcedant sequence in co-lex order with $\sum_{i=1}^n a_i=k$, and let $\alpha=\alpha_{n,k}=\psi(\bsb{a})$ be the permutation in $\frak S_n$ having McMahon code $\bsb{a}$. Let $j=\max\, \{ i : a_i\neq 0\}$, that is, $\bsb{a}$ has the form $$ 012\ldots (j-3)(j-2)a_j00\ldots 0.$$ Then \begin{equation} \label{def_alpha} \alpha(i)=\left\{ \begin {array}{ccc} j-a_j-i & {\rm if} & 1\leq i\leq j-(a_j+1), \\ 2j-a_j-i & {\rm if} & j-(a_j+1)<i\leq j, \\ i & {\rm if} & i>j. \end {array} \right. \end{equation} \end{Pro} \begin{figure} \caption{ The permutation $\alpha=2\,1\,6\,5\,4\,3\,7\,8$ with the McMahon code $\bsb{a}=0\,1\,2\,3\,4\,3\,0\,0$, the smallest, in co-lex order, subexcedant sequence in $S(13,8)$; see Proposition \ref{alpha_n_k}.} \label{Fig_3} \end{figure} \begin{Rem} \label{rem_inv} The permutation $\alpha$ defined in Proposition \ref{alpha_n_k} is an involution, that is, $\alpha^{-1}=\alpha$. \end{Rem} Combining Proposition \ref{alpha_n_k} and Remark \ref{rem_inv}, Theorem \ref{sigma_t_a} becomes, in particular: \begin{Pro} \label{combi} Let $\sigma$, $\tau$, $\bsb{s}$ and $\bsb{t}$ be as in Theorem \ref{sigma_t_a}. In addition, suppose that there is a $j$, $0\leq j\leq f-1$, such that \begin{itemize} \item[1.] $s_i=t_i=0$ for $j<i\leq f-1$, and \item[2.] if $j>0$, then \begin{itemize} \item $s_j=t_j\neq 0$, and \item $s_i=t_i=i-1$ for $1\leq i<j$. \end{itemize} \end{itemize} Then $$ \tau=\sigma \cdot \langle \phi_j(f+1), \phi_j(s_f)\rangle $$ with \begin{equation} \label{phi} \phi_j(i)=\left\{ \begin {array}{ccc} j-s_j-i & {\rm if} & 1\leq i\leq j-(s_j+1), \\ 2j-s_j-i & {\rm if} & j-(s_j+1)<i\leq j, \\ i & {\rm if} & i>j. \end {array} \right.
\end{equation} \end{Pro} \noindent Notice that conditions 1 and 2 in the previous proposition require that $s_1s_2\ldots s_{f-1}=t_1t_2\ldots t_{f-1}$ be the smallest subexcedant sequence, in co-lex order, in $S(f-1)$ with fixed value for $\sum_{i=1}^{f-1}s_i=\sum_{i=1}^{f-1}t_i$. Also, for point 2, necessarily $j\geq 2$. \section{Generating permutations with a given major index} \label{gen_perms} Let $\sigma$ and $\tau$ be two permutations with McMahon codes $\bsb{s}=s_1s_2\ldots s_n$ and $\bsb{t}=t_1t_2\ldots t_n$ belonging to $S(k,n)$, and differing only in positions $f$ and $f+1$, by $1$ and $-1$ in these positions. Let \begin{itemize} \item $v=s_f-t_f\in \{-1,1\}$, and \item $x=\sum_{i=1}^{f-1}s_i=\sum_{i=1}^{f-1}t_i$. \end{itemize} If $s_1s_2\ldots s_{f-1}$ is the smallest sequence in $S(x,f-1)$, in co-lex order, then applying Proposition \ref{combi} it follows that the run of the procedure {\tt transp($v,f,x$)} defined in Figure~\ref{algos_transp} transforms $\sigma$ into $\tau$ and $\bsb{s}$ into $\bsb{t}$. \begin{figure} \caption{ Algorithm {\tt transp}, where $\phi_j$ is defined in relation (\ref{phi}).} \label{algos_transp} \end{figure} Now let $f$ be the leftmost position where two consecutive sequences $\bsb{s}$ and $\bsb{t}$ in the list $\mathcal{S}(k,n)$ differ, and $\sigma$ and $\tau$ be the permutations having McMahon codes $\bsb{s}$ and $\bsb{t}$. By Remarks \ref{rem_inter} and \ref{boure} we have that repeated calls of {\tt transp} transform $\bsb{s}$ into $\bsb{t}$, and $\sigma$ into $\tau$. This is true for each possible $3$-tuple given in Definition~\ref{3_tuple} corresponding to two consecutive subexcedant sequences in $\mathcal{S}(k,n)$, and algorithm {\tt Update\_Perm} in Figure \ref{algos_update} exhausts all these $3$-tuples.
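The closed form (\ref{def_alpha}) for $\alpha$ (equivalently, for $\phi_j$) can be checked mechanically. The following sketch is ours, not taken from the paper's figures: it builds the smallest co-lex subexcedant sequence with a given sum, applies the piecewise formula, and confirms that the result is an involution (Remark \ref{rem_inv}) whose major index equals the sum of its code.

```python
def smallest_subexcedant(n, k):
    """Smallest sequence (in co-lex order) with 0 <= a_i <= i-1 and sum k.
    Greedily pushing the weight to the left yields the shape
    0 1 2 ... (j-2) a_j 0 ... 0 described in Proposition alpha_n_k."""
    a, rem = [0] * n, k
    for i in range(1, n + 1):
        a[i - 1] = min(i - 1, rem)
        rem -= a[i - 1]
    assert rem == 0, "k exceeds n(n-1)/2"
    return a

def alpha_from_code(a):
    """The permutation of Proposition alpha_n_k, via formula (def_alpha)."""
    n = len(a)
    if not any(a):
        return list(range(1, n + 1))          # k = 0: the identity
    j = max(i for i in range(1, n + 1) if a[i - 1] != 0)
    aj = a[j - 1]
    alpha = []
    for i in range(1, n + 1):
        if i <= j - (aj + 1):
            alpha.append(j - aj - i)          # first reversed block
        elif i <= j:
            alpha.append(2 * j - aj - i)      # second reversed block
        else:
            alpha.append(i)                   # fixed points above j
    return alpha

def maj(p):
    """Major index: sum of the descent positions of p."""
    return sum(i for i in range(1, len(p)) if p[i - 1] > p[i])
```

For instance, `alpha_from_code(smallest_subexcedant(8, 13))` returns `[2, 1, 6, 5, 4, 3, 7, 8]`, the permutation of Figure \ref{Fig_3}, with major index $1+3+4+5=13$.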
For example, if $\bsb{s}$ and $\bsb{t}$ are the two sequences in Example \ref{un_example} with their difference $(1,-2,1)$, $f=2$ and $x=0$, then the calls {\tt transp($1,f,x$)}; \\ \indent {\tt transp($-1,f+1,x+s[f]$)};\\ \noindent transform $\bsb{s}$ into $\bsb{t}$ and $\sigma$ into $\tau$. Algorithm {\tt Gen2\_Gray} provides $p$, the rightmost position where the current sequence $\bsb{c}$ differs from the previously generated one, and $u=\sum_{i=1}^p c_i$. Algorithm {\tt Update\_Perm} uses $f$, the leftmost position where $\bsb{c}$ differs from the previously generated sequence, and $x=\sum_{i=1}^{f-1}c_i$. \begin{figure} \caption{ Algorithm {\tt Update\_Perm}.} \label{algos_update} \end{figure} Now we sketch the generating algorithm for the set of permutations in $\frak S_n$ having major index $k$. \begin{itemize} \item initialize $\bsb{s}$ by the smallest, in co-lex order, sequence in $S(k,n)$ and $\sigma$ by the permutation in $\frak S_n$ having McMahon code $\bsb{s}$, \item run {\tt Gen2\_Gray($k,n,0,0,0$)} where {\tt output($p,u$)} is replaced by {\tt Update\_Perm($p,u$)}. \end{itemize} The obtained list of permutations is the image of the Gray code $\mathcal{S}(k,n)$ through the bijection $\psi$ defined in relation (\ref{def_psi}); it consists of all permutations in $\frak S_n$ with major index equal to $k$, and two consecutive permutations differ by at most three transpositions. See Table \ref{list_pref} for the list of permutations in $\frak S_6$ with major index $4$. \section{Final remarks} \label{Conc} Numerical evidence shows that if we change the generating order of algorithm {\tt Gen\_Colex} as for {\tt Gen1\_Gray}, but without restricting it to subexcedant sequences, then the obtained list for bounded compositions is still a Gray code with the closeness definition slightly relaxed: two consecutive compositions differ in at most four adjacent positions. Also, T.
Walsh gives in \cite{Walsh} an efficient generating algorithm for a Gray code for bounded compositions of an integer, and in particular for subexcedant sequences. In this Gray code two consecutive sequences differ in two positions, by $1$ and $-1$ in these positions; but these positions can be arbitrarily far apart, and so the image of this Gray code through the bijection $\psi$ defined by relation (\ref{def_psi}) in Section \ref{sec_Mc_code} does not give a Gray code for permutations with a fixed major index. \end{document}
\begin{document} \title{\scshape A short proof that every finite graph has a tree-decomposition displaying its tangles} \begin{abstract} We give a short proof that every finite graph (or matroid) has a tree-decomposition that displays all maximal tangles. This theorem for graphs is a central result of the graph minors project of Robertson and Seymour and the extension to matroids is due to Geelen, Gerards and Whittle. \end{abstract} \section{Introduction} Robertson and Seymour \cite{GMX} proved as a cornerstone of their graph minors project: \begin{thm}[rough version]\label{RS:tangle_tree} Every graph\footnote{In this paper all graphs and matroids are finite.} has a tree-decomposition whose separations distinguish all maximal tangles. \end{thm} Additionally, it can be ensured that this tree-decomposition separates the tangles in a `minimal way'. This theorem was extended to matroids by Geelen, Gerards and Whittle \cite{GGW:tangles_in_matroids}. Here we give a short proof of both of these results. A key idea is that we prove the following strengthening: \begin{thm}[rough version of \autoref{tangle-tree_strong} below]\label{thm_intro} Any tree-decomposition such that each of its separations distinguishes two tangles in a minimal way can be extended to a tree-decomposition that distinguishes any two maximal tangles in a minimal way. \end{thm} Our new proof does not yield the strengthening of \autoref{RS:tangle_tree} proved in \cite{CDHH:profiles}. However, it can be extended from tangles to profiles, compare \autoref{profile_rem}. For tree-decompositions as in \autoref{RS:tangle_tree} that additionally have as few parts as possible see \autoref{tangle_tree_cor}. \section{Notation} Throughout we fix a finite set $E$. A \emph{separation} is a bipartition $(A,B)$ of $E$, and $A$ and $B$ are called the \emph{sides} of $(A,B)$.
A function $f$ mapping subsets of $E$ to the integers is \emph{symmetric} if $f(X)=f(X^\complement)$ for every $X\subseteq E$, and it is \emph{submodular} if $f(X)+f(Y)\geq f(X\cap Y) +f(X\cup Y)$ for every $X,Y\subseteq E$. Throughout we fix such a function $f$. Since $f$ is symmetric, it induces a function $o$ on the separations: $o(A,B)=f(A)=f(B)$, which we call the \emph{order} of a separation.\footnote{For the sake of readability, we write $o(A,B)$ instead of $o((A,B))$.} Since $f$ is submodular, $o$ satisfies: \begin{equation}\label{lemma1_xyz} o(A,B)+o(C,D)\geq o(A\cap C, B\cup D)+ o(A\cup C, B\cap D) \end{equation} For example, one can take for $E$ the edge set of a matroid and for $f$ its connectivity function. Or one can take for $E$ the edge set of a graph, where the order of a separation $(A,B)$ is the number of vertices incident with edges from both $A$ and $B$. A \emph{tangle} of order $k+1$ picks a \emph{small} side of each separation $(A,B)$ of order at most $k$ such that no three small sides cover $E$. Moreover, the complement of a single element of $E$ is never small.\footnote{This `moreover'-property is never used in our proofs and thus the results are also true for a slightly bigger class. However, the new objects are trivial.} In particular, if $A$ is small, then its complement $B$ cannot be included in a small set and we say that $B$ is \emph{big}. Thus a tangle can be thought of as pointing towards a highly connected piece, which `lies' on the big side of every separation of low order. In this spirit, we shall also say that a tangle $\Tcal$ \emph{orients} a separation $(A,B)$ towards $B$ if $B$ is big in $\Tcal$. A tangle is \emph{maximal} if it is not included in any other tangle (of higher order). A separation $(A,B)$ \emph{distinguishes} two tangles if these tangles pick different small sides for $(A,B)$. It distinguishes them \emph{efficiently} if it has minimal order amongst all separations distinguishing these two tangles.
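The graph instance of this framework is easy to test exhaustively. The sketch below (helper names are ours) computes the order function on the edge set of $K_4$; for all pairs of edge bipartitions it confirms symmetry and the submodular inequality behind (\ref{lemma1_xyz}).

```python
from itertools import combinations

def order(X, E):
    """Order of the separation (X, E \\ X): the number of vertices
    incident with edges from both sides of the bipartition."""
    touched = lambda F: {v for e in F for v in e}
    return len(touched(X) & touched(E - X))

# E = edge set of K4, the complete graph on vertices {0, 1, 2, 3}
E = frozenset(frozenset(e) for e in combinations(range(4), 2))
edges = list(E)
# all 2^6 subsets of E, encoded via bitmasks over the edge list
subsets = [frozenset(edges[i] for i in range(len(edges)) if m >> i & 1)
           for m in range(1 << len(edges))]
```

For example, the separation that puts the single edge $\{0,1\}$ on one side has order $2$: both of its endpoints are also incident with edges on the other side.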
A \emph{tree-decomposition} consists of a tree $T$ and a partition $(P_t|t\in V(T))$ of $E$ consisting of one (possibly empty) partition class for every vertex of $T$. For $X\subseteq V(T)$, we let $S(X)=\bigcup_{t\in X} P_t$. There are two separations \emph{corresponding} to each edge $e$ of $T$, namely $(S(X), S(Y))$ and $(S(Y), S(X))$. Here $X$ and $Y$ are the two components of $T-e$. We say that a tree-decomposition \emph{distinguishes two tangles efficiently} if there is a separation corresponding to an edge of the decomposition-tree distinguishing these tangles efficiently. The following implies \autoref{RS:tangle_tree} and its matroid counterpart mentioned in the Introduction if we plug in the particular choices for the order function mentioned above.\footnote{In \cite{GMX}, the authors use a slightly different notion of separation for graphs. From a separation $(A,B)$ in the sense of this paper, the corresponding separation in their setting is $(V(A), V(B))$, where $V(X)$ denotes the set of vertices incident with edges from $X$. However, it is well-known that these two notions of separations give rise to the same notion of tangle and so \autoref{tangle_tree} implies their version. } \begin{thm}\label{tangle_tree} Let $E$ be a finite set with an order function. Then there is a tree-decomposition distinguishing any two maximal tangles efficiently. \end{thm} Two separations $(A_1,A_2)$ and $(B_1,B_2)$ are \emph{nested}\footnote{Other authors use \emph{laminar} instead.} if $A_i\subseteq B_j$ for some pair $(i,j)\in \{1,2\}\times \{1,2\}$. A set of separations is \emph{nested} if any two separations in the set are nested. A set of separations $N$ is \emph{symmetric} if $(A,B)\in N$ if and only if $(B,A)\in N$. Note that any nested set $N$ is contained in a nested symmetric set, which consists of those separations $(A,B)$ such that $(A,B)$ or $(B,A)$ is in $N$. 
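Nestedness of separations is convenient to encode with bitmasks, and small ground sets can be checked exhaustively. The sketch below (the encoding and names are ours) does this for $|E|=5$ and verifies, by brute force, the corner-separation fact proved as \autoref{lemma2_xyz} in the next section.

```python
N_ELEMS = 5
FULL = (1 << N_ELEMS) - 1                  # the ground set E as a bitmask

def nested(a, b):
    """(a, E\\a) and (b, E\\b) are nested iff some side of the first
    separation is contained in some side of the second."""
    sub = lambda x, y: x & (FULL ^ y) == 0     # x is a subset of y
    ca, cb = FULL ^ a, FULL ^ b
    return sub(a, b) or sub(a, cb) or sub(ca, b) or sub(ca, cb)

def corner_stays_nested():
    """Whenever (A,B) and (C,D) cross and (E,F) is nested with both,
    the corner separation (A cap C, B cup D) is nested with (E,F)."""
    for a in range(FULL + 1):
        for c in range(FULL + 1):
            if nested(a, c):
                continue
            for e in range(FULL + 1):
                if nested(e, a) and nested(e, c) and not nested(a & c, e):
                    return False
    return True
```

Since a separation and its corner share the complement relation with $E\setminus(A\cap C)=B\cup D$, checking the side $A\cap C$ against one side of $(E,F)$ suffices.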
It is clear that: \begin{rem} Given a tree-decomposition, the set of separations corresponding to the edges of the decomposition-tree is nested and symmetric. \end{rem} The converse is also true: \begin{lem}[\cite{GGW:tangles_in_matroids}] \label{nested_to_td} For every nested symmetric set $N$ of separations, there is a tree-decomposition such that the separations corresponding to edges of the decomposition-tree are precisely those in $N$. \end{lem} Hence to prove \autoref{tangle_tree}, it is enough to construct a suitable nested set of separations. In the old proofs of \cite{GMX} or \cite{GGW:tangles_in_matroids}, the concept of robust separations was introduced in order to find such a set of separations. We show that basically any nested set of separations works -- as long as it does not contain any useless separations and is maximal with this property: \begin{thm}\label{tangle-tree_strong} Let $N$ be any maximal nested set of separations such that each separation in $N$ distinguishes some two tangles efficiently. Then any two maximal tangles are distinguished efficiently by some separation in $N$. \end{thm} Since \autoref{tangle-tree_strong} implies \autoref{tangle_tree}, the next section is dedicated to the proof of \autoref{tangle-tree_strong}. \section{Proof of \autoref{tangle-tree_strong}} In our proof we need the following: \begin{lem}[{\cite[Lemma 4.20]{C:undom_td_new}}]\label{lemma2_xyz} Let $(A,B)$, $(C,D)$ and $(E,F)$ be separations such that $(A,B)$ and $(C,D)$ are not nested but $(E,F)$ is nested with the other two separations. Then the corner separation $(A\cap C, B\cup D)$ is nested with $(E,F)$. \end{lem} \begin{proof} Recall that if $(G,H)$ and $(E,F)$ are nested, then one of $G\subseteq E$, $G\subseteq E^\complement$, $G^\complement\subseteq E$ or $G^\complement\subseteq E^\complement$ is true. If one of $G\subseteq E$ or $G\subseteq E^\complement$ is false for $G=A\cap C$, then it is also false for both $G=A$ and $G=C$.
If one of $G^\complement\subseteq E$ or $G^\complement\subseteq E^\complement$ is false for $G=A\cap C$, then it is false for at least one of $G=A$ or $G=C$. Suppose for a contradiction that $(A\cap C, B\cup D)$ is not nested with $(E,F)$ but $(A,B)$ and $(C,D)$ are. By exchanging the roles of $(A,B)$ and $(C,D)$ if necessary, we may assume by the above that $A^\complement\subseteq E$ and $C^\complement\subseteq E^\complement$. Then $A^\complement\subseteq C$, contradicting the assumption that $(A,B)$ and $(C,D)$ are not nested. \end{proof} \begin{proof}[Proof of \autoref{tangle-tree_strong}.] Let $N$ be any maximal nested set of separations, each distinguishing some two tangles efficiently. Let $(A,B)$ be a separation distinguishing two maximal tangles $\cal P$ and $\cal Q$ efficiently. Amongst all such $(A,B)$ we pick one such that the number of separations of $N$ not nested with $(A,B)$ is minimal. By the maximality of $N$, it suffices to show that $(A,B)$ is nested with $N$. By symmetry, it suffices to consider the case where $A$ is big in $\Pcal$ and $B$ is big in $\cal Q$. Suppose for a contradiction that there is some $(C,D)$ in $N$ not nested with $(A,B)$. Let $\cal R$ and $\cal S$ be two maximal tangles distinguished efficiently by $(C,D)$ and without loss of generality $D$ is big in $\cal R$ and $C$ is big in $\Scal$. Let $k$ be the order of $(A,B)$, and $\ell$ the order of $(C,D)$. \paragraph{Case 1: $k\geq \ell$.} Then $\cal P$ and $\cal Q$ orient $(C,D)$. If they orient it differently, then $(C,D)$ is a candidate for $(A,B)$ and thus $(A,B)$ must be nested with $N$, which is the desired contradiction. Since $N$ is maximal, it also contains $(D,C)$. Thus by replacing $(C,D)$ by $(D,C)$ if necessary, we may assume that $D$ is big in both $\cal P$ and $\cal Q$. Suppose for a contradiction that $(A\cap C, B\cup D)$ has order at least $\ell$. Then $(A\cup C, B\cap D)$ has order at most $k$ by \autoref{lemma1_xyz}.
Then $B\cap D$ is big in $\Qcal$ since three small sets cannot cover $E$ and both $B^\complement$ and $D^\complement$ are small. On the other hand $B\cap D$ is small in $\Pcal$ since any subset of a small set cannot be big. However, by \autoref{lemma2_xyz} the separation $(A\cup C, B\cap D)$ is nested with every separation in $N$ that is nested with $(A,B)$ and additionally with $(C,D)$. This is a contradiction to the choice of $(A,B)$. Hence $(A\cap C, B\cup D)$ has order at most $\ell-1$. By a similar argument $(B\cap C, A\cup D)$ has order at most $\ell-1$. The separation $(A\cap C, B\cup D)$ has too low an order to distinguish $\cal R$ and $\cal S$. Since subsets of small sets cannot be big, $A\cap C$ is small in $\cal R$. Thus $A\cap C$ is also small in $\cal S$. A similar argument gives that $B\cap C$ is small in $\Scal$. But then $\Scal$ is not a tangle since its three small sets $D$, $A\cap C$ and $B\cap C$ cover $E$. This is a contradiction. \paragraph{Case 2: $k< \ell$.} Then $\cal R$ and $\cal S$ orient $(A,B)$. They cannot orient it differently as $(C,D)$ distinguishes them efficiently. By replacing $(A,B)$ by $(B,A)$ if necessary, we may assume that $B$ is big in both $\cal R$ and $\cal S$. Suppose for a contradiction that $(A\cap C, B\cup D)$ has order at least $k+1$. Then $(A\cup C, B\cap D)$ has order at most $\ell-1$ by \autoref{lemma1_xyz}. Then $B\cap D$ is big in $\Rcal$ since three small sets cannot cover $E$ and both $B^\complement$ and $D^\complement$ are small. On the other hand $B\cap D$ is small in $\Scal$ since any subset of a small set cannot be big. Thus $(A\cup C, B\cap D)$ distinguishes $\Rcal$ and $\Scal$, which contradicts the efficiency of $(C,D)$. Hence $(A\cap C, B\cup D)$ has order at most $k$. A similar argument gives that $(A\cap D, B\cup C)$ has order at most $k$. By \autoref{lemma2_xyz}, these two corner separations are nested with every separation in $N$ that is nested with $(A,B)$ and additionally with $(C,D)$.
Thus by the choice of $(A,B)$, they cannot distinguish $\cal P$ and $\cal Q$. Since subsets of small sets cannot be big, $A\cap C$ is small in $\cal Q$. So it is also small in $\cal P$. Similarly, $A\cap D$ is small in $\cal P$. But then $\Pcal$ is not a tangle since its three small sets $B$, $A\cap C$ and $A\cap D$ cover $E$. This is a contradiction. Since we derive a contradiction in both cases such a separation $(C,D)$ cannot exist and $(A,B)$ is nested with $N$. Thus since $N$ is maximal, for any two maximal tangles $\cal P$ and $\cal Q$, the set $N$ contains a separation distinguishing them efficiently. \end{proof} \section{Concluding remarks} \begin{rem}\label{profile_rem}\emph{ \autoref{tangle-tree_strong} says that if we build a set of separations successively, where at each step we add a separation that is nested with everything so far and distinguishes two tangles in a minimal way, then eventually we will end up with a nested set of separations that distinguishes any two maximal tangles in a minimal way. However, we could build our nested set of separations a little more carefully, taking smaller separations first. More precisely, our construction has a $k$-th subroutine for each $k\in \Nbb$ starting with $k=0$. At the $k$-th subroutine of our construction we add successively separations of order $k$ that are nested with everything so far and that distinguish two tangles in a minimal way. We continue this until we can no longer proceed. With basically the same proof as above (actually, since we take small separations first, we do not need to consider Case 2), one can show that any construction of this type does not only distinguish all the maximal tangles but more generally all the robust profiles as defined in \cite{CDHH:profiles}. } \end{rem} \begin{rem}\label{algo_rem}\emph{ \autoref{tangle-tree_strong} gives rise to the following algorithm to construct a tree-decomposition that distinguishes all maximal tangles efficiently. 
At each step we have a nested set $N$ of separations such that each of its separations distinguishes some two tangles efficiently. If there are two maximal tangles that are not distinguished efficiently by a separation in $N$, our aim is to add some separation to $N$ that is nested with $N$ and distinguishes these two tangles efficiently. \autoref{tangle-tree_strong} guarantees that this will always be possible no matter which choices we make on the way. } \end{rem} Next we will define what it means for a tangle $\Qcal$ to live in a part of a tree-decomposition $(T,(P_t|t\in V(T)))$. If $tu$ is an edge of $T$ we abbreviate by $(S_t,S_u)$ the separation $(S(X_t),S(X_u))$, where $X_t$ is the component of $T-tu$ containing $t$ and $X_u$ is the component of $T-tu$ containing $u$. We say that $\Qcal$ \emph{lives} in a nonempty subgraph $S$ of $T$ if for every $t\in V(S)$ and every edge $tu$ incident with $t$ but not in $S$, the side $S_t$ of the separation $(S_t,S_u)$ is big in $\Qcal$. Clearly, every tangle $\Qcal$ lives in $T$; moreover, the intersection of two subgraphs in which $\Qcal$ lives is nonempty, and $\Qcal$ lives in that intersection. Hence there is a smallest subgraph $S(\Qcal)$ of $T$ in which $\Qcal$ lives. Clearly, if $\Qcal$ lives in $S$, then $S$ must be connected. So $S(\Qcal)$ is a tree. Also note that the order of a separation corresponding to an edge of $S(\Qcal)$ cannot be smaller than the order of $\Qcal$. Hence if for two tangles $\Pcal$ and $\Qcal$, the sets $S(\Pcal)$ and $S(\Qcal)$ intersect, then no separation corresponding to an edge of $T$ distinguishes $\Pcal$ and $\Qcal$. We are mostly interested in the case where $S(\Qcal)$ just consists of a single node $t$. In this case we say that $\Qcal$ \emph{lives} in the part $P_t$. Our aim is to deduce the following. \begin{cor}\label{tangle_tree_cor} Let $E$ be a finite set with an order function.
Then there is a tree-decomposition distinguishing any two maximal tangles efficiently such that in each of its parts lives a maximal tangle. \end{cor} By \autoref{tangle-tree_strong}, it suffices to show the following lemma. Given a nested set $N$ of separations, by $\Tcal(N)$ we denote the tree-decomposition of the smallest nested symmetric set containing $N$ in the sense of \autoref{nested_to_td}. \begin{lem}\label{get_small_size} Let $N$ be a nested set of separations that is minimal with the property that for any two maximal tangles there is a separation in $N$ that distinguishes them efficiently. Then in each part of $\Tcal(N)$ lives a maximal tangle. Conversely, each maximal tangle lives in a part of $\Tcal(N)$. \end{lem} \begin{proof} By assumption, the subtrees $S(\Qcal)$ for different tangles $\Qcal$ are disjoint. Hence the `conversely'-part follows from the first part. Suppose for a contradiction that there is a part $P_t$ in which no tangle lives. Let $u$ be a neighbour of $t$ in $T$ such that the order of a separation $(A,B)$ corresponding to $tu$ is maximal. We will construct for any two tangles $\Pcal$ and $\Qcal$ distinguished efficiently by $(A,B)$ another separation in $N$ that also distinguishes them efficiently. Note that $tu$ separates $S(\Pcal)$ and $S(\Qcal)$. Since not both $S(\Pcal)$ and $S(\Qcal)$ can contain $t$, we may assume that $t$ is not in $S(\Pcal)$. Since $S(\Qcal)$ is not equal to $\{t\}$, there is a neighbour $r$ of $t$ that is different from $u$ such that the edge $tr$ separates a vertex of $S(\Qcal)$ from $S(\Pcal)$. Since the order of a separation corresponding to $tr$ is at most the order of a separation corresponding to $tu$, the node $t$ cannot be in $S(\Qcal)$. Hence a separation corresponding to $tr$ distinguishes $\Pcal$ and $\Qcal$, and it does so efficiently, as $(A,B)$ does. Hence $N\setminus\{(A,B)\}$ violates the minimality of $N$, which contradicts our assumptions. Hence in each part of $\Tcal(N)$ lives a maximal tangle.
\end{proof} \end{document}
\begin{document} \begin{center} \noindent {\LARGE{\sl{\bf Stability of Planar Nonlinear Switched Systems}}} \end{center} \vskip 1cm \begin{center} Ugo Boscain, {\footnotesize SISSA-ISAS, Via Beirut 2-4, 34014 Trieste, Italy. E-mail: {\tt [email protected]}} Gr\'egoire Charlot {\footnotesize Universit\'e Montpellier II, Math\'ematiques, CC051, 34095 Montpellier Cedex 5, France. E-mail: {\tt [email protected] }} Mario Sigalotti {\footnotesize INRIA, 2004 route des lucioles, 06902 Sophia Antipolis, France. E-mail: {\tt [email protected]}} \end{center} \begin{quotation}\noindent {\bf\em Abstract --- We consider the time-dependent nonlinear system $\dot q(t)=u(t)X(q(t))+(1-u(t))Y(q(t))$, where $q\in\R^2$, $X$ and $Y$ are two smooth vector fields, globally asymptotically stable at the origin and $u:[0,\infty)\to\{0,1\}$ is an arbitrary measurable function. Analysing the topology of the set where $X$ and $Y$ are parallel, we give some sufficient and some necessary conditions for global asymptotic stability, uniform with respect to $u(.)$. Such conditions can be verified without any integration or construction of a Lyapunov function, and they are robust under small perturbations of the vector fields. }\end{quotation} Keywords --- Global asymptotic stability, planar switched systems, nonlinear. \section{Introduction} A {\it switched system} is a family of continuous-time dynamical systems endowed with a rule that determines, at every time, which dynamical system is responsible for the time evolution. More precisely let $\{f_u\,|\;u\in U\}$ be a (possibly infinite) set of smooth vector fields on a manifold $M$, and consider, as $u$ varies in $U$, the family of dynamical systems \begin{eqnarray}\label{-1} \dot q=f_u(q)\,,~~~~q\in M\,. \end{eqnarray} A non-autonomous dynamical system is obtained by assigning a so-called {\it switching function} $u(.):[0,\infty)\to U$. In this paper, the switching function models the behavior of a parameter which cannot be predicted a priori. 
It represents phenomena (e.g., a disturbance) that cannot be controlled or included in the dynamical system model. A typical problem related to switched systems is to obtain, out of a property which is shared by all the autonomous dynamical systems governed by the vector fields $f_u$, some, maybe weaker, property for the time-dependent system associated with an arbitrary switching function $u(.)$. For a discussion on various issues related to switched systems we refer the reader to \cite{liberzon-book,survey}. In this paper, we consider a two-dimensional nonlinear switched system of the type \begin{eqnarray} \dot q=u\,X(q)+(1-u)\,Y(q)\,,~~~~q\in\R^2\,,~~~~u\in\{0,1\}\,, \label{system}\label{nonlinear} \end{eqnarray} where the two vector fields $X$ and $Y$ are smooth (say, ${\cal C}^\infty$) on $\R^2$. In order to define proper non-autonomous systems, we require the switching functions to be measurable. Assume that $X(0)=Y(0)=0$ and that the two dynamical systems $\dot q=X(q)$ and $\dot q=Y(q)$ are \underline{globally asymptotically stable} at the origin. Our main aim is to study under which conditions on $X$ and $Y$ the origin is globally asymptotically stable for the system \r{nonlinear}, uniformly with respect to the switching functions (GUAS for short). For the precise formulation of this and other stability properties, see Definition \ref{stabilities}. In order to study the stability of \r{system} it is natural to consider its convexification, i.e., the case in which $u$ varies in the whole interval $[0,1]$. It turns out that the stability properties of the two systems are equivalent (see Section \ref{s-convex}).\\\\ The linear version of the system introduced above, namely, \begin{eqnarray}\label{1} \dot q=u\,A\,q+(1-u)\,B\,q\,,~~~~q\in\R^2\,,~~~~u\in\{0,1\}\,, \end{eqnarray} where the $2\times2$ real matrices $A$ and $B$ have eigenvalues with strictly negative real part, was studied in \cite{SIAM} (see also \cite{lyapunov-paolo}).
More precisely, the results in \cite{SIAM} establish a necessary and sufficient condition for GUAS in terms of three relevant parameters, two depending on the eigenvalues of $A$ and $B$ respectively, and the third one (namely, the cross ratio of the four eigenvectors of $A$ and $B$ in the projective line $\mathbb{C} P^1$) accounting for the interrelations between the two systems. The precise necessary and sufficient condition ensuring GUAS of \r{1} is quite technical and can be found in \cite{SIAM} (see also \cite{lyapunov-paolo}). Notice that, in the linear case, GUAS is equivalent to the more often quoted GUES property, i.e., global exponential stability, uniform with respect to the switching rule (see, for example, \cite{angeli} and references therein). For related results on linear switched systems, see \cite{agr,blanchini,daw,hesp,lyapunov-paolo}. For nonlinear systems, the problem of characterizing GUAS completely, without assuming the explicit knowledge of the integral curves of $X$ and $Y$, is hopeless. The problem, however, admits some partial solution. The purpose of this paper is to provide some sufficient and some necessary conditions for stability which are robust (with respect to small perturbations of the vector fields) and easily verifiable, directly on the vector fields $X$ and $Y$, without requiring any integration or construction of a Lyapunov function. Denote by ${\cal Z}$ the set on which $X$ and $Y$ are parallel. One of our main results is that, if ${\cal Z}$ reduces to the singleton $\{0\}$, then \r{nonlinear} is GUAS (Theorem \ref{t-mai-paralleli}). The proof works by showing that an admissible trajectory starting from a point $p\in\R^2$ is forced to stay in a compact region bounded by the integral curves of $X$ and $Y$ from $p$. The fact that $X$ and $Y$ are linearly independent outside the origin acts as a sort of drift which guarantees that the only possible accumulation point of an admissible trajectory is the origin.
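The hypothesis of Theorem \ref{t-mai-paralleli} is easy to test on concrete pairs. In the sketch below, $X(q)=-q$ and $Y(q)=(-x-y,\,x-y)$ are our illustrative choices, not examples from the paper: $\det(X(q),Y(q))=-(x^2+y^2)$ vanishes only at the origin, so ${\cal Z}=\{0\}$ and the theorem applies, and a crude forward-Euler simulation under random switching is consistent with GUAS.

```python
import random

def X(x, y):                     # q' = -q : globally asymptotically stable
    return -x, -y

def Y(x, y):                     # stable focus: eigenvalues -1 +/- i
    return -x - y, x - y

def det_XY(x, y):
    """det(X(q), Y(q)); here it equals -(x^2 + y^2), so Z = {0}."""
    a, b = X(x, y)
    c, d = Y(x, y)
    return a * d - b * c

def simulate(x, y, T=10.0, dt=1e-3, seed=0):
    """Euler integration of the switched system under a switching value
    that is re-drawn at random every 50 steps (0.05 time units)."""
    rng, u = random.Random(seed), 1
    for k in range(int(T / dt)):
        if k % 50 == 0:
            u = rng.randint(0, 1)
        fx, fy = X(x, y) if u == 1 else Y(x, y)
        x, y = x + dt * fx, y + dt * fy
    return x, y
```

Here $V(q)=|q|^2$ is in fact a common Lyapunov function for the two fields ($\dot V\leq -V$ along both), which is why every switching signal drives the simulated trajectory to the origin.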
When ${\cal Z}$ is just compact, we prove that \r{system} is at least bounded (see Theorem \ref{t-solo-compatte}). Roughly speaking, this means that its trajectories do not escape to infinity. The idea of the proof is that, if we modify $X$ and $Y$ only in a compact region of the plane, then the boundedness properties of the system are left unchanged. Taking advantage of the result obtained in Theorem~\ref{t-mai-paralleli}, we manage to prove the boundedness of \r{system} by reducing, using compact perturbations, ${\cal Z}$ to $\{0\}$, while preserving the global asymptotic stability of $X$ and $Y$. Other conditions can be formulated taking into account the relative position of $X$ and $Y$ along ${\cal Z}$. Assume that ${\cal Z}\setminus\{0\}$ contains at least one point $q_0$. Since both $X(q_0)$ and $Y(q_0)$ are different from zero, the property of pointing in the same or in the opposite direction can be stated unambiguously. If $X(q_0)$ and $Y(q_0)$ point in opposite directions, then there exists a switching function, for the convexified system, whose output is the constant trajectory which stays in $q_0$. As a consequence, the system \r{system} is not GUAS. Additional results can be obtained under the assumption that the pair of vector fields $(X,Y)$ is generic. (For the notion of genericity appropriate to our aims, see Section \ref{s-basic-facts}.) In particular, the genericity assumption can be used to guarantee that ${\cal Z}\setminus\{0\}$ is an embedded one-dimensional submanifold of the plane. Clearly, ${\cal Z}$ need not be connected. If the connected component of ${\cal Z}$ containing the origin reduces to $\{0\}$ and on all other components $X$ and $Y$ point in the same direction, transversally to ${\cal Z}$, then \r{system} is GUAS. This result is formulated in Theorem \ref{t-mai-tangent}, which follows the pattern of proof of Theorem~\ref{t-mai-paralleli}.
Conversely, Theorem \ref{t-inv-non-comp} states that, if one connected component of ${\cal Z}\setminus\{0\}$ is unbounded and such that $X$ and $Y$ point in opposite directions on it, then \r{system} admits a trajectory going to infinity. Intuitively, this happens because the orientation of $(X(p),Y(p))$ changes while $p$ crosses ${\cal Z}\setminus\{0\}$. If $X(p)$ is not tangent to ${\cal Z}$ at $p$ and $X(p)$ points in the opposite direction with respect to $Y(p)$, then one can embed ${\cal Z}$, locally near $p$, in a foliation made of admissible trajectories of \r{system}, whose running direction is reversed while crossing ${\cal Z}$ (see Figure~\ref{foli}). \begin{figure} \caption{A local foliation embedding ${\cal Z}$} \label{foli} \end{figure} Since, generically, the points where $X$ is tangent to ${\cal Z}$ are isolated, it turns out that there exists an admissible trajectory which tracks globally the unbounded connected component of ${\cal Z}\setminus\{0\}$ on which $X$ and $Y$ point in opposite directions. The paper is organized as follows. In Section \ref{s-basic-facts}, we recall the main definitions of stability in which we are interested, we introduce the convexified system, and we describe the topological structure of the set ${\cal Z}$. The main results are stated in Section \ref{s-main-results}, where their robustness is also discussed. The proofs are given in Sections~\ref{mp}, \ref{mt}, \ref{sc}, and \ref{inc}. \section{Basic definitions and facts}\label{s-basic-facts} \subsection{Definitions of stability} Fix $n,m\in \mathbb{N}$ and consider the switched system \begin{eqnarray}\label{ss-system} \dot q=f_u(q)\,,~~~~q\in \R^n\,,~~~~u\in U\subset\R^m\,, \end{eqnarray} where $U$ is a measurable subset of $\R^m$ and $(q,u)\mapsto f_u(q)$ is the restriction on $\R^n\times U$ of a ${\cal C}^\infty$ function from $\R^n\times\R^m$ to $\R^n$. Assume that $f_u(0)=0$ for every $u\in U$.
\\ \noindent For every $\delta>0$, denote by $B_\delta\subset \R^n$ the ball of radius $\delta$, centered at the origin. Set $${\cal U}=\{u:[0,\infty)\rightarrow U\,|\;u(.)\ \mbox{measurable}\}\,.$$ For every $u(.)$ in ${\cal U}$ and every $p\in \R^n$, denote by $t\mapsto \gamma(p,u(.),t)$ the solution of \r{ss-system} such that $\gamma(p,u(.),0)=p$. Notice that, in general, $t\mapsto \gamma(p,u(.),t)$ need not be defined for every $t\geq 0$, since the non-autonomous vector field $f_{u(t)}$ may not be complete. Denote by ${\cal T}(p,u(.))$ the maximal element of $(0,+\infty]$ such that $t\mapsto \gamma(p,u(.),t)$ is defined on $[0,{\cal T}(p,u(.)))$, and let $$\mathit{Supp}(\gamma(p,u(.),.))=\gamma(p,u(.),[0,{\cal T}(p,u(.))))\,.$$ If $\mathit{Supp}(\gamma(p,u(.),.))$ is bounded, then ${\cal T}(p,u(.))= +\infty$. Given $p\in \R^n$, the {\it accessible set from $p$}, denoted by ${{\cal A}(p)}$, is defined as $${\cal A}(p)=\cup_{u(.)\in{\cal U}}\mathit{Supp}(\gamma(p,u(.),.))\,.$$ \noindent Several notions of stability for the switched system \r{ss-system} can be introduced.
\begin{Definition}\label{stabilities} We say that \r{ss-system} is \begin{itemize} \item {\bf unbounded} if there exist $p\in\R^n$ and $u(.)\in{\cal U}$ such that $\gamma(p,u(.),t)$ goes to infinity as $t$ tends to ${\cal T}(p,u(.))$; \item {\bf bounded} if, for every $K_1\subset \R^n$ compact, there exists $K_2\subset \R^n$ compact such that $\gamma(p,u(.),t)\in K_2$ for every $u(.)\in{\cal U}$, $t\geq 0$ and $p\in K_1$; \item {\bf uniformly stable} at the origin if, for every $\delta>0$, there exists $\varepsilon>0$ such that ${\cal A}(p)\subset B_\delta$ for every $p\in B_\varepsilon$; \item {\bf locally attractive} at the origin if there exists $\delta>0$ such that, for every $u(.)\in{\cal U}$ and every $p\in B_\delta$, $\gamma(p,u(.),t)$ converges to the origin as $t$ goes to infinity; \item {\bf globally attractive} at the origin if, for every $u(.)\in{\cal U}$ and every $p\in\R^n$, $\gamma(p,u(.),t)$ converges to the origin as $t$ goes to infinity; \item {\bf globally uniformly attractive} at the origin if, for every $\delta_1, \delta_2>0$, there exists $T>0$ such that $\gamma(p,u(.),T)\in B_{\delta_1}$ for every $u(.)\in{\cal U}$ and every $p\in B_{\delta_2}$; \item {\bf globally uniformly stable (GUS)} at the origin if it is bounded and uniformly stable at the origin; \item {\bf locally asymptotically stable (LAS)} at the origin if it is uniformly stable and locally attractive at the origin; \item {\bf globally asymptotically stable (GAS)} at the origin if it is uniformly stable and globally attractive at the origin; \item {\bf globally uniformly asymptotically stable (GUAS)} at the origin if it is uniformly stable and globally uniformly attractive at the origin. \end{itemize} \end{Definition} It has been shown by Angeli, Ingalls, Sontag, and Wang \cite{AISW} that, when $U$ is compact, the notions of GAS and GUAS are equivalent. This is the case for system \r{system}.
Moreover, it is well known that, in the case in which all the vector fields $f_u$ are linear, local and global properties are equivalent. \subsection{The convexified system} \label{s-convex} In this paper, we focus on the planar switched system \begin{equation}\llabel{nons-system} \dot q=u\,X(q)+(1-u)\,Y(q)\,,~~~~q\in\R^2\,,~~~~u\in\{0,1\}\,, \end{equation} where $X$ and $Y$ denote two vector fields on $\R^2$, of class ${\cal C}^\infty$, such that $X(0)=Y(0)=0$. We assume moreover that $X$ and $Y$ are \underline{globally asymptotically stable} at the origin. Notice, in particular, that $X$ and $Y$ are forward complete. A classical tool in stability analysis is the convexification of the set of admissible velocities. Such a transformation does not change the closure of the accessible sets. Moreover, it was proved in \cite{12} (see also \cite[Proposition 7.2]{AISW}) that, for every $p'\in\R^2$, every switching function $u':[0,\infty)\to[0,1]$, and every positive continuous function $r$ defined on $[0,{\cal T}(p',u'(.)))$, there exist $u(.)\in {\cal U}$ and $p\in\R^2$ such that \[ \|\gamma(p,u(.),t)-\gamma(p',u'(.),t)\|\leq r(t) \] for every $t\in[0,{\cal T}(p',u'(.)))$. As a consequence, each of the notions introduced in Definition~\ref{stabilities} holds for (\ref{nons-system}) if and only if it holds for the same system where $U=\{0,1\}$ is replaced by $[0,1]$. In the following, to simplify proofs, we deal with the convexified system \begin{equation}\llabel{s-system} \dot q=u\,X(q)+(1-u)\,Y(q)\,,~~~~q\in\R^2\,,~~~~u\in[0,1]\,. \end{equation} \noindent {\bf Notations.} When $u(.)$ is constantly equal to zero (respectively, one), we write $\gamma_Y(p,t)$ (respectively, $\gamma_X(p,t)$) for $\gamma(p,u(.),t)$. Given $p,p'\in\R^2$ and $u(.),u'(.)$ in ${\cal U}$, we say that $\gamma(p,u(.),.)$ and $\gamma(p',u'(.),.)$ {\it forwardly intersect} if $\mathit{Supp}(\gamma(p,u(.),.))$ and $\mathit{Supp}(\gamma(p',u'(.),.))$ have nonempty intersection.
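To illustrate the approximation property recalled above, here is the standard fast-switching construction, sketched only for the constant convexified control $u'\equiv 1/2$ (this is an illustration, not the argument of \cite{12}): the vector field $(X+Y)/2$ can be mimicked by the admissible bang-bang switching functions

```latex
% Fast switching between u=1 and u=0 on intervals of length \varepsilon:
\[
u_\varepsilon(t)=\left\{\begin{array}{ll}
1 & \mbox{if }\ t\in[2k\varepsilon,(2k+1)\varepsilon)\ \mbox{for some }k\in\mathbb{N}\,,\\
0 & \mbox{otherwise}\,.
\end{array}\right.
\]
% As \varepsilon tends to zero, the corresponding trajectories of
% (the non-convexified system) converge, uniformly on compact time
% intervals, to the integral curve of (X+Y)/2 with the same initial point.
```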
\subsection{The collinearity set of $X$ and $Y$} \noindent A key object in order to detect stability properties of \r{s-system} turns out to be the set ${\cal Z}$ on which $X$ and $Y$ are parallel. We have that ${\cal Z}=Q^{-1}(0)$, where \begin{eqnarray}\label{decca} Q(p)=\mathit{det}(X(p),Y(p))\,,\ \ \ \ p\in\R^2\,. \end{eqnarray} In \cite{SIAM}, the stability of the linear switched system \r{1} was studied by associating with every point of $\R^2$ a suitably defined ``worst'' trajectory passing through it, whose construction was based upon ${\cal Z}$. The global asymptotic stability of the linear switched system \r{1} was then proved to be equivalent to the convergence to the origin of every such worst trajectory. We recall that in the linear case, except for some degenerate situations, ${\cal Z}$ is either equal to $\{0\}$ or is made of two straight lines passing through the origin. In the nonlinear case, the situation is more complex. Let us represent ${\cal Z}$ as \begin{eqnarray}\label{dec} {\cal Z}=\{0\}\cup\bigcup_{\Gamma\in {\cal G}}\Gamma\,, \end{eqnarray} where ${\cal G}$ is the set of all connected components of ${\cal Z}\setminus\{0\}$. Notice that ${\cal G}$ need not, in general, be countable. With a slight abuse of notation, we will refer to the elements of ${\cal G}$ as the {\it components} of ${\cal Z}$. \begin{Definition} Let $\Gamma$ be a component of ${\cal Z}$ and fix $p\in\Gamma$. We say that $\Gamma$ is {\it direct} (respectively, {\it inverse}) if $X(p)$ and $Y(p)$ have the same (respectively, opposite) direction. \end{Definition} \begin{rmk} The definition is independent of the choice of $p$, since neither $X$ nor $Y$ vanish along $\Gamma$. \end{rmk} An example of how ${\cal Z}$ may look is represented in Figure~\ref{qq}.
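To fix ideas, it may help to record the computation of $Q$ in the linear case; the following is an illustrative example with matrices chosen ad hoc, not taken from \cite{SIAM}.

```latex
% Linear example: X(p)=Ap, Y(p)=Bp with the (Hurwitz) matrices
\[
A=\left(\begin{array}{cc}-1&0\\0&-2\end{array}\right),\qquad
B=\left(\begin{array}{cc}-1&1\\-1&-1\end{array}\right).
\]
% For p=(x,y) one gets Ap=(-x,-2y) and Bp=(-x+y,-x-y), hence
\[
Q(p)=\mathit{det}(Ap,Bp)=(-x)(-x-y)-(-2y)(-x+y)=x^2-xy+2y^2\,,
\]
% a positive definite quadratic form (its discriminant is 1-8<0),
% so that here {\cal Z}={0}.
```

In general, in the linear case $Q$ is a quadratic form, whose zero set is either $\{0\}$ or a pair of straight lines through the origin, in accordance with the dichotomy recalled above.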
\begin{figure} \caption{The set ${\cal Z}$} \label{qq} \end{figure} Some of the results of this paper are obtained assuming that the set ${\cal Z}$ has suitable regularity properties, which are generic in the sense defined below. A base for the {\it Whitney topology} on ${\cal C}^\infty(\R^2,\R^2)$ (the set of smooth vector fields on $\R^2$) can be defined, using the multi-index notation, as the family of sets of the type \[ {\cal V}(k,f,r)=\left\{g\in {\cal C}^\infty(\R^2,\R^2)\,\left|\;\left\|\frac{\partial^{|I|}(f-g)}{\partial x^I}(x)\right\|<r(x), \forall x\in\R^2,|I|\leq k\right.\right\}\,,\] where $k$ is a nonnegative integer, $f$ belongs to ${\cal C}^\infty(\R^2,\R^2)$, and $r$ is a positive continuous function defined on $\R^2$. Denote by GAS$(\R^2)$ the set of smooth vector fields on $\R^2$ which are globally asymptotically stable at the origin, and endow it with the topology induced by the Whitney topology. A {\it generic property} for (\ref{s-system}) is a property which holds for an open dense subset of GAS$(\R^2)\times$GAS$(\R^2)$, endowed with the product topology of GAS$(\R^2)$. \begin{ml} \label{genio} For a generic pair of vector fields $(X,Y)$, ${\cal Z}\setminus\{0\}$ is an embedded one-dimensional submanifold of $\R^2$. Moreover, $Q(p)$ changes sign while $p$ crosses ${\cal Z}\setminus\{0\}$. \end{ml} The lemma is a standard result in genericity theory. It follows from the fact that the condition \begin{description} \item[ (G1)] If $p\neq0$ and $Q(p)=0$, then $\nabla Q(p)\neq0$, \end{description} is generic (see, for instance, \cite{generiko}). When ${\cal Z}\setminus\{0\}$ is a manifold, we say that $p\in {\cal Z}\setminus\{0\}$ is a \underline{tangency point} if $X(p)$ is tangent to ${\cal Z}$. Under condition {\bf (G1)}, $p\in {\cal Z}\setminus\{0\}$ is a tangency point if and only if $\nabla Q(p)$ and $X(p)$ (equivalently, $Y(p)$) are orthogonal. Some of our results are obtained under additional generic conditions.
One of these, namely, \begin{description} \item[ (G2)] The Hessian matrix of $Q$ at the origin is non-degenerate, \end{description} ensures that ${\cal Z}$, in a neighborhood of the origin, is given either by $\{0\}$ or by the union of two transversal one-dimensional manifolds intersecting at the origin. Under the generic conditions {\bf (G1)} and {\bf (G2)}, the connected component of ${\cal Z}$ containing the origin looks like one of the configurations in Figure~\ref{f-origin}. A third generic condition which we will sometimes assume to hold is \begin{description} \item[ (G3)] If $p\neq0$, $Q(p)=0$, and $\nabla Q(p)$ is orthogonal to $X(p)$, then the second derivative of $Q$ at $p$ along $X$ (equivalently, $Y$) is different from zero, \end{description} which, together with {\bf (G1)}, guarantees that the tangency points on ${\cal Z}$ are isolated. \begin{figure} \caption{The connected component of ${\cal Z}$ containing the origin} \label{f-origin} \end{figure} \section{Statement of the results}\label{s-main-results} We organize our results into sufficient and necessary conditions with respect to the stability properties. Notice that all such conditions are easily verified without any integration or construction of a Lyapunov function. Moreover, they are robust under small perturbations of the vector fields, as explained in Section \ref{s-robustness}. Let us recall that $X$ and $Y$ are assumed to be globally asymptotically stable at the origin and that all the results given below, although stated for the case $u\in[0,1]$, are also valid for the system where $u$ varies in $\{0,1\}$. Before stating our main theorems, observe that classical results on linearization imply the following. \begin{mpr} \label{nec-cond} Assume that the eigenvalues of $A=\nabla X|_{p=0}$ and $B=\nabla Y|_{p=0}$ have strictly negative real part. Then \r{s-system} is LAS if and only if \r{1} is GUAS.
\end{mpr} \subsection{Sufficient conditions} The following theorem gives a simple sufficient condition for GUAS, which generalizes the analogous one already known for the linear system \r{1} (see \cite{SIAM,lyapunov-paolo}). \begin{Theorem} \label{t-mai-paralleli} Assume that ${\cal Z}=\{0\}$. Then the switched system \r{s-system} is GUAS at the origin. \end{Theorem} Under the generic assumptions {\bf (G1)} and {\bf (G2)}, Theorem \ref{t-mai-paralleli} can be generalized as follows. \begin{Theorem}\label{t-mai-tangent} Assume that the generic conditions {\bf (G1)} and {\bf (G2)} hold. Assume, moreover, that the origin is isolated in ${\cal Z}$ and that there is no tangency point in ${\cal Z}\setminus\{0\}$. Then the switched system \r{s-system} is GUAS. \end{Theorem} When ${\cal Z}$ is bounded, although different from $\{0\}$, some weaker version of Theorem \ref{t-mai-paralleli} still holds. \begin{Theorem} \label{t-solo-compatte} Assume that ${\cal Z}$ is compact. Then the switched system \r{s-system} is bounded. \end{Theorem} As a direct consequence of Proposition \ref{nec-cond} and Theorem \ref{t-solo-compatte}, we have the following sufficient condition for GUS. \begin{corollary}\llabel{c1} Let ${\cal Z}$ be compact, and the linearized switched system be non-degenerate and GUAS. Then the switched system \r{s-system} is GUS. \end{corollary} \subsection{Necessary conditions} The following proposition expresses the straightforward remark that the inverse components of ${\cal Z}$ constitute obstructions to the stability of \r{s-system}. The reason is clear: if $\Gamma$ is inverse and $p$ belongs to $\Gamma$, then a constant switching function $u(.)$ exists such that $\gamma(p,u(.),t)=p$ for every $t\geq 0$. \begin{mpr} \label{p-mai-inverse} If ${\cal Z}$ has an inverse component, then the switched system \r{s-system} is not globally attractive. \end{mpr} The following theorem gives a necessary condition for boundedness, under generic conditions. 
\begin{Theorem} \label{t-inv-non-comp} Assume that the generic conditions {\bf (G1)} and {\bf (G3)} hold. If ${\cal Z}$ contains an unbounded inverse component, then the switched system \r{s-system} is unbounded. \end{Theorem} \subsection{Robustness}\label{s-robustness} We say that a property satisfied by $(X,Y)$ is \underline{robust} if it still holds for small perturbations of the pair $(X,Y)$, that is, if it holds for all the elements of a neighborhood of $(X,Y)$ in GAS$(\R^2)\times$GAS$(\R^2)$. Such a notion of robustness is also known as {\it structural stability}, an expression which we prefer to avoid, in order to prevent confusion with the many definitions of stability already introduced for \r{s-system}. Under the generic conditions {\bf (G1)} and {\bf (G2)}, one can easily verify that the topology of the set ${\cal Z}$ does not change for small perturbations of $X$ and $Y$. Moreover, once a component $\Gamma$ of ${\cal Z}$ is fixed, the fact that $\Gamma$ is direct or inverse is robust. Similarly, if $\Gamma$ is a component of ${\cal Z}$ whose closure does not contain the origin, the absence of tangency points along $\Gamma$ is robust. As a consequence, the conditions formulated by the theorems above are robust. More precisely: \begin{Theorem} Under generic assumptions, if any of Theorems \ref{t-mai-paralleli}, \ref{t-mai-tangent}, \ref{t-solo-compatte}, \ref{t-inv-non-comp}, Corollary \ref{c1}, or Proposition \ref{p-mai-inverse} applies to the pair $(X,Y)$, then it applies in a neighborhood of $(X,Y)$ in GAS$(\R^2)\times$GAS$(\R^2)$. \end{Theorem} \section{Proof of Theorem \ref{t-mai-paralleli}}\label{mp} Assume that ${\cal Z}=\{0\}$. We already recalled in Section \ref{s-basic-facts} that GAS and GUAS are two equivalent notions. The main step of the proof consists in showing that (\ref{s-system}) is globally attractive. The uniform stability will be obtained as a byproduct of the adopted proof technique. Fix $q\in \R^2\setminus\{0\}$.
We first prove that ${{\cal A}(q)}$ is bounded. Then we show that, for every $u(.)$ in ${\cal U}$, the only possible accumulation point of $\gamma(q,u(.),t)$ is the origin. These two facts imply that $\gamma(q,u(.),t)$ converges to the origin as $t$ goes to infinity. \subsection{Boundedness of ${{\cal A}(q)}$}\label{barbecue} \noindent We distinguish two cases. \noindent {\bf \underline{First case:} $\gamma_X(q,.)$ and $\gamma_Y(q,.)$ do not forwardly intersect}. Then we can define a closed, simple, piecewise smooth curve by $$ \gamma_{X,Y}(q,t)=\left\{\begin{array}{lr}\!\!\gamma_X(q,\tan(t\pi)) & \mbox{if }\ t\in\left[0,\frac{1}{2}\right],\\[1mm] \!\!\gamma_Y(q,\tan((1-t)\pi)) & \mbox{if }\ t\in\left[\frac{1}{2},1\right], \end{array}\right. $$ where $\gamma_X(q,\tan(\pi/2))$ and $\gamma_Y(q,\tan(\pi/2))$ are identified with the origin. The support of $\gamma_{X,Y}(q,.)$ separates $\R^2$ into two sets, one being bounded. Let us call ${\cal B}(q)$ the interior of the bounded set and ${\cal D}(q)$ the interior of the unbounded one. \begin{ml} \label{goo} ${\cal A}(q)$ is contained in $\overline{\B(q)}={\cal B}(q)\cup\gamma_{X,Y}(q,[0,1])$. \end{ml} \noindent{\bf Proof. } Consider the vector field $(X+Y)/2$. At the point $q$, it points either inside or outside ${\cal B}(q)$. Then, as it becomes clear through a local rectification of $(X+Y)/2$, the same holds true at all points of $\gamma_{X,Y}(q,[0,1])$ sufficiently close to $q$. Moreover, since the orientation defined by $(X,Y)$ does not vary on $\R^2\setminus\{0\}$ and coincides with those induced by $(X,(X+Y)/2)$ and $((X+Y)/2,Y)$, the vector field $(X+Y)/2$ points constantly either inside or outside ${\cal B}(q)$ all along $\gamma_{X,Y}(q,[0,1])\setminus\{0\}$. Let us assume that $(X+Y)/2$ points inside ${\cal B}(q)$. Then $\overline{\B(q)}$ is invariant for the flow of all the vector fields of the type $u\,X+(1-u)\,Y$, with $u\in[0,1]$, that is, it is invariant for the dynamics of \r{s-system}.
Hence, ${\cal A}(q)$ is contained in $\overline{\B(q)}$. Assume now, by contradiction, that $(X+Y)/2$ points outside ${\cal B}(q)$. The same reasoning as above shows that ${\cal A}(q)$ is contained in $\overline{{\cal D}(q)}$. Define, for every $t\geq 0$ and every $\tau\in \R$, $$ \gamma^{X,Y,t}(q,\tau)=\left\{\begin{array}{lr} \!\!\gamma_X(q,-\tau) & \mbox{if }\ \tau<-t\,, \\ \!\!\gamma_Y(\gamma_X(q,t),\tau+t) & \mbox{if }\ \tau>-t\,. \end{array}\right. $$ The support of $\gamma^{X,Y,t}$ is given by the union of the integral curves of $X$ and $Y$ connecting $\gamma_X(q,t)$ and the origin (see Figure~\ref{gxyt}). For every $t\geq0$, we can identify $\gamma^{X,Y,t}$ with a closed curve passing through the origin. \begin{figure} \caption{The curves $\gamma^{X,Y,t}$} \label{gxyt} \end{figure} Fix a point $q'$ in ${\cal B}(q)$. By hypothesis, no $\gamma^{X,Y,t}$ passes through $q'$. Notice that the index of $\gamma^{X,Y,0}$ with respect to $q'$ is equal to one, since the support of $\gamma^{X,Y,0}$ coincides with the boundary of ${\cal B}(q)$. The stability of $Y$ at the origin implies that the index with respect to $q'$ of the curve $\gamma^{X,Y,t}$ depends continuously on $t$; being integer-valued, it is therefore constant on $[0,\infty)$. Hence, for every $t\in [0,\infty)$, \[\max_{\tau\in\R}\|\gamma^{X,Y,t}(q,\tau)\|> \|q'\|>0\,.\] On the other hand, when $t$ goes to infinity, $\gamma_X(q,t)$ converges to the origin and \begin{eqnarray} \sup_{\tau<-t}\|\gamma^{X,Y,t}(q,\tau)\|=\sup_{\tau>t}\|\gamma_X(q,\tau)\|\stackrel{t\rightarrow\infty}{-\!\!\!-\!\!\!-\!\!\!\!\!\longrightarrow} 0\,.\nonumber\end{eqnarray} Therefore, there exist points $p\in\R^2$, arbitrarily close to the origin, such that the curve $s\mapsto \gamma_Y(p,s)$, $s>0$, exits the ball $B_{\|q'\|}$, which contradicts the stability of $Y$ at the origin. \ \rule{0.5em}{0.5em} \noindent{\bf \underline{Second case:} $\gamma_X(q,.)$ and $\gamma_Y(q,.)$ do forwardly intersect}.
Let $t$ be the first positive time such that the point $\gamma_Y(q,t)$ is equal to $\gamma_X(q,\tau)$ for some $\tau>0$. Define, for every $s\in [0,\tau+t]$, $$ \gamma_{X,Y}(q,s)=\left\{\begin{array}{lcr}\!\!\gamma_X(q,s) & \mbox{if }& s\in[0,\tau]\,,\\[1mm] \!\!\gamma_Y(q,t+\tau-s) & \mbox{if }& s\in[\tau,\tau+t]\,. \end{array}\right. $$ The curve $\gamma_{X,Y}(q,.)$ is simple and closed, and separates $\R^2$ into two open sets ${\cal B}(q)$ and ${\cal D}(q)$, ${\cal B}(q)$ being bounded. \begin{ml} \label{hoo} ${\cal A}(q)$ is contained in $\overline{\B(q)}={\cal B}(q)\cup\gamma_{X,Y}(q,[0,\tau+t])$. \end{ml} {\bf Proof. } Assume that $(X+Y)/2$ points inside ${\cal B}(q)$ at $q$. Hence, the same is true for all points of $\gamma_{X,Y}(q,[0,\tau+t])$ sufficiently close to $q$. Since ${\cal Z}=\{0\}$, the property extends to the entire curve $\gamma_{X,Y}(q,[0,\tau+t])$, except possibly at the point $\gamma_{X,Y}(q,\tau)$. The same reasoning can be applied at $\gamma_{X,Y}(q,\tau)$, showing that $(X+Y)/2$ points either inside or outside ${\cal B}(q)$ at all points of the type $\gamma_{X,Y}(q,s)$, with $s$ close to $\tau$. The only non-contradictory possibility is that $(X+Y)/2$ points inside ${\cal B}(q)$ all along $\gamma_{X,Y}(q,[0,\tau+t])$. Hence, $\overline{\B(q)}$ is invariant under the flow of each vector field $u\,X+(1-u)\,Y$, $u\in[0,1]$, so that ${\cal A}(q)$ is contained in $\overline{\B(q)}$. Assume, by contradiction, that $(X+Y)/2$ points inside ${\cal D}(q)$. The same reasoning as above shows that ${\cal A}(q)$ is contained in $\overline{{\cal D}(q)}$. In particular, the origin belongs to $\overline{{\cal D}(q)}$. On the other hand, the fact that $(X+Y)/2$ points outside ${\cal B}(q)$ all along $\partial {\cal B}(q)$ implies that it has a zero inside ${\cal B}(q)$, which is impossible unless ${\cal B}(q)$ contains the origin. \ \rule{0.5em}{0.5em} We proved that, in both cases, the set ${\cal A}(q)$ is bounded.
The precise description of ${\cal A}(q)$ is given by the following lemma, where the definition of ${\cal B}(q)$ depends on whether $\gamma_X(q,.)$ and $\gamma_Y(q,.)$ forwardly intersect\ or not. \begin{ml} \label{gooh} ${\cal A}(q)=\overline{\B(q)}\setminus\{0\}$. \end{ml} \noindent{\bf Proof. } First notice that the origin does not belong to ${\cal A}(q)$, being a stationary point of both $X$ and $Y$. The inclusion of ${\cal A}(q)$ in $\overline{\B(q)}\setminus\{0\}$ is thus a consequence of Lemma~\ref{goo} and Lemma~\ref{hoo}. As for the opposite inclusion, notice that $\partial{\cal B}(q)\setminus\{0\}$ is, by construction, made of integral curves of $X$ and $Y$ starting from $q$. Therefore, $\partial{\cal B}(q)\setminus\{0\}\subset {\cal A}(q)$. Fix now $p\in{\cal B}(q)\setminus\{0\}$. We are left to prove that $p\in {\cal A}(q)$. Define $$C=\{\gamma_{X}(p,\tau)\,|\;\tau\leq 0\}\,,$$ and let $V$ be a neighborhood of the origin such that $p\not\in V$. Due to the global asymptotic stability of $X$ and the boundedness of ${\cal B}(q)$, there exists $T>0$ such that $\gamma_X(\overline{\B(q)},T)\subset V$. Since $\gamma_X(\gamma_{X}(p,-T),T)=p\not\in V$, then $C$ is not contained in $\overline{\B(q)}$. Therefore, there exists $\tau<0$ such that $\gamma_{X}(p,\tau)\in\partial {\cal B}(q)$. Notice that $\gamma_{X}(p,\tau)$ is different from the origin, since otherwise we would have $p=0$. Finally, $\gamma_{X}(p,\tau)\in {\cal A}(q)$, which implies that $p=\gamma_X(\gamma_{X}(p,\tau),|\tau|)$ belongs to ${\cal A}(q)$. \ \rule{0.5em}{0.5em} \subsection{Global attractivity} In the previous section, we showed that the accessible set from every point is bounded. Hence, the global attractivity of (\ref{s-system}) is proved if we ensure that no admissible curve has an accumulation point different from the origin.
Let us show that, for every point $p\neq0$, there exist $\varepsilon>0$ and a neighborhood $V_p$ of $p$ such that every admissible curve $t\mapsto \gamma(q,u(.),t)$ entering $V_p$ at time $\tau$ leaves $V_p$ before time $\tau+\varepsilon$ and never comes back to $V_p$ after time $\tau+\varepsilon$. Since $X$ and $Y$ are not parallel at $p$, we can choose a coordinate system $(x,y)$ such that $X(p)=(1,-1)$ and $Y(p)=(1,1)$. We denote $p=(p_x,p_y)$, $X(x,y)=(X_1(x,y),X_2(x,y))$, $Y(x,y)=(Y_1(x,y),Y_2(x,y))$. The fields $X$ and $Y$ being continuous, there exists $\alpha>0$ such that, if $(x,y)\in B_\infty(\alpha)= \{(a,b)\;|\; |a-p_x|<\alpha, \; \; |b-p_y|<\alpha \}$, then $X_1(x,y)$, $Y_1(x,y)$, $-X_2(x,y)$, and $Y_2(x,y)$ are in $[{1}/{2},{3}/{2}]$. Let $p'=(p_x-\frac{\alpha}{10},p_y)$ and consider $\gamma_X(p',.)=(\gamma_X^1(p',.),\gamma_X^2(p',.))$. Its first coordinate $\gamma_X^1(p',.)$ is increasing and its derivative takes values in $[{1}/{2},{3}/{2}]$. The same is true for $-\gamma_X^2(p',.)$. Hence $\gamma_X(p',.)$ does not leave the set $B_\infty(\alpha)$ before time ${2\alpha}/{3}$. Since $\gamma_X^1(p',2\alpha/5)$ is larger than $p_x+\frac{\alpha}{10}$ and $\gamma_X^2(p',2\alpha/5)$ is in $[p_y-\frac{3\alpha}{10},p_y-\frac{\alpha}{10}]$, then the curve $\gamma_X(p',.)$ intersects the segment $S_p=B_\infty(\alpha)\cap\{(x,y)|x=p_x+\frac{\alpha}{10}\}$ in a time $\tau_X$ smaller than ${2\alpha}/{5}$. The same occurs for $\gamma_Y(p',.)$. Denote by $\tau_Y$ its intersection time with $S_p$. Choose as $V_p$ the bounded set whose boundary is given by the union of $\gamma_X(p',[0,\tau_X])$, $\gamma_Y(p',[0,\tau_Y])$, and the segment $[\gamma_X(p',\tau_X),\gamma_Y(p',\tau_Y)]=\{\lambda\,\gamma_X(p',\tau_X)+(1-\lambda)\,\gamma_Y(p',\tau_Y)\,|\;0\leq \lambda\leq 1\}$ (see Figure~\ref{TTT}). \begin{figure} \caption{The set $V_p$} \label{TTT} \end{figure} The following lemma states that $V_p$ satisfies the required properties. 
As a consequence, $p$ cannot be the accumulation point of any admissible curve. \begin{ml} \label{pT} We have the following: (i) $V_p$ is a neighborhood of $p$; (ii) every admissible curve entering $V_p$ leaves $V_p$ in a time smaller than ${2\alpha}/{5}$ through the segment $[\gamma_X(p',\tau_X),\gamma_Y(p',\tau_Y)]$; (iii) once an admissible curve leaves $V_p$, it enters ${\cal A}(p')\setminus V_p$ and never leaves it. \end{ml} \noindent{\bf Proof. } The first point follows from the construction of $V_p$. As for ({\it ii}), notice that all the points of $V_p$ have first coordinate in $[p_x-\frac{\alpha}{10},p_x+\frac{\alpha}{10}]$. Since the first coordinate of $X$ and $Y$ is larger than ${1}/{2}$, then every admissible curve entering $V_p$ leaves it in a time smaller than ${2\alpha}/{5}$. Moreover, since along $\gamma_X(p',.)$ and $\gamma_Y(p',.)$ the admissible velocities of \r{s-system} point inside $V_p$, then an admissible curve can leave $V_p$ only through the segment $[\gamma_X(p',\tau_X),\gamma_Y(p',\tau_Y)]$. Finally, ({\it iii}) follows from the remark that ${\cal A}(p')\setminus V_p$ is invariant for the dynamics, since the admissible velocities of \r{s-system} point inside ${\cal A}(p')\setminus V_p$ all along its boundary. \ \rule{0.5em}{0.5em} \subsection{Conclusion of the proof of Theorem \ref{t-mai-paralleli}} We are left to prove that \r{s-system} is uniformly stable. To this end, fix $\delta>0$. Since both $X$ and $Y$ are stable at the origin, then there exists $\varepsilon>0$ such that every integral curve of $X$ or $Y$ starting in $B_\varepsilon$ is contained in $B_\delta$. Hence, for every $q\in B_\varepsilon$, the boundary of ${\cal A}(q)$ is contained in $B_\delta$. Therefore, ${\cal A}(q)$, being bounded, is itself contained in $B_\delta$.
\ \rule{0.5em}{0.5em} \begin{rmk} The proof of Theorem~\ref{t-mai-paralleli} naturally extends to the following case: if $V$ is an open and simply connected subset of $\R^2$, if $X$ and $Y$ point inside $V$ along its boundary, and if ${\cal Z}\cap V=\{0\}$, then \r{s-system} is uniformly asymptotically stable on $V$. \end{rmk} \section{Proof of Theorem \ref{t-mai-tangent}}\label{mt} The proof follows the same main steps as the proof of Theorem \ref{t-mai-paralleli}. The idea is again to fix a point $q\in\R^2$, to characterize the boundary of its accessible set ${\cal A}(q)$, to prove that such set is bounded, and, finally, to show that no admissible curve has an accumulation point different from the origin. In order to describe the boundary of ${{\cal A}(q)}$, we need an extra construction. Notice that every component $\Gamma$ of ${\cal Z}$ separates the plane into two parts. Since $\Gamma$ contains no tangency points, one of these two regions must be invariant for $X$, and the same argument holds for $Y$. Necessarily, the invariant region is the one containing the origin, which is attractive both for $X$ and $Y$. In particular, $\Gamma$ is direct and every admissible curve crosses $\Gamma$ at most once. Associate with every point $q\in \R^2$ the number $n(q)$ of components of ${\cal Z}$ that the curve $\gamma_X(q,.)$ crosses at strictly positive times, before converging to the origin (see Figure~\ref{nq}). Since the curve $\gamma_X(q,(0,\infty))$ is bounded and crosses each component of ${\cal Z}$ at most once, then $n(q)$ is finite. \begin{figure}\label{nq} \end{figure} For every $i\leq n(q)$, let us denote by $\Gamma_i$ the $i$-th component of ${\cal Z}$ crossed by $\gamma_X(q,.)$. We claim that $\gamma_Y(q,.)$ crosses exactly the same components as $\gamma_X(q,.)$, in the same order.
Otherwise, as one can easily check, $X$ and $Y$ would not both be GAS at the origin (the reason is that the components of ${\cal Z}$ separate the plane and can be crossed by an admissible curve at most once). Let us define two admissible curves, starting from $q$, that can be used to characterize the boundary of ${\cal A}(q)$, in analogy with what has been done in the proof of Theorem \ref{t-mai-paralleli}. The first of such curves follows the flow of $X$ until it reaches $\Gamma_1$, then follows the flow of $Y$ until it crosses $\Gamma_2$, and so on. The second one alternates the flows of $X$ and $Y$ in the reverse order, starting with $Y$ and switching to $X$ as it meets $\Gamma_1$. These two curves converge to the origin, since $n(q)$ is finite. As in the proof of Theorem~\ref{t-mai-paralleli}, we can distinguish two cases, depending on whether the two curves intersect or not (see Figure~\ref{acc3}). \begin{figure}\label{acc3} \end{figure} The arguments of Section~\ref{mp} can be adapted in order to prove the boundedness of \r{s-system} and the absence of accumulation points different from the origin. The details are left to the reader.\ \rule{0.5em}{0.5em} \section{Proof of Theorem \ref{t-solo-compatte}}\label{sc} Consider a system of coordinates $(x,y)$ on $\R^2$ which preserves the origin and renders $X$ radial outside a ball $B_{R_0}$, $R_0>0$. (Such a system can be defined using the level sets of a smooth Lyapunov function for $X$, see \cite{g2g}.) Taking possibly a larger $R_0$, we can assume that $X$ and $Y$ are never collinear in $\R^2\setminus B_{R_0}$. For every $R>0$, let $$\Omega_R=\cup_{p\in B_R} {\cal A}(p)\,.$$ Our aim is to prove that each $\Omega_R$ is bounded. Fix $R>R_0+1$. If $(X,Y)$ is replaced with a pair of vector fields $(X',Y')$ which coincides with $(X,Y)$ outside $B_{R_0+1}$, then the set $\Omega_R$, constructed as above, does not change.
The idea is to choose $X'$ and $Y'$ in such a way that they are never parallel outside the origin and still GAS. The boundedness of $\Omega_R$ then follows from Theorem \ref{t-mai-paralleli}. Set \begin{eqnarray*} X_0(x,y)&=&-x\partial_x-y\partial_y\,,\\ Y_0(x,y)&=&y\partial_x-x\partial_y+\lambda X_0\,,\ \ \ \ \ \ \ \ \ \lambda>0\,, \end{eqnarray*} and notice that $X$ and $X_0$ are collinear outside $B_{R_0}$. Notice, moreover, that, if $\lambda$ is large enough, then the angle between $X_0$ and $Y_0$ is smaller than the minimum of the angles between $X$ and $Y$ in $B_{R_0+1}\setminus B_{R_0}$ (see Figure~\ref{rad2}). Fix such a $\lambda$. \begin{figure}\label{rad2} \end{figure} The function $Q$ has constant sign on $\R^2\setminus B_{R_0}$. Without loss of generality, we can assume that it is positive. Fix a smooth function $\phi:[0,+\infty)\rightarrow[0,1]$ such that $\phi(r)=0$ if $r\leq R_0$ and $\phi(r)=1$ if $r\geq R_0+1$. Define \begin{eqnarray*} X'(x,y)&=&\left( 1-\phi\left(\sqrt{x^2+y^2}\right)\right) X_0(x,y)+\phi\left(\sqrt{x^2+y^2}\right) X(x,y)\,,\\ Y'(x,y)&=&\left( 1-\phi\left(\sqrt{x^2+y^2}\right)\right) Y_0(x,y)+\phi\left(\sqrt{x^2+y^2}\right) Y(x,y)\,. \end{eqnarray*} By construction, $(X',Y')$ coincides with $(X,Y)$ outside $B_{R_0+1}$ and $\mathit{det}(X',Y')$ is strictly positive on $\R^2\setminus\{0\}$. We are left to check the global asymptotic stability of $Y'$, that of $X'$ being evident. This can be done by using a comparison argument between the integral curves of $Y$ and $Y'$. Indeed, since the angle between $X'$ and $Y'$ is smaller than the angle between $X'$ and $Y$ in $B_{R_0+1}\setminus B_{R_0}$, then the integral curve of $Y'$ starting from a point $q\in\R^2\setminus B_{R_0}$ reaches $B_{R_0}$ in finite time, with a smaller total variation in the angular component than the integral curve of $Y$ starting from the same point $q$.
\ \rule{0.5em}{0.5em} \begin{rmk} The proof given above applies, without modifications, to the more general case where the points at which $X$ and $Y$ are globally asymptotically stable are allowed to be different. \end{rmk} \begin{rmk}\label{gus} The conclusion of Theorem~\ref{t-solo-compatte} would not hold under the weaker hypothesis that $X$ and $Y$ are GUS, instead of GAS. A counterexample can be given as follows: let $\varphi:[0,1]\rightarrow\R$ be a smooth function such that $0<\varphi(t)<\pi/2$ for every $t\in(0,1)$ and $\varphi^{(k)}(0)=\varphi^{(k)}(1)=0$ for every $k\geq 0$. Denote by $(r,\theta)$ the polar coordinates on $\R^2$. Define, using the polar representation of vectors in $\R^2$, \[X(r,\theta)=\left\{\begin{array}{ll} \left( r,\theta+\frac\pi2+\varphi(r)\right)&\mbox{if }\ r\in[0,1]\,,\\[.8ex] \left( r,\theta+\frac\pi2-\varphi(r-[r])\right)&\mbox{if \ }r>1\,, \end{array} \right.\] and \[Y(r,\theta)=\left\{\begin{array}{ll} \left( r,\theta-\frac\pi2-\varphi(2r)\right)&\mbox{if }\ r\in\left[0,\frac12\right]\,,\\[.8ex] \left( r,\theta-\frac\pi2+\varphi\left( r+\frac12-\left[r+\frac12\right]\right)\right)&\mbox{if }\ r>\frac12\,, \end{array} \right.\] where $[r]$ denotes the integer part of $r$. Then, for every $r\geq 1$, $X(r,\theta)$ and $Y(r,\theta)$ are linearly independent, since the difference between their angular components is given by \[0<\pi-\varphi(r-[r])-\varphi\left( r+\frac12-\left[r+\frac12\right]\right)<\pi\,.\] Hence, ${\cal Z}$ is compact. On the other hand, the feedback strategy \[u(t)=\left\{\begin{array}{ll}0&\mbox{if }\ r-[r]\in\left[\frac14,\frac34\right),\\ 1&\mbox{otherwise} \end{array}\right.\] is such that, for every $p\in\R^2\setminus B_{3/4}$, $\|\gamma(p,u(.),t)\|$ tends to infinity as $t$ tends to ${\cal T}(p,u(.))=+\infty$. Notice that the example can be easily modified in such a way that ${\cal Z}$ not only is compact, but actually shrinks to $\{0\}$.
It suffices to take $X(r,\theta)=(r,\theta+\psi_X(r))$ and $Y(r,\theta)=(r,\theta+\psi_Y(r))$, where the graphs of $\psi_X$ and $\psi_Y$ are as in Figure~\ref{psis}. \begin{figure}\label{psis} \end{figure} \end{rmk} \section{Proof of Theorem \ref{t-inv-non-comp}}\label{inc} Let $\Gamma$ be an inverse unbounded component of ${\cal Z}$ and assume that {\bf (G1)} and {\bf (G3)} hold. Due to Lemma~\ref{genio}, $\Gamma$ is a one-dimensional submanifold of $\R^2$, which can be parameterized by an injective and smooth map $c:\R\rightarrow \R^2$. Fix a point $p=(p_x,p_y)=c(\tau)$ on $\Gamma$. According to the results by Davydov (see \cite[Theorem 2.2 ]{dav}), up to a change of coordinates (which, in particular, sets $p_x=0$), the vector fields $X$ and $Y$ can be represented locally by one of the following three normal forms \begin{enumerate} \item $X(x,y)=(1,x),\ Y(x,y)=(-1,x)$;\label{passing} \item $X(x,y)=(1,y-p_y-x^2),\ Y(x,y)=(-1,y-p_y-x^2)$;\label{tu1} \item $X(x,y)=(-1,x^2-y+p_y),\ Y(x,y)=(1,x^2-y+p_y)$.\label{tu2} \end{enumerate} \begin{figure}\label{FN} \end{figure} \indent Notice that the type \ref{passing} corresponds to the situation in which $X$ and $Y$ are transversal to $\Gamma$ at $p$, while \ref{tu1} and \ref{tu2} are the normal forms for the case in which $X$ and $Y$ are tangent to $\Gamma$ at $p$. Recall that $p$ is said to have the {\it small time local transitivity} property (STLT, for short) if, for every $T>0$ and every neighborhood $V$ of $p$, there exists a neighborhood $W$ of $p$ such that every two points in $W$ are accessible from each other within time $T$ by an admissible trajectory contained in $V$. It has been proved in \cite[Theorem 3.1 ]{dav} that, under the assumption that the system admits a local representation in normal form, $p$ has the STLT property if and only if it is of the type \ref{passing}. 
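As an illustrative aside (not part of the proof), the three normal forms can be checked numerically: the zero set of $\det(X,Y)$ and the transversal/tangential dichotomy come out exactly as described. The sample value of $p_y$ below is our own choice.

```python
import numpy as np

p_y = 0.7   # sample value of the ordinate of p; any value works

# Davydov's normal forms: type 1 (transversal), types 2-3 (tangential)
def fields(kind, x, y):
    if kind == 1:
        return np.array([1.0, x]), np.array([-1.0, x])
    if kind == 2:
        f = y - p_y - x ** 2
        return np.array([1.0, f]), np.array([-1.0, f])
    f = x ** 2 - y + p_y
    return np.array([-1.0, f]), np.array([1.0, f])

def det_XY(kind, x, y):
    X, Y = fields(kind, x, y)
    return X[0] * Y[1] - X[1] * Y[0]

# Type 1: det(X,Y) = 2x vanishes exactly on the line x = 0, and
# X(0,y) = (1,0) is transversal to that line.
assert abs(det_XY(1, 0.0, 3.0)) < 1e-12 and det_XY(1, 0.5, 3.0) > 0

# Types 2-3: det(X,Y) vanishes on the parabola y = p_y + x^2, whose
# tangent direction at p = (0, p_y) is (1,0), parallel to X(p):
# the tangential case.
for kind in (2, 3):
    for x in np.linspace(-1.0, 1.0, 11):
        assert abs(det_XY(kind, x, p_y + x ** 2)) < 1e-12
Xp, _ = fields(2, 0.0, p_y)
assert abs(Xp[0] * 0.0 - Xp[1] * 1.0) < 1e-12   # cross product with (1,0)
```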
In particular, if $p$ is of the type \ref{passing}, then there exist $t(p),T(p)>0$ such that, for every $r,s\in (-t(p),t(p))$ there exists an admissible trajectory which steers $c(\tau+r)$ to $c(\tau+s)$ within time $T(p)$. Assume now that $p$ is a point of the type \ref{tu1} or \ref{tu2}. The curve $\Gamma$ stays (locally) on one side of the affine line $$p+\mbox{span}(X(p))=\{(x,p_y)\,|\;x\in\R\}\,,$$ which is the affine tangent space to $\Gamma$ at $p$. Up to a reversion in the parameterization of $\Gamma$, we can assume that, for every $t$ in a right neighborhood of $\tau$, $X(c(t))$ points into the locally convex part of the plane bounded by $\Gamma$ (see Figure~\ref{FN}). It can be easily verified that the two branches of $\Gamma\setminus\{p\}$ are connected by integral curves of $X$ and $Y$ arbitrarily close to $p$, in the following sense: for every $t>0$ small enough, there exist $\theta,T>0$ such that, for every $r\in(0,\theta)$, both curves $s\mapsto \gamma_X(c(\tau+r),s)$ and $s\mapsto \gamma_Y(c(\tau-r),s)$ intersect $\Gamma$ in a positive time smaller than $T$, and the intersection points are of the type $c(\tau+\rho)$, with $0<|\rho|\leq t$. We can conclude, using the STLT property at points of $\Gamma\setminus\{p\}$ close to $p$, that there exists $t(p)>0$ such that, for every $\mu\in(0,1)$, every two points of $$\Sigma=\{c(\tau+r)|\;\mu\,t(p)<|r|<t(p)\}$$ can be joined by an admissible trajectory of time-length bounded by a uniform $T(p,\mu)>0$. 
Therefore, given any pair of points $p_i=c(\tau_i),p_f=c(\tau_f)$ on $\Gamma$ of type \ref{passing}, there exists an admissible trajectory going from $p_i$ to $p_f$ of time-length smaller than $T(c(\tau_1),\mu_1)+\cdots+T(c(\tau_k),\mu_k)$, where $$(\tau_1-t(c(\tau_1)),\tau_1+t(c(\tau_1))),\ldots,(\tau_k-t(c(\tau_k)),\tau_k+t(c(\tau_k)))$$ is a covering of the compact segment of $\R$ bounded by $\tau_i$ and $\tau_f$, $\mu_1,\ldots,\mu_k\in (0,1)$ are properly chosen and $T(p,\mu)=T(p)$ if $p$ is of type \ref{passing}. In particular, system (\ref{s-system}) admits trajectories going to infinity.\ \rule{0.5em}{0.5em} \begin{rmk} In the non-generic case the statement of Theorem \ref{t-inv-non-comp} is false. A counterexample can be found even in the linear case. Indeed, consider the vector fields \begin{eqnarray} &&X(q)=A\,q,\ \ \ \ \ \ \ \mbox{ where}~~~ A=\left(\ba{cc} -1/20& -1/E\\ E&-1/20\end{array}\right),~~~ E=-\frac{201}{200}-\frac{\sqrt{401}}{200}\,,\nonumber\\ &&Y(q)=B\,q,\ \ \ \ \ \ \ \mbox{ where}~~~ B=\left(\ba{cc} -1/20& -1\\1&-1/20\end{array}\right). \end{eqnarray} The integral curves of $X$ are ``elliptical spirals'', while the integral curves of $Y$ are ``circular spirals''. The integral curves of $X$ and $Y$ rotate around the origin in opposite senses (since $E<0$). One can easily check that, in this case, the set ${\cal Z}$ is a single straight line of equation \begin{eqnarray} y=-\frac{20}{\sqrt{401}-1}x\,, \end{eqnarray} and its two components are inverse (see Figure~\ref{ex-l}). It can be checked by hand that the switched system defined by $X$ and $Y$ is GUS, although not GUAS (see also \cite{SIAM}, Theorem 2.3, case (CC.3)). \begin{figure}\label{ex-l} \end{figure} \end{rmk} \newcommand{\auth}[1]{\textsc{#1}} \newcommand{\tit}[1]{\textit{#1}} \newcommand{\jou}[1]{\textrm{#1}} \newcommand{\bff}[1]{{\bf {#1}}} \newcommand{\bbf}[1]{{\bf {#1}}} \end{document}
\begin{document} \title{The damped stochastic wave equation on p.c.f. fractals\footnote{Keywords: Stochastic partial differential equation, wave equation, fractal, Dirichlet form}} \begin{abstract} A p.c.f. fractal with a regular harmonic structure admits an associated Dirichlet form, which is itself associated with a Laplacian. This Laplacian enables us to give an analogue of the damped stochastic wave equation on the fractal. We show that a unique function-valued solution exists, which has an explicit formulation in terms of the spectral decomposition of the Laplacian. We then use a Kolmogorov-type continuity theorem to derive the spatial and temporal H\"{o}lder exponents of the solution. Our results extend the analogous results on the stochastic wave equation in one-dimensional Euclidean space. It is known that no function-valued solution to the stochastic wave equation can exist in Euclidean dimension two or higher. The fractal spaces that we work with always have spectral dimension less than two, and we show that this is the right analogue of dimension to express the ``curse of dimensionality'' of the stochastic wave equation. Finally, we prove some results on the convergence to equilibrium of the solutions. \end{abstract} \section{Introduction} The aim of this paper is to investigate the properties of some hyperbolic stochastic partial differential equations (SPDEs) on finitely ramified fractals. In one dimension, \cite{walsh1986} motivated this problem as understanding the behaviour of a guitar string in a sandstorm. That is, we have a one-dimensional string which is forced by white noise at every point in time and space, and we are interested in the `music': the properties of the resulting waves induced in the string. In the fractal setting we may think of the vibrations of a fractal drum in a sandstorm. 
For a two-dimensional drum, it is known that the solutions to the stochastic wave equation are no longer functions and thus it is of interest to see what happens in the case of finitely ramified fractals which behave analytically as objects with dimension between one and two. As yet the theory for the behaviour of waves propagating through a fractal is much less developed than that for the diffusion of heat and we will not discuss such deterministic waves. Instead we consider the regularity properties of the waves starting from rest and arising from forcing by white noise, which are easier to capture, as it is the noise and its smoothing via the Laplacian which are crucial to understanding the behaviour of the waves. The damped stochastic wave equation on $\mathbb{R}^n$, $n \geq 1$ is the SPDE given by \begin{equation}\label{RnSWE} \begin{split} \frac{\partial^2 u}{\partial t^2}(t,x) &= -2\beta \frac{\partial u}{\partial t}(t,x) + \Delta u(t,x) + \xi(t,x),\\ u(0,\cdot) &= \frac{\partial u}{\partial t}(0,\cdot) = 0, \end{split} \end{equation} where $\beta \geq 0$, $\Delta = \Delta_x$ is the Laplacian on $\mathbb{R}^n$ and $\xi$ is a space-time white noise on $[0,\infty) \times \mathbb{R}^n$, where we interpret $x \in \mathbb{R}^n$ as space and $t \in [0,\infty)$ as time. The equation \eqref{RnSWE} can equivalently be written as a system of stochastic evolution equations in the following way: \begin{equation*} \begin{split} du(t) &= \dot{u}(t)dt,\\ d\dot{u}(t) &= -2\beta \dot{u}(t)dt + \Delta u(t)dt + dW(t),\\ u(0) &= \dot{u}(0) = 0 \in L^2(\mathbb{R}^n), \end{split} \end{equation*} where $W$ is a cylindrical Wiener process on $L^2(\mathbb{R}^n)$, and the solution $u$ and its (formal) derivative $\dot{u}$ are processes taking values in some space of functions on $\mathbb{R}^n$. Here we have used instead the differential notation of stochastic calculus, and one should not presume any a priori relationship between $u$ and $\dot{u}$. 
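The mode-by-mode structure of this system can be illustrated numerically in the classical compact case $F=[0,1]$ with Dirichlet boundary conditions, where the eigenpairs of $-\Delta$ are $\lambda_k = (k\pi)^2$ with eigenfunctions $\sqrt{2}\sin(k\pi x)$. The sketch below is our own illustration, not a construction from the paper: truncation level, time step and damping value are arbitrary choices, and a semi-implicit Euler step is used for numerical stability.

```python
import numpy as np

rng = np.random.default_rng(0)

beta, K = 0.5, 30            # damping; number of retained modes (illustrative)
T, n_steps = 1.0, 4000
dt = T / n_steps
lam = (np.pi * np.arange(1, K + 1)) ** 2    # Dirichlet eigenvalues on [0,1]

# Mode-by-mode, (u_k, v_k) solves du_k = v_k dt,
# dv_k = (-2 beta v_k - lam_k u_k) dt + dB_k; the semi-implicit Euler
# step below is stable here since dt * sqrt(lam_K) is small.
u = np.zeros(K)
v = np.zeros(K)
for _ in range(n_steps):
    v = v + (-2.0 * beta * v - lam * u) * dt + rng.normal(0.0, np.sqrt(dt), K)
    u = u + v * dt

x = np.linspace(0.0, 1.0, 101)
k = np.arange(1, K + 1)
phi = np.sqrt(2.0) * np.sin(np.outer(x, k * np.pi))  # orthonormal eigenfunctions
u_field = phi @ u                                    # truncated u(T, x)

assert np.all(np.isfinite(u_field))
# Dirichlet boundary conditions hold by construction:
assert abs(u_field[0]) < 1e-10 and abs(u_field[-1]) < 1e-10
```

The same recipe, with $(\lambda_k,\phi_k)$ replaced by the eigenpairs of a fractal Laplacian, is the spectral approach taken up later in the paper.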
The damped stochastic wave equation (SWE) was introduced in \cite{Cabana1970} in the case $n = 1$, and a unique solution was found via a Fourier transform. If $\beta = 0$, there is no damping, and this is the stochastic wave equation. The solution then has a neat characterisation given in \cite[Theorem 3.1]{walsh1986} as a rotated modified Brownian sheet in $[0,\infty) \times \mathbb{R}$, and this immediately implies that it is jointly H\"{o}lder continuous in space and time for any H\"{o}lder exponent less than $\frac{1}{2}$. These properties, however, do not carry over into spatial dimensions $n \geq 2$. Indeed, for $n \geq 2$ a solution to \eqref{RnSWE} still exists, but it is not function-valued. It is necessary to expand our space beyond $L^2(\mathbb{R}^n)$ to include certain distributions in order to make sense of the solution. This is related to the fact that $n$-dimensional Brownian motion has local times if and only if $n = 1$, see \cite{foondun2011} and further references. There is thus a distinct change in the behaviour of the SPDE \eqref{RnSWE} between dimensions $n = 1$ and $n \geq 2$. One of the aims of the present paper is to investigate the behaviour of the SPDE in the case that dimension (appropriately interpreted) is in the interval $[1,2)$. When does a function-valued solution exist, and if it does, what are its space-time H\"{o}lder exponents? To answer these questions we introduce a class of fractals. The theory of analysis on fractals started with the construction of a symmetric diffusion on the two-dimensional Sierpinski gasket in \cite{goldstein1987}, \cite{kusuoka1987} and \cite{barlow1988}, which is now known as \textit{Brownian motion on the Sierpinski gasket}. The field has grown quickly since then; see \cite{kigami2001} and \cite{barlow1998} for analytic and probabilistic introductions respectively. 
In \cite{kigami2001} it is shown that a certain class of fractals, known as \textit{post-critically finite self-similar} (or \textit{p.c.f.s.s.}) sets with \textit{regular harmonic structures}, admit operators $\Delta$ akin to the Laplacian on $\mathbb{R}^n$. This class includes many well-known fractals such as the $n$-dimensional Sierpinski gasket (for $n \geq 2$) and the Vicsek fractal, though not the Sierpinski carpet. The operators $\Delta$ generate symmetric diffusions on their respective fractals in the same way that the Laplacian on $\mathbb{R}^n$ is the generator of Brownian motion on $\mathbb{R}^n$, and we therefore refer to them also as ``Laplacians'', see \cite{barlow1998}. In particular, the existence of a Laplacian $\Delta$ on a given fractal $F$ allows us to formulate PDEs analogous to the heat equation and the wave equation on $F$. The heat equation on $F$ has been widely studied, see \cite[Chapter 5]{kigami2001} and many other papers showing results such as sub-Gaussian decay of the heat kernel. It is possible in the same way to formulate certain SPDEs on these fractals; for example the stochastic heat equation \cite{hambly2016} and, the subject of the present paper, the damped stochastic wave equation on $F$. The \textit{spectral dimension} $d_s$, defined as the exponent for the asymptotic scaling of the eigenvalue counting function of $\Delta$, for any of these fractals satisfies $d_s < 2$, and is the correct definition of dimension to use when investigating the analytic properties of the SPDE. Since all of our fractals are compact, we can use spectral methods to vastly simplify the problem and find a solution explicitly in terms of the eigenvalues and eigenfunctions of the Laplacian. Previous work on hyperbolic PDEs and SPDEs on fractals is sparse. The wave equation was first introduced in \cite{kusuoka1987}. Since then, there have been two strands of work, either focusing on bounded or on unbounded fractals. 
In the case of bounded fractals \cite{dalrymple1999} gave strong evidence that there would be infinite propagation speed for the deterministic wave equation and \cite{hu2002} showed existence and uniqueness for a non-linear wave equation. For the unbounded case there is work by \cite{KusuokaZhou1998} and \cite{Strichartz2010} discussing the long time behaviour of waves on manifolds with large scale fractal structure and on fractals themselves. In \cite{foondun2011} it is mentioned that the stochastic heat equation on certain fractals has a so-called ``random-field'' solution as long as the Hausdorff dimension of the fractal is less than 2. The stochastic wave equation is studied elsewhere in that paper but an analogous result is not given. In \cite{hambly2016} the stochastic heat equation on p.c.f.s.s. sets with regular harmonic structures is shown to have continuous function-valued solutions, as the spectral dimension is less than 2, and its spatial and temporal H\"{o}lder exponents are computed; this can be seen to be the direct predecessor of the present paper and is the source of many of the ideas that we use in the following sections. The structure of the present paper is as follows: In the next subsection we set up the problem, state the precise SPDE to be solved and summarise the main results of the paper. In Section \ref{sec:existenceSWE} we make precise the definition of a solution to the damped stochastic wave equation and prove the existence of a unique solution $u$ in the form of an $L^2$-valued process. We show that it is a solution in both a ``mild'' sense and a ``weak'' sense. Then, in Section \ref{sec:regsol}, we show that this solution is H\"{o}lder continuous in $L^2$ and that the point evaluations $u(t,x)$ are well-defined random variables. The latter is a necessary condition for us to be able to consider matters of continuity in space and time. 
In Section \ref{sec:holderst} we utilise a Kolmogorov-type continuity theorem for fractals proven in \cite{hambly2016} to deduce the spatial and temporal H\"{o}lder exponents of the solution $u$. In Section \ref{sec:equilib} we give results that describe the long-time behaviour of the solutions for any given set of parameters, in particular whether or not they eventually settle down into some equilibrium measure. \subsection{Description of the problem} We use an identical set-up to \cite{hambly2016}. Let $M \geq 2$ be an integer. Let $(F, (\psi_i)_{i=1}^M)$ be a connected p.c.f.s.s. set (see \cite{kigami2001}) such that $F$ is a compact metric space and the $\psi_i: F \to F$ are injective strict contractions on $F$. Let $I = \{ 1,\ldots,M \}$ and for each $n \geq 0$ let $\mathbb{W}_n = I^n$. Let $\mathbb{W}_* = \bigcup_{n \geq 0} \mathbb{W}_n$ and let $\mathbb{W} = I^\mathbb{N}$. We call the sets $\mathbb{W}_n$, $\mathbb{W}_*$ and $\mathbb{W}$ \textit{word spaces} and we call their elements \textit{words}. Note that $\mathbb{W}_0$ is a singleton containing an element known as the \textit{empty word}. We use the notation $w = w_1w_2w_3\ldots$ with $w_i \in I$ for words $w \in \mathbb{W}_* \cup \mathbb{W}$. For a word $w = w_1, \ldots ,w_n \in \mathbb{W}_*$, let $\psi_w = \psi_{w_1} \circ \cdots \circ \psi_{w_n}$ and let $F_w = \psi_w(F)$. If $w$ is the empty word then $\psi_w$ is the identity on $F$. If $\mathbb{W}$ is endowed with the standard product topology then there is a canonical continuous surjection $\pi: \mathbb{W} \to F$ given in \cite[Lemma 5.10]{barlow1998}. Let $P \subset \mathbb{W}$ be the post-critical set of $(F, (\psi_i)_{i=1}^M)$, which is finite by assumption. Then let $F^0 = \pi(P)$, and for each $n \geq 1$ let $F^n = \bigcup_{w \in \mathbb{W}_n} \psi_w(F^0)$. Let $F_* = \bigcup_{n = 0}^\infty F^n$. It is easily shown that $(F^n)_{n\geq 0}$ is an increasing sequence of finite subsets and that $F_*$ is dense in $F$. 
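The construction of the approximating vertex sets $F^n$ can be made concrete. The sketch below is our own illustration, taking the two-dimensional Sierpinski gasket with corner points $p_1,p_2,p_3$ and contractions $\psi_i(x) = (x+p_i)/2$ (our choice of coordinates) as the p.c.f.s.s. set; the vertex counts $|F^n| = 3(3^n+1)/2$ are standard for the gasket.

```python
import numpy as np
from itertools import product

# Illustrative p.c.f.s.s. set: the 2-d Sierpinski gasket with corner
# points p_1, p_2, p_3 and contractions psi_i(x) = (x + p_i) / 2.
corners = [np.array([0.0, 0.0]), np.array([1.0, 0.0]),
           np.array([0.5, np.sqrt(3.0) / 2.0])]

def psi(i, x):
    return (x + corners[i]) / 2.0

def psi_word(w, x):
    """psi_w = psi_{w_1} o ... o psi_{w_n} for a word w in W_n."""
    for i in reversed(w):       # innermost letter is applied first
        x = psi(i, x)
    return x

def level_set(n):
    """F^n = union over words w in W_n of psi_w(F^0)."""
    return {tuple(np.round(psi_word(w, p), 10))
            for w in product(range(3), repeat=n) for p in corners}

F0, F1, F2 = level_set(0), level_set(1), level_set(2)
# (F^n) is an increasing sequence of finite sets, with
# |F^n| = 3 (3^n + 1) / 2 for the gasket.
assert F0 <= F1 <= F2
assert (len(F0), len(F1), len(F2)) == (3, 6, 15)
```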
Let the pair $(A_0,\textbf{r})$ be a regular irreducible harmonic structure on $(F, (\psi_i)_{i=1}^M)$ such that $\textbf{r} = (r_1,\ldots,r_M) \in \mathbb{R}^M$ for some constants $r_i > 0$, $i \in I$ (harmonic structures are defined in \cite[Section 3.1]{kigami2001}). Here \textit{regular} means that $r_i \in (0,1)$ for all $i$. Let $r_{\min} = \min_{i \in I} r_i$ and $r_{\max} = \max_{i \in I} r_i$. If $n \geq 0$ and $w = w_1\ldots w_n \in \mathbb{W}_n$, then write $r_w := \prod_{i=1}^n r_{w_i}$. Let $d_H > 0$ be the unique real number such that \begin{equation*} \sum_{i \in I} r_i^{d_H} = 1. \end{equation*} Then let $\mu$ be the Borel regular probability measure on $F$ such that for any $n \geq 0$, if $w \in \mathbb{W}_n$ then $\mu(F_w) = r_w^{d_H}$. In other words, $\mu$ is the self-similar measure on $F$ in the sense of \cite[Section 1.4]{kigami2001} associated with the weights $r_i^{d_H}$ on $I$. Let $(\mathcal{E},\mathcal{D})$ be the regular local Dirichlet form on $L^2(F,\mu)$ associated with this harmonic structure, as given by \cite[Theorem 3.4.6]{kigami2001}. This Dirichlet form is associated with a resistance metric $R$ on $F$, defined by \[ R(x,y) = \left(\inf\{ \mathcal{E}(f,f): f(x)=0, f(y)=1, f\in \mathcal{D}\}\right)^{-1}, \] which generates the original topology on $F$, by \cite[Theorem 3.3.4]{kigami2001}. Now let $2^{F^0} = \{ b: b \subseteq F^0 \}$ be the power set of $F^0$. Let $\mathcal{D}_{F^0} = \mathcal{D}$, and for proper subsets $b \in 2^{F^0}$ let \begin{equation*} \mathcal{D}_b = \{ f\in \mathcal{D}: f|_{F^0 \setminus b}=0 \}. \end{equation*} Then similarly to \cite[Corollary 3.4.7]{kigami2001}, $(\mathcal{E},\mathcal{D}_b)$ is a regular local Dirichlet form on $L^2(F \setminus (F^0 \setminus b),\mu)$. If $b = F^0$ then we may equivalently write $b=N$, and if $b = \emptyset$ then we may equivalently write $b=D$, see \cite{hambly2016}. 
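As a brief numerical aside, $d_H$ solves a scalar equation in one unknown and is easily found by bisection, since $d \mapsto \sum_i r_i^d$ is strictly decreasing for $r_i \in (0,1)$. The weight vectors below (the unit interval with $r = (1/2,1/2)$, and Hata's tree-like set with $h = 2$, discussed later) are our own illustrative choices.

```python
def d_H(r):
    """Unique d > 0 with sum_i r_i^d = 1; since each r_i is in (0,1),
    d -> sum_i r_i^d is strictly decreasing, so bisection applies."""
    lo, hi = 1e-9, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if sum(x ** mid for x in r) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def d_s(r):
    """Spectral dimension d_s = 2 d_H / (d_H + 1)."""
    d = d_H(r)
    return 2.0 * d / (d + 1.0)

# Unit interval with M = 2 and r = (1/2, 1/2): d_H = 1.
assert abs(d_H([0.5, 0.5]) - 1.0) < 1e-8
# Hata's tree-like set with h = 2, i.e. r = (1/h, 1 - 1/h^2):
d = d_H([0.5, 0.75])
assert 1.0 < d < 2.0 and 1.0 < d_s([0.5, 0.75]) < 2.0
```

Note that $d_s = 2d_H/(d_H+1) < 2$ automatically, which is the source of the standing bound on the spectral dimension.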
The letters $N$ and $D$ indicate \textit{Neumann} and \textit{Dirichlet} boundary conditions respectively, and all other values of $b$ indicate a \textit{mixed} boundary condition. Intuitively, $b$ gives the subset of $F^0$ of points that are free to move under the influence of the SPDE, whereas the remaining elements of $F^0$ are fixed at the value 0. Let $b \in 2^{F^0}$. By \cite[Chapter 4]{barlow1998}, associated with the Dirichlet form $(\mathcal{E},\mathcal{D}_b)$ on $L^2(F,\mu)$ is a $\mu$-symmetric diffusion $X^b = (X^b_t)_{t \geq 0}$ on $F$ which itself is associated with a $C_0$-semigroup of contractions $S^b = (S^b_t)_{t \geq 0}$ on $L^2(F,\mu)$. Let $\Delta_b$ be the generator of this diffusion. If $b = N$ then $X^N$ has infinite lifetime, by \cite[Lemma 4.10]{barlow1998}. On the other hand, if $b$ is a proper subset of $F^0$ then the process $X^b$ has the law of a version of $X^N$ which is killed at the points $F^0 \setminus b$, by \cite[Section 4.4]{Fukushima2011}. Our notation is identical to that of \cite{hambly2018}. \begin{exmp}\label{intervalSWE} (\cite[Example 1.1]{hambly2016}) Let $F = [0,1]$ and take \textit{any} $M \geq 2$. For $1 \leq i \leq M$ let $\psi_i: F \to F$ be the affine map such that $\psi_i(0) = \frac{i-1}{M}$, $\psi_i(1) = \frac{i}{M}$. It follows that $F^0 = \{0,1\}$. Let $r_i = M^{-1}$ for all $i \in I$ and let \begin{equation*} A_0 = \left( \begin{array}{cc} -1 & 1\\ 1 & -1 \end{array} \right). \end{equation*} Then all the conditions given above are satisfied. We have $\mathcal{D} = H^1[0,1]$ and $\mathcal{E}(f,g) = \int_0^1 f'g'$. The generators $\Delta_N$ and $\Delta_D$ are respectively the standard Neumann and Dirichlet Laplacians on $[0,1]$. The induced resistance metric $R$ is none other than the standard Euclidean metric. \end{exmp} Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. 
The \textit{damped stochastic wave equation} that we consider in the present paper is the SPDE (system) given by \begin{equation}\label{SWE} \begin{split} du(t) &= \dot{u}(t)dt,\\ d\dot{u}(t) &= -2\beta \dot{u}(t)dt + \Delta_bu(t)dt + dW(t),\\ u(0) &= \dot{u}(0) = 0 \in L^2(F,\mu), \end{split} \end{equation} where the \textit{damping coefficient} $\beta \geq 0$ and the \textit{boundary conditions} $b \in 2^{F^0}$ are parameters, and $W = (W(t))_{t \geq 0}$ is a $\mathbb{P}$-cylindrical Wiener process on $L^2(F,\mu)$. That is, $W$ satisfies \begin{equation*} \mathbb{E} \left[ \langle f,W(s) \rangle_{L^2(F,\mu)}\langle W(t),g \rangle_{L^2(F,\mu)} \right] = (s \wedge t) \langle f,g \rangle_{L^2(F,\mu)} \end{equation*} for all $s,t \in [0,\infty)$ and $f,g \in L^2(F,\mu)$. We would like the solution process $u = (u(t))_{t \geq 0}$ to be $L^2(F,\mu)$-valued; however, it is not clear whether the same should be required of the first-derivative process $\dot{u} = (\dot{u}(t))_{t \geq 0}$. This will be clarified in the following section. The main results of the present paper (Theorems \ref{SWEsoln}, \ref{ptregSWE} and \ref{SWEreg}) can be roughly paraphrased as follows: \begin{thm} Equip $F$ with its resistance metric $R$. The SPDE \eqref{SWE} has a unique solution which is a stochastic process $u = (u(t,x): (t,x) \in [0,\infty) \times F)$, which is almost surely jointly continuous in $[0,\infty) \times F$. For each $t \in [0,\infty)$, $u(t,\cdot)$ is almost surely essentially $\frac{1}{2}$-H\"{o}lder continuous in $(F,R)$. For each $x \in F$, $u(\cdot,x)$ is almost surely essentially $(1 - \frac{d_s}{2})$-H\"{o}lder continuous in the Euclidean metric, where $d_s \in [1,2)$ is the spectral dimension of $(F,R)$. \end{thm} The precise meaning of \textit{essentially} is given in Section \ref{sec:regsol}. 
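The defining covariance identity of the cylindrical Wiener process can be checked on a finite spectral truncation, writing $W(t) = \sum_k B_k(t) e_k$ with i.i.d. one-dimensional Brownian motions $B_k$ so that $\langle f, W(s)\rangle = \sum_k f_k B_k(s)$. A Monte Carlo sketch (truncation level, sample count, test times and test vectors below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 4, 200_000      # spectral truncation level and Monte Carlo samples
s, t = 0.3, 0.7

f = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2.0)
g = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2.0)

# Truncated cylindrical Wiener process: W(t) = sum_k B_k(t) e_k with
# i.i.d. one-dimensional Brownian motions B_k.
Bs = np.sqrt(s) * rng.normal(size=(N, K))             # samples of B(s)
Bt = Bs + np.sqrt(t - s) * rng.normal(size=(N, K))    # samples of B(t)

estimate = np.mean((Bs @ f) * (Bt @ g))
exact = min(s, t) * np.dot(f, g)                      # (s ^ t) <f, g>
assert abs(estimate - exact) < 0.02                   # Monte Carlo tolerance
```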
We see that the H\"{o}lder exponents given in the above theorem agree with the case ``$F = \mathbb{R}$'' described in the introduction---there we have $d_s = 1$, and the solution is a rotation of a modified Brownian sheet so it has essential H\"{o}lder exponent $\frac{1}{2}$ in every direction. Of course $\mathbb{R}$ is not compact so it doesn't exactly fit into our set-up, but we get a similar result by considering the interval $[0,1]$ instead, see Example \ref{intervalSWE}. \begin{exmp}[Hata's tree-like set] See \cite[Figure 1.4]{kigami2001} for a diagram. This p.c.f. fractal takes a parameter $c \in \mathbb{C}$ such that $|c|, |1-c| \in (0,1)$, with $F^0 = \{ c,0,1 \}$, as described in \cite[Example 3.1.6]{kigami2001}. It has a collection of regular harmonic structures given by \begin{equation*} A_0 = \left(\begin{array}{ccc} -h & h & 0\\ h & -(h+1) & 1\\ 0 & 1 & -1\\ \end{array}\right) \end{equation*} with $\textbf{r} = (h^{-1},1-h^{-2})$ for $h \in (1,\infty)$, and these all fit into our set-up. In the introduction to \cite{walsh1986} the stochastic wave equation on the unit interval is said to describe the motion of a guitar string in a sandstorm (as long as we specify Dirichlet boundary conditions). Likewise, by taking $b = \{ c, 1 \}$ in our tree-like set, we are ``planting'' it at the point 0 so the associated stochastic wave equation approximately describes the motion of a tree in a sandstorm. \end{exmp} For more examples see \cite[Example 1.3]{hambly2016}. \begin{rem} The resistance metric $R$ is not a particularly intuitive metric on $F$. However, many fractals have a natural embedding in Euclidean space $\mathbb{R}^n$, and subject to mild conditions on $F$ it can be shown that $R$ is equivalent to some positive power of the Euclidean metric, see \cite{hu2006}. An example is the $n$-dimensional Sierpinski gasket described in \cite[Example 1.3]{hambly2016} with $n \geq 2$. 
In \cite[Section 3]{hu2006} it is shown that there exists a constant $c > 0$ such that \begin{equation*} c^{-1}|x-y|^{d_w - d_f} \leq R(x,y) \leq c|x-y|^{d_w - d_f} \end{equation*} for all $x,y \in F \subseteq \mathbb{R}^n$, where $d_w = \frac{\log(n+3)}{\log2}$ is the \textit{walk dimension} of the gasket and $d_f = \frac{\log(n+1)}{\log2}$ is its Euclidean Hausdorff dimension. It follows that the above theorem holds (with a different spatial H\"{o}lder exponent) if $R$ is replaced with a Euclidean metric on $F$. We observe that this means that there are function-valued solutions to the stochastic wave equation for fractals with arbitrarily large Hausdorff dimension. \end{rem} \section{Existence and uniqueness of solution}\label{sec:existenceSWE} In this section we will make explicit the meaning of a \textit{solution} to the SPDE \eqref{SWE}, and show that such a solution exists and is unique. \begin{defn} Henceforth let $\mathcal{H} = L^2(F,\mu)$ and denote its inner product by $\langle \cdot,\cdot \rangle_\mu$. Moreover, for $\lambda > 0$ let $\mathcal{D}^\lambda$ be the space $\mathcal{D}$ equipped with the inner product \begin{equation*} \langle \cdot,\cdot \rangle_\lambda := \mathcal{E}(\cdot,\cdot) + \lambda \langle \cdot,\cdot \rangle_\mu. \end{equation*} Since $(\mathcal{E},\mathcal{D})$ is closed, $\mathcal{D}^\lambda$ is a Hilbert space. \end{defn} \begin{rem} The space $\mathcal{D}$ contains only $\frac{1}{2}$-H\"{o}lder continuous functions since by the definition of the resistance metric we have that \begin{equation} |f(x) - f(y)|^2 \leq R(x,y) \mathcal{E}(f,f) \end{equation} for all $f \in \mathcal{D}$ and all $x,y \in F$. Therefore, since $\mathcal{D}_b$ is the intersection of the kernels of the continuous linear functionals $\{f \mapsto f(x):\ x \in F^0 \setminus b \}$, it is a closed subset of any $\mathcal{D}^\lambda$ and has finite codimension $|F^0 \setminus b|$. 
\end{rem} \begin{defn} The unique real $d_H > 0$ such that \begin{equation*} \sum_{i \in I} r_i^{d_H} = 1 \end{equation*} is the \textit{Hausdorff dimension} of $(F,R)$, see \cite[Theorem 1.5.7]{kigami2001}. The \textit{spectral dimension} of $(F,R)$ is given by \begin{equation*} d_s = \frac{2d_H}{d_H + 1}, \end{equation*} see \cite[Theorem 4.1.5 and Theorem 4.2.1]{kigami2001}. Note by \cite[Remark 2.6]{hambly2016} that $d_H \in [1,\infty)$ and $d_s \in [1,2)$. \end{defn} If $A$ is a linear operator on $\mathcal{H}$ then we denote the domain of $A$ by $\mathcal{D}(A)$. If $A$ is bounded, then let $\Vert A \Vert$ be its operator norm. By \cite[Proposition 2.5]{hambly2018}, for each $b \in 2^{F^0}$ there exists an orthonormal basis $(\phi^b_k)_{k=1}^\infty$ of $\mathcal{H}$, where the associated eigenvalues $(\lambda^b_k)_{k=1}^\infty$ are assumed to be in increasing order. In particular any element $f \in \mathcal{H}$ has a series representation \begin{equation*} f = \sum_{k=1}^\infty f_k \phi^b_k \end{equation*} where $f_k = \langle \phi^b_k,f \rangle_\mu$. Then for any function $\Xi: \mathbb{R}_+ \to \mathbb{R}$, the map $\Xi(-\Delta_b)$ is a well-defined self-adjoint operator from $\mathcal{D}(\Xi(-\Delta_b))$ to $\mathcal{H}$ and has the representation \begin{equation*} \Xi(-\Delta_b)f = \sum_{k=1}^\infty f_k \Xi(\lambda^b_k) \phi^b_k, \end{equation*} where the domain $\mathcal{D}(\Xi(-\Delta_b)))$ is the subspace of $\mathcal{H}$ of exactly those $f$ for which the above expression is in $\mathcal{H}$. In fact the operator $\Xi(-\Delta_b)$ is densely defined since $\phi^b_k \in \mathcal{D}(\Xi(-\Delta_b)))$ for all $k$. This theory is known as the \textit{functional calculus} for linear operators, see \cite[Theorem VIII.5]{reed1981}. 
In particular, if $\alpha \geq 0$ then $(1-\Delta_b)^{\frac{\alpha}{2}}$ is an invertible operator on $\mathcal{H}$, and its inverse $(1-\Delta_b)^{-\frac{\alpha}{2}}$ is a bounded operator on $\mathcal{H}$ which is a bijection from $\mathcal{H}$ to $\mathcal{D}((1-\Delta_b)^{\frac{\alpha}{2}})$. \begin{defn} Let $\alpha \geq 0 $ be a real number and $b \in 2^{F^0}$. The bounded operator $(1-\Delta_b)^{-\frac{\alpha}{2}}$ is called a \textit{Bessel potential operator}, see \cite{strichartz2003}, \cite{issoglio2015}. Let $\mathcal{H}^{-\alpha}_b$ be the closure of $\mathcal{H}$ with respect to the inner product given by \begin{equation*} (f,g) \mapsto \langle (1-\Delta_b)^{-\frac{\alpha}{2}}f,(1-\Delta_b)^{-\frac{\alpha}{2}}g \rangle_\mu. \end{equation*} $\mathcal{H}^{-\alpha}_b$ is called a \textit{Sobolev space}, as in \cite{strichartz2003}. \end{defn} \begin{rem} Recall that $\mathcal{D}((1-\Delta_b)^{\frac{\alpha}{2}})$ is dense in $\mathcal{H}$. It follows that the operator $(1-\Delta_b)^{\frac{\alpha}{2}}: \mathcal{D}((1-\Delta_b)^{\frac{\alpha}{2}}) \to \mathcal{H}$ extends to an isometric isomorphism from $\mathcal{H}$ to $\mathcal{H}^{-\alpha}_b$ characterised by \begin{equation*} (1-\Delta_b)^{\frac{\alpha}{2}} \left( \sum_{k=1}^\infty f_k \phi^b_k \right) = \sum_{k=1}^\infty (1+ \lambda^b_k)^{\frac{\alpha}{2}} f_k \phi^b_k. \end{equation*} It is easy to see that $\left( (1+ \lambda^b_k)^{\frac{\alpha}{2}} \phi^b_k \right)_{k=1}^\infty$ is a complete orthonormal basis of $\mathcal{H}^{-\alpha}_b$. It follows that $\mathcal{H}$ is dense in $\mathcal{H}^{-\alpha}_b$. \end{rem} \subsection{Solution to the SPDE} Let $\oplus$ denote direct sum of Hilbert spaces. Let $\alpha \geq 0$. 
The SPDE \eqref{SWE} can be recast as a first-order SPDE on the Hilbert space $\mathcal{H} \oplus \mathcal{H}^{-\alpha}_b$ given by \begin{equation}\label{SWE2} \begin{split} dU(t) &= \mathcal{A}_{b,\beta} U(t) dt + d\mathcal{W}(t),\\ U(0) &= 0 \in \mathcal{H} \oplus \mathcal{H}^{-\alpha}_b, \end{split} \end{equation} where \begin{equation*} \mathcal{A}_{b,\beta} := \left(\begin{array}{cc} 0 & 1\\ \Delta_b & -2\beta\\ \end{array}\right) \end{equation*} is a densely defined operator on $\mathcal{H} \oplus \mathcal{H}^{-\alpha}_b$ with $\mathcal{D}(\mathcal{A}_{b,\beta}) = \mathcal{D} \left(\Delta_b^{(1-\frac{\alpha}{2}) \vee 0} \right) \oplus \mathcal{H}$ and \begin{equation*} \mathcal{W} := \left(\begin{array}{c} 0\\ W\\ \end{array}\right). \end{equation*} There is a precise definition of a solution to evolution equations of the form \eqref{SWE2} which is given in \cite[Chapter 5]{DaPrato1992}, so we can now finally define the notion of a solution to the second-order SPDE \eqref{SWE}. Note that it is still not clear what value of $\alpha$ should be picked. \begin{defn} Let $T \in (0,\infty]$. An $\mathcal{H}$-valued predictable process $u = (u(t))_{t=0}^T$ is a \textit{solution} to the SPDE \eqref{SWE} if there exists $\alpha \geq 0$ and an $\mathcal{H}^{-\alpha}_b$-valued predictable process $\dot{u} = (\dot{u}(t))_{t=0}^T$ such that \begin{equation*} U := \left(\begin{array}{c} u\\ \dot{u}\\ \end{array}\right) \end{equation*} is an $\mathcal{H} \oplus \mathcal{H}^{-\alpha}_b$-valued weak solution to the SPDE \eqref{SWE2} in the sense of \cite[Chapter 5]{DaPrato1992}. If $T = \infty$, then it is a \textit{global} solution. \end{defn} Admittedly, the above definition is lacking as it is very abstract and unintuitive. In Theorem \ref{SWEsoln} we prove that solutions to \eqref{SWE} also satisfy a property which is analogous to the concept of weak solution as defined in \cite[Chapter 5]{DaPrato1992}, and is much more instructive. 
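The Sobolev spaces $\mathcal{H}^{-\alpha}_b$ and the Bessel potential defined above also have a transparent finite-dimensional analogue. The sketch below (our own illustration, with a path-graph Laplacian standing in for $-\Delta_b$ and $\alpha = 1$) verifies that the rescaled eigenvectors $(1+\lambda^b_k)^{\alpha/2}\phi^b_k$ are orthonormal in the $\mathcal{H}^{-\alpha}_b$ inner product.

```python
import numpy as np

n, alpha = 6, 1.0
# Path-graph Laplacian as a finite-dimensional stand-in for -Delta_b:
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, phi = np.linalg.eigh(L)

# Bessel potential (1 - Delta_b)^{-alpha/2} via the spectral calculus:
bessel = phi @ np.diag((1.0 + lam) ** (-alpha / 2)) @ phi.T

def inner_sobolev(f, g):
    """<f,g> in H^{-alpha}: apply the Bessel potential, then <.,.>_mu."""
    return np.dot(bessel @ f, bessel @ g)

# The rescaled eigenvectors (1 + lam_k)^{alpha/2} phi_k form an
# orthonormal basis in the H^{-alpha} inner product.
scaled = phi * (1.0 + lam) ** (alpha / 2)   # scales column k by (1+lam_k)^{alpha/2}
gram = np.array([[inner_sobolev(scaled[:, j], scaled[:, k])
                  for k in range(n)] for j in range(n)])
assert np.allclose(gram, np.eye(n))
```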
\begin{defn} For $\lambda \geq 0$ and $\beta \geq 0$, let $V_\beta(\lambda,\cdot): [0,\infty) \to \mathbb{R}$ be the unique solution to the second-order ordinary differential equation \begin{equation} \begin{split} \frac{d^2v}{dt^2} &+ 2\beta \frac{dv}{dt} + \lambda v = 0,\\ v(0) &= 0,\ \frac{dv}{dt}(0) = 1. \end{split} \end{equation} Explicitly, \begin{equation*} V_\beta(\lambda,t) = \begin{cases}\begin{array}{lr} (\beta^2 - \lambda)^{-\frac{1}{2}}e^{-\beta t} \sinh \left((\beta^2 - \lambda)^{\frac{1}{2}}t \right) & \lambda < \beta^2,\\ t e^{-\beta t} & \lambda = \beta^2,\\ (\lambda - \beta^2)^{-\frac{1}{2}}e^{-\beta t} \sin \left((\lambda - \beta^2)^{\frac{1}{2}}t \right) & \lambda > \beta^2.\\ \end{array} \end{cases} \end{equation*} For fixed $\lambda$ and $\beta$, this function is evidently smooth in $[0,\infty)$. Let $\dot{V}_\beta(\lambda,\cdot) = \frac{d V_\beta}{d t}(\lambda,\cdot)$. \end{defn} \begin{rem} The different forms of $V_\beta$ correspond respectively to the motion of overdamped, critically damped and underdamped oscillators. \end{rem} \begin{lem}\label{lem:Scalconstr} Let $\alpha = 1$. Then for each $\beta \geq 0$ and $b \in 2^{F^0}$, the operator $\mathcal{A}_{b,\beta}$ generates a quasicontraction semigroup $\mathcal{S}^{b,\beta} = (\mathcal{S}^{b,\beta}_t)_{t \geq 0}$ on $\mathcal{H} \oplus \mathcal{H}^{-1}_b$ such that $\Vert \mathcal{S}^{b,\beta}_t \Vert \leq e^{\frac{t}{2}}$ for all $t \geq 0$. Moreover, the right column of $\mathcal{S}^{b,\beta}_t$ is given by \begin{equation*} \mathcal{S}^{b,\beta}_t = \left(\begin{array}{cc} \cdot & V_\beta(-\Delta_b,t)\\ \cdot & \dot{V}_\beta(-\Delta_b,t) \end{array}\right). \end{equation*} \end{lem} \begin{proof} Recall that \begin{equation*} \mathcal{A}_{b,\beta} = \left(\begin{array}{cc} 0 & 1\\ \Delta_b & -2\beta\\ \end{array}\right). 
\end{equation*} If $f \in \mathcal{D}(\Delta_b^\frac{1}{2})$, $g \in \mathcal{H}$ then \begin{equation*} \begin{split} &\left\langle \mathcal{A}_{b,\beta} \left(\begin{array}{c} f\\ g \end{array}\right) , \left(\begin{array}{c} f\\ g \end{array}\right) \right\rangle_{\mathcal{H} \oplus \mathcal{H}^{-1}_b}\\ &= \langle g,f \rangle_\mu + \langle \Delta_b(1-\Delta_b)^{-\frac{1}{2}} f, (1-\Delta_b)^{-\frac{1}{2}}g \rangle_\mu -2\beta \Vert (1-\Delta_b)^{-\frac{1}{2}}g \Vert_\mu^2\\ &= \langle (1-\Delta_b)(1-\Delta_b)^{-1} f,g \rangle_\mu + \langle \Delta_b(1-\Delta_b)^{-1} f, g \rangle_\mu -2\beta \Vert (1-\Delta_b)^{-\frac{1}{2}}g \Vert_\mu^2\\ &= \langle f,(1-\Delta_b)^{-1} g \rangle_\mu -2\beta \Vert (1-\Delta_b)^{-\frac{1}{2}}g \Vert_\mu^2\\ &\leq \frac{1}{2}\Vert f \Vert_\mu^2 + \frac{1}{2} \Vert (1-\Delta_b)^{-1} g \Vert_\mu^2 -2\beta \Vert (1-\Delta_b)^{-\frac{1}{2}}g \Vert_\mu^2 \end{split} \end{equation*} where in the last line we have used the Cauchy--Schwarz and Young inequalities. Now $\Vert (1-\Delta_b)^{-\frac{1}{2}} \Vert \leq 1$ by the functional calculus. It follows that \begin{equation*} \begin{split} &\left\langle \left(\mathcal{A}_{b,\beta} - \frac{1}{2}\right) \left(\begin{array}{c} f\\ g \end{array}\right) , \left(\begin{array}{c} f\\ g \end{array}\right) \right\rangle_{\mathcal{H} \oplus \mathcal{H}^{-1}_b}\\ &\leq -\frac{1}{2}\left( \Vert (1-\Delta_b)^{-\frac{1}{2}} g \Vert_\mu^2 - \Vert (1-\Delta_b)^{-1} g \Vert_\mu^2 \right) -2\beta \Vert (1-\Delta_b)^{-\frac{1}{2}}g \Vert_\mu^2\\ &\leq 0, \end{split} \end{equation*} which implies that the operator $\mathcal{A}_{b,\beta} - \frac{1}{2}$ is dissipative. Moreover, it can be easily checked that the operator $\lambda - \mathcal{A}_{b,\beta}$ is invertible for any $\lambda > 0$ with bounded inverse \begin{equation*} \left(\lambda - \mathcal{A}_{b,\beta} \right)^{-1} = \left(\begin{array}{cc} 2\beta + \lambda & 1\\ \Delta_b & \lambda\\ \end{array}\right) (\lambda(\lambda + 2\beta) - \Delta_b)^{-1}.
\end{equation*} It follows by the Lumer--Phillips theorem for reflexive Banach spaces \cite[Corollary II.3.20]{Engel2001} that $\mathcal{A}_{b,\beta} - \frac{1}{2}$ generates a contraction semigroup on $\mathcal{H} \oplus \mathcal{H}^{-1}_b$. Consequently, $\mathcal{A}_{b,\beta}$ generates a quasicontraction semigroup $\mathcal{S}^{b,\beta} = (\mathcal{S}^{b,\beta}_t)_{t \geq 0}$ on $\mathcal{H} \oplus \mathcal{H}^{-1}_b$ such that $\Vert \mathcal{S}^{b,\beta}_t \Vert \leq e^{\frac{t}{2}}$ for all $t \geq 0$. To construct the semigroup $\mathcal{S}^{b,\beta}$, we first observe that $\mathcal{H} \oplus \mathcal{H}^{-1}_b$ has a complete orthonormal basis given by \begin{equation*} \left\{ \left(\begin{array}{c} \phi^b_k\\ 0 \end{array}\right): k \in \mathbb{N} \right\} \cup \left\{ \left(\begin{array}{c} 0\\ (1+\lambda^b_k)^\frac{1}{2}\phi^b_k \end{array}\right): k \in \mathbb{N} \right\}, \end{equation*} and that all of the elements of this basis are in $\mathcal{D}(\mathcal{A}_{b,\beta})$. By a density argument, it suffices to compute how $\mathcal{A}_{b,\beta}$ acts on the elements of this basis. For $k \geq 1$ we see that \begin{equation*} \begin{split} \mathcal{A}_{b,\beta} \left(\begin{array}{c} \phi^b_k\\ 0 \end{array}\right) &= \left(\begin{array}{cc} 0 & 1\\ -\lambda^b_k & -2\beta\\ \end{array}\right) \left(\begin{array}{c} \phi^b_k\\ 0 \end{array}\right),\\ \mathcal{A}_{b,\beta} \left(\begin{array}{c} 0\\ (1+\lambda^b_k)^\frac{1}{2}\phi^b_k \end{array}\right) &= \left(\begin{array}{cc} 0 & 1\\ -\lambda^b_k & -2\beta\\ \end{array}\right) \left(\begin{array}{c} 0\\ (1+\lambda^b_k)^\frac{1}{2}\phi^b_k \end{array}\right). \end{split} \end{equation*} Therefore to compute the semigroup $\mathcal{S}^{b,\beta}$ it will suffice to take a simple matrix exponential.
We see that \begin{equation*} \exp\left[ \left(\begin{array}{cc} 0 & 1\\ -\lambda^b_k & -2\beta\\ \end{array}\right) t \right] = \left(\begin{array}{cc} \cdot & V_\beta(\lambda^b_k,t)\\ \cdot & \dot{V}_\beta(\lambda^b_k,t)\\ \end{array}\right), \end{equation*} where the left column of the matrix is not computed, as it will not be needed. It follows that the semigroup generated by $\mathcal{A}_{b,\beta}$ takes the form \begin{equation*} \mathcal{S}^{b,\beta}_t = \left(\begin{array}{cc} \cdot & V_\beta(-\Delta_b,t)\\ \cdot & \dot{V}_\beta(-\Delta_b,t) \end{array}\right). \end{equation*} \end{proof} \begin{prop}\label{SWE2soln} Let $\alpha = 1$. Then for each $\beta \geq 0$ and $b \in 2^{F^0}$ there is a unique global $\mathcal{H} \oplus \mathcal{H}^{-1}_b$-valued weak solution $U$ to the SPDE \eqref{SWE2} given by \begin{equation*} U(t) = \left(\begin{array}{c} \int_0^t V_\beta(-\Delta_b,t-s)dW(s)\\ \int_0^t \dot{V}_\beta(-\Delta_b,t-s)dW(s) \end{array}\right). \end{equation*} In particular, it is a centred Gaussian process and has an $\mathcal{H} \oplus \mathcal{H}^{-1}_b$-continuous version. \end{prop} \begin{proof} Following \cite[Section 5.1.2]{DaPrato1992}, we define the stochastic convolution \begin{equation*} W^b_\beta(t) := \int_0^t \mathcal{S}^{b,\beta}_{t-s} d\mathcal{W}(s) = \int_0^t \mathcal{S}^{b,\beta}_{t-s} \iota_2 dW(s) \end{equation*} for $t \geq 0$, where $\iota_2: \mathcal{H} \to \mathcal{H} \oplus \mathcal{H}^{-1}_b$ is the (bounded linear) map $f \mapsto \left(\begin{array}{c} 0\\ f \end{array}\right)$. For $a \in [0,1)$ we wish to show that \begin{equation*} \int_0^T t^{-a} \left\Vert \mathcal{S}^{b,\beta}_t \iota_2 \right\Vert^2_{\HS(\mathcal{H} \to \mathcal{H} \oplus \mathcal{H}^{-1}_b)} dt < \infty \end{equation*} for all $T > 0$, where $\Vert \cdot \Vert_{\HS(\mathcal{H} \to \mathcal{H} \oplus \mathcal{H}^{-1}_b)}$ denotes the Hilbert-Schmidt norm of operators from $\mathcal{H}$ to $\mathcal{H} \oplus \mathcal{H}^{-1}_b$.
We have that \begin{equation*} \begin{split} \int_0^T &t^{-a} \left\Vert \mathcal{S}^{b,\beta}_t \iota_2 \right\Vert^2_{\HS(\mathcal{H} \to \mathcal{H} \oplus \mathcal{H}^{-1}_b)} dt = \int_0^T t^{-a} \sum_{k=1}^\infty \left\Vert \mathcal{S}^{b,\beta}_t \iota_2 \phi^b_k \right\Vert^2_{\mathcal{H} \oplus \mathcal{H}^{-1}_b} dt\\ &= \sum_{k=1}^\infty \int_0^T t^{-a} \left\Vert \left(\begin{array}{c} V_\beta(-\Delta_b,t)\phi^b_k\\ \dot{V}_\beta(-\Delta_b,t)\phi^b_k\\ \end{array}\right) \right\Vert^2_{\mathcal{H} \oplus \mathcal{H}^{-1}_b} dt\\ &= \sum_{k=1}^\infty \int_0^T t^{-a} V_\beta(\lambda^b_k,t)^2 dt + \sum_{k=1}^\infty \int_0^T t^{-a} (1+ \lambda^b_k)^{-1}\dot{V}_\beta(\lambda^b_k,t)^2 dt, \end{split} \end{equation*} and we treat the above two sums separately. Now $t \mapsto t^{-a} V_\beta(\lambda^b_k,t)^2$ is always integrable in $[0,T]$, so only the convergence of the sum over $k$ is in question. Since there are only finitely many $k$ such that $\lambda^b_k \leq \beta^2$, it suffices to consider the case $\lambda^b_k > \beta^2$. In this case we have that \begin{equation*} \begin{split} \int_0^T t^{-a} V_\beta(\lambda^b_k,t)^2 dt &= (\lambda^b_k - \beta^2)^{-1} \int_0^T t^{-a} e^{-2\beta t} \sin^2 \left((\lambda^b_k - \beta^2)^{\frac{1}{2}}t \right) dt\\ &\leq (\lambda^b_k - \beta^2)^{-1} (1-a)^{-1} T^{1-a}. \end{split} \end{equation*} It follows that \begin{equation*} \begin{split} \sum_{k=1}^\infty \int_0^T t^{-a} V_\beta(\lambda^b_k,t)^2 dt &\leq \sum_{k: \lambda^b_k \leq \beta^2} \int_0^T t^{-a} V_\beta(\lambda^b_k,t)^2 dt + \frac{T^{1-a}}{1-a} \sum_{k: \lambda^b_k > \beta^2} (\lambda^b_k - \beta^2)^{-1} \end{split} \end{equation*} which is finite by \cite[Proposition 2.5]{hambly2018}. We use a similar method for the $\dot{V}_\beta$ sum.
Taking $a=0$, it thus follows from \cite[Theorem 5.4]{DaPrato1992} that the SPDE \eqref{SWE2} has a unique global solution $U = (U(t))_{t=0}^\infty$ in $\mathcal{H} \oplus \mathcal{H}^{-1}_b$ given by \begin{equation*} U(t) = W^b_\beta(t) = \left(\begin{array}{c} \int_0^t V_\beta(-\Delta_b,t-s)dW(s)\\ \int_0^t \dot{V}_\beta(-\Delta_b,t-s)dW(s) \end{array}\right). \end{equation*} It is a Gaussian process in $\mathcal{H} \oplus \mathcal{H}^{-1}_b$ by \cite[Theorem 5.2]{DaPrato1992}. As a stochastic integral of a cylindrical Wiener process, it is centred. Moreover, taking $a \in (0,1)$ we see that this $U$ has an $\mathcal{H} \oplus \mathcal{H}^{-1}_b$-continuous version by \cite[Theorem 5.11]{DaPrato1992}. \end{proof} \begin{thm}[Solution to \eqref{SWE}]\label{SWEsoln} There exists a unique global solution $u$ to the SPDE \eqref{SWE}. It is a centred Gaussian process on $\mathcal{H}$ given by \begin{equation*} u(t) = \int_0^t V_\beta(-\Delta_b,t-s)dW(s). \end{equation*} Moreover, $u$ is the unique $\mathcal{H}$-valued process which satisfies the following ``weak solution'' property: For all $h \in \mathcal{D}(\Delta_b)$, the function $t \mapsto \langle u(t), h \rangle_\mu$ satisfies $\langle u(0), h \rangle_\mu = 0$, is continuous in $[0,\infty)$, and is continuously differentiable in $(0,\infty)$ with \begin{equation*} \frac{d}{dt}\langle u(t), h \rangle_\mu = \int_0^t \langle u(s), \Delta_b h \rangle_\mu ds - 2\beta \langle u(t), h \rangle_\mu + \int_0^t \langle h, dW(s) \rangle_\mu. \end{equation*} \end{thm} \begin{proof} Existence is given directly by Proposition \ref{SWE2soln}, and yields the required centred Gaussian process $u$ as a solution which is continuous in $\mathcal{H}$, with its associated $\dot{u}$ continuous in $\mathcal{H}^{-1}_b$. Now note that our construction of $\mathcal{S}^{b,\beta}$ in Lemma \ref{lem:Scalconstr} was independent of the value of $\alpha$. 
That is, for any $\alpha \geq 0$ such that $\mathcal{A}_{b,\beta}$ generates a $C_0$-semigroup on $\mathcal{H} \oplus \mathcal{H}^{-\alpha}_b$, that semigroup must be $\mathcal{S}^{b,\beta}$. This means that the process $U$ constructed in Proposition \ref{SWE2soln} is independent of $\alpha$ and thus ensures uniqueness of $u$. It can be checked directly that the adjoint of the operator $\mathcal{A}_{b,\beta}$ is given by \begin{equation*} \mathcal{A}_{b,\beta}^* = \left(\begin{array}{cc} 0 & (1-\Delta_b)^{-1} \Delta_b\\ 1-\Delta_b & -2\beta\\ \end{array}\right), \end{equation*} with domain $\mathcal{D}(\mathcal{A}_{b,\beta}^*) = \mathcal{D}(\Delta_b^\frac{1}{2}) \oplus \mathcal{H} = \mathcal{D}(\mathcal{A}_{b,\beta})$. By the definition of weak solution in \cite[Chapter 5]{DaPrato1992} for \eqref{SWE2} we see that for all $f \in \mathcal{D}(\Delta_b^\frac{1}{2})$ and $g \in \mathcal{H}$ and $t \in [0,\infty)$, \begin{equation}\label{eqn:weakSWE} \begin{split} &\langle u(t), f \rangle_\mu + \langle \dot{u}(t), g \rangle_{\mathcal{H}^{-1}_b}\\ &= \int_0^t\left( \langle u(s), (1-\Delta_b)^{-1} \Delta_b g \rangle_\mu + \langle \dot{u}(s), (1-\Delta_b)f - 2\beta g \rangle_{\mathcal{H}^{-1}_b} \right) ds + \int_0^t \langle g, dW(s) \rangle_{\mathcal{H}^{-1}_b}. \end{split} \end{equation} Take $g=0$ and $f \in \mathcal{D}(\Delta_b^\frac{1}{2})$ in \eqref{eqn:weakSWE}. Then by the fact that $\dot{u}$ is continuous in $\mathcal{H}^{-1}_b$ and the fundamental theorem of calculus, the function $t \mapsto \langle u(t), f \rangle_\mu$ is continuously differentiable in $(0,\infty)$ with \begin{equation*} \frac{d}{dt}\langle u, f \rangle_\mu = \langle \dot{u}, (1-\Delta_b)f \rangle_{\mathcal{H}^{-1}_b}. \end{equation*} Note in particular that the right-hand side of the above equation is equal to $\langle \dot{u},f \rangle_\mu$ if $\dot{u} \in \mathcal{H}$. 
Now in \eqref{eqn:weakSWE} we take $f=0$ and let $g = (1-\Delta_b)h$ for some $h \in \mathcal{D}(\Delta_b)$, which gives \begin{equation*} \begin{split} &\langle \dot{u}(t), (1-\Delta_b)h \rangle_{\mathcal{H}^{-1}_b}\\ &= \int_0^t\left( \langle u(s), \Delta_b h \rangle_\mu - 2\beta \langle \dot{u}(s), (1-\Delta_b)h \rangle_{\mathcal{H}^{-1}_b} \right) ds + \int_0^t \langle (1-\Delta_b)h, dW(s) \rangle_{\mathcal{H}^{-1}_b}, \end{split} \end{equation*} which, by the identity $\frac{d}{dt}\langle u, f \rangle_\mu = \langle \dot{u}, (1-\Delta_b)f \rangle_{\mathcal{H}^{-1}_b}$ applied with $f = h$, is equivalent to \begin{equation*} \frac{d}{dt}\langle u(t), h \rangle_\mu = \int_0^t \langle u(s), \Delta_b h \rangle_\mu ds - 2\beta \langle u(t), h \rangle_\mu + \int_0^t \langle h, dW(s) \rangle_\mu. \end{equation*} Thus $u$ satisfies the required ``weak'' property. It remains to prove that $u$ uniquely satisfies this property among all $\mathcal{H}$-valued processes. In order to do this let $\bar{u}$ be a process also satisfying the property and let $v = u-\bar{u}$. Let $v_k(t) = \langle v(t),\phi^b_k \rangle_\mu$ for $k \geq 1$, $t \in [0,\infty)$. Then $v_k$ can be seen to satisfy the ordinary differential equation \begin{equation*} \begin{split} \frac{d^2v_k}{dt^2} &= -\lambda^b_k v_k - 2\beta \frac{dv_k}{dt},\\ v_k(0) &= \frac{dv_k}{dt}(0) = 0. \end{split} \end{equation*} The unique solution to this ODE is $v_k = 0$ for every $k$, which implies $u = \bar{u}$. \end{proof} Now that we have our solution $u$ to \eqref{SWE} given by Theorem \ref{SWEsoln}, we show that it has a nice eigenfunction decomposition. Let $u_k = \langle \phi^b_k, u \rangle_\mu$ for $k \geq 1$. We see that \begin{equation*} u_k(t) = \int_0^t V_\beta(\lambda^b_k,t-s) \langle \phi^b_k, dW(s) \rangle_\mu, \end{equation*} and it can be easily shown that $(\langle \phi^b_k, W \rangle_\mu)_{k=1}^\infty$ is a sequence of independent standard real Brownian motions. \begin{defn}[Series representation of solution] Let $\beta \geq 0$ and $b \in 2^{F^0}$.
For $k \geq 1$ let $Y^{b,\beta}_k = (Y^{b,\beta}_k(t))_{t \geq 0}$ be the centred real-valued Gaussian process given by \begin{equation*} Y^{b,\beta}_k(t) = \int_0^t V_\beta(\lambda^b_k,t-s) \langle \phi^b_k, dW(s) \rangle_\mu. \end{equation*} The family $(Y^{b,\beta}_k)_{k=1}^\infty$ is clearly independent, and if $u$ is the solution to \eqref{SWE} for the given values of $\beta$ and $b$, then \begin{equation}\label{seriesrep} u(t) = \sum_{k=1}^\infty Y^{b,\beta}_k(t) \phi^b_k. \end{equation} \end{defn} \begin{rem} By Theorem \ref{SWEsoln}, the real-valued process $Y^{b,\beta}_k$ satisfies the following stochastic integro-differential equation: \begin{equation*} \begin{split} y'(t) &= - 2\beta y(t) - \lambda^b_k \int_0^t y(s)ds + \int_0^t \langle \phi^b_k, dW(s) \rangle_\mu,\\ y(0) &= 0, \end{split} \end{equation*} and it is easily shown to be the unique solution. \end{rem} \begin{rem}[Non-zero initial conditions] For a moment we consider the SPDE \begin{equation}\label{nonzeroSWE} \begin{split} du(t) &= \dot{u}(t)dt,\\ d\dot{u}(t) &= -2\beta \dot{u}(t)dt + \Delta_bu(t)dt + dW(t),\\ u(0) &= f,\ \dot{u}(0) = g. \end{split} \end{equation} This is simply the SPDE \eqref{SWE} with possibly non-zero initial conditions. We can characterise the solutions of this SPDE using the deterministic damped wave equation \begin{equation}\label{wavePDE} \begin{split} du(t) &= \dot{u}(t)dt,\\ d\dot{u}(t) &= -2\beta \dot{u}(t)dt + \Delta_bu(t)dt,\\ u(0) &= f,\ \dot{u}(0) = g, \end{split} \end{equation} which is studied in \cite{dalrymple1999} and \cite{hu2002} in the case $\beta = 0$. Let $u$ be the unique solution to \eqref{SWE} given in Theorem \ref{SWEsoln}. Then it is clear that a process $\tilde{u}$ solves \eqref{nonzeroSWE} if and only if $\tilde{u} - u$ solves \eqref{wavePDE}. Thus understanding the stochastic wave equation with general initial conditions on a fractal is equivalent to understanding the deterministic wave equation on that fractal.
\end{rem} \section{Regularity of solution}\label{sec:regsol} \subsection{$L^2$-H\"{o}lder continuity} The first regularity property of the solution $u = (u(t))_{t=0}^\infty$ to \eqref{SWE} that we will consider is H\"{o}lder continuity in $\mathcal{H}$, when $u$ is interpreted as a function $u:\Omega \times [0,\infty) \to \mathcal{H}$. \begin{prop}\label{l2estimSWE} Let $u: \Omega \times [0,\infty) \to \mathcal{H}$ be the solution to the SPDE \eqref{SWE}. For every $T > 0$ there exists a constant $C > 0$ such that \begin{equation*} \mathbb{E}\left[ \left\Vert u(s) - u(s+t) \right\Vert^2_\mu\right] \leq Ct^{2 - d_s} \end{equation*} for all $s,t \geq 0$ such that $s,s+t \in [0,T]$. \end{prop} \begin{proof} By It\={o}'s isometry for Hilbert spaces, \begin{equation*} \begin{split} \mathbb{E} &\left[ \left\Vert u(s) - u(s+t) \right\Vert^2_\mu\right]\\ &= \mathbb{E}\left[ \left\Vert \int_0^{s+t} \left( V_\beta(-\Delta_b,s+t-t') - V_\beta(-\Delta_b,s-t') \mathbbm{1}_{\{t' \leq s\}} \right)dW(t') \right\Vert^2_\mu\right]\\ &= \int_0^{s+t} \left\Vert V_\beta(-\Delta_b,s+t-t') - V_\beta(-\Delta_b,s-t') \mathbbm{1}_{\{t' \leq s\}} \right\Vert^2_{\HS(\mathcal{H})} dt', \end{split} \end{equation*} where $\Vert \cdot \Vert_{\HS(\mathcal{H})}$ denotes the Hilbert-Schmidt norm for operators from $\mathcal{H}$ to itself. It follows that \begin{equation}\label{l2Holdereqn} \begin{split} \mathbb{E} &\left[ \left\Vert u(s) - u(s+t) \right\Vert^2_\mu\right]\\ &= \int_0^s \left\Vert V_\beta(-\Delta_b,t+t') - V_\beta(-\Delta_b,t') \right\Vert^2_{\HS(\mathcal{H})} dt' + \int_0^t \left\Vert V_\beta(-\Delta_b,t') \right\Vert^2_{\HS(\mathcal{H})} dt'\\ &= \sum_{k=1}^\infty \int_0^s \left( V_\beta(\lambda^b_k,t+t') - V_\beta(\lambda^b_k,t') \right)^2 dt' + \sum_{k=1}^\infty \int_0^t \left( V_\beta(\lambda^b_k,t') \right)^2 dt' \end{split} \end{equation} and we treat each of the above two sums separately.
Notice that by \cite[Proposition 2.5]{hambly2018} there are only finitely many $k$ such that $\lambda^b_k \leq \beta^2$. We consider the first sum of \eqref{l2Holdereqn}, and we first look at the case $\lambda^b_k > \beta^2$. Then using standard facts about the Lipschitz constants of the functions $\exp$ and $\sin$ in $[0,T]$ we see that \begin{equation*} \begin{split} &\int_0^s \left( V_\beta(\lambda^b_k,t+t') - V_\beta(\lambda^b_k,t') \right)^2 dt'\\ &= (\lambda^b_k - \beta^2)^{-1} \int_0^s \left( e^{-\beta (t+t')} \sin \left((\lambda^b_k - \beta^2)^\frac{1}{2}(t+t') \right) - e^{-\beta t'} \sin \left((\lambda^b_k - \beta^2)^\frac{1}{2}t' \right) \right)^2 dt'\\ &\leq (\lambda^b_k - \beta^2)^{-1} \int_0^s \left( (\beta+(\lambda^b_k - \beta^2)^{\frac{1}{2}})t \wedge 2 \right)^2 dt'\\ &\leq 4T\frac{\lambda^b_k t^2 \wedge 1}{\lambda^b_k - \beta^2}. \end{split} \end{equation*} We get a similar result in the case $\lambda^b_k \leq \beta^2$, that is, a term of order $O(t^2)$. In this case the dependence of this term on $k$ is unimportant as there are only finitely many $k$ such that $\lambda^b_k \leq \beta^2$. There therefore exists a constant $C' > 0$ such that \begin{equation*} \sum_{k=1}^\infty \int_0^s \left( V_\beta(\lambda^b_k,t+t') - V_\beta(\lambda^b_k,t') \right)^2 dt' \leq C't^2 + 4T\sum_{k: \lambda^b_k > \beta^2} \frac{\lambda^b_k t^2 \wedge 1}{\lambda^b_k - \beta^2}. \end{equation*} Using \cite[Proposition 2.5]{hambly2018}, it follows that there exists $C'' > 0$ such that \begin{equation*} \sum_{k=1}^\infty \int_0^s \left( V_\beta(\lambda^b_k,t+t') - V_\beta(\lambda^b_k,t') \right)^2 dt' \leq C''\left(t^2 + \sum_{k=1}^\infty k^{-\frac{2}{d_s}} \wedge t^2 \right). \end{equation*} Then by \cite[Lemma 5.2]{hambly2016}, there exists a $C''' > 0$ such that \begin{equation*} \sum_{k=1}^\infty \int_0^s \left( V_\beta(\lambda^b_k,t+t') - V_\beta(\lambda^b_k,t') \right)^2 dt' \leq C''' t^{2 - d_s}.
\end{equation*} Now for the second sum of \eqref{l2Holdereqn}, again we first look at the case $\lambda^b_k > \beta^2$. Using Lipschitz constants and the fact that $V_\beta(\lambda^b_k,0) = 0$ we have that \begin{equation*} \begin{split} \int_0^t \left( V_\beta(\lambda^b_k,t') \right)^2 dt' &= (\lambda^b_k - \beta^2)^{-1}\int_0^t e^{-2\beta t'} \sin^2 \left((\lambda^b_k - \beta^2)^{\frac{1}{2}}t' \right)dt'\\ &\leq (\lambda^b_k - \beta^2)^{-1} \int_0^t \left( (\beta+(\lambda^b_k - \beta^2)^{\frac{1}{2}})t' \wedge 1 \right)^2 dt'\\ &\leq 4(\lambda^b_k - \beta^2)^{-1} \int_0^t \left( \lambda^b_k (t')^2 \wedge 1 \right) dt'\\ &\leq 4t \frac{\lambda^b_k t^2 \wedge 1}{\lambda^b_k - \beta^2}. \end{split} \end{equation*} In the case $\lambda^b_k \leq \beta^2$ we get as usual a similar result, of order $O(t^3)$, and its dependence on $k$ is unimportant as there are only finitely many such $k$. Using the same method as for the first sum of \eqref{l2Holdereqn} we see that there exists a $C'''' > 0$ such that \begin{equation*} \sum_{k=1}^\infty \int_0^t \left( V_\beta(\lambda^b_k,t') \right)^2 dt' \leq C'''' t^{3 - d_s}. \end{equation*} Plugging the estimates into \eqref{l2Holdereqn} finishes the proof. \end{proof} \begin{defn} Let $(M_1,d_1)$ and $(M_2,d_2)$ be metric spaces and let $\delta \in (0,1]$. A function $f: M_1 \to M_2$ is \textit{essentially $\delta$-H\"{o}lder continuous} if for each $\gamma \in (0,\delta)$ there exists $C_\gamma > 0$ such that \begin{equation*} d_2(f(x),f(y)) \leq C_\gamma d_1(x,y)^\gamma \end{equation*} for all $x,y \in M_1$. \end{defn} \begin{thm}[$L^2$-H\"{o}lder continuity]\label{thm:L2verSWE} Let $u: \Omega \times [0,\infty) \to \mathcal{H}$ be the solution to the SPDE \eqref{SWE}. Then there exists a version $\tilde{u}$ of $u$ such that the following holds: for all $T > 0$, the restriction of $\tilde{u}$ to $\Omega \times [0,T]$ is almost surely essentially $(1 - \frac{d_s}{2})$-H\"{o}lder continuous as a function from $[0,T]$ to $\mathcal{H}$.
\end{thm} \begin{proof} Fix $T > 0$. This is a simple application of Kolmogorov's continuity theorem. It is a consequence of Fernique's theorem \cite[Theorem 2.7]{DaPrato1992} that for each $p \in \mathbb{N}$ there exists a constant $K_p > 0$ such that if $Z$ is a Gaussian random variable on some separable Banach space $B$ then \begin{equation*} \mathbb{E}\left[\left\Vert Z \right\Vert^{2p}_B\right] \leq K_p\mathbb{E}\left[\left\Vert Z \right\Vert^2_B\right]^p, \end{equation*} see also \cite[Proposition 3.14]{Hairer2016}. Since $u$ is a Gaussian process, Proposition \ref{l2estimSWE} gives us that \begin{equation*} \mathbb{E}\left[ \left\Vert u(s) - u(t) \right\Vert^{2p}_\mu\right] \leq K_p C^p |s - t|^{p(2 - d_s)} \end{equation*} for all $s,t \in [0,T]$, for all $p \in \mathbb{N}$. Then by taking $p$ arbitrarily large and using Kolmogorov's continuity theorem, the result follows. Note that any two continuous versions of $u$ must be indistinguishable, which allows us to extend the construction of $\tilde{u}$ on any given finite time interval $[0,T]$ to the whole interval $[0,\infty)$. \end{proof} \subsection{Pointwise regularity} Let $u$ be the solution to \eqref{SWE} in Theorem \ref{SWEsoln}. Henceforth we assume that $u$ is the $\mathcal{H}$-continuous version constructed in Theorem \ref{thm:L2verSWE}. We currently have $u$ as an $\mathcal{H}$-valued process, so in this section we will show that the ``point evaluations'' $u(t,x)$ for $(t,x) \in [0,\infty) \times F$ can be defined in such a way that they make sense as real-valued random variables. This will allow us to interpret $u$ as a function from $\Omega \times [0,\infty) \times F$ to $\mathbb{R}$, and is necessary for us to be able to talk about continuity of $u$ in space and time. \begin{defn} For $\lambda > 0$ and $b \in 2^{F^0}$ let $\rho^b_\lambda: F \times F \to \mathbb{R}$ be the \textit{resolvent density} associated with $\Delta_b$, exactly as in \cite[Section 3.1]{hambly2018}. 
\end{defn} \begin{lem}\label{Vint} Let $\beta \geq 0$ and $\lambda \geq 0$. If $\alpha > 0$ then \begin{equation*} \int_0^\infty e^{-2\alpha t} V_\beta(\lambda,t)^2dt = \frac{1}{4(\alpha + \beta)(\alpha^2 + 2\alpha\beta + \lambda)}. \end{equation*} \end{lem} \begin{proof} This can be computed explicitly from the definition of $V_\beta$ by (complex) integration in each of the cases $\lambda < \beta^2$, $\lambda = \beta^2$ and $\lambda > \beta^2$. \end{proof} \begin{lem}\label{resolvestimSWE} Let $u: [0,\infty) \to \mathcal{H}$ be the solution to the SPDE \eqref{SWE}. If $g \in \mathcal{H}$ and $t \in [0,\infty)$ then \begin{equation*} \mathbb{E} \left[ \langle u(t),g \rangle_\mu^2 \right] \leq \frac{e^{2(\sqrt{\beta^2 + 1} - \beta)t}}{4\sqrt{\beta^2 + 1}} \int_F \int_F \rho^b_1(x,y)g(x)g(y)\mu(dx)\mu(dy). \end{equation*} \end{lem} \begin{proof} Let $g^* \in \mathcal{H}^*$ be the bounded linear functional $f \mapsto \langle f,g \rangle_\mu$. We see by It\={o}'s isometry that \begin{equation*} \begin{split} \mathbb{E} \left[ \langle u(t),g \rangle_\mu^2 \right] &= \mathbb{E} \left[ g^*(u(t))^2 \right]\\ &= \int_0^t \Vert g^* V_\beta(-\Delta_b,s) \Vert_{\HS}^2 ds\\ &= \int_0^t \Vert V_\beta(-\Delta_b,s) g \Vert_\mu^2 ds\\ \end{split} \end{equation*} where the last equality is a result of the self-adjointness of the operator $V_\beta(-\Delta_b,s)$.
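As an aside, the closed form in Lemma \ref{Vint} is easy to confirm numerically. The sketch below (illustrative parameter values; the truncation time and step count are ad hoc choices relying on the exponential decay of the integrand) compares trapezoidal quadrature of the left-hand side with the stated right-hand side:

```python
import math

def V(lam, beta, t):
    """The damped-wave kernel V_beta(lam, t), covering all three damping regimes."""
    if lam < beta * beta:
        om = math.sqrt(beta * beta - lam)
        return math.exp(-beta * t) * math.sinh(om * t) / om
    if lam == beta * beta:
        return t * math.exp(-beta * t)
    om = math.sqrt(lam - beta * beta)
    return math.exp(-beta * t) * math.sin(om * t) / om

def bessel_integral(lam, beta, alpha, T=40.0, n=200000):
    """Trapezoidal approximation of the integral in Lemma Vint, truncated at T."""
    h = T / n
    total = 0.5 * (V(lam, beta, 0.0) ** 2
                   + math.exp(-2 * alpha * T) * V(lam, beta, T) ** 2)
    for i in range(1, n):
        t = i * h
        total += math.exp(-2 * alpha * t) * V(lam, beta, t) ** 2
    return total * h

lam, beta, alpha = 4.0, 0.5, 0.7  # illustrative values
closed_form = 1.0 / (4 * (alpha + beta) * (alpha ** 2 + 2 * alpha * beta + lam))
assert abs(bessel_integral(lam, beta, alpha) - closed_form) < 1e-6
```

Since `V` branches on the sign of $\lambda - \beta^2$, the same check applies verbatim in the overdamped and critically damped regimes.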
If we let $g_k = \langle \phi^b_k, g \rangle_\mu$ for $k \geq 1$ then for any $\alpha > 0$ we have that \begin{equation*} \begin{split} \mathbb{E} \left[ \langle u(t),g \rangle_\mu^2 \right] &= \sum_{k=1}^\infty g_k^2 \int_0^t V_\beta(\lambda^b_k,s)^2 ds\\ &\leq e^{2\alpha t} \sum_{k=1}^\infty g_k^2 \int_0^\infty e^{-2\alpha s} V_\beta(\lambda^b_k,s)^2 ds\\ &= e^{2\alpha t} \sum_{k=1}^\infty g_k^2 \frac{1}{4(\alpha + \beta)(\alpha^2 + 2\alpha\beta + \lambda^b_k)}\\ &= \frac{e^{2\alpha t}}{4(\alpha + \beta)} \left\langle (\alpha^2 + 2\alpha\beta - \Delta_b)^{-1}g , g \right\rangle_\mu\\ &= \frac{e^{2\alpha t}}{4(\alpha + \beta)} \int_F \int_F \rho^b_{\alpha^2+2\alpha\beta}(x,y)g(x)g(y)\mu(dx)\mu(dy), \end{split} \end{equation*} where we have used Lemma \ref{Vint}. Finally we pick $\alpha = \sqrt{\beta^2 + 1} - \beta$ so that $\alpha^2 + 2\alpha\beta = 1$ and the proof is complete. \end{proof} For $x \in F$ and $\epsilon > 0$ let $B(x,\epsilon)$ be the closed $R$-ball in $F$ with centre $x$ and radius $\epsilon$. \begin{lem}[Neighbourhoods]\label{nhoodestimSWE} There exists a constant $c_5 > 0$ such that the following holds: If $x \in F$ and $n \geq 0$ then there exists a subset $D_n^0(x) \subseteq F$ such that $\mu(D_n^0(x)) > r_{\min}^{d_H}2^{-d_Hn}$ and \begin{equation*} x \in D^0_n(x) \subseteq B(x, c_5 2^{-n}). \end{equation*} \end{lem} \begin{proof} The $D^0_n(x)$ we need is the $n$-neighbourhood of $x$ and is defined in \cite[Definition 3.10]{hambly2016}. The result $D^0_n(x) \subseteq B(x, c_5 2^{-n})$ then follows from \cite[Proposition 3.12]{hambly2016}. The result on $\mu(D_n^0(x))$ is due to the fact that by definition, $F_w \subseteq D_n^0(x)$ for some $w \in \mathbb{W}_*$ such that $r_w > r_{\min} 2^{-n}$. \end{proof} \begin{defn} For $x \in F$ and $n \geq 0$, define \begin{equation*} f^x_n = \mu(D^0_n(x))^{-1} \mathbbm{1}_{D^0_n(x)}. 
\end{equation*} Evidently $f^x_n \in \mathcal{H}$ and, by the above lemma, $\Vert f^x_n \Vert_\mu^2 = \mu(D^0_n(x))^{-1} < r_{\min}^{-d_H} 2^{d_Hn}$; moreover, if $g \in \mathcal{H}$ is continuous then \begin{equation*} \lim_{n \to \infty}\langle f^x_n,g \rangle_\mu = g(x), \end{equation*} since $D^0_n(x) \subseteq B(x, c_5 2^{-n})$. \end{defn} We can now state and prove the main theorem of this section; for a similar result for the stochastic heat equation see \cite[Theorem 4.8]{hambly2016}. \begin{thm}[Pointwise regularity]\label{ptregSWE} Let $u: [0,\infty) \to \mathcal{H}$ be the solution to the SPDE \eqref{SWE}. Then for all $(t,x) \in [0,\infty) \times F$ the expression \begin{equation*} u(t,x) := \sum_{k=1}^\infty Y^{b,\beta}_k(t) \phi^b_k(x) \end{equation*} is a well-defined real-valued centred Gaussian random variable. There exists a constant $c_6 > 0$ such that for all $x \in F$, $t \in [0,\infty)$ and $n \geq 0$ we have that \begin{equation*} \mathbb{E} \left[ \left( \langle u(t), f^x_n \rangle_\mu - u(t,x) \right)^2 \right] \leq c_6e^{2(\sqrt{\beta^2 + 1} - \beta)t} 2^{-n}. \end{equation*} \end{thm} \begin{proof} Note that $\phi^b_k \in \mathcal{D}(\Delta_b)$ for each $k$, so $\phi^b_k$ is continuous and hence $\phi^b_k(x)$ is well-defined. By the definition of $u(t,x)$ as a sum of real-valued centred Gaussian random variables we need only prove that it is square-integrable and that the approximation estimate holds. Let $x \in F$. The theorem is trivial for $t = 0$ so let $t \in (0,\infty)$. By Lemma \ref{resolvestimSWE} we have that \begin{equation*} \begin{split} \mathbb{E} &\left[ \langle u(t),f^x_n - f^x_m \rangle_\mu^2 \right]\\ &\leq \frac{e^{2(\sqrt{\beta^2 + 1} - \beta)t}}{4\sqrt{\beta^2 + 1}} \int_F \int_F \rho^b_1(z_1,z_2)(f^x_n(z_1)-f^x_m(z_1))(f^x_n(z_2)-f^x_m(z_2))\mu(dz_1)\mu(dz_2).
\end{split} \end{equation*} Then using the definition of $f^x_n$, \cite[Proposition 3.2]{hambly2018} and Lemma \ref{nhoodestimSWE} we have that \begin{equation}\label{cauchyseqSWE} \begin{split} \mathbb{E} \left[ \langle u(t),f^x_n - f^x_m \rangle_\mu^2 \right] &\leq \frac{e^{2(\sqrt{\beta^2 + 1} - \beta)t}}{4\sqrt{\beta^2 + 1}} \left( 8c_52^{-n} + 8c_52^{-m} \right)\\ &= \frac{2c_5 e^{2(\sqrt{\beta^2 + 1} - \beta)t}}{\sqrt{\beta^2 + 1}} \left( 2^{-n} + 2^{-m} \right). \end{split} \end{equation} Writing $u$ in its series representation \eqref{seriesrep} and using the independence of the $Y^{b,\beta}_k$, it follows that \begin{equation*} \sum_{k=1}^\infty \mathbb{E} \left[Y^{b,\beta}_k(t)^2 \right] \left( \langle \phi^b_k, f^x_n\rangle_\mu - \langle \phi^b_k,f^x_m \rangle_\mu \right)^2 \leq \frac{2c_5 e^{2(\sqrt{\beta^2 + 1} - \beta)t}}{\sqrt{\beta^2 + 1}} \left( 2^{-n} + 2^{-m} \right). \end{equation*} Thus the left-hand side of the above equation tends to zero as $m,n \to \infty$. The solution $u$ is an $\mathcal{H}$-valued Gaussian process so we know that \begin{equation*} \sum_{k=1}^\infty \mathbb{E} \left[Y^{b,\beta}_k(t)^2 \right] \langle \phi^b_k, f^x_n\rangle_\mu^2 = \mathbb{E} \left[ \langle u(t),f^x_n \rangle_\mu^2 \right] < \infty \end{equation*} for all $x \in F$, $n \geq 0$ and $t \in [0,\infty)$, therefore by the completeness of the sequence space $\ell^2$ there must exist a unique sequence $(y_k)_{k=1}^\infty$ such that $\sum_{k=1}^\infty y_k^2 < \infty$ and \begin{equation*} \lim_{n \to \infty} \sum_{k=1}^\infty \left( \mathbb{E} \left[Y^{b,\beta}_k(t)^2 \right]^\frac{1}{2} \langle \phi^b_k, f^x_n\rangle_\mu - y_k \right)^2 = 0. \end{equation*} Since $\phi^b_k$ is continuous we have $\lim_{n \to \infty}\langle \phi^b_k, f^x_n\rangle_\mu = \phi^b_k(x)$. 
Thus by Fatou's lemma we can identify the sequence $(y_k)$; we must have \begin{equation*} \sum_{k=1}^\infty \mathbb{E} \left[Y^{b,\beta}_k(t)^2 \right] \phi^b_k(x)^2 < \infty \end{equation*} and \begin{equation*} \lim_{n \to \infty}\sum_{k=1}^\infty \mathbb{E} \left[Y^{b,\beta}_k(t)^2 \right] \left( \langle \phi^b_k, f^x_n\rangle_\mu - \phi^b_k(x) \right)^2 = 0. \end{equation*} Equivalently by \eqref{seriesrep}, \begin{equation*} \mathbb{E} \left[ u(t,x)^2 \right] < \infty \end{equation*} (so we have proven square-integrability) and \begin{equation*} \lim_{n \to \infty}\mathbb{E} \left[ \left( \langle u(t), f^x_n \rangle_\mu - u(t,x) \right)^2 \right] = 0. \end{equation*} In particular by taking $m \to \infty$ in \eqref{cauchyseqSWE} we have that \begin{equation*} \mathbb{E} \left[ \left( \langle u(t), f^x_n \rangle_\mu - u(t,x) \right)^2 \right] \leq \frac{2c_5 e^{2(\sqrt{\beta^2 + 1} - \beta)t}}{\sqrt{\beta^2 + 1}} 2^{-n}. \end{equation*} \end{proof} We can now interpret our solution $u$ as a so-called ``random field'' solution $u: \Omega \times [0,\infty) \times F \to \mathbb{R}$. However, the relationship between the random field solution and the original $\mathcal{H}$-valued solution is still rather unclear. We discuss this in the next section. \section{Space-time H\"{o}lder continuity}\label{sec:holderst} Now that we have the interpretation of the solution $u$ to \eqref{SWE} as a function $u: \Omega \times [0,\infty) \times F \to \mathbb{R}$, we can prove results about its continuity in time and space. In particular, we show that it has a H\"older continuous version which is also a version of the original $\mathcal{H}$-valued solution found in Theorem \ref{SWEsoln}. \subsection{Spatial estimate} The spatial continuity of $u$ is the same as for the stochastic heat equation, see \cite[Section 5.1]{hambly2016}. \begin{prop}\label{spaceestimSWE} Let $T > 0$. 
Let $u: \Omega \times [0,T] \times F \to \mathbb{R}$ be (the restriction of) the solution to the SPDE \eqref{SWE}. Then there exists a constant $C_1 > 0$ such that \begin{equation*} \mathbb{E} \left[ (u(t,x) - u(t,y))^2 \right] \leq C_1R(x,y) \end{equation*} for all $t \in [0,T]$ and all $x,y \in F$. \end{prop} \begin{proof} Recall from Theorem \ref{ptregSWE} that \begin{equation*} \lim_{n \to \infty}\mathbb{E} \left[ \left( \left\langle u(t), f^x_n \right\rangle_\mu - u(t,x) \right)^2 \right] = 0, \end{equation*} and an analogous result holds for $y$. Thus by Lemma \ref{resolvestimSWE}, \begin{equation*} \begin{split} \mathbb{E} &\left[ (u(t,x) - u(t,y))^2 \right] = \lim_{n \to \infty} \mathbb{E} \left[ \left\langle u(t), f^x_n - f^y_n \right\rangle_\mu^2 \right]\\ &\leq \frac{e^{2(\sqrt{\beta^2 + 1} - \beta)t}}{4\sqrt{\beta^2 + 1}} \lim_{n \to \infty}\int_F \int_F \rho^b_1(z_1,z_2)(f^x_n(z_1) - f^y_n(z_1))(f^x_n(z_2) - f^y_n(z_2))\mu(dz_1)\mu(dz_2)\\ &= \frac{e^{2(\sqrt{\beta^2 + 1} - \beta)t}}{4\sqrt{\beta^2 + 1}} \left( \rho^b_1(x,x) - 2\rho^b_1(x,y) + \rho^b_1(y,y) \right), \end{split} \end{equation*} where we have used the continuity of the resolvent density, Lemma \ref{nhoodestimSWE}, and the definition of $f^x_n$ (similarly to the proof of Theorem \ref{ptregSWE}). Hence by \cite[Proposition 3.2]{hambly2018}, \begin{equation*} \begin{split} \mathbb{E} \left[ (u(t,x) - u(t,y))^2 \right] &\leq \frac{e^{2(\sqrt{\beta^2 + 1} - \beta)T}}{4\sqrt{\beta^2 + 1}} \left( \rho^b_1(x,x) - \rho^b_1(x,y) + \rho^b_1(y,y) - \rho^b_1(y,x) \right)\\ &\leq \frac{e^{2(\sqrt{\beta^2 + 1} - \beta)T}}{\sqrt{\beta^2 + 1}} R(x,y). \end{split} \end{equation*} \end{proof} \subsection{Temporal estimate} \begin{lem}\label{opnormestim} We have the following estimates on $V_\beta$ and $\dot{V}_\beta$: \begin{enumerate} \item Let $\beta \geq 0$ and $t \geq 0$. 
Then \begin{equation*} \sup_{\lambda \geq 0} |V_\beta(\lambda,t)| = \begin{cases}\begin{array}{lr} \beta^{-1} e^{-\beta t} \sinh \left( \beta t \right) & \beta > 0,\\ t & \beta = 0. \end{array}\end{cases} \end{equation*} In particular, $\sup_{\lambda \geq 0} |V_\beta(\lambda,t)|$ is $O(t)$ as $t \to 0$. \item Let $\beta \geq 0$ and $T \geq 0$. Then \begin{equation*} \sup_{0 \leq t \leq T} \sup_{\lambda \geq 0} |\dot{V}_\beta(\lambda,t)| \leq e^{\beta T}. \end{equation*} \end{enumerate} \end{lem} \begin{proof} It is easy, if somewhat tedious, to prove that $V_\beta$ and $\dot{V}_\beta$ are both continuous in $\lambda$ for fixed $t \geq 0$. Note that \begin{equation*} \lim_{x \to 0}\frac{\sin x}{x} = 1 = \lim_{x \to 0}\frac{\sinh x}{x} \end{equation*} and \begin{equation*} \sup_{x \in \mathbb{R} \setminus \{ 0 \}} \left\vert \frac{\sin x}{x} \right\vert = 1 = \inf_{x \in \mathbb{R} \setminus \{ 0 \}} \left\vert \frac{\sinh x}{x} \right\vert. \end{equation*} For (1), assume that $t > 0$ (otherwise the result is trivial). We have that \begin{equation*} \begin{split} \sup_{\lambda > \beta^2} |V_\beta(\lambda,t)| &= te^{-\beta t} \sup_{\lambda > \beta^2} \left\vert\left((\lambda - \beta^2)^{\frac{1}{2}}t \right)^{-1} \sin \left((\lambda - \beta^2)^{\frac{1}{2}}t \right) \right\vert\\ &= te^{-\beta t}\\ &= |V_\beta(\beta^2,t)|, \end{split} \end{equation*} so we need only consider the case $\lambda \leq \beta^2$. If $\beta = 0$ then this directly implies the result. Suppose now that $\beta > 0$. 
The function $x \mapsto \frac{\sinh x}{x}$ is positive and increasing when $x$ is positive so by continuity we have that \begin{equation*} \begin{split} \sup_{\lambda \geq 0} |V_\beta(\lambda,t)| &= \sup_{\lambda \leq \beta^2} |V_\beta(\lambda,t)|\\ &= te^{-\beta t} \sup_{\lambda \leq \beta^2} \left(\left((\beta^2 - \lambda)^{\frac{1}{2}}t \right)^{-1} \sinh \left((\beta^2 - \lambda)^{\frac{1}{2}}t \right) \right)\\ &= te^{-\beta t} \left( \beta t \right)^{-1} \sinh \left(\beta t \right)\\ &= \beta^{-1} e^{-\beta t} \sinh \left( \beta t \right) \end{split} \end{equation*} which is the required result. Now for (2), assume that $T > 0$, otherwise the result is trivial. We have \begin{equation*} \begin{split} \sup_{0 \leq t \leq T}& \sup_{\lambda > \beta^2}|\dot{V}_\beta(\lambda,t)|\\ &= \sup_{0 \leq t \leq T} \sup_{\lambda > \beta^2} \left\vert e^{-\beta t} \cos \left((\lambda - \beta^2)^{\frac{1}{2}}t \right) - \beta (\lambda - \beta^2)^{-\frac{1}{2}}e^{-\beta t} \sin \left((\lambda - \beta^2)^{\frac{1}{2}}t \right) \right\vert\\ &\leq 1 + \sup_{0 \leq t \leq T} \left\vert \beta t e^{-\beta t} \right\vert\\ &\leq 1 + \beta T \end{split} \end{equation*} and \begin{equation*} \begin{split} \sup_{0 \leq t \leq T}& \sup_{\lambda < \beta^2}|\dot{V}_\beta(\lambda,t)|\\ &= \sup_{0 \leq t \leq T} \sup_{\lambda < \beta^2} \left\vert e^{-\beta t} \cosh \left((\beta^2 - \lambda)^{\frac{1}{2}}t \right) - \beta (\beta^2 - \lambda)^{-\frac{1}{2}}e^{-\beta t} \sinh \left((\beta^2 - \lambda )^{\frac{1}{2}}t \right) \right\vert\\ &\leq \cosh (\beta T) + \beta T \sup_{0 \leq t \leq T} \sup_{\lambda < \beta^2} \left( \left( (\beta^2 - \lambda)^{\frac{1}{2}} t \right)^{-1} \sinh \left((\beta^2 - \lambda )^{\frac{1}{2}}t \right) \right) \\ &\leq \cosh (\beta T) + \sinh (\beta T) = e^{\beta T} \end{split} \end{equation*} and $\sup_{0 \leq t \leq T} |\dot{V}_\beta(\beta^2,t)| = \sup_{0 \leq t \leq T} \left\vert e^{-\beta t} - \beta t e^{-\beta t} \right\vert \leq 1 + \beta T$. 
Finally we note that the inequality $1 + \beta T \leq e^{\beta T}$ holds. \end{proof} We can now give the temporal estimate. Here we see the effect of the extra time derivative compared to the stochastic heat equation~\cite[Proposition 5.5]{hambly2016}. \begin{prop}\label{timeestimSWE} Let $T > 0$. Let $u: \Omega \times [0,T] \times F \to \mathbb{R}$ be (the restriction of) the solution to the SPDE \eqref{SWE}. Then there exists $C_2 > 0$ such that \begin{equation*} \mathbb{E} \left[ (u(s,x) - u(s+t,x))^2 \right] \leq C_2 t^{2 - d_s} \end{equation*} for all $s,t \geq 0$ such that $s, s+t \leq T$ and all $x \in F$. \end{prop} \begin{proof} Let $c_6' := 8c_6e^{2(\sqrt{\beta^2 + 1} - \beta)T}$, where $c_6$ is from Theorem \ref{ptregSWE}. By Theorem \ref{ptregSWE} we have that if $n \geq 0$ is an integer then \begin{equation}\label{timebd1} \mathbb{E} \left[ (u(s,x) - u(s+t,x))^2 \right] \leq 2\mathbb{E} \left[ \langle u(s) - u(s+t), f^x_n \rangle_\mu^2 \right] + c_6' 2^{-n}. \end{equation} Then It\={o}'s isometry for Hilbert spaces (see also proof of Lemma \ref{resolvestimSWE}) gives us that \begin{equation*} \begin{split} \mathbb{E} &\left[ \langle u(s) - u(s+t), f^x_n \rangle_\mu^2 \right]\\ &= \mathbb{E} \left[ \left\langle \int_0^{s+t} \left( V_\beta(-\Delta_b,s+t - t') - V_\beta(-\Delta_b,s - t')\mathbbm{1}_{\{t' \leq s\}} \right) dW(t'), f^x_n \right\rangle_\mu^2 \right]\\ &= \int_0^{s+t} \left\Vert \left( V_\beta(-\Delta_b,s+t - t') - V_\beta(-\Delta_b,s - t')\mathbbm{1}_{\{t' \leq s\}} \right)f^x_n \right\Vert_\mu^2 dt'\\ &\leq \Vert f^x_n \Vert_\mu^2 \int_0^{s+t} \left\Vert V_\beta(-\Delta_b,s+t - t') - V_\beta(-\Delta_b,s - t')\mathbbm{1}_{\{t' \leq s\}} \right\Vert^2 dt'. \end{split} \end{equation*} Recall that $\Vert f^x_n \Vert_\mu^2 < r_{\min}^{-d_H} 2^{d_Hn}$. 
Using the functional calculus we see that \begin{equation*} \begin{split} &\int_0^{s+t} \left\Vert V_\beta(-\Delta_b,s+t - t') - V_\beta(-\Delta_b,s - t')\mathbbm{1}_{\{t' \leq s\}} \right\Vert^2 dt'\\ &= \int_0^s \left\Vert V_\beta(-\Delta_b,s+t - t') - V_\beta(-\Delta_b,s - t') \right\Vert^2 dt' + \int_s^{s+t} \left\Vert V_\beta(-\Delta_b,s+t - t') \right\Vert^2 dt'\\ &= \int_0^s \left\Vert V_\beta(-\Delta_b,t + t') - V_\beta(-\Delta_b,t') \right\Vert^2 dt' + \int_0^t \left\Vert V_\beta(-\Delta_b,t') \right\Vert^2 dt'\\ &\leq \int_0^s \sup_{\lambda \geq 0} \left(V_\beta(\lambda,t + t') - V_\beta(\lambda,t') \right)^2 dt' + \int_0^t \sup_{\lambda \geq 0} V_\beta(\lambda,t')^2 dt'\\ &\leq t^2 T \sup_{0 \leq t' \leq T} \sup_{\lambda \geq 0} \dot{V}_\beta(\lambda,t')^2 + \int_0^t \sup_{\lambda \geq 0} V_\beta(\lambda,t')^2 dt', \end{split} \end{equation*} where in the last line we have used the mean value theorem. Therefore by using Lemma \ref{opnormestim} there exists $c>0$ such that \begin{equation*} \int_0^{s+t} \left\Vert V_\beta(-\Delta_b,s+t - t') - V_\beta(-\Delta_b,s - t')\mathbbm{1}_{\{t' \leq s\}} \right\Vert^2 dt' \leq ct^2 \end{equation*} for all $s,t \geq 0$ such that $s,s+t \leq T$. Letting $c' = 2 r_{\min}^{-d_H} c$ and plugging this into \eqref{timebd1} we have that \begin{equation*} \mathbb{E} \left[ (u(s,x) - u(s+t,x))^2 \right] \leq c' t^2 2^{d_H n} + c_6' 2^{-n}. \end{equation*} for all $s,t \geq 0$ such that $s,s+t \leq T$ and all $x \in F$. In fact, defining \begin{equation*} c_6'' := c_6' \vee d_Hc' T^2, \end{equation*} we have that \begin{equation}\label{timebd2} \mathbb{E} \left[ (u(s,x) - u(s+t,x))^2 \right] \leq c' t^2 2^{d_H n} + c_6'' 2^{-n} \end{equation} as well. This estimate will turn out to be easier to work with. We assume now that $t > 0$, and our aim is to choose $n \geq 0$ to minimise the expression on the right of \eqref{timebd2}. 
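Heuristically, the two terms on the right-hand side of \eqref{timebd2} balance when $c' t^2 2^{d_H n} \approx c_6'' 2^{-n}$, that is when $2^{-n}$ is of order $t^{\frac{2}{d_H + 1}}$; since $2 - d_s = \frac{2}{d_H + 1}$, this is precisely the power of $t$ appearing in the statement. The following optimisation makes this heuristic rigorous.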
Fixing $t \in (0,T]$, define $g: \mathbb{R} \to \mathbb{R}_+$ such that $g(y) = c't^2 2^{d_Hy} + c_6''2^{-y}$. The function $g$ has a unique stationary point which is a global minimum at \begin{equation*} y_0 = \frac{1}{(d_H + 1)\log 2} \log \left( \frac{c_6''}{d_Hc't^2} \right). \end{equation*} Since $t \leq T$ we have by the definition of $c_6''$ that $y_0 \geq 0$. Since $y_0$ is not necessarily an integer we choose $n = \lceil y_0 \rceil$. Then $g$ is increasing in $[y_0,\infty)$ so we have that \begin{equation*} \mathbb{E} \left[ (u(s,x) - u(s+t,x))^2 \right] \leq g(n) \leq g(y_0 + 1). \end{equation*} Setting $c_6''' := \frac{c_6''}{d_H c'}$ and evaluating the right-hand side we see that \begin{equation*} \begin{split} \mathbb{E} \left[ (u(s,x) - u(s+t,x))^2 \right] &\leq c' t^2 2^{d_H} \left( \frac{c'''_6}{t^2} \right)^\frac{d_H}{d_H + 1} + c_6'' 2^{-1} \left( \frac{c'''_6}{t^2} \right)^\frac{-1}{d_H + 1}\\ &\leq c_6'''' t^\frac{2}{d_H + 1}\\ &= c_6'''' t^{2 - d_s} \end{split} \end{equation*} for all $s \geq 0$, $t > 0$ such that $s,s+t \leq T$ and all $x \in F$, where the constant $c_6'''' > 0$ is independent of $s,t,x$. This inequality obviously also holds in the case $t=0$. \end{proof} \subsection{H\"{o}lder continuity} We are now ready to prove the main result of this paper. \begin{defn} Let $R_\infty$ be the metric on $\mathbb{R} \times F$ given by \begin{equation*} R_\infty((s,x),(t,y)) = |s-t| \vee R(x,y). \end{equation*} \end{defn} \begin{thm}[Space-time H\"{o}lder continuity]\label{SWEreg} Let $u: \Omega \times [0,\infty) \times F \to \mathbb{R}$ be the solution to the SPDE \eqref{SWE}. Let $\delta = 1 - \frac{d_s}{2}$. Then there exists a version $\tilde{u}$ of $u$ which satisfies the following: \begin{enumerate} \item For each $T > 0$, $\tilde{u}$ is almost surely essentially $(\frac{1}{2} \wedge \delta )$-H\"{o}lder continuous on $[0,T] \times F$ with respect to $R_\infty$. 
\item For each $t \in [0,\infty)$, $\tilde{u}(t,\cdot)$ is almost surely essentially $\frac{1}{2}$-H\"{o}lder continuous on $F$ with respect to $R$. \item For each $T > 0$ and $x \in F$, $\tilde{u}(\cdot,x)$ is almost surely essentially $\delta$-H\"{o}lder continuous on $[0,T]$ with respect to the Euclidean metric. \end{enumerate} Moreover, the collection of random variables $\tilde{u} = (\tilde{u}(t,x))_{(t,x) \in [0,\infty) \times F}$ is such that $(\tilde{u}(t,\cdot))_{t \in [0,\infty)}$ is an $\mathcal{H}$-valued process, and in fact $(\tilde{u}(t,\cdot))_{t \in [0,\infty)}$ is an $\mathcal{H}$-continuous version of the $\mathcal{H}$-valued solution to \eqref{SWE} found in Theorem \ref{SWEsoln}. \end{thm} \begin{proof} Take $T > 0$ and consider $u_T$, the restriction of $u$ to $[0,T] \times F$. It is a well-known fact that for every $p \in \mathbb{N}$ there exists a constant $C_p' > 0$ such that if $Z$ is any centred real Gaussian random variable then \begin{equation*} \mathbb{E} [Z^{2p}] = C_p' \mathbb{E}[Z^2]^p. \end{equation*} We know that $u_T$ is a centred Gaussian process on $[0,T] \times F$ by Theorem \ref{ptregSWE}. Propositions \ref{spaceestimSWE} and \ref{timeestimSWE} then give us the estimates \begin{equation*} \begin{split} \mathbb{E} \left[ (u_T(t,x) - u_T(t,y))^{2p} \right] &\leq C_p'C_1^pR(x,y)^p,\\ \mathbb{E} \left[ (u_T(s,x) - u_T(t,x))^{2p} \right] &\leq C_p'C_2^p|s-t|^{p(2 - d_s)} \end{split} \end{equation*} for all $s,t \in [0,T]$ and all $x,y \in F$. The existence of a version $\tilde{u}$ with the required H\"older continuity properties then follows in the same way as in \cite[Theorem 5.6]{hambly2016}. Then using Theorem \ref{ptregSWE} and the series representation of $u$, the rest of the present theorem follows in the same way as in \cite[Theorem 5.7]{hambly2016}. \end{proof} \section{Convergence to equilibrium}\label{sec:equilib} We conclude this paper with a brief discussion of the long-time behaviour of the solution $u$ to the SPDE \eqref{SWE}. 
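A simple computation indicates the natural time scale in the undamped case. Recalling from the proof of Lemma \ref{opnormestim} that $V_0(\lambda,s) = \lambda^{-\frac{1}{2}} \sin (\lambda^\frac{1}{2} s)$ for $\lambda > 0$, we have \begin{equation*} \int_0^t V_0(\lambda,s)^2 ds = \frac{1}{2\lambda} \left( t - (4\lambda)^{-\frac{1}{2}} \sin \left( (4\lambda)^\frac{1}{2} t \right) \right), \end{equation*} which grows linearly in $t$; this explains the normalisation $t^{-\frac{1}{2}}$ used below.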
We are interested in whether the solution ``settles down'' as $t \to \infty$ to some equilibrium measure. Intuitively, we expect this to be the case when the damping constant $\beta$ is positive. However the undamped case $\beta = 0$ is less clear. In this case there is no dissipation of energy, so is the rate of increase of energy quantifiable? Note that in this section we use the term ``weak convergence'' in the probabilistic sense, \textit{not} in the functional analytic sense. We treat the undamped case first. Throughout this section we will use the interpretation of the solution $u: \Omega \times [0,\infty) \to \mathcal{H}$ as an $\mathcal{H}$-valued process. Recall the series representation of $u$, \begin{equation*} u = \sum_{k=1}^\infty Y^{b,\beta}_k \phi^b_k, \end{equation*} given in \eqref{seriesrep}. \begin{thm}[$\beta = 0$] Let $u$ be the solution to the SPDE \eqref{SWE} with $\beta = 0$. \begin{enumerate} \item If $b \neq N$, then $t^{-\frac{1}{2}}u(t)$ has a non-trivial weak limit in $\mathcal{H}$ as $t \to \infty$. \item If $b = N$, then $t^{-\frac{1}{2}}u(t)$ has no weak limit in $\mathcal{H}$ as $t \to \infty$. However $u - Y^{N,\beta}_1 \phi^N_1$ and $Y^{N,\beta}_1 \phi^N_1$ are independent $\mathcal{H}$-valued processes and $t^{-\frac{1}{2}}\left( u(t) - Y^{N,\beta}_1(t) \phi^N_1 \right)$ has a non-trivial weak limit in $\mathcal{H}$ as $t \to \infty$. \end{enumerate} \end{thm} \begin{proof} Let $(\zeta_k)_{k=1}^\infty$ be an independent and identically distributed sequence of real standard Gaussian random variables. We start with (1), so that $\lambda^b_1 > 0$. For each $t \in [0,\infty)$ let \begin{equation*} \bar{u}(t) = \sum_{k=1}^\infty (2\lambda^b_k)^{-\frac{1}{2}} \left( t - (4\lambda^b_k)^{-\frac{1}{2}} \sin \left( (4\lambda^b_k)^\frac{1}{2} t \right) \right)^\frac{1}{2} \zeta_k \phi^b_k. 
\end{equation*} It can be easily checked that $\bar{u}(t)$ is a well-defined $\mathcal{H}$-valued random variable with the same law as $u(t)$ for each $t \in [0,\infty)$. Now let \begin{equation*} u_\infty = \sum_{k=1}^\infty (2\lambda^b_k)^{-\frac{1}{2}} \zeta_k \phi^b_k, \end{equation*} so that $u_\infty$ is also a well-defined $\mathcal{H}$-valued random variable. It is then simple to check using dominated convergence (see \cite[Proposition 2.5]{hambly2018}) that \begin{equation*} \lim_{t \to \infty}\mathbb{E}\left[ \Vert t^{-\frac{1}{2}}\bar{u}(t) - u_\infty \Vert_\mu^2 \right] = 0, \end{equation*} so in particular $t^{-\frac{1}{2}}\bar{u}(t) \to u_\infty$ weakly as $t \to \infty$. Therefore the same weak convergence holds for $t^{-\frac{1}{2}}u(t)$. We now tackle (2). The issue that forces us to consider this case separately is that $\lambda^N_1 = 0$, so the variance of $\langle t^{-\frac{1}{2}}u(t) , \phi^N_1 \rangle_\mu$ tends to infinity as $t \to \infty$. We deal with this by subtracting off the offending component, which is exactly $Y^{N,\beta}_1 \phi^N_1$. It is clearly independent of $u - Y^{N,\beta}_1 \phi^N_1$ by \eqref{seriesrep}. Now $\lambda^N_k > 0$ for all $k \geq 2$, so similar to (1) we let \begin{equation*} \bar{u}(t) = \sum_{k=2}^\infty (2\lambda^N_k)^{-\frac{1}{2}} \left( t - (4\lambda^N_k)^{-\frac{1}{2}} \sin \left( (4\lambda^N_k)^\frac{1}{2} t \right) \right)^\frac{1}{2} \zeta_k \phi^N_k, \end{equation*} which has the same law as $u(t) - Y^{N,\beta}_1(t) \phi^N_1$, and \begin{equation*} u_\infty = \sum_{k=2}^\infty (2\lambda^N_k)^{-\frac{1}{2}} \zeta_k \phi^N_k. \end{equation*} As with (1) we conclude that $t^{-\frac{1}{2}}\left( u(t) - Y^{N,\beta}_1(t) \phi^N_1 \right) \to u_\infty$ weakly as $t \to \infty$. \end{proof} We now tackle the damped case $\beta > 0$. It turns out that we must split this again into two subcases: $b \neq N$ and $b = N$. \begin{thm}[$\beta > 0$] Let $u$ be the solution to the SPDE \eqref{SWE} with $\beta > 0$. 
\begin{enumerate} \item If $b \neq N$, then $u(t)$ has a non-trivial weak limit as $t \to \infty$. \item If $b = N$, then $u(t)$ has no weak limit as $t \to \infty$. However $u - Y^{N,\beta}_1 \phi^N_1$ and $Y^{N,\beta}_1 \phi^N_1$ are independent $\mathcal{H}$-valued processes, and $\left( u(t) - Y^{N,\beta}_1(t) \phi^N_1 \right)$ has a non-trivial weak limit as $t \to \infty$. \end{enumerate} \begin{proof} We do case (1) first. Observe that if $\beta > 0$ and $b \in 2^{F^0} \setminus \{ N \}$ then $V_\beta(\lambda, t)$ decays exponentially as $t \to \infty$ for any $\lambda \geq 0$. It follows that \begin{equation}\label{Vintble} \int_0^\infty V_\beta(\lambda,s)^2ds < \infty \end{equation} for all $\lambda > 0$, and so by It\={o}'s isometry we may define \begin{equation*} Z^{b,\beta}_k(t) = \int_0^t V_\beta(\lambda^b_k,s) \langle \phi^b_k, dW(s) \rangle_\mu \end{equation*} for each $t \in [0,\infty]$ and $k \geq 1$, which is a well-defined real-valued random variable. In the case $t \in [0,\infty)$ this evidently has the same law as $Y^{b,\beta}_k(t)$. Then for each $t \in [0,\infty)$ let \begin{equation*} \hat{u}(t) = \sum_{k=1}^\infty Z^{b,\beta}_k(t) \phi^b_k \end{equation*} and \begin{equation*} u_\infty = \sum_{k=1}^\infty Z^{b,\beta}_k(\infty) \phi^b_k. \end{equation*} It is clear that $\hat{u}(t)$ is an $\mathcal{H}$-valued random variable with the same law as $u(t)$, for all $t \in [0,\infty)$. Now for any $t \in [0,\infty)$ we have by It\={o}'s isometry that \begin{equation}\label{uhatestim} \begin{split} \mathbb{E}\left[ \Vert \hat{u}(t) - u_\infty \Vert_\mu^2 \right] &= \sum_{k=1}^\infty \mathbb{E}\left[ \left( Z^{b,\beta}_k(t) - Z^{b,\beta}_k(\infty) \right)^2 \right]\\ &= \sum_{k=1}^\infty \int_t^\infty V_\beta (\lambda^b_k,s)^2 ds\\ &= \sum_{k: \lambda^b_k \leq \beta^2} \int_t^\infty V_\beta (\lambda^b_k,s)^2 ds + \sum_{k: \lambda^b_k > \beta^2} \int_t^\infty V_\beta (\lambda^b_k,s)^2 ds. 
\end{split} \end{equation} We treat each of these terms separately. As we mentioned in Proposition \ref{SWE2soln}, there are only finitely many $k$ such that $\lambda^b_k \leq \beta^2$, see \cite[Proposition 2.5]{hambly2018}. Then by \eqref{Vintble} we have that \begin{equation*} \begin{split} \sum_{k: \lambda^b_k \leq \beta^2} \int_t^\infty V_\beta (\lambda^b_k,s)^2 ds &< \infty, \quad t \geq 0,\\ \lim_{t \to \infty}\sum_{k: \lambda^b_k \leq \beta^2} \int_t^\infty V_\beta (\lambda^b_k,s)^2 ds &= 0. \end{split} \end{equation*} Now for the $\{k: \lambda^b_k > \beta^2 \}$ sum we need to do some estimates. Our assumption that $\beta > 0$ allows us to improve on the estimates of Proposition \ref{SWE2soln}: \begin{equation*} \begin{split} \sum_{k: \lambda^b_k > \beta^2} \int_t^\infty V_\beta (\lambda^b_k,s)^2 ds &= \sum_{k: \lambda^b_k > \beta^2} \frac{1}{\lambda^b_k - \beta^2} \int_t^\infty e^{-2\beta s} \sin^2\left( (\lambda^b_k - \beta^2)^\frac{1}{2} s \right) ds\\ &\leq \sum_{k: \lambda^b_k > \beta^2} \frac{1}{\lambda^b_k - \beta^2} \int_t^\infty e^{-2\beta s} ds\\ &= \frac{1}{2\beta} e^{-2\beta t} \sum_{k: \lambda^b_k > \beta^2} \frac{1}{\lambda^b_k - \beta^2}. \end{split} \end{equation*} By \cite[Proposition 2.5]{hambly2018} the infinite sum above converges, so we have by dominated convergence that \begin{equation*} \begin{split} \sum_{k: \lambda^b_k > \beta^2} \int_t^\infty V_\beta (\lambda^b_k,s)^2 ds &< \infty, \quad t \geq 0,\\ \lim_{t \to \infty}\sum_{k: \lambda^b_k > \beta^2} \int_t^\infty V_\beta (\lambda^b_k,s)^2 ds &= 0. \end{split} \end{equation*} Setting $t = 0$ in \eqref{uhatestim}, we have now proven that \begin{equation*} \mathbb{E}\left[ \Vert u_\infty \Vert_\mu^2 \right] < \infty, \end{equation*} and so $u_\infty$ is a well-defined $\mathcal{H}$-valued random variable. From \eqref{uhatestim} we have also proven that \begin{equation*} \lim_{t \to \infty}\mathbb{E}\left[ \Vert \hat{u}(t) - u_\infty \Vert_\mu^2 \right] = 0. 
\end{equation*} In particular this implies that $\hat{u}(t) \to u_\infty$ weakly as $t \to \infty$. Since $u(t)$ has the same law as $\hat{u}(t)$ for all $t$, this implies that $u(t) \to u_\infty$ weakly as $t \to \infty$. In (2), we have the issue that $\lambda^N_1 = 0$ so $V_\beta(\lambda^N_1,\cdot)$ is not square-integrable, which precludes $u(t)$ from having a weak limit. We get around this issue by simply subtracting the associated term of the series representation of $u$, leaving only the square-integrable terms. We still have $\lambda^N_k > 0$ for all $k \geq 2$, so by It\={o}'s isometry we may define \begin{equation*} Z^{N,\beta}_k(\infty) := \int_0^\infty V_\beta(\lambda^N_k,s) \langle \phi^N_k, dW(s) \rangle_\mu \end{equation*} for $k \geq 2$. From the series representation \eqref{seriesrep} of $u$, observe that $Y^{N,\beta}_1(t) \phi^N_1$ is simply the component of $u(t)$ associated with the eigenfunction $\phi^N_1$, so that \begin{equation*} u(t) - Y^{N,\beta}_1(t) \phi^N_1 = \sum_{k=2}^\infty Y^{N,\beta}_k(t) \phi^N_k, \end{equation*} and the independence result is clear. For each $t$ we then define \begin{equation*} Z^{N,\beta}_k(t) = \int_0^t V_\beta(\lambda^N_k,s) \langle \phi^N_k, dW(s) \rangle_\mu \end{equation*} and \begin{equation*} \hat{u}(t) = \sum_{k=2}^\infty Z^{N,\beta}_k(t) \phi^N_k, \end{equation*} so that $\hat{u}(t)$ has the same law as $u(t) - Y^{N,\beta}_1(t) \phi^N_1$, and we set $u_\infty = \sum_{k=2}^\infty Z^{N,\beta}_k(\infty) \phi^N_k$. The proof proceeds from here in the same way as in the proof of (1)---we show that \begin{equation*} \mathbb{E}\left[ \Vert u_\infty \Vert_\mu^2 \right] < \infty \end{equation*} and \begin{equation*} \lim_{t \to \infty}\mathbb{E}\left[ \Vert \hat{u}(t) - u_\infty \Vert_\mu^2 \right] = 0 \end{equation*} which imply the result. \end{proof} \end{document}
\begin{document} \title{Qubit-assisted squeezing of the mirror motion in a dissipative optomechanical cavity system} \author{Cheng-Hua Bai} \affiliation{Department of Physics, Harbin Institute of Technology, Harbin, Heilongjiang 150001, China} \author{Dong-Yang Wang} \affiliation{Department of Physics, Harbin Institute of Technology, Harbin, Heilongjiang 150001, China} \author{Shou Zhang} \email{[email protected]} \affiliation{Department of Physics, Harbin Institute of Technology, Harbin, Heilongjiang 150001, China} \affiliation{Department of Physics, College of Science, Yanbian University, Yanji, Jilin 133002, China} \author{Hong-Fu Wang} \email{[email protected]} \affiliation{Department of Physics, College of Science, Yanbian University, Yanji, Jilin 133002, China} \begin{abstract} We investigate a hybrid system consisting of an atomic ensemble trapped inside a dissipative optomechanical cavity assisted by a perturbative oscillator-qubit coupling. It is shown that such a hybrid system is very suitable for generating stationary squeezing of the mirror motion in the long-time limit in the unresolved sideband regime. Based on the approaches of the master equation and the covariance matrix, we discuss the respective squeezing effects in detail and find that in both approaches, simplifying the system dynamics by adiabatic elimination of the highly dissipative cavity mode is very effective. In the master equation approach, we find that the squeezing is a resulting effect of the cooling process and is robust against the thermal fluctuations of the mechanical mode. In the covariance matrix approach, we obtain an approximate analytical result for the steady-state mechanical position variance from the reduced dynamical equation. Finally, we compare the two different approaches and find that they are completely equivalent for the stationary dynamics. The scheme may be meaningful for possible ultraprecise quantum measurements involving mechanical squeezing. 
\pacs{42.50.Dv, 42.50.Ct, 42.50.Pq, 07.10.Cm} \keywords{mechanical squeezing, master equation, covariance matrix} \end{abstract} \maketitle \section{Introduction}\label{Sec1} Significant progress has been achieved with the recent advances in cavity optomechanics over the last few years~\cite{2014RMP861391}. Examples include ground-state cooling of the mechanical mode~\cite{2007PRL99093901,2007PRL99093902,2014PRA90053841,2015SC58516,2018PRA98023816}, macroscopic entanglement between two spatially separated movable mirrors~\cite{2014PRA89023843,2015SC58518,2018FP13130319}, optical multistability behavior~\cite{2016PRA93023844,2017SC60010311}, and so on. Among these, the generation of non-classical states of motion around the ground state in cavity optomechanical systems is one of the most effective methods for studying quantum effects at mesoscopic or macroscopic scales. Specifically, quantum squeezing of the mechanical motion, the reduction of the quantum fluctuation in its position or momentum below the quantum noise limit, is not only of significant importance for testing fundamental quantum theory~\cite{2012PT6529}, such as exploring the quantum-classical boundary~\cite{1991PT4436}, but also has wide potential applications, such as the detection of gravitational waves~\cite{1980RMP52341,1992Science256325,1999PT5244}. Thus, achieving a squeezed state of a mechanical oscillator (mirror) is a highly desired goal. To this end, several well-known methods and techniques to generate squeezing of the mechanical mode have been proposed~\cite{1991PRL673665,2009PRL103213603,2011PRA83033820,2013OE21020423,2018OE26013783,2013PRA88063833,2009PRA79063819,2010PRA82033811,2016PRA93043844,2015PRA91013834,2010PRA82021806R,2014PRA89023849,2018PRA97043619,2018PRA98023807,2008APL92133102,2015PRL115243601}. One of the earliest and most notable schemes was to modulate the frequency of the oscillator~\cite{1991PRL673665}. 
Nevertheless, although this is the simplest approach, it is not easy to apply to many different types of mechanical systems. Subsequently, alternative methods based on cavity optomechanical systems have been proposed to overcome this drawback. Examples include modulation of the external driving laser~\cite{2009PRL103213603,2011PRA83033820,2013OE21020423,2018OE26013783}; adoption of two-tone driving with one red-detuned and one blue-detuned source~\cite{2013PRA88063833}; direct squeezing transfer to the oscillator from a squeezed external driving field or from a squeezed cavity field generated by a parametric amplifier inside the cavity~\cite{2009PRA79063819,2010PRA82033811,2016PRA93043844}; use of the Duffing nonlinearity~\cite{2015PRA91013834}, etc. While these schemes concentrate on the linear radiation-pressure interaction, squeezing of the mechanical mode in quadratically coupled optomechanical systems has also been investigated. In this case, one can drive the cavity with two beams~\cite{2010PRA82021806R} and use the bang-bang control technique to kick the mechanical mode~\cite{2014PRA89023849}. Meanwhile, the effects on the generation of mechanical squeezing of the cooperation between squeezed-field driving and quadratic optomechanical coupling~\cite{2018PRA97043619} and between periodically modulated driving and parametric driving~\cite{2018PRA98023807} have very recently been investigated. In the cooperative regime, the stronger mechanical squeezing can be viewed as a joint effect of the two mechanisms. In fact, the basic mechanism for creating mechanical squeezing is to introduce a parametric coupling for the motional degree of freedom of the oscillator. The Hamiltonian takes the form $H\propto b^2+b^{\dag2}$ (where $b$ and $b^{\dag}$ are the annihilation and creation operators of the oscillator) and the corresponding evolution operator is a squeeze operator so that the squeezing can be achieved effectively~\cite{QuantumOptics}. 
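Explicitly, for $H=\eta(b^2+b^{\dag2})$ the evolution operator is \begin{eqnarray*} e^{-i\eta t(b^2+b^{\dag2})}=\exp\left[\frac{1}{2}\left(\xi^{*}b^2-\xi b^{\dag2}\right)\right]=S(\xi),~~~~\xi=2i\eta t, \end{eqnarray*} which is exactly the single-mode squeeze operator with squeezing strength $r=|\xi|=2\eta t$~\cite{QuantumOptics}.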
Therefore, an interesting question is how such a parametric coupling can be realized in a cavity optomechanical system. Fortunately, this type of parametric coupling has been used to enhance the quantum correlations in optomechanical systems, and it can be introduced by perturbatively coupling a single qubit to the mechanical oscillator~\cite{2018AOP39239}. In addition, the photon blockade and two-color optomechanically induced transparency in this kind of model have been discussed in detail~\cite{2015PRA92033806,2014PRA90023817}. Meanwhile, the oscillator-qubit coupling has also been successfully realized in experiments based on the superconducting quantum circuit system~\cite{2009Nature459960,2018PRA98023821}. On the other hand, the master equation is a powerful tool for studying the dynamics of a practical quantum system~\cite{QuantumOptics}. However, since the dynamics of the fluctuations is linearized and the noises are Gaussian in a general optomechanical system, it is very convenient to introduce the covariance matrix to study the system dynamics~\cite{2009PRL103213603,2007PRL98030405,2014PRA89023843}. But to our knowledge, the dynamical results obtained from these two different approaches have not been compared until now. In this paper, we study the squeezing effect of the mechanical oscillator induced by the oscillator-qubit coupling in a hybrid system consisting of an atomic ensemble trapped inside a dissipative optomechanical cavity. We discuss the mechanical squeezing in detail based on the approaches of the master equation and the covariance matrix, respectively. In the master equation approach, we eliminate the highly dissipative cavity mode adiabatically and obtain the effective Hamiltonian. By solving the master equation numerically, we find that steady-state squeezing of the mechanical oscillator can be generated successfully in the long-time limit. 
We also demonstrate that the squeezing is a resulting effect of the cooling process. Deriving the optimal effective detuning both numerically and dynamically, we check the cooling effect when the mechanical oscillator is initially prepared in a thermal state with a certain mean thermal phonon number. As for the covariance matrix approach, by eliminating the highly dissipative cavity mode adiabatically, the dynamical equation of the $6\times6$ covariance matrix can be reduced to that of a $4\times4$ covariance matrix, which significantly simplifies the system dynamics. In the appropriate parameter regime, an approximate analytical solution for the steady-state variance of the oscillator position can be obtained. Finally, we make a clear comparison between these two different approaches. We find that the steady-state dynamics in the long-time limit obtained from the two approaches are completely equivalent. This paper is organized as follows. In Sec.~\ref{Sec2}, we introduce the hybrid system model under consideration and derive the Hamiltonian of the system. In Sec.~\ref{Sec3}, we discuss the squeezing effect of the mechanical oscillator in detail based on the approaches of the master equation and the covariance matrix, respectively. In Sec.~\ref{Sec4}, we give a brief discussion about the implementation of the present scheme with the circuit-QED system. Finally, a conclusion is given in Sec.~\ref{Sec5}. \section{System and model}\label{Sec2} \begin{figure} \caption{(Color online) Schematic diagram of the considered system. A cloud of identical two-level atoms is trapped in a dissipative optomechanical cavity, which is driven by an external laser field. 
The qubit (within the black dashed elliptical ring), denoted by a yellow dot inside the movable mirror, has the levels $|\uparrow\rangle$ and $|\downarrow\rangle$.} \label{Fig1} \end{figure} The system under consideration is schematically shown in Fig.~\ref{Fig1}, where a cloud of identical two-level atoms (with frequency $\omega_a$ and decay rate $\gamma_a$) is trapped in a dissipative optomechanical cavity (with frequency $\omega_c$ and decay rate $\kappa$). An external laser field with time-independent amplitude $E$ and frequency $\omega_l$ drives the optomechanical cavity, and the movable mirror coupled to a qubit is modeled as a mechanical oscillator with frequency $\omega_m$ and damping rate $\gamma_m$. The mechanical oscillator is coupled to the cavity field via the radiation-pressure interaction. The Hamiltonian of the system is given by (setting $\hbar=1$) \begin{eqnarray}\label{Eq1} H&=&\omega_ca^{\dag}a+\omega_aS_z+\frac{\omega_m}{2}(q^2+p^2)+2\eta q^2+ \cr\cr &&g(S_+a+S_-a^{\dag})-g_0a^{\dag}aq+E(a^{\dag}e^{-i\omega_lt}+ae^{i\omega_lt}), \end{eqnarray} in which $a$ ($a^{\dag}$) is the annihilation (creation) operator of the cavity field, $S_{+,-,z}=\sum\limits_i\sigma^{(i)}_{+,-,z}$ are the collective spin Pauli operators of the atoms, and $q$ ($p$) is the dimensionless position (momentum) operator of the mechanical oscillator, satisfying the standard canonical commutation relation $[q, p]=i$. $g$ and $g_0$ represent, respectively, the atom-cavity coupling strength and the single-photon radiation-pressure coupling strength. In the Hamiltonian of Eq.~(\ref{Eq1}), the first three terms in the first line correspond to the free Hamiltonians of the driven cavity, the atoms, and the mechanical oscillator, respectively. The fourth term is the qubit-oscillator interaction, with coupling strength $2\eta$; its generation is discussed in Sec.~\ref{Sec4}.
The first two terms in the second line describe, respectively, the coupling between the atoms and the cavity field and the optomechanical interaction between the cavity field and the mechanical oscillator. The last term describes the driving of the cavity by the external laser field. The spin operators of the atoms can be described in terms of a collective bosonic operator, $c=S_-/\sqrt{N}$. For a sufficiently large atom number $N$ and weak atom-cavity coupling, $S_z\simeq c^{\dag}c-N/2$~\cite{2015PRA92033841}. In the rotating frame with respect to the laser frequency $\omega_l$, the Hamiltonian can be rewritten as \begin{eqnarray}\label{Eq2} H^{\prime}=\delta_ca^{\dag}a+\Delta_ac^{\dag}c+\frac{\omega_m}{2}(q^2+p^2)+2\eta q^2+G(c^{\dag}a+ca^{\dag})-g_0a^{\dag}aq+E(a^{\dag}+a), \end{eqnarray} where $\delta_c=\omega_c-\omega_l$ and $\Delta_a=\omega_a-\omega_l$ are, respectively, the cavity and atomic detunings with respect to the external driving laser, and $G=\sqrt{N}g$ is the collective atom-cavity coupling strength. In the following, we discuss the squeezing of the movable mirror in detail based on the master-equation and covariance-matrix approaches, respectively. \section{Discussion of the squeezing for the movable mirror}\label{Sec3} \subsection{The approach of master equation} \subsubsection{Hamiltonian} To discuss the squeezing of the movable mirror based on the master-equation approach, it is convenient to introduce the annihilation and creation operators of the mechanical oscillator \begin{eqnarray}\label{Eq3} b=(q+ip)/\sqrt{2},~~~~~~~~~~b^{\dag}=(q-ip)/\sqrt{2}. \end{eqnarray} In terms of $b$ and $b^{\dag}$, the Hamiltonian in Eq.~(\ref{Eq2}) can be rewritten as \begin{eqnarray}\label{Eq4} H^{\prime\prime}=\delta_ca^{\dag}a+\Delta_ac^{\dag}c+\omega_m^{\prime}b^{\dag}b+\eta(b^2+b^{\dag2})+G(c^{\dag}a+ca^{\dag})-g_0^{\prime}a^{\dag}a(b+b^{\dag})+E(a^{\dag}+a), \end{eqnarray} where $\omega_m^{\prime}=\omega_m+2\eta$ and $g_0^{\prime}=g_0/\sqrt{2}$.
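The frequency shift $\omega_m^{\prime}=\omega_m+2\eta$ in Eq.~(\ref{Eq4}) follows directly from expanding the qubit-induced term in the bosonic operators; with $q=(b+b^{\dag})/\sqrt{2}$,

```latex
2\eta q^2=\eta\left(b+b^{\dag}\right)^2=\eta\left(b^2+b^{\dag2}\right)+2\eta b^{\dag}b+\eta .
```

The constant $\eta$ only shifts the energy origin, the term $2\eta b^{\dag}b$ renormalizes the mechanical frequency to $\omega_m+2\eta$, and $\eta(b^2+b^{\dag2})$ is the parametric-amplification term retained in Eq.~(\ref{Eq4}).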
In general, besides the coherent dynamics, quantum systems are inevitably coupled to their environments. Taking all the damping and noise effects into account, the evolution of the system can be completely described by the following nonlinear quantum Langevin equations (QLEs) \begin{eqnarray}\label{Eq5} \frac{da}{dt}&=&-(\kappa+i\delta_c)a-iGc+ig_0^{\prime}a(b+b^{\dag})-iE+\sqrt{2\kappa}a_{\mathrm{in}}(t), \cr\cr \frac{db}{dt}&=&-(\gamma_m+i\omega_m^{\prime})b-2i\eta b^{\dag}+ig_0^{\prime}a^{\dag}a+\sqrt{2\gamma_m}b_{\mathrm{in}}(t), \cr\cr \frac{dc}{dt}&=&-(\gamma_a+i\Delta_a)c-iGa+\sqrt{2\gamma_a}c_{\mathrm{in}}(t), \end{eqnarray} where $a_{\mathrm{in}}(t)$, $b_{\mathrm{in}}(t)$, and $c_{\mathrm{in}}(t)$ are the noise operators for the cavity field, mechanical oscillator, and atoms, respectively, which have zero mean value and satisfy the following correlation functions \begin{eqnarray}\label{Eq6} \langle a_{\mathrm{in}}(t)a_{\mathrm{in}}^{\dag}(t^{\prime})\rangle&=&\delta(t-t^{\prime}), ~~~~~~~~~~~~~~~~~~~~ \langle a_{\mathrm{in}}^{\dag}(t)a_{\mathrm{in}}(t^{\prime})\rangle=0, \cr\cr \langle b_{\mathrm{in}}(t)b_{\mathrm{in}}^{\dag}(t^{\prime})\rangle&=&(n_m+1)\delta(t-t^{\prime}), ~~~~~~~~ \langle b_{\mathrm{in}}^{\dag}(t)b_{\mathrm{in}}(t^{\prime})\rangle=n_m\delta(t-t^{\prime}), \cr\cr \langle c_{\mathrm{in}}(t)c_{\mathrm{in}}^{\dag}(t^{\prime})\rangle&=&\delta(t-t^{\prime}), ~~~~~~~~~~~~~~~~~~~~ \langle c_{\mathrm{in}}^{\dag}(t)c_{\mathrm{in}}(t^{\prime})\rangle=0, \end{eqnarray} in which $n_m=\left[\mathrm{exp}(\hbar\omega_m/k_BT)-1\right]^{-1}$ is the mean thermal phonon number. Here $T$ is the environment temperature of the mechanical reservoir and $k_B$ is the Boltzmann constant. The strong driving of the cavity leads to large amplitudes for the cavity field, mechanical mode, and atoms. Thus, the standard linearization procedure can be applied to simplify the dynamical equations.
To this end, we express the operators in Eq.~(\ref{Eq5}) as the sum of their mean values and quantum fluctuations, i.e., $\mathscr{O}(t)\rightarrow\langle\mathscr{O}(t)\rangle+\mathscr{O}(t)~(\mathscr{O}=a, b, c)$. Hence, the dynamical equation corresponding to the mean values is given by the following set of nonlinear differential equations: \begin{eqnarray}\label{Eq7} \langle\dot{a}(t)\rangle&=&-(\kappa+i\delta_c)\langle a(t)\rangle-iG\langle c(t)\rangle+ig_0^{\prime}\langle a(t)\rangle(\langle b(t)\rangle+\langle b(t)\rangle^{\ast})-iE, \cr\cr \langle\dot{b}(t)\rangle&=&-(\gamma_m+i\omega_m^{\prime})\langle b(t)\rangle-2i\eta\langle b(t)\rangle^{\ast}+ig_0^{\prime}|\langle a(t)\rangle|^2, \cr\cr \langle\dot{c}(t)\rangle&=&-(\gamma_a+i\Delta_a)\langle c(t)\rangle-iG\langle a(t)\rangle. \end{eqnarray} On the other hand, the dynamics of the quantum fluctuations is governed by the following linearized QLEs: \begin{eqnarray}\label{Eq8} \dot{a}&=&-(\kappa+i\delta_c)a-iGc+ig_0^{\prime}\langle a(t)\rangle(b+b^{\dag})+ig_0^{\prime}(\langle b(t)\rangle+\langle b(t)\rangle^{\ast})a+\sqrt{2\kappa}a_{\mathrm{in}}(t), \cr\cr \dot{b}&=&-(\gamma_m+i\omega_m^{\prime})b-2i\eta b^{\dag}+ig_0^{\prime}\langle a(t)\rangle^{\ast}a+ig_0^{\prime}\langle a(t)\rangle a^{\dag}+\sqrt{2\gamma_m}b_{\mathrm{in}}(t), \cr\cr \dot{c}&=&-(\gamma_a+i\Delta_a)c-iGa+\sqrt{2\gamma_a}c_{\mathrm{in}}(t). \end{eqnarray} \begin{figure} \caption{(Color online) Time evolution of the real and imaginary parts of the cavity mode mean value $\langle a(t)\rangle$ and the mechanical mode mean value $\langle b(t)\rangle$. 
The system parameters are chosen as: $\omega_m=\pi\times10^6~\mathrm{Hz}$, $\gamma_m=10^{-6}\omega_m$, $g_0^{\prime}=10^{-3}\omega_m$, $\omega_c=10^8\omega_m$, $\delta_c=-250\omega_m$, $\kappa=3\omega_m$, $\Delta_a=1.1\omega_m$, $\gamma_a=0.1\omega_m$, $G=8\omega_m$, $\eta=0.2\omega_m$, $P=20~\mathrm{mW}$, and $E=\sqrt{2P\kappa/(\hbar\omega_l)}$.} \label{Fig2} \end{figure} Via solving Eq.~(\ref{Eq7}) numerically, we plot the time evolution of the real and imaginary parts of the cavity mode mean value $\langle a(t)\rangle$ and the mechanical mode mean value $\langle b(t)\rangle$ in Fig.~\ref{Fig2}. From Fig.~\ref{Fig2}, we can find that the real and imaginary parts of the mean values reach their steady states quickly and the real part is much larger than the imaginary part ($\mathrm{Re}[\langle a(t)\rangle]\gg\mathrm{Im}[\langle a(t)\rangle]$ and $\mathrm{Re}[\langle b(t)\rangle]\gg\mathrm{Im}[\langle b(t)\rangle]$). As a consequence, we can make the following approximations safely: \begin{eqnarray}\label{Eq9} \langle a(t)\rangle\simeq\langle a(t)\rangle^{\ast}\simeq|\langle a\rangle_s|,~~~~~~~~~~ \langle b(t)\rangle\simeq\langle b(t)\rangle^{\ast}\simeq|\langle b\rangle_s|, \end{eqnarray} where $\langle a\rangle_s$ and $\langle b\rangle_s$ represent, respectively, the steady state mean values of cavity mode and mechanical mode. So the Hamiltonian of the system for the quantum fluctuations corresponding to Eq.~(\ref{Eq8}) can be written as \begin{eqnarray}\label{Eq10} H_{\mathrm{lin}}=\Delta_ca^{\dag}a+\omega_m^{\prime}b^{\dag}b+\Delta_ac^{\dag}c+\eta(b^2+b^{\dag2})+G(c^{\dag}a+ca^{\dag})-G_0(a+a^{\dag})(b+b^{\dag}), \end{eqnarray} in which $\Delta_c=\delta_c-2g_0^{\prime}|\langle b\rangle_s|$ is the effective cavity detuning and $G_0=g_0^{\prime}|\langle a\rangle_s|$ is the effective optomechanical coupling strength. 
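As a side illustration, the mean-value equations~(\ref{Eq7}) can be integrated with any standard ODE solver. The following minimal sketch works in units of $\omega_m=1$ with the parameters of Fig.~\ref{Fig2}, except that the driving amplitude $E$ is an illustrative assumption here (the figures instead use $E=\sqrt{2P\kappa/(\hbar\omega_l)}$):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters of Fig. 2 in units of omega_m = 1; the driving amplitude E
# below is an illustrative assumption, not the value used in the figures.
gm, g0p, dc = 1e-6, 1e-3, -250.0
kappa, Da, ga, G, eta = 3.0, 1.1, 0.1, 8.0, 0.2
E = 1e4
wmp = 1.0 + 2 * eta          # omega_m' = omega_m + 2*eta

def rhs(t, y):
    # Pack the six real components into the complex mean values of Eq. (7).
    a, b, c = y[0] + 1j * y[1], y[2] + 1j * y[3], y[4] + 1j * y[5]
    da = -(kappa + 1j * dc) * a - 1j * G * c \
         + 1j * g0p * a * (b + b.conjugate()) - 1j * E
    db = -(gm + 1j * wmp) * b - 2j * eta * b.conjugate() + 1j * g0p * abs(a) ** 2
    dC = -(ga + 1j * Da) * c - 1j * G * a
    return [da.real, da.imag, db.real, db.imag, dC.real, dC.imag]

sol = solve_ivp(rhs, (0.0, 60.0), np.zeros(6), method="BDF", rtol=1e-8, atol=1e-8)
a_s = sol.y[0, -1] + 1j * sol.y[1, -1]   # late-time cavity mean value
```

With these values the cavity mean value settles quickly, and its real part dominates the imaginary part, consistent with the behavior described above for Fig.~\ref{Fig2}.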
Under the parameter regimes $|\Delta_c|\gg(\omega_m^{\prime}, |\Delta_a|)$ and $\kappa\gg(\gamma_m, \gamma_a)$, the cavity mode can be eliminated adiabatically, and the solution for the fluctuation operator $a(t)$ on the time scale $t\gg1/\kappa$ can be obtained as (see Appendix \ref{App1}) \begin{eqnarray}\label{Eq11} a(t)\simeq\frac{iG_0[b(t)+b^{\dag}(t)]}{\kappa+i\Delta_c}+\frac{-iGc(t)}{\kappa+i\Delta_c}+A_{\mathrm{in}}(t), \end{eqnarray} where $A_{\mathrm{in}}(t)$ is the modified noise operator. Substituting Eq.~(\ref{Eq11}) into the equations for modes $b$ and $c$ in Eq.~(\ref{Eq8}), we obtain the QLEs for $b$ and $c$ after adiabatically eliminating the cavity mode $a$ \begin{eqnarray}\label{Eq12} \dot{b}&\simeq&-(\gamma_m+i\tilde{\omega}_m)b-iG_{\mathrm{eff}}(c+c^{\dag})-2i\eta^{\prime}b^{\dag}+b^{\prime}_{\mathrm{in}}(t), \cr\cr \dot{c}&\simeq&-(\gamma_{\mathrm{eff}}+i\Delta_{\mathrm{eff}})c-iG_{\mathrm{eff}}(b+b^{\dag})+c_{\mathrm{in}}^{\prime}(t), \end{eqnarray} where $b_{\mathrm{in}}^{\prime}(t)$ and $c_{\mathrm{in}}^{\prime}(t)$ represent the modified noise terms. The effective parameters for the mechanical frequency $\tilde{\omega}_m$, optomechanical coupling $G_{\mathrm{eff}}$, bilinear strength $\eta^{\prime}$, detuning $\Delta_{\mathrm{eff}}$, and decay rate $\gamma_{\mathrm{eff}}$ are defined, respectively, as \begin{eqnarray}\label{Eq13} \tilde{\omega}_m&=&\omega_m^{\prime}-\frac{2G_0^2\Delta_c}{\Delta_c^2+\kappa^2}=\omega_m+2\eta^{\prime},~~~~~~~~~~ G_{\mathrm{eff}}=\left|\frac{G_0G}{\Delta_c-i\kappa}\right|, \cr\cr \eta^{\prime}&=&\eta-\frac{G_0^2\Delta_c}{\Delta_c^2+\kappa^2},~~~~~~~ \Delta_{\mathrm{eff}}=\Delta_a-\frac{G^2\Delta_c}{\Delta_c^2+\kappa^2},~~~~~~~ \gamma_{\mathrm{eff}}=\gamma_a+\frac{G^2\kappa}{\Delta_c^2+\kappa^2}.
\end{eqnarray} Therefore, the effective Hamiltonian corresponding to the QLEs for the mechanical mode $b$ and atomic mode $c$ in Eq.~(\ref{Eq12}) is \begin{eqnarray}\label{Eq14} H_{\mathrm{eff}}=\tilde{\omega}_mb^{\dag}b+\Delta_{\mathrm{eff}}c^{\dag}c+G_{\mathrm{eff}}(b+b^{\dag})(c+c^{\dag})+\eta^{\prime}(b^{\dag2}+b^2). \end{eqnarray} \subsubsection{Generation of the mechanical squeezing} We now introduce the quadrature operators for the mechanical mode, $X=(b+b^{\dag})/\sqrt{2}$ and $Y=(b-b^{\dag})/\sqrt{2}i$, so the variance of the quadrature operator $Z~(Z=X, Y)$ is determined by \begin{eqnarray}\label{Eq15} \langle\delta Z^2\rangle=\langle Z^2\rangle-\langle Z\rangle^2=\mathrm{Tr}[Z^2\varrho(t)]-\mathrm{Tr}[Z\varrho(t)]^2, \end{eqnarray} where $\varrho(t)$ is the system density operator, whose dynamics is completely governed by the following master equation \begin{eqnarray}\label{Eq16} \dot{\varrho}(t)=-i[H_{\mathrm{lin}}, \varrho]+\kappa\mathcal{D}[a]\varrho+\gamma_m(n_m+1)\mathcal{D}[b]\varrho+\gamma_mn_m\mathcal{D}[b^{\dag}]\varrho+ \gamma_a\mathcal{D}[c]\varrho, \end{eqnarray} in which $\mathcal{D}[o]\varrho=o\varrho o^{\dag}-(o^{\dag}o\varrho+\varrho o^{\dag}o)/2~(o=a, b, c)$ are the standard Lindblad superoperators. According to the Heisenberg uncertainty principle, the product of the variances $\langle\delta X^2\rangle$ and $\langle\delta Y^2\rangle$ satisfies the inequality \begin{eqnarray}\label{Eq17} \langle\delta X^2\rangle\langle\delta Y^2\rangle\geq|\frac12[X, Y]|^2, \end{eqnarray} where $[X, Y]=i$. Thus, if either $\langle\delta X^2\rangle$ or $\langle\delta Y^2\rangle$ is below $1/2$, the state of the movable mirror exhibits quadrature squeezing. \begin{figure}\label{Fig3} \end{figure} In Fig.~\ref{Fig3}, we present the time evolution of the variance $\langle\delta X^2\rangle$ for the quadrature operator $X$ with the original linearized Hamiltonian in Eq.~(\ref{Eq10}).
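The structure of this computation can be made concrete with a minimal numerical sketch: the steady state of a Lindblad master equation with a Hamiltonian of the form of Eq.~(\ref{Eq14}) is the null space of the vectorized Liouvillian. The parameter values below are illustrative assumptions in units of $\omega_m$ (not those of the figures), with the detuning fixed at the resonance $\Delta_{\mathrm{eff}}=\sqrt{\tilde{\omega}_m^2-4\eta^{\prime2}}$:

```python
import numpy as np

# Truncated Fock spaces for mechanical mode b and atomic mode c.
Nb, Nc = 8, 4
ab = np.diag(np.sqrt(np.arange(1, Nb)), 1)          # annihilation op, b space
ac = np.diag(np.sqrt(np.arange(1, Nc)), 1)          # annihilation op, c space
b = np.kron(ab, np.eye(Nc))
c = np.kron(np.eye(Nb), ac)
Id = np.eye(Nb * Nc)

# Illustrative effective parameters in units of omega_m (assumptions).
wm_t, eta_p, G_eff, gm, g_eff, n_m = 1.4, 0.2, 0.2, 1e-3, 0.2, 0.0
D_eff = np.sqrt(wm_t**2 - 4 * eta_p**2)             # resonant Delta_eff

H = (wm_t * b.conj().T @ b + D_eff * c.conj().T @ c
     + G_eff * (b + b.conj().T) @ (c + c.conj().T)
     + eta_p * (b @ b + b.conj().T @ b.conj().T))

def dissipator(O, rate):
    # Column-stacked superoperator for rate * D[O].
    n = O.conj().T @ O
    return rate * (np.kron(O.conj(), O)
                   - 0.5 * np.kron(Id, n) - 0.5 * np.kron(n.T, Id))

L = (-1j * (np.kron(Id, H) - np.kron(H.T, Id))
     + dissipator(b, gm * (n_m + 1)) + dissipator(b.conj().T, gm * n_m)
     + dissipator(c, g_eff))

# Steady state: solve L vec(rho) = 0 with the trace-one constraint.
M = L.copy()
M[0, :] = Id.flatten(order="F")                     # row enforcing Tr(rho) = 1
rhs_vec = np.zeros(Id.shape[0] ** 2, dtype=complex)
rhs_vec[0] = 1.0
rho = np.linalg.solve(M, rhs_vec).reshape(Id.shape, order="F")

X = (b + b.conj().T) / np.sqrt(2)
varX = np.trace(X @ X @ rho).real - np.trace(X @ rho).real ** 2
```

For these illustrative values the computed steady-state variance $\langle\delta X^2\rangle$ falls below the vacuum limit $1/2$, i.e., the mechanical mode is squeezed.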
We find that the variance $\langle\delta X^2\rangle$ finally converges to a steady-state value below $1/2$ after a transient oscillation. Moreover, to check the validity of the adiabatic elimination of the cavity mode $a$, it is necessary to solve the effective master equation \begin{eqnarray}\label{Eq18} \dot{\varrho}_{\mathrm{eff}}(t)=-i[H_{\mathrm{eff}}, \varrho_{\mathrm{eff}}]+ \gamma_m(n_m+1)\mathcal{D}[b]\varrho_{\mathrm{eff}}+\gamma_mn_m\mathcal{D}[b^{\dag}]\varrho_{\mathrm{eff}}+\gamma_{\mathrm{eff}}\mathcal{D}[c]\varrho_{\mathrm{eff}}, \end{eqnarray} where $\gamma_{\mathrm{eff}}$ is the effective decay rate of the atoms, as given in Eq.~(\ref{Eq13}). We also plot the time evolution of the variance $\langle\delta X^2\rangle$ in this case in Fig.~\ref{Fig3} and find that the numerical results obtained from the original linearized Hamiltonian $H_{\mathrm{lin}}$ and the effective Hamiltonian $H_{\mathrm{eff}}$ agree well. Thus, in the present parameter regime, simplifying the system dynamics by adiabatically eliminating the cavity mode is valid. Next, we present the steady-state variance $\langle\delta X^2\rangle$ for the quadrature operator $X$ versus the cavity decay rate $\kappa$ and the atom-cavity coupling strength $G$ in Fig.~\ref{Fig4}. One notes that the squeezing of the movable mirror can be generated successfully even for a highly dissipative optomechanical cavity $(\kappa>\omega_m)$, as long as the coupling strength $G$ is appropriate. This is because a sufficiently strong atom-cavity coupling effectively suppresses the undesired effect of cavity dissipation on the mechanical squeezing. \begin{figure} \caption{(Color online) Steady-state variance $\langle\delta X^2\rangle$ for the quadrature operator $X$ versus the cavity decay rate $\kappa$ and atom-cavity coupling strength $G$ with mean thermal phonon number $n_m=0$.
Here the parameters are the same as those in Fig.~\ref{Fig2} and the white curve denotes the contour line of the quantum noise limit.} \label{Fig4} \end{figure} \subsubsection{The optimal effective detuning $\Delta_{\mathrm{eff}}$} In Eq.~(\ref{Eq14}), the last term describes a parametric-amplification process and plays the paramount role in the generation of squeezing, while the third term describes an effective optomechanical coupling that leads to cooling and heating of the mechanical mode simultaneously. As is well known, to reveal quantum effects such as mechanical squeezing at the macroscopic level, it is a prerequisite to suppress the heating process as much as possible. \begin{figure}\label{Fig5} \end{figure} In Fig.~\ref{Fig5}, we present the dependence of the steady-state variance $\langle\delta X^2\rangle$ for the quadrature operator $X$ on the effective detuning $\Delta_{\mathrm{eff}}$ for different mean thermal phonon numbers. We find that, as expected, the larger the mean thermal phonon number, the larger the steady-state variance $\langle\delta X^2\rangle$ becomes, but there is an optimal effective detuning point $\Delta_{\mathrm{eff}}\simeq1.4\omega_m$. This is because at this point the heating process of the mechanical mode is strongly suppressed, so that the destructive effect of thermal noise on the squeezing of the movable mirror is almost eliminated. Next, to give more insight into the physical mechanism, we analyze the optimal effective detuning $\Delta_{\mathrm{eff}}$ from the system dynamics. \begin{figure}\label{Fig6} \end{figure} We first apply the squeezing transformation $S(r)=\mathrm{exp}\left[\frac r2(b^2-b^{\dag2})\right]$ with the squeezing parameter (see Appendix \ref{App2}) \begin{eqnarray}\label{Eq19} r=\frac14\ln\left(1+\frac{4\eta^{\prime}}{\omega_m}\right), \end{eqnarray} to the effective Hamiltonian $H_{\mathrm{eff}}$ in Eq.~(\ref{Eq14}).
Under this transformation, \begin{eqnarray}\label{Eq20} S^{\dag}(r)bS(r)=\cosh rb-\sinh rb^{\dag}, ~~~~~~~~~~ S^{\dag}(r)cS(r)=c. \end{eqnarray} The transformed effective Hamiltonian is then given by \begin{eqnarray}\label{Eq21} H_{\mathrm{eff}}^{\prime}=S^{\dag}(r)H_{\mathrm{eff}}S(r) =\tilde{\omega}_m^{\prime}b^{\dag}b+\Delta_{\mathrm{eff}}c^{\dag}c+G_{\mathrm{eff}}^{\prime}(b+b^{\dag})(c+c^{\dag}), \end{eqnarray} where \begin{eqnarray}\label{Eq22} \tilde{\omega}_m^{\prime}=\omega_m\sqrt{1+\frac{4\eta^{\prime}}{\omega_m}},~~~~~~~~~~ G_{\mathrm{eff}}^{\prime}=G_{\mathrm{eff}}\left(1+\frac{4\eta^{\prime}}{\omega_m}\right)^{-\frac14}. \end{eqnarray} In the interaction picture with respect to the free part $\tilde{\omega}_m^{\prime}b^{\dag}b+\Delta_{\mathrm{eff}}c^{\dag}c$, $H_{\mathrm{eff}}^{\prime}$ in Eq.~(\ref{Eq21}) is transformed to \begin{eqnarray}\label{Eq23} \tilde{H}_{\mathrm{eff}}^{\prime}=G_{\mathrm{eff}}^{\prime} \left[e^{-i(\tilde{\omega}_m^{\prime}-\Delta_{\mathrm{eff}})t}bc^{\dag}+e^{i(\tilde{\omega}_m^{\prime}-\Delta_{\mathrm{eff}})t}b^{\dag}c+ e^{-i(\tilde{\omega}_m^{\prime}+\Delta_{\mathrm{eff}})t}bc+e^{i(\tilde{\omega}_m^{\prime}+\Delta_{\mathrm{eff}})t}b^{\dag}c^{\dag}\right]. \end{eqnarray} In Fig.~\ref{Fig6}, we show the energy-level diagram of the above Hamiltonian under the resonant condition $\Delta_{\mathrm{eff}}=\tilde{\omega}_m^{\prime}$. We find that the cooling of the mechanical mode, corresponding to the anti-Stokes process, is significantly enhanced by the resonant interaction, while the heating, corresponding to the Stokes process, is strongly suppressed since the detuning $2\tilde{\omega}_m^{\prime}$ is much larger than the coupling strength $G_{\mathrm{eff}}^{\prime}$ (in the present parameter regime, $2\tilde{\omega}_m^{\prime}/G_{\mathrm{eff}}^{\prime}\simeq32$).
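For completeness, the squeezing parameter of Eq.~(\ref{Eq19}) and the transformed frequency of Eq.~(\ref{Eq22}) can be read off by demanding that the transformed Hamiltonian contain no $(b^2+b^{\dag2})$ term. Using Eq.~(\ref{Eq20}),

```latex
S^{\dag}(r)\left[\tilde{\omega}_mb^{\dag}b+\eta^{\prime}\left(b^2+b^{\dag2}\right)\right]S(r)
=\left(\tilde{\omega}_m\cosh2r-2\eta^{\prime}\sinh2r\right)b^{\dag}b
+\left(\eta^{\prime}\cosh2r-\frac{\tilde{\omega}_m}{2}\sinh2r\right)\left(b^2+b^{\dag2}\right)+\mathrm{const}.
```

Setting the second bracket to zero gives $\tanh2r=2\eta^{\prime}/\tilde{\omega}_m$, i.e., $e^{4r}=(\tilde{\omega}_m+2\eta^{\prime})/(\tilde{\omega}_m-2\eta^{\prime})=1+4\eta^{\prime}/\omega_m$, which is Eq.~(\ref{Eq19}), while the first bracket reduces to $\sqrt{\tilde{\omega}_m^2-4\eta^{\prime2}}=\omega_m\sqrt{1+4\eta^{\prime}/\omega_m}=\tilde{\omega}_m^{\prime}$. The factor $(1+4\eta^{\prime}/\omega_m)^{-1/4}=e^{-r}$ in $G_{\mathrm{eff}}^{\prime}$ likewise follows from $S^{\dag}(r)(b+b^{\dag})S(r)=e^{-r}(b+b^{\dag})$.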
The resonant condition $\Delta_{\mathrm{eff}}=\tilde{\omega}_m^{\prime}=\omega_m\sqrt{1+\frac{4\eta^{\prime}}{\omega_m}}\simeq1.4\omega_m$ is precisely the optimal effective detuning found in Fig.~\ref{Fig5}. \begin{figure}\label{Fig7} \end{figure} To further check the cooling of the mechanical mode under the optimal effective detuning condition, it is necessary to present the evolution of the mean phonon number. Since the mechanical oscillator is initially at thermal equilibrium with its environment, it is physically reasonable to assume that the mechanical oscillator is prepared in the initial thermal state $\varrho(0)=\sum_n(n_m)^n/(n_m+1)^{n+1}|n\rangle\langle n|$ with a given mean thermal phonon number $n_m$, where $|n\rangle$ is the Fock basis. In Fig.~\ref{Fig7}, we plot the time evolution of the mean phonon number $\langle b^{\dag}b\rangle$ at the optimal effective detuning when the initial state of the mechanical oscillator is a thermal state. One can see that the final mean phonon number can be less than 1, which provides a prerequisite for revealing the squeezing effect. \subsection{The approach of covariance matrix} \subsubsection{Dynamical equation for covariance matrix} According to Eq.~(\ref{Eq2}), the set of nonlinear QLEs for the system operators $q$, $p$, $a$, and $c$ is \begin{eqnarray}\label{Eq24} \frac{dq}{dt}&=&\omega_mp, \cr\cr \frac{dp}{dt}&=&-(\omega_m+4\eta)q-\gamma_mp+g_0a^{\dag}a+\xi(t), \cr\cr \frac{da}{dt}&=&-(\kappa+i\delta_c)a+ig_0aq-iGc-iE+\sqrt{2\kappa}a_{\mathrm{in}}(t), \cr\cr \frac{dc}{dt}&=&-(\gamma_a+i\Delta_a)c-iGa+\sqrt{2\gamma_a}c_{\mathrm{in}}(t), \end{eqnarray} where $\xi(t)$ is the stochastic Hermitian Brownian noise operator describing the dissipative friction force acting on the mechanical oscillator.
Its non-Markovian correlation function is given by~\cite{2001PRA63023812} \begin{eqnarray}\label{Eq25} \langle\xi(t)\xi(t^{\prime})\rangle=\frac{\gamma_m}{2\pi\omega_m}\int\omega\Big[\coth\Big(\frac{\hbar\omega}{2k_BT}\Big)+1\Big]e^{-i\omega(t-t^{\prime})}d\omega. \end{eqnarray} However, for a high-quality mechanical oscillator ($\omega_m\gg\gamma_m$), only the resonant noise components at frequencies $\omega\sim\omega_m$ affect the dynamics of the mechanical oscillator significantly. Thus the colored spectrum of Eq.~(\ref{Eq25}) can be simplified to a Markovian process, and the correlation function becomes \begin{eqnarray}\label{Eq26} \langle\xi(t)\xi(t^{\prime})+\xi(t^{\prime})\xi(t)\rangle\simeq2\gamma_m\coth\Big(\frac{\hbar\omega_m}{2k_BT}\Big)\delta(t-t^{\prime})= 2\gamma_m(2n_m+1)\delta(t-t^{\prime}). \end{eqnarray} Applying the same linearization procedure as above, the equation of motion for the classical mean values $\langle q(t)\rangle$, $\langle p(t)\rangle$, $\langle a(t)\rangle$, and $\langle c(t)\rangle$ is given by \begin{eqnarray}\label{Eq27} \langle\dot{q}(t)\rangle&=&\omega_m\langle p(t)\rangle, \cr\cr \langle\dot{p}(t)\rangle&=&-(\omega_m+4\eta)\langle q(t)\rangle-\gamma_m\langle p(t)\rangle+g_0|\langle a(t)\rangle|^2, \cr\cr \langle\dot{a}(t)\rangle&=&-(\kappa+i\delta_c)\langle a(t)\rangle+ig_0\langle a(t)\rangle\langle q(t)\rangle-iG\langle c(t)\rangle-iE, \cr\cr \langle\dot{c}(t)\rangle&=&-(\gamma_a+i\Delta_a)\langle c(t)\rangle-iG\langle a(t)\rangle, \end{eqnarray} and the set of linearized QLEs for the quantum fluctuation operators $\delta q(t)$, $\delta p(t)$, $\delta a(t)$, and $\delta c(t)$ is \begin{eqnarray}\label{Eq28} \delta\dot{q}&=&\omega_m\delta p, \cr\cr \delta\dot{p}&=&-(\omega_m+4\eta)\delta q-\gamma_m\delta p+g_0\langle a(t)\rangle^{\ast}\delta a+g_0\langle a(t)\rangle\delta a^{\dag}+\xi(t), \cr\cr \delta\dot{a}&=&-[\kappa+i(\delta_c-g_0\langle q(t)\rangle)]\delta a+ig_0\langle a(t)\rangle\delta q-iG\delta
c+\sqrt{2\kappa}a_{\mathrm{in}}(t), \cr\cr \delta\dot{c}&=&-(\gamma_a+i\Delta_a)\delta c-iG\delta a+\sqrt{2\gamma_a}c_{\mathrm{in}}(t). \end{eqnarray} By introducing the quadrature operators for the cavity field, atoms, and their input noises: \begin{eqnarray}\label{Eq29} \delta x_1&=&(\delta a+\delta a^{\dag})/\sqrt{2},~~~~~~~~~~\delta y_1=(\delta a-\delta a^{\dag})/\sqrt{2}i, \cr\cr \delta x_2&=&(\delta c+\delta c^{\dag})/\sqrt{2},~~~~~~~~~~\delta y_2=(\delta c-\delta c^{\dag})/\sqrt{2}i, \cr\cr \delta x_1^{\mathrm{in}}&=&(a_{\mathrm{in}}+a_{\mathrm{in}}^{\dag})/\sqrt{2},~~~~~~~~~~ \delta y_1^{\mathrm{in}}=(a_{\mathrm{in}}-a_{\mathrm{in}}^{\dag})/\sqrt{2}i, \cr\cr \delta x_2^{\mathrm{in}}&=&(c_{\mathrm{in}}+c_{\mathrm{in}}^{\dag})/\sqrt{2},~~~~~~~~~~ \delta y_2^{\mathrm{in}}=(c_{\mathrm{in}}-c_{\mathrm{in}}^{\dag})/\sqrt{2}i, \end{eqnarray} and the vectors of all quadratures and noises: \begin{eqnarray}\label{Eq30} U&=&[\delta q, \delta p, \delta x_1, \delta y_1, \delta x_2, \delta y_2]^T, \cr\cr N&=&[0, \xi(t), \sqrt{2\kappa}\delta x_1^{\mathrm{in}}, \sqrt{2\kappa}\delta y_1^{\mathrm{in}}, \sqrt{2\gamma_a}\delta x_2^{\mathrm{in}}, \sqrt{2\gamma_a}\delta y_2^{\mathrm{in}}]^T, \end{eqnarray} the linearized QLEs for the quantum fluctuation operators in Eq.~(\ref{Eq28}) can be rewritten as \begin{eqnarray}\label{Eq31} \frac{dU}{dt}=A(t)U+N(t), \end{eqnarray} where $A(t)$ is a 6$\times$6 time-dependent matrix \begin{eqnarray}\label{Eq32} A(t)= \begin{bmatrix} 0~~ & \omega_m~~ & 0~~ & 0~~ & 0~~ & 0~~ \\ -(\omega_m+4\eta) & -\gamma_m~~ & G_x(t)~~ & G_y(t)~~ & 0~~ & 0~~ \\ -G_y(t)~~ & 0~~ & -\kappa~~ & \Delta_c(t)~~ & 0~~ & G~~ \\ G_x(t)~~ & 0~~ & -\Delta_c(t)~~ & -\kappa~~ & -G~~ & 0~~ \\ 0~~ & 0~~ & 0~~ & G~~ & -\gamma_a~~ & \Delta_a~~ \\ 0~~ & 0~~ & -G~~ & 0~~ & -\Delta_a~~ & -\gamma_a \end{bmatrix}. 
\end{eqnarray} Here, $\Delta_c(t)=\delta_c-g_0\langle q(t)\rangle$ is the effective time-modulated detuning, and $G_x(t)$ and $G_y(t)$ are, respectively, the real and imaginary parts of the effective optomechanical coupling $G_0(t)=\sqrt{2}g_0\langle a(t)\rangle$. Due to the linearized dynamics and the zero-mean Gaussian nature of the quantum noises, the quantum fluctuations in the stable regime evolve toward an asymptotic Gaussian state which is completely characterized by the $6\times6$ covariance matrix \begin{eqnarray}\label{Eq33} V_{k,l}=\langle U_k(t)U_l(t)+U_l(t)U_k(t)\rangle/2. \end{eqnarray} From Eqs.~(\ref{Eq31}) and (\ref{Eq33}), we can deduce the dynamical equation governing the evolution of the covariance matrix \begin{eqnarray}\label{Eq34} \dot{V}(t)=A(t)V(t)+V(t)A^T(t)+D, \end{eqnarray} where $A^T(t)$ denotes the transpose of $A(t)$ and $D=\mathrm{Diag}[0, \gamma_m(2n_m+1), \kappa, \kappa, \gamma_a, \gamma_a]$ is the noise correlation matrix. Equation~(\ref{Eq34}) is an inhomogeneous first-order differential equation for the 21 independent elements of the symmetric matrix $V$, which can be solved numerically with the initial condition $V(0)=\mathrm{Diag}[n_m+1/2, n_m+1/2, 1/2, 1/2, 1/2, 1/2]$. \subsubsection{Time evolution of variance for the mirror position} \begin{figure} \caption{(Color online) Time evolution of the real and imaginary parts of the mirror position mean value $\langle q(t)\rangle$ and the cavity mode mean value $\langle a(t)\rangle$. The parameters are the same as those in Fig.~\ref{Fig2}.} \label{Fig8} \end{figure} In Fig.~\ref{Fig8}, we show the time evolution of the real and imaginary parts of the mirror position mean value $\langle q(t)\rangle$ and the cavity mode mean value $\langle a(t)\rangle$.
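As a minimal numerical illustration of propagating Eq.~(\ref{Eq34}), consider the $(\delta q, \delta p)$ block of Eq.~(\ref{Eq32}) alone, with the couplings to the cavity and atomic modes switched off (an illustrative toy case in units of $\omega_m=1$, with assumed parameter values); the long-time result of the integration can be checked against the corresponding algebraic (Lyapunov) steady state:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_lyapunov

# Single-mode (q, p) block of the drift matrix in Eq. (32); illustrative values.
wm, eta, gam, n_m = 1.0, 0.2, 0.1, 0.5
A = np.array([[0.0, wm],
              [-(wm + 4 * eta), -gam]])
D = np.diag([0.0, 2 * gam * (n_m + 0.5)])      # gamma_m (2 n_m + 1) noise on p

def rhs(t, v):
    # dV/dt = A V + V A^T + D, flattened for the ODE solver.
    V = v.reshape(2, 2)
    return (A @ V + V @ A.T + D).ravel()

V0 = np.diag([n_m + 0.5, n_m + 0.5])
sol = solve_ivp(rhs, (0.0, 400.0), V0.ravel(), rtol=1e-10, atol=1e-12)
V_t = sol.y[:, -1].reshape(2, 2)

# Algebraic steady state from the Lyapunov equation A V + V A^T = -D.
V_ss = solve_continuous_lyapunov(A, -D)

# For this block the steady-state conditions give V_qp = 0, <dp^2> = n_m + 1/2,
# and <dq^2> = (n_m + 1/2) wm / (wm + 4 eta): the eta term stiffens the
# restoring force and reduces the position variance.
dq2 = (n_m + 0.5) * wm / (wm + 4 * eta)
```

The same propagate-then-compare check applies unchanged to the full $6\times6$ matrix $A(t)$ of Eq.~(\ref{Eq32}); only the matrix dimensions grow.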
We find that the real and imaginary parts of the mean values reach their steady states quickly and the real part is much larger than the imaginary part ($\mathrm{Re}[\langle q(t)\rangle]\gg\mathrm{Im}[\langle q(t)\rangle]$ and $\mathrm{Re}[\langle a(t)\rangle]\gg\mathrm{Im}[\langle a(t)\rangle]$). Thus, we can make the same approximations as in the above subsection \begin{eqnarray}\label{Eq35} \langle q(t)\rangle\simeq|\langle q\rangle_s|,~~~~~~~~~~\langle a(t)\rangle\simeq\langle a(t)\rangle^{\ast}\simeq|\langle a\rangle_s|, \end{eqnarray} where $|\langle q\rangle_s|$ and $|\langle a\rangle_s|$ denote the steady-state mean values of the mirror position and cavity mode, respectively. By numerically solving the dynamical equation for the covariance matrix $V$ in Eq.~(\ref{Eq34}) under the above approximations, we plot the time evolution of the variance $\langle\delta q^2\rangle$ for the mirror position in Fig.~\ref{Fig9}. From Fig.~\ref{Fig9}, one notes that the variance $\langle\delta q^2\rangle$ also finally reaches a steady-state value below $1/2$. In fact, by a similar adiabatic elimination of the cavity mode, we can obtain the dynamical equation for the reduced covariance matrix $V^{\prime}$: \begin{eqnarray}\label{Eq36} \dot{V}^{\prime}(t)=BV^{\prime}(t)+V^{\prime}(t)B^T+D^{\prime}, \end{eqnarray} where \begin{eqnarray}\label{Eq37} B= \begin{bmatrix} ~~~~~0~~~~~ & ~~~~~\omega_m~~~~~ & ~~~~~0~~~~~ & ~~~~~0~~~~~ \\ -\left(\omega_m+4\eta-\frac{2g_0^2|\langle a\rangle_s|^2}{\Delta_c}\right) & -\gamma_m & -\frac{\sqrt{2}g_0|\langle a\rangle_s|G}{\Delta_c} & 0 \\ 0 & 0 & -\gamma_a & \Delta_a-\frac{G^2}{\Delta_c} \\ -\frac{\sqrt{2}g_0|\langle a\rangle_s|G}{\Delta_c} & 0 & -(\Delta_a-\frac{G^2}{\Delta_c}) & -\gamma_a \end{bmatrix}, \end{eqnarray} and $D^{\prime}=\mathrm{Diag}[0, \gamma_m(2n_m+1), \gamma_a, \gamma_a]$. \begin{figure} \caption{(Color online) Time evolution of the variance $\langle\delta q^2\rangle$ for the mirror position.
The red solid line and blue dashed line denote, respectively, the numerical results with the full covariance matrix $V$ and the reduced covariance matrix $V^{\prime}$. The system parameters are the same as those in Fig.~\ref{Fig2}. Here the mean thermal phonon number has been set to $n_m=0$.} \label{Fig9} \end{figure} To check the validity of the dynamical equation for the reduced covariance matrix $V^{\prime}$ in Eq.~(\ref{Eq36}), we also present the time evolution of the variance $\langle\delta q^2\rangle$ in Fig.~\ref{Fig9}. The result obtained from the reduced covariance matrix $V^{\prime}$ agrees very well with that obtained from the full covariance matrix $V$. \subsubsection{Variance for the mirror position in the steady-state regime} When the system reaches the steady state, the reduced covariance matrix $V^{\prime}$ is governed by the following Lyapunov equation \begin{eqnarray}\label{Eq38} BV^{\prime}+V^{\prime}B^T=-D^{\prime}. \end{eqnarray} Equation~(\ref{Eq38}) can be solved analytically in the parameter regime of negligible mechanical damping, $\gamma_m\simeq0$, and the variance $\langle\delta q^2\rangle$ for the mirror position in the steady state is given by \begin{eqnarray}\label{Eq39} \langle\delta q^2\rangle\simeq \left[\omega_m+\frac{(\Delta_G^2+\gamma_a^2)^2}{\Omega_m(\Delta_G^2+\gamma_a^2)-G_g^2\Delta_G}\right]/4\Delta_G, \end{eqnarray} in which \begin{eqnarray} \Delta_G&=&\Delta_a-\frac{G^2}{\Delta_c}, ~~~~~~~~ \Omega_m=\omega_m+4\eta-\frac{2g_0^2|\langle a\rangle_s|^2}{\Delta_c}, ~~~~~~~~ G_g=\frac{\sqrt{2}g_0|\langle a\rangle_s|G}{\Delta_c}. \end{eqnarray} \begin{figure} \caption{(Color online) Steady-state variance $\langle\delta q^2\rangle$ for the mirror position versus $\eta$.
The blue line, red line, and yellow dots indicate, respectively, the numerical result with the full covariance matrix $V$, the numerical result with the reduced covariance matrix $V^{\prime}$, and the analytical result with the reduced covariance matrix $V^{\prime}$. The other parameters are the same as those in Fig.~\ref{Fig2}.} \label{Fig10} \end{figure} The steady-state variance $\langle\delta q^2\rangle$ for the mirror position versus $\eta$, obtained from the numerical result with the full covariance matrix $V$, the numerical result with the reduced covariance matrix $V^{\prime}$, and the analytical result with the reduced covariance matrix $V^{\prime}$, is shown in Fig.~\ref{Fig10}. For the reduced covariance matrix $V^{\prime}$, the numerical and analytical solutions agree very well. In the appropriate parameter range of $\eta$, the results obtained from the full covariance matrix $V$ and the reduced covariance matrix $V^{\prime}$ also agree well. However, as $\eta$ gradually increases, the two results begin to differ and no longer agree with each other. This is because adiabatically eliminating the cavity mode is valid only in a suitable parameter regime of $\eta$. \subsection{Comparison of the two approaches} In the above subsections, we have discussed the squeezing of the movable mirror in detail based on the master-equation and covariance-matrix approaches, respectively. In this subsection, we compare the two approaches. In the master-equation approach, we made the approximations $\langle a(t)\rangle\simeq\langle a(t)\rangle^{\ast}\simeq|\langle a\rangle_s|$ and $\langle b(t)\rangle\simeq\langle b(t)\rangle^{\ast}\simeq|\langle b\rangle_s|$.
In fact, the Hamiltonian of the exact time-dependent dynamics of the system is \begin{eqnarray}\label{Eq41} H_{\mathrm{lin}}^{\prime}&=& [\delta_c-g_0^{\prime}(\langle b(t)\rangle+\langle b(t)\rangle^{\ast})]a^{\dag}a+\omega_m^{\prime}b^{\dag}b+\Delta_ac^{\dag}c+\eta(b^2+b^{\dag2})+ \cr\cr &&G(c^{\dag}a+ca^{\dag})-g_0^{\prime}\langle a(t)\rangle^{\ast}a(b+b^{\dag})-g_0^{\prime}\langle a(t)\rangle a^{\dag}(b+b^{\dag}), \end{eqnarray} where $\langle a(t)\rangle$ and $\langle b(t)\rangle$ are the solutions of the set of nonlinear differential equations in Eq.~(\ref{Eq7}). Similarly, in the covariance-matrix approach, we made the approximations $\langle q(t)\rangle\simeq|\langle q\rangle_s|$ and $\langle a(t)\rangle\simeq\langle a(t)\rangle^{\ast}\simeq|\langle a\rangle_s|$. \begin{figure} \caption{(Color online) Time evolution of the variance $\langle\delta X^2\rangle$ for the quadrature operator $X$ (variance $\langle\delta q^2\rangle$ for the mirror position) obtained from the master equation (dynamical equation for the covariance matrix), both without and with the approximations. The system parameters are the same as those in Fig.~\ref{Fig2}.} \label{Fig11} \end{figure} To show the exact dynamics of the system explicitly and compare the results obtained from the two approaches clearly, we numerically solve the master equation in Eq.~(\ref{Eq16}) with the time-dependent Hamiltonian in Eq.~(\ref{Eq41}), as well as the dynamical equation for the covariance matrix in Eq.~(\ref{Eq34}) with the $6\times6$ time-dependent matrix $A(t)$ in Eq.~(\ref{Eq32}). In Fig.~\ref{Fig11}, we present the time evolution of the variance $\langle\delta X^2\rangle$ for the quadrature operator $X$ (variance $\langle\delta q^2\rangle$ for the mirror position) obtained from the master equation (dynamical equation for the covariance matrix), both without and with the approximations.
One can see that, although the two approaches exhibit different oscillatory behavior before reaching the steady state, they reach the same steady state at sufficiently long times. Moreover, with the approximations, both approaches reach the same steady state as the exact dynamics in the long-time limit. Therefore, the master-equation approach and the dynamical equation for the covariance matrix are completely equivalent in their stationary behavior but differ in their dynamical behavior. As far as the stationary behavior is concerned, simplifying the system dynamics with the approximations of Eqs.~(\ref{Eq9}) and (\ref{Eq35}) is therefore well justified. \section{Implementation with a circuit-QED setup}\label{Sec4} In this section, we briefly discuss the implementation of the present scheme with circuit-QED systems. A circuit-QED system is formed by a superconducting flux qubit and an integrated-element LC oscillator~\cite{2017NP134447}. Additionally, the generation of nonclassical states and the sensitive detection of the position of quantized mechanical motion have been studied in superconducting circuit systems both theoretically and experimentally~\cite{2016SB61163,2008NaturePhysics4785}. In the present scheme, the second-order term introduced in Hamiltonian (\ref{Eq1}), which originates from the coupling between the oscillator and the qubit, is essential for realizing mechanical squeezing. This oscillator-qubit coupling can be effectively implemented in circuit-QED systems~\cite{2009Nature459960,2018PRA98023821}; the generation of the additional second-order term is discussed in detail in the appendix of Ref.~\cite{2018AOP39239}. 
In addition, the other couplings involved in our scheme (the coupling between the atoms and the cavity, and the coupling between the cavity mode and the mechanical mode) can be realized by exploiting a superconducting-circuit architecture similar to that of Ref.~\cite{2016PRL116233604}. \section{Conclusions}\label{Sec5} In conclusion, using two different approaches, the master equation and the covariance matrix, we have investigated in detail the squeezing of the mirror motion in a hybrid system consisting of an atomic ensemble trapped inside a dissipative optomechanical cavity assisted by a perturbative oscillator coupling. We find that adiabatically eliminating the highly dissipative cavity mode significantly simplifies the system dynamics in both approaches and is very effective. In the master-equation approach, starting from the effective Hamiltonians in Eqs.~(\ref{Eq14}) and (\ref{Eq23}), we derive the optimal effective detuning numerically and dynamically, respectively. Under the optimal effective detuning condition, we also examine the cooling of the mechanical mode when the mechanical oscillator is initially prepared in a thermal state with a given mean thermal phonon number, and find that the mechanical mode can be cooled close to its ground state in the long-time limit, which is a prerequisite for the generation of stationary squeezing. In the covariance-matrix approach, we reduce the dynamical equation for the $6\times6$ covariance matrix to one for a $4\times4$ covariance matrix by adiabatically eliminating the highly dissipative cavity mode, which greatly simplifies the system dynamics. In this case, we obtain an approximate analytical solution for the steady-state variance of the mechanical position. Finally, we compare the two approaches and find that they are completely equivalent for the stationary dynamics. 
The present scheme may be implemented with circuit-QED systems and could benefit possible ultraprecise quantum measurements involving mechanical squeezing. \begin{center} $\mathbf{Acknowledgements}$ \end{center} This work was supported by the National Natural Science Foundation of China under Grant Nos. 61822114, 11465020, 61465013; The Project of Jilin Science and Technology Development for Leading Talent of Science and Technology Innovation in Middle and Young and Team Project under Grant (20160519022JH). \appendix \section{Adiabatic elimination of cavity mode}\label{App1} Under the approximation of Eq.~(\ref{Eq9}), the linearized QLEs in Eq.~(\ref{Eq8}) can be simplified as \begin{eqnarray}\label{Ap1} \dot{a}&=&-(\kappa+i\Delta_c)a-iGc+iG_0(b+b^{\dag})+\sqrt{2\kappa}a_{\mathrm{in}}(t), \cr\cr \dot{b}&=&-(\gamma_m+i\omega_m^{\prime})b+iG_0(a+a^{\dag})-2i\eta b^{\dag}+\sqrt{2\gamma_m}b_{\mathrm{in}}(t),\cr\cr \dot{c}&=&-(\gamma_a+i\Delta_a)c-iGa+\sqrt{2\gamma_a}c_{\mathrm{in}}(t), \end{eqnarray} where $\Delta_c=\delta_c-2g_0^{\prime}|\langle b\rangle_s|$ and $G_0=g_0^{\prime}|\langle a\rangle_s|$. The formal solution of Eq.~(\ref{Ap1}) is \begin{eqnarray}\label{Ap2} a(t)&=&a(0)e^{-(\kappa+i\Delta_c)t}+e^{-(\kappa+i\Delta_c)t}\int_0^t\left\{-iGc(\tau)+iG_0[b(\tau)+b^{\dag}(\tau)]+\sqrt{2\kappa}a_{\mathrm{in}}(\tau)\right\}e^{(\kappa+i\Delta_c)\tau}d\tau, \cr\cr b(t)&=&b(0)e^{-(\gamma_m+i\omega_m^{\prime})t}+e^{-(\gamma_m+i\omega_m^{\prime})t}\int_0^t\left\{iG_0[a(\tau)+a^{\dag}(\tau)]-2i\eta b^{\dag}(\tau)+\sqrt{2\gamma_m}b_{\mathrm{in}}(\tau)\right\}e^{(\gamma_m+i\omega_m^{\prime})\tau}d\tau, \cr\cr c(t)&=&c(0)e^{-(\gamma_a+i\Delta_a)t}+e^{-(\gamma_a+i\Delta_a)t}\int_0^t\left[-iGa(\tau)+\sqrt{2\gamma_a}c_{\mathrm{in}}(\tau)\right]e^{(\gamma_a+i\Delta_a)\tau}d\tau. 
\end{eqnarray} When the decay rate of the cavity field is much larger than the damping rate of the mechanical oscillator and the decay rate of the atoms, the cavity mode $a$ can only slightly affect the dynamics of the mechanical mode $b$ and the atomic mode $c$. The approximate expressions for modes $b$ and $c$ are therefore \begin{eqnarray}\label{Ap3} b(t)&\simeq&b(0)e^{-(\gamma_m+i\omega_m^{\prime})t}+B_{\mathrm{in}}^{\prime}(t), \cr\cr c(t)&\simeq&c(0)e^{-(\gamma_a+i\Delta_a)t}+C_{\mathrm{in}}^{\prime}(t), \end{eqnarray} where $B_{\mathrm{in}}^{\prime}(t)$ and $C_{\mathrm{in}}^{\prime}(t)$ represent the noise terms. Substituting Eq.~(\ref{Ap3}) into the expression for mode $a$ in Eq.~(\ref{Ap2}), we obtain \begin{eqnarray}\label{Ap4} a(t)&\simeq&a(0)e^{-(\kappa+i\Delta_c)t}+ e^{-(\kappa+i\Delta_c)t}\int_0^t\Big\{-iGc(0)e^{-(\gamma_a+i\Delta_a)\tau}+ \cr\cr &&iG_0\big[b(0)e^{-(\gamma_m+i\omega_m^{\prime})\tau}+b^{\dag}(0)e^{-(\gamma_m-i\omega_m^{\prime})\tau}\big]\Big\}e^{(\kappa+i\Delta_c)\tau}d\tau+A_{\mathrm{in}}^{\prime}(t) \cr\cr &=&a(0)e^{-(\kappa+i\Delta_c)t}+\frac{-iGc(0)e^{-(\gamma_a+i\Delta_a)t}}{(\kappa-\gamma_a)+i(\Delta_c-\Delta_a)}- \frac{-iGc(0)e^{-(\kappa+i\Delta_c)t}}{(\kappa-\gamma_a)+i(\Delta_c-\Delta_a)}+ \cr\cr &&\frac{iG_0b(0)e^{-(\gamma_m+i\omega_m^{\prime})t}}{(\kappa-\gamma_m)+i(\Delta_c-\omega_m^{\prime})}- \frac{iG_0b(0)e^{-(\kappa+i\Delta_c)t}}{(\kappa-\gamma_m)+i(\Delta_c-\omega_m^{\prime})}+ \cr\cr &&\frac{iG_0b^{\dag}(0)e^{-(\gamma_m-i\omega_m^{\prime})t}}{(\kappa-\gamma_m)+i(\Delta_c+\omega_m^{\prime})}- \frac{iG_0b^{\dag}(0)e^{-(\kappa+i\Delta_c)t}}{(\kappa-\gamma_m)+i(\Delta_c+\omega_m^{\prime})}+A_{\mathrm{in}}^{\prime}(t), \end{eqnarray} where $A_{\mathrm{in}}^{\prime}(t)$ denotes the noise term. 
In the parameter regimes $|\Delta_c|\gg(\omega_m^{\prime}, \Delta_a)$ and $\kappa\gg(\gamma_m, \gamma_a)$, Eq.~(\ref{Ap4}) can be simplified as \begin{eqnarray}\label{Ap5} a(t)&\simeq&a(0)e^{-(\kappa+i\Delta_c)t}+\frac{-iGc(0)e^{-(\gamma_a+i\Delta_a)t}}{(\kappa-\gamma_a)+i(\Delta_c-\Delta_a)}+ \cr\cr &&\frac{iG_0b(0)e^{-(\gamma_m+i\omega_m^{\prime})t}}{(\kappa-\gamma_m)+i(\Delta_c-\omega_m^{\prime})}+ \frac{iG_0b^{\dag}(0)e^{-(\gamma_m-i\omega_m^{\prime})t}}{(\kappa-\gamma_m)+i(\Delta_c+\omega_m^{\prime})}+A_{\mathrm{in}}^{\prime}(t), \end{eqnarray} and, by using Eq.~(\ref{Ap3}) once again, \begin{eqnarray}\label{Ap6} a(t)&\simeq&a(0)e^{-(\kappa+i\Delta_c)t}+\frac{-iGc(t)}{(\kappa-\gamma_a)+i(\Delta_c-\Delta_a)}+ \cr\cr &&\frac{iG_0b(t)}{(\kappa-\gamma_m)+i(\Delta_c-\omega_m^{\prime})}+ \frac{iG_0b^{\dag}(t)}{(\kappa-\gamma_m)+i(\Delta_c+\omega_m^{\prime})}+A_{\mathrm{in}}(t) \cr\cr &\simeq&a(0)e^{-(\kappa+i\Delta_c)t}+\frac{-iGc(t)}{\kappa+i\Delta_c}+ \frac{iG_0[b(t)+b^{\dag}(t)]}{\kappa+i\Delta_c}+A_{\mathrm{in}}(t), \end{eqnarray} where $A_{\mathrm{in}}(t)$ is the modified noise operator. Since $\kappa$ is large, the term containing $\mathrm{exp}(-\kappa t)$ in Eq.~(\ref{Ap6}) decays rapidly and can thus be safely neglected. Therefore, the cavity mode $a(t)$ can now be expressed in terms of $b(t)$, $b^{\dag}(t)$, and $c(t)$ as \begin{eqnarray}\label{Ap7} a(t)\simeq\frac{iG_0[b(t)+b^{\dag}(t)]}{\kappa+i\Delta_c}+\frac{-iGc(t)}{\kappa+i\Delta_c}+A_{\mathrm{in}}(t). 
\end{eqnarray} \section{Derivation of squeezing parameter}\label{App2} Applying the squeezing transformation $S(r)=\mathrm{exp}\left[\frac r2(b^2-b^{\dag 2})\right]$ to the effective Hamiltonian $H_{\mathrm{eff}}$ in Eq.~(\ref{Eq14}), we obtain \begin{eqnarray}\label{Ap8} H_{\mathrm{eff}}^{\prime}&=&\left[\tilde{\omega}_m(\cosh^2r+\sinh^2r)-4\eta^{\prime}\cosh r\sinh r\right]b^{\dag}b+ \Delta_{\mathrm{eff}}c^{\dag}c+G_{\mathrm{eff}}(\cosh r-\sinh r)\times \cr\cr &&(b+b^{\dag})(c+c^{\dag})+\left[-\tilde{\omega}_m\cosh r\sinh r+\eta^{\prime}(\cosh^2r+\sinh^2r)\right](b^2+b^{\dag2}). \end{eqnarray} By setting $-\tilde{\omega}_m\cosh r\sinh r+\eta^{\prime}(\cosh^2r+\sinh^2r)=0$, the squeezing parameter $r$ can be obtained \begin{eqnarray}\label{Ap9} r=\frac14\ln\left(1+\frac{4\eta^{\prime}}{\omega_m}\right). \end{eqnarray} Thus, \begin{eqnarray} \omega_m^{\prime}&=&\tilde{\omega}_m(\cosh^2r+\sinh^2r)-4\eta^{\prime}\cosh r\sinh r=\omega_m\sqrt{1+\frac{4\eta^{\prime}}{\omega_m}}, \cr\cr G_{\mathrm{eff}}^{\prime}&=&G_{\mathrm{eff}}(\cosh r-\sinh r)=G_{\mathrm{eff}}\left(1+\frac{4\eta^{\prime}}{\omega_m}\right)^{-\frac14}. \end{eqnarray} \end{document}
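The algebra behind Eq.~(\ref{Ap9}) and the transformed frequency can be spot-checked numerically. The relation $\tilde{\omega}_m=\omega_m+2\eta^{\prime}$ used below is an assumption inferred from the stated closed forms, since the definition of $\tilde{\omega}_m$ lies in a part of the paper outside this excerpt; the parameter values are arbitrary:

```python
import numpy as np

eta, omega = 0.07, 1.3       # arbitrary positive test values for eta' and omega_m
omega_t = omega + 2 * eta    # assumed relation between tilde-omega_m and omega_m

# Claimed squeezing parameter, Eq. (A9).
r = 0.25 * np.log(1 + 4 * eta / omega)

# Coefficient of (b^2 + b^{dag 2}) in Eq. (A8): it must vanish at this r.
coeff = -omega_t * np.cosh(r) * np.sinh(r) + eta * (np.cosh(r)**2 + np.sinh(r)**2)
print(abs(coeff))            # ~0: the squeezing condition is satisfied

# The transformed mechanical frequency reduces to omega*sqrt(1 + 4*eta/omega).
omega_p = omega_t * (np.cosh(r)**2 + np.sinh(r)**2) - 4 * eta * np.cosh(r) * np.sinh(r)
print(omega_p, omega * np.sqrt(1 + 4 * eta / omega))
```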
\begin{document} \title[Weighted homogeneous Siciak-Zaharyuta extremal functions] {A note on weighted homogeneous Siciak-Zaharyuta extremal functions} \author[B. Drinovec Drnov\v sek]{Barbara Drinovec Drnov\v sek} \address{Faculty of Mathematics and Physics\\ University of Ljubljana\\ Institute of Mathematics, Physics and Mechanics\\ Jadranska 19, 1000 Ljubljana, Slovenia} \email{[email protected]} \thanks{B. Drinovec Drnov\v{s}ek was partially supported by grant P1-0291, Republic of Slovenia.} \author[R. Sigurdsson]{Ragnar Sigurdsson} \address{Department of Mathematics, School of Engineering and Natural Sciences\\ University of Iceland\\ IS-107 Reykjav\'ik, Iceland} \email{[email protected]} \date{\today} \begin{abstract} We prove that for any given upper semicontinuous function $\varphi$ on an open subset $E$ of ${\mathbb C}^n\setminus\{0\}$ such that the complex cone generated by $E$ minus the origin is connected, the homogeneous Siciak-Zaharyuta function with weight $\varphi$ on $E$ can be represented as an envelope of a disc functional. \end{abstract} \subjclass[2010]{Primary 32U35; Secondary 32U15, 32U05} \keywords{Siciak's homogeneous extremal function, envelope of a disc functional} \maketitle {\bf Introduction.} Let ${\mathcal L}$ denote the Lelong class on ${\mathbb C}^n$ and ${\mathcal L}^h$ the subclass of functions $u$ which are {\it logarithmically homogeneous}. Let $\varphi\colon E\to \overline {\mathbb R}$ be a function on a subset $E$ of ${\mathbb C}^n$ taking values in the extended real line $\overline {\mathbb R}$. The {\it Siciak-Zaharyuta extremal function $V_{E,\varphi}$ with weight $\varphi$} is defined by $$ V_{E,\varphi}=\sup \{u \in {\mathcal L}\,;\, u|E \leq \varphi\}. $$ The {\it homogeneous Siciak-Zaharyuta extremal function $V_{E,\varphi}^h$ with weight $\varphi$} is defined similarly with ${\mathcal L}^h$ in the role of ${\mathcal L}$. 
In the special case when $\varphi=0$ we only write $V_E$ (and $V_E^h$) and we call this function the ({\it homogeneous}) {\it Siciak-Zaharyuta extremal function for the set $E$}. The function $V_E$ ($V_E^h$) is also called the ({\it homogeneous}) {\it pluricomplex Green function for $E$ with pole at infinity}. \begin{theorem} \label{th:main} Let $\varphi\colon E\to {\mathbb R}\cup\{-\infty\}$ be an upper semicontinuous function on an open subset $E$ of ${\mathbb C}^n\setminus\{0\}$. Assume that there exists a function in ${\mathcal L}^h$ dominated by $\varphi$ on $E$. Then the largest logarithmically homogeneous function ${\mathbb C} E\to {\mathbb R}\cup\{-\infty\}$ dominated by $\varphi$ on $E$ is upper semicontinuous on ${\mathbb C}^* E$ and it is of the form $\log\varrho_{E,\varphi}$, where \begin{equation} \label{eq:1.0} \varrho_{E,\varphi}(z)=\inf\{|\lambda|e^{\varphi(z/\lambda)} \,;\, \lambda\in {\mathbb C}^*, \ z/\lambda \in E\}, \qquad z\in {\mathbb C}^* E. \end{equation} If ${\mathbb C}^*E$ is connected, then for every $z\in {\mathbb C}^n$ \begin{align} \nonumber V_{E,\varphi}^h(z) =\inf\Big\{ \int_{\mathbb T} \log\varrho_{E,\varphi}(f_1,\dots,f_n) \, d\sigma \,;\, f\in {\mathcal O}(\overline {\mathbb D},\mathbb P ^n),\, f=[f_0:\cdots:f_n],\\ f({\mathbb T})\subset {\mathbb C}^*E , \, f_0(0)=1, \, (f_1(0),\dots,f_n(0))=z. \Big\} \label{eq:1.1} \end{align} If ${\mathbb C} E={\mathbb C}^n$, then for every $z\in {\mathbb C}^n$ \begin{equation} \label{eq:1.2} V_{E,\varphi}^h(z) =\inf\Big\{ \int_{\mathbb T} \log\varrho_{E,\varphi}\circ f \, d\sigma \,;\, f\in {\mathcal O}(\overline {\mathbb D},\mathbb C ^n),\,f(0)=z\Big\}. 
\end{equation} \end{theorem} {\it A disc envelope formula} is a formula where the values of a function $F$ defined on a complex space $X$ with values on the extended real line $\overline {\mathbb R}$ are given as $F(z)=\inf\{H(f)\,;\, f\in {\mathcal B}(z)\}$, where $H$ is {\it disc functional}, i.e., a function defined on some subset ${\mathcal A}$ of ${\mathcal O}({\mathbb D},X)$, the set of {\it analytic discs} in $X$, with values on $\overline {\mathbb R}$, $\mathcal B$ is a subclass of ${\mathcal A}$, and ${\mathcal B}(z)$ consists of all of $f\in \mathcal B$ with {\it center} $z=f(0)$. The formula (\ref{eq:1.1}) is an example of a disc envelope formula, where ${\mathcal A}$ consists of all closed analytic discs with value in the projective space, i.e., elements $f$ in ${\mathcal O}(\overline {\mathbb D},\mathbb P ^n)$ which map the unit circle ${\mathbb T}$ into ${\mathbb C}^*E$, $H(f)$ is the integral, and ${\mathcal B}$ is the subset of ${\mathcal A}$ consisting of discs with $f_0(0)=1$. We identify a point $[1:z]\in \mathbb P ^n$ with the point $z\in {\mathbb C}^n$. For general information on the Siciak-Zaharyuta extremal function see Siciak \cite{Sic:1961,Sic:1962,Sic:1981,Sic:1982,Sic:2011} and Zaharyuta \cite{Zah:1976}. The first disc envelope formula for $V_E$ was proved by Lempert in the case when $E$ is an open convex subset of ${\mathbb C}^n$ with real analytic boundary. (The proof is given in Momm \cite[Appendix]{Mom:1996}.) L\'arusson and Sigurdsson \cite{LarSig:2005} proved disc envelope formulas for $V_E$ for open connected subsets $E$ of ${\mathbb C}^n$. Magn\'usson and Sigurdsson \cite{MagSig:2007} generalized this result and obtained a disc formula for $V_{E,\varphi}$ in the case when $\varphi$ is an upper semicontinuous function on an open connected subset $E$ of ${\mathbb C}^n$. 
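Formula (\ref{eq:1.0}) can be illustrated numerically in the simplest case $\varphi=0$: for $E$ the open unit ball of ${\mathbb C}^n$ minus the origin, the infimum over $\lambda$ returns $\varrho_E(z)=|z|$, the Minkowski function of the ball (cf. the Observation at the end of the paper). A minimal sketch; the grid over $|\lambda|$ is purely illustrative:

```python
import numpy as np

def rho_ball(z, n_grid=4000):
    """Approximate rho_{E,0}(z) = inf{|lam| : z/lam in E} for E the open
    unit ball of C^n minus the origin, scanning a grid of |lam| values.
    Only |lam| matters here, since E is invariant under rotations of lam."""
    norm_z = np.linalg.norm(z)
    lam = np.linspace(1e-6, 10.0, n_grid)   # illustrative grid of |lambda|
    admissible = norm_z / lam < 1.0         # z/lam lies in the punctured unit ball
    return lam[admissible].min()

z = np.array([0.3 + 0.4j, 1.2 - 0.5j])      # a point of C^2 with |z| < 10
print(rho_ball(z), np.linalg.norm(z))       # both approx |z|, the Minkowski function
```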
Drinovec Drnov\v{s}ek and Forstneri\v{c} \cite{DrnFor:2011a} proved disc envelope formulas for $V_E$ for open subsets $E$ of an irreducible and locally irreducible algebraic subvariety of ${\mathbb C}^n$. Recently, Magn\'usson \cite{Mag:2013} established disc envelope formulas for the global extremal function in projective space. {\bf Acknowledgement.} This paper was written while the second author was visiting University of Ljubljana in the autumn of 2012. He would like to thank the members of the Department of Mathematics for their great hospitality and for many interesting and helpful discussions. {\bf Notation.} Let ${\mathbb D}$ denote the unit disc in ${\mathbb C}$, ${\mathbb T}$ the unit circle, and $\sigma$ the arc length measure on ${\mathbb T}$ normalized to $1$. An analytic disc is a holomorphic map $f\colon {\mathbb D}\to X$, where $X$ is some complex space. We let ${\mathcal O}({\mathbb D},X)$ denote the set of all analytic discs. We say that the disc is closed if it extends as a holomorphic map to some neighbourhood of the closed unit disc $\overline {\mathbb D}$ with values in $X$ and we let ${\mathcal O}(\overline {\mathbb D},X)$ denote the set of all closed analytic discs in $X$. The point $z=f(0)\in X$ is called the center of $f$. For a subset $X$ of ${\mathbb C}^n$ we let ${\operatorname{{\mathcal{USC}}}}(X)$ denote the set of all upper semicontinuous functions on $X$, and for open subset $U$ of ${\mathbb C}^n$ we denote by ${\operatorname{{\mathcal{PSH}}}}(U)$ the set of all plurisubharmonic functions on $U$. The Lelong class ${\mathcal L}$ consists of all $u\in {\operatorname{{\mathcal{PSH}}}}({\mathbb C}^n)$ such that $u-\log^+|{\mathbf \cdot}|$ is bounded above and ${\mathcal L}^h$ is the subclass of all logarithmically homogeneous functions, i.e., functions satisfying $u(\lambda z)=u(z)+\log|\lambda|$ for $\lambda\in {\mathbb C}^*$ and $z\in {\mathbb C}^n$. Observe that every such function takes the value $-\infty$ at the origin. 
For every subset $E$ of ${\mathbb C}^n$ we set ${\mathbb C} E=\{\lambda z\,;\, \lambda\in {\mathbb C}, z\in E\}$, ${\mathbb C}^* E=\{\lambda z\,;\, \lambda\in {\mathbb C}^*, z\in E\}$ and we call ${\mathbb C} E$ the complex cone generated by $E$. Note that complex cones are suitable sets for the domains of definition of logarithmically homogeneous functions. Let $\mathbb P ^n$ denote the $n$-dimensional projective space, $\pi\colon {\mathbb C}^{n+1}\setminus\{0\}\to \mathbb P ^n$ the natural projection, $(z_0,\dots,z_n)\mapsto [z_0:\cdots:z_n]$, and identify ${\mathbb C}^n$ with the subset of all $[z_0:\cdots:z_n]$ with $z_0\neq 0$ and, in particular, the point $z\in {\mathbb C}^n$ with $[1:z]\in \mathbb P ^n$. {\it The hyperplane at infinity} is $H_\infty=\pi(Z_0\setminus \{0\}\big)$, where $Z_0$ is the hyperplane in ${\mathbb C}^{n+1}$ defined by the equation $z_0=0$. Then $\mathbb P ^n={\mathbb C}^n\cup H_\infty$. {\bf Review of a few results.} Assume that $\psi\colon X\to {\mathbb R}\cup\{-\infty\}$ is a measurable function on a subset $X$ of ${\mathbb C}^n$, such that there is $u\in {\mathcal L}$ satisfying $u|X\leq \psi$. It is an easy observation that a function $u\in {\operatorname{{\mathcal{PSH}}}}({\mathbb C}^n)$ is in ${\mathcal L}$ if and only if the function \begin{equation*} (z_0,\dots,z_n)\mapsto u(z_1/z_0,\dots,z_n/z_0)+\log|z_0| \end{equation*} extends as a plurisubharmonic function from ${\mathbb C}^{n+1}\setminus Z_0$ to ${\mathbb C}^{n+1}\setminus \{0\}$. Let $v$ denote this extension. Take $f=[f_0:\cdots:f_n]\in {\mathcal O}(\overline {\mathbb D},\mathbb P ^n)$ with $f_0(0)=1$, $(f_1(0),\dots,f_n(0))=z$, satisfying $f({\mathbb T})\subset X$, and define $\tilde f=(f_0,\dots,f_n)\in {\mathcal O}(\overline {\mathbb D},{\mathbb C}^{n+1}\setminus\{0\})$. 
Then the subaverage property of $v\circ \tilde f$ and the Riesz representation formula applied to $\log|f_0|$ give (see \cite[p.~243]{MagSig:2007}) \begin{align*} u(z)&=u(f_1(0),\dots,f_n(0))+\log|f_0(0)| =v\circ \tilde f(0) \\ &\leq \int_{\mathbb T} u(f_1/f_0,\dots,f_n/f_0)\, d\sigma+\int_{\mathbb T} \log|f_0|\, d\sigma\\ &\leq \int_{\mathbb T} \psi(f_1/f_0,\dots,f_n/f_0)\, d\sigma-\sum_{a\in f^{-1}(H_\infty)}m_{f_0}(a) \log|a| . \end{align*} For an open connected $X\subset {\mathbb C}^n$ and $\psi\in {\operatorname{{\mathcal{USC}}}}(X)$, Magn{\'u}sson and Sigurdsson \cite[Theorem~2]{MagSig:2007} proved that for every $z\in {\mathbb C}^n$ \begin{align} \nonumber V_{X,\psi}(z)=\inf\big\{ -\sum_{a\in f^{-1}(H_\infty)}m_{f_0}(a) \log|a| +\int_{\mathbb T} \psi(f_1/f_0,\dots,f_n/f_0)\, d\sigma \,;\, \\ f\in {\mathcal O}(\overline {\mathbb D},\mathbb P ^n),\ f({\mathbb T})\subset X, \ f_0(0)=1, \ (f_1(0),\dots,f_n(0))=z \label{eq:2.3} \big\}. \end{align} Our main result, Theorem~\ref{th:main}, will follow from this formula and the following \begin{proposition} \label{prop:2.1} Let $\varphi\colon E\to {\mathbb R}\cup\{-\infty\}$ be a function on a subset $E \subset {\mathbb C}^n\setminus\{0\}$ such that there exists $u\in {\mathcal L}^h$ satisfying $u|E\leq \varphi$. Let $\tilde \varphi\colon {\mathbb C} E\to {\mathbb R}\cup\{-\infty\}$ be the supremum of all logarithmically homogeneous functions on ${\mathbb C} E$ dominated by $\varphi$ on $E$. Then the following hold: \begin{enumerate} \item[(i)] $\tilde \varphi$ is logarithmically homogeneous on ${\mathbb C} E$ and for every $z\in {\mathbb C}^* E$ \begin{equation} \label{eq:prop(i)} \tilde \varphi(z)=\inf \{ \varphi(\lambda z)-\log |\lambda| \,;\, \lambda\in {\mathbb C}^* \text{ and } \lambda z \in E \}, \end{equation} \item[(ii)] $V_{E,\varphi}^h=V_{E,\tilde\varphi}^h =V_{{\mathbb C}^* E,\tilde \varphi}^h$. 
\end{enumerate} If, in addition to the above, ${\mathbb C}^* E$ is nonpluripolar and $\varphi\in {\operatorname{{\mathcal{USC}}}}(E)$ then \begin{enumerate} \item[(iii)] $\tilde \varphi\in {\operatorname{{\mathcal{USC}}}}({\mathbb C}^* E)$ and $V_{E,\varphi}^h=V_{{\mathbb C}^* E,\tilde \varphi}$, \item[(iv)] if ${\mathbb C} E={\mathbb C}^n$, then $\tilde \varphi\in {\operatorname{{\mathcal{USC}}}}({\mathbb C}^n)$ and $$V_{E,\varphi}^h=\sup \{u \in {\operatorname{{\mathcal{PSH}}}}({\mathbb C}^n)\,;\, u \leq \tilde\varphi\}.$$ \end{enumerate} \end{proposition} \begin{proof} (i) It is easy to see that the supremum of any family of logarithmically homogeneous functions defined on a complex cone is a logarithmically homogeneous function provided the family is bounded from above at any point of the cone. Take $z\in {\mathbb C}^* E$ and choose $\lambda\in {\mathbb C}^*$ such that $\lambda z\in E$. For any logarithmically homogeneous function $u$ on ${\mathbb C} E$ dominated by $\varphi$ on $E$ we have \begin{equation} \label{eq:proofprop} u(z)=u(\lambda z)-\log |\lambda|\le \varphi(\lambda z)-\log |\lambda| \end{equation} which implies that the family is bounded from above at $z$. Since all logarithmically homogeneous functions take the value $-\infty$ at the origin the family is bounded from above at any point of the cone. Let $\psi$ denote the function on ${\mathbb C}^*E$ whose value at $z$ is given by the right hand side of the equation (\ref{eq:prop(i)}). For a logarithmically homogeneous function $u$ on ${\mathbb C} E$, dominated by $\varphi$ on $E$, we have $u(z)\le \varphi(\lambda z)-\log |\lambda|$ for any $\lambda\in {\mathbb C}^*$ such that $\lambda z\in E$ by (\ref{eq:proofprop}). Taking infimum over all $\lambda\in{\mathbb C}^*$ with $\lambda z\in E$ shows that $u\leq \psi$ on ${\mathbb C}^* E$. Hence $\tilde \varphi \leq \psi$ on ${\mathbb C}^* E$. 
To prove the converse inequality, note that \begin{align}\label{eq:prop1} \psi(\mu z)&=\inf \{ \varphi(\lambda\mu z)-\log |\lambda| \,;\, \lambda\in {\mathbb C}^* \text{ and } \lambda\mu z \in E \}\\ \nonumber &=\inf \{ \varphi(\lambda\mu z)-\log |\lambda\mu| \,;\, \lambda\in {\mathbb C}^* \text{ and } \lambda\mu z \in E \}+\log |\mu|\\ \nonumber &=\psi(z)+\log |\mu| \end{align} for any $z\in{\mathbb C}^*E$ and $\mu\in {\mathbb C}^*$; thus the map $\psi$ is logarithmically homogeneous. Since $\psi\le \varphi$ on $E$ we get $\psi\le \tilde \varphi$. (ii) Since $\varphi\geq \tilde\varphi$ on $E$ and $E\subset {\mathbb C}^* E$ we have $V_{E,\varphi}^h\geq V_{E,\tilde\varphi}^h \geq V_{{\mathbb C}^* E,\tilde \varphi}^h$. To prove the two equalities, we take $u\in {\mathcal L}^h$ with $u|E\leq \varphi$. By (i) we obtain $u\leq \tilde \varphi$ on ${\mathbb C}^* E$, which implies $V_{{\mathbb C}^* E,\tilde \varphi}^h\geq V_{E,\varphi}^h$. (iii) To prove that $\tilde \varphi$ is upper semicontinuous, take $z_0\in {\mathbb C}^* E$ and $c>\tilde \varphi(z_0)$. We need to show that $c>\tilde \varphi(z)$ for all $z$ in some neighbourhood $U$ of $z_0$. We choose $\lambda_0\in {\mathbb C}^*$ such that $\lambda_0z_0\in E$ and such that $c>\varphi(\lambda_0z_0)-\log|\lambda_0|$. Since $\varphi\in {\operatorname{{\mathcal{USC}}}}(E)$ there exists an open neighbourhood $U$ of $z_0$ such that $\lambda_0z\in E$ and $c>\varphi(\lambda_0z)-\log|\lambda_0|$ for all $z\in U$. By (i) we have $c>\tilde \varphi(z)$ for all $z\in U$. Since ${\mathcal L}^h\subset {\mathcal L}$ we have $V_{{\mathbb C}^* E,\tilde \varphi}^h \leq V_{{\mathbb C}^* E,\tilde \varphi}$. To prove the opposite inequality, we take $u\in {\mathcal L}$ such that $u\leq \tilde \varphi$ on ${\mathbb C}^* E$. Then $u(\lambda z)-\log|\lambda|\leq \tilde\varphi(\lambda z)-\log|\lambda|=\tilde\varphi(z)$ for all $z\in {\mathbb C}^* E$ and $\lambda\in {\mathbb C}^*$. 
Let $v$ be the upper semicontinuous regularization of the function $\sup\{u(\lambda{\mathbf \cdot})-\log|\lambda| \,;\, \lambda\in {\mathbb C}^*\}$ on ${\mathbb C}^n$. We have $u\leq v\leq \tilde \varphi$ on ${\mathbb C}^* E$ and, since ${\mathbb C}^*E$ is nonpluripolar and $\tilde\varphi$ is locally bounded above on ${\mathbb C}^*E$, we have $v\in {\mathcal L}$. A calculation similar to that in (\ref{eq:prop1}) shows that $v$ is logarithmically homogeneous, which proves the opposite inequality. (iv) The fact that $\tilde \varphi$ is upper semicontinuous at $0$ follows easily from the fact that $\tilde \varphi$ is bounded from above on the unit sphere and that it is logarithmically homogeneous. By (iii) we get $V_{E,\varphi}^h=V_{{\mathbb C}^* E,\tilde \varphi}$ and it is easy to see that in the case ${\mathbb C} E={\mathbb C}^n$ the latter equals $V_{{\mathbb C}^n,\tilde \varphi}$. Let $P_{\tilde \varphi}$ denote the function whose value at $z$ is given by the right-hand side of the equation in (iv). Since ${\mathcal L}\subset {\operatorname{{\mathcal{PSH}}}}({\mathbb C}^n)$ it follows that $V_{{\mathbb C}^n,\tilde \varphi} \leq P_{\tilde\varphi}$. To prove the opposite inequality, it is enough to show that $P_{\tilde \varphi} \in {\mathcal L}$. Since $\tilde \varphi\in {\operatorname{{\mathcal{USC}}}}({\mathbb C}^n)$ it follows that $P_{\tilde \varphi}$ is the largest plurisubharmonic function on ${\mathbb C}^n$ dominated by $\tilde\varphi$. By upper semicontinuity the map $\tilde \varphi$ is bounded from above on the unit sphere in ${\mathbb C}^n$ by some constant $M\in {\mathbb R}$. Since $\tilde\varphi$ is logarithmically homogeneous we get $$P_{\tilde \varphi}(\lambda z)\le \tilde\varphi(\lambda z)\leq \log|\lambda|+M=\log|\lambda z|+M$$ for any $z\in{\mathbb C}^n$, $|z|=1$, and $\lambda\in{\mathbb C}^*$. It follows that $P_{\tilde \varphi} \in {\mathcal L}$. 
\end{proof} {\it Proof of Theorem \ref{th:main}.} By Proposition \ref{prop:2.1} the largest logarithmically homogeneous function $\tilde \varphi\colon{\mathbb C} E\to {\mathbb R}\cup\{-\infty\}$ dominated by $\varphi$ on $E$ is upper semicontinuous on ${\mathbb C}^* E$ and $\varrho_{E,\varphi}=e^{\tilde \varphi(z)} =\inf\{ |\lambda|e^{\varphi(z/\lambda)} \,;\, \lambda\in {\mathbb C}^*, \ z/\lambda\in E\}$ which proves (\ref{eq:1.0}). If we take $X={\mathbb C}^* E$ and $\psi=\tilde \varphi$ in (\ref{eq:2.3}), then logarithmic homogeneity of $\tilde \varphi$ on ${\mathbb C}^* E$ implies that $$ \int_{\mathbb T} \tilde\varphi(f_1/f_0,\dots,f_n/f_0)\, d\sigma =\int_{\mathbb T} \tilde\varphi(f_1,\dots,f_n)\, d\sigma -\int_{\mathbb T} \log|f_0| \, d\sigma. $$ If $f_0(0)=1$, then the Riesz representation formula gives $$ \sum_{a\in f^{-1}(H_\infty)}m_{f_0}(a) \log|a| + \int_{\mathbb T} \log|f_0| \, d\sigma=0, $$ which implies that the right hand side of (\ref{eq:2.3}) reduces to \begin{align*} V_{{\mathbb C}^* E,\tilde\varphi}(z)=\inf\Big\{& \int_{\mathbb T} \tilde\varphi(f_1,\dots,f_n) \, d\sigma \,;\, f\in {\mathcal O}(\overline {\mathbb D},\mathbb P ^n),\\ & f({\mathbb T})\subset {\mathbb C}^*E , \ f_0(0)=1,\ (f_1(0),\dots,f_n(0))=z \Big\}, \end{align*} thus (\ref{eq:1.1}) follows from Proposition \ref{prop:2.1} (iii). If ${\mathbb C} E={\mathbb C}^n$ then Proposition \ref{prop:2.1} (iv) and Poletsky theorem \cite{Pol:1991,Pol:1993} imply \begin{align*} V_{E,\varphi}^h &=\sup \{u \in {\operatorname{{\mathcal{PSH}}}}({\mathbb C}^n)\,;\, u \leq \tilde\varphi\}\\ &=\inf\Big\{ \int_{\mathbb T} \log\varrho_{E,\varphi}\circ f \, d\sigma \,;\, f\in {\mathcal O}(\overline {\mathbb D},\mathbb C ^n),\,f(0)=z\Big\} \end{align*} which proves (\ref{eq:1.2}). $\square$ {\bf Observation.} \ In the special case $\varphi=0$ we write $\varrho_E$ for $\varrho_{E,\varphi}$. The function $\varrho_E$ is absolutely homogeneous of degree $1$, i.e., $\varrho_E(z\zeta)=|z|\varrho_E(\zeta)$. 
Thus, if $E$ is a balanced domain, i.e., $\overline{\mathbb D} E=E$, then $\varrho_E$ is its Minkowski function. \end{document}
\begin{document} \title[Sampling with positive definite kernels]{Sampling with positive definite kernels and an associated dichotomy} \author{Palle Jorgensen} \address{(Palle E.T. Jorgensen) Department of Mathematics, The University of Iowa, Iowa City, IA 52242-1419, U.S.A. } \email{[email protected]} \urladdr{http://www.math.uiowa.edu/\textasciitilde{}jorgen/} \author{Feng Tian} \address{(Feng Tian) Department of Mathematics, Hampton University, Hampton, VA 23668, U.S.A.} \email{[email protected]} \begin{abstract} We study classes of reproducing kernels $K$ on general domains; these are kernels which arise commonly in machine learning models; models based on certain families of reproducing kernel Hilbert spaces. They are the positive definite kernels $K$ with the property that there are countable discrete sample-subsets $S$; i.e., proper subsets $S$ having the property that every function in $\mathscr{H}\left(K\right)$ admits an $S$-sample representation. We give a characterization of kernels which admit such non-trivial countable discrete sample-sets. A number of applications and concrete kernels are given in the second half of the paper. \end{abstract} \subjclass[2000]{Primary 47L60, 46N30, 46N50, 42C15, 65R10, 05C50, 05C75, 31C20; Secondary 46N20, 22E70, 31A15, 58J65, 81S25} \keywords{Reproducing kernel Hilbert space, frames, analysis/synthesis, discrete analysis, interpolation, reconstruction, Gaussian free fields, distribution of point-masses, discrete Green's function, non-uniform sampling, optimization, covariance.} \maketitle \tableofcontents{} \section{Introduction} In the theory of non-uniform sampling, one studies Hilbert spaces consisting of signals, understood in a very general sense. One then develops analytic tools and algorithms, allowing one to draw inference for an ``entire'' (or global) signal from partial information obtained from carefully chosen distributions of sample points. 
While the better known and classical sampling algorithms (Shannon and others) are based on interpolation, modern theories go beyond this. An early motivation is the work of Henry Landau; see e.g., \cite{MR0129065,MR0140733,MR0206615,MR0222554,doi:10.1137/0144089,MR799420}. In this setting, it is possible to make precise the notion of ``average sampling rates'' in general configurations of sample points. (See also \cite{MR2587581,MR2868037}.) When a positive definite kernel $K$ is given, we denote by $\mathscr{H}\left(K\right)$ the associated reproducing kernel Hilbert space (RKHS). In the present paper we study classes of reproducing kernels $K$ on general domains; such kernels arise commonly in machine learning models based on reproducing kernel Hilbert spaces (see e.g., \cite{MR3450534}). We focus on kernels with the property that there are non-trivial restrictions to countable discrete sample subsets $S$ such that every function in $\mathscr{H}\left(K\right)$ has an $S$-sample representation. In this general framework, we study properties of positive definite kernels $K$ with respect to sampling from ``small'' subsets, applied to all functions in the associated Hilbert space $\mathscr{H}\left(K\right)$. We are motivated by concrete kernels which are used in a number of applications, for example, on one extreme, the Shannon kernel for band-limited functions, which admits many sampling realizations; and on the other, the covariance kernel of Brownian motion which has no non-trivial countable discrete sample subsets. We offer an operator theoretic condition which explains, in a general context, this dichotomy. Our study continues our earlier papers on reproducing kernels and their restrictions to countable discrete subsets; see e.g., \cite{zbMATH06664785,MR3402823,MR3450534,2015arXiv150202549J}, and also \cite{MR2810909,MR2591839,MR2488871,MR2228737,MR2186447}. 
A reproducing kernel Hilbert space (RKHS) is a Hilbert space $\mathscr{H}$ of functions on a prescribed set, say $T$, with the property that point-evaluation for functions $f\in\mathscr{H}$ is continuous with respect to the $\mathscr{H}$-norm. They are called kernel spaces because, for every $t\in T$, the point-evaluation $f\left(t\right)$ must then be given as an $\mathscr{H}$-inner product of $f$ and a vector $K_{t}$ in $\mathscr{H}$, called the kernel. The RKHSs have been studied extensively since the pioneering papers by Aronszajn \cite{Aro43,Aro48}. They further play an important role in the theory of partial differential operators (PDO); for example as Green's functions of second order elliptic PDOs \cite{Nel57,HKL14}. Other applications include engineering, physics, machine-learning theory \cite{KH11,MR2488871,CS02}, stochastic processes \cite{AD93,ABDdS93,AD92,AJSV13,MR3251728}, numerical analysis, and more \cite{MR2089140,MR2607639,MR2913695,MR2975345,MR3091062,MR3101840,MR3201917,Shawe-TaylorCristianini200406,SchlkopfSmola200112}. An illustration from \emph{neural networks}: An Extreme Learning Machine (ELM) is a neural network configuration in which a hidden layer of weights is randomly sampled \cite{RW06}, and the object is then to determine analytically the resulting output-layer weights. Hence an ELM may be thought of as an approximation to a network with an infinite number of hidden units.
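The finite-dimensional mechanics behind this definition can be checked directly: on a finite set, the span of the kernel sections $K_{t}$ carries the inner product determined by the Gram matrix, and point evaluation is recovered by pairing with $K_{t}$. A minimal numerical sketch (the Gaussian kernel and the sample points are illustrative choices, not taken from the paper):

```python
import numpy as np

# Illustrative positive definite kernel (Gaussian) on a finite ground set T.
def K(s, t):
    return np.exp(-0.5 * (s - t) ** 2)

T = np.array([0.0, 0.7, 1.5, 2.2])          # finite ground set
G = K(T[:, None], T[None, :])               # Gram matrix (K(t_i, t_j))

# A function in the span of the kernel sections: f = sum_y d_y K(., y).
d = np.array([1.0, -2.0, 0.5, 3.0])
f_vals = G @ d                              # the values f(t_i), t_i in T

# Reproducing property: f(t) = <K_t, f>_H = sum_y d_y K(t, y).
for i, t in enumerate(T):
    inner = K(t, T) @ d                     # pairing of f with K_t
    assert abs(inner - f_vals[i]) < 1e-12
```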
Given a positive definite kernel $K:T\times T\rightarrow\mathbb{C}$ (or $\mathbb{R}$ for simplification), there are several notions and approaches to sampling (i.e., an algorithmic reconstruction of suitable functions from values at a fixed and pre-selected set of sample-points): \begin{defn} We say that $K$ has the \emph{non-trivial sampling property} if there exist a countable subset $S\subset T$ and $a,b\in\mathbb{R}_{+}$ such that \begin{equation} a\sum_{s\in S}\left|f\left(s\right)\right|^{2}\leq\left\Vert f\right\Vert _{\mathscr{H}\left(K\right)}^{2}\leq b\sum_{s\in S}\left|f\left(s\right)\right|^{2},\quad\forall f\in\mathscr{H}\left(K\right),\label{eq:sp1} \end{equation} where $\mathscr{H}\left(K\right)$ is the reproducing kernel Hilbert space (RKHS) of $K$, see \cite{Aro43} and \remref{rk} below. Suppose equality holds in (\ref{eq:sp1}) with $a=b=1$; then we say that $\left\{ K\left(\cdot,s\right)\right\} _{s\in S}$ is a \emph{Parseval frame}. It follows that sampling holds in the form \[ f\left(t\right)=\sum_{s\in S}f\left(s\right)K\left(t,s\right),\quad\forall f\in\mathscr{H}\left(K\right),\:\forall t\in T \] if and only if $\left\{ K\left(\cdot,s\right)\right\} _{s\in S}$ is a Parseval frame; see also \thmref{ps}. \end{defn} As is well known, when a vector $f$ in a Hilbert space $\mathscr{H}$ is expanded in an orthonormal basis (ONB) $B$, there is then automatically an associated Parseval identity. In physical terms, this identity typically reflects a \emph{stability} feature of a decomposition based on the chosen ONB $B$. Specifically, Parseval's identity reflects a conserved quantity for a problem at hand, for example, energy conservation in quantum mechanics. The theory of frames begins with the observation that there are useful vector systems which are in fact not ONBs but for which a Parseval formula still holds. In fact, in applications it is important to go beyond ONBs.
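The Parseval-frame case $a=b=1$ can be probed numerically for the sinc kernel of the Shannon example below, where the frame condition reads $K\left(t,t\right)=\sum_{s\in S}\left|K\left(t,s\right)\right|^{2}$ with $S=\mathbb{Z}$. A short sketch (the truncation level $N$ is an arbitrary choice; the tail of the truncated sum is $O(1/N)$):

```python
import numpy as np

# Shannon kernel K(s,t) = sinc(pi(s-t)); note np.sinc(x) = sin(pi x)/(pi x).
def K(s, t):
    return np.sinc(s - t)

N = 5000                     # truncation level for the sample set S = Z
S = np.arange(-N, N + 1)

for t in [0.0, 0.3, 1.7, -2.25]:
    # Parseval-frame condition: K(t,t) = sum_{s in S} |K(t,s)|^2.
    total = np.sum(K(t, S) ** 2)
    assert abs(total - K(t, t)) < 1e-3   # up to the O(1/N) tail
```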
While this viewpoint originated in signal processing (in connection with frequency bands, aliasing, and filters), the subject of frames appears now to be of independent interest in mathematics. See, e.g., \cite{MR2837145,MR3167899,MR2367342,MR2147063}, and also \cite{CoDa93,MR2154344,Dutkay_2006}. \begin{rem} \label{rem:rk}To make the discussion self-contained, we add the following (for the benefit of the reader). \begin{enumerate} \item A given $K:T\times T\rightarrow\mathbb{C}$ is positive definite (p.d.) if and only if for all $n\in\mathbb{N}$, $\left\{ \xi_{j}\right\} _{j=1}^{n}\subset\mathbb{C}$, and all $\left\{ t_{j}\right\} _{j=1}^{n}\subset T$, we have: \[ \sum_{i}\sum_{j}\overline{\xi}_{i}\xi_{j}K\left(t_{i},t_{j}\right)\geq0. \] \item \label{enu:rh2}A function $f$ on $T$ is in $\mathscr{H}\left(K\right)$ if and only if there is a constant $C=C\left(f\right)$ such that for all $n$, $\left(\xi_{j}\right)_{1}^{n}$, $\left(t_{j}\right)_{1}^{n}$, as above, we have \begin{equation} \left|\sum_{1}^{n}\xi_{j}f\left(t_{j}\right)\right|^{2}\leq C\sum_{i}\sum_{j}\overline{\xi}_{i}\xi_{j}K\left(t_{i},t_{j}\right).\label{eq:rh2} \end{equation} \end{enumerate} It follows from the above that a reproducing kernel Hilbert space (RKHS) arises from a given positive definite kernel $K$ via a corresponding pre-Hilbert form, followed by Hilbert-completion. The question arises: \textquotedblleft What are the functions in the completion?\textquotedblright{} The \emph{a priori} estimate (\ref{eq:rh2}) in (\ref{enu:rh2}) above is an answer to the question. We will return to this issue in the applications in Section 3 below. By contrast, the Hilbert space completions are subtle; they are classical Hilbert spaces of functions, not always transparent from the naked kernel $K$ itself.
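The positive definiteness condition in (1) can be tested on finite point configurations: the matrix $\left(K\left(t_{i},t_{j}\right)\right)$ must be positive semidefinite for every finite choice of $\left\{ t_{j}\right\} $. A numerical sketch for the kernel $K\left(s,t\right)=s\wedge t$ used repeatedly below (the random configurations are illustrative):

```python
import numpy as np

# The min-kernel (covariance of Brownian motion), studied later in the paper.
def K(s, t):
    return np.minimum(s, t)

rng = np.random.default_rng(0)
for _ in range(100):
    pts = np.sort(rng.uniform(0.1, 10.0, size=8))     # random finite {t_j}
    G = K(pts[:, None], pts[None, :])
    # positive definiteness: all eigenvalues of the Gram matrix are >= 0
    assert np.linalg.eigvalsh(G).min() > -1e-10
```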
Examples of classical RKHSs: Hardy spaces or Bergman spaces (for complex domains), Sobolev spaces and Dirichlet spaces \cite{MR3054607,MR2892621,MR2764237} (for real domains, or for fractals), band-limited $L^{2}$ functions (from signal analysis), and Cameron-Martin Hilbert spaces (see \lemref{cm}) from Gaussian processes (in continuous time domain). \end{rem} \begin{lem} \label{lem:fr}Suppose $K$, $T$, $a$, $b$, and $S$ satisfy the condition in (\ref{eq:sp1}), then there is a positive operator $B$ in $\mathscr{H}\left(K\right)$ with bounded inverse such that \[ f\left(\cdot\right)=\sum_{s\in S}\left(Bf\right)\left(s\right)K\left(\cdot,s\right) \] is a convergent interpolation formula valid for all $f\in\mathscr{H}\left(K\right)$. Equivalently, \[ f\left(t\right)=\sum_{s\in S}f\left(s\right)B\left(K\left(\cdot,s\right)\right)\left(t\right),\;\text{for all \ensuremath{t\in T}.} \] \end{lem} \begin{proof} Define $A:\mathscr{H}\left(K\right)\rightarrow l^{2}\left(S\right)$ by $\left(Af\right)\left(s\right)=f\left(s\right)$, $s\in S$; or \[ Af:=\left(f\left(s\right)\right)_{s\in S}\in l^{2}\left(S\right). \] Then the adjoint operator $A^{*}:l^{2}\left(S\right)\rightarrow\mathscr{H}\left(K\right)$ is given by \[ A^{*}\xi=\sum_{s\in S}\xi_{s}K\left(\cdot,s\right),\;\forall\xi\in l^{2}\left(S\right), \] and \[ A^{*}Af=\sum_{s\in S}f\left(s\right)K\left(\cdot,s\right) \] holds in $\mathscr{H}\left(K\right)$, with $\mathscr{H}\left(K\right)$-norm convergence. This is immediate from (\ref{eq:sp1}). Now set $B=\left(A^{*}A\right)^{-1}$. Note that \[ \left\Vert B\right\Vert _{\mathscr{H}\left(K\right)\rightarrow\mathscr{H}\left(K\right)}\leq a^{-1} \] where $a$ is in the lower bound in (\ref{eq:sp1}). \end{proof} \begin{lem} \label{lem:span}Suppose $K$, $T$, $a$, $b$, and $S$ satisfy (\ref{eq:sp1}), then the linear span of $\left\{ K\left(\cdot,s\right)\right\} _{s\in S}$ is dense in $\mathscr{H}\left(K\right)$. 
\end{lem} \begin{proof} Let $f\in\mathscr{H}\left(K\right)$, then \begin{gather*} f\perp\left\{ K\left(\cdot,s\right)\right\} _{s\in S}\\ \Updownarrow\\ f\left(s\right)=\left\langle K\left(\cdot,s\right),f\right\rangle _{\mathscr{H}\left(K\right)}=0,\;\forall s\in S, \end{gather*} by the reproducing property in $\mathscr{H}\left(K\right)$. But by (\ref{eq:sp1}), with $b<\infty$, this implies that $f=0$ in $\mathscr{H}\left(K\right)$. Hence the family $\left\{ K\left(\cdot,s\right)\right\} _{s\in S}$ has dense span. \end{proof} \section{The dichotomy} We now turn to the dichotomy: (i) existence of countable discrete sampling sets vs (ii) non-existence. To help readers appreciate the nature of the two classes, we begin with two examples: (i) Shannon\textquoteright s kernel for band-limited functions, \exaref{shan}; and (ii) the covariance kernel for standard Brownian motion, \thmref{bm}. \begin{question*} ~ \begin{enumerate} \item Given a positive definite kernel $K:T\times T\rightarrow\mathbb{R}$, how does one determine whether there exist $S\subset T$ and $a,b\in\mathbb{R}_{+}$ such that (\ref{eq:sp1}) holds? \item Given $K$, $T$ as above, how does one determine whether there is a countable discrete subset $S\subset T$ such that \begin{equation} \left\{ K\left(\cdot,s\right)\right\} _{s\in S}\label{eq:sp4} \end{equation} has dense span in $\mathscr{H}\left(K\right)$?
\end{enumerate} \end{question*} \begin{example} \label{exa:shan}Let $T=\mathbb{R}$, and let $K:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ be the Shannon kernel, where \begin{align} K\left(s,t\right) & :=\text{sinc}\,\pi\left(s-t\right)\nonumber \\ & =\frac{\sin\pi\left(s-t\right)}{\pi\left(s-t\right)},\quad\forall s,t\in\mathbb{R}.\label{eq:sp5} \end{align} We may choose $S=\mathbb{Z}$, and then $\left\{ K\left(\cdot,n\right)\right\} _{n\in\mathbb{Z}}$ is even an orthonormal basis (ONB) in $\mathscr{H}\left(K\right)$, but there are many other examples of countable discrete subsets $S\subset\mathbb{R}$ such that (\ref{eq:sp1}) holds for finite $a,b\in\mathbb{R}_{+}$. The RKHS of $K$ in (\ref{eq:sp5}) is the subspace of $L^{2}\left(\mathbb{R}\right)$ consisting of all $f\in L^{2}\left(\mathbb{R}\right)$ such that $\mathrm{suppt}(\hat{f})\subset\left[-\pi,\pi\right]$, where ``$\mathrm{suppt}$'' stands for the support of the Fourier transform $\hat{f}$. Note $\mathscr{H}\left(K\right)$ consists of functions on $\mathbb{R}$ which have entire analytic extensions to $\mathbb{C}$; see \cite{MR2039503,MR2040080,MR2975345,MR1976867}. Using the above observations, we get \begin{align*} f\left(t\right) & =\sum_{n\in\mathbb{Z}}f\left(n\right)K\left(t,n\right)\\ & =\sum_{n\in\mathbb{Z}}f\left(n\right)\text{sinc}\,\pi\left(t-n\right),\quad\forall t\in\mathbb{R},\:\forall f\in\mathscr{H}\left(K\right). \end{align*} \end{example} \begin{example} Let $K$ be the covariance kernel of standard Brownian motion, with $T:=[0,\infty)$, or $T:=[0,1)$. Then \begin{equation} K\left(s,t\right):=s\wedge t=\min\left(s,t\right),\;\forall\left(s,t\right)\in T\times T.\label{eq:sp6} \end{equation} \end{example} \begin{lem} \label{lem:cm}Let $K$, $T$ be as in (\ref{eq:sp6}).
Then $\mathscr{H}\left(K\right)$ consists of functions $f$ on $T$ such that $f$ has distribution derivative $f'\in L^{2}\left(T,\lambda\right)$, i.e., $L^{2}$ with respect to Lebesgue measure $\lambda$ on $T$, and \begin{equation} \left\Vert f\right\Vert _{\mathscr{H}\left(K\right)}^{2}=\int_{T}\left|f'\left(x\right)\right|^{2}dx.\label{eq:sp8} \end{equation} \end{lem} \begin{proof} This is well-known, see e.g., \cite{MR3450534,MR3402823,Hi80}. \end{proof} \begin{rem}[see also \subsecref{bm} below] The significance of (\ref{eq:sp8}) for Brownian motion is as follows: Fix $T$, and set $L^{2}\left(T\right)=$ the $L^{2}$-space from the restriction to $T$ of Lebesgue measure on $\mathbb{R}$. Pick an ONB $\left\{ \psi_{k}\right\} $ in $L^{2}\left(T\right)$, for example a Haar-Walsh orthonormal basis in $L^{2}\left(T\right)$. Let $\left\{ Z_{k}\right\} $ be an i.i.d. (independent identically distributed) $N\left(0,1\right)$ system, i.e., standard Gaussian copies. Then \begin{equation} B_{t}\left(\cdot\right)=\sum_{k}\left(\int_{0}^{t}\psi_{k}\left(s\right)ds\right)Z_{k}\left(\cdot\right)\label{eq:sb} \end{equation} is a realization of standard Brownian motion on $T$; in particular we have \[ \mathbb{E}\left(B_{s}B_{t}\right)=s\wedge t=K\left(s,t\right),\;\forall\left(s,t\right)\in T\times T. \] See \figref{bm}. \end{rem} \begin{figure} \caption{Brownian motion; see (\ref{eq:sb}).} \label{fig:bm} \end{figure} \begin{thm} \label{thm:bm}Let $K$, $T$ be as in (\ref{eq:sp6}); then there is no countable discrete subset $S\subset T$ such that $\left\{ K\left(\cdot,s\right)\right\} _{s\in S}$ is dense in $\mathscr{H}\left(K\right)$. 
\end{thm} \begin{proof} Suppose $S=\left\{ x_{n}\right\} $, where \begin{equation} 0<x_{1}<x_{2}<\cdots<x_{n}<x_{n+1}<\cdots;\label{eq:sp7} \end{equation} then consider the following function \begin{equation} \raisebox{-6mm}{\includegraphics[width=0.7\textwidth]{wave1.pdf}}\label{eq:sp9} \end{equation} On the respective intervals $\left[x_{n},x_{n+1}\right]$, the function $f$ is as follows: \[ f\left(x\right)=\begin{cases} c_{n}\left(x-x_{n}\right) & \text{if }x_{n}\leq x\leq\frac{x_{n}+x_{n+1}}{2}\\ c_{n}\left(x_{n+1}-x\right) & \text{if }\frac{x_{n}+x_{n+1}}{2}<x\leq x_{n+1}. \end{cases} \] In particular, $f\left(x_{n}\right)=f\left(x_{n+1}\right)=0$, and on the midpoints: \[ f\left(\frac{x_{n}+x_{n+1}}{2}\right)=c_{n}\frac{x_{n+1}-x_{n}}{2}, \] see \figref{stooth}. \begin{figure} \caption{The saw-tooth function.} \label{fig:stooth} \end{figure} Choose $\left\{ c_{n}\right\} _{n\in\mathbb{N}}$ such that \begin{equation} \sum_{n\in\mathbb{N}}\left|c_{n}\right|^{2}\left(x_{n+1}-x_{n}\right)<\infty.\label{eq:sp11} \end{equation} Admissible choices for the slope-values $c_{n}$ include \[ c_{n}=\frac{1}{n\sqrt{x_{n+1}-x_{n}}},\;n\in\mathbb{N}. \] We will now show that $f\in\mathscr{H}\left(K\right)$. To do this, use (\ref{eq:sp8}). For the distribution derivative computed from (\ref{eq:sp9}), we get \begin{equation} \raisebox{-12mm}{\includegraphics[width=0.7\textwidth]{wave2.pdf}}\label{eq:sp9b} \end{equation} \[ \int_{0}^{\infty}\left|f'\left(x\right)\right|^{2}dx=\sum_{n\in\mathbb{N}}\left|c_{n}\right|^{2}\left(x_{n+1}-x_{n}\right)<\infty \] which is the desired conclusion, see (\ref{eq:sp9}). 
\end{proof} \begin{cor} For the kernel $K\left(s,t\right)=s\wedge t$ in (\ref{eq:sp6}), $T=[0,\infty)$, the following holds: Given $\left\{ x_{j}\right\} _{j\in\mathbb{N}}\subset\mathbb{R}_{+}$, $\left\{ y_{j}\right\} _{j\in\mathbb{N}}\subset\mathbb{R}$, the interpolation problem \begin{equation} f\left(x_{j}\right)=y_{j},\;f\in\mathscr{H}\left(K\right)\label{eq:ip1} \end{equation} is solvable if \begin{equation} \sum_{j\in\mathbb{N}}\left(y_{j+1}-y_{j}\right)^{2}/\left(x_{j+1}-x_{j}\right)<\infty.\label{eq:sp2} \end{equation} \end{cor} \begin{proof} Let $f$ be the piecewise linear spline for the problem (\ref{eq:ip1}), see \figref{ip}; then the $\mathscr{H}\left(K\right)$-norm is as follows: \[ \int_{0}^{\infty}\left|f'\left(x\right)\right|^{2}dx=\sum_{j\in\mathbb{N}}\left(\frac{y_{j+1}-y_{j}}{x_{j+1}-x_{j}}\right)^{2}\left(x_{j+1}-x_{j}\right)<\infty \] when (\ref{eq:sp2}) holds. \end{proof} \begin{figure} \caption{Piecewise linear spline.} \label{fig:ip} \end{figure} \begin{rem} Let $K$ be as in (\ref{eq:sp6}), where \[ K\left(s,t\right)=s\wedge t,\quad s,t\in[0,\infty). \] For all $0\leq x_{j}<x_{j+1}<\infty$, let \begin{align*} f_{j}\left(x\right): & =\frac{2}{x_{j+1}-x_{j}}\left(K\left(x-x_{j},\frac{x_{j+1}-x_{j}}{2}\right)-K\left(x-\frac{x_{j}+x_{j+1}}{2},\frac{x_{j+1}-x_{j}}{2}\right)\right)\\ & =\raisebox{-5mm}{\includegraphics[width=0.4\textwidth]{tmp.pdf}} \end{align*} Assuming (\ref{eq:sp11}) holds, then \[ f\left(x\right)=\sum_{j}c_{j}f_{j}\left(x\right)\in\mathscr{H}\left(K\right).
\] \end{rem} \begin{rem} Let $K\left(s,t\right)=s\wedge t$, $\left(s,t\right)\in[0,\infty)\times[0,\infty)$, extend to $\widetilde{K}\left(s,t\right)=\left|s\right|\wedge\left|t\right|$, $\left(s,t\right)\in\mathbb{R}\times\mathbb{R}$, and $\mathscr{H}(\widetilde{K})=$ all $f$ on $\mathbb{R}$ such that the distribution derivative $f'$ exists on $\mathbb{R}$, and \[ \left\Vert f\right\Vert _{\mathscr{H}(\widetilde{K})}^{2}=\int_{\mathbb{R}}\left|f'\left(x\right)\right|^{2}dx. \] \end{rem} \begin{thm} Let $T$ be a set with the cardinality $c$ of the continuum, and let $K:T\times T\rightarrow\mathbb{R}$ be a positive definite kernel. Let $S=\left\{ x_{j}\right\} _{j\in\mathbb{N}}$ be a discrete subset of $T$. Suppose there are weights $\left\{ w_{j}\right\} _{j\in\mathbb{N}}$, $w_{j}\in\mathbb{R}_{+}$, such that \begin{equation} \left(f\left(x_{j}\right)\right)\in l^{2}\left(\mathbb{N},w\right)\label{eq:c1} \end{equation} for all $f\in\mathscr{H}\left(K\right)$. Suppose further that there is a point $t_{0}\in T\backslash S$, a $y_{0}\in\mathbb{R}\backslash\left\{ 0\right\} $, and $\alpha\in\mathbb{R}_{+}$ such that the infimum \begin{equation} \inf_{f\in\mathscr{H}\left(K\right)}\left\{ \sum\nolimits _{j}w_{j}\left|f\left(x_{j}\right)\right|^{2}+\left|f\left(t_{0}\right)-y_{0}\right|^{2}+\alpha\left\Vert f\right\Vert _{\mathscr{H}\left(K\right)}^{2}\right\} \label{eq:c2} \end{equation} is strictly positive. Then $S$ is \uline{not} an interpolation set for $\left(K,T\right)$. \end{thm} \begin{proof} Let $L$ denote the analysis operator defined from condition (\ref{eq:c1}) in the statement of the theorem; see also the beginning of the proof of \lemref{fr} above, and let $L^{*}$ denote the corresponding adjoint operator, the synthesis operator.
Using now \cite{MR2228737,MR3450534}, we conclude that the function $f$ which minimizes the problem (\ref{eq:c2}) is unique, and in fact \begin{equation} f=\left(\alpha I+L^{*}L\right)^{-1}L^{*}\left(\left(y_{j}\right)\cup\left(t_{0}\right)\right).\label{eq:c3} \end{equation} So, by the hypothesis in the theorem, we get $f\in\mathscr{H}\left(K\right)\backslash\left\{ 0\right\} $, and $f\left(x_{j}\right)=0$, for all $j\in\mathbb{N}$. It then follows that the closed span of $\left\{ K\left(\cdot,x_{j}\right)\right\} _{j\in\mathbb{N}}$ is not $\mathscr{H}\left(K\right)$; specifically, $0\neq f\in\left\{ K\left(\cdot,x_{j}\right)\right\} _{j\in\mathbb{N}}^{\perp}$. See also \lemref{span} and \figref{as}. \end{proof} \begin{figure} \caption{Analysis and synthesis operators.} \label{fig:as} \end{figure} \begin{thm} \label{thm:ps}Let $K:T\times T\rightarrow\mathbb{R}$ be a positive definite kernel, and let $S\subset T$ be a countable discrete subset. The RKHS $\mathscr{H}\left(K\right)$ refers to the pair $\left(K,T\right)$. For all $s\in S$, set $K_{s}\left(\cdot\right)=K\left(\cdot,s\right)$. Then the following four conditions are equivalent: \begin{enumerate} \item \label{enu:ps1}The family $\left\{ K_{s}\right\} _{s\in S}$ is a Parseval frame in $\mathscr{H}\left(K\right)$; \item \label{enu:ps2} \[ \left\Vert f\right\Vert _{\mathscr{H}\left(K\right)}^{2}=\sum_{s\in S}\left|f\left(s\right)\right|^{2},\;\forall f\in\mathscr{H}\left(K\right); \] \item \label{enu:ps3} \[ K\left(t,t\right)=\sum_{s\in S}\left|K\left(t,s\right)\right|^{2},\;\forall t\in T; \] \item \label{enu:ps4} \[ f\left(t\right)=\sum_{s\in S}f\left(s\right)K\left(t,s\right),\;\forall f\in\mathscr{H}\left(K\right),\:\forall t\in T, \] where the sum converges in the norm of $\mathscr{H}\left(K\right)$. \end{enumerate} \end{thm} \begin{proof} (\ref{enu:ps1}) $\Rightarrow$ (\ref{enu:ps2}).
Assume (\ref{enu:ps1}), and note that \begin{equation} \left\langle K_{s},f\right\rangle _{\mathscr{H}\left(K\right)}=f\left(s\right);\label{eq:ps1} \end{equation} and (\ref{enu:ps2}) is immediate from the definition of Parseval-frame. (\ref{enu:ps2}) $\Rightarrow$ (\ref{enu:ps3}). Assume (\ref{enu:ps2}), and set $f=K_{t}$. Note that $\left\Vert K_{t}\right\Vert _{\mathscr{H}\left(K\right)}^{2}=K\left(t,t\right)$, and $\left\langle K_{s},K_{t}\right\rangle _{\mathscr{H}\left(K\right)}=K\left(s,t\right)$. (\ref{enu:ps3}) $\Rightarrow$ (\ref{enu:ps4}). It is enough to prove that \begin{equation} K_{t}=\sum_{s\in S}K\left(t,s\right)K_{s},\;\forall t\in T;\label{eq:ps2} \end{equation} then (\ref{enu:ps4}) follows from an application of the reproducing property of the Hilbert space $\mathscr{H}\left(K\right)$. Now (\ref{eq:ps2}) follows from \begin{equation} \left\Vert K_{t}-\sum\nolimits _{s\in S}K\left(t,s\right)K_{s}\right\Vert _{\mathscr{H}\left(K\right)}^{2}=0.\label{eq:ps3} \end{equation} Finally, (\ref{eq:ps3}) follows from (\ref{enu:ps3}) and multiple application of the kernel property: \[ \text{LHS}{}_{\left(\ref{eq:ps3}\right)}=K\left(t,t\right)+\underset{\left(s,s'\right)\in S\times S}{\sum\sum}K\left(t,s\right)K\left(t,s'\right)K\left(s',s\right)-2\sum_{s\in S}\left|K\left(t,s\right)\right|^{2}=0. \] (\ref{enu:ps4}) $\Rightarrow$ (\ref{enu:ps1}). It is clear that (\ref{enu:ps1}) $\Leftrightarrow$ (\ref{enu:ps2}), and that (\ref{enu:ps4}) $\Rightarrow$ (\ref{enu:ps2}). \end{proof} \begin{rem}[Stationary kernels] Suppose $K:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}$ is a continuous positive definite kernel, and $K\left(s,t\right)=k\left(s-t\right)$, i.e., stationary. Set $K_{t}\left(\cdot\right):=K\left(\cdot,t\right)=k\left(\cdot-t\right)$. By Bochner's theorem, \[ k\left(t\right)=\int_{\mathbb{R}}e^{itx}d\mu\left(x\right), \] where $\mu$ is a finite positive Borel measure on $\mathbb{R}$. 
Thus, \[ V:K_{t}\longmapsto e^{-itx}\in L^{2}\left(\mu\right) \] extends to an isometry from $\mathscr{H}\left(K\right)$ into $L^{2}\left(\mu\right)$. Let $S\subset\mathbb{R}$ be a countable discrete subset, then for $f\in\mathscr{H}\left(K\right)$, we have \begin{gather*} \left\langle K_{s},f\right\rangle _{\mathscr{H}\left(K\right)}=0,\;\forall s\in S\\ \Updownarrow\\ \left\langle VK_{s},Vf\right\rangle _{L^{2}\left(\mu\right)}=0,\;\forall s\in S\\ \Updownarrow\\ \int_{\mathbb{R}}e^{isx}\left(Vf\right)\left(x\right)d\mu\left(x\right)=0,\;\forall s\in S. \end{gather*} So $S$ has the sampling property if and only if \[ \left[\left(\left(Vf\right)d\mu\right)^{\wedge}\left(s\right)=0,\;\forall s\in S\right]\Longrightarrow\begin{bmatrix}Vf=0,\;i.e.,\;f=0,\;\mu-\text{a.e.}\end{bmatrix} \] \end{rem} \section{Discrete RKHSs} A closely related question from the above discussion is the dichotomy of \emph{discrete} vs \emph{continuous} RKHSs. Our focus in the present section is on the discrete case, i.e., RKHSs of functions defined on a prescribed countable infinite discrete set $V$. \begin{defn}[\cite{MR3450534}] \label{def:dmp}The RKHS $\mathscr{H}=\mathscr{H}\left(K\right)$ is said to have the \emph{discrete mass} property ($\mathscr{H}$ is called a \emph{discrete RKHS}), if $\delta_{x}\in\mathscr{H}$, for all $x\in V$. Here, $\delta_{x}\left(y\right)$ is the Dirac mass at $x\in V$. \end{defn} \begin{question} Let $K:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}$ be positive definite, and let $V\subset\mathbb{R}^{d}$ be a countable discrete subset. When does $K\big|_{V\times V}$ have the discrete mass property? \end{question} Of the examples and applications where this question plays an important role, we emphasize three: (i) discrete Brownian motion-Hilbert spaces, i.e., discrete versions of the Cameron-Martin Hilbert space; (ii) energy-Hilbert spaces corresponding to graph-Laplacians; and finally (iii) RKHSs generated by binomial coefficients. 
We show that the point-masses have finite $\mathscr{H}$-norm in cases (i) and (ii), but not in case (iii). \begin{defn} \label{def:d1}Let $V$ be a countably infinite set, and let $\mathscr{F}\left(V\right)$ denote the set of all \emph{finite} subsets of $V$. \begin{enumerate} \item For all $x\in V$, set \begin{equation} K_{x}:=K\left(\cdot,x\right):V\rightarrow\mathbb{C}.\label{eq:pd2} \end{equation} \item Let $\mathscr{H}:=\mathscr{H}\left(K\right)$ be the Hilbert-completion of $span\left\{ K_{x}:x\in V\right\} $, with respect to the inner product \begin{equation} \left\langle \sum c_{x}K_{x},\sum d_{y}K_{y}\right\rangle _{\mathscr{H}}:=\sum\sum\overline{c_{x}}d_{y}K\left(x,y\right).\label{eq:pd3} \end{equation} $\mathscr{H}$ is then a reproducing kernel Hilbert space (RKHS), with the reproducing property: \begin{equation} \varphi\left(x\right)=\left\langle K_{x},\varphi\right\rangle _{\mathscr{H}},\;\forall x\in V,\:\forall\varphi\in\mathscr{H}.\label{eq:pd31} \end{equation} \item If $F\in\mathscr{F}\left(V\right)$, set $\mathscr{H}_{F}=span\{K_{x}\}_{x\in F}\subset\mathscr{H}$, and let \begin{equation} P_{F}:=\text{the orthogonal projection onto \ensuremath{\mathscr{H}_{F}}}.\label{eq:pd4} \end{equation} \item For $F\in\mathscr{F}\left(V\right)$, let $K_{F}$ be the matrix given by \begin{equation} K_{F}:=\left(K\left(x,y\right)\right)_{\left(x,y\right)\in F\times F}.\label{eq:pd5} \end{equation} \end{enumerate} \end{defn} \begin{lem} \label{lem:proj1}Let $F\in\mathscr{F}\left(V\right)=$ all finite subsets of $V$, $x_{1}\in F$. Assume $\delta_{x_{1}}\in\mathscr{H}$.
Then \begin{equation} P_{F}\left(\delta_{x_{1}}\right)\left(\cdot\right)=\sum_{y\in F}\left(K_{F}^{-1}\delta_{x_{1}}\right)\left(y\right)K_{y}\left(\cdot\right).\label{eq:pd6} \end{equation} \end{lem} \begin{proof} Show that \begin{equation} \delta_{x_{1}}-\sum_{y\in F}\left(K_{F}^{-1}\delta_{x_{1}}\right)\left(y\right)K_{y}\left(\cdot\right)\in\mathscr{H}_{F}^{\perp}.\label{eq:pd7} \end{equation} The remaining part follows easily from this. \end{proof} \begin{thm} \label{thm:del}Let $V$ and $K$ be as above, i.e., we assume that $V$ is countably infinite, and $K$ is a p.d. kernel on $V\times V$. Let $\mathscr{H}=\mathscr{H}\left(K\right)$ be the corresponding RKHS. Fix $x_{1}\in V$. Then the following three conditions are equivalent: \begin{enumerate} \item \label{enu:d1}$\delta_{x_{1}}\in\mathscr{H}$; \item \label{enu:d2}$\exists C_{x_{1}}<\infty$ such that for all $F\in\mathscr{F}\left(V\right)$ and all functions $\xi$ on $F$, we have \begin{equation} \left|\xi\left(x_{1}\right)\right|^{2}\leq C_{x_{1}}\underset{F\times F}{\sum\sum}\overline{\xi\left(x\right)}\xi\left(y\right)K\left(x,y\right).\label{eq:d1} \end{equation} \item \label{enu:d3}For $F\in\mathscr{F}\left(V\right)$, set \begin{equation} K_{F}=\left(K\left(x,y\right)\right)_{\left(x,y\right)\in F\times F}\label{eq:d2} \end{equation} as a $\#F\times\#F$ matrix. Then \begin{equation} \sup_{F\in\mathscr{F}\left(V\right)}\left(K_{F}^{-1}\delta_{x_{1}}\right)\left(x_{1}\right)<\infty.\label{eq:d3} \end{equation} \end{enumerate} \end{thm} \begin{proof} This is an application of \remref{rk}. Also see \cite{MR3450534} for details. \end{proof} Let $D$ be an open domain in $\mathbb{R}^{d}$, and assume $V\subset D$ is a countable discrete subset of $D$. In this case, we shall consider two positive definite kernels: the original kernel $K$ on $D\times D$, and $K_{V}:=K\big|_{V\times V}$ on $V\times V$ by restriction.
Thus if $x\in V$, then \[ K_{x}^{\left(V\right)}\left(\cdot\right)=K\left(\cdot,x\right):V\longrightarrow\mathbb{R} \] is a function on $V$, while \[ K_{x}\left(\cdot\right):=K\left(\cdot,x\right):D\longrightarrow\mathbb{R} \] is a function on $D$. Further, let $\mathscr{H}$ and $\mathscr{H}_{V}$ be the associated RKHSs respectively. \begin{lem} \label{lem:mc1}$\mathscr{H}_{V}$ is isometrically embedded into $\mathscr{H}$ via the mapping \[ J^{\left(V\right)}:K_{x}^{\left(V\right)}\longmapsto K_{x},\;x\in V. \] \end{lem} \begin{proof} Assume $F\in\mathscr{F}\left(V\right)$, i.e., $F$ is a finite subset of $V$. Let $\xi=\xi_{F}$ be a function on $F$; then \[ \left\Vert \sum\nolimits _{x\in F}\xi\left(x\right)K_{x}^{\left(V\right)}\right\Vert _{\mathscr{H}_{V}}=\left\Vert \sum\nolimits _{x\in F}\xi\left(x\right)K_{x}\right\Vert _{\mathscr{H}}. \] Note that, by definition, the linear span of $\{K_{x}^{\left(V\right)}\mathrel{;}x\in V\}$ is dense in $\mathscr{H}_{V}$, and the span of $\{K_{x}\mathrel{;}x\in D\}$ is dense in $\mathscr{H}$. We conclude that $J^{\left(V\right)}$ extends uniquely to an isometry from $\mathscr{H}_{V}$ into $\mathscr{H}$. The desired result follows from this. \end{proof} In the examples below, we are concerned with cases of kernels $K:D\times D\rightarrow\mathbb{R}$ with restriction $K_{V}:V\times V\rightarrow\mathbb{R}$, where $V$ is a countable discrete subset of $D$. Typically, for $x\in V$, we may have the restriction $\delta_{x}\big|_{V}$ contained in $\mathscr{H}_{V}$, but $\delta_{x}$ is not in $\mathscr{H}$.
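The last point can be probed numerically via criterion (\ref{enu:d3}) of \thmref{del}: $\left\Vert \delta_{x}\right\Vert _{\mathscr{H}_{V}}^{2}$ is the supremum of $\left(K_{F}^{-1}\delta_{x}\right)\left(x\right)$ over finite $F$. For the kernel $K\left(s,t\right)=s\wedge t$ of the next subsection, this quantity stabilizes when $x$ is isolated in $V$, and blows up as sample points accumulate at $x$, anticipating the corollary on accumulation points below. A sketch with illustrative point configurations:

```python
import numpy as np

def gram(pts):
    return np.minimum(pts[:, None], pts[None, :])   # K(s,t) = min(s,t)

def delta_norm_sq(pts, i):
    # (K_F^{-1} delta_{x_i})(x_i): squared H_V-norm of P_F(delta_{x_i})
    e = np.zeros(len(pts)); e[i] = 1.0
    return np.linalg.solve(gram(pts), e)[i]

# Isolated point: the value stabilizes (here at x2/(x1(x2-x1)) = 2).
V = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
assert abs(delta_norm_sq(V, 0) - 2.0) < 1e-10

# Points accumulating at x = 2 from the right: the value diverges.
prev = 0.0
for n in [1, 2, 4, 8, 16]:
    V = np.concatenate(([1.0, 2.0], 2.0 + 1.0 / np.arange(n, 0, -1)))
    val = delta_norm_sq(V, 1)       # point mass at x = 2
    assert val > prev               # monotone growth, ~ n + 1 here
    prev = val
```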
\subsection{\label{subsec:bm}Brownian Motion} Consider the covariance function of standard Brownian motion $B_{t}$, $t\in[0,\infty)$, i.e., a Gaussian process $\left\{ B_{t}\right\} $ with mean zero and covariance function \begin{equation} K\left(s,t\right):=\mathbb{E}\left(B_{s}B_{t}\right)=s\wedge t.\label{eq:bm1} \end{equation} Restrict to $V:=\left\{ 0\right\} \cup\mathbb{Z}_{+}\subset[0,\infty)$, i.e., consider \[ K^{\left(V\right)}=K\big|_{V\times V}. \] $\mathscr{H}\left(K\right)$: the Cameron-Martin Hilbert space, consisting of functions $f$ on $[0,\infty)$ s.t. \[ \int_{0}^{\infty}\left|f'\left(x\right)\right|^{2}dx<\infty,\quad f\left(0\right)=0. \] $\mathscr{H}_{V}:=\mathscr{H}\left(K_{V}\right)$. Note that \[ f\in\mathscr{H}\left(K_{V}\right)\Longleftrightarrow\sum_{n}\left|f\left(n\right)-f\left(n+1\right)\right|^{2}<\infty. \] We now show that the restriction of (\ref{eq:bm1}) to $V\times V$ for an ordered subset (we fix such a set $V$): \begin{equation} V:\;0<x_{1}<x_{2}<\cdots<x_{i}<x_{i+1}<\cdots\label{eq:bm2} \end{equation} has the discrete mass property (\defref{dmp}). Set $\mathscr{H}_{V}=RKHS(K\big|_{V\times V})$, \begin{equation} K_{V}\left(x_{i},x_{j}\right)=x_{i}\wedge x_{j}.\label{eq:bm3} \end{equation} We consider the finite subsets $F_{n}=\left\{ x_{1},x_{2},\ldots,x_{n}\right\} $ of $V$, and \begin{equation} K_{n}=K^{\left(F_{n}\right)}=\begin{bmatrix}x_{1} & x_{1} & x_{1} & \cdots & x_{1}\\ x_{1} & x_{2} & x_{2} & \cdots & x_{2}\\ x_{1} & x_{2} & x_{3} & \cdots & x_{3}\\ \vdots & \vdots & \vdots & \vdots & \vdots\\ x_{1} & x_{2} & x_{3} & \cdots & x_{n} \end{bmatrix}=\left(x_{i}\wedge x_{j}\right)_{i,j=1}^{n}.\label{eq:bm4} \end{equation} We will show that condition (\ref{enu:d3}) in \thmref{del} holds for $K_{V}$.
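Both the determinant formula and the stabilization of $\left(K_{n}^{-1}\delta_{x_{1}}\right)\left(x_{1}\right)$ established in the lemmas below can be confirmed numerically for an arbitrary increasing configuration (the sample points here are an arbitrary choice):

```python
import numpy as np

x = np.array([0.5, 1.2, 1.9, 3.0, 4.4, 6.1])    # 0 < x1 < x2 < ... (arbitrary)

for n in range(1, len(x) + 1):
    Kn = np.minimum(x[:n, None], x[None, :n])   # K_n = (x_i ^ x_j)
    # determinant formula: D_n = x1 (x2-x1) ... (xn - x_{n-1})
    Dn = x[0] * np.prod(np.diff(x[:n]))
    assert abs(np.linalg.det(Kn) - Dn) < 1e-9
    # zeta_(n)(x1) = (K_n^{-1} delta_{x1})(x1) stabilizes for n >= 2
    e1 = np.zeros(n); e1[0] = 1.0
    zeta1 = np.linalg.solve(Kn, e1)[0]
    expect = 1 / x[0] if n == 1 else x[1] / (x[0] * (x[1] - x[0]))
    assert abs(zeta1 - expect) < 1e-9
```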
\begin{lem} ~ \begin{equation} D_{n}=\det\left(\left(x_{i}\wedge x_{j}\right)_{i,j=1}^{n}\right)=x_{1}\left(x_{2}-x_{1}\right)\left(x_{3}-x_{2}\right)\cdots\left(x_{n}-x_{n-1}\right).\label{eq:bm5} \end{equation} \end{lem} \begin{proof} By induction. In fact, \[ \begin{bmatrix}x_{1} & x_{1} & x_{1} & \cdots & x_{1}\\ x_{1} & x_{2} & x_{2} & \cdots & x_{2}\\ x_{1} & x_{2} & x_{3} & \cdots & x_{3}\\ \vdots & \vdots & \vdots & \vdots & \vdots\\ x_{1} & x_{2} & x_{3} & \cdots & x_{n} \end{bmatrix}\sim\begin{bmatrix}x_{1} & 0 & 0 & \cdots & 0\\ 0 & x_{2}-x_{1} & 0 & \cdots & 0\\ 0 & 0 & x_{3}-x_{2} & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & \cdots & 0 & \cdots & x_{n}-x_{n-1} \end{bmatrix}, \] where $\sim$ denotes equivalence under determinant-preserving elementary row and column operations. \end{proof} \begin{lem} Let \begin{equation} \zeta_{\left(n\right)}:=K_{n}^{-1}\left(\delta_{x_{1}}\right)\left(\cdot\right)\label{eq:bm7} \end{equation} so that \begin{equation} \left\Vert P_{F_{n}}\left(\delta_{x_{1}}\right)\right\Vert _{\mathscr{H}_{V}}^{2}=\zeta_{\left(n\right)}\left(x_{1}\right).\label{eq:bm8} \end{equation} Then, \begin{align*} \zeta_{\left(1\right)}\left(x_{1}\right) & =\frac{1}{x_{1}}\\ \zeta_{\left(n\right)}\left(x_{1}\right) & =\frac{x_{2}}{x_{1}\left(x_{2}-x_{1}\right)},\quad\text{for}\;n=2,3,\ldots, \end{align*} and \[ \left\Vert \delta_{x_{1}}\right\Vert _{\mathscr{H}_{V}}^{2}=\frac{x_{2}}{x_{1}\left(x_{2}-x_{1}\right)}.
\] \end{lem} \begin{proof} A direct computation shows that the $\left(1,1\right)$ minor of the matrix $K_{n}$ is \begin{equation} D'_{n-1}=\det\left(\left(x_{i}\wedge x_{j}\right)_{i,j=2}^{n}\right)=x_{2}\left(x_{3}-x_{2}\right)\left(x_{4}-x_{3}\right)\cdots\left(x_{n}-x_{n-1}\right)\label{eq:bm6} \end{equation} and so, by Cramer's rule, $\zeta_{\left(n\right)}\left(x_{1}\right)=D'_{n-1}/D_{n}$; thus \begin{align*} \zeta_{\left(1\right)}\left(x_{1}\right) & =\frac{1}{x_{1}},\quad\mbox{and}\\ \zeta_{\left(2\right)}\left(x_{1}\right) & =\frac{x_{2}}{x_{1}\left(x_{2}-x_{1}\right)}\\ \zeta_{\left(3\right)}\left(x_{1}\right) & =\frac{x_{2}\left(x_{3}-x_{2}\right)}{x_{1}\left(x_{2}-x_{1}\right)\left(x_{3}-x_{2}\right)}=\frac{x_{2}}{x_{1}\left(x_{2}-x_{1}\right)}\\ \zeta_{\left(4\right)}\left(x_{1}\right) & =\frac{x_{2}\left(x_{3}-x_{2}\right)\left(x_{4}-x_{3}\right)}{x_{1}\left(x_{2}-x_{1}\right)\left(x_{3}-x_{2}\right)\left(x_{4}-x_{3}\right)}=\frac{x_{2}}{x_{1}\left(x_{2}-x_{1}\right)}\\ & \vdots \end{align*} The result follows from this. \end{proof} \begin{cor} \label{cor:proj}$P_{F_{n}}\left(\delta_{x_{1}}\right)=P_{F_{2}}\left(\delta_{x_{1}}\right)$, $\forall n\geq2$. Therefore, \begin{equation} \delta_{x_{1}}\in\mathscr{H}_{V}^{\left(F_{2}\right)}:=span\{K_{x_{1}}^{\left(V\right)},K_{x_{2}}^{\left(V\right)}\} \end{equation} and \begin{equation} \delta_{x_{1}}=\zeta_{\left(2\right)}\left(x_{1}\right)K_{x_{1}}^{\left(V\right)}+\zeta_{\left(2\right)}\left(x_{2}\right)K_{x_{2}}^{\left(V\right)} \end{equation} where \[ \zeta_{\left(2\right)}\left(x_{i}\right)=K_{2}^{-1}\left(\delta_{x_{1}}\right)\left(x_{i}\right),\;i=1,2.
\] Specifically, \begin{align} \zeta_{\left(2\right)}\left(x_{1}\right) & =\frac{x_{2}}{x_{1}\left(x_{2}-x_{1}\right)}\\ \zeta_{\left(2\right)}\left(x_{2}\right) & =\frac{-1}{x_{2}-x_{1}}; \end{align} and \begin{equation} \left\Vert \delta_{x_{1}}\right\Vert _{\mathscr{H}_{V}}^{2}=\frac{x_{2}}{x_{1}\left(x_{2}-x_{1}\right)}.\label{eq:dn} \end{equation} \end{cor} \begin{proof} Note that \[ \zeta_{n}\left(x_{1}\right)=\left\Vert P_{F_{n}}\left(\delta_{x_{1}}\right)\right\Vert _{\mathscr{H}}^{2} \] and $\zeta_{\left(1\right)}\left(x_{1}\right)\leq\zeta_{\left(2\right)}\left(x_{1}\right)\leq\cdots$, since $F_{n}=\left\{ x_{1},x_{2},\ldots,x_{n}\right\} $. In particular, $\frac{1}{x_{1}}\leq\frac{x_{2}}{x_{1}\left(x_{2}-x_{1}\right)}$, which yields (\ref{eq:dn}). \end{proof} \begin{rem} We showed that $\delta_{x_{1}}\in\mathscr{H}_{V}$, $V=\left\{ x_{1}<x_{2}<\cdots\right\} \subset\mathbb{R}_{+}$, with the restriction of $s\wedge t$ = the covariance kernel of Brownian motion. The same argument also shows that $\delta_{x_{i}}\in\mathscr{H}_{V}$ when $i>1$. Conclusions: \begin{align} \delta_{x_{i}} & \in span\left\{ K_{x_{i-1}}^{\left(V\right)},K_{x_{i}}^{\left(V\right)},K_{x_{i+1}}^{\left(V\right)}\right\} ,\quad\mbox{and}\\ \left\Vert \delta_{x_{i}}\right\Vert _{\mathscr{H}}^{2} & =\frac{x_{i+1}-x_{i-1}}{\left(x_{i}-x_{i-1}\right)\left(x_{i+1}-x_{i}\right)}. \end{align} Details are left for the interested readers. \end{rem} \begin{cor} Let $V\subset\mathbb{R}_{+}$ be countable. If $x_{a}\in V$ is an accumulation point (from $V$), then $\left\Vert \delta_{a}\right\Vert _{\mathscr{H}_{V}}=\infty$. \end{cor} \begin{example}[Sparse sample-points] Let $V=\left\{ x_{i}\right\} _{i=1}^{\infty}$, where \[ x_{i}=\frac{i\left(i-1\right)}{2},\quad i\in\mathbb{N}. 
\] It follows that $x_{i+1}-x_{i}=i$ and $x_{i}-x_{i-1}=i-1$, and so \[ \left\Vert \delta_{x_{i}}\right\Vert _{\mathscr{H}}^{2}=\frac{x_{i+1}-x_{i-1}}{\left(x_{i}-x_{i-1}\right)\left(x_{i+1}-x_{i}\right)}=\frac{2i-1}{\left(i-1\right)i}\xrightarrow[i\rightarrow\infty]{}0. \] We conclude that $\left\Vert \delta_{x_{i}}\right\Vert _{\mathscr{H}}\xrightarrow[i\rightarrow\infty]{}0$ if the set $V=\left\{ x_{i}\right\} _{i=1}^{\infty}\subset\mathbb{R}_{+}$ is sparse. \end{example} Now, some general facts: \begin{lem} Let $K:V\times V\rightarrow\mathbb{C}$ be p.d., and let $\mathscr{H}$ be the corresponding RKHS. If $x_{1}\in V$, and if $\delta_{x_{1}}$ has a representation as follows: \begin{equation} \delta_{x_{1}}=\sum_{y\in V}\zeta^{\left(x_{1}\right)}\left(y\right)K_{y}\;,\label{eq:pr1} \end{equation} then \begin{equation} \left\Vert \delta_{x_{1}}\right\Vert _{\mathscr{H}}^{2}=\zeta^{\left(x_{1}\right)}\left(x_{1}\right).\label{eq:pr2} \end{equation} \end{lem} \begin{proof} Substitute both sides of (\ref{eq:pr1}) into $\left\langle \delta_{x_{1}},\cdot\right\rangle _{\mathscr{H}}$ where $\left\langle \cdot,\cdot\right\rangle _{\mathscr{H}}$ denotes the inner product in $\mathscr{H}$. \end{proof} \subsection{Brownian Bridge} Let $D:=\left(0,1\right)=$ the open interval $0<t<1$, and set \begin{equation} K_{bridge}\left(s,t\right):=s\wedge t-st;\label{eq:bb1} \end{equation} then (\ref{eq:bb1}) is the covariance function for the Brownian bridge $B_{bri}\left(t\right)$, i.e., the Gaussian process pinned at the endpoints, \begin{equation} B_{bri}\left(0\right)=B_{bri}\left(1\right)=0.\label{eq:bb2} \end{equation} \begin{figure} \caption{Brownian bridge $B_{bri}\left(t\right)$, a simulation of three sample paths of the Brownian bridge.} \label{fig:bb} \end{figure} In terms of standard Brownian motion $B\left(t\right)$, the bridge can be realized as \begin{equation} B_{bri}\left(t\right)=\left(1-t\right)B\left(\frac{t}{1-t}\right),\quad0<t<1;\label{eq:bb3} \end{equation} see \lemref{mc1}.
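As a quick consistency check (a routine verification, not part of the original argument), one sees that the process in (\ref{eq:bb3}) indeed has the covariance (\ref{eq:bb1}): for $0<s\leq t<1$, using $\mathbb{E}\left[B\left(u\right)B\left(v\right)\right]=u\wedge v$ and the fact that $u\mapsto\frac{u}{1-u}$ is increasing,

```latex
\begin{align*}
\mathbb{E}\left[B_{bri}\left(s\right)B_{bri}\left(t\right)\right]
 & =\left(1-s\right)\left(1-t\right)\,
   \mathbb{E}\left[B\left(\tfrac{s}{1-s}\right)B\left(\tfrac{t}{1-t}\right)\right]\\
 & =\left(1-s\right)\left(1-t\right)\left(\tfrac{s}{1-s}\wedge\tfrac{t}{1-t}\right)\\
 & =\left(1-s\right)\left(1-t\right)\tfrac{s}{1-s}
   =s\left(1-t\right)=s\wedge t-st.
\end{align*}
```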
The corresponding Cameron-Martin space is now \begin{equation} \mathscr{H}_{bri}=\left\{ f\;\mbox{on}\:\left[0,1\right];f'\in L^{2}\left(0,1\right),f\left(0\right)=f\left(1\right)=0\right\} \label{eq:bb4} \end{equation} with \begin{equation} \left\Vert f\right\Vert _{\mathscr{H}_{bri}}^{2}:=\int_{0}^{1}\left|f'\left(s\right)\right|^{2}ds<\infty.\label{eq:bb5} \end{equation} If $V=\left\{ x_{i}\right\} _{i=1}^{\infty}$, $x_{1}<x_{2}<\cdots<1$, is a discrete subset of $D$, then we have for $F_{n}\in\mathscr{F}\left(V\right)$, $F_{n}=\left\{ x_{1},x_{2},\cdots,x_{n}\right\} $, \begin{equation} K_{F_{n}}=\left(K_{bridge}\left(x_{i},x_{j}\right)\right)_{i,j=1}^{n},\label{eq:bb6} \end{equation} see (\ref{eq:bb1}), and \begin{equation} \det K_{F_{n}}=x_{1}\left(x_{2}-x_{1}\right)\cdots\left(x_{n}-x_{n-1}\right)\left(1-x_{n}\right).\label{eq:bb7} \end{equation} As a result, we get $\delta_{x_{i}}\in\mathscr{H}_{V}^{\left(bri\right)}$ for all $i$, and \[ \left\Vert \delta_{x_{i}}\right\Vert _{\mathscr{H}_{V}^{\left(bri\right)}}^{2}=\frac{x_{i+1}-x_{i-1}}{\left(x_{i+1}-x_{i}\right)\left(x_{i}-x_{i-1}\right)}. \] Note $\lim_{x_{i}\rightarrow1}\left\Vert \delta_{x_{i}}\right\Vert _{\mathscr{H}_{V}^{\left(bri\right)}}^{2}=\infty$. \subsection{Binomial RKHS} The purpose of the present subsection is to display a concrete RKHS $\mathscr{H}\left(K\right)$ in the discrete framework with the property that $\mathscr{H}\left(K\right)$ does not contain the Dirac masses $\delta_{x}$. The RKHS in question is generated by the binomial coefficients, and it is relevant for a host of applications; see e.g., \cite{MR3484546,MR3228856,MR1822449}. \begin{defn} Let $V=\mathbb{Z}_{+}\cup\left\{ 0\right\} $; and \[ K_{b}\left(x,y\right):=\sum_{n=0}^{x\wedge y}\binom{x}{n}\binom{y}{n},\quad\left(x,y\right)\in V\times V, \] where $\binom{x}{n}=\frac{x\left(x-1\right)\cdots\left(x-n+1\right)}{n!}$ denotes the standard binomial coefficient from the binomial expansion.
Let $\mathscr{H}=\mathscr{H}\left(K_{b}\right)$ be the corresponding RKHS. Set \begin{equation} e_{n}\left(x\right)=\begin{cases} \binom{x}{n} & \text{if \ensuremath{n\leq x}}\\ 0 & \text{if \ensuremath{n>x}}. \end{cases}\label{eq:b1} \end{equation} \end{defn} \begin{lem}[\cite{MR3367659}] \label{lem:b1}~ \begin{enumerate} \item $e_{n}\left(\cdot\right)\in\mathscr{H}$, $n\in V$; \item $\left\{ e_{n}\right\} _{n\in V}$ is an orthonormal basis (ONB) in the Hilbert space $\mathscr{H}$. \item Given $f\in\mathscr{F}unc\left(V\right)$; then \begin{equation} f\in\mathscr{H}\Longleftrightarrow\sum_{k=0}^{\infty}\left|\left\langle e_{k},f\right\rangle _{\mathscr{H}}\right|^{2}<\infty;\label{eq:b4} \end{equation} and, in this case, \[ \left\Vert f\right\Vert _{\mathscr{H}}^{2}=\sum_{k=0}^{\infty}\left|\left\langle e_{k},f\right\rangle _{\mathscr{H}}\right|^{2}. \] \item Set $F_{n}=\left\{ 0,1,2,\ldots,n\right\} $, and \begin{equation} P_{F_{n}}=\sum_{k=0}^{n}\left|e_{k}\left\rangle \right\langle e_{k}\right|\label{eq:b2} \end{equation} or equivalently \begin{equation} P_{F_{n}}f=\sum_{k=0}^{n}\left\langle e_{k},f\right\rangle _{\mathscr{H}}e_{k}\,.\label{eq:b3} \end{equation} Then formula (\ref{eq:b3}) is well defined for all functions $f\in\mathscr{F}unc\left(V\right)$. \end{enumerate} \end{lem} Fix $x_{1}\in V$, then we shall apply \lemref{b1} to the function $f_{1}=\delta_{x_{1}}$ (in $\mathscr{F}unc\left(V\right)$). \begin{thm} \label{thm:bino}We have \[ \left\Vert P_{F_{n}}\left(\delta_{x_{1}}\right)\right\Vert _{\mathscr{H}}^{2}=\sum_{k=x_{1}}^{n}\binom{k}{x_{1}}^{2}. \] \end{thm} The proof of the theorem will be subdivided in steps; see below. 
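Before turning to the proof, the identity in Theorem \ref{thm:bino} can be verified numerically for small $n$. The sketch below is our own illustration (not part of the formal argument): it builds the Gram matrix $K_{n}=\left(K_{b}\left(x,y\right)\right)_{x,y\in F_{n}}$, and compares the diagonal entries of $K_{n}^{-1}$ with the closed-form sum $\sum_{k=x_{1}}^{n}\binom{k}{x_{1}}^{2}$.

```python
from math import comb
import numpy as np

def K_b(x, y):
    # binomial kernel K_b(x, y) = sum_{m=0}^{min(x,y)} C(x,m) C(y,m)
    return sum(comb(x, m) * comb(y, m) for m in range(min(x, y) + 1))

def proj_norm_sq(x1, n):
    # ||P_{F_n}(delta_{x1})||^2 = (K_n^{-1})_{x1,x1}, with F_n = {0, 1, ..., n}
    K = np.array([[K_b(x, y) for y in range(n + 1)]
                  for x in range(n + 1)], dtype=float)
    return np.linalg.inv(K)[x1, x1]

# compare with the closed form sum_{k=x1}^{n} C(k, x1)^2 of Theorem thm:bino
for n in range(2, 8):
    for x1 in range(n + 1):
        closed = sum(comb(k, x1) ** 2 for k in range(x1, n + 1))
        assert abs(proj_norm_sq(x1, n) - closed) < 1e-6 * closed
```

The monotone growth of these diagonal entries in $n$ is exactly what forces $\left\Vert \delta_{x_{1}}\right\Vert _{\mathscr{H}}^{2}=\infty$ in the corollary below.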
\begin{lem}[\cite{MR3367659}] ~ \begin{enumerate} \item \label{enu:b1}For all $m,n\in V$ such that $m\leq n$, we have \begin{equation} \delta_{m,n}=\sum_{j=m}^{n}\left(-1\right)^{m+j}\binom{n}{j}\binom{j}{m}.\label{eq:b5} \end{equation} \item \label{enu:b2}For all $n\in\mathbb{Z}_{+}$, consider the lower triangular matrix $L^{\left(n\right)}$ (see Figure \ref{fig:L}) given by \begin{equation} L_{xy}^{\left(n\right)}=\begin{cases} \binom{x}{y} & \text{if \ensuremath{y\leq x\leq n}}\\ 0 & \text{if \ensuremath{x<y}}. \end{cases}\label{eq:b6} \end{equation} Its inverse is \begin{equation} \left(L^{\left(n\right)}\right)_{xy}^{-1}=\begin{cases} \left(-1\right)^{x-y}\binom{x}{y} & \text{if \ensuremath{y\leq x\leq n}}\\ 0 & \text{if \ensuremath{x<y}}. \end{cases}\label{eq:b7} \end{equation} \end{enumerate} Notation: The numbers in (\ref{eq:b7}) are the entries of the matrix $\left(L^{\left(n\right)}\right)^{-1}$. \end{lem} \begin{proof} In rough outline, (\ref{enu:b2}) follows from (\ref{enu:b1}). \end{proof} \begin{figure} \caption{The matrix $L_{n}$ is simply a truncated Pascal triangle, arranged to fit into a lower triangular matrix.} \label{fig:L} \end{figure} \begin{cor} \label{cor:bino}Let $K_{b}$, $\mathscr{H}$, and $n\in\mathbb{Z}_{+}$ be as above with the lower triangular matrix $L_{n}$. Set \begin{equation} K_{n}\left(x,y\right)=K_{b}\left(x,y\right),\quad\left(x,y\right)\in F_{n}\times F_{n},\label{eq:b8} \end{equation} i.e., an $\left(n+1\right)\times\left(n+1\right)$ matrix. \begin{enumerate} \item Then $K_{n}$ is invertible with \begin{equation} K_{n}^{-1}=\left(L_{n}^{tr}\right)^{-1}\left(L_{n}\right)^{-1};\label{eq:b9} \end{equation} an $(\text{upper triangular})\times(\text{lower triangular})$ factorization.
\item For the diagonal entries in the $\left(n+1\right)\times\left(n+1\right)$ matrix $K_{n}^{-1}$, we have (with $x$ denoting the standard basis vector $\delta_{x}\in l^{2}\left(F_{n}\right)$): \[ \left\langle x,K_{n}^{-1}x\right\rangle _{l^{2}}=\sum_{k=x}^{n}\binom{k}{x}^{2} \] \end{enumerate} Conclusion: Since \begin{equation} \left\Vert P_{F_{n}}\left(\delta_{x_{1}}\right)\right\Vert _{\mathscr{H}}^{2}=\left\langle x_{1},K_{n}^{-1}x_{1}\right\rangle _{l^{2}}\label{eq:b11} \end{equation} for all $x_{1}\in F_{n}$, we get \begin{align} \left\Vert P_{F_{n}}\left(\delta_{x_{1}}\right)\right\Vert _{\mathscr{H}}^{2} & =\sum_{k=x_{1}}^{n}\binom{k}{x_{1}}^{2}\nonumber \\ & =1+\binom{x_{1}+1}{x_{1}}^{2}+\binom{x_{1}+2}{x_{1}}^{2}+\cdots+\binom{n}{x_{1}}^{2};\label{eq:b12} \end{align} and therefore, \[ \left\Vert \delta_{x_{1}}\right\Vert _{\mathscr{H}}^{2}=\sum_{k=x_{1}}^{\infty}\binom{k}{x_{1}}^{2}=\infty. \] In other words, no $\delta_{x}$ is in $\mathscr{H}$. \end{cor} \begin{acknowledgement*} The co-authors thank the following colleagues for helpful and enlightening discussions: Professors Daniel Alpay, Sergii Bezuglyi, Ilwoo Cho, Myung-Sin Song, Wayne Polyzou, and members in the Math Physics seminar at The University of Iowa. \end{acknowledgement*} \end{document}
\begin{document} \title{Knowledge as Invariance - History and Perspectives of Knowledge-augmented Machine Learning} \begin{abstract} Research in machine learning is at a turning point. While supervised deep learning has conquered the field at a breathtaking pace and demonstrated the ability to solve inference problems with unprecedented accuracy, it still does not quite live up to its name if we think of learning as the process of acquiring knowledge about a subject or problem. Major weaknesses of present-day deep learning models are, for instance, their lack of adaptability to changes of environment or their inability to perform other kinds of tasks than the one they were trained for. While it is still unclear how to overcome these limitations, one can observe a paradigm shift within the machine learning community, with research interests shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks, and towards employing machine learning algorithms in highly diverse domains. This research question can be approached from different angles. For instance, the field of \emph{Informed AI} investigates the problem of infusing domain knowledge into a machine learning model, by using techniques such as regularization, data augmentation or post-processing. On the other hand, a remarkable number of works in recent years has focused on developing models that by themselves guarantee a certain degree of versatility and \emph{invariance} with respect to the domain or problem at hand. Thus, rather than investigating how to provide domain-specific knowledge to machine learning models, these works explore methods that equip the models with the capability of \emph{acquiring} the knowledge by themselves. This white paper provides an introduction and discussion of this emerging field in machine learning research.
To this end, it reviews the role of knowledge in machine learning, and discusses its relation to the concept of invariance, before providing a literature review of the field. Additionally, it gives insight into some historical context. \end{abstract} \section{Introduction} One of the most prominent researchers in deep learning, Yoshua Bengio, cites the \emph{Global Workspace Theory of Consciousness} \cite{kahneman2011} as his preferred model of human cognitive capabilities \cite{bengio2017}. According to this theory, human consciousness can be grouped into two systems, with System 1 performing intuitive, automated tasks that we can do instinctively and System 2 performing tasks that require conscious decision making and can be described verbally. Current deep learning algorithms are particularly good at performing System 1-level tasks. For instance, if we are presented with pictures of edible plant matter and are asked to group them into nutritional categories, such as fruits, vegetables, nuts, grains, legumes, etc., we would perform this task instinctively and probably without any hesitation. The same task should also be easily accomplished by a neural network, trained with the appropriate data. Consider now an adaptation of the task, where we are asked to group the same images into botanical categories, such as leaves, fruits (in the botanical sense), roots, seeds, etc. Many of us would probably feel slightly less confident performing this task, but after reading up on the corresponding definitions, humans would likely still perform quite well. A neural network, on the other hand, would typically require re-learning all of its parameters. The above example exposes two remarkable cognitive capabilities present in humans that neural networks typically lack: the ability to incorporate complementary input into the task execution and the ability to generalize a System 1 level skill to changes in the problem setting.
It makes sense to treat these two capabilities as flip sides of the same coin. The reason can be found in the \emph{No free lunch theorem} \cite{flach2012}, since, broadly speaking, a model that is perfectly adapted to one task cannot be generalized to other tasks without either forfeiting performance or infusing additional assumptions about the task or the data into it. Unsurprisingly, the field of \emph{Informed Machine Learning} \cite{VonRueden2019}, which investigates how to enhance machine learning by means of prior domain knowledge, has gained considerable importance. The employed techniques include methods such as data augmentation, loss regularization, hyper-parameter design or post-process filtering of the model output \cite{VonRueden2019}. However, these approaches build upon the assumption that today's off-the-shelf deep learning models offer sufficient versatility to adapt to specified scenarios on-demand. This is unlikely to be the case, and one of the major reasons for this has to do with the research culture of the machine learning community. As argued in \cite{chollet2019}, progress in machine learning is heavily driven by universally available and easily implementable benchmarks, and improvement of a model is measured by how well it is adapted to these benchmarks. Now, what used to be a catalyst of research advances for deep learning is becoming more and more of a burden, as demand for generalization increases and adversarial attacks expose the weaknesses of highly specialized training. This bias towards specialization has been recognized and identified as a problem by the community's leading figures, such as Yoshua Bengio, Geoffrey Hinton and Yann LeCun \cite{LeCun2020}. Similar concerns were expressed by Fran\c cois Chollet \cite{chollet2019} and Gary Marcus \cite{marcus2020}.
These debates have sparked a number of research directions that rather than studying the \emph{adaptation} of machine learning models to a specific problem or situation, focus on their \emph{adaptability.} This adaptability can refer to different aspects of the problem at hand, e.g. the lighting condition in visual data, the length of a natural-language phrase or even the skill that is to be learned itself. This white paper is a modest attempt to provide an overview of the most promising developments in this direction and to exemplify their relation to the concept of knowledge in AI. \section{Knowledge as Invariance} \subsection{From Machine Learning to Knowledge Acquisition} The Oxford Dictionaries define knowledge as \emph{facts, information, and skills acquired by a person through experience or education} or \emph{the theoretical or practical understanding of a subject}. Current AI systems, notably deep learning architectures, can be described as systems that acquire facts or skills through experience or education, i.e. training. Still, neural networks can be hardly considered \emph{knowledgeable} in the broader sense in which we understand this term. But what is it about knowledge that current AI methods in general and deep learning specifically fall short of? What should a "knowledgeable system" be able to do that a typical deep neural network can not? \begin{figure} \caption{The DIKW Pyramid. Source: \cite{dkiw2015}} \label{fig:dikw} \end{figure} \Figref{fig:dikw} depicts the DIKW Pyramid \cite{rowley2007} that groups the terms \emph{data - information - knowledge - wisdom} along an abstraction hierarchy. While raw data is useless for carrying out any decisions, information infers structure and task-bound function from data by providing answers to clearly specified questions \cite{rowley2008}. In a way, today's established deep learning systems predominantly work on this level of abstraction. 
They take raw data and infer just enough rules from it, to answer questions such as "Does this image contain a cat?" or "Did this reviewer enjoy that book?". If we think of wisdom as the level corresponding to a hypothetical Artificial General Intelligence (AGI), i.e. systems that can be considered fully autonomous up to the point of asking for a higher meaning or purpose of a task, knowledge would correspond to a stage somewhere in-between the two. Knowledgeable machines should go beyond answering well-defined task-specific questions and see a slightly bigger picture without necessarily becoming fully autonomous in that. Davenport and Prusak \cite{davenport2000} describe knowledge as \begin{quote} \textit{[...] a fluid mix of framed experience, values, contextual information, expert insight and grounded intuition that provides an environment and framework for evaluating and incorporating new experiences and information.} \end{quote} Note how this definition puts emphasis on adaptability. This coincides with the widely accepted notion that only by solving \emph{transfer tasks}, we can verify that we have knowledge in a field. \emph{Informed} systems differ from \emph{knowledgeable} ones in how rigid they are with respect to a skill or a situation. The transition from informed to knowledge-based AI thus heavily depends on how invariant they become. Admittedly, machine learning has always been about invariance. Deep convolutional neural nets, for instance, learn low-level invariances on the pixel level, e.g. invariance to translation operations. However, invariance at higher levels of abstraction is still rare to find in contemporary machine learning models. In particular, we believe that the following three types of invariance are crucial for knowledge-based AI models. \subsubsection{Invariance to the Skill} \label{sub:skill} Deep Learning owes much of its success to the \emph{supervised learning} paradigm. 
At the same time, the emphasis on supervised learning has been identified as one of the major limiting factors in achieving AGI \cite{LeCun2020}. Formally, most of the common supervised learning problems can be written as some form of function approximation, in which the task is to find a ${\bm{\theta}}$-parameterized function \begin{equation} f_{\bm{\theta}}:\mathcal X\to \mathcal Y \end{equation} that maps (approximately) each input data sample ${\bm{x}}$ to a label ${\bm{y}}$ according to the rules that have been learned from a set of labeled training data \begin{equation} \{({\bm{x}}_i, {\bm{y}}_i)\}_{i=1,\dots,N}. \end{equation} We generally assume that the training set has been independently sampled from a joint distribution $p({\mathbf{x}}, {\mathbf{y}})$ such that the function approximation can be phrased as likelihood optimization for the trainable parameter ${\bm{\theta}}$, e.g. the entirety of weights in the neural network to be trained. It should go without saying that a model trained in such a way does not acquire knowledge regarding the data it has been fed with, as knowledge is characterized by being transferable from one skill to another. In order to acquire knowledge, a model should not just yield one particular function, but provide a possibility to infer different kinds of functions with respect to the data it has been trained on. For instance, a model that has been fed with natural images should not just be able to classify the images into categories, but perform semantically related tasks, such as detecting which kinds of objects tend to appear close to each other, how they are spatially aligned with respect to each other and so on. \subsubsection{Invariance to the Data Distribution} Most common optimization objectives in machine learning are derived from likelihood maximization. This builds upon the implicit assumption that training and test data are sampled from the same distribution.
In reality, data is gathered under conditions that can change. Arjovsky et al. provide an illustrative example of this issue in \cite{arjovsky2019}: consider a neural network that is trained to classify images into photographs of cows and camels. As such pictures are usually taken in the environments one would expect to find the respective ruminants, the network will likely tend to assign green grassland to the former, and sandy landscapes to the latter category. While from a statistical point of view, this is a perfectly legitimate way to infer correlations from the data, it contradicts another aspect of knowledge that transcends the System 1-level skillset, namely the capability of adapting to changes in situation, context or environment. This capability is typically referred to as \emph{out of distribution} (OOD) generalization. Unfortunately, there is no universally accepted metric that evaluates the OOD capabilities of a model, since by its very definition it does not characterize machine learning models by what they generalize to, but by what they do not generalize to (the training data distribution). As an illustration of the difficulty to define an appropriate OOD objective, consider again the supervised learning task of identifying the function \begin{equation} f_{\bm{\theta}}:\mathcal X \to \mathcal{Y}, \ {\bm{\theta}}\in\mathbb{R}^p \end{equation} that maps from a data sample ${\bm{x}}\in \mathcal X$ to a label ${\bm{y}}\in \mathcal Y$, where the joint distribution $p_{e}({\bm{x}}, {\bm{y}})$ is determined by the \emph{environment} $ e \in \mathcal{E}$. Assuming an $\ell_2$ loss, we could define the environment-dependent objective function $L^e$ as \begin{equation} L^e({\bm{\theta}}) = \mathbb{E}_{{\mathbf{x}}, {\mathbf{y}}\sim p_e}[\|f_{\bm{\theta}}({\mathbf{x}})-{\mathbf{y}}\|^2].
\end{equation} But in order to minimize such an objective for all environments $ e \in \mathcal{E}$ fairly, we would need to make assumptions about the statistics of the environments themselves. For instance, the obvious optimization problem \begin{equation} \min_{{\bm{\theta}}}\mathbb{E}_{{\textnormal{e}}\sim p({\textnormal{e}})}[L^{\textnormal{e}}({\bm{\theta}})] \label{eq:odd_obj} \end{equation} requires a model for the distribution $p({\textnormal{e}})$ of environments. Note that estimating $p({\textnormal{e}})$ from data would require the training data to be representative of the environments to appear in the testing phase, which contradicts the premise of OOD. One such possible assumption is that the function can be written as \begin{equation} f_{\bm{\theta}} = \varphi_{{\bm{\theta}}}\circ\phi, \end{equation} where $\phi$ has the property that \begin{equation} \hat{{\bm{\theta}}} = \argmin_{\bm{\theta}} L^e({\bm{\theta}}) \end{equation} does not depend on $e$. The function $\phi$ is then said to \emph{elicit} an $\mathcal E$-invariant predictor \cite{arjovsky2019}. For instance, in the cow/camel example above, a function that removes the background from an image and replaces it by neutral pixels elicits an invariant predictor of the class. Again, it is important to be aware of the implicit assumptions that we make about $\mathcal E$. In this case, we assume that a change of $e$ mainly affects the background of an image. If $\phi$ is trained from the data, then we need to trust that if it elicits an invariant predictor for the environments in $\mathcal E$, it does so also for all possible environments that could appear in the testing phase. A slightly different attempt to formalize OOD has been given in \cite{Greenfeld2019}.
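The invariant-predictor idea can be made concrete with a toy sketch. The code below is our own illustration with made-up synthetic features (it is not code from \cite{arjovsky2019}): feature $0$ plays the role of the animal shape (a stable cue), feature $1$ the background, and the environment parameter $e$ controls how often the background is misleading. A least-squares predictor trained on the raw features changes with $e$, while the predictor trained on $\phi({\bm{x}})$, which simply discards the background feature, remains stable across environments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_env(e, n=2000):
    # toy cow/camel setup: label y in {0 = cow, 1 = camel};
    # "shape" tracks y in every environment, while "background"
    # is flipped with probability e (the environment parameter)
    y = rng.integers(0, 2, n)
    shape = y + 0.1 * rng.standard_normal(n)
    flip = rng.random(n) < e
    background = np.where(flip, 1 - y, y) + 0.1 * rng.standard_normal(n)
    return np.column_stack([shape, background]), y.astype(float)

def least_squares_fit(X, y):
    # ordinary least squares with an intercept column appended
    A = np.column_stack([X, np.ones(len(X))])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def phi(X):
    # candidate invariant representation: drop the background feature
    return X[:, :1]

for e in (0.05, 0.5, 0.95):
    X, y = sample_env(e)
    w_full = least_squares_fit(X, y)       # weights drift with e
    w_inv = least_squares_fit(phi(X), y)   # weight is stable in e
    print(e, np.round(w_full[:2], 2), np.round(w_inv[0], 2))
```

Running this, the background weight of the full model changes sign as $e$ crosses $1/2$, while the $\phi$-based weight stays essentially constant. We now return to the approach of \cite{Greenfeld2019}.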
The authors assume a joint training probability $p_\mathrm{source}({\mathbf{x}}, {\mathbf{y}})$ between data ${\mathbf{x}}$ and label ${\mathbf{y}}$ that does not necessarily coincide with the unknown target distribution $p_\mathrm{target}({\mathbf{x}}, {\mathbf{y}})$. However, it is assumed that the conditional distribution of ${\mathbf{y}}$, given a realization of ${\mathbf{x}}$, is universal to all possible conditions, i.e. \begin{equation} p_\mathrm{source}({\mathbf{y}}|{\mathbf{x}}={\bm{x}})=p_\mathrm{target}({\mathbf{y}}|{\mathbf{x}}={\bm{x}}) \end{equation} for all possible ${\bm{x}}\in\mathcal X$ and no matter what the target environment looks like. The authors approach this aim by learning the function $f_{\bm{\theta}}:\mathcal X\to \mathcal Y$ such that the property of ${\mathbf{z}}:={\mathbf{y}} - f_{\bm{\theta}}({\mathbf{x}})$ being independent of ${\mathbf{x}}$ is fulfilled. To this end, a measure of independence that can be easily optimized is necessary. \subsubsection{Invariance to the Data Syntax} Data is always redundant and this is true regardless of it being present in the form of visual data, audio, text or any other possible data modality. This redundancy manifests in a set of implicit rules that determine how meaning is encoded within the respective data modality. For documents written in natural language, for instance, these rules are determined by grammar, whereas in natural images these rules stem from physical laws, as well as our geometrical notion of objects, backgrounds, compositions, etc. When we as humans process data that we gather through sensory stimuli, we are able to abstract knowledge from its syntactical configuration. Deep learning has become so successful precisely because CNNs are capable of learning representations that are invariant to syntactical clutter such as the spatial location or the scale of an object in an image.
However, this degree of invariance in neural networks is mostly limited to visual and sequential data, and specifically does not extend to complex, structured or compositional data types used in knowledge representation, such as \begin{itemize} \item Tables, \item Graphs, \item Algebraic or Logical Expressions, \item Complex Natural-Language Phrases, \item Sets. \end{itemize} Indeed, neural networks are pure vector processing machines. This means that they realize mathematical functions that map from one real-valued, finite-dimensional vector space to another. The involved sets are thus naturally equipped with mathematical structure that symbolic data, to name an example, does not possess. Specifically, euclidean vector spaces have a metric, and hence allow for distance-based classification. Functions on vector spaces can also be equipped with properties such as linearity or differentiability, enabling optimization via gradient descent. Non-vector data, on the other hand, lacks many of the conveniences that neural networks rely on, as listed in the following. \textit{Lack of interface for non-euclidean data structures.} Neural networks expect vectors with a fixed dimension at their input and output. While down- and upscaling is possible for input images of different resolutions, normalizing the size of symbolic data is in general not trivial. Moreover, vectors induce a natural order of their elements. This can also not be guaranteed for non-euclidean data, such as graphs or sets. \textit{Lack of a universally applicable inductive bias.} Consider the task of continuing the number sequence $1,2,3,4\dots$ Humans that learn to count from an early age on will naturally make the assumption that this is the beginning of the sequence of natural numbers. Neural networks are universal function approximators. This implies that without any additional assumption about the data at hand, there is no reason to believe that a neural network would continue the sequence in the same manner as we do.
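The ambiguity in the counting example can be made concrete with a minimal sketch (our own illustration): two polynomial hypotheses fit the observations $1,2,3,4$ perfectly, yet disagree about the continuation at $5$.

```python
import numpy as np

xs = np.array([1.0, 2.0, 3.0, 4.0])   # observed positions
ys = np.array([1.0, 2.0, 3.0, 4.0])   # the sequence 1, 2, 3, 4

# hypothesis A: the identity, f(x) = x
f_a = np.poly1d([1.0, 0.0])

# hypothesis B: f(x) = x + (x-1)(x-2)(x-3)(x-4),
# which also interpolates all four observations exactly
bump = np.poly1d([1.0, -1.0])
for r in (2.0, 3.0, 4.0):
    bump = bump * np.poly1d([1.0, -r])
f_b = f_a + bump

# both hypotheses agree on the training data ...
assert np.allclose(f_a(xs), ys) and np.allclose(f_b(xs), ys)

# ... but diverge off the data: f_a(5) = 5, while f_b(5) = 29
print(f_a(5.0), f_b(5.0))
```

Nothing in the observations alone favours one hypothesis over the other; the choice is determined entirely by prior assumptions about the data.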
This kind of prior assumption about the data is called the \emph{inductive bias}. Deep learning has an inductive bias towards translation invariance and self-similarity due to the weight-sharing in convolutional filters. However, these assumptions do not hold for non-image data, in general. The limitations of the inductive bias in deep learning have been recently illustrated by the benchmark presented in \cite{chollet2019}. It contains seemingly simple, low-dimensional abstract toy examples that are easily solved by humans within minutes but are particularly difficult to learn by means of neural nets. \textit{Lack of continuity.} Besides the fact that conversions from continuous to discrete data and back are non-trivial and lead to loss of information, the absence of continuity makes it almost impossible to perform gradient-based optimization as it is common in deep learning. Real syntactical invariance thus requires methods to efficiently process non-euclidean data. Unlike distribution and skill invariance, which demand redefining the learning procedure, syntactical invariance can be realized on an architectural level, e.g. by means of neural layers that generalize convolutions to non-euclidean data. \subsection{Scope of This Work} This work provides a survey of developments in machine learning that can be characterized as approaches to increase invariance towards the three aspects of AI problems discussed in the previous subsection. After providing some historical context on symbolic knowledge, it discusses recent developments in deep learning. First, different architectural elements are reviewed that can be used to increase syntax invariance for different data modalities. In particular, we consider \emph{attention} mechanisms that aim at extracting the relevant fragment within the incoming data and thus reducing the sensitivity to syntactically redundant input.
Furthermore, \emph{capsule}-based neural networks are investigated as a means to disentangle visual entities from their geometric configuration. We then proceed to review approaches that generalize neural networks to non-vector data, in particular graphs and sets, mentioning also results from recent studies on \emph{group action symmetries} that are crucial for a theoretical understanding of inductive biases in CNN-like structures. Finally, two learning paradigms, namely \emph{meta-learning} and \emph{self-supervised learning}, which have attracted increasing interest during recent years and have the potential to enhance skill and distribution invariance, are reviewed. Additionally, we take a look at \emph{metric learning}, which has similar potential. \subsection{Relation to Informed Machine Learning} Knowledge is a subject of interest across many fields within machine learning. In this section, we would like to take a closer look at the field of \emph{Informed machine learning}, described by von R\"uden et al. in \cite{VonRueden2019}. The motivation is that it can be considered complementary to the direction emphasized in this work. Speaking in broad terms, Informed machine learning focuses on questions like "How do I restrict my video prediction model to learning only physically possible scenarios?" or "How do I tell my autonomous driving system that the traffic conditions have changed?" In short, it studies approaches that adapt general-purpose machine learning models to the problem-specific conditions by incorporating domain knowledge. A major difference between \cite{VonRueden2019} and the present work is in the definition and characterization of the term \emph{knowledge}. While this work agrees with \cite{VonRueden2019} about how knowledge relates to \emph{information} with regard to its level of abstraction, we emphasize the fluidity and adaptability of it.
In other words, our emphasis is on the fact that knowledge, once gained, can not only be applied to one particular, rigid problem setting, but adapts to the peculiarities of each given situation. By contrast, von R\"uden et al. stress the aspect of formalization. After defining knowledge as \emph{validated information}, the authors explain how it is characterized by the degree of formalization with regards to its representation. This generally implies that knowledge can be expressed using natural language or a similar system of communication. As a consequence, von R\"uden et al. treat knowledge as input that is usually provided to the system from an external source, typically by an expert who gained their knowledge from domain experience inaccessible to the model and who is capable of formalizing it in a machine-readable way. This input is supposed to enhance machine learning models by additional, formalized insights that could not be incorporated within the training phase. We, on the other hand, do not treat knowledge as external to the system but as something it acquires from possibly heterogeneous input and can apply in the context of different scenarios. Therefore, rather than investigating techniques that leverage external, formalized inputs within machine learning models, we focus on architectural devices and learning paradigms that permit us to learn models in such a way that adaptation to new, unseen scenarios is carried out as seamlessly as possible. As an example, consider again the camel/cow classification problem from before. Recall that the problem consists of building a classification system that tends to wrongly include the semantically irrelevant scenery into the class assignment. Looking at the problem from the point of view of Informed machine learning, we would ask ourselves how to appropriately formalize these changes in scenery and how to communicate them to the model.
By contrast, from the perspective of invariance, we are more interested in training the model in such a way that the background is not taken into account when the classification is performed. That is not to say that we expect invariance-based knowledge to supersede Informed machine learning at some point. On the contrary, we expect that these two fields will complement one another in the future. It is thus important to be aware of the difference in how knowledge is defined and characterized in these two related, but distinct research fields. \section{Neural Symbolic Integration} As stated in the previous section, the transition from informed to knowledge-based AI heavily depends on how invariant the systems become. Logical reasoning can provide mathematically sound invariance in tricky situations. For example, symbolic logic has been used to define properties over a whole system using formal methods over the software, ensuring that the software is safe against known risky behaviours. Symbolic reasoning inherently builds upon invariant properties that are mathematically true. Thus, for AI to be knowledgeable, it needs absolute invariance to show common sense (trivial situations) as well as expert behaviour. Early approaches to achieving invariance hence relied on the integration of connectionist approaches with symbolic reasoning. However, in recent years and in line with the success of deep learning, the research emphasis has shifted towards achieving invariance by means of design choices in the connectionist system itself, without relying on additional guidance from symbolic AI. This section provides an overview of techniques that offer invariance via \emph{Neuro-Symbolic Integration} (NSI), before we dive into the recent, purely connectionist approaches later on. Traditionally, an artificial neural network (ANN) was understood as a connectionist system that acquired expert knowledge about the problem domain after training (invariance to the skill).
ANNs required raw data and were able to generalize to unencountered situations. However, the obtained knowledge remained hidden within the acquired network architecture and connection weights. Symbolic systems, on the other hand, utilized complex and often recursive interdependencies between symbolically represented pieces of knowledge (invariance to the data distribution). Realizing the machine learning bottlenecks of using either of these paradigms in isolation, integrated Neuro-Symbolic systems were proposed. These hybrid systems were expected to combine the two invariances--skill and data distribution--to make the combined system robust to both. Earlier Neuro-Symbolic methods addressed the Neuro-Symbolic learning cycle as depicted in Figure \ref{fig:ns_learn_cycle} \cite{bader2005}. A front-end (symbolic system) fed symbolic (partial) expert knowledge to a connectionist system (ANN) that possibly utilized the internally represented symbolic knowledge during the learning phase (training). Knowledge extracted after the learning phase was fed back to the symbolic system for further processing (reasoning) in symbolic form. \begin{figure} \caption{ Neuro-Symbolic learning cycle. Source: \cite{bader2005}} \label{fig:ns_learn_cycle} \end{figure} Later, \cite{garcez2009} described the Neuro-Symbolic system as a framework where ANNs provide the machinery for parallel computation and robust learning (invariance to noise), while symbolic logic provides an explanation of the network models. These explanations facilitate the interaction with the world and other systems (invariance to skill). It is a tightly-coupled hybrid system that is continuous (ANN) but has a clear discrete interpretation (logic) at various levels of abstraction. Such systems were able to extract logical expressions from trained neural networks and use the extracted knowledge to seed learning in further tasks.
In other words, neural networks were used to achieve invariance to noisy data, while symbolic logic was used to obtain invariance to skill. As a hybrid model, the combination was expected to solve knowledge-based tasks. In the last decade, Neuro-Symbolic Integration has seen many challenges and contributions \cite{garcez2015}. Prominent yet not fully solved challenges are as follows: \begin{itemize} \item Mechanisms of structure learning: symbolic hypothesis search at the concept level (ILP) vs. statistical AI using iterative adaptation processes \item Learning generalizations of symbolic rules \item Effective knowledge extraction from large-scale networks for purposes like explanation, lifelong learning, and transfer learning \end{itemize} In contrast to early approaches using first-order logic, there has been a shift towards non-classical logics \cite{besold2017}, e.g. Temporal Logic \cite{pnueli1977}, Modal Logic \cite{garcez2007}, and Intuitionistic Logic \cite{dalen2002}, as well as logics of intermediate expressiveness, e.g. Description Logic \cite{krotschz2013}, Inductive Logic Programming using propositionalization methods \cite{blockeel2011}, Answer-Set Programming \cite{lifschitz2002}, and Propositional Dynamic Logic \cite{harel2001}. Traditionally, Neuro-Symbolic integration was employed to integrate cognitive abilities (like induction, deduction, and abduction) with the brain's way of making mental models. Computationally, it addressed the integration of logic, probabilities, and learning. This led to the development of new models with the objective of robust learning (invariance to data distribution) and efficient reasoning (invariance to skill). Some success was achieved in various domains like simulation, bioinformatics, fault diagnosis, software engineering, model checking, visual information processing, and fraud prevention~\cite{penning2010, penning2011, garcez1999}.
In parallel, another line of work used methods like probabilistic programming~\cite{gordon2014} for generative ML algorithms like Bayesian ML. Probabilistic programs are functional or imperative programs with two additional abilities: (1) obtaining values at random from distributions, and (2) conditioning values of variables via observations. This allows probabilistic programming to capture a program's statistical behaviour. Probabilistic programs can also be used to represent \textit{probabilistic graphical models}~\cite{koller2009}, which in turn are widely used in statistics and machine learning. These models have diverse application areas like information extraction, speech recognition, computer vision, coding theory, biology and reliability analysis. \section{Recent Developments} \subsection{Innovations in Neural Network Architecture} \subsubsection{Attention} Human beings can focus on a specific area in the field of view or on recent memories to avoid over-consuming energy. Inspired by the visual attention of human beings, the attention mechanism in deep learning is a concept that summarizes approaches to extract the most informative subsets from sets of data. It can aid in distilling the essential content from data, thus making the model more invariant to how the data is organized syntactically. Attention has risen to popularity in neural machine translation (NMT). Many classical NMT approaches are based on an encoder-decoder architecture, where the encoder maps phrases word by word to hidden state vectors and the decoder is trained to model the probability of phrases in the output language conditioned on these vectors. Both the encoder and the decoder are typically realized in a recurrent manner. The translation is then formulated as a likelihood maximization given the probabilistic language model of the decoder and the conditioning on the hidden states. One problem with this approach is that it does not account for sentences of different lengths.
In long sentences, the semantic context for each word spreads out differently than in shorter sentences. To account for this, the authors of \cite{bahdanau2014} have proposed to include an attention mechanism that maps a subset of the hidden state vectors in an encoded sentence to a fixed-length \emph{context} vector which is then fed to the decoder instead of inputting the hidden state vectors directly. The attention is implemented as a weighted sum of normalized exponential functions. In a work on transformer architectures \cite{vaswani2017}, attention is defined more formally as a function \begin{equation} \begin{split} \mathbb{R}^{d_k\times n_q}\times\mathbb{R}^{d_k\times n_k}\times\mathbb{R}^{d_v\times n_k} &\to \mathbb{R}^{d_v\times n_q}, \\ {\bm{Q}}, {\bm{K}}, {\bm{V}} &\mapsto \mathrm{Attention}({\bm{Q}}, {\bm{K}}, {\bm{V}}). \end{split} \label{eq:attention} \end{equation} The columns of ${\bm{Q}}, {\bm{K}}, {\bm{V}}$ are called \emph{queries}, \emph{keys} and \emph{values}, respectively. This function computes for each query ${\bm{q}}_i$ an attention vector ${\bm{a}}_i$ by returning a weighted sum of all values, i.e. \begin{equation} {\bm{a}}_i = \sum_{j=1}^{n_k}\alpha_{i,j} {\bm{v}}_j. \end{equation} The weights are determined from some measure of similarity between the queries and keys. A weight $\alpha_{i,j}$ is high if ${\bm{q}}_i$ and ${\bm{k}}_j$ are similar according to the measure, and close to $0$ otherwise. In neural networks, attention blocks can be used in different ways. For instance, a fixed number of query vectors could be implemented as trainable parameters of an attention block that receives sets of key-value pairs as input and returns attention vectors as output. The query-key-value function in \eqref{eq:attention} gives rise to three categories of attention mechanisms, namely \emph{spatial attention}, \emph{self-attention} and \emph{channel-wise attention}.
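As a concrete illustration of the query-key-value mechanism, the following NumPy sketch computes attention with the scaled dot product of \cite{vaswani2017} as the similarity measure; the concrete matrix dimensions and the softmax normalization are choices of this example, not prescribed by the general definition above.

```python
import numpy as np

def softmax(x, axis=0):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention.
    Q: (d_k, n_q), K: (d_k, n_k), V: (d_v, n_k) -- columns are
    queries, keys and values, as in the text.
    Returns (d_v, n_q): one attention vector a_i per query q_i.
    """
    d_k = Q.shape[0]
    # similarity of every query to every key: shape (n_q, n_k)
    scores = Q.T @ K / np.sqrt(d_k)
    # weights alpha_{i,j}, normalized over the keys j
    alpha = softmax(scores, axis=1)
    # a_i = sum_j alpha_{i,j} v_j  -> columns of the result
    return V @ alpha.T

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 3))  # d_k=4, n_q=3
K = rng.normal(size=(4, 5))  # d_k=4, n_k=5
V = rng.normal(size=(2, 5))  # d_v=2, n_k=5
A = attention(Q, K, V)
print(A.shape)  # (2, 3), i.e. (d_v, n_q)
```

Setting ${\bm{Q}}={\bm{K}}={\bm{V}}$ in this sketch yields the self-attention special case discussed below.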
\emph{Spatial attention} imitates human visual attention in such a way that the network is able to focus on significant semantic areas in input images for final decision making. Queries describe ultimate classification or detection results, while values describe pixel-level image areas and keys are feature maps extracted by convolutional neural networks (CNN). For instance, the image captioning work \cite{Xu:2015:SAT:3045118.3045336} is based on a similar principle as the NMT approach \cite{bahdanau2014} described above, but the attention layer is used to extract important regions from the input image, rather than phrases from a sentence. Many works use spatial attention concepts to improve detection performance or enhance interpretability \cite{kim2017interpretable,pang2019mask}. Another important special case of attention in the sense of \eqref{eq:attention} is \emph{self-attention}, in which ${\bm{Q}}={\bm{K}}={\bm{V}}$ holds. Self-attention computes a representation of an input tuple of feature vectors based on their similarity to each other \cite{vaswani2017,ramachandran2019stand}. This concept resembles the non-local mean in image processing \cite{Wang2018}. \emph{Channel-wise attention} is used predominantly in computer vision tasks by weighting channels of convolutional layers. Similar to the aforementioned spatial attention, queries describe final classification or detection outputs, while keys and values are feature outputs extracted from each channel of convolutional layers, because the channels are known to be activated by specific image patterns. For example, the work \cite{zhang2018} considers CNN channel features in a pedestrian detection task and observes that different channels respond to different body parts. An attention mechanism across channels is employed to represent various occlusion patterns in one single model, such that each pattern corresponds to a combination of body parts.
The adjusted occlusion features ${\bm{f}}_{occ}$ can be written as \begin{equation} \label{eq:occ} {\bm{f}}_{occ} = \mathbf{\Omega}^T {\bm{f}}_{chn}, \end{equation} where $\mathbf{\Omega}$ represents the weights on the channel features ${\bm{f}}_{chn}$. Likewise, the work \cite{hu2018squeeze} uses channel-wise attention to aggregate the information from the entire receptive field. \subsubsection{Capsules} The original motivation behind \emph{capsules} was to disentangle visual entities and their geometric relation to each other. Early works described capsules as neural modules consisting of \emph{recognition} and \emph{generation} units. Both kinds of units are realized via hidden convolutional layers. \begin{figure} \caption{Depiction of a capsule layer. Source: \cite{hinton2011}.} \label{fig:capsule} \end{figure} \figref{fig:capsule} depicts a capsule layer as described in \cite{hinton2011}. In the \emph{transforming autoencoders} introduced by that work, the recognition units process the input image and return two parameters: a probability $p$ that a particular entity is present in the picture, and a vector ${\bm{T}}$ of pose coordinates (\figref{fig:capsule}: ${\bm{T}}=\begin{bmatrix}x & y\end{bmatrix}^\top$). These values are passed on to the generation units, along with a vector $\Delta{\bm{T}}$ that describes the change in pose (\figref{fig:capsule}: $\Delta{\bm{T}}=\begin{bmatrix}\Delta x & \Delta y\end{bmatrix}^\top$). As a result, the generation units create a new image from the visual entities and the new poses computed from ${\bm{T}}$ and $\Delta {\bm{T}}$. The contribution of each capsule to the generated output is determined by the presence probability $p$. It is known that the features extracted by convolutional neural networks become more complex and expressive with an increasing number of layers \cite{lee2009}.
This is due to the translationally equivariant nature of convolutional filters as well as the fact that different semantic features appear in different constellations throughout the training data. This is also the case for capsule neural networks, but the effect is reinforced by the additional pose information provided during the training process. As an illustrative example, assume that we want to train a transforming autoencoder with images of faces under different pose transformations. This kind of data is typically available and readily labeled in publicly accessible datasets. During training, the output image would contain the result of the change in pose. Different facial features such as mouth, ears, eyes or nose will behave differently under a given pose transformation and thus be captured by different capsules. The contribution of each capsule to the generation at the output of the transforming autoencoder is determined by $p$. If the facial feature modeled by a capsule is absent from the image, $p$ should be close to $0$. A capsule layer can thus be viewed as an architectural device to decompose the input into its semantic components. Several capsule layers can be stacked to capture features of increasing complexity. While capsule neural networks, like convolutional nets, are by design an instrument for visual data, it is interesting how they incorporate different data modalities into the learning process. Recent capsule architectures are capable of combining euclidean with symbolic representations. For instance, the \emph{Stacked Capsule Autoencoder} \cite{kosiorek2019} decomposes images into sets of objects and then uses set processing methods to organize the sets into constellations of objects. \subsection{Invariance in Different Data Types} \subsubsection{Neural Set Processing} Sets are one of the most fundamental non-euclidean data types.
They appear in classical combinatorial problems known from theoretical computer science, but also in fields like computer vision in the form of, for instance, point clouds. As such, they occupy a special position within non-euclidean data, as essentially all other practically relevant data types can be derived from sets by adding additional structure. For instance, a word is a set of letters equipped with an order, and a graph is a set of nodes equipped with a pairwise relational structure. Making neural networks capable of dealing with sets thus potentially renders them applicable to all kinds of data that can be derived from a set. Unlike elements of a vector space, sets can be of different cardinalities and do not have a natural order. To process sets, a neural network should thus be able to handle inputs of different sizes and be invariant to permutations. This permutation invariance is an elementary inductive bias in set processing and must be considered both when neural networks process sets as their input and when they return sets as their output. Since recurrent neural networks (RNNs) can handle sequences of different lengths, they have also been employed to process sets. In order to achieve permutation invariance, attention has been used. For instance, the work \cite{vinyals2015} describes a system where an LSTM generates queries to compute attention vectors from sets. The technique is employed for combinatorial tasks such as sorting. However, purely feed-forward structures have also been used for set processing, e.g. by generalizing the convolution operation to sets \cite{li2018a}. An important theoretical result on permutation invariance has been provided in \cite{zaheer2017}.
Given a function \begin{equation} \begin{split} f:\mathcal{X}&\to\mathbb{R},\\ X&\mapsto f(X), \end{split} \end{equation} where $\mathcal{X}$ is the set of sets containing elements of a countable set $\mathfrak X$, it can be shown that $f$ is invariant to permutations of its argument iff it can be written as \begin{equation} f(X)=\rho \left(\sum_{{\bm{x}}\in X}\phi({\bm{x}})\right), \label{eq:f_perm_invariant} \end{equation} with $\rho:\mathbb{R}\to \mathbb{R}$ and $\phi:\mathfrak X\to\mathbb{R}$ being appropriate transformations. This result provides an easy-to-implement guideline for designing models for set processing. Generally, attention is a popular approach to handling sets. The reason is that attention modules can be used to implement functions of the form in \eqref{eq:f_perm_invariant} \cite{lee2018}. \subsubsection{Graph Neural Networks} Like sets, graphs generalize euclidean data types and at the same time can be used to describe a variety of knowledge representations, such as social networks, multi-view images or molecule structures. The survey \cite{wu2020} provides a comprehensive overview of the recent trends in Graph Neural Nets (GNN). A GNN can refer both to an \emph{intra-graph} framework that operates on a node or edge level, e.g. for segmenting a graph into semantically distinct clusters, and to an \emph{inter-graph} framework that, for example, performs classification of adjacency matrices. Overall, intra-graph frameworks are less common. A noteworthy example is \cite{kipf2016}, which presents a semi-supervised classification architecture that operates on partially labeled graph nodes. Graph neural networks have been realized both by recurrent and feed-forward architectures. \begin{itemize} \item \emph{Recurrent} GNNs typically process each node by a recurrent unit such as a Long Short-Term Memory (LSTM) or a Gated Recurrent Unit (GRU). Each unit receives inputs from the units corresponding to its neighboring nodes.
Works in this category, such as \cite{dai2018} or \cite{li2015}, belong to the pioneering approaches of GNNs \cite{wu2020}. \item \emph{Convolutional} GNNs aim at generalizing the concept of convolutions from signals defined on regular grids to signals on graphs. 2D images, for instance, can be viewed as a special case of graphs, where each pixel is described by a node and the neighboring pixels constitute the neighborhood of adjacent nodes. Graph convolutions, like regular ones, can be carried out in the spatial \cite{monti2017,xu2018, velivckovic2018} and the spectral domain \cite{bruna2013,li2018,kipf2016}, by applying an appropriate transform to the graph data. One important question of ongoing research in the context of Convolutional GNNs is the design of appropriate pooling layers \cite{diehl2019}. \item Similarly, \emph{attention}-based mechanisms have also been employed \cite{velivckovic2017}. \end{itemize} GNNs have been widely applied to unsupervised learning tasks, such as graph embedding \cite{perozzi2014} and graph generation \cite{bojchevski2018}. \subsubsection{Group Action Symmetries} As we have seen, considerable effort is put into generalizing convolutions to data structures such as graphs, sets or manifolds. This is not without reason. Weight sharing in convolutional layers of deep networks without doubt provides a strong prior for the most common deep learning applications \cite{ulyanov2018}. While it is not entirely understood what exactly it is about convolutional neural nets that makes them capture the essential information from visual data, robustness towards certain transformations such as translations of images seems to play an important role \cite{Mallat2016}. Most prominently, convolutions are equivariant to spatial translations, which is advantageous for visual data, as translations typically have little impact on the semantic content of an image.
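This translation equivariance can be verified numerically: shifting the input and then convolving gives the same result as convolving first and shifting the output. The sketch below uses a circular 1-D convolution, an assumption made here so that translations are exact and free of boundary effects.

```python
import numpy as np

def circ_conv(x, w):
    # circular convolution y[i] = sum_k w[k] * x[(i-k) mod n],
    # so that translations commute exactly with the filter
    n = len(x)
    return np.array([sum(w[k] * x[(i - k) % n] for k in range(len(w)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=16)  # 1-D input signal
w = rng.normal(size=5)   # convolutional filter
shift = 3

# equivariance: conv(shift(x)) == shift(conv(x))
lhs = circ_conv(np.roll(x, shift), w)
rhs = np.roll(circ_conv(x, w), shift)
print(np.allclose(lhs, rhs))  # True
```

For 2D images the same identity holds per spatial axis, which is exactly the symmetry that weight sharing in convolutional layers builds in.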
However, if we want to apply deep learning just as successfully to non-image inputs, we need to generalize these kinds of symmetries to more exotic types of data, which turns out to be tricky. Nevertheless, some theoretical results on this matter have been presented in \cite{ravanbakhsh2017}, with regards to how parameter sharing induces equivariances with respect to some exemplary group operations on the input, such as rotations and permutations. Later, the work \cite{kondor2018} claimed some stronger results by showing that a convolutional structure is not only a sufficient, but also a necessary condition for equivariance with respect to certain group actions. \subsection{Recent Trends in Learning Paradigms} \subsubsection{Meta-Learning} \emph{Meta-learning} refers to a class of approaches in predominantly supervised learning settings that can be vaguely described as "learning to learn" \cite{finn2017}. Traditional supervised learning problems are typically formulated in terms of \emph{training data} and \emph{test data}, where the training data is used to optimize a parameterized function for classifying or regressing test data samples that are assumed to be sufficiently similar to the training data in terms of labeling and statistics. By contrast, in meta-learning, once a model has been trained, it is not directly used to predict labels of unseen data samples, but rather to once again learn the prediction on a small unseen data set. This field of study has considerable overlap with few-shot classification \cite{yoon2019, vinyals2016}. Meta-learning problems are often framed in terms of \emph{support sets} and \emph{query sets}. A training set \begin{equation} \mathcal{X}_\mathrm{train}=\{(\mathcal S_1, \mathcal Q_1),\dots,(\mathcal S_{N_\mathrm{train}}, \mathcal Q_{N_\mathrm{train}})\} \end{equation} contains $N_\mathrm{train}$ pairs of support and query sets.
A meta-learning framework uses $\mathcal{X}_\mathrm{train}$ to generate a deep learning model that can be easily trained on the support set of a new, unseen pair $(\mathcal{S}_\mathrm{test},\mathcal{Q}_\mathrm{test})$, such that it generalizes to the samples in $\mathcal{Q}_\mathrm{test}$, even when the number of samples in $\mathcal{S}_\mathrm{test}$ is small and, in the case of classification, contains classes that have not been observed in the training set. It is reasonable to assume that for any support/query pair the probability distributions from which the samples have been drawn are the same for the support and the query set. For classification problems, the same holds for the classes, which should coincide for the support and query set of one $(\mathcal Q, \mathcal S)$-pair, but not necessarily across all pairs. Since training is performed twice, in the following we refer to the first stage of training, i.e. on $\mathcal{X}_\mathrm{train}$, as \emph{training}, and to the second stage, i.e. on $(\mathcal Q_\mathrm{test}, \mathcal S_\mathrm{test})$, as \emph{adaptation}. Typically, the models are parameterized by a \emph{task-general} parameter vector ${\bm{\theta}}$ and a parameter vector ${\bm{\vartheta}}_i$ that is specific to one particular support/query set pair $(\mathcal{Q}_i,\mathcal{S}_i)$. The aim of meta-learning is to use $\mathcal X_\mathrm{train}$ to learn a ${\bm{\theta}}$ that is as general as possible, such that inferring ${\bm{\vartheta}}_\mathrm{test}$ from a new, unseen support set $\mathcal{S}_\mathrm{test}$ requires as little effort and data as possible (\emph{fast adaptation}). In \cite{yao2020}, three types of meta-learning approaches have been identified. \emph{Metric-based} methods learn an embedding space parameterized by ${\bm{\theta}}$ in which the classes are well separable across all of $\mathcal{X}_\mathrm{train}$ w.r.t. some distance measure.
An additional, simple proximity-based classifier parameterized by ${\bm{\vartheta}}_i$ is learned jointly for each $i\in\{1,\dots,N_\mathrm{train}\}$. Recent examples of this type of meta-learning models are \cite{yoon2019,oreshkin2018} and \cite{sung2018}. \emph{Gradient-based} methods minimize a measure of expected non-optimality, such that adaptation requires only a few small gradient steps. Prominent examples include \cite{finn2018,yao2020} as well as \cite{grant2018} and \cite{andrychowicz2016}. The more recent class of \emph{amortization} methods relies on inference networks that predict the task-specific parameters ${\bm{\vartheta}}_i$ \cite{gordon2018}. In addition to these three classes, remarkably many meta-learning mechanisms rely on \emph{recurrent} models, since meta-learning can be phrased as a sequence-to-sequence problem \cite{ravi2016,mishra2017}. \subsubsection{Self-supervised Learning} Supervised learning gives us the means to solve tasks for which labels are available in sufficient quantities and variations. However, the acquisition of the required annotations is usually associated with great effort and high costs. Meanwhile, a lot of information in the data remains unexploited: labels that are essentially free. Self-supervised methods try to exploit this untapped potential. Self-supervised learning is an important tool in training skill-invariant models. Since the data is not assumed to be consistently labeled, the model is not trained with a specific task in mind. The general goal is to learn how to encode objects, such as words, images, audio snippets, graphs, etc., into representations that contain the essential information in a condensed form and, thus, can be used to efficiently solve multiple downstream tasks. To achieve this goal, self-supervised methods formulate tasks for which the labels are automatically provided instead of relying on human-annotated labels.
Typically, the performance on this self-supervised task, often called the pretext task, is not important. The actual goal is that the intermediate representations of the trained model encode high-level semantic information. The challenge is to design the pretext task in such a way that high-level understanding is necessary to solve it. One class of self-supervised methods formulates the objective as a prediction task, where a hidden part of the input must be derived from other parts. This objective comes in many flavors, such as predicting a word in a sentence from context \cite{mikolov2013}, \cite{devlin2018}, inpainting \cite{pathak2016}, colorization \cite{zhang2016} or predicting future frames in a video which will be accessible in subsequent time steps \cite{srivastava2015}. Another class of methods solves prediction tasks in learned representation spaces; for example, the relative localization of patches \cite{doersch2015}, \cite{noroozi2016}, the natural orientation of images \cite{gidaris2018} or the geometric transformation between images \cite{agrawal2015}, \cite{zamir2016}, \cite{zhang2019}. The potential advantage of the latter techniques is that they have access to the entire input and do not have to learn details at the image level that are irrelevant for understanding image semantics. In a broader sense, generative models like autoencoders and generative adversarial networks \cite{goodfellow2014} can be considered self-supervised. However, while the focus of generative models is typically to create realistic and diverse samples, the goal of self-supervised learning is to extract meaningful information from data. For a broader overview of self-supervised learning we refer the reader to a recent study \cite{jing2019}.
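To make the idea of automatically provided labels concrete, the following sketch generates pretext labels for the rotation-prediction task in the spirit of \cite{gidaris2018}: every image yields four training pairs, one per multiple of 90 degrees, without any human annotation. The toy image and the four-way label encoding are assumptions of this example.

```python
import numpy as np

def rotation_pretext_pairs(image):
    """Generate (rotated image, label) pairs for the pretext task:
    the model must predict by which multiple of 90 degrees the
    input was rotated -- the labels 0..3 come for free."""
    return [(np.rot90(image, k), k) for k in range(4)]

image = np.arange(9, dtype=float).reshape(3, 3)  # toy "image"
pairs = rotation_pretext_pairs(image)
labels = [label for _, label in pairs]
print(labels)  # [0, 1, 2, 3]
```

A classifier trained on such pairs never sees a human-made annotation; the hope is that solving the task forces its intermediate layers to encode the image semantics needed for downstream tasks.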
\subsubsection{Metric Learning} Since systems that are invariant to the data distribution cannot rely solely on inferring the training data statistics, distance and metric learning \cite{Weinberger2009} has gained considerable importance in the last years. \emph{Deep metric learning} employs deep neural nets to construct an embedding of the data in which the Euclidean distance reflects actual semantic (dis-)similarity between data points. Learning a distance can substitute learning a function $f_{\bm{\theta}}$ as described in \Secref{sub:skill} in order to become more skill-invariant and distribution-invariant. For instance, instead of learning a function that classifies samples, we can learn a metric and use a simple distance-based classifier, e.g. $k$ Nearest Neighbors, on top, which permits us to neglect the joint probability between the data samples and the labels and rather focus on whether we can capture a semantically meaningful notion of similarity. Additionally, we can employ it for a larger variety of tasks than mere classification, e.g. clustering or content-based retrieval. An important approach to metric learning is via \emph{contrastive} embedding. This strategy aims at penalizing pairs of data samples from the same class that are too far apart, as well as pairs of samples from different classes that are too close together. In \cite{Hadsell2006}, this aim is formalized as follows. Let ${\bm{x}}_i$ and ${\bm{x}}_j$ be two samples from the training data set $\mathcal{X}_\mathrm{train}$ and $y_{i,j}$ a label that is $0$ if the pair is deemed similar and $1$ otherwise. This yields the loss function \begin{equation} L_{\mathrm c}({\bm{\theta}})=\frac{1}{2}\sum_{i,j,i\neq j}(1-y_{i,j})\|f_{\bm{\theta}}({\bm{x}}_i)-f_{\bm{\theta}}({\bm{x}}_j)\|^2+y_{i,j}\max(0, m-\|f_{\bm{\theta}}({\bm{x}}_i)-f_{\bm{\theta}}({\bm{x}}_j)\|)^2, \end{equation} where $m>0$ is a threshold value.
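The contrastive loss translates almost literally into code. In the sketch below, the loss is evaluated for a single pair of precomputed embeddings, i.e. the embedding function $f_{\bm{\theta}}$ is left out, and the sample values and margin are assumptions of this example.

```python
import numpy as np

def contrastive_loss(emb_i, emb_j, y, m=1.0):
    """Per-pair contrastive loss as formalized in Hadsell et al.
    y = 0: similar pair   -> penalize a large distance,
    y = 1: dissimilar pair -> penalize a distance below the margin m."""
    d = np.linalg.norm(emb_i - emb_j)
    return 0.5 * ((1 - y) * d**2 + y * max(0.0, m - d)**2)

a = np.array([0.0, 0.0])
b = np.array([0.6, 0.8])  # Euclidean distance to a: exactly 1.0
similar = contrastive_loss(a, b, y=0)     # distance 1.0 squared, halved
dissimilar = contrastive_loss(a, b, y=1)  # distance >= margin: no penalty
print(similar, dissimilar)  # 0.5 0.0
```

Note the opposing pressures: declaring the pair similar yields a positive loss that shrinks as the embeddings move together, while declaring it dissimilar yields zero loss once the distance reaches the margin $m$.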
Contrastive embedding has been successfully applied to learning similarity of interior design images \cite{Bell2015}. Alternatively, the \emph{triplet} loss chooses three samples ${\bm{x}}^a, {\bm{x}}^p, {\bm{x}}^n$, where ${\bm{x}}^p$ (positive) is assumed to be similar to ${\bm{x}}^a$ (anchor) and ${\bm{x}}^n$ (negative) is assumed to be dissimilar from it. Based on these assumptions, the loss function \begin{equation} L_{\mathrm{t}}({\bm{\theta}})=\sum_{i}\max(0,\|f_{\bm{\theta}}({\bm{x}}^a_i)-f_{\bm{\theta}}({\bm{x}}^p_i)\|^2-\|f_{\bm{\theta}}({\bm{x}}^a_i)-f_{\bm{\theta}}({\bm{x}}^n_i)\|^2+m), \end{equation} where $m$ is again a threshold, is constructed, based on a sufficient number of triplets ${\bm{x}}^a_i, {\bm{x}}^p_i, {\bm{x}}^n_i$. The triplet loss has been successfully employed for face recognition tasks \cite{Schroff2015}, among others. More recent works propose more sophisticated loss functions, e.g. \emph{Lifted Structured Feature Embedding} \cite{OhSong2016}, \emph{Multi-class $n$-pair loss} \cite{Sohn2016} or angular loss \cite{Wang2017}. \section{Conclusion} Knowledge can be expected to play a key role in deep learning and AI developments of the years to come. Many works have investigated the concept of knowledge by emphasizing its interpretation as domain or expert knowledge and developing methods that infuse complementary, problem-specific insights into general-purpose machine learning algorithms. The research questions this type of work tries to answer usually relate to adapting a given model to a specific problem or situation. By contrast, many recent trends in machine learning research put the machine learning models themselves at the center of interest, rather than the diverse application scenarios they can be applied to. This shifts the focus from \emph{adaptation} to \emph{adaptability}, and to the challenge of \emph{designing} the models in a way such that the effort involved in adapting them can be minimized. 
Motivated by these developments, we conclude that the decisive facet of knowledge in advancing the field is that of \emph{invariance}. Not incidentally, this coincides with definitions from knowledge management. Invariance can refer to different aspects of a machine learning model and, on a low level, is already a design principle of well-established neural architectures. However, in order to interpret, process, represent or generate knowledge with machine learning, we need to achieve invariance in a broader and more abstract sense. This is a gradual process, as there is no clear boundary at which invariance of skill, distribution or syntax is achieved. As machine learning models become increasingly invariant, one can expect the following properties of future industrial and societal developments to be achieved and enhanced. \begin{itemize} \item Small data size: One fundamental challenge in real-world applications is that the amount of data available for training an appropriate machine learning model is often too small. This is an obstacle researchers and practitioners face all too often, in particular when they need to apply their model to real-world problems where gathering and annotating data is costly and publicly available datasets do not exist. By leveraging the advantage of capturing or representing intrinsic invariance in data, we expect that models can be learned on small, inconsistent or insufficiently labeled datasets. That way, this bottleneck of industrial applications can be resolved. \item Human-like intelligent systems: Invariance is crucial in human-centric engineering. The way humans interact with machines is unique to every user. Systems that interact with humans in a natural and intuitive way thus require the capability to adapt to a large variety of individual traits, such as pronunciation, physical features or design preferences. 
By becoming increasingly invariant, human-centric systems could reduce their sensitivity to such peculiarities, for instance by permitting models that are trained on a limited set of users to adapt to new human subjects with their own unique habits and preferences. \item Multi-purpose intelligent systems: Autonomous systems can be expected to become more versatile and universally applicable in the future, a trend that can already be observed today. To this end, they need the capability to perform different tasks and adapt to diverse situations, which can only be achieved with a certain degree of invariance. \end{itemize} \end{document}
\begin{document} \title{Mixedness in Bell-violation vs. Entanglement of Formation } \author{Sibasish Ghosh\protect\( ^{\%}\protect \)\thanks{ [email protected] } , Guruprasad Kar\protect\( ^{\%}\protect \)\thanks{ [email protected] } , Aditi Sen(De)\protect\( ^{\#}\protect \)\thanks{ [email protected] } and Ujjwal Sen\protect\( ^{\#}\protect \)\protect\( ^{\ddagger }\protect \)} \maketitle \lyxaddress{\protect\( ^{\%}\protect \)Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B.T. Road, Kolkata 700 035, India } \lyxaddress{\protect\( ^{\#}\protect \)Department of Physics, Bose Institute, 93/1 A.P.C. Road, Kolkata 700 009, India} \begin{abstract} Recently Munro, Nemoto and White (\emph{The Bell Inequality: A measure of Entanglement?}, quant-ph/0102119) tried to indicate that the reason behind a state \( \rho \) having a higher amount of entanglement (as quantified by the entanglement of formation) than a state \( \rho ^{\prime } \), while producing the same amount of Bell-violation, is the fact that the amount of mixedness (as quantified by the linearised entropy) in \( \rho \) is higher than that in \( \rho ^{\prime } \). We counter their argument with examples. We extend these considerations to the von Neumann entropy. Our results suggest that the reason as to why an equal amount of Bell-violation requires different amounts of entanglement cannot, at least, be explained by mixedness alone. \end{abstract} Werner\cite{1} (see also Popescu\cite{2}) first demonstrated the existence of states which are entangled but do not violate any Bell-type inequality\cite{3,4}. But there exist classes of states (pure states, mixtures of two Bell states) which violate the Bell inequality whenever they are entangled\cite{5,6}. This implies that, to produce an equal amount of Bell-violation, some states must have more entanglement (with respect to some measure) than others. 
It would be interesting to find out what property of the first state requires it to have more entanglement to produce the same Bell-violation. Recently Munro \emph{et al.}\cite{7} have tried to indicate that this anomalous property of the first state is due to its being more \emph{mixed} than the second, where they took the linearised entropy\cite{8} as the measure of mixedness. As in \cite{7}, we use the entanglement of formation as our measure of entanglement. For a state \( \rho \) of two qubits, its entanglement of formation \( EoF(\rho ) \) is given by\cite{9} \[ EoF(\rho )=h\left( \frac{1+\sqrt{1-\tau }}{2}\right) \] with \[ h(x)=-x\log _{2}x-(1-x)\log _{2}(1-x).\] The tangle \( \tau \) \cite{10} is given by \[ \tau (\rho )=[\max \{0,\: \lambda _{1}-\lambda _{2}-\lambda _{3}-\lambda _{4}\}]^{2},\] the \( \lambda _{i} \)'s being the square roots of the eigenvalues, in decreasing order, of \( \rho \widetilde{\rho } \), where \[ \widetilde{\rho }=(\sigma _{y}\otimes \sigma _{y})\rho ^{*}(\sigma _{y}\otimes \sigma _{y}),\] the complex conjugation being taken in the standard product basis \( \left| 00\right\rangle \), \( \left| 01\right\rangle \), \( \left| 10\right\rangle \), \( \left| 11\right\rangle \) of two qubits. Note that EoF is monotonically increasing, ranging from \( 0 \) to \( 1 \) as \( \tau \) increases from \( 0 \) to \( 1 \), and hence, like Munro \emph{et al.}\cite{7}, we take \( \tau \) as our measure of entanglement. The maximum amount of Bell-violation (\( B \)) of a state \( \rho \) of two qubits is given by\cite{6} \[ B(\rho )=2\sqrt{M(\rho )},\] where \( M(\rho ) \) is the sum of the two largest eigenvalues of \( T_{\rho }T^{\dagger }_{\rho } \), \( T_{\rho } \) being the \( 3\times 3 \) matrix whose \( (m,n) \)-element is \[ t_{mn}=tr(\rho \sigma _{n}\otimes \sigma _{m}).\] The \( \sigma \)'s are the Pauli matrices. The linearised entropy \cite{8} \[ S_{L}(\rho )=\frac{4}{3}(1-tr(\rho ^{2}))\] is taken as the measure of mixedness. 
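All three quantities are straightforward to evaluate numerically. The sketch below (our own illustration; the helper names are not from \cite{7}) computes \( \tau \), \( B \) and \( S_{L} \) for the Bell state \( \Phi ^{+} \), for which \( \tau =1 \), \( B=2\sqrt{2} \) and \( S_{L}=0 \):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def tangle(rho):
    """tau(rho) from the square roots of the eigenvalues of rho @ rho~."""
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    lam = np.sqrt(np.clip(np.linalg.eigvals(rho @ rho_tilde).real, 0, None))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3]) ** 2

def bell_violation(rho):
    """B(rho) = 2 sqrt(M), M = sum of the two largest eigenvalues of T T^T."""
    T = np.array([[np.trace(rho @ np.kron(sn, sm)).real for sn in paulis]
                  for sm in paulis])
    ev = np.sort(np.linalg.eigvalsh(T @ T.T))[::-1]
    return 2 * np.sqrt(ev[0] + ev[1])

def linearized_entropy(rho):
    return 4.0 / 3.0 * (1 - np.trace(rho @ rho).real)

phi_plus = np.zeros(4, dtype=complex)
phi_plus[[0, 3]] = 1 / np.sqrt(2)          # (|00> + |11>) / sqrt(2)
rho = np.outer(phi_plus, phi_plus.conj())  # projector onto Phi+
```

The eigenvalues of \( \rho \widetilde{\rho } \) are real and non-negative in theory; the clipping only guards against floating-point noise.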
Munro \emph{et al.}\cite{7} proposed that, for two two-qubit states \( \rho \) and \( \rho ^{\prime } \), \[ B(\rho )=B(\rho ^{\prime })\] but \[ \tau (\rho )>\tau (\rho ^{\prime })\] would imply \[ S_{L}(\rho )>S_{L}(\rho ^{\prime }).\] To support this proposal, it was shown that it holds for any combination of states from the following three classes of states: (1) the class of all pure states \[ \rho _{pure}=P[a\left| 00\right\rangle +b\left| 11\right\rangle ]\] with \( a,\: b\geq 0 \) and \( a^{2}+b^{2}=1 \), (2) the class of all Werner states\cite{1} \[ \rho _{werner}=xP[\Phi ^{+}]+\frac{1-x}{4}I_{2}\otimes I_{2}\] with \( 0\leq x\leq 1 \) and \( \Phi ^{+}=\frac{1}{\sqrt{2}}(\left| 00\right\rangle +\left| 11\right\rangle ) \), and (3) the class of all maximally entangled mixed states\cite{11} \[ \rho _{mems}=\frac{1}{2}(2g(\gamma )+\gamma )P[\Phi ^{+}]+\frac{1}{2}(2g(\gamma )-\gamma )P[\Phi ^{-}]+(1-2g(\gamma ))P[\left| 01\right\rangle ]\] with \( g(\gamma )=1/3 \) for \( 0<\gamma <2/3 \) and \( g(\gamma )=\gamma /2 \) for \( 2/3\leq \gamma \leq 1 \), and \( \Phi ^{\pm }=\frac{1}{\sqrt{2}}(\left| 00\right\rangle \pm \left| 11\right\rangle ) \). However, consider the class of all mixtures of two Bell states \[ \rho _{2}=wP[\Phi ^{+}]+(1-w)P[\Phi ^{-}],\] with \( 0<w<1 \). \( \rho _{2} \) is entangled whenever \( w\neq \frac{1}{2} \), and for that entire region, \( \rho _{2} \) is Bell-violating\cite{6}. For this class it is easy to show that \[ B=2\sqrt{1+\tau }.\] But the corresponding curve for pure states \( \rho _{pure} \) is also given by\cite{7} \[ B=2\sqrt{1+\tau }.\] We see that for any fixed Bell-violation, the corresponding \( \rho _{2} \) has its tangle equal to that for the corresponding pure state. But the mixedness of \( \rho _{2} \) is obviously \emph{larger} than that of the pure state (as the mixedness is always zero for pure states). 
Next consider the following class of mixtures of \emph{three} Bell states \[ \rho _{3}=w_{1}P[\Phi ^{+}]+w_{2}P[\Phi ^{-}]+w_{3}P[\Psi ^{+}]\] with \( 1\geq w_{1}\geq w_{2}\geq w_{3}\geq 0 \), \( \sum _{i}w_{i}=1 \) and \( \Psi ^{+}=\frac{1}{\sqrt{2}}(\left| 01\right\rangle +\left| 10\right\rangle ) \). We take \( w_{1}>\frac{1}{2} \) so that \( \rho _{3} \) is entangled \cite{12}. For \( \rho _{3} \), we have (as \( w_{1}\geq w_{2}\geq w_{3} \)) \[ B(\rho _{3})=2\sqrt{2-4w_{2}(1-w_{2})-4w_{3}(1-w_{3})},\] \[ \tau (\rho _{3})=1-4w_{1}(1-w_{1}),\] \[ S_{L}(\rho _{3})=\frac{4}{3}\{w_{1}(1-w_{1})+w_{2}(1-w_{2})+w_{3}(1-w_{3})\}.\] Let \[ \rho ^{\prime }_{3}=w^{\prime }_{1}P[\Phi ^{+}]+w_{2}^{\prime }P[\Phi ^{-}]+w^{\prime }_{3}P[\Psi ^{+}]\] with \( 1\geq w^{\prime }_{1}\geq w^{\prime }_{2}\geq w^{\prime }_{3}\geq 0 \), \( \sum _{i}w^{\prime }_{i}=1 \), \( w^{\prime }_{1}>\frac{1}{2} \) be such that \[ B(\rho _{3})=B(\rho _{3}^{\prime }),\] which gives \[ w_{2}(1-w_{2})+w_{3}(1-w_{3})=w_{2}^{\prime }(1-w^{\prime }_{2})+w^{\prime }_{3}(1-w^{\prime }_{3}).\] Now if \[ \tau (\rho _{3})>\tau (\rho _{3}^{\prime }),\] we have \[ w_{1}(1-w_{1})<w^{\prime }_{1}(1-w^{\prime }_{1}),\] so that \[ w_{1}(1-w_{1})+w_{2}(1-w_{2})+w_{3}(1-w_{3})<w^{\prime }_{1}(1-w^{\prime }_{1})+w^{\prime }_{2}(1-w^{\prime }_{2})+w_{3}^{\prime }(1-w_{3}^{\prime }),\] that is, \[ S_{L}(\rho _{3})<S_{L}(\rho _{3}^{\prime }).\] Thus for a fixed Bell-violation, the order of \( S_{L} \) for \( \rho _{3} \) and \( \rho _{3}^{\prime } \) is \emph{always} reversed with respect to the order of their \( \tau \)'s. That is, the indication of \cite{7}, referred to earlier, is \emph{always} violated for any two states from the class of mixtures of \emph{three} Bell states. One might now suspect that if the \emph{entanglements of formation of two states are equal}, this could imply some order between the amounts of Bell-violation and mixedness of the two states. But even that is not true. 
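The reversal derived above is easy to check numerically from the three closed-form expressions for \( B(\rho _{3}) \), \( \tau (\rho _{3}) \) and \( S_{L}(\rho _{3}) \). In the sketch below (our own; the particular weight vectors are chosen purely for illustration), the two mixtures have equal Bell-violation, the first has the larger tangle, and yet the first has the \emph{smaller} linearised entropy:

```python
import numpy as np

def bell(w1, w2, w3):
    """B(rho_3) = 2 sqrt(2 - 4 w2(1-w2) - 4 w3(1-w3)), for w1 >= w2 >= w3."""
    return 2 * np.sqrt(2 - 4 * w2 * (1 - w2) - 4 * w3 * (1 - w3))

def tangle(w1, w2, w3):
    """tau(rho_3) = 1 - 4 w1(1-w1)."""
    return 1 - 4 * w1 * (1 - w1)

def lin_entropy(w1, w2, w3):
    """S_L(rho_3) = 4/3 * sum_i w_i(1-w_i)."""
    return 4 / 3 * sum(w * (1 - w) for w in (w1, w2, w3))

w = (0.8, 0.15, 0.05)        # here w2(1-w2) + w3(1-w3) = 0.175

# take w1' = 0.78 and solve w2' + w3' = 0.22 with the same value 0.175,
# so that B(rho_3) = B(rho_3') while tau(rho_3) > tau(rho_3')
s, q = 0.22, 0.175
prod = (s**2 - (s - q)) / 2                 # w2' * w3'
w2p = (s + np.sqrt(s**2 - 4 * prod)) / 2
wp = (0.78, w2p, s - w2p)
```

With these weights, equal \( B \) and a larger \( \tau \) come with a \emph{smaller} \( S_{L} \), exactly as the derivation predicts.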
For our first example, if \[ \tau (\rho _{2})=\tau (\rho _{pure}),\] then \[ B(\rho _{2})=B(\rho _{pure}),\] but \[ S_{L}(\rho _{2})>S_{L}(\rho _{pure}).\] On the other hand, for our second example, if \[ \tau (\rho _{3})=\tau (\rho ^{\prime }_{3}),\] then \[ B(\rho _{3})>B(\rho ^{\prime }_{3})\] implies \[ S_{L}(\rho _{3})<S_{L}(\rho ^{\prime }_{3}).\] In Ref.\cite{7}, the linearised entropy was the only measure of mixedness that was considered. But the von Neumann entropy\cite{13} \[ S(\rho )=-tr(\rho \log _{4}\rho )\] of a state \( \rho \) of two qubits is a more physical measure of mixedness than the linearised entropy. We have taken the logarithm to the base \( 4 \) to normalise the von Neumann entropy of the maximally mixed state \( \frac{1}{2}I_{2}\otimes \frac{1}{2}I_{2} \) to unity, as it is for the linearised entropy. One may now feel that the conjecture under discussion may turn out to be true if we change our measure of mixedness from linearised entropy to von Neumann entropy. But both the von Neumann entropy and the linearised entropy are concave functions, attaining their maximum for the same state \( \frac{1}{2}I_{2}\otimes \frac{1}{2}I_{2} \), and each of them is symmetric about the maximum. Thus \[ S_{L}(\rho )>S_{L}(\rho ^{\prime })\] would imply \[ S(\rho )>S(\rho ^{\prime })\] and vice versa. Thus all our considerations with linearised entropy as the measure of mixedness carry over to von Neumann entropy as the measure of mixedness. Our results emphasize that the reason as to why an equal amount of Bell-violation requires different amounts of entanglement cannot, at least, be explained by mixedness alone. We thank Anirban Roy and Debasis Sarkar for helpful discussions. We acknowledge Frank Verstraete for encouraging us to carry over our considerations to the von Neumann entropy. A.S. and U.S. thank Dipankar Home for encouragement and U.S. 
acknowledges partial support by the Council of Scientific and Industrial Research, Government of India, New Delhi. \end{document}
\begin{document} \begin{abstract} The edit distance between two graphs on the same labeled vertex set is the size of the symmetric difference of the edge sets. The edit distance function of the hereditary property, $\mathcal{H}$, is a function of $p\in[0,1]$ and is the limit of the maximum normalized distance between a graph of density $p$ and $\mathcal{H}$. This paper uses the symmetrization method of Sidorenko in order to compute the edit distance function of various hereditary properties. For any graph $H$, ${\rm Forb}(H)$ denotes the property of not having an induced copy of $H$. We compute the edit distance function for ${\rm Forb}(H)$, where $H$ is any split graph, and the graph $H_9$, a graph first used to describe the difficulties in computing the edit distance function. \end{abstract} \title{On the computation of edit distance functions} \section{Introduction} For two graphs $G$ and $G'$ on the same labeled vertex set of size $n$, the \textbf{normalized edit distance} between them is denoted ${\rm Dist}(G,G')$ and satisfies $$ {\rm Dist}(G,G')=\left|E(G)\triangle E(G')\right|/\binom{n}{2} . $$ A \textbf{property} of graphs is simply a set of graphs. A \textbf{hereditary property} is a set of graphs that is closed under isomorphism and the taking of induced subgraphs. The normalized edit distance between a graph $G$ and a property $\mathcal{H}$ is denoted ${\rm Dist}(G,\mathcal{H})$ and satisfies $$ {\rm Dist}(G,\mathcal{H})=\min\left\{{\rm Dist}(G,G') : V(G)=V(G'), G'\in\mathcal{H}\right\} . $$ In this paper, all properties will be hereditary. \subsection{The edit distance function} The \textbf{edit distance function} of a property $\mathcal{H}$, denoted ${\textit{ed}}_{\mathcal{H}}(p)$, measures the maximum distance of a density-$p$ graph from a hereditary property. Formally, $$ {\textit{ed}}_{\mathcal{H}}(p) = \sup_{n\rightarrow\infty}\max\left\{{\rm Dist}(G,\mathcal{H}) : |V(G)|=n, |E(G)|=\left\lfloor p{\textstyle\binom{n}{2}}\right\rfloor\right\} . 
$$ Balogh and the author~\cite{BM} use a result of Alon and Stav~\cite{AS1} to show that the supremum can be made into a limit, as long as the property $\mathcal{H}$ is hereditary. \begin{equation} {\textit{ed}}_{\mathcal{H}}(p) = \lim_{n\rightarrow\infty}\max\left\{{\rm Dist}(G,\mathcal{H}) : |V(G)|=n, |E(G)|=\left\lfloor p{\textstyle\binom{n}{2}}\right\rfloor\right\} . \label{eq:ghhdef} \end{equation} Moreover, the result from~\cite{BM} establishes that if $\mathcal{H}$ is hereditary then we also have $$ {\textit{ed}}_{\mathcal{H}}(p) = \lim_{n\rightarrow\infty}\mathbb{E}\left[{\rm Dist}(G(n,p),\mathcal{H})\right] . $$ That is, the maximum edit distance to a hereditary property for a density-$p$ graph is the same, asymptotically, as that of the Erd\H{o}s-R\'enyi random graph $G(n,p)$ (see Chapter 10 of~\cite{AS:TPM}). For any nontrivial hereditary property $\mathcal{H}$ (that is, one that is not finite), the function ${\textit{ed}}_{\mathcal{H}}(p)$ is continuous and concave down~\cite{BM}. Hence, it achieves its maximum. The maximum value of ${\textit{ed}}_{\mathcal{H}}(p)$ is denoted $d_{\mathcal{H}}^*$. The value of $p$ at which this maximum occurs is denoted $p_{\mathcal{H}}^*$. It should be noted that, for some hereditary properties, the edit distance function may achieve its maximum over a closed interval rather than a single point. In such cases, we will also let $p_{\mathcal{H}}^*$ denote the interval over which the given edit distance function achieves its maximum. \subsection{Symmetrization} In order to compute edit distance functions, we use the method of symmetrization, introduced by Sidorenko~\cite{Sid} and discussed in~\cite{Martin} as a way to compute edit distance functions. We will discuss what symmetrization is and how it is used in Section~\ref{sec:symmetrization}. It uses some properties of quadratic programming, first applied by Marchant and Thomason~\cite{MT}. 
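For concreteness, the normalized edit distance between two graphs on the same labeled vertex set can be computed directly from the definition. The sketch below (our own; graphs are represented as sets of 2-element frozensets) compares the cycle $C_4$ with a spanning path on the same four labeled vertices:

```python
def dist(G, Gp, n):
    """Normalized edit distance |E(G) symdiff E(G')| / binom(n, 2)."""
    return len(G ^ Gp) / (n * (n - 1) // 2)

def e(u, v):
    """An undirected edge as an unordered pair."""
    return frozenset((u, v))

# C4 and the path 0-1-2-3 on the labeled vertex set {0, 1, 2, 3}
C4 = {e(0, 1), e(1, 2), e(2, 3), e(3, 0)}
P4 = {e(0, 1), e(1, 2), e(2, 3)}
d = dist(C4, P4, 4)  # the edge sets differ in 1 of the 6 possible edges
```

Computing ${\rm Dist}(G,\mathcal{H})$ is of course much harder, since it minimizes this quantity over all graphs in $\mathcal{H}$ on the same vertex set.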
Some results on the edit distance function can be found in a variety of papers \cite{R,AKM,AM,AS1,AS2,AS3,AS4,MT,MM}. Much of the background to this paper can be found in a paper by Balogh and the author~\cite{BM}. Terminology and proofs of supporting lemmas that are suppressed here can be found in~\cite{Martin}. \subsection{Main results} Given a graph $H$, ${\rm Forb}(H)$ is the set of all graphs that have no induced copy of $H$. Clearly ${\rm Forb}(H)$ is a hereditary property for any graph $H$ and such a property is called a \textit{principal hereditary property}. It is easy to see that, for any hereditary property $\mathcal{H}$, there exists a family of graphs $\mathcal{F}(\mathcal{H})$ such that $\mathcal{H}=\bigcap_{H\in\mathcal{F}(\mathcal{H})}{\rm Forb}(H)$. \subsubsection{Split graphs} The main results of this paper are Theorem~\ref{thm:split} and Theorem~\ref{thm:h9}. A \textbf{split graph} is a graph whose vertex set can be partitioned into one clique and one independent set. If $H$ is a split graph on $h$ vertices with independence number $\alpha$ and clique number $\omega$, then $\alpha+\omega\in\{h,h+1\}$. The value of $p^*_{{\rm Forb}(H)}$ and of $d^*_{{\rm Forb}(H)}$ had been obtained for $H=K_{1,3}$, the claw, by Alon and Stav~\cite{AS2} and for graphs of the form $K_a+E_b$ (an $a$-clique with $b$ isolated vertices) by Balogh and the author~\cite{BM}. For the ${\rm Forb}(K_a+E_b)$ result, the proof required a weighted version of Tur\'an's theorem. The symmetrization method, however, is much more powerful and we can use it to obtain Theorem~\ref{thm:split}, which gives the value of the edit distance function for all ${\rm Forb}(H)$, where $H$ is a split graph. \begin{thm}\label{thm:split} Let $H$ be a split graph that is neither complete nor empty, with independence number $\alpha$ and clique number $\omega$. Then, \begin{equation}\label{eq:split} {\textit{ed}}_{{\rm Forb}(H)}(p)=\min\left\{\frac{p}{\omega-1},\frac{1-p}{\alpha-1}\right\} . 
\end{equation} \end{thm} It is a trivial result (see, e.g., \cite{Martin}) that ${\textit{ed}}_{{\rm Forb}(K_{\omega})}(p)=p/(\omega-1)$ and ${\textit{ed}}_{{\rm Forb}(E_{\alpha})}(p)=(1-p)/(\alpha-1)$. So, we know the edit distance function for all split graphs. Corollary~\ref{cor:split} follows immediately from Theorem~\ref{thm:split} (and the preceding comment on trivial split graphs), giving the value of the maximum of the edit distance function and the value at which it occurs. \begin{cor}\label{cor:split} Let $H$ be a split graph with independence number $\alpha$ and clique number $\omega$. Then, $\left(p_{\mathcal{H}}^*,d_{\mathcal{H}}^*\right)=\left(\frac{\omega-1}{\alpha+\omega-2},\frac{1}{\alpha+\omega-2}\right)$. \end{cor} To understand the importance of the upcoming Theorem~\ref{thm:h9}, we must define the notion of colored regularity graphs. \subsubsection{Colored regularity graphs} If $S$ and $T$ are sets, then $S\udot T$ denotes the disjoint union of $S$ and $T$. If $v$ and $w$ are adjacent vertices in a graph, we denote the edge between them by $vw$. A \textbf{colored regularity graph (CRG)}, $K$, is a simple complete graph, together with a partition of the vertices into white and black, $V(K)={\rm VW}(K)\udot{\rm VB}(K)$, and a partition of the edges into white, gray and black, $E(K)={\rm EW}(K)\udot{\rm EG}(K)\udot{\rm EB}(K)$. We say that a graph $H$ embeds in $K$ (writing $H\mapsto K$) if there is a function $\varphi: V(H)\rightarrow V(K)$ so that if $h_1h_2\in E(H)$, then either $\varphi(h_1)=\varphi(h_2)\in{\rm VB}(K)$ or $\varphi(h_1)\varphi(h_2)\in{\rm EB}(K)\cup{\rm EG}(K)$, and if $h_1h_2\not\in E(H)$, then either $\varphi(h_1)=\varphi(h_2)\in{\rm VW}(K)$ or $\varphi(h_1)\varphi(h_2)\in{\rm EW}(K)\cup{\rm EG}(K)$. There are certain kinds of CRGs that occur frequently: A \textbf{gray-edge CRG} is a CRG for which all of the edges are gray. 
A \textbf{white-vertex CRG} is a CRG for which all the vertices are white and a \textbf{black-vertex CRG} is a CRG for which all vertices are black. For a hereditary property of graphs, $\mathcal{H}$, we denote $\mathcal{K}(\mathcal{H})$ to be the subset of CRGs, $K$, such that no forbidden graph maps into $K$. That is, if $\mathcal{F}(\mathcal{H})$ is defined to be the minimal set of graphs so that $\mathcal{H}=\bigcap_{H\in\mathcal{F}(\mathcal{H})}{\rm Forb}(H)$, then $\mathcal{K}(\mathcal{H})=\{K : H\not\mapsto K, \forall H\in\mathcal{F}(\mathcal{H})\}$. A CRG $K'$ is said to be \textbf{a sub-CRG of $K$} if $K'$ can be obtained by deleting vertices of $K$. \subsubsection{The graph $H_9$} \begin{figure} \caption{The graph $H_9$.} \label{fig:h9} \end{figure} The graph, $H_9$, as drawn in Figure~\ref{fig:h9}, was given in \cite{BM} as an example of a hereditary property $\mathcal{H}={\rm Forb}(H_9)$ such that $d_{\mathcal{H}}^*$ cannot be determined only by gray-edge CRGs, \`a la Theorem~\ref{thm:fandg}. For any hereditary property $\mathcal{H}$, the number of gray-edge CRGs in $\mathcal{K}(\mathcal{H})$ is finite. Hence, it would be ideal if ${\textit{ed}}_{\mathcal{H}}(p)$ or at least $d_{\mathcal{H}}^*$ could be determined by them. However, the relevant CRG in~\cite{BM} had 4 white vertices, 5 gray edges and a single black edge. In~\cite{BM} only an upper bound of $\min\left\{\frac{p}{3},\frac{p}{2+2p},\frac{1-p}{2}\right\}$ is provided for ${\textit{ed}}_{{\rm Forb}(H_9)}(p)$. The symmetrization method not only shows that the CRGs used in~\cite{BM} were insufficient to compute the edit distance function, but using it leads directly to the discovery of a new CRG, one which was necessary to define the edit distance function given in Theorem~\ref{thm:h9}. \begin{thm}\label{thm:h9} Let $H_9$ be the graph in Figure~\ref{fig:h9}. Then, \begin{equation}\label{eq:h9} {\textit{ed}}_{{\rm Forb}(H_9)}(p)=\min\left\{\frac{p}{3},\frac{p}{1+4p},\frac{1-p}{2}\right\} . 
\end{equation} Consequently, $\left(p_{{\rm Forb}(H_9)}^*,d_{{\rm Forb}(H_9)}^*\right) = \left(\frac{1+\sqrt{17}}{8},\frac{7-\sqrt{17}}{16}\right)$. \end{thm} The new CRG used to determine the function in \eqref{eq:h9} has 5 white vertices, 8 gray edges and two non-incident black edges. \begin{figure}\label{fig:ploth9} \end{figure} \subsection{Structure of the paper} The rest of the paper is organized as follows: Section~\ref{sec:defns} gives some of the general definitions for the edit distance function, such as colored regularity graphs. Section~\ref{sec:pcores} defines and categorizes so-called $p$-core colored regularity graphs, which were introduced by Marchant and Thomason~\cite{MT}. Section~\ref{sec:symmetrization} describes the method we use, called symmetrization. Section~\ref{sec:split} proves Theorem~\ref{thm:split} regarding split graphs. Section~\ref{sec:h9} proves Theorem~\ref{thm:h9} regarding the graph $H_9$. \section{Background and basic facts} \label{sec:defns} For every CRG, $K$, we associate two functions. The function $f$ is a linear function of $p$ and $g$ is found by weighting the vertices. Let $V(K)=\{v_1,\ldots,v_k\}$ be a set of $k$ vertices, and let ${\bf M}_K(p)$ be a $k\times k$ matrix such that the entries are as follows: $$ [{\bf M}_K(p)]_{ij}=\left\{\begin{array}{ll} p, & \mbox{if $i\neq j$ and $v_iv_j\in{\rm EW}(K)$ or $i=j$ and $v_i\in{\rm VW}(K)$;} \\ 1-p, & \mbox{if $i\neq j$ and $v_iv_j\in{\rm EB}(K)$ or $i=j$ and $v_i\in{\rm VB}(K)$;} \\ 0, & \mbox{if $v_iv_j\in{\rm EG}(K)$.} \end{array}\right. 
$$ Then, we can express the $f$ and $g$ functions over the domain $p\in[0,1]$ as follows, with ${\rm VW}={\rm VW}(K)$, ${\rm VB}={\rm VB}(K)$, ${\rm EW}={\rm EW}(K)$, ${\rm EB}={\rm EB}(K)$ and ${\bf 1}$ to be the vector with all entries equal to one: \begin{align} f_K(p) &= \frac{1}{k^2}\left[p\left(\left|{\rm VW}\right|+2\left|{\rm EW}\right|\right)+(1-p)\left(\left|{\rm VB}\right|+2\left|{\rm EB}\right|\right)\right] \label{eq:fdef} \\ g_K(p) &= \left\{\begin{array}{rrcl} \min & \multicolumn{3}{l}{{\bf x}^T{\bf M}_K(p){\bf x}} \\ \mbox{s.t.} & {\bf x}^T{\bf 1} & = & 1 \\ & {\bf x} & \geq & {\bf 0} . \end{array}\right. \label{eq:gdef} \end{align} Note that $f_K(p)=\left(\frac{1}{k}{\bf 1}\right)^T{\bf M}_K(p)\left(\frac{1}{k}{\bf 1}\right)$. Since ${\bf x}=\frac{1}{k}{\bf 1}$ is a feasible solution to \eqref{eq:gdef}, $f_K(p)\geq g_K(p)$. \begin{thm}\label{thm:fandg} For any nontrivial hereditary property $\mathcal{H}$, $$ {\textit{ed}}_{\mathcal{H}}(p)=\inf_{K\in\mathcal{K}(\mathcal{H})}f_K(p)=\inf_{K\in\mathcal{K}(\mathcal{H})}g_K(p)=\min_{K\in\mathcal{K}(\mathcal{H})}g_K(p) . $$ \end{thm} The first two equalities are due to Balogh and the author~\cite{BM}. The last, that the infimum of the $g$ functions can be replaced by a minimum, is implicit from Marchant and Thomason~\cite{MT}, although their setting is not edit distance. \subsection{Basic observations on ${\textit{ed}}_{\mathcal{H}}(p)$} The following is a summary of basic facts about the edit distance function. Item (\ref{it:bcn}) comes from Alon and Stav~\cite{AS1}. Item (\ref{it:concon}) comes from \cite{BM}. The other items are trivial consequences of the definition. The chromatic number of $\mathcal{H}$, denoted $\chi(\mathcal{H})$ or just $\chi$, where the context is clear, is $\min\{\chi(H) : H \in \mathcal{F}(\mathcal{H})\}$. 
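For a two-vertex CRG, the quadratic program \eqref{eq:gdef} can be minimized by brute force over the one-dimensional simplex. The sketch below (our own, purely illustrative) takes the gray-edge CRG with one white and one black vertex at $p=0.3$; here ${\bf M}_K(p)$ is diagonal, the optimum is $g_K(p)=p(1-p)$, attained at ${\bf x}=(1-p,p)$, and the inequality $f_K(p)\geq g_K(p)$ is visible numerically:

```python
import numpy as np

def f_K(M):
    """f_K(p) as the quadratic form at the uniform weight vector (1/k) * 1."""
    k = len(M)
    u = np.full(k, 1 / k)
    return u @ M @ u

def g_K_grid(M, steps=100001):
    """g_K(p) for a 2-vertex CRG: minimize x^T M x over {(t, 1-t) : t in [0,1]}."""
    t = np.linspace(0, 1, steps)
    X = np.stack([t, 1 - t])                  # all candidate weight vectors
    vals = np.einsum('ij,ik,kj->j', X, M, X)  # x^T M x for every column of X
    return vals.min()

p = 0.3
# one white vertex (diagonal entry p), one black vertex (diagonal entry 1-p),
# joined by a gray edge (off-diagonal entries 0)
M = np.array([[p, 0.0], [0.0, 1 - p]])
```

For CRGs with more vertices, the same minimization would run over a higher-dimensional simplex, where a quadratic-programming solver replaces the grid search.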
The complementary chromatic number of $\mathcal{H}$, denoted $\overline{\chi}(\mathcal{H})$ or $\overline{\chi}$, is $\min\{\chi(\overline{H}) : H \in \mathcal{F}(\mathcal{H})\}$. The binary chromatic number is $$ \max\{k+1 : \exists\, r, s, r+s=k, H\not\mapsto K(r,s), \forall H\in\mathcal{F}(\mathcal{H})\} , $$ where $K(r,s)$ denotes the CRG with $r$ white vertices and $s$ black vertices and all edges gray. The complement of hereditary property $\mathcal{H}$, denoted $\overline{\mathcal{H}}$, is $\bigcap_{\overline{H}\in\mathcal{F}(\mathcal{H})}{\rm Forb}(H)$. Observe that $\overline{\mathcal{H}}$ is not the complement of $\mathcal{H}$ as a set. \begin{thm}\label{thm:basic} Let $\mathcal{H}$ be a nontrivial hereditary property with chromatic number $\chi$, complementary chromatic number $\overline{\chi}$, binary chromatic number $\chi_B$ and edit distance function ${\textit{ed}}_{\mathcal{H}}(p)$. \begin{enumerate} \item If $\chi>1$, then ${\textit{ed}}_{\mathcal{H}}(p)\leq p/(\chi-1)$. \label{it:chi} \item If $\overline{\chi}>1$, then ${\textit{ed}}_{\mathcal{H}}(p)\leq (1-p)/(\overline{\chi}-1)$. \label{it:ovchi} \item ${\textit{ed}}_{\mathcal{H}}(1/2)=1/(2(\chi_B-1))$. \label{it:bcn} \item ${\textit{ed}}_{\mathcal{H}}(p)$ is continuous and concave down. \label{it:concon} \item ${\textit{ed}}_{\mathcal{H}}(p)={\textit{ed}}_{\overline{\mathcal{H}}}(1-p)$. \label{it:comp} \end{enumerate} \end{thm} \section{The $p$-cores} \label{sec:pcores} From Theorem~\ref{thm:fandg} we have that, for any hereditary property $\mathcal{H}$ and $p\in [0,1]$, there is a CRG, $K\in\mathcal{K}(\mathcal{H})$ such that ${\textit{ed}}_{\mathcal{H}}(p)=g_K(p)$. This is found by looking at so-called $p$-cores. A CRG, $K$, is a \textbf{$p$-core CRG}, or simply a $p$-core, if $g_{K}(p)<g_{K'}(p)$ for all nontrivial sub-CRGs $K'$ of $K$. 
Marchant and Thomason~\cite{MT} prove that $$ {\textit{ed}}_{\mathcal{H}}(p)=\min\left\{g_K(p) : K\in\mathcal{K}(\mathcal{H})\mbox{ and $K$ is $p$-core}\right\} . $$ Upper bounds for the edit distance function of $\mathcal{H}$ are found by simply exhibiting some CRGs $K\in\mathcal{K}(\mathcal{H})$ and computing $g_K(p)$ by means of \eqref{eq:gdef}. The symmetrization method obtains lower bounds for ${\textit{ed}}_{\mathcal{H}}(p)$. The main tools are Lemmas~\ref{lem:cores} and~\ref{lem:symm}, found in~\cite{Martin}. We have already seen much of the theoretical underpinnings. For a vertex, $v$, in a CRG, $K$, we say that $v'$ is a \textbf{gray [white,black] neighbor of $v$} if the edge $vv'$ has color gray [white,black]. We use $N_G(v)$, $N_W(v)$, $N_B(v)$ to denote the set of gray, white and black neighbors of $v$. Given $K$, a $p$-core, there is a unique optimum weight vector, ${\bf x}$, with all entries positive, that is a solution to \eqref{eq:gdef}. For any vertex $v\in V(K)$, ${\rm d}_{\rm G}(v)$ denotes the sum of the weights of the gray neighbors of $v$ under ${\bf x}$, ${\rm d}_{\rm W}(v)$ the sum of the white neighbors (including $v$ itself if the color of $v$ is white) and ${\rm d}_{\rm B}(v)$ the sum of the black neighbors (again, including $v$ itself if the color of $v$ is black). Consequently, ${\rm d}_{\rm G}(v)+{\rm d}_{\rm W}(v)+{\rm d}_{\rm B}(v)=1$. The fundamental concept is that we may, in many cases, assume the vertices are monochromatic (say, black) and all edges are either white or gray. The sizes of the gray neighborhoods are a function of the weight ${\bf x}(v)$. We formalize the observations below: \begin{lem} \label{lem:cores} Let $\mathcal{H}$ be a nontrivial hereditary property and $p\in (0,1)$, $\mathcal{K}(\mathcal{H})$ the set of CRGs defined by $\mathcal{H}$. Then, \begin{enumerate} \item ${\textit{ed}}_{\mathcal{H}}(p)=\min\{g_K(p) : K\in\mathcal{K}(\mathcal{H})\mbox{ and $K$ is $p$-core}\}$. 
\label{it:equivcore} \item If $p\leq 1/2$ and $K$ is a $p$-core CRG, then $K$ has no black edges and white edges can only be incident to black vertices. \label{it:smpcore} \item If $p\geq 1/2$ and $K$ is a $p$-core CRG, then $K$ has no white edges and black edges can only be incident to white vertices. \label{it:lgpcore} \item If ${\bf x}$ is the optimal weight function of a $p$-core CRG $K$, then for all $v\in V(K)$, $g_K(p)=p{\rm d}_{\rm W}(v)+(1-p){\rm d}_{\rm B}(v)$. \label{it:regularize} \end{enumerate} \end{lem} \section{Computing edit distance functions using symmetrization} \label{sec:symmetrization} The overall idea is that we need only consider $p$-core CRGs and that, because of their special structure, a great deal of information can be obtained by focusing on a single vertex. Lemma~\ref{lem:symm} has all of the elements needed to express ${\rm d}_{\rm G}(v)$ for any vertex $v$ in a $p$-core CRG. It is often useful to focus on the gray neighborhoods of vertices. \begin{lem}[Symmetrization]\label{lem:symm} Let $p\in (0,1)$ and $K$ be a $p$-core CRG with optimal weight function ${\bf x}$. \begin{enumerate} \item If $p\leq 1/2$, then ${\bf x}(v)=g_K(p)/p$ for all $v\in{\rm VW}(K)$ and $$ {\rm d}_{\rm G}(v)=\frac{p-g_K(p)}{p}+\frac{1-2p}{p}{\bf x}(v) , \qquad\mbox{for all $v\in{\rm VB}(K)$.} $$ \label{it:symmsmp} \item If $p\geq 1/2$, then ${\bf x}(v)=g_K(p)/(1-p)$ for all $v\in{\rm VB}(K)$ and $$ {\rm d}_{\rm G}(v)=\frac{1-p-g_K(p)}{1-p}+\frac{2p-1}{1-p}{\bf x}(v) , \qquad\mbox{for all $v\in{\rm VW}(K)$.} $$ \label{it:symmlgp} \end{enumerate} \end{lem} \begin{cor}\label{cor:xbound} Let $p\in (0,1)$ and $K$ be a $p$-core CRG with optimal weight function ${\bf x}$. \begin{enumerate} \item If $p\leq 1/2$, then ${\bf x}(v)\leq g_K(p)/(1-p)$ for all $v\in{\rm VB}(K)$. \label{it:xbound0} \item If $p\geq 1/2$, then ${\bf x}(v)\leq g_K(p)/p$ for all $v\in{\rm VW}(K)$.
\label{it:xbound1} \end{enumerate} \end{cor} \begin{rem} From this point forward in the paper, if $K$ is a CRG under consideration and $p$ is fixed, ${\bf x}(v)$ will denote the weight of $v\in V(K)$ under the optimal solution of the quadratic program in equation \eqref{eq:gdef} that defines $g_K$. \end{rem} The notion of a component is natural in a CRG: \begin{defn} A sub-CRG, $K'$, of a CRG, $K$, is a \textbf{component} if it is maximal with respect to the property that, for all $v,w\in V(K')$, there exists a path, consisting of white and black edges, entirely within $K'$. \end{defn} The components of a CRG are equivalence classes of the vertex set and are, therefore, disjoint. From~\cite{Martin}, it is useful to note that the $g$ function of a CRG can be computed from the $g$ functions of its components. This results from the fact that the matrix ${\bf M}_K(p)$ in \eqref{eq:gdef} is block-diagonal if the CRG has more than one component. \begin{thm}\label{thm:components} Let $K$ be a CRG with components $K^{(1)},\ldots,K^{(\ell)}$. Then $$ \left(g_K(p)\right)^{-1}=\sum_{i=1}^{\ell}\left(g_{K^{(i)}}(p)\right)^{-1} . $$ \end{thm} The simplest CRGs are those whose edges are gray. Let $K(w,b)$ denote the CRG with $w$ white vertices, $b$ black vertices and all edges gray. A direct corollary of Theorem~\ref{thm:components} is as follows: \begin{cor}\label{cor:components} Let $w$ and $b$ be nonnegative integers not both zero. $$ g_{K(w,b)}(p)=\left(\frac{w}{p}+\frac{b}{1-p}\right)^{-1} . $$ \end{cor} \section{${\rm Forb}(H)$, $H$ a split graph} \label{sec:split} We need to define a special class of graphs. For $\omega\geq 2$ and a nonnegative integer vector $(\omega;a_0,a_1,\ldots,a_{\omega})$, a \textbf{$(\omega;a_0,a_1,\ldots,a_{\omega})$-clique-star}\footnote{We get the notation from Hung, Sys{\l}o, Weaver and West~\cite{HSWW}. 
Barrett, Jepsen, Lang, McHenry, Nelson and Owens~\cite{BJLMNO} define a clique-star, but it is a different type of graph.} is a graph $G$ such that $V(G)$ is partitioned into $A$ and $W$. The set $A$ induces an independent set, the set $W=\{w_1,\ldots,w_{\omega}\}$ induces a clique, for $i=1,\ldots,\omega$, vertex $w_i$ is adjacent to a distinct set of $a_i+1$ leaves in $A$, and there are $a_0$ isolated vertices. Note that this implies that $\sum_{i=0}^{\omega}a_i=\alpha-\omega$. Colloquially, a clique-star can be partitioned into stars and an independent set such that the centers of the stars are connected by a clique and there are no other edges. (If one of the stars is $K_2$, one of the endvertices is designated to be the center.) Proving Theorem~\ref{thm:split} is much more difficult in the case where either $H$ or its complement is a clique-star. \subsection{Proof of Theorem~\ref{thm:split}} Recall that $H$ is a split graph with independence number $\alpha$ and clique number $\omega$. We will let $h=|V(H)|$. Since we assume that $H$ is neither complete nor empty, $\alpha,\omega\geq 2$. Because ${\textit{ed}}_{{\rm Forb}(H)}(p)={\textit{ed}}_{{\rm Forb}(\overline{H})}(1-p)$ and $\alpha(H)=\omega(\overline{H})$, proving Theorem~\ref{thm:split} for $H$ also proves the theorem for $\overline{H}$. Thus, we may assume that $\omega\leq\alpha$. The following fact is well-known: \begin{fact}\label{fact:chromsplit} If $H$ is a split graph, then it is a perfect graph. In particular, its chromatic number is its clique number. In notation, $\chi(H)=\omega(H)$. Consequently, $\chi(\overline{H})=\alpha(H)$.
\end{fact} An immediate consequence of Fact~\ref{fact:chromsplit} is that $H$ cannot be embedded into either $K(\omega-1,0)$ or $K(0,\alpha-1)$ and so, by Corollary~\ref{cor:components}, \begin{equation}\label{eq:splitUB} {\textit{ed}}_{{\rm Forb}(H)}(p)\leq\min\left\{g_{K(\omega-1,0)}(p),g_{K(0,\alpha-1)}(p)\right\} = \min\left\{\frac{p}{\omega-1},\frac{1-p}{\alpha-1}\right\} . \end{equation} Let $K\in\mathcal{K}({\rm Forb}(H))$ be a $p$-core CRG and denote $g=g_K(p)$. By Lemma~\ref{lem:cores}, any edge between vertices of different colors must be gray. Since $H$ is a split graph, $H$ would embed into any $K$ with a pair of differently-colored vertices. So, the vertices in $K$ must be monochromatic. Furthermore, if $K$ has only gray edges, then either $K$ has at most $\omega-1$ white vertices or at most $\alpha-1$ black vertices. In particular, if $p=1/2$, then all edges must be gray and so ${\textit{ed}}_{{\rm Forb}(H)}(1/2)=\min\left\{\frac{1/2}{\omega-1},\frac{1/2}{\alpha-1}\right\}$. Because we have assumed that $\omega\leq\alpha$, the inequality \eqref{eq:splitUB} gives ${\textit{ed}}_{{\rm Forb}(H)}(p)\leq\frac{1-p}{\alpha-1}$. Since Theorem~\ref{thm:basic}(\ref{it:concon}) gives that ${\textit{ed}}_{{\rm Forb}(H)}(p)$ is concave down, it is the case that $$ {\textit{ed}}_{{\rm Forb}(H)}(p)=\frac{1-p}{\alpha-1},\qquad \mbox{for}\quad p\in[1/2,1] . $$ If $p<1/2$ and $K$ has white vertices, then Lemma~\ref{lem:cores}(\ref{it:smpcore}) gives that all edges must be gray. In that case, $K=K(w,0)$ for some $w\leq\omega-1$, and so $g_K(p)\geq\frac{p}{\omega-1}$. So, we may assume that $p<1/2$ and $K$ has only black vertices and only white or gray edges. Let ${\bf x}$ be the weight function that is the optimal solution to~\eqref{eq:gdef}. We make a general observation that holds in both cases: \begin{fact}\label{fact:graydeg} Let $v\in V(K)$. Then, $v$ has fewer than $h-\omega$ gray neighbors.
\end{fact} \begin{proof} Suppose that $v$ has $h-\omega$ gray neighbors; that is, suppose there are vertices $w_1,\ldots,w_{h-\omega}$ such that $vw_i$ is gray for $i=1,\ldots,h-\omega$. Since $H$ is a split graph, there is a partition of $V(H)$, $W\udot A$, where $W$ is a maximum-sized clique and $A$ is an independent set (of size $h-\omega$). Consider the map $\varphi$, which sends all of the vertices of $W$ to vertex $v$ and each vertex in $A$ to a different member of $\{w_1,\ldots,w_{h-\omega}\}$. It does not matter whether an edge in the sub-CRG induced by $\{w_1,\ldots,w_{h-\omega}\}$ is white or gray, since there are no edges in $A$. Thus, $\varphi$ shows that $H\mapsto K$, a contradiction. \end{proof} Because a clique and an independent set can intersect in at most one vertex, $h\leq\alpha+\omega\leq h+1$. This yields two cases.~\\ \noindent\textbf{Case 1.} $\alpha+\omega=h+1$.~\\ Let $v\in V(K)$ be a vertex of largest weight $x={\bf x}(v)$. By Fact~\ref{fact:graydeg}, $v$ has at most $h-\omega-1=\alpha-2$ gray neighbors. Because $x$ is the largest weight, Lemma~\ref{lem:symm}(\ref{it:symmsmp}) gives that \begin{align*} {\rm d}_{\rm G}(v) &\leq (\alpha-2)x \\ \frac{p-g}{p}+\frac{1-2p}{p}x &\leq (\alpha-2)x \\ p-g &\leq (p\alpha-1)x . \end{align*} If $p<1/\alpha$, then $g>p\geq p/(\omega-1)$. If $p\geq 1/\alpha$, then Corollary~\ref{cor:xbound}(\ref{it:xbound0}) gives that \begin{align*} p-g &\leq (p\alpha-1)\frac{g}{1-p} \\ p(1-p) &\leq gp(\alpha-1) \\ \frac{1-p}{\alpha-1} &\leq g . \end{align*} This concludes Case 1.~\\ \noindent\textbf{Case 2.} $\alpha+\omega=h$.~\\ Let $p\in\left(0,\frac{\omega-1}{h-1}\right]$. Again, let $v\in V(K)$ be a vertex of largest weight $x={\bf x}(v)$. Fact~\ref{fact:graydeg} gives that $v$ has at most $h-\omega-1$ gray neighbors and Lemma~\ref{lem:symm}(\ref{it:symmsmp}) gives a formula for ${\rm d}_{\rm G}(v)$.
Thus, \begin{align*} {\rm d}_{\rm G}(v) &\leq (h-\omega-1)x \\ \frac{p-g}{p}+\frac{1-2p}{p}x &\leq (\alpha-1)x \\ p-g &\leq \left(p(\alpha+1)-1\right)x . \end{align*} If $p<1/(\alpha+1)$, then $g>p\geq p/(\omega-1)$. If $p\geq 1/(\alpha+1)$, then Corollary~\ref{cor:xbound}(\ref{it:xbound0}) gives that $$ p-g \leq \left(p(\alpha+1)-1\right)\frac{g}{1-p} . $$ Then, $$ g \geq \frac{1-p}{\alpha} \geq \frac{1-\frac{\omega-1}{h-1}}{\alpha} = \frac{1}{h-1} = \frac{\frac{\omega-1}{h-1}}{\omega-1} \geq \frac{p}{\omega-1} . $$ Finally, we may assume that $p\in\left(\frac{\omega-1}{h-1},\frac{1}{2}\right)$. We have to split into two cases according to the structure of $H$.~\\ \noindent\textbf{Case 2a.} $\alpha+\omega=h$ and there exists a $c\leq\omega-1$ such that $H$ can be partitioned into $c$ cliques and an independent set of $\alpha-c$ vertices.~\\ Suppose we could find, in $K$, $\alpha$ vertices configured as follows: a gray clique of size $\omega-1$ (call it $v_1,\ldots,v_{\omega-1}$) and $\alpha-\omega+1$ additional vertices that are gray neighbors of each of $v_1,\ldots,v_{\omega-1}$. One can view this as $\alpha-\omega+1$ cliques of size $\omega$ that share $\omega-1$ common vertices. In that case, we can show that $H\mapsto K$ via a $\varphi$ that first maps each of the $c$ cliques as well as $\omega-1-c$ members of the independent set to a different $v_i$. Second, it maps the remaining $\alpha-\omega+1$ vertices of $H$ to the other vertices. Thus, such a configuration of $\alpha$ vertices cannot exist in $K$. Suppose that $g<\min\left\{\frac{p}{\omega-1},\frac{1-p}{\alpha-1}\right\}$. First, we show $K$ must have a gray $(\omega-1)$-clique. Let $v_1,\ldots,v_{\ell}$ be a maximal gray clique. That is, any edge between these vertices is gray and every vertex not in $\{v_1,\ldots,v_{\ell}\}$ has at least one white neighbor in $\{v_1,\ldots,v_{\ell}\}$. Let $x_i={\bf x}(v_i)$ for $i=1,\ldots,\ell$ and let $X=\sum_{i=1}^{\ell}x_i$. 
By the maximality of the gray clique, each vertex in $V(K)-\{v_1,\ldots,v_{\ell}\}$ is a gray neighbor of at most $\ell-1$ members of $\{v_1,\ldots,v_{\ell}\}$. Summing the weights of the gray neighbors of each of $v_1,\ldots,v_{\ell}$ that lie outside of the set $\{v_1,\ldots,v_{\ell}\}$, and using Lemma~\ref{lem:symm}(\ref{it:symmsmp}), we obtain the following inequality: \begin{align*} \sum_{i=1}^{\ell}\left[{\rm d}_{\rm G}(v_i)-X+x_i\right] &\leq (\ell-1)(1-X) \\ \ell\frac{p-g}{p}+\frac{1-p}{p}X-\ell X &\leq (\ell-1)(1-X) \\ p-\ell g &\leq (2p-1) X . \end{align*} Hence, either $\ell\leq\omega-2$, in which case $g>p/\ell>p/(\omega-1)$, or $K$ has a gray $(\omega-1)$-clique. We may thus suppose that $K$ has a gray $(\omega-1)$-clique. Let one with maximum total weight be $\{v_1,\ldots,v_{\omega-1}\}$ with $x_i={\bf x}(v_i)$ for $i=1,\ldots,\omega-1$ and $X=\sum_{i=1}^{\omega-1}x_i$. The clique $\{v_1,\ldots,v_{\omega-1}\}$ has at most $\alpha-\omega$ common gray neighbors, otherwise the decomposition of $H$ into $c$ cliques and an independent set of $\alpha-c$ vertices would give $H\mapsto K$. Let $Y$ be the sum of the weights of the common gray neighbors of $v_1,\ldots,v_{\omega-1}$. Since $X$ is the largest weight of any gray $(\omega-1)$-clique, the value of $Y$ is at most $\alpha-\omega$ times the average weight of the $\omega-1$ vertices that define $X$. Hence, $$ Y\leq (\alpha-\omega)\frac{X}{\omega-1} . $$ Therefore, if we sum the weights of the gray neighbors of each $v_i$ that are not part of $\{v_1,\ldots,v_{\omega-1}\}$, the common neighbors will be summed $\omega-1$ times and all other vertices will be counted at most $\omega-2$ times. In the inequality below, the left-hand side is this sum of weights and the right-hand side bounds it: $$ \sum_{i=1}^{\omega-1}\left[{\rm d}_{\rm G}(v_i)-X+x_i\right]\leq (\omega-1)Y+(\omega-2)(1-X-Y) .
$$ Using Lemma~\ref{lem:symm}(\ref{it:symmsmp}), we have an exact formula for ${\rm d}_{\rm G}(v_i)$ that depends only on $x_i={\bf x}(v_i)$. Also, we use the fact that $\sum_{i=1}^{\omega-1}x_i=X$ to simplify to the following: \begin{align} (\omega-1)\left(\frac{p-g}{p}-X\right)+\frac{1-p}{p}X &\leq Y+(\omega-2)(1-X) \nonumber \\ (1-X)-(\omega-1)\frac{g}{p}+\frac{1-p}{p}X &\leq (\alpha-\omega)\frac{X}{\omega-1} \nonumber \\ 1+X\left(\frac{1}{p}-\frac{h-2}{\omega-1}\right) &\leq \frac{\omega-1}{p}g , \label{eq:case2a} \end{align} because $h=\alpha+\omega$. If $p<(\omega-1)/(h-2)$, then the term in parentheses in \eqref{eq:case2a} is positive and $g>p/(\omega-1)$, which would complete the proof. If $p\geq (\omega-1)/(h-2)$, then the term in parentheses in \eqref{eq:case2a} is nonpositive. We can use Corollary~\ref{cor:xbound}(\ref{it:xbound0}) to bound each $x_i\leq g/(1-p)$, hence $X\leq (\omega-1)\frac{g}{1-p}$. Substituting this value for $X$ into \eqref{eq:case2a}, we conclude \begin{align*} 1+\frac{(\omega-1)g}{1-p}\left(\frac{1}{p}-\frac{h-2}{\omega-1}\right) &\leq \frac{\omega-1}{p}g \\ 1 &\leq g\left(\frac{\omega-1}{p}-\frac{\omega-1}{p(1-p)}+\frac{h-2}{1-p}\right) \\ 1 &\leq g\left(\frac{h-\omega-1}{1-p}\right) . \end{align*} So, $g\geq (1-p)/(h-\omega-1)$. Since $\alpha=h-\omega$ in this case, $g\geq (1-p)/(\alpha-1)$. This concludes Case 2a.~\\ Which graphs are in Case 2, but not Case 2a? Since $\alpha+\omega=h$, we may write $V(H)=A\udot W$, where $A$ is an independent set of size $\alpha$ and $W$ is a clique of size $\omega$. Every $w\in W$ has at least one neighbor in $A$. If any $a\in A$ has more than one neighbor in $W$, then we can greedily find at most $\omega-1$ vertices in $A$ such that the union of their neighborhoods is $W$. Such a graph would be in Case 2a. So, the graphs, $H$ with $\omega\leq\alpha$ that are in neither Case 1 nor Case 2a have the property that $N(w)\cap N(w')\cap A=\emptyset$ for all distinct $w,w'\in W$. 
This is exactly the case of a clique-star.~\\ \noindent\textbf{Case 2b.} $\alpha+\omega=h$ and $H$ is a clique-star.~\\ In the graph $H$, let $W=\{w_1,\ldots,w_{\omega}\}$ be such that $w_i$ has $a_i+1$ neighbors in $A$ for $i=1,\ldots,\omega$ and there are $a_0$ isolated vertices. Note that $\alpha=a_0+\sum_{i=1}^{\omega}(a_i+1)$. \begin{fact} If $\omega\geq 2$ and $H$ is a $(\omega;a_0,\ldots,a_{\omega})$-clique-star and $K$ is a black-vertex CRG (that is, a CRG for which all vertices are black) with no black edges such that there exist vertices $v_1,\ldots,v_{\omega}$ for which \begin{itemize} \item $\{v_1,\ldots,v_{\omega}\}$ is a gray clique, \item for $i=1,\ldots,\omega-1$, $v_i$ has $\alpha-1$ gray neighbors, and \item $v_{\omega}$ has at least $\lfloor (\alpha-\omega)/\omega\rfloor+\omega-1$ gray neighbors (including $v_1,\ldots,v_{\omega-1}$), \end{itemize} then $H\mapsto K$. \label{fact:embed} \end{fact} \begin{proof}[Proof of Fact~\ref{fact:embed}] By Fact~\ref{fact:graydeg}, we may assume the maximum gray degree of $K$ is at most $\alpha-1$. Without loss of generality, let $a_1\geq\cdots\geq a_{\omega}$. Our mapping is done recursively: Map $w_{\omega}$ and one of its neighbors to $v_{\omega}$. Map its remaining $A$-neighbors ($a_{\omega}\leq\lfloor (\alpha-\omega)/\omega\rfloor$ of them) to each of $a_{\omega}$ gray neighbors of $v_{\omega}$ that are not in $\{v_1,\ldots,v_{\omega-1}\}$. Having embedded $w_{\omega},\ldots,w_{i+1}$ and each of their respective $A$-neighbors into a total of at most $\sum_{j=i+1}^{\omega}(a_j+1)$ vertices of $K$, we map $w_i$ and one of its $A$-neighbors into $v_i$ and its remaining $a_i$ $A$-neighbors into arbitrary unused gray neighbors of $v_i$. After $w_1$ and its neighbors are mapped, we map the remaining $a_0$ isolated vertices arbitrarily into the vertices of $K$ that were not already used.
This mapping can be accomplished because each of $v_1,\ldots,v_{\omega-1}$ has at least $\alpha-1$ gray neighbors, which ensures that, even at the last step, when $w_1$ and a neighbor are embedded, there are at least $\alpha-1$ gray neighbors of $v_1$. The gray neighbors of $v_1$ that were already used are the $\omega-1$ vertices $v_2,\ldots,v_{\omega}$ and at most $\sum_{j=2}^{\omega}a_j=\alpha-\omega-a_1-a_0$ others, for a total of $\alpha-1-a_1-a_0$. So, there are enough gray neighbors of $v_1$ to embed the $a_1$ neighbors of $w_1$ as well as the $a_0$ isolated vertices. Thus, $H\mapsto K$. \end{proof} \begin{fact} Let $p\in(0,1/2)$ and let $K$ be a black-vertex CRG with no black edges. If $g_K(p)<\min\left\{p/(\omega-1), (1-p)/(\alpha-1)\right\}$, then there exist vertices $v_1,\ldots,v_{\omega}$ for which \begin{itemize} \item $\{v_1,\ldots,v_{\omega}\}$ is a gray clique, \item for $i=1,\ldots,\omega-1$, $v_i$ has $\alpha-1$ gray neighbors, and \item $v_{\omega}$ has at least $\lfloor (\alpha-\omega)/\omega\rfloor+\omega-1$ gray neighbors (including $v_1,\ldots,v_{\omega-1}$). \end{itemize} \label{fact:bigdeg} \end{fact} \begin{proof}[Proof of Fact~\ref{fact:bigdeg}] By Fact~\ref{fact:graydeg}, we may assume the maximum gray degree of $K$ is at most $\alpha-1$. We find $v_1,\ldots,v_{\omega}$ greedily, writing $x_i={\bf x}(v_i)$ throughout. Choose $v_1$ to be a vertex of largest weight. Having chosen $v_1,\ldots,v_i$, stop if $i=\omega$ or if $N_G(v_1)\cap\cdots\cap N_G(v_i)$ is empty; otherwise, let $v_{i+1}$ be a vertex of largest weight in $N_G(v_1)\cap\cdots\cap N_G(v_i)$. We will show later that this process creates at least $\omega$ vertices. First, we bound the number of gray neighbors of $v_1$, using Lemma~\ref{lem:symm}(\ref{it:symmsmp}) and the fact that $x_1$ is the largest weight. $$ |N_G(v_1)|\geq\left\lceil\frac{{\rm d}_{\rm G}(v_1)}{x_1}\right\rceil \geq \frac{p-g}{px_1}+\frac{1-2p}{p} . $$ Using Corollary~\ref{cor:xbound}(\ref{it:xbound0}), we have that $x_1\leq g/(1-p)$ and so $$ |N_G(v_1)|\geq\frac{1-p-g}{g}>\alpha-2 .
$$ Since $|N_G(v_1)|$ is an integer, this gives $|N_G(v_1)|\geq\alpha-1$; since the maximum gray degree is at most $\alpha-1$, we conclude that $|N_G(v_1)|=\alpha-1$. For $i\in\{2,\ldots,\omega-1\}$, we let $X_i=\sum_{j=1}^i x_j$ and consider the common gray neighborhood of $\{v_1,\ldots,v_i\}$. For a set $U\subseteq V(K)$, we use ${\bf x}(U)$ to denote $\sum_{u\in U}{\bf x}(u)$. Now we compute the weight of the common gray neighborhood of $\{v_1,\ldots,v_i\}$: \begin{align*} {\bf x}\left(N_G(v_1)\cap\cdots\cap N_G(v_i)\right) &\geq {\rm d}_{\rm G}(v_i)-(X_i-x_i)-\sum_{j=1}^{i-1}{\bf x}\left(N_W(v_j)\right) \\ &= {\rm d}_{\rm G}(v_i)-(X_i-x_i)-\sum_{j=1}^{i-1}\left(1-x_j-{\rm d}_{\rm G}(v_j)\right) , \end{align*} because $K$ being $p$-core for $p\leq 1/2$ means that each black vertex has only white or gray neighbors. Simplifying, then using Lemma~\ref{lem:symm}(\ref{it:symmsmp}), \begin{align} {\bf x}\left(N_G(v_1)\cap\cdots\cap N_G(v_i)\right) &\geq \sum_{j=1}^i{\rm d}_{\rm G}(v_j)-(i-1) \nonumber \\ &\geq \sum_{j=1}^i\left(\frac{p-g}{p}+\frac{1-2p}{p}x_j\right)-(i-1) \nonumber \\ &= \frac{p-ig}{p}+\frac{1-2p}{p}X_i > 0 \label{eq:graynbhd} . \end{align} The last inequality occurs because $i\leq\omega-1$, $g<p/(\omega-1)$, $p<1/2$ and $X_i>x_i>0$. Thus, $v_{i+1}$ must exist. We use these calculations to obtain the number of vertices in $N_G(v_i)$ for $i=2,\ldots,\omega-1$. First note that $v_i$ has $i-1$ gray neighbors among $\{v_1,\ldots,v_{i-1}\}$ and that every vertex that is a gray neighbor of each of $v_1,\ldots,v_i$ has weight at most $x_i$. For a fixed $i$, partition the set $N_G(v_i)-\left(\{v_1,\ldots,v_{i-1}\}\cup\bigcap_{j=1}^{i-1}N_G(v_j)\right)$ into $T_1\udot\cdots\udot T_{i-1}$, where $T_j=N_G(v_i)\cap\bigcap_{j'=1}^{j-1}N_G(v_{j'})\cap N_W(v_j)$. Here, $T_j$ is the set of vertices that are gray neighbors of $v_i$ and gray neighbors of $v_1,\ldots,v_{j-1}$ but are white neighbors of $v_j$.
So, by definition, \begin{align*} N_G(v_i) &= \{v_1,\ldots,v_{i-1}\}\udot \left(N_G(v_1)\cap\cdots\cap N_G(v_i)\right)\udot \bigcup_{j=1}^{i-1} T_j \\ |N_G(v_i)| &= (i-1)+\left|N_G(v_1)\cap\cdots\cap N_G(v_i)\right|+\sum_{j=1}^{i-1} |T_j| . \end{align*} The largest weight of a vertex in $N_G(v_1)\cap\cdots\cap N_G(v_i)$ is at most $x_i={\bf x}(v_i)$, otherwise such a vertex would be chosen in place of $v_i$. Similarly, the largest weight of a vertex in $T_j$ is at most $x_j$. As a result, \begin{align} |N_G(v_i)| &\geq (i-1) +\left\lceil\frac{{\bf x}\left(N_G(v_1)\cap\cdots\cap N_G(v_{i})\right)}{x_i}\right\rceil +\sum_{j=1}^{i-1}\left\lceil\frac{{\bf x}(T_j)}{x_j}\right\rceil \nonumber \\ &\geq (i-1) +\frac{1}{x_i}{\bf x}\left(N_G(v_1)\cap\cdots\cap N_G(v_{i})\right) +\sum_{j=1}^{i-1}\frac{1}{x_j}{\bf x}(T_j) \label{eq:Tsum} . \end{align} We can rewrite this inequality as follows: Assign coefficient $\frac{1}{x_i}$ to every vertex in $N_G(v_i)-\{v_1,\ldots,v_{i-1}\}$. Then for $j=1,\ldots,i-1$, add $\frac{1}{x_j}-\frac{1}{x_i}$ to the coefficient of every vertex in $N_W(v_j)$. As a result, every vertex in $N_G(v_1)\cap\cdots\cap N_G(v_{i})$ gets coefficient $\frac{1}{x_i}$ and, for $j=1,\ldots,i-1$, every vertex in $T_j$ gets coefficient at most $\frac{1}{x_j}$. Every other vertex gets a nonpositive coefficient because $\frac{1}{x_j}\leq\frac{1}{x_i}$. 
With $X_i=x_1+\cdots+x_i$, we have a lower bound for the expression in \eqref{eq:Tsum}: \begin{align*} |N_G(v_i)| &\geq (i-1) +\frac{1}{x_i}\left({\bf x}(N_G(v_i))-(X_i-x_i)\right) +\sum_{j=1}^{i-1}\left(\frac{1}{x_j}-\frac{1}{x_i}\right){\bf x}(N_W(v_j)) \\ &= (i-1) +\frac{1}{x_i}\left(\frac{p-g}{p}+\frac{1-2p}{p}x_i-X_{i-1}\right) \\ & \;\;\;\; +\sum_{j=1}^{i-1}\left(\frac{1}{x_j}-\frac{1}{x_i}\right)\left(\frac{g}{p}-\frac{1-p}{p}x_j\right) , \end{align*} by using the fact that ${\bf x}(N_W(v_j))=1-x_j-{\rm d}_{\rm G}(v_j)$ and by using ${\rm d}_{\rm G}(v_j)=\frac{p-g}{p}+\frac{1-2p}{p}x_j$ from Lemma~\ref{lem:symm}(\ref{it:symmsmp}). Now we expand the expression: \begin{align} |N_G(v_i)| &\geq (i-1) +\frac{1}{x_i}\left(\frac{p-g}{p}-X_{i-1}\right) +\frac{1-2p}{p} \nonumber \\ & \;\;\;\; +\frac{g}{p}\sum_{j=1}^{i-1}\frac{1}{x_j} -\frac{(i-1)g}{px_i} -\frac{1-p}{p}(i-1) +\frac{1-p}{px_i}X_{i-1} \nonumber \\ &= \frac{g}{p}\sum_{j=1}^{i-1}\frac{1}{x_j} +\frac{2-i+2(i-2)p}{p} +\frac{1}{x_i}\left(\frac{p-ig}{p}+\frac{1-2p}{p}X_{i-1}\right) . \label{eq:degcount} \end{align} If $i=1$, then \eqref{eq:degcount} simplifies to $\frac{p-g}{px_1}+\frac{1-2p}{p}$. Now suppose $i\in\{2,\ldots,\omega-1\}$. Using Jensen's inequality, we obtain $$ \sum_{j=1}^{i-1}\frac{1}{x_j} \geq \frac{i-1}{X_{i-1}/(i-1)} =\frac{(i-1)^2}{X_{i-1}} . $$ For $i\in\{2,\ldots,\omega-1\}$, we return to \eqref{eq:degcount} and use the bound above, along with the fact that $x_i\leq X_{i-1}/(i-1)$ to obtain the following: \begin{align*} |N_G(v_i)| &\geq \frac{g}{p}\left(\frac{(i-1)^2}{X_{i-1}}\right) +\frac{2-i+2(i-2)p}{p} +\frac{i-1}{X_{i-1}}\left(\frac{p-ig}{p}+\frac{1-2p}{p}X_{i-1}\right) \\ &=\frac{i-1}{X_{i-1}}\left(\frac{p-g}{p}\right)+\frac{1-2p}{p} . 
\end{align*} Since $g<p/(\omega-1)\leq p$, we can use the bound $X_{i-1}/(i-1)\leq x_1$ and obtain that for $i\in\{1,\ldots,\omega-1\}$, \begin{align*} |N_G(v_i)| &\geq \frac{1}{x_1}\left(\frac{p-g}{p}\right)+\frac{1-2p}{p} \\ &\geq \frac{1-p}{g}\left(\frac{p-g}{p}\right)+\frac{1-2p}{p}=\frac{1-p}{g}-1 , \end{align*} because Corollary~\ref{cor:xbound}(\ref{it:xbound0}) gives $x_1\leq g/(1-p)$. Since $g<(1-p)/(\alpha-1)$, we have $|N_G(v_i)|>\alpha-2$. Since $|N_G(v_i)|<\alpha$, we have $|N_G(v_i)|=\alpha-1$ for $i=1,\ldots,\omega-1$. Finally, we try to determine the number of vertices adjacent to $v_{\omega}$ via a gray edge. We only need $|N_G(v_{\omega})|\geq\lfloor\alpha/\omega\rfloor+\omega-2$ in order to finish the proof. First, note that the very existence of $v_{\omega}$ ensures that $|N_G(v_{\omega})|\geq\omega-1$. Thus, we may assume that $\alpha\geq 2\omega$. Second, suppose that $\omega\geq 3$. Recalling that every vertex has weight at most $\frac{g}{1-p}$ from Corollary~\ref{cor:xbound}(\ref{it:xbound0}), we have the simple inequality, $$ \frac{g}{1-p}|N_G(v)| \geq {\rm d}_{\rm G}(v) . $$ Therefore, using ${\rm d}_{\rm G}(v)=\frac{p-g}{p}+\frac{1-2p}{p}{\bf x}(v)$ from Lemma~\ref{lem:symm}(\ref{it:symmsmp}), we have, for any vertex $v$, \begin{align*} |N_G(v)| &\geq \frac{p-g}{p}\cdot\frac{1-p}{g} \\ &> \left\{\begin{array}{ll} \frac{p-\frac{p}{\omega-1}}{p}\cdot\frac{1-p}{p/(\omega-1)}, & \mbox{if $p\leq\frac{\omega-1}{h-2}$;} \\ \frac{p-\frac{1-p}{\alpha-1}}{p}\cdot\frac{1-p}{(1-p)/(\alpha-1)}, & \mbox{if $p\geq\frac{\omega-1}{h-2}$.}\end{array}\right. \\ &\geq \left\{\begin{array}{ll} (\omega-2)\frac{1-p}{p}, & \mbox{if $p\leq\frac{\omega-1}{h-2}$;} \\ \frac{p\alpha-1}{p}, & \mbox{if $p\geq\frac{\omega-1}{h-2}$.}\end{array}\right. \\ &\geq (\alpha-1)\frac{\omega-2}{\omega-1} . \end{align*} Since $|N_G(v)|$ is an integer, it is the case that $|N_G(v)|\geq \left\lfloor (\alpha-1)\frac{\omega-2}{\omega-1}\right\rfloor+1$. 
Recall that $\alpha\geq 2\omega$ and $\omega\geq 3$. Thus, \begin{align*} |N_G(v)| &\geq \left\lfloor (\alpha-1)\frac{\omega-2}{\omega-1}\right\rfloor+1 \\ &= \left\lfloor \frac{\alpha}{\omega} +\alpha\left(\frac{\omega-2}{\omega-1} -\frac{1}{\omega}\right) -\frac{\omega-2}{\omega-1}\right\rfloor+1 \\ &\geq \left\lfloor \frac{\alpha}{\omega} +\frac{2\omega(\omega-2)}{\omega-1} -2 -\frac{\omega-2}{\omega-1}\right\rfloor+1 \\ &\geq \left\lfloor \frac{\alpha}{\omega} \right\rfloor -1 +\left\lfloor \frac{(2\omega-1)(\omega-2)}{\omega-1} \right\rfloor \\ &= \left\lfloor \frac{\alpha}{\omega} \right\rfloor +\omega-2 +\left\lfloor \frac{\omega^2-3\omega+1}{\omega-1} \right\rfloor \\ &\geq \left\lfloor \frac{\alpha}{\omega} \right\rfloor +\omega-2 , \end{align*} as desired. Third, since $\omega\geq 2$, the only remaining case is $\omega=2$; i.e., $H$ is a double-star (possibly with isolated vertices). Recall that $\alpha\geq 2\omega=4$. Our goal is to show that $|N_G(v_2)|\geq\lfloor\alpha/\omega\rfloor+\omega-2=\lfloor\alpha/2\rfloor$. The computations are, by now, routine. We use $x_1\leq g/(1-p)$ and the fact that $v_2$ is the largest-weight vertex in $N_G(v_1)$ and so, $x_2\geq{\rm d}_{\rm G}(v_1)/(\alpha-1)$. \begin{align*} |N_G(v_2)| &\geq \frac{{\rm d}_{\rm G}(v_2)}{x_1} \\ &\geq \frac{1}{x_1}\left(\frac{p-g}{p}+\frac{1-2p}{p}x_2\right) \\ &\geq \frac{1}{x_1}\left(\frac{p-g}{p}+\frac{1-2p}{p}\cdot\frac{{\rm d}_{\rm G}(v_1)}{\alpha-1}\right) \\ &\geq \frac{p-g}{px_1}\left(1+\frac{1-2p}{p(\alpha-1)}\right)+\left(\frac{1-2p}{p}\right)^2\frac{1}{\alpha-1} \\ &\geq \frac{(p-g)(1-p)}{pg}\left(\frac{p(\alpha-3)+1}{p(\alpha-1)}\right)+\left(\frac{1-2p}{p}\right)^2\frac{1}{\alpha-1} . 
\end{align*} Recalling that, in the case of $\omega=2$, $g<\min\left\{p,(1-p)/(\alpha-1)\right\}$, \begin{align*} |N_G(v_2)| &> \left\{\begin{array}{ll} \left(\frac{1-2p}{p}\right)^2\frac{1}{\alpha-1}, & \mbox{if $p\leq 1/\alpha$;} \\ \frac{p\alpha-1}{p}\left(\frac{p(\alpha-3)+1}{p(\alpha-1)}\right)+\left(\frac{1-2p}{p}\right)^2\frac{1}{\alpha-1}, & \mbox{if $p\geq 1/\alpha$.} \end{array}\right. \\ &= \left\{\begin{array}{ll} \left(\frac{1-2p}{p}\right)^2\frac{1}{\alpha-1}, & \mbox{if $p\leq 1/\alpha$;} \\ \alpha-2-\frac{(1-2p)}{p(\alpha-1)}, & \mbox{if $p\geq 1/\alpha$.} \end{array}\right. \end{align*} In each case, the smallest value of the expression occurs when $p=1/\alpha$, giving $|N_G(v_2)|>\frac{(\alpha-2)^2}{\alpha-1}$ and so, $$ |N_G(v_2)|\geq\left\lfloor\frac{(\alpha-2)^2}{\alpha-1}\right\rfloor+1\geq\alpha-2 =\left\lfloor\frac{\alpha}{2}\right\rfloor+\left\lceil\frac{\alpha}{2}\right\rceil-2 . $$ This is at least $\lfloor \alpha/2\rfloor$ since $\alpha\geq 4$. This concludes the proof of Fact~\ref{fact:bigdeg}. \end{proof} Summarizing, if $H\not\mapsto K$, then $g\geq p/(\omega-1)$ or $g\geq (1-p)/(\alpha-1)$. This concludes the proof of Theorem~\ref{thm:split}. \subsection{Examples of split graphs} Items (\ref{it:split:kaeb}) and (\ref{it:split:star}) in Corollary~\ref{cor:splitexamp} were proven in~\cite{BM}. \begin{cor}\label{cor:splitexamp} Let $H$ be a graph on $h$ vertices. \begin{enumerate} \item If $H=K_a+E_b$, then ${\textit{ed}}_{{\rm Forb}(H)}(p)=\min\left\{\frac{p}{a-1},\frac{1-p}{b}\right\}$. \label{it:split:kaeb} \item If $H$ is a star (i.e., $H=E_{h-1}\vee K_1$), then ${\textit{ed}}_{{\rm Forb}(H)}(p)=\min\left\{p,\frac{1-p}{h-2}\right\}$. \label{it:split:star} \item If $H$ is a double-star (i.e., there are adjacent vertices $u$ and $v$ such that every other vertex is adjacent to exactly one of $u$ and $v$), then ${\textit{ed}}_{{\rm Forb}(H)}(p)=\min\left\{p,\frac{1-p}{h-3}\right\}$.
\label{it:split:double-star} \end{enumerate} \end{cor} \section{${\rm Forb}(H_9)$} \label{sec:h9} Marchant and Thomason~\cite{MT} give the example of $\mathcal{H}={\rm Forb}(C_6^*)$, where $C_6^*$ is a $6$-cycle with an additional diagonal edge, such that ${\textit{ed}}_{\mathcal{H}}(p)$ is not determined by CRGs with all gray edges. More precisely, they prove that $$ {\textit{ed}}_{{\rm Forb}(C_6^*)}(p)=\min\left\{\frac{p}{1+2p},\frac{1-p}{2}\right\} . $$ The CRG which corresponds to $g_K(p)=(1-p)/2$ is $K(0,2)$, the CRG with all edges gray, zero white vertices and two black vertices. The CRG, $K$, which has $g_K(p)=p/(1+2p)$ for $p\in [0,1/2]$, consists of three vertices: two black vertices connected via a white edge and a white vertex. The remaining two edges are gray. The graph $H_9$, shown in Figure~\ref{fig:h9} and cited in~\cite{BM}, generates a hereditary property $\mathcal{H}={\rm Forb}(H_9)$ such that $d_{\mathcal{H}}^*$ cannot be determined by CRGs of the form $K(a,c)$. Note that $d_{{\rm Forb}(C_6^*)}^*$ can be determined by such CRGs, but the part of the function for $p\in (0,1/2)$ cannot. \subsection{Proof of Theorem~\ref{thm:h9}} \noindent\textbf{Upper bound.} We know that $\chi(H_9)=4$, so let $K^{(1)}=K(3,0)$, for which $g_{K^{(1)}}(p)=p/3$. We also know that $\chi(\overline{H_9})=3$, so let $K^{(4)}=K(0,2)$, for which $g_{K^{(4)}}(p)=(1-p)/2$. In \cite{BM}, another CRG in $\mathcal{K}({\rm Forb}(H_9))$ is given, call it $K^{(2)}$. It consists of 4 white vertices, one black edge and 5 gray edges. Its $g$ function is $g_{K^{(2)}}(p)=\min\{p/3,p/(2+2p)\}$. There is a CRG with a smaller $g$ function. We call it $K^{(3)}$; it consists of $5$ white vertices, two disjoint black edges and the remaining 8 edges gray. The function $g_{K^{(3)}}(p)$ can be computed by use of Theorem~\ref{thm:components}. In the setup of that theorem, $K^{(3)}$ has 3 components.
Since the components have $g$ functions either $p$ (for the solitary white vertex) or $\min\{p,1/2\}$ (for each of the other two components), the theorem gives that $$ g_{K^{(3)}}(p)^{-1}=p^{-1}+2\left(\min\{p,1/2\}\right)^{-1}=\max\{3/p,(1+4p)/p\} . $$ It is easy to see that $H_9\not\mapsto K^{(1)}$ and $H_9\not\mapsto K^{(4)}$. In~\cite{BM}, it was shown that $H_9\not\mapsto K^{(2)}$. To finish the upper bound, it remains to show that $H_9\not\mapsto K^{(3)}$. Let $\{v_0,v_1,w_1,v_2,w_2\}$ be the vertices of $K^{(3)}$. Let the components be $\{v_0\}$, $\{v_1,w_1\}$ and $\{v_2,w_2\}$, where each of the latter two induces a black edge. First, we show that no component of $K^{(3)}$ can receive 4 vertices of $H_9$. Since $H_9$ has no independent sets of size 4 and no induced stars on 4 vertices, the only way for a component to receive 4 vertices is for an induced copy of $C_4$ to map to the component consisting of, say, $\{v_2,w_2\}$. It is not difficult to see that deleting two vertices from the set $\{0,3,6\}$ yields a $C_4$-free graph. So, any $C_4$ contains exactly two members of $\{0,3,6\}$. Without loss of generality, the induced $C_4$ is $\{1,3,6,8\}$. But the set $\{0,2,4,5,7\}$ induces a $C_5$, which cannot be mapped into the sub-CRG induced by $\{v_0,v_1,w_1\}$. Therefore, if $H_9$ were to map to $K^{(3)}$, each component must receive exactly $3$ vertices. First, we consider which vertices map to $v_0$. The only independent sets of size $3$ are $\{1,4,7\}$ and $\{2,5,8\}$. Without loss of generality, assume the former. Second, we consider the graph induced by $\{0,2,3,5,6,8\}$. In any partition of these six vertices into two subsets of $3$ vertices, some part induces either a triangle or a copy of $\overline{P_3}$, neither of which maps into $\{v_1,w_1\}$ or $\{v_2,w_2\}$. So, these six vertices cannot be mapped into $\{v_1,w_1,v_2,w_2\}$. Hence $H_9\not\mapsto K^{(3)}$.
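As an illustrative aside (simple arithmetic on the $g$ functions computed for $K^{(1)}$, $K^{(3)}$ and $K^{(4)}$, not needed for the proof), one can locate where each of these functions is smallest: $$ \frac{p}{3}\leq\frac{p}{1+4p} \iff p\leq\frac{1}{2} \qquad\mbox{and}\qquad \frac{p}{1+4p}\leq\frac{1-p}{2} \iff 4p^2-p-1\leq 0 \iff p\leq\frac{1+\sqrt{17}}{8} . $$ So the minimum of the three is $p/3$ on $[0,1/2]$, is $p/(1+4p)$ on $[1/2,(1+\sqrt{17})/8]$, and is $(1-p)/2$ thereafter.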
The CRGs $K^{(1)}$, $K^{(3)}$ and $K^{(4)}$ give an upper bound on ${\textit{ed}}_{{\rm Forb}(H_9)}(p)$ of $\min\left\{\frac{p}{3},\frac{p}{1+4p},\frac{1-p}{2}\right\}$.\\ \noindent\textbf{Lower bound, for $p\leq 1/2$.} Assume, by way of contradiction, that $K$ is a $p$-core CRG such that $H_9\not\mapsto K$ and $g_K(p)<p/3$. Recall Lemma~\ref{lem:cores}(\ref{it:smpcore}) which gives that $K$ has no black edges and white edges must be incident only to black vertices. If $K$ has at least $2$ white vertices, then it has no black vertices because $H_9\mapsto K(2,1)$. (The independent sets are $\{1,4,7\}$ and $\{2,5,8\}$ and the clique is $\{0,3,6\}$.) Since $\chi(H_9)=4$ (the independent sets are $\{1,4,7\}$, $\{0,5\}$, $\{2,6\}$ and $\{3,8\}$), such a CRG has at most $3$ white vertices. So, if $K$ has at least 2 white vertices, then either $K=K(2,0)$ or $K=K(3,0)$, so Corollary~\ref{cor:components} implies that $g_K(p)\geq p/3$, a contradiction. If $K$ has exactly one white vertex, then there is no gray edge among the black vertices because $H_9\mapsto K(1,2)$. (The independent set is $\{2,7\}$ and the cliques are $\{0,1,8\}$ and $\{3,4,5,6\}$.) If there are no black vertices, then $g_K(p)=p$, a contradiction. So, let $w$ be the white vertex and $K'=K-\{w\}$ and $k'=|V(K')|$. Since $K'$ is a clique with all black vertices and all white edges (if any), Proposition 9 from~\cite{Martin} gives that, for $p\in (0,1/2]$, $g_{K'}(p) = p+\frac{1-2p}{k'}>p$. By Theorem~\ref{thm:components}, $g_K(p)> 1/(1/p+1/p)=p/2$, a contradiction. If $K$ has no white vertices, then let $v_0$ be the vertex with largest weight and let $v_1$ be a gray neighbor of $v_0$. Let $x_0={\bf x}(v_0)$ and $x_1={\bf x}(v_1)$. Since $K$ can have no gray triangles ($H_9$ can be partitioned into 3 cliques), ${\rm d}_{\rm G}(v_0)+{\rm d}_{\rm G}(v_1)\leq 1$.
By Lemma~\ref{lem:symm}(\ref{it:symmsmp}), \begin{align*} 1 &\geq {\rm d}_{\rm G}(v_0)+{\rm d}_{\rm G}(v_1) \\ &= 2\frac{p-g_K(p)}{p}+\frac{1-2p}{p}(x_0+x_1) \\ g_K(p) &\geq \frac{p}{2}+\frac{1-2p}{2}(x_0+x_1)\geq\frac{p}{2} , \end{align*} a contradiction. Summarizing, if $p\leq 1/2$ and $K$ is a $p$-core CRG such that $H_9\not\mapsto K$, then $g_K(p)\geq p/3$. \\ \noindent\textbf{Lower bound, for $p\geq 1/2$.} Assume, by way of contradiction, that $K$ is a $p$-core CRG such that $H_9\not\mapsto K$ and $g_K(p)<\min\left\{\frac{p}{1+4p},\frac{1-p}{2}\right\}$. Recall Lemma~\ref{lem:cores}(\ref{it:lgpcore}) which gives that $K$ has no white edges and black edges must be incident only to white vertices. If $K$ has at least $2$ black vertices, then there are no white vertices because $H_9\mapsto K(1,2)$. (The independent set is $\{4,8\}$ and the cliques are $\{0,1,2,3\}$ and $\{5,6,7\}$.) Since $\overline{\chi}(H_9)=3$ (the cliques are $\{0,1,2\}$, $\{3,4,5\}$ and $\{6,7,8\}$), such a CRG has at most $2$ black vertices. So, if $K$ has at least 2 black vertices, then $K=K(0,2)$, so Corollary~\ref{cor:components} implies that $g_K(p)\geq (1-p)/2$, a contradiction. If $K$ has exactly one black vertex, then there is no gray edge among the white vertices because $H_9\mapsto K(2,1)$. If there are no white vertices, then $g_K(p)=1-p$, a contradiction. Let $b$ be the black vertex and $K'=K-\{b\}$ and $k'=|V(K')|$. Since $K'$ is a clique with all white vertices and all black edges (if any), Proposition 9 from~\cite{Martin} gives that, for $p\in [1/2,1)$, $g_{K'}(p) = 1-p+\frac{2p-1}{k'}>1-p$. By Theorem~\ref{thm:components}, $g_K(p)> (1-p)/2$, a contradiction. From now on, we will assume that $K$ has only white vertices and, since it is $p$-core for $p\geq 1/2$, all edges are black or gray. Fact~\ref{fact:3nbhs} and Fact~\ref{fact:2nbhs} establish some structural properties.
\begin{fact}\label{fact:3nbhs} Let $p\in [1/2,1)$ and $K$ be a $p$-core CRG with white vertices and black or gray edges. Let $v$ and $v'$ be vertices connected by a gray edge. Then, $N_G(v)\cap N_G(v')$ has at most two vertices. \end{fact} \begin{proof} If $N_G(v)\cap N_G(v')$ has three vertices, then map $H_9$ vertices $0$, $3$ and $6$ to each of them, map $\{1,4,7\}$ to $v$ and $\{2,5,8\}$ to $v'$. This is a map demonstrating that $H_9\mapsto K$. \end{proof} For the rest of the proof, denote $g=g_K(p)$. \begin{fact}\label{fact:2nbhs} Let $p\in [1/2,1)$ and $K$ be a $p$-core CRG with white vertices and black or gray edges and let $g=g_K(p)$. Let $v_0$ be a vertex of largest weight and $v_1$ be a vertex that has largest weight among those in $N_G(v_0)$. Then, $N_G(v_0)\cap N_G(v_1)$ has exactly two vertices or $g>(1-p)/2$ or $g\geq p/3$. \end{fact} \begin{proof} Because of Fact~\ref{fact:3nbhs}, if the statement of Fact~\ref{fact:2nbhs} is not true, then $N_G(v_0)\cap N_G(v_1)$ has at most one vertex which, by the choice of $v_1$, has weight at most ${\bf x}(v_1)$ and, by inclusion-exclusion, has weight at least ${\rm d}_{\rm G}(v_0)+{\rm d}_{\rm G}(v_1)-1$. By Lemma~\ref{lem:symm}(\ref{it:symmlgp}), \begin{align} {\bf x}(v_1) &\geq {\rm d}_{\rm G}(v_0)+{\rm d}_{\rm G}(v_1)-1 \nonumber \\ &\geq 2\frac{1-p-g}{1-p}+\frac{2p-1}{1-p}\left({\bf x}(v_0)+{\bf x}(v_1)\right)-1 \nonumber \\ g &\geq \frac{1-p}{2}+\frac{2p-1}{2}{\bf x}(v_0)-\frac{2-3p}{2}{\bf x}(v_1) . \label{eq:2nbhs1} \end{align} Note that \eqref{eq:2nbhs1} holds even if $N_G(v_0)\cap N_G(v_1)$ is empty. If $p\geq 2/3$, then $g>(1-p)/2$. If $p<2/3$, then use ${\bf x}(v_1)\leq{\bf x}(v_0)$ in \eqref{eq:2nbhs1}. \begin{equation} g \geq \frac{1-p}{2}+\frac{5p-3}{2}{\bf x}(v_1) . \label{eq:2nbhs2} \end{equation} If $p>3/5$, then $g>(1-p)/2$. If $p\leq 3/5$, then use the fact that Corollary~\ref{cor:xbound}(\ref{it:xbound1}) gives ${\bf x}(v_1)\leq g/p$, which we use in \eqref{eq:2nbhs2}. 
\begin{align*} g &\geq \frac{1-p}{2}+\frac{5p-3}{2}{\bf x}(v_1) \geq \frac{1-p}{2}+\frac{5p-3}{2}\left(\frac{g}{p}\right) \\ g &\geq \frac{p}{3} . \end{align*} \end{proof} Given Fact~\ref{fact:2nbhs} and the assumption that $g_K(p)<\min\left\{\frac{p}{1+4p},\frac{1-p}{2}\right\}$ (which is also at most $p/3$ for $p\geq 1/2$), we can identify $v_0$, a vertex of maximum weight, $v_1$ a vertex of maximum weight among those in $N_G(v_0)$ and $\{v_2,w_2\}=N_G(v_0)\cap N_G(v_1)$. (The vertex $v_0$ must have a gray neighbor, $v_1$, otherwise by Lemma~\ref{lem:symm}(\ref{it:symmlgp}), we must have $g\geq 1-p$.) Without loss of generality, let ${\bf x}(v_2)\geq{\bf x}(w_2)$. For ease of notation, let $x_i={\bf x}(v_i)$ for $i=0,1,2$. If $N_G(v_0)\cap N_G(v_2)-\{v_1\}$ is nonempty, then let its unique vertex be denoted $w_1$. (Uniqueness is a consequence of Fact~\ref{fact:3nbhs}.) \\ \noindent\textbf{Case 1.} The vertex $w_1$ does not exist. \\ Most of our observations come from inclusion-exclusion: $|A|+|B|=|A\cup B|+|A\cap B|$. Inequality \eqref{eq:h9case1:lb} comes from the fact that $N_G(v_0)\cap N_G(v_1)=\{v_2,w_2\}$. Inequality \eqref{eq:h9case1:ub} comes from the fact that $N_G(v_0)\cap N_G(v_2)=\{v_1\}$. Hence, \begin{align} {\rm d}_{\rm G}(v_0)+{\rm d}_{\rm G}(v_1) &\leq 1+2x_2 \label{eq:h9case1:lb} \\ {\rm d}_{\rm G}(v_0)+{\rm d}_{\rm G}(v_2) &\leq 1+x_1 . \label{eq:h9case1:ub} \end{align} Solve for $x_2$ in each case, recalling that Lemma~\ref{lem:symm}(\ref{it:symmlgp}) gives that ${\rm d}_{\rm G}(v_2)=\frac{1-p-g}{1-p}+\frac{2p-1}{1-p}x_2$. Inequality \eqref{eq:h9case1:lb} gives a lower bound for $x_2$ and inequality \eqref{eq:h9case1:ub} gives an upper bound: $$ \frac{1}{2}\left({\rm d}_{\rm G}(v_0)+{\rm d}_{\rm G}(v_1)-1\right)\leq x_2\leq \frac{1-p}{2p-1}\left(1+x_1-{\rm d}_{\rm G}(v_0)-\frac{1-p-g}{1-p}\right) . 
$$ Some simplification gives \begin{align*} 2g &\geq {\rm d}_{\rm G}(v_0)+(2p-1){\rm d}_{\rm G}(v_1)-2(1-p)x_1-2p+1 \\ &= 2p\,\frac{1-p-g}{1-p}+\frac{2p-1}{1-p}x_0+\frac{2p^2-1}{1-p}x_1-2p+1 \\ g &\geq \frac{1-p}{2}+\frac{2p-1}{2}x_0+\frac{2p^2-1}{2}x_1 . \end{align*} If $2p^2-1>0$ (i.e., $p>1/\sqrt{2}$), then $g>(1-p)/2$. Otherwise, we use the bound $x_1\leq x_0$. \begin{align*} g &\geq \frac{1-p}{2}+\frac{2p-1}{2}x_0+\frac{2p^2-1}{2}x_0 \\ &= \frac{1-p}{2}+(p^2+p-1)x_0 . \end{align*} If $p^2+p-1>0$ (i.e., $p>(\sqrt{5}-1)/2$), then $g>(1-p)/2$. Otherwise, we use the bound from Corollary~\ref{cor:xbound}(\ref{it:xbound1}) that $x_0\leq g/p$. \begin{align*} g &\geq \frac{1-p}{2}+(p^2+p-1)x_0 \\ &\geq \frac{1-p}{2}+(p^2+p-1)\frac{g}{p} \\ &\geq \frac{p}{2(1+p)} . \end{align*} This is at least $\frac{p}{1+4p}$ as long as $p\geq 1/2$, a contradiction. \\ \noindent\textbf{Case 2.} The vertex $w_1$ exists. \\ Inequality \eqref{eq:h9case2:lb1} comes from the fact that $N_G(v_0)\cap N_G(v_1)=\{v_2,w_2\}$ and ${\bf x}(w_2)\leq{\bf x}(v_2)=x_2$. Inequality \eqref{eq:h9case2:lb2} comes from the fact that $N_G(v_0)\cap N_G(v_2)=\{v_1,w_1\}$ and ${\bf x}(w_1)\leq{\bf x}(v_1)=x_1$. Hence, \begin{align} {\rm d}_{\rm G}(v_0)+{\rm d}_{\rm G}(v_1) &\leq 1+2x_2 \label{eq:h9case2:lb1} \\ {\rm d}_{\rm G}(v_0)+{\rm d}_{\rm G}(v_2) &\leq 1+2x_1 . \label{eq:h9case2:lb2} \end{align} Adding \eqref{eq:h9case2:lb1} and \eqref{eq:h9case2:lb2} gives \begin{align} 2{\rm d}_{\rm G}(v_0)+{\rm d}_{\rm G}(v_1)+{\rm d}_{\rm G}(v_2) &\leq 2+2(x_1+x_2) \nonumber \\ 2{\rm d}_{\rm G}(v_0)-\frac{2g}{1-p} &\leq \frac{3-4p}{1-p}(x_1+x_2) . \label{eq:h9case2:lb} \end{align} If $p\geq 3/4$, then \eqref{eq:h9case2:lb} gives that $2{\rm d}_{\rm G}(v_0)-\frac{2g}{1-p}\leq 0$. By Lemma~\ref{lem:symm}(\ref{it:symmlgp}) we can substitute for ${\rm d}_{\rm G}(v_0)$ and conclude that $\frac{p-g}{p}-\frac{g}{1-p}<0$.
Consequently, $g>p(1-p)\geq (1-p)/2$, a contradiction. Thus, we assume $p<3/4$. Next, we use Fact~\ref{fact:h96vert} to conclude that $v_0$ is the only common gray neighbor of $v_1$ and $v_2$. \begin{fact}\label{fact:h96vert} Let $p\geq 1/2$ and $K$ be a $p$-core CRG with white vertices and black or gray edges. Let $a_0,a_1,a_2,b_0,b_1,b_2\in V(K)$ be such that $\{a_0,a_1,a_2\}$ is a gray triangle and $\{b_i,a_j\}$ is a gray edge as long as $i$ and $j$ are distinct. Then, $H_9\mapsto K$. \end{fact} \begin{proof} The following map shows the embedding: $$ \begin{array}{lclcl} 2,7\rightarrow a_0 & \qquad & 1,5\rightarrow a_1 & \qquad & 4,8\rightarrow a_2 \\ 0\rightarrow b_0 & \qquad & 3\rightarrow b_1 & \qquad & 6\rightarrow b_2 . \end{array} $$ \end{proof} If $v_1$ and $v_2$ have a common gray neighbor in $K$ other than $v_0$, call it $w_0$, and observe that by setting $a_i:=v_i$ and $b_i:=w_i$ for $i=0,1,2$, Fact~\ref{fact:h96vert} would imply that $H_9\mapsto K$. Since $v_0$ is the only common gray neighbor of $v_1$ and $v_2$, \begin{align} {\rm d}_{\rm G}(v_1)+{\rm d}_{\rm G}(v_2) &\leq 1+x_0 \nonumber \\ \frac{2p-1}{1-p}(x_1+x_2) &\leq 1+x_0-2\frac{1-p-g}{1-p} . \label{eq:h9case2:ub} \end{align} Inequality \eqref{eq:h9case2:lb} gives a lower bound for $x_1+x_2$ and inequality \eqref{eq:h9case2:ub} gives an upper bound. Recall that Lemma~\ref{lem:symm}(\ref{it:symmlgp}) gives that ${\rm d}_{\rm G}(v)=\frac{1-p-g}{1-p}+\frac{2p-1}{1-p}{\bf x}(v)$ for any vertex $v\in V(K)$. Recall that we assume $p<3/4$. $$ \frac{1-p}{3-4p}\left(2{\rm d}_{\rm G}(v_0)-\frac{2g}{1-p}\right)\leq x_1+x_2\leq\frac{1-p}{2p-1}\left(1+x_0-2\frac{1-p-g}{1-p}\right) . $$ Some simplification gives $$ 2(2p-1)\left((1-p){\rm d}_{\rm G}(v_0)-g\right)\leq (3-4p)\left((1-p)(1+x_0)-2(1-p-g)\right) $$ and a further substitution of ${\rm d}_{\rm G}(v_0)=\frac{1-p-g}{1-p}+\frac{2p-1}{1-p}{\bf x}(v_0)$ and simplification gives $$ g\geq\frac{1-p}{2}+\frac{4p^2-p-1}{2}x_0 .
$$ If $4p^2-p-1>0$ (i.e., $p>(\sqrt{17}+1)/8$), then $g>(1-p)/2$. Otherwise, we use the bound $x_0\leq g/p$ from Corollary~\ref{cor:xbound}(\ref{it:xbound1}). \begin{align*} g &\geq \frac{1-p}{2}+\frac{4p^2-p-1}{2}\left(\frac{g}{p}\right) \\ &\geq \frac{p}{1+4p} , \end{align*} a contradiction. Therefore, for $p\in [1/2,1]$ and in each case, $g\geq\min\left\{p/(1+4p),(1-p)/2\right\}$. Combining this with the bound $g\geq p/3$ for $p\in [0,1/2]$ concludes the proof of the lower bound. Consequently, $$ {\textit{ed}}_{{\rm Forb}(H_9)}(p)=\min\left\{p/3,p/(1+4p),(1-p)/2\right\} . $$ This concludes the proof of Theorem~\ref{thm:h9}. \section{Thanks} \label{sec:conc} I would like to thank Maria Axenovich and J\'ozsef Balogh for conversations which have improved the results. Thanks to Andrew Thomason for some useful conversations and for directing me to \cite{MT}. Thanks also to Tracy McKay for conversations that helped deepen my understanding and to Doug West for answering my question about clique-stars. A very special thanks to Ed Marchant for finding an error in the original formulation of Theorem~\ref{thm:split}. I am indebted to anonymous referees whose detailed comments resulted in correcting some errors and provided a much better exposition of the proofs. Figures were made using Mathematica and WinFIGQT. \end{document}
\begin{document} \DeclarePairedDelimiter\ceil{\lceil}{\rceil} \DeclarePairedDelimiter\floor{\lfloor}{\rfloor} \title{Diminimal families of arbitrary diameter} \author[L. E. Allem]{L. Emilio Allem} \address{UFRGS - Universidade Federal do Rio Grande do Sul, Instituto de Matem\'atica, Porto Alegre, Brazil}\email{[email protected]} \author[R. O. Braga]{Rodrigo O. Braga} \address{UFRGS - Universidade Federal do Rio Grande do Sul, Instituto de Matem\'atica, Porto Alegre, Brazil}\email{[email protected]} \author[C. Hoppen]{Carlos Hoppen} \address{UFRGS - Universidade Federal do Rio Grande do Sul, Instituto de Matem\'atica, Porto Alegre, Brazil}\email{[email protected]} \author[E. R. Oliveira]{Elismar R. Oliveira} \address{UFRGS - Universidade Federal do Rio Grande do Sul, Instituto de Matem\'atica, Porto Alegre, Brazil}\email{[email protected]} \author[L. S. Sibemberg]{Lucas Siviero Sibemberg} \address{UFRGS - Universidade Federal do Rio Grande do Sul, Instituto de Matem\'atica, Porto Alegre, Brazil}\email{[email protected]} \author[V. Trevisan]{Vilmar Trevisan} \address{UFRGS - Universidade Federal do Rio Grande do Sul, Instituto de Matem\'atica, Porto Alegre, Brazil}\email{[email protected] } \subjclass{05C50,15A29} \keywords{Minimum number of distinct eigenvalues, trees, seeds, integral spectrum} \maketitle \begin{abstract} Given a tree $T$, let $q(T)$ be the minimum number of distinct eigenvalues in a symmetric matrix whose underlying graph is $T$. It is well known that $q(T)\geq d(T)+1$, where $d(T)$ is the diameter of $T$, and a tree $T$ is said to be diminimal if $q(T)=d(T)+1$. In this paper, we present families of diminimal trees of any fixed diameter. Our proof is constructive, allowing us to compute, for any diminimal tree $T$ of diameter $d$ in these families, a symmetric matrix $M$ with underlying graph $T$ whose spectrum has exactly $d+1$ distinct eigenvalues. 
\end{abstract} \section{Introduction} As described by Chu in an influential survey paper~\cite{Chu98}, inverse eigenvalue problems are concerned with the reconstruction of a square matrix $M$ assuming that we are given total or partial information about its eigenvalues and/or eigenvectors. Chu points to two fundamental questions associated with this problem: \begin{enumerate} \item \emph{Solvability}, i.e., whether there exists a matrix $M$ with the required eigenvalues and/or eigenvectors. Such a matrix $M$ is said to be a realization of the inverse eigenvalue problem. \item \emph{Computability}, i.e., whether, assuming that the problem has a solution, there is an efficient procedure to compute a solution (or to find a numerical approximation of one). \end{enumerate} For the inverse eigenvalue problem to be nontrivial or to be meaningful in applications, it is often the case that the sought-after matrix $M$ needs to satisfy additional properties, that is, the domain must be restricted to matrices in a pre-determined class. In this paper, we consider classes of symmetric matrices that may be described in terms of graphs. Note that any symmetric matrix $M=(m_{ij}) \in \mathbb{F}^{n \times n}$ over a field $\mathbb{F}$ may be associated with a simple graph $G$ with vertex set $[n]=\{1,\ldots,n\}$ such that distinct vertices $i$ and $j$ are adjacent if and only if $m_{ij} \neq 0$. We say that $G$ is the \emph{underlying graph} of $M$. In fact, the matrix $M$ itself may be viewed as a weighted version of $G$, where each vertex $i$ is assigned the weight $m_{ii} \in \mathbb{F}$ and each edge $ij$ is assigned the weight $m_{ij} \in \mathbb{F}$. Often the focus is on matrices whose elements are in the field $\mathbb{R}$ of real numbers (or on Hermitian matrices over the field $\mathbb{C}$ of complex numbers).
We refer to~\cite{BARRETT2020276}, and the references therein, for a more complete historical discussion of this type of inverse eigenvalue problem, known under the acronym IEPG, the \emph{inverse eigenvalue problem for a graph}. Given a graph $G$, let $\mathcal{S}(G)$ and $\mathcal{H}(G)$ be the sets of real symmetric matrices and of complex Hermitian matrices whose underlying graph is $G$, respectively. An elementary fact about these matrices is that their eigenvalues are real numbers, so that, for any $n$-vertex graph $G$ and any matrix $M\in \mathcal{H}(G)$, the eigenvalues of $M$ may be written as $\lambda_1(M) \leq \cdots \leq \lambda_n(M)$\footnote{When the matrix $M$ is clear from context, we shall omit the explicit reference to $M$ and simply write $\lambda_1\leq \cdots \leq \lambda_n$.}. The multiset $\{\lambda_1,\ldots,\lambda_n\}$ is called the \emph{spectrum} of $M$ and is denoted by $\Spec(M)$, while $\DSpec(M)$ denotes the set of \emph{distinct} eigenvalues of $M$. The \emph{multiplicity} $m_M(\lambda)$ of $\lambda \in \mathbb{R}$ as an eigenvalue of $M$ is the number of occurrences of $\lambda$ in $\Spec(M)$\footnote{It will be convenient to write $m_M(\lambda)=0$ when $\lambda$ is not an eigenvalue of $M$.}. A class of matrices that has been under intense scrutiny is the class of \emph{acyclic symmetric matrices}, the class of matrices whose underlying graph is a connected acyclic graph (that is, a tree). The study of acyclic symmetric matrices may be traced back to Parter~\cite{Parter60} and Wiener~\cite{WIENER1984}, and there has been growing interest in properties of these matrices and of parameters associated with them starting with the systematic work of Leal Duarte, Johnson and their collaborators, see for instance~\cite{ParterWiener03,JOHNSON20027,DUARTE1989173,leal2002minimum}.
One of the particularities of the acyclic case is that the inverse eigenvalue problem may be reduced to symmetric matrices, in the sense that, for any tree $T$, a multiset of real numbers is equal to the spectrum of a matrix in $\mathcal{H}(T)$ if and only if it is equal to the spectrum of a matrix in $\mathcal{S}(T)$ (see~\cite[Corollary 2.6.3]{JohnsonSaiago2018}). Our paper deals with the possible number of distinct eigenvalues of acyclic symmetric matrices. For an in-depth discussion of problems of this type, we refer to a comprehensive book on this topic by Johnson and Saiago~\cite{JohnsonSaiago2018}. More precisely, given a tree $T$, we wish to study the quantity \begin{equation}\label{def_q} q(T)=\min\{|\DSpec(A)| \colon A \in \mathcal{S}(T)\}, \end{equation} the minimum number of distinct eigenvalues over all symmetric real matrices whose underlying graph is $T$. An easy lower bound on this number may be given in terms of the \emph{diameter} of $T$, which we now define. As usual, let $P_d$ denote a path on $d$ vertices. The distance $d_G(u,v)$ between two vertices $u$ and $v$ in a graph $G=(V,E)$ is the length (i.e., the number of edges) of a shortest path connecting $u$ and $v$ in $G$; in particular, a shortest path that is a copy of $P_d$ has length $d-1$. We say that $d_G(u,v)=\infty$ if $u$ and $v$ lie in different components of $G$. The diameter $\diam(G)$ of $G$ is defined as $$\diam(G)=\max\{d_G(u,v) \colon u,v\in V\}\footnote{We observe that in~\cite{JohnsonSaiago2018,leal2002minimum} the value of the diameter corresponds to the number of \emph{vertices}, rather than edges, on the path connecting two vertices at maximum distance. As a consequence, what we call diameter $d$ is diameter $d+1$ in~\cite{JohnsonSaiago2018,leal2002minimum}.}.$$ The following result is proved in~\cite[Lemma 1]{leal2002minimum}. \begin{theorem}\label{thm:LB} If $T$ is a tree with diameter $d$ and $A \in \mathcal{S}(T)$, then $q(T) \geq d+1$.
\end{theorem} The authors of~\cite{leal2002minimum} suspected that, for every tree $T$ of diameter $d$, there exists a matrix $A \in \mathcal{S}(T)$ with exactly $d+1$ distinct eigenvalues. However, this turns out to be false. Barioli and Fallat~\cite{barioli2004two} constructed a tree $T$ on $16$ vertices such that $\diam(T)=6$, but $q(T)=8$. It is now known that $q(T)=d+1$ for every tree $T$ of diameter $d$ if and only if $d\leq 5$~\cite{JohnsonSaiago2018}. For diameter $d\geq 6$, it is thus natural to characterize the trees $T$ for which $q(T)=\diam(T)+1$, which are known as \emph{diameter minimal} (or \emph{diminimal}, for short). The set $\mathcal{D}_d$ of diminimal trees of any fixed diameter $d$ is nonempty, as we trivially have $P_{d+1} \in \mathcal{D}_d$. Johnson and Saiago~\cite{JohnsonSaiago2018} show that the families $\mathcal{D}_{d}$ are infinite\footnote{In the sense that the set of unlabelled trees of diameter $d$ that are diminimal is infinite.} for every $d$. One of the main tools used to address this problem in~\cite{JohnsonSaiago2018} is the construction of trees using an operation called \emph{branch duplication}~\cite{JOHNSON2020}. This concept will be formally defined in Section~\ref{sec:seeds}, but the intuition is that, for any fixed positive integer $d$, there is a finite set $\mathcal{S}_d$ of (unlabelled) trees of diameter $d$, called the \emph{seeds of diameter $d$}, with the property that any (unlabelled) tree of diameter $d$ may be obtained from one of the seeds of diameter $d$ by a sequence of branch duplications. As it turns out, for any tree $T$ of diameter $d$ there is a single seed of diameter $d$ from which it can be obtained, so that the seeds are precisely the trees that cannot be obtained from smaller trees through branch duplication.
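The trivial membership $P_{d+1}\in\mathcal{D}_d$ noted above is easy to verify numerically: the adjacency matrix of $P_{d+1}$ already attains $d+1$ distinct eigenvalues, since the eigenvalues of the path $P_n$ are $2\cos(k\pi/(n+1))$ for $k=1,\ldots,n$, and these are pairwise distinct. A short Python check (using NumPy; this snippet is ours and is not part of the paper):

```python
import numpy as np

def path_adjacency(n):
    """Adjacency matrix of the path P_n on n vertices."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A

for d in range(1, 8):
    evs = np.linalg.eigvalsh(path_adjacency(d + 1))
    # eigenvalues of P_{d+1} are 2*cos(k*pi/(d+2)), k = 1, ..., d+1
    expected = sorted(2 * np.cos(k * np.pi / (d + 2)) for k in range(1, d + 2))
    assert np.allclose(evs, expected)
    # d+1 distinct eigenvalues, so q(P_{d+1}) <= d+1 = diam + 1
    assert len({round(float(ev), 8) for ev in evs}) == d + 1
```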
To illustrate why this can be useful for our purposes, we mention that Section 6.5 in~\cite{JohnsonSaiago2018} deals with $q(T)$ for trees of diameter $d=6$, for which the set $\mathcal{S}_6$ contains 12 seeds. Johnson and Saiago show that the families generated by nine of these seeds consist entirely of diminimal trees, while, in each of the remaining three families, at least one of the trees is not diminimal. The main result in our paper is that, for any fixed $d \geq 4$, there are at least two seeds $S_d$ and $S'_d$ of diameter $d$ such that the families $\mathcal{T}(S_d)$ and $\mathcal{T}(S'_d)$ generated by these seeds consist entirely of diminimal trees. If $d \geq 5$ is odd, there is a third seed $S''_d$ for which this property holds. These seeds are formally defined in Definition~\ref{def_seeds}, and they are depicted in Figures~\ref{fig:s6}-\ref{fig:s72} for small values of $d$. \begin{theorem}\label{thm:main} Let $d$ be a positive integer. Let $\mathcal{T}(S_d)$, $\mathcal{T}(S'_d)$ and $\mathcal{T}(S''_d)$ be the families of trees of diameter $d$ generated by the seeds $S_d$, $S'_d$ and $S''_d$, respectively, where $S'_d$ is defined for $d\geq 4$ and $S''_d$ for odd values of $d \geq 5$. For every $T\in \mathcal{T}(S_d) \cup \mathcal{T}(S'_d) \cup \mathcal{T}(S''_d)$, we have $q(T)=d+1$. \end{theorem} The main additional tool in our proof of Theorem~\ref{thm:main} is an algorithm by Jacobs and Trevisan~\cite{JT2011} that was proposed to solve a problem known as \emph{eigenvalue location} for matrices associated with graphs. A detailed discussion is deferred to Section~\ref{sec:eigenvalue_location}, but we can anticipate that it will have an important role in an inductive approach for Theorem~\ref{thm:main}. 
As a byproduct of our proof of Theorem~\ref{thm:main} (see Theorem~\ref{main_th}), we obtain a constructive procedure that, given a tree $T \in \mathcal{T}(S_d) \cup \mathcal{T}(S'_d) \cup \mathcal{T}(S''_d)$, produces a symmetric matrix $A \in \mathbb{R}^{n \times n}$ with underlying tree $T$ with the property that $q(T)=|\DSpec(A)|=d+1$. This means that, in addition to exploring the existence of such a matrix, we also address its computability. In particular, the procedure allows us to produce such a matrix $A$ with integral spectrum, i.e., with the property that its spectrum consists entirely of integers. More generally, let \begin{eqnarray*} && q_{int}(T)=\min\{|\DSpec(A)| \colon A \in \mathcal{S}_{int}(T)\}, \end{eqnarray*} where $\mathcal{S}_{int}(T)$ is the subset of $\mathcal{S}(T)$ whose matrices are integral (i.e., have integral spectrum). Clearly, we have $q(T)\leq q_{int}(T)$ for any tree $T$. Our work implies that the following holds for all trees $T$ with diameter $d \leq 5$ and for all $T \in \mathcal{T}(S_d) \cup \mathcal{T}(S'_d) \cup \mathcal{T}(S''_d)$ with $d \geq 6$: $$q(T)=q_{int}(T).$$ It would be interesting to understand how these parameters relate for arbitrary trees. We should mention that, even though the focus of this paper is on matrices associated with trees, there has been a lot of research on the parameter $q(G)$ for more general graphs, see \cite{F2013,allred2022combinatorial,BARRETT2020276,barrett2017generalizations,fallat2022minimum} for example. Our paper is structured as follows. In the next two sections, we describe the main ingredients used in our proofs. In Section~\ref{sec:eigenvalue_location}, we present an algorithm that \emph{locates} eigenvalues of trees, that is, an algorithm that for any real symmetric matrix $M$ whose underlying graph is a tree and for any given real interval $I$, finds the number of eigenvalues of $M$ in $I$. 
We illustrate its usefulness by providing a short proof of the classical Parter-Wiener Theorem. In Section~\ref{sec:seeds}, we describe how trees of any fixed diameter $d$ can be constructed by a sequence of \emph{branch duplications} starting with some irreducible tree with diameter $d$, which is known as a seed. The remaining sections deal with the proof of Theorem~\ref{thm:main}. In Section~\ref{sec:strongly_realizable}, we state a technical tool (Theorem~\ref{main_th}) that is the heart of the proof. Given $d\geq 1$, it will allow us to inductively define a set of real numbers (of size $d+1$) that, for any tree $T \in \mathcal{T}(S_d)$ with diameter $d$, is equal to the set of distinct eigenvalues in the spectrum of a symmetric matrix $M(T)$ with underlying graph $T$. A set of this type will be called \emph{strongly realizable} because there are realizations of it for all trees in the class under consideration. The existence of such a set immediately implies the validity of Theorem~\ref{thm:main} for seeds of type $S_d$. Theorem~\ref{main_th} will then be proved by induction in Section~\ref{sec:proof_technical}. To conclude the paper, Section~\ref{sec:other_seeds} uses strongly realizable sets to give a proof of Theorem~\ref{thm:main} for seeds of type $S_d'$ and $S_d''$. Moreover, we explain how this may be used to obtain matrices with the minimum number of distinct eigenvalues that satisfy additional properties, such as having integral spectrum. An explicit construction is given in Section~\ref{sec:example}. \section{Eigenvalue location in trees}\label{sec:eigenvalue_location} In a seminal paper~\cite{JT2011}, Jacobs and one of the current authors have proposed an algorithm that, given a real symmetric matrix $M$ whose underlying graph is a tree and a real interval $I$, finds the number of eigenvalues of $M$ in $I$. In fact, the work in~\cite{JT2011} was specifically concerned with eigenvalues of the adjacency matrix of an arbitrary tree. 
However, the strategy could be extended in a natural way to arbitrary symmetric matrices associated with trees. This more general algorithm, stated in Figure~\ref{chap3:treeunder}, appears in~\cite{TEMA1041}. The algorithm runs on a \emph{rooted tree}, that is, a tree $T$ for which one of the vertices $r$ is distinguished as the \emph{root}. Each neighbor of $r$ is regarded as a {\em child} of $r$, and $r$ is called its {\em parent}. For each child $c$ of $r$, all of its neighbors, except the parent, become its children. This process continues until all vertices except $r$ have parents. A vertex that does not have children is called a {\em leaf} of the rooted tree. For the algorithm, the tree $T$ that underlies the input matrix $M$ may be assigned an arbitrary root, but its vertex set must be ordered \emph{bottom-up}, that is, any vertex must appear after all its children. In particular, the root is the last vertex in such an ordering. \begin{figure} \caption{Diagonalizing $M + xI$ for a symmetric matrix $M$ associated with the tree $T$.} \label{chap3:treeunder} \end{figure} \index{Diagonalize Weighted Tree} The following theorem summarizes the way in which the algorithm will be applied. Its proof is based on a property of matrix congruence known as Sylvester's Law of Inertia; we refer to~\cite{livro} for details. \begin{theorem} \label{inertia} Let $M$ be a symmetric matrix of order $n$ that corresponds to a weighted tree $T$ and let $x$ be a real number. Given a bottom-up ordering of $T$, let $D$ be the diagonal matrix produced by Algorithm Diagonalize with inputs $M$ and $x$. The following hold: \begin{itemize} \item[(a)] The number of positive entries in the diagonal of $D$ is the number of eigenvalues of $M$ (including multiplicities) that are greater than $-x$. \item[(b)] The number of negative entries in the diagonal of $D$ is the number of eigenvalues of $M$ (including multiplicities) that are less than $-x$.
\item[(c)] The number of zeros in the diagonal of $D$ is the multiplicity of $-x$ as an eigenvalue of $M$. \end{itemize} \end{theorem} An immediate consequence of this result is the well-known fact that the multiplicity of the maximum and of the minimum eigenvalue of any symmetric matrix whose underlying graph is a tree is always equal to 1. \begin{theorem}\label{thm:simpleroots} Let $T$ be a tree, let $M\in \mathcal{S}(T)$, and consider $\lambda_{\min}=\min(\Spec(M))$ and $\lambda_{\max}=\max(\Spec(M))$. Then, $m_{M}(\lambda_{\min}) = 1 =m_{M}(\lambda_{\max})$. \end{theorem} \begin{proof} Let $T$ be a tree and $M\in \mathcal{S}(T)$. We prove the theorem for $\lambda_{\min}$; the proof for $\lambda_{\max}$ follows from the fact that $\lambda_{\max}(M)=-\lambda_{\min}(-M)$. Choose an arbitrary root $v$ for $T$ and fix a bottom-up ordering $v_1,\ldots,v_n=v$ of $T$. Set $x=-\lambda_{\min}$ and consider an application of \texttt{Diagonalize}$(M,x)$. Because $\lambda_{\min}$ is an eigenvalue of $M$, at least one of the diagonal elements $d_j$ must be zero at the end of the algorithm by Theorem~\ref{inertia}(c). We claim that $v_j=v$, which implies the desired result. Assume for a contradiction that $v_j \neq v$, so that $v_j$ has a parent $v_k$ in $T$. Because $d_j$ is 0, at the time $v_k$ is processed, it has a child with value 0. The algorithm assigns the positive value 2 to one of the children $v_{j'}$ of $v_k$ with this property and the negative value $-(m_{j'k})^2/2$ to $v_k$. These values cannot be modified in later steps. Theorem~\ref{inertia}(b) implies that $M$ has an eigenvalue $\lambda$ such that $\lambda<\lambda_{\min}$, a contradiction. \end{proof} Theorem~\ref{inertia} can also be used to give a short proof of a result attributed to Parter and Wiener, see~\cite[Theorem 2.3.1]{JohnsonSaiago2018}. We include the proof here to illustrate how our proof method applies.
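The pseudocode referenced in Figure~\ref{chap3:treeunder} is not reproduced in this excerpt. The following Python sketch reconstructs \texttt{Diagonalize} from the description in~\cite{JT2011,TEMA1041} and from the way its steps are used in the proofs here; the representation of the weighted tree (children lists plus a dictionary of edge weights) is our own choice and is only meant to illustrate the bottom-up process.

```python
def diagonalize(diag, weight, children, root, x):
    """Sketch of Algorithm Diagonalize for M + x*I, where M is the
    symmetric matrix of a weighted tree rooted at `root`.

    diag[v]       -- diagonal entry m_vv of M
    weight[u, v]  -- off-diagonal entry m_uv for each tree edge, with u < v
    children[v]   -- list of children of v in the rooted tree
    Returns the diagonal d of a matrix congruent to M + x*I."""
    d = [diag[v] + x for v in range(len(diag))]  # initialization
    cut = [False] * len(diag)  # True once the edge to the parent is deleted
    order, stack = [], [root]
    while stack:  # build an ordering with every parent before its children
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    for v in reversed(order):  # bottom-up: children before parents
        live = [c for c in children[v] if not cut[c]]
        zeros = [c for c in live if d[c] == 0]  # exact test; fine here
        if zeros:
            c = zeros[0]
            w = weight[min(c, v), max(c, v)]
            d[c] = 2.0             # the zero child is redefined as 2 ...
            d[v] = -(w * w) / 2.0  # ... and v gets the value -(m_cv)^2/2
            cut[v] = True          # the edge from v to its parent is deleted
        else:
            d[v] -= sum(weight[min(c, v), max(c, v)] ** 2 / d[c] for c in live)
    return d
```

By Theorem~\ref{inertia}, the signs in the output locate the eigenvalues of $M$ relative to $-x$: for the adjacency matrix of $P_3$ (diagonal entries $0$, edge weights $1$) and $x=0$, the output has one positive, one negative and one zero entry, matching $\Spec(M)=\{\sqrt{2},0,-\sqrt{2}\}$.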
Given a tree $T$ and a matrix $M(T)$ for which $T$ is the underlying tree, we write $M[T-v]$ to denote the submatrix of $M$ obtained by deleting the row and the column corresponding to a vertex $v$ of $T$. More generally, if $T'$ is a subgraph of $T$, we write $M[T']$ for the submatrix of $M$ induced by the rows and columns corresponding to vertices of $T'$. \begin{theorem}[Parter-Wiener Theorem]\label{parter-wiener} Let $T$ be a tree, let $M\in \mathcal{S}(T)$, and suppose that $\lambda \in \mathbb{R}$ is such that $m_{M}(\lambda)\geq2$. Then there is a vertex $v$ of $T$ of degree at least $3$ such that $m_{M[T-v]}(\lambda) = m_{M}(\lambda) + 1$. Moreover, $\lambda$ occurs as an eigenvalue of $M[T_i]$ for at least three different components $T_i$ of $T-v$. \end{theorem} \begin{proof} Let $T$ be an $n$-vertex tree, $M\in \mathcal{S}(T)$, and suppose that $\lambda \in \mathbb{R}$ is such that $m_{M}(\lambda)\geq2$. Choose some vertex $v_n\in V(T)$ as the root of $T$ and set $x=-\lambda$. Consider an application of \texttt{Diagonalize}$(M,x)$ with root $v_n$. By Theorem~\ref{inertia}(c), at least two diagonal elements of the output matrix $D$ must be 0. Fix a vertex $v_j$ that is farthest from the root such that $d_j=0$, so that $j<n$ (otherwise the root would be the only vertex with value 0). Let $v=v_k$ be the parent of $v_j$. Let $T_i$ be the components of $T-v_k$ rooted at the children $v_{k,i}$ of $v_k$, where $i \in \{1,\ldots,\ell\}$. If $v_k\neq v_n$, let $T_0$ be the component of $T-v_k$ that contains $v_n$ (and assume it is rooted at $v_n$). First note that $\ell \geq 2$. Indeed, if $\ell=1$, then $v_j$ would be the only child of $v_k$. However, since $d_j=0$, when the algorithm processes $v_k$, it redefines $d_j$ as 2, contradicting the assumption that the final value of $d_j$ is 0. In fact, this argument further implies that $v_j$ must have at least one sibling $v_{j'}$ to which the algorithm assigns value $0$ as it processes $v_{j'}$, but then redefines it as 2.
Consider applications of \texttt{Diagonalize}$(M[T_i],x)$ for $i \in \{1,\ldots,\ell\}$. Our assumption about the distance from $v_j$ to the root implies that 0 can only appear (as the final value) at the root of each such application. On the other hand, the previous paragraph ensures that 0 appears as the final value of at least two of the roots, namely $v_j$ and $v_{j'}$. If 0 is the value of at least three of these roots, we conclude that $v_k$ has degree at least three and that $\lambda$ occurs as an eigenvalue of at least three components of $T-v$ by Theorem~\ref{inertia}(c). If 0 appears in exactly two of the roots, we conclude that $v_k\neq v_n$, as otherwise one of the 0's would be redefined as 2 when processing $v_n$, and $v_n$ would be assigned a negative value, contradicting the assumption that $m_{M}(\lambda) \geq 2$. The same considerations imply that, when performing \texttt{Diagonalize}$(M,x)$, there are initially two occurrences of 0 at the children of $v_k$, but, when the algorithm processes $v_k$, it replaces one of the zeros by 2, $v_k$ gets a negative value and the edge between $v_k$ and its parent is deleted. Because of this, the values assigned by \texttt{Diagonalize}$(M,x)$ to the remaining vertices of $T$ are not affected by the values on $v_k$'s branch, that is, they are exactly the values assigned by \texttt{Diagonalize}$(M[T_0],x)$. In particular, 0 must appear at least once at the output of \texttt{Diagonalize}$(M[T_0],x)$, thus $\lambda$ is an eigenvalue of $M[T_0]$. Overall, $\lambda$ occurs as an eigenvalue of at least three components of $T-v$. To conclude the proof, we still need to establish $m_{M[T-v]}(\lambda) = m_M(\lambda) + 1$, where $v=v_k$. But this follows immediately from the argument above. Let $s$ be the number of times that 0 appears at children of $v_k$ (before $v_k$ is processed).
After processing $v_k$, one of the zeros becomes 2 and the edge between $v_k$ and its parent (if it exists) is deleted, so that $m_{M}(\lambda)=(s-1)+m_{M[T_0]}(\lambda)$ if $v_k\neq v_n$, and $m_{M}(\lambda)=s-1$ if $v_k=v_n$. On the other hand, $m_{M[T-v]}(\lambda)=s+m_{M[T_0]}(\lambda)$ if $v_k\neq v_n$, and $m_{M[T-v]}(\lambda)=s$ if $v_k=v_n$. \end{proof} The following result is proved with similar ideas. \begin{lemat}\label{proposition} Let $T$ be a tree and $M\in \mathcal{S}(T)$. If $v\in V(T)$ is such that $m_{M[T-v]}(\lambda)=m_{M}(\lambda)+1$, for some $\lambda\in\mathbb{R}$, then the following holds when Algorithm Diagonalize is performed for $M$ and $-\lambda$ with root $v$. There is a child $v_j$ of $v$ such that, after processing $v_j$, the algorithm assigns value $d_j=0$. \end{lemat} \begin{proof} Let $T$ be a tree and $M\in \mathcal{S}(T)$. Let $v\in V(T)$ be such that $m_{M[T-v]}(\lambda)=m_{M}(\lambda)+1$, for some $\lambda\in\mathbb{R}$. Consider $T$ rooted at $v$. Let $T_1,\ldots,T_p$ be the connected components of $T-v$ rooted at the children $v_1,\ldots,v_p$ of $v$. By Theorem~\ref{inertia}, $m_{M}(\lambda)$ is given by the number of occurrences of $0$ in the diagonal of the matrix produced by \texttt{Diagonalize}$(M,-\lambda)$ with root $v$. Similarly, $m_{M[T-v]}(\lambda)$ is the sum of the numbers of occurrences of $0$ in the diagonals of the matrices $D_i$ produced by \texttt{Diagonalize}$(M[T_i],-\lambda)$ with root $v_i$. By hypothesis, this sum is larger than $m_{M}(\lambda)$. In particular, one of the 0's assigned by \texttt{Diagonalize}$(M[T_i],-\lambda)$ must lie on a vertex $u$ that is assigned a nonzero value by \texttt{Diagonalize}$(M,-\lambda)$. On the other hand, the value assigned by \texttt{Diagonalize}$(M[T_i],-\lambda)$ with root $v_i$ to a vertex $w\neq v_i$ is precisely the value assigned to $w$ by \texttt{Diagonalize}$(M,-\lambda)$ with root $v$.
As a consequence, the vertex $u$ mentioned in the previous paragraph must be $v_j$ for some $j\in\{1,\ldots,p\}$. This means that, in an application of \texttt{Diagonalize}$(M,-\lambda)$ with root $v$, the algorithm assigns value $d_j=0$ to $v_j$ upon processing $v_j$, and later redefines the value of $d_j$ as 2 when processing its parent $v$. \end{proof} \section{Trees of diameter $d$ and branch decompositions} \label{sec:seeds} In this section, we shall describe an operation known as branch decomposition, which allows us to view trees of diameter $d$ as being generated by a finite collection of trees of the same diameter, known as seeds. Let $d \geq 1$ be a fixed integer and let $\mathcal{T}_{d}^{(n)}$ be the set of $n$-vertex trees with diameter $d$, where $n \geq 3$. Given a tree $T \in \mathcal{T}_{d}^{(n)}$, there is a natural way to consider it as a rooted tree. \begin{defn}[Main root]\label{mainrooteven} Let $T=(V,E)$ be a tree with diameter $d$. \begin{itemize} \item[(a)] If $d=2k$ for some $k\in\mathbb{N}$, then $v\in V$ is the \emph{main root} of $T$ if it is the central vertex of a maximum path $P_{2k+1}$ in $T$. \item[(b)] If $d=2k+1$ for some $k\in\mathbb{N}$, then $e\in E$ is the \emph{main edge} of $T$ if it is the central edge of a maximum path $P_{2k+2}$ in $T$. Each endpoint of $e$ is called a \emph{main root} of $T$. \end{itemize} \end{defn} We note that the main root and the main edge are well defined. For (a), observe that any two distinct copies $Q_1$ and $Q_2$ of $P_{2k+1}$ in $T$ must intersect in a vertex $v$ that is the central vertex of both; otherwise the path $Q$ created by merging the two longest subpaths of $Q_1$ and $Q_2$ joining $v$ to a leaf would have more than $2k$ edges, contradicting the assumption that $T$ has diameter $2k$. We may similarly argue that any two longest paths in a tree with odd diameter share their central edge. To prove Theorem~\ref{thm:main}, we will construct classes of trees of diameter $d$ in a recursive way.
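In practice, a main root (or main edge) is easy to locate: a farthest vertex from an arbitrary vertex is an endpoint of a longest path, and a farthest vertex from that endpoint yields a longest path itself. The following Python sketch (the adjacency-list representation and helper names are our own illustration, not part of the formal development) returns the main root, or the two main roots when the diameter is odd.

```python
from collections import deque

def farthest(adj, s):
    """BFS from s; returns the last vertex dequeued (a farthest vertex
    from s) together with the BFS parent map."""
    par = {s: None}
    queue, last = deque([s]), s
    while queue:
        u = queue.popleft()
        last = u
        for w in adj[u]:
            if w not in par:
                par[w] = u
                queue.append(w)
    return last, par

def main_roots(adj):
    """Central vertex [v] (even diameter) or central edge [u, v]
    (odd diameter) of a longest path in the tree."""
    a, _ = farthest(adj, next(iter(adj)))
    b, par = farthest(adj, a)
    path = [b]                    # walk back from b to a
    while par[path[-1]] is not None:
        path.append(par[path[-1]])
    d = len(path) - 1             # the diameter equals len(path) - 1
    if d % 2 == 0:
        return [path[d // 2]]
    return sorted(path[d // 2: d // 2 + 2])

# The path on five vertices has diameter 4 and main root 2.
assert main_roots({0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}) == [2]
```

The two BFS passes make the cost linear in the number of vertices, which matches the fact that any two longest paths share their center.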
To this end, we define an operation on rooted trees. Let $p\geq 1$ and consider disjoint trees $T_0,T_1,\ldots,T_p$ rooted at vertices $v_0,v_1,\ldots,v_p$, respectively. We write $T_0\odot (T_1,\ldots,T_p)$ for the tree with vertex set $V=\bigcup_{\ell=0}^{p}V(T_\ell)$ and edge set $E=\cup_{\ell=1}^{p}\{v_0v_\ell\}\cup\bigcup_{\ell=0}^{p}E(T_\ell)$ and we write $T= T_0 \odot (T_1,\ldots,T_p)$ (see Figure~\ref{fig:odot} for $p=3$). If $p=1$, we simplify the notation to $T_0\odot (T_1)= T_0 \odot T_1$. \newcommand{\Ttres}{ \begin{tikzpicture}[scale=.7,auto=left,every node/.style={circle,scale=0.5}] \path( 0,0)node[shape=circle,draw,fill=black] (1) {} (.5,0)node[shape=circle,draw,fill=black] (2) {} (1.5,0)node[shape=circle,draw,fill=black] (3) {} (1,1)node[shape=circle,draw,fill=black] (4) {} (0,1)node[shape=rectangle,label=left:\Large{$v_{3}$},draw,fill=red] (5) {}; \draw[-](2)--(4); \draw[-](3)--(4); \draw[-](5)--(4); \draw[-](5)--(1); \end{tikzpicture} } \newcommand{\Tzero}{ \begin{tikzpicture}[scale=.7,auto=left,every node/.style={circle,scale=0.5}] \path( 0,0)node[shape=circle,draw,fill=black] (1) {} (1,0)node[shape=circle,draw,fill=black] (2) {} (0,.7)node[shape=circle,draw,fill=black] (3) {} (1,.7)node[shape=circle,draw,fill=black] (4) {} (.5,1.3)node[shape=rectangle,label=left:\Large{$v_{0}$},draw,fill=red] (5) {} (.5,2)node[shape=circle,draw,fill=black] (6) {}; \draw[-](1)--(3); \draw[-](3)--(5); \draw[-](2)--(4); \draw[-](4)--(5); \draw[-](5)--(6); \end{tikzpicture} } \newcommand{\Tdois}{ \begin{tikzpicture}[scale=.7,auto=left,every node/.style={circle,scale=0.5}] \path( 0,0)node[shape=circle,draw,fill=black] (1) {} (1,.5)node[shape=rectangle,label=right:\Large{$v_{2}$},draw,fill=red] (2) {}; \draw[-](1)--(2); \end{tikzpicture} } \newcommand{\Tum}{ \begin{tikzpicture}[scale=.7,auto=left,every node/.style={circle,scale=0.5}] \path( 0,0)node[shape=circle,draw,fill=black] (1) {} (.5,.5)node[shape=rectangle,label=left:\Large{$v_{1}$},draw,fill=red] (2) {} 
(1,1)node[shape=circle,draw,fill=black] (3) {}; \draw[-](1)--(2); \draw[-](2)--(3); \end{tikzpicture} } \newcommand{\operation}{ \begin{tikzpicture}[scale=.7,auto=left,every node/.style={circle,scale=0.5}] \path( 0,0)node[shape=circle,draw,fill=black] (1) {} (.6,0)node[shape=circle,draw,fill=black] (2) {} (.3,1)node[shape=circle,label=left:\Large{$v_{1}$},draw,fill=black] (3) {} (1.5,0)node[shape=circle,draw,fill=black] (4) {} (1.5,1)node[shape=circle,label=left:\Large{$v_{2}$},draw,fill=black] (5) {} (2.5,0)node[shape=circle,draw,fill=black] (6) {} (2.5,1)node[shape=circle,label=left:\Large{$v_{3}$},draw,fill=black] (7) {} (3.2,0)node[shape=circle,draw,fill=black] (8) {} (3.5,1)node[shape=circle,draw,fill=black] (9) {} (3.7,0)node[shape=circle,draw,fill=black] (10) {} (1.5,2)node[shape=rectangle,label=left:\Large{$v_{0}$},draw,fill=red] (0) {} (1.2,3)node[shape=circle,draw,fill=black] (11) {} (2.2,2.5)node[shape=circle,draw,fill=black] (14) {} (3,2.5)node[shape=circle,draw,fill=black] (15) {} (2.2,3.5)node[shape=circle,draw,fill=black] (12) {} (3,3.5)node[shape=circle,draw,fill=black] (13) {}; \draw[-](1)--(3); \draw[-](2)--(3); \draw[-](4)--(5); \draw[-](6)--(7); \draw[-](7)--(9); \draw[-](8)--(9); \draw[-](9)--(10); \draw[-](0)--(11); \draw[-](0)--(14); \draw[-](0)--(12); \draw[-](14)--(15); \draw[-](12)--(13); \draw[dotted](0)--(3); \draw[dotted](0)--(5); \draw[dotted](0)--(7); \end{tikzpicture} } \captionsetup[subfigure]{labelformat=empty} \begin{figure} \caption{$T_{0}$} \label{fig:a} \caption{$T_{1}$} \label{fig:b} \caption{$T_{2}$} \label{fig:c} \caption{$T_{3}$} \label{fig:d} \caption{$T_0\odot (T_1,T_2,T_3)$} \label{fig:e} \caption{The rooted tree $T_0\odot (T_1,T_2,T_3)$. The root of each tree is denoted by a square. } \label{fig:odot} \end{figure} The \emph{height} $h(T)$ of a rooted tree $T$ is the distance of the root $v$ to the farthest vertex in $T$, i.e., $h(T)=\max\{d(v,u):u\in V(T)\}$. 
Note that, when a tree $T$ of diameter $d$ is rooted at a main root, its height is $h(T)=\ceil{\frac{d}{2}}$. As mentioned in the introduction, the authors of the book~\cite{JohnsonSaiago2018} consider families of trees constructed by successive applications of operations called \emph{branch duplications}. Given a tree $T$, we say that $T_j$ is a \emph{branch of $T$ at a vertex $v$} if $T_j$ is a component of $T-v$. We can view the branch as a rooted tree with root given by the neighbor of $v$ in $T_j$. An \emph{$s$-combinatorial branch duplication (CBD)} of $T_j$ at $v$ results in a new tree where $s$ copies of $T_j$ are appended to $T$ at $v$ (see Figure~\ref{fig:unfolding}). A tree $T'$ that is obtained from $T$ by a finite sequence of CBDs is called an \emph{unfolding} of $T$. In this case, we also say that $T$ is a \emph{folding} of $T'$. It is easy to see that, for $T$ to be an unfolding of some other tree, $T$ must contain a vertex $v$ such that $T-v$ has at least two isomorphic branches, by which we mean that there is a root-preserving isomorphism between the two branches. \begin{figure} \caption{Tree $T$ of diameter $4$ rooted at $v$.} \caption{$2$-CBD of $T_1$ (the branch of $T-v$ that contains $v_1$) at $v$.} \caption{An unfolding of a tree $T$.} \label{fig:unfolding} \end{figure} In this paper (as was the case in~\cite{JohnsonSaiago2018}), we are interested in unfoldings that preserve the diameter. For this reason, a CBD will only be performed on branches of $T-v$ that do not contain any main root of $T$ (as the diameter would increase otherwise). An (unlabelled) tree $T$ of diameter $d$ is said to be a \emph{seed} if it cannot be folded into a smaller tree of diameter $d$. The work in~\cite{JOHNSON2020} shows that, for every positive integer $d$, there is a finite number of seeds of diameter $d$. Moreover, every tree of diameter $d$ is an unfolding of precisely one of these seeds.
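For experimentation, a rooted tree can be encoded as a nested tuple whose entries are the subtrees rooted at the children (the root itself is implicit). With this encoding, which is our own illustration rather than notation from the literature, an $s$-CBD at the root amounts to appending $s$ copies of a branch, and the root admits a folding exactly when two of its branches are isomorphic:

```python
K1 = ()                            # the one-vertex rooted tree

def order(t):
    """Number of vertices of a rooted tree encoded as nested tuples."""
    return 1 + sum(order(c) for c in t)

def cbd(t, i, s):
    """s-CBD at the root of t: append s extra copies of the i-th branch."""
    return t + (t[i],) * s

def canon(t):
    """Canonical form: two branches are isomorphic (by a root-preserving
    isomorphism) exactly when their canonical forms are equal."""
    return tuple(sorted(canon(c) for c in t))

def foldable_at_root(t):
    """True if at least two branches at the root are isomorphic."""
    cs = [canon(c) for c in t]
    return len(cs) != len(set(cs))

# Duplicating a leaf branch of P_3 (rooted at its center) twice gives
# a star on 5 vertices, which can be folded back at the root.
star = cbd((K1, K1), 0, 2)
assert order(star) == 5 and foldable_at_root(star)
```

A CBD at an arbitrary vertex $v$ reduces to this case by descending recursively to the subtree rooted at $v$.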
For example, the path $P_{d+1}$ is the only seed for trees of diameter $d\leq 3$. For diameter $4$ and $5$ there are two and three seeds, respectively, and for $d=6$ there are twelve seeds. Note that we can think of unfolding in terms of the operation $\odot$. Let $T$ be a tree and consider a branch $T_j$ of $T$ at a vertex $v$. Let $v_j$ be the root of $T_j$ (i.e. the neighbor of $v$ in $T_j$). Define $T_0=T-T_j$, rooted at $v$. Let $T_j^{(i)}$ be disjoint copies of $T_j$, for $i\in\{1,\ldots,s\}$, whose roots are denoted $v^{(i)}_j$, respectively. It is clear that $T_0\odot(T_j,T^{(1)}_j,\ldots,T^{(s)}_j)$ is an $s$-CBD of $T_j$ at $v$. As mentioned in the introduction, given $d$, we are interested in three families of trees of diameter $d$, namely the trees generated by seeds $S_d$, $S'_d$ and $S''_d$. We formally define them here in terms of the operation $\odot$. In the definition, it is convenient to construct the seeds as trees that are rooted at a central vertex. \begin{defn}\label{def_seeds} Let $S_0=K_1$, $S_1=P_2$ and $S_2=P_3$ be the only trees with a single vertex, two vertices and three vertices, respectively, and consider them as rooted trees such that $S_2$ is rooted at the central vertex. Let $S'_{4}= S_{0}\odot(S_{1},S_{1})$, $S'_{5}= (S_{0}\odot S_{1})\odot(S_{0}\odot S_{1})$ and $S''_{5}= (S_{0}\odot S_{1})\odot(S_{1}\odot S_{1})$. For $k \geq 2$, define \begin{itemize} \item[(i)] $S_{2k-1}=S_{2k-3}\odot S_{2k-3}$ and $S_{2k}=S_{2k-3}\odot (S_{2k-3},S_{2k-3})$; \item[(ii)] $S'_{2k+2}= S_{2k-3}\odot(S_{2k-1},S_{2k-1})$ and $S'_{2k+3}= (S_{2k-3}\odot S_{2k-1})\odot(S_{2k-3}\odot S_{2k-1})$; \item[(iii)] $S''_{2k+3}= (S_{2k-3}\odot S_{2k-1})\odot(S_{2k-1}\odot S_{2k-1})$. \end{itemize} \end{defn} We observe that $S_3$ is the only seed of diameter three, that $S_4$ and $S_4'$ are the only two seeds of diameter four and that $S_5$, $S_5'$ and $S_5''$ are the only three seeds of diameter five. 
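Definition~\ref{def_seeds} is straightforward to implement by encoding a rooted tree as a nested tuple of the subtrees rooted at its children, with $\odot$ acting as tuple concatenation (an illustration of our own, not notation from the text). The sketch below builds $S_d$ recursively and checks that $S_d$ has diameter $d$ and height $\lceil d/2\rceil$, in agreement with the observations above:

```python
def odot(t0, *ts):
    """T0 odot (T1,...,Tp): join the root of T0 to the roots of the Ti."""
    return t0 + ts

def height_diam(t):
    """Height and diameter of a rooted tree encoded as nested tuples."""
    if not t:
        return 0, 0
    sub = [height_diam(c) for c in t]
    hs = sorted((h + 1 for h, _ in sub), reverse=True)
    through = hs[0] + (hs[1] if len(hs) > 1 else 0)
    return hs[0], max(through, max(d for _, d in sub))

# S_0 = K_1, S_1 = P_2 and S_2 = P_3 rooted at its central vertex:
S = {0: (), 1: ((),), 2: ((), ())}
for k in range(2, 6):          # the recursion of Definition def_seeds (i)
    S[2 * k - 1] = odot(S[2 * k - 3], S[2 * k - 3])
    S[2 * k] = odot(S[2 * k - 3], S[2 * k - 3], S[2 * k - 3])

for d, t in S.items():
    assert height_diam(t) == ((d + 1) // 2, d)
```

The seed $S'_4=S_0\odot(S_1,S_1)$ can be built the same way: `height_diam(odot(S[0], S[1], S[1]))` returns `(2, 4)`.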
In Figure~\ref{fig:seeds}, we depict $S_d$, $S_d'$ and $S_d''$ for $d\in \{6,7\}$. \newcommand{\seedseis}{ \begin{tikzpicture}[scale=.7,auto=left,every node/.style={circle,scale=0.3}] \path( 0,0)node[shape=circle,draw,fill=black] (7) {} (0,.5)node[shape=circle,draw,fill=black] (6) {} (0,1)node[shape=rectangle,draw,fill=black] (4) {} (-.5,1)node[shape=circle,draw,fill=black] (5) {} (1,1)node[shape=circle,draw,fill=black] (9) {} (.5,1)node[shape=rectangle,draw,fill=black] (8) {} (.5,0.5)node[shape=circle,draw,fill=black] (10) {} (.5,0)node[shape=circle,draw,fill=black] (11) {} (.25,2)node[shape=rectangle,draw,fill=red] (0) {} (.25,2.5)node[shape=circle,draw,fill=black] (1) {} (.75,2)node[shape=circle,draw,fill=black] (2) {} (1.25,2)node[shape=circle,draw,fill=black] (3) {}; \draw[-](7)--(6); \draw[-](6)--(4); \draw[-](4)--(5); \draw[-](11)--(10); \draw[-](10)--(8); \draw[-](8)--(9); \draw[-](0)--(1); \draw[-](0)--(2); \draw[-](2)--(3); \draw[dotted](0)--(4); \draw[dotted](0)--(8); \end{tikzpicture} } \newcommand{\seedseisum}{ \begin{tikzpicture}[scale=.7,auto=left,every node/.style={circle,scale=0.3}] \path( 0,0)node[shape=circle,draw,fill=black] (7) {} (0,.5)node[shape=circle,draw,fill=black] (6) {} (0,1)node[shape=rectangle,draw,fill=black] (4) {} (-.5,1)node[shape=circle,draw,fill=black] (5) {} (1,1)node[shape=circle,draw,fill=black] (9) {} (.5,1)node[shape=rectangle,draw,fill=black] (8) {} (.5,0.5)node[shape=circle,draw,fill=black] (10) {} (.5,0)node[shape=circle,draw,fill=black] (11) {} (.25,2)node[shape=rectangle,draw,fill=red] (0) {} (.25,2.5)node[shape=circle,draw,fill=black] (1) {}; \draw[-](7)--(6); \draw[-](6)--(4); \draw[-](4)--(5); \draw[-](11)--(10); \draw[-](10)--(8); \draw[-](8)--(9); \draw[-](0)--(1); \draw[dotted](0)--(4); \draw[dotted](0)--(8); \end{tikzpicture} } \newcommand{\seedsete}{ \begin{tikzpicture}[scale=.7,auto=left,every node/.style={circle,scale=0.3}] \path( 0,.5)node[shape=circle,draw,fill=black] (11) {} 
(0.5,.5)node[shape=circle,draw,fill=black] (10) {} (1,.5)node[shape=rectangle,draw,fill=black] (8) {} (1,0)node[shape=circle,draw,fill=black] (9) {} (1.5,0.5)node[shape=rectangle,draw,fill=black] (12) {} (1.5,0)node[shape=circle,draw,fill=black] (13) {} (2,0.5)node[shape=circle,draw,fill=black] (14) {} (2.5,0.5)node[shape=circle,draw,fill=black] (15) {} (0,1.5)node[shape=circle,draw,fill=black] (1) {} (0.5,1.5)node[shape=circle,draw,fill=black] (2) {} (1,1.5)node[shape=rectangle,draw,fill=red] (0) {} (1,2)node[shape=circle,draw,fill=black] (3) {} (1.5,1.5)node[shape=rectangle,draw,fill=black] (5) {} (1.5,2)node[shape=circle,draw,fill=black] (4) {} (2,1.5)node[shape=circle,draw,fill=black] (6) {} (2.5,1.5)node[shape=circle,draw,fill=black] (7) {}; \draw[-](11)--(10); \draw[-](10)--(8); \draw[-](8)--(9); \draw[-](12)--(13); \draw[-](12)--(14); \draw[-](14)--(15); \draw[-](8)--(12); \draw[-](1)--(2); \draw[-](2)--(0); \draw[-](0)--(3); \draw[-](0)--(5); \draw[-](5)--(4); \draw[-](5)--(6); \draw[-](6)--(7); \draw[dotted](0)--(8); \end{tikzpicture} } \newcommand{\seedseteum}{ \begin{tikzpicture}[scale=.7,auto=left,every node/.style={circle,scale=0.3}] \path (0.5,.5)node[shape=circle,draw,fill=black] (10) {} (1,.5)node[shape=rectangle,draw,fill=black] (8) {} (1,0)node[shape=circle,draw,fill=black] (9) {} (1.5,0.5)node[shape=rectangle,draw,fill=black] (12) {} (1.5,0)node[shape=circle,draw,fill=black] (13) {} (2,0.5)node[shape=circle,draw,fill=black] (14) {} (1,1.5)node[shape=rectangle,draw,fill=red] (0) {} (1,2)node[shape=circle,draw,fill=black] (3) {} (1.5,1.5)node[shape=rectangle,draw,fill=black] (5) {} (1.5,2)node[shape=circle,draw,fill=black] (4) {} (2,1.5)node[shape=circle,draw,fill=black] (6) {} (2.5,1.5)node[shape=circle,draw,fill=black] (7) {} (1,-.5)node[shape=circle,draw,fill=black] (16) {} (1.5,-.5)node[shape=circle,draw,fill=black] (17) {}; \draw[-](10)--(8); \draw[-](8)--(9); \draw[-](12)--(13); \draw[-](12)--(14); \draw[-](8)--(12); \draw[-](0)--(3); 
\draw[-](0)--(5); \draw[-](5)--(4); \draw[-](5)--(6); \draw[-](9)--(16); \draw[-](13)--(17); \draw[-](6)--(7); \draw[dotted](0)--(8); \end{tikzpicture} } \newcommand{\seedsetedois}{ \begin{tikzpicture}[scale=.7,auto=left,every node/.style={circle,scale=0.3}] \path (1,.5)node[shape=rectangle,draw,fill=black] (8) {} (1,0)node[shape=circle,draw,fill=black] (9) {} (1.5,0.5)node[shape=rectangle,draw,fill=black] (12) {} (1.5,0)node[shape=circle,draw,fill=black] (13) {} (2,0.5)node[shape=circle,draw,fill=black] (14) {} (2.5,0.5)node[shape=circle,draw,fill=black] (15) {} (1,1.5)node[shape=rectangle,draw,fill=red] (0) {} (1,2)node[shape=circle,draw,fill=black] (3) {} (1.5,1.5)node[shape=rectangle,draw,fill=black] (5) {} (1.5,2)node[shape=circle,draw,fill=black] (4) {} (2,1.5)node[shape=circle,draw,fill=black] (6) {} (2.5,1.5)node[shape=circle,draw,fill=black] (7) {}; \draw[-](8)--(9); \draw[-](12)--(13); \draw[-](12)--(14); \draw[-](14)--(15); \draw[-](8)--(12); \draw[-](0)--(3); \draw[-](0)--(5); \draw[-](5)--(4); \draw[-](5)--(6); \draw[-](6)--(7); \draw[dotted](0)--(8); \end{tikzpicture} } \captionsetup[subfigure]{labelformat=empty} \begin{figure} \caption{Seed $S_{6}$} \label{fig:s6} \caption{Seed $S_{6}^{'}$} \label{fig:s61} \caption{Seed $S_{7}$} \label{fig:s7} \caption{Seed $S_{7}^{'}$} \label{fig:s71} \caption{Seed $S_{7}^{''}$} \label{fig:s72} \caption{The seeds of Definition~\ref{def_seeds} for $d\in \{6,7\}$.} \label{fig:seeds} \end{figure} It turns out that the entire class of trees that may be generated by unfoldings of seeds in $\{S_d\}_{d \geq 0}$ may also be generated recursively using the operation $\odot$. \begin{defn}\label{def_T} Let $\mathcal{T}^\ast$ be the set of trees defined as follows: \begin{enumerate} \item [(i)] $K_1$ is the single tree in $\mathcal{T}^\ast$ with height $0$. \item[(ii)] Let $k,p$ be positive integers and consider trees $T_0,T_1,\ldots,T_p \in \mathcal{T}^\ast$, rooted at a main root, with height $k-1$. 
Then $T=T_0\odot (T_1,\ldots,T_p) \in \mathcal{T}^\ast$. \end{enumerate} \end{defn} Note that, for a tree $T$ defined in (ii), the diameter is $2k-1$ if $p=1$ and the diameter is $2k$ if $p\geq2$. We also observe that $v_0\in T_0$ is the main root of $T= T_0 \odot (T_1,\ldots,T_p)$ and the edge between $T_0$ and $T_1$ is the central edge of $T=T_0\odot T_1$. It is clear that the tree $T$ generated by trees of height $k$ in (ii) has height $k+1$ when it is rooted at a main root. Recall that, if $S$ is a seed of diameter $d$, $\mathcal{T}(S)$ denotes the set of trees of diameter $d$ that are unfoldings of $S$. \begin{prop}\label{prop_equivalence} For any tree $T$ of diameter $d\geq 0$, we have $T \in \mathcal{T}^\ast$ if and only if $T \in \mathcal{T}(S_d)$. \end{prop} \begin{proof} To show that any tree $T \in \mathcal{T}(S_d)$ lies in $\mathcal{T}^\ast$, we prove the following two claims: \begin{itemize} \item[1.] Any seed $S_d$ lies in $\mathcal{T}^\ast$. \item[2.] Assume that $T \in \mathcal{T}^\ast$. Then any tree $T'$ obtained from $T$ by a CBD lies in $\mathcal{T}^\ast$. \end{itemize} For part 1, note that $S_0 \in \mathcal{T}^\ast$ by Definition~\ref{def_T}(i). We have $S_1=S_0 \odot S_0$ and $S_2=S_0 \odot (S_0, S_0)$, and therefore they are elements of height $1$ in $\mathcal{T}^\ast$ by Definition~\ref{def_T}(ii). For larger values of $d$, we proceed inductively. Note that, for all $k \geq 2$, the seed $S_{2k-3}$ (viewed as the rooted tree of Definition~\ref{def_seeds}) has height $k-1$. As a consequence, assuming that $S_{2k-3} \in \mathcal{T}^\ast$, we have $S_{2k-1}=S_{2k-3}\odot S_{2k-3}$ and $S_{2k}=S_{2k-3}\odot (S_{2k-3},S_{2k-3})$ in $\mathcal{T}^\ast$ by Definition~\ref{def_T}(ii). We now prove part 2. We proceed by induction on $k$. For each $k\in\mathbb{N}$, we show that, for any tree $T \in \mathcal{T}^\ast$ of diameter $2k-1$ or $2k$, any CBD of a branch of $T$ leads to a tree $T'$ in $\mathcal{T}^\ast$.
The base case $k=1$ is trivial, as all trees with diameter at most two lie in $\mathcal{T}^\ast$. Suppose that the statement holds for all trees in $\mathcal{T}^\ast$ with diameter at most $2k$, and fix $T=T_0\odot(T_1,\ldots,T_p) \in \mathcal{T}^\ast$ with diameter $d\in\{2k+1,2k+2\}$. By Definition~\ref{def_T}, each $T_i$ lies in $\mathcal{T}^\ast$ and has height $k$, so that its diameter is at most $2k$. Let $T'$ be the tree produced by an $s$-CBD of a branch $U_j$ of $T$ at a vertex $u$. \noindent \textit{Case 1.} Assume that $u\neq v_0$, so that $u\in V(T_i)$ for some $i\in\{0,\ldots,p\}$. By the induction hypothesis, the tree $T'_i$ produced by an $s$-CBD of $U_j$ at $u$ lies in $\mathcal{T}^\ast$. By the definition of branch duplication, $T_i$ and $T_i'$ have the same height. We conclude that $T'\in \mathcal{T}^\ast$ because $T'=T_0\odot(T_1,\ldots,T_{i-1},T'_i,T_{i+1},\ldots,T_p)$, if $i\neq 0$, or $T'=T'_0\odot(T_1,\ldots,T_p)$, if $i= 0$. \noindent \textit{Case 2.} Assume that $u=v_0$, so that either the chosen branch is equal to $T_i$ for some $i\in\{1,\ldots,p\}$ or the chosen branch is a branch in $T_0$. In the latter case, we simply repeat the argument of case 1 with $T_0'$ being produced by a CBD in $T_0$. Otherwise, $u=v_0$ and $U_j=T_i$, so that $T'\in \mathcal{T}^\ast$ because $T' = T_0\odot(T_1,\ldots,T_i, T^{(1)}_i,\ldots,T^{(s)}_i,T_{i+1},\ldots,T_p)$. To conclude the proof of Proposition~\ref{prop_equivalence}, we must show that every tree $T \in \mathcal{T}^\ast$ with diameter $d$ lies in $\mathcal{T}(S_d)$. This will again be done by induction on $k$, the height of the tree $T \in \mathcal{T}^\ast$ (viewed as a rooted tree with root at a main vertex). For $k=1$, the statement is trivially true, because $S_1$ and $S_2$ are the only seeds with diameter $2k-1=1$ and $2k=2$, respectively. 
Suppose that for some $k\in\mathbb{N}$ every $T\in \mathcal{T}^\ast$ of diameter $2k-1$ is an unfolding of $S_{2k-1}$ and every $T\in \mathcal{T}^\ast$ of diameter $2k$ is an unfolding of $S_{2k}$. For the induction step, first fix $T\in \mathcal{T}^\ast$ of diameter $2k+1$. Then, $T=T_0\odot T_1$, for some $T_0,T_1 \in \mathcal{T}^\ast$ of height $k$. By hypothesis, $T_0$ and $T_1$ may be folded until we arrive at seeds $S^{(0)}$ and $S^{(1)}$, respectively. There are three possibilities: \begin{enumerate} \item [(i)] $S^{(0)}=S_{2k-1}$ and $S^{(1)}=S_{2k-1}$. In this case, $S^{(0)}\odot S^{(1)}=S_{2k-1}\odot S_{2k-1}=S_{2k+1}$ is a folding of $T$, as required. \item [(ii)] $S^{(0)}=S_{2k-1}$ and $S^{(1)}=S_{2k}$ (the case $S^{(0)}=S_{2k}$ and $S^{(1)}=S_{2k-1}$ is analogous). In this case, \begin{eqnarray*} S^{(0)}\odot S^{(1)}&=&S_{2k-1}\odot S_{2k}\\ &=& S_{2k-1} \odot \left(S_{2k-3}\odot(S_{2k-3},S_{2k-3}) \right). \end{eqnarray*} The pair $(S_{2k-3},S_{2k-3})$ may be folded into a single occurrence of $S_{2k-3}$ without decreasing the diameter, so that we get $$S_{2k-1} \odot \left(S_{2k-3} \odot S_{2k-3}\right) = S_{2k-1} \odot S_{2k-1}=S_{2k+1},$$ as required. \item [(iii)] $S^{(0)}=S_{2k}$ and $S^{(1)}=S_{2k}$. This case is similar to case (ii), as \begin{eqnarray*} S^{(0)}\odot S^{(1)}&=&S_{2k}\odot S_{2k}\\ &=& \left(S_{2k-3}\odot(S_{2k-3},S_{2k-3})\right) \odot \left(S_{2k-3}\odot(S_{2k-3},S_{2k-3}) \right). \end{eqnarray*} In this case, we can fold each pair $(S_{2k-3},S_{2k-3})$ into a single occurrence of $S_{2k-3}$, and the result follows as above. \end{enumerate} To conclude the proof, assume that $T\in \mathcal{T}^\ast$ has diameter $2k+2$. Then, $T=T_0\odot (T_1,\ldots,T_p)$, $p\geq 2$, for some $T_0,T_1,\ldots,T_p \in \mathcal{T}^\ast$ of height $k$. Each $T_i$ may be folded down to $S_{2k-1}$ or to $S_{2k}$, according to its diameter.
Each occurrence of $S_{2k}$ may be replaced by $S_{2k-3}\odot(S_{2k-3},S_{2k-3})$, which can be folded to $S_{2k-3} \odot S_{2k-3}=S_{2k-1}$. This means that we reach $S_{2k-1} \odot (S_{2k-1},\ldots,S_{2k-1})$, where the list contains at least two terms. If it has more than two terms, additional terms may be removed by foldings of branches $S_{2k-1}$ without decreasing the diameter. When we reach $S_{2k-1} \odot (S_{2k-1},S_{2k-1})$, no further folding can be performed, as it would decrease the diameter. The result follows because $S_{2k-1} \odot (S_{2k-1},S_{2k-1})=S_{2k+2}$. \end{proof} The trees generated by unfoldings of the other seeds in~Definition~\ref{def_seeds} may also be described by decompositions involving the operation $\odot$, as stated in the proposition below. The arguments needed to prove it are quite similar to the ones used to prove Proposition~\ref{prop_equivalence}, and the proof is therefore omitted. The interested reader may find the proof of item (ii) in the appendix. \begin{prop}\label{equivalence_other_seeds} Let $T$ be a tree and $k\geq1$. The following hold: \begin{enumerate} \item [(i)] $T\in\mathcal{T}(S'_{2k+2})$ if, and only if, there exist $T_1,\ldots,T_p\in \mathcal{T}^\ast, p\geq 2$, of height $k$ and $T_0\in \mathcal{T}^\ast$ of height $k-1$ such that $T=T_0\odot(T_1,\ldots,T_p)$; \item [(ii)] $T\in\mathcal{T}(S'_{2k+3})$ if, and only if, there exist $T_1,\ldots,T_p,T'_1,\ldots,T'_q\in \mathcal{T}^\ast, p,q\geq 1$, of height $k$ and $T_0,T'_0\in \mathcal{T}^\ast$ of height $k-1$ such that $T=(T_0\odot(T_1,\ldots,T_p))\odot (T'_0\odot(T'_1,\ldots,T'_q))$; \item [(iii)] $T\in\mathcal{T}(S''_{2k+3})$ if, and only if, there exist $T_1,\ldots,T_p,T'_0,\ldots,T'_q\in \mathcal{T}^\ast, p,q\geq 1$, of height $k$ and $T_0\in \mathcal{T}^\ast$ of height $k-1$ such that $T=(T_0\odot(T_1,\ldots,T_p))\odot (T'_0\odot(T'_1,\ldots,T'_q))$.
\end{enumerate} \end{prop} To conclude this section, we present a useful connection between a symmetric matrix $M$ with underlying tree $T = T_0 \odot (T_1,\ldots,T_p)$ and induced submatrices corresponding to the subtrees $T_i$. \begin{lemat}\label{lema_define_max} Let $T_0,\ldots,T_p$ be rooted trees with roots $v_0,\ldots,v_p$, respectively, where $p\geq 1$. Let $T = T_0 \odot (T_1,\ldots,T_p)$. Given $M_i \in \mathcal{S}(T_i)$, for $i\in\{0,\ldots,p\}$, and $\delta \neq 0$, let $M$ be the matrix $M=(m_{ij})\in \mathcal{S}(T)$ where $M[T_i]=M_i$ and $m_{v_0v_i}=\delta$ for all $i \in \{1,\ldots,p\}$ (see Figure~\ref{lemma_2.10fig}). The following hold: \begin{enumerate} \item [(i)] $\lambda_{\min}(M)<\lambda<\lambda_{\max}(M)$, for all $\lambda\in \bigcup_{\ell=0}^p \Spec(M_\ell)$. \item [(ii)] Given $y$ such that $y>\lambda$ for all $\lambda\in \bigcup_{\ell=0}^p \Spec(M_\ell)$, there exists $\delta(y)>0$ such that setting $\delta=\delta(y)$ gives $\lambda_{\max}(M) = y$. \item [(iii)] Given $y$ such that $y<\lambda$ for all $\lambda\in \bigcup_{\ell=0}^p \Spec(M_\ell)$, there exists $\delta(y)>0$ such that setting $\delta=\delta(y)$ gives $\lambda_{\min}(M) = y$. \end{enumerate} \end{lemat} \begin{proof} Let $p\geq 1$ and let $T_0,\ldots,T_p$ be rooted trees with roots $v_0,\ldots,v_p$, respectively. Given $M_i \in \mathcal{S}(T_i)$ and $\delta \neq 0$, define $M\in \mathcal{S}(T)$, where $T = T_0 \odot (T_1,\ldots,T_p)$, such that $M[T_i]=M_i$ and $m_{v_0v_i}=\delta$ for all $i \in \{1,\ldots,p\}$. \begin{figure} \caption{Matrix $M$ as in the statement of Lemma~\ref{lema_define_max}. The rows and columns of the matrix are ordered according to the tree $T_i$ they come from.} \label{lemma_2.10fig} \end{figure} Consider $\beta=\max\{\lambda\in\mbox{Spec}(M_{i}): 0\leq i \leq p\}$, so that $\beta=\lambda_{\max}(M_{\ell})$ for some $0\leq \ell \leq p$. We start with part (i). First, assume that $\ell> 0$.
Since $\beta=\lambda_{\max}(M_\ell)$, Theorem~\ref{thm:simpleroots} tells us that an application of \texttt{Diagonalize}$(M[T_\ell],-\beta)$ with root $v_\ell$ assigns negative values to all vertices of $T_\ell$ except $v_\ell$, for which the value is 0. This coincides with the values assigned to these vertices in an application of \texttt{Diagonalize}$(M,-\beta)$ with root $v_0$ before processing $v_0$. Since $v_{\ell}$ is a child of $v_{0}$ with value $0$, when the algorithm processes $v_{0}$, it redefines $d_{v_j}=2>0$ and $d_{v_0}<0$ for some child $v_j$ of $v_0$ (possibly $j=\ell$). Therefore, according to Theorem~\ref{inertia}(a), $\beta<\lambda_{\max}(M)$. Next assume that $\beta=\lambda_{\max}(M_0)>\lambda_{\max}(M_i)$ for all $i>0$. As in the previous case, an application of \texttt{Diagonalize}$(M[T_0],-\beta)$ with root $v_0$ assigns negative values to all vertices of $T_0$ except $v_0$. Moreover, by Theorem~\ref{inertia}(b), applying \texttt{Diagonalize}$(M[T_i],-\beta)$ to each $T_i$ with root $v_i$ assigns negative values to all vertices of $T_i$. As before, all these values coincide with the values assigned by an application of \texttt{Diagonalize}$(M,-\beta)$ with root $v_0$. When \texttt{Diagonalize}$(M,-\beta)$ processes $v_0$, it assigns the value \begin{equation}\label{eq1} d_{v_{0}}=(m_{v_0v_0}-\beta)-\sum_{w\in C_{T_0}(v_0)}\frac{m_{v_{0}w}^{2}}{d_w}-\sum_{i=1}^{p}\frac{m_{v_{0}v_{i}}^{2}}{d_{v_{i}}}, \end{equation} where $C_{T_0}(v_0)$ denotes the set of children of $v_0$ in $T_0$. Also note that, when we run \texttt{Diagonalize}$(M[T_0],-\beta)$ with root $v_0$, we obtain the final value $$0=(m_{v_0v_0}-\beta)-\sum_{w\in C_{T_0}(v_0)}\frac{m_{v_{0}w}^{2}}{d_w}.$$ Thus, as $d_{v_{i}}<0$ for $1\leq i \leq p$, equation \eqref{eq1} becomes $$d_{v_{0}}=-\sum_{i=1}^{p}\frac{m_{v_{0}v_{i}}^{2}}{d_{v_{i}}}>0,$$ so that $\beta<\lambda_{\max}(M)$ by Theorem~\ref{inertia}(a).
To prove that $\lambda_{\min}(M)<\lambda$, for all $\lambda\in \bigcup_{\ell=0}^p \Spec(M_\ell)$, it suffices to apply this result to $-M$, as $\lambda_{\min}(M)=-\lambda_{\max}(-M)$. To prove part (ii), fix $y>\beta$. We run \texttt{Diagonalize}$(M,-y)$ with root $v_0$. Just before we process $v_{0}$, all its children have been assigned negative values. Then we have \begin{equation}\label{eq2} d_{v_{0}}=(m_{v_0v_0}-y)-\sum_{w\in C_{T_0}(v_0)}\frac{m_{v_{0}w}^{2}}{d_w}-\sum_{i=1}^{p}\frac{\delta^{2}}{d_{v_{i}}}. \end{equation} Moreover, when we run \texttt{Diagonalize}$(M[T_0],-y)$, it assigns final permanent value $$0>d^{(T_0)}_{v_0}=(m_{v_0v_0}-y)-\sum_{w\in C_{T_0}(v_0)}\frac{m_{v_{0}w}^{2}}{d_w},$$ since $y>\beta\geq\lambda_{\max}(M_{0})$. To obtain $d_{v_0}=0$ in~\eqref{eq2} we can set $$\delta(y)=\sqrt{\frac{\left((m_{v_0v_0}-y)-\sum_{w\in C_{T_0}(v_0)}\frac{m_{v_{0}w}^{2}}{d_w}\right)}{\sum_{i=1}^{p}\frac{1}{d_{v_{i}}}}}.$$ The expression within the square root is positive because its numerator equals $d^{(T_0)}_{v_0}<0$ and its denominator $\sum_{i=1}^{p}\frac{1}{d_{v_{i}}}$ is also negative, since $d_{v_{i}}<0$ for $1\leq i \leq p$. Item (iii) may be derived from item (ii) by considering the matrix $-M$. \end{proof} \section{Strongly realizable sets}\label{sec:strongly_realizable} In this section, we shall state the technical result that implies the validity of Theorem~\ref{thm:main}, namely Theorem~\ref{main_th} below. This technical result allows us to inductively define a set of real numbers (of size $d+1$) that is equal to the set of distinct eigenvalues of a symmetric matrix $M(T)$ whose underlying graph is a tree $T \in \mathcal{T}^\ast$ with diameter $d$. \begin{defn} Let $T=(V,E)$ be a tree with main root $v$. Let $M\in \mathcal{S}(T)$. For each $\lambda\in\DSpec(M)$ we define $$L(M,\lambda) = \min_{u\in V(T)}\{d(v,u):\tilde{d}_u = 0 \},$$ where $\tilde{d}_u$ denotes the final value assigned to $u$ by \texttt{Diagonalize}$(M,-\lambda)$ with root $v$.
\end{defn} In the definition below, and in the remainder of the paper, we shall use the notation $\{\lambda_1<\cdots<\lambda_{\ell}\}$ to refer to a set $\{\lambda_1,\ldots,\lambda_{\ell}\}$ of real numbers such that $\lambda_1<\cdots<\lambda_{\ell}$. \begin{defn} Given $k\in\mathbb{N}$, a set of real numbers $A = \{\lambda_0<\cdots<\lambda_{2k}\}$ is said to be \emph{strongly realizable} in a family of rooted trees $\mathcal{C}=\{T_i\}_{i\in\mathcal{I}}$, where $\mathcal{I}$ is a set of indices, if the following holds for any $T\in \mathcal{C}$ of height $k$ and root $v$. There exists $M\in \mathcal{S}(T)$ satisfying: \begin{enumerate} \item $\DSpec(M) \subseteq A$; \item $L(M,\lambda_{2i})=0$, $0\leq i\leq k$; \item $m_{M[T-v]}(\lambda_{2i-1})=m_{M}(\lambda_{2i-1})+1$, $1\leq i\leq k$. \end{enumerate} A matrix $M$ with the above properties is said to be a \emph{strong realization} of $A$ in $\mathcal{C}$. \end{defn} Note that, by this definition, the values $\lambda_0,\lambda_2,\ldots,\lambda_{2k}$ must be in the spectrum of $M$. \begin{example} We show that the set $\{\lambda_0,\ldots,\lambda_4\}=\{-2,-1,0,1,3\}$ is strongly realizable in $\mathcal{T}^\ast$. To this end, we need to show that the following holds for any $T\in \mathcal{T}^\ast$ with diameter $d\in \{3,4\}$. If $d=3$, there must be a matrix $M$ with underlying tree $T$ whose spectrum contains $-2,0,3$ and at least one of the elements $-1$ and $1$. For $d=4$, the set of distinct eigenvalues must be equal to $\{-2,-1,0,1,3\}$. Moreover, conditions (2) and (3) must hold in both cases. Weights that satisfy these properties are given in Figure~\ref{ex:realizable}. Note that the diameter is equal to $3$ if $p=1$ and equal to $4$ if $p\geq 2$. The properties (1)-(3) may be easily checked by applying \texttt{Diagonalize}$(M,-\lambda)$ for values of $\lambda$ in this set. 
We may further verify that $m_{M}(-2)=1$, $m_{M}(-1)=p-1$, $m_{M}(0)=1-p+\sum_{i=1}^p t_i$, $m_{M}(1)=t_0+p-1$ and $m_{M}(3)=1$, so that the multiplicities add up to $|V(T)|$. Observe that $\lambda=-1$ is an eigenvalue of $M$ if and only if the diameter is 4. \begin{figure} \caption{A weighted tree of diameter $4$ with spectrum $\{-2,-1,0,1,3\}$ that satisfies (2) and (3). Note that choosing $p\geq 2$ and $t_i \geq 1$ for all $i\in \{0,\ldots,p\}$ produces all possible trees in $\mathcal{T}^\ast$. } \label{ex:realizable} \end{figure} \end{example} The main technical result in this section is the following. It states that, for every $k \in \mathbb{N}$, there exists a set of real numbers $C_k=\{\lambda_0<\lambda_1<\cdots<\lambda_{2k+1}\}$ such that the subsets $C_k^{(0)}=\{\lambda_0<\lambda_1<\cdots<\lambda_{2k}\}$ and $C_k^{(1)}=\{\lambda_1<\lambda_2<\cdots<\lambda_{2k+1}\}$ are both strongly realizable in $\mathcal{T}^\ast$. Moreover, as long as $\theta,\delta>0$ are sufficiently small, the sets $\{\lambda_0+\theta<\lambda_1<\cdots<\lambda_{2k}\}$ and $\{\lambda_1<\cdots<\lambda_{2k}<\lambda_{2k+1}+\delta\}$ must also be strongly realizable in $\mathcal{T}^\ast$. \begin{theorem}\label{main_th} Let $\alpha<\beta$ be real numbers. For every $k\in\mathbb{N}$, there exists a set of real numbers $C_k=\{\lambda_0<\lambda_1<\cdots<\lambda_{2k+1}\}$, where $\lambda_k=\alpha$ and $\lambda_{k+1}=\beta$, such that the following holds for every $T \in \mathcal{T}^\ast$ with height $k$, diameter $d$ and main root $v$. 
There exist matrices $M_1^{(k)},M_2^{(k)}\in \mathcal{S}(T)$ satisfying the following: \begin{itemize} \item[(i)] $\DSpec(M_1^{(k)}) = \{\lambda_0,\ldots,\lambda_{2k}\}$, if $d=2k$; $\DSpec(M_1^{(k)}) = \{\lambda_0,\ldots,\lambda_{2k}\}\setminus\{\lambda_1\}$, if $d=2k-1$; \item[(ii)] $\DSpec(M_2^{(k)})=\{\lambda_1,\ldots,\lambda_{2k+1}\}$, if $d=2k$; $\DSpec(M_2^{(k)})=\{\lambda_1,\ldots,\lambda_{2k+1}\}\setminus\{\lambda_{2k}\}$, if $d=2k-1$; \item[(iii)] $L(M_1^{(k)},\lambda_{2i})=0=L(M_2^{(k)},\lambda_{2i+1})$, for $i\in\{0,\ldots,k\}$; \item[(iv)] $m_{M_1^{(k)}[T-v]}(\lambda_{2i-1})=m_{M_1^{(k)}}(\lambda_{2i-1})+1$ and $m_{M_2^{(k)}[T-v]}(\lambda_{2i})=m_{M_2^{(k)}}(\lambda_{2i})+1$, for $i\in\{1,\ldots,k\}$. \end{itemize} Moreover, the following are satisfied. \begin{itemize} \item[(v)] Let $y_k=\frac{\beta-\alpha}{2^{k-1}}$. For all $\theta\in(0,y_k)$, there exists $M_{1,\theta}^{(k)}$ such that $$\DSpec(M_{1,\theta}^{(k)})\subseteq\{\lambda_0+\theta,\lambda_1,\ldots,\lambda_{2k}\},$$ $L(M_{1,\theta}^{(k)},\lambda_{0}+\theta)=0$, and, for all $i\in\{1,\ldots,k\}$, we have $L(M_{1,\theta}^{(k)},\lambda_{2i})=0$ and $m_{M_{1,\theta}^{(k)}[T-v]}(\lambda_{2i-1})=m_{M_{1,\theta}^{(k)}}(\lambda_{2i-1})+1$. \item[(vi)] For all $\delta\in(0,y_k)$, there exists $M_{2,\delta}^{(k)}$ such that $$\DSpec(M_{2,\delta}^{(k)})\subseteq\{\lambda_1,\ldots,\lambda_{2k},\lambda_{2k+1}+\delta\},$$ $L(M_{2,\delta}^{(k)},\lambda_{2k+1}+\delta)=0$, and, for all $i\in\{1,\ldots,k\}$, we have $L(M_{2,\delta}^{(k)},\lambda_{2i-1})=0$ and $m_{M_{2,\delta}^{(k)}[T-v]}(\lambda_{2i})=m_{M_{2,\delta}^{(k)}}(\lambda_{2i})+1$. \end{itemize} \end{theorem} We emphasize that, in our proof of Theorem~\ref{main_th}, the set $C_k$ does depend on $k$, in the sense that $C_{k+1}$ is not obtained from $C_k$ by the inclusion of two new elements. The proof of Theorem~\ref{main_th} will be the subject of the next section. 
We now observe that it immediately implies that Theorem~\ref{thm:main} holds for trees in $\mathcal{T}^\ast$. \begin{proof}[Proof of Theorem~\ref{thm:main} for trees in $\mathcal{T}^\ast$] Let $T\in \mathcal{T}^\ast$ with diameter $d$. Theorem~\ref{main_th}(i) tells us that $T$ admits a matrix $M(T)$ with $d+1$ distinct eigenvalues that is a realization of a set of $d+1$ real numbers ($C_k\setminus\{\lambda_{2k+1}\}$, if $d=2k$; $C_k\setminus \{\lambda_1,\lambda_{2k+1}\}$, if $d=2k-1$). By Theorem~\ref{thm:LB}, we deduce that $q(T)=d+1$. \end{proof} \section{Proof of Theorem~\ref{main_th}}\label{sec:proof_technical} Theorem~\ref{main_th} will be proved by induction. One of the main ingredients for the induction step is the following result, which gives a construction, based on the operation $\odot$, that combines the spectra of a collection of matrices into the spectrum of a larger matrix. \begin{lemat}\label{multiplicidades} Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be two families of rooted trees. Let $k_1$ and $k_2$ be nonnegative integers and assume that $A_1=\{\lambda_0<\cdots<\lambda_{2k_1}\}$ and $A_2=\{\mu_0<\cdots<\mu_{2k_2}\}$ are two strongly realizable sets in $\mathcal{C}_1$ and $\mathcal{C}_2$, respectively, such that $(A_1\cup A_2)\setminus(A_1\cap A_2)=\{a, b\}$, with $a =\min\{\lambda_0,\mu_0\}$ and $b=\max\{\lambda_{2k_1},\mu_{2k_2}\}$. Suppose that there is a partition $A_1 \cap A_2 = \Lambda_1 \cup \Lambda_2$ with the following property: for any trees $T_1\in\mathcal{C}_1$ and $T_2\in\mathcal{C}_2$ with heights $k_1$ and $k_2$ and roots $v_1$ and $v_2$, respectively, there exist a strong realization $M_1(T_1)$ of $A_1$ and a strong realization $M_2(T_2)$ of $A_2$ such that \begin{enumerate} \item[(i)] For all $\lambda\in \Lambda_1$, we have $L(M_1(T_1),\lambda)=0$ and $m_{M_2[T_2-v_2]}(\lambda)=m_{M_2(T_2)}(\lambda)+1$.
\item[(ii)] For all $\lambda\in \Lambda_2$, we have $L(M_2(T_2),\lambda)=0$ and $m_{M_1[T_1-v_1]}(\lambda)=m_{M_1(T_1)}(\lambda)+1$. \end{enumerate} Then the following holds for a tree $T= T_0 \odot (T_1,\ldots,T_p)$ with main root $v_0$, where $T_1,\ldots,T_p\in\mathcal{C}_1$, $p\geq1$, have height $k_1$, and $T_0\in\mathcal{C}_2$ has height $k_2$. Consider a matrix $M\in \mathcal{S}(T)$ for which $M[T_0]=M_2(T_0)$ and $M[T_i]=M_1(T_i),1\leq i\leq p$ (see Figure~\ref{lemma_3.4fig}). Then there exist $\lambda_{\min},\lambda_{\max}\in \mathbb{R}$ such that the following hold: \begin{itemize} \item[(a)] $\displaystyle{\DSpec(M) = \begin{cases} (A_1\cap A_2) \cup \{\lambda_{\min},a,b,\lambda_{\max}\}, & \textrm{ if }p>1 \textrm{ and }a,b\in A_1 ,\\ (A_1\cap A_2) \cup \{\lambda_{\min},a,\lambda_{\max}\}, & \textrm{ if }p>1 \textrm{ and }\{a\}= A_1\cap\{a,b\} ,\\ (A_1\cap A_2) \cup \{\lambda_{\min},b,\lambda_{\max}\}, & \textrm{ if }p>1 \textrm{ and }\{b\}= A_1\cap\{a,b\} ,\\ (A_1\cap A_2) \cup \{\lambda_{\min},\lambda_{\max}\}, & \textrm{ if }p=1 \textrm{ or } (p\geq1 \textrm{ and }a,b\in A_2); \end{cases}}$ \item[(b)] For $\lambda\in A_1\cap A_2$, $$m_M(\lambda)=m_{M_2(T_0)}(\lambda)+\sum_{i=1}^pm_{M_1(T_i)}(\lambda);$$ \item[(c)] $L(M,\lambda)=0$ for all $\lambda\in \Lambda_2$; \item[(d)] $m_{M[T-v_0]}(\lambda)=m_{M}(\lambda)+1$, for all $\lambda\in \Lambda_1$; \item[(e)] For $x\in \{a,b\}$, $$m_M(x)= \begin{cases} p-1 \textrm{ and } m_{M[T-v_0]}(x)=m_{M}(x)+1, \textrm{ if } x\in A_1\\ 0, \textrm{ if } x\in A_2; \end{cases}$$ \item[(f)] $\lambda_{\max}+\lambda_{\min}=a+b$. \end{itemize} \end{lemat} \begin{proof} Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be two families of rooted trees. Fix $k_1, k_2, A_1, A_2$, and $\Lambda_1,\Lambda_2$ satisfying the conditions of the lemma. Let $T = T_0 \odot (T_1,\ldots,T_p)$, where $T_1,\ldots,T_p\in\mathcal{C}_1$, $p\geq1$, have height $k_1$, and $T_0\in\mathcal{C}_2$ has height $k_2$. 
Let $v_0,\ldots,v_p$ be the roots of $T_0,\ldots,T_p$, respectively, and let $w_1,\ldots,w_q$ be the children of $v_0$ in $T_0$. Let $M$ be a matrix as defined in the statement of the lemma, depicted in Figure~\ref{lemma_3.4fig}. Note that the entries associated with edges of the form $v_0v_i$, where $i>0$, have not been assigned any particular values. \begin{figure} \caption{The matrix $M$ given in the statement of Lemma~\ref{multiplicidades}. The rows and columns of the matrix are ordered according to the tree $T_i$ they come from.} \label{lemma_3.4fig} \end{figure} Let $n=|V(T)|$. Clearly, \begin{eqnarray} n&=& \sum_{i=0}^p |V(T_i)|\nonumber \\ &=&\sum_{j=0}^{2k_2}m_{M_2(T_0)}(\mu_j) + \sum_{i=1}^p \sum_{j=0}^{2k_1}m_{M_1(T_i)}(\lambda_j) \nonumber\\ &\stackrel{(*)}{=}&p^{\delta_{a1}}+p^{\delta_{b1}} +\sum_{\lambda\in A_1\cap A_2} \left(m_{M_2(T_0)}(\lambda)+ \sum_{i=1}^p m_{M_1(T_i)}(\lambda)\right). \label{eq:multiplicities1} \end{eqnarray} In (*), $\delta_{a1}= 1$ if $a\in A_1$ and $\delta_{a1}=0$ otherwise, and $\delta_{b1}= 1$ if $b\in A_1$ and $\delta_{b1}=0$ otherwise. The term $p^{\delta_{a1}}+p^{\delta_{b1}}$ accounts for the total multiplicity of $a$ and $b$ as eigenvalues of the matrices $M_2(T_0)$ and $M_1(T_1),\ldots,M_1(T_p)$: the least and the greatest eigenvalue of each of these matrices is simple by Theorem~\ref{thm:simpleroots}, so $a$ contributes $p$ if $a\in A_1$ (once for each $T_i$ with $i\geq 1$) and $1$ if $a\in A_2$, and likewise for $b$. We use the algorithm of Section~\ref{sec:eigenvalue_location} to compute the spectrum of $M$. First, we prove parts (b), (c) and (d) for elements $\lambda \in A_1 \cap A_2 = \Lambda_1\cup \Lambda_2$. Consider an application of \texttt{Diagonalize}$(M,-\lambda)$ with root $v_0$. Before $v_0$ is processed, everything happens as if we had processed \texttt{Diagonalize}$(M_2(T_0),-\lambda)$ and \texttt{Diagonalize}$(M_1(T_i),-\lambda)$, for $i \in \{1,\ldots,p\}$. When we process the main root $v_0$, we have two cases according to whether $\lambda \in \Lambda_1$ or $\lambda \in \Lambda_2$.
If $\lambda\in\Lambda_2$, we have $m_{M_1[T_i-v_i]}(\lambda)=m_{M_1(T_i)}(\lambda)+1$ and $L(M_2(T_0),\lambda)=0$. By Lemma~\ref{proposition}, each $v_i$, $i\in\{1,\ldots,p\}$, has a child $u_i$ for which \texttt{Diagonalize}$(M,-\lambda)$ assigns $d_{u_i}=0$ (before processing $v_i$). Then, when $v_i$ is processed, it is assigned a negative value, one of its children with value $0$ (possibly $u_i$) is assigned value $2$, and the edge connecting $v_i$ to $v_0$ is deleted. So, processing $v_0$ in \texttt{Diagonalize}$(M,-\lambda)$ is the same as processing $v_0$ in \texttt{Diagonalize}$(M_2(T_0),-\lambda)$. In particular, $d_{v_0}=0$, since $L(M_2(T_0),\lambda)=0$ by hypothesis. Combining these arguments, we see that the multiplicity of $\lambda$ as an eigenvalue of $M$ satisfies $$m_M(\lambda)=m_{M_2(T_0)}(\lambda)+\sum_{i=1}^pm_{M_1(T_i)}(\lambda).$$ We have seen that $L(M,\lambda)=0$. Next suppose $\lambda\in\Lambda_1$, so that $L(M_1(T_i),\lambda)=0$, for all $i\in\{1,\ldots,p\}$, and $m_{M_2[T_0-v_0]}(\lambda)=m_{M_2}(\lambda)+1$. In this case, if we consider \texttt{Diagonalize}$(M,-\lambda)$ just before it processes the root $v_0$, we have $d_{v_i}=0$ for all $i\in\{1,\ldots,p\}$, and by Lemma~\ref{proposition} there is $s\in\{1,\ldots,q\}$ such that the algorithm assigns $d_{w_{s}}=0$ before processing $v_0$. Then, when we process $v_0$, we may suppose that the algorithm assigns $d_{w_s}=2$ and $d_{v_0}<0$, and that the remaining children with value $0$ are left unmodified. This also implies that $$m_M(\lambda)=m_{M_2(T_0)}(\lambda)+\sum_{i=1}^pm_{M_1(T_i)}(\lambda).$$ Moreover, it is clear that $L(M,\lambda)=1$ and that $m_{M[T-v_0]}(\lambda)=m_{M}(\lambda)+1$. Next we prove (e). First assume that $x = a\in A_1$.
Since $x$ is the least eigenvalue of $M_1(T_i)$ for all $i$, when applying \texttt{Diagonalize}$(M_1(T_i),-x)$, we get $d_{v_1}=\cdots=d_{v_p}=0$, while all other vertices in these trees are assigned a positive value (see Theorem~\ref{thm:simpleroots}). Also, since $x<\lambda_{\min}(M_2(T_0))$, \texttt{Diagonalize}$(M_2(T_0),-x)$ assigns positive values to all vertices of $T_0$. After processing $v_0$, one of the values $d_{v_i}$ above becomes 2, while $v_0$ is assigned a negative value. By Theorem~\ref{inertia}, this means that $m_M(x)=p-1$ and that there is a single eigenvalue of $M$ less than $x$. In particular, if $p=1$, $x$ is not an eigenvalue of $M$, but satisfies $m_{M[T-v_0]}(x)=m_{M}(x)+1$. For $p\geq 2$, we get $L(M,x)=1$. The case $b\in A_1$ is analogous, with the least eigenvalue being replaced by the greatest eigenvalue. If $x=a\in A_2$, then when we apply \texttt{Diagonalize}$(M(T),-x)$, all vertices $v$ except $v_0$ are assigned a positive value. When processing $v_0$, the algorithm produces \begin{equation}\label{eq:3} d_{v_{0}}=d^{(T_0)}_{v_0} - \sum_{i=1}^{p}\frac{m_{v_{0}v_{i}}^{2}}{d_{v_{i}}}. \end{equation} Since $L(M[T_0],x)=0$ by hypothesis, we have $d^{(T_0)}_{v_0}=0$. So the expression in~\eqref{eq:3} is negative. Theorem~\ref{inertia} implies that $|V(T)|-1$ eigenvalues of $M$ are greater than $x$ and one eigenvalue is less than $x$. To prove part (a), summing the multiplicities, we obtain \begin{eqnarray*} m_M(a)&+& m_M(b) + \sum_{\lambda\in A_1\cap A_2} m_M(\lambda) \\&=& \left(p^{\delta_{a1}}-1\right) +\left(p^{\delta_{b1}}-1\right) + \sum_{\lambda\in A_1\cap A_2} \left( m_{M_2(T_0)}(\lambda)+\sum_{i=1}^pm_{M_1(T_i)}(\lambda)\right)\\ &\stackrel{\eqref{eq:multiplicities1}}{=}& n-2. \end{eqnarray*} This means that exactly two eigenvalues of $M$ remain unaccounted for, namely $\lambda_{\max}(M)$ and $\lambda_{\min}(M)$, each with multiplicity one, establishing (a). Finally, we prove (f) using an argument based on the trace of a matrix.
(Recall that the trace $\tr(M)$ of a square matrix $M$ is the sum of its diagonal elements; equivalently, it is the sum of its eigenvalues.) Since the diagonal entries of $M$ are exactly those of $M_2(T_0)$ and $M_1(T_1),\ldots,M_1(T_p)$, we have $\tr(M) = \tr(M_2(T_0)) +\sum_{i=1}^p \tr(M_1(T_i))$. Writing both sides of this identity as sums of eigenvalues and cancelling the common contributions, which are determined by our conclusions in (b) and (e) above, we obtain \begin{equation}\label{eq:part_f} \lambda_{\max}+\lambda_{\min}= a+b, \end{equation} as required. \end{proof} We are now ready to prove Theorem~\ref{main_th}. \begin{proof}[Proof of Theorem~\ref{main_th}] We proceed by induction on $k$. For $k=1$, let $\alpha<\beta$ be real numbers and consider the set of real numbers $C_1=\{2\alpha-\beta,\alpha,\beta,2\beta-\alpha\}=\{\lambda_0^{(1)}<\lambda_1^{(1)}<\lambda_2^{(1)}<\lambda_3^{(1)}\}$. Let $T \in \mathcal{T}^\ast$ be a tree of diameter $d\in \{1,2\}=\{2k-1,2k\}$ with main root $v_0$. We define $M_1^{(1)}(T):=M_1^{(1)}$ as follows: set all diagonal entries of $M_1^{(1)}$ equal to $\alpha$. By Lemma~\ref{lema_define_max}(ii), we may assign weights to the edges between $v_0$ and its children such that $\beta$ is the maximum eigenvalue of $M_1^{(1)}$. Applying \texttt{Diagonalize}$(M^{(1)}_1,-\alpha)$, it is easy to see that $m_{M^{(1)}_1}(\alpha)=|V(T)|-2$ (in particular, this number is $0$ if $T$ has diameter $1$) and that $m_{M_1^{(1)}[T-v_0]}(\alpha)=m_{M_1^{(1)}}(\alpha)+1$. As a consequence, $L(M_1^{(1)},\alpha)=1$ if $\alpha$ is an eigenvalue of $M_1^{(1)}$. Moreover, by Theorem~\ref{inertia}, the two remaining eigenvalues must be $\lambda_{\min}(M_1^{(1)})$ and $\lambda_{\max}(M_1^{(1)})=\beta$. Considering the trace of $M_1^{(1)}$, we obtain $$\lambda_{\min}+(|V(T)|-2)\alpha + \beta= |V(T)|\cdot \alpha.$$ This shows that $\lambda_{\min}=2\alpha-\beta$, so that $\DSpec(M^{(1)}_1)\subseteq\{\lambda_0^{(1)},\lambda_1^{(1)},\lambda_2^{(1)}\}$. By our proof of Theorem~\ref{thm:simpleroots}, we know that $L(M_1^{(1)},\lambda_0^{(1)})=L(M_1^{(1)},\lambda_2^{(1)})=0$. This shows that $\{\lambda_0^{(1)},\lambda_1^{(1)},\lambda_2^{(1)}\}$ is strongly realizable for trees of height 1 in $\mathcal{T}^\ast$.
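To make the preceding construction concrete, consider the special case where $T$ is a star of height 1 with main root $v_0$ and $p\geq 1$ leaves (which covers all trees of height 1). With all diagonal entries equal to $\alpha$, assigning the weight $\omega=\frac{\beta-\alpha}{\sqrt{p}}$ to every edge incident to $v_0$ yields, in \texttt{Diagonalize}$(M_1^{(1)},-\beta)$, $$d_{v_0}=(\alpha-\beta)-p\cdot\frac{\omega^2}{\alpha-\beta}=(\alpha-\beta)-(\alpha-\beta)=0,$$ so that $\lambda_{\max}(M_1^{(1)})=\beta$. A direct computation confirms that $\Spec(M_1^{(1)})$ consists of $\beta$ and $2\alpha-\beta$, each with multiplicity $1$, together with $\alpha$ with multiplicity $p-1$, in agreement with the discussion above.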
Next define $M_2^{(1)}(T):=M_2^{(1)}$ as follows: set all diagonal values of $M_2^{(1)}$ as $\beta$ and, by Lemma~\ref{lema_define_max}(iii), define the weights of the edges between $v_0$ and its children such that $\alpha$ is the minimum eigenvalue of $M_2^{(1)}$. Applying \texttt{Diagonalize}$(M^{(1)}_2,-\beta)$, we again see that $m_{M^{(1)}_2}(\beta)=|V(T)|-2$, that $m_{M_2^{(1)}[T-v_0]}(\beta)=m_{M_2^{(1)}}(\beta)+1$ and that the remaining two eigenvalues are $\lambda_{\min}(M^{(1)}_2)=\alpha$ and $\lambda_{\max}(M^{(1)}_2)$. Considering the trace of $M^{(1)}_2$, we obtain $\lambda_{\max}(M^{(1)}_2)=2\beta-\alpha$, so that $\DSpec(M^{(1)}_2)\subseteq\{\lambda_1^{(1)},\lambda_2^{(1)},\lambda_3^{(1)}\}$. Here $L(M_2^{(1)},\lambda_1^{(1)})=L(M_2^{(1)},\lambda_3^{(1)})=0$. As a consequence, $\{\lambda_1^{(1)},\lambda_2^{(1)},\lambda_3^{(1)}\}$ is strongly realizable for trees of height 1 in $\mathcal{T}^\ast$. So far, we have shown that items (i)-(iv) hold for the base of induction. To prove (v), let $y_1=\beta-\alpha=\frac{\beta-\alpha}{2^{1-1}}$. Fix $\theta$ such that $0<\theta<y_1$. Observe that this interval is not empty, since $\beta>\alpha$. We define $M_{1,\theta}^{(1)}\in \mathcal{S}(T)$ as follows: the diagonal entries of $M_{1,\theta}^{(1)}$ are the same as $M_1^{(1)}$, except for the entry corresponding to $v_0$, which is $\alpha+\theta$. To ensure that the greatest eigenvalue of $M_{1,\theta}^{(1)}$ is equal to $\beta$, the weight $\omega$ assigned to the edges between $v_0$ and its children is defined by the solution of the following equation obtained by applying \texttt{Diagonalize}$(M_{1,\theta}^{(1)},-\beta)$ with root $v_0$: \begin{equation} \label{eq:6} 0 = (\alpha+\theta-\beta) - \sum_{w\neq v_0}\frac{\omega^2}{\alpha-\beta}. \end{equation} Note that $- \sum_{w\neq v_0}\frac{\omega^2}{\alpha-\beta}$ is positive, so (\ref{eq:6}) has a real solution $\omega$ if, and only if, $\alpha+\theta-\beta<0$, which is true since $\theta<\beta-\alpha$. 
As in the previous case, $\alpha$ has multiplicity $|V(T)|-2$ and $m_{M_{1,\theta}^{(1)}[T-v_0]}(\alpha)=m_{M_{1,\theta}^{(1)}}(\alpha)+1$. So far, we have $\DSpec(M_{1,\theta}^{(1)})\subseteq\{\lambda_{\min}(M_{1,\theta}^{(1)}),\lambda_1^{(1)},\lambda_2^{(1)}\}$. Finally, note that $$\lambda_0^{(1)}+(|V|-2)\alpha+\beta = \tr(M_1^{(1)}) = \tr(M_{1,\theta}^{(1)})-\theta = \lambda_{\min}(M_{1,\theta}^{(1)})+(|V|-2)\alpha+\beta-\theta,$$ from which we obtain $\lambda_{\min}(M_{1,\theta}^{(1)})= \lambda_0^{(1)}+\theta$. As in the previous case, $L(M_{1,\theta}^{(1)},\lambda_2^{(1)})=0$ and $L(M_{1,\theta}^{(1)},\lambda_{0}^{(1)}+\theta)=0$. To prove (vi), fix $\delta$ such that $0<\delta<\beta-\alpha$. We define $M_{2,\delta}^{(1)}\in \mathcal{S}(T)$ as follows: the diagonal entries of $M_{2,\delta}^{(1)}$ are the same as those of $M_2^{(1)}$, except for the entry corresponding to $v_0$, which is $\beta+\delta$. To ensure that $\alpha$ is the least eigenvalue of $M_{2,\delta}^{(1)}$, the weight $\omega$ assigned to the edges between $v_0$ and its children is defined by the solution of the following equation obtained by applying \texttt{Diagonalize}$(M_{2,\delta}^{(1)},-\alpha)$ with root $v_0$: \begin{equation} \label{eq:7} 0 = (\beta+\delta-\alpha) - \sum_{w\neq v_0}\frac{\omega^2}{\beta-\alpha}. \end{equation} Note that $- \sum_{w\neq v_0}\frac{\omega^2}{\beta-\alpha}$ is negative, so (\ref{eq:7}) has a real solution $\omega$ if, and only if, $\beta+\delta-\alpha>0$, which is true since $\delta>0$ and $\beta>\alpha$. As in the case of $M^{(1)}_2$, $\lambda_2^{(1)}=\beta$ has multiplicity $|V(T)|-2$ and $m_{M_{2,\delta}^{(1)}[T-v_0]}(\beta)=m_{M_{2,\delta}^{(1)}}(\beta)+1$. So far, we have $\DSpec(M_{2,\delta}^{(1)})\subseteq \{\lambda_1^{(1)},\lambda_2^{(1)},\lambda_{\max}(M_{2,\delta}^{(1)})\}$.
Finally, note that $$\alpha+(|V|-2)\beta+\lambda_{3}^{(1)}= \tr(M_2^{(1)}) = \tr(M_{2,\delta}^{(1)})-\delta = \alpha+(|V|-2)\beta+\lambda_{\max}(M_{2,\delta}^{(1)})-\delta,$$ from which we obtain $\lambda_{\max}(M_{2,\delta}^{(1)}) = \lambda_3^{(1)}+\delta$. As in the previous case, $L(M_{2,\delta}^{(1)},\lambda_1^{(1)})=0$ and $L(M_{2,\delta}^{(1)},\lambda_{3}^{(1)}+\delta)=0$. Now, suppose by induction that for some $k\in\mathbb{N}$ we have a set $C_k=\{\lambda_0^{(k)}<\lambda_1^{(k)}<\cdots<\lambda_{2k+1}^{(k)}\}$ such that, for every $T'\in \mathcal{T}^\ast$ with height $k$ and diameter $d\in \{2k-1,2k\}$, there exist $M_1^{(k)}=M_1^{(k)}(T'),M_2^{(k)}=M_2^{(k)}(T')\in \mathcal{S}(T')$ satisfying the following properties: \begin{enumerate} \item[(i)] $\DSpec(M_1^{(k)}) = \{\lambda_0^{(k)},\ldots,\lambda_{2k}^{(k)}\}$, if $d=2k$; $\DSpec(M_1^{(k)}) = \{\lambda_0^{(k)},\ldots,\lambda_{2k}^{(k)}\}\setminus\{\lambda_1^{(k)}\}$, if $d=2k-1$; \item[(ii)] $\DSpec(M_2^{(k)})=\{\lambda_1^{(k)},\ldots,\lambda_{2k+1}^{(k)}\}$, if $d=2k$; $\DSpec(M_2^{(k)})=\{\lambda_1^{(k)},\ldots,\lambda_{2k+1}^{(k)}\}\setminus\{\lambda_{2k}^{(k)}\}$, if $d=2k-1$; \item[(iii)] $L(M_1^{(k)},\lambda_{2i}^{(k)})=0=L(M_2^{(k)},\lambda_{2i+1}^{(k)})$, for $i\in\{0,\ldots,k\}$; \item[(iv)] $m_{M_1^{(k)}[T'-v]}(\lambda_{2i-1}^{(k)})=m_{M_1^{(k)}}(\lambda_{2i-1}^{(k)})+1$ and $m_{M_2^{(k)}[T'-v]}(\lambda_{2i}^{(k)})=m_{M_2^{(k)}}(\lambda_{2i}^{(k)})+1$, for $i\in\{1,\ldots,k\}$; \end{enumerate} Moreover, the following hold: \begin{enumerate} \item[(v)] Let $y_k=\frac{\beta-\alpha}{2^{k-1}}$. 
For all $\theta\in(0,y_k)$, there exists $M_{1,\theta}^{(k)}$ such that $$\DSpec(M_{1,\theta}^{(k)})\subseteq\{\lambda_0^{(k)}+\theta,\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)}\},$$ $L(M_{1,\theta}^{(k)},\lambda_{0}^{(k)}+\theta)=0$, and, for all $i\in\{1,\ldots,k\}$, we have $L(M_{1,\theta}^{(k)},\lambda_{2i}^{(k)})=0$ and $m_{M_{1,\theta}^{(k)}[T'-v]}(\lambda_{2i-1}^{(k)})=m_{M_{1,\theta}^{(k)}}(\lambda_{2i-1}^{(k)})+1$. \item[(vi)] For all $\delta\in(0,y_k)$, there exists $M_{2,\delta}^{(k)}$ such that $$\DSpec(M_{2,\delta}^{(k)})\subseteq\{\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{2k+1}^{(k)}+\delta\},$$ $L(M_{2,\delta}^{(k)},\lambda_{2k+1}^{(k)}+\delta)=0$, and, for all $i\in\{1,\ldots,k\}$, we have $L(M_{2,\delta}^{(k)},\lambda_{2i-1}^{(k)})=0$ and $m_{M_{2,\delta}^{(k)}[T'-v]}(\lambda_{2i}^{(k)})=m_{M_{2,\delta}^{(k)}}(\lambda_{2i}^{(k)})+1$. \end{enumerate} Fix $\delta_k=\frac{\beta-\alpha}{2^k}\in(0,y_k)$ and $\theta_k=\frac{\beta-\alpha}{2^k}\in(0,y_k)$. Consider the set \begin{eqnarray}\label{def_set} C_{k+1}&=&\{\lambda_0^{(k+1)}<\lambda_1^{(k+1)}<\lambda_2^{(k+1)}<\cdots<\lambda_{2k+1}^{(k+1)}<\lambda_{2k+2}^{(k+1)}<\lambda^{(k+1)}_{2k+3}\}\nonumber\\ &=&\{\lambda_0^{(k)}-\delta_k<\lambda_0^{(k)}<\lambda_1^{(k)}<\cdots<\lambda_{2k}^{(k)}<\lambda_{2k+1}^{(k)}+\delta_k<\lambda_{2k+1}^{(k)}+\delta_k+\theta_k\} \end{eqnarray} of cardinality $2k+4$. We show that $C_{k+1}$ satisfies the required properties. Let $T \in \mathcal{T}^\ast$ (rooted at a main root) with height $k+1$ and diameter $d\in \{2k+1,2k+2\}$. This means that $T = T_0 \odot (T_1,\ldots,T_p)$, where $p\geq1$, and that each $T_i \in \mathcal{T}^\ast$ has height $k$ and main root $v_i$, for all $i\in\{0,\ldots,p\}$. Recall that $T$ has diameter $2k+1$ if and only if $p=1$. 
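Before proceeding, we illustrate \eqref{def_set} in the case $k=1$: starting from $C_1=\{2\alpha-\beta,\alpha,\beta,2\beta-\alpha\}$ and $\delta_1=\theta_1=\frac{\beta-\alpha}{2}$, we obtain $$C_{2}=\left\{\frac{5\alpha-3\beta}{2},\; 2\alpha-\beta,\;\alpha,\;\beta,\;\frac{5\beta-3\alpha}{2},\; 3\beta-2\alpha\right\},$$ a set of cardinality $6=2\cdot 1+4$ whose elements play the roles of $\lambda_0^{(2)}<\cdots<\lambda_5^{(2)}$.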
First, we define a matrix $M_1^{(k+1)}=M_1^{(k+1)}(T)$ with the structure of Figure~\ref{lemma_3.4fig}, where $M_1^{(k+1)}[T_0]=M_2^{(k)}(T_0)$ and $M_1^{(k+1)}[T_i]=M_1^{(k)}(T_i)$ for all $i\in\{1,\ldots,p\}$ are defined using the induction hypothesis. By parts (i) and (ii) of the induction hypothesis and Lemma~\ref{lema_define_max}, we can define the weights on the edges $v_0v_i$ so that $\lambda_{2k+2}^{(k+1)}=\lambda_{2k+1}^{(k)}+\delta_k$ is the maximum eigenvalue of $M_1^{(k+1)}$. We wish to apply Lemma~\ref{multiplicidades}. By parts (i) to (iv) of the induction hypothesis, the hypotheses of the lemma are satisfied for $A_1=\{\lambda_0^{(k)},\ldots,\lambda_{2k}^{(k)}\}$, $A_2=\{\lambda_1^{(k)},\ldots,\lambda_{2k+1}^{(k)}\}$, $\mathcal{C}_1=\mathcal{C}_2=\mathcal{T}^\ast$, $\Lambda_1=\{\lambda_2^{(k)},\lambda_4^{(k)},\ldots,\lambda_{2k}^{(k)}\}$ and $\Lambda_2=\{\lambda_1^{(k)},\lambda_3^{(k)},\ldots,\lambda_{2k-1}^{(k)}\}$. Observe that $a=\lambda_0^{(k)}\in A_1$ and $b=\lambda_{2k+1}^{(k)} \in A_2$. Given our choice of maximum eigenvalue, Lemma~\ref{multiplicidades}(a) immediately implies that \begin{eqnarray}\label{specM1} \DSpec(M_1^{(k+1)})&=& \begin{cases} \{\lambda_{\min},\lambda_0^{(k)},\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{2k+1}^{(k)}+\delta_{k}\}, &\text{ if } d=2k+2,\\ \{\lambda_{\min},\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{2k+1}^{(k)}+\delta_{k}\} ,&\text{ if } d=2k+1, \end{cases}\\ &\subseteq& \{\lambda_{\min},\lambda_1^{(k+1)},\ldots,\lambda_{2k+2}^{(k+1)}\}. \nonumber \end{eqnarray} Moreover, $\lambda_{\min} = \lambda_0^{(k)}-\delta_{k}=\lambda_0^{(k+1)}$ by Lemma~\ref{multiplicidades}(f). Therefore (i) is satisfied for $M_1^{(k+1)}$. We also obtain (iii) and (iv) by Lemma \ref{multiplicidades}.
Next, define $M_2^{(k+1)}=M_2^{(k+1)}(T)$ with the structure of Figure~\ref{lemma_3.4fig}, where $M_2^{(k+1)}[T_0]=M_{1,\theta_k}^{(k)}(T_0)$ and $M_2^{(k+1)}[T_i]=M_{2,\delta_k}^{(k)}(T_i)$ are defined based on the induction hypothesis. By Lemma~\ref{lema_define_max}, we can define the weights on the edges $v_0v_i$ so that $\lambda^{(k+1)}_1=\lambda^{(k)}_{0}$ is the minimum eigenvalue of $M_2^{(k+1)}$. The induction hypothesis ensures that the hypotheses of Lemma~\ref{multiplicidades} are satisfied for $A_1=\{\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{2k+1}^{(k)}+\delta_k\}$, $A_2=\{\lambda_0^{(k)}+\theta_k,\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)}\}$, $\mathcal{C}_1=\mathcal{C}_2=\mathcal{T}^\ast$, $\Lambda_1=\{\lambda_1^{(k)},\lambda_3^{(k)},\ldots,\lambda_{2k-1}^{(k)}\}$ and $\Lambda_2=\{\lambda_2^{(k)},\lambda_4^{(k)},\ldots,\lambda_{2k}^{(k)}\}$. Observe that $a=\lambda_0^{(k)}+\theta_k\in A_2$ and $b=\lambda_{2k+1}^{(k)}+\delta_k \in A_1$. Furthermore, Lemma~\ref{multiplicidades}(a) ensures that \begin{eqnarray}\label{specM2} \DSpec(M_2^{(k+1)}) &=& \begin{cases} \{\lambda_0^{(k)},\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{2k+1}^{(k)}+\delta_{k},\lambda_{\max}\},&\text{ if } d=2k+2,\\ \{\lambda_0^{(k)},\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{\max}\},&\text{ if } d=2k+1, \end{cases}\\ &\subseteq& \{\lambda_1^{(k+1)},\lambda_2^{(k+1)},\ldots,\lambda_{2k+2}^{(k+1)},\lambda_{\max}\}.\nonumber \end{eqnarray} We have $\lambda_{\max} = \lambda_{2k+1}^{(k)}+\delta_{k}+\theta_k=\lambda_{2k+3}^{(k+1)}$ by Lemma \ref{multiplicidades}(f), proving (ii) for $M_{2}^{(k+1)}$. Items (iii) and (iv) also hold by Lemma~\ref{multiplicidades}. It remains to prove (v) and (vi). We start with (v). Let $y_{k+1}=\delta_k=\frac{\beta-\alpha}{2^k}$ and let $\theta\in(0,y_{k+1})$. Notice that, since $0<\theta<\delta_k<y_k$, item (vi) of the induction hypothesis applies to $M_{2,\theta}^{(k)}(T_0)$.
We define a matrix $M_{1,\theta}^{(k+1)}=M_{1,\theta}^{(k+1)}(T)$ with the structure of Figure~\ref{lemma_3.4fig}, where the induction hypothesis gives us $M_{1,\theta}^{(k+1)}[T_0]=M_{2,\theta}^{(k)}(T_0)$ and $M_{1,\theta}^{(k+1)}[T_i]=M_1^{(k)}(T_i)$ for all $i\in\{1,\ldots,p\}$. By Lemma~\ref{lema_define_max} we can define the weights on the edges $v_0v_i$ such that $\lambda_{2k+2}^{(k+1)}=\lambda_{2k+1}^{(k)}+\delta_k$ is the maximum eigenvalue of $M_{1,\theta}^{(k+1)}$, since $\lambda_{2k+1}^{(k)}+\delta_k>\lambda_{2k+1}^{(k)}+\theta$. We again apply Lemma \ref{multiplicidades}, this time for $A_1=\{\lambda_0^{(k)},\ldots,\lambda_{2k}^{(k)}\}$, $A_2=\{\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{2k+1}^{(k)}+\theta\}$, $\mathcal{C}_1=\mathcal{C}_2=\mathcal{T}^\ast$, $\Lambda_1=\{\lambda_2^{(k)},\lambda_4^{(k)},\ldots,\lambda_{2k}^{(k)}\}$ and $\Lambda_2=\{\lambda_1^{(k)},\lambda_3^{(k)},\ldots,\lambda_{2k-1}^{(k)}\}$. Observe that $a=\lambda_0^{(k)}\in A_1$ and $b=\lambda_{2k+1}^{(k)}+\theta \in A_2$. Part (f) of Lemma~\ref{multiplicidades} implies that $\lambda_{\min}=\lambda_0^{(k)}-\delta_{k}+\theta$. Part (a) gives \begin{eqnarray}\label{specMtheta} \DSpec(M_{1,\theta}^{(k+1)}) &=& \begin{cases}\{\lambda_0^{(k)}-\delta_{k}+\theta,\lambda_0^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{2k+1}^{(k)}+\delta_{k}\}, &\text{ if } d=2k+2, \\ \{\lambda_0^{(k)}-\delta_{k}+\theta,\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{2k+1}^{(k)}+\delta_{k}\}, &\text{ if } d=2k+1. \end{cases} \nonumber\\ &\subseteq& \{\lambda_0^{(k+1)}+\theta,\lambda_1^{(k+1)},\ldots,\lambda_{2k+2}^{(k+1)}\}. \end{eqnarray} The other properties of part (v) also follow from Lemma \ref{multiplicidades}. For (vi), let $z_{k+1}=y_k-\theta_k=\frac{\beta-\alpha}{2^{k-1}}-\frac{\beta-\alpha}{2^k}=\frac{\beta-\alpha}{2^k}$ and fix $\delta\in(0,z_{k+1})$.
This gives $0<\delta<y_{k+1}\leq y_k-\theta_k$, so that $\delta+\theta_k<y_{k}$ and items (v) and (vi) of the induction hypothesis apply to $M_{1,\theta_k+\delta}^{(k)}(T_0)$ and $M_{2,\delta_k}^{(k)}(T_i)$. Let $M_{2,\delta}^{(k+1)}=M_{2,\delta}^{(k+1)}(T)$ with the structure of Figure~\ref{lemma_3.4fig}, where we use the induction hypothesis to define $M_{2,\delta}^{(k+1)}[T_0]=M_{1,\theta_k+\delta}^{(k)}(T_0)$ and $M_{2,\delta}^{(k+1)}[T_i]=M_{2,\delta_k}^{(k)}(T_i)$. By Lemma~\ref{lema_define_max} we can define the weights on the edges $v_0v_i$ such that $\lambda_1^{(k+1)}=\lambda_{0}^{(k)}$ is the minimum eigenvalue of $M_{2,\delta}^{(k+1)}$. We apply Lemma \ref{multiplicidades} once more, for $A_1=\{\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{2k+1}^{(k)}+\delta_k\}$, $A_2=\{\lambda_0^{(k)}+\theta_k+\delta,\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)}\}$, $\mathcal{C}_1=\mathcal{C}_2=\mathcal{T}^\ast$, $\Lambda_1=\{\lambda_1^{(k)},\lambda_3^{(k)},\ldots,\lambda_{2k-1}^{(k)}\}$ and $\Lambda_2=\{\lambda_2^{(k)},\lambda_4^{(k)},\ldots,\lambda_{2k}^{(k)}\}$. Observe that $a=\lambda_0^{(k)}+\theta_k+\delta\in A_2$ and $b=\lambda_{2k+1}^{(k)}+\delta_k \in A_1$. Given our choice of $\lambda_{\min}$, we have $\lambda_{\max} = \lambda_{2k+1}^{(k)}+\delta_{k}+\theta_k+\delta=\lambda_{2k+3}^{(k+1)}+\delta$ by Lemma \ref{multiplicidades}(f). Lemma \ref{multiplicidades}(a) gives \begin{eqnarray}\label{specMdelta} \DSpec(M_{2,\delta}^{(k+1)}) &=& \begin{cases}\{\lambda_0^{(k)},\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{2k+1}^{(k)}+\delta_{k},\lambda_{2k+1}^{(k)}+\delta_{k}+\theta_k+\delta\},&\text{ if } d=2k+2,\\ \{\lambda_0^{(k)},\lambda_1^{(k)},\ldots,\lambda_{2k}^{(k)},\lambda_{2k+1}^{(k)}+\delta_{k}+\theta_k+\delta\},&\text{ if } d=2k+1. \end{cases}\nonumber\\ &\subseteq& \{\lambda_1^{(k+1)},\ldots,\lambda_{2k+2}^{(k+1)},\lambda_{2k+3}^{(k+1)}+\delta\}. \end{eqnarray} The other properties of part (vi) also follow from Lemma \ref{multiplicidades}. 
This concludes the induction step, establishing Theorem~\ref{main_th}. \end{proof} \begin{remark}\label{relation_ck} Note that the proof of Theorem~\ref{main_th} shows how the sets $C_k$ and $C_{k+1}$ relate to each other. Indeed, if $C_{k}=\{\lambda_{0},\ldots,\lambda_{2k+1}\}$, then $C_{k+1}=\{\lambda_{0}-\delta_{k},\lambda_{0},\ldots,\lambda_{2k},\lambda_{2k+1}+\delta_{k},\lambda_{2k+1}+\delta_{k}+\theta_{k}\}$. \end{remark} \section{Proof of Theorem~\ref{thm:main} for other seeds}\label{sec:other_seeds} To conclude the proof of Theorem~\ref{thm:main}, we prove it for unfoldings of the seeds $S'_d$ and $S''_d$. \begin{proof}[Proof of Theorem~\ref{thm:main}] Let $T$ be an unfolding of $S_d'$ or $S_d''$ for some $d\geq 4$. Assume that $d\in\{2k+2,2k+3\}$ for some $k\geq 1$. Given arbitrary $\alpha<\beta$, we apply Theorem~\ref{main_th} (see also Remark~\ref{relation_ck}) to obtain sets $C_{k-1}=\{\lambda_{0},\ldots,\lambda_{2k-1}\}$ and $C_{k}=\{\lambda_{0}-\delta_{k-1},\lambda_{0},\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}\}$ that satisfy conditions (i)-(vi) for trees $T^\ast \in \mathcal{T}^\ast$ of height $k-1$ and $k$, respectively. In our construction, we consider each of the three possibilities for the seeds $S_d'$ and $S''_d$ in Definition~\ref{def_seeds}. \noindent \textbf{Case 1:} If $d=2k+2$ for some $k\geq 1$ and $T$ is an unfolding of $S_d'$, by Proposition~\ref{equivalence_other_seeds}(i), there exist $T_0\in \mathcal{T}^\ast$ of height $k-1$ and $T_1,\ldots,T_p\in \mathcal{T}^\ast$ of height $k$, for some $p\geq 2$, such that $T=T_0\odot(T_1,\ldots,T_p)$. We define a matrix $M\in \mathcal{S}(T)$ as follows: $M[T_0]=M_1^{(k-1)}(T_0)$ and $M[T_i]=M_1^{(k)}(T_i)$, for $i\in\{1,\ldots,p\}$, where $M_1$ denotes a matrix that satisfies (i) in Theorem~\ref{main_th}. To compute the spectrum of $M$ we apply Lemma~\ref{multiplicidades}. 
To this end, let $\mathcal{C}_1=\mathcal{C}_2=\mathcal{T}^\ast$ (where each tree is rooted at a main root). Let $k_1=k-1$, $k_2=k$, and consider $A_1=C_{k}\setminus\{\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}\}$ and $A_2=C_{k-1}\setminus\{\lambda_{2k-1}\}$. Note that $A_1 \cap A_2=\{\lambda_0,\ldots,\lambda_{2k-2}\}$ and that $(A_1 \cup A_2) \setminus (A_1 \cap A_2)=\{\lambda_0-\delta_{k-1},\lambda_{2k-1}+\delta_{k-1}\}$, so $a=\lambda_0-\delta_{k-1}$ and $b=\lambda_{2k-1}+\delta_{k-1}$. Set $\Lambda_1=\{\lambda_1,\lambda_3,\ldots,\lambda_{2k-3}\}$ and $\Lambda_2=\{\lambda_0,\lambda_2,\ldots,\lambda_{2k-2}\}$. By Theorem~\ref{main_th}(i), $M_1^{(k-1)}(T_0)$ is a strong realization of $A_1$ and $M_1^{(k)}(T_i)$ is a strong realization of $A_2$ for each $i\geq 1$. By Theorem~\ref{main_th}(iii) and (iv)\footnote{Note that the same elements of $A_1\cap A_2$ play different roles with respect to $M_1^{(k-1)}(T_0)$ and $M_1^{(k)}(T_i)$, as the eigenvalues with even index with respect to the first matrix have odd index with respect to the second, and vice-versa.}, the following hold: \begin{itemize} \item[(I)] For all $\lambda\in \Lambda_1$, $L(M_1^{(k)}(T_i),\lambda)=0$ and $m_{M_1^{(k-1)}[T_0-v_0]}(\lambda)=m_{M_1^{(k-1)}(T_0)}(\lambda)+1$. \item[(II)] For all $\lambda \in \Lambda_2$, $L(M_1^{(k-1)}(T_0),\lambda)=0$ and $m_{M_1^{(k)}[T_i-v_i]}(\lambda)=m_{M_1^{(k)}(T_i)}(\lambda)+1$. \end{itemize} Having verified the hypotheses, we are now ready to apply Lemma~\ref{multiplicidades}. Since $p>1$ and $a,b \in A_1$, Lemma~\ref{multiplicidades}(a) tells us that there exist $\lambda_{\min},\lambda_{\max}\in\mathbb{R}$ such that \begin{equation}\label{eq:dspec} \DSpec(M) = \{\lambda_{\min},\lambda_0-\delta_{k-1},\lambda_0,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1},\lambda_{\max}\}. \end{equation} In particular, $|\DSpec(M)|=2k+3=d+1$, so that $q(T)=d+1$ in this case. 
\noindent \textbf{Case 2:} If $d=2k+3$ for some $k\geq 1$ and $T$ is an unfolding of $S'_d$, by Proposition~\ref{equivalence_other_seeds}(ii), there exist $T_1,\ldots,T_p,T'_1,\ldots,T'_q\in \mathcal{T}^\ast, p,q\geq 1$, of height $k$ and $T_0,T'_0\in \mathcal{T}^\ast$ of height $k-1$ such that $T= (T_0\odot(T_1,\ldots,T_p))\odot (T'_0\odot(T'_1,\ldots,T'_q))$. We define the matrix $M\in \mathcal{S}(T)$ in two parts. For the part that is related to $\tilde{T}=T_0\odot(T_1,\ldots,T_p)$, set $M[T_0]=M_1^{(k-1)}(T_0)$ and $M[T_i]=M_1^{(k)}(T_i)$ for $i\in\{1,\ldots,p\}$, where $M_1$ denotes a matrix that satisfies (i) in Theorem~\ref{main_th}. By Lemma~\ref{lema_define_max}, we define the weights on the edges $v_0v_i$ so that $\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}$ is the maximum eigenvalue of $M[\tilde{T}]$. Note that, for $M[\tilde{T}]$, the hypotheses of Lemma~\ref{multiplicidades} are satisfied for the same sets $A_1$, $A_2$, $\Lambda_1$, $\Lambda_2$ defined in Case 1 (for the same reasons). Then, by Lemma~\ref{multiplicidades}, there exist $\tilde{\lambda}_{\min},\tilde{\lambda}_{\max}=\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}\in\mathbb{R}$ such that $$\DSpec (M[\tilde{T}]) \subseteq \{\tilde{\lambda}_{\min},\lambda_0-\delta_{k-1},\lambda_0,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}\}.$$ Moreover, $\tilde{\lambda}_{\min},\lambda_0,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}$ satisfy Lemma~\ref{multiplicidades}(c), while the values $\lambda_0-\delta_{k-1},\lambda_1,\ldots,\lambda_{2k-3},\lambda_{2k-1}+\delta_{k-1}$ satisfy Lemma~\ref{multiplicidades}(d). For the part that is related to $\tilde{T}'=T_0'\odot(T_1',\ldots,T_q')$, we set $M[T'_0]=M_{2,\delta_{k-1}}^{(k-1)}(T'_0)$ and $M[T'_i]=M_2^{(k)}(T'_i)$ for all $i\in\{1,\ldots,q\}$, where $M_2$ denotes the matrix that satisfies (ii) and $M_{2,\delta}$ denotes the matrix that satisfies (vi) in Theorem~\ref{main_th}. 
By Lemma~\ref{lema_define_max}, we may define the weights on the edges $\{v'_0,v'_i\}$ such that $\lambda_{0}-\delta_{k-1}$ is the minimum eigenvalue of $M[\tilde{T}']$. To compute the spectrum of $M[\tilde{T}']$ we apply Lemma~\ref{multiplicidades}. To this end, let $\mathcal{C}_1=\mathcal{C}_2=\mathcal{T}^\ast$ (where each tree is rooted at a main root). Let $k_1=k-1$, $k_2=k$, and consider $A_1=C_{k}\setminus\{\lambda_{0}-\delta_{k-1}\}$ and $A_2=(C_{k-1}\cup\{\lambda_{2k-1}+\delta_{k-1}\})\setminus\{\lambda_{0}, \lambda_{2k-1}\}$. Note that $A_1 \cap A_2=\{\lambda_{1},\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1}\}$, that $(A_1 \cup A_2) \setminus (A_1 \cap A_2)=\{\lambda_0,\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}\}$, and hence $a=\lambda_0$, $b=\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}$. Set $\Lambda_1=\{\lambda_2,\lambda_4,\ldots,\lambda_{2k-2}\}$ and $\Lambda_2=\{\lambda_1,\lambda_3,\ldots,\lambda_{2k-1}+\delta_{k-1}\}$. By Theorem~\ref{main_th}(vi), $M_{2,\delta_{k-1}}^{(k-1)}(T'_0)$ is a strong realization of $A_2$ and $M_2^{(k)}(T'_i)$ is a strong realization of $A_1$ for any $i\geq 1$. By Theorem~\ref{main_th}(iii), (iv) and (vi), the following hold: \begin{itemize} \item[(I)] For all $\lambda\in \Lambda_1$, $L(M_2^{(k)}(T'_i),\lambda)=0$ and $$m_{M_{2,\delta_{k-1}}^{(k-1)}[T'_0-v'_0]}(\lambda)=m_{M_{2,\delta_{k-1}}^{(k-1)}(T'_0)}(\lambda)+1.$$ \item[(II)] For all $\lambda \in \Lambda_2$, $L(M_{2,\delta_{k-1}}^{(k-1)}(T'_0),\lambda)=0$ and $m_{M_2^{(k)}[T'_i-v'_i]}(\lambda)=m_{M_2^{(k)}(T'_i)}(\lambda)+1$. \end{itemize} Having verified the hypotheses, we are now ready to apply Lemma~\ref{multiplicidades}. 
Lemma~\ref{multiplicidades}(a) tells us that there exist $\tilde{\lambda}'_{\min}=\lambda_{0}-\delta_{k-1},\tilde{\lambda}'_{\max}\in\mathbb{R}$ such that \begin{equation}\label{eq:dspecb} \DSpec(M[\tilde{T}']) \subseteq \{\lambda_0-\delta_{k-1},\lambda_0,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1},\tilde{\lambda}'_{\max}\}. \end{equation} Moreover, $\lambda_0-\delta_{k-1},\lambda_1,\ldots,\lambda_{2k-1}+\delta_{k-1},\tilde{\lambda}'_{\max}$ satisfy Lemma~\ref{multiplicidades}(c), while the values $\lambda_0,\lambda_2,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}$ satisfy Lemma~\ref{multiplicidades}(d). To conclude the proof we apply Lemma~\ref{multiplicidades} to $\tilde{T}\odot \tilde{T}'$ using the matrices defined above. Here, $\mathcal{C}_1=\{\tilde{T}\}$, $\mathcal{C}_2=\{\tilde{T}'\}$, $k_1=k_2=k+1$, $A_1=\{\tilde{\lambda}_{\min},\lambda_0-\delta_{k-1},\lambda_0,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}\}$, $A_2=\{\lambda_0-\delta_{k-1},\lambda_0,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1},\tilde{\lambda}'_{\max}\}$, hence $a=\tilde{\lambda}_{\min}$, $b=\tilde{\lambda}'_{\max}$. Set $\Lambda_1=\{\lambda_0,\lambda_2,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}\}$ and $\Lambda_2=\{\lambda_0-\delta_{k-1},\lambda_1,\lambda_3,\ldots,\lambda_{2k-3},\lambda_{2k-1}+\delta_{k-1}\}$. Since $a\in A_1$, $b\in A_2$ and $p=1$, it follows that there exist $\lambda_{\min}$ and $\lambda_{\max}$ such that \begin{equation} \DSpec(M) = \{\lambda_{\min},\lambda_0-\delta_{k-1},\lambda_0,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1},\lambda_{\max}\}. \end{equation} In particular, $|\DSpec(M)|=2k+4=d+1$, so that $q(T)=d+1$ in this case. 
\noindent \textbf{Case 3:} If $d=2k+3$ for some $k\geq 1$ and $T$ is an unfolding of $S''_d$, by Proposition~\ref{equivalence_other_seeds}(iii), there exist $T_0\in \mathcal{T}^\ast$ of height $k-1$ and $T_1,\ldots,T_p,T'_0,T'_1,\ldots,T'_q\in \mathcal{T}^\ast$ of height $k$, where $p,q\geq 1$, such that $T= (T_0\odot(T_1,\ldots,T_p))\odot (T'_0\odot(T'_1,\ldots,T'_q))$. We define the matrix $M\in \mathcal{S}(T)$ in two parts. For the part that is related to $\tilde{T}=T_0\odot(T_1,\ldots,T_p)$, set $M[T_0]=M_1^{(k-1)}(T_0)$ and $M[T_i]=M_1^{(k)}(T_i)$ for $i\in\{1,\ldots,p\}$, where $M_1$ denotes a matrix that satisfies (i) in Theorem~\ref{main_th}. By Lemma~\ref{lema_define_max}, we may define the weights on the edges $\{v_0,v_i\}$ such that $\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}$ is the maximum eigenvalue of $M[T_0\odot(T_1,\ldots,T_p)]$. Note that, for $M[T_0\odot(T_1,\ldots,T_p)]$, all hypotheses of Lemma~\ref{multiplicidades} are satisfied for the same reasons described in Case 1 above. Then, by Lemma~\ref{multiplicidades}, there exist $\tilde{\lambda}_{\min},\tilde{\lambda}_{\max}=\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}\in\mathbb{R}$ such that $$\DSpec (M[\tilde{T}]) \subseteq \{\tilde{\lambda}_{\min},\lambda_0-\delta_{k-1},\lambda_0,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}\}.$$ Moreover, $\tilde{\lambda}_{\min},\lambda_0,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}$ satisfy Lemma~\ref{multiplicidades}(c), while $\lambda_0-\delta_{k-1},\lambda_1,\ldots,\lambda_{2k-3},\lambda_{2k-1}+\delta_{k-1}$ satisfy Lemma~\ref{multiplicidades}(d). For the part that is related to $\tilde{T}'=T_0'\odot(T_1',\ldots,T_q')$, set $M[T'_0]=M_{1,\theta_{k}}^{(k)}(T'_0)$, $M[T'_i]=M_2^{(k)}(T'_i)$, for $i\in\{1,\ldots,q\}$, where $M_2$ denotes a matrix that satisfies (ii) and $M_{1,\theta}$ denotes a matrix that satisfies (v) in Theorem~\ref{main_th}. 
By Lemma~\ref{lema_define_max}, we may define the weights on the edges $\{v'_0,v'_i\}$ such that $\lambda_{0}-\delta_{k-1}$ is the minimum eigenvalue of $M[\tilde{T}']$. To compute the spectrum of $M[\tilde{T}']$ we apply Lemma~\ref{multiplicidades}. To this end, let $\mathcal{C}_1=\mathcal{C}_2=\mathcal{T}^\ast$, let $k_1=k_2=k$, and consider $A_1=C_{k}\setminus\{\lambda_{0}-\delta_{k-1}\}$ and $A_2=(C_{k}\cup\{\lambda_{0}-\delta_{k-1}+\theta_{k}\})\setminus\{\lambda_{0}-\delta_{k-1},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}\}$. Note that $A_1 \cap A_2=\{\lambda_{0},\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1}\}$, that $(A_1 \cup A_2) \setminus (A_1 \cap A_2)=\{\lambda_{0}-\delta_{k-1}+\theta_{k},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}\}$, and hence $a=\lambda_{0}-\delta_{k-1}+\theta_{k}$, $b=\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}$. Set $\Lambda_1=\{\lambda_2,\lambda_4,\ldots,\lambda_{2k-2}\}$ and $\Lambda_2=\{\lambda_1,\lambda_3,\ldots,\lambda_{2k-1}+\delta_{k-1}\}$. By Theorem~\ref{main_th}(v), $M_{1,\theta_{k}}^{(k)}(T'_0)$ is a strong realization of $A_2$ and $M_2^{(k)}(T'_i)$ is a strong realization of $A_1$ for any $i\geq 1$. By Theorem~\ref{main_th}(iii), (iv) and (v), the following hold: \begin{itemize} \item[(I)] For all $\lambda\in \Lambda_1$, $L(M_2^{(k)}(T'_i),\lambda)=0$ and $m_{M_{1,\theta_{k}}^{(k)}[T'_0-v'_0]}(\lambda)=m_{M_{1,\theta_{k}}^{(k)}(T'_0)}(\lambda)+1$. \item[(II)] For all $\lambda \in \Lambda_2$, $L(M_{1,\theta_{k}}^{(k)}(T'_0),\lambda)=0$ and $m_{M_2^{(k)}[T'_i-v'_i]}(\lambda)=m_{M_2^{(k)}(T'_i)}(\lambda)+1$. \end{itemize} Having verified the hypotheses, we are now ready to apply Lemma~\ref{multiplicidades}. 
Lemma~\ref{multiplicidades}(a) tells us that there exist $\tilde{\lambda}'_{\min}=\lambda_{0}-\delta_{k-1},\tilde{\lambda}'_{\max}\in\mathbb{R}$ such that \begin{equation}\label{eq:dspecc} \DSpec(M[\tilde{T}']) \subseteq \{\lambda_0-\delta_{k-1},\lambda_0,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1},\tilde{\lambda}'_{\max}\}. \end{equation} Moreover, $\lambda_0-\delta_{k-1},\lambda_1,\ldots,\lambda_{2k-1}+\delta_{k-1},\tilde{\lambda}'_{\max}$ satisfy Lemma~\ref{multiplicidades}(c), while the values $\lambda_0,\lambda_2,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1}$ satisfy Lemma~\ref{multiplicidades}(d). As in Case 2, we conclude the proof by applying Lemma~\ref{multiplicidades} to $\tilde{T}\odot \tilde{T}'$. This gives $\lambda_{\min},\lambda_{\max}$ such that \begin{equation} \DSpec(M) = \{\lambda_{\min},\lambda_0-\delta_{k-1},\lambda_0,\ldots,\lambda_{2k-2},\lambda_{2k-1}+\delta_{k-1},\lambda_{2k-1}+\delta_{k-1}+\theta_{k-1},\lambda_{\max}\}. \end{equation} In particular, $|\DSpec(M)|=2k+4=d+1$, so that $q(T)=d+1$ in this case. \end{proof} We observe that our proof of Theorem~\ref{thm:main} using Theorem~\ref{main_th} allows us to ask more about the spectrum of a realization of a diminimal tree. For instance, we may require it to be integral. \begin{corollary} Let $d$ be a positive integer. Let $\mathcal{T}(S_d)$, $\mathcal{T}(S'_d)$ and $\mathcal{T}(S''_d)$ be the families of trees of diameter $d$ generated by the seeds $S_d$, $S'_d$ and $S''_d$, respectively, where $S'_d$ is defined for $d\geq 4$ and $S''_d$ for odd values of $d \geq 5$. For every $T\in \mathcal{T}(S_d) \cup \mathcal{T}(S'_d) \cup \mathcal{T}(S''_d)$, there is a real symmetric matrix $M(T)$ with underlying tree $T$ whose spectrum is integral and such that $|\DSpec(M(T))|=d+1$. \end{corollary} \begin{proof} The proof follows the same steps as the proof of Theorem~\ref{thm:main}. 
However, when $d \in \{2k,2k+1\}$ and we apply Theorem~\ref{main_th} to produce the set $C_k$, we start the proof by fixing an arbitrary integer $\alpha$ and by choosing $\beta=\alpha+2^{k-1}$, so that $\frac{\beta-\alpha}{2^{k-1}}$ is an integer. Then $C_1=\{2\alpha-\beta,\alpha,\beta,2\beta-\alpha\}$ is integral and the elements $\delta_j,\theta_j$ are integers for all $j\leq k-1$. Remark~\ref{relation_ck} ensures that the sets $C_j$ are integral for all $j\leq k$. This gives the desired conclusion for unfoldings of $S_d$. For the other seeds, we need to go back to the proof of Theorem~\ref{thm:main}. For instance, assume that we are in the case $d=2k+2$ and we have an unfolding of $S'_{d}$. With the choices that we made for $S_{d}$, if we repeat the proof of Theorem~\ref{thm:main} until we get to~\eqref{eq:dspec}, we deduce that all elements of $\DSpec(M)$ are integers except possibly $\lambda_{\min}$ and $\lambda_{\max}$. However, when we applied Lemma~\ref{multiplicidades} to define $M(T)$, it was not necessary to assign weights to the edges joining the roots of the trees $T_0\odot (T_1,\ldots,T_p)$. By Lemma~\ref{lema_define_max}, we can assign these weights in a way that $\lambda_{\max}$ is equal to any value greater than $\lambda_{2k-1}+\delta_{k-1}$, and we may choose this value to be an integer. Moreover, Lemma~\ref{multiplicidades}(f) tells us that $\lambda_{\min}+\lambda_{\max}=a+b$, where $a$ and $b$ are both known to be integers. Therefore $\lambda_{\min}$ is also an integer and the result follows. Unfoldings of the other two seeds may be dealt with using similar arguments. \end{proof} \section{Example}\label{sec:example} In this section we provide an example to illustrate that our proofs may be used to construct matrices associated with diminimal trees. In this example, we construct a symmetric matrix whose underlying graph is the seed $S_9$ (of diameter 9) with exactly 10 distinct eigenvalues. 
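The bookkeeping of the example can be checked mechanically. The short Python sketch below is our own illustration (not part of the construction): it records the eigenvalue multiplicities reported below for $\Spec(M^{(5)}_1)$ and confirms that there are exactly $d+1=10$ distinct eigenvalues, with total multiplicity equal to the order of the matrix.

```python
# Eigenvalue -> multiplicity for M_1^{(5)}(S_9) with alpha = 0, beta = 32,
# as reported in the example (bracketed exponents denote multiplicities).
spec = {-62: 1, -56: 1, -48: 2, -32: 4, 0: 8, 32: 8, 80: 4, 104: 2, 116: 1, 122: 1}

d = 9  # diameter of the seed S_9

# Exactly d + 1 distinct eigenvalues, in accordance with Theorem main_th.
assert len(spec) == d + 1

# The multiplicities sum to the order of the matrix, i.e., the number
# of vertices of S_9.
order = sum(spec.values())
print(order)  # 32
```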
The construction is based on the matrix $M_{1}^{(5)}\in\mathcal{S}(S_{9})$ defined in Theorem~\ref{main_th}. We choose $\alpha=0$ and $\beta=32$, and after $k=5$ steps we obtain the matrix $M^{(5)}_1 (S_{9})\in\mathcal{S}(S_{9})$ with integral spectrum given by $$\Spec(M^{(5)}_1) = \{-62^{[1]}, -56^{[1]}, -48^{[2]}, -32^{[4]}, 0^{[8]}, 32^{[8]}, 80^{[4]}, 104^{[2]}, 116^{[1]}, 122^{[1]}\}.$$ It is depicted in Figure~\ref{S9}, where vertex weights denote the diagonal entries and edge weights denote the off-diagonal nonzero entries. In a Git repository\footnote{\url{https://github.com/Lucassib/Diminimal-Graph-Algorithm} or \url{https://lucassib-diminimal-graph-algorithm-st-app-0t3qu7.streamlit.app/}}, readers can access an algorithm based on the proof of Theorem \ref{main_th} to compute a matrix $M\in\mathcal{S}(S_{d})$, whose input parameters are $\alpha$, $\beta$ and $d$, with $k=\ceil{\frac{d}{2}}$. \begin{figure} \caption{ Matrix $M^{(5)}_1 \in \mathcal{S}(S_{9})$.} \label{S9} \end{figure} \section*{Acknowledgments} This work is partially supported by MATH-AMSUD under project GSA, Brazilian team financed by CAPES under project 88881.694479/2022-01. L. E. Allem acknowledges the support of FAPERGS 21/2551- 0002053-9. C.~Hoppen acknowledges the support of FAPERGS~19/2551-0001727-8 and CNPq (Proj.\ 315132/2021-3). V. Trevisan acknowledges partial support of CNPq grants 409746/2016-9 and 310827/2020-5, and FAPERGS grant PqG 17/2551-0001. CNPq is the National Council for Scientific and Technological Development of Brazil. \appendix \section{Additional results} We illustrate how Proposition~\ref{equivalence_other_seeds} can be proved by providing a detailed proof of item (ii). The proofs of (i) and (iii) are analogous. 
Proposition~\ref{equivalence_other_seeds}(ii) states that $T\in\mathcal{T}(S'_{2k+3})$ if, and only if, there exist $T_1,\ldots,T_p,T'_1,\ldots,T'_q\in \mathcal{T}^\ast, p,q\geq 1$ of height $k$ and $T_0,T'_0\in \mathcal{T}^\ast$ of height $k-1$ such that $$T=(T_0\odot(T_1,\ldots,T_p))\odot (T'_0\odot(T'_1,\ldots,T'_q)).$$ Let $T$ be a tree and $k\geq1$. The case $k=1$ ($S_5'=P_6$) is simple, so we concentrate on the case $k\geq 2$. First assume that there exist $T_1,\ldots,T_p,T'_1,\ldots,T'_q\in \mathcal{T}^\ast$ of height $k$, where $p,q\geq 1$, and $T_0,T'_0\in \mathcal{T}^\ast$ of height $k-1$ such that $T=(T_0\odot(T_1,\ldots,T_p))\odot (T'_0\odot(T'_1,\ldots,T'_q))$ (see Figure~\ref{fig:left}). Note that all paths of length $2k+3$ in $T$ may be decomposed as $Pv_0v_0'Q$ where $P$ is a path of length $k$ joining a leaf of some $T_i$ to its root $v_i$ and $Q$ is a path of length $k$ joining the root $v_j'$ of some $T_j'$ to one of its leaves. In particular, no such path uses vertices in $V(T_0-v_0)\cup V(T_0'-v_0')$ nor vertices in two different components of some $T_i-v_i$ or $T_j-v_j'$. \begin{figure} \caption{$T=(T_0\odot(T_1,\ldots,T_p))\odot (T'_0\odot(T'_1,\ldots,T'_q))$} \label{fig:left} \end{figure} By Proposition~\ref{prop_equivalence}, we know that the trees $T_1,\ldots,T_p,T'_1,\ldots,T'_q$ are unfoldings of $S_{2k-1}$ or $S_{2k}$, and that $T_0,T'_0$ are unfoldings of $S_{2k-3}$ or $S_{2k-2}$. Recall that, in part (ii) of the proof of Proposition~\ref{prop_equivalence}, given $j\geq 2$, we were able to fold the pair $(S_{2j-3},S_{2j-3})$ in $S_{2j}=S_{2j-3}\odot(S_{2j-3},S_{2j-3})$ to $S_{2j-3}\odot S_{2j-3}=S_{2j-1}$ without affecting the diameter of the tree. This \emph{does not} mean that $S_{2j}$ can always be folded onto $S_{2j-1}$, but instead that folding can be performed if the diameter of the tree is not modified. 
For the tree $T$ in this proposition, where paths of maximum length have the structure mentioned above, this means that any $T_i$ or $T_i'$ with $i>0$ may be folded directly to $S_{2k-1}$ or may first be folded to $S_{2k}=S_{2k-3} \odot (S_{2k-3},S_{2k-3})$, which can in turn be folded to $S_{2k-3} \odot S_{2k-3}=S_{2k-1}$. Similarly, if $k\geq 3$, $T_0$ and $T_0'$ may be folded directly to $S_{2k-3}$ or may first be folded to $S_{2k-2}=S_{2k-5} \odot (S_{2k-5},S_{2k-5})$ and then to $S_{2k-5} \odot S_{2k-5}=S_{2k-3}$. For $k=2$, $T_0$ and $T_0'$ have height 1, so they are equal to $S_{1}$ or they are stars that can be folded into $S_1$. Combining this, we conclude that $T$ can be folded to $$T'=(S_{2k-3}\odot(S_{2k-1},\ldots,S_{2k-1}))\odot (S_{2k-3}\odot(S_{2k-1},\ldots,S_{2k-1})),$$ with $p$ terms in the first list and $q$ terms in the second. Now, if $p>1$ or $q>1$, we can fold each $(S_{2k-1},\ldots,S_{2k-1})$ onto a single $S_{2k-1}$, without decreasing the diameter. This results in $(S_{2k-3}\odot S_{2k-1})\odot (S_{2k-3}\odot S_{2k-1})=S'_{2k+3}$, as required. Figure~\ref{fig:case} illustrates this case. \begin{figure} \caption{Folding of $T'$ into $(S_{2k-3}\odot S_{2k-1})\odot (S_{2k-3}\odot S_{2k-1})=S'_{2k+3}$.} \label{fig:case} \end{figure} For the converse, our proof is by induction on the number of branch decompositions performed on the seed $S'_{2k+3}$ to produce $T$. If no CBD was performed, then $T=S'_{2k+3}=(S_{2k-3}\odot S_{2k-1})\odot (S_{2k-3}\odot S_{2k-1})$ and we have $T=(T_0\odot T_1) \odot (T_0'\odot T_1')$ for $T_0=T_0'=S_{2k-3}$ (of height $k-1$) and $T_1=T_1'=S_{2k-1}$ (of height $k$). Now suppose that if $T\in\mathcal{T}(S'_{2k+3})$ has been formed after a sequence of $\ell$ branch decompositions, then there exist $T_0,T_1,\ldots,T_p,T_0',T'_1,\ldots,T'_q\in \mathcal{T}^\ast$ as in the statement of the theorem for which \begin{equation}\label{eq:2} T=(T_0\odot(T_1,\ldots,T_p))\odot (T'_0\odot(T'_1,\ldots,T'_q)). 
\end{equation} Note that the central edge of $T$ is $\{v_0,v_0'\}$ and the tree is rooted at $v_0$. We claim that if we perform an additional $s$-CBD on $T$, we still obtain a decomposition as in~\eqref{eq:2}. Indeed, let $U$ be the tree obtained after performing an $s$-CBD of a branch $B$ at $v\in V(T)$. First assume that $v\notin \{v_0,v_0'\}$. Without loss of generality, assume that $v\in V(T_i)$, so that, in case $i=0$, $v$ is not the root of $T_0$. Since the diameter remains the same, $B$ must be entirely contained in $T_i$. By Proposition~\ref{prop_equivalence}, the tree $U_i$ obtained after performing an $s$-CBD of branch $B$ at $v\in V(T_i)$ lies in $\mathcal{T}^\ast$. In particular, if we replace $T_i$ by $U_i$ in~\eqref{eq:2}, we get the desired decomposition of $U$. Next assume that $v = v_0$ (the case $v = v'_0$ is analogous). Let $B$ be the branch at $v_0$ involved in the duplication. This is not the branch that contains $v_0'$, otherwise the diameter would increase. If $B$ is entirely contained in $T_0$, we may repeat the above argument. Otherwise, $B=T_i$ for some $i$, and $$U=(T_0\odot(T_1,\ldots,T_i,T_i^{(1)},\ldots,T_i^{(s)},T_{i+1},\ldots,T_p))\odot (T'_0\odot(T'_1,\ldots,T'_q)),$$ where each $T_i^{(j)}$ is a copy of $T_i$. This concludes the proof. \end{document}
\begin{document} \begin{abstract} In this work we consider the two-dimensional percolation model arising from the majority dynamics process at a given time~$t\in\mathbb{R}_+$. We show the emergence of a sharp threshold phenomenon for the box crossing event at the critical probability parameter~$p_c(t)$ with a polynomial-size window. We then use this result in order to obtain stretched-exponential bounds on the one-arm event probability in the subcritical phase. Our results are based on differential inequalities derived from the OSSS inequality, inspired by the recent developments by Ahlberg, Broman, Griffiths, and Morris and by Duminil-Copin, Raoufi, and Tassion. We also provide analogous results for percolation in the voter model. \end{abstract} \maketitle \section{Introduction} ~ \par In recent years, the study of sharp threshold phenomena in percolation has received great attention. This is mainly due to the development of new techniques that allow the treatment of dependent models~\cite{dcrt0,dcrt1,dcrt2}. Following this line, in this paper we prove that, for each fixed $t \geq 0$, percolation in two-dimensional majority dynamics undergoes a sharp phase transition in the density parameter. \par In two-dimensional majority dynamics, each vertex $x \in \mathbb{Z}^{2}$ independently receives an initial opinion which can be either zero or one\footnote{Following the usual notation in percolation theory, we refer to sites with opinion zero as closed and to sites with opinion one as open.}. At rate one, the vertex $x$ updates its opinion to match the majority opinion among its neighbors. In the case of a tie, the original opinion is kept. Denote by $\mathbb{P}_{p,t}$ the distribution of the process at time $t$ when the initial density of ones is $p \in [0,1]$. 
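To fix ideas, a single update of the dynamics can be sketched in a few lines of Python (an illustration of ours, with hypothetical names; the finite free-boundary window is a simplification, not part of the formal definition of the model on $\mathbb{Z}^2$):

```python
def majority_update(config, x, y):
    """One update at vertex (x, y): adopt the majority opinion among the
    nearest neighbors, keeping the current opinion in case of a tie.

    config maps (i, j) -> 0 or 1; vertices absent from config lie outside
    the finite window and are ignored (free boundary)."""
    neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    opinions = [config[v] for v in neighbors if v in config]
    ones = sum(opinions)
    zeros = len(opinions) - ones
    if ones > zeros:
        return 1
    if zeros > ones:
        return 0
    return config[(x, y)]  # tie: keep the original opinion

# Three of the four neighbors of the origin hold opinion 1, so it flips to 1;
# in a 2-2 tie the origin would keep its current opinion instead.
config = {(0, 0): 0, (1, 0): 1, (-1, 0): 1, (0, 1): 1, (0, -1): 0}
print(majority_update(config, 0, 0))  # 1
```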
Our interest lies in understanding the critical percolation function defined as \begin{equation}\label{eq:perc_function} p_{c}(t) = \inf \left\{ p \in [0,1]: \mathbb{P}_{p,t}\left[\begin{array}{c} \text{there exists an} \\ \text{infinite open path} \end{array} \right]>0\right\}. \end{equation} \par Not much is known about the behavior of the function above. In a work by Amir and the second author~\cite{ab}, it is proved that, for each $t>0$, $p_{c}(t) \in \left[\frac{1}{2}, p_{c}^{site}\right)$, where $p_{c}^{site}$ is the critical threshold for two-dimensional site percolation. Moreover, the same work proves that $t \mapsto p_{c}(t)$ is a continuous non-increasing function and that there is no percolation at criticality for each $t \geq 0$. \par Our main result here regards crossing events. For each $n \in \mathbb{N}$ and $\lambda >0$, let $R_{n}^{\lambda} = [1, \lambda n]\times [1,n]$, and consider the crossing event \begin{equation} H(\lambda n,n) = \left\{\begin{array}{c} \text{there exists an open path contained in } R_{n}^{\lambda} \\ \text{ that connects } \{1\} \times [1,n] \text{ to } \{\lambda n\} \times [1,n] \end{array} \right\}. \end{equation} \begin{teo}\label{t:sharp_thresholds} For each $t \geq 0$, there exists $\gamma=\gamma(t)>0$ such that, for all $\lambda>0$, \begin{equation} \mathbb{P}_{p_{c}(t)-n^{-\gamma},t}[H(\lambda n,n)] \to 0 \quad \text{and} \quad \mathbb{P}_{p_{c}(t)+n^{-\gamma},t}[H(\lambda n,n)] \to 1, \end{equation} as $n$ grows. \end{teo} \begin{remark}\label{remark:lambda=1} Even though we state the theorem above for general aspect ratio, we write the proof for the case $\lambda=1$ and denote $R_{n}^{1}$ simply by $R_{n}$. The proof remains the same for the case when $\lambda \neq 1$. 
\end{remark} \newconstant{c:decay} \par As a consequence of Theorem~\ref{t:sharp_thresholds}, together with a general multiscale renormalisation argument, we obtain stretched-exponential decay of one-arm probabilities in the subcritical phase, together with analogous results for the supercritical case and dual closed $*$-paths\footnote{A $*$-path in $\mathbb{Z}^{2}$ is a path $x_{1}, x_{2}, \dots, x_{n}$ of vertices in $\mathbb{Z}^{2}$ such that $\pnorm{\infty}{x_{i+1}-x_{i}} = 1$, for all $i=1,2, \dots, n-1$. In other words, it is a path that is allowed to cross diagonals on the lattice $\mathbb{Z}^{2}$.}: \begin{teo}\label{t:exp_decay} For any $t \geq 0$, $\varepsilon>0$, and $p < p_{c}(t)$, there exists a positive constant $\useconstant{c:decay}=\useconstant{c:decay}(p, t, \varepsilon)>0$ such that \begin{equation} \label{eq:exp_decay1} \mathbb{P}_{p,t}\left[\begin{array}{c} \text{there exists an open} \\ \text{ path connecting } 0 \text{ to the} \\ \text{ boundary of the ball } B(0,n) \end{array} \right] \leq \useconstant{c:decay}^{-1} \exp\left\{-\useconstant{c:decay}\frac{n}{(\log n)^\varepsilon}\right\}. \end{equation} Furthermore, if~$p > p_{c}(t)$, regarding long closed $*$-paths, we have: \begin{equation} \label{eq:exp_decay2} \mathbb{P}_{p,t}\left[\begin{array}{c} \text{there exists a closed} \\ \text{ $*$-path connecting } 0 \text{ to the} \\ \text{ boundary of the ball } B(0,n) \end{array} \right] \leq \useconstant{c:decay}^{-1} \exp\left\{-\useconstant{c:decay}\frac{n}{(\log n)^\varepsilon}\right\}. \end{equation} Also for~$p > p_{c}(t)$, we have \begin{equation} \label{eq:exp_decay3} \mathbb{P}_{p,t}\left[\begin{array}{c} \text{The open cluster containing }0 \\ \text{is finite with diameter }n \end{array} \right] \leq \useconstant{c:decay}^{-1} \exp\left\{-\useconstant{c:decay}\frac{n}{(\log n)^\varepsilon}\right\}, \end{equation} and an analogous result holds for the closed $*$-connected cluster containing~$0$ when~$p < p_{c}(t)$. 
\end{teo} \begin{remark} We strongly believe the poly-logarithmic correction present in the exponent of the inequalities above to be a shortcoming of the renormalisation argument used, and conjecture the correct bound to be simply exponential in~$n$. \end{remark} \begin{remark} The above result follows from a general statement inspired by~\cite{pt} that we prove here about dependent percolation with fast decay of correlations. Under general conditions (see Proposition~\ref{p:1armdecaygen}), which are provided by Theorem~\ref{t:sharp_thresholds} and decoupling inequalities, this statement implies stretched-exponential decay of the one-arm event's probability whenever the probability of crossing a large annulus is sufficiently small. \end{remark} \begin{remark} Regarding crossing functions and one-arm events for $p=p_{c}(t)$, in~\cite{ab} it is proved that the probability of $H(\lambda n,n)$ is bounded away from zero and one uniformly in $n$. This follows by combining Theorem 1.1 and Lemma 4.6 from that paper, together with a RSW theory for $*$-crossings analogous to the one that can be found in~\cite{att}. Furthermore, the decay of the one-arm event is polynomial in $n$. \end{remark} \noindent \textbf{Overview of the proofs.} The proof of Theorem~\ref{t:sharp_thresholds} relies on exploiting the relation between Boolean functions and randomized algorithms obtained through OSSS inequality. Here it is possible to write the existence of a crossing at time $t$ as a random Boolean function of the initial configuration, with randomness coming from the evolution of majority dynamics. A first approach would then be to consider the quenched configuration, where the clocks of the Markov process are fixed, and try to use these tools directly on the space of initial configurations, for each possible realization of the Poisson clocks in the evolution. This idea fails, since quenched configurations lack the homogeneity needed for our arguments. 
\par To circumvent this difficulty, we need to consider the randomness that comes from the evolution together with the one from the initial configuration. We then revisit the idea developed in~\cite{abgm}, and further explored in~\cite{att} and~\cite{ahlbergb}, of using a two-stage construction of the process to obtain a discretization of it that still retains relevant properties of the annealed evolution. The central idea is to construct the process in a way that each vertex is associated with a Poisson point process of clocks of intensity $k \in \mathbb{N}$, with~$k$ large. Whenever a clock at a given vertex ticks, we keep this tick with probability $\frac{1}{k}$ and, in this case, update the opinion of the vertex to agree with the majority of its neighbors. \par This artificial increase of the density of clock ticks now allows us to consider quenched probabilities, as we condition on the denser Poisson process, and still retain good properties of the annealed configuration with large probability. Given a collection of clock ticks, we obtain a Boolean function by considering the initial opinions and the selection of the clock ticks that are kept for the evolution. \par We then proceed to analyze this quenched random Boolean function. First, we devise an algorithm that determines the outcome of the function and bound its revealment. This algorithm is a simple exploration process that discovers the open components that intersect a random line crossing the rectangle $R_{n}$ by querying the initial state of sites and which clock ticks are selected to compose the evolution. The bound on the revealment will follow from one-arm estimates in the quenched setting (see Proposition~\ref{prop:one_arm}). These estimates in turn are derived from Russo-Seymour-Welsh-type results stated in~\cite{ab} and inspired by~\cite{tassion}. 
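In the much simpler setting of a fixed site configuration, the kind of exploration process described above can be sketched as follows (a Python illustration of ours, with hypothetical names; in the paper the queries concern both initial opinions and selected clock ticks, and the exploration starts from a random line rather than from the left side of the box):

```python
from collections import deque

def crosses_left_right(open_site, width, height):
    """Decide whether an open left-right crossing of the grid
    [0, width) x [0, height) exists, revealing a site only when the
    exploration reaches it; the number of revealed sites is the quantity
    that revealment bounds control.

    open_site(v) -> bool reports whether site v is open."""
    revealed = {}

    def query(v):
        # Reveal the state of v at most once.
        if v not in revealed:
            revealed[v] = open_site(v)
        return revealed[v]

    # Start from the open sites on the left boundary and explore by BFS.
    frontier = deque((0, y) for y in range(height) if query((0, y)))
    seen = set(frontier)
    while frontier:
        x, y = frontier.popleft()
        if x == width - 1:
            return True, len(revealed)
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            vx, vy = v
            if 0 <= vx < width and 0 <= vy < height and v not in seen and query(v):
                seen.add(v)
                frontier.append(v)
    return False, len(revealed)

# A 3x3 window whose middle row is entirely open is crossed left to right.
config = {(x, 1) for x in range(3)}
crossed, n_revealed = crosses_left_right(lambda v: v in config, 3, 3)
print(crossed)  # True
```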
\par Since we are considering randomness that comes from the time evolution as well, when applying the OSSS inequality it will be necessary to control the influence of clock ticks. We relate time-pivotality to space-pivotality, bounding the influence of a clock tick by a combination of the influences of the initial positions (see Proposition~\ref{prop:piv}). This pivotality relation is the most original and sensitive part of our proof, and fails, for example, if one considers the contact process instead of majority dynamics as the rule for the time evolution of the opinions. Nevertheless, we can also prove a similar result for the voter model (see Section~\ref{sec:further_models}). With this relation in hand, we are able to conclude the proof of Theorem~\ref{t:sharp_thresholds}. \par Let us now turn our attention to the proof of Theorem~\ref{t:exp_decay}. Here, we provide a general statement on the decay rate of the one-arm probability in percolation models with fast decaying correlations. We prove that, provided the annulus crossing probability goes to~$0$ as the size of the annulus goes to infinity, the rate of decay of the one-arm probability is at least stretched exponential in the ball's radius. Combining this with Theorem~\ref{t:sharp_thresholds} yields Theorem~\ref{t:exp_decay}. The proof of this statement relies on a multiscale renormalisation argument adapted from~\cite{pt}. \begin{remark} Our technique is somewhat general and might be applied to other dynamics. As an example, in Section~\ref{sec:further_models}, we explain how to adapt it to the case when the opinions follow the voter model. The greatest obstacle to a broader generalization is the lemma relating time- and space-pivotality, whose proof is strongly model-dependent. \end{remark} \begin{remark} Camia, Newman, and Sidoravicius in~\cite{cns} prove that fixation of the opinions happens with stretched-exponential speed in a sub-interval of the supercritical phase.
The idea of the proof is to observe that, if $p$ is larger than $p_{c}^{site}$ (the critical probability for Bernoulli site percolation in~$\mathbb{Z}^2$), one can obtain a random partition of $\mathbb{Z}^{2}$ into finite subsets whose boundaries are circuits of constant initial opinion which are preserved by the dynamics, reducing the evolution to finite random subsets. This, together with the uniform bound on the number of changes in opinion each vertex can have (see Tamuz and Tessler~\cite{tt}), allows one to conclude that the speed of convergence is stretched exponential. They further improve the proof by performing an enhancement on the initial configuration, and conclude that stretched exponential decay also holds for values of $p$ slightly smaller than $p_{c}^{site}$. We remark that the same idea can be applied together with Theorem~\ref{t:exp_decay} to verify that stretched-exponential decay of the non-fixation probability also holds for $p \in \left( \lim p_{c}(t), 1 \right]$. Symmetry considerations imply an analogous result for $p \in \left[0, 1- \lim p_{c}(t) \right)$. \end{remark} \noindent \textbf{Related works.} Russo's approximate 0-1 law~\cite{russo} is one of the first results regarding sharp thresholds in independent percolation. It says that a sequence of monotone Boolean functions exhibits a sharp threshold, provided the supremum of the influences converges to zero. The use of randomized algorithms and OSSS inequality to understand threshold phenomena is much more recent and so far has proven to be a very powerful technique. Duminil-Copin, Raoufi and Tassion~\cite{dcrt1, dcrt2} use these techniques to study the subcritical phase of Voronoi percolation and threshold phenomena for the random-cluster and Potts models, while~\cite{dcrt0}, by the same authors, considers the case of Boolean percolation, under moment conditions of the radii distribution. \par After these seminal works, other applications of such techniques were found. 
Muirhead and Vanneuville~\cite{mv} use this approach to conclude that level-set percolation for a wide class of smooth Gaussian processes undergoes a sharp phase transition. Dereudre and Houdebert~\cite{dh} conclude similar statements for the Widom-Rowlinson model. \par The collection of upper invariant measures for the contact process was also studied. Van den Berg~\cite{vdb} considers the two-dimensional case, and proves the existence of a sharp phase transition without relying on the OSSS inequality, but on Talagrand's inequality instead. \par The discretization we use here is more in line with the one considered in Ahlberg, Broman, Griffiths, and Morris~\cite{abgm}, where the authors prove noise sensitivity for the critical Boolean model. With a similar discretization, and relying on Talagrand's inequality~\cite{talagrand}, Ahlberg, Tassion, and Teixeira~\cite{att}~deduce that Boolean percolation undergoes a sharp phase transition. Furthermore, Ahlberg, in collaboration with the second author~\cite{ahlbergb}, employs this technique to study noise sensitivity of two-dimensional Voronoi percolation and concludes, as a corollary, the existence of a sharp threshold with polynomial window. \noindent \textbf{Open problems.} Regarding the percolation function $p_{c}(t)$ (see Equation~\eqref{eq:perc_function}), it is known that it is a continuous non-increasing function that is strictly decreasing at zero. Whether or not it is strictly decreasing on the whole non-negative real line is still not known. We hope our new estimates on the connectivity decay of the subcritical phase might help. Regarding its asymptotic behavior, we conjecture that $p_{c}(t)$ converges to $\frac{1}{2}$, as $t$ grows. From~\cite{tt}, one obtains that, almost surely, the process has a limiting configuration $\eta_{\infty}$. General results on two-dimensional percolation imply that $\eta_{\infty}$ does not percolate for $p = \frac{1}{2}$ (see Gandolfi, Keane, and Russo~\cite{gkr}).
\par Our techniques are reliant on RSW theory, and are therefore limited to two dimensions. We believe our results to be valid for any dimension and for a large class of particle system models, and that with future developments in the field such general problems will be tractable. An interesting process where this should give some insight is zero-temperature Glauber dynamics for the Ising model. Here, the main difficulty is in relating time- and space-pivotality. We intend to pursue this in a future work. \par Another problem this work leaves open is the correct decay of the one-arm probabilities in Theorem~\ref{t:exp_decay}, which we conjecture to be simply exponential in the distance~$n$. \noindent \textbf{Organization of the paper.} In Section~\ref{sec:properties}, we state properties of majority dynamics and some results that will be used throughout the text. Section~\ref{sec:construction} contains a graphical construction of majority dynamics that will be used in our results, while Section~\ref{sec:influence} discusses the concept of influences and pivotality in the quenched setting. We present a randomized algorithm and bound its revealment in Section~\ref{sec:alg}, and use this algorithm to conclude the proof of Theorem~\ref{t:sharp_thresholds} in Section~\ref{sec:thresholds}. In Section~\ref{sec:one_arm}, we provide quenched one-arm estimates for the model that were previously assumed in the proof of Theorem~\ref{t:sharp_thresholds}. Theorem~\ref{t:exp_decay} is proved in Section~\ref{sec:decay}. Finally, we discuss how to modify our result to the case when the dynamics follows the voter model in Section~\ref{sec:further_models}. \noindent \textbf{Acknowledgments.} The authors thank Daniel Ahlberg, Augusto Teixeira, and Daniel Valesin for valuable discussions and improvements during the elaboration of this work. CA is supported by the DFG grant SA 3465/1-1. 
RB is supported by the Israel Science Foundation through grant 575/16 and by the German Israeli Foundation through grant I-1363-304.6/2016. \section{Basic properties}\label{sec:properties} ~ \par We denote by~$\eta\equiv\eta(p)=(\eta_t)_{t \in \mathbb{R}_{+}}$ the two-dimensional majority dynamics process with initial configuration~$\eta_0\in\{0,1\}^{\mathbb{Z}^2}$, which assigns i.i.d.\ $\mathrm{Bernoulli}(p)$ random variables to each vertex of~$\mathbb{Z}^2$. As mentioned in the Introduction, we denote by~$\mathbb{P}_{p,t}$ the law of~$\eta_t=\eta_t(p)$. We collect here facts about this collection of measures. Complete proofs can be found in~\cite{ab} and references therein. \par Notice that, as a consequence of Harris~\cite{harris} and a correlation decay estimate (see Equation~\eqref{eq:correlation_decay_radius}) used to extend the result in~\cite{harris} to a countable state space, the measures $\mathbb{P}_{p,t}$ are positively associated. This is the same as stating that $\mathbb{P}_{p,t}$ satisfies the FKG inequality: for any two events $A$ and $B$ that are increasing with respect to the partial ordering\footnote{We say $\eta \preceq \xi$ if $\eta(x) \leq \xi(x)$, for all $x \in \mathbb{Z}^{2}$. An event $A$ is increasing with respect to this partial ordering if $\eta \in A$ and $\eta \preceq \xi$ imply $\xi \in A$.} of $\{0,1\}^{\mathbb{Z}^{2}}$, it holds that \begin{equation} \mathbb{P}_{p,t}[A \cap B] \geq \mathbb{P}_{p,t}[A] \mathbb{P}_{p,t}[B]. \end{equation} \par Given two disjoint subsets $A$ and $B$ of $\mathbb{Z}^{2}$ and $X \subset \mathbb{Z}^{2}$ such that $A \cup B \subset X$, we define the event \begin{equation}\label{eq:open_connection} \left\{A \overset{X}{\longleftrightarrow} B\right\} \end{equation} as the existence of an open path contained in $X$ connecting a vertex in $A$ to a vertex in $B$. We omit $X$ in the notation above when $X=\mathbb{Z}^{2}$. The event where percolation holds is defined as the existence of an infinite open path.
Standard arguments yield that \begin{equation} \mathbb{P}_{p,t}[\eta \text{ percolates}]>0 \quad \text{if, and only if,} \quad \inf_{n}\mathbb{P}_{p,t}\left[\{0\} \leftrightarrow \partial B(0,n)\right]>0, \end{equation} where $\partial B(0,n) = \{x \in \mathbb{Z}^{2}: \pnorm{\infty}{x}=n\} $ is the boundary of the ball $B(0,n)=[-n,n]^{2}$. \newconstant{c:correlation} \newconstant{c:rsw} \par Let us now list some properties of the probabilities $\mathbb{P}_{p,t}$ for a fixed $t$. First of all, we state correlation decay for these measures, which is a consequence of standard cone-of-light estimates (see Propositions~\ref{prop:col} and~\ref{prop:decoup} below). For each $t \geq 0$, there exists a constant $\useconstant{c:correlation}=\useconstant{c:correlation}(t)$ such that, if $A$ is an event that depends on the configuration $\eta_{t}(x)$ only on sites inside $[-n,n]^{2}$ and $B$ is an event that depends on the configuration on sites outside $[-2n,2n]^{2}$, then, for every $p \in [0,1]$, \begin{equation}\label{eq:correlation_decay_radius} \Big|\mathbb{P}_{p,t}[A \cap B] - \mathbb{P}_{p,t}[A]\mathbb{P}_{p,t}[B] \Big| \leq \useconstant{c:correlation}n^{2}e^{-\frac{n}{8}\log n}. \end{equation} \par Given $\lambda>0$, denote by $H(\lambda n, n)$ the crossing event \begin{equation} H(\lambda n,n) = \left[ \{1\} \times [1,n] \overset{R_{n}}{\longleftrightarrow} \{\lfloor \lambda n \rfloor \} \times [1,n] \right], \end{equation} where $R_{n}\equiv R_n(\lambda)=[1,\lambda n] \times [1,n]$, and let $H^{*}(\lambda n, n)$ denote the event of the existence of a closed horizontal $*$-crossing of the rectangle $R_{n}$. The main result regarding crossing events is the RSW theory, which we can obtain by adapting the proofs of Tassion~\cite{tassion}, since they rely on the invariance of the percolation measure under certain symmetries of~$\mathbb{Z}^2$, decay of correlations, bounds for crossings of squares, and the FKG inequality, properties that are also available to us.
\begin{prop}[RSW theory]\label{prop:RSW} For each fixed value of $t \geq 0$ and each $\lambda>0$, there exists a positive constant $\useconstant{c:rsw}=\useconstant{c:rsw}(\lambda, t)>0$ such that \begin{equation}\label{eq:rsw} \useconstant{c:rsw} \leq \mathbb{P}_{p_{c}(t),t}\left[H(\lambda n,n) \right] \leq 1-\useconstant{c:rsw}, \end{equation} for all $n \in \mathbb{N}$. \end{prop} \par Since $H(\lambda n, n)$ holds if, and only if, there is no closed vertical $*$-crossing of $R_{n}=[1,\lambda n] \times [1,n]$, one can easily deduce from the proposition above that an analogous result holds for the event $H^{*}(\lambda n, n)$. Furthermore, monotonicity considerations imply that, for all $p \geq p_{c}(t)$, \begin{equation} \label{eq:rsw2} \inf_{n} \mathbb{P}_{p,t}\left[H(\lambda n,n) \right] \geq \useconstant{c:rsw}(\lambda, t), \end{equation} and, for all $p \leq p_{c}(t)$, \begin{equation} \label{eq:rsw3} \inf_{n} \mathbb{P}_{p,t}\left[H^{*}(\lambda n,n) \right] \geq \useconstant{c:rsw}(\lambda^{-1}, t). \end{equation} Since this is a straightforward adaptation of the proof in~\cite{tassion}, we choose to omit it here. \noindent \textbf{The OSSS inequality.} Let us quickly recall the version of the OSSS inequality we use here. Fix $f:\{0,1\}^{n} \to \{0,1\}$ a Boolean function and, for a vector $\bold{p} = (p_{1}, \dots, p_{n})$, let $\mathbb{P}_{\bold{p}}$ denote the probability measure on $\{0,1\}^{n}$ where each entry is independent and the $i$-th entry has probability $p_{i}$ of being one. For each $i \in [n]$, we define the influence of the bit $i$ as \begin{equation} \infl_{\bold{p}}(f,i) = \mathbb{P}_{\bold{p}}[f(\omega) \neq f(\omega^{i})], \end{equation} where $\omega^{i}$ is obtained from $\omega$ by changing the $i$-th entry of the vector.
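As a purely illustrative aside, and not part of the arguments of this paper, the influence of a bit under a product measure can be computed by direct enumeration for small Boolean functions. The following Python sketch (all names are ours) does this for the 3-bit majority function, for which flipping bit $i$ changes the output exactly when the other two bits disagree, so that $\infl_{p}(f,i)=2p(1-p)$ when all bits share the same parameter $p$:

```python
from itertools import product

def maj3(w):
    # Majority of three bits.
    return 1 if sum(w) >= 2 else 0

def influence(f, i, p, n=3):
    """Infl_p(f, i) = P[f(w) != f(w^i)] under the product Bernoulli(p)
    measure, where w^i is w with its i-th entry flipped."""
    total = 0.0
    for w in product((0, 1), repeat=n):
        weight = 1.0
        for b in w:
            weight *= p if b == 1 else 1.0 - p
        wi = list(w)
        wi[i] ^= 1  # flip the i-th entry
        if f(w) != f(tuple(wi)):
            total += weight
    return total

# Flipping bit i changes the majority iff the other two bits disagree,
# which happens with probability 2 p (1 - p).
for p in (0.5, 0.3):
    for i in range(3):
        assert abs(influence(maj3, i, p) - 2 * p * (1 - p)) < 1e-12
```

The same enumeration applies verbatim to any small monotone function; only the exhaustive sum over $\{0,1\}^{n}$ limits it to toy sizes.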
\par A (randomized) algorithm $\mathcal{A}$ is a rule that outputs a value zero or one, by querying entries of the vector $\omega$, and whose choice of the next entry to be queried is allowed to depend on the previous observations. An algorithm can determine its output before querying all bits; in this case we say the algorithm~\emph{stops}. We say that the algorithm determines $f$ if its outcome coincides with $f(\omega)$, for every $\omega$. The revealment of the bit $i$ for an algorithm $\mathcal{A}$ is the quantity \begin{equation} \delta(\mathcal{A},i) = \mathbb{P}_{\bold{p}}[\mathcal{A} \text{ queries } i \text{ before stopping}]. \end{equation} \par The OSSS inequality (see~\cite{osss}, Theorem 3.1) provides the bound \begin{equation} \label{eq:osss} \var(f) \leq \sum_{i=1}^{n}\delta(\mathcal{A},i)\infl_{\bold{p}}(f,i). \end{equation} Poincar\'{e}'s inequality can be recovered from the above inequality by bounding all the revealments by one. \section{The two-stage construction}\label{sec:construction} ~ \par In this section we present a graphical construction of majority dynamics that will be used in the rest of the paper. We begin by presenting the usual Harris construction, since we will use a simple modification of it. \par Consider a collection $\mathscr{P}=\big( \mathscr{P}_{x} \big)_{x \in \mathbb{Z}^2}$ of i.i.d. Poisson processes in the interval~$[0,t]$ with rate one. For each $x \in \mathbb{Z}^2$, the clocks $\mathscr{P}_{x}$ will control the updates at that site: whenever the clock at $x$ ticks, the opinion at $x$ is updated to match the majority of its neighbors. In case of a draw, the site keeps its original opinion. With this construction, we can fully determine the state of the system at any given time with the collection of clocks $\mathscr{P}$ and the initial configuration $\eta_{0}$.
This is a classical fact that follows from the observation that the only vertices whose initial opinions are necessary in order to determine $\eta_{s}(x)$, for each $x \in \mathbb{Z}^{2}$ and $s \leq t$, are the ones which are connected to $x$ via a path of vertices with clocks that ring in increasing order. This set of points is easily seen to be almost surely finite. This same fact is at the heart of the cone-of-light estimates presented below (see Proposition~\ref{prop:col}). \begin{remark}\label{remark:voter_model} It is possible to obtain the voter model with the same graphical construction, just by modifying the way sites are updated: instead of choosing the new opinion to be the majority of the neighboring opinions, the update is made by copying the opinion of a randomly selected neighbor. \end{remark} \par The construction we will use is a slight modification of the one presented above. Instead of considering the collection of clocks $\mathscr{P}$, we start with a denser collection of clocks $\mathscr{P}^{k}=\big( \mathscr{P}_{x}^{k} \big)_{x \in \mathbb{Z}^2}$ distributed as i.i.d. Poisson processes on the interval~$[0,t]$ with rate $k$, where $k$ is a fixed positive integer number that will be taken to be large. With this collection of clocks in hand, we need some additional randomness in order to define the process: whenever a clock ticks, we perform the update at the respective site with probability $\frac{1}{k}$ (this can be realized by considering an independent $\ber\left(\frac{1}{k}\right)$ random variable for each clock tick of $\mathscr{P}^{k}$). In this case, conditioned on the realization of the clocks $\mathscr{P}^{k}$, we can obtain the state of the system at any given time $t \geq 0$ by using the initial configuration $\eta_{0}$ and the collection of random variables that verify whether or not each update is performed. 
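For intuition only, and with none of the proofs depending on it, the two-stage construction above can be simulated in a few lines of Python on a finite torus, a finite-volume stand-in for $\mathbb{Z}^2$; the function and parameter names below are ours. Each site carries a rate-$k$ Poisson clock on $[0,t]$, every tick is kept with probability $1/k$, and a kept tick updates the site to the majority opinion among its four neighbors, with a draw keeping the current opinion:

```python
import math
import random

def poisson_sample(rng, lam):
    # Knuth's multiplication method; adequate for the moderate rates used here.
    threshold = math.exp(-lam)
    count, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return count
        count += 1

def majority_dynamics_torus(L, p, t, k, seed=0):
    """Two-stage construction of majority dynamics on the L x L torus.

    Each site receives a Poisson(k * t) number of clock ticks placed
    uniformly on [0, t]; every tick is kept with probability 1/k, and a
    kept tick updates the site to the majority opinion among its four
    neighbors (a draw keeps the current opinion).
    """
    rng = random.Random(seed)
    eta = {(x, y): int(rng.random() < p) for x in range(L) for y in range(L)}
    # The dense clock process P^k together with the Bernoulli(1/k) marks
    # selecting which ticks take part in the evolution.
    ticks = []
    for site in eta:
        for _ in range(poisson_sample(rng, k * t)):
            ticks.append((rng.uniform(0.0, t), site, rng.random() < 1.0 / k))
    ticks.sort()  # process the ticks in chronological order
    for _, (x, y), kept in ticks:
        if not kept:
            continue
        nbrs = [((x + 1) % L, y), ((x - 1) % L, y),
                (x, (y + 1) % L), (x, (y - 1) % L)]
        ones = sum(eta[v] for v in nbrs)
        if ones > 2:
            eta[(x, y)] = 1
        elif ones < 2:
            eta[(x, y)] = 0
        # ones == 2 is a draw: the site keeps its current opinion
    return eta
```

Since the kept ticks form a thinned Poisson process of rate $k \cdot \frac{1}{k} = 1$, the simulated marginal law of the configuration at time $t$ agrees with that of the usual Harris construction.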
We will denote by $\mathbb{P}_{kt}$ the distribution of $\mathscr{P}^{k}$ and by $\mathbb{P}_{p, \frac{1}{k}}$ the joint distribution of the initial condition and the additional randomness necessary in order to determine the process~$(\eta_{s})_{s \geq 0}$. \par The advantage of the last construction presented above lies in the fact that the model at time $t$ may be seen as a random Boolean function: for each realization of $\mathscr{P}^{k}$, we obtain a Boolean function whose entries specify the initial configuration and which updates are performed. By choosing the value of $k$ large enough, we can ensure that these random functions are well-behaved, in a sense that we will make clear later. \par We will work with the process conditioned on the realization $\mathscr{P}^{k}$. In this case, we may write the characteristic function of the crossing event $\left[ \eta_{t} \in H(\lambda n,n)\right]$ as a Boolean function $f_{n}: \{0,1\}^{\Lambda} \to \{0,1\}$, where \[ \Lambda = \mathbb{Z}^2 \cup \{(x,s): x \in \mathbb{Z}^2, \, s \in \mathscr{P}^{k}_{x} \cap [0,t]\}, \] and such that each configuration describes the entries at time zero and whether each clock tick before time $t$ is accepted or not. We will denote a configuration on $\{0,1\}^{\Lambda}$ by a pair $(\eta_{0}, \mathscr{P})$, where the first coordinate contains the initial opinions of each site and the second retains the information of which clock ticks are kept. Moreover, each entry of $\eta_{0}$ will be distributed as a $\ber(p)$ random variable, where $p \in [0,1]$ is the initial density of the process, and each entry of $\mathscr{P}$ will have distribution $\ber\left(\frac{1}{k}\right)$. \par Since, almost surely (on $\mathscr{P}^{k}$), one needs to observe only a finite number of sites in order to verify if $H(\lambda n,n)$ holds or not, the domain of $f_{n}$ is almost surely finite and hence this is a well-defined Boolean function.
\par The main reason we consider this construction is the following lemma. \begin{lemma}\label{lemma:variance_decay} For every integer $k \geq 2$, $p \in (0,1)$ and Boolean function $f$ of the graphical construction, we have \begin{equation*} \var\Big(\mathbb{E}\left[f(\eta_{0}, \mathscr{P})\middle|\mathscr{P}^{k}\right]\Big) \leq \frac{1}{k}. \end{equation*} \end{lemma} \begin{proof} The proof follows simply by considering a particular construction of the pair $(\mathscr{P},\mathscr{P}^{k})$. First, let $\mathscr{P}_{1},\mathscr{P}_{2},\ldots,\mathscr{P}_{k}$ be independent copies of $\mathscr{P}^{1}$ (and independent of $\eta_{0}$), and let $\kappa$ be chosen uniformly in $[k]\equiv\{1,\dots,k\}$. Observe that $(\mathscr{P},\mathscr{P}^{k}) \sim (\mathscr{P}_{\kappa}, \cup_{i \in [k]} \mathscr{P}_{i})$. From this, one readily obtains the equality \begin{equation*} \var\left(\mathbb{E}\left[f(\eta_{0}, \mathscr{P})\middle|\mathscr{P}^{k}\right]\right) = \var\Big(\mathbb{E}\Big[f(\eta_{0}, \mathscr{P}_{\kappa})\Big|\cup_{i \in [k]}\mathscr{P}_{i}\Big]\Big). \end{equation*} Directly from Jensen's inequality we obtain\footnote{If $\mathcal{F} \subset \mathcal{G}$ are two $\sigma$-algebras, Jensen's inequality implies \begin{equation*} \mathbb{E} \Big[X\Big|\mathcal{F}\Big]^{2} = \mathbb{E} \Big[\mathbb{E}[X|\mathcal{G}]\Big|\mathcal{F}\Big]^{2} \leq \mathbb{E}\Big[\mathbb{E}[X|\mathcal{G}]^{2}\Big|\mathcal{F}\Big], \end{equation*} from which one deduces that \begin{equation*} \var\left(E[X|\mathcal{F}]\right) = \mathbb{E}\left[E[X|\mathcal{F}]^{2}\right] - E[X]^{2} \leq \mathbb{E}\left[E[X|\mathcal{G}]^{2}\right] - E[X]^{2} = \var\left(E[X|\mathcal{G}]\right).
\end{equation*} Estimate~\eqref{eq:variance_inequality} follows directly.} \begin{equation}\label{eq:variance_inequality} \var\Big(\mathbb{E}\Big[f(\eta_{0}, \mathscr{P}_{\kappa})\Big|\cup_{j \in [k]}\mathscr{P}_{j}\Big]\Big) \leq \var \Big(\mathbb{E}\left[f(\eta_{0}, \mathscr{P}_{\kappa})\middle|(\mathscr{P}_{i})_{i=1}^{k}\right]\Big) \end{equation} The conditional expectation on the variance above can be easily calculated \begin{equation*} \mathbb{E}\left[f(\eta_{0}, \mathscr{P}_{\kappa})\middle|(\mathscr{P}_{i})_{i=1}^{k}\right] = \frac{1}{k}\sum_{i=1}^{k}\mathbb{E}_{\eta_0}[f(\eta_{0}, \mathscr{P}_{i})], \end{equation*} where $\mathbb{E}_{\eta_{0}}$ denotes the expectation with respect to the initial condition $\eta_{0}$. Finally, since the processes $(\mathscr{P}_{i})_{i=1}^{k}$ are independent, we obtain \begin{equation*} \var\bigg(\frac{1}{k}\sum_{i=1}^{k}\mathbb{E}_{\eta_0}[f_{n}(\eta_{0}, \mathscr{P}_{i})]\bigg) \leq \frac{1}{k}, \end{equation*} concluding the proof. \end{proof} \newconstant{c:cir} \par We can use the above lemma together with RSW theory to bound quenched probabilities in good events. Let \begin{equation}\label{eq:cir} \cir(n) = \left\{ \begin{array}{c} \text{there exists an open circuit} \\ \text{contained in } B\left(0, 3n \right) \setminus B\left(0, n \right) \end{array}\right\}, \end{equation} and write $\cir^{*}(n)$ for the equivalent event, but asking for the existence of a closed $*$-circuit. Notice that Equations~(\ref{eq:rsw2}) and~(\ref{eq:rsw3}) and the FKG inequality imply that there exists a positive constant $\useconstant{c:cir}=\useconstant{c:cir}(t)>0$ such that \begin{equation}\label{eq:RSW_cir} \inf_{n}\mathbb{P}_{p,t}\left[\cir(n)\right] \geq \useconstant{c:cir}, \end{equation} if $p \geq p_{c}(t)$, and \begin{equation}\label{eq:RSW_cir2} \inf_{n}\mathbb{P}_{p,t}\left[\cir^{*}(n)\right] \geq \useconstant{c:cir}, \end{equation} for $p \leq p_{c}(t)$. 
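Incidentally, the averaging mechanism behind Lemma~\ref{lemma:variance_decay}, namely that the variance of an average of $k$ independent $[0,1]$-valued terms is at most $1/k$, is easy to check numerically. The Python sketch below is illustrative only; uniform variables stand in for the conditional expectations $\mathbb{E}_{\eta_0}[f(\eta_{0}, \mathscr{P}_{i})]$, and all names are ours:

```python
import random

def var_of_mean(k, trials=20000, seed=1):
    """Monte Carlo estimate of Var((1/k) * (X_1 + ... + X_k)) for i.i.d.
    [0,1]-valued variables X_i (here Uniform[0,1] as a stand-in)."""
    rng = random.Random(seed)
    samples = [sum(rng.random() for _ in range(k)) / k for _ in range(trials)]
    m = sum(samples) / trials
    return sum((s - m) ** 2 for s in samples) / trials

# Each term is bounded in [0, 1], so its variance is at most 1/4 and the
# variance of the mean is at most 1/(4k) <= 1/k, matching the lemma's bound.
for k in (2, 8, 32):
    assert var_of_mean(k) <= 1.0 / k
```

For Uniform$[0,1]$ terms the true value is $\frac{1}{12k}$, well inside the $1/k$ bound of the lemma.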
\begin{lemma}\label{lemma:quenched_cir} For any fixed $t \geq 0$ and $k \geq 2$, \begin{equation*} \mathbb{P}_{kt} \Big[\mathbb{P}_{p , \frac{1}{k}}\left[ \cir(n)\middle| \mathscr{P}^{k}\right] \leq \frac{\useconstant{c:cir}}{2} \Big] \leq \frac{4}{\useconstant{c:cir}^{2}k}, \end{equation*} for all $n \geq 1$ and $p \geq p_{c}(t)$. An analogous estimate holds for $\cir^{*}(n)$ if $p \leq p_{c}(t)$. \end{lemma} \begin{proof} The same proof as that of Lemma~\ref{lemma:variance_decay} applies to the characteristic function of $\cir(n)$. Combining this with Chebyshev's inequality and~\eqref{eq:RSW_cir} implies \begin{equation*} \begin{split} \mathbb{P}_{kt} & \Big[\mathbb{P}_{p, \frac{1}{k}}\left[\cir(n)\middle| \mathscr{P}^{k}\right] \leq \frac{\useconstant{c:cir}}{2} \Big]\\ & \qquad\leq \mathbb{P}_{kt} \left[\Big|\mathbb{P}_{p, \frac{1}{k}}\left[\cir(n)\middle| \mathscr{P}^{k}\right]-\mathbb{P}_{p, t}[\cir(n)]\Big| \geq \frac{\useconstant{c:cir}}{2}\right] \\ & \qquad \leq \frac{4}{\useconstant{c:cir}^{2} k}, \end{split} \end{equation*} for all $k \geq 2$. To conclude the statement for $\cir^{*}(n)$, one proceeds in the same way, but with~\eqref{eq:RSW_cir2} instead of~\eqref{eq:RSW_cir}. \end{proof} \par The result above provides quenched estimates for the existence of circuits and can be applied to deduce quenched one-arm estimates. For each $n$, let $\arm_{\sqrt n}(\eta_{0}, \mathscr{P})$ denote the event that there exists an open path connecting the boundary of the ball $B\left(0, n^{\sfrac{1}{4}}\right)$ to the boundary of the ball $B\left(0, n^{\sfrac{1}{2}}\right)$. This path can be chosen to be entirely contained inside $B\left(0, n^{\sfrac{1}{2}}\right) \setminus B\left(0, n^{\sfrac{1}{4}}\right)$. Denote also by $\arm^{*}_{\sqrt{n}}(\eta_{0}, \mathscr{P})$ the corresponding event, but asking for a closed $*$-path with the same properties.
\begin{prop}[One-arm estimate]\label{prop:one_arm} There exists $\nu>0$ such that, for all $\gamma>0$, there exists $k_{0} \geq 2$ such that, for any $k \geq k_{0}$ and $p \leq p_{c}(t)$, if $n \geq n_{0}=n_{0}(k)$, then \begin{equation} \mathbb{P}_{kt}\Big[\mathbb{P}_{p, \frac{1}{k}}\left[\arm_{\sqrt n}(\eta_{0}, \mathscr{P})\middle|\mathscr{P}^{k}\right] \geq n^{-\nu} \Big] \leq n^{-\gamma}. \end{equation} An analogous result holds for $\arm^{*}_{\sqrt{n}}(\eta_{0}, \mathscr{P})$ instead of $\arm_{\sqrt n}(\eta_{0}, \mathscr{P})$ if we assume $p \geq p_{c}(t)$. \end{prop} \par The proof of the above proposition relies on observing that $\arm_{\sqrt n}(\eta_{0}, \mathscr{P})$ holds if, and only if, there is no closed $*$-circuit inside $B\left(0, n^{\sfrac{1}{2}}\right) \setminus B\left(0, n^{\sfrac{1}{4}}\right)$. Since it is possible to find a logarithmic number of disjoint and distant annuli in this set, we can repeatedly apply Lemma~\ref{lemma:quenched_cir} to obtain that the probability of not having such a circuit in any of the annuli is small. A complete proof requires additional care to control dependencies between the disjoint annuli, and we postpone it to Section~\ref{sec:one_arm}. \par To conclude this section, we present cone-of-light estimates for the denser collection of clock ticks. Given~$\mathscr{P}^{k}$ we define the (past) cone of light $C_{k,t}^{\leftarrow}(x)$ to be the collection of vertices one needs to observe in order to determine $\eta_{s}(x)$, for all $s \in [0,t]$, varying over all possible pairs~$(\eta_{0}, \mathscr{P})$ of initial configurations and clock selections. We also define the future cone of light $C_{k,t}^{\rightarrow}(x)$ as the set of vertices that can be influenced by~$x$ up to time~$t$, that is, \begin{equation} C_{k,t}^{\rightarrow}(x):=\{y\in\mathbb{Z}^2; x\in C_{k,t}^{\leftarrow}(y) \}.
\end{equation} \begin{prop}[Cone-of-light estimates]\label{prop:col} Given $k \in \mathbb{N}$ and $t \geq 0$, if $n$ is large enough, \begin{equation} \begin{split} \mathbb{P}_{kt}\Big[C_{k,t}^{\leftarrow}(x) \cap \partial B(x,n) \neq \emptyset \Big] &\leq e^{-\frac{1}{8}n \log n}, \\ \mathbb{P}_{kt}\Big[ C_{k,t}^{\rightarrow}(x) \cap \partial B(x,n) \neq \emptyset \Big] &\leq e^{-\frac{1}{8}n \log n}. \end{split} \end{equation} \end{prop} \begin{proof} Without loss of generality, we consider $x=0$ and prove the bound for~$C_{k,t}^{\leftarrow}(x)$, the other bound following by analogous reasoning. Notice that, in order for $C_{k,t}^{\leftarrow}(0)$ to intersect $\partial B(0, n)$, it is necessary that there exists a path of length at least~$n$ whose vertices' associated Poisson clocks ring in decreasing order. That is, there must exist a (not necessarily simple) path~$0=x_0,x_1,\dots,x_m\in\partial B(0,n)$, $m\geq n$, and a sequence of times \begin{equation*} t \geq t_0>t_1>\dots>t_m \text{ such that }t_j \text{ is a mark in }\mathscr{P}^{k}_{x_j}. \end{equation*} Combining the fact that these clocks are i.i.d.\ with distribution $\expo(k)$, the relation between Poisson and Exponential distributions, and union bounds, we obtain \begin{equation} \begin{split} \lefteqn{\mathbb{P}_{kt}\Big[C_{k,t}^{\leftarrow}(x) \cap \partial B(x,n) \neq \emptyset \Big] }\phantom{********} \\ &\leq \sum_{m\geq n}\mathbb{P}_{kt} \left[ \begin{array}{c} \text{there exists a path of size } m \\ \text{starting at $0$ such that all clocks} \\ \text{ring before time $t$ in decreasing order} \end{array} \right] \\ & \leq \sum_{m\geq n} 4^m \mathbb{P}[\poisson(kt) \geq m] \\ & = e^{-kt}\sum_{m \geq n} 4^m \sum_{j \geq m} \frac{(kt)^{j}}{j!} \\ &\leq e^{-kt}\sum_{m \geq n} e^{kt} \frac{(4tk)^{m}}{m!} \\ & \leq e^{4tk} \frac{(4tk)^{n}}{\left(\frac{n}{2}\right)^{\frac{n}{2}}}\\ & \leq e^{-\frac{1}{8} n \log n}, \end{split} \end{equation} if $n \geq \left(16 kt \right)^{8}$.
This concludes the proof. \end{proof} \newconstant{c:cor_decay} \par The bound in~\eqref{eq:correlation_decay_radius} follows directly from Proposition~\ref{prop:col}. We will later need a stronger decoupling inequality in order to obtain bounds on the one-arm event's probability. The following proposition provides a generalized form of~\eqref{eq:correlation_decay_radius}: \begin{prop} \label{prop:decoup} For every $t \geq 0$, there exists a positive constant $\useconstant{c:cor_decay}>0$ depending on~$t$ such that the following holds: for~$L,R>0$ and any pair of events $A$ and $B$ with respective supports inside the sets $x + [-L,2L]^2$ and $y + [-L,2L]^2$, with $\pnorm{\infty}{x-y} \geq 3L+R$, and $p \in [0,1]$, \begin{equation}\label{eq:strong_correlation_decay} \Big|\mathbb{P}_{p,t}[A \cap B] - \mathbb{P}_{p,t}[A]\mathbb{P}_{p,t}[B] \Big| \leq \useconstant{c:cor_decay}^{-1}L^{2}e^{-\useconstant{c:cor_decay}R \log R}. \end{equation} \end{prop} \begin{proof} If $C_{1,t}^{\leftarrow}(z) \cap \partial B\left(z,\frac{R}{2}\right) = \emptyset$ for all $z \in (x + [-L,2L]^2) \cup (y + [-L,2L]^2)$, then the occurrences of $A$ and $B$ are determined by disjoint (and hence independent) parts of the graphical construction. Defining the event \begin{equation*} \left\{\begin{array}{c} \text{For every } z \in (x + [-L,2L]^2) \cup (y + [-L,2L]^2), \\ C_{1,t}^{\leftarrow}(z) \cap \partial B\left(z,\frac{R}{2}\right) = \emptyset \end{array}\right\} =: \mathrm{Dec}(L,R,x,y), \end{equation*} we obtain \begin{equation} \mathbb{P}_{p,t}\left[\mathrm{Dec}(L,R,x,y)^C\right] \leq 18 L^{2}e^{-\frac{1}{16}R \log \frac{R}{2}}, \end{equation} where the last bound above is a consequence of Proposition~\ref{prop:col} for large values of $R$.
For sufficiently large~$R$, by intersecting the events~$A$, $B$, and $A\cap B$ with~$\mathrm{Dec}(L,R,x,y)$ and~$\mathrm{Dec}(L,R,x,y)^C$, we can show that \begin{equation*} \Big|\mathbb{P}_{p,t}[A \cap B] - \mathbb{P}_{p,t}[A]\mathbb{P}_{p,t}[B] \Big| \leq 5 \mathbb{P}_{p,t}\left[\mathrm{Dec}(L,R,x,y)^C\right] \leq 90 L^{2}e^{-\frac{1}{16}R \log \frac{R}{2}}. \end{equation*} Choosing the constant in~\eqref{eq:strong_correlation_decay} to cover the cases where~$R$ is not large enough concludes the proof. \end{proof} \section{Influence and space-pivotality}\label{sec:influence} ~ Given a realization of $\mathscr{P}^{k}$, the quenched influence of a bit $x \in \mathbb{Z}^2$ or $(x,s) \in \{x\} \times \mathscr{P}^{k}_{x}$ is defined respectively as \begin{equation} \infl_{x}(f_{n}, \mathscr{P}^{k}) = \mathbb{P}_{p, \frac{1}{k}}\left[ f_{n}(\eta_{0}, \mathscr{P}) \neq f_{n}(\eta_{0}^{x}, \mathscr{P})\middle|\mathscr{P}^{k} \right], \end{equation} and \begin{equation} \infl_{(x,s)}(f_{n}, \mathscr{P}^{k}) = \mathbb{P}_{p, \frac{1}{k}}\left[ f_{n}(\eta_{0}, \mathscr{P}) \neq f_{n}(\eta_{0}, \mathscr{P}^{(x,s)})\middle|\mathscr{P}^{k} \right], \end{equation} where $\eta_{0}^{x}$ and $\mathscr{P}^{(x,s)}$ are obtained from $\eta_{0}$ and $\mathscr{P}$ by exchanging the entries at $x$ and $(x,s)$, respectively. \par The crossing functions $f_{n}$ are monotone non-decreasing in the space variables $\eta_{0}$. Furthermore, the set $\bigcup_{y\in R_n}C_{k,t}^{\leftarrow}(y)$ consisting of vertices whose opinions at time~$0$ can influence the output of~$f_n(\eta_0,\mathscr{P})$ is almost surely finite. Classical arguments then show that Russo's Formula applies to the derivative with respect to $p$ and one obtains \begin{equation} \label{eq:russo} \frac{\partial}{\partial p}\mathbb{E}_{p,\frac{1}{k}}\left[f_n(\eta_{0},\mathscr{P})\middle| \mathscr{P}^{k}\right] = \sum_{x \in \mathbb{Z}^2} \infl_{x}(f_n, \mathscr{P}^{k}).
\end{equation} Since \begin{equation} \left|\frac{\partial}{\partial p}\mathbb{E}_{p,\frac{1}{k}}\left[f_n(\eta_{0},\mathscr{P})\middle| \mathscr{P}^{k}\right]\right| \leq \bigg| \bigcup_{y\in R_n}C_{k,t}^{\leftarrow}(y) \bigg|, \end{equation} as a direct consequence of the bounded convergence theorem and Proposition~\ref{prop:col}, we conclude \begin{equation}\label{eq:russo_inside_expectation} \begin{split} \frac{\partial}{\partial p}\mathbb{E}_{p,t}\left[f_n(\eta_{0},\mathscr{P})\right] &= \mathbb{E}_{kt}\left[\frac{\partial}{\partial p}\mathbb{E}_{p,\frac{1}{k}}\left[f_n(\eta_{0},\mathscr{P})\middle| \mathscr{P}^{k}\right]\right] \\ &= \mathbb{E}_{kt}\bigg[\sum_{x \in \mathbb{Z}^2} \infl_{x}(f_n, \mathscr{P}^{k})\bigg]. \end{split} \end{equation} \newconstant{c:piv} \par Regarding pivotality of clock ticks, we present a proposition that allows us to relate it to space-pivotality, provided we are in the event where the collection $\mathscr{P}^{k}$ is well behaved. Recall that $R_{n}=[1,n]^{2}$ and that $C_{k,t}^{\rightarrow}(x)$ denotes the future cone of light of the vertex $x$ associated to the collection of clocks~$\mathscr{P}^{k}$. For $\epsilon >0$, consider the event \begin{equation}\label{eq:large_cone} E(\epsilon) = \left\{\begin{array}{c} \text{there exists } x \in [-(n-1),2n]^{2} \text{ such that} \\ C_{k,t}^{\rightarrow}(x) \cap \partial B(x,\epsilon \log n) \neq \emptyset \end{array} \right\}. \end{equation} Our next proposition relates time-pivotality to space-pivotality, provided we are in the event $E(\epsilon)^{c}$.
\begin{prop}\label{prop:piv} Given $k \geq 2$ and $p \in (0,1)$, there exists a positive constant $\useconstant{c:piv}=\useconstant{c:piv}(k,p)>0$ such that the following holds: for any $\mu>0$, there exists $\epsilon>0$ such that, for any bit associated to~$(x,s) \in \{x\} \times \mathscr{P}_{x}^{k}$, \begin{equation} \infl_{(x,s)}(f_n, \mathscr{P}^{k}) \mathbf{1}_{E(\epsilon)^{c}}(\mathscr{P}^{k}) \leq \useconstant{c:piv} n^{\mu} \sum_{y \, \in \, \partial B(x, 3\epsilon \log n)} \infl_{y}(f_n, \mathscr{P}^{k}). \end{equation} Furthermore, if $p$ varies in a compact subset of $(0,1)$, the values of $\epsilon$ and $\useconstant{c:piv}$ can be chosen uniformly positive and bounded. \end{prop} \begin{proof} Observe first that $|\partial B(x,3\epsilon \log n)| \leq 24 \epsilon \log n +8$. Fix a configuration $\mathscr{P}^{k}$ in $E(\epsilon)^{c}$ and assume that the presence of the clock tick $(x,s)$ is pivotal. This can happen in two ways: first, it might be that adding the clock tick creates a crossing, while, upon its removal, no open crossing exists. The second possibility is the opposite: the addition of the clock tick prevents the existence of a crossing, while its removal implies the presence of a crossing. We will consider only the first case, since the second can be treated similarly. When the clock tick is present in the configuration (which we can ensure by paying a finite multiplicative factor of $k$ in the probabilities), all possible crossings of the square $R_{n}$ intersect the cone of light $C_{k,t}^{\rightarrow}(x)$. In particular, since the clock tick is pivotal and we are in the event $E(\epsilon)^{c}$, these crossings necessarily intersect the box $B(x, 3\epsilon \log n)$. Hence, if we declare all vertices in $\partial B(x,3\epsilon \log n)$ as closed at time zero, no crossing can be found at time $t$. \emph{This is because ``monochromatic'' nearest-neighbor cycles are stable in the majority dynamics}.
Every vertex in a ``monochromatic'' cycle is surrounded by at least two neighbors of the same opinion, and therefore its opinion remains forever unchanged. Declaring these boundary vertices closed defines a $2^{|\partial B(x,3\epsilon \log n)|}$-to-one map of the initial configurations. We now proceed by successively changing to zero each entry of~$(\eta_0(y))_{y\in\partial B(x,3\epsilon \log n)}$ that equals one. After all changes are performed, we obtain a configuration that has no crossing at time $t$. In particular, at some step, one of the entries of~$(\eta_0(y))_{y\in\partial B(x,3\epsilon \log n)}$ is space-pivotal for the configuration. Since in order to perform each of these changes we need to pay a multiplicative factor in the probabilities that is bounded from above by $\left(p \wedge (1-p) \right)^{-1}$, we can estimate \begin{multline} \infl_{(x,s)}(f_n, \mathscr{P}^{k}) \mathbf{1}_{E(\epsilon)^{c}}(\mathscr{P}^{k}) \\ \leq k 2^{|\partial B(x,3\epsilon \log n)|}\left(p \wedge (1-p) \right)^{-(24 \epsilon \log n +8)} \sum_{y \, \in \, \partial B(x,3\epsilon \log n)} \infl_{y}(f_n, \mathscr{P}^{k}), \end{multline} where the factor $2^{|\partial B(x,3\epsilon \log n)|}$ comes from the cardinality of the pre-image of the mapping constructed. The proof is completed by choosing $\epsilon>0$ small enough. \end{proof} \section{Low-revealment algorithms}\label{sec:alg} ~ \par In order to apply the OSSS inequality to the crossing functions, we need to develop an algorithm that determines the existence of such crossings in the quenched case, when the realization of $\mathscr{P}^{k}$ is fixed, and bound its revealment. This is the goal of this section, where we define an algorithm with the desired properties and provide bounds on its revealment. \par We begin by presenting the algorithm we will study. This algorithm will be a simple exploration process: we start with a random vertical line contained in the rectangle and query the opinion at time $t$ of all vertices on this line.
When we have this realization, we start exploring the components of open vertices that intersect this line. The existence of a crossing is equivalent to the existence of an open component that intersects this line and connects both sides of the rectangle. \par For the rest of this subsection, we fix a realization $\mathscr{P}^{k}$ of the denser collection of clock ticks. Since we are working with a fixed realization of~$\mathscr{P}^{k}$, the sets $C_{k,t}^{\leftarrow}(x)$ are not random and depend only on the collection~$\mathscr{P}^{k}$. Of course, when we reveal the realization of $\mathscr{P}_{y}$ together with $\eta_{0}(y)$, for all $y \in C_{k,t}^{\leftarrow}(x)$, we can determine $\eta_{s}(x)$, for all $s \in [0,t]$. In view of this, whenever we \emph{query} the state of a vertex $x \in \mathbb{Z}^2$, we observe the initial opinions and selection of clock ticks for all vertices $y \in C_{k,t}^{\leftarrow}(x)$. \par We are now in position to present the algorithm we will consider. Recall that $R_{n}=[1,n]^2$ and the notation $\Lambda = \mathbb{Z}^2 \cup \{(x,s): x \in \mathbb{Z}^2, \, s \in \mathscr{P}^{k}_{x} \cap [0,t]\}$. \begin{algorithm}\caption{(Existence of a horizontal open crossing)}\label{alg:crossing} \begin{algorithmic}[1] \State \textbf{Input:} $\mathscr{P}^{k}$ and $(\eta_{0}, \mathscr{P}) \in \{0,1\}^{\Lambda}$. \State If there exists $x \in R_n$ and $y \in C_{k,t}^{\leftarrow}(x)$ such that $\pnorm{1}{x-y} \geq \log n$, query all vertices of $R_n$. \State Choose an integer point $\ell$ uniformly in the set $\left[1, n\right]\cap \mathbb{Z}$. \State Query all vertices of $R_{n}$ whose first space-coordinate is $\ell$, and declare these vertices as explored. \State Proceed to query all vertices that are neighbors to an open explored vertex, and declare all these vertices explored. \State Repeat Step 5 until all open connected components inside $R_n$ that intersect $\{\ell\} \times \mathbb{Z}$ are discovered. 
\State If there exists a connected open component inside $R_n$ that connects $\{1\} \times \mathbb{Z}$ to $\{n\} \times \mathbb{Z}$, return 1. Otherwise, return~0. \end{algorithmic} \end{algorithm} \par Notice that Algorithm~\ref{alg:crossing} clearly determines the existence of open crossings, since any open crossing intersects every vertical line $\big(\{\ell\}\times \mathbb{Z} \big) \cap R_{n}$. Furthermore, one can define an analogous algorithm that determines the existence of a closed vertical $*$-crossing of the box. When analyzing the revealment of the algorithm, we will consider Algorithm~\ref{alg:crossing} for $p \leq p_{c}(t)$ and its alternative formulation in terms of closed vertical $*$-crossings for $p > p_{c}(t)$. \par We now proceed to bound the revealment of Algorithm~\ref{alg:crossing} (the bound on the alternative version is obtained analogously). Observe first that the revealment depends only on the sites $y \in \mathbb{Z}^2$, since we reveal all clock ticks of a given site $y$ at once, together with its initial opinion. We can therefore talk about the revealment of a site $y \in \mathbb{Z}^2$. Given a vertex $y \in \mathbb{Z}^{2}$, there are three different possibilities that might lead us to reveal it. The first, which comes from Step 2 of the algorithm, occurs when $C^{\leftarrow}_{k,t}(x)$ is large for some $x \in R_{n}$. Second, it might be the case that $y \in C_{k,t}^{\leftarrow}(z)$, for some site $z$ in the vertical line segment $\big(\{\ell\} \times \mathbb{Z} \big)\cap R_{n}$. Finally, there is the case when $y \in C_{k,t}^{\leftarrow}(z)$ and some vertex adjacent to $z$ is connected to the selected vertical line segment by an open path. \par In order to bound the revealment, we consider each of the three cases separately. The first and second cases can be easily controlled. As for the third case, we need the finer one-arm estimates provided by Proposition~\ref{prop:one_arm}.
\begin{prop}\label{prop:revealment} Let $\mathcal{A}$ denote Algorithm~\ref{alg:crossing}, and let~$\mathcal{A}^*$ denote the analogous algorithm that looks for vertical closed $*$-crossings. Consider the revealments \begin{equation} \delta_\mathcal{A}(\mathscr{P}^k):=\sup_{x\in R_n} \delta(\mathcal{A},x);\quad \delta_{\mathcal{A}^*}(\mathscr{P}^k):=\sup_{x\in R_n} \delta(\mathcal{A}^*,x). \end{equation} There exist $\nu>0$ and $k_{0}>0$ such that, for all $k \geq k_{0}$, there exists $n_{0}=n_{0}(k)$ such that, if $n \geq n_{0}$ and $p \leq p_{c}(t)$, then \begin{equation} \label{eq:revealmentA} \mathbb{P}_{kt}\left[\delta_\mathcal{A}(\mathscr{P}^k) > n^{-\nu} \right] \leq n^{-50}, \end{equation} and if $p \geq p_{c}(t)$, then \begin{equation} \label{eq:revealmentA*} \mathbb{P}_{kt}\left[\delta_{\mathcal{A}^{*}}(\mathscr{P}^k) > n^{-\nu} \right] \leq n^{-50}. \end{equation} \end{prop} \begin{proof} We will prove Equation~\eqref{eq:revealmentA}; Equation~\eqref{eq:revealmentA*} follows from the same reasoning. We examine separately the revealment of bits. First, we consider the case when $C_{k,t}^{\leftarrow}(x)$ is large, for some $x \in [1,n]^{2}$. Define the event \begin{equation} A = \left\{\begin{array}{c} \text{there exists $x \in R_{n}$ such that} \\ C_{k,t}^{\leftarrow}(x) \cap \partial B(x,\log n) \neq \emptyset\end{array} \right\}, \end{equation} and observe that Proposition~\ref{prop:col} implies \begin{equation} \mathbb{P}_{kt}\left[A\right] \leq n^{2}e^{-\frac{1}{8} \log n \log \log n}. \end{equation} Second, consider the event \begin{equation} B = \left\{\begin{array}{c} \text{there exists $x \in R_{n}$ such that} \\ \mathbb{P}_{p, \frac{1}{k}}\left[\arm_{\sqrt n}(x, \eta_{0}, \mathscr{P})\middle|\mathscr{P}^{k}\right] \geq n^{-\nu'} \end{array} \right\}, \end{equation} where $\nu'$ is obtained from Proposition~\ref{prop:one_arm} by choosing $\gamma=100$, and observe that \begin{equation} \mathbb{P}_{kt}\left[B\right] \leq n^{2}n^{-100} = n^{-98}.
\end{equation} We now bound the revealment on the event $A^{c} \cap B^{c}$ by distinguishing two cases. Either the distance from the site $x$ to the randomly selected line is smaller than $2\sqrt{n}$, which is unlikely due to the randomness in the choice of the line, or $x \in C_{k,t}^{\leftarrow}(y)$, for some $y$ with a neighbor connected to the selected line by an open path. Since we are in the event $A^{c}$, we may assume that $y$ is close to $x$ and hence that, in the last case, $\arm_{\sqrt n}(x,\eta_{0},\mathscr{P})$ holds. This leads to the bound \begin{equation} \begin{split} \lefteqn{\delta_\mathcal{A}(\mathscr{P}^k) \mathbf{1}_{A^{c} \cap B^{c}}(\mathscr{P}^{k}) }\phantom{******} \\ &\leq \left(\max_{x \in R_{n}} \mathbb{P}_{p, \frac{1}{k}}\left[\arm_{\sqrt n}(x, \eta_{0}, \mathscr{P})\middle|\mathscr{P}^{k}\right]\right)\mathbf{1}_{A^{c} \cap B^{c}}(\mathscr{P}^{k}) + \frac{4\sqrt{n}}{\frac{n}{3}} \\ & \leq n^{-\nu'} + \frac{12}{n^{\sfrac{1}{2}}} \leq n^{-\nu}, \end{split} \end{equation} if $\nu$ is small enough and~$n$ large enough. In particular, we obtain from Proposition~\ref{prop:one_arm}, by choosing~$k$ and~$n$ sufficiently large, \begin{equation} \begin{split} \mathbb{P}_{kt}\left[\delta_\mathcal{A}(\mathscr{P}^k) > n^{-\nu} \right]& \leq \mathbb{P}_{kt}\left[A \cup B\right] \\ & \leq n^{2}e^{-\frac{1}{8} \log n \log \log n}+n^{-98} \leq n^{-50}, \end{split} \end{equation} concluding the proof. \end{proof} \section{Sharp thresholds}\label{sec:thresholds} ~ \par In this section, we combine the results from the previous sections to conclude the proof of Theorem~\ref{t:sharp_thresholds}. As already mentioned in Remark~\ref{remark:lambda=1}, we consider only the case $\lambda=1$. \begin{proof}[Proof of Theorem~\ref{t:sharp_thresholds}.] Given $\alpha>0$, consider the interval \begin{equation} I_{\alpha}(n)=\left\{p \in \left[\frac{1}{10},\frac{9}{10}\right]: \mathbb{P}_{p,t}[H(n,n)] \in [\alpha, 1-\alpha]\right\}.
\end{equation} Our goal is to prove that the length of this interval is bounded by $cn^{-\gamma}$, for some positive constants $c=c(\alpha)$ and $\gamma$, with $\gamma$ independent of $\alpha$. This is enough to conclude the proof, once we know that $p_{c}(t) \in I_{\alpha}(n)$, for all $n \in \mathbb{N}$, provided $\alpha$ is small enough. We begin by introducing the event where the process is well behaved inside the box $R_{n}$. Recall the definition of the event $E(\epsilon)$ in~\eqref{eq:large_cone} and consider \begin{equation} A(\epsilon)=E(\epsilon) \cup \left\{|\mathscr{P}^{k}_{x}| \geq \log n, \text{ for some } x \in [-(n-1),2n]^{2} \right\}. \end{equation} Notice that, as a consequence of Proposition~\ref{prop:col} and standard bounds on the tail of the Poisson distribution, we obtain \begin{equation} \mathbb{P}_{kt}[A(\epsilon)] \leq 10 n^{2}\exp\left\{-\frac{\epsilon}{8} \log n \log \left(\epsilon \log n\right)\right\} \end{equation} if $n$ is large enough, depending on $k$ and $t$. Given $p \in I_{\alpha}(n)$, consider the events \begin{equation} B(p)=\left\{ \mathbb{P}_{p,\frac{1}{k}}[f_{n}(\eta_{0}, \mathscr{P})=1|\mathscr{P}^{k}] \notin \left(\frac{\alpha}{2}, 1-\frac{\alpha}{2}\right)\right\} \end{equation} and \begin{equation} C = \left\{ \begin{array}{c} \delta_\mathcal{A}(\mathscr{P}^k) \geq n^{-\nu} \text{ for some } p \in I_{\alpha}(n)\cap(0,p_c(t)]; \\ \delta_{\mathcal{A}^*}(\mathscr{P}^k) \geq n^{-\nu} \text{ for some } p \in I_{\alpha}(n)\cap(p_c(t),1) \end{array} \right\}, \end{equation} where $\mathcal{A}$ denotes Algorithm~\ref{alg:crossing}, $\mathcal{A}^*$ denotes the analogous algorithm that looks for vertical closed $*$-crossings, and $\nu>0$ is given by Proposition~\ref{prop:revealment}. Here we observe that the revealment of our algorithm (or its analogue) is monotone in~$p$, since it is related to connection probabilities. This can be used to bound the probability of the above event, by considering only the case $p=p_{c}(t)$.
We claim that, for each $p \in I_{\alpha}(n)$, \begin{equation} \mathbb{P}_{kt}\left[B(p) \cup C\right] \leq \frac{4}{\alpha^{2}k}+2n^{-50}. \end{equation} The above bound follows partly from Proposition~\ref{prop:revealment} and partly from a reasoning analogous to the proof of Lemma~\ref{lemma:quenched_cir}. If we take $k$ large enough, and $n$ large depending on $k$ and $t$, we have \begin{equation} \mathbb{P}_{kt}[A(\epsilon) \cup B(p) \cup C] \leq \frac{1}{2}. \end{equation} We now use the OSSS inequality in the quenched setting. We assume that $p\leq p_c(t)$, the other case following analogously. If \[ \mathscr{P}^{k} \in \left(A(\epsilon) \cup B(p) \cup C \right)^{c}, \] we can use Proposition~\ref{prop:piv} with $\mu=\frac{\nu}{2}$ and Russo's Formula~\eqref{eq:russo} to estimate \begin{equation} \nonumber \begin{split} \lefteqn{\var\Big( f_{n}(\eta_{0}, \mathscr{P}) \Big|\mathscr{P}^{k}\Big)} \phantom{**} \\ &\leq \sum_{x} \delta_\mathcal{A}(\mathscr{P}^k)\Bigg(\infl_{x}\left(f_{n}, \mathscr{P}^{k}\right)+\sum_{s \in \mathscr{P}^{k}_{x}}\infl_{(x,s)}\left(f_{n}, \mathscr{P}^{k}\right)\Bigg) \\ &\leq \sum_{x} \delta_\mathcal{A}(\mathscr{P}^k)\Bigg(\infl_{x}\left(f_{n}, \mathscr{P}^{k}\right)+\useconstant{c:piv} n^{\frac{\nu}{2}}|\mathscr{P}^{k}_{x}|\sum_{y \in\partial B(x, 3\epsilon \log n)}\infl_{y}\left(f_{n}, \mathscr{P}^{k}\right)\Bigg) \\ &\leq n^{-\frac{\nu}{2}}\sum_{x} \infl_{x}\left(f_{n}, \mathscr{P}^{k}\right) \Big(1+\useconstant{c:piv} \log n \Big|\partial B(x, 3\epsilon \log n)\Big|\Big) \\ &\leq 25 \useconstant{c:piv} n^{-\frac{\nu}{2}}\log^{2} n\sum_{x} \infl_{x}\left(f_{n}, \mathscr{P}^{k}\right) \\ &\leq 25 \useconstant{c:piv} n^{-\frac{\nu}{2}}\log^{2} n \frac{\partial}{\partial p} \mathbb{P}_{p, \frac{1}{k}}\left[f_{n}(\eta_{0}, \mathscr{P})=1\middle|\mathscr{P}^{k}\right] \\ &\leq n^{-\frac{\nu}{3}}\frac{\partial}{\partial p} \mathbb{P}_{p, \frac{1}{k}}\left[f_{n}(\eta_{0}, \mathscr{P})=1\middle|\mathscr{P}^{k}\right], \end{split} 
\end{equation} if $n$ is large enough. In particular, for $p \in I_{\alpha}(n)$, using~\eqref{eq:russo_inside_expectation} and the fact that~$f_{n}(\eta_{0}, \mathscr{P})$ is a~$\ber$ variable which is increasing in the intensity of~$\eta_0$, \begin{equation} \begin{split} \frac{\partial}{\partial p} \mathbb{P}_{p, t}\left[H(n,n)\right] & = \frac{\partial}{\partial p} \mathbb{E}_{kt} \left[ \mathbb{P}_{p, \frac{1}{k}}\left[f_{n}(\eta_{0}, \mathscr{P})=1\middle|\mathscr{P}^{k}\right]\right] \\ & \geq \mathbb{E}_{kt}\left[ \frac{\partial}{\partial p} \mathbb{P}_{p, \frac{1}{k}}\left[f_{n}(\eta_{0}, \mathscr{P})=1\middle|\mathscr{P}^{k}\right] \mathbf{1}_{\left(A(\epsilon) \cup B(p) \cup C \right)^{c}}\left(\mathscr{P}^{k}\right) \right] \\ & \geq n^{\frac{\nu}{3}}\mathbb{E}_{kt}\left[ \var \Big( f_{n}(\eta_{0}, \mathscr{P}) \Big|\mathscr{P}^{k}\Big) \mathbf{1}_{\left(A(\epsilon) \cup B(p) \cup C \right)^{c}}\left(\mathscr{P}^{k}\right) \right] \\ & \geq n^{\frac{\nu}{3}}\frac{\alpha^{2}}{4} \mathbb{P}_{kt}\left[\left(A(\epsilon) \cup B(p) \cup C \right)^{c}\right] \geq n^{\frac{\nu}{3}}\frac{\alpha^{2}}{8}. \end{split} \end{equation} This implies \begin{equation} 1 \geq \int_{I_{\alpha}(n)}\frac{\partial}{\partial p} \mathbb{P}_{p, t}\left[H(n,n)\right] \, {\rm d} p \geq n^{\frac{\nu}{3}}\frac{\alpha^{2}}{8}|I_{\alpha}(n)|, \end{equation} which gives the bound \begin{equation} |I_{\alpha}(n)| \leq \frac{8}{\alpha^{2}}n^{-\frac{\nu}{3}}, \end{equation} and concludes the proof. \end{proof} \section{One-arm estimates}\label{sec:one_arm} ~ \par The goal of this section is to conclude the proof of the quenched one-arm estimates stated as Proposition~\ref{prop:one_arm}. \begin{proof} We will work on the event where all cones of light are well behaved. For each $n$, define \begin{equation} E_n:=\left\{C_{k,t}^{\leftarrow}(x) \cap \partial B\big(x, n^{\sfrac{1}{4}}\big) = \emptyset\text{ for every }x \in B\big(0, n^{\sfrac{1}{2}}\big) \right\}.
\end{equation} Proposition~\ref{prop:col} implies, for sufficiently large~$n$, \begin{equation}\label{eq:one_arm_1} \mathbb{P}_{kt}[E_{n}^{c}] \leq 16n e^{-\frac{1}{32}n^{\sfrac{1}{4}} \log n}. \end{equation} Consider the collection of indices \begin{equation} J=\left\{j \in 2\mathbb{N}: n^{\frac{1}{4}} \leq 3^{j}n^{\frac{1}{4}} \leq n^{\frac{1}{2}}\right\}, \end{equation} and, for $j \in J$, denote by $A_{j}$ the set of vertices \begin{equation} A_{j}=B\left(0, 2 \cdot 3^{j+1} n^{\sfrac{1}{4}}\right) \setminus B\left(0, 2\cdot 3^{j-1} n^{\sfrac{1}{4}}\right) \end{equation} and recall the definition of $\cir^{*}(m)$ immediately after~\eqref{eq:cir}. Notice that, on $E_{n}$, $\cir^{*}\left(3^{j}n^{\sfrac{1}{4}}\right)$ depends on $(\eta_{0}(x), \mathscr{P}_{x})$ only for $x \in A_{j}$. In particular, on $E_{n}$, we can use independence to estimate \begin{equation}\label{eq:one_arm_2} \begin{split} \lefteqn{\mathbb{P}_{p, \frac{1}{k}}\left[\arm_{\sqrt n}(\eta_{0}, \mathscr{P})|\mathscr{P}^{k} \right]\mathbf{1}_{E_{n}}(\mathscr{P}^{k}) }\phantom{******} \\ &\leq \mathbb{P}_{p, \frac{1}{k}}\left[\bigcap_{j \in J}\cir^{*}\left(3^{j}n^{\sfrac{1}{4}}\right)^{c}\middle|\mathscr{P}^{k} \right]\mathbf{1}_{E_{n}}(\mathscr{P}^{k}) \\ & = \prod_{j \in J}\mathbb{P}_{p, \frac{1}{k}}\left[\cir^{*}\left(3^{j}n^{\sfrac{1}{4}}\right)^{c}\middle|\mathscr{P}^{k} \right]\mathbf{1}_{E_{n}}(\mathscr{P}^{k}). \end{split} \end{equation} Consider now the event \begin{equation} D_{j}=\left\{\mathbb{P}_{p, \frac{1}{k}}\left[\cir^{*}\left(3^{j}n^{\sfrac{1}{4}}\right)\middle|\mathscr{P}^{k} \right] \geq \frac{\useconstant{c:cir}}{2} \right\}, \end{equation} and denote by $D$ the event where $D_{j}$ holds for at least half of the indices $j \in J$. 
From~\eqref{eq:one_arm_2}, we obtain \begin{equation} \begin{split} \mathbb{P}_{p, \frac{1}{k}}\left[\arm_{\sqrt n}(\eta_{0}, \mathscr{P})\middle|\mathscr{P}^{k} \right] & \mathbf{1}_{E_{n}\cap D}(\mathscr{P}^{k}) \\ & \leq \prod_{j \in J}\mathbb{P}_{p, \frac{1}{k}}\left[\cir^{*}\left(3^{j}n^{\sfrac{1}{4}}\right)^{c}\middle|\mathscr{P}^{k} \right]\mathbf{1}_{E_{n} \cap D}(\mathscr{P}^{k}) \\ & \leq \left(1-\frac{\useconstant{c:cir}}{2}\right)^{\frac{|J|}{2}} \leq n^{-\nu}, \end{split} \end{equation} for some $\nu$ small enough, since $|J|$ is of order $\log n$. This implies that \begin{equation}\label{eq:one_arm_2.5} \mathbb{P}_{kt}\Big[\mathbb{P}_{p, \frac{1}{k}}\left[\arm_{\sqrt n}(\eta_{0}, \mathscr{P})|\mathscr{P}^{k}\right] \geq n^{-\nu} \Big] \leq \mathbb{P}_{kt}\left[E_{n}^{c} \cup D^{c}\right], \end{equation} so it remains to bound the right hand side probability above. We begin by estimating \begin{equation}\label{eq:one_arm_3} \mathbb{P}_{kt}\left[E_{n}^{c} \cup D^{c}\right] \leq \mathbb{P}_{kt}\left[E_{n}^{c}\right]+ 2^{|J|-1}\sup_{I} \mathbb{P}_{kt}\left[E_{n} \cap \bigcap_{j \in I} D_{j}^{c}\right], \end{equation} where the supremum is taken over all subsets of $J$ with at least $\frac{|J|}{2}$ indices. From Lemma~\ref{lemma:quenched_cir}, we obtain \begin{equation} \mathbb{P}_{kt}[D_{j}^{c}] \leq \frac{4}{\useconstant{c:cir}^{2} k}, \end{equation} provided $k$ is taken large enough. For each $j \in J$, let $E_{n}(j)$ be an event analogous to $E_{n}$, but only observing the cone of light of vertices inside $B\left(0,3^{j+1}n^{\sfrac{1}{4}}\right) \setminus B\left(0,3^{j}n^{\sfrac{1}{4}}\right)$. The events $\left(E_{n}(j)\cap D _{j}^{c}\right)_{j \in J}$ are then independent, since they depend on disjoint parts of the graphical construction. 
From this, we obtain \begin{equation}\label{eq:one_arm_4} \begin{split} \mathbb{P}_{kt}\left[E_{n} \cap \bigcap_{j \in I} D_{j}^{c}\right] & \leq \mathbb{P}_{kt}\left[\bigcap_{j \in I}E_{n}(j) \cap D_{j}^{c}\right] \\ & \leq \prod_{j \in I} \frac{4}{\useconstant{c:cir}^{2} k} \leq \left(\frac{4}{\useconstant{c:cir}^{2} k}\right)^{\frac{|J|}{2}}, \end{split} \end{equation} whenever $|I| \geq \frac{|J|}{2}$ and $k$ is large enough. Combining Equations~\eqref{eq:one_arm_1},~\eqref{eq:one_arm_2.5},~\eqref{eq:one_arm_3}, and~\eqref{eq:one_arm_4} yields \begin{equation} \mathbb{P}_{kt}\Big[\mathbb{P}_{p, \frac{1}{k}}\left[\arm_{\sqrt n}(\eta_{0}, \mathscr{P})\middle|\mathscr{P}^{k}\right] \geq n^{-\nu} \Big] \leq n^{-\gamma}, \end{equation} for all $n$ large enough, by further increasing the value of $k$ if necessary. This concludes the proof of the result. \end{proof} \section{Stretched-exponential decay of the one-arm event probability}\label{sec:decay} ~ \newconstant{c:1armexpgen1} \newconstant{c:1armexpgen2} In this section we will prove Theorem~\ref{t:exp_decay} using the results obtained so far. We will in fact prove a more general result, based on the proof of Theorem~$3.1$ of~\cite{pt}, which, together with a decoupling inequality and Theorem~\ref{t:sharp_thresholds}, will imply the desired rate of decay. We first develop some notation before stating the result. Given $L\in\mathbb{R}_+$ and~$x\in\mathbb{Z}^d$, we define the subsets \begin{equation} \begin{split} \label{eq:annulusdef} C_x(L):=[0,L)^d + x,\quad\quad D_x(L):=[-L,2L)^d\cap\mathbb{Z}^d + x. \end{split} \end{equation} In accordance with~\eqref{eq:open_connection}, we denote by $\{A \longleftrightarrow B\}$ the event where there exists a nearest-neighbor open path starting at~$A$ and ending at~$B$. For~$x\in\mathbb{Z}^d$, $L\in\mathbb{R}_+$, we define the annulus-crossing event \begin{equation*} A_x(L) := \{C_x(L){\longleftrightarrow} \mathbb{Z}^d\setminus D_x(L)\}.
\end{equation*} \begin{prop} \label{p:1armdecaygen} Let~$\tilde \mathbb{P}$ denote a probability distribution over~$\{0,1\}^{\mathbb{Z}^d}$, invariant under translations of~$\mathbb{Z}^d$. Assume that \begin{equation} \label{eq:1armexpgen2} \liminf_{L\to \infty} \tilde \mathbb{P} \left[A_0(L) \right]<\frac{1}{d^{2}\cdot 7^d}, \end{equation} and that there exists a positive constant~$\useconstant{c:1armexpgen1}>0$ such that, for every~$L,R\in\mathbb{R}_+$ and every~$x,y\in \mathbb{Z}^d$ with~$\|x-y\|_\infty \geq 3L+R$, we have \begin{equation}\label{eq:1armexpgen3} \left| \tilde \mathbb{P}\left[ A_x(L) \cap A_y(L) \right] - \tilde\mathbb{P}\left[ A_x(L) \right] \tilde\mathbb{P}\left[ A_y(L) \right] \right| \leq L^{2d} \exp\left\{ -f(R) \right\}, \end{equation} where~$f:\mathbb{R}_+\to\mathbb{R}_+$ is a non-decreasing function such that \begin{equation} \begin{split} \label{eq:1armexpgenf} \liminf_{R\to \infty} \frac{f(R)}{R\log R}\geq \useconstant{c:1armexpgen1}. \end{split} \end{equation} Then, for every~$\varepsilon>0$ there exists a positive constant $\useconstant{c:1armexpgen2}=\useconstant{c:1armexpgen2}(\varepsilon)>0$ such that, for~$n\in\mathbb{N}$, \begin{equation}\label{eq:1armexpgen4} \tilde \mathbb{P}\left[ \{0\} \longleftrightarrow \partial B(0,n) \right] \leq \useconstant{c:1armexpgen2}^{-1}\exp\left\{ -\useconstant{c:1armexpgen2}\frac{n}{(\log n)^\varepsilon} \right\}. \end{equation} \end{prop} \begin{proof} The proof is based on the proof of Theorem~$3.1$ of~\cite{pt}, specifically, the proof of Equation~$(3.5)$. Since in our case no sprinkling argument is needed in order to obtain a decoupling inequality, the argument here will be simpler. The proof consists in a multiscale renormalisation argument. We start by inductively defining the sequence of scales~$(L_k)_{k\in\mathbb{N}}$. 
Given~$L_1\in\mathbb{R}_+$, which will be chosen to be large, we let, for~$k\in\mathbb{N}$, \begin{equation} \begin{split} \label{eq:Lkdef} L_{k+1}=2\left(1+\frac{1}{(k+5)^{1+\varepsilon}}\right)L_{k}. \end{split} \end{equation} One can easily check that \begin{equation} \begin{split} \label{eq:Lkbound3} L_1 2^{k-1} \leq L_k \leq C_\varepsilon L_1 2^{k-1} \end{split} \end{equation} for some constant~$C_\varepsilon>0$ depending on the exponent~$1+\varepsilon$. The proof of the theorem is based on an induction argument that bounds the probability \begin{equation} \begin{split} \label{eq:pkdef} p_k:=\tilde \mathbb{P}\left[A_0(L_k)\right]. \end{split} \end{equation} Recalling the sets defined in~\eqref{eq:annulusdef}, note that, for~$k\geq 1$, there exist two collections of points~$\{x_i^k\}_{i=1}^{3d}$ and~$\{y_j^k\}_{j=1}^{2d\cdot 7^{d-1}}$ such that \begin{equation} \begin{split} \label{eq:xiyiproperty} &C_0(L_{k+1})=\cup_{i=1}^{3d}C_{x_i^k}(L_{k}), \\ &\left(\cup_{j=1}^{2d\cdot 7^{d-1}}C_{y_j^k}(L_{k})\right)\cap D_0(L_{k+1})=\emptyset, \\ &\partial(\mathbb{Z}^d\setminus D_0(L_{k+1}))\subset\cup_{j=1}^{2d\cdot 7^{d-1}}C_{y_j^k}(L_{k}). \end{split} \end{equation} Properties~\eqref{eq:xiyiproperty} then imply (see Figure~\ref{f:multiscale}) \begin{equation} \begin{split} \label{eq:akinduc} A_0(L_{k+1})\subset \bigcup_{\substack{i \leq 3 d \\ j \leq 2d\cdot 7^{d-1}}} A_{x_i^k}(L_{k})\cap A_{y_j^k}(L_{k}). \end{split} \end{equation} \begin{figure} \caption{The ``cascading'' nature of the events~$A_x(L_k)$. } \label{f:multiscale} \end{figure} It is also elementary to check that the distance between $D_{x_i^k}(L_k)$ and $D_{y_j^k}(L_k)$ is greater than~$2(k+5)^{-(1+\varepsilon)}L_k$ uniformly in~$i$ and~$j$.
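\par The two-sided bound~\eqref{eq:Lkbound3} can be checked numerically: the ratio $L_k/(L_1 2^{k-1})$ is non-decreasing in~$k$ and bounded, since its logarithm is dominated by $\sum_{k}(k+5)^{-(1+\varepsilon)}<\infty$. A short sketch (the values of $L_1$ and $\varepsilon$ are arbitrary choices of ours):

```python
def scales(L1, eps, kmax):
    """Iterates L_{k+1} = 2 (1 + (k+5)^{-(1+eps)}) L_k, returning [L_1, ..., L_kmax]."""
    Ls = [L1]
    for k in range(1, kmax):
        Ls.append(2 * (1 + (k + 5) ** -(1 + eps)) * Ls[-1])
    return Ls

def ratios(L1, eps, kmax):
    """Ratios L_k / (L_1 2^{k-1}); the bound (Lkbound3) says they lie in [1, C_eps]."""
    return [L / (L1 * 2 ** i) for i, L in enumerate(scales(L1, eps, kmax))]
```

With $\varepsilon = \tfrac12$, for instance, the ratio increases from $1$ and stabilises below $2.5$, consistent with the convergent product $\prod_{k\geq 1}\big(1+(k+5)^{-(1+\varepsilon)}\big)$ bounding $C_\varepsilon$.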
Property~\eqref{eq:1armexpgen3} then implies, together with~\eqref{eq:akinduc} and the translation invariance of~$\tilde \mathbb{P}$, for~$k\geq 1$, \begin{equation} \begin{split} \label{eq:pkinduc} p_{k+1}\leq d^{2} \cdot 7^d \left( p_k^2 + L_k^{2d}\exp\left\{ - f\left( \frac{2L_k}{(k+5)^{1+\varepsilon}} \right) \right\} \right). \end{split} \end{equation} Proceeding in the same way as in~\cite{pt}, one proves by induction that Equations~\eqref{eq:1armexpgen2} and~\eqref{eq:1armexpgen3} imply, for suitably chosen real numbers~$h_1,h_2>0$, \begin{equation} \begin{split} \label{eq:pkinduc3} p_{k}\leq \exp\left\{ -h_1 -h_2\frac{2^k}{k^{\varepsilon}} \right\}. \end{split} \end{equation} Note then that, for~$n\in[2L_k,2L_{k+1}]$, we have \begin{equation} \label{eq:Lkton} \left\{\{0\} \longleftrightarrow \partial B(0,n)\right\} \subseteq A_0(L_k), \end{equation} and therefore \begin{equation} \label{eq:Lkton2} \tilde \mathbb{P}[\{0\} \longleftrightarrow \partial B(0,n) ] \leq \tilde \mathbb{P}[A_0(L_k)] \leq \exp\left\{ -h_1 -h_2 \frac{2^k}{k^\varepsilon} \right\}. \end{equation} Equation~\eqref{eq:Lkbound3} then implies the result, for a suitably chosen constant~$\useconstant{c:1armexpgen2}$. \end{proof} We can then finish the proof of the main result of this section. \begin{proof}[Proof of Theorem~\ref{t:exp_decay}] For~$p<p_c(t)$, Theorem~\ref{t:sharp_thresholds} implies \begin{equation} \limsup_{n}\mathbb{P}_{p,t}\left[ A_0(n)\right] \leq 4 \limsup_{n} \mathbb{P}_{p,t}[H(n,3n)] = 0. \end{equation} Proposition~\ref{prop:decoup} and basic properties of majority dynamics imply that~$\mathbb{P}_{p,t}$ satisfies the hypotheses of Proposition~\ref{p:1armdecaygen}, which then implies Equation~\eqref{eq:exp_decay1}. Equation~\eqref{eq:exp_decay2} follows by a completely analogous argument.
Notice that in the event where the finite open cluster of the origin has diameter~$n$, one can find a closed vertex inside~$B(0,n)$ through which a closed $*$-path of length larger than~$n$ passes. Equation \eqref{eq:exp_decay2} and a union bound argument then yield Equation~\eqref{eq:exp_decay3}, and the analogous result for closed $*$-clusters containing the origin follows from~\eqref{eq:exp_decay1} and the same reasoning. \end{proof} \section{Further models}\label{sec:further_models} ~ \par As we already mentioned, our technique can be applied to other particle systems as long as some basic properties can be verified. In particular, we require equivalent formulations of Lemma~\ref{lemma:variance_decay} and of Propositions~\ref{prop:one_arm},~\ref{prop:col} and~\ref{prop:piv}. Here, we extend our results to the voter model on the two-dimensional lattice $\mathbb{Z}^{2}$. \par The voter model is very similar to majority dynamics, in the sense that it differs only in the way each vertex selects its new opinion once its clock ticks. In this case, the new opinion is selected randomly among the neighbors' opinions. \par Once again, for each fixed time $t$, there exists a critical parameter $p^{VM}_{c}(t)$ for the existence of percolation at time $t$. Applying Proposition~\ref{prop:col}, we can derive decoupling estimates that are uniform in the value of $p \in [0,1]$; by standard renormalisation arguments, these can be used to deduce that $p_{c}^{VM}(t) \in (0,1)$ is non-trivial for each positive $t$. \par The usual graphical construction of the voter model (see Remark~\ref{remark:voter_model}) can be modified exactly as we did in Section~\ref{sec:construction}, and Lemma~\ref{lemma:variance_decay} and Proposition~\ref{prop:one_arm} can be obtained from general results, as in the case of majority dynamics. Furthermore, we can apply the same proof to obtain Proposition~\ref{prop:col}.
The most delicate part lies in establishing a relation between time-pivotality and space-pivotality. \par Let us now describe how one approaches Proposition~\ref{prop:piv} here. In this case, we use the fact that the opinion of each vertex at any time $s \geq 0$ is a copy of one of the initial opinions contained in its past light cone. Moreover, changing this opinion at time zero implies that the opinion changes at time $s$. This last observation allows us to conclude that time-pivotality implies space-pivotality for some vertex in the light cone. From this, we derive the bound \begin{equation} \infl_{(x,s)}(f_n, \mathscr{P}^{k}) \leq c \sum_{y \, \in \, C_{k,t}^{\leftarrow}(x)} \infl_{y}(f_n, \mathscr{P}^{k}), \end{equation} for some constant $c>0$. This yields a version of Proposition~\ref{prop:piv} that can be used to conclude Theorem~\ref{t:sharp_thresholds} for the voter model. \end{document}
\begin{document} \begin{frontmatter} \title{Quantum manipulation and simulation using Josephson junction arrays} \author{Xingxiang Zhou} \ead{[email protected]} \author{and} \author{Ari Mizel} \ead{[email protected]} \address{Department of Physics, The Pennsylvania State University, University Park, PA 16802, USA} \begin{abstract} We discuss the prospect of using quantum properties of large scale Josephson junction arrays for quantum manipulation and simulation. We study the collective vibrational quantum modes of a Josephson junction array and show that they provide a natural and practical method for realizing a high quality cavity for superconducting qubit based QED. We further demonstrate that by using Josephson junction arrays we can simulate a family of problems concerning spinless electron-phonon and electron-electron interactions. These protocols require no or few controls over the Josephson junction array and are thus relatively easy to realize given currently available technology. \end{abstract} \begin{keyword} Qubit \sep Quantum computing \sep Josephson junction array \PACS 03.67.Lx \sep 74.90.+n \sep 85.25.Dq \sep 85.25.Cp \end{keyword} \end{frontmatter} Superconducting device based solid state qubits \cite{Makhlin01} are attractive because of their inherent scalability. Microwave spectroscopy and long-lived population oscillations consistent with single \cite{Nakamura99,Friedman00,Mooij00,Vion02,Yu02,Martinis02,Mooij03} and two qubit quantum states \cite{Pashkin03,Berkley03,Izmalkov04,Mooij04} have been observed experimentally. Recently, a new approach to scalable superconducting quantum computing analogous to atomic cavity-QED was studied theoretically \cite{Falci03,Blais03,Zhou04,Blais04} and implemented experimentally \cite{Wallraff04,Xu04}. This new approach opens the possibility of applying methods and principles from the rich field of atomic QED to solid state quantum information processing. 
One practical problem of solid state qubit based QED is the realization of a high quality resonator to which many qubits can couple. A lumped element on-chip LC circuit, such as that used in \cite{Xu04}, suffers from dielectric loss of the capacitor and ac loss of the superconductor \cite{Kadin99}. A high quality resonator can be realized with a co-planar waveguide if high quality substrate material (such as sapphire) is used to minimize the loss \cite{Mazin02,Day03}. A low leakage Josephson junction provides a natural and easy realization of a high quality resonator due to its high quality tri-layer structure \cite{Rando95}; however, only a few qubits can couple to such a single-junction resonator \cite{Falci03,Blais03}. In this work, we study the quantum dynamics of a Josephson junction array and show that the ``phonon modes'' corresponding to the small collective vibrations of the junction phases can be used to realize a high quality resonator to which many superconducting qubits can couple. The resultant structure is analogous to the ion trap quantum computer in which qubits communicate through the phonon modes \cite{Cirac95}. We further show that, by using a properly coupled superconducting qubit array, we can simulate a family of problems involving spinless electron-phonon and electron-electron interactions. Consider the simple Josephson junction array shown in Fig. \ref{fig:Array} (a). Denote the phases across the vertical junctions $\theta_0$, $\theta_1$, ..., $\theta_{N-1}$ (the phase of the ground is set to 0). The capacitance of the vertical junctions is $C$. The horizontal junctions are much bigger in size than the vertical ones, and their critical current is $K^2$ times that of the vertical junctions ($I_c$), where $K\gg 1$. In practice, all junctions can be realized by low self-inductance dc-SQUIDs to allow tuning of their critical currents; therefore $K^2$ is not necessarily equal to the ratio of the junction sizes. 
Each vertical junction is biased by a current $I_b< I_c$ to suppress its plasma frequency. The geometric inductance is very small and neglected. \begin{figure} \caption{(a) A simple Josephson junction array consisting of $N$ vertical and $N-1$ horizontal junctions. (b) A charge qubit coupled to a vertical junction (realized with a small self-inductance dc-SQUID) in the Josephson junction array in (a). (c) An rf-SQUID qubit coupled to the Josephson junction array. (d) A more complex design than the simple array in (a). Each dot represents a superconducting island which is grounded by a Josephson junction (grounding junctions not shown). Each and every pair of islands are coupled by a Josephson junction as indicated by the edges connecting the dots. } \label{fig:Array} \end{figure} The potential energy of the Josephson junction array in Fig. \ref{fig:Array} is a sum of the Josephson tunneling energies of the junctions, given by \begin{equation} V=-E_J\sum_{i=0}^{N-1} (i_b\theta_i + \cos{\theta_i}) -K^2E_J\sum_{i=0}^{N-2} \cos{(\theta_i- \theta_{i+1})}, \nonumber \label{eq:H_CB} \end{equation} where $E_J=I_c\Phi_0/2\pi$ and $i_b=I_b/I_c$. The equilibrium values for the junction phases are determined by $\partial V/ \partial \theta_i =0$, which is just the current conservation condition at each node: $i_b -\sin{\theta_i} -\sin{(\theta_i -\theta_{i-1})} -\sin{(\theta_i -\theta_{i+1})} =0$. (When $i=0$ or $N-1$ there is only one horizontal current.) In our setup, the equilibrium values for the phases are $\theta_i^{(0)} =\theta^{(0)} =\arcsin{i_b}$ for $i=0,1,...N-1$. 
If we are interested in the small oscillations of the phases, we follow the standard procedure, expanding $V$ to second order, $V=\frac{1}{2}\sum_{i,j=0}^{N-1} V_{ij}\varphi_i\varphi_j$, where $\varphi_i =\theta_i -\theta_i^{(0)}$ is the small displacement of the phase from its equilibrium value and \begin{equation} \frac{V_{ij}}{E_JK^2}= \delta_{ij} (2+\cos{\theta^{(0)}}/K^2 -\delta_{i,0} -\delta_{i,N-1}) -\delta_{i,j-1} -\delta_{i,j+1}. \nonumber \end{equation} Notice that the horizontal junctions are much larger than the vertical ones. In the ``washboard'' analogy this corresponds to a large ``mass'' for the horizontal junctions. Therefore in calculating the kinetic energy of the system we need only consider the vertical junctions (as verified by numerical calculations): $T=(1/2)C(\Phi_0/2\pi)^2 \sum_{i=0} ^{N-1} \dot\varphi_i^2$. From the potential and kinetic energies we can solve for the normal modes of the system, whose spectrum is given by $\nu_s= \omega_p\sqrt{1+ (4K^2/ \cos{\theta^{(0)}}) \sin ^2 (s\pi/2N)}$, where the plasma frequency $\omega_p= \sqrt{2\pi I_c/\Phi_0 C}(1-i_b^2)^{1/4}$ and $s=0, 1, ..., N-1$. Let the orthonormalized normal mode eigenvectors be denoted $b_i^s$. The lowest mode, whose frequency is $\omega_p$, corresponds to the center of mass motion of the phases: $b^0 =(1,1,...,1)/\sqrt{N}$. The quantum mechanical properties of the small collective vibrational modes of the phases can now be evaluated by introducing the operator $\hat{\varphi_i}= (2\pi/\Phi_0) \sum_{s=0} ^{N-1} \sqrt{\hbar/2C\nu_s} b_i^s (a_s +a_s^{\dagger})$, where $a_s$ is the annihilation operator for the $s$th mode. We propose to use the center of mass motion mode of the Josephson junction array to couple superconducting qubits, in analogy to the ion trap quantum computer \cite{Cirac95}. Since the center of mass motion mode is an equal weight superposition of the junction phases, its quality factor is as high as that of the individual junctions. 
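The closed-form spectrum above can be cross-checked numerically. The following sketch (our illustration, not part of the original text; units chosen so that $E_J=1$ and $C(\Phi_0/2\pi)^2=1$, whence $\omega_p^2=\cos\theta^{(0)}$) diagonalizes the quadratic form $V_{ij}$ and compares the resulting frequencies with $\nu_s$, confirming also that the lowest eigenvector is the uniform center of mass mode:

```python
import numpy as np

# Parameters as quoted later in the text: K = 20, i_b = 0.97
N, K, ib = 8, 20.0, 0.97
cos0 = np.sqrt(1.0 - ib**2)          # cos(theta0), since sin(theta0) = i_b

# Build V/(E_J): Neumann-type discrete Laplacian scaled by K^2,
# plus cos(theta0) on the diagonal
V = np.zeros((N, N))
for i in range(N):
    V[i, i] = 2.0 + cos0/K**2 - (i == 0) - (i == N - 1)
    if i + 1 < N:
        V[i, i + 1] = V[i + 1, i] = -1.0
V *= K**2

vals, vecs = np.linalg.eigh(V)       # eigenvalues in ascending order
omega2 = vals                        # squared mode frequencies in our units
s = np.arange(N)
omega_p2 = cos0
predicted = omega_p2*(1 + (4*K**2/cos0)*np.sin(s*np.pi/(2*N))**2)
assert np.allclose(omega2, predicted)

# Lowest mode is the center-of-mass mode b^0 = (1,...,1)/sqrt(N)
b0 = vecs[:, 0]
assert np.allclose(np.abs(b0), 1/np.sqrt(N))
```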
Therefore, this approach allows us to take advantage of the high quality Josephson junctions required for the superconducting qubits to realize a high quality resonator. Note that in the above we have assumed that all Josephson junctions in the array are identical. In reality, this will not be the case due to unavoidable fabrication errors. However, the critical currents of the junctions can be tuned by a magnetic field so that the effective Josephson energies of the junctions can be equalized. The effect of any residual asymmetry can be estimated by perturbation theory. As long as the amplitude of the transition matrix element due to the asymmetry is much smaller than the energy gap between the center of mass motion mode and higher modes, the energy and wavefunction of the center of mass motion mode remain close to their unperturbed values. In order to implement protocols developed in superconducting qubit based cavity-QED \cite{Falci03,Blais03,Zhou04,Blais04}, we need to couple the superconducting qubits to the center of mass motion mode of the junction array in a way such that they can exchange energy. This is shown in Fig. \ref{fig:Array} (b) and (c) for both charge and flux qubits. Here each vertical junction in the junction array is replaced by a small self-inductance dc-SQUID and coupled inductively to a superconducting qubit. Consider the charge qubit, whose Hamiltonian is $H_Q =-B^z\sigma^z/2 -B^x\sigma^x/2$, where $B^z$ and $B^x$ are determined by the gate voltage and Josephson energy of the charge qubit. 
When its energy $B^z$ is tuned close to $\nu_0 =\omega_p$ and its dc-SQUID is biased at $\Phi_0/2$ (including the flux due to the junction array's bias current $I_b$), the inductive coupling results in a coupling Hamiltonian $H_c= -g(a\sigma^+ +a^{\dagger}\sigma^-)$, where $g= (M/2)(I_c\cos{\theta^{(0)}})I_c^Q (2\pi/\Phi_0) \sqrt{\hbar/2C\omega_p N}$, $M$ is the mutual inductance, $I_c^Q$ is the critical current of the dc-SQUID junctions of the charge qubit, and $a$ is the annihilation operator for the center of mass motion mode of the junction array. In deriving $H_c$, we have used the rotating wave approximation to drop terms that oscillate at high frequencies. For the flux qubit case shown in Fig. \ref{fig:Array} (c), we can derive the same coupling Hamiltonian, with a coupling coefficient which is proportional to the mutual inductance and can be evaluated in terms of the qubit parameters \cite{Orlando99}. With the above design we then have a structure in close analogy to the ion trap quantum computer in which the qubits communicate through the center of mass phonon mode. To realize a universal quantum computer, we can use either the resonant \cite{Falci03,Blais03} or dispersive \cite{Falci03,Zhou04,Blais04} interaction between the qubits and the junction array mode. The interaction is switched on and off by tuning the energy of the qubit into and out of resonance with the junction array. Typical values for $\omega_p$ and qubit energies can be chosen to be up to 10~GHz. The coupling strength $g$ can be tens of megahertz \cite{Zhou04,You}. When the system scales up ($N$ increases), the energy gap between the center of mass motion mode and the higher modes should remain much greater than the coupling strength $g$ to avoid excitation of higher modes. When $N$ is large, the energy difference between the lowest two modes is $\Delta\nu_{01} =(\pi^2K^2/2N^2\cos{ \theta^{(0)}}) \omega_p$. 
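As a quick sanity check (our addition, not from the original text), one can compare the exact difference $\nu_1-\nu_0$ obtained from the mode spectrum with the quoted large-$N$ expression $\Delta\nu_{01}$, for the parameters $K=20$ and $i_b=0.97$ used below:

```python
import numpy as np

# Illustrative check of Delta_nu_01 = (pi^2 K^2 / (2 N^2 cos(theta0))) * omega_p,
# which follows from expanding nu_1 - nu_0 for small pi/(2N).
# Frequencies are in units of omega_p.
K, ib = 20.0, 0.97
cos0 = np.sqrt(1 - ib**2)            # cos(theta0), since sin(theta0) = i_b

def gap_exact(N):
    """nu_1 - nu_0 computed from the full spectrum nu_s."""
    return np.sqrt(1 + (4*K**2/cos0)*np.sin(np.pi/(2*N))**2) - 1

def gap_large_N(N):
    return np.pi**2*K**2/(2*N**2*cos0)

# The asymptotic formula becomes accurate as N grows
assert abs(gap_exact(2000)/gap_large_N(2000) - 1) < 1e-2
assert abs(gap_exact(20000)/gap_large_N(20000) - 1) < 1e-4
```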
For $K=20$ \cite{Note} and $i_b=0.97$, this implies an upper limit of a few hundred for $N$. To relax this limit, we can use more complicated designs than the simple 1d array in Fig. \ref{fig:Array} (a). For instance, we can consider a network of $N$ superconducting islands in which each pair of nodes is coupled by a Josephson junction, as shown in Fig. \ref{fig:Array} (d). Each island is still grounded through a junction whose plasma frequency is $\omega_p$, and the Josephson energy of the coupling junctions is $K^2$ times that of the grounding junctions. In this case, the center of mass motion mode remains at $\omega_p$ and all higher modes are pushed up to a frequency $\omega_p\sqrt{1+NK^2/\cos{ \theta^{(0)}}}$. The number of junctions required is on the order of $N^2/2$. In the above, we take advantage of the macroscopic quantum behavior of the Josephson junction phases to construct an analogy of the ion trap quantum computer. We can push this concept further and consider using Josephson junction arrays to simulate the dynamics of physical systems. Consider a 1d array of qubits. In the spin 1/2 representation, the dynamics of the qubits are described by the spin operators $S_i^x$, $S_i^y$ and $S_i^z$ whose commutation relations are defined by $[S_i^\alpha, S_j^\beta] =i\delta_{ij}\epsilon_{\alpha\beta\gamma}S_i^\gamma$ for $\alpha, \beta, \gamma = x, y, z$ and $i, j =0, 1, ...,N-1$. By using a Jordan-Wigner transformation \cite{Jordan28}, we can map this qubit array to a collection of spinless fermions annihilated by the operators \begin{equation} f_n =S_n^- K(n), \label{eq:JW} \end{equation} where $S_n^{\pm} =S_n^x\pm iS_n^y$ and $K(n)$ is a nonlocal operator defined by $K(n) =\exp[i\pi\sum_{m=0}^{n-1} f_m^{\dagger}f_m] =\exp[i\pi\sum_{m=0}^{n-1} (S_m^z+1/2)]$. It can be verified that the $f_n$'s satisfy the anti-commutation relations required for fermion operators. The fermion number operator is simply $f_n^{\dagger}f_n =S_n^z +1/2$. 
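The anticommutation relations claimed for the $f_n$'s can be verified explicitly for a small chain. The sketch below (our illustration, not part of the original text) builds $f_n = S_n^- K(n)$ as $2^N\times 2^N$ matrices, using that $\exp[i\pi(S_m^z+1/2)]$ acts as $\mathrm{diag}(-1,1)$ on site $m$ in the basis $(|{\uparrow}\rangle,|{\downarrow}\rangle)$:

```python
import numpy as np
from functools import reduce

# Verify fermionic anticommutation of the Jordan-Wigner operators
# f_n = S_n^- K(n) on a small N-site chain.
N = 4
I2 = np.eye(2)
sz = np.diag([0.5, -0.5])            # S^z in the basis (|up>, |down>)
sm = np.array([[0.0, 0.0],
               [1.0, 0.0]])          # lowering operator S^- = |down><up|

def site_op(op, n):
    """Embed a single-site operator at site n of the N-site chain."""
    return reduce(np.kron, [op if m == n else I2 for m in range(N)])

def f(n):
    """f_n = S_n^- exp[i*pi*sum_{m<n}(S_m^z + 1/2)]; the exponential
    acts as diag(-1, 1) on every site m < n."""
    string = reduce(np.matmul,
                    [site_op(np.diag([-1.0, 1.0]), m) for m in range(n)],
                    np.eye(2**N))
    return site_op(sm, n) @ string

def anti(A, B):
    return A @ B + B @ A

for n in range(N):
    for m in range(N):
        assert np.allclose(anti(f(n), f(m)), 0)            # {f_n, f_m} = 0
        delta = np.eye(2**N) if n == m else 0
        assert np.allclose(anti(f(n), f(m).conj().T), delta)
    # fermion number operator: f_n^dagger f_n = S_n^z + 1/2
    assert np.allclose(f(n).conj().T @ f(n), site_op(sz + 0.5*I2, n))
```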
Now, if we couple the qubits by nearest neighbor XXZ interactions, the Hamiltonian of the system is $H =-J_{xy}\sum_i (S_i^xS_{i+1}^x +S_i^yS_{i+1}^y) +J_z \sum_i S_i^zS_{i+1}^z$. Under the Jordan-Wigner transformation, the system Hamiltonian is transformed to \begin{equation} H =-(J_{xy}/2)\sum_n (f_n^{\dagger}f_{n+1} +f_{n+1}^{\dagger}f_n) +J_z \sum_n (f_n^{\dagger}f_n- 1/2) (f_{n+1}^{\dagger}f_{n+1}- 1/2). \end{equation} As can be seen, the coupling in the XY directions transforms into hopping of the fermions between neighboring sites and the Ising coupling causes nearest neighbor interactions between the fermions. \begin{figure} \caption{A 1d rf-SQUID qubit array inductively coupled to local Josephson junction oscillators. The dc-SQUIDs of the qubits are inductively coupled too. Both $M$ and $M'$ are tunable couplings \cite{Orlando99,Mooij99,Filippov03}. The qubits are arranged in a ring structure when periodic boundary conditions are required. } \label{fig:QubitArray} \end{figure} An interesting scenario arises when we couple a superconducting qubit array and ``phonon modes'' realized by small oscillations of Josephson junction phases, as shown in Fig. \ref{fig:QubitArray}. Since the qubit array can be transformed into a fermion system, this allows us to simulate a family of solid state problems concerning spinless electron-phonon and electron-electron interactions. The macroscopic nature of the system and the fine control and measurement techniques developed in superconducting quantum information processing allow us to examine the fermion dynamics directly and closely to obtain vital information about the system. For this purpose, we need to be careful with a few issues. The first is that in a physical system the electron number is conserved; however, a $B^x\sigma^x$ term of the qubit can flip its basis states defined by the eigenstates of $\sigma^z$. According to the Jordan-Wigner transformation Eq. 
(\ref{eq:JW}), the flipping of the qubit state corresponds to the creation or annihilation of a fermion. Therefore, we must keep $B^x$ of the qubits equal to zero. This is realized by using small self-inductance dc-SQUIDs for the qubits and biasing them appropriately so that the $B^x$ field of the qubit is much smaller \cite{Zhou04} than other energy scales of the system and thus can be neglected for the time scale of interest. To further suppress fermion creation and annihilation, we may choose $B^z$ to be large, which makes changes in the total number of fermions energetically forbidden. Another point is that the coupling between the qubit and the Josephson phase should not allow them to exchange excitations, since in a physical system no process happens in which an electron is created (annihilated) and a phonon is annihilated (created). In the following we focus on the 1d Holstein model of spinless electrons, a problem of both theoretical and practical interest \cite{McKenzie96,Bursill98}. The Hamiltonian of the system is \begin{equation} H= -t\sum_i (c_i^{\dagger}c_{i+1} +c_{i+1}^{\dagger}c_i) +\omega \sum_i a_i^{\dagger}a_i -g\sum_i (c_i^{\dagger}c_i -1/2)(a_i+a_i^{\dagger}), \label{eq:Holstein} \end{equation} where $c_i$ destroys a fermion at site $i$ and $a_i$ destroys a local phonon of frequency $\omega$. This system at half filling has two phases depending on the ratio $g/\omega$. When $g< g_c$, the ground state is in a metallic (Luttinger liquid) phase. When $g$ exceeds $g_c$, it exhibits an insulating phase with charge density wave (CDW) long range order \cite{McKenzie96,Bursill98}. Though it can be solved analytically in certain limits, for generic parameters the accurate determination of the critical coupling strength and energy gaps is difficult, and only limited-size systems at half filling have been studied numerically \cite{McKenzie96,Bursill98}. 
We can construct an artificial realization of the 1d Holstein model using a superconducting qubit array and local oscillator modes realized by Josephson junctions, as shown in Fig. \ref{fig:QubitArray}. For clarity, rf-SQUID qubits are shown, whose dc-SQUIDs are coupled by a tunable mutual inductance. The tunable mutual inductances can be realized by using a flux transformer interrupted by a dc-SQUID whose critical current can be tuned by a flux bias, as discussed in \cite{Orlando99,Mooij99,Filippov03}. As a result of this inter-qubit coupling and the qubit bias, the qubit array Hamiltonian is $H_Q =-\sum_i B^zS_i^z -J\sum_i S_i^xS_{i+1}^x$, where $B^z$ is determined by the flux bias of the qubits and $J$ is proportional to the tunable mutual inductance $M$. The local phonon modes are realized by Josephson junctions whose oscillation frequency is the plasma frequency $\omega_p$. These junctions should have a deep potential well in order to guarantee that their behavior is close to harmonic even when the excitation number is not small. Alternatively, if we are also interested in studying the effect of anharmonicity of the phonon modes on the system behavior \cite{Mahan96}, we may use an inductor ($L$) shunted dc-SQUID. In this case, under the condition $LI_c \ll \Phi_0/2\pi$, the Hamiltonian for the phonon mode is $Q^2/2C_J +\Phi^2/2L' -(I_c/24)(2\pi/\Phi_0)^3\Phi^4$, where $Q$ and $\Phi$ are the charge and flux across the dc-SQUID, $C_J$ is the total junction capacitance and the effective inductance $L'= L(1-2\pi LI_c/\Phi_0)$. The anharmonicity can be adjusted by tuning the critical current $I_c$ of the dc-SQUID. The local Josephson junction oscillators are coupled to the main loops of the rf-SQUID qubits using a tunable mutual inductance as shown in Fig. \ref{fig:QubitArray}. This introduces a coupling Hamiltonian proportional to $S^z\Phi$, with a coupling coefficient proportional to the mutual inductance $M'$ \cite{Orlando99}. 
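The quoted phonon-mode Hamiltonian follows from expanding the potential of the inductor-shunted dc-SQUID to fourth order in $\Phi$. The symbolic sketch below (our illustration, not part of the original text) verifies the quadratic and quartic coefficients, using the exact $1/L' = 1/L + 2\pi I_c/\Phi_0$, whose first-order form for $LI_c \ll \Phi_0/2\pi$ is the $L' = L(1-2\pi LI_c/\Phi_0)$ quoted above:

```python
from sympy import symbols, cos, series, simplify, pi

# Expand U = Phi^2/(2L) - E_J*cos(2*pi*Phi/Phi_0), E_J = I_c*Phi_0/(2*pi),
# to fourth order in Phi. Up to a constant this reproduces
# Phi^2/(2L') - (I_c/24)(2*pi/Phi_0)^3 * Phi^4.
Phi, L, Ic, Phi0 = symbols('Phi L I_c Phi_0', positive=True)
EJ = Ic*Phi0/(2*pi)
U = Phi**2/(2*L) - EJ*cos(2*pi*Phi/Phi0)
expansion = series(U, Phi, 0, 5).removeO()

Leff = 1/(1/L + 2*pi*Ic/Phi0)        # exact effective inductance
target = -EJ + Phi**2/(2*Leff) - (Ic/24)*(2*pi/Phi0)**3*Phi**4
assert simplify(expansion - target) == 0
```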
After quantizing the local phonon modes and applying the Jordan-Wigner transformation, we obtain the Hamiltonian of the 1d Holstein model Eq. (\ref{eq:Holstein}), with the hopping strength $t=J/4$ and the phonon frequency $\omega =\omega_p$. Since the qubit-qubit and qubit-phonon interactions are realized with tunable couplings, the ratio of $t$ and $g$ to $\omega$ in the Holstein model can be tuned in a wide range. This allows us to explore a large phase space of the system. To simulate the 1d Holstein model, we first set the number of spinless fermions in the system. This is done by initializing half the qubits in the $S^z=1/2$ state and the other half in the $S^z=-1/2$ state (assuming we are interested in the half filling case), which can be accomplished by applying strong $B^z$ fields to the qubits or by performing measurements. Then $t$ and $g$ are turned on slowly (by tuning the inter-qubit and qubit-phonon couplings) until the intended point $(t,g,\omega)$ in the phase space is reached. The phase of the system can be inferred by measuring the fermion numbers $ f_n^\dagger f_n$, or $S_n^z$, which are independent of $n$ (1/2 for half filling) in the metallic phase and vary with $n$ in the insulating phase. The excitation energy of the system can be determined by spectroscopic methods as in \cite{Xu04}. In conclusion, we have shown that properly engineered Josephson junction arrays can be used for quantum manipulation and simulation. The discussed protocols require only limited control of the system and provide interesting opportunities for superconducting quantum information processing. The authors acknowledge financial support from the Packard Foundation. \end{document}
\begin{document} \title{Another (wrong) construction of $\pi$} \author{Zolt\'an Kov\'acs} \institute{ The Private University College of Education of the Diocese of Linz\\ Salesianumweg 3, A-4020 Linz, Austria\\ \email{[email protected]} } \maketitle \begin{abstract} A simple way is shown to construct the length $\pi$ from the unit length with 4-digit accuracy. \keywords{$\pi$, approximation, elementary geometry, automated theorem proving} \end{abstract} \section{Introduction} It is well-known that accurate construction of the ratio between the perimeter and the diameter of a circle is theoretically impossible \cite{Wantzel1837,Lindemann1882}. Before Lindemann's result (and even thereafter), however, several attempts were recorded to geometrically construct the number $\pi$ by various means, in most cases by using a compass and a straightedge. One of the most successful attempts is Kocha\'nski's work \cite{Kochanski}, which produces an approximation of $\pi$ accurate to 4 digits, $\sqrt{40/3-2\sqrt3}\approx3.14153333\ldots$ Kocha\'nski's construction is relatively easy: it requires only a small number of steps and can be discussed in the school curriculum as well. In this note the same approximation is given, by using---at least geometrically---an even simpler approach. \section{The construction} The proposed way to construct $\pi$ is shown in Fig.~\ref{fig1} (see also \cite{pi-12gon}). \begin{figure} \caption{A new way to construct $\pi$ approximately} \label{fig1} \end{figure} A proof that $|RS|=\sqrt{40/3-2\sqrt3}$ is as follows. We assume that $A_0=(0,0)$ and $A_1=(1,0)$ (see Fig.~\ref{fig2}). By using that $360^{\rm o}/12=30^{\rm o}$, $\cos(30^{\rm o})=\sqrt3/2$ and $\sin(30^{\rm o})=1/2$, the exact coordinates of the appearing vertices in the construction are $A_3=(3/2+\sqrt3/2,1/2+\sqrt3/2)$, $A_6=(1,2+\sqrt3)$, $A_7=(0,2+\sqrt3)$, $A_8=(-\sqrt3/2,3/2+\sqrt3)$, $A_9=(-1/2-\sqrt3/2,3/2+\sqrt3/2)$, $A_{11}=(-\sqrt3/2,1/2)$. 
\begin{figure} \caption{Explanation for the proof} \label{fig2} \end{figure} To find the coordinates of $R$ we can compute the equation of the line $A_{11}A_6$. By substituting the coordinates of $A_{11}$ and $A_6$ it can be verified that the equation is $$(3/2+\sqrt3)x-(1+\sqrt3/2)y=-2-\sqrt3,$$ and setting $y=0$ we obtain the exact coordinates $R=(-2/\sqrt3,0)$. (Alternatively, it can be shown that $|A_1R|=1+2/\sqrt3$, because it is the shorter cathetus of the triangle $RA_1A_6$, which is half of an equilateral triangle---this holds because the angle $A_1A_6A_{11}$ is an inscribed angle of the circumcircle of the regular 12-gon, and it must be $60^{\rm o}/2=30^{\rm o}$.) Now, $A_3A_8\perp A_6A_{11}$, so we are searching for the equation of line $A_3A_8$ in the form $$(1+\sqrt3/2)x+(3/2+\sqrt3)y=c.$$ After substituting the coordinates of $A_3$ in this, we obtain that $c=(9+5\sqrt3)/2$. Because of symmetry, it is clear that the equation of line $A_7A_9$ is of the form $$y=x+d.$$ By using the coordinates of $A_7$, we immediately obtain that $d=2+\sqrt3$. To find the coordinates of $S$ we now solve the equation system \begin{align} (1+\sqrt3/2)x+(3/2+\sqrt3)y=&(9+5\sqrt3)/2,\\ y=&x+2+\sqrt3 \end{align} which produces the coordinates $x=(\sqrt3-3)/2$, $y=(3\sqrt3+1)/2$. Finally, we compute the length of $RS$: $$|RS|=\sqrt{\left((\sqrt3-3)/2+(2/\sqrt3)\right)^2+\left((3\sqrt3+1)/2\right)^2}=\sqrt{40/3-2\sqrt3}.$$ 
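The computation above can also be verified symbolically. The sketch below (our addition, not part of the original note) checks that the claimed coordinates of $S$ solve the linear system, that $|RS|^2 = 40/3-2\sqrt3$, and that the resulting length agrees with $\pi$ to four digits:

```python
from sympy import sqrt, Rational, simplify, pi, N

x_R, y_R = -2/sqrt(3), 0                        # R, on the line A11A6 at y = 0
x_S, y_S = (sqrt(3) - 3)/2, (3*sqrt(3) + 1)/2   # claimed coordinates of S

# S solves the system given by the lines A3A8 and A7A9
assert simplify((1 + sqrt(3)/2)*x_S + (Rational(3, 2) + sqrt(3))*y_S
                - (9 + 5*sqrt(3))/2) == 0
assert simplify(y_S - (x_S + 2 + sqrt(3))) == 0

# |RS|^2 = 40/3 - 2*sqrt(3), and |RS| agrees with pi to 4 digits
RS = sqrt((x_S - x_R)**2 + (y_S - y_R)**2)
assert simplify(RS**2 - (Rational(40, 3) - 2*sqrt(3))) == 0
assert abs(N(RS) - N(pi)) < 1e-4
```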
Regular polygons with fewer sides have already been completely studied in the constructible cases for $n<12$, with less accurate results (see \cite{pi-12gon} for details). Checking the cases $n=15,16,17,20$ (all are constructible) is an ongoing project. While the software tool \cite{RegularNGons} gave a machine-assisted proof by using Gr\"obner bases and elimination (see \cite{Cox_2007,Recio1999} for more details), the proof above was compiled by the author manually. \section*{Acknowledgments} The author was partially supported by a grant MTM2017-88796-P from the Spanish MINECO (Ministerio de Economia y Competitividad) and the ERDF (European Regional Development Fund). \end{document}
\begin{document} \title[Dicritical singularities of Levi-flat hypersurfaces and foliations]{Existence of dicritical singularities of Levi-flat hypersurfaces and holomorphic foliations} \author{Andr\'es Beltr\'an} \address[A. Beltr\'an]{Dpto. Ciencias - Secci\'on Matem\'aticas, Pontificia Universidad Cat\'olica del Per\'u.} \curraddr{Av. Universitaria 1801, San Miguel, Lima 32, Peru} \email{[email protected]} \author{Arturo Fern\'andez-P\'erez} \address[A. Fern\'andez-P\'erez]{Departamento de Matem\'atica - ICEX, Universidade Federal de Minas Gerais, UFMG} \curraddr{Av. Ant\^onio Carlos 6627, 31270-901, Belo Horizonte-MG, Brasil.} \email{[email protected]} \author{Hern\'an Neciosup} \address[H. Neciosup]{Dpto. Ciencias - Secci\'on Matem\'aticas, Pontificia Universidad Cat\'olica del Per\'u.} \curraddr{Av. Universitaria 1801, San Miguel, Lima 32, Peru} \email{[email protected]} \thanks{This work is supported by the Pontificia Universidad Cat\'olica del Per\'u project VRI-DGI 2016-1-0018. The second author is partially supported by CNPq grant number 301825/2016-5} \subjclass[2010]{Primary 32V40 - 32S65} \keywords{Levi-flat hypersurfaces - Holomorphic foliations} \begin{abstract} We study holomorphic foliations tangent to singular real-analytic Levi-flat hypersurfaces in compact complex manifolds of complex dimension two. We give some hypotheses to guarantee the existence of dicritical singularities of these objects. As a consequence, we give some applications to holomorphic foliations tangent to real-analytic Levi-flat hypersurfaces with singularities in $\mathbb{P}^2$. \end{abstract} \maketitle \section{Introduction} \par In this paper we study holomorphic foliations tangent to singular real-analytic Levi-flat hypersurfaces in compact complex manifolds of complex dimension two, with emphasis on the types of their singularities. 
Singular Levi-flat hypersurfaces in complex manifolds appear in many contexts, for example as the zero set of the real part of a holomorphic function or as an invariant set of a holomorphic foliation. While singular Levi-flat hypersurfaces have many properties of complex subvarieties, they have a more complicated geometry and inherit many pathologies from general real-analytic subvarieties. The interconnection between singular Levi-flat hypersurfaces and holomorphic foliations has been studied by many authors, see for example \cite{burns}, \cite{brunella}, \cite{alcides}, \cite{normal}, \cite{arturo}, \cite{generic}, \cite{arnold}, \cite{libro}, \cite{lebl}, \cite{singularlebl} and \cite{shafikov}. \par Let $M$ be a real-analytic closed subvariety of real dimension 3 in a compact complex manifold $X$ of complex dimension two. Unless specifically stated, subvarieties are analytic, not necessarily algebraic. Throughout the text, the term \textit{real-analytic hypersurface} will be employed with the meaning \textit{real-analytic subvariety of real dimension 3}. Let us denote by $M^{*}$ the set of points of $M$ near which $M$ is a nonsingular real-analytic hypersurface. $M$ is said to be {\textit{Levi-flat}} if the codimension one distribution on $M^{*}$ $$T^{\mathbb{C}}M^{*}=TM^{*}\cap J(TM^{*})\subset TM^{*}$$ is integrable, in the sense of Frobenius. It follows that $M^{*}$ is locally foliated by immersed one-dimensional complex manifolds; the foliation defined by $T^{\mathbb{C}}M^{*}$ is called the \textit{Levi-foliation} and will be denoted by $\mathcal{L}$. \par Let $\{U_j\}_{j\in I}$ be an open covering of $X$. A \textit{holomorphic foliation} $\mc{F}$ on $X$ can be described by a collection of holomorphic 1-forms $\omega_j\in\Omega^{1}_{X}(U_{j})$ with isolated zeros such that \begin{equation*} \omega_i=g_{ij}\omega_j\,\,\,\,\,\,\,\,\,\text{on}\,\,\,U_i\cap U_j,\,\,\,\,\,\,\,\,\,\,\,\,\,\,g_{ij}\in\mathcal{O}^{*}_{X}(U_i\cap U_j). 
\end{equation*} The cocycle $\{g_{ij}\}$ defines a line bundle $N_{\mc{F}}$ on $X$, called the \textit{normal bundle} of $\mc{F}$. The \textit{singular set} $\textsf{Sing}(\mc{F})$ of $\mc{F}$ is the finite subset of $X$ defined by $$\textsf{Sing}(\mc{F})\cap U_{j}=\text{zeros of}\,\,\,\omega_{j},\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\forall j\in I.$$ A point $q\not\in\textsf{Sing}(\mc{F})$ is said to be \textit{regular}. We will denote by $c_1(N_{\mc{F}})\in H^2(X,\mathbb{Z})$ the \textit{first Chern class} of $N_{\mc{F}}$, and if $\Omega$ is a smooth closed 2-form on $X$ which represents, in the De Rham sense, the first Chern class of $N_{\mc{F}}$, we will use the following notation $$c_1^{2}(N_{\mc{F}}):=\int_{X}\Omega\wedge\Omega.$$ We shall say that a holomorphic foliation $\mathcal{F}$ on $X$ is \textit{tangent} to $M$ if locally the leaves of the Levi foliation $\mathcal{L}$ on $M^{*}$ are also leaves of $\mathcal{F}$. A singular point $p\in M$ is called \textit{dicritical} if for every neighborhood $U$ of $p$, infinitely many leaves of the Levi-foliation on $M^{*}\cap U$ have $p$ in their closure. Analogously, a singular point $p\in\textsf{Sing}(\mc{F})$ of a holomorphic foliation $\mc{F}$ is called \textit{dicritical} if for every neighborhood $U$ of $p$, infinitely many leaves have $p$ in their closure. Otherwise it is called \textit{non-dicritical}. \par Recently, dicritical singularities of singular real-analytic Levi-flat hypersurfaces have been characterized in terms of the \textit{Segre varieties}; see for instance Pinchuk-Shafikov-Sukhov \cite{pinchuk}. With this characterization, the notion of dicritical singularity in the theory of holomorphic foliations coincides with the notion of Segre-degenerate singularity of a real-analytic Levi-flat hypersurface, and therefore we can use results on residue-type indices associated to singular points of holomorphic foliations \cite{index}, \cite{suwa} to prove the following result. 
\begin{maintheorem}\label{main_theorem} Let $\mc{F}$ be a holomorphic foliation on a compact complex manifold $X$ of complex dimension two tangent to an irreducible real-analytic Levi-flat hypersurface $M \subset X$ such that $\textsf{Sing}(\mc{F})\subset M$. Suppose $c^2_1(N_\mc{F})>0$. Then there exists a dicritical singularity $p \in M$ such that $\mc {F}$ has a non-constant meromorphic first integral at $p$. \end{maintheorem} \par We emphasize that the existence of dicritical singularities of $M$ depends on the condition $c^2_1(N_\mc{F})>0$, since otherwise the result fails. In section \ref{examples_paper} we give some examples that show the importance of the condition $c^2_1(N_\mc{F})>0$. The condition $\textsf{Sing}(\mc{F})\subset M$ is used in the proof of Theorem \ref{main_theorem}; we do not know whether it can be removed. \par In the sequel we apply Theorem \ref{main_theorem} to non-dicritical projective foliations, that is, holomorphic foliations on $\mathbb{P}^2$ with only non-dicritical singularities. \begin{maincorollary} Let $\mc{F}$ be a holomorphic foliation on $\mathbb{P}^2$ with only non-dicritical singularities. Then there are no singular real-analytic Levi-flat hypersurfaces in $\mathbb{P}^{2}$ tangent to $\mc{F}$ that contain $\textsf{Sing}(\mathcal{F})$. \end{maincorollary} Now we apply Theorem \ref{main_theorem} to holomorphic foliations of degree 2 on $\mathbb{P}^2$ with a unique singularity. \begin{secondcorollary} Let $\mc{F}$ be a holomorphic foliation of degree 2 on $\mathbb{P}^2$ with a unique singular point $p$. Suppose that $\mc{F}$ is tangent to a singular real-analytic Levi-flat hypersurface $M\subset\mathbb{P}^2$ and $p\in M$. Then, up to automorphism, $\mc{F}$ is given in affine coordinates $(x,y)\in\mathbb{C}^2$ by the 1-form $$\omega=x^2dx+y^2(xdy-ydx).$$ Moreover, let $[x:y:z]$ be the homogeneous coordinates of $\mathbb{P}^2$; then $R=\frac{y^3-3x^2z}{3x^3}$ is a rational first integral for $\mc{F}$.
\end{secondcorollary} \par A singularity $p$ of a germ of a real-analytic Levi-flat hypersurface $M$ is said to be \textit{semialgebraic} if the germ of $M$ at $p$ is biholomorphic to a semialgebraic Levi-flat hypersurface. We recall that a real-analytic Levi-flat hypersurface is said to be \textit{semialgebraic} if it is contained in a codimension one real-analytic subvariety defined by the vanishing of a real polynomial. \par To continue, we apply Theorem \ref{main_theorem} to find \textit{semialgebraic} singularities of singular real-analytic Levi-flat hypersurfaces which are tangent to singular holomorphic foliations on compact complex manifolds of complex dimension two. Similar results on the algebraization of singularities of holomorphic foliations can be found in \cite{genzmer}. Recently, in \cite{casale}, Casale considered the algebraization problem for \textit{simple dicritical singularities} of germs of holomorphic foliations. These singularities are those that become nonsingular after one blow-up and such that a unique leaf is tangent to the exceptional divisor with tangency order one. Motivated by \cite{casale}, we state the following result. \begin{Thirdcorollary} Let $\mc{F}$ be a holomorphic foliation on a compact complex manifold $X$ of complex dimension two tangent to an irreducible real-analytic Levi-flat hypersurface $M \subset X$ such that $\textsf{Sing}(\mc{F})\subset M$. Suppose that $c^2_1(N_{\mc{F}})>0$ and $\mc{F}$ has a unique simple dicritical singularity $p\in M$. Then there exists an algebraic surface $V$, a rational function $H$ on $V$ and a point $q\in V$ such that the germ of $M$ at $p$ is biholomorphic to a semialgebraic Levi-flat hypersurface $M'\subset V$ in a neighborhood of $q$. \end{Thirdcorollary} \par Finally, we state a result that guarantees the existence of dicritical singularities of $M$ in the presence of compact complex curves invariant by $\mc{F}$.
\begin{secondtheorem}\label{second} Let $\mc{F}$ be a holomorphic foliation on a compact complex manifold $X$ of complex dimension two tangent to an irreducible real-analytic Levi-flat hypersurface $M \subset X$. Suppose that the self-intersection $C\cdot C>0$ for some irreducible compact complex curve $C\subset M$ invariant by $\mc{F}$. Then there exists a dicritical singularity $p \in \textsf{Sing} (\mc{F})\cap C$ such that $\mc {F}$ has a non-constant meromorphic first integral at $p$. \end{secondtheorem} \par To prove Theorem \ref{second} we use the Camacho-Sad index (cf. \cite{CS}) and a result of Cerveau-Lins Neto (see Theorem \ref{lins-cerveau}). \par We organize the paper as follows: in section \ref{indices} we review some definitions and results about indices of holomorphic foliations at singular points. In section \ref{Existence} we give the proof of Theorem \ref{main_theorem}. Section \ref{application} is devoted to some applications of Theorem \ref{main_theorem}. In section \ref{dicritical} we provide the proof of Theorem \ref{second}. Finally, in section \ref{examples_paper} we give some examples which illustrate the importance of the hypotheses of Theorems \ref{main_theorem} and \ref{second}. \section{Indices of holomorphic foliations}\label{indices} \par In this section we state two important results on indices of holomorphic foliations: the Camacho-Sad index \cite{CS} and the Baum-Bott index \cite{baum}. The first one concerns the computation of $C\cdot C$, where $C\subset X$ is a compact curve invariant by $\mathcal{F}$, and the second one concerns the computation of $c^{2}_1(N_{\mc{F}})$. More references for these index theorems can be found in \cite[Chapter V]{suwa}, see also \cite{birational} and \cite{index}. \par First, we recall the definition of meromorphic and holomorphic first integrals for holomorphic foliations. Let $\mc{F}$ be a singular holomorphic foliation on $X$.
Recall that $\mathcal{F}$ admits a \textit{meromorphic} (\textit{holomorphic}) first integral at $p\in X$ if there exists a neighborhood $U$ of $p$ and a \textit{meromorphic} (\textit{holomorphic}) function $h$ defined in $U$ such that its indeterminacy (zero) set is contained in $\textsf{Sing}(\mc{F})\cap U$ and its level curves contain the leaves of $\mathcal{F}$ in $U$. \subsection{Camacho-Sad index} \par Let us consider a separatrix $C$ of $\mc{F}$ at $p\in X$. Let $f$ be a holomorphic function on a neighborhood of $p$ defining $C =\{f=0\}$. We may assume that $f$ is reduced, i.e. $df\neq 0$ outside $p$. Then \cite{lins}, \cite{suwa} there are functions $g$, $k$ and a 1-form $\eta$ on a neighborhood of $p$ such that $$g\omega=kdf+f\eta$$ and moreover $k$ and $f$ are relatively prime, i.e. $k\neq 0$ on $C^{*}=C\setminus\{p\}$. The Camacho-Sad index \cite{CS} is defined as $$\text{CS}(\mc{F},C,p)=-\frac{1}{2\pi i}\int_{\partial{C}}\frac{1}{k}\eta,$$ where $\partial{C}=C\cap S^{3}$ and $S^3$ is a small sphere around $p$; $\partial{C}$ is oriented as the boundary of $S^{3}\cap B^4$, with $B^4$ a ball containing $p$. \par If $C\subset X$ is a compact complex curve invariant by $\mc{F}$, one has the following formula due to Camacho and Sad. \begin{theorem}\cite[Camacho-Sad]{CS}\label{CS} $$\sum_{p\in\textsf{Sing}(\mc{F})\cap C}\text{CS}(\mc{F},C,p)=C\cdot C.$$ \end{theorem} \subsection{Baum-Bott index} Let $\mc{F}$ be a holomorphic foliation with isolated singularities on $X$.
Let $p\in X$ be a singular point of $\mc{F}$; near $p$ the foliation is given by a holomorphic vector field $$v=F(x,y)\frac{\partial}{\partial{x}}+G(x,y)\frac{\partial}{\partial{y}}$$ or by a holomorphic 1-form $$\omega=F(x,y)dy-G(x,y)dx.$$ \par Let $J(x,y)$ be the Jacobian matrix of $(F,G)$ at $(x,y)$; then following \cite{baum} one can define the Baum-Bott index at $p\in\textsf{Sing}(\mc{F})$ as $$\text{BB}(\mc{F},p)=\text{Res}_{p}\Big\{\frac{(\text{Tr} J)^2}{F\cdot G}dx\wedge dy\Big\}.$$ The Baum-Bott index depends only on the conjugacy class of the germ of $\mc{F}$ at $p$. For example, when the singularity $p$ is non-degenerate, $$\text{BB}(\mc{F},p)=\frac{(\text{Tr} J(p))^2}{\det J(p)}=\frac{\lambda_1}{\lambda_2}+\frac{\lambda_2}{\lambda_1}+2,$$ where $\lambda_1$ and $\lambda_2$ are the eigenvalues of the linear part $Dv(p)$ of $v$ at $p$. In this case the set of separatrices $S$ through $p$ is formed by two transversal branches $C_1$ and $C_2$, both of them analytic. We note also that $$\text{CS}(\mc{F},S,p)=\text{CS}(\mc{F},C_1,p)+\text{CS}(\mc{F},C_2,p)+2[C_1,C_2]_p=\frac{\lambda_1}{\lambda_2}+\frac{\lambda_2}{\lambda_1}+2,$$ where $[C_1,C_2]_{p}$ denotes the intersection number of the curves $C_1$ and $C_2$ at $p$. We remark that the indices $\text{CS}(\mc{F},C_i,p)$ are computed over $\partial{C}_{i}$ for $i=1,2$, respectively. Thus $\text{BB}(\mc{F},p)=\text{CS}(\mc{F},S,p)$. Of course this remains valid for generalized curve foliations with non-dicritical singularities. \begin{theorem}\cite[Brunella]{index}\label{brunella_index} Let $\mc{F}$ be a non-dicritical germ of holomorphic foliation at $0\in\mathbb{C}^2$ and let $S$ be the union of all its separatrices. If $\mc{F}$ is a generalized curve foliation, then \begin{eqnarray*} \text{BB}(\mc{F},0)&=&\text{CS}(\mc{F},S,0).
\end{eqnarray*} \end{theorem} \par We recall that the foliation induced by $v$ is said to be a \textit{generalized curve} at $0\in\mathbb{C}^2$ if there are no saddle-nodes in its reduction of singularities. It is easily seen that Theorem \ref{brunella_index} fails for saddle-node singularities. The following formula will be used in the proof of Theorem \ref{main_theorem}. \begin{theorem}\cite[Baum-Bott]{baum}\label{baum_bott} $$\sum_{p\in\textsf{Sing}(\mc{F})}\text{BB}(\mc{F},p)=c^{2}_1(N_{\mc{F}}).$$ \end{theorem} \section{Existence of dicritical singularities of Levi-flat hypersurfaces}\label{Existence} \par In order to prove Theorem \ref{main_theorem} we need the following results. \begin{theorem}[Cerveau-Lins Neto \cite{alcides}]\label{lins-cerveau} Let $\mathcal{F}$ be a germ at $0\in\mathbb{C}^{n}$, $n\geq{2}$, of a codimension one holomorphic foliation tangent to a germ of an irreducible real-analytic hypersurface $M$. Then $\mathcal{F}$ has a non-constant meromorphic first integral. In the case of dimension two we can be more precise: \begin{enumerate} \item If $\mc{F}$ is dicritical then it has a non-constant meromorphic first integral. \item If $\mc{F}$ is non-dicritical then it has a non-constant holomorphic first integral. \end{enumerate} \end{theorem} Now we prove the following lemma. \begin{lemma}\label{baum-bott_signo} Let $\mc{F}$ be a germ of a non-dicritical holomorphic foliation at $0\in\mathbb{C}^2,$ tangent to a germ of an irreducible real-analytic Levi-flat hypersurface $M$ at $0\in\mathbb{C}^2$. Then the Baum-Bott index satisfies $\text{BB}(\mc{F},0)\leq 0.$ \end{lemma} \begin{proof} Since $\mc{F}$ is non-dicritical at $0\in\mathbb{C}^2$, Theorem \ref{lins-cerveau} implies that $\mc{F}$ has a non-constant holomorphic first integral $g\in\mathcal{O}_2$.
In particular, $\mc{F}$ is a generalized curve foliation and Theorem \ref{brunella_index} implies that $$\text{BB}(\mc{F},0)=\text{CS}(\mc{F},S,0),$$ where $S$ is the union of all separatrices of $\mc{F}$ at $0\in\mathbb{C}^2$. To prove the lemma we need to calculate $CS(\mc{F},S,0)$. In fact, since $dg\wedge\omega=0$, we have $dg=h\omega$ for some $h\in\mathcal{O}_2$. Moreover, if $g=g^{\ell_1}_1\cdots g^{\ell_k}_{k}$, we get $S=\displaystyle\bigcup^{k}_{j=1} C_{j}$ with $C_j=\{g_j=0\}$ and $$\sum^{k}_{j=1}\ell_jg_1\cdots\widehat{g_j}\cdots g_k dg_j=h_1\omega,$$ where $h_1=\frac{h}{g^{\ell_1-1}_1\cdots g^{\ell_k-1}_k}$. Hence $$\text{BB}(\mc{F},0)=\text{CS}(\mc{F},S,0)=-\sum_{1\leq i< j\leq k}\frac{(\ell_i-\ell_j)^2}{\ell_i\ell_j}[g_i,g_j]_{0}\leq 0,$$ where $[g_i,g_j]_{0}$ denotes the intersection number of $C_i$ and $C_j$ at $0\in\mathbb{C}^2$. \end{proof} We now prove Theorem \ref{main_theorem}. \subsection{Proof of Theorem \ref{main_theorem}} We use Theorems \ref{baum_bott} and \ref{lins-cerveau} to prove that $\mc{F}$ has a dicritical singularity in $X$. In fact, suppose by contradiction that $\textsf{Sing}(\mc{F})$ consists only of non-dicritical singularities. Take any point $p\in\textsf{Sing} (\mc{F})$ and let $U$ be a small neighborhood of $p$ in $X$ such that $\mc{F}$ is represented by a holomorphic 1-form $\omega$ on $U$ and $p$ is an isolated singularity of $\omega$. Since $\mc{F}$ and $M$ are tangent in $U$ and $p\in M$, Theorem \ref{lins-cerveau} implies that $\mc{F}|_{U}$ admits a holomorphic first integral $g\in\mc{O}(U)$, that is, $\omega\wedge dg=0$ on $U$. \par Applying Lemma \ref{baum-bott_signo}, we get $\text{BB}(\mc{F},p)\leq 0$ for any $p\in\textsf{Sing}(\mc{F})$. But Baum-Bott's formula implies that $$\sum_{p\in\textsf{Sing}(\mc{F})}\text{BB}(\mc{F},p)=c^{2}_{1}(N_{\mc{F}})>0,$$ a contradiction. Therefore, there exists a dicritical singularity $p$ of $\mc{F}$.
Applying again Theorem \ref{lins-cerveau}, we obtain a non-constant meromorphic first integral for $\mc{F}$ in a neighborhood of $p$. \section{Applications of Theorem \ref{main_theorem}}\label{application} \par First we apply Theorem \ref{main_theorem} to holomorphic foliations on $\mathbb{P}^2$ with only non-dicritical singularities. \begin{corollary} Let $\mc{F}$ be a holomorphic foliation on $\mathbb{P}^2$ with only non-dicritical singularities. Then there are no singular real-analytic Levi-flat hypersurfaces in $\mathbb{P}^{2}$ tangent to $\mc{F}$ that contain $\textsf{Sing}(\mathcal{F})$. \end{corollary} \begin{proof} Suppose by contradiction that $\mc{F}$ is tangent to a singular real-analytic Levi-flat hypersurface $M\subset\mathbb{P}^2$ such that $\textsf{Sing}(\mc{F})\subset M$. Since $N_{\mc{F}}=\mathcal{O}_{\mathbb{P}^2}(d+2)$, where $d\geq 0$ is the degree of $\mc{F}$, one has $c^{2}_{1}(N_{\mc{F}})=(d+2)^{2}>0$. Therefore $\mc{F}$ has a dicritical singularity by Theorem \ref{main_theorem}. This contradicts the assumption. \end{proof} \par The Jouanolou foliation $\mathcal{J}_d$ of degree $d$ on $\mathbb{P}^2$ is given in affine coordinates $(x,y)\in\mathbb{C}^2$ by $$\omega_d=(y^{d}-x^{d+1})dy-(1-x^{d}y)dx.$$ It is well known that $\mathcal{J}_d$ belongs to a class of holomorphic foliations on $\mathbb{P}^2$ without algebraic solutions; this means that $\mathcal{J}_d$ does not admit invariant algebraic curves, see for instance \cite{lins}. \par On the other hand, we know that $\mathcal{J}_d$ is a foliation with only non-dicritical singularities, because $\mathcal{J}_d$ has $d^{2}+d+1$ singularities and through each singularity pass only two analytic separatrices. Hence the above corollary shows that $\mathcal{J}_d$ is not tangent to any singular real-analytic Levi-flat hypersurface of $\mathbb{P}^2$. \par Now we apply Theorem \ref{main_theorem} to holomorphic foliations of degree 2 on $\mathbb{P}^2$ with only one singularity.
\begin{corollary} Let $\mc{F}$ be a holomorphic foliation of degree 2 on $\mathbb{P}^2$ with a unique singular point $p$. Suppose that $\mc{F}$ is tangent to a singular real-analytic Levi-flat hypersurface $M\subset\mathbb{P}^2$ and $p\in M$. Then, up to automorphism, $\mc{F}$ is given in affine coordinates $(x,y)\in\mathbb{C}^2$ by the 1-form $$\omega=x^2dx+y^2(xdy-ydx).$$ Moreover, let $[x:y:z]$ be the homogeneous coordinates of $\mathbb{P}^2$; then $R=\frac{y^3-3x^2z}{3x^3}$ is a rational first integral for $\mc{F}$. \end{corollary} \begin{proof} Let us assume that $\mc{F}$ has a unique singularity, say $p$. Applying Theorem \ref{main_theorem}, we obtain that $\mc{F}$ has a non-constant meromorphic first integral $f/g$ in a neighborhood of $p$. Now, since $\mc{F}$ has degree 2, one can apply the main theorem of \cite{deserti} which asserts that, up to automorphism, $\mc{F}$ is given in affine coordinates $(x,y)\in\mathbb{C}^2$ by one of the following types: \begin{enumerate} \item $\omega_1=x^2dx+y^2(xdy-ydx)$; \item $\omega_2=x^2dx+(x+y^2)(xdy-ydx)$; \item $\omega_3=xydx+(x^2+y^2)(xdy-ydx)$; \item $\omega_4=(x+y^2-x^2y)dy+x(x+y^2)dx$. \end{enumerate} The foliations given in (2) and (3) have first integrals, respectively, $$\left(2+\frac{1}{x}+2\left(\frac{y}{x}\right)+\left(\frac{y}{x}\right)^2\right)\exp\left(-\frac{y}{x}\right)\,\,\,\text{and}\,\,\,\,\left(\frac{y}{x}\right)\exp\left(\frac{1}{2}\left(\frac{y}{x}\right)^2-\frac{1}{x}\right),$$ and the foliation in (4) has no meromorphic first integral, see \cite{deserti}. Then $\mc{F}$ can only be the foliation induced by $$\omega_1=x^2dx+y^2(xdy-ydx).$$ It is easy to check that $R(x,y,z)=\frac{y^3-3x^2z}{3x^3}$ is a rational first integral for $\mc{F}$. \end{proof} \par An \textit{algebraizable singularity} is a germ of a singular holomorphic foliation which can be defined in some appropriate local chart by a differential equation with algebraic coefficients.
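The claim that $R$ is a rational first integral of $\omega_1$ in the corollary above can be checked symbolically. The following sketch (our own verification code in sympy, not part of the original argument) restricts $R$ to the affine chart $z=1$ and confirms that the coefficient of $\omega_1\wedge dR$ vanishes identically:

```python
import sympy as sp

x, y = sp.symbols('x y')
# omega_1 = x^2 dx + y^2 (x dy - y dx) = (x^2 - y^3) dx + x y^2 dy
w_x, w_y = x**2 - y**3, x*y**2
# R = (y^3 - 3 x^2 z)/(3 x^3) restricted to the chart z = 1
R = (y**3 - 3*x**2) / (3*x**3)

# omega_1 ^ dR = (w_x * dR/dy - w_y * dR/dx) dx ^ dy; it must vanish
coeff = sp.simplify(w_x * sp.diff(R, y) - w_y * sp.diff(R, x))
print(coeff)  # 0
```

Since the coefficient is the zero rational function, the level curves of $R$ are invariant by $\omega_1$ away from the poles and zeros of $R$.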
The existence of countably many classes of saddle-node singularities which are not algebraizable is proved in \cite{genzmer}. In \cite{casale}, Casale studied \textit{simple dicritical singularities}: these are the singularities that become nonsingular after one blow-up and such that a unique leaf is tangent to the exceptional divisor with tangency order one. He showed that a simple dicritical singularity with a meromorphic first integral is algebraizable (cf. \cite[Theorem 1]{casale}). \begin{theorem}[Casale]\label{Casale_theorem} If $\mathcal{F}$ is a simple dicritical foliation at $0\in\mathbb{C}^2$ with a meromorphic first integral then there exist an algebraic surface $S$, a rational function $H$ on $S$ and a point $p\in S$ such that $\mathcal{F}$ is biholomorphic to the foliation given by the level curves of $H$ in a neighborhood of $p$. \end{theorem} \par Similarly, our aim is to give a result on the algebraization of singularities for real-analytic Levi-flat hypersurfaces in compact complex manifolds of complex dimension two. \begin{corollary} Let $\mc{F}$ be a holomorphic foliation on a compact complex manifold $X$ of complex dimension two tangent to an irreducible real-analytic Levi-flat hypersurface $M \subset X$ such that $\textsf{Sing}(\mc{F})\subset M$. Suppose that $c^2_1(N_{\mc{F}})>0$ and $\mc{F}$ has a unique simple dicritical singularity $p\in X$. Then there exists an algebraic surface $V$, a rational function $H$ on $V$ and a point $q\in V$ such that the germ of $M$ at $p$ is biholomorphic to a semialgebraic Levi-flat hypersurface $M'\subset V$ in a neighborhood of $q$. \end{corollary} \begin{proof} Since $p\in X$ is the unique singularity of $\mc{F}$, Theorem \ref{main_theorem} implies that $\mc{F}$ has a non-constant meromorphic first integral in a neighborhood of $p$.
It then follows from Theorem \ref{Casale_theorem} that there exist an algebraic surface $V$, a rational function $H$ on $V$ and a point $q\in V$ such that $\mc{F}$ is biholomorphic to the foliation given by the level curves of $H$ in a neighborhood of $q$. According to \cite[Lemma 5.2]{lebl}, there exists a real-algebraic Levi-flat subvariety $N\subset V$ (of real dimension three) such that $M$ is biholomorphic to a subset $M'\subset N$. Thus $M'$ is semialgebraic. \end{proof} \section{Dicritical singularities of Levi-flat hypersurfaces in the presence of compact leaves}\label{dicritical} We use Camacho-Sad's formula (Theorem \ref{CS}) and Theorem \ref{lins-cerveau} to prove the following result. \begin{theorem}\label{variotional} Let $\mc{F}$ be a holomorphic foliation on a compact complex manifold $X$ of complex dimension two tangent to an irreducible real-analytic Levi-flat hypersurface $M \subset X$. Suppose that $C\cdot C>0$ for some irreducible compact complex curve $C\subset M$ invariant by $\mc{F}$. Then there exists a dicritical singularity $p \in \textsf{Sing} (\mc{F})\cap C$ such that $\mc {F}$ has a non-constant meromorphic first integral at $p$. \end{theorem} \begin{proof} Suppose by contradiction that all the singularities of $\mc{F}$ over $C$ are non-dicritical. Take any point $q\in\textsf{Sing} (\mc{F})\cap C$ and let $U$ be a neighborhood of $q$ in $X$ such that $C\cap U=\{f=0\}$, $\mc{F}$ is represented by a holomorphic 1-form $\omega$ on $U$ and $q$ is an isolated singularity of $\omega$. Since $\mc{F}$ and $M$ are tangent in $U$, Theorem \ref{lins-cerveau} implies that $\mc{F}|_{U}$ admits a holomorphic first integral $g\in\mc{O}(U)$, that is, $\omega\wedge dg=0$ on $U$. Then $dg=h\omega$, where $h\in\mc{O}(U)$. If $g=g^{\ell_1}_1\cdots g^{\ell_k}_{k}$, we get $f=g_i$ for some $i$ and $$\sum^{k}_{j=1}\ell_jg_1\cdots\widehat{g_j}\cdots g_k dg_j=h_1\omega,$$ where $h_1=\frac{h}{g^{\ell_1-1}_1\cdots g^{\ell_k-1}_k}$.
It follows that $$CS(\mc{F},C,q)=-\sum_{j\neq i}\frac{\ell_{j}}{\ell_i}[g_i,g_j]_q,$$ where $[g_{i},g_j]_{q}$ denotes the intersection number of the curves $\{g_{i}=0\}$ and $\{g_{j}=0\}$ at $q$. In particular, $$CS(\mc{F},C,q)\leq 0\,\,\,\,\,\text{for any}\,\,q\in\textsf{Sing} (\mc{F})\cap C. $$ It follows from Theorem \ref{CS} that $$\sum_{q\in\textsf{Sing}(\mc{F})\cap C}CS(\mc{F},C,q)=C\cdot C,$$ which contradicts $C\cdot C> 0$. Hence some singularity $p\in\textsf{Sing}(\mc{F})\cap C$ is dicritical. Since $p\in M$, we can apply again Theorem \ref{lins-cerveau} to find a meromorphic first integral for $\mc{F}$ in a neighborhood of $p$. \end{proof} \section{Examples}\label{examples_paper} First we give an example where the hypotheses of Theorem \ref{main_theorem} are satisfied. \begin{example} The canonical local example of a real-analytic Levi-flat hypersurface in $\mathbb{C}^2$ is given by $\text{Im} z_1=0$. This hypersurface extends to all of $\mathbb{P}^2$ as $$M=\{[Z_1:Z_2:Z_3]\in\mathbb{P}^2: Z_1\bar{Z}_3-\bar{Z}_1Z_3=0\}.$$ Moreover, this hypersurface is tangent to the holomorphic foliation $\mathcal{F}$ given by the levels of the rational function $Z_1/Z_3$. Note also that $\mathcal{F}$ has degree $0$ and therefore $c^2_1(N_\mathcal{F})=c^2_1(\mathcal{O}_{\mathbb{P}^2}(2))=4$. The foliation $\mathcal{F}$ has a dicritical singularity at $[0:1:0]\in M$. \end{example} We now give two examples showing that the hypotheses of Theorem \ref{main_theorem} cannot be dropped. \begin{example} Consider the Hopf surface $X=(\mathbb{C}^2-\{0\})/\Gamma_{a,b}$ induced by the infinite cyclic subgroup of $GL(2,\mathbb{C})$ generated by the transformation $(z_1,z_2)\mapsto(az_1,bz_2)$ with $|a|,|b|>1$. Levenberg-Yamaguchi \cite{Levenberg} proved that the domain $$D=\{(z_1,z_2)\cdot\Gamma_{a,b}|\,z_1\in\mathbb{C}, \text{Im}\,\,z_2>0\}$$ in $X$ with $b\in\mathbb{R}$ is Stein.
Furthermore, it is bounded by the real-analytic Levi-flat hypersurface $$M=\{(z_1,z_2)\cdot\Gamma_{a,b}|\,z_1\in\mathbb{C}, \text{Im}\,\,z_2=0\}.$$ It is clear that the levels of the holomorphic function $f(z_1,z_2)=z_2$ on $\mathbb{C}^2-\{0\}$ define a holomorphic foliation $\mc{F}$ on $X$ tangent to $M$. Since any line bundle on $X$ is flat \cite{mall}, we have $c^2_1(N_\mc{F})=0$, and $M$ has no singularities in $X$. \end{example} \begin{example} Let $X=\mathbb{P}^1\times\mathbb{P}^1$ and let $\mc{F}$ be the foliation given by the vertical fibration on $X$. Now let $\pi:X\to\mathbb{P}^1$ be the projection on the first coordinate and let $M=\pi^{-1}(\gamma)$, where $\gamma$ is a real-analytic embedded loop in $\mathbb{P}^1$. Take a fiber $F\subset M$. Then $F$ is isomorphic to $\mathbb{P}^1$ with $F^2=0$. Clearly $M$ is a Levi-flat hypersurface in $X$ tangent to $\mc{F}$ and it has no dicritical singularities in $F$. \end{example} \vskip 0.2 in \noindent{\it\bf Acknowledgments.--} The author wishes to express his thanks to Alcides Lins Neto (IMPA) for several helpful comments during the preparation of the paper. Also, we would like to thank the referee for suggestions and corrections. \end{document}
\begin{document} \title{Graph Algorithms for Topology Identification\ using Power Grid Probing} \thispagestyle{empty} \begin{abstract} To perform any meaningful optimization task, power distribution operators need to know the topology and line impedances of their electric networks. Nevertheless, distribution grids currently lack a comprehensive metering infrastructure. Although smart inverters are widely used for control purposes, they have been recently advocated as the means for an active data acquisition paradigm: Reading the voltage deviations induced by intentionally perturbing inverter injections, the system operator can potentially recover the electric grid topology. Adopting inverter probing for feeder processing, a suite of graph-based topology identification algorithms is developed here. If the grid is probed at all leaf nodes but voltage data are metered at all nodes, the entire feeder topology can be successfully recovered. When voltage data are collected only at probing buses, the operator can find a reduced feeder featuring key properties and similarities to the actual feeder. To handle modeling inaccuracies and load non-stationarity, noisy probing data need to be preprocessed. If the suggested guidelines on the magnitude and duration of probing are followed, the recoverability guarantees carry over from the noiseless to the noisy setup with high probability. \end{abstract} \begin{IEEEkeywords} Energy systems; identification; smart grid. \end{IEEEkeywords} \allowdisplaybreaks \section{Introduction}\label{sec:intro} \IEEEPARstart{P}{ower} distribution grids will be heavily affected by the penetration of distributed energy resources. To comply with network constraints, system operators need to know the topologies of their electric networks. Often utilities have limited information on their primary or secondary networks. Even if they know the line infrastructure and impedances, they may not know which lines are energized. 
This explains the recent interest in feeder topology processing. Several works capitalize on the properties of grid data covariance matrices to reconstruct feeder topologies; see e.g., \cite{Deka1}, \cite{BoSch13}. Graphical models have been used to fit a spanning tree relying on the mutual information of voltage data~\cite{WengLiaoRajagopal17}. Tree recovery methods operating in a bottom-up fashion have been devised in \cite{ParkDekaChertkov}; yet they presume non-metered buses have degree larger than two, fail in buses with constant power factor, and lack practical guidelines for handling noisy setups. All the previous approaches build on second-order statistics of grid data. However, since sample statistics converge to their ensemble counterparts only asymptotically, a large number of grid data is typically needed to attain reasonable performance, thus rendering topology estimates obsolete. When the line infrastructure is known, the problem of finding the energized lines has been cast as a maximum likelihood detection problem in~\cite{CaKe2017}, \cite{Sharon12}. Given power readings at all terminal nodes and selected lines, topology identification has also been posed as a spanning tree recovery exploiting the concept of graph cycles~\cite{sevlian2015distribution}. Line impedances are estimated via a total least-squares fit in~\cite{patopa}. Presuming phasor measurements and sufficient input excitation, a Kron-reduced admittance matrix is recovered via a low rank-plus-sparse decomposition~\cite{Ardakanian17}. Rather than passively collecting data, an active data acquisition paradigm has been suggested in \cite{BheKeVe2017}: Inverters are commanded to instantaneously vary their power injections so that the operator can infer non-metered loads by processing the incurred voltage profiles. Perturbing the primary controls of inverters to identify topologies in DC microgrids has also been suggested in~\cite{Scaglione2017}.
Line impedances have been estimated by having inverters inject harmonics in \cite{Ciobotaru07}. Instead of load learning, grid probing has been adopted towards topology inference in~\cite{cake2018inverter}, which analyzes topology recoverability via grid probing and estimates the grid Laplacian via a convex relaxation followed by a heuristic to enforce radiality. The current work extends \cite{cake2018inverter} on three fronts. First, it provides a graph algorithm for recovering feeder topologies using the voltage deviations induced at all nodes upon probing a subset of them (Section~\ref{sec:id}). Second, topology recoverability is studied under partially observed voltage deviations, an algorithm is devised, and links between the revealed grid and the actual grid are established (Section~\ref{sec:partial}). Third, noisy data setups are handled by properly modifying the previous schemes and by providing probing guidelines to ensure recoverability with high probability (Section~\ref{sec:noisy}). \section{Modeling Preliminaries}\label{sec:model} Let $\mathcal{G}=(\mcN,\mcL)$ be an undirected tree graph, where $\mcN$ is the set of nodes and $\mcL:=\{(m,n): m,n \in \mcN\}$ the set of edges. A tree is termed rooted if one of its nodes is designated as the root. This root node will be henceforth indexed by 0. In a tree graph, a \emph{path} is the unique sequence of edges connecting two nodes. The nodes adjacent to the edges forming the path between node $m$ and the root are the \emph{ancestors} of node $m$ and form the set $\mcA_m$. Reversely, if $n\in\mcA_m$, then $m$ is a \emph{descendant} of node $n$. The descendants of node $m$ comprise the set $\mcD_m$. By convention, $m \in \mcA_m$ and $m\in \mcD_m$. If $n \in \mcA_m$ and $(m,n)\in \mcL$, node $n$ is the \emph{parent} of $m$. A node with no descendants other than itself is called a \emph{leaf} or \emph{terminal} node.
Leaf nodes are collected in the set $\mcF$, while non-leaf nodes will be termed \emph{internal} nodes. For each node $m$, define its \emph{depth} $d_m:= |\mcA_m|$ as the number of its ancestors. The depth of the entire tree is $d_{\mcG}:=\max_{m\in\mcN} d_m$. If $n\in\mcA_m$ and $d_n=k$, node $n$ is the unique $k$-depth ancestor of node $m$ and will be denoted by $\alpha_{m}^k$ for $k=0,\ldots,d_m$. Let also $\mcT_m^k$ denote the subset of the nodes belonging to the subtree of $\mcG$ rooted at the $k$-depth node $m$ and containing all the descendants of $m$. Our analysis will be built on the concept of the level sets of a node. The \emph{$k$-th level set} of node $m$ is defined as~\cite{cake2018inverter} \begin{equation}\label{eq:levelset} \mcN_m^k := \left\{\begin{array}{ll} \mcD_{\alpha^k_m} \setminus \mcD_{\alpha^{k+1}_m}&,~ k=0,\ldots,d_m-1\\ \mcD_m&,~ k=d_m \end{array}\right.. \end{equation} In essence, the level set $\mcN_m^k$ consists of node $\alpha^k_m$ and all the subtrees rooted at $\alpha^k_m$ excluding the one containing node $m$. Since by definition $\mcN^k_m\subseteq \mcD_{\alpha^k_m}$, the level sets satisfy the ensuing properties that will be needed later. \begin{lemma}[\cite{cake2018inverter}]\label{le:Nmk} Let $m$ be a node in a tree graph. \renewcommand{(\roman{enumi})}{(\roman{enumi})} \begin{enumerate} \item The node $\alpha^{k}_m$ is the only node in $\mcN_m^k$ at depth $k$; the remaining nodes in $\mcN_m^k$ are at larger depths; \item if $n,s\in\mcN_m^k$, then $\alpha^{k}_n=\alpha^{k}_s=\alpha^{k}_m\in\mcN_m^k$; \item if $m\in \mcF$, then $\mcN_m^{d_m}=\{m\}$; \item if $s\in \mcD_n$ and $n\in\mcN$, then $\mcN^{k}_n=\mcN^{k}_s$ for $k < d_n$; and \item if $d_m=k$, then $m \in \mcN_m^k$ and $m \notin \mcN_m^\ell$ for $\ell < k$. \end{enumerate} \end{lemma} A radial single-phase distribution grid having $N+1$ buses can be modeled by a tree graph $\mcG=(\mcN,\mcL)$ rooted at the substation.
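To make the level-set definition in \eqref{eq:levelset} concrete, the sets $\mcN_m^k$ can be computed from parent pointers with two set operations. The 6-node tree and all function names below are our own illustrative assumptions, a sketch rather than code from the paper:

```python
# Hypothetical rooted tree: node 0 is the root, parent[n] is the parent of n.
parent = {1: 0, 2: 1, 3: 1, 4: 2, 5: 2}
nodes = [0, 1, 2, 3, 4, 5]

def ancestors(n):
    """A_n: node n together with all nodes on its path to the root."""
    path = {n}
    while n in parent:
        n = parent[n]
        path.add(n)
    return path

def descendants(n):
    """D_n: node n together with all nodes having n as an ancestor."""
    return {s for s in nodes if n in ancestors(s)}

def level_set(m, k):
    """The k-th level set N_m^k of eq. (1), for 0 <= k <= d_m."""
    anc = sorted(ancestors(m), key=lambda a: len(ancestors(a)))  # by depth
    if k == len(anc) - 1:          # k equals the depth d_m of node m
        return descendants(m)
    return descendants(anc[k]) - descendants(anc[k + 1])

# Node 4 has ancestors {0, 1, 2, 4} and depth d_4 = 3:
print([level_set(4, k) for k in range(4)])  # -> [{0}, {1, 3}, {2, 5}, {4}]
```

The printed sets match Lemma~\ref{le:Nmk}: for instance $\alpha^{2}_{4}=2$ is the only depth-2 node in $\mcN_4^2=\{2,5\}$, and $\mcN_4^{d_4}=\{4\}$ since node 4 is a leaf.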
The nodes in $\mcN:=\{0,\ldots,N\}$ denote grid buses, and the edges in $\mcL$ lines. Define $v_n$ as the deviation of the voltage magnitude at node $n$ from the substation voltage, and $p_n+jq_n$ as the power injected through node $n$. The voltage deviations and power injections at all buses in $\mcN\setminus \{0\}$ are stacked in $\bv$, $\bp$, and $\bq$. Let $r_\ell+j x_\ell$ be the impedance of line $\ell$, and collect all impedances in $\br+j\bx$. The so-termed linearized distribution flow (LDF) model approximates nodal voltage magnitudes as~\cite{BW3,Deka1} \begin{equation}\label{eq:model} \bv = \bR\bp + \bX\bq \end{equation} where $(\bR,\bX)$ are the inverses of weighted reduced Laplacian matrices of the grid~\cite{CaKe2017}. Let $\br_m$ be the $m$-th column of $\bR$, and $R_{m,n}$ its $(m,n)$-th entry that equals~\cite{Deka1} \begin{equation}\label{eq:entriesR1} R_{m,n} = \sum_{\substack{\ell = (c,d) \in \mcL \\ c,d\in \mcA_m \cap \mcA_n}} r_{\ell}. \end{equation} The entry $R_{m,n}$ can be equivalently interpreted as the voltage drop between the substation and bus $m$ when a unitary active power is injected at bus $n$ and the remaining buses are unloaded. Leveraging this interpretation, the entries of $\bR$ relate to the level sets in $\mcG$ as follows. \begin{lemma}[\cite{cake2018inverter}]\label{le:entriesR2} Let $m$, $n$, $s$ be nodes in a radial grid. \renewcommand{(\roman{enumi})}{(\roman{enumi})} \begin{enumerate} \item if $m\in\mcF$, then $R_{m,m} > R_{n,m}$ for all $n\neq m$; \item $n,s\in\mcN_m^k$ for some $k$ if and only if $R_{n,m} = R_{s,m}$; and \item if $n\in \mcN_m^{k-1}$, $s\in \mcN_m^{k}$, then $R_{s,m}=R_{n,m} + r_{\alpha^{k-1}_m,\alpha^{k}_m}$. \end{enumerate} \end{lemma} \section{Grid Probing using Smart Inverters}\label{sec:probing} Solar panels and energy storage units are interfaced to the grid via inverters featuring advanced communication, actuation, and sensing capabilities.
An inverter can be commanded to shed solar generation, or change its power factor within milliseconds. The distribution feeder as an electric circuit responds within a second and reaches a different steady-state voltage profile. Upon measuring the bus voltage differences incurred by probing, the feeder topology may be identified. Rather than processing smart meter data on a 15- or 60-min basis, probing actively senses voltages on a per-second basis, over which conventional loads are assumed to be invariant. The buses hosting controllable inverters are collected in $\mcP \subseteq \mcN$ with $P:=|\mcP|$. Consider the probing action at time $t$. Each bus $m\in\mcP$ perturbs its active injection by $\delta_m(t)$ for one second or so. All inverter perturbations $\{\delta_m(t)\}_{m\in\mcP}$ at time $t$ are stacked in the $P$-length vector $\bdelta(t)$. Based on the model in \eqref{eq:model}, perturbations in active power injections incur voltage differences \begin{equation}\label{eq:dv} \tbv(t):=\bv(t)-\bv(t-1) = \bR_\mcP \bdelta(t) \end{equation} where $\bR_\mcP\in \mathbb{R}^{N\times P}$ is the submatrix obtained by keeping only the columns of $\bR$ indexed by $\mcP$. The grid is perturbed over $T$ probing periods. Stacking the probing actions $\{\bdelta(t)\}_{t=1}^T$ and voltage differences $\{\tbv(t)\}_{t=1}^T$ respectively as columns of matrices $\bDelta$ and $\tbV$ yields \begin{equation}\label{eq:dV} \tbV = \bR_{\mcP} \bDelta. \end{equation} The data model in \eqref{eq:dv}--\eqref{eq:dV} presumes that injections at non-probing buses remain constant during probing and ignores modeling inaccuracies and measurement noise. The practical setup of noisy data is handled in Section~\ref{sec:noisy}. Knowing $\bDelta$ and measuring $\tbV$ in \eqref{eq:dV}, the goal is to recover the grid connectivity along with line resistances; line reactances can be found by reactive probing likewise.
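To make the probing model concrete, the following Python sketch builds $\bR$ for a small hypothetical six-bus radial feeder via \eqref{eq:entriesR1} (summing line resistances over the common portion of the paths from the substation to the two buses), simulates noiseless probing, and recovers $\bR_\mcP$ through the pseudo-inverse. The feeder layout and resistance values are illustrative assumptions, not data from the text.

```python
import numpy as np

# Hypothetical 7-bus radial feeder (bus 0 = substation); parent[i] is the
# bus upstream of bus i, and r[i] the resistance of the line (parent[i], i).
parent = {1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3}
r = {1: 0.10, 2: 0.05, 3: 0.08, 4: 0.04, 5: 0.07, 6: 0.03}
N = len(parent)  # non-substation buses 1..N

def path_to_root(m):
    """Edges (labeled by their child bus) on the path from bus m to bus 0."""
    edges = set()
    while m != 0:
        edges.add(m)
        m = parent[m]
    return edges

# R[m-1, n-1] = sum of resistances on the common part of the root-to-m and
# root-to-n paths, matching the voltage-drop interpretation of R_{m,n}.
R = np.array([[sum(r[e] for e in path_to_root(m) & path_to_root(n))
               for n in range(1, N + 1)] for m in range(1, N + 1)])

# Probe the leaf buses P = {4, 5, 6} with a full-row-rank Delta and recover
# R_P = V~ Delta^+ as in stage s1); the model here is noiseless.
P = [4, 5, 6]
R_P = R[:, [m - 1 for m in P]]
rng = np.random.default_rng(0)
Delta = rng.normal(size=(len(P), len(P)))   # T = P probing actions
V_tilde = R_P @ Delta
R_P_hat = V_tilde @ np.linalg.pinv(Delta)
assert np.allclose(R_P_hat, R_P)
```

With this layout, for instance, the diagonal entry for bus 4 equals the total resistance of the path $0\to1\to2\to4$.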
This task of topology identification can be split into three stages: \emph{s1)} finding $\bR_\mcP$ from \eqref{eq:dV}; \emph{s2)} recovering the level sets for all buses in $\mcP$; and \emph{s3)} finding topology and resistances. At stage \emph{s1)}, if the probing matrix $\bDelta\in\mathbb{R}^{P\times T}$ is full row-rank, then matrix $\bR_{\mcP}$ can be uniquely recovered as $\bR_{\mcP} = \tbV \bDelta^+$, where $\bDelta^+$ is the right pseudo-inverse of $\bDelta$. Under this setup, probing for $T=P$ times suffices to find $\bR_{\mcP}$. At stage \emph{s2)}, using Lemma~\ref{le:entriesR2} we can recover the level sets for each bus $m\in\mcP$ as follows: \begin{enumerate} \item Append a zero entry at the top of the vector $\br_m$. \item Group the entries of $\br_m$ to find the level sets of node $m$; see Lemma~\ref{le:entriesR2}-(ii). \item The number of unique values in the entries of $\br_m$ yields the depth $d_m$. \item Rank the unique values of $\br_m$ in increasing order to find the depth of each level set; see Lemma~\ref{le:entriesR2}-(iii). \end{enumerate} Given the level sets for all $m\in\mcP$, stage \emph{s3)} recovers the grid topology as detailed next. \section{Topology Recovery}\label{sec:id} By properly probing the nodes in $\mcP$, the matrix $\bR_\mcP$ can be found at stage \emph{s1)}. Then, the level sets for all buses in $\mcP$ can be recovered at stage \emph{s2)}. Nevertheless, knowing these level sets may not guarantee topology recoverability. Interestingly, probing a radial grid at all leaf nodes has been shown to be sufficient for topology identification~\cite[Th.~1]{cake2018inverter}. To comply with this requirement, the next setup will be henceforth assumed; see also \cite{ParkDekaChertkov}. \begin{assumption}\label{as:FinC} All leaf nodes are probed, that is $\mcF\subseteq\mcP$. \end{assumption} Although Assumption~\ref{as:FinC} ensures topology recoverability, it does not provide a solution for stage \emph{s3)}.
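The four steps of stage \emph{s2)} can be sketched in a few lines of Python; the column values below are hypothetical and assume exact (noiseless) entries.

```python
import numpy as np

def level_sets_from_column(r_m):
    """Recover the depth d_m and the level sets of a probing bus m from the
    m-th column of R, following steps 1)-4) of stage s2)."""
    col = np.concatenate(([0.0], np.asarray(r_m, dtype=float)))  # step 1)
    values = np.unique(col)          # unique entries, sorted increasingly
    d_m = len(values) - 1            # step 3): depth of bus m
    # steps 2) and 4): the k-th smallest value marks the level set N_m^k
    return d_m, {k: set(np.flatnonzero(np.isclose(col, v)).tolist())
                 for k, v in enumerate(values)}

# Hypothetical column for a bus (labeled 4) at depth 3 in a 6-bus feeder;
# index 0 of the augmented column is the substation.
r_4 = [0.10, 0.15, 0.10, 0.19, 0.15, 0.10]
d_4, sets_4 = level_sets_from_column(r_4)
assert d_4 == 3
assert sets_4[1] == {1, 3, 6} and sets_4[3] == {4}
```

The grouping relies on exact equality of entries (Lemma above); the noisy counterpart of this step is revisited later.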
We will next devise a \emph{recursive} graph algorithm for grid topology recovery. The input to the recursion is a depth $k$ and a maximal subset of probing nodes $\mcP_n^k$ having the \emph{same} $(k-1)$-depth and $k$-depth ancestors. The $(k-1)$-depth ancestor is known and is denoted by $\alpha_n^{k-1}$. The $k$-depth ancestor is known to exist, is assigned the symbol $n$, yet the value of $n$ is unknown for now. We are also given the level sets $\mcN_{m}^k$ for all $m\in\mcP_{n}^k$. The recursion proceeds in three steps. The \emph{first step} finds the $k$-depth ancestor $n$ by intersecting the sets $\mcN_{m}^k$ for all $m\in\mcP_{n}^k$. The existence and uniqueness of this intersection are asserted next as shown in the appendix. \begin{proposition}\label{prop:levelsets} Consider the subset $\mcP_n^k$ of probing nodes located on the subtree rooted at an unknown $k$-depth node $n\in\mcN$. The node $n$ can be found as the unique intersection \begin{equation}\label{eq:prop1} \{n\}= \bigcap_{m \in \mcP_n^k} \mcN^k_m. \end{equation} \end{proposition} At the \emph{second step}, node $n$ is connected to node $\alpha_n^{k-1}$. Since $n=\alpha_m^{k}\in\mcN_{m}^k$ and $\alpha_n^{k-1}=\alpha_m^{k-1}\in\mcN_m^{k-1}$, the resistance of line $(n,\alpha_n^{k-1})$ can be found as [Lemma~\ref{le:entriesR2}-(iii)] \begin{equation}\label{eq:resistance} r_{\alpha_n^{k-1},n} = r_{\alpha_m^{k-1},\alpha_m^{k}} = R_{\alpha_m^{k},m} - R_{\alpha_m^{k-1},m} \end{equation} for any $m\in\mcP_{n}^k$. The \emph{third step} partitions $\mcP_n^k\setminus \{n\}$ into subsets of buses sharing the same $(k+1)$-depth ancestor. This can be easily accomplished thanks to the next result. \begin{proposition}\label{pro:levset&subtree} For nodes $m$ and $m'$ in a tree graph, it holds that $\alpha_m^{k+1} = \alpha_{m'}^{k+1}$ if and only if $\mcN_m^{k} = \mcN_{m'}^{k}$. 
\end{proposition} Based on Prop.~\ref{pro:levset&subtree} (shown in the appendix), the set $\mcP_n^k\setminus \{n\}$ can be partitioned by grouping buses with identical $\mcN_m^k$'s. The buses forming one of these partitions $\mcP_s^{k+1}$ have the same $k$-depth and $(k+1)$-depth ancestors. Node $n$ was found to be the $k$-depth ancestor. The $(k+1)$-depth ancestor is known to exist and is assigned the symbol $s$. The value of $s$ is found by invoking the recursion with new inputs the depth $(k+1)$, the set of buses $\mcP_s^{k+1}$ along with their $(k+1)$-depth level sets, and their common $k$-depth ancestor (node $n$). \begin{algorithm} \caption{Topology Recovery with Complete Data} \begin{algorithmic}[1] \Require $\mcN$, $\{\mcN_m^{k}\}_{k=0}^{d_m}$ for all $m\in \mcP$. \State Run \texttt{Root\&Branch}$(\mcP,\emptyset,0)$. \Ensure Radial grid and line resistances over $\mcN$. \end{algorithmic} \textbf{Function} \texttt{Root\&Branch}$(\mcP_n^k,\alpha_n^{k-1},k)$ \begin{algorithmic}[1] \State Identify the node $n$ serving as the common $k$-depth ancestor for all buses in $\mcP_n^k$ via~\eqref{eq:prop1}. \If{$k > 0$,} \State Connect node $n$ to $\alpha_n^{k-1}$ with the resistance of \eqref{eq:resistance}. \EndIf \If{$\mcP_n^k\setminus\{n\}\neq \emptyset$,} \State Partition $\mcP_n^k\setminus\{n\}$ into groups of buses $\{\mcP_s^{k+1}\}$ having identical $k$-depth level sets. \State Run \texttt{Root\&Branch}$(\mcP_s^{k+1},n,k+1)$ for all $s$. \EndIf \end{algorithmic}\label{alg:full} \end{algorithm} To \emph{initialize} the recursion, set $\mcP_n^0=\mcP$ since every probing bus has the substation as $0$-depth ancestor. At $k=0$, the second step is skipped as the substation does not have any ancestors to connect. The recursion \emph{terminates} when $\mcP_n^k$ is a singleton $\{m\}$. In this case, the first step identifies $m$ as node $n$; the second step links $m$ to its known ancestor $\alpha_m^{k-1}$; and the third step has no partition to accomplish. 
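The three steps of the recursion can be sketched as follows (topology only; line resistances would follow from \eqref{eq:resistance}). The six-bus feeder and its level sets are illustrative assumptions.

```python
def root_and_branch(P_nk, parent, k, levelsets, edges):
    """Sketch of the Root&Branch recursion. levelsets[m][k] is the level set
    N_m^k of probing bus m; edges collects the recovered lines."""
    # step 1: the common k-depth ancestor is the unique intersection (6)
    common = set.intersection(*(levelsets[m][k] for m in P_nk))
    (n,) = common
    # step 2: connect n to its known (k-1)-depth ancestor
    if k > 0:
        edges.add((parent, n))
    # step 3: partition the remaining buses by identical k-depth level sets
    groups = {}
    for m in P_nk - {n}:
        groups.setdefault(frozenset(levelsets[m][k]), set()).add(m)
    for group in groups.values():
        root_and_branch(group, n, k + 1, levelsets, edges)

# Level sets of the leaf buses {4, 5, 6} of a hypothetical feeder with
# lines (0,1), (1,2), (1,3), (2,4), (2,5), (3,6).
levelsets = {
    4: [{0}, {1, 3, 6}, {2, 5}, {4}],
    5: [{0}, {1, 3, 6}, {2, 4}, {5}],
    6: [{0}, {1, 2, 4, 5}, {3}, {6}],
}
edges = set()
root_and_branch({4, 5, 6}, None, 0, levelsets, edges)
assert edges == {(0, 1), (1, 2), (1, 3), (2, 4), (2, 5), (3, 6)}
```

The recursion terminates exactly as described above: a singleton input identifies itself as the root and leaves nothing to partition.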
The recursion is tabulated as Alg.~\ref{alg:full}. \section{Topology Recovery with Partial Data}\label{sec:partial} Although the scheme described earlier probes the grid only through a subset of buses $\mcP$, voltage responses are collected across all buses. This may be unrealistic in distribution grids with limited real-time metering infrastructure, where the operator reads voltage data only at a subset of buses. To simplify the exposition, the next assumption will be adopted. \begin{assumption}\label{as:MetC} Voltage differences are metered only in $\mcP$. \end{assumption} Under this assumption, the probing model \eqref{eq:dV} becomes \begin{equation}\label{eq:dVidealred} \tbV = \bR_{\mcP \mcP} \bDelta \end{equation} where now $\tbV$ is of dimension $P \times T$ and $\bR_{\mcP \mcP}$ is obtained from $\bR$ upon maintaining only the rows and columns in $\mcP$. Similar to \eqref{eq:dV}, $\bR_{\mcP\mcP}$ is identifiable if $\bDelta$ is full row-rank. This is the equivalent of stage \emph{s1)} in Section~\ref{sec:probing} under the partial data setup. Towards the equivalent of stage \emph{s2)}, since column $\br_m$ is partially observed, only the \emph{metered level sets} of node $m\in\mcP$ defined as $\mcM_m^k := \mcN_m^k \cap \mcP$ can be recovered. The metered level sets for node $m$ can be obtained by grouping the indices associated with the same values in the observed subvector of $\br_m$. Although the grid topology cannot be fully recovered based on the $\mcM_m^k$'s, one can recover a \emph{reduced grid} relying on the concept of internal identifiable nodes; see Fig.~\ref{fig:redgrid}. \begin{definition}\label{de:I} The set $\mcI\subset\mcN$ of \emph{internal identifiable} nodes consists of all buses in $\mcG$ having at least two children, with each one of them being the ancestor of a probing bus.
\end{definition} The reduced grid induced by $\mcP$ can now be defined as the graph $\mcG^r := (\mcN^r, \mcL^r)$ with \begin{itemize} \item node set $\mcN^r:=\mcP \cup \mcI$; \item $\ell = (m,n) \in \mcL^r$ if $m,n\in\mcN^r$ and all other nodes on the path from $m$ to $n$ in $\mcG$ do not belong to $\mcN^r$; and \item the resistance of line $\ell = (m,n) \in \mcL^r$ equals the effective resistance between $m$ and $n$ in $\mcG$, that is $r_{mn}^\text{eff}:= (\be_m - \be_n)^\top \bR (\be_m - \be_n)$, where $\be_m$ is the $m$-th canonical vector~\cite{Dorfler13}. \end{itemize} In fact, for radial $\mcG$, the resistance $r_{mn}^\text{eff}$ equals the sum of resistances across the $m-n$ path in $\mcG$; see~\cite{Dorfler13}. Let $\bR^r$ be the inverse reduced Laplacian associated with $\mcG^r$. From the properties of effective resistances, it holds~\cite{Dorfler13} \begin{equation}\label{eq:sameR} \bR^r_{\mcP\mcP} = \bR_{\mcP\mcP}. \end{equation} In words, the grid $\mcG$ is not the only electric grid having $\bR_{\mcP\mcP}$ as the top-left block of its $\bR$ matrix. The reduced grid $\mcG^r$; the (meshed) Kron reduction of $\mcG$ given $\mcP$; and even grids having nodes additional to $\mcN$ can yield the same $\bR_{\mcP\mcP}$; see Fig.~\ref{fig:redgrid}. Lacking any more detailed information, the grid $\mcG^r$ features desirable properties: i) it connects the actuated and possibly additional identifiable nodes in a radial fashion; ii) it satisfies \eqref{eq:sameR} with the minimal number of nodes; and iii) its resistances correspond to the effective resistances of $\mcG$. Actually, this reduced grid conveys all the information needed to solve an optimal power flow task~\cite{Bolognani2013w}. \begin{figure} \caption{\emph{a)} the original IEEE 37-bus feeder; \emph{b)} its reduced equivalent; and \emph{c)} another feeder with the same $\bR_{\mcP\mcP}$. Red nodes are probed; black and blue are not. 
Blue nodes are internal identifiable nodes comprising $\mcI$.} \label{fig:redgrid} \end{figure} The next lemma shows that the number of metered level sets $\mcM_m^k$ coincides with the number of level sets $\mcN_m^k$ for all $m\in\mcP$, so the depths of probing buses can be reliably recovered even with partial data. \begin{lemma}\label{lem:metered=level} Let $\mcG^r = (\mcN^r, \mcL^r)$ be the reduced grid of a radial graph $\mcG$, and let Assumption~\ref{as:FinC} hold true. Then, $\mcN_m^k \cap \mcM_m^k \neq \emptyset$ for all $m \in \mcF$ and $k=1,\ldots,d_m$. \end{lemma} \begin{IEEEproof} Arguing by contradiction, suppose there exists $m\in\mcF$ such that $\mcN_m^k \cap \mcM_m^k = \emptyset$ for some $k\leq d_m$. Since by definition $\alpha_m^k\in\mcN_m^k$, the hypothesis $\mcN_m^k \cap \mcM_m^k = \emptyset$ implies that $\alpha_m^k\notin\mcM_m^k$. Therefore, $\alpha_m^k\notin\mcP$ and the degree of $\alpha_m^k$ is $g_{\alpha_m^k}\geq 3$. The latter implies that $\alpha_m^k$ has at least one child $w\notin\mcA_m$. Let $s\in\mcD_w$ be a leaf node. Observe that $s$ belongs to both $\mcN^k_m$ and $\mcM^k_m$, contradicting the hypothesis. \end{IEEEproof} The next result proved in the appendix guarantees that the topology of $\mcG^r$ is identifiable under Assumption~\ref{as:FinC}. \begin{proposition}\label{pro:uniqueness_red} Given a tree $\mcG = (\mcN, \mcL)$ with leaf nodes $\mcF \subseteq \mcN$ and under Assumption~\ref{as:FinC}, its reduced graph $\mcG^r = (\mcN^r, \mcL^r)$ is uniquely characterized by $\{\mcM_m^k \}_{k=0}^{d_m}$ for all $m\in \mcP $, up to different labellings for non-probing nodes. \end{proposition} Moving forward to the equivalent of stage \emph{s3)} in Section~\ref{sec:probing}, a three-step recursion operating on metered rather than ordinary level sets is devised next.
Suppose we are given the set of probing nodes $\mcP^k_n$ having the same $(k-1)$-depth and $k$-depth ancestors (known and unknown, respectively), along with their $k$-depth metered level sets. At the \emph{first step}, if there exists a node $m\in \mcP_n^k$ such that $\mcM_m^k = \mcP^k_n$, then the $k$-depth ancestor $n$ is set as $m$. Otherwise, a non-probing node is added and assigned to be the $k$-depth ancestor. This is justified by the next result. \begin{proposition} The root $n$ of subtree $\mcT_n^k$ is a probing node if and only if $\mcM_n^k = \mcP_n^k$. \end{proposition} \begin{IEEEproof} Proving by contradiction, suppose there exists a node $m \in \mcT_n^k$ with $\mcM_m^k = \mcT_n^k \cap \mcP=\mcP_n^k$ and $m\neq n$. Since $m$ is not the root of $\mcT_n^k$, it holds that $d_m > k$, $m \notin \mcM_n^k$, and so $m \notin \mcP_n^k$. If $n$ is a probing node and the root of $\mcT_n^k$, then $d_n=k$ and so $\mcN_n^k=\mcD_n$. Because of this, it follows that $\mcM_n^k = \mcN_n^k \cap \mcP = \mcD_n \cap \mcP = \mcT_n^k \cap \mcP = \mcP_n^k$. \end{IEEEproof} At the \emph{second step}, node $n=\alpha_m^k$ is connected to node $\alpha_n^{k-1}=\alpha_m^{k-1}$. The line resistance can be found through a modified version of \eqref{eq:resistance}. Given any bus $m \in \mcP^k_n$, Lemma~\ref{lem:metered=level} ensures that there exist at least two probing buses $s \in \mcN_m^{k-1}$ and $s' \in \mcN_m^{k}$. Moreover, Lemma~\ref{le:entriesR2}-(ii) guarantees that $R_{\alpha_m^{k-1},m} = R_{s,m}$ and $R_{\alpha_m^{k},m} = R_{s',m}$. Since nodes $m$, $s$, and $s'$ are metered, both $R_{s,m}$ and $R_{s',m}$ can be retrieved from $\bR_{\mcP \mcP}$. Thus, the sought resistance can be found as \begin{equation}\label{eq:resistance_red} r_{\alpha_m^{k-1},\alpha_m^k} = R_{\alpha_m^{k},m} - R_{\alpha_m^{k-1},m} =R_{s',m} - R_{s,m}. \end{equation} At the \emph{third step}, the set $\mcP_n^k\setminus \{n\}$ is partitioned into subsets of buses having the same $(k+1)$-depth ancestor. 
This can be accomplished by comparing their $k$-depth metered level sets, as asserted by the next result. \begin{proposition}\label{pro:levset&subtree_red} Let $m,m' \in \mcP_n^k$. It holds that $\alpha_m^{k+1} = \alpha_{m'}^{k+1}$ if and only if $\mcM_m^k = \mcM_{m'}^k$. \end{proposition} \begin{IEEEproof} If $\alpha_m^{k+1} = \alpha_{m'}^{k+1}$, then Proposition~\ref{pro:levset&subtree} ensures that $\mcN_m^k = \mcN_{m'}^k$ and so $\mcM_m^k = \mcM_{m'}^k$. We will show that if $\alpha_m^{k+1} \neq \alpha_{m'}^{k+1}$, then $\mcM_m^k \neq \mcM_{m'}^k$. Since $m,m' \in \mcP_n^k$, it holds that $n = \alpha_m^k = \alpha_{m'}^k$ and $$\mcM_m^{k} = (\mcD_n \backslash \mcD_{\alpha_m^{k+1}})\cap \mcP,\quad \mcM_{m'}^{k} = (\mcD_n \backslash \mcD_{\alpha_{m'}^{k+1}})\cap \mcP.$$ Because $\mcD_{\alpha_m^{k+1}} \neq \mcD_{\alpha_{m'}^{k+1}}$, $\mcD_{\alpha_m^{k+1}} \cap \mcP \neq \emptyset$, and $\mcD_{\alpha_{m'}^{k+1}} \cap \mcP \neq \emptyset$, it follows that $\mcM_m^{k} \neq \mcM_{m'}^{k}$. \end{IEEEproof} The recursion is tabulated as Alg.~\ref{alg:partial}. It is initialized at $k=1$, since the substation is not probed and $\mcM_m^0$ does not exist; and is terminated as in Section~\ref{sec:id}. \begin{algorithm} \caption{Topology Recovery with Partial Data} \begin{algorithmic}[1] \Require $\mcP$, $\{\mcM_m^{k}\}_{k=1}^{d_m}$ for all $m\in \mcP$. \State Run \texttt{Root\&Branch-P}$(\mcP,\emptyset,1)$. \Ensure Reduced grid $\mcG^r$ and resistances over $\mcL^r$. \end{algorithmic} \textbf{Function} \texttt{Root\&Branch-P}$(\mcP_n^k,\alpha_n^{k-1},k)$ \begin{algorithmic}[1] \If{ $\exists$ node $n$ such that $\mcM_n^k = \mcP_n^k$,} \State Set $n$ as the root of subtree $\mcT_n^k$. \Else \State Add node $n \in \mcI$ and set it as the root of $\mcT_n^k$. \EndIf \If{$k>1$,} \State Connect $n$ to $\alpha_n^{k-1}$ via a line with resistance \eqref{eq:resistance_red}.
\EndIf \If{$\mcP_n^k\setminus\{n\}\neq \emptyset$,} \State Partition $\mcP_n^k\setminus\{n\}$ into groups of buses $\{\mcP_s^{k+1}\}$ having identical $k$-depth metered level sets. \State Run \texttt{Root\&Branch-P}$(\mcP_s^{k+1},n,k+1)$ for all $s$. \EndIf \end{algorithmic}\label{alg:partial} \end{algorithm} \section{Topology Recovery with Noisy Data}\label{sec:noisy} So far, matrices $\bR_\mcP$ and $\bR_{\mcP\mcP}$ have been obtained using the noiseless model of \eqref{eq:dv}. Under a more realistic setup, inverter and voltage perturbations are related as \begin{equation}\label{eq:dv-noisy} \tbv(t) = \bR_\mcP \bdelta(t) + \bn(t) \end{equation} where $\bn(t)$ captures possible deviations due to non-probing buses, measurement noise, and modeling errors. Stacking $\{\bn(t)\}_{t=1}^T$ as columns of matrix $\bN$, model \eqref{eq:dV} translates to \begin{equation}\label{eq:dV-noisy} \tbV = \bR_{\mcP} \bDelta + \bN. \end{equation} Under this setup, a least-squares estimate can be found as \begin{equation}\label{eq:LSE2} \hat\bR_{\mcP} := \arg \min_{\bTheta}\;\|\tbV-\bTheta \bDelta\|_F^2=\tbV\bDelta^+. \end{equation} To facilitate its statistical characterization and implementation, a simplified probing protocol is advocated: \begin{itemize} \item[\emph{p1})] Every probing bus $m\in\mcP$ perturbs its injection by an identical amount $\delta_m$ over $T_m$ consecutive periods. \item[\emph{p2})] During these $T_m$ probing periods, the remaining probing buses do not perturb their injections. \end{itemize} Under this protocol, the probing matrix takes the form \begin{equation}\label{eq:Delta} \bDelta = \begin{bmatrix} \delta_1 \be_1 \mathbf{1}_{T_1}^\top & \delta_2 \be_2 \mathbf{1}_{T_2}^\top & \cdots & \delta_P \be_P \mathbf{1}_{T_P}^\top \end{bmatrix}. \end{equation} If at time $t$ node $m$ is probed, the collected $\tbv(t)$ is simply \begin{equation}\label{eq:Delta2} \tbv(t)=\delta_m\br_m+\bn(t).
\end{equation} Under \eqref{eq:Delta}--\eqref{eq:Delta2}, it is not hard to see that the minimization in \eqref{eq:LSE2} decouples over the columns of $\bR_{\mcP}$. In fact, the $m$-th column of $\bR_{\mcP}$ can be found as the scaled sample mean of voltage differences collected only over the times $\mcT_m:=\left\{\sum_{\tau=1}^{m-1}T_\tau+1,\ldots,\sum_{\tau=1}^{m}T_\tau\right\}$ during which node $m$ was probed \begin{equation}\label{eq:sample-mean} \hbr_m = \frac{1}{\delta_m T_m} \sum_{t\in\mcT_m} \tbv(t). \end{equation} To statistically characterize $\hbr_m$, we will next postulate a model for the error term $\bn(t)$ in \eqref{eq:Delta2} as \begin{equation}\label{eq:noise} \bn(t) := \bR \tbp(t) + \bX \tbq(t) + \bw(t) \end{equation} where $\tbp(t)+j \tbq(t)$ are the injection deviations from non-actuated buses, and $\bw(t)$ captures approximation errors and measurement noise. If $\{\tbp(t),\tbq(t),\bw(t)\}$ are independent zero-mean with respective covariance matrices $\sigma_p^2 \bI$, $\sigma_q^2 \bI$, and $\sigma_w^2 \bI$, then the random vector $\bn(t)$ is zero-mean with covariance matrix $\bPhi := \sigma_p^2 \bR^2 + \sigma_q^2 \bX^2 + \sigma_w^2 \bI$. Invoking the central limit theorem, the estimation error $\hbr_m - \br_m$ can be approximated as zero-mean Gaussian with covariance matrix $\frac{1}{\delta_m^2 T_m}\bPhi$. By increasing $T_m$ and/or $\delta_m$, the estimate $\hbr_m$ can get arbitrarily close to the actual $\br_m$, and this distance can be bounded probabilistically using $\bPhi$. Note, however, that $\bPhi$ depends on the unknown $(\bR,\bX)$. To resolve this issue, we resort to an upper bound on $\bPhi$ based on minimal prior information: Suppose the spectral radii $\rho(\bR)$ and $\rho(\bX)$, and the variances $(\sigma_p^2,\sigma_q^2,\sigma_w^2)$ are known; see \cite{Bolognani2013w} for upper bounds. Then, it is not hard to verify that $\rho(\bPhi)\leq \sigma^2$, where $\sigma^2 := \sigma_p^2\rho^2(\bR)+ \sigma_q^2\rho^2(\bX) + \sigma_w^2$.
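The protocol and the estimator \eqref{eq:sample-mean} are easy to simulate. The Python sketch below probes a hypothetical column $\br_m$ over $T_m$ periods with i.i.d.\ Gaussian noise, forms the scaled sample mean, and then groups the substation-augmented estimates by splitting at sorted gaps exceeding half the smallest resistance, as discussed in the sequel; all numerical values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical column r_m (pu) and probing parameters (assumed values).
r_m = np.array([0.10, 0.15, 0.10, 0.19, 0.15, 0.10])
delta_m, T_m, sigma, r_min = 0.5, 2500, 0.02, 0.04

# T_m noisy probing responses v~(t) = delta_m * r_m + n(t), n(t) ~ N(0, s^2 I).
V = delta_m * r_m[:, None] + sigma * rng.normal(size=(len(r_m), T_m))

# Scaled sample mean; its error is approximately zero-mean Gaussian with
# per-entry standard deviation sigma / (delta_m * sqrt(T_m)).
r_m_hat = V.sum(axis=1) / (delta_m * T_m)

# Group entries whose sorted gaps exceed r_min / 2 to recover level sets.
col = np.concatenate(([0.0], r_m_hat))       # prepend the substation entry
order = np.argsort(col)
breaks = np.flatnonzero(np.diff(col[order]) > r_min / 2)
groups = [set(g.tolist()) for g in np.split(order, breaks + 1)]
assert groups == [{0}, {1, 3, 6}, {2, 5}, {4}]
```

Here $\delta_m\sqrt{T_m}=25$, so the per-entry error standard deviation $\sigma/(\delta_m\sqrt{T_m})=8\cdot10^{-4}$ is far below the $r_{\min}/2$ splitting threshold, and the four groups match the noiseless level sets.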
The standard Gaussian concentration inequality bounds the deviation of the $n$-th entry of $\hbr_m$ from its actual value as \begin{equation}\label{eq:prob} \Pr \left(|\hat{R}_{n,m} - R_{n,m}| \geq \frac{4 \sigma}{\delta_m \sqrt{T_m}}\right) \leq \pi_0:=6\cdot 10^{-5}. \end{equation} Let us now return to stage \emph{s2)} of recovering level sets from the columns of $\bR_\mcP$. In the noiseless case, level sets were identified as the indices of $\br_m$ related to equal values. Almost surely though, there will not be any equal entries in the noisy $\hbr_m$. Instead, the entries of $\hbr_m$ will be concentrated around the actual values. To identify groups of similar values, first sort the entries of $\hbr_m$ in increasing order, and then take the differences of successive sorted entries. A key fact stemming from Lemma~\ref{le:entriesR2}-(iii) guarantees that the minimum difference between the entries of $\br_m$ is larger than or equal to the smallest line resistance $r_{\min}$. Hence, if all estimates were confined within $|\hat{R}_{n,m} - {R}_{n,m}| \leq r_{\min}/4$, a difference of sorted $\hat{R}_{n,m}$'s larger than $r_{\min}/2$ would safely pinpoint the boundary between two node groups. In practice, if the operator knows $r_{\min}$ \emph{a priori} and selects \begin{equation}\label{eq:Tm} \delta_m\sqrt{T_m} \geq 16 \sigma/r_{\min} \end{equation} the requirement $|\hat{R}_{n,m} - {R}_{n,m}| \leq r_{\min}/4$ will be satisfied with probability higher than $99.95\%$. In that case, taking the union bound, the probability of correctly recovering all level sets is larger than $1-N^2\pi_0$. The argument carries over to $\bR_{\mcP\mcP}$ under the partial data setup. \section{Numerical Tests}\label{sec:tests} Our algorithms were validated on the IEEE 37-bus feeder converted to its single-phase equivalent~\cite{CaKe2017}.
Figures~\ref{fig:redgrid}a--\ref{fig:redgrid}b show the actual and reduced topologies that can be recovered under a noiseless setup if all leaf nodes are probed. Setups with complete and partial noisy data were tested. Probing was performed on a per-second basis following the protocol \emph{p1)}--\emph{p2)} of Sec.~\ref{sec:noisy}. Probing buses were equipped with inverters having the same rating as the related load. Loads were generated by adding a zero-mean Gaussian variation to the benchmark data, with standard deviation 0.067 times the average of nominal loads. Voltages were obtained via a power flow solver, and then corrupted by zero-mean Gaussian noise with $3\sigma$ deviation of 0.01\% per unit (pu). Although typical voltage sensors exhibit accuracies in the range of 0.1--0.5\%, here we adopt the high-accuracy specifications of the micro-phasor measurement unit of \cite{mPMU}. \begin{table} \renewcommand{\arraystretch}{1.1} \caption{Numerical Tests under Full and Partial Noisy Data} \label{tbl:errors} \centering \begin{tabular}{|l|l|r|r|r|r|r|} \hline\hline & $T_m $ & 1 & $10$ &$20$ & $40$ & $90$ \\ \hline\hline Alg. 1 & Error Prob. [\%] & 98.5 & 55.3 & 20.9 & 3.1 & 0.2\\ \hline & MPE [\%] & 35.1 & 32.5 & 31.2 & 30.9 & 28.5\\ \hline\hline & $T_m$ & $1$ & $5$ & $10$ & $20$ & $39$\\ \hline\hline Alg. 2 & Error Prob. [\%] & 97.2 & 45.8 & 26.3 & 18.9 & 0.1\\ \hline & MPE [\%] & 18.6 & 16.4 & 15.4 & 14.8 & 13.2\\ \hline \hline \end{tabular} \end{table} For the 37-bus feeder, $r_{\min} = 0.0014$~pu. From the rated $\delta_m$'s, the value of $r_{\min}$, and \eqref{eq:Tm}, the number of probing actions was set as $T_m = 90$. In the partial data case, the smallest \emph{effective resistance} was $0.0021$~pu, yielding $T_m = 39$. Level sets were obtained using the procedure described in Sec.~\ref{sec:noisy}, and given as inputs to Alg.~\ref{alg:full} and \ref{alg:partial}. The algorithms were tested through 10,000 Monte Carlo tests.
Table~\ref{tbl:errors} demonstrates that the probability of error in topology recovery and the mean percentage error (MPE) of line resistances in correctly detected topologies decay gracefully for increasing $T_m$. \section{Conclusions}\label{sec:conclusions} To conclude, this letter has put forth an active sensing paradigm for topology identification of inverter-enabled grids. If all leaf nodes are probed and voltage responses are metered at all nodes, the grid topology can be unveiled via a recursive algorithm. If voltage responses are metered only at probing buses, a reduced topology can be recovered instead. Guidelines for designing probing actions to cope with noisy data have been tested on a benchmark feeder. Generalizing to multi-phase and meshed grids; coupling (re)active probing strategies; incorporating prior line information; and exploiting voltage phasors are exciting research directions. \section*{Appendix} \emph{Proof of Proposition~\ref{prop:levelsets}.} We will first show that \begin{equation}\label{eq:lvlset1} \bigcap_{m \in \mcP_n^k} \mcN^k_m = \bigcap_{m \in \mcT^k_n \cap \mcF} \mcN^k_m. \end{equation} By definition $\mcP_n^k=\mcT^k_n \cap \mcP$. Consider a node $w \in \mcP_n^k$ with $w\notin\mcF$. Two cases can be identified. In case i), $w$ equals the root $n$ of subtree $\mcT_n^k$ and hence $\mcN_w^k=\mcD_n=\mcT_n^k$ by the second branch in \eqref{eq:levelset}. Note that $\mcN_s^k \cap \mcD_n = \mcN_s^k$ for all $s \in \mcT_n^k \cap \mcP$. In case ii), node $w$ is different from $n$ and thus Lemma~\ref{le:Nmk}-(iv) implies that $\mcN_w^k = \mcN_s^k$ for all $s \in \mcF \cap \mcD_w$. Either way, it holds that \begin{equation}\label{eq:lvlset2} \bigcap_{m \in \mcP^k_n} \mcN^k_m = \bigcap_{m \in \left(\mcT^k_n \cap \mcP\right) \setminus \{w\}} \mcN^k_m. \end{equation} By recursively applying~\eqref{eq:lvlset2} for each non-leaf probing bus $m$, the equivalence in \eqref{eq:lvlset1} follows.
From the definition of level sets in \eqref{eq:levelset}, $\mcN_m^k=\mcD_{\alpha^{k}_m}\setminus \mcD_{\alpha^{k+1}_m}$ but $\mcD_{\alpha^{k}_m}=\mcD_n$ since $n$ is the common $k$-depth ancestor for all $m\in \mcP_n^k$. The intersection in the RHS of \eqref{eq:lvlset1} becomes \begin{equation*} \bigcap_{m\in \mcT^k_n \cap \mcF} \left(\mcD_n \setminus \mcD_{\alpha^{k+1}_m} \right) = \mcD_n\setminus \bigcup_{m\in \mcT^k_n \cap \mcF}\mcD_{\alpha^{k+1}_m} =\{n\} \end{equation*} because $\bigcup_{m\in \mcT^k_n \cap \mcF }\mcD_{\alpha^{k+1}_m}=\mcD_n\setminus \{n\}$. \emph{Proof of Proposition~\ref{pro:levset&subtree}.} If $\alpha_m^{k+1} = \alpha_{m'}^{k+1}$, it follows that $\alpha_m^{k} = \alpha_{m'}^{k}$. Then $\mcD_{\alpha^{k+1}_m} = \mcD_{\alpha^{k+1}_{m'}}$ and $\mcD_{\alpha^{k}_m} = \mcD_{\alpha^{k}_{m'}}$. By the definition of the level sets in \eqref{eq:levelset}, it follows that $\mcN_m^{k} = \mcN_{m'}^{k}$. Conversely, assume that $\mcN_m^{k} = \mcN_{m'}^{k}$. Since $\alpha_m^k$ and $\alpha_{m'}^k$ are the only nodes at depth $k$ respectively in $\mcN_m^{k}$ and $\mcN_{m'}^{k}$ (see Lemma~\ref{le:Nmk}, claim (i)), it follows that $\alpha_m^k = \alpha_{m'}^k$. By the definition of the level sets in \eqref{eq:levelset}, it holds that $\mcN_{m}^{k}=\mcD_{\alpha_{m}^k}\setminus \mcD_{\alpha_m^{k+1}}$, while $\mcD_{\alpha_{m}^{k+1}}\subset \mcD_{\alpha_{m}^{k}}$. Similarly for node $m'$, it holds that $\mcN_{m'}^{k}=\mcD_{\alpha_{m'}^k}\setminus \mcD_{\alpha_{m'}^{k+1}}$ and $\mcD_{\alpha_{m'}^{k+1}}\subset \mcD_{\alpha_{m'}^{k}}$. Since $\mcN_m^{k}=\mcN_{m'}^{k}$ and $\mcD_{\alpha_m^k}=\mcD_{\alpha_{m'}^k}$, it follows that $\mcD_{\alpha_m^{k+1}}=\mcD_{\alpha_{m'}^{k+1}}$ and consequently $\alpha_m^{k+1} = \alpha_{m'}^{k+1}$. \emph{Proof of Proposition~\ref{pro:uniqueness_red}}. 
For the sake of contradiction, assume there exists another reduced grid $\hat \mcG^r = (\mcN^r,\hat \mcL^r)$ with $\hat \mcL^r\neq \mcL^r$ such that $\mcF(\mcG^r) = \mcF(\hat \mcG^r)=\mcF(\mcG)$ and $\{\mcM_w^k(\mcG^r) = \mcM_w^k(\hat \mcG^r)\}_{k=0}^{d_w}$ for all $w \in \mcP$. Note that Lemma~\ref{lem:metered=level} and the latter equality imply that $d_w(\mcG^r)=d_w(\hat{\mcG}^r)$ for all $w\in\mcP$. Since $\hat \mcG^r \neq \mcG^r$ up to different labelling for non-probing nodes, there exists a subtree $\mcT^{k}_n(\hat \mcG^r)$ with the properties:\\ \emph{p1)} It appears both in $\hat \mcG^r$ and $\mcG^r$.\\ \emph{p2)} Node $n$ has different parent nodes in $\hat \mcG^r$ and $\mcG^r$, that is $m=\alpha^{k-1}_n(\hat \mcG^r)\neq\alpha^{k-1}_n(\mcG^r)$.\\ \emph{p3)} At least one of $\alpha^{k-1}_n(\hat \mcG^r)$ and $\alpha^{k-1}_n(\mcG^r)$ belongs to $\mcP$. Such a $\mcT^{k}_n(\hat \mcG^r)$ exists and it may be the singleton $\mcT^{k}_n(\hat \mcG^r)=\{n\}$ for $n\in\mcF$. Assume without loss of generality $n\in\mcP$. Based on p3), two cases are identified for $m$. \emph{Case I}: $m \in \mcP$. We will next show that $d_m(\hat \mcG^r)\neq d_m(\mcG^r)$. From p2) and Lemma~\ref{le:Nmk}-(i), it follows $m\in \mcM_n^{k-1}(\hat \mcG^r)$. On the other hand, property p2) along with the hypothesis $\mcM_s^{k-1}(\hat \mcG^r) = \mcM_s^{k-1}(\mcG^r)$ imply that $d_m(\mcG^r)> k-1=d_m(\hat \mcG^r)$. Lemma~\ref{le:Nmk}-(v) ensures then that $m\in \mcN_m^{k-1}(\hat \mcG^r)$, but $m \notin \mcN_m^{k-1}(\mcG^r)$. \emph{Case II}: $m\notin \mcP$. Since non-probing buses have degree greater than two in reduced grids, there exists at least one probing node $s$, such that $s\in\mcD_m$, but $s\notin\mcT_n^{k}(\hat\mcG^r)$. Observe that $m,s \in \mcN_n^{k-1}(\hat\mcG^r)$ and $s \in \mcM_n^{k-1}(\hat\mcG^r)$. Let now $w$ be the parent of $n$ in $\mcG^r$. Due to p3), it holds that $w \in \mcP$.
Using Lemma~\ref{le:Nmk}-(i), node $w \in \mcM_n^{k-1}(\mcG^r)$ and so $w \in \mcM_n^{k-1}(\hat \mcG^r)$ with possibly $w =s$. Therefore, $d_w(\mcG^r) < d_w(\hat \mcG^r)$ and Lemma~\ref{le:Nmk}-(v) ensures $w \notin \mcN_w^{k-1}(\hat \mcG^r)$ and $w \in \mcN_w^{k-1}(\mcG^r)$. \end{document}
\begin{document} \begin{frontmatter} \title{Laurent biorthogonal polynomials, $q$-Narayana polynomials and domino tilings of the Aztec diamonds} \author{Shuhei Kamioka} \ead{[email protected]} \address{Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan} \begin{abstract} A T\"oplitz determinant whose entries are described by a $q$-analogue of the Narayana polynomials is evaluated by means of Laurent biorthogonal polynomials which allow of a combinatorial interpretation in terms of Schr\"oder paths. As an application, a new proof is given to the Aztec diamond theorem by Elkies, Kuperberg, Larsen and Propp concerning domino tilings of the Aztec diamonds. The proof is based on the correspondence with non-intersecting Schr\"oder paths developed by Eu and Fu. \end{abstract} \begin{keyword} orthogonal polynomials \sep Narayana polynomials \sep Aztec diamonds \sep lattice paths \sep Hankel determinants \end{keyword} \end{frontmatter} \section{Introduction} \label{sec:introduction} {\em Laurent biorthogonal polynomials (LBPs)} are orthogonal functions which play fundamental roles in the theory of two-point Pad\'e approximants at zero and infinity \cite{Jones-Thron(1982)}. In Pad\'e approximants, LBPs appear as the denominators of the convergents of a T-fraction. (See also, e.g., \cite[Chapter 7]{Jones-Thron(1980CF)} and \cite{Hendriksen-VanRossum(1986),Zhedanov(1998)}.) Recently, the author exhibited a combinatorial interpretation of LBPs in terms of lattice paths called {\em Schr\"oder paths} \cite{Kamioka(2007),Kamioka(2008)}. In this paper, we utilize LBPs to calculate a determinant whose entries are given by a $q$-analogue of the Narayana polynomials \cite{Bonin-Shapiro-Simion(1993)} which have a combinatorial expression in Schr\"oder paths. 
As an application, we give a new proof to the Aztec diamond theorem by Elkies, Kuperberg, Larsen and Propp \cite{Elkies-Kuperberg-Larsen-Propp(1992.01),Elkies-Kuperberg-Larsen-Propp(1992.02)} by means of LBPs and Schr\"oder paths. A {\em Schr\"oder path} $P$ is a lattice path in the two-dimensional plane $\mathbb{Z}^{2}$ consisting of up steps $(1,1)$, down steps $(1,-1)$ and level steps $(2,0)$, and never going beneath the $x$-axis. See Figure \ref{fig:SchPath} for example. \begin{figure} \caption{A Schr\"oder path $P \in S_{10}$ such that $\mathrm{level}(P) = 4$ and $\mathrm{area}(P) = 20$.} \label{fig:SchPath} \end{figure} For $k \in \mathbb{N} = {\{ 0,1,2,\ldots \}}$, let $S_k$ denote the set of Schr\"oder paths from $(0,0)$ to $(2k,0)$. The number $\# S_k$ of such paths is counted by the $k$-th large Schr\"oder number (A006318 in OEIS \cite{OEIS}). The first few of $\# S_k$ are $1$, $2$, $6$, $22$ and $90$. Enumerative or statistical properties of Schr\"oder paths are often investigated through the {\em Narayana polynomials} \begin{gather} L_{k}(t) = \sum_{j=1}^{k} \frac{1}{k} \binom{k}{j} \binom{k}{j-1} (1+t)^{j}, \qquad k \ge 1, \end{gather} where $L_0(t) = 1$. (The coefficients $\frac{1}{k} \binom{k}{j} \binom{k}{j-1}$ are the {\em Narayana numbers}, A001263 in OEIS \cite{OEIS}.) Bonin, Shapiro and Simion \cite{Bonin-Shapiro-Simion(1993)} interpreted the Narayana polynomials by counting the level steps in Schr\"oder paths, \begin{gather} L_{k}(t) = \sum_{P \in S_k} t^{\mathrm{level}(P)} \end{gather} where $\mathrm{level}(P)$ denotes the number of level steps in a Schr\"oder path $P$. (Level steps in this paper are identified with ``diagonal'' steps in \cite{Bonin-Shapiro-Simion(1993)}.) For more about the Narayana polynomials and related topics, see, e.g., Sulanke's paper \cite{Sulanke(2002)} and the references therein.
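These facts are easy to confirm by brute force. The following Python sketch (written for this exposition; the function names are ours, not from the paper) enumerates $S_k$ and compares the explicit Narayana sum with the level-step generating function:

```python
from fractions import Fraction
from math import comb

def schroeder_paths(k):
    """Enumerate Schroeder paths from (0,0) to (2k,0): up steps 'U' = (1,1),
    down steps 'D' = (1,-1) and level steps 'L' = (2,0), never below y = 0."""
    def go(x, y, path):
        if x == 2 * k:
            if y == 0:
                yield path
            return
        yield from go(x + 1, y + 1, path + ('U',))
        if y > 0:
            yield from go(x + 1, y - 1, path + ('D',))
        if x + 2 <= 2 * k:
            yield from go(x + 2, y, path + ('L',))
    yield from go(0, 0, ())

def narayana_poly(k, t):
    """L_k(t) via the explicit sum over the Narayana numbers (L_0 = 1)."""
    if k == 0:
        return 1
    return sum(Fraction(comb(k, j) * comb(k, j - 1), k) * (1 + t) ** j
               for j in range(1, k + 1))

def narayana_poly_by_paths(k, t):
    """L_k(t) as the generating function of the level-step statistic."""
    return sum(t ** p.count('L') for p in schroeder_paths(k))
```

For example, `[sum(1 for _ in schroeder_paths(k)) for k in range(5)]` yields the large Schr\"oder numbers $1$, $2$, $6$, $22$, $90$.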
Besides level steps, Bonin et al.~also examined the area polynomials \begin{gather} A_{k}(q) = \sum_{P \in S_k} q^{\mathrm{area}(P)}, \qquad k \in \mathbb{N}, \end{gather} with respect to the statistic $\mathrm{area}(P)$ that measures the area bordered by a path $P$ and the $x$-axis. (In \cite{Bonin-Shapiro-Simion(1993)}, the major index is also examined, but we will not consider it in this paper.) In this paper, we consider the two statistics $\mathrm{level}(P)$ and $\mathrm{area}(P)$ simultaneously in the polynomials \begin{gather} N_{k}(t,q) = \sum_{P \in S_k} t^{\mathrm{level}(P)} q^{\mathrm{area}(P)}, \qquad k \in \mathbb{N}. \end{gather} We refer to $N_{k}(t,q)$ as the {\em $q$-Narayana polynomials}. Obviously, the $q$-Narayana polynomials satisfy $N_k(t,1) = L_k(t)$ and $N_k(1,q) = A_k(q)$, and reduce to the large Schr\"oder numbers, $N_k(1,1) = \# S_k$, as well as the Catalan numbers, $N_k(0,1) = \frac{1}{k+1} \binom{2k}{k}$. In Section \ref{sec:qNarPolMom}, we find the LBPs of which the moments are described by the $q$-Narayana polynomials. The aim of this paper is twofold: (i) to calculate a determinant whose entries are described by the $q$-Narayana polynomials $N_k(t,q)$; (ii) to give a new proof to the Aztec diamond theorem by Elkies, Kuperberg, Larsen and Propp \cite{Elkies-Kuperberg-Larsen-Propp(1992.01),Elkies-Kuperberg-Larsen-Propp(1992.02)} by means of LBPs and Schr\"oder paths. Determinants whose entries are given by the large Schr\"oder numbers, by the Narayana polynomials and by their $q$-analogues are calculated by many authors using various techniques. Ishikawa, Tagawa and Zeng \cite{Ishikawa-Tagawa-Zeng(2009)} found a closed-form expression of Hankel determinants of a $q$-analogue of the large Schr\"oder numbers in a combinatorial way based on Gessel--Viennot's lemma \cite{Gessel-Viennot(1985)}.
Petkovi\'c, Barry and Rajkovi\'c \cite{Petkovic-Barry-Rajkovic(2012)} calculated Hankel determinants described by the Narayana polynomials using an analytic method of solving a moment problem of orthogonal polynomials. In Section \ref{sec:ToeplitzDets}, we evaluate a T\"oplitz determinant described by the $q$-Narayana polynomials $N_k(t,q)$ by means of a combinatorial interpretation of LBPs in terms of Schr\"oder paths. Counting domino tilings of the Aztec diamonds is a typical tiling problem which is exactly solvable. For $n \in \mathbb{N}$, the {\em Aztec diamond $\mathit{AD}_{n}$ of order $n$} is the union of all unit squares which lie inside the closed region $|x|+|y| \le n+1$. A {\em domino} is a one-by-two or two-by-one rectangle. Then, a {\em domino tiling}, or simply a {\em tiling}, of $\mathit{AD}_{n}$ is a collection of non-overlapping dominoes which exactly covers $\mathit{AD}_{n}$. Figure \ref{fig:AD_tiling} shows an Aztec diamond and an example of a tiling. \begin{figure} \caption{The Aztec diamond $\mathit{AD}_{5}$ (left) and a tiling of $\mathit{AD}_{5}$ (right).} \label{fig:AD_tiling} \end{figure} Let $T_n$ denote the set of all tilings of $\mathit{AD}_{n}$. Elkies, Kuperberg, Larsen and Propp, in their two-part paper \cite{Elkies-Kuperberg-Larsen-Propp(1992.01),Elkies-Kuperberg-Larsen-Propp(1992.02)}, considered the statistics $v(T)$ and $r(T)$ of a tiling $T$, where $v(T)$ denotes half the number of vertical dominoes in $T$ and $r(T)$ the {\em rank} of $T$. (The definition of the rank is explained in Section \ref{sec:ADT}.) They showed that the counting polynomials \begin{gather} \mathrm{AD}_{n}(t,q) = \sum_{T \in T_n} t^{v(T)} q^{r(T)}, \qquad n \in \mathbb{N}, \end{gather} admit the following closed-form expression.
\begin{thm}[Aztec diamond theorem \cite{Elkies-Kuperberg-Larsen-Propp(1992.01),Elkies-Kuperberg-Larsen-Propp(1992.02)}] \label{thm:ADT} For $n \in \mathbb{N}$, \begin{gather} \label{eq:ADT} \mathrm{AD}_{n}(t,q) = \prod_{k=0}^{n-1} (1 + t q^{2k+1})^{n-k}. \end{gather} \end{thm} In particular, the number $\# T_n$ of possible tilings of $\mathit{AD}_{n}$ equals \begin{gather} \label{eq:ADT11} \# T_n = \mathrm{AD}_{n}(1,1) = 2^{\frac{n(n+1)}{2}}. \end{gather} (That is the solution to Exercise 6.49b in Stanley's book \cite{Stanley(1999EC2)}.) In \cite{Elkies-Kuperberg-Larsen-Propp(1992.01),Elkies-Kuperberg-Larsen-Propp(1992.02)}, a proof by means of the domino shuffling is shown for \eqref{eq:ADT} as well as three different proofs for \eqref{eq:ADT11}. Further different proofs for \eqref{eq:ADT11} are given by several authors \cite{Ciucu(1996),Kuo(2004),Brualdi-Kirkland(2005),Eu-Fu(2005)}. In particular, Eu and Fu \cite{Eu-Fu(2005)} give a proof of \eqref{eq:ADT11} by calculating Hankel determinants of the large and the small Schr\"oder numbers. They showed a one-to-one correspondence between tilings and tuples of non-intersecting Schr\"oder paths to apply Gessel--Viennot's lemma \cite{Gessel-Viennot(1985),Aigner(2001LNCS)} on non-intersecting paths and determinants. In this paper, we give a new proof to \eqref{eq:ADT} based on the correspondence developed by Eu and Fu. Clarifying the connection between the statistics $v(T)$ and $r(T)$ of tilings $T$ and the statistics $\mathrm{level}(P)$ and $\mathrm{area}(P)$ of Schr\"oder paths $P$, we reduce the proof to the calculation of a determinant of the $q$-Narayana polynomials. This paper is organized as follows. In Section \ref{sec:LBPs_TFrac}, we recall the definitions and the fundamentals of LBPs and T-fractions focusing on the moments and a moment determinant of the T\"oplitz form.
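As a quick sanity check on \eqref{eq:ADT11}, the tilings of small Aztec diamonds can be counted by exhaustive backtracking. The sketch below is ours and purely illustrative: cells are the unit squares $[i,i+1]\times[j,j+1]$ inside $|x|+|y| \le n+1$, and each tiling is built by always covering the first free cell.

```python
def aztec_cells(n):
    """Unit squares [i,i+1] x [j,j+1] lying inside |x| + |y| <= n + 1."""
    return [(i, j)
            for i in range(-n - 1, n + 1)
            for j in range(-n - 1, n + 1)
            if all(abs(x) + abs(y) <= n + 1
                   for x in (i, i + 1) for y in (j, j + 1))]

def count_tilings(cells):
    """Count domino tilings by covering the lexicographically first free
    cell with a horizontal or a vertical domino and backtracking."""
    cells = sorted(cells)
    cellset = set(cells)
    covered = set()
    def rec():
        free = next((c for c in cells if c not in covered), None)
        if free is None:
            return 1          # every cell covered: one complete tiling
        i, j = free
        total = 0
        for other in ((i + 1, j), (i, j + 1)):
            if other in cellset and other not in covered:
                covered.update((free, other))
                total += rec()
                covered.difference_update((free, other))
        return total
    return rec()
```

For $n = 1, 2, 3$ this returns $2$, $8$, $64$, in agreement with $2^{n(n+1)/2}$.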
Sections \ref{sec:momPaths}--\ref{sec:nonIntPaths} concern a combinatorial interpretation of LBPs in terms of Schr\"oder paths which is applicable to general families of LBPs. In Section \ref{sec:momPaths}, we exhibit a combinatorial expression of the moments of LBPs (Theorem \ref{thm:momPaths}) with two different proofs. In Section \ref{sec:nonIntPaths}, we show a combinatorial expression of the moment determinant in terms of non-intersecting Schr\"oder paths (Theorem \ref{thm:detInNIPaths}) based on Gessel--Viennot's methodology \cite{Gessel-Viennot(1985),Aigner(2001LNCS)}. Sections \ref{sec:qNarPolMom}--\ref{sec:ADT} concern the special case of the moments of LBPs given by the $q$-Narayana polynomials. In Section \ref{sec:qNarPolMom}, we find the LBPs whose moments are given by the $q$-Narayana polynomials (Theorem \ref{thm:NarayanaPolsInMoms}). In Section \ref{sec:ToeplitzDets}, we evaluate a determinant of the $q$-Narayana polynomials by calculating the moment determinant of LBPs (Theorem \ref{thm:DetNarayanaPols}). Finally, in Section \ref{sec:ADT}, we give a new proof of the Aztec diamond theorem based on the discussions in the foregoing sections about LBPs, Schr\"oder paths and the $q$-Narayana polynomials. Section \ref{sec:conclusions} is devoted to concluding remarks. \section{Laurent biorthogonal polynomials and T-fractions} \label{sec:LBPs_TFrac} In Section \ref{sec:LBPs_TFrac}, we recall the definition and the fundamentals of LBPs and T-fractions. See, e.g., \cite{Jones-Thron(1982),Hendriksen-VanRossum(1986),Zhedanov(1998)} for more details. The formulations of LBPs may differ depending on the authors though they are essentially equivalent. In this paper, we adopt the formulation in \cite{Zhedanov(1998)}. \subsection{Laurent biorthogonal polynomials} \label{sec:LBPs} Let $b_{n+1}$ and $c_{n}$ for $n \in \mathbb{N}$ be arbitrary nonzero constants. 
The (monic) {\em Laurent biorthogonal polynomials (LBPs)} $P_n(z)$, $n \in \mathbb{N}$, are the polynomials determined by the recurrence \begin{gather} \label{eq:recurrence} P_{n+1}(z) = (z - c_n) P_n(z) - b_n z P_{n-1}(z) \qquad \text{for $n \ge 1$} \end{gather} with the initial values $P_{0}(z) = 1$ and $P_1(z) = z - c_0$. The first few of the LBPs are \begin{subequations} \begin{align} P_0(z) &= 1, \\ P_1(z) &= z - c_0, \\ P_2(z) &= z^2 - (b_1 + c_0 + c_1) z + c_0 c_1, \\ P_3(z) &= z^3 - (b_1 + b_2 + c_0 + c_1 + c_2) z^2 \nonumber \\ & \qquad\qquad {} + (b_1 c_2 + b_2 c_0 + c_0 c_1 + c_0 c_2 + c_1 c_2) z - c_0 c_1 c_2. \end{align} \end{subequations} The LBP $P_n(z)$ is a monic polynomial in $z$ exactly of degree $n$ of which the constant term does not vanish. In fact, \begin{gather} \label{eq:LBPConstTerm} P_n(0) = (-1)^{n} \prod_{j=0}^{n-1} c_j \neq 0. \end{gather} The orthogonality of LBPs is described in the following theorem, which is sometimes referred to as a {\em Favard type theorem}. \begin{thm}[Favard type theorem for LBPs] \label{thm:Favard} There exists a linear functional $\mathcal{F}$ defined over Laurent polynomials in $z$ with respect to which the LBPs $P_n(z)$ satisfy the orthogonality \begin{gather} \label{eq:orthty} \mathcal{F}[P_n(z) z^{-k}] = h_n \delta_{n,k} \qquad \text{for $0 \le k \le n$} \end{gather} with some constants $h_n \neq 0$, where $\delta_{n,k}$ denotes the Kronecker delta. The linear functional $\mathcal{F}$ is unique up to a constant factor. \end{thm} We can prove Theorem \ref{thm:Favard} in almost the same way as Favard's theorem for orthogonal polynomials. See, e.g., Chihara's book \cite[Chapter I, Theorem 4.4]{Chihara(1978OP)}. We write the {\em moments} of the linear functional $\mathcal{F}$, \begin{gather} \label{eq:mom} f_k = \mathcal{F}[z^k], \qquad k \in \mathbb{Z}. \end{gather} We fix the first moment $f_1 = \mathcal{F}[z]$ by \begin{gather} f_1 = \kappa \end{gather} where $\kappa$ is an arbitrary nonzero constant.
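The recurrence is immediate to implement; the sketch below (ours, with arbitrary sample coefficients) builds the coefficient lists of $P_n(z)$ and lets one confirm the expansions above and the constant-term formula \eqref{eq:LBPConstTerm}.

```python
def lbp_sequence(n_max, b, c):
    """Coefficient lists [a_0, ..., a_n] of the monic LBPs P_n(z) from
    P_{n+1}(z) = (z - c_n) P_n(z) - b_n z P_{n-1}(z),
    with P_0(z) = 1 and P_1(z) = z - c_0 (b indexed from 1, c from 0)."""
    P = [[1], [-c[0], 1]]
    for n in range(1, n_max):
        z_pn = [0] + P[n]           # coefficients of z * P_n(z)
        z_pm = [0] + P[n - 1]       # coefficients of z * P_{n-1}(z)
        P.append([z_pn[i]
                  - c[n] * (P[n][i] if i < len(P[n]) else 0)
                  - b[n] * (z_pm[i] if i < len(z_pm) else 0)
                  for i in range(len(z_pn))])
    return P
```

For instance, with $b_1 = 2$, $b_2 = 3$ and $(c_0, c_1, c_2) = (5, 7, 11)$ the constant term of $P_3(z)$ comes out as $-c_0 c_1 c_2 = -385$, as \eqref{eq:LBPConstTerm} predicts.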
We can show that the {\em moment determinant} of T\"oplitz form \begin{gather} \Delta^{(s)}_{n} = \det(f_{s-j+k})_{j,k=0,\ldots,n-1} = {} \begin{vmatrix} f_{s} & f_{s+1} & \cdots & f_{s+n-1} \\ f_{s-1} & f_{s} & \cdots & f_{s+n-2} \\ \vdots & \vdots & \ddots & \vdots \\ f_{s-n+1} & f_{s-n+2} & \cdots & f_{s} \\ \end{vmatrix} \end{gather} does not vanish for $s \in {\{ 0, 1 \}}$ and $n \in \mathbb{N}$. The LBPs $P_n(z)$ have the determinant expression \begin{gather} P_n(z) = \frac{1}{\Delta^{(0)}_{n}} \begin{vmatrix} f_0 & f_1 & \cdots & f_{n-1} & f_n \\ f_{-1} & f_0 & \cdots & f_{n-2} & f_{n-1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ f_{-n+1} & f_{-n+2} & \cdots & f_0 & f_1 \\ 1 & z & \cdots & z^{n-1} & z^n \\ \end{vmatrix}. \end{gather} Thus, from \eqref{eq:recurrence} and \eqref{eq:orthty}, the coefficients $b_n$ and $c_n$ of the recurrence \eqref{eq:recurrence} and the constants $h_n$ in the orthogonality \eqref{eq:orthty} are given by \begin{gather} \label{eq:cfsInDets} b_n = -\frac{\Delta^{(1)}_{n+1} \Delta^{(0)}_{n-1}}{\Delta^{(1)}_{n} \Delta^{(0)}_{n}}, \qquad c_n = \frac{\Delta^{(1)}_{n+1} \Delta^{(0)}_{n}}{\Delta^{(1)}_{n} \Delta^{(0)}_{n+1}}, \qquad h_n = \frac{\Delta^{(0)}_{n+1}}{\Delta^{(0)}_{n}}. \end{gather} The inverted polynomials \begin{gather} \tilde{P}_n(z) = \frac{z^{n} P_n(z^{-1})}{P_n(0)} \end{gather} also make a family of LBPs which are determined by the recurrence \eqref{eq:recurrence} with the different coefficients \begin{gather} \label{eq:bc_arginv} \tilde{b}_{n} = \frac{b_n}{c_{n-1} c_{n}}, \qquad \tilde{c}_{n} = \frac{1}{c_n}. \end{gather} We can determine a linear functional $\tilde{\mathcal{F}}$ for $\tilde{P}_n(z)$ by the moments \begin{gather} \label{eq:momDual} \tilde{f}_k = \tilde{\mathcal{F}}[z^k] = f_{1-k}, \qquad k \in \mathbb{Z}. \end{gather} Then, \begin{gather} \label{eq:kappa-kappaTilde} \tilde{f}_{1} = \tilde{\kappa} := f_{0} = \frac{\kappa}{c_0}.
\end{gather} \begin{subequations} \label{eq:detsInCfs} The equations \eqref{eq:cfsInDets} and \eqref{eq:bc_arginv} imply that \begin{align} \Delta^{(1)}_{n} &= {} (-1)^{\frac{n(n-1)}{2}} \kappa^{n} \prod_{k=1}^{n-1} {\left( \frac{b_{k}}{c_{k-1}} \right)}^{n-k}, \label{eq:detsInCfs01} \\ \Delta^{(0)}_{n} &= {} (-1)^{\frac{n(n-1)}{2}} \tilde{\kappa}^{n} \prod_{k=1}^{n-1} {\left( \frac{\tilde{b}_{k}}{\tilde{c}_{k-1}} \right)^{n-k}}. \label{eq:detsInCfs00} \end{align} \end{subequations} In Section \ref{sec:ToeplitzDets}, we make use of the formulae \eqref{eq:detsInCfs} to compute the moment determinant $\Delta^{(s)}_{n}$. \subsection{T-fractions} A {\em T-fraction} is a continued fraction \begin{gather} \label{eq:T-fraction} T(z) = \CFrac{\kappa}{z - c_0} - \CFrac{b_1 z}{z - c_1} - \CFrac{b_2 z}{z - c_2} - \cdots. \end{gather} The $n$-th convergent of $T(z)$ \begin{gather} T_n(z) = \CFrac{\kappa}{z - c_0} - \CFrac{b_1 z}{z - c_1} - \cdots - \CFrac{b_{n-1} z}{z - c_{n-1}} \end{gather} is expressed by a ratio of polynomials \begin{gather} T_n(z) = \frac{Q_n(z)}{P_n(z)} \end{gather} where $P_n(z)$ is the LBP of degree $n$ determined by the recurrence \eqref{eq:recurrence}, and $Q_n(z)$ is the polynomial determined by the same recurrence \eqref{eq:recurrence} from different initial values $Q_0(z) = 0$ and $Q_1(z) = \kappa$. Thus, we can identify the LBP $P_n(z)$ with the denominator polynomial of $T_n(z)$. 
In Pad\'e approximants, the convergent $T_n(z)$ simultaneously approximates two formal power series \begin{gather} \label{eq:momSeries} F_{+}(z) = \sum_{k=1}^{\infty} f_k z^{-k} \qquad \text{and} \qquad F_{-}(z) = -\sum_{k=0}^{\infty} f_{-k} z^k \end{gather} in the sense that \begin{subequations} \label{eq:PadeApprox} \begin{align} T_n(z) &= F_{+}(z) + \mathop{\mathrm{O}}(z^{-n-1}) && \text{as $z \to \infty;$} \\[1\jot] {} &= F_{-}(z) + \mathop{\mathrm{O}}(z^{n}) && \text{as $z \to 0,$} \end{align} \end{subequations} where $f_k = \mathcal{F}[z^k]$ are the moments of the LBPs $P_n(z)$. That is, expanded into series at $z=\infty$ and at $z=0$, $T_n(z) = Q_n(z)/P_n(z)$ coincides with $F_+(z)$ and $F_-(z)$, respectively, at least in the first $n$ terms. The approximation \eqref{eq:PadeApprox} of $F_{+}(z)$ and $F_{-}(z)$ by $T_n(z)$ is equivalent to the orthogonality \eqref{eq:orthty} of LBPs. Taking the limit $n \to \infty$ in \eqref{eq:PadeApprox}, we observe that the T-fraction $T(z)$ equals $F_{+}(z)$ and $F_{-}(z)$ as formal power series, \begin{subequations} \label{eq:TFracSeries} \begin{align} T(z) &= F_{+}(z) && \text{as $z \to \infty;$} \\[1\jot] {} &= F_{-}(z) && \text{as $z \to 0.$} \end{align} \end{subequations} \section{Moments and Schr\"oder paths} \label{sec:momPaths} In Section \ref{sec:momPaths}, we give a combinatorial interpretation to the moments of LBPs. Theorem \ref{thm:momPaths}, expressing each moment in terms of Schr\"oder paths, is already shown in \cite[Theorem 8]{Kamioka(2007)}. In this paper, we review the result by providing two new simple proofs. The lattice path interpretation of LBPs is quite analogous to that in the combinatorial interpretation of orthogonal polynomials by Viennot \cite{Viennot(1983OP)}. We owe the idea of the proof in Section \ref{sec:proofTFrac} by T-fractions to a combinatorial interpretation of continued fractions by Flajolet \cite{Flajolet(1980)}. Let $P$ be a Schr\"oder path.
We label each step in $P$ by unity if the step is an up step, by $b_n$ if a down step descending from the line $y=n$ and by $c_n$ if a level step on the line $y=n$, where $b_n$ and $c_n$ are the coefficients of the recurrence \eqref{eq:recurrence} of the LBPs $P_n(z)$. We then define the {\em weight} $w(P)$ of $P$ by the product of the labels of all the steps in $P$. For example, the path in Figure \ref{fig:SchPath} weighs $w(P) = b_1^2 b_2^3 b_3 c_0 c_1 c_2^2$. In the same way, labeling each step in $P$ using the recurrence coefficients $\tilde{b}_{n}$ and $\tilde{c}_{n}$ for $\tilde{P}_n(z)$, we define another weight $\tilde{w}(P)$. The main statement in Section \ref{sec:momPaths} is the following. \begin{thm} \label{thm:momPaths} The moments $f_k = \mathcal{F}[z^k]$ of LBPs admit the expressions \begin{subequations} \label{eq:momentsPaths} \begin{align} f_{k+1} &= \kappa \sum_{P \in S_k} w(P), \label{eq:momPathsPos} \\ f_{-k} &= \tilde{\kappa} \sum_{P \in S_k} \tilde{w}(P) && \text{for $k \in \mathbb{N}$.} \label{eq:momPathsNeg} \end{align} \end{subequations} \end{thm} For example, \begin{subequations} \begin{align} f_{-2} &= \tilde{\kappa} (\tilde{b}_1 \tilde{b}_2 + \tilde{b}_1^2 + \tilde{b}_1 \tilde{c}_1 + {} 2 \tilde{b}_1 \tilde{c}_0 + \tilde{c}_0^2), \\ f_{-1} &= \tilde{\kappa} (\tilde{b}_1 + \tilde{c}_0), \\ f_{0} &= \tilde{\kappa}, \\ f_{1} &= \kappa, \\ f_{2} &= \kappa (b_1 + c_0), \\ f_{3} &= \kappa (b_1 b_2 + b_1^2 + b_1 c_1 + 2 b_1 c_0 + c_0^2). \end{align} \end{subequations} In the rest of Section \ref{sec:momPaths}, we show two different proofs of Theorem \ref{thm:momPaths}. The first proof in Section \ref{sec:proofLBPs} is based on LBPs. The second proof in Section \ref{sec:proofTFrac} is based on T-fractions. 
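The labelling is straightforward to implement. The following sketch (ours; sample numeric coefficients) recomputes the listed moments from weighted Schr\"oder paths:

```python
from fractions import Fraction

def schroeder_paths(k):
    """Schroeder paths from (0,0) to (2k,0) as tuples of 'U', 'D', 'L'."""
    def go(x, y, path):
        if x == 2 * k:
            if y == 0:
                yield path
            return
        yield from go(x + 1, y + 1, path + ('U',))
        if y > 0:
            yield from go(x + 1, y - 1, path + ('D',))
        if x + 2 <= 2 * k:
            yield from go(x + 2, y, path + ('L',))
    yield from go(0, 0, ())

def weight(path, b, c):
    """w(P): up steps weigh 1, a down step leaving the line y = n weighs
    b[n], and a level step on the line y = n weighs c[n]."""
    y, w = 0, 1
    for s in path:
        if s == 'U':
            y += 1
        elif s == 'D':
            w *= b[y]
            y -= 1
        else:
            w *= c[y]
    return w

def moment(j, b, c, kappa=1):
    """f_j = kappa * sum of w(P) over S_{j-1}, as in Theorem momPaths (j >= 1)."""
    return kappa * sum(weight(p, b, c) for p in schroeder_paths(j - 1))

def tilde_coeffs(b, c):
    """Coefficients (eq:bc_arginv) of the inverted LBPs."""
    bt = {n: Fraction(b[n], c[n - 1] * c[n]) for n in b}
    ct = {n: Fraction(1, c[n]) for n in c}
    return bt, ct
```

Running `moment` with the tilde coefficients and $\tilde{\kappa} = \kappa/c_0$ reproduces the negative moments $f_{-k}$ in the same way.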
\subsection{Proof of Theorem \ref{thm:momPaths} by LBPs} \label{sec:proofLBPs} \begin{lem} \label{lem:momPathsGen} For $n \in \mathbb{N}$ and $k \in \mathbb{N}$, \begin{subequations} \label{eq:fvodsjcb} \begin{align} \mathcal{F}[P_{n}(z) z^{k+1}] &= \kappa \sum_{P} w(P), \label{eq:momPathsGen01} \\ \tilde{\mathcal{F}}[\tilde{P}_{n}(z) z^{k+1}] &= \tilde{\kappa} \sum_{P} \tilde{w}(P) \label{eq:momPathsGen00} \end{align} \end{subequations} where both the sums range over all Schr\"oder paths $P$ from $(-n,n)$ to $(2k,0)$. \end{lem} \begin{proof} Let us write $f_{n,k} = \mathcal{F}[P_n(z) z^{k+1}]$. From the recurrence \eqref{eq:recurrence} of $P_n(z)$, we obtain a recurrence of $f_{n,k}$ \begin{gather} \label{eq:uwcondvvp} f_{n,k} = f_{n+1,k-1} + c_n f_{n,k-1} + b_n f_{n-1,k} \end{gather} for $n \in \mathbb{N}$ and $k \in \mathbb{N}$, where the boundary values $f_{-1,k} = 0$ and $f_{n,-1} = \tilde{\kappa} \delta_{n,0}$ are induced from \eqref{eq:orthty} and \eqref{eq:kappa-kappaTilde}. The recurrence \eqref{eq:uwcondvvp} leads us to a combinatorial expression of \eqref{eq:momPathsGen01}, \begin{gather} f_{n,k} = \tilde{\kappa} c_0 \sum_{P} w(P) = \kappa \sum_{P} w(P) \end{gather} where the sum ranges over all Schr\"oder paths $P$ from $(-n,n)$ to $(2k,0)$. In much the same way, we can derive \eqref{eq:momPathsGen00} using Schr\"oder paths labelled with $\tilde{b}_{n}$ and $\tilde{c}_{n}$. \end{proof} From \eqref{eq:mom} and \eqref{eq:momDual}, Theorem \ref{thm:momPaths} is the special case of $n=0$ in Lemma \ref{lem:momPathsGen}. That completes the proof of Theorem \ref{thm:momPaths} by LBPs. \subsection{Proof of Theorem \ref{thm:momPaths} by T-fractions} \label{sec:proofTFrac} For a Schr\"oder path $P$, we define $\mathrm{length}(P)$ by the sum of half the number of up and down steps and the number of level steps in $P$. For example, the path $P$ in Figure \ref{fig:SchPath} is as long as $\mathrm{length}(P) = 10$.
\begin{lem} \label{lem:TFracPaths} The T-fraction $T(z)$ admits the expansions into formal power series \begin{subequations} \label{eq:TFracPaths} \begin{align} T(z) &= \kappa \sum_{P} w(P) z^{-\mathrm{length}(P)-1} && \text{as $z \to \infty;$} \label{eq:TFracPathsInfty} \\ {} &= -\tilde{\kappa} \sum_{P} \tilde{w}(P) z^{\mathrm{length}(P)} && \text{as $z \to 0,$} \label{eq:TFracPathsZero} \end{align} \end{subequations} where both the (formal) sums range over all Schr\"oder paths $P$ from $(0,0)$ to some point on the $x$-axis. \end{lem} \begin{proof} Let us consider partial convergents of $T(z)$ \begin{gather} T_{m,n}(z) = \CFrac{1}{z - c_m} - \CFrac{b_{m+1} z}{z - c_{m+1}} - \cdots - \CFrac{b_{m+n-1} z}{z - c_{m+n-1}} \end{gather} for $m \in \mathbb{N}$ and $n \in \mathbb{N}$, where $T_{m,0}(z) = 0$. We first show by induction for $n \in \mathbb{N}$ that $T_{m,n}(z) = S_{m,n}(z)$ as $z \to \infty$ where $S_{m,n}(z)$ denotes the formal power series \begin{gather} \label{eq:uig8vweu} S_{m,n}(z) = \sum_{P} w(P) z^{-\mathrm{length}(P)-1} \end{gather} over all Schr\"oder paths $P$ from $(m,m)$ to some point on the line $y=m$ which lie in the region bounded by $y=m$ and $y=m+n-1$. (Hence, all the points in $P$ have the $y$-coordinates $\ge m$ and $\le m+n-1$.) For $n=0$, it is trivial that $S_{m,0}(z) = 0$ because the region in which $P$ may live is empty. Hence, $T_{m,0}(z) = S_{m,0}(z) = 0$. Suppose that $n \ge 1$. We classify Schr\"oder paths $P$ in the sum \eqref{eq:uig8vweu} into three classes: (i) the empty path only of one point at $(m,m)$ (without steps) of weight $1$; (ii) paths $P_2$ beginning with a level step; (iii) paths $P_3$ beginning with an up step. Thus, \begin{gather} \label{eq:nsiuhgbve} S_{m,n}(z) = {} z^{-1} + \sum_{P_2} w(P_2) z^{-\mathrm{length}(P_2)-1} + \sum_{P_3} w(P_3) z^{-\mathrm{length}(P_3)-1}, \end{gather} where the sums with respect to $P_2$ and $P_3$ are taken over all Schr\"oder paths in the classes (ii) and (iii), respectively.
Each path $P_2$ in the class (ii) consists of an initial level step on $y=m$, labelled $c_m$, and a subpath (maybe empty) from $(m+2,m)$ to some point on $y=m$. Hence, \begin{gather} \label{eq:osh89ewvh} \sum_{P_2} w(P_2) z^{-\mathrm{length}(P_2)-1} = c_m z^{-1} S_{m,n}(z). \end{gather} Each path $P_3$ in the class (iii), as shown in Figure \ref{fig:pathDecomp}, is uniquely decomposed into four parts: (A) an initial up step, labelled unity; (B) a subpath (maybe empty) from $(m+1,m+1)$ to some point on $y=m+1$ never going beneath $y=m+1$; (C) the first down step descending from $y=m+1$ to $y=m$, labelled $b_{m+1}$; (D) a subpath (maybe empty) both of whose initial and terminal points are on $y=m$. \begin{figure} \caption{The decomposition of a path $P_3$ in the class (iii) into four parts (A), (B), (C) and (D).} \label{fig:pathDecomp} \end{figure} Hence, \begin{gather} \label{eq:nuch87fvf} \sum_{P_3} w(P_3) z^{-\mathrm{length}(P_3)-1} = b_{m+1} S_{m+1,n-1}(z) S_{m,n}(z). \end{gather} Substituting \eqref{eq:osh89ewvh} and \eqref{eq:nuch87fvf} into \eqref{eq:nsiuhgbve}, we get \begin{gather} S_{m,n}(z) = {\{ z - c_m - b_{m+1} z S_{m+1,n-1}(z) \}^{-1}}. \end{gather} By the induction hypothesis, we can assume that $S_{m+1,n-1}(z) = T_{m+1,n-1}(z)$ as $z \to \infty$ and hence \begin{gather} S_{m,n}(z) = {\{ z - c_m - b_{m+1} z T_{m+1,n-1}(z) \}^{-1}} = T_{m,n}(z) \qquad \text{as $z \to \infty.$} \end{gather} Now, let us prove Lemma \ref{lem:TFracPaths}. In taking the limit $n \to \infty$ of the identity $T_{0,n}(z) = S_{0,n}(z)$ as $z \to \infty$, the left-hand side $T_{0,n}(z)$ tends to $T(z)$ while the right-hand side $S_{0,n}(z)$ tends to the right-hand side of \eqref{eq:TFracPathsInfty}.
In order to show \eqref{eq:TFracPathsZero}, we observe from \eqref{eq:bc_arginv} that $T_{m,n}(z)$ is equivalent to \begin{gather} T_{m,n}(z) = {} -\CFrac{\tilde{c}_{m} z^{-1}}{z^{-1} - \tilde{c}_{m}} - {} \CFrac{\tilde{b}_{m+1} z^{-1}}{z^{-1} - \tilde{c}_{m+1}} - \cdots - {} \CFrac{\tilde{b}_{m+n-1} z^{-1}}{z^{-1} - \tilde{c}_{m+n-1}}. \end{gather} We can thereby show \eqref{eq:TFracPathsZero} as a simple corollary of \eqref{eq:TFracPathsInfty}. That completes the proof of Lemma \ref{lem:TFracPaths}. \end{proof} The expressions \eqref{eq:momentsPaths} of moments in Theorem \ref{thm:momPaths} are derived just by equating \eqref{eq:TFracSeries} and \eqref{eq:TFracPaths}. Indeed, every Schr\"oder path $P$ from $(0,0)$ to some point on the $x$-axis terminates at $(2k,0)$ if and only if $\mathrm{length}(P) = k$. That completes the proof of Theorem \ref{thm:momPaths} by T-fractions. \section{Non-intersecting Schr\"oder paths} \label{sec:nonIntPaths} In Section \ref{sec:nonIntPaths}, as a consequence of Theorem \ref{thm:momPaths}, we examine the moment determinant $\Delta^{(s)}_{n}$ from a combinatorial viewpoint. We utilize {\em Gessel--Viennot's lemma} \cite{Gessel-Viennot(1985), Aigner(2001LNCS)} to read the determinant in terms of non-intersecting paths. For $m \in \mathbb{N}$ and $n \in \mathbb{N}$, let $\bm{S}_{m,n}$ denote the set of $n$-tuples $\bm{P} = (P_0,\ldots,P_{n-1})$ of Schr\"oder paths $P_k$ such that (i) $P_{k}$ goes from $(-k,k)$ to $(2m+k,k)$ and that (ii) every two distinct paths $P_j$ and $P_k$, $j \neq k$, are {\em non-intersecting}, namely $P_j \cap P_k = \emptyset$. As shown in Figure \ref{fig:nonIntPaths}, each $n$-tuple $\bm{P} \in \bm{S}_{m,n}$ can be drawn in a diagram of $n$ non-intersecting Schr\"oder paths which are pairwise disjoint. \begin{figure} \caption{ A quintuple $\bm{P} = (P_0,\ldots,P_4) \in \bm{S}_{1,5}$ of non-intersecting Schr\"oder paths which is drawn in a plane. 
} \label{fig:nonIntPaths} \end{figure} For simplicity, we write \begin{gather} w(\bm{P}) = \prod_{k=0}^{n-1} w(P_k), \qquad \tilde{w}(\bm{P}) = \prod_{k=0}^{n-1} \tilde{w}(P_k). \end{gather} \begin{thm} \label{thm:detInNIPaths} \begin{subequations} For general $b_n$ and $c_n$ nonzero, the moment determinant $\Delta^{(s)}_{n}$ admits the expressions \begin{align} \Delta^{(s)}_{n} {} &= (-1)^{\frac{n(n-1)}{2}} \kappa^{n} {\left( \prod_{j=1}^{n-1} b_j^{n-j} \right)} \sum_{\bm{P} \in \bm{S}_{s-n,n}} w(\bm{P}) && \text{if $s \ge n;$} \label{eq:GV} \\[2\jot] {} &= (-1)^{\frac{n(n-1)}{2}} \tilde{\kappa}^{n} {\left( \prod_{j=1}^{n-1} \tilde{b}_j^{n-j} \right)} \sum_{\bm{P} \in \bm{S}_{|s|-n+1,n}} \tilde{w}(\bm{P}) && \text{if $s \le -n+1.$} \label{eq:GVDual} \end{align} \end{subequations} \end{thm} \begin{proof} Suppose that $s \ge n \ge 0$. We rewrite $\Delta^{(s)}_{n}$ into Hankel form, \begin{gather} \label{eq:gqof8e9} \Delta^{(s)}_{n} = (-1)^{\frac{n(n-1)}{2}} \det(f_{s-n+j+k+1})_{j,k=0,\ldots,n-1}. \end{gather} Owing to Theorem \ref{thm:momPaths}, the $(j,k)$-entry of the last Hankel determinant has the combinatorial expression \begin{gather} f_{s-n+j+k+1} = \kappa \sum_{P_{j,k}} w(P_{j,k}) \end{gather} where we can assume that the sum ranges over all Schr\"oder paths $P_{j,k}$ from $(-2j,0)$ to $(2(s-n)+2k,0)$. Thus, we can apply Gessel--Viennot's lemma \cite{Gessel-Viennot(1985),Aigner(2001LNCS)} to expand the determinant \eqref{eq:gqof8e9}, \begin{gather} \label{eq:voihvedavj} \det(f_{s-n+j+k+1})_{j,k=0,\ldots,n-1} = {} \kappa^{n} \sum_{(P_{0,0},\ldots,P_{n-1,n-1})} w(P_{0,0}) \cdots w(P_{n-1,n-1}) \end{gather} where the sum ranges over all $n$-tuples $(P_{0,0},\ldots,P_{n-1,n-1})$ of {\em non-intersecting} Schr\"oder paths $P_{k,k}$ such that $P_{k,k}$ goes from $(-2k,0)$ to $(2(s-n)+2k,0)$ for each $k$. (See Figure \ref{fig:gcd9dlcwv} for example.)
\begin{figure} \caption{ A quintuple $(P_{0,0},\ldots,P_{4,4}) $ of non-intersecting Schr\"oder paths counted in the right-hand sum of \eqref{eq:voihvedavj} ($m=1$ and $n=5$). } \label{fig:gcd9dlcwv} \end{figure} As shown in Figure \ref{fig:gcd9dlcwv}, the first and last $k$ steps of $P_{k,k}$ must be all up and down steps, respectively, so that the paths do not collide. In particular, $P_{k,k}$ passes through the points $(-k,k)$ and $(2(s-n)+k,k)$. We thus have \begin{gather} \det(f_{s-n+j+k+1})_{j,k=0,\ldots,n-1} = {} \kappa^{n} {\left( \prod_{j=1}^{n-1} b_j^{n-j} \right)} \sum_{\bm{P} \in \bm{S}_{s-n,n}} w(\bm{P}) \end{gather} and thereby \eqref{eq:GV}. In the same way, we can show \eqref{eq:GVDual} from Theorem \ref{thm:momPaths}. \end{proof} For example, for $s=3$ and $n=2$, the set $\bm{S}_{1,2}$ contains exactly eight doubles $(P_0,P_1)$ of non-intersecting Schr\"oder paths which are shown in Figure \ref{fig:NIPaths}. \begin{figure} \caption{ The eight doubles $(P_0,P_1) \in \bm{S}_{1,2}$ of non-intersecting Schr\"oder paths. } \label{fig:NIPaths} \end{figure} Thus, the moment determinant $\Delta^{(3)}_{2}$ equals the polynomial of eight monomials \begin{gather} \Delta^{(3)}_{2} = {} -\kappa^2 b_1 (c_0 c_1^2 + 2 b_2 c_0 c_1 + b_2^2 c_0 + b_2 c_0 c_2 + b_1 b_2 c_2 + b_2 b_3 c_0 + b_1 b_2 b_3) \end{gather} of which each monomial corresponds to a diagram in Figure \ref{fig:NIPaths}. \section{$q$-Narayana polynomials as moments} \label{sec:qNarPolMom} In Section \ref{sec:qNarPolMom} and the subsequent, we consider the special case of the LBPs whose moments are described by the $q$-Narayana polynomials. Let us recall from Section \ref{sec:introduction} the definition of the $q$-Narayana polynomials \begin{gather} \label{eq:qNaraPolRp} N_{k}(t,q) = \sum_{P \in S_k} t^{\mathrm{level}(P)} q^{\mathrm{area}(P)} \end{gather} where $\mathrm{level}(P)$ denotes the number of level steps in a Schr\"oder path $P$, and $\mathrm{area}(P)$ the area bordered by $P$ and the $x$-axis.
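Both statistics are computed by a single walk along the path, so \eqref{eq:qNaraPolRp} can be evaluated by direct enumeration. A small sketch (ours, purely illustrative):

```python
def schroeder_paths(k):
    """Schroeder paths from (0,0) to (2k,0) as tuples of 'U', 'D', 'L'."""
    def go(x, y, path):
        if x == 2 * k:
            if y == 0:
                yield path
            return
        yield from go(x + 1, y + 1, path + ('U',))
        if y > 0:
            yield from go(x + 1, y - 1, path + ('D',))
        if x + 2 <= 2 * k:
            yield from go(x + 2, y, path + ('L',))
    yield from go(0, 0, ())

def level_area(path):
    """(level(P), area(P)); the area between a Schroeder path and the
    x-axis is an integer, so twice the area is tracked and halved."""
    y, area2 = 0, 0
    for s in path:
        if s == 'U':
            area2 += 2 * y + 1
            y += 1
        elif s == 'D':
            area2 += 2 * y - 1
            y -= 1
        else:                      # level step of width 2 at height y
            area2 += 4 * y
    return path.count('L'), area2 // 2

def q_narayana(k, t, q):
    """N_k(t,q) = sum over S_k of t^level(P) q^area(P)."""
    return sum(t ** l * q ** a for l, a in map(level_area, schroeder_paths(k)))
```

Specializing as in Section \ref{sec:introduction}, $N_k(1,1)$ recovers the large Schr\"oder numbers and $N_k(0,1)$ the Catalan numbers.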
For example, the first few of the $q$-Narayana polynomials are enumerated in \begin{subequations} \begin{align} N_0(t,q) &= 1, \\ N_1(t,q) &= t+q, \\ N_2(t,q) &= t^2 + 2 t q + t q^3 + q^2 + q^4, \\ N_3(t,q) &= t^3 + 3 t^2 q + 2 t^2 q^3 + t^2 q^5 + 3 t q^2 + 4 t q^4 + 2 t q^6 + t q^8 \nonumber \\ & \qquad {} + q^3 + 2 q^5 + q^7 + q^9. \end{align} \end{subequations} In view of Theorem \ref{thm:momPaths}, it is easy to find the $q$-Narayana polynomials in the moments of LBPs. \begin{thm} \label{thm:NarayanaPolsInMoms} Let us determine the LBPs $P_{n}(z)$ by the recurrence \eqref{eq:recurrence} with the coefficients \begin{gather} \label{eq:dsach87we7} b_{n} = q^{2n-1}, \qquad c_{n} = t q^{2n}. \end{gather} Then, \begin{subequations} \label{eq:NarayanaPolsInMoms} the linear functional $\mathcal{F}$ for $P_{n}(z)$ admits the moments $f_k = \mathcal{F}[z^k]$ described by the $q$-Narayana polynomials, \begin{align} f_{k} {} &= \kappa N_{k-1}(t,q) && \text{for $k \ge 1;$} \label{eq:momNarPos} \\[1\jot] {} &= \kappa t^{-2|k|-1} N_{|k|}(t,q^{-1}) && \text{for $k \le 0,$} \label{eq:momNarNeg} \end{align} where $\kappa$ is an arbitrary nonzero constant. \end{subequations} \end{thm} \begin{proof} Let $P \in S_k$. Labelled with \eqref{eq:dsach87we7}, $P$ weighs $w(P) = t^{\mathrm{level}(P)} q^{\mathrm{area}(P)}$. Hence, by virtue of Theorem \ref{thm:momPaths}, we have \eqref{eq:momNarPos} as a special case of \eqref{eq:momPathsPos}. Similarly, with \begin{gather} \label{eq:NarayanaCfsDual} \tilde{b}_{n} = t^{-2} q^{-2n+1}, \qquad \tilde{c}_{n} = t^{-1} q^{-2n} \end{gather} from \eqref{eq:bc_arginv}, $\tilde{w}(P) = t^{-2k+\mathrm{level}(P)} q^{-\mathrm{area}(P)}$. Now $\tilde{\kappa} = \kappa t^{-1}$ from \eqref{eq:kappa-kappaTilde}. Hence, we obtain \eqref{eq:momNarNeg} from \eqref{eq:momPathsNeg}. 
\end{proof} We remark that the $q$-Narayana polynomials \eqref{eq:qNaraPolRp} defined in a combinatorial way were already investigated by Cigler \cite{Cigler(2005arXiv)} who introduced the polynomials by modifying the generating function of the $q$-Catalan numbers. Indeed, we can deduce from \eqref{eq:qNaraPolRp} a recurrence \begin{gather} \label{eq:qNarRec} N_k(t,q) = t N_{k-1}(t,q) + \sum_{j=0}^{k-1} q^{2j+1} N_j(t,q) N_{k-j-1}(t,q). \end{gather} We can identify \eqref{eq:qNarRec} with the recurrence in \cite[Eq.~(19)]{Cigler(2005arXiv)}. \section{Determinant of $q$-Narayana polynomials} \label{sec:ToeplitzDets} In Section \ref{sec:ToeplitzDets}, we examine a T\"oplitz determinant of the $q$-Narayana polynomials \begin{gather} \label{eq:DetNarayanaPol} \mathcal{N}^{(s)}_{n}(t,q) = \det(N_{s+j-k-1}(t,q))_{j,k=0,\ldots,n-1}, \end{gather} where, in view of \eqref{eq:NarayanaPolsInMoms}, we define $N_k(t,q)$ for negative $k$ by \begin{gather} N_{k}(t,q) = t^{-2|k|+1} N_{|k|-1}(t,q^{-1}) \qquad \text{for $k < 0$.} \end{gather} As the special case of the moments given by the $q$-Narayana polynomials, Theorem \ref{thm:detInNIPaths} allows us to read the determinant \eqref{eq:DetNarayanaPol} in the context of non-intersecting Schr\"oder paths. The results in this section, Theorem \ref{thm:DetNarayanaPols} and Corollary \ref{cor:NIPathsNarayana}, will be applied later in Section \ref{sec:ADT} to a new proof of the Aztec diamond theorem (Theorem \ref{thm:ADT}). Theorem \ref{thm:NarayanaPolsInMoms} implies that \begin{gather} \label{eq:hiusgf7e} \Delta^{(s)}_{n} = \mathcal{N}^{(s)}_{n}(t,q) \end{gather} provided that the coefficients $b_n$ and $c_n$ of the recurrence \eqref{eq:recurrence} are given by \eqref{eq:dsach87we7} where $\kappa = 1$. Hence, we can use the formulae \eqref{eq:detsInCfs} to find the exact value of $\mathcal{N}^{(s)}_{n}$ for $s \in {\{ 0, 1 \}}$.
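Both the recurrence \eqref{eq:qNarRec} and the identification \eqref{eq:hiusgf7e} admit a numerical sanity check. The sketch below (ours, using exact rational arithmetic) builds the moments of Theorem \ref{thm:NarayanaPolsInMoms} with $\kappa = 1$, forms the T\"oplitz determinants directly, and compares $\Delta^{(1)}_{n}$ and $\Delta^{(0)}_{n}$ with the products obtained from \eqref{eq:detsInCfs}, for which $b_k/c_{k-1} = q/t$, $\tilde{b}_k/\tilde{c}_{k-1} = (tq)^{-1}$ and $\tilde{\kappa} = t^{-1}$.

```python
from fractions import Fraction

def schroeder_paths(k):
    def go(x, y, path):
        if x == 2 * k:
            if y == 0:
                yield path
            return
        yield from go(x + 1, y + 1, path + ('U',))
        if y > 0:
            yield from go(x + 1, y - 1, path + ('D',))
        if x + 2 <= 2 * k:
            yield from go(x + 2, y, path + ('L',))
    yield from go(0, 0, ())

def level_area(path):
    y, area2 = 0, 0
    for s in path:
        if s == 'U':
            area2 += 2 * y + 1
            y += 1
        elif s == 'D':
            area2 += 2 * y - 1
            y -= 1
        else:
            area2 += 4 * y
    return path.count('L'), area2 // 2

def q_narayana(k, t, q):
    return sum(t ** l * q ** a for l, a in map(level_area, schroeder_paths(k)))

def q_narayana_rec(kmax, t, q):
    """N_0, ..., N_kmax via the recurrence (eq:qNarRec)."""
    N = [1]
    for k in range(1, kmax + 1):
        N.append(t * N[-1] + sum(q ** (2 * j + 1) * N[j] * N[k - 1 - j]
                                 for j in range(k)))
    return N

def f(k, t, q):
    """Moments of Theorem NarayanaPolsInMoms with kappa = 1."""
    if k >= 1:
        return q_narayana(k - 1, t, q)
    return t ** (-2 * abs(k) - 1) * q_narayana(abs(k), t, 1 / q)

def det(M):
    """Exact determinant by cofactor expansion (fine for small matrices)."""
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def toeplitz_det(s, n, t, q):
    """Delta^{(s)}_n = det(f_{s-j+k}), j,k = 0, ..., n-1."""
    return det([[f(s - j + k, t, q) for k in range(n)] for j in range(n)])
```

The check runs instantly for $n \le 3$ and agrees with the closed forms derived next.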
Recall that, in using \eqref{eq:detsInCfs00}, we assume $\tilde{b}_{n}$ and $\tilde{c}_{n}$ to be given by \eqref{eq:NarayanaCfsDual} and $\tilde{\kappa} = t^{-1}$. \begin{lem} \label{lem:NarayanaDets} For $s \in {\{ 0, 1 \}}$ and $n \in \mathbb{N}$, the exact value of the determinant $\mathcal{N}^{(s)}_{n}$ is given by \begin{subequations} \label{eq:NarayanaDets} \begin{align} \mathcal{N}^{(1)}_{n}(t,q) &= (-1)^{\frac{n(n-1)}{2}} t^{-\frac{n(n-1)}{2}} q^{\frac{n(n-1)}{2}}, \label{eq:NarayanaDets_one} \\[1\jot] \mathcal{N}^{(0)}_{n}(t,q) &= (-1)^{\frac{n(n-1)}{2}} t^{-\frac{n(n+1)}{2}} q^{-\frac{n(n-1)}{2}}. \label{eq:NarayanaDets_zero} \end{align} \end{subequations} \end{lem} In order to find the value of $\mathcal{N}^{(s)}_{n}(t,q)$ for further $s \in \mathbb{Z}$ and $n \in \mathbb{N}$, we can use Sylvester's determinant identity: \begin{gather} \label{eq:SylvesterDetId} X \cdot X(i,j;k,\ell) - X(i;k) \cdot X(j;\ell) + X(i;\ell) \cdot X(j;k) = 0, \end{gather} where $X$ is an arbitrary determinant and $X(i,j;k,\ell)$ denotes the minor of $X$ obtained by deleting the $i$-th and the $j$-th rows and the $k$-th and the $\ell$-th columns; $X(i;k)$ denotes the minor of $X$ with respect to the $i$-th row and the $k$-th column. Applying Sylvester's determinant identity, we get \begin{gather} \label{eq:NarayanaSylvester} \mathcal{N}^{(s)}_{n+1} \cdot \mathcal{N}^{(s)}_{n-1} - {} \mathcal{N}^{(s)}_{n} \cdot \mathcal{N}^{(s)}_{n} + {} \mathcal{N}^{(s+1)}_{n} \cdot \mathcal{N}^{(s-1)}_{n} = 0 \end{gather} for $s \in \mathbb{Z}$ and $n \in \mathbb{N}$, where $\mathcal{N}^{(s)}_{n} = \mathcal{N}^{(s)}_{n}(t,q)$ except that $\mathcal{N}^{(s)}_{-1} = 0$. Using \eqref{eq:NarayanaSylvester} as a recurrence from appropriate initial values, we can compute the value of $\mathcal{N}^{(s)}_{n}(t,q)$ for each $s \in \mathbb{Z}$ and $n \in \mathbb{N}$. In particular, we find a closed form of $\mathcal{N}^{(s)}_{n}(t,q)$ for $-n \le s \le n+1$ as follows.
\begin{thm} \label{thm:DetNarayanaPols} For $-n \le s \le n+1$, the exact value of the determinant $\mathcal{N}^{(s)}_{n}(t,q)$ is given by \begin{subequations} \label{eq:NarayanaDetsCor} \begin{align} \mathcal{N}^{(s)}_{n}(t,q) {} &= \varphi^{(s)}_{n}(t,q) \prod_{k=1}^{s-1} (t+q^{2k-1})^{s-k} && \text{for $1 \le s \le n+1;$} \label{eq:DetNarayanaPols+} \\ {} &= \varphi^{(s)}_{n}(t,q) \prod_{k=1}^{|s|} (t+q^{-2k+1})^{|s|-k+1} && \text{for $-n \le s \le 0$} \end{align} where \begin{gather} \varphi^{(s)}_{n}(t,q) = (-1)^{\frac{n(n-1)}{2}} t^{-\frac{(n-s)(n-s+1)}{2}} q^{\frac{n(n-1)(2s-1)}{2}}. \end{gather} \end{subequations} \end{thm} \begin{proof} Using Sylvester's identity \eqref{eq:NarayanaSylvester} from the initial values \eqref{eq:NarayanaDets}, we can easily show \eqref{eq:NarayanaDetsCor} by induction. \end{proof} Note that Cigler \cite{Cigler(2005arXiv)} found a closed-form expression of the Hankel determinant $\det(N_{s+j+k}(t,q))_{j,k=0,\ldots,n-1}$ of the $q$-Narayana polynomials for $s \in {\{ 0, 1 \}}$ and $n \in \mathbb{N}$ by means of orthogonal polynomials. (The Hankel determinant coincides with $\mathcal{N}^{(s)}_{n}(t,q)$ for $s \in {\{ n, n+1 \}}$ up to sign.) Theorem \ref{thm:DetNarayanaPols} generalizes Cigler's result \cite[Eqs.~(24) and (25)]{Cigler(2005arXiv)} to further $s$ and $n$. As a corollary of Theorem \ref{thm:DetNarayanaPols}, equating \eqref{eq:DetNarayanaPols+} with \eqref{eq:GV} in Theorem \ref{thm:detInNIPaths}, we obtain the following result about non-intersecting Schr\"oder paths. \begin{cor} \label{cor:NIPathsNarayana} For $m \in {\{ 0,1 \}}$ and $n \in \mathbb{N}$, \begin{gather} \sum_{\bm{P} \in \bm{S}_{m,n}} t^{\mathrm{level}(\bm{P})} q^{\mathrm{area}(\bm{P})} = {} q^{\frac{n(n-1)(3m+2n-1)}{3}} \prod_{k=1}^{m+n-1} (t + q^{2k-1})^{m+n-k} \end{gather} where $\mathrm{level}(\bm{P}) = \sum_{k=0}^{n-1} \mathrm{level}(P_k)$ and $\mathrm{area}(\bm{P}) = \sum_{k=0}^{n-1} \mathrm{area}(P_k)$ with $\bm{P} = (P_0,\ldots,P_{n-1})$.
\end{cor} \section{Proof of Aztec diamond theorem} \label{sec:ADT} Finally, in Section \ref{sec:ADT}, we give a new proof of the Aztec diamond theorem (Theorem \ref{thm:ADT}) based on the discussions in the foregoing sections. In the two-part paper by Elkies, Kuperberg, Larsen and Propp \cite{Elkies-Kuperberg-Larsen-Propp(1992.01),Elkies-Kuperberg-Larsen-Propp(1992.02)}, the Aztec diamond theorem is proved by the technique of domino shuffling. In contrast, the proof in this paper is based on the one-to-one correspondence between tilings of the Aztec diamonds and tuples of non-intersecting Schr\"oder paths developed by Eu and Fu \cite{Eu-Fu(2005)}, who used the correspondence to prove \eqref{eq:ADT11}. In order to make the statement precise, as we announced in Section \ref{sec:introduction}, we review from \cite{Elkies-Kuperberg-Larsen-Propp(1992.01)} the definition of the rank statistic. Let $T \in T_n$ be a tiling of the Aztec diamond $\mathit{AD}_n$. If $n \ge 1$, $T$ necessarily contains one or more two-by-two blocks of two horizontal or vertical dominoes. Thus, choosing one of these two-by-two blocks and rotating it by ninety degrees, we obtain a new tiling $T' \in T_n$. (See Figure \ref{fig:elmMove}.) \begin{figure} \caption{Rotation of a two-by-two block of two horizontal or vertical dominoes in an elementary move.} \label{fig:elmMove} \end{figure} We refer to this operation of transforming $T$ into $T'$ by rotating a two-by-two block as an {\em elementary move}. It can be shown that any tiling of $\mathit{AD}_n$ can be reached from any other tiling of $\mathit{AD}_n$ by a sequence of elementary moves. The rank $r(T)$ of $T$ denotes the minimal number of elementary moves required to reach $T$ from the ``all-horizontal'' tiling $T^0$ consisting only of horizontal dominoes, where $r(T^0) = 0$.
For example, in Figure \ref{fig:rank}, the rightmost tiling $T$ of $\mathit{AD}_{2}$ has the rank $r(T) = 4$ since at least four elementary moves are required to reach it from the leftmost $T^0$. \begin{figure} \caption{ A sequence of elementary moves from $T^0$ to $T$ of $\mathit{AD}_{2}$. At least four elementary moves are required to reach the rightmost $T$ from $T^0$, which consists only of horizontal dominoes, and thereby $r(T) = 4$. } \label{fig:rank} \end{figure} Eu and Fu \cite{Eu-Fu(2005)} developed a one-to-one correspondence between $T_n$ and $\bm{S}_{1,n}$. We describe the bijection from $T_n$ to $\bm{S}_{1,n}$ in a slightly different manner from \cite{Eu-Fu(2005)}. Following \cite{Elkies-Kuperberg-Larsen-Propp(1992.01)}, we color the Aztec diamond $\mathit{AD}_n$ in a black-white checkerboard fashion so that all unit squares on the upper-left border of $\mathit{AD}_n$ are white. We say that a horizontal domino (resp.~a vertical domino) put into $\mathit{AD}_n$ is {\em even} if the left half (resp.~the upper half) of the domino covers a white unit square. Otherwise, the domino is {\em odd}. The bijection mapping a tiling $T \in T_n$ to an $n$-tuple $\bm{P} = (P_0,\ldots,P_{n-1}) \in \bm{S}_{1,n}$ of non-intersecting Schr\"oder paths is described by the following procedure: For each domino in $T$, as shown in Figure \ref{fig:step-domino}, draw an up step (resp.~a down step, a level step) that goes through the center of the domino if the domino is even vertical (resp.~odd vertical, odd horizontal). (For even horizontal dominoes, we do nothing.) \begin{figure} \caption{The rule to draw a step on a domino.} \label{fig:step-domino} \end{figure} Then, we find $n$ non-intersecting Schr\"oder paths $P_0,\ldots,P_{n-1}$ on $T$ whose $n$-tuple $\bm{P} = (P_0,\ldots,P_{n-1})$ belongs to $\bm{S}_{1,n}$. For example, see Figure \ref{fig:bijection}.
\begin{figure} \caption{ The bijection mapping a tiling $T \in T_5$ of $\mathit{AD}_5$ to a quintuple $\bm{P} = (P_0,\ldots,P_4) \in \bm{S}_{1,5}$ of non-intersecting Schr\"oder paths. (The Aztec diamond is colored in a checkerboard fashion.) } \label{fig:bijection} \end{figure} The bijection connects the statistics $v(T)$ and $r(T)$ for tilings and the statistics $\mathrm{level}(P)$ and $\mathrm{area}(P)$ for Schr\"oder paths as follows. Recall from Section \ref{sec:introduction} that $v(T)$ denotes half the number of vertical dominoes in a tiling $T$. \begin{lem} \label{lem:1-1} Suppose that a tiling $T \in T_n$ and an $n$-tuple $\bm{P} = (P_0,\ldots,P_{n-1}) \in \bm{S}_{1,n}$ of non-intersecting Schr\"oder paths are in the one-to-one correspondence by the bijection. Then, \begin{align} v(T) &= \frac{n(n+1)}{2} - \mathrm{level}(\bm{P}), \label{eq:1-1_vert-level} \\ r(T) &= \mathrm{area}(\bm{P}) - \frac{2n(n+1)(n-1)}{3}, \label{eq:1-1_rank-area} \end{align} where $\mathrm{level}(\bm{P}) = \sum_{k=0}^{n-1} \mathrm{level}(P_k)$ and $\mathrm{area}(\bm{P}) = \sum_{k=0}^{n-1} \mathrm{area}(P_k)$. \end{lem} \begin{proof} The bijection implies that $v(T)$ equals half the number of up and down steps in $\bm{P}$. The sum of half the number of up and down steps and the number of level steps in $\bm{P}$ is a constant independent of $\bm{P}$, equal to $n(n+1)/2$. Thus, we have \eqref{eq:1-1_vert-level}. As shown in Figure \ref{fig:emoves-pathDefms}, each elementary move of a tiling $T$ raising the rank by one gives rise to a deformation of some path in $\bm{P}$ increasing the area by one. \begin{figure} \caption{ A rotation of a two-by-two block in an elementary move raises the rank of the tiling $T$ by one (left-to-right, respectively) if and only if the corresponding deformation of a tuple $\bm{P}$ of non-intersecting Schr\"oder paths increases $\mathrm{area}(\bm{P})$ by one.
} \label{fig:emoves-pathDefms} \end{figure} Thus, $r(T)$ and $\mathrm{area}(\bm{P})$ differ by a constant independent of $T$ and $\bm{P}$. Since $r(T^0) = 0$, the constant equals $\mathrm{area}(\bm{P}^0) = 2n(n+1)(n-1)/3$, where $T^0$ denotes the ``all-horizontal'' tiling of $\mathit{AD}_n$ and $\bm{P}^0 \in \bm{S}_{1,n}$ the $n$-tuple of non-intersecting Schr\"oder paths consisting only of level steps that corresponds to $T^0$. Thus, we have \eqref{eq:1-1_rank-area}. \end{proof} Now, we give a proof of the Aztec diamond theorem. \begin{proof}[Proof of Theorem \ref{thm:ADT}] As a consequence of Lemma \ref{lem:1-1}, we can substitute \eqref{eq:1-1_vert-level} and \eqref{eq:1-1_rank-area} into \eqref{eq:ADT} to obtain \begin{gather} \label{eq:lnciubewg9} \mathrm{AD}_n(t,q) = {} t^{\frac{n(n+1)}{2}} q^{-\frac{2n(n-1)(n+1)}{3}} \sum_{\bm{P} \in S_{1,n}} t^{-\mathrm{level}(\bm{P})} q^{\mathrm{area}(\bm{P})}. \end{gather} From Corollary \ref{cor:NIPathsNarayana}, the sum on the right-hand side of \eqref{eq:lnciubewg9} is equated with \begin{gather} \label{eq:o0qkpvsw} \sum_{\bm{P} \in S_{1,n}} t^{-\mathrm{level}(\bm{P})} q^{\mathrm{area}(\bm{P})} = {} q^{\frac{2n(n-1)(n+1)}{3}} \prod_{k=1}^{n} (t^{-1} + q^{2k-1})^{n-k+1}. \end{gather} Substituting \eqref{eq:o0qkpvsw} into the right-hand side of \eqref{eq:lnciubewg9}, we have \begin{gather} \mathrm{AD}_n(t,q) = {} t^{\frac{n(n+1)}{2}} \prod_{k=1}^{n} (t^{-1} + q^{2k-1})^{n-k+1} = {} \prod_{k=1}^{n} (1 + t q^{2k-1})^{n-k+1}. \end{gather} This completes the proof of Theorem \ref{thm:ADT}. \end{proof} \section{Concluding remarks} \label{sec:conclusions} In this paper, we evaluated a determinant whose entries are given by the $q$-Narayana polynomials (Theorem \ref{thm:DetNarayanaPols}). In order to find the value of the determinant, we utilized Laurent biorthogonal polynomials, which admit a combinatorial interpretation in terms of Schr\"oder paths (Theorem \ref{thm:momPaths} and Theorem \ref{thm:NarayanaPolsInMoms}).
As an application, we exhibited a new proof of the Aztec diamond theorem (Theorem \ref{thm:ADT}) by Elkies, Kuperberg, Larsen and Propp \cite{Elkies-Kuperberg-Larsen-Propp(1992.01),Elkies-Kuperberg-Larsen-Propp(1992.02)} with the help of the one-to-one correspondence developed by Eu and Fu \cite{Eu-Fu(2005)} between tilings of the Aztec diamonds and tuples of non-intersecting Schr\"oder paths. We remark that, in Theorem \ref{thm:DetNarayanaPols}, we can evaluate the determinant $\mathcal{N}^{(s)}_{n}$ of the $q$-Narayana polynomials also for $s < -n$ and $s > n+1$ by using the formula \eqref{eq:NarayanaSylvester} from Sylvester's identity. For example, if $s = n+2$, \begin{gather} \label{eq:whf8escnp} \mathcal{N}^{(n+2)}_{n} = {} (-1)^{\frac{n(n-1)}{2}} q^{\frac{n(n-1)(2n+3)}{2}} \prod_{k=1}^{n} (t + q^{2k-1})^{n-k+1} \sum_{\ell=0}^{n} t^{n-\ell} q^{\ell^2} \binom{n+1}{\ell}_{q^2}, \end{gather} where $\binom{m}{n}_{q}$ denotes the $q$-binomial coefficient \begin{gather} \binom{m}{n}_{q} = \prod_{k=1}^{n} \frac{1-q^{m-k+1}}{1-q^{k}}. \end{gather} From Theorem \ref{thm:detInNIPaths}, we can read \eqref{eq:whf8escnp} in terms of non-intersecting Schr\"oder paths, \begin{multline} \sum_{\bm{P} \in \bm{S}_{2,n}} t^{\mathrm{level}(\bm{P})} q^{\mathrm{area}(\bm{P})} \\ {} = {} q^{\frac{n(n-1)(2n+5)}{3}} \prod_{k=1}^{n} (t + q^{2k-1})^{n-k+1} \sum_{\ell=0}^{n} t^{n-\ell} q^{\ell^2} \binom{n+1}{\ell}_{q^2}. \end{multline} We can readily observe that the bijection in Section \ref{sec:ADT} gives a one-to-one correspondence between $n$-tuples of non-intersecting Schr\"oder paths in $\bm{S}_{2,n}$ and tilings of the region $\mathit{AD}_{2,n}$, the Aztec diamond $\mathit{AD}_{n+1}$ from which two unit squares at the south corner are removed. (See Figure \ref{fig:pathsTiling25}.) \begin{figure} \caption{ The one-to-one correspondence between a tiling of $\mathit{AD}_{m,n}$ and an $n$-tuple $\bm{P} = (P_0,\ldots,P_{n-1}) \in \bm{S}_{m,n}$ of non-intersecting Schr\"oder paths.
(The left figure shows an instance for $m=2$ and $n=4$ while the right for $m=3$ and $n=2$.) } \label{fig:pathsTiling25} \end{figure} Therefore, as a variant of \eqref{eq:ADT}, we have \begin{gather} \sum_{T} t^{v(T)} q^{r(T)} = {} \prod_{k=0}^{n-1} (1 + t q^{2k+1})^{n-k} \sum_{\ell=0}^{n} t^{\ell} q^{\ell^2} \binom{n+1}{\ell}_{q^2}, \end{gather} where the sum on the left-hand side ranges over all tilings $T$ of $\mathit{AD}_{2,n}$. (The rank $r(T)$ is defined in the same way as for $\mathit{AD}_{n}$: the minimal number of elementary moves required to reach $T$ from the ``all-horizontal'' tiling of $\mathit{AD}_{2,n}$.) Similarly, calculating the determinant $\mathcal{N}^{(m+n)}_{n}(t,q)$, we can obtain in principle variant formulae of \eqref{eq:ADT} for tilings of the Aztec diamond $\mathit{AD}_{m+n}$ from which $m(m-1)$ unit squares at the south corner are removed. However, the value of $\mathcal{N}^{(m+n)}_{n}(t,q)$ seems much more complicated for large $m$, and exact formulae have not yet been found for general $m$ and $n$. \section*{Acknowledgment} The author would like to thank Professor Yoshimasa Nakamura and Professor Satoshi Tsujimoto for valuable discussions and comments. This work was supported by JSPS KAKENHI 24740059. \end{document}
\begin{document} \title{On Hunting for Taxicab Numbers} \author{P. Emelyanov \\ Institute of Informatics Systems \\ 6 avenue Lavrentiev, 630090, Novosibirsk, Russia \\ e-mail: [email protected]} \maketitle \begin{abstract} In this article, we make use of some known method to investigate some properties of the numbers represented as sums of two equal odd powers, i.e., the equation $x^n+y^n=N$\/ for $n\ge3$. It originated in developing algorithms to search for new taxicab numbers (i.e., naturals that can be represented as a sum of positive cubes in many different ways) and to verify their minimality. We discuss properties of diophantine equations that can be used for our investigations. This technique is applied to develop an algorithm allowing us to compute new taxicab numbers (i.e., numbers represented as sums of two positive cubes in $k$ different ways), for $k=7\ldots14$\/. \end{abstract} \section*{Introduction} This work originated in searching for new so-called {\em taxicab numbers}, i.e., naturals $T_k$\/ that can be represented (decomposed) as a sum of positive cubes in $k$ different ways, and in verifying their minimality. We made use of some known method to investigate properties of the cubic equation that could help us to find new taxicab numbers. Fermat already proved that numbers expressible as a sum of two cubes in $n$\/ different ways exist for any $n$. But finding taxicab numbers and proving their minimality are still hard computational problems. Whereas the first nontrivial taxicab number $T_2=1729$\/ became widely known in 1917 thanks to Ramanujan and Hardy, subsequent ones were found only with the help of computers: $T_3=87539319$ (J. Leech, 1957), $T_4=6963472309248$ (E. Rosenstiel, J.A. Dardis, and C.R. Rosenstiel, 1991), $\mbox{\bf\em W}_5=T_5=48988659276962496$ (D. Wilson, 1997, \cite{Wilson-jis-1999}). It is known that these numbers are minimal. For $\mbox{\bf\em R}_6=T_6=24153319581254312065344$ (R.L.
Rathbun, 2002) as well as for subsequently discovered taxicab numbers, minimality is unknown. In January--September 2006 the author computed $T_7=139^3\mbox{\bf\em R}_6$, $T_8=727^3T_7$, $T_9=4327^3T_8$, $T_{10}=38623^3T_9$, and $T_{11}=45294^3T_{10}$. At the end of 2006 the author learned about the results of C. Boyer \cite{Boyer-2006} who established smaller $T_7,\ldots,T_{11}$\/ and the first $T_{12}$\/ in December 2006. At the beginning of 2007 the author computed $T_{13}$\/ and $T_{14}$. The article is organized as follows. We start by putting the equation in a new form. Next, we deduce simple properties of the equation of interest based on this presentation. Finally, we present a new algorithm to compute taxicab numbers, which we used to find new ones. \section{Common Properties} We are interested in the problem of representations (also called decompositions) of numbers as sums of two positive odd $n$-powers; i.e., solvability of the equation \begin{equation}\label{OriginalEquation} x^n+y^n=N \end{equation} in positive integers. A solution of this equation is also called a representation or a decomposition of the number $N$. The equation of interest is too ``smooth'' in its original form. We want to make it ``uneven''. We are going to consider this equation in the following $m\pm{}h$-form ($m\neq h>0$) \begin{equation}\label{TheEquation} (m-h)^n+(m+h)^n=N \end{equation} which is not an infrequent guest in number-theoretical proofs. Although only even numbers can be directly represented in this way, there is a simple transformation that allows us to treat this equation for odd $N$\/ as well. In fact, any pair $(x,y)$\/ consisting of even and odd integers can be represented as $(t-s-1,t+s)$. If $N$\/ is odd, we write \[ (t-s-1)^n+(t+s)^n=N.
\] Multiplying both sides by $2^n$\/ we can put the previous equation into the form \begin{equation}\label{TheEqOdd} ((2t-1)-(2s+1))^n+((2t-1)+(2s+1))^n=2^nN \end{equation} and then some extra steps are needed to obtain representations of $N$\/ itself. For the exponent 3, the least odd number $N$\/ for which $2^3N$\/ admits a two-cubes representation beyond the doubled one is 513: \[ 2^3 513=2^3\left(1^3+8^3\right)=(12-3)^3+(12+3)^3=4104. \] Notice that 4104 is the least even number represented as a sum of two cubes in two different ways. In what follows, assume $N$\/ to be even unless we explicitly state the contrary. We are mainly interested in prime exponents, although sometimes it is sufficient that they are merely odd. Such representations for odd powers are closely related to divisors of the numbers of interest. We shall refer to $m$\/ as a {\em median}\/ of the corresponding power representation and to $N_d$\/ as the integer quotient $N/d$\/ if it exists. We shall make use of the following property (a simple corollary of the Quadratic Reciprocity Law) of odd prime divisors of binary forms: \begin{Property}\label{BinaryForm3} \[ p ~|~ ax^2+by^2 ~\wedge~ \gcd(ax,by)=1 ~~\Longrightarrow~~ \legendre{ab}{p}=(-1)^\frac{p-1}{2}. \] In particular, for the binary form $u^2+3v^2$\/ the forbidden divisors are \[ \legendre{3}{p}\not=(-1)^\frac{p-1}{2}; \mbox{~~i.e.,~~} p\equiv5,11\mod{12}. \] \end{Property} Given $N$\/ and its divisors, by solving an $(n-1)$-order polynomial equation \begin{equation}\label{ExpansionEQ} (m-h)^n+(m+h)^n= 2m\left( \sum_{k=0}^{\frac{n-1}{2}}\bincoeff{n}{2k}m^{n-2k-1}h^{2k} \right)=N \end{equation} with respect to $h$, we can either ``easily'' find some representation(s) of this number or prove that it is impossible. Notice that in this polynomial $m$\/ and $h$\/ occur only in odd and even powers, respectively. We start the investigation by establishing the following simple properties of {\bf Equation (\ref{TheEquation})}.
\begin{Lemma}\label{DivNModulo} If $m$\/ is a median of some representation of $N$, then \[ m\equiv N_2\mod{n}. \] If $n | N$, then also $n^2 | N$. If $n \mbox{$\hspace*{4pt}|\hspace*{-3.75pt}/~$}{} N$, then $N=2m(nt+1)$. \end{Lemma} \proof First, rewriting {\bf Equation (\ref{TheEquation})} in the form \[ m^n+n\sum_{k=1}^{\frac{n-1}{2}}\frac{1}{n}\bincoeff{n}{2k}m^{n-2k}h^{2k}=N_2 \] we can derive the modular equation $m^n\equiv N_2\mod{n}$. Next: \begin{itemize} \item By applying Fermat's Little Theorem we have the first statement. \item Because $n | N$, also $n | m$, and this yields the second statement. \item Because $n \mbox{$\hspace*{4pt}|\hspace*{-3.75pt}/~$}{} N$, also $n \mbox{$\hspace*{4pt}|\hspace*{-3.75pt}/~$}{} m$. By applying Fermat's Little Theorem to $m^{n-1}\equiv N_{2m}\mod{n}$\/ we have the third statement. \end{itemize} \qed Because $h$\/ ranges over $(0,m)$, it is easy to establish \begin{Lemma}\label{DivBounds} A necessary condition for $N$\/ to have a representation as the sum of $n$-powers is \[ \exists m|N ~~:~~ \sqrt[n]{\frac{N}{2^n}}~<~m~<~\sqrt[n]{\frac{N}{2}}. \] \end{Lemma} \noindent Obviously, the number of such representations does not exceed the number of divisors of $N$\/ satisfying this condition (see also {\bf Lemma \ref{TaxicabLowerBound}}). {\bf Lemmas 1} and {\bf 2} allow us to estimate numbers being the sum of two odd powers higher than 2 in $k$\/ ways. If a number has two different representations for the power $n$, then the medians $m_1, m_2$\/ corresponding to them also satisfy the congruence $m_1\equiv m_2\mod{n}$.
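These two lemmas are easy to exercise numerically. The following Python sketch (ours, not part of the original article; the function names are ad hoc) recovers both medians of $N=4104$, the least even number that is a sum of two positive cubes in two ways, by testing exactly the divisors admitted by {\bf Lemma \ref{DivBounds}}, and then checks the congruence of {\bf Lemma \ref{DivNModulo}}.

```python
# Sanity check of Lemmas 1 and 2 for n = 3 on N = 4104 (our illustration,
# not from the article).  Exact integer arithmetic throughout, except the
# cube-root bounds, where floats are safe at this size.

def divisors(n):
    """All positive divisors of n by trial division (fine for small n)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def medians(N, n=3):
    """Medians m of representations (m-h)^n + (m+h)^n = N with 0 < h < m."""
    lo, hi = (N / 2**n) ** (1.0 / n), (N / 2) ** (1.0 / n)
    found = []
    for m in divisors(N):
        if not lo < m < hi:       # Lemma 2: a median divides N and lies
            continue              # in the interval (cbrt(N/8), cbrt(N/2))
        if any((m - h)**n + (m + h)**n == N for h in range(1, m)):
            found.append(m)
    return found

print(medians(4104))                                         # [9, 12]
print(all(m % 3 == (4104 // 2) % 3 for m in medians(4104)))  # True
```

Here $4104=2^3+16^3=9^3+15^3$, with medians $9$ and $12$ respectively; both are divisors of $4104$ and both are congruent to $N_2=2052$ modulo $3$.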
Because $\sqrt[n]{N/2^n}+n(k-1)\leq\sqrt[n]{N/2}$, we have the following properties of generalized taxicab numbers \begin{Lemma}\label{TaxicabLowerBound} If a number $T(n,k)$, $k>1$, represented as the sum of two $n$-powers in $k$\/ ways is even, then it has at least $k$\/ divisors in the range $(\sqrt[n]{N/2^n},\sqrt[n]{N/2})$\/ and the following lower bound holds \[ T(n,k)\geq2\left(\frac{2n}{2-\sqrt[n]{2}}\right)^n(k-1)^n \] \end{Lemma} \noindent This bound is far from optimal due to a quite conservative assumption about the gaps between medians. This is a subject of further investigation. Recall that the only widely known theoretical bound for $T(3,k)=T_k$\/ is Silverman's result \cite{Silverman-jlms-1983} that describes its logarithmic behavior: \[ \log T_k=o(k^{r+2/r}), \] where $r$\/ is the highest rank of {\bf Equation (\ref{OriginalEquation})}. The highest rank known now is 5. When there are ``too many'' taxicab medians they cannot be relatively prime, because all of them are divisors of the same number. Hence they share common divisors. In particular, for taxicab medians $m_1<\ldots<m_k$\/ the following inequality holds: \[ \mbox{lcm}(m_1,\ldots,m_k)\leq (2m_1)^n. \] The cubic equation in the form $m^2+3h^2=N_{2m}$\/ provides a way to derive parameterizations of the two cubes representation problem\footnote{Here we treat independently the median $m$\/ and its co-factor $N_{2m}$; therefore this does not cover general cases.}. We mention only those of them that relate to the taxicab numbers problem. This case arises when $N_{2m}$\/ is a cube; it is connected to the well-known problem of the decomposition of numbers into two rational cubes (positive or not) which was investigated by Fermat, Euler, Sylvester, and other researchers. G\'erardin proved \cite[Chapter XX]{Dickson-1999} that all solutions of $u^2+3v^2=w^3$\/ with $\gcd(u,v)=1$\/ are generated by \label{Gerardin} \[ (t^3-9ts^2)^2+3(3t^2s-3s^3)^2=(t^2+3s^2)^3.
\] We have \[ (t^3-3t^2s-9ts^2+3s^3)^3+(t^3+3t^2s-9ts^2-3s^3)^3=2(t^3-9ts^2)(t^2+3s^2)^3, \] and next \[ \left(\frac{t^3-3t^2s-9ts^2+3s^3}{t^2+3s^2}\right)^3+ \left(\frac{t^3+3t^2s-9ts^2-3s^3}{t^2+3s^2}\right)^3=2t(t-3s)(t+3s). \] So, if the diophantine equation $2t^3-18ts^2\pm{}Nr^3=0$\/ is solvable \footnote{Euler's solution of the two rational cubes problem is slightly different.}, then $N$\/ is decomposable. This can be simplified into one-parametric examples as follows \[ \left(\frac{w^3+3w^2-6w+1}{3(w^2-w+1)}\right)^3- \left(\frac{w^3-6w^2+3w+1}{3(w^2-w+1)}\right)^3=w(w-1), \] and \[ \pm{}\left(\frac{8w^9\pm{}24w^6+6w^3\mp{}1}{3w(4w^6\pm{}2w^3+1)}\right)^3\mp{} \left(\frac{8w^9\mp{}12w^6-12w^3\mp{}1}{3w(4w^6\pm{}2w^3+1)}\right)^3=4w^3\pm{}2. \] Also, the substitution $t-3s=u^2v, t+3s=uv^2$\/ gives \[ \left(\frac{u^3+6u^2v+3uv^2-v^3}{3(u^2+uv+v^2)}\right)^3+ \left(\frac{v^3+6v^2u+3vu^2-u^3}{3(u^2+uv+v^2)}\right)^3=uv(u+v) \] which provides the following parametrization of the sum of two integer powers \[ \left(\frac{p^9+6p^6q^3+3p^3q^6-q^9}{3pq(p^6+p^3q^3+q^6)}\right)^3+ \left(\frac{q^9+6q^6p^3+3q^3p^6-p^9}{3pq(p^6+p^3q^3+q^6)}\right)^3=p^3+q^3. \] Catalan's parametrization \[ \left({\mbox{\small$\frac12$}}\,(t+s)(t-2s)(s-2t)\right)^2+ 3\left({\mbox{\small$\frac32$}}\,ts(t-s)\right)^2= \left(t^2-ts+s^2\right)^3 \] leads us to another rational cubes identity \[ \left(\frac{t^3-3t^2s+s^3}{t^2-ts+s^2}\right)^3+ \left(\frac{t^3-3ts^2+s^3}{t^2-ts+s^2}\right)^3=(t+s)(2s-t)(s-2t). \] The substitution $2s-t=u^2v, s-2t=uv^2$\/ gives the following identity \[ \left(\frac{u^3+3u^2v-6uv^2+v^3}{3(u^2-uv+v^2)}\right)^3- \left(\frac{u^3-6u^2v+3uv^2+v^3}{3(u^2-uv+v^2)}\right)^3=uv(u-v) \] which provides the following parametrization of the sum of two integer powers \[ \left(\frac{p^9+3p^6q^3-6p^3q^6+q^9}{3pq(p^6-p^3q^3+q^6)}\right)^3- \left(\frac{p^9-6p^6q^3+3p^3q^6+q^9}{3pq(p^6-p^3q^3+q^6)}\right)^3=p^3-q^3. 
\] It is easy to note that these parameterizations of the sum and the difference of two integer cubes also give parameterizations of the diophantine equation $X^3+Y^3=S^3+T^3$. Euler's parametric solution to $X^3+Y^3=S^3+T^3$\/ is \[ \begin{array}{lll} X = w (1-(u -3v)(u^2+ 3v^2)) &~~~& Y = w ((u + 3v)(u^2+ 3v^2)-1) \\ S = w ((u + 3v)-(u^2+ 3v^2)^2) & & T = w ((u^2+ 3v^2)^2+ (3v-u)) \end{array} \] Finally, we mention some properties of the equation of interest that can be used to investigate taxicab numbers. Sometimes we can improve the congruence of {\bf Lemma \ref{DivNModulo}}: \begin{Lemma} If $(m-h)^p+(m+h)^p=N$,~ $\gcd(m,h)=1$\/,~ $m\not\equiv h\mod{2}$,~ then \[ p=3 ~~\Rightarrow~~ m\equiv N_2\mod{12} \] \[ p=5 ~~\Rightarrow~~ m\equiv N_2\mod{20}. \] \end{Lemma} \proof We write down $2m^3+6mh^2=N$\/ as $m^2-h^2+4h^2=N_{2m}$\/ and $2m^5+20m^3h^2+10mh^4=N$\/ as $5m(m^2+h^2)^2-4m^5=N_2$. Considering these equations modulo 4, we conclude $m\equiv N_2\mod{4}$. Combining this congruence with the congruence from {\bf Lemma \ref{DivNModulo}} we obtain the lemma statements. \qed The forbidden divisors condition for two-squares representations has been well known since Fermat's work. For cubic and quintic equations there are analogues which follow from {\bf Property \ref{BinaryForm3}}: \begin{Lemma}\label{ForbiddenDivisors} Necessary conditions for $N$\/ to have a cubic/quintic representation with $\gcd(m,h)=1$\/ are the following: \begin{enumerate} \item It has no prime divisors of forms $12t+5$\/ and $12t+11$\/ (the cubic case) or of forms $10t\pm{}1$\/ (the quintic case), or \item If such divisors exist, then all of them are factors of the median. \end{enumerate} \end{Lemma} \noindent{\bf Remark.}~ In view of the cubic case of {\bf Lemma \ref{ForbiddenDivisors}}, we can mention the results of Euler et al.\ for the divisors of numbers of the form $u^2+3v^2$: all prime divisors have the same form $\alpha^2+3\beta^2$.
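G\'erardin's parametrization and the derived two-cubes identity above are exact polynomial identities (note that with $u=t^3-9ts^2$ and $v=3t^2s-3s^3$ the two cube bases are simply $u-v$ and $u+v$, so $a^3+b^3=2u(u^2+3v^2)=2uw^3$), and they can be spot-checked with exact integer arithmetic. The following Python sketch (ours, for illustration only) verifies both over a small grid:

```python
# Spot-check of Gérardin's parametrization u^2 + 3 v^2 = w^3 and the derived
# two-cubes identity a^3 + b^3 = 2 (t^3 - 9 t s^2) (t^2 + 3 s^2)^3 on a grid
# of integers (exact arithmetic, so the checks are conclusive for each point).
for t in range(-5, 6):
    for s in range(-5, 6):
        u = t**3 - 9*t*s**2
        v = 3*t**2*s - 3*s**3
        w = t**2 + 3*s**2
        assert u**2 + 3*v**2 == w**3          # Gérardin's identity
        a = t**3 - 3*t**2*s - 9*t*s**2 + 3*s**3   # a = u - v
        b = t**3 + 3*t**2*s - 9*t*s**2 - 3*s**3   # b = u + v
        assert a**3 + b**3 == 2*u*w**3        # the two-cubes identity
print("both identities hold on the sample grid")
```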
\vskip\baselineskip \section{New Taxicab/Cabtaxi Numbers} Before we discuss cubic taxicab numbers, we briefly consider the equation $x^5+y^5=u^5+v^5$. No such number is known within the range up to $1.05\cdot10^{26}$. We have not yet found any, but we found a solution in Gaussian integers: \[ \begin{array}{c} \left(t^2+s^2-(t^2-2ts-s^2)\imath\right)^5+\left(t^2+s^2+(t^2-2ts-s^2)\imath\right)^5=\\ \left(t^2+s^2-(t^2+2ts-s^2)\imath\right)^5+\left(t^2+s^2+(t^2+2ts-s^2)\imath\right)^5=\\ -8(t^2+s^2)(t^4-2t^3s-6t^2s^2+2ts^3+s^4)(t^4+2t^3s-6t^2s^2-2ts^3+s^4) \end{array} \] The least such positive number is $3800=(5-\imath)^5+(5+\imath)^5=(5-7\imath)^5+(5+7\imath)^5$. The observation that $T_6=79^3T_5$\/ stirs up our interest in searching for new taxicab numbers $T_k$\/ in the same way. The usual definition of taxicab numbers is equipped with a condition that they are minimal, but for brevity we designate all multi-way representable numbers as taxicab numbers. Even the open question \footnote{ C. Calude et al \cite{CaludeCaludeDinneen-jucs-2003} (with an update \cite{CaludeCaludeDinneen-CDMTCS-2005}) stated that the minimality of $T_6$\/ can be confirmed with probability $>0.99$, but G. Martin criticized their considerations in Mathematical Reviews MR2149410 (2006a:11175). } ~about the minimality of $T_6$\/ does not matter here. To compute a $(k+1)$-way representable number we can start from any $k$-way representable number. Our approach can produce non-minimal numbers, but such numbers can be used to check their minimality or to search for smaller ones. We believe that this median-based approach, which reduces the length of the tested numbers by a factor of three, allows us to check the minimality of $T_6$\/ and $T_7$. Notice that Wilson \cite{Wilson-jis-1999} used similar ideas (cubic multipliers) to find the 5-way representable number $\mbox{\bf\em W}_5=48988659276962496$\/ in 1997, but his approach is more expensive even for small numbers.
During this search a six-way example was also detected. Inspired by Wilson's approach, R. L. Rathbun \cite{Rathbun-2002} presented in 2002 the smaller candidate \[ \mbox{\bf\em R}_6=79^3\,\mbox{\bf\em W}_5=24153319581254312065344. \] Rathbun also mentioned the multipliers $139$\/ and $727$\/ giving other examples of six-way representable numbers. Our approach demonstrates that they appear in multipliers of $T_9$\/ and $T_{11}$, respectively. In the first version of this article (December 2006) we described a modification of our algorithm that produces some taxicab numbers. In January--September 2006, with the help of this algorithm, we computed $T_7=139^3\mbox{\bf\em R}_6$, $T_8=727^3T_7$, $T_9=4327^3T_8$, $T_{10}=38623^3T_9$, and $T_{11}=45294^3T_{10}$. At that moment we learned about the results of C. Boyer \cite{Boyer-2006} who established smaller $T_7,\ldots,T_{11}$\/ and the first $T_{12}$\/ in December 2006. Unfortunately he has not yet published details of his algorithm. Our renewed algorithm, given later in this article, produces the same numbers. Also, for the first time we found $T_{13}$\/ and $T_{14}$. The main idea of our approach is not too surprising. If we know some $k$-way representable number $T_k$, then we can try to find $T_{k+1}$\/ in the form $\mu^3\,T_k$. If $m_1,\ldots,m_k$\/ are medians of the representations of $T_k$, then the medians of the representations of $T_{k+1}$\/ are $\mu{}m_1,\ldots,\mu{}m_k,d^\prime{}d$\/ where $d^\prime\in\mbox{\rm divisors}(\mu^3)$\/ (the first version of the algorithm uses only $d^\prime=1$) and $d\in\mbox{\rm divisors}(T_k)$. A simple observation is that the multiplier of interest does not exceed $2T_k^{2/3}$. The iterative procedure formalizing this idea and using the properties of the equation is the following: \begin{description} \item[$\bullet$~~~] Create an ordered array $D$\/ of all divisors of $T_k$\/ excluding divisors known to be too small, i.e., less than $\sqrt[3]{T_k/4}$.
\item[$\bullet$~~~] For multipliers $M$\/ from 2 to $\lfloor 2T_k^{2/3}\rfloor$\/ do \begin{description} \item[$\bullet$~~~] Let $N=M^3T_k$; \item[$\bullet$~~~] For $\mu\in\mbox{divisors}(M^3)$\/ do \begin{description} \item[$\bullet$~~] Using binary (dichotomic) search, find the range of $D$\/ in which the divisors satisfying {\bf Lemma \ref{DivBounds}} for $\frac1\mu{}N$\/ are located; \item[$\bullet$~~] Within this range, for the divisors $d$\/ such that $\mu{}d\equiv \frac12N\!\mod{3}$\/ do: if the value $(\frac{1}{2\mu{}d}N-(\mu{}d)^2)/3$\/ is a perfect square, then $\mu{}d$\/ is the $(k+1)$-st median and therefore $N$\/ is $T_{k+1}$. Otherwise continue. \end{description} \end{description} \end{description} The set of all divisors of $T_k$\/ may be space-consuming. To avoid the explicit computation of this set we used the following trick. A taxicab number $T_k$\/ is a product $M\!\cdot T_s$\/ where $T_s$\/ is a ``seed'', i.e., a small taxicab number with an easily computed set of divisors, and $M=(\mu_{s+1}\cdots\mu_k)^3$. Evidently $M=1$\/ for $T_{k+1}=T_{s+1}$. Thus, computing $T_{k+1}$, we split the loop iterating through all divisors of $T_k$\/ into two nested loops: the outer loop iterating through all divisors of $M$\/ and the inner one iterating through those divisors of $T_s$\/ such that the product of the first iterator, the second iterator, and some divisor of the current cubic multiplier satisfies {\bf Lemma \ref{DivBounds}}. The choice of the seed $T_s$\/ affects the space used by the algorithm. We used $\mbox{\bf\em W}_5$\/ to compute the new $T_k$\/ for $k=7\ldots12$. But for the subsequent numbers, the cardinality of the divisor set of $M$\/ increasingly exceeds that of $T_s$. To balance the cardinalities of these sets we took larger seeds. {\begin{center} \begin{tabular}{||r|r|r|r||} \hline\hline Ways & Seed & Multiplier & Time~~~~~ \\ \hline\hline 7 & 5 & 101 & 58 s. \\ 8 & 5 & 127 & 5 m. 1 s. \\ 9 & 5 & 139 & 18 m. 47 s. \\ 10 & 5 & 377 & 4 h. 8 m. \\ 11 & 5 & 727 & 123 h. 20 m.
\\ 12 & 5 & 2971 & 152 d. \\ 13 & 6 & 4327 & 21 h. 8 m.$^{a)}$ \\ 14 & 6 & 7549 & 23 m. 39 s.$^{b)}$ \\ \hline\hline \end{tabular} {\small \vskip\baselineskip\flushleft{$^{a)}$ To compute this number we examined only prime multipliers greater than 2971.} \vspace*{-1em}\flushleft{$^{b)}$ To compute this number we examined only this multiplier.} } \vskip\baselineskip {\bf Table 1.} Computational results. \end{center}} {\bf Table 1} lists the multipliers producing the new taxicab numbers. In {\bf APPENDIX A} we give these numbers themselves and their decompositions. Also, we found that all of our taxicab numbers $T(3,k)$\/ are {\em cabtaxi} numbers $C(3,k+2)$ (i.e., numbers for which the cubes in the decomposition are not required to be positive). Surprisingly, the multiplier 5 gives cabtaxi numbers of higher orders: $5^3T(3,k)=C(3,k+4)$. We checked this property for $k=6\ldots12$. \section*{Final Remark} In September 2007 we learned about new results of C. Boyer, who established new taxicab numbers for $n=13\ldots19$\/ and cabtaxi numbers for $n=10\ldots30$. Boyer's article is to be published in a mathematical journal. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \section*{APPENDIX A.
Taxicab numbers decompositions} \noindent$T_7=101^3\,\mbox{\bf\em R}_6=24885189317885898975235988544$: \[ \begin{array}{rcrc} 58798362^3 & + & 2919526806^3 & = \\ 309481473^3 & + & 2918375103^3 & = \\ 459531128^3 & + & 2915734948^3 & = \\ 860447381^3 & + & 2894406187^3 & = \\ 1638024868^3 & + & 2736414008^3 & = \\ 1766742096^3 & + & 2685635652^3 & = \\ 1847282122^3 & + & 2648660966^3 & \\ \end{array} \] \noindent$T_8=127^3\,T_7$=50974398750539071400590819921724352: \[ \begin{array}{rcrc} 7467391974^3 & + & 370779904362^3 & = \\ 39304147071^3 & + & 370633638081^3 & = \\ 58360453256^3 & + & 370298338396^3 & = \\ 109276817387^3 & + & 367589585749^3 & = \\ 208029158236^3 & + & 347524579016^3 & = \\ 224376246192^3 & + & 341075727804^3 & = \\ 234604829494^3 & + & 336379942682^3 & = \\ 288873662876^3 & + & 299512063576^3 & \end{array} \] \noindent$T_9=139^3\,T_8=136897813798023990395783317207361432493888$: \[ \begin{array}{rcrc} 1037967484386^3 & + & 51538406706318^3 & = \\ 4076877805588^3 & + & 51530042142656^3 & = \\ 5463276442869^3 & + & 51518075693259^3 & = \\ 8112103002584^3 & + & 51471469037044^3 & = \\ 15189477616793^3 & + & 51094952419111^3 & = \\ 28916052994804^3 & + & 48305916483224^3 & = \\ 31188298220688^3 & + & 47409526164756^3 & = \\ 32610071299666^3 & + & 46756812032798^3 & = \\ 40153439139764^3 & + & 41632176837064^3 & \\ \end{array} \] \noindent$T_{10}=377^3\,T_9=7335345315241855602572782233444632535674275447104$: \[ \begin{array}{rcrc} 391313741613522^3 & + & 19429979328281886^3 & = \\ 904069333568884^3 & + & 19429379778270560^3 & = \\ 1536982932706676^3 & + & 19426825887781312^3 & = \\ 2059655218961613^3 & + & 19422314536358643^3 & = \\ 3058262831974168^3 & + & 19404743826965588^3 & = \\ 5726433061530961^3 & + & 19262797062004847^3 & = \\ 10901351979041108^3 & + & 18211330514175448^3 & = \\ 11757988429199376^3 & + & 17873391364113012^3 & = \\ 12293996879974082^3 & + & 17627318136364846^3 & = \\ 15137846555691028^3 & + & 15695330667573128^3 & \\ 
\end{array} \] \noindent$T_{11}=727^3\,T_{10}=2818537360434849382734382145310807703728251895897826621632$: \[ \begin{array}{rcrc} 284485090153030494^3 & + & 14125594971660931122^3 & = \\ 657258405504578668^3 & + & 14125159098802697120^3 & = \\ 1117386592077753452^3 & + & 14123302420417013824^3 & = \\ 1497369344185092651^3 & + & 14120022667932733461^3 & = \\ 2223357078845220136^3 & + & 14107248762203982476^3 & = \\ 4163116835733008647^3 & + & 14004053464077523769^3 & = \\ 6716379921779399326^3 & + & 13600192974314732786^3 & = \\ 7925282888762885516^3 & + & 13239637283805550696^3& = \\ 8548057588027946352^3 & + & 12993955521710159724^3& = \\ 8937735731741157614^3 & + & 12815060285137243042^3& = \\ 11005214445987377356^3 & + & 11410505395325664056^3& \\ \end{array} \] \noindent$T_{12}=2971^3\,T_{11}=\\ 73914858746493893996583617733225161086864012865017882136931801625152$: \[ \begin{array}{rcrc} 845205202844653597674^3 & + & 41967142660804626363462^3 & = \\ 1933097542618122241026^3 & + & 41965889731136229476526^3 & = \\ 1952714722754103222628^3 & + & 41965847682542813143520^3 & = \\ 3319755565063005505892^3 & + & 41960331491058948071104^3 & = \\ 4448684321573910266121^3 & + & 41950587346428151112631^3 & = \\ 6605593881249149024056^3 & + & 41912636072508031936196^3 & = \\ 12368620118962768690237^3 & + & 41606042841774323117699^3 & = \\ 19954364747606595397546^3 & + & 40406173326689071107206^3 & = \\ 23546015462514532868036^3 & + & 39334962370186291117816^3 & = \\ 25396279094031028611792^3 & + & 38605041855000884540004^3 & = \\ 26554012859002979271194^3 & + & 38073544107142749077782^3 & = \\ 32696492119028498124676^3 & + & 33900611529512547910376^3 & \\ \end{array} \] \noindent$T_{13}=4327^3\,T_{12}=\\ 5988146776742829080553965820313279739849705084894534523771076163371248442670016$: \[ \begin{array}{rcrc} 3657202912708816117135398^3 & + & 181591826293301618274700074^3 & = \\ 8364513066908614936919502^3 & + & 181586404866626464944928002^3 & = \\ 
8449396605357004644311356^3 & + & 181586222922362752472011040^3 & = \\ 14364582330027624823994684^3 & + & 181562354361812068303667008^3 & = \\ 19249457059450309721505567^3 & + & 181520191447994609864354337^3 & = \\ 28582404724165067827090312^3 & + & 181355976285742254187920092^3 & = \\ 53519019254751900122655499^3 & + & 180029347376357496130283573^3 & = \\ 54818831102057750995052604^3 & + & 179911586979069103444414128^3 & = \\ 86342536262893738285181542^3 & + & 174837511984583610680880362^3 & = \\ 101883608906300383719991772^3 & + & 170202382175796081666789832^3& = \\ 109889699639872260803223984^3 & + & 167044016106588827404597308^3& = \\ 114899213640905891306456438^3 & + & 164744225351606675259562714^3& = \\ 141477721399036311385473052^3 & + & 146687946088200794808196952^3& \\ \end{array} \] \noindent$T_{14}=7549^3\,T_{13}=$ \[ \begin{array}{c} 257608810925730001281963766003343299028977072 ~~~\backslash\\ ~~~~~~~~~~~~~~~~~~~ 5881505682307757452553496715044742867424072384: \end{array} \] \[ \begin{array}{rcrc} 27608224788038852868255119502^3 & + & 1370836696688133916355710858626^3 & = \\ 63143709142093134158805320598^3 & + & 1370795770338163183869261487098^3 & = \\ 63784494973840028059906426444^3 & + & 1370794396840916418411211340960^3 & = \\ 108438232009378539796335869516^3 & + & 1370614213077319303624382243392^3 & = \\ 145314151341790388087645525283^3 & + & 1370295925240911309866010890013^3 & = \\ 215768573262722097026704765288^3 & + & 1369056264981068276864608774508^3 & = \\ 404015076354122094025926361951^3 & + & 1359041543344122738287510692577^3 & = \\ 413827355989433962261652107596^3 & + & 1358152570104992661901882252272^3 & = \\ 617989830682279948575932296880^3 & + & 1327627770274178602420131034444^3 & = \\ 651799806248584830314835460558^3 & + & 1319848377971621677029965852738^3 & = \\ 769119363633661596702217886828^3 & + & 1284857783045084620502596441768^3 & = \\ 829557342581395696803537855216^3 & + & 1261015277588639058077305078092^3 & = \\ 
867374163775198573472439650462^3 & + & 1243654157179278791534438927986^3 & = \\ 1068015318841325114648936069548^3 & + & 1107347305019827800007078790648^3 & \\ \end{array} \] \end{document}
\begin{document} \title{Richardson elements for classical Lie algebras} \author{Karin Baur} \thanks{Supported by Freie Akademische Stiftung and by a DARPA Grant} \address{Karin Baur, Department of Mathematics, University of California, San Diego, USA} \email{[email protected]} \date{January 20, 2005} \begin{abstract} Parabolic subalgebras of semi-simple Lie algebras decompose as $\liea{p}=\liea{m}\oplus\liea{n}$ where $\liea{m}$ is a Levi factor and $\liea{n}$ the corresponding nilradical. By Richardson's theorem \cite{ri}, there exists an open orbit under the action of the adjoint group $P$ on the nilradical. The elements of this dense orbit are known as Richardson elements. In this paper we describe a normal form for Richardson elements in the classical case. This generalizes a construction for $\liea{gl}_N$ of Br\"ustle, Hille, Ringel and R\"ohrle \cite{bhrr} to the other classical Lie algebras, and it extends the author's normal forms of Richardson elements for nice parabolic subalgebras of simple Lie algebras to arbitrary parabolic subalgebras of the classical Lie algebras \cite{b04}. As applications we obtain a description of the support of Richardson elements and we recover the Bala-Carter label of the orbit of Richardson elements. \end{abstract} \maketitle \section*{Introduction} The goal of this paper is to describe Richardson elements for parabolic subalgebras of the classical Lie algebras. Let $\liea{p}$ be a parabolic subalgebra of a semi-simple Lie algebra $\liea{g}$ over ${\mathbb C}$ and $\liea{p}=\liea{m}\oplus\liea{n}$ a Levi decomposition. By a fundamental theorem of Richardson \cite{ri} there always exist elements $x$ in the nilradical $\liea{n}$ such that $[\liea{p},x]=\liea{n}$. In other words, if $P$ is the adjoint group of $\liea{p}$, then the orbit $P\cdot x$ is dense in $\liea{n}$. It is usually called the Richardson orbit. Richardson orbits have been studied for a long time and there are many open questions related to this setting.
Our goal is to give explicit representatives for Richardson elements. In the case of $\liea{gl}_n$ there is a beautiful way to construct Richardson elements that has been described by Br\"ustle, Hille, Ringel and R\"ohrle in~\cite{bhrr}. Furthermore, Richardson elements with support in the first graded part $\liea{g}_1$ (where the grading is induced from the parabolic subalgebra) have been given for all simple Lie algebras in~\cite{b04}. However, these constructions do not work in general for classical Lie algebras. To fill this gap, we have modified the existing approaches to obtain Richardson elements for parabolic subalgebras of the classical Lie algebras. We do this using certain simple line diagrams. They correspond to nilpotent matrices with at most one non-zero entry in each row and in each column. We show that for most parabolic subalgebras, there exists a simple line diagram that defines a Richardson element. But as we will see, there are cases where this is not possible. We expect that the representatives we describe will give more insight and hopefully answer some of the open questions. One of the interesting questions in the theory of Richardson elements is the structure of the support of a Richardson element. Recall that any parabolic subalgebra $\liea{p}$ induces a ${\mathbb Z}$-grading of $\liea{g}$, \[ \liea{g}=\oplus_{i\in{\mathbb Z}}\liea{g}_i\quad\text{with} \quad\liea{p}=\oplus_{i\ge 0}\liea{g}_i= \liea{g}_0\oplus(\bigoplus_{i>0}\liea{g}_i) \] where $\liea{g}_0$ is a Levi factor and $\liea{n}:=\oplus_{i>0}\liea{g}_i$ the corresponding nilradical. For details, we refer to our joint work with Wallach,~\cite{bw}. The support of a Richardson element $X=\sum_{\alpha\text{ root of }\liea{n}}k_{\alpha}X_{\alpha}$ consists of the roots $\alpha$ of the nilradical $\liea{n}$ with $k_{\alpha}\neq 0$ (where $X_{\alpha}$ spans the root subspace $\liea{g}_{\alpha}$).
The support $\operatorname{supp}(X)$ of $X$ lies in the subspace $\liea{g}_1\oplus\dots\oplus\liea{g}_k$ for some $k\ge 1$. For the normal form of Richardson elements we can determine the minimal $k_0$ such that $\operatorname{supp}(X)\subset$ $\liea{g}_1\oplus\dots\oplus\liea{g}_{k_0}$. We also recover the Bala-Carter label of the dense orbit of Richardson elements, also called the {\itshape type} of the orbit. The Bala-Carter label is used in the classification of nilpotent orbits of simple Lie algebras, given in~\cite{bc}. For a description of these labels see chapter 8 of~\cite{cm}. The type of any nilpotent orbit in a classical Lie algebra has been described by Panyushev \cite{pan} in terms of the partitions of the orbit. \noindent Before we describe our results and explain the structure of this article, we need to fix some notation. If $\liea{p}$ is a parabolic subalgebra of a semi-simple Lie algebra $\liea{g}$ we can assume that $\liea{p}$ contains a fixed Borel subalgebra. In this case we say that $\liea{p}$ is standard. If $\liea{m}$ is a Levi factor of $\liea{p}$ we say that $\liea{m}$ is standard if it contains a fixed Cartan subalgebra $\liea{h}$ that is contained in the fixed Borel subalgebra. From now on we will assume that $\liea{g}$ is a classical Lie algebra, unless stated otherwise. As usual, the Cartan subalgebra consists of the diagonal matrices and the fixed Borel subalgebra is the set of upper triangular matrices. Then a standard Levi factor has the shape of a sequence of square matrices (blocks) on the diagonal and zeroes outside. In the case of $\liea{so}_{2n}$, we have to be careful: we will only consider parabolic subalgebras where $\alpha_n$ and $\alpha_{n-1}$ are both roots of the Levi factor or both roots of the nilradical or $\alpha_{n-1}$ a root of the Levi factor and $\alpha_n$ a root of the nilradical. 
In other words, the case where $\alpha_n$ is a root of the Levi factor and $\alpha_{n-1}$ a root of the nilradical will be identified with this last case, since the two parabolic subalgebras are isomorphic. So our standard $\liea{p}$ or $\liea{m}$ are uniquely defined by the sequence $d:=\underline{d}=(d_1,\dots,d_r)$ of the sizes of these blocks (and by specifying the type of the Lie algebra). We start by defining line diagrams for dimension vectors in section~\ref{se:line-diag}. It will turn out that each horizontal line diagram corresponds uniquely to an element of the nilradical of the parabolic subalgebra of $\liea{sl}_n$ with the given dimension vector. In section~\ref{se:rich-theory} we gather the necessary properties of Richardson elements. In section~\ref{se:sl-case} we show that horizontal line diagrams in fact correspond to Richardson elements of the given parabolic subalgebra. The construction of such diagrams for $\liea{gl}_n$ first appears in~\cite{bhrr}. We have already mentioned that for the other classical Lie algebras, the horizontal line diagrams do not give Richardson elements. In general, the matrix obtained is not an element of the Lie algebra in question. Thus we will introduce generalized line diagrams in section~\ref{se:BCD-type} to obtain Richardson elements for parabolic subalgebras of the symplectic and orthogonal Lie algebras. As a by-product we obtain the partition of a Richardson element for the so-called simple parabolic subalgebras. The last section discusses the cases where line diagrams do not produce Richardson elements. For these we will allow ``branched'' diagrams. In the appendix we add examples illustrating branched diagrams. \section{Line diagrams}\label{se:line-diag} Let $d=(d_1,\dots,d_r)$ be a dimension vector, i.e. a sequence of positive integers. Arrange $r$ columns of $d_i$ dots, top-adjusted.
A {\it (filled) line diagram} for $d$, denoted by $L(d)$, is a collection of lines joining vertices of different columns such that each vertex is connected to at most one vertex of a column left of it and to at most one vertex of a column right of it and such that it cannot be extended by any line. We say that it is a {\it (filled) horizontal line diagram} if all edges are horizontal lines. Such a diagram will be denoted by $L_h(d)$. We will always assume that the line diagrams are filled and omit the term `filled'. Line diagrams are not unique. However, for each dimension vector there is a unique horizontal line diagram. \begin{ex} As an example, consider the dimension vector $(3,1,2,3)$ and three line diagrams for it, the last one horizontal. $$ {\small \xymatrix@-5mm{ \bullet\ar@{-}[rrd] & \bullet &\bullet\ar@{-}[r] & \bullet \\ \bullet\ar@{-}[rru] & & \bullet\ar@{-}[r] & \bullet \\ \bullet\ar@{-}[rrr] & & & \bullet}\quad\quad \xymatrix@-5mm{ \bullet\ar@{-}[r] & \bullet\ar@{-}[rd] & \bullet\ar@{-}[r] & \bullet \\ \bullet\ar@{-}[rrrd] & & \bullet & \bullet \\ \bullet\ar@{-}[rrru] & & & \bullet}\quad\quad \xymatrix@-5mm{ \bullet\ar@{-}[r] & \bullet\ar@{-}[r] &\bullet\ar@{-}[r] & \bullet \\ \bullet\ar@{-}[rr] & & \bullet\ar@{-}[r] & \bullet \\ \bullet\ar@{-}[rrr] & & & \bullet} } $$ \end{ex} \section{Richardson elements}\label{se:rich-theory} In this section we describe a method to check whether a given nilpotent element of the nilradical of a classical Lie algebra is a Richardson element. The first statement is given in~\cite{bw}. Since we will use this result constantly, we repeat its proof. \begin{thm}\label{thm:dim-cent-Rich} Let $\liea{p}\subset\liea{g}$ be a parabolic subalgebra of a semi-simple Lie algebra $\liea{g}$, let $\liea{p}=\liea{m}\oplus\liea{n}$ where $\liea{m}$ is a Levi factor and $\liea{n}$ the corresponding nilradical. Then $x\in\liea{n}$ is a Richardson element for $\liea{p}$ if and only if $\dim\liea{g}^x=\dim\liea{m}$. 
\end{thm} \begin{proof} Denote the nilradical of the opposite parabolic by $\overline{\liea{n}}$ (the opposite parabolic is defined as the parabolic subalgebra whose intersection with $\liea{p}$ is equal to $\liea{m}$). If $x\in\liea{n}$ then $\operatorname{ad}(x)\liea{g}=\operatorname{ad}(x)\overline{\liea{n}}+\operatorname{ad}(x)\liea{p}$. Now $\operatorname{ad}(x)\liea{p}\subset\liea{n}$ and $\dim\operatorname{ad}(x)\overline{\liea{n}}\le\dim\overline{\liea{n}}$. Thus \[ \dim\operatorname{ad}(x)\liea{g}\le\,2\dim\liea{n}. \] This implies for $x\in\liea{n}$ that $\dim\liea{m}\le \dim\liea{g}^x$ and equality implies that $\dim\operatorname{ad}(x)\liea{p}=\dim\liea{n}$. Thus equality implies that $x$ is a Richardson element. For the other direction, let $x$ be a Richardson element for $\liea{p}$. We show that the map $\operatorname{ad}(x)$ is injective on $\overline{\liea{n}}$: Let $y\in\overline{\liea{n}}$ with $\operatorname{ad}(x)y=0$. Then \[ 0=B(\operatorname{ad}(x)y,\liea{p})=B(y,\operatorname{ad}(x)\liea{p}) =B(y,\liea{n}). \] Since the Killing form $B$ induces a non-degenerate pairing between $\overline{\liea{n}}$ and $\liea{n}$, this forces $y=0$. So $\operatorname{ad}(x)$ is injective on $\overline{\liea{n}}$, giving $\dim\operatorname{ad}(x)\overline{\liea{n}}=\dim\liea{n}$. Thus \begin{eqnarray*} \dim\overbrace{\operatorname{ad}(x)\liea{p}}^\liea{n} + \dim\overbrace{\operatorname{ad}(x)\overline{\liea{n}}}^{\overline{\liea{n}}} & = & 2\dim\liea{n} \\ & = & \dim\operatorname{ad}(x)\liea{g} \\ & = & \dim\liea{g}-\dim\liea{g}^x \end{eqnarray*} So $\dim\liea{g}^x+\dim\liea{n}=\dim\liea{g}-\dim\liea{n}$ $=\dim\liea{p}=\dim\liea{m}+\dim\liea{n}$, i.e. $\dim\liea{m}=\dim\liea{g}^x$. \end{proof} \begin{cor} Let $\liea{p}=\liea{m}\oplus\liea{n}$ be a parabolic subalgebra of a semi-simple Lie algebra. Let $X\in\liea{n}$ be a Richardson element. Then $\dim\liea{g}^X\le\dim\liea{g}^Y$ for any $Y\in\liea{n}$. \end{cor} Theorem~\ref{thm:dim-cent-Rich} gives us a tool to decide whether an element of the nilradical of a parabolic subalgebra is a Richardson element.
Namely, we have to calculate its centralizer. Centralizers of nilpotent elements of the classical Lie algebras can be computed using their Jordan canonical form. This well-known result is due to Kraft and Procesi, cf.~\cite{kp}. \begin{thm}\label{thm:dim-cent-Jordan} Let $(n_1,\dots,n_r)$ be the partition of the Jordan canonical form of a nilpotent matrix $x$ in the Lie algebra $\liea{g}$, let $(m_1,\dots, m_s)$ be the dual partition. Then the dimension of the centralizer of $x$ in $\liea{g}$ is \[ \begin{array}{ll} \sum\limits_i m_i^2 & \mbox{if $\liea{g}=\liea{gl}_n$}\\ \sum\limits_i \frac{m_i^2}{2}+\frac{1}{2}|\{i\mid n_i\ odd\}| & \mbox{if }\liea{g}=\liea{sp}_{2n} \\ \sum\limits_i \frac{m_i^2}{2}-\frac{1}{2}|\{i\mid n_i\ odd \}| & \mbox{if }\liea{g}=\liea{so}_N \end{array} \] \end{thm} So it remains to determine the Jordan canonical form of a given nilpotent element $x$. It is given by the dimensions of the kernels of the maps $x^j$, $j\ge 1$: \begin{lm}\label{lm:Jordan-form} Let $x$ be a nilpotent $n\times n$ matrix with $x^{m-1}\neq 0$ and $x^m=0$, set $b_j:=\dim\ker x^j$ ($j=1,\dots,m$). Define \[ a_j:=\left\{ \begin{array}{ll}2b_1-b_2 & j=1 \\ 2b_j-b_{j-1}-b_{j+1}& j=2,\dots,m-1\\ b_m-b_{m-1} & j=m \end{array}\right. \] Then the Jordan canonical form of $x$ has $a_s$ blocks of size $s$ for $s=1,\dots,m$. \end{lm} \begin{cor}\label{cor:part} With the notation of Lemma~\ref{lm:Jordan-form} above, the Jordan canonical form of $x$ is given by the partition \[ (1^{a_1},2^{a_2},\dots,(m-1)^{a_{m-1}},m^{a_m}). \] \end{cor} \section{The special linear Lie algebra}\label{se:sl-case} We now describe how to obtain a Richardson element from a (horizontal) line diagram. Recall that a standard parabolic subalgebra of $\liea{sl}_n$ is uniquely described by the sequence of lengths of the blocks in $\liea{m}$ (the standard Levi factor). Let $d=(d_1,\dots,d_r)$ be the dimension vector of these block lengths. 
We form the horizontal line diagram $L_h(d)$ and label its vertices column-wise by the numbers $1,2,\dots,n$, starting with column $1$, labeling top-down. This labeled diagram defines a nilpotent element as the sum of all elementary matrices $E_{ij}$ such that there is a line from $i$ to $j$, where $i<j$: \[ X(d)=X(L_h(d)) =\sum_{i\mbox{---}j}E_{ij} \] \begin{ex}\label{ex:constr} Let $\liea{p}\subset\liea{sl}_9$ be given by the dimension vector $(3,1,2,3)$. We label its horizontal line diagram, $$ {\small \xymatrix@-6mm{ 1\ar@{-}[r] & 4\ar@{-}[r] & 5\ar@{-}[r] & 7 \\ 2\ar@{-}[rr] & & 6\ar@{-}[r] & 8 \\ 3\ar@{-}[rrr] & & & 9 }}, $$ and obtain $X(d)=$ $E_{1,4}+E_{4,5}+E_{5,7}+E_{2,6}+E_{6,8}+E_{3,9}$, an element of the nilradical $\liea{n}$ of $\liea{p}$. Using Lemma~\ref{lm:Jordan-form} and Corollary~\ref{cor:part} one checks that the dimension of the centralizer of $X(d)$, computed via Theorem~\ref{thm:dim-cent-Jordan}, is equal to the dimension of the Levi factor. Thus $X(d)$ is a Richardson element for $\liea{p}$ (by Theorem~\ref{thm:dim-cent-Rich}). \end{ex} By construction, the matrix $X(d)$ is nilpotent for any dimension vector $d$. It is in fact an element of the nilradical $\liea{n}$ of the parabolic subalgebra $\liea{p}=\liea{p}(d)$: If $d=(n)$, this is obvious, as the constructed nilpotent element is the zero matrix. If $d=(d_1,d_2)$ then the nonzero coefficients of the matrix of $X(d)$ are in the rows $1,\dots,d_1$ and columns $d_1+1,\dots,d_1+d_2$. In other words, they lie in the $d_1\times d_2$-block in the upper right corner. The standard Levi factor consists of the blocks $d_1\times d_1$, $d_2\times d_2$ on the diagonal. In particular, $X(d_1,d_2)$ is a matrix that lies above the Levi factor. This generalizes to dimension vectors with more entries. So we get part (1) of the following Lemma. For part (2) we introduce a new notion.
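The check in Example~\ref{ex:constr} can be carried out mechanically. The following sketch (our own illustration, assuming numpy; not code from the paper) builds $X(d)$ for $d=(3,1,2,3)$, extracts the Jordan partition via Lemma~\ref{lm:Jordan-form}, and compares $\sum_i m_i^2$ with $\sum_i d_i^2$ (both computed in $\liea{gl}_9$; for $\liea{sl}_9$ both sides drop by one):

```python
import numpy as np

# X(d) for d = (3,1,2,3): X = E14 + E45 + E57 + E26 + E68 + E39 (1-indexed).
d = (3, 1, 2, 3)
n = sum(d)
lines = [(1, 4), (4, 5), (5, 7), (2, 6), (6, 8), (3, 9)]
X = np.zeros((n, n), dtype=int)
for i, j in lines:
    X[i - 1, j - 1] = 1

# b_j = dim ker X^j, computed until X^m = 0 (notation of Lemma lm:Jordan-form).
b = []
P = np.eye(n, dtype=int)
while True:
    P = P @ X
    b.append(n - int(np.linalg.matrix_rank(P)))
    if np.count_nonzero(P) == 0:
        break

# a_s = number of Jordan blocks of size s.
m = len(b)
a = ([2 * b[0] - b[1]]
     + [2 * b[j] - b[j - 1] - b[j + 1] for j in range(1, m - 1)]
     + [b[m - 1] - b[m - 2]])
partition = [s for s in range(m, 0, -1) for _ in range(a[s - 1])]

# Dual partition m_i: column lengths of the Young diagram of `partition`.
dual = [sum(1 for p in partition if p >= i) for i in range(1, partition[0] + 1)]

dim_centralizer_gl = sum(x * x for x in dual)  # Theorem thm:dim-cent-Jordan, gl case
dim_levi_gl = sum(x * x for x in d)
print(partition, dual, dim_centralizer_gl, dim_levi_gl)
```

The run yields partition $(4,3,2)$, dual partition $(3,3,2,1)$, and $23=23$, confirming the dimension count of the example.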
\begin{defn} If there exists a sequence of $k$ connected lines in a line diagram $L(d)$ that is not contained in a longer sequence, we say that $L(d)$ has a {\itshape $k$-chain} or a {\itshape chain of length $k$}. A {\itshape subchain of length $k$} (or $k$-subchain) is a sequence of $k$ connected lines in $L(d)$ that may be contained in a longer chain. A (sub)chain of length $0$ is a single vertex that is not connected to any other vertex. \end{defn} \begin{lm}\label{lm:X-nilrad} (1) The element $X(d)$ is an element of the nilradical of $\liea{p}(d)$. (2) For $k\ge 1$, the rank of $X(d)^k$ is equal to the number of $k$-subchains of lines in $L_h(d)$. \end{lm} \begin{proof}[Proof of (2)] It is clear that the rank of $X=X(d)$ is the number of lines in the diagram: to construct $X$, we sum over all lines of the diagram. Since these lines are disjoint (each vertex $i$ is joined to at most one neighbour $j$ with $i<j$), the nonzero rows and columns of $X$ are linearly independent. Therefore the rank of $X$ is equal to the number of vertices $i$ such that there is a line from $i$ to some $j$ with $i<j$. For any $k>0$, the matrix $X^k$ has linearly independent nonzero rows and columns. It is clear that an entry $(ij)$ of $X\cdot X$ is non-zero if and only if there is a line $i$---$k$---$j$ in $L_h(d)$: $X\cdot X=\sum_{i-k}E_{ik}\sum_{l-j}E_{lj}$ where $E_{ik}E_{lj}=\delta_{kl}E_{ij}$. Similarly, the rank of $X^k$ is the number of vertices $i$ such that there exist vertices $j_1<j_2<\dots<j_k$ and lines $i$---$j_1$---$\,\cdots$---$j_k$ joining them, i.e. the number of $k$-subchains. \end{proof} It turns out that $X(d)$ is a Richardson element for $\liea{p}(d)$, as we will show below. This fact also follows from the description of Br\"ustle et al. in~\cite{bhrr} of $\Delta$-filtered modules without self-extension of the Auslander-Reiten quiver of type $\lieg{A}_r$ (the number $r$ is the number of blocks in the standard Levi factor of the parabolic subalgebra).
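Part (2) of the Lemma can likewise be illustrated numerically. In the sketch below (our illustration, assuming numpy), the number of $k$-subchains of $L_h(3,1,2,3)$ is computed from its maximal chain lengths $3,2,1$ as $\sum_{\text{chains}}\max(\ell-k+1,0)$ and compared with $\operatorname{rk}X(d)^k$:

```python
import numpy as np

# Rank of X(d)^k versus the number of k-subchains for d = (3, 1, 2, 3).
# The horizontal diagram has maximal chains of lengths 3, 2 and 1, and a
# chain of length L contains max(L - k + 1, 0) subchains of length k.

chain_lengths = [3, 2, 1]
lines = [(1, 4), (4, 5), (5, 7), (2, 6), (6, 8), (3, 9)]

n = 9
X = np.zeros((n, n), dtype=int)
for i, j in lines:
    X[i - 1, j - 1] = 1

for k in range(1, 5):
    subchains = sum(max(L - k + 1, 0) for L in chain_lengths)
    rank = int(np.linalg.matrix_rank(np.linalg.matrix_power(X, k)))
    print(k, subchains, rank)  # the two counts agree for each k
```

For $k=1,\dots,4$ both counts are $6,3,1,0$, as the Lemma predicts.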
\begin{thm}\label{thm:lines-rich} The mapping $d\mapsto X(d)$ associates to each dimension vector with $\sum d_i=n$ a Richardson element for the corresponding parabolic subalgebra $\liea{p}=\liea{p}(d)$ of $\liea{sl}_n$. \end{thm} We give here an elementary proof of Theorem~\ref{thm:lines-rich} above. We will use the ideas of this proof to deal with the other classical groups (where we will have to use line diagrams that are not horizontal in general). The main idea is to use the dimension of the centralizer of a Richardson element and the partition of the Jordan canonical form of a nilpotent element. \begin{proof} Let $d$ be the dimension vector corresponding to the parabolic subalgebra $\liea{p}=\liea{p}(d)$. Let $X=X(d)$ be the nilpotent element associated to it (through the horizontal line diagram). By Theorem~\ref{thm:dim-cent-Rich} we have to calculate the dimension of the centralizer of $X$ and of the Levi factor $\liea{m}$ of $\liea{p}$. By Theorem~\ref{thm:dim-cent-Jordan}, $\dim\liea{g}^X$ is equal to $\sum_i m_i^2-1$ where $(m_1,\dots,m_s)$ is the dual partition to the partition of $X$. The parts of the dual partition are the entries $d_i$ of the dimension vector, as is shown in Lemma~\ref{lm:diagr-Jordan} below. In particular, $\dim\liea{m}=\sum_i d_i^2-1=\dim\liea{g}^X$. \end{proof} The following result shows how to obtain the partition and the dual partition of the Jordan canonical form of the nilpotent element associated to the dimension vector $d$. \begin{lm}\label{lm:diagr-Jordan} Let $d$ be the dimension vector for $\liea{p}\subset\liea{sl}_n$, $X=X(d)$ the associated nilpotent element of $\liea{sl}_n$. Order the entries $d_1,\dots,d_r$ of the dimension vector in decreasing order as $D_1,D_2,\dots,D_r$ (i.e. such that $D_i\ge D_{i+1}$ for all $i$). Then the Jordan canonical form of $X$ is \[ 1^{D_1-D_2},2^{D_2-D_3},\dots,(r-1)^{D_{r-1}-D_r},r^{D_r} \] and the dual partition is \[ D_r,D_{r-1},\dots, D_1.
\] \end{lm} In other words, the dual partition for $X(d)$ is given by the entries of the dimension vector. Furthermore, for every $i$-chain in $L_h(d)$ (i.e. for every sequence of $i$ connected lines, $i\ge 0$, that is not contained in a longer sequence) the partition has an entry $i+1$. \begin{proof} Let $d=(d_1,\dots,d_r)$ be the dimension vector of $\liea{p}$ and $D_1,\dots,D_r$ its permutation in decreasing order, $D_i\ge D_{i+1}$. To determine the Jordan canonical form of $X=X(d)$ we have to compute the rank of the powers $X^s$, $s\ge 1$, cf. Lemma~\ref{lm:Jordan-form}. Since the nilpotent matrix $X$ is given by the horizontal line diagram $L_h(d)$, the rank of $X^s$ is easy to compute: by Lemma~\ref{lm:X-nilrad} (2), the rank of $X^s$ is the number of $s$-subchains. In particular, $\operatorname{rk} X=n-D_1$ and $\operatorname{rk} X^2=n-D_1-D_2$, $\operatorname{rk} X^3=n-D_1-D_2-D_3$, etc. This gives \[ b_s:=\dim\ker X^s=D_1+\dots+D_s \ \mbox{for} \ s=1,\dots,r. \] And so, by Lemma~\ref{lm:Jordan-form}, we obtain $a_1=D_1-D_2$, $a_2=D_2-D_3$, $\dots,a_r=D_r$, proving the first statement. The statement about the dual partition (i.e. the partition given by the lengths of the columns of the Young diagram of the partition) then follows immediately. \end{proof} \section{Richardson elements for the other classical Lie algebras}\label{se:BCD-type} In this section we will introduce generalized line diagrams to deal with the symplectic and orthogonal Lie algebras. Having introduced them, we show that they correspond to Richardson elements for the parabolic subalgebra in question. Then we discuss some properties and describe the dual of the partition of a nilpotent element given by such a generalized line diagram. Furthermore, we describe the support of the constructed $X(d)$ and relate it to the Bala-Carter label of the $G$-orbit through $X(d)$ where $G$ is the adjoint group of $\liea{g}$.
To define the orthogonal Lie algebras, we use the skew-diagonal matrix $J_n$ with ones on the skew diagonal and zeroes elsewhere. The symplectic Lie algebras $\liea{sp}_{2n}$ are defined using ${\small\begin{bmatrix} 0 & J_n \\ -J_n & 0\end{bmatrix}}$. (For details we refer the reader to~\cite{gw}.) So $\liea{so}_n$ consists of the $n\times n$-matrices that are skew-symmetric around the skew-diagonal and $\liea{sp}_{2n}$ is the set of $2n\times 2n$-matrices of the form \[\begin{bmatrix}A & B\\ C&A^*\end{bmatrix}\] where $A^*$ is the negative of the skew transpose of $A$. Thus in the case of the symplectic and orthogonal Lie algebras, the block sizes of the standard Levi factor form a palindromic sequence. If there is an even number of blocks in the Levi factor, the dimension vector is of the form $(d_1,\dots,d_r,d_r,\dots,d_1)$. We will refer to this situation as type~(a). If there is an odd number of blocks in the Levi factor, type (b), the dimension vector is $(d_1,\dots,d_r,d_{r+1},d_r,\dots,d_1)$. By the (skew) symmetry around the skew diagonal, the entries below the skew diagonal of the matrices $X(d)$ are determined by the entries above the skew diagonal. In terms of line diagrams: For $\liea{sp}_N$ and $\liea{so}_N$ there is a line $(N-j+1)$---$(N-i+1)$ whenever there is a line $i$---$j$. We will call the line $(N-j+1)$---$(N-i+1)$ the {\itshape counterpart} of $i$---$j$ and will sometimes denote counterparts by dotted lines. In particular, it suffices to describe the lines attached on the left to vertices of the first $r$ columns for both types (a) and (b). The (skew) symmetry imposes constraints on the diagram; negative entries will also appear. For the moment, let us assume that $L(d)$ is a diagram defining an element of the nilradical of the parabolic subalgebra in question. Then part (2) of Lemma~\ref{lm:X-nilrad} still holds.
\begin{lm}\label{lm:chains-rank} If $X(d)$ is defined by $L(d)$ then the rank of the map $X(d)^k$ is the number of $k$-subchains of lines in the diagram. \end{lm} This uses the same argument as Lemma~\ref{lm:X-nilrad} since, by construction, the nonzero rows and columns of $X(d)$ are linearly independent and the product $X(d)^2$ only has nonzero entries $E_{il}$ if $X(d)$ has an entry $E_{ij}$ and an entry $E_{jl}$ for some $j$. The following remark allows us to simplify the shapes of the diagrams we are considering. If $d=(d_1,\dots,d_r)$ is an $r$-tuple in ${\mathbb N}^r$, and $\sigma\in S_r$ (where $S_r$ is the permutation group on $r$ letters) we define $d_{\sigma}$ as $(d_{\sigma 1},d_{\sigma 2},\dots,d_{\sigma r})$. By abuse of notation, for $d=(d_1,\dots,d_r,d_r,\dots,d_1)$ in ${\mathbb N}^{2r}$, we write $d_{\sigma}=(d_{\sigma 1},\dots,d_{\sigma r}, d_{\sigma r},\dots,d_{\sigma 1})$ and for $d=(d_1,\dots,d_r,d_{r+1},d_r,\dots,d_1)$ in ${\mathbb N}^{2r+1}$, we define $d_{\sigma}$ to be the $(2r+1)$-tuple $(d_{\sigma 1},\dots,d_{\sigma r}, d_{r+1},d_{\sigma r},\dots,d_{\sigma 1})$. It will be clear from the context which tuple we are referring to. \begin{re}\label{re:permutations} For $d=(d_1,\dots,d_r)$ the diagrams $L_h(d)$ and $L_h(d_{\sigma})$ have the same chains of lines for any $\sigma\in S_r$. In other words: for any $k\ge 1$, the number of chains of lines of length $k$ in $L_h(d)$ is the same as the number of chains of lines of length $k$ in $L_h(d_{\sigma})$. As an illustration, consider the permutation $1243$ of $d=(3,1,2,3)$: $$ {\small \xymatrix@-5mm{ \bullet\ar@{-}[r] & \bullet\ar@{-}[r] &\bullet\ar@{-}[r] & \bullet \\ \bullet\ar@{-}[rr] & & \bullet\ar@{-}[r] & \bullet \\ \bullet\ar@{-}[rrr] & & & \bullet}\quad\quad \xymatrix@-5mm{ \bullet\ar@{-}[r] & \bullet\ar@{-}[r] &\bullet\ar@{-}[r] & \bullet \\ \bullet\ar@{-}[r] & \bullet\ar@{-}[r] &\bullet & \\ & \bullet\ar@{-}[r] & \bullet &} } $$ Similarly, for $f=(f_1,\dots,f_r,f_r,\dots,f_1)$ resp.
for $g=(g_1,\dots,g_r,g_{r+1},g_r,\dots,g_1)$, if $L(f)$ and $L(g)$ are line diagrams for $\liea{sp}_{2n}$ or $\liea{so}_N$ then for any $\sigma\in S_r$, the diagrams $L(f_{\sigma})$ resp. $L(g_{\sigma})$ are also diagrams for the corresponding Lie algebras and have exactly the same chains as $L(f)$ resp. as $L(g)$. \end{re} We have an immediate consequence of Remark~\ref{re:permutations} and of Lemma~\ref{lm:chains-rank}: \begin{cor}\label{cor:reordering} Let $d=(d_1,\dots,d_r,d_r,\dots,d_1)$ or $d=(d_1,\dots,d_r,d_{r+1},d_r,\dots,d_1)$ be the dimension vector of a parabolic subalgebra of a symplectic or orthogonal Lie algebra and $X(d)$ be given by the appropriate line diagram. In calculating the rank of $X(d)^k$ we can assume that $d_1\le\dots\le d_r$. \end{cor} We will make frequent use of this property. Now we will finally be able to construct diagrams for the other classical cases. We have already mentioned that the horizontal line diagrams do not produce Richardson elements. One reason is that the counterpart of a line $i$---$j$ is not always horizontal. The other reason is that we have to introduce negative signs for the symplectic and orthogonal cases when we associate a nilpotent matrix to a diagram: If $\liea{g}=\liea{sp}_{2n}$, in the definition of $X(d)$ we subtract $E_{ij}$ whenever there is a line $i$---$j$ with $n<i<j$. If $\liea{g}=\liea{so}_N$ we subtract $E_{ij}$ whenever there is a line $i$---$j$ with $i+j>N$. \begin{ex}\label{ex:sp-non-horizontal} Let $(1,2,2,1)$ be the dimension vector of a parabolic subalgebra of $\liea{sp}_6$.
Then the following three line diagrams determine elements of the nilradical of $\liea{p}$: $${\small \xymatrix@-6mm{ 1\ar@{-}[r] & 2\ar@{-}[r] & 4 & 6\\ & 3\ar@{.}[r] & 5\ar@{.}[ur] \\ }\quad\quad \xymatrix@-6mm{ 1\ar@{-}[r] & 2\ar@{-}[rd] & 4 & 6\\ & 3\ar@{-}[ru] & 5\ar@{.}[ur] \\ }\quad\quad \xymatrix@-6mm{ 1\ar@{-}[r] & 2\ar@{-}[r] & 5\ar@{.}[r] & 6\\ & 3\ar@{-}[r] & 4 \\ }} $$ The last diagram is just a reordering of the second. The nilpotent elements are $X_1=E_{12}+E_{24}+E_{35}-E_{56}$ resp. $X_2=E_{12}+E_{25}+E_{34}-E_{56}$. By calculating the Jordan canonical forms for these elements one checks that only the nilpotent element $X_2$ is a Richardson element. \end{ex} This example and the discussion above illustrate that for the symplectic and orthogonal Lie algebras, we will use: (i) non-horizontal lines, (ii) labeling top-bottom {\bf and} bottom-top, (iii) negative signs, too. \noindent Before we start defining these line diagrams we introduce a new notion. \begin{defn} Let $\liea{p}$ be the standard parabolic subalgebra of a symplectic or orthogonal Lie algebra $\liea{g}$. We say that $\liea{p}$ is {\itshape simple} if $\liea{p}\subset\liea{g}$ is of one of the following forms: \begin{enumerate} \item A parabolic subalgebra of $\liea{sp}_{2n}$ with an even number of blocks in the standard Levi factor. \item A parabolic subalgebra of $\liea{so}_{2n}$ with an even number of blocks in the standard Levi factor such that odd block lengths appear exactly twice. \item A parabolic subalgebra of $\liea{sp}_{2n}$ with an odd number of blocks in the Levi factor and such that each odd $d_i$ that is smaller than $d_{r+1}$ appears exactly twice. \item A parabolic subalgebra of $\liea{so}_N$ with an odd number of blocks in the Levi factor such that either all $d_i$ are odd or there is an index $k\le r$ such that all $d_i$ with $i\le k$ are even, $d_j$ odd for $j>k$ and the even $d_i$ are smaller than $d_{k+1},\dots,d_r$. 
Furthermore, the even block lengths that are larger than $d_{r+1}$ appear only once among $d_1,\dots,d_k$. \end{enumerate} \end{defn} \begin{defn}[Type (a)] Let $\liea{p}$ be a simple parabolic subalgebra of $\liea{sp}_{2n}$ or $\liea{so}_{2n}$, given by the dimension vector $d=(d_1,\dots,d_r,d_r,\dots,d_1)$. Then we define the {\itshape line diagram} $L_{even}(d)$ {\itshape associated to} $d$ (and $\liea{g}$) as follows. \begin{enumerate} \item Draw $2n$ vertices in $2r$ columns of length $d_1,\dots$, top-adjusted. Label the first $r$ columns with the numbers $1,\dots, n$, top--bottom. Label the second $r$ columns with the numbers $n+1,\dots, 2n$, bottom--top. \item Join the first $r$ columns with horizontal lines as for $\liea{sl}_n$. Draw the counterparts of these lines in the second $r$ columns. \item[(3) (i)] If $\liea{g}=\liea{sp}_{2n}$, add the lines $k$---$(2n-k+1)$. \item[(3) (ii)] If $\liea{g}=\liea{so}_{2n}$, one adds the lines $(2l-1)$---$(2n-2l+1)$ and their counterparts $2l$---$(2n-2l+2)$ if $n$ is even. If $n$ is odd, one adds the lines $2l$---$(2n-2l)$ and their counterparts $(2l+1)$---$(2n-2l+1)$. \end{enumerate} \end{defn} \begin{defn}[Type (b)] Let $\liea{p}$ be a simple parabolic subalgebra of $\liea{sp}_{2n}$ or of $\liea{so}_N$, given by the dimension vector $d=(d_1,\dots,d_r,d_{r+1},d_r,\dots,d_1)$. Then we define the {\itshape line diagram} $L_{odd}(d)$ {\itshape associated to} $d$ (and $\liea{g}$) as follows. \begin{enumerate} \item Draw $2r+1$ columns of length $d_1,\dots$, top-adjusted. Label them with the numbers $1,\dots$ in increasing order, top--bottom in each column. \item[(2) (i)] For $\liea{sp}_{2n}$: \\ If $\min_i\{d_i\}\ge 2$, draw a horizontal sequence of lines in the first row and all their counterparts, which form a sequence joining the lowest vertices of each column. Repeat this procedure as long as the columns of the remaining vertices all have length at least two. \item[(2) (ii)] For $\liea{so}_N$: \\ If $d_1$ is odd, go to step (3) (ii).
If $d_1$ is even, do as in (2) (i), drawing lines in the first row and their counterparts joining the lowest vertices. Repeat until either the first of the remaining columns has odd length or there are no vertices left to be joined. Continue as in (3) (ii). \item[(3) (i)] For $\liea{sp}_{2n}$: \\ For the remaining vertices: draw horizontal lines following the top-most remaining vertices and simultaneously their counterparts (the lowest remaining vertices). \item[(3) (ii)] For $\liea{so}_N$: \\ All remaining columns have odd length. Connect the central entries of each column. The remaining column lengths are all even; they are joined as in (2) (ii). \end{enumerate} \end{defn} \begin{thm}\label{thm:line-richardson} Let $d$ be the dimension vector for a simple parabolic subalgebra of $\liea{sp}_{2n}$ or $\liea{so}_N$. Then the associated diagram $L_{even}(d)$ resp. $L_{odd}(d)$ determines a Richardson element for $\liea{p}(d)$ by setting \[ \begin{array}{ccll} X(d) & = & \sum_{i\mbox{---}j,\ i\le n}E_{ij} - \sum_{i\mbox{---}j,\ i>n}E_{ij} & \mbox{for}\ \liea{sp}_{2n}\\ X(d) & = & \sum_{i\mbox{---}j,\ i+j<N}E_{ij} - \sum_{i\mbox{---}j,\ i+j>N}E_{ij} & \mbox{for} \ \liea{so}_N \end{array} \] where the sums are over all lines in the diagram. \end{thm} We first include some immediate consequences of this result. After that we add an observation about the (dual of the) partition corresponding to $X(d)$ and then we are ready to prove Theorem~\ref{thm:line-richardson}. Theorem~\ref{thm:line-richardson} enables us to determine the minimal $k$ such that the Richardson element $X(d)$ lies in the graded parts $\liea{g}_1\oplus\dots\oplus\liea{g}_k$. To do so we introduce $s(d)$ as the maximal number of entries $d_i,\dots,d_{i+s}$ of $d$ that are surrounded by larger entries $d_{i-1}$ and $d_{i+s+1}$.
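As a concrete check of the criterion used throughout (comparing $\dim\liea{g}^{X}$ with the dimension of the Levi factor), the two candidates of Example~\ref{ex:sp-non-horizontal} can be tested numerically. The sketch below uses our own helper names and assumes that Theorem~\ref{thm:dim-cent-Jordan} is the classical formula $\dim\liea{g}^X=\frac12\big(\sum_i q_i^2+\#\{\text{odd parts of the partition}\}\big)$ for $\liea{sp}_{2n}$, where $(q_i)$ is the dual partition:

```python
def dual_partition(n, entries):
    """Dual Jordan partition (kernel-dimension increments) of the
    nilpotent X = sum of signed elementary matrices E_ij.  Assumes at
    most one nonzero entry per row and column (simple line diagram)."""
    X = [[0] * n for _ in range(n)]
    for i, j, sign in entries:
        X[i - 1][j - 1] = sign
    P, prev, dual = [row[:] for row in X], 0, []
    while True:
        rank = sum(1 for row in P for x in row if x)  # partial permutation
        dual.append(n - rank - prev)
        prev = n - rank
        if rank == 0:
            return dual
        P = [[sum(P[i][t] * X[t][j] for t in range(n)) for j in range(n)]
             for i in range(n)]

def dim_centralizer_sp(dual):
    """(sum q_i^2 + #odd parts)/2 for sp_{2n}; the odd parts are those
    of the Jordan partition, i.e. the conjugate of the dual partition."""
    partition = [sum(1 for q in dual if q >= s)
                 for s in range(1, max(dual) + 1)]
    odd = sum(1 for p in partition if p % 2)
    return (sum(q * q for q in dual) + odd) // 2

# The two candidates: X_1 = E12+E24+E35-E56 and X_2 = E12+E25+E34-E56
X1 = [(1, 2, 1), (2, 4, 1), (3, 5, 1), (5, 6, -1)]
X2 = [(1, 2, 1), (2, 5, 1), (3, 4, 1), (5, 6, -1)]
# dim Levi = 1^2 + 2^2 = 5; only X_2 attains it
```

One finds $\dim\liea{g}^{X_1}=7$ and $\dim\liea{g}^{X_2}=5$, so only $X_2$ is a Richardson element, as stated in the example.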
More precisely, if $d=(d_1,\dots,d_r,d_r,\dots,d_1)$ or $d=(d_1,\dots,d_r,d_{r+1},d_r,\dots,d_1)$ is the dimension vector, we rewrite $d$ as a vector with increasing indices, $(c_1,\dots,c_r,c_{r+1},c_{r+2},\dots,c_{2r})$ resp. $(c_1,\dots,c_r,c_{r+1},c_{r+2},\dots,c_{2r+1})$ and define $s(d) :=1+\max\{i\mid\text{there are}\ c_{j+1},\dots, c_{j+i}\ \text{with}\ c_j>c_{j+l}<c_{j+i+1}\text{ for all }1\le l\le i\}$. \begin{cor}\label{cor:bound-grade} Let $\liea{p}(d)$ be a simple parabolic subalgebra of the orthogonal or symplectic Lie algebras. Then the element $X(d)$ belongs to $\liea{g}_1\oplus\dots\oplus\liea{g}_{s(d)}$. The same holds for parabolic subalgebras of $\liea{sl}_n$. \end{cor} This follows from the fact that $E_{ij}$ with $i$ from column $k$ of the line diagram and $j$ from column $k+s$ is an entry of the graded part $\liea{g}_s$. If, e.g., we have $c_1>c_j<c_{s+1}$ for $j=2,\dots,s$ then there is a line joining columns one and $s+1$. So $X(d)$ has an entry in $\liea{g}_{s}$. \begin{cor} For $\liea{sl}_n$, $s(d)$ is equal to one if and only if the dimension vector satisfies $d_1\le\dots\le d_t\ge\dots\ge d_r$ for some $1\le t\le r$. \end{cor} This well-known result has been observed by Lynch~\cite{l}, Elashvili and Kac~\cite{ek}, Goodwin and R\"ohrle~\cite{gr}, and in our joint work with Wallach~\cite{bw}. The next lemma shows how to obtain the dual of the partition of $X(d)$ if $X(d)$ is given by the appropriate line diagram for $d$. \begin{lm}\label{lm:dual-part} If $\liea{p}(d)$ is a simple parabolic subalgebra of a symplectic or orthogonal Lie algebra, let $X=X(d)$ be given by the appropriate line diagram $L_{even}(d)$ or $L_{odd}(d)$.
The dual of the partition of $X$ has the form \[ \begin{array}{llll} & \text{Dual of the partition of $X$} & \liea{g} & \text{Type of $\liea{p}$} \\ & \\ (i) & d_1,d_1,\dots,d_r,d_r & \liea{sp}_{2n} & (a) \\ & & \\ (ii) & d_{r+1}\cup \left(\bigcup_{d_i\notin D_o}d_i,d_i\right) \cup\left(\bigcup_{d_i\in D_o} d_i-1,d_i+1\right) & \liea{sp}_{2n} & (b) \\ & & \\ (iii) & \left(\bigcup_{d_i\text{even}} d_i,d_i\right) \cup\left(\bigcup_{d_i\text{odd}}d_i-1,d_i+1\right) & \liea{so}_{2n} & (a) \\ & & \\ (iv) & d_{r+1}\cup\left( \bigcup_{d_i\notin D^e} d_i,d_i\right) \cup\left(\bigcup_{d_i\in D^e}d_i-1,d_i+1\right) & \liea{so}_{2n+1} & (b) \\ & \\ (v) & d_{r+1}\cup\left( \bigcup_{d_i\notin D^o} d_i,d_i\right) \cup\left(\bigcup_{d_i\in D^o}d_i-1,d_i+1\right) & \liea{so}_{2n} & (b) \end{array} \] where $D_o:=\{d_i\text{ odd}\mid d_i<d_{r+1}\}$, $D^o:=\{d_i\text{ odd}\mid d_i>d_{r+1}\}$ and $D^e:=\{d_i\text{ even}\mid d_i>d_{r+1}\}$ are subsets of $\{d_1,\dots,d_r\}$. In particular, if $D_o$, $D^e$ or $D^o$ are empty, the partition in the corresponding case (ii), (iv) or (v) has the same parts as the dimension vector. The same is true for (iii), if there are no odd $d_i$. \end{lm} The proof consists mainly in counting lines and (sub)chains of lines of the corresponding diagrams. Therefore we postpone it and include it in the appendix. We are now ready to prove Theorem~\ref{thm:line-richardson} with the use of Theorem~\ref{thm:dim-cent-Jordan} and of Lemma~\ref{lm:dual-part}. \begin{proof}[Proof of Theorem~\ref{thm:line-richardson}] We consider the case $\liea{g}=\liea{sp}_{2n}$. For the parabolic subalgebras of an orthogonal Lie algebra, the claim follows using the same methods. The idea is to use the dimension of the centralizer of $X(d)$ and compare it to the dimension of the Levi factor. To calculate the dimension of the centralizer, we use the formulae of Theorem~\ref{thm:dim-cent-Jordan}, i.e. 
we use the dual of the partition of $X=X(d)$ as described in Lemma~\ref{lm:dual-part} and the number of odd parts in the partition of $X$. \noindent $\liea{sp}_{2n}$, type (a): \\ By Lemma~\ref{lm:dual-part} the dual partition of the nilpotent element $X=X(d)$ has as parts the entries of $d$. Since they all appear in pairs, the partition of the orbit has no odd entries. So by the formula of Theorem~\ref{thm:dim-cent-Jordan} we obtain $\dim\liea{g}^X=\frac{1}{2}(2d_1^2+\dots+2d_r^2)$, the same as the dimension of the Levi factor. In particular, $X$ is a Richardson element for the parabolic subalgebra $\liea{p}(d)$ of $\liea{sp}_{2n}$. \noindent $\liea{sp}_{2n}$, type (b): \\ As in Lemma~\ref{lm:dual-part} let $D_o\subset\{d_1,\dots,d_r\}$ be the possibly empty set of the odd $d_i$ that are smaller than $d_{r+1}$. Then the dual partition has the parts \[ \{d_i,d_i \mid i\le r,\ d_i\notin D_o\}\cup\{d_{r+1}\} \cup\{d_i+1,d_i-1\mid d_i\in D_o\}. \] The $d_i$ that are not in $D_o$ come in pairs and do not contribute to odd parts in the partition of $X=X(d)$. In particular, the number of odd parts only depends on $d_{r+1}$ and on the entries of $D_o$. We write the elements of $D_o$ in decreasing order as $\tilde{d}_1,\dots,\tilde{d}_s$ (where $s=|D_o|$). By assumption (the parabolic subalgebra is simple) these odd entries are all different, $\tilde{d}_1>\tilde{d}_2>\dots>\tilde{d}_s$. Then the number of odd parts of the partition of $X$ is the same as the number of odd parts of the dual of the partition \[ \tilde{P}:\quad d_{r+1},\tilde{d}_1+1,\tilde{d}_1-1,\dots, \tilde{d}_s+1,\tilde{d}_s-1. \] This dual has $d_{r+1}-(\tilde{d}_1+1)$ ones, $(\tilde{d}_1+1)-(\tilde{d}_1-1)$ twos, $(\tilde{d}_1-1)-(\tilde{d}_2+1)$ threes, and so on. So the number of odd parts in the dual of $\tilde{P}$ is \[ [d_{r+1}-(\tilde{d}_1+1)]+[(\tilde{d}_1-1)-(\tilde{d}_2+1)] + \dots + [(\tilde{d}_{s-1}-1)-(\tilde{d}_s+1)]+\tilde{d}_s-1 = d_{r+1}-2s.
\] Thus the dimension of the centralizer of $X$ is \begin{align*} \frac{1}{2} & \left[ \left(\sum_{\substack{i<r+1\\ d_i\notin D_o}}2d_i^2\right) + d_{r+1}^2 + \left(\sum_{d_i\in D_o}(d_i-1)^2+(d_i+1)^2\right) + d_{r+1}-2s\right] \\ & = \sum_{i\le r}d_i^2 +\binom{d_{r+1}+1}{2} = \dim\liea{m}. \end{align*} \end{proof} \subsection{Bala-Carter labels for Richardson orbits} The support of the nilpotent element of a simple line diagram is by construction a simple system of roots. Namely, for any $d$, the corresponding $X(d)$ has at most one non-zero element in each row and each column. One can check that the difference of two of the corresponding positive roots is never a root. In other words, the support $\operatorname{supp}(X)$ forms a simple system of roots. \begin{re} The converse statement is not true. There are Richardson elements whose support forms a simple system of roots but where there is no simple line diagram defining a Richardson element. A family of examples are the Borel subalgebras of $\liea{so}_{2n}$ or, more generally, parabolic subalgebras of $\liea{so}_{2n}$ where $\alpha_n$ and $\alpha_{n-1}$ are both not roots of the Levi factor. \end{re} If $X$ is a nilpotent element of $\liea{g}$ we denote the $G$-orbit through $X$ by $\mathcal{O}_X$ (where $G$ is the adjoint group of $\liea{g}$). \begin{cor} Let $\liea{p}(d)$ be a parabolic subalgebra of $\liea{sl}_n$, with $X(d)$ defined by the line diagram $L_h(d)$, or a simple parabolic subalgebra of (b)-type for $\liea{sp}_{2n}$ or $\liea{so}_N$, with $X(d)$ defined by the appropriate line diagram. Then the type of the root system spanned by $\operatorname{supp} X(d)$ is equal to the Bala-Carter label of the $G$-orbit $\mathcal{O}_{X(d)}$. \end{cor} \begin{proof} This follows from the characterization of the type (i.e. the Bala-Carter label) of $\mathcal{O}_X$ given by Panyushev in Section 3 of~\cite{pan}. For simplicity we assume $d_1\le \dots\le d_r$. Note that in any case, the partition of $X(d)$ is given by the chains in the line diagram.
The partition of $X(d)$ has an entry $i+1$ for every chain of length $i+1$. If $\alpha$ given by $E_{ij}$ and $\beta$ given by $E_{kl}$ are roots in $\operatorname{supp} X(d)$ then they add to a root of $\liea{sl}_n$ if and only if there is a line connecting them. Thus in the case of the special linear Lie algebra a chain of length $i+1$ corresponds to a factor $\lieg{A}_i$ in $\operatorname{supp} X(d)$. Similarly, for $\liea{sp}_{2n}$ and $\liea{so}_N$, a chain of length $i+1$ together with its counterpart give a factor $\lieg{A}_i$. Finally, the possibly remaining single chain of length $2j+1$ (passing through the central vertex of column $r+1$) in the case of $\liea{so}_{2n+1}$ gives a factor $\lieg{B}_j$. Then the claim follows with~\cite{pan} where Panyushev describes the type of a nilpotent orbit in terms of its partition. \end{proof} \section{Branched diagrams}\label{se:branched} The diagrams we have introduced had at most one line to the left and at most one line to the right of a vertex. We call such a diagram a {\itshape simple line diagram}. In the case of simple parabolic subalgebras, we can always choose a simple line diagram to define a Richardson element. However, there are parabolic subalgebras where no simple diagram gives rise to a Richardson element. After giving an example we characterize the parabolic subalgebras for which there exists a simple line diagram giving a Richardson element. Then we discuss the case of the symplectic Lie algebras. We introduce a branched diagram and obtain a Richardson element for the parabolic subalgebra in question. \begin{ex} 1) Consider the parabolic subalgebra of $\liea{so}_{2n}$ given by the dimension vector $(n,n)$ where $n$ is odd. The element $X=X(n,n)$ given by the diagram $L_{even}(n,n)$ has rank $n-1$ and so the kernel of the map $X^k$ has dimension $n+1$ or $2n$ for $k=1,2$ resp. The partition of $X$ is then $1^2,2^{n-1}$, its dual is $n-1,n+1$.
The centralizer of $X$ has dimension $\frac{1}{2}(2n^2+2-2)=n^2$ and the Levi factor of this parabolic subalgebra has dimension $n^2$. So $X$ is a Richardson element. 2) Let $\liea{p}\subset\liea{so}_{4d}$ be given by $(d,d,d,d)$ where $d$ is odd. Note that the skew-symmetry of the orthogonal Lie algebra allows at most $d-1$ lines between the two central columns. \[ {\small \xymatrix@-7mm{ \bullet\ar@{-}[r] & \bullet\ar@{-}[rd] & \bullet\ar@{-}[r] & \bullet \\ \bullet\ar@{-}[r] & \bullet\ar@{-}[ru] & \bullet\ar@{-}[r] & \bullet \\ \bullet\ar@{-}[r] &\bullet & \bullet\ar@{-}[r] & \bullet }} \] The line diagram $L_{even}(d,d,d,d)$ has $2d+d-1$ lines, $2(d-1)$ two-subchains and $d-1$ three-chains. Calculating the dimensions of the kernel of the map $X^k$ (where $X=X(d,d,d,d)$) yields the partition $2^2,4^{d-1}$. Its dual is $(d-1)^2,(d+1)^2$, hence the centralizer of $X$ has dimension $2d^2+2$ while the Levi factor has dimension $2d^2$. Hence $X$ is not a Richardson element. \end{ex} \begin{thm} Let $\liea{g}$ be a simple Lie algebra. The parabolic subalgebras $\liea{p}$ of $\liea{g}$ for which there exists a simple line diagram that defines a Richardson element for $\liea{p}$ are: The parabolic subalgebras of $\liea{sl}_n$ and the simple parabolic subalgebras of the symplectic and orthogonal Lie algebras. \end{thm} \begin{proof} By Theorems~\ref{thm:lines-rich} and~\ref{thm:line-richardson} there is always a simple line diagram giving a Richardson element in these cases. It remains to show that these are the only ones. By Corollary~\ref{cor:reordering} we can assume w.l.o.g. that $d_1\le\dots\le d_r$.
Then it turns out that if there is an even number of blocks for $\liea{so}_{2n}$ or if $d_r\le d_{r+1}$ for $\liea{sp}_{2n}$ the problem is translated to the problem of finding a Richardson element in the first graded part $\liea{g}_1$ of $\liea{g}$ because of the following observation: Since $d_1\le\dots\le d_r=d_r\ge\dots\ge d_1$, or $d_1\le\dots\le d_r\le d_{r+1}\ge d_r\ge\dots\ge d_1$, all lines connect neighbouring columns. But lines connecting neighbouring columns correspond to entries $E_{i,j}$ of the first super diagonal of the parabolic subalgebra, i.e. to entries of $\liea{g}_1$. Then the claim follows from the classification of parabolic subalgebras with a Richardson element in $\liea{g}_1$ for type (a) of $\liea{so}_{2n}$ and if $d_r\le d_{r+1}$ for type (b) parabolic subalgebras of the symplectic Lie algebra. In both cases there exists a Richardson element in $\liea{g}_1$ if and only if each odd block length $d_i$ only appears once among $d_1,\dots,d_r$, cf.~\cite{bw}. If there is no Richardson element in $\liea{g}_1$ then in particular no simple line diagram can give a Richardson element. It remains to deal with (b)-types for $\liea{so}_N$ and (b)-types for $\liea{sp}_{2n}$ where $d_{r+1}$ is not maximal. Both are straightforward but rather lengthy calculations that we omit here. \end{proof} By way of illustration we include examples of branched diagrams for non-simple parabolic subalgebras of $\liea{sp}_{2n}$ and of $\liea{so}_N$ in the appendix. In general, it is not clear how branched diagrams should be defined uniformly for the symplectic and orthogonal Lie algebras. It is clear from the description of simple parabolic subalgebras of $\liea{so}_N$ that this case is more intricate.
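The dimension counts in parts 1) and 2) of the example above can be reproduced with the orthogonal analogue of the centralizer-dimension formula, $\dim\liea{g}^X=\frac12\big(\sum_i q_i^2-\#\{\text{odd parts of the partition}\}\big)$ for $\liea{so}_N$ (the classical formula, which we assume coincides with Theorem~\ref{thm:dim-cent-Jordan}); a quick arithmetic check with our own helper name:

```python
def dim_centralizer_so(dual):
    """(sum q_i^2 - #odd parts)/2 for so_N, where the odd parts are
    those of the Jordan partition, i.e. the conjugate of the dual."""
    partition = [sum(1 for q in dual if q >= s)
                 for s in range(1, max(dual) + 1)]
    odd = sum(1 for p in partition if p % 2)
    return (sum(q * q for q in dual) - odd) // 2

# Example 1): so_{2n}, d = (n,n) with n odd; dual partition (n+1, n-1)
n = 5
assert dim_centralizer_so([n + 1, n - 1]) == n * n             # = dim Levi

# Example 2): so_{4d}, d = (d,d,d,d) with d odd; dual ((d+1)^2, (d-1)^2)
d = 3
assert dim_centralizer_so([d + 1, d + 1, d - 1, d - 1]) == 2 * d * d + 2
```

In the first case the centralizer dimension equals the Levi dimension $n^2$, so $X$ is Richardson; in the second it exceeds $2d^2$ by $2$, so no simple line diagram works.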
We expect that Richardson elements can be obtained by adding lines to the corresponding simple line diagrams: \begin{conj} For the (b)-type of $\liea{sp}_{2n}$ the appropriate diagram defining a Richardson element is obtained from $L_{odd}(d)$ by adding a branching for every repetition $d_i=d_{i+1}=\dots=d_{i+s}$ of odd entries smaller than $d_{r+1}$. \end{conj} We conclude this section with a remark on the bound $s(d)$ introduced in Section~\ref{se:BCD-type}. If there is no simple line diagram defining a Richardson element, we can still define $s(d)$ to be the maximal number of a sequence of entries of $d$ that are surrounded by two larger entries. But this is now only a lower bound: the Richardson element defined by a branched diagram does not necessarily lie in $\liea{g}_1\oplus\dots\oplus\liea{g}_{s(d)}$, cf. Examples~\ref{ex:branched-sp},~\ref{ex:branched-sp22}, and~\ref{ex:branched-so}. \section*{Appendix} We discuss some examples of branched line diagrams for $\liea{sp}_{2n}$ and for $\liea{so}_N$ to illustrate Section~\ref{se:branched}. Recall that the parabolic subalgebras of type (b) of $\liea{sp}_{2n}$ are simple if and only if every odd $d_i<d_{r+1}$ only appears once among $d_1,\dots,d_r$. In particular, the smallest example of $\liea{sp}_{2n}$ for which there is no simple line diagram occurs for $n=3$. \begin{ex}\label{ex:branched-sp} Let $\liea{p}$ be the parabolic subalgebra of $\liea{sp}_6$ with dimension vector $(1,1,2,1,1)$. Consider the diagrams \[ {\small \xymatrix@-5mm{ & & \ & \\ 1\ar@{-}[r] & 2\ar@{-}[r] & 3 & 5\ar@{-}[r] & 6 \\ & & 4\ar@{-}[ru] }\quad\quad \xymatrix@-6mm{ & & 3 \\ 1\ar@{-}[r] & 2\ar@{-}[ru]\ar@{--}[rr] & & 5\ar@{-}[r] & 6 \\ & & 4\ar@{-}[ru] }} \] The diagram to the left is a line diagram as in Section~\ref{se:BCD-type}. The corresponding nilpotent element has a centralizer of dimension $7$. However, the Levi factor is five dimensional. In the second diagram, there is one extra line, connecting the vertices $2$ and $5$.
The defined matrix $X=E_{12}+E_{23}+E_{25}-E_{45}-E_{56}$ has a five-dimensional centralizer as needed. \end{ex} \begin{ex}\label{ex:branched-sp22} The following branched line diagram for the parabolic subalgebra of $\liea{sp}_{22}$ with dimension vector $d=(1,1,1,3,3,4,3,3,1,1,1)$ gives a Richardson element for $\liea{p}(d)$: \[ {\small \xymatrix@-4mm{ & & & & & 10\ar@{-}[rd] \\ & & & 4\ar@{-}[r] & 7\ar@{-}[ru]\ar@{--}[rrdd] & 11 & 14\ar@{-}[r] & 17 \\ 1\ar@{-}[r] & 2\ar@{-}[r] & 3\ar@{-}[ru] & 5\ar@{-}[r] & 8\ar@{-}[ru]\ar@{--}[rr] & & 15\ar@{-}[r] & 18 & 20\ar@{-}[r] & 21\ar@{-}[r] & 22 \\ & & & 6\ar@{-}[r] & 9\ar@{-}[rd] & 12\ar@{-}[ru] & 16\ar@{-}[r] & 19\ar@{-}[ru] & \\ & & & & & 13\ar@{-}[ru] }} \] The Levi factor and the centralizer of the constructed $X$ have dimension $31$. \end{ex} \begin{ex}\label{ex:branched-so} For the orthogonal Lie algebras, the smallest examples are given by $d=(1,1,2,2,1,1)$, i.e. (a)-type of $\liea{g}=\liea{so}_8$ and by $d=(2,2,1,2,2)$ for an odd number of blocks in $\liea{so}_9$. The following branched diagrams give Richardson elements for the corresponding parabolic subalgebras. \[ {\small \xymatrix@-6mm{ & & 3\ar@{-}[rdd] & 6\ar@{-}[rd] \\ 1\ar@{-}[r]\ar@{--}[rrd]&2\ar@{-}[ru] & & &7\ar@{-}[r]&8\\ & & 4\ar@{-}[ruu]& 5\ar@{--}[rru] }\quad\quad \xymatrix@-6mm{ 1\ar@{-}[r] & 3\ar@{-}[rd]\ar@{--}[rr] & & 6\ar@{-}[r] & 8 \\ & & 5\ar@{-}[rd] & & \\ 2\ar@{-}[r] & 4\ar@{--}[rr] & & 7\ar@{-}[r] & 9 }} \] \end{ex} \begin{proof}[Proof of Lemma~\ref{lm:dual-part}] We prove the statement for the symplectic Lie algebras. The corresponding statements for $\liea{so}_N$ are proven similarly. \noindent \underline{(i) - Type (a) of $\liea{sp}_{2n}$}: \noindent Note that the bottom-top ordering of the second half of $L_{even}(d)$ ensures that the counterpart of a line $i$---$j$ (for $j\le n$) is again horizontal and that all lines connecting any entry of column $r$ to an entry to its right are horizontal.
Therefore the line diagram $L_{even}$ has the same shape as the horizontal line diagram defined for $\liea{sl}_n$. In particular, the orbit of the nilpotent element defined by $L_{even}(d)$ has the same partition as the one defined by $L_h(d)$. Then the assertion follows with Lemma~\ref{lm:diagr-Jordan}. \noindent \underline{(ii) - Type (b) of $\liea{sp}_{2n}$}:\\ The proof is done by induction on $r$. Let $d=(d_1,d_2,d_1)$ be the dimension vector. If $d_1\notin D_o$ (i.e. $d_1$ is not an odd entry smaller than $d_2$) then the line diagram $L_{odd}(d_1,d_2,d_1)$ has the same chains of lines as the horizontal diagram for $\liea{sl}_{2n}$. For $d_1\in D_o$ the diagram $L_{odd}(d_1,d_2,d_1)$ has $d_1-1$ two-chains (chains of length two) and $2$ one-chains (i.e. lines). So the kernel of the map $X^k$ has dimension $d_2$, $d_1+d_2+1$, $2d_1+d_2$ for $k=1,2,3$, giving the partition $1^{d_2-d_1-1}, 2^2,3^{d_1-1}$ and the dual of it is $d_2,d_1+1,d_1-1$ as claimed. Let now $d=(d_1,\dots,d_r,d_{r+1},d_r,\dots,d_1)$ with $d_1\le\dots\le d_{r+1}$. Assume, by induction, that the claim holds for $d'=(d_2,\dots,d_r,d_{r+1},d_r, \dots,d_2)$. Let $d_1$ be even. If $d_1=d_{r+1}$ then the diagram $L_{odd}(d)$ is the same as $L_h(d)$ and the claim follows immediately. If $d_1<d_{r+1}$, the diagram $L_{odd}(d)$ is obtained from $L_{odd}(d')$ by extending $d_1$ $(2r-2)$-chains to $2r$-chains. The kernels of the map $X^k$ satisfy $\dim\ker X^k=\dim\ker Y^k$ for $k\le 2r-1$, $\dim\ker X^{2r}=2n-d_1=\dim\ker Y^{2r}+d_1$ and $\dim\ker X^{2r+1}=2n=\dim\ker Y^{2r+1}+2d_1$ where $Y\in\liea{sp}_{2n-2d_1}$ is defined by the line diagram $L_{odd}(d')$. If the partition of $Y$ is $1^{b_1},2^{b_2}, \dots,(2r-1)^{b_{2r-1}}$ then the partition of $X$ is \[ 1^{b_1},\dots,(2r-2)^{b_{2r-2}}, (2r-1)^{b_{2r-1}-d_1}, (2r)^0,(2r+1)^{d_1}. \] Thus the dual of this partition is the dual of the partition of $Y$ together with the parts $d_1,d_1$.
If $d_1$ is even and $d_1>d_{r+1}$, the diagram $L_{odd}(d)$ is obtained from $L_{odd}(d')$ by extending $d_{r+1}$ $(2r-2)$-chains to $2r$-chains and by extending $d_1-d_{r+1}$ $(2r-3)$-chains to $(2r-1)$-chains. Here we get $\dim\ker X^k =\dim\ker Y^k$ for $k\le 2r-2$, $\dim\ker X^{2r-1}= \dim\ker Y^{2r-1}+d_1-d_{r+1}$, $\dim\ker X^{2r}= 2n-d_{r+1}=\dim\ker Y^{2r}+2d_1-d_{r+1}$ and $\dim\ker X^{2r+1}=2n=\dim\ker Y^{2r+1}+2d_1$. So the partition of $X$ can be calculated to be \[ 1^{b_1},\dots,(2r-3)^{b_{2r-3}},(2r-2)^{b_{2r-2}-d_1+d_{r+1}}, (2r-1)^{b_{2r-1}-d_{r+1}}, (2r)^{d_1-d_{r+1}},(2r+1)^{d_{r+1}} \] with $b_{2r-1}=d_{r+1}$. Again, the dual of the partition of $X$ is obtained from the dual of the partition of $Y$ by adding $d_1,d_1$. Let $d_1$ be odd and $d_1>d_{r+1}$. In particular, there are no odd $d_i$ that are smaller than $d_{r+1}$. The shape of $L_{odd}(d)$ is the same as the diagram for $\liea{sl}_{2n}$ (i.e. they have the same chain lengths). So the dual of the partition is just the dimension vector and we are done. If $d_1<d_{r+1}$, the diagram $L_{odd}(d)$ is obtained from $L_{odd}(d')$ by extending $d_1-1$ $(2r-2)$-chains to $2r$-chains and by extending two $(2r-2)$-chains to $(2r-1)$-chains. The calculations of the dimensions of the kernels for $X$ (compared to those for $Y$) give as partition of $X$: \[ 1^{b_1},\dots,(2r-2)^{b_{2r-2}}, (2r-1)^{b_{2r-1}-d_1-1}, (2r)^{2},(2r+1)^{d_1-1} \] Hence the dual of the partition of $X$ is obtained from the dual of the partition of $Y$ by adjoining $d_1+1,d_1-1$. \end{proof} \end{document}
\begin{document} \pagestyle{fancy} \title{ \fontfamily{phv}\selectfont\bfseries\Large A martingale approach for Pólya urn processes} \author{\fontfamily{phv}\selectfont\bfseries Lucile Laulin} \date{} \AtEndDocument{ {\footnotesize \textsc{Université de Bordeaux, Institut de Mathématiques de Bordeaux, UMR 5251, 351 Cours de la Libération, 33405 Talence cedex, France.} \par \textit{E-mail address :} \href{mailto:[email protected]}{\texttt{[email protected]}} \par }} \maketitle \centerline{ \begin{minipage}[c]{0.7\textwidth} {\small\section*{Abstract} This paper is devoted to a direct martingale approach for the asymptotic behaviour of Pólya urn models. A Pólya process is said to be small when the ratio of the eigenvalues of its replacement matrix is less than or equal to $1/2$; otherwise it is called large. We recover some well-known results on the asymptotic behaviour of small and large urn processes. We also provide new almost sure properties for small urn processes. }\end{minipage} } \setlength{\parindent}{0pt} \section{Introduction} At the initial time $n=0$, an urn is filled with $\alpha \geq 0$ red balls and $\beta \geq 0$ white balls. Then, at any time $n\geq 1$ one ball is drawn randomly from the urn and its color observed. If it is red it is then returned to the urn together with $a$ additional red balls and $b\geq 0$ white ones. If it is white it is then returned to the urn together with $c\geq 0$ additional red balls and $d$ white ones. The corresponding replacement matrix of the model is given, for $a,b,c,d\in\mathbb{N}$, by \begin{equation} R= \begin{pmatrix}a & b \\ c & d\end{pmatrix}. \end{equation} The urn process is said to be {\it balanced} if the total number of balls added at each step is a constant, $S =a+b=c+d \geq 1$. Thanks to the balance assumption, $S$ is the maximum eigenvalue of $R^T$. Moreover, the second eigenvalue of $R^T$ is given by $m=a-c=d-b$.
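The spectral data above are easy to verify on a concrete balanced matrix. The following sketch (helper name and sample values are ours, chosen only for illustration) checks $R^T v_1 = S v_1$ and $R^T v_2 = m v_2$ in exact rational arithmetic:

```python
from fractions import Fraction as F

def check_spectrum(a, b, c, d):
    """Verify R^T v1 = S v1 and R^T v2 = m v2 for a balanced replacement
    matrix R = [[a, b], [c, d]] (so S = a+b = c+d and m = a-c = d-b)."""
    assert a + b == c + d, "urn must be balanced"
    S, m = a + b, a - c
    RT = [[a, c], [b, d]]                       # transpose of R
    v1 = [F(S * c, b + c), F(S * b, b + c)]     # eigenvector for S
    v2 = [F(S, b + c), F(-S, b + c)]            # eigenvector for m
    for v, lam in ((v1, S), (v2, m)):
        w = [RT[0][0] * v[0] + RT[0][1] * v[1],
             RT[1][0] * v[0] + RT[1][1] * v[1]]
        assert w == [lam * v[0], lam * v[1]]
    return S, m

# e.g. R = [[2, 1], [1, 2]] gives S = 3, m = 1, hence sigma = 1/3
```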
Throughout the rest of this paper, we shall denote \begin{equation*} \sigma = m/S\leq 1 \end{equation*} the ratio of the two eigenvalues. It is straightforward that the respective eigenvectors of $R^T$ are given by \begin{equation*} v_1 = \frac{S}{b+c} \begin{pmatrix} c \\ b\end{pmatrix} \hspace{1cm}\text{and}\hspace{1cm} v_2 = \frac{S}{b+c} \begin{pmatrix} 1 \\ -1\end{pmatrix}. \end{equation*} We can rewrite $R^T$ in the following form \begin{equation*} R^T = PDP^{-1} = \frac{1}{b+c} \begin{pmatrix} c & 1 \\ b & -1 \end{pmatrix} \begin{pmatrix} S & 0 \\ 0 & m \end{pmatrix}\begin{pmatrix} 1 & 1 \\ b & -c \end{pmatrix}. \end{equation*} Hereafter, let us define the process $(U_n)$, the composition of the urn at time $n$, by \begin{equation*} U_n=\begin{pmatrix}X_n \\ Y_n\end{pmatrix} \hspace{1cm}\text{and}\hspace{1cm} U_0=\begin{pmatrix}\alpha \\ \beta\end{pmatrix} \end{equation*} where $X_n$ is the number of red balls and $Y_n$ is the number of white ones. Then, let $\tau=\alpha+\beta \geq 1$ and $\tau_n=\tau+nS$ be the number of balls in the urn at time $n$. In particular, one can observe that $X_n + Y_n = \tau_n$ is a deterministic quantity. The traditional Pólya urn model corresponds to the case where the replacement matrix $R$ is diagonal, while the generalized Pólya urn model corresponds to the case where the replacement matrix $R$ is at least triangular. The question of the asymptotic behavior of $(U_n)$ has been extensively studied, first by Freedman \cite{freedman65} and by many others since, see for example \cite{Chauvin2011,Flajolet06,Flajolet05,Janson04,Pouyanne08,janson18}. We also refer the reader to Pouyanne's CIMPA summer school lectures 2014 \cite{cimpa14} for a very comprehensive survey on Pólya urn processes that has been a great source of inspiration. The reader may notice that this paper is related to Bercu \cite{Bercu18} on the elephant random walk.
This is due to the paper of Baur and Bertoin \cite{Baur16} on the connection between elephant random walks and Pólya-type urns. Our strategy is to use martingale theory \cite{Duflo97,Hall80} in order to propose a direct proof of the asymptotic normality associated with $(U_n)$. We also establish new refinements on the almost sure convergence of $(U_n)$. The paper is organized as follows. In Section 2, we briefly present the traditional Pólya urn model, as well as the martingale related to this case. We establish the almost sure convergence and the asymptotic normality for this martingale. In Section 3, we present the generalized Pólya urn model together with the martingale related to this case, and we also give the main results for this model. We first investigate the small urn regime where $\sigma \leq 1/2$ and we establish the almost sure convergence, the law of iterated logarithm and the quadratic strong law for $(U_n)$. The asymptotic normality of the urn composition is also provided. We finally study the large urn regime where $\sigma >1/2$ and we prove the almost sure convergence as well as the mean square convergence of $(U_n)$ to a non-degenerate random vector whose moments are given. The proofs are postponed to Sections 4 and 5. \section{Traditional Pólya urn model} This model corresponds to the case where the replacement matrix is diagonal \begin{equation*} R= \begin{pmatrix} S & 0 \\ 0 & S \end{pmatrix}. \end{equation*} It means that at any time $n\geq 1$, one ball is drawn randomly from the urn, its color observed, and it is then returned to the urn together with $S \geq 1$ additional balls of the same color. Let us define the process $(M_n)$ by \begin{equation*} M_n= \frac{X_n}{\tau_n} \end{equation*} and write \begin{equation} X_n=\alpha+S\sum_{k=1}^n \varepsilon_k \end{equation} where the conditional distribution of $\varepsilon_{n+1}$ given the past up to time $n$ is $\mathcal{L}(\varepsilon_{n+1} |\mathcal{F}_n)=\mathcal{B}(M_n)$.
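As a quick sanity check on this model, one can simulate the proportion $M_n = X_n/\tau_n$: since $(M_n)$ is a martingale, its expectation stays at $\alpha/(\alpha+\beta)$ for every $n$, so a Monte Carlo average should hover near that value. A minimal Python sketch (the names and parameter choices are ours, for illustration only):

```python
import random

def traditional_Mn(alpha, beta, S, n, rng):
    """Simulate M_n = X_n / tau_n for the diagonal replacement matrix diag(S, S)."""
    x, total = alpha, alpha + beta
    for _ in range(n):
        if rng.random() < x / total:  # red ball drawn: add S red balls
            x += S
        total += S                    # S balls are added at every step
    return x / total

rng = random.Random(1)
samples = [traditional_Mn(1, 1, 1, 200, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)  # should stay close to alpha/(alpha+beta) = 1/2
```

With $\alpha=\beta=S=1$ the limit distribution is Beta$(1,1)$, i.e. uniform on $[0,1]$, so individual samples spread out over the whole interval while their average concentrates near $1/2$.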
We clearly have \begin{equation*} \mathbb{E}[M_{n+1}|\mathcal{F}_n] = M_n \end{equation*} which means that $(M_n)$ is a martingale. We have $\Delta M_{n+1}=\frac{S}{\tau_{n+1}}\big(\varepsilon_{n+1}-M_n\big)$. Hence, \begin{equation*} \mathbb{E}\bigl[\Delta M_{n+1}^2 | \mathcal{F}_n \bigr] = \frac{S^2}{\tau^2_{n+1}}\Big(\mathbb{E}\bigl[\varepsilon_{n+1}^2 | \mathcal{F}_n \bigr] -M_n^2\Big)= \frac{S^2M_n(1-M_n)}{\tau_{n+1}^2}. \end{equation*} We now focus our attention on the asymptotic behavior of $(M_n)$. \begin{theorem} \label{T-tradi-as} The process $(M_n)$ converges to a random variable $M_\infty$ almost surely and in any $\mathbb{L}^p$ for $p\geq1$. The limit $M_\infty$ has a beta distribution, with parameters $\frac{\alpha}{S}$ and $\frac{\beta}{S}$. \end{theorem} \begin{remark} This result was first proved by Freedman, see Theorem 2.2 in \cite{freedman65}. \end{remark} Our first new result on the Gaussian fluctuations of $(M_n)$ is as follows. \begin{theorem} \label{T-tradi-dis} We have the following convergence in distribution \begin{equation} \label{CVMNth-norm} \sqrt{n}\frac{ M_\infty - M_n}{\sqrt{M_n(1-M_n)}} \underset{n\to\infty}{\overset{\mathcal{L}}{\longrightarrow}} \mathcal{N}\big(0,1\big). \end{equation} \end{theorem} \section{Generalized Pólya urn model} This model corresponds to the case where the replacement matrix is not diagonal, \begin{equation*} R= \begin{pmatrix} a & b \\ c & d \end{pmatrix}. \end{equation*} Let us rewrite \begin{equation*} X_n=\alpha + a\sum_{k=1}^n \varepsilon_k + c\sum_{k=1}^n (1-\varepsilon_k) \end{equation*} where the conditional distribution of $\varepsilon_{n+1}$ given the past up to time $n$ is $\mathcal{L}(\varepsilon_{n+1} |\mathcal{F}_n)=\mathcal{B}(\tau_n^{-1}X_n)$.
We have \begin{equation*} U_{n+1}= U_n +R^T \begin{pmatrix} \varepsilon_{n+1} \\ 1-\varepsilon_{n+1}\end{pmatrix} \end{equation*} and \begin{equation*} U_{n} - \mathbb{E}[U_n] = \begin{pmatrix} X_n -\mathbb{E}[X_n] \\ Y_n - \mathbb{E}[Y_n] \end{pmatrix} = \big(X_n -\mathbb{E}[X_n]\big)\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \frac{b+c}{S} \big( X_n -\mathbb{E}[X_n] \big) v_2. \end{equation*} Hence, we obtain that \begin{eqnarray} \mathbb{E}\big[U_{n+1} - \mathbb{E}[U_{n+1}]|\mathcal{F}_n\big] & = & \nonumber U_{n} - \mathbb{E}[U_n] + R^T\mathbb{E}\Big[ \begin{pmatrix} \varepsilon_{n+1} \\ 1-\varepsilon_{n+1}\end{pmatrix} - \mathbb{E}\big[ \begin{pmatrix} \varepsilon_{n+1} \\ 1-\varepsilon_{n+1}\end{pmatrix}\big]|\mathcal{F}_n\Big] \\ \nonumber & = & \big(I_2 + \tau_n^{-1}R^T\big)\Big(U_n - \mathbb{E}[U_n]\Big) \\ \nonumber & = & \big(X_n - \mathbb{E}[X_n]\big)\big(I_2 + \tau_n^{-1}R^T\big)\begin{pmatrix}1 \\ -1\end{pmatrix} \\\nonumber & = & \big(1+\tau_n^{-1}m\big)\big(X_n - \mathbb{E}[X_n]\big)\begin{pmatrix}1 \\ -1\end{pmatrix} \\ & = & \big(1+\tau_n^{-1}m\big)\big(U_{n} - \mathbb{E}[U_n]\big). \label{EspUn-gene} \end{eqnarray} Finally, denote \begin{equation} \label{DEF-sigma-n} \sigma_n=\prod_{k=0}^{n-1}\big(1+\tau_k^{-1}m\big)^{-1}=\frac{\Gamma(n+\frac{\tau}{S})\Gamma(\frac{\tau}{S}+\sigma)}{\Gamma(\frac{\tau}{S})\Gamma(n+\frac{\tau}{S}+\sigma)}. \end{equation} One can observe that \begin{equation} \label{LIMIT-sigma-n} \lim_{n\to\infty} {n^\sigma}\sigma_n=\frac{\Gamma(\frac{\tau}{S}+\sigma)}{\Gamma(\frac{\tau}{S})}. \end{equation} Hereafter, we define the process $(M_n)$ by \begin{equation} \label{defMn-gene} M_n = \sigma_n\big(U_n - \mathbb{E}[U_n]\big). \end{equation} Thanks to equation \eqref{EspUn-gene} we immediately get that \begin{equation*} \label{Mn-martg} \mathbb{E}[M_{n+1}|\mathcal{F}_n] = M_n. \end{equation*} Hence, the sequence $(M_n)$ is a locally bounded and square integrable martingale. We are now in a position to compute the quadratic variation of $(M_n)$.
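The closed form in \eqref{DEF-sigma-n} follows because $1+\tau_k^{-1}m = (k+\frac{\tau}{S}+\sigma)/(k+\frac{\tau}{S})$, so the finite product telescopes into a ratio of Gamma functions. A small numerical sketch (our own illustration, using Python's log-gamma for numerical stability) checks the identity together with the limit \eqref{LIMIT-sigma-n}:

```python
import math

def sigma_n_product(n, tau, S, m):
    """sigma_n as the finite product prod_{k<n} (1 + m / tau_k)^{-1}."""
    prod = 1.0
    for k in range(n):
        prod /= 1.0 + m / (tau + k * S)
    return prod

def sigma_n_gamma(n, tau, S, m):
    """Closed form of sigma_n via Gamma functions; lgamma avoids overflow for large n."""
    r, s = tau / S, m / S
    return math.exp(math.lgamma(n + r) + math.lgamma(r + s)
                    - math.lgamma(r) - math.lgamma(n + r + s))
```

For instance, with $\tau=2$, $S=3$, $m=1$ (so $\sigma=1/3$) the two expressions agree to machine precision, and $n^{\sigma}\sigma_n$ approaches $\Gamma(\frac{\tau}{S}+\sigma)/\Gamma(\frac{\tau}{S})$ as $n$ grows.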
First of all \begin{equation} \Delta M_{n+1} = m \sigma_{n+1} \big( \varepsilon_{n+1} -\mathbb{E}[\varepsilon_{n+1}|\mathcal{F}_n] \big) \begin{pmatrix} 1 \\ -1\end{pmatrix} = m \sigma_{n+1} \big( \varepsilon_{n+1} -\tau_n^{-1} X_n \big) \begin{pmatrix} 1 \\ -1\end{pmatrix} \label{DeltaMn-1}. \end{equation} Moreover, \begin{equation} \label{ESP-eps-quad} \mathbb{E}\big[\big(\varepsilon_{n+1} -\tau_n^{-1} X_n \big)^2 \big| \mathcal{F}_n \big] = \tau_n^{-1} X_n \big(1-\tau_n^{-1} X_n\big). \end{equation} Consequently, we obtain from \eqref{DeltaMn-1} and \eqref{ESP-eps-quad} that \begin{equation} \mathbb{E}\big[\Delta M_{n+1} \Delta M_{n+1}^T \big| \mathcal{F}_n \big] = m^2\sigma_{n+1}^{2}\tau_n^{-1} X_n \big(1-\tau_n^{-1} X_n\big) \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}. \label{ESP-delta-Mn-T} \end{equation} Therefore \begin{eqnarray} \langle M\rangle_n & = & \sum_{k=0}^{n-1} \mathbb{E}\big[\Delta M_{k+1} \Delta M_{k+1}^T \big| \mathcal{F}_k \big] \nonumber\\ & = & m^2 \begin{pmatrix}1 & -1 \\ -1 & 1\end{pmatrix} \sum_{k=0}^{n-1}\sigma_{k+1}^{2}\tau_k^{-1} X_k \big(1-\tau_k^{-1} X_k\big) \label{quadMn}. \end{eqnarray} It is not hard to see that \begin{equation} \label{Tr-wn} \text{Tr} \langle M\rangle_n \leq m^2 w_n \hspace{1cm}\text{where}\hspace{1cm} w_n = \sum_{k=1}^{n}\sigma_{k}^{2}. \end{equation} The asymptotic behavior of $(M_n)$ is closely related to that of $(w_n)$, with the following trichotomy: \begin{itemize} \item The diffusive regime where $\sigma <1/2$: the urn is said to be small and we have \begin{equation*} \lim_{n\to\infty} \frac{w_n}{n^{1-2\sigma}} = \frac{\lambda^2}{1-2\sigma} \hspace{1cm} \text{where} \hspace{1cm} \lambda= \frac{\Gamma(\frac{\tau}{S}+\sigma)}{\Gamma(\frac{\tau}{S})}. \end{equation*} \item The critical regime where $\sigma = 1/2$: the urn is said to be critically small and we have \begin{equation*} \lim_{n\to\infty} \frac{w_n}{\log n} = \Big(\frac{\Gamma(\frac{\tau}{S}+\frac{1}{2})}{\Gamma(\frac{\tau}{S})}\Big)^2.
\end{equation*} \item The superdiffusive regime where $\sigma > 1/2$: the urn is said to be large and we have \begin{equation*} \lim_{n\to\infty} {w_n} = \sum_{k=0}^{\infty} \Big(\frac{\Gamma(k+\frac{\tau}{S})\Gamma(\frac{\tau}{S}+\sigma)}{\Gamma(\frac{\tau}{S})\Gamma(k+\frac{\tau}{S}+\sigma)}\Big)^2. \end{equation*} \end{itemize} \begin{proposition} \label{P-ESP-limit} We have for small and large urns \begin{equation} \mathbb{E}[U_n] = n v_1 + \sigma_n^{-1} \Big(\frac{b\alpha - c\beta}{S}\Big)v_2 + \frac{\tau}{S}v_1. \label{ESP-sigma} \end{equation} \end{proposition} \begin{proof}{Proposition}{\ref{P-ESP-limit}} First of all, denote $\Lambda_n = I_2 + \tau_n^{-1}R^T = P \big(I_2 +\tau_n^{-1}D\big) P^{-1}$ and $T_n= \prod_{k=0}^{n-1} \Lambda_k$. For any $n\in\mathbb{N}$, $T_n$ is diagonalizable and \begin{equation*} T_n = P D_n P^{-1} = \frac{1}{b+c}\begin{pmatrix} c & 1 \\ b & -1\end{pmatrix} \begin{pmatrix} \tau_n/\tau & 0 \\ 0 & \sigma_n^{-1}\end{pmatrix} \begin{pmatrix} 1 & 1 \\ b & -c\end{pmatrix}. \end{equation*} Since $\mathbb{E}[U_{n+1}|\mathcal{F}_n]= \Lambda_n U_n$ we easily get that $\mathbb{E}[U_n]= T_n U_0$, which leads to \begin{eqnarray*} \mathbb{E}[U_n] & = & \frac{1}{b+c}\Big(\frac{\tau_n}{\tau} \begin{pmatrix} c & c \\ b & b \end{pmatrix} + \sigma_n^{-1} \begin{pmatrix} b & -c \\ -b & c \end{pmatrix}\Big)U_0\\ & = & n v_1 + \frac{\tau}{S}v_1 + \sigma_n^{-1} \frac{b\alpha - c\beta}{S}v_2. \end{eqnarray*} \end{proof} \subsection{Small urns} The almost sure convergence of $(U_n)$ for small urns is due to Janson, Theorem 3.16 in \cite{Janson04}. \begin{theorem} \label{T-general-as-small} When the urn is small, $\sigma <1/2$, we have the following convergence \begin{equation} \label{general-as-small} \lim_{n\to\infty}\frac{U_n}{n} = v_1 \end{equation} almost surely and in any $\mathbb{L}^p$, $p\geq 1$. \end{theorem} Our new refinements on the almost sure rates of convergence are as follows.
\begin{theorem} \label{T-general-LFQLIL-small} When the urn is small and $bc\neq0$, we have the quadratic strong law \begin{equation} \label{LFQsmall} \lim_{n \rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n \frac{1}{k^2}(U_k-kv_1) (U_k-kv_1)^T=\frac{1}{1-2\sigma}\frac{bcm^2}{(b+c)^2}\begin{pmatrix}1 & -1 \\ -1 & 1\end{pmatrix} \hspace{1cm} \text{a.s.} \end{equation} In particular, \begin{equation} \lim_{n \rightarrow \infty} \frac{1}{ \log n} \sum_{k=1}^n \frac{\|U_k-kv_1\|^2}{k^2}= \frac{2}{1-2\sigma}\frac{bcm^2}{(b+c)^2} \hspace{1cm} \text{a.s.} \label{LFQNORMsmall} \end{equation} Moreover, we have the law of iterated logarithm \begin{equation} \limsup_{n \rightarrow \infty} \frac{\|U_n-nv_1\|^2}{2 n \log \log n} = \frac{2}{1-2\sigma}\frac{bcm^2}{(b+c)^2} \hspace{1cm} \text{a.s.} \label{LILsmall} \end{equation} \end{theorem} \begin{remark} The law of iterated logarithm for $(X_n)$ was previously established by Bai, Hu and Zhang via a strong approximation argument, see Corollary 2.1 in \cite{bai02}. \end{remark} \begin{theorem} \label{T-general-dis-small} When the urn is small and $bc\neq0$, we have the following asymptotic normality \begin{equation} \frac{U_n - n v_1}{\sqrt{n}} \overset{\mathcal{L}}{\underset{n\to\infty}{\longrightarrow}} \mathcal{N} \big(0,\Gamma\big) \end{equation} where $\displaystyle \Gamma =\frac{1}{1-2\sigma}\frac{bcm^2}{(b+c)^2}\begin{pmatrix}1 & -1 \\ -1 & 1\end{pmatrix}$. \end{theorem} \begin{remark} An invariance principle for $(X_n)$ was proved by Gouet, see Proposition 2.1 in \cite{gouet93}. \end{remark} \subsection{Critically small urns} The almost sure convergence of $(U_n)$ for critically small urns is again due to Janson, Theorem 3.16 in \cite{Janson04}.
\begin{theorem} \label{T-general-as-crit} When the urn is critically small, $\sigma=1/2$, we have the following convergence \begin{equation} \label{general-as-crit} \lim_{n\to\infty}\frac{U_n}{n} = v_1 \end{equation} almost surely and in any $\mathbb{L}^p$, $p\geq 1$. \end{theorem} Once again, we have some refinements on the almost sure rates of convergence. \begin{theorem} \label{T-general-LFQLIL-crit} When the urn is critically small and $bc\neq0$, we have the quadratic strong law \begin{equation} \label{LFQsmallcrit} \lim_{n \rightarrow \infty} \frac{1}{\log \log n} \sum_{k=1}^n \frac{1}{(k\log k)^2}(U_k-kv_1) (U_k-kv_1)^T=bc\begin{pmatrix}1 & -1 \\ -1 & 1\end{pmatrix} \hspace{1cm} \text{a.s.} \end{equation} In particular, \begin{equation} \lim_{n \rightarrow \infty} \frac{1}{ \log \log n} \sum_{k=1}^n \frac{\|U_k-kv_1\|^2}{(k\log k)^2}= 2 bc \hspace{1cm} \text{a.s.} \label{LFQNORMsmallcrit} \end{equation} Moreover, we have the law of iterated logarithm \begin{equation} \limsup_{n \rightarrow \infty} \frac{\|U_n-nv_1\|^2}{2 \log n \log \log \log n} = 2 bc \hspace{1cm} \text{a.s.} \label{LILsmallcrit} \end{equation} \end{theorem} \begin{remark} The law of iterated logarithm for $(X_n)$ was also established by Bai, Hu and Zhang via a strong approximation argument, see Corollary 2.2 in \cite{bai02}. \end{remark} \begin{theorem} \label{T-general-dis-crit} When the urn is critically small and $bc\neq0$, we have the following asymptotic normality \begin{equation} \frac{U_n - n v_1}{\sqrt{n\log n}} \overset{\mathcal{L}}{\underset{n\to\infty}{\longrightarrow}} \mathcal{N} \big(0,\Gamma\big) \end{equation} where $\displaystyle \Gamma =bc\begin{pmatrix}1 & -1 \\ -1 & 1\end{pmatrix}$. \end{theorem} \begin{remark} An invariance principle for $(X_n)$ was also proven by Gouet, see Proposition 2.1 in \cite{gouet93}. \end{remark} \subsection{Large urns} The convergence of $n^{-\sigma}(U_n-nv_1)$ to $Wv_2$ first appeared in Pouyanne \cite{Pouyanne08}, Theorem 3.5.
The almost sure convergence of $(U_n)$ for large urns is again due to Janson, Theorem 3.16 in \cite{Janson04}. The explicit calculation of the moments of $W$ is new. \begin{theorem} \label{T-general-as-large} When the urn is large, $\sigma>1/2$, we have the following convergence \begin{equation} \label{EQ-as-large} \lim_{n\to\infty}\frac{U_n}{n} = v_1 \end{equation} almost surely and in any $\mathbb{L}^p$, $p\geq 1$. Moreover, we also have \begin{equation} \label{CVGLW} \lim_{n\to\infty}\frac{U_n-n v_1}{n^\sigma} = W v_2 \end{equation} almost surely and in $\mathbb{L}^2$, where $W$ is a real-valued random variable and \begin{equation} \mathbb{E}[W]= \frac{\Gamma(\frac{\tau}{S})}{\Gamma(\frac{\tau}{S}+\sigma)}\frac{b\alpha - c\beta}{S}, \end{equation} \begin{equation} \mathbb{E}[W^2] = \sigma^2\frac{\Gamma(\frac{\tau}{S})}{\Gamma(\frac{\tau}{S}+2\sigma)} \Big(\frac{bc}{2\sigma-1}\frac{\tau}{S} + (b-c)\frac{b\alpha - c\beta}{\sigma S} + \frac{(b\alpha - c\beta)^2}{\sigma^2S^2} \Big). \end{equation} \end{theorem} \section{Proofs of the almost sure convergence results} \subsection{Generalized urn model -- small urns} \begin{proof}{Theorem}{\ref{T-general-as-small}} We denote the maximum eigenvalue of $\langle M\rangle_n$ by $\lambda_{max} \langle M\rangle_n$. We make use of the strong law of large numbers for martingales given e.g.
by Theorem 4.3.15 of \cite{Duflo97}, that is for any $\gamma >0$, \begin{equation*} \frac{\|M_n\|^2}{\lambda_{max} \langle M\rangle_n} = o \big((\log \text{Tr} \langle M\rangle_n)^{1+\gamma}\big) \hspace{1cm}\text{a.s.} \end{equation*} It follows from \eqref{Tr-wn} that \begin{equation*} \|M_n\|^2 = o \big( w_n (\log w_n)^{1+\gamma}\big) \hspace{1cm}\text{a.s.} \end{equation*} which implies \begin{equation*} \|M_n\|^2 = o \big( n^{1-2\sigma} (\log n)^{1+\gamma}\big) \hspace{1cm}\text{a.s.} \end{equation*} Hence, we deduce from \eqref{LIMIT-sigma-n} and \eqref{defMn-gene} that \begin{equation*} \|U_n-\mathbb{E}[U_n]\|^2 = o \big( n (\log n)^{1+\gamma}\big) \hspace{1cm}\text{a.s.} \end{equation*} which completes the proof for the almost sure convergence. The convergence in any $\mathbb{L}^p$ for $p\geq 1$ holds since $n^{-1}\|U_n-\mathbb{E}[U_n]\|$ is uniformly bounded by $2\sqrt{2}(\tau +S)$. \end{proof} \begin{proof}{Theorem}{\ref{T-general-LFQLIL-small}} We shall make use of Theorem 3 of \cite{Bercu04}. For any $u\in\mathbb{R}^2$ let $M_n(u)= \langle u , M_n\rangle$ and denote $\displaystyle{f_n=\frac{\sigma_n^2}{w_n}}$. We have from \eqref{LIMIT-sigma-n} that $f_n$ is equivalent to $(1-2\sigma)n^{-1}$ and converges to 0. 
Moreover, we obtain from equations \eqref{quadMn}, \eqref{general-as-small} and the Toeplitz lemma that \begin{eqnarray*} \lim_{n\to\infty} \frac{1}{w_n}\langle M\rangle_n & = & \lim_{n\to\infty} \frac{m^2}{w_n}\begin{pmatrix}1 & -1 \\ -1 & 1\end{pmatrix}\sum_{k=0}^{n-1}\sigma_{k+1}^{2}\tau_k^{-1} X_k \big(1-\tau_k^{-1} X_k\big) \\ & = & \frac{bcm^2}{(b+c)^2}\begin{pmatrix}1 & -1 \\ -1 & 1\end{pmatrix}\hspace{1cm}\text{a.s.} \end{eqnarray*} which implies that \begin{equation} \label{Mnwnrates} \lim_{n\to\infty} \frac{1}{w_n}\langle M\rangle_n = (1-2\sigma)\Gamma \hspace{1cm}\text{a.s.} \end{equation} Therefore, we get from \eqref{Mnwnrates} that \begin{equation*} \lim_{n\to\infty} \frac{1}{\log w_n} \sum_{k=1}^n f_k \Big(\frac{M_k(u)^2}{w_k}\Big) = (1-2\sigma)u^T \Gamma u\hspace{1cm}\text{a.s.} \end{equation*} which leads to \begin{equation*} \lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^n f_k^2 u^T (U_k-\mathbb{E}[U_k]) (U_k-\mathbb{E}[U_k])^T u = (1-2\sigma)^2 u^T \Gamma u\hspace{1cm}\text{a.s.} \end{equation*} Furthermore, we have from \eqref{ESP-sigma} that $\mathbb{E}[U_n]$ is equivalent to $nv_1$. Consequently, we obtain that \begin{equation*} \lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^n \frac{1}{k^2} (U_k-kv_1) (U_k-kv_1)^T = \Gamma \hspace{1cm}\text{a.s.} \end{equation*} We now focus our attention on the law of iterated logarithm. We already saw that \begin{equation*} \sum_{n=1}^{\infty} \frac{\sigma_n^4}{w_n^2}< \infty.
\end{equation*} Hence, it follows from the law of iterated logarithm for real martingales that first appeared in Stout \cite{Stout70,Stout74}, that for any $u\in\mathbb{R}^2$, \begin{eqnarray*} \underset{n \to \infty}{\lim \sup} \frac{1}{\sqrt{2w_n\log\log w_n}}M_n(u) & = & - \underset{n \to \infty}{\lim \inf} \frac{1}{\sqrt{2w_n\log\log w_n}}M_n(u) \\ & = & \sqrt{(1-2\sigma) u^T \Gamma u }\hspace{1cm} \text{a.s.} \end{eqnarray*} Consequently, as $M_n(u) =\sigma_n\langle u, U_n - \mathbb{E}[U_n] \rangle$, we obtain that \begin{eqnarray*} \underset{n \to \infty}{\lim \sup} \frac{1}{\sqrt{2 n\log\log n}}\langle u, U_n - \mathbb{E}[U_n] \rangle & = & - \underset{n \to \infty}{\lim \inf} \frac{1}{\sqrt{2n\log\log n}}\langle u, U_n - \mathbb{E}[U_n] \rangle \\ & = & \sqrt{ u^T \Gamma u }\hspace{1cm} \text{a.s.} \end{eqnarray*} In particular, for any vector $u\in\mathbb{R}^2$ \begin{equation*} \underset{n \to \infty}{\lim \sup} \frac{1}{2 n\log\log n} u^T (U_n - \mathbb{E}[U_n])(U_n - \mathbb{E}[U_n])^T u = u^T \Gamma u \hspace{1cm} \text{a.s.} \end{equation*} Finally, we deduce once again from \eqref{ESP-sigma} that \begin{equation*} \underset{n \to \infty}{\lim \sup} \frac{1}{2 n\log\log n} (U_n - nv_1 ) (U_n - nv_1)^T= \Gamma \hspace{1cm} \text{a.s.} \end{equation*} which completes the proof of Theorem \ref{T-general-LFQLIL-small}. \end{proof} \subsection{Generalized urn model -- critically small urns} \begin{proof}{Theorem}{\ref{T-general-as-crit}} Again, we make use of the strong law of large numbers for martingales given e.g.
by Theorem 4.3.15 of \cite{Duflo97}, that is for any $\gamma >0$, $$\frac{\|M_n\|^2}{\lambda_{max} \langle M\rangle_n} = o \big((\log \text{Tr} \langle M\rangle_n)^{1+\gamma}\big) \hspace{1cm}\text{a.s.} $$ Since $\text{Tr} \langle M\rangle_n \leq m^2 w_n$ and the quadratic variation $\langle M\rangle_n$ is a positive semi-definite matrix, we have $\lambda_{max} \langle M\rangle_n \leq m^2 w_n$ so that $$\|M_n\|^2 = o \big( w_n (\log w_n)^{1+\gamma}\big) \hspace{1cm}\text{a.s.}$$ which implies $$\|M_n\|^2 = o \big( \log n (\log \log n)^{1+\gamma}\big) \hspace{1cm}\text{a.s.}$$ Moreover, by definition of $M_n$ and using the equivalent of $\sigma_n$ given by \eqref{LIMIT-sigma-n}, we get $$\|U_n-\mathbb{E}[U_n]\|^2 = o \big( n \log n (\log \log n)^{1+\gamma}\big) \hspace{1cm}\text{a.s.} $$ which completes the proof for the almost sure convergence. The convergence in any $\mathbb{L}^p$ for $p\geq 1$ holds by the same arguments as in the proof of Theorem \ref{T-general-as-small}. \end{proof} \begin{proof}{Theorem}{\ref{T-general-LFQLIL-crit}} We shall once again make use of Theorem 3 of \cite{Bercu04}. For any $u\in\mathbb{R}^2$ let $M_n(u)= \langle u , M_n\rangle$ and denote $\displaystyle{f_n=\frac{\sigma_n^2}{w_n}}$. We have from \eqref{LIMIT-sigma-n} that $f_n$ is equivalent to $(n\log n)^{-1}$ and converges to 0. When $\sigma=1/2$ we have $b+c=m$.
Moreover, we obtain from equations \eqref{quadMn}, \eqref{general-as-crit} and the Toeplitz lemma that \begin{eqnarray*} \lim_{n\to\infty} \frac{1}{w_n}\langle M\rangle_n & = & \lim_{n\to\infty} \frac{m^2}{w_n}\begin{pmatrix}1 & -1 \\ -1 & 1\end{pmatrix}\sum_{k=0}^{n-1}\sigma_{k+1}^{2}\tau_k^{-1} X_k \big(1-\tau_k^{-1} X_k\big) \\ & = & {bc}\begin{pmatrix}1 & -1 \\ -1 & 1\end{pmatrix}\hspace{1cm}\text{a.s.} \end{eqnarray*} which implies that \begin{equation} \label{Mnwnrates-crit} \lim_{n\to\infty} \frac{1}{w_n}\langle M\rangle_n = \Gamma \hspace{1cm}\text{a.s.} \end{equation} Therefore, we get from \eqref{Mnwnrates-crit} that \begin{equation*} \lim_{n\to\infty} \frac{1}{\log w_n} \sum_{k=1}^n f_k \Big(\frac{M_k(u)^2}{w_k}\Big) = u^T \Gamma u\hspace{1cm}\text{a.s.} \end{equation*} which leads to \begin{equation*} \lim_{n\to\infty} \frac{1}{\log \log n} \sum_{k=1}^n f_k^2 u^T (U_k-\mathbb{E}[U_k]) (U_k-\mathbb{E}[U_k])^T u = u^T \Gamma u\hspace{1cm}\text{a.s.} \end{equation*} Consequently, we obtain from \eqref{ESP-sigma} that \begin{equation*} \lim_{n\to\infty} \frac{1}{\log \log n} \sum_{k=1}^n \frac{1}{(k\log k)^2} (U_k-kv_1) (U_k-kv_1)^T = \Gamma \hspace{1cm}\text{a.s.} \end{equation*} We now focus our attention on the law of iterated logarithm. It is not hard to see that \begin{equation*} \sum_{n=1}^{\infty} \frac{\sigma_n^4}{w_n^2}< \infty.
\end{equation*} Hence, it follows from the law of iterated logarithm for real martingales that first appeared in Stout \cite{Stout70,Stout74}, that for any $u\in\mathbb{R}^2$, \begin{eqnarray*} \underset{n \to \infty}{\lim \sup} \frac{1}{\sqrt{2w_n\log\log w_n}}M_n(u) & = & - \underset{n \to \infty}{\lim \inf} \frac{1}{\sqrt{2w_n\log\log w_n}}M_n(u) \\ & = & \sqrt{u^T \Gamma u }\hspace{1cm} \text{a.s.} \end{eqnarray*} Consequently, we obtain that \begin{eqnarray*} \underset{n \to \infty}{\lim \sup} \frac{1}{\sqrt{2 \log n\log\log\log n}}\langle u, U_n - \mathbb{E}[U_n] \rangle & = & - \underset{n \to \infty}{\lim \inf} \frac{1}{\sqrt{2\log n\log\log\log n}}\langle u, U_n - \mathbb{E}[U_n] \rangle \\ & = & \sqrt{ u^T \Gamma u }\hspace{1cm} \text{a.s.} \end{eqnarray*} In particular, for any vector $u\in\mathbb{R}^2$ \begin{equation*} \underset{n \to \infty}{\lim \sup} \frac{1}{2\log n\log\log\log n} u^T (U_n - \mathbb{E}[U_n])(U_n - \mathbb{E}[U_n])^T u = u^T \Gamma u \hspace{1cm} \text{a.s.} \end{equation*} Finally, we deduce once again from \eqref{ESP-sigma} that \begin{equation*} \underset{n \to \infty}{\lim \sup} \frac{1}{2\log n\log\log\log n} (U_n - nv_1 ) (U_n - nv_1)^T= \Gamma \hspace{1cm} \text{a.s.} \end{equation*} which completes the proof of Theorem \ref{T-general-LFQLIL-crit}.
\end{proof} \subsection{Generalized urn model -- large urns} \begin{proof}{Theorem}{\ref{T-general-as-large}} First, as $\text{Tr}\langle M\rangle_n \leq m^2 w_n < \infty$, we have that $(M_n)$ converges almost surely to a random vector $Mv_2$, where $M$ is a real-valued random variable and \begin{equation*} \lim_{n\to\infty} \sigma_n \big( X_n -\mathbb{E}[X_n] \big) =\frac{S}{b+c} M = \frac{1}{1-\sigma} M \hspace{1cm}\text{a.s.} \end{equation*} Hence, it follows from \eqref{defMn-gene} that \begin{equation} \label{CV-Mv2} \lim_{n\to\infty} \sigma_n(U_n - \mathbb{E}[U_n])= Mv_2 \hspace{1cm}\text{a.s.} \end{equation} which implies via \eqref{LIMIT-sigma-n} that \begin{equation*} \lim_{n\to\infty} \sigma_n\|U_n - \mathbb{E}[U_n]\|=\lim_{n\to\infty} \frac{\lambda }{n^\sigma}\| U_n - \mathbb{E}[U_n]\| = \|Mv_2\|\hspace{1cm}\text{a.s.} \end{equation*} Therefore, we obtain that \begin{equation} \label{Un-large-cv} \lim_{n\to\infty} \frac{\| U_n - \mathbb{E}[U_n]\|}{n} = 0 \hspace{1cm}\text{a.s.} \end{equation} Hence, we deduce \eqref{EQ-as-large} from \eqref{CV-Mv2} and \eqref{Un-large-cv}. The convergence in any $\mathbb{L}^p$ for $p\geq 1$ holds again by the same arguments as before. We now focus our attention on equation \eqref{CVGLW}.
We have from \eqref{ESP-sigma} and \eqref{CV-Mv2} that \begin{equation*} \lim_{n\to \infty } \sigma_n \big( U_n -\mathbb{E}[U_n] \big) = \lim_{n\to \infty }\sigma_n \big(U_n -n v_1\big) - \Big(\frac{b\alpha - c\beta}{S}\Big)v_2 = Mv_2 \hspace{1cm}\text{a.s.} \end{equation*} Consequently, \begin{equation} \label{lim-UnW} \lim_{n\to \infty }\frac{U_n -n v_1}{n^\sigma} = Wv_2 \hspace{1cm}\text{a.s.} \end{equation} where the random variable $W$ is given by \begin{equation} \label{defW} W = \frac{1}{\lambda}\big(M + \frac{b\alpha - c\beta}{S}\big). \end{equation} Hereafter, as \begin{equation*} \mathbb{E}\big[\|M_n\|^2\big] = \mathbb{E}\big[\text{Tr}\langle M\rangle_n\big] \leq m^2 w_n, \end{equation*} we get that \begin{equation*} \sup_{n\geq 1} \mathbb{E}\big[\|M_n\|^2\big] < \infty \end{equation*} which means that $(M_n)$ is a martingale bounded in $\mathbb{L}^2$, thus converging in $\mathbb{L}^2$. Finally, as $\mathbb{E}[M_n]=0$ and $(M_n)$ converges in $\mathbb{L}^1$ to $M$, $\mathbb{E}[M]=0$. Hence, we find from \eqref{lim-UnW} that \begin{equation*} \mathbb{E}[W] = \frac{\Gamma(\frac{\tau}{S})}{\Gamma(\frac{\tau}{S}+\sigma)}\frac{b\alpha - c\beta}{S}. \end{equation*} We shall now proceed to the computation of $\mathbb{E}[W^2]$. We have from \eqref{defW} that \begin{equation} \label{ESP-MW-2} \mathbb{E}[M^2] = {\lambda^2}\mathbb{E}[W^2] - \frac{(b\alpha - c\beta)^2}{S^2}, \end{equation} so that we only need to find $\mathbb{E}[M^2]$.
It is not hard to see that \begin{equation*} \mathbb{E}\big[(X_{n+1}-\mathbb{E}[X_{n+1}])^2\big] = (1+2m\tau_n^{-1})\mathbb{E}\big[(X_{n}-\mathbb{E}[X_{n}])^2\big] + m^2 \tau_n^{-1}\mathbb{E}[X_n]\big(1-\tau_n^{-1}\mathbb{E}[X_n]) \end{equation*} which leads to \begin{eqnarray*} \label{ESP-2-Xn} \mathbb{E}\big[(X_n-\mathbb{E}[X_n])^2\big] & = & m^2\frac{\Gamma(n+\frac{\tau}{S}+2\sigma)}{\Gamma(n+\frac{\tau}{S})} \sum_{k=0}^{n-1} \frac{\Gamma(k+1+\frac{\tau}{S})}{\Gamma(k+1+\frac{\tau}{S}+2\sigma)} \tau_k^{-1}\mathbb{E}[X_k]\big(1-\tau_k^{-1}\mathbb{E}[X_k]) \\ & = & \frac{\sigma^2}{(1-\sigma)^2}\frac{\Gamma(n+\frac{\tau}{S}+2\sigma)}{\Gamma(n+\frac{\tau}{S})} S_n. \end{eqnarray*} It follows from \eqref{ESP-sigma} that \begin{eqnarray*} S_n & = & {(b+c)^2}\sum_{k=0}^{n-1} \tau_k^{-1}\mathbb{E}[X_k]\big(1-\tau_k^{-1}\mathbb{E}[X_k]) \frac{\Gamma(k+1+\frac{\tau}{S})}{\Gamma(k+1+\frac{\tau}{S}+2\sigma)} \\ & = & bc A_n +(b-c)\frac{b\alpha - c\beta}{S}\frac{\Gamma(\frac{\tau}{S})}{\Gamma(\frac{\tau}{S}+\sigma)}B_n - \frac{(b\alpha - c\beta)^2}{S^2} \frac{\Gamma(\frac{\tau}{S})^2}{\Gamma(\frac{\tau}{S}+\sigma)^2} C_n \end{eqnarray*} where $A_n$, $B_n$ and $C_n$ are as follows, and we obtain from Lemma B.1 in \cite{Bercu18} that \begin{equation*} A_n = \sum_{k=1}^{n}\frac{\Gamma(k+\frac{\tau}{S})}{\Gamma(k+\frac{\tau}{S}+2\sigma)} =\frac{1}{2\sigma-1} \big(\frac{\Gamma(\frac{\tau}{S}+1)}{\Gamma(\frac{\tau}{S}+2\sigma)} - \frac{\Gamma(n+\frac{\tau}{S}+1)}{\Gamma(n+\frac{\tau}{S}+2\sigma)}\big), \end{equation*} \begin{equation*} B_n = \sum_{k=1}^{n}\frac{\Gamma(k-1+\frac{\tau}{S}+\sigma)}{\Gamma(k+\frac{\tau}{S}+2\sigma)} = \frac{1}{\sigma} \big(\frac{\Gamma(\frac{\tau}{S}+\sigma)}{\Gamma(\frac{\tau}{S}+2\sigma)} - \frac{\Gamma(n+\frac{\tau}{S}+\sigma)}{\Gamma(n+\frac{\tau}{S}+2\sigma)}\big), \end{equation*} \begin{equation*} C_n = \sum_{k=1}^{n}\frac{\Gamma(k-1+\frac{\tau}{S}+\sigma)^2}{\Gamma(k+\frac{\tau}{S})\Gamma(k+\frac{\tau}{S}+2\sigma)} = \frac{1}{\sigma^2}
\big(\frac{\Gamma(n+\frac{\tau}{S}+\sigma)^2}{\Gamma(n+\frac{\tau}{S})\Gamma(n+\frac{\tau}{S}+2\sigma)}-\frac{\Gamma(\frac{\tau}{S}+\sigma)^2}{\Gamma(\frac{\tau}{S})\Gamma(\frac{\tau}{S}+2\sigma)}\big). \end{equation*} Consequently, we have \begin{equation} \label{ESP-M2} \mathbb{E}[M^2] = \frac{\sigma^2\lambda^2\Gamma(\frac{\tau}{S})}{\Gamma(\frac{\tau}{S}+2\sigma)} \Big(\frac{bc}{2\sigma-1}\frac{\tau}{S} + (b-c)\frac{b\alpha - c\beta}{\sigma S} + \frac{(b\alpha - c\beta)^2}{\sigma^2S^2} \Big) - \frac{(b\alpha - c\beta)^2}{S^2} \end{equation} and the proof of Theorem \ref{T-general-as-large} follows from \eqref{ESP-MW-2} and \eqref{ESP-M2}. \end{proof} \section{Proofs of the asymptotic normality results} \subsection{Traditional urn model} \begin{proof}{Theorem}{\ref{T-tradi-dis}} We shall make use of part $(b)$ of Theorem 1 and Corollaries 1 and 2 from \cite{Heyde77}. Let \begin{equation*} s_n^2=\sum_{k=n}^\infty \mathbb{E}[\Delta M_k^2]. \end{equation*} It is not hard to see that \begin{equation*} \lim_{n\to\infty} s_n^2 =0 \end{equation*} since \begin{equation*} \sum_{n=1}^\infty \mathbb{E}[\Delta M_n^2] \leq \frac{S^2}{4}\sum_{n=1}^\infty \frac{1}{\tau_n^2}<+\infty. \end{equation*} Moreover, using the convergence of $(M_n)$ in $\mathbb{L}^2$ and the moments of a beta distribution with parameters $\frac{\alpha}{S}$ and $\frac{\beta}{S}$, we get that \begin{equation*} \lim_{n\to \infty}\Big(\sum_{k=n}^\infty \frac{1}{\tau_{k+1}^2}\Big)^{-1}s_n^2= \frac{\alpha\beta S^2}{(\alpha+\beta)(\alpha+\beta+S)}, \end{equation*} leading to \begin{equation*} \lim_{n\to \infty} n s_n^2= \ell \hspace{1cm}\text{where}\hspace{1cm} \ell=\displaystyle\frac{\alpha\beta}{(\alpha+\beta)(\alpha+\beta+S)}.
\end{equation*} Hence \begin{eqnarray*} \lim_{n\to\infty} \frac{1}{s_n^2}\sum_{k=n}^\infty \mathbb{E}\bigl[\Delta M_{k+1}^2 | \mathcal{F}_{k} \bigr] & = & \lim_{n\to\infty} \frac{1}{s_n^2}\sum_{k=n}^\infty \frac{S^2M_k(1-M_k)}{\tau_{k+1}^2} \hspace{1cm} \text{a.s.}\\ & = & \lim_{n\to\infty} \frac{1}{\ell S^2}\Big(\sum_{k=n}^\infty \frac{1}{\tau_{k+1}^2}\Big)^{-1} \sum_{k=n}^\infty \frac{S^2M_k(1-M_k)}{\tau_{k+1}^2} \hspace{1cm} \text{a.s.}\\ & = & \frac{M_\infty(1-M_\infty)}{\ell} \hspace{1cm} \text{a.s.} \end{eqnarray*} Consequently, the first condition of part (b) of Corollary 1 in \cite{Heyde77} is satisfied with $\displaystyle\eta^2 = \ell^{-1} M_\infty(1-M_\infty)$. Let us now focus on the second condition of Corollary 1 in \cite{Heyde77} and let $\varepsilon>0$. On the one hand, we get that for all $\varepsilon>0$ \begin{equation*} \frac{1}{s_n^2}\sum_{k=n}^\infty \mathbb{E}\bigl[\Delta M_{k+1}^2 \mathds{1}_{|\Delta M_{k+1}| > \varepsilon s_n} \bigr] \leq \frac{1}{\varepsilon^2 s_n^4}\sum_{k=n}^\infty \mathbb{E}\bigl[\Delta M_{k+1}^4 \bigr] \leq \frac{7S^4}{\varepsilon^2 s_n^4}\sum_{k=n}^\infty \frac{1}{\tau_k^4} \leq \frac{7}{\varepsilon^2 s_n^4}\sum_{k=n}^\infty \frac{1}{k^4}. \end{equation*} On the other hand, using that $n^2s_n^4$ converges to $\ell^2$ and that \begin{equation*} \lim_{n\to\infty} 3n^3\sum_{k=n}^\infty \frac{1}{k^4}=1, \end{equation*} we can conclude that \begin{equation*} \displaystyle \lim_{n\to\infty} \frac{1}{s_n^2}\sum_{k=n}^\infty \mathbb{E}\bigl[\Delta M_{k+1}^2 \mathds{1}_{|\Delta M_{k+1}| > \varepsilon s_n} \bigr] = 0 \hspace{1cm}\text{a.s.} \end{equation*} Hereafter, we easily get that \begin{equation} \label{bracketMn} \sum_{k=1}^\infty \frac{1}{s_k^4}\mathbb{E}\big[\Delta M_k^4 | \mathcal{F}_{k-1}\big] \leq 7 \sum_{k=1}^\infty \frac{1}{k^2} <+\infty.
\end{equation} Noting that \begin{equation*} \sum_{k=1}^n\frac{1}{s_k^2}\big(|\Delta M_k|^2 - \mathbb{E}\big[|\Delta M_k|^2 |\mathcal{F}_{k-1}\big]\big) \end{equation*} is a martingale, equation \eqref{bracketMn} proves that its bracket is convergent, which implies that the martingale is also convergent. This gives us \begin{equation*} \sum_{k=1}^\infty\frac{1}{s_k^2}\big(|\Delta M_k|^2 - \mathbb{E}\big[|\Delta M_k|^2 |\mathcal{F}_{k-1}\big]\big) < + \infty \hspace{1cm}\text{a.s.} \end{equation*} Hence, the second condition of Corollary 1 in \cite{Heyde77} is satisfied. Therefore we obtain that \begin{equation} \frac{ M_\infty - M_n}{\sqrt{\langle M\rangle_\infty - \langle M\rangle_n}} \underset{n\to\infty}{\overset{\mathcal{L}}{\longrightarrow}} \mathcal{N}\big(0,1\big). \end{equation} Moreover, since \begin{equation*} \lim_{n\to\infty}\sqrt{\frac{M_n(1-M_n)}{n(\langle M\rangle_\infty - \langle M\rangle_n)}} = 1\hspace{1cm}\text{a.s.} \end{equation*} we finally obtain from Slutsky's lemma that \begin{equation} \sqrt{n}\frac{ M_\infty - M_n}{\sqrt{M_n(1-M_n)}} \underset{n\to\infty}{\overset{\mathcal{L}}{\longrightarrow}} \mathcal{N}\big(0,1\big) \end{equation} which completes the proof of Theorem \ref{T-tradi-dis}. \end{proof} \subsection{Generalized urn model -- small urns} \begin{proof}{Theorem}{\ref{T-general-dis-small}} We shall make use of the central limit theorem for multivariate martingales given e.g. by Corollary 2.1.10 in \cite{Duflo97}. First of all, we already saw from \eqref{Mnwnrates} that \begin{equation*} \lim_{n\to\infty} \frac{1}{w_n}\langle M\rangle_n = (1-2\sigma)\Gamma \hspace{1cm}\text{a.s.} \end{equation*} It only remains to show that Lindeberg's condition is satisfied, that is for all $\varepsilon >0$, \begin{equation*} \frac{1}{w_n} \sum_{k=0}^{n-1} \mathbb{E}\big[\|\Delta M_{k+1}\|^2 \mathds{1}_{\|\Delta M_{k+1}\|\geq \varepsilon \sqrt{w_n}}| \mathcal{F}_k\big] \overset{\mathbb{P}}{\underset{n\to\infty}{\longrightarrow}} 0.
\end{equation*} We clearly have \begin{equation*} \frac{1}{w_n} \sum_{k=0}^{n-1} \mathbb{E}\big[\|\Delta M_{k+1}\|^2 \mathds{1}_{\|\Delta M_{k+1}\|\geq \varepsilon \sqrt{w_n}}| \mathcal{F}_k\big] \leq \frac{1}{\varepsilon^2 w_n^2} \sum_{k=0}^{n-1} \mathbb{E}\big[\|\Delta M_{k+1}\|^4\big] \leq\frac{m^2}{\varepsilon^2 w_n^2} \sum_{k=0}^{n-1} \sigma_k^4 \hspace{1cm}\text{a.s.} \end{equation*} However, it is not hard to see that \begin{equation*} \lim_{n\to\infty} \frac{1}{w_n^2}\sum_{k=0}^{n-1} \sigma_k^4 = 0 \end{equation*} which ensures Lindeberg's condition is satisfied. Consequently, we can conclude that \begin{equation*} \frac{M_n}{\sqrt{w_n}} \overset{\mathcal{L}}{\underset{n\to\infty}{\longrightarrow}} \mathcal{N} \big(0,(1-2\sigma)\Gamma\big). \end{equation*} As $M_n =\sigma_n\big(U_n - \mathbb{E}[U_n]\big)$ and $\sqrt{n}\sigma_n$ is equivalent to $\sqrt{(1-2\sigma)w_n}$, together with \eqref{ESP-sigma}, we obtain that \begin{equation*} \frac{U_n - n v_1}{\sqrt{n}} \overset{\mathcal{L}}{\underset{n\to\infty}{\longrightarrow}} \mathcal{N} \big(0,\Gamma\big). \end{equation*} \end{proof} \subsection{Generalized urn model -- critically small urns} \begin{proof}{Theorem}{\ref{T-general-dis-crit}} We shall also make use of the central limit theorem for multivariate martingales. We already saw from \eqref{Mnwnrates-crit} that \begin{equation*} \lim_{n\to\infty} \frac{1}{w_n}\langle M\rangle_n = bc\begin{pmatrix}1 & -1 \\ -1 & 1\end{pmatrix}. \end{equation*} Once again, it only remains to show that Lindeberg's condition is satisfied, that is for all $\varepsilon >0$, \begin{equation*} \frac{1}{w_n} \sum_{k=0}^{n-1} \mathbb{E}\big[\|\Delta M_{k+1}\|^2 \mathds{1}_{\|\Delta M_{k+1}\|\geq \varepsilon \sqrt{w_n}}| \mathcal{F}_k\big] \overset{\mathbb{P}}{\underset{n\to\infty}{\longrightarrow}} 0.
\end{equation*} As in the proof of Theorem \ref{T-general-dis-small}, we have \begin{equation*} \frac{1}{w_n} \sum_{k=0}^{n-1} \mathbb{E}\big[\|\Delta M_{k+1}\|^2 \mathds{1}_{\|\Delta M_{k+1}\|\geq \varepsilon \sqrt{w_n}}| \mathcal{F}_k\big] \leq \frac{1}{\varepsilon^2 w_n^2} \sum_{k=0}^{n-1} \mathbb{E}\big[\|\Delta M_{k+1}\|^4\big] \leq \frac{m^2}{2\varepsilon^2 w_n^2} \sum_{k=0}^{n-1} \sigma_k^4 \hspace{1cm}\text{a.s.} \end{equation*} It is not hard to see that once again \begin{equation*} \lim_{n\to\infty} \frac{1}{w_n^2}\sum_{k=0}^{n-1} \sigma_k^4=0. \end{equation*} Hence, Lindeberg's condition is satisfied and we find that \begin{equation*} \frac{M_n}{\sqrt{w_n}} \overset{\mathcal{L}}{\underset{n\to\infty}{\longrightarrow}} \mathcal{N} \big(0,\Gamma\big). \end{equation*} As $M_n =\sigma_n\big(U_n - \mathbb{E}[U_n]\big)$ and $\sigma_n\sqrt{n\log n}$ is equivalent to $\sqrt{w_n}$, together with \eqref{ESP-sigma}, we can conclude that \begin{equation*} \frac{U_n - n v_1}{\sqrt{n}} \overset{\mathcal{L}}{\underset{n\to\infty}{\longrightarrow}} \mathcal{N} \big(0,\Gamma\big). \end{equation*} \end{proof} \nocite{*} \small \end{document}
\begin{document} \title{First Bloch eigenvalue in high contrast media} \begin{abstract} This paper deals with the asymptotic behavior of the first Bloch eigenvalue in a heterogeneous medium with a high contrast $\ep Y$-periodic conductivity. When the conductivity is bounded in $L^1$ and the constant of the Poincar\'e-Wirtinger inequality weighted by the conductivity is very small with respect to $\ep^{-2}$, the first Bloch eigenvalue converges as $\ep\to 0$ to a limit which preserves the second-order expansion with respect to the Bloch parameter. In dimension two the expansion of the limit can be improved up to the fourth order under the same hypotheses. On the contrary, in dimension three a fiber-reinforced medium combined with an $L^1$-unbounded conductivity leads to a discontinuity of the limit first Bloch eigenvalue as the Bloch parameter tends to zero while remaining non-orthogonal to the direction of the fibers. Therefore, the high contrast conductivity of the microstructure induces an anomalous effect, since for a given low-contrast conductivity the first Bloch eigenvalue is known to be analytic with respect to the Bloch parameter around zero. \end{abstract} \vskip .5cm\noindent {\bf Keywords:} periodic structure, homogenization, high contrast, Bloch waves, Burnett coefficients \par\bs\noindent {\bf Mathematics Subject Classification:} 35B27, 35A15, 35P15 \section{Introduction} The oscillating operators of type \beq\label{ade} \nabla\cdot\big(a(x/\de)\nabla\cdot\big),\quad\mbox{as }\de\to 0, \eeq for coercive and bounded $Y$-periodic matrix-valued functions $a(y)$ in $\RR^d$, which model the conduction in highly heterogeneous media, have been widely studied since the seminal work \cite{BLP} based on an asymptotic expansion of \refe{ade}.. At the end of the nineties an alternative approach was proposed in \cite{CoVa} using the Bloch wave decomposition.
More precisely, this method consists in considering the discrete spectrum $\big(\la_m(\eta),\phi_m(\eta)\big)$, $m\geq 1$, of the translated complex operator (see \cite{COV1} for the justification) \beq\label{Aeta} A(\eta):=-\,(\nabla+i\eta)\cdot\big[a(y)\,(\nabla+i\eta)\big],\quad\mbox{for a given }\eta\in\RR^d. \eeq It was proved in \cite{CoVa} that the first Bloch pair $\big(\la_1(\eta),\phi_1(\eta)\big)$ actually contains the essential information on the asymptotic analysis of the operator \refe{ade}., and is analytic with respect to the Bloch parameter $\eta$ in a neighborhood of $0$. Moreover, by virtue of \cite{CoVa,COV1} it turns out that the first Bloch eigenvalue satisfies the following expansion in terms of the so-called Burnett coefficients: \beq\label{expla1eta} \la_1(\eta)=q\eta\cdot\eta+D(\eta\otimes\eta):(\eta\otimes\eta)+o(|\eta|^4), \eeq where $q$ is the homogenized positive definite conductivity associated with the oscillating sequence $a(x/\de)$, and $D$ is the fourth-order dispersion tensor which has the remarkable property of being non-positive for any conductivity matrix $a$ (see \cite{COV2}). \par The expansion \refe{expla1eta}. has been investigated more deeply in one dimension~\cite{CSSV2} and in the low contrast regime \cite{CSSV1}. It is then natural to study the behavior of \refe{expla1eta}. in the high contrast regime. This is also motivated by the fact that the homogenization of operators \refe{ade}. with high contrast coefficients may induce nonlocal effects in dimension three as shown in \cite{FeKh,BeBo,CESe,BrTc}, while the two-dimensional case of \cite{Bri3,BrCa1,BrCa2} is radically different. We are interested in the consequences of these effects on the Bloch wave analysis. \bs\par The aim of the paper is then to study the asymptotic behavior of the first Bloch eigenvalue in the presence of high contrast conductivity coefficients. In particular we want to specify the validity of expansion \refe{expla1eta}. in the high contrast regime.
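As a quick numerical illustration of this expansion (this aside is ours and is not taken from the cited works), one can discretize a one-dimensional analogue of the shifted operator \refe{Aeta}. on a periodic grid and check that its smallest eigenvalue behaves like $q\,\eta^2$ for small $\eta$, where $q$ is the harmonic mean of $a$, the classical one-dimensional homogenized coefficient. All function names and grid parameters below are our own choices:

```python
import numpy as np

def first_bloch_eigenvalue(eta, n=400, period=2 * np.pi):
    """Smallest eigenvalue of a finite-difference discretization of the
    1D shifted operator A(eta) = -(d/dy + i*eta) a(y) (d/dy + i*eta)
    on a cell of length `period` with periodic wrap-around."""
    h = period / n
    # two-phase coefficient sampled at the midpoints of the grid cells
    a = np.where((np.arange(n) + 0.5) * h < period / 2, 1.0, 10.0)
    shift = np.roll(np.eye(n), -1, axis=0)              # (S phi)_j = phi_{j+1}
    K = (shift - np.eye(n)) / h + 1j * eta * np.eye(n)  # phi' + i*eta*phi
    M = K.conj().T @ np.diag(a) @ K                     # Hermitian and >= 0
    return np.linalg.eigvalsh(M)[0]
```

For this cell the harmonic mean is $q=20/11$, and the computed ratio $\la_1(\eta)/\eta^2$ is close to $q$ for small $\eta$, in agreement with the leading term of \refe{expla1eta}.; the fourth-order term only enters as a small correction.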
To this end we consider an $\ep Y$-periodic matrix conductivity $a^\ep$ which is equi-coercive but not equi-bounded with respect to $\ep$, namely $\|a^\ep\|_{L^\infty}\to\infty$ as $\ep\to 0$. The classical picture is an $\ep Y$-periodic two-phase microstructure, one phase of which has a conductivity that blows up as $\ep$ tends to $0$. More precisely, we will study the limit behavior of the first Bloch eigenvalue $\la_1^\ep(\eta)$ associated with $a^\ep$, and its expansion \beq\label{expla1epeta} \la_1^\ep(\eta)=q^\ep\eta\cdot\eta+D^\ep(\eta\otimes\eta):(\eta\otimes\eta)+o(|\eta|^4). \eeq \par In Section \ref{s.L1bd}, we prove that in any dimension (see Theorem~\ref{thm1}), if the conductivity $a^\ep$ is bounded in $L^1$ and the constant of the Poincar\'e-Wirtinger inequality weighted by $a^\ep$ is an $o(\ep^{-2})$ (see \cite{Bri1} for an example), then the first Bloch eigenvalue $\la_1^\ep(\eta)$ associated with $a^\ep$ converges to some limit $\la_1^*(\eta)$ which satisfies \beq \la_1^*(\eta)=q^*\eta\cdot\eta,\quad\mbox{for small enough }|\eta|, \eeq where $q^*$ is the limit of the homogenized matrix $q^\ep$ in \refe{expla1epeta}.. Moreover, in dimension two and under the same assumptions we show that the tensor $D^\ep$ tends to $0$, which thus implies that the fourth-order expansion \refe{expla1epeta}. of $\la_1^\ep(\eta)$ converges to the fourth-order expansion of its limit. We can also refine the two-dimensional case by relaxing the $L^1$-boundedness of $a^\ep$ to the sole convergence of $q^\ep$ (see Theorem~\ref{thm12d}). \par In Section~\ref{s.L1unbd}, we show that the previous convergences do not generally hold in dimension three when $a^\ep$ is not bounded in $L^1$. We give a counter-example (see Theorem~\ref{thm2}) which is based on the fiber-reinforced structure first introduced in \cite{FeKh} to derive nonlocal effects in high contrast homogenization. This is the main result of the paper.
We show the existence of a jump at $\eta=0$ in the limit $\la_1^*(\eta)$ of the first Bloch eigenvalue. Indeed, when the radius of the fibers has a critical size and $\eta$ is not orthogonal to their direction, the first Bloch eigenvector $\psi_1^\ep$ is shown to converge weakly in $H^1_{\rm loc}(\RR^3;\CC)$ to some function $\psi_1^*$ which solves \beq\label{equpsi*} -\,\De\psi_1^*+\ga\,\psi_1^*=\la_1^*(\eta)\,\psi_1^*\quad\mbox{in }\RR^3, \eeq where \beq \ga=\lim_{\eta\to 0}\la_1^*(\eta)\neq\la_1^*(0)=0. \eeq Therefore, contrary to the analyticity of $\eta\mapsto\la_1^\ep(\eta)$ which holds for fixed $\ep$, the limit $\la_1^*$ of the first Bloch eigenvalue is not even continuous at $\eta=0$! On the other hand, the zero-order term in the limit equation \refe{equpsi*}. is linked to the limit zero-order term obtained in \cite{BeBo,BrTc} under the same regime, for the conduction equation with the conductivity $a^\ep$ but with a Dirichlet boundary condition. Here, the periodicity condition satisfied by the function $y\mapsto e^{-i\,\eta\cdot y}\,\psi_1^\ep(y)$ (in connection with the translated operator \refe{Aeta}.) is quite different and more delicate to handle. Using an estimate of the Poincar\'e-Wirtinger inequality weighted by $a^\ep$ and the condition that $\eta$ is not orthogonal to the direction of the fibers, we can obtain the limit, in the sense of Radon measures, of the eigenvector $\psi_1^\ep$ rescaled in the fibers.
Similarly, $L^p_\sharp(Y)$, for $p\geq 1$, denotes the space of the $Y$-periodic functions which belong to $L^p_{\rm loc}(\RR^d)$, and $C^k_\sharp(Y)$, for $k\in\NN$, denotes the space of the $C^k$-regular $Y$-periodic functions in $\RR^d$. \item For any $\eta\in\RR^d$, $H^1_\eta(Y;\CC)$ denotes the space of the functions $\psi$ such that \beq\label{H1eta} \big(x\mapsto e^{-i\,x\cdot\eta}\,\psi(x)\big)\in H^1_\sharp(Y;\CC). \eeq Similarly, $L^p_\eta(Y;\CC)$, for $p\geq 1$, denotes the set associated with the space $L^p_\sharp(Y;\CC)$, and $C^k_\eta(Y;\CC)$, for $k\in\NN$, the set associated with the space $C^k_\sharp(Y;\CC)$. \item For any open set $\Om$ of $\RR^d$, $BV(\Om)$ denotes the space of the functions in $L^2(\Om)$ whose gradient is a Radon measure on $\Om$. \end{itemize} \section{The case of $L^1$-bounded coefficients}\label{s.L1bd} Let $\ep>0$ be such that $\ep^{-1}$ is an integer. Let $A^\ep$ be a $Y$-periodic measurable real matrix-valued function satisfying \beq\label{Aep} \left(A^\ep\right)^T(y)=A^\ep(y)\quad\mbox{and}\quad\al\,I_d\leq A^\ep(y)\leq\be_\ep\,I_d\qquad\mbox{a.e. }y\in\RR^d, \eeq where $\al$ is a fixed positive number and $\be_\ep$ is a sequence in $(0,\infty)$ which tends to $\infty$ as $\ep\to 0$. Let $a^\ep$ be the rescaled matrix-valued function defined by \beq\label{aep} a^\ep(x):=A^\ep\left({x\over\ep}\right)\quad\mbox{for }x\in\RR^d. \eeq Define the effective conductivity $q^\ep$ by \beq\label{qep} q^\ep\la:=\fint_Y A^\ep\big(\la+\nabla X_\la^\ep\big)\,dy\quad\mbox{for }\la\in\RR^d, \eeq where $X_\la^\ep$ is the unique solution in $H^1_\sharp(Y)/\RR$ of the equation \beq\label{Xepla} \div\left(A^\ep\la+A^\ep\nabla X_\la^\ep\right)=0\quad\mbox{in }\RR^d. \eeq For a fixed $\ep>0$, the constant matrix $q^\ep$ is the homogenized matrix associated with the oscillating sequence $A^\ep({x\over\de})$ as $\de\to 0$, according to the classical homogenization periodic formula (see, {\it e.g.}, \cite{BLP}).
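In dimension one the cell problem \refe{Xepla}. can be solved in closed form, which gives a cheap sanity check of formula \refe{qep}. (a one-dimensional caricature of our setting, added here for illustration only): the flux $A^\ep\,(1+(X^\ep_1)')$ must be constant, and the periodicity of $X^\ep_1$ identifies this constant, hence $q^\ep$, with the harmonic mean of $A^\ep$. In particular $q^\ep$ may remain bounded while $\fint_Y A^\ep\,dy$ blows up, so that convergence of $q^\ep$ is a weaker requirement than the $L^1$-boundedness of $a^\ep$:

```python
import numpy as np

def q_from_cell_problem(A):
    """1D cell problem: div(A (1 + X')) = 0 forces A(y) (1 + X'(y)) to be
    constant, and periodicity of X (zero-mean X') identifies this constant,
    i.e. the homogenized coefficient, with the harmonic mean of A."""
    q = 1.0 / np.mean(1.0 / A)       # constant flux = harmonic mean
    Xp = q / A - 1.0                 # corrector derivative X'
    assert abs(np.mean(Xp)) < 1e-12  # zero mean, so X is indeed periodic
    return np.mean(A * (1.0 + Xp))   # 1D analogue of the formula for q

# two-phase cell: A = 1 on half the cell and A = beta on the other half
qs = [q_from_cell_problem(np.where(np.arange(1000) < 500, 1.0, beta))
      for beta in (10.0, 1e3, 1e6)]
```

Here $q^\ep$ increases to the finite limit $2$ as the contrast blows up, although the mean of $A^\ep$ is unbounded.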
\par Consider, for $\eta\in\RR^d$, the first Bloch eigenvalue $\la_1^\ep(\eta)$ associated with the conductivity $a^\ep$ by \beq\label{laepeta} \la_1^\ep(\eta):=\min\left\{\int_Y a^\ep\nabla\psi\cdot \nabla\overline{\psi}\,dx\;:\;\psi\in H^1_\eta(Y;\CC)\mbox{ and }\int_Y|\psi|^2\,dx=1\right\}. \eeq A minimizer $\psi^\ep$ of \refe{laepeta}. solves the variational problem \beq\label{vppsiep} \int_Y a^\ep\nabla\psi^\ep\cdot \nabla\overline{\psi}\,dx=\la^\ep_1(\eta)\int_Y\psi^\ep\,\overline{\psi}\,dx,\quad\forall\,\psi\in H^1_\eta(Y;\CC), \eeq with \beq\label{psiep} \psi^\ep\in H^1_\eta(Y;\CC)\quad\mbox{and}\quad\int_Y|\psi^\ep|^2\,dx=1. \eeq An alternative definition for $\psi^\ep$ is given by the following result: \begin{Pro}\label{pro.psiep} The variational problem \refe{vppsiep}. is equivalent to the equation in the distributional sense \beq\label{eqpsiep} -\,\div\left(a^\ep\nabla\psi^\ep\right)=\la^\ep_1(\eta)\,\psi^\ep\quad\mbox{in }\RR^d. \eeq \end{Pro} \begin{proof} Let $\psi^\ep\in H^1_\eta(Y;\CC)$ be a solution of \refe{vppsiep}. and let $\ph$ be a function in $C^\infty_c(\RR^d)$.
Writing $\psi^\ep=e^{i\,x\cdot\eta}\,\ph^\ep$ with $\ph^\ep\in H^1_\sharp(Y;\CC)$, and putting the function $\psi\in C^\infty_\eta(Y;\CC)$ defined by \[ \psi(x):=\sum_{k\in\ZZ^d}e^{-i\,2\pi k\cdot\eta}\,\overline{\ph}(x+2\pi k) =e^{i\,x\cdot\eta}\sum_{k\in\ZZ^d}e^{-i\,(x+2\pi k)\cdot\eta}\,\overline{\ph}(x+2\pi k), \] as test function in \refe{vppsiep}., we have by the $Y$-periodicity of $a^\ep$ (recall that $\ep^{-1}$ is an integer), \[ \ba{l} \dis \int_Y a^\ep\nabla\psi^\ep\cdot\nabla\overline{\psi}\,dx \\ \ecart \dis =\sum_{k\in\ZZ^d}\int_Y a^\ep\left(\nabla\ph^\ep+i\,\eta\,\ph^\ep\right)\cdot \left[\nabla\big(e^{i(x+2\pi k)\cdot\eta}\,\ph(x+2\pi k)\big)-i\,\eta\,\big(e^{i(x+2\pi k)\cdot\eta}\,\ph(x+2\pi k)\big)\right]dx \\ \ecart \dis =\dis \sum_{k\in\ZZ^d}\int_{2\pi k+Y} a^\ep\left(\nabla\ph^\ep+i\,\eta\,\ph^\ep\right)\cdot \left[\nabla\big(e^{i\,x\cdot\eta}\,\ph\big)-i\,\eta\,\big(e^{i\,x\cdot\eta}\,\ph\big)\right]dx \\ \ecart \dis =\int_{\RR^d}a^\ep\nabla\psi^\ep\cdot\nabla\ph\,dx, \ea \] and \[ \ba{ll} \dis \int_Y \psi^\ep\,\overline{\psi}\,dx & \dis =\sum_{k\in\ZZ^d}\int_Y \ph^\ep\,e^{i(x+2\pi k)\cdot\eta}\,\ph(x+2\pi k)\,dx \\ \ecart & =\dis \sum_{k\in\ZZ^d}\int_{2\pi k+Y} \ph^\ep\,e^{i\,x\cdot\eta}\,\ph\,dx=\int_{\RR^d}\psi^\ep\,\ph\,dx. \ea \] Hence, we get that \[ \int_{\RR^d}a^\ep\nabla\psi^\ep\cdot\nabla\ph\,dx=\la^\ep_1(\eta)\int_{\RR^d}\psi^\ep\,\ph\,dx,\quad\mbox{for any }\ph\in C^\infty_c(\RR^d), \] which yields equation \refe{eqpsiep}.. \par Conversely, assume that $\psi^\ep$ is a solution of \refe{eqpsiep}.. Consider $\psi\in C^\infty_\eta(Y;\CC)$, and for any integer $n\geq 1$, a function $\th_n\in C^\infty_c(\RR^d)$ such that \[ \th_n=1\;\;\mbox{in }[-2\pi n,2\pi n]^d,\quad \th_n=0\;\;\mbox{in }\RR^d\setminus[-2\pi(n+1),2\pi(n+1)]^d, \quad|\nabla\th_n|\leq 1\;\;\mbox{in }\RR^d.
\] Putting $\ph:=\th_n\,\overline{\psi}$ as test function in \refe{eqpsiep}., we have as $n\to\infty$ and by the $Y$-periodicity of $a^\ep\nabla\psi^\ep\cdot\nabla\overline{\psi}$, \[ \ba{ll} \dis {1\over (2n)^d}\int_{\RR^d}a^\ep\nabla\psi^\ep\cdot\nabla(\th_n\,\overline{\psi})\,dx & \dis ={1\over (2n)^d}\sum_{k\in\{-n,\dots,n-1\}^d}\int_{2\pi k+Y}a^\ep\nabla\psi^\ep\cdot\nabla\overline{\psi}\,dx+o_n(1) \\ \ecart & \dis =\int_Y a^\ep\nabla\psi^\ep\cdot\nabla\overline{\psi}\,dx +o_n(1), \ea \] and by the $Y$-periodicity of $\psi^\ep\,\overline{\psi}$, \[ \ba{ll} \dis {1\over (2n)^d}\int_{\RR^d}\psi^\ep\,\th_n\,\overline{\psi}\,dx & \dis ={1\over (2n)^d}\sum_{k\in\{-n,\dots,n-1\}^d}\int_{2\pi k+Y}\psi^\ep\,\overline{\psi}\,dx+o_n(1) \\ \ecart & \dis =\int_Y \psi^\ep\,\overline{\psi}\,dx +o_n(1). \ea \] Therefore, it follows that $\psi^\ep$ is a solution of the variational problem \refe{vppsiep}.. \end{proof} Note that for a fixed $\ep>0$, the oscillating sequence $a^\ep({x\over\de})=A^\ep({x\over\ep\de})$ has the same homogenized limit as $A^\ep({x\over\de})$ when $\de$ tends to $0$, namely the constant matrix $q^\ep$ defined by \refe{qep}.. Hence, the asymptotic expansion in $\eta$ of the first Bloch eigenvalue derived in \cite{COV2} reads as \beq\label{asylaepeta} \la_1^\ep(\eta)=q^\ep\eta\cdot\eta+D^\ep(\eta\otimes\eta):(\eta\otimes\eta)+O(|\eta|^6), \eeq where $D^\ep$ is a non-positive fourth-order tensor defined in formula \refe{Dep}. below. \par When $a^\ep$ is not too large, we have the following asymptotic behavior for $\la_1^\ep(\eta)$: \begin{Thm}\label{thm1} Assume that the sequence $a^\ep$ of \refe{aep}. is bounded in $L^1(Y)$. \begin{itemize} \item If $d=2$, then there exists a subsequence of $\ep$, still denoted by $\ep$, such that the sequence $q^\ep$ converges to some $q^*$ in $\RR^{2\times 2}$.
Moreover, we have for any $\eta\in\RR^2$, \beq\label{limlaepeta} \lim_{\ep\to 0}\la_1^\ep(\eta)=\min\left\{\int_Y q^*\nabla\psi\cdot\nabla\overline{\psi}\,dx\;:\;\psi\in H^1_\eta(Y;\CC)\mbox{ and }\int_Y|\psi|^2\,dx=1\right\}, \eeq and for small enough $|\eta|$, \beq\label{limlaepeta0} \lim_{\ep\to 0}\la_1^\ep(\eta)=q^*\eta\cdot\eta. \eeq \item If $d\geq 2$, under the extra assumption that for any $\la\in\RR^d$, \beq\label{Cepla} C^\ep_\la:=\max\left\{\int_Y(A^\ep\la\cdot\la)\,V^2\,dy\;:\;V\in H^1_\sharp(Y),\ \int_Y A^\ep\nabla V\cdot\nabla V\,dy=1\right\}\ll{1\over\ep^2}, \eeq the limits \refe{limlaepeta}. and \refe{limlaepeta0}. still hold. Moreover, if $d=2$ we have \beq\label{limDep} \lim_{\ep\to 0}\big(D^\ep(\eta\otimes\eta):(\eta\otimes\eta)\big)=0,\quad\forall\,\eta\in\RR^2. \eeq \end{itemize} \end{Thm} Using a more sophisticated approach we can relax in dimension two the $L^1(Y)$-boundedness of $a^\ep$: \begin{Thm}\label{thm12d} Assume that $d=2$ and that the sequence $q^\ep$ converges to $q^*$ in $\RR^{2\times 2}$. Then, the limits \refe{limlaepeta}. and \refe{limlaepeta0}. still hold. \end{Thm} \begin{Rem} The constant $C^\ep_\la$ of \refe{Cepla}. is the best constant of the Poincar\'e-Wirtinger inequality weighted by $A^\ep$. The condition $\ep^2\,C^\ep_\la\to 0$ was first used in \cite{Bri1} to prevent the appearance of nonlocal effects in the homogenization of the conductivity equation with $a^\ep$. Under this assumption the first Bloch eigenvalue and its second-order expansion converge as $\ep$ tends to $0$ in any dimension $d\geq 2$. The case $d=2$ is quite particular since it is proved in \cite{BrCa2} that nonlocal effects cannot appear. This explains {\it a posteriori} that the first Bloch eigenvalue has a good limit behavior under the $L^1(Y)$-boundedness of $a^\ep$ (Theorem~\ref{thm1}), or the sole boundedness of $q^\ep$ (Theorem~\ref{thm12d}). Note that the second condition is more general than the first one due to the estimate \refe{qepmin}. below.
\end{Rem} \noindent {\bf Proof of Theorem~\ref{thm1}.} \par\ms\noindent {\it The case $d=2$}: The proof is divided into two parts. In the first part we determine the limit of the eigenvalue problem \refe{vppsiep}.. The second part provides the limit of the minimization problem~\refe{laepeta}.. \par The matrix $q^\ep$ of \refe{qep}. is also given by the minimization problem for any $\la\in\RR^d$: \beq\label{qepmin} q^\ep\la\cdot\la=\min\left\{\fint_Y A^\ep(\la+\nabla V)\cdot(\la+\nabla V)\,dy\;:\;V\in H^1_\sharp(Y)\right\}\leq\fint_Y A^\ep\la\cdot\la\,dy \eeq which is bounded. Therefore, up to a subsequence $q^\ep$ converges to some $q^*$ in $\RR^{d\times d}$. \par To obtain the limit behavior of \refe{vppsiep}. we need to consider the rescaled test functions $w^\ep_j$, $j=1,2$, associated with the cell problem \refe{Xepla}. and defined by \beq\label{wepj} w^\ep_j(x):=x_j+\ep\,X^\ep_{e_j}\left({x\over\ep}\right)\quad\mbox{for }x\in\RR^d. \eeq Since by the $\ep Y$-periodicity of $\nabla w^\ep_j$, $j=1,2$, and by \refe{qepmin}. \beq\label{estwepj} \fint_Y a^\ep\nabla w^\ep_j\cdot\nabla w^\ep_j\,dx=q^\ep_{jj}\leq c, \eeq the sequence $w^\ep_j$ is bounded in $H^1_{\rm loc}(\RR^2)$ and thus converges weakly to $x_j$ in $H^1_{\rm loc}(\RR^2)$. By Corollary~2.3 of \cite{BrCa2} (which is specific to dimension two), the sequence $w^\ep:=(w^\ep_1,w^\ep_2)$ converges uniformly to the identity function locally in $\RR^2$. Moreover, since $\ep^{-1}$ is an integer and the functions $X^\ep_{e_j}$ are $Y$-periodic, we have for any $x\in\RR^2$ and $k\in\ZZ^2$, \[ w^\ep_j(x+2\pi k)=x_j+2\pi k_j+\ep\,X^\ep_{e_j}\left({x+2\pi k\over\ep}\right)=x_j+2\pi k_j+\ep\,X^\ep_{e_j}\left({x\over\ep}\right)=w^\ep_j(x)+2\pi k_j, \] or equivalently, \beq\label{wepk} w^\ep(x+2\pi k)=w^\ep(x)+2\pi k,\quad\forall\,(x,k)\in\RR^2\times\ZZ^2. \eeq This implies that for any $\chi\in C^1_\eta(Y;\CC)$, the function $\chi(w^\ep)$ belongs to $H^1_\eta(Y;\CC)$ (see \refe{H1eta}.).
\par On the other hand, the eigenvalue $\la^\ep_1(\eta)$ defined by \refe{laepeta}. is bounded due to the $L^1(Y)$-boundedness of $a^\ep$, and thus converges up to a subsequence to some number $\la^*_1(\eta)\geq 0$. Hence, the sequence $\psi^\ep$ is bounded in $H^1_\eta(Y;\CC)$, and thus converges weakly up to a subsequence to some function $\psi^*$ in $H^1_\eta(Y;\CC)$. Then, putting $\chi(w^\ep)$ as test function in \refe{vppsiep}., using the uniform convergence of $w^\ep$ and the convergence of $\psi^\ep$ to $\psi^*$, we get that \beq\label{est21} \int_Y a^\ep\nabla\psi^\ep\cdot\nabla w^\ep_j\,\overline{\partial_j\chi(w^\ep)}\,dx =\int_Y a^\ep\nabla\psi^\ep\cdot\nabla w^\ep_j\,\partial_j\overline{\chi}\,dx+o(1) =\la^*_1(\eta)\int_Y\psi^*\,\overline{\chi}\,dx+o(1). \eeq Next, let us apply the div-curl approach of \cite{Bri3,BrCa1}. To this end, since by \refe{Xepla}. and \refe{wepj}. the current $a^\ep\nabla w^\ep_j$ is divergence free, we may consider a stream function $\tilde{w}^\ep_j$ associated with $a^\ep\nabla w^\ep_j$ such that \beq\label{twepj} a^\ep\nabla w^\ep_j=\nabla^\perp\tilde{w}^\ep_j:=\begin{pmatrix} -\,\partial_2\tilde{w}^\ep_j \\ \partial_1\tilde{w}^\ep_j\end{pmatrix} \quad\mbox{a.e. in }\RR^2. \eeq By the Cauchy-Schwarz inequality combined with \refe{estwepj}. and the $L^1(Y)$-boundedness of $a^\ep$, the function $\tilde{w}^\ep_j$ is bounded in $BV_{\rm loc}(\RR^2)$. Moreover, due to the periodicity, the sequence $\nabla\tilde{w}^\ep_j$ induces no concentrated mass in the space $\M(\RR^2)^2$ of the Radon measures on $\RR^2$. Therefore, by the Lions concentration-compactness lemma \cite{PLL}, $\tilde{w}^\ep_j$ converges strongly in $L^2_{\rm loc}(\RR^2)$ to some function $\tilde{w}_j$ in~$BV_{\rm loc}(\RR^2)$. By the $\ep Y$-periodicity of $a^\ep\nabla w^\ep_j$ and the definition \refe{qep}.
of $q^\ep$, we also have in the weak-$*$ sense of the Radon measures \beq\label{Dtwj} a^\ep\nabla w^\ep_j\;\rightharpoonup\;\nabla^\perp\tilde{w}_j=\lim_{\ep\to 0}\left(\fint_Y A^\ep\big(e_j+\nabla X^\ep_{e_j}\big)\,dy\right)=q^*e_j \quad\mbox{weakly-$*$ in }\M(\RR^2)^2. \eeq On the other hand, integrating by parts using that $a^\ep\nabla w^\ep_j$ is divergence free and $\psi^\ep\,\partial_j\overline{\chi}$ is $Y$-periodic, then applying the strong convergence of $\tilde{w}^\ep_j$ in $L^2(Y)$ and \refe{Dtwj}., it follows that (with the summation over repeated indices) \[ \ba{ll} \dis \int_Y a^\ep\nabla w^\ep_j\cdot\nabla\psi^\ep\,\partial_j\overline{\chi}\,dx & =-\dis \int_Y a^\ep\nabla w^\ep_j\cdot\nabla(\partial_j\overline{\chi})\,\psi^\ep\,dx =\int_Y \tilde{w}^\ep_j\,\nabla^T\psi^\ep\cdot\nabla(\partial_j\overline{\chi})\,dx \\ \ecart & \dis =\int_Y \tilde{w}_j\,\nabla^T\psi^*\cdot\nabla(\partial_j\overline{\chi})\,dx+o(1) =-\int_Y q^*e_j\cdot\nabla(\partial_j\overline{\chi})\,\psi^*\,dx+o(1) \\ \ecart & \dis =\int_Y q^*\nabla\psi^*\cdot\nabla\overline{\chi}\,dx+o(1). \ea \] This combined with \refe{est21}. and a density argument yields the limit variational problem \beq\label{vppsi*} \int_Y q^*\nabla\psi^*\cdot\nabla\overline{\chi}\,dx=\la^*_1(\eta)\int_Y\psi^*\,\overline{\chi}\,dx,\quad\forall\,\chi\in H^1_\eta(Y;\CC), \eeq where by Rellich's theorem and \refe{psiep}. the limit $\psi^*$ of $\psi^\ep$ satisfies \beq\label{psi*} \psi^*\in H^1_\eta(Y;\CC)\quad\mbox{and}\quad\int_Y|\psi^*|^2\,dx=1. \eeq \par It remains to prove that \beq\label{la*eta} \la_1^*(\eta)=\min\left\{\int_Y q^*\nabla\psi\cdot\nabla\overline{\psi}\,dx\;:\;\psi\in H^1_\eta(Y;\CC)\mbox{ and }\int_Y|\psi|^2\,dx=1\right\}.
\eeq To this end, consider a covering of $Y$ by $n\geq 1$ pairwise disjoint cubes $Q^n_k$ of the same size, and $n$ smooth functions $\th^n_k$, for $1\leq k\leq n$, such that \beq\label{Qnk} \th^n_k\in C^1_0\big(Q^n_k;[0,1]\big)\quad\mbox{and}\quad\sum_{k=1}^n \th^n_k\;\mathop{\longrightarrow}_{n\to\infty}\;1\;\;\mbox{strongly in }L^2(Y). \eeq For $\chi\in C^1_\eta(Y;\CC)$ with a unit $L^2(Y)$-norm, consider the approximation $\chi^\ep_n$ of $\chi$ defined by \beq\label{chiep} \chi^\ep_n(x):=\nu^\ep_n\left(\chi(x)+\ep\,e^{i\,x\cdot\eta}\sum_{k=1}^n\th^n_k(x)\,X^\ep_{\xi^n_k}\left({x\over\ep}\right)\right),\quad\mbox{where}\quad \xi^n_k:=\fint_{Q^n_k}e^{-i\,x\cdot\eta}\,\nabla\chi(x)\,dx, \eeq and $\nu^\ep_n>0$ is chosen in such a way that $\chi^\ep_n$ has a unit $L^2(Y)$-norm. Since $\ep^{-1}$ is an integer, the function $\chi^\ep_n$ belongs to $H^1_\eta(Y;\CC)$ and can thus be used as a test function in problem \refe{laepeta}.. Then, by \refe{Qnk}. we have \[ \ba{l} \dis \la^\ep_1(\eta)\leq\int_Y a^\ep\nabla\chi^\ep_n\cdot\nabla\overline{\chi^\ep_n}\,dx \\ \ecart \dis \leq(\nu^\ep_n)^2\int_Y a^\ep\left(\nabla\chi+e^{i\,x\cdot\eta}\sum_{k=1}^n\th^n_k\,\nabla X^\ep_{\xi^n_k}\left({x\over\ep}\right)\right)\cdot\overline{\left(\nabla\chi+e^{i\,x\cdot\eta}\sum_{k=1}^n\th^n_k\,\nabla X^\ep_{\xi^n_k}\left({x\over\ep}\right)\right)}dx+o(1) \\ \ecart \dis =(\nu^\ep_n)^2\int_Y a^\ep\left(R^n+e^{i\,x\cdot\eta}\sum_{k=1}^n\th^n_k\,\big(\xi^n_k+\nabla X^\ep_{\xi^n_k}\big)\left({x\over\ep}\right)\right)\cdot\overline{\left(R^n+e^{i\,x\cdot\eta}\sum_{k=1}^n\th^n_k\,\big(\xi^n_k+\nabla X^\ep_{\xi^n_k}\big)\left({x\over\ep}\right)\right)}dx \\ +\,o(1), \ea \] \beq\label{Rnk} \mbox{where}\quad R^n:=\nabla\chi-e^{i\,x\cdot\eta}\sum_{k=1}^n\th^n_k\,\xi^n_k\in C^0(\RR^2)^2.
\eeq Passing to the limit as $\ep\to 0$ in the previous inequality, and using \refe{Qnk}., the $L^1(Y)$-boundedness combined with the $Y$-periodicity of $A^\ep$, $A^\ep\nabla X^\ep_\la$, $A^\ep\nabla X^\ep_\la\cdot\nabla X^\ep_\la$, and the convergence \beq\label{limDwxink} \ba{l} \dis a^\ep\,\big(\xi^n_k+\nabla X^\ep_{\xi^n_k}\big)\left({x\over\ep}\right)\cdot\big(\xi^n_k+\nabla X^\ep_{\xi^n_k}\big)\left({x\over\ep}\right) =\left(A^\ep\big(\xi^n_k+\nabla X^\ep_{\xi^n_k}\big)\cdot\big(\xi^n_k+\nabla X^\ep_{\xi^n_k}\big)\right)\left({x\over\ep}\right) \\ \ecart \dis \;\rightharpoonup\;\lim_{\ep\to 0}\left(\fint_Y A^\ep\big(\xi^n_k+\nabla X^\ep_{\xi^n_k}\big)\cdot\big(\xi^n_k+\nabla X^\ep_{\xi^n_k}\big)\,dy\right)=q^*\xi^n_k\cdot\overline{\xi^n_k}\quad\mbox{weakly-$*$ in }\M(\bar{Y}), \ea \eeq it follows that \beq\label{est22} \la^*_1(\eta)\leq\int_Yq^*\left(e^{i\,x\cdot\eta}\sum_{k=1}^n\th^n_k\,\xi^n_k\right)\cdot\overline{\left(e^{i\,x\cdot\eta}\sum_{k=1}^n\th^n_k\,\xi^n_k\right)}dx +c\int_Y\left(|R^n|^2+|R^n|\right)dx. \eeq Therefore, since the sequence $R^n$ of \refe{Rnk}. converges strongly to $0$ in $L^2(Y;\CC)^2$, passing to the limit as $n\to\infty$ in \refe{est22}. we get that for any $\chi\in C^1_\eta(Y;\CC)$ with a unit $L^2(Y)$-norm, \beq\label{estla*eta} \la^*_1(\eta)\leq\int_Y q^*\nabla\chi\cdot\nabla\overline{\chi}\,dx. \eeq Using a density argument, the inequality \refe{estla*eta}. combined with the limit problem \refe{vppsi*}. implies the desired formula \refe{la*eta}.. Moreover, due to the uniqueness of \refe{la*eta}. in terms of $q^*$ the limit \refe{limlaepeta}. holds for any $\eta\in\RR^2$, and for the whole sequence $\ep$ such that $q^\ep$ converges to $q^*$. Finally, decomposing formula \refe{la*eta}. in Fourier series and using Parseval's identity we obtain that equality \refe{limlaepeta0}. holds for any $\eta\in\RR^2$ with small enough norm.
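For the reader's convenience, the Fourier argument of the last sentence can be expanded as follows (a routine verification written in our notation):

```latex
Given $\psi\in H^1_\eta(Y;\CC)$, write $\psi(x)=e^{i\,x\cdot\eta}\sum_{k\in\ZZ^2}c_k\,e^{i\,k\cdot x}$,
so that Parseval's identity yields
\[
\int_Y q^*\nabla\psi\cdot\nabla\overline{\psi}\,dx
=(2\pi)^2\sum_{k\in\ZZ^2}q^*(\eta+k)\cdot(\eta+k)\,|c_k|^2,
\qquad
\int_Y|\psi|^2\,dx=(2\pi)^2\sum_{k\in\ZZ^2}|c_k|^2.
\]
Since $q^*$ is positive definite and $|\eta+k|\geq 1-|\eta|$ for any $k\neq 0$,
we have $q^*(\eta+k)\cdot(\eta+k)\geq q^*\eta\cdot\eta$ for every $k\neq 0$
as soon as $\la_{\min}(q^*)\,(1-|\eta|)^2\geq\la_{\max}(q^*)\,|\eta|^2$,
so that the minimum in \refe{la*eta}. is attained with $c_k=0$ for $k\neq 0$,
which gives $\la_1^*(\eta)=q^*\eta\cdot\eta$ for small enough $|\eta|$.
```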
\par\ms\noindent {\it The case $d\geq 2$ under assumption \refe{Cepla}.}: First, note that the proof of the inequality \refe{estla*eta}. in the previous case actually holds for any dimension. Therefore, it is enough to obtain the limit eigenvalue problem \refe{vppsi*}. to deduce the minimization formula \refe{la*eta}.. To this end, applying the homogenization Theorem~2.1 of \cite{Bri1} to the linear equation \refe{eqpsiep}., we get the limit equation \beq\label{eqpsi*} \int_{\RR^d} q^*\nabla\psi^*\cdot\nabla\ph\,dx=\la^*_1(\eta)\int_{\RR^d}\psi^*\,\ph\,dx,\quad\forall\,\ph\in C^\infty_c(\RR^d), \eeq which is equivalent to \refe{vppsi*}. by Proposition~\ref{pro.psiep}. \par It thus remains to prove~\refe{limDep}. when $d=2$, which is also a consequence of \refe{Cepla}.. By \cite{COV2} we have \beq\label{Dep} D^\ep(\eta\otimes\eta):(\eta\otimes\eta) =-\fint_Y a^\ep\,\nabla\left(\chi^\ep_{2,\eta}-{1\over 2}\,(\chi^\ep_{1,\eta})^2\right)\cdot\nabla\left(\chi^\ep_{2,\eta}-{1\over 2}\,(\chi^\ep_{1,\eta})^2\right)dx, \eeq where, taking into account \refe{Xepla}. and \refe{wepj}., \beq\label{chiep1} \chi^\ep_{1,\eta}(x):=\ep\,X^\ep_\eta\left({x\over\ep}\right)=\eta_1\left(w^\ep_1-x_1\right)+\eta_2\left(w^\ep_2-x_2\right) \quad\mbox{for }x\in\RR^2, \eeq and $\chi^\ep_{2,\eta}$ is the unique function in $H^1_\sharp(Y)$ with zero $Y$-average, solution of \beq\label{chiep2} -\,\div\left(a^\ep\nabla\chi^\ep_{2,\eta}\right) =a^\ep\eta\cdot\eta-q^\ep\eta\cdot\eta+a^\ep\eta\cdot\nabla\chi^\ep_{1,\eta}+\div\left(\chi^\ep_{1,\eta}\,a^\ep\eta\right) \quad\mbox{in }\RR^2.
\eeq Consider the partition of $Y$ by the small cubes $2\pi\ep k+\ep Y$, for $k\in\{0,\dots,\ep^{-1}-1\}^2$, and define from $\chi^\ep_{j,\eta}$, $j=1,2$, the associated average function \beq\label{bchiepj} \breve{\chi}^\ep_{j,\eta}:=\sum_{k\in\{0,\dots,\ep^{-1}-1\}^2}\left(\fint_{2\pi\ep k+\ep Y}\chi^\ep_{j,\eta}\,dx\right)1_{2\pi \ep k+\ep Y}, \eeq where $1_E$ denotes the characteristic function of the set $E$. Then, rescaling estimate \refe{Cepla}. to the cells $2\pi\ep k+\ep Y$, we get that \beq\label{estchiepj} \int_Y a^\ep\eta\cdot\eta\left(\chi^\ep_{j,\eta}-\breve{\chi}^\ep_{j,\eta}\right)^2dx \leq\ep^2\,C^\ep_\eta\int_Y a^\ep\nabla\chi^\ep_{j,\eta}\cdot\nabla\chi^\ep_{j,\eta}\,dx. \eeq Also note that $\breve{\chi}^\ep_{1,\eta}=0$, and since $\chi^\ep_{2,\eta}$ has zero $Y$-average, we have \beq\label{mbchiepj} \int_Y \breve{\chi}^\ep_{2,\eta}(x)\,Z\left({x\over\ep}\right)dx=0,\quad\forall\,Z\in L^2_\sharp(Y). \eeq Putting $\chi^\ep_{2,\eta}$ as test function in equation \refe{chiep2}., then using equality \refe{mbchiepj}. and the Cauchy-Schwarz inequality combined with the $L^1(Y)$-boundedness of $a^\ep$ and estimates \refe{estchiepj}., \refe{estwepj}., we obtain that \[ \ba{l} \dis \int_Y a^\ep\nabla\chi^\ep_{2,\eta}\cdot\nabla\chi^\ep_{2,\eta}\,dx \\ \ecart \dis =\int_Y a^\ep\eta\cdot\eta\left(\chi^\ep_{2,\eta}-\breve{\chi}^\ep_{2,\eta}\right)dx +\int_Y a^\ep\eta\cdot\nabla\chi^\ep_{1,\eta}\left(\chi^\ep_{2,\eta}-\breve{\chi}^\ep_{2,\eta}\right)dx -\int_Y a^\ep\eta\cdot\nabla\chi^\ep_{2,\eta}\left(\chi^\ep_{1,\eta}-\breve{\chi}^\ep_{1,\eta}\right)dx \\ \ecart \dis \leq c\left(\ep^2\,C^\ep_\eta\int_Y a^\ep\nabla\chi^\ep_{2,\eta}\cdot\nabla\chi^\ep_{2,\eta}\,dx\right)^{1\over 2} \left[1+\left(\int_Y a^\ep\nabla\chi^\ep_{1,\eta}\cdot\nabla\chi^\ep_{1,\eta}\,dx\right)^{1\over 2}\right] \\ \ecart \dis \leq c\left(\ep^2\,C^\ep_\eta\right)^{1\over 2}\left(\int_Y a^\ep\nabla\chi^\ep_{2,\eta}\cdot\nabla\chi^\ep_{2,\eta}\,dx\right)^{1\over 2}. \ea \] This together with assumption \refe{Cepla}.
yields
\beq\label{estchiep2}
\lim_{\ep\to 0}\left(\int_Y a^\ep\nabla\chi^\ep_{2,\eta}\cdot\nabla\chi^\ep_{2,\eta}\,dx\right)=0.
\eeq
On the other hand, by \refe{chiep1}. and Corollary~2.3 of \cite{BrCa2} (see the previous step), the sequence $\chi^\ep_{1,\eta}$ converges uniformly to $0$ in $Y$. At this stage, dimension two is crucial. This combined with the Cauchy-Schwarz inequality and the $L^1(Y)$-boundedness of $a^\ep\nabla\chi^\ep_{j,\eta}\cdot\nabla\chi^\ep_{j,\eta}$ implies that
\beq\label{estchiep1}
\lim_{\ep\to 0}\left(\int_Y a^\ep\nabla\chi^\ep_{1,\eta}\cdot\nabla\chi^\ep_{j,\eta}\,(\chi^\ep_{1,\eta})^k\,dx\right)=0 \quad\mbox{for }j,k\in\{1,2\}.
\eeq
Therefore, passing to the limit in \refe{Dep}. thanks to \refe{estchiep2}. and \refe{estchiep1}., we get the desired convergence \refe{limDep}., which concludes the proof of Theorem~\ref{thm1}. \cqfd \par\bs To prove Theorem~\ref{thm12d} we need the following result, the main ingredients of which are an estimate due to Manfredi \cite{Man} and a uniform convergence result of \cite{BrCa3}: \begin{Lem}\label{lemM} Let $\Om$ be a domain of $\RR^2$, and let $\si^\ep$ be a sequence of symmetric matrix-valued functions in $\RR^{2\times 2}$ such that $\al\,I_2\leq \si^\ep(x)\leq\be_\ep\,I_2$ a.e. $x\in\Om$, for a constant $\al>0$ independent of $\ep$ and a constant $\be_\ep>\al$. Let $f^\ep$ be a strongly convergent sequence in $W^{-1,p}(\Om)$ for some $p>2$. Consider a bounded sequence $u^\ep$ in $H^1(\Om)$ of solutions of the equation $-\,\div\left(\si^\ep\nabla u^\ep\right)=f^\ep$ in $\Om$. Then, up to a subsequence, $u^\ep$ converges uniformly in any compact set of $\Om$. \end{Lem} \begin{proof} On the one hand, let $D$ be a disk of $\Om$ such that $\bar{D}\subset\Om$, and let $u^\ep_D$ be the solution in $H^1_0(D)$ of the equation $-\,\div\left(\si^\ep\nabla u^\ep_D\right)=f^\ep$ in $D$.
Since $u^\ep_D\equiv 0$ converges uniformly on $\partial D$ and $f^\ep$ converges strongly in $W^{-1,p}(\Om)$, by virtue of Theorem~2.7 of \cite{BrCa3}, up to a subsequence $u^\ep_D$ converges weakly in $H^1(D)$ and uniformly in $\bar{D}$. \par On the other hand, the function $v^\ep:=u^\ep-u^\ep_D$ is bounded in $H^1(D)$ and solves the equation $\div\left(\si^\ep\nabla v^\ep\right)=0$ in $D$. By the De Giorgi-Stampacchia regularity theorem for second-order elliptic equations, $v^\ep$ is H\"older continuous in $D$ and satisfies the maximum principle in any disk of $D$. Hence, the function $v^\ep$ is continuous and weakly monotone in $D$ in the sense of \cite{Man}. Therefore, the estimate (2.5) of \cite{Man} implies that for any $x_0\in D$, there exists a constant $r>0$ such that
\beq\label{estM}
\forall\,x,y\in D(x_0,r),\quad\big|v^\ep(x)-v^\ep(y)\big|\leq{C\,\|\nabla v^\ep\|_{L^2(D)^2}\over\big[\ln\left(4r/|x-y|\right)\big]^{1\over 2}} \leq{C\,\|\nabla v^\ep\|_{L^2(D)^2}\over\left(\ln 2\right)^{1\over 2}},
\eeq
where $D(x_0,r)$ is the disk centered at $x_0$ with radius $r$, and $C>0$ is a constant depending only on the dimension. The sequence $v^\ep-v^\ep(x_0)$ is bounded in $D(x_0,r)$, independently of $\ep$, by the right-hand side of \refe{estM}.. This combined with the boundedness of $v^\ep$ in $L^2(D)$ implies that $v^\ep$ is bounded uniformly in $D(x_0,r)$. Moreover, estimate \refe{estM}. shows that the sequence $v^\ep$ is equi-continuous in $D(x_0,r)$. Then, by virtue of Ascoli's theorem together with a diagonal extraction procedure, up to a subsequence $v^\ep$ converges uniformly in any compact set of $D$. So does the sequence $u^\ep=u^\ep_D+v^\ep$. Again using a diagonal procedure from a countable covering of $\Om$ by disks $D$, there exists a subsequence of $\ep$, still denoted by $\ep$, such that $u^\ep$ converges uniformly in any compact set of $\Om$.
\end{proof} \noindent {\bf Proof of Theorem~\ref{thm12d}.} We only have to show that the limit $\psi^*$ of the eigenvector $\psi^\ep$ satisfying \refe{vppsiep}. and \refe{psiep}. is a solution of \refe{vppsi*}.. Indeed, the proof of inequality \refe{estla*eta}. follows from the convergence of $q^\ep$ thanks to limit \refe{limDwxink}., as shown in the proof of Theorem~\ref{thm1}. \par First of all, for $\chi\in C^1_\eta$ with a unit $L^2(Y)$-norm, $\chi(w^\ep)$ converges uniformly to $\chi$ in $Y$ due to the uniform convergence of $w^\ep$ (see the proof of Theorem~\ref{thm1} or apply Lemma~\ref{lemM}). Then, using successively the minimum formula \refe{laepeta}. with the test function $\psi:=\chi(w^\ep)$, the Cauchy-Schwarz inequality, the $\ep Y$-periodicity of $a^\ep\nabla w^\ep_j\cdot\nabla w^\ep_j$ and the boundedness of $q^\ep$, we have (with the summation over repeated indices)
\[ \ba{ll} \la^\ep_1(\eta) & \dis \leq{1\over\|\chi(w^\ep)\|^2_{L^2(Y)}} \int_Y a^\ep\nabla w^\ep_j\cdot\nabla w^\ep_k\,\partial_j\chi(w^\ep)\,\overline{\partial_k\chi(w^\ep)}\,dx \\ \ecart & \dis \leq c\int_Y a^\ep\nabla w^\ep_j\cdot\nabla w^\ep_j\,dx\leq c\,{\rm tr}\left(q^\ep\right)\leq c. \ea \]
Hence, up to a subsequence $\la^\ep_1(\eta)$ converges to some $\la^*_1(\eta)$ in $\RR$. This combined with \refe{vppsiep}. and \refe{psiep}. implies that the eigenvector $\psi^\ep$ converges weakly to some $\psi^*$ in $H^1_{\rm loc}(\RR^2)$. Moreover, $\Re(\psi^\ep)$, $\Im(\psi^\ep)$ are solutions of equation \refe{eqpsiep}. with respective right-hand sides $\la^\ep_1(\eta)\,\Re(\psi^\ep)$, $\la^\ep_1(\eta)\,\Im(\psi^\ep)$, which are bounded in $H^1_{\rm loc}(\RR^2)$ and thus in $W^{-1,p}_{\rm loc}(\RR^2)$ for any $p>2$. Therefore, thanks to Lemma~\ref{lemM} and up to extracting a new subsequence, $\psi^\ep$ converges uniformly to $\psi^*$ in any compact set of $\RR^2$.
\par On the other hand, for $\ph\in C^\infty_c(\RR^2)$, putting $\ph(w^\ep)$ as test function in equation \refe{eqpsiep}., using that $a^\ep\nabla w^\ep_j$ is divergence free (due to \refe{Xepla}. and \refe{wepj}.), and integrating by parts, we have (with the summation over repeated indices) \beq\label{D2phweppsiep} \ba{ll} \dis \int_Y a^\ep\nabla\psi^\ep\cdot\nabla w^\ep_j\,\partial_j\ph(w^\ep)\,dx & \dis =-\int_{\RR^2} a^\ep\nabla w^\ep_j\cdot\nabla w^\ep_k\,\partial^2_{jk}\ph(w^\ep)\,\psi^\ep\,dx \\ \ecart & \dis =\la^\ep_1(\eta)\int_{\RR^2}\psi^\ep\,\ph(w^\ep)\,dx. \ea \eeq Then, passing to the limit in \refe{D2phweppsiep}. using the uniform convergences of $w^\ep, \psi^\ep$ combined with the convergences \[ a^\ep\nabla w^\ep_j\cdot\nabla w^\ep_k\;\rightharpoonup\; \lim_{\ep\to 0}\left(\fint_Y A^\ep\left(e_j+\nabla X^\ep_{e_j}\right)\cdot\left(e_k+\nabla X^\ep_{e_k}\right)\right) =q^*_{jk}\quad\mbox{weakly in }\M(\RR^2)\,*, \] we get that \beq\label{D2phpsi*} -\int_{\RR^2} q^*_{jk}\,\partial^2_{jk}\ph\,\psi^*\,dx =\la^*_1(\eta)\int_{\RR^2}\psi^*\,\ph\,dx. \eeq Finally, integrating by parts the left-hand side of \refe{D2phpsi*}. we obtain the limit equation \refe{eqpsi*}., which is equivalent to the limit eigenvalue problem \refe{vppsi*}. by virtue of Proposition~\ref{pro.psiep}. \cqfd \section{Anomalous effect with $L^1$-unbounded coefficients}\label{s.L1unbd} In this section we assume that $d=3$. Let $\ep>0$ be such that $\ep^{-1}$ is an integer. Consider the fiber reinforced structure introduced in \cite{FeKh} and extended in several subsequent works \cite{BeBo, CESe, Bri2} to derive nonlocal effects in homogenization. Here we will consider this structure with a very high isotropic conductivity $a^\ep$ which is not bounded in $L^1(Y)$. 
More precisely, let $\om^\ep\subset Y$ be the $\ep Y$-periodic lattice composed of $\ep^{-2}$ cylinders with axes
\[ \big(2\pi k_1\ep+\pi\ep,2\pi k_2\ep+\pi\ep,0\big)+\RR\,e_3,\quad\mbox{for }(k_1,k_2)\in\{0,\dots,\ep^{-1}-1\}^2, \]
of length $2\pi$ and radius $\ep\,r_\ep$ such that
\beq\label{rep}
\lim_{\ep\to 0}\left({1\over2\pi\,\ep^2|\ln r_\ep|}\right)=\ga\in(0,\infty).
\eeq
The conductivity $a^\ep$ is defined by
\beq\label{aepbep}
a^\ep(x):=\left\{\ba{ll} \be_\ep & \mbox{if }x\in\om^\ep \\ \ecart 1 & \mbox{if }x\in Y\setminus\om^\ep \ea\right. \quad\mbox{with}\quad\lim_{\ep\to 0}\be_\ep\,r_\ep^2=\infty,
\eeq
so that $a^\ep$ is not bounded in $L^1(Y)$. \par Then, we have the following result: \begin{Thm}\label{thm2} Assume that condition \refe{rep}. holds. Then, the first Bloch eigenvalue $\la^\ep_1(\eta)$ defined by \refe{laepeta}. with the conductivity $a^\ep$ of \refe{aepbep}. satisfies for any $\eta\in\RR^3$ with $\eta_3\notin\ZZ$,
\beq\label{laepetaga}
\lim_{\ep\to 0}\la_1^\ep(\eta)=\ga+\min\left\{\int_Y |\nabla\psi|^2\,dx\;:\;\psi\in H^1_\eta(Y;\CC)\mbox{ and }\int_Y|\psi|^2\,dx=1\right\},
\eeq
and for $|\eta|\leq{1\over 2}$ with $\eta_3\neq 0$,
\beq\label{laepetaga0}
\lim_{\ep\to 0}\la^\ep_1(\eta)=\ga+|\eta|^2.
\eeq
\end{Thm} \begin{Rem} For a fixed $\ep>0$, the function $\eta\mapsto\la^\ep_1(\eta)$ is analytic in a neighborhood of $0$. However, the limit $\la^*_1$ of $\la^\ep_1$ is not even continuous at $\eta=0$, since by \refe{asylaepeta}. and \refe{laepetaga0}. we have
\[ \la^*_1(0)=0\quad\mbox{while}\quad\lim_{\eta\to 0,\,\eta_3\neq 0}\la^*_1(\eta)=\ga>0. \]
Contrary to the case of $L^1$-bounded coefficients, the $L^1(Y)$-unboundedness of $a^\ep$ induces a gap in the first Bloch eigenvalue. Therefore, the very high conductivity of the fiber structure deeply modifies the wave propagation in any direction $\eta$ such that $\eta_3\notin\ZZ$.
\end{Rem} \noindent {\bf Proof of Theorem~\ref{thm2}.} First we will determine the limit of the eigenvalue problem \refe{vppsiep}.. Following \cite{BeBo,BrTc} consider the function $\hat{v}^\ep$ related to the capacity of the fibers and defined by
\beq\label{hvep}
\hat{v}^\ep(x):=\hat{V}^\ep\left({x\over\ep}\right)\quad\mbox{for }x\in\RR^3,
\eeq
where $\hat{V}^\ep$ is the $y_3$-independent $Y$-periodic function defined in the period cell $Y$ by
\beq\label{hVep}
\hat{V}^\ep(y):= \left\{\ba{cl} 0 & \mbox{if }r\in[0,r_\ep] \\ \ecart \dis {\ln r-\ln r_\ep\over\ln R-\ln r_\ep} & \mbox{if }r\in(r_\ep,R) \\ \ecart 1 & \mbox{if }r\geq R \ea\right. \quad\mbox{where}\quad r:=\sqrt{(y_1-\pi)^2+(y_2-\pi)^2},
\eeq
and $R$ is a fixed number in $(r_\ep,\pi)$. By a simple adaptation of Lemma~2 of \cite{BrTc} combined with~\refe{rep}., the sequence $\hat{v}^\ep$ satisfies the following properties
\beq\label{conhvep}
\hat{v}^\ep=0\;\;\mbox{in }\om^\ep\quad\mbox{and}\quad\hat{v}^\ep\rightharpoonup 1\;\;\mbox{weakly in }H^1(Y),
\eeq
and for any bounded sequence $v^\ep$ in $H^1(Y)$, with ${1_{\om^\ep}\over|\om^\ep|}\,v^\ep$ bounded in $L^1(Y)$,
\beq\label{limhvepvep}
\nabla v^\ep\cdot\nabla\hat{v}^\ep-\ga\left(v^\ep-|Y|\,{1_{\om^\ep}\over|\om^\ep|}\,v^\ep\right) \rightharpoonup 0\quad\mbox{weakly in }\M(\bar{Y})*.
\eeq
The last convergence involves the potential $v^\ep$ in the whole domain and the rescaled potential ${1_{\om^\ep}\over|\om^\ep|}\,v^\ep$ in the fiber set. In \cite{BeBo,BrTc,Bri2} it is proved that under assumption \refe{rep}. the homogenization of the conduction problem, with a Dirichlet boundary condition on the bottom of a cylinder parallel to the fibers, yields two different limit potentials inducing: \begin{itemize} \item either a nonlocal term if $a^\ep$ is bounded in $L^1$, \item or only a zero-order term if $a^\ep$ is not bounded in $L^1$. \end{itemize} Here the situation is more intricate since the Dirichlet boundary condition is replaced by condition \refe{H1eta}..
This calls for an alternative approach to obtain the boundedness of the potential $\psi^\ep$, solution of \refe{vppsiep}., and of its rescaled version ${1_{\om^\ep}\over|\om^\ep|}\,\psi^\ep$. \par On the one hand, putting $e^{i\,x\cdot\eta}\,\hat{v}^\ep/\|e^{i\,x\cdot\eta}\,\hat{v}^\ep\|_{L^2(Y)}$, which is zero in $\om^\ep$, as test function in the minimization problem \refe{laepeta}. and using \refe{rep}., we get that $\la^\ep_1(\eta)$ is bounded. Hence, the sequence $\psi^\ep$ is bounded in $H^1_\eta(Y;\CC)$, and up to a subsequence converges weakly to some $\psi^*$ in $H^1_\eta(Y;\CC)$. On the other hand, the boundedness of ${1_{\om^\ep}\over|\om^\ep|}\,\psi^\ep$ in $L^1(Y;\CC)$ is more delicate to derive. To this end, we need the following Poincar\'e-Wirtinger inequality weighted by the conductivity $A^\ep(y):=a^\ep(\ep y)$:
\beq\label{PW}
\int_Y A^\ep\left|\,V-\fint_Y V\,dy\,\right|^2 dy\leq C\,|\ln r_\ep|\,\|A^\ep\|_{L^1(Y)}\int_Y A^\ep\,|\nabla V|^2\,dy, \quad\forall\,V\in H^1(Y;\CC),
\eeq
which is an easy extension of Proposition~2.4 in \cite{Bri1} to the case where $A^\ep$ is not bounded in~$L^1(Y)$. Rescaling \refe{PW}. and using \refe{rep}. combined with the boundedness of $\la^\ep_1(\eta)$, we get that
\beq\label{estbpsiep}
\ba{ll} \dis \int_Y a^\ep\,\big|\psi^\ep-\breve{\psi^\ep}\big|^2\,dx & \dis \leq c\,\ep^2\,|\ln r_\ep|\,\|A^\ep\|_{L^1(Y)}\int_Y a^\ep\,|\nabla\psi^\ep|^2\,dx \\ \ecart &\dis \leq c\,\|A^\ep\|_{L^1(Y)}\int_Y a^\ep\,|\nabla\psi^\ep|^2\,dx\leq c\,\|a^\ep\|_{L^1(Y)}, \ea
\eeq
where for any $\chi\in L^2(Y;\CC)$, $\breve{\chi}$ denotes the piecewise constant function
\beq\label{bchiep}
\breve{\chi}:=\sum_{k\in\{0,\dots,\ep^{-1}-1\}^3}\left(\fint_{2\pi\ep k+\ep Y}\chi\,dx\right)1_{2\pi \ep k+\ep Y}.
\eeq
Then, from the Jensen inequality, the estimates $\be_\ep\,|\om^\ep|\sim\|a^\ep\|_{L^1(Y)}$ and \refe{estbpsiep}.
we deduce that
\beq\label{estbpsiep2}
\fint_{\om^\ep}\big|\psi^\ep-\breve{\psi^\ep}\big|\,dx\leq\left(\fint_{\om^\ep}\big|\psi^\ep-\breve{\psi^\ep}\big|^2dx\right)^{1\over 2} \leq c\left(\int_{\om^\ep}{a^\ep\over\|a^\ep\|_{L^1(Y)}}\,\big|\psi^\ep-\breve{\psi^\ep}\big|^2dx\right)^{1\over 2}\leq c.
\eeq
Moreover, since $\big|\om^\ep\cap(2\pi\ep k+\ep Y)\big|=\ep^3\,|\om^\ep|$ for any $k\in\{0,\dots,\ep^{-1}-1\}^3$, we have
\beq\label{estbpsiep3}
\ba{ll} \dis \fint_{\om^\ep}\big|\breve{\psi^\ep}\big|\,dx & \dis \leq\sum_{k\in\{0,\dots,\ep^{-1}-1\}^3}{1\over|\om^\ep|} \int_{\om^\ep\cap(2\pi\ep k+\ep Y)}\left(\fint_{2\pi\ep k+\ep Y}\left|\psi^\ep\right|\right)dx \\ \ecart & \dis =\sum_{k\in\{0,\dots,\ep^{-1}-1\}^3}{1\over|Y|}\int_{2\pi\ep k+\ep Y}\left|\psi^\ep\right|dx =\fint_Y\left|\psi^\ep\right|dx\leq c. \ea
\eeq
Estimates \refe{estbpsiep2}. and \refe{estbpsiep3}. imply that the rescaled potential ${1_{\om^\ep}\over|\om^\ep|}\,\psi^\ep$ is bounded in $L^1(Y;\CC)$. Therefore, up to extracting a new subsequence, there exists a Radon measure $\tilde{\psi}^*$ on $\bar{Y}$ such that
\beq\label{tpsi*}
\tilde{\psi}^\ep:={1_{\om^\ep}\over|\om^\ep|}\,\psi^\ep\;\rightharpoonup\;\tilde{\psi}^*\quad\mbox{weakly in }\M(\bar{Y})\,*,
\eeq
or equivalently,
\beq\label{tph*}
\tilde{\ph}^\ep:=e^{-i\,x\cdot\eta}\, \tilde{\psi}^\ep\;\rightharpoonup\;\tilde{\ph}^*:=e^{-i\,x\cdot\eta}\,\tilde{\psi}^*\quad\mbox{weakly in }\M(\bar{Y})*.
\eeq
\par Now, we have to evaluate the Radon measure $\tilde{\psi}^*$. Let $\chi\in C^1_\eta(Y;\CC)$. Since $1_{\om^\ep}$ is independent of the variable $x_3$ and the function $\psi^\ep\,\overline{\chi}$ is $Y$-periodic, an integration by parts yields
\beq\label{D3psiep}
\fint_{\om^\ep}\psi^\ep\,\partial_3\overline{\chi}\,dx=-\fint_{\om^\ep}\partial_3\psi^\ep\,\overline{\chi}\,dx.
\eeq
Moreover, using successively the Cauchy-Schwarz inequality, the boundedness of $\la^\ep_1(\eta)$ and the estimate \refe{aepbep}.
satisfied by $\be_\ep$, we have
\beq\label{estD3psiep}
\left|\fint_{\om^\ep}\partial_3\psi^\ep\,\overline{\chi}\,dx\right| \leq\left(\fint_{\om^\ep}|\nabla\psi^\ep|^2\,dx\right)^{1\over 2}\left(\fint_{\om^\ep}|\chi|^2\,dx\right)^{1\over 2} \leq {c\over\sqrt{\be_\ep\,|\om^\ep|}}\,\left(\fint_{\om^\ep}|\chi|^2\,dx\right)^{1\over 2}=o(1).
\eeq
Then, passing to the limit in \refe{D3psiep}. thanks to \refe{tpsi*}. and \refe{estD3psiep}., we get that
\beq\label{d3tpsi*}
\int_{\bar{Y}}\partial_3\overline{\chi}\,d\tilde{\psi}^*=0.
\eeq
Writing $\chi=e^{i\,x\cdot\eta}\,\overline{\ph}$ with $\ph\in C^1_\sharp(Y;\CC)$, and using \refe{tph*}., equality \refe{d3tpsi*}. reads as
\beq\label{d3tph*}
\int_{\bar{Y}}\left(\partial_3\ph-i\,\eta_3\,\ph\right)d\tilde{\ph}^*=0.
\eeq
From now on assume that $\eta_3\notin\ZZ$. Then, for $f\in C^1_\sharp(Y;\CC)$, we may define the function $\ph$ by
\beq\label{fph}
\ph(x',x_3):=e^{i\,\eta_3 x_3}\int_0^{x_3}e^{-i\,\eta_3 t}\,f(x',t)\,dt+{e^{i\,\eta_3 x_3}\over e^{-i\,2\pi\eta_3}-1}\int_0^{2\pi}e^{-i\,\eta_3 t}\,f(x',t)\,dt.
\eeq
It is easy to check that $\ph$ belongs to $C^1_\sharp(Y;\CC)$ and satisfies the equation $\partial_3\ph-i\,\eta_3\,\ph=f$ in $\RR^3$. Therefore, from \refe{d3tph*}. we deduce that
\beq\label{limtphiep}
\int_{\bar{Y}}f\,d\tilde{\ph}^*=0,\quad\forall\,f\in C^1_\sharp(Y;\CC),
\eeq
or equivalently by \refe{tph*}.,
\beq\label{limtpsiep}
\int_{\bar{Y}}\overline{\chi}\,d\tilde{\psi}^*=0,\quad\forall\,\chi\in C^1_\eta(Y;\CC).
\eeq
\par We can now determine the limit of the eigenvalue problem \refe{vppsiep}.. Let $\chi\in C^1_\eta(Y;\CC)$. Putting the function $\chi\,\hat{v}^\ep$ defined by \refe{hvep}. as test function in \refe{vppsiep}., we have
\[ \int_Y \hat{v}^\ep\nabla\psi^\ep\cdot\nabla\overline{\chi}\,dx+\int_Y\nabla\hat{v}^\ep\cdot\nabla\psi^\ep\,\overline{\chi}\,dx =\la^\ep_1(\eta)\int_Y\psi^\ep\,\overline{\chi}\,\hat{v}^\ep\,dx.
\] Consider a subsequence of $\ep$, still denoted by $\ep$, such that $\la^\ep_1(\eta)$ converges to $\la^*_1(\eta)$. Then, passing to the limit in the previous equality thanks to the weak convergence of $\psi^\ep$ to $\psi^*$ in $H^1_\eta(Y;\CC)$, to \refe{conhvep}., and to the limit \refe{limhvepvep}. combined with equality \refe{limtpsiep}., we obtain the limit eigenvalue problem
\beq\label{limvppsi*}
\int_Y\nabla\psi^*\cdot\nabla\overline{\chi}\,dx+\ga\int_Y\psi^*\,\overline{\chi}\,dx=\la^*_1(\eta)\int_Y\psi^*\,\overline{\chi}\,dx, \quad\forall\,\chi\in H^1_\eta(Y;\CC),
\eeq
where $\psi^*$ satisfies \refe{psi*}.. \par It remains to prove that the limit of the first Bloch eigenvalue is given by
\beq\label{la*etaga}
\la_1^*(\eta)=\ga+\min\left\{\int_Y |\nabla\psi|^2\,dx\;:\;\psi\in H^1_\eta(Y;\CC)\mbox{ and }\int_Y|\psi|^2\,dx=1\right\}.
\eeq
Let $\chi$ be a function in $C^1_\eta(Y;\CC)$ with a unit $L^2(Y)$-norm. Using \refe{laepeta}., \refe{conhvep}. and the convergence
\[ |\nabla\hat{v}^\ep|^2\;\rightharpoonup\;\lim_{\ep\to 0}\left({1\over\ep^2}\fint_Y|\nabla\hat{V}^\ep|^2\,dy\right)=\ga \quad\mbox{weakly in }\M(\bar{Y})\,*,\quad\mbox{due to \refe{rep}. and \refe{hVep}.}, \]
we have
\[ \ba{ll} \dis \la^\ep_1(\eta) & \dis \leq{1\over\|\chi\,\hat{v}^\ep\|^2_{L^2(Y)}}\int_Y a^\ep\big|\nabla(\chi\,\hat{v}^\ep)\big|^2\,dx \\ \ecart & \dis ={1\over\|\chi\,\hat{v}^\ep\|^2_{L^2(Y)}}\left(\int_Y |\nabla\hat{v}^\ep|^2\,|\chi|^2\,dx+\int_Y (\hat{v}^\ep)^2\,|\nabla\chi|^2\,dx +2\int_Y\hat{v}^\ep\,\nabla\hat{v}^\ep\cdot\Re\left(\overline{\chi}\nabla\chi\right)dx\right) \\ \ecart & \dis =\ga+\int_Y |\nabla\chi|^2\,dx+o(1), \ea \]
which, by a density argument, implies that
\[ \la^*_1(\eta)\leq\ga+\int_Y |\nabla\psi|^2\,dx,\quad\forall\,\psi\in H^1_\eta(Y;\CC)\mbox{ with }\int_Y |\psi|^2\,dx=1. \]
This combined with \refe{limvppsi*}. and \refe{psi*}. yields the minimization formula \refe{la*etaga}., which shows the uniqueness of the limit. Therefore, limit \refe{laepetaga}.
holds for the whole sequence $\ep$. Finally, using a Fourier series expansion, for $|\eta|\leq{1\over 2}$ formula \refe{laepetaga}. reduces to \refe{laepetaga0}.. The proof of Theorem~\ref{thm2} is thus complete. \cqfd \par\bs\noindent {\bf Acknowledgment.} The authors wish to thank J. Casado-D\'iaz for a stimulating discussion about Lemma~\ref{lemM} in connection with reference \cite{Man}. This work has been carried out within the project ``Homogenization and composites'' supported by the {\em Indo French Center for Applied Mathematics - UMI IFCAM}. The authors are also grateful for the hospitality of TIFR-CAM Bangalore in February 2013 and INSA de Rennes in May 2013. \end{document}
\begin{document} \title{Achieve the Minimum Width of Neural Networks for Universal Approximation} \begin{abstract} The universal approximation property (UAP) of neural networks is fundamental for deep learning, and it is well known that wide neural networks are universal approximators of continuous functions within both the $L^p$ norm and the continuous/uniform norm. However, the exact minimum width, $w_{\min}$, for the UAP has not been studied thoroughly. Recently, using an encoder-memorizer-decoder scheme, \citet{Park2021Minimum} found that $w_{\min} = \max(d_x+1,d_y)$ for both the $L^p$-UAP of ReLU networks and the $C$-UAP of ReLU+STEP networks, where $d_x,d_y$ are the input and output dimensions, respectively. In this paper, we consider neural networks with an arbitrary set of activation functions. We prove that both $C$-UAP and $L^p$-UAP for functions on compact domains share a universal lower bound of the minimal width; that is, $w^*_{\min} = \max(d_x,d_y)$. In particular, the critical width, $w^*_{\min}$, for $L^p$-UAP can be achieved by leaky-ReLU networks, provided that the input or output dimension is larger than one. Our construction is based on the approximation power of neural ordinary differential equations and the ability to approximate flow maps by neural networks. The case of nonmonotone or discontinuous activation functions and the one-dimensional case are also discussed. \end{abstract} \section{Introduction} The study of the universal approximation property (UAP) of neural networks is fundamental for deep learning and has a long history. Early studies, such as \cite{Cybenkot1989Approximation, Hornik1989Multilayer, Leshno1993Multilayer}, proved that wide neural networks (even shallow ones) are universal approximators of continuous functions within both the $L^p$ norm ($1\le p < \infty$) and the continuous/uniform norm.
Further research, such as \cite{Telgarsky2016Benefits}, indicated that increasing the depth can improve the expression power of neural networks. If the total budget of neurons is fixed, deeper neural networks have better expression power \cite{Yarotsky2019phase, Shen2022Optimal}. However, this pattern does not hold if the width is below a critical threshold $w_{\min}$. \cite{Lu2017Expressive} first showed that ReLU networks have the UAP for $L^1$ functions from $\mathbb{R}^{d_x}$ to $\mathbb{R}$ if the width is larger than $d_x+4$, and the UAP disappears if the width is less than $d_x$. Further research, \cite{Hanin2018Approximating, Kidger2020Universal, Park2021Minimum}, improved the minimum width bound for ReLU networks. In particular, \citet{Park2021Minimum} revealed that the minimum width is $w_{\min} = \max(d_x+1,d_y)$ for the $L^p(\mathbb{R}^{d_x},\mathbb{R}^{d_y})$ UAP of ReLU networks and for the $C(\mathcal{K},\mathbb{R}^{d_y})$ UAP of ReLU+STEP networks, where $\mathcal{K}$ is a compact domain in $\mathbb{R}^{d_x}$. For general activation functions, the exact minimum width $w_{\min}$ for the UAP is less studied. \cite{Johnson2019Deep} considered uniformly continuous activation functions that can be approximated by a sequence of one-to-one functions and gave a lower bound $w_{\min} \ge d_x+1$ for $C$-UAP \change{(means UAP for $C(\mathcal{K},\mathbb{R}^{d_y})$)}. \cite{Kidger2020Universal} considered continuous nonpolynomial activation functions and gave an upper bound $w_{\min} \le d_x+d_y+1$ for $C$-UAP. \cite{Park2021Minimum} improved the bound for $L^p$-UAP \change{(means UAP for $L^p(\mathcal{K},\mathbb{R}^{d_y})$)} to $w_{\min} \le \max(d_x+2,d_y+1)$. A summary of known upper/lower bounds on the minimum width for the UAP can be found in \cite{Park2021Minimum}. In this paper, we consider neural networks having the UAP with arbitrary activation functions.
We give a universal lower bound, $w_{\min} \ge w^*_{\min} = \max(d_x,d_y)$, to approximate functions from a compact domain $\mathcal{K} \subset \mathbb{R}^{d_x}$ to $\mathbb{R}^{d_y}$ in the $L^p$ norm or continuous norm. Furthermore, we show that the critical width $w^*_{\min}$ can be achieved by many neural networks, as listed in Table~\ref{tab:main}. Surprisingly, the leaky-ReLU networks achieve the critical width for the $L^p$-UAP provided that the input or output dimension is larger than one. This result relies on a novel construction scheme proposed in this paper based on the approximation power of neural ordinary differential equations (ODEs) and the ability to approximate flow maps by neural networks. \begin{table}[htp!] \caption{Summary of the known minimum width of feed-forward neural networks that have the universal approximation property.} \begin{center} \begin{tabular}{llll} \multicolumn{1}{c}{\bf Functions} &\multicolumn{1}{c}{\bf Activation} &\multicolumn{1}{c}{\bf Minimum width} &\multicolumn{1}{c}{\bf References}\\ \hline $C(\mathcal{K},\mathbb{R})$ &ReLU & $w_{\min} = d_x+1$ & \cite{Hanin2018Approximating} \\ $L^p(\mathbb{R}^{d_x},\mathbb{R}^{d_y})$ &ReLU & $w_{\min} = \max(d_x+1,d_y)$ & \cite{Park2021Minimum} \\ $C([0,1],\mathbb{R}^{2})$ &ReLU & $w_{\min} = 3=\max(d_x,d_y)+1$ & \cite{Park2021Minimum} \\ $C(\mathcal{K},\mathbb{R}^{d_y})$ &ReLU+STEP & $w_{\min} = \max(d_x+1,d_y)$ & \cite{Park2021Minimum} \\ $L^p(\mathcal{K},\mathbb{R}^{d_y})$ &Conti. 
nonpoly$^\ddagger$ & $w_{\min} \le \max(d_x+2,d_y+1)$ & \cite{Park2021Minimum} \\ \hline $L^p(\mathcal{K},\mathbb{R}^{d_y})$ &Arbitrary & $w_{\min} \ge \max(d_x,d_y)=:w_{\min}^{*}$ & {\bf Ours} (Lemma \ref{th:universal_w_min}) \\ & Leaky-ReLU & $w_{\min} = \max(d_x,d_y,2)$ & {\bf Ours} (Theorem \ref{th:main_LpUAP_leaky_ReLU})\\ & Leaky-ReLU+ABS & $w_{\min} = \max(d_x,d_y)$ & {\bf Ours} (Theorem \ref{th:main_LpUAP_leaky_ReLU+ABS})\\ \hline $C(\mathcal{K},\mathbb{R}^{d_y})$ &Arbitrary & $w_{\min} \ge \max(d_x,d_y)=:w_{\min}^{*}$ & {\bf Ours} (Lemma \ref{th:universal_w_min}) \\ & ReLU+FLOOR & $w_{\min} = \max(d_x,d_y,2)$ & {\bf Ours} (Lemma \ref{th:C-UAP_ReLU+FLOOR})\\ & UOE$^\dagger$+FLOOR & $w_{\min} = \max(d_x,d_y)$ & {\bf Ours} (Corollary \ref{th:C-UAP_ReLU+FLOOR+UOE})\\ $C([0,1],\mathbb{R}^{d_y})$ & UOE$^\dagger$ & $w_{\min} = d_y$ & {\bf Ours} (Theorem \ref{th:C-UAP_UOE})\\ \hline \multicolumn{4}{l}{\change{\small $\ddagger$ Continuous nonpolynomial $\rho$ that is continuously differentiable at some $z$ with $\rho'(z) \neq 0$.}}\\ \multicolumn{4}{l}{\small $\dagger$ UOE means a function having \emph{universal ordering of extrema}; see Definition \ref{def:UOE}.} \end{tabular} \end{center} \label{tab:main} \end{table} \subsection{Contributions} \begin{itemize} \item[1)] Obtained the universal lower bound of width $w^*_{\min}$ for feed-forward neural networks (FNNs) that have universal approximation properties. \item[2)] Achieved the critical width $w^*_{\min}$ by leaky-ReLU+ABS networks and UOE+FLOOR networks. \change{(UOE is a continuous function which has \emph{universal ordering of extrema}. It is introduced to handle $C$-UAP for one-dimensional functions. See Definition \ref{def:UOE}.)} \item[3)] Proposed a novel construction scheme from a differential geometry perspective that could deepen our understanding of UAP through topology theory.
\end{itemize} \subsection{Related work} To obtain the exact minimum width, one must verify the lower and upper bounds. Generally, the upper bounds are obtained by construction, while the lower bounds are obtained by counterexamples. \textbf{Lower bounds.} For ReLU networks, \cite{Lu2017Expressive} exploited the limitation caused by the insufficient dimension and proved a lower bound $w_{\min} \ge d_x$ for $L^1$-UAP; \cite{Hanin2018Approximating} considered the compactness of the level set and proved a lower bound $w_{\min} \ge d_x+1$ for $C$-UAP. For monotone activation functions or their variants, \cite{Johnson2019Deep} noticed that functions represented by networks with width $d_x$ have unbounded level sets, and \cite{Beise2020Expressiveness} noticed that such functions on a compact domain $\mathcal{K}$ take their maximum value on the boundary $\partial \mathcal{K}$. These properties allow one to construct counterexamples and give a lower bound $w_{\min} \ge d_x+1$ for $C$-UAP. For general activation functions, \cite{Park2021Minimum} used the volume of a simplex in the output space and gave a lower bound $w_{\min} \ge d_y$ for either $L^p$-UAP or $C$-UAP. Our universal lower bound, $w_{\min} \ge \max(d_x,d_y)$, is based on the insufficient dimension of both the input and output spaces, which combines the ideas of the references above. \textbf{Upper bounds.} For ReLU networks, \cite{Lu2017Expressive} explicitly constructed a width-$(d_x+4)$ network by concatenating a series of blocks so that the whole network can approximate scalar functions in $L^1(\mathbb{R}^{d_x},\mathbb{R})$ to any given accuracy.
\cite{Hanin2018Approximating,Hanin2019Universal} constructed a width-$(d_x+d_y)$ network using the max-min string approach to achieve $C$-UAP for functions on compact domains; \cite{Park2021Minimum} proposed an encoder-memorizer-decoder scheme that achieves the optimal bound $w_{\min}=\max(d_x+1,d_y)$ for the UAP in $L^p(\mathbb{R}^{d_x}, \mathbb{R}^{d_y})$. For general activation functions, \cite{Kidger2020Universal} proposed a register model construction that gives an upper bound $w_{\min} \le d_x+d_y+1$ for $C$-UAP. Based on this result, \cite{Park2021Minimum} improved the upper bound to $w_{\min}\le \max(d_x+2,d_y+1)$ for $L^p$-UAP. In this paper, we adopt the encoder-memorizer-decoder scheme to attain the universal critical width for $C$-UAP with ReLU+FLOOR activation functions. However, the floor function is discontinuous. For $L^p$-UAP, we reach the critical width with leaky-ReLU networks, which are continuous, using a novel scheme based on the approximation power of neural ODEs. \textbf{ResNet and neural ODEs.} Although our original aim is the UAP for feed-forward neural networks, our construction is related to neural ODEs and residual networks (ResNet, \cite{He2016Deep}), which include skip connections. Many studies, such as \cite{E2017Proposal, Lu2018Finite, Chen2018Neural}, have emphasized that ResNet can be regarded as the Euler discretization of neural ODEs. The approximation power of ResNet and neural ODEs has also been examined by researchers. To list a few, \cite{Li2022Deep} gave a sufficient condition that covers most networks in practice so that neural ODEs/dynamical systems (without extra dimensions) possess $L^p$-UAP for continuous functions, provided that the spatial dimension is larger than one; \cite{Ruiz-Balet2021Neural} obtained similar results focusing on the case of one-hidden-layer fields.
\cite{Tabuada2020Universal} obtained the $C$-UAP for monotone functions, and for continuous functions it was obtained by adding one extra spatial dimension. Recently, \cite{Duan2022Vanilla} noticed that the FNN could also be a discretization of neural ODEs, which motivates us to construct networks achieving the critical width by inheriting the approximation power of neural ODEs. For the excluded dimension one, we design an approximation scheme with leaky-ReLU+ABS and UOE activation functions. \subsection{Organization} We formally state the main results and necessary notations in Section \ref{sec:main_results}. The proof ideas are given in Sections \ref{sec:case_of_d=1}, \ref{sec:case_of_N=d}, and \ref{sec:minimal_width}. In Section \ref{sec:case_of_d=1}, we consider the case where $N=d_x=d_y=1$, which is basic for the high-dimensional cases. The construction is based on the properties of monotone functions. In Section \ref{sec:case_of_N=d}, we treat the case where $N=d_x=d_y \ge 2$. The construction is based on the approximation power of neural ODEs. In Section \ref{sec:minimal_width}, we consider the case where $d_x \neq d_y$ and discuss the case of more general activation functions. Finally, we conclude the paper in Section \ref{sec:conclusion}. All formal proofs of the results are presented in the Appendix. \section{Main results} \label{sec:main_results} In this paper, we consider the standard feed-forward neural network with $N$ neurons at each hidden layer. We say that a $\sigma$ network with depth $L$ is a function with inputs $x\in \mathbb{R}^{d_x}$ and outputs $y\in\mathbb{R}^{d_y}$, which has the following form: \begin{align} \label{eq:FNN} y \equiv f_{L}(x) = W_{L+1} \sigma(W_L( \cdots \sigma( W_1 x + b_1) + \cdots)+ b_L) + b_{L+1}, \end{align} where $b_i$ are bias vectors, $W_i$ are weight matrices, and $\sigma(\cdot)$ is the activation function.
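For concreteness, the compositional form above can be sketched as a short program (an illustrative NumPy implementation, not part of the analysis; the toy sizes, weights, and leaky-ReLU slope below are arbitrary choices):

```python
import numpy as np

def leaky_relu(z, alpha=0.1):
    # Leaky-ReLU activation: max(z, alpha*z) with a fixed slope alpha in (0,1).
    return np.maximum(z, alpha * z)

def fnn(x, weights, biases, sigma=leaky_relu):
    # Evaluate y = W_{L+1} sigma(W_L(... sigma(W_1 x + b_1) ...) + b_L) + b_{L+1}.
    # `weights` holds L+1 matrices; sigma is applied after the first L affine
    # maps only, so the final layer is purely affine.
    h = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = sigma(W @ h + b)
    return weights[-1] @ h + biases[-1]

# Toy instance with width N = 3, depth L = 1, d_x = 2, d_y = 1.
W1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # R^2 -> R^3
b1 = np.zeros(3)
W2 = np.array([[1.0, 1.0, 1.0]])                      # R^3 -> R^1
b2 = np.zeros(1)
y = fnn(np.array([1.0, -1.0]), [W1, W2], [b1, b2])
```

Here the width is the constant number $N$ of neurons per hidden layer; only this width, not the depth, is constrained by the minimum-width results discussed in this paper.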
For the case of multiple activation functions, for instance, $\sigma_1$ and $\sigma_2$, we call $f_L$ a $\sigma_1$+$\sigma_2$ network; in this situation, the activation function of each neuron is either $\sigma_1$ or $\sigma_2$. In this paper, we consider arbitrary activation functions, with an emphasis on the following: ReLU ($\max(x,0)$), leaky-ReLU ($\max(x,\alpha x)$, where $\alpha \in (0,1)$ is a fixed parameter), ABS ($|x|$), SIN ($\sin(x)$), STEP ($1_{x>0}$), FLOOR ($\floor{x}$) and UOE (\emph{universal ordering of extrema}, which will be defined later). \begin{lemma}\label{th:universal_w_min} For any compact domain $\mathcal{K} \subset \mathbb{R}^{d_x}$ and any finite set of activation functions $\{\sigma_i\}$, the $\{\sigma_i\}$ networks with width $w < w^*_{\min} \equiv \max(d_x, d_y)$ have the UAP for neither $L^p(\mathcal{K},\mathbb{R}^{d_y})$ nor $C(\mathcal{K},\mathbb{R}^{d_y})$. \end{lemma} \change{\bf $L^p$-UAP and $C$-UAP.} The lemma indicates that $w^*_{\min} \equiv \max(d_x, d_y)$ is a universal lower bound for the UAP in both $L^p(\mathcal{K},\mathbb{R}^{d_y})$ and $C(\mathcal{K},\mathbb{R}^{d_y})$. The main result of this paper is that the minimal width $w^*_{\min}$ can be achieved. We consider the UAP for these two function classes, \emph{i.e.,} $L^p$-UAP and $C$-UAP, respectively. \change{Note that any compact domain can be covered by a large cube, functions on the former can be extended to the latter, and the cube can be mapped to the unit cube by a linear map. This allows us to assume, without loss of generality, that $\mathcal{K}$ is the (unit) cube.} \subsection{$L^p$-UAP} \begin{theorem}\label{th:main_LpUAP_leaky_ReLU} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the function class $L^p(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of leaky-ReLU networks having $L^p$-UAP is exactly $w_{\min} = \max(d_x, d_y,2)$.
\end{theorem} The theorem indicates that leaky-ReLU networks achieve the critical width $w^*_{\min} = \max(d_x, d_y)$, except in the case of $d_x=d_y=1$. The idea is to consider the case where $d_x=d_y=d>1$ and let the network width equal $d$. According to the results of \cite{Duan2022Vanilla}, leaky-ReLU networks can approximate the flow map of neural ODEs. Thus, we can use the approximation power of neural ODEs to finish the proof. \cite{Li2022Deep} proved that many neural ODEs can approximate continuous functions in the $L^p$ norm. This rests on the fact that orientation-preserving diffeomorphisms can approximate continuous functions \cite{Brenier2003Approximation}. The exclusion of dimension one is due to the monotonicity of leaky-ReLU. When we add a nonmonotone activation function, such as the absolute value function or the sine function, the $L^p$-UAP in dimension one can be achieved. \begin{theorem}\label{th:main_LpUAP_leaky_ReLU+ABS} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the function class $L^p(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of leaky-ReLU+ABS networks having $L^p$-UAP is exactly $w_{\min} = \max(d_x, d_y)$. \end{theorem} \subsection{$C$-UAP} $C$-UAP is more demanding than $L^p$-UAP. However, if the activation functions may include discontinuous functions, the same critical width $w^*_{\min}$ can be achieved. Following the encoder-memorizer-decoder approach in \cite{Park2021Minimum}, we replace the step function with the floor function and obtain the minimal width $w_{\min}=\max(d_x,2,d_y)$. \begin{lemma}\label{th:C-UAP_ReLU+FLOOR} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the function class $C(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of ReLU+FLOOR networks having $C$-UAP is exactly $w_{\min} = \max(d_x, 2, d_y)$. \end{lemma} Since ReLU and FLOOR are monotone functions, the $C$-UAP critical width $w^*_{\min}$ cannot be achieved for $C([0,1],\mathbb{R})$.
This remains the case even if we add ABS or SIN as an additional activation function. However, it is still possible with the UOE function (Definition \ref{def:UOE}). \begin{theorem}\label{th:C-UAP_UOE} The UOE networks with width $d_y$ have $C$-UAP for functions in $C([0,1],\mathbb{R}^{d_y})$. \end{theorem} \begin{corollary}\label{th:C-UAP_ReLU+FLOOR+UOE} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the continuous function class $C(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of UOE+FLOOR networks having $C$-UAP is exactly $w_{\min} = \max(d_x, d_y)$. \end{corollary} \section{Approximation in dimension one ($N=d_x=d_y=d=1$)} \label{sec:case_of_d=1} In this section, we consider one-dimensional functions and neural networks with a width of one. In this case, the expressive power of ReLU networks is extremely poor. Therefore, we consider the leaky-ReLU activation $\sigma_\alpha(x)$ with a fixed parameter $\alpha\in(0,1)$. Note that leaky-ReLU is strictly monotone, and \cite{Duan2022Vanilla} proved that any monotone function in $C([0,1],\mathbb{R})$ can be uniformly approximated by leaky-ReLU networks with width one. This is useful for our construction to approximate nonmonotone functions. Since the composition of monotone functions is again monotone, we need to add a nonmonotone activation function in order to approximate nonmonotone functions. Let us consider simple nonmonotone functions, such as $|x|$ or $\sin(x)$. We show that leaky-ReLU+ABS or leaky-ReLU+SIN networks can approximate any continuous function $f^*(x)$ under the $L^p$ norm.
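As noted above, a width-one leaky-ReLU network is a composition of scalar monotone maps (affine maps and $\sigma_\alpha$), hence monotone; this obstruction is easy to check numerically (a sketch with randomly chosen, purely illustrative parameters):

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    return np.maximum(x, alpha * x)

def width_one_net(x, params, alpha=0.1):
    """A width-one leaky-ReLU network: alternate scalar affine maps
    w*x + b with the activation, ending with a final affine map."""
    h = np.asarray(x, dtype=float)
    for w, b in params[:-1]:
        h = leaky_relu(w * h + b, alpha)
    w, b = params[-1]
    return w * h + b

rng = np.random.default_rng(1)
params = [(rng.standard_normal(), rng.standard_normal()) for _ in range(6)]
ys = width_one_net(np.linspace(0.0, 1.0, 1001), params)
# Monotonicity: all increments on the grid share one sign.
diffs = np.diff(ys)
is_monotone = bool(np.all(diffs >= 0) or np.all(diffs <= 0))
```

Whatever the parameters, `is_monotone` comes out `True`, which is why a nonmonotone activation must be added in dimension one.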
The idea, shown in Figure \ref{fig:alpha_profile}, is that the target function $f^*(x)$ can be uniformly approximated by a polynomial $p(x)$, which can be represented as the composition $$g \circ u(x) =p(x) \approx f^*(x) .$$ Here, the outer function $g(x)$ is any continuous function whose values at its extrema match those of $p(x)$, and the inner function $u(x)$ is monotonically increasing and adjusts the locations of the extrema (see Figure \ref{fig:alpha_profile}). Since polynomials have a finite number of extrema, the inner function $u(x)$ is piecewise continuous. \begin{figure} \caption{Example of approximating/representing a polynomial by the composition of a monotonically increasing function $u(x)$ and a nonmonotone function $g(x)$. (a) matching only the ordering of the extremum values, (b) matching the values as well. } \label{fig:alpha_profile} \end{figure} For $L^p$-UAP, the approximation is allowed to have a large deviation on a small interval; therefore, the extrema need not be matched exactly (a small error is allowed). For example, we can choose $g(x)$ to be the sine function or a sawtooth function (which can be approximated by ABS networks), and $u(x)$ to be a leaky-ReLU network approximating $g^{-1}\circ p(x)$ on each monotone interval of $p$. Figure \ref{fig:alpha_profile}(a) shows an example of the composition. For $C$-UAP, we must match the extrema while keeping the error small. To achieve this aim, we introduce the UOE functions. \begin{definition}[Universal ordering of extrema (UOE) functions]\label{def:UOE} A UOE function is a continuous function in $C(\mathbb{R},\mathbb{R})$ such that every possible ordering of values at finitely many extrema can be found among the extrema of the function. \end{definition} There are infinitely many UOE functions. Here, we give an example, as shown in Figure~\ref{fig:Activation_UOE}.
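This example can be sketched numerically. The sequence $\{o_i\}$ concatenates all permutations of $\{1,\dots,k\}$ for $k=2,3,\dots$, as in the formal definition given next; the value of $\rho$ on $[0,1)$, which the piecewise formula leaves open, is filled in here by linear interpolation (our own assumption):

```python
from itertools import count, permutations
import math

def uoe_sequence(n):
    """First n terms of {o_i}: all permutations of (1, ..., k) for
    k = 2, 3, ..., concatenated in lexicographic order."""
    seq = []
    for k in count(2):
        for perm in permutations(range(1, k + 1)):
            seq.extend(perm)
            if len(seq) >= n:
                return seq[:n]

def rho(x, o):
    """Piecewise-linear UOE function: x/4 for x <= 0, and linear
    interpolation through the points (i, o_i) for x >= 1."""
    if x <= 0:
        return x / 4
    i = math.floor(x)
    if i == 0:  # gap [0, 1): interpolate from rho(0) = 0 to o_1 (assumption)
        return x * o[0]
    return o[i - 1] + (x - i) * (o[i] - o[i - 1])

o = uoe_sequence(64)  # starts 1,2, 2,1, 1,2,3, 1,3,2, 2,1,3, ...
```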
This UOE function $\rho(x)$ is defined by a sequence $\{o_i\}_{i=1}^\infty$, \begin{align}\label{eq:UOE} \rho(x) = \left\{ \begin{array}{lll} x/4, && x \le 0, \\ o_i + (x-i)(o_{i+1}-o_i), && x \in [i,i+1), \end{array} \right. \end{align} where $ \{o_i\}_{i=1}^\infty = ( \underline{1,2}, \underline{2,1}, \underline{1,2,3}, \underline{1,3,2}, \underline{2,1,3}, \underline{2,3,1}, \underline{3,1,2}, \underline{3,2,1}, \underline{1,2,3,4},...) $ is the concatenation of all permutations of $\{1,\dots,k\}$ for $k=2,3,\dots$. Throughout this paper, UOE refers to this particular function $\rho$. Since the UOE function $\rho(x)$ can represent leaky-ReLU $\sigma_{1/4}$ on any finite interval, the UOE networks can uniformly approximate any monotone function. \begin{figure} \caption{An example of the UOE function $\rho(x)$, which has an infinite number of pieces.} \label{fig:Activation_UOE} \end{figure} To establish the $C$-UAP of UOE networks, we only need to construct a continuous function $g(x)$ matching the extrema of $p(x)$ (see Figure~\ref{fig:alpha_profile}(b)); that is, we construct $g(x)$ as the composition $\tilde u \circ \rho(x)$, where $\tilde u(x)$ is a monotone and continuous function. This is possible since the UOE function realizes every ordering of extremum values. The following lemma summarizes the approximation of one-dimensional functions. As a consequence, Theorem \ref{th:C-UAP_UOE} holds, since functions in $C([0,1],\mathbb{R}^{d_y})$ can be regarded as $d_y$ one-dimensional functions. \begin{lemma}\label{th:1D_UAP} For any function $f^*(x) \in C[0,1]$ and $\varepsilon>0$, there is a leaky-ReLU+ABS (or leaky-ReLU+SIN) network with width one and depth $L$ such that $ \int_0^1 |f^*(x)-f_L(x)|^p dx < \varepsilon^p. $ There is also a leaky-ReLU+UOE network with width one and depth $L$ such that $ |f^*(x)-f_L(x)| < \varepsilon, \forall x\in [0,1].
$ \end{lemma} \section{Connection to the neural ODEs ($N=d_x=d_y=d\ge2$)} \label{sec:case_of_N=d} Now, we turn to the high-dimensional case and connect feed-forward neural networks to neural ODEs. To build this connection, we assume that the input and output have the same dimension, $d_x=d_y=d$. Consider the following neural ODE with one-hidden-layer fields: \begin{align}\label{eq:neural_ODE} \left\{ \begin{aligned} &\dot{x}(t) = v(x(t),t) := A(t)\tanh(W(t)x(t)+b(t)), t\in(0,\tau),\\ &x(0)=x_0, \end{aligned} \right. \end{align} where $x, x_0 \in \mathbb{R}^d$ and the time-dependent parameters $(A,W,b)\in \mathbb{R}^{d\times d}\times \mathbb{R}^{d\times d} \times \mathbb{R}^d$ are piecewise constant functions of $t$. The flow map is denoted by $\phi^\tau(\cdot)$, which maps $x_0$ to $x(\tau)$. According to the approximation results of neural ODEs (see \cite{Li2022Deep, Tabuada2020Universal, Ruiz-Balet2021Neural} for examples), we have the following lemma. \begin{lemma} [Special case of \cite{Li2022Deep} ] \label{th:ODE_Lp_approximation_C} Let $d \ge 2$. Then, for any continuous function $f^*:\mathbb{R}^d \rightarrow \mathbb{R}^d$, any compact set $\mathcal{K} \subset \mathbb{R}^d$, and any $\varepsilon >0$, there exist a time $\tau \in \mathbb{R}^+$ and a piecewise constant input $(A,W,b):[0,\tau]\rightarrow \mathbb{R}^{d\times d}\times \mathbb{R}^{d\times d}\times \mathbb{R}^d$ such that the flow map $\phi^{\tau}$ associated with the neural ODE (\ref{eq:neural_ODE}) satisfies $||f^*-\phi^{\tau}||_{L^p(\mathcal{K})} \leq \varepsilon$. \end{lemma} Next, we consider the approximation of the flow map associated with (\ref{eq:neural_ODE}) by neural networks. Recently, \cite{Duan2022Vanilla} found that leaky-ReLU networks can perform such approximations.
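For intuition, the flow map $\phi^\tau$ of (\ref{eq:neural_ODE}) can be simulated with a simple forward-Euler discretization (a sketch only; the dimension, number of parameter pieces, and step count are arbitrary illustrative choices):

```python
import numpy as np

def flow_map(x0, params, tau, steps_per_piece=200):
    """Forward-Euler approximation of x0 -> x(tau) for
    x'(t) = A(t) tanh(W(t) x(t) + b(t)), where the piecewise-constant
    parameters are given as a list of (A, W, b) triples, each active
    on an equal share of [0, tau]."""
    x = np.asarray(x0, dtype=float)
    dt = tau / (len(params) * steps_per_piece)
    for A, W, b in params:
        for _ in range(steps_per_piece):
            x = x + dt * (A @ np.tanh(W @ x + b))
    return x

d = 2
rng = np.random.default_rng(2)
params = [(rng.standard_normal((d, d)), rng.standard_normal((d, d)),
           rng.standard_normal(d)) for _ in range(3)]
x_tau = flow_map(np.array([0.3, -0.7]), params, tau=1.0)
```

Because the right-hand side is bounded, each Euler step is a small perturbation of the identity; this is the property the splitting construction below exploits.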
\begin{lemma}[Theorem 2.2 in \cite{Duan2022Vanilla}]\label{th:leaky_ReLU_approximate_phi} If the parameters $(A,W,b)$ in (\ref{eq:neural_ODE}) are piecewise constant, then for any compact set $\mathcal{K}$ and any $\varepsilon>0$, there is a leaky-ReLU network $f_L(x)$ with width $d$ and depth $L$ such that \begin{align} \|\phi^\tau(x) - f_L(x)\| \le \varepsilon, \forall x \in \mathcal{K}. \end{align} \end{lemma} Combining these two lemmas, one can directly prove the following corollary, which is part of our Theorem \ref{th:main_LpUAP_leaky_ReLU}. \begin{corollary}\label{th:main_LpUAP_leaky_ReLU_d} Let $\mathcal{K} \subset \mathbb{R}^{d}$ be a compact set and $d\ge 2$; then, for the function class $L^p(\mathcal{K},\mathbb{R}^{d})$, the leaky-ReLU networks with width $d$ have $L^p$-UAP. \end{corollary} Here, we summarize the main ideas of this result. Let us start with the discretization of the ODE by the splitting approach \change{(see \cite{McLachlan2002Splitting} for example). Consider the splitting of (\ref{eq:neural_ODE}) with $v(x,t) = \sum_{i,j} v^{(j)}_{i}(x,t) e_j$, where $v^{(j)}_{i}(x,t) = A_{ji}(t) \tanh(W_{i,:}(t)x+b_i(t))$ is a scalar function and $e_j$ is the $j$-th axis unit vector. Then, for a given time step $\Delta t = \tau/K$ (with $K$ large enough), the splitting method gives the following iteration for $x_k$, which approximates $\phi^{k\Delta t}(x_0)$: \begin{align} x_{k+1} = T_k^{(d,d)} \circ \dots \circ T_k^{(1,2)} \circ T_k^{(1,1)} x_k, \end{align} where the map $T_k^{(i,j)}: x \to y$ is defined as \begin{align} \left\{ \begin{aligned} & y^{(l)} = x^{(l)} , l \neq j, \\ & y^{(j)} = x^{(j)} + \Delta t v^{(j)}_{i}(x,k\Delta t) = x^{(j)} + a \Delta t \tanh(w x + \beta). \end{aligned} \right. \end{align} Here, the superscript in $x^{(l)}$ denotes the $l$-th coordinate of $x$, and $a=A_{ji}$, $w=W_{i,:}$, and $\beta=b_i$ take their values at $t=k\Delta t$.
Note that the scalar functions $\tanh(\xi)$ and $\xi + a\Delta t \tanh(\xi)$ are monotone with respect to $\xi$ when $\Delta t$ is small enough. This allows us to construct leaky-ReLU networks with width $d$ that approximate each map $T_k^{(i,j)}$ and hence the flow map, $\phi^\tau(x_0) \approx x_K$. } Note that Lemma \ref{th:leaky_ReLU_approximate_phi} holds for all dimensions, while Lemma \ref{th:ODE_Lp_approximation_C} holds only for dimensions larger than one. This is because flow maps are orientation-preserving diffeomorphisms, and such maps can approximate continuous functions only in dimensions larger than one; see \cite{Brenier2003Approximation}. The approximation is based on control theory, where the flow map can be adjusted to match any finite set of input-output pairs; this matching fails in dimension one. However, the case of dimension one was already treated in Section \ref{sec:case_of_d=1}. \section{Achieving the minimal width} \label{sec:minimal_width} Now, we turn to the cases where the input and output dimensions need not be equal. \subsection{Universal lower bound $w^*_{\min}=\max(d_x,d_y)$} Here, we sketch the proof of Lemma \ref{th:universal_w_min}, which states that $w^*_{\min}$ is a universal lower bound over all activation functions. Parts of Lemma \ref{th:universal_w_min} have been demonstrated in many papers, such as \cite{Park2021Minimum}. Here, we give a proof via two counterexamples that are simple and easy to understand from a topological perspective. It contains two cases: 1) there is a function $f^*$ that cannot be approximated by networks with width $w \le d_x-1$; 2) there is a function $f^*$ that cannot be approximated by networks with width $w \le d_y -1$. Figure~\ref{fig:example_for_TH1}(a)-(b) shows the counterexamples that illustrate the essence of the proof. For the first case, $w \le d_x-1$, we show that $f^*(x)=\|x\|^2, x \in \mathcal{K} = [-2,2]^{d_x},$ serves as a counterexample; see Figure~\ref{fig:example_for_TH1}(a).
In fact, we can relax the networks to functions of the form $f(x) = \phi(Wx+b)$, where $Wx+b$ is an affine map from $\mathbb{R}^{d_x}$ to $\mathbb{R}^{d_x-1}$ and $\phi$ could be any function. A consequence is that there exists a direction $v$ (a unit vector satisfying $Wv=0$) such that $f(x) = f(x+\lambda v)$ for all $\lambda \in \mathbb{R}$. Then, considering the sets $A=\{x : \|x\|\le 0.1\}$ and $B=\{x : \|x-v\|\le 0.1\}$, we have \begin{align*} \int_\mathcal{K} |f(x) - f^*(x)| dx &\ge \int_A |f(x) - f^*(x)| dx + \int_B |f(x) - f^*(x)| dx \\ &\ge \int_A (|f(x) - f^*(x)| +|f(x+v) - f^*(x+v)|) dx\\ &\ge \int_A (|f^*(x) - f^*(x+v)|) dx \ge 0.8 |A|. \end{align*} Since the volume of $A$ is a fixed positive number, the inequality implies that even the $L^1$ approximation of $f^*$ is impossible; approximation in the $L^p$ norm or the uniform norm is likewise impossible. For the second case, $w \le d_y -1$, we take $f^*$ to be the parametrized curve from $\textbf{0}$ to $\textbf{1}$ along the edges of the cube; see Figure~\ref{fig:example_for_TH1}(b). Relaxing the networks to functions of the form $f(x) = W \psi(x) + b$, where $\psi(x)$ could be any function, the range of $f$ lies in a hyperplane while $f^*$ has a positive distance to any hyperplane, so the target $f^*$ cannot be approximated. \subsection{Achieving $w^*_{\min}$ for $L^p$-UAP} Now, we show that the lower bound $w_{\min}^*$ for $L^p$-UAP can be achieved by leaky-ReLU+ABS networks. Without loss of generality, we consider $\mathcal{K}=[0,1]^{d_x}$. For any function $f^*$ in $L^p([0,1]^{d_x},\mathbb{R}^{d_y})$, we can extend it to a function $\tilde f^*$ in $L^p([0,1]^{d},\mathbb{R}^{d})$ by zero padding, where $d=\max(d_x,d_y)=w^*_{\min}$. When $d_x>1$ or $d_y>1$, the $L^p$-UAP for leaky-ReLU networks with width $w^*_{\min}$ is obtained by using Corollary \ref{th:main_LpUAP_leaky_ReLU_d}.
Recall that by Lemma \ref{th:universal_w_min}, $w_{\min}^*$ is optimal, and we obtain our main result, Theorem \ref{th:main_LpUAP_leaky_ReLU}. Combining this with the case of $d_x=d_y=d=1$ in Section \ref{sec:case_of_d=1}, where the absolute value function ABS is added as an additional activation function, we obtain Theorem \ref{th:main_LpUAP_leaky_ReLU+ABS}. \subsection{Achieving $w^*_{\min}$ for $C$-UAP} Here, we use the encoder-memorizer-decoder approach proposed in \cite{Park2021Minimum} to achieve the minimum width. Without loss of generality, we consider the function class $C([0,1]^{d_x},[0,1]^{d_y})$. \change{The encoder-memorizer-decoder approach} includes three parts: \begin{itemize} \item[1)] an \change{\bf encoder} that maps $[0,1]^{d_x}$ to $[0,1]$ by quantizing each coordinate of $x$ with a $K$-bit binary representation and concatenating the quantized coordinates into a single scalar value $\bar x$ having a $(d_x K)$-bit binary representation; \item[2)] a \change{\bf memorizer} that maps each codeword $\bar x$ to its target codeword $\bar y$; \item[3)] a \change{\bf decoder} that maps $\bar y$ to the quantized target, which approximates the true target. \end{itemize} As illustrated in Figure \ref{fig:example_for_TH1}(c), using the floor function instead of a step function, one can construct the encoder by FLOOR networks with width $d_x$ and the decoder by FLOOR networks with width $d_y$. The memorizer is a one-dimensional scalar function that can be approximated by ReLU networks with a width of two or by UOE networks with a width of one. Therefore, the minimal widths $\max(d_x,2,d_y)$ and $\max(d_x,d_y)$ are obtained, which prove Lemma \ref{th:C-UAP_ReLU+FLOOR} and Corollary \ref{th:C-UAP_ReLU+FLOOR+UOE}, respectively. \begin{figure} \caption{ (a)(b) Counterexamples for proving Lemma \ref{th:universal_w_min}. (a) Points $A$ and $B$ on a level set of the network $f(x)$; $f(A)=f(B)$, but $f^*(A)-f^*(B)$ is not small.
(b) The curve from $\textbf{0}$ to $\textbf{1}$ along the edges of the cube has a positive distance to any hyperplane. (c) Illustration of the \change{encoder-memorizer-decoder} scheme for $C$-UAP by an example where $d_x=d_y=3$, with 4 bits for the input and 5 bits for the output.} \label{fig:example_for_TH1} \end{figure} \subsection{Effect of the activation functions} Here, we emphasize that our universal bound on the minimal width holds for arbitrary activation functions. However, it cannot always be achieved when the activation functions are fixed. Here, we discuss the case of monotone activation functions. If the activation functions are strictly monotone and continuous (such as leaky-ReLU), a width of at least $d_x+1$ is needed for $C$-UAP. This can be understood through topology. Leaky-ReLU and nonsingular affine maps, together with their inverses, are continuous, hence homeomorphisms. Since compositions of homeomorphisms are also homeomorphisms, we have the following proposition: if $N=d_x=d_y=d$ and the weight matrices in a leaky-ReLU network are nonsingular, then the input-output map is a homeomorphism. Note that singular matrices can be approximated by nonsingular matrices; therefore, we can restrict the weight matrices in neural networks to the nonsingular case. When $d_x \ge d_y$, we can reformulate the leaky-ReLU network as $f_L(x) = W_{L+1} \psi(x) + b_{L+1}$, where $\psi(x)$ is a homeomorphism. Note that considering the case where $d_y=1$ is sufficient, according to \cite{Hanin2018Approximating, Johnson2019Deep}, who proved that networks of width $d_x$ cannot approximate any scalar function with a level set containing a bounded path component. This can be easily understood from the perspective of topology. An example is the function $f^*(x) = \|x\|^2, x \in \mathcal{K} = [-2,2]^{d_x}$, shown in Figure \ref{fig:UAP_demo_circ}. \begin{figure} \caption{Illustrating the possibility of UAP when $N=d_x$.
(a) Plot of $f^*(x) = \|x\|^2$ and its contour at $\|x\|=1$. (b) The original point $P$ is an inner point of the unit ball, while its image is a boundary point, which is impossible for homeomorphisms. (c) Any homeomorphism approximating $\|x\|^2$ with error less than $\varepsilon$ ($=0.1$, for example) on $\Gamma$ must have error larger than $1-\varepsilon$ ($=0.9$) at $P$. (d) Approximating $f^*$ in $L^p$ is possible by leaving out a small region. } \label{fig:UAP_demo_circ} \end{figure} For the case where $d_x < d_y$, we present a simple example in Figure~\ref{fig:UAP_demo_curve}. The curve `4', corresponding to a continuous function from $[0,1]\subset \mathbb{R}$ to $\mathbb{R}^2$, cannot be uniformly approximated. However, the $L^p$ approximation is still possible. \begin{figure} \caption{Illustrating the possibility of $C$-UAP when $d_x \le d_y$. The curve in (a) is homeomorphic to the interval $[0,1]$, while the curve `4' in (b) is not and cannot be approximated uniformly by homeomorphisms. The $L^p$ approximation is possible via (a). } \label{fig:UAP_demo_curve} \end{figure} \section{Conclusion} \label{sec:conclusion} Let us summarize the main results and implications of this paper. After giving the universal lower bound on the minimum width for the UAP, we proved that the bound is optimal by constructing neural networks with suitable activation functions. For the $L^p$-UAP, our construction achieving the critical width was based on the approximation power of neural ODEs, which bridges feed-forward networks and the flow maps of ODEs. This allowed us to understand the UAP of FNNs through topology. Moreover, we obtained not only the lower bound but also the matching upper bound. For the $C$-UAP, our construction was based on the encoder-memorizer-decoder approach in \cite{Park2021Minimum}, where the activation set contains the discontinuous floor function $\floor{x}$.
It is still an open question whether the critical width can be achieved by continuous activation functions. \cite{Johnson2019Deep} proved that continuous and monotone activation functions need a width of at least $d_x+1$. This implies that nonmonotone activation functions are needed. By using the UOE activation, we achieved the critical width for the case of $d_x=1$. It would be of interest to study the case of $d_x \ge 2$ in future research. We remark that our UAP results are for functions on a compact domain. Examining the critical width of the UAP for functions on unbounded domains is desirable for future research. \begin{abstract} The universal approximation property (UAP) of neural networks is fundamental for deep learning, and it is well known that wide neural networks are universal approximators of continuous functions within both the $L^p$ norm and the continuous/uniform norm. However, the exact minimum width, $w_{\min}$, for the UAP has not been studied thoroughly. Recently, using an encoder-memorizer-decoder scheme, \citet{Park2021Minimum} found that $w_{\min} = \max(d_x+1,d_y)$ for both the $L^p$-UAP of ReLU networks and the $C$-UAP of ReLU+STEP networks, where $d_x,d_y$ are the input and output dimensions, respectively. In this paper, we consider neural networks with an arbitrary set of activation functions. We prove that both $C$-UAP and $L^p$-UAP for functions on compact domains share a universal lower bound on the minimal width; that is, $w^*_{\min} = \max(d_x,d_y)$. In particular, the critical width, $w^*_{\min}$, for $L^p$-UAP can be achieved by leaky-ReLU networks, provided that the input or output dimension is larger than one. Our construction is based on the approximation power of neural ordinary differential equations and the ability to approximate flow maps by neural networks. The cases of nonmonotone or discontinuous activation functions and of dimension one are also discussed.
\end{abstract} \section{Introduction} The study of the universal approximation property (UAP) of neural networks is fundamental for deep learning and has a long history. Early studies, such as \cite{Cybenkot1989Approximation, Hornik1989Multilayer, Leshno1993Multilayer}, proved that wide neural networks (even shallow ones) are universal approximators for continuous functions within both the $L^p$ norm ($1\le p < \infty$) and the continuous/uniform norm. Further research, such as \cite{Telgarsky2016Benefits}, indicated that increasing the depth can improve the expressive power of neural networks. If the budget of neurons is fixed, deeper neural networks have better expressive power \cite{Yarotsky2019phase, Shen2022Optimal}. However, this pattern does not hold if the width is below a critical threshold $w_{\min}$. \cite{Lu2017Expressive} first showed that ReLU networks have the UAP for $L^1$ functions from $\mathbb{R}^{d_x}$ to $\mathbb{R}$ if the width is larger than $d_x+4$, and that the UAP disappears if the width is less than $d_x$. Further research \cite{Hanin2018Approximating, Kidger2020Universal, Park2021Minimum} improved the minimum width bound for ReLU networks. Particularly, \citet{Park2021Minimum} revealed that the minimum width is $w_{\min} = \max(d_x+1,d_y)$ for the $L^p(\mathbb{R}^{d_x},\mathbb{R}^{d_y})$ UAP of ReLU networks and for the $C(\mathcal{K},\mathbb{R}^{d_y})$ UAP of ReLU+STEP networks, where $\mathcal{K}$ is a compact domain in $\mathbb{R}^{d_x}$. For general activation functions, the exact minimum width $w_{\min}$ for the UAP is less studied. \cite{Johnson2019Deep} considered uniformly continuous activation functions that can be approximated by a sequence of one-to-one functions and gave a lower bound $w_{\min} \ge d_x+1$ for $C$-UAP \change{(i.e., the UAP for $C(\mathcal{K},\mathbb{R}^{d_y})$)}.
\cite{Kidger2020Universal} considered continuous nonpolynomial activation functions and gave an upper bound $w_{\min} \le d_x+d_y+1$ for $C$-UAP. \cite{Park2021Minimum} improved the bound for $L^p$-UAP \change{(i.e., the UAP for $L^p(\mathcal{K},\mathbb{R}^{d_y})$)} to $w_{\min} \le \max(d_x+2,d_y+1)$. A summary of known upper/lower bounds on the minimum width for the UAP can be found in \cite{Park2021Minimum}. In this paper, we consider neural networks with arbitrary activation functions. We give a universal lower bound, $w_{\min} \ge w^*_{\min} = \max(d_x,d_y)$, for approximating functions from a compact domain $\mathcal{K} \subset \mathbb{R}^{d_x}$ to $\mathbb{R}^{d_y}$ in the $L^p$ norm or the continuous norm. Furthermore, we show that the critical width $w^*_{\min}$ can be achieved by many neural networks, as listed in Table~\ref{tab:main}. Surprisingly, the leaky-ReLU networks achieve the critical width for the $L^p$-UAP provided that the input or output dimension is larger than one. This result relies on a novel construction scheme proposed in this paper based on the approximation power of neural ordinary differential equations (ODEs) and the ability to approximate flow maps by neural networks. \begin{table}[htp!] \caption{Summary of the known minimum widths of feed-forward neural networks that have the universal approximation property.} \begin{center} \begin{tabular}{llll} \multicolumn{1}{c}{\bf Functions} &\multicolumn{1}{c}{\bf Activation} &\multicolumn{1}{c}{\bf Minimum width} &\multicolumn{1}{c}{\bf References}\\ \hline $C(\mathcal{K},\mathbb{R})$ &ReLU & $w_{\min} = d_x+1$ & \cite{Hanin2018Approximating} \\ $L^p(\mathbb{R}^{d_x},\mathbb{R}^{d_y})$ &ReLU & $w_{\min} = \max(d_x+1,d_y)$ & \cite{Park2021Minimum} \\ $C([0,1],\mathbb{R}^{2})$ &ReLU & $w_{\min} = 3=\max(d_x,d_y)+1$ & \cite{Park2021Minimum} \\ $C(\mathcal{K},\mathbb{R}^{d_y})$ &ReLU+STEP & $w_{\min} = \max(d_x+1,d_y)$ & \cite{Park2021Minimum} \\ $L^p(\mathcal{K},\mathbb{R}^{d_y})$ &Conti.
nonpoly$^\ddagger$ & $w_{\min} \le \max(d_x+2,d_y+1)$ & \cite{Park2021Minimum} \\ \hline $L^p(\mathcal{K},\mathbb{R}^{d_y})$ &Arbitrary & $w_{\min} \ge \max(d_x,d_y)=:w_{\min}^{*}$ & {\bf Ours} (Lemma \ref{th:universal_w_min}) \\ & Leaky-ReLU & $w_{\min} = \max(d_x,d_y,2)$ & {\bf Ours} (Theorem \ref{th:main_LpUAP_leaky_ReLU})\\ & Leaky-ReLU+ABS & $w_{\min} = \max(d_x,d_y)$ & {\bf Ours} (Theorem \ref{th:main_LpUAP_leaky_ReLU+ABS})\\ \hline $C(\mathcal{K},\mathbb{R}^{d_y})$ &Arbitrary & $w_{\min} \ge \max(d_x,d_y)=:w_{\min}^{*}$ & {\bf Ours} (Lemma \ref{th:universal_w_min}) \\ & ReLU+FLOOR & $w_{\min} = \max(d_x,d_y,2)$ & {\bf Ours} (Lemma \ref{th:C-UAP_ReLU+FLOOR})\\ & UOE$^\dagger$+FLOOR & $w_{\min} = \max(d_x,d_y)$ & {\bf Ours} (Corollary \ref{th:C-UAP_ReLU+FLOOR+UOE})\\ $C([0,1],\mathbb{R}^{d_y})$ & UOE$^\dagger$ & $w_{\min} = d_y$ & {\bf Ours} (Theorem \ref{th:C-UAP_UOE})\\ \hline \multicolumn{4}{l}{\change{\small $\ddagger$ A continuous nonpolynomial $\rho$ that is continuously differentiable at some $z$ with $\rho'(z) \neq 0$.}}\\ \multicolumn{4}{l}{\small $\dagger$ UOE means a function having a \emph{universal ordering of extrema}; see Definition \ref{def:UOE}.} \end{tabular} \end{center} \label{tab:main} \end{table} \subsection{Contributions} \begin{itemize} \item[1)] We obtained the universal lower bound $w^*_{\min}$ on the width of feed-forward neural networks (FNNs) that have the universal approximation property. \item[2)] We achieved the critical width $w^*_{\min}$ with leaky-ReLU+ABS networks and UOE+FLOOR networks. \change{(UOE is a continuous function which has a \emph{universal ordering of extrema}. It is introduced to handle $C$-UAP for one-dimensional functions; see Definition \ref{def:UOE}.)} \item[3)] We proposed a novel construction scheme from a differential geometry perspective that could deepen our understanding of the UAP through topology.
\end{itemize} \subsection{Related work} To obtain the exact minimum width, one must verify both the lower and upper bounds. Generally, upper bounds are obtained by construction, while lower bounds are obtained by counterexamples. \textbf{Lower bounds.} For ReLU networks, \cite{Lu2017Expressive} exploited the insufficiency of the width relative to the input dimension and proved a lower bound $w_{\min} \ge d_x$ for $L^1$-UAP; \cite{Hanin2018Approximating} considered the compactness of level sets and proved a lower bound $w_{\min} \ge d_x+1$ for $C$-UAP. For monotone activation functions or their variants, \cite{Johnson2019Deep} noticed that functions represented by networks with width $d_x$ have unbounded level sets, and \cite{Beise2020Expressiveness} noticed that such functions on a compact domain $\mathcal{K}$ take their maximum value on the boundary $\partial \mathcal{K}$. These properties allow one to construct counterexamples and give a lower bound $w_{\min} \ge d_x+1$ for $C$-UAP. For general activation functions, \cite{Park2021Minimum} used the volume of a simplex in the output space and gave a lower bound $w_{\min} \ge d_y$ for either $L^p$-UAP or $C$-UAP. Our universal lower bound, $w_{\min} \ge \max(d_x,d_y)$, is based on the insufficient dimension of both the input and output spaces, which combines the ideas of the references above. \textbf{Upper bounds.} For ReLU networks, \cite{Lu2017Expressive} explicitly constructed a width-$(d_x+4)$ network by concatenating a series of blocks so that the whole network can approximate any scalar function in $L^1(\mathbb{R}^{d_x},\mathbb{R})$ to any given accuracy.
\cite{Hanin2018Approximating,Hanin2019Universal} constructed a width-$(d_x+d_y)$ network using the max-min string approach to achieve $C$-UAP for functions on compact domains; \cite{Park2021Minimum} proposed an encoder-memorizer-decoder scheme that achieves the optimal bounds $w_{\min}=\max(d_x+1,d_y)$ of the UAP for $L^p(\mathbb{R}^{d_x}, \mathbb{R}^{d_y})$. For general activation functions, \cite{Kidger2020Universal} proposed a register model construction that gives an upper bound $w_{\min} \le d_x+d_y+1$ for $C$-UAP. Based on this result, \cite{Park2021Minimum} improved the upper bound to $w_{\min}\le \max(d_x+2,d_y+1)$ for $L^p$-UAP. In this paper, we adopt the encoder-memorizer-decoder scheme to calculate the universal critical width for $C$-UAP by ReLU+FLOOR activation functions. However, the floor function is discontinuous. For $L^p$-UAP, we reach the critical width by leaky-ReLU, which is a continuous network using a novel scheme based on the approximation power of neural ODEs. \textbf{ResNet and neural ODEs.} Although our original aim is the UAP for feed-forward neural networks, our construction is related to the neural ODEs and residual networks (ResNet, \cite{He2016Deep}), which include skipping connections. Many studies, such as \cite{E2017Proposal, Lu2018Finite, Chen2018Neural}, have emphasized that ResNet can be regarded as the Euler discretization of neural ODEs. The approximation power of ResNet and neural ODEs have also been examined by researchers. To list a few, \cite{Li2022Deep} gave a sufficient condition that covers most networks in practice so that the neural ODE/dynamic systems (without extra dimensions) process $L^p$-UAP for continuous functions, provided that the spatial dimension is larger than one; \cite{Ruiz-Balet2021Neural} obtained similar results focused on the case of one-hidden layer fields. 
\cite{Tabuada2020Universal} obtained the $C$-UAP for monotone functions, and for continuous functions it was obtained by adding one extra spatial dimension. Recently, \cite{Duan2022Vanilla} noticed that the FNN could also be a discretization of neural ODEs, which motivates us to construct networks achieving the critical width by inheriting the approximation power of neural ODEs. For the excluded dimension one, we design an approximation scheme with leaky-ReLU+ABS and UOE activation functions. \subsection{Organization} We formally state the main results and necessary notations in Section \ref{sec:main_results}. The proof ideas are given in Sections \ref{sec:case_of_d=1}, \ref{sec:case_of_N=d}, and \ref{sec:minimal_width}. In Section \ref{sec:case_of_d=1}, we consider the case where $N=d_x=d_y=1$, which is the basis for the high-dimensional cases. The construction is based on the properties of monotone functions. In Section \ref{sec:case_of_N=d}, we prove the case where $N=d_x=d_y \ge 2$. The construction is based on the approximation power of neural ODEs. In Section \ref{sec:minimal_width}, we consider the case where $d_x \neq d_y$ and discuss the case of more general activation functions. Finally, we conclude the paper in Section \ref{sec:conclusion}. All formal proofs of the results are presented in the Appendix. \section{Main results} \label{sec:main_results} In this paper, we consider the standard feed-forward neural network with $N$ neurons at each hidden layer. We say that a $\sigma$ network with depth $L$ is a function with inputs $x\in \mathbb{R}^{d_x}$ and outputs $y\in\mathbb{R}^{d_y}$ of the following form: \begin{align} \label{eq:FNN} y \equiv f_{L}(x) = W_{L+1} \sigma(W_L( \cdots \sigma( W_1 x + b_1) + \cdots)+ b_L) + b_{L+1}, \end{align} where $b_i$ are bias vectors, $W_i$ are weight matrices, and $\sigma(\cdot)$ is the activation function.
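As a concrete illustration of the form (\ref{eq:FNN}), the following pure-Python sketch evaluates such a network. The toy width-two ReLU example at the end, including all weights and biases, is purely illustrative and not taken from the paper.

```python
# Minimal sketch of the feed-forward network in Eq. (FNN):
#   y = W_{L+1} sigma(W_L(... sigma(W_1 x + b_1) ...) + b_L) + b_{L+1},
# with the same activation sigma applied at every hidden neuron.
# All concrete weights below are illustrative.

def affine(W, b, x):
    """Return W @ x + b, with W given as a list of rows."""
    return [sum(w * xj for w, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def forward(weights, biases, x, sigma):
    """Evaluate Eq. (FNN): L hidden (affine + sigma) layers, then a final affine map."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = [sigma(z) for z in affine(W, b, h)]
    return affine(weights[-1], biases[-1], h)

relu = lambda z: max(z, 0.0)

# Toy example: d_x = d_y = 1, width N = 2, depth L = 2.
weights = [
    [[1.0], [-1.0]],           # W_1: R^1 -> R^2
    [[1.0, 0.5], [0.5, 1.0]],  # W_2: R^2 -> R^2
    [[1.0, -1.0]],             # W_3: R^2 -> R^1
]
biases = [[0.0, 0.0], [0.0, 0.0], [0.0]]
y = forward(weights, biases, [2.0], relu)  # a single scalar output
```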
For the case of multiple activation functions, for instance, $\sigma_1$ and $\sigma_2$, we call $f_L$ a $\sigma_1$+$\sigma_2$ network. In this situation, the activation function of each neuron is either $\sigma_1$ or $\sigma_2$. In this paper, we consider arbitrary activation functions, while the following activation functions are emphasized: ReLU ($\max(x,0)$), leaky-ReLU ($\max(x,\alpha x)$, where $\alpha \in (0,1)$ is a fixed parameter), ABS ($|x|$), SIN ($\sin(x)$), STEP ($1_{x>0}$), FLOOR ($\floor{x}$) and UOE (\emph{universal ordering of extrema}, which will be defined later). \begin{lemma}\label{th:universal_w_min} For any compact domain $\mathcal{K} \subset \mathbb{R}^{d_x}$ and any finite set of activation functions $\{\sigma_i\}$, the $\{\sigma_i\}$ networks with width $w < w^*_{\min} \equiv \max(d_x, d_y)$ do not have the UAP for either $L^p(\mathcal{K},\mathbb{R}^{d_y})$ or $C(\mathcal{K},\mathbb{R}^{d_y})$. \end{lemma} \change{\bf $L^p$-UAP and $C$-UAP.} The lemma indicates that $w^*_{\min} \equiv \max(d_x, d_y)$ is a universal lower bound for the UAP in both $L^p(\mathcal{K},\mathbb{R}^{d_y})$ and $C(\mathcal{K},\mathbb{R}^{d_y})$. The main result of this paper illustrates that the minimal width $w^*_{\min}$ can be achieved. We consider the UAP for these two function classes, \emph{i.e.,} $L^p$-UAP and $C$-UAP, respectively. \change{Note that any compact domain is contained in a large cube, functions on the former can be extended to the latter, and the cube can be mapped to the unit cube by a linear function. This allows us to assume that $\mathcal{K}$ is the unit cube without loss of generality.} \subsection{$L^p$-UAP} \begin{theorem}\label{th:main_LpUAP_leaky_ReLU} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the function class $L^p(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of leaky-ReLU networks having $L^p$-UAP is exactly $w_{\min} = \max(d_x, d_y,2)$.
\end{theorem} The theorem indicates that leaky-ReLU networks achieve the critical width $w^*_{\min} = \max(d_x, d_y)$, except for the case of $d_x=d_y=1$. The idea is to consider the case where $d_x=d_y=d>1$ and let the network width equal $d$. According to the results of \cite{Duan2022Vanilla}, leaky-ReLU networks can approximate the flow map of neural ODEs. Thus, we can use the approximation power of neural ODEs to finish the proof. \cite{Li2022Deep} proved that many neural ODEs could approximate continuous functions in the $L^p$ norm. This is based on the fact that orientation-preserving diffeomorphisms can approximate continuous functions \cite{Brenier2003Approximation}. The exclusion of dimension one is because of the monotonicity of leaky-ReLU. When we add a nonmonotone activation function such as the absolute value function or sine function, the $L^p$-UAP at dimension one can be achieved. \begin{theorem}\label{th:main_LpUAP_leaky_ReLU+ABS} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the function class $L^p(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of leaky-ReLU+ABS networks having $L^p$-UAP is exactly $w_{\min} = \max(d_x, d_y)$. \end{theorem} \subsection{$C$-UAP} $C$-UAP is more demanding than $L^p$-UAP. However, if discontinuous activation functions are allowed, the same critical width $w^*_{\min}$ can be achieved. Following the encoder-memorizer-decoder approach in \cite{Park2021Minimum}, the step function is replaced by the floor function, and one can obtain the minimal width $w_{\min}=\max(d_x,2,d_y)$. \begin{lemma}\label{th:C-UAP_ReLU+FLOOR} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the function class $C(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of ReLU+FLOOR networks having $C$-UAP is exactly $w_{\min} = \max(d_x, 2, d_y)$. \end{lemma} Since ReLU and FLOOR are monotone functions, the $C$-UAP critical width $w^*_{\min}$ does not hold for $C([0,1],\mathbb{R})$.
This seems to be the case even if we add ABS or SIN as an additional activation function. However, it is still possible to use the UOE function (Definition \ref{def:UOE}). \begin{theorem}\label{th:C-UAP_UOE} The UOE networks with width $d_y$ have $C$-UAP for functions in $C([0,1],\mathbb{R}^{d_y})$. \end{theorem} \begin{corollary}\label{th:C-UAP_ReLU+FLOOR+UOE} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the continuous function class $C(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of UOE+FLOOR networks having $C$-UAP is exactly $w_{\min} = \max(d_x, d_y)$. \end{corollary} \section{Approximation in dimension one ($N=d_x=d_y=d=1$)} \label{sec:case_of_d=1} In this section, we consider one-dimensional functions and neural networks with a width of one. In this case, the expressive power of ReLU networks is extremely poor. Therefore, we consider the leaky-ReLU activation $\sigma_\alpha(x)$ with a fixed parameter $\alpha\in(0,1)$. Note that leaky-ReLU is strictly monotone, and it was proven by \cite{Duan2022Vanilla} that any monotone function in $C([0,1],\mathbb{R})$ can be uniformly approximated by leaky-ReLU networks with width one. This is useful for our construction to approximate nonmonotone functions. Since the composition of monotone functions is also a monotone function, to approximate nonmonotone functions we need to add a nonmonotone activation function. Let us consider simple nonmonotone functions, such as $|x|$ or $\sin(x)$. We show that leaky-ReLU+ABS or leaky-ReLU+SIN can approximate any continuous function $f^*(x)$ under the $L^p$ norm.
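The monotonicity obstruction just described is easy to check numerically: a width-one leaky-ReLU network is a composition of monotone scalar maps and is therefore itself monotone. A minimal sketch, with arbitrary illustrative parameters:

```python
# A width-one leaky-ReLU network composes scalar maps x -> leaky_relu(w*x + b);
# each map is monotone, so the whole network is monotone and cannot
# approximate a nonmonotone target such as x^2 uniformly well.
# The parameters below are illustrative.

def leaky_relu(z, alpha=0.2):
    return z if z > 0 else alpha * z

def width_one_net(x, params):
    """params: list of (w, b) pairs, one per layer."""
    for w, b in params:
        x = leaky_relu(w * x + b)
    return x

params = [(2.0, -1.0), (-0.7, 0.3), (1.5, 0.0)]
xs = [i / 100 for i in range(-100, 101)]
ys = [width_one_net(x, params) for x in xs]
# The output sequence is monotone (decreasing here, since one weight is negative).
monotone = (all(a >= b for a, b in zip(ys, ys[1:]))
            or all(a <= b for a, b in zip(ys, ys[1:])))
```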
The idea, shown in Figure \ref{fig:alpha_profile}, is that the target function $f^*(x)$ can be uniformly approximated by the polynomial $p(x)$, which can be represented as the composition $$g \circ u(x) =p(x) \approx f^*(x) .$$ Here, the outer function $g(x)$ is any continuous function whose values at its extrema match the values of $p(x)$ at its extrema, and the inner function $u(x)$ is monotonically increasing, which adjusts the location of the extrema (see Figure \ref{fig:alpha_profile}). Since polynomials have a finite number of extrema, the inner function $u(x)$ can be constructed piecewise, on finitely many intervals. \begin{figure} \caption{Example of approximating/representing a polynomial by the composition of a monotonically increasing function $u(x)$ and a nonmonotone function $g(x)$. (a) Only matching the ordering of the extrema values; (b) matching the values as well. } \label{fig:alpha_profile} \end{figure} For $L^p$-UAP, the approximation is allowed to have a large deviation on a small interval; therefore, the extrema need not be matched exactly (a small mismatch is allowed). For example, we can choose $g(x)$ as the sine function or the sawtooth function (which can be approximated by ABS networks), and $u(x)$ is a leaky-ReLU network approximating $g^{-1}\circ p(x)$ on each monotone interval of $p$. Figure \ref{fig:alpha_profile}(a) shows an example of the composition. For $C$-UAP, matching the extrema while keeping the error small is needed. To achieve this aim, we introduce the UOE functions. \begin{definition}[Universal ordering of extrema (UOE) functions]\label{def:UOE} A UOE function is a continuous function in $C(\mathbb{R},\mathbb{R})$ such that any (finite number of) possible ordering(s) of values at the (finite) extrema can be found in the extrema of the function. \end{definition} There are an infinite number of UOE functions. Here, we give an example, as shown in Figure~\ref{fig:Activation_UOE}.
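Programmatically, such an example can be generated by concatenating all permutations of $(1,\dots,n)$ for increasing $n$ and using them as extrema values of a piecewise-linear function, mirroring the construction given next. In the sketch below, the linear interpolation on $(0,1)$ from an assumed value $0$ at $x=0$ is our own convention, not prescribed by the text.

```python
from itertools import permutations

def uoe_sequence(max_n):
    """Concatenate all permutations of (1..n) for n = 2..max_n:
    (1,2, 2,1, 1,2,3, 1,3,2, ...)."""
    seq = []
    for n in range(2, max_n + 1):
        for p in permutations(range(1, n + 1)):
            seq.extend(p)
    return seq

def rho(x, seq):
    """Piecewise-linear UOE-style function: x/4 for x <= 0, and linear
    interpolation between o_i and o_{i+1} on [i, i+1); the value o_0 = 0
    covering (0, 1) is an assumption of this sketch."""
    if x <= 0:
        return x / 4
    o = [0] + seq  # shift so that o[i] is the i-th extremum value (1-based)
    i = int(x)
    return o[i] + (x - i) * (o[i + 1] - o[i])

seq = uoe_sequence(4)  # enough extrema values for small examples
```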
This UOE function $\rho(x)$ is defined by a sequence $\{o_i\}_{i=1}^\infty$, \begin{align}\label{eq:UOE} \rho(x) = \left\{ \begin{array}{lll} x/4, && x \le 0, \\ o_i + (x-i)(o_{i+1}-o_i), && x \in [i,i+1), \end{array} \right. \end{align} where $ \{o_i\}_{i=1}^\infty = ( \underline{1,2}, \underline{2,1}, \underline{1,2,3}, \underline{1,3,2}, \underline{2,1,3}, \underline{2,3,1}, \underline{3,1,2}, \underline{3,2,1}, \underline{1,2,3,4},...) $ is the concatenation of all permutations of $(1,\dots,n)$ for $n=2,3,\dots$. Throughout this paper, the term UOE refers to this particular function $\rho$. Since the UOE function $\rho(x)$ can represent leaky-ReLU $\sigma_{1/4}$ on any finite interval, this implies that UOE networks can uniformly approximate any monotone function. \begin{figure} \caption{An example of the UOE function $\rho(x)$, which has an infinite number of pieces.} \label{fig:Activation_UOE} \end{figure} To illustrate the $C$-UAP of UOE networks, we only need to construct a continuous function $g(x)$ matching the extrema of $p(x)$ (see Figure~\ref{fig:alpha_profile}(b)). That is, we construct $g(x)$ as the composition $\tilde u \circ \rho(x)$, where $\tilde u(x)$ is a monotone and continuous function. This is possible since the UOE function realizes any ordering of the extrema. The following lemma summarizes the approximation of one-dimensional functions. As a consequence, Theorem \ref{th:C-UAP_UOE} holds since functions in $C([0,1],\mathbb{R}^{d_y})$ can be regarded as $d_y$ one-dimensional functions. \begin{lemma}\label{th:1D_UAP} For any function $f^*(x) \in C[0,1]$ and $\varepsilon>0$, there is a leaky-ReLU+ABS (or leaky-ReLU+SIN) network with width one and depth $L$ such that $ \int_0^1 |f^*(x)-f_L(x)|^p dx < \varepsilon^p. $ There is a leaky-ReLU+UOE network with a width of one and a depth of $L$ such that $ |f^*(x)-f_L(x)| < \varepsilon, \forall x\in [0,1].
$ \end{lemma} \section{Connection to the neural ODEs ($N=d_x=d_y=d\ge2$)} \label{sec:case_of_N=d} Now, we turn to the high-dimensional case and connect the feed-forward neural networks to neural ODEs. To build this connection, we assume that the input and output have the same dimension, $d_x=d_y=d$. Consider the following neural ODE with a one-hidden-layer neural field: \begin{align}\label{eq:neural_ODE} \left\{ \begin{aligned} &\dot{x}(t) = v(x(t),t) := A(t)\tanh(W(t)x(t)+b(t)), t\in(0,\tau),\\ &x(0)=x_0, \end{aligned} \right. \end{align} where $x, x_0 \in \mathbb{R}^d$ and the time-dependent parameters $(A,W,b)\in \mathbb{R}^{d\times d}\times \mathbb{R}^{d\times d} \times \mathbb{R}^d$ are piecewise constant functions of $t$. The flow map is denoted as $\phi^\tau(\cdot)$, which is the function from $x_0$ to $x(\tau)$. According to the approximation results of neural ODEs (see \cite{Li2022Deep, Tabuada2020Universal, Ruiz-Balet2021Neural} for examples), we have the following lemma. \begin{lemma} [Special case of \cite{Li2022Deep} ] \label{th:ODE_Lp_approximation_C} Let $d \ge 2$. Then, for any continuous function $f^*:\mathbb{R}^d \rightarrow \mathbb{R}^d$, any compact set $\mathcal{K} \subset \mathbb{R}^d$, and any $\varepsilon >0$, there exist a time $\tau \in \mathbb{R}^+$ and a piecewise constant input $(A,W,b):[0,\tau]\rightarrow \mathbb{R}^{d\times d}\times \mathbb{R}^{d\times d}\times \mathbb{R}^d$ so that the flow-map $\phi^{\tau}$ associated with the neural ODE (\ref{eq:neural_ODE}) satisfies: $||f^*-\phi^{\tau}||_{L^p(\mathcal{K})} \leq \varepsilon.$ \end{lemma} Next, we consider the approximation of the flow map associated with (\ref{eq:neural_ODE}) by neural networks. Recently, \cite{Duan2022Vanilla} found that leaky-ReLU networks could perform such approximations.
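For concreteness, the flow map $\phi^\tau$ of (\ref{eq:neural_ODE}) with piecewise-constant parameters can be simulated by an explicit Euler scheme, as in the following sketch; all parameter values here are illustrative.

```python
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def flow_map(x0, pieces, dt=1e-3):
    """Explicit-Euler approximation of the flow map phi^tau of the neural ODE
    x'(t) = A(t) tanh(W(t) x(t) + b(t)) with piecewise-constant parameters.
    `pieces` is a list of (A, W, b, duration) tuples; values are illustrative."""
    x = list(x0)
    for A, W, b, duration in pieces:
        for _ in range(round(duration / dt)):
            z = [math.tanh(wx + bi) for wx, bi in zip(matvec(W, x), b)]
            v = matvec(A, z)
            x = [xi + dt * vi for xi, vi in zip(x, v)]
    return x

# Constant field x' = tanh(1) in d = 1: the flow map is x0 + tau * tanh(1).
x_tau = flow_map([0.0], [([[1.0]], [[0.0]], [1.0], 1.0)])
```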
\begin{lemma}[Theorem 2.2 in \cite{Duan2022Vanilla}]\label{th:leaky_ReLU_approximate_phi} If the parameters $(A,W,b)$ in (\ref{eq:neural_ODE}) are piecewise constant, then for any compact set $\mathcal{K}$ and any $\varepsilon>0$, there is a leaky-ReLU network $f_L(x)$ with width $d$ and depth $L$ such that \begin{align} \|\phi^\tau(x) - f_L(x)\| \le \varepsilon, \forall x \in \mathcal{K}. \end{align} \end{lemma} Combining these two lemmas, one can directly prove the following corollary, which is a part of our Theorem \ref{th:main_LpUAP_leaky_ReLU}. \begin{corollary}\label{th:main_LpUAP_leaky_ReLU_d} Let $\mathcal{K} \subset \mathbb{R}^{d}$ be a compact set and $d\ge 2$; then, for the function class $L^p(\mathcal{K},\mathbb{R}^{d})$, the leaky-ReLU networks with width $d$ have $L^p$-UAP. \end{corollary} Here, we summarize the main ideas of this result. Let us start with the discretization of the ODE by the splitting approach \change{(see \cite{McLachlan2002Splitting} for example). Consider the splitting of (\ref{eq:neural_ODE}) with $v(x,t) = \sum_{i,j} v^{(j)}_{i}(x,t) e_j$, where $v^{(j)}_{i}(x,t) = A_{ji}(t) \tanh(W_{i,:}(t)x+b_i(t))$ is a scalar function and $e_j$ is the $j$-th axis unit vector. Then, for a given time step $\Delta t = \tau/K$ (with $K$ large enough), the splitting method gives the following iteration for $x_k$, which approximates $\phi^{k\Delta t}(x_0)$, \begin{align} x_{k+1} = T_k^{(d,d)} \circ \dots \circ T_k^{(1,2)} \circ T_k^{(1,1)} x_k, \end{align} where the map $T_k^{(i,j)}: x \to y$ is defined as \begin{align} \left\{ \begin{aligned} & y^{(l)} = x^{(l)} , l \neq j, \\ & y^{(j)} = x^{(j)} + \Delta t v^{(j)}_{i}(x,k\Delta t) = x^{(j)} + a \Delta t \tanh(w x + \beta). \end{aligned} \right. \end{align} Here, the superscript in $x^{(l)}$ denotes the $l$-th coordinate of $x$, and $a=A_{ji},w=W_{i,:}$ and $\beta=b_i$ take their values at $t=k\Delta t$.
Note that the scalar functions $\tanh(\xi) $ and $\xi + a\Delta t \tanh(\xi)$ are monotone with respect to $\xi$ when $\Delta t$ is small enough. This allows us to construct leaky-ReLU networks with width $d$ to approximate each map $T_k^{(i,j)}$ and then approximate the flow-map, $\phi^\tau(x_0) \approx x_K$. } Note that Lemma \ref{th:leaky_ReLU_approximate_phi} holds for all dimensions, while Lemma \ref{th:ODE_Lp_approximation_C} holds for dimensions larger than one. This is because flow maps are orientation-preserving diffeomorphisms, and they can approximate continuous functions only for dimensions larger than one; see \cite{Brenier2003Approximation}. The approximation is based on control theory, where the flow map can be adjusted to match any finite set of input-output pairs. This matching fails in dimension one; however, the case of dimension one was already addressed in Section \ref{sec:case_of_d=1}. \section{Achieving the minimal width} \label{sec:minimal_width} Now, we turn to the cases where the input and output dimensions are not necessarily equal. \subsection{Universal lower bound $w^*_{\min}=\max(d_x,d_y)$} Here, we give a sketch of the proof of Lemma \ref{th:universal_w_min}, which states that $w^*_{\min}$ is a universal lower bound over all activation functions. Parts of Lemma \ref{th:universal_w_min} have been demonstrated in many papers, such as \cite{Park2021Minimum}. Here, we give a proof via two counterexamples that are simple and easy to understand from a topological perspective. It contains two cases: 1) there is a function $f^*$ that cannot be approximated by networks with width $w \le d_x-1$; 2) there is a function $f^*$ that cannot be approximated by networks with width $w \le d_y -1$. Figure~\ref{fig:example_for_TH1}(a)-(b) shows the counterexamples that illustrate the essence of the proof. For the first case, $w \le d_x-1$, we show that $f^*(x)=\|x\|^2, x \in \mathcal{K} = [-2,2]^{d_x},$ is what we want; see Figure~\ref{fig:example_for_TH1}(a).
In fact, we can relax the networks to a function $f(x) = \phi(Wx+b)$, where $Wx+b$ is an affine map from $\mathbb{R}^{d_x}$ to $\mathbb{R}^{d_x-1}$ and $\phi(x)$ could be any function. A consequence is that there exists a direction $v$ (set as the vector satisfying $Wv=0$, $\|v\|=1$) such that $f(x) = f(x+\lambda v)$ for all $\lambda \in \mathbb{R}$. Then, considering the sets $A=\{x : \|x\|\le 0.1\}$ and $B=\{x : \|x-v\|\le 0.1\}$, we have \begin{align*} \int_\mathcal{K} |f(x) - f^*(x)| dx &\ge \int_A |f(x) - f^*(x)| dx + \int_B |f(x) - f^*(x)| dx \\ &\ge \int_A (|f(x) - f^*(x)| +|f(x+v) - f^*(x+v)|) dx\\ &\ge \int_A (|f^*(x) - f^*(x+v)|) dx \ge 0.8 |A|. \end{align*} Since the volume of $A$ is a fixed positive number, the inequality implies that even the $L^1$ approximation for $f^*$ is impossible. Approximation in the $L^p$ norm or the uniform norm is likewise impossible. For the second case, $w \le d_y -1$, we take $f^*$ to be the parametrized curve from $\textbf{0}$ to $\textbf{1}$ along the edges of the cube; see Figure~\ref{fig:example_for_TH1}(b). Relax the networks to a function $f(x) = W \psi(x) + b$, where $\psi(x)$ could be any function. Since the range of $f$ lies in a hyperplane while $f^*$ has a positive distance to any hyperplane, the target $f^*$ cannot be approximated. \subsection{Achieving $w^*_{\min}$ for $L^p$-UAP} Now, we show that the lower bound $w_{\min}^*$ for $L^p$-UAP can be achieved by leaky-ReLU+ABS networks. Without loss of generality, we consider $\mathcal{K}=[0,1]^{d_x}$. For any function $f^*$ in $L^p([0,1]^{d_x},\mathbb{R}^{d_y})$, we can extend it to a function $\tilde f^*$ in $L^p([0,1]^{d},\mathbb{R}^{d})$ by filling in zeros, where $d=\max(d_x,d_y)=w^*_{\min}$. When $d_x>1$ or $d_y>1$, the $L^p$-UAP for leaky-ReLU networks with width $w^*_{\min}$ is obtained by using Corollary \ref{th:main_LpUAP_leaky_ReLU_d}.
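The zero-padding extension used above can be sketched as follows; reading ``filling in zeros'' as ignoring the extra input coordinates and padding the output with zeros is our illustrative interpretation.

```python
def extend(f, d_x, d_y):
    """Extend f: R^{d_x} -> R^{d_y} to a map R^d -> R^d with d = max(d_x, d_y):
    extra input coordinates are ignored and the output is zero-padded
    (an illustrative reading of `filling in zeros')."""
    d = max(d_x, d_y)
    def f_tilde(x):
        y = f(x[:d_x])                # keep only the first d_x coordinates
        return y + [0.0] * (d - d_y)  # pad the output up to dimension d
    return f_tilde

# Examples: a map R^1 -> R^3 extended to R^3 -> R^3, and R^2 -> R^1 to R^2 -> R^2.
f3 = extend(lambda x: [x[0], 2 * x[0], x[0] ** 2], 1, 3)
g2 = extend(lambda x: [x[0] + x[1]], 2, 1)
```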
Recall that by Lemma \ref{th:universal_w_min}, $w_{\min}^*$ is optimal, and we obtain our main result, Theorem \ref{th:main_LpUAP_leaky_ReLU}. Combining this with the case of $d_x=d_y=d=1$ from Section \ref{sec:case_of_d=1}, where the absolute value function ABS is added as an additional activation function, we obtain Theorem \ref{th:main_LpUAP_leaky_ReLU+ABS}. \subsection{Achieving $w^*_{\min}$ for $C$-UAP} Here, we use the encoder-memorizer-decoder approach proposed in \cite{Park2021Minimum} to achieve the minimum width. Without loss of generality, we consider the function class $C([0,1]^{d_x},[0,1]^{d_y})$. \change{The encoder-memorizer-decoder approach} includes three parts: \begin{itemize} \item[1)] an \change{\bf encoder} that maps $[0,1]^{d_x}$ to $[0,1]$ by quantizing each coordinate of $x$ with a $K$-bit binary representation and concatenating the quantized coordinates into a single scalar value $\bar x$ with a $(d_x K)$-bit binary representation; \item[2)] a \change{\bf memorizer} that maps each codeword $\bar x$ to its target codeword $\bar y$; \item[3)] a \change{\bf decoder} that maps $\bar y$ to the quantized target, which approximates the true target. \end{itemize} As illustrated in Figure \ref{fig:example_for_TH1}(c), using the floor function instead of a step function, one can construct the encoder by FLOOR networks with width $d_x$ and the decoder by FLOOR networks with width $d_y$. The memorizer is a one-dimensional scalar function that can be approximated by ReLU networks with a width of two or UOE networks with a width of one. Therefore, the minimal widths $\max(d_x,2,d_y)$ and $\max(d_x,d_y)$ are obtained, which demonstrate Lemma \ref{th:C-UAP_ReLU+FLOOR} and Corollary \ref{th:C-UAP_ReLU+FLOOR+UOE}, respectively. \begin{figure} \caption{ (a)(b) Counterexamples for proving Lemma \ref{th:universal_w_min}. (a) Points $A$ and $B$ on a level set of the network $f(x)$; $f(A)=f(B)$ but $f^*(A)-f^*(B)$ is not small.
(b) The curve from $\textbf{0}$ to $\textbf{1}$ along the edges of the cube has a positive distance to any hyperplane. (c) Illustration of the \change{encoder-memorizer-decoder} scheme for $C$-UAP with an example where $d_x=d_y=3$, with 4 bits for the input and 5 bits for the output.} \label{fig:example_for_TH1} \end{figure} \subsection{Effect of the activation functions} Here, we emphasize that our universal lower bound on the minimal width holds over arbitrary activation functions. However, it cannot always be achieved when the activation functions are fixed. Here, we discuss the case of monotone activation functions. If the activation functions are strictly monotone and continuous (such as leaky-ReLU), a width of at least $d_x+1$ is needed for $C$-UAP. This can be understood through topology theory. Leaky-ReLU and nonsingular affine transformations are continuous bijections with continuous inverses, \emph{i.e.}, homeomorphisms. Since compositions of homeomorphisms are also homeomorphisms, we have the following proposition: If $N=d_x=d_y=d$ and the weight matrices in leaky-ReLU networks are nonsingular, then the input-output map is a homeomorphism. Note that singular matrices can be approximated by nonsingular matrices; therefore, we can restrict the weight matrices in neural networks to the nonsingular case. When $d_x \ge d_y$, we can reformulate the leaky-ReLU network as $f_L(x) = W_{L+1} \psi(x) + b_{L+1}$, where $\psi(x)$ is a homeomorphism. Note that considering the case where $d_y=1$ is sufficient, according to \cite{Hanin2018Approximating, Johnson2019Deep}. They proved that neural networks with width $d_x$ cannot approximate any scalar function whose level sets contain a bounded path component. This can be easily understood from the perspective of topology theory. An example is to consider the function $f^*(x) = \|x\|^2, x \in \mathcal{K} = [-2,2]^{d_x}$ shown in Figure \ref{fig:UAP_demo_circ}. \begin{figure} \caption{Illustrating the possibility of UAP when $N=d_x$.
(a) Plot of $f^*(x) = \|x\|^2$ and its contour at $\|x\|=1$. (b) The point $P$ is an interior point of the unit ball, while its image is a boundary point, which is impossible for a homeomorphism. (c) Any homeomorphism approximating $\|x\|^2$ with an error less than $\varepsilon$ ($=0.1$, for example) on $\Gamma$ must have an error larger than $1-\varepsilon$ ($=0.9$) at $P$. (d) Approximating $f^*$ in $L^p$ is possible by leaving out a small region. } \label{fig:UAP_demo_circ} \end{figure} Now consider the case where $d_x < d_y$. We present a simple example in Figure~\ref{fig:UAP_demo_curve}. The curve `4', corresponding to a continuous function from $[0,1]\subset \mathbb{R}$ to $\mathbb{R}^2$, cannot be uniformly approximated. However, the $L^p$ approximation is still possible. \begin{figure} \caption{Illustrating the possibility of $C$-UAP when $d_x \le d_y$. The curve in (a) is homeomorphic to the interval $[0,1]$, while the curve `4' in (b) is not and cannot be approximated uniformly by homeomorphisms. The $L^p$ approximation is possible via (a). } \label{fig:UAP_demo_curve} \end{figure} \section{Conclusion} \label{sec:conclusion} Let us summarize the main results and implications of this paper. After giving the universal lower bound of the minimum width for the UAP, we proved that the bound is optimal by constructing neural networks with some activation functions. For the $L^p$-UAP, our construction to achieve the critical width was based on the approximation power of neural ODEs, which bridges the feed-forward networks to the flow maps corresponding to the ODEs. This allowed us to understand the UAP of the FNN through topology theory. Moreover, we obtained not only the lower bound but also the upper bound. For the $C$-UAP, our construction was based on the encoder-memorizer-decoder approach in \cite{Park2021Minimum}, where the activation set contains the discontinuous floor function $\floor{x}$.
It is still an open question whether we can achieve the critical width with continuous activation functions. \cite{Johnson2019Deep} proved that networks with continuous and monotone activation functions need a width of at least $d_x+1$. This implies that nonmonotone activation functions are needed. By using the UOE activation, we achieved the critical width for the case of $d_x=1$. It would be of interest to study the case of $d_x \ge 2$ in future research. We remark that our UAP results are for functions on a compact domain. Examining the critical width of the UAP for functions on unbounded domains is desirable for future research. \subsubsection*{Acknowledgments} We thank anonymous reviewers for their valuable comments and useful suggestions. This research is supported by the National Natural Science Foundation of China (Grant No. 12201053). \appendix \section{Proof of the lemmas} \subsection{Proof of Lemma \ref{th:1D_UAP}} We give a definition and a lemma below that are useful for proving Lemma \ref{th:1D_UAP}. \begin{definition}\label{def:appendix} We say two functions, $f_1,f_2 \in C(\mathbb{R},\mathbb{R})$, have the same ordering of extrema if they have the following properties: \begin{itemize} \item[1)] $f_i(x)$ has only a finite number of extrema, located (in increasing order) at $x^*_{i,j}, j=1,2,...,m_i.$ \item[2)] $m_1=m_2=:m$ and the two sequences, $$S_1:=\{f_1(-\infty), f_1(x^*_{1,1}),..., f_1(x^*_{1,m}),f_1(+\infty)\},$$ and $$S_2:=\{f_2(-\infty), f_2(x^*_{2,1}),..., f_2(x^*_{2,m}),f_2(+\infty)\},$$ have the same ordering, \emph{i.e.}, \begin{align*} S_{1,i} < S_{1,j} \Longleftrightarrow S_{2,i} < S_{2,j}, \quad \forall i,j,\\ S_{1,i} = S_{1,j} \Longleftrightarrow S_{2,i} = S_{2,j}, \quad \forall i,j.
\end{align*} \end{itemize} \end{definition} \begin{lemma}\label{lemma:appendix} Let $f_1$ and $f_2$ be continuous functions in $C(\mathbb{R},\mathbb{R})$ that have the same ordering of extrema; then, there are two strictly monotone functions, $v$ and $u$, such that \begin{align*} f_1 = v \circ f_2 \circ u. \end{align*} \end{lemma} \begin{proof} Here, we use the same notation as in Definition \ref{def:appendix}. The functions $v$ and $u$ can be constructed as follows. (1) Construct the outer function $v$ that matches the function values at the extrema. The only requirement is that $$S_{1,i} = v(S_{2,i}), \quad \forall i.$$ Since $S_1$ and $S_2$ have the same ordering, it is easy to construct such a function $v$ that is continuous and strictly increasing, for example, piecewise linear. (2) Construct the inner function $u$ to match the location of the extrema. Denote $g = v \circ f_2$, which satisfies $f_1(x^*_{1,i}) = g(x^*_{2,i})$. Since $f_1$ and $g$ are strictly monotone and continuous on the intervals $I_i:=(x^*_{1,i}, x^*_{1,i+1})$ and $J_i=(x^*_{2,i}, x^*_{2,i+1})$, respectively, we can construct the function $u$ on $I_i$ as \begin{align*} u(x) = g^{-1} (f_1(x)), x \in I_i. \end{align*} Combining each piece of $u$, we have a strictly increasing and continuous function $u$ on the whole space $\mathbb{R}$. As a consequence, we have $f_1 = g \circ u = v \circ f_2 \circ u$. \end{proof} \setcounter{theorem}{7} \begin{lemma} For any function $f^*(x) \in C[0,1]$ and $\varepsilon>0$, 1) there is a leaky-ReLU+ABS (or leaky-ReLU+SIN) network with width one and depth $L$ such that $ \int_0^1 |f^*(x)-f_L(x)|^p dx < \varepsilon^p. $ 2) there is a leaky-ReLU+UOE network with width one and depth $L$ such that $ |f^*(x)-f_L(x)| < \varepsilon, \forall x\in [0,1]. $ \end{lemma} \begin{proof} We mainly provide a proof of the second point, while the first point can be proven using the same scheme.
For any function $f^*(x) \in C([0,1],\mathbb{R})$ and $\varepsilon>0$, we can approximate it by a polynomial $p_n(x)$ of order $n$ such that \begin{align*} |f^*(x) - p_n(x)| \le \varepsilon/2, \quad \forall x \in [0,1], \end{align*} according to the well-known Weierstrass approximation theorem. Without loss of generality, we can assume that $p_n(x)$ takes pairwise distinct values at its extrema. Then, we can represent $p_n(x)$ by the following composition, using Lemma \ref{lemma:appendix} and the property of UOE: \begin{align} \label{eq:p=vgu} p_n(x) = v \circ \rho \circ u(x), \end{align} where $\rho(x)$ is the UOE function (\ref{eq:UOE}) and $v(x)$ and $u(x)$ are monotonically increasing continuous functions. Then, we can approximate $p_n(x)$ by UOE networks. Since $v(x)$ and $u(x)$ are monotone, there are UOE networks $\tilde v(x)$ and $\tilde u(x)$ such that $\|v - \tilde v\|$ and $\|u - \tilde u\|$ are arbitrarily small. Hence, there is a UOE network $f_L(x) = \tilde v \circ \rho \circ \tilde u(x)$ that can approximate $p_n(x)$ such that \begin{align*} |p_n(x) - f_L(x)| \le \varepsilon/2, \quad \forall x \in [0,1], \end{align*} which implies that \begin{align*} |f^*(x) - f_L(x)| \le \varepsilon. \end{align*} This completes the proof of the second point. For the first point, we only emphasize that it is easy to construct a function $f(x)$ whose local maxima all take one common value and whose local minima all take another common value on the interval, with $\|f-f^*\|_{L^p}$ small enough. This $f(x)$ has the same ordering of extrema as the sawtooth function (or sine) and hence can be uniformly approximated by leaky-ReLU+ABS (or leaky-ReLU+SIN) networks $f_L$. As a consequence, $\|f_L-f^*\|_{L^p}$ is small enough. \end{proof} \subsection{Proof of Lemma \ref{th:ODE_Lp_approximation_C}} \begin{lemma} Let $d \ge 2$.
Then, for any continuous function $f^*:\mathbb{R}^d \rightarrow \mathbb{R}^d$, any compact set $\mathcal{K} \subset \mathbb{R}^d$, and any $\varepsilon >0$, there exist a time $\tau \in \mathbb{R}^+$ and a piecewise constant input $(A,W,b):[0,\tau]\rightarrow \mathbb{R}^{d\times d}\times \mathbb{R}^{d\times d}\times \mathbb{R}^d$ so that the flow map $\phi^{\tau}$ associated with the neural ODE (\ref{eq:neural_ODE}) satisfies: $||f^*-\phi^{\tau}||_{L^p(\mathcal{K})} \leq \varepsilon.$ \end{lemma} \begin{proof} This is a special case of Theorem 2.3 in \cite{Li2022Deep}. \end{proof} \subsection{Proof of Lemma \ref{th:leaky_ReLU_approximate_phi}} \begin{lemma} If the parameters $(A,W,b)$ in (\ref{eq:neural_ODE}) are piecewise constants, then for any compact set $\mathcal{K}$ and any $\varepsilon>0$, there is a leaky-ReLU network $f_L(x)$ with width $d$ and depth $L$ such that \begin{align} \|\phi^\tau(x) - f_L(x)\| \le \varepsilon, \forall x \in \mathcal{K}. \end{align} \end{lemma} \begin{proof} It is Theorem 2.2 in \cite{Duan2022Vanilla}. \end{proof} \subsection{Proof of Corollary \ref{th:main_LpUAP_leaky_ReLU_d}} \begin{corollary} Let $\mathcal{K} \subset \mathbb{R}^{d}$ be a compact set and $d\ge 2$; then, for the function class $L^p(\mathcal{K},\mathbb{R}^{d})$, the leaky-ReLU networks with width $d$ have $L^p$-UAP. \end{corollary} \begin{proof} For any $f^*(x) \in L^p(\mathcal{K},\mathbb{R}^{d})$ and $\varepsilon>0$, there is a flow map $\phi^\tau(x)$ associated with the neural ODE (\ref{eq:neural_ODE}) such that (according to Lemma \ref{th:ODE_Lp_approximation_C}) \begin{align*} \|f^*(\cdot) - \phi^\tau(\cdot)\|_{L^p} \le \frac{\varepsilon}{2}. \end{align*} Then, employing Lemma \ref{th:leaky_ReLU_approximate_phi}, there is a leaky-ReLU network $f_L$ such that \begin{align*} \|f_L(\cdot) - \phi^\tau(\cdot)\|_{L^p} \le \frac{\varepsilon}{2}. 
\end{align*} Therefore, we have \begin{align*} \|f_L(\cdot) - f^*(\cdot)\|_{L^p} \le \|f^*(\cdot) - \phi^\tau(\cdot)\|_{L^p} + \|f_L(\cdot) - \phi^\tau(\cdot)\|_{L^p} \le \varepsilon. \end{align*} \end{proof} \section{Proof of the main results} \setcounter{theorem}{0} \subsection{Proof of Lemma \ref{th:universal_w_min}} \begin{lemma} For any compact domain $\mathcal{K} \subset \mathbb{R}^{d_x}$ and any finite set of activation functions $\{\sigma_i\}$, the $\{\sigma_i\}$ networks with width $w < w^*_{\min} \equiv \max(d_x, d_y)$ do not have the UAP for either $L^p(\mathcal{K},\mathbb{R}^{d_y})$ or $C(\mathcal{K},\mathbb{R}^{d_y})$. \end{lemma} \begin{proof} It suffices to exhibit the following two counterexamples $f^*(x)$ that cannot be approximated in the $L^p$-norm. 1) $f^*(x)=\|x\|^2, x \in \mathcal{K} = [-2,2]^{d_x},$ cannot be approximated by any network with width $w \le d_x-1$. In fact, we can relax the networks to a function $f(x) = \phi(Wx+b)$, where $Wx+b$ is an affine map from $\mathbb{R}^{d_x}$ to $\mathbb{R}^{d_x-1}$ and $\phi(x)$ could be any function. A consequence is that there exists a direction $v$ (set as the vector satisfying $Wv=0$, $\|v\|=1$) such that $f(x) = f(x+\lambda v)$ for all $\lambda \in \mathbb{R}$. Then, considering the sets $A=\{x : \|x\|\le 0.1\}$ and $B=\{x : \|x-v\|\le 0.1\}$, we have \begin{align*} \int_\mathcal{K} |f(x) - f^*(x)| dx &\ge \int_A |f(x) - f^*(x)| dx + \int_B |f(x) - f^*(x)| dx \\ &\ge \int_A (|f(x) - f^*(x)| +|f(x+v) - f^*(x+v)|) dx\\ &\ge \int_A (|f^*(x) - f^*(x+v)|) dx \ge 0.8 |A|. \end{align*} Since the volume of $A$ is a fixed positive number, the inequality implies that even the $L^1$ approximation for $f^*$ is impossible. Approximation in the $L^p$ norm or the uniform norm is likewise impossible. 2) The function $f^*$, the parametrized curve from $\textbf{0}$ to $\textbf{1}$ along the edges of the cube, cannot be approximated by any network with width $w \le d_y-1$.
Relaxing the networks to a function $f(x) = W \psi(x) + b$, where $\psi(x)$ could be any function, the range of $f$ lies in a hyperplane, while $f^*$ has a positive distance to any hyperplane; hence the target $f^*$ cannot be approximated. \end{proof} \subsection{Proof of Theorem \ref{th:main_LpUAP_leaky_ReLU}} \begin{theorem} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the function class $L^p(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of leaky-ReLU networks having $L^p$-UAP is exactly $w_{\min} = \max(d_x, d_y,2)$. \end{theorem} \begin{proof} Using Lemma \ref{th:universal_w_min}, we only need to prove two points: 1) the $L^p$-UAP holds when $\max(d_x,d_y) \ge 2$, 2) when $d_x=d_y=1$, there is a function that cannot be approximated by leaky-ReLU networks with width one (since width two is enough for the $L^p$-UAP). The first point is a consequence of Corollary \ref{th:main_LpUAP_leaky_ReLU_d} since we can extend the target function to dimension $d=\max(d_x,d_y)$. The second point is obvious since leaky-ReLU networks with width one are monotone functions that cannot approximate nonmonotone functions such as $f^*(x) = x^2, x \in [-1,1]$. \end{proof} \subsection{Proof of Theorem \ref{th:main_LpUAP_leaky_ReLU+ABS}} \begin{theorem} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the function class $L^p(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of leaky-ReLU+ABS networks having $L^p$-UAP is exactly $w_{\min} = \max(d_x, d_y)$. \end{theorem} \begin{proof} This is a consequence of Theorem \ref{th:main_LpUAP_leaky_ReLU} (for the case of $\max(d_x, d_y)\ge 2$) combined with Lemma \ref{th:1D_UAP} (for the case of $d_x=d_y=1$).
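The width-one obstruction used above can be checked numerically: a width-one leaky-ReLU network is a composition of monotone scalar maps $x \mapsto w\,\sigma(ax+b)+c$, hence monotone, and therefore its sup-error against $f^*(x)=x^2$ on $[-1,1]$ is at least $1/2$. A minimal sketch with arbitrary (hypothetical) parameters:

```python
import random

def leaky_relu(x, slope=0.1):
    # Strictly increasing scalar activation.
    return x if x >= 0 else slope * x

def width_one_network(x, layers):
    # Each layer maps x -> w * sigma(a*x + b) + c; a composition of
    # monotone scalar maps is monotone (possibly decreasing).
    for a, b, w, c in layers:
        x = w * leaky_relu(a * x + b) + c
    return x

random.seed(0)
layers = [tuple(random.uniform(-2, 2) for _ in range(4)) for _ in range(5)]

grid = [i / 100 for i in range(-100, 101)]
values = [width_one_network(x, layers) for x in grid]
diffs = [b - a for a, b in zip(values, values[1:])]

# Monotone in one direction, whatever the parameters are:
assert all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs)

# Hence the sup-error against x^2, sampled at {-1, 0, 1}, is at least 1/2:
err = max(abs(width_one_network(x, layers) - x * x) for x in (-1.0, 0.0, 1.0))
assert err >= 0.5
```

Width two already escapes this obstruction, consistent with the bound $w_{\min}=\max(d_x,d_y,2)$.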
\end{proof} \subsection{Proof of Lemma \ref{th:C-UAP_ReLU+FLOOR}} \begin{lemma} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the function class $C(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of ReLU+FLOOR networks having $C$-UAP is exactly $w_{\min} = \max(d_x, 2, d_y)$. \end{lemma} \begin{proof} Recalling the results of Lemma \ref{th:universal_w_min}, we only need to prove two points: 1) the C-UAP holds when $\max(d_x,d_y) \ge 2$, 2) when $d_x=d_y=1$, there is a function that cannot be approximated by ReLU+FLOOR networks with width one (since width two is enough for the $C$-UAP). The first point can be established by the encoder-memorizer-decoder approach. The second point is obvious since ReLU+FLOOR networks with width one are monotone functions that cannot approximate nonmonotone functions such as $f^*(x) = x^2, x \in [-1,1]$. \end{proof} \subsection{Proof of Theorem \ref{th:C-UAP_UOE}} \begin{theorem} The UOE networks with width $d_y$ have $C$-UAP for functions in $C([0,1],\mathbb{R}^{d_y})$. \end{theorem} \begin{proof} Since functions in $C([0,1],\mathbb{R}^{d_y})$ can be regarded as $d_y$ one-dimensional functions, it is enough to prove the case of $d_y=1$, which is the result in Lemma \ref{th:1D_UAP}. \end{proof} \subsection{Proof of Corollary \ref{th:C-UAP_ReLU+FLOOR+UOE}} \begin{corollary} Let $\mathcal{K} \subset \mathbb{R}^{d_x}$ be a compact set; then, for the continuous function class $C(\mathcal{K},\mathbb{R}^{d_y})$, the minimum width of UOE+FLOOR networks having $C$-UAP is exactly $w_{\min} = \max(d_x, d_y)$. \end{corollary} \begin{proof} The case where $\max(d_x, d_y) \ge 2$ is a consequence of Lemma \ref{th:C-UAP_ReLU+FLOOR} since the UOE function contains the leaky-ReLU as a component. The case where $\max(d_x, d_y)=1$, \emph{i.e.} $d_x=d_y=1$, is a consequence of Lemma \ref{th:1D_UAP}. \end{proof} \end{document}
\begin{document} \title{Extremal $p$-adic $L$-functions} \begin{abstract} In this note we propose a new construction of cyclotomic $p$-adic L-functions attached to classical modular cuspidal eigenforms. This allows us to cover most known cases to date and provides a method which is amenable to generalizations to automorphic forms on arbitrary groups. In the classical setting of ${\mathrm{GL}}_2$ over ${\mathbb Q}$ this allows us to construct the $p$-adic $L$-function in the so far uncovered {\em extremal} case which arises under the unlikely hypothesis that the $p$-th Hecke polynomial has a double root. Although Tate's conjecture implies that this case should never take place for ${\mathrm{GL}}_2/{\mathbb Q}$, the obvious generalization does exist in nature for Hilbert cusp forms over totally real number fields of even degree, and this article proposes a method which should adapt to this setting. We further study the admissibility and the interpolation properties of these \emph{extremal $p$-adic L-functions} $L_p^{\rm ext}(f,s)$, and relate $L_p^{\rm ext}(f,s)$ to the two-variable $p$-adic L-function interpolating cyclotomic $p$-adic L-functions along a Coleman family. \end{abstract} \tableofcontents \section{Introduction} Let $f \in S_{k+2}(\Gamma_1(N),\epsilon)$ be a modular cuspidal eigenform for $\Gamma_1(N)$ with nebentypus $\epsilon$ and weight $k+2$. A very important topic in modern Number Theory is the study of the complex L-function $L(s,\pi)$ attached to the automorphic representation $\pi$ of ${\mathrm{GL}}_2({\mathbb A})$ generated by $f$. Understanding this complex-valued analytic function is the key point for some of the most important problems in mathematics such as the \emph{Birch and Swinnerton-Dyer conjecture}. Back in the middle of the seventies, Vishik \cite{Vis} and Amice-V\'elu \cite{A-V} defined a $p$-adic measure $\mu_{f,p}$ of ${\mathbb Z}_p^\times$ associated with $f$, under the hypothesis that $p$ does not divide $N$.
The construction of this measure was the starting point for the theory of $p$-adic L-functions attached to modular cuspforms. The $p$-adic L-function $L_p(f,s)$ is a ${\mathbb C}_p$-valued analytic function which interpolates the critical values of the L-function $L(s,\pi)$. The function $L_p(f,s)$ is defined by means of $\mu_{f,p}$ as \[ L_p(f,s):=\int_{{\mathbb Z}_p^\times}{\rm exp}(s\cdot{\rm log(x)})d\mu_{f,p}(x), \] where ${\rm exp}$ and ${\rm log}$ are respectively the $p$-adic exponential and $p$-adic logarithm functions. Mazur, Tate and Teitelbaum extended in \cite{MTT86} the definition of $\mu_{f,p}$ to more general situations and proposed a $p$-adic analogue of the Birch and Swinnerton-Dyer conjecture, replacing the complex L-function $L(s,\pi)$ with its $p$-adic counterpart $L_p(f,s)$. It has been shown that $L_p(f,s)$ is directly related to the ($p$-adic, or eventually $l$-adic) cohomology of modular curves, which makes the $p$-adic Birch and Swinnerton-Dyer conjecture more tractable. In fact, the theory of $p$-adic L-functions has grown tremendously in recent years. Many results, whose complex counterparts are inaccessible with current techniques, have been proven in the analogous $p$-adic scenarios. In this note we provide a reinterpretation of the construction of the $p$-adic measures $\mu_{f,p}$. Our approach exploits the theory of automorphic representations and, in that sense, it is similar to the construction provided by Spiess in \cite{Spi14} for weights strictly greater than $2$. This opens the door to possible generalizations of $p$-adic measures attached to automorphic representations of ${\mathrm{GL}}_2({\mathbb A}_F)$ of any weight, for any number field $F$. We are able to construct $\mu_{f,p}$ in every possible situation except when the local automorphic representation $\pi_p$ attached to $f$ is \emph{supercuspidal}, and we hope our work clarifies why one does not expect to find good $p$-adic measures in the latter case.
We obtain a genuinely new construction in the unlikely setting where the $p$-th Hecke polynomial has a double root. In this case, our main result (Theorem \ref{mainthm}) reads as follows: \begin{theorem} Let $f =\sum_{n\geq 1} a_n q^n\in S_{k+2}(\Gamma_1(N),\epsilon)$ be a cuspform, and assume that $P(X):=X^2-a_pX+\epsilon(p)p^{k+1}$ has a double root $\alpha$. Then there exists a locally analytic $p$-adic measure $\mu_{f,p}^{{\rm ext}}$ of ${\mathbb Z}_p^\times$ such that, for any locally polynomial character $\chi=\chi_0(x)x^m$ with $m\leq k$: \begin{equation}\label{IntForInt} \int_{{\mathbb Z}_p^\times}\chi d\mu_{f,p}^{{\rm ext}}=\frac{4\pi }{\Omega_f^\pm i^m}\cdot e_p^{\rm ext}(\pi_p,\chi_0)\cdot L\left(m-k+\frac{1}{2},\pi,\chi_0\right). \end{equation} Here $L\left(s,\pi,\chi_0\right)$ denotes the complex $L$-function of $\pi$ twisted by $\chi_0$, and we have set \[ e_p^{\rm ext}(\pi_p,\chi_0)=\left\{\begin{array}{ll} (1-p^{-1})^{-1}\left(p^{k-m}\alpha^{-1}+p^{m-k-1}\alpha-2p^{-1}\right);&\chi_0\mid_{{\mathbb Z}_p^\times}=1;\\ -(1-p^{-1})^{-1}rp^{r(m-k-1)}\alpha^{r}\tau(\chi_0);&{\rm cond}(\chi_0)=r>0, \end{array}\right. \] where $\tau(\chi_0)$ is the Gauss sum attached to $\chi_0$. \end{theorem} We call $\mu_{f,p}^{{\rm ext}}$ \emph{the extremal $p$-adic measure}. Coleman and Edixhoven showed in \cite{ColEd} that $P(X)$ never has double roots if the weight is 2, namely, $k = 0$. Moreover, they showed that, assuming \emph{Tate's conjecture}, the polynomial $P(X)$ can never be a square for general weights $k+2$. Since we believe in Tate's conjecture, we expect this situation never to occur; hence surely the hypothesis of the theorem is never fulfilled and $\mu_{f,p}^{\rm ext}$ can never be constructed.
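For orientation, the double-root hypothesis is simply the vanishing of the discriminant $a_p^2-4\epsilon(p)p^{k+1}$, in which case $\alpha=a_p/2$. The following sketch evaluates the unramified Euler factor $e_p^{\rm ext}(\pi_p,\chi_0)$ of the theorem on purely illustrative data ($p=3$, $k=1$, $a_p=6$, $\epsilon(p)=1$), which, as just explained, cannot come from an actual eigenform over ${\mathbb Q}$:

```python
from fractions import Fraction

def has_double_root(a_p, eps_p, p, k):
    # P(X) = X^2 - a_p X + eps_p * p^(k+1) has a double root exactly when
    # its discriminant vanishes; the root is then alpha = a_p / 2.
    return a_p * a_p == 4 * eps_p * p ** (k + 1)

def extremal_euler_factor(p, k, m, alpha):
    # Unramified case of the theorem:
    # e_p^ext = (1 - 1/p)^(-1) * (p^(k-m)/alpha + p^(m-k-1)*alpha - 2/p).
    inner = (Fraction(p ** (k - m)) / alpha
             + alpha / Fraction(p ** (k + 1 - m))
             - Fraction(2, p))
    return inner / (1 - Fraction(1, p))

p, k, a_p, eps_p = 3, 1, 6, 1       # toy data with a double root
assert has_double_root(a_p, eps_p, p, k)
alpha = Fraction(a_p, 2)            # alpha = 3, of slope (k+1)/2 = 1

values = [extremal_euler_factor(p, k, m, alpha) for m in range(k + 1)]
# Both critical twists m = 0, 1 give the factor 1 for these toy data.
assert values == [Fraction(1), Fraction(1)]
```

The symmetry of the two values reflects the functional symmetry $m \leftrightarrow k-m$ of the unramified factor when $\alpha=p^{(k+1)/2}$.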
Since these extremal scenarios do appear in nature for other reductive groups, for instance for ${\mathrm{GL}}_2/F$ where $F$ is a totally real number field of even degree over ${\mathbb Q}$ (see \cite[\S 3.3.1]{Chi15}), we believe our result above is potentially powerful. We plan to employ the approach of this note to cover these cases in the near future. Notice that in the unlikely situation of the above theorem, the two $p$-adic measures $\mu_{f,p}$ and $\mu_{f,p}^{{\rm ext}}$ coexist. One can thus define the $p$-adic L-function \[ L_p^{\rm ext}(f,s):=\int_{{\mathbb Z}_p^\times}{\rm exp}(s\cdot{\rm log(x)})d\mu_{f,p}^{{\rm ext}}(x), \] called \emph{the extremal $p$-adic L-function}, which coexists with $L_p(f,s)$, and satisfies the interpolation property \eqref{IntForInt} with Euler factors $e_p^{\rm ext}(\pi_p,\chi_0)$ completely different from those of the classical scenario. In the non-critical setting, namely when the roots of the Hecke polynomial are distinct, there is a classical result that relates $\mu_{f,p}$ to a two-variable $p$-adic L-function ${\mathcal L}_p$ that interpolates $\mu_{g,p}$ as $g$ ranges over a Coleman family passing through $f$. In \cite{BW}, Betina and Williams have recently extended this result to the critical setting. They construct an element \[ {\mathcal L}_p\in T\hat\otimes_{{\mathbb Q}_p}{\mathcal R}, \] where ${\mathcal R}$ is the ${\mathbb Q}_p$-algebra of locally analytic distributions of ${\mathbb Z}_p^\times$ and $T$ is a certain Hecke algebra defining a connected component of the eigencurve. Since an element of the Coleman family corresponds to a morphism $g:T\rightarrow\bar{\mathbb Q}_p$, the function ${\mathcal L}_p$ is characterized by the property \[ g({\mathcal L}_p)=C(g)\cdot\mu_{g,p}, \] where $C(g)\in \bar{\mathbb Q}_p^\times$ is a constant normalized so that $C(f)=1$.
The following result, proved in \S \ref{secLmu}, relates ${\mathcal L}_p$ to our extremal $p$-adic measure $\mu_{f,p}^{{\rm ext}}$: \begin{theorem} Let $t\in T$ be the element corresponding to $U_p-\alpha$. Then \[ \frac{\partial {\mathcal L}_p}{\partial t}(f)\in \alpha^{-1}\mu_{f,p}^{{\rm ext}}+\bar{\mathbb Q}_p\mu_{f,p}. \] \end{theorem} This last result implies that these extremal $p$-adic L-functions are analogous to the so-called secondary $p$-adic L-functions defined by Bella\"iche in \cite{Be}. \subsubsection*{Acknowledgements.} The author would like to thank David Loeffler, V\'ictor Rotger and Chris Williams for their comments and discussions throughout the development of this paper. The author is supported in part by DGICYT Grant MTM2015-63829-P. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 682152). \subsection{Notation} For any ring $R$, we denote by ${\mathcal P}(k)_R:={\mathrm {Sym}}^{k}(R^2)$ the $R$-module of homogeneous polynomials of degree $k$ in two variables with coefficients in $R$, endowed with an action of ${\mathrm{GL}}_2(R)$: \begin{equation}\label{actpol} \left( \left(\begin{array}{cc}a&b\\c&d\end{array}\right)\ast P\right)(x,y):= P\left((x,y)\left(\begin{array}{cc}a&b\\c&d\end{array}\right)\right). \end{equation} We denote by $V(k)_R:={\rm Hom}_R({\mathcal P}(k)_R,R)$ and $V(k):=V(k)_{\mathbb C}$. Similarly, we define the (right-) action of $A\in{\mathrm{GL}}_2({\mathbb R})^+$ on the set of modular forms of weight $k+2$ \[ (f\mid A)(z):=\rho(A,z)^{k+2}\cdot f(Az); \qquad \rho\left(\left(\begin{array}{cc}a&b\\c&d\end{array}\right),z\right):=\frac{(ad-bc)}{cz+d}. \] We will denote by $dx$ the Haar measure of ${\mathbb Q}_p$ so that ${\rm vol}({\mathbb Z}_p)=1$. Similarly, we write $d^\times x$ for the Haar measure of ${\mathbb Q}_p^\times$ so that ${\rm vol}({\mathbb Z}_p^\times)=1$.
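Under these normalizations the two measures differ on units by the standard factor: the additive volume of the unit group is ${\rm vol}_{dx}({\mathbb Z}_p^\times)=1-p^{-1}$, since ${\mathbb Z}_p^\times={\mathbb Z}_p\setminus p{\mathbb Z}_p$, and this is the origin of the factors $(1-p^{-1})$ in the local computations below. A quick finite-level sanity check, approximating ${\mathbb Z}_p$ by ${\mathbb Z}/p^n{\mathbb Z}$:

```python
from fractions import Fraction

def additive_volume_of_units(p, n):
    # Under dx with vol(Z_p) = 1, each residue class mod p^n has volume
    # p^(-n); the units of Z_p reduce to the residues not divisible by p.
    units = sum(1 for a in range(p ** n) if a % p != 0)
    return Fraction(units, p ** n)

# vol(Z_p^x) = 1 - 1/p, independently of the level n:
for p in (2, 3, 5, 7):
    for n in (1, 2, 3):
        assert additive_volume_of_units(p, n) == 1 - Fraction(1, p)
```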
By abuse of notation, we will also denote by $d^\times x$ the corresponding Haar measure of the group of ideles ${\mathbb A}^\times$. For any local character $\chi:{\mathbb Q}_p^\times\rightarrow{\mathbb C}^\times$, write \[ L(s,\chi)=\left\{\begin{array}{lc}(1-\chi(p)p^{-s})^{-1},&\chi\mbox{ unramified}\\ 1,&\mbox{otherwise.}\end{array}\right. \] \section{Local integrals} \subsection{Gauss sums} In this section $\psi:{\mathbb Q}_p\rightarrow{\mathbb C}^\times$ will be a non-trivial additive character such that $\ker(\psi)={\mathbb Z}_p$. \begin{lemma}\label{intpsi} For all $s\in{\mathbb Q}_p^\times$ and $n>0$, we have \[ \int_{s+p^n{\mathbb Z}_p}\psi(ax)dx=p^{-n}\psi(sa)\cdot1_{{\mathbb Z}_p}(p^na). \] In particular, \[ \int_{{\mathbb Z}_p^\times}\psi(ax)dx=\left\{\begin{array}{ll}(1-p^{-1}),&a\in{\mathbb Z}_p\\-p^{-1},&a\in p^{-1}{\mathbb Z}_p^\times\\0,&\mbox{otherwise}\end{array}\right. \] \end{lemma} \begin{proof} We compute \begin{eqnarray*} \int_{s+p^n{\mathbb Z}_p}\psi(xa)d x&=&\int_{p^n{\mathbb Z}_p}\psi((s+x)a)d x=\psi(sa)\int_{{\mathbb Z}_p}|xp^n|\psi(xp^na)d^\times x\\ &=&p^{-n}\psi(sa)\int_{{\mathbb Z}_p}\psi(xp^na)d x=p^{-n}\psi(sa)\cdot1_{{\mathbb Z}_p}(p^na). \end{eqnarray*} To deduce the second part, notice that \[ \int_{{\mathbb Z}_p^\times}\psi(ax)dx=\sum_{s\in({\mathbb Z}/p{\mathbb Z})^\times}\int_{s+p{\mathbb Z}_p}\psi(ax)dx=p^{-1}\sum_{s\in({\mathbb Z}/p{\mathbb Z})^\times}\psi(sa)1_{{\mathbb Z}_p}(pa), \] hence the result follows. \end{proof} \begin{lemma}\label{psichi} Let $\chi:{\mathbb Z}_p^\times\rightarrow{\mathbb C}^\times$ be a character of conductor $n\geq 1$, and let $1+p^n{\mathbb Z}_p\subset U\subseteq {\mathbb Z}_p^\times$ be an open subgroup.
We have \[ \int_{U}\chi(x)\psi(ax)d^\times x=0,\qquad\mbox{unless $|a|=p^n$.} \] \end{lemma} \begin{proof} We compute \begin{eqnarray*} \int_{U}\chi(x)\psi(ax)d^\times x&=&\sum_{s\in U/(1+p^n{\mathbb Z}_p)}\chi(s)\int_{s+p^n{\mathbb Z}_p}\psi(ax)d x\\ &=&p^{-n}1_{{\mathbb Z}_p}(p^na)\sum_{s\in U/(1+p^n{\mathbb Z}_p)}\chi(s)\psi(sa). \end{eqnarray*} Hence the integral $I:=\int_{U}\chi(x)\psi(ax)d^\times x$ must be zero if $a\not\in p^{-n}{\mathbb Z}_p$. Moreover, if $a\in p^{-n+1}{\mathbb Z}_p$, \[ I=\int_{U}\chi(x(1+p^{n-1}))\psi(ax(1+p^{n-1}))d^\times x=\chi(1+p^{n-1})I=0, \] and the result follows. \end{proof} We now define the Gauss sum: \begin{definition}\label{defGS} For any character $\chi$ of conductor $n\geq 0$, \[ \tau(\chi)=\tau(\chi,\psi)=p^n\int_{{\mathbb Z}_p^\times}\chi(x)\psi(-p^{-n}x)dx. \] \end{definition} \section{Classical cyclotomic $p$-adic $L$-function}\label{Classical} \subsection{Classical Modular symbols} Let $f\in S_{k+2}(N,\epsilon)$ be a modular cuspidal newform of weight $k+2$, level $\Gamma_1(N)$, and nebentypus $\epsilon$. By definition, we have \[ (f\mid A)(z)\cdot (A^{-1}P)(1,-z)\cdot dz=\det(A)\cdot f(Az)\cdot P(1,-Az)\cdot d(Az), \quad A\in {\mathrm{GL}}_2({\mathbb R})^+, \] for any $P\in {\mathcal P}(k)_{\mathbb C}$. Hence, if we denote by $\Delta_0$ the group of degree zero divisors of ${\mathbb P}^1({\mathbb Q})$ with the natural action of ${\mathrm{GL}}_2({\mathbb Q})$, we obtain the \emph{Modular Symbol}: \begin{eqnarray*} &&\phi_f^{\pm}\in {\mathrm {Hom}}_{\Gamma_1(N)}(\Delta_0,V(k));\\ &&\phi_f^{\pm}(s-t)(P):=2\pi i\left(\int_t^s f(z)P(1,-z)dz\pm\int_{-t}^{-s} f(z)P(1,z)dz\right). \end{eqnarray*} Notice that $\Gamma_1(N)$-equivariance follows from the relation \begin{equation}\label{eqint} \phi_{f\mid A}^\pm(D)=\det(A)\cdot A^{-1}\left(\phi_f^\pm(AD)\right),\qquad A\in{\mathrm{GL}}_2({\mathbb R})^+, \end{equation} deduced from the above equality and the fact that {\tiny $\left(\begin{array}{cc}1&\\&-1\end{array}\right)$} normalizes $\Gamma_1(N)$.
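Before continuing, we record a numerical sanity check of Definition \ref{defGS}. For a character $\chi$ of conductor $p$ (so $n=1$), Lemma \ref{intpsi} unwinds the definition into the finite sum $\tau(\chi)=\sum_{s\in({\mathbb Z}/p{\mathbb Z})^\times}\chi(s)\psi(-s/p)$, which, for the complex realization $\psi(x)=e^{2\pi i x}$ (an assumption of this sketch), satisfies the classical identity $|\tau(\chi)|^2=p$ for primitive $\chi$. A sketch with the quadratic character mod $p$:

```python
import cmath

def chi_quadratic(s, p):
    # Quadratic residue character mod p, via Euler's criterion.
    return 1 if pow(s % p, (p - 1) // 2, p) == 1 else -1

def gauss_sum(p):
    # tau(chi) = sum over units s mod p of chi(s) * psi(-s/p), with the
    # complex additive character psi(x) = exp(2*pi*i*x).
    return sum(chi_quadratic(s, p) * cmath.exp(-2j * cmath.pi * s / p)
               for s in range(1, p))

# |tau(chi)|^2 = p for the primitive quadratic character mod p:
for p in (3, 7, 11, 19):
    assert abs(abs(gauss_sum(p)) ** 2 - p) < 1e-9
```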
The following result is well known and classical: \begin{proposition}\label{ratMS} There exist periods $\Omega_\pm$ such that \[ \phi_f^\pm=\Omega_{\pm}\cdot\varphi_f^\pm, \] for some $\varphi_f^\pm\in {\mathrm {Hom}}_{\Gamma_1(N)}(\Delta_0,V(k)_{R_f})$, where $R_f$ is the ring of coefficients of $f$. \end{proposition} \subsection{Classical $p$-adic distributions}\label{classicdist} Given $f\in S_{k+2}(N,\epsilon)$, we will assume that $f$ is an eigenvector for the Hecke operator $T_p$ with eigenvalue $a_p$. Let $\alpha$ be a non-zero root of the Hecke polynomial $X^2-a_pX+\epsilon(p)p^{k+1}$. We will construct distributions $\mu_{f,\alpha}^\pm$ on locally polynomial functions of ${\mathbb Z}_p^\times$ of degree less than $k$, attached to $f$ (and $\alpha$ in case $p\nmid N$). Since the open sets $U(a,n)=a+p^n{\mathbb Z}_p$ ($a\in {\mathbb Z}_p^\times$ and $n\in {\mathbb N}$) form a basis of ${\mathbb Z}_p^\times$, it is enough to define the image of $P\left(1,\frac{x-a}{p^n}\right)1_{U(a,n)}(x)$, for any $P\in {\mathcal P}(k)_{\mathbb Z}$: \begin{equation}\label{eqclassmu} \int_{U(a,n)}P\left(1,\frac{x-a}{p^n}\right)d\mu^\pm_{f,\alpha}(x):=\frac{1}{\alpha^n}\varphi^\pm_{f_\alpha}\left(\frac{a}{p^n}-\infty\right)(P), \end{equation} where $f_{\alpha}(z):=f(z)-\beta\cdot f(pz)$ and $\beta=\frac{\epsilon(p)p^{k+1}}{\alpha}$.
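The $p$-adic sizes of $\alpha$ and $\beta$ are read off the Newton polygon of the Hecke polynomial: ${\rm ord}_p\alpha+{\rm ord}_p\beta=k+1$, and when $p\nmid a_p$ one of the roots is a $p$-adic unit. A short computational sketch on a classical example, the discriminant form $\Delta\in S_{12}({\mathrm{SL}}_2({\mathbb Z}))$ (so $k=10$) at $p=11$, where $a_{11}=\tau(11)=534612$:

```python
from fractions import Fraction

def ord_p(n, p):
    # p-adic valuation of a nonzero integer.
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def hecke_slopes(a_p, p, k):
    # Valuations of the two roots of X^2 - a_p X + p^(k+1) (trivial
    # nebentypus at p), read off the Newton polygon: if 2*ord_p(a_p) < k+1
    # the slopes are ord_p(a_p) and k+1-ord_p(a_p); otherwise both roots
    # carry the middle slope (k+1)/2 -- the extremal configuration.
    v = ord_p(a_p, p)
    if 2 * v < k + 1:
        return (v, k + 1 - v)
    return (Fraction(k + 1, 2), Fraction(k + 1, 2))

a_p, p, k = 534612, 11, 10        # a_11 = tau(11) for the discriminant form
assert a_p % p != 0               # Delta is ordinary at 11
assert hecke_slopes(a_p, p, k) == (0, 11)
```

In the extremal case of this paper both slopes equal $\frac{k+1}{2}$, so ${\rm ord}_p\alpha=\frac{k+1}{2}<k+1$.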
Formula \eqref{eqclassmu} defines a distribution because $\mu^\pm_{f,\alpha}$ satisfies \emph{additivity}: since \[ P\left(1,\frac{x-a}{p^n}\right)1_{U(a,n)}(x)= \sum_{b\equiv a\;{\rm mod} \;p^{n}} (\gamma_{a,b}P)\left(1,\frac{x-b}{p^{n+1}}\right)1_{U(b,n+1)}(x),\quad\gamma_{a,b}:=\mbox{\tiny$\left(\begin{array}{cc}1&\frac{b-a}{p^n}\\0&p\end{array}\right)$}, \] and, by \eqref{eqint}, we have that $U_p\varphi_{f_\alpha}^\pm=\alpha\cdot \varphi_{f_\alpha}^\pm$, where \begin{equation}\label{defUp} (U_p\varphi_{f_\alpha}^\pm)(D):=\sum_{c\in {\mathbb Z}/p{\mathbb Z}}\left(\begin{array}{cc}1&c\\&p\end{array}\right)^{-1}\varphi_{f_\alpha}^\pm\left(\left(\begin{array}{cc}1&c\\&p\end{array}\right)D\right), \end{equation} it can be shown that \[ \int_{U(a,n)}P\left(1,\frac{x-a}{p^n}\right)d\mu^\pm_{f,\alpha}(x)= \sum_{b\equiv a\;{\rm mod} \;p^{n}}\int_{U(b,n+1)} (\gamma_{a,b}P)\left(1,\frac{x-b}{p^{n+1}}\right)d\mu^\pm_{f,\alpha}(x). \] The following result shows that, under a certain hypothesis, we can extend $\mu^\pm_{f,\alpha}$ to a locally analytic measure. \begin{theorem}[Vishik, Amice-V\'elu]\label{ThmVAV} Fix an integer $h$ such that $1\leq h\leq k+1$. Suppose that $\alpha$ satisfies ${\rm ord}_p\alpha<h$. Then there exists a locally analytic measure $\mu_{f,\alpha}^\pm$ satisfying: \begin{itemize} \item $\int_{U(a,n)}P\left(1,\frac{x-a}{p^n}\right)d\mu^\pm_{f,\alpha}(x):=\frac{1}{\alpha^n}\varphi^\pm_{f_{\alpha}}\left(\frac{a}{p^n}-\infty\right)(P)$, for any locally polynomial function $P\left(1,\frac{x-a}{p^n}\right)1_{U(a,n)}(x)$ of degree strictly less than $h$. \item For any $m\geq 0$, \[ \int_{U(a,n)}(x-a)^md\mu^\pm_{f,\alpha}(x)\in \left(\frac{p^m}{\alpha}\right)^n\alpha^{-1}{\mathcal O}_{{\mathbb C}_p}. \] \item If $F(x)=\sum_{m\geq 0}c_m(x-a)^m$ is convergent on $U(a,n)$, then \[ \int_{U(a,n)}F(x)d\mu^\pm_{f,\alpha}(x)=\sum_{m\geq 0}c_m\int_{U(a,n)}(x-a)^md\mu^\pm_{f,\alpha}(x).
\] \end{itemize} \end{theorem} If we assume that there exists such a root $\alpha$ with ${\rm ord}_p\alpha<k+1$, then we define $\mu_{f,\alpha}:=\mu_{f,\alpha}^++\mu_{f,\alpha}^-$ and the (\emph{cyclotomic}) \emph{$p$-adic $L$-function}: \[ L_p(f,\alpha,s):=\int_{{\mathbb Z}_p^\times}{\rm exp}(s\cdot{\rm log(x)})d\mu_{f,\alpha}(x). \] \begin{remark} Write $V_f$ for the $\bar{\mathbb Q}[{\mathrm{GL}}_2({\mathbb Q})]$-representation generated by $f$. For any $g\in V_f$, write \begin{equation}\label{defMS} \varphi_{g}^\pm(s-t)(P):=\frac{2\pi i}{\Omega_\pm}\left(\int_t^s g(z)P(1,-z)dz\pm\int_{-t}^{-s} g(z)P(1,z)dz\right). \end{equation} Relation \eqref{eqint} implies that the morphism \begin{equation}\label{GL_2-equiv} \varphi^\pm:V_f\longrightarrow {\mathrm {Hom}}\left(\Delta_0,V(k)_{\bar{\mathbb Q}}\right)[\det],\qquad g\mapsto\varphi_g^\pm, \end{equation} is ${\mathrm{GL}}_2({\mathbb Q})$-equivariant. \end{remark} \section{$p$-adic $L$-functions}\label{padicLfunct} In this section we provide a reinterpretation of the distributions $\mu^\pm_{f,\alpha}$. Let $f\in S_{k+2}(\Gamma_1(N),\epsilon)$ be a cuspidal newform as above and let $p$ be any prime. Fix the embedding \begin{equation}\label{actQ_p} {\mathbb Z}_p^\times\hookrightarrow{\mathbb Q}_p^\times\hookrightarrow{\mathrm{GL}}_2({\mathbb Q}_p);\qquad x\longmapsto\left(\begin{array}{cc}x&\\&1\end{array}\right). \end{equation} \begin{assumption}\label{mainassumption} Assume that there exists a ${\mathbb Z}_p^\times$-equivariant morphism \[ \delta:C({\mathbb Z}_p^\times,L)\longrightarrow V, \] where $L$ is a certain finite extension of the coefficient field ${\mathbb Q}(\{a_n\}_n)$, and $V$ is a certain model over $L$ of the local automorphic representation $\pi_p$ generated by $f$.
Assume also that, for $n$ big enough, \begin{equation}\label{keyprop} \left(\begin{array}{cc}1&s\\&p^n\end{array}\right)\delta(1_{U(s,n)})=\frac{1}{\gamma^n}\sum_{i=0}^mc_i(s,n)V_i, \end{equation} where $m$ is fixed, the $V_i\in V$ depend on neither $s$ nor $n$, and $c_i(s,n)\in {\mathcal O}_L$. \end{assumption} \subsection{$p$-adic distributions} Let us consider the subgroup \[ \hat K_1(N)=\left\{g\in {\mathrm{GL}}_2(\hat {\mathbb Z}):\;g\equiv(\begin{smallmatrix}\ast&\ast\\0&1\end{smallmatrix})\;{\rm mod}\;N\right\}. \] Again by strong approximation we have that ${\mathrm{GL}}_2({\mathbb A}_f)={\mathrm{GL}}_2({\mathbb Q})^+\hat K_1(N)$. Thus any $g\in{\mathrm{GL}}_2({\mathbb A}_f)$ can be written as $g=h_gk_g$, where $h_g\in {\mathrm{GL}}_2({\mathbb Q})^+$ and $k_g\in \hat K_1(N)$ are well defined up to multiplication by $\Gamma_1(N)={\mathrm{GL}}_2({\mathbb Q})^+\cap\hat K_1(N)$. Write $K:=\hat K_1(N)\cap{\mathrm{GL}}_2({\mathbb Z}_p)$. By strong multiplicity one, $\pi_p^K$ is one-dimensional. Therefore $V^K=Lw_0$ and $V=L[{\mathrm{GL}}_2({\mathbb Q}_p)]w_0$. Notice that we have a natural morphism \[ \varphi_{f,p}^\pm:V\longrightarrow{\mathrm {Hom}}(\Delta_0,V(k)_L);\qquad \varphi_{f,p}^\pm(gw_0)=\det(h_g)\cdot \varphi_{f\mid h_g^{-1}}^\pm. \] \begin{remark}\label{rmkacthom} If $g\in{\mathrm{GL}}_2({\mathbb Q}_p)$ then $h_g\in \hat K_1(N)^p:=\hat K_1(N)\cap\prod_{\ell\neq p}{\mathrm{GL}}_2({\mathbb Q}_\ell)$. This implies that, for any $h\in {\mathrm{GL}}_2({\mathbb Q})^+\cap\hat K_1(N)^p$, we have $h_{hg}=h\cdot h_g$ for all $g\in{\mathrm{GL}}_2({\mathbb Q}_p)$. By \eqref{eqint}, this implies that $\varphi_{f,p}^\pm(hv)=h\ast\varphi_{f,p}^\pm(v)$, for all $v\in V\subset\pi_p$, where the action of $h\in {\mathrm{GL}}_2({\mathbb Q})^+\cap\hat K_1(N)^p$ is given by \[ (h\ast\varphi)(D):=h(\varphi(h^{-1}D)),\qquad\varphi\in {\mathrm {Hom}}(\Delta_0,V(k)_L).
\] \end{remark} \begin{remark}\label{rmkchar} By definition, for any $\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)\in\Gamma_0(N)$, we have \[ f\left(\frac{az+b}{cz+d}\right)=\epsilon(d)\cdot(cz+d)^{k+2}f(z),\qquad f\mid\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)=\epsilon(d)\cdot f. \] For any $z\in {\mathbb Q}_p^\times$ such that $z=p^nu$ where $u\in{\mathbb Z}_p^\times$, we can choose $d\in{\mathbb Z}$ such that $d\equiv u^{-1}\;{\rm mod}\;N{\mathbb Z}_p$ and $d\equiv p^{n}\;{\rm mod}\;N{\mathbb Z}_\ell$, for $\ell\neq p$. Let us choose $A=(\begin{smallmatrix}a&b\\c&d\end{smallmatrix})\in\Gamma_0(N)$; then we have \[ (z,1)=p^nA^{-1}(uA,p^{-n}A)\in{\mathrm{GL}}_2({\mathbb A}_f),\qquad (uA,p^{-n}A)\in\hat K_1(N). \] This implies that, if $\varepsilon_p$ is the central character of $\pi_p$, \[ \varepsilon_p(z)\varphi_{f,p}^\pm(w_0)=\varphi_{f,p}^\pm(zw_0)=\det(p^nA^{-1})\cdot\varphi_{f\mid p^{-n}A}^\pm=p^{-nk}\epsilon(d)\cdot\varphi_{f}^\pm. \] Hence $\varepsilon_p=\epsilon_p^{-1}|\cdot|^k$, where $\epsilon_p=\epsilon\mid_{{\mathbb Z}_p^\times}$. \end{remark} Again let $C_k({\mathbb Z}_p^\times,{\mathbb C}_p)$ be the space of locally polynomial functions of ${\mathbb Z}_p^\times$ of degree less than $k$. Recall the ${\mathbb Z}_p^\times$-equivariant isomorphism \begin{equation}\label{imathpol} \imath:C({\mathbb Z}_p^\times,{\mathbb Z})\otimes_{\mathbb Z} {\mathcal P}(k)_{{\mathbb C}_p}(-k)\longrightarrow C_k({\mathbb Z}_p^\times,{\mathbb C}_p) ;\qquad h\otimes P\longmapsto P(1,x)\cdot h(x). \end{equation} Fixing $L\hookrightarrow{\mathbb C}_p$, we define the distributions $\mu^\pm_{f,\delta}$ attached to $f$ and $\delta$: \begin{equation}\label{defmugen} \int_{{\mathbb Z}_p^\times}\imath(h\otimes P)(x)d\mu^\pm_{f,\delta}(x):=\varphi_{f,p}^\pm(\delta(h))(0-\infty)(P).
\end{equation} \subsection{Admissible Distributions} We have just constructed a distribution \[ \mu_{f,\delta}^\pm:C_k({\mathbb Z}_p^\times,{\mathbb C}_p)\longrightarrow {\mathbb C}_p. \] This section is devoted to extending this distribution to a locally analytic measure $\mu_{f,\delta}^\pm\in{\mathrm {Hom}}\left(C_{\rm loc-an}({\mathbb Z}_p^\times,{\mathbb C}_p),{\mathbb C}_p\right)$. \begin{definition} Write $v_p:{\mathbb C}_p\rightarrow{\mathbb Q}\cup\{+\infty\}$ for the usual normalized $p$-adic valuation. For any $h\in {\mathbb R}^+$, a distribution $\mu\in {\mathrm {Hom}}(C_k({\mathbb Z}_p^\times,{\mathbb C}_p),{\mathbb C}_p)$ is \emph{$h$-admissible} if \[ v_p\left(\int_{U(a,n)}g d\mu\right)\geq v_p(A)-n\cdot h, \] for some fixed $A\in {\mathbb C}_p$, and any $g\in C_k({\mathbb Z}_p^\times,{\mathcal O}_{{\mathbb C}_p})$ which is polynomial on a small enough $U(a,n)\subseteq{\mathbb Z}_p^\times$. We will denote the previous relation by \[ \int_{U(a,n)}g d\mu\in A\cdot p^{-n h}{\mathcal O}_{{\mathbb C}_p}. \] \end{definition} \begin{proposition}\label{propext} If $h<k+1$, an $h$-admissible distribution $\mu$ can be extended to a locally analytic measure such that \[ \int_{U(a,n)}g d\mu\in A\cdot p^{-n h}{\mathcal O}_{{\mathbb C}_p}, \] for any $g\in C({\mathbb Z}_p^\times,{\mathcal O}_{{\mathbb C}_p})$ which is analytic in $U(a,n)$. \end{proposition} \begin{proof} Notice that any locally analytic function is topologically generated by functions of the form $P_m^{a,N}(x):=\left(\frac{x-a}{p^N}\right)^m1_{U(a,N)}(x)$, where $m\in{\mathbb N}$. The values $\mu(P^{a,N}_m)$ have already been defined when $m\leq k$.
If $m>h$, we define $\mu(P_m^{a,N})=\lim_{n\rightarrow\infty}a_n$, where \[ a_n=\sum_{b\;{\rm mod}\; p^{n};\;b\equiv a\;{\rm mod}\; p^N}\sum_{j\leq h}\left(\frac{b-a}{p^N}\right)^{m-j}\binom{m}{j}p^{j(n-N)}\mu(P_j^{b,n}) \] and the definition agrees with $\mu$ when $h<m\leq k$ because $p^{j(n-N)}\mu(P_j^{b,n})\stackrel{n}{\rightarrow}0$ when $j>h$, hence \[ \lim_{n\rightarrow\infty}a_n=\sum_{b\;{\rm mod}\; p^{n};\;b\equiv a\;{\rm mod}\; p^N}\sum_{j=0}^m\left(\frac{b-a}{p^N}\right)^{m-j}\binom{m}{j}p^{j(n-N)}\mu(P_j^{b,n})=\mu(P_m^{a,N}). \] The limit converges because $\{a_n\}_n$ is Cauchy; indeed, by additivity, \[ a_{n_2}-a_{n_1}=\sum_{j\leq h}\sum_{b\equiv a\;(p^{n_2})}\sum_{b'\equiv b\;(p^{n_1})}\sum_{k=h+1}^{m}r(k)\binom{k}{j}\left(\frac{b'-b}{p^N}\right)^{k-j}p^{(n_2-N)j}\mu(P_{j}^{b',n_2}), \] where $r(k)=\binom{m}{k}\left(\frac{b'-a}{p^N}\right)^{m-k}$. Since \[ \left(\frac{b'-b}{p^N}\right)^{k-j}p^{(n_2-N)j}\mu(P_{j}^{b',n_2})\in A\cdot p^{-Nk}p^{(n_1-n_2)(k-j)}p^{(k-h)n_2}{\mathcal O}_{{\mathbb C}_p}, \] we have that $a_{n_2}-a_{n_1}\rightarrow 0$ as $n_1\rightarrow\infty$. It is clear by the definition that $\mu(P_m^{a,N})\in A\cdot p^{-N h}{\mathcal O}_{{\mathbb C}_p}$ for all $m, a$ and $N$. Moreover, it extends to a locally analytic measure by continuity which is determined by the image of locally polynomial functions of degree at most $h$.
\end{proof} Notice that, for all $m\leq k$, \[ P_m^{a,n}(x)=\left(\frac{x-a}{p^n}\right)^m1_{U(a,n)}(x)=\imath\left(1_{U(a,n)}\otimes\left(\frac{Y-aX}{p^n}\right)^mX^{k-m}\right). \] Using property \eqref{keyprop} and Remarks \ref{rmkacthom} and \ref{rmkchar}, we compute that \begin{eqnarray*} \int_{{\mathbb Z}_p^\times}P_m^{a,n} d\mu^\pm_{f,\delta}&=&\varphi_{f,p}^\pm(\delta(1_{U(a,n)}))(0-\infty)\left(\left(\frac{Y-aX}{p^n}\right)^mX^{k-m}\right)\\ &=&\sum_{i=0}^m\frac{c_i(a,n)}{\gamma^n}\cdot\varphi_{f,p}^\pm\left(p^{-n}\left(\begin{smallmatrix}p^n&-a\\&1\end{smallmatrix}\right)V_i\right)(0-\infty)\left(\left(\frac{Y-aX}{p^n}\right)^mX^{k-m}\right)\\ &=&\sum_{i=0}^m\frac{c_i(a,n)}{\varepsilon_p(p)^{n}\gamma^n}\cdot\varphi^\pm_{f,p}(V_i)\left(\frac{a}{p^n}-\infty\right)\left((p^{-n}Y)^m(p^{-n}X)^{k-m}\right)\\ &=&\sum_{i=0}^m\frac{c_i(a,n)}{\gamma^n}\cdot\varphi^\pm_{f,p}(V_i)\left(\frac{a}{p^n}-\infty\right)\left(Y^mX^{k-m}\right). \end{eqnarray*} Notice that $\varphi^\pm_{f,p}(V_i)\in {\mathrm {Hom}}(\Delta_0,V(k)_L)^{\Gamma_1(Np^r)}_\epsilon:={\mathrm {Hom}}_{\Gamma_1(Np^r)}(\Delta_0,V(k)_L)_\epsilon$ for some big enough $r\in{\mathbb N}$, where the subscript $\epsilon$ indicates that the action of $\Gamma_0(Np^r)/\Gamma_1(Np^r)$ is given by the character $\epsilon$. By Manin's trick we have that \[ {\mathrm {Hom}}_{\Gamma_1(Np^r)}(\Delta_0,V(k)_L)_\epsilon\simeq{\mathrm {Hom}}_{\Gamma_1(Np^r)}(\Delta_0,V(k)_{{\mathcal O}_L})_\epsilon\otimes_{{\mathcal O}_L} L. \] Since $Y^mX^{k-m}\in {\mathcal P}(k)_{{\mathcal O}_L}$, $c_i(a,n)\in{\mathcal O}_L$ and the functions $P_m^{a,n}$ generate $C_k({\mathbb Z}_p^\times,{\mathcal O}_{{\mathbb C}_p})$, we obtain that \begin{equation}\label{calcint} \int_{U(a,n)}gd\mu^\pm_{f,\delta}\in\frac{A}{\gamma^n}{\mathcal O}_{{\mathbb C}_p},\qquad\mbox{for all }g\in C_k({\mathbb Z}_p^\times,{\mathcal O}_{{\mathbb C}_p}), \end{equation} and some fixed $A\in L$. We deduce the following result.
\begin{theorem}\label{thmadm} Fix an embedding $L\hookrightarrow{\mathbb C}_p$. We have that $\mu^\pm_{f,\delta}$ is $v_p(\gamma)$-admissible. \end{theorem} \begin{definition} If we assume that $v_p(\gamma)<k+1$, we define the cyclotomic $p$-adic measure attached to $f$ and $\delta$ \[ \mu_{f,\delta}:=\mu_{f,\delta}^++\mu_{f,\delta}^-. \] \end{definition} \subsection{Interpolation properties} Given the modular form $f\in S_{k+2}(\Gamma_1(N))$, let us consider the automorphic form $\phi:{\mathrm{GL}}_2({\mathbb Q})\backslash{\mathrm{GL}}_2({\mathbb A})\rightarrow{\mathbb C}$, characterized by its restriction to ${\mathrm{GL}}_2({\mathbb R})^+\times{\mathrm{GL}}_2({\mathbb A}_f)$: \[ \phi(g_\infty,g_f)=\frac{\det\left(\gamma\right)}{\det(g_\infty)}\cdot f\mid\gamma^{-1}g_\infty \left( i\right), \quad g_f=\gamma k\in {\mathrm{GL}}_2({\mathbb Q})^+\hat K_1(N), \quad g_\infty=\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right). \] Given $g\in{\mathrm{GL}}_2({\mathbb Q}_p)$, we compute $\varphi^\pm_{f,p}(gw_0)(0-\infty)(Y^mX^{k-m})=$ \begin{eqnarray*} &=&\det(h_g)\cdot\varphi^\pm_{f\mid h_g^{-1}}(0-\infty)(Y^mX^{k-m})\\ &=&\frac{-2\pi\det(h_g)}{\Omega_f^\pm}\cdot\left(\int_\infty^0 f\mid h_g^{-1}(ix)(-ix)^mdx\pm\int_\infty^0 f\mid h_g^{-1}(ix)(ix)^mdx\right)\\ &=&\frac{2\pi }{\Omega_f^\pm}\cdot\int_{{\mathbb R}^+}x^{m-k}\cdot\phi\left(\left(\begin{smallmatrix}x&\\&1\end{smallmatrix}\right),g\right)d^{\times}x\cdot((-i)^m\pm i^m). \end{eqnarray*} This implies that, if we consider the automorphic representation $\pi$ generated by $\phi$, and the ${\mathrm{GL}}_2({\mathbb Q}_p)$-equivariant morphism \[ \phi_f:\pi_p\longrightarrow\pi:\qquad gw_0\longmapsto g\phi, \] we have that \[ \varphi^\pm_{f,p}(\delta(h))(0-\infty)(Y^mX^{k-m})=\frac{4\pi(-i)^m }{\Omega_f^\pm}\cdot\int_{{\mathbb R}^+}x^{m-k}\cdot\phi_f\left(\delta(h)\right)\left(\left(\begin{smallmatrix}x&\\&1\end{smallmatrix}\right),1\right)d^{\times}x\cdot\left(\frac{1\pm(-1)^m}{2}\right). 
\] Let $H$ be the maximal subgroup of ${\mathbb Z}_p^\times$ such that $h\mid_{sH}$ is constant, for all $sH\in{\mathbb Z}_p^\times/H$. Notice that $h=\sum_{s\in{\mathbb Z}_p^\times/H}h(s)1_{sH}$. Moreover, for all $v\in\pi_p$, the automorphic form $\phi_f(v)$ is $U^p:=\prod_{\ell\neq p}{\mathbb Z}_\ell^\times$-invariant when embedded in ${\mathrm{GL}}_2({\mathbb A}_f)$ by means of \eqref{actQ_p}. Hence, if we consider $\varphi_{f,p}:=\varphi_{f,p}^++\varphi_{f,p}^-$, we have $\varphi_{f,p}(\delta(h))(0-\infty)(Y^mX^{k-m})=$ \begin{eqnarray*} &=&\sum_{sH\in{\mathbb Z}_p^\times/H}\frac{4\pi h(s)}{i^m\Omega_f^\pm}\cdot\int_{{\mathbb R}^+}\int_{U^p}x^{m-k}\phi_f\left(\delta(1_{sH})\right)\left(\left(\begin{smallmatrix}x&\\&1\end{smallmatrix}\right),1,\left(\begin{smallmatrix}t&\\&1\end{smallmatrix}\right)\right)d^{\times}xd^\times t\\ &=&\sum_{sH\in{\mathbb Z}_p^\times/H}\frac{4\pi h(s)}{i^m\Omega_f^\pm}\cdot\int_{{\mathbb R}^+}\int_{U^p}x^{m-k}\phi_f\left(\delta(1_{H})\right)\left(\left(\begin{smallmatrix}x&\\&1\end{smallmatrix}\right),\left(\begin{smallmatrix}s&\\&1\end{smallmatrix}\right),\left(\begin{smallmatrix}t&\\&1\end{smallmatrix}\right)\right)d^{\times}xd^\times t\\ &=&\frac{4\pi}{\Omega_f^\pm{\rm vol}(H)}\cdot\int_{{\mathbb A}^\times/{\mathbb Q}^\times}\tilde h(y)\cdot\phi_f\left(\delta(1_{H})\right)\left(\begin{smallmatrix}y&\\&1\end{smallmatrix}\right)d^{\times}y, \end{eqnarray*} where $\tilde h(y)=(-i)^m\cdot h(y_p|y|y_\infty^{-1})\cdot|y|^{m-k}$, for all $y=(y_v)_v\in{\mathbb A}^\times$, and $\Omega_f^\pm$ is $\Omega_f^+$ or $\Omega_f^-$ depending on whether $m$ is even or odd. Let $\chi\in C_k({\mathbb Z}_p^\times,{\mathbb C}_p)$ be a locally polynomial character. This implies that $\chi(x)=\chi_0(x)x^m$, for some natural number $m\leq k$ and some locally constant character $\chi_0$; hence $\chi=\imath(\chi_0\otimes Y^{m}X^{k-m})$.
We deduce that \[ \int_{{\mathbb Z}_p^\times}\chi(x)d\mu_{f,\delta}(x):=\frac{4\pi }{\Omega_f^\pm i^m{\rm vol}(H)}\cdot\int_{{\mathbb A}^\times/{\mathbb Q}^\times}\tilde\chi_0(y)|y|^{m-k}\phi_f\left(\delta(1_H)\right)\left(\begin{smallmatrix}y&\\&1\end{smallmatrix}\right)d^{\times}y, \] where $\tilde\chi_0(y):=\chi_0(y_p|y|y_\infty^{-1})$. Let $\psi:{\mathbb A}/{\mathbb Q}\rightarrow{\mathbb C}^\times$ be a global additive character, and define the Whittaker model element \[ W_\delta^H:{\mathrm{GL}}_2({\mathbb A})\longrightarrow{\mathbb C};\qquad W_\delta^H(g):=\int_{{\mathbb A}/{\mathbb Q}}\phi_f(\delta(1_H))\left(\left(\begin{array}{cc}1&x\\&1\end{array}\right)g\right)\psi(-x)dx. \] This element admits an expression $W_\delta^H(g)=\prod_vW_{\delta,v}^H(g_v)$, if $g=(g_v)\in{\mathrm{GL}}_2({\mathbb A})$. Moreover, by \cite[Theorem 3.5.5]{Bump}, it provides the \emph{Fourier expansion} \[ \phi_f(\delta(1_H))(g)=\sum_{a\in{\mathbb Q}^\times}W_\delta^H\left(\left(\begin{array}{cc}a&\\&1\end{array}\right)g\right). \] We compute \begin{eqnarray*} \int_{{\mathbb A}^\times/{\mathbb Q}^\times}\tilde\chi_0(y)|y|^{m-k}\phi_f\left(\delta(1_H)\right)\left(\begin{smallmatrix}y&\\&1\end{smallmatrix}\right)d^{\times}y&=&\int_{{\mathbb A}^\times}\tilde\chi_0(y)|y|^{m-k}W_\delta^H\left(\begin{smallmatrix}y&\\&1\end{smallmatrix}\right)d^{\times}y\\ &=&\prod_v \int_{{\mathbb Q}_v^\times}\tilde\chi_0(y_v)|y_v|^{m-k}W_{\delta,v}^H\left(\begin{smallmatrix}y_v&\\&1\end{smallmatrix}\right)d^{\times}y_v. \end{eqnarray*} By definition of $\delta$, when $v\neq p$ the element $W_{\delta,v}^H$ corresponds to the new vector, thus by \cite[Proposition 3.5.3]{Bump} \[ \int_{{\mathbb Q}_v^\times}\tilde\chi_0(y_v)|y_v|^{m-k}W_{\delta,v}^H\left(\begin{smallmatrix}y_v&\\&1\end{smallmatrix}\right)d^{\times}y_v=L_v\left(m-k+\frac{1}{2},\pi_v,\tilde\chi_0\right),\qquad v\neq p.
\] Using the results explained in \cite[\S 3.5]{Bump}, we conclude that \[ \int_{{\mathbb Z}_p^\times}\chi(x)d\mu_{f,\delta}(x)=\frac{4\pi }{\Omega_f^\pm i^m}\cdot e_\delta(\pi_p,\chi_0)\cdot L\left(m-k+\frac{1}{2},\pi,\tilde\chi_0\right), \] where the \emph{Euler factor} \[ e_\delta(\pi_p,\chi_0)=\frac{L_p\left(m-k+\frac{1}{2},\pi_p,\tilde\chi_0\right)^{-1}}{{\rm vol}(H)}\int_{{\mathbb Q}_p^\times}\tilde\chi_0(y_p)|y_p|^{m-k}W_{\delta,p}^H\left(\begin{smallmatrix}y_p&\\&1\end{smallmatrix}\right)d^{\times}y_p. \] \subsection{The morphisms $\delta$} In this section we will construct morphisms $\delta$ satisfying Assumption \ref{mainassumption}. The only case left open is that in which $\pi_p$ is \emph{supercuspidal}; in this situation we will not be able to construct admissible $p$-adic distributions. Let $\pi_p$ be the local representation. Let $W:\pi_p\rightarrow{\mathbb C}$ be the Whittaker functional, and let us consider the Kirillov model $\mathcal{K}$ given by the embedding \[ \lambda:\pi_p\hookrightarrow \mathcal{K};\qquad \lambda(v)(y)=W\left(\left(\begin{array}{cc}y&\\&1\end{array}\right)v\right). \] Recall that the Kirillov model lies in the space of locally constant functions $\phi:{\mathbb Q}_p^\times\rightarrow{\mathbb C}$ endowed with the action \begin{equation}\label{eqKir} \left(\begin{array}{cc}1&x\\&1\end{array}\right)\phi(y)=\psi(xy)\phi(y),\qquad \left(\begin{array}{cc}a&\\&1\end{array}\right)\phi(y)=\phi(ay). \end{equation} We construct the ${\mathbb Z}_p^\times$-equivariant morphism \begin{equation}\label{defdelta} \delta:C({\mathbb Z}_p^\times,{\mathbb C})\longrightarrow \mathcal{K};\qquad \delta(h)(y)=\int_{{\mathbb Z}_p^\times}\Psi(zy)h(z)\psi(-zy)d^\times z, \end{equation} for a well-chosen locally constant function $\Psi$.
Notice that, if $h=1_{H}$ for $H$ small enough, \[ \delta(h)(y)=\Psi(y)\int_{H}\psi(-zy)d^\times z={\rm vol}(H)\Psi(y),\qquad \mbox{if $|y|\ll 1$.} \] This implies that, in order to choose $\Psi$, we need to control what $\mathcal{K}$ looks like: \begin{itemize} \item By \cite[Theorem 4.7.2]{Bump}, if $\pi_p=\pi(\chi_1,\chi_2)$ is a principal series then $\mathcal{K}$ consists of functions $\phi$ such that $\phi(y)=0$ for $|y|\gg 1$, and \[ \phi(y)=\left\{\begin{array}{ll}C_1|y|^{1/2}\chi_1(y)+C_2|y|^{1/2}\chi_2(y),&\chi_1\neq\chi_2,\\ C_1|y|^{1/2}\chi_1(y)+C_2v_p(y)|y|^{1/2}\chi_1(y),&\chi_1=\chi_2,\end{array}\right.\qquad |y|\ll 1, \] for some constants $C_1$ and $C_2$. \item By \cite[Theorem 4.7.3]{Bump}, if $\pi_p=\sigma(\chi_1,\chi_2)$ is a special representation such that $\chi_1\chi_2^{-1}=|\cdot|^{-1}$ then $\mathcal{K}$ consists of functions $\phi$ such that $\phi(y)=0$ for $|y|\gg 1$, and \[ \phi(y)=C|y|^{1/2}\chi_2(y),\qquad |y|\ll 1, \] for some constant $C$. \item By \cite[Theorem 4.7.1]{Bump}, if $\pi_p$ is supercuspidal then $\mathcal{K}=C_c({\mathbb Q}_p^\times,{\mathbb C})$. \end{itemize} By Lemma \ref{intpsi} and Lemma \ref{psichi} we have that $\delta(h)(y)=0$ for $y$ of large absolute value. This implies that \begin{itemize} \item In case $\pi_p=\pi(\chi_1,\chi_2)$ with $\chi_1\neq\chi_2$, we can choose \[ \Psi=|\cdot|^{1/2}\chi_1\qquad\mbox{or}\qquad\Psi=|\cdot|^{1/2}\chi_2. \] \item In case $\pi_p=\pi(\chi_1,\chi_2)$ with $\chi_1=\chi_2$, we can choose \[ \Psi=|\cdot|^{1/2}\chi_1\qquad\mbox{or}\qquad\Psi=v_p\cdot|\cdot|^{1/2}\chi_1. \] \item In case $\pi_p=\sigma(\chi_1,\chi_2)$ we have \[ \Psi=|\cdot|^{1/2}\chi_2. \] \item In case $\pi_p$ is supercuspidal, it is not possible to choose any $\Psi$.
\end{itemize} We now check that $\delta$ satisfies property \eqref{keyprop}: if $\Psi$ is invariant under the action of $1+p^n{\mathbb Z}_p$, \begin{eqnarray*} \left(\begin{smallmatrix}1&a\\&p^n\end{smallmatrix}\right)\delta(1_{U(a,n)})(y)&=&\left(\begin{smallmatrix}p^{n}&\\&p^n\end{smallmatrix}\right)\left(\begin{smallmatrix}p^{-n}&\\&1\end{smallmatrix}\right)\left(\begin{smallmatrix}1&a\\&1\end{smallmatrix}\right)\delta(1_{U(a,n)})(y)\\ &=&\varepsilon_p(p^{n})\cdot \psi(ap^{-n}y)\cdot\delta(1_{U(a,n)})(p^{-n}y)\\ &=&\varepsilon_p(p)^{n}\cdot \int_{U(a,n)}\Psi(p^{-n}yz)\psi(p^{-n}y(a-z))d^\times z\\ &=&\frac{\varepsilon_p(p)^{n}\cdot\Psi(p^{-n}ya)\cdot |p|^n}{1-p^{-1}}\cdot\int_{{\mathbb Z}_p}\psi(yz)d z\\ &=&\frac{\varepsilon_p(p)^{n}\cdot |p|^n}{1-p^{-1}}\cdot\Psi(p^{-n}ya)\cdot 1_{{\mathbb Z}_p}(y), \end{eqnarray*} since $d^\times x=(1-p^{-1})^{-1}|x|^{-1}dx$. \begin{itemize} \item If $\Psi$ is a character, we deduce property \eqref{keyprop} with $m=0$, $\gamma=\Psi(p)p\varepsilon_p(p)^{-1}$, $c_0(a,n)=\Psi(a)$ and $V_0=(1-p^{-1})^{-1}\Psi(y) 1_{{\mathbb Z}_p}(y)$. \item If $\Psi=v_p\cdot\chi$, with $\chi$ a character, it also satisfies property \eqref{keyprop} with $m=1$, $\gamma=\chi(p)p\varepsilon_p(p)^{-1}$, $c_0(a,n)=-n\chi(a)$, $c_1(a,n)=\chi(a)$, $V_0=(1-p^{-1})^{-1}\chi(y) 1_{{\mathbb Z}_p}(y)$ and $V_1=(1-p^{-1})^{-1}v_p(y)\chi(y) 1_{{\mathbb Z}_p}(y)$.
\end{itemize} \subsection{Computation of Euler factors} The following result describes the Euler factors in each situation: \begin{proposition}\label{EulerFactors} We have the following cases: \begin{itemize} \item[$(i)$] If $\Psi=|\cdot|^{1/2}\chi_i$ we have that \[ e_\delta(\pi_p,\chi_0)=\left\{\begin{array}{lc} \frac{(1-p^{-1})^{-1}p^{r(m-k-\frac{1}{2})}\chi_i(p)^{-r}\tau(\chi_0\chi_i,\psi)}{L(m-k+1/2,\tilde\chi_0\chi_j)L(k-m+1/2,\tilde\chi_0\chi_i^{-1})},&\pi_p=\pi(\chi_i,\chi_j);\\ \frac{(1-p^{-1})^{-1}p^{r(m-k-\frac{1}{2})}\chi_i(p)^{-r}\tau(\chi_0\chi_i,\psi)}{L(k-m+1/2,\tilde\chi_0\chi_i^{-1})},&\pi_p=\sigma(\chi_i,\chi_j), \end{array} \right. \] where $r$ is the conductor of $\chi_i\chi_0$. \item[$(ii)$] If $\Psi=v_p\cdot|\cdot|^{1/2}\chi_i$ we have that \[ e_\delta(\pi_p,\chi_0)=\left\{\begin{array}{ll} \frac{p^{k-m-\frac{1}{2}}\chi_i(p)+p^{m-k-\frac{1}{2}}\chi_i(p)^{-1}-2p^{-1}}{1-p^{-1}};&\chi_0\chi_i\mid_{{\mathbb Z}_p^\times}=1;\\ \frac{-rp^{r(m-k-\frac{1}{2})}\chi_i(p)^{-r}\tau(\chi_0\chi_i,\psi)}{1-p^{-1}};&{\rm cond}(\chi_0\chi_i)=r>0. \end{array}\right. \] \end{itemize} \end{proposition} \begin{proof} In order to compute the Euler factors $e_\delta(\pi_p,\chi_0)$, we have to compute the local periods \[ I_\delta:=\frac{1}{{\rm vol}(H)}\int_{{\mathbb Q}_p^\times}\tilde\chi_0(y)|y|^{m-k}W_{\delta,p}^H\left(\begin{smallmatrix}y&\\&1\end{smallmatrix}\right)d^\times y=\frac{1}{{\rm vol}(H)}\int_{{\mathbb Q}_p^\times}\tilde\chi_0(y)|y|^{m-k}\delta(1_H)(y)d^\times y. \] Recalling that $\tilde\chi_0$ is $H$-invariant, we obtain \[ I_\delta=\frac{1}{{\rm vol}(H)}\int_{{\mathbb Q}_p^\times}\tilde\chi_0(y)|y|^{m-k}\int_{H}\Psi(zy)\psi(-zy)d^\times zd^\times y=\int_{{\mathbb Q}_p^\times}\tilde\chi_0(x)|x|^{m-k}\Psi(x)\psi(-x)d^\times x.
\] In case $(i)$ we have that $\Psi=|\cdot|^{1/2}\chi_i$, hence by Lemma \ref{intpsi} and Lemma \ref{psichi} \begin{eqnarray*} I_\delta&=&\sum_np^{n(k-m-\frac{1}{2})}\chi_i(p)^n\int_{{\mathbb Z}_p^\times}\chi_0(x)\chi_i(x)\psi(-p^nx)d^\times x\\ &=&\left\{\begin{array}{ll} \sum_{n\geq0}p^{n(k-m-\frac{1}{2})}\chi_i(p)^n-(1-p^{-1})^{-1}p^{m-k-\frac{1}{2}}\chi_i(p)^{-1};&\chi_0\chi_i\mid_{{\mathbb Z}_p^\times}=1;\\ (1-p^{-1})^{-1}p^{r(m-k-\frac{1}{2})}\chi_i(p)^{-r}\tau(\chi_0\chi_i,\psi);&{\rm cond}(\chi_0\chi_i)=r>0 \end{array}\right.\\ &=&\left\{\begin{array}{ll} (1-p^{-1})^{-1}(1-p^{m-k-\frac{1}{2}}\chi_i(p)^{-1})(1-p^{k-m-\frac{1}{2}}\chi_i(p))^{-1};&\chi_0\chi_i\mid_{{\mathbb Z}_p^\times}=1;\\ (1-p^{-1})^{-1}p^{r(m-k-\frac{1}{2})}\chi_i(p)^{-r}\tau(\chi_0\chi_i,\psi);&{\rm cond}(\chi_0\chi_i)=r>0 \end{array}\right. \end{eqnarray*} Since $e_\delta(\pi_p,\chi_0)=L_p(m-k+1/2,\pi_p,\tilde\chi_0)^{-1}\cdot I_\delta$ and \begin{eqnarray*} L_p(s,\pi_p,\tilde\chi_0)&=&\left\{\begin{array}{lc}L(s,\tilde\chi_0\chi_i)\cdot L(s,\tilde\chi_0\chi_j),&\pi_p=\pi(\chi_i,\chi_j),\\ L(s,\tilde\chi_0\chi_i),&\pi_p=\sigma(\chi_i,\chi_j),\end{array}\right. \end{eqnarray*} part $(i)$ follows. 
In case $(ii)$ we have that $\Psi=v_p\cdot|\cdot|^{1/2}\chi_i$, hence we compute \begin{eqnarray*} I_\delta&=&\sum_n np^{n(k-m-\frac{1}{2})}\chi_i(p)^n\int_{{\mathbb Z}_p^\times}\chi_0(x)\chi_i(x)\psi(-p^nx)d^\times x\\ &=&\left\{\begin{array}{ll} \sum_{n\geq0}np^{n(k-m-\frac{1}{2})}\chi_i(p)^n+(1-p^{-1})^{-1}p^{m-k-\frac{1}{2}}\chi_i(p)^{-1};&\chi_0\chi_i\mid_{{\mathbb Z}_p^\times}=1;\\ -r(1-p^{-1})^{-1}p^{r(m-k-\frac{1}{2})}\chi_i(p)^{-r}\tau(\chi_0\chi_i,\psi);&{\rm cond}(\chi_0\chi_i)=r>0 \end{array}\right.\\ &=&\left\{\begin{array}{ll} \frac{p^{k-m-\frac{1}{2}}\chi_i(p)+p^{m-k-\frac{1}{2}}\chi_i(p)^{-1}-2p^{-1}}{(1-p^{-1})(1-p^{k-m-\frac{1}{2}}\chi_i(p))^2};&\chi_0\chi_i\mid_{{\mathbb Z}_p^\times}=1;\\ -r(1-p^{-1})^{-1}p^{r(m-k-\frac{1}{2})}\chi_i(p)^{-r}\tau(\chi_0\chi_i,\psi);&{\rm cond}(\chi_0\chi_i)=r>0, \end{array}\right. \end{eqnarray*} where the second equality follows from the identity $\sum_{n>0}nx^n=x(1-x)^{-2}$. The result then follows. \end{proof} \section{Extremal $p$-adic L-functions} If $\pi_p=\pi(\chi_1,\chi_2)$ or $\sigma(\chi_1,\chi_2)$ with $\chi_1$ unramified, then the Hecke polynomial satisfies $X^2-a_pX+\epsilon(p)p^{k+1}=(X-\alpha)(X-\beta)$, where $\alpha=p^{1/2}\chi_1(p)^{-1}$. This implies that if $\gamma=\alpha$ has small enough valuation, we can always construct $v_p(\alpha)$-admissible distributions $\mu_\alpha^\pm$ and $\mu_\alpha=\mu_\alpha^++\mu_\alpha^-$. In fact, if $\pi_p=\pi(\chi_1,\chi_2)$ and $\chi_2$ is also unramified, we can sometimes construct a second $v_p(\beta)$-admissible distribution $\mu_\beta$.
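The series evaluations used in the proof of Proposition \ref{EulerFactors} can be sanity-checked numerically. In the following Python sketch (ours, purely illustrative and not part of any argument), $x$ stands for $p^{k-m-\frac{1}{2}}\chi_i(p)$, treated as a free complex parameter of modulus less than one:

```python
# Numerical check of the two closed forms from the proof of the
# Euler-factor proposition; x plays the role of p^{k-m-1/2} chi_i(p).
p = 5
x = 0.3 + 0.2j  # any complex value with |x| < 1 works

geom = sum(x**n for n in range(2000))       # sum_{n>=0} x^n
ngeom = sum(n * x**n for n in range(2000))  # sum_{n>0} n x^n
u = 1 / (1 - 1 / p)                         # (1 - p^{-1})^{-1}

# Case (i): sum_{n>=0} x^n - u * x^{-1} p^{-1}
lhs1 = geom - u / (x * p)
rhs1 = u * (1 - 1 / (x * p)) / (1 - x)

# Case (ii): sum_{n>0} n x^n + u * x^{-1} p^{-1}
lhs2 = ngeom + u / (x * p)
rhs2 = (x + 1 / (x * p) - 2 / p) / ((1 - 1 / p) * (1 - x) ** 2)

assert abs(lhs1 - rhs1) < 1e-9
assert abs(lhs2 - rhs2) < 1e-9
```

Here $p^{m-k-\frac{1}{2}}\chi_i(p)^{-1}$ is $x^{-1}p^{-1}$, which is how the terms of the proof translate into the code above.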
By previous computations, the interpolation property implies that, for any locally polynomial character $\chi=\chi_0(x)x^m\in C_k({\mathbb Z}_p^\times,{\mathbb C}_p)$, \[ \int_{{\mathbb Z}_p^\times}\chi d\mu_\alpha=\frac{4\pi }{\Omega_f^\pm i^m}\cdot e_p(\pi_p,\chi_0)\cdot L\left(m-k+\frac{1}{2},\pi,\chi_0\right), \] with \[ e_p(\pi_p,\chi_0)=\left\{\begin{array}{ll} (1-p^{-1})^{-1}(1-\epsilon(p)\alpha^{-1} p^{m})(1-\alpha^{-1}p^{k-m});&\chi_0\chi_2\mid_{{\mathbb Z}_p^\times}=1;\\ (1-p^{-1})^{-1}p^{rm}\alpha^{-r}\tau(\chi_0\chi_2,\psi);&{\rm cond}(\chi_0\chi_2)=r>0. \end{array}\right. \] This interpolation formula coincides (up to a constant) with the classical interpolation formula of the distribution $\mu_{f,\alpha}$ defined in \S \ref{classicdist}. Indeed, it is easy to prove that $\varphi^\pm_{f_\alpha}$ is proportional to $\varphi^\pm_{f,p}(V_0)$ (see equation \eqref{UpV0}), hence the fact that $\mu_{f,\alpha}^\pm$ is proportional to $\mu_{\alpha}^\pm$ follows from \eqref{eqclassmu}, \eqref{defmugen} and property \eqref{keyprop}. In fact, if $\Psi$ is a character, all the admissible $p$-adic distributions constructed in this paper are twists of the $p$-adic distributions described in \S \ref{classicdist} (also in \cite{MTT86}), hence for those situations we only provide a new interpretation of classical constructions. The only genuinely new construction is for the case $\Psi=v_p\cdot|\cdot|^{1/2}\chi$ and $\pi_p=\pi(\chi,\chi)$. \begin{theorem}\label{mainthm} Let $f\in S_{k+2}(\Gamma_1(N),\epsilon)$ be a newform, and assume that $\pi_p=\pi(\chi,\chi)$.
Then there exists a $(k+1)/2$-admissible distribution $\mu_{f,p}^{\rm ext}$ of ${\mathbb Z}_p^\times$ such that, for any locally polynomial character $\chi=\chi_0(x)x^m\in C_k({\mathbb Z}_p^\times,{\mathbb C}_p)$, \[ \int_{{\mathbb Z}_p^\times}\chi d\mu_{f,p}^{\rm ext}=\frac{4\pi }{\Omega_f^\pm i^m}\cdot e_p^{\rm ext}(\pi_p,\chi_0)\cdot L\left(m-k+\frac{1}{2},\pi,\chi_0\right), \] with \[ e_p^{\rm ext}(\pi_p,\chi_0)=\left\{\begin{array}{ll} \frac{p^{k-m-\frac{1}{2}}\chi(p)+p^{m-k-\frac{1}{2}}\chi(p)^{-1}-2p^{-1}}{1-p^{-1}};&\chi_0\chi\mid_{{\mathbb Z}_p^\times}=1;\\ \frac{-rp^{r(m-k-\frac{1}{2})}\chi(p)^{-r}\tau(\chi_0\chi,\psi)}{1-p^{-1}};&{\rm cond}(\chi_0\chi)=r>0. \end{array}\right. \] \end{theorem} \begin{proof} The only thing that is left to prove is that $\mu_{f,p}^{\rm ext}$ is $(k+1)/2$-admissible, but this follows directly from Theorem \ref{thmadm} and the fact that \[ \varepsilon_p=\epsilon_p^{-1}|\cdot|^k=\chi^2,\qquad\gamma=\chi(p)p|p|^{\frac{1}{2}}\varepsilon_p(p)^{-1}=\chi(p)p^{\frac{1}{2}+k}\epsilon_p(p). \] Hence $v_p(\gamma)=\frac{1}{2}+k+v_p(\chi(p))=\frac{k+1}{2}$. \end{proof} \begin{remark} Notice that $\mu_{f,p}^{\rm ext}$ has been constructed as the sum \[ \mu_{f,p}^{\rm ext}=\mu_{f,p}^{{\rm ext},+}+\mu_{f,p}^{{\rm ext},-}. \] \end{remark} \begin{definition} We call $\mu_{f,p}^{\rm ext}$ the \emph{extremal $p$-adic measure}. Since $(k+1)/2<k+1$, by Proposition \ref{propext} we can extend $\mu_{f,p}^{\rm ext}$ to a locally analytic measure. Hence we define the \emph{extremal $p$-adic L-function} \[ L_p^{\rm ext}(f,s):=\int_{{\mathbb Z}_p^\times}{\rm exp}(s\cdot{\rm log}(x))d\mu_{f,p}^{\rm ext}(x). \] \end{definition} Hence, we conclude that in the conjecturally impossible situation that $\pi_p=\pi(\chi,\chi)$, two $p$-adic L-functions coexist: \[ L_p(f,s),\qquad L_p^{\rm ext}(f,s). \] Their interpolation properties look similar, but they have completely different Euler factors.
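The valuation count at the end of the proof of Theorem \ref{mainthm} is elementary enough to script. A minimal Python check with exact rational arithmetic (ours, purely illustrative), using only the relation $\chi(p)^2=\varepsilon_p(p)=\epsilon_p(p)^{-1}p^{-k}$ together with the fact that $\epsilon_p(p)$ is a root of unity, hence of valuation $0$:

```python
from fractions import Fraction

def v_gamma(k):
    # chi(p)^2 = eps_p(p)^{-1} p^{-k} with eps_p(p) a root of unity,
    # so v_p(chi(p)) = -k/2.
    v_chi_p = Fraction(-k, 2)
    # gamma = chi(p) p^{1/2 + k} eps_p(p), hence
    # v_p(gamma) = v_p(chi(p)) + 1/2 + k.
    return v_chi_p + Fraction(1, 2) + k

# v_p(gamma) = (k+1)/2 for every weight parameter k
assert all(v_gamma(k) == Fraction(k + 1, 2) for k in range(20))
```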
\subsection{Alternative description} In the classical setting described in \S \ref{Classical} ($\chi$ unramified), the $p$-adic distributions $\mu^{\pm}_{f,p}$ are given by Equation \eqref{eqclassmu}, while the extremal $p$-adic distributions satisfy \begin{eqnarray*} \int_{U(a,n)}P\left(1,\frac{x-a}{p^n}\right)d\mu^{{\rm ext},\pm}_{f,p}(x)&=&\varphi_{f,p}^\pm(\delta(1_{U(a,n)}))(0-\infty)\left(P\left(X,\frac{Y-aX}{p^n}\right)\right)\\ &=&\frac{1}{\alpha^n}\cdot\varphi^\pm_{f,p}(V_1-nV_0)\left(\frac{a}{p^n}-\infty\right)\left(P\right), \end{eqnarray*} where $V_0=(1-p^{-1})^{-1}|y|^{1/2}\chi(y)1_{{\mathbb Z}_p}(y)$ and $V_1=(1-p^{-1})^{-1}v_p(y)|y|^{1/2}\chi(y)1_{p{\mathbb Z}_p}(y)$. Using the relations \eqref{eqKir}, we compute the action of the Hecke operator $T_p$ on $V_0+V_1$: \begin{eqnarray*} T_p (V_0+V_1)&=&\left(\begin{array}{cc}p^{-1}&\\&1\end{array}\right)(V_0+V_1)+\sum_{c\in{\mathbb Z}/p{\mathbb Z}}\left(\begin{array}{cc}1&p^{-1}c\\&p^{-1}\end{array}\right)(V_0+V_1)\\ &=&(V_0+V_1)(p^{-1}y)+\frac{1}{\varepsilon_p(p)}(V_0+V_1)(py)\sum_{c\in{\mathbb Z}/p{\mathbb Z}}\psi(cy)\\ &=&\frac{\alpha|y|^{1/2}\chi(y)}{(1-p^{-1})}\left(v_p(y)1_{{\mathbb Z}_p}(p^{-1}y)+\frac{1+v_p(py)}{p}\sum_{c\in{\mathbb Z}/p{\mathbb Z}}\psi(cy)1_{{\mathbb Z}_p}(py)\right)\\ &=&\frac{|y|^{1/2}\chi(y)}{(1-p^{-1})}2\alpha\left(1+v_p(y)\right)1_{{\mathbb Z}_p}(y)=2\alpha(V_0+V_1), \end{eqnarray*} since $\alpha=\gamma=p^{1/2}\chi(p)^{-1}=\varepsilon_p(p)^{-1}p^{1/2}\chi(p)$. Similarly, \begin{equation}\label{UpV0} U_p V_0=\sum_{c\in{\mathbb Z}/p{\mathbb Z}}\left(\begin{array}{cc}1&p^{-1}c\\&p^{-1}\end{array}\right)V_0=\frac{1}{\varepsilon_p(p)}V_0(py)\sum_{c\in{\mathbb Z}/p{\mathbb Z}}\psi(cy)=\alpha V_0. \end{equation} Hence, $V_0$ and $V_1$ form a basis of the generalized eigenspace of $U_p$, in which $V_0$ is the eigenvector and $V_0+V_1$ is the newform.
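The linear algebra here is that of a rank-two generalized eigenspace, i.e. a $2\times 2$ Jordan block. The following toy Python sketch (ours; the numerical values are placeholders and none of this depends on the actual value of $\alpha$) checks the two facts used repeatedly: such an operator is not semisimple, yet $(U_p-\alpha)^2=0$ on the block:

```python
# Toy model of U_p on a basis (eigenvector, second generalized
# eigenvector): U_p v0 = alpha*v0 and U_p v1 = alpha*v1 + c*v0.
alpha, c = 7, 7  # placeholder values; only the Jordan-block shape matters

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

U = [[alpha, c],
     [0, alpha]]
# S = U_p - alpha * Id
S = [[U[i][j] - (alpha if i == j else 0) for j in range(2)] for i in range(2)]

assert S != [[0, 0], [0, 0]]             # U_p is not semisimple
assert matmul(S, S) == [[0, 0], [0, 0]]  # but (U_p - alpha)^2 = 0
```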
This implies that (up to a constant) $\varphi^\pm_{f,p}(V_0)\stackrel{\cdot}{=}\varphi^\pm_{f_\alpha}$, where $f_\alpha$ is the $p$-specialization defined in \S \ref{classicdist}, while we have that $\varphi^\pm_{f,p}(V_0+V_1)\stackrel{\cdot}{=}\varphi^\pm_{f}$. We conclude that, in terms of the classical definitions given in \S \ref{classicdist}, the extremal distribution can be described as \[ \int_{U(a,n)}P\left(1,\frac{x-a}{p^n}\right)d\mu^{{\rm ext},\pm}_{f,p}(x)=\frac{1}{\alpha^n}\cdot\varphi^\pm_{f-(n+1)f_\alpha}\left(\frac{a}{p^n}-\infty\right)\left(P\right). \] \section{Overconvergent modular symbols} For any $r\in p^{{\mathbb Q}}$, let $B[{\mathbb Z}_p,r]=\{z\in{\mathbb C}_p,\;\exists a\in{\mathbb Z}_p,\;|z-a|\leq r\}$. We denote by $A[r]$ the ring of affinoid functions on $B[{\mathbb Z}_p,r]$. The ring $A[r]$ has the structure of a ${\mathbb Q}_p$-Banach algebra with the norm $\parallel f\parallel_r={\rm sup}_{z\in B[{\mathbb Z}_p,r]}|f(z)|$. Denote by $D[r]={\mathrm {Hom}}_{{\mathbb Q}_p}(A[r],{\mathbb Q}_p)$ the continuous dual. It is also a Banach space with the norm \[ \parallel \mu\parallel_r={\rm sup}_{f\in A[r]}\frac{|\mu(f)|}{\parallel f\parallel_r}. \] We define \[ D^\dagger[r]:=\varprojlim_{r'\in p^{\mathbb Q},r'>r}D[r'], \] where the projective limit is taken with respect to the usual maps $D[r_2]\rightarrow D[r_1]$, $r_1>r_2$. Since these maps are injective and compact, the space $D^\dagger[r]$ is endowed with the structure of a Fr\'echet space.
Given an affinoid ${\mathbb Q}_p$-algebra $R$ and a character $w:{\mathbb Z}_p\rightarrow R^\times$ such that $w\in A[r]\hat\otimes_{{\mathbb Q}_p} R$, we can define an action of the monoid \[ \Sigma_0(p)=\left\{\left(\begin{array}{cc}a&b\\c&d\end{array}\right)\in \mathrm{M}_2({\mathbb Z}_p),\;p\nmid a,\;p\mid c,\;ad-bc\neq 0\right\} \] on $A[r]\hat\otimes_{{\mathbb Q}_p} R$ and $D[r]\hat\otimes_{{\mathbb Q}_p} R$ given by \begin{eqnarray*} (\gamma\ast_w f)(z)&=&w(a+cz) \cdot f\left(\frac{b+dz}{a+cz}\right),\qquad f\in A[r]\hat\otimes_{{\mathbb Q}_p} R,\\ (\gamma\ast_w \mu)(f)&=&\mu(\gamma^{-1}\ast_wf),\qquad\gamma^{-1}\in\Sigma_0(p),\quad \mu\in D[r]\hat\otimes_{{\mathbb Q}_p} R. \end{eqnarray*} Write $D_w[r]$ for the space $D[r]\hat\otimes_{{\mathbb Q}_p} R$ with the corresponding action. Similarly, we define \[ D_w^\dagger[r]:=\varprojlim_{r'\in p^{\mathbb Q},r'>r} D_w[r']=D^\dagger[r]\hat\otimes_{{\mathbb Q}_p}R, \] by \cite[Lemma 3.2]{Be}. Compatibility with base change and \cite[Lemma 3.5]{Be} imply that, given a morphism of affinoid ${\mathbb Q}_p$-algebras $\varphi: R\rightarrow R'$, we have isomorphisms \begin{equation}\label{esp} D_w[r]\otimes_R R'\stackrel{\simeq}{\longrightarrow}D_{\varphi\circ w}[r],\qquad D^\dagger_w[r]\otimes_R R'\stackrel{\simeq}{\longrightarrow}D^\dagger_{\varphi\circ w}[r]. \end{equation} \begin{definition} We call the space ${\mathrm {Hom}}_\Gamma(\Delta_0,D^\dagger_w[r])$ the \emph{space of modular symbols of weight $w$}. We denote by ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D^\dagger_w[r])$ the subgroup of ${\mathrm {Hom}}_\Gamma(\Delta_0,D^\dagger_w[r])$ of elements that are fixed or multiplied by $-1$ by the involution given by $\mbox{\tiny$\left(\begin{array}{cc}-1&\\&1\end{array}\right)$}$. \end{definition} The action of $\Sigma_0(p)$ on $D^\dagger_w[r]$ induces an action of $U_p$ on ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D^\dagger_w[r])$ given by the formula \eqref{defUp}.
Assume that $R$ is reduced and its norm $|\cdot|$ extends the norm of ${\mathbb Q}_p$. Write as usual $v_p(x)=-\log|x|/\log p$, so that $v_p(p)=1$. Let us consider \[ R\{\{T\}\}:=\left\{\sum_{n\geq 0}a_nT^n,\;a_n\in R,\;\lim_n(v_p(a_n)-n\nu)=\infty\mbox{ for all }\nu\in {\mathbb R}\right\}. \] Given $F(T)\in R\{\{T\}\}$ and $\nu\in {\mathbb R}$, define \[ N(F,\nu):=\max\{n\in{\mathbb N},\;v_p(a_n)-n\nu={\rm inf}_m(v_p(a_m)-m\nu)\}. \] A polynomial $Q(T)\in R[T]\subseteq R\{\{T\}\}$ is \emph{$\nu$-dominant} if it has degree $N(Q,\nu)$ and, for all $x\in {\rm Sp}(R)$, we have $N(Q,\nu)=N(Q_x,\nu)$. We say that $F(T)\in R\{\{T\}\}$ is \emph{$\nu$-adapted} if there exists a (unique) decomposition $F(T)=Q(T)\cdot G(T)$, where $Q(T)\in R[T]$ is a $\nu$-dominant polynomial of degree $N(F,\nu)$ and $Q(0)=G(0)=1$. Since ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D_w[r])$ satisfies property (Pr) of \cite[\S 2]{Buz} and $U_p$ acts compactly, one can define the characteristic power series $F(T)\in R\{\{T\}\}$ of $U_p$ acting on ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D_w[r])$. We say that $R$ is \emph{$\nu$-adapted}, for some $\nu\in{\mathbb R}$, if $F$ is $\nu$-adapted. If this is the case, we can define the submodule ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D_w[r])^{\leq\nu}$ of modular symbols of \emph{slope bounded by $\nu$} as the kernel of $Q(U_p)$ in ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D_w[r])$. We write ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D^\dagger_w[r])^{\leq\nu}$ for the intersection \[ {\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D^\dagger_w[r])^{\leq\nu}:={\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D^\dagger_w[r])\cap {\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D_w[r'])^{\leq\nu} \] in ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D_w[r'])$, for any $r'>r$. \subsection{Control Theorem}\label{CtrlThm} Let us consider the character \[ k:{\mathbb Z}_p^\times\rightarrow{\mathbb Q}_p^\times,\qquad x\longmapsto x^k.
\] Then we have a morphism of $\Sigma_0(p)$-modules \[ \rho_k^\ast:D^\dagger_k[1]\longrightarrow V(k):=V(k)_{{\mathbb Q}_p};\qquad\rho_k^\ast(\mu)(P):=\mu(P(1,z)). \] This provides a morphism \begin{equation}\label{morphrho} \rho_k^\ast:{\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D^\dagger_k[1])\longrightarrow {\mathrm {Hom}}^\pm_\Gamma(\Delta_0,V(k)). \end{equation} \begin{theorem}[Stevens' control theorem]\label{ctrlthm} The above morphism induces an isomorphism of ${\mathbb Q}_p$-vector spaces \[ \rho_k^\ast:{\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D^\dagger_k[1])^{< k+1}\longrightarrow {\mathrm {Hom}}^\pm_\Gamma(\Delta_0,V(k))^{< k+1}. \] \end{theorem} \begin{proof} See \cite[Theorem 7.1]{S} and \cite[Theorem 5.4]{SP}. \end{proof} \subsection{Extremal modular symbols} Let $f\in S_{k+2}(N,\epsilon)$ be as before, and assume that the Hecke polynomial $X^2-a_pX+\epsilon(p)p^{k+1}$ has a double root $\alpha$. We have defined admissible locally analytic measures $\mu^{{\rm ext},\pm}_{f,p}$ characterized by \[ \int_{a+p^n{\mathbb Z}_p}P\left(1,\frac{x-a}{p^n}\right)d\mu^{{\rm ext},\pm}_{f,p}(x)=\frac{1}{\alpha^n}\cdot\varphi^\pm_{f-(n+1)f_\alpha}\left(\frac{a}{p^n}-\infty\right)\left(P\right), \] for any $P\in {\mathcal P}(k)_{{\mathbb Q}}$. Our aim is to describe $\mu^{{\rm ext},\pm}_{f,p}$ as the evaluation at $0-\infty$ of a certain overconvergent modular symbol in ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D^\dagger_k[1])$.
Notice that, if we write $g_n:=f-(n+1)f_\alpha$ and $\gamma_{a,n}:=\left(\begin{array}{cc}1&a\\&p^n\end{array}\right)$, \begin{eqnarray*} \int_{{\mathbb Z}_p}\gamma_{a,n}^{-1}\left(\rho_k(P)1_{{\mathbb Z}_p}\right)(x)d\mu^{{\rm ext},\pm}_{f,p}(x)&=&\int_{a+p^n{\mathbb Z}_p}P\left(1,\frac{x-a}{p^n}\right)d\mu^{{\rm ext},\pm}_{f,p}(x)\\ &=&\frac{1}{\alpha^n}\cdot\varphi^\pm_{g_n}\left(\frac{a}{p^n}-\infty\right)\left(P\right)\\ &=&\frac{1}{\alpha^n}\cdot\varphi^\pm_{g_n}\left(\gamma_{a,n}(0-\infty)\right)\left(P\right)\\ &=&\left(\frac{1}{p\alpha}\right)^n\cdot\varphi^\pm_{g_n\mid_{\gamma_{a,n}}}\left(0-\infty\right)\left(\gamma_{a,n}^{-1}P\right). \end{eqnarray*} Moreover, $\gamma_{a,n}^{-1}\left(\rho_k(P)1_{{\mathbb Z}_p}\right)\in A[p^{-n}]$ for all $n\in{\mathbb N}$ and $a\in{\mathbb Z}_p$, and these functions form a dense set in $\bigcup_{n\geq 0}A[p^{-n}]$. \begin{lemma} For any divisor $D\in \Delta_0$, the expression \[ \gamma_{a,n}^{-1}\left(\rho_k(P)1_{{\mathbb Z}_p}\right)\longmapsto\left(\frac{1}{p\alpha}\right)^n\cdot\varphi^\pm_{g_n\mid_{\gamma_{a,n}}}\left(D\right)\left(\gamma_{a,n}^{-1}P\right) \] extends to a measure $\hat\varphi_{\rm ext}^\pm(D)\in D_k^\dagger[1]$. \end{lemma} \begin{proof} We have to show \emph{additivity}, namely, since \[ \gamma_{a,n}^{-1}\left(\rho_k(P)1_{{\mathbb Z}_p}\right)= \sum_{b\equiv a\;{\rm mod} \;p^{n}} \gamma_{b,n+1}^{-1}\left(\rho_k(\gamma_bP)1_{{\mathbb Z}_p}\right),\quad\gamma_{b}:=\left(\begin{array}{cc}1&\frac{b-a}{p^n}\\0&p\end{array}\right), \] we have to show that \[ \left(\frac{1}{p\alpha}\right)^n\cdot\varphi^\pm_{g_n\mid_{\gamma_{a,n}}}\left(D\right)\left(\gamma_{a,n}^{-1}P\right)= \sum_{b\equiv a\;{\rm mod} \;p^{n}}\left(\frac{1}{p\alpha}\right)^{n+1}\cdot\varphi^\pm_{g_{n+1}\mid_{\gamma_{b,n+1}}}\left(D\right)\left(\gamma_{b,n+1}^{-1}\gamma_bP\right).
\] Indeed, we have that $\gamma_{b,n+1}^{-1}\gamma_b=\gamma_{a,n}^{-1}$, thus the above equation follows from the fact that $g_n\in S_{k+2}(\Gamma,\epsilon)$ satisfies $U_p g_{n+1}=\frac{1}{p}\sum_{b\equiv a}g_{n+1}\mid_{\gamma_b}=\alpha\cdot g_{n}$. First we notice that, by \eqref{eqint}, for any $P\in {\mathcal P}(k)_{{\mathbb Z}_p}$, \[ \hat\varphi_{\rm ext}^+(D)(\gamma_{a,N}^{-1}\left(\rho_k(P)1_{{\mathbb Z}_p}\right))=\left(\frac{1}{\alpha}\right)^N\cdot\varphi^+_{g_N}\left(\gamma_{a,N}D\right)\left(P\right)\in A\cdot p^{-N \frac{k}{2}}{\mathcal O}_{{\mathbb C}_p}, \] for big enough $N$, since $v_p(\alpha)=k/2$. On the other hand, any locally analytic function is topologically generated by functions of the form $P_m^{a,N}(x):=\left(\frac{x-a}{p^N}\right)^m1_{a+p^N{\mathbb Z}_p}(x)$, where $m\in{\mathbb N}$. The functions $\gamma_{a,N}^{-1}\left(\rho_k(P)1_{{\mathbb Z}_p}\right)$ are generated by $P^{a,N}_m$ when $m\leq k$, hence our distribution must be determined by \[ \hat\varphi_{\rm ext}^\pm(D)(P^{a,N}_m)=\left(\frac{1}{p\alpha}\right)^N\cdot\varphi^\pm_{g_N\mid_{\gamma_{a,N}}}\left(D\right)\left(\gamma_{a,N}^{-1}(x^{k-m}y^m)\right),\qquad m\leq k. \] If $m>k$, we define $\hat\varphi_{\rm ext}^\pm(D)(P_m^{a,N})=\lim_{n\rightarrow\infty}a_n$, where \[ a_n=\sum_{b\;{\rm mod}\; p^{n};\;b\equiv a\;{\rm mod}\; p^N}\sum_{j\leq k}\left(\frac{b-a}{p^N}\right)^{m-j}\binom{m}{j}p^{j(n-N)}\hat\varphi_{\rm ext}^\pm(D)(P_j^{b,n}). \] The limit converges because $\{a_n\}_n$ is Cauchy; indeed, by additivity \[ a_{n_2}-a_{n_1}=\sum_{j\leq h}\sum_{b\equiv a\;(p^{n_2})}\sum_{b'\equiv b\;(p^{n_1})}\sum_{k=h+1}^{m}r(k)\binom{k}{j}\left(\frac{b'-b}{p^N}\right)^{k-j}p^{(n_2-N)j}\hat\varphi_{\rm ext}^\pm(D)(P_{j}^{b',n_2}), \] where $r(k)=\binom{m}{k}\left(\frac{b'-a}{p^N}\right)^{m-k}$.
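The re-expansion of $P^{a,N}_m$ in terms of the $P^{b,n}_j$ rests on the binomial identity $\frac{x-a}{p^N}=\frac{b-a}{p^N}+p^{n-N}\cdot\frac{x-b}{p^n}$ for $x\equiv b\pmod{p^n}$ and $b\equiv a\pmod{p^N}$. A quick exact-arithmetic check in Python of the full (untruncated) expansion; the numerical values of $p,N,n,a,m$ below are arbitrary placeholders:

```python
from fractions import Fraction
from math import comb

# For x in b + p^n Z_p with b = a (mod p^N):
#   ((x-a)/p^N)^m
#     = sum_j C(m,j) ((b-a)/p^N)^{m-j} p^{j(n-N)} ((x-b)/p^n)^j.
p, N, n, a, m = 3, 1, 3, 1, 5
x = a + 2 * p**N + 4 * p**n   # some x congruent to a modulo p^N
b = x % p**n                  # residue with x = b (mod p^n)
assert (b - a) % p**N == 0

lhs = Fraction(x - a, p**N) ** m
rhs = sum(comb(m, j) * Fraction(b - a, p**N) ** (m - j)
          * Fraction(p ** (j * (n - N))) * Fraction(x - b, p**n) ** j
          for j in range(m + 1))
assert lhs == rhs
```

In the definition of $a_n$ above, the expansion is truncated at $j\leq k$; the tail is precisely what the Cauchy estimate controls.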
Since \[ \left(\frac{b'-b}{p^N}\right)^{k-j}p^{(n_2-N)j}\hat\varphi_{\rm ext}^\pm(D)(P_{j}^{b',n_2})\in A\cdot p^{(n_1-n_2)(k-j)}p^{k\left(\frac{n_2}{2}-N\right)}{\mathcal O}_{{\mathbb C}_p}, \] we have that $a_{n_2}-a_{n_1}\rightarrow 0$ as $n_1\rightarrow\infty$. Hence, by continuity, we have extended $\hat\varphi_{\rm ext}^\pm(D)$ to a locally analytic measure, determined by its values on locally polynomial functions of degree at most $k$. \end{proof} The above lemma implies that $\hat\varphi_{\rm ext}^\pm\in {\mathrm {Hom}}(\Delta_0,D_k^\dagger[1])$. Let us check that it is $\Gamma$-equivariant: for any $g\in\Gamma$, it is easy to show that $g\gamma_{a,n}^{-1}1_{{\mathbb Z}_p}=\gamma_{g^{-1}a,n}^{-1}1_{{\mathbb Z}_p}$, where $\left(\begin{array}{cc}\alpha&\beta\\\gamma&\delta\end{array}\right)a=\frac{\beta+\delta a}{\alpha+\gamma a}$. Thus by \eqref{GL_2-equiv} \begin{eqnarray*} \hat\varphi_{\rm ext}^\pm(g D)(g\gamma_{a,n}^{-1}\left(\rho_k(P)1_{{\mathbb Z}_p}\right))&=&\hat\varphi_{\rm ext}^\pm(g D)(\gamma_{g^{-1}a,n}^{-1}\left(\rho_k(\gamma_{g^{-1}a,n}g\gamma_{a,n}^{-1}P)1_{{\mathbb Z}_p}\right))\\ &=&\left(\frac{1}{p\alpha}\right)^n\cdot\varphi^\pm_{g_n\mid_{\gamma_{g^{-1}a,n}}}\left(gD\right)\left(g\gamma_{a,n}^{-1}P\right)\\ &=&\left(\frac{1}{p\alpha}\right)^n\cdot\varphi^\pm_{g_n\mid_{\gamma_{g^{-1}a,n}g}}\left(D\right)\left(\gamma_{a,n}^{-1}P\right)\\ &=&\hat\varphi_{\rm ext}^\pm(D)(\gamma_{a,n}^{-1}\left(\rho_k(P)1_{{\mathbb Z}_p}\right)) \end{eqnarray*} where the last equality has been obtained from the fact that $\gamma_{g^{-1}a,n}g\gamma_{a,n}^{-1}\in\Gamma$ and $g_n$ is $\Gamma$-invariant for all $n$. One easily checks that $\hat\varphi_{\rm ext}^\pm$ is in the corresponding $\mbox{\tiny$\left(\begin{array}{cc}-1&\\&1\end{array}\right)$}$-subspace \[ \hat\varphi_{\rm ext}^\pm\in {\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D_k^\dagger[1]).
\] From the definition, it is easy to check the following result. \begin{proposition} The measures $\mu^{{\rm ext},\pm}_{f,p}$ and $\mu^{{\rm ext}}_{f,p}$ can be obtained as \[ \mu^{{\rm ext},\pm}_{f,p}=\hat\varphi_{\rm ext}^\pm(0-\infty)\mid_{{\mathbb Z}_p^\times},\qquad \mu^{{\rm ext}}_{f,p}=\hat\varphi_{\rm ext}(0-\infty)\mid_{{\mathbb Z}_p^\times}, \] where $\hat\varphi_{\rm ext}:=\hat\varphi_{\rm ext}^++\hat\varphi_{\rm ext}^-$. \end{proposition} \subsection{Action of $U_p$}\label{actionUp} Recall that the action of $\Sigma_0(p)$ on ${\mathrm {Hom}}_\Gamma(\Delta_0,D_k^\dagger[1])$ provides an action of the Hecke operator $U_p$; the aim of this section is to compute $U_p\hat\varphi_{\rm ext}^\pm$. Notice that it is enough to compute the image of the functions $f_{a,n,P}:=\gamma_{a,n}^{-1}\left(\rho_k(P)1_{{\mathbb Z}_p}\right)$: \begin{eqnarray*} (U_p\hat\varphi_{\rm ext}^\pm)(D)(f_{a,n,P})&=&\sum_{c\;{\rm mod}\;p}\hat\varphi_{\rm ext}^\pm(\gamma_{c,1}D)(\gamma_{c,1}\gamma_{a,n}^{-1}\left(\rho_k(P)1_{{\mathbb Z}_p}\right))\\ &=&\hat\varphi_{\rm ext}^\pm(\gamma_{a,1}D)(\gamma_{0,n-1}^{-1}\left(\rho_k(P)1_{{\mathbb Z}_p}\right))\\ &=&\left(\frac{1}{p\alpha}\right)^{n-1}\cdot\varphi^\pm_{g_{n-1}\mid_{\gamma_{0,n-1}}}\left(\gamma_{a,1}D\right)\left(\gamma_{0,n-1}^{-1}P\right)\\ &=&\frac{1}{p}\left(\frac{1}{p\alpha}\right)^{n-1}\cdot\varphi^\pm_{g_{n-1}\mid_{\gamma_{0,n-1}\gamma_{a,1}}}\left(D\right)\left(\gamma_{a,1}^{-1}\gamma_{0,n-1}^{-1}P\right)\\ &=&\alpha\left(\frac{1}{p\alpha}\right)^{n}\cdot\varphi^\pm_{g_{n-1}\mid_{\gamma_{a,n}}}\left(D\right)\left(\gamma_{a,n}^{-1}P\right).
\end{eqnarray*} Since $g_n=g_{n-1}-f_\alpha$, we deduce that \begin{equation}\label{actUp} U_p\hat\varphi_{\rm ext}^\pm=\alpha\cdot\left(\hat\varphi_{\rm ext}^\pm+\hat\varphi^\pm\right), \end{equation} where $\hat\varphi^\pm\in {\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D_k^\dagger[1])$ is the classical overconvergent modular symbol corresponding through Theorem \ref{ctrlthm} to the eigenvector with eigenvalue $\alpha$ given by $f_\alpha$. \subsection{Specialization of $\hat\varphi_{\rm ext}^\pm$} Theorem \ref{ctrlthm} asserts that the morphism $\rho_k^\ast$ of \eqref{morphrho} becomes an isomorphism when we restrict ourselves to generalized eigenspaces for $U_p$ with valuation of the eigenvalue strictly less than $k+1$. We have seen that $\hat\varphi_{\rm ext}^\pm$ lives in the eigenspace of eigenvalue $\alpha$, and we know that $v_p(\alpha)=k/2$. Thus, it corresponds bijectively to an element of ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,V(k))$. We can easily compute the image $\rho_k^\ast \hat\varphi_{\rm ext}^\pm$ by simply calculating the image of the polynomial functions $\rho_k(P)1_{{\mathbb Z}_p}$: \[ \hat\varphi_{\rm ext}^\pm(D)(\rho_k(P)1_{{\mathbb Z}_p})=\left(\frac{1}{p\alpha}\right)^0\cdot\varphi^\pm_{g_0}\left(D\right)\left(P\right)=\varphi^\pm_{f-f_\alpha}\left(D\right)\left(P\right). \] Thus, $\rho_k^\ast \hat\varphi_{\rm ext}^\pm=\varphi^\pm_{f-f_\alpha}$, which corresponds via Eichler--Shimura to the modular form $f-f_\alpha$. This fact fits with Theorem \ref{ctrlthm} since $f-f_\alpha$ belongs to the generalized eigenspace; indeed, $(U_p-\alpha)^2(f-f_\alpha)=0$. \section{Extremal $p$-adic L-functions in families} \subsection{Weight space} Let ${\mathcal W}/{\mathbb Q}_p$ be the standard one-dimensional weight space. It is a rigid analytic space that classifies characters of ${\mathbb Z}_p^\times$, namely, \[ {\mathcal W}={\mathrm {Hom}}_{\rm cnt}({\mathbb Z}_p^\times, {\mathbb G}_m).
\] If $L$ is any normed extension of ${\mathbb Q}_p$, we write $\tilde w:{\mathbb Z}_p^\times\rightarrow L^\times$ for the continuous morphism of groups corresponding to a point $w\in{\mathcal W}(L)$. If $k \in {\mathbb Z}$, then the morphism $\tilde k(t) = t^k$ for all $t \in {\mathbb Z}_p^\times$ defines a point in ${\mathcal W}({\mathbb Q}_p)$ that we will also denote by $k$. Thus ${\mathbb Z}\subset{\mathcal W}({\mathbb Q}_p)$, and we call points in ${\mathbb Z}$ inside ${\mathcal W}({\mathbb Q}_p)$ integral weights. If $W={\rm Sp}R$ is an admissible affinoid of ${\mathcal W}$, the immersion ${\rm Sp}(R)=W\hookrightarrow{\mathcal W}$ defines an element $K\in{\mathcal W}(R)$ such that, for every $w\in W({\mathbb Q}_p)\hookrightarrow{\mathcal W}({\mathbb Q}_p)$, we have $\tilde w=w\circ\tilde K$. By \cite[Lemma 3.3]{Be}, there exists $r(W)>1$ such that the morphism \[ {\mathbb Z}_p\longrightarrow R^\times,\qquad z\longmapsto \tilde K(1+pz) \] belongs to $A[r(W)](R)$. We say that $W$ is \emph{nice} if the points ${\mathbb Z}\cap W$ are dense in $W$ and both $R$ and $R_0/pR_0$ are PIDs, where $R_0$ is the unit ball for the supremum norm in $R$. \subsection{The Eigencurve} For a fixed nice affinoid subdomain $W={\rm Sp}R$ of ${\mathcal W}$, we can consider the $R$-modules ${\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D_{\tilde K}[r])$, for $1<r\leq r(W)$. By \cite[Proposition 3.6]{Be}, the space ${\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D_{\tilde K}[r])$ is a potentially orthonormalizable Banach $R$-module. The elements of the Hecke algebra ${\mathcal H}={\mathbb Z}[T_q,\langle n\rangle,U_p]$ act continuously and $U_p$ acts compactly.
If we consider ${\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde K}[r])$, \cite[Theorem 3.10]{Be} asserts that, for any $w\in W({\mathbb Q}_p)$ and any real number $1<r\leq r(W)$, the natural ${\mathcal H}$-equivariant morphism \begin{equation}\label{special} {\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde K}[r])\otimes_{R,w}{\mathbb Q}_p\longrightarrow {\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D_{\tilde w}[r]) \end{equation} is always injective, and surjective except when $w=0$ and the sign $\pm$ is $-1$. The $R$-modules ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D_{\tilde w}[r])$ for all $1<r\leq r(W)$ are all $\nu$-adapted if one is, in which case we say that $W={\rm Sp}R$ is $\nu$-adapted. If $W$ is $\nu$-adapted, the restriction maps define isomorphisms between the $R$-modules ${\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D_{\tilde w}[r])^{\leq \nu}$ for all $1<r\leq r(W)$. Thus we obtain an isomorphism \begin{equation}\label{isoOC} {\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D^\dagger_{\tilde w}[r])^{\leq\nu}\simeq{\mathrm {Hom}}^\pm_\Gamma(\Delta_0,D_{\tilde w}[r])^{\leq\nu},\qquad 1<r\leq r(W), \end{equation} as seen in \cite[Proposition 3.11]{Be}. The eigencurves ${\mathcal C}^{\pm}\stackrel{\kappa}{\rightarrow}{\mathcal W}$ can be constructed as the union of local pieces \[ {\mathcal C}^{\pm}_{W,\nu}\longrightarrow W={\rm Sp}R, \] where $\nu\in {\mathbb R}$ is a real number and $W$ is a nice affinoid subspace adapted to $\nu$. By definition \[ {\mathcal C}^\pm_{W,\nu}={\rm Sp}\mathbb{T}^\pm_{W,\nu}, \] where $\mathbb{T}^\pm_{W,\nu}$ is the $R$-subalgebra of ${\mathrm{End}}_R({\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde K}[1])^{\leq\nu})$ generated by the image of the Hecke algebra ${\mathcal H}$. \begin{remark} The cuspidal parts of ${\mathcal C}^+_{W,\nu}$ and ${\mathcal C}_{W,\nu}^-$ coincide by \cite[Theorem 3.27]{Be}, hence we will sometimes identify certain neighbourhoods of cuspidal points.
\end{remark} \subsection{Specialization} Let $w\in W({\mathbb Q}_p)$ and write ${\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde w}[1])^{\leq\nu}_g$ for the image of the composition \begin{equation}\label{comps} {\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde K}[1])^{\leq\nu}\otimes_{R,w}{\mathbb Q}_p\stackrel{\eqref{special}}{\longrightarrow}{\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D_{\tilde w}[1])^{\leq\nu}\stackrel{\eqref{isoOC}}{\longrightarrow}{\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde w}[1])^{\leq\nu}. \end{equation} In analogy with the previous definition, we write $\mathbb{T}^\pm_{w,\nu}$ for the ${\mathbb Q}_p$-subalgebra of the endomorphism ring ${\mathrm{End}}_{{\mathbb Q}_p}({\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde w}[1])^{\leq\nu}_g)$ generated by the image of the Hecke algebra ${\mathcal H}$. By definition, there is a correspondence between points $x\in {\rm Spec}\mathbb{T}^\pm_{w,\nu}(\bar{\mathbb Q}_p)$ and systems of ${\mathcal H}$-eigenvalues appearing in ${\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde w}[1])^{\leq\nu}_g$. For any such $x$, we denote by \[ {\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde w}[1])_{(x)} \] the generalized eigenspace of the corresponding eigenvalues. Similarly, we denote by $(\mathbb{T}^\pm_{w,\nu})_{(x)}$ the localization of $\mathbb{T}^\pm_{w,\nu}\otimes_{{\mathbb Q}_p}\bar{\mathbb Q}_p$ at the maximal ideal corresponding to $x$. We have that \begin{equation}\label{eqlocx} {\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde w}[1])_{(x)}={\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde w}[1])^{\leq\nu}\otimes_{\mathbb{T}^\pm_{w,\nu}}(\mathbb{T}^\pm_{w,\nu})_{(x)}.
\end{equation} Since by definition ${\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde K}[1])^{\leq\nu}\otimes_{R,w}{\mathbb Q}_p\simeq{\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde w}[1])^{\leq\nu}_g$, we have a natural \emph{specialization map} \[ s_w:\mathbb{T}^\pm_{W,\nu}\otimes_{R,w}{\mathbb Q}_p\longrightarrow \mathbb{T}^\pm_{w,\nu}. \] By \cite[Lemme 6.6]{Che} the morphism $s_w$ is surjective for all $w\in{\mathcal W}({\mathbb Q}_p)$ and its kernel is nilpotent. In particular \[ {\rm Spec} \mathbb{T}^\pm_{w,\nu}(\bar{\mathbb Q}_p)=\kappa^{-1}(w)(\bar{\mathbb Q}_p), \qquad \kappa:{\mathcal C}^\pm\longrightarrow{\mathcal W}. \] Given $x\in{\rm Spec} \mathbb{T}^\pm_{w,\nu}(\bar{\mathbb Q}_p)\subset {\mathcal C}^\pm_{W,\nu}(\bar{\mathbb Q}_p)$, we can consider the rigid analytic localization $(\mathbb{T}^\pm_{W,\nu})_{(x)}$ of $\mathbb{T}^\pm_{W,\nu}\otimes_{{\mathbb Q}_p}\bar{\mathbb Q}_p$ at the maximal ideal corresponding to $x$. Notice that, if we denote by $R_{(w)}$ the rigid analytic localization of $R\otimes_{{\mathbb Q}_p}\bar{\mathbb Q}_p$ at the maximal ideal corresponding to $w$, then $(\mathbb{T}^\pm_{W,\nu})_{(x)}$ is naturally an $R_{(w)}$-algebra. Localizing at $x$ we obtain a surjective local morphism of finite local $\bar{\mathbb Q}_p$-algebras with nilpotent kernel \begin{equation}\label{defsw} s_w:(\mathbb{T}^\pm_{W,\nu})_{(x)}\otimes_{R_{(w)},w}\bar{\mathbb Q}_p\longrightarrow (\mathbb{T}^\pm_{w,\nu})_{(x)}. \end{equation} \begin{lemma}\label{charTx} We have that \[ (\mathbb{T}^\pm_{w,\nu})_{(x)}\simeq \bar{\mathbb Q}_p[X]/X^2, \] where $X$ corresponds to the element of the Hecke algebra $U_p-\alpha$. \end{lemma} \begin{proof} Equation \eqref{eqlocx} shows that $(\mathbb{T}^\pm_{w,\nu})_{(x)}$ is the ${\mathbb Q}_p$-subalgebra of the endomorphism ring ${\mathrm{End}}_{{\mathbb Q}_p}({\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde w}[1])^{\leq\nu}_{(x)})$ generated by the image of the Hecke algebra ${\mathcal H}$.
By Theorem \ref{ctrlthm} we have \[ {\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde w}[1])^{\leq\nu}_{(x)}={\mathrm {Hom}}_\Gamma^\pm(\Delta_0,V(k))^{\leq\nu}_{(x)}=\bar{\mathbb Q}_p\hat\varphi^\pm+\bar{\mathbb Q}_p\hat\varphi_{\rm ext}^\pm. \] Hence the result follows from the results of \S \ref{actionUp} and the fact that the Hecke operators $T_q$ and $\langle n\rangle$ act by scalars. \end{proof} \begin{definition} Any classical cuspidal non-critical point $y\in{\mathcal C}^\pm(\bar{\mathbb Q}_p)$ corresponds to a $p$-stabilized normalized cuspidal modular symbol $\varphi_{f'_{\alpha'}}^\pm$ of weight $\kappa(y)+2$. In this situation, we write \[ \mu_y^\pm:=\mu_{f',\alpha'}^\pm. \] Analogously, in our irregular situation given by $x\in{\mathcal C}^\pm(\bar{\mathbb Q}_p)$, we write \[ \mu_{x}^{{\rm ext},\pm}:=\mu_{f,p}^{{\rm ext},\pm}. \] \end{definition} \subsection{Two-variable $p$-adic L-functions}\label{secLmu} In this irregular situation, Betina and Williams define in \cite{BW} two-variable $p$-adic L-functions ${\mathcal L}_p^\pm$ that interpolate the $p$-adic L-functions $\mu_y^\pm$ as $y\in{\mathcal C}^\pm(\bar{\mathbb Q}_p)$ runs over classical points in a neighbourhood of $x\in{\mathcal C}^\pm(\bar{\mathbb Q}_p)$. In this section, we recall their construction and give a relation between ${\mathcal L}_p^{\pm}$ and $\mu_{x}^{{\rm ext},\pm}$. \begin{proposition} The space ${\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde K}[1])_{(x)}$ is a free $(\mathbb{T}^\pm_{W,\nu})_{(x)}$-module of rank one. \end{proposition} \begin{proof} \cite[Proposition 4.10]{BW}. \end{proof} \begin{corollary} After possibly shrinking $W$, there exists a connected component $V={\rm Sp}(T)\subset {\mathcal C}^\pm_{W,\nu}$ through $x$ such that $T$ is Gorenstein and \[ {\mathcal M}_\pm:={\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde K}[1])^{\leq\nu}\otimes_{\mathbb{T}^\pm_{W,\nu}}T \] is a free $T$-module of rank one. \end{corollary} \begin{proof} \cite[Corollary 4.11]{BW}.
\end{proof} From the formalism of Gorenstein rings, it follows that the $R$-linear dual ${\mathcal M}_\pm^\vee:={\mathrm {Hom}}_R({\mathcal M}_\pm,R)$ is free of rank one over $T$. Let ${\mathcal R}$ be the ${\mathbb Q}_p$-algebra of locally analytic distributions on ${\mathbb Z}_p^\times$. We have a natural morphism $D^\dagger[1]\rightarrow {\mathcal R}$ provided by the extension-by-zero map. This induces a morphism $\iota:D^\dagger_{\tilde K}[1]\rightarrow {\mathcal R}\hat\otimes_{{\mathbb Q}_p}R$ and an $R$-linear morphism \begin{eqnarray*} {\rm Mel}:{\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde K}[1])&\longrightarrow &{\mathcal R}\hat\otimes_{{\mathbb Q}_p}R\\ \varphi&\longmapsto&\iota\left(\varphi(0-\infty)\right) \end{eqnarray*} Since $V$ is a connected component of the eigencurve, ${\mathcal M}_\pm$ is a direct summand of ${\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde K}[1])^{\leq\nu}$. Thus the restriction of ${\rm Mel}$ defines an element of ${\mathcal R}\hat\otimes_{{\mathbb Q}_p}{\mathcal M}_\pm^\vee$. \begin{definition} By choosing a basis of ${\mathcal M}_\pm^\vee$ over $T$, the above construction provides \[ {\mathcal L}_p^\pm\in {\mathcal R}\hat\otimes_{{\mathbb Q}_p} T \] called the \emph{two-variable $p$-adic L-functions}. \end{definition} Write $\bar{\mathbb Q}_p[\varepsilon]:=\bar{\mathbb Q}_p[X]/(X^2)$, and let us consider the morphism \[ x[\varepsilon]^\ast:T\longrightarrow T_{(x)}=(\mathbb{T}_{W,\nu}^\pm)_{(x)}\longrightarrow (\mathbb{T}_{W,\nu}^\pm)_{(x)}\otimes_{R_{(w)},w}\bar{\mathbb Q}_p\stackrel{s_w}{\longrightarrow}(\mathbb{T}_{w,\nu}^\pm)_{(x)}\simeq\bar{\mathbb Q}_p[\varepsilon], \] given by \eqref{defsw} and Lemma \ref{charTx}. This provides a point $x[\varepsilon]\in V(\bar{\mathbb Q}_p[\varepsilon])$ lying above $x\in V(\bar{\mathbb Q}_p)$.
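As an illustrative consistency check on Lemma \ref{charTx} (our sketch, not part of the original argument), one can encode the $U_p$-action \eqref{actUp} on the two-dimensional space spanned by $\hat\varphi^\pm$ and $\hat\varphi_{\rm ext}^\pm$ as a matrix and verify symbolically that $X=U_p-\alpha$ is nonzero but nilpotent of order two, so that the algebra it generates is $\bar{\mathbb Q}_p[X]/(X^2)$:

```python
import sympy as sp

alpha = sp.symbols('alpha', positive=True)
# In the ordered basis (phi, phi_ext), eq. (actUp) gives
#   U_p phi     = alpha * phi
#   U_p phi_ext = alpha * (phi_ext + phi)
Up = sp.Matrix([[alpha, alpha],
                [0,     alpha]])
X = Up - alpha * sp.eye(2)          # the Hecke element U_p - alpha
assert X != sp.zeros(2, 2)          # X is nonzero ...
assert X * X == sp.zeros(2, 2)      # ... but X^2 = 0
# Hence the subalgebra generated by U_p is spanned by 1 and X,
# i.e. it is isomorphic to Qbar_p[X]/(X^2), as in Lemma charTx.
```

The same matrix makes the point $x[\varepsilon]$ transparent: evaluating a Hecke element on this generalized eigenspace returns an element $a+b\varepsilon$ of $\bar{\mathbb Q}_p[\varepsilon]$ rather than a scalar.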
\begin{theorem} For any $y\in V(\bar{\mathbb Q}_p)$ corresponding to a small slope $p$-stabilized cuspidal eigenform, \[ {\mathcal L}_p^\pm=C^\pm(y)\cdot \mu_{y}^\pm\in {\mathcal R}, \] for some $C^\pm(y)\in \bar{\mathbb Q}_p^\times$. We can normalize ${\mathcal L}_p^\pm$ by choosing the right $T$-basis $\phi^\pm$ of ${\mathcal M}_\pm^\vee$ so that $C^\pm(x)=1$. Moreover, for a good choice of $\phi^\pm$, \[ {\mathcal L}^\pm_p(x[\varepsilon])=\mu_{x}^\pm+\alpha^{-1}\mu_{x}^{{\rm ext},\pm}\varepsilon\in{\mathcal R}\otimes_{{\mathbb Q}_p}\bar{\mathbb Q}_p[\varepsilon]. \] \end{theorem} \begin{proof} The first part of this theorem corresponds to \cite[Theorem 5.2]{BW}. We extend their arguments here to deduce the second part of the theorem as well. By definition \[ {\rm Mel}={\mathcal L}_p^\pm\phi^\pm\in{\mathcal R}\hat\otimes_{{\mathbb Q}_p}{\mathcal M}_\pm^\vee. \] For any point $y\in V(\bar{\mathbb Q}_p)$, write $w=\kappa(y)\in W(\bar{\mathbb Q}_p)$. If we denote ${\mathcal M}_{(y)}:={\mathcal M}_\pm\otimes_T T_{(y)}$, we have \[ {\mathcal M}_{(y)}^\vee\otimes_{R_w,w}\bar{\mathbb Q}_p={\mathrm {Hom}}_{R_w}({\mathcal M}_{(y)},R_w)\otimes_{R_w,w}\bar{\mathbb Q}_p={\mathrm {Hom}}_{\bar{\mathbb Q}_p}({\mathcal M}_{(y)}\otimes_{R_w,w}\bar{\mathbb Q}_p,\bar{\mathbb Q}_p), \] since ${\mathcal M}_{(y)}$ is a finite free $R_w$-module. By \cite[Proposition 4.3]{BW} and the control Theorem \ref{ctrlthm}, the composition \eqref{comps} provides an isomorphism \begin{eqnarray*} {\mathcal M}_{(y)}\otimes_{R_w,w}\bar{\mathbb Q}_p&=&{\mathrm {Hom}}_\Gamma^\pm(\Delta_0,D^\dagger_{\tilde w}[1])_{(y)}\simeq{\mathrm {Hom}}_\Gamma^\pm(\Delta_0,V(w))_{(y)}\\ &=&\left\{ \begin{array}{ll}\bar{\mathbb Q}_p\hat\varphi_y^\pm,&\mbox{regular case,}\\\bar{\mathbb Q}_p\hat\varphi_y^\pm+\bar{\mathbb Q}_p\hat\varphi_{y,\rm ext}^\pm,&\mbox{irregular case.}\end{array} \right.
\end{eqnarray*} We observe that, since \[ T_{(y)}\otimes_{R_w,w}\bar{\mathbb Q}_p=\left\{ \begin{array}{ll}\bar{\mathbb Q}_p,&\mbox{regular case,}\\\bar{\mathbb Q}_p[\epsilon],&\mbox{irregular case,}\end{array} \right. \] a $T_{(y)}\otimes_{R_w,w}\bar{\mathbb Q}_p$-basis for ${\mathcal M}_{(y)}^\vee\otimes_{R_w,w}\bar{\mathbb Q}_p$ is given by $\phi_y^\pm$ with $\phi^\pm_y(\hat\varphi_y^\pm)=1$ and $\phi^\pm_{y}(\hat\varphi_{y,{\rm ext}}^\pm)=0$. Notice first that the point $y:T\rightarrow\bar{\mathbb Q}_p$ factors through $T_{(y)}\otimes_{R_w,w}\bar{\mathbb Q}_p\rightarrow\bar{\mathbb Q}_p$, and fits into the commutative diagram \[ \xymatrix{ T_{(y)}\otimes_{R_w,w}\bar{\mathbb Q}_p\ar[rr]^{y}\ar[d]^{\cdot\phi_y^\pm}&&\bar{\mathbb Q}_p\ar[d]^{=}\\ {\mathcal M}^\vee_{(y)}\otimes_{R_w,w}\bar{\mathbb Q}_p\ar[rr]^{f\mapsto f(\hat\varphi_y^\pm)}&&\bar{\mathbb Q}_p } \] Since $\phi_y^\pm$ corresponds to the specialization of $\phi^\pm$ up to constant, we compute \[ C^\pm(y)\cdot \mu_{y}^\pm=C^\pm(y)\cdot \hat\varphi_{y}^\pm(0-\infty)=C^\pm(y)\cdot {\rm Mel}(\hat\varphi_y^\pm)={\mathcal L}_p^\pm(y)\cdot\phi^\pm_y(\hat\varphi_y^\pm)={\mathcal L}_p^\pm(y), \] for some $C^\pm(y)\in\bar{\mathbb Q}_p$ so that $C^\pm(y)\cdot\phi^\pm=\phi^\pm_y$. This proves the first assertion. For the second, notice that $C^\pm(x)=1$ and we have the commutative diagram \[ \xymatrix{ T_{(x)}\otimes_{R_w,w}\bar{\mathbb Q}_p\ar[rrrr]_{\simeq}^{x[\varepsilon]}\ar[d]^{\cdot\phi_x^\pm}&&&&\bar{\mathbb Q}_p[\varepsilon]\ar[d]^{=}\\ {\mathcal M}^\vee_{(x)}\otimes_{R_w,w}\bar{\mathbb Q}_p\ar[rrrr]^{f\mapsto f(\hat\varphi_x^\pm)+\varepsilon\alpha^{-1} f(\hat\varphi_{x,{\rm ext}}^\pm)}&&&&\bar{\mathbb Q}_p[\varepsilon] } \] since by \eqref{actUp} we have $(U_p-\alpha)\hat\varphi_{x,{\rm ext}}^\pm=\alpha\hat\varphi_x^\pm$. 
Again we compute \begin{eqnarray*} \mu_{x}^\pm+\alpha^{-1}\mu_{x}^{{\rm ext},\pm}\varepsilon&=&\hat\varphi_{x}^\pm(0-\infty)+\alpha^{-1}\hat\varphi_{x,{\rm ext}}^\pm(0-\infty)\varepsilon={\rm Mel}(\hat\varphi_x^\pm)+\varepsilon\alpha^{-1} {\rm Mel}(\hat\varphi_{x,{\rm ext}}^\pm)\\ &=&{\mathcal L}_p^\pm(x[\varepsilon])\cdot\left(\phi_x^\pm(\hat\varphi_x^\pm)+\varepsilon\alpha^{-1}\phi^\pm_x(\hat\varphi_{x,{\rm ext}}^\pm)\right)={\mathcal L}_p^\pm(x[\varepsilon]), \end{eqnarray*} and the result follows. \end{proof} Notice that there is no canonical choice of $\phi_x^\pm$ even though we impose $C^\pm(x)=1$. In fact, $(1+\varepsilon c)\cdot\phi_x^\pm$ with $c\in\bar{\mathbb Q}_p$ is also a basis so that $C^\pm(x)=1$. For any such change of basis we obtain \[ {\mathcal L}^\pm_p(x[\varepsilon])=(1+\varepsilon c)^{-1}(\mu_{x}^\pm+\alpha^{-1}\mu_{x}^{{\rm ext},\pm}\varepsilon)=\mu_{x}^\pm+(\alpha^{-1}\mu_{x}^{{\rm ext},\pm}-c\mu_x^{\pm})\varepsilon. \] The following result does not depend on the choice of the generator $\phi^\pm$: \begin{corollary} Let $t\in T$ be the element corresponding to $U_p-\alpha$. Then \[ \frac{\partial {\mathcal L}_p^\pm}{\partial t}(x)\in \alpha^{-1}\mu_{x}^{{\rm ext},\pm}+\bar{\mathbb Q}_p\mu_x^{\pm}. \] \end{corollary} \end{document}
\begin{document} \newcommand{\ket}[1]{|#1\rangle} \newcommand{\bra}[1]{\langle #1|} \newcommand{\braket}[2]{\langle #1|#2\rangle} \def\langle{\langle} \def\rangle{\rangle} \def\widehat{\widehat} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\int_{-\infty}^\infty}{\int_{-\infty}^\infty} \newcommand{\int_0^\infty}{\int_0^\infty} \title{Quantum kinetic energy densities: An operational approach} \author{J. G. Muga} \affiliation{* Departamento de Qu\'\i mica-F\'\i sica, UPV-EHU, Apdo. 644, Bilbao, Spain} \author{D. Seidel} \affiliation{Institut f\"ur Theoretische Physik, Universit\"at G\"ottingen, Friedrich-Hund-Platz~1, 37077 G\"ottingen, Germany} \author{G. C. Hegerfeldt} \affiliation{Institut f\"ur Theoretische Physik, Universit\"at G\"ottingen, Friedrich-Hund-Platz~1, 37077 G\"ottingen, Germany} \begin{abstract} We propose and investigate a procedure to measure, at least in principle, a positive quantum version of the local kinetic energy density. This procedure is based, under certain idealized limits, on the detection rate of photons emitted by moving atoms which are excited by a localized laser beam. The same type of experiment, but in different limits, can also provide other non positive-definite versions of the kinetic energy density. A connection with quantum arrival time distributions is discussed. \end{abstract} \pacs{03.65.Nk, 03.65.Xp} \maketitle \section{Introduction} To obtain an expression for the local density of a quantum observable not diagonal in coordinate representation, one may look for guidance to the corresponding classical case. 
For a classical dynamical variable, $A(q,p)$, in position-momentum phase space its local density, $\alpha_A(x)$, is simply obtained by \begin{eqnarray} \alpha_A(x)&=& \int dp \rho(x,p) A(x,p)\\ \nonumber &=& \int dq dp \rho(q,p) \delta(x-q) A(q,p) \end{eqnarray} where $\rho(q,p)$ is the phase space density. To quantize this expression one can use \begin{equation} \delta(x-\widehat{x})=|x\rangle\langle x| \end{equation} and consider, for a point $x$, the operator $\widehat{A}(x)=\widehat{A}|x\rangle\langle x|$, or rather one of its many symmetrizations, as a quantum density for the observable $\widehat{A}$. For a given state $|\psi \rangle$, the expectation value $\langle \psi|\widehat{A}(x)|\psi \rangle$ would then be a candidate for the value of the local density at the point $x$ of the observable $\widehat{A}$. If $\widehat{A}$ is not diagonal in coordinate representation so that it does not commute with $|x\rangle\langle x|$, there are infinitely many ``combinations'' (orderings) to construct a quantum density,\cite{CL02,MPS98} for example, \begin{eqnarray} \widehat{A}^{1/2}|x\rangle\langle x|\widehat{A}^{1/2} \\ \frac{1}{2}\left(\widehat{A}|x\rangle\langle x|+|x\rangle\langle x|\widehat{A}\right) \\ \frac{1}{2}\left(\widehat{A}^{1/2}|x\rangle\langle x|\widehat{A}^{1/2}\right) +\frac{1}{4}\left(\widehat{A}|x\rangle\langle x|+|x\rangle\langle x|\widehat{A}\right) \end{eqnarray} The noncommutativity of two observables does not mean that there is only one ``true'' symmetrization of their product. Different symmetrizations may have a perfectly respectable status as physically observable and measurable quantities, and different orderings may be associated with latent properties that may be realized via different experimental measurement procedures. They may also be related more indirectly to observables and yet carry valuable physical information.
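To make the ordering ambiguity tangible, the following symbolic sketch (our illustration, with $\hbar=1$ and an arbitrary real trial state $\psi(x)=(1+x)e^{-x^2/2}$, none of which appears in the original text) compares, for $\widehat{A}=\widehat{p}^{\,2}$, the densities arising from the orderings $\widehat{p}\,\delta(x-\widehat{x})\,\widehat{p}$ and $\frac{1}{2}\{\widehat{p}^{\,2},\delta(x-\widehat{x})\}$: they disagree pointwise, yet both integrate to the same expectation value $\langle\widehat{p}^{\,2}\rangle$.

```python
import sympy as sp

x = sp.symbols('x', real=True)
psi = (1 + x) * sp.exp(-x**2 / 2)       # real, unnormalized trial state
dpsi = sp.diff(psi, x)
d2psi = sp.diff(psi, x, 2)

# density from the ordering  p delta(x-X) p      (hbar = 1)
rho1 = dpsi**2
# density from the ordering  (1/2){p^2, delta(x-X)}  (psi real)
rho2 = -psi * d2psi

# the two densities differ pointwise ...
assert sp.simplify((rho1 - rho2).subs(x, 1)) != 0
# ... but carry the same total <p^2> (integration by parts)
I1 = sp.integrate(rho1, (x, -sp.oo, sp.oo))
I2 = sp.integrate(rho2, (x, -sp.oo, sp.oo))
assert sp.simplify(I1 - I2) == 0
```

The equality of the integrals is just integration by parts, $\int\psi'^2\,dx=-\int\psi\psi''\,dx$, so the ambiguity is purely local, exactly as the text asserts.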
An example of this non-uniqueness due to different symmetrizations is the arrival time of a quantum particle at a particular position. Classically the distribution of arrival times would be the flux for particles moving in one direction, but quantum mechanically different quantizations have been proposed.\cite{ML00,MSE02} Another important example of this quantum non-uniqueness for a single classical quantity is the kinetic energy density, \cite{Cohen79,Cohen84,Robinett95} in which case $\widehat{A}$ becomes the kinetic energy operator, $\widehat{A}=\widehat{p}^2/2m$. There is no unique definition of a quantum ``kinetic energy density'', in spite of the relevance of the concept in several fields. The Thomas-Fermi theory provides an early example of a possible realization and application. In density functional theory, it enters as one of the terms of the energy functional to determine the electronic structure of atoms, molecules, solids or fermionic gases, see e.g. Refs. \onlinecite{APN02,BZ01}. In this context it has been used in particular to define a local temperature and identify molecular sites most reactive to electrophilic reagents.\cite{APN02} The kinetic energy density also plays a key role in partitioning molecular systems into fragments with well defined energies and virial relations,\cite{Cohen79,BB72,Bader90} or to define ``intrinsic shapes'' of reactants, transition states and products along the course of a chemical reaction.\cite{Tachibana01} It is moreover a basic quantity in quantum hydrodynamic equations.\cite{TS70,MPS98} In all these applications, different quantum versions of the kinetic energy density have been used. The most commonly found cases are the three quantizations considered above, or suitable generalizations thereof. They satisfy different properties and there have been intensive discussions which one is best, but clearly they all are useful in different ways and for different purposes. 
However, much less emphasis has been placed on possible procedures for measuring them. Many of these arguments and controversies can already be seen in the simple case of the kinetic energy density of a free particle in one dimension. Real, Muga and Brouard \cite{RMB97} studied and compared three versions of an operator, $\widehat{\tau}(x)$, for the kinetic energy density at a point $x$, associated with different quantizations, namely \begin{eqnarray} \widehat{\tau}^{(1)}(x)&=&\frac{\widehat{p}\delta(x-\widehat{x})\widehat{p}}{2m}, \\ \widehat{\tau}^{(2)}(x)&=&\frac{1}{2} \left[\frac{\widehat{p}^2}{2m}\delta(x-\widehat{x})+\delta(x-\widehat{x}) \frac{\widehat{p}^2}{2m}\right], \\ \widehat{\tau}^{(3)}(x)&=&\frac{1}{2}\left[\widehat{\tau}^{(1)}(x)+ \widehat{\tau}^{(2)}(x)\right]~. \end{eqnarray} The second operator follows from the quantization rule of Rivier. \cite{Rivier-PR-1957} The corresponding density $\langle \widehat{\tau}^{(2)}\rangle_t$ is given by its, generally time-dependent, expectation value and may in principle be obtained operationally by a weak measurement of the kinetic energy post-selected at position $x$.\cite{AV90,APRV93,Johansen2004} The third one, which is the average of $\langle \widehat{\tau}^{(1)}\rangle_t$ and $\langle \widehat{\tau}^{(2)}\rangle_t$, corresponds to Weyl's quantization rule. An indirect way to measure $\langle \widehat{\tau}^{(3)}\rangle_t$ for free motion was described by Johansen,\cite{Johansen98} who noticed that the second time derivative of the expectation value of $|\widehat{x}-x|$ is proportional to $\langle \widehat{\tau}^{(3)}\rangle_t$.
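A numerical illustration of the differing positivity properties (our sketch, with assumed units $\hbar=m=1$ and an arbitrary two-Gaussian superposition, not taken from the original): $\langle\widehat{\tau}^{(1)}\rangle(x)=|\psi'(x)|^2/2$ is nonnegative everywhere, whereas $\langle\widehat{\tau}^{(2)}\rangle(x)=-{\rm Re}[\psi^*(x)\psi''(x)]/2$ dips below zero, e.g. between the two humps, although both versions integrate to the same mean kinetic energy.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
# superposition of two separated Gaussians (hbar = m = 1)
psi = np.exp(-(x - 2.0)**2) + np.exp(-(x + 2.0)**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize
dpsi = np.gradient(psi, dx)
d2psi = np.gradient(dpsi, dx)

tau1 = 0.5 * np.abs(dpsi)**2                     # p delta(x) p / 2m
tau2 = -0.5 * np.real(np.conj(psi) * d2psi)      # Rivier ordering
assert tau1.min() >= 0.0                         # nonnegative everywhere
assert tau2.min() < 0.0                          # negative somewhere
# both versions integrate to the same mean kinetic energy
assert abs(np.sum(tau1) * dx - np.sum(tau2) * dx) < 1e-3
```

The negative regions of $\tau^{(2)}$ occur where $|\psi|$ is convex (in the tails and at the central dip of the superposition), which is consistent with $\tau^{(1)}$ being the only positive member of the family.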
In this paper we will provide an operational interpretation of the first expression which, incidentally, is the only positive one among the three, and in fact among a much broader family of quantizations.\cite{APN02} To this end we will use a simple model, originally devised to study time-of-arrival measurements.\cite{DEHM02} The basic physical idea is to send atoms in their ground state through a region illuminated by a perpendicular laser beam and to measure the resulting fluorescence rate of photons. \section{Kinetic energy density, fluorescence and atomic absorption rate in an imaginary potential barrier} The description of photon fluorescence from moving atoms, which are excited by a localized laser beam, is based on the quantum jump approach \cite{Hegerfeldt93, DCM92, Carmichael93} and has been discussed in detail elsewhere.\cite{DEHM02, DEHM03, HSM03} We will only summarize the results which are relevant for the present investigation and assume a ``lambda'' configuration of three atomic levels in which the laser couples levels 1 and 2 with Rabi frequency $\Omega$, whereas level 2 decays with inverse lifetime $\gamma$ predominantly and irreversibly to a ground (sink) state 3.\cite{OABSZ96} For a laser on resonance with the atomic transition, atoms for which no photon is detected (``undetected atoms'') are governed by the following effective Hamiltonian \begin{equation} \widehat{H} = {\widehat{p}}^2/2m + \frac{\hbar}{2} \Omega(\widehat{x}) \left\{ |2\rangle\langle 1| + |1\rangle\langle 2|\right\} - \frac{i}{2} \hbar \gamma |2\rangle\langle 2|~, \label{Hami2} \end{equation} where $\Omega(x)$ is the position dependent Rabi frequency. The evolution of the wave function for undetected atoms simplifies to a one-channel Schr\"odinger equation if $\hbar\gamma$ is large compared to the kinetic energy, with the purely imaginary potential \cite{RDNMH04} \begin{equation} U(\widehat{x})=-i\hbar\frac{\Omega(\widehat{x})^2}{2\gamma}.
\end{equation} In this ``low saturation regime'' the undetected atoms, whose number is proportional to the norm-squared of the wave-function, are in the ground state most of the time, and the temporal probability density for a photon detection (``detection rate'') is given by the decrease of the number of undetected atoms, \begin{equation} \Pi(t)=-d\langle \psi(t)|\psi(t) \rangle/dt=-\frac{2}{\hbar}\langle \psi(t)|{\rm Im}(U)|\psi(t) \rangle~. \label{pit} \end{equation} This is the basic operational quantity obtained in the modeled experiment. We will relate it, or its normalized version \begin{equation}\label{pin} \Pi_N(t)=\frac{\Pi(t)}{\int dt\,\Pi(t)} \end{equation} to ideal quantities for the freely moving atom, unperturbed by the laser, by taking certain limits. First, we consider a square laser beam profile. Then the imaginary potential becomes a barrier of height \begin{equation} V=\hbar \Omega^2/2\gamma~ \end{equation} located between $x=0$ and $x=L$, and the effective Hamiltonian for undetected atoms reads \begin{equation} \label{eff} \widehat{H} = \frac{\widehat{p}^2}{2m} - iV\chi_{[0,L]}(\widehat{x}), \end{equation} where $\chi_{[0,L]}$ is 1 inside the laser illuminated region and zero outside. The absorption, or detection, rate is found from Eq. (\ref{pit}) and is given by \begin{equation}\label{eq:abs_rate_barrier} \Pi(t) = \frac{2V}{\hbar}\int_0^L dx\,|\psi(x,t)|^2. \end{equation} To obtain the time development of a wave packet coming in from the left, we solve first the stationary equation \begin{equation} \widehat{H}\phi_k = E_k \phi_k. 
\end{equation} Using standard matching conditions, the energy eigenfunctions $\phi_k$ in the barrier region $0 \leq x \leq L$ for a plane wave coming in from the left with momentum $\hbar k$ are given by \begin{equation} \phi_k(x) = \frac{1}{\sqrt{2\pi}}\left(A_+(k) e^{iqx} + A_-(k) e^{-iqx} \right) \end{equation} with $q^2 = k^2 +2imV/\hbar^2$ and \begin{equation} A_{\pm}(k) = \frac{k(q \pm k)e^{\mp iqL}}{2kq\cos(qL) -i(k^2+q^2)\sin(qL)}. \end{equation} Writing the initial state as a superposition of eigenfunctions with positive momenta, we obtain \begin{equation} \psi(x,t) = \int_0^\infty dk\,\widetilde{\psi}(k)\,e^{-i\hbar k^2 t/2m} \phi_k(x). \end{equation} Inserting this into Eq. (\ref{eq:abs_rate_barrier}) yields the absorption rate. It is also useful to define the auxiliary freely moving wave packet \begin{equation} \psi_f(x,t) = \frac{1}{\sqrt{2 \pi}} \int_0^\infty dk\,\widetilde{\psi}(k)\,e^{-i\hbar k^2 t/2m} e^{ikx}. \end{equation} We will now relate the absorption rate to an ideal distribution by going to the limit $V\to\infty$. When $V$ is increased, more and more atoms are reflected without being detected, but after normalization a finite distribution is obtained in the limit, even though the absorption probability eventually vanishes due to total reflection. For large $V$, $V \gg \hbar^2 k^2/2m$, one has in leading order \begin{eqnarray} q &\simeq& \sqrt{2imV/\hbar^2},\nonumber\\ A_+ &\simeq& \frac{2k}{q},\nonumber\\ A_- &\simeq& \frac{2k}{q}e^{2iqL},\nonumber\\ \phi_k(x) &\simeq& \frac{1}{\sqrt{2\pi}}\frac{2k}{q}\left(e^{iqx} + e^{iq(2L-x)}\right),\qquad 0\leq x \leq L. \end{eqnarray} Integrating over $x$ and neglecting the terms which vanish exponentially, the absorption rate becomes \begin{eqnarray} \Pi(t) \simeq \frac{\hbar^2}{\pi m\sqrt{mV}} \int_0^\infty dk \int_0^\infty dk' \widetilde{\psi}^*(k)\widetilde{\psi}(k')\,e^{i\hbar (k^2-k'^2)t/2m} kk'.
\end{eqnarray} This expression is independent of the barrier length $L$ as a result of the large $V$ limit, so the same result is obtained with an imaginary step potential $-iV \Theta(\widehat{x})$ or with a very narrow barrier. The normalization constant is given by $\int \Pi(t) dt \simeq 2\hbar k_0(mV)^{-1/2}$, where $k_0 = \int |\widetilde{\psi}(k)|^2 k\,dk$, and the normalized absorption rate is \begin{eqnarray} \label{eq:abs_rate_N} \Pi_N(t) \simeq \frac{\hbar}{2\pi m k_0} \int_0^\infty dk \int_0^\infty dk' \widetilde{\psi}^*(k)\widetilde{\psi}(k')\,e^{i\hbar (k^2-k'^2)t/2m} kk'~. \end{eqnarray} With the freely moving wave packet $\psi_f(x,t)$ this can finally be rewritten in the form \begin{equation} \label{abs} \Pi_N(t) \simeq \frac{\hbar}{mk_0}\bra{\psi_f(t)} \widehat{k}\delta(\widehat{x})\widehat{k} \ket{\psi_f(t)}~. \end{equation} Now, the right hand side is just the expectation value at time $t$ of the kinetic energy density $\widehat{\tau}^{(1)}$ evaluated at the origin! Thus, with $p_0=\hbar k_0$ the initial average momentum, we have obtained, in the limit $V\to \infty$, \begin{equation} \label{tau} \lim_{V\to\infty}\Pi_N(t)=\frac{2}{p_0}\langle \widehat {\tau}^{(1)}(x=0)\rangle_t~. \end{equation} Note that the averages are computed with the {\em{freely moving}} wave function and that one obtains the kinetic energy density at an arbitrary point $a$ by shifting the laser region, i.e. by replacing $[0,L]$ by $[a,L+a]$ in Eq. (\ref{eff}). {\it Remark:} Instead of normalizing the absorption rate of Eq. (\ref{eq:abs_rate_barrier}) by dividing by a constant, one can normalize it on the level of operators,\cite{BF02} which preserves it as a bilinear form. However, in this case the result depends on the constant to which one normalizes. It has been shown in Ref. \onlinecite{HSM03} that if the constant is chosen as 1 then operator normalization of Eq. (\ref{eq:abs_rate_barrier}) leads for $V\to\infty$ to the arrival time distribution of Kijowski.
\cite{Kijowski74} Now, since the time integral of $\langle \widehat{\tau}^{(1)}(x=0) \rangle_t$ equals $p_0/2=\hbar k_0/2$, it is natural to choose $p_0/2$ as normalization constant. Following the approach of Ref.~\onlinecite{HSMN04}, operator normalization of $\Pi(t)$ then leads to $\langle\widehat{\tau}^{(1)}(x=0) \rangle_t$ in the limit $V\to\infty$. \section{Kinetic energy density from first-photon measurement and deconvolution} In the preceding section we had $V=\Omega^2/2\gamma$, with $\hbar\gamma$ much larger than the kinetic energy. Therefore, the limit $V\to\infty$ implies a simultaneous change of $\Omega$ and $\gamma$. Experimentally, the Rabi frequency $\Omega$ is easy to adjust, but not the decay rate $\gamma$. To overcome this problem, we describe a procedure that allows one to keep the value of $\gamma$ fixed. We again consider the moving-atom laser model but now for the limit $\Omega\to\infty$ with $\gamma = \mathrm{const}$. In that case, the simplified description of the evolution of the wave function by means of the imaginary potential $U(x)$ is not feasible, and one has to solve the full two-channel problem for the three-level atom with the Hamiltonian in Eq. (\ref{Hami2}). This has been done in Refs. \onlinecite{DEHM02,HSM03}, and, after normalizing with a constant, the resulting photon detection rate $\Pi_N(t)$ becomes \begin{equation} \label{eq:first_photon_gamma} \Pi_N(t) \simeq \frac{\hbar}{2\pi m k_0} \int_0^\infty dk \int_0^\infty dk' \widetilde{\psi}^*(k)\widetilde{\psi}(k')\,e^{i\hbar (k^2-k'^2)t/2m} \frac{\gamma kk'}{\gamma+i\hbar(k^2-k'^2)/m}~. \end{equation} For $\hbar\gamma$ large compared to the kinetic energy of the incident atom, Eq. (\ref{eq:abs_rate_N}) is recovered, but for finite $\gamma$ there is a delay in the detection rate. This can be eliminated by means of a deconvolution with the first-photon distribution $W(t)$ for an atom at rest.
\cite{DEHM02,HSM03} The ansatz \begin{equation} \label{eq:deconv} \Pi(t) = \Pi_{id}(t)\ast W(t) \end{equation} yields in terms of Fourier transforms \begin{equation} \label{eq:deconv_fourier} \widetilde{\Pi}_{id}(\nu) = \frac{\widetilde{\Pi}(\nu)}{\widetilde{W}(\nu)} \end{equation} with \cite{DEHM02,KKW87} \begin{eqnarray} \frac{1}{\widetilde{W}(\nu)} &=& 1 + \left(\frac{\gamma}{\Omega^2} + \frac{2}{\gamma} \right)i\nu + \frac{3}{\Omega^2}(i\nu)^2 + \frac{2}{\gamma\Omega^2}(i\nu)^3\nonumber\\ &\simeq& 1+ \frac{2i\nu}{\gamma},\qquad \Omega\to\infty. \end{eqnarray} Inserting this and the Fourier transform of Eq. (\ref{eq:first_photon_gamma}) into Eq.~(\ref{eq:deconv_fourier}), the resulting ideal distribution, after performing the inverse Fourier transform, reads \begin{equation} \Pi_{id}(t) \simeq \frac{\hbar}{2\pi m k_0} \int_0^\infty dk \int_0^\infty dk' \widetilde{\psi}^*(k)\widetilde{\psi}(k')\,e^{i\hbar (k^2-k'^2)t/2m} kk', \end{equation} which is the same expression as the absorption rate of Eq.~(\ref{eq:abs_rate_N}), obtained here operationally for fixed $\gamma$. Naturally, \begin{equation} \Pi_{id}(t) \simeq \frac{2}{p_0} \langle \widehat{\tau}^{(1)}(x=0) \rangle_t \end{equation} holds as before. \section{Connection with quantum arrival time distributions} \label{sec:expansion} Here we briefly discuss a formal connection between the kinetic energy density $\widehat{\tau}^{(1)}(x)$ given in Eq. (\ref{abs}) and Eq. (\ref{tau}) and the arrival-time distribution of Kijowski\cite{Kijowski74}, \begin{equation} \label{eq:Kijowski} \Pi_K(t) = \frac{\hbar}{m} \bra{\psi_f(t)} \widehat{k}^{1/2} \delta(\widehat{x}) \widehat{k}^{1/2} \ket{\psi_f(t)}~, \end{equation} at $x=0$. For wave packets peaked around some $k_0$ in momentum space, the operator $\widehat{k}^{1/2}$ acting on $\psi_f$ in Eq. 
(\ref{eq:Kijowski}) can be expanded in terms of $(\widehat{k} - k_0)$, \begin{equation} \label{eq:Kij_expansion} \widehat{k}^{1/2} = k_0^{1/2} + \frac{1}{2}k_0^{-1/2}(\widehat{k}-k_0) - \frac{1}{8} k_0^{-3/2}(\widehat{k}-k_0)^2 + \mathcal{O}\left((\widehat{k}-k_0)^3\right). \end{equation} In the following we take $k_0$ to be the first moment of the momentum distribution, $k_0=\int |\widetilde{\psi}(k)|^2k\,dk$. Inserting the expansion in Eq. (\ref{eq:Kij_expansion}) into Eq. (\ref{eq:Kijowski}) yields in zeroth order a very simple result, \begin{equation} \Pi_K(t) = v_0 |\psi_f(0,t)|^2 + \mathcal{O}(\widehat{k}-k_0)~, \end{equation} i.e. the particle density times the average velocity $v_0=k_0\hbar/m$. To first order in $(\widehat{k} - k_0)$ one obtains the flux at $x=0$, \begin{equation} \Pi_K(t) = J(0,t) + \mathcal{O}\bigl((\widehat{k}-k_0)^2\bigr)~, \end{equation} where \begin{equation} J(0,t) = \frac{\hbar}{2m} \bra{\psi_f(t)} (\widehat{k}\delta(\widehat{x}) + \delta(\widehat{x})\widehat{k}) \ket{\psi_f(t)}~, \end{equation} and to second order the expression \begin{equation}\label{eq:Kij_expansion_2nd} \Pi_K(t) = J(0,t) + \frac{1}{2p_0}\Delta(0,t) + \mathcal{O}\bigl((\widehat{k}-k_0)^3\bigr) \end{equation} where \begin{equation} \Delta(0,t)=\langle \widehat{\tau}^{(1)} (x=0)\rangle_t- \langle \widehat{\tau}^{(2)}(x=0)\rangle_t~. \end{equation} For states with positive momentum, which we are considering here, the first order, namely the flux, is correctly normalized to one, and so is the second order since the time integral over $\Delta$ is easily shown to vanish. This difference only provides a local-in-time correction to $J$ that averages out globally. Its quantum nature can be further appreciated by the more explicit expression \begin{equation} \frac{1}{2p_0}\Delta =\frac{\hbar^2}{8mp_0} \frac{\partial^2|\psi_f(0,t)|^2}{\partial x^2}, \end{equation} which shows the inverse dependence on mass and momentum and the explicit quadratic dependence on $\hbar$. 
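As a quick numerical sanity check of the expansion in Eq.~(\ref{eq:Kij_expansion}) (with the operator $\widehat{k}$ replaced by a number $k$ near $k_0$; the values of $k_0$ and $k-k_0$ below are arbitrary illustrative choices), the three retained terms reproduce $\sqrt{k}$ up to a remainder of cubic order:

```python
import math

# Check the three-term expansion of sqrt(k) about k0.
# k0 and dk are arbitrary illustrative values; any k0 > 0 works.
k0, dk = 2.0, 0.1
k = k0 + dk
three_terms = (math.sqrt(k0)
               + 0.5 * k0 ** -0.5 * dk
               - 0.125 * k0 ** -1.5 * dk ** 2)
remainder = abs(math.sqrt(k) - three_terms)
print(remainder)   # about 1e-5 here, i.e. O(dk**3)
```

Changing $k_0$ or $k-k_0$ alters only the size of the remainder, not its cubic order.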
\section{Discussion} In Fig. 1, operational and ideal kinetic energy densities are compared for a coherent superposition of two Gaussian wave packets with different mean momenta. They are prepared in such a way that their centers of mass arrive simultaneously at the origin. This enhances the interference among different momentum components and the differences between the distributions. As seen in the figure, the differences between various versions of the quantum kinetic energy density may be quite significant. While $\langle\widehat{\tau}^{(1)}(x)\rangle_t$ is always positive, $\langle\widehat{\tau}^{(2)}(x)\rangle_t$ can become negative in classically forbidden regions for stationary eigenstates of the Hamiltonian, a fact that has been used by Tachibana \cite{Tachibana01} to define molecular and reaction shapes. It is perhaps less obvious that this quantity can also be negative as a result of free motion dynamics, as seen in the figure. The main contribution of this paper has been to point out that, under certain idealizations and limiting conditions, fluorescence experiments can lead to an operational approach to kinetic energy densities. A positive kinetic energy density can be obtained from the ``measured'' signal in a strong laser limit. With $\Delta$ the difference between the positive density $\langle \widehat{\tau}^{(1)}\rangle_t$ and the density $\langle \widehat{\tau}^{(2)}\rangle_t$ of Rivier, an interesting relation was found connecting $\Delta$ with the ideal time-of-arrival distribution of Kijowski and with the flux. The latter two can also be obtained operationally under appropriate limits. In a recent review by Ayers, Parr and Nagy \cite{APN02} on the kinetic energy density, one of the suggestions for future research was the need to study and understand this quantity $\Delta$ better.
In a completely different context from the present work, and suitably generalized to three dimensions, $\Delta$ plays a major role in Bader's theory to separate a molecular system into meaningful fragments.\cite{Cohen79,BB72,Bader90} Note that, if $\Delta=0$, $\langle \widehat{\tau}^{(3)}\rangle_t$ also becomes equal to the other two densities considered. Therefore, this condition implies a certain ``classicality'' or coalescence of the multiple quantum possibilities. If $\Delta$ or its integral over some volume vanishes, a fragment can be defined with a well-defined kinetic energy and virial relations. It is quite striking that, in the second-order expansion of the arrival time distribution of Kijowski, $\Delta=0$ implies that $\Pi_K$ becomes the flux, which is, as we know, the quantity that plays the role of an arrival time distribution in classical mechanics. In summary, we think that these results clarify the status, as physically meaningful quantities, of several versions of the local kinetic energy densities, and may stimulate experimental research on quantum kinetic energy densities as well as on arrival times. \begin{acknowledgments} This work has been supported by Ministerio de Ciencia y Tecnolog\'\i a, FEDER (BFM2000-0816-C03-03, BFM2003-01003), UPV-EHU (00039.310-13507/2001), and Acci\'on integrada. \end{acknowledgments} FIGURE CAPTION (Fig. 1) Comparison of kinetic energy densities at $x=0$: $\langle \widehat{\tau}^{(1)} \rangle$ (solid), $\langle \widehat{\tau}^{(2)} \rangle$ (dashed), $\langle \widehat{\tau}^{(3)} \rangle$ (dotted) and the operational quantity $p_0 \Pi_N(t)/2$, see Eqs. (\ref{pin}) and (\ref{tau}), for $V=1.9\,\hbar\,\mu$s$^{-1}$, $L=0.21\,\mu$m (triangles) and $V=950.0\, \hbar\,\mu$s$^{-1}$, $L=0.42 \,\mu$m (circles).
The initial wave packet is a coherent combination $\psi=2^{-1/2}(\psi_1+\psi_2)$ of two Gaussian states for the center-of-mass motion of a single caesium atom that become separately minimal uncertainty packets (with $\Delta x_1 = \Delta x_2 = 0.031\,\mu$m, and average velocities $\langle v\rangle _1 = 18.96$ {cm/s}, $\langle v\rangle _2 = 5.34$ {cm/s} at $x=0$ and $t=2\,\mu$s). The mass is $2.2\times 10^{-25}$ kg and $p_0=2.67\times 10^{-26}$ kg m/s. \vspace*{1cm}\\ Muga, Seidel and Hegerfeldt, Figure 1 \begin{figure}\label{fig3} \end{figure} \end{document}
\begin{document} \title{Decay of the Kolmogorov $N$-width for wave problems} \author{Constantin Greif and Karsten Urban} \address{Ulm University, Institute of Numerical Mathematics, Helmholtzstr.\ 20, D-89081 Ulm, Germany} \email{\{constantin.greif,karsten.urban\}@uni-ulm.de} \begin{abstract} The Kolmogorov $N$-width $d_N(\mathcal{M})$ describes the rate of the worst-case error (w.r.t.\ a subset $\mathcal{M}\subset H$ of a normed space $H$) arising from a projection onto the best-possible linear subspace of $H$ of dimension $N\in\mathbb{N}$. Thus, $d_N(\mathcal{M})$ sets a limit to any projection-based approximation such as determined by the reduced basis method. While it is known that $d_N(\mathcal{M})$ decays exponentially fast for many linear coercive parametrized partial differential equations, i.e., $d_N(\mathcal{M})=\mathcal{O}(e^{-\beta N})$, we show in this note, that only $d_N(\mathcal{M}) =\mathcal{O}(N^{-1/2})$ for initial-boundary-value problems of the hyperbolic wave equation with discontinuous initial conditions. This is aligned with the known slow decay of $d_N(\mathcal{M})$ for the linear transport problem. \end{abstract} \keywords{Kolmogorov $N$-width, wave equation} \subjclass[2010]{41A46, 65D15} \maketitle \section{Introduction}\label{Sec:1} The Kolmogorov $N$-width is a classical concept of (nonlinear) approximation theory as it describes the error arising from a projection onto the best-possible space of a given dimension $N\in\mathbb{N}$, \cite{MR774404}. This error is measured for a class $\mathcal{M}$ of objects in the sense that the \emph{worst} error over $\mathcal{M}$ is considered. Here, we focus on subsets $\mathcal{M}\subset H$, where $H$ is some Banach or Hilbert space with norm $\| \cdot \|_{H}$. 
Then, the Kolmogorov $N$-width is defined as \begin{align} \label{defKolNwidth} d_N(\mathcal{M}) := \inf\limits_{V_N\subset H;\ \dim V_N=N} \sup\limits_{ u \in \mathcal{M} } \inf\limits_{v_N \in V_N } \| u - v_N \|_{H}, \end{align} where $V_N$ are linear subspaces. The corresponding approximation scheme is nonlinear as one is looking for the best possible linear space of dimension $N$. Due to the infimum, the decay of $d_N(\mathcal{M})$ as $N\to\infty$ sets a lower bound for the best possible approximation of all elements in $\mathcal{M}$ by a linear approximation in $V_N$. Particular interest arises if the set $\mathcal{M}$ is chosen as a set of solutions of certain equations such as partial differential equations (PDEs), which is the reason why $\mathcal{M}$ is sometimes (even though this is slightly misleading) termed the `solution manifold'. In that setting, one considers a \emph{parameterized} PDE (PPDE) with a suitable solution $u_\mu$, where $\mu$ ranges over some parameter set $\mathcal{D}$, i.e., $\mathcal{M}\equiv\mathcal{M}(\mathcal{D}):=\{ u_\mu:\, \mu\in\mathcal{D}\}$; we will skip the dependence on $\mathcal{D}$ for notational convenience. As a consequence, the decay of the Kolmogorov $N$-width is of particular interest for model reduction in terms of the reduced basis method. There, given a PPDE and a parameter set $\mathcal{D}$, one wishes to construct a possibly optimal linear subspace $V_N$ in an offline phase in order to compute a reduced approximation with $N$ degrees of freedom (in $V_N$) highly efficiently in an online phase. For more details on the reduced basis method, we refer the reader e.g.\ to the recent surveys \cite{Haasdonk:RB,RozzaRB,QuarteroniRB}. It has been proven that for certain linear, coercive parameterized problems, the Kolmogorov $N$-width decays exponentially fast, i.e., \begin{align*} d_N( \mathcal{M} ) \leq C e^{-\beta N} \end{align*} with some constants $C<\infty$ and $\beta>0$, see e.g.\ \cite{MR2877366,OR16}.
This extremely fast decay is at the heart of any model reduction strategy (based upon a projection onto $V_N$), since it allows us to choose a very moderate $N$ to achieve small approximation errors. It is worth mentioning that this rate can in fact be achieved numerically by determining $V_N$ by a greedy-type algorithm. However, the situation dramatically changes when leaving the elliptic and parabolic realm. In fact, it has been proven in \cite{OR16} that $d_N$ decays for certain first-order linear transport problems at most with the rate $N^{-1/2}$. This in turn implies that projection-based approximation schemes for transport problems severely lack efficiency, \cite{MR3911721,MR3177860}. In this note, we consider hyperbolic problems and show, in a similar way to \cite{OR16}, that \begin{align*} d_N(\mathcal{M}) \geq \ts{\frac{1}{4}}\, N^{-1/2}, \end{align*} (see Thm.\ \ref{maintheoremabschatzung} below) for an example of the second-order wave equation. In Section \ref{Sec:2}, we describe the Cauchy problem of a second-order wave equation with discontinuous initial conditions and review the distributional solution concept. Section \ref{Sec:3} is devoted to the investigation of a corresponding initial-boundary-value problem and Section \ref{Sec:4} contains the proof of Thm.\ \ref{maintheoremabschatzung}.
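The contrast between exponential and algebraic decay can be made visible with a simple numerical experiment. The sketch below compares the singular values of two snapshot matrices; this is an $\ell_2$-type proxy for linear approximability rather than $d_N$ itself, and all grid sizes and parameter samples are arbitrary choices made for illustration only:

```python
import numpy as np

# Snapshot grids: x is the spatial grid, mus the sampled parameters.
x = np.linspace(-1.0, 1.0, 400)
mus = np.linspace(0.0, 1.0, 50)

# Smooth, "elliptic-like" family: mu -> exp(mu*x), analytic in mu.
A_smooth = np.exp(np.outer(x, mus))

# Wave snapshots phi_mu(t=1, x): jumps located at x = +/- mu.
A_wave = np.where(x[:, None] < -mus[None, :], 1.0,
                  np.where(x[:, None] >= mus[None, :], -1.0, 0.0))

s_smooth = np.linalg.svd(A_smooth, compute_uv=False)
s_wave = np.linalg.svd(A_wave, compute_uv=False)
print(s_smooth[10] / s_smooth[0])   # drops extremely fast
print(s_wave[10] / s_wave[0])       # decays only algebraically
```

For the smooth family the singular values are negligible after a handful of modes, whereas for the discontinuous wave snapshots they decay only algebraically, mirroring the dichotomy described above.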
\section{Distributional solution of the wave equation on $\mathbb{R}$}\label{Sec:2} We start by considering the univariate wave equation on the spatial domain $\Omega := \mathbb{R}$ and on the time interval $I := \mathbb{R}^+$ (i.e., a Cauchy problem) for a real-valued parameter $\mu \geq 0$ with discontinuous initial values, i.e., \begin{subequations} \label{simple_waveequation} \begin{align} \partial_{tt} u_{\mu}(t,x) - \mu^2 \, \partial_{xx} u_{\mu}(t,x) &= 0 \quad \text{for} \quad (t,x) \in \Omega_I := I \times \Omega, \label{simple_wave_wobound} \\ u_{\mu}(0,x) &= u_0(x) := \begin{cases} 1, & \text{if $x<0$}, \\ -1, & \text{if $x\geq 0$}, \end{cases} \quad x \in \Omega, \label{simple_wave_ini1} \\ \partial_{t} u_{\mu}(0,x) &= 0, \quad x \in \Omega. \label{simple_wave_ini2} \end{align} \end{subequations} This initial value problem has no classical solution, so we consider a weak solution concept; namely, we look for solutions in the distributional sense, which is known to be appropriate for hyperbolic problems. \begin{Lemma} \label{lemma_dis_sol_wave} A distributional solution of \eqref{simple_waveequation} is given, for $(t,x) \in \Omega_I = \mathbb{R}^+ \times \mathbb{R}$, by\\ \begin{minipage}{0.65\textwidth} \begin{align*} u_{\mu}(t,x) = \begin{cases} 1, & \text{if $x < - \mu t $}, \\ -1, & \text{if $x \geq \mu t $}, \\ 0, & \text{else}.
\end{cases} \end{align*} \end{minipage} \begin{minipage}{0.3\textwidth} \begin{tikzpicture}[x=10mm,y=10mm] \draw[->] (-1.3,0) -- (1.3,0) node[right] {\footnotesize$x$}; \draw[->] (0,0) -- (0,1.3) node[above] {\footnotesize$t$}; \fill[opacity=0.5, black!20!white] (-1.2,0) -- (0,0) -- (-1.2,1.2) -- (-1.2,0); \fill[opacity=0.5, black!40!white] (0,0) -- (-1.2,1.2) -- (1.2,1.2) -- (0,0); \fill[opacity=0.5, black!30!white] (1.2,0) -- (0,0) -- (1.2,1.2) -- (1.2,0); \node at (-0.75,0.4) {$1$}; \node at (0.8,0.4) {$-1$}; \node at (0.2,0.7) {$0$}; \draw (0,0) -- (-1.2,1.2) node [pos=0.9,above] {\tiny$t\!=\!-\frac{x}{\mu}$}; \draw (0,0) -- (1.2,1.2) node [pos=0.9,above] {\tiny$t\!=\!\frac{x}{\mu}$}; \end{tikzpicture} \end{minipage} \end{Lemma} \begin{proof} We start by considering the following initial value problem \begin{align} \label{funwaveeq} \begin{split} \partial_{tt} G_{\mu}(t,x) - \mu^2 \cdot \partial_{xx} G_{\mu}(t,x) = 0 \quad \text{for} \quad (t,x) \in \Omega_I , \\ G_{\mu}(0,x) = 0, \quad \partial_{t} G_{\mu}(0,x) = \delta(x ), \quad x \in \Omega, \end{split} \end{align} where $\delta(\cdot)$ denotes Dirac's $\delta$-distribution at 0. A solution $G_{\mu}$ of \eqref{funwaveeq} is called \emph{fundamental solution} (see e.g. \cite[Ch.\ 5]{MR2028503}) and can easily be seen to read $G_{\mu}(t,x) = \frac{1}{2 \mu} \big(H(x+ \mu t) - H(x- \mu t)\big)$, where $H(x):= \int^{x }_{-\infty} \delta(y) dy $ denotes the Heaviside step function with distributional derivative $H' = \delta$. Hence, the distributional derivative of $G_{\mu}$ w.r.t.\ $t$ reads \begin{align} \label{Ablfundsol} \partial_t G_{\mu}(t,x) = \frac12 \big(\delta(x+\mu t) + \delta(x-\mu t)\big) \end{align} and it is obvious that $G_{\mu}(0,x) = 0$ as well as $\partial_{t} G_{\mu}(0,x) = \delta(x )$ for $x \in \mathbb{R}$. 
By using the properties of Dirac's $\delta$-distribution (see e.g.\ \cite{MR1275833}) we observe that $\partial_{tt} G_{\mu}(t,x) = \frac{\mu}{2} \big( \delta(x+\mu t) - \delta(x-\mu t)\big)$ and $\partial_{xx} G_{\mu}(t,x) = \frac{1}{2 \mu} \big(\delta(x+\mu t) - \delta(x-\mu t) \big)$ in the distributional sense. Hence, $G_{\mu}$ satisfies \eqref{funwaveeq}. Now, we consider the original problem \eqref{simple_waveequation}. To this end, the following relation between the fundamental solution $G_{\mu}$ of \eqref{funwaveeq} and the solution $u_{\mu}$ of \eqref{simple_waveequation} is well known \cite{MR2028503}: \begin{align*} u_{\mu} (t,x) = & \int _{\mathbb{R}} \partial_t G_{\mu}(t,x-y) u_{\mu}(0,y) dy + \int _{\mathbb{R}} G_{\mu}(t,x-y) \partial_t u_{\mu}(0,y) dy. \end{align*} Finally, inserting $\partial_t G_{\mu}$ from \eqref{Ablfundsol}, the initial condition $u_{\mu}(0,\cdot) = u_0(\cdot)$ in $\mathbb{R}$, and the initial velocity condition $ \partial_t u_{\mu}(0,\cdot) = 0$ in $\mathbb{R}$, yields \begin{align*} u_{\mu} (t,x) &= \ts{\frac12} \int _{\mathbb{R}} \big( \delta(x-y+\mu t) + \delta(x-y-\mu t)\big ) u_0(y)\, dy \\ &= \ts{\frac{1}{2}} \Big[ u_{0}(x+\mu t ) + u_{0}(x - \mu t ) \Big] = \begin{cases} 1, & \text{if $x < - \mu t $}, \\ -1, & \text{if $x \geq \mu t $}, \\ 0, & \text{else}, \end{cases} \end{align*} which proves the claim. \end{proof} \section{The wave equation on the interval}\label{Sec:3} Let us consider the wave equation \eqref{simple_wave_wobound}, but now on the bounded space-time domain $\Omega_I := (0,1) \times (-1,1)$ with Dirichlet boundary conditions \begin{align} \tag{\ref{simple_waveequation}d} \label{21drandhi} u_{\mu}(t,-1) = 1,\quad u_{\mu}(t,1) = -1, \qquad \text{for} \quad t \in I:= (0,1), \end{align} and the initial conditions (\ref{simple_wave_ini1},\ref{simple_wave_ini2}).
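The explicit formula from the proof of Lemma~\ref{lemma_dis_sol_wave}, $u_\mu(t,x) = \frac12\big[u_0(x+\mu t)+u_0(x-\mu t)\big]$, is easy to check pointwise; a minimal sketch (the sample points are chosen arbitrarily):

```python
def u0(x):
    # discontinuous initial datum: +1 for x < 0, -1 for x >= 0
    return 1.0 if x < 0 else -1.0

def u(mu, t, x):
    # d'Alembert average of the two traveling copies of u0
    return 0.5 * (u0(x + mu * t) + u0(x - mu * t))

mu, t = 0.5, 1.0
print(u(mu, t, -0.75))  # 1.0   (x < -mu*t)
print(u(mu, t,  0.25))  # 0.0   (-mu*t <= x < mu*t)
print(u(mu, t,  0.75))  # -1.0  (x >= mu*t)
```

The half-open conventions of the cases are reproduced as well: the point $x = \mu t$ gives $-1$, while $x = -\mu t$ already lies in the middle region and gives $0$.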
It is readily seen that the functions $ \varphi_{\mu}$ defined by\\ \begin{minipage}{0.65\textwidth} \begin{align} \label{phiss} \varphi_{\mu}(t,x) := \begin{cases} 1, & \text{if $x < - \mu t $}, \\ -1, & \text{if $x \geq \mu t $}, \\ 0, & \text{else}, \end{cases} \end{align} \end{minipage} \begin{minipage}{0.3\textwidth} \begin{tikzpicture}[x=10mm,y=10mm] \draw[->] (-1.1,0) -- (1.1,0) node[right] {\footnotesize$x$}; \draw[->] (0,0) -- (0,1.2) node[above] {\footnotesize$t$}; \fill[opacity=0.5, black!20!white] (-1.0,0) -- (0,0) -- (-1.0,1.0) -- (-1.0,0); \fill[opacity=0.5, black!40!white] (0,0) -- (-1.0,1.0) -- (1.0,1.0) -- (0,0); \fill[opacity=0.5, black!30!white] (1.0,0) -- (0,0) -- (1.0,1.0) -- (1.0,0); \node at (-0.75,0.4) {$1$}; \node at (0.75,0.4) {$-1$}; \node at (0.2,0.7) {$0$}; \draw (0,0) -- (-1.0,1.0) node [pos=0.9,above] {\tiny$t\!=\!-\frac{x}{\mu}$}; \draw (0,0) -- (1.0,1.0) node [pos=0.9,above] {\tiny$t\!=\!\frac{x}{\mu}$}; \draw[thick] (-1,0) -- (-1,1) -- (1,1) -- (1,0); \draw (-1,0.1) -- (-1,-0.1) node [below] {\tiny$-1$}; \draw (1,0.1) -- (1,-0.1) node [below] {\tiny$1$}; \draw (0.1,1) -- (-0.1,1) node [above] {\tiny$1$}; \end{tikzpicture} \end{minipage} \newline for $(t,x) \in \overline{\Omega}_I = [0,1] \times [-1,1]$ are contained in the solution manifold of (\ref{simple_waveequation}a-d), i.e., \begin{align} \label{solmanifoldM} \{ \varphi_{\mu}: \mu \in \mathcal{D} \} \subset \mathcal{M} \equiv \mathcal{M}(\mathcal{D}) := \{ u_{\mu}: \mu \in \mathcal{D} := [0,1] \} \subset L_2 (\Omega_I). \end{align} In fact, by Lemma \ref{lemma_dis_sol_wave}, $\varphi_{\mu}$ solves (\ref{simple_waveequation}a-c) on $\mathbb{R}^+ \times \mathbb{R}$ and they also satisfy the boundary conditions \eqref{21drandhi}. The next step is the consideration of a specific family of functions to be defined now. 
For some $M \in \mathbb{N}$ and $1 \leq m \leq M$, let \begin{align} \label{psisortho} \psi_{M ,m}(t,x) := \begin{cases} 1, & \text{if $x \in \big[- \frac{m }{M } t, -\frac{m-1}{M } t \big) $}, \\ -1, & \text{if $x \in \big[\frac{m-1}{M } t, \frac{m }{M } t \big) $}, \\ 0, & \text{else}, \end{cases} \quad \text{for} \quad (t,x) \in \bar{\Omega}_I , \end{align} and we collect all $\psi_{M ,m}$, $m=1,\ldots,M$ in \begin{align} \label{Psifncset} \Psi_{M} := \{ \psi_{M,m} :\, 1 \leq m \leq M \}. \end{align} Note that $\Psi_{M}$ can be generated by \begin{align} \label{Phifncset} \Phi_{M} :=\{ \varphi_{\frac{m}{M}} :\, 0 \leq m \leq M \} \subset \{ \varphi_{\mu}:\, \mu \in \mathcal{D} \} , \end{align} via $\psi_{M ,m} = \varphi_{\frac{m-1}{M}} - \varphi_{\frac{m}{M}}$, $1 \leq m \leq M$, as is easily seen; see also Figure~\ref{fig:test}. \begin{figure}\caption{Top: functions $\varphi_\mu$ for $\mu=0,\frac13,\frac23,1$. Bottom: functions $\psi_{M,m}$ for $M=3$ and $m=1,2,3$. All for $t=\frac12$ fixed on $[-1,1]$. } \label{fig:test} \end{figure} We will see later that $d_N(\Phi_M)\ge \frac12 d_N(\Psi_M)$. Moreover, $\| \psi_{M,m} \|_{ L_2 (\Omega_I) } = \sqrt{1/M} $ and these functions are pairwise orthogonal, i.e., \begin{align*} \big( \psi_{M,m_1}, \psi_{M,m_2} \big)_{L_2 (\Omega_I)} = \int\limits_{0}^1 \int\limits_{-1}^1 \psi_{M,m_1}(t,x) \ \psi_{M,m_2}(t,x) \ dx \ dt = \ts{\frac{1}{M}}\, \delta_{m_1, m_2}, \end{align*} where $\delta_{m_1, m_2}$ denotes the Kronecker-$\delta$ for $m_1, m_2 \in \{ 1,\dots,M \}$. Thus, \begin{align} \label{orthonormalf} \tilde{\Psi}_{M}:= \{ \tilde{\psi}_{M,m} :\, 1 \leq m \leq M \}, \qquad \tilde{\psi}_{M,m} := \ts{\sqrt{M}}\, \psi_{M ,m}, \quad 1 \leq m \leq M, \end{align} is a set of orthonormal functions.
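The orthogonality relations above can also be confirmed by brute force. The following sketch evaluates the $L_2(\Omega_I)$ inner products of the $\psi_{M,m}$ by a midpoint rule on a tensor grid (the value $M=3$ and the grid sizes are arbitrary choices; midpoint sampling avoids ambiguities at the half-open interval boundaries):

```python
import numpy as np

# Midpoint-rule check that (psi_{M,m1}, psi_{M,m2})_{L2} equals
# delta_{m1,m2}/M on Omega_I = (0,1) x (-1,1), here for M = 3.
M = 3
nt, nx = 400, 800
t = (np.arange(nt) + 0.5) / nt                 # midpoints in (0, 1)
x = -1.0 + 2.0 * (np.arange(nx) + 0.5) / nx    # midpoints in (-1, 1)
T, X = np.meshgrid(t, x, indexing="ij")
dt_dx = (1.0 / nt) * (2.0 / nx)

def psi(m):
    lo, hi = (m - 1) / M, m / M
    plus = (X >= -hi * T) & (X < -lo * T)      # value +1 region
    minus = (X >= lo * T) & (X < hi * T)       # value -1 region
    return plus.astype(float) - minus.astype(float)

gram = np.array([[np.sum(psi(m1) * psi(m2)) * dt_dx
                  for m2 in range(1, M + 1)] for m1 in range(1, M + 1)])
print(np.abs(gram - np.eye(M) / M).max())      # small quadrature error
```

The off-diagonal entries vanish exactly, since the supports are disjoint; the diagonal entries approach $1/M$ as the grid is refined.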
\section{Kolmogorov $N$-width of sets of orthonormal elements}\label{Sec:4} Let us start by introducing the notation $\mathcal{V}_N := \{ V_N \subset H:\, \text{linear space with } \text{dim}(V_N) = N \}$, so that the Kolmogorov $N$-width in \eqref{defKolNwidth} can be rephrased as \begin{align*} d_N(\mathcal{M}) := \inf\limits_{V_N \in \mathcal{V}_N } \sup\limits_{ u \in \mathcal{M} } \inf\limits_{v_N \in V_N } \| u - v_N \|_{H}. \end{align*} We are going to determine either the exact value or lower bounds of $d_N(\mathcal{M})$ for certain sets of functions. \begin{Lemma} \label{lemmakolNeinheie} The canonical orthonormal basis $\{ e_1, \dots, e_{2N} \}$ of $H := ( \mathbb{R}^{2N}, \| \cdot \|_2 )$ has the Kolmogorov $N$-width $d_N ( \{ e_1, \dots, e_{2N} \} ) = \frac{1}{\sqrt{2}}$. \end{Lemma} \begin{proof} Let $V_N = \{ v = \sum_{j=1}^N a_j d_j \ \vert \ a_1, \dots, a_N \in \mathbb{R} \} \in \mathcal{V}_N$, with $\{d_1, \dots, d_{N}\}$ being an arbitrary set of orthonormal vectors in $H$. Thus, $V_N$ is an arbitrary linear subspace of $H$ of dimension $N$. Then, for any $k \in \{1, \dots, 2N \} $ and the canonical basis vector $e_k \in \mathbb{R}^{2N}$, we get \begin{align*} \sigma_{V_N}(k)^2 := \inf_{v \in V_N} \| e_k - v \|_2^2 = \| e_k - P_{V_N}(e_k) \|_2^2 = \Big\| e_k - \sum_{j=1}^N ( d_j)_k d_j \Big\|_2^2, \end{align*} where $P_{V_N}(e_k) = \sum_{j=1}^N \langle e_k,d_j \rangle d_j = \sum_{j=1}^N ( d_j)_k d_j $ is the orthogonal projection of $e_k$ onto $V_N$. Then, \begin{align*} \| P_{V_N}(e_k) \|_2^2 = \Big\langle \sum_{j=1}^N ( d_j)_k d_j, \sum_{l=1}^N ( d_l)_k d_l \Big\rangle = \sum_{j=1}^N ( d_j)_k \Big\langle d_j, \sum_{l=1}^N ( d_l)_k d_l \Big\rangle = \sum_{j=1}^N ( d_j)_k^2. 
\end{align*} Next, for $k \in \{1, \dots, 2N \} $ we get\footnote{We also refer to \cite{MR0109826,MR1971217}, where it was proven that $\|P \| = \| I-P \| $ for any idempotent operator $P\ne0$, cf.\ \eqref{sumorthoproof}.} \begin{align}\label{sumorthoproof} \sigma_{V_N}(k)^2 &= \| e_k - P_{V_N}(e_k) \|_2^2 = \| P_{V_N}(e_k) \|_2^2 - ( P_{V_N}(e_k))_k^2 + \big(1- ( P_{V_N}(e_k))_k \big)^2 \nonumber \\ &= \sum_{j=1}^N ( d_j)_k^2 - \Big(\sum_{j=1}^N ( d_j)_k^2 \Big)^2 + 1 - 2 \sum_{j=1}^N ( d_j)_k^2 + \Big(\sum_{j=1}^N ( d_j)_k^2 \Big)^2 = 1 - \sum_{j=1}^N ( d_j)_k^2. \end{align} Let us now assume that \begin{align} \label{contramustwrong} \sum_{j=1}^N ( d_j)_k^2 > \frac{1}{2} \quad \text{for all} \quad k \in \{1, \dots, 2N \}. \end{align} Then, we would have that \begin{align*} N = \sum_{j=1}^N \| d_j \|_2^2 = \sum_{j=1}^N \sum_{k=1}^{2N} ( d_j)_k^2 = \sum_{k=1}^{2N} \sum_{j=1}^N ( d_j)_k^2 > 2N \cdot \ts{\frac{1}{2}} = N, \end{align*} which is a contradiction, so that \eqref{contramustwrong} must be wrong and we conclude that there exists a $k^* \in \{1, \dots, 2N \}$ such that $ \sum_{j=1}^N ( d_j)_{k^*}^2 \leq \ts{\frac{1}{2}}$. This yields by \eqref{sumorthoproof} that $\sigma_{V_N}(k^*)^2 = 1 - \sum_{j=1}^N ( d_j)_{k^*}^2 \geq \ts{\frac{1}{2}}$. Using this $k^*$, we arrive at \begin{align*} d_N ( \{ e_1, \dots, e_{2N} \} ) = \inf_{ V_N \in \mathcal{V}_N } \sup_{ k \in \{1, \dots, 2N \} } \inf_{v \in V_N} \| e_k - v \|_{2} \geq \inf_{ V_N \in \mathcal{V}_N} \sigma_{V_N}(k^*) \geq \ts{\frac{1}{\sqrt{2}}}. \end{align*} To show equality, we consider $V_N:=\operatorname{span}\{ d_j:\, j=1,\ldots,N\}$ generated by orthonormal vectors $d_j := \frac{1}{\sqrt{2}} ( e_{2j-1} + e_{2j})$.
Then, for any even $k \in \{2,4, \dots, 2N \}$ (and analogously for odd $k$) we get by \eqref{sumorthoproof} that \begin{align*} \sigma_{V_N}(k)^2 = 1 - \sum_{j=1}^N ( d_j)_k^2 = 1 - \Big( \frac{1}{\sqrt{2}} ( e_{k-1} + e_{k}) \Big)_k^2 = 1 - \Big( \frac{1}{\sqrt{2}} \Big)^2 = \frac{1}{2} , \end{align*} which proves the claim. \end{proof} \begin{Rem} We note that, more generally, for $k \in \mathbb{N}$, it holds that $d_N ( \{ e_1, \dots, e_{k N} \} ) = \ts{\sqrt{\frac{ k-1 }{ k}}}$, which can easily be proven following the above lines. \end{Rem} Having these preparations at hand, we can now estimate the Kolmogorov $N$-width for arbitrary orthonormal sets in Hilbert spaces. \begin{Lemma} \label{proprealtionsortho} Let $H$ be an infinite-dimensional Hilbert space and $\{ \tilde{\psi}_1, \dots, \tilde{\psi}_{2 N} \} \subset H$ any orthonormal set of size $2N$. Then, $d_N (\{ \tilde{\psi}_1, \dots, \tilde{\psi}_{2 N} \}) = \frac{1}{\sqrt{2}}$. \end{Lemma} \begin{proof} Since $V_N := \arg\inf\limits_{V_N \in \mathcal{V}_N } \sup\limits_{ w \in \{ \tilde{\psi}_1, \dots, \tilde{\psi}_{2 N} \} } \inf\limits_{v \in V_N } \| w - v \|_{H} \subset \text{span} \{ \tilde{\psi}_1, \dots, \tilde{\psi}_{2 N} \}$, we can consider the subspace $\operatorname{span} \{ \tilde{\psi}_1, \dots, \tilde{\psi}_{2 N} \} \subset H$ instead of the whole space $H$. The space $\text{span} \{ \tilde{\psi}_1, \dots, \tilde{\psi}_{2 N} \} $ with norm $ \| \cdot \|_{H} $ can be isometrically mapped to $\mathbb{R}^{2N} $ with canonical orthonormal basis $\{ e_1, \dots, e_{2N} \}$ and Euclidean norm $\| \cdot \|_2$. In fact, we define the map $f : \text{span} \{ \tilde{\psi}_1, \dots, \tilde{\psi}_{2 N} \} \to \mathbb{R}^{2N}$ with $f(v) := \sum_{i = 1}^{2 N} ( v, \tilde{\psi}_i )_{H} \ e_i$.
For $v,w \in \text{span} \{ \tilde{\psi}_1, \dots, \tilde{\psi}_{2 N} \}$ we then get \begin{align*} \| f(w) - f(v) \|_2^2 &= \Big\| \sum_{i = 1}^{2 N} ( w-v, \tilde{\psi}_i )_{H} \ e_i \Big\|_2^2 = \sum_{i = 1}^{2 N} ( w-v, \tilde{\psi}_i )_{H}^2 \| e_i \|_2^2 = \sum_{i = 1}^{2 N} ( w-v, \tilde{\psi}_i )_{H}^2 \\ &= \sum_{i = 1}^{2 N} ( w-v, \tilde{\psi}_i )_{H}^2 \| \tilde{\psi}_i \|_{H}^2 = \Big\| \sum_{i = 1}^{2 N} ( w-v, \tilde{\psi}_i )_{H} \ \tilde{\psi}_i \Big\|_{H}^2 = \| w - v \|_{H}^2. \end{align*} Choosing $w = \tilde{\psi}_k, k \in \{ 1, \dots, {2 N} \}$, we have $f(w) = \sum_{i = 1}^{2 N} ( \tilde{\psi}_k , \tilde{\psi}_i )_{H} e_i = e_k$. Thus, Lemma \ref{lemmakolNeinheie} yields $d_N (\{ \tilde{\psi}_1, \dots, \tilde{\psi}_{2 N} \}) = d_N ( \{ e_1, \dots, e_{2N} \} ) = \frac{1}{\sqrt{2}}$, which proves the claim. \end{proof} \begin{Prop} \label{propo} Let $\mathcal{M}$ be the solution manifold of (2.1a -- d) in \eqref{solmanifoldM} and $\Phi_{M}$, $\Psi_{M}$ defined in (\ref{Phifncset}, \ref{Psifncset}), $M\in\mathbb{N}$. Then, $d_N(\mathcal{M}) \geq d_N ( \Phi_{M} ) \geq \frac{1}{2} d_N( \Psi_{M} )$ for $N \in \mathbb{N}$. \end{Prop} \begin{proof} By \eqref{solmanifoldM}, we have $\Phi_{M} = \{ \varphi_{\frac{m}{M}} :\, 0 \leq m \leq M \} \subset \{ \varphi_{\mu} \ \vert \ \mu \in \mathcal{D} \} \subset \mathcal{M}$, so that the first inequality is immediate. For the proof of the second inequality, we use the abbreviation $\| \cdot \| = \| \cdot \|_{L_2 (\Omega_I)}$.
First, we introduce some optimizing spaces and functions, for $m \in \{ m^*-1, m^* \}$: \begin{align*} V_N^{\Psi_M} &:= \arg\inf\limits_{V_N \in \mathcal{V}_N} \sup \limits_{\psi \in \Psi_M } \inf \limits_{v \in V_N } \| \psi - v \|, & \psi_{M,m^*} &:= \arg \sup \limits_{\psi \in \Psi_M } \inf \limits_{v \in V_N^{\Psi_M} } \| \psi - v \|, \\ V_N^{m} &:= \arg\inf\limits_{V_N \in \mathcal{V}_N} \inf \limits_{v \in V_N } \| \varphi_{\frac{m}{M}} - v \|, & v^{m } &:= \arg \inf\limits_{v \in V_N^{m}} \| \varphi_{\frac{m }{M}} - v \|. \end{align*} With those notations, we get \begin{align*} d_N ( \Psi_{M}) &= \inf\limits_{V_N \in \mathcal{V}_N} \sup\limits_{ \psi \in \Psi_{M} } \inf\limits_{ v \in V_N} \| \psi - v \| = \inf \limits_{v \in V_N^{\Psi_M} } \| \psi_{M,m^*} - v \| \\ &\kern-20pt\leq \| \psi_{M,m^*} - ( v^{m^*-1} - v^{m^*} ) \| = \| (\varphi_{\frac{m^*-1}{M}} - \varphi_{\frac{m^*}{M}}) - ( v^{m^*-1} - v^{m^*} ) \| \\ &\kern-20pt\leq \| \varphi_{\frac{m^*-1}{M}} - v^{m^*-1} \| + \| \varphi_{\frac{m^* }{M}} - v^{m^* } \| = \inf\limits_{v \in V_N^{m^*-1} } \| \varphi_{\frac{m^*-1}{M}} - v \| + \inf\limits_{v \in V_N^{m^*} } \| \varphi_{\frac{m^* }{M}} - v \| \\ &\kern-20pt= \inf\limits_{V_N \in \mathcal{V}_N} \inf\limits_{v \in V_N } \| \varphi_{\frac{m^*-1}{M}} - v \| + \inf\limits_{V_N \in \mathcal{V}_N} \inf\limits_{v \in V_N } \| \varphi_{\frac{m^* }{M}} - v \| \leq \inf\limits_{v \in W_N} \| \varphi_{\frac{m^*-1}{M}} - v \| + \inf\limits_{v \in W_N} \| \varphi_{\frac{m^* }{M}} - v \|, \end{align*} where $W_N := \arg\inf\limits_{V_N \in \mathcal{V}_N} \big( \inf\limits_{v \in V_N} \| \varphi_{\frac{m^*-1}{M}} - v \| + \inf\limits_{v \in V_N} \| \varphi_{\frac{m^* }{M}} - v \| \big) $.
This gives \begin{align*} \inf\limits_{v \in W_N} \| \varphi_{\frac{m^*-1}{M}} - v \| + \inf\limits_{v \in W_N} \| \varphi_{\frac{m^* }{M}} - v \| & = \inf\limits_{V_N \in \mathcal{V}_N} \big( \inf\limits_{v \in V_N} \| \varphi_{\frac{m^*-1}{M}} - v \| + \inf\limits_{v \in V_N} \| \varphi_{\frac{m^* }{M}} - v \| \big) \\ & \leq \inf\limits_{V_N \in \mathcal{V}_N} \big( 2 \sup\limits_{\varphi \in \Phi_{M} } \inf\limits_{v \in V_N} \| \varphi - v \| \big) = 2 \cdot d_N ( \Phi_{M}), \end{align*} which proves the second inequality. \end{proof} We can now prove the main result of this note. \begin{Th} \label{maintheoremabschatzung} For $\mathcal{M}$ being defined as in \eqref{solmanifoldM}, we have that $d_N(\mathcal{M}) \geq \frac{1}{4}\, N^{-1/2}$. \end{Th} \begin{proof} Using Proposition \ref{propo} with $M = 2N$ (which in fact maximizes $d_N(\Psi_{M})$) yields $d_N(\mathcal{M}) \geq d_N ( \Phi_{2N} ) \geq \frac{1}{2} \cdot d_N( \Psi_{2N} )$. Since $V_N$ is a linear space, we have \begin{align*} d_N( \Psi_{2N} ) = d_N(\{ \psi_{2N,n} :\, 1 \leq n \leq 2N \}) = \ts{\frac{1}{\sqrt{2N}}}\, d_N(\{ \sqrt{2N} \psi_{2N,n} :\, 1 \leq n \leq 2N \}) = \ts{\frac{1}{\sqrt{2N}}}\, d_N( \tilde{\Psi}_{2N} ). \end{align*} Applying Lemma \ref{proprealtionsortho} to the orthonormal functions previously defined in \eqref{orthonormalf} gives $ \frac{1}{2}\, d_N( \Psi_{2N} ) = \frac{1}{2}\, \frac{1}{\sqrt{2N}} d_N( \tilde{\Psi}_{2N} ) = \frac{1}{2} \frac{1}{\sqrt{2N}} \cdot \frac{1}{\sqrt{2}} = \frac{1}{4}\, N^{-1/2}$, which completes the proof. \end{proof} Theorem \ref{maintheoremabschatzung} shows the same decay of $d_N (\mathcal{M} )$ as for linear advection problems, \cite{OR16}. Thus, transport and hyperbolic parametrized problems are expected to admit a significantly slower decay than certain elliptic and parabolic problems, as mentioned in the introduction.
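As a quick numerical cross-check of this rate (not part of the proof; the jump family and grid sizes below are arbitrary choices of ours), one can compute the singular values of a snapshot matrix for the parameter-dependent jump $x \mapsto \mathbb{1}_{\{x > \mu\}}$: the singular values decay like $1/n$, so the mean-square error of the best rank-$N$ space, a proxy for the $N$-width, decays like $N^{-1/2}$.

```python
import numpy as np

# Snapshots of the parametrized jump phi_mu(x) = 1_{x > mu} on [0, 1].
# Grid sizes are arbitrary; columns are scaled so that the Euclidean norm
# approximates the L2(0, 1) norm.
nx, nmu = 2000, 400
x = (np.arange(nx) + 0.5) / nx
mus = (np.arange(nmu) + 0.5) / nmu
S = (x[:, None] > mus[None, :]).astype(float) * np.sqrt(1.0 / nx)

sig = np.linalg.svd(S, compute_uv=False)

# Mean-square error of the best rank-N approximation = sqrt of the tail energy.
tail = lambda N: np.sqrt(np.sum(sig[N:] ** 2))

r_sigma = sig[9] / sig[39]    # sigma_n ~ 1/(n - 1/2): expect ~ 39.5/9.5 ~ 4.2
r_tail = tail(10) / tail(40)  # N^{-1/2} decay: expect ~ sqrt(40/10) = 2
```

The two ratios reproduce the $1/n$ singular-value decay and the resulting $N^{-1/2}$ error decay predicted by the theorem.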
We note that this result is \emph{not} limited to the specific discontinuous initial conditions \eqref{simple_wave_ini1}. In fact, also for continuous initial conditions with a smooth `jump', one can construct orthogonal functions similar to \eqref{psisortho}, yielding the same slow decay. \end{document}
\begin{document} \title{Formation of deeply bound molecules via chainwise adiabatic passage} \author{Elena Kuznetsova} \affiliation{Department of Physics, University of Connecticut, Storrs, CT 06269} \affiliation{ITAMP, Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138} \author{Philippe Pellegrini} \affiliation{Department of Physics, University of Connecticut, Storrs, CT 06269} \author{Robin C\^ot\'e} \affiliation{Department of Physics, University of Connecticut, Storrs, CT 06269} \author{M. D. Lukin} \affiliation{Physics Department, Harvard University, Oxford St., Cambridge, MA 02138} \author{S. F. Yelin} \affiliation{Department of Physics, University of Connecticut, Storrs, CT 06269} \affiliation{ITAMP, Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138} \date{\today} \begin{abstract} We suggest and analyze a novel technique for efficient and robust creation of dense ultracold molecular ensembles in their ground rovibrational state. In our approach a molecule is brought to the ground state through a series of intermediate vibrational states via a {\em multistate chainwise Stimulated Raman Adiabatic Passage} (c-STIRAP) technique. We study the influence of the intermediate-state decay on the transfer process and suggest an approach that minimizes the population of these states, resulting in a maximal transfer efficiency. As an example, we analyze the formation of $^{87}$Rb$_{2}$ starting from an initial Feshbach molecular state and taking into account major decay mechanisms due to inelastic atom-molecule and molecule-molecule collisions. Numerical analysis suggests a transfer efficiency $>$ 90\%, even in the presence of strong collisional relaxation such as is present in a high-density atomic gas. \end{abstract} \maketitle Ultracold molecular gases open possibilities for studying exciting new physical phenomena and their applications.
For example, ultracold molecules can find use in testing fundamental symmetries \cite{DeMille-EDM,Hudson}, in precision spectroscopy \cite{weak-inter,fine-structure} and in ultracold chemistry \cite{quant-chem}. Dipolar ultracold quantum gases promise to show new phenomena due to strong anisotropic dipole-dipole interactions. Dipolar molecules in optical lattices can be employed as quantum simulators of condensed matter systems \cite{mol-cond-matt}. Ultracold polar molecules also represent an attractive platform for quantum computation \cite{DeMille}. Dense samples of molecules in their ground rovibrational state $v=0,J=0$ are required for many of these applications. In this state, they have a large permanent electric dipole moment and are stable with respect to collisions and spontaneous emission. Currently, translationally ultracold (100 nK -- 1 mK) molecules are produced by magneto- \cite{Feshbach} and photoassociation \cite{photoass} techniques. In both of these techniques the molecules are translationally cold, but vibrationally hot, since they are formed in high vibrational states near the dissociation limit of the electronic ground state. Therefore, once created, molecules have to be rapidly transferred to the ground rovibrational state. One of the most efficient ways to transfer population between two states is based on the Stimulated Raman Adiabatic Passage (STIRAP) technique \cite{STIRAP-review, STIRAP-experiment,STIRAP-KRb,STIRAP-Ye}. STIRAP provides a lossless robust transfer between an initial and a final state of a three-level system using a Raman transition with two counterintuitively ordered laser pulses. The main difficulty with a two-pulse STIRAP in molecules is to find an intermediate vibrational state of the excited electronic potential with a good Franck-Condon overlap with both a highly delocalized initial high vibrational state and a tightly localized $v=0$ state \cite{Stwalley}.
It was therefore proposed in \cite{Zoller} to transfer population in several steps down the ladder of vibrational states using a sequence of stimulated optical Raman transitions. In this case the initial and final vibrational levels of each step do not differ significantly, and it is easier to find a suitable intermediate vibrational level in the excited electronic state. In this step-wise approach population is transferred through a number of vibrational levels in the ground electronic state. In a dense gas, molecules in such states are subject to inelastic collisions with background atoms or other molecules. The released kinetic energy greatly exceeds the trap depth, resulting in loss of both molecules and atoms from the trap. This process is expected to limit the efficiency of creation of dense ultracold molecular samples. In this work we present a technique allowing an efficient transfer of a molecule from a high-lying to the ground vibrational state which minimizes population loss due to inelastic collisions in intermediate levels. Our technique is based on generalized chainwise STIRAP, which in principle allows for lossless transfer to the ground vibrational state. We note that serial STIRAP as in \cite{Zoller,STIRAP-Ye} and the pump-dump technique with a train of short pulses \cite{Jun-pump-dump} should also allow lossless transfer if pulses are shorter than the collisional relaxation time. The idea of this work can be described using a simple five-level model molecular system with states chainwise coupled by optical fields as illustrated in \fig{fig:chain-STIRAP}. The states $\ket{g_{1}}$, $\ket{g_{2}}$ and $\ket{g_{3}}$ are vibrational levels of the ground electronic molecular state, while $\ket{e_{1}}$ and $\ket{e_{2}}$ are vibrational states of an excited electronic molecular state. Molecules are formed in a high vibrational state $\ket{g_{1}}$, which in the following is assumed to be a molecular Feshbach state.
The state $\ket{g_{3}}$ is the deepest bound vibrational state $v=0$, and $\ket{g_{2}}$ is an intermediate vibrational state. The goal is to efficiently transfer population from the state $\ket{g_{1}}$ to state $\ket{g_{3}}$. At least two vibrational levels $\ket{e_1}$ and $\ket{e_2}$ in an excited electronic state are required, one having a good Franck-Condon overlap with $\ket{g_3}$, and the other with the initial Feshbach molecular state $\ket{g_1}$. In the states $\ket{e_{1}}$ and $\ket{e_{2}}$, molecules decay due to spontaneous emission and collisions, and in the states $\ket{g_{1}}$ (for bosonic molecules) and $\ket{g_{2}}$ they experience fast inelastic collisions with background atoms leading to loss of molecules from a trap. It means that populating the states $\ket{e_{1}}$, $\ket{e_{2}}$ and $\ket{g_{2}}$ has to be avoided when a background atomic gas is present, or the transfer process has to be faster than the collisional relaxation time. \begin{figure}\label{fig:chain-STIRAP} \end{figure} This can be achieved via chainwise STIRAP. The wave function of the system is $\ket{\Psi}=\sum_{i}C_{i}\exp{(-i\phi_{i}(t))}\ket{i}$, where $i=g_{1},e_{1},g_{2},e_{2},g_{3}$; $\phi_{g_{1}}=0$, $\phi_{e_{1}}=\nu_1t$, $\phi_{g_2}=(\nu_2-\nu_1)t$, $\phi_{e_{2}}=(\nu_3+\nu_2-\nu_1)t$, $\phi_{g_{3}}=(\nu_4-\nu_3+\nu_2-\nu_1)t$; $\nu_i$ is the frequency of the $i$th optical field. The evolution is then governed by the Schr\"odinger equation \begin{equation} i\hbar \frac{\partial \ket{\Psi}}{\partial t}=H(t)\ket{\Psi}, \end{equation} where the time-dependent Hamiltonian is given by \begin{equation} \label{eq:hamiltonian} H = \left( \begin{array}{ccccc} 0 & -\Omega_{4} & 0 & 0 & 0 \\ -\Omega_{4} & \Delta_{2} & -\Omega_{3} & 0 & 0 \\ 0 & -\Omega_{3} & 0 & -\Omega_{2} & 0 \\ 0 & 0 & -\Omega_{2} & \Delta_{1} & -\Omega_{1} \\ 0 & 0 & 0 & -\Omega_{1} & 0 \end{array} \right). 
\end{equation} Here $\Omega_{i}(t)=\mu_i{\cal E}_{i}(t)/2\hbar$, $i=1,2,3,4$ are the Rabi frequencies of optical fields; ${\cal E}_{i}$ is the amplitude of $i$th optical field, $\mu_{i}$ is the dipole matrix element along the respective transition, $\Delta_{1}=\omega_{1}-\nu_{1}$ and $\Delta_{2}=\omega_{4}-\nu_{4}$ are one-photon detunings of the fields, and the $\omega_i$ are the molecular frequencies along transition $i$. We assumed in \eq{eq:hamiltonian} that pairs of fields coupling two neighboring ground state vibrational levels are in a two-photon (Raman) resonance. The Hamiltonian \eq{eq:hamiltonian} has a dark state, a specific superposition of states uncoupled from applied laser fields, given by the expression \begin{equation} \label{eq:dark-state} \ket{\Phi^{0}}=\frac{\Omega_{2}\Omega_{4}\ket{g_{1}}-\Omega_{4}\Omega_{1}\ket{g_{2}}+\Omega_{1}\Omega_{3}\ket{g_{3}}}{\sqrt{\Omega_{4}^{2}\Omega_{1}^{2}+\Omega_{1}^{2}\Omega_{3}^{2}+\Omega_{2}^{2}\Omega_{4}^{2}}}. \end{equation} In c-STIRAP (as in classical STIRAP) the optical fields are applied in a counterintuitive way, i.e. at $t=-\infty$ only a combination of the $\Omega_{4}$, $\Omega_{3}$, $\Omega_{2}$ fields, and at $t=+\infty$ only of $\Omega_{3}$, $\Omega_{2}$, and $\Omega_{1}$ is present. As a result the dark state is initially associated with the $\ket{g_{1}}$ and finally with the $\ket{g_{3}}$ state. Adiabatically changing the Rabi frequencies of the optical fields so that the system stays in the dark state during evolution, one can transfer the system from the initial high-lying $\ket{g_{1}}$ to the ground vibrational $\ket{g_{3}}$ state with unit efficiency, defined as the population of the $\ket{g_{3}}$ state at $t=+\infty$. The dark state does not have contributions from the $\ket{e_{1}}$ and $\ket{e_{2}}$ excited states, which means that they are not populated during the transfer process. As a result, the decay from these states does not affect the transfer efficiency. 
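The mechanism just described can be sketched numerically. The toy code below (dimensionless units and parameter values chosen by us for illustration, not taken from the paper) works in the basis ordering $(g_1, e_1, g_2, e_2, g_3)$ with $\Omega_i$ coupling neighboring states along the chain; it first checks that the dark state \eqref{eq:dark-state} is a zero-energy eigenvector of the Hamiltonian, and then propagates the Schr\"odinger equation through a counterintuitive pulse sequence ($\Omega_4$ on first, $\Omega_1$ switched on later, strong constant $\Omega_2=\Omega_3$ overlapping both).

```python
import numpy as np

# Five-level chain, basis (g1, e1, g2, e2, g3); Omega_i couples level i to i+1,
# one-photon detunings set to zero.
def hamiltonian(o1, o2, o3, o4):
    H = np.zeros((5, 5))
    for k, o in enumerate((o1, o2, o3, o4)):
        H[k, k + 1] = H[k + 1, k] = -o
    return H

def dark_state(o1, o2, o3, o4):
    d = np.array([o2 * o4, 0.0, -o4 * o1, 0.0, o1 * o3])
    return d / np.linalg.norm(d)

# (i) The dark state is annihilated by the Hamiltonian.
o = (1.3, 8.0, 8.0, 0.7)
dark_residual = np.abs(hamiltonian(*o) @ dark_state(*o)).max()

# (ii) Counterintuitive tanh pulses: Omega_4 on first, Omega_1 later; the strong
# constant couplings Omega_2 = Omega_3 = O0 overlap both pulses.
O0, A, T, tau = 20.0, 5.0, 3.0, 2.0
Om1 = lambda t: 0.5 * A * (1 + np.tanh((t + tau) / T))
Om4 = lambda t: 0.5 * A * (1 - np.tanh((t - tau) / T))

def rhs(t, psi):
    return -1j * (hamiltonian(Om1(t), O0, O0, Om4(t)) @ psi)

psi = np.array([1, 0, 0, 0, 0], dtype=complex)   # all population starts in g1
t, dt = -20.0, 0.002
max_exc = max_g2 = 0.0
for _ in range(int(40.0 / dt)):                  # 4th-order Runge-Kutta stepping
    k1 = rhs(t, psi)
    k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
    k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
    k4 = rhs(t + dt, psi + dt * k3)
    psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    max_exc = max(max_exc, abs(psi[1]) ** 2 + abs(psi[3]) ** 2)
    max_g2 = max(max_g2, abs(psi[2]) ** 2)
p_g3 = abs(psi[4]) ** 2                          # transfer efficiency
```

With these toy parameters the transfer efficiency comes out close to unity, while the excited states and the intermediate ground state $\ket{g_2}$ remain only weakly populated throughout, in line with the suppression by $(\Omega/\Omega_0)^2$ discussed below.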
To minimize the population in the intermediate ground vibrational state we apply the fields such that $\Omega_{2}$, $\Omega_{3}\gg$ $\Omega_{1}$, $\Omega_{4}$ and $\Omega_{2}$, $\Omega_{3}$ temporally overlap both the $\Omega_{1}$ and $\Omega_{4}$ pulses \cite{Malinovsky}. In this case Eq.(\ref{eq:dark-state}) indicates that the population in the $\ket{g_{2}}$ state can, in principle, be completely suppressed at all times. This is the main idea of this work. We now analyze this system in detail. To simplify the analysis, we assume $\Omega_{2}=\Omega_{3}=\Omega_{0}$; $\Omega_{0}$ is independent of time (in practice the corresponding pulses just have to be much longer than $\Omega_{1}(t)$, $\Omega_{4}(t)$ and overlap both of them), and $\Omega_{0}\gg |\Omega_{1}|,|\Omega_{4}|$. We also set one-photon detunings to zero, $\Delta_{1}=\Delta_{2}=0$, and define the effective Rabi frequency $\Omega(t)=\sqrt{\Omega_{1}^{2}+\Omega_{4}^{2}}$ and a rotation angle by $\tan\theta(t)=\Omega_{1}(t)/\Omega_{4}(t)$. The eigenvalues of the system \eqref{eq:hamiltonian} are $\varepsilon_{0}=0$, corresponding to the dark state, while $\varepsilon_{1,2}=\pm \Omega/\sqrt{2}$ and $\varepsilon_{3,4}=\pm \sqrt{2}\Omega_{0}$ correspond to bright states. Adiabatic eigenstates $\ket{\Phi}=\left\{\ket{\Phi^{n}}\right\}$, $n=0,\dots,4$ and the bare states are transformed as $\Psi_{i}=\sum_{j}W_{ij}\Phi_{j}$ via a rotation matrix \begin{equation} W = \frac{1}{2} \left( \begin{array}{ccccc} -2\cos\theta & 0 & \xi \sin2\theta & 0 & -2\sin\theta \\ -\sqrt{2}\sin\theta & 2 & -\frac{\xi}{\sqrt{2}}\cos2\theta & -1 & \sqrt{2}\cos\theta \\ -\sqrt{2}\sin\theta & -1 & -\frac{\xi}{\sqrt{2}}\cos2\theta & 1 & \sqrt{2}\cos\theta \\ \xi\sin\theta & -1 & \sqrt{2} & -1 & \xi\cos\theta \\ \xi\sin\theta & 1 & \sqrt{2} & 1 & \xi\cos\theta \end{array} \right), \label{(A2)} \end{equation} where $\xi=\Omega/\Omega_{0}$ and terms of the order of $O(\xi^{2})$ and higher are neglected.
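The approximate spectrum $\{0,\,\pm\Omega/\sqrt{2},\,\pm\sqrt{2}\Omega_{0}\}$ can be cross-checked numerically in the regime $\Omega_0\gg\Omega$; the numbers below are illustrative choices of ours giving $\xi = \Omega/\Omega_0 = 0.05$, so agreement is expected up to $O(\xi^2)$ corrections.

```python
import numpy as np

# Chain Hamiltonian with Omega_2 = Omega_3 = O0 and Delta_1 = Delta_2 = 0.
O1, O4, O0 = 3.0, 4.0, 100.0
H = np.diag([-O1, -O0, -O0, -O4], 1)
H = H + H.T                                   # symmetric 5x5 chain

ev = np.sort(np.linalg.eigvalsh(H))
Om = np.hypot(O1, O4)                         # effective Rabi frequency Omega = 5
approx = np.sort([-np.sqrt(2) * O0, -Om / np.sqrt(2), 0.0,
                  Om / np.sqrt(2), np.sqrt(2) * O0])
```

The exact eigenvalues agree with the asymptotic expressions to a fraction of a percent for this value of $\xi$.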
The adiabaticity condition in this case requires $\dot{\theta}\ll \Omega,\;\Omega_{0}$. If the condition $\dot{\theta}\ll \Omega$ is satisfied, the dark state will not couple to the $\ket{\Phi^{1,2}}$ states, corresponding to the closest in energy $\varepsilon_{1,2}$ eigenvalues. Coupling to the $\ket{\Phi^{3,4}}$ states will be suppressed even more strongly, since $\Omega \ll \Omega_{0}$. This gives a standard STIRAP adiabaticity requirement $\Omega T_{tr}\gg 1$, where $T_{tr}$ is the c-STIRAP transfer time. To study the effect of the decay from $\ket{g_{2}}$ and $\ket{g_{1}}$ on the dark state evolution, we turn to a density matrix description \cite{STIRAP-with-loss}, and use the adiabatic basis states. The density matrix equation then takes a form \begin{equation} \label{eq:den-matr-a} i\hbar \frac{d\rho^{a}}{dt}=\left[H^{a},\rho^{a}\right]-i\hbar\left[W^{T}\dot{W},\rho^{a}\right]-{\cal L}^a\rho^a, \end{equation} where the density matrix $\rho$ and the Liouville operator ${\cal L}$ in this basis are given by $\rho^{a}=W^{T}\rho W$ and ${\cal L}^a\rho^a=W^{T}{\cal L}\rho W$, and the Hamiltonian $H^{a}$ is diagonal; $W$ is the rotation matrix. The Liouville operator ${\cal L}$ consists of the usual decays, where only population decays ($\propto T_1^{-1}$) into other vibrational states or the continuum are considered (see Fig.\ref{fig:chain-STIRAP}). Since at $t=-\infty$ all population is assumed to be in state $\ket{g_1}$, initial conditions for \eq{eq:den-matr-a} read as $\rho^{a}_{00}=1$, $\rho^{a}_{nm}=0$ for $nm\neq 00$, where $\rho^{a}_{00}$ denotes the dark state population. 
The decay of the dark state due to the population loss from the $\ket{g_{1}}$ and $\ket{g_{2}}$ states is then described by the equation (keeping only terms up to the $\Omega^{2}/\Omega_{0}^{2}$ order) \begin{equation} \label{eq:dark-state-chainwise} \frac{\dot{\rho}^{a}_{00}}{\rho^{a}_{00}}\approx -(\Gamma_{2}+\Gamma_{1}\cos^{2}\theta)\left(\frac{\Omega}{2\Omega_{0}}\sin 2\theta\right)^{2}- \Gamma_{1} \cos^{2}\theta. \end{equation} Equation \noeq{eq:dark-state-chainwise} shows that the intermediate state decay can be neglected during the transfer time $T_{tr}$ if $(\Gamma_{1}+\Gamma_{2})T_{tr}\left(\sin2\theta\,\Omega/2\Omega_{0}\right)^{2}\ll1$. From this expression one can see that the intermediate state decay rate is reduced by a factor $(\Omega/\Omega_{0})^{2}\ll1$ in this regime. It also follows from \eq{eq:dark-state-chainwise} that decay from $\ket{g_{1}}$ is not suppressed, so that the transfer process has to be faster than this decay. Magneto- and photoassociation techniques produce molecules mostly from ultracold Bose gases, two-spin-component Fermi gases, and mixtures of alkali-metal atomic gases. In traps with high initial atomic density, weakly bound Feshbach molecules rapidly decay due to inelastic atom-molecule collisions, which were found to be the major factor limiting the molecule lifetime. Depending on the quantum statistics of the constituent atoms, the alkali dimers show different behavior with respect to inelastic atom-molecule and molecule-molecule collisions. Fermionic alkali dimers in the Feshbach state are very stable with respect to collisions, especially close to the resonance, where the scattering length is large. Lifetimes of Feshbach molecules of the order of 1 s have been observed experimentally \cite{Feshbach-lifetime,Feshbach-lifetime-1}. In contrast, bosonic and mixed dimers experience fast vibrational quenching due to inelastic atom-molecule collisions, even in their Feshbach state.
The atomic density in a trap is typically in the range $n_{at}\sim 10^{11}-10^{14}$ cm$^{-3}$, so that the Feshbach state relaxation rate is in the range $\Gamma_{1}\sim 10^{1}-10^{4}$ s$^{-1}$ (calculated from the corresponding inelastic atom-molecule collision coefficient $k_{inel}\sim 10^{-10}$ cm$^{3}$s$^{-1}$ \cite{Ketterly,Cs,Rb-collisions}). At the same atomic densities the vibrational relaxation rate $\Gamma_{2}$ of intermediate vibrational states for bosonic molecules is in the range $\Gamma_{2}\sim 10^{2}-10^{5}$ s$^{-1}$ (calculated from $k_{inel}\sim 6\cdot 10^{-10}$ cm$^{3}$s$^{-1}$ for $^{7}$Li$_{2}$ \cite{collisions-review} and the same range of atomic densities). Inelastic molecule-molecule collisional relaxation rates are about two orders of magnitude smaller due to the typically smaller molecular density. We next illustrate the technique for a sample seven-state bosonic $^{87}$Rb$_{2}$ molecular system (see the inset to Fig.\ref{fig:STIRAP-b}a). In the first step, the Feshbach state can be coupled to the electronically excited pure long range molecular state $\ket{0_{g}^{-},v,J=0}$, located close to the $5S_{1/2}+5P_{3/2}$ dissociation asymptote. For example, following \cite{STIRAP-experiment}, the $v=31$ vibrational level can be chosen $6.87$ cm$^{-1}$ below the dissociation limit. The second STIRAP step can be to $v=116$ in the ground electronic state. The authors of Ref.\cite{STIRAP-experiment} mention that the Franck-Condon factors from the excited $\ket{0_{g}^{-},v=31,J=0}$ state to the ground state vibrational levels down to $X\;^{1}\Sigma^{+}_{g}(v=116)$ are similar to those of the second-to-last vibrational state used in the STIRAP experiment in \cite{STIRAP-experiment}. The ground $v=0$ state can then be reached in four steps, using e.g. the path given in Table I.
We note that in Rb$_{2}$ the $v=0$ state cannot be reached from the $v=116$ state in two steps due to unfavorable Franck-Condon factors; a minimum of four steps is therefore required, resulting in a seven-state system. A four-step path from the Feshbach to the $v=0$ state can be realized in Cs$_{2}$ \cite{Cs-STIRAP}, and therefore in other alkali dimers as well. \begin{table} \centering \caption{Possible chainwise transfer path from the Feshbach to the ground rovibrational state in the Rb$_{2}$ molecule. Also shown are the corresponding transition dipole moments and wavelengths.} \begin{tabular}{c c c c} \hline i & $v - v'$ transition & $D_{v,v'i}$ & $\lambda$ \\ & & Debye & nm \\ \hline 1 & $\ket{Feshbach} - \ket{0^{-}_{g},v=31,J=0}$ & 0.4 & 780.7 \\ 2 & $\ket{0^{-}_{g},v=31,J=0} - X\;^{1}\Sigma^{+}_{g}(v=116,J=0)$ & 0.8 & 780.4 \\ 3 & $X\;^{1}\Sigma^{+}_{g}(v=116,J=0) - A\;^{1}\Sigma^{+}_{u}(v'=152,J=1)$ & 0.55 & 846 \\ 4 & $A\;^{1}\Sigma^{+}_{u}(v'=152,J=1)- X\;^{1}\Sigma^{+}_{g}(v=50,J=0)$ & 0.64 & 907.4 \\ 5 & $X\;^{1}\Sigma^{+}_{g}(v=50,J=0)- A\;^{1}\Sigma^{+}_{u}(v'=21,J=1)$ & 0.53 & 990 \\ 6 & $A\;^{1}\Sigma^{+}_{u}(v'=21,J=1)- X\;^{1}\Sigma^{+}_{g}(v=0,J=0)$ & 2.37 & 856.4 \\ \hline \end{tabular} \label{table:Rb} \end{table} The transitions $\ket{e_{1}}-\ket{g_{2}}$, $\ket{g_{2}}-\ket{e_{2}}$, $\ket{e_{2}}-\ket{g_{3}}$ and $\ket{g_{3}}-\ket{e_{3}}$ are coupled by CW laser fields, while the first transition $\ket{g_{1}}-\ket{e_{1}}$ and the last transition $\ket{e_{3}}-\ket{g_{4}}$ in the chain are coupled by the fields $\Omega_{1}=\Omega_{1}^{max}(1+\tanh{(t-\tau/2)/T})/2$ and $\Omega_{6}=\Omega_{6}^{max}(1-\tanh{(t+\tau/2)/T})/2$, respectively. In the above scheme we picked the $\ket{e_{3}}-\ket{g_{4}}$ transition with a large transition dipole moment, and intermediate transitions coupled by CW fields with close and reasonably large moments.
In this case the Stokes pulse intensity can be minimized, and CW fields, provided, e.g., by laser diodes, can have the same intensity to optimize the transfer efficiency. The wavelengths of the transitions in Table I are covered by Ti:Sapphire and diode lasers. To provide the phase coherence between the laser fields required to carry out STIRAP, lasers can be phase locked to spectral components of a frequency comb \cite{freq-comb}. The results of the numerical simulation are given in \fig{fig:STIRAP-b}. We assumed that the CW fields have the same electric field amplitude ${\cal E}_{0}$, resulting in a Rabi frequency $\Omega_{0\;i}=D_{v,v'i}{\cal E}_{0}/2\hbar$ for the $i$th transition. The Rabi frequencies of STIRAP fields were chosen to satisfy the condition that $\Omega$ is less than the binding energy of the Feshbach molecular state to minimize Raman dissociation of weakly bound molecules. The pulse duration $T$ and delay $\tau$ were varied to obtain the maximal transfer efficiency. To estimate the decay rate of intermediate vibrational states, the highest atomic density $n_{at}\sim 10^{14}$ cm$^{-3}$ available experimentally was used along with the inelastic collision coefficient for intermediate vibrational states $k_{inel}\sim 6\cdot 10^{-10}$ cm$^{3}$s$^{-1}$, giving $\Gamma_{2,3}=6\cdot 10^{4}$ s$^{-1}$. A decay rate of the Feshbach state of $\Gamma_{1}=10^{4}$ s$^{-1}$ was used. Numerical analysis shows that $>90\%$ of the population can be transferred to $v=0$ at high initial atomic density even in the presence of collisional decay from the initial Feshbach state. As can be seen from Fig.\ref{fig:STIRAP-b}b, the population of the intermediate ground vibrational states does not exceed $7\%$ during the transfer process, and only for a short time, reducing the molecular loss due to collisions in these states.
\begin{figure}\label{fig:STIRAP-b} \end{figure} We can now estimate intensities of CW and pulsed fields corresponding to Rabi frequencies used in our calculations. Taking the peak Rabi frequency of the pump and Stokes fields $\Omega_{1,6}^{max}=3\cdot 10^{7}$ s$^{-1}$, the corresponding intensities are $I_{1,6}^{peak}=c{\cal E}_{1,6}^{2}/8\pi=c(2\hbar\Omega_{1,6}^{max}/D_{v,v'\;1,6})^{2}/8\pi$, resulting in $I_{1}\sim 3$ W/cm$^{2}$ and $I_{6}\sim 0.1$ W/cm$^{2}$; for CW fields with a Rabi frequency $\Omega_{0\;i}\sim 6\cdot 10^{7}$ s$^{-1}$ the corresponding intensity is $I_{2,3,4,5}\sim 5$ W/cm$^{2}$. In summary, we propose a method of vibrational cooling of ultracold molecules, based on the multistate chainwise STIRAP technique. Molecules which are formed in high-lying vibrational states are transferred into the ground rovibrational state $v=0,J=0$ using Raman transitions via several intermediate vibrational states in the ground electronic state. Our technique provides 100$\%$ vibrational as well as rotational selectivity using selection rules $\Delta J=0,\pm 1$ for rotational transitions. Numerical analysis of the transfer process for a typical bosonic Rb$_{2}$ molecular system in a trap with a high atomic density $n_{at}\sim 10^{14}$ cm$^{-3}$ shows that transfer efficiencies $\sim 90\%$ are possible even in the presence of fast collisional relaxation of the Feshbach molecular state. The multistate chainwise STIRAP technique allows one to use various transitions, coupled by, e.g., rf fields and DC interactions. It can therefore be combined with the recently demonstrated resonant association method \cite{Res-assos}. Another possibility is to use the magnetic field dependent DC interchannel coupling between an entrance and a closed channel state as the first transition in the STIRAP chain \cite{STIRAP-DC} followed by optical transitions to the ground vibrational state.
The chainwise STIRAP can be applied to resonant photoassociation as well; in this case the first transition in the STIRAP chain will couple continuum states to a high-energy vibrational state in the ground electronic state \cite{Robin}. We gratefully acknowledge fruitful discussions with J. Ye and financial support from ARO and NSF. \end{document}
\begin{document} \maketitle \begin{abstract} We consider a class of $1D$ NLS perturbed with a steplike potential. We prove that the nonlinear solutions satisfy the double scattering channels in the energy space. The proof is based on the concentration-compactness/rigidity method. We prove moreover that in dimension higher than one, classical scattering holds if the potential is periodic in all but one dimension and is steplike and repulsive in the remaining one. \end{abstract} \section{Introduction}\label{intro} The main motivation of this paper is the analysis of the behavior for large times of solutions to the following $1D$ Cauchy problem (see below for a suitable generalization in higher dimensions): \begin{equation}\label{NLSV} \left\{ \begin{aligned} i\partial_{t}u+\partial_{x}^2 u-Vu&=|u|^{\alpha}u, \quad (t,x)\in \mathbb{R}\times \mathbb{R}, \quad \alpha>4\\ u(0)&=u_0\in H^1(\mathbb{R}) \end{aligned}\right., \end{equation} namely we treat the $L^2$-supercritical defocusing power nonlinearities, and $V:{\mathbb{R}}\to{\mathbb{R}}$ is a real time-independent steplike potential. More precisely we assume that $V(x)$ has two different asymptotic behaviors at $\pm\infty$: \begin{equation}\label{differentlimit} a_+=\lim_{x\rightarrow+ \infty} V(x)\neq \lim_{x\rightarrow -\infty} V(x)=a_-. \end{equation} In order to simplify the presentation we shall assume in our treatment \begin{equation*} a_+=1 \quad\hbox{ and }\quad a_-=0, \end{equation*} but of course the arguments and the results below can be extended to the general case $a_+\neq a_-$. Roughly speaking, the Cauchy problem \eqref{NLSV} looks like the following Cauchy problems respectively for $x\gg 0$ and $x\ll 0$: \begin{equation}\label{NLS} \left\{\begin{aligned} i\partial_{t}v+\partial_{x}^2v&=|v|^{\alpha}v\\ v(0)&=v_0\in H^1(\mathbb{R}) \end{aligned} \right.
\end{equation} and \begin{equation}\label{NLS1} \left\{ \begin{aligned} i\partial_{t}v+(\partial_{x}^2-1)v&=|v|^{\alpha}v\\ v(0)&=v_0\in H^1(\mathbb{R}) \end{aligned} \right.. \end{equation} \noindent We recall that in $1D,$ the long time behavior of solutions to \eqref{NLS} (and also to \eqref{NLS1}) was first obtained in the work by Nakanishi (see \cite{N}), who proved that the solutions to \eqref{NLS} (and also \eqref{NLS1}) scatter to a free wave in $H^{1}(\mathbb{R})$ (see \autoref{def-classic} for a precise definition of scattering from nonlinear to linear solutions in a general framework). Nakanishi's argument combines induction on the energy with a suitable version of Morawetz inequalities with time-dependent weights. Alternative proofs based on the use of the interaction Morawetz estimates, first introduced in \cite{CKSTT}, have been obtained later (see \cite{CHVZ, CGT, PV, Vis} and the references therein). \newline As far as we know, there are no results available in the literature about the long time behavior of solutions to NLS perturbed by a steplike potential, and this is the main motivation of this paper. We recall that in the physics literature the steplike potentials are called {\em barrier potentials} and are very useful to study the interactions of particles with the boundary of a solid (see Gesztesy \cite{G} and Gesztesy, Nowell and P\"otz \cite{GNP} for more details). We also mention the paper \cite{DaSi} where, among other results, the long time behavior of solutions generated by the propagator $e^{i t(\partial_x^2 - V)}$, with $V(x)$ steplike, is studied via the twisting trick (see below for more details on the definition of the double scattering channels). For a more complete list of references devoted to the analysis of steplike potentials we refer to \cite{DS}.
Nevertheless, to the best of our knowledge, no results are available about the long time behavior of solutions to the nonlinear Cauchy problem \eqref{NLSV} with a steplike potential. \\ \\ It is worth mentioning that in $1D$, we can rely on the Sobolev embedding $H^1(\mathbb{R} )\hookrightarrow L^\infty(\mathbb{R})$. Hence it is straightforward to show that the Cauchy problem \eqref{NLSV} is locally well posed in the energy space $H^1(\mathbb{R})$. In higher dimensions the local well posedness theory is also well known, see for example Cazenave's book \cite{CZ}, once the good dispersive properties of the linear flow are established. Moreover, thanks to the defocusing character of the nonlinearity, we can rely on the conservation of the mass and of the energy below, valid in any dimension: \begin{equation}\label{consmass} \|u(t)\|_{L^2(\mathbb{R}^d)}=\|u(0)\|_{L^2(\mathbb{R}^d)}, \end{equation} and \begin{equation}\label{consen} E(u(t)):=\frac{1}{2}\int_{\mathbb{R}^d}\bigg(|\nabla u(t)|^2+V|u(t)|^2+\frac{2}{\alpha+2}|u(t)|^{\alpha+2}\bigg) dx=E(u(0)), \end{equation} in order to deduce that the solutions are global. Hence for any initial datum $u_0\in H^1(\mathbb{R}^d)$ there exists a unique global solution $u(t,x)\in {\mathcal C}(\mathbb{R}; H^1(\mathbb{R}^d))$ to \eqref{NLSV} for $d=1$ (and to \eqref{NLSV-d} below in higher dimension). It is well-known that a key point in order to study the long time behavior of nonlinear solutions is a good knowledge of the dispersive properties of the linear flow, namely the so called Strichartz estimates. Many works have been devoted to this topic, both in $1D$ and in higher dimensions. We briefly mention \cite{AY, CK, DF, GS, W1, W2, Y} for the one dimensional case and \cite{BPST-Z, GVV, JK, JSS, R, RS} for the higher dimensional case, referring to the bibliographies contained in these papers for a more detailed list of works on the subject.
It is worth mentioning that in all the papers mentioned above the potential perturbation is assumed to decay at infinity, hence steplike potentials are not allowed. Concerning contributions in the literature to NLS perturbed by a decaying potential, there are several results, among which we quote the most recent ones: \cite{BV, CA, CGV, GHW, H, La, Li, LZ}, and all the references therein. To the best of our knowledge, the only paper where the dispersive properties of the corresponding $1D$ linear flow perturbed by a steplike potential $V(x)$ have been analyzed is \cite{DS}, where the $L^1-L^ \infty$ decay estimate in $1D$ is proved: \begin{equation}\label{disp} \|e^{it(\partial_{x}^2-V)}f\|_{L^{\infty}(\mathbb{R})}\lesssim|t|^{-1/2}\|f\|_{L^1(\mathbb{R})}, \quad \forall\, t\neq0\quad \forall\, f\in L^1({\mathbb{R}}). \end{equation} We point out that besides the different spatial behavior of $V(x)$ to the left and to the right of the line, other assumptions must be satisfied by the potential. There is a huge literature devoted to those spectral properties; nevertheless we shall not focus on it, since our main point is to show how to go from \eqref{disp} to the analysis of the long time behavior of solutions to \eqref{NLSV}. We will therefore assume the dispersive estimate \eqref{disp} as a black box; for its proof, under further assumptions on the steplike potential $V(x)$, we refer to Theorem $1.1$ in \cite{DS}. Our first aim is to provide a nonlinear version of the {\em double scattering channels} that has been established in the literature in the linear context (see \cite{DaSi}). \begin{definition}\label{def1} Let $u_0\in H^1(\mathbb{R})$ be given and $u(t, x)\in \mathcal {C}(\mathbb{R};H^1(\mathbb{R}))$ be the unique global solution to \eqref{NLSV} with $V(x)$ that satisfies \eqref{differentlimit} with $a_-=0$ and $a_+=1$.
Then we say that $u(t,x)$ satisfies the {\em double scattering channels} provided that $$\lim_{t\rightarrow \pm \infty} \|u(t,x) - e^{it\partial_x^2} \eta_\pm - e^{it(\partial_x^2-1)} \gamma_\pm \|_{H^1 (\mathbb{R})}=0, $$ for suitable $\eta_\pm, \gamma_\pm\in H^1(\mathbb{R})$. \end{definition} We can now state our first result in $1D$. \begin{theorem}\label{1Dthm} Assume that $V:{\mathbb{R}}\to{\mathbb{R}}$ is a bounded, nonnegative potential satisfying \eqref{differentlimit} with $a_-=0$ and $a_+=1,$ and \eqref{disp}. Furthermore, suppose that: \begin{itemize} \item\label{hhyp0} $|\partial_xV(x)|\overset{|x|\rightarrow\infty}\longrightarrow0$; \\ \item\label{hhyp1}$\lim_{x\rightarrow +\infty}|x|^{1+\varepsilon}|V(x)-1|=0,\, \lim_{x\rightarrow -\infty}|x|^{1+\varepsilon}|V(x)|=0\,$ for some\, $\varepsilon>0$;\\ \item\label{hhyp3} $x \cdot \partial_xV (x)\leq 0.$ \end{itemize} Then for every $u_0\in H^1({\mathbb{R}})$ the corresponding unique solution $u(t,x)\in \mathcal {C}(\mathbb{R};H^1(\mathbb{R}))$ to \eqref{NLSV} satisfies the {\em double scattering channels} (according to \autoref{def1}). \end{theorem} \begin{remark}\label{per1D} It is worth mentioning that the assumption \eqref{disp} may look rather strong. However, we emphasize that while the estimate \eqref{disp} provides for free information on the long time behavior of nonlinear solutions for small data, it is in general more delicate to deal with large data, as is the case in \autoref{1Dthm}. For instance, consider the case of $1D$ NLS perturbed by a periodic potential. In this situation the validity of the dispersive estimate for the linear propagator has been established in the literature (see \cite{Cu}), as well as small data nonlinear scattering (see \cite{CuV}). However, to the best of our knowledge, it is unclear how to deal with large data scattering. \end{remark} The proof of \autoref{1Dthm} goes in two steps.
The first one is to show that solutions to \eqref{NLSV} scatter to solutions of the linear problem (see \autoref{def-classic} for a rigorous definition of scattering in a general framework); the second one is the asymptotic description of solutions to the linear problem associated with \eqref{NLSV} in the energy space $H^1$ (see \autoref{linscat}). Concerning the first step, we use the concentration-compactness/rigidity technique pioneered by Kenig and Merle (see \cite{KM1, KM2}). Since this argument is rather general, we shall present it in a more general higher dimensional setting. \newline More precisely, in higher dimension we consider the following family of NLS \begin{equation}\label{NLSV-d} \left\{ \begin{aligned} i\partial_{t}u+\Delta u-Vu&=|u|^{\alpha}u, \quad (t,x)\in \mathbb{R}\times \mathbb{R}^d\\ u(0)&=u_0\in H^1(\mathbb{R}^d) \end{aligned}\right., \end{equation} where \begin{equation*} \begin{cases} \frac4d<\alpha<\frac{4}{d-2} &\text{if}\qquad d\geq3\\ \frac4d<{\alpha} &\text{if}\qquad d\leq2 \end{cases}. \end{equation*} The potential $V(x)$ is assumed to satisfy, uniformly in $\bar x\in{\mathbb{R}}^{d-1},$ \begin{equation}\label{difflim2} a_-=\lim_{x_1\rightarrow-\infty} V(x_1,\bar x)\neq \lim_{x_1\rightarrow +\infty} V(x_1, \bar x)=a_+, \quad\hbox{ where }\quad x=(x_1, \bar x). \end{equation} Moreover we assume that $V(x)$ is periodic w.r.t. the variables $\bar x=(x_2,\dots, x_d).$ Namely, we assume the existence of $d-1$ linearly independent vectors $P_2,\dots,P_d\in{\mathbb{R}}^{d-1}$ such that for any fixed $x_1\in{\mathbb{R}}$ the following holds: \begin{equation}\label{periods} \begin{aligned} V(x_1, \bar x)=V(x_1,\bar x +k_2P_2+\dots +k_dP_d),\\ \forall\,\bar x=(x_2,\dots,x_d)\in{\mathbb{R}}^{d-1}, \quad \forall\, (k_2,\dots,k_d)\in\mathbb{Z}^{d-1}. \end{aligned} \end{equation} Some comments about this choice of assumptions on $V(x)$ are given in \autoref{assVd}.
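The restriction $\frac4d<\alpha<\frac{4}{d-2}$ is the inter-critical range: the scaling-critical Sobolev exponent $s_c=\frac{d}{2}-\frac{2}{\alpha}$ of the unperturbed equation lies strictly between $0$ (mass-critical) and $1$ (energy-critical). A small exact-arithmetic check of this elementary fact (an illustration with arbitrarily sampled values, not part of the paper):

```python
from fractions import Fraction

# s_c = d/2 - 2/alpha is the Sobolev exponent invariant under the scaling
# u(t,x) -> h^{2/alpha} u(h^2 t, h x) of the free equation i u_t + Delta u = |u|^alpha u.
def s_c(d, alpha):
    return Fraction(d, 2) - 2 / Fraction(alpha)

for d in (1, 2, 3, 4):
    lo = Fraction(4, d)                                # mass-critical endpoint
    hi = Fraction(4, d - 2) if d >= 3 else lo + 10     # no upper bound if d <= 2
    for alpha in (lo + Fraction(1, 7), (lo + hi) / 2): # samples in the open range
        assert 0 < s_c(d, alpha) < 1                   # inter-critical
    assert s_c(d, lo) == 0                             # alpha = 4/d
    if d >= 3:
        assert s_c(d, hi) == 1                         # alpha = 4/(d-2)
```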
\newline Exactly as in the $1D$ case mentioned above, we assume as a black box the dispersive estimate \begin{equation}\label{disp-d} \|e^{it(\Delta-V)}f\|_{L^{\infty}(\mathbb{R}^d)}\lesssim|t|^{-d/2}\|f\|_{L^1(\mathbb{R}^d)}, \quad \forall\, t\neq0\quad \forall\, f\in L^1({\mathbb{R}}^d). \end{equation} Next we recall the classical definition of scattering from nonlinear to linear solutions in a general setting. We recall that once \eqref{disp-d} is granted, the local (and also global, since the equation is defocusing) existence and uniqueness of solutions to \eqref{NLSV-d} follows by standard arguments. \begin{definition}\label{def-classic} Let $u_0\in H^1(\mathbb{R}^d)$ be given and $u(t, x)\in \mathcal {C}(\mathbb{R};H^1(\mathbb{R}^d))$ be the unique global solution to \eqref{NLSV-d}. Then we say that $u(t,x)$ {\em scatters to a linear solution} provided that $$\lim_{t\rightarrow \pm \infty} \|u(t,x) - e^{it(\Delta-V)}\psi^\pm \|_{H^1 (\mathbb{R}^d)}=0 $$ for suitable $\psi^\pm\in H^1(\mathbb{R}^d).$ \end{definition} In the sequel we will also use the following auxiliary Cauchy problems, which roughly speaking represent the Cauchy problem \eqref{NLSV-d} in the regions $x_1\ll0$ and $x_1\gg0$ (provided that $a_-=0$ and $a_+=1$ in \eqref{difflim2}): \begin{equation}\label{NLS-d} \left\{ \begin{aligned} i\partial_{t}u+\Delta u&=|u|^{\alpha} u \quad (t,x)\in \mathbb{R}\times \mathbb{R}^d\\ u(0)&=\psi\in H^1(\mathbb{R}^d) \end{aligned}\right., \end{equation} and \begin{equation}\label{NLS1-d} \left\{ \begin{aligned} i\partial_{t}u+(\Delta -1)u&=|u|^{\alpha} u \quad (t,x)\in \mathbb{R}\times \mathbb{R}^d\\ u(0)&=\psi\in H^1(\mathbb{R}^d) \end{aligned}\right.. \end{equation} Notice that these problems are respectively the analogues of \eqref{NLS} and \eqref{NLS1} in the higher dimensional setting. \newline We can now state our main result about scattering from nonlinear to linear solutions in general dimension $d\geq 1$.
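Notice also that, at the linear level, the flows of \eqref{NLS-d} and \eqref{NLS1-d} differ only by a global phase: $e^{it(\Delta-1)}=e^{-it}e^{it\Delta}$, since the constant potential commutes with everything. The following small numerical illustration of this identity uses a periodic $1D$ Fourier grid (the grid and datum are arbitrary choices, not from the paper):

```python
import numpy as np

# e^{it(Delta - 1)} f = e^{-it} e^{it Delta} f: the constant potential 1 only
# contributes the global phase e^{-it}.  On a periodic grid both propagators
# are diagonal in Fourier space, so the identity can be checked directly.
N, L, t = 256, 40.0, 0.7
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
f = np.exp(-x ** 2) * np.exp(1j * x)          # arbitrary smooth datum

free = np.fft.ifft(np.exp(-1j * t * k ** 2) * np.fft.fft(f))          # e^{it Delta} f
via_phase = np.exp(-1j * t) * free                                    # e^{-it} e^{it Delta} f
direct = np.fft.ifft(np.exp(-1j * t * (k ** 2 + 1)) * np.fft.fft(f))  # e^{it(Delta-1)} f
err = np.max(np.abs(via_phase - direct))      # machine-precision agreement
```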
\begin{theorem}\label{finalthm} Let $V\in\mathcal C^1(\mathbb{R}^d;{\mathbb{R}})$ be a bounded, nonnegative potential which satisfies \eqref{difflim2} with $a_-=0,\,a_+=1,$ and \eqref{periods}, and assume moreover: \begin{itemize} \item\label{hyp0} $|\nabla V(x_1,\bar x)|\overset{|x_1|\to\infty}\longrightarrow0$ uniformly in $\bar x\in{\mathbb{R}}^{d-1};$\\ \item\label{hyp2} the decay estimate \eqref{disp-d} is satisfied;\\ \item\label{hyp3} $x_1 \cdot \partial_{x_1}V (x)\leq 0$ for any $x\in{\mathbb{R}}^d$. \end{itemize} Then for every $u_0\in H^1(\mathbb{R}^d)$ the unique corresponding global solution $u(t,x)\in \mathcal {C}(\mathbb{R};H^1(\mathbb{R}^d))$ to \eqref{NLSV-d} scatters. \end{theorem} \begin{remark}\label{assVd} Next we comment on the assumptions made on the potential $V(x)$ in \autoref{finalthm}. Roughly speaking, we assume that the potential $V(x_1,\dots,x_d)$ is steplike and repulsive w.r.t. $x_1$, and periodic w.r.t. $(x_2,\dots, x_d)$. The main motivation for this choice is that this situation is reminiscent, according to \cite{DaSi}, of the higher dimensional version of the $1D$ double scattering channels mentioned above. Moreover we highlight the fact that repulsivity of the potential in one single direction is sufficient to get scattering, in contrast with other situations considered in the literature where repulsivity is assumed w.r.t. the full set of variables $(x_1,\dots,x_d)$. Another point is that along the proof of \autoref{finalthm} we show how to deal with a partially periodic potential $V(x)$, while, to the best of our knowledge, large data scattering for potentials periodic w.r.t. the full set of variables has not been established elsewhere, not even in the $1D$ case (see \autoref{per1D}).\end{remark}\begin{remark}\label{repulsively}Next we discuss the repulsivity assumption on $V(x)$.
As pointed out in \cite{H}, this assumption on the potential plays the same role as the convexity assumption for the obstacle problem studied by Killip, Visan and Zhang in \cite{KVZ}. The author highlights the fact that both the strict convexity of the obstacle and the repulsivity of the potential prevent wave packets from refocusing once they are reflected by the obstacle or by the potential. From a technical point of view, the repulsivity assumption is made in order to obtain the right sign in the virial identities, and hence to conclude the rigidity part of the Kenig and Merle argument. In this paper, since we assume repulsivity only in one direction, we use a suitable version of the Nakanishi-Morawetz time-dependent estimates in order to get the rigidity part in the Kenig and Merle road map. Of course it is a challenging mathematical question to understand whether or not the repulsivity assumption (partial or global) on $V(x)$ is a necessary condition in order to get scattering. \end{remark}When we specialize to $1D$ we are able to complete the theory of double scattering channels in the energy space. Concerning the linear part of our work, we give the following result, which, in conjunction with \autoref{finalthm} for $d=1$, provides the proof of \autoref{1Dthm}. \begin{theorem}\label{linscat} Assume that $V(x)\in\mathcal C(\mathbb{R};{\mathbb{R}})$ satisfies the following space decay rate: \begin{equation}\label{hyp1} \lim_{x\rightarrow +\infty}|x|^{1+\varepsilon}|V(x)-1|=\lim_{x\rightarrow -\infty}|x|^{1+\varepsilon}|V(x)|=0\quad\text{for\, some\quad} \varepsilon>0. \end{equation} Then for every $\psi\in H^1(\mathbb{R})$ we have $$\lim_{t\rightarrow \pm \infty} \|e^{it(\partial_x^2-V)} \psi - e^{it\partial_x^2} \eta_\pm - e^{it(\partial_x^2-1)} \gamma_\pm \|_{H^1 (\mathbb{R})}=0$$ for suitable $\eta_\pm, \gamma_\pm\in H^1(\mathbb{R}).$ \end{theorem} Notice that \autoref{linscat} is a purely linear statement.
The main point (compared with other results in the literature) is that the asymptotic convergence is stated with respect to the $H^1$ topology and not with respect to the weaker $L^2$ topology. Indeed we point out that the content of \autoref{linscat} is well known and has been proved in \cite{DaSi} in the $L^2$ setting. However, it seems natural to us to understand, in view of \autoref{finalthm}, whether or not the result can be extended to the $H^1$ setting. In fact, according to \autoref{finalthm}, the asymptotic convergence of the nonlinear dynamics to the linear dynamics occurs in the energy space and not only in $L^2$. As far as we know, the issue of $H^1$ linear scattering has not been previously discussed in the literature, not even in the case of a potential which decays in both directions $\pm\infty$. For this reason we have decided to state \autoref{linscat} as an independent result. \subsection{Notations}\label{notations} The spaces $L_I^{p}L^{q}=L^{p}_{t}(I;L^{q}_x(\mathbb{R}^d))$ are the usual mixed time-space Lebesgue spaces endowed with the norm \begin{equation}\notag \|u\|_{L^{p}_{t}(I;L^{q}_x(\mathbb{R}^d))}=\bigg(\int_{I}\bigg|\int_{\mathbb{R}^d}|u(t,x)|^q\,dx\bigg|^{p/q}\,dt\bigg)^{1/p}, \end{equation} where the interval $I\subseteq\mathbb{R},$ bounded or unbounded, will be clear from the context. If $I=\mathbb{R}$ we lighten the notation by writing $L^pL^q.$ The operator $\tau_z$ denotes the translation operator $\tau_zf(x):=f(x-z).$ If $z\in\mathbb C,$ $\Re{z}$ and $\Im{z}$ denote the real and imaginary parts of the complex number $z$, and $\bar z$ is its complex conjugate. In what follows, when dealing with a dimension $d\geq2,$ we write ${\mathbb{R}}^d\ni x:=(x_1,\bar x)$ with $\bar x\in {\mathbb{R}}^{d-1}.$ For $x\in{\mathbb{R}}^d$ the quantity $|x|$ denotes the usual norm in ${\mathbb{R}}^d$.
With standard notation, the Hilbert spaces $L^2(\mathbb{R}^d), H^1(\mathbb{R}^d), H^2(\mathbb{R}^d)$ will be denoted simply by $L^2, H^1, H^2$, and similarly for the Lebesgue spaces $L^p(\mathbb{R}^d)$. By $(\cdot,\cdot)_{L^2}$ we mean the usual $L^2$-inner product, i.e. $(f,g)_{L^2}=\int_{\mathbb{R}^d}f\bar{g}\,dx,$ $\forall\, f,g\in L^2,$ while the energy norm $\mathcal H$ is the one induced by the inner product $(f,g)_{\mathcal H}:=(f,g)_{\dot H^1}+(Vf,g)_{L^2}.$ Finally, if $d\geq 3,$ $2^*=\frac{2d}{d-2}$ is the Sobolev conjugate of $2$ ($2^*$ being $+\infty$ in dimension $d\leq2$), while if $1\leq p\leq\infty$ then $p^\prime$ is the conjugate exponent given by $p^{\prime}=\frac{p}{p-1}.$ \newline \section{Strichartz Estimates}\label{strichartz} The well-known Strichartz estimates are a basic tool in the study of the nonlinear Schr\"odinger equation, and we will assume their validity in our context. Roughly speaking, these essential space-time estimates arise from the so-called dispersive estimate for the Schr\"odinger propagator \begin{equation}\label{disp2} \|e^{it(\Delta-V)}f\|_{L^{\infty}}\lesssim|t|^{-d/2}\|f\|_{L^1}, \quad \forall\, t\neq0\quad \forall\, f\in L^1, \end{equation} which is proved in $1D$ in \cite{DS} under suitable assumptions on the steplike potential $V(x)$, and which we take for granted by hypothesis. \\ As a first consequence we get the Strichartz estimates $$\|e^{it(\Delta-V)}f\|_{L^aL^b}\lesssim \|f\|_{L^2},$$ where $a,b\in [2, \infty]$ are assumed to be Strichartz admissible, namely \begin{equation}\label{admissible} \frac 2a=d\left(\frac 12-\frac 1 b\right). \end{equation} We recall, as already mentioned in the introduction, that throughout the paper we assume the validity of the dispersive estimate \eqref{disp2} also in the higher dimensional setting.
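As an exact-arithmetic sanity check (an illustration, not part of the paper), one can verify that the pairs appearing in the estimates \eqref{fxc1} and \eqref{fxc2} below satisfy the admissibility relation \eqref{admissible}:

```python
from fractions import Fraction

# Check 2/a = d(1/2 - 1/b) for the pairs used in (fxc1) and (fxc2),
# for a few sample dimensions and nonlinearity powers.
def admissible(a, b, d):
    return 2 / Fraction(a) == d * (Fraction(1, 2) - 1 / Fraction(b))

for d in (1, 2, 3, 4):
    s = Fraction(2 * (d + 2), d)               # the diagonal pair of (fxc2)
    assert admissible(s, s, d)
    for alpha in (Fraction(4, d) + Fraction(1, 3), Fraction(4, d) + 2):
        a = 4 * (alpha + 2) / (d * alpha)      # the time exponent of (fxc1)
        b = alpha + 2                          # the space exponent r = alpha + 2
        assert admissible(a, b, d)
```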
We fix from now on the following Lebesgue exponents \begin{equation*} r=\alpha+2,\qquad p=\frac{2\alpha(\alpha+2)}{4-(d-2)\alpha},\qquad q=\frac{2\alpha(\alpha+2)}{d\alpha^2-(d-2)\alpha-4}, \end{equation*} where $\alpha$ is given by the nonlinearity in \eqref{NLSV-d}. Next, we give the linear estimates that will be fundamental in our study: \begin{align}\label{fxc1} \|e^{it(\Delta-V)}f\|_{L^{\frac{4({\alpha}+2)}{d{\alpha}}}L^r}&\lesssim \|f\|_{H^1},\\\label{fxc2} \|e^{it(\Delta-V)}f\|_{L^\frac{2(d+2)}{d} L^\frac{2(d+2)}{d} }&\lesssim \|f\|_{H^1},\\\label{fxc3} \|e^{it(\Delta-V)}f\|_{L^pL^r}&\lesssim \|f\|_{H^1}. \end{align} The last estimate that we need is the so-called inhomogeneous Strichartz estimate for non-admissible pairs: \begin{align}\label{str2.4} \bigg\|\int_{0}^{t}e^{i(t-s)(\Delta-V)}g(s)\,ds \bigg\|_{L^pL^r}\lesssim\|g\|_{L^{q'}L^{r'}}, \end{align} whose proof is contained in \cite{CW}. \begin{remark} In the unperturbed framework, i.e. in the absence of the potential, and in general dimensions, we refer to \cite{FXC} for comments and references about the Strichartz estimates \eqref{fxc1}, \eqref{fxc2}, \eqref{fxc3} and \eqref{str2.4}. \end{remark} \section{Perturbative nonlinear results}\label{perturbative} The results in this section are quite standard and hence we skip the complete proofs, which can be found for instance in \cite{BV, CZ, FXC}. In fact the arguments involved are a combination of the dispersive properties of the linear propagator and a standard perturbation argument. Throughout this section we assume that the estimate \eqref{disp-d} is satisfied by the propagator associated with the potential $V(x)$; for the moment we do not need the other assumptions made on $V(x)$. We also specify that in the sequel the couple $(p,r)$ is the one given in \autoref{strichartz}.
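For concreteness, the exponents above can be checked in exact arithmetic for a few sample values of $(d,\alpha)$ in the range of \eqref{NLSV-d} (an illustration, not part of the paper). Note in particular the purely algebraic identity $r=(\alpha+1)r'$, which is what allows H\"older's inequality in space to place $|u|^{\alpha}u$ in $L^{r'}$ for $u\in L^r$:

```python
from fractions import Fraction

def exponents(d, alpha):
    # r, p, q as fixed at the beginning of this section
    r = alpha + 2
    p = 2 * alpha * (alpha + 2) / Fraction(4 - (d - 2) * alpha)
    q = 2 * alpha * (alpha + 2) / Fraction(d * alpha ** 2 - (d - 2) * alpha - 4)
    return p, q, r

# sample values with 4/d < alpha < 4/(d-2) (no upper bound when d <= 2)
for d, alpha in ((1, Fraction(5)), (2, Fraction(3)), (3, Fraction(2))):
    p, q, r = exponents(d, alpha)
    assert p > 2 and q > 2             # genuine, finite Lebesgue exponents
    r_conj = r / (r - 1)               # the conjugate exponent r'
    assert (alpha + 1) * r_conj == r   # Holder: u in L^r  =>  |u|^alpha u in L^{r'}
```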
\begin{lemma}\label{lemma3.1} Let $u_0\in H^1$ and assume that the corresponding solution to \eqref{NLSV-d} satisfies $u(t,x)\in\mathcal{C}(\mathbb{R};H^{1})\cap L^{p}L^r$. Then $u(t,x)$ {\em scatters} to a linear solution in $H^1.$ \end{lemma} \begin{proof} It is a standard consequence of Strichartz estimates. \end{proof} \begin{lemma}\label{lemma3.2} There exists $\varepsilon_{0}>0$ such that for any $u_0\in H^{1}$ with $\|u_0\|_{H^1}\leq\varepsilon_{0},$ the solution $u(t,x)$ to the Cauchy problem \eqref{NLSV-d} {\em scatters} to a linear solution in $H^1$. \end{lemma} \begin{proof} It is a simple consequence of Strichartz estimates. \end{proof} \begin{lemma}\label{lemma3} For every $M>0$ there exist $\varepsilon=\varepsilon(M)>0$ and $C=C(M)>0$ with the following property: if $u(t,x)\in\mathcal{C}(\mathbb{R};H^1)$ is the unique global solution to \eqref{NLSV-d} and $w\in\mathcal{C}(\mathbb{R};H^1)\cap L^pL^r$ is a global solution to the perturbed problem \begin{equation*} \left\{ \begin{aligned} i\partial_{t}w+\Delta w-Vw&=|w|^{\alpha}w+e(t,x)\\ w(0,x)&=w_0\in H^1 \end{aligned} \right. \end{equation*} satisfying the conditions $\|w\|_{L^pL^r}\leq M$, $\|\int_{0}^{t}e^{i(t-s)(\Delta-V)}e(s)\,ds\|_{L^pL^r}\leq \varepsilon$ and $\|e^{it(\Delta-V)}(u_0-w_0)\|_{L^pL^r}\leq\varepsilon$, then $u\in L^pL^r$ and $\|u-w\|_{L^pL^r}\leq C \varepsilon.$ \end{lemma} \begin{proof} The proof is contained in \cite{FXC} (see Proposition $4.7$) and relies on \eqref{str2.4}. \end{proof} \section{Profile decomposition}\label{profile} The main content of this section is the following profile decomposition theorem.
\begin{theorem}\label{profiledec} Let $V(x)\in L^\infty$ satisfy: $V\geq 0$, \eqref{periods}, \eqref{difflim2} with $a_{-}=0$ and $a_{+}=1,$ the dispersive relation \eqref{disp-d}, and suppose that $|\nabla V(x_1,\bar x)|\rightarrow0$ as $|x_1|\to\infty$ uniformly in $\bar x\in{\mathbb{R}}^{d-1}.$ Given a bounded sequence $\{v_n\}_{n\in\mathbb{N}}\subset H^1,$ $\forall\, J\in\mathbb{N}$ and $\forall\,1\leq j\leq J$ there exist two sequences $\{t_n^j\}_{n\in\mathbb{N}}\subset {\mathbb{R}},\,\{x_n^j\}_{n\in\mathbb{N}}\subset{\mathbb{R}}^d$ and $\psi^j\in H^1$ such that, up to subsequences, \begin{equation*} v_n=\sum_{1\leq j\leq J}e^{it_n^j(\Delta - V)}\tau_{x_n^j}\psi^j+R_n^J \end{equation*} with the following properties: \begin{itemize} \item for any fixed $j$ we have the following dichotomy for the time parameters $t_n^j$: \begin{align*} \hbox{either}\quad t_n^j=0\quad \forall\, n\in\mathbb{N}\quad &\hbox{or} \quad t_n^j\overset{n\to \infty}\longrightarrow\pm\infty; \end{align*} \item for any fixed $j$ we have the following scenarios for the space parameters $x_n^j=(x_{n,1}^j, \bar x_n^j)\in {\mathbb{R}}\times {\mathbb{R}}^{d-1}$: \begin{equation*} \begin{aligned} \hbox{either}& \quad x_n^j=0\quad \forall\, n\in\mathbb{N}\\ \hbox{or} &\quad |x_{n,1}^j|\overset{n\to \infty}\longrightarrow\infty\\ \hbox{or} \quad x_{n,1}^j=0, \quad \bar x_{n}^j=\sum_{l=2}^dk_{n,l}^j &P_l \quad \hbox{with} \quad k_{n,l}^j\in \mathbb Z \quad \hbox{and} \quad \sum_{l=2}^d |k_{n,l}^j| \overset{n\to \infty}\longrightarrow\infty, \\ \quad\qquad\hbox{ where } P_l \hbox{ are given in \eqref{periods}; } \end{aligned} \end{equation*} \item (orthogonality condition) for any $j\neq k$ \begin{equation*} |x_n^j-x_n^k|+|t_n^j-t_n^k| \overset{n\rightarrow\infty}\longrightarrow\infty; \end{equation*} \item (smallness of the remainder) $\forall\,\varepsilon>0\quad\exists\,J=J(\varepsilon)$ such that \begin{equation*} \limsup_{n\rightarrow\infty}\|e^{it(\Delta - V)}R_n^J\|_{L^{p}L^{r}}\leq\varepsilon; \end{equation*} \item by defining
$\|v\|_{\mathcal H}^2=\int (|\nabla v|^2 + V |v|^2) dx$ we have, as $n\to\infty,$ \begin{align*}\, \|v_n\|_{L^2}^2=&\sum_{1\leq j\leq J}\|\psi^j\|_{L^2}^2+\|R_n^J\|_{L^2}^2+o(1), \quad \forall\, J\in\mathbb{N}, \\ \|v_n\|^2_{\mathcal H}=&\sum_{1\leq j\leq J}\|\tau_{x_n^j}\psi^j\|^2_{\mathcal H}+\|R_n^J\|^2_{\mathcal H}+o(1), \quad \forall\, J\in\mathbb{N}; \end{align*} \item $\forall\, J\in\mathbb{N}$\quad and\quad $\forall\,\,2<q<2^*$ we have, as $n\to\infty,$ \begin{equation*} \|v_n\|_{L^q}^q=\sum_{1\leq j\leq J}\|e^{it_n^j(\Delta - V)}\tau_{x_n^j}\psi^j\|_{L^q}^q+\|R_n^J\|_{L^q}^q+o(1);\\ \end{equation*} \item with $E(v)=\frac{1}{2}\int \big(|\nabla v|^2+V|v|^2+\frac{2}{\alpha+2}|v|^{\alpha+2}\big)dx,$ we have, as $n\to\infty,$ \begin{equation}\label{energypd} E(v_n)=\sum_{1\leq j\leq J}E(e^{it_n^j(\Delta - V)}\tau_{x_n^j}\psi^j)+E(R_n^J)+o(1), \quad \forall\, J\in\mathbb{N}. \end{equation} \end{itemize} \end{theorem} First we prove the following lemma. \begin{lemma}\label{lemmapreli} Given a bounded sequence $\{v_n\}_{n\in{\mathbb{N}}}\subset H^1({\mathbb{R}}^d)$ we define \begin{equation*} \Lambda =\left\{ w\in L^2\quad |\quad \exists \{x_k\}_{k\in{\mathbb{N}}}\quad \hbox{and} \quad \{n_k\}_{k\in{\mathbb{N}}}\quad\text{such that}\quad \tau_{x_k}v_{n_k}\overset{L^2}\rightharpoonup w\right\} \end{equation*} and \begin{equation*} \lambda=\sup\{\|w\|_{L^2},\quad w\in\Lambda\}. \end{equation*} Then for every $q\in(2,2^*)$ there exist a constant $M=M(\sup_n\|v_n\|_{H^1})>0$ and an exponent $e=e(d,q)>0$ such that \begin{equation*} \limsup_{n\to\infty}\|v_n\|_{L^q}\leq M\lambda^e. \end{equation*} \end{lemma} \begin{proof} We consider a Fourier multiplier $\zeta$ defined as \begin{equation*} C^{\infty}_c({\mathbb{R}}^d;{\mathbb{R}})\ni\zeta(\xi)= \begin{cases} 1 & \text{if}\quad |\xi|\leq1\\ 0 & \text{if}\quad |\xi|>2 \end{cases}.
\end{equation*} By setting $\zeta_R(\xi)=\zeta(\xi/R),$ we define the pseudo-differential operator with symbol $\zeta_R,$ classically given by $\zeta_R(|D|)f=\mathcal F^{-1}(\zeta_R\mathcal Ff)(x),$ and similarly we define the operator $\tilde{\zeta}_R(|D|)$ with associated symbol $\tilde{\zeta}_R(\xi)=1-\zeta_R(\xi).$ Here by $\mathcal F,\mathcal F^{-1}$ we mean the Fourier transform operator and its inverse, respectively. For any $q\in(2,2^*)$ there exists $\epsilon\in(0,1)$ such that $H^\epsilon\hookrightarrow L^q$ with $q=\frac{2d}{d-2\epsilon}.$ Then \begin{equation*} \begin{aligned} \|\tilde{\zeta}_R(|D|)v_n\|_{L^q}&\lesssim \|\langle\xi\rangle^\epsilon\tilde{\zeta}_R(\xi)\hat{v}_n\|_{L^2_\xi}\\ &= \|\langle\xi\rangle^{\epsilon-1}\langle\xi\rangle\tilde{\zeta}_R(\xi)\hat{v}_n\|_{L^2_\xi}\\ &\lesssim R^{-(1-\epsilon)}, \end{aligned} \end{equation*} where in the last step we used that $\langle\xi\rangle^{\epsilon-1}\lesssim R^{\epsilon-1}$ on the support of $\tilde{\zeta}_R$ together with the boundedness of $\{v_n\}_{n\in\mathbb N}$ in $H^1$. For the localized part we consider instead a sequence $\{y_n\}_{n\in\mathbb N}\subset{\mathbb{R}}^d$ such that \begin{equation*} \|\zeta_R(|D|)v_n\|_{L^\infty}\leq2|\zeta_R(|D|)v_n(y_n)|, \end{equation*} and, up to subsequences, by using the well-known properties $\mathcal F^{-1}(fg)=\mathcal F^{-1}f\ast\mathcal F^{-1}g$ and $\mathcal F^{-1}\left(f\left(\frac{\cdot}{R}\right)\right)=R^d(\mathcal F^{-1}f)(R\cdot),$ we obtain \begin{equation*} \limsup_{n\to\infty}|\zeta_R(|D|)v_n(y_n)|=R^d\limsup_{n\to\infty}\left|\int\eta(Rx)v_n(x-y_n)\,dx\right|\lesssim R^{d/2}\lambda, \end{equation*} where we denoted $\eta=\mathcal F^{-1}\zeta$ and used the Cauchy-Schwarz inequality.
Given $\theta\in(0,1)$ such that $\frac1q=\frac{1-\theta}{2},$ by interpolation it follows that \begin{equation*} \|\zeta_R(|D|)v_n\|_{L^q}\leq\|\zeta_R(|D|)v_n\|^{\theta}_{L^\infty}\|\zeta_R(|D|)v_n\|^{1-\theta}_{L^2}, \end{equation*} so that, combining the bounds above, \begin{equation*} \limsup_{n\to\infty}\|v_n\|_{L^q}\lesssim \left(R^{\frac{d\theta}{2}}\lambda^{\theta}+R^{-1+\epsilon}\right), \end{equation*} and the proof is complete provided we select the radius $R=\lambda^{-\beta}$ with $0<\beta=\theta\left(1-\epsilon+\frac{d\theta}{2}\right)^{-1},$ which gives $e=\theta(1-\epsilon)\left(1-\epsilon+\frac{d\theta}{2}\right)^{-1}.$ \end{proof} Based on the previous lemma we can prove the following result. \begin{lemma} Let $\{v_n\}_{n\in {\mathbb{N}}}$ be a bounded sequence in $H^1({\mathbb{R}}^d).$ There exist, up to subsequences, a function $\psi\in H^1$ and two sequences $\{t_n\}_{n\in {\mathbb{N}}}\subset{\mathbb{R}},$ $\{x_n\}_{n\in {\mathbb{N}}}\subset{\mathbb{R}}^d$ such that \begin{equation}\label{ex} \tau_{-x_n}e^{it_n(\Delta-V)}v_n=\psi+W_n, \end{equation} where the following conditions are satisfied: \begin{equation*} W_n\overset{H^1}\rightharpoonup0, \end{equation*} \begin{equation*} \limsup_{n\to\infty}\|e^{it(\Delta-V)}v_n\|_{L^\infty L^q}\leq C\left(\sup_n\|v_n\|_{H^1}\right)\|\psi\|_{L^2}^e \end{equation*} with the exponent $e>0$ given in \autoref{lemmapreli}. Furthermore, as $n\to \infty,$ $v_n$ fulfills the Pythagorean expansions below: \begin{equation}\label{gasp1} \|v_n\|_{L^2}^2=\|\psi\|_{L^2}^2+\|W_n\|_{L^2}^2+o(1); \end{equation} \begin{equation}\label{gasp2} \|v_n\|_{\mathcal H}^2=\|\tau_{x_n}\psi\|_{\mathcal H}^2+\|\tau_{x_n}W_n\|_{\mathcal H}^2+o(1); \end{equation} \begin{equation}\label{gasp3} \|v_n\|_{L^q}^q=\|e^{it_n(\Delta-V)}\tau_{x_n}\psi\|_{L^q}^q+\|e^{it_n(\Delta-V)}\tau_{x_n}W_n\|_{L^q}^q+o(1),\qquad q\in(2,2^*).
\end{equation} Moreover we have the following dichotomy for the time parameters $t_n$: \begin{align}\label{parav} \hbox{either} \quad t_n=0\quad \forall\, n\in\mathbb{N}\quad &\hbox{or} \quad t_n\overset{n\to\infty}\longrightarrow\pm\infty. \end{align} Concerning the space parameters $x_n=(x_{n,1}, \bar x_n)\in {\mathbb{R}}\times {\mathbb{R}}^{d-1},$ we have the following scenarios: \begin{align}\label{para2v} & \hbox{either} \quad x_n=0\quad \forall\, n\in\mathbb{N}\\ \nonumber & \hbox{or} \quad |x_{n,1}|\overset{n\to \infty}\longrightarrow\infty\\ \nonumber \hbox{or} \quad x_{n,1}=0, \quad \bar x_{n}=\sum_{l=2}^dk_{n,l} P_l \quad & \hbox{with} \quad k_{n,l}\in \mathbb Z \quad \hbox{and} \quad \sum_{l=2}^d |k_{n,l}| \overset{n\to \infty}\longrightarrow\infty. \end{align} \end{lemma} \begin{proof} Let us choose a sequence of times $\{t_n\}_{n\in\mathbb N}$ such that \begin{equation}\label{time} \|e^{it_n(\Delta-V)}v_n\|_{L^q}>\frac12\|e^{it(\Delta-V)}v_n\|_{L^\infty L^q}. \end{equation} According to \autoref{lemmapreli} we can consider a sequence of space translations such that \begin{equation*} \tau_{-x_n}(e^{it_n(\Delta-V)}v_n)\overset{H^1}\rightharpoonup \psi, \end{equation*} which yields \eqref{ex}. Let us remark that the choice of the time sequence in \eqref{time} is possible since the norms $H^1$ and $\mathcal H$ are equivalent. Then \begin{equation*} \limsup_{n\to\infty}\|e^{it_n(\Delta-V)}v_n\|_{L^q}\lesssim\|\psi\|_{L^2}^e, \end{equation*} which in turn implies by \eqref{time} that \begin{equation*} \limsup_{n\to\infty}\|e^{it(\Delta-V)}v_n\|_{L^\infty L^q}\lesssim\|\psi\|_{L^2}^e, \end{equation*} where the exponent $e$ is the one given in \autoref{lemmapreli}. By definition of $\psi$ we can write \begin{equation}\label{dec2} \tau_{-x_n}e^{it_n(\Delta-V)}v_n=\psi+W_n,\qquad W_n\overset{H^1}\rightharpoonup 0, \end{equation} and the Hilbert structure of $L^2$ gives \eqref{gasp1}.\\ Next we prove \eqref{gasp2}.
We have \begin{equation*} v_n=e^{-it_n(\Delta-V)}\tau_{x_n}\psi+e^{-it_n(\Delta-V)}\tau_{x_n}W_n,\qquad W_n\overset{H^1}\rightharpoonup 0, \end{equation*} and we conclude provided that we show \begin{equation}\label{gasp5}(e^{-it_n(\Delta-V)}\tau_{x_n}\psi, e^{-it_n(\Delta-V)}\tau_{x_n}W_n)_{\mathcal H}\overset{n\rightarrow \infty} \longrightarrow 0. \end{equation} Since we have \begin{align*} &(e^{-it_n(\Delta-V)}\tau_{x_n}\psi, e^{-it_n(\Delta-V)}\tau_{x_n}W_n)_{\mathcal H}\\ &=(\psi, W_n)_{\dot{H}^1}+\int V(x+x_n)\psi(x)\bar{W}_n(x)\,dx \end{align*} and $W_n\overset{H^1}\rightharpoonup 0,$ it is sufficient to show that \begin{equation}\label{gasp4} \int V(x+x_n)\psi(x)\bar{W}_n(x)\,dx \overset{n\rightarrow \infty} \longrightarrow 0. \end{equation} If (up to subsequences) $x_n\overset{n\to \infty}\longrightarrow x^*\in{\mathbb{R}}^d$ or $|x_{n,1}|\overset{n\to \infty}\longrightarrow\infty$, where we have split $x_n=(x_{n,1}, \bar x_n)\in {\mathbb{R}}\times {\mathbb{R}}^{d-1}$, then the sequence $\tau_{-x_n}V (x)=V(x+x_n)$ converges pointwise to the function $\tilde{V}(x)\in L^{\infty}$ defined by \begin{equation*} \tilde{V}(x)= \left\{ \begin{array}{ll} 1\quad &\hbox{if}\quad x_{n,1}\overset{n\rightarrow \infty}\longrightarrow+\infty\\ V(x+x^*)\quad &\hbox{if}\quad x_n\overset{n\rightarrow \infty}\longrightarrow x^*\in{\mathbb{R}}^d\\ 0\quad &\hbox{if}\quad x_{n,1}\overset{n\rightarrow \infty}\longrightarrow-\infty \end{array} \right. \end{equation*} and hence \begin{equation*} \begin{aligned} \int V(x+x_n)\psi(x)\bar{W}_n(x)\,dx&=\int[V(x+x_n)-\tilde{V}(x)]\psi(x)\bar{W}_n(x)\,dx\\ &\quad+\int \tilde{V}(x)\psi(x)\bar{W}_n(x)\,dx. \end{aligned} \end{equation*} The function $\tilde{V}(x)\psi(x)$ belongs to $L^2$ since $\tilde{V}$ is bounded and $\psi\in H^1$; since $W_n\rightharpoonup0$ in $H^1$ (and hence in $L^2$) we have that \begin{equation*} \int \tilde{V}(x)\psi(x)\bar{W}_n(x)\,dx\overset{n\rightarrow \infty}\longrightarrow0.
\end{equation*} Moreover, by using the Cauchy-Schwarz inequality, \begin{align*} \bigg|\int[V(x+x_n)-\tilde{V}(x)]\psi(x)\bar{W}_n(x)\,dx\bigg|\leq&\sup_{n}\|W_n\|_{L^2}\|[V(\cdot+x_n)-\tilde{V}(\cdot)]\psi(\cdot)\|_{L^2}; \end{align*} since $\left|[V(\cdot+x_n)-\tilde{V}(\cdot)]\psi(\cdot) \right|^2\lesssim|\psi(\cdot)|^2\in L^1,$ by the dominated convergence theorem we also get \begin{equation*} \int[V(x+x_n)-\tilde{V}(x)]\psi(x)\bar{W}_n(x)\,dx\overset{n\rightarrow \infty}\longrightarrow0, \end{equation*} and we conclude \eqref{gasp4} and hence \eqref{gasp5}. It remains to prove \eqref{gasp5} in the case when, up to subsequences, $x_{n,1} \overset{n\rightarrow \infty} \longrightarrow x_1^*$ and $|\bar x_n|\overset {n\rightarrow \infty} \longrightarrow \infty$. Up to subsequences we can therefore assume that $\bar x_{n}= \bar x^*+\sum_{l=2}^d k_{n, l}P_l+o(1)$ with $\bar x^*\in {\mathbb{R}}^{d-1}$, $k_{n, l}\in \mathbb Z$ and $\sum_{l=2}^d |k_{n,l}|\overset{n\rightarrow \infty} \longrightarrow \infty.$ Then by using the periodicity of the potential $V$ w.r.t. the $(x_2,\dots, x_d)$ variables we get: \begin{equation*} \begin{aligned} &(e^{-it_n(\Delta-V)}\tau_{x_n}\psi, e^{-it_n(\Delta-V)}\tau_{x_n}W_n)_{\mathcal H}=\\ &(e^{-it_n(\Delta-V)}\tau_{(x_1^*,\bar x_n)}\psi, e^{-it_n(\Delta-V)}\tau_{(x_1^*,\bar x_n)}W_n)_{\mathcal H}+o(1)=\\ &(\tau_{(x_1^*,\bar x^*)}\psi, \tau_{(x_1^*,\bar x^*)}W_n)_{\mathcal H}+o(1)=\\ &(\psi,W_n)_{\dot H^1}+\int V(x+(x_1^*,\bar x^*))\psi(x)\bar{W}_n\,dx=o(1), \end{aligned} \end{equation*} where we have used the fact that $W_n\overset{ H^1} \rightharpoonup0$. \newline We now turn our attention to the orthogonality of the non-quadratic term of the energy, namely \eqref{gasp3}. The proof is almost the same as the one carried out in \cite{BV}, with some modifications.
\\ \noindent \emph{Case 1.} Suppose $|t_n|\overset{n\to \infty}\longrightarrow\infty.$ By \eqref{disp2} we have $\|e^{it(\Delta-V)}\|_{L^1\rightarrow L^{\infty}}\lesssim|t|^{-d/2}$ for any $t\neq0.$ Since the $L^2$ norm is conserved by the evolution operator $e^{it(\Delta-V)},$ the estimate $\|e^{it(\Delta-V)}\|_{L^{p'}\rightarrow L^{p}}\lesssim|t|^{-d\left(\frac{1}{2}-\frac{1}{p}\right)}$ follows from the Riesz-Thorin interpolation theorem. This implies that if $|t_n|\to\infty$ as $n\to\infty$, then for any $p\in(2,2^*)$ and for any $\psi\in H^1$ \begin{equation*} \|e^{it_n(\Delta-V)}\tau_{x_n}\psi\|_{L^p}\overset{n\rightarrow \infty}\longrightarrow 0; \end{equation*} indeed the decay is immediate if in addition $\psi\in L^1$, and for general $\psi\in H^1$ it follows by a straightforward approximation argument. Thus we conclude by \eqref{dec2}. \\ \noindent\emph{Case 2.} Assume now that $t_n\overset{n\to \infty}\longrightarrow t^*\in{\mathbb{R}}$ and $x_n \overset{n\to \infty}\longrightarrow x^*\in{\mathbb{R}}^d.$ In this case the proof relies on a combination of the Rellich-Kondrachov theorem and the Brezis-Lieb lemma contained in \cite{BL}, provided that \begin{equation}\notag \|e^{it_n(\Delta-V)}(\tau_{x_n}\psi)-e^{it^*(\Delta-V)}(\tau_{x^*}\psi)\|_{H^1}\overset{n\rightarrow\infty}\longrightarrow0,\qquad\forall\,\psi\in H^1. \end{equation} But this is a straightforward consequence of the continuity of the linear propagator (see \cite{BV} for more details). \newline \noindent \emph{Case 3.} It remains to consider $t_n\overset{n\to \infty}\longrightarrow t^*\in{\mathbb{R}}$ and $|x_n|\overset{n\to \infty}\longrightarrow\infty.$ Also in this case we can proceed as in \cite{BV}, provided that for any $\psi\in H^1$ there exists $\psi^*\in H^1$ such that \begin{equation}\notag \|\tau_{-x_n}(e^{it_n(\Delta-V)}(\tau_{x_n}\psi))-\psi^*\|_{H^1}\overset{n\rightarrow\infty}\longrightarrow0.
\end{equation} Since translations are isometries in $H^1,$ it suffices to show that for some $\psi^*\in H^1$ \begin{equation*} \|e^{it_n(\Delta-V)}\tau_{x_{n}}\psi-\tau_{x_n}\psi^*\|_{H^1} \overset{n\rightarrow \infty}\longrightarrow0. \end{equation*} We decompose $x_n=(x_{n,1}, \bar x_n)\in {\mathbb{R}}\times {\mathbb{R}}^{d-1}$ and consider the two scenarios $|x_{n,1}|\overset{n\rightarrow \infty}\longrightarrow \infty$ and $ \sup_n |x_{n,1}|<\infty$. \newline If $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow -\infty,$ by continuity in $H^1$ of the flow it is enough to prove that \begin{equation*} \| e^{it^*(\Delta-V)}\tau_{x_{n}}\psi-e^{it^*\Delta}\tau_{x_{n}}\psi\|_{H^{1}} \overset{n\rightarrow \infty}\longrightarrow 0. \end{equation*} By the Duhamel formula we have \begin{equation*} e^{it^*(\Delta-V)}\tau_{x_{n}}\psi-e^{it^*\Delta}\tau_{x_{n}}\psi=-i\int_{0}^{t^*}e^{i(t^*-s)(\Delta-V)}\big(Ve^{is \Delta}\tau_{x_{n}}\psi\big)\,ds \end{equation*} and hence, since translations are isometries on $H^1$, \begin{equation*} \| e^{it^*(\Delta-V)}\tau_{x_{n}}\psi-e^{it^*\Delta}\tau_{x_{n}}\psi\|_{H^1}\lesssim \int_0^{t^*} \|(\tau_{-x_n}V)e^{is\Delta}\psi\|_{H^1} ds. \end{equation*} We will show that \begin{equation}\label{s}\int_0^{t^*} \|(\tau_{-x_n}V)e^{is\Delta}\psi\|_{H^1} ds\overset{n\rightarrow \infty}\longrightarrow 0. \end{equation} Since we are assuming $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow -\infty,$ for fixed $x\in\mathbb{R}^d$ we get $V(x+x_n)\overset{n\rightarrow \infty}\longrightarrow 0,$ namely $(\tau_{-x_n}V)(x)\overset{n\rightarrow \infty}\longrightarrow 0$ pointwise; since $V\in L^{\infty},$ $|\tau_{-x_n}V|^2|e^{it \Delta}\psi|^2\leq\|V\|^2_{L^{\infty}}|e^{it\Delta}\psi|^2$ and $|e^{it \Delta}\psi|^2\in L^1,$ the dominated convergence theorem yields \begin{equation*} \|(\tau_{-x_n}V)e^{it \Delta}\psi\|_{L^2}\overset{n\rightarrow \infty}\longrightarrow 0.
\end{equation*} Analogously, since $|x_{n,1}| \overset{n\rightarrow \infty}\longrightarrow\infty$ implies $| \nabla \tau_{-x_n} V(x)| \overset{n\rightarrow \infty}\longrightarrow 0,$ we obtain \begin{equation*} \|\nabla(\tau_{-x_n}Ve^{it\Delta} \psi)\|_{L^2}\leq\| (e^{it \Delta} \psi) \nabla \tau_{-x_n} V \|_{L^2}+\| (\tau_{-x_n}V) \nabla(e^{it \Delta}\psi)\|_{L^2}\overset{n\rightarrow \infty}\longrightarrow 0. \end{equation*} We conclude \eqref{s} by using the dominated convergence theorem w.r.t.\ the measure $ds$. For the case $x_{n,1}\overset{n\rightarrow \infty}\longrightarrow \infty$ we proceed similarly. If $\sup_{n\in {\mathbb{N}}} |x_{n,1}|<\infty,$ then up to subsequence $x_{n,1} \overset{n\rightarrow \infty} \longrightarrow x_1^*\in {\mathbb{R}}$. The claim follows by choosing $\psi^*=e^{it^*(\Delta-V)}\tau_{(x_1^*,\bar x^*)}\psi,$ with $\bar x^* \in {\mathbb{R}}^{d-1}$ defined as follows (see above the proof of \eqref{gasp2}): $\bar x_{n}= \bar x^*+\sum_{l=2}^d k_{n, l}P_l+o(1)$ with $k_{n, l}\in \mathbb Z$ and $\sum_{l=2}^d |k_{n,l}|\overset{n\rightarrow \infty} \rightarrow \infty.$ \newline Finally, it is straightforward from \cite{BV} that the conditions on the parameters \eqref{parav} and \eqref{para2v} hold. \end{proof} \begin{proof}[Proof of \autoref{profiledec}] The proof of the profile decomposition theorem can be carried out as in \cite{BV} by iterating the previous lemma. \end{proof} \section{Nonlinear profiles}\label{nonlinearpro} The results of this section will be crucial in the construction of the minimal element. We recall that the couple $(p,r)$ is the one given in \autoref{strichartz}. Moreover for every sequence $x_n\in {\mathbb{R}}^d$ we use the notation $x_n=(x_{n,1}, \bar x_n)\in {\mathbb{R}}\times {\mathbb{R}}^{d-1}$. \begin{lemma}\label{lem5.1} Let $\psi\in H^1$ and $\{x_n\}_{n\in \mathbb{N}}\subset\mathbb{R}^d$ be such that $|x_{n,1}| \overset{n\rightarrow \infty}\longrightarrow \infty$.
Up to subsequences we have the following estimates: \begin{equation}\label{eq5.1} x_{n,1} \overset{n\rightarrow \infty}\longrightarrow -\infty \implies \|e^{it\Delta}\psi_{n}-e^{it(\Delta-V)}\psi_{n}\|_{L^pL^r}\overset{n\rightarrow\infty}\longrightarrow 0, \end{equation} \begin{equation}\label{eq5.2} x_{n,1} \overset{n\rightarrow \infty}\longrightarrow +\infty \implies \|e^{it(\Delta-1)}\psi_{n}-e^{it(\Delta-V)}\psi_{n}\|_{L^pL^r}\overset{n\rightarrow\infty}\longrightarrow 0, \end{equation} where $\psi_n:=\tau_{x_n}\psi.$ \end{lemma} \begin{proof} Assume $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow -\infty$ (the case $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow +\infty$ can be treated similarly). We first prove that \begin{equation}\label{eq5.3} \sup_{n\in\mathbb{N}}\| e^{it(\Delta-V)}\psi_{n}\|_{L^{p}_{(T,\infty)}L^{r}}\overset{T\rightarrow\infty}\longrightarrow0. \end{equation} Let $\varepsilon>0$. By density there exists $\tilde{\psi}\in C^{\infty}_c$ such that $\|\tilde{\psi}-\psi\|_{H^{1}}\leq\varepsilon,$ then by the estimate \eqref{fxc3} \begin{equation*} \|e^{it(\Delta-V)}(\tilde{\psi}_{n}-\psi_{n})\|_{L^{p}L^{r}}\lesssim\|\tilde{\psi}_{n}-\psi_{n}\|_{H^{1}}=\|\tilde{\psi}-\psi\|_{H^{1}}\lesssim\varepsilon. \end{equation*} Since $\tilde{\psi}\in L^{r'}$, by interpolation between the dispersive estimate \eqref{disp2} and the conservation of the mass along the linear flow, we have \begin{equation*} \| e^{it(\Delta-V)}\tilde{\psi}_{n}\|_{L^{r}}\lesssim|t|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\|\tilde{\psi}\|_{L^{r'}}, \end{equation*} and since $f(t)=|t|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\in L^p(|t|>1),$ there exists $T>0$ such that \begin{equation*} \sup_n\| e^{it(\Delta-V)}\tilde{\psi}_{n}\|_{L^{p}_{|t|\geq T}L^{r}}\leq\varepsilon, \end{equation*} hence we get \eqref{eq5.3}. 
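For the reader's convenience, we record the standard Riesz-Thorin computation behind the decay rate used here: choosing $\theta\in[0,1]$ so that $\frac{1}{r}=\frac{1-\theta}{2}$, i.e. $\theta=1-\frac{2}{r}$, and interpolating the dispersive estimate \eqref{disp2} with the conservation of the $L^2$ norm, we obtain \begin{equation*} \|e^{it(\Delta-V)}\|_{L^{r'}\rightarrow L^{r}}\leq\|e^{it(\Delta-V)}\|_{L^{2}\rightarrow L^{2}}^{1-\theta}\,\|e^{it(\Delta-V)}\|_{L^{1}\rightarrow L^{\infty}}^{\theta}\lesssim|t|^{-\frac{d\theta}{2}}=|t|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}. \end{equation*}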
In order to obtain \eqref{eq5.1}, it remains to show that for a fixed $T>0$ \begin{equation*} \| e^{it \Delta}\psi_{n}-e^{it(\Delta-V)}\psi_{n}\|_{L^{p}_{(0,T)}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0. \end{equation*} Since $w_n=e^{it \Delta}\psi_{n}-e^{it(\Delta-V)}\psi_{n}$ is the solution of the following linear Schr\"odinger equation \begin{equation*} \left\{\begin{aligned} i\partial_{t}w_n+ \Delta w_n-Vw_n&=-Ve^{it\Delta}\psi_{n}\\ w_n(0)&=0 \end{aligned}\right., \end{equation*} by combining \eqref{fxc3} with the Duhamel formula we get \begin{align*} \|e^{it \Delta}\psi_{n}-e^{it(\Delta-V)}\psi_{n}\|_{L^{p}_{(0,T)}L^{r}}&\lesssim \|(\tau_{-x_n}V)e^{it \Delta}\psi\|_{L^1_{(0,T)}H^1}. \end{align*} The claim follows from the dominated convergence theorem. \end{proof} \begin{lemma}\label{lem5.2} Let $\{x_{n}\}_{n\in\mathbb{N}}\subset\mathbb{R}^d$ be a sequence such that $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow -\infty$ (resp. $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow +\infty$) and $v\in \mathcal{C}(\mathbb{R};H^{1})$ be the unique solution to \eqref{NLS-d} (resp. \eqref{NLS1-d}). Define $v_{n}(t,x):=v(t,x-x_{n})$. Then, up to a subsequence, the following hold: \begin{equation}\label{eq5.11} \bigg\|\int_{0}^{t}[e^{i(t-s)\Delta}\left (|v_{n}|^{\alpha}v_{n} \right )-e^{i(t-s)(\Delta-V)} \left (|v_{n}|^{\alpha}v_{n}\right )]ds \bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0; \end{equation} \begin{equation}\label{eq5.12} \left( \hbox{resp.\,} \bigg\|\int_{0}^{t}[e^{i(t-s)(\Delta-1)}\left (|v_{n}|^{\alpha}v_{n}\right ) -e^{i(t-s)(\Delta-V)} \left (|v_{n}|^{\alpha}v_{n}\right )]ds\bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0\right). \end{equation} \end{lemma} \begin{proof} Assume $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow -\infty$ (the case $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow +\infty$ can be treated similarly).
Our proof starts with the observation that \begin{equation}\label{fiseca} \lim_{T\rightarrow \infty} \bigg(\sup_n \bigg\|\int_{0}^{t}e^{i(t-s)(\Delta-V)} \left (|v_{n}|^{\alpha}v_{n}\right )ds\bigg\|_{L^{p}_{(T,\infty)}L^{r}}\bigg)=0. \end{equation} By the Minkowski inequality and the interpolation of the dispersive estimate \eqref{disp2} with the conservation of the mass, we have \begin{align}\notag \bigg\|\int_{0}^{t}e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right)ds\bigg\|_{L^r_x}&\lesssim\int_{0}^{t}|t-s|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\||v_n|^{\alpha}v_n\|_{L^{r'}_x}ds\\\notag &\lesssim\int_{\mathbb{R}}|t-s|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\||v|^{\alpha}v\|_{L^{r'}_x} ds= |t|^{-d\left(\frac 12 - \frac 1r\right)}\ast g \end{align} with $g(s)=\||v|^{\alpha}v(s)\|_{L^{r'}_x}.$ We conclude \eqref{fiseca} provided that we show $|t|^{-d\left(\frac 12 - \frac 1r\right)}\ast g(t)\in L^p_t$. By using the Hardy-Littlewood-Sobolev inequality (see for instance Stein's book \cite{ST}, p. 119) we get \begin{equation*} \big\||t|^{-1+\frac{(2-d){\alpha}+4}{2({\alpha}+2)}}\ast g(t) \big\|_{L^p_t} \lesssim\||v|^{\alpha}v\|_{L^{\frac{2\alpha(\alpha+2)}{\left((2-d){\alpha}+4\right)({\alpha}+1)}}L^{r'}}=\|v\|^{{\alpha}+1}_{L^pL^r}. \end{equation*} Since $v$ scatters, it belongs to $L^pL^r,$ and so we can deduce the validity of \eqref{fiseca}. \newline Consider now $T$ fixed: it remains to show that \begin{equation*} \bigg\|\int_{0}^{t}[e^{i(t-s)\Delta}\left(|v_{n}|^{\alpha}v_{n}\right)-e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right)]ds\bigg\|_{L^{p}_{(0,T)} L ^{r}}\overset{n\rightarrow\infty}\longrightarrow0.
\end{equation*} As usual we observe that \begin{equation*} w_n(t,x)=\int_{0}^{t}e^{i(t-s) \Delta}\left(|v_{n}|^{\alpha}v_{n}\right) ds - \int_{0}^{t}e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right) ds \end{equation*} is the solution of the following linear Schr\"odinger equation \begin{equation*} \left\{\begin{aligned} i\partial_{t}w_n+ \Delta w_n -V w_n&=-V\int_{0}^{t}e^{i(t-s) \Delta}\left(|v_{n}|^{\alpha}v_{n}\right)ds\\ w_n(0)&=0 \end{aligned}\right., \end{equation*} and, arguing as for \autoref{lem5.1}, we estimate \begin{align*}\notag \bigg\| \int_{0}^{t}e^{i(t-s) \Delta}&\left(|v_{n}|^{\alpha}v_{n}\right)ds-\int_{0}^{t}e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right)ds \bigg\|_{L^{p}_{(0,T)}L^{r}}\\ &\lesssim \|(\tau_{-x_n}V)|v|^{\alpha}v\|_{L^1_{(0,T)}H^1}. \end{align*} By using the dominated convergence theorem we conclude the proof. \end{proof} The previous results imply the following useful corollaries. \begin{corollary}\label{cor5.3} Let $\{x_n\}_{n\in\mathbb{N}}\subset\mathbb{R}^d$ be a sequence such that $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow -\infty,$ and let $v\in\mathcal{C}(\mathbb{R};H^1)$ be the unique solution to \eqref{NLS-d} with initial datum $v_0\in H^1$. Then \begin{equation*} v_n(t,x)=e^{it(\Delta-V)}v_{0,n} -i\int_{0}^{t}e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right)ds+e_{n}(t,x) \end{equation*} where $v_{0,n}(x):=\tau_{x_n}v_0(x),$ $v_{n}(t,x):=v(t,x-x_n)$ and $\|e_n\|_{L^pL^r} \overset{n\rightarrow\infty}\longrightarrow 0$. \end{corollary} \begin{proof} It is a consequence of \eqref{eq5.1} and \eqref{eq5.11}. \end{proof} \begin{corollary}\label{cor5.4} Let $\{x_n\}_{n\in\mathbb{N}}\subset\mathbb{R}^d$ be a sequence such that $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow +\infty,$ and let $v\in\mathcal{C}(\mathbb{R};H^1)$ be the unique solution to \eqref{NLS1-d} with initial datum $v_0\in H^1$.
Then \begin{equation*} v_n(t,x)=e^{it(\Delta-V)} v_{0,n}- i\int_{0}^{t}e^{i(t-s)(\Delta-V)}\left(|v_{n}|^{\alpha}v_{n}\right)ds+e_{n}(t,x) \end{equation*} where $v_{0,n}(x):=\tau_{x_n}v_0(x),$ $v_{n}(t,x):=v (t,x-x_n)$ and $\|e_n\|_{L^pL^r} \overset{n\rightarrow\infty}\longrightarrow 0$. \end{corollary} \begin{proof} It is a consequence of \eqref{eq5.2} and \eqref{eq5.12}. \end{proof} \begin{lemma}\label{lem5.5} Let $v(t,x)\in \mathcal C(\mathbb{R}; H^1)$ be a solution to \eqref{NLS-d} (resp. \eqref{NLS1-d}) and let $\psi_\pm \in H^{1}$ (resp. $\varphi_\pm \in H^{1}$) be such that \begin{equation*} \begin{aligned} &\|v(t,x)-e^{it \Delta}\psi_\pm \|_{H^1}\overset{t\rightarrow\pm\infty} {\longrightarrow}0 \\ \bigg(\hbox{ resp. } &\|v(t,x)-e^{it(\Delta-1)}\varphi_\pm\|_{H^1}\overset{t\rightarrow\pm\infty} {\longrightarrow}0\bigg). \end{aligned} \end{equation*} Let $\{x_{n}\}_{n\in\mathbb{N}}\subset{\mathbb{R}}^d, \,\{t_{n}\}_{n\in\mathbb{N}}\subset\mathbb{R}$ be two sequences such that $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow -\infty$ (resp. $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow +\infty$) and $|t_{n}|\overset{n\rightarrow\infty}\longrightarrow \infty.$ Let us moreover define $v_{n}(t,x):=v(t-t_{n},x-x_{n})$ and $\psi_n^\pm (x):=\tau_{x_n}\psi_\pm(x)$ (resp. $\varphi_n^\pm (x)=\tau_{x_n}\varphi_\pm(x)$). Then, up to subsequence, we get \begin{align}\notag t_n\rightarrow \pm \infty \Longrightarrow \| e^{i(t-t_{n})\Delta}\psi_{n}^\pm -e^{i(t-t_{n})(\Delta-V)}\psi_{n}^\pm\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0\quad\emph{and} \\\label{eq517} \bigg\|\int_{0}^{t}[e^{i(t-s) \Delta}\big(|v_{n}|^{\alpha} v_{n}\big)- e^{i(t-s)(\Delta-V)}\big(|v_{n}|^{\alpha}v_{n}\big)]ds \bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0 \end{align} \begin{align*} \bigg(\hbox{ resp. 
} t_n\rightarrow \pm \infty \Longrightarrow \| e^{i(t-t_{n})(\Delta-1)}\varphi_{n}^\pm -e^{i(t-t_{n})(\Delta-V)}\varphi_{n}^\pm\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0\quad\emph{and} \\\nonumber \bigg\|\int_{0}^{t}[e^{i(t-s)(\Delta-1)}\big(|v_{n}|^{\alpha} v_{n}\big)- e^{i(t-s)(\Delta-V)}\big(|v_{n}|^{\alpha} v_{n}\big)]ds \bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0 \bigg). \end{align*} \end{lemma} \begin{proof} It is a suitable multidimensional version of Proposition 3.6 in \cite{BV}. Nevertheless, since in \cite{BV} the details of the proof are not given, we present below the proof of the most delicate estimate, namely the second estimate in \eqref{eq517}. After a change of variable in time, proving \eqref{eq517} is clearly equivalent to proving \begin{equation*} \bigg\|\int_{-t_n}^{t}e^{i(t-s) \Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- \int_{-t_n}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds \bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0. \end{equation*} \newline We can focus on the case $t_n\rightarrow \infty$ and $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow+ \infty,$ the other cases being similar. \\ The idea of the proof is to split the estimate above into three different regions, i.e. $(-\infty,-T)\times\mathbb{R}^d, (-T,T)\times\mathbb{R}^d, (T,\infty)\times\mathbb{R}^d$ for some fixed $T$ which will be chosen in an appropriate way below. The strategy is to use translation in the space variable to gain smallness in the strip $(-T,T)\times\mathbb{R}^d$ while we use the smallness of the Strichartz estimates in the half spaces $(-T,T)^c\times\mathbb{R}^d$. Actually in $(T,\infty)$ the situation is more delicate and we will also use the dispersive estimate.
\\ Let us define $g(t)=\|v(t)\|^{{\alpha}+1}_{L^{({\alpha}+1)r'}}$ and for fixed $\varepsilon>0$ let us consider $T=T(\varepsilon)>0$ such that: \begin{equation}\label{smallness} \begin{aligned} \left\{ \begin{array}{ll} \||v|^{\alpha}v\|_{L^{q'}_{(-\infty,-T)}L^{r'}}<\varepsilon\\ \||v|^{\alpha}v\|_{L^{q'}_{(T,+\infty)}L^{r'}}<\varepsilon\\ \||v|^{\alpha}v\|_{L^1_{(-\infty,-T)}H^1 }<\varepsilon\\ \left\||t|^{-d\left(\frac 12 - \frac 1r\right)}\ast g(t)\right \|_{L^p_{(T,+\infty)}}<\varepsilon \end{array} \right.. \end{aligned} \end{equation} The existence of such a $T$ is guaranteed by the integrability properties of $v$ and its decay at infinity (in time). We can assume without loss of generality that $|t_n|>T.$\\ We split the term to be estimated as follows: \begin{equation*} \begin{split} \int_{-t_n}^{t}e^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- \int_{-t_n}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds\\ =e^{it\Delta}\int_{-t_n}^{-T}e^{-is\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- e^{it(\Delta-V)}\int_{-t_n}^{-T}e^{-is(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds\\ +\int_{-T}^{t}e^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- \int_{-T}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds. \end{split} \end{equation*} By Strichartz estimate \eqref{fxc3} and the third one of \eqref{smallness}, we have, uniformly in $n,$ \begin{equation*} \begin{aligned} \bigg\|e^{it\Delta}\int_{-t_n}^{-T}e^{-is\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds\bigg\|_{L^pL^r}&\lesssim\varepsilon,\\ \bigg\|e^{it(\Delta-V)}\int_{-t_n}^{-T}e^{-is(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds \bigg\|_{L^{p}L^{r}}&\lesssim\varepsilon. \end{aligned} \end{equation*} Thus, it remains to prove \begin{equation*} \bigg\|\int_{-T}^{t}e^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- \int_{-T}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds \bigg\|_{L^{p}L^{r}}\overset{n\rightarrow\infty}\longrightarrow0 \end{equation*} and we split it by estimating it in the regions mentioned above. 
By using \eqref{str2.4} and the first one of \eqref{smallness} we get uniformly in $n$ the following estimates: \begin{equation*} \begin{aligned} \bigg\|\int_{-T}^{t}e^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds\bigg\|_{L^{p}_{(-\infty,-T)}L^{r}}&\lesssim\||v|^{\alpha}v\|_{L^{q'}_{(-\infty,-T)}L^{r'}}\lesssim\varepsilon,\\ \bigg\|\int_{-T}^{t}e^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds\bigg\|_{L^{p}_{(-\infty,-T)}L^{r}}&\lesssim\||v|^{\alpha}v\|_{L^{q'}_{(-\infty,-T)}L^{r'}}\lesssim\varepsilon. \end{aligned} \end{equation*} The difference $w_n=\int_{-T}^{t}e^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)ds- \int_{-T}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)(s)ds$ satisfies the following Cauchy problem: \begin{equation*} \left\{ \begin{aligned} i\partial_{t}w_n+(\Delta-V)w_n&=-V\int_{-T}^te^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)(s)\,ds\\ w_n(-T)&=0 \end{aligned} \right.. \end{equation*} Then $w_n$ satisfies the integral equation \begin{equation*} \begin{aligned} w_n(t)=\int_{-T}^te^{i(t-s)(\Delta-V)}\bigg(-V\int_{-T}^se^{i(s-\sigma)\Delta}\tau_{x_n}(|v|^{\alpha} v)(\sigma)\,d\sigma\bigg)\,ds \end{aligned} \end{equation*} which we estimate in the region $(-T,T)\times\mathbb{R}^d.$ By Sobolev embedding $H^1\hookrightarrow L^r,$ H\"older and Minkowski inequalities we have therefore \begin{equation*} \begin{aligned} \bigg\|\int_{-T}^te^{i(t-s)(\Delta-V)}\bigg(-V\int_{-T}^se^{i(s-\sigma)\Delta}\tau_{x_n}(|v|^{\alpha} v)(\sigma)\,d\sigma\bigg)\,ds\bigg\|_{L^p_{(-T,T)}L^r}\lesssim\\ \lesssim T^{1/p}\int_{-T}^T\bigg\|(\tau_{-x_n}V)\int_{-T}^se^{i(s-\sigma)\Delta}|v|^{\alpha} v(\sigma)\,d\sigma\bigg\|_{H^1}\,ds\lesssim\varepsilon \end{aligned} \end{equation*} by means of Lebesgue's theorem. 
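Before estimating the last region, let us record a check of the exponents in the Hardy-Littlewood-Sobolev step used below (and already in the proof of \autoref{lem5.2}). We assume here, consistently with the exponents appearing in the proof of \autoref{lem5.2} (this identification of $(p,r)$ is not stated explicitly above), that $r=\alpha+2$ and $p=\frac{2\alpha(\alpha+2)}{(2-d)\alpha+4}$. Then \begin{equation*} d\left(\frac{1}{2}-\frac{1}{r}\right)=\frac{d\alpha}{2(\alpha+2)}=1-\frac{(2-d)\alpha+4}{2(\alpha+2)}, \end{equation*} and the Hardy-Littlewood-Sobolev inequality $\||t|^{-\gamma}\ast g\|_{L^p_t}\lesssim\|g\|_{L^{s}_t}$ applies with $\gamma=d\left(\frac{1}{2}-\frac{1}{r}\right)$ and $\frac{1}{s}=\frac{\alpha+1}{p}$, since the scaling condition $1+\frac{1}{p}=\gamma+\frac{1}{s}$ reduces to $1-\gamma=\frac{\alpha}{p}$, which holds for the above choice of $(p,r)$.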
\\ What is left is to estimate in the region $(T,\infty)\times{\mathbb{R}}^d$ the terms $$\int_{-T}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)\,ds\hbox{\qquad and \qquad}\int_{-T}^te^{i(t-s)\Delta}\tau_{x_n}(|v|^{\alpha} v)\,ds.$$ We consider only one term, the argument being the same for the other. Let us split the estimate as follows: \begin{equation*} \begin{aligned} \bigg\|\int_{-T}^t&e^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)\,ds\bigg\|_{L^p_{(T,{\infty})}L^r}\leq\\ &\leq\bigg\|\int_{-T}^{T}e^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)\,ds\bigg\|_{L^p_{(T,{\infty})}L^r}\\ &\quad+\bigg\|\int_{T}^te^{i(t-s)(\Delta-V)}\tau_{x_n}(|v|^{\alpha} v)\,ds\bigg\|_{L^p_{(T,{\infty})}L^r}. \end{aligned} \end{equation*} The second term is controlled by Strichartz estimates, and it is $\lesssim\varepsilon$ since we are integrating in the region where $\||v|^{\alpha} v\|_{L^{q'}((T,{\infty});L^{r'})}<\varepsilon$ (by using the second of \eqref{smallness}), while the first term is estimated by using the dispersive estimate. More precisely \begin{equation*} \begin{aligned} \bigg\|\int_{-T}^{T}e^{i(t-s)(\Delta-V)}&\tau_{x_n}(|v|^{\alpha} v)\,ds\bigg\|_{L^p_{(T,{\infty})}L^r}\lesssim\\ &\lesssim\bigg\|\int_{-T}^{T}|t-s|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\|v\|^{\alpha+1}_{L^{(\alpha+1)r'}}ds \bigg\|_{L^p_{(T,\infty)}}\\ &\lesssim\bigg\|\int_{{\mathbb{R}}}|t-s|^{-d\left(\frac{1}{2}-\frac{1}{r}\right)}\|v\|^{\alpha+1}_{L^{(\alpha+1)r'}} ds \bigg\|_{L^p_{(T,\infty)}}\lesssim\varepsilon \end{aligned} \end{equation*} where in the last step we used the Hardy-Littlewood-Sobolev inequality and the fourth of \eqref{smallness}. \end{proof} As consequences of the previous lemma we obtain the following corollaries. \begin{corollary}\label{cor5.6} Let $\{x_n\}_{n\in\mathbb{N}}\subset\mathbb{R}^d$ be a sequence such that $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow-\infty$ and let $v\in\mathcal{C}(\mathbb{R};H^1)$ be a solution to \eqref{NLS-d} with initial datum $\psi\in H^1$.
Then for a sequence $\{t_n\}_{n\in\mathbb{N}}$ such that $|t_n|\overset{n\rightarrow\infty}\longrightarrow\infty$ \begin{equation*} v_n(t,x)=e^{it(\Delta-V)}\psi_n-i\int_{0}^{t}e^{i(t-s)(\Delta-V)}\big(|v_{n}|^{\alpha}v_{n}\big)ds+e_n(t,x) \end{equation*} where $\psi_n:=e^{-it_n(\Delta -V)}\tau_{x_n}\psi,$ $v_n:=v(t-t_n,x-x_n)$ and $\|e_n\|_{L^pL^r}\overset{n\rightarrow\infty}\longrightarrow 0$. \end{corollary} \begin{corollary}\label{cor5.7} Let $\{x_n\}_{n\in\mathbb{N}}\subset\mathbb{R}^d$ be a sequence such that $x_{n,1} \overset{n\rightarrow \infty}\longrightarrow+ \infty$ and let $v\in\mathcal{C}(\mathbb{R};H^1)$ be a solution to \eqref{NLS1-d} with initial datum $\psi\in H^1$. Then for a sequence $\{t_n\}_{n\in\mathbb{N}}$ such that $|t_n|\overset{n\rightarrow\infty}\longrightarrow \infty$ \begin{equation*} v_n(t,x)=e^{it(\Delta-V)}\psi_n-i\int_{0}^{t}e^{i(t-s)(\Delta-V)}\big(|v_{n}|^{\alpha}v_{n}\big)ds+e_n(t,x) \end{equation*} where $\psi_n:=e^{-it_n(\Delta -V)}\tau_{x_n}\psi,$ $v_n:=v(t-t_n,x-x_n)$ and $\|e_n\|_{L^pL^r}\overset{n\rightarrow\infty}\longrightarrow 0$. \end{corollary} We shall also need the following results, for whose proof we refer to \cite{BV}. \begin{prop}\label{prop5.8} Let $\psi\in H^1.$ There exists $\hat{U}_{\pm}\in\mathcal{C}(\mathbb{R}_{\pm};H^1)\cap L^{p}_{\mathbb{R}_{\pm}}L^r$ solution to \eqref{NLSV-d} such that \begin{equation*} \|\hat{U}_{\pm}(t,\cdot)-e^{-it(\Delta-V)}\psi\|_{H^1}\overset{t\rightarrow\pm\infty}\longrightarrow0. 
\end{equation*} Moreover, if $t_n\rightarrow\mp\infty$, then \begin{equation*} \hat{U}_{\pm,n}=e^{it(\Delta-V)}\psi_n-i\int_{0}^{t}e^{i(t-s)(\Delta-V)}\big(|\hat{U}_{\pm,n}|^{\alpha}\hat{U}_{\pm,n}\big)ds+h_{\pm,n}(t,x) \end{equation*} where $\psi_n:=e^{-it_n(\Delta-V)}\psi,$ $\hat{U}_{\pm,n}(t,\cdot)=:\hat{U}_{\pm}(t-t_n,\cdot)$ and $\|h_{\pm,n}(t,x)\|_{L^pL^r}\overset{n\rightarrow\infty}\longrightarrow 0.$ \end{prop} \section{Existence and extinction of the critical element}\label{critical} In view of the results stated in \autoref{perturbative}, we define the following quantity belonging to $(0, \infty]$: \begin{align*} E_{c}=\sup\bigg\{ &E>0 \textit{ such that if } \varphi\in H^1\, \textit{with } E(\varphi)<E\\\notag &\textit{then the solution of \eqref{NLSV-d} with initial data } \varphi \textit{ is in } L^{p}L^{r}\bigg\}. \end{align*} Our aim is to show that $E_c=\infty$, and hence to obtain large data scattering. \subsection{Existence of the Minimal Element} \begin{prop}\label{lemcri} Suppose $E_{c}<\infty.$ Then there exists $\varphi_{c}\in H^1$, $\varphi_{c}\not\equiv0$, such that the corresponding global solution $u_{c}(t,x)$ to \eqref{NLSV-d} does not scatter. Moreover, there exists $\bar x(t)\in{\mathbb{R}}^{d-1}$ such that $\left\{ u_{c}(t, x_1,\bar x-\bar x(t))\right\}_{t\in{\mathbb{R}}^+} $ is a relatively compact subset of $H^{1}$. \end{prop} \begin{proof} If $E_{c}<\infty$, there exists a sequence $\varphi_{n}$ of elements of $H^{1}$ such that \begin{equation*} E(\varphi_{n})\overset{n\rightarrow\infty}{\longrightarrow}E_{c}, \end{equation*} and, denoting by $u_{n}\in \mathcal{C}(\mathbb{R};H^{1})$ the corresponding solution to \eqref{NLSV-d} with initial datum $\varphi_n$, we have \begin{equation*} u_{n}\notin L^{p}L^{r}. \end{equation*} We apply the profile decomposition to $\varphi_{n}:$ \begin{equation}\label{profdec} \varphi_{n}=\sum_{j=1}^{J}e^{-it_{n}^{j}(-\Delta +V)}\tau_{x_{n}^{j}}\psi^{j}+R_{n}^{J}.
\end{equation} \begin{claim}\label{claim} There exists only one non-trivial profile, that is $J=1$. \end{claim} Assume $J>1$. For $j\in\{1,\dots, J\}$ to each profile $\psi^{j}$ we associate a nonlinear profile $U_n^j$. We can have one of the following situations, where, without loss of generality, we have reordered the cases as follows: \begin{enumerate} \item $(t_{n}^{j},x_{n}^{j})=(0,0)\in {\mathbb{R}}\times {\mathbb{R}}^d$, \item $t_{n}^{j}=0$ and $x_{n,1}^{j}\overset{n \rightarrow \infty}\longrightarrow-\infty,$ \item $t_{n}^{j}=0,$ and $x_{n,1}^{j}\overset{n \rightarrow \infty}\longrightarrow +\infty,$ \item $t_{n}^{j}=0,$ $x_{n,1}^{j}=0$ and $|\bar x_{n}^{j}|\overset{n \rightarrow \infty}\longrightarrow\infty,$ \item $x_{n}^{j}=\vec 0$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow-\infty$, \item $x_{n}^{j}=\vec 0$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow+\infty$, \item $x_{n,1}^{j}\overset{n\to \infty}\longrightarrow-\infty$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow-\infty,$ \item $x_{n,1}^{j}\overset{n\to \infty}\longrightarrow-\infty$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow+\infty,$ \item $x_{n,1}^{j}\overset{n\to \infty}\longrightarrow+\infty$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow-\infty,$ \item $x_{n,1}^{j}\overset{n\to \infty}\longrightarrow+\infty$ and $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow+\infty,$ \item $x_{n,1}^{j}=0,$ $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow-\infty$ and $|\bar x^j_n|\overset{n \rightarrow \infty}\longrightarrow\infty,$ \item $x_{n,1}^{j}=0,$ $t_{n}^{j}\overset{n \rightarrow \infty}\longrightarrow+\infty$ and $|\bar x^j_n|\overset{n \rightarrow \infty}\longrightarrow\infty.$ \end{enumerate} Notice that, differently from \cite{BV}, we have twelve cases to consider and not six (this is because we have to take into account a different behavior of $V(x)$ as $|x|\rightarrow \infty$).
Since the arguments needed to deal with the cases above are similar to the ones considered in \cite{BV}, we skip the details. The main point is that, for instance, in dealing with the cases $(2)$ and $(3)$ above we have to use respectively \autoref{cor5.3} and \autoref{cor5.4}. When instead $|\bar x_n^j| \overset{n\to \infty}\longrightarrow \infty$ and $x_{n,1}^j=0$ we use the fact that these sequences can be assumed, according to the profile decomposition \autoref{profiledec}, to have components which are integer multiples of the periods, so that the translations commute with the nonlinear equation; if moreover $|t_n|\overset{n\to \infty}\longrightarrow\infty$ we also use \autoref{prop5.8}. Once it is proved that $J=1$ and \begin{equation*} \varphi_n=e^{it_n(\Delta-V)}\psi+R_n \end{equation*} with $\psi\in H^1$ and $\underset{n\rightarrow\infty}\limsup\|e^{it(\Delta-V)}R_n\|_{L^pL^r}=0,$ the existence of the critical element follows from \cite{FXC}, ensuring that, up to subsequence, $\varphi_n$ converges to $\psi$ in $H^1$ and so $\varphi_c=\psi.$ We denote by $u_c$ the solution to \eqref{NLSV-d} with Cauchy datum $\varphi_c,$ and we call it the critical element. This is the minimal (with respect to the energy) non-scattering solution to \eqref{NLSV-d}. We can therefore assume with no loss of generality that $\|u_c\|_{L^{p}((0,+\infty);L^r)}=\infty.$ The precompactness of the trajectory up to translation by a path $\bar x(t)$ follows again from \cite{FXC}. \end{proof} \subsection{Extinction of the Minimal Element} Next we show that the unique solution that satisfies the compactness properties of the minimal element $u_c(t,x)$ (see \autoref{lemcri}) is the trivial solution. Hence we get a contradiction and we deduce that necessarily $E_c=\infty$. The tool that we shall use is the following Nakanishi-Morawetz type estimate.
\begin{lemma} Let $u(t,x)$ be the solution to \eqref{NLSV-d}, where $V(x)$ satisfies $x_1 \cdot \partial_{x_1}V (x)\leq 0$ for any $x\in{\mathbb{R}}^d.$ Then \begin{equation}\label{pote} \int_{\mathbb{R}}\int_{{\mathbb{R}}^{d-1}}\int_{\mathbb{R}}\frac{t^2|u|^{{\alpha}+2}}{(t^2+x_1^2)^{3/2}}\,dx_1\,d\bar x\,dt <\infty. \end{equation} \end{lemma} \begin{proof} The proof follows the ideas of \cite{N}; we shall recall it briefly, with the obvious modifications required by our context. Let us introduce \begin{equation*} m(u)=a\partial_{x_1}u+gu \end{equation*} with \begin{equation*} \begin{aligned} a=-\frac{2x_1}{\lambda},\quad g=-\frac{t^2}{\lambda^3}-\frac{it}{\lambda}, \quad \lambda=(t^2+x_1^2)^{1/2} \end{aligned} \end{equation*} and by using the equation solved by $u(t,x)$ we get \begin{equation}\label{identity} \begin{aligned} 0&=\Re\{(i\partial_t u+\Delta u-Vu-|u|^{\alpha} u)\bar{m}\} \\ &=\frac{1}{2}\partial_t\bigg(-\frac{2x_1}{\lambda}\Im\{\bar{u}\partial_{x_1}u\}-\frac{t|u|^2}{\lambda}\bigg)\\ & \quad +\partial_{x_1}\Re\{\partial_{x_1}u\bar{m}-al_V(u)-\partial_{x_1}g\frac{|u|^2}{2}\}\\ & \quad +\frac{t^2G(u)}{\lambda^3}+\frac{|u|^2}{2}\Re\{\partial_{x_1}^2g\}\\ & \quad +\frac{|2it\partial_{x_1}u+{x_1}u|^2}{2\lambda^3}-{x_1}\partial_{x_1}V\frac{|u|^2}{\lambda}\\ & \quad +div_{\bar x}\Re\{\bar m\nabla_{\bar x}u\}. \end{aligned} \end{equation} with $G(u)=\frac{{\alpha}}{{\alpha}+2}|u|^{{\alpha}+2},$ $l_V(u)=\frac{1}{2}\left(-\Re\{i\bar{u}\partial_tu\}+|\partial_{x_1}u|^2+\frac{2|u|^{{\alpha}+2}}{{\alpha}+2}+V|u|^2\right)$ and $div_{\bar x}$ is the divergence operator w.r.t. the $(x_2,\dots,x_d)$ variables.
Making use of the repulsivity assumption in the $x_1$ direction, we get \eqref{pote} by integrating \eqref{identity} on $\{1<|t|<T\}\times{\mathbb{R}}^d,$ obtaining \begin{equation*} \int_1^T\int_{{\mathbb{R}}^{d-1}}\int_{\mathbb{R}}\frac{t^2|u|^{{\alpha}+2}}{(t^2+x_1^2)^{3/2}}\,dx_1\,d\bar x\,dt\leq C, \end{equation*} where $C=C(M,E)$ depends on the mass and the energy, and then letting $T\to\infty.$ \end{proof} \begin{lemma}\label{limit-point} Let $u(t,x)$ be a nontrivial solution to \eqref{NLSV-d} such that for a suitable choice $\bar x(t)\in {\mathbb{R}}^{d-1}$ we have that $\{u(t,x_1, \bar x-\bar x(t))\}\subset H^1$ is a precompact set. If $\bar{u}\in H^1$ is one of its limit points, then $\bar{u}\neq0.$ \end{lemma} \begin{proof} This property simply follows from the conservation of the energy. \end{proof} \begin{lemma}\label{lem2} If $u(t,x)$ is as in \autoref{limit-point} then for any $\varepsilon>0$ there exists $R>0$ such that \begin{equation} \sup_{t\in {\mathbb{R}}} \int_{{\mathbb{R}}^{d-1}} \int_{|x_1|>R} (|u|^2+|\nabla_x u|^2+|u|^{\alpha+2})\,d\bar x\,dx_1<\varepsilon. \end{equation} \end{lemma} \begin{proof} This is a well-known property implied by the precompactness of the trajectory. \end{proof} \begin{lemma}\label{lem1} If $u(t,x)$ is as in \autoref{limit-point} then there exist $R_0>0$ and $\varepsilon_0>0$ such that \begin{equation} \int_{{\mathbb{R}}^{d-1}}\int_{|x_1|<R_0}|u(t,x_1,\bar x-\bar x(t))|^{\alpha+2}\,d\bar x\,dx_1>\varepsilon_0 \qquad \forall\,t\in{\mathbb{R}}^+. \end{equation} \end{lemma} \begin{proof} It is sufficient to prove that $\inf_{t\in {\mathbb{R}}^+} \|u(t ,x_1,\bar x-\bar x(t))\|_{L^{\alpha+2}} >0$; the result then follows by combining this fact with \autoref{lem2}.
Suppose by contradiction that this is not true; then there exists a sequence $\{t_n\}_{n\in {\mathbb{N}}}\subset{\mathbb{R}}^+$ such that $u(t_n ,x_1,\bar x-\bar x(t_n)) \overset{n\rightarrow\infty}\longrightarrow 0$ in $L^{\alpha+2}.$ On the other hand, by the compactness assumption, this implies that $u(t_n ,x_1,\bar x-\bar x(t_n)) \overset{n\rightarrow\infty}\longrightarrow 0$ in $H^{1}$, which is in contradiction with \autoref{limit-point}. \end{proof} We now conclude the proof of scattering for large data by showing the extinction of the minimal element. Let $R_0>0$ and $\varepsilon_0>0$ be given by \autoref{lem1}, then \begin{equation*} \begin{aligned} \int_{\mathbb{R}}\int_{{\mathbb{R}}^{d-1}}\int_{\mathbb{R}}\frac{|u|^{\alpha+2}t^2}{(t^2+x_1^2)^{3/2}}\,dx_1\,d\bar x\,dt&\geq\int_{\mathbb{R}} \int_{{\mathbb{R}}^{d-1}}\int_{|x_1|<R_0}\frac{t^2|u(t,x_1,\bar x-\bar x(t))|^{\alpha+2}}{(t^2+x_1^2)^{3/2}}\,dx_1\,d\bar x\,dt\\ &\geq\varepsilon_0\int_{1}^T\frac{t^2}{(t^2+R_0^2)^{3/2}}\,dt\to\infty\qquad \text{as}\quad T\to\infty. \end{aligned} \end{equation*} Hence we contradict \eqref{pote} and we conclude that the critical element cannot exist. \section{Double scattering channels in $1D$} This last section is devoted to proving \autoref{linscat}. Following \cite{DaSi} (see \emph{Example 1,} page $283$) we have the following property: \begin{equation}\label{multi} \begin{aligned} \forall\, \psi\in L^2 \quad &\exists\,\eta_\pm, \gamma_\pm\in L^2 \hbox{ \,such that } \\ \|e^{it (\partial_x^2 - V)} \psi &-e^{it\partial_x^2}\eta_\pm-e^{it(\partial_x^2 -1)}\gamma_\pm\|_{L^2}\overset{t \rightarrow\pm \infty}\longrightarrow 0. \end{aligned} \end{equation} Our aim is now to show that \eqref{multi} actually holds in $H^1$ provided that $\psi\in H^1$. We shall prove this property for $t\rightarrow +\infty$ (the case $t\rightarrow -\infty$ is similar).
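The argument of the next subsections rests on the elementary interpolation inequality $\|f\|_{H^1}\leq\|f\|_{L^2}^{1/2}\|f\|_{H^2}^{1/2}$; for completeness, we recall that, with the convention $\|f\|_{H^s}^2=\int_{\mathbb{R}}(1+\xi^2)^s|\hat f(\xi)|^2\,d\xi$, it follows from the Plancherel theorem and the Cauchy-Schwarz inequality: \begin{equation*} \|f\|_{H^1}^2=\int_{\mathbb{R}}(1+\xi^2)|\hat f(\xi)|^2\,d\xi\leq\left(\int_{\mathbb{R}}|\hat f(\xi)|^2\,d\xi\right)^{1/2}\left(\int_{\mathbb{R}}(1+\xi^2)^2|\hat f(\xi)|^2\,d\xi\right)^{1/2}=\|f\|_{L^2}\|f\|_{H^2}. \end{equation*}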
\subsection{Convergence \eqref{multi} occurs in $H^1$ provided that $\psi\in H^2$} To this end it is sufficient to show that \begin{equation}\label{firststep}\psi\in H^2 \Longrightarrow \eta_+, \gamma_+\in H^2.\end{equation} Once this is proved, we conclude the proof of this first step by using the interpolation inequality $$\|f\|_{H^1}\leq \|f\|_{L^2}^{1/2} \|f\|_{H^2}^{1/2}$$ in conjunction with \eqref{multi} and with the bound $$ \sup_{t\in \mathbb R} \|e^{it (\partial_x^2 - V)} \psi -e^{it\partial_x^2}\eta_+-e^{it(\partial_x^2 -1)}\gamma_+\|_{H^2}<\infty$$ (this last property follows from the fact that $D(\partial_x^2 - V(x))=H^2$ is preserved along the linear flow, together with \eqref{firststep}). We now prove \eqref{firststep}. Notice that by \eqref{multi} we get \begin{equation*} \|e^{-it\partial_x^2}e^{it (\partial_x^2 - V)} \psi -\eta_+-e^{-it}\gamma_+\|_{L^2}\overset{t \rightarrow \infty}\longrightarrow 0, \end{equation*} and by choosing the subsequence $t_n=2\pi n$ we get \begin{equation*} \|e^{-it_n\partial_x^2}e^{it_n (\partial_x^2 - V)} \psi -\eta_+-\gamma_+\|_{L^2}\overset{n \rightarrow \infty}\longrightarrow 0. \end{equation*} By combining this fact with the bound $\sup_n \|e^{-it_n\partial_x^2}e^{it_n (\partial_x^2 - V)} \psi\|_{H^2}<\infty$ we get $\eta_++\gamma_+\in H^2$. Arguing as above but choosing $t_n=(2n+1)\pi$ we also get $\eta_+-\gamma_+\in H^2$ and hence necessarily $\eta_+, \gamma_+\in H^2$. \subsection{The map $H^2\ni \psi\mapsto (\eta_+, \gamma_+)\in H^2\times H^2$ satisfies $\|\gamma_+\|_{H^1}+\|\eta_+\|_{H^1}\lesssim \|\psi\|_{H^1}$} Once this step is proved, we conclude by a straightforward density argument. By a linear version of the conservation laws \eqref{consmass}, \eqref{consen} we get \begin{equation}\label{H1V} \|e^{it (\partial_x^2 - V)} \psi\|_{H^1_V}= \|\psi\|_{H^1_V} \end{equation} where $$ \|w\|_{H^1_V}^2 =\int |\partial_x w|^2 dx+\int V|w|^2dx+\int |w|^2 dx. 
$$ Notice that this norm is clearly equivalent to the usual norm of $H^1$. \\ Next notice that by using the conservation of the mass we get \begin{equation*} \|\eta_++\gamma_+\|_{L^2}^2=\|\eta_+ + e^{-2n\pi i}\gamma_+\|_{L^2}^2 =\|e^{i2\pi n\partial_x^2}\eta_+ + e^{i2\pi n(\partial_x^2-1)}\gamma_+\|_{L^2}^2 \end{equation*} and by using \eqref{multi} we get \begin{equation*}\|\eta_++\gamma_+\|_{L^2}^2=\lim_{t\rightarrow\infty}\|e^{it( \partial_x^2 - V)} \psi\|_{L^2}^2 =\|\psi\|_{L^2}^2 \end{equation*} Moreover we have \begin{align}\notag \|\partial_x(\eta_++\gamma_+)\|^2_{L^2}&=\|\partial_x(\eta_++e^{-2n\pi i}\gamma_+)\|^2_{L^2}=\|\partial_x(e^{i2\pi n\partial_x^2}(\eta_++e^{-i2\pi n}\gamma_+))\|^2_{L^2}\\\notag &=\|\partial_x(e^{i2\pi n \partial_x^2}\eta_++e^{i2\pi n (\partial_x^2-1)}\gamma_+)\|^2_{L^2}\\\notag \end{align} and by using the previous step and \eqref{H1V} we get \begin{align*} \|\partial_x(\eta_++\gamma_+)\|^2_{L^2}=&\lim_{t\rightarrow+\infty}\|\partial_x(e^{it(\partial_x^2 - V)}\psi)\|^2_{L^2}\\ \leq&\lim_{t\rightarrow \infty}\|e^{it(\partial_x^2 - V)}\psi \|^2_{H^1_V}=\|\psi\|^2_{H^1_V} \lesssim \|\psi\|^2_{H^1}. \end{align*} Summarizing we get $$\|\eta_++\gamma_+\|_{H^1}\lesssim \|\psi\|_{H^1}.$$ By a similar argument and by replacing the sequence $t_n=2\pi n$ by $t_n=(2n+1)\pi$ we get $$\|\eta_+-\gamma_+\|_{H^1}\lesssim \|\psi\|_{H^1}.$$ The conclusion follows. \end{document}
\begin{document} \title{Singular limit of an Allen-Cahn equation with nonlinear diffusion} \begin{abstract} { We consider an Allen-Cahn equation with nonlinear diffusion, motivated by the study of the scaling limit of certain interacting particle systems. We investigate its singular limit and show the generation and propagation of an interface in the limit. The evolution of this limit interface is governed by mean curvature flow with a novel, homogenized speed in terms of a surface tension-mobility parameter emerging from the nonlinearity in our equation. } \end{abstract} \footnote{ \hskip -6.5mm { $^+$Aix Marseille University, Toulon University, Laboratory Centre de Physique Théorique, CNRS, Marseille, France. \\ e-mail: [email protected] \\ $^\ast$Department of Mathematics, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan. \\ e-mail: [email protected] \\ $^\%$CNRS and Laboratoire de Math\'ematiques, University Paris-Saclay, Orsay Cedex 91405, France. \\ e-mail: [email protected] \\ $^\dagger$Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Korea. e-mail: [email protected]\\ $^\diamond$Department of Mathematics, University of Arizona, 621 N.\ Santa Rita Ave., Tucson, AZ 85750, USA. \\ e-mail: [email protected] } } \thanks{MSC 2020: 35K57, 35B40.} \thanks{keywords: Allen-Cahn equation, Mean curvature flow, Singular limit, Nonlinear diffusion, Interface, Surface tension} \section{Introduction} The Allen-Cahn equation with linear diffusion \begin{align*} u_t = \Delta u - \dfrac{1}{\varepsilon^2} F'(u) \end{align*} was introduced to understand the phase separation phenomena which appear in the construction of polycrystalline materials \cite{AC1979}.
Here, $u$ stands for the order parameter which describes the state of the material, $F$ is a double-well potential with two distinct local minima $\alpha_\pm$ at two different phases, and the parameter $\varepsilon > 0$ corresponds to the interface width in the phase separation process. When $\varepsilon$ is small, it is expected that $u$ converges to either of the two states $u = \alpha_+$ and $u = \alpha_-$. Thus, the limit $\varepsilon \downarrow 0$ creates a steep interface dividing two phases; this coincides with the phase separation phenomena and the limiting interface is known to evolve according to mean curvature flow; see \cite{AHM2008, Xinfu1990}. In this paper, we prove generation and propagation of interface properties for an Allen-Cahn equation with nondegenerate nonlinear diffusion. More precisely, we study the problem \begin{align*} (P^\varepsilon)~~ \begin{cases} u_t = \Delta \varphi(u) + \displaystyle{ \frac{1}{\varepsilon^2}} f(u) &\mbox{ in } D \times \mathbb{R}^+\\ \displaystyle{ \frac{\partial \varphi(u)}{\partial \nu} } = 0 &\mbox{ in } \partial D \times \mathbb{R}^+\\ u(x,0) = u_0(x) &\text{ for } x \in D \end{cases} \end{align*} where the unknown function $u$ denotes a phase function, $D$ is a smooth bounded domain in $\mathbb{R}^N, N \geq 2$, $\nu$ is the outward unit normal vector to the boundary $\partial D$ and $\varepsilon > 0$ is a small parameter. The nonlinear functions $\varphi$ and $f$ satisfy the following properties. We assume that $f$ has exactly three zeros $f(\alpha_-) = f(\alpha_+) = f(\alpha) = 0$ where $\alpha_- < \alpha < \alpha_+$, and \begin{align}\label{cond_f_bistable} f \in C^2(\mathbb{R}) ,~ f'(\alpha_-) < 0 ,~ f'(\alpha_+) < 0 ,~ f'(\alpha) > 0 \end{align} so that \begin{align}\label{cond_f_tech} f(s) > 0 ~ \text{for} ~ s < \alpha_- ,~ f(s) < 0 ~ \text{for} ~ s > \alpha_+. 
\end{align} We suppose that \begin{align}\label{cond_phi'_bounded} \varphi \in C^4(\mathbb{R}), ~~ \varphi' \geq C_\varphi \end{align} for some positive constant $C_\varphi $. We impose one more relation between $f$ and $\varphi$, namely \begin{align}\label{cond_fphi_equipotential} \int_{\alpha_-}^{\alpha_+} \varphi'(s) f(s) ds = 0. \end{align} \noindent As for the initial condition $u_0(x)$ we assume that $u_0 \in C^2(\overline{D})$. Throughout the paper, we define $C_0$ and $C_1$ as follows: \begin{align} C_0 &:= || u_0 || _{C^0 \left( \overline{D} \right)} + || \nabla u_0 || _{C^0 \left( \overline{D} \right)} + || \Delta u_0 || _{C^0 \left( \overline{D} \right)}\label{cond_C0} \\ C_1 &:= \max_{|s - \alpha| \leq I} \varphi(s) + \max_{|s - \alpha| \leq I} \varphi'(s) + \max_{|s - \alpha| \leq I} \varphi''(s) ,~~ I = C_0 + \max(\alpha - \alpha_-, \alpha_+ - \alpha).\label{cond_C1} \end{align} Furthermore, we define $\Gamma_0$ by \begin{align*} \Gamma_0 := \{ x \in D: u_0(x) = \alpha \}. \end{align*} In addition, we suppose $\Gamma_0$ is a $C^{4+\nu}, 0 < \nu < 1$ hypersurface without boundary such that \begin{align} \Gamma_0 \Subset D, \nabla u_0(x) \cdot n(x) \neq 0 \text{ if}~ x \in \Gamma_0 \label{cond_gamma0_normal} \\ u_0 > \alpha \text{ in } D_0^+, ~~~~~~ u_0 < \alpha \text{ in } D_0^-, \label{cond_u0_inout} \end{align} where $D_0^-$ denotes the region enclosed by $\Gamma_0$, $D_0^+$ is the region enclosed between $\partial D$ and $\Gamma_0$, and $n$ is the outward normal vector to $D_0^-$. It is standard that the above formulation, referred to as Problem $(P^\varepsilon)$, possesses a unique classical solution $u^\varepsilon$. The present paper is originally motivated by the study of the scaling limit of a Glauber+Zero-range particle system. In this microscopic system of interacting random walks, the Zero-range part governs the rates of jumps, while the Glauber part prescribes creation and annihilation rates of the particles. 
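The structural assumptions \eqref{cond_f_bistable}--\eqref{cond_fphi_equipotential} admit simple concrete instances. The following sketch (our hypothetical example, not data from the paper) checks them numerically for $f(u) = u - u^3$ and $\varphi(u) = u + u^3/3$, for which $\alpha_\pm = \pm 1$, $\alpha = 0$ and $C_\varphi = 1$; here the equipotential condition \eqref{cond_fphi_equipotential} is automatic, since $\varphi'$ is even and $f$ is odd.

```python
import numpy as np

# Hypothetical data (our choice): f(u) = u - u^3 with zeros alpha_- = -1,
# alpha = 0, alpha_+ = 1, and nondegenerate diffusion phi(u) = u + u^3/3.
f = lambda u: u - u**3
f_prime = lambda u: 1 - 3 * u**2
phi_prime = lambda u: 1 + u**2        # phi' >= C_phi = 1 everywhere

# bistability (1.1): f'(alpha_-) < 0, f'(alpha_+) < 0, f'(alpha) > 0
assert f_prime(-1.0) < 0 and f_prime(1.0) < 0 and f_prime(0.0) > 0
# sign condition (1.2): f > 0 below alpha_-, f < 0 above alpha_+
assert f(-1.5) > 0 and f(1.5) < 0
# nondegeneracy (1.3) on a sample of values
u = np.linspace(-2, 2, 2001)
assert phi_prime(u).min() >= 1.0

# equipotential condition (1.4): int_{-1}^{1} phi'(s) f(s) ds = 0,
# checked by the midpoint rule on a symmetric grid
n = 100000
h = 2.0 / n
s = -1.0 + (np.arange(n) + 0.5) * h
integral = np.sum(phi_prime(s) * f(s)) * h
assert abs(integral) < 1e-10
```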
In a companion paper \cite{EFHPS}, we show that the system exhibits a phase separation and, under a certain space-time scaling limit, an interface arises in the limiting macroscopic density field of particles, evolving in time according to the motion by mean curvature. The system is indeed well approximated from a macroscopic viewpoint by the Allen-Cahn equation with nonlinear diffusion $(P^\varepsilon)$, or more precisely by its discretized equation. Although, in this paper, we study $(P^\varepsilon)$ under the Neumann boundary conditions, the formulation under periodic boundary conditions, used in the particle system setting in \cite{EFHPS}, can be treated similarly; see Remark \ref{rem:1} below. In some other physical situations, it is expected that the diffusion can depend on the order parameter as in our case. In the experimental article \cite{Wagner1952}, Wagner suggested that for metal alloys the diffusion depends on the concentration. In \cite{MD2010, RLA1999}, the authors considered degenerate diffusions such as porous medium diffusions instead of linear diffusions. In \cite{FL1994}, Fife and Lacey generalized the Allen-Cahn equation, which led them to an Allen-Cahn equation with parameter-dependent diffusion. Recently, \cite{FHLR2020} considered an Allen-Cahn equation with density dependent diffusion in $1$ space dimension and showed a slow motion property. However, no rigorous proof of the motion of the interface in the nonlinear diffusion context has been given for larger space dimensions $N\geq 2$. In this context, the purpose of this article is to study the singular limit of $u^\varepsilon$ as $\varepsilon \downarrow 0$. We first present a result on the generation of the interface. We use the following notation: \begin{align}\label{cond_mu_eta0} \mu = f'(\alpha) , ~~ t^\varepsilon = \mu^{-1} \varepsilon^2 |\ln \varepsilon| , ~~ \eta_0 = \min(\alpha - \alpha_-, \alpha_+ - \alpha).
\end{align} \begin{thm}\label{Thm_Generation} Let $u^\varepsilon$ be the solution of the problem $(P^\varepsilon)$, $\eta$ be an arbitrary constant satisfying $0 < \eta < \eta_0$. Then, there exist positive constants $\varepsilon_0$ and $M_0$ such that, for all $\varepsilon \in (0, \varepsilon_0)$, the following holds: \begin{enumerate}[label = (\roman*)] \item for all $x \in D$ \begin{align}\label{Thm_generation_i} \alpha_- - \eta \leq u^\varepsilon(x,t^\varepsilon) \leq \alpha_+ + \eta; \end{align} \item if $u_0(x) \geq \alpha + M_0 \varepsilon$, then \begin{align}\label{Thm_generation_ii} u^\varepsilon(x,t^\varepsilon) \geq \alpha_+ - \eta; \end{align} \item if $u_0(x) \leq \alpha - M_0 \varepsilon$, then \begin{align}\label{Thm_generation_iii} u^\varepsilon(x,t^\varepsilon) \leq \alpha_- + \eta. \end{align} \end{enumerate} \end{thm} After the interface has been generated, the diffusion term has the same order as the reaction term. As a result the interface starts to propagate. Later, we will prove that the interface moves according to the following motion equation: \begin{align}\label{eqn_motioneqn} (IP) \begin{cases} V_n = - (N - 1) \lambda_0 \kappa & \text{ on } \Gamma_t \\ \Gamma_t|_{t = 0} = \Gamma_0, & ~~ \end{cases} \end{align} where $\Gamma_t$ is the interface at time $t > 0$, $V_n$ is the normal velocity on the interface, $\kappa$ denotes its mean curvature, and $\lambda_0$ is a positive constant which will be defined later (see \eqref{eqn_lambda0} and {\eqref{second lambda_0}}). It is well known that Problem $(IP)$ possesses locally in time a unique smooth solution. Fix $T > 0$ such that the solution of $(IP)$, in \eqref{eqn_motioneqn}, exists in $[0,T]$ and denote the solution by $\Gamma = \cup_{0\leq t < T} (\Gamma_t \times \{t\})$. From Proposition 2.1 of \cite{Xinfu1990} such a $T > 0$ exists, and one can deduce that $\Gamma \in C^{4 + \nu, \frac{4 + \nu}{2}}$ in $[0,T]$, given that $\Gamma_0 \in C^{4 + \nu}$. 
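For a concrete feel of the limit motion $(IP)$, consider a sphere of radius $R(t)$: its mean curvature is $\kappa = 1/R$, so \eqref{eqn_motioneqn} reduces to the ODE $\dot R = -(N-1)\lambda_0/R$, solved exactly by $R(t) = \sqrt{R_0^2 - 2(N-1)\lambda_0 t}$. The following sketch (our illustration; the values of $N$, $\lambda_0$ and $R_0$ are arbitrary) compares a forward Euler integration of the radius ODE with this exact solution.

```python
import numpy as np

# Shrinking sphere under (IP): V_n = -(N-1) * lambda0 * kappa with kappa = 1/R
# gives dR/dt = -(N-1) * lambda0 / R, hence R(t) = sqrt(R0^2 - 2(N-1) lambda0 t).
N, lam0, R0 = 3, 0.8, 1.0            # hypothetical dimension, lambda_0, radius
dt, T = 1e-5, 0.2
R = R0
for _ in range(round(T / dt)):        # forward Euler for the radius ODE
    R += dt * (-(N - 1) * lam0 / R)

R_exact = np.sqrt(R0**2 - 2 * (N - 1) * lam0 * T)   # = 0.6 for these values
assert abs(R - R_exact) < 1e-3
```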
The second main theorem states a result on the generation and the propagation of the interface. \begin{thm}\label{Thm_Propagation} Under the conditions given in Theorem \ref{Thm_Generation}, for any given $0 < \eta < \eta_0$ there exist $\varepsilon_0 > 0$ and $C > 0$ such that \begin{align}\label{thm_propagation_1} u^\varepsilon \in \begin{cases} [\alpha_- - \eta, \alpha_+ + \eta] & \text{ for } x \in D \\ [\alpha_+ - \eta, \alpha_+ + \eta] & \text{ if } x \in D^+_t \setminus\mathcal{N}_{C\varepsilon}(\Gamma_t) \\ [\alpha_- - \eta, \alpha_- + \eta] & \text{ if } x \in D^-_t \setminus\mathcal{N}_{C\varepsilon}(\Gamma_t) \end{cases} \end{align} for all $\varepsilon \in (0, \varepsilon_0)$ and $t \in [t^\varepsilon,T]$, where $D_t^-$ denotes the region enclosed by $\Gamma_t$, $D_t^+$ is that enclosed between $\partial D$ and $\Gamma_t$, and $\mathcal{N}_r(\Gamma_t) := \{ x \in D, dist(x, \Gamma_t) < r \}$. \end{thm} This theorem implies that, after generation, the interface propagates according to the motion $(IP)$ with a width of order $\mathcal{O}(\varepsilon)$. Note that Theorems \ref{Thm_Generation} and \ref{Thm_Propagation} extend similar results for linear diffusion Allen-Cahn equations due to \cite{AHM2008}. We now state an approximation result inspired by a similar result proved in \cite{MH2012}. \begin{thm}\label{thm_asymvali} \begin{enumerate} \item[(i)] Let the assumptions of Theorem \ref{Thm_Propagation} hold and $\rho > 1$. 
Then, the solution $u^\varepsilon$ of $(P^\varepsilon)$ satisfies \begin{align}\label{thm_asymvali_i} \lim_{\varepsilon \rightarrow 0} \sup_{\rho t^\varepsilon \leq t \leq T,~x \in D} \left| u^\varepsilon(x,t) - U_0 \left( \dfrac{d^\varepsilon(x,t)}{\varepsilon} \right) \right| = 0, \end{align} where $U_0$ is a standing wave solution defined in \eqref{eqn_AsymptExp_U0} and $d^\varepsilon$ denotes the signed distance function associated with $\Gamma_t^\varepsilon := \{ x \in D : u^\varepsilon(x,t) = \alpha \}$, defined as follows: \begin{align*} d^\varepsilon(x,t) = \begin{cases} dist(x,\Gamma^\varepsilon_t) &\text{if}~~ x \in D^{\varepsilon,+}_t \\ - dist(x,\Gamma^\varepsilon_t) &\text{if}~~ x \in D^{\varepsilon,-}_t \end{cases} \end{align*} where $D^{\varepsilon,-}_t$ denotes the region enclosed by $\Gamma^\varepsilon_t$ and $D^{\varepsilon,+}_t$ denotes the region enclosed between $\partial D$ and $\Gamma^\varepsilon_t$. \item[(ii)] For small enough $\varepsilon > 0$ and for any $t \in [\rho t_\varepsilon, T]$, $\Gamma^\varepsilon_t$ can be expressed as a graph over $\Gamma_t$. \end{enumerate} \end{thm} \begin{rmk} \label{rem:1} Theorems \ref{Thm_Generation}, \ref{Thm_Propagation} and \ref{thm_asymvali} hold not only for the Neumann boundary condition of Problem $(P^\varepsilon)$ but also for periodic boundary conditions with $D = \mathbb{T}^N$, with similar proofs as given in Sections \ref{section_3}, \ref{section_4} and \ref{section_5}. \end{rmk} The paper is organized as follows. In Section \ref{sec:2}, the interface motion $(IP)$ is formally derived from the problem $(P^\varepsilon)$ as $\varepsilon \downarrow 0$. In particular, the constant $\lambda_0$ is obtained. Section \ref{section_3} studies the generation of interface and gives the proof of Theorem \ref{Thm_Generation}. In a short time, the reaction term $f$ governs the system and the solution of $(P^\varepsilon)$ behaves close to that of an ordinary differential equation. 
Section \ref{section_4} discusses the propagation of interface and Theorem \ref{Thm_Propagation} is proved. The sub- and super-solutions are constructed by means of two functions $U_0$ and $U_1$ formally introduced in asymptotic expansions in Section \ref{sec:2}. Section \ref{section_5} gives the proof of Theorem \ref{thm_asymvali}. Finally, in the Appendix, we define the mobility $\mu_{AC}$ and the surface tension $\sigma_{AC}$ of the interface, especially in our nonlinear setting, and show the relation $\lambda_0= \mu_{AC}\sigma_{AC}$. \section{Formal derivation of the interface motion equation} \label{sec:2} In this section, we formally derive the equation of interface motion of the Problem $(P^\varepsilon)$ by applying the method of matched asymptotic expansions. To this purpose, we first define the interface $\Gamma_t$ and then derive its equation of motion. Suppose that $u^\varepsilon$ converges to a step function $u$ where \begin{align*} u(x,t) = \begin{cases} \alpha_+ & \text{in}~ D^+_t \\ \alpha_- & \text{in}~ D^-_t. \end{cases} \end{align*} Let \begin{align*} \Gamma_t = \overline{D^+_t}\cap \overline{D^-_t}, \overline{D^+_t}\cup \overline{D^-_t} = D ,~t \in [0,T]. \end{align*} Let also $\overline{d}(x,t)$ be the signed distance function to $\Gamma_t$ defined by \begin{align}\label{eqn_signed_dist} \overline{d}(x,t) := \begin{cases} - dist(x, \Gamma_t) & \text{ for } x \in \overline{D^-_t} \\ dist(x, \Gamma_t) & \text{ for } x \in D^+_t. \end{cases} \end{align} Assume that $u^\varepsilon$ has the expansions \begin{align*} u^\varepsilon(x,t) = \alpha_\pm + \varepsilon u^\pm_1(x,t) + \varepsilon^2 u^\pm_2(x,t) + \cdots \end{align*} away from the interface $\Gamma$ and that \begin{align}\label{eqn_u^eps_expansion} u^\varepsilon(x,t) = U_0(x,t,\xi) + \varepsilon U_1(x,t,\xi) + \varepsilon^2 U_2(x,t,\xi) + \cdots \end{align} near $\Gamma$, where $\displaystyle{\xi = \frac{\overline{d}}{\varepsilon}}$. 
Here, the variable $\xi$ is given to describe the rapid transition between the regions $\{ u^\varepsilon \simeq \alpha^+ \}$ and $ \{ u^\varepsilon \simeq \alpha^- \}$. In addition, we normalize $U_0$ and $U_k$ so that \begin{align}\label{cond_Uk_normal} U_0(x,t,0) = \alpha \nonumber \\ U_k(x,t,0) = 0. \end{align} To match the inner and outer expansions, we require that \begin{align}\label{cond_U0_matching} U_0(x,t,\pm \infty) = \alpha_\pm, ~~~ U_k(x,t,\pm \infty) = u^\pm_k(x,t) \end{align} for all $k \geq 2$. After substituting the expansion (\ref{eqn_u^eps_expansion}) into $(P^\varepsilon)$, we collect the $\varepsilon^{-2}$ terms, to obtain \begin{align*} \varphi(U_0)_{zz} + f(U_0) = 0. \end{align*} Since this equation only depends on the variable $z$, we may assume that $U_0$ is only a function of the variable $z$, that is $U_0(x,t,z) = U_0(z)$. In view of the conditions (\ref{cond_Uk_normal}) and (\ref{cond_U0_matching}), we find that $U_0$ is the unique increasing solution of the following problem \begin{align}\label{eqn_AsymptExp_U0} \begin{cases} (\varphi(U_0))_{zz} + f(U_0) = 0 \\ U_0(-\infty) = \alpha_-,~ U_0(0)= \alpha,~ U_0(+\infty) = \alpha_+. \end{cases} \end{align} In order to understand the nonlinearity more clearly, we set \begin{align*} g(v) := f(\varphi^{-1}(v)), \end{align*} where $\varphi^{-1}$ is the inverse function of $\varphi$ and define $V_0(z) := \varphi(U_0(z))$; note that such transformation is possible by the condition (\ref{cond_phi'_bounded}). Substituting $V_0$ into equation (\ref{eqn_AsymptExp_U0}) yields \begin{align}\label{eqn_AsymptExp_V0} \begin{cases} V_{0zz} + g(V_0) = 0 \\ V_0(-\infty) = \varphi(\alpha_-),~ V_0(0)= \varphi(\alpha), ~ V_0(+\infty) = \varphi(\alpha_+). \end{cases} \end{align} Condition (\ref{cond_fphi_equipotential}) then implies the existence of the unique increasing solution of (\ref{eqn_AsymptExp_V0}). Next we collect the $\varepsilon^{-1}$ terms in the asymptotic expansion. 
In view of the definition of $U_0(z)$ and the condition (\ref{cond_Uk_normal}), we obtain the following problem \begin{align}\label{eqn_AsymptExp_U1} \begin{cases} (\varphi'(U_0) \overline{U_1})_{zz} + f'(U_0)\overline{U_1} = \overline{d}_t U_{0z} - (\varphi(U_0))_z \Delta \overline{d} \\ \overline{U_1}(x,t,0) = 0, ~~~ \varphi'(U_0) \overline{U_1} \in L^\infty(\mathbb{R}). \end{cases} \end{align} To prove the existence of solution to (\ref{eqn_AsymptExp_U1}), we consider the transformed function $\overline{V_1} = \varphi'(U_0)\overline{U_1}$, which gives the problem \begin{align}\label{eqn_AsymptExp_V1} \begin{cases} \overline{V_{1}}_{zz} + g'(V_0)\overline{V_1} = \displaystyle{\frac{V_{0z}}{\varphi'(\varphi^{-1} (V_0) )}} \overline{d}_t - V_{0z} \Delta \overline{d} \\ \overline{V_1}(x,t,0) = 0, ~~~ \overline{V_1} \in L^\infty(\mathbb{R}). \end{cases} \end{align} Now, Lemma 2.2 of \cite{AHM2008} implies the existence and uniqueness of $V_1$ provided that \begin{align*} \int_{\mathbb{ R}} \left( \frac{1}{\varphi'(\varphi^{-1}(V_0))} \overline{d}_t - \Delta \overline{d} \right) V_{0z}^2 = 0. \end{align*} Substituting $V_0 = \varphi(U_0)$ and $ V_{0z} = \varphi'(U_0) U_{0z} $ yields \begin{align} \overline{d}_t = \frac {\int_{\mathbb{ R}} V_{0z}^2} {\int_{\mathbb{ R}} \frac{V_{0z}^2}{\varphi'(\varphi^{-1}(V_0))}} \Delta \overline{d} = \frac {\int_{\mathbb{ R}} (\varphi'(U_0) U_{0z})^2}{\int_{\mathbb{ R}} \varphi'(U_0) U_{0z}^2} \Delta \overline{d}. \end{align} It is known that $\overline{d}_t=-V_n$, where $V_n$ is equal to the normal velocity on the interface $\Gamma_t$, and $\Delta \overline{d}$ is equal to $(N - 1) \kappa$, where $\kappa$ is the mean curvature of $\Gamma_t$. Thus, we obtain the equation of motion of the interface $\Gamma_t$, \begin{align} V_n = -( N - 1 ) \lambda_0 \kappa, \end{align} where \begin{align} \lambda_0 = \frac {\int_{\mathbb{ R}} (\varphi'(U_0) U_{0z})^2}{\int_{\mathbb{ R}} \varphi'(U_0) U_{0z}^2}. 
\label{eqn_lambda0} \end{align} The constant $\lambda_0$ is interpreted as the surface tension $\sigma_{AC}$ multiplied by the mobility $\mu_{AC}$ of the interface; see Appendix. In particular, $(IP)$ coincides with the equation (1) in \cite{AC1979}. Finally, we derive an explicit form of $\lambda_0$. Indeed, we multiply the equation \eqref{eqn_AsymptExp_U0} by ${\varphi(U_0)}_z$, yielding \begin{equation*} {\varphi(U_0)}_{zz} {\varphi(U_0)}_z+f(U_0) {\varphi(U_0)}_{z}=0\,. \end{equation*} Integrating from $-\infty$ to $z$, we obtain $$ \frac 12 \big[\varphi(U_0)_z\big]^2(z)+\int _{-\infty} ^{z} f(U_0) {\varphi(U_0)}_{z} dz=0\, $$ or alternatively $$ \frac 12 \big[\varphi(U_0)_z\big]^2(z)+\int _{\alpha_-} ^{U_0(z)} f(s) \varphi'(s) ds=0\,. $$ Hence, \begin{equation}\label{intrinsic} {\varphi(U_0)_z}(z)= \sqrt 2 \sqrt {W(U_0(z))}\,, \end{equation} where $W$ is given by \begin{align}\label{eq:28} W(u) = - \int ^{u} _{\alpha_-} f(s) \varphi'(s) ds = \int _{u} ^{\alpha_+} f(s) \varphi'(s) ds, \end{align} the last equality holding by \eqref{cond_fphi_equipotential}. It follows that $$ \int_{{\mathbb{ R}}} {\varphi(U_0)}_z{U_{0z}}(z) dz= \sqrt 2 \int_{{\mathbb{ R}}} \sqrt{W(U_0(z))} U_{0z}(z) dz $$ so that also $$ \int _{{\mathbb{ R}}} {\varphi'(U_0)} U_{0z}^2(z)dz= \sqrt 2 \int _{\alpha_-} ^{\alpha_+} \sqrt{W(u)}du. $$ Similarly, since $$ \int_{\mathbb{ R}} (\varphi'(U_0) U_{0z})^2dz = \sqrt 2 \int_{\mathbb{ R}} (\varphi'(U_0) \sqrt{W(U_0(z))} U_{0z}) dz, $$ we get \begin{align}\label{eq:29} \int_{\mathbb{ R}} (\varphi'(U_0) U_{0z})^2dz = \sqrt 2 \int _{\alpha_-} ^{\alpha_+} \varphi'(u) \sqrt{W(u)} du, \end{align} so that we finally obtain the formula \begin{align} \label{second lambda_0} \lambda_0 = \frac {\int _{\alpha_-} ^{\alpha_+} \varphi'(u) \sqrt{W(u)} du}{\int _{\alpha_-} ^{\alpha_+} \sqrt{W(u)}du}. \end{align} Note that if $\varphi(u)=u$, the case of the linear diffusion Allen-Cahn equation, we recover the value $\lambda_0 =1$ as expected.
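The explicit formula \eqref{second lambda_0} is straightforward to evaluate by quadrature. The following sketch (our illustration; the pair $f(u) = u - u^3$, $\varphi(u) = u + u^3/3$ is a hypothetical choice satisfying \eqref{cond_fphi_equipotential}, since $\varphi'$ is even and $f$ is odd) computes $\lambda_0$ as the $\sqrt{W}$-weighted mean of $\varphi'$, which therefore lies strictly between $\min \varphi' = 1$, the linear-diffusion value, and $\max \varphi' = 2$ on $[\alpha_-,\alpha_+] = [-1,1]$.

```python
import numpy as np

# Evaluate lambda_0 = int phi' sqrt(W) / int sqrt(W) over (alpha_-, alpha_+)
# with W(u) = -int_{alpha_-}^{u} f(s) phi'(s) ds, for the hypothetical pair
# f(u) = u - u^3, phi'(u) = 1 + u^2 (so alpha_pm = +-1).
f = lambda s: s - s**3
phi_prime = lambda s: 1 + s**2

n = 200000
h = 2.0 / n
u = -1.0 + (np.arange(n) + 0.5) * h          # midpoints of (-1, 1)
W = -np.cumsum(f(u) * phi_prime(u)) * h      # running quadrature for W(u)
W = np.maximum(W, 0.0)                       # clip round-off below zero near +-1
sqrtW = np.sqrt(W)

lam0 = np.sum(phi_prime(u) * sqrtW) / np.sum(sqrtW)

# a sqrt(W)-weighted mean of phi' must lie strictly in (min phi', max phi')
assert 1.0 < lam0 < 2.0
```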
\section{Generation of the interface}\label{section_3} In this section, we prove Theorem \ref{Thm_Generation} on the generation of the interface. The main idea, based on the comparison principle Lemma \ref{lem_comparison}, is to construct suitable sub- and super-solutions. The proof of Theorem \ref{Thm_Generation} is given in Section \ref{proof_subsec_thm_gen}. \subsection{Comparison principle} \begin{lem}\label{lem_comparison} Let $v \in C^{2,1} (\overline{D} \times {\mathbb{ R}}^+)$ satisfy \begin{align*} (P)~ \begin{cases} v_t \geq \Delta \varphi(v) + \displaystyle{ \frac{1}{\varepsilon^2}} f(v) &\mbox{ in } D \times \mathbb{R}^+\\ { \dfrac{\partial \varphi(v)}{\partial \nu} } = 0 &\mbox{ in } \partial D \times \mathbb{R}^+\\ v(x,0) \geq u_0(x) &\text{ for } x \in D. \end{cases} \end{align*} Then, $v$ is a super-solution of Problem $(P^\varepsilon)$ and we have \begin{align*} v(x,t) \geq u^\varepsilon(x,t) ,~~ (x,t) \in D \times {\mathbb{ R}}^+. \end{align*} If $v$ satisfies the opposite inequalities in Problem $(P)$, then $v$ is a sub-solution of Problem $(P^\varepsilon)$ and we have \begin{align*} v(x,t) \leq u^\varepsilon(x,t) ,~~ (x,t) \in D \times {\mathbb{ R}}^+. \end{align*} \end{lem} \begin{proof} {Consider the inequality satisfied for the difference of a super-solution $v$ and a solution $u^\varepsilon$. Apply the maximum principle to the function $w : = v - u^\varepsilon$ to see that it is positive.} \end{proof} \subsection{Solution of the corresponding ordinary differential equation} In the first stage of development, we expect that the solution behaves as that of the corresponding ordinary differential equation: \begin{align}\label{eqn_generation_ODE} \begin{cases} Y_\tau(\tau, \zeta) = f(Y(\tau,\zeta)) & \tau > 0\\ Y(0,\zeta) = \zeta & \zeta \in \mathbb{R}. \end{cases} \end{align} We deduce the following result from \cite{AHM2008}. \begin{lem}\label{Lem_Generation_Matthieu} Let $\eta \in (0, \eta_0)$ be arbitrary. 
Then, there exists a positive constant $C_Y = C_Y(\eta)$ such that the following holds: \begin{enumerate}[label =(\roman*)] \item There exists a positive constant $\overline{\mu}$ such that for all $\tau > 0$ and all $\zeta \in (-2C_0, 2C_0)$, \begin{align}\label{lem_gen_1} e^{- \overline{\mu} \tau} \leq Y_\zeta(\tau,\zeta) \leq C_Y e^{\mu \tau}. \end{align} \item For all $\tau > 0$ and all $\zeta \in (-2C_0, 2C_0)$, $$ \left| \frac{Y_{\zeta \zeta}(\tau, \zeta)}{Y_\zeta(\tau, \zeta)} \right| \leq C_Y (e^{\mu \tau} - 1). $$ \item There exists a positive constant $\varepsilon_0$ such that, for all $\varepsilon \in (0, \varepsilon_0)$, we have \begin{enumerate} \item for all $\zeta \in (-2C_0, 2C_0)$ \begin{align}\label{Lem_Generation_i} \alpha_- - \eta \leq Y(\mu^{-1} |\ln \varepsilon|, \zeta) \leq \alpha_+ + \eta; \end{align} \item if $\zeta \geq \alpha + C_Y \varepsilon$, then \begin{align}\label{Lem_Generation_ii} Y(\mu^{-1} |\ln \varepsilon|, \zeta) \geq \alpha_+ - \eta; \end{align} \item if $\zeta \leq \alpha - C_Y \varepsilon$, then \begin{align*} Y(\mu^{-1} |\ln \varepsilon|, \zeta) \leq \alpha_- + \eta. \end{align*} \end{enumerate} \end{enumerate} \end{lem} \begin{proof} These results can be found in Lemma 4.7 and Lemma 3.7 of \cite{AHM2008}, except for \eqref{lem_gen_1}. To prove \eqref{lem_gen_1}, we follow computations similar to those in Lemma 3.2 of \cite{AHM2008}. Differentiating \eqref{eqn_generation_ODE} by $\zeta$, we obtain \begin{align*} \begin{cases} Y_{\zeta \tau}(\tau, \zeta) = f'(Y(\tau,\zeta))Y_\zeta, & \tau > 0\\ Y_\zeta(0,\zeta) = 1, & ~ \end{cases} \end{align*} which yields the following equality, \begin{align}\label{lem_gen_2} Y_\zeta(\tau, \zeta) = \exp \left[ \int_0^\tau f'(Y(s,\zeta))\, ds \right]. \end{align} Hence, for $\zeta = \alpha$, \begin{align*} Y_\zeta(\tau, \alpha) = \exp \left[ \int_0^\tau f'(Y(s,\alpha))\, ds \right] = e^ { \mu \tau }, \end{align*} where the last equality follows since $Y(\tau,\alpha) = \alpha$.
Also, for $\zeta = \alpha_\pm$, by \eqref{cond_f_bistable}, we have \begin{align*} Y_\zeta(\tau, \alpha_\pm) \leq e^ { \mu \tau }. \end{align*} For $\zeta \in (\alpha_- + \eta, \alpha_+ - \eta) \setminus \{\alpha\}$, Lemma 3.4 of \cite{AHM2008} guarantees the upper bound of $Y_\zeta$ in \eqref{lem_gen_1}. We only need to consider the case that $\zeta \in (-2C_0, 2C_0 ) \setminus (\alpha_- + \eta, \alpha_+ - \eta)$. It follows from \eqref{cond_f_bistable} that we can choose positive constants $\eta$ and $\overline{\eta}$ such that \begin{align}\label{lem_gen_4} f'(s) < 0 ~, s \in I, \end{align} where $I := (\alpha_- - \overline{\eta}, \alpha_- + \eta) \cup (\alpha_+ - \eta, \alpha_+ + \overline{\eta})$. Moreover, \eqref{cond_f_bistable} and \eqref{cond_f_tech} imply \begin{align}\label{lem_gen_5} Y(\tau,\zeta) \in J, \end{align} for $\zeta \in J$ where $J := (\min\{- 2C_0, \alpha_- - \overline{\eta}\}, \alpha_- + \eta) \cup (\alpha_+ - \eta, \max\{ 2C_0, \alpha_+ + \overline{\eta}\})$. Thus, \eqref{lem_gen_2}, \eqref{lem_gen_4} and \eqref{lem_gen_5} guarantee the upper bound of \eqref{lem_gen_1} for $\zeta \in I$, which leaves us only the case $\zeta \in (- 2C_0, 2C_0) \setminus I.$ We consider now the case $\zeta \in (\alpha_+ + \overline{\eta}, 2 C_0)$; the case of $\zeta \in (-2C_0, \alpha_- - \overline{\eta})$ can be analysed in a similar way. By (3.13) in \cite{AHM2008}, we have \begin{align}\label{lem_gen_3} \ln Y_\zeta(\tau, \zeta) = f'(\alpha_+) \tau + \int_\zeta^{Y(\tau,\zeta)} \tilde{f}(s) ds ,~ {\rm \ where \ } \tilde{f}(s) = \dfrac{f'(s) - f'(\alpha_+)}{f(s)}. \end{align} Note that $\tilde{f}(s) \to \dfrac{f''(\alpha_+)}{f'(\alpha_+)}$ as $s \to \alpha_+$, so that $\tilde{f}$ may be extended as a continuous function. We define \begin{align*} \tilde{F} := \Vert \tilde{f} \Vert_{L^{\infty} (\alpha_+, \max\{ 2C_0, \alpha_+ + \overline{\eta}\})}.
\end{align*} Since \eqref{cond_f_tech} yields $Y(\tau, \zeta) > \alpha_+$ for $\zeta \in (\alpha_+ + \overline{\eta}, 2 C_0)$, by \eqref{lem_gen_3} we can find a constant $C_Y$ large enough such that \begin{align*} Y_\zeta(\tau,\zeta) \leq C_Y e^{f'(\alpha_+) \tau} \leq C_Y e^{\mu \tau}. \end{align*} Thus, we obtain the upper bound of \eqref{lem_gen_1}. \newline For the lower bound, we first define \begin{align*} \overline{\mu} := - \min_{s \in I'} f'(s),~ I' = [- 2C_0, 2C_0] \cup [\alpha_-, \alpha_+]. \end{align*} Note that $\overline{\mu} > 0$ by \eqref{cond_f_bistable}. Thus, by \eqref{lem_gen_2}, we obtain \begin{align*} Y_\zeta(\tau, \zeta) \geq e^{- \overline{\mu} \tau}. \end{align*} \end{proof} \subsection{Construction of sub- and super-solutions} We now construct sub- and super-solutions for the proof of Theorem \ref{Thm_Generation}. For simplicity, we first consider the case where \begin{align}\label{cond_subsuper_Neumann} \frac{\partial u_0}{\partial \nu} = 0 \text{ on } \partial D. \end{align} In this case, we define sub- and super-solutions as follows: \begin{equation*} w^{\pm}_\varepsilon(x,t) = Y \left( \frac{t}{\varepsilon^2}, u_0(x) \pm \varepsilon^2 C_2 \left( e^{\mu t/\varepsilon^2} - 1 \right) \right)\\ = Y \left( \frac{t}{\varepsilon^2}, u_0(x) \pm P(t) \right) \end{equation*} for some constant $C_2$. In the general case, where (\ref{cond_subsuper_Neumann}) does not necessarily hold, we need to modify $w^{\pm}_\varepsilon$ near the boundary $\partial D$. This will be discussed later in the proof of Theorem \ref{Thm_Generation}; see after equation \eqref{eqn_proofofgeneration}. \begin{lem}\label{Lem_generation_with_homo_Neumann} Assume (\ref{cond_subsuper_Neumann}).
Then, there exist positive constants $\varepsilon_0$ and $C_2, \overline{C}_2$ independent of $\varepsilon$ such that, for all $\varepsilon \in (0,\varepsilon_0)$, $w^{\pm}_\varepsilon$ satisfies \begin{align}\label{eqn_Gen_subsuper} \begin{cases} \mathcal{L} (w^-_\varepsilon) < - \overline{C}_2 e^{-\frac{\overline{\mu} t}{\varepsilon^2}} < \overline{C}_2 e^{-\frac{\overline{\mu} t}{\varepsilon^2}} < \mathcal{L}(w^+_\varepsilon) & \text{ in } \overline{D} \times [0,t^\varepsilon] \\ \displaystyle{\frac{\partial w^-_\varepsilon}{\partial \nu} = \frac{\partial w^+_\varepsilon}{\partial \nu}} = 0 & \text{ on } \partial D \times [0,t^\varepsilon]. \end{cases} \end{align} \end{lem} \begin{proof} We only prove that $w^{+}_\varepsilon$ is the desired super-solution; the case for $w^-_\varepsilon$ can be treated in a similar way. The assumption (\ref{cond_subsuper_Neumann}) implies $$ \frac{\partial w^\pm_\varepsilon}{\partial \nu} = 0 \text{ on } \partial D \times \mathbb{R}^+ $$ Define the operator $\mathcal{L}$ by $$ \mathcal{L} u = u_t - \Delta \varphi(u) - \frac{1}{\varepsilon^2} f(u). $$ Then, direct computation with $\tau = t/\varepsilon^2$ gives \begin{align*} \mathcal{L}(w^+_\varepsilon) &= \frac{1}{\varepsilon^2} Y_\tau + P'(t) Y_\zeta - \left( \varphi''(w^+_\varepsilon) | \nabla u_0|^2 (Y_\zeta)^2 + \varphi'(w^+_\varepsilon) \Delta u_0 Y_\zeta + \varphi'(w^+_\varepsilon) |\nabla u_0|^2 Y_{\zeta\zeta} + \frac{1}{\varepsilon^2} f(Y) \right)\\ &= \frac{1}{\varepsilon^2} ( Y_\tau - f(Y)) + Y_{\zeta} \left( P'(t) - \left( \varphi''(w^+_\varepsilon) | \nabla u_0|^2 Y_\zeta + \varphi'(w^+_\varepsilon) \Delta u_0 + \varphi'(w^+_\varepsilon) |\nabla u_0|^2 \frac{Y_{\zeta\zeta}}{Y_\zeta} \right) \right). \end{align*} By the definition of $Y$, the first term on the right-hand-side vanishes. 
By choosing $\varepsilon_0$ sufficiently small, for $0 \leq t \leq t^\varepsilon$, we have $$ P(t) \leq P(t^\varepsilon) = \varepsilon^2 C_2(e^{\mu t^\varepsilon/\varepsilon^2} - 1) = \varepsilon^2 C_2(\varepsilon^{-1} - 1) < C_0, $$ where we used that $e^{\mu t^\varepsilon/\varepsilon^2} = \varepsilon^{-1}$, that is, $t^\varepsilon = \mu^{-1} \varepsilon^2 |\ln \varepsilon|$. Hence, $|u_0 + P(t)| < 2C_0$. Applying Lemma \ref{Lem_Generation_Matthieu}, \eqref{cond_C0} and \eqref{cond_C1} gives \begin{align*} \mathcal{L} w^+_\varepsilon &\geq Y_\zeta \left( C_2 \mu e^{\mu t /\varepsilon^2} - ( C_0^2 C_1 C_Y e^{\mu t / \varepsilon^2} + C_0 C_1 + C_0^2 C_1 C_Y (e^{\mu t / \varepsilon^2} - 1)) \right)\\ &= Y_\zeta \left( (C_2 \mu - C_0^2 C_1 C_Y - C_0^2 C_1 C_Y)e^{\mu t / \varepsilon^2} + C_0^2 C_1C_Y - C_0 C_1 \right). \end{align*} By \eqref{lem_gen_1}, for $C_2$ large enough, we can find a positive constant $\overline{C}_2$ independent of $\varepsilon$ such that $$ \mathcal{L} w^+_\varepsilon \geq \overline{C}_2 e^{-\frac{\overline{\mu} t}{\varepsilon^2}}. $$ Thus, $w^+_\varepsilon$ is a super-solution for Problem $(P^\varepsilon)$. \end{proof} \subsection{Proof of Theorem \ref{Thm_Generation}} \label{proof_subsec_thm_gen} We deduce from the comparison principle Lemma \ref{lem_comparison} and the construction of the sub- and super-solutions that \begin{align}\label{eqn_proofofgeneration} w^-_\varepsilon(x,t^\varepsilon) \leq u^\varepsilon(x,t^\varepsilon) \leq w^+_\varepsilon(x,t^\varepsilon) \end{align} under the condition (\ref{cond_subsuper_Neumann}). If \eqref{cond_subsuper_Neumann} does not hold, one can modify the functions $w^\pm$ as follows: from condition (\ref{cond_u0_inout}), there exist positive constants $d_0$ and $\rho$ such that (i) the distance function $d(x,\partial D)$ is smooth enough on $\{ x \in D : d(x,\partial D) < 2 d_0 \}$ and (ii) $u_0(x) \geq \alpha + \rho$ if $d(x, \partial D) \leq d_0$. Let $\xi$ be a smooth cut-off function defined on $[0,+\infty)$ such that $0 \leq \xi \leq 1, \xi(0) = \xi'(0) = 0$ and $\xi(z) = 1$ for $z \geq d_0$.
Define \begin{align*} u_0^+ &:= \xi(d(x,\partial D)) u_0(x) +\left[ 1 - \xi(d(x, \partial D)) \right] \max_{\overline{D}} u_0 \\ u_0^- &:= \xi(d(x,\partial D)) u_0(x) +\left[ 1 - \xi(d(x, \partial D)) \right] (\alpha + \rho). \end{align*} Then, $u_0^- \leq u_0 \leq u_0^+$ and $u_0^\pm$ satisfy the homogeneous Neumann boundary condition \eqref{cond_subsuper_Neumann}. Thus, by using a similar argument as in the proof of Lemma \ref{Lem_generation_with_homo_Neumann}, we may find sub- and super-solutions as follows, \begin{align*} w^{\pm}_\varepsilon(x,t) = Y \left( \frac{t}{\varepsilon^2}, u_0^\pm(x) \pm \varepsilon^2 C_2 \left( e^{\mu t/\varepsilon^2} - 1 \right) \right). \end{align*} We now show \eqref{Thm_generation_i}, \eqref{Thm_generation_ii} and \eqref{Thm_generation_iii}. By the definition of $C_0$ in (\ref{cond_C0}), we have \begin{align*} -C_0 \leq \min_{x \in \overline{D}} u_0(x) < \alpha + \rho. \end{align*} Thus, for $\varepsilon_0$ small enough, we have that $$ - 2 C_0 \leq u^\pm_0(x) \pm (C_2 \varepsilon - C_2 \varepsilon^2) \leq 2 C_0 ~~~ \text{ for } x \in D $$ holds for any $\varepsilon \in (0, \varepsilon_0)$. Thus, the assertion (\ref{Thm_generation_i}) is a direct consequence of (\ref{Lem_Generation_i}) and (\ref{eqn_proofofgeneration}). For (\ref{Thm_generation_ii}), first we choose $M_0$ large enough so that $M_0 \varepsilon - C_2 \varepsilon + C_2 \varepsilon^2 \geq C_Y \varepsilon$. Then, for any $x \in D$ such that $u^-_0(x) \geq \alpha + M_0 \varepsilon$, we have $$ u_0^-(x) - \varepsilon^2 C_2 \left( e^{\mu t/\varepsilon^2} - 1 \right)\geq u^-_0(x) - (C_2 \varepsilon - C_2 \varepsilon^2) \geq \alpha + M_0 \varepsilon - C_2 \varepsilon + C_2 \varepsilon^2 \geq \alpha + C_Y \varepsilon. 
$$ Therefore, with (\ref{Lem_Generation_ii}) and (\ref{eqn_proofofgeneration}), we see that $$ u^\varepsilon(x,t^\varepsilon) \geq \alpha_+ - \eta $$ \noindent for any $x \in D$ such that $u^-_0(x) \geq \alpha + M_0 \varepsilon$, which implies (\ref{Thm_generation_ii}). Note that (\ref{Thm_generation_iii}) can be shown in the same way. This completes the proof of Theorem \ref{Thm_Generation}. \qed \section{Propagation of the interface}\label{section_4} The main idea of the proof of Theorem \ref{Thm_Propagation} is to interlock the generation and propagation stages: by the comparison principle Lemma \ref{lem_comparison}, we show at the generation time that $u^+(x,0) \geq w^+(x, t^\varepsilon)$ and $u^-(x,0) \leq w^-(x,t^\varepsilon)$, so that we can pass continuously from the generation-of-interface sub- and super-solutions to the propagation-of-interface sub- and super-solutions. To this end, we first introduce a modified signed distance function, and several estimates on the functions $U_0$ and $U_1$ useful in the sub- and super-solution construction, before showing Theorem \ref{Thm_Propagation} in Section \ref{proof_thm_prop}. \subsection{A modified signed distance function} We introduce a useful cut-off signed distance function $d$ as follows. Recall the signed distance function $\overline{d}$ defined in \eqref{eqn_signed_dist}, and the interface $\Gamma_t$ satisfying \eqref{eqn_motioneqn}. Choose $d_0 > 0$ small enough so that the signed distance function $\overline{d}$ is smooth in the set $$ \{ (x,t) \in \overline{D} \times [0,T] , | \overline{d}(x,t) | < 3 d_0 \} $$ and that $$ \mathrm{dist}(\Gamma_t, \partial D) \geq 3 d_0 \text{ for all } t \in [0,T]. $$ Let $h(s)$ be a smooth non-decreasing function on $\mathbb{R}$ such that $$ h(s) = \begin{cases} s & \text{if}~ |s| \leq d_0\\ -2d_0 & \text{if}~ s \leq -2d_0\\ 2d_0 & \text{if}~ s \geq 2d_0.
\end{cases} $$ We then define the cut-off signed distance function $d$ by $$ d(x,t) = h(\overline{d}(x,t)), ~~~ (x,t) \in \overline{D} \times [0,T]. $$ Note, as $d$ coincides with $\overline{d}$ in the region $$ \{ (x,t) \in D \times [0,T] : | d(x,t)| < d_0 \}, $$ that we have \begin{align*} d_t = \lambda_0 \Delta d ~\text{ on }~ \Gamma_t. \end{align*} Moreover, $d$ is constant near $\partial D$ and the following properties hold. \begin{lem}\label{Lem_d_bound} There exists a constant $C_d > 0$ such that \begin{enumerate}[label = (\roman*)] \item $|d_t| + |\nabla d| + |\Delta d| \leq C_d$, \item $ \left| d_t - \lambda_0 \Delta d \right| \leq C_d |d| $ \end{enumerate} in $\overline{D} \times [0,T]$. \end{lem} \subsection{Estimates for the functions $U_0, U_1$} Here, we give estimates for the functions which will be used to construct the sub- and super-solutions. Recall that $U_0$ (cf. \eqref{eqn_AsymptExp_U0}) is a solution of the equation \begin{align*} (\varphi(U_0))_{zz} + f(U_0) = 0. \end{align*} We have the following lemma. \begin{lem}\label{Lem_U0_bound} There exist constants $\hat{C}_0, \lambda_1 > 0$ such that for all $z\in \mathbb{R}$, \begin{enumerate}[label = (\roman*)] \item $ |U_0| , ~ |U_{0z}| , ~ |U_{0zz}| \leq \hat{C}_0, $ \item $ |U_{0z}|, ~ |U_{0zz}| \leq \hat{C}_0 \exp(- \lambda_1 |z|). $ \end{enumerate} \end{lem} \begin{proof} Recall that $V_0 = \varphi(U_0)$ satisfies the equation (\ref{eqn_AsymptExp_V0}) with $\varphi \in C^4({\mathbb{ R}})$. Lemma 2.1 of \cite{AHM2008} implies that there exist some positive constants $\overline{C}_0$ and $\lambda_1$ such that, for all $z\in \mathbb{R}$, \begin{align*} &|V_0| ,~ |V_{0z}| ,~ |V_{0zz}| \leq \overline{C}_0; \\ &|V_{0z}| ,~ |V_{0zz}| \leq \overline{C}_0 \exp(- \lambda_1 |z|), \end{align*} and therefore similar bounds hold for $U_0$.
\end{proof} In terms of the cut-off signed distance function $d=d(x,t)$, for each $(x,t) \in \overline{D}\times [0,T]$, we define $U_1(x,t,\cdot) : {\mathbb{ R}} \rightarrow {\mathbb{ R}}$ as the solution of the following equation: \begin{align}\label{eqn_U1_bar} \begin{cases} (\varphi'(U_0) U_1)_{zz} + f'(U_0)U_1 = (\lambda_0 U_{0z} - (\varphi(U_0))_z) \Delta d\\ U_1(x,t,0) = 0, ~~~ \varphi'(U_0) U_1 \in L^\infty(\mathbb{R}). \end{cases} \end{align} Existence of the solution $U_1$ can be shown in the same way as that for $\overline{U_1}$ in \eqref{eqn_AsymptExp_U1}. Finally, we give the following estimates for $U_1=U_1(x,t,z)$. \begin{lem}\label{Lem_U1_bound} There exist constants $\hat{C}_1, \lambda_1 > 0$ such that for all $z \in \mathbb{R}$ \begin{enumerate}[label = (\roman*)] \item $ |U_1| ,~ |{U_1}_z| ,~ |{U_1}_{zz}| ,~ |\nabla {U_1}_z| ,~ |\nabla {U_1}| ,~ |\Delta{U_1}| ,~ |U_{1t}| \leq \hat{C}_1, $ \item $ |{U_1}_z| ,~ |{U_1}_{zz}|,~ |\nabla {U_1}_z| \leq \hat{C}_1 \exp(- \lambda_1 |z|). $ \end{enumerate} Here, the operators $\nabla$ and $\Delta$ act on the variable $x$. \end{lem} \begin{proof} Define $V_1(x,t,z) := \varphi'(U_0(z))\, U_1(x,t,z)$. As in (\ref{eqn_AsymptExp_U1}), we obtain an equation for $V_1$: \begin{align}\label{eqn_AsymptExp_V1} \begin{cases} V_{1zz} + g'(V_0)V_1 = \Big[ \lambda_0 \displaystyle{\frac{V_{0z}}{\varphi'(\varphi^{-1} (V_0) )} } - V_{0z} \Big] \Delta d \\ V_1(x,t,0) = 0, ~~~ V_1 \in L^\infty(\mathbb{R}). \end{cases} \end{align} Applying Lemmas 2.2 and 2.3 of \cite{AHM2008} to (\ref{eqn_AsymptExp_V1}) implies the boundedness of $V_1, V_{1z}, V_{1zz}$. Moreover, since $d$ is smooth in $\overline{D} \times [0,T]$, we can apply Lemma 2.2 of \cite{AHM2008} to obtain the boundedness of $\nabla V_1, \Delta V_1$. The desired estimates for the function $U_1$ now follow via the smoothness of $\varphi$ as in the proof of Lemma \ref{Lem_U0_bound}.
\end{proof} \subsection{Construction of sub- and super-solutions} We construct candidate sub- and super-solutions as follows: Given $\varepsilon > 0$, define \begin{align} \label{star0} u^\pm(x,t) = U_0 \left( \frac{d(x,t) \pm \varepsilon p(t)}{\varepsilon} \right) + \varepsilon U_1 \left(x,t, \frac{d(x,t) \pm \varepsilon p(t)}{\varepsilon} \right) \pm q(t) \end{align} where \begin{align*} & p(t) = - e^{- \beta t/ \varepsilon^2} + e^{Lt} + K,\\ &q(t) = \sigma \left( \beta e^{- \beta t / \varepsilon^2} + \varepsilon^2 L e^{Lt} \right), \end{align*} and $\beta, \sigma, L, K$ are positive constants. Next, we give specific conditions for these constants which will be used to show that indeed $u^\pm$ are sub- and super-solutions. We assume that the positive constant $\varepsilon_0$ obeys \begin{align}\label{eqn_cond_elc} \varepsilon_0^2 L e^{LT} \leq 1, ~~~ \varepsilon_0\hat{C}_1 \leq \frac{1}{2}. \end{align} We first give a result on the sign of $f'(U_0(z)) + (\varphi'(U_0))_{zz}$. \begin{lem}\label{Lem_f'+phi'_bound} There exists $b > 0$ such that $f'(U_0(z)) + (\varphi'(U_0))_{zz} < 0$ on $\{ z : U_0(z) \in [\alpha_-,~\alpha_- + b] \cup [\alpha_+ - b ,~\alpha_+]\}$. \end{lem} \begin{proof} We can choose $b_1, \mathcal{F} > 0$ such that \begin{align*} f'(U_0(z)) < - \mathcal{F} \end{align*} on $\{ z : U_0(z) \in [\alpha_-,~\alpha_- + b_1] \cup [\alpha_+ - b_1 ,~\alpha_+]\}$. Note that $(\varphi'(U_0))_{zz} = \varphi'''(U_0) U^2_{0z} + \varphi''(U_0) U_{0zz}$. From Lemma \ref{Lem_U0_bound}, we can choose $b_2 > 0$ small enough so that \begin{align*} | (\varphi'(U_0))_z | < \mathcal{F},~~~| (\varphi'(U_0))_{zz} | < \mathcal{F} \end{align*} on $\{ z : U_0(z) \in [\alpha_-,~\alpha_- + b_2] \cup [\alpha_+ - b_2 ,~\alpha_+]\}$. Define $b := \min \{b_1, b_2 \}$. Then, we have \begin{align*} f'(U_0(z)) + (\varphi'(U_0))_{zz} < - \mathcal{F} + \mathcal{F} = 0.
\end{align*} \end{proof} Fix $b > 0$ which satisfies the result of Lemma \ref{Lem_f'+phi'_bound}. Denote $ J_1 := \{ z : U_0(z) \in [\alpha_-,~\alpha_- + b] \cup [\alpha_+ - b ,~\alpha_+]\}, J_2 = \{ z : U_0(z) \in [\alpha_- + b,~\alpha_+ - b]\}$. Let \begin{align}\label{cond_beta} \beta := - \sup \left\{ \frac{f'(U_0(z)) + (\varphi'(U_0(z)))_{zz}}{3} : z \in J_1 \right\}. \end{align} The following result plays an important role in verifying sub- and super-solution properties. \begin{lem}\label{Lem_E3_bound} There exists a constant $\sigma_0$ small enough such that for every $0 < \sigma < \sigma_0$, we have $$ U_{0z} - \sigma (f'(U_0) + (\varphi'(U_0))_{zz}) \geq 3 \sigma \beta. $$ \end{lem} \begin{proof} To show the assertion, it is sufficient to show that there exists $\sigma_0$ such that, for all $0 < \sigma < \sigma_0$, \begin{align}\label{lem_E3_1} \frac{U_{0z}}{\sigma} - \left( f'(U_0) + (\varphi'(U_0))_{zz} \right) \geq 3 \beta. \end{align} We prove the result on each of the sets $J_1, J_2$. First, note that $U_{0z} > 0$ on $\mathbb{R}$. Hence, if $z \in J_1$, for any $\sigma > 0$ we have $$ \frac{U_{0z}}{\sigma} - \left( f'(U_0) + (\varphi'(U_0))_{zz} \right) > - \sup_{z \in J_1} ( f'(U_0) + (\varphi'(U_0))_{zz} ) = 3 \beta. $$ On the set $J_2$, which is compact in $\mathbb{R}$, there exist positive constants $c_1, c_2$ such that \begin{align*} U_{0z} \geq c_1 ,~~ | f'(U_0) + (\varphi'(U_0))_{zz} | \leq c_2. \end{align*} Therefore, we have \begin{align*} \frac{U_{0z}}{\sigma} - \left( f'(U_0) + (\varphi'(U_0))_{zz} \right) \geq \dfrac{c_1}{\sigma} - c_2 \rightarrow \infty ~\text{as}~ \sigma \downarrow 0, \end{align*} implying \eqref{lem_E3_1} on $J_2$ for $\sigma$ small enough. \end{proof} Before we give the rigorous proof that $u^\pm$ are sub- and super-solutions, we first give detailed computations needed in the sequel. Recall \eqref{star0}.
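For later use in the estimates of the terms $E_3$ and $E_4$ below, we record the elementary identity relating $p$ and $q$, which follows directly from their definitions: \begin{align*} p_t(t) = \frac{\beta}{\varepsilon^2} e^{- \beta t / \varepsilon^2} + L e^{Lt} = \frac{1}{\varepsilon^2 \sigma}\, \sigma \left( \beta e^{- \beta t / \varepsilon^2} + \varepsilon^2 L e^{Lt} \right) = \frac{q(t)}{\varepsilon^2 \sigma}. \end{align*}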
First, note, with $U_0$ and $U_1$ corresponding to $u^+$, that \begin{align} \varphi(u^+) &= \varphi(U_0) + (\varepsilon {U_1} + q) \varphi'(U_0) + (\varepsilon {U_1} + q)^2 \int_0^1 (1 - s) \varphi''( U_0 + ( \varepsilon {U_1} + q )s ) ds\nonumber\\ f(u^+) &= f(U_0) + (\varepsilon {U_1} + q) f'(U_0) + \frac{(\varepsilon {U_1} + q)^2 }{2} f''(\theta(x,t)), \label{star1} \end{align} where $\theta$ is a function satisfying $\theta(x,t) \in \left(U_0, U_0 + \varepsilon {U_1} + q(t)\right)$. Straightforward computations yield \begin{align} (u^+)_t &= U_{0z} \left( \frac{d_t + \varepsilon p_t}{\varepsilon} \right) + \varepsilon {U_1}_t + {U_1}_z ( d_t + \varepsilon p_t ) + q_t \nonumber\\ \Delta \varphi(u^+) &= \nabla \cdot \left( ( \varphi(U_0) )_z \frac{\nabla d}{\varepsilon} + {U_1}_z \varphi'(U_0) \nabla d + \varepsilon \nabla {U_1} \varphi'(U_0) + (\varepsilon {U_1} + q)(\varphi'(U_0))_z \frac{\nabla d}{\varepsilon} + \nabla R \right) \nonumber \\ &= ( \varphi(U_0) )_{zz} \frac{|\nabla d|^2}{\varepsilon^2} + (\varphi(U_0))_z \frac{\Delta d}{\varepsilon} \nonumber \\ &+ ({U_1}_z \varphi'(U_0))_z \frac{| \nabla d|^2}{\varepsilon} + {U_1}_z \varphi'(U_0) \Delta d + 2 \nabla {U_1}_z \varphi'(U_0) \cdot \nabla d + \nabla {U_1} (\varphi'(U_0))_z \cdot \nabla d + \varepsilon \Delta {U_1} \varphi'(U_0) \nonumber \\ &+({U_1} \varphi'(U_0)_z)_z \frac{|\nabla d|^2}{\varepsilon} + q (\varphi'(U_0))_{zz} \frac{|\nabla d|^2}{\varepsilon^2} + \nabla {U_1} (\varphi'(U_0))_z \cdot \nabla d \nonumber\\ &+ (\varepsilon {U_1} + q) (\varphi'(U_0))_z \frac{ \Delta d }{\varepsilon} + \Delta R \label{star2} \end{align} where $R(x,t) = (\varepsilon {U_1} + q)^2 \int_0^1 (1 - s) \varphi''( U_0 + ( \varepsilon U_1 + q )s ) ds$. Define $r(x,t) = \int_0^1 (1 - s) \varphi''( U_0 + ( \varepsilon {U_1} + q )s ) ds$. 
Then, we have \begin{align} \Delta R(x,t) &= \nabla \cdot \nabla \Big{[} \Big{(} (\varepsilon {U_1})^2 + 2 \varepsilon q {U_1} + q^2 \Big{)}r \Big{]} \nonumber\\ &= \nabla \cdot \Big{[} \Big{(} 2 \varepsilon {U_1} \left( {U_{1}}_z \nabla d + \varepsilon \nabla {U_1} \right) + 2 q \left( {U_{1}}_z \nabla d + \varepsilon \nabla {U_1} \right) \Big{)} r(x,t) + \Big{(} (\varepsilon{U_1})^2 + 2 \varepsilon q {U_1} + q^2 \Big{)} \nabla r(x,t) \Big{]} \nonumber\\ &= \left[ 2\left( {U_1}_z \nabla d + \varepsilon \nabla {U_1} \right)^2 + 2 \varepsilon {U_1} \left( U_{1zz} \frac{| \nabla d |^2 }{\varepsilon} + {U_1}_z \Delta d + 2\nabla {U_1}_z \cdot \nabla d + \varepsilon \Delta {U_1} \right) \right] r(x,t) \nonumber\\ & + 2q \left( U_{1zz} \frac{| \nabla d |^2 }{\varepsilon} + {U_1}_z \Delta d + 2\nabla {U_1}_z \cdot \nabla d + \varepsilon \Delta {U_1} \right) r(x,t) \nonumber\\ & + 2 \Big{[} 2 \varepsilon {U_1} \left( {U_{1}}_z \nabla d + \varepsilon \nabla {U_1} \right) + 2 q \left( {U_{1}}_z \nabla d + \varepsilon \nabla {U_1} \right) \Big{]} \cdot \nabla r(x,t) \nonumber\\ & + \Big{(} (\varepsilon {U_1})^2 + 2 \varepsilon q {U_1} + q^2 \Big{)} \Delta r(x,t) \label{star3} \end{align} where \begin{align*} \nabla r(x,t) &= \int_0^1 (1 - s) \varphi'''( U_0 + ( \varepsilon {U_1} + q) s ) \left( \left( U_{0} + \varepsilon U_{1} s \right)_z \frac{\nabla d}{\varepsilon} + \varepsilon \nabla {U_1} s \right) ds \\ \Delta r(x,t) &= \int_0^1 (1 - s) \varphi'''( U_0 + ( \varepsilon {U_1} + q) s ) \left( (U_0 + \varepsilon {U_1} s)_{z} \frac{ \Delta d }{\varepsilon} \right. \\ &\left. + (U_0 + \varepsilon {U_1} s)_{zz} \frac{ | \nabla d |^2 }{\varepsilon^2} + ( 2 \nabla {U_1}_z \cdot \nabla d + \varepsilon \Delta {U_1})s \right) ds \\ &+ \int_0^1 (1 - s) \varphi^{(4)}( U_0 + ( \varepsilon {U_1} + q) s ) \left( (U_{0} + \varepsilon {U_{1}} s)_z \frac{\nabla d}{\varepsilon} + \varepsilon \nabla {U_1} s \right)^2 ds.
\end{align*} Define $l(x,t), r_i(x,t)$ for $i = 1,2,3$ as follows: \begin{align*} l(x,t) &= U_{1zz} \frac{| \nabla d |^2 }{\varepsilon} + {U_1}_z \Delta d + 2\nabla {U_1}_z \cdot \nabla d + \varepsilon \Delta {U_1} \\ r_1(x,t) &= \left[ 2\left( {U_1}_z \nabla d + \varepsilon \nabla {U_1} \right)^2 + 2 \varepsilon {U_1} l(x,t) \right] r(x,t) + 4 \varepsilon {U_1} \left( {U_{1}}_z \nabla d + \varepsilon \nabla {U_1} \right) \cdot \nabla r(x,t) + ( \varepsilon {U_1} )^2 \Delta r(x,t) \\ r_2(x,t) &= 2 q l(x,t) r(x,t) + 4 q \left( {U_{1}}_z \nabla d + \varepsilon \nabla {U_1} \right) \cdot \nabla r(x,t) + 2 \varepsilon q {U_1} \Delta r(x,t) \\ r_3(x,t) &= q^2 \Delta r(x,t). \end{align*} Thus, \begin{align} \label{star4} \Delta R=r_1+r_2+r_3. \end{align} We have the following properties for $r_i$. \begin{lem}\label{Lem_remiander_bound} There exists $C_r > 0$ independent of $\varepsilon$ such that \begin{eqnarray} \label{ineq_r} |r_1| \leq C_r, ~~~ |r_2| \leq \frac{q}{\varepsilon} C_r, ~~~ |r_3| \leq \frac{q^2}{\varepsilon^2} C_r. \end{eqnarray} \end{lem} \begin{proof} Note that, by Lemmas \ref{Lem_U0_bound}, \ref{Lem_U1_bound} and (\ref{eqn_cond_elc}), the term $U_a := U_0 + (\varepsilon {U_1} + q)s$ is uniformly bounded. Hence, the terms $\varphi''(U_a), \varphi'''(U_a), \varphi^{(4)}(U_a)$ are uniformly bounded, and in particular $r$ is bounded. By similar reasoning for $\nabla r$ and $\Delta r$, it follows that there exist positive constants $c_\nabla, c_\Delta$ such that $$ | \nabla r | \leq \frac{c_\nabla}{\varepsilon}, ~~~ | \Delta r | \leq \frac{c_\Delta}{\varepsilon^2}. $$ \noindent Moreover, by Lemmas \ref{Lem_U0_bound}, \ref{Lem_U1_bound} there exists a positive constant $c_l$ such that $$ |l(x,t)| \leq \frac{c_l}{\varepsilon}. $$ Combining these estimates yields \eqref{ineq_r}.
\end{proof} \noindent Let $\sigma$ be a fixed constant satisfying \begin{align}\label{cond_sigma} 0 < \sigma \leq \min \{ \sigma_0,\sigma_1,\sigma_2 \}, \end{align} where $\sigma_0$ is the constant defined in Lemma \ref{Lem_E3_bound}, and $\sigma_1$ and $\sigma_2$ are given by \begin{align}\label{cond_sigma2} \sigma_1 = \frac{1}{2(\beta + 1)}, ~~~ \sigma_2 = \frac{\beta}{( F + C_r) (\beta + 1)}, ~~~ F = ||f''||_{L^\infty(\alpha_- -1, \alpha_+ + 1)}. \end{align} Note that, since $\sigma \leq \sigma_1$ and by \eqref{eqn_cond_elc}, we have \begin{align*} \alpha_- - 1 \leq u^\pm \leq \alpha_+ + 1. \end{align*} \begin{lem}\label{Lem_Prop_subsuper} Let $\beta$ be given by (\ref{cond_beta}) and let $\sigma$ satisfy (\ref{cond_sigma}). Then, there exist $\varepsilon_0 > 0$ and a positive constant $C_p$, which does not depend on $\varepsilon$, such that \begin{align}\label{eqn_Prop_subsuper} \begin{cases} \mathcal{L} (u^-) < - C_p < C_p < \mathcal{L}(u^+) & \text{ in } \overline{D} \times [0,T] \\ \displaystyle{\frac{\partial u^-}{\partial \nu} = \frac{\partial u^+}{\partial \nu}} = 0 & \text{ on } \partial D \times [0,T] \end{cases} \end{align} for every $\varepsilon \in (0, \varepsilon_0)$. \end{lem} \begin{proof} In the following, we only show that $u^+$ is a super-solution; one can show that $u^-$ is a sub-solution in a similar way.
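Before estimating $\mathcal{L}u^+$, let us briefly indicate why the Neumann condition in \eqref{eqn_Prop_subsuper} holds. Since $\mathrm{dist}(\Gamma_t, \partial D) \geq 3 d_0$, the cut-off function $d$ is constant in a neighborhood of $\partial D$, so that $\nabla d = 0$ and $\Delta d = 0$ there; moreover, for $\Delta d = 0$ the problem \eqref{eqn_U1_bar} reduces to a homogeneous problem whose only bounded solution vanishing at $z = 0$ is trivial, so that $U_1(x,t,\cdot) \equiv 0$ for $x$ near $\partial D$. Consequently, \begin{align*} \frac{\partial u^\pm}{\partial \nu} = \left( \frac{1}{\varepsilon} U_{0z} + {U_1}_z \right) \frac{\partial d}{\partial \nu} + \varepsilon \nabla U_1 \cdot \nu = 0 ~~ \text{ on } \partial D \times [0,T]. \end{align*}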
Combining the computations above in \eqref{star1}, \eqref{star2}, \eqref{star3} and \eqref{star4}, we obtain \begin{align*} \mathcal{L}u^+ &= (u^+)_t - \Delta (\varphi(u^+)) - \frac{1}{\varepsilon^2} f(u^+) \\ &= E_1+E_2+E_3+E_4+E_5+E_6, \end{align*} where \begin{align*} E_1 &= - \frac{1}{\varepsilon^2} \left( (\varphi(U_0))_{zz} | \nabla d |^2 + f(U_0) \right) - \frac{| \nabla d |^2 - 1}{\varepsilon^2} q(\varphi'(U_0))_{zz} - \frac{| \nabla d |^2 - 1}{\varepsilon} ({U_1} \varphi'(U_0))_{zz} \\ E_2 &= \frac{1}{\varepsilon} U_{0z} d_t - \frac{1}{\varepsilon} \left( (\varphi(U_0))_z \Delta d + ({U_1}_z \varphi'(U_0))_z + ({U_1}\varphi'(U_0)_z)_z +{U_1} f'(U_0) \right) \\ E_3 &= [ U_{0z} p_t + q_t ] - \frac{1}{\varepsilon^2} \left[ q f'(U_0) + q (\varphi'(U_0))_{zz} + \frac{q^2}{2} f''(\theta) \right] - r_3(x,t) \\ E_4 &= \varepsilon {U_1}_z p_t - \frac{q}{\varepsilon} \Big{[} (\varphi'(U_0))_z \Delta d + {U_1} f''(\theta) \Big{]} - r_2(x,t) \\ E_5 &= \varepsilon {U_1}_t - \varepsilon \Delta {U_1} \varphi'(U_0) \\ E_6 &= {U_1}_z d_t - 2 \nabla {U_1}_z \varphi'(U_0) \cdot \nabla d - 2 \nabla {U_1} (\varphi'(U_0))_z \cdot \nabla d - ( {U_1} \varphi'(U_0))_z \Delta d - r_1(x,t) - \frac{({U_1})^2}{2} f''(\theta). \end{align*} \vskip .1cm {\it Estimate of the term $E_1$.} Using (\ref{eqn_AsymptExp_U0}) we write $E_1$ in the form $$ E_1 = -\frac{| \nabla d |^2 - 1}{\varepsilon^2} \big( (\varphi(U_0))_{zz} + q(\varphi'(U_0))_{zz} \big) -\frac{| \nabla d |^2 - 1}{\varepsilon} ({U_1}\varphi'(U_0))_{zz}. $$ We only consider the term $ e_1 := \displaystyle{\frac{| \nabla d |^2 - 1}{\varepsilon}} ({U_1}\varphi'(U_0))_{zz} $; the other terms can be bounded similarly. In the region where $|d| \leq d_0$, we have $| \nabla d | = 1$ so that $e_1 = 0$.
If, however, $|\nabla d| \neq 1$, then $|d| \geq d_0$, and we have $$ \frac{|({U_1}\varphi'(U_0))_{zz}|}{\varepsilon} \leq \frac{\hat{C}_1}{\varepsilon} e^{- \lambda_1 \left| \frac{d}{\varepsilon} + p(t) \right| } \leq \frac{\hat{C}_1}{\varepsilon} e^{- \lambda_1 \left[ \frac{d_0}{\varepsilon} - p(t) \right]} \leq \frac{\hat{C}_1}{\varepsilon} e^{- \lambda_1 \left[ \frac{d_0}{\varepsilon} - (1 + e^{LT} + K) \right]}. $$ Choosing $\varepsilon_0$ small enough such that $$ \frac{d_0}{2\varepsilon_0} - \Big( 1 + e^{LT} + K \Big) \geq 0, $$ we deduce $$ \frac{|({U_1} \varphi'(U_0))_{zz}|}{\varepsilon} \leq \frac{\hat{C}_1}{\varepsilon} e^{- \lambda_1 \frac{d_0}{2 \varepsilon}} \rightarrow 0 \text{ as } \varepsilon \downarrow 0. $$ Thus, $\tfrac{1}{\varepsilon} |({U_1} \varphi'(U_0))_{zz}|$ is uniformly bounded, so that there exists $\hat{C}_2$ independent of $\varepsilon, L $ such that $$ | e_1 | \leq \hat{C}_2. $$ Finally, as a consequence, we deduce that there exists $\tilde{C}_1$ independent of $\varepsilon, L $ such that \begin{align}\label{eqn_E1_bound} | E_1 | \leq \tilde{C}_1. \end{align} \vskip .1cm {\it Estimate of the term $E_2$.} Using (\ref{eqn_U1_bar}), we write $E_2$ in the form $$ E_2 = \frac{1}{\varepsilon} U_{0z} d_t - \frac{1}{\varepsilon} \lambda_0 U_{0z} \Delta d = \frac{U_{0z}}{\varepsilon} (d_t - \lambda_0 \Delta d). $$ Applying Lemmas \ref{Lem_d_bound}, \ref{Lem_U0_bound} and \ref{Lem_U1_bound} gives $$ |E_2| \leq C_d \hat{C}_0 \frac{|d|}{\varepsilon} e^{- \lambda _1 \left| \frac{d}{\varepsilon} + p \right| } \leq C_d \hat{C}_0 \max_{\xi \in \mathbb{R}} | \xi | e^{-\lambda_1 |\xi + p|}. $$ Note that $\max_{\xi \in \mathbb{R}} | \xi | e^{-\lambda_1 |\xi + p|} \leq |p| + \frac{1}{\lambda_1}$ (cf.\ \cite{Danielle2018}). Thus, since $|p(t)| \leq 1 + e^{Lt} + K$, there exists $\tilde{C}_2$ such that \begin{align}\label{eqn_E2_bound} |E_2| \leq \tilde{C}_2(1 + e^{Lt}).
\end{align} \vskip .1cm {\it Estimate of the term $E_3$.} Substituting $p_t = \dfrac{q}{\varepsilon^2 \sigma}$ and then replacing $q$ by its explicit form (cf.\ \eqref{star0}) gives \begin{align*} E_3 &= \frac{q}{\varepsilon^2\sigma} \left[ U_{0z} - \sigma ( f'(U_0) + (\varphi'(U_0) )_{zz} ) - \sigma q \left( \frac{1}{2}f''(\theta) + \frac{\varepsilon^2}{q^2} r_3 \right) \right] + q_t \\ &= \frac{1}{\varepsilon^2} \left( \beta e^{- \frac{\beta t }{\varepsilon^2}} + \varepsilon^2 L e^{Lt} \right) \left[ U_{0z} - \sigma ( f'(U_0) + (\varphi'(U_0) )_{zz} ) - \sigma^2 (\beta e^{- \frac{\beta t}{\varepsilon^2}} + L \varepsilon^2 e^{Lt}) \left( \frac{1}{2}f''(\theta) + \frac{\varepsilon^2}{q^2} r_3 \right) \right] \\ & - \frac{1}{\varepsilon^2} \sigma \beta^2 e^{ - \frac{\beta t}{\varepsilon^2}} + \varepsilon^2 \sigma L^2 e^{Lt} \\ &= \frac{1}{\varepsilon^2} \beta e^{- \frac{\beta t}{\varepsilon^2}}(I - \sigma\beta) + L e^{Lt} [I + \varepsilon^2 \sigma L] \end{align*} where $$ I := U_{0z} - \sigma ( f'(U_0) + (\varphi'(U_0) )_{zz} ) - \sigma^2 (\beta e^{- \frac{\beta t}{\varepsilon^2}} + L \varepsilon^2 e^{Lt}) \left( \frac{1}{2}f''(\theta) + \frac{\varepsilon^2}{q^2} r_3 \right). $$ Applying Lemma \ref{Lem_E3_bound}, using \eqref{eqn_cond_elc} and \eqref{cond_sigma}, yields \begin{eqnarray*} I &\geq & 3 \sigma \beta - \sigma \sigma_2 \left(\beta + L \varepsilon^2 e^{Lt} \right) \left( |f''(\theta)| + \frac{\varepsilon^2}{q^2} |r_3| \right)\\ &\geq & 3 \sigma \beta - \sigma \sigma_2 \left(\beta +1 \right) \left( |f''(\theta)| + \frac{\varepsilon^2}{q^2} |r_3| \right)\\ &\geq& 2 \sigma \beta, \end{eqnarray*} where the last inequality follows from \eqref{cond_sigma2}. This implies that \begin{align}\label{eqn_E3_bound} E_3 \geq \frac{\sigma \beta^2}{\varepsilon^2} e^{- \frac{\beta t }{\varepsilon^2}} + 2 \sigma \beta L e^{Lt}. 
\end{align} \vskip .1cm {\it Estimate of the term $E_4$.} Substituting again $p_t = \dfrac{q}{\varepsilon^2\sigma}$, with $q$ in its explicit form \eqref{star0} gives \begin{align*} E_4 &= \frac{q}{\varepsilon \sigma} \left( {U_1}_z - \sigma((\varphi'(U_0))_z \Delta d +U_1 f''(\theta)) - \sigma\frac{\varepsilon}{q} r_2 \right)\\ &= \frac{1}{\varepsilon} \left( \beta e^{-\frac{ \beta t }{\varepsilon^2}} + \varepsilon^2 L e^{Lt} \right) \left({U_1}_z - \sigma((\varphi'(U_0))_z \Delta d + {U_1} f''(\theta)) - \sigma\frac{\varepsilon}{q} r_2 \right). \end{align*} Applying Lemmas \ref{Lem_d_bound}, \ref{Lem_U0_bound}, \ref{Lem_U1_bound} and \ref{Lem_remiander_bound} gives the uniform boundedness of the last factor in parentheses. Thus, there exists a constant $\tilde{C}_4$ such that \begin{align}\label{eqn_E4_bound} |E_4| \leq \tilde{C}_4 \frac{1}{\varepsilon} \left( \beta e^{-\frac{\beta t }{\varepsilon^2}} + \varepsilon^2 L e^{Lt} \right). \end{align} \vskip .1cm {\it Estimate of the terms $E_5$ and $E_6$.} Applying Lemmas \ref{Lem_d_bound}, \ref{Lem_U0_bound} and \ref{Lem_U1_bound}, it follows that there exists $\tilde{C}_5$ such that \begin{align}\label{eqn_E5_bound} |E_5| + |E_6| \leq \tilde{C}_5.
\end{align} \vskip .1cm {\it Combination of the above estimates.} Collecting the estimates (\ref{eqn_E1_bound}),(\ref{eqn_E2_bound}),(\ref{eqn_E3_bound}),(\ref{eqn_E4_bound}),(\ref{eqn_E5_bound}), we obtain \begin{align*} \mathcal{L}(u^+) &\geq \left[ \frac{\sigma \beta^2}{\varepsilon^2} - \tilde{C}_4 \frac{\beta}{\varepsilon} \right]e^{- \frac{\beta t }{\varepsilon^2}} + \left[ 2 \sigma \beta L - \varepsilon \tilde{C}_4 L -\tilde{C}_2 \right]e^{Lt} - \tilde{C}_1 - \tilde{C}_2 - \tilde{C}_5 \\ & \geq \left[ \frac{\sigma \beta^2}{\varepsilon^2} - \tilde{C}_4 \frac{\beta}{\varepsilon} \right]e^{- \frac{\beta t }{\varepsilon^2}} + \left[ \frac{2 \sigma \beta L}{3} - \varepsilon \tilde{C}_4 L \right]e^{Lt} \\ &+ \left[ \frac{2 \sigma \beta L}{3} -\tilde{C}_2 \right]e^{Lt} + \left[ \frac{2 \sigma \beta L}{3} -\tilde{C}_6 \right] \end{align*} where $\tilde{C}_6 = \tilde{C}_1 + \tilde{C}_2 + \tilde{C}_5$. Choose $\varepsilon_0$ small enough and $L$ large enough so that \begin{align*} \sigma \beta > 3 \tilde{C}_4 \varepsilon_0 ,~ \sigma \beta L > 3 \max \{ \tilde{C}_2, \tilde{C}_6 \}. \end{align*} Then, we deduce that there exists a positive constant $C_p$, independent of $\varepsilon$, such that $\mathcal{L}(u^+) \geq C_p$. \end{proof} \subsection{Proof of Theorem \ref{Thm_Propagation}} \label{proof_thm_prop} The proof of Theorem \ref{Thm_Propagation} is divided into two steps: (i) for large enough $K>0$, we prove that $u^-(x,t) \leq u^\varepsilon(x,t + t^\varepsilon) \leq u^+(x,t)$ for $x \in \overline{D}, t \in [0,T - t^\varepsilon]$, and (ii) we employ (i) to show the desired result. \vskip .1cm {\it Step 1.} Fix $\beta$ and $\sigma$ as in (\ref{cond_beta}) and (\ref{cond_sigma}). Without loss of generality, we may assume that \begin{align*} 0 < \eta < \min \left\{ \eta_0, \sigma\beta \right\}. \end{align*} Theorem \ref{Thm_Generation} implies the existence of constants $\varepsilon_0$ and $M_0$ such that (\ref{Thm_generation_i})-(\ref{Thm_generation_iii}) are satisfied.
Conditions (\ref{cond_gamma0_normal}) and (\ref{cond_u0_inout}) imply that there exists a positive constant $M_1$ such that \begin{align*} &\text{if } d(x,0) \leq - M_1 \varepsilon, ~~ \text{ then } u_0(x) \leq \alpha - M_0 \varepsilon, \\ &\text{if } d(x,0) \geq M_1 \varepsilon, ~~ \text{ then } u_0(x) \geq \alpha + M_0 \varepsilon. \end{align*} Hence, we deduce, by applying (\ref{Thm_generation_i}), (\ref{Thm_generation_iii}), that \begin{align*} u^\varepsilon(x,t^\varepsilon) \leq H^+(x) := \begin{cases} \alpha_+ + \frac{\eta}{4} & ~~ \text{ if } d(x,0) \geq - M_1 \varepsilon \\ \alpha_- + \frac{\eta}{4} & ~~ \text{ if } d(x,0) < - M_1 \varepsilon. \end{cases} \end{align*} Also, by applying (\ref{Thm_generation_i}), (\ref{Thm_generation_ii}), \begin{align*} u^\varepsilon(x,t^\varepsilon) \geq H^-(x) := \begin{cases} \alpha_+ - \frac{\eta}{4} & ~~ \text{ if } d(x,0) > M_1 \varepsilon \\ \alpha_- - \frac{\eta}{4} & ~~ \text{ if } d(x,0) \leq M_1 \varepsilon. \end{cases} \end{align*} Next, we fix a sufficiently large constant $K$ such that \begin{align*} U_0(M_1 - K) \leq \alpha_- + \frac{\eta}{4} ~~~\text{and} ~~~ U_0(- M_1 + K) \geq \alpha_+ - \frac{\eta}{4}. \end{align*} For such a constant $K$, Lemma \ref{Lem_Prop_subsuper} implies the existence of constants $\varepsilon_0$ and $L$ such that the inequalities in (\ref{eqn_Prop_subsuper}) hold. We claim that \begin{align}\label{thm_prop_proof1} u^-(x,0) \leq H^-(x) \leq H^+(x) \leq u^+(x,0). \end{align} We only prove the last inequality since the first inequality can be proved similarly. By Lemma \ref{Lem_U1_bound}, we have $| {U_1} | \leq \hat{C}_1$. Thus, we can choose $\varepsilon_0$ small enough so that, for $\varepsilon\in (0,\varepsilon_0)$, we have $ \varepsilon \hat{C}_1 \leq \dfrac{\sigma \beta}{4}$.
Then, noting \eqref{star0}, \begin{align*} u^+(x,0) &\geq U_0 \left( \frac{d(x,0) + \varepsilon p(0)}{\varepsilon} \right) - \varepsilon \hat{C}_1 + \sigma \beta + \varepsilon^2 \sigma L \\ &> U_0 \left( \frac{d(x,0)}{\varepsilon} + K \right) + \frac{3}{4} \eta. \end{align*} In the set $\{ x \in D : d(x,0) \geq - M_1\varepsilon \}$, the inequalities above, and the fact that $U_0$ is an increasing function imply \begin{align*} u^+(x,0) > U_0(- M_1 + K) + \frac{3}{4} \eta \geq \alpha_+ + \frac{\eta}{2} > H^+(x). \end{align*} Moreover, since $U_0 \geq \alpha_-$ in the set $\{ x \in D : d(x,0) < - M_1\varepsilon \}$, we have \begin{align*} u^+(x,0) > \alpha_- + \frac{3}{4} \eta > H^+(x). \end{align*} Thus, we have proved the last inequality in \eqref{thm_prop_proof1}. The inequalities \eqref{thm_prop_proof1} and Lemma \ref{Lem_Prop_subsuper} now allow us to apply the comparison principle Lemma \ref{lem_comparison}, so that we have \begin{align}\label{compprinciple} u^-(x,t) \leq u^\varepsilon(x,t + t^\varepsilon) \leq u^+(x,t) ~~ \text{ for } ~~ x \in \overline{D}, t \in [0,T - t^\varepsilon]. \end{align} \vskip .1cm {\it Step 2.} Choose $C > 0$ so that \begin{align*} U_0(C - e^{LT} - K) \geq \alpha_+ - \frac{\eta}{2} ~~ \text{and} ~~ U_0( - C + e^{LT} + K) \leq \alpha_- + \frac{\eta}{2}. \end{align*} Then, we deduce from \eqref{compprinciple}, noting \eqref{star0}, that \begin{align*} &\text{if}~ d(x,t) \geq \varepsilon C, ~\text{then } u^\varepsilon(x,t + t^\varepsilon) \geq \alpha_+ - \eta \\ &\text{if}~ d(x,t) \leq - \varepsilon C, ~\text{then } u^\varepsilon(x,t + t^\varepsilon) \leq \alpha_- + \eta \end{align*} and since $\alpha_\pm \pm \eta$ are respectively sub- and super-solutions of $(P^\varepsilon)$, we conclude that \begin{align*} u^\varepsilon(x,t + t^\varepsilon) \in [\alpha_- - \eta, \alpha_+ + \eta] \end{align*} for all $(x,t) \in D \times [0, T - t^\varepsilon], \varepsilon \in (0,\varepsilon_0)$.
\qed \begin{rmk}\label{rmk_thm13} These sub- and super-solutions guarantee that $u^\varepsilon \simeq \alpha_+$ (respectively, $u^\varepsilon \simeq \alpha_-$) for $d(x,t) \geq c$ (respectively, $d(x,t) \leq -c$) with $t > \rho t^\varepsilon$, $\rho > 1$, and $\varepsilon > 0$ small enough. In fact, by the definition of $q(t)$, we expect \begin{align*} \varepsilon U_1 \pm q(t) = \mathcal{O}(\varepsilon) \end{align*} for $t > (\rho - 1) t^\varepsilon$. Also, by Lemma \ref{Lem_U0_bound}, we expect \begin{align*} 0 < U_0(z) - \alpha_- < \tilde{c} \varepsilon ~\text{for}~ z < - \dfrac{c}{\varepsilon} ,~~ 0 < \alpha_+ - U_0(z) < \tilde{c} \varepsilon ~\text{for}~ z > \dfrac{c}{\varepsilon}. \end{align*} These estimates yield that there exists a positive constant $c'$ such that \begin{align*} |u^\varepsilon(x,t) - \alpha_+| \leq c'\varepsilon ~\text{for}~ d(x,t) > c ,~~ |u^\varepsilon(x,t) - \alpha_-| \leq c'\varepsilon ~\text{for}~ d(x,t) < - c \end{align*} for $t > \rho t^\varepsilon$. \end{rmk} \section{Proof of Theorem \ref{thm_asymvali}}\label{section_5} We now introduce the concept of an eternal solution. A solution of an evolution equation is called \textit{eternal} if it is defined for all positive and negative times. In our problem, we study the nonlinear diffusion problem \begin{align}\label{eqn_entire} w_\tau = \Delta \varphi(w) + f(w), ~~ ((z', z^{(N)}), \tau) \in {\mathbb{ R}}^N \times {\mathbb{ R}}, \end{align} where $z'\in {\mathbb{ R}}^{N-1}$ and $z^{(N)}\in {\mathbb{ R}}$. In order to prove Theorem \ref{thm_asymvali}, we first present two lemmas. \begin{lem}\label{lem_locregularity} Let $S$ be a domain of $\mathbb{R}^N \times {\mathbb{ R}}$ and let $u$ be a bounded function on $S$ satisfying \begin{align} \label{eq:NAC} u_t = \Delta \varphi(u) + f(u), ~~ (x,t) \in S, \end{align} where $\varphi, f$ satisfy conditions \eqref{cond_f_bistable}, \eqref{cond_phi'_bounded}.
Then, for any smooth bounded subset $S' \subset S$ separated from $\partial S$ by a positive constant $\tilde{d}$, we have \begin{align} \label{eq:C21} \Vert u \Vert_{C^{2 + \theta, 1 + \theta/2}(\overline{S'})} \leq C', \end{align} for any $\theta \in (0,1)$, where the constant $C'$ depends only on $\|u\|_{L^\infty(S)}$, $\varphi, f$, $\tilde{d}, \theta$ and the size of $S'$, and where \begin{align*} \Vert u \Vert_{C^{k + \theta, k' + \theta'}(\overline{S'})} &= \Vert u \Vert_{C^{k,k'}(\overline{S'})} + \sum_{i,j = 1}^N \sup_{(x,t),(y,t) \in S', x \neq y} \left\{ \dfrac{|D^k_x u(x,t) - D^k_x u(y,t)|}{|x - y|^\theta} \right\} \\ & + \sup_{(x,t),(x,t') \in S', t \neq t'} \left\{ \dfrac{|D^{k'}_tu(x,t) - D^{k'}_tu(x,t')|}{|t - t'|^{\theta'}} \right\} \end{align*} where $k, k'$ are non-negative integers and $0 < \theta, \theta' < 1$. \end{lem} \begin{proof} Since $S'$ is separated from $\partial S$ by a positive distance, we can find subsets $S_1, S_2 $ such that $S' \subset S_2 \subset S_1 \subset S$ and such that $\partial S, \partial S', \partial S_i$ are separated by a positive distance less than $\tilde{d}$. By condition \eqref{cond_phi'_bounded} the regularity of $u(x,t)$ is the same as the regularity of $v(x,t) = \varphi(u(x,t))$. Note that by \eqref{eq:NAC} $v$ satisfies \begin{align*} v_t = \varphi'(\varphi^{-1}(v)) [ \Delta v + g(v) ], ~~ \text{ where } g(s) = f(\varphi^{-1}(s)), \end{align*} on $S$. By Theorem 3.1 p.\ 437-438 of \cite{LOVA1988}, there exists a positive constant $c_1$ such that \begin{align*} | \nabla v | \leq c_1 \text{ in } S_1 \end{align*} where $c_1$ depends only on $N, \varphi, \|u\|_{L^\infty(S)}$ and the distance between $\partial S$ and $S_1$.
This, together with Theorem 5, p.\ 122 of \cite{Krylov2008}, implies that \begin{align*} \Vert v \Vert_{W^{2,1}_p(S_2)} \leq c_2(\Vert v \Vert_{L^p(S_1)} + \Vert \varphi'(\varphi^{-1}(v)) g(v) \Vert_{L^p(S_1)}) \end{align*} for any $p > {N + 2}$, where $c_2$ is a constant that depends on $c_1, p, N, \varphi$. With this, by fixing $p$ large enough, the Sobolev embedding theorem in chapter 2, section 3 of \cite{LOVA1988} yields \begin{align*} \Vert v \Vert_{C^{1 + \theta, (1 + \theta)/2}(S_2)} \leq c_3 \Vert v \Vert_{W^{2,1}_p(S_2)} \end{align*} where $0 < \theta < 1 - \frac{N + 2}{p}$ and $c_3$ depends on $c_2$ and $p$. This implies that $\varphi'(\varphi^{-1}(v)), g(v)$ are bounded uniformly in $C^{1 + \theta, (1 + \theta)/2}(S_2)$. Therefore, by Theorem 10.1, p.\ 351-352 of \cite{LOVA1988} we obtain \begin{align*} \Vert v \Vert_{C^{2 + \theta, 1 + \theta/2}(S')} \leq c_4 \Vert v \Vert_{C^{1 + \theta, (1 + \theta)/2}(S_2)} \end{align*} where $c_4$ depends on $c_2, f$ and $\varphi$. \end{proof} \begin{rmk}\label{rmk_locreg} Lemma \ref{lem_locregularity} implies uniform $C^{2,1}$ boundedness of an eternal solution $w$ of \eqref{eqn_entire} in the whole space. This can be derived as follows: Let \begin{align*} S_{(a,b)} = \{(x,t) \in {\mathbb{ R}}^N \times {\mathbb{ R}}, |x - a|^2 + (t - b)^2 \leq 2 \},~ S'_{(a,b)} = \{(x,t) \in {\mathbb{ R}}^N \times {\mathbb{ R}}, |x - a|^2 + (t - b)^2 \leq 1 \} \end{align*} where ${(a,b)} \in {\mathbb{ R}}^N \times {\mathbb{ R}}$. Then, Lemma \ref{lem_locregularity} implies uniform $C^{2,1}$ boundedness of $w$ within $S'_{(a,b)}$, where the upper bound is fixed by \eqref{eq:C21}. Since this upper bound is independent of the choice of $(a,b)$, we have a uniform $C^{2,1}$ bound of $w$ in the whole space. \end{rmk} Next, we present a result inspired by a similar one in \cite{BH2007}.
\begin{lem}\label{lem_entire} Let $w((z',z^{(N)}),\tau)$ be a bounded eternal solution of \eqref{eqn_entire} satisfying \begin{align}\label{lem_entire_1} \liminf_{z^{(N)} \rightarrow - \infty} \inf_{z' \in {\mathbb{ R}}^{N - 1}, \tau \in {\mathbb{ R}}} w((z',z^{(N)}),\tau) = \alpha_- ,~~ \limsup_{z^{(N)} \rightarrow \infty} \sup_{z' \in {\mathbb{ R}}^{N - 1}, \tau \in {\mathbb{ R}}} w((z',z^{(N)}),\tau) = \alpha_+, \end{align} where $z' = (z^{(1)}, z^{(2)}, \cdots z^{(N - 1)})$. Then, there exists a constant $z^* \in {\mathbb{ R}}$ such that \begin{align*} w((z',z^{(N)}),\tau) = U_0(z^{(N)} - z^*). \end{align*} \end{lem} \begin{proof} We prove the lemma in two steps. First we show $w$ is an increasing function with respect to the $z^{(N)}$ variable. Then, we prove that $w$ only depends on $z^{(N)}$, which means that there exists a function $\psi : {\mathbb{ R}} \rightarrow (\alpha_-, \alpha_+)$ such that \begin{align*} w((z',z^{(N)}),\tau) = \psi(z^{(N)}), \ \ ((z',z^{(N)}), \tau) \in {\mathbb{ R}}^N \times {\mathbb{ R}}. \end{align*} From the increasing property with respect to $z^{(N)}$, this allows us to identify $\psi$ as the unique standing wave solution $U_0$ of the problem \eqref{eqn_AsymptExp_U0} up to a translation factor $z^*$. We deduce from \eqref{lem_entire_1} that there exist $A > 0$ and $\eta \in (0, \eta_0)$ such that \begin{align}\label{lem_entire_2} \begin{cases} \alpha_+ - \eta \leq w((z',z^{(N)}),\tau) \leq \alpha_+ + \eta , &~~ z^{(N)} \geq A \\ \alpha_- - \eta \leq w((z',z^{(N)}),\tau) \leq \ \alpha_- + \eta , &~~ z^{(N)} \leq - A \end{cases} \end{align} where $\eta_0$ is defined in \eqref{cond_mu_eta0}. Let $\tilde{\tau} \in {\mathbb{ R}}, \rho \in {\mathbb{ R}}^{N - 1}$ be arbitrary. Define \begin{align*} w^s((z',z^{(N)}),\tau) := w((z' + \rho, z^{(N)} + s), \tau + \tilde{\tau}) \end{align*} where $s \in {\mathbb{ R}}$. 
Fix $\chi \geq 2 A$ and define \begin{align}\label{lem_entire_9} b^* := \inf \left\{ b > 0: \varphi(w^\chi) + b \geq \varphi(w) ~ \text{in}~ {\mathbb{ R}}^N \times {\mathbb{ R}} \right\}. \end{align} We will prove that $b^* = 0$, which will imply that $w^\chi \geq w$ in ${\mathbb{ R}}^N \times {\mathbb{ R}}$ since $\varphi$ is a strictly increasing function. Suppose, to the contrary, that $b^* > 0$. Note that, by \eqref{lem_entire_1} and \eqref{lem_entire_2}, we have \begin{align}\label{lem_entire_3} w^\chi \geq \alpha_+ - \eta > \alpha_- + \eta \geq w ~\text{if}~ z^{(N)} = -A,~~ \varphi(w^\chi) - \varphi(w) \rightarrow 0 ~\text{as}~ z^{(N)} \rightarrow \pm \infty. \end{align} Let $E = \{ (x,t) \in {\mathbb{ R}}^N \times {\mathbb{ R}}, \varphi({w}) - \varphi({w^\chi}) > 0\}$. Define a function $Z$ on $E$ as follows: \begin{align*} Z((z', z^{(N)}), \tau) &:= e^{-C_Z \tau}[\varphi({w}) - \varphi({w^\chi})]((z',z^{(N)}),\tau) , \\ C_Z &:= \max \left( \sup_{E} \dfrac{[\varphi'({w}) - \varphi'({w^\chi})] \Delta \varphi({w}) + [\varphi'({w})f({w}) - \varphi'({w^\chi})f({w^\chi})]}{\varphi({w}) - \varphi({w^\chi})} , 0 \right) \geq 0. \end{align*} Note that $C_Z$ is bounded, since ${w^\chi}$ is bounded uniformly in $C^{2,1}(E)$ by Remark \ref{rmk_locreg} and in view of \eqref{cond_f_bistable} and \eqref{cond_phi'_bounded} we have \begin{align}\label{lem_entire_10} \lim_{x \to y} \dfrac{\varphi'(x) - \varphi'(y)}{\varphi(x) - \varphi(y)} = \dfrac{\varphi''(y)}{\varphi'(y)} < \infty, \lim_{x \to y} \dfrac{\varphi'(x)f(x) - \varphi'(y)f(y)}{\varphi(x) - \varphi(y)} = \dfrac{(\varphi'f)'(y)}{\varphi'(y)} < \infty.
\end{align} Direct computations give \begin{align*} Z_\tau - \varphi'({w^\chi}) \Delta Z &= e^{-C_Z\tau}\varphi'({w})[\Delta \varphi({w}) + f({w})] - e^{-C_Z\tau}\varphi'({w^\chi})[\Delta \varphi({w^\chi}) + f({w^\chi})] \\ &- C_Z Z - e^{-C_Z\tau}\varphi'({w^\chi}) [\Delta \varphi({w}) - \Delta \varphi({w^\chi})] \\ &= \Big( [\varphi'({w}) - \varphi'({w^\chi})]\Delta \varphi({w}) +[\varphi'({w})f({w}) - \varphi'({w^\chi})f({w^\chi})] \Big) e^{-C_Z\tau} - C_Z Z \\ &\leq C_Z Z - C_Z Z = 0 \end{align*} in $E$. Then, the maximum principle (\cite{PW1984}, Theorem 5, p.\ 173) yields that the maximum of $Z$ is located at the boundary of $E$. By the definition of $E$, $Z = 0$ on the boundary of $E$ which implies $Z \leq 0$ in $E$. This contradicts the definition of $E$. Thus, we conclude that $b^* = 0$. Next, we prove that $w \leq w^\chi$ for any $\chi > 0$ (see \eqref{lem_entire_7} below). For this purpose, define \begin{align}\label{lem_entire_6} \chi^* := \inf\left \{ \chi \in {\mathbb{ R}}, w^{\tilde{\chi}} \geq w ~ \text{for all }~ \tilde{\chi} \geq \chi \right \}. \end{align} It then suffices to prove that $\chi^* \leq 0$. By the previous argument, we already know that $\chi^* \leq 2A$. It follows from \eqref{lem_entire_1} that $\chi^* > -\infty$: otherwise, since $w((z', - \infty), \tau) = \alpha_-$, we would have \begin{align*} \alpha_- = w^{-\infty}((z',z^{(N)}),\tau) \geq w, \end{align*} a contradiction since $w((z', + \infty), \tau) = \alpha_+ > \alpha_-$. Thus, we conclude $- \infty < \chi^* \leq 2A$. Assume that $\chi^* > 0$, and define $E' := \{((z',z^{(N)}), \tau) \in {\mathbb{ R}}^N \times {\mathbb{ R}}; ~ |z^{(N)}| \leq A \}$. If $\inf_{E'} (w^{\chi^*} - w) > 0$, then there exists $\delta_0 \in (0, \chi^*)$ such that $w \leq w^{\chi^* - \delta}$ in $E'$ for all $\delta \in (0, \delta_0)$.
Since $w \leq w^{\chi^* - \delta}$ on $\partial E'$, we deduce from a similar argument as above that $w \leq w^{\chi^* - \delta}$ in $\{((z',z^{(N)}), \tau) \in {\mathbb{ R}}^N \times {\mathbb{ R}}; ~ |z^{(N)}| \geq A \}$. This contradicts the definition of $\chi^*$ in \eqref{lem_entire_6}, so that $\inf_{E'} (w^{\chi^*} - w) = 0$. Thus, we must have a sequence $(({z}'_n, {z}_n), {t}_n)$ and ${z}_\infty \in [-A, A]$ such that \begin{align*} w(({z}'_n, {z}_n), {t}_n) - w^{\chi^*}(({z}'_n, {z}_n), {t}_n) \rightarrow 0 ,~~ {z}_n \rightarrow {z}_\infty ~\text{as}~ n \rightarrow\infty. \end{align*} Define ${w}_n((z',z^{(N)}),\tau) := w((z' + {z}'_n, z^{(N)}), \tau + {t}_n)$. Since $w_n$ is bounded uniformly in $C^{2 + \theta, 1 + \theta/2}({\mathbb{ R}}^N \times {\mathbb{ R}})$ by Lemma \ref{lem_locregularity}, a subsequence of $(w_n)$ (not relabeled) converges in $C^{2,1}_{loc}$ to a solution ${{w}_\infty}$ of \eqref{eqn_entire}. Define $\tilde{Z}$ by \begin{align*} \tilde{Z}((z',z^{(N)}),\tau) := [\varphi({{w}^{\chi^*}_\infty}) - \varphi({{w}_\infty})]((z',z^{(N)}),\tau). \end{align*} Since $\varphi$ is strictly increasing, by \eqref{lem_entire_6} we have \begin{align}\label{lem_entire_11} \begin{cases} \tilde{Z}((z',z^{(N)}),\tau) \geq 0 ~ \text{in} ~ {\mathbb{ R}}^N \times {\mathbb{ R}} \\ \tilde{Z}((0, {z}_\infty), 0) = \lim_{n\rightarrow\infty} [{\varphi({w}^{\chi^*}_n)} - {\varphi({w}_n)}] ((0,{z}_n),0) = \lim_{n \rightarrow \infty} [{\varphi(w^{\chi^*})} -{\varphi(w)}] (({z}'_n, {z}_n), {t}_n) = 0.
\end{cases} \end{align} Then, direct computation gives \begin{align*} \tilde{Z}_\tau - \varphi'({{w}^{\chi^*}_\infty}) \Delta \tilde{Z} &= \varphi'({{w}^{\chi^*}_\infty})[\Delta \varphi({{w}^{\chi^*}_\infty}) + f({{w}^{\chi^*}_\infty})] - \varphi'({{w}_\infty})[\Delta \varphi({{w}_\infty}) + f({{w}_\infty})] \\ & - \varphi'({{w}^{\chi^*}_\infty}) [\Delta \varphi({{w}^{\chi^*}_\infty}) - \Delta \varphi({{w}_\infty})] \\ &= [\varphi'({{w}^{\chi^*}_\infty}) - \varphi'({{w}_\infty})]\Delta \varphi({{w}_\infty}) +[\varphi'({{w}^{\chi^*}_\infty})f({{w}^{\chi^*}_\infty}) - \varphi'({{w}_\infty})f({{w}_\infty})]. \end{align*} If $\tilde{Z} = 0$ we obtain $\tilde{Z}_\tau - \varphi'({{w}^{\chi^*}_\infty}) \Delta \tilde{Z} = 0$. If $\tilde{Z} > 0$, we obtain \begin{align*} \tilde{Z}_\tau - \varphi'({{w}^{\chi^*}_\infty}) \Delta \tilde{Z} &= \left( \dfrac{[\varphi'({{w}^{\chi^*}_\infty}) - \varphi'({{w}_\infty})]\Delta \varphi({{w}_\infty})+[\varphi'({{w}^{\chi^*}_\infty})f({{w}^{\chi^*}_\infty}) - \varphi'({{w}_\infty})f({{w}_\infty})]}{\varphi({{w}^{\chi^*}_\infty}) - \varphi({{w}_\infty})} \right)\tilde{Z} \\ &\geq - C \tilde{Z}, \end{align*} for some positive constant $C$, where the last inequality follows from \eqref{lem_entire_10} and the fact that $\Delta\varphi({{w}_\infty})$ is uniformly bounded in the whole space. Since, by \eqref{lem_entire_11}, $\tilde{Z}$ attains its minimum value zero at $((0,{z}_\infty),0)$, we deduce from the maximum principle applied on the domain ${\mathbb{ R}}^N \times (-\infty, 0]$ that $\tilde{Z} = 0$ for all $(z',z^{(N)}) \in {\mathbb{ R}}^N,~\tau \leq 0$. Hence, $\tilde{Z} \equiv 0$ in ${\mathbb{ R}}^N \times {\mathbb{ R}}$.
This implies that \begin{align*} {{w}_\infty}((0,0),0) = {{w}_\infty}((\rho, \chi^*), \tilde{\tau}) = {{w}_\infty}((2\rho, 2\chi^*), 2\tilde{\tau}) = \cdots = {{w}_\infty}((k\rho, k \chi^*), k \tilde{\tau}) \end{align*} for all $k \in \mathbb{Z}$, contradicting the fact that ${{w}_\infty}((k\rho, k \chi^*), k \tilde{\tau}) \rightarrow \alpha_+$ as $k \rightarrow \infty$ and ${{w}_\infty}((k\rho, k \chi^*), k \tilde{\tau}) \rightarrow \alpha_-$ as $k \rightarrow - \infty$. Thus, we have $\chi^* \leq 0$, and therefore \begin{align}\label{lem_entire_7} w((z',z^{(N)}),\tau) \leq w^0((z',z^{(N)}),\tau) = w((z' + \rho, z^{(N)}), \tau + \tilde{\tau}) \end{align} holds for any $\rho \in {\mathbb{ R}}^{N - 1}, \tilde{\tau} \in {\mathbb{ R}}$. We now show that $w$ only depends on $z^{(N)}$. Suppose, to the contrary, that $w$ depends on $z'$ or $\tau$. Then, there exist $z'_1, z'_2 \in {\mathbb{ R}}^{N - 1}, z^{(N)} \in {\mathbb{ R}}$ and $t'_1, t'_2 \in {\mathbb{ R}}$ such that \begin{align}\label{lem_entire_8} w((z'_1, z^{(N)}), t_1') < w((z'_2, z^{(N)}), t'_2). \end{align} Then, by letting $z' = z_2',~\rho = z_1' - z_2'$ and $\tau = t_2',~ \tilde{\tau} = t_1' - t_2'$ in the inequality \eqref{lem_entire_7}, we deduce \begin{align*} w((z_2', z^{(N)}) ,t_2') \leq w((z_1', z^{(N)}), t_1'), \end{align*} contradicting \eqref{lem_entire_8}. This implies that $w$ only depends on $z^{(N)}$, namely $w((z',z^{(N)}),\tau) = \psi(z^{(N)})$. Finally, from the definition of $\chi^*$, we have that $\psi$ is increasing. \end{proof} {\bf Proof of Theorem \ref{thm_asymvali}.} We first prove $(i)$. Recall that $d(x,t)$ is the cut-off signed distance function to the interface $\Gamma_t$ moving according to equation \eqref{eqn_motioneqn}, and $d^\varepsilon(x,t)$ is the signed distance function corresponding to the interface \begin{align*} \Gamma_t^\varepsilon := \{ x \in D, ~ u^\varepsilon(x,t) = \alpha \}. \end{align*} Let $T_1$ be an arbitrary constant such that $\frac{T}{2} < T_1 < T$.
Assume by contradiction that \eqref{thm_asymvali_i} does not hold. Then, there exist $\eta > 0$ and sequences $\varepsilon_k \downarrow 0,~ t_k \in [\rho t^{\varepsilon_k}, T],~ x_k \in D$ such that $\alpha_+ - \eta > \alpha > \alpha_- + \eta$ and \begin{align}\label{eqn_asymproof_1} \left| u^{\varepsilon_k}(x_k, t_k) - U_0 \left( \dfrac{d^{\varepsilon_k}(x_k, t_k)}{\varepsilon_k} \right) \right| \geq \eta. \end{align} For the inequality \eqref{eqn_asymproof_1} to hold, by Theorem \ref{Thm_Propagation} and $U_0(\pm \infty) = \alpha_\pm$, we need \begin{align*} d^{\varepsilon_k}(x_k, t_k) = \mathcal{O}(\varepsilon_k). \end{align*} With these observations, and also by Theorem \ref{Thm_Propagation}, there exists a positive constant $\tilde{C}$ such that \begin{align}\label{eqn_asymproof_2} | d(x_k,t_k) | \leq \tilde{C} \varepsilon_k \end{align} for $\varepsilon_k$ small enough. If $x_k \in \Gamma^{\varepsilon_k}_{t_k}$, then the left-hand side of \eqref{eqn_asymproof_1} vanishes, which contradicts this inequality; hence $x_k \notin \Gamma^{\varepsilon_k}_{t_k}$. Since $u^{\varepsilon_k}(x_k,t_k) - \alpha$ can be either positive or negative, by extracting a subsequence if necessary we may assume that \begin{align}\label{eqn_asymproof_7} u^{\varepsilon_k}(x_k,t_k) - \alpha > 0~~ \text{for all}~~ k \in \mathbb{N}, \end{align} which is equivalent to \begin{align*} d^{\varepsilon_k}(x_k,t_k) > 0 ~~ \text{for all}~~ k \in \mathbb{N}. \end{align*} By \eqref{eqn_asymproof_2}, each $x_k$ has a unique orthogonal projection $p_k := p(x_k,t_k) \in \Gamma_{t_k}$. Let $y_k$ be a point on $\Gamma^{\varepsilon_k}_{t_k}$ that has the smallest distance from $x_k$, and therefore $u^{\varepsilon_k}(y_k,t_k) = \alpha$. Moreover, we have \begin{align}\label{eqn_asymproof_3} u^{\varepsilon_k}(x, t_k) > \alpha ~~ \text{if} ~~ \Vert x - x_k \Vert < \Vert y_k - x_k \Vert. \end{align} We now rescale $u^{\varepsilon_k}$ around $(p_k,t_k)$.
Define \begin{align}\label{eqn_asymproof_10} w^k(z, \tau) := u^{\varepsilon_k}(p_k + \varepsilon_k \mathcal{R}_kz, t_k + \varepsilon^2_k \tau), \end{align} where $\mathcal{R}_k$ is an orthogonal matrix in $SO(N,{\mathbb{ R}})$ that rotates the $z^{(N)}$ axis, namely the vector $(0, \cdots ,0, 1) \in {\mathbb{ R}}^N$, onto the unit normal vector to $\Gamma_{t_k}$ at $p_k \in \Gamma_{t_k}$, that is, $\dfrac{x_k - p_k}{\Vert x_k - p_k \Vert}$. To prove our result, we use Theorem \ref{Thm_Propagation} which gives information about $u^{\varepsilon_k}$ for $t_k + \varepsilon^2_k \tau \geq t^{\varepsilon_k}$. Then, since $\Gamma_t$ is separated from $\partial D$ by some positive distance, $w^k$ is well-defined at least on the box \begin{align*} B_k := \left\{ (z, \tau) \in {\mathbb{ R}}^N \times {\mathbb{ R}} : |z| \leq \dfrac{c}{\varepsilon_k} ,~~ - (\rho - 1) \dfrac{{|\ln \varepsilon_k|}}{\mu} \leq \tau \leq \dfrac{T - T_1}{\varepsilon_k^2} \right\}, \end{align*} for some $c > 0$. We remark that $B_k \subset B_{k + 1}, k \in \mathbb{N}$ and $\lim_{k \rightarrow \infty} B_k = {\mathbb{ R}}^N \times {\mathbb{ R}}$. Writing $\mathcal{R}_k = (r_{ij})_{1 \leq i,j \leq N}$, we remark that $\mathcal{R}_k^{-1} = \mathcal{R}_k^T$, which implies that \begin{align}\label{eqn_asymproof_9} \sum_{i = 1}^N r_{ \ell i }^2 = 1 ,~~ \sum_{i = 1}^N r_{j i} r_{\ell i} = 0 ~~ \text{for}~ j \neq \ell. \end{align} Since \begin{align*} \partial_{z_i}^2 \varphi(w^k) = \varepsilon_k^2 \sum_{j = 1}^N \sum_{\ell = 1}^N r_{j i} r_{\ell i} \partial_{x_\ell x_j} \varphi(u^{\varepsilon_k}), \end{align*} we have \begin{align*} \Delta \varphi(w^k) &= \varepsilon_k^2 \sum_{i = 1}^N \partial_{z_i}^2 \varphi(w^k) \\ &= \varepsilon_k^2 \sum_{i = 1}^N \sum_{\ell = 1}^N r_{\ell i}^2 \partial_{x_\ell}^2 \varphi(u^{\varepsilon_k}) + \varepsilon_k^2 \sum_{i = 1}^N \sum_{j,\ell = 1, j \neq \ell}^N r_{j i} r_{\ell i} \partial_{x_\ell x_j} \varphi(u^{\varepsilon_k}) \\ & =\varepsilon_k^2 \Delta \varphi(u^{\varepsilon_k}).
\end{align*} Thus, we obtain \begin{align*} w^k_\tau = \Delta \varphi(w^k) + f(w^k) ~~ \text{in} ~~ B_k. \end{align*} From the propagation result in Theorem \ref{Thm_Propagation} and the fact that the rotation matrix $\mathcal{R}_k$ maps the $z^{(N)}$ axis to the unit normal vector of $\Gamma_{t_k}$ at $p_k$, there exists a constant $C > 0$ such that \begin{align}\label{eqn_asymproof_4} z^{(N)} \geq C \Rightarrow w^k(z,\tau) \geq \alpha_+ - \eta > \alpha ,~~ z^{(N)} \leq -C \Rightarrow w^k(z,\tau) \leq \alpha_- + \eta < \alpha \end{align} as long as $(z,\tau) \in B_k$. It follows from the first line of \eqref{thm_propagation_1} that $\alpha_- - \eta_0 \leq w^k \leq \alpha_+ + \eta_0$ for $k$ large enough. Then, by Lemma \ref{lem_locregularity} we can find a subsequence of $(w^k)$ converging to some $w \in C^{2,1}({\mathbb{ R}}^N \times {\mathbb{ R}})$ which satisfies \begin{align*} w_\tau = \Delta \varphi(w) + f(w) ~~ \text{ on } ~~ {\mathbb{ R}}^N \times {\mathbb{ R}}. \end{align*} From Remark \ref{rmk_thm13} we can deduce \eqref{lem_entire_1}. Then, by Lemma \ref{lem_entire}, there exists $z^* \in {\mathbb{ R}}$ such that \begin{align}\label{eqn_asymproof_5} w(z, \tau) = U_0(z^{(N)} - z^*). \end{align} Define sequences of points $\{ z_k \}, \{ \tilde{z}_{k} \}$ by \begin{align}\label{eqn_asymproof_11} z_k := \dfrac{1}{\varepsilon_k} \mathcal{R}^{-1}_k(x_k - p_k), ~~ \tilde{z}_{k} := \dfrac{1}{\varepsilon_k} \mathcal{R}^{-1}_k(y_k - p_k). \end{align} From \eqref{eqn_asymproof_2} and Theorem \ref{Thm_Propagation}, we have \begin{align*} &|d(x_k,t_k)| = \Vert x_k - p_k \Vert = \mathcal{O}(\varepsilon_k) ,~~\\ & \Vert y_k - p_k \Vert \leq \Vert y_k - x_k \Vert + \Vert x_k - p_k \Vert = |d^{\varepsilon_k}(x_k,t_k)| + |d(x_k,t_k)| = \mathcal{O}(\varepsilon_k) \end{align*} (see Figure \ref{fig1}), which implies that the sequences $z_k$ and $\tilde{z}_k$ are bounded.
Thus, there exist subsequences of $\{ z_k \}, \{ \tilde{z}_k \}$ and $z_\infty, \tilde{z}_\infty \in {\mathbb{ R}}^N$ such that \begin{align*} z_{k_n} \rightarrow z_\infty ,~~ \tilde{z}_{k_n} \rightarrow \tilde{z}_\infty ,~~ \text{as}~~ n \rightarrow \infty. \end{align*} \begin{figure} \caption{Points $x_k, y_k, p_k$ and interfaces $\Gamma_{t_k}, \Gamma_{t_k}^{\varepsilon_k}$ inside the box $B_k$.} \caption{Points $z_\infty$ and $\tilde{z}_\infty$ and hyperplanes $z^{(N)} = z^*, z^{(N)} = z^{(N)}_\infty$.} \caption{In (a) the distance between $\Gamma_{t_k}$ and $\Gamma_{t_k}^{\varepsilon_k}$ is of $\mathcal{O}(\varepsilon_k)$. In (b), since we rescale space by $\varepsilon^{-1}$, the distance between two hyperplanes is of $\mathcal{O}(1)$.} \label{fig1} \end{figure} Since the normal vector to $\Gamma_{t_k}$ at $p_k$ is parallel to $x_k - p_k$, and the mapping $\mathcal{R}_k^{-1}$ sends the unit normal vector to $\Gamma_{t_k}$ at $p_k$ to the vector $(0, \cdots, 0, 1) \in {\mathbb{ R}}^N$, we conclude $z_\infty$ must lie on the $z^{(N)}$ axis so that we can write \begin{align*} z_\infty = (0, \cdots, 0, z^{(N)}_\infty). \end{align*} Since, by \eqref{eqn_asymproof_7}, \begin{align*} w(z_\infty, 0) = \lim_{n \rightarrow \infty} w^{k_n}(z_{k_n}, 0) = \lim_{n \rightarrow \infty} u^{\varepsilon_{k_n}}(x_{k_n}, t_{k_n}) \geq \alpha, \end{align*} we deduce from \eqref{eqn_asymproof_5} and the fact that $U_0$ is an increasing function that \begin{align*} w(z_\infty, 0) = U_0(z^{(N)}_{\infty} - z^*) \geq \alpha \Rightarrow z^{(N)}_{\infty} \geq z^*. \end{align*} From the definition of $y_{k_n}$ and \eqref{eqn_asymproof_10}, we have \begin{align}\label{eqn_asymproof_6} w(\tilde{z}_\infty, 0) = \lim_{n \rightarrow \infty} w^{k_n}(\tilde{z}_{k_n}, 0) = \lim_{n \rightarrow \infty} u^{\varepsilon_{k_n}}(y_{k_n}, t_{k_n}) = \alpha.
\end{align} Next, we show that \begin{align}\label{eqn_asymproof_8} w(z,0) \geq \alpha~ \text{if} ~ \Vert z - z_\infty \Vert \leq \Vert \tilde{z}_\infty - z_\infty \Vert. \end{align} Choose $z \in {\mathbb{ R}}^N$ satisfying $\Vert z - z_\infty \Vert \leq \Vert \tilde{z}_\infty - z_\infty \Vert$ and a sequence $a_{k_n} \in {\mathbb{ R}}^+$ such that $a_{k_n} \rightarrow \Vert z - z_\infty \Vert$ as $n \rightarrow \infty$ and $\varepsilon_{k_n} a_{k_n} \leq \Vert x_{k_n} - y_{k_n} \Vert $ for all $n$. Then, we define sequences $n_{k_n}$ and $b_{k_n}$ by \begin{align*} n_{k_n} = \dfrac{z - z_{k_n}}{\Vert z - z_{k_n} \Vert} ,~ b_{k_n} = a_{k_n} n_{k_n} + z_{k_n}. \end{align*} Note that $b_{k_n} \rightarrow z$ as $n \rightarrow \infty$. Then, by \eqref{eqn_asymproof_11}, we obtain \begin{align*} w(z,0) &= \lim_{n \rightarrow \infty} w^{k_n}(b_{k_n}, 0)= \lim_{n \rightarrow \infty} u^{\varepsilon_{k_n}} ( p_{k_n} + \varepsilon_{k_n} \mathcal{R}_{k_n}(a_{k_n} n_{k_n} + z_{k_n}),t_{k_n}) \\ & = \lim_{n \rightarrow \infty} u^{\varepsilon_{k_n}}(\varepsilon_{k_n} a_{k_n} \mathcal{R}_{k_n} n_{k_n} + x_{k_n}, t_{k_n}) \geq \alpha, \end{align*} where the last inequality holds by \eqref{eqn_asymproof_3}. Note that \eqref{eqn_asymproof_5} implies $\{ w = \alpha \} = \{(z,\tau) \in {\mathbb{ R}}^N \times {\mathbb{ R}}, z^{(N)} = z^* \}$. Thus, we have either $z_\infty = \tilde{z}_\infty$ or, in view of \eqref{eqn_asymproof_5}, \eqref{eqn_asymproof_6} and \eqref{eqn_asymproof_8}, that the ball of radius $\Vert \tilde{z}_\infty - z_\infty \Vert$ centered at $z_\infty$ is tangent to the hyperplane $z^{(N)} = z^*$ at $\tilde{z}_\infty$. Hence, $\tilde{z}_\infty$ is a point on the $z^{(N)}$ axis. With this observation and \eqref{eqn_asymproof_5}, we have \begin{align*} \tilde{z}_\infty = (0, \cdots, 0, z^*).
\end{align*} This last property implies \begin{align}\label{eqn_asymproof_12} \dfrac{d^{\varepsilon_{k_n}}(x_{k_n},t_{k_n})}{\varepsilon_{k_n}} = \dfrac{\Vert x_{k_n} - y_{k_n} \Vert}{\varepsilon_{k_n}} = \Vert \mathcal{R}_{k_n} \left( z_{k_n} - \tilde{z}_{k_n} \right) \Vert = \Vert z_{k_n} - \tilde{z}_{k_n} \Vert \rightarrow \Vert z_\infty - \tilde{z}_\infty \Vert = z^{(N)}_\infty - z^*. \end{align} We have therefore reached a contradiction: by \eqref{eqn_asymproof_2}, \eqref{eqn_asymproof_5} and \eqref{eqn_asymproof_12}, \begin{align*} 0 &= \vert w(z_\infty, 0) - U_0 (z^{(N)}_\infty - z^*) \vert \\ &= \left\vert \lim_{n \rightarrow \infty} \left[ w^{k_n}(z_{k_n}, 0) - U_0 \left( \dfrac{d^{\varepsilon_{k_n}}(x_{k_n},t_{k_n})}{\varepsilon_{k_n}} \right) \right] \right\vert \\ &= \left\vert \lim_{n \rightarrow \infty} \left[ u^{\varepsilon_{k_n}}(x_{k_n},t_{k_n}) - U_0 \left( \dfrac{d^{\varepsilon_{k_n}}(x_{k_n},t_{k_n})}{\varepsilon_{k_n}} \right) \right] \right\vert, \end{align*} contradicting \eqref{eqn_asymproof_1}. For the proof of $(ii)$, we use the same method as in \cite{MH2012}. \qed \section*{Appendix: Mobility and surface tension} Mobility is defined as the linear response of the speed of a traveling wave to an external force. More precisely, motivated by (4.1) and (4.2) in \cite{Spohn1993}, let us consider the nonlinear Allen-Cahn equation with external force $\delta$ on ${\mathbb{ R}}$ for small enough $|\delta|$: \begin{equation} \label{eq:AC-delta} u_t = \varphi(u)_{zz} +f(u)+\delta, \quad z\in {\mathbb{ R}}, \end{equation} and the corresponding traveling wave solution $U=U_\delta(z)$ with speed $c(\delta)$: \begin{align} \label{eq:TW-delta} & \varphi(U_\delta)_{zz} +c(\delta) U_{\delta z}+f(U_\delta)+\delta=0, ~~ z\in {\mathbb{ R}}, \\ & U_\delta(\pm\infty)= \alpha_{\pm, \delta}, \notag \end{align} where $\alpha_{\pm,\delta}$ are two stable solutions of $f(u)+\delta=0$.
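Before turning to the formulas, a quick numerical sanity check may be helpful. In the simplest linear case $\varphi(u) = u$ with the classical bistable nonlinearity $f(u) = u - u^3$ (an illustrative assumption, not the general setting of this paper), one has $W(u) = \frac14 (1 - u^2)^2$, $\alpha_\pm = \pm 1$ and $U_0(z) = \tanh(z/\sqrt{2})$, and the identity $\int_{\alpha_-}^{\alpha_+} \sqrt{2 W(u)}\, du = \int_{{\mathbb{ R}}} U_{0z}^2\, dz$ used in the derivation below can be checked directly; both sides equal $2\sqrt{2}/3$.

```python
import math

# Hedged sketch: check, in the linear case phi(u) = u with f(u) = u - u^3
# (assumed model data), the identity
#   int_{-1}^{1} sqrt(2 W(u)) du  =  int_R U_0'(z)^2 dz,
# where W(u) = (1 - u^2)^2 / 4 and U_0(z) = tanh(z / sqrt(2)).

def trapz(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

W = lambda u: 0.25 * (1.0 - u * u) ** 2

# Left-hand side: potential integral over (alpha_-, alpha_+) = (-1, 1).
lhs = trapz(lambda u: math.sqrt(2.0 * W(u)), -1.0, 1.0, 100000)

# Right-hand side: kinetic energy of the standing wave U_0.
U0z = lambda z: (1.0 - math.tanh(z / math.sqrt(2.0)) ** 2) / math.sqrt(2.0)
rhs = trapz(lambda z: U0z(z) ** 2, -30.0, 30.0, 100000)

exact = 2.0 * math.sqrt(2.0) / 3.0   # closed-form value 2*sqrt(2)/3
print(lhs, rhs, exact)
```

Both quadratures agree with the closed-form value $2\sqrt{2}/3 \approx 0.9428$ up to the discretization error, which is the identity behind replacing $\int_{{\mathbb{ R}}} U_{\delta z}\varphi(U_\delta)_z\,dz$ by $\int_{\alpha_-}^{\alpha_+}\sqrt{2W(u)}\,du$ below.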
Then, we define the mobility by $$ \mu_{AC}:=-\frac{c'(0)}{ \alpha_+- \alpha_-}, $$ with a normalization factor $\alpha_+-\alpha_-$ as in \cite{Spohn1993}; compare (4.6) and (4.7) in \cite{Spohn1993}, noting that the boundary conditions at $\pm\infty$ are switched, which accounts for the negative sign in the definition of $\mu_{AC}$. To derive a formula for $\mu_{AC}$, we multiply \eqref{eq:TW-delta} by $\varphi(U_\delta)_z$ and integrate over ${\mathbb{ R}}$ to obtain \begin{align} \label{eq:94} c(\delta) \int_{\mathbb{ R}} U_{\delta z} \varphi(U_\delta)_z dz + \delta (\varphi(\alpha_+)-\varphi(\alpha_-)) =O(\delta^2), \end{align} by noting that \begin{align*} & \int_{\mathbb{ R}} \varphi(U_{\delta})_{zz} \varphi(U_\delta)_z dz = \frac12 \int_{\mathbb{ R}} \big\{\big( \varphi(U_{\delta})_{z} \big)^2 \big\}_z dz =0, \\ & \int_{\mathbb{ R}} \varphi(U_{\delta})_{z} dz = \varphi(\alpha_{+,\delta})-\varphi(\alpha_{-,\delta}) = \varphi(\alpha_+)-\varphi(\alpha_-)+O(\delta), \\ & \int_{\mathbb{ R}} f(U_{\delta}) \varphi(U_\delta)_z dz = \int_{\mathbb{ R}} f(U_{\delta}) \varphi'(U_\delta) U_{\delta z} dz = - \int_{\alpha_{-,\delta}}^{\alpha_{+,\delta}} W'(u)du = O(\delta^2). \end{align*} The last line follows by the change of variable $u=U_\delta(z)$, the identity $W'(u) = -f(u)\varphi'(u)$ (recall \eqref{eq:28}), $\int_{\alpha_{-}}^{\alpha_{+}} W'(u)du=0$, and the facts that $W'(\alpha_\pm)=0$ and $W'\in C^1$. On the other hand, since one can at least formally expect $U_\delta = U_0+O(\delta)$ (recall \eqref{eqn_AsymptExp_U0} for $U_0$), by \eqref{intrinsic}, \begin{align*} \int_{\mathbb{ R}} U_{\delta z} \varphi(U_\delta)_z dz & = \int_{\mathbb{ R}} U_{0 z} \varphi(U_0)_z dz + O(\delta) \\ & = \int_{\mathbb{ R}} U_{0 z} \sqrt{2W(U_0(z))} dz + O(\delta) \\ & = \int_{\alpha_-}^{\alpha_+} \sqrt{2W(u)} du + O(\delta), \end{align*} by the change of variable $u=U_0(z)$ again. This, combined with \eqref{eq:94}, leads to \begin{align*} c'(0) = - \frac{\varphi(\alpha_+)-\varphi(\alpha_-)} {\int_{\alpha_-}^{\alpha_+} \sqrt{2W(u)} du}.
\end{align*} Thus, the mobility is given by the formula \begin{align} \label{mobility} \mu_{AC} = \frac{ \varphi_\pm^*}{\int_{\alpha_-}^{\alpha_+} \sqrt{2W(u)} du} = \frac{ \varphi_\pm^*}{\int_{\mathbb{ R}} \varphi'(U_0) U_{0z}^2(z)dz}, \end{align} where $$ \varphi_\pm^* = \frac{\varphi(\alpha_+)-\varphi(\alpha_-)}{\alpha_+ - \alpha_-}. $$ On the other hand, surface tension is defined as the difference between the energy of the microscopic transition profile from $\alpha_-$ to $\alpha_+$ in the normal direction and that of the constant profile $\alpha_-$ or $\alpha_+$. More precisely, define the energy of a profile $u= \{u(z)\}_{z\in {\mathbb{ R}}}$ by $$ \mathcal{E}(u) = \int_{\mathbb{ R}} \Big\{ \frac12\big(\varphi(u)_z\big)^2 +W(u)\Big\} dz. $$ Recall that the potential $W$ is defined by \eqref{eq:28}, and $W\ge 0$ and $W(\alpha_\pm)=0$ hold. In particular, $W$ is normalized as $\min_{u\in {\mathbb{ R}}}W(u)=0$ so that $\min_{u =u(\cdot)}\mathcal{E}(u)=0$. Then, the surface tension is defined as $$ \sigma_{AC} := \frac1{\varphi_\pm^*} \min_{u: u(\pm \infty)=\alpha_\pm} \mathcal{E}(u), $$ by normalizing the energy by $\varphi_\pm^*$. We observe that $\mathcal{E}$ is defined through $\varphi$. Note that the nonlinear Allen-Cahn equation, that is \eqref{eq:AC-delta} with $\delta=0$, is a distorted gradient flow associated with $\mathcal{E}(u)$: $$ u_t = - \frac{\delta \mathcal{E}(u)}{\delta\varphi(u)}, \quad z \in {\mathbb{ R}}, $$ where the right hand side is defined as a functional derivative of $\mathcal{E}(u)$ in $\varphi(u)$, which is given by $$ \frac{\delta \mathcal{E}(u)}{\delta\varphi(u)} = - \varphi(u)_{zz} -f(u(z)).
$$ Indeed, to see the second term $-f(u(z))$, setting $v=\varphi(u)$, one can rewrite $W(u) = W(\varphi^{-1}(v))$ as a function of $v$ so that \begin{align*} \big(W(\varphi^{-1}(v))\big)' & = W'(\varphi^{-1}(v)) \big(\varphi^{-1}(v)\big)' \\ & = - f(\varphi^{-1}(v)) \varphi'(\varphi^{-1}(v)) \frac1{\varphi'(\varphi^{-1}(v))} \\ & = - f(\varphi^{-1}(v)) = -f(u). \end{align*} We call the flow ``distorted'', since the functional derivative is taken in $\varphi(u)$ and not in $u$. One can rephrase this in terms of the change of variables $v(z) = \varphi(u(z))$. Indeed, we have $\mathcal{E}(u)= \widetilde{\mathcal{E}}(v)$ under this change, where $$ \widetilde{\mathcal{E}}(v) = \int_{\mathbb{ R}} \Big\{ \frac12 v_z^2 + W(\varphi^{-1}(v))\Big\}dz, $$ and $$ \frac{\delta \widetilde{\mathcal{E}}}{\delta v} = -v_{zz} - f(\varphi^{-1}(v)) = -v_{zz} -g(v). $$ Therefore, in the variable $v(z)$, the nonlinear Allen-Cahn equation can be rewritten as \begin{align*} v_t = \varphi'(u) u_t = - \varphi'(\varphi^{-1}(v)) \cdot \frac{\delta \widetilde{\mathcal{E}}}{\delta v} = \varphi'(\varphi^{-1}(v)) \big\{ v_{zz} +g(v)\big\}. \end{align*} This type of distorted equation for $v$ is sometimes called an Onsager equation; see \cite{Mi}. Now we come back to the computation of the surface tension $\sigma_{AC}$. In fact, it is given by \begin{align} \label{eq:ST} \sigma_{AC}= \frac1{\varphi_\pm^*} \int_{\mathbb{ R}} V_{0z}^2 dz = \frac1{\varphi_\pm^*} \int_{\alpha_-}^{\alpha_+} \varphi'(u) \sqrt{2W(u)} du, \end{align} where $V_0=\varphi(U_0)$ satisfies \eqref{eqn_AsymptExp_V0}. Indeed, the second equality follows from \eqref{eq:29}.
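The second equality in \eqref{eq:ST} can also be checked numerically for a genuinely nonlinear $\varphi$. The sketch below uses the illustrative choices $\varphi(u) = u + u^3/3$ and $f(u) = u - u^3$ (assumed example data, not taken from the paper), for which $W'(u) = -f(u)\varphi'(u)$ gives $W(u) = u^6/6 - u^2/2 + 1/3$ and $\alpha_\pm = \pm 1$. It integrates the first-order equation $U_{0z} = \sqrt{2W(U_0)}/\varphi'(U_0)$ coming from \eqref{intrinsic} and compares $\int_{{\mathbb{ R}}} V_{0z}^2\, dz = \int_{{\mathbb{ R}}} 2W(U_0(z))\, dz$ with $\int_{\alpha_-}^{\alpha_+} \varphi'(u)\sqrt{2W(u)}\, du$.

```python
import math

# Hedged numerical sketch (assumed example data phi(u) = u + u^3/3,
# f(u) = u - u^3): verify the second equality of the surface-tension formula
#   int_R V_0'(z)^2 dz  =  int_{-1}^{1} phi'(u) sqrt(2 W(u)) du,
# where W(u) = u^6/6 - u^2/2 + 1/3 solves W' = -f * phi' with W(+-1) = 0.

def W(u):
    return u**6 / 6.0 - u**2 / 2.0 + 1.0 / 3.0

def phi_prime(u):
    return 1.0 + u * u

def trapz(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# Right-hand side: direct quadrature in u over (alpha_-, alpha_+) = (-1, 1).
rhs = trapz(lambda u: phi_prime(u) * math.sqrt(max(2.0 * W(u), 0.0)),
            -1.0, 1.0, 200000)

# Left-hand side: integrate U_0' = sqrt(2 W(U_0)) / phi'(U_0), U_0(0) = 0,
# by RK4 on [0, 30]; U_0 is odd, so int_R 2 W(U_0) dz = 2 int_0^inf.
def slope(u):
    return math.sqrt(max(2.0 * W(u), 0.0)) / phi_prime(u)

h, nsteps = 1e-3, 30000
u0, vals = 0.0, []
for _ in range(nsteps + 1):
    vals.append(2.0 * W(u0))          # integrand V_0'(z)^2 = 2 W(U_0(z))
    k1 = slope(u0)
    k2 = slope(u0 + 0.5 * h * k1)
    k3 = slope(u0 + 0.5 * h * k2)
    k4 = slope(u0 + h * k3)
    u0 += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
lhs = 2.0 * (sum(vals) - 0.5 * vals[0] - 0.5 * vals[-1]) * h
print(lhs, rhs)
```

The two values agree to within the discretization error, illustrating that \eqref{eq:ST} is a change of variables $v = \varphi(u)$ along the standing wave.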
To see the first equality, by definition, \begin{align*} \sigma_{AC} = \frac1{\varphi_\pm^*} \min_{u: u(\pm \infty)=\alpha_\pm} \mathcal{E}(u) = \frac1{\varphi_\pm^*} \min_{v: v(\pm \infty)= \varphi(\alpha_\pm)} \widetilde{\mathcal{E}}(v) \end{align*} and the minimizers of $\widetilde{\mathcal{E}}$ under the condition $v(\pm \infty)= \varphi(\alpha_\pm)$ are given by $V_0$ and its spatial shifts. Thus, $$ \sigma_{AC} = \frac1{\varphi_\pm^*} \widetilde{\mathcal{E}}(V_0) = \frac1{\varphi_\pm^*} \int_{\mathbb{ R}} \Big\{\frac12 V_{0z}^2 + W(\varphi^{-1}(V_0)) \Big\} dz. $$ However, since $V_{0z}= \sqrt{2W(U_0(z))}$ by \eqref{intrinsic}, we have $\int_{\mathbb{ R}} W(\varphi^{-1}(V_0)) dz = \int_{\mathbb{ R}} \frac12 V_{0z}^2 dz$. In particular, this implies the first equality of \eqref{eq:ST}. By \eqref{second lambda_0} combined with \eqref{mobility} and \eqref{eq:ST}, we see that $\lambda_0= \mu_{AC}\sigma_{AC}$. \begin{rmk} The linear case $\varphi(u) = \frak{K} u$ is discussed by Spohn \cite{Spohn1993}, in which $\frak{K}$ is denoted by $\kappa$. In this case, since $\varphi'=\frak{K}$ and $\varphi_\pm^*=\frak{K}$, by \eqref{mobility} and \eqref{eq:ST}, we have $\mu_{AC} = \big[\int_{\mathbb{ R}} U_{0z}^2 dz\big]^{-1}$ and $\sigma_{AC} = \frak{K}\int_{\mathbb{ R}} U_{0z}^2 dz$. These formulas coincide with (4.7) and (4.8) in \cite{Spohn1993} by noting that $U_0$ is the same as $w$ in \cite{Spohn1993} in the linear case except that the direction is switched due to the choice of the boundary conditions. \end{rmk} \end{document}
\begin{document} \title{A classical analog for the electron spin state} \author{K.B. Wharton, R.A. Linck and C.H. Salazar-Lazaro} \affiliation{San Jos\'e State University, Department of Physics and Astronomy, San Jos\'e, CA 95192-0106} \email{[email protected]} \date{\today} \begin{abstract} Despite conventional wisdom that spin-1/2 systems have no classical analog, we introduce a set of classical coupled oscillators with solutions that exactly map onto the dynamics of an unmeasured electron spin state in an arbitrary, time-varying, magnetic field. While not addressing the quantum measurement problem (discrete outcomes and their associated probabilities), this new classical analog yields a classical, physical interpretation of Zeeman splitting, geometric phase, the electron's doubled gyromagnetic ratio, and other quantum phenomena. This Lagrangian-based model can be used to clarify the division between classical and quantum systems, and might also serve as a guidepost for certain approaches to quantum foundations. \end{abstract} \maketitle \section{Introduction} Despite the conventional view of quantum spin as being an inherently non-classical phenomenon\cite{LL}, there is a rich history of exploring classical analogs for spin-1/2 systems in particular. For example, there exists a well-developed classical analog to a two-level quantum system, based upon the classical polarization (CP) of a plane electromagnetic wave\cite{Mcmaster,HnS,Klyshko,Malykin,Zap}. Although this CP-analog has been used to motivate introductory quantum mechanics texts\cite{Baym, Sakurai}, the power and depth of the analogy is not widely appreciated. For example, the CP-analog contains a straightforward classical picture for a $\pi$ geometric phase shift resulting from a full $2\pi$ rotation of the spin angular momentum, but this fact is rarely given more than a casual mention (with at least one notable exception\cite{Klyshko}). 
Still, the CP-analog contains certain drawbacks, especially when the analogy is applied to an electron spin state in an arbitrary, time-varying magnetic field. These drawbacks, along with complications involving quantum measurement outcomes, have prevented a general agreement on exactly which aspects of quantum spin are inherently non-classical. In this paper, we extend the CP-analog to a system of four coupled oscillators, and prove that this classical system exactly reproduces the quantum dynamics of an unmeasured electron spin state in an arbitrary magnetic field. This result demonstrates, by explicit construction, that if there are any aspects of an electron spin state that cannot be described in a classical context, those aspects must lie entirely in the domain of quantum measurement theory, not the dynamics. In order to accomplish this feat, it turns out there must necessarily be a many-to-one map from the classical system to the quantum state. In other words, the classical system contains a natural set of ``hidden variables'', accessible to the classical analog, but hidden to a complete specification of the quantum state. Some might argue that no classical analog is needed to discuss quantum spin dynamics because an unmeasured quantum state governed by the Schr\"odinger-Pauli Equation (SPE) could be interpreted as a set of classical quantities coupled by first-order differential equations. One can even analyze the classical Dirac field and deduce quantities which map nicely onto quantum spin concepts \cite{Ohanian}. But simply reinterpreting quantum wave equations as classical fields is not a very enlightening ``analog'', especially if the spin state is considered separately from the spatial state. For example, the use of complex numbers in these equations is significantly different from how they are used to encode phases in classical physics, and therefore has no obvious classical interpretation. 
And if a system of first-order differential equations cannot be directly transformed into a set of second-order differential equations, it is unclear how certain classical physics concepts (\textit{e.g.} generalized forces) can be applied. As we will demonstrate below, the full SPE \textit{can} be expanded to a system of second-order equations, but only by adding additional ``hidden variables'' along with new overall constraints. The classical analog presented here arrives at this result from a different direction, starting with a simple Lagrangian. Apart from clarifying how quantum spin might best be presented to students, the question of which aspects of quantum theory are truly ``non-classical'' is of deep importance for framing our understanding of quantum foundations. For example, Spekkens has recently demonstrated a simple classical toy theory that very roughly maps onto two-level quantum systems, showing several examples of purportedly-quantum phenomena that have a strong classical analog\cite{Spekkens}. Still, neither Spekkens nor other prominent classical-hidden-variable approaches to two-level quantum systems\cite{Bell, KS} have concerned themselves with classical analogies to the curious \textit{dynamics} of such systems. Our result demonstrates that starting with the dynamics can naturally motivate particular foundational approaches, such as a natural hidden variable space on which classical analogies to quantum measurement theory might be pursued. And because this classical analog derives from a simple Lagrangian, it is potentially a useful test bed for approaches where the action governs the transition probabilities, as in quantum field theory. The plan for the paper is as follows: After summarizing the CP-analog in Section II, a related two-oscillator analog (similar to a Foucault Pendulum) is presented in Section III. 
This two-oscillator analog is shown to be identical to a quantum spin state in a one-dimensional magnetic field; a three-dimensional field requires an extension to four oscillators, as shown in Section IV. The final section discusses and summarizes these results -- the most notable of which is that a classical system can encompass all the dynamics of a quantum spin-1/2 state. \section{The Classical Polarization Analog} For a classical plane electromagnetic (EM) wave moving in the z-direction with frequency $\omega_0$, the transverse electric fields $E_x(t)$ and $E_y(t)$ in the $z=0$ plane can always be expressed in two-vector notation as the real part of \begin{equation} \label{eq:cp} \spi{E_x}{E_y}= \spi{a}{b} e^{i \omega_0 t}. \end{equation} Here $a$ and $b$ are complex coefficients, encoding the amplitude and phase of two orthogonal polarizations. A strong analogy can be drawn between the two-vector $(a,b)$ on the right side of (\ref{eq:cp}) -- the well-known ``Jones vector" -- and the spinor $\ket{\chi}$ that defines a spin-1/2 state in quantum mechanics. The quantum normalization condition $<\!\!\chi |\chi \!\!>=1$ maps to a normalization of the energy density of the EM wave, and the global phase transformation $\ket{\chi} \to \ket{\chi} \, exp(i \theta)$ is analogous to changing the EM wave's phase. This equivalence between a spinor and a Jones vector can be made more explicit by projecting them both onto the surface of a unit sphere in an abstract space (the ``Bloch sphere'' and the ``Poincar\'e sphere'' respectively). Each spinor/Jones vector maps to a unit vector in the angular direction $(\theta, \phi)$, according to the usual convention $a=cos(\theta/2)$, $b=sin(\theta/2)exp(i\phi)$. 
This is more familiarly described in terms of the sphere's six intersections with a centered cartesian axis \begin{align} \label{eq:defs} \ket{z_+} &= \spi{1}{0} & \ket{z_-} &= \spi{0}{1} \notag \\ \ket {x_+} &= \frac{1}{\sqrt{2}} \spi{1}{1} & \ket{x_-} &= \frac{1}{\sqrt{2}} \spi{1}{-1} \\ \ket{y_+} &= \frac{1}{\sqrt{2}} \spi{1}{i} & \ket{y_-} &= \frac{1}{\sqrt{2}} \spi{1}{-i}. \notag \end{align} The CP-analog therefore maps linear x-polarized light to a spin-up electron $\ket{z_+}$ and linear y-polarized light to a spin-down electron $\ket{z_-}$. Electrons with measured spins $\ket{x_\pm}$ correspond to xy-diagonal linear polarizations, while $\ket{y_\pm}$ correspond to the two circular-polarization modes. In this framework, note that quantum superpositions are no different than ordinary EM superpositions. The analogy extends further, but this is already sufficient background to classically motivate some of the strange features of spin-1/2 systems. Consider the rotation of a Jones vector around the equator of a Poincar\'e sphere, corresponding to a continuous rotation of the direction of linear polarization -- from horizontal, through vertical, and back to the original horizontal state. Any transformation that leads to this rotation (say, a physical rotation of the wave medium) will then be analogous to a magnetic-field induced precession of a spin state around the corresponding equator of the Bloch sphere. The key point is that the above-described $2\pi$ rotation around the Poincar\'e sphere merely corresponds to a $\pi$ rotation of the EM polarization in physical space. And this is equivalent to a $\pi$ phase shift in the resulting wave; it would now interfere destructively with an identical unrotated wave. Of course, this is also the observed effect for a $2\pi$ rotation of a quantum spin state around the Bloch sphere, although in the latter case the net geometric phase shift is generally thought to be inexplicable from a classical perspective. 
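This double covering can be made concrete in a few lines; the following sketch is our construction, using the standard Jones-vector and Stokes-parameter conventions rather than notation from this paper:

```python
import numpy as np

# Sketch (our construction; standard Jones/Stokes conventions): linear
# polarization at physical angle alpha, and its image on the Poincare sphere.
def jones(alpha):
    return np.array([np.cos(alpha), np.sin(alpha)], dtype=complex)

def stokes(j):
    a, b = j
    return np.array([abs(a) ** 2 - abs(b) ** 2,
                     2 * (a.conjugate() * b).real,
                     2 * (a.conjugate() * b).imag])

# Rotating the physical polarization by pi/2 (x -> y) moves the Poincare
# point to its antipode, so a pi physical rotation traces a full 2*pi loop:
assert np.allclose(stokes(jones(np.pi / 2)), -stokes(jones(0.0)))

# After the pi physical rotation the polarization state (Stokes vector) has
# returned, but the Jones vector carries the geometric pi phase shift:
assert np.allclose(stokes(jones(np.pi)), stokes(jones(0.0)))
assert np.allclose(jones(np.pi), -jones(0.0))
```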
What the CP-analog accomplishes is to demonstrate that such behavior does indeed have a straightforward classical interpretation, because the geometrical phase of the spin state is directly analogous to the overall phase of the physical EM wave \cite{Klyshko}. The key is that the Poincar\'e sphere does not map to physical space, so a $2\pi$ rotation need not return the EM wave to its original state. The CP-analog therefore advocates the viewpoint that the Bloch sphere should not map to physical space, even for an electron spin state. This viewpoint will be implemented below in a fully consistent fashion. To our knowledge, it has not been explicitly noted that this classical analogy naturally motivates an apparently-doubled gyromagnetic ratio for the electron. In the above-described Poincar\'e sphere rotation, as the EM wave is being rotated around its propagation axis, suppose an observer had reference to another system (say, a gyroscope) that truly recorded rotation in physical space. As compared to the gyroscope, the Jones vector would seem to complete a full rotation in half the time. If one interpreted the Poincar\'e sphere as corresponding to real space, the natural conclusion would be that the Jones vector was coupled to the physical rotation at double its ``classical'' value. Misinterpreting the Bloch sphere as corresponding to physical space would therefore lead to exactly the same conclusion for the electron's gyromagnetic ratio. The classical polarization analog can be pursued much further than is summarized here, mapping the quantum dynamics induced by a magnetic field to the effects of different birefringent materials \cite{Klyshko,Malykin,Zap,Baym,Kubo}. The two EM modes in such materials then map to the two energy eigenstates, and generic rotations around the Poincar\'e sphere can be given a physical implementation. 
Still, this analog becomes quite abstract; there is no easy-to-describe vector quantity of a birefringent material that corresponds to the magnetic field, and the situation is even more convoluted for time-dependent field analogs. Another disanalogy is the relation between the magnitude of the Zeeman energy splitting and the difference in wavenumber of the two EM modes. A more natural analogy would relate energy to a classical frequency, but the two EM modes always have identical frequencies. And of course, an electromagnetic plane wave cannot be pictured as a confined system with internal spin-like properties. In the next sections, we develop a novel classical analog that alleviates all of these problems. \section{A Foucault-Pendulum-Like Analog} The central success of the CP-analog stems from its use of two physical oscillators, which need not be electromagnetic. For any two uncoupled classical oscillators with the same natural frequency $\omega_0$, their solution can also be encoded by two complex numbers $a$ and $b$, representing the amplitude and phase of each oscillator. Therefore the use of Jones vectors and the Poincar\'e sphere does not only pertain to EM waves. As an intermediate step towards our proposed classical analog for an electron spin state, consider this classical Lagrangian: \begin{eqnarray} L_1(x_1,x_2,\dot{x}_1,\dot{x}_2,t)=\frac{1}{2}\left[ p_1^2+p_2^2 - \omega_0^2(x_1^2+x_2^2)\right], \label{eq:L1}\\ \spi{p_1}{p_2} \equiv \spi{\dot{x_1}}{\dot{x_2}} + \left[ \begin{array}{cc} 0 & -\beta \\ \beta & 0 \end{array} \right] \spi{x_1}{x_2}. \label{eq:p1} \end{eqnarray} Here the $x's$ are all purely real quantities, and $\beta$ is some coupling constant that may be time-dependent. (As $\beta\to 0$, this becomes two uncoupled classical oscillators). Equation (\ref{eq:p1}) can be rewritten as $\bm{p}=\dot{\bm{x}}+\bm{B} \bm{x}$, where the conjugate momenta $p_1$ and $p_2$ form the column vector $\bm{p}$, etc. 
Note that squaring the $\bm{B}$ matrix yields $\bm{B}^2=-\beta^2 \bm{I}$. In this notation, $L_1=(\bm{p\cdot p}-\omega_0^2 \,\bm{x\cdot x})/2$. First, consider the case of a constant $\beta$. The Euler-Lagrange equations of motion for $L_1$ are then \begin{eqnarray} \ddot{x_1}+(\omega_0^2-\beta^2) x_1 = 2\beta \dot{x}_2, \nonumber \\ \ddot{x_2}+(\omega_0^2-\beta^2) x_2 = -2\beta \dot{x}_1. \label{eq:2d1} \end{eqnarray} These equations happen to describe the projection of a Foucault pendulum into a horizontal plane (with orthogonal axes $x_1$ and $x_2$) in the small-angle limit. Specifically, $\beta=\Omega sin(\phi)$, where $\Omega$ is the rotation frequency of the Earth and $\phi$ is the latitude of the pendulum. (The natural frequency of such a pendulum is actually $\sqrt{\omega_0^2-\beta^2}$, because of a $\beta^2$ term in $L_1$ that does not appear in the Foucault pendulum Lagrangian, but for a constant $\beta$ this is just a renormalization of $\omega_0$). The precession of the Foucault pendulum therefore provides a qualitative way to understand the effect of a constant $\beta$ on the unnormalized Jones vector $\bm{x}$. Given a non-zero $\beta$, it is well-known that linear oscillation in $x_1$ (mapping to $\ket{z_+}$ on the Poincar\'e sphere) precesses into a linear oscillation in $x_2$ (mapping to $\ket{z_-}$) and then back to $x_1$ ($\ket{z_+}$). But this $2\pi$ rotation of $\bm{x}$ around the Poincar\'e sphere merely corresponds to a $\pi$ rotation of the pendulum's oscillation axis in physical space, leaving the overall phase of the pendulum shifted by $\pi$, exactly as was described for the CP-analog. Quantitatively, solutions to (\ref{eq:2d1}) are of the form $exp(-i\omega_\pm t)$, where $\omega_\pm =\omega_0 \pm \beta$. The generic solution can always be expressed as the real component of \begin{equation} \label{eq:2ds} \spi{x_1}{x_2}= a\spi{1}{i} e^{-i (\omega_0- \beta ) t} + b\spi{1}{-i} e^{-i (\omega_0 + \beta )t}. 
\end{equation} Here $a$ and $b$ are arbitrary complex parameters (although again note that $x_1$ and $x_2$ are purely real). One notable feature of this result is that the coupling constant $\beta$ has the effect of producing two solutions with well-defined frequencies equally spaced above and below the natural frequency -- just like Zeeman splitting of an electron's energy levels in a magnetic field. Furthermore, the modes that correspond to these two pure frequencies happen to be right- and left-hand circular motion of the pendulum, directly analogous to $\ket{y_+}$ and $\ket{y_-}$. A comparison of (\ref{eq:2ds}) with standard results from quantum mechanics reveals that $\beta$ produces exactly the same dynamics on $\bm{x}$ as does a constant magnetic field in the $y$ direction on an electron spin state (apart from an overall $exp(-i\omega_0 t)$ global phase). Given the strong analogy between a constant $\beta$ and a constant (one-component) magnetic field, one can ask whether this correspondence continues to hold for a time-varying $\beta$. In this case the strict analogy with the Foucault pendulum fails (thanks to the $\beta^2$ terms in $L_1$) and comparing the exact solutions becomes quite difficult. But starting from the Euler-Lagrange equations for a time-varying $\beta$, \begin{eqnarray} \ddot{x_1}+(\omega_0^2-\beta^2) x_1 &=& 2\beta \dot{x}_2+\dot{\beta} x_2, \nonumber \\ \ddot{x_2}+(\omega_0^2-\beta^2) x_2 &=& -2\beta \dot{x}_1-\dot{\beta} x_1, \label{eq:2d2} \end{eqnarray} one can compare them directly to the relevant Schr\"odinger-Pauli Equation (SPE). Using a $y$-directed magnetic field $B_y=2\beta/\gamma$ (where $\gamma=-e/m$ is the gyromagnetic ratio) and an overall phase oscillation corresponding to a rest mass $mc^2=\hbar\omega_0$, this yields \begin{equation} \label{eq:sey} i\hbar \frac{\partial}{\partial t} \spi{a}{b} = \hbar \left[ \begin{array}{cc} \omega_0 & i\beta \\ -i\beta & \omega_0 \end{array} \right] \spi{a}{b}. 
\end{equation} Taking an additional time-derivative of (\ref{eq:sey}), and simplifying the result using (\ref{eq:sey}) itself, it is possible to derive the following second-order differential equations: \begin{eqnarray} \ddot{a}+(\omega_0^2-\beta^2) a &=& 2\beta \dot{b}+\dot{\beta} b, \nonumber \\ \ddot{b}+(\omega_0^2-\beta^2) b &=& -2\beta \dot{a}-\dot{\beta} a. \label{eq:sey2} \end{eqnarray} While $a$ and $b$ are still complex, the real and imaginary parts have naturally split into separate coupled equations that are formally identical to (\ref{eq:2d2}). So every solution to the SPE (\ref{eq:sey}) must therefore have a real component which solves (\ref{eq:2d2}). At first glance it may seem that the imaginary part of (\ref{eq:sey2}) contains another set of solutions not encoded in the real part of (\ref{eq:sey2}), but these solutions are not independent because they also solve (\ref{eq:sey}). The solution space of (\ref{eq:sey}) is a complex vector space of dimension 2 over the complex numbers. It can be verified that the SPE with a rest-mass oscillation cannot admit purely real solutions. Also, it is an elementary exercise to show that if a vector space over the complex numbers has a function basis given by $\{ z_1, z_2 \}$ and there is no complex linear combination of $z_1, z_2$ that yields a purely real function, then the set $\{ Re(z_1), Re(z_2), Im(z_1), Im(z_2) \}$ is a linearly independent set of real functions where linear independence is taken over the reals instead of the complex numbers. From this elementary result, it follows that if $\{z_1, z_2\}$ is a basis for the solution space of (\ref{eq:sey}) over the complex numbers, then the set of functions $X= \{Re(z_1), Re(z_2), Im(z_1), Im(z_2) \}$ spans a 4-d real subspace of the solution space of (\ref{eq:2d2}). Since (\ref{eq:2d2}) indeed has a 4-d solution space over the reals, it follows that the subspace spanned by $X$ is indeed the full solution space of (\ref{eq:2d2}). 
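For the constant-$\beta$ case this equivalence can also be checked numerically; the following sketch (our illustration, with arbitrarily chosen $\omega_0$, $\beta$, $a$, $b$) verifies that the explicit solutions (\ref{eq:2ds}) satisfy the second-order system (\ref{eq:2d1}) to finite-difference accuracy:

```python
import numpy as np

# Numeric check (our illustration): the real parts of
#   (x1, x2) = a (1, i) exp(-i(w0 - b)t) + b (1, -i) exp(-i(w0 + b)t)
# satisfy  x1'' + (w0^2 - b^2) x1 =  2 b x2',
#          x2'' + (w0^2 - b^2) x2 = -2 b x1'.
omega0, beta = 5.0, 0.7
a, b = 0.3 + 0.2j, -0.1 + 0.5j                  # arbitrary complex amplitudes
t = np.linspace(0.0, 4.0, 4001)
dt = t[1] - t[0]

z = a * np.exp(-1j * (omega0 - beta) * t) + b * np.exp(-1j * (omega0 + beta) * t)
w = 1j * a * np.exp(-1j * (omega0 - beta) * t) - 1j * b * np.exp(-1j * (omega0 + beta) * t)
x1, x2 = z.real, w.real

def d1(x):   # centered first derivative
    return (x[2:] - x[:-2]) / (2 * dt)

def d2(x):   # centered second derivative
    return (x[2:] - 2 * x[1:-1] + x[:-2]) / dt ** 2

r1 = d2(x1) + (omega0 ** 2 - beta ** 2) * x1[1:-1] - 2 * beta * d1(x2)
r2 = d2(x2) + (omega0 ** 2 - beta ** 2) * x2[1:-1] + 2 * beta * d1(x1)
assert np.max(np.abs(r1)) < 1e-3
assert np.max(np.abs(r2)) < 1e-3
```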
In summary, the solutions to the real, second-order differential equations (\ref{eq:2d2}) exactly correspond to the solutions to the complex, first-order differential equations (\ref{eq:sey}). For a one-dimensional magnetic field, these results explicitly contradict the conventional wisdom concerning the inherent complexity of the spin-1/2 algebra. By moving to real second-order differential equations -- a natural fit for classical systems -- it is possible to retain exactly the same dynamics, even for a time-varying magnetic field. The resulting equations not only account for a Zeeman-like frequency splitting, but demonstrate that the quantum geometric phase can be accounted for as the classical phase of an underlying, high-frequency oscillation (a strict analog to the usually-ignored rest mass oscillation at the Compton frequency). Despite the breadth of the above conclusions, this coupled-oscillator analog has a major drawback as an analog to an electron spin state. It is limited by the lack of coupling parameters that correspond to magnetic fields in the $x$ or $z$ directions, associated with the appropriate rotations around the Poincar\'e sphere. The classical model in the next section solves this problem, although it comes at the expense of the Foucault pendulum's easily-visualized oscillations. \section{The Full Analog: Four Coupled Oscillators} In order to expand the above example to contain an analog of an arbitrarily-directed magnetic field, two more coupling parameters must enter the classical Lagrangian. But with only two oscillators, there are no more terms to couple. With this in mind, one might be tempted to extend the above example to three coupled oscillators, but in that case the odd number of eigenmodes makes the dynamics unlike that of a spin-1/2 system. It turns out that four coupled oscillators can solve this problem, so long as the eigenmodes come in degenerate pairs. 
By extending $\bm{x}$ to a real 4-component vector (as opposed to the 2-component vector in the previous section), one can retain the same general form of the earlier Lagrangian: \begin{equation} \label{L2} L_2(\bm{x},\dot{\bm{x}},t)=\frac{1}{2}(\bm{p\cdot p}-\omega_0^2 \,\bm{x\cdot x}). \end{equation} Here we are still using the definition $\bm{p}\equiv\dot{\bm{x}}+\bm{B} \bm{x}$, but now with a 4x4 matrix encoding three independent coupling coefficients, \begin{equation} \label{eq:Bdef} \bm{B} = \left[ \begin{array}{cccc} 0 & -\beta_z & \beta_y & -\beta_x \\ \beta_z & 0 & \beta_x & \beta_y \\ -\beta_y & -\beta_x & 0 & \beta_z \\ \beta_x & -\beta_y & -\beta_z & 0 \end{array} \right]. \end{equation} Again, note that squaring the matrix $\bm{B}$ yields $\bm{B}^2=-\beta^2 \bm{I}$, where now $\beta \equiv \sqrt{\beta_x^2 + \beta_y^2 + \beta_z^2}$. \subsection{Constant Magnetic Fields} The four corresponding Euler-Lagrange equations of motion (for constant $\beta$'s) can be written as \begin{equation} \label{eq:modes} \left[ 2\bm{B}\frac{\partial}{\partial t}+\bm{I}\left(\omega_0^2-\beta^2+\frac{\partial^2}{\partial t^2} \right) \right]\bm{x}=0. \end{equation} Solving (\ref{eq:modes}) for the eigenmodes via the replacement $\partial/\partial t \to -i\omega$ yields only two solutions, as the eigenvalues are doubly degenerate. They are of the same form as in the previous section: $\omega_\pm = \omega_0 \pm \beta$. Because of the degeneracy, the full classical solutions can be expressed in a variety of ways. It is convenient to consider a vector with the cartesian components $\bm{\beta}=(\beta_x,\beta_y,\beta_z)$, and then to transform it into spherical coordinates $(\beta,\theta,\phi)$. 
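Two algebraic facts quoted here, $\bm{B}^2=-\beta^2\bm{I}$ and the double degeneracy of the mode frequencies $\omega_\pm=\omega_0\pm\beta$, are easy to verify numerically; a sketch with arbitrarily chosen coupling values (our illustration):

```python
import numpy as np

# Numeric check (our illustration): the 4x4 coupling matrix squares to
# -beta^2 * I, and substituting x ~ exp(-i w t) into the equations of motion
# gives (-2 i w B + (omega0^2 - beta^2 - w^2) I) x = 0, whose roots
# w = omega0 ± beta are each doubly degenerate.
bx, by, bz = 0.3, -0.5, 0.4
omega0 = 2.0
beta = np.sqrt(bx ** 2 + by ** 2 + bz ** 2)

B = np.array([[0, -bz,  by, -bx],
              [bz,  0,  bx,  by],
              [-by, -bx, 0,  bz],
              [bx, -by, -bz, 0]], dtype=float)

assert np.allclose(B @ B, -beta ** 2 * np.eye(4))

coef_matrix = lambda w: -2j * w * B + (omega0 ** 2 - beta ** 2 - w ** 2) * np.eye(4)
for w in (omega0 + beta, omega0 - beta):
    assert abs(np.linalg.det(coef_matrix(w))) < 1e-9
    # double degeneracy: a two-dimensional null space at each root
    assert np.linalg.matrix_rank(coef_matrix(w), tol=1e-8) == 2
```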
Using the two-spinors $\ket{y_+}$ and $\ket{y_-}$ defined in (\ref{eq:defs}), the general solutions to (\ref{eq:modes}) can then be written as the real part of \begin{align} \label{eq:full} \fspi{x_1(t)}{x_2(t)}{x_3(t)}{x_4(t)}= & \, a\spi{cos(\theta/2)\ket{y_-}}{sin(\theta/2)e^{i\phi}\ket{y_-}} e^{-i \beta t} + b\spi{sin(\theta/2)\ket{y_-}}{-cos(\theta/2)e^{i\phi}\ket{y_-}} e^{i \beta t} \notag \\ & + c\spi{-sin(\theta/2)e^{i\phi}\ket{y_+}}{cos(\theta/2)\ket{y_+}} e^{-i \beta t} + d\spi{cos(\theta/2)e^{i\phi}\ket{y_+}}{sin(\theta/2)\ket{y_+}} e^{i \beta t}. \end{align} Here the global $exp(-i\omega_0 t)$ dependence has been suppressed; one multiplies by this factor and takes the real part to get the actual coordinate values. Having doubled the number of classical oscillators, the solution here is parameterized by {\it four} complex numbers ($a,b,c,d$). This solution bears a striking similarity to the known dynamics of an electron spin state in an arbitrary uniform magnetic field $\vec{B}$ with components $(B,\theta,\phi)$. In the basis defined above in (\ref{eq:defs}), those solutions to the SPE are known to be \begin{equation} \label{eq:qmev} \spi{\chi_+ (t)}{\chi_- (t)} = f \spi{cos(\theta/2)}{sin(\theta/2)e^{i\phi}} e^{-ie Bt/2m} + g \spi{sin(\theta/2)}{-cos(\theta/2)e^{i\phi}} e^{ie Bt/2m}, \end{equation} where the left side of this equation is the spinor $\ket{\chi (t)}$. Here $f$ and $g$ are two complex constants subject to the normalization condition $|f|^2+|g|^2=1$. It is not difficult to see how all possible SPE solutions (\ref{eq:qmev}) have corresponding classical solutions (\ref{eq:full}). Equating $\bm{\beta}=\vec{B}e/(2m)$, adding the quantum-irrelevant global phase dependence $exp(-i\omega_0 t)$ to $\ket{\chi (t)}$, and setting $c=d=0$ in (\ref{eq:full}) makes the two expressions appear almost identical if $a=\sqrt{2}f$ and $b=\sqrt{2}g$. (The $\sqrt{2}$'s appear in the definition of $\ket{y_-}$). 
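The time dependence in (\ref{eq:qmev}) follows because its two spinors are the $\pm1$ eigenvectors of $\hat{n}\cdot\bm{\sigma}$, with $\hat{n}$ the field direction $(\theta,\phi)$; a quick numeric check with the standard Pauli matrices (our illustration, not the paper's derivation):

```python
import numpy as np

# Check (our illustration, standard Pauli-matrix conventions): the spinors of
# the constant-field solution are the ±1 eigenvectors of n.sigma, which fixes
# their exp(∓ i e B t / 2m) time dependence.
theta, phi = 1.1, 0.7
n = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = n[0] * sx + n[1] * sy + n[2] * sz

chi_p = np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])
chi_m = np.array([np.sin(theta / 2), -np.cos(theta / 2) * np.exp(1j * phi)])

assert np.allclose(H @ chi_p, chi_p)      # eigenvalue +1
assert np.allclose(H @ chi_m, -chi_m)     # eigenvalue -1
```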
The final step is to map the fully-real $x's$ to the complex $\chi's$ according to \begin{equation} \label{eq:map1} \chi_+=x_1+ix_2 \, \, , \, \, \chi_- = x_3 + ix_4. \end{equation} This mapping turns out to be just one of many possible ways to convert a solution of the form (\ref{eq:qmev}) into the form (\ref{eq:full}). For example, setting $a=b=0$, $c=\sqrt{2}f$ and $d=\sqrt{2}g$ corresponds to the alternate map \begin{equation} \label{eq:map2} \chi_+=x_3-ix_4 \, \, , \, \, \chi_- = -x_1 + ix_2. \end{equation} More generally, one can linearly combine the above two maps by introducing two complex parameters $A$ and $B$. Under the assignment $a=\sqrt{2}Af$, $b=\sqrt{2}Ag$, $c=\sqrt{2}Bf$ and $d=\sqrt{2}Bg$ (which can always be done if $ad\!=\!bc$) then the connection between the above equations (\ref{eq:full}) and (\ref{eq:qmev}) corresponds to \begin{eqnarray} \label{eq:map3} \chi_+&=& \frac{(x_1+ix_2)A^*+(x_3-ix_4)B^*}{|A|^2+|B|^2}, \nonumber \\ \chi_- &=& \frac{(-x_1 + ix_2)B^*+(x_3 + ix_4)A^*}{|A|^2+|B|^2}. \end{eqnarray} This shows that for any solution (\ref{eq:full}) that obeys the $ad\!=\!bc$ condition, it will always encode a particular quantum solution to (\ref{eq:qmev}) via the map (\ref{eq:map3}), albeit with extra parameters $A$, $B$, and a specified global phase. Remarkably, this $ad\!=\!bc$ condition happens to be equivalent to the simple classical constraint $L_2(t)=0$. Imposing such a constraint on (\ref{eq:modes}) therefore yields a classical system where all solutions can be mapped to the dynamics of a spin-1/2 quantum state in an arbitrary, constant, magnetic field -- along with a number of ``hidden variables'' not encoded in the quantum state. \subsection{Time-varying Magnetic Fields} As in Section III, a generalization to time-varying magnetic fields is best accomplished at the level of differential equations, not solutions. 
Allowing $\bm{\beta}(t)$ to vary with time again adds a new term to the Euler-Lagrange equations, such that they now read: \begin{equation} \label{eq:ELx} \left[ 2\bm{B}\frac{\partial}{\partial t}+\frac{\partial \bm{B}}{\partial t}+\bm{B}^2+\bm{I}\left(\omega_0^2+\frac{\partial^2}{\partial t^2} \right) \right]\bm{x}=0. \end{equation} Here $\bm{B}$ is given by (\ref{eq:Bdef}) with time-dependent $\beta_x$, $\beta_y$, and $\beta_z$. This must be again compared with the SPE with an explicit rest mass oscillation $mc^2=\hbar\omega_0$: \begin{equation} \label{eq:SE} i\hbar \frac{\partial}{\partial t} \spi{\chi_+}{\chi_-} = \hbar \left( \omega_0 \bm{I}+\bm{\beta}\cdot\bm{\sigma} \right) \spi{\chi_+}{\chi_-}, \end{equation} where again we have used $\bm{\beta}(t)=\vec{B}(t)e/(2m)$ to relate the coupling parameters in $L_2$ with the magnetic field $\vec{B}(t)$. (Here $\bm{\sigma}$ is the standard vector of Pauli matrices). While it is possible to use the map (\ref{eq:map3}) to derive (\ref{eq:ELx}) from (\ref{eq:SE}) (and its time-derivative) via brute force, it is more elegant to use the quaternion algebra, as it is closely linked to both of the above equations. Defining the two quaternions $\mathfrak{q}=x_1+ix_2+jx_3+kx_4$, and $\mathfrak{b}=0+i\beta_z-j\beta_y+k\beta_x$, allows one to rewrite (\ref{eq:ELx}) as the quaternionic equation \begin{equation} \label{eq:ELq} \ddot{\mathfrak{q}}+2\dot{\mathfrak{q}}\mathfrak{b}+\mathfrak{q}(\mathfrak{b}^2+\dot{\mathfrak{b}}+\omega_0^2)=0. \end{equation} Note that while $\bm{B}$ operates on $\bm{x}$ from the left, the $\mathfrak{b}$ acts as a right-multiplication on $\mathfrak{q}$ because (\ref{eq:Bdef}) is of the form of a right-isoclinic rotation in SO(4). While it is well-known that the components of $i\bm{\sigma}$ act like purely imaginary quaternions, the precise mapping of $i\bm{\beta}\cdot\bm{\sigma}$ to $\mathfrak{b}$ depends on how one maps $\ket{\chi}$ to a quaternion $\mathfrak{s}$. 
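The statement that (\ref{eq:Bdef}) acts as a right-multiplication can be verified componentwise: with the conventions above, $\mathfrak{q}\mathfrak{b}$ reproduces $\bm{B}\bm{x}$ exactly. A numeric sketch using the Hamilton product (our illustration):

```python
import numpy as np

# Check (our illustration): with q = x1 + i x2 + j x3 + k x4 and
# b = i*beta_z - j*beta_y + k*beta_x, the right-multiplication q*b equals the
# left matrix action B x, and b^2 = -beta^2 since b is purely imaginary.
def hamilton(p, q):
    # Hamilton product of quaternions stored as (scalar, i, j, k) arrays.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

bx, by, bz = 0.3, -0.5, 0.4
B = np.array([[0, -bz,  by, -bx],
              [bz,  0,  bx,  by],
              [-by, -bx, 0,  bz],
              [bx, -by, -bz, 0]], dtype=float)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                    # arbitrary oscillator coordinates
b = np.array([0.0, bz, -by, bx])          # the quaternion b defined in the text

assert np.allclose(hamilton(x, b), B @ x)
beta2 = bx ** 2 + by ** 2 + bz ** 2
assert np.allclose(hamilton(b, b), np.array([-beta2, 0.0, 0.0, 0.0]))
```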
Using the above map (\ref{eq:map1}), combined with the above definition of $\mathfrak{q}$, it happens that $i(\bm{\beta}\cdot\bm{\sigma})\ket{\chi}=\mathfrak{s}\mathfrak{b}$, where $\mathfrak{s}$ is the quaternionic version of $\ket{\chi}$ (as defined by the combination of (\ref{eq:map1}) and $\mathfrak{q}=\mathfrak{s}$). This allows one to write the SPE (\ref{eq:SE}) as \begin{equation} \label{eq:SEq} \dot{\mathfrak{s}}+\mathfrak{s}\mathfrak{b}=-i\omega_0\mathfrak{s}. \end{equation} This equation uses a quaternionic $i$, not a complex $i$, acting as a left-multiplication (again because of the particular mapping from $\ket{\chi}$ to $\mathfrak{s}$ defined by (\ref{eq:map1})). While the SPE would look more complicated under the more general map (\ref{eq:map3}) as applied to $\mathfrak{q}=\mathfrak{s}$, this is equivalent to applying the simpler map (\ref{eq:map1}) along with \begin{equation} \label{eq:qtos} \mathfrak{q}=[Re(A)+iIm(A)+jRe(B)-kIm(B)]\mathfrak{s}\equiv \mathfrak{u}\mathfrak{s}, \end{equation} so long as $\mathfrak{u}$ is a constant unit quaternion (linking the normalization of $\mathfrak{q}$ and $\mathfrak{s}$). Keeping the SPE in the form (\ref{eq:SEq}), we want to show that for any solution $\mathfrak{s}$ to (\ref{eq:SEq}), there is a family of solutions $\mathfrak{q=us}$ to the classical oscillators (\ref{eq:ELq}). The time-derivative of (\ref{eq:SEq}) can be expanded as \begin{equation} \label{eq:SEq2a} \ddot{\mathfrak{s}}+2\dot{\mathfrak{s}}\mathfrak{b}+\mathfrak{s}\dot{\mathfrak{b}} = \dot{\mathfrak{s}}\mathfrak{b}-i\omega_0\dot{\mathfrak{s}}. \end{equation} Using (\ref{eq:SEq}) to eliminate the $\dot{\mathfrak{s}}$'s on the right side of (\ref{eq:SEq2a}) then yields \begin{equation} \label{eq:SEq2} \ddot{\mathfrak{s}}+2\dot{\mathfrak{s}}\mathfrak{b}+\mathfrak{s}(\mathfrak{b}^2+\dot{\mathfrak{b}}+\omega_0^2)=0. 
\end{equation} If $\mathfrak{s}$ solves (\ref{eq:SEq}), it must solve (\ref{eq:SEq2}), but this is exactly the same equation as (\ref{eq:ELq}). And because $\mathfrak{u}$ is multiplied from the left, $\mathfrak{q=us}$ must then also solve (\ref{eq:ELq}). This concludes the proof that all solutions to the SPE (\ref{eq:SE}) -- even for a time-varying magnetic field -- have an exact classical analog in the solutions to (\ref{eq:ELx}). The question remains as to which subset of solutions to (\ref{eq:ELx}) has this quantum analog. If the above connection exists between $\mathfrak{q}$ and $\mathfrak{s}$, then by definition $\mathfrak{s=u^*q}$, where $\mathfrak{u}$ is a unit quaternion. This substitution transforms the left side of (\ref{eq:SEq}) into $\mathfrak{u^*p}$, where $\mathfrak{p}=\dot{\mathfrak{q}}+\mathfrak{qb}$ is the quaternionic version of the canonical momentum. Therefore, from (\ref{eq:SEq}), $\mathfrak{p}=-\mathfrak{u}i\omega_0\mathfrak{u^*q}$. As $\mathfrak{u}$ is a unit quaternion, this yields a zero Lagrangian density $L_2=(|\mathfrak{p}|^2-\omega_0^2|\mathfrak{q}|^2)/2=0$, consistent with the constant-field case. \section{Discussion} The Foucault pendulum is often discussed in the context of classical analogs to quantum spin states\cite{Klyshko}, but the discussion is typically restricted to geometric phase. Section III demonstrated that the analogy runs much deeper, as the Coriolis coupling between the two oscillatory modes is exactly analogous to a one-dimensional magnetic field acting on an electron spin state. The analog also extends to the dynamics, and provides a classical description of Zeeman energy splitting, geometric phase shifts, and the appearance of a doubled gyromagnetic ratio. Apart from a global phase, there were no additional classical parameters needed to complete the Section III analog. 
In Section IV, we demonstrated that it is possible to take four classical oscillators and physically couple them together in a particular manner (where the three coupling coefficients correspond to the three components of a magnetic field), yielding the equations of motion given in (\ref{eq:ELx}). Imposing a global physical constraint ($L_2=0$) on this equation forces the solutions to have an exact map (\ref{eq:map3}) to solutions of the Schr\"odinger-Pauli equation for a two-level quantum system with a rest-mass oscillation. This is a many-to-one map, in that there are additional parameters in the solution to (\ref{eq:ELx}) that can be altered without affecting the corresponding quantum solution, including an overall phase. From a quantum perspective, these additional parameters would be ``hidden variables''. Perhaps one reason this analog has not been noticed before is that many prior efforts to find classical analogs for the spin-1/2 state have started with a physical angular momentum vector, in real space. Rotating such a physical vector by $2\pi$, it is impossible to explain a $\pi$ geometric phase shift without reference to additional elements outside the state itself, such as in Feynman's coffee cup demonstration \cite{Feynman}. In the four-oscillator analog, however, the expectation value of the spin angular momentum merely corresponds to an unusual combination of physical oscillator parameters: \begin{equation} \label{eq:expS} \left< \bm{S\cdot\hat{e}} \right> = -\frac{\hbar}{2\omega_0} \bm{p\cdot B(\hat{e}) x}. \end{equation} Here $\bm{\hat{e}}$ is an arbitrary unit vector, and the above definition of $\bm{B(\beta)}$ in (\ref{eq:Bdef}) is used to define $\bm{B(\hat{e})}$. Note, for example, that a sign change of both $\bm{x}$ and $\bm{p}$ leaves $\left<\bm{S}\right>$ unchanged. 
This is indicative of the fact that the overall phase of the oscillators is shifted by $\pi$ under a $2\pi$ rotation of $\left<\bm{S}\right>$, exactly as in the CP-analog and the Foucault pendulum. This result explicitly demonstrates that if there is any inherently non-classical aspect to a quantum spin-1/2 state, such an aspect need not reside in the dynamics. On the other hand, if the system is measured, this classical analog cannot explain why superpositions of eigenmodes are never observed, or indeed what the probability distribution of measurements should be. That analysis resides in the domain of quantum measurement theory, and these results do not indicate whether or not that domain can be considered to have a classical analog. With this in mind, these results should still be of interest to approaches where the usual quantum state is not treated as a complete description of reality. The hidden variables that naturally emerge from the above analysis are the complex parameters $A$ and $B$ (or equivalently, the unit quaternion $\mathfrak{u}$). These parameters effectively resulted from the doubling of the parameter space (from two to four oscillators), but do not seem to have any quantitative links to prior hidden-variable approaches. Still, they are loosely aligned with the doubling of the ontological state space in Spekkens's toy model \cite{Spekkens}, as well as with the doubling of the parameter space introduced when moving from the first-order Schr\"odinger equation to the second-order Klein-Gordon equation \cite{KGE}. Another point of interest is that this analog stems from a relatively simple Lagrangian, $L_2$, and there is good reason to believe that any realistic model of quantum phenomena should have the same symmetries as a Lagrangian density \cite{WMP}. One final question raised by these results is whether or not it is possible to construct a working mechanical or electrical version of the classical oscillators described in Section IV.
If this were possible, it would make a valuable demonstration concerning the dynamics of an unmeasured electron spin state. Even if it were not possible, some discussion of these results in a quantum mechanics course might enable students to utilize some of their classical intuition in a quantum context. \begin{acknowledgments} The authors are indebted to Patrick Hamill for recognizing (\ref{eq:L1}) as the Foucault pendulum Lagrangian; further thanks are due to Ian Durham, David Miller, and William Wharton. An early version of this work was completed when KW was a Visiting Fellow at the Centre for Time in the Sydney Centre for Foundations of Science, University of Sydney. \end{acknowledgments} \end{document}
\begin{document} \title{Horizon-unbiased Investment with Ambiguity\footnote{ We are grateful for the funding from the NSF of China (11501425 and 71801226). }} \author{Qian Lin\thanks{ Email: [email protected]}} \affil{School of Economics and Management, Wuhan University, China} \author{Xianming Sun\thanks{ Email: [email protected]}} \affil{School of Finance, Zhongnan University of Economics and Law, China} \author{Chao Zhou\thanks{ Email: [email protected]}} \affil{Department of Mathematics, National University of Singapore, Singapore } \maketitle \begin{abstract} In the presence of ambiguity on the driving force of market randomness, we consider the dynamic portfolio choice without any predetermined investment horizon. The investment criterion is formulated as a robust forward performance process, reflecting an investor's dynamic preference. We show that the market risk premium and the utility risk premium jointly determine the investor's trading direction and the worst-case scenarios of the risky asset's mean return and volatility. Closed-form formulas for the optimal investment strategies are given in the special setting of CRRA preferences. \end{abstract} \textbf{Keywords}: Ambiguity, Forward Performance, Robust Investment, Risk Premium \section{Introduction} Dynamic portfolio choice problems usually envisage an investment setting where an investor is exogenously assigned an investment performance criterion and stochastic models for the price processes of risky assets. However, the investor may extemporaneously change the investment horizon, continually update her preference as the market evolves, and invest conservatively due to ambiguity on the driving force of market randomness or the dynamics of the risky assets. Motivated by these investment realities, we study a robust horizon-unbiased portfolio problem in a continuous-time framework.
In the seminal work of \cite{Merton1969}, continuous-time portfolio choice is formulated as a stochastic control problem to maximize the expected utility at a specific investment horizon by searching for the optimal strategy in an admissible strategy space. Note that if the investor has two candidate investment horizons $T_1$, $T_2$, $(T_2>T_1>0)$, the resulting optimal strategies associated with these two horizons are generally not consistent over the common time interval $[0,T_1]$ \citep{Musiela2007}. Hence, Merton's framework is suitable neither for the case where an investor may extend or shorten her initial investment horizon, nor for the case where the investor may update her preference in accordance with the accumulated market information. In these quite realistic settings, the investor needs an optimal strategy which is independent of the investment horizon and reflects her dynamic preference in time and wealth. The horizon-unbiased utility or forward performance measure, independently proposed by \cite{Choulli2007,Henderson2007,Musiela2007}, provides a portfolio framework satisfying the aforementioned requirements. In such a framework, an investor specifies initial preferences (a utility function), and then propagates them \emph{forward} as the financial market evolves. This striking characteristic distinguishes the portfolio choice based on the forward performance measure from that in Merton's framework, in which intertemporal preference is derived from the terminal utility function in a \emph{backward} way. \cite{Musiela2010a} specify the generic forward performance measure as a stochastic flow $U=U(t,x)_{t\ge0}$, taking time $(t)$ and wealth $(x)$ as arguments. The randomness of the forward performance measure is driven by the same Brownian motion that drives the randomness of the asset price.
This implies that the driving force of market randomness is simultaneously embedded into the investor's preference and the risky asset price process. Such a modeling approach implicitly assumes that the Brownian motion represents the essential source of risk behind the financial market and the risky assets. In particular, the volatility of a forward performance measure reflects the investor's uncertainty about her future preference due to the randomness of the financial market states. However, due to epistemic limitations or limited information, an investor may have ambiguity about the driving force of market randomness and her future preference. Focusing on such ambiguities, we will introduce a robust forward performance measure, and investigate the corresponding portfolio selection problems. The mean return rate and volatility are important factors characterizing the dynamics of risky assets. In the traditional portfolio theory, these two factors are usually modeled by stochastic processes, the distributions of which are known to the decision-maker at each time node before the specified investment horizon. In this case, the investor is actually assumed to have full information on the driving force of market randomness, and can accurately assign probabilities to the various possible outcomes of investment or factors associated with the investment. However, in such complex financial markets, it is \emph{unrealistic} for investors to have accurate information on the dynamics or distributions of the risk factors, essentially due to the cognitive limitation on the driving force of market randomness. This situation is referred to as ``ambiguity'' in the sense of Knight, while the former situation is referred to as ``risk''. Ambiguity has attracted researchers' attention in the area of asset pricing and portfolio management \citep[see e.g.][]{Maenhout2004,Garlappi2007,wang2009optimal,Bossaerts2010,Liu2011a,Chen2014i,Luo2014,Guidolin2016,luo2016robustly,zeng2018ambiguity,Escobar2018}.
We assume that an investor has ambiguous beliefs on the paths of the risky asset price. Ambiguous beliefs are characterized by a set $\mathcal{P}$ of probability measures $(\mathbb{P}\in \mathcal{P})$ defined on the canonical space $\Omega$, the set of continuous paths starting from the current price of the risky asset. We incorporate the investor's ambiguity on the risky asset price into her preference, by defining the forward performance measure on the canonical space $\Omega$. We first characterize ambiguity on the dynamics of the risky asset in terms of ambiguity on its mean return and volatility. More specifically, we assume that the mean return and the volatility processes of the risky asset lie in a convex compact set $\Theta\subset \mathbb{R}^2_+$, which then leads to the set of probability measures $\mathcal{P}$. This formulation is different from the stochastic models with known distributions at each time node, and generalizes the framework defined on a probability space with only one probability measure. Within this general setting, we investigate an ambiguity-averse investor's investment strategy, and her conservative beliefs on the mean return and the volatility of risky assets. We then define the robust forward performance measure by taking into account the investor's ambiguity on the driving force of market randomness. In turn, we propose a method to construct such a robust forward performance measure for a given initial preference, and derive the corresponding investment strategy and conservative beliefs on the mean return and the volatility of risky assets. We show that the sum of the market risk premium and the utility risk premium determines the trading direction.
We further specify the initial preference to be of the constant relative risk aversion (CRRA) type, and investigate the determinants of the conservative beliefs on the mean return and the volatility of risky assets in three settings, i.e., ambiguity on the mean return rate, ambiguity on the volatility, and structured ambiguity. When we consider ambiguity on the mean return rate, we keep the volatility constant, and vice versa. Such ambiguities have been investigated in Merton's framework \citep[see e.g.][]{Lin2014c,luo2016robustly}. The third setting is motivated by the fact that there is no consensus on the relation between the mean return and the volatility of risky assets in the empirical literature \citep[see e.g.][]{Omori2007,Bandi2012,Yu2012}, and was investigated by \cite{Epstein2013}. We show that the sign of the total risk premium determines the conservative belief on the mean return in the first setting, while the risk attitude and the relative value of the market risk premium over the utility risk premium jointly determine the conservative belief on the volatility in the second setting. In the third setting, we do not derive closed-form formulas for the conservative beliefs, but show that the corresponding beliefs can take intermediate values within the candidate value interval, as well as the upper and lower bounds. To our knowledge, such results are new in the portfolio selection literature. This paper contributes to the existing literature in three ways. \emph{First}, we propose a generic formulation of robust forward performance accommodating an investor's ambiguity on the dynamics of risky assets. \emph{Second}, we identify the determinants of the trading direction for an investor in a market with one risk-free asset and one risky asset. From the economic point of view, it is the sum of the market risk premium and the utility risk premium that determines an investor's trading direction.
\emph{Third}, we show that the market risk premium, the utility risk premium, and the risk tolerance affect an investor's conservative belief on the mean return and volatility. In particular, if the maximum of the total risk premium is negative, an investor will take the maximum of the mean return as the worst-case value; if the minimum of the total risk premium is positive, an investor will take the minimum of the mean return as the value in the worst-case scenario; otherwise, the worst-case mean return lies between its minimum and maximum. The market risk premium, the utility risk premium, and the risk tolerance jointly determine an investor's conservative belief on the volatility of risky assets. We emphasize that the conservative belief arises from an optimization over risk premiums, and these conservative beliefs may be intermediate values within their candidate value intervals, as well as boundary values. \textbf{Related Literature}. Most of the existing results on forward performance measures have so far focused on their construction and on portfolio problems in the setting of risk, rather than ambiguity \cite[][to name a few]{Zariphopoulou2010,Musiela2010,Alghalith2012,Karoui2013,Kohlmann2013,Anthropelos2014,Nadtochiy2017,Avanesyan2018,Shkolnikov2015a,Case2018}. As one of the few exceptions, \cite{Kallblad2013a} investigate the robust forward performance measure in the setting of ambiguity characterized by a set of equivalent probability measures. However, this approach fails to solve the robust ``forward'' investment problem under ambiguous volatility, since volatility ambiguity is characterized by a set of mutually singular probability measures \citep{Epstein2013}. We fill this gap by characterizing an investor's ambiguity with a set of probability measures, which may not be equivalent to each other.
Similar to our work, \cite{Chong2018} investigate robust forward investment under parameter uncertainty in a framework where a unique probability measure is assigned to the canonical space $(\Omega)$. Different from that model setup, we assign a set of probability measures to the canonical space $(\Omega)$, accounting for an investor's ambiguity on the future scenarios of the risky asset price. This approach is not only technically more general than the approach with a set of dynamic models under a unique probability measure (as detailed in Remark 4 of \cite{Epstein2013}), but also allows an investor to explicitly incorporate ambiguity on the risk source into her preference. That is the key difference between our framework and the framework of \cite{Chong2018}. On the other hand, \cite{Chong2018} construct the forward performance measure based on the solution of an infinite-horizon backward stochastic differential equation (BSDE). Our approach associates the forward performance measure with a stochastic partial differential equation (SPDE), which provides the analogue of the Hamilton-Jacobi-Bellman (HJB) equation in Merton's framework. For reasons of tractability, we limit ourselves to forward performance measures of some special forms, and investigate the corresponding robust investment. It is beyond the scope of this paper to investigate the existence, uniqueness, and regularity of the solution of the associated SPDE in the general setting. This simplified model setup and the corresponding results shed light on how ambiguity-averse investors dynamically revise their preferences as the market evolves. The remainder of this paper is organized as follows. Section \ref{Setup} introduces the model setup for robust forward investment. The construction of the robust forward performance measure is investigated in Section \ref{Construction}.
In Section \ref{CRRA:case}, we study the conservative belief of an ambiguity-averse investor with preference of the constant relative risk aversion (CRRA) type. Section \ref{Conclusion} concludes. \section{Model setup} \label{Setup} We consider a financial market with two tradable assets: the risk-free bond and the risky asset. The risk-free bond has a constant return rate $r$, i.e., \begin{equation}\label{RisklessBondPrice} \ud P_t=r P_t \ud t\,, \end{equation} where $P$ is the bond price with $P_0=1$. The risky asset price $S = (S_t)_{t\in[0,\infty)} $ is modelled by the canonical process of $\Omega$, defined by $$ \Omega = \left\{ \omega = {(\omega (t))_{t \in [0,\infty )}} \in C([0,\infty ),\mathbb{R}^+):\omega (0) = S_0\right\}, $$ where $S_0$ is the current price of the risky asset and $S_t(\omega)=\omega(t)$. We equip $\Omega$ with the uniform norm and the corresponding Borel $\sigma$-field $\mathcal{F}$. $\mathbb{F} = (\mathcal{F}_t)_{t\in[0,\infty)} $ denotes the canonical filtration, i.e., the natural (raw) filtration generated by $S$. Due to the complexity of the financial market and the limitations of individual cognitive ability, an investor may have ambiguous beliefs on the risky asset price, i.e., ambiguity on the mean return $(\mu) $ or volatility $(\sigma)$ in our model setup. We assume that $(\mu_t,\sigma_t)$ can take any value within a convex compact set $\Theta\subset\mathbb R^{2}_+$, but without additional information about their distributions for any time $t\in[0,\infty)$. That is, $\Theta$ represents ambiguity on the return and volatility of the risky asset. More explicitly, we characterize ambiguity by $\Gamma^\Theta$, defined by \begin{equation}\label{processGama} \Gamma^\Theta=\Big\{\theta \mid \theta=(\mu_t ,\sigma_t )_{t\ge 0} \mbox{ is an } \mathcal {F}\mbox{-progressively measurable process and } \ (\mu_t,\sigma_t)\in \Theta \mbox{ for any } t>0 \Big\}\,.
\end{equation} For $\theta=(\mu_t,\sigma_t)_{t \ge 0}\in \Gamma^\Theta$, let $\mathbb{P}^\theta$ be the probability measure on $(\Omega,\mathbb{F})$ such that the following stochastic differential equation (SDE) \begin{equation}\label{SDE1} \ud S_t=S_t(\mu_t\ud t+\sigma_t\ud W_t^\theta)\,, \end{equation} admits a unique strong solution $S=(S_t)_{t\ge0}$, where $W^\mathbb{\theta}=(W^\mathbb{\theta}_t)_{t\ge 0}$ is a Brownian motion under $\mathbb{P}^\theta$. Let $\mathcal{P}^\Theta$ denote the set of probabilities $\mathbb{P}^{\theta}$ on $(\Omega,\mathbb{F})$ such that the SDE \eqref{SDE1} has a unique strong solution, corresponding to the ambiguity characteristic $\Theta$ $(\theta\in \Gamma^\Theta)$. The Brownian motion $W^\theta$ can be interpreted as the driving force of randomness behind the risky asset under the probability measure $\mathbb{P}^\theta$. This model setup allows us to analyze how the investor's belief on the risky asset affects her preference and investment strategy, especially the effect of ambiguity on the risk source. An investor is endowed with some wealth $x_0>0$ at time $t=0$, and allocates her wealth dynamically between the risky asset and the risk-free bond. For $t\ge0$ and $s\ge t$, let $\pi_s$ be the proportion of her wealth invested in the risky asset at time $s$. Due to the self-financing property, the discounted wealth $X^\pi=(X_s^\pi)_{s\ge t}$ is given by \begin{equation}\label{Wealth} \ud X_s^\pi=(\mu_s-r)\pi_s X_s^\pi\ud s+\pi_s X_s^\pi \sigma_s\ud W_s^\mathbb{\theta},\quad X_t^\pi=x\,, \end{equation} where $r$ is the risk-free interest rate, and $W^\theta$ is a Brownian motion under $\mathbb{P}^\theta\in \mathcal{P}^\Theta$.
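To make the inf--sup structure of the robust investment problem below concrete, the following is a minimal numerical sketch, not part of the model: it restricts attention to constant-proportion strategies, discretizes a hypothetical ambiguity set $\Theta$, and uses CRRA utility with illustrative parameter values (the grids, $r$, $\gamma$, and the random seed are all assumptions made for this illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ambiguity set Theta, discretized on a grid
# (illustrative values only; not taken from the paper):
MUS = np.linspace(0.02, 0.08, 5)      # candidate mean returns
SIGMAS = np.linspace(0.15, 0.30, 4)   # candidate volatilities

def expected_crra_utility(pi, mu, sigma, r=0.01, gamma=2.0, x0=1.0,
                          T=1.0, n_paths=20000):
    """Monte Carlo estimate of E[u(X_T)] for the wealth dynamics
    dX = (mu - r) pi X dt + pi sigma X dW with constant proportion pi,
    using the exact log-normal terminal law; u(x) = x^(1-gamma)/(1-gamma)."""
    z = rng.standard_normal(n_paths)
    log_x = (np.log(x0)
             + ((mu - r) * pi - 0.5 * (pi * sigma) ** 2) * T
             + pi * sigma * np.sqrt(T) * z)
    x_T = np.exp(log_x)
    return np.mean(x_T ** (1.0 - gamma) / (1.0 - gamma))

def worst_case_utility(pi):
    # inner infimum over the (discretized) ambiguity set
    return min(expected_crra_utility(pi, m, s) for m in MUS for s in SIGMAS)

# outer supremum: robust choice of a constant proportion on a coarse grid
grid = np.linspace(0.0, 1.0, 11)
pi_star = max(grid, key=worst_case_utility)
```

Here the inner minimum plays the role of the infimum over the ambiguity set and the outer maximum the supremum over (a restricted class of) admissible strategies; since the coefficients are constant along each scenario, the exact log-normal terminal law is simulated directly and no time-stepping is needed.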
The set of admissible strategies $\mathcal{A}(x)$ is defined by \[\mathcal{A}(x)=\left\{\pi \left| \pi \mbox{ is self-financing and } \mathbb{F}\mbox{-adapted}, \int_{t}^{s}\pi_r^2\ud r <\infty, s\geq t, \;\right.\mathbb{P}^\theta\mbox{-a.s., for all}\; \mathbb{P}^\theta\in \mathcal{P}^\Theta\right\}\,.\] The optimal investment strategy $\pi^*$ and the corresponding wealth process $X^*$ are usually associated with an optimization problem, such as utility maximization or risk minimization. Within Merton's framework for portfolio theory \citep{Merton1969}, the value process $U(t,x;\tilde T)$ is formulated as \begin{equation}\label{classical:U} U(t ,x;\tilde T):=\sup_{\pi\in \mathcal{A}_{\tilde T}}E[u(X_{\tilde T}^{\pi})\mid\F_t,\,X_t^\pi=x]\,, \end{equation} where the investment horizon $\tilde T$ is predetermined, $u$ is a utility function, $\mathcal{A}_{\tilde T}$ is the set of admissible strategies, and $X_{\tilde T}^\pi$ is the terminal wealth corresponding to an admissible strategy $\pi\in \mathcal{A}_{\tilde T}$. The expectation $(E)$ in \eqref{classical:U} is taken under some probability measure $\ensuremath{\mathbb{P}}$\,, if there is no ambiguity on the driving force of market randomness. Then, the dynamic programming principle can be applied to solve the optimal control problem \eqref{classical:U}, namely, \begin{equation}\label{DPP} U(t,x;\tilde T)=\sup_{\pi\in \mathcal{A}_{\tilde T}}E[U(s,X_s^{\pi};\tilde T)\mid\F_t,\,X_t^\pi=x]\,. \end{equation} By verification arguments, $U$ is the solution of the Hamilton-Jacobi-Bellman (HJB) equation \citep{Zhou1997}. The dynamic programming equation \eqref{DPP} essentially signifies that $\{U(t,X_t^{\pi};\tilde{T})\}_{t\in[0,\tilde T]}$ is a martingale at the optimum, and a supermartingale otherwise, associated with some probability measure $\ensuremath{\mathbb{P}}$\,.
This property can be interpreted as follows: if the system is currently at an optimal state, one needs to seek controls which preserve the same level of average performance over all future times before the predetermined investment horizon $\tilde T$. We refer to this property as the martingale property of the value process. On the other hand, \eqref{DPP} implies that $U(\tilde T,X_{\tilde T}^{\pi};\tilde{T})$ coincides with $u(X_{\tilde T}^{\pi})$, where $u$ represents the preference at $t=\tilde T$. Note that the future utility function $u$ is specified at $t=0$. However, it is not intuitive to specify the \emph{future} preference at the \emph{initial} time in complete isolation from the evolution of the market. \cite{Musiela2007,Musiela2008} propose the so-called forward performance measure $U(t,x)$ which keeps the martingale property of $\{U(t,X_t^{\pi})\}_{t\in[0,T]}$ for \emph{any} horizon $T>0$, and coincides with the initial preference, namely $U(0,X_0^{\pi})=u(X_0^{\pi})$. In this framework, the future preference dynamically changes in accordance with the market evolution. In a similar spirit to \cite{Musiela2007,Musiela2008}, we will generalize the definition of the forward performance measure by considering ambiguity on the risk source. For $\mathbb{P}^\theta\in \mathcal{P}^\Theta$, we define $\mathcal{P}^\Theta(t,\mathbb{P}^\theta)$ by \begin{equation}\label{Con:P} \mathcal{P}^\Theta(t,\mathbb{P}^\theta):=\{\mathbb{P}'\in \mathcal{P}^\Theta\mid \mathbb{P}'=\mathbb{P}^\theta\mbox{ on } \mathcal{F}_t\} \end{equation} which facilitates the definition of the robust forward performance measure. \begin{defi}[Robust forward performance]\label{RFP} An $\F_t$-progressively measurable process $(U(t,x))_{t\ge0}$ is called a robust forward performance if for $t\ge0$ and $x\in\ensuremath{\mathbb{R}}^+$, the following holds. \begin{enumerate}[(i)] \item The mapping $x\rightarrow U(t,x)$ is strictly concave and increasing.
\item For each $\pi\in \mathcal{A}(x)$, $\essinf_{\mathbb{P}\in \mathcal{P}^\Theta}\mathbb{E}^\mathbb{P} [U(t,X_t^\pi)]^+<\infty$, and \[\essinf_{\mathbb{P}'\in \mathcal{P}^\Theta(t,\mathbb{P}^\theta)}\mathbb{E}^\mathbb{P' } [U(s,X_s^\pi)\mid \F_t]\le U(t,X_t^\pi),\ t\le s, \; \mathbb{P}^\theta\mbox{-a.s.}\] \item There exists $\pi^*\in\mathcal{A}(x) $ for which \[ {\essinf_{\mathbb{P'}\in \mathcal{P}^\Theta(t,\mathbb{P}^\theta)}\mathbb{E}^\mathbb{P'} [U(s,X_s^{\pi^*})\mid \F_t]= U(t,X_t^{\pi^*}),\ t\le s,}\;\mathbb{P}^\theta\mbox{-a.s.}\] \end{enumerate} \end{defi} Given the dynamics of the forward performance measure $(U(t,x))_{t\ge0}$, we will solve for the optimal investment strategy; the problem can be formulated similarly to \eqref{DPP}. \begin{Problem}[Robust Investment Problem]\label{RobustOpt} Given the robust forward performance $(U(t,x))_{t\ge0}$, the investment problem is to solve \begin{equation} U(t,x) = \mathop {\sup }\limits_{\pi \in \mathcal{A}(x)} \mathop {\inf }\limits_{\mathbb{P'}\in \mathcal{P}^\Theta(t,\mathbb{P}^\theta)} {\mathbb{E}^\mathbb{P'}}[U(s,X_s^\pi )\mid \mathcal{F}_t, {X_t^\pi} = x], \quad\mathbb{P}^\theta\mbox{-a.s.}\,, \end{equation} where $X^\pi$ follows \eqref{Wealth} and $\mathcal{P}^\Theta(t,\mathbb{P}^\theta)$ is given in \eqref{Con:P}. \end{Problem} The solution of this problem provides the robust investment strategy $\pi^*$ and the worst-case scenario $(\mu^{\pi^*},\sigma^{\pi^*})$ under ambiguity. In turn, they will implicitly provide the corresponding probability measure $\mathbb{P}^{\theta^*}$. In the next section, we will introduce the construction methods for the forward performance under ambiguity, and then solve the robust investment problem. \section{Robust Investment under the Forward Performance Measures} \label{Construction} The specification of a forward performance measure $(\bar U(t,x))_{t\ge0}$ can take the market state and investor's wealth level into account at time $t$.
Mathematically, $(\bar U(t,x))_{t\ge0}$ is called a stochastic flow, a stochastic process with a spatial parameter. It can be characterized by its drift random field and diffusion random field. Under certain regularity hypotheses \citep{Karoui2013}, it can be written in the integral form \begin{equation}\label{FP} \bar U(t,x)=u(x)+\int_0^t\beta(s,x)\ud s+\int_0^t\gamma(s,x)\ud \bar{B}_s \,, \end{equation} where $\bar{B}$ is a standard Brownian motion defined on some probability space, $u$ is the initial utility, and $\beta$ and $\gamma$ are the so-called drift random field and diffusion random field, respectively. To guarantee that a stochastic flow $(\bar U(t,x))_{t\ge0}$ satisfies Definition \ref{RFP}, its drift random field $\beta$ and diffusion random field $\gamma$ must satisfy certain structural conditions. By exploiting this structure, \cite{Musiela2010a} constructed some examples of forward performance measures. In this framework, the driving force of market randomness is modelled by the standard Brownian motion $\bar {B}$. We will generalize this framework to account for ambiguity on the driving force of market randomness or the risky asset price. Different from the dynamics of the risky asset price \eqref{SDE1}, we give even more freedom to an investor's preference, and propose a robust forward performance measure of the following form, \begin{equation}\label{U:flow} U(t,x) = u(x) + \int_0^t \left[{\beta (s,x)} + \delta(s,x)\mu_s+ \gamma (s,x)\sigma_s^2 \right]\ud{s} + \int_0^t {\eta (s,x)\sigma_s\ud{W_s^\theta}} , \end{equation} where $W^\theta$ is a Brownian motion under $\mathbb{P}^\theta\in \mathcal{P}^\Theta$ and $\theta=(\mu_t ,\sigma_t )_{t\ge 0} \in \Gamma^\Theta$\,. The random fields $(\beta,\delta,\gamma,\eta)$ characterize an investor's attitudes toward wealth level, ambiguity, and market risk.
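As a point of reference for the structure \eqref{U:flow}, consider the ambiguity-free special case in which $\mu$ and $\sigma$ are known constants and the diffusion random field vanishes ($\eta\equiv 0$). For CRRA initial utility $u(x)=x^{1-\gamma}/(1-\gamma)$, $\gamma\neq 1$, a well-known time-monotone example (in the spirit of \cite{Musiela2010a}) is
\begin{equation*}
U(t,x)=\frac{x^{1-\gamma}}{1-\gamma}\,\exp\!\Big(-\frac{1-\gamma}{2\gamma}\,\lambda^{2}t\Big),\qquad \lambda=\frac{\mu-r}{\sigma}.
\end{equation*}
A direct application of It\^o's formula to $U(t,X_t^{\pi})$ shows that its drift is a positive multiple of $(\mu-r)\pi-\tfrac{1}{2}\gamma\sigma^{2}\pi^{2}-\lambda^{2}/(2\gamma)$, which is nonpositive for every constant proportion $\pi$ and vanishes at $\pi^{*}=(\mu-r)/(\gamma\sigma^{2})$, so the supermartingale/martingale requirements of Definition \ref{RFP} hold trivially, the infimum over the singleton set of measures reducing to an ordinary expectation.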
In particular, the volatility term $\eta(t,x)\sigma_t$ of the robust forward performance measure reflects the investor's ambiguity about her preferences in the future, and is subject to her choice. The BSDE-based approach proposed by \cite{Chong2018} captures the investor's concern about parameter uncertainty through the generator of the associated BSDE. Different from this BSDE-based approach, we explicitly embed such concern into the axiomatic formulation \eqref{U:flow}. For any given robust forward preference of the form \eqref{U:flow}, Problem \ref{RobustOpt} allows an investor to maximize her utility under the worst-case scenario of $ (\mu_t ,\sigma_t )_{t\ge 0} \in \Gamma^\Theta$. To make the investment problem tractable, the forward performance measure is assumed to be sufficiently regular. For this reason, we introduce the notion of an $\mathcal{L}^{2}(\mathbb{P})$-smooth stochastic flow. \begin{defi}[$\mathcal{L}^{2}(\mathbb{P})$-Smooth Stochastic Flow] Let $F: \Omega\times[0,\infty)\times\ensuremath{\mathbb{R}}\rightarrow\ensuremath{\mathbb{R}}$ be a stochastic flow with spatial argument $x$ and local characteristics $(\beta,\gamma)$, i.e., \[F(t,x) = F(0,x) + \int_0^t {\beta(s,x)\ud s} + \int_0^t {\gamma (s,x)\ud{B^\mathbb{P}_s}} ,\] where $B^\mathbb{P}$ is a Brownian motion defined on a filtered probability space $(\Omega,(\mathcal {F}_{t})_{t\geq 0},\mathbb{P})$.
$F$ is said to be $\mathcal{L}^{2}(\mathbb{P})$-smooth, or to belong to $\mathcal{L}^{2}(\Omega,(\mathcal {F}_{t})_{t\geq 0},\mathbb{P})$, if \begin{enumerate} \item [(i)] for each $x\in \mathbb{R}$, $F(\cdot,x)$ is continuous; for each $t>0$, $F(t,\cdot)$ is a $C^2$-map from $\ensuremath{\mathbb{R}}$ to $\ensuremath{\mathbb{R}}$, $\mathbb{P}$-a.s., \item [(ii)]$\beta:\Omega\times[0,\infty)\times\ensuremath{\mathbb{R}}\rightarrow\ensuremath{\mathbb{R}} $ and $\gamma:\Omega\times[0,\infty)\times\ensuremath{\mathbb{R}}\rightarrow\ensuremath{\mathbb{R}}^d $ are processes continuous in $(t,x)$ such that \begin{itemize} \item [(a)] for each $t>0$, $\beta(t,\cdot)$ and $\gamma(t,\cdot)$ belong to $C^1(\ensuremath{\mathbb{R}})$, $\mathbb{P}$-a.s.; \item [(b)] for each $x\in\ensuremath{\mathbb{R}}$, $\beta(\cdot,x)$ and $\gamma(\cdot,x) $ are $\mathcal{F}$-adapted. \end{itemize} \end{enumerate} \end{defi} For $\mathbb{P^\theta}\in \mathcal{P}^\Theta$, we are now ready to formulate the robust forward performance as an $\mathcal{L}^2(\mathbb{P^\theta})$-smooth stochastic flow \begin{equation}\label{RU} U(t,x) = u(x) + \int_0^t \left[{\beta (s,x)} + \delta(s,x)\mu_s+ \gamma (s,x)\sigma_s^2 \right]\ud{s} + \int_0^t {\eta (s,x)\sigma_s\ud{W_s^\theta}} , \end{equation} where $W^\theta$ is a Brownian motion under $\mathbb{P}^\theta\in \mathcal{P}^\Theta$ and $\theta=(\mu_t ,\sigma_t )_{t\ge 0} \in \Gamma^\Theta$\,. Its smoothness plays a key role in constructing robust forward performance measures by specifying the structure of $(\beta,\delta,\gamma,\eta)$. \begin{lem}\label{AssumptionThm} For $\mathbb{P^\theta}\in \mathcal{P}^\Theta$, let $U$ be an $\mathcal{L}^2(\mathbb{P^\theta})$-smooth stochastic flow as defined in \eqref{RU}.
Suppose that \begin{enumerate} \item[(i)] the mapping $x\rightarrow U(t,x)$ is strictly concave and increasing; \item [(ii)]for an arbitrary $\pi\in \mathcal{A}(x)$, there exists $(\mu^\pi,\sigma^\pi)\in \Gamma^\Theta$ such that \[\essinf_{ \mathbb{P}'\in \mathcal {P}(t,\mathbb{P}^\theta)}\mathbb{E}^\mathbb{P'}[U(s,X_s^{\pi})\mid \F_t]= \mathbb{E}^{\mathbb{P}^{\mu^\pi,\sigma^\pi}} [U(s,X_s^{\pi})\mid \F_t], \; t\le s, \mathbb{P^{\theta}}\mbox{-}a.s.\,,\] and $Y_s=U(s,X_s^{\pi})$ is a $\mathbb{P}^{\mu^\pi,\sigma^\pi}$-supermartingale; \item [(iii)] there exists $\pi^*\in \mathcal{A}(x)$ such that $Y_s^*=U(s,X_s^{\pi^*})$ is a $\mathbb{P}^{ \mu^{\pi^*},\sigma^{\pi^*}}$-martingale. \end{enumerate} Then $\pi^*$ is the optimal investment strategy for Problem \ref{RobustOpt}, associated with the worst-case scenario $(\mu^{\pi^*},\sigma^{\pi^*})$ of $(\mu ,\sigma )$. \end{lem} \begin{proof} For each $\pi\in \mathcal{A}(x)$, since $Y_s=U(s,X_s^{\pi})$ is a $\mathbb{P}^{\mu^\pi,\sigma^\pi}$-supermartingale, \[\essinf_{ \mathbb{P}'\in \mathcal {P}(t,\mathbb{P}^\theta)}\mathbb{E}^\mathbb{P'}[U(s,X_s^{\pi})\mid \F_t]= \mathbb{E}^{\mathbb{P}^{\mu^\pi,\sigma^\pi}} [U(s,X_s^{\pi})\mid \F_t]\le U(t,X_t^\pi),\; t\le s, \mathbb{P^{\theta}}\mbox{-}a.s. \] Since there exists $\pi^*\in \mathcal{A}(x)$ such that $Y_s^*=U(s,X_s^{\pi^*})$ is a $\mathbb{P}^{ \mu^{\pi^*},\sigma^{\pi^*}}$-martingale, we have \[\essinf_{ \mathbb{P}'\in \mathcal {P}(t,\mathbb{P}^\theta)}\mathbb{E}^\mathbb{P'}[U(s,X_s^{\pi^{*}})\mid \F_t]= \mathbb{E}^{\mathbb{P}^{\mu^{\pi^{*}},\sigma^{\pi^{*}}}} [U(s,X_s^{\pi^{*}})\mid \F_t]= U(t,X_t^{\pi^{*}}),\; t\le s, \mathbb{P^{\theta}}\mbox{-}a.s. \] Recalling the definition of robust forward performance (Definition \ref{RFP}), we see that $U$ is a robust forward performance measure and $\pi^*$ is optimal, which proves the lemma.
\end{proof} Lemma \ref{AssumptionThm} provides a method to find the worst-case scenario of the mean return and volatility of the risky asset, together with the corresponding investment strategy, as stated in Theorem \ref{th4.1}. \begin{thm} \label{th4.1} Let $U$ be an $\mathcal{L}^2(\mathbb{P^\theta})$-smooth stochastic flow on $(\Omega,\mathbb{F},\mathbb{P}^\theta)$ with $\mathbb{P^\theta}\in \mathcal{P}^\Theta$ and $\theta=(\mu_t ,\sigma_t )_{t\ge 0} \in \Gamma^\Theta$, such that the mapping $x\rightarrow U(t,x)$ is strictly concave and increasing. Suppose the following hold. \begin{enumerate} \item [(i)]$U$ satisfies the equation \begin{equation}\label{H1} \begin{aligned} &\sup_{\pi}\inf_{(\mu,\sigma)\in \Theta}\Big\{\beta(t,{x})+\delta(t,x)\mu+\gamma(t,x)\sigma^2+U_{x}(t,x)(\mu-r)\pi x\\ &\quad\quad\quad\quad\quad+\eta_x(t,{x})\pi x\sigma^2+\frac{1}{2}U_{xx}(t,x)\pi^2\sigma^2x^2\Big\}=0. \end{aligned} \end{equation} \item [(ii)] For any $\pi (t,x) \in \mathbb{R}$, there exists $(\tilde{\mu}_t,\tilde{\sigma}_t)\in \Theta$ such that \begin{eqnarray*} && \inf_{(\mu,\sigma)\in \Theta}\Big\{\delta(t,x)\mu+\gamma(t,x)\sigma^2+U_{x}(t,x)(\mu-r)\pi x+\eta_x(t,{x})\pi x\sigma^2+\frac{1}{2}U_{xx}(t,x)\pi^2\sigma^2x^2\Big\}\\ &&=\delta(t,x)\tilde{\mu}_t+\gamma(t,x)\tilde{\sigma}^2_t+U_{x}(t,x)(\tilde{\mu}_t-r)\pi x+\eta_x(t,{x})\pi x\tilde{\sigma}^2_t+\frac{1}{2}U_{xx}(t,x)\pi^2\tilde{\sigma}^2_tx^2.
\end{eqnarray*} \end{enumerate} Let $\pi^{*}(t,x) \in \mathbb{R}$ satisfy \begin{eqnarray*} \pi^{*}(t,x) &=& \arg\sup\limits_{\pi} \inf_{(\mu,\sigma)\in \Theta}\Big\{\delta(t,x)\mu+\gamma(t,x)\sigma^2+U_{x}(t,x)(\mu-r)\pi x+\eta_x(t,{x})\pi x\sigma^2\\&&\qquad+\frac{1}{2}U_{xx}(t,x)\pi^2\sigma^2x^2\Big\}, \end{eqnarray*} and let $(\mu^{*},\sigma^{*})$ satisfy \begin{eqnarray}\label{martingale} &&\sup\limits_{\pi}\inf\limits_{(\mu,\sigma)\in \Theta}\Big\{ \delta(t,x)\mu+\gamma(t,x)\sigma^2+U_{x}(t,x)(\mu-r)\pi x+\eta_x(t,{x})\pi x\sigma^2+\frac{1}{2}U_{xx}(t,x)\pi^2\sigma^2x^2\Big\}\nonumber\\ &= & \delta(t,x)\mu^{*}_t+\gamma(t,x)(\sigma^{*}_t)^2+U_{x}(t,x)(\mu^{*}_t-r)\pi^{*} x+\eta_x(t,{x})\pi^{*} x(\sigma^{*}_t)^2\nonumber\\&&+\frac{1}{2}U_{xx}(t,x)(\pi^{*})^2(\sigma^{*}_t)^2x^2. \end{eqnarray} Let $X^{*}$ be the unique solution of the stochastic differential equation \begin{equation*}\label{X:star} \ud X^{*}_t=(\mu^{*}_t-r)\pi^{*}_t X_t^* \ud t+\pi^{*}_t X_t^* \sigma^{*}_t\ud W_t^{\mu^{*},\sigma^{*}},\quad X^{*}_0 =x. \end{equation*} Then $\pi^{*}(t,X_{t}^{*} ) $ solves Problem \ref{RobustOpt}.
\end{thm} \begin{proof} Under the regularity conditions on $U$, we apply the It\^{o}-Ventzell formula to $U(t,X^\pi)$ for any admissible portfolio $X^\pi$ under each $\mathbb{P}^\theta\in \mathcal{P}^\Theta$: \begin{align*} \ud U(t,{X_t^\pi}) &= \left\{\beta (t,{X_t^\pi})+\delta(t,X_t^\pi)\mu_t+\gamma(t,X^\pi_t)\sigma_t^2\right\} \ud t + {\eta (t,{X_t^\pi})\sigma_t\ud{W^\theta_t}} + {{U_x}(t,{X_t^\pi})\ud{X_t^\pi}}\\ & \quad+ \frac{1}{2} {{U_{xx}}(t,{X_t^\pi})\ud \langle {X^\pi} \rangle_t } + {{\eta _x}(t,{X_t^\pi})\sigma_t\ud \langle X^\pi,{W^\theta}{ \rangle _t}}\\ &=\Big\{\beta(t,{X_t^\pi})+\delta(t,X_t^\pi)\mu_t+\gamma(t,X^\pi_t)\sigma_t^2+U_{x}(t,X_t^\pi)(\mu_t-r)\pi_t X_t^\pi+\eta_x(t,{X_t^\pi})\pi_t{X_t^\pi}\sigma_t^2 \\ &\quad +\frac{1}{2}U_{xx}(t,X_t^\pi)\pi^2_t\sigma_t^2({X_t^\pi})^2\Big\}\ud t+\left\{\eta(t,X_t^\pi)\sigma_t+U_x(t,X_t^\pi)\pi_t X_t^\pi\sigma_t\right\}\ud W_t^{\theta}\,. \end{align*} We denote $g(t,\mu_t, \sigma_t)=\beta(t,{X_t^\pi})+\delta(t,X_t^\pi)\mu_t+\gamma(t,X^\pi_t)\sigma_t^2+U_{x}(t,X_t^\pi)(\mu_t-r)\pi_t X_t^\pi+\eta_x(t,{X_t^\pi})\pi_t{X_t^\pi}\sigma_t^2 +\frac{1}{2}U_{xx}(t,X_t^\pi)\pi^2_t\sigma_t^2({X_t^\pi})^2$. For $t<s$, noting that the stochastic integral has zero conditional expectation, \begin{eqnarray*} \essinf_{\mathbb{P}'\in \mathcal {P}(t,\mathbb{P}^\theta)}\mathbb{E}^\mathbb{P'}[U(s,X_s^{\pi})\mid \F_t] &= &\essinf_{\mathbb{P'}\in \mathcal {P}(t,\mathbb{P}^\theta)}\mathbb{E}^\mathbb{P'}[U(t,X_t^{\pi})+\int_{t}^{s}g(r,\mu_r, \sigma_r)\ud r\mid \F_t]\\ &\geq&\essinf_{\mathbb{P}'\in \mathcal {P}(t,\mathbb{P}^\theta)}\mathbb{E}^\mathbb{P'}[U(t,X_t^{\pi})+\int_{t}^{s}\inf\limits_{\mu, \sigma}g(r,\mu , \sigma )\ud r\mid \F_t]\\ &=&\essinf_{\mathbb{P}'\in \mathcal {P}(t,\mathbb{P}^\theta)}\mathbb{E}^\mathbb{P'}[U(t,X_t^{\pi})+\int_{t}^{s}g(r,\tilde{\mu} , \tilde{\sigma} )\ud r\mid \F_t]\\ &=&\mathbb{E}^\mathbb{P^{\mu^\pi,\sigma^\pi}}[U(t,X_t^{\pi})+\int_{t}^{s}g(r,\tilde{\mu} , \tilde{\sigma} )\ud r\mid \F_t]\\ &=&\mathbb{E}^\mathbb{P^{\mu^\pi,\sigma^\pi}}[U(s,X_s^{\pi})\mid \F_t]\,,
\end{eqnarray*} where $(\mu^{\pi}, \sigma^{\pi})=(\mu, \sigma)$ on $[0,t]$ and $(\mu^{\pi}, \sigma^{\pi})=(\tilde{\mu}, \tilde{\sigma})$ on $[t,s]$. Therefore, \begin{equation}\label{Cor:Lem3} \essinf_{\mathbb{P}'\in \mathcal {P}(t,P^\theta)}\mathbb{E}^\mathbb{P'}[U(s,X_s^{\pi})\mid \F_t] = \mathbb{E}^\mathbb{P^{\mu^\pi,\sigma^\pi}}[U(s,X_s^{\pi})\mid \F_t]\,. \end{equation} Moreover, since \eqref{H1} implies that the drift of $U(s,X_s^{\pi})$ is nonpositive under $\mathbb{P}^{\mu^\pi,\sigma^\pi}$, the process $U(s,X_s^{\pi})$ is a $\mathbb{P}^{\mu^\pi,\sigma^\pi}$-supermartingale. From \eqref{martingale} it follows that $U(s,X_s^{\pi^*})$ is a $\mathbb{P}^{\mu^*,\sigma^*}$-martingale. Recalling Lemma \ref{AssumptionThm} and \eqref{Cor:Lem3}, $(\mu^*,\sigma^*)$ represents the worst-case scenario of the mean return and volatility of the risky asset, and $\pi^*$ is the corresponding investment strategy. \end{proof} Theorem \ref{th4.1} provides a natural way to construct a robust forward performance measure, the optimal investment strategy, and the worst-case scenario of the mean return and volatility of risky assets. We summarize these results in Corollary \ref{Col3}. \begin{coll}\label{Col3} \begin{enumerate} \item [(i)] If $U$ is a robust forward performance measure and the worst-case scenario $(\mu^*,\sigma^*)$ is selected, the optimal investment strategy is given in the feedback form \begin{equation}\label{Strategy:vol} \tilde \pi (t,x) = - \frac{{{\eta _x}(t,x){{\sigma_{t}^*} ^2} + (\mu_{t}^* - r){U_x}(t,x)}}{{x{{\sigma_{t}^*}^2}{U_{xx}}(t,x)}}\,, \end{equation} where the first and second terms of the numerator correspond to the non-myopic and myopic parts of the strategy, respectively \citep{Musiela2010a}. \item [(ii)] If $U$ is a robust forward performance measure, its characteristics $(\beta,\delta,\gamma,\eta)$ must satisfy
\begin{equation} \label{con} \inf_{(\mu,\sigma)\in \Theta}\left\{\beta+\delta\mu+\left(\gamma-\frac{\eta_x^2}{2U_{xx}(t,x)}\right)\sigma^2 -\frac{(\mu-r)^2U_x^2(t,x)}{2U_{xx}(t,x)\sigma^2} -\frac{(\mu-r)U_x\eta_x}{U_{xx}(t,x)}\right\}=0\,, \end{equation} for $(t,x)\in [0,\infty)\times\ensuremath{\mathbb{R}}^+$. The minimizer in condition \eqref{con} yields the worst-case scenario $(\mu^*,\sigma^*)$. \end{enumerate} \end{coll} The constraint \eqref{con} on the local characteristics $(\beta,\delta,\gamma,\eta)$ implies that the forward performance measure is not unique for a given initial utility function: by specifying three of the characteristics, we can calculate the fourth. Hence, an investor in this framework has the freedom to specify her initial utility, as well as the additional characteristics of the utility field. In Merton's framework, by contrast, the dynamics and characteristics of the utility field are derived from the terminal utility function, which is specified by the investor at the initial time. We note that the constraint \eqref{con} holds in the pathwise sense. The local characteristics $(\beta,\delta,\gamma,\eta)$ can be interpreted through the investor's local risk tolerance $\tau^U(t,x)=-\frac{U_x(t,x)}{U_{xx}(t,x)}$, the utility risk premium $\varrho^U(t,x,\sigma)=\frac{\eta_x(t,x)\sigma}{U_x(t,x)}$ \citep{Karoui2013}, and the market risk premium $m(\mu,\sigma)=\frac{\mu -r}{\sigma }$.
Actually, the optimal strategy $\tilde \pi$ \eqref{Strategy:vol} can be written as \begin{align} \tilde \pi (t,x)& =\frac{\mu^*-r}{x{\sigma^*}^2}\tau^U-\frac{\eta_x(t,x)}{xU_{xx}(t,x)} \label{Strategy:vol22}\\ &=\frac{\tau^U}{\sigma^*x}\left(\frac{\mu^*-r}{\sigma^*}+ \frac{\eta_x(t,x)\sigma^*}{U_x(t,x)} \right)=\frac{\tau^U}{\sigma^*x}\left(m(\mu^*,\sigma^*)+\varrho^U(t,x,\sigma^*) \right)\,.\label{Strategy:vol2} \end{align} The first component of the investment strategy \eqref{Strategy:vol22}, known as the myopic strategy, resembles the investment policy followed by an investor in markets in which the investment opportunity set remains constant through time. The second one is called the excess hedging demand and represents the additional (positive or negative) investment generated by the volatility process $\eta\sigma$ of the performance process $U$ \citep{Musiela2010a}. Essentially, the decomposition \eqref{Strategy:vol2} reveals that the investment strategy is driven by the investor's risk tolerance, the market risk premium, and the utility risk premium, as well as the worst-case scenario of the mean return $\mu$ and the volatility $\sigma$ of the risky asset. In particular, it is the sum of the utility risk premium and the market risk premium that determines the trading direction of an investor. This statement holds regardless of the specification of the robust forward performance measure. Note that the worst-case scenario of $(\mu,\sigma)$ is characterized by \eqref{con}. To analyze the implications of ambiguity, we restrict ourselves to robust forward performance measures of special forms, and derive analytical solutions for \eqref{con}. \section{Robust Forward Performance of the CRRA type} \label{CRRA:case} The CRRA utility function, a power function of wealth, is among the most commonly used utility functions.
We assume an investor's dynamic preference is characterized by a utility function of the CRRA type over the time horizon $t\in[0,\infty)$, with the initial utility function $u(x)=x^\kappa/\kappa$, $\kappa\in(0,1)$, and time-varying coefficients. More specifically, we set such a forward performance $U$ in the following form \begin{equation}\label{U:pow} \left\{ \begin{aligned} U(t,x)&=\frac{ \exp(\alpha(t))}{\kappa}x^\kappa, &&U(0,x)=x^\kappa/\kappa\,,\\ \ud \alpha(t)&=f(t)\ud t+g(t)\sigma_t\ud W_t^\theta,&&\alpha(0)=0\,, \end{aligned} \right. \end{equation} where $\kappa\in(0,1)$ and $W^\theta=(W^{\theta}_t)_{t\ge 0}$ is a Brownian motion defined on a filtered probability space $(\Omega,\mathbb{F},\mathbb{P}^\theta)$ with $\mathbb{P^\theta}\in \mathcal{P}^\Theta$ and $\theta=(\mu,\sigma)\in \Gamma^\Theta$. Its differential form is then given by \begin{align} \ud U(t,x) & =U(t,x)\left(f(t)\ud t+\frac{1}{2}g^2(t)\sigma_t^2\ud t+g(t)\sigma_t\ud W_t^\theta\right),\quad U(0,x)=x^\kappa/\kappa\,, \label{Power:F} \end{align} and \begin{align} U_x(t,x) & = x^{\kappa-1}\exp(\alpha(t)),\label{Pow:F1}\\ U_{xx}(t,x)&= (\kappa-1)x^{\kappa-2}\exp(\alpha(t)). \end{align} In this case, the utility risk premium is $\varrho^U(t,x,\sigma) =\varrho^U(\sigma)=g(t)\sigma$. We can rewrite the forward performance measure \eqref{Power:F} in the form of \eqref{RU}, where \begin{equation}\label{Pow:F2} \begin{aligned} \beta & =U(t,x)f(t)\,, & \gamma & =\frac{1}{2}U(t,x)g^2(t)\,,&\\ \delta&=0\,,& \eta&=U(t,x)g(t)\,.&\\ \end{aligned} \end{equation} The characteristics $(\beta,\delta,\gamma,\eta)$ can be substituted into the constraint \eqref{con} to specify the structure of the forward performance \eqref{U:pow}.
If there is no ambiguity on the mean return and volatility, the constraint \eqref{con} reduces to \begin{equation}\label{con2} f(t)=\frac{ {g^2}(t){\sigma_t ^2}}{2(\kappa-1)}+\frac{\kappa }{{ \kappa-1 }} \left\{ \frac{(\mu_t - r)^2}{2\sigma_t^2}+(\mu_t - r)g(t) \right\}\,, \end{equation} and the corresponding investment strategy is given by \begin{align} \pi^*&=\frac{g(t){\sigma_t}^2+(\mu_t-r)}{(1-\kappa){\sigma_t}^2}\notag\\ & =\frac{1}{(1-\kappa)\sigma_t}\left(\frac{\mu_t-r}{\sigma_t}+g(t)\sigma_t \right)\notag\\ & =\frac{1}{(1-\kappa)\sigma_t}\left(m(\mu_t,\sigma_t)+\varrho^U(\sigma_t)\right)\,.\label{Str} \end{align} The optimal investment strategy without ambiguity \eqref{Str}, like the optimal strategy with ambiguity \eqref{Strategy:vol2}, shows that the market risk premium and the utility risk premium play an important role in determining the trading direction. In the following subsections, we consider an investor's conservative beliefs and the forward performance of the CRRA type in three settings: ambiguity on the mean return $\mu$, ambiguity on the volatility $\sigma$, and ambiguity on both. The structure of the forward performance in these settings involves optimizations with respect to $\mu$ and $\sigma$, as implied by the constraint \eqref{con}. \subsection{Ambiguity only on the mean return } Ambiguity on the mean return refers to the case where the dynamics of the mean return are unknown while the dynamics of the volatility are known. For the sake of simplicity, we assume $\sigma_t$ is a known constant $\sigma$.
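As a quick numerical check before introducing ambiguity (with hypothetical parameter values), the two equivalent expressions for the no-ambiguity strategy \eqref{Str} can be compared:

```python
def strategy_no_ambiguity(mu, r, sigma, g, kappa):
    """Strategy (Str) without ambiguity: direct form and premium decomposition."""
    direct = (g * sigma**2 + (mu - r)) / ((1 - kappa) * sigma**2)
    market_premium = (mu - r) / sigma        # m(mu, sigma)
    utility_premium = g * sigma              # rho^U(sigma)
    via_premiums = (market_premium + utility_premium) / ((1 - kappa) * sigma)
    return direct, via_premiums

direct, via_premiums = strategy_no_ambiguity(mu=0.05, r=0.01, sigma=0.2, g=0.1, kappa=0.4)
print(direct, via_premiums)  # the two expressions agree
```

The agreement of the two returned values illustrates that \eqref{Str} is exactly the premium decomposition $\frac{1}{(1-\kappa)\sigma}(m+\varrho^U)$.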
\begin{prop} Assume an investor's forward preference $U$ is characterized by the initial utility function $u(x)=x^\kappa/\kappa$ with $\kappa\in (0,1)$\,, and propagates in the following form \begin{align} \ud U(t,x) & =U(t,x)\left(f(t)\ud t+\frac{1}{2}g^2(t)\sigma ^2\ud t+g(t)\sigma \ud W_t^\theta\right), \quad U(0,x)=u(x),\label{Power:Fmu} \end{align} where $f$ and $g$ are deterministic functions of $t$, $\sigma$ is the volatility of the risky asset, and $W^\theta$ is a Brownian motion defined on a filtered probability space $(\Omega,\mathbb{F},\mathbb{P}^\theta)$ with $\mathbb{P^\theta}\in \mathcal{P}^\Theta$ and $\theta=(\mu,\sigma)\in \Gamma^\Theta$. If the investor's ambiguity is characterized by the lower bound $\underline{\mu}$ and upper bound $\overline{\mu}$ of $\mu$, then $f$ must satisfy \begin{equation}\label{Power:f3aa} f (t)= \frac{ {g^2}(t){\sigma ^2}}{2(\kappa-1)} + \frac{\kappa }{{ \kappa-1 }} \left\{ \frac{(\mu^* - r)^2}{2\sigma^2} + (\mu^* - r)g(t) \right\}\,, \end{equation} where \begin{equation}\label{Ustar} \mu^*= \left\{ \begin{aligned} &\overline{\mu}\,, && \quad\mbox{ if } \;\frac{\overline{\mu}-r}{\sigma}<-g(t)\sigma \,,&\\ & r-g(t)\sigma^2\,, & &\quad\mbox{ if }\;\frac{\underline{\mu}-r}{\sigma}\le -g(t)\sigma \le \frac{\overline{\mu}-r}{\sigma},\\ & \underline{\mu}\,,&& \quad\mbox{ if }\;\frac{\underline{\mu}-r}{\sigma}\ge -g(t)\sigma \,.& \end{aligned} \right. \end{equation} Corresponding to the selection of the worst-case mean return $\mu^*$, the investment strategy $\pi^*$ is given by \begin{equation} \label{Pow:Stratergy:m} \pi^*=\frac{g(t){\sigma }^2+(\mu^*-r)}{(1-\kappa){\sigma }^2}\,.
\end{equation} \end{prop} \begin{proof} In this case, the constraint \eqref{con} reduces to \begin{equation}\label{Power:f3a} f(t) = \frac{ {g^2}(t){\sigma ^2}}{2(\kappa-1)} + \kappa {\sup _{\mu \in [\underline{\mu},\overline{\mu}]}}\left\{ \frac{(\mu - r)^2}{2(\kappa - 1)\sigma ^2} + \frac{(\mu - r)g(t)}{\kappa - 1} \right\}\,. \end{equation} Assume the supremum is achieved at $\mu^*$. Simple calculations lead to \begin{equation} \label{UStare2} \mu^*= \left\{ \begin{aligned} &\overline{\mu}\,, && \quad\mbox{ if } \;\overline{\mu}-r<-g(t)\sigma^2 \,,&\\ & r-g(t)\sigma^2\,, & &\quad\mbox{ if }\;\underline{\mu}-r\le -g(t)\sigma^2\le \overline{\mu}-r,\\ & \underline{\mu}\,,&& \quad\mbox{ if }\;\underline{\mu}-r\ge -g(t)\sigma^2\,.& \end{aligned} \right. \end{equation} Since $\sigma>0$, the worst-case return given by \eqref{UStare2} is equivalent to that given by \eqref{Ustar}. Correspondingly, the optimal strategy \eqref{Strategy:vol} reduces to \begin{equation} \pi^*=\frac{g(t){\sigma}^2+(\mu^*-r)}{(1-\kappa){\sigma}^2}\,. \end{equation} \end{proof} We can interpret the selection rule \eqref{Ustar} from the premium point of view. Recalling the definitions of the market risk premium $m(\mu,\sigma)$ and the utility risk premium $\varrho^U(\sigma)$, i.e., \[m(\mu,\sigma)=\frac{\mu-r}{\sigma} \quad\mbox{and}\quad\varrho^U(\sigma)=g(t)\sigma\,,\] we can rewrite \eqref{Ustar} as \begin{equation}\label{Ustar2} \mu^*= \left\{ \begin{aligned} &\overline{\mu}\,, && \quad\mbox{ if } \;m(\overline{\mu},\sigma)+\varrho^U(\sigma)<0 \,,&\\ & r-g(t)\sigma^2\,, & &\quad\mbox{ if }\;m(\underline{\mu},\sigma)+\varrho^U(\sigma)\le 0\le m(\overline{\mu},\sigma)+\varrho^U(\sigma) \,,&\\ & \underline{\mu}\,,&& \quad\mbox{ if }\;m(\underline{\mu},\sigma)+\varrho^U(\sigma)>0 \,.& \end{aligned} \right.
\end{equation} It implies that the worst-case mean return and the trading direction depend on the total risk premium that the investor can achieve under ambiguity on the mean return, i.e., $ m(\mu,\sigma)+\varrho^U (\sigma)$\,. When $m(\underline{\mu},\sigma)+\varrho^U(\sigma)$ is positive, an investor will take $\underline{\mu}$ as the worst-case mean return and take a long position $(\pi>0)$. When $m(\overline{\mu},\sigma)+\varrho^U(\sigma)$ is negative, an investor will take $\overline{\mu}$ as the worst case and take a short position $(\pi<0)$. Otherwise, she will take $r-g(t)\sigma^2$ as the worst-case mean return and will not invest in the risky asset $(\pi=0)$. From this point of view, it is the total risk premium that characterizes the worst-case mean return and the investor's trading direction. This premium-based rule \eqref{Ustar} for the conservative belief on the mean return is consistent with the rules proposed by \cite{Chong2018} and \cite{Lin2014c}. \cite{Chong2018} propose to select the worst-case scenario of the mean return in a feedback form associated with the position on risky assets, i.e., the long and short positions correspond to $\underline{\mu}$ and $\overline{\mu}$, respectively. In the classical framework, the selection of the worst-case mean return depends on the investor's position in the risky asset, as argued by \cite{Lin2014c}: nature decides for a low drift if an investor takes a long position, and for a high drift if an investor takes a short position. However, the rule \eqref{Ustar} is not given in a feedback form associated with an investor's position, but is directly related to the market conditions and the investor's utility risk premium. In this new framework, we highlight the combined effect of the utility risk premium and the market risk premium on the worst-case mean return of the risky asset.
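The premium-based rule \eqref{Ustar2} translates directly into code; the parameter values in the example call below are hypothetical:

```python
def worst_case_mu(mu_lo, mu_hi, r, sigma, g):
    """Selection rule (Ustar2) for the worst-case mean return."""
    rho = g * sigma                       # utility risk premium rho^U(sigma)
    if (mu_hi - r) / sigma + rho < 0:     # total premium negative even at the upper bound
        return mu_hi                      # short position case
    if (mu_lo - r) / sigma + rho > 0:     # total premium positive even at the lower bound
        return mu_lo                      # long position case
    return r - g * sigma**2               # interior worst case: zero total premium

print(worst_case_mu(mu_lo=0.02, mu_hi=0.08, r=0.01, sigma=0.2, g=0.1))
```

For these values the lower-bound total premium is positive, so the rule returns $\underline{\mu}$ and the investor takes a long position.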
\subsection{Ambiguity only on volatility} We refer to volatility ambiguity as the case where the dynamics of the volatility are unknown, but constrained to the interval $[\underline{\sigma},\overline{\sigma}]$ with $0< \underline{\sigma}\le\overline{\sigma}$. For the sake of simplicity, we suppose $\mu_t$ to be a constant $\mu$ over time. \begin{prop} Assume an investor's preference $U$ is characterized by the initial utility function $u(x)=x^\kappa/\kappa$ with $\kappa\in (0,1)$\,, and propagates in the following form \begin{align} \ud U(t,x) & =U(t,x)\left(f(t)\ud t+\frac{1}{2}g^2(t)\sigma_t^2\ud t+g(t)\sigma_t\ud W_t^\theta\right), \quad U(0,x)=u(x),\label{Power:Fsigma} \end{align} where $f$ and $g$ are deterministic functions of $t$, and $W^\theta$ is a Brownian motion defined on the filtered probability space $(\Omega,\mathbb{F},\mathbb{P}^\theta)$ with $\mathbb{P^\theta}\in \mathcal{P}^\Theta$ and $\theta=(\mu,\sigma)\in \Gamma^\Theta$. If the investor's ambiguity is characterized by the lower bound $\underline{\sigma}$ and upper bound $\overline{\sigma}$ of $\sigma$, then $f$ must satisfy \begin{equation}\label{Power:fv} f (t)= \frac{{\kappa (\mu - r)g(t)}}{{\kappa - 1}} + \frac{1}{2(\kappa-1)} {\left( {{g^2}(t){{\sigma^*} ^2} + \frac{{\kappa {{(\mu - r)}^2}}}{{{{\sigma^*} ^2}}}} \right)} \,, \end{equation} where \begin{equation}\label{Con:Pow} {\sigma^*}^2=\left\{ \begin{aligned} &\underline{\sigma}^2,&&\mbox{ if } g^2(t)\ge \frac{\kappa(\mu-r)^2}{\underline{\sigma}^4},\\ & \overline{\sigma}^2,&&\mbox{ if } g^2(t)\le \frac{\kappa(\mu-r)^2}{\overline{\sigma}^4},\\ & \frac{|\mu-r|}{|g(t)|}\sqrt{\kappa},&&\mbox{ if } \frac{\kappa(\mu-r)^2}{\overline{\sigma}^4} \le g^2(t)\le \frac{\kappa(\mu-r)^2}{\underline{\sigma}^4}. \end{aligned} \right.
\end{equation} Correspondingly, the optimal investment strategy is \[ \pi^* =\frac{g(t){\sigma^*}^2+(\mu-r)}{{\sigma^*}^2(1-\kappa)}\,.\] \end{prop} \begin{proof} In this setting, the constraint \eqref{con} reduces to \begin{equation*} f(t) = \frac{{\kappa (\mu - r)g(t)}}{{\kappa - 1}} + {\sup _{{\sigma ^2\in[\underline{\sigma}^2,\overline{\sigma}^2]}}} \frac{1}{2( \kappa-1)} {\left( {{g^2}(t){\sigma ^2} + \frac{{\kappa {{(\mu - r)}^2}}}{{{\sigma ^2}}}} \right)} \,. \end{equation*} To solve the optimization problem, we denote $v:=\sigma^2$, and define a function $h$ by \[h(v)=-g^2(t)v-\frac{\kappa(\mu-r)^2}{v},\quad v\in[\underline{\sigma}^2,\overline{\sigma}^2]\,.\] A simple analysis shows that $h$ reaches its maximum at $v^*$, where \begin{equation*} v^*=\left\{ \begin{aligned} &\underline{\sigma}^2,&&\mbox{ if } g^2(t)\underline{\sigma}^2\ge \frac{\kappa(\mu-r)^2}{\underline{\sigma}^2},\\ & \overline{\sigma}^2,&&\mbox{ if } g^2(t)\overline{\sigma}^2\le \frac{\kappa(\mu-r)^2}{\overline{\sigma}^2},\\ & \frac{|\mu-r|}{|g(t)|}\sqrt{\kappa},&&\mbox{ if } \frac{\kappa(\mu-r)^2}{\overline{\sigma}^4} \le g^2(t)\le \frac{\kappa(\mu-r)^2}{\underline{\sigma}^4}. \end{aligned} \right. \end{equation*} Since $v=\sigma^2$, this gives the worst-case scenario ${\sigma^*}^2$ of $\sigma^2$ in \eqref{Con:Pow}. \end{proof} The conservative belief on the volatility depends on the market risk premium and the utility risk premium, as in the case of the conservative belief on the mean return \eqref{Ustar} or \eqref{Ustar2}. We will show that it is the relative value of these two premiums that determines the conservative belief on the volatility. Recall that it is the sum of these two premiums that determines the conservative belief on the mean return, as shown by \eqref{Ustar} or \eqref{Ustar2}.
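Similarly, the selection rule \eqref{Con:Pow} for the worst-case variance admits a direct implementation; the parameter values in the example call are hypothetical:

```python
import math

def worst_case_var(var_lo, var_hi, mu, r, g, kappa):
    """Selection rule (Con:Pow) for the worst-case variance sigma*^2."""
    if g**2 >= kappa * (mu - r)**2 / var_lo**2:      # tau^2(sigma_lo) >= kappa
        return var_lo
    if g**2 <= kappa * (mu - r)**2 / var_hi**2:      # tau^2(sigma_hi) <= kappa
        return var_hi
    # interior minimiser of g^2 v + kappa (mu - r)^2 / v over v = sigma^2
    return math.sqrt(kappa) * abs(mu - r) / abs(g)

print(worst_case_var(var_lo=0.04, var_hi=0.09, mu=0.05, r=0.01, g=0.1, kappa=0.4))
```

The two boundary conditions are exactly the premium comparisons $\tau^2(\underline{\sigma})\ge\kappa$ and $\tau^2(\overline{\sigma})\le\kappa$ discussed below.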
In our specific setting, ambiguity on the mean return only affects the market risk premium, while ambiguity on the volatility affects both the market risk premium and the utility risk premium. It is then natural to consider the effect of their relative value. Define the relative value $\tau(\sigma)$ of the utility risk premium $\varrho^U(\sigma)$ with respect to the market price of risk $m(\mu,\sigma)$ as \[\tau(\sigma)=\frac{\varrho^U(\sigma)}{m(\mu,\sigma)}=\frac{g(t)\sigma}{(\mu-r)/\sigma}=\frac{g(t)\sigma^2}{\mu-r}\,.\] Then, the worst-case volatility \eqref{Con:Pow} can be rewritten as \begin{equation}\label{Con:Pow2} {\sigma_t^*}^2=\left\{ \begin{aligned} &\underline{\sigma}^2,&&\mbox{ if } \tau^2(\underline{\sigma})\ge\kappa,\\ & \overline{\sigma}^2,&&\mbox{ if } \tau^2(\overline{\sigma})\le\kappa ,\\ & \frac{|\mu-r|}{|g(t)|}\sqrt{\kappa},&&\mbox{ otherwise} . \end{aligned} \right. \end{equation} The rule \eqref{Con:Pow2} for the worst-case volatility implies that if the relative value of the utility risk premium over the market risk premium is sufficiently large compared with the investor's risk-aversion parameter $\kappa$, the investor will take $\underline{\sigma}$ as the worst-case volatility. Alternatively, if this relative value is sufficiently small compared with $\kappa$, the investor will take $\overline{\sigma}$ as the worst-case volatility. Otherwise, the worst-case volatility depends on her attitude toward risk and her ambiguity about her future preferences. Overall, an ambiguity-averse investor takes her attitudes toward risk and ambiguity into account when she is ambiguous about the volatility of the driving force of market randomness. \subsection{Structured ambiguity on mean return and volatility} Empirical research shows that the mean return can be either positively or negatively related to the volatility of risky assets \citep[see e.g.][]{Omori2007,Bandi2012,Yu2012}.
Without a consensus on their relation, we employ a flexible model to capture structured ambiguity on the mean return and volatility of the driving force of market randomness \citep{Epstein2013,Epstein2014}, \begin{equation}\label{structure} \Theta=\Big\{(\mu,\sigma)\mid \sigma^2 = \sigma_0^2+\alpha z, \mu-r =\mu_0+z, z\in [z_{ 1},z_{2}]\Big\}, \end{equation} where $\sigma_0,\mu_0>0$ and $\alpha\in\ensuremath{\mathbb{R}}$ are such that $\sigma^2>0$. A positive $\alpha$ implies that the return is positively related to the volatility, and a negative $\alpha$ the opposite. The selection of the worst-case mean return and volatility then reduces to the selection of $z^*\in [z_{ 1},z_{2}]$, where the spread $z_2-z_1$ represents the size of an investor's ambiguity on the mean return and volatility. Recalling the constraints \eqref{con} and \eqref{Pow:F2}, we have \begin{align} f & = \sup _{\mu,\sigma^2}\frac{1}{\kappa-1}\left\{\kappa g(t)(\mu-r)+\frac{1}{2}\left(g^2(t)\sigma^2+\frac{\kappa(\mu-r)^2}{\sigma^2}\right)\right\}\notag\\ &={\sup _{{z}}}\frac{1}{\kappa-1}\left\{ {{\kappa (\mu_0+z )g(t)}}{ } +{\frac{1}{{2 }}\left( { {(\sigma_0 ^2+\alpha z)}{g^2(t)} + \frac{{\kappa {{(\mu_0 +z)}^2}}}{{{\sigma_0 ^2+\alpha z}}}} \right)} \right\}\notag\\ &= {\sup _{{z\in[z_1,z_2]}}}\frac{1}{\kappa-1}\left\{ az+\frac{b}{2(\sigma_0^2+\alpha z)}+c \right\} \,,\label{Power:structure} \end{align} where \begin{equation}\label{abc} \left\{ \begin{aligned} a &= \kappa g(t) + \frac{1}{2}\alpha{g^2}(t) + \frac{\kappa }{{2\alpha }} \,,\\ b &= \kappa {\left( {{\mu _0} - \frac{{\sigma _0^2}}{\alpha }} \right)^2} \,,\\ c& = \kappa {\mu _0}g(t) + \frac{1}{2}\sigma _0^2{g^2}(t) + \frac{{\kappa \sigma _0^2}}{{2{\alpha ^2}}} + \frac{{\kappa \left( {\alpha {\mu _0} - \sigma _0^2} \right)}}{{{\alpha ^2}}}\,. \end{aligned} \right.
\end{equation} For any given set of parameters $(\sigma_0,\mu_0,\kappa,\alpha,z_1,z_2,g)$, the problem \eqref{Power:structure} can easily be solved with respect to $z\in[z_1,z_2]$, and the optimal investment strategy is then given by \eqref{Strategy:vol}. The analytical expression for $z^*$ is omitted here, since it provides little direct intuition about the determinants of the conservative beliefs. Obviously, the value of $z^*$ depends on the interval $[z_1,z_2]$ and the shape of the objective in \eqref{Power:structure}. To gain more intuition about the conservative beliefs and the determination of $z^*$, we define \begin{equation} \hat f(z)=\frac{1}{\kappa-1}\left\{ az+\frac{b}{2(\sigma_0^2+\alpha z)}+c \right\} , \end{equation} where $a,b,c$ are given in \eqref{abc}. The second-order derivative $\hat f^{''}$ of $\hat f$ with respect to $z$ is \[\hat f^{''}(z)=\frac{\alpha^2b}{(\kappa-1)(\sigma_0^2+\alpha z)^3}\,.\] Since $\kappa\in(0,1)$, $\sigma_0^2+\alpha z>0$, and $b>0$ (provided $\alpha\mu_0\ne\sigma_0^2$), we have \[\hat f^{''}(z)<0\,\mbox{ for } z\in[z_1,z_2]\,.\] That is, $\hat f$ is a concave function on $[z_1,z_2]$. This property relates $z^*$ to the model parameters and the concavity of $\hat f$, as shown in Figure \ref{Fig1} with some toy examples. These toy examples illustrate the concavity of $\hat f$ in the settings $\alpha=0.5$ and $\alpha=-0.5$ with the following common parameters \[ \kappa=0.4,\quad \mu_0=0.02,\quad\sigma_0^2=0.1,\quad g(t)\equiv0.1\,. \] For each $\alpha$, we denote by $\tilde{z}$ the value of $z\in [-0.2,0.2]$ at which $\hat f$ reaches its maximum. Then, we have three cases of $[z_1,z_2]\subseteq[-0.2,0.2]$ for each $\alpha$, \emph{i.e.}, $z_2<\tilde{z}$, $z_1<\tilde{z}<z_2$, and $\tilde{z}<z_1$. Taking the case $\alpha=0.5$ and $ z_2<\tilde{z}$ as an example, $\hat f$ reaches its maximum over $[z_1,z_2]$ at $z^*=z_2$.
Correspondingly, we have $\mu^*=\overline{\mu}$ and ${\sigma^*}^2=\overline{\sigma}^2$. One can easily read off $z^*$ in the other cases from Figure \ref{Fig1}. We summarize these toy examples in Table \ref{opz22}. Generally speaking, $z^*$ may take the upper or lower bound of the interval for $z$, or some value lying in the interval. When the mean return is positively related to the volatility of the risky asset $(\alpha>0)$, the worst-case scenario of these two parameters is $(\underline{\mu},\underline{\sigma}^2)$, $(\overline{\mu},\overline{\sigma}^2)$, or some intermediate value corresponding to $\tilde{z}\in[z_1,z_2]$. When they are negatively related, the conservative belief is $(\underline{\mu},\overline{\sigma}^2)$, $(\overline{\mu},\underline{\sigma}^2)$, or some intermediate value corresponding to $\tilde{z}\in[z_1,z_2]$. \begin{table}[h] \centering \caption{Conservative belief on the mean return and the volatility}\label{opz22} \setlength{\tabcolsep}{6mm}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{$\alpha>0$} &\multicolumn{3}{c|}{$\alpha<0$} \\ \hline $z^*$ & $z_1$ &$\tilde{z}\in (z_1,z_2)$& $z_2$ & $z_{1}$ & $\tilde{z}\in (z_1,z_2)$& $z_{2}$ \\ \hline $\mu^*$ & $\underline{\mu}$ & $\mu_0+r+\tilde{z}$&$\overline{\mu}$ & $\underline{\mu}$ &$\mu_0+r+\tilde{z}$&$\overline{\mu}$ \\ ${\sigma^*}^2$ & $\underline{\sigma}^2$ & $\sigma_0^2+\alpha \tilde{z}$&$\overline{\sigma}^2$ & $\overline{\sigma}^2$ & $\sigma_0^2+\alpha \tilde{z}$ &$\underline{\sigma}^2$\\ \hline \end{tabular}} \end{table} By specifying the interval $[z_1,z_2]$, we can verify not only the conservative belief on $(\mu,\sigma^2)$ given in Table \ref{opz22} or Figure \ref{Fig1}, but also the relation between the trading direction and the total risk premium. Some alternatives for $[z_1,z_2]$ are given in Table \ref{opz2}. The worst-case scenario $(\mu^*,\sigma^*)$ is consistent with the implications of Table \ref{opz22} or Figure \ref{Fig1}.
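The entries of Table \ref{opz2} can be reproduced by a grid search over the concave objective in \eqref{Power:structure}, written here directly in terms of $(\mu-r,\sigma^2)$ rather than $(a,b,c)$; the parameters below match the toy examples in the text:

```python
def worst_case_z(z1, z2, mu0, var0, alpha, g, kappa, n=10001):
    """Grid search for z* maximising the concave objective in (Power:structure)."""
    def objective(z):
        excess = mu0 + z                 # mu - r
        var = var0 + alpha * z           # sigma^2
        return (kappa * g * excess
                + 0.5 * (g**2 * var + kappa * excess**2 / var)) / (kappa - 1)
    grid = [z1 + (z2 - z1) * k / (n - 1) for k in range(n)]
    return max(grid, key=objective)

# toy example from the text: kappa = 0.4, mu0 = 0.02, sigma_0^2 = 0.1, g = 0.1
print(worst_case_z(-0.08, 0.07, mu0=0.02, var0=0.1, alpha=0.5, g=0.1, kappa=0.4))
```

Concavity of the objective guarantees that the grid maximum approximates the unique $z^*$; for the call above the result is close to the interior value $-0.0289$ reported in Table \ref{opz2}.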
The corresponding investment strategy and total risk premium listed in the last two columns show that the investor will take a long position on the risky asset if the total risk premium is positive, and vice versa. This is consistent with our theoretical statements, as given by \eqref{Strategy:vol2}. \begin{table}[h] \centering \caption{Utility parameters and the corresponding worst-case mean return and volatility}\label{opz2} \begin{tabular}{ r r r c c c r c} \toprule \multicolumn{1}{ c } { $z_1$} & \multicolumn{1}{ c } {$z_2$} & \multicolumn{1}{ c } {$\alpha$} & $z^*$&$\mu^*$ & $\sigma^*$ &\multicolumn{1}{ c }{$\pi^*$} & Total Risk Premium\\ \midrule -0.15 & -0.08 & 0.5 & $z_2$ &$\overline{\mu}$& $\overline{\sigma}^2$&-0.0795 &$-$\\ -0.08 &0.07&0.5& -0.0289& $r -$0.0089 & $ \sigma_0^2-$0.0145&-0.0059 &$-$\\ 0.02& 0.12& 0.5 &$z_1$&$\underline{\mu}$ & $\underline{\sigma}^2$&0.7727 &$+$\\ -0.15 & -0.08& -0.5 & $z_2$ &$\overline{\mu}$& $\underline{\sigma}^2$& -0.5476 &$-$\\ -0.08 &0.07&-0.5& -0.0311 & $r-$ 0.0111& $ \sigma_0^2+$0.0156&0.0066 &$+$\\ 0.02& 0.12& -0.5 &$z_1$& $\underline{\mu}$ & $\overline{\sigma}^2$&0.9074 &$+$\\ \bottomrule \end{tabular} \end{table} \begin{figure} \caption{Optimal value $z^*$ in different settings.} \label{Fig1} \end{figure} \section{Conclusion} \label{Conclusion} A complicated market confronts an investor with ambiguity about the driving force of market randomness. Such ambiguity may take the form of ambiguity about the mean return rate and the volatility of a risky asset. It may also affect the investor's preference when making investment decisions. That is, the investor may be ambiguous not only about the characteristics of risky assets but also about her future preference. We took these two types of ambiguity into account and investigated the horizon-unbiased investment problem.
We proposed a robust forward performance measure that accounts for an investor's ambiguity about her future preference, arising from ambiguity about the driving force of market randomness. This robust forward performance measure was then applied to formulate the investment problem. The solution to this investment problem shows that the sum of the market risk premium and the utility risk premium determines the optimal trading direction. If this sum is positive, the investor will take a long position on the risky asset; otherwise, she will take a short position. This statement holds regardless of the specific form of the forward performance measure. We then explored the worst-case scenarios of the mean return and volatility when the initial utility is of the CRRA type. Specifically, we investigated the worst-case mean return and volatility in three settings: ambiguity about the mean return $\mu$, ambiguity about the volatility $\sigma$, and ambiguity about both. In the case of ambiguity about the mean return, it is the total value of the market risk premium and the utility risk premium that determines an investor's conservative belief; in the case of ambiguity about the volatility, it is the relative value of these two premiums that affects the conservative belief; and in the case of ambiguity about both the mean return and the volatility, the conservative belief may not be directly associated with these two premiums. Note that, in all three settings, the conservative beliefs may be intermediate values within their candidate intervals as well as the boundaries. In conclusion, the results explain the mechanism of conservative belief selection and robust portfolio choice when an investor propagates her preference in accordance with the market evolution. \end{document}
\begin{document} \title{The unirational components of the strata of genus $11$ curves with several pencils of degree $6$ in $\mathcal{M}_{11}$} \begin{abstract} We show that the strata $ \mathcal{M}_{11,6}(k) \subset \mathcal{M}_{11} $ of $ 6-$gonal curves of genus $ 11 $, equipped with $k$ mutually independent and type I pencils of degree six, have a unirational irreducible component for $5\leq k\leq 9$. The unirational families arise from degree $ 9 $ plane curves with $ 4 $ ordinary triple and $ 5 $ ordinary double points, and they dominate an irreducible component of expected dimension. We further show that the family of degree $ 8 $ plane curves with $ 10 $ ordinary double points covers an irreducible component of excess dimension in $ \mathcal{M}_{11,6}(10) $. \end{abstract} \section*{Introduction} Let $ C $ be a smooth irreducible $ d-$gonal curve of genus $ g $ defined over an algebraically closed field $ \mathbb{K} $. Recall that, by the definition of gonality, there exists a $ g^{1}_{d} $ but no $ g^{1}_{d-1} $ on $ C $. It is well-known that $ d\leq [\frac{g+3}{2}]$, with equality for general curves. In a series of papers (\cite{coppens4},\cite{coppens98},\cite{coppens3}, \cite{coppens1}, \cite{coppens2}), Coppens studied the number of pencils of degree $ d $ on $ C $ for various $ d $ and $ g $. For low gonalities up to $ d=5 $, the problem has been intensively studied for almost all possible genera. For $ 6-$gonal curves, Coppens has settled the problem only for genera $ g\geq 15 $.\\ \indent In this paper, we focus on $ 6-$gonal curves of genus $ g=11 $. The motivation for our choice of genus $ 11 $ was the question asked by Michael Kemeny, whether every smooth curve of genus $ 11 $ carrying at least six pencils $ g^{1}_{6} $ comes from a degree $ 8 $ plane curve with $ 10 $ ordinary double points, where the pencils are cut out by the pencils of lines through the singular points.
More precisely, the question asks whether there exists no smooth curve of genus $ 11 $ possessing exactly $ 6,7,8 $ or $ 9 $ pencils of degree six. We will show that the answer to this question is negative. Let $ \mathcal{M}_{11,6}(k)\subset \mathcal{M}_{11}$ be the stratum of smooth $ 6-$gonal curves of genus $ 11 $, equipped with exactly $ k $ mutually independent\footnote{Two pencils $ g_1,g_2 $ of degree $ d $ on a smooth curve $ C $ are called independent if the corresponding map gives a birational model of $ C $ inside $ \mathbb{P}^{1}\times \mathbb{P}^{1} $.} $ g^{1}_{6} $'s of type I\footnote{A base point free pencil $ g^{1}_{d} $ on a smooth curve $C$ is called of type I if $ \dim \vert 2g^{1}_{d}\vert=2 $. Type I pencils are exactly those that should be counted with multiplicity 1.}. We first investigate the possible number of $ g^{1}_{6} $'s on a $ 6-$gonal curve of genus $ 11 $, and therefore the possible values of $ k $ for which $ \mathcal{M}_{11,6}(k) $ is non-empty. In \cite{sometopic}, Schreyer gave a list of conjectural Betti tables for canonical curves of genus $ 11 $. Related to our question and interesting for us is the Betti table of the following form \begin{center} \begin{tabular}{|c c c c c c c c c c} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $5k$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $5k$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \end{tabular} \label{bettitable} \end{center} where $ k $ is expected to take the values $ k=1,2,\ldots,10,12,20$. However, in view of Green's conjecture \cite{green}, it is not clear that for a smooth canonical curve of genus $ 11 $ with Betti table as above, the number $ k $ can always be interpreted as the number of pencils of degree six existing on the curve.
Nonetheless, for $ k=1,2,\ldots,10,12,20 $ we can provide families of curves whose generic element carries exactly $ k $ mutually independent pencils of type I. The critical Betti number in this case is $ \beta_{5,6}=\beta_{4,6}=5k $, as expected. Therefore, in this range the stratum $ \mathcal{M}_{11,6}(k) $ is non-empty.\\ The first natural question is then to ask about the geometry of the stratum $ \mathcal{M}_{11,6}(k) $, and in particular about its unirationality.\\ For $ k=1 $, the corresponding stratum is the famous Brill-Noether divisor $ \mathcal{M}_{11,6} $ of $ 6-$gonal curves \cite{harrismumford}, which is irreducible and furthermore known to be unirational \cite{GeissUnirationality}. The stratum $ \mathcal{M}_{11,6}(2) $ is irreducible \cite{tyomkin} and unirational; a general element of $ \mathcal{M}_{11,6}(2) $ can be obtained from a model of bidegree $ (6,6) $ in $ \mathbb{P}^{1}\times \mathbb{P}^{1}$ with $ \delta=14 $ ordinary double points. In \cite{keneshlou} it has also been shown that $ \mathcal{M}_{11,6}(3) $ has a unirational irreducible component of expected dimension. A general curve lying on this component can be constructed via liaison in two steps from a rational curve in the multiprojective space $ \mathbb{P}^{1}\times \mathbb{P}^{1}\times \mathbb{P}^{1}$.\\ Here we construct rational families of curves with additional pencils from plane curves of suitable degrees with only ordinary multiple points as singularities. As the first significant result (Theorem \ref{degree9}), we will prove that for $ 5\leq k\leq9 $, the stratum $ \mathcal{M}_{11,6}(k)$ has a unirational irreducible component of expected dimension. A general curve lying on this component arises from a degree $ 9 $ plane model with $ 4 $ ordinary triple and $ 5 $ ordinary double points which passes through $ k-5 $ of the points $ R_j $, each being the ninth fixed point of a pencil of cubics through the $ 4 $ triple and $ 4 $ chosen double points.
The key technique of the proof is to study the space of first order equisingular deformations of plane curves with prescribed singularities, as well as that of the first order embedded deformations of their canonical models. In fact, denoting by $ M $ the $ 5k\times 5k $ submatrix in the deformed minimal resolution corresponding to the general first order deformation family of a canonical curve $ C $ with Betti table as above, we use the condition $ M=0 $ to determine the subspace of deformations with extra syzygies of rank $ 5k $. It turns out that for $ 5\leq k\leq 9 $, there are $ k $ linearly independent linear forms $ l_1,\ldots, l_k $ in the free deformation parameters corresponding to a basis of $ T_C\mathcal{M}_{11} $ such that $ \det M=l_1^{5}\cdot \ldots \cdot l_k^{5} $. This implies that $\mathcal{M}_{11,6}(k) $ has an irreducible component of codimension exactly $ k $ inside the moduli space $ \mathcal{M}_{11} $. Furthermore, let $ \mathcal{K}_{11} $ be the locus of curves $ C\in \mathcal{M}_{11} $ with extra syzygies, that is, with $ \beta_{5,6}\neq 0 $. It is known by Hirschowitz and Ramanan \cite{hirsch} that $ \mathcal{K}_{11} $ is a divisor, called the Koszul divisor, and that $ \mathcal{K}_{11}=5 \mathcal{M}_{11,6} $. Thus, $ \mathcal{M}_{11,6} $ at the point $ C $ is locally analytically the union of $ k $ smooth transversal branches.\\ We will then compute the kernel of the Kodaira-Spencer map, and from that the rank of the induced differential maps, in order to show that the rational families of plane curves dominate this component. Following a similar approach, we obtain our second main result: the family of degree $ 8 $ plane curves with $ 10 $ ordinary double points covers an irreducible component of excess dimension in $ \mathcal{M}_{11,6}(10) $ (Theorem \ref{degree10}). This paper is structured as follows.
In section \ref{sec:familiesofcurves} we recall some basics of deformation theory for smooth and singular plane curves. In section \ref{sec:tangentspacecomputation} we deal with the computation of the tangent spaces to our parameter spaces, and we continue by proving the main theorems on unirationality in section \ref{sec:injectivity}. In the last section \ref{sec:Further components}, using the syzygy schemes of the curves, we study the irreducibility of these strata. Our results and conjectures rely on computations and experiments performed with the computer algebra system \textit{Macaulay2} \cite{m2}, using the supporting functions in the packages \cite{kenesh} and \cite{kensch}. \section*{\small{\textbf{Acknowledgement}}} We would like to thank Michael Kemeny for discussing this question with us; it was the starting point of this work. This work is a contribution to Project I.6 within the SFB-TRR 195 “Symbolic Tools in Mathematics and their Application” of the German Research Foundation (DFG). \section{Planar model description} \label{sec:Planar model description} In this section, we describe families of plane curves of genus $11 $ carrying $ k=4,\ldots,10,12,20 $ pencils. In particular, we give a model of a genus $11$ curve with infinitely many pencils, arising as a triple cover of an elliptic curve. Throughout this paper, to avoid repetition, a pencil is always of degree six, unless otherwise mentioned, and several pencils on a curve are always assumed to be mutually independent and of type I.\\ We first deal with the construction of plane models for smooth curves of genus $ 11 $ with $ k=5,\ldots,9 $ pencils. Clearly, smooth curves of genus $ 11 $ with ten pencils can be constructed from a plane model of degree $ 8 $ with $ 10 $ ordinary double points in general position.
The code provided by the function \texttt{random6gonalGenus11Curve10pencil} in \cite{kenesh} uses this plane model to produce a random canonical curve of genus $ 11 $ with exactly $ 10 $ $g^{1}_{6}$'s. We remark that, although we further provide methods to produce curves with $ k=4,12 $ pencils, for dimension reasons the rational families obtained from these models may not cover any component of the corresponding stratum.\\ \noindent \textsc{model of curves with $ 5\leq k\leq 9 $ pencils} \\ Let $ P_1,\ldots,P_4, Q_1,\ldots,Q_5 $ be general points in the projective plane $ \mathbb{P}^{2} $ and let $ \Gamma \subset \mathbb{P}^{2} $ be a plane curve of degree $ 9 $ with $ 4 $ ordinary triple points $ P_1,\ldots,P_4 $ and $ 5 $ ordinary double points $ Q_1,\ldots,Q_5 $. We note that, since an ordinary triple (resp. double) point in general position imposes six (resp. three) linear conditions, such a plane curve with these singular points exists as \[ \binom{9+2}{2}-6\cdot 4-3\cdot 5>0. \] Blowing up these singular points $$ \sigma: \widetilde{\mathbb{P}}^{2}=\mathbb{P}^{2}(\ldots, P_i,\ldots,Q_j,\ldots)\longrightarrow \mathbb{P}^{2},$$ let $ C\subset \widetilde{\mathbb{P}}^{2} $ be the strict transform of $ \Gamma $ on the blown up surface of $ \mathbb{P}^{2} $. Hence, $$ C \sim 9H-\sum_{i=1}^{4}3E_{P_i}-\sum_{j=1}^{5}2E_{Q_j},$$ where $ H $ is the pullback of the class of a line in $ \mathbb{P}^{2} $, and $ E_{P_i} $ and $ E_{Q_j} $ denote the exceptional divisors of the blow up at the points $ P_i $ and $ Q_j $, respectively. By the genus-degree formula, $ C $ is a smooth curve of genus $ 11=\binom{9-1}{2}-4\cdot 3-5 $. Moreover, $ C $ admits five mutually independent pencils of type I. Indeed, for $ i=1, \ldots,4 $ the linear series $ \vert H-E_{P_i}\vert $, identified with the pencil of lines through the triple point $ P_i $, induces a base point free pencil $G_{i} $ on $ C $.
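The existence condition and the genus count just used can be verified mechanically; the following sketch only restates the quoted arithmetic.

```python
from math import comb

d = 9                    # degree of the plane model
triples, doubles = 4, 5  # ordinary triple and double points

# Existence: a triple (resp. double) point in general position imposes
# 6 (resp. 3) linear conditions on plane curves of degree 9.
conditions = 6 * triples + 3 * doubles
assert comb(d + 2, 2) - conditions > 0     # 55 - 39 = 16 > 0

# Genus-degree formula with delta invariants: an ordinary triple point
# drops the genus by 3, an ordinary double point by 1.
g = comb(d - 1, 2) - 3 * triples - doubles
print(g)   # 28 - 12 - 5
```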
Since, by adjunction, the canonical system $ \vert K_C\vert $ is cut out by the complete linear series $$ \vert C+K_{\widetilde{\mathbb{P}}^{2}}\vert=\vert 6H-\sum_{i=1}^{4}2E_{P_i}-\sum_{j=1}^{5}E_{Q_j}\vert,$$ the linear series $ \vert K_{C}-2G_{i}\vert $ is cut out by $$ \vert 4H-\sum_{i=1}^{4}2E_{P_i}-\sum_{j=1}^{5}E_{Q_j}+2E_{P_i}\vert.$$ Therefore, we have $\dim \vert K_{C}-2G_{i}\vert=0 $, and by Riemann--Roch $ \dim \vert 2G_{i}\vert=2 $. Thus, the pencils induced by the linear systems of lines through each of the triple points are of type I. Furthermore, the linear series $ \vert 2H-\sum_{i=1}^{4}E_{P_i}\vert $, identified with the pencil of conics through the four triple points, induces an extra pencil $ G_5 $ on $C$. Similarly, by adjunction, the corresponding linear system $ \vert K_{C}-2G_5\vert $ can be identified with the linear system of quadrics containing the double points. We obtain $\dim \vert K_{C}-2G_5\vert=0 $, and then Riemann--Roch implies that $ \dim \vert 2G_5\vert=2 $. Hence, this gives another pencil of type I. In this way we obtain smooth curves of genus $ 11 $ having five pencils. \\ In order to get models of curves with further pencils, we impose certain codimension one conditions on the plane curves of degree $ 9 $ such that each condition gives exactly one extra $ g^{1}_{6} $.\\ For $ j=1,\ldots,5 $, let $ R_j $ be the ninth fixed point of the pencil of cubics through the eight residual singular points obtained by omitting $ Q_j $. The condition that $ R_j$ lies on the plane curve imposes exactly one condition on the linear series of degree $ 9 $ plane curves with $ 4 $ ordinary triple points at the $ P_i $'s and $ 5 $ ordinary double points at the $ Q_j $'s. On the other hand, the linear series $$ \vert 3H-\sum_{i=1}^{4}E_{P_i}-\sum_{j=1}^{5}E_{Q_j}+E_{Q_j}\vert $$ induces a pencil $ G^{\prime}_j $ of degree $ 7 $ with a fixed point at $ R_j $.
Therefore, by forcing the degree $ 9 $ plane curves to pass additionally through each $ R_j $, we obtain one further pencil of type I, given by $G^{\prime}_{j}-R_j$. In this way, by choosing $0\leq m\leq 4$ points among $ R_1, \ldots,R_5 $, we get families of smooth curves of genus $ 11 $ possessing up to nine pencils. The function \texttt{random6gonalGenus11Curvekpencil} in \cite{kenesh} is an implementation of the above construction, producing a random canonical curve of genus $ 11 $ possessing $ 5\leq k\leq 9 $ pencils. \begin{figure}\label{fig:1} \label{fig:2} \end{figure} \begin{remark} \label{failur} Although we expect that plane curves of degree $ 9 $ with singular points as above, passing through all the five fixed points $ R_1,\ldots,R_5 $, lead to models of curves of genus $ 11 $ with ten pencils, our experimental computations show that such a curve is in general reducible. It is the union of a sextic and the unique cubic through the five double points and $ R_1,\ldots,R_5 $, and it has more singular points than expected. Thus, our pattern fails to cover the case $ k=10 $. \end{remark} Our families of plane curves depend on the expected number of parameters, as desired. In fact, let \[\mathcal{V}_{9}^{4,5,m}:=\lbrace (\Gamma;P_1,\ldots,P_4, Q_1,\ldots, Q_5) \rbrace \subset \mathbb{P}^{N}\times (\mathbb{P}^{2})^{9}\] denote the variety, where $ N=\binom{9+2}{2}-1 $ and $ \Gamma\subset \mathbb{P}^{2} $ is a plane curve of degree $ 9 $ with prescribed singular points passing through $0\leq m\leq 4$ points among $ R_1,\ldots,R_5 $ as above. As an ordinary triple (resp. double) point in general position imposes six (resp. three) linear conditions, we naively expect each irreducible component of $ \mathcal{V}_{9}^{4,5,m}$ to have dimension \[\frac{9(9+3)}{2}+2\cdot 9-3\cdot 5-6\cdot 4-m=33-m.\] Identifying plane curves under automorphisms of $ \mathbb{P}^{2} $ reduces this dimension by $ 8=\dim \pgl(2) $.
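These dimension counts are easy to check by hand; the sketch below repeats them together with the Brill-Noether number $\rho(11,6,1)=-1$ used in the comparison with the moduli strata.

```python
d, triples, doubles, g = 9, 4, 5, 11

def dim_V(m):
    """Expected dimension of V_9^{4,5,m}: d(d+3)/2 parameters for the curve,
    2 for each of the 9 singular points, minus the linear conditions imposed
    by the singularities and the m extra point conditions."""
    return d * (d + 3) // 2 + 2 * (triples + doubles) - 6 * triples - 3 * doubles - m

# Brill-Noether number rho(g, d, r) = g - (r+1)(g - d + r) with r = 1, d = 6.
rho = g - 2 * (g - 6 + 1)

for m in range(5):
    assert dim_V(m) == 33 - m
    # Modulo PGL(3) (dimension 8), this matches 3g - 3 + k * rho, k = m + 5.
    k = m + 5
    assert dim_V(m) - 8 == 3 * g - 3 + k * rho

print([dim_V(m) for m in range(5)])   # [33, 32, 31, 30, 29]
```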
From Brill-Noether theory, this matches the expected dimension of the stratum $ \mathcal{M}_{11,6}(k)\subset \mathcal{M}_{11} $ of curves possessing $ k=m+5 $ pencils. In fact, denoting by $ \rho $ the Brill-Noether number, we have \[ \dim \mathcal{M}_{11,6}(k) \geq 3\cdot 11-3+(m+5)\rho(11,6,1)=25-m. \] \noindent \textsc{models of curves with $ k=4 $ pencils.} \\ Let $ P_1,P_2,P_3, Q_1,\ldots,Q_7$ be general points in the projective plane and let $ R $ be the ninth fixed point of a pencil of cubics through the eight points obtained by omitting two of the $ Q_i$'s. Then, the normalization of a general degree $ 9 $ plane curve with ordinary triple points at $ P_1,P_2,P_3 $ and ordinary double points at $ Q_1,\ldots,Q_7,R$ is a smooth curve of genus $ 11 $ that carries exactly $ k=4 $ pencils. In fact, three pencils are induced from the pencils of lines through each of the triple points, and the pencil of cubics through the eight points gives the extra $ g^{1}_{6} $. In \cite{kenesh}, this construction is implemented in the function \texttt{random6gonalGenus11Curve4pencil}. \begin{remark} \label{k4} The number of parameters for the choice of ten points in the plane as above, plus the dimension of the linear system of plane curves of degree $ 9 $ with ordinary triple points at $ P_1,P_2,P_3 $ and ordinary double points at $ Q_1,\ldots,Q_7,R$, amounts to $ 32 $. Therefore, modulo isomorphisms of the projective plane, we obtain a family of smooth curves of genus $ 11 $ with exactly $ k=4 $ pencils whose dimension is smaller than $ 26 $, the expected dimension of $ \mathcal{M}_{11,6}(4) $. Thus, the rational family of curves obtained from this model cannot cover any component of $ \mathcal{M}_{11,6}(4) $.
\end{remark} \noindent \textsc{models of curves with $ k=12 $ pencils} \\ Let $ P_1,\ldots,P_{10} $ be general points in the projective plane and let $ V_1 \subset \vert L\vert=\vert 4H-\sum_{i=1}^{10}E_{P_i}\vert $ be a pencil in the linear system of quartics passing through these points. Let $ q_1,\ldots,q_6 $ be the further fixed points of this pencil. Then the normalization of a degree $ 8 $ plane curve $ \Gamma $ with $ 10 $ ordinary double points $ P_1,\ldots,P_{10} $ and passing through $ q_1,\ldots,q_6 $ carries exactly twelve pencils. One has $10$ pencils cut out by the lines through each of the double points, and also the $g^1_6$ cut out by $V_1$. Moreover, considering $ Q_1,\ldots,Q_6 $ to be the six moving points of a divisor in $ V_1 $, our experiments show that $ Q_1,\ldots,Q_6 $ are the extra fixed points of another pencil $ V_2 \subset \vert L\vert $. Namely, there is a two-dimensional vector space of quartics passing through $P_1,\ldots,P_{10}, Q_1,\ldots,Q_6$ cutting out the twelfth $g^1_6$. More precisely, let $\mathbb{P}^2\dashrightarrow \mathbb{P}^4$ be the rational map associated to $|L|$. The image of $\Gamma$ under this map is a curve $C$ of degree $12$, which is cut out by a unique rank $4$ quadric hypersurface $Q$ on the determinantal image surface of $\mathbb{P}^2$. As the divisors of the linear series $|L|$ are cut out by the linear system of hyperplanes on $C$, and the six fixed points impose exactly three linearly independent conditions on this linear series, they span a projective plane $\mathbb{P}^2\subset Q$ and do not lie on a conic. As $Q$ is isomorphic to the cone over $\mathbb{P}^1\times \mathbb{P}^1$, the projections to each projective line naturally give two extra pencils.
In \cite{kenesh}, the function \texttt{random6gonalGenus11Curve12pencil} uses this method to produce a random canonical curve of genus $ 11 $ carrying exactly twelve pencils.\\ \noindent \textsc{models of curves with $ k=20 $ pencils} \\ Let $ C $ be a smooth curve of genus $ 11 $ with a linear system $ g^3_{10} $. The space model of $ C $ has exactly twenty $4$-secant lines, which cut out the twenty pencils. A plane curve of degree $ 9 $ with $ 5 $ ordinary triple and $ 2 $ ordinary double points provides a model of such curves. Using this pattern, the function \texttt{random6gonalGenus11Curv20pencil} in \cite{kenesh} gives a model of genus $ 11 $ curves with $ 20 $ $g^1_6$'s. \\ \noindent \textsc{models of curves with infinitely many pencils} \\ Let $ E\subset \mathbb{P}^{2} $ be a smooth plane cubic, and consider $ X_1:=E\times \mathbb{P}^{1}\subset \mathbb{P}^{2}\times \mathbb{P}^{1}$ as a hypersurface of bidegree $ (3,0) $ containing two random lines $ L_1, L_2 $ and four points $ P_1,\ldots,P_4 $. Choosing a random hypersurface $ X_2 $ of bidegree $ (3,3) $ with double points at the $ P_i $'s and containing the two lines, we obtain the complete intersection $ X_1\cap X_2=C\cup L_1\cup L_2$, where $ C $ is a triple cover of the elliptic curve $ E $ of bidegree $ (9,7) $ in $ \mathbb{P}^{2}\times \mathbb{P}^{1} $. Naturally, $ C $ admits infinitely many pencils, which are cut out by the pencils of lines through random points of $ E $. In \cite{kenesh}, this algorithm is implemented in the function \texttt{random6gonalGenus11CurveInfinitepencil}, which produces a model of $ C $ with $ \deg(C)=16 $ in $ \mathbb{P}^{5} $. Considering the space of hyperplanes through three general points of $ C $, we obtain a $ g^2_{13} $. Using this linear series, one can compute the plane model and from that the canonical model of $ C $, which leads to the Betti number $ \beta_{5,6}=25 $.
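The $g^2_{13}$ obtained here is a simple degree count on the degree $16$ model in $\mathbb{P}^5$; a minimal sketch of the arithmetic:

```python
# Model of C embedded in P^5 with deg(C) = 16; hyperplanes through 3
# general points of C cut out a linear series of degree 16 - 3 = 13.
deg_C, pts = 16, 3
h0_hyperplanes = 6              # dim H^0(P^5, O(1)) = 6
r = (h0_hyperplanes - pts) - 1  # projective dimension of the series
deg_series = deg_C - pts
print((r, deg_series))          # a g^2_13
```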
With the same approach, starting from three lines and the choice of two points, we obtain a genus $ 11 $ triple cover of an elliptic curve of bidegree $ (9,6) $ whose canonical model has Betti number $ \beta_{5,6}=30 $. \section{Families of curves and their deformations}\label{sec:familiesofcurves} To study the local geometry of the parameter spaces introduced in the previous section, as well as the strata of smooth curves with several pencils, we study the space of first order deformations of curves. This leads to the computation of the tangent spaces at the corresponding points of the moduli space. We recall some basics of deformation theory for smooth and singular plane curves, which can be found in the standard textbook \cite{Sernesi}.\\ Let $C\subset \mathbb{P}^{n} $ be a smooth curve and let $ \mathcal{N}_{C/\mathbb{P}^{n}}=\mathcal{H}om_{\mathcal{O}_C}(\mathcal{I}/\mathcal{I}^{2},\mathcal{O}_C) $ denote the normal bundle of $C$ in $ \mathbb{P}^{n} $. The space of global sections $ \Hb^{0}(C,\mathcal{N}_{C/\mathbb{P}^{n}}) $ parametrizes the set of first order embedded deformations of $ C$ in $ \mathbb{P}^{n}$. This is precisely the tangent space to the Hilbert scheme $ \mathcal{H}_{C/\mathbb{P}^{n}} $ of $ C $ inside $ \mathbb{P}^{n} $ (see \cite{Sernesi}, Theorem 3.2.12). An important refinement of the embedded deformations of a smooth curve is the consideration of flat families of curves inside a projective space having prescribed singularities, that is, of families whose members have the same type of singularities in some specified sense. This leads to the notion of equisingularity.\\ Let $ \Gamma\subset \mathbb{P}^{2} $ be a singular plane curve.
There exists an exact sequence of coherent sheaves on $ \Gamma $, \[ 0\longrightarrow T_{\Gamma}\longrightarrow T_{\mathbb{P}^{2}}\vert_{\Gamma}\longrightarrow \mathcal{N}_{\Gamma/\mathbb{P}^{2}}\longrightarrow T^{1}_{\Gamma}\longrightarrow 0, \] where the two middle sheaves are locally free, whereas the first one is not (see \cite{Sernesi}, Proposition 1.1.9). The sheaf $ T^{1}_{\Gamma} $ is the so-called cotangent sheaf, supported on the singular locus of $ \Gamma $. The \textit{equisingular normal sheaf} of $ \Gamma $ in $ \mathbb{P}^{2}$ is defined to be \[ \mathcal{N}^{\prime}:=\ker[\mathcal{N}_{\Gamma/\mathbb{P}^{2}}\longrightarrow T^{1}_{\Gamma}], \] which describes deformations preserving the singularities of $ \Gamma $. In fact, the vector space $ \Hb^{0}(\Gamma,\mathcal{N}^{\prime}_{\Gamma/\mathbb{P}^{2}}) $ parameterizes the locally trivial first order deformations of $ \Gamma $ in $ \mathbb{P}^{2} $ having the prescribed singularities as $ \Gamma $ (See \cite{Sernesi}, Section 4.7.1). In particular, the equisingular normal bundle fits into the short exact sequence \begin{equation}\label{short} 0\longrightarrow \mathcal{O}_{\mathbb{P}^{2}}\longrightarrow \mathcal{I}(d)\longrightarrow \mathcal{N}^{\prime}_{\Gamma/\mathbb{P}^{2}}\longrightarrow 0, \end{equation} where $ \mathcal{I} $ is the ideal sheaf locally generated by the partial derivatives of a local equation of $ \Gamma$, and the first injective map is defined by multiplication by an equation of $ \Gamma $ (See \cite{Sernesi}, page 55). \section{The tangent space computation}\label{sec:tangentspacecomputation} In this section, we compute the tangent space to the parameter space $ \mathcal{V}_{9}^{4,5,m} $ as well as that to the strata $ \mathcal{M}_{11,6}(k)\subset \mathcal{M}_{11} $. We further prove the existence of a component with expected dimension on both spaces. 
\vskip 0.02cm \begin{theorem} \label{tangent} For $ m=0,\ldots,4 $, the parameter space $\mathcal{V}_{9}^{4,5,m} $ has an irreducible component of expected dimension. \end{theorem} \begin{proof} Let $ (\Gamma;P_1,\ldots,P_4, Q_1,\ldots, Q_5) \in \mathcal{V}_{9}^{4,5,m} $ be a point corresponding to a plane curve $ \Gamma:(f=0)\subset \mathbb{P}^{2}$ with prescribed singular points and passing through $ R_1,\ldots,R_m$. Assume $ x,y,z $ are the coordinates of the projective plane. Considering $\Gamma$ as a point in the parameter space $ \mathbb{P}^{{{9+2}\choose{2}}-1} $ of degree $ 9 $ plane curves, without loss of generality we can assume that it lies in the affine chart corresponding to curves that do not pass through the point $ (1:0:0) $. Moreover, to simplify our notation, we can assume that all the distinguished points of $ \Gamma $ lie in the open affine subset of $ \mathbb{P}^{2} $ defined by $ z=1 $. Thus, $ \Gamma $ is locally defined by $f=\sum_{u,v}a_{uv}x^{u}y^{v}$ with $ a_{9,0}=1$, where $ (x_i,y_i)$ for $ 1\leq i\leq 9 $ are the affine coordinates of the singular points and $ (x'_l,y'_l) $ are the affine coordinates of $ R_l $. Therefore, in a neighbourhood of $ \Gamma $, the space $\mathcal{V}_{9}^{4,5,m} $ is the set of pairs $ (\bar{h}; S_1,\ldots,S_9) $ with $\bar{h}=\sum_{u,v}b_{uv}x^{u}y^{v}$, $ b_{9,0}=1 $ and $ S_i=(X_i,Y_i) $ for $ 1\leq i\leq 9 $, satisfying the following equations: $$ R_{i,s,t}(\ldots,b_{uv},\ldots, X_j,Y_j,\ldots):=\frac{\partial^{s} \bar{h}}{\partial x^{t}\partial y^{s-t}}(X_i,Y_i)=0, $$ for $ 1\leq i\leq 4,\ s=0,1,2,\ t\in\lbrace 0,\ldots,s\rbrace $, $$ R'_{i,s,t}(\ldots,b_{uv},\ldots, X_j,Y_j,\ldots):=\frac{\partial^{s} \bar{h}}{\partial x^{t}\partial y^{s-t}}(X_i,Y_i)=0, $$ for $ 5\leq i\leq 9,\ s=0,1,\ t\in\lbrace 0,\ldots,s\rbrace $, and $$ F_l:=(\sum_{u,v}b_{uv}x^{u}y^{v})(X'_l,Y'_l)=0, \ \ \forall\ 1\leq l\leq m,$$ where $ (X'_l,Y'_l) $ are the coordinates of the $ m $ chosen points among the fixed points.
Then, the tangent space at $ \Gamma $ is the set of points $ (\bar{g}; T_1,\ldots,T_9) $ with $\bar{g}=\sum_{u,v}c_{uv}x^{u}y^{v}$, $ c_{9,0}=1, c_{uv}=a_{uv}+b_{uv} $ for $ u\neq 9 $ and $ T_i=(x_i+X_i,y_i+Y_i)$ for $ 1\leq i\leq 9 $, satisfying the following equations in the indeterminates $ \ldots,b_{uv},\ldots,X_j,Y_j,\ldots $: \begin{align*} &\sum_{\substack{u,v\geq 0\\ u+v\leq 9\\ u\neq 9}}b_{uv}\frac{\partial R_{i,s,t}}{\partial b_{uv}}(\ldots,a_{uv},\ldots,x_i,y_i,\ldots)\\ & + \sum _{\alpha=1} ^{9}[X_\alpha\frac{\partial R_{i,s,t}}{\partial X_{\alpha}}(\ldots,a_{uv},\ldots,x_i,y_i,\ldots)+Y_{\alpha}\frac{\partial R_{i,s,t}}{\partial Y_{\alpha}}(\ldots,a_{uv},\ldots,x_i,y_i,\ldots)]=0 \end{align*} for all $ 1\leq i\leq 4,\ s=0,1,2,\ t\in\lbrace 0,\ldots,s\rbrace $, the same relations with $ R'_{i,s,t} $ for all $ 5\leq i\leq 9,\ s=0,1,\ t\in\lbrace 0,\ldots,s\rbrace $, and $$ \sum _{\substack{u,v\geq 0\\ u+v\leq 9\\ u\neq 9}}b_{uv}\frac{\partial \bar{h}}{\partial b_{uv}}(x'_l,y'_l)=0, \ \ \ \forall\ 1\leq l\leq m. $$ In \cite{kenesh}, the code provided by the implemented function \texttt{verifyAssertion(1)} uses this method to compute the tangent space as the solution space of the above equations. Our computation for an explicit, randomly chosen point of $ \mathcal{V}_{9}^{4,5,m} $ shows that this space has dimension $ 33-m$. Therefore, the irreducible component of $ \mathcal{V}_{9}^{4,5,m} $ containing that point has the expected dimension. \end{proof} \begin{remark} \label{cohomologygroup} Let $ (\Gamma;P_1,\ldots,P_4, Q_1,\ldots, Q_5) \in \mathcal{V}_{9}^{4,5,0}$ be a point and let $ \Delta $ denote the singular locus of the corresponding plane curve with the prescribed number of double and triple points.
Via the first projection map $$p_1:\mathcal{V}_{9}^{4,5,0}\longrightarrow \mathcal{V}^{4,5}_{9}\subset \mathbb{P}^{N},$$ the variety $\mathcal{V}_{9}^{4,5,0} $ maps one-to-one to the Severi variety $ \mathcal{V}^{4,5}_{9} $, parametrizing the degree $ 9 $ plane curves with $ 4 $ ordinary triple points and $ 5 $ ordinary double points. In this way, we can naturally denote $ \mathcal{V}_{9}^{4,5,0} $ by $ \mathcal{V}_{9}^{4,5} $ and identify the tangent space to $\mathcal{V}_{9}^{4,5,0}$ at $ \Gamma $ with the space of first order deformations of $ \Gamma\in \mathcal{V}^{4,5}_{9}$. Thus, from the short exact sequence \ref{short} we obtain $$ T_{\Gamma}\mathcal{V}_{9}^{4,5}\cong \Hb^{0}(\mathbb{P}^{2},\mathcal{I}_{\Delta}(9))/\langle f\rangle, $$ where $ \langle f\rangle $ is the one-dimensional vector space generated by the defining equation of $ \Gamma $. Moreover, for $ m>0 $ the tangent space to $ \mathcal{V}_{9}^{4,5,m} $ computed at a random point as in Theorem \ref{tangent} can be regarded as a subspace of this vector space. \end{remark} \indent Now we turn to the computation of the tangent space to the strata $ \mathcal{M}_{11,6}(k)$.\\ \noindent Let $ C\subset\mathbb{P}^{10} $ be a smooth canonical curve with extra syzygies of rank $5k$ and \begin{small} \[ 0\longleftarrow S/I_{C} \longleftarrow S\xleftarrow{f} S(-2)^{36}\xleftarrow{\varphi_{1}} S(-3)^{160}\xleftarrow{\varphi_{2}}S(-4)^{315} \xleftarrow{\varphi_{3}} \begin{matrix} S(-5)^{288} \\ \oplus \\ S(-6)^{5k} \end{matrix} \xleftarrow{\varphi_{4}} \begin{matrix} S(-6)^{5k}\\ \oplus\\ S(-7)^{288} \end{matrix} \] \end{small} be the part of a minimal free resolution of $ C $, where $ S=\mathbb{K}[x_0,\ldots,x_{10}] $ is the coordinate ring of $ \mathbb{P}^{10}$, and $ f=(f_1,\ldots,f_{36}) $ is the minimal set of generators of the ideal $ I_{C} \subset S$.
Consider the pullback to $C$ of the Euler sequence \begin{equation} \label{euler} 0\longrightarrow \mathcal{O}_{C}\longrightarrow \mathcal{O}_{C}(1)^{\oplus g}\longrightarrow T_{\mathbb{P}^{10}}\vert_{C}\longrightarrow 0. \end{equation} From the long exact sequence in cohomology, the dual vector space $ \Hb^{1}(C,T_{\mathbb{P}^{10}}\vert_{C})^{\vee} $ can be identified with the kernel of the Petri map $$ \mu_{0}:\Hb^{0}(C,L)\otimes \Hb^{0}(C,\omega_{C}\otimes L^{-1})\longrightarrow \Hb^{0}(C,\omega_{C})$$ where $ L=\mathcal{O}_{C}(1)$. Therefore, we get $ \Hb^{1}(C,T_{\mathbb{P}^{10}}\vert_{C})=0$, and hence the long exact sequence induced by the normal exact sequence $$ 0\longrightarrow T_{C}\longrightarrow T_{\mathbb{P}^{10}}\vert_{C}\longrightarrow \mathcal{N}_{C/\mathbb{P}^{10}}\longrightarrow 0$$ reduces to the following short exact sequence \begin{equation} \label{kodariraspencer} 0 \longrightarrow \Hb^{0}(C,T_{\mathbb{P}^{10}}\vert_{C})\longrightarrow \Hb^{0}(C,\mathcal{N}_{C/\mathbb{P}^{10}})\xrightarrow{\kappa} \Hb^{1}(C,T_{C})\longrightarrow 0, \end{equation} where $ \kappa $ is the so-called Kodaira--Spencer map. More precisely, here we realize $ \Hb^{1}(C,T_{C}) $ as the tangent space to the moduli space $ \mathcal{M}_{11} $ at the point corresponding to $ C $, and $ \kappa $ as the map between the tangent spaces induced by the natural map $ \mathcal{H}_{C/\mathbb{P}^{10}}\longrightarrow \mathcal{M}_{11} $. We observe that by Serre duality $$\Hb^{1}(C,T_{C}) \cong \Hb^{0}(C,\omega_{C}^{\otimes 2})^{\vee}. $$ Since we assume that the curve is canonically embedded, the sheaf $ \omega_{C}^{\otimes 2} $ is just the twisted sheaf $ \mathcal{O}_C(2) $. Hence, the cohomology group above is given by the quotient $ S_2/(I_C)_2 $, and thus $ \hsm^{1}(C,T_{C})=30$.
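Explicitly, since the space $ S_2 $ of quadrics in $ \mathbb{P}^{10} $ has dimension $ {{10+2}\choose{2}}=66 $ and $ \dim (I_C)_2=36 $, we obtain
$$ \hsm^{1}(C,T_{C})=\dim S_2/(I_C)_2=66-36=30=3g-3 \ \text{ for } \ g=11. $$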
As $ I_C $ is minimally generated by $ 36 $ generators, we can identify a basis of $ \Hb^{1}(C,T_{C}) $ with the columns of a matrix of size $ 36\times 30 $ with entries in $ S_2/(I_C)_2 $, introducing $ 30 $ free deformation parameters $ b_0,\ldots, b_{29} $. Let $ \bar{f}=f+f^{(1)} $ be the general first order family perturbing $ f $ defined by the general element of $ \Hb^{1}(C,T_{C}) $ and let $$ \bar{S}\xleftarrow{\bar{f}} \bar{S}(-2)^{36} $$ be the corresponding morphism, where $$ \bar{S}=\mathbb{K}[b_0,\ldots,b_{29}]/(b_0,\ldots,b_{29})^{2} \otimes_{\mathbb{K}} S.$$ To find a lift $ \bar{\varphi_{1}}=\varphi_{1}+\varphi_{1}^{(1)}$ of $ \varphi_{1} $, we apply the necessary condition $ \bar{f}\circ\bar{\varphi_{1}}\equiv 0 $ mod $ (b_0,\ldots,b_{29})^{2} $, and we solve for the unknown $ \varphi_{1}^{(1)} $ the equation: $$ 0\equiv \bar{f}\circ\bar{\varphi_{1}}=(f+f^{(1)})(\varphi_{1}+\varphi_{1}^{(1)})=f\circ \varphi_{1}+(f\circ \varphi_{1}^{(1)}+f^{(1)}\circ \varphi_{1})\ \text{mod} \ (b_0,\ldots,b_{29})^{2}.$$ This leads to $ f\circ \varphi_{1}^{(1)}= -f^{(1)}\circ \varphi_{1} $, and solving this equation for $ \varphi_{1}^{(1)} $ by a matrix quotient gives the required perturbation of the first syzygy matrix $ \varphi_{1} $. Continuing through the remaining resolution maps, we can lift the entire resolution to first order in the same way. In \cite{kensch}, an implementation of this algorithm is provided by the function \texttt{liftDeformationToFreeResolution}, which lifts a resolution to the first order deformed resolution. \begin{theorem} \label{tangentlocus} Let $ 0\leq m\leq 4 $, and set $ k:=m+5 $. The stratum $ \mathcal{M}_{11,6}(k)\subset \mathcal{M}_{11} $ has an irreducible component $ H_k $ of expected dimension $ 30-k $. Moreover, at a general point $ P\in H_k $, $ \mathcal{M}_{11,6} $ is locally analytically a union of $ k $ smooth transversal branches. In other words, $ \mathcal{M}_{11,6} $ is a normal crossing divisor around the point $ P $.
\end{theorem} \begin{proof} Consider the natural commutative diagram \begin{displaymath} \xymatrix{ \mathcal{V}_{9}^{4,5,m} \ar[r]^{\psi}\ar[d]_\phi & \mathcal{H}_{C/\mathbb{P}^{10}}\ar[d]\\ \mathcal{M}_{11,6}(k) \ar@{^{(}->}[r] & \mathcal{M}_{11}} \end{displaymath} where $\phi $ takes a plane curve to its canonical model, forgetting the embedding. Let $ H_k\subset\mathcal{M}_{11,6}(k)$ be the irreducible component containing the image points of curves lying in an irreducible component $ H\subset \mathcal{V}_{9}^{4,5,m} $ of expected dimension (see Theorem \ref{tangent}). We show that $ H_k$ is of expected dimension.\\ Let $C\subset \mathbb{P}^{10}$ be a canonical curve with $\ell$ extra syzygies, and let $\mathcal{C} \to (T,0)$ be its Kuranishi family. Then, the Koszul divisor $\mathcal{K}_{11}$ can be computed locally in this family as follows. One extends the minimal free resolution of the coordinate ring over $\mathbb{K}[x_0,\ldots,x_{10}]$ to a resolution over $\mathcal{O}_{T,0}[x_0, \ldots, x_{10}]$. The resulting complex has an $\ell \times \ell$ square submatrix with entries in $\mathcal{O}_{T,0}$. The determinant of this matrix defines the Koszul divisor restricted to the Kuranishi family. By a result of Hirschowitz and Ramanan \cite{hirsch}, this divisor coincides with $5$ times the Brill--Noether divisor $\mathcal{M}_{11,6}$, that is, $ \mathcal{K}_{11}=5\mathcal{M}_{11,6} $. Thus, the determinant of the matrix is a fifth power. Here, we compute the first order terms of this matrix for specific curves in various strata. \\ Let $ C\in H_k $ be the canonical model of a plane curve $ \Gamma\in H $ and, for the general first order deformation family of $ C$, let $ M $ denote the $ 5k\times 5k $ submatrix of $ \overline{\varphi}_{4} $ in the deformed free resolution, with entries linear in the free deformation parameters $ b_0,\ldots,b_{29} $.
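The first order lifting step used to produce the deformed resolution can be illustrated on a toy example. The following sympy sketch uses a hypothetical ideal $(x^2,xy)$ with a single syzygy; it only illustrates the relation $ f\circ \varphi_{1}^{(1)}= -f^{(1)}\circ \varphi_{1} $ and is not the paper's Macaulay2 computation.

```python
import sympy as sp

# Toy illustration of the first-order lifting step on a hypothetical ideal
# (NOT the genus-11 computation): f = (x^2, x*y) has the syzygy phi1 = (y, -x)^T,
# i.e. f * phi1 == 0.  We perturb f by f1 = (b*x*y, 0), where b is a deformation
# parameter with b^2 = 0, and solve f * phi1_corr = -f1 * phi1 for the correction.
x, y, b = sp.symbols('x y b')
f         = sp.Matrix([[x**2, x*y]])   # row vector of generators
phi1      = sp.Matrix([y, -x])         # first syzygy
f1        = sp.Matrix([[b*x*y, 0]])    # chosen first-order perturbation f^(1)
phi1_corr = sp.Matrix([0, -b*y])       # solution phi1^(1) of the lifting equation

# The defining relation f o phi1^(1) = -f^(1) o phi1 holds:
assert sp.expand((f*phi1_corr + f1*phi1)[0]) == 0
# The lifted syzygy annihilates the deformed generators (here even exactly):
residue = sp.expand(((f + f1)*(phi1 + phi1_corr))[0])
assert residue == 0
```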
In a minimal free resolution of $C$, the matrix defining the map $S(-6)^{5k}\longleftarrow S(-6)^{5k}$ is zero, hence the condition $ M=0 $ determines the space of the first order deformations of $C$ with extra syzygies of rank $ 5k $. \\ By means of the implemented function \texttt{verifyAssertion(2)} in \cite{kenesh}, we can compute a single explicit example which shows that for exactly $ k $ linearly independent linear forms \[ l_{1},\ldots,l_{k}\in \mathbb{K}[b_0,\ldots,b_{29}],\] we have \[ \det \ M=l_{1}^{5}\cdot \ldots\cdot l_{k}^{5}. \] As the entries of the matrix $M$ are linear combinations of the $k$ independent forms $l_{1},\ldots, l_{k}$, one has $M=0$ if and only if $l_{1}= \cdots=l_{k}=0$. Moreover, identifying $T_C\mathcal{M}_{11,6}(k)$ with the space of first order deformations of $C$ with $k$ pencils, $T_C\mathcal{M}_{11,6}(k)$ is a subset of the space of the first order deformations of $C$ with extra syzygies of rank $5k$. Thus, since $\dim T_C\mathcal{M}_{11,6}(k)\geq 30-k$, the tangent space $T_C\mathcal{M}_{11,6}(k) $ is the zero locus of these linear forms, and is of codimension exactly $ k $ inside $ T_{C}\mathcal{M}_{11} $. Hence, $ H_k $ is an irreducible component of expected dimension $ 30-k $. Moreover, the equality $ \mathcal{K}_{11}=5\mathcal{M}_{11,6} $ implies that $$T_C\mathcal{M}_{11,6}=V(l_1)\cup \ldots \cup V(l_k),$$ which proves that $ \mathcal{M}_{11,6} $ is, locally analytically at the point $ C $, a union of $ k $ smooth branches. \end{proof} \begin{remark} With the notation as above, under a change of basis, we can turn the matrix $ M $ into a block (or even a diagonal) matrix \[ M'=\begin{pmatrix} B_1 & 0& \ldots & 0\\ 0& B_2 & \ldots &0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \ldots& B_k \end{pmatrix} \] such that for $ i=1,\ldots, k $ the non-zero blocks are $ B_i=A_iL_i $, where $ A_i $ is an invertible $ 5\times 5 $ matrix with constant entries and $ L_i $ is the diagonal matrix with diagonal entries equal to $ l_i $.
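This block structure explains why the determinant factors as a product of fifth powers. The following sympy sketch verifies $ \det M=\prod_i \det(A_i)\, l_i^{n} $ for such a block matrix; sizes are reduced for illustration (block size $ n=3 $ instead of $ 5 $, and $ k=2 $ blocks), and the matrices $ A_i $ and linear forms $ l_i $ are sample data, not the paper's.

```python
import sympy as sp

# Sketch: for a block-diagonal matrix with blocks B_i = A_i * (l_i * Id), where
# A_i is constant invertible and l_i a linear form in the deformation
# parameters, det M = prod_i det(A_i) * l_i^n, a product of n-th powers.
b0, b1 = sp.symbols('b0 b1')            # deformation parameters
n = 3                                   # block size (5 in the paper)
l1, l2 = b0 + 2*b1, b0 - b1             # sample linear forms
A1 = sp.Matrix([[1, 2, 0], [0, 1, 3], [1, 0, 1]])   # det = 7
A2 = sp.Matrix([[2, 0, 1], [1, 1, 0], [0, 1, 1]])   # det = 3
B1 = A1 * (l1 * sp.eye(n))
B2 = A2 * (l2 * sp.eye(n))
M = sp.diag(B1, B2)                     # block-diagonal model of M'

assert sp.expand(M.det()) == sp.expand(A1.det() * A2.det() * l1**n * l2**n)
```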
In fact, for $ i=1,\ldots,k $, let $ X_i $ be the scroll swept out by the pencil $ g_i $ on $ C $. Let $ M_i=(MV_i)^{t} $ be the $ 5\times 5k $ matrix, where $ V_i $ is the constant matrix defining the last map $\varphi_{i}:S(-6)^{5}\longrightarrow S(-6)^{5k} $ in the injective morphism of chain complexes from the resolution of $ X_i $ to the linear strand of a minimal resolution of $ C $. Set $ W_i:=\ker M_i $ and for $ j\in \lbrace 1,\ldots, k\rbrace $, let $ \overline{W}_{j}$ be the intersection of the modules $ W_i $ for $ i\neq j $. Since the pencils are mutually independent, the corresponding scrolls contribute independently to the rank of the module $ \bar{S}(-6)^{5k}$; one sees that $ \operatorname{rank} W_i=5(k-1) $, and a basis of $ W_i $ can be identified with the columns of a constant matrix of size $ 5k\times 5(k-1) $. Moreover, we have $ \operatorname{rank} \overline{W}_{j}=5 $, so that a basis of the module $\overline{W}_{1}\oplus\ldots\oplus\overline{W}_{k} $ determines a $ 5k\times 5k $ invertible constant matrix. Using this invertible matrix to change the basis of the space $ \bar{S}(-6)^{5k} $ turns the matrix $ M $ into a block matrix as above. To speed up our computations, we have used this presentation of $ M $ to compute its determinant. \end{remark} \begin{theorem} \label{g310} The locus $ \mathcal{M}_{11,10}^{3} $ of genus $ 11 $ curves with a $ g^3_{10} $ is an irreducible component of $ \mathcal{M}_{11,6}(20)$ with expected dimension $ 25 $.
\end{theorem} \begin{proof} With the same argument as above, the theorem follows from the computation of an explicit example (see \texttt{verifyAssertion(6)} in \cite{kenesh}) which shows that for five linearly independent linear forms $ l_{1},\ldots,l_{5} $ we have $$ T_C\mathcal{M}_{11,10}^{3}=T_C \mathcal{M}_{11,6}(20)=V(l_1,\ldots,l_5).$$ \end{proof} \section{Unirational irreducible components}\label{sec:injectivity} In this section, we prove that the rational families of plane curves constructed above dominate an irreducible component of the strata $ \mathcal{M}_{11,6}(k) $ for $ k=5,\ldots,10 $. To this end, we count the number of moduli for these families by computing the rank of the differential map between the tangent spaces. \begin{theorem} \label{degree9} For $ 5\leq k\leq9 $, the stratum $ \mathcal{M}_{11,6}(k)$ has a unirational irreducible component of expected dimension $ 30-k $. A general curve lying on this component arises from a degree $ 9 $ plane model with $ 4 $ ordinary triple and $ 5 $ ordinary double points which contains $ k-5 $ points among the ninth fixed points of the pencils of cubics passing through the $ 4 $ triple and $ 4 $ chosen double points. \end{theorem} \begin{proof} With notation as in Theorem \ref{tangentlocus}, let $ \phi_{\vert H}:H\longrightarrow H_{k} $ be the natural map between the irreducible components of expected dimensions. To compute the dimension of $ \overline{\phi(H)} $, one has to compute the rank of the differential map $$ d\phi_{\Gamma}:T_{\Gamma}H\longrightarrow T_{C}H_{k},$$ at a smooth point $ C\in \phi(H)$. We recall that for $ m>0 $ the tangent space to $ \mathcal{V}_{9}^{4,5,m} $ at a point $ \Gamma $ is a subspace of $T_{\Gamma} \mathcal{V}_{9}^{4,5} $. Therefore, it suffices to show that $ \dim( \ker \ d\phi_{\Gamma})=8 $ for the case $ k=5 $.
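Granting this, and using Theorem \ref{tangent}, the dimension count reads
$$ \dim \overline{\phi(H)}=\dim T_{\Gamma}H-\dim(\ker d\phi_{\Gamma})=(33-m)-8=25-m=30-k, $$
where $ k=m+5 $; this is the expected dimension of $ H_k $.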
Considering the following commutative diagram of tangent maps \begin{displaymath} \xymatrix{ & &T_{\Gamma}H \ar[d]_{d\psi_{\Gamma}}\ar[r]^{d\phi_{\Gamma}}& T_{C}H_{k} \ar@{^{(}->}[d]\\ 0 \ar[r]& \Hb^{0}(C,T_{\mathbb{P}^{10}}\vert_{C}) \ar[r] & \Hb^{0}(C,\mathcal{N}_{C/\mathbb{P}^{10}})\ar[r]& \Hb^{1}(C,T_{C}) \ar[r]& 0 } \end{displaymath} our explicit computation of a single example (see \texttt{verifyAssertion(3)} in \cite{kenesh}) shows that the image of the map $ d\psi_{\Gamma} $ has an exactly $ 8 $-dimensional intersection with the image of $ \Hb^{0}(C,T_{\mathbb{P}^{10}}\vert_{C}) $ inside $ \Hb^{0}(C,\mathcal{N}_{C/\mathbb{P}^{10}}) $, which corresponds to the automorphisms of the projective plane. Therefore, the rational family of plane curves lying on the irreducible component $ H $ dominates an irreducible component of $ \mathcal{M}_{11,6}(k) $ of expected dimension. \end{proof} \begin{theorem} \label{degree10} The stratum $ \mathcal{M}_{11,6}(10) $ has a unirational irreducible component of excess dimension $26$, where the curves arise from degree $ 8 $ plane models with $ 10 $ ordinary double points. More precisely, the locus $ \mathcal{M}_{11,8}^{2} $ of curves possessing a linear system $ g^{2}_{8} $ is a unirational irreducible component of $ \mathcal{M}_{11,6}(10) $ of dimension $ 26 $. \end{theorem} \begin{proof} Let $ \mathcal{V}_{8}^{10} $ be the Severi variety of degree $ 8 $ plane curves with $ 10 $ ordinary double points. By classical results \cite{zariskiharris}, it is known that $ \mathcal{V}_{8}^{10} $ is smooth at each point and of pure dimension $ 34 $. Let $ \Gamma $ be a plane curve of degree $ 8 $ with $ 10 $ ordinary double points, and let $ C\in \mathcal{M}_{11,8}^{2}\subset \mathcal{M}_{11,6}(10) $ be its normalization.
With the same argument as in the proofs of Theorems \ref{tangentlocus} and \ref{degree9}, the theorem follows from the computation of an example which shows that for linear forms $ l_1,\ldots,l_{10} $ we have $ \dim T_C\mathcal{M}_{11,6}(10)=\dim V(l_1,\ldots,l_{10})=26 $, and furthermore the induced differential map is of full rank $ 26 $. The verification of this statement is implemented in the function \texttt{verifyAssertion(4)} in \cite{kenesh}. \end{proof} \begin{corollary} Let $ \Gamma $ be a general plane curve of degree $ 8 $ with $ 10 $ ordinary double points, and let $ C\in \mathcal{M}_{11} $ be its normalization. Consider a deformation of $ C $ which preserves at least four pencils $ g^{1}_{6}$'s of the $ 10 $ existing pencils. Then, the deformation of $ C $ preserves the $ g^{2}_{8} $. In other words, a deformation of $ C $ which keeps at least four pencils $ g^{1}_{6}$'s still lies on the locus $ \mathcal{M}_{11,8}^{2} $. \end{corollary} \begin{proof} By the above theorem, around a general point $ C\in \mathcal{M}_{11,8}^{2}$, the Brill--Noether divisor $ \mathcal{M}_{11,6} $ is locally a union of $ 10 $ branches defined by $ l_1\cdot \ldots \cdot l_{10}=0$. On the other hand, $ \operatorname{codim} T_C \mathcal{M}_{11,8}^{2}=\operatorname{codim} V (l_1,\ldots, l_{10})=4 $, so that any four of the linear forms are independent and define $ \mathcal{M}_{11,8}^{2} $ locally around $ C $. Therefore, a deformation of $ C $ which keeps at least four of the $ g^{1}_{6}$'s still lies on the locus $ \mathcal{M}_{11,8}^{2} $. \end{proof} \section{Further components} \label{sec:Further components} Having already described an irreducible unirational component of the strata $ \mathcal{M}_{11,6}(k) $ for $ k=5,\ldots,10 $, the first natural question is whether these strata are irreducible. If the answer is negative, then the question is how the other irreducible components arise.
\\ Although one may mimic our pattern to find models of plane curves of higher degree with singular points of higher multiplicity, taking the degree $ 9 $ plane curves with $ 4 $ ordinary triple and $ 5 $ ordinary double points as our original model, our computations indicate that the models of higher degree are usually Cremona transformations of this model with respect to three singular points. Therefore, considering models of different degrees and singularities, we have not found new elements in these strata. On the other hand, the study of syzygy schemes of curves lying on these strata leads to the following theorem, which states the existence of further irreducible components.\\ \begin{theorem} \label{othercomponent} For $ 5\leq k\leq 8 $, the stratum $ \mathcal{M}_{11,6}(k) $ has at least two irreducible components, both of expected dimension, along which $\mathcal{M}_{11,6} $ is generically a simple normal crossing divisor. \end{theorem} \begin{proof} The proof relies on the syzygy schemes and our computation of the tangent cone at a point $C$ in $ H_k $. Consider $ \eta: \mathcal{W}_{11,6}^1 \longrightarrow\mathcal{M}_{11,6}\subset \mathcal M_{11}$ and let $C$ be a point in our unirational component $H_k \subset \mathcal{M}_{11,6}(k)$ for $ 6\leq k\leq 9 $. Then, by Theorem \ref{tangentlocus}, the tangent cone of the Brill--Noether divisor $\mathcal{M}_{11,6} $ is defined by a product $l_1 \cdot\ldots\cdot l_k$ of $k$ linearly independent linear forms, and $\mathcal{W}_{11,6}^1 \longrightarrow \mathcal{M}_{11,6} $ is locally around $C$ the normalization of $\mathcal{M}_{11,6} $. Let $f_1, \ldots, f_k$ be power series which define the $k$ branches of $\mathcal{M}_{11,6}$ in an analytic or \'{e}tale neighbourhood $U$ of $C \in \mathcal{M}_{11}$.
Then $$f_i = l_i + \hbox{ higher order terms }$$ and the zero locus $V(f_i) \subset U$ has the following interpretation: $$V(f_i) \cong \{ (C',L'): \ (C',L')\in\ U_i \},$$ where $ \eta^{-1}(U) = \bigcup_{i=1}^k U_i$ is the disjoint union of smooth $ (3g-4) $-dimensional manifolds with $(C,L_i) \in U_i$, where $L_i$ denotes the line bundle corresponding to the $i$-th pencil $g^{1}_6$ on $C$, in some fixed enumeration of the pencils $L_1, \ldots,L_k $. \\ The submanifold $B_i=\{f_i=0\}$ then consists of deformations of $C$ induced by deformations of the pair $(C,L_i)$, and for any family $\Delta \subset B_i$ the Kuranishi family restricted to $\Delta$ extends to a deformation of the pair $(C,L_i)$ $$ \begin{matrix} C &\subset &\mathcal C & \quad & (C,L_i) &\subset &(\mathcal C,\mathcal L_i) \cr \downarrow && \downarrow && \downarrow && \downarrow \cr 0 & \in & \Delta & \quad & 0 & \in& \Delta \cr \end{matrix} $$ Let $I \subset \{1,\ldots,k\}$ be any subset of cardinality $\ell\geq 5$ and $C' \in U$ be a point such that $$C'\in \bigcap_{i\in I} V(f_i) \setminus \bigcup_{j\notin I} V(f_j).$$ Then, by Theorem \ref{tangentlocus}, $$C' \in \mathcal M_{11,6}(\ell) \setminus \mathcal M_{11,6}(\ell+1), $$ since the $l_i$ with $i \in I$ are linearly independent, $\mathcal M_{11,6}(\ell)$ is of codimension $\ell$ and $ \mathcal M_{11,6} $ is a normal crossing divisor around $C'$. Now we examine whether or not $C'$ lies in our component $H_\ell$. For this purpose, we deform the $L_i$ for $i \in I$ in a one-dimensional family of curves $$ \Delta=\lbrace C''\rbrace\subset \bigcap_{i\in I} V(f_i)$$ through $C$ and $C'$, which intersects $ \bigcup_{j\notin I} V(f_j) $ only in the point $ C $. The syzygy schemes of the $C''\in \Delta$ form an algebraic family defined by the intersection of the deformed scrolls $X_i''$ swept out by the deformed line bundles $ L''_i $.
Thus, by semicontinuity, the dimension of the syzygy scheme of $C''$ near $C \in \Delta$ is less than or equal to the dimension of the syzygy scheme $ \bigcap X_i$, and in case of equality we should have $\deg(\bigcap X''_i)\leq \deg(\bigcap X_i)$. If we take the special syzygy scheme of $ C'' $ corresponding to the syzygies of $ \bigcap_{j\in J}X_j'' $, then we likewise have semicontinuity compared to $ \bigcap_{j\in J}X_j $. Therefore, for $ C'' $ to lie on $ H_\ell $ we need a subset $ J\subset I $ of cardinality $ 5 $ such that the syzygy scheme is a surface of degree $ 15 $ (see Table \ref{table4}). By Remark \ref{aandb}, this occurs only if we have $a=5$ and $b=0$. Thus, taking $I$ to be a subset of $ \{2,\ldots,5\} \cup \{6,\ldots,k\}$ we obtain a point $C'' \in \mathcal M_{11,6}(\ell) \setminus H_\ell$. This proves that for $5 \leq \ell \leq 8 $ the stratum $\mathcal{M}_{11,6}(\ell)$ has at least two components, one of which is $H_\ell$ and the other is a component containing $C''$. \end{proof} In particular, considering the five smooth transversal branches of $ \mathcal{M}_{11,6} $ at a general point $ C $ of the irreducible component $ H_5\subset \mathcal{M}_{11,6}(5) $, we can deform $ C $ away from one of the branches, in a one-dimensional family of curves with $ 4 $ pencils, which proves the following. \begin{theorem} The stratum $ \mathcal{M}_{11,6}(4) $ has an irreducible component of expected dimension $ 26 $. \end{theorem} \begin{remark} \label{aandb} For the model of plane curve of degree $ 9 $ with nine pencils described in Section \ref{sec:Planar model description}, we have computed the dimension, degree and Betti table of the syzygy schemes associated to different numbers $ 2\leq l\leq 9 $ of pencils $ g^{1}_{6}$'s. We recall that for a number of pencils indexed by a subset $ I\subset \lbrace 1,\ldots,9 \rbrace $, the associated syzygy scheme is the intersection $ \bigcap_{i\in I} X_i $ of the scrolls swept out by each of the pencils.
Let $ 0\leq a\leq 5 $ be the number of chosen pencils which are induced by projection from the triple points or the pencil of conics. Likewise, let $ 0\leq b\leq 4 $ be the number of chosen pencils arising from the pencils of cubics through a certain number of points. In the following tables, and for a specific genus $ 11 $ curve possessing nine pencils, we have listed the numerical data of the plausible syzygy schemes arising from different numbers $ l=a+b\geq 2 $ of the existing pencils $ g^{1}_{6}$'s. In \cite{kenesh}, one can compute an example of such a curve over a finite field of characteristic $ p $, by running the function \texttt{random6gonalGenus11Curvekpencil(p,9)}. In particular, the function \texttt{verifyAssertion(5)} provides the explicit equation of our specific curve and the collection of the nine scrolls. In the columns ``dim'' and ``deg'' we have marked the possible dimension and the degree of the corresponding syzygy schemes for this specific curve. Based on our experiments, it turns out that the values only depend on the numbers $ a $ and $ b $ of the chosen pencils.\\ \begin{longtable}{ |l|l|l|l|l|l| } \multicolumn{6}{ c } {}\\ \hline & $ $ & $ \dim $ & $ \deg $ & \text{genus} & \hspace*{2.5cm}$ \text{ Betti table} $\\ \hline $\ \ a=0 $ & $\ \ b=2 $ & $\ 2 $ & $ 18$ & & \ \small{\begin{tabular}{|c c c c c c c c c } \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $\\ $.$ & $27$ & $ 96 $&$127$ & $48$ & $10$& $ . $& $ . $& $ . $\\ $.$ &$.$ & $1$ & $ 48 $& $220$ & $288$ &$189$ &$ 64 $&$ 9 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $\ \ a=1 $ & $ \ \ b=1 $ & $\ 2 $ & $ 18$ & &\ \small{\begin{tabular}{|c c c c c c c c c } \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $\\ $.$ & $27$ & $ 96 $&$127$ & $48$ & $10$& $ . $& $ . $& $ . 
$\\ $.$ &$.$ & $1$ & $ 48 $& $220$ & $288$ &$189$ &$ 64 $&$ 9 $ \\ \noalign{\vskip 2mm} \end{tabular} }\\ \hline $\ \ a=2 $ & $\ \ b=0 $ & $\ 2 $ & $ 18$ & &\ \small{\begin{tabular}{|c c c c c c c c c } \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $\\ $.$ & $27$ & $ 96 $&$127$ & $48$ & $10$& $ . $& $ . $& $ . $\\ $.$ &$.$ & $ 1$& $48$ & $220$ &$288$ &$ 189 $&$ 64 $& $9 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline \caption{\label{table1} Numerical data of possible syzygy schemes with $ a+b=2 $.} \end{longtable} \begin{longtable}{ |l|l|l|l|l|l| } \multicolumn{6}{ c } {}\\ \hline & $ $ & $ \dim $ & $ \deg $ & \text{genus} & \hspace*{2.5cm}$ \text{ Betti table} $\\ \hline $ \ \ a=0 $ & $ \ \ b=3 $ & $\ 1 $ & $ 21 $ & $ 12 $ & \ \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $35$ & $ 151 $&$279$ & $207$ & $15$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ 3 $& $141$ & $414$ &$399$ &$ 196 $&$ 45 $&$ 1 $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}} \\ \hline $ \ a=1 $ & $ \ b=2 $ & $\ 1 $ & $ 20$ & $ \ 11 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $45$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $45$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $ \ a=2$ & $ \ \ b=1 $ & $\ 1 $ & $ 21$ & $ \ 12 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $35$ & $ 151 $&$279$ & $210$ & $30$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $6$ & $156$ &$414$ &$ 399 $&$ 45 $&$ 1 $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . 
$&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $\ \ a=3 $ & $\ \ b=0 $ & $\ 2 $ & $ 16$ & &\ \small{\begin{tabular}{|c c c c c c c c c } \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $\\ $.$ & $29$ & $ 112 $&$182$ & $113$ & $15$& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$& $1$ & $ 85 $& $176$ & $133$ &$48$ &$ 7 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline \caption{\label{table2} Numerical data of possible syzygy schemes with $ a+b=3 $.} \end{longtable} \begin{longtable}{ |l|l|l|l|l|l| } \multicolumn{6}{ c } {}\\ \hline & $ $ & $ \dim $ & $ \deg $ & \text{genus} & \hspace*{2.5cm}$ \text{ Betti table} $\\ \hline $ \ a=0 $ & $ \ b=4 $ & $\ 1 $ & $ 20$ & $ \ 11 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $45$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $45$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $ \ a=1 $ & $ \ b=3 $ & $\ 1 $ & $ 20$ & $ \ 11 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $45$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $45$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $ \ a=2 $ & $ \ b=2 $ & $\ 1 $ & $ 20$ & $ \ 11 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $45$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $45$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . 
$&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $ \ \ a=3 $ & $ \ \ b=1 $ & $\ 1 $ & $ 21 $ & $ \ 12 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $35$ & $ 151 $&$279$ & $210$ & $30$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $6$ & $156$ &$414$ &$ 399 $&$ 45 $&$ 1 $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}} \\ \hline $\ \ a=4 $ & $\ \ b=0 $ & $\ 2 $ & $ 15$ & &\ \small{\begin{tabular}{|c c c c c c c c c } \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $\\ $.$ & $30$ & $ 120 $&$210$ & $169$ & $25$& $ . $& $ . $& $ . $\\ $.$ &$.$ & $ . $&$1$ & $ 25 $& $120$ & $105$ &$40$ &$ 6 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline \caption{\label{table3} Numerical data of possible syzygy schemes with $ a+b=4 $.} \end{longtable} \begin{longtable}{ |l|l|l|l|l|l| } \multicolumn{6}{ c } {}\\ \hline & $ $ & $ \dim $ & $ \deg $ & \text{genus} & \hspace*{2.5cm}$ \text{ Betti table} $\\ \hline $ \ a=1 $ & $ \ b=4 $ & $\ 1 $ & $ 20$ & $ \ 11 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $45$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $45$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $ \ a=2 $ & $ \ b=3 $ & $\ 1 $ & $ 20$ & $ \ 11 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $45$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $45$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . 
$&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $ \ a=3 $ & $ \ b=2 $ & $\ 1 $ & $ 20$ & $ \ 11 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $45$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $45$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $ \ a=4 $ & $ \ b=1 $ & $\ 1 $ & $ 21$ & $ \ 12 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $35$ & $ 151 $&$279$ & $210$ & $30$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $6$ & $156$ &$414$ &$ 399 $&$ 45 $&$ 1 $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}} \\ \hline $\ \ a=5 $ & $\ \ b=0 $ & $\ 2 $ & $ 15$ & &\ \small{\begin{tabular}{|c c c c c c c c c } \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $\\ $.$ & $30$ & $ 120 $&$210$ & $169$ & $25$& $ . $& $ . $& $ . $\\ $.$ &$.$ & $ . $&$1$ & $ 25 $& $120$ & $105$ &$40$ &$ 6 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline \caption{\label{table4} Numerical data of possible syzygy schemes with $ a+b=5 $.} \end{longtable} \begin{longtable}{ |l|l|l|l|l|l| } \multicolumn{6}{ c } {}\\ \hline & $ $ & $ \dim $ & $ \deg $ & \text{genus} & \hspace*{2.5cm}$ \text{ Betti table} $\\ \hline $ \ a=2 $ & $ \ b=4 $ & $\ 1 $ & $ 20$ & $ \ 11 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $45$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $45$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . 
$&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $ \ a=3 $ & $ \ b=3 $ & $\ 1 $ & $ 20$ & $ \ 11 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $45$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $45$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $ \ a=4 $ & $ \ b=2 $ & $\ 1 $ & $ 20$ & $ \ 11 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $45$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $45$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline $ \ a=5 $ & $ \ b=1 $ & $\ 1 $ & $ 21$ & $ \ 12 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $35$ & $ 151 $&$279$ & $210$ & $30$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $6$ & $156$ &$414$ &$ 399 $&$ 45 $&$ 1 $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . $&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}} \\ \hline \caption{\label{table5} Numerical data of possible syzygy schemes with $ a+b=6 $.} \end{longtable} \begin{longtable}{ |l|l|l|l|l|l| } \multicolumn{6}{ c } {}\\ \hline & $ $ & $ \dim $ & $ \deg $ & \text{genus} & \hspace*{2.5cm}$ \text{ Betti table} $\\ \hline $ \ a$ & $ \ b $ & $\ 1 $ & $ 20$ & $ \ 11 $ & \small{ \begin{tabular}{|c c c c c c c c c c} \noalign{\vskip 2mm} \hline $1$ & $.$ & $ . $& $.$ & $.$ & $.$& $ . $ &$ . $&$ . $&$ . $\\ $.$ & $36$ & $ 160 $&$315$ & $288$ & $45$& $ . $& $ . $& $ . $& $ . $\\ $.$ &$.$ & $.$ & $ . $& $45$ & $288$ &$315$ &$ 160 $&$ 36 $&$ . $\\ $.$ &$.$ & $.$ & $ . $& $.$ & $.$ &$.$ &$ . $&$ . 
$&$ 1 $ \\ \noalign{\vskip 2mm} \end{tabular}}\\ \hline \caption{\label{table6} Numerical data of possible syzygy schemes with $ a+b\geq 7$.} \end{longtable} \end{remark} \thanks{Hanieh Keneshlou: \texttt{[email protected]}}\\ \thanks{Frank-Olaf Schreyer: \texttt{[email protected]}} \end{document}
\begin{document} \title[Asymptotically regular semigroups]{On the structure of fixed-point sets of asymptotically regular semigroups} \author[A. Wi\'{s}nicki]{Andrzej Wi\'{s}nicki} \begin{abstract} We extend a few recent results of G\'{o}rnicki (2011) asserting that the set of fixed points of an asymptotically regular mapping is a retract of its domain. In particular, we prove that in some cases the resulting retraction is H\"{o}lder continuous. We also characterise Bynum's coefficients and the Opial modulus in terms of nets. \end{abstract} \subjclass[2010]{Primary 47H10; Secondary 46B20, 47H20, 54C15.} \keywords{Asymptotically regular mapping, retraction, fixed point, Opial property, Bynum's coefficients, weakly null nets. } \address{Andrzej Wi\'{s}nicki, Institute of Mathematics, Maria Curie-Sk\l odowska University, 20-031 Lublin, Poland} \email{[email protected]} \maketitle \section{Introduction.} The notion of asymptotic regularity, introduced by Browder and Petryshyn in \cite{BrPe}, has become a standing assumption in many results concerning fixed points of nonexpansive and more general mappings. Recall that a mapping $T:M\rightarrow M$ acting on a metric space $(M,d)$ is said to be asymptotically regular if \begin{equation*} \lim_{n\rightarrow \infty }d(T^{n}x,T^{n+1}x)=0 \end{equation*} for all $x\in M.$ Ishikawa\ \cite{Is} proved that if $C$ is a bounded closed convex subset of a Banach space $X$ and $T:C\rightarrow C$ is nonexpansive, then the mapping $T_{\lambda }=(1-\lambda )I+\lambda T$ is asymptotically regular for each $\lambda \in (0,1).$ Edelstein and O'Brien \cite{EdOb} showed independently that $T_{\lambda }$ is uniformly asymptotically regular over $x\in C,$ and Goebel and Kirk \cite{GoKi3} proved that the convergence is even uniform with respect to all nonexpansive mappings from $C$ into $C$. 
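For a simple illustration of these notions (a standard one-dimensional example), consider the nonexpansive mapping $Tx=-x$ on $C=[-1,1]$. Here $d(T^{n}x,T^{n+1}x)=2\left\vert x\right\vert $ for every $n$, so $T$ itself is not asymptotically regular, whereas $T_{\lambda }x=(1-2\lambda )x$ satisfies
\begin{equation*}
d(T_{\lambda }^{n}x,T_{\lambda }^{n+1}x)=2\lambda \left\vert 1-2\lambda \right\vert ^{n}\left\vert x\right\vert \underset{n\rightarrow \infty }{\longrightarrow }0
\end{equation*}
for each $\lambda \in (0,1)$, in accordance with Ishikawa's result quoted above.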
Other examples of asymptotically regular mappings are given by the result of Anzai and Ishikawa \cite{AnIs} (see also \cite{XuYa}): if $T$ is an affine mapping acting on a bounded closed convex subset of a locally convex space $ X $, then $T_{\lambda }=(1-\lambda )I+\lambda T$ is uniformly asymptotically regular. In 1987, Lin \cite{Li} constructed a uniformly asymptotically regular Lipschitz mapping in $\ell _{2}$ without fixed points which extended an earlier construction of Tingley \cite{Ti}. Subsequently, Maluta, Prus and Wo\'{s}ko \cite{MaPrWo} proved that there exists a continuous fixed-point free asymptotically regular mapping defined on any bounded convex subset of a normed space which is not totally bounded (see also \cite{Er}). For the fixed-point existence theorems for asymptotically regular mappings we refer the reader to the papers by T. Dom\'{\i}nguez Benavides, J. G\'{o}rnicki, M. A. Jap\'{o}n Pineda and H. K. Xu (see \cite{DoJa, DoXu, Go1}). It was shown in \cite{SeWi} that the set of fixed points of a $k$-uniformly Lipschitzian mapping in a uniformly convex space is a retract of its domain if $k$ is close to $1$. In recent papers \cite{GoT,GoTai,GoN}, J. G\'{o}rnicki proved several results concerning the structure of fixed-point sets of asymptotically regular mappings in uniformly convex spaces. In this paper we continue this work and extend a few of G\'{o}rnicki's results in two aspects: we consider a more general class of spaces and prove that in some cases, the fixed-point set $\mathrm{Fix\,}T$ is not only a (continuous) retract but even a H\"{o}lder continuous retract of the domain. We present our results in a more general case of a one-parameter nonlinear semigroup. We also characterise Bynum's coefficients and the Opial modulus in terms of nets. \section{Preliminaries} Let $G$ be an unbounded subset of $[0,\infty )$ such that $t+s,t-s\in G$ for all $t,s\in G$ with $t>s$ (e.g., $G=[0,\infty )$ or $G=\mathbb{N}$).
By a nonlinear semigroup on $C$ we shall mean a one-parameter family of mappings $ \mathcal{T}=\{T_{t}:t\in G\}$ from $C$ into $C$ such that $ T_{t+s}x=T_{t}\,T_{s}x$ for all $t,s\in G$ and $x\in C$. In particular, we do not assume in this paper that $\{T_{t}:t\in G\}$ is strongly continuous. We use a symbol $|T|$ to denote the exact Lipschitz constant of a mapping $ T:C\rightarrow C$, i.e., \begin{equation*} |T|=\inf \{k:\Vert Tx-Ty\Vert \leq k\Vert x-y\Vert \ \text{for\ all}\ x,y\in C\}. \end{equation*} If $T$ is not Lipschitzian we define $|T|=\infty $. A semigroup $\mathcal{T}=\{T_{t}:t\in G\}$ from $C$ into $C$ is said to be asymptotically regular if $\lim_{t}\left\Vert T_{t+h}x-T_{t}x\right\Vert =0$ for every $x\in C$ and $h\in G.$ Assume now that $C$ is convex and weakly compact and $\mathcal{T} =\{T_{t}:t\in G\}$ is a nonlinear semigroup on $C$ such that $s(\mathcal{T} )=\liminf_{t}|T_{t}|<\infty .$ Choose a sequence $(t_{n})$ of elements in $G$ such that $\lim_{n\rightarrow \infty }t_{n}=\infty $ and $s(\mathcal{T} )=\lim_{n\rightarrow \infty }\left\vert T_{t_{n}}\right\vert .$ By Tikhonov's theorem, there exists a pointwise weakly convergent subnet $ (T_{t_{n_{\alpha }}})_{\alpha \in \emph{A}}$ of $(T_{t_{n}}).$ We denote it briefly by $(T_{t_{\alpha }})_{\alpha \in \emph{A}}.$ For every $x\in C$, define \begin{equation} Lx=w\text{-}\lim_{\alpha }T_{t_{\alpha }}x, \label{Lx} \end{equation} i.e., $Lx$ is the weak limit of the net $(T_{t_{\alpha }}x)_{\alpha \in \emph{A}}$. Notice that $Lx$ belongs to $C$ since $C$ is convex and weakly compact. The weak lower semicontinuity of the norm implies \begin{equation*} \Vert Lx-Ly\Vert \leq \liminf_{\alpha }\Vert T_{t_{\alpha }}x-T_{t_{\alpha }}y\Vert \leq \limsup_{n\rightarrow \infty }\Vert T_{t_{n}}x-T_{t_{n}}y\Vert \leq s(\mathcal{T})\Vert x-y\Vert . \end{equation*} We formulate the above observation as a separate lemma. 
\begin{lemma} \label{nonexp}Let $C$ be a convex weakly compact subset of a Banach space $X$ and let $\mathcal{T}=\{T_{t}:t\in G\}$ be a semigroup on $C$ such that $s( \mathcal{T})=\liminf_{t}|T_{t}|<\infty .$ Then the mapping $L:C\rightarrow C$ defined by (\ref{Lx}) is $s(\mathcal{T})$-Lipschitz. \end{lemma} We end this section with the following variant of a well known result which is crucial for our work (see, e.g., \cite[Prop. 1.10]{BeLi}). \begin{lemma} \label{holder}Let $(X,d)$ be a complete bounded metric space and let $ L:X\rightarrow X$ be a $k$-Lipschitz mapping. Suppose there exist $0<\gamma <1$ and $c>0$ such that $d(L^{n+1}x,L^{n}x)\leq c\gamma ^{n}$ for every $x\in X$. Then $Rx=\lim_{n\rightarrow \infty }L^{n}x$ is a H\"{o}lder continuous mapping. \end{lemma} \begin{proof} We may assume that $\diam X<1$. Fix $x\neq y$ in $X$ and notice that for any $n\in \mathbb{N}$, \begin{equation*} d(Rx,Ry)\leq d(Rx,L^{n}x)+d(L^{n}x,L^{n}y)+d(L^{n}y,Ry)\leq 2c\frac{\gamma ^{n}}{1-\gamma }+k^{n}d(x,y). \end{equation*} Take $\alpha <1$ such that $k\leq \gamma ^{1-\alpha ^{-1}}$ and put $\gamma ^{n-r}=d(x,y)^{\alpha }$ for some $n\in \mathbb{N}$ and $0<r\leq 1$. Then $ k^{n-1}\leq (\gamma ^{1-\alpha ^{-1}})^{n-r}$ and hence \begin{equation*} d(Rx,Ry)\leq 2c\frac{\gamma ^{n-r}}{1-\gamma }+k(\gamma ^{n-r})^{1-\alpha ^{-1}}d(x,y)=(\frac{2c}{1-\gamma }+k)d(x,y)^{\alpha }. \end{equation*} \end{proof} \section{Bynum's coefficients and Opial's modulus in terms of nets} From now on, $C$ denotes a nonempty convex weakly compact subset of a Banach space $X$. Let $\mathcal{A}$ be a directed set, $(x_{\alpha })_{\alpha \in \mathcal{A}}$ a bounded net in $X$, $y\in X$ and write \begin{align*} r(y,(x_{\alpha }))& =\limsup_{\alpha }\Vert x_{\alpha }-y\Vert , \\ r(C,(x_{\alpha }))& =\inf \{r(y,(x_{\alpha })):y\in C\}, \\ A(C,(x_{\alpha }))& =\{y\in C:r(y,(x_{\alpha }))=r(C,(x_{\alpha }))\}.
\end{align*} The number $r(C,(x_{\alpha }))$ and the set $A(C,(x_{\alpha }))$ are called, respectively, the asymptotic radius and the asymptotic center of $(x_{\alpha })_{\alpha \in \mathcal{A}}$ relative to $C$. Notice that $A(C,(x_{\alpha })) $ is nonempty convex and weakly compact. Write \begin{equation*} r_{a}(x_{\alpha })=\inf \{\limsup_{\alpha }\Vert x_{\alpha }-y\Vert :y\in \overline{\conv}(\{x_{\alpha }:\alpha \in \mathcal{A}\})\} \end{equation*} and let \begin{equation*} \diam_{a}(x_{\alpha })=\inf_{\alpha }\sup_{\beta ,\gamma \geq \alpha }\Vert x_{\beta }-x_{\gamma }\Vert \end{equation*} denote the asymptotic diameter of $(x_{\alpha })$. The normal structure coefficient $\Nor(X)$ of a Banach space $X$ is defined by \begin{equation*} \Nor(X)=\sup \left\{ k:k\,r(K)\leq \diam K\ \ \text{for\ each\ bounded\ convex\ set}\ K\subset X\right\} , \end{equation*} where $r(K)=\inf_{y\in K}\sup_{x\in K}\Vert x-y\Vert $ is the Chebyshev radius of $K$ relative to itself. Assuming that $X$ does not have the Schur property, the weakly convergent sequence coefficient (or Bynum's coefficient) is given by \begin{equation*} \WCS(X) =\sup \left\{ k:k\,r_{a}(x_{n})\leq \diam_{a}(x_{n})\ \ \text{for\ each\ sequence}\ x_{n}\overset{w}{\longrightarrow }0\right\} , \end{equation*} where $x_{n}\overset{w}{\longrightarrow }0$ means that $(x_{n})$ is weakly null in $X$ (see \cite{By}). For Schur spaces, we define $WCS(X)=2$. It was proved independently in \cite{DoLop, Pr, Zh} that \begin{equation} WCS(X)=\sup \left\{ k:k\,\limsup_{n}\Vert x_{n}\Vert \leq \diam_{a}(x_{n})\ \text{for each\ sequence}\ x_{n}\overset{w}{\longrightarrow }0\right\} \label{wcs1} \end{equation} and, in \cite{DoLoXu}, that \begin{equation*} WCS(X)=\sup \left\{ k:k\,\limsup_{n}\Vert x_{n}\Vert \leq D[(x_{n})]\ \text{ for\ each\ sequence}\ x_{n}\overset{w}{\longrightarrow }0\right\} , \end{equation*} where $D[(x_{n})]=\limsup_{m}\limsup_{n}\left\Vert x_{n}-x_{m}\right\Vert . 
$ Kaczor and Prus \cite{KaPr} initiated a systematic study of assumptions under which one can replace sequences by nets in a given condition. We follow the arguments from that paper and use the well known method of constructing basic sequences attributed to S. Mazur (see \cite{Pe}). Let us first recall a variant of a classical lemma which can be proved in the same way as for sequences (see, e.g., \cite[Lemma]{Pe}). \begin{lemma} \label{Ma} Let $\{x_{\alpha }\}_{\alpha \in \mathcal{A}}$ be a bounded net in $X$ weakly converging to $0$ such that $\inf_{\alpha }\Vert x_{\alpha }\Vert >0$. Then for every $\varepsilon >0$, $\alpha ^{\prime }\in \mathcal{A }$ and for every finite dimensional subspace $E$ of $X$, there is $\alpha >\alpha ^{\prime }$ such that \begin{equation*} \Vert e+tx_{\alpha }\Vert \geq (1-\varepsilon )\Vert e\Vert \end{equation*} for any $e\in E$ and every scalar $t.$ \end{lemma} Recall that a sequence $(x_{n})$ is basic if and only if there exists a number $c>0$ such that $\Vert \sum_{i=1}^{q}t_{i}x_{i}\Vert \leq c\Vert \sum_{i=1}^{p}t_{i}x_{i}\Vert $ for any integers $p>q\geq 1$ and any sequence of scalars $(t_{i})$. In the proof of the next lemma, based on Mazur's technique, we follow in part the reasoning given in \cite[Cor. 2.6] {KaPr}. Set $D[(x_{\alpha })]=\limsup_{\alpha }\limsup_{\beta }\left\Vert x_{\alpha }-x_{\beta }\right\Vert .$ \begin{lemma} \label{KaPr} Let $(x_{\alpha })_{\alpha \in \mathcal{A}}$ be a bounded net in $X$ which converges to $0$ weakly but not in norm. Then there exists an increasing sequence $(\alpha _{n})$ of elements of $\mathcal{A}$ such that $ \lim_{n}\Vert x_{\alpha _{n}}\Vert =\limsup_{\alpha }\Vert x_{\alpha }\Vert $ , $\diam_{a}(x_{\alpha _{n}})\leq D[(x_{\alpha })]$ and $(x_{\alpha _{n}})$ is a basic sequence. 
\end{lemma} \begin{proof} Since $(x_{\alpha })_{\alpha \in \mathcal{A}}$ does not converge strongly to $0$ and $D[(x_{\alpha _{s}})]\leq D[(x_{\alpha })]$ for any subnet $ (x_{\alpha _{s}})_{s\in \mathcal{B}}$ of $(x_{\alpha })_{\alpha \in \mathcal{ A}}$, we can assume, passing to a subnet, that $\inf_{\alpha }\Vert x_{\alpha }\Vert >0$ and the limit $c=\lim_{\alpha }\Vert x_{\alpha }\Vert $ exists. Write $d=D[(x_{\alpha })]$. Let $(\varepsilon _{n})$ be a sequence of reals from the interval $(0,1)$ such that $\Pi _{n=1}^{\infty }(1-\varepsilon _{n})>0$. We shall define the following sequences $(\alpha _{n})$ and $(\beta _{n})$ by induction. Let us put $\alpha _{1}<\beta _{1}\in \mathcal{A}$ such that $\left\vert \Vert x_{\alpha _{1}}\Vert -c\right\vert <1$ and $\sup_{\beta \geq \beta _{1}}\Vert x_{\alpha _{1}}-x_{\beta }\Vert <d+1$. By the definitions of $c$ and $d$, there exists $\alpha ^{\prime }>\beta _{1}$ such that $\left\vert \Vert x_{\alpha }\Vert -c\right\vert <\frac{1}{2}$ and $\inf_{\beta ^{\prime }}\sup_{\beta \geq \beta ^{\prime }}\Vert x_{\alpha }-x_{\beta }\Vert <d+ \frac{1}{2}$ for every $\alpha \geq \alpha ^{\prime }.$ It follows from Lemma \ref{Ma} that there exists $\alpha _{2}>\alpha ^{\prime }$ such that \begin{equation*} \Vert t_{1}x_{\alpha _{1}}+t_{2}x_{\alpha _{2}}\Vert \geq (1-\varepsilon _{2})\Vert t_{1}x_{\alpha _{1}}\Vert \end{equation*} for any scalars $t_{1},t_{2}.$ Furthermore, $\left\vert \Vert x_{\alpha _{2}}\Vert -c\right\vert <\frac{1}{2},$ and we can find $\beta _{2}>\alpha _{2}$ such that $\sup_{\beta \geq \beta _{2}}\Vert x_{\alpha _{2}}-x_{\beta }\Vert <d+\frac{1}{2}.$ Suppose now that we have chosen $\alpha _{1}<\beta _{1}<...<\alpha _{n}<\beta _{n}$ $(n>1)$ in such a way that $\left\vert \Vert x_{\alpha _{k}}\Vert -c\right\vert <\frac{1}{k}$, $\sup_{\beta \geq \beta _{k}}\Vert x_{\alpha _{k}}-x_{\beta }\Vert <d+\frac{1}{k}$ and \begin{equation*} (1-\varepsilon _{k})\Vert t_{1}x_{\alpha _{1}}+...+t_{k-1}x_{\alpha 
_{k-1}}\Vert \leq \Vert t_{1}x_{\alpha _{1}}+...+t_{k}x_{\alpha _{k}}\Vert \end{equation*} for any scalars $t_{1},...,t_{k}$, $k=2,...,n.$ From the definitions of $c$ and $d$, and by Lemma \ref{Ma}, we can find $\beta _{n+1}>\alpha _{n+1}>\beta _{n}$ such that $\left\vert \Vert x_{\alpha _{n+1}}\Vert -c\right\vert <\frac{1}{n+1}$, $\sup_{\beta \geq \beta _{n+1}}\Vert x_{\alpha _{n+1}}-x_{\beta }\Vert \leq d+\frac{1}{n+1}$ and (considering a subspace $E$ spanned by the elements $x_{\alpha _{1}},...,x_{\alpha _{n}}$ and putting $e=t_{1}x_{\alpha _{1}}+...+t_{n}x_{\alpha _{n}}$), \begin{equation*} (1-\varepsilon _{n+1})\Vert t_{1}x_{\alpha _{1}}+...+t_{n}x_{\alpha _{n}}\Vert \leq \Vert t_{1}x_{\alpha _{1}}+...+t_{n+1}x_{\alpha _{n+1}}\Vert \end{equation*} for any scalars $t_{1},...,t_{n+1}$. Notice that the sequence $(x_{\alpha _{n}})$ defined in this way satisfies $ \lim_{n\rightarrow \infty }\Vert x_{\alpha _{n}}\Vert =c$ and $\diam _{a}(x_{\alpha _{n}})\leq d$. Furthermore, \begin{equation*} \Vert t_{1}x_{\alpha _{1}}+...+t_{p}x_{\alpha _{p}}\Vert \geq \Pi _{n=q+1}^{p}(1-\varepsilon _{n})\Vert t_{1}x_{\alpha _{1}}+...+t_{q}x_{\alpha _{q}}\Vert \end{equation*} for any integers $p>q\geq 1$ and any sequence of scalars $(t_{i})$. Hence $ (x_{\alpha _{n}})$ is a basic sequence. \end{proof} We are now in a position to give a characterization of the coefficient WCS(X) in terms of nets. The abbreviation \textquotedblleft $\left\{ x_{\alpha }\right\} $ is r.w.c.\textquotedblright\ means that the set $ \left\{ x_{\alpha }:\alpha \in \mathcal{A}\right\} $ is relatively weakly compact. 
\begin{theorem} \label{Wi1} Let $X$ be a Banach space without the Schur property and write \begin{align*} w_{1}& =\sup \left\{ k:k\,r_{a}(x_{\alpha })\leq \diam_{a}(x_{\alpha })\ \text{ for\ each\ net}\ x_{\alpha }\overset{w}{\longrightarrow }0,\text{ } \left\{ x_{\alpha }\right\} \text{ is r.w.c.}\right\} , \\ w_{2}& =\sup \left\{ k:k\,\limsup_{\alpha }\Vert x_{\alpha }\Vert \leq \diam _{a}(x_{\alpha })\ \text{for\ each\ net}\ x_{\alpha }\overset{w}{ \longrightarrow }0,\text{ }\left\{ x_{\alpha }\right\} \text{ is r.w.c.} \right\} , \\ w_{3}& =\sup \left\{ k:k\,\limsup_{\alpha }\Vert x_{\alpha }\Vert \leq D[(x_{\alpha })]\ \text{for\ each\ net}\ x_{\alpha }\overset{w}{ \longrightarrow }0,\text{ }\left\{ x_{\alpha }\right\} \text{ is r.w.c.} \right\} . \end{align*} Then \begin{equation*} \WCS(X)=w_{1}=w_{2}=w_{3}. \end{equation*} \end{theorem} \begin{proof} Fix $k>w_{3}$ and choose a weakly null net $(x_{\alpha })$ such that the set $\left\{ x_{\alpha }:\alpha \in \mathcal{A}\right\} $ is relatively weakly compact and $k\,\limsup_{\alpha }\Vert x_{\alpha }\Vert >D[(x_{\alpha })].$ Then, by Lemma \ref{KaPr}, there exists an increasing sequence $(\alpha _{n}) $ such that \begin{equation*} k\,\lim_{n}\Vert x_{\alpha _{n}}\Vert >D[(x_{\alpha })]\geq \diam _{a}(x_{\alpha _{n}}) \end{equation*} and $(x_{\alpha _{n}})$ is a basic sequence. Since the set $\left\{ x_{\alpha }:\alpha \in \mathcal{A}\right\} $ is relatively weakly compact, we can assume (passing to a subsequence) that $(x_{\alpha _{n}})$ is weakly convergent. Since it is a basic sequence, its weak limit equals zero. It follows from (\ref{wcs1}) that $ \WCS(X) \leq k$ and letting $k$ go to $w_{3}$ we have \begin{equation*} \WCS(X) \leq w_{3}\leq w_{2}\leq w_{1}\leq \WCS(X) . \end{equation*} \end{proof} Notice that a similar characterisation holds for the normal structure coefficient. 
\begin{theorem} For a Banach space $X$, \begin{equation*} \Nor(X)=\sup \left\{ k:k\,r_{a}(x_{\alpha })\leq \diam_{a}(x_{\alpha })\ \text{for\ each bounded net}\ (x_{\alpha })\text{ in }X\right\} . \end{equation*} \end{theorem} \begin{proof} Let \begin{equation*} N_{1}=\sup \left\{ k:k\,r_{a}(x_{\alpha })\leq \diam_{a}(x_{\alpha })\ \text{ for\ each bounded net}\ (x_{\alpha })\text{ in }X\right\} . \end{equation*} Set $k>N_{1}$ and choose a bounded net $(x_{\alpha })$ such that $ k\,r_{a}(x_{\alpha })>\diam_{a}(x_{\alpha }).$ Fix $y\in \overline{\conv} (\{x_{\alpha }:\alpha \in \mathcal{A}\})$ and notice that $ k\,\limsup_{\alpha }\Vert x_{\alpha }-y\Vert >\diam_{a}(x_{\alpha }).$ In a straightforward way, we can choose a sequence $(\alpha _{n})$ such that \begin{equation*} k\,\lim_{n}\Vert x_{\alpha _{n}}-y\Vert =k\,\limsup_{\alpha }\Vert x_{\alpha }-y\Vert >\diam_{a}(x_{\alpha })\geq \diam_{a}(x_{\alpha _{n}}). \end{equation*} It follows from \cite[Th. 1]{By} that $\Nor(X)\leq k$ and letting $k$ go to $ N_{1}$ we have $\Nor(X)\leq N_{1}.$ By \cite[Th. 1]{Lim}, $\Nor(X)\geq N_{1}$ and the proof is complete. \end{proof} In the next section we shall need a similar characterisation for the Opial modulus of a Banach space $X,$ defined for each $c\geq 0$ by \begin{equation*} r_{X}(c)=\inf \left\{ \liminf_{n\rightarrow \infty }\left\Vert x_{n}+x\right\Vert -1\right\} , \end{equation*} where the infimum is taken over all $x\in X$ with $\left\Vert x\right\Vert \geq c$ and all weakly null sequences $(x_{n})$ in $X$ such that $ \liminf_{n\rightarrow \infty }\left\Vert x_{n}\right\Vert \geq 1$ (see \cite {LiTaXu}). We first prove the following counterpart of Lemma \ref{KaPr}. 
\begin{lemma} \label{KaPr2} Let $(x_{\alpha })_{\alpha \in \mathcal{A}}$ be a bounded net in $X$ which converges to $0$ weakly but not in norm and $x\in X.$ Then there exists an increasing sequence $(\alpha _{n})$ of elements of $\mathcal{ A}$ such that $\lim_{n}\Vert x_{\alpha _{n}}+x\Vert =\liminf_{\alpha }\Vert x_{\alpha }+x\Vert ,$ $\lim_{n}\Vert x_{\alpha _{n}}\Vert \geq \liminf_{\alpha }\Vert x_{\alpha }\Vert $ and $(x_{\alpha _{n}})$ is a basic sequence. \end{lemma} \begin{proof} Since $(x_{\alpha })_{\alpha \in \mathcal{A}}$ does not converge strongly to $0$ and \begin{equation*} \liminf_{s}\Vert x_{\alpha _{s}}\Vert \geq \liminf_{\alpha }\Vert x_{\alpha }\Vert \end{equation*} for any subnet $(x_{\alpha _{s}})_{s\in \mathcal{B}}$ of $(x_{\alpha })_{\alpha \in \mathcal{A}}$, it is sufficient (passing to a subnet) to consider only the case that $\inf_{\alpha }\Vert x_{\alpha }\Vert >0$ and the limits $c_{1}=\liminf_{\alpha }\Vert x_{\alpha }+x\Vert $, $ c_{2}=\liminf_{\alpha }\Vert x_{\alpha }\Vert $ exist. Let $(\varepsilon _{n})$ be a sequence of reals from the interval $(0,1)$ such that $\Pi _{n=1}^{\infty }(1-\varepsilon _{n})>0$. We shall define the sequence $ (\alpha _{n})$ by induction. Let us put $\alpha _{1}\in \mathcal{A}$ such that $\left\vert \Vert x_{\alpha _{1}}+x\Vert -c_{1}\right\vert <1$ and $\left\vert \Vert x_{\alpha _{1}}\Vert -c_{2}\right\vert <1$. 
By the definitions of $c_{1}$ and $c_{2}$, there exists $\alpha ^{\prime }>\alpha _{1}$ such that $\left\vert \Vert x_{\alpha }+x\Vert -c_{1}\right\vert <\frac{1}{2}$ and $\left\vert \Vert x_{\alpha }\Vert -c_{2}\right\vert <\frac{1}{2}$ for every $\alpha \geq \alpha ^{\prime }.$ It follows from Lemma \ref{Ma} that there exists $\alpha _{2}>\alpha ^{\prime }$ such that \begin{equation*} \Vert t_{1}x_{\alpha _{1}}+t_{2}x_{\alpha _{2}}\Vert \geq (1-\varepsilon _{2})\Vert t_{1}x_{\alpha _{1}}\Vert \end{equation*} for any scalars $t_{1},t_{2}.$ We can now proceed analogously to the proof of Lemma \ref{KaPr} to obtain a basic sequence $(x_{\alpha _{n}})$ with the desired properties. \end{proof} \begin{theorem} \label{Wi2}For a Banach space $X$ without the Schur property and for $c\geq 0,$ \begin{equation*} r_{X}(c)=\inf \left\{ \liminf_{\alpha }\left\Vert x_{\alpha }+x\right\Vert -1\right\} , \end{equation*} where the infimum is taken over all $x\in X$ with $\left\Vert x\right\Vert \geq c$ and all weakly null nets $(x_{\alpha })$ in $X$ such that $ \liminf_{\alpha }\Vert x_{\alpha }\Vert \geq 1$ and the set $\left\{ x_{\alpha }:\alpha \in \mathcal{A}\right\} $ is relatively weakly compact. \end{theorem} \begin{proof} Let $r_{1}(c)=\inf \left\{ \liminf_{\alpha }\left\Vert x_{\alpha }+x\right\Vert -1\right\} ,$ where the infimum is taken as above. Fix $c\geq 0$ and take $k>r_{1}(c).$ Then there exist $x\in X$ with $\left\Vert x\right\Vert \geq c$ and a weakly null net $(x_{\alpha })_{\alpha \in \mathcal{A}}$ such that $\liminf_{\alpha }\Vert x_{\alpha }\Vert \geq 1,$ $ \left\{ x_{\alpha }:\alpha \in \mathcal{A}\right\} $ is relatively weakly compact and \begin{equation*} \liminf_{\alpha }\left\Vert x_{\alpha }+x\right\Vert -1<k. 
\end{equation*} By Lemma \ref{KaPr2}, there exists an increasing sequence $(\alpha _{n})$ of elements of $\mathcal{A}$ such that $\lim_{n}\Vert x_{\alpha _{n}}\Vert \geq 1,\lim_{n}\Vert x_{\alpha _{n}}+x\Vert -1<k$ and $(x_{\alpha _{n}})$ is a basic sequence. Since $\left\{ x_{\alpha }:\alpha \in \mathcal{A}\right\} $ is relatively weakly compact, we can assume (passing to a subsequence) that $ (x_{\alpha _{n}})$ is weakly null. Hence $r_{X}(c)<k$ and since $k$ is an arbitrary number greater than $r_{1}(c)$, it follows that $r_{X}(c)\leq r_{1}(c).$ The reverse inequality is obvious. \end{proof} \section{Fixed-point sets as H\"{o}lder continuous retracts} The following lemma may be proved in a similar way to \cite[Th. 7.2]{DoJaLo}. \begin{lemma} \label{main}Let $C$ be a nonempty convex weakly compact subset of a Banach space $X$ and $\mathcal{T}=\{T_{t}:t\in G\}$ an asymptotically regular semigroup on $C$ such that $s(\mathcal{T})=\lim_{\alpha }\left\vert T_{t_{\alpha }}\right\vert $ for a pointwise weakly convergent subnet $ (T_{t_{\alpha }})_{\alpha \in \emph{A}}$ of $(T_{t})_{t\in G}.$ Let $ x_{0}\in C$, $x_{m+1}=w$-$\lim_{\alpha }T_{t_{\alpha }}x_{m},m=0,1,...,$ and \begin{equation*} B_{m}=\limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-x_{m+1}\right\Vert . \end{equation*} Assume that \begin{enumerate} \item[(a)] $s(\mathcal{T})<\sqrt{\WCS(X)}$ or, \item[(b)] $s(\mathcal{T})<1+r_{X}(1).$ \end{enumerate} Then, there exists $\gamma <1$ such that $B_{m}\leq \gamma B_{m-1}$ for any $m=1,2,...$. \end{lemma} \begin{proof} It follows from the asymptotic regularity of $\{T_{t}:t\in G\}$ that \begin{equation*} \limsup_{\alpha }\left\Vert T_{t_{\alpha }-l}\,x-y\right\Vert =\limsup_{\alpha }\left\Vert T_{t_{\alpha }}x-y\right\Vert \end{equation*} for any $l\in G$ and $x,y\in C$.
Thus \begin{align*} & D[(T_{t_{\alpha }}x_{m})]=\limsup_{\beta }\limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-T_{t_{\beta }}x_{m}\right\Vert \\ & \ \leq \limsup_{\beta }\left\vert T_{t_{\beta }}\right\vert \limsup_{\alpha }\left\Vert T_{t_{\alpha }-t_{\beta }}x_{m}-x_{m}\right\Vert =s(\mathcal{T})\limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-x_{m}\right\Vert . \end{align*} Hence, from Theorem \ref{Wi1} and from the weak lower semicontinuity of the norm, \begin{align*} B_{m}& \leq \frac{D[(T_{t_{\alpha }}x_{m})]}{\WCS(X)}\leq \frac{s(\mathcal{T} )}{\WCS(X)}\limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-x_{m}\right\Vert \\ & \leq \frac{s(\mathcal{T})}{\WCS(X)}\limsup_{\alpha }\liminf_{\beta }\left\Vert T_{t_{\alpha }}x_{m}-T_{t_{\beta }}x_{m-1}\right\Vert \\ & \leq \frac{s(\mathcal{T})}{\WCS(X)}\limsup_{\alpha }\left\vert T_{t_{\alpha }}\right\vert \limsup_{\beta }\left\Vert x_{m}-T_{t_{\beta }-t_{\alpha }}x_{m-1}\right\Vert =\frac{(s(\mathcal{T}))^{2}}{\WCS(X)} B_{m-1}. \end{align*} This gives (a). For (b), we can use Theorem \ref{Wi2} and proceed analogously to the proof of \cite[Th. 7.2]{DoJaLo} (see also \cite[Th. 5]{GoN}). \end{proof} We are now in a position to prove a qualitative semigroup version of \cite[Th. 7.2 (a), (b)]{DoJaLo} which is in turn based on the results given in \cite{DoJa, DoXu} (see also \cite{Ku}). It also extends, in a few directions, \cite[Th. 5]{GoN}.
\begin{theorem} \label{Thwcs}Let $C$ be a nonempty convex weakly compact subset of a Banach space $X$ and $\mathcal{T}=\{T_{t}:t\in G\}$ an asymptotically regular semigroup on $C.$ Assume that \begin{enumerate} \item[(a)] $s(\mathcal{T})<\sqrt{\WCS(X)}$ or, \item[(b)] $s(\mathcal{T})<1+r_{X}(1).$ \end{enumerate} Then $\mathcal{T}$ has a fixed point in $C$ and $\Fix\mathcal{T} =\{x\in C:T_{t}x=x,\,t\in G\}$ is a H\"{o}lder continuous retract of $C.$ \end{theorem} \begin{proof} Choose a sequence $(t_{n})$ of elements in $G$ such that $\lim_{n\rightarrow \infty }t_{n}=\infty $ and $s(\mathcal{T})=\lim_{n\rightarrow \infty }\left\vert T_{t_{n}}\right\vert .$ Let $(T_{t_{n_{\alpha }}})_{\alpha \in \emph{A}}$ (denoted briefly by $(T_{t_{\alpha }})_{\alpha \in \emph{A}}$) be a pointwise weakly convergent subnet of $(T_{t_{n}}).$ Define, for every $ x\in C$, \begin{equation*} Lx=w\text{-}\lim_{\alpha }T_{t_{\alpha }}x. \end{equation*} Fix $x_{0}\in C$ and put $x_{m+1}=Lx_{m},m=0,1,....$ Let $ B_{m}=\limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-x_{m+1}\right\Vert .$ By Lemma \ref{main}, there exists $\gamma <1$ such that $B_{m}\leq \gamma B_{m-1}$ for any $m\geq 1.$ Since the norm is weakly lower semicontinuous and the semigroup is asymptotically regular, \begin{align*} & \Vert L^{m+1}x_{0}-L^{m}x_{0}\Vert =\left\Vert x_{m+1}-x_{m}\right\Vert \leq \liminf_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-x_{m}\right\Vert \\ & \ \leq \liminf_{\alpha }\liminf_{\beta }\left\Vert T_{t_{\alpha }}x_{m}-T_{t_{\beta }}x_{m-1}\right\Vert \leq \limsup_{\alpha }\left\vert T_{t_{\alpha }}\right\vert \limsup_{\beta }\left\Vert x_{m}-T_{t_{\beta }-t_{\alpha }}x_{m-1}\right\Vert \\ & \ =s(\mathcal{T})B_{m-1}\leq s(\mathcal{T})\gamma ^{m-1}\diam C \end{align*} for every $x_{0}\in C$ and $m\geq 1.$ Furthermore, by Lemma \ref{nonexp}, the mapping $L:C\rightarrow C$ is $s(\mathcal{T})$-Lipschitz.
It follows from Lemma \ref{holder} that $Rx=\lim_{n\rightarrow \infty }L^{n}x$ is a H\"{o}lder continuous mapping on $C$. We show that $R$ is a retraction onto $ \Fix\mathcal{T}.$ It is clear that if $x\in \Fix\mathcal{T},$ then $Rx=x.$ Furthermore, for every $x\in C,m\geq 1$ and $\alpha \in \emph{A},$ \begin{equation*} \Vert T_{t_{\alpha }}Rx-Rx\Vert \leq \left\Vert T_{t_{\alpha }}Rx-T_{t_{\alpha }}L^{m}x\right\Vert +\left\Vert T_{t_{\alpha }}L^{m}x-L^{m+1}x\right\Vert +\left\Vert L^{m+1}x-Rx\right\Vert \end{equation*} and hence \begin{equation*} \lim_{\alpha }\Vert T_{t_{\alpha }}Rx-Rx\Vert \leq s(\mathcal{T})\left\Vert Rx-L^{m}x\right\Vert +B_{m}+\left\Vert L^{m+1}x-Rx\right\Vert . \end{equation*} Letting $m$ go to infinity, $\limsup_{\alpha }\Vert T_{t_{\alpha }}Rx-Rx\Vert =0.$ Since $s(\mathcal{T})=\lim_{\beta }\left\vert T_{t_{\beta }}\right\vert <\infty ,$ there exists $\beta _{0}\in \emph{A}$ such that $ \left\vert T_{t_{\beta }}\right\vert <\infty $ for every $\beta \geq \beta _{0}.$ Then, the asymptotic regularity of $\mathcal{T}$ implies \begin{align*} \Vert T_{t_{\beta }}Rx-Rx\Vert & \leq \left\vert T_{t_{\beta }}\right\vert \limsup_{\alpha }\Vert Rx-T_{t_{\alpha }}Rx\Vert +\lim_{\alpha }\Vert T_{t_{\beta }+t_{\alpha }}Rx-T_{t_{\alpha }}Rx\Vert \\ & +\limsup_{\alpha }\Vert T_{t_{\alpha }}Rx-Rx\Vert =0. \end{align*} Hence $T_{t_{\beta }}Rx=Rx$ for every $\beta \geq \beta _{0}$ and, from the asymptotic regularity again, \begin{equation*} \Vert T_{t}Rx-Rx\Vert =\lim_{\beta }\left\Vert T_{t+t_{\beta }}Rx-T_{t_{\beta }}Rx\right\Vert =0 \end{equation*} for each $t\in G.$ Thus $Rx\in \Fix\mathcal{T}$ for every $x\in C$ and the proof is complete. \end{proof} It is well known that the Opial modulus of a Hilbert space $H$ is given by \begin{equation*} r_{H}(c)=\sqrt{1+c^{2}}-1, \end{equation*} and the Opial modulus of $\ell _{p},p>1,$ by \begin{equation*} r_{\ell _{p}}(c)=(1+c^{p})^{1/p}-1 \end{equation*} for all $c\geq 0$ (see \cite{LiTaXu}).
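Taking $c=1$ in these formulas gives
\begin{equation*}
1+r_{H}(1)=\sqrt{2},\qquad 1+r_{\ell _{p}}(1)=2^{1/p}.
\end{equation*}
Since $\WCS(H)=\sqrt{2}$ and $\WCS(\ell _{p})=2^{1/p}$, condition (a) of Theorem \ref{Thwcs} would give only the bounds $2^{1/4}$ and $2^{1/(2p)}$ in these spaces, so it is condition (b) that yields the constants $\sqrt{2}$ and $2^{1/p}$ appearing below.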
The following corollaries are sharpened versions of \cite[Th. 2.2]{GoT} and \cite[Cor. 8]{GoN}. \begin{corollary} Let $C$ be a nonempty bounded closed convex subset of a Hilbert space $H.$ If $\mathcal{T}=\{T_{t}:t\in G\}$ is an asymptotically regular semigroup on $ C$ such that \begin{equation*} \liminf_{t}|T_{t}|<\sqrt{2}, \end{equation*} then $\Fix\mathcal{T}$ is a H\"{o}lder continuous retract of $C.$ \end{corollary} \begin{corollary} Let $C$ be a nonempty bounded closed convex subset of $\ell _{p},1<p<\infty . $ If $\mathcal{T}=\{T_{t}:t\in G\}$ is an asymptotically regular semigroup on $C$ such that \begin{equation*} \liminf_{t}|T_{t}|<2^{1/p}, \end{equation*} then $\Fix\mathcal{T}$ is a H\"{o}lder continuous retract of $C.$ \end{corollary} Let $1\leq p,q<\infty .$ Recall that the Bynum space $\ell _{p,q}$ is the space $\ell _{p}$ endowed with the equivalent norm $\Vert x\Vert _{p,q}=(\Vert x^{+}\Vert _{p}^{q}+\Vert x^{-}\Vert _{p}^{q})^{1/q},$ where $ x^{+},x^{-}$ denote, respectively, the positive and the negative part of $x.$ If $p>1,$ then \begin{equation*} r_{\ell _{p,q}}(c)=\min \{(1+c^{p})^{1/p}-1,(1+c^{q})^{1/q}-1\} \end{equation*} for all $c\geq 0$ (see, e.g., \cite{AyDoLo}). The following corollary extends \cite[Cor. 10]{GoN}. \begin{corollary} Let $C$ be a nonempty convex weakly compact subset of $\ell _{p,q},1<p<\infty ,1\leq q<\infty .$ If $\mathcal{T}=\{T_{t}:t\in G\}$ is an asymptotically regular semigroup on $C$ such that \begin{equation*} \liminf_{t}|T_{t}|<\min \{2^{1/p},2^{1/q}\}, \end{equation*} then $\Fix\mathcal{T}$ is a H\"{o}lder continuous retract of $C.$ \end{corollary} Let us now examine the case of $p$-uniformly convex spaces.
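The model example is a Hilbert space $H$ with $p=2$: an elementary computation with the inner product gives, for all $x,y\in H$ and $0\leq \lambda \leq 1$,
\begin{equation*}
\left\Vert \lambda x+(1-\lambda )y\right\Vert ^{2}=\lambda \left\Vert x\right\Vert ^{2}+(1-\lambda )\left\Vert y\right\Vert ^{2}-\lambda (1-\lambda )\left\Vert x-y\right\Vert ^{2},
\end{equation*}
so inequality (\ref{ineq_Xu}) below holds with equality for $c_{2}=1$, since $W_{2}(\lambda )=\lambda (1-\lambda )$. With $c_{2}=1$ and $\WCS(H)=\sqrt{2}$, the bound in the theorem below reduces to $\max \{2^{1/2},(\frac{1}{2}(1+3))^{1/2}\}=\sqrt{2}$, consistent with the Hilbert space corollary above.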
Recall that a Banach space $X$ is $p$-uniformly convex if $\inf_{\varepsilon >0}\delta (\varepsilon )\varepsilon ^{-p}>0,$ where $\delta $ denotes the modulus of uniform convexity of $X.$ If $X$ is $p$-uniformly convex, then (see \cite{Xu91}) \begin{equation} \left\Vert \lambda x+(1-\lambda )y\right\Vert ^{p}\leq \lambda \left\Vert x\right\Vert ^{p}+(1-\lambda )\left\Vert y\right\Vert ^{p}-c_{p}W_{p}(\lambda )\left\Vert x-y\right\Vert ^{p} \label{ineq_Xu} \end{equation} for some $c_{p}>0$ and every $x,y\in X,0\leq \lambda \leq 1,$ where $ W_{p}(\lambda )=\lambda (1-\lambda )^{p}+\lambda ^{p}(1-\lambda ).$ A Banach space $X$ satisfies the Opial property if \begin{equation*} \liminf_{n\rightarrow \infty }\left\Vert x_{n}-x\right\Vert <\liminf_{n\rightarrow \infty }\left\Vert x_{n}-y\right\Vert \end{equation*} for every sequence $x_{n}\overset{w}{\longrightarrow }x$ and $y\neq x.$ The following theorem is an extension of \cite[Th. 7]{GoN}, and a partial extension of \cite[Th. 9]{GoTai}. \begin{theorem} \label{Pconvex}Let $C$ be a nonempty bounded closed convex subset of a $p$-uniformly convex Banach space $X$ with the Opial property and $\mathcal{T} =\{T_{t}:t\in G\}$ an asymptotically regular semigroup on $C$ such that \begin{equation*} \liminf_{t}|T_{t}|<\max \left\{ (1+c_{p})^{1/p},\left( \frac{1}{2}\left( 1+(1+4c_{p}\WCS(X)^{p})^{1/2}\right) \right) ^{1/p}\right\} . \end{equation*} Then $\mathcal{T}$ has a fixed point in $C$ and $\Fix\mathcal{T}$ is a H\"{o}lder continuous retract of $C.$ \end{theorem} \begin{proof} Choose a sequence $(t_{n})$ of elements in $G,$ $\lim_{n\rightarrow \infty }t_{n}=\infty ,$ such that $s(\mathcal{T})=\lim_{n\rightarrow \infty }\left\vert T_{t_{n}}\right\vert $ and let $(t_{\alpha })_{\alpha \in \emph{A}}$ be a subnet of $(t_{n})$ such that $(T_{t_{\alpha }})_{\alpha \in \emph{A}}$ is pointwise weakly convergent. Define, for every $x\in C$, \begin{equation*} Lx=w\text{-}\lim_{\alpha }T_{t_{\alpha }}x.
\end{equation*} Fix $x_{0}\in C$ and put $x_{m+1}=Lx_{m},m\geq 0.$ Let $B_{m}=\limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-x_{m+1}\right\Vert .$ Since $X$ satisfies the Opial property, it follows from \cite[Prop. 2.9]{KaPr} that \begin{equation*} \limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-x_{m+1}\right\Vert <\limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-y\right\Vert \end{equation*} for every $y\neq x_{m+1},$ i.e., $x_{m+1}$ is the unique point in the asymptotic center $A(C,(T_{t_{\alpha }}x_{m})),m\geq 0.$ Applying (\ref{ineq_Xu}) yields \begin{align*} & c_{p}W_{p}(\lambda )\left\Vert x_{m}-T_{t_{\alpha }}x_{m}\right\Vert ^{p}+\left\Vert \lambda x_{m}+(1-\lambda )T_{t_{\alpha }}x_{m}-T_{t_{\beta }}x_{m-1}\right\Vert ^{p} \\ & \leq \lambda \left\Vert x_{m}-T_{t_{\beta }}x_{m-1}\right\Vert ^{p}+(1-\lambda )\left\Vert T_{t_{\alpha }}x_{m}-T_{t_{\beta }}x_{m-1}\right\Vert ^{p} \end{align*} for every $\alpha ,\beta \in \emph{A},0<\lambda <1,m>0.$ Following \cite[Th. 9]{GoTai} (see also \cite{Xu90}) and using the asymptotic regularity of $\mathcal{T},$ we obtain \begin{equation} \limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-x_{m}\right\Vert ^{p}\leq \frac{s(\mathcal{T})^{p}-1}{c_{p}}(B_{m-1})^{p} \label{in1} \end{equation} for any $m>0.$ By Theorem \ref{Wi1} and the weak lower semicontinuity of the norm, we have \begin{equation} B_{m}\leq \frac{D[(T_{t_{\alpha }}x_{m})]}{\WCS(X)}\leq \frac{s(\mathcal{T})}{\WCS(X)}\limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-x_{m}\right\Vert . \label{in2} \end{equation} Furthermore, by the Opial property, \begin{equation} B_{m}\leq \limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-x_{m}\right\Vert .
\label{in3} \end{equation} Combining (\ref{in1}) with (\ref{in2}) and (\ref{in3}), we see that \begin{equation*} (B_{m})^{p}=\limsup_{\alpha }\left\Vert T_{t_{\alpha }}x_{m}-x_{m+1}\right\Vert ^{p}\leq \gamma ^{p}(B_{m-1})^{p}, \end{equation*} where \begin{equation*} \gamma ^{p}=\min \left\{ \frac{s(\mathcal{T})^{p}-1}{c_{p}},\frac{s(\mathcal{T})^{p}-1}{c_{p}}\left( \frac{s(\mathcal{T})}{\WCS(X)}\right) ^{p}\right\} <1 \end{equation*} by assumption (note that the two terms correspond to the two alternatives in the hypothesis of the theorem, so at least one of them is smaller than $1$). Hence $B_{m}\leq \gamma B_{m-1}$ for every $m\geq 1$ and, proceeding in the same way as in the proof of Theorem \ref{Thwcs}, we conclude that $\Fix\mathcal{T}$ is a nonempty H\"{o}lder continuous retract of $C.$ \end{proof} \end{document}
\begin{document} \title[Left or right centralizers on $ \star $-algebras]{Left or right centralizers on $ \star $-algebras through orthogonal elements } \author{ Hamid Farhadi} \thanks{{\scriptsize \hskip -0.4 true cm \emph{MSC(2020)}: 15A86; 47B49; 47L10; 16W10. \newline \emph{Keywords}: Left centralizer, right centralizer, $ \star $-algebra, orthogonal element, zero product determined, standard operator algebra.\\}} \address{Department of Mathematics, Faculty of Science, University of Kurdistan, P.O. Box 416, Sanandaj, Kurdistan, Iran} \email{[email protected]} \begin{abstract} In this paper we consider the problem of characterizing linear maps on special $ \star $-algebras behaving like left or right centralizers at orthogonal elements and obtain some results in this regard. \end{abstract} \maketitle \section{Introduction} Throughout this paper all algebras and vector spaces will be over the complex field $ \mathbb{C} $. Let $ \mathcal{A} $ be an algebra. Recall that a linear (additive) map $ \varphi : \mathcal{A} \to \mathcal{A} $ is said to be a \textit{right $($left$)$ centralizer} if $ \varphi (ab) = a \varphi(b) (\varphi(ab) = \varphi(a)b) $ for each $a, b \in \mathcal{A}$. The map $ \varphi $ is called a \textit{centralizer} if it is both a left centralizer and a right centralizer. In the case that $ \mathcal{A} $ has a unity $1$, $ \varphi : \mathcal{A} \to \mathcal{A} $ is a right (left) centralizer if and only if $ \varphi $ is of the form $ \varphi (a) = a \varphi(1) ( \varphi(a) = \varphi(1)a)$ for all $a \in \mathcal{A}$. Also $ \varphi $ is a centralizer if and only if $ \varphi (a) = a \varphi(1) = \varphi(1)a$ for each $a \in \mathcal{A}$. The notion of centralizer appears naturally in $C^{*}$-algebras. In ring theory it is more common to work with module homomorphisms. We refer the reader to \cite{gh1, gh2, vuk} and references therein for results concerning centralizers on rings and algebras. 
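To make the definition concrete, we record a standard example (added here for illustration): every fixed element of an algebra induces a left centralizer by left multiplication. Indeed, fix $c \in \mathcal{A}$ and define $\varphi_{c} : \mathcal{A} \to \mathcal{A}$ by $\varphi_{c}(a) = ca$. Then
\[
\varphi_{c}(ab) = c(ab) = (ca)b = \varphi_{c}(a)b ,
\]
so $\varphi_{c}$ is a left centralizer; analogously, $a \mapsto ac$ is a right centralizer, and if $c$ lies in the center of $\mathcal{A}$, then $\varphi_{c}$ is a centralizer.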
In recent years, several authors studied the linear (additive) maps that behave like homomorphisms, derivations or right (left) centralizers when acting on special products (for instance, see \cite{barar, bre, fad0, fad1, fad2} and the references therein). An algebra $ \mathcal{A} $ is called \textit{zero product determined} if for every linear space $\mathcal{X}$ and every bilinear map $\phi:\mathcal{A}\times \mathcal{A}\rightarrow \mathcal{X}$ the following holds: If $\phi(a,b)=0$ whenever $ab=0$, then there exists a linear map $T : \mathcal{A}^{2}\rightarrow \mathcal{X}$ such that $\phi(a,b)=T(ab)$ for each $a,b\in \mathcal{A}$. If $\mathcal{A}$ has unity $1$, then $\mathcal{A}$ is zero product determined if and only if for every linear space $\mathcal{X}$ and every bilinear map $\phi:\mathcal{A}\times \mathcal{A}\rightarrow \mathcal{X}$, the following holds: If $\phi(a,b)=0$ whenever $ab=0$, then $\phi(a,b)=\phi(ab,1)$ for each $a,b\in \mathcal{A}$. Also in this case $\phi(a,1)=\phi(1,a)$ for all $a\in \mathcal{A}$. The question of characterizing linear maps through zero products, Jordan products, etc. on algebras can sometimes be effectively solved by considering bilinear maps that preserve certain zero product properties (for instance, see \cite{al, al1, fos, gh3,gh4,gh5,gh6, gh7, gh8}). Motivated by these works, Bre\v{s}ar et al. \cite{bre2} introduced the concept of zero product (Jordan product) determined algebras, which can be used to study linear maps preserving zero products (Jordan products) and derivable (Jordan derivable) maps at zero point. \par Let $ \varphi : \mathcal{A} \to \mathcal{A} $ be a linear map on an algebra $ \mathcal{A} $.
A tempting challenge for researchers is to determine conditions on a certain set $ \mathcal{S} \subseteq \mathcal{A} \times \mathcal{A} $ which guarantee that the property \begin{equation} \label{1} \varphi (ab) = a \varphi(b)\quad \big (\varphi(ab) = \varphi(a)b\big), \text{ for every } (a, b) \in \mathcal{S} , \end{equation} implies that $ \varphi $ is a (right, left) centralizer. Some particular subsets $ \mathcal{S} $ give rise to precise notions studied in the literature. For example, given a fixed element $z \in \mathcal{A}$, a linear map $ \varphi : \mathcal{A} \to \mathcal{A} $ satisfying \eqref{1} for the set $\mathcal{S}_{z} = \{ (a, b) \in \mathcal{A} \times \mathcal{A} : ab = z \} $ is called a \textit{centralizer at $z$}. Motivated by \cite{barar, fad1, fad2, gh7, gh8}, in this paper we consider the problem of characterizing linear maps behaving like left or right centralizers at orthogonal elements for several types of orthogonality conditions on special $ \star $-algebras with unity. In particular, we consider the following conditions on a linear map $ \varphi : \mathcal{A} \to \mathcal{A} $, where $ \mathcal{A} $ is either a zero product determined $ \star $-algebra with unity or a unital standard operator algebra on a Hilbert space $H$ that is closed under the adjoint operation: \[ a, b \in \mathcal{A} ,\ a b^\star =0 \Longrightarrow a \varphi(b)^\star = 0 ; \] \[ a, b \in \mathcal{A} ,\ a^\star b =0 \Longrightarrow \varphi(a)^\star b = 0. \] Let $H$ be a Hilbert space. We denote by $B(H)$ the algebra of all bounded linear operators on $H$, and by $F(H)$ the algebra of all finite rank operators in $B(H)$. Recall that a \textit{standard operator algebra} is any subalgebra of $B(H)$ which contains $F(H)$.
We shall denote the identity operator of $B(H)$ by $I$. \section{Main results} We first characterize the centralizers at orthogonal elements on unital zero product determined $ \star $-algebras. \begin{thm} \label{tc} Let $ \mathcal{A} $ be a zero product determined $ \star $-algebra with unity $1$ and let $ \varphi : \mathcal{A} \to \mathcal{A} $ be a linear map. Then the following conditions are equivalent: \begin{enumerate} \item[(i)] $\varphi$ is a left centralizer; \item[(ii)] $ a, b \in \mathcal{A} , a b^\star =0 \Longrightarrow a \varphi(b)^\star = 0 $. \end{enumerate} \end{thm} \begin{proof} $ (i) \Rightarrow (ii) $ Since $\mathcal{A}$ is unital, it follows that $\varphi(a) = \varphi(1)a$ for each $a\in \mathcal{A}$. If $ a b^\star =0$, then \[a \varphi(b)^\star=a(\varphi(1)b)^{\star}=a b^\star\varphi(1)^{\star} =0. \] So (ii) holds. \\ $ (ii) \Rightarrow (i) $ Define $\phi:\mathcal{A}\times \mathcal{A}\rightarrow \mathcal{A}$ by $\phi(a,b)=a\varphi(b^{\star})^{\star}$. It is easily checked that $\phi$ is a bilinear map. If $a,b\in \mathcal{A}$ are such that $ab=0$, then $a(b^{\star})^{\star}=0$. It follows from the hypothesis that $a\varphi(b^{\star})^{\star}=0$. Hence $\phi(a,b)=0$. Since $\mathcal{A}$ is a zero product determined algebra, it follows that $\phi(a,b)=\phi(ab,1)$ for each $a,b\in \mathcal{A}$. Now we have \[ a\varphi(b^{\star})^{\star}=ab\varphi(1)^{\star}\] for each $a,b\in \mathcal{A}$. By letting $a=1$ we get \[\varphi(b^{\star})^{\star}=b\varphi(1)^{\star} \] for each $b\in \mathcal{A}$. Thus $\varphi(b^{\star})=\varphi(1)b^{\star}$ for all $b\in \mathcal{A}$ and hence $\varphi(a)=\varphi(1)a$ for all $a\in \mathcal{A}$; that is, $\varphi$ is a left centralizer. \end{proof} \begin{thm} \label{tc2} Let $ \mathcal{A} $ be a zero product determined $ \star $-algebra with unity $1$ and let $ \varphi : \mathcal{A} \to \mathcal{A} $ be a linear map.
Then the following conditions are equivalent: \begin{enumerate} \item[(i)] $\varphi$ is a right centralizer; \item[(ii)] $ a, b \in \mathcal{A} , a^\star b =0 \Longrightarrow \varphi(a)^\star b = 0 $. \end{enumerate} \end{thm} \begin{proof} $ (i) \Rightarrow (ii) $ Since $\mathcal{A}$ is unital, it follows that $\varphi(a) = a\varphi(1)$ for each $a\in \mathcal{A}$. If $ a^\star b=0$, then \[ \varphi(a)^\star b=(a\varphi(1))^{\star}b= \varphi(1)^{\star}a^\star b=0. \] So (ii) holds. \\ $ (ii) \Rightarrow (i) $ Define the bilinear map $\phi:\mathcal{A}\times \mathcal{A}\rightarrow \mathcal{A}$ by $\phi(a,b)=\varphi(a^{\star})^{\star}b$. If $a,b\in \mathcal{A}$ are such that $ab=0$, then $(a^{\star})^{\star}b=0$. By the hypothesis, $\varphi(a^{\star})^{\star}b=0$. So $\phi(a,b)=0$. Since $\mathcal{A}$ is a zero product determined algebra, it follows that $\phi(a,b)=\phi(ab,1)=\phi(1,ab)$ for each $a,b\in \mathcal{A}$. Now \[ \varphi(a^{\star})^{\star}b=\varphi(1)^{\star}ab\] for each $a,b\in \mathcal{A}$. By letting $b=1$ we arrive at \[\varphi(a^{\star})^{\star}=\varphi(1)^{\star}a \] for each $a\in \mathcal{A}$. Thus $\varphi(a^{\star})=a^{\star}\varphi(1)$ for all $a\in \mathcal{A}$ and hence $\varphi(a)=a\varphi(1)$ for all $a\in \mathcal{A}$; that is, $\varphi$ is a right centralizer. \end{proof} \begin{rem} Every algebra which is generated by its idempotents is zero product determined \cite{bre3}. So the following algebras are zero product determined: \begin{itemize} \item[(i)] Any algebra which is linearly spanned by its idempotents. By \cite[Lemma 3.2]{hou} and \cite[Theorem 1]{pe}, $B(H)$ is linearly spanned by its idempotents. By \cite[Theorem 4]{pe}, every element in a properly infinite $W^*$-algebra $\mathcal{A}$ is a sum of at most five idempotents. In \cite{mar} several classes of simple $C^*$-algebras are given which are linearly spanned by their projections.
\item[(ii)] Any simple unital algebra containing a non-trivial idempotent, since these algebras are generated by their idempotents \cite{bre}. \end{itemize} Therefore Theorems \ref{tc} and \ref{tc2} hold for $\star$-algebras that satisfy one of the above conditions. \end{rem} In the following, we characterize the centralizers at orthogonal elements on unital standard operator algebras on Hilbert spaces that are closed under the adjoint operation. \begin{thm}\label{s1} Let $\mathcal{A}$ be a unital standard operator algebra on a Hilbert space $H$ with $\dim H \geq 2$, such that $\mathcal{A}$ is closed under the adjoint operation. Suppose that $ \varphi : \mathcal{A} \to \mathcal{A} $ is a linear map. Then the following conditions are equivalent: \begin{enumerate} \item[(i)] $\varphi$ is a left centralizer; \item[(ii)] $ A,B \in \mathcal{A} , AB^\star =0 \Longrightarrow A \varphi(B)^\star = 0 $. \end{enumerate} \end{thm} \begin{proof} $ (i) \Rightarrow (ii) $ is similar to the proof of Theorem \ref{tc}.\\ $ (ii) \Rightarrow (i) $ Define $\psi :\mathcal{A} \rightarrow \mathcal{A}$ by $\psi(A)=\varphi(A^{\star})^{\star}$. Then $\psi$ is a linear map such that \[ A,B \in \mathcal{A} , AB=0 \Longrightarrow A \psi(B) = 0. \] Let $P\in \mathcal{A}$ be an idempotent operator of rank one and $A\in \mathcal{A}$. Then $P (I-P)A = 0$ and $(I-P)P A= 0$, and by assumption, we have \[P\psi(A)=P\psi(PA) \quad \text{and} \quad \psi(PA)=P\psi(PA). \] So $\psi(PA)=P\psi(A)$ for all $A\in \mathcal{A}$. By \cite[Lemma 1.1]{bur}, every element $X \in F(H)$ is a linear combination of rank-one idempotents, and so \begin{equation}\label{e1} \psi(XA)=X\psi(A) \end{equation} for all $X \in F(H)$ and $A\in \mathcal{A}$. By letting $A=I$ in \eqref{e1} we get $\psi(X)=X\psi(I)$ for all $X \in F(H)$. Since $F(H)$ is an ideal in $\mathcal{A}$, it follows that \begin{equation}\label{e2} \psi(XA)=XA\psi(I) \end{equation} for all $X \in F(H)$ and $A\in \mathcal{A}$.
By comparing \eqref{e1} and \eqref{e2}, we see that $X\psi(A)=XA\psi(I)$ for all $X \in F(H)$ and $A\in \mathcal{A}$. Since $F(H)$ is an essential ideal in $B(H)$, it follows that $\psi(A)=A\psi(I)$ for all $A\in \mathcal{A}$. From the definition of $\psi$ we have $\varphi(A^{\star})^{\star}=A\varphi(I)^{\star}$ for all $A\in \mathcal{A}$. Thus $\varphi(A^{\star})=\varphi(I)A^{\star}$ for all $A\in \mathcal{A}$ and hence $\varphi(A)=\varphi(I)A$ for all $A\in \mathcal{A}$. Thus $\varphi$ is a left centralizer. \end{proof} \begin{thm}\label{s2} Let $\mathcal{A}$ be a unital standard operator algebra on a Hilbert space $H$ with $\dim H \geq 2$, such that $\mathcal{A}$ is closed under the adjoint operation. Suppose that $ \varphi : \mathcal{A} \to \mathcal{A} $ is a linear map. Then the following conditions are equivalent: \begin{enumerate} \item[(i)] $\varphi$ is a right centralizer; \item[(ii)] $ A,B \in \mathcal{A} , A^\star B =0 \Longrightarrow \varphi(A)^\star B = 0 $. \end{enumerate} \end{thm} \begin{proof} $ (i) \Rightarrow (ii) $ is similar to the proof of Theorem \ref{tc2}.\\ $ (ii) \Rightarrow (i) $ Define $\psi :\mathcal{A} \rightarrow \mathcal{A}$ by $\psi(A)=\varphi(A^{\star})^{\star}$. Then $\psi$ is a linear map such that \[ A,B \in \mathcal{A} , AB=0 \Longrightarrow \psi(A)B = 0. \] Let $P\in \mathcal{A}$ be an idempotent operator of rank one and $A\in \mathcal{A}$. Then $AP (I-P) = 0$ and $A(I-P)P = 0$, and by assumption, we arrive at $\psi(AP)=\psi(A)P$ for all $A\in \mathcal{A}$. So \begin{equation}\label{e3} \psi(AX)=\psi(A)X \end{equation} for all $X \in F(H)$ and $A\in \mathcal{A}$. By letting $A=I$ in \eqref{e3} we have $\psi(X)=\psi(I)X$ for all $X \in F(H)$. Since $F(H)$ is an ideal in $\mathcal{A}$, it follows that \begin{equation}\label{e4} \psi(AX)=\psi(I)AX \end{equation} for all $X \in F(H)$ and $A\in \mathcal{A}$. By comparing \eqref{e3} and \eqref{e4}, we get $\psi(A)X=\psi(I)AX$ for all $X \in F(H)$ and $A\in \mathcal{A}$.
Since $F(H)$ is an essential ideal in $B(H)$, it follows that $\psi(A)=\psi(I)A$ for all $A\in \mathcal{A}$. From the definition of $\psi$ we have $\varphi(A^{\star})^{\star}=\varphi(I)^{\star}A$ for all $A\in \mathcal{A}$. Thus $\varphi(A^{\star})=A^{\star}\varphi(I)$ for all $A\in \mathcal{A}$ and hence $\varphi(A)=A\varphi(I)$ for all $A\in \mathcal{A}$, implying that $\varphi$ is a right centralizer. \end{proof} Finally, we note that the characterization of left or right centralizers through orthogonal elements can be used to study local left or right centralizers. \end{document}
\begin{document} \title{No Fine theorem for macrorealism: Limitations of the Leggett-Garg inequality} \date{\today} \author{Lucas Clemente} \email{[email protected]} \author{Johannes Kofler} \email{[email protected]} \affiliation{Max Planck Institute of Quantum Optics, Hans-Kopfermann-Str.\ 1, 85748 Garching, Germany} \begin{abstract} Tests of local realism and macrorealism have historically been discussed in very similar terms: Leggett-Garg inequalities follow Bell inequalities as necessary conditions for classical behavior. Here, we compare the probability polytopes spanned by all measurable probability distributions for both scenarios and show that their structure differs strongly between spatially and temporally separated measurements. We arrive at the conclusion that, in contrast to tests of local realism where Bell inequalities form a necessary and sufficient set of conditions, no set of inequalities can ever be necessary and sufficient for a macrorealistic description. Fine's famous proof that Bell inequalities are necessary and sufficient for the existence of a local realistic model therefore cannot be transferred to macrorealism. A recently proposed condition, no-signaling in time, fulfills this criterion, and we show why it is better suited for future experimental tests and theoretical studies of macrorealism. Our work thereby identifies a major difference between the mathematical structures of local realism and macrorealism. \end{abstract} \pacs{03.65.Ta, 03.65.Ud} \maketitle The violation of classical world views, such as local realism \cite{Bell:1964wu} and macrorealism \cite{Leggett:1985bl, Leggett:2002dk}, is one of the most interesting properties of quantum mechanics.
Experiments performed over the past decades have shown violations of local realism in various systems \cite{Freedman:1972ka, Aspect:1982ja, Weihs:1998cc}, while violations of macrorealism are on the horizon \cite{PalaciosLaloy:2010ih, Goggin:2011iw, Xu:2011cr, Dressel:2011hh, Fedrizzi:2011ji, Waldherr:2011km, Athalye:2011bi, Souza:2011hu, Zhou:2015fi, Knee:2012cg, Suzuki:2012dw, George:2013bd, Katiyar:2013dt, Emary:2014ck, Asadian:2014fw, Robens:2015baa, White:2015ui-arxiv, Knee:2016wd-arxiv}. The latter endeavors pave the way towards the experimental realization of Schr\"odinger's famous thought experiment \cite{Schrodinger:1935kq}. In the future, they might offer insight into important foundational questions, such as the quantum measurement problem \cite{Leggett:2005bf}, and allow experimental tests of (possibly gravitational) extensions of quantum mechanics \cite{RomeroIsart:2011es}. Historically, the discussion of tests of macrorealism (MR) follows the discussion of tests of local realism (LR) closely: Leggett-Garg inequalities (LGIs) \cite{Leggett:1985bl} are formulated similarly to Bell inequalities \cite{Bell:1964wu, Clauser:1969ff, Clauser:1974fv}, and some concepts, e.g.\ quantum contextuality \cite{Kochen:1967vo}, are connected to both fields \cite{Avis:2010jm, Kleinmann:2012wr, Araujo:2013fq, Kujala:2015gk, Dzhafarov:2015ic}. However, recently, a discrepancy between LR and MR has been identified: Whereas Fine's theorem states that Bell \emph{inequalities} are both necessary and sufficient for LR \cite{Fine:1982ic}, a combination of arrow of time (AoT) and no-signaling in time (NSIT) \cite{Kofler:2013hb} \emph{equalities} are necessary and sufficient for the existence of a macrorealistic description \cite{Clemente:2015hv}. 
A previous study \cite{Clemente:2015hv} also demonstrated that LGIs involving temporal correlation functions of pairs of measurements are not sufficient for macrorealism, but did not rule out a potential sufficiency of other sets of LGIs, e.g.\ of the CH type \cite{Clauser:1974fv, Mal:2015wn-arxiv}, leaving open the possibility of a Fine theorem for macrorealism. Moreover, cases have been identified where LGIs hide violations of macrorealism \cite{Avis:2010jm} that are detected by a simple NSIT condition \cite{Kofler:2013hb}. The latter fails for totally mixed initial states, where a more involved NSIT condition is required \cite{Clemente:2015hv}. These fundamental differences between tests of local realism and macrorealism seem connected to the peculiar definition of macrorealism \cite{Maroney:2014ws-arxiv, Bacciagaluppi:2014ue}. In this paper, we analyze the reasons for and the consequences of this difference. We show that the probability space spanned by quantum mechanics (QM) is of a higher dimension in an MR test than in an LR test, and we analyze the resulting structure of the probability polytope. We conclude that inequalities---excluding the pathological case of inequalities pairwise merging into equalities---are not suited to be sufficient conditions for MR, and form only weak necessary conditions. Fine's theorem \cite{Fine:1982ic} therefore cannot be transferred to macrorealism (unless one uses potentially negative quasi-probabilities \cite{Halliwell:2015ws-arxiv}). Our study thus identifies a striking difference between the mathematical structures of LR and MR. While current experimental tests of macrorealism overwhelmingly use Leggett-Garg inequalities, this difference explains why NSIT is better suited as a witness of non-classicality, i.e.\ why it is violated for a much larger range of parameters~\cite{Kofler:2013hb, Clemente:2015hv}.
Let us start by reviewing the structure of the LR polytope (\textsf{LR}), as described in refs.\ \cite{Pironio:2005jb, Pironio:2014bx, Brunner:2014kr}. Consider an LR test between $n \geq 2$ parties $i \in \{1 \dots n\}$. Each party can perform a measurement in one of $m \geq 2$ settings $s_i \in \{1 \dots m\}$. Each setting has the same number $\Delta \geq 2$ of possible outcomes $q_i \in \{1 \dots \Delta\}$, and, to allow for all possible types of correlations, it may measure a distinct property of the system. We can define probability distributions $p_{q_1 \dots q_n | s_1 \dots s_n}$ for obtaining outcomes $q_1 \dots q_n$, given the measurement settings $s_1 \dots s_n$. If a party $i$ chooses not to perform a measurement, the corresponding ``setting'' is labeled $s_i=0$, and there is only one ``outcome'' labeled $q_i=0$ (e.g.\ $p_{q_1,0|s_1,0}$ when only the first party performs a measurement). We leave out final zeros, e.g.\ $p_{q_1 \dots q_i, 0 \dots 0 | s_1 \dots s_i, 0 \dots 0} = p_{q_1 \dots q_i | s_1 \dots s_i}$. Note that this convention differs from the literature for LR tests, where the case of no measurement is often left out \cite{Pironio:2005jb, Brunner:2014kr}, but simplifies the comparison between LR and MR tests. Each experiment is then completely described by $(m \Delta + 1)^n$ probability distributions; it can be seen as a point in a probability space $\mathbb R^{(m \Delta + 1)^n}$. We now require normalization of the probabilities. There are $(m+1)^n$ linearly independent normalization conditions, as each probability only appears once: \begin{equation} \forall s_1 \dots s_n\!: \sum_{q_1 \dots q_n} p_{q_1 \dots q_n | s_1 \dots s_n} = 1. \end{equation} Because of the special case of no measurements ($s_i = 0$), here (and in the following equations) we have abbreviated the notation of the summation: The possible values of $q_i$, in fact, depend on $s_i$. 
The normalization conditions reduce the dimension of the probability space to \begin{equation}\label{eq:dimP} (m \Delta + 1)^n - (m+1)^n. \end{equation} Furthermore, the positivity conditions \begin{equation} \forall s_1 \dots s_n, q_1 \dots q_n\!: p_{q_1 \dots q_n | s_1 \dots s_n} \geq 0 \end{equation} restrict the reachable space to a subspace with the same dimension, but they are delimited by flat hyperplanes. The resulting subspace is called the \emph{probability polytope} $\mathsf P$. In an LR test with space-like separated parties, special relativity prohibits signaling from every party to any other, \begin{equation}\label{eq:ns-conditions} \begin{split} & \forall i, q_1 \dots q_{i-1}, q_{i+1} \dots q_n, s_1 \dots s_n, s_i \neq 0 \!:\\ & p_{q_1 \dots q_{i-1}, 0, q_{i+1} \dots q_n | s_1 \dots s_{i-1}, 0, s_{i+1} \dots s_n} = \sum_{q_i=1}^\Delta p_{q_1 \dots q_n| s_1 \dots s_n}. \end{split} \end{equation} These \emph{no-signaling} (NS) conditions restrict the probability polytope to a NS polytope (\textsf{NS}) of lower dimension. Taking their linear dependence, both amongst each other and with the normalization conditions, into account, we arrive at dimension \cite{Pironio:2005jb} \begin{equation} \dim \mathsf{NS} = [m(\Delta-1)+1]^n - 1. \end{equation} Since quantum mechanics obeys NS, and due to Tsirelson bounds \cite{Cirelson:1980fp}, the space of probability distributions from spatially separated experiments implementable in quantum mechanics, $\mathsf{QM_S}$, is located strictly within the NS polytope. Furthermore, the space of local realistic probability distributions, \textsf{LR}, is a strict subspace of $\mathsf{QM_S}$. It is delimited by Bell inequalities (e.g. the CH/CHSH inequalities for $n = m = \Delta = 2$) and positivity conditions, and therefore forms a polytope within $\mathsf{QM_S}$ \cite{Fine:1982ic, Pironio:2005jb}. 
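As a quick sanity check of these dimension counts (a worked instance added for illustration), consider the CHSH-type scenario $n = m = \Delta = 2$. Then \cref{eq:dimP} gives
\begin{equation*}
\dim \mathsf P = (2 \cdot 2 + 1)^2 - (2+1)^2 = 25 - 9 = 16,
\end{equation*}
while the no-signaling conditions reduce this to
\begin{equation*}
\dim \mathsf{NS} = [2(2-1)+1]^2 - 1 = 8,
\end{equation*}
the familiar eight-dimensional no-signaling polytope for two parties with binary settings and outcomes.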
In summary, we have $\mathsf P \supset \mathsf{NS} \supset \mathsf{QM_S} \supset \mathsf{LR}$, with $\dim \mathsf P > \dim \mathsf{NS} = \dim \mathsf{QM_S} = \dim \mathsf{LR}$. The structure of the \textsf{NS}, $\mathsf{QM_S}$ and \textsf{LR} spaces is sketched on the left of \cref{fig:polytopes}. \begin{figure} \caption{(Color online.) \emph{Left:} A sketch of subspaces in an LR test \cite{Brunner:2014kr}. The no-signaling polytope (\textsf{NS}) contains the space of probability distributions realizable from spatially separated experiments in quantum mechanics ($\mathsf{QM_S}$), which contains the local realism polytope (\textsf{LR}). \textsf{LR} is delimited by Bell inequalities and the positivity conditions. \textsf{NS}, $\mathsf{QM_S}$, and \textsf{LR} have the same dimension. A Bell inequality (BI) is also sketched, delimiting \textsf{LR}. Another tight Bell inequality (BI') is less suited as a witness of non-LR behavior, and illustrates the role of Leggett-Garg inequalities in macrorealism tests.\\ \emph{Right:} A sketch of polytopes in an MR test. The arrow of time polytope (\textsf{AoT}) is equal to the space of probability distributions realizable from temporally separated experiments in quantum mechanics ($\mathsf{QM_T}$), which contains the macrorealism polytope (\textsf{MR}). \textsf{MR} is a polytope of lower dimension, located fully within the $\mathsf{QM_T}$ subspace and solely delimited by positivity constraints. Since each probability can easily be minimized or maximized individually, \textsf{MR} reaches all facets of \textsf{AoT}. A Leggett-Garg inequality (LGI) is also sketched; it is a hyperplane of dimension $\dim \mathsf{QM_T}-1$, which, in general, is much larger than $\dim \mathsf{MR}$. Note that the LGI can only touch \textsf{MR} (i.e.\ be tight) at the boundary of the positivity constraints. 
} \label{fig:polytopes} \end{figure} \begin{figure*} \caption{ Arrow of time (AoT) and no-signaling in time (NSIT) conditions relating different outcome probability distributions for the case $n=3$ measurement times and $m=2$ possible settings. The notation $(xyz)$ refers to distributions with settings $s_1=x, s_2=y, s_3=z$. The arrows denote the process of marginalization: e.g.\ the AoT condition $p_{q_1|s_1=x} = \sum_{q_2} p_{q_1, q_2 | s_1=x, s_2=y}$ is denoted by $(x) \leftarrow (xy)$, and the NSIT condition $p_{q_2|s_2=y} = \sum_{q_1} p_{q_1, q_2 | s_1=x, s_2=y}$ is denoted by $(y) \leftarrow (xy)$. It can easily be seen that the AoT conditions are linearly independent, since they cannot form loops. Adding more measurement times (adding further rows), or adding more settings (broadening the trees) does not change their independence. In contrast, the NSIT conditions are not linearly independent, and thus form loops. Note that marginalizing only over a single measurement is sufficient, as simultaneous marginalizations follow from individual ones, and hence they are always linearly dependent. } \label{fig:independence} \end{figure*} In a test of MR, temporal correlations take the role of an LR test's spatial correlations. Instead of spatially separated measurements on $n$ systems by different observers, a single observer performs $n$ sequential (macroscopically distinct) measurements on one and the same system. Again, each measurement is either skipped (``0'') or performed in one of $m \geq 1$ \footnote{In contrast to LR tests, where $m \geq 2$ is required to observe quantum violations, $m=1$ allows for violations of MR, and is in fact the most considered case in the literature.} settings, with $\Delta$ possible outcomes each. With this one-to-one correspondence, the resulting probability polytope $\mathsf P$ in the space $\mathbb R^{(m \Delta+1)^n-(m+1)^n}$ is identical to the one in the Bell scenario. 
However, without further physical assumptions, no-signaling in temporally separated experiments is only a requirement in one direction: While past measurements can affect the future, causality demands that future measurements cannot affect the past. This assumption is captured by the \emph{arrow of time} (AoT) conditions: \begin{equation}\label{eq:aot-conditions} \begin{split} & \forall i \geq 2 \!: \forall q_1 \dots q_{i-1}, s_1 \dots s_{i-1} \text{ with } \Sigma_{j=1}^{i-1} s_j \neq 0, s_i \neq 0 \!:\\ & p_{q_1 \dots q_{i-1} | s_1 \dots s_{i-1}} = \sum_{q_i=1}^\Delta p_{q_1 \dots q_i| s_1 \dots s_i}. \end{split} \end{equation} Counting the number of equalities in \cref{eq:aot-conditions} shows that their number is \begin{equation}\label{eq:nAoT} \sum_{i=2}^{n} [(m \Delta + 1)^{i-1} - 1] m = \frac{(m \Delta + 1)^n - n m \Delta - 1}{\Delta}, \end{equation} where the first factor in the sum counts the setting and outcome combinations for times $1 \dots i-1$, excluding the choice of all $s_i=0$, and the second factor the number of settings at time $i$. All listed conditions are linearly independent due to their hierarchical construction, see \cref{fig:independence}. However, a number of the normalization conditions for the marginal distributions, already subtracted in \cref{eq:dimP}, are not linearly independent from AoT, and thus become obsolete. Their number is obtained by counting the different settings in \cref{eq:aot-conditions}: \begin{equation}\label{eq:nNormAoT} \sum_{i=2}^{n} [(m+1)^{i-1} - 1] m = (m+1)^n - nm - 1. \end{equation} The remaining normalization conditions are the ones for probability distributions with just one measurement and for the ``0-distribution''; there are $n m + 1$ such distributions. 
Taking \cref{eq:dimP}, subtracting \cref{eq:nAoT} and adding \cref{eq:nNormAoT}, we conclude that the AoT conditions restrict the probability polytope to an AoT polytope (\textsf{AoT}) of dimension \begin{equation}\label{eq:dimAoT} \dim \mathsf{AoT} = \frac{[(m \Delta + 1)^n - 1] (\Delta - 1)}{\Delta}. \end{equation} By simple extension of the proof in ref.\ \cite{Clemente:2015hv}, the set of all \emph{no-signaling in time} (NSIT) conditions, \begin{equation}\label{eq:nsit-conditions} \begin{split} & \forall i \!<\! n, q_1 \dots q_{i-1}, q_{i+1} \dots q_n, s_1 \dots s_n, \Sigma_{j>i} s_j \!\neq\! 0, s_i \!\neq\! 0\!:\\ & p_{q_1 \dots q_{i-1}, 0, q_{i+1} \dots q_n | s_1 \dots s_{i-1}, 0, s_{i+1} \dots s_n} = \sum_{q_i=1}^\Delta p_{q_1 \dots q_n| s_1 \dots s_n}, \end{split} \end{equation} is, together with AoT, necessary and sufficient for macrorealism. To get from \textsf{AoT} to the macrorealism polytope, \textsf{MR}, we therefore require a linearly independent subset of these conditions. However, since the AoT conditions from \cref{eq:aot-conditions} plus the NSIT conditions from \cref{eq:nsit-conditions} are equivalent to the NS conditions from \cref{eq:ns-conditions}, we arrive at \textsf{MR} with the same dimension as the LR polytope: \begin{equation} \dim \mathsf{MR} = \dim \mathsf{LR} = [m(\Delta-1)+1]^n - 1. \end{equation} We are left with the question of how the space of probability distributions realizable from temporally separated experiments in quantum mechanics, $\mathsf{QM_T}$, relates to \textsf{AoT}. Fritz has shown in ref.~\cite{Fritz:2010ba} that $\mathsf{QM_T} = \mathsf{AoT}$ for $n = m = \Delta = 2$, if we allow for positive-operator valued measurements (POVMs). Let us now generalize his proof to arbitrary $n, m, \Delta$. We do so by constructing a quantum experiment that produces all possible probability distributions which are allowed by AoT. 
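For concreteness, let us evaluate the counting above in the simplest scenario $n = m = \Delta = 2$ (a worked instance added for illustration). Here $\dim \mathsf P = 25 - 9 = 16$ by \cref{eq:dimP}, \cref{eq:nAoT} gives $(25 - 8 - 1)/2 = 8$ linearly independent AoT conditions, and \cref{eq:nNormAoT} gives $9 - 4 - 1 = 4$ obsolete normalization conditions, so that
\begin{equation*}
\dim \mathsf{AoT} = 16 - 8 + 4 = 12,
\end{equation*}
in agreement with \cref{eq:dimAoT}, whereas $\dim \mathsf{MR} = [2(2-1)+1]^2 - 1 = 8$. The gap between these two dimensions is precisely what the NSIT conditions close.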
\newcommand{\vsym}[1]{\rotatebox[origin=c]{-90}{$#1$}} \begin{table*}[tb] \renewcommand*{\arraystretch}{1.5} \begin{ruledtabular} \begin{tabular}{lcccc} & LR test & & MR test \\\hline Number of unnormalized distributions & & \hspace{-5em} $(m \Delta+1)^n$ \hspace{-5em} & \\ $\dim \mathsf P$ & & \hspace{-5em} $(m \Delta+1)^n - (m+1)^n$ \hspace{-5em} & \\ $\dim \mathsf{QM_S},~ \dim \mathsf{QM_T}$ & $[m (\Delta-1)+1]^n-1$ & $<$ & $[(m \Delta + 1)^n - 1] (\Delta - 1)/ \Delta$ \\ $\dim \mathsf{LR},~ \dim \mathsf{MR}$ & & \hspace{-5em} $[m (\Delta-1)+1]^n-1$ \hspace{-5em} & \end{tabular} \end{ruledtabular} \caption{\label{tbl:dimensions} Dimensions of the probability space $\mathsf P$ and its subspaces reachable by spatially separated ($\mathsf{QM_S}$) or temporally separated ($\mathsf{QM_T}$) experiments in quantum mechanics, local realism (\textsf{LR}), and macrorealism (\textsf{MR}). There are $n$ spatially or temporally separated measurements with $m$ settings and $\Delta$ outcomes each. } \end{table*} Consider a quantum system of dimension $(m \Delta + 1)^n$, with states enumerated as $\ket{q_1 \dots q_n; s_1 \dots s_n}$. As with the probability distributions, final zeros may be omitted. The initial state of the system is $\ket{0 \dots 0; 0 \dots 0}$. Now, $n$ POVMs are performed on the system. The measurements are chosen such that depending on their setting and outcome they take the system to the corresponding state: Performing a measurement on a system in state $\ket{q_1 \dots q_{i-1}; s_1 \dots s_{i-1}}$ with setting $s_i$ and obtaining outcome $q_i$ should leave the system in state $\ket{q_1 \dots q_i; s_1 \dots s_i}$. 
This is accomplished by choosing Kraus operators for the $i$-th measurement in basis $s_i$ for outcome $q_i$ as \begin{equation}\label{eq:kraus-construction} \begin{split} K^{(i)}_{s_i, q_i} = &\sum_{s_1 \dots s_{i-1}, q_1 \dots q_{i-1}} \sqrt{r_{q_i | q_1 \dots q_{i-1}, s_1 \dots s_i}} \\ &\times \ketbra{q_1 \dots q_i; s_1 \dots s_i}{q_1 \dots q_{i-1}; s_1 \dots s_{i-1}} \\ + \sum_{\substack{s_1 \dots s_n \\ q_1 \dots q_n \\ \Sigma_{j=i}^{n} s_j \neq 0}} &\frac{1}{\sqrt{\Delta}} \ketbra{q_1 \dots q_n; s_1 \dots s_n}{{q_1 \dots q_n; s_1 \dots s_n}}. \end{split} \end{equation} For $i = 1$, the first sum in \cref{eq:kraus-construction} reduces to the single term $\sqrt{p_{q_1 | s_1}} \ketbra{q_1; s_1}{0 \dots 0; 0 \dots 0}$, while the second sum remains unchanged. The second sum in \cref{eq:kraus-construction} is necessary for the completeness relation $\sum_{q_i} (K^{(i)}_{s_i, q_i})^\dagger K^{(i)}_{s_i, q_i} = \mathbbm 1$. The above definitions also work for $s_i = 0$, where $r_{q_i = 0|q_1 \dots q_{i-1}, s_1 \dots s_{i-1}, s_i = 0} = 1$, and $(K^{(i)}_{s_i, q_i})^\dagger K^{(i)}_{s_i, q_i} = \mathbbm 1$. The conditional probabilities $r$ in \cref{eq:kraus-construction} can be obtained from the probabilities $p$ using the assumption of AoT: \begin{equation} r_{q_i | q_1 \dots q_{i-1}, s_1 \dots s_i} = \frac{p_{q_1 \dots q_i | s_1 \dots s_i}}{p_{q_1 \dots q_{i-1} | s_1 \dots s_{i-1}}}. \end{equation} This construction gives a recipe to obtain any point in the AoT probability space in a quantum experiment. We have therefore shown that $\mathsf{AoT} = \mathsf{QM_T}$ for any choice of $n, m, \Delta$. Note that the probability distributions constructed above can also be achieved by a purely classical stochastic model, albeit with invasive measurements. Such an experiment would therefore not convince a macrorealist to give up their world view. 
For that to happen, an experiment needs to properly address the clumsiness loophole \cite{Leggett:1985bl, Wilde:2011ip, Moreira:2015iz}. The relevant methods previously established for the LGI can also be applied to NSIT-based experiments \cite{Knee:2016wd-arxiv}. Since \textsf{AoT} is a polytope, $\mathsf{QM_T}$ with POVMs is also a polytope, and no non-trivial Tsirelson-like bounds exist. If, on the other hand, we only allowed projective measurements, we would have $\mathsf{QM_T} \subset \mathsf{AoT}$ with non-trivial Tsirelson-like bounds, as shown in ref.~\cite{Fritz:2010ba}. In this case, $\mathsf{QM_T}$ would not be a polytope. It is easy to see that QM with projectors is unable to reproduce some probability distributions: $n = 2, m=1, \Delta = 2$, $p_{11|11} = 1, p_{01|01} = 0$ fulfills AoT but cannot be constructed in projective quantum mechanics, since the initial state must be an eigenstate of the first measurement. Here we consider the general case of POVMs. In summary, we have \begin{equation} \begin{matrix} \mathsf P & \supset & \mathsf{NS} & \supset & \mathsf{QM_S} & \supset & \mathsf{LR} \\ \vsym{=} & & \vsym{\subset} & & \vsym{\subset} & & \vsym{\subset} \\ \mathsf P & \supset & \mathsf{AoT} & = & \mathsf{QM_T} & \supset & \mathsf{MR} \end{matrix}, \end{equation} with $\mathsf{NS} = \mathsf{MR}$, and dimensions \begin{equation} \begin{matrix} \dim \mathsf P & > & \dim \mathsf{NS} & = & \dim \mathsf{QM_S} & = & \dim \mathsf{LR} \\ \vsym{=} & & \vsym{<} & & \vsym{<} & & \vsym{=} \\ \dim \mathsf P & > & \dim \mathsf{AoT} & = & \dim \mathsf{QM_T} & > & \dim \mathsf{MR} \end{matrix}. \end{equation} The structure of \textsf{AoT}, $\mathsf{QM_T}$ and \textsf{MR} within $\mathsf P$ is sketched on the right of \cref{fig:polytopes}, the dimensions of all mentioned subspaces are printed in \cref{tbl:dimensions}. Finally, let us compare the characteristics of quantum mechanics in LR and MR tests. 
Trivially, QM fulfills NS between spatially separated measurements, and AoT between temporally separated measurements \footnote{ To show that QM fulfills NS, we consider a setup with only two parties, 1 and 2, performing measurements with POVM elements $\hat M_{q_1,s_1}^\dagger \hat M_{q_1,s_1}$ and $\hat M_{q_2,s_2}^\dagger \hat M_{q_2,s_2}$, respectively, on a two-particle state $\hat \rho_{12}$. We then calculate $ \sum_{q_2} p_{q_1 q_2|s_1 s_2} = \sum_{q_2} \tr[(\hat M_{q_1,s_1}^\dagger \hat M_{q_1,s_1}) \!\otimes\! (\hat M_{q_2,s_2}^\dagger \hat M_{q_2,s_2})\,\hat \rho_{12}] = \tr[(\hat M_{q_1,s_1}^\dagger \hat M_{q_1,s_1} \!\otimes\! \mathbbm{1}_2) \,\hat \rho_{12}] = \tr_1[\hat M_{q_1,s_1}^\dagger \hat M_{q_1,s_1} \tr_2(\hat \rho_{12}) ] = \tr_1 [\hat M_{q_1,s_1}^\dagger \hat M_{q_1,s_1} \hat \rho_{1}] = p_{q_1|s_1} $. To show that QM fulfills AoT, we consider a setup where $\hat M_{q_1,s_1}^\dagger \hat M_{q_1,s_1}$ are measured at time $1$ on state $\hat \rho_1$, and $\hat M_{q_2,s_2}^\dagger \hat M_{q_2,s_2}$ are measured at time $2$. We then have $ \sum_{q_2} p_{q_1 q_2|s_1 s_2} = \sum_{q_2} \tr[\hat M_{q_1,s_1}^\dagger \hat M_{q_1,s_1} \hat \rho_1] \tr[\hat M_{q_2,s_2}^\dagger \hat M_{q_2,s_2} \hat \rho_2^{q_1,s_1}] = \tr[\hat M_{q_1,s_1}^\dagger \hat M_{q_1,s_1} \hat \rho_1] = p_{q_1|s_1} $, where $\hat \rho_2^{q_1,s_1}$ is the state after measurement of $s_1$ at time 1 with outcome $q_1$, evolved to time 2. The proofs for more parties or more measurement times follow straightforwardly. }. While $\mathsf{QM_S}$ and \textsf{LR} have the same dimension and are separated by Bell inequalities, $\mathsf{QM_T}$ and \textsf{MR} span subspaces with different dimensions. Inequalities can never reduce the dimension of the probability space, since they act as a hyperplane separating the fulfilling from the violating volume of probability distributions. We conclude that no combination of (Leggett-Garg) inequalities can be sufficient for macrorealism.
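The marginalization computed in the footnote is easy to reproduce numerically. The following NumPy sketch draws a random bipartite state and random POVMs (the normalization $E_q = S^{-1/2} A_q S^{-1/2}$ is our construction, not taken from the text) and checks that party 1's marginal is independent of party 2's setting; the temporal AoT computation is analogous.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # local Hilbert-space dimension per party

def random_povm(n_out):
    """Random n_out-outcome POVM: E_q = S^{-1/2} A_q S^{-1/2} with A_q >= 0."""
    def rand_psd():
        G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        return G @ G.conj().T
    A = [rand_psd() for _ in range(n_out)]
    w, V = np.linalg.eigh(sum(A))
    S_inv_sqrt = V @ np.diag(w ** -0.5) @ V.conj().T
    return [S_inv_sqrt @ a @ S_inv_sqrt for a in A]

# random (mixed) bipartite state rho_12 with unit trace
G = rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d))
rho = G @ G.conj().T
rho /= np.trace(rho)

m, Delta = 2, 2  # settings and outcomes per measurement
povms = {party: [random_povm(Delta) for _ in range(m)] for party in (1, 2)}

# no-signaling: party 1's marginal must not depend on party 2's setting s2
for s1 in range(m):
    for q1 in range(Delta):
        marg = [sum(np.trace(np.kron(povms[1][s1][q1], povms[2][s2][q2]) @ rho).real
                    for q2 in range(Delta))
                for s2 in range(m)]
        assert np.allclose(marg, marg[0])
```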
The observation that inequalities cannot be sufficient for macrorealism, and the differences in the structure of the probability space shown above, present fundamental discrepancies between LR and MR. Fine's observation \cite{Fine:1982ic} that Bell inequalities are necessary and sufficient for LR can therefore not be transferred to the case of LGIs and MR. More precisely, Fine's proof uses the implicit assumption of NS, which is obeyed by all reasonable physical theories, including QM. However, the temporal analogue to NS is the conjunction of AoT and NSIT, where AoT is obeyed by all reasonable physical theories, while NSIT is violated in QM. Therefore, \begin{alignat}{3} \text{BIs} &~\substack{\Leftarrow\\\nRightarrow}~ \text{LR} && \Leftrightarrow \text{NS} \land \text{BIs} \\ \text{LGIs} &~\substack{\Leftarrow\\\nRightarrow}~ \text{MR} && \Leftrightarrow \text{AoT} \land \text{NSIT} ~\substack{\nLeftarrow\\\Rightarrow}~ \text{AoT} \land \text{LGIs}, \end{alignat} where ``BIs'' and ``LGIs'' denote the sets of all Bell and Leggett-Garg inequalities, respectively. Moreover, since \textsf{MR} is a polytope with smaller dimension than $\mathsf{QM_T}$, LGIs can only touch \textsf{MR} (i.e.\ be \emph{tight}) at one facet, i.e.\ a positivity constraint, as sketched in \cref{fig:polytopes} on the right. A comparable Bell inequality, sketched in \cref{fig:polytopes} on the left as BI', clearly illustrates the limitations resulting from this requirement. In an experimental test of MR, using a LGI therefore needlessly restricts the parameter space where violations can be found. The favorable experimental feasibility of NSIT is demonstrated by the theoretical analyses of refs.~\cite{Kofler:2013hb, Clemente:2015hv}, as well as the recent experiment of ref.~\cite{Knee:2016wd-arxiv}. Note also the mathematical simplicity of the NSIT conditions when compared to the LGI. 
We conclude that for further theoretical studies and future experiments it might be advantageous to eschew the LGIs and rather use NSIT. \begin{acknowledgments} We acknowledge support from the EU Integrated Project SIQS. \end{acknowledgments} \end{document}
\begin{document} \title {On the symbolic powers of binomial edge ideals} \author {Viviana Ene, J\"urgen Herzog} \address{Viviana Ene, Faculty of Mathematics and Computer Science, Ovidius University, Bd.\ Mamaia 124, 900527 Constanta, Romania} \email{[email protected]} \address{J\"urgen Herzog, Fachbereich Mathematik, Universit\"at Duisburg-Essen, Campus Essen, 45117 Essen, Germany} \email{[email protected]} \begin{abstract} We show that under some conditions, if the initial ideal $\ini_<(I)$ of an ideal $I$ in a polynomial ring has the property that its symbolic and ordinary powers coincide, then the ideal $I$ shares the same property. We apply this result to prove the equality between symbolic and ordinary powers for binomial edge ideals with quadratic Gr\"obner basis. \end{abstract} \subjclass[2010]{05E40,13C15} \keywords{symbolic power, binomial edge ideal, chordal graphs} \maketitle \section{Introduction} Binomial edge ideals were introduced in \cite{HHHKR} and, independently, in \cite{Oh}. Let $S=K[x_1,\ldots,x_n,y_1,\ldots,y_n]$ be the polynomial ring in $2n$ variables over a field $K$ and $G$ a simple graph on the vertex set $[n]$ with edge set $E(G).$ The binomial edge ideal of $G$ is generated by the set of $2$-minors of the generic matrix $X=\left( \begin{array}{cccc} x_1 & x_2 & \cdots & x_n\\ y_1 & y_2 & \cdots & y_n \end{array}\right) $ indexed by the edges of $G$. In other words, \[ J_G=(x_iy_j-x_jy_i: i<j \text{ and }\{i,j\}\in E(G)). \] We will often use the notation $[i,j]$ for the maximal minor $x_iy_j-x_jy_i$ of $X.$ In the last decade, several properties of binomial edge ideals have been studied. In \cite{HHHKR}, it was shown that, for every graph $G,$ the ideal $J_G$ is a radical ideal and the minimal prime ideals are characterized in terms of the combinatorics of the graph. Several articles considered the Cohen-Macaulay property of binomial edge ideals; see, for example, \cite{BMS, EHH, RR, Ri, Ri2}. 
A significant effort has been made in studying the resolution of binomial edge ideals. For relevant results on this topic we refer to the recent survey \cite{Sara} and the references therein. In this paper, we consider symbolic powers of binomial edge ideals. The study and use of symbolic powers have been a rich topic of research in commutative algebra for more than 40 years. Symbolic powers and ordinary powers do not coincide in general. However, there are classes of homogeneous ideals in polynomial rings for which the symbolic and ordinary powers coincide. For example, if $I$ is the edge ideal of a graph, then $I^k=I^{(k)}$ for all $k\geq 1$ if and only if the graph is bipartite. More generally, the facet ideal $I(\Delta)$ of a simplicial complex $\Delta$ has the property that $I(\Delta)^k=I(\Delta)^{(k)}$ for all $k\geq 1$ (equivalently, $I(\Delta)$ is normally torsion free) if and only if $\Delta$ is a Mengerian complex; see \cite[Section 10.3.4]{HH10}. The ideal of the maximal minors of a generic matrix shares the same property, that is, the symbolic and ordinary powers coincide \cite{DEP}. To the best of our knowledge, the comparison between symbolic and ordinary powers for binomial edge ideals was considered so far only in \cite{Oh2}. In Section 4 of that paper, Ohtani proved that if $G$ is a complete multipartite graph, then $J_G^k=J_G^{(k)}$ for all integers $k\geq 1.$ In our paper we prove that, for any binomial edge ideal with quadratic Gr\"obner basis, the symbolic and ordinary powers of $J_G$ coincide. The proof is based on the transfer of the equality of symbolic and ordinary powers from the initial ideal to the ideal itself. The structure of the paper is the following. In Section~\ref{one} we survey basic results needed in the next section on symbolic powers of ideals in Noetherian rings and on binomial edge ideals and their primary decomposition. In Section~\ref{three} we discuss symbolic powers in connection to initial ideals.
Under some specific conditions on the homogeneous ideal $I$ in a polynomial ring over a field, one may derive that if $\ini_<(I)^k=\ini_<(I)^{(k)}$ for some integer $k\geq 1,$ then $I^k=I^{(k)}$; see Lemma~\ref{inilemma}. By using this lemma and the properties of binomial edge ideals, we show in Theorem~\ref{iniconseq} that if $\ini_<(J_G)$ is a normally torsion-free ideal, then the symbolic and ordinary powers of $J_G$ coincide. This is the case, for example, if $G$ is a closed graph (Corollary~\ref{closed}) or the cycle $C_4.$ However, in general, $\ini_<(J_G)$ is not a normally torsion-free ideal. For example, for the binomial edge ideal of the $5$--cycle, we have $J_{C_5}^2=J_{C_5}^{(2)}$, but $(\ini_<(J_{C_5}))^2\subsetneq (\ini_<(J_{C_5}))^{(2)}.$ \section{Preliminaries} \label{one} In this section we summarize basic facts about symbolic powers of ideals and binomial edge ideals. \subsection{Symbolic powers of ideals} Let $I\subset R$ be an ideal in a Noetherian ring $R,$ and let $\Min(I)$ be the set of the minimal prime ideals of $I.$ For an integer $k\geq 1,$ one defines the \emph{$k^{th}$ symbolic power} of $I$ as follows: \[ I^{(k)}=\bigcap_{{\frk p}\in\Min(I)}(I^kR_{\frk p}\cap R)=\bigcap_{{\frk p}\in\Min(I)}\ker(R\to (R/I^k)_{\frk p})=\] \[=\{a\in R: \text{ for every }{\frk p}\in \Min(I), \text{ there exists }w_{\frk p}\not\in {\frk p} \text{ with }w_{\frk p} a\in I^k\}= \] \[=\{a\in R: \text{ there exists }w\not\in \bigcup_{{\frk p}\in \Min(I)}{\frk p}\text{ with } wa\in I^k\}. \] By the definition of the symbolic power, we have $I^k\subseteq I^{(k)}$ for $k\geq 1.$ Symbolic powers do not, in general, coincide with the ordinary powers. However, if $I$ is a complete intersection or it is the determinantal ideal generated by the maximal minors of a generic matrix, then it is known that $I^k= I^{(k)}$ for $k\geq 1;$ see \cite{DEP} or \cite[Corollary 2.3]{BC03}.
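Using the description of symbolic powers of squarefree monomial ideals as intersections of powers of the minimal primes (recalled below), the failure of $I^k=I^{(k)}$ can be tested by machine. A small Python sketch (monomials as exponent tuples; all helper names are ours) verifies the bipartite dichotomy from the introduction on two edge ideals: for the triangle, $xyz\in I^{(2)}\setminus I^2$, while for the bipartite $4$-cycle the two memberships agree on the sampled monomials.

```python
from functools import reduce
from itertools import combinations_with_replacement, product

def divides(a, b):
    """Does the monomial with exponent vector a divide the one with vector b?"""
    return all(ai <= bi for ai, bi in zip(a, b))

def in_power(mono, gens, k):
    """mono in I^k for a monomial ideal I: some product of k generators divides it."""
    for combo in combinations_with_replacement(gens, k):
        prod = reduce(lambda u, v: tuple(x + y for x, y in zip(u, v)), combo)
        if divides(prod, mono):
            return True
    return False

def in_symbolic(mono, primes, k):
    """mono in I^(k): intersection of p^k over the minimal primes (vertex covers)."""
    return all(sum(mono[i] for i in p) >= k for p in primes)

# triangle (odd cycle): I = (xy, yz, xz), minimal primes (x,y), (y,z), (x,z)
tri_gens = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]
tri_primes = [{0, 1}, {1, 2}, {0, 2}]
assert in_symbolic((1, 1, 1), tri_primes, 2)   # xyz lies in I^(2) ...
assert not in_power((1, 1, 1), tri_gens, 2)    # ... but not in I^2

# 4-cycle (bipartite): I = (wx, xy, yz, zw), minimal primes (w,y) and (x,z)
c4_gens = [(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1), (1, 0, 0, 1)]
c4_primes = [{0, 2}, {1, 3}]
for mono in product(range(3), repeat=4):       # all exponents <= 2
    assert in_symbolic(mono, c4_primes, 2) == in_power(mono, c4_gens, 2)
```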
Let $I=Q_1\cap \cdots \cap Q_m$ be an irredundant primary decomposition of $I$ with $\sqrt{Q_i}={\frk p}_i$ for all $i.$ If the minimal prime ideals of $I$ are ${\frk p}_1,\ldots,{\frk p}_s,$ then \[I^{(k)}=Q_1^{(k)}\cap\cdots \cap Q_s^{(k)}. \] In particular, if $I\subset R=K[x_1,\ldots,x_n]$ is a square-free monomial ideal in a polynomial ring over a field $K$, then \[I^{(k)}=\bigcap_{{\frk p}\in \Min(I)} {\frk p}^k. \] Moreover, $I$ is normally torsion-free (i.e., $\Ass(I^k)\subseteq \Ass(I)$ for all $k\geq 1$) if and only if $I^k= I^{(k)}$ for all $k\geq 1,$ if and only if $I$ is the Stanley-Reisner ideal of a Mengerian simplicial complex; see \cite[Theorem 1.4.6, Corollary 10.3.15]{HH10}. In particular, if $G$ is a bipartite graph, then its monomial edge ideal $I(G)$ is normally torsion-free \cite[Corollary 10.3.17]{HH10}. In what follows, we will often use the binomial expansion of symbolic powers \cite{HNTT}. Let $I\subset R$ and $J\subset R^\prime$ be two homogeneous ideals in the polynomial algebras $R,R^\prime$ in disjoint sets of variables over the same field $K$. We write $I,J$ for the extensions of these two ideals in $R\otimes_K R^\prime.$ Then, the following binomial expansion holds. \begin{Theorem}\cite[Theorem 3.4]{HNTT} In the above settings, \[ (I+J)^{(n)}=\sum_{i+j=n}I^{(i)}J^{(j)}.\] \end{Theorem} Moreover, we have the following criterion for the equality of the symbolic and ordinary powers.
\begin{Corollary}\cite[Corollary 3.5]{HNTT} \label{corh} In the above settings, assume that $I^t\neq I^{t+1}$ and $J^t\neq J^{t+1}$ for $t\leq n-1.$ Then $(I+J)^{(n)}=(I+J)^n$ if and only if $I^{(t)}=I^t$ and $J^{(t)}=J^t$ for every $t\leq n.$ \end{Corollary} \subsection{Binomial edge ideals} Let $G$ be a simple graph on the vertex set $[n]$ with edge set $E(G)$ and let $S$ be the polynomial ring $K[x_1,\ldots,x_n,y_1,\ldots,y_n]$ in $2n$ variables over a field $K.$ The binomial edge ideal $J_G\subset S$ associated with $G$ is \[ J_G=(f_{ij}: i<j, \{i,j\}\in E(G)), \] where $f_{ij}=x_iy_j-x_jy_i$ for $1\leq i<j\leq n.$ Note that $f_{ij}$ are exactly the maximal minors of the $2\times n$ generic matrix $X=\left( \begin{array}{cccc} x_1 & x_2 & \cdots & x_n\\ y_1 & y_2 & \cdots & y_n \end{array}\right). $ We will use the notation $[i,j]$ for the $2$-minor of $X$ determined by the columns $i$ and $j.$ We consider the polynomial ring $S$ endowed with the lexicographic order induced by the natural order of the variables, and $\ini_<(J_G)$ denotes the initial ideal of $J_G$ with respect to this monomial order. By \cite[Corollary 2.2]{HHHKR}, $J_G$ is a radical ideal. Its minimal prime ideals may be characterized in terms of the combinatorics of the graph $G.$ We introduce the following notation. Let ${\mathcal S}\subset [n]$ be a (possibly empty) subset of $[n]$, and let $G_1,\ldots,G_{c({\mathcal S})}$ be the connected components of $G_{[n]\setminus {\mathcal S}}$ where $G_{[n]\setminus {\mathcal S}}$ is the induced subgraph of $G$ on the vertex set $[n]\setminus {\mathcal S}.$ For $1\leq i\leq c({\mathcal S}),$ let $\tilde{G}_i$ be the complete graph on the vertex set $V(G_i).$ Let \[P_{{\mathcal S}}(G)=(\{x_i,y_i\}_{i\in {\mathcal S}}) +J_{\tilde{G_1}}+\cdots +J_{\tilde{G}_{c({\mathcal S})}}.\] Then $P_{{\mathcal S}}(G)$ is a prime ideal.
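The initial ideal $\ini_<(J_G)$ can be computed with any computer algebra system; the paper's own computations use \textsc{Singular}, while the sketch below uses SymPy's \texttt{groebner} (an assumption on our part about the available tooling). For the $4$-cycle it recovers, as leading monomials of the reduced lex Gr\"obner basis, the six minimal generators of $\ini_<(J_{C_4})$ listed in Section~\ref{three}.

```python
from sympy import symbols, groebner, LM

x1, x2, x3, x4, y1, y2, y3, y4 = symbols('x1 x2 x3 x4 y1 y2 y3 y4')
gens = (x1, x2, x3, x4, y1, y2, y3, y4)  # lex with x1 > x2 > x3 > x4 > y1 > ... > y4

def f(i, j):
    """The binomial f_{ij} = x_i y_j - x_j y_i."""
    xs, ys = (x1, x2, x3, x4), (y1, y2, y3, y4)
    return xs[i - 1] * ys[j - 1] - xs[j - 1] * ys[i - 1]

# binomial edge ideal of the 4-cycle with edges {1,2}, {2,3}, {3,4}, {1,4}
G = groebner([f(1, 2), f(2, 3), f(3, 4), f(1, 4)], *gens, order='lex')
lead = {LM(g, *gens, order='lex') for g in G.exprs}
assert lead == {x1*x4*y3, x1*y2, x1*y4, x2*y1*y4, x2*y3, x3*y4}
```

Since the leading monomials of a reduced Gr\"obner basis are the minimal monomial generators of the initial ideal, the assertion matches the presentation of $\ini_<(J_{C_4})$ given later in the paper.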
Since the symbolic powers of an ideal of maximal minors of a generic matrix coincide with the ordinary powers, and by using Corollary~\ref{corh}, we get \begin{equation}\label{eqprime} P_{{\mathcal S}}(G)^{(k)}=P_{{\mathcal S}}(G)^k \text{ for } k\geq 1. \end{equation} By \cite[Theorem 3.2]{HHHKR}, $J_G=\bigcap_{{\mathcal S}\subset [n]}P_{{\mathcal S}}(G).$ In particular, the minimal primes of $J_G$ are among the prime ideals $P_{{\mathcal S}}(G)$ with ${\mathcal S}\subset [n].$ The following proposition characterizes the sets ${\mathcal S}$ for which the prime ideal $P_{{\mathcal S}}(G)$ is minimal. \begin{Proposition}\label{cpset}\cite[Corollary 3.9]{HHHKR} $P_{{\mathcal S}}(G)$ is a minimal prime of $J_G$ if and only if either ${\mathcal S}=\emptyset$ or ${\mathcal S}$ is non-empty and for each $i\in {\mathcal S},$ $c({\mathcal S}\setminus\{i\})<c({\mathcal S})$. \end{Proposition} In combinatorial terminology, for a connected graph $G$, $P_{{\mathcal S}}(G)$ is a minimal prime ideal of $J_G$ if and only if ${\mathcal S}$ is empty or ${\mathcal S}$ is non-empty and is a \emph{cut-point set} of $G,$ that is, $i$ is a cut point of the restriction $G_{([n]\setminus{\mathcal S})\cup\{i\}}$ for every $i\in {\mathcal S}.$ Let ${\mathcal C}(G)$ be the set of all sets ${\mathcal S}\subset [n]$ such that $P_{{\mathcal S}}(G)\in \Min(J_G).$ Let us also mention that, by \cite[Theorem 3.1]{CDeG} and \cite[Corollary 2.12]{CDeG}, we have \begin{equation}\label{intersectini} \ini_<(J_G)=\bigcap_{{\mathcal S} \in {\mathcal C}(G)} \ini_< P_{{\mathcal S}}(G). \end{equation} \begin{Remark} {\em The cited results of \cite{CDeG} require that $K$ is algebraically closed. 
However, in our case, we may remove this condition on the field $K.$ Indeed, neither the Gr\"obner basis of $J_G$ nor the primary decomposition of $J_G$ depend on the field $K,$ thus we may extend the field $K$ to its algebraic closure $\bar{K}.$} \end{Remark} When we study symbolic powers of binomial edge ideals, we may reduce to connected graphs. Let $G=G_1\cup \cdots \cup G_c$ where $G_1,\ldots,G_c$ are the connected components of $G$ and $J_G\subset S$ the binomial edge ideal of $G.$ Then we may write \[J_G=J_{G_1}+\cdots +J_{G_c} \] where $J_{G_i}\subset S_i=K[{x_j,y_j: j\in V(G_i)}]$ for $1\leq i\leq c.$ In the above equality, we used the notation $J_{G_i}$ for the extension of $J_{G_i}$ in $S$ as well. \begin{Proposition}\label{connected} In the above settings, we have $J_G^k=J_G^{(k)}$ for every $k\geq 1$ if and only if $J_{G_i}^k=J_{G_i}^{(k)}$ for every $k\geq 1.$ \end{Proposition} \begin{proof} The equivalence is a direct consequence of Corollary~\ref{corh}. \end{proof} \section{Symbolic powers and initial ideals} \label{three} In this section we discuss the transfer of the equality between symbolic and ordinary powers from the initial ideal to the ideal itself. Let $R=K[x_1,\ldots,x_n]$ be the polynomial ring over the field $K$ and $I\subset R$ a homogeneous ideal. We assume that there exists a monomial order $<$ on $R$ such that $\ini_<(I)$ is a square-free monomial ideal. In particular, it follows that $I$ is a radical ideal. 
Let $\Min(I)=\{{\frk p}_1,\ldots,{\frk p}_s\}.$ Then $I=\bigcap_{i=1}^s {\frk p}_i.$ \begin{Lemma}\label{inilemma} In the above settings, we assume that the following conditions are fulfilled: \begin{itemize} \item [(i)] $\ini_<(I)=\bigcap_{i=1}^s \ini_<({\frk p}_i);$ \item [(ii)] For an integer $t\geq 1$ we have: \begin{itemize} \item [(a)] ${\frk p}_i^{(t)}={\frk p}_i^t$ for $1\leq i\leq s;$ \item [(b)] $\ini_<({\frk p}_i^t)=(\ini_<({\frk p}_i))^t$ for $1\leq i\leq s;$ \item [(c)] $(\ini_<(I))^{(t)}=(\ini_<(I))^t.$ \end{itemize} \end{itemize} Then $I^{(t)}=I^t.$ \end{Lemma} \begin{proof} In our hypothesis, we obtain: \[\ini_<(I^t)\supseteq (\ini_<(I))^t=(\ini_<(I))^{(t)}=\bigcap_{i=1}^s(\ini_<({\frk p}_i))^{(t)}\supseteq \bigcap_{i=1}^s(\ini_<({\frk p}_i))^{t}=\bigcap_{i=1}^s \ini_<({\frk p}_i^t)\supseteq\] \[\supseteq \ini_<(\bigcap_{i=1}^s {\frk p}_i^t)=\ini_<(\bigcap_{i=1}^s {\frk p}_i^{(t)})=\ini_<(I^{(t)})\supseteq \ini_<(I^t). \] Therefore, it follows that $\ini_<(I^{(t)})=\ini_<(I^t).$ Since $I^t\subseteq I^{(t)},$ we get $I^t= I^{(t)}.$ \end{proof} We now investigate whether one may use the above lemma for studying symbolic powers of binomial edge ideals. 
Note that, by (\ref{intersectini}), the first condition in Lemma~\ref{inilemma} holds for any binomial edge ideal $J_G.$ In addition, as we have seen in (\ref{eqprime}), condition (a) in Lemma~\ref{inilemma} holds for any prime ideal $P_{\mathcal S}(G)$ and any integer $t\geq 1.$ \begin{Lemma}\label{inipowers} Let ${\mathcal S}\subset [n].$ Then $\ini_<(P_{{\mathcal S}}(G)^{t})=(\ini_<(P_{\mathcal S}(G)))^t,$ for every $t\geq 1.$ \end{Lemma} \begin{proof} To shorten the notation, we write $P$ instead of $P_{{\mathcal S}}(G)$, $c$ instead of $c({\mathcal S}),$ and $J_i$ instead of $J_{\tilde{G_i}}$ for $1\leq i\leq c.$ Let ${\mathcal R}(P),$ respectively ${\mathcal R}(\ini_<(P))$ be the Rees algebras of $P,$ respectively $\ini_<(P).$ Then, as the sets of variables $\{x_j,y_j:j\in V(\tilde{G_i})\}$ are pairwise disjoint, we get \begin{equation}\label{eqRees1} {\mathcal R}(P)={\mathcal R}((\{x_i,y_i\}_{i\in {\mathcal S}}))\otimes_K (\otimes_{i=1}^c{\mathcal R}(J_i)). \end{equation} On the other hand, since $\ini_<(P)=(\{x_i,y_i\}_{i\in {\mathcal S}})+\ini_<(J_1)+\cdots+\ini_<(J_c),$ due to the fact that $J_1,\ldots,J_c$ are ideals in disjoint sets of variables different from $\{x_i,y_i\}_{i\in {\mathcal S}}$ (see \cite{HHHKR}), we obtain \begin{eqnarray}\label{eqRees2} {\mathcal R}(\ini_<P)={\mathcal R}((\{x_i,y_i\}_{i\in {\mathcal S}}))\otimes_K (\otimes_{i=1}^c{\mathcal R}(\ini_<J_i))=\\ \nonumber ={\mathcal R}((\{x_i,y_i\}_{i\in {\mathcal S}}))\otimes_K (\otimes_{i=1}^c\ini_<{\mathcal R}(J_i)). \end{eqnarray} For the last equality we used the equality $\ini_<(J_i^t)=(\ini_<J_i)^t$ for all $t\geq 1$ which is a particular case of \cite[Theorem 2.1]{Con} and the equality ${\mathcal R}(\ini_<J_i)=\ini_<{\mathcal R}(J_i)$ due to \cite[Theorem 2.7]{CHV}. We know that ${\mathcal R}(P)$ and $\ini_<({\mathcal R}(P))$ have the same Hilbert function. 
On the other hand, equalities~(\ref{eqRees1}) and (\ref{eqRees2}) show that ${\mathcal R}(P)$ and ${\mathcal R}(\ini_<P)$ have the same Hilbert function since ${\mathcal R}(J_i)$ and $\ini_<{\mathcal R}(J_i)$ have the same Hilbert function for every $1\leq i\leq c.$ Therefore, ${\mathcal R}(\ini_<P)$ and $\ini_<{\mathcal R}(P)$ have the same Hilbert function. As ${\mathcal R}(\ini_<P)\subseteq \ini_<({\mathcal R}(P))$, we have ${\mathcal R}(\ini_<P)= \ini_<({\mathcal R}(P))$, which implies by \cite[Theorem 2.7]{CHV} that $\ini_<(P^t)=(\ini_<P)^t$ for all $t.$ \end{proof} \begin{Theorem}\label{iniconseq} Let $G$ be a connected graph on the vertex set $[n].$ If $\ini_<(J_G)$ is a normally torsion-free ideal, then $J_G^{(k)}=J_G^k$ for $k\geq 1.$ \end{Theorem} \begin{proof} The proof is a consequence of Lemma~\ref{inipowers} combined with relations (\ref{intersectini}) and (\ref{eqprime}). \end{proof} There are binomial edge ideals whose initial ideal with respect to the lexicographic order is normally torsion-free. For example, the binomial edge ideals which have a quadratic Gr\"obner basis have normally torsion-free initial ideals. They were characterized in \cite[Theorem 1.1]{HHHKR} and correspond to the so-called closed graphs. The graph $G$ is \textit{closed} if there exists a labeling of its vertices such that for any edge $\{i,k\}$ with $i<k$ and for every $i<j<k$, we have $\{i,j\}, \{j,k\}\in E(G).$ If $G$ is closed with respect to its labeling, then, with respect to the lexicographic order $<$ on $S$ induced by the natural ordering of the indeterminates, the initial ideal of $J_G$ is $\ini_<(J_G)=(x_iy_j: i<j \text{ and }\{i,j\}\in E(G)).$ This implies that $\ini_<(J_G)$ is the edge ideal of a bipartite graph, hence it is normally torsion-free. Therefore, we get the following.
\begin{Corollary}\label{closed} Let $G$ be a closed graph on the vertex set $[n].$ Then $J_G^{(k)}=J_G^k$ for $k\geq 1.$ \end{Corollary} Let $C_4$ be the $4$-cycle with edges $\{1,2\},\{2,3\},\{3,4\},\{1,4\}.$ Let $<$ be the lexicographic order on $K[x_1,\ldots,x_4,y_1,\ldots,y_4]$ induced by $x_1>x_2>x_3>x_4>y_1>y_2>y_3>y_4.$ With respect to this monomial order, we have \[ \ini_<(J_{C_4})=(x_1x_4y_3,x_1y_2,x_1y_4,x_2y_1y_4,x_2y_3,x_3y_4). \] Let $\Delta$ be the simplicial complex whose facet ideal is $I(\Delta)=\ini_<(J_{C_4}).$ It is easily seen that $\Delta$ has no special odd cycle, therefore, by \cite[Theorem 10.3.16]{HH10}, it follows that $I(\Delta)$ is normally torsion-free. Note that the $4$-cycle is a complete bipartite graph, thus the equality $J_{C_4}^k=J_{C_4}^{(k)}$ for all $k\geq 1$ follows also from \cite{Oh2}. In view of this result, one would expect that initial ideals of binomial edge ideals of cycles are normally torsion-free. But this is not the case. Indeed, let $C_5$ be the $5$-cycle with edges $\{1,2\},\{2,3\},\{3,4\},\{4,5\},\{1,5\}$ and $I=\ini_<(J_{C_5})$ the initial ideal of $J_{C_5}$ with respect to the lexicographic order on $K[x_1,\ldots,x_5,y_1,\ldots,y_5].$ By using \textsc{Singular} \cite{Soft}, we checked that $I^2\subsetneq I^{(2)}.$ Indeed, the monomial $x_1^2x_4x_5y_3y_5\in I^2$ is a minimal generator of $I^2.$ On the other hand, the monomial $x_1x_4x_5y_3y_5\in I^{(2)}$; since $x_1^2x_4x_5y_3y_5$ is a minimal generator of $I^2$, the monomial $x_1x_4x_5y_3y_5=x_1^2x_4x_5y_3y_5/x_1$ cannot belong to $I^2$. Thus $I^2\neq I^{(2)}$, and $I$ is not normally torsion-free. On the other hand, again with \textsc{Singular}, we have checked that $J_{C_5}^2=J_{C_5}^{(2)}.$ \end{document}
\begin{document} \title{Recovery of the Derivative of the Conductivity at the Boundary} \begin{abstract} We describe a method to reconstruct the conductivity and its normal derivative at the boundary from the knowledge of the potential and current measured at the boundary. This boundary determination implies the uniqueness of the conductivity in the bulk when it lies in $W^{1+\frac{n-5}{2p}+,p}$, for dimensions $n\ge 5$ and for $n\le p<\infty$. \end{abstract} Electrical Impedance Imaging is a technique to recover the conductivity in the bulk of a body from measurements of potential and current at the boundary. The potential $u$ in a domain $\Omega\subset\mathbb{R}^n$ satisfies the equation \begin{equation}\label{eq:BVP} \begin{aligned} \div{\gamma\nabla u} &= 0 \\ u|_{\partial\Omega} &= f, \end{aligned} \end{equation} where $\gamma$ is the conductivity and $f\in H^\frac{1}{2}(\partial\Omega)$ is the potential at the boundary --the definitions of the spaces used here are placed at the end of the article. The conductivity satisfies the condition $0<c\le \gamma\le C$. The current measured at the boundary is $\gamma\partial_\nu u|_{\partial\Omega}$, where $\nu$ is the outward-pointing normal vector. The operator $\Lambda_\gamma$ that maps $u|_{\partial\Omega}$ to $\gamma\partial_\nu u|_{\partial\Omega}$ is known as the Dirichlet-to-Neumann map, and it is defined as the functional $\Lambda_\gamma:H^\frac{1}{2}(\partial\Omega)\mapsto H^{-\frac{1}{2}}(\partial\Omega)$ given by \begin{equation*} \inner{\Lambda_\gamma f}{g} := \int_{\Omega}\gamma\nabla u\cdot\nabla v, \end{equation*} where $u$ solves the boundary value problem \eqref{eq:BVP} and $v\in H^1(\Omega)$ is \textit{any} extension of $g\in H^\frac{1}{2}(\partial\Omega)$. If we choose $v$ such that $\div{\gamma\nabla v}=0$, then we see that $\Lambda_\gamma$ is symmetric. 
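A discrete analogue may help fix ideas: for a resistor network with weighted graph Laplacian $L$, the Dirichlet-to-Neumann map is the Schur complement of the interior block, the energy pairing is independent of which extension $v$ of $g$ is used, and the symmetry is manifest. A NumPy sketch on a grid graph (a toy model, not the continuum operator of this article):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
nodes = [(i, j) for i in range(N) for j in range(N)]
idx = {p: k for k, p in enumerate(nodes)}
inr = [k for (i, j), k in idx.items() if 0 < i < N - 1 and 0 < j < N - 1]
bnd = [k for k in range(N * N) if k not in inr]

# weighted graph Laplacian L: discrete analogue of u -> -div(gamma grad u)
L = np.zeros((N * N, N * N))
for (i, j) in nodes:
    for (a, b) in ((i + 1, j), (i, j + 1)):
        if a < N and b < N:
            w = rng.uniform(0.5, 2.0)  # edge conductivity gamma_e > 0
            u, v = idx[(i, j)], idx[(a, b)]
            L[u, u] += w; L[v, v] += w; L[u, v] -= w; L[v, u] -= w

L_BB, L_BI = L[np.ix_(bnd, bnd)], L[np.ix_(bnd, inr)]
L_IB, L_II = L[np.ix_(inr, bnd)], L[np.ix_(inr, inr)]

# Dirichlet-to-Neumann map: Schur complement of the interior block
Lam = L_BB - L_BI @ np.linalg.solve(L_II, L_IB)
assert np.allclose(Lam, Lam.T)  # the DtN map is symmetric

f = rng.normal(size=len(bnd))
g = rng.normal(size=len(bnd))
u_inr = -np.linalg.solve(L_II, L_IB @ f)  # "harmonic" extension of f

def pairing(g_inr):
    """<Lam f, g> computed as the energy form with an ARBITRARY extension of g."""
    u = np.zeros(N * N); u[bnd] = f; u[inr] = u_inr
    v = np.zeros(N * N); v[bnd] = g; v[inr] = g_inr
    return v @ L @ u

e1 = pairing(rng.normal(size=len(inr)))
e2 = pairing(rng.normal(size=len(inr)))
assert np.isclose(e1, e2) and np.isclose(e1, g @ Lam @ f)
```

The extension-independence mirrors the remark above: once $u$ solves the interior equation, the cross term with the interior values of $v$ vanishes.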
In \cite{MR590275} Calderón posed the problem of deciding whether the conductivity can be uniquely recovered from the data at the boundary, \textit{i.e.} whether $\Lambda_{\gamma_1}=\Lambda_{\gamma_2}$ implies that $\gamma_1=\gamma_2$. One of the earliest results, due to Kohn and Vogelius in \cite{MR739921}, is that $\Lambda_{\gamma_1}=\Lambda_{\gamma_2}$ implies that $\partial^N_\nu\gamma_1=\partial^N_\nu\gamma_2$ at the boundary for every $N$ when $\gamma_1,\gamma_2\in C^\infty$. Sylvester and Uhlmann \cite{MR873380} made use of this result to prove uniqueness in the bulk for $C^2$ conductivities. It is hard to prove uniqueness in the bulk, so uniqueness at the boundary may be considered as a toy problem; moreover, many proofs of inner uniqueness use uniqueness at the boundary as the first step, and this is in fact the motivation behind this article. For $\gamma_1,\gamma_2\in W^{s,p}(\Omega)$, some arguments need to extend the conductivities $\gamma_1$ and $\gamma_2$ to the whole space in such a way that $\gamma_1=\gamma_2$ in $\mathbb{R}^n\backslash\Omega$, and $\gamma_1,\gamma_2\in W^{s,p}(\mathbb{R}^n)$. Brown proved in \cite{MR1881563} that, under mild conditions of regularity, the conductivity can be recovered at the boundary. In particular, if $\gamma_1,\gamma_2\in W^{s,p}(\Omega)$ for $1\le s\le 1+\frac{1}{p}$ and $p\ge n$, then $\Lambda_{\gamma_1}=\Lambda_{\gamma_2}$ implies that $\gamma_1=\gamma_2$ at $\partial\Omega$, and then, by function space arguments, $\gamma_1$ and $\gamma_2$ can be adequately extended to $\mathbb{R}^n$; for details the reader is referred to \cite{MR2026763}. The possibility of this extension was used by Haberman \cite{MR3397029}, and by Ham, Kwon and Lee \cite{HKL} to prove uniqueness inside $\Omega\subset\mathbb{R}^n$ when $3\le n\le 6$. 
When $\gamma_1,\gamma_2\in W^{s,p}(\Omega)$ for $s>1+\frac{1}{p}$, the condition $\partial_\nu\gamma_1=\partial_\nu\gamma_2$ at $\partial\Omega$ is necessary to extend both conductivities adequately to $\mathbb{R}^n$. The result of Kohn and Vogelius holds for smooth conductivities, so we cannot use it with rough conductivities. The main result of this article is that, under mild conditions of regularity, the conductivity and its normal derivative at the boundary are uniquely determined by $\Lambda_\gamma$; furthermore, our theorem provides a method of reconstruction. \begin{theorem}\label{thm:recovery} Suppose that $0<c\le\gamma\le C$ and that $\Omega\subset\mathbb{R}^n$ is a Lipschitz domain. {\bf (A)} If $\gamma\in W^{s,p}(\Omega)$ for $s>\frac{1}{p}$ and $1<p<\infty$, then for $y\in\partial\Omega$ a.e. there exist a family of functions $f_{0,h}$ and constants $c_{0,h}\sim 1$ such that \begin{equation}\label{eq:thm_recovery_A} \inner{\Lambda_\gamma f_{0,h}}{f_{0,h}} = c_{0,h}\gamma(y)+o(1)\quad \text{as } h\to 0. \end{equation} The constants $c_{0,h}$ do not depend on the conductivity. {\bf (B)} If $\gamma\in W^{s,p}(\Omega)$ for $s>1+\frac{1}{p}$ and $2\le p<\infty$, or for $s>\frac{3}{p}$ and $1<p<2$, then for $y\in\partial\Omega$ a.e. there exist a family of functions $f_{1,h}$ and constants $c_{0,h},c_{1,h}\sim 1$ such that \begin{equation}\label{eq:thm_recovery_B} \inner{\Lambda_\gamma f_{1,h}}{f_{1,h}}-c_{0,h} = c_{1,h}\partial_\nu\log\gamma(y)h+o(h)\quad \text{as } h\to 0. \end{equation} The constants $c_{0,h}$ and $c_{1,h}$ do not depend on the conductivity. \end{theorem} No attempt is made to get the best error terms implicitly involved in \eqref{eq:thm_recovery_A} and \eqref{eq:thm_recovery_B}. The condition $\gamma\in W^{s,p}(\Omega)$, for $s>l+\frac{1}{p}$ and $l=0$ or 1, is the lowest regularity needed to make sense of the trace values of $\gamma$ and of $\partial_\nu \gamma$ respectively.
In fact, by the trace theorem $\norm{\partial^l_\nu\gamma}_{L^p(\partial\Omega)}\le C\norm{\gamma}_{s,p}$ if $s>l+\frac{1}{p}$; the reader is referred to \cite{MR884984, MR781540} for details. The proof is mainly inspired by the work of Brown \cite{MR1881563}, who used highly oscillatory solutions to recover the value of $\gamma|_{\partial\Omega}$. We borrow many of his arguments, but we do not use oscillatory solutions; instead, we follow Alessandrini \cite{MR1047569} and use singular solutions. The motivation of the proof comes from the expansion, at least in the smooth class, $\Lambda_\gamma \sim \lambda^{1}+\lambda^0+\cdots$, where $\lambda^i\in S^i$ are pseudo-differential operators. This was proved by Sylvester and Uhlmann in \cite{MR924684}, and they showed that the information about $\partial^l_\nu\gamma$ at $\partial\Omega$ can be extracted from $\lambda^{1-l}$. Therefore, we try to use approximate solutions of \eqref{eq:BVP}, so that the boundary data $f$ concentrates as a Dirac delta at some point on the boundary, and heuristically we get $\Lambda_\gamma(\delta_0)$. We follow this argument in Section~\ref{sec:Recovery}. The main tool in our investigation is an approximation property at almost every point on the boundary. We did not find a suitable reference for the approximation we needed, so we include a proof here in Section~\ref{sec:Approximation}. \begin{theorem}\label{thm:Boundary_Approximation} Suppose that $\Omega\subset\mathbb{R}^n$ is a domain with Lipschitz boundary. If $f\in B^{s,p}(\Omega)$ for $1+\frac{1}{p}>s>\frac{1}{p}$, then for $0\le\alpha<s-\frac{1}{p}$ and for $y\in\partial\Omega$ a.e. it holds that \begin{equation} \Big(\frac{1}{r^n}\int_{B_r(y)\cap\Omega}\abs{f(x)-f(y)}^q\,dx\Big)^\frac{1}{q} \le Cr^\alpha,\quad \text{where } r\le 1 \text{ and } 1\le q\le p. \end{equation} The constant $C$ depends on $y$. \end{theorem} As a consequence of Theorem~\ref{thm:recovery} and a result of the author in \cite[Thm.
4]{PV}, we get the following theorem. \begin{theorem} For $n\ge 5$ suppose that $\Omega\subset\mathbb{R}^n$ is a Lipschitz domain. If $\gamma_1$ and $\gamma_2$ are in $W^{1+\frac{n-5}{2p}+, p}(\Omega)\cap L^\infty(\Omega)$ for $n\le p<\infty$, and if $\gamma_1,\gamma_2\ge c>0$, then \begin{equation} \Lambda_{\gamma_1}=\Lambda_{\gamma_2}\text{ implies that } \gamma_1=\gamma_2. \end{equation} \end{theorem} The reader can consult the symbols and notations used at the end of the article. \subsection*{Acknowledgments} I thank Pedro Caro for sharing his many insights with me. This research is supported by the Basque Government through the BERC 2018-2021 program, and by the Spanish State Research Agency through BCAM Severo Ochoa excellence accreditation SEV-2017-0718 and through projects ERCEA Advanced Grant 2014 669689 - HADE and PGC2018-094528-B-I00. \section{Reconstruction at the Boundary}\label{sec:Recovery} We assume in this section that $0\in\partial\Omega$ and that Theorem~\ref{thm:Boundary_Approximation} holds for $0$ every time we use it or a variant of it. We assume also that there is a ball $B_\delta(0)$ and a Lipschitz function $\psi$ such that \begin{equation*} B_\delta\cap\Omega = \{(x',x_n)\in B_\delta \mid \psi(x')<x_n \}. \end{equation*} We assume that $\psi(0)=\nabla\psi(0)=0$ and that $-e_n$ is a Lebesgue point of the outward-pointing normal vector $\nu:=(1+\abs{\nabla\psi}^2)^{-\frac{1}{2}}(\nabla\psi, -1)$. \subsection{Value at the Boundary} The reconstruction of $\gamma|_{\partial\Omega}$ is based on the function $u(x):=x_n/\abs{x}^n$, which solves the boundary value problem $\Delta u = 0$ in the upper half-space $H_+$, with $u|_{\partial H_+}=c\delta_0$. Since $\gamma\in W^{s,p}\cap L^\infty$, then by Gagliardo-Nirenberg, see \textit{e.g.} \cite{MR3813967}, we can assume that $\gamma\in W^{s,p}(\Omega)$ for $s>\frac{1}{p}$ and $2\le p<\infty$.
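As a sanity check, one can verify numerically that $u(x)=x_n/\abs{x}^n$ is harmonic away from the origin; the finite-difference sketch below (ours, with $n=3$ and an arbitrary test point) is not part of the argument.

```python
import numpy as np

def u(p, n=3):
    # singular solution u(x) = x_n / |x|^n, harmonic away from the origin
    return p[-1] / np.linalg.norm(p)**n

def laplacian(f, p, h=1e-4):
    # central second differences in each coordinate direction
    p = np.asarray(p, dtype=float)
    out = 0.0
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        out += (f(p + e) - 2.0 * f(p) + f(p - e)) / h**2
    return out

lap = laplacian(u, [0.3, -0.2, 0.7])
print(lap)  # vanishes up to finite-difference error
```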
For $h\ll 1$ we define the approximate solutions $u_h(x):=u(x+he_n)$ of \eqref{eq:BVP}, and we define the correction functions $r_h\in H^1_0(\Omega)$ such that \begin{equation*} \div{\gamma\nabla(u_h+r_h)} = 0. \end{equation*} Thus, we have that \begin{equation*} h^n\inner{\Lambda_\gamma (u_h|_{\partial\Omega})}{u_h|_{\partial\Omega}} = h^n\int \gamma\nabla (u_h+r_h)\cdot\nabla u_h. \end{equation*} The term $h^n$ is a normalization factor, and the functions $f_{0,h}$ in Theorem~\ref{thm:recovery}(A) are $f_{0,h}:=h^\frac{n}{2}u_h|_{\partial\Omega}$. The main part of the integral above is $\int\gamma\nabla u_h\cdot\nabla u_h$, and we extract the value of $\gamma(0)$ from it. We use the dilation $(\nabla u_h)(hx)= h^{-n}\nabla u_1(x)$ to get \begin{align*} \int_\Omega\gamma\nabla u_h\cdot\nabla u_h &= \gamma(0) \int_\Omega\abs{\nabla u_h}^2+ \int_\Omega(\gamma-\gamma(0))\abs{\nabla u_h}^2 \\ &=\gamma(0)h^{-n}\int_{h^{-1}\Omega}\abs{\nabla u_1}^2+ \int_\Omega(\gamma-\gamma(0))\abs{\nabla u_h}^2 \\ &= A_1 + A_2. \end{align*} We set the first term as $h^nA_1=c_{0,h}\gamma(0)$. To control $A_2$ we bound it as \begin{equation*} \abs{A_2}\le C\int_{\Omega}\abs{\gamma(x)-\gamma(0)}\frac{dx}{\abs{x+he_n}^{2n}}. \end{equation*} When $\abs{x}\ge 5h$ we see that $\abs{x}\sim \abs{x+he_n}$. When $\abs{x}<5h$ we exploit the Lipschitz regularity of the boundary, and notice that $x\in\Omega$ implies that $x_n\ge -L\abs{x'}$ for some $L>0$, so $\abs{x+he_n}\ge h/(1+L^2)^\frac{1}{2}$. Now we apply these estimates and Theorem~\ref{thm:Boundary_Approximation} for some allowable $\alpha>0$ to get \begin{align*} \abs{A_2}&\lesssim \sum_{h\le\lambda\le 1}\lambda^{-n}\frac{1}{\lambda^n}\int_{B_\lambda\cap \Omega}\abs{\gamma(x)-\gamma(0)}\,dx + \norm{\gamma}_\infty \\ &\lesssim \sum_{h\le\lambda\le 1}\lambda^{-n+\alpha} + \norm{\gamma}_\infty \\ &= o(h^{-n}). \end{align*} The sums here and elsewhere run over dyadic numbers $\lambda=2^k$, for $k$ integer.
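The decay just used can be made concrete: the dyadic sum $\sum_{h\le\lambda\le 1}\lambda^{-n+\alpha}$ is $O(h^{-n+\alpha})$, so after multiplying by $h^n$ it behaves like $h^\alpha=o(1)$. A small numerical sketch (the values of $n$ and $\alpha$ are illustrative):

```python
import numpy as np

# h^n * sum over dyadic lambda in [h, 1] of lambda^(-n+alpha) ~ h^alpha -> 0.
n, alpha = 3, 0.5

def normalized_tail(h):
    k = int(round(np.log2(1.0 / h)))
    lams = 2.0 ** (-np.arange(k + 1))        # lambda = 1, 1/2, ..., h
    return h**n * np.sum(lams ** (-n + alpha))

vals = [normalized_tail(2.0 ** -j) for j in (5, 10, 15)]
print(vals)  # decreasing roughly like h**alpha
```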
This concludes the estimates for the main part. We turn now to the error term $\div{\gamma\nabla r_h}$. We control it as \begin{equation*} \abs{\int \gamma\nabla r_h\cdot\nabla u_h\,dx}\le \norm{r_h}_{H^1}\norm{\div{\gamma\nabla u_h}}_{H^{-1}}. \end{equation*} From the \textit{a priori} estimate $\norm{r_h}_{H^1}\le C\norm{\div{\gamma\nabla u_h}}_{H^{-1}}$ we get \begin{equation*} \abs{\int \gamma\nabla r_h\cdot\nabla u_h\,dx}\le C\norm{\div{\gamma\nabla u_h}}_{H^{-1}}^2. \end{equation*} Since $u_h$ is harmonic, we can bound the operator norm by duality as \begin{align*} \abs{\int \gamma\nabla u_h\cdot\nabla\phi} &= \abs{\int (\gamma-\gamma(0))\nabla u_h\cdot\nabla\phi} \\ &\le C\norm{\phi}_{H^1}\Big(\sum_{h\le\lambda\le 1}\lambda^{-n}\frac{1}{\lambda^n}\int_{B_\lambda\cap\Omega}\abs{\gamma-\gamma(0)}^2\,dx + \norm{\gamma}_\infty^2 \Big)^\frac{1}{2} \\ &= \norm{\phi}_{H^1}O(h^{-\frac{n}{2}+\alpha}), \end{align*} where $\alpha>0$. Hence, we get \begin{equation*} h^n\norm{\div{\gamma\nabla u_h}}_{H^{-1}}^2 = o(1), \end{equation*} which concludes the proof of Theorem~\ref{thm:recovery}(A). We end this section with an estimate for $c_{0,h}=\int_{h^{-1}\Omega}\abs{\nabla u_1}^2$ in \eqref{eq:thm_recovery_A} when the boundary is $C^1$. We fix $\delta\ll 1$ and write \begin{equation*} \int_{h^{-1}\Omega}\abs{\nabla u_1}^2 = \int_{h^{-1}(B_\delta\cap\Omega)}\abs{\nabla u_1}^2 + \int_{h^{-1}(B_\delta^c\cap\Omega)}\abs{\nabla u_1}^2. \end{equation*} Since $\abs{\nabla u_1}^2$ is integrable, the second term goes to zero as $h\to 0$. We split the first term as \begin{align*} \int_{h^{-1}(B_\delta\cap\Omega)}\abs{\nabla u_1}^2 &= \int_{h^{-1}(B_\delta\cap H_+)}\abs{\nabla u_1}^2 + \\ &\hspace*{1cm}+ \Big[\int_{h^{-1}(\Omega\backslash (B_\delta\cap H_+))}\abs{\nabla u_1}^2 - \int_{h^{-1}((B_\delta\cap H_+)\backslash\Omega)}\abs{\nabla u_1}^2\Big] \\ &= B_1 + B_2. 
\end{align*} The term $B_1$ tends to $\int_{H_+}\abs{\nabla u_1}^2$, and for the second term we have that \begin{equation*} \abs{B_2}\lesssim h^{-1}\int_{\abs{x'}\le \delta h^{-1}}\frac{1}{\japan{x'}^{2n}}\abs{\psi(hx')}\,dx' = \frac{1}{h^n}\int_{\abs{x'}\le\delta}\frac{1}{\japan{x'/h}^{2n}}\abs{\psi(x')}\,dx'\to 0. \end{equation*} Then $c_{0,h} = \int_{H_+}\abs{\nabla u_1}^2+o(1)$. \subsection{Normal Derivative at the Boundary} We will recover $\partial_\nu\gamma$ with the aid of the functions $v_h=\gamma^{-\frac{1}{2}}u_h$. Since $\gamma\in W^{s,p}\cap L^\infty$, then by Gagliardo-Nirenberg we can assume that $\gamma\in W^{s,p}(\Omega)$ for $s>1+\frac{1}{p}$ and $2\le p<\infty$; in fact, if $1<p<2$ and $s>\frac{3}{p}$, then $\gamma\in W^{s',2}(\Omega)$ for $s'=\frac{p}{2}s> 1+\frac{1}{2}$. We define again a correction function $r_h\in H^1_0(\Omega)$ such that \begin{equation*} \div{\gamma\nabla(v_h+r_h)}=0. \end{equation*} In this case the functions $f_{1,h}$ in Theorem~\ref{thm:recovery}(B) are $f_{1,h}:=h^\frac{n}{2}v_h|_{\partial\Omega}$; the use of these functions is licit because we already know the value of $\gamma$ at the boundary. We repeat here the arguments in the previous section, but now the computations are longer. For the main term we have that \begin{align} \int_\Omega \gamma\nabla v_h\cdot \nabla v_h &= \int \abs{\nabla u_h}^2 + 2\int \gamma^\frac{1}{2}\nabla\gamma^{-\frac{1}{2}}u_h\nabla u_h + \int (\gamma^\frac{1}{2}\nabla\gamma^{-\frac{1}{2}})\cdot(\gamma^\frac{1}{2}\nabla\gamma^{-\frac{1}{2}})u_h^2 \notag \\ &=\int \abs{\nabla u_h}^2 - \int\nabla\log\gamma \cdot u_h\nabla u_h + \frac{1}{4}\int \abs{\nabla\log\gamma}^2u_h^2 \notag \\ &= A_1 + A_2 + A_3 \label{eq:Main_Term_Normal_Values} \end{align} The principal term in the asymptotic expansion is $h^n A_1=c_{0,h}$; since this term does not involve the conductivity, we can subtract it harmlessly. 
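The expansion \eqref{eq:Main_Term_Normal_Values} rests on the pointwise identity $\gamma\abs{\nabla(\gamma^{-1/2}u)}^2=\abs{\nabla u}^2-\nabla\log\gamma\cdot u\nabla u+\frac{1}{4}\abs{\nabla\log\gamma}^2u^2$, which can be checked mechanically; the exponential conductivity and sinusoidal $u$ below are our arbitrary choices.

```python
import numpy as np

# Pointwise check of  gamma |grad(gamma^(-1/2) u)|^2
#   = |grad u|^2 - grad(log gamma) . (u grad u) + (1/4)|grad log gamma|^2 u^2
# for gamma(x) = exp(a.x) (so grad log gamma = a) and u(x) = sin(b.x).
a = np.array([0.3, -0.5, 0.2])
b = np.array([1.1, 0.4, -0.7])
p = np.array([0.2, 0.1, -0.3])

gamma = np.exp(a @ p)
u = np.sin(b @ p)
grad_u = np.cos(b @ p) * b
grad_v = gamma**-0.5 * grad_u - 0.5 * gamma**-0.5 * u * a  # grad(gamma^(-1/2) u)
lhs = gamma * (grad_v @ grad_v)
rhs = grad_u @ grad_u - a @ (u * grad_u) + 0.25 * (a @ a) * u**2
print(lhs, rhs)
```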
The next term is $A_2$, and we estimate it as \begin{align*} \int\nabla\log\gamma \cdot u_h\nabla u_h &= \int_\Omega\nabla\log\gamma(0) \cdot u_h\nabla u_h + \int(\nabla\log\gamma-\nabla\log\gamma(0)) \cdot u_h\nabla u_h \\ &= \frac{1}{2}\int_{\partial\Omega}u_h^2\nabla\log\gamma(0)\cdot\nu + \int(\nabla\log\gamma-\nabla\log\gamma(0)) \cdot u_h\nabla u_h \\ &=\inlabel{eq:A_2Main} + \inlabel{eq:A_2Error} \end{align*} The term \eqref{eq:A_2Main} contains the information about the normal derivative at the boundary, and it has order $h^{-n+1}$ in the asymptotic expansion. Since $\nu(0)=-e_n$, we write $\nu=-e_n+(\nu+e_n)$ and get \begin{equation*} h^n\cdot\eqref{eq:A_2Main} = -\frac{h^n}{2}\partial_n\log\gamma(0)\int_{\partial\Omega}u_h^2 + \frac{h^n}{2}\int_{\partial\Omega}u_h^2\nabla\log\gamma(0)\cdot(\nu+e_n). \end{equation*} We set $c_{1,h}:=\frac{1}{2}h^{n-1}\int_{\partial\Omega}u_h^2$ and bound the remaining term as \begin{equation*} \abs{\int_{\partial\Omega}u_h^2\nabla\log\gamma(0)\cdot(\nu+e_n)} \lesssim \sum_{h\le\lambda\le 1}\lambda^{-n+1}\frac{1}{\lambda^{n-1}}\int_{B_\lambda\cap\partial\Omega}\abs{\nu+e_n} + 1 \end{equation*} Since $\nu(0)=-e_n$ is a Lebesgue point, then $G(\lambda):=\frac{1}{\lambda^{n-1}}\int_{B_\lambda\cap\partial\Omega}\abs{\nu+e_n}\xrightarrow{\lambda\to 0} 0$; furthermore, $G$ is uniformly bounded, so by the dominated convergence theorem we get \begin{equation*} h^{n-1}\sum_{h\le\lambda\le 1}\lambda^{-n+1}\frac{1}{\lambda^{n-1}}\int_{B_\lambda\cap\partial\Omega}\abs{\nu+e_n} = \sum_{1\le \mu\le h^{-1}}\mu^{-n+1}G(h\mu)\xrightarrow{h\to 0} 0 \end{equation*} Then we conclude that \begin{equation*} h^n\cdot\eqref{eq:A_2Main} = -c_{1,h}\partial_n\log\gamma(0)\,h + o(h).
\end{equation*} To control \eqref{eq:A_2Error} we apply Theorem~\ref{thm:Boundary_Approximation} to $\nabla\log\gamma\in W^{s-1,p}(\Omega)$ to get \begin{align*} \abs{\eqref{eq:A_2Error}} &\lesssim \sum_{h\le \lambda\le 1}\lambda^{-n+1}\frac{1}{\lambda^n}\int_{B_\lambda\cap\Omega}\abs{\nabla\log\gamma-\nabla\log\gamma(0)}+\int_{B_1^c\cap\Omega}\abs{\nabla\log\gamma-\nabla\log\gamma(0)} \\ &\lesssim \sum_{h\le \lambda\le 1}\lambda^{-n+1+\alpha} + \norm{\nabla\log\gamma}_{2}+\abs{\nabla\log\gamma(0)} \\ &= O(h^{-n+1+\alpha}); \end{align*} we have thus $h^n\abs{\eqref{eq:A_2Error}}=o(h)$, which allows us to conclude that the term $A_2$ in \eqref{eq:Main_Term_Normal_Values} is \begin{equation}\label{eq:A_2} h^nA_2 = c_{1,h}\partial_n\log\gamma(0)\,h + o(h). \end{equation} We are left with the error term $A_3$ in \eqref{eq:Main_Term_Normal_Values}. We bound it using the same arguments as above \begin{align} A_3 &= \frac{1}{4}\abs{\nabla\log\gamma(0)}^2\int u_h^2 + \frac{1}{4}\int (\abs{\nabla\log\gamma}^2-\abs{\nabla\log\gamma(0)}^2)u_h^2 \notag \\ &\lesssim \int_{\Omega} u_h^2 + \sum_{h\le\lambda\le 1}\lambda^{-n+2}\frac{1}{\lambda^n}\int_{B_\lambda\cap\Omega}(\abs{\nabla\log\gamma}^2-\abs{\nabla\log\gamma(0)}^2) + 1 \notag \\ &= o(h^{-n+1}). \label{eq:A_3} \end{align} The estimate we used here to approximate the value of $\abs{\nabla\log\gamma}^2$ at the boundary is not contained in Theorem~\ref{thm:Boundary_Approximation}, and the reader is referred instead to Corollary~\ref{cor:difference_control_square} in the next section. We collect the estimates \eqref{eq:A_2} and \eqref{eq:A_3} to find \begin{equation}\label{eq:Derivative_Main_Part} h^n\int_\Omega \gamma\nabla v_h\cdot \nabla v_h - c_{0,h} = c_{1,h}\partial_n\log\gamma(0)h+ o(h), \end{equation} which is what we wanted. We deal with the error term as before. We have that \begin{equation*} \abs{\int \gamma \nabla r_h\cdot\nabla v_h} \le C\norm{\div{\gamma\nabla v_h}}_{H^{-1}}^2. 
\end{equation*} We estimate the norm by duality as \begin{align*} \int \gamma\nabla v_h\cdot\nabla \phi &= -\int (\nabla\gamma^\frac{1}{2})u_h\nabla\phi + \int\gamma^\frac{1}{2}\nabla u_h\cdot\nabla\phi \\ &= -\int \nabla\gamma^\frac{1}{2}\cdot\nabla(u_h\phi) \\ &= -\int (\nabla\gamma^\frac{1}{2}-\nabla\gamma^\frac{1}{2}(0))\cdot u_h\nabla\phi - \int (\nabla\gamma^\frac{1}{2}-\nabla\gamma^\frac{1}{2}(0))\cdot \phi\nabla u_h \\ &= E_1 + E_2; \end{align*} to get the second and third identities we used the divergence theorem, and the identity $\Delta u_h = 0$. We bound the error term $E_1$ as \begin{align*} \abs{E_1} &\lesssim \Big(\sum_{h\le\lambda\le 1} \lambda^{-n+2}\frac{1}{\lambda^n}\int_{B_\lambda\cap\Omega}\abs{\nabla\gamma^\frac{1}{2}-\nabla\gamma^\frac{1}{2}(0)}^2\,dx + 1\Big)^\frac{1}{2}\norm{\phi}_{H^1} \\ &=\norm{\phi}_{H^1}O(h^{-\frac{n}{2}+1}) \end{align*} To bound $E_2$ we need Hardy's inequality. \begin{theorem}[Hardy's Inequality] If $f\in H^1(\mathbb{R}^n)$ for $n\ge 3$, then \begin{equation}\label{eq:Hardy_High} \int_{\mathbb{R}^n}\frac{\abs{f}^2}{\abs{x}^2}\le C\norm{f}_{H^1}^2. \end{equation} If $f\in H^1_0(A_{\delta, R})$ for $A_{\delta,R}:=\{\delta <\abs{x}< R\}\subset\mathbb{R}^2$, then \begin{equation}\label{eq:Hardy_plane} \int_{\mathbb{R}^2}\frac{\abs{f}^2}{\abs{x}^2}\le C\Big(\log\big(\frac{R}{\delta}\big)\Big)^2\norm{f}_{H^1}^2. \end{equation} \end{theorem} A beautiful proof can be found in \cite[sec. 2]{MR1616905}. The inequality \eqref{eq:Hardy_plane} is not there, but it follows after minor changes.
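A radial numerical illustration of \eqref{eq:Hardy_High} in $\mathbb{R}^3$: for $f(x)=e^{-\abs{x}^2}$ the left side reduces to one-dimensional quadrature against the measure $4\pi r^2\,dr$, and it stays below $(2/(n-2))^2\norm{\nabla f}_2^2$, the sharp form of the inequality. The check is ours and purely illustrative.

```python
import numpy as np

# Hardy in R^3: int |f|^2/|x|^2 <= (2/(n-2))^2 int |grad f|^2, sharp constant 4.
# Radial test function f(x) = exp(-|x|^2); quadrature in r with weight 4*pi*r^2.
r = np.linspace(1e-6, 10.0, 100001)
dr = r[1] - r[0]
f = np.exp(-r**2)
df = -2.0 * r * np.exp(-r**2)
sphere = 4.0 * np.pi * r**2
lhs = np.sum(f**2 / r**2 * sphere) * dr      # int |f|^2 / |x|^2 dx
grad2 = np.sum(df**2 * sphere) * dr          # int |grad f|^2 dx
print(lhs, 4.0 * grad2)
```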
For $n\ge 3$, we apply Hardy's inequality with the weight $\abs{x+he_n}^{-2}$ to get \begin{align*} \abs{E_2}&\le (\int_\Omega \abs{\nabla\gamma^\frac{1}{2}-\nabla\gamma^\frac{1}{2}(0)}^2\abs{x+he_n}^2\abs{\nabla u_h}^2)^\frac{1}{2}\norm{\phi}_{H^1} \\ &\le \Big(\sum_{h\le\lambda\le 1}\lambda^{-n+2}\frac{1}{\lambda^n}\int_{B_\lambda\cap\Omega}\abs{\nabla\gamma^\frac{1}{2}-\nabla\gamma^\frac{1}{2}(0)}^2 + 1 \Big)^\frac{1}{2}\norm{\phi}_{H^1} \\ &= \norm{\phi}_{H^1}O(h^{-\frac{n}{2}+1}). \end{align*} For $n=2$ we get $\abs{E_2}=\norm{\phi}_{H^1}O(\log h^{-1})$. Hence, we conclude that \begin{equation*} h^n\int \gamma \nabla r_h\cdot\nabla v_h = o(h). \end{equation*} With this and \eqref{eq:Derivative_Main_Part} we get Theorem~\ref{thm:recovery}(B). \section{Lebesgue Points at the Boundary} \label{sec:Approximation} In this section we prove Theorem~\ref{thm:Boundary_Approximation}. The main tool to control the value of a function at the boundary is the next theorem. \begin{theorem}\label{thm:difference_control} If $f\in B^{s,p}(\mathbb{R}^n)$ for $1+\frac{1}{p}>s>\frac{1}{p}$, and $\Gamma$ is the graph of a Lipschitz function, then for $0\le\alpha<s-\frac{1}{p}$ it holds that \begin{equation}\label{eq:difference_control} \Big(\int_\Gamma\int_{\abs{x-y}\le 1}\frac{\abs{f(x)-f(y)}^q}{\abs{x-y}^{n+\alpha q}}\,dx d\Gamma(y)\Big)^\frac{1}{q} \le C_\alpha\norm{f}_{s,p},\quad \text{where } 1\le q\le p<\infty. \end{equation} \end{theorem} As a consequence of this theorem we get Theorem~\ref{thm:Boundary_Approximation}, which we restate and prove here. \begin{customthm}{\ref{thm:Boundary_Approximation}} {\it Suppose that $\Omega\subset\mathbb{R}^n$ is a domain with Lipschitz boundary. If $f\in B^{s,p}(\Omega)$ for $1+\frac{1}{p}>s>\frac{1}{p}$, then for $0\le\alpha<s-\frac{1}{p}$ and for $y\in\partial\Omega$ a.e. 
it holds that \begin{equation} \Big(\frac{1}{r^n}\int_{B_r(y)\cap\Omega}\abs{f(x)-f(y)}^q\,dx\Big)^\frac{1}{q} \le Cr^\alpha,\quad \text{where } r\le 1 \text{ and } 1\le q\le p. \end{equation} The constant $C$ depends on $y$. } \end{customthm} \begin{proof} By definition there is some $g\in B^{s,p}(\mathbb{R}^n)$ that extends $f$, and \begin{equation} \int_{B_r(y)\cap\Omega}\abs{f(x)-f(y)}^q\,dx\le \int_{B_r(y)}\abs{g(x)-g(y)}^q\,dx. \end{equation} We divide the boundary into pieces $\Gamma\subset\partial\Omega$, where $\Gamma$ is the graph of a Lipschitz function. Since \begin{equation*} \frac{1}{r^{n+\alpha q}}\int_{B_r(y)}\abs{g(x)-g(y)}^q\,dx \le \int_{\abs{x-y}\le 1}\frac{\abs{g(x)-g(y)}^q}{\abs{x-y}^{n+\alpha q}}\,dx, \end{equation*} and since the term at the right is finite for $y\in\Gamma\subset\partial\Omega$ a.e. by Theorem~\ref{thm:difference_control}, then we have that \begin{equation*} \frac{1}{r^n}\int_{B_r(y)}\abs{g(x)-g(y)}^q\,dx\le Cr^{\alpha q}, \end{equation*} and the statement of the theorem follows. \end{proof} In Theorem~\ref{thm:difference_control} we assumed implicitly that $f\in B^{s,p}(\mathbb{R}^n)$, for $s>1/p$, is well defined on $\Gamma$, but this set has measure zero, so this needs some justification. Let $f=\sum_{\lambda\ge 1}P_\lambda f$ be a Littlewood-Paley decomposition, where $(P_\lambda f)^\wedge:=m_\lambda \widehat{f}$, and $m_\lambda(\xi):=m(\xi/\lambda)$ for some smooth multiplier $m$ supported in frequencies $\abs{\xi}\sim 1$; for low frequencies we take a function $m_1$ supported in $\abs{\xi}\lesssim 1$. We choose the representative of $f$ given by $\lim_{M\to\infty}P_{\le M}f(x):= \lim_{M\to\infty}\sum_{1\le \lambda\le M} P_\lambda f(x)$. The following lemma justifies this choice. \begin{lemma} Suppose that $\Gamma$ is the graph of a Lipschitz function. If $f\in B^{s,p}(\mathbb{R}^n)$ for $s>1/p$, then $\lim_{M\to\infty}P_{\le M}f(y)$ exists for $y\in\Gamma$ a.e.
\end{lemma} \begin{proof} The set of divergence is \begin{equation*} \{y\in\Gamma\mid \limsup_{N\le M\to\infty}\abs{P_{N\le M}f(y)}>0\}=\bigcup_{\mu>0}\{y\in\Gamma\mid \limsup_{N\le M\to\infty}\abs{P_{N\le M}f(y)}>\mu\}, \end{equation*} then it suffices to prove that each set at the right has measure zero. For each one of these sets and for every $A>0$ we have that \begin{equation*} \{y\in\Gamma\mid \limsup_{N\le M\to\infty}\abs{P_{N\le M}f(y)}>\mu\}\subset \{y\in\Gamma\mid \sup_{A\le N\le M}\abs{P_{N\le M}f(y)}>\mu\}, \end{equation*} so we only need to show that the sets at the right are as small as we please if we choose $A\gg 1$. We bound their measure as \begin{align*} \abs{\{\sup_{A\le N\le M}\abs{P_{N\le M}f(y)}>\mu\}}&\le\abs{\{\sum_{\lambda\ge A}\abs{P_\lambda f(y)}>\mu\}} \\ &\le \frac{1}{\mu^p}\norm{\sum_{\lambda\ge A}\abs{P_\lambda f}}_{L^p(\Gamma)}^p. \end{align*} We use the triangle inequality, the trace inequality $\norm{P_\lambda f}_{L^p(\Gamma)}\le C\lambda^\frac{1}{p}\norm{f}_{L^p(\mathbb{R}^n)}$, which we will prove in Lemma~\ref{lemm:trace_bound} below, and Hölder to bound the last term as \begin{equation*} \abs{\{\sup_{A\le N\le M}\abs{P_{N\le M}f(y)}>\mu\}}\le C\frac{A^{1-sp}}{\mu^p}\norm{f}_{s,p}^p. \end{equation*} Since $1-sp<0$, then the right hand side goes to zero as $A\to \infty$. \end{proof} \begin{lemma}[The Trace Inequality]\label{lemm:trace_bound} Suppose that $m_\lambda(\xi):=m(\xi/\lambda)$ is a smooth multiplier supported in frequencies $\abs{\xi}\sim \lambda$, and that $(P_\lambda f)^\wedge=m_\lambda\hat{f}$ is the associated projection. If $\Gamma$ is the graph of a Lipschitz function, then \begin{equation}\label{eq:lemm_trace_bound} \norm{P_\lambda f}_{L^q(\Gamma)}\le C\lambda^\frac{1}{p}\norm{f}_{L^p(\mathbb{R}^n)},\quad \text{for } 1\le q\le p. \end{equation} \end{lemma} \begin{proof} We interpolate between $(q,p)=(\infty,\infty)$ and $=(1,r)$ for $r=p/q$. 
For the first point we have that \begin{equation*} \norm{P_\lambda f}_{L^\infty(\Gamma)}\le\norm{\widecheck{m}_\lambda}_1\norm{f}_\infty = C\norm{f}_{L^\infty(\mathbb{R}^n)}, \end{equation*} where $C$ does not depend on $\lambda$. For the point $(q,p)=(1,r)$ we have that \begin{align}\label{eq:lemm_trace_measure} \norm{P_\lambda f}_{L^1(\Gamma)} &\le \int\abs{f(z)}\int_\Gamma \abs{\widecheck{m}_\lambda(y-z)}\,d\Gamma(y)dz \notag \\ &\le \norm{f}_{L^r(\mathbb{R}^n)}\norm{\int_\Gamma \abs{\widecheck{m}_\lambda(y-z)}\,d\Gamma(y)}_{L_z^{r'}}. \end{align} By the smoothness of $m$ we have that $\abs{\widecheck{m}_\lambda}\le C\lambda^n\sum_{\mu\ge \lambda^{-1}} (\mu\lambda)^{-N}\mathds{1}_{B_\mu}$, where $N\gg 1$. Then \begin{equation*} \norm{\int_\Gamma \abs{\widecheck{m}_\lambda(y-z)}\,d\Gamma(y)}_{r'}\le \lambda^n\sum_{\mu\ge \lambda^{-1}}(\mu\lambda)^{-N}\norm{\int \mathds{1}_{B_\mu}(y-z)d\Gamma(y)}_{r'}, \end{equation*} and we define the functions $G_\mu(z):=\int \mathds{1}_{B_\mu}(y-z)d\Gamma(y)=\abs{B_\mu(z)\cap\Gamma}$; we use here the induced measure in $\Gamma$. If $N_\mu(\Gamma)$ denotes the $\mu$-neighborhood of $\Gamma$, then we have the following estimates \begin{equation*} G_\mu(z)\lesssim \begin{cases} \mu^{n-1}\mathds{1}_{N_\mu(\Gamma)} &\text{if } \mu\le \text{diam}(\Gamma) \\ \mathds{1}_{N_\mu(\Gamma)} &\text{otherwise}. \end{cases} \end{equation*} Hence, \begin{align*} \norm{\int_\Gamma \abs{\widecheck{m}_\lambda(y-z)}\,d\Gamma(y)}_{r'}&\le \lambda^n\sum_{\mu\ge \lambda^{-1}}(\mu\lambda)^{-N}\norm{G_\mu}_{r'} \\ &\le C\lambda^n\Big(\sum_{\lambda^{-1}\le\mu \le \text{diam}(\Gamma)}(\mu\lambda)^{-N}\mu^{n-\frac{1}{r}} + \sum_{\text{diam}(\Gamma)\le\mu}(\mu\lambda)^{-N}\mu^\frac{n}{r'}\Big)\\ &\le C\lambda^\frac{1}{r}. \end{align*} We replace this bound in \eqref{eq:lemm_trace_measure} to get $\norm{P_\lambda f}_{L^1(\Gamma)}\le C\lambda^\frac{1}{r}\norm{f}_r$, which concludes the proof. 
\end{proof} Now we are ready to prove Theorem~\ref{thm:difference_control}. \begin{proof}[Proof of Theorem~\ref{thm:difference_control}] We follow the arguments in \cite[chp. 5]{MR0290095}. After a change of variables we can write \eqref{eq:difference_control} as \begin{equation}\label{eq:modulus_norm} \Big(\int_{\abs{x}\le 1}\frac{\norm{f(x+y)-f(y)}_{L_y^q(\Gamma)}^q}{\abs{x}^{n+\alpha q}}\,dx\Big)^\frac{1}{q} \le C_\alpha\norm{f}_{s,p}. \end{equation} To estimate $\norm{f(x+y)-f(y)}_{L_y^q(\Gamma)}$ we write the difference as \begin{multline*} f(x+y)-f(y)=(f(x+y) - P_{\le M}f(x+y))+ \\ +(P_{\le M}f(x+y)-P_{\le M}f(y))+(P_{\le M}f(y)-f(y)). \end{multline*} For the first term $f(x+y) - P_{\le M}f(x+y)=P_{>M}f(x+y)$, we start by applying the triangle inequality to get \begin{equation*} \norm{P_{>M}f(x+y)}_{L_y^q(\Gamma)}\le \sum_{\lambda>M}\norm{P_\lambda f(x+y)}_{L_y^q(\Gamma)}; \end{equation*} now we use the trace inequality $\norm{P_\lambda g}_{L^q(\Gamma)}\le C\lambda^\frac{1}{p}\norm{g}_{L^p(\mathbb{R}^n)}$ in Lemma~\ref{lemm:trace_bound} to get \begin{equation}\label{eq:high_frequencies} \norm{P_{>M}f(x+y)}_{L_y^q(\Gamma)}\le C\sum_{\lambda>M}\lambda^\frac{1}{p}\norm{P_\lambda f}_{L^p(\mathbb{R}^n)}\le CM^{\frac{1}{p}-s}\norm{f}_{s,p}. \end{equation} We estimate the difference $P_{\le M}f(y)-f(y)$ in the same way. For the difference $P_{\le M}f(x+y)-P_{\le M}f(y)$ we use the smoothness of the projection to write \begin{equation*} P_{\le M}f(x+y)-P_{\le M}f(y)=x\cdot\int_0^1\nabla P_{\le M}f(tx+y)\,dt. \end{equation*} The multiplier of $\partial_iP_\lambda$ is $\xi_im(\xi/\lambda)=\lambda\tilde{m}(\xi/\lambda)$, where $\tilde{m}(\xi):=\xi_im(\xi)$ is a smooth function supported in $\abs{\xi}\sim 1$. 
By Minkowski and the trace inequality we have that \begin{align}\label{eq:low_frequencies} \norm{P_{\le M}f(x+y)-P_{\le M}f(y)}_{L_y^q(\Gamma)}&\le\abs{x}\sum_{\lambda\le M}\lambda\int_0^1\norm{\tilde{P}_{\lambda}f(tx+y)}_{L_y^q(\Gamma)}\,dt \notag \\ &\le \abs{x}\sum_{\lambda\le M}\lambda^{1+\frac{1}{p}}\norm{P_{\lambda}f}_{L^p(\mathbb{R}^n)} \notag \\ &\le C\abs{x}M^{1+\frac{1}{p}-s}\norm{f}_{s,p}. \end{align} We bound $\norm{f(x+y)-f(y)}_{L_y^q(\Gamma)}$ using \eqref{eq:high_frequencies} and \eqref{eq:low_frequencies} to get \begin{align*} \norm{f(x+y)-f(y)}_{L_y^q(\Gamma)}&\le C(M^{\frac{1}{p}-s}+\abs{x}M^{1+\frac{1}{p}-s})\norm{f}_{s,p} \\ &\le C\abs{x}^{s-\frac{1}{p}}\norm{f}_{s,p}; \end{align*} we obtained the last inequality by choosing $M\sim\abs{x}^{-1}$. We insert this bound into the term at the left of \eqref{eq:modulus_norm} to get \begin{equation*} \Big(\int_{\abs{x}\le 1}\frac{\norm{f(x+y)-f(y)}_{L_y^q(\Gamma)}^q}{\abs{x}^{n+\alpha q}}\,dx\Big)^\frac{1}{q} \le C\Big(\int_{\abs{x}\le 1}\abs{x}^{-n+q(s-\frac{1}{p}-\alpha)}\,dx\Big)^\frac{1}{q}\norm{f}_{s,p}, \end{equation*} and the last integral is bounded whenever $\alpha<s-\frac{1}{p}$. \iffalse For $s=1$ we follow the method in \cite{MR1616905}. By a limiting argument we can assume that $f$ is smooth. We write the difference as \begin{equation*} \abs{f(x+y)-f(y)}^q \le q\abs{x}\int_0^1\abs{f(y+x)-f(y)}^{q-1}\abs{\nabla f(tx+y)}\,dt. \end{equation*} We integrate in the variable $x$ to get \begin{align*} \int_{\abs{x}\le 1}\frac{\abs{f(x+y)-f(y)}^q}{\abs{x}^{n+\alpha q}}\,x &\le q\int_0^1\int_{\abs{x}\le 1}\abs{f(y+x)-f(y)}^{q-1}\abs{\nabla f(tx+y)}\frac{dx}{\abs{x}^{n+\alpha q -1}}\,dt \\ &\le C\int \abs{f(y+x)-f(y)}^{q-1}\frac{\abs{\nabla f(x+y)}}{\abs{x}^{n+\alpha q -1}} \int_{\abs{x}}^1t^{\alpha q-1}\,dt dx \\ &\le C\int_{\abs{x}\le 1} \abs{f(y+x)-f(y)}^{q-1}\frac{\abs{\nabla f(x+y)}}{\abs{x}^{n+\alpha q -1}} dx. 
\end{align*} We write $\abs{x}^{n+\alpha q-1}$ as $\abs{x}^{(n+\alpha q)/q'}\abs{x}^{(n+\alpha q)/q-1}$ and apply Hölder to find \begin{equation*} \int_{\abs{x}\le 1}\frac{\abs{f(x+y)-f(y)}^q}{\abs{x}^{n+\alpha q}}\,dx \lesssim \Big(\int_{\abs{x}\le 1}\frac{\abs{f(y+x)-f(y)}^q}{\abs{x}^{n+\alpha q}}\,dx\Big)^\frac{1}{q'}\Big(\int_{\abs{x}\le 1} \frac{\abs{\nabla f(x+y)}^q}{\abs{x}^{n+(\alpha-1)q}}\,dx\Big)^\frac{1}{q}. \end{equation*} Now we integrate in $\Gamma$, apply Hölder and cancel to get \begin{align*} \Big(\int_\Gamma\int_{\abs{x}\le 1}\frac{\abs{f(x+y)-f(y)}^q}{\abs{x}^{n+\alpha q}}\,dx d\Gamma\Big)^\frac{1}{q}&\le C\Big(\int_\Gamma \int_{\abs{x}\le 1} \frac{\abs{\nabla f(x+y)}^q}{\abs{x}^{n+(\alpha-1)q}}\,dxd\Gamma\Big)^\frac{1}{q} \\ &= \Big(\int\abs{\nabla f(x)}^q\int_\Gamma \mathds{1}_{\{\abs{x-y}\le 1\}}\frac{d\Gamma(y)}{\abs{x-y}^{n+(\alpha -1)q}}\,dx\Big)^\frac{1}{q}. \end{align*} We use Hölder again with $(p/q)\ge 1$ and $r=(p/q)'$ to get \begin{equation*} \Big(\int_\Gamma\int_{\abs{x}\le 1}\frac{\abs{f(x+y)-f(y)}^q}{\abs{x}^{n+\alpha q}}\,dx d\Gamma\Big)^\frac{1}{q} \le \norm{f}_{1,p}\norm{\int_\Gamma \mathds{1}_{\{\abs{x-y}\le 1\}}\frac{d\Gamma(y)}{\abs{x-y}^{n+(\alpha -1)q}}}_{r}. \end{equation*} To bound the norm of the function $x\mapsto \int_\Gamma \mathds{1}_{\{\abs{x-y}\le 1\}}\frac{d\Gamma(y)}{\abs{x-y}^{n+(\alpha -1)q}}$ we write it as \begin{equation*} \int_\Gamma \mathds{1}_{\{\abs{x-y}\le 1\}}\frac{d\Gamma(y)}{\abs{x-y}^{n+(\alpha -1)q}}\le \sum_{\lambda\le 1}\lambda^{-n-(\alpha -1)q}\int_\Gamma \mathds{1}_{B_\lambda(x)}\,d\Gamma(y). \end{equation*} Each summand is a function supported in a $\lambda$-neighborhood of $\Gamma$, and has value $\lambda^{-n-(\alpha -1)q}\abs{B_\lambda(x)\cap\Gamma}\le C\lambda^{-(\alpha -1)q-1}$. 
Since $N_\lambda(\Gamma)$, the $\lambda$-neighborhood of $\Gamma$, satisfies $\abs{N_\lambda(\Gamma)}\le C\lambda$, then \begin{align*} \norm{\int_\Gamma \mathds{1}_{\{\abs{x-y}\le 1\}}\frac{d\Gamma(y)}{\abs{x-y}^{n+\alpha q-q}}}_{r}&\le C\sum_{\lambda\le 1} \lambda^{-(\alpha -1)q -1}\abs{N_\lambda(\Gamma)}^\frac{1}{r} \\ &\le C\sum_{\lambda\le 1}\lambda^{(-\alpha + 1 -\frac{1}{p})q}; \end{align*} in the last inequality we used $r=(p/q)'$. If $\alpha<1-\frac{1}{p}$ then the sum converges and we conclude that \begin{equation*} \Big(\int_\Gamma\int_{\abs{x}\le 1}\frac{\abs{f(x+y)-f(y)}^q}{\abs{x}^{n+\alpha q}}\,dx\Big)^\frac{1}{q} \le C\norm{f}_{1,p}. \end{equation*} \fi \end{proof} In Section~\ref{sec:Recovery} we needed also the following result. \begin{corollary}\label{cor:difference_control_square} Suppose that $2\le p<\infty$. If $f\in B^{s,p}(\mathbb{R}^n)$ for $1+\frac{1}{p}>s>\frac{1}{p}$, and $\Gamma$ is the graph of a Lipschitz function, then for $0\le\alpha<s-\frac{1}{p}$ it holds that \begin{equation}\label{eq:difference_control_square} \int_\Gamma\int_{\abs{x-y}\le 1}\frac{\abs{f^2(x)-f^2(y)}}{\abs{x-y}^{n+\alpha}}\,dx d\Gamma(y) \le C_\alpha\norm{f}_{s,p}^2. \end{equation} Consequently \begin{equation}\label{eq:cor_Square_approach} \frac{1}{r^n}\int_{B_r(y)\cap\Omega}\abs{f^2(x)-f^2(y)}\,dx\le C_y r^\alpha \quad\text{for } y\in\partial\Omega \text{ a.e.} \end{equation} \end{corollary} \begin{proof} We write again the expected estimate as \begin{equation*} \int_{\abs{x}\le 1}\frac{\norm{f^2(x+y)-f^2(y)}_{L_y^1(\Gamma)}}{\abs{x}^{n+\alpha }}\,dx\le C_\alpha\norm{f}_{s,p}^2. \end{equation*} By Hölder and by the trace theorem we can bound the difference as \begin{equation*} \norm{f^2(x+y)-f^2(y)}_{L_y^1(\Gamma)}\le 2\norm{f}_{s,p}\norm{f(x+y)-f(y)}_{L_y^{p'}(\Gamma)}. 
\end{equation*} Then by Hölder and by Theorem~\ref{thm:difference_control} we get \begin{align*} \int_{\abs{x}\le 1}\frac{\norm{f^2(x+y)-f^2(y)}_{L_y^1(\Gamma)}}{\abs{x}^{n+\alpha }}\,dx &\le C\norm{f}_{s,p}\int_{\abs{x}\le 1}\frac{\norm{f(x+y)-f(y)}_{L_y^{p'}(\Gamma)}}{\abs{x}^{n+\alpha}}\,dx \\ &\le C\norm{f}_{s,p}\Big(\int_{\abs{x}\le 1}\frac{\norm{f(x+y)-f(y)}_{L_y^{p'}(\Gamma)}^{p'}}{\abs{x}^{n+(\alpha+\varepsilon)p'}}\,dx\Big)^\frac{1}{p'} \\ &\le C\norm{f}_{s,p}^2, \end{align*} where $\varepsilon\ll 1$. This is inequality \eqref{eq:difference_control_square}, and \eqref{eq:cor_Square_approach} follows. \end{proof} \iffalse We also need the special case in inequality ?, and the argument is essentially the same, so we only sketch the analogue of the main Theorem~\ref{thm:difference_control}. \begin{theorem}\label{thm:difference_control_square} If $f\in B^{s,p}(\mathbb{R}^n)$, where $1+\frac{1}{p}>s>\frac{1}{p}$, and $\Gamma$ is the graph of a Lipschitz function, then for $0\le\alpha<s-\frac{1}{p}$ it holds that \begin{equation}\label{eq:difference_control_square} \int_\Gamma\int_{\abs{x-y}\le 1}\frac{\abs{f^2(x)-f^2(y)}}{\abs{x-y}^{n+\alpha }}\,dx d\Gamma(y) \le C\norm{f}_{s,p}^2,\quad \text{where } p\ge 2. \end{equation} \end{theorem} \begin{proof} We have to get the bound \begin{equation}\label{eq:modulus_norm_square} \int_{\abs{x}\le 1}\frac{\norm{f^2(x+y)-f^2(y)}_{L_y^1(\Gamma)}}{\abs{x}^{n+\alpha}}\,dx \le C\norm{f}_{s,p}^2. \end{equation} For the term $f^2 - (P_{\le M}f)^2=2(P_{\le M}f)(P_{>M}f)+(P_{>M}f)^2$, we use the trace inequality and Cauchy-Schwarz to get \begin{equation}\label{eq:high_f_square} \norm{f^2- (P_{\le M}f)^2}_{L_y^1(\Gamma)}\le CM^{\frac{1}{p}-s}\norm{f}_{s,p}^2; \end{equation} For the term $(P_{\le M}f)^2(x+y)-(P_{\le M}f)^2(y)$ we have that \begin{equation*} (P_{\le M}f)^2(x+y)-(P_{\le M}f)^2(y)=2x\cdot\int_0^1P_{\le M}f\,\nabla P_{\le M}f(tx+y)\,dt. 
\end{equation*} By Minkowski, Cauchy-Schwarz and the trace inequality we get \begin{align}\label{eq:low_f_square} \norm{(P_{\le M}f)^2(x+y)-(P_{\le M}f)^2(y)}_{L_y^1(\Gamma)}&\le \notag \\ &\hspace*{-3cm}\abs{x}\int\norm{P_{\le M}f(tx+y)}_{L_y^2(\Gamma)}\norm{\nabla P_{\le M}f(tx+y)}_{L_y^2(\Gamma)}\,dt \notag \\ &\le C\abs{x}M^{1+\frac{1}{p}-s}\norm{f}_{s,p}^2. \end{align} We bound $\norm{f^2(x+y)-f^2(y)}_{L_y^1(\Gamma)}$ using \eqref{eq:high_f_square} and \eqref{eq:low_f_square} to get \begin{align*} \norm{f^2(x+y)-f^2(y)}_{L_y^1(\Gamma)}&\le C(M^{\frac{1}{p}-s}+\abs{x}M^{1+\frac{1}{p}-s})\norm{f}^2_{s,p} \\ &\le C\abs{x}^{s-\frac{1}{p}}\norm{f}^2_{s,p}, \end{align*} where we chose $M\sim\abs{x}^{-1}$. We insert this bound into the term at the left of \eqref{eq:modulus_norm_square} to get \begin{equation*} \int_{\abs{x}\le 1}\frac{\norm{f^2(x+y)-f^2(y)}_{L_y^1(\Gamma)}}{\abs{x}^{n+\alpha}}\,dx \le C\int_{\abs{x}\le 1}\abs{x}^{-n+s-\frac{1}{p}-\alpha}\,dx\norm{f}_{s,p}^2, \end{equation*} and the last integral is bounded whenever $\alpha<s-\frac{1}{p}$. \end{proof} \fi \subsection*{Notations} \begin{itemize} \item Various: $B_r$ is a ball of radius $r$, usually centered at zero, and we make the center explicit by writing $B_r(y)$. $H_+$ is the upper half plane. When we write $\sum_{\lambda}$, we are summing over dyadic numbers $\lambda=2^k$ for $k$ integer. \item If $E$ is a set in some measure space, then $\abs{E}$ denotes the size of $E$ for the corresponding measure. \item We write $A=O(B)$, or $A\lesssim B$, if $A\le CB$ for some $C>0$; if $cB\le A\le CB$ for some constants $0<c,C$, then $A\sim B$. We write $A\ll 1$ if $A$ is sufficiently small. We write $A=o(B)$ if $A$ and $B$ are functions of $h$ and $\lim_{h\to 0}A/B=0$.
\item Projections: for a dyadic number $\lambda$ we define the projection $(P_\lambda f)^\wedge = m_\lambda\widehat{f}$, where $m_\lambda(\xi)=m(\xi/\lambda)$ and $m$ is a smooth multiplier supported in frequencies $\abs{\xi}\sim 1$; for $\lambda=1$ we take instead $m_1$ supported in $\abs{\xi}\lesssim 1$. \item Function spaces: Let $1< p<\infty$. We denote by $B^{s,p}(\mathbb{R}^n)$ the Besov space of distributions $f$ for which \begin{equation*} \norm{f}_{B^{s,p}}^p := \norm{P_1f}_p^p + \sum_{\lambda\ge 1}\lambda^{sp}\norm{P_\lambda f}^p_p <\infty. \end{equation*} The Sobolev-Slobodeskij space $W^{s,p}(\mathbb{R}^n)$ equals $B^{s,p}(\mathbb{R}^n)$ for $s\neq$ integer. For $s$ integer, $W^{s,p}(\mathbb{R}^n)$ is the space of distributions $f$ for which \begin{equation*} \norm{f}_{W^{s,p}} := \sum_{\abs{\alpha}\le s}\norm{D^\alpha f}_p <\infty. \end{equation*} We set $H^s(\mathbb{R}^n):=W^{s,2}(\mathbb{R}^n)$. The spaces $B^{s,p}(\Omega)$, $W^{s,p}(\Omega)$ and $H^s(\Omega)$ are defined by restriction of functions in $\mathbb{R}^n$ to $\Omega$. The space $H^s_0(\Omega)$ is the completion in the norm $H^s(\mathbb{R}^n)$ of test functions compactly supported in $\Omega$. \end{itemize} \iffalse \begin{equation*} Mf(y):=\sup_{0<r<c}\frac{1}{\abs{B_r(y)\cap \Omega}}\int_{B_r(y)\cap \Omega}\abs{f(x)}\,dx\quad\text{for } y\in\partial\Omega, \end{equation*} and we want to know if $M_r(f-f(y))(y)\to 0$ as $r\to 0$ a.e. for $f\in W^{s,p}(\Omega)$ and $s>1/p$. 
Take $\gamma_\varepsilon\in C^\infty$ such that $\norm{\gamma-\gamma_\varepsilon}_{s,p}<\varepsilon$, then \begin{multline*} \abs{\{y\mid \frac{1}{\abs{B_h\cap \Omega}}\int_{B_r(y)\cap \Omega}\abs{\gamma(x)-\gamma(y)}\,dx>\lambda\}}\le \\ \abs{\{\frac{1}{\abs{B_r\cap \Omega}}\int_{B_r(y)\cap \Omega}\abs{\gamma(x)-\gamma_\varepsilon(x)}\,dx>\frac{\lambda}{3}\}}+ \\ +\abs{\{\frac{1}{\abs{B_r\cap \Omega}}\int_{B_r(y)\cap \Omega}\abs{\gamma_\varepsilon(x)-\gamma_\varepsilon(y)}\,dx>\frac{\lambda}{3}\}} +\\ +\abs{\{\abs{\gamma_\varepsilon(y)-\gamma(y)}>\frac{\lambda}{3}\}}. \end{multline*} The second term tends to zero as $r\to 0$. The third term is controlled by $\norm{\gamma_\varepsilon-\gamma}_{L^p(\partial\Omega)}\le C\norm{\gamma_\varepsilon-\gamma}_{s,p}\le C\varepsilon$. For the first, we prove that $\norm{Mf}_p\le C\norm{f}_{s,p}$. \begin{equation*} \abs{\int \gamma\nabla r_h\cdot\nabla v_h\,dx}\le C\norm{\zeta_{B_{3h}}\nabla r_h}_2\norm{\nabla v_h}_2\le Ch^\frac{n}{2}\norm{\zeta_{B_{3h}}\nabla r_h}_2. \end{equation*} To estimate the last term we define the function $w_h\in H^1_0(\Omega)$ such that $\div{\gamma\nabla w_h}=\div{\zeta_{B_{3h}}^2\nabla r_h}$, then on one hand \begin{equation*} \int \gamma \nabla r_h\cdot \nabla w_h = \int (\zeta_{B_{3h}}\nabla r_h)\cdot(\zeta_{B_{3h}}\nabla r_h) = \norm{\zeta_{B_{3h}}\nabla r_h}_2^2. \end{equation*} On the other hand \begin{align*} \abs{\int \gamma \nabla r_h\cdot \nabla w_h} &= \abs{\int \gamma \nabla (\zeta_{B_{4h}}(u_{0,h}-q_h))\cdot \nabla w_h} \\ &\le \norm{\div{\gamma \nabla (\zeta_{B_{4h}} (u_{0,h}-q_h))}}_{H^{-1}}\norm{\nabla w_h}_2. \end{align*} The identity comes from the fact that $(1-\zeta_{B_{4h}})(u_{0,h}-q_h)\in H^1_0(\Omega)$ and $\div{\gamma \nabla w_h}=0$ outside the support of $\zeta_{B_{3h}}$. 
To estimate $\norm{\nabla w_h}_2$ we do \begin{equation*} \norm{\nabla w_h}_2^2\le C\int \gamma \nabla w_h\cdot\nabla w_h\le C\int \zeta_{B_{3h}}^2\nabla r_h\cdot\nabla w_h\le \norm{\zeta_{B_{3h}}\nabla r_h}_2\norm{\nabla w_h}_2, \end{equation*} then $\norm{\nabla w_h}_2\le \norm{\zeta_{B_{3h}}\nabla r_h}_2$. We join all the inequalities to get \begin{equation*} \norm{\zeta_{B_{3h}}\nabla r_h}_2^2 \le \norm{\div{\gamma \nabla (\zeta_{B_{4h}} (u_{0,h}-q_h))}}_{H^{-1}}\norm{\zeta_{B_{3h}}\nabla r_h}_2, \end{equation*} then we conclude that \begin{equation*} \norm{\zeta_{B_{3h}}\nabla r_h}_2\le \norm{\div{\gamma \nabla (\zeta_{B_{4h}} (u_{0,h}-q_h))}}_{H^{-1}}. \end{equation*} So we have for the error term that \begin{equation*} \abs{\int \gamma\nabla r_h\cdot\nabla v_h\,dx}\le Ch^\frac{n}{2}\norm{\div{\gamma \nabla (\zeta_{B_{4h}} (u_{0,h}-q_h))}}_{H^{-1}}. \end{equation*} We want to show that it goes to zero. To estimate the last term we do \begin{align*} \abs{\int \gamma \nabla(\zeta_{B_{4h}}(u_{0,h}-q_h))\cdot \nabla\phi}&\le \abs{\int (\gamma-\gamma(0)) \nabla(\zeta_{B_{4h}}(u_{0,h}-q_h))\cdot \nabla\phi}+ \\ &+\abs{\gamma(0)\int \nabla(\zeta_{B_{4h}}(u_{0,h}-q_h))\cdot \nabla\phi} \\ &\le \text{I}+\text{II} \end{align*} To estimate I we integrate by parts so that it becomes \begin{equation*} I\le \abs{\int (u_{0,h}-q_h)\phi \Delta \zeta_{B_{4h}}} + 2\abs{\int \nabla\zeta_{B_{4h}}\cdot \nabla (u_{0,h}-q_h)\phi} \end{equation*} To bound this term we do for $\rho = -h\log h$ (there are other possibilities) and taking into account that $u_{0,h}$ is harmonic so that $\int \nabla u_{0,h}\cdot\nabla\phi=0$, and so \begin{align*} \abs{\int (\gamma-\gamma(0))\nabla u_{0,h}\cdot \nabla \phi} &\le \abs{\int_{B_\rho^c\cap\Omega}(\gamma-\gamma(0))\nabla u_{0,h}\cdot \nabla \phi}+\\ &\hspace*{3cm}+\abs{\int_{B_\rho\cap\Omega}(\gamma-\gamma(0))\nabla u_{0,h}\cdot \nabla \phi} \\ &\le C\norm{\nabla u_{0,h}}_{L^2(B_R\backslash B_\rho^c)}\norm{\nabla \phi}_2+\\ 
&\hspace*{2cm}+\Big(\int_{B_\rho\cap\Omega} \abs{(\gamma-\gamma(0)) \nabla u_{0,h}}^2\Big)^\frac{1}{2}\norm{\nabla \phi}_2 \\ &\le C\Big[\rho^{-\frac{n}{2}}+h^{-\frac{n}{2}}\Big(\frac{1}{h^n}\int_{B_\rho\cap\Omega} \abs{\gamma-\gamma(0)}^2\Big)^\frac{1}{2}\Big]\norm{\nabla \phi}_2 \end{align*} The term $h^\frac{n}{2}\rho^{-\frac{n}{2}}=(-\log h)^{-\frac{n}{2}}$ goes to zero. For the other we get \begin{equation*} \frac{(-\log h)^n}{\rho^n}\int_{B_\rho\cap\Omega} \abs{\gamma-\gamma(0)}^2 \end{equation*} We approximate $\gamma^\frac{1}{2}$ as $\gamma^\frac{1}{2}(0)+\nabla\gamma^\frac{1}{2}(0)\cdot x$ so that \begin{align*} \int \gamma\nabla u_{1,h}\cdot \nabla v_h &= -\int u_{0,h}\nabla\gamma^\frac{1}{2}(0)\cdot \nabla v_h + \int (\gamma^\frac{1}{2}(0)+\nabla\gamma^\frac{1}{2}(0)\cdot x)\nabla u_{0,h}\cdot \nabla v_h -\\ &-\int u_{0,h}(\nabla\gamma^\frac{1}{2}-\nabla\gamma^\frac{1}{2}(0))\cdot \nabla v_h + \\ &+\int (\gamma^\frac{1}{2}-\gamma^\frac{1}{2}(0)-\nabla\gamma^\frac{1}{2}(0)\cdot x)\nabla u_{0,h}\cdot \nabla v_h \\ &=c\gamma^\frac{1}{2}(0)+\int(\nabla\gamma^\frac{1}{2}(0)\cdot x)\nabla u_{0,h}\cdot \nabla v_h -\int u_{0,h}\nabla\gamma^\frac{1}{2}(0)\cdot \nabla v_h +\\ &+h^{-1}O\Big(\frac{1}{h^n}\int_{B_h\cap\Omega} \abs{\nabla\gamma^\frac{1}{2}-\nabla\gamma^\frac{1}{2}(0)}\Big)+\\ &+O\Big(\frac{1}{h^n}\int_{B_h\cap\Omega} \abs{\gamma^\frac{1}{2}-\gamma^\frac{1}{2}(0)-\nabla\gamma^\frac{1}{2}(0)\cdot x}\Big) \end{align*} For the derivative term at zero we have \begin{align*} \int(\nabla\gamma^\frac{1}{2}(0)\cdot x)\nabla u_{0,h}\cdot \nabla v_h -\int u_{0,h}\nabla\gamma^\frac{1}{2}(0)\cdot \nabla v_h &= -\int \div{(\nabla\gamma^\frac{1}{2}(0)\cdot x)\nabla u_{0,h}}v_h +\\ &\hspace*{-5cm} + \int v_h\nabla u_{0,h}\cdot \nabla\gamma^\frac{1}{2}(0) + \int_{\partial\Omega}v_h(\nabla\gamma^\frac{1}{2}(0)\cdot x)\partial_\nu u_{0,h}-\\ &\hspace*{-5cm} - \int_{\partial\Omega}v_hu_{0,h}\partial_\nu\gamma^\frac{1}{2}(0) \\ 
&\hspace*{-5cm}=\int_{\partial\Omega}v_h(\nabla\gamma^\frac{1}{2}(0)\cdot x)\partial_\nu u_{0,h} - \int_{\partial\Omega}v_hu_{0,h}\partial_\nu\gamma^\frac{1}{2}(0). \end{align*} These terms are of the order $h^{-1}$. \fi \end{document}
\begin{document} \title{Combinatorial $R$ matrices for a family of crystals : $B^{(1)}_n$, $D^{(1)}_{n}$, $A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$ cases} \begin{abstract} For coherent families of crystals of affine Lie algebras of type $B^{(1)}_n$, $D^{(1)}_{n}$, $A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$ we describe the combinatorial $R$ matrix using column insertion algorithms for $B,C,D$ Young tableaux. This is a continuation of \cite{HKOT}. \end{abstract} \section{Introduction} \label{sec:intro} A combinatorial $R$ matrix is the $q = 0$ limit of the quantum $R$ matrix for a quantum affine algebra $U_q(\mathfrak{g})$, where $q$ is the deformation parameter and $q=1$ corresponds to the non-deformed case. It is defined on the tensor product of two affine crystals $\mbox{\sl Aff}(B)\otimes\mbox{\sl Aff}(B')$ (see Section \ref{sec:crystals} for notation), and consists of an isomorphism and an energy function. It was first introduced in \cite{KMN1} for the {\em homogeneous} case where one has $B=B'$. In this case the isomorphism is trivial. The energy function was used to describe the path realization of the crystals of highest weight representations of quantum affine algebras. The definition of the energy function was extended in \cite{NY} to the {\em inhomogeneous} case, i.e. $B\neq B'$, to study the charge of the Kostka-Foulkes polynomials \cite{Ma,LS,KR}. In \cite{KKM} the theory of coherent families of perfect crystals was developed for quantum affine algebras of type $A^{(1)}_n,A^{(2)}_{2n-1}, A^{(2)}_{2n}$, $B^{(1)}_n, C^{(1)}_n$, $D^{(1)}_n$ and $D^{(2)}_{n+1}$. An element of these crystals is written as an array of nonnegative integers, and in the homogeneous case an explicit description of the energy functions is given in terms of piecewise linear functions of its entries. Unfortunately this description is not applicable to the inhomogeneous cases. The purpose of this paper is to give an explicit description of the isomorphism and energy function for the inhomogeneous cases.
The main tool of our description is an insertion algorithm, that is, a certain procedure on Young tableaux. The insertion algorithm itself was invented in the context of the Robinson-Schensted correspondence \cite{F} long before the crystal basis theory was initiated, and was subsequently generalized in, e.g. \cite{Ber,P,Su}. As far as $A_n^{(1)}$ crystals are concerned, the isomorphisms and energy functions were obtained in terms of usual (type $A$) Young tableaux and insertion algorithms thereof \cite{S,SW}. In contrast, no similar description of the combinatorial $R$ matrix had been made for other quantum affine algebras, since an insertion algorithm suitable for the $B,C,D$ tableaux of \cite{KN} became available only recently \cite{B1,B2,L}. In the previous work \cite{HKOT} the authors gave a description for types $C_n^{(1)}$ and $A_{2n-1}^{(2)}$. There we used the $\mathfrak{sp}$-version of semistandard tableaux defined in \cite{KN} and the column insertion algorithm presented in \cite{B1} on these tableaux. In this paper we study the remaining types, $A^{(2)}_{2n}, D^{(2)}_{n+1}, B^{(1)}_n$ and $D^{(1)}_n$. We use the $\mathfrak{sp}$- and $\mathfrak{so}$-versions of semistandard tableaux defined in \cite{KN} and the column insertion algorithms presented in \cite{B1,B2} on these tableaux. The layout of this paper is as follows. In Section \ref{sec:crystals} we give a brief review of the basic notions in the theory of crystals and give the definition of the combinatorial $R$ matrix. We first give the description for types $A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$ in Section \ref{sec:Cx}. In Sections \ref{subsec:twista} and \ref{subsec:twistd} we recall the definitions of the crystals $B_l$ for types $A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$ respectively, and describe their elements in terms of one-row tableaux.
We introduce the map $\omega$ from these crystals to the crystal of type $C^{(1)}_n$, which reduces the procedure to that of the latter case, already developed in \cite{HKOT}. In Section \ref{subsec:ccis} we list the elementary operations of column insertions and their inverses for type $C$ tableaux with at most two rows. In Section \ref{subsec:ruleCx} we state the main theorem and describe the isomorphism and energy function for types $A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$, and in Section \ref{subsec:exCx} we give examples. The $B^{(1)}_n$ and $D^{(1)}_n$ cases are treated in Section \ref{sec:bd}. The layout is parallel to that of Section \ref{sec:Cx}. In Sections \ref{subsec:cib} and \ref{subsec:cid}, however, we also prove the column bumping lemmas (Lemmas \ref{lem:bcblxx} and \ref{lem:cblxx}) for $B$ and $D$ tableaux, since the route in the tableau made from inserted letters (the bumping route) plays a role in the main theorem. \section{\mathversion{bold}Crystals and combinatorial $R$ matrix} \label{sec:crystals} Let us recall basic notions in the theory of crystals. See \cite{KMN1,KKM} for details. Let $I=\{0,1,\cdots,n\}$ be the index set. Let $B$ be a $P_{\scriptstyle \mbox{\scriptsize \sl cl}}$-weighted crystal, i.e. $B$ is a finite set equipped with the {\em crystal structure} given by the maps $\tilde{e}_i$ and $\tilde{f}_i$ from $B\sqcup \{0\}$ to $B \sqcup \{0\}$ and the maps $\varepsilon_i$ and $\varphi_i$ from $B$ to $\Z_{\ge0}$. It is always assumed that $\tilde{e}_i 0 = \tilde{f}_i 0 = 0$ and that $\tilde{f}_i b = b'$ means $\tilde{e}_i b' = b$. The crystal $B$ is identified with a colored oriented graph ({\em crystal graph}) if one draws an arrow as $b \stackrel{i}{\rightarrow} b'$ for $\tilde{f}_i b = b'$. Such an arrow is called an $i$-arrow. Pick any $i$ and neglect all the $j$-arrows with $j\neq i$. One then finds that all the connected components are {\em strings} of finite length, i.e. there is no loop or branch.
Fix a string and take any node $b$ in the string. Then the maps $\varepsilon_i(b),\varphi_i(b)$ have the following meaning. Along the string you can go forward by $\varphi_i(b)$ steps to an end following the arrows and backward by $\varepsilon_i(b)$ steps against the arrows. Given two crystals $B$ and $B'$, let $B \otimes B'$ be a crystal defined as follows. As a set it is identified with $B\times B'$. The actions of the operators $\et{i},\ft{i}$ on $B\otimes B'$ are given by \begin{eqnarray*} \et{i}(b\otimes b')&=&\left\{ \begin{array}{ll} \et{i} b\otimes b'&\mbox{ if }\varphi_i(b)\ge\varepsilon_i(b')\\ b\otimes \et{i} b'&\mbox{ if }\varphi_i(b) < \varepsilon_i(b'), \end{array}\right. \\ \ft{i}(b\otimes b')&=&\left\{ \begin{array}{ll} \ft{i} b\otimes b'&\mbox{ if }\varphi_i(b) > \varepsilon_i(b')\\ b\otimes \ft{i} b'&\mbox{ if }\varphi_i(b)\le\varepsilon_i(b'). \end{array}\right. \end{eqnarray*} Here $0\otimes b'$ and $b\otimes 0$ should be understood as $0$. All crystals $B$ and the tensor products of them $B\otimes B'$ are connected as a graph. Let $\mbox{\sl Aff} (B)=\left\{ z^d b | b \in B,\,d \in {\mathbb Z} \right\}$ be an affinization of $B$ \cite{KMN1}, where $z$ is an indeterminate. The crystal $\mbox{\sl Aff} (B)$ is equipped with the crystal structure, where the actions of $\et{i},\ft{i}$ are defined as $\et{i}\cdot z^d b=z^{d+\delta_{i0}}(\et{i}b),\, \ft{i}\cdot z^d b=z^{d-\delta_{i0}}(\ft{i}b)$. The {\em combinatorial $R$ matrix} is given by \begin{eqnarray*} R\;:\;\mbox{\sl Aff}(B)\otimes\mbox{\sl Aff}(B')&\longrightarrow&\mbox{\sl Aff}(B')\otimes\mbox{\sl Aff}(B)\\ z^d b\otimes z^{d'} b'&\longmapsto&z^{d'+H(b\otimes b')}\tilde{b}'\otimes z^{d-H(b\otimes b')}\tilde{b}, \end{eqnarray*} where $\iota (b\otimes b') = \tilde{b}'\otimes\tilde{b}$ under the isomorphism $\iota: B\otimes B' \stackrel{\sim}{\rightarrow}B'\otimes B$. 
$H(b\otimes b')$ is called the {\em energy function} and determined up to a global additive constant by \[ H(\et{i}(b\otimes b'))=\left\{ \begin{array}{ll} H(b\otimes b')+1&\mbox{ if }i=0,\ \varphi_0(b)\geq\varepsilon_0(b'),\ \varphi_0(\tilde{b}')\geq\varepsilon_0(\tilde{b}),\\ H(b\otimes b')-1&\mbox{ if }i=0,\ \varphi_0(b)<\varepsilon_0(b'),\ \varphi_0(\tilde{b}')<\varepsilon_0(\tilde{b}),\\ H(b\otimes b')&\mbox{ otherwise}, \end{array}\right. \] since $B\otimes B'$ is connected. By definition $\iota$ satisfies $\et{i} \iota = \iota \et{i}$ and $\ft{i} \iota = \iota \ft{i}$ on $B \otimes B'$. The definition of the energy function assures the intertwining property of $R$, i.e. $\et{i} R = R \et{i}$ and $\ft{i} R = R \ft{i}$ on $\mbox{\sl Aff}(B)\otimes\mbox{\sl Aff}(B')$. In the remaining part of this paper we do not stick to the formalism on $\mbox{\sl Aff}(B)\otimes\mbox{\sl Aff}(B')$ and rather treat the isomorphism and energy function separately. \section{\mathversion{bold}$U_q'(A_{2n}^{(2)})$ and $U_q'(D_{n+1}^{(2)})$ crystal cases} \label{sec:Cx} \subsection{\mathversion{bold}Definitions : $U_q'(A_{2n}^{(2)})$ case} \label{subsec:twista} Given a positive integer $l$, we consider a $U_q'(A_{2n}^{(2)})$ crystal denoted by $B_l$, that is defined in \cite{KKM}. $B_l$'s are the crystal bases of the irreducible finite-dimensional representations of the quantum affine algebra $U_q'(A_{2n}^{(2)})$. As a set $B_{l}$ reads $$ B_{l} = \left\{( x_1,\ldots, x_n,\overline{x}_n,\ldots,\overline{x}_1) \Biggm| x_i, \overline{x}_i \in {\mathbb Z}_{\ge 0}, \sum_{i=1}^n(x_i + \overline{x}_i) \in \{l,l-1,\ldots ,0\} \right\}. $$ For its crystal structure see \cite{KKM}. $B_{l}$ is isomorphic to $\bigoplus_{0 \leq j \leq l} B(j \Lambda_1)$ as a $U_q(C_n)$ crystal, where $B(j \Lambda_1)$ is the crystal associated with the irreducible representation of $U_q(C_n)$ with highest weight $j \Lambda_1$. 
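To make the set $B_l$ concrete, the following Python sketch (our own illustration; the names `in_B_l` and `enumerate_B_l` are not from \cite{KKM}) tests membership and brute-forces $B_l$ for small $n$ and $l$; the arrays with a fixed coordinate sum $j$ are exactly the elements of the component $B(j\Lambda_1)$ in the decomposition above.

```python
from itertools import product

def in_B_l(coords, n, l):
    """Membership test for the set underlying the crystal B_l:
    coords = (x_1,...,x_n, xbar_n,...,xbar_1) with nonnegative integer
    entries whose total sum lies in {0, 1, ..., l}."""
    return (len(coords) == 2 * n
            and all(isinstance(x, int) and x >= 0 for x in coords)
            and sum(coords) <= l)

def enumerate_B_l(n, l):
    """Brute-force enumeration of B_l for small n and l; the arrays with
    a fixed sum j form the U_q(C_n) component B(j*Lambda_1)."""
    return [c for c in product(range(l + 1), repeat=2 * n) if sum(c) <= l]
```

For instance, for $n=1$, $l=2$ the six arrays $(x_1,\ol{x}_1)$ with $x_1+\ol{x}_1\le 2$ are recovered, matching $\sum_{j=0}^{2}\#B(j\Lambda_1)=1+2+3$.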
The $U_q(C_n)$ crystal $B(j \Lambda_1)$ has a description in terms of semistandard $C$ tableaux \cite{KN}. The entries are $1,\ldots ,n$ and $\ol{1}, \ldots ,\ol{n}$ with the total order: \begin{displaymath} 1 < 2 < \cdots < n < \ol{n} < \cdots < \ol{2} < \ol{1}. \end{displaymath} For an element $b$ of $B(j\Lambda_1)$ let us denote by $\mathcal{T}(b)$ the tableau associated with $b$. Thus for $b= (x_1, \ldots, x_n, \overline{x}_n,\ldots,\overline{x}_1) \in B(j \Lambda_1)$ the tableau $\mathcal{T}(b)$ is depicted by \begin{equation} \label{eq:tabtwistax} \mathcal{T}(b)=\overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{x_1}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! \overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{x_n}\! \overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{\ol{x}_n}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! \overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{\ol{x}_1}. \end{equation} The length of this one-row tableau is equal to $j$, namely $\sum_{i=1}^n(x_i + \overline{x}_i) =j$. Here and in the remaining part of this paper we denote $\overbrace{\fbox{$\vphantom{\ol{1}}i$} \fbox{$\vphantom{\ol{1}}i$} \fbox{$\vphantom{\ol{1}}\cdots$} \fbox{$\vphantom{\ol{1}}i$}}^{x}$ by \par\noindent \setlength{\unitlength}{5mm} \begin{picture}(22,3)(-6,0) \put(0,0){\makebox(10,3) {$\overbrace{\fbox{$\vphantom{\ol{1}} i \cdots i$}}^{x}$ or more simply by }} \put(10,0.5){\line(1,0){3}} \put(10,1.5){\line(1,0){3}} \put(10,0.5){\line(0,1){1}} \put(13,0.5){\line(0,1){1}} \put(10,0.5){\makebox(3,1){$i \cdots i$}} \put(10,1.5){\makebox(3,1){${\scriptstyle x}$}} \put(13,0){\makebox(1,1){.}} \end{picture} \par\noindent To describe our rule for the combinatorial $R$ matrix we shall depict the elements of $B_{l}$ by one-row tableaux of length $2l$. We do this by duplicating each letter and then supplying pairs of $0$ and $\ol{0}$. Adding $0$ and $\ol{0}$ to the set of entries of the tableaux, we assume the total order $0 < 1 < \cdots < \ol{1} < \ol{0}$.
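As a concrete cross-check, the one-row tableau $\mathcal{T}(b)$ above can be spelled out programmatically. The following sketch is our own illustration (barred letters are written in a hypothetical string encoding `'kbar'`, which is not notation from \cite{KN}):

```python
def one_row_word(coords, n):
    """Word of the one-row tableau T(b) for b = (x_1,...,x_n, xbar_n,...,xbar_1):
    x_i copies of the letter i (for i = 1..n), followed by xbar_i copies of
    the barred letter, read from nbar down to 1bar."""
    xs, xbars = coords[:n], coords[n:]
    word = [str(i + 1) for i in range(n) for _ in range(xs[i])]
    word += [f"{n - i}bar" for i in range(n) for _ in range(xbars[i])]
    return word
```

The resulting word is weakly increasing in the total order above, and its length equals $j=\sum_{i=1}^n(x_i+\ol{x}_i)$.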
Let us introduce the map $\omega$ from the $U_q'(A_{2n}^{(2)})$ crystal $B_l$ to the $U_q'(C_{n}^{(1)})$ crystal\footnote{Here we adopted the notation $B_{2l}$ that we have used in the previous work \cite{HKOT}. This $B_{2l}$ was originally denoted by $B_l$ in \cite{KKM}.} $B_{2l}$. This $\omega$ sends $b= (x_1, \ldots, x_n, \overline{x}_n,\ldots,\overline{x}_1)$ to $\omega (b) = (2x_1, \ldots, 2x_n ,$ $2\overline{x}_n ,\ldots,2\overline{x}_1)$. On the other hand let us introduce the symbol ${\mathbb T} (b')$ for a $U_q'(C_{n}^{(1)})$ crystal element $b' \in B_{l'}$ \cite{HKOT}, that represents a one-row tableau with length $l'$. Putting these two symbols together we have \begin{equation} \label{eq:tabtwistaxx} {\mathbb T} (\omega(b))=\bbx{0}{x_\emptyset}\! \overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{2x_1}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! \overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{2x_n}\! \overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{2\ol{x}_n}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! \overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{2\ol{x}_1}\! \overbrace{\fbox{$\ol{0} \cdots \ol{0}$}}^{\ol{x}_\emptyset}, \end{equation} where $x_\emptyset = \overline{x}_\emptyset = l-\sum_{i=1}^n (x_i + \overline{x}_i)$. We shall use this tableau in our description of the combinatorial $R$ matrix (Rule \ref{rule:Cx}). {}From now on we shall devote ourselves to describing several important properties of the map $\omega$. Our goal here is Lemma \ref{lem:1} that our description of the combinatorial $R$ matrix relies on. For this purpose we also use the symbol $\omega$ for the following map for $\et{i},\ft{i}$. \begin{align*} \omega (\tilde{e}_i ) & = (\tilde{e}_i')^{2 - \delta_{i,0}} ,\\ \omega (\tilde{f}_i ) & = (\tilde{f}_i')^{2 - \delta_{i,0}}. \end{align*} Hereafter we attach prime $'$ to the notations for the $U_q'(C_{n}^{(1)})$ crystals, e.g. $\tilde{e}_i', \varphi_i'$ and so on. 
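The duplication map $\omega$ and the length-$2l$ word ${\mathbb T}(\omega(b))$ admit a direct sketch (again our own illustration, with barred letters encoded as `'kbar'` and $x_\emptyset$ computed from $l$):

```python
def omega(coords):
    """The map omega doubles every coordinate:
    (x_1,...,xbar_1) in B_l  ->  (2x_1,...,2xbar_1) in B_{2l}."""
    return tuple(2 * x for x in coords)

def padded_word(coords, n, l):
    """Word of T(omega(b)): x_empty copies of 0, the duplicated letters,
    then x_empty copies of 0bar, where x_empty = l - sum(coords)."""
    x_empty = l - sum(coords)
    dbl = omega(coords)
    body = [str(i + 1) for i in range(n) for _ in range(dbl[i])]
    body += [f"{n - i}bar" for i in range(n) for _ in range(dbl[n + i])]
    return ["0"] * x_empty + body + ["0bar"] * x_empty
```

Every element of $B_l$ is thus drawn as a one-row tableau of the fixed length $2l$, regardless of its coordinate sum.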
\begin{lemma} \label{lem:01} \begin{align*} \omega (\tilde{e}_i b) & = \omega (\tilde{e}_i) \omega (b), \\ \omega (\tilde{f}_i b) & = \omega (\tilde{f}_i) \omega (b), \end{align*} i.e. $\omega$ commutes with the actions of the operators on $B_l$. \end{lemma} \noindent Let us give a proof in the $\tilde{e}_0$ case. (For the other $\tilde{e}_i$'s, and also for the $\tilde{f}_i$'s, the proof is similar.) \begin{proof} Let $b=(x_1,\ldots,\ol{x}_1) \in B_l$ ($U_q'(A_{2n}^{(2)})$ crystal). We have \cite{KKM} \begin{displaymath} \tilde{e}_0 b = \begin{cases} (x_1-1,x_2,\ldots,\ol{x}_1) & \text{if $x_1 \geq \ol{x}_1+1$,} \\ (x_1,\ldots,\ol{x}_2,\ol{x}_1+1) & \text{if $x_1 \leq \ol{x}_1$.} \end{cases} \end{displaymath} This means that \begin{displaymath} \omega (\tilde{e}_0 b) = \begin{cases} (2x_1-2,2x_2,\ldots,2\ol{x}_1) & \text{if $2x_1 \geq 2\ol{x}_1+2$,} \\ (2x_1,\ldots,2\ol{x}_2,2\ol{x}_1+2) & \text{if $2x_1 \leq 2\ol{x}_1$.} \end{cases} \end{displaymath} On the other hand, let $b'=(x'_1,\ldots,\ol{x}'_1) \in B'_{l'}$ ($U_q'(C_{n}^{(1)})$ crystal). We have \cite{KKM} \begin{displaymath} \tilde{e}'_0 b' = \begin{cases} (x'_1-2,x'_2,\ldots,\ol{x}'_1) & \text{if $x'_1 \geq \ol{x}'_1+2$,} \\ (x'_1-1,x'_2,\ldots,\ol{x}'_2,\ol{x}'_1+1) & \text{if $x'_1 = \ol{x}'_1+1$,} \\ (x'_1,\ldots,\ol{x}'_2,\ol{x}'_1+2) & \text{if $x'_1 \leq \ol{x}'_1$.} \end{cases} \end{displaymath} Thus putting $l'=2l$ and $b'=\omega (b)$ we obtain $\omega (\tilde{e}_0) \omega (b) = \tilde{e}'_0 b' = \omega (\tilde{e}_0 b) $. (The second choice in the above equation does not occur, since every coordinate of $\omega(b)$ is even.) \end{proof} Let us denote $\omega (b_1) \otimes \omega (b_2)$ by $\omega(b_1 \otimes b_2)$ for $b_1 \otimes b_2 \in B_l \otimes B_k$. \begin{lemma} \label{lem:02} \begin{align*} \omega (\tilde{e}_i (b_1 \otimes b_2)) &= \omega (\tilde{e}_i) \omega(b_1 \otimes b_2), \\ \omega (\tilde{f}_i (b_1 \otimes b_2)) &= \omega (\tilde{f}_i) \omega(b_1 \otimes b_2).
\end{align*} Namely the $\omega$ commutes with actions of the operators on $B_l \otimes B_k$. \end{lemma} \begin{proof} Let us check the latter. Suppose we have $\varphi_i(b_1) \geq \varepsilon_i(b_2)+1$. Then $\tilde{f}_i (b_1 \otimes b_2) = (\tilde{f}_i b_1) \otimes b_2$. In this case we have $\varphi'_i (\omega (b_1)) \geq \varepsilon'_i (\omega (b_2))+2-\delta_{i,0}$, since \begin{align*} \varphi_i'(\omega (b)) & = (2 - \delta_{i,0}) \varphi_i (b) ,\\ \varepsilon_i'(\omega (b)) & = (2 - \delta_{i,0}) \varepsilon_i (b) . \end{align*} Therefore we obtain \begin{align*} \omega (\tilde{f}_i) \omega(b_1 \otimes b_2) &= (\tilde{f}_i')^{2-\delta_{i,0}} (\omega (b_1) \otimes \omega (b_2)) \\ &= ((\tilde{f}_i')^{2-\delta_{i,0}} \omega (b_1)) \otimes \omega (b_2) \\ &= (\omega (\tilde{f}_i) \omega (b_1)) \otimes \omega (b_2) \\ &= \omega (\tilde{f}_i b_1) \otimes \omega (b_2) \\ &= \omega (\tilde{f}_i b_1 \otimes b_2). \end{align*} The other case when $\varphi_i(b_1)\le\varepsilon_i(b_2)$ is similar. \end{proof} Finally we obtain the following important properties of the map $\omega$. \begin{lemma} \label{lem:1} \par\noindent \begin{enumerate} \renewcommand{(\roman{enumi})}{(\roman{enumi})} \item If $b_1 \otimes b_2$ is mapped to $b'_2 \otimes b'_1$ under the isomorphism of the $U_q'(A_{2n}^{(2)})$ crystals $B_l \otimes B_k \simeq B_k \otimes B_l$, then $\omega(b_1) \otimes \omega(b_2)$ is mapped to $\omega(b'_2) \otimes \omega(b'_1)$ under the isomorphism of the $U_q'(C_{n}^{(1)})$ crystals $B_{2l} \otimes B_{2k} \simeq B_{2k} \otimes B_{2l}$. \item Up to a global additive constant, the value of the energy function $H_{B_lB_k}(b_1 \otimes b_2)$ for the $U_q'(A_{2n}^{(2)})$ crystal $B_l \otimes B_k$ is equal to the value of the energy function $H'_{B_{2l}B_{2k}}(\omega(b_1) \otimes \omega(b_2))$ for the $U_q'(C_{n}^{(1)})$ crystal $B_{2l} \otimes B_{2k}$. \end{enumerate} \end{lemma} \begin{proof} First we consider (i). 
Since the crystal graph of $B_l \otimes B_k$ is connected, it remains to check (i) for any specific element in $B_l \otimes B_k \simeq B_k \otimes B_l$. We can do it by taking $(l,0,\ldots,0)\otimes (k,0,\ldots,0) \stackrel{\sim}{\mapsto} (k,0,\ldots,0)\otimes (l,0,\ldots,0) $ as the specific element, for which (i) certainly holds. We proceed to (ii). We can set \begin{displaymath} H_{B_l B_k}((l,0,\ldots,0)\otimes (k,0,\ldots,0) ) = H'_{B_{2l} B_{2k}}(\omega((l,0,\ldots,0))\otimes \omega((k,0,\ldots,0)) ). \end{displaymath} Suppose $\tilde{e}_i (b_1 \otimes b_2) \ne 0$. Recall the defining relations of the energy function $H_{B_l B_k}$. \begin{displaymath} H_{B_l B_k} (\tilde{e}_i (b_1 \otimes b_2)) = \begin{cases} H_{B_l B_k} (b_1 \otimes b_2)+1 & \text{if $i=0, \varphi_0(b_1) \geq \varepsilon_0(b_2), \varphi_0(b_2') \geq \varepsilon_0(b_1')$,} \\ H_{B_l B_k} (b_1 \otimes b_2)-1 & \text{if $i=0, \varphi_0(b_1) < \varepsilon_0(b_2), \varphi_0(b_2') < \varepsilon_0(b_1')$,} \\ H_{B_l B_k} (b_1 \otimes b_2) & \text{otherwise.} \end{cases} \end{displaymath} Claim (ii) holds if for any $i$ and $b_1 \otimes b_2$ with $\tilde{e}_i (b_1 \otimes b_2) \ne 0$, we have \begin{eqnarray} && H'_{B_{2l} B_{2k}}(\omega(\tilde{e}_i (b_1 \otimes b_2)))- H'_{B_{2l} B_{2k}}(\omega(b_1 \otimes b_2)) \nonumber\\ && \quad = H_{B_l B_k} (\tilde{e}_i (b_1 \otimes b_2))- H_{B_l B_k} (b_1 \otimes b_2). \label{eq:hfuncdif} \end{eqnarray} The $i=0$ case is verified as follows. Since $\omega$ commutes with crystal actions we have $\omega (\tilde{e}_0 (b_1 \otimes b_2)) = \omega (\tilde{e}_0) (\omega(b_1) \otimes \omega(b_2)) = \tilde{e}'_0 (\omega(b_1 \otimes b_2))$. On the other hand $\omega$ preserves the inequalities in the classification conditions in the defining relations of the energy function, i.e. $\varphi'_0(\omega (b_1)) \geq \varepsilon'_0(\omega(b_2)) \Leftrightarrow \varphi_0(b_1) \geq \varepsilon_0(b_2)$, and so on. 
Thus (\ref{eq:hfuncdif}) follows from the defining relations of the $H'_{B_{2l} B_{2k}}$. The $i\neq0$ case is easier. This completes the proof. \end{proof} Since $\omega$ is injective we obtain the converse of (i). \begin{coro} If $\omega(b_1) \otimes \omega(b_2)$ is mapped to $\omega(b'_2) \otimes \omega(b'_1)$ under the isomorphism of the $U_q'(C_{n}^{(1)})$ crystals $B_{2l} \otimes B_{2k} \simeq B_{2k} \otimes B_{2l}$, then $b_1 \otimes b_2$ is mapped to $b'_2 \otimes b'_1$ under the isomorphism of the $U_q'(A_{2n}^{(2)})$ crystals $B_l \otimes B_k \simeq B_k \otimes B_l$. \end{coro} \subsection{\mathversion{bold}Definitions : $U_q'(D_{n+1}^{(2)})$ crystal case} \label{subsec:twistd} Given a positive integer $l$, we consider a $U_q'(D_{n+1}^{(2)})$ crystal denoted by $B_l$ that is defined in \cite{KKM}. $B_l$'s are the crystal bases of the irreducible finite-dimensional representations of the quantum affine algebra $U_q'(D_{n+1}^{(2)})$. As a set $B_{l}$ reads $$ B_{l} = \left\{( x_1,\ldots, x_n,x_{\circ},\overline{x}_n,\ldots,\overline{x}_1) \Biggm| \begin{array}{l} x_{\circ}=\mbox{$0$ or $1$}, \quad x_i, \overline{x}_i \in {\mathbb Z}_{\ge 0}\\ x_{\circ}+\sum_{i=1}^n(x_i + \overline{x}_i) \in \{l,l-1,\ldots ,0\} \end{array} \right\}. $$ For its crystal structure see \cite{KKM}. $B_{l}$ is isomorphic to $\bigoplus_{0 \leq j \leq l} B(j \Lambda_1)$ as a $U_q(B_n)$ crystal, where $B(j \Lambda_1)$ is the crystal associated with the irreducible representation of $U_q(B_n)$ with highest weight $j \Lambda_1$. The $U_q(B_n)$ crystal $B(j \Lambda_1)$ has a description in terms of semistandard $B$ tableaux \cite{KN}. The entries are $1,\ldots ,n$, $\ol{1}, \ldots ,\ol{n}$ and $\circ$ with the total order: \begin{displaymath} 1 < 2 < \cdots < n < \circ < \ol{n} < \cdots < \ol{2} < \ol{1}. \end{displaymath} In this paper we use the symbol $\circ$ for the entry of semistandard $B$ tableaux that is conventionally denoted by $0$.
For $b= (x_1, \ldots, x_n, x_{\circ}, \overline{x}_n,\ldots,\overline{x}_1) \in B(j \Lambda_1)$ the tableau $\mathcal{T}(b)$ is depicted by \begin{displaymath} \mathcal{T}(b)=\overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{x_1}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! \overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{x_n}\! \overbrace{\fbox{$\vphantom{\ol{1}}\hphantom{1}\circ\hphantom{1}$}}^{x_{\circ}}\! \overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{\ol{x}_n}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! \overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{\ol{x}_1}. \end{displaymath} The length of this one-row tableau is equal to $j$, namely $x_{\circ}+\sum_{i=1}^n(x_i + \overline{x}_i) =j$. To describe our rule for the combinatorial $R$ matrix we shall depict the elements of $B_{l}$ by one-row $C$ tableaux of length $2l$. We introduce the map $\omega$ from the $U_q'(D_{n+1}^{(2)})$ crystal $B_l$ to the $U_q'(C_{n}^{(1)})$ crystal $B_{2l}$. $\omega$ sends $b= (x_1, \ldots, x_n, x_{\circ}, \overline{x}_n,\ldots,\overline{x}_1)$ to $\omega (b) = (2x_1, \ldots, 2x_{n-1},$ $2x_n + x_{\circ}, 2\overline{x}_n + x_{\circ},2\overline{x}_{n-1},\ldots,2\overline{x}_1)$. By using the symbol ${\mathbb T}$ introduced in the previous subsection the tableau ${\mathbb T} (\omega(b))$ is depicted by \begin{displaymath} {\mathbb T} (\omega (b))=\bbx{0}{x_\emptyset}\! \overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{2x_1}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! \overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{2x_n+x_{\circ}}\! \overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{2\ol{x}_n+x_{\circ}}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! \overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{2\ol{x}_1}\! \overbrace{\fbox{$\ol{0} \cdots \ol{0}$}}^{\ol{x}_\emptyset}, \end{displaymath} where $x_\emptyset = \overline{x}_\emptyset = l-x_{\circ}-\sum_{i=1}^n (x_i + \overline{x}_i)$. Our description of the combinatorial $R$ matrix (Theorem \ref{th:main1}) is based on the following lemma.
\begin{lemma} \label{lem:2} \par\noindent \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item If $b_1 \otimes b_2$ is mapped to $b'_2 \otimes b'_1$ under the isomorphism of the $U_q'(D_{n+1}^{(2)})$ crystals $B_l \otimes B_k \simeq B_k \otimes B_l$, then $\omega(b_1) \otimes \omega(b_2)$ is mapped to $\omega(b'_2) \otimes \omega(b'_1)$ under the isomorphism of the $U_q'(C_{n}^{(1)})$ crystals $B_{2l} \otimes B_{2k} \simeq B_{2k} \otimes B_{2l}$. \item Up to a global additive constant, the value of the energy function $H_{B_lB_k}(b_1 \otimes b_2)$ for the $U_q'(D_{n+1}^{(2)})$ crystal $B_l \otimes B_k$ is equal to the value of the energy function $H'_{B_{2l}B_{2k}}(\omega(b_1) \otimes \omega(b_2))$ for the $U_q'(C_{n}^{(1)})$ crystal $B_{2l} \otimes B_{2k}$. \end{enumerate} \end{lemma} \begin{proof} To distinguish the two sets of notations we attach a prime $'$ to those for the $U_q'(C_{n}^{(1)})$ crystals. Then we have \begin{align*} \varphi_i'(\omega (b)) & = (2 - \delta_{i,0}- \delta_{i,n}) \varphi_i (b) ,\\ \varepsilon_i'(\omega (b)) & = (2 - \delta_{i,0}- \delta_{i,n}) \varepsilon_i (b) . \end{align*} Let us define the action of $\omega$ on the operators by \begin{align*} \omega (\tilde{e}_i ) & = (\tilde{e}_i')^{2 - \delta_{i,0}- \delta_{i,n}} ,\\ \omega (\tilde{f}_i ) & = (\tilde{f}_i')^{2 - \delta_{i,0}- \delta_{i,n}}. \end{align*} By repeating an argument similar to that in the previous subsection we obtain formally the same assertions as those of Lemmas \ref{lem:01}, \ref{lem:02} and \ref{lem:1}. This completes the proof. \end{proof} Since $\omega$ is injective we obtain the converse of (i).
\begin{coro} If $\omega(b_1) \otimes \omega(b_2)$ is mapped to $\omega(b'_2) \otimes \omega(b'_1)$ under the isomorphism of the $U_q'(C_{n}^{(1)})$ crystals $B_{2l} \otimes B_{2k} \simeq B_{2k} \otimes B_{2l}$, then $b_1 \otimes b_2$ is mapped to $b'_2 \otimes b'_1$ under the isomorphism of the $U_q'(D_{n+1}^{(2)})$ crystals $B_l \otimes B_k \simeq B_k \otimes B_l$. \end{coro} \subsection{\mathversion{bold} Column insertion and inverse insertion for $C_n$} \label{subsec:ccis} Set an alphabet $\mathcal{X}=\mathcal{A} \sqcup \bar{\mathcal{A}},\, \mathcal{A}=\{0, 1,\dots,n\}$ and $\bar{\mathcal{A}}=\{\overline{0}, \overline{1},\dots,\overline{n}\}$, with the total order $0 < 1 < 2 < \dots < n < \overline{n} < \dots < \overline{2} < \overline{1} < \overline{0}$.\footnote{We also introduce $0$ and $\ol{0}$ in the alphabet. Compare with \cite{B1,KN}.} \subsubsection{Semi-standard $C$ tableaux} \label{subsubsec:ssct} Let us consider a {\em semistandard $C$ tableau} made by the letters from this alphabet. We follow \cite{KN} for its definition. We present the definition here, but restrict ourselves to the special cases that are sufficient for our purpose. Namely we consider only those tableaux that have no more than two rows in their shapes. 
Thus they take one of the two forms \begin{equation} \label{eq:pictureofsst} \setlength{\unitlength}{5mm} \begin{picture}(6,1.5)(1.5,0) \put(1,0){\line(1,0){5}} \put(1,1){\line(1,0){5}} \put(1,0){\line(0,1){1}} \put(2,0){\line(0,1){1}} \put(3,0){\line(0,1){1}} \put(5,0){\line(0,1){1}} \put(6,0){\line(0,1){1}} \put(1,0){\makebox(1,1){$\alpha_1$}} \put(2,0){\makebox(1,1){$\alpha_2$}} \put(3,0){\makebox(2,1){$\cdots$}} \put(5,0){\makebox(1,1){$\alpha_j$}} \end{picture} \mbox{or} \setlength{\unitlength}{5mm} \begin{picture}(12,2.5) \put(2,0){\line(1,0){5}} \put(2,1){\line(1,0){10}} \put(2,2){\line(1,0){10}} \put(2,0){\line(0,1){2}} \put(3,0){\line(0,1){2}} \put(4,0){\line(0,1){2}} \put(6,0){\line(0,1){2}} \put(7,0){\line(0,1){2}} \put(9,1){\line(0,1){1}} \put(11,1){\line(0,1){1}} \put(12,1){\line(0,1){1}} \put(2,0){\makebox(1,1){$\beta_1$}} \put(2,1){\makebox(1,1){$\alpha_1$}} \put(3,0){\makebox(1,1){$\beta_2$}} \put(3,1){\makebox(1,1){$\alpha_2$}} \put(4,0){\makebox(2,1){$\cdots$}} \put(4,1){\makebox(2,1){$\cdots$}} \put(6,0){\makebox(1,1){$\beta_i$}} \put(6,1){\makebox(1,1){$\alpha_i$}} \put(7,1){\makebox(2,1){$\alpha_{i+1}$}} \put(9,1){\makebox(2,1){$\cdots$}} \put(11,1){\makebox(1,1){$\alpha_j$}} \end{picture}, \end{equation} and the letters inside the boxes should obey the following conditions: \begin{equation} \label{eq:notdecrease} \alpha_1 \leq \cdots \leq \alpha_j,\quad \beta_1 \leq \cdots \leq \beta_i, \end{equation} \begin{equation} \label{eq:strictincrease} \alpha_a < \beta_a, \end{equation} \begin{equation} \label{eq:zerozerobar} (\alpha_a,\beta_a) \ne (0,\ol{0}), \end{equation} \begin{equation} \label{eq:absenceofxxconf} (\alpha_a,\alpha_{a+1},\beta_{a+1}) \ne (x,x,\ol{x}),\quad (\alpha_a,\beta_{a},\beta_{a+1}) \ne (x,\ol{x},\ol{x}). \end{equation} Here we assume $1 \leq x \leq n$. The last conditions (\ref{eq:absenceofxxconf}) are referred to as the absence of the $(x,x)$-configurations.
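The conditions above are purely combinatorial and can be checked mechanically. The following sketch is only an illustration and not part of the formalism of \cite{KN} or \cite{B1}; the encoding of a letter as a pair \texttt{(x, barred)} and all function names are ours.

```python
def rank(letter, n):
    """Position of a letter in the total order
    0 < 1 < ... < n < nbar < ... < 1bar < 0bar."""
    x, barred = letter
    return (2 * n + 1 - x) if barred else x

def is_semistandard_c(top, bottom, n):
    """Check a tableau with rows top = [alpha_1..alpha_j] and
    bottom = [beta_1..beta_i] (i <= j) against the four conditions."""
    r = lambda c: rank(c, n)
    if len(bottom) > len(top):
        return False
    # rows weakly increase
    rows_ok = (all(r(a) <= r(b) for a, b in zip(top, top[1:]))
               and all(r(a) <= r(b) for a, b in zip(bottom, bottom[1:])))
    # columns strictly increase
    cols_ok = all(r(top[a]) < r(bottom[a]) for a in range(len(bottom)))
    # no column equal to (0, 0bar)
    no_zero_col = all((top[a], bottom[a]) != ((0, False), (0, True))
                      for a in range(len(bottom)))
    # absence of (x,x)-configurations, for 1 <= x <= n
    no_xx = True
    for a in range(len(bottom) - 1):
        x, barred = top[a]
        if not barred and 1 <= x <= n:
            if top[a + 1] == (x, False) and bottom[a + 1] == (x, True):
                no_xx = False
            if bottom[a] == (x, True) and bottom[a + 1] == (x, True):
                no_xx = False
    return rows_ok and cols_ok and no_zero_col and no_xx
```

Here \texttt{rank} realizes the total order $0 < 1 < \cdots < n < \ol{n} < \cdots < \ol{1} < \ol{0}$ on the alphabet $\mathcal{X}$.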
\subsubsection{\mathversion{bold}Column insertion for $C_n$ \cite{B1}} \label{subsubsec:insc} We give a list of column insertions on semistandard $C$ tableaux that is sufficient for our purpose (Rule \ref{rule:Cx}). First of all let us explain the relation between the {\em insertion} and the {\em inverse insertion}. Since we are deliberately avoiding the occurrence of the {\em bumping sliding transition} (\cite{B1}), the situation is basically the same as that for the usual tableau case (\cite{F}, Appendix A.2). Namely, when a letter $\alpha$ is inserted into a tableau $T$, we obtain a new tableau $T'$ whose shape has one more box than that of $T$. If we know the location of the new box we can reverse the insertion process to retrieve the original tableau $T$ and letter $\alpha$. This is the inverse insertion process. These processes go on column by column. Thus, from now on let us focus our attention on a particular column $C$ of the tableau. Suppose we have inserted a letter $\alpha$ into $C$, thereby obtaining a column $C'$ and a letter $\alpha'$ bumped out of the column. If we inversely insert the letter $\alpha'$ into the column $C'$, we retrieve the original column $C$ and letter $\alpha$. For the alphabet $\mathcal{X}$, we follow the convention that Greek letters $ \alpha, \beta, \ldots $ belong to $\mathcal{X}$ while Latin letters $x,y,\ldots$ (resp. $\overline{x},\overline{y},\ldots$) belong to $\mathcal{A}$ (resp. $\bar{\mathcal{A}}$). The pictorial equations in the list should be interpreted as follows. (We take two examples.) \begin{itemize} \item In Case B0, the letter\footnote{By abuse of notation we identify a letter with the one-box tableau having the letter in it.} $\alpha$ is inserted into the column with only one box that has letter $\beta$ in it. The $\alpha$ is set in the box and the $\beta$ is bumped out to the right-hand column.
\item In Case B1, the letter $\beta$ is inserted into the column with two boxes that have letters $\alpha$ and $\gamma$ in them. The $\beta$ is set in the lower box and the $\gamma$ is bumped out to the right-hand column. \end{itemize} Other equations should be interpreted in a similar way\footnote{This interpretation also applies to the lists for the type $B$ and $D$ cases in Sections \ref{subsubsec:insb} and \ref{subsubsec:insd}.}. We note that there is no overlapping case in the list. Note also that it does not exhaust all patterns of the column insertions that insert a letter into a column with at most two boxes. For instance it does not cover the case of insertion \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){${\scriptstyle \ol{2}}$}} \put(1,0){\makebox(1,1){${\scriptstyle \rightarrow}$}} \multiput(2,0)(1,0){2}{\line(0,1){2}} \multiput(2,0)(0,1){3}{\line(1,0){1}} \put(2,0){\makebox(1,1){${\scriptstyle 2}$}} \put(2,1){\makebox(1,1){${\scriptstyle 1}$}} \end{picture} \end{math}. In Rule \ref{rule:Cx} we do not encounter such a case.
\par \noindent \ToBOX{A0}{\alpha} \raisebox{1.25mm}{,} \noindent \ToDOMINO{A1}{\alpha}{\beta}{\alpha}{\beta} \raisebox{4mm}{if $\alpha < \beta$ ,} \noindent \ToYOKODOMINO{B0}{\beta}{\alpha}{\beta}{\alpha} \raisebox{1.25mm}{if $\alpha \le \beta$,} \noindent \ToHOOKnn{B1}{\alpha}{\gamma}{\beta}{\gamma}{\alpha}{\beta} \raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,} \noindent \ToHOOKnn{B2}{\beta}{\gamma}{\alpha}{\beta}{\alpha}{\gamma} \raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,} \noindent \ToHOOKll{B3}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1}{\beta} \raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne 0$,} \noindent \ToHOOKln{B4}{\beta}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}} \raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n$.} \noindent \subsubsection{Column insertion and $U_q (\ol{\mathfrak{g}})$ crystal morphism} In this subsection we illustrate the relation between column insertion and the crystal morphism that was established by Baker \cite{B1,B2}. A crystal morphism is a (not necessarily one-to-one) map between two crystals that commutes with the crystal actions. See, for instance, \cite{KKM} for a precise definition. A $U_q (\ol{\mathfrak{g}})$ crystal morphism is a morphism that commutes with the actions of $\tilde{e}_i$ and $\tilde{f}_i$ for $i \ne 0$. For later use we also include semistandard $B$ and $D$ tableaux in our discussion; therefore we assume $\ol{\mathfrak{g}} =B_n, C_n$ or $D_n$. See section \ref{subsubsec:ssbt} (resp.~\ref{subsubsec:ssdt}) for the definition of semistandard $B$ (resp.~$D$) tableaux. Let $T$ be a semistandard $B$, $C$ or $D$ tableau. For this $T$ we denote by $w(T)$ the Japanese reading word of $T$, i.e. $w(T)$ is the sequence of letters created by reading all letters of $T$ from the rightmost column to the leftmost one, and, within each column, from top to bottom.
For instance, \par\noindent \setlength{\unitlength}{5mm} \begin{picture}(22,1.5)(-5,0) \put(0,0){\makebox(1,1){$w($}} \put(1,0){\line(1,0){5}} \put(1,1){\line(1,0){5}} \put(1,0){\line(0,1){1}} \put(2,0){\line(0,1){1}} \put(3,0){\line(0,1){1}} \put(5,0){\line(0,1){1}} \put(6,0){\line(0,1){1}} \put(1,0){\makebox(1,1){$\alpha_1$}} \put(2,0){\makebox(1,1){$\alpha_2$}} \put(3,0){\makebox(2,1){$\cdots$}} \put(5,0){\makebox(1,1){$\alpha_j$}} \put(6,0){\makebox(6,1){$)=\alpha_j \cdots \alpha_2 \alpha_1,$}} \end{picture} \par\noindent \setlength{\unitlength}{5mm} \begin{picture}(22,2.5)(-2,0) \put(0,0){\makebox(2,2){$w \Biggl($}} \put(2,0){\line(1,0){5}} \put(2,1){\line(1,0){10}} \put(2,2){\line(1,0){10}} \put(2,0){\line(0,1){2}} \put(3,0){\line(0,1){2}} \put(4,0){\line(0,1){2}} \put(6,0){\line(0,1){2}} \put(7,0){\line(0,1){2}} \put(9,1){\line(0,1){1}} \put(11,1){\line(0,1){1}} \put(12,1){\line(0,1){1}} \put(2,0){\makebox(1,1){$\beta_1$}} \put(2,1){\makebox(1,1){$\alpha_1$}} \put(3,0){\makebox(1,1){$\beta_2$}} \put(3,1){\makebox(1,1){$\alpha_2$}} \put(4,0){\makebox(2,1){$\cdots$}} \put(4,1){\makebox(2,1){$\cdots$}} \put(6,0){\makebox(1,1){$\beta_i$}} \put(6,1){\makebox(1,1){$\alpha_i$}} \put(7,1){\makebox(2,1){$\alpha_{i+1}$}} \put(9,1){\makebox(2,1){$\cdots$}} \put(11,1){\makebox(1,1){$\alpha_j$}} \put(12,0){\makebox(14,2){$\Biggr) =\alpha_j \cdots \alpha_{i+1} \alpha_i \beta_i \cdots \alpha_2 \beta_2 \alpha_1 \beta_1.$}} \end{picture} \par\noindent Let $T$ and $T'$ be two tableaux. We define the product tableau $T * T'$ by \begin{displaymath} T * T' = (\tau_1 \to \cdots (\tau_{j-1} \to ( \tau_j \to T ) ) \cdots ) \end{displaymath} where \begin{displaymath} w(T') = \tau_j \tau_{j-1} \cdots \tau_1. \end{displaymath} The symbol $\to$ represents the column insertions in \cite{B1,B2} which we partly describe in sections \ref{subsubsec:insc}, \ref{subsubsec:insb} and \ref{subsubsec:insd}. (Note that the author of \cite{B1,B2} uses $\leftarrow$ instead of $\to$.) 
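The reading word just defined can be illustrated by a short sketch (ours, not taken from \cite{B1,B2}), for tableaux with at most two rows:

```python
def reading_word(top, bottom):
    """Japanese reading word w(T): scan columns from rightmost to
    leftmost, and read each column from top to bottom.  top has
    length j, bottom has length i <= j."""
    word = []
    for col in reversed(range(len(top))):
        word.append(top[col])
        if col < len(bottom):   # this column also has a bottom box
            word.append(bottom[col])
    return word
```

For a one-row tableau this yields $\alpha_j \cdots \alpha_1$, and for a two-row tableau $\alpha_j \cdots \alpha_{i+1} \alpha_i \beta_i \cdots \alpha_1 \beta_1$, in agreement with the displayed formulas.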
For a dominant integral weight $\lambda$ of the $\ol{\mathfrak{g}}$ root system, let $B(\lambda)$ be the $U_q(\ol{\mathfrak{g}})$ crystal associated with the irreducible highest weight representation $V(\lambda)$. The elements of $B(\lambda)$ can be represented by semistandard $\ol{\mathfrak{g}}$ tableaux of shape $\lambda$ \cite{KN}. \begin{proposition}[\cite{B1,B2}] \label{pr:morphgen} Let $B(\mu) \otimes B(\nu) \simeq \bigoplus_j B(\lambda_j)^{\oplus m_j}$ be the tensor product decomposition of crystals. Here $\lambda_j$'s are distinct highest weights and $m_j(\ge1)$ is the multiplicity of $B(\lambda_j)$. Forgetting the multiplicities we have the canonical morphism from $B(\mu) \otimes B(\nu)$ to $\bigoplus_j B(\lambda_j)$. Define $\psi$ by \begin{displaymath} \psi(b_1 \otimes b_2) = b_1 * b_2. \end{displaymath} Then $\psi$ gives the unique $U_q(\ol{\mathfrak{g}})$ crystal morphism from $B(\mu) \otimes B(\nu)$ to $\bigoplus_j B(\lambda_j)$. \end{proposition} \noindent See Examples \ref{ex:morC1}, \ref{ex:morC2}, \ref{ex:morB}, \ref{ex:morD1} and \ref{ex:morD2}. \subsubsection{Column insertion and $U_q (C_n)$ crystal morphism} To illustrate Proposition \ref{pr:morphgen}, let us check a morphism of the $U_q(C_2)$ crystal $B(\Lambda_2) \otimes B(\Lambda_1)$ by taking two examples. 
Let $\psi$ be the map that sends \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,0){\makebox(1,1){${\scriptstyle \beta}$}} \put(0,1){\makebox(1,1){${\scriptstyle \alpha}$}} \put(1,0.5){\makebox(1,1){${\scriptstyle \otimes}$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \put(2,0.5){\makebox(1,1){${\scriptstyle \gamma}$}} \end{picture} \end{math} to the tableau which is made by the column insertion \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){${\scriptstyle \gamma}$}} \put(1,0){\makebox(1,1){${\scriptstyle \rightarrow}$}} \multiput(2,0)(1,0){2}{\line(0,1){2}} \multiput(2,0)(0,1){3}{\line(1,0){1}} \put(2,0){\makebox(1,1){${\scriptstyle \beta}$}} \put(2,1){\makebox(1,1){${\scriptstyle \alpha}$}} \end{picture} \end{math}. \begin{example} \label{ex:morC1} \begin{displaymath} \begin{CD} \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,0){\makebox(1,1){$\ol{2}$}} \put(0,1){\makebox(1,1){$2$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \put(2,0.5){\makebox(1,1){$2$}} \end{picture} @>\text{$\tilde{e}_1$}>> \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,0){\makebox(1,1){$\ol{2}$}} \put(0,1){\makebox(1,1){$1$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \put(2,0.5){\makebox(1,1){$2$}} \end{picture} \\ @VV\text{$\psi$}V @VV\text{$\psi$}V \\ \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} 
\multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,0){\makebox(1,1){$2$}} \put(0,1){\makebox(1,1){$1$}} \put(1,1){\makebox(1,1){$\ol{1}$}} \end{picture} @>\text{$\tilde{e}_1$}>> \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,0){\makebox(1,1){$2$}} \put(0,1){\makebox(1,1){$1$}} \put(1,1){\makebox(1,1){$\ol{2}$}} \end{picture} \end{CD} \end{displaymath} \vskip3ex \noindent Here the left (resp.~right) $\psi$ is given by Case B3 (resp.~B1) column insertion. \end{example} \begin{example} \label{ex:morC2} \begin{displaymath} \begin{CD} \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$\ol{1}$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$1$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \end{picture} @>\text{$\tilde{f}_1$}>> \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$\ol{1}$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$2$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \end{picture} \\ @VV\text{$\psi$}V @VV\text{$\psi$}V \\ \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,1){\makebox(1,1){$2$}}\put(1,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$\ol{2}$}} \end{picture} @>\text{$\tilde{f}_1$}>> \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} 
\put(0,1){\makebox(1,1){$2$}}\put(1,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$\ol{1}$}} \end{picture} \end{CD} \end{displaymath} \vskip3ex \noindent Here the left (resp.~right) $\psi$ is given by Case B4 (resp.~B2) column insertion. \end{example} \subsubsection{\mathversion{bold}Inverse insertion for $C_n$ \cite{B1}} \label{subsubsec:invinsc} In this subsection we give a list of inverse column insertions on semistandard $C$ tableaux that are sufficient for our purpose (Rule \ref{rule:Cx}). The pictorial equations in the list should be interpreted as follows. (We take two examples.) \begin{itemize} \item In Case C0, the letter $\beta$ is inversely inserted into the column with only one box that has letter $\alpha$ in it. The $\beta$ is set in the box and the $\alpha$ is bumped out to the left-hand column. \item In Case C1, the letter $\gamma$ is inversely inserted into the column with two boxes that have letters $\alpha$ and $\beta$ in them. The $\gamma$ is set in the lower box and the $\beta$ is bumped out to the left-hand column. \end{itemize} Other equations illustrate analogous procedures. \par \noindent \FromYOKODOMINO{C0}{\beta}{\alpha}{\beta}{\alpha} \raisebox{1.25mm}{if $\alpha \le \beta$,} \noindent \FromHOOKnn{C1}{\gamma}{\alpha}{\beta}{\alpha}{\gamma}{\beta} \raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,} \noindent \FromHOOKnn{C2}{\beta}{\alpha}{\gamma}{\beta}{\gamma}{\alpha} \raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,} \noindent \FromHOOKnl{C3}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}}{\beta} \raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n$,} \noindent \FromHOOKll{C4}{\beta}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1} \raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne 0$.} \subsection{\mathversion{bold}Main theorem : $A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$ cases} \label{subsec:ruleCx} Fix $l, k \in {\mathbb Z}_{\ge 1}$. 
Given $b_1 \otimes b_2 \in B_{l} \otimes B_{k}$, we define a $U'_q(C^{(1)}_n)$ crystal element $\tilde{b}_2 \otimes \tilde{b}_1 \in B_{2k} \otimes B_{2l}$ and $l',k', m \in {\mathbb Z}_{\ge 0}$ by the following rule. \begin{rules}\label{rule:Cx} \par\noindent Set $z = \min(\sharp\,\fbx{0} \text{ in }{\mathbb T}(\omega(b_1)),\, \sharp\,\fbx{0} \text{ in }{\mathbb T}(\omega(b_2)))$. Thus ${\mathbb T}(\omega(b_1))$ and ${\mathbb T}(\omega(b_2))$ can be depicted by \begin{eqnarray*} {\mathbb T}(\omega (b_1)) &=& \setlength{\unitlength}{5mm} \begin{picture}(10.5,1.4)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){10}} \put(0,0){\line(0,1){1}} \put(3,0){\line(0,1){1}} \put(7,0){\line(0,1){1}} \put(10,0){\line(0,1){1}} \put(0,0){\makebox(3,1){$0\cdots 0$}} \put(3,0){\makebox(4,1){$T_*$}} \put(7,0){\makebox(3,1){$\ol{0}\cdots \ol{0}$}} \multiput(0,0.9)(7,0){2}{\put(0,0){\makebox(3,1){$z$}}} \end{picture},\\ {\mathbb T}(\omega(b_2)) &=& \setlength{\unitlength}{5mm} \begin{picture}(9.5,2)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){9}} \multiput(0,0)(3,0){4}{\line(0,1){1}} \put(3.9,0){\line(0,1){1}} \put(5,0){\line(0,1){1}} \put(0,0){\makebox(3,1){$0\cdots 0$}} \put(3,0){\makebox(1,1){$v_{1}$}} \put(4,0){\makebox(1,1){$\cdots$}} \put(5,0){\makebox(1,1){$v_{k'}$}} \put(6,0){\makebox(3,1){$\ol{0}\cdots \ol{0}$}} \multiput(0,0.9)(6,0){2}{\put(0,0){\makebox(3,1){$z$}}} \end{picture}. \end{eqnarray*} Set $l' = 2l-2z$ and $k' =2k-2z$, hence $T_*$ is a one-row tableau with length $l'$. Apply the column insertions for semistandard $C$ tableaux and define \begin{displaymath} T^{(0)} := (v_1 \longrightarrow ( \cdots ( v_{k'-1} \longrightarrow ( v_{k'} \longrightarrow T_* ) ) \cdots ) ).
\end{displaymath} It has the form: \setlength{\unitlength}{5mm} \begin{picture}(20,4) \put(5,1.5){\makebox(3,1){$T^{(0)}=$}} \put(8,1){\line(1,0){3.5}} \put(8,2){\line(1,0){9}} \put(8,3){\line(1,0){9}} \put(8,1){\line(0,1){2}} \put(11.5,1){\line(0,1){1}} \put(12.5,2){\line(0,1){1}} \put(17,2){\line(0,1){1}} \put(12.5,2){\makebox(4.5,1){$i_{m+1} \;\cdots\; i_{l'}$}} \put(8,1){\makebox(3,1){$\;\;i_1 \cdots i_m$}} \put(8.5,2){\makebox(3,1){$\;\;j_1 \cdots\cdots j_{k'}$}} \end{picture} \noindent where $m$ is the length of the second row, hence that of the first row is $l'+k'-m$ ($0 \le m \le k'$). Next we bump out $l'$ letters from the tableau $T^{(0)}$ by the type $C$ reverse bumping algorithm in section \ref{subsubsec:invinsc}. In general, an inverse column insertion starts at the rightmost box of a row. After an inverse column insertion we obtain a tableau whose shape has one box deleted, i.e. the box where we started the reverse bumping is removed from the original shape. We have labeled by $i_{l'}, i_{l'-1}, \ldots, i_1$ the boxes at which we start the inverse column insertions. Namely, we start first from the box containing $i_{l'}$, then from the one containing $i_{l'-1}$, and so on. Correspondingly, let $w_{1}$ be the first letter that is bumped out from the leftmost column, $w_2$ the second, and so on. Denote by $T^{(i)}$ the resulting tableau when $w_i$ is bumped out ($1 \le i \le l'$). Note that $w_1 \le w_2 \le \cdots \le w_{l'}$.
Now the $U'_q(C^{(1)}_n)$ crystal elements $\tilde{b}_1 \in B_{2l}$ and $\tilde{b}_2 \in B_{2k}$ are uniquely specified by \begin{eqnarray*} {\mathbb T}(\tilde{b}_2) &=& \setlength{\unitlength}{5mm} \begin{picture}(9.5,1.4)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){9}} \multiput(0,0)(3,0){4}{\line(0,1){1}} \put(0,0){\makebox(3,1){$0\cdots 0$}} \put(3,0){\makebox(3,1){$T^{(l')}$}} \put(6,0){\makebox(3,1){$\ol{0}\cdots \ol{0}$}} \multiput(0,0.9)(6,0){2}{\put(0,0){\makebox(3,1){$z$}}} \end{picture},\\ {\mathbb T}(\tilde{b}_1) &=& \begin{picture}(10.5,2)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){10}} \multiput(0,0)(3,0){2}{\line(0,1){1}} \multiput(4.25,0)(1.5,0){2}{\line(0,1){1}} \multiput(7,0)(3,0){2}{\line(0,1){1}} \put(0,0){\makebox(3,1){$0\cdots 0$}} \put(3,0){\makebox(1.25,1){$w_{1}$}} \put(4.25,0){\makebox(1.5,1){$\cdots$}} \put(5.75,0){\makebox(1.25,1){$w_{l'}$}} \put(7,0){\makebox(3,1){$\ol{0}\cdots \ol{0}$}} \multiput(0,0.9)(7,0){2}{\put(0,0){\makebox(3,1){$z$}}} \end{picture}. \end{eqnarray*} \end{rules} (End of the Rule) \vskip3ex We normalize the energy function as $H_{B_l B_k}(b_1 \otimes b_2)=0$ for \begin{math} \mathcal{T}(b_1) = \setlength{\unitlength}{5mm} \begin{picture}(3,1.5)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){3}} \multiput(0,0)(3,0){2}{\line(0,1){1}} \put(0,0){\makebox(3,1){$1\cdots 1$}} \put(0,1){\makebox(3,0.5){$\scriptstyle l$}} \end{picture} \end{math} and \begin{math} \mathcal{T}(b_2) = \setlength{\unitlength}{5mm} \begin{picture}(3,1.5)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){3}} \multiput(0,0)(3,0){2}{\line(0,1){1}} \put(0,0){\makebox(3,1){$\ol{1}\cdots \ol{1}$}} \put(0,1){\makebox(3,0.5){$\scriptstyle k$}} \end{picture} \end{math} irrespective of $l < k$ or $l \ge k$. 
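The initial bookkeeping of the rule above can be sketched in a few lines (an illustration under our own encoding; the function names are not from the references): \texttt{omega} realizes the map $\omega$ for the $D_{n+1}^{(2)}$ case of subsection \ref{subsec:twistd}, and \texttt{strip\_zeros} computes $z$, $l'$ and $k'$ from the letters of ${\mathbb T}(\omega(b_1))$ and ${\mathbb T}(\omega(b_2))$.

```python
def omega(xs, x_circ, xbars):
    """The map omega for the D_{n+1}^{(2)} case: double every
    coordinate of b = (x_1,...,x_n, x_circ, xbar_n,...,xbar_1)
    and add x_circ to the two middle ones."""
    n = len(xs)
    assert len(xbars) == n and x_circ in (0, 1)
    left = [2 * x for x in xs]
    right = [2 * x for x in xbars]   # xbars = (xbar_n, ..., xbar_1)
    left[n - 1] += x_circ            # 2 x_n + x_circ
    right[0] += x_circ               # 2 xbar_n + x_circ
    return tuple(left + right)

def strip_zeros(tab1, tab2, l, k):
    """z = min(number of 0-boxes in T(omega(b1)), in T(omega(b2)));
    after stripping z 0's and z 0bar's the remaining row lengths
    are l' = 2l - 2z and k' = 2k - 2z.  The letter 0 is encoded as
    the integer 0, the letter 0bar as the string '0bar'."""
    z = min(tab1.count(0), tab2.count(0))
    return z, 2 * l - 2 * z, 2 * k - 2 * z
```

For instance, in the first mapping of subsection \ref{subsec:exCx} the tableau ${\mathbb T}(\omega(b_1))$ contains two $0$'s and ${\mathbb T}(\omega(b_2))$ none, whence $z=0$, $l'=6$ and $k'=4$.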
Our main result for $A^{(2)}_{2n}$ and $D^{(2)}_{n+1}$ is \begin{theorem}\label{th:main1} Given $b_1 \otimes b_2 \in B_l \otimes B_k$, find the $U'_q(C^{(1)}_n)$ crystal element $\tilde{b}_2 \otimes \tilde{b}_1 \in B_{2k} \otimes B_{2l}$ and $l', k', m$ by Rule \ref{rule:Cx}. Let $\iota: B_l \otimes B_k \stackrel{\sim}{\rightarrow} B_k \otimes B_l$ be the isomorphism of $U'_q(A^{(2)}_{2n})$ (or $U'_q(D^{(2)}_{n+1})$) crystals. Then $\tilde{b}_2 \otimes \tilde{b}_1$ is in the image of $B_k \otimes B_l$ under the injective map $\omega$ and we have \begin{align*} \iota(b_1\otimes b_2)& = \omega^{-1}(\tilde{b}_2 \otimes \tilde{b}_1),\\ H_{B_l B_k}(b_1 \otimes b_2) &= \min(l',k')- m. \end{align*} \end{theorem} \noindent By Theorem 3.4 of \cite{HKOT} (the corresponding theorem for the $C^{(1)}_n$ case), one immediately obtains this theorem using Lemmas \ref{lem:1}, \ref{lem:2} and their corollaries. \subsection{Examples} \label{subsec:exCx} \begin{example} Let us consider $B_3 \otimes B_2 \simeq B_2 \otimes B_3$ for $A^{(2)}_4$. Let $b$ be an element of $B_3$ (resp.~$B_2$). It is depicted by a one-row tableau $\mathcal{T} (b)$ with length 0, 1, 2 or 3 (resp.~0, 1 or 2).
\begin{displaymath} \begin{array}{ccccccc} \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(1,0)(1,0){2}{\line(0,1){1}} \multiput(1,0)(0,1){2}{\line(1,0){1}} \put(1,0){\makebox(1,1){$1$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0,0)(1,0){4}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){3}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$\ol{1}$}} \end{picture} \\ & & & & & & \\ \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0.5,0)(1,0){3}{\line(0,1){1}} \multiput(0.5,0)(0,1){2}{\line(1,0){2}} \put(0.5,0){\makebox(1,1){$1$}} \put(1.5,0){\makebox(1,1){$2$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0.5,0)(1,0){2}{\line(0,1){1}} \multiput(0.5,0)(0,1){2}{\line(1,0){1}} \put(0.5,0){\makebox(1,1){$2$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0.5,0)(1,0){2}{\line(0,1){1}} \multiput(0.5,0)(0,1){2}{\line(1,0){1}} \put(0.5,0){\makebox(1,1){$1$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0.5,0)(1,0){3}{\line(0,1){1}} \multiput(0.5,0)(0,1){2}{\line(1,0){2}} \put(0.5,0){\makebox(1,1){$2$}} \put(1.5,0){\makebox(1,1){$2$}} \end{picture} \\ & & & & & & \\ \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0,0)(1,0){4}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){3}} \put(0,0){\makebox(1,1){$1$}} 
\put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$2$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$\ol{1}$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$2$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(1,0)(1,0){2}{\line(0,1){1}} \multiput(1,0)(0,1){2}{\line(1,0){1}} \put(1,0){\makebox(1,1){$2$}} \end{picture} \end{array} \end{displaymath} Here we have picked out three samples. One can check that they are mapped to each other under the isomorphism of the $U_q'(A^{(2)}_4)$ crystals by explicitly writing down the crystal graphs of $B_3 \otimes B_2$ and $B_2 \otimes B_3$. First we shall show that the use of the tableau $\mathcal{T} (b)$ given by (\ref{eq:tabtwistax}) is not enough for our purpose, while the less simple tableau ${\mathbb T} (\omega (b))$ given by (\ref{eq:tabtwistaxx}) does suffice. Recall that by neglecting its zero arrows any $U_q'(A^{(2)}_4)$ crystal graph decomposes into $U_q(C_2)$ crystal graphs. Thus if $b_1 \otimes b_2$ is mapped to $b_2' \otimes b_1'$ under the isomorphism of the $U_q'(A^{(2)}_4)$ crystals, they should also be mapped to each other under an isomorphism of $U_q(C_2)$ crystals. In this example this $U_q(C_2)$ crystal isomorphism can be checked in terms of the tableau $\mathcal{T} (b)$ in the following way.
Given $b_1 \otimes b_2$, let us construct the product tableau $\mathcal{T} (b_1) * \mathcal{T} (b_2)$ according to the original insertion rule in \cite{B1}, where in particular we have \begin{math} (\ol{1} \longrightarrow \setlength{\unitlength}{5mm} \begin{picture}(1,1)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){$1$}} \end{picture} ) = \emptyset. \end{math}\footnote{See the first footnote of subsection \ref{subsec:ccis}.} One can see that both sides of the above three mappings then yield a common tableau \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(2,2)(0,0.5) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(2,1){\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(0,0){\line(1,0){1}} \put(0,1){\makebox(1,1){${\scriptstyle 1}$}} \put(1,1){\makebox(1,1){${\scriptstyle 2}$}} \put(0,0){\makebox(1,1){${\scriptstyle 2}$}} \end{picture} \end{math}. This means that they are mapped to each other under isomorphisms of $U_q(C_2)$ crystals. Thus we see that they satisfy the necessary condition for the isomorphism of $U_q'(A^{(2)}_4)$ crystals. However, we also see that this method of constructing $\mathcal{T} (b_1) * \mathcal{T} (b_2)$ is not strong enough to determine the $U_q'(A^{(2)}_4)$ crystal isomorphism. Theorem \ref{th:main1} asserts that we are able to determine the $U_q'(A^{(2)}_4)$ crystal isomorphism by means of the tableau ${\mathbb T} (\omega (b_1))* {\mathbb T} (\omega (b_2))$. Namely the above three mappings are embedded into the following mappings in $B_6 \otimes B_4 \simeq B_4 \otimes B_6$ for the $U'_q(C^{(1)}_2)$ crystals.
\begin{displaymath} \begin{array}{ccccccc} \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$0$}} \put(1,0){\makebox(1,1){$0$}} \put(2,0){\makebox(1,1){$1$}} \put(3,0){\makebox(1,1){$1$}} \put(4,0){\makebox(1,1){$\ol{0}$}} \put(5,0){\makebox(1,1){$\ol{0}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$2$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$1$}} \put(3,0){\makebox(1,1){$1$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$2$}} \put(4,0){\makebox(1,1){$\ol{1}$}} \put(5,0){\makebox(1,1){$\ol{1}$}} \end{picture} \\ & & & & & & \\ \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$0$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$1$}} \put(3,0){\makebox(1,1){$2$}} \put(4,0){\makebox(1,1){$2$}} \put(5,0){\makebox(1,1){$\ol{0}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$0$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$\ol{0}$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} 
\begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$0$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$1$}} \put(3,0){\makebox(1,1){$\ol{0}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$0$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$2$}} \put(4,0){\makebox(1,1){$2$}} \put(5,0){\makebox(1,1){$\ol{0}$}} \end{picture} \\ & & & & & & \\ \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$1$}} \put(3,0){\makebox(1,1){$1$}} \put(4,0){\makebox(1,1){$2$}} \put(5,0){\makebox(1,1){$2$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$\ol{1}$}} \put(3,0){\makebox(1,1){$\ol{1}$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$2$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$0$}} \put(1,0){\makebox(1,1){$0$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$2$}} \put(4,0){\makebox(1,1){$\ol{0}$}} \put(5,0){\makebox(1,1){$\ol{0}$}} \end{picture} \end{array} \end{displaymath} We adopted a rule that the column insertion \begin{math} (\ol{1} \longrightarrow 
\setlength{\unitlength}{5mm} \begin{picture}(1,1)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){$1$}} \end{picture} ) \end{math} does not vanish \cite{HKOT}. Accordingly, both sides of the first mapping give the tableau \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(6,2)(0,0.5) \multiput(0,0)(1,0){5}{\line(0,1){2}} \multiput(5,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}} \put(0,0){\line(1,0){4}} \put(0,1){\makebox(1,1){${\scriptstyle 0}$}} \put(1,1){\makebox(1,1){${\scriptstyle 0}$}} \put(2,1){\makebox(1,1){${\scriptstyle 1}$}} \put(3,1){\makebox(1,1){${\scriptstyle 1}$}} \put(4,1){\makebox(1,1){${\scriptstyle \ol{0}}$}} \put(5,1){\makebox(1,1){${\scriptstyle \ol{0}}$}} \put(0,0){\makebox(1,1){${\scriptstyle 2}$}} \put(1,0){\makebox(1,1){${\scriptstyle 2}$}} \put(2,0){\makebox(1,1){${\scriptstyle 2}$}} \put(3,0){\makebox(1,1){${\scriptstyle 2}$}} \end{picture} \end{math}. By deleting a $0,\ol{0}$ pair, those of the second one give the tableau \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(4,2)(0,0.5) \multiput(0,0)(1,0){3}{\line(0,1){2}} \multiput(3,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){4}} \put(0,0){\line(1,0){2}} \put(0,1){\makebox(1,1){${\scriptstyle 1}$}} \put(1,1){\makebox(1,1){${\scriptstyle 1}$}} \put(2,1){\makebox(1,1){${\scriptstyle 2}$}} \put(3,1){\makebox(1,1){${\scriptstyle 2}$}} \put(0,0){\makebox(1,1){${\scriptstyle 2}$}} \put(1,0){\makebox(1,1){${\scriptstyle 2}$}} \end{picture} \end{math}.
Those of the third one give the tableau \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(6,2)(0,0.5) \multiput(0,0)(1,0){5}{\line(0,1){2}} \multiput(5,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}} \put(0,0){\line(1,0){4}} \put(0,1){\makebox(1,1){${\scriptstyle 0}$}} \put(1,1){\makebox(1,1){${\scriptstyle 0}$}} \put(2,1){\makebox(1,1){${\scriptstyle 1}$}} \put(3,1){\makebox(1,1){${\scriptstyle 1}$}} \put(4,1){\makebox(1,1){${\scriptstyle 2}$}} \put(5,1){\makebox(1,1){${\scriptstyle 2}$}} \put(0,0){\makebox(1,1){${\scriptstyle 2}$}} \put(1,0){\makebox(1,1){${\scriptstyle 2}$}} \put(2,0){\makebox(1,1){${\scriptstyle \ol{0}}$}} \put(3,0){\makebox(1,1){${\scriptstyle \ol{0}}$}} \end{picture} \end{math}. They are distinct. The right hand side is uniquely determined from the left hand side. Second, let us illustrate in more detail the procedure of Rule \ref{rule:Cx}. Take the last example. {}From the left hand side we proceed with the column insertions as follows. \begin{align*} \ol{1} &\rightarrow \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$1$}} \put(3,0){\makebox(1,1){$1$}} \put(4,0){\makebox(1,1){$2$}} \put(5,0){\makebox(1,1){$2$}} \end{picture} \quad = \quad \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(2,1)(1,0){5}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}} \put(0,0){\line(1,0){1}} \put(0,1){\makebox(1,1){$1$}} \put(1,1){\makebox(1,1){$1$}} \put(2,1){\makebox(1,1){$1$}} \put(3,1){\makebox(1,1){$1$}} \put(4,1){\makebox(1,1){$2$}} \put(5,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$\ol{1}$}} \end{picture} \\ \ol{1} &\rightarrow \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(2,1)(1,0){5}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}}
\put(0,0){\line(1,0){1}} \put(0,1){\makebox(1,1){$1$}} \put(1,1){\makebox(1,1){$1$}} \put(2,1){\makebox(1,1){$1$}} \put(3,1){\makebox(1,1){$1$}} \put(4,1){\makebox(1,1){$2$}} \put(5,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$\ol{1}$}} \end{picture} \quad = \quad \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){3}{\line(0,1){2}} \multiput(3,1)(1,0){4}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}} \put(0,0){\line(1,0){2}} \put(0,1){\makebox(1,1){$0$}} \put(1,1){\makebox(1,1){$1$}} \put(2,1){\makebox(1,1){$1$}} \put(3,1){\makebox(1,1){$1$}} \put(4,1){\makebox(1,1){$2$}} \put(5,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$\ol{1}$}} \put(1,0){\makebox(1,1){$\ol{0}$}} \end{picture} \\ 2 &\rightarrow \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){3}{\line(0,1){2}} \multiput(3,1)(1,0){4}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}} \put(0,0){\line(1,0){2}} \put(0,1){\makebox(1,1){$0$}} \put(1,1){\makebox(1,1){$1$}} \put(2,1){\makebox(1,1){$1$}} \put(3,1){\makebox(1,1){$1$}} \put(4,1){\makebox(1,1){$2$}} \put(5,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$\ol{1}$}} \put(1,0){\makebox(1,1){$\ol{0}$}} \end{picture} \quad = \quad \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){4}{\line(0,1){2}} \multiput(4,1)(1,0){3}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}} \put(0,0){\line(1,0){3}} \put(0,1){\makebox(1,1){$0$}} \put(1,1){\makebox(1,1){$1$}} \put(2,1){\makebox(1,1){$1$}} \put(3,1){\makebox(1,1){$1$}} \put(4,1){\makebox(1,1){$2$}} \put(5,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$\ol{1}$}} \put(2,0){\makebox(1,1){$\ol{0}$}} \end{picture} \\ 2 &\rightarrow \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){4}{\line(0,1){2}} \multiput(4,1)(1,0){3}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}} \put(0,0){\line(1,0){3}} \put(0,1){\makebox(1,1){$0$}} \put(1,1){\makebox(1,1){$1$}} 
\put(2,1){\makebox(1,1){$1$}} \put(3,1){\makebox(1,1){$1$}} \put(4,1){\makebox(1,1){$2$}} \put(5,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$\ol{1}$}} \put(2,0){\makebox(1,1){$\ol{0}$}} \end{picture} \quad = \quad \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){5}{\line(0,1){2}} \multiput(5,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}} \put(0,0){\line(1,0){4}} \put(0,1){\makebox(1,1){$0$}} \put(1,1){\makebox(1,1){$0$}} \put(2,1){\makebox(1,1){$1$}} \put(3,1){\makebox(1,1){$1$}} \put(4,1){\makebox(1,1){$2$}} \put(5,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$\ol{0}$}} \put(3,0){\makebox(1,1){$\ol{0}$}} \end{picture} \end{align*} \vskip3ex \noindent The reverse bumping procedure goes as follows. \begin{align*} T^{(0)} &= \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){5}{\line(0,1){2}} \multiput(5,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}} \put(0,0){\line(1,0){4}} \put(0,1){\makebox(1,1){$0$}} \put(1,1){\makebox(1,1){$0$}} \put(2,1){\makebox(1,1){$1$}} \put(3,1){\makebox(1,1){$1$}} \put(4,1){\makebox(1,1){$2$}} \put(5,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$\ol{0}$}} \put(3,0){\makebox(1,1){$\ol{0}$}} \end{picture} & \\ T^{(1)} &= \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){5}{\line(0,1){2}} \put(5,1){\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){5}} \put(0,0){\line(1,0){4}} \put(0,1){\makebox(1,1){$0$}} \put(1,1){\makebox(1,1){$1$}} \put(2,1){\makebox(1,1){$1$}} \put(3,1){\makebox(1,1){$2$}} \put(4,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$\ol{0}$}} \put(3,0){\makebox(1,1){$\ol{0}$}} \end{picture} &,w_1 = 0 \\ T^{(2)} &= \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) 
\multiput(0,0)(1,0){5}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){4}} \put(0,1){\makebox(1,1){$1$}} \put(1,1){\makebox(1,1){$1$}} \put(2,1){\makebox(1,1){$2$}} \put(3,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$\ol{0}$}} \put(3,0){\makebox(1,1){$\ol{0}$}} \end{picture} &,w_2 = 0 \\ T^{(3)} &= \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){4}{\line(0,1){2}} \put(4,1){\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){4}} \put(0,0){\line(1,0){3}} \put(0,1){\makebox(1,1){$1$}} \put(1,1){\makebox(1,1){$1$}} \put(2,1){\makebox(1,1){$2$}} \put(3,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$\ol{0}$}} \put(2,0){\makebox(1,1){$\ol{0}$}} \end{picture} &,w_3 = 2 \\ T^{(4)} &= \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){3}{\line(0,1){2}} \multiput(3,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){4}} \put(0,0){\line(1,0){2}} \put(0,1){\makebox(1,1){$1$}} \put(1,1){\makebox(1,1){$1$}} \put(2,1){\makebox(1,1){$2$}} \put(3,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$\ol{0}$}} \put(1,0){\makebox(1,1){$\ol{0}$}} \end{picture} &,w_4 = 2 \\ T^{(5)} &= \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.8) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(2,1)(1,0){3}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){4}} \put(0,0){\line(1,0){1}} \put(0,1){\makebox(1,1){$1$}} \put(1,1){\makebox(1,1){$1$}} \put(2,1){\makebox(1,1){$2$}} \put(3,1){\makebox(1,1){$2$}} \put(0,0){\makebox(1,1){$\ol{0}$}} \end{picture} &,w_5 = \ol{0} \\ T^{(6)} &= \setlength{\unitlength}{5mm} \begin{picture}(6,2)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$2$}} \end{picture} &, w_6 = \ol{0} \end{align*} Thus we obtained the right hand side. 
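As an illustrative aside, the bare column-bumping mechanism that the above walkthrough relies on can be sketched in the classical (type-$A$) setting of \cite{F}: an inserted letter bumps the topmost entry of a column that is greater than or equal to it, and the bumped letter moves one column to the right. The following Python sketch is our own simplification over integer letters; it does not model the barred letters, the letter $\circ$, or the $0,\ol{0}$ pair creation and annihilation of the type-$B$ and type-$C$ rules in the text, and the function name is ours.

```python
# Simplified type-A column insertion: letters are plain integers; the
# special type-B/C cases of the text (barred letters, 0/0bar pairs) are
# deliberately NOT modeled.

def column_insert(columns, x):
    """Insert x by column bumping; columns[j] is the j-th column, top to bottom.

    In each column, x bumps the topmost entry >= x (keeping columns strictly
    increasing); the bumped entry is inserted into the next column to the
    right.  Returns the bumping route as a list of (column, row) positions,
    one box per visited column, forming a left-to-right path.
    """
    route, j = [], 0
    while True:
        if j == len(columns):            # no column left: start a new one
            columns.append([x])
            route.append((j, 0))
            return route
        col = columns[j]
        for i, y in enumerate(col):
            if y >= x:                   # bump the topmost entry >= x
                col[i] = x
                route.append((j, i))
                x = y
                break
        else:                            # all entries < x: settle at the bottom
            col.append(x)
            route.append((j, len(col) - 1))
            return route
        j += 1

columns = []
for letter in [1, 1, 2, 2]:
    column_insert(columns, letter)
print(columns)  # → [[1, 2], [1, 2]]
```

The last insertion bumps the $2$ in the first column down the route $(0,1) \to (1,1)$, which is the type-$A$ shadow of the bumping routes defined for $B_n$ later in the text.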
We assign $H_{B_3,B_2}=0$ to this element since we have $l'=6, k'=4$ and $m=4$ in this case. \end{example} \begin{example} $B_3 \otimes B_2 \simeq B_2 \otimes B_3$ for $D^{(2)}_3$. \begin{displaymath} \begin{array}{ccccccc} \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(1,0)(1,0){2}{\line(0,1){1}} \multiput(1,0)(0,1){2}{\line(1,0){1}} \put(1,0){\makebox(1,1){$1$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$\circ$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0,0)(1,0){4}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){3}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$\circ$}} \put(2,0){\makebox(1,1){$\ol{1}$}} \end{picture} \\ & & & & & & \\ \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0.5,0)(1,0){3}{\line(0,1){1}} \multiput(0.5,0)(0,1){2}{\line(1,0){2}} \put(0.5,0){\makebox(1,1){$1$}} \put(1.5,0){\makebox(1,1){$\circ$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0.5,0)(1,0){2}{\line(0,1){1}} \multiput(0.5,0)(0,1){2}{\line(1,0){1}} \put(0.5,0){\makebox(1,1){$2$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0.5,0)(1,0){2}{\line(0,1){1}} \multiput(0.5,0)(0,1){2}{\line(1,0){1}} \put(0.5,0){\makebox(1,1){$1$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0.5,0)(1,0){3}{\line(0,1){1}} \multiput(0.5,0)(0,1){2}{\line(1,0){2}} \put(0.5,0){\makebox(1,1){$2$}} \put(1.5,0){\makebox(1,1){$\circ$}} \end{picture} \\ & & 
& & & & \\ \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0,0)(1,0){4}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){3}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$\circ$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$\ol{1}$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$\circ$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(1,0)(1,0){2}{\line(0,1){1}} \multiput(1,0)(0,1){2}{\line(1,0){1}} \put(1,0){\makebox(1,1){$2$}} \end{picture} \end{array} \end{displaymath} Here we have picked up three samples. According to the rule of the type $B$ column insertion in \cite{B2} we obtain \begin{math} (\ol{1} \longrightarrow \setlength{\unitlength}{5mm} \begin{picture}(1,1)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){$1$}} \end{picture} ) = \emptyset \end{math}. In this rule we find that both sides of the above three mappings give a common tableau \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(2,2)(0,0.5) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(2,1){\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(0,0){\line(1,0){1}} \put(0,1){\makebox(1,1){${\scriptstyle 1}$}} \put(1,1){\makebox(1,1){${\scriptstyle \circ}$}} \put(0,0){\makebox(1,1){${\scriptstyle 2}$}} \end{picture} \end{math}. Theorem \ref{th:main1} asserts that we are able to determine the isomorphism of $U'_q(D^{(2)}_3)$ crystals by means of the tableau ${\mathbb T} (\omega (b))$. 
The above three mappings are embedded into the following mappings in $B_6 \otimes B_4 \simeq B_4 \otimes B_6$ for the $U'_q(C^{(1)}_2)$ crystals. \begin{displaymath} \begin{array}{ccccccc} \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$0$}} \put(1,0){\makebox(1,1){$0$}} \put(2,0){\makebox(1,1){$1$}} \put(3,0){\makebox(1,1){$1$}} \put(4,0){\makebox(1,1){$\ol{0}$}} \put(5,0){\makebox(1,1){$\ol{0}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$\ol{2}$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$1$}} \put(3,0){\makebox(1,1){$1$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$\ol{2}$}} \put(4,0){\makebox(1,1){$\ol{1}$}} \put(5,0){\makebox(1,1){$\ol{1}$}} \end{picture} \\ & & & & & & \\ \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$0$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$1$}} \put(3,0){\makebox(1,1){$2$}} \put(4,0){\makebox(1,1){$\ol{2}$}} \put(5,0){\makebox(1,1){$\ol{0}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$0$}} 
\put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$\ol{0}$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$0$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$1$}} \put(3,0){\makebox(1,1){$\ol{0}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$0$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$2$}} \put(4,0){\makebox(1,1){$\ol{2}$}} \put(5,0){\makebox(1,1){$\ol{0}$}} \end{picture} \\ & & & & & & \\ \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$1$}} \put(3,0){\makebox(1,1){$1$}} \put(4,0){\makebox(1,1){$2$}} \put(5,0){\makebox(1,1){$\ol{2}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$2$}} \put(1,0){\makebox(1,1){$2$}} \put(2,0){\makebox(1,1){$\ol{1}$}} \put(3,0){\makebox(1,1){$\ol{1}$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(4,1)(0,0.3) \multiput(0,0)(1,0){5}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){4}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$\ol{2}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(6,1)(0,0.3) \multiput(0,0)(1,0){7}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\makebox(1,1){$0$}} \put(1,0){\makebox(1,1){$0$}} \put(2,0){\makebox(1,1){$2$}} \put(3,0){\makebox(1,1){$2$}} \put(4,0){\makebox(1,1){$\ol{0}$}} 
\put(5,0){\makebox(1,1){$\ol{0}$}} \end{picture} \end{array} \end{displaymath} Both sides of the first mapping give the tableau \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(6,2)(0,0.5) \multiput(0,0)(1,0){5}{\line(0,1){2}} \multiput(5,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}} \put(0,0){\line(1,0){4}} \put(0,1){\makebox(1,1){${\scriptstyle 0}$}} \put(1,1){\makebox(1,1){${\scriptstyle 0}$}} \put(2,1){\makebox(1,1){${\scriptstyle 1}$}} \put(3,1){\makebox(1,1){${\scriptstyle 1}$}} \put(4,1){\makebox(1,1){${\scriptstyle \ol{0}}$}} \put(5,1){\makebox(1,1){${\scriptstyle \ol{0}}$}} \put(0,0){\makebox(1,1){${\scriptstyle 2}$}} \put(1,0){\makebox(1,1){${\scriptstyle 2}$}} \put(2,0){\makebox(1,1){${\scriptstyle 2}$}} \put(3,0){\makebox(1,1){${\scriptstyle \ol{2}}$}} \end{picture} \end{math}. By deleting a $0,\ol{0}$ pair, those of the second one give the tableau \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(4,2)(0,0.5) \multiput(0,0)(1,0){3}{\line(0,1){2}} \multiput(3,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){4}} \put(0,0){\line(1,0){2}} \put(0,1){\makebox(1,1){${\scriptstyle 1}$}} \put(1,1){\makebox(1,1){${\scriptstyle 1}$}} \put(2,1){\makebox(1,1){${\scriptstyle 2}$}} \put(3,1){\makebox(1,1){${\scriptstyle \ol{2}}$}} \put(0,0){\makebox(1,1){${\scriptstyle 2}$}} \put(1,0){\makebox(1,1){${\scriptstyle 2}$}} \end{picture} \end{math}.
Those of the third one give the tableau \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(6,2)(0,0.5) \multiput(0,0)(1,0){5}{\line(0,1){2}} \multiput(5,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){6}} \put(0,0){\line(1,0){4}} \put(0,1){\makebox(1,1){${\scriptstyle 0}$}} \put(1,1){\makebox(1,1){${\scriptstyle 0}$}} \put(2,1){\makebox(1,1){${\scriptstyle 1}$}} \put(3,1){\makebox(1,1){${\scriptstyle 1}$}} \put(4,1){\makebox(1,1){${\scriptstyle 2}$}} \put(5,1){\makebox(1,1){${\scriptstyle \ol{2}}$}} \put(0,0){\makebox(1,1){${\scriptstyle 2}$}} \put(1,0){\makebox(1,1){${\scriptstyle 2}$}} \put(2,0){\makebox(1,1){${\scriptstyle \ol{0}}$}} \put(3,0){\makebox(1,1){${\scriptstyle \ol{0}}$}} \end{picture} \end{math}. They are distinct. The right hand side is uniquely determined from the left hand side. \end{example} \section{\mathversion{bold}$U_q'(B_n^{(1)})$ and $U_q'(D_n^{(1)})$ crystal cases} \label{sec:bd} \subsection{\mathversion{bold}Definitions : $U_q'(B_n^{(1)})$ crystal case} \label{subsec:typeB} Given a positive integer $l$, let us denote by $B_{l}$ the $U_q'(B_n^{(1)})$ crystal defined in \cite{KKM}. As a set $B_{l}$ reads $$ B_{l} = \left\{( x_1,\ldots, x_n,x_\circ,\overline{x}_n,\ldots,\overline{x}_1) \Biggm| x_\circ=\mbox{$0$ or $1$}, x_i, \overline{x}_i \in {\mathbb Z}_{\ge 0}, x_\circ+\sum_{i=1}^n(x_i + \overline{x}_i) = l \right\}. $$ For its crystal structure see \cite{KKM}. $B_{l}$ is isomorphic to $B(l \Lambda_1)$ as a $U_q(B_n)$ crystal. We depict the element $b= (x_1, \ldots, x_n, x_\circ,\overline{x}_n,\ldots,\overline{x}_1) \in B_{l}$ by the tableau \begin{displaymath} \mathcal{T}(b)=\overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{x_1}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! \overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{x_n}\! \overbrace{\fbox{$\vphantom{\ol{1}}\hphantom{1}\circ\hphantom{1}$}}^{x_\circ}\! \overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{\ol{x}_n}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! 
\overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{\ol{x}_1}. \end{displaymath} The length of this one-row tableau is equal to $l$, namely $x_\circ+\sum_{i=1}^n(x_i + \overline{x}_i) =l$. \subsection{\mathversion{bold}Definitions : $U_q'(D_n^{(1)})$ crystal case} \label{subsec:typeD} Given a positive integer $l$, let us denote by $B_{l}$ the $U_q'(D_n^{(1)})$ crystal defined in \cite{KKM}. As a set $B_{l}$ reads $$ B_{l} = \left\{( x_1,\ldots, x_n,\overline{x}_n,\ldots,\overline{x}_1) \Biggm| \mbox{$x_n=0$ or $\overline{x}_n=0$}, x_i, \overline{x}_i \in {\mathbb Z}_{\ge 0}, \sum_{i=1}^n(x_i + \overline{x}_i) = l \right\}. $$ For its crystal structure see \cite{KKM}. $B_{l}$ is isomorphic to $B(l \Lambda_1)$ as a $U_q(D_n)$ crystal. We depict the element $b= (x_1, \ldots, x_n,\overline{x}_n,\ldots,\overline{x}_1) \in B_{l}$ by the tableau \begin{displaymath} {\mathcal T} (b)=\overbrace{\fbox{$\vphantom{\ol{1}} 1 \cdots 1$}}^{x_1}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! \overbrace{\fbox{$\vphantom{\ol{1}}n \cdots n$}}^{x_n}\! \overbrace{\fbox{$\vphantom{\ol{1}}\ol{n} \cdots \ol{n}$}}^{\ol{x}_n}\! \fbox{$\vphantom{\ol{1}}\cdots$}\! \overbrace{\fbox{$\ol{1} \cdots \ol{1}$}}^{\ol{x}_1}. \end{displaymath} The length of this one-row tableau is equal to $l$, namely $\sum_{i=1}^n(x_i + \overline{x}_i) =l$. \subsection{\mathversion{bold}Column insertion and inverse insertion for $B_n$} \label{subsec:cib} Set an alphabet $\mathcal{X}=\mathcal{A} \sqcup \{\circ\} \sqcup \bar{\mathcal{A}},\, \mathcal{A}=\{ 1,\dots,n\}$ and $\bar{\mathcal{A}}=\{\overline{1},\dots,\overline{n}\}$, with the total order $1 < 2 < \dots < n < \circ < \overline{n} < \dots < \overline{2} < \overline{1}$. \subsubsection{Semistandard $B$ tableaux} \label{subsubsec:ssbt} Let us consider a {\em semistandard $B$ tableau} made of letters from this alphabet. We follow \cite{KN} for its definition. We present the definition here, but restrict ourselves to special cases that are sufficient for our purpose.
Namely, we consider only those tableaux that have no more than two rows in their shapes. Thus they have the forms as in (\ref{eq:pictureofsst}), where the letters inside the boxes are now chosen from the alphabet given above. The letters obey the conditions (\ref{eq:notdecrease}) and the absence of the $(x,x)$-configuration (\ref{eq:absenceofxxconf}) where we now assume $1 \leq x <n$. They also obey the following conditions: \begin{equation} \alpha_a < \beta_a \quad \mbox{or} \quad (\alpha_a,\beta_a) = (\circ,\circ), \end{equation} \begin{equation} \label{eq:absenceofoneonebar} (\alpha_a,\beta_a) \ne (1,\ol{1}), \end{equation} \begin{equation} \label{eq:absenceofnnconf} (\alpha_a,\beta_{a+1}) \ne (n,\ol{n}),(n,\circ),(\circ,\circ),(\circ,\ol{n}). \end{equation} The last conditions (\ref{eq:absenceofnnconf}) are referred to as the absence of the $(n,n)$-configurations. \subsubsection{\mathversion{bold}Column insertion for $B_n$ \cite{B2}} \label{subsubsec:insb} We give a partial list of patterns of column insertions on the semistandard $B$ tableaux that are sufficient for our purpose. For the alphabet $\mathcal{X}$, we follow the convention that Greek letters $ \alpha, \beta, \ldots $ belong to $\mathcal{X}$ while Latin letters $x,y,\ldots$ (resp. $\overline{x},\overline{y},\ldots$) belong to $\mathcal{A}$ (resp. $\bar{\mathcal{A}}$). For the interpretation of the pictorial equations in the list, see the remarks in Section \ref{subsubsec:insc}. Note that this list does not exhaust all cases.
It does not contain, for instance, the insertion \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){${\scriptstyle \circ}$}} \put(1,0){\makebox(1,1){${\scriptstyle \rightarrow}$}} \multiput(2,0)(1,0){2}{\line(0,1){2}} \multiput(2,0)(0,1){3}{\line(1,0){1}} \put(2,0){\makebox(1,1){${\scriptstyle \circ}$}} \put(2,1){\makebox(1,1){${\scriptstyle \circ}$}} \end{picture} \end{math}. As far as this case is concerned, we see that in Rule \ref{rule:typeB} neither $\mathcal{T} (b_1)$ nor $\mathcal{T} (b_2)$ contains more than one $\circ$. Thus we do not encounter a situation where more than two $\circ$'s appear in the procedure. (See Proposition \ref{pr:atmosttworows}.) \noindent \ToBOX{A0}{\alpha} \raisebox{1.25mm}{,} \noindent \ToDOMINO{A1}{\alpha}{\beta}{\alpha}{\beta} \raisebox{4mm}{if $\alpha < \beta$ or $(\alpha,\beta)=(\circ,\circ)$,} \noindent \ToYOKODOMINO{B0}{\beta}{\alpha}{\beta}{\alpha} \raisebox{1.25mm}{if $\alpha \le \beta$ and $(\alpha,\beta) \ne (\circ,\circ)$,} \noindent \ToHOOKnn{B1}{\alpha}{\gamma}{\beta}{\gamma}{\alpha}{\beta} \raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$ and $(\beta,\gamma) \ne (\circ,\circ)$,} \noindent \ToHOOKnn{B2}{\beta}{\gamma}{\alpha}{\beta}{\alpha}{\gamma} \raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$ and $(\alpha,\beta) \ne (\circ,\circ)$,} \noindent \ToHOOKnn{B3}{\circ}{\overline{x}}{\circ}{\overline{x}}{\circ}{\circ} \raisebox{4mm}{,} \noindent \ToHOOKnn{B4}{\circ}{\circ}{x}{\circ}{x}{\circ} \raisebox{4mm}{,} \noindent \ToHOOKll{B5}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1}{\beta} \raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne 1$,} \noindent \ToHOOKln{B6}{\beta}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}} \raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n$,} \noindent
\ToHOOKnn{B7}{\circ}{\overline{n}}{n}{\overline{n}}{n}{\circ} \raisebox{4mm}{.} \noindent \subsubsection{Column insertion and $U_q (B_n)$ crystal morphism} To illustrate Proposition \ref{pr:morphgen} let us check a morphism of the $U_q(B_3)$ crystal $B(\Lambda_2) \otimes B(\Lambda_1)$ by taking an example. Let $\psi$ be the map that is similarly defined as in Section \ref{subsubsec:insc} for type $C$ case. \begin{example} \label{ex:morB} \begin{displaymath} \begin{CD} \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$\circ$}} \put(0,0){\makebox(1,1){$\ol{3}$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$\circ$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \end{picture} @>\text{$\tilde{e}_3$}>> \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$\circ$}} \put(0,0){\makebox(1,1){$\ol{3}$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$3$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \end{picture} @>\text{$\tilde{e}_3$}>> \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$\circ$}} \put(0,0){\makebox(1,1){$\circ$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$3$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \end{picture} @>\text{$\tilde{e}_3$}>> \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$3$}} \put(0,0){\makebox(1,1){$\circ$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$3$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} 
\end{picture} \\ @VV\text{$\psi$}V @VV\text{$\psi$}V @VV\text{$\psi$}V @VV\text{$\psi$}V \\ \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,1){\makebox(1,1){$\circ$}}\put(1,1){\makebox(1,1){$\ol{3}$}} \put(0,0){\makebox(1,1){$\circ$}} \end{picture} @>\text{$\tilde{e}_3$}>> \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$\ol{3}$}} \put(0,0){\makebox(1,1){$\circ$}} \end{picture} @>\text{$\tilde{e}_3$}>> \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$\circ$}} \put(0,0){\makebox(1,1){$\circ$}} \end{picture} @>\text{$\tilde{e}_3$}>> \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$3$}} \put(0,0){\makebox(1,1){$\circ$}} \end{picture} \end{CD} \end{displaymath} \vskip3ex \noindent Here the $\psi$'s are given by Case B3, B7, B4 and B2 column insertions, respectively from left to right. \end{example} \subsubsection{\mathversion{bold}Inverse insertion for $B_n$ \cite{B2}} \label{subsubsec:invinsb} We give a list of inverse column insertions on semistandard $B$ tableaux that are sufficient for our purpose. For the interpretation of the pictorial equations in the list, see the remarks in Section \ref{subsubsec:invinsc}. 
\par
\noindent
\FromYOKODOMINO{C0}{\beta}{\alpha}{\beta}{\alpha} \raisebox{1.25mm}{if $\alpha \le \beta$ and $(\alpha,\beta) \ne (\circ,\circ)$,}
\noindent
\FromHOOKnn{C1}{\gamma}{\alpha}{\beta}{\alpha}{\gamma}{\beta} \raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$ and $(\beta,\gamma) \ne (\circ,\circ)$,}
\noindent
\FromHOOKnn{C2}{\beta}{\alpha}{\gamma}{\beta}{\gamma}{\alpha} \raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$ and $(\alpha,\beta) \ne (\circ,\circ)$,}
\noindent
\FromHOOKnn{C3}{\overline{x}}{\circ}{\circ}{\circ}{\overline{x}}{\circ} \raisebox{4mm}{,}
\noindent
\FromHOOKnn{C4}{\circ}{x}{\circ}{\circ}{\circ}{x} \raisebox{4mm}{,}
\noindent
\FromHOOKnl{C5}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}}{\beta} \raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n$,}
\noindent
\FromHOOKll{C6}{\beta}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1} \raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne 1$,}
\noindent
\FromHOOKnn{C7}{\overline{n}}{n}{\circ}{\circ}{\overline{n}}{n} \raisebox{4mm}{.}
\subsubsection{\mathversion{bold} Column bumping lemma for $B_n$}
\label{subsubsec:cblB}
The aim of this subsection is to give a simple result on successive insertions of two letters into a tableau (Corollary \ref{cor:bcblxxx}). This result will be used in the proof of the main theorem (Theorem \ref{th:main3}). The corollary follows from Lemma \ref{lem:bcblxx}, which is (a special case of) the {\em column bumping lemma}, whose claim is almost the same as that of the original lemma for usual tableaux (\cite{F}, Exercise 3 of Appendix A). We restrict ourselves to the situation where only semistandard $B$ tableaux with at most two rows appear in the course of the column insertions. We consider the column insertion of a letter $\alpha$ into a tableau $T$. The letter $\alpha$ is inserted into the leftmost column of $T$.
According to the rules, $\alpha$ is set in that column, bumping a letter if possible. The bumped letter is then inserted into the next column to the right. The procedure continues until we reach Case A0 or A1. When a letter is inserted into a tableau, we can define a {\em bumping route}: the collection of boxes in the new tableau whose letters were set in the course of the insertion. In each column there is at most one such box, so we may regard the bumping route as a path running from left to right. In the classification of the column insertions, we regard the inserted letter as set in the first row in Cases A0, B0, B2, B4, B6 and B7, and as set in the second row in the other cases.
\begin{example}
Here we give an example of a column insertion and its resulting bumping route in a $B_3$ tableau.
\begin{displaymath}
\setlength{\unitlength}{5mm}
\begin{picture}(4,4)(0,1)
\multiput(0,2)(1,0){5}{\line(0,1){2}}
\multiput(0,2)(0,1){3}{\line(1,0){4}}
\put(0,3){\makebox(1,1){$1$}}
\put(1,3){\makebox(1,1){$2$}}
\put(2,3){\makebox(1,1){$\circ$}}
\put(3,3){\makebox(1,1){$\ol{3}$}}
\put(0,2){\makebox(1,1){$3$}}
\put(1,2){\makebox(1,1){$3$}}
\put(2,2){\makebox(1,1){$\ol{3}$}}
\put(3,2){\makebox(1,1){$\ol{2}$}}
\put(0,1){\makebox(1,1){$\uparrow$}}
\put(0,0){\makebox(1,1){$2$}}
\end{picture}
\quad \Rightarrow \quad
\begin{picture}(4,4)(0,1)
\multiput(0,2)(1,0){5}{\line(0,1){2}}
\multiput(0,2)(0,1){3}{\line(1,0){4}}
\put(0,3){\makebox(1,1){$1$}}
\put(1,3){\makebox(1,1){$2$}}
\put(2,3){\makebox(1,1){$\circ$}}
\put(3,3){\makebox(1,1){$\ol{3}$}}
\put(0,2){\makebox(1,1){$2$}}
\put(0,2){\makebox(1,1){$\bigcirc$}}
\put(1,2){\makebox(1,1){$3$}}
\put(2,2){\makebox(1,1){$\ol{3}$}}
\put(3,2){\makebox(1,1){$\ol{2}$}}
\put(1,1){\makebox(1,1){$\uparrow$}}
\put(1,0){\makebox(1,1){$3$}}
\end{picture}
\quad \Rightarrow \quad
\begin{picture}(4,4)(0,1)
\multiput(0,2)(1,0){5}{\line(0,1){2}}
\multiput(0,2)(0,1){3}{\line(1,0){4}}
\put(0,3){\makebox(1,1){$1$}}
\put(1,3){\makebox(1,1){$2$}}
\put(2,3){\makebox(1,1){$\circ$}} \put(3,3){\makebox(1,1){$\ol{3}$}} \put(0,2){\makebox(1,1){$2$}} \put(0,2){\makebox(1,1){$\bigcirc$}} \put(1,2){\makebox(1,1){$3$}} \put(1,2){\makebox(1,1){$\bigcirc$}} \put(2,2){\makebox(1,1){$\ol{3}$}} \put(3,2){\makebox(1,1){$\ol{2}$}} \put(2,1){\makebox(1,1){$\uparrow$}} \put(2,0){\makebox(1,1){$3$}} \end{picture} \quad \Rightarrow \quad \begin{picture}(4,4)(0,1) \multiput(0,2)(1,0){5}{\line(0,1){2}} \multiput(0,2)(0,1){3}{\line(1,0){4}} \put(0,3){\makebox(1,1){$1$}} \put(1,3){\makebox(1,1){$2$}} \put(2,3){\makebox(1,1){$3$}} \put(2,3){\makebox(1,1){$\bigcirc$}} \put(3,3){\makebox(1,1){$\ol{3}$}} \put(0,2){\makebox(1,1){$2$}} \put(0,2){\makebox(1,1){$\bigcirc$}} \put(1,2){\makebox(1,1){$3$}} \put(1,2){\makebox(1,1){$\bigcirc$}} \put(2,2){\makebox(1,1){$\circ$}} \put(3,2){\makebox(1,1){$\ol{2}$}} \put(3,1){\makebox(1,1){$\uparrow$}} \put(3,0){\makebox(1,1){$\ol{3}$}} \end{picture} \end{displaymath} \begin{displaymath} \quad \Rightarrow \quad \setlength{\unitlength}{5mm} \begin{picture}(5,4)(0,1) \multiput(0,2)(1,0){5}{\line(0,1){2}} \multiput(0,2)(0,1){3}{\line(1,0){4}} \put(0,3){\makebox(1,1){$1$}} \put(1,3){\makebox(1,1){$2$}} \put(2,3){\makebox(1,1){$3$}} \put(2,3){\makebox(1,1){$\bigcirc$}} \put(3,3){\makebox(1,1){$\ol{3}$}} \put(3,3){\makebox(1,1){$\bigcirc$}} \put(0,2){\makebox(1,1){$2$}} \put(0,2){\makebox(1,1){$\bigcirc$}} \put(1,2){\makebox(1,1){$3$}} \put(1,2){\makebox(1,1){$\bigcirc$}} \put(2,2){\makebox(1,1){$\circ$}} \put(3,2){\makebox(1,1){$\ol{2}$}} \put(4,1){\makebox(1,1){$\uparrow$}} \put(4,0){\makebox(1,1){$\ol{3}$}} \end{picture} \quad \Rightarrow \quad \begin{picture}(5,4)(0,1) \multiput(0,2)(1,0){5}{\line(0,1){2}} \put(5,3){\line(0,1){1}} \multiput(0,3)(0,1){2}{\line(1,0){5}} \put(0,2){\line(1,0){4}} \put(0,3){\makebox(1,1){$1$}} \put(1,3){\makebox(1,1){$2$}} \put(2,3){\makebox(1,1){$3$}} \put(2,3){\makebox(1,1){$\bigcirc$}} \put(3,3){\makebox(1,1){$\ol{3}$}} \put(3,3){\makebox(1,1){$\bigcirc$}} 
\put(0,2){\makebox(1,1){$2$}}
\put(0,2){\makebox(1,1){$\bigcirc$}}
\put(1,2){\makebox(1,1){$3$}}
\put(1,2){\makebox(1,1){$\bigcirc$}}
\put(2,2){\makebox(1,1){$\circ$}}
\put(3,2){\makebox(1,1){$\ol{2}$}}
\put(4,3){\makebox(1,1){$\ol{3}$}}
\put(4,3){\makebox(1,1){$\bigcirc$}}
\end{picture}
\end{displaymath}
\end{example}
\begin{lemma} \label{lem:bcblx}
The bumping route does not move down.
\end{lemma}
\begin{proof}
It suffices to consider the bumping processes occurring on pairs of neighboring columns in the tableau. Our strategy is as follows: we show that if the inserted letter is set in the first row of the left column, then the same occurs in the right column as well. Let us classify the situations on pairs of neighboring columns into five cases.
\begin{enumerate}
\item Suppose that in the following column insertion Case B0 has occurred in the first column.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(5,1.4)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\multiput(2.5,0)(1,0){3}{\line(0,1){1}}
\multiput(2.5,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$\alpha$}}
\put(1,0){\makebox(1.5,1){$\longrightarrow$}}
\put(2.5,0){\makebox(1,1){$\beta$}}
\put(3.5,0){\makebox(1,1){$\gamma$}}
\end{picture}
\end{equation}
Then in the second column Case B0 occurs and Case A1 does not happen. The semistandard condition for a $B$ tableau imposes that $(\beta,\gamma) \ne (\circ,\circ)$ and $\beta \leq \gamma$. Thus if $\beta$ is bumped out from the left column, it certainly bumps $\gamma$ out of the right column.
\item Suppose that in the following column insertion one of Cases B2, B4 or B6 has occurred in the first column.
\begin{equation} \setlength{\unitlength}{5mm} \begin{picture}(2.5,2.4)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){$\alpha$}} \put(1,0){\makebox(1.5,1){$\longrightarrow$}} \multiput(2.5,1)(0,1){2}{\line(1,0){2}} \multiput(2.5,0)(1,0){2}{\line(0,1){2}} \put(2.5,0){\line(1,0){1}} \put(4.5,1){\line(0,1){1}} \put(2.5,0){\makebox(1,1){$\delta$}} \put(2.5,1){\makebox(1,1){$\beta$}} \put(3.5,1){\makebox(1,1){$\gamma$}} \end{picture} \end{equation} Then in the second column Case B0 occurs and Case A1 does not happen. The reason is as follows. Whichever one of the B2, B4 or B6 may have occurred in the first column, the letter bumped out from the first column is always $\beta$. And again we have the semistandard condition between $\beta$ and $\gamma$. \item In the following column insertion Case B7 occurs in the first column. \begin{equation} \setlength{\unitlength}{5mm} \begin{picture}(2.5,2.4)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){$n$}} \put(1,0){\makebox(1.5,1){$\longrightarrow$}} \multiput(2.5,1)(0,1){2}{\line(1,0){2}} \multiput(2.5,0)(1,0){2}{\line(0,1){2}} \put(2.5,0){\line(1,0){1}} \put(4.5,1){\line(0,1){1}} \put(2.5,0){\makebox(1,1){$\ol{n}$}} \put(2.5,1){\makebox(1,1){$\circ$}} \put(3.5,1){\makebox(1,1){$\gamma$}} \end{picture} \end{equation} Then in the second column Case B0 occurs and Case A1 does not happen. The letter bumped out from the first column is $\ol{n}$. Due to the semistandard condition we have $\gamma \geq \ol{n}$, hence the claim follows. \item Suppose that in the following column insertion one of the Cases B2, B4 or B6 has occurred in the first column. 
\begin{equation} \setlength{\unitlength}{5mm} \begin{picture}(2.5,2.4)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \multiput(2.5,0)(1,0){3}{\line(0,1){1}} \multiput(2.5,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$\alpha$}} \put(1,0){\makebox(1.5,1){$\longrightarrow$}} \multiput(2.5,0)(0,1){3}{\line(1,0){2}} \multiput(2.5,0)(1,0){3}{\line(0,1){2}} \put(2.5,0){\makebox(1,1){$\delta$}} \put(2.5,1){\makebox(1,1){$\beta$}} \put(3.5,0){\makebox(1,1){$\varepsilon$}} \put(3.5,1){\makebox(1,1){$\gamma$}} \end{picture} \end{equation} Then in the second column Cases B1, B3 and B5 do not happen. The reason is as follows. The letter bumped out from the first column is always $\beta$. Since $\beta \leq \gamma$, B1 does not happen. Since $(\beta,\gamma) \ne (\circ,\circ)$, B3 does not happen. B5 does not happen since $(\beta,\gamma,\varepsilon) \ne (x,x,\ol{x})$, i.e. due to the absence of the $(x,x)$-configuration (\ref{eq:absenceofxxconf}). \item In the following column insertion Case B7 occurs in the first column. \begin{equation} \setlength{\unitlength}{5mm} \begin{picture}(2.5,2.4)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \multiput(2.5,0)(1,0){3}{\line(0,1){1}} \multiput(2.5,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$n$}} \put(1,0){\makebox(1.5,1){$\longrightarrow$}} \multiput(2.5,0)(0,1){3}{\line(1,0){2}} \multiput(2.5,0)(1,0){3}{\line(0,1){2}} \put(2.5,0){\makebox(1,1){$\ol{n}$}} \put(2.5,1){\makebox(1,1){$\circ$}} \put(3.5,0){\makebox(1,1){$\varepsilon$}} \put(3.5,1){\makebox(1,1){$\gamma$}} \end{picture} \end{equation} Then in the second column Cases B1, B3 and B5 do not happen. The letter bumped out from the first column is $\ol{n}$. Due to the semistandard condition we have $\gamma \geq \ol{n}$, hence the claim follows. \end{enumerate} \end{proof} \begin{lemma} \label{lem:bcblxx} Let $\alpha' \leq \alpha$ and $(\alpha,\alpha') \ne (\circ,\circ)$. 
Let $R$ be the bumping route made when $\alpha$ is inserted into $T$, and $R'$ be the bumping route made when $\alpha'$ is inserted into $\left( \alpha \longrightarrow T \right)$. Then $R'$ does not lie below $R$.
\end{lemma}
\begin{proof}
First we consider the case where the bumping route lies only in the first row. Suppose that, when $\alpha$ was inserted into the tableau $T$, it was set in the first row of the first column. We are to show that when $\alpha'$ is inserted, it will also be set in the first row of the first column. If $T$ is empty (resp.~has only one row), the insertion of $\alpha$ should have been A0 (resp.~B0). In either case we have B0 when $\alpha'$ is inserted, hence the claim is true. Suppose $T$ has two rows. By assumption B2, B4, B6 or B7 has occurred when $\alpha$ was inserted. We see that if B4, B6 or B7 has occurred, then B2 will occur when $\alpha'$ is inserted. Thus it is enough to show that if B2 has occurred, then none of B1, B3 and B5 happens when $\alpha'$ is inserted. Since $\alpha' \leq \alpha$, B1 does not happen. Since $(\alpha,\alpha') \ne (\circ,\circ)$, B3 does not happen. B5 does not happen, since the first column does not have the entry ${x \atop \overline{x}}$ as the result of the B2-type insertion of $\alpha$. Second we consider the case where the bumping route $R$ lies across the first and the second rows. Suppose that from the leftmost column to the $(i-1)$-th column the bumping route lies in the second row, and from the $i$-th column to the rightmost column it lies in the first row. Let us call the position of the vertical line between the $(i-1)$-th and the $i$-th columns the {\em crossing point} of $R$. It is unique due to Lemma \ref{lem:bcblx}. We call the analogous position of $R'$ its crossing point. We are to show that the crossing point of $R'$ does not lie strictly to the right of the crossing point of $R$.
Let the situation around the crossing point of $R$ be
\begin{equation}
\label{eq:crptb}
\setlength{\unitlength}{5mm}
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){3}{\line(0,1){2}}
\multiput(1,0)(0,1){3}{\line(1,0){2}}
\multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\xi$}}
\put(2,1){\makebox(1,1){$\eta$}}
\end{picture}
\quad \mbox{or} \quad
\setlength{\unitlength}{5mm}
\begin{picture}(4,2.4)(0,0.3)
\multiput(1,0)(1,0){2}{\line(0,1){2}}
\put(3,1){\line(0,1){1}}
\multiput(1,1)(0,1){2}{\line(1,0){2}}
\put(1,0){\line(1,0){1}}
\multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\multiput(3,1)(0,1){2}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}}
\put(1,0){\makebox(1,1){$\xi$}}
\put(2,1){\makebox(1,1){$\eta$}}
\end{picture}
\, \mbox{.}
\end{equation}
During the insertion of $\alpha$ that led to these configurations, let $\eta'$ be the letter that was bumped out of the left column. Claim 1: $\xi \leq \eta' \le \eta$ and $(\xi,\eta) \ne (\circ,\circ)$. To see this, note that in the left column B1, B3 or B5 has occurred when $\alpha$ was inserted. We have $\xi \leq \eta'$ and $(\xi,\eta') \ne (\circ,\circ)$ (B1), or $\xi < \eta'$ (B3, B5). In the right column A0, B0, B2, B4, B6 or B7 has subsequently occurred. We have $\eta' = \eta$ (A0, B0, B2, B4, B7), or $\eta' < \eta$ (B6). In any case we have $\xi \le \eta' \le \eta$ and $(\xi,\eta) \ne (\circ,\circ)$. Claim 2: In (\ref{eq:crptb}) the following configurations do not exist.
\begin{equation} \setlength{\unitlength}{5mm} \begin{picture}(4,2.4)(0,0.3) \multiput(1,0)(1,0){3}{\line(0,1){2}} \multiput(1,0)(0,1){3}{\line(1,0){2}} \multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \multiput(3,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \put(1,0){\makebox(1,1){$\ol{x}$}} \put(1,1){\makebox(1,1){$x$}} \put(2,1){\makebox(1,1){$\ol{x}$}} \end{picture} \, \mbox{,} \, \setlength{\unitlength}{5mm} \begin{picture}(4,2.4)(0,0.3) \multiput(1,0)(1,0){2}{\line(0,1){2}} \put(3,1){\line(0,1){1}} \multiput(1,1)(0,1){2}{\line(1,0){2}} \put(1,0){\line(1,0){1}} \multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \multiput(3,1)(0,1){2}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \put(1,0){\makebox(1,1){$\ol{x}$}} \put(1,1){\makebox(1,1){$x$}} \put(2,1){\makebox(1,1){$\ol{x}$}} \end{picture} \, \mbox{,} \, \begin{picture}(4,2.4)(0,0.3) \multiput(1,0)(1,0){3}{\line(0,1){2}} \multiput(1,0)(0,1){3}{\line(1,0){2}} \multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \multiput(3,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \put(1,0){\makebox(1,1){$x$}} \put(2,0){\makebox(1,1){$\ol{x}$}} \put(2,1){\makebox(1,1){$x$}} \end{picture} \, \mbox{,} \, \begin{picture}(4,2.4)(0,0.3) \multiput(1,0)(1,0){3}{\line(0,1){2}} \multiput(1,0)(0,1){3}{\line(1,0){2}} \multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \multiput(3,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \put(1,0){\makebox(1,1){$\circ$}} \put(2,1){\makebox(1,1){$\circ$}} \end{picture} \, \mbox{or} \, \setlength{\unitlength}{5mm} \begin{picture}(4,2.4)(0,0.3) \multiput(1,0)(1,0){2}{\line(0,1){2}} \put(3,1){\line(0,1){1}} \multiput(1,1)(0,1){2}{\line(1,0){2}} \put(1,0){\line(1,0){1}} \multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \multiput(3,1)(0,1){2}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \put(1,0){\makebox(1,1){$\circ$}} \put(2,1){\makebox(1,1){$\circ$}} \end{picture} \, \mbox{.} 
\label{eq:bforbiddenconfs} \end{equation} Due to Claim 1, the first and the second configurations can exist only if B1 with $\alpha= x, \gamma=\eta'$ happens in the left column and $\xi = \eta'= \eta = \overline{x}$. But $(\alpha,\gamma) = (x,\overline{x})$ is not compatible with B1. The third configuration can exist only if B6 happens in the right column and $\xi= \eta'= \eta= x$ by Claim 1. But we see from the proof of Claim 1 that B6 actually happens only when $\eta' < \eta$. The fourth and the fifth are forbidden since $(\xi,\eta) \neq (\circ,\circ)$ by Claim 1. Claim 2 is proved. Let the situation around the crossing point of $R$ be one of (\ref{eq:crptb}) excluding (\ref{eq:bforbiddenconfs}). When inserting $\alpha'$, suppose in the left column of the crossing point, B1, B3 or B5 has occurred. Let $\xi'$ be the letter bumped out therefrom. Claim 3: $\xi' \leq \eta$ and $(\xi',\eta) \ne (\circ,\circ)$. We divide the check into two cases. a) If B1 or B3 has occurred in the left column, we have $\xi' = \xi$. Thus the assertion follows from Claim 1. b) If B5 has occurred, the left column had the entry ${x \atop \ol{x}}$ and we have $\xi' = \ol{x-1}$, $\xi = \ol{x}$. Claim 1 tells $\xi = \ol{x} \le \eta$, and Claim 2 does $\eta \neq \ol{x}$. Therefore we have $\xi' = \ol{x-1} \le \eta$. $(\xi',\eta) \ne (\circ,\circ)$ is obvious. Claim 3 is proved. Now we are ready to finish the proof of the main assertion. Assume the same situation as Claim 3. We should verify that A1, B1, B3 and B5 do not occur in the right column. Claim 3 immediately prohibits A1, B1 and B3 in the right column. Suppose that B5 happens in the right column. It means that $\eta \in \{1,\ldots n\}$, $\xi' \geq \eta$ and the right column had the entry ${\eta \atop \ol{\eta}}$. Since $\xi' \leq \eta$ by Claim 3, we find $\xi' = \eta$, therefore $\xi' \in \{1,\ldots, n\}$. Such $\xi'$ can be bumped out from B1 process only in the left column and not from B3 or B5. It follows that $\xi' = \xi$. 
This leads to the third configuration in (\ref{eq:bforbiddenconfs}), hence a contradiction. Finally we consider the case where the bumping route $R$ lies only in the second row. If $R'$ were to lie below $R$, the tableau would have more than two rows, which is prohibited by Proposition \ref{pr:atmosttworows}.
\end{proof}
\begin{coro} \label{cor:bcblxxx}
Let $\alpha' \leq \alpha$ and $(\alpha,\alpha') \ne (\circ,\circ)$. Suppose that a new box is added at the end of the first row when $\alpha$ is inserted into $T$. Then a new box is also added at the end of the first row when $\alpha'$ is inserted into $\left( \alpha \longrightarrow T \right)$.
\end{coro}
\subsection{\mathversion{bold}Column insertion and inverse insertion for $D_n$}
\label{subsec:cid}
Set an alphabet $\mathcal{X}=\mathcal{A} \sqcup \bar{\mathcal{A}},\, \mathcal{A}=\{ 1,\dots,n\}$ and $\bar{\mathcal{A}}=\{\overline{1},\dots,\overline{n}\}$, with the partial order $1 < 2 < \dots < {n \atop \ol{n}} < \dots < \overline{2} < \overline{1}$.
\subsubsection{Semistandard $D$ tableaux}
\label{subsubsec:ssdt}
Let us consider a {\em semistandard $D$ tableau} made of letters from this alphabet. We follow \cite{KN} for its definition. We present the definition here, but restrict ourselves to special cases that are sufficient for our purpose. Namely, we consider only those tableaux whose shapes have no more than two rows. Thus they have the forms in (\ref{eq:pictureofsst}), with the letters inside the boxes chosen from the alphabet above. The letters obey the conditions (\ref{eq:notdecrease}),\footnote{Note that there is no order between $n$ and $\ol{n}$.} (\ref{eq:absenceofoneonebar}) and the absence of the $(x,x)$-configuration (\ref{eq:absenceofxxconf}), where we now assume $1 \leq x < n$.
They also obey the following conditions: \begin{equation} \alpha_a < \beta_a \quad \mbox{or} \quad (\alpha_a,\beta_a) = (n,\ol{n}) \quad \mbox{or} \quad (\alpha_a,\beta_a) = (\ol{n},n), \end{equation} \begin{equation} (\alpha_a,\alpha_{a+1},\beta_a,\beta_{a+1}) \ne (n-1,n,n,\ol{n-1}), (n-1,\ol{n},\ol{n},\ol{n-1}), \end{equation} \begin{equation} \label{eq:absenceofnnconfd} (\alpha_a,\beta_{a+1}) \ne (n,n),(n,\ol{n}),(\ol{n},n),(\ol{n},\ol{n}). \end{equation} The conditions (\ref{eq:absenceofnnconfd}) are referred to as the absence of the $(n,n)$-configurations. \subsubsection{\mathversion{bold}Column insertion for $D_n$ \cite{B2}} \label{subsubsec:insd} We give a list of column insertions on semistandard $D$ tableaux that are sufficient for our purpose. For the alphabet $\mathcal{X}$, we follow the convention that Greek letters $ \alpha, \beta, \ldots $ belong to $\mathcal{X}$ while Latin letters $x,y,\ldots$ (resp. $\overline{x},\overline{y},\ldots$) belong to $\mathcal{A}$ (resp. $\bar{\mathcal{A}}$). For the interpretation of the pictorial equations in the list, see the remarks in Section \ref{subsubsec:insc}. Note that this list does not exhaust all cases. It does not contain, for instance, the insertion \begin{math} \setlength{\unitlength}{3mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){${\scriptstyle n}$}} \put(1,0){\makebox(1,1){${\scriptstyle \rightarrow}$}} \multiput(2,0)(1,0){2}{\line(0,1){2}} \multiput(2,0)(0,1){3}{\line(1,0){1}} \put(2,0){\makebox(1,1){${\scriptstyle \ol{n}}$}} \put(2,1){\makebox(1,1){${\scriptstyle n}$}} \end{picture} \end{math}. (See Proposition \ref{pr:atmosttworows}.) 
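The only non-total feature of the type $D$ order is the incomparability of $n$ and $\overline{n}$, which is what drives the special cases in the rules below. As a quick sanity check, here is a small Python sketch of the partial order $1 < \dots < n-1 < \{n,\overline{n}\} < \overline{n-1} < \dots < \overline{1}$; the encoding of letters is hypothetical and not part of the paper.

```python
# Hypothetical encoding of the type D alphabet; n and nbar are incomparable.
# A letter is a pair (x, barred); unbarred x -> rank x, barred x -> rank 2N - x,
# so n and nbar share the same rank N.
N = 4  # rank, matching the U_q(D_4) examples in this section

def rank(letter):
    x, barred = letter
    return 2 * N - x if barred else x

def less(a, b):
    """Strict partial order: a < b iff rank(a) < rank(b); n vs nbar is neither."""
    return a != b and rank(a) < rank(b)
```

Here `less((N, False), (N, True))` and `less((N, True), (N, False))` are both false, matching the footnote above that there is no order between $n$ and $\ol{n}$.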
\noindent \ToBOX{A0}{\alpha} \raisebox{1.25mm}{,} \noindent \ToDOMINO{A1}{\alpha}{\beta}{\alpha}{\beta} \raisebox{4mm}{if $\alpha < \beta$ or $(\alpha,\beta)=(n,\overline{n})$ or $(\overline{n},n)$,} \noindent \ToYOKODOMINO{B0}{\beta}{\alpha}{\beta}{\alpha} \raisebox{1.25mm}{if $\alpha \le \beta$,} \noindent \ToHOOKnn{B1}{\alpha}{\gamma}{\beta}{\gamma}{\alpha}{\beta} \raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,} \noindent \ToHOOKnn{B2}{\beta}{\gamma}{\alpha}{\beta}{\alpha}{\gamma} \raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,} \noindent \ToHOOKll{B3}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1}{\beta} \raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne n,1\,$,} \noindent \ToHOOKln{B4}{\beta}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}} \raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n\!-\!1,n\,$,} \noindent \ToHOOKnn{B5}{\mu_1}{\mu_2}{x}{\mu_1}{x}{\mu_2} \raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$ and $x \ne n$,} \noindent \ToHOOKnn{B6}{\mu_1}{\overline{x}}{\mu_2}{\overline{x}}{\mu_1}{\mu_2} \raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$ and $\overline{x} \ne \overline{n}$,} \noindent \ToHOOKllnn{B7}{\mu}{\overline{n\!-\!1}}{n\!-\!1}{\mu}{\mu}{\overline{\mu}} \raisebox{4mm}{if $\mu = n$ or $\overline{n}\;\; (\overline{\mu}:=n$ if $\mu=\overline{n}$),} \noindent \ToHOOKll{B8}{\mu_1}{\mu_2}{\mu_2}{\overline{n\!-\!1}}{n\!-\!1}{\mu_2} \raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$.} \noindent \subsubsection{Column insertion and $U_q (D_n)$ crystal morphism} To illustrate Proposition \ref{pr:morphgen} let us check a morphism of the $U_q(D_4)$ crystal $B(\Lambda_2) \otimes B(\Lambda_1)$ by taking two examples. Let $\psi$ be the map that is similarly defined as in Section \ref{subsubsec:insc} for type $C$ case. 
\begin{example} \label{ex:morD1} \begin{displaymath} \begin{CD} \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$\ol{4}$}} \put(0,0){\makebox(1,1){$4$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$3$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \end{picture} @>\text{$\tilde{f}_4$}>> \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$\ol{4}$}} \put(0,0){\makebox(1,1){$\ol{3}$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$3$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \end{picture} @>\text{$\tilde{f}_4$}>> \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$\ol{4}$}} \put(0,0){\makebox(1,1){$\ol{3}$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$\ol{4}$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \end{picture} \\ @VV\text{$\psi$}V @VV\text{$\psi$}V @VV\text{$\psi$}V \\ \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$\ol{4}$}} \put(0,0){\makebox(1,1){$4$}} \end{picture} @>\text{$\tilde{f}_4$}>> \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,1){\makebox(1,1){$\ol{4}$}}\put(1,1){\makebox(1,1){$\ol{4}$}} \put(0,0){\makebox(1,1){$4$}} \end{picture} @>\text{$\tilde{f}_4$}>> \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) 
\multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,1){\makebox(1,1){$\ol{4}$}}\put(1,1){\makebox(1,1){$\ol{4}$}} \put(0,0){\makebox(1,1){$\ol{3}$}} \end{picture} \end{CD} \end{displaymath} \vskip3ex \noindent Here the $\psi$'s are given by Case B5, B7 and B2 column insertions, respectively. \end{example} \begin{example} \label{ex:morD2} \begin{displaymath} \begin{CD} \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$\ol{4}$}} \put(0,0){\makebox(1,1){$\ol{3}$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$4$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \end{picture} @>\text{$\tilde{e}_4$}>> \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$\ol{4}$}} \put(0,0){\makebox(1,1){$4$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$4$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \end{picture} @>\text{$\tilde{e}_4$}>> \setlength{\unitlength}{5mm} \begin{picture}(3,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){1}} \put(0,1){\makebox(1,1){$3$}} \put(0,0){\makebox(1,1){$4$}} \put(1,0.5){\makebox(1,1){$\otimes$}} \put(2,0.5){\makebox(1,1){$4$}} \multiput(2,0.5)(1,0){2}{\line(0,1){1}} \multiput(2,0.5)(0,1){2}{\line(1,0){1}} \end{picture} \\ @VV\text{$\psi$}V @VV\text{$\psi$}V @VV\text{$\psi$}V \\ \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,1){\makebox(1,1){$\ol{4}$}}\put(1,1){\makebox(1,1){$\ol{3}$}} \put(0,0){\makebox(1,1){$4$}} \end{picture} @>\text{$\tilde{e}_4$}>> \setlength{\unitlength}{5mm} 
\begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$\ol{3}$}} \put(0,0){\makebox(1,1){$4$}} \end{picture} @>\text{$\tilde{e}_4$}>> \setlength{\unitlength}{5mm} \begin{picture}(2,2)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){2}} \put(0,0){\line(1,0){1}} \multiput(0,1)(0,1){2}{\line(1,0){2}} \put(2,1){\line(0,1){1}} \put(0,1){\makebox(1,1){$3$}}\put(1,1){\makebox(1,1){$4$}} \put(0,0){\makebox(1,1){$4$}} \end{picture} \end{CD} \end{displaymath} \vskip3ex \noindent Here the $\psi$'s are given by Case B6, B8 and B1 column insertions, respectively. \end{example} \subsubsection{\mathversion{bold} Inverse insertion for $D_n$ \cite{B2}} \label{subsubsec:invinsd} We give a list of inverse column insertions on semistandard $D$ tableaux that are sufficient for our purpose. For the interpretation of the pictorial equations in the list, see the remarks in Section \ref{subsubsec:invinsc}. 
\par
\noindent
\FromYOKODOMINO{C0}{\beta}{\alpha}{\beta}{\alpha} \raisebox{1.25mm}{if $\alpha \le \beta$,}
\noindent
\FromHOOKnn{C1}{\gamma}{\alpha}{\beta}{\alpha}{\gamma}{\beta} \raisebox{4mm}{if $\alpha < \beta \leq \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,}
\noindent
\FromHOOKnn{C2}{\beta}{\alpha}{\gamma}{\beta}{\gamma}{\alpha} \raisebox{4mm}{if $\alpha \leq \beta < \gamma$ and $(\alpha,\gamma) \ne (x,\overline{x})$,}
\noindent
\FromHOOKnl{C3}{\overline{x}}{x}{\beta}{x\!+\!1}{\overline{x\!+\!1}}{\beta} \raisebox{4mm}{if $x < \beta < \overline{x}$ and $x\ne n\!-\!1,n\,$,}
\noindent
\FromHOOKll{C4}{\beta}{x}{\overline{x}}{\beta}{\overline{x\!-\!1}}{x\!-\!1} \raisebox{4mm}{if $x \le \beta \le \overline{x}$ and $x\ne n,1\,$,}
\noindent
\FromHOOKnn{C5}{\mu_1}{x}{\mu_2}{\mu_1}{\mu_2}{x} \raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$ and $x \ne n$,}
\noindent
\FromHOOKnn{C6}{\overline{x}}{\mu_1}{\mu_2}{\mu_1}{\overline{x}}{\mu_2} \raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$ and $\overline{x} \ne \overline{n}$,}
\noindent
\FromHOOKll{C7}{\mu_1}{\mu_1}{\mu_2}{\mu_1}{\overline{n\!-\!1}}{n\!-\!1} \raisebox{4mm}{if $(\mu_1,\mu_2) = (n,\overline{n})$ or $(\overline{n},n)$,}
\noindent
\FromHOOKllnn{C8}{\overline{n\!-\!1}}{n\!-\!1}{\mu}{\overline{\mu}}{\mu}{\mu} \raisebox{4mm}{if $\mu = n$ or $\overline{n} \;\; (\overline{\mu}:=n$ if $\mu=\overline{n})$.}
\subsubsection{\mathversion{bold} Column bumping lemma for $D_n$}
\label{subsubsec:cblD}
The aim of this subsection is to give a simple result on successive insertions of two letters into a tableau (Corollary \ref{cor:cblxxx}). This result will be used in the proof of the main theorem (Theorem \ref{th:main3}). The corollary follows from the column bumping lemma (Lemma \ref{lem:cblxx}). We restrict ourselves to the situation where only semistandard $D$ tableaux with at most two rows appear in the course of the column insertions.
In the classification of the column insertions, we regard the inserted letter as set in the first row in Cases A0, B0, B2, B4, B5 and B7, and as set in the second row in the other cases. Then the bumping route is defined in the same way as in Section~\ref{subsubsec:cblB}.
\begin{lemma} \label{lem:cblx}
The bumping route does not move down.
\end{lemma}
\begin{proof}
It is enough to consider the following three cases.
\begin{enumerate}
\item Suppose that in the following column insertion Case B0 has occurred in the first column.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(5,1.4)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\multiput(2.5,0)(1,0){3}{\line(0,1){1}}
\multiput(2.5,0)(0,1){2}{\line(1,0){2}}
\put(0,0){\makebox(1,1){$\alpha$}}
\put(1,0){\makebox(1.5,1){$\longrightarrow$}}
\put(2.5,0){\makebox(1,1){$\beta$}}
\put(3.5,0){\makebox(1,1){$\gamma$}}
\end{picture}
\end{equation}
Then in the second column Case B0 occurs and Case A1 does not happen. The semistandard condition for a $D$ tableau imposes that $(\beta,\gamma) \ne (n,\ol{n}), (\ol{n},n)$ and $\beta \leq \gamma$. Thus if $\beta$ is bumped out from the left column, it certainly bumps $\gamma$ out of the right column.
\item Suppose that in the following column insertion one of Cases B2, B4, B5 or B7 has occurred in the first column.
\begin{equation}
\setlength{\unitlength}{5mm}
\begin{picture}(2.5,2.4)(0,0.3)
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\put(0,0){\makebox(1,1){$\alpha$}}
\put(1,0){\makebox(1.5,1){$\longrightarrow$}}
\multiput(2.5,1)(0,1){2}{\line(1,0){2}}
\multiput(2.5,0)(1,0){2}{\line(0,1){2}}
\put(2.5,0){\line(1,0){1}}
\put(4.5,1){\line(0,1){1}}
\put(2.5,0){\makebox(1,1){$\delta$}}
\put(2.5,1){\makebox(1,1){$\beta$}}
\put(3.5,1){\makebox(1,1){$\gamma$}}
\end{picture}
\end{equation}
Then in the second column Case B0 occurs and Case A1 does not happen. The reason is as follows.
Whichever of Cases B2, B4, B5 or B7 has occurred in the first column, the letter bumped out of the first column is always $\beta$. Moreover, we have the semistandard condition between $\beta$ and $\gamma$. \item Suppose that in the following column insertion one of the Cases B2, B4, B5 or B7 has occurred in the first column. \begin{equation} \setlength{\unitlength}{5mm} \begin{picture}(2.5,2.4)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \multiput(2.5,0)(1,0){3}{\line(0,1){1}} \multiput(2.5,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$\alpha$}} \put(1,0){\makebox(1.5,1){$\longrightarrow$}} \multiput(2.5,0)(0,1){3}{\line(1,0){2}} \multiput(2.5,0)(1,0){3}{\line(0,1){2}} \put(2.5,0){\makebox(1,1){$\delta$}} \put(2.5,1){\makebox(1,1){$\beta$}} \put(3.5,0){\makebox(1,1){$\varepsilon$}} \put(3.5,1){\makebox(1,1){$\gamma$}} \end{picture} \end{equation} Then in the second column Cases B1, B3, B6 and B8 do not happen. The reason is as follows. As in the previous case, the letter bumped out of the first column is always $\beta$. Since $\beta \leq \gamma$, B1 does not happen. Since $(\beta,\gamma) \ne (n,\ol{n}), (\ol{n},n)$, B6 and B8 do not happen. B3 does not happen since $(\beta,\gamma,\varepsilon) \ne (x,x,\ol{x})$, i.e. due to the absence of the $(x,x)$-configuration (\ref{eq:absenceofxxconf}). \end{enumerate} \end{proof} \begin{lemma} \label{lem:cblxx} Let $\alpha' \leq \alpha$, in particular $(\alpha,\alpha') \ne (n,\ol{n}),(\ol{n},n)$. Let $R$ be the bumping route that is made when $\alpha$ is inserted into $T$, and $R'$ be the bumping route that is made when $\alpha'$ is inserted into $\left( \alpha \longrightarrow T \right)$. Then $R'$ does not lie below $R$. \end{lemma} \begin{proof} First we consider the case where the bumping route lies only in the first row. Suppose that, when $\alpha$ was inserted into the tableau $T$, it was set in the first row in the first column.
We are to show that when $\alpha'$ is inserted, it will also be set in the first row in the first column. If $T$ is empty (resp.~has only one row), the insertion of $\alpha$ must have been A0 (resp.~B0). In either case we have B0 when $\alpha'$ is inserted, hence the claim is true. Suppose $T$ has two rows. By assumption B2, B4, B5 or B7 has occurred when $\alpha$ was inserted. We see the following: a) if B7 has occurred, then B5 will occur when $\alpha'$ is inserted; b) if B5 or B4 has occurred, then B2 will occur when $\alpha'$ is inserted. Thus it is enough to show that if B2 has occurred, then B1, B3, B6 and B8 do not happen when $\alpha'$ is inserted. Since $\alpha' \leq \alpha$, B1 does not happen. Since $(\alpha , \alpha') \ne (n,\ol{n}),(\ol{n},n)$, B6 and B8 do not happen. B3 does not happen, since the first column does not have the entry ${x \atop \overline{x}}$ as the result of the B2 type insertion of $\alpha$. Second, we consider the case where the bumping route $R$ lies across the first and the second rows. Suppose that from the leftmost column to the $(i-1)$-th column the bumping route lies in the second row, and from the $i$-th column to the rightmost column it lies in the first row. As in the type $B$ case, let us call the position of the vertical line between the $(i-1)$-th and the $i$-th columns the crossing point of $R$. It is unique due to Lemma \ref{lem:cblx}. We call the analogous position of $R'$ its crossing point. We are to show that the crossing point of $R'$ does not lie strictly to the right of the crossing point of $R$.
Let the situation around the crossing point of $R$ be \begin{equation} \label{eq:crptd} \setlength{\unitlength}{5mm} \begin{picture}(4,2.4)(0,0.3) \multiput(1,0)(1,0){3}{\line(0,1){2}} \multiput(1,0)(0,1){3}{\line(1,0){2}} \multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \multiput(3,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \put(1,0){\makebox(1,1){$\xi$}} \put(2,1){\makebox(1,1){$\eta$}} \end{picture} \quad \mbox{or} \quad \setlength{\unitlength}{5mm} \begin{picture}(4,2.4)(0,0.3) \multiput(1,0)(1,0){2}{\line(0,1){2}} \put(3,1){\line(0,1){1}} \multiput(1,1)(0,1){2}{\line(1,0){2}} \put(1,0){\line(1,0){1}} \multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \multiput(3,1)(0,1){2}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \put(1,0){\makebox(1,1){$\xi$}} \put(2,1){\makebox(1,1){$\eta$}} \end{picture} \, \mbox{.} \end{equation} During the insertion of $\alpha$ that led to these configurations, let $\eta'$ be the letter that was bumped out of the left column. Claim 1: $\xi \leq \eta$ and $(\xi,\eta) \ne (n,\ol{n}),(\ol{n},n)$. To see this, note that in the left column B1, B3, B6 or B8 has occurred when $\alpha$ was inserted. We have $\xi \leq \eta'$ (B1) or $\xi < \eta'$ (B3, B6, B8). In the right column A0, B0, B2, B4, B5 or B7 has subsequently occurred. We have $\eta' = \eta$ (A0, B0, B2, B5), or $\eta' < \eta$ (B4, B7). In any case we have $\xi \leq \eta$ and $(\xi,\eta) \ne (n,\ol{n}),(\ol{n},n)$. Claim 2: In (\ref{eq:crptd}) the following configurations do not exist.
\begin{equation} \setlength{\unitlength}{5mm} \begin{picture}(4,2.4)(0,0.3) \multiput(1,0)(1,0){3}{\line(0,1){2}} \multiput(1,0)(0,1){3}{\line(1,0){2}} \multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \multiput(3,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \put(1,0){\makebox(1,1){$\ol{x}$}} \put(1,1){\makebox(1,1){$x$}} \put(2,1){\makebox(1,1){$\ol{x}$}} \end{picture} \quad \mbox{,} \quad \setlength{\unitlength}{5mm} \begin{picture}(4,2.4)(0,0.3) \multiput(1,0)(1,0){2}{\line(0,1){2}} \put(3,1){\line(0,1){1}} \multiput(1,1)(0,1){2}{\line(1,0){2}} \put(1,0){\line(1,0){1}} \multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \multiput(3,1)(0,1){2}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \put(1,0){\makebox(1,1){$\ol{x}$}} \put(1,1){\makebox(1,1){$x$}} \put(2,1){\makebox(1,1){$\ol{x}$}} \end{picture} \quad \mbox{,} \quad \begin{picture}(4,2.4)(0,0.3) \multiput(1,0)(1,0){3}{\line(0,1){2}} \multiput(1,0)(0,1){3}{\line(1,0){2}} \multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \multiput(3,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \put(1,0){\makebox(1,1){$x$}} \put(2,0){\makebox(1,1){$\ol{x}$}} \put(2,1){\makebox(1,1){$x$}} \end{picture} \quad \mbox{or} \quad \begin{picture}(4,2.4)(0,0.3) \multiput(1,0)(1,0){3}{\line(0,1){2}} \multiput(1,0)(0,1){3}{\line(1,0){2}} \multiput(0.1,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \multiput(3,0)(0,1){3}{ \multiput(0,0)(0.2,0){5}{\line(1,0){0.1}}} \put(1,0){\makebox(1,1){$\ol{n}$}} \put(2,0){\makebox(1,1){$n$}} \put(2,1){\makebox(1,1){$\ol{n}$}} \end{picture} \, \mbox{.} \label{eq:forbiddenconfs} \end{equation} Due to Claim 1, the first and the second configurations can exist only if B1 with $\alpha= x, \gamma=\eta'$ happens in the left column and $\xi = \eta'= \eta = \overline{x}$. But $(\alpha,\gamma) = (x,\overline{x})$ is not compatible with B1. 
The third (resp.~fourth) configuration can exist only if B4 (resp.~B7) happens in the right column and $\xi= \eta'= \eta= x \mbox{ (resp.~$\ol{n}$)}$ by Claim 1. But we see from the proof of Claim 1 that B4 (resp.~B7) actually happens only when $\eta' < \eta$. Claim 2 is proved. Let the situation around the crossing point of $R$ be one of (\ref{eq:crptd}) excluding (\ref{eq:forbiddenconfs}). When inserting $\alpha'$, suppose that in the left column of the crossing point B1, B3, B6 or B8 has occurred. Let $\xi'$ be the letter bumped out therefrom. Claim 3: $\xi' \leq \eta$ and $(\xi',\eta) \ne (n,\ol{n}),(\ol{n},n)$. We divide the check into three cases. a) If B1 or B6 has occurred in the left column, we have $\xi' = \xi$. Thus the assertion follows from Claim 1. b) If B3 has occurred, the left column had the entry ${x \atop \ol{x}}$ and we have $\xi' = \ol{x-1}$, $\xi = \ol{x}$. Claim 1 tells us that $\xi = \ol{x} \le \eta$, and Claim 2 that $\eta \neq \ol{x}$. Therefore we have $\xi' = \ol{x-1} \le \eta$. c) If B8 has occurred, we have $\xi' = \ol{n-1}$ for either $\xi = n$ or $\xi = \ol{n}$. If $\xi = n \mbox{ (resp. $\ol{n}$)}$, the entry on the left of $\eta$ was $\ol{n} \mbox{ (resp. $n$)}$, therefore $\eta \geq \ol{n} \mbox{ (resp. $n$)}$. On the other hand, Claim 1 tells us that $(\xi,\eta) \ne (n,\ol{n}),(\ol{n},n)$. Thus we have $\eta \geq \ol{n-1}$. Claim 3 is proved. Now we are ready to finish the proof of the main assertion. Assume the same situation as Claim 3. We should verify that A1, B1, B3, B6 and B8 do not occur in the right column. Claim 3 immediately prohibits A1, B1, B6 and B8 in the right column. Suppose that B3 happens in the right column. It means that $\eta \in \{1,\ldots, n\}$, $\xi' \geq \eta$ and the right column had the entry ${\eta \atop \ol{\eta}}$. Since $\xi' \leq \eta$ by Claim 3, we find $\xi' = \eta$, therefore $\xi' \in \{1,\ldots, n\}$. Such a $\xi'$ can be bumped out of the left column only by the B1 process, not by B3, B6 or B8.
It follows that $\xi' = \xi$. This leads to the third configuration in (\ref{eq:forbiddenconfs}), hence a contradiction. Finally we consider the case where the bumping route $R$ lies only in the second row. If $R'$ lies below $R$ the tableau should have more than two rows, which is prohibited by Proposition \ref{pr:atmosttworows}. \end{proof} \begin{coro} \label{cor:cblxxx} Let $\alpha' \leq \alpha$, in particular $(\alpha,\alpha') \ne (n,\ol{n}),(\ol{n},n)$. Suppose that a new box is added at the end of the first row when $\alpha$ is inserted into $T$. Then a new box is added also at the end of the first row when $\alpha'$ is inserted into $\left( \alpha \longrightarrow T \right)$. \end{coro} \subsection{\mathversion{bold}Main theorem : $B^{(1)}_n$ and $D^{(1)}_n$ cases} \label{subsec:isoruletypeb} Given $b_1 \otimes b_2 \in B_{l} \otimes B_{k}$, we define the element $b'_2 \otimes b'_1 \in B_{k} \otimes B_{l}$ and $l',k', m \in {\mathbb Z}_{\ge 0}$ by the following rule. \begin{rules}\label{rule:typeB} \par\noindent Set $z = \min(\sharp\,\fbx{1} \text{ in }{\mathcal T}(b_1),\, \sharp\,\fbx{\ol{1}} \text{ in }{\mathcal T}(b_2))$. Thus ${\mathcal T}(b_1)$ and ${\mathcal T}(b_2)$ can be depicted by \[ {\mathcal T}(b_1) = \setlength{\unitlength}{5mm} \begin{picture}(7.5,1.4)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){7}} \put(0,0){\line(0,1){1}} \put(3,0){\line(0,1){1}} \put(7,0){\line(0,1){1}} \put(3,0){\makebox(4,1){$T_*$}} \put(0,0){\makebox(3,1){$1\cdots 1$}} \put(0,0.9){\makebox(3,1){$z$}} \end{picture},\;\; {\mathcal T}(b_2) = \setlength{\unitlength}{5mm} \begin{picture}(6.5,1.4)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\line(0,1){1}} \put(0.9,0){\line(0,1){1}} \put(2,0){\line(0,1){1}} \put(3,0){\line(0,1){1}} \put(6,0){\line(0,1){1}} \put(0,0){\makebox(1,1){$v_{1}$}} \put(1,0){\makebox(1,1){$\cdots$}} \put(2,0){\makebox(1,1){$v_{k'}$}} \put(3,0){\makebox(3,1){$\ol{1}\cdots\ol{1}$}} \put(3,0.9){\makebox(3,1){$z$}} \end{picture}. 
\] Let $l' = l-z$ and $k' = k-z$, hence $T_*$ is a one-row tableau of length $l'$. Perform the column insertions and define \begin{equation} \label{eq:prodtab} T^{(0)} := (v_1 \longrightarrow ( \cdots ( v_{k'-1} \longrightarrow ( v_{k'} \longrightarrow T_* ) ) \cdots ) ). \end{equation} It has the form (see Proposition \ref{pr:atmosttworows}): \begin{equation} \label{eq:prodtabpic} \setlength{\unitlength}{5mm} \begin{picture}(20,4) \put(5,1.5){\makebox(3,1){$T^{(0)}=$}} \put(8,1){\line(1,0){3.5}} \put(8,2){\line(1,0){9}} \put(8,3){\line(1,0){9}} \put(8,1){\line(0,1){2}} \put(11.5,1){\line(0,1){1}} \put(12.5,2){\line(0,1){1}} \put(17,2){\line(0,1){1}} \put(12.5,2){\makebox(4.5,1){$i_{m+1} \;\cdots\; i_{l'}$}} \put(8,1){\makebox(3,1){$\;\;i_1 \cdots i_m$}} \put(8.5,2){\makebox(3,1){$\;\;j_1 \cdots\cdots j_{k'}$}} \end{picture} \end{equation} where $m$ is the length of the second row, hence that of the first row is $l'+k'-m$. ($0 \le m \le k'$.) Next we bump out $l'$ letters from the tableau $T^{(0)}$ by the reverse bumping algorithm. For the boxes containing $i_{l'}, i_{l'-1}, \ldots, i_1$ in the above tableau, we do it first for $i_{l'}$, then for $i_{l'-1}$, and so on. Correspondingly, let $w_{1}$ be the first letter that is bumped out of the leftmost column, $w_2$ the second, and so on. Denote by $T^{(i)}$ the resulting tableau when $w_i$ is bumped out ($1 \le i \le l'$).
Now $b'_1 \in B_l$ and $b'_2 \in B_k$ are uniquely specified by \[ {\mathcal T}(b'_2) = \setlength{\unitlength}{5mm} \begin{picture}(6.5,1.4)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){6}} \put(0,0){\line(0,1){1}} \put(3,0){\line(0,1){1}} \put(6,0){\line(0,1){1}} \put(3,0){\makebox(3,1){$T^{(l')}$}} \put(0,0){\makebox(3,1){$1\cdots 1$}} \put(0,0.9){\makebox(3,1){$z$}} \end{picture},\;\; {\mathcal T}(b'_1) = \setlength{\unitlength}{5mm} \begin{picture}(7.5,1.4)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){7}} \put(0,0){\line(0,1){1}} \put(1.25,0){\line(0,1){1}} \put(2.75,0){\line(0,1){1}} \put(4,0){\line(0,1){1}} \put(7,0){\line(0,1){1}} \put(0,0){\makebox(1.25,1){$w_{1}$}} \put(1.25,0){\makebox(1.5,1){$\cdots$}} \put(2.75,0){\makebox(1.25,1){$w_{l'}$}} \put(4,0){\makebox(3,1){$\ol{1}\cdots\ol{1}$}} \put(4,0.9){\makebox(3,1){$z$}} \end{picture}. \] \end{rules} (End of the Rule) \vskip3ex We normalize the energy function as $H_{B_l B_k}(b_1 \otimes b_2)=0$ for \begin{math} \mathcal{T}(b_1) = \setlength{\unitlength}{5mm} \begin{picture}(3,1.5)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){3}} \multiput(0,0)(3,0){2}{\line(0,1){1}} \put(0,0){\makebox(3,1){$1\cdots 1$}} \put(0,1){\makebox(3,0.5){$\scriptstyle l$}} \end{picture} \end{math} and \begin{math} \mathcal{T}(b_2) = \setlength{\unitlength}{5mm} \begin{picture}(3,1.5)(0,0.3) \multiput(0,0)(0,1){2}{\line(1,0){3}} \multiput(0,0)(3,0){2}{\line(0,1){1}} \put(0,0){\makebox(3,1){$\ol{1}\cdots \ol{1}$}} \put(0,1){\makebox(3,0.5){$\scriptstyle k$}} \end{picture} \end{math} irrespective of $l < k$ or $l \ge k$. Our main result for $U'_q(B^{(1)}_n)$ and $U'_q(D^{(1)}_n)$ is the following. \begin{theorem}\label{th:main3} Given $b_1 \otimes b_2 \in B_l \otimes B_k$, find $b'_2 \otimes b'_1 \in B_k \otimes B_l$ and $l', k', m$ by Rule \ref{rule:typeB} with type $B$ (resp.~type $D$) insertion. 
Let $\iota: B_l \otimes B_k \stackrel{\sim}{\rightarrow} B_k \otimes B_l$ be the isomorphism of $U'_q(B^{(1)}_n)$ (resp.~$U'_q(D^{(1)}_n)$) crystals. Then we have \begin{align*} \iota(b_1\otimes b_2)& = b'_2 \otimes b'_1,\\ H_{B_l B_k}(b_1 \otimes b_2) &= 2\min(l',k')- m. \end{align*} \end{theorem} \noindent Before giving a proof of this theorem, we present two propositions associated with Rule \ref{rule:typeB}. Let the product tableau ${\mathcal T}(b_1) *{\mathcal T}(b_2)$ be given by the $T^{(0)}$ in eq.~(\ref{eq:prodtab}) in Rule \ref{rule:typeB}. We assume that it is indeed a (semistandard $B$ or $D$) tableau. \begin{proposition} \label{pr:atmosttworows} The product tableau ${\mathcal T}(b_1) *{\mathcal T}(b_2)$ made by (\ref{eq:prodtab}) has no more than two rows. \end{proposition} \begin{proof} Let $T_{\bullet}$ be a tableau that appears in the intermediate process of the sequence of the column insertions (\ref{eq:prodtab}). Assume that $T_{\bullet}$ has two rows. We denote by $\alpha$ the letter which we are going to insert into $T_{\bullet}$ in the next step of the sequence, and denote by $\beta$ the letter which resides in the second row of the leftmost column of $T_{\bullet}$. It suffices to show that $\alpha$ does not make a new box in the third row of the leftmost column. In other words, it suffices to show that $\alpha \leq \beta$ and $(\alpha,\beta) \ne (\circ,\circ)$ (resp.~and in particular $(\alpha,\beta) \ne (n,\ol{n}),(\ol{n},n)$) in the $B_n$ (resp.~$D_n$) case. Let us first consider the $B_n$ case. We divide the proof into two cases: (i) $\beta = \circ$; (ii) $\beta \ne \circ$. In case (i), either this $\beta=\circ$ was originally contained in ${\mathcal T}(b_2)$ or this $\beta=\circ$ was made by Case B7 in section \ref{subsubsec:insb}. In any case we see $\alpha \leq n$ (thus $\alpha < \beta$) because of the original arrangement of the letters in ${\mathcal T}(b_2)$. (Note that ${\mathcal T}(b_2)$ did not have more than one $\circ$.)
In case (ii), either this $\beta$ was originally contained in ${\mathcal T}(b_2)$ or this $\beta$ is an $\ol{x+1}$ which had originally been an $\ol{x}$ in ${\mathcal T}(b_2)$ and was then transformed into $\ol{x+1}$ by Case B6 in section \ref{subsubsec:insb}. In any case we see $\alpha \leq \beta$ and $(\alpha,\beta) \ne (\circ,\circ)$. Second, we consider the $D_n$ case. We divide the proof into two cases: (i) $\beta = n, \ol{n}$; (ii) $\beta \ne n, \ol{n}$. In case (i), either this $\beta=n, \ol{n}$ was originally contained in ${\mathcal T}(b_2)$ or this $\beta$ was made by Case B7 in section \ref{subsubsec:insd}. In any case we see $\alpha \leq \beta$, in particular $(\alpha,\beta) \ne (n,\ol{n}),(\ol{n},n)$, because of the original arrangement of the letters in ${\mathcal T}(b_2)$. (Note that ${\mathcal T}(b_2)$ did not contain $n$ and $\ol{n}$ simultaneously.) In case (ii), either this $\beta$ was originally contained in ${\mathcal T}(b_2)$ or this $\beta$ is an $\ol{x+1}$ which had originally been an $\ol{x}$ in ${\mathcal T}(b_2)$ and was then transformed into $\ol{x+1}$ by Case B4 in section \ref{subsubsec:insd}. In any case we see $\alpha \leq \beta$ and $(\alpha,\beta) \ne (n,\ol{n}),(\ol{n},n)$. \end{proof} Let $\mathfrak{g} = B^{(1)}_n \mbox{ or } D^{(1)}_n$ and $\ol{\mathfrak{g}} = B_n \mbox{ or } D_n$. By neglecting zero arrows, the crystal graph of $B_l \otimes B_k$ decomposes into $U_q(\ol{\mathfrak{g}})$ crystals, where only the arrows with indices $i=1,\ldots,n$ remain. Let us regard $b_1 \in B_l$ as an element of the $U_q(\ol{\mathfrak{g}})$ crystal $B(l \Lambda_1)$, and regard $b_2 \in B_k$ as an element of $B(k \Lambda_1)$. Then $b_1 \otimes b_2$ is regarded as an element of the $U_q(\ol{\mathfrak{g}})$ crystal $B(l \Lambda_1) \otimes B(k \Lambda_1)$.
On the other hand, the tableau ${\mathcal T}(b_1) *{\mathcal T}(b_2)$ specifies an element of $B(\lambda)$, which we shall denote by $b_1 * b_2$, where $B(\lambda)$ is a $U_q(\ol{\mathfrak{g}})$ crystal that appears in the decomposition of $B(l \Lambda_1) \otimes B(k \Lambda_1)$. \begin{proposition} \label{pr:crysmorpb} The map $\psi : b_1 \otimes b_2 \mapsto b_1 * b_2$ is a $U_q(\ol{\mathfrak{g}})$ crystal morphism, i.e. the actions of $\tilde{e}_i$ and $\tilde{f}_i$ for $i=1,\ldots,n$ commute with the map $\psi$. \end{proposition} \noindent This proposition is a special case of Proposition~\ref{pr:morphgen}. Note that, although we have removed the $z$ pairs of $1$'s and $\ol{1}$'s from the tableaux by hand, this elimination of the letters is also a part of this rule of column insertions (i.e. \begin{math} (\ol{1} \longrightarrow \setlength{\unitlength}{5mm} \begin{picture}(1,1)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){$1$}} \end{picture} ) = \emptyset \end{math}) followed by the sliding (jeu de taquin) rules \cite{B1,B2}. \par\noindent \begin{proof}[Proof of Theorem \ref{th:main3}] First we consider the isomorphism. We are to show: \begin{enumerate} \item If $b_1 \otimes b_2$ is mapped to $b'_2 \otimes b'_1$ under the isomorphism, then the product tableau ${\mathcal T}(b_1) * {\mathcal T}(b_2)$ is equal to the product tableau ${\mathcal T}(b'_2) * {\mathcal T}(b'_1)$. \item If $k$ and $l$ are specified, we can recover ${\mathcal T}(b'_2) $ and ${\mathcal T}(b'_1)$ from their product tableau by using the algorithm shown in Rule \ref{rule:typeB}. In other words, we can retrieve them by knowing the arrangement of the locations $i_{l'},\ldots,i_1$ of the boxes in (\ref{eq:prodtabpic}) from which we start the reverse bumpings. \end{enumerate} Claim 2 is verified by Corollary \ref{cor:bcblxxx} or \ref{cor:cblxxx}. We consider Claim 1 in the following.
The value of the energy function will be settled at the same time. Thanks to the $U_q(\ol{\mathfrak{g}})$ crystal morphism (Proposition \ref{pr:crysmorpb}), it suffices to prove the theorem for any element in each connected component of the $U_q(\ol{\mathfrak{g}})$ crystal. We take up the $U_q(\ol{\mathfrak{g}})$ highest weight element as such a particular element. There is a special extreme $U_q(\ol{\mathfrak{g}})$ highest weight element \begin{equation} \iota : (l,0,\ldots,0) \otimes (k,0,\ldots,0) \stackrel{\sim}{\mapsto} (k,0,\ldots,0) \otimes (l,0,\ldots,0), \label{eqn:ultrahighest} \end{equation} wherein we find that the two sides are obviously mapped to each other under the $U'_q(\mathfrak{g})$ isomorphism, and that the image of the map is also obviously obtained by Rule \ref{rule:typeB}. Let us assume $l \geq k$. (The other case can be treated in a similar way.) Suppose that $b_1 \otimes b_2 \in B_l \otimes B_k$ is a $U_q(\ol{\mathfrak{g}})$ highest weight element. In general, it has the form: $$b_1\otimes b_2 = (l,0,\ldots,0) \otimes (x_1,x_2,0,\ldots,0,\ol{x}_1),$$ where $x_1, x_2$ and $\ol{x}_1$ are arbitrary as long as $k = x_1 + x_2 + \ol{x}_1$. We are to obtain its image under the isomorphism. Applying \begin{eqnarray*} &&\et{0}^{\ol{x}_1} \et{2}^{x_2+\ol{x}_1} \cdots \et{n-1}^{x_2+\ol{x}_1} \et{n}^{2x_2+2\ol{x}_1} \et{n-1}^{x_2+\ol{x}_1} \cdots \et{2}^{x_2+\ol{x}_1} \et{0}^{x_2+\ol{x}_1} \; \mbox{for $\mathfrak{g} = B^{(1)}_n$}\\ &&\et{0}^{\ol{x}_1} \et{2}^{x_2+\ol{x}_1} \cdots \et{n-1}^{x_2+\ol{x}_1} \et{n}^{x_2+\ol{x}_1} \et{n-2}^{x_2+\ol{x}_1} \cdots \et{2}^{x_2+\ol{x}_1} \et{0}^{x_2+\ol{x}_1} \; \mbox{for $\mathfrak{g} = D^{(1)}_n$} \end{eqnarray*} to both sides of (\ref{eqn:ultrahighest}), we find \begin{displaymath} \iota : (l,0,\ldots,0) \otimes (x_1,x_2,0,\ldots,0,\ol{x}_1) \stackrel{\sim}{\mapsto} (k,0,\ldots,0) \otimes (x_1',x_2,0,\ldots,0,\ol{x}_1). \end{displaymath} Here $x_1' = l-x_2-\ol{x}_1$.
In the course of the application of $\tilde{e}_i$'s, the value of the energy function has changed as $$ H\left((l,0,\ldots,0) \otimes (x_1,x_2,0,\ldots,0,\ol{x}_1)\right) = H\left((l,0,\ldots,0) \otimes (k,0,\ldots,0)\right) - x_2 - 2 \ol{x}_1. $$ (We have omitted the subscripts of the energy function.) Thus according to our normalization we have $H(b_1 \otimes b_2)=2(k- \ol{x}_1)-x_2 $. (Note that the $z$ in Rule \ref{rule:typeB} is now equal to $\ol{x}_1$, hence we have $k' = k-\ol{x}_1$.) On the other hand for this highest element the column insertions lead to a common tableau \begin{displaymath} \setlength{\unitlength}{5mm} \begin{picture}(7,2) \put(0,0.5){\makebox(3,1){$T^{(0)}=$}} \put(3,2){\line(1,0){4}} \put(3,1){\line(1,0){4}} \put(3,0){\line(1,0){3}} \put(3,0){\line(0,1){2}} \put(6,0){\line(0,1){1}} \put(7,1){\line(0,1){1}} \put(3,1){\makebox(4,1){$1\cdots\cdots 1$}} \put(3,0){\makebox(3,1){$2\cdots 2$}} \end{picture} \end{displaymath} whose second row has length $x_2$ (and first row has the length $l+k-x_2-2\ol{x}_1$). This completes the proof. \end{proof} \subsection{Examples} \label{subsec:exBD} \begin{example} $B_5 \otimes B_3 \simeq B_3 \otimes B_5$ for $B^{(1)}_5$. 
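The energy value obtained in the proof above can be checked numerically against the formula of Theorem \ref{th:main3}. The following sketch assumes $l \ge k$ and uses $z=\ol{x}_1$, $l'=l-\ol{x}_1$, $k'=k-\ol{x}_1$ and $m=x_2$ as read off from that proof; the sample values are ours.

```python
# Energy formula of Theorem th:main3.
def H(lp, kp, m):
    return 2 * min(lp, kp) - m

# For the highest weight element treated in the proof (l >= k) we found
# H = 2*(k - x1bar) - x2; check agreement with the theorem, using
# l' = l - x1bar, k' = k - x1bar and m = x2.  Sample values are ours.
for (l, k, x2, x1bar) in [(5, 3, 1, 0), (7, 4, 2, 1)]:
    lp, kp = l - x1bar, k - x1bar
    assert H(lp, kp, x2) == 2 * (k - x1bar) - x2

# The first example of this subsection has l' = 5, k' = 3 and m = 3,
# hence H = 3, matching the value stated there.
assert H(5, 3, 3) == 3
```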
\begin{displaymath} \begin{array}{ccccccc} \setlength{\unitlength}{5mm} \begin{picture}(5,1)(0,0.3) \multiput(0,0)(1,0){6}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){5}} \put(0,0){\makebox(1,1){$5$}} \put(1,0){\makebox(1,1){$5$}} \put(2,0){\makebox(1,1){$\circ$}} \put(3,0){\makebox(1,1){$\ol{5}$}} \put(4,0){\makebox(1,1){$\ol{5}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0,0)(1,0){4}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){3}} \put(0,0){\makebox(1,1){$5$}} \put(1,0){\makebox(1,1){$\circ$}} \put(2,0){\makebox(1,1){$\ol{5}$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0,0)(1,0){4}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){3}} \put(0,0){\makebox(1,1){$5$}} \put(1,0){\makebox(1,1){$\circ$}} \put(2,0){\makebox(1,1){$\ol{5}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(5,1)(0,0.3) \multiput(0,0)(1,0){6}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){5}} \put(0,0){\makebox(1,1){$5$}} \put(1,0){\makebox(1,1){$5$}} \put(2,0){\makebox(1,1){$\circ$}} \put(3,0){\makebox(1,1){$\ol{5}$}} \put(4,0){\makebox(1,1){$\ol{5}$}} \end{picture} \\ & & & & & & \\ \setlength{\unitlength}{5mm} \begin{picture}(5,1)(0,0.3) \multiput(0,0)(1,0){6}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){5}} \put(0,0){\makebox(1,1){$5$}} \put(1,0){\makebox(1,1){$5$}} \put(2,0){\makebox(1,1){$\ol{5}$}} \put(3,0){\makebox(1,1){$\ol{5}$}} \put(4,0){\makebox(1,1){$\ol{5}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0,0)(1,0){4}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){3}} \put(0,0){\makebox(1,1){$4$}} \put(1,0){\makebox(1,1){$4$}} \put(2,0){\makebox(1,1){$\circ$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0,0)(1,0){4}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){3}} \put(0,0){\makebox(1,1){$\circ$}} 
\put(1,0){\makebox(1,1){$\ol{5}$}} \put(2,0){\makebox(1,1){$\ol{5}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(5,1)(0,0.3) \multiput(0,0)(1,0){6}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){5}} \put(0,0){\makebox(1,1){$4$}} \put(1,0){\makebox(1,1){$4$}} \put(2,0){\makebox(1,1){$5$}} \put(3,0){\makebox(1,1){$5$}} \put(4,0){\makebox(1,1){$\ol{5}$}} \end{picture} \\ & & & & & & \\ \setlength{\unitlength}{5mm} \begin{picture}(5,1)(0,0.3) \multiput(0,0)(1,0){6}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){5}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$\circ$}} \put(3,0){\makebox(1,1){$\ol{5}$}} \put(4,0){\makebox(1,1){$\ol{5}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0,0)(1,0){4}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){3}} \put(0,0){\makebox(1,1){$\circ$}} \put(1,0){\makebox(1,1){$\ol{1}$}} \put(2,0){\makebox(1,1){$\ol{1}$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(3,1)(0,0.3) \multiput(0,0)(1,0){4}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){3}} \put(0,0){\makebox(1,1){$1$}} \put(1,0){\makebox(1,1){$1$}} \put(2,0){\makebox(1,1){$\circ$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(5,1)(0,0.3) \multiput(0,0)(1,0){6}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){5}} \put(0,0){\makebox(1,1){$\circ$}} \put(1,0){\makebox(1,1){$\ol{5}$}} \put(2,0){\makebox(1,1){$\ol{5}$}} \put(3,0){\makebox(1,1){$\ol{1}$}} \put(4,0){\makebox(1,1){$\ol{1}$}} \end{picture} \end{array} \end{displaymath} Here we have picked out three examples that are specific to type $B$. The values of the energy function are 3, 5 and 1, respectively. Let us illustrate the procedure of Rule \ref{rule:typeB} in more detail by taking the first example. {}From the left-hand side we perform the column insertions as follows.
\begin{align*} \ol{5} &\rightarrow \setlength{\unitlength}{5mm} \begin{picture}(5,1)(0,0.3) \multiput(0,0)(1,0){6}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){5}} \put(0,0){\makebox(1,1){$5$}} \put(1,0){\makebox(1,1){$5$}} \put(2,0){\makebox(1,1){$\circ$}} \put(3,0){\makebox(1,1){$\ol{5}$}} \put(4,0){\makebox(1,1){$\ol{5}$}} \end{picture} \quad = \quad \setlength{\unitlength}{5mm} \begin{picture}(5,2)(0,0.8) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(2,1)(1,0){4}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){5}} \put(0,0){\line(1,0){1}} \put(0,1){\makebox(1,1){$5$}} \put(1,1){\makebox(1,1){$5$}} \put(2,1){\makebox(1,1){$\circ$}} \put(3,1){\makebox(1,1){$\ol{5}$}} \put(4,1){\makebox(1,1){$\ol{5}$}} \put(0,0){\makebox(1,1){$\ol{5}$}} \end{picture} \\ \circ &\rightarrow \setlength{\unitlength}{5mm} \begin{picture}(5,2)(0,0.8) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(2,1)(1,0){4}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){5}} \put(0,0){\line(1,0){1}} \put(0,1){\makebox(1,1){$5$}} \put(1,1){\makebox(1,1){$5$}} \put(2,1){\makebox(1,1){$\circ$}} \put(3,1){\makebox(1,1){$\ol{5}$}} \put(4,1){\makebox(1,1){$\ol{5}$}} \put(0,0){\makebox(1,1){$\ol{5}$}} \end{picture} \quad = \quad \setlength{\unitlength}{5mm} \begin{picture}(5,2)(0,0.8) \multiput(0,0)(1,0){3}{\line(0,1){2}} \multiput(3,1)(1,0){3}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){5}} \put(0,0){\line(1,0){2}} \put(0,1){\makebox(1,1){$4$}} \put(1,1){\makebox(1,1){$5$}} \put(2,1){\makebox(1,1){$\circ$}} \put(3,1){\makebox(1,1){$\ol{5}$}} \put(4,1){\makebox(1,1){$\ol{5}$}} \put(0,0){\makebox(1,1){$\circ$}} \put(1,0){\makebox(1,1){$\ol{4}$}} \end{picture} \\ 5 &\rightarrow \setlength{\unitlength}{5mm} \begin{picture}(5,2)(0,0.8) \multiput(0,0)(1,0){3}{\line(0,1){2}} \multiput(3,1)(1,0){3}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){5}} \put(0,0){\line(1,0){2}} \put(0,1){\makebox(1,1){$4$}} \put(1,1){\makebox(1,1){$5$}} \put(2,1){\makebox(1,1){$\circ$}} \put(3,1){\makebox(1,1){$\ol{5}$}} 
\put(4,1){\makebox(1,1){$\ol{5}$}} \put(0,0){\makebox(1,1){$\circ$}} \put(1,0){\makebox(1,1){$\ol{4}$}} \end{picture} \quad = \quad \setlength{\unitlength}{5mm} \begin{picture}(5,2)(0,0.8) \multiput(0,0)(1,0){4}{\line(0,1){2}} \multiput(4,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){5}} \put(0,0){\line(1,0){3}} \put(0,1){\makebox(1,1){$4$}} \put(1,1){\makebox(1,1){$5$}} \put(2,1){\makebox(1,1){$\circ$}} \put(3,1){\makebox(1,1){$\ol{5}$}} \put(4,1){\makebox(1,1){$\ol{5}$}} \put(0,0){\makebox(1,1){$5$}} \put(1,0){\makebox(1,1){$\circ$}} \put(2,0){\makebox(1,1){$\ol{4}$}} \end{picture} \end{align*} \vskip3ex \noindent The reverse bumping procedure goes as follows. \begin{align*} T^{(0)} &= \setlength{\unitlength}{5mm} \begin{picture}(5,2)(0,0.8) \multiput(0,0)(1,0){4}{\line(0,1){2}} \multiput(4,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){5}} \put(0,0){\line(1,0){3}} \put(0,1){\makebox(1,1){$4$}} \put(1,1){\makebox(1,1){$5$}} \put(2,1){\makebox(1,1){$\circ$}} \put(3,1){\makebox(1,1){$\ol{5}$}} \put(4,1){\makebox(1,1){$\ol{5}$}} \put(0,0){\makebox(1,1){$5$}} \put(1,0){\makebox(1,1){$\circ$}} \put(2,0){\makebox(1,1){$\ol{4}$}} \end{picture} & \\ T^{(1)} &= \setlength{\unitlength}{5mm} \begin{picture}(5,2)(0,0.8) \multiput(0,0)(1,0){4}{\line(0,1){2}} \put(4,1){\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){4}} \put(0,0){\line(1,0){3}} \put(0,1){\makebox(1,1){$4$}} \put(1,1){\makebox(1,1){$\circ$}} \put(2,1){\makebox(1,1){$\ol{5}$}} \put(3,1){\makebox(1,1){$\ol{5}$}} \put(0,0){\makebox(1,1){$5$}} \put(1,0){\makebox(1,1){$\circ$}} \put(2,0){\makebox(1,1){$\ol{4}$}} \end{picture} &, w_1 = 5 \\ T^{(2)} &= \setlength{\unitlength}{5mm} \begin{picture}(5,2)(0,0.8) \multiput(0,0)(1,0){4}{\line(0,1){2}} \multiput(0,0)(0,1){3}{\line(1,0){3}} \put(0,1){\makebox(1,1){$4$}} \put(1,1){\makebox(1,1){$\circ$}} \put(2,1){\makebox(1,1){$\ol{5}$}} \put(0,0){\makebox(1,1){$\circ$}} \put(1,0){\makebox(1,1){$\ol{5}$}} \put(2,0){\makebox(1,1){$\ol{4}$}} 
\end{picture} &, w_2 = 5 \\ T^{(3)} &= \setlength{\unitlength}{5mm} \begin{picture}(5,2)(0,0.8) \multiput(0,0)(1,0){3}{\line(0,1){2}} \put(3,1){\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){3}} \put(0,0){\line(1,0){2}} \put(0,1){\makebox(1,1){$4$}} \put(1,1){\makebox(1,1){$\circ$}} \put(2,1){\makebox(1,1){$\ol{5}$}} \put(0,0){\makebox(1,1){$\ol{5}$}} \put(1,0){\makebox(1,1){$\ol{4}$}} \end{picture} &, w_3 = \circ \\ T^{(4)} &= \setlength{\unitlength}{5mm} \begin{picture}(5,2)(0,0.8) \multiput(0,0)(1,0){2}{\line(0,1){2}} \multiput(2,1)(1,0){2}{\line(0,1){1}} \multiput(0,1)(0,1){2}{\line(1,0){3}} \put(0,0){\line(1,0){1}} \put(0,1){\makebox(1,1){$5$}} \put(1,1){\makebox(1,1){$\circ$}} \put(2,1){\makebox(1,1){$\ol{5}$}} \put(0,0){\makebox(1,1){$\ol{5}$}} \end{picture} &, w_4 = \ol{5} \\ T^{(5)} &= \setlength{\unitlength}{5mm} \begin{picture}(5,2)(0,0.3) \multiput(0,0)(1,0){4}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){3}} \put(0,0){\makebox(1,1){$5$}} \put(1,0){\makebox(1,1){$\circ$}} \put(2,0){\makebox(1,1){$\ol{5}$}} \end{picture} &, w_5 = \ol{5} \end{align*} Thus we obtained the right hand side. We have $H_{B_5,B_3}=3$, since $l'=5, k'=3$ and $m=3$. \end{example} \begin{example} $B_2 \otimes B_1 \simeq B_1 \otimes B_2$ for $D^{(1)}_5$. 
\begin{displaymath} \begin{array}{ccccccc} \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$4$}} \put(1,0){\makebox(1,1){$\ol{4}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(1,1)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){$5$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(1,1)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){$\ol{5}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$5$}} \put(1,0){\makebox(1,1){$5$}} \end{picture} \\ & & & & & & \\ \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$\ol{5}$}} \put(1,0){\makebox(1,1){$\ol{5}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(1,1)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){$5$}} \end{picture} & \stackrel{\sim}{\mapsto} & \setlength{\unitlength}{5mm} \begin{picture}(1,1)(0,0.3) \multiput(0,0)(1,0){2}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){1}} \put(0,0){\makebox(1,1){$\ol{5}$}} \end{picture} & \otimes & \setlength{\unitlength}{5mm} \begin{picture}(2,1)(0,0.3) \multiput(0,0)(1,0){3}{\line(0,1){1}} \multiput(0,0)(0,1){2}{\line(1,0){2}} \put(0,0){\makebox(1,1){$4$}} \put(1,0){\makebox(1,1){$\ol{4}$}} \end{picture} \end{array} \end{displaymath} Here we have selected two examples that are specific to type $D$. \end{example} \noindent {\bf Acknowledgements} \hspace{0.1cm} It is a pleasure to thank T.H. Baker for many helpful discussions and correspondence. \end{document}
\begin{document} \title{ Entanglement-assisted local operations and classical communications conversion in quantum critical systems} \author{ Jian Cui, Jun-Peng Cao, Heng Fan } \affiliation{ Institute of Physics, Chinese Academy of Sciences, Beijing National Laboratory for Condensed Matter Physics, Beijing 100190, China } \date{\today} \begin{abstract} Conversions between the ground states in quantum critical systems via entanglement-assisted local operations and classical communications (eLOCC) are studied. We propose a new method to reveal the different local convertibility of ground states when a quantum phase transition occurs. We have studied the ground state local convertibility in the one dimensional transverse field Ising model, XY model and XXZ model. It is found that the eLOCC convertibility changes suddenly at the phase transition points. In the transverse field Ising model the eLOCC convertibility between the first excited state and the ground state is also distinct for different phases. The relation between the order of quantum phase transitions and the local convertibility is discussed. \end{abstract} \pacs{03.67.-a, 64.60.A-, 05.70.Jk} \maketitle \section{Introduction} The recent development in quantum information processing (QIP) \cite{nature review} has provided much insight into quantum many-body physics \cite{REVIEWS OF MODERN PHYSICS}. In particular, using ideas from QIP to investigate quantum phase transitions has attracted considerable attention and has been successful in detecting a number of critical points of great interest. For instance, entanglement measured by concurrence, negativity, geometric entanglement and von Neumann entropy has been studied in several critical systems \cite{von Neumann entropy,Nielsen ising,Osterloh nature,LAW,TCW}.
It was found that the von Neumann entropy diverges logarithmically at the critical point \cite{von Neumann entropy}, and the concurrence shows a maximum at the quantum critical points of the transverse field Ising model and XY model \cite{Nielsen ising}. The second order phase transition in the XY model can also be determined by the divergence of the concurrence derivative or the negativity derivative with respect to the external field parameter \cite{Osterloh nature,LAW}. Apart from entanglement, other concepts in quantum information, such as mutual information and quantum discord, which feature singularities at critical points, were also found to be useful in detecting quantum phase transitions \cite{QD,jiancui}. Recent studies of the entanglement spectrum can be applied to the detection of quantum phase transitions \cite{lihui Dillenschneider hongyao,xiaogangwen}, too. At the same time, the fidelity as well as the fidelity susceptibility of the ground state also works well in exploring numerous phase transition points in various critical systems \cite{Quan polozanardi gushijianreview Buonsnate zanadiPRL zhou,gushijian,quanPRE}. These methods have achieved great success in understanding the deep nature of the different phase transitions, especially the structure of the correlations involved \cite{AA,LCV}. However, although the previous studies investigated concepts from quantum information, they did not fully explore them: the concepts were merely used as common order parameters, and some important operational properties were missing. In this paper, from a new point of view, we reveal the operational properties of critical systems by fully studying the ground state R\'{e}nyi entropy and show that the operational properties change suddenly at the quantum phase transition point, which allows us to put forward a new proposal to investigate quantum phases and quantum phase transitions.
To be specific, we investigate a many-body system whose Hamiltonian is $H(\lambda)$ with a variable parameter $\lambda$. The system has a critical point at $\lambda=\lambda_c$, which separates two quantum phases. We examine the possibility for the ground state of $H(\lambda)$ to convert into the ground state of $H(\lambda')$ by entanglement-assisted local operations and classical communications (eLOCC). If we can find a method to convert them, we say there is local convertibility between them; otherwise there is no local convertibility. Our motivation is that from the intersections of the R\'{e}nyi entropies of different states we can determine their local convertibility, and this local convertibility is different for ground states from different phases. Thus, phase transitions can be distinctly observed. Besides its operational meaning, our proposal has other advantages: it is easy to adjust the accuracy, and the quantum phase transition points can be detected for small-sized systems. In addition, we need no a priori knowledge of the order parameters or the symmetry. As we are revealing the local operation properties of different quantum phases, this paper is essentially concerned with deterministic conversion of a single copy of a state, which is different from the stochastic results \cite{SLOCC} and asymptotic results \cite{asymptotic}. The paper is organized as follows. In section II, we introduce some local conversions as well as their necessary and sufficient conditions, especially the eLOCC conversion, which is the main focus of this paper. In section III, we study the ground state local convertibility in the one dimensional spin half transverse field Ising model, XY model and XXZ model.
In particular, for the transverse field Ising model we show how to determine the critical point numerically with small-sized systems by a finite size scaling analysis, and we also investigate the local convertibility between the ground state and the corresponding first excited state for the Ising model. In section IV we give some conclusions and discussions. \section{Local convertibility of two pure states} In this section, we introduce the notion of local convertibility and give the necessary and sufficient conditions for the local convertibility used in this paper. Generally, local convertibility concerns the following question: given two quantum states, is it possible to convert one to the other merely using local operations (without global operations)? If it is possible, we say there is local convertibility between the states. Otherwise, there is no local convertibility. The answer to this question is related to comparisons between the entanglement of the two states. A measure of entanglement which does not increase in the process of LOCC, besides satisfying other basic conditions, is defined as an entanglement monotone \cite{monotone}. Entanglement monotones are not unique. Different entanglement monotones reflect certain aspects of the entanglement and can have different uses in QIP. In particular, local convertibility under different kinds of allowed local operations has different sets of entanglement monotones that compose the necessary and sufficient conditions. The most basic local conversion is LOCC, which is also the most natural restriction in quantum information processing. It has important applications in several fundamental problems in QIP, such as teleportation \cite{teleportation}, quantum state discrimination \cite{QSD}, testing the security of hiding bits \cite{hiding bit} and quantum channel corrections \cite{channel correction}.
By the Schmidt decomposition, a bipartite pure quantum state can be written as $ |\Psi \rangle _{AB}=\sum _{k=1}^d\sqrt {\lambda _k}|kk\rangle _{AB}$, where $d$ is the rank of the reduced density operator $\rho _{A(B)}=Tr_{B(A)}\left( |\Psi \rangle \langle \Psi |\right) $, and $\{ \lambda _k\} _{k=1}^d$ are the Schmidt coefficients in descending order. They satisfy the positivity and normalization conditions, i.e., $\lambda_k>0$ for all $1\leq k\leq d$, and $\sum _k\lambda _k=1$. For a given bipartition of a real physical system all the knowledge of the ground state is contained in the Schmidt coefficients \cite{Poilblanc}. It was pointed out that the quantities $\{ \sum _{k=l}^{d}\lambda _k\} _{l=1}^{d}$ constitute a set of entanglement monotones \cite {vidal}. For two bipartite pure states $|\Psi \rangle $ and $|\Psi '\rangle =\sum _{k=1}^d\sqrt {\lambda '_k}|kk\rangle _{AB}$, if the majorization relations \begin{eqnarray} \sum _{k=l}^d\lambda _k\ge \sum _{k=l}^d\lambda '_k \end{eqnarray} are satisfied for all $1\leq l\leq d$, state $|\Psi \rangle $ can be converted with $100\% $ probability of success to $|\Psi '\rangle $ by LOCC \cite{Nielsen}. Otherwise these two states are incomparable, i.e., neither can state $|\Psi \rangle $ be converted to $|\Psi '\rangle $ nor can state $|\Psi '\rangle $ be converted to $|\Psi \rangle $ by LOCC. Thus, the majorization relations provide a necessary and sufficient condition for local convertibility with LOCC. In this sense, $\{\sum _{k=l}^d\lambda _k\} _{l=1}^{d}$ is a minimal and complete set of entanglement monotones for LOCC. Another useful local conversion, which is also the most powerful one, is eLOCC, also called entanglement catalysis. Entanglement catalysis in LOCC is the phenomenon that there exist pure states $|\Psi\rangle_{AB}$ and $|\Psi'\rangle_{AB}$ that cannot be converted to each other by LOCC alone, because they violate the majorization relations.
However, when the two local parties share a certain kind of ancillary entanglement, labeled $|\phi\rangle$, the state with larger entanglement can be converted to the other state by LOCC, and the ancillary state does not change after the conversion, just like a catalyst in chemical reactions \cite{jonathanplenio}, i.e., $|\Psi\otimes\phi\rangle\rightarrow|\Psi^{\prime}\otimes\phi\rangle$. For example, $|\Psi\rangle=\sqrt{0.4} |00\rangle+\sqrt{0.4} |11\rangle+\sqrt{0.1} |22\rangle+\sqrt{0.1} |33\rangle$, and $|\Psi^{\prime}\rangle=\sqrt{0.5} |00\rangle+\sqrt{0.25} |11\rangle+\sqrt{0.25} |22\rangle+\sqrt{0} |33\rangle$. It can be easily checked that $\lambda_2+\lambda_3+\lambda_4>\lambda_2^{\prime}+\lambda_3^{\prime}+\lambda_4^{\prime}$, but $\lambda_3+\lambda_4<\lambda_3^{\prime}+\lambda_4^{\prime}$; therefore, neither state can be converted to the other with certainty, i.e., $|\Psi\rangle\nrightarrow|\Psi'\rangle$ and $|\Psi'\rangle\nrightarrow|\Psi\rangle$. However, if $|\phi\rangle=\sqrt{0.6}|44\rangle+\sqrt{0.4}|55\rangle$ is applied, the Schmidt coefficients of the product states $|\Psi\rangle|\phi\rangle$ and $|\Psi^{\prime}\rangle|\phi\rangle$ in decreasing order are $\Lambda=\{0.24,0.24,0.16,0.16,0.06,0.06,0.04,0.04\}$ and $\Lambda'=\{0.30,0.20,0.15,0.15,0.10,0.10,0.00,0.00\}$, such that $\sum_{k=l}^8\Lambda_k\geq\sum_{k=l}^8\Lambda_k^{\prime}, \forall 1\le l \le 8$. As a result, $|\Psi\rangle|\phi\rangle$ can be converted to $|\Psi^{\prime}\rangle|\phi\rangle$ with certainty by LOCC. We can see that LOCC with ancillary assisted entanglement (eLOCC) effectively enlarges the original Hilbert space and is a generalized version of LOCC. The eLOCC conversion can be determined by the R\'{e}nyi entropy \cite{Renyi}, which is defined as \begin{eqnarray} S_{\alpha }(\rho _A)=\frac {1}{1-\alpha }\log Tr\rho _A^{\alpha }. \end{eqnarray} In particular, when $\alpha\rightarrow 1$, it tends to the von Neumann entropy.
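As a sanity check, Nielsen's tail-sum criterion and the catalysis example above can be verified numerically. The sketch below uses helper names of our own choosing (`tail_sums`, `locc_ok`); it is an illustration, not part of the original computation.

```python
import numpy as np

def tail_sums(lam):
    """Tail sums sum_{k=l}^d lam_k of Schmidt coefficients, descending order."""
    a = np.sort(np.asarray(lam, dtype=float))[::-1]
    return np.cumsum(a[::-1])[::-1]

def locc_ok(lam, lam_prime, tol=1e-12):
    """Nielsen's criterion: |Psi> -> |Psi'> by LOCC iff every tail sum of
    lam dominates the corresponding tail sum of lam_prime."""
    return bool(np.all(tail_sums(lam) >= tail_sums(lam_prime) - tol))

psi    = [0.4, 0.4, 0.1, 0.1]     # Schmidt coefficients of |Psi>
psi_p  = [0.5, 0.25, 0.25, 0.0]   # Schmidt coefficients of |Psi'>
phi    = [0.6, 0.4]               # catalyst |phi>

# Without the catalyst, neither direction satisfies majorization:
print(locc_ok(psi, psi_p), locc_ok(psi_p, psi))   # False False

# With the catalyst, |Psi>|phi> -> |Psi'>|phi> becomes allowed:
prod   = np.outer(psi, phi).ravel()               # coefficients of |Psi>|phi>
prod_p = np.outer(psi_p, phi).ravel()
print(locc_ok(prod, prod_p))                      # True
```

Running the check reproduces the tail-sum comparisons quoted in the text: the bare pair is incomparable, while the tensored pair satisfies all eight majorization inequalities.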
The R\'{e}nyi entropy has been extensively studied in spin chain systems \cite{spinchainrenyi}. State $|\Psi \rangle _{AB}$ can be converted to $|\Psi ' \rangle _{AB}$ by eLOCC iff their R\'{e}nyi entropies satisfy $S_{\alpha }(\rho _A)\ge S_{\alpha }(\rho '_A)$ for all $\alpha$ \cite{necessaryandsufficient}. So in the graph of R\'{e}nyi entropy versus $\alpha$, if the R\'{e}nyi entropies of states $|\Psi \rangle _{AB}$ and $|\Psi '\rangle _{AB}$ intersect, these two states are incomparable, see Fig.1 (right). On the other hand, if there is no intersection, the state with larger entanglement can be locally converted to the other state, see Fig.1 (left). Therefore, the R\'{e}nyi entropies form a minimal and complete set of entanglement monotones for eLOCC. In the following, we focus on the local convertibility within eLOCC. \begin{figure} \caption{ (Color online). R\'{e}nyi entropies of two bipartite states $|\Psi \rangle $ and $|\Psi^{\prime} \rangle $ may show two types of behavior: they intersect or not. In the intersecting case (right), the two states cannot be locally converted to each other. In the non-intersecting case (left), $|\Psi \rangle $ can be locally converted to $|\Psi^{\prime} \rangle $. } \label{fig1} \end{figure} Now we consider the R\'{e}nyi entropy in quantum critical systems. The wave functions of ground states from different quantum phases are drastically distinct. When a quantum phase transition occurs, the properties of the ground state must change abruptly \cite{quantum phase transition}. We argue that, as part of this general feature of quantum phase transitions, the intersection pattern of the ground state R\'{e}nyi entropies, as well as the local conversion property of the ground state, will change at the critical point, so the boundaries between different quantum phases can be determined from the R\'{e}nyi entropy. By carefully analyzing the behavior of the R\'{e}nyi entropy, we find two possible cases: (i) Please see Table I (left).
In phase I, the R\'{e}nyi entropies of any two different ground states intersect, while in phase II the R\'{e}nyi entropies of any two different ground states do not intersect. Moreover, the R\'{e}nyi entropies of two states from different phases intersect. That means that for any two states in phase II the one with larger entanglement can be converted to the other via eLOCC, while in phase I no ground state can be converted into another via eLOCC; as a result, global operations are necessary in phase I. The corresponding examples of this type are the transverse field Ising model and the XY model. (ii) Please see Table I (right). The ground state R\'{e}nyi entropies do not intersect with others in the same phase, but they intersect across different phases. That means the ground state can be converted through eLOCC within the same phase. However, the ground state cannot be converted locally into a different phase. The corresponding example is the XXZ model. These two scenarios can be used to detect quantum phase transitions. \begin{table} \caption{\label{tab0} Intersections of the ground state R\'{e}nyi entropies, where $\times$ means the R\'{e}nyi entropies intersect and N means no intersection. The left table is for case (i), where the phase boundary can be obtained along the diagonal elements. The right table corresponds to case (ii), where the phase boundary can be obtained with the help of the anti-diagonal elements.} \begin{normalsize} \begin{tabular}{|c|c|c|} \hline & phase I & phase II \\ \hline phase I& $\times$ & $\times$\\ \hline phase II& $\times$ & N\\ \hline \end{tabular} \begin{tabular}{|c|c|c|} \hline & phase I & phase II \\ \hline phase I& N & $\times$ \\ \hline phase II& $\times$ & N\\ \hline \end{tabular} \end{normalsize} \end{table} \section{eLOCC in quantum critical systems} In this section we use the above method to study some typical quantum critical systems.
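Before turning to specific models, the intersection test underlying Table I can be sketched numerically by sampling $S_\alpha$ on a finite grid of $\alpha$ values. This is only a grid approximation of the all-$\alpha$ condition, and the helper names (`renyi_entropy`, `elocc_ok`) are our own illustration.

```python
import numpy as np

def renyi_entropy(lam, alpha):
    """Renyi entropy S_alpha of a reduced density matrix with spectrum lam."""
    lam = np.asarray(lam, dtype=float)
    lam = lam[lam > 1e-15]                 # drop numerically-zero eigenvalues
    if abs(alpha - 1.0) < 1e-9:            # alpha -> 1: von Neumann limit
        return float(-np.sum(lam * np.log(lam)))
    return float(np.log(np.sum(lam ** alpha)) / (1.0 - alpha))

def elocc_ok(lam, lam_p, alphas=np.linspace(0.05, 50.0, 1000)):
    """Numerical check of S_alpha(rho) >= S_alpha(rho') on a finite grid;
    a single violation means the two curves intersect somewhere."""
    s  = np.array([renyi_entropy(lam, a) for a in alphas])
    sp = np.array([renyi_entropy(lam_p, a) for a in alphas])
    return bool(np.all(s >= sp - 1e-12))

# Non-intersecting curves: one-way convertible
print(elocc_ok([0.4, 0.4, 0.1, 0.1], [0.5, 0.25, 0.25]))        # True
# Intersecting curves: incomparable in both directions
print(elocc_ok([0.5, 0.25, 0.25], [0.6, 0.1, 0.1, 0.1, 0.1]),
      elocc_ok([0.6, 0.1, 0.1, 0.1, 0.1], [0.5, 0.25, 0.25]))   # False False
```

The second pair illustrates a crossing: the five-term spectrum wins at small $\alpha$ (larger rank) but loses at large $\alpha$ (larger maximal coefficient), so neither direction dominates.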
For a Hamiltonian $H(\lambda)$ with a critical point $\lambda_c$, we change the parameter $\lambda$ gradually from phase I to phase II. For each given $\lambda$, we calculate and plot the ground state $S_{\alpha}$ with respect to $\alpha$. We expect to see the two cases described in Table I emerge. We first consider the one dimensional spin $1/2$ $XY$ chain with the Hamiltonian \begin{eqnarray} H=-\sum_{i=1}^N[(1+\gamma)\sigma_i^x\sigma_{i+1}^x+(1-\gamma)\sigma_i^y\sigma_{i+1}^y+h\sigma_i^z], \end{eqnarray} where $\sigma_i^{x,y,z}$ stand for the Pauli matrices, $\gamma$ is the coupling parameter and $h$ is the field parameter. We can focus on positive values of $\gamma$ and $h$ because of the symmetry of the Hamiltonian. This model and its generalizations have been studied extensively \cite{xy quench disorder xy fidelity Korepin}. Fig. 2 shows the phase diagram of this model. This two dimensional phase diagram is somewhat complicated, so in order to give a clearer description of the eLOCC proposal we first employ the transverse field Ising model, a special case of the $XY$ model, to show how the method works. We can obtain the transverse field Ising model from the XY Hamiltonian by setting $\gamma=1$, $h=2g$ and removing the unimportant global factor $2$ from each component, as \begin{eqnarray} H_I=-\sum_{i=1}^N(\sigma_i^x\sigma_{i+1}^x+g\sigma_i^z). \end{eqnarray} A second order quantum phase transition takes place at $g=1$. When $g\neq1$, there is an energy gap between the ground state and the first excited state, and this gap vanishes when $g=1$ \cite{quantum phase transition}. \begin{figure} \caption{ (Color online). Phase diagram of the $XY$ model. The phase transition from phase $1A$ or phase $1B$ to phase $2$, referred to as the Ising transition, is driven by the transverse magnetic field $h$, and the phase boundary $h=2$ is a critical line. Phase $1$ has two distinct regions $A$ and $B$. The boundary $\gamma^2+(h/2)^2=1$ is not a critical line.
But the properties of these two regions are different. We consider the red dashed line $\gamma=\sqrt 3/2$ in the phase diagram. It intersects the two boundaries at $h=1$ and $h=2$, respectively. } \label{fig2:epsart} \end{figure} We calculate the R\'{e}nyi entropies of the ground states with the parameter $g$ varying from $0.5$ to $1.5$ and plot them in Fig. 3. Here the system size is $N=10$ and periodic boundary conditions are assumed, i.e., $\sigma_{N+1}^{x,y}=\sigma_1^{x,y}$. To calculate the R\'{e}nyi entropy we need to know the ground state, which is obtained by diagonalizing the whole Hamiltonian using Matlab. Although the system size that can be calculated this way is relatively small, the advantage of directly diagonalizing the whole Hamiltonian is its high accuracy. Other numerical methods, such as the Lanczos algorithm, DMRG and so on, are worth pursuing within this proposal, but as we are conceiving a new proposal to investigate the quantum critical point rather than developing a different numerical technique within the framework of known proposals, we focus on the basic diagonalization method; the application of other numerical methods is not within the scope of the present paper. The results show that these states can be classified into three distinct groups: in group I (blue lines) $g<1$; in group II (red line) $g$ is at the transition point; and in group III (black lines) $g>1$. Groups I and III are two phases and group II is the phase transition region, which is the boundary of I and III. Fig. 3 clearly shows that in group I the R\'{e}nyi entropies of different ground states intersect each other, while in group III the R\'{e}nyi entropies of different states never intersect each other. Moreover, the R\'{e}nyi entropies from different phases intersect. \begin{figure} \caption{ (Color online). R\'{e}nyi entropy for the ground state of the transverse field Ising model versus the parameter $\alpha$.
The total site number $N$ is taken to be $10$, and the spins are cut into two blocks of $5$ sites each. Periodic boundary conditions are assumed. The blue lines are for the ground states with $g=0.5,0.6,0.7,0.8,0.9$, and the black lines are for the ground states with $g=1.1,1.2,1.3,1.4,1.5$, respectively. The red line is $g=1$. } \label{fig:epsart} \end{figure} These results mean that although the model is gapped in both regions I and III, the eLOCC convertibility of the ground states is quite different in the two regions: in the ferromagnetic phase, where $g<1$, the ground states cannot be converted to each other because their R\'{e}nyi entropies intersect; in the paramagnetic phase, where $g>1$, the ground states can be converted by eLOCC because their R\'{e}nyi entropies never intersect. In addition, states from different regions cannot be converted to each other by eLOCC. We can conclude that the phase transition has a global signature, and the long range correlations which influence the eLOCC convertibility must play a fundamental role. Because some intersections are difficult to resolve by eye, we can present the results more clearly by directly comparing the R\'{e}nyi entropy data in a table instead of the $S_{\alpha}$ versus $\alpha$ figure; please see Table II. It shows whether any two ground states intersect or not. For example, in the second row of Table II we find that the ground state of $g=0.94$ intersects with those of $g=0.95, 0.96$,..., at $\alpha =0.6, 0.5$,.... For the sake of space and clarity, we only present the segment of $g$ from $0.94$ to $1.04$. Notice that the diagonal elements are always `N', meaning there is no intersection between the corresponding two states: they compare a state with itself, and identical curves completely overlap rather than intersect. From Table II, we find that the phase transition lies in $0.98\leq g\leq1.00$.
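The exact diagonalization used to produce these data (done in Matlab in the paper) can be sketched in Python for a toy-sized chain. All function names here are our own, and $N$ is kept small so that dense diagonalization stays cheap.

```python
import numpy as np

# Pauli matrices and the single-site identity
SX = np.array([[0.0, 1.0], [1.0, 0.0]])
SZ = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def kron_chain(ops):
    """Tensor product of a list of single-site operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def ising_hamiltonian(n, g):
    """H_I = -sum_i (sx_i sx_{i+1} + g sz_i) with periodic boundaries."""
    dim = 2 ** n
    ham = np.zeros((dim, dim))
    for i in range(n):
        bond = [I2] * n
        bond[i] = SX
        bond[(i + 1) % n] = SX
        field = [I2] * n
        field[i] = SZ
        ham -= kron_chain(bond) + g * kron_chain(field)
    return ham

def ground_state_spectrum(n, g):
    """Eigenvalues of the half-chain reduced density matrix of the ground state."""
    _, vecs = np.linalg.eigh(ising_hamiltonian(n, g))   # ascending eigenvalues
    psi = vecs[:, 0].reshape(2 ** (n // 2), 2 ** (n - n // 2))
    return np.linalg.svd(psi, compute_uv=False) ** 2    # descending order

# Deep in the paramagnetic phase (g >> 1) the ground state is nearly a
# product of up spins, so the largest Schmidt weight is close to 1.
lam = ground_state_spectrum(6, 10.0)
print(lam[0])
```

Feeding `lam` into a Rényi entropy routine then reproduces the kind of $S_\alpha$ curves compared in Table II, here for a 6-site rather than 10-site chain.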
We can investigate this phase transition more accurately by the same method; we list the results here: when the accuracy ($g$ step) is $0.001$ the critical region obtained by this method is $0.987\leq g\leq0.989$, and the critical region is $0.9883\leq g\leq0.9885$ with an accuracy of $0.0001$. We can see from the results that the interval becomes smaller as the varying step of the parameter $g$ becomes finer. Moreover, the critical value we obtain is not exactly $1$ because we only calculate a finite chain with $10$ spins. To remove the finite size effect, we give the scaling analysis in Fig. 4. When $N\rightarrow\infty$, the phase transition point obtained by this proposal tends to $0.9940$. This Ising model corresponds to the first case of the proposal described in the previous section, and Table II corresponds to the left type of Table I. \begin{table} \caption{\label{tab2} Intersection table of the transverse field Ising model near the critical point. For clarity, we cut the table into separate parts, $g\le 0.98$, $g\ge 1$ and $0.98\le g\le 1$.} \begin{scriptsize} \begin{tabular}{|c|c|c|c|c|c||c||c|c|c|c|c|} \hline g & 0.94&0.95&0.96&0.97&0.98&0.99&1.00&1.01&1.02&1.03&1.04\\ \hline 0.94& N& 0.6 &0.5 &0.5 &0.5 &0.4 &0.4 &0.3 &0.3 &0.2 &N\\ \hline 0.95 &0.6 &N &0.5 &0.5 &0.4 &0.4 &0.3 &0.3 &0.2& N& N\\ \hline 0.96 &0.5 &0.5 &N &0.4 &0.4 &0.3 &0.3 &0.2 &N &N &N\\ \hline 0.97 &0.5 &0.5 &0.4 &N &0.3 &0.3 &0.2 &N &N &N& N\\ \hline 0.98 &0.5 &0.4 &0.4 &0.3 &N &0.2 &N &N &N &N& N\\ \hline \hline 0.99 &0.4 &0.4 &0.3 &0.3 &0.2 &N &N &N &N& N &N\\ \hline \hline 1.00 &0.4 &0.3 &0.3 &0.2 &N &N &N &N &N &N& N\\ \hline 1.01 &0.3 &0.3 &0.2&N &N &N &N &N &N &N &N\\ \hline 1.02 &0.3 &0.2 &N &N &N &N &N &N &N &N &N\\ \hline 1.03 &0.2 &N &N &N &N &N &N &N &N &N &N\\ \hline 1.04 &N &N &N &N &N &N &N &N &N &N &N\\ \hline \end{tabular} \end{scriptsize} \end{table} \begin{figure} \caption{ (Color online). The finite size scaling analysis of the Ising model.
In the thermodynamic limit, the critical point labeled as $g_c$ obtained by our method tends to $0.9940$ with an accuracy of $0.0001$. The data can be fitted as $g_c=-9.149e^{-N/1.2522}+0.9940$.} \label{fig4:epsart} \end{figure} So far we have shown the role of the R\'{e}nyi entropy in detecting the critical point by investigating the eLOCC convertibility between different ground states. We also find that the eLOCC convertibility between the ground state and the first excited state gives a good discrimination of the different phases. In the ferromagnetic phase, where $g<1$, the R\'{e}nyi entropies of the ground state and the corresponding first excited state intersect, while in the paramagnetic phase, where $g>1$, the two R\'{e}nyi entropies have no intersection. We show this in Fig. 5. Moreover, the difference between the R\'{e}nyi entropies of the excited state and the ground state in the large $\alpha$ limit becomes larger as the energy gap increases in the paramagnetic phase. \begin{figure} \caption{ (Color online). R\'{e}nyi entropies of the ground state and the first excited state. The dashed lines are for the ground states, and the solid lines are for the first excited states.} \label{fig6:epsart} \end{figure} \begin{table}[h] \caption{\label{tab3} Intersection table of the XY model near the first boundary $h=1$.
} \begin{scriptsize} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline h& 0.5& 0.6& 0.7 &0.8 &0.9 &1 &1.1 &1.2 &1.3 &1.4 &1.5 \\ \hline 0.5 &N&N&N&N&N&N&N&N& 0.3& 1.5 &1.4\\ \hline 0.6 &N&N&N&N&N&N&N&N& 0.2& 1.5 &1.5\\ \hline 0.7 &N&N&N&N&N&N&N& 0.5& 1.6 &1.6 &1.5 \\ \hline 0.8 &N&N&N&N&N&N&N& 1.6 &1.8 &1.6 &1.5\\ \hline 0.9 &N&N&N&N&N&N& 1.4& 2 &1.9 &1.7 &1.5\\ \hline 1 &N&N&N&N&N&N& 2.2 &2.2 &1.9 &1.7 &1.5\\ \hline 1.1 &N&N&N&N &1.4 &2.2 &N &2.2 &1.9 &1.7 &1.5\\ \hline 1.2 &N&N& 0.5 &1.6 &2 &2.2 &2.2 &N &1.8 &1.7 &1.5\\ \hline 1.3 & 0.3 &0.2 &1.6 &1.8 &1.9 &1.9 &1.9 &1.8 &N &1.6 &1.5\\ \hline 1.4 & 1.5 &1.5 &1.6 &1.6 &1.7 &1.7 &1.7 &1.7 &1.6 &N &1.4\\ \hline 1.5 & 1.4 &1.5 &1.5 &1.5 &1.5 &1.5 &1.5 &1.5 &1.5 &1.4&N\\ \hline \end{tabular} \end{scriptsize} \end{table} \begin{table}[h] \caption{\label{tab4} Intersection table of the XY model near the critical point $h=2$. } \begin{scriptsize} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline h& 1.6 &1.7 &1.8 &1.9 &2 &2.1 &2.2 &2.3 &2.4 &2.5 &2.6 \\ \hline 1.6 &N &1.1 &1 &0.9 &0.8 &0.7 &0.6 &0.5 &0.4 &0.3 &N\\ \hline 1.7 &1.1 &N &0.9 &0.8 &0.7 &0.6 &0.4 &0.2 &N&N&N\\ \hline 1.8 &1 &0.9 &N &0.7 &0.6 &0.4 &0.3 &N&N&N&N \\ \hline 1.9 &0.9 &0.8 &0.7 &N &0.4 &0.2 &N&N&N&N&N\\ \hline 2 &0.8 &0.7 &0.6 &0.4 &N&N&N&N&N&N&N\\ \hline 2.1 &0.7 &0.6 &0.4 &0.2 &N&N&N&N&N&N&N\\ \hline 2.2 &0.6 &0.4 &0.3 &N&N&N&N&N&N&N&N\\ \hline 2.3 &0.5 &0.2 &N&N&N&N&N&N&N&N&N\\ \hline 2.4 &0.4 &N&N&N&N&N&N&N&N&N&N\\ \hline 2.5 &0.3 &N&N&N&N&N&N&N&N&N&N\\ \hline 2.6 &N&N&N&N&N&N&N&N&N&N&N \\ \hline \end{tabular} \end{scriptsize} \end{table} In order to study the general case of the $XY$ model, we set $\gamma=\sqrt3/2$ and vary $h$ from $0$ to $3$; see the red dashed line in Fig. 2. We can use the eLOCC proposal to determine the critical point at $h=2$ as well as the boundary at $h=1$.
Here we just list the results: considering the eLOCC convertibility between ground states, there is no intersection in region $1B$, any two ground states in region $1A$ intersect, and in phase $2$ there is again no intersection; please see Tables III and IV. The boundary at $h=1$ and the critical point at $h=2$ also correspond to the first case introduced in the previous section. Tables III and IV correspond to the left type in Table I. There are intersections between region $1B$ and phase $2$. For the case of total lattice number $N=10$, we find that the first boundary lies in the range $0.999\leq h\leq1.000$ and the critical point in $2.010\leq h\leq2.012$ with an accuracy of $0.001$. Next, we study the one dimensional XXZ model with the Hamiltonian \begin{eqnarray} H_{XXZ}=\sum_i(\sigma_i^x\sigma_{i+1}^x+\sigma_i^y\sigma_{i+1}^y+\Delta \sigma_i^z\sigma_{i+1}^z), \end{eqnarray} where $\Delta$ is the anisotropy parameter. There are two phase transition points: $\Delta=-1$ corresponds to a first order phase transition, and $\Delta=1$ is a continuous phase transition of infinite order \cite{CNYang}. In particular, the phase transition at $\Delta=1$ is a Kosterlitz-Thouless like transition, which can be described by the divergence of the correlation length but without long range order \cite{yang}. We focus on identifying the critical point $\Delta=1$ by the eLOCC proposal. Here we use the same method as in the Ising and XY models to get the ground state R\'{e}nyi entropy, i.e., we diagonalize the whole Hamiltonian to obtain all the eigenstates, then select the ground state and calculate the eigenvalues of the reduced density matrix to obtain the R\'{e}nyi entropy. Table V shows the intersections near $\Delta =1$. We can see that each state in either region $\Delta \ge 1.0$ or $\Delta \le 1.0$ never intersects with any of the states in the same region, but intersects with at least one state from the other region.
Therefore this model corresponds to the second case of the proposal introduced in the previous section. Thus, the critical region can be determined as $0.9\le \Delta \le 1.1$. By narrowing the varying step of $\Delta$, this critical point can be detected more accurately. Therefore, the eLOCC proposal also works well for this infinite order phase transition in the $XXZ$ spin chain. We find that the result for the $XXZ$ model resembles the right pattern of Table I, i.e., the ground states do not intersect with the states from the same phase, but intersect with the states from the other phase. \begin{table}[h] \caption{\label{tab1} Intersection table of the $XXZ$ model. For clarity, we cut the table into separate parts, $\Delta\le 0.9$, $\Delta\ge 1.1$ and $0.9\le \Delta\le 1.1$. It is a $10$-site chain with a comb-like partition, i.e., the odd-numbered sites belong to part $A$. } \begin{scriptsize} \begin{tabular}{|c|c|c|c|c|c|c||c||c|c|c|c|c|c|} \hline $\Delta$ &0.4& 0.5& 0.6 &0.7 &0.8 &0.9 &1.0 &1.1 &1.2 &1.3 &1.4 &1.5&1.6\\ \hline 0.4 &N &N &N &N &N &N &N &N &N &N &N &5.4 &1.1\\ \hline 0.5 &N&N &N &N &N &N &N &N &N &N &N &1&N\\ \hline 0.6 &N& N &N &N &N &N &N &N &N &N &1 &N&N\\ \hline 0.7&N &N &N &N &N &N &N &N &N &0.9 &N &N&N\\ \hline 0.8&N &N &N &N &N &N &N &N &0.9 &N &N &N&N\\ \hline 0.9 &N&N &N &N &N &N &N &0.9 &N &N &N &N&N\\ \hline \hline 1.0 &N&N &N &N &N &N &N &N &N &N &N &N&N\\ \hline \hline 1.1 &N&N &N &N &N &0.9 &N &N &N &N &N &N&N\\ \hline 1.2 &N&N &N &N &0.9 &N &N &N &N &N &N &N&N\\ \hline 1.3&N &N &N &0.9 &N &N &N &N &N &N &N &N&N\\ \hline 1.4 &N&N &1 &N &N &N &N &N &N &N &N &N&N\\ \hline 1.5 &5.4&1 &N &N &N &N &N &N &N &N &N &N&N\\ \hline 1.6 &1.1&N &N &N &N &N &N &N &N &N &N &N&N\\ \hline \end{tabular} \end{scriptsize} \end{table} \section{Conclusions} In this paper, we analyzed the R\'{e}nyi entropy and the eLOCC convertibility in quantum critical systems.
We developed a new proposal to describe the eLOCC conversion properties of quantum critical systems by examining interceptions of the R\'{e}nyi entropy, which signal the impossibility of eLOCC conversion. We applied this proposal to several typical quantum critical systems: the one-dimensional transverse-field Ising model, the $XY$ model and the XXZ model. The results show that the eLOCC convertibility changes at critical points. For the Ising phase transition, the eLOCC convertibility in the paramagnetic phase is stronger than that in the ferromagnetic phase in two ways: (i) any two ground states in the paramagnetic phase can be converted into each other by eLOCC, but those in the ferromagnetic phase cannot; (ii) each first excited state in the paramagnetic phase can be converted to the corresponding ground state by eLOCC, while those in the ferromagnetic phase cannot. For the XY model with its two-dimensional phase diagram, the critical points can be determined via this method with very high accuracy. The boundary between regions $1A$ and $1B$ can be detected as well. For the XXZ model, the infinite-order phase transition at $\Delta=1$ can also be determined by this method, although the pattern of the interception table differs from that of the second-order quantum phase transitions in the Ising and XY models. In fact, as shown in Table I, the R\'{e}nyi entropy interception table can be divided into four blocks. The two anti-diagonal blocks represent the comparison between ground states from two different phases, and it is not surprising that these blocks are full of crossings, which means that the ground states from the two phases are too different to be converted by local operations. The two diagonal blocks represent the comparison between ground states from the same phase. We find two possible cases in Table I, which are two extremes: case (i) has the most crossings in the table, while case (ii) has the least.
In fact, the crossings in the table reflect how hard it is to convert between different ground states, and their number can serve as a measure of the difference between the two phases. It is quite interesting that this observation is consistent with the orders of the phase transitions in the two example models. In case (i), i.e., the second-order quantum phase transition, which is the lowest order at which quantum critical phenomena can exist, we find the pattern with the most crossings, while in case (ii), i.e., the infinite-order quantum phase transition, we find the pattern with the least crossings. The eLOCC proposal may help in further understanding the complicated phenomena of quantum critical systems. We hope this paper will initiate extensive studies of quantum phase transitions from the brand new perspective of local convertibility. This simple but effective method is worth (a) generalizing to study finite temperature phase transitions, (b) generalizing based on the majorization scheme \cite{Nielsen} and (c) applying to other systems. \emph{Acknowledgement}. ---One of the authors, J. Cui, thanks Zhi-Hao Xu and Zhao Liu for helpful discussions. This work is supported by an NSFC grant and the National Program for Basic Research of MOST (``973'' program). \end{document}
\begin{document} \title{On customer flows in Jackson queuing networks} \author{Sen Tan\ and\ Aihua Xia\footnote{Corresponding author. E-mail: [email protected]}\\ Department of Mathematics and Statistics\\ The University of Melbourne\\ Parkville, VIC 3052\\ Australia} \date{19 January, 2010} \maketitle \begin{abstract} Melamed's theorem states that, for a Jackson queuing network, the equilibrium flow along a link follows a Poisson distribution if and only if no customers can travel along the link more than once. Barbour \& Brown~(1996) considered the Poisson approximate version of Melamed's theorem by allowing the customers a small probability $p$ of travelling along the link more than once. In this paper, we prove that the customer flow process is a Poisson cluster process and then establish a general approximate version of Melamed's theorem accommodating all possible cases of $0\le p<1$. \vskip12pt \noindent\textit{Key words and phrases.} Jackson queuing network, Palm distribution, Poisson cluster process, over-dispersion, Stein's method, negative binomial. \vskip12pt \noindent\textit{AMS 2000 subject classifications.} Primary 60G55; secondary 60F05, 60E15. \end{abstract} \section{Introduction} \setcounter{equation}{0} We consider a Jackson queuing network with $J$ queues and the following specifications [see Barbour \& Brown~(1996) for more details]. First, we assume that customers can move from one queue to another, and can enter and leave the network at any queue. We assume that the exogenous arrival processes are independent Poisson processes with rates $\nu_j$, $1\le j\le J$. Service requirements are assumed to be exponential random variables with parameter 1 and, when there are $m$ customers in queue $j$, the service effort for queue $j$ is $\phi_j(m)$, where $\phi_j(0)=0$, $\phi_j(1)>0$ and $\phi_j(m)$ is a non-decreasing function of $m$. Second, we define the switching process as follows.
Let $\lambda_{ij}$ be the probability that an individual moves from queue $i$ to queue $j$ and $\mu_i$ be the exit probability from queue $i$; it is natural to assume $$\sum_{j=1}^J\lambda_{ij}+\mu_i=1, \ 1\le i\le J.$$ Without loss of generality, we may assume that the network is irreducible in the sense that all customers can access any queue with a positive probability. Set $\alpha_j$ as the total rate of arriving customers (including both exogenous and endogenous arrivals) to queue $j$, then the rates $\{\alpha_j\}$ satisfy the equations $$\alpha_j=\nu_j+\sum_{i=1}^J\alpha_i\lambda_{ij},\ 1\le j\le J$$ and they are the unique solution of the equations with $\alpha_j>0$ for all $j$. For convenience, we define state 0 as the outside of the network, that is, the point of arrival and departure of an individual into and from the system. We write ${\cal S}:=\{(j,k):\ 0\le j,k\le J\}$ as the set of all possible direct links and use $\mbox{\boldmath$\Xi$}^{jk}$ to record the transitions of individuals moving from queue $j$ to queue $k$, then $\mbox{\boldmath$\Xi$}=\{\mbox{\boldmath$\Xi$}^{jk},\ 0\le j,k\le J\}$ gives a full account of customer flows in the network, where departures are transitions to 0 and arrivals are transitions from 0. If $\rho_{jk}$ is the rate of equilibrium flow along the link $(j,k)$, then $\rho_{jk}=\alpha_j\lambda_{jk}$ and the mean measure of $\mbox{\boldmath$\Xi$}$ is $$\mbox{\boldmath$\lambda$}(ds,(j,k))=\rho_{jk}ds,\ s\in{\rm {I\ \nexto R}},\ (j,k)\in {\cal S}.$$ Our interest is in the customer flows along the links in $C\subset{\cal S}$ for the time interval $[0,t]$, so we set the carrier space as $\Gamma_{C,t}=[0,t]\times C$ and use $\boldXi_{C,t}$ to stand for the transitions along the links in $C$ for the period $[0,t]$.
Then the mean measure of $\boldXi_{C,t}$ is $${\bl_{C,t}}(ds,(j,k))=\rho_{jk}ds,\ 0\le s\le t,\ (j,k)\in C.$$ Melamed's theorem states that $\boldXi_{C,t}$ is a Poisson process if and only if no customers travel along the links in $C$ more than once [Melamed~(1979) proved the theorem when $\phi_j(m)=c_j$ for $m\ge 1$ and $1\le j\le J$, and the general case was completed by Walrand \& Varaiya~(1981)]. Barbour \& Brown~(1996) considered the Poisson approximate version of Melamed's theorem by allowing the customers a small probability of travelling along the links more than once. For convenience, we call the probability of customers travelling along the links in $C$ more than once the {\it loop probability}. The bounds for the errors of Poisson process approximation were sharpened by Brown, Weinberg \& Xia~(2000) and Brown, Fackrell \& Xia~(2005), and these studies conclude that the accuracy of Poisson approximation depends on how small the loop probability is. In Section~2 of the paper, we use Palm theory, the Barbour-Brown Lemma [Barbour \& Brown~(1996)] and the infinite divisibility of point processes to prove that $\boldXi_{C,t}$ is a Poisson cluster process [Daley \& Vere-Jones~(1988), p.~243]. The characterization involves a few quantities which are generally intractable, so in Section~3, we prove that $\mbox{\boldmath$\Xi$}$ is over-dispersed [see Brown, Hamza \& Xia~(1998)], i.e., its variance is greater than its mean, and conclude that suitable approximations should be those with the same property, such as the compound Poisson or negative binomial distributions. We then establish a general approximate version of Melamed's theorem for the total number of customers travelling along the links in $C$ for the period $[0,t]$ based on a suitably chosen negative binomial distribution. The approximation error is measured in terms of the total variation distance and the error bound is small when the loop probability is small [cf.
the Poisson approximation error bound in Barbour \& Brown~(1996)] and/or $t$ is large, with error of order $\frac{1}{\sqrt{t}}$ [cf. the Berry--Esseen bound for normal approximation, Petrov~(1995)]. \section{A characterization of the customer flow process} \setcounter{equation}{0} The customer flows are directly linked to the changes of the states of the queue lengths, so we define $N_i(t)$ as the number of customers, including those in service, at queue $i$ and time $t$, $1\le i\le J$. Then $\{{\bf N}(t):=(N_1(t),\dots,N_J(t)):\ t\in{\rm {I\ \nexto R}}\}$ is a pure jump Markov process on state space $\{0,1,2,\dots\}^J$ with the following transition rates from state ${\bf n}=(n_1,\dots,n_J)$: $$q_{{\bf n},{\bf n}+e_j}=\nu_j,\ q_{{\bf n},{\bf n}-e_j+e_k}=\phi_j(n_j)\lambda_{jk},\ q_{{\bf n},{\bf n}-e_j}=\mu_j\phi_j(n_j),\ 1\le j,k\le J,$$ where $e_j$ is the $j$th coordinate vector in $\{0,1,2,\dots\}^J$. The Markov queue length process has a unique stationary distribution: in equilibrium, for each $t>0$, $N_j(t)$, $j=1,\dots,J$, are independent with $${\rm {I\ \nexto P}}(N_j(t)=k)=\frac{\alpha_j^k/\prod_{r=1}^k\phi_j(r)}{\sum_{l=0}^\infty \alpha_j^l/\prod_{r=1}^l\phi_j(r)}.$$ Let $X_i$ be the $i$th queue visited by a given customer, then $\{X_i:\ i=1,2,\dots\}$ is called the {\it forward customer chain} and it is a homogeneous finite Markov chain with transition probabilities \begin{eqnarray*} &&p_{00}=0,\ p_{0k}=\frac{\nu_k}{\sum_{j=1}^J\nu_j};\\ &&p_{j0}=\mu_j,\ p_{jk}=\lambda_{jk},\ j,k=1,\dots,J.
\end{eqnarray*} The backward customer chain $X^\ast$ is the forward customer chain for the time-reversed process of $\{{\bf N}(t):=(N_1(t),\dots,N_J(t)):\ t\in{\rm {I\ \nexto R}}\}$ [Barbour \& Brown~(1996), p.~475] and it can be viewed as the time-reversal of the forward customer chain $\{X_i:\ i=1,2,\dots\}$ with transition probabilities \begin{eqnarray*} &&p_{00}^\ast=0,\ p_{0j}^\ast=\frac{\mu_j\alpha_j}{\sum_{l=1}^J\mu_l\alpha_l};\\ &&p_{k0}^\ast=\frac{\nu_k}{\alpha_k},\ p_{kj}^\ast=\frac{\alpha_j\lambda_{jk}}{\alpha_k},\ j,k=1,\dots,J. \end{eqnarray*} We will use the Palm distributions to characterize the distribution of $\boldXi_{C,t}$, prove its properties and establish a general approximate version of Melamed's theorem for $\boldXi_{C,t}(\Gamma_{C,t})$. For the point process $\mbox{\boldmath$\Xi$}$ with locally finite mean measure $\mbox{\boldmath$\lambda$}(ds,(j,k))=\rho_{jk}ds,\ s\in{\rm {I\ \nexto R}},\ (j,k)\in{\cal S}$, we may consider it as a random measure on the metric space ${\rm {I\ \nexto R}}\times {\cal S}$ equipped with the metric $$d((u_1,(j_1,j_2)),(u_2,(k_1,k_2)))=|u_1-u_2|+{\bf 1}_{(j_1,j_2)\ne(k_1,k_2)},\ u_1,u_2\in{\rm {I\ \nexto R}}\mbox{ and }(j_1,j_2),\ (k_1,k_2)\in{\cal S},$$ so that we can define the {\it Palm distribution} at $\alpha\in {\rm {I\ \nexto R}}\times {\cal S}$ as the distribution of $\mbox{\boldmath$\Xi$}$ conditional on the presence of a point at $\alpha$, that is, $$P^\alpha(\cdot)= \frac{{\rm {I\ \kern -0.54em E}} \left[1_{[{\scriptsize \mbox{\boldmath$\Xi$}}\in\cdot]}\mbox{\boldmath$\Xi$}(d\alpha)\right]}{\mbox{\boldmath$\lambda$}(d\alpha)},\ \alpha\in\Gamma_{C,t}\ \mbox{\boldmath$\lambda$}-\mbox{almost surely},$$ see Kallenberg~(1983), p.~83 for more details. A process $\mbox{\boldmath$\Xi$}^\alpha$ is called the {\it Palm process} of $\mbox{\boldmath$\Xi$}$ at $\alpha$ if its distribution is $P^\alpha$.
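The one-point analogue of the Palm construction is easy to check numerically. The sketch below (illustrative, not from the paper) verifies the classical identity that the Palm, i.e. size-biased, version of a Poisson count has the law of the count plus one, mirroring the fact that a Poisson process coincides with its reduced Palm process plus the conditioning point:

```python
import math

def pois_pmf(i, lam):
    # P(X = i) for X ~ Poisson(lam)
    return math.exp(-lam) * lam**i / math.factorial(i)

def size_biased_pmf(i, lam):
    # Palm (size-biased) pmf: i * P(X = i) / E X, with E X = lam
    return i * pois_pmf(i, lam) / lam

lam = 2.5   # illustrative rate
# the size-biased Poisson(lam) law coincides with that of 1 + Poisson(lam)
for i in range(1, 20):
    assert abs(size_biased_pmf(i, lam) - pois_pmf(i - 1, lam)) < 1e-12
```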
In applications, it is often more convenient to work with the {\it reduced Palm process} $\mbox{\boldmath$\Xi$}^\alpha-\delta_\alpha$ [Kallenberg~(1983), p.~84], where $\delta_\alpha$ is the Dirac measure at $\alpha$. The Palm distributions are closely related to {\it size-biasing} in sampling contexts [Cochran~(1977)]. More precisely, if $X$ is a non-negative integer-valued random variable, one may consider it as a point process whose carrier space has only one point, so its Palm distribution becomes $${\rm {I\ \nexto P}}(X_s=i):=\frac{i{\rm {I\ \nexto P}}(X=i)}{{\rm {I\ \kern -0.54em E}} X}.$$ This is exactly the definition of the {\it size-biased} distribution of $X$ [see Goldstein \& Xia~(2006)]. \begin{Lemma}\label{BBlemma} [Barbour \& Brown (1996)] For the open queuing network, the reduced Palm distribution for the network given a transition at link $(j,k)$ at time $0$ is the same as that for the original network, save that the network on $(0,\infty)$ behaves as if there were an extra individual at queue $k$ at time 0 and the network on $(-\infty,0)$ behaves as if there were an extra individual in queue $j$ at time 0. \end{Lemma} For two random elements $\eta_1$ and $\eta_2$ having the same distribution, we write for brevity $\eta_1\stackrel{\mbox{\scriptsize{{\rm d}}}}{=}\eta_2$. \begin{Lemma}\label{XiaLemma1} For each $(j,k)\in{\cal S}$, there is a point process $\mbox{\boldmath$\xi$}^{(0,(j,k))}$ on ${\rm {I\ \nexto R}}\times{\cal S}$ independent of $\mbox{\boldmath$\Xi$}$ such that $$\mbox{\boldmath$\xi$}^{(0,(j,k))}+\mbox{\boldmath$\Xi$} \stackrel{\mbox{\scriptsize{{\rm d}}}}{=}\mbox{\boldmath$\Xi$}^{(0,(j,k))}.$$ \end{Lemma} \noindent{\bf Proof.} The proof is adapted from Barbour \& Brown~(1996), p.~480.
By Lemma~\ref{BBlemma}, the reduced Palm process $\mbox{\boldmath$\Xi$}^{(0,(j,k))}-\delta_{(0,(j,k))}$ has the same distribution as that of $\mbox{\boldmath$\Xi$}$ except that the network on $(0,\infty)$ behaves as if there were an extra individual at queue $k$ at time 0 and the network on $(-\infty,0)$ behaves as if there were an extra individual in queue $j$ at time 0. Let $\tilde X^{(0)}$ and $X^{(0)}$ be the routes taken by the extra individual on $(-\infty,0)$ and $(0,\infty)$ respectively. Whenever the extra customer is at queue $i$ together with $m$ other customers, we use independently sampled exponential service requirements with instantaneous service rate $\phi_i(m+1)-\phi_i(m)$. Noting that this construction ensures that the extra customer uses the ``spare'' service effort and never ``interferes'' with the flow of the main traffic, one can see that the extra customer's transitions are independent of $\mbox{\boldmath$\Xi$}$. The same procedure applies to the construction of the backward route. Let $\mbox{\boldmath$\xi$}^{(0,(j,k))}$ be the transitions taken by the extra customer on $(-\infty,0)\cup(0,\infty)$ plus the Dirac measure $\delta_{(0,(j,k))}$, then $\mbox{\boldmath$\xi$}^{(0,(j,k))}$ is independent of $\mbox{\boldmath$\Xi$}$ and the conclusion of the lemma follows from the construction. \hbox{\vrule width 5pt height 5pt depth 0pt} Let $\theta_s, \ s\in{\rm {I\ \nexto R}}$, denote the shift operator on ${\rm {I\ \nexto R}}\times{\cal S}$ which translates each point in ${\rm {I\ \nexto R}}\times{\cal S}$ by $s$ to the left, i.e. $\theta_s((u,(j,k)))=(u-s,(j,k))$, and we use $\mbox{\boldmath$\xi$}^{(s,(j,k))}$ to stand for a copy of $\mbox{\boldmath$\xi$}^{(0,(j,k))}\circ\theta_s$, $s\in{\rm {I\ \nexto R}}$. From now on, we focus on the point process $\boldXi_{C,t}$. With metric $d$, $\Gamma_{C,t}$ is a Polish space and we use ${\cal B}\left(\Gamma_{C,t}\right)$ to stand for the Borel $\sigma$-algebra in $\Gamma_{C,t}$.
Let $H_{C,t}$ denote the class of all configurations (finite nonnegative integer-valued measures) on $\Gamma_{C,t}$ with ${\cal H}_{C,t}$ the $\sigma$-algebra in $H_{C,t}$ generated by the sets $$\{\xi\in H_{C,t}: \xi(B)=i\},\ i\in\mathbb{Z}_+:=\{0,1,2,\dots\}, \ B\in{\cal B}\left(\Gamma_{C,t}\right),$$ see Kallenberg~(1983), p.~12. \begin{Theorem}\label{Th1} Let $\{\mbox{\boldmath$\eta$}_i,\ i\ge 0\}$ be independent and identically distributed random measures on $\Gamma_{C,t}$ having the distribution \begin{equation}{\rm {I\ \nexto P}}\left[\mbox{\boldmath$\eta$}_0\left(\Gamma_{C,t}\right)\ge 1\right]=1,\ {\rm {I\ \nexto P}}(\mbox{\boldmath$\eta$}_0\in A)={\rm {I\ \kern -0.54em E}} \sum_{(j,k)\in C}\int_0^t \frac{{\bf 1}_{[{\scriptsize\mbox{\boldmath$\xi$}}^{(s,(j,k))}\in A]}}{\mbox{\boldmath$\xi$}^{(s,(j,k))}\left(\Gamma_{C,t}\right)}\cdot\frac{\rho_{jk}}{\theta_{C,t}}ds,\ A\in {\cal H}_{C,t}, \label{Xia2.1} \end{equation} where \begin{equation}\theta_{C,t}={\rm {I\ \kern -0.54em E}} \sum_{(j,k)\in C}\int_0^t \frac{1}{\mbox{\boldmath$\xi$}^{(s,(j,k))}\left(\Gamma_{C,t}\right)}\rho_{jk}ds.\label{Xia2.2}\end{equation} Let $M$ be a Poisson random variable with mean $\theta_{C,t}$ and independent of $\{\mbox{\boldmath$\eta$}_i,\ i\ge 0\}$, then $$\boldXi_{C,t}\stackrel{\mbox{\scriptsize{{\rm d}}}}{=} \sum_{i=1}^M\mbox{\boldmath$\eta$}_i.$$ \end{Theorem} \noindent{\bf Proof.} By Lemma~\ref{XiaLemma1} and Theorem~11.2 of [Kallenberg~(1983)], we can conclude that $\boldXi_{C,t}$ is infinitely divisible, hence we obtain from Lemma~6.6 and Theorem~6.1 of [Kallenberg~(1983)] that $\boldXi_{C,t}$ is a Poisson cluster process, that is, $$\boldXi_{C,t}\stackrel{\mbox{\scriptsize{{\rm d}}}}{=} \sum_{i=1}^M\mbox{\boldmath$\eta$}_i,$$ where $\mbox{\boldmath$\eta$}_i,\ i\ge 0$ are independent and identically distributed random measures on $\Gamma_{C,t}$ such that ${\rm {I\ \nexto P}}\left(\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})\ge 1\right)=1$, $M$ is a Poisson random variable with 
mean $\theta_{C,t}$ and independent of $\{\mbox{\boldmath$\eta$}_i,\ i\ge 1\}$. Direct verification shows that the Palm process of $\sum_{i=1}^M\mbox{\boldmath$\eta$}_i$ at $\alpha\in\Gamma_{C,t}$ is $\sum_{i=1}^M\mbox{\boldmath$\eta$}_i+\mbox{\boldmath$\eta$}_0^{\alpha}$, where $\mbox{\boldmath$\eta$}_0^{\alpha}$ is the Palm process of $\mbox{\boldmath$\eta$}_0$ at $\alpha$ and is independent of $\{M,\mbox{\boldmath$\eta$}_i,\ i\ge 1\}$. This in turn implies that $\mbox{\boldmath$\xi$}^{(s,(j,k))}\stackrel{\mbox{\scriptsize{{\rm d}}}}{=} \mbox{\boldmath$\eta$}_0^{(s,(j,k))}.$ Let $\mbox{\boldmath$\mu$}(ds,(j,k))$ denote the mean measure of the point process $\mbox{\boldmath$\eta$}_0$, then an elementary computation shows that the mean measure of $\sum_{i=1}^M\mbox{\boldmath$\eta$}_i$ is $\theta_{C,t}\mbox{\boldmath$\mu$}(ds,(j,k))$ for $(j,k)\in C$ and $0\le s\le t$. On the other hand, the mean measure of $\boldXi_{C,t}$ is ${\bl_{C,t}}(ds,(j,k))=\rho_{jk}ds $, $(j,k)\in C$ and $s\in[0,t]$, so we obtain \begin{equation}\mbox{\boldmath$\mu$}(ds,(j,k))=\frac{\rho_{jk}}{\theta_{C,t}}ds,\ (j,k)\in C,\ s\in[0,t].\label{Xia2.3}\end{equation} The representation \Ref{Xia2.1} follows from the fact that ${\rm {I\ \nexto P}}\left(\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})\ge 1\right)=1$ and $${\rm {I\ \nexto P}}\left(\mbox{\boldmath$\eta$}_0\in A\right)={\rm {I\ \kern -0.54em E}}\int_{\Gamma_{C,t}}\frac{{\bf 1}_{[{\scriptsize\mbox{\boldmath$\eta$}_0}\in A]}}{\mbox{\boldmath$\eta$}_0\left(\Gamma_{C,t}\right)}\mbox{\boldmath$\eta$}_0(d\alpha)={\rm {I\ \kern -0.54em E}} \sum_{(j,k)\in C}\int_0^t \frac{{\bf 1}_{[{\scriptsize\mbox{\boldmath$\xi$}}^{(s,(j,k))}\in A]}}{\mbox{\boldmath$\xi$}^{(s,(j,k))}\left(\Gamma_{C,t}\right)}\frac{\rho_{jk}}{\theta_{C,t}}ds.$$ In particular, if we take $A=H_{C,t}$, then the left-hand side becomes 1, so \Ref{Xia2.2} follows.
\hbox{\vrule width 5pt height 5pt depth 0pt} Although $\theta_{C,t}$ is specified by \Ref{Xia2.2}, the Palm process $\mbox{\boldmath$\xi$}^{(s,(j,k))}$ is generally intractable, so it is virtually impossible to express $\theta_{C,t}$ explicitly in terms of the specifications of the Jackson queuing network. On the other hand, the relationship \Ref{Xia2.3} yields $${\rm {I\ \kern -0.54em E}}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})=\frac{\rho_C}{\theta_{C,t}}t,$$ where $\rho_C:=\sum_{(j,k)\in C}\rho_{jk}$. The following proposition tells us the range of values that ${\rm {I\ \kern -0.54em E}}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})$ and $\theta_{C,t}$ may take. To this end, we define \begin{equation}\epsilon_C(j,k)={\rm {I\ \kern -0.54em E}}\mbox{\boldmath$\xi$}^{(0,(j,k))}({\rm {I\ \nexto R}}\times C)-1\mbox{ and }\epsilon_C=\sum_{(j,k)\in C}\frac{\rho_{jk}}{\rho_C}\epsilon_C(j,k).\label{Xia2.6}\end{equation} In other words, $\epsilon_C(j,k)$ is the average number of visits in $C$ by the extra customer crossing the link $(j,k)$ and $\epsilon_C$ is the weighted average number of visits by an extra customer crossing links in $C$. \begin{Proposition} We have \begin{equation}1\le {\rm {I\ \kern -0.54em E}}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})\le 1+\epsilon_C\label{Xia2.4}\end{equation} and \begin{equation}\frac{\rho_C}{1+\epsilon_C}t\le\theta_{C,t}\le\rho_C t.\label{Xia2.5}\end{equation} \end{Proposition} \noindent{\bf Proof.} The first inequality of \Ref{Xia2.4} follows immediately from the fact that \\ ${\rm {I\ \nexto P}}(\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})\ge 1)=1$.
For the second inequality of \Ref{Xia2.4}, noting that the mean measure of $\mbox{\boldmath$\eta$}_0$ is $$\mbox{\boldmath$\mu$}(ds,(j,k))=\frac{\rho_{jk}}{\theta_{C,t}}ds,\ (j,k)\in C,\ s\in[0,t],$$ we have \begin{eqnarray*} \left\{\frac{\rho_C}{\theta_{C,t}}t\right\}^2&=&[{\rm {I\ \kern -0.54em E}}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})]^2\le{\rm {I\ \kern -0.54em E}}\left[\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})^2\right]\\ &=&{\rm {I\ \kern -0.54em E}}\int_{\Gamma_{C,t}}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})\mbox{\boldmath$\eta$}_0(d\alpha)\\ &=&\sum_{(j,k)\in C}\int_0^t{\rm {I\ \kern -0.54em E}} \mbox{\boldmath$\eta$}_0^{(s,(j,k))}(\Gamma_{C,t})\mbox{\boldmath$\mu$}(ds,(j,k))\\ &\le&\sum_{(j,k)\in C}(1+\epsilon_C(j,k))\frac{\rho_{jk}}{\theta_{C,t}}t\\ &=&\frac{\rho_C}{\theta_{C,t}}t+\frac{\sum_{(j,k)\in C}\epsilon_C(j,k)\rho_{jk}}{\theta_{C,t}}t\\ &=&\frac{\rho_C}{\theta_{C,t}}t(1+\epsilon_C). \end{eqnarray*} We divide both sides by $\frac{\rho_C}{\theta_{C,t}}t$ to get $${\rm {I\ \kern -0.54em E}}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})=\frac{\rho_C}{\theta_{C,t}}t\le 1+\epsilon_C.$$ Finally, \Ref{Xia2.5} is an immediate consequence of \Ref{Xia2.4} and the equation $\theta_{C,t}=\frac{\rho_C}{{\rm {I\ \kern -0.54em E}}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})}t$. \hbox{\vrule width 5pt height 5pt depth 0pt} \begin{Remark}{\rm If the loop probability in $C$ is 0, then $\epsilon_C=0$ and ${\rm {I\ \kern -0.54em E}}\mbox{\boldmath$\eta$}_0(\Gamma_{C,t})=1$, so each cluster $\mbox{\boldmath$\eta$}_0$ consists of a single transition: it crosses the link $(j,k)$ with probability $\frac{\rho_{jk}}{\rho_C}$ at a time uniformly distributed on $[0,t].$} \end{Remark} \section{A discrete central limit theorem for the customer flow process} \setcounter{equation}{0} A random variable is said to be {\it over-dispersed} (resp. {\it under-dispersed}) if its variance-to-mean ratio is greater (resp. less) than one. A random measure $\chi$ on a Polish space is said to be {\it over-dispersed} (resp.
{\it under-dispersed}) if $\chi(B)$ is over-dispersed (resp. under-dispersed) for all bounded Borel subsets $B$ of the Polish space. It is concluded in Brown, Hamza \& Xia~(1998) that point processes arising from time-reversible, irreducible Markov chains with finitely many states are always over-dispersed. As our process $\mbox{\boldmath$\Xi$}$ is essentially a multivariate version of the point processes studied in Brown, Hamza \& Xia~(1998), the following property can be viewed as a natural extension of that study. \begin{Proposition} The point process $\mbox{\boldmath$\Xi$}$ is over-dispersed. \end{Proposition} \noindent{\bf Proof.} The space $({\rm {I\ \nexto R}}\times{\cal S},d)$ is a Polish space and for each bounded Borel subset $B$ of ${\rm {I\ \nexto R}}\times {\cal S}$, it follows from the definition of the Palm processes [see Kallenberg~(1983), p.~84, equation~(10.4)] that \begin{eqnarray*} &&{\rm {I\ \kern -0.54em E}}\left[\mbox{\boldmath$\Xi$}(B)\right]^2={\rm {I\ \kern -0.54em E}}\int_B\mbox{\boldmath$\Xi$}(B)\mbox{\boldmath$\Xi$}(d\alpha)={\rm {I\ \kern -0.54em E}} \int_B\mbox{\boldmath$\Xi$}^\alpha(B)\mbox{\boldmath$\lambda$}(d\alpha) \ge{\rm {I\ \kern -0.54em E}} \int_B\left(\mbox{\boldmath$\Xi$}(B)+1\right)\mbox{\boldmath$\lambda$}(d\alpha), \end{eqnarray*} that is, \begin{equation}{\rm Var}\left[\mbox{\boldmath$\Xi$}(B)\right]\ge{\rm {I\ \kern -0.54em E}} \mbox{\boldmath$\Xi$}(B),\label{xia3.1} \end{equation} completing the proof. \hbox{\vrule width 5pt height 5pt depth 0pt} The inequality in \Ref{xia3.1} is generally strict unless the loop probability is 0, i.e. unless $\mbox{\boldmath$\Xi$}$ is a Poisson process. Hence, suitable approximate models for the distribution of $\Xi_{C,t}:=\boldXi_{C,t}(\Gamma_{C,t})$ are necessarily over-dispersed. One potential candidate for approximating the distribution of $\Xi_{C,t}$ is the compound Poisson distribution.
However, as it is virtually impossible to extract the distribution of $\mbox{\boldmath$\xi$}^{(s,(j,k))}(\Gamma_{C,t})$ for $0\le s\le t$, we face the same difficulty in specifying and estimating the approximate distribution if a general compound Poisson is used. On the other hand, as a special family of the compound Poisson distributions [Johnson, Kemp \& Kotz (2005), pp.~212--213 and p.~346], the negative binomial distribution has been well documented as a natural model for many over-dispersed random phenomena [see Bliss \& Fisher~(1953), Wang \& Xia~(2008) and Xia \& Zhang~(2009)]. The negative binomial distribution ${\rm NB}(r,q)$, $r>0$, $0<q<1$, is defined as $$\pi_i=\frac{\Gamma(r+i)}{\Gamma(r)i!}q^r(1-q)^i,\ i\in\mathbb{Z}_+.$$ The advantage of using negative binomial approximation is that it suffices to estimate the mean and variance of the approximating distribution, just as in applying the central limit theorem based on the normal approximation. We will use the total variation distance between the distributions of nonnegative integer-valued random variables $Y_1$ and $Y_2$ $$d_{TV}(Y_1,Y_2):=\sup_{A\subset \mathbb{Z}_+}|{\rm {I\ \nexto P}}(Y_1\in A)-{\rm {I\ \nexto P}}(Y_2\in A)|$$ to measure the approximation errors in negative binomial approximation. The discrete central limit theorem is valid under the assumption that the loop probability is less than 1. More precisely, let $w_C(jk)$ be the probability that a customer crossing link $(j,k)$ crosses the links in $C$ only once, i.e., that this crossing is the only time the customer ever crosses a link in $C$. Define $$w_C=\sum_{(j,k)\in C}w_C(jk)\rho_{jk}/\rho_C,$$ the weighted probability of customers crossing links in $C$ only once. Clearly, we have $w_C(jk)\ge\mu_k$, so $$w_C\ge\sum_{(j,k)\in C}\rho_{jk}\mu_k/\rho_C.$$ The following lemma plays a crucial role in the estimation of the negative binomial approximation error.
\begin{Lemma}\label{keylemma1} $d_{TV}\left(\Xi_{C,t},\Xi_{C,t}+1\right)\le \frac1{\sqrt{2e w_C\rho_Ct}}.$ \end{Lemma} \noindent{\bf Proof.} We prove the claim by a coupling based on the ``priority principle'' [cf. the proof of Lemma~\ref{XiaLemma1}]. We define a customer as a {\it single crossing} ({\it sc} for brevity) customer if the customer crosses links in $C$ only once; otherwise, the customer is labelled {\it multiple crossing}, or {\it mc} for short. We ``manage'' the network by regrouping the customers at each queue into {\it sc} customers and {\it mc} customers. Whenever there are $m_2$ {\it mc} customers together with $m_1$ {\it sc} customers at queue $j$, we use independently sampled exponential service requirements with instantaneous service rate $\phi_j(m_1+m_2)-\phi_j(m_1)$ for all of the {\it mc} customers, while the service for the {\it sc} customers is carried out with instantaneous service rate $\phi_j(m_1)$, that is, as if no {\it mc} customers were present at the queue. Since the {\it sc} customers take priority over the {\it mc} customers, and the {\it mc} customers use the ``spare'' service effort and never interrupt the traffic flow of the {\it sc} ones, the {\it mc} transitions are independent of the transitions of the {\it sc} customers. Let $Z_1^{jk}$ (resp. $Z_2^{jk}$) denote the transitions of {\it sc} (resp.
{\it mc}) customers moving from queue $j$ to queue $k$ in the period $[0,t]$, then ${\bf Z}_1:=\{Z_1^{jk},\ (j,k)\in C\}$ and ${\bf Z}_2:=\{Z_2^{jk},\ (j,k)\in C\}$ are independent and $$\boldXi_{C,t}\stackrel{\mbox{\scriptsize{{\rm d}}}}{=} {\bf Z}_1+{\bf Z}_2.$$ By Melamed's theorem, the point process ${\bf Z}_1$ is a Poisson process with mean measure $$\mbox{\boldmath$\lambda$}_{{\bf Z}_1}(ds,(j,k))=w_C(jk)\rho_{jk}ds,\ (j,k)\in C,\ 0\le s\le t,$$ so ${\bf Z}_1(\Gamma_{C,t})$ follows a Poisson distribution with mean $w_C\rho_Ct$ and $$d_{TV}\left(\Xi_{C,t},\Xi_{C,t}+1\right)\le d_{TV}\left({\bf Z}_1(\Gamma_{C,t}),{\bf Z}_1(\Gamma_{C,t})+1\right)\le \frac1{\sqrt{2e w_C\rho_Ct}},$$ where the last inequality follows from the unimodality of the Poisson distribution and Proposition~A.2.7 of [Barbour, Holst \& Janson~(1992), p.~262]. \hbox{\vrule width 5pt height 5pt depth 0pt} To state the discrete central limit theorem, we set $$\sigma_C(j,k)={\rm {I\ \kern -0.54em E}}[\mbox{\boldmath$\xi$}^{(0,(j,k))}({\rm {I\ \nexto R}}\times C)(\mbox{\boldmath$\xi$}^{(0,(j,k))}({\rm {I\ \nexto R}}\times C)-1)]\mbox{ and }\sigma_C=\sum_{(j,k)\in C}\frac{\rho_{jk}}{\rho_C}\sigma_C(j,k).$$ That is, $\sigma_C(j,k)$ is the second factorial moment of the number of visits in $C$ by the extra customer crossing the link $(j,k)$ and $\sigma_C$ is the weighted average of the second factorial moments of the number of visits by an extra customer crossing links in $C$ [cf. \Ref{Xia2.6}].
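The theorem that follows chooses the negative binomial parameters by matching the first two moments of $\Xi_{C,t}$. As a quick sketch (illustrative values, not from the paper), one can check that for any prescribed mean $\mu$ and variance $v>\mu$ the choice $r=\mu^2/(v-\mu)$, $q=\mu/v$ yields a negative binomial with exactly that mean and variance under the parametrization ${\rm NB}(r,q)$ above:

```python
# Moment matching for NB(r, q) with pmf Gamma(r+i)/(Gamma(r) i!) q^r (1-q)^i,
# which has mean r(1-q)/q and variance r(1-q)/q^2.
def nb_params(mu, v):
    assert v > mu > 0, "negative binomial requires over-dispersion (v > mu)"
    return mu**2 / (v - mu), mu / v

def nb_mean_var(r, q):
    mean = r * (1 - q) / q
    return mean, mean / q   # variance = mean / q

mu, v = 7.3, 11.9           # illustrative values with v > mu
r, q = nb_params(mu, v)
m, s2 = nb_mean_var(r, q)
assert abs(m - mu) < 1e-10 and abs(s2 - v) < 1e-10
```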
\begin{Theorem}\label{Xia3.2} Let $$r=\frac{(\rho_Ct)^2}{{\rm Var}(\Xi_{C,t})-\rho_Ct},\ q=\frac{\rho_Ct}{{\rm Var}(\Xi_{C,t})},$$ then \begin{eqnarray} d_{TV}\left(\Xi_{C,t},{\rm NB}(r,q)\right)&\le&\frac1{(\rho_Ct)^2\sqrt{2e w_C\rho_Ct}} \left\{2({\rm Var}(\Xi_{C,t})-\rho_Ct)^2\right.\nonumber\\ &&\mbox{\hskip0cm}\left.+\rho_Ct(\Xi_{C,t}[3]-\rho_Ct\Xi_{C,t}[2]-2\rho_Ct({\rm Var}(\Xi_{C,t})-\rho_Ct))\right\}\label{Xia3.2.1}\\ &\le&\frac1{\sqrt{2e w_C\rho_Ct}}(2\epsilon_C^2+\sigma_C),\label{Xia3.2.2} \end{eqnarray} where $\Xi_{C,t}[n]$ stands for the $n$th factorial moment of $\Xi_{C,t}$ defined as $$\Xi_{C,t}[n]={\rm {I\ \kern -0.54em E}}[\Xi_{C,t}(\Xi_{C,t}-1)\dots(\Xi_{C,t}-n+1)].$$ \end{Theorem} \begin{Remark}{\rm The parameters of the approximating negative binomial distribution are chosen so that it matches the mean and variance of $\Xi_{C,t}$.} \end{Remark} \begin{Remark}{\rm If the loop probability in $C$ is 0, then the negative binomial reduces to the Poisson distribution and the upper bound in Theorem~\ref{Xia3.2} becomes 0. This recovers one half of Melamed's theorem~(1979).} \end{Remark} \begin{Remark}{\rm If the loop probability is between 0 and 1, then both $\epsilon_C$ and $\sigma_C$ are finite, so the negative binomial approximation error bound is of order $O(1/\sqrt{t})$. Furthermore, if the loop probability is small, then both $\epsilon_C$ and $\sigma_C$ are small, so the negative binomial approximation to the distribution of $\Xi_{C,t}$ is even more accurate.} \end{Remark} \noindent{\bf Proof of Theorem~\ref{Xia3.2}.} The essence of Stein's method is to find a generator which characterizes the approximating distribution and to establish a Stein identity that transforms the problem of estimating the approximation errors into the study of the structure of the object under investigation.
In the context of negative binomial approximation, let $a=r(1-q)$ and $b=1-q$; then a generator which characterizes ${\rm NB}(r,q)$ is defined as $${\cal B} g(i)=(a+bi)g(i+1)-ig(i),\ i\in\mathbb{Z}_+,$$ for all bounded functions $g$ on $\mathbb{Z}_+$ [see Brown \& Phillips~(1999) and Brown \& Xia~(2001)]. The Stein identity is naturally established as \begin{equation}{\cal B} g(i)=f(i)-\pi(f)\label{Steinidentity}\end{equation} for $f\in{\cal F}:=\{f:\ \mathbb{Z}_+\to[0,1]\}$, where $\pi(f)=\sum_{i=0}^\infty f(i)\pi_i$. It was shown in Brown \& Xia~(2001) that, for each $f\in{\cal F}$, the solution $g_f$ to the Stein equation \Ref{Steinidentity} satisfies \begin{equation}\|\Delta g_f\|\le\frac1a,\label{Steinconstant}\end{equation} where $\Delta g_f(\cdot)=g_f(\cdot+1)-g_f(\cdot).$ The Stein identity \Ref{Steinidentity} ensures that $$\sup_{f\in{\cal F}}\left |{\rm {I\ \kern -0.54em E}} f(\Xi_{C,t})-\pi(f)\right|=\sup_{f\in{\cal F}}\left|{\rm {I\ \kern -0.54em E}} {\cal B} g_f(\Xi_{C,t})\right|,$$ hence, it suffices to estimate ${\rm {I\ \kern -0.54em E}} {\cal B} g_f(\Xi_{C,t})$ for all $f\in{\cal F}$. For convenience, we drop $f$ from the subscript of $g_f$.
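As a numerical sanity check (illustrative only, not part of the argument): with $a=r(1-q)$ and $b=1-q$, the generator ${\cal B}$ indeed annihilates ${\rm NB}(r,q)$, and the parameters $r,q$ of Theorem~\ref{Xia3.2} match a prescribed mean and variance. The numerical values below are arbitrary test inputs.

```python
import math

def nb_pmf(r, q, n_max):
    """NB(r, q) point probabilities: pi_0 = q^r, pi_{i+1} = pi_i * (1-q)(r+i)/(i+1)."""
    pi = [q ** r]
    for i in range(n_max):
        pi.append(pi[-1] * (1 - q) * (r + i) / (i + 1))
    return pi

# Moment matching: a mean m and variance v > m determine r and q as in Theorem 3.2.
m, v = 5.0, 8.0
r, q = m * m / (v - m), m / v
pi = nb_pmf(r, q, 500)
mean = sum(i * p for i, p in enumerate(pi))
var = sum(i * i * p for i, p in enumerate(pi)) - mean ** 2
assert abs(mean - m) < 1e-8 and abs(var - v) < 1e-6

# Stein characterization: E[(a + b*Z) g(Z+1) - Z g(Z)] = 0 for Z ~ NB(r, q).
a, b = r * (1 - q), 1 - q
g = lambda i: math.sin(i) / (i + 1)  # an arbitrary bounded test function
err = sum(p * ((a + b * i) * g(i + 1) - i * g(i)) for i, p in enumerate(pi))
assert abs(err) < 1e-8
```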
By Lemma~\ref{XiaLemma1}, we can take a point process $\mbox{\boldmath$\xi$}_{C,t}^{(s,(j,k))}$ on $\Gamma_t$ independent of $\boldXi_{C,t}$ such that $$\boldXi_{C,t}^{(s,(j,k))}=\boldXi_{C,t}+\mbox{\boldmath$\xi$}_{C,t}^{(s,(j,k))}.$$ Therefore, if we write $\mbox{\boldmath$\xi$}_{C,t}^{(s,(j,k))}(\Gamma_{C,t})=1+\xi^{(s,(j,k))}$, then \begin{eqnarray} {\rm {I\ \kern -0.54em E}}{\cal B} g(\Xi_{C,t})&=&{\rm {I\ \kern -0.54em E}}[(a+b\Xi_{C,t})g(\Xi_{C,t}+1)-\Xi_{C,t} g(\Xi_{C,t})]\nonumber\\ &=&a{\rm {I\ \kern -0.54em E}} g(\Xi_{C,t}+1)+b\sum_{(j,k)\in C}\int_0^tg(\Xi_{C,t}+2+\xi^{(s,(j,k))})\rho_{jk}ds\nonumber\\ &&-\sum_{(j,k)\in C}\int_0^tg(\Xi_{C,t}+1+\xi^{(s,(j,k))})\rho_{jk}ds.\label{proof01} \end{eqnarray} Let \begin{equation} a+(b-1)\sum_{(j,k)\in C}\rho_{jk}t=0,\label{coefficient1}\end{equation} and ${\tilde \Xi_{C,t}}=\Xi_{C,t}+1$, then it follows from \Ref{proof01} that \begin{eqnarray} &&{\rm {I\ \kern -0.54em E}}{\cal B} g(\Xi_{C,t})\nonumber\\ &&={\rm {I\ \kern -0.54em E}} \sum_{(j,k)\in C}\int_0^t\left[b\left(g\left({\tilde \Xi_{C,t}}+1+\xi^{(s,(j,k))}\right)-g\left({\tilde \Xi_{C,t}}\right)\right)-\left(g\left({\tilde \Xi_{C,t}}+\xi^{(s,(j,k))}\right)-g\left({\tilde \Xi_{C,t}}\right)\right)\right]\rho_{jk}ds\nonumber\\ &&={\rm {I\ \kern -0.54em E}} \sum_{(j,k)\in C}\int_0^t\left\{\sum_{r=0}^{\xi^{(s,(j,k))}-1}\left[b\Delta g\left({\tilde \Xi_{C,t}}+r+1\right) -\Delta g\left({\tilde \Xi_{C,t}}+r\right)\right]+b\Delta g\left({\tilde \Xi_{C,t}}\right)\right\}\rho_{jk}ds. 
\label{proof02}\nonumber\\ \end{eqnarray} Now, set \begin{equation} b=\frac{\sum_{(j,k)\in C}\int_0^t{\rm {I\ \kern -0.54em E}} \xi^{(s,(j,k))}\rho_{jk}ds}{\sum_{(j,k)\in C}\int_0^t{\rm {I\ \kern -0.54em E}} \xi^{(s,(j,k))}\rho_{jk}ds+\rho_C t} =\frac{{\rm Var}\left(\Xi_{C,t}\right)-\rho_Ct}{{\rm Var}\left(\Xi_{C,t}\right)}\label{coefficient2},\end{equation} where the last equality is due to the following observation: \begin{eqnarray*} {\rm {I\ \kern -0.54em E}}\Xi_{C,t}^2&=&{\rm {I\ \kern -0.54em E}}\int_{\Gamma_{C,t}}\Xi_{C,t}\boldXi_{C,t}(d\alpha)\\ &=&\sum_{(j,k)\in C}{\rm {I\ \kern -0.54em E}}\int_0^t\left(\Xi_{C,t}+1+\xi^{(s,(j,k))}\right)\rho_{jk}ds\\ &=&({\rm {I\ \kern -0.54em E}}\Xi_{C,t})^2+\rho_Ct+{\rm {I\ \kern -0.54em E}}\sum_{(j,k)\in C}\int_0^t\xi^{(s,(j,k))}\rho_{jk}ds, \end{eqnarray*} and so \begin{equation}{\rm {I\ \kern -0.54em E}}\sum_{(j,k)\in C}\int_0^t\xi^{(s,(j,k))}\rho_{jk}ds={\rm Var}(\Xi_{C,t})-\rho_Ct.\label{proof04}\end{equation} We then obtain from \Ref{proof02} that \begin{eqnarray} &&{\rm {I\ \kern -0.54em E}}{\cal B} g(\Xi_{C,t})\nonumber\\ &&={\rm {I\ \kern -0.54em E}} \sum_{(j,k)\in C}\int_0^t\left\{\sum_{r=0}^{\xi^{(s,(j,k))}-1}\left[b\Delta^2 g\left({\tilde \Xi_{C,t}}+r\right)-(1-b)\sum_{l=0}^{r-1}\Delta^2 g\left({\tilde \Xi_{C,t}}+l\right)\right]\right\}\rho_{jk}ds\nonumber\\ &&={\rm {I\ \kern -0.54em E}} \sum_{(j,k)\in C}\int_0^t\left\{\sum_{r=0}^{\xi^{(s,(j,k))}-1}\left[b{\rm {I\ \kern -0.54em E}}\Delta^2 g\left({\tilde \Xi_{C,t}}+r\right)-(1-b)\sum_{l=0}^{r-1}{\rm {I\ \kern -0.54em E}}\Delta^2 g\left({\tilde \Xi_{C,t}}+l\right)\right]\right\}\rho_{jk}ds, \label{proof03}\nonumber\\ \end{eqnarray} where the last equation is due to the fact that $\xi^{(s,(j,k))}$ is independent of ${\tilde \Xi_{C,t}}$. 
On the other hand, using \Ref{Steinconstant}, we have $$\left|{\rm {I\ \kern -0.54em E}} \Delta^2 g({\tilde \Xi_{C,t}}+l)\right|\le 2\|\Delta g\|d_{TV}(\Xi_{C,t},\Xi_{C,t}+1)\le \frac{2d_{TV}(\Xi_{C,t},\Xi_{C,t}+1)}{a},$$ so it follows from \Ref{proof03} that \begin{eqnarray}&&\left|{\rm {I\ \kern -0.54em E}}{\cal B} g\left(\Xi_{C,t}\right)\right|\nonumber\\ &&\le\frac{d_{TV}\left(\Xi_{C,t},\Xi_{C,t}+1\right)}{a}\sum_{(j,k)\in C}\int_0^t\left[2b{\rm {I\ \kern -0.54em E}}\xi^{(s,(j,k))}+(1-b){\rm {I\ \kern -0.54em E}} \xi^{(s,(j,k))}\left(\xi^{(s,(j,k))}-1\right)\right]\rho_{jk}ds. \nonumber\\ \label{proof05}\end{eqnarray} Using the Palm distributions of $\boldXi_{C,t}$ together with \Ref{proof04}, we get \begin{eqnarray*} \Xi_{C,t}[3]&=&{\rm {I\ \kern -0.54em E}}\int_{\Gamma_{C,t}}(\Xi_{C,t}-1)(\Xi_{C,t}-2)\boldXi_{C,t}(d\alpha)\\ &=&\sum_{(j,k)\in C}\int_0^t{\rm {I\ \kern -0.54em E}}\left[\left(\Xi_{C,t}+\xi^{(s,(j,k))}\right)\left(\Xi_{C,t}+\xi^{(s,(j,k))}-1\right)\right]\rho_{jk}ds\\ &=&\rho_Ct\Xi_{C,t}[2]+2\rho_Ct({\rm Var}(\Xi_{C,t})-\rho_Ct)+\sum_{(j,k)\in C}\int_0^t{\rm {I\ \kern -0.54em E}} \xi^{(s,(j,k))}\left(\xi^{(s,(j,k))}-1\right)\rho_{jk}ds. \end{eqnarray*} This in turn ensures \begin{equation}\sum_{(j,k)\in C}\int_0^t{\rm {I\ \kern -0.54em E}} \xi^{(s,(j,k))}\left(\xi^{(s,(j,k))}-1\right)\rho_{jk}ds=\Xi_{C,t}[3]-\rho_Ct\Xi_{C,t}[2]-2\rho_Ct({\rm Var}(\Xi_{C,t})-\rho_Ct).\label{proof06}\end{equation} Consequently, combining \Ref{proof04}, \Ref{proof06} with \Ref{proof05} gives \Ref{Xia3.2.1}. Finally, by the definitions of $\epsilon_C$ and $\sigma_C$, we have $${\rm {I\ \kern -0.54em E}}\sum_{(j,k)\in C}\int_0^t\xi^{(s,(j,k))}\rho_{jk}ds\le \epsilon_C\rho_Ct$$ and $$\sum_{(j,k)\in C}\int_0^t{\rm {I\ \kern -0.54em E}} \xi^{(s,(j,k))}\left(\xi^{(s,(j,k))}-1\right)\rho_{jk}ds\le \sigma_C\rho_Ct.$$ Therefore, \Ref{Xia3.2.2} follows from \Ref{Xia3.2.1}, \Ref{proof04} and \Ref{proof06}. 
\hbox{\vrule width 5pt height 5pt depth 0pt}
\end{document}
\begin{document} \markboth{David Lorch} {Single-Class Genera of Positive Integral Lattices} \title{Single-Class Genera of Positive Integral Lattices} \author{DAVID LORCH} \address{Lehrstuhl D f\"ur Mathematik, RWTH Aachen University} \email{[email protected]} \author{MARKUS KIRSCHMER} \address{Lehrstuhl D f\"ur Mathematik, RWTH Aachen University} \email{[email protected]} \begin{abstract} We give an enumeration of all positive definite primitive $\mathbb{Z}$-lattices in dimension $\geq 3$ whose genus consists of a single isometry class. This is achieved by using bounds obtained from the \textsc{Smith-Minkowski-Siegel} mass formula to computationally construct the square-free determinant lattices with this property, and then repeatedly calculating pre-images under a mapping first introduced by \textsc{G.\,L.\,Watson}.\\ We hereby complete the classification of single-class genera in dimensions $4$ and $5$ and correct some mistakes in Watson's classifications in other dimensions. A list of all single-class primitive $\mathbb{Z}$-lattices has been compiled and incorporated into the Catalogue of Lattices. \end{abstract} \maketitle \section{Introduction} There has been extensive research on single-class lattices. \textsc{G.\,L.\,Watson} proved in \cite{watsonsingleclass} that single-class $\mathbb{Z}$-lattices cannot exist in dimension $n>10$ and, in a tremendous effort, tried to compile complete lists of single-class lattices in dimensions $3$~--~$10$ using only elementary number theory (\cite{ternary1, watson5, quaternary, ternary2, watson910, watson8, watson7, watson6}). While he succeeded in classifying most of these lattices, he found the cases of dimensions $4$ and $5$ to be exceedingly difficult and classified only a subset of the single-class lattices. In this work, all of the lattices missing from his classification have been found for what we believe is the first time.
It turns out that, aside from some omissions in dimensions $3$ to $6$, \textsc{Watson}'s results are largely correct. Partial improvements to his results have already been published, notably for the three-dimensional case in an article by \textsc{Jagy} et al., \cite{jagy}, whose results agree with our computation. Another such improvement concerns the subset of single-class $\mathbb{Z}$-lattices in dimensions $3$~--~$10$ which correspond to a maximal primitive quadratic form -- this has been enumerated by \textsc{Hanke} in \cite{hanke}. Again, these results agree with ours. In dimension~$2$, single-class $\mathbb{Z}$-lattices have been classified by capitalizing on a connection to class groups of imaginary quadratic number fields, cf.~\textsc{Voight} in~\cite{voight}. This classification is proven complete if the Generalized Riemann Hypothesis holds. \subsection{Statement of results} We give an algorithm for finding, up to equivalence, all primitive positive definite $\mathbb{Z}$-lattices with class-number $1$ in dimension $\geq 3$. Computation on genera of lattices is performed by means of a \emph{genus symbol} developed by \textsc{Conway} and \textsc{Sloane} (\cite[Chapter~15]{SPLAG}). A mass formula, stated by the same authors in \cite{massformula} but dating back to contributions by \textsc{Smith}, \textsc{Minkowski} and \textsc{Siegel}, provides effective bounds on the number of local invariants that need to be taken into account, thus making the enumeration computationally feasible. Our main result, the complete list of single-class primitive $\mathbb{Z}$-genera, and representative lattices of each of these genera, has been incorporated into the Catalogue of Lattices \cite{latdb}. Tables containing only the genus symbols, and only in dimension $\geq 4$, can be found in the appendix. An additional contribution of this work is a number of essential algorithms for computation on genera of $\mathbb{Z}$-lattices.
All of these algorithms have been implemented in {\textsc{Magma}} \cite{magma} and are available on request. \section{Preliminaries} \subsection*{Genera of lattices}\label{subsec:preliminaries} We denote by $\mathbb{P}$ the set of rational primes. In the following, let $R=\mathbb{Z}$ or $R=\mathbb{Z}_p$ for some $p\in \mathbb{P}$. By an $R$-{lattice} $L$ we mean a tuple $(L,\beta)$, where $L=\erz{b_1,\dots,b_n}$ is a free $R$-module of finite rank $n$ and $\beta:L\times L\rightarrow \text{Quot}(R)$ is a symmetric bilinear form. Additionally, if $R=\mathbb{Z}$, we assume that $\beta$ is positive definite. $L$ is {integral} if $\beta(L,L)\subseteq R$. An integral $R$-lattice is called {even} if $\beta(L,L)\subseteq 2R$, and {odd} otherwise. $L$ is quadratic-form-maximal if either $L$ is even and the associated quadratic form is maximal integral, or if $L$ is odd and the quadratic form associated to $2L$ is maximal integral. By $\text{Aut}(L)$ we mean the group of bijective isometries of $L$. The {Gram matrix} of $L$ is $\text{Gram}(L):=(\beta(b_i,b_j))_{1\leq i,j\leq n}$. By the determinant $\det(L)$ we mean $\det(\text{Gram}(L))$, and the discriminant of $L$ is $\text{disc}(L)=(-1)^s\det(L)$ where $s:=\left\lceil\frac{\text{rank}(L)}{2}\right\rceil$. Recall the definition of the genus of a lattice: two $\mathbb{Z}$-lattices $L$, $L'$ are said to be in the same genus if their completions $\mathbb{Z}_p\otimes_\mathbb{Z} L$ and $\mathbb{Z}_p\otimes_\mathbb{Z} L'$ are isometric for every $p\in\mathbb{P}\cup\{-1\}$, with the convention that $\mathbb{Z}_{-1}:=\mathbb{R}$. It is well known that genera of lattices are finite unions of isometry classes. We denote the genus of $L$ by $\text{Genus}(L)$ and call the number of isometry classes contained in $\text{Genus}(L)$ the class-number of~$L$. If $L$ is an $R$-lattice with Gram matrix $G=(g_{ij})_{1\leq i,j\leq n}\in R^{n\times n}$, we call $L$ {primitive} if $\gcd(\{g_{ij}:\ 1\leq i,j\leq n\})=1$. 
By $\text{rescale}(L)$ we mean the unique scaled lattice ${^\alpha}L:=(L,\alpha\beta)$ with the property that $\text{Gram}({^\alpha}L)$ is integral and primitive. By $L^\#=\{ax:\ x\in L, a\in \text{Quot}(R), \beta(ax, L)\subseteq R\}$ we mean the dual of~$L$. Whenever, for some integer $a\in\mathbb{N}$, $aL^\# = L$, we call $L$ $a$-modular. If a $\mathbb{Z}$-lattice $L$ is integral, or equivalently if $L\subseteq L^\#$, we will call $L$ {$p$-adically square-free} whenever $\exp(L^\#/L) \not\equiv 0\mod p^2$. Here, $\exp(L^\#/L)$ denotes the exponent of the finite abelian group $L^\#/L$. If $L$ is $p$-adically square-free for all $p\in\mathbb{P}$, we will call $L$ square-free. \subsection*{Jordan splitting}We remind the reader of the {Jordan splitting} of $\mathbb{Z}_p$-lattices: let $L$ be a $\mathbb{Z}$-lattice and $p\in\mathbb{P}$. Then $\mathbb{Z}_p\otimes_\mathbb{Z} L$ admits an orthogonal splitting $\mathbb{Z}_p\otimes_\mathbb{Z} L=\bot_{i\in\mathbb{Z}}L_i$ with $L_i$ $p^i$-modular (but possibly zero-dimensional). When $L$ is integral, $i$ ranges over $\mathbb{N}_0$ only. For $2\not=p\in\mathbb{P}$, the $p$-adic Jordan decompositions of $L$ are unique up to isometry of the components. Much of the complication in computations with genera of lattices arises from the fact that a $\mathbb{Z}_2$-lattice can have many essentially different Jordan decompositions. The dimensions of the $L_i$ are, however, unique even in the case $p=2$. If $a=\min\{i\in\mathbb{Z}:\ \text{rank}(L_i)>0\}$ and $b=\max\{i\in\mathbb{Z}:\ \text{rank}(L_i)>0\}$, then we define the Jordan decomposition's {length} to be $\text{len}_p(L):=b-a+1$. By $\text{Jordan}_p(L)=L_a\bot \cdots\bot L_b$ we mean that the right side is {a} $p$-adic Jordan decomposition of $\mathbb{Z}_p\otimes_\mathbb{Z} L$, with $L_i$ a $p^i$-modular lattice for each $a\leq i\leq b$. If $L$ is $p$-adically square-free, then clearly all $L_i$ have dimension $0$ for $i\geq 2$. 
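For a lattice given by a diagonal Gram matrix and an odd prime $p$, the $p$-adic Jordan data can be read off by grouping the diagonal entries according to their $p$-adic valuations; this determines $\text{len}_p(L)$ and whether $L$ is $p$-adically square-free. A minimal illustrative sketch (diagonal Gram matrices only; the helper names are ours):

```python
def p_val(a, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def jordan_ranks(diag, p):
    """Ranks of the p^i-modular Jordan blocks of a diagonal Gram matrix (odd p)."""
    ranks = {}
    for a in diag:
        v = p_val(a, p)
        ranks[v] = ranks.get(v, 0) + 1
    return ranks

def len_p(diag, p):
    vals = [p_val(a, p) for a in diag]
    return max(vals) - min(vals) + 1

# diag(1, 3, 9) at p = 3: rank-1 blocks at 3^0, 3^1, 3^2 -> length 3, not square-free.
assert jordan_ranks([1, 3, 9], 3) == {0: 1, 1: 1, 2: 1}
assert len_p([1, 3, 9], 3) == 3
# diag(1, 1, 3) is 3-adically square-free: only 3^0 and 3^1 blocks occur.
assert jordan_ranks([1, 1, 3], 3) == {0: 2, 1: 1}
assert len_p([1, 1, 3], 3) == 2
```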
If $L$ is square-free and, in addition, for all $p\in \mathbb{P}$ we have $\text{Jordan}_p(L)=L_0\bot L_1$ with $\text{rank}(L_0)\geq \text{rank}(L_1)$, we call $L$ {strongly primitive}. \subsection*{The genus symbol} There have been many descriptions of complete sets of real and $p$-adic invariants of genera of $\mathbb{Z}$-lattices $L$. We will adopt the notion of a {genus symbol} $\text{sym}(L)$ that has been put forth by \textsc{Conway} and \textsc{Sloane} in \cite[Chapter~15]{SPLAG}, because it appears to us to be the most concise in terms of understanding and computing the $2$-adic invariants. We assume basic familiarity with this symbol and only note that $\text{sym}(L)$ is a formal product of lists of tuples for each prime $p$ dividing $2\det(L)$, with each tuple containing $i$, $\text{rank}(L_i)$, $\det(L_i)\bmod{(\mathbb{Z}_p^*)^2}$ corresponding to a $p^i$-modular orthogonal summand $L_i$ of $\text{Jordan}_p(L)$, and (for $p=2$) an invariant taking either a value based on $\text{trace}(L_i)\bmod{8}$ (called the {oddity} of $L_i$), or the value ``II'' if $L_i$ is an even $\mathbb{Z}_2$-lattice. These tuples are abbreviated as $(p^i)^{\varepsilon \text{rank}(L_i)}_\text{oddity}$, with the subscript present only for $p=2$. For $p\not=2$, $\varepsilon\in\{-1,1\}\cong (\mathbb{Z}_p^*)/(\mathbb{Z}_p^*)^2$ denotes the square-class of $\det(L_i)$. For $p=2$, we write $\varepsilon=-1$ for $\det(L_i)\equiv\pm 3\bmod 8$, and $\varepsilon=1$ otherwise. The value $\det(L_i)\in (\mathbb{Z}_2^*)/(\mathbb{Z}_2^*)^2\cong C_2\times C_2$ can then be obtained from $\varepsilon$ and the oddity. Whenever a set of local invariants satisfies the existence conditions given in \cite[Theorem 11]{SPLAG}, a $\mathbb{Z}$-lattice with these local invariants exists. 
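For odd $p$ and a diagonal Gram matrix, the sign $\varepsilon$ attached to a $p^i$-modular block is the Legendre symbol of the product of the unit parts of its diagonal entries; it can be evaluated with Euler's criterion. A small sketch (the helper names are ours, not from \cite{SPLAG}):

```python
def unit_part(a, p):
    """Remove all factors of p from a."""
    while a % p == 0:
        a //= p
    return a

def epsilon(diag_entries, p):
    """Sign of det(L_i) modulo squares: Legendre symbol of the product of unit parts."""
    u = 1
    for a in diag_entries:
        u = u * unit_part(a, p) % p
    ls = pow(u, (p - 1) // 2, p)  # Euler's criterion
    return 1 if ls == 1 else -1

# diag(1, 2) at p = 5: unit product 2, and 2 is a non-square mod 5.
assert epsilon([1, 2], 5) == -1
# diag(1, 4) at p = 5: unit product 4 = 2^2 is a square mod 5.
assert epsilon([1, 4], 5) == 1
```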
\subsection*{The mass formula} \label{subsec:massformula} We assume basic familiarity with the \textsc{Minkowski-Siegel} mass formula, which relates the \emph{mass} of a $\mathbb{Z}$-lattice $L$: $$\text{mass}(L)=\sum_{[L_i]\in \text{Genus}(L)}\frac{1}{\#\text{Aut}(L_i)}$$ to the local invariants comprising $\text{sym}(L)$. Recall that all $\mathbb{Z}$-lattices $L$ in this article are definite, hence $\text{Aut}(L)$ is always finite. \begin{definition}\label{defi:masscondition} We say that a lattice $L$ \emph{fulfils the mass condition} if $\text{mass}(L)\leq \frac{1}{2}$ and $\frac{1}{\text{mass}(L)}\in 2\mathbb{Z}$. \end{definition} Clearly, if $L$ is a single-class lattice, then $L$ must fulfil the mass condition. We will use a modern formulation of the mass formula, put forth by \textsc{Conway} and \textsc{Sloane} in \cite{massformula}. The following paragraph gives a simplified overview of those parts of the mass formula which are relevant to our computations: \subsection*{Mass calculation and approximation} When all local invariants are trivial, i.e. $\det(L)\in\mathbb{Z}^*$, the mass of a lattice in dimension $2\leq n \in\mathbb{N}$, and of discriminant $D=(-1)^s\det(L)$ where $s:=\left\lceil\frac{n}{2}\right\rceil$, is the \emph{standard mass}: $$\text{std}_n(D) = 2\pi^{-n(n+1)/4}\prod_{j=1}^n\Gamma\left(\frac{j}{2}\right)\left(\prod_{i=1}^{s-1}\zeta(2i)\right)\zeta_D(s).$$ Here $\Gamma$ and $\zeta$ denote the usual Gamma and Zeta functions, and $$\zeta_D(s)=\begin{cases}1, & n\text{\ odd}\\ \prod_{2\not=p\in\mathbb{P}}\frac{1}{1-\left[\frac{D}{p}\right]p^{-s}}, &\text{otherwise,}\end{cases}$$ where $\left[\frac{D}{p}\right]$ is the \textsc{Legendre} symbol. Finally, $\text{mass}(L)$ is obtained from the standard mass by multiplying with correction factors at all primes $p$ dividing $2\det(L)$. Unlike the standard mass, these depend on the $p$-adic Jordan decomposition of~$L$.
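For odd $n$ the factor $\zeta_D$ is trivial, and $\text{std}_n(D)$ can be evaluated directly; for instance, $\text{std}_3(D)=2\pi^{-3}\,\Gamma(\tfrac12)\Gamma(1)\Gamma(\tfrac32)\,\zeta(2)=\tfrac16$. A numerical sketch (zeta evaluated by naive truncation; illustrative only, not the implementation used in our computations):

```python
import math

def zeta(s, terms=200000):
    """Truncated Riemann zeta series; the tail is below terms**(1-s)/(s-1) for s >= 2."""
    return sum(1.0 / n ** s for n in range(1, terms + 1))

def std_mass_odd(n):
    """std_n(D) for odd n, where zeta_D(s) = 1 and s = ceil(n/2)."""
    s = (n + 1) // 2
    gammas = math.prod(math.gamma(j / 2) for j in range(1, n + 1))
    zetas = math.prod(zeta(2 * i) for i in range(1, s))
    return 2 * math.pi ** (-n * (n + 1) / 4) * gammas * zetas

# Closed forms: std_3(D) = 1/6 and std_5(D) = 1/720.
assert abs(std_mass_odd(3) - 1 / 6) < 1e-4
assert abs(std_mass_odd(5) - 1 / 720) < 1e-6
```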
Let $\text{Jordan}_p(L)=L_0\bot L_1\bot \dots \bot L_k$ and $s_i := \left\lceil\frac{\text{rank}(L_i)}{2}\right\rceil$, $i=0,\dots,k$. Then $$\text{mass}(L) = \text{std}_n(D)\cdot\prod_{p| 2\det(L)}\left(m_p(L)\cdot\underbrace{2\prod_{j=2}^s(1-p^{2-2j})}_{=:\ \text{std}_p(L)^{-1}} \right) \text{\ where\ }$$ $$m_p(L)=\left(\prod_{i\in\mathbb{Z}:\ \text{rank}(L_i)\not=0}M_p(L_i)\right)\left(\prod_{i<j\in\mathbb{Z}}p^{\frac{1}{2}(j-i)\text{rank}(L_i)\text{rank}(L_j)}\right)\hspace{1em}\text{\ (for\ } p\not=2), \text{\ with}$$ $$M_p(L_i)=\frac{1}{2}(1+\varepsilon p^{-{s_i}})^{-1}\prod_{j=2}^{s_i}(1-p^{2-2j})^{-1}.$$ Here, $\varepsilon=0$ if $\text{rank}(L_i)$ is odd, and otherwise $\varepsilon\in\{-1,1\}$ depends on the species of the orthogonal group $\mathcal{O}_{\text{rank}(L_i)}(p)$ associated with $L_i$, which can be determined from the $p$-adic invariants of $\text{Genus}(L)$. \section{Single-class lattices} A fundamental method used in \textsc{Watson}'s classifications is a strategy of descent that transforms a primitive integral quadratic form $f$ into another form $f'$ whose corresponding $\mathbb{Z}$-lattice has a shorter Jordan decomposition at a given $p\in\mathbb{P}$. For the original definition in terms of quadratic forms, cf. \cite[Section~2]{watsontransformations}. We formulate a similar strategy for $\mathbb{Z}$-lattices: \begin{definition} For $p\in\mathbb{P}$, the \emph{Watson mapping} $\text{Wat}_p$ is defined by \[\text{Wat}_p(L) := \text{rescale}(L\cap pL^\#).\] \end{definition} \begin{remark}\label{thm:watsonProcess}The following properties of the $\text{Wat}_p$ mappings justify the term ``strategy of descent'': \begin{enumerate} \item The $\text{Wat}_p$ mappings are compatible with isometries and extend to well-defined functions on genera of $\mathbb{Z}$-lattices with the property $\text{Genus}(\text{Wat}_p(L)) = \text{Wat}_p(\text{Genus}(L))$. In particular, the $\text{Wat}_p$ do not increase the class-number.
\item $\text{len}_p(\text{Wat}_p(L)) \leq \max\{\text{len}_p(L)-1, 2\}$. Hence, repeatedly applying $\text{Wat}_p$ decreases the length of a lattice's $p$-adic Jordan decomposition until a $p$-adically square-free lattice $L'$ is reached. \end{enumerate} \begin{proof}For details on (2), cf. \cite[(7.4) and (8.4)]{watsontransformations}.\end{proof} \end{remark} We will make use of Watson's work one more time by citing the following result: \begin{theorem}\label{thm:watsonBound}If $L$ is a single-class $\mathbb{Z}$-lattice, then $\text{rank}(L)\leq 10$. \begin{proof}Cf.~\cite{watsonclassnumber}. \end{proof} \end{theorem} \section{Strategy} Let $L$ be a single-class square-free primitive $\mathbb{Z}$-lattice with $\text{rank}(L)=n \geq 3$. We will see in Section~\ref{sec:sqf} that the Smith-Minkowski-Siegel mass formula yields an upper bound $\text{maxprime}(n)$ to the set of prime divisors of $\det(L)$. The bound $\text{maxprime}(n)$ depends on $n$ alone (and not on $L$). For the relevant dimensions $3\leq n\leq 10$, its values are given in Table~\ref{table:maximalPrime}. Since $\det(L)$ is thus a-priori bounded and $L$ is square-free, there is only a finite number of possibilities for the genus symbol $\text{sym}(L)$. An algorithm introduced in Section~\ref{constructionoflattices} constructs a $\mathbb{Z}$-lattice from a given square-free genus symbol, allowing an enumeration of all single-class square-free $\mathbb{Z}$-lattices in any dimension $n\geq 3$. A second algorithm, based on the $\text{Wat}_p$ mappings and described in Section~\ref{sec:all}, then yields the complete list of (not necessarily square-free) single-class primitive $\mathbb{Z}$-lattices. \section{Square-free lattices}\label{sec:sqf} \subsection{Generation of candidate genera} \begin{lemma}\label{lemma:zeta} Let $1<s\in\mathbb{N}$, $D\in\mathbb{Z}$. Then $\zeta_D(s)\geq \frac{\zeta(2s)}{\zeta(s)}$, with $\zeta_D$ defined as in Section~\ref{subsec:massformula}. 
\begin{proof}Let $\lambda: \mathbb{N}\rightarrow\{-1,1\}$, $n\mapsto (-1)^{\Omega(n)}$ denote the \textsc{Liouville} function, where $\Omega(n)$ is the number of prime divisors of $n$ counted with multiplicity. Then \[\zeta_D(s) = \prod_{2\not=p\in \mathbb{P}}{\left(1-\left[\frac{D}{p}\right]\frac{1}{p^s}\right)}^{-1} \geq \prod_{p\in\mathbb{P}}\left(1+\frac{1}{p^s}\right)^{-1} = \sum_{n=1}^\infty\frac{\lambda(n)}{n^s}\] which converges for any $s>1$. Multiplying by $\zeta(s)$, and writing $\ast$ for {\textsc{Dirichlet}} convolution: \[\zeta(s)\cdot\left(\sum_{n=1}^\infty\frac{\lambda(n)}{n^s}\right) = \sum_{n=1}^\infty\frac{(1\ast \lambda)(n)}{n^s}=\sum_{n=1}^\infty\frac{\sum_{d|n}\lambda(\frac{n}{d})}{n^s} = \sum_{n=1}^\infty\frac{1}{{(n^2)}^s}=\zeta(2s).\] \end{proof} \end{lemma} \begin{corollary} \label{corollary:lowerbound} Whenever $2 < n=2s$ is an even number, there is a lower bound $s(n)$ to the standard mass of a lattice of dimension $n$, independent of its discriminant~$D$: \[\text{std}_n(D) \geq s(n):= 2\pi^{\frac{-n(n+1)}{4}}\frac{1}{\zeta(s)}\prod_{j=1}^n\Gamma\left(\frac{j}{2}\right)\prod_{j=1}^{s}\zeta(2j).\] \end{corollary} We note that this approach fails for $n=2$, since no similar approximation of $\zeta_D(1)$ is possible. For odd $n$, by definition, $\text{std}_n(D)$ does not depend on $D$ at all. \begin{lemma}\label{lemma:terminationSCSF} Let $K$ and $L$ be square-free primitive lattices in dimension $n\geq 3$ whose local invariants differ only at a single prime $p\in \mathbb{P}$, where $K$ has trivial invariants and $L$ does not (i.e. $\text{Jordan}_p(K)=K_0$, and $\text{Jordan}_p(L)=L_0\bot L_1$ with $\text{rank}(L_0)>0$ and $\text{rank}(L_1)>0$). Let $s:=\left\lceil\frac{n}{2}\right\rceil$. Then ${\text{mass}(L)}\geq a_n(p)\cdot \text{mass}(K)$, where $$a_n(p)= \frac{\varepsilon}{2} \left(\frac{1}{1+p^{-1}}\right)^2(\sqrt{p})^{n-1}(1-p^{-2})^{s-1}.$$ The factor $\varepsilon$ is $\frac{\zeta(2s)}{{\zeta(s)}^2}$ if $n$ is even, and $1$ otherwise.
\begin{proof} Let $\text{Jordan}_p(L)=L_0\bot L_1$, $s:=\left\lceil\frac{\text{rank}(L)}{2}\right\rceil$ and $s_k:=\left\lceil\frac{\text{rank}(L_k)}{2}\right\rceil$, $k=0,1$. Then $s \geq 2, s_0, s_1 \geq 1$ and $s_0+s_1\leq s+1$. We have $$\text{std}_p(L)\leq\frac{1}{2}\left(1-p^{-2}\right)^{1-s}$$ and for $k=0, 1$: $$M_p(L_k)\geq \frac{1}{2}\left(\frac{1}{1+p^{-1}}\right)\left(1-p^{2-2s_k}\right)^{1-s_k}$$ Since $(1-p^{2-2s_k})^{1-s_k}>1$ for any $s_k>1$, $$\frac{m_p(L)}{\text{std}_p(L)} = M_p(L_0)M_p(L_1)(\sqrt{p})^{\text{rank}(L_0)\text{rank}(L_1)}\frac{1}{\text{std}_p(L)}$$ $$\geq \frac{1}{2}\left(\frac{1}{1+p^{-1}}\right)^2(\sqrt{p})^{n-1}(1-p^{-2})^{s-1}.$$ Finally, we have $m_q(L)=m_q(K)$ and $\text{std}_q(L)=\text{std}_q(K)$ for all $p\not=q\in\mathbb{P}$. Applying Lemma~\ref{lemma:zeta} and the trivial inequality $\zeta_D(s)\leq \zeta(s)$ to the standard masses, we obtain, for $n$ even: $$\frac{\text{mass}(L)}{\text{mass}(K)}=\frac{\text{std}_n(\text{disc}(L))}{\text{std}_n(\text{disc}(K))}\cdot\frac{m_p(L)}{\text{std}_p(L)} \geq\frac{\zeta(2s)m_p(L)}{\zeta(s)^2\text{std}_p(L)}.$$ For odd $n$, the factor $\zeta_D$ is not present in the standard mass, so in that case $$\frac{\text{mass}(L)}{\text{mass}(K)}\geq \frac{m_p(L)}{\text{std}_p(L)}.$$ \end{proof} \end{lemma} \begin{proposition}Let $3\leq n\in\mathbb{N}$. Then there is a bound $\text{maxprime}(n)\in\mathbb{N} $ such that for any primitive $\mathbb{Z}$-lattice $L$ of rank~$n$ which is square-free and fulfils the mass condition (cf. \ref{defi:masscondition}), $\max\{p\in\mathbb{P}:\ p\mid \det(L)\}\leq \text{maxprime}(n)$. \begin{proof} Let $s(n)$ denote the lower bound to $\text{std}_n(D)$ obtained from Corollary~\ref{corollary:lowerbound}. The quantities $m_2(L)$ and $\text{std}_2(L)$ from the mass formula, as defined in \cite[Chapter~15]{SPLAG}, depend only on the $2$-adic genus invariants of~$L$, for which there are finitely many possibilities when $L$ is square-free. 
Hence $t(n):=\min\{m_2(L)\,\text{std}_2(L)^{-1}:\ L\text{ a primitive, square-free }\mathbb{Z}\text{-lattice of rank }n\}$ is well-defined. Further, the bound $a_n(p)$ from Lemma~\ref{lemma:terminationSCSF} is increasing in both $n$ and $p$, and $\lim_{p\rightarrow \infty}a_n(p)=\infty$. Let $B(n) := \{2\not=p\in\mathbb{P}:\ a_n(p)<1\}$. Now let $L$ be any $\mathbb{Z}$-lattice that is single-class, primitive and square-free. Write $s:=\left\lceil\frac{n}{2}\right\rceil$ and $D:=(-1)^s\det(L)$. Then by the mass formula, and using Lemma~\ref{lemma:terminationSCSF} to compare $\text{mass}(L)$ to the mass of the standard lattice, \begin{eqnarray*}\text{mass}(L)=\text{std}_n(D)\cdot \prod_{p| 2\det(L)}\left(m_p(L)\text{std}_p(L)^{-1}\right) \geq s(n)t(n)\prod_{2\not=p\mid\det(L)}{a_n(p)}.\end{eqnarray*} Hence, if $p\mid\det(L)$ for some $2< p\in \mathbb{P} - B(n)$, then \[\text{mass}(L)\geq a_n(p)\cdot s(n)t(n)\prod_{q\in B(n)}{a_n(q)}.\] So, $\text{maxprime}(n):= \max(\{p\in\mathbb{P}: a_n(p)\cdot s(n)t(n)\prod_{q\in B(n)}{a_n(q)} \leq \frac{1}{2}\})$ is the desired bound. \end{proof} \end{proposition} For the relevant dimensions, i.e. $3\leq n\leq 10$ (cf. Theorem~\ref{thm:watsonBound}), the values of $t(n)$, $B(n)$ and $\text{maxprime}(n)$ are given in Table~\ref{table:maximalPrime}. Since the genus of a $\mathbb{Z}$-lattice $L$ is determined by local invariants associated to the primes $p|2\det(L)$, for each of which there are finitely many possibilities if $L$ is square-free, we now obtain: \begin{corollary} Let $3\leq n\in\mathbb{N}$. Then the number of genera of primitive, single-class square-free $\mathbb{Z}$-lattices of rank~$n$ is finite. \end{corollary} In the remainder of this section, we outline an algorithm to explicitly enumerate the genera of primitive, single-class square-free $\mathbb{Z}$-lattices.
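The row $B(n)$ of Table~\ref{table:maximalPrime} can be reproduced numerically. The sketch below evaluates, for odd $n$ (so that the zeta factor equals $1$), the lower-bound factor $\frac12\big(\frac{1}{1+p^{-1}}\big)^2(\sqrt p)^{n-1}(1-p^{-2})^{s-1}$ arising in the proof of Lemma~\ref{lemma:terminationSCSF} from the estimate on $m_p(L)/\text{std}_p(L)$, and confirms that it is increasing in $p$ (illustrative only; the function name is ours):

```python
import math

def a_bound(n, p):
    """Lower-bound factor (1/2)*(1/(1+1/p))^2 * sqrt(p)^(n-1) * (1-1/p^2)^(s-1)
    from the proof of the lemma, for odd n (so the zeta factor is 1)."""
    assert n % 2 == 1
    s = (n + 1) // 2
    return 0.5 * (1 / (1 + 1 / p)) ** 2 * math.sqrt(p) ** (n - 1) * (1 - p ** -2) ** (s - 1)

odd_primes = [3, 5, 7, 11, 13]
# Dimension 3: only p = 3 gives a factor below 1 (B(3) = {3} in Table 1).
assert a_bound(3, 3) < 1 and all(a_bound(3, p) >= 1 for p in odd_primes[1:])
# Dimension 5: no odd prime gives a factor below 1 (B(5) is empty).
assert all(a_bound(5, p) >= 1 for p in odd_primes)
# The factor is increasing in p, so large primes are excluded quickly.
assert all(a_bound(3, odd_primes[i]) < a_bound(3, odd_primes[i + 1]) for i in range(4))
```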
\begin{table}[h] \caption{\label{table:maximalPrime}Upper bound $\text{maxprime}(n)$ for primes dividing $\det(L)$, where $L$ is a square-free $\mathbb{Z}$-lattice that fulfils the mass condition} \begin{tabular}{lllllllll} \toprule $n$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$\\ \midrule $t(n)$ & $8^{-1}$ & ${24}^{-1}$ & $8^{-1}$ & ${72}^{-1}$ & ${16}^{-1}$ & ${272}^{-1}$ & ${32}^{-1}$ & ${1056}^{-1}$ \\ $B(n)$ & $\{3\}$ & $\{3\}$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & $\emptyset$\\ $\text{maxprime}(n)$ & $61$ & $467$ & $73$ & $283$ & $139$ & $373$ & $193$ & $421$\\ \bottomrule \end{tabular} \end{table} \SetKwFunction{MinimalMass}{MinimalMass} \begin{proposition}\label{prop:maxprime} Let $u$ denote a $2$-adic genus symbol, $2\not=q\in\mathbb{P}$ and let $v$ denote a list of elements $[p_i,v_i]$, $1\leq i\leq k$, where $p_i\in\mathbb{P}$, $p_i\not\in\{2, q\}$ and $v_i$ is a $p_i$-adic genus symbol. Denote by $\MinimalMass(u, v, q)$ a lower bound on the mass of any genus of a $\mathbb{Z}$-lattice which has the local invariants specified by $u$ and the $v_i$, and which has an unspecified but non-trivial invariant at the prime $q$.
Then the output of the following algorithm, called for an integer $n\geq 3$, contains all primitive single-class square-free genus symbols in dimension~$n$: \end{proposition} \begin{algorithm}[H]\label{alg:candidates} \caption{generating candidate genera for single-class square-free lattices} \LinesNumbered \DontPrintSemicolon \SetKwInOut{Input}{input} \Input{An integer $n\geq 3$} \SetKwInOut{Output}{output} \Output{A list of square-free genus symbols} \SetKwFunction{NextPrime}{NextPrime} \SetKwFunction{Mass}{Mass} \SetKwFunction{IsValidGenusSymbol}{IsValidGenusSymbol} \SetKw{Append}{append} \SetKw{Continue}{continue} \SetKwData{OddSym}{oddsym} \SetKwData{TwoSym}{twosym} \Begin{ $\texttt U\longleftarrow$ list of possible $2$-adic invariants for square-free lattices\; $\texttt V\longleftarrow$ list of possible $p$-adic invariants for square-free lattices, for $2\not=p\in\mathbb{P}$\; $\texttt O\longleftarrow \emptyset$, $\texttt T\longleftarrow \emptyset$\; \lForEach{$\TwoSym \in \texttt U$}{add $[\TwoSym, \emptyset, 3]$ to $\texttt T$}\; \While{$\texttt T\not=\emptyset$}{ remove an element $[\texttt{twosym}, \texttt{oddsym}, p]$ from $\texttt T$\; $q\longleftarrow \NextPrime(p)$\; \ForEach{$v\in \texttt V$}{ $g\longleftarrow [[2,\TwoSym], \Append(\OddSym, [p, v])]$\; \lIf{$\IsValidGenusSymbol(g)$ and $\Mass(g)\leq\frac{1}{2}$}{add $g$ to \texttt O}\; \If{$\MinimalMass(\TwoSym, \Append(\OddSym, [p, v]), q)\leq\frac{1}{2}$}{add $[\TwoSym, \Append(\OddSym, [p, v]), q]$ to \texttt T} } \If{$\MinimalMass(\TwoSym, \OddSym, q)\leq\frac{1}{2}$}{add $[\TwoSym, \OddSym, q]$ to \texttt T} } \Return $\texttt O$\; } \end{algorithm} \begin{proof} The two lists generated in steps $2$ and $3$ are finite because we restrict to square-free lattices. The lower bound $\MinimalMass(u,v,q)$ can be evaluated similarly to the value of $\text{maxprime}(n)$ in Proposition~\ref{prop:maxprime}, and is (for fixed $u$ and $v$) increasing in $q$. 
This ensures that, in steps $12$ and $16$, all primitive square-free genera fulfilling the mass condition are generated by the algorithm. \end{proof} \subsection{Construction of square-free lattices} \label{constructionoflattices} In this section, we will present an algorithm which generates representative lattices for the candidate genera produced by Algorithm~\ref{alg:candidates}. Let $(L, \beta)$ be a definite $\mathbb{Z}$-lattice. We view $L$ as embedded into the rational quadratic space $V = L \otimes_\mathbb{Z} \mathbb{Q}$ and denote by $Q_L \colon V \to \mathbb{Q}, x \mapsto \beta(x,x)$ the corresponding rational quadratic form. The form $Q_L$ is diagonalizable over the rationals, say $Q_L(x) = \sum_{i=1}^n a_i x_i^2$; we set $\det(Q_L) := \prod_{i=1}^n a_i\in \mathbb{Q}^*/(\mathbb{Q}^*)^2$. For each $p\in\mathbb{P}$, we define the local Hasse invariant $c_p(Q_L) = \prod_{i < j} \left(\frac{a_i, a_j}{p}\right)$ where $\left(\frac{a,b}{p}\right) \in \{\pm 1\}$ denotes the usual Hilbert symbol of $a,b$ at $p$. It is well known that the isometry type of the rational form $Q_L$ is uniquely determined by $n$, $\det(Q_L)=\det(L)\in \mathbb{Q}^*/(\mathbb{Q}^*)^2$ and the set of all primes $p$ for which $c_p(Q_L) = -1$ (cf. \cite[Remark 66:5]{OMeara}). Further, these invariants can easily be determined from the genus symbol of $L$. Thus, to construct a square-free lattice $L$ with given genus symbol, one can proceed as follows.
\begin{algorithm}[H]\label{alg:allSingleClass} \caption{finding a representative lattice of a square-free genus} \LinesNumbered \DontPrintSemicolon \SetKwInOut{Input}{input} \Input{A genus symbol $g$ of a square-free $\mathbb{Z}$-lattice.} \SetKwInOut{Output}{output} \Output{A $\mathbb{Z}$-lattice $L$ with genus symbol $g$.} \Begin{ $P\longleftarrow$ the set of primes at which the enveloping space $L\otimes_\mathbb{Z} \mathbb{Q}$ has Hasse invariant $-1$\; $(V, Q)\longleftarrow$ a rational quadratic space of dimension~$\text{rank}(L)$, with $\det(Q)=\det(L)\in\mathbb{Q}^*/(\mathbb{Q}^*)^2$, and with Hasse invariant $-1$ only at the primes in~$P$\; $L \longleftarrow $ a lattice $L_0$ in $V$ with $Q(L_0) \subseteq \mathbb{Z}$, and maximal with that property\; \ForEach{$p\in\mathbb{P}$ with $p|\ 2 \cdot \det(L)$} {\lIf{$p=2$}{$e_p\longleftarrow 4$} \lElse{$e_p\longleftarrow p$}\; $L\longleftarrow$ a lattice $L'$ with $e_p L\subseteq L'\subseteq L$ that has $p$-adic genus symbol $g_p$\; } \Return $L$\; } \end{algorithm} \begin{proof} The genus of $L_0$ is uniquely determined by $Q$ and thus by $L$, see for example \cite[Theorem 91:2]{OMeara}. In particular, $L_0$ contains a sublattice $L$ with genus symbol $g$. Since $L$ is assumed to be integral and square-free, the index $[L_0 : L]$ is at most divisible by $\prod_{p \mid 2\det(L)} e_p$. \end{proof} \begin{remark} The values of $P$, $\det(L)$ and $\text{rank}(L)$ in steps $2$ and $3$ can be read from the genus symbol~$g$. Further, step~$3$ of the above algorithm can be performed as follows. Let $P'$ be a set of primes containing the prime divisors of $2\cdot \det(L)$. Then we try diagonal forms $\left \langle a_1,\dots, a_{n-1} , \det(L) \cdot \prod_{i=1}^{n-1} a_i \right\rangle$ where the $a_i$ are products of distinct elements in~$P'$. If the set $P'$ is large enough, this will quickly produce a quadratic form with the correct invariants. 
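The diagonal-form search described in the remark above can be sketched in code. This is a hedged illustration of step~3 only (the function names are mine, not from the paper): it enumerates the candidate tuples $\left\langle a_1,\dots, a_{n-1}, \det(L)\cdot\prod a_i\right\rangle$ with each $a_i$ a product of distinct primes from $P'$; testing whether a candidate realizes the required Hasse invariants is left to an external check via Hilbert symbols.

```python
from itertools import combinations, product as cartesian


def squarefree_products(primes):
    """All products of distinct elements of `primes`, including the empty product 1."""
    out = []
    for r in range(len(primes) + 1):
        for comb in combinations(primes, r):
            prod = 1
            for q in comb:
                prod *= q
            out.append(prod)
    return sorted(out)


def candidate_diagonal_forms(n, det, primes):
    """Yield diagonal tuples (a_1, ..., a_{n-1}, det * a_1 * ... * a_{n-1}).

    The last entry makes the determinant of the form equal to `det`
    modulo squares; checking the Hasse invariants of each candidate
    is a separate step (e.g. via Hilbert symbols).
    """
    units = squarefree_products(primes)
    for choice in cartesian(units, repeat=n - 1):
        last = det
        for a in choice:
            last *= a
        yield choice + (last,)
```

With a large enough prime set $P'$, iterating over these candidates and keeping the first one with the correct invariants mirrors the strategy described in the remark.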
For step~$7$, randomized generation of sublattices between $L$ and $e_pL$ turned out to produce the desired lattice $L'$ quickly enough. \end{remark} \section{Completing the classification}\label{sec:all} The algorithms provided in Section~\ref{sec:sqf} allowed an enumeration of all primitive single-class square-free $\mathbb{Z}$-lattices in dimensions $3$~--~$10$. In this section, we complete that classification to include all single-class primitive $\mathbb{Z}$-lattices in these dimensions, whether square-free or not. \begin{lemma}\label{lemma:massbound2} Let $K$ be a definite primitive lattice in dimension $n\geq 3$, $2\not=p\in\mathbb{P}$ with $p\nmid \det(K)$ and $L\in\text{Wat}_p^{-1}(K)$. \begin{enumerate} \item Either $\text{len}_p(L)=3$ (more precisely, we have $\text{Jordan}_p(L)=L_0\bot L_2$), or $\text{rescale}(\text{Genus}(K)) = \text{Genus}(L)$. \item If $\text{len}_p(L)=3$, then $\text{mass}(L)\geq b_n(p)\cdot\text{mass}(K)$, where $n=\text{rank}(L)$, $s=\left\lceil\frac{n}{2}\right\rceil$ and $$b_n(p)=\varepsilon\left(\frac{1}{1+p^{-1}}\right)^2p^{n-1}(1-p^{-2})^{s-1}$$ with $\varepsilon=\frac{\zeta(2s)}{2\zeta(s)^2}$ if $\text{rank}(L)$ is even, and $\varepsilon=1$ otherwise. \end{enumerate} \end{lemma} \begin{proof} The first claim is immediate from the definition of $\text{Wat}_p$, see \cite[(7.4) and (8.4)]{watsontransformations}. The second claim follows from a calculation similar to Lemma~\ref{lemma:terminationSCSF}. \end{proof} \begin{remark}\label{rem:watsoninclusions}Let $L$ be a $\mathbb{Z}$-lattice, $p\in\mathbb{P}$ and $M\in\text{Wat}_p^{-1}(L)$. Then there is some $\alpha\in\mathbb{Z}$ such that $M\cap pM^\#$ is the rescaled lattice ${^\alpha}L$. Indeed, \[\frac{1}{p}({^\alpha}L)=\frac{1}{p}\text{Wat}_p(M)=\frac{1}{p}M\cap M^\#\supseteq M \supseteq M\cap pM^\#=\text{Wat}_p(M)={^\alpha}L.\] Hence, the pre-images under $\text{Wat}_p$ correspond to subspaces of $\frac{1}{p}L/L\cong\mathbb{F}_p^n$.
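Since the pre-images correspond to subspaces of $\mathbb{F}_p^n$, their number is bounded by the total count of such subspaces, which is a sum of Gaussian binomial coefficients. A small sketch of that count (my own illustration, not part of the paper's algorithms):

```python
def gaussian_binomial(n, k, p):
    """Number of k-dimensional subspaces of F_p^n (the Gaussian binomial [n choose k]_p)."""
    num, den = 1, 1
    for i in range(k):
        num *= p ** (n - i) - 1
        den *= p ** (i + 1) - 1
    return num // den  # the quotient is always an exact integer


def num_subspaces(n, p):
    """Total number of subspaces of F_p^n, of any dimension."""
    return sum(gaussian_binomial(n, k, p) for k in range(n + 1))
```

For example, $\mathbb{F}_2^2$ has $1+3+1=5$ subspaces, so at most $5$ candidate pre-image lattices need to be considered in that case.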
\end{remark} The following algorithm completes the classification of single-class lattices. \begin{algorithm}[H]\label{alg:alllattices} \caption{Generating single-class lattices} \LinesNumbered \DontPrintSemicolon \SetKwInOut{Input}{input} \Input{a list {$\mathcal L$} of primitive single-class square-free lattices in dimension $n\geq 2$\;} \SetKwInOut{Output}{output} \Output{a list of primitive single-class lattices in dimension $n$} \Begin{ $\texttt{O}\longleftarrow\mathcal L$, $\texttt{T}\longleftarrow\mathcal L$, $\texttt{A} \longleftarrow \{\text{rescale}(\text{sym}(L)):\ L\in \mathcal{L}\}$\; $L\longleftarrow$ some lattice from $\texttt{T}$, delete $L$ from $\texttt{T}$\; $\texttt{S}\longleftarrow \{p \in \mathbb{P}:\ p| 2\det(L)\text{\ or\ } b_n(p)\cdot\text{mass}(L)\leq\frac{1}{2}\}$, cf. Lemma~\ref{lemma:massbound2}(2)\; \ForEach{$p\in \texttt{S}$}{ $\texttt U \longleftarrow \{\text{rescale}(\text{sym}(K)): K \in \text{Wat}_p^{-1}(L)\} - \texttt A$\; $\texttt A \longleftarrow \texttt A \cup \texttt U$\; \While{$\texttt{U} \not= \emptyset$}{ $L'\longleftarrow$ a random $\mathbb{Z}$-lattice with $pL \subseteq L' \subseteq L$\; $L'\longleftarrow \text{rescale}(L')$\; \lIf{$\text{sym}(L')\in \texttt U$}{remove $\text{sym}(L')$ from $\texttt U$} \lElse{go to step 9}\; \lIf{$L'$ is single-class}{add $L'$ to $\texttt{O}$ and to $\texttt{T}$} } } \lIf{$\texttt{T}=\emptyset$}{\Return $\texttt{O}$} \lElse {go to step 3} } \end{algorithm} \begin{proposition} Called with a complete list of primitive representatives for single-class \emph{square-free} genera in dimension $n\geq 2$, Algorithm~\ref{alg:alllattices} outputs a \emph{complete} list of primitive representatives for single-class lattices in dimension $n$. \begin{proof} Let $L$ be any primitive single-class $\mathbb{Z}$-lattice in dimension~$n$. By Remark~\ref{thm:watsonProcess}(2), $L$ can be reduced to a square-free lattice by repeated application of $\text{Wat}_p$ mappings. 
More precisely, there is a list $p_1,\dots,p_k$ of primes and a list $L=L_0, L_1,\dots,L_k$ of $\mathbb{Z}$-lattices such that $\text{Wat}_{p_i}(L_{i-1})=L_i$ for all $1\leq i\leq k$ and $L_k$ is square-free. By Remark~\ref{thm:watsonProcess}(1), $L_k$ is also single-class. The lattice $L_k$ is contained in the input $\mathcal L$ by assumption. If $L_i$ ($1\leq i\leq k$) is picked from $\texttt T$ in step~$3$, the set $\texttt S$ computed in step~$4$ will contain $p_i$ by Lemma~\ref{lemma:massbound2}(2) since $L_{i-1}$ is a single-class lattice. A lattice from the (rescaled) isometry class of $L_{i-1}$ will eventually be generated in step~$9$ because of Remark~\ref{rem:watsoninclusions}, and because $\text{sym}(L_{i-1})$ is included in $\texttt U$ in step~$6$. By induction, a lattice isometric to $L$ will be generated by Algorithm~\ref{alg:alllattices}. \end{proof} \end{proposition} An implementation of the above algorithm in \textsc{Magma} produced the complete lists of single-class lattices in dimensions $3$--$10$ reasonably quickly. \begin{remark} Since $b_n(p)$ is increasing in $p$, the set $\texttt S$ from step~$4$ is finite and straightforward to compute. The calculation of the full preimage $\text{Wat}_p^{-1}(\text{sym}(L))$ in step~$6$ is an easy local process that changes only the invariants for the prime $p$. \end{remark} \begin{remark} In the situation of Lemma~\ref{lemma:massbound2}(2), assuming $D:=\text{disc}(K)$ is known, we have $\text{disc}(L)=p^2D$, and a bound $b_2(p,D)$ similar to $b_n(p)$ can easily be obtained as \[b_2(p,D)=\frac{\zeta_{p^2D}(1)}{2\zeta_{D}(1)}\left(\frac{1}{1+p^{-1}}\right)^2\cdot p.\] As a consequence, Algorithm~\ref{alg:alllattices} (step~$4$ in particular) can be modified to apply to dimension~$2$.
Thus, while -- to our knowledge -- there is still no way around the Generalized Riemann Hypothesis to classify the $2$-dimensional single-class square-free lattices (cf.~\cite{voight}), this hypothesis is not needed to complete the classification once the single-class square-free lattices are known. \end{remark} Algorithm~\ref{alg:alllattices} concludes our classification of single-class $\mathbb{Z}$-lattices. In a future publication, we hope to generalize our methods to single-class lattices over arbitrary number fields. \section*{Acknowledgments} The present work benefited from the input of Prof.~Gabriele Nebe, RWTH Aachen University. It was made possible by a studentship in DFG Graduiertenkolleg 1632: Experimentelle und konstruktive Algebra. \section{Genera of single-class lattices} \begin{table}[h]\label{table:resultOverview} \caption{Number of primitive single-class $\mathbb{Z}$-lattices} \begin{tabular}{llllllllll} \toprule dimension & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$\\ \midrule \textbf{total} & $1609$ & $794$ & $481$ & $295$ & $186$ & $86$ & $36$ & $4$ & $2$ \\ \hspace{1em} maximal & $769$ & $77$ & $44$ & $16$ & $21$ & $7$ & $6$ & $1$ & $1$\\ \hspace{1em} quadratic-form-maximal & $780$ & $64$ & $20$ & $12$ & $10$ & $5$ & $2$ & $1$ & $1$ \\ maximal prime dividing $\det$. & $409$ & $23$ & $23$ & $11$ & $23$ & $5$ & $5$ & $2$ & $3$\\ maximal determinant & $3{\cdot}5{\cdot} 11{\cdot} 13{\cdot} 19$ & $2^83^37^2$ & $2^43^3{11}^3$ & $2^{12}7^4$ & ${23}^5$ & $2^{18}3^6$ & $3^{14}$ & $2^{24}$ & $3^9$\\ \bottomrule \end{tabular} \end{table} \small \newenvironment{resulttable}[2] { \subsubsection{Dimension #1} #2 lattices: \begin{longtable}{llllll} \toprule Lattice & & Lattice & & Lattice \\ \midrule } {\bottomrule \end{longtable} } \subsection{How to read the tables} We give tables of all genus symbols of primitive single-class $\mathbb{Z}$-lattices in dimension $\geq 4$. 
For dimensions $2$ and $3$, and for representative lattices for all the genera in dimensions $3$ -- $10$, see \cite{latdb}. The single-class $\mathbb{Z}$-lattices have been grouped into families of rescaled partial duals. By a partial dual, we mean $L^{\#,p} := \erz{L, \{v\in L^\#:\ v+L\in S_p\}}$, where $S_p$ denotes the Sylow $p$-subgroup of $L^\#/L$. To keep the tables brief, we repeatedly pass to partial duals of $L$, until the lattice with the smallest possible determinant is reached, and print only that lattice in the tables below. In the Catalogue of Lattices available at \cite{latdb}, the list of genera is given in full, and a representative $\mathbb{Z}$-lattice for each of these genera is listed. Printed next to each genus $\text{Genus}(L)$ is a number of the form $\mu^{*N}_{m_1,m_2}$, with $\mu=\text{mass}(L)^{-1}=\#Aut(L)$, $N$ the number of distinct genera $\text{Genus}(L_1), \dots, \text{Genus}(L_N)$ that can be obtained from $L$ by passing to partial duals of $L$, and the quantities $m_1$ of maximal lattices, and $m_2$ of quadratic-form-maximal lattices (as defined in section~\ref{subsec:preliminaries}) among these. If no subscript is present, both $m_1$ and $m_2$ are $0$. \subsubsection{Dimension 10} $2$ lattices: \begin{longtable}{ll} \toprule Lattice & \\ \midrule \input{Kapitel/table-dim10.tex} \bottomrule \end{longtable} \subsubsection{Dimension 9} $4$ lattices: \begin{longtable}{llll} \toprule Lattice & & Lattice &\\ \midrule \input{Kapitel/table-dim9.tex} \bottomrule \end{longtable} \begin{resulttable}{8}{36} \input{Kapitel/table-dim8.tex} \end{resulttable} \begin{resulttable}{7}{86} \input{Kapitel/table-dim7.tex} \end{resulttable} \begin{resulttable}{6}{186} \input{Kapitel/table-dim6.tex} \end{resulttable} \begin{resulttable}{5}{295} \input{Kapitel/table-dim5.tex} \end{resulttable} \begin{resulttable}{4}{481} \input{Kapitel/table-dim4.tex} \end{resulttable} \normalsize \end{document}
\begin{document} \title{A relation between additive and multiplicative complexity of Boolean functions\footnote{Research supported in part by RFBR, grants 11--01--00508, 11--01--00792, and OMN RAS ``Algebraic and combinatorial methods of mathematical cybernetics and information systems of new generation'' program (project ``Problems of optimal synthesis of control systems'').}} \date{} \author{Igor S. Sergeev\footnote{e-mail: [email protected]}} \maketitle \begin{abstract} In the present note we prove an asymptotically tight relation between the additive and multiplicative complexity of Boolean functions with respect to implementation by circuits over the basis $\{\oplus,\wedge,1\}$. \end{abstract} To start, consider the problem of computing polynomials over a semiring $(K,+,\times)$ by circuits over the arithmetic basis $\{+,\times\} \cup K$. It is common knowledge that a polynomial of $n$ variables with nonscalar multiplicative complexity $M$ (i.e. the minimal number of multiplications needed to implement the polynomial, not counting multiplications by constants) has total complexity $O(M(M+n))$. In general, this bound cannot be improved for infinite semirings. For instance, this follows from results of E.~G. Belaga~\cite{ebe} and V.~Ya. Pan~\cite{epa}: there exist univariate complex and real polynomials of degree $n$ with additive complexity $n$, while each such polynomial has nonscalar multiplicative complexity $O(\sqrt n)$~\cite{eps}. The analogous standard bound for finite semirings is $O(M(M+n)/\log M)$. In general, this bound is also tight in order. A result of this sort was proven in~\cite{ez}.\footnote{\cite{ez} deals with monotone Boolean circuits.} We prove a similar but asymptotically tight result.
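As a quick numerical sanity check of the tightness just claimed (a sketch of my own, using the facts quoted in the proof that almost all $n$-variable Boolean functions have multiplicative complexity $M\sim 2^{n/2}$ and total complexity $\sim 2^n/n$): substituting $M=2^{n/2}$ into the bound $\frac{1}{2}M(M+2n)/\log_2 M$ recovers $2^n/n$ up to lower-order terms.

```python
import math


def theorem_bound(M, n):
    """Total-complexity bound (1/2) * M * (M + 2n) / log2(M)."""
    return 0.5 * M * (M + 2 * n) / math.log2(M)


# For almost all n-variable functions, M ~ 2^(n/2) and total complexity ~ 2^n / n.
n = 40
M = 2 ** (n // 2)
ratio = theorem_bound(M, n) / (2.0 ** n / n)  # should be close to 1
```

For $n=40$ the ratio is within a factor $1+10^{-3}$ of $1$, consistent with the bound being asymptotically optimal.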
\begin{theorem} If a Boolean function of $n$ variables can be implemented by a circuit over the basis $\{\oplus, \wedge, 1\}$ of multiplicative complexity $M=\Omega(n)$, then it can be implemented by a circuit of total complexity $(1/2+o(1))M(M+2n)/\log_2 M$ over the same basis. The bound is asymptotically optimal. \end{theorem} The stated result is nearly folklore, since it is an immediate corollary of results of E.~I. Nechiporuk from the early 1960s. However, these results are little known, and the corollary is even less known. Thus, it seems appropriate to give a proof. The second claim of the theorem (the optimality of the bound) holds since almost all Boolean functions of $n$ variables have multiplicative complexity $\sim2^{n/2}$~\cite{en62}\footnote{Instead of this result of Nechiporuk, the trivial upper bound $\frac3{\sqrt2}\cdot2^{n/2}$ from the later paper~\cite{ebpp} is often cited.} and total complexity $\sim2^n/n$~\cite{elup}. Let us prove the first claim. Let $A$ be a Boolean matrix of size $m\times n$ ($m$ rows, $n$ columns). Assign $1$ to every entry located at most $\log_2 m$ positions, within the same row, from a one of $A$. We denote by $S(A)$ the weight\footnote{The weight of a matrix is the number of nonzero entries in it.} of the obtained matrix and call it the {\it active square} of $A$. The following lemma is an appropriate reformulation of a particular case of a result due to Nechiporuk~\cite{en63,en69}. In what follows, by an implementation of a matrix we mean an implementation of the linear operator with that matrix. \begin{lemma}\label{1} Any Boolean matrix $A$ of size $m\times n$ can be implemented by an additive circuit\footnote{Over any associative and commutative semigroup $(G, +)$.} of complexity $\frac{S(A)}{2\log_2m}+o\left(\frac{(m+n)^2}{\log m}\right).$ \end{lemma} \proof Divide the set of $n$ variables into groups of size $s<\log_2 m$. All possible sums in every group can be trivially computed with complexity $<2^s$.
Regard the computed sums as new variables and note that the problem is now reduced to the implementation of a matrix of size $m \times 2^s\lceil n/s \rceil$ and weight $\le S(A)/s$. Divide the new matrix into horizontal sections of height $p$. Implement each section independently. For this, in each column of a section, group all ones into pairs. Denote by $y_{i,j}$ the sum of (new) variables corresponding to columns whose paired ones lie in rows $i$ and $j$. Compute all $y_{i,j}$ independently. Next, implement the $i$-th row of a section as $y_{i,1}+\ldots+y_{i,p}+z_i$, where $z_i$ is the sum of variables corresponding to positions with unpaired ones. Note that the total complexity of computing all $y_{i,j}$ over all sections is at most half the weight of the new matrix, that is, $S(A)/(2s)$, and the number of unpaired ones in each section is at most the number of columns, i.e. $2^s\lceil n/s \rceil$. Therefore, the complexity of the described circuit is bounded from above by $$ \frac{n2^s}{s} + \frac{S(A)}{2s} + mp + \left\lceil \frac{m}{p} \right\rceil 2^s \left\lceil \frac{n}{s} \right\rceil. $$ Assuming $p \sim m/\log^2 m$ and $s \sim \log_2 m - 3\log_2\log_2 m$, we obtain the required bound. \qed The bound of the lemma is asymptotically tight. More general results of this sort were established by N.~Pippenger~\cite{epip} and V.~V. Kochergin~\cite{eko}. Now we complete the proof of the theorem. Let a circuit $S$ implement a Boolean function $f$ with multiplicative complexity $M$. Number all conjunction gates in the circuit in an order consistent with the orientation of the circuit. Denote by $h_{2i-1}, h_{2i}$ the input functions of the $i$-th conjunction gate, and denote by $g_i$ its output function. Each function $h_j$ is a linear combination of variables and functions $g_i$, where $1\le i < j/2$. The function $f$ itself is a linear combination of variables and all functions $g_i$.
The computation of all functions $h_j$, $j=1,\ldots,2M$, together with the function $f$, as linear combinations of variables and functions $g_i$, can be performed by a linear operator with a matrix of size $(2M+1)\times(M+n)$ and active square $\le (2M+1)(n+M/2+\log_2 M)$. To obtain the desired bound, implement this operator via the method of Lemma~\ref{1}. \end{document}
\begin{document} \title{\LARGE \bf Entropy-Regularized Stochastic Games} \begin{abstract} In two-player zero-sum stochastic games, where two competing players make decisions under uncertainty, a pair of optimal strategies is traditionally described by Nash equilibrium and computed under the assumption that the players have perfect information about the stochastic transition model of the environment. However, implementing such strategies may make the players vulnerable to unforeseen changes in the environment. In this paper, we introduce entropy-regularized stochastic games where each player aims to maximize the causal entropy of its strategy in addition to its expected payoff. The regularization term balances each player's rationality with its belief about the level of misinformation about the transition model. We consider both entropy-regularized $N$-stage and entropy-regularized discounted stochastic games, and establish the existence of a value in both games. Moreover, we prove the sufficiency of Markovian and stationary mixed strategies to attain the value, respectively, in $N$-stage and discounted games. Finally, we present algorithms, which are based on convex optimization problems, to compute the optimal strategies. In a numerical example, we demonstrate the proposed method on a motion planning scenario and illustrate the effect of the regularization term on the expected payoff. \end{abstract} \section{Introduction} A two-player zero-sum stochastic game (SG) \cite{Shapley} models sequential decision-making of two players with opposing objectives in a stochastic environment. An SG is played in stages. At each stage, the game is in a state, and the players choose one of their available actions simultaneously and receive payoffs. The game then transitions to a new random state according to a probability distribution which represents the stochasticity in the environment. 
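The model just described can be captured in a small data structure; the following sketch is my own illustration (all names and the toy transition and payoff numbers are assumptions, not from the paper): states, finite action sets for the two players, a transition kernel $\mathcal{P}(x'\,|\,x,u,w)$, and a stage payoff $\mathcal{R}(x,u,w)$ paid by player 2 to player 1.

```python
import random


class ZeroSumSG:
    """Two-player zero-sum stochastic game: player 1 maximizes and player 2
    minimizes the accumulated payoffs R(x, u, w)."""

    def __init__(self, P, R):
        self.P = P  # P[(x, u, w)] -> dict {x_next: probability}
        self.R = R  # R[(x, u, w)] -> payoff made by player 2 to player 1

    def step(self, x, u, w, rng=random):
        """Sample a successor state; return (stage payoff, next state)."""
        dist = self.P[(x, u, w)]
        states, probs = zip(*dist.items())
        x_next = rng.choices(states, weights=probs)[0]
        return self.R[(x, u, w)], x_next


# A toy game with two states and two actions per player (matching-pennies payoff).
P = {(x, u, w): {0: 0.5, 1: 0.5} for x in (0, 1) for u in (0, 1) for w in (0, 1)}
R = {(x, u, w): 1.0 if u == w else 0.0 for x in (0, 1) for u in (0, 1) for w in (0, 1)}
game = ZeroSumSG(P, R)
payoff, x1 = game.step(0, 0, 0)
```

At each stage both players observe the state, choose actions simultaneously, the payoff is exchanged, and the next state is drawn from the kernel, exactly as in the description above.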
In an SG, each player aims to synthesize a strategy that maximizes the player's expected payoff at the end of the game. Traditionally, a pair of optimal strategies is described by Nash equilibrium \cite{Nash}, according to which both players play their best-response strategies against the opponent's strategy. The value of the game then corresponds to the expected payoff that each player receives at the end of the game, if they both play their respective equilibrium strategies. The concept of Nash equilibrium is based on the assumptions that the players have perfect information about the environment and act rationally \cite{QRE_book}. However, in certain scenarios, the information that a player has about the environment may not match reality. For example, in a planning scenario, if the player obtains its information about the environment through surveillance missions performed in the past, it may face a significantly different environment during the execution of the play. In such scenarios, playing an equilibrium strategy may dramatically decrease the player's actual expected payoff, since the strategy is computed under the assumption of perfect information. The principle of maximum entropy prescribes a probability distribution that is \say{maximally noncommittal with regard to missing information} \cite{Jaynes}. The principle of maximum causal entropy extends the maximum entropy principle to settings where there is dynamically revealed side information that causally affects the evolution of a stochastic process \cite{Ziebart2, Ziebart3}. A distribution that maximizes the causal entropy of a stochastic process (in the absence of additional constraints) is the one that makes all admissible realizations equally probable regardless of the revealed information \cite{Yagiz}.
Therefore, the causal entropy of a player's strategy provides a convenient way to quantify the dependence of its strategy on its level of information about the environment as well as on the other player's strategy. In this paper, we propose a method to synthesize a pair of strategies that balances each player's rationality with its belief about the level of missing information. Specifically, we regularize each player's objective with the causal entropy of its strategy, which is causally dependent on the history of play. Therefore, the proposed method allows the players to adjust their strategies according to different levels of misinformation by tuning a parameter that controls the importance of the regularization term. For example, at the two extremes, it allows a player to be perfectly rational or to fully randomize its strategy. We study both entropy-regularized $N$-stage and entropy-regularized discounted games, and show the existence of a value in both games. We first prove the sufficiency of Markovian and stationary strategies for both players, respectively, in $N$-stage and discounted games in order to maximize their entropy-regularized expected payoff. Then, we provide algorithms based on a sequence of convex optimization problems to compute a pair of equilibrium strategies. Finally, we demonstrate the proposed methods on a motion planning scenario, and illustrate that the introduced regularization term yields strategies that perform well in different environments. \noindent \textbf{Related work.} In the stochastic games literature, the idea of balancing the expected payoffs with an additional regularization term appeared recently in \cite{Jordi} and \cite{Reza}. The work \cite{Jordi} proposes to bound the rationality of the players to obtain tunable behavior in video games.
They study $\gamma$-discounted games and restrict their attention to stationary strategies to balance the expected payoffs with the Kullback-Leibler divergence of the players' strategies from reference strategies. In \cite{Reza}, the authors study $N$-stage games and consider only Markovian strategies to balance the expected payoffs with the players' sensing costs, which are expressed as directed information from states to actions. Unlike that work, we introduce the causal entropy of strategies as the regularization term. Additionally, we allow the players to follow history-dependent strategies and prove the sufficiency of Markovian strategies to attain the value in $N$-stage games. Regularization terms are also used in matrix and extensive-form games, generally to learn equilibrium strategies \cite{Merti}, \cite{Leslie}, \cite{Ling}, \cite{Levine}. When each player uses the same parameter to regularize its expected payoffs with the entropy of its strategy, an equilibrium strategy profile is called a quantal response equilibrium (QRE) \cite{QRE_book}, and an equilibrium strategy of a player is referred to as a quantal best response \cite{Leslie} or a logit choice strategy \cite{Merti}. From a theoretical perspective, the main difference between our approach and the well-studied QRE concept \cite{McKelvey} is that we establish the existence of equilibrium strategies even if the players use different regularization parameters. Additionally, we provide an efficient algorithm based on a convex optimization problem to compute the equilibrium strategies. Robust stochastic games \cite{Kardes}, \cite{Aghassi} concern the synthesis of equilibrium strategies when the uncertainty in transition probabilities and payoff functions can be represented by structured sets. Unlike robust SG models, the proposed method in this paper can still be used when it is not possible to form a structured uncertainty set.
In the reinforcement learning literature, the use of regularization terms is extensively studied to obtain robust behaviors \cite{Haarnoja}, improve convergence rates \cite{Fox}, and compute optimal strategies efficiently \cite{Todorov}. As stochastic games model multi-player interactions, our approach extends the ideas discussed in the aforementioned work to environments where an adversary aims to prevent a player from achieving its objective. \section{Background} We first review some concepts from game theory and information theory that will be used in the subsequent sections. \textbf{Notation:} For a sequence $x$, we write $x^t$ to denote $(x_1,x_2,\ldots, x_t)$. Upper case symbols such as $X$ denote random variables, and lower case symbols such as $x$ denote a specific realization. The cardinality of a set $\mathcal{X}$ is denoted by $\lvert \mathcal{X} \rvert$, and the probability simplex defined over the set $\mathcal{X}$ is denoted by $\Delta(\mathcal{X})$. For $V_1$$,$$V_2$$\in$$\mathbb{R}^n$, we write $V_1$$\preccurlyeq$$V_2$ to denote the coordinate-wise inequalities. We use the index set $\mathbb{Z_{+}}$$=$$\{1,2,\ldots\}$ and the natural logarithm $\log (\cdot)$$=$$\log_e (\cdot)$. \subsection{Two-Player Stochastic Games} A two-player stochastic game $\Gamma$ \cite{Shapley} is played in stages. At each stage $t$, the game is in one of the finitely many states in $\mathcal{X}$, and each player observes the current state $x_t$. At each state $x_t$, the players choose one of their finitely many actions, and the game transitions to a successor state $x_{t+1}$ according to a probability distribution $\mathcal{P}$$:$$\mathcal{X}$$\times$$\mathcal{U}$$\times$$\mathcal{W}$$\rightarrow$$\Delta(\mathcal{X})$ where $\mathcal{U}$ and $\mathcal{W}$ are finite action spaces for player 1 and player 2, respectively.
The pair of actions, $u_t$$\in$$\mathcal{U}$ and $w_t$$\in$$\mathcal{W}$, together with the current state $x_t$$\in$$\mathcal{X}$, determines the payoff $\mathcal{R}(x_t,u_t,w_t)$$\leq$$\overline{\mathcal{R}}$$<$$\infty$ to be made by player 2 to player 1 at stage $t$. A player's strategy is a specification of a probability distribution over available actions at each stage conditional on the history of the game up to that stage. Formally, let $\mathcal{H}_t$$=$$(\mathcal{X}$$\times$$\mathcal{U}$$\times$$\mathcal{W})^{t-1}$$\times$$\mathcal{X}$ be the set of all possible histories of play up to stage $t$. Then, the strategies of player 1 and player 2 are denoted by $\boldsymbol{\sigma}$$=$$(\sigma_1,\sigma_2,\ldots)$ and $\boldsymbol{\tau}$$=$$(\tau_1, \tau_2,\ldots)$, respectively, where $\sigma_t$ $:$$\mathcal{H}_t$$\rightarrow$$\Delta(\mathcal{U})$ and $\tau_t$ $:$$\mathcal{H}_t$$\rightarrow$$\Delta(\mathcal{W})$ for all $t$. If a player's strategy depends only on the current state for all stages, e.g., $\sigma_t$$:$$\mathcal{X}$$\rightarrow$$\Delta(\mathcal{U})$ for all $t$, the strategy is said to be \textit{Markovian}. A \textit{stationary} strategy depends only on the current state and is independent of the stage number, e.g., $\boldsymbol{\sigma}$$=$$(\sigma, \sigma,\ldots)$, where $\sigma$$:$$\mathcal{X}$$\rightarrow$$\Delta(\mathcal{U})$. We denote the sets of all strategies, all Markovian strategies, and all stationary strategies by $\Gamma_u$, $\Gamma_u^M$, and $\Gamma_u^S$ for player 1, and by $\Gamma_w$, $\Gamma_w^M$, and $\Gamma_w^S$ for player 2, respectively.
Let $\mu_{t+1}(x^{t+1},u^t,w^t)$ be the joint probability distribution over the history $\mathcal{H}_{t+1}$ of play, which is uniquely determined by the initial state distribution $\mu_1(x_1)$ through the recursive formula \begin{align} &\mu_{t+1}(x^{t+1},u^t,w^t)=\mathcal{P}(x_{t+1} | x_t, u_t, w_t)\sigma_t(u_t | h_t)\nonumber \\ &\qquad\qquad \qquad\qquad \ \times \tau_t(w_t | h_t)\mu_{t}(h_t) \end{align} where $h_t$$\in$$\mathcal{H}_t$ is the history of play up to stage $t$. A stochastic game with the initial distribution $\mu_1(x_1)$ is called an $N$-stage game if the game ends after $N$ stages. The evaluation function for an $N$-stage game is \begin{align}\label{usual_evaluation_finite} &J(X^N, U^N, W^N):=\nonumber\\ &\qquad\qquad\sum_{t=1}^N\mathbb{E}^{\overline{\mu}_t}\mathcal{R}(X_t, U_t, W_t)+\mathbb{E}^{{\mu}_{N+1}}\mathcal{R}(X_{N+1}) \end{align} where $\overline{\mu}_t(\cdot)$$:=$${\mu}_t(\cdot)$$\sigma_t(\cdot)$$\tau_t(\cdot)$. Similarly, if the number of stages in the game is infinite, and the future payoffs are discounted by a factor $0$$<$$\gamma$$<$$1$, the game is called a $\gamma$-discounted game. The evaluation function for a $\gamma$-discounted game is \begin{align}\label{classical_infinite} \sum_{t=1}^{\infty}\gamma^{t-1}\mathbb{E}^{\overline{\mu}_t}\mathcal{R}(X_t, U_t, W_t). \end{align} Player 1's objective is to maximize the evaluation function, i.e., its expected payoff, whereas player 2 aims to minimize it. A stochastic game is said to have the \textit{value} $\mathcal{V}^{\star}$ if, for an evaluation function $f(\boldsymbol{\sigma},\boldsymbol{\tau})$, we have \begin{align*} \mathcal{V}^{\star}=\max_{\boldsymbol{\sigma}\in \Gamma_u} \min_{\boldsymbol{\tau}\in\Gamma_w} f(\boldsymbol{\sigma},\boldsymbol{\tau})=\min_{\boldsymbol{\tau}\in\Gamma_w}\max_{\boldsymbol{\sigma}\in \Gamma_u} f(\boldsymbol{\sigma},\boldsymbol{\tau}).
\end{align*} A pair of strategies $(\boldsymbol{\sigma}^{\star},\boldsymbol{\tau}^{\star})$ is called a pair of equilibrium strategies if it attains the value of the game. It is well known that both $N$-stage and $\gamma$-discounted games have a value for finite state and action sets \cite{Bewley}. Moreover, Markovian and stationary strategies are sufficient for the players to attain the value in $N$-stage and $\gamma$-discounted games, respectively \cite{Shapley}, \cite{Sorin}. \subsection{Causal Entropy} For a sequential decision-making problem where decisions depend causally on past information, such as the history of play, the causal entropy of a strategy is a measure of the randomness of the strategy. Let $X^N$, $Y^N$ and $Z^N$ be sequences of random variables of length $N$. The entropy of the sequence $X^N$ causally conditioned on the sequences $Y^N$ and $Z^N$ is defined as \cite{Kramer} \begin{align}\label{causal_deff} H(X^N || Y^N, Z^N ):= \sum_{t=1}^NH(X_t | X^{t-1}, Y^{t}, Z^ t), \end{align} where \begin{align} &H(X_t | X^{t-1}, Y^{t}, Z^ t):=\nonumber\\ &-\sum_{\mathcal{X}^t, \mathcal{Y}^{t},\mathcal{Z}^{t} }\text{Pr}(x^t, y^t, z^t)\log \text{Pr}(x_t | x^{t-1}, y^t, z^t). \end{align} The concept of causal entropy has recently been used to infer correlated-equilibrium strategies in Markov games \cite{Ziebart1} and to recover cost functions in inverse optimal control problems \cite{Ziebart3}. In this study, we employ causal entropy to compute an equilibrium strategy profile that balances the players' expected payoffs with the randomness of their strategies in stochastic games. In the absence of additional constraints, a strategy $\boldsymbol{\sigma}$$\in$$\Gamma_u$ that maximizes the causal entropy $H(U^N || X^N, W^{N-1})$ of player 1, which is conditioned on the revealed history of play, is the stationary strategy $\boldsymbol{\sigma}$$=$$(\sigma,\sigma,\ldots)$ where $\sigma(x)(u)$$=$$1/ \lvert \mathcal{U} \rvert$.
Therefore, a player that maximizes the entropy of its strategy acts purely randomly, regardless of the history of play. On the other hand, a player that regularizes its expected payoff with the entropy of its strategy can be thought of as a player that balances its rationality with its belief about the correctness of the underlying transition model of the environment. \section{Problem Statement} We first consider entropy-regularized $N$-stage games, for which we define the evaluation function as \begin{align}\label{entropy_regularized} \Phi_N(\boldsymbol{\sigma},\boldsymbol{\tau}) :=& J(X^N, U^N, W^N)+\frac{1}{\beta_1}H(U^N || X^N, W^{N-1} )\nonumber\\ &-\frac{1}{\beta_2}H(W^N || X^N, U^{N-1} ), \end{align} where $\beta_1,\beta_2$$>$$0$ are regularization parameters that adjust the importance each player places on the randomness of its strategy. Note that, when $\beta_1$$=$$\beta_2$$=$$\infty$, both players act perfectly rationally, and we recover the evaluation function \eqref{usual_evaluation_finite}. Additionally, since the play is simultaneous, a player's strategy at a given stage is not revealed to the other player. Hence, at each stage, players are allowed to condition their strategies only on the observed history of play. \noindent \textbf{Problem 1:} Provide an algorithm to synthesize equilibrium strategies, if they exist, in entropy-regularized $N$-stage games. We next consider stochastic games that are played over infinitely many stages, and introduce entropy-regularized $\gamma$-discounted games, for which we define the evaluation function as \begin{align}\label{entropy_regularized_infinite} \Phi_{\infty}(\boldsymbol{\sigma},\boldsymbol{\tau}):=& \sum_{t=1}^{\infty}\gamma^{t-1}\Big[\mathbb{E}^{\overline{\mu}_t}\mathcal{R}(X_t, U_t, W_t)\nonumber\\ &+\frac{1}{\beta_1}H(U_t | H_t )-\frac{1}{\beta_2}H(W_t | H_t )\Big], \end{align} where $H_t$$=$$(X^t, U^{t-1}, W^{t-1})$, i.e., the admissible history of play at stage $t$.
Note that in the evaluation function \eqref{entropy_regularized_infinite}, we discount the players' future entropy gains as well as the expected payoff in order to ensure the finiteness of the evaluation function. \noindent \textbf{Problem 2:} Provide an algorithm to synthesize equilibrium strategies, if they exist, in entropy-regularized $\gamma$-discounted games. \section{Existence of Values and The Computation of Optimal Strategies} {\setlength{\parindent}{0cm} In this section, we analyze entropy-regularized $N$-stage and $\gamma$-discounted games, and show that both games have values. Then, we provide algorithms to synthesize equilibrium strategies that attain the corresponding game values. \subsection{Entropy-Regularized $N$-Stage Games}\label{finite_section} Searching for optimal strategies that solve a stochastic game with the evaluation function $\Phi_N(\boldsymbol{\sigma},\boldsymbol{\tau})$ in the space of all strategies can be intractable for large $N$. We begin by establishing the existence of optimal strategies for both players in the space of Markovian strategies. \begin{prop}\label{Markovian_finite} Markovian strategies are sufficient for both players to attain the value, if it exists, in entropy-regularized $N$-stage games, i.e., \begin{align*} \max_{\boldsymbol{\sigma}\in\Gamma_u^M}\min_{\boldsymbol{\tau}\in\Gamma_w^M}\Phi_N(\boldsymbol{\sigma},\boldsymbol{\tau})=\max_{\boldsymbol{\sigma}\in\Gamma_u}\min_{\boldsymbol{\tau}\in\Gamma_w}\Phi_N(\boldsymbol{\sigma},\boldsymbol{\tau}),\\ \min_{\boldsymbol{\tau}\in\Gamma_w^M}\max_{\boldsymbol{\sigma}\in\Gamma_u^M}\Phi_N(\boldsymbol{\sigma},\boldsymbol{\tau})=\min_{\boldsymbol{\tau}\in\Gamma_w}\max_{\boldsymbol{\sigma}\in\Gamma_u}\Phi_N(\boldsymbol{\sigma},\boldsymbol{\tau}). \end{align*} \end{prop}} \noindent\textbf{Proof:} See Appendix A. $\quad \Box$ Next, we show that entropy-regularized $N$-stage games have a value.
Let $\rho_t$$:$$\mathcal{X}$$\times$$\mathcal{U}$$\times$$\mathcal{W}$$\rightarrow$$\mathbb{R}$ be a function and $x_t$ be a fixed state. Additionally, let \begin{align}\label{one-shot-game} \mathcal{V}_t^{\sigma_t,\tau_t}(x_t):=\mathbb{E}^{\sigma_t, \tau_t}\Big[&\rho_t(x_t,u_t,w_t)-\frac{1}{\beta_1}\log \sigma_t(u_t | x_t )\nonumber\\ &+\frac{1}{\beta_2}\log \tau_t(w_t | x_t )\Big] \end{align} be the evaluation function for a \say{one-shot} game in which the game starts from the state $x_t$ and ends after both players play their one-step strategies. {\setlength{\parindent}{0cm} \begin{prop}\label{normal_game_prop} A stochastic game with the evaluation function \eqref{one-shot-game} has a value, i.e., \begin{align*} \max_{\sigma_t\in \Delta(\mathcal{U})}\min_{\tau_t\in\Delta(\mathcal{W})}\mathcal{V}_t^{\sigma_t,\tau_t}(x_t)=\min_{\tau_t\in\Delta(\mathcal{W})}\max_{\sigma_t\in \Delta(\mathcal{U})}\mathcal{V}_t^{\sigma_t,\tau_t}(x_t). \end{align*} \end{prop}} \noindent\textbf{Proof:} It is clear that $\mathcal{V}_t^{\sigma_t,\tau_t}(x_t)$ is a continuous function that is concave in $\sigma_t$ and convex in $\tau_t$. Additionally, $\Delta(\mathcal{U})$ and $\Delta(\mathcal{W})$ are compact convex sets. The result follows from von Neumann's minimax theorem \cite{Neumann}.$\quad\Box$ The following proposition states that one can compute the value of the one-shot game \eqref{one-shot-game} and synthesize equilibrium strategies by solving a convex optimization problem.
{\setlength{\parindent}{0cm} \begin{prop}\label{optimal_strategy_finite} For a given one-shot game with the evaluation function \eqref{one-shot-game}, optimal strategies $(\sigma_t^{\star},\tau_t^{\star})$ satisfy \begin{align}\label{player_1_st} &\sigma_t^{\star}(u_t|x_t)\in\nonumber\\ &\arg\max_{\sigma_t\in\Delta(U)}\Big[-\frac{1}{\beta_1}\sum_{u_t\in\mathcal{U}}\sigma_t(u_t | x_t)\log\sigma_t(u_t | x_t)\nonumber\\ &-\frac{1}{\beta_2}\log\sum_{w_t\in \mathcal{W}}\exp\big(-\beta_2\sum_{u_t\in\mathcal{U}}\sigma_t(u_t | x_t)\rho_t(x_t,u_t,w_t)\big)\Big],\\ &\tau_t^{\star}(w_t|x_t)=\nonumber\\\label{player_2_st} &\frac{\exp\big(-\beta_2\sum_{u_t\in\mathcal{U}}\sigma_t^{\star}(u_t | x_t)\rho_t(x_t,u_t,w_t)\big)}{\sum_{w_t\in \mathcal{W}}\exp\big(-\beta_2\sum_{u_t\in\mathcal{U}}\sigma^{\star}_t(u_t | x_t)\rho_t(x_t,u_t,w_t)\big)}. \end{align} Furthermore, the unique value $\mathcal{V}^{\star}_t(x_t)$ of the game is given by \begin{align}\label{value_game_opt} &\mathcal{V}_t^{\star}(x_t)=-\frac{1}{\beta_1}\sum_{u_t\in\mathcal{U}}\sigma^{\star}_t(u_t | x_t)\log\sigma^{\star}_t(u_t | x_t)\nonumber\\ &-\frac{1}{\beta_2}\log\sum_{w_t\in \mathcal{W}}\exp\big(-\beta_2\sum_{u_t\in\mathcal{U}}\sigma^{\star}_t(u_t | x_t)\rho_t(x_t,u_t,w_t)\big). \end{align} \end{prop}} \noindent\textbf{Proof:} See Appendix A.$\quad \Box$ It is worth noting that the objective function of the optimization problem given in \eqref{player_1_st} is strictly concave, and therefore, the optimal strategies $\sigma_t^{\star}(u_t|x_t)$ and $\tau_t^{\star}(w_t|x_t)$ are unique. Additionally, an optimal strategy of the form \eqref{player_2_st} is known in the economics literature as a quantal best response \cite{McKelvey}, and for $\beta_1$$=$$\beta_2$$<$$\infty$, the optimal strategies form the well-studied quantal response equilibrium strategies \cite{QRE_book}.
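To illustrate the coupled quantal-response structure, the following Python sketch iterates the two softmax best responses on a hypothetical $2$$\times$$2$ payoff matrix $\rho$ with $\beta_1$$=$$\beta_2$$=$$1$. This is a heuristic under our own assumptions (damped fixed-point iteration, convergence assumed for small $\beta$), not the convex-optimization algorithm of the paper; it checks that the value formula \eqref{value_game_opt} agrees with the direct evaluation of \eqref{one-shot-game} at the computed pair.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Hypothetical 2x2 stage payoff rho[u][w] at a fixed state.
rho = [[0.3, 0.7], [0.6, 0.2]]
b1 = b2 = 1.0
nu, nw = 2, 2

def tau_response(sigma):
    # player 2's quantal best response, eq. (player_2_st)
    a = [sum(sigma[u] * rho[u][w] for u in range(nu)) for w in range(nw)]
    return a, softmax([-b2 * v for v in a])

sigma = [1.0 / nu] * nu
for _ in range(2000):
    _, tau = tau_response(sigma)
    # player 1's quantal response to tau (the closed form mentioned in the text)
    q = [sum(tau[w] * rho[u][w] for w in range(nw)) for u in range(nu)]
    new_sigma = softmax([b1 * v for v in q])
    sigma = [0.5 * s + 0.5 * n for s, n in zip(sigma, new_sigma)]  # damped update

a, tau = tau_response(sigma)  # exact quantal response to the final sigma

# Value via eq. (value_game_opt) ...
H_sigma = -sum(s * math.log(s) for s in sigma)
value_formula = H_sigma / b1 - math.log(sum(math.exp(-b2 * v) for v in a)) / b2

# ... and by directly evaluating eq. (one-shot-game) at (sigma, tau).
value_direct = (sum(sigma[u] * tau[w] * rho[u][w] for u in range(nu) for w in range(nw))
                + H_sigma / b1 + sum(t * math.log(t) for t in tau) / b2)

# Fixed-point residual of the coupled quantal responses.
q = [sum(tau[w] * rho[u][w] for w in range(nw)) for u in range(nu)]
res = max(abs(s - n) for s, n in zip(sigma, softmax([b1 * v for v in q])))
```

The agreement of the two value computations holds exactly whenever $\tau$ is the quantal response to $\sigma$, which makes it a useful consistency check even before the iteration has fully converged.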
We remark that the optimization problem in \eqref{player_1_st} has a closed-form solution which is a function of the optimal strategy $\tau_t^{\star}(w_t|x_t)$. However, since the closed-form expressions for equilibrium strategies constitute a system of coupled nonlinear equations, the convex optimization formulation provides a more convenient way to compute equilibrium strategies. Utilizing the results of the above propositions, we now reformulate an entropy-regularized $N$-stage game as a series of \say{one-shot} games through the use of Bellman recursions. Let \begin{align}\label{recursion_function} &\rho_t(x_t,u_t,w_t)=\mathcal{R}(x_t,u_t,w_t)\nonumber\\ &\qquad+\sum_{x_{t+1}\in \mathcal{X}}\mathcal{P}(x_{t+1}| x_t, u_t, w_t)\mathcal{V}_{t+1}^{\sigma_t,\tau_t}(x_{t+1}), \end{align} for $t$$=$$1,\ldots, N$ where $\mathcal{V}_{N+1}^{\sigma_t,\tau_t}(x_{N+1})$$=$$\mathcal{R}(x_{N+1})$. Then, it can be easily verified that \begin{align}\label{objective-equivalence} \Phi_N(\boldsymbol{\sigma},\boldsymbol{\tau})=\sum_{x_1\in\mathcal{X}}\mu_1(x_1)\mathcal{V}_1(x_1) \end{align} for a given initial distribution $\mu_1(x_1)$. Consequently, we obtain the following result. {\setlength{\parindent}{0cm} \begin{thm}\label{finite_game_has_value} Entropy-regularized $N$-stage games have a value. \end{thm}} \noindent\textbf{Proof:} Due to Proposition \ref{Markovian_finite}, we can focus on Markovian strategies to find an equilibrium point in $N$-stage games. We start from the stage $k$$=$$N$ and compute the value of the one-shot game \eqref{one-shot-game}, which exists due to Proposition \ref{normal_game_prop}. Using \eqref{one-shot-game} with \eqref{recursion_function} for $k$$=$$N$$-$$1,N$$-$$2,\ldots,1$, we compute the value of the $N$$-$$k$$+$$1$ stage games.
As a result, the claim follows due to the equivalence given in \eqref{objective-equivalence}.$\quad\Box$ Algorithm \ref{euclid} summarizes the computation of the pair $(\boldsymbol{\sigma}^{\star},\boldsymbol{\tau}^{\star})$ of optimal strategies for entropy-regularized $N$-stage games. \begin{algorithm} \caption{Strategy computation for $N$-stage games}\label{euclid} \begin{algorithmic}[1] \State \textbf{Initialize:} $\mathcal{V}_{N+1}(x_{N+1})$=$\mathcal{R}(x_{N+1})$ for all $x_{N+1}$$\in$$\mathcal{X}$. \For{$t = N, N$$-$$1, ..., 1$} \State Compute $\rho_t(x_t,u_t,w_t)$ for all $x_t$$\in$$\mathcal{X}$, $u_t$$\in$$\mathcal{U}$, and $w_t$$\in$$\mathcal{W}$ as in \eqref{recursion_function}. \State For all $x_t$$\in$$\mathcal{X}$, \begin{align*} &\mathcal{V}^{\star}_{t}(x_{t})=\max_{\sigma_t\in\Delta(U)}-\frac{1}{\beta_1}\sum_{u_t\in\mathcal{U}}\sigma_t(u_t | x_t)\log\sigma_t(u_t | x_t)\\ &-\frac{1}{\beta_2}\log\sum_{w_t\in \mathcal{W}}\exp(-\beta_2\sum_{u_t\in\mathcal{U}}\sigma_t(u_t | x_t)\rho_t(x_t,u_t,w_t))\end{align*} \State For all $x_t$$\in$$\mathcal{X}$, compute $\sigma_t^{\star}$ and $\tau_t^{\star}$ as in \eqref{player_1_st} and \eqref{player_2_st}, respectively. \EndFor \State \textbf{return} $\boldsymbol{\sigma}^{\star}$$=$$(\sigma_1^{\star},\ldots, \sigma_N^{\star})$ and $\boldsymbol{\tau}^{\star}$$=$$(\tau_1^{\star},\ldots, \tau_N^{\star})$. \end{algorithmic} \end{algorithm} \noindent\textbf{Remark:} In certain scenarios, one of the players may prefer to play perfectly rationally against a boundedly rational opponent, e.g., $\beta_2$$=$$\infty$. In that case, it is still possible to compute equilibrium strategies by solving a convex optimization problem at each stage. The value of the one-shot game \eqref{one-shot-game} still exists due to the arguments provided in the proof of Proposition \ref{normal_game_prop}.
However, the form of the optimal strategies slightly changes to \begin{align}\label{player_1_infinity} &\tau_t^{\star}(w_t|x_t)\in\arg\min_{\tau_t\in\Delta(W)}\frac{1}{\beta_1}\log\sum_{u_t\in \mathcal{U}}\exp\Big(\nonumber\\ &\qquad\qquad\qquad\beta_1\sum_{w_t\in\mathcal{W}}\tau_t(w_t | x_t)\rho_t(x_t,u_t,w_t)\Big),\\ &\sigma_t^{\star}(u_t|x_t)=\nonumber\\\label{player_2_infinity} &\frac{\exp\big(\beta_1\sum_{w_t\in\mathcal{W}}\tau_t^{\star}(w_t | x_t)\rho_t(x_t,u_t,w_t)\big)}{\sum_{u_t\in \mathcal{U}}\exp\big(\beta_1\sum_{w_t\in\mathcal{W}}\tau^{\star}_t(w_t | x_t)\rho_t(x_t,u_t,w_t)\big)}. \end{align} It is important to note that, if $\beta_i$$=$$\infty$ for some $i$$\in$$\{1,2\}$, the equilibrium strategies may not be unique since the function $\log\sum\exp(\cdot)$ is not strictly convex over its domain \cite{Rockafellar}. \subsection{Entropy-Regularized $\gamma$-Discounted Games} In this section, we focus on Markovian strategies, whose optimality for $N$-stage games is shown in Proposition \ref{Markovian_finite}. Let $\mathcal{V}$$\in$$\mathbb{R}^{\lvert \mathcal{X}\rvert}$ be a real-valued function, and for a given $x$$\in$$\mathcal{X}$, let $\mathcal{L}(\mathcal{V})(x,\cdot,\cdot)$ $:$ $\mathcal{U}$$\times$$\mathcal{W}$$\rightarrow$$\mathbb{R}$ be a function such that \begin{align} &\mathcal{L}(\mathcal{V})(x,\sigma,\tau):=\mathbb{E}^{\sigma, \tau}\Big[\mathcal{R}(x,u,w)-\frac{1}{\beta_1}\log \sigma(u | x )\nonumber\\ &+\frac{1}{\beta_2}\log \tau(w | x )+\gamma\sum_{{x'}\in\mathcal{X}}\mathcal{P}(x' | x,u, w)\mathcal{V}(x')\Big]. \end{align} As discussed in Proposition \ref{normal_game_prop}, a one-shot game with the evaluation function $\mathcal{L}(\mathcal{V})(x,\sigma,\tau)$ has a value.
Therefore, we can introduce the Shapley operator $\Psi$$:$$\mathcal{V}$$\rightarrow$$\Psi(\mathcal{V})$ from $\mathbb{R}^{\lvert \mathcal{X}\rvert}$ to itself, specified for all $x$$\in$$\mathcal{X}$ as \begin{align*} \Psi(\mathcal{V})[x]:=\max_{\sigma\in\Delta(U)}\min_{\tau\in\Delta(W)}\mathcal{L}(\mathcal{V})(x,\sigma,\tau). \end{align*} It is clear that the operator $\Psi$ satisfies two key properties: \textit{monotonicity}, i.e., $\mathcal{V}$$\preccurlyeq$$\overline{\mathcal{V}}$ implies $\Psi(\mathcal{V})$$\preccurlyeq$$\Psi(\overline{\mathcal{V}})$, and \textit{reduction of constants}, i.e., for any $k$$\geq$$0$, $\Psi(\mathcal{V}$$+$$k\boldsymbol{1})[x]$$=$$\Psi(\mathcal{V})[x]$$+$$\gamma k$ for all $x$$\in$$\mathcal{X}$. Consequently, it is straightforward to show that the operator $\Psi$ is a contraction mapping \cite{Bertsekas}. Specifically, we have \begin{align*} \lVert \Psi(\mathcal{V})-\Psi(\overline{\mathcal{V}})\rVert_{\infty}\leq \gamma \lVert \mathcal{V}- \overline{\mathcal{V}}\rVert_{\infty}, \end{align*} where $\lVert\mathcal{V}\rVert_{\infty}$$=$$\max_{x\in\mathcal{X}}\lvert\mathcal{V}(x)\rvert$. We omit the details since similar results can be easily found in the literature \cite{Neyman}. Then, by Banach's fixed-point theorem \cite{Puterman}, we conclude that the operator $\Psi$ has a unique fixed point which satisfies $\mathcal{V}^{\star}$$=$$\Psi(\mathcal{V}^{\star})$. Next, we need to show that the fixed point $\mathcal{V}^{\star}$ is indeed the value of the entropy-regularized $\gamma$-discounted game. Let $\boldsymbol{\sigma}^{\star}$$=$$(\sigma^{\star},\sigma^{\star},\ldots)$ be a stationary strategy such that $\sigma^{\star}$ is a one-step strategy for player 1 satisfying the fixed point equation, and $\boldsymbol{\tau}$ be an arbitrary Markovian strategy for player 2.
Denoting by $h_t$ the history of play of length $t$, one has, by definition of $\Psi$ and $\boldsymbol{\sigma}^{\star}$, \begin{align*} &\mathbb{E}^{\overline{\mu}_t}\Big[\mathcal{R}(x_t,u_t,w_t)-\frac{1}{\beta_1}\log \sigma^{\star}(u_t | x_t )+\frac{1}{\beta_2}\log \tau(w_t | x_t )\nonumber\\ &+\gamma\sum_{{x'}\in\mathcal{X}}\mathcal{P}(x' | x_t,u_t, w_t)\mathcal{V}^{\star}(x') \Big| h_t\Big]\geq \mathbb{E}^{\overline{\mu}_t}\Big[\mathcal{V}^{\star}(x_t)\Big| h_t\Big]. \end{align*} This expression can further be written as \begin{align*} &\mathbb{E}^{\overline{\mu}_t,\overline{\mu}_{t+1}}\Big[\mathcal{R}(x_t,u_t,w_t)-\frac{1}{\beta_1}\log \sigma^{\star}(u_t | x_t )\nonumber\\ &+\frac{1}{\beta_2}\log \tau(w_t | x_t )+\gamma\mathcal{V}^{\star}(x_{t+1})\Big| h_t\Big]\geq \mathbb{E}^{\overline{\mu}_t}\Big[\mathcal{V}^{\star}(x_t)\Big| h_t\Big]. \end{align*} Multiplying by $\gamma^{t-1}$, taking expectations, and summing over $1$$\leq$$t$$<$$k$, one obtains, by telescoping, \begin{align*} &\sum_{t=1}^{k-1}\gamma^{t-1}\mathbb{E}^{\overline{\mu}_t}\Big[\mathcal{R}(x_t,u_t,w_t)-\frac{1}{\beta_1}\log \sigma^{\star}(u_t | x_t )+\nonumber\\ &\frac{1}{\beta_2}\log \tau(w_t | x_t )\Big| x_1\Big]\geq \mathcal{V}^{\star}(x_1)-\gamma^{k-1}\mathbb{E}^{\overline{\mu}_{k}}\Big[\mathcal{V}^{\star}(x_{k})\Big| x_1\Big]. \end{align*} Taking the limit as $k$$\rightarrow$$\infty$ and using Proposition \ref{Markovian_finite}, we obtain \begin{align}\label{infinite_first} \Phi_{\infty}(\sigma^{\star},\tau)\geq \mathcal{V}^{\star}(x_1). \end{align} Similarly, when player 2 plays the optimal stationary strategy $\boldsymbol{\tau}^{\star}$$=$$(\tau^{\star},\tau^{\star},\ldots)$ against an arbitrary Markovian strategy $\boldsymbol{\sigma}$ of player 1, we have \begin{align}\label{infinite_second} \Phi_{\infty}(\sigma,\tau^{\star})\leq \mathcal{V}^{\star}(x_1). \end{align} Then, the combination of \eqref{infinite_first} and \eqref{infinite_second} implies the following result.
{\setlength{\parindent}{0cm} \begin{thm} Entropy-regularized $\gamma$-discounted games have a value which satisfies $\Psi(\mathcal{V}^{\star})$$=$$\mathcal{V}^{\star}$. Furthermore, stationary strategies are sufficient for both players to attain the game value, i.e., \begin{align*} \max_{\boldsymbol{\sigma}\in\Gamma_u}\min_{\boldsymbol{\tau}\in\Gamma_w}\Phi_{\infty}(\boldsymbol{\sigma},\boldsymbol{\tau})=\max_{\boldsymbol{\sigma}\in\Gamma_u^S}\min_{\boldsymbol{\tau}\in\Gamma_w^S}\Phi_{\infty}(\boldsymbol{\sigma},\boldsymbol{\tau}). \end{align*} \end{thm}} The computation of optimal strategies is a straightforward extension of Algorithm \ref{euclid}. Note that for $\gamma$-discounted games, we use the same one-shot game introduced in \eqref{one-shot-game}. Therefore, the optimal decision rules at each stage have the form \eqref{player_1_st} and \eqref{player_2_st}. Consequently, to compute the optimal strategies, we initialize Algorithm \ref{euclid} with an arbitrary value vector $\mathcal{V}$$\in$$\mathbb{R}^{\lvert \mathcal{X}\rvert}$ and iterate until convergence, which is guaranteed by the existence of a unique fixed point. \section{A Numerical Example} In this section, we demonstrate the proposed strategy synthesis method on a motion planning scenario that we model as an entropy-regularized $\gamma$-discounted game. To solve the convex optimization problems required for the computation of equilibrium strategies, we use the ECOS solver \cite{Ecos} through the interface of CVXPY \cite{cvxpy}. All computations are performed by setting $\gamma$$=$$0.8$. As the environment model, we consider a $5$$\times$$5$ grid world which is given in Figure \ref{grid_graph} (top left). The brown grid denotes the initial position of the player 1, which aims to reach the goal (green) state. The red grid is the initial position of the player 2, whose aim is to catch the player 1 before it reaches the goal state. Finally, black grids represent walls.
Let $x$$=$$(s_1,s_2)$ be the current state of the game such that $x[1]$$=$$s_1$ and $x[2]$$=$$s_2$ are the positions of the player 1 and the player 2, respectively. At each state, the action space for both players is given as $\mathcal{U}$$=$$\mathcal{W}$$=$$\{right,left,up,down,stay\}$. For simplicity, we assume deterministic transitions, i.e., $\mathcal{P}(x' | x,u,w)$$\in$$\{0,1\}$ for all $x,x'$$\in$$\mathcal{X}$, $u$$\in$$\mathcal{U}$ and $w$$\in$$\mathcal{W}$. If a player takes an action for which the successor state is a wall, the player stays in the same state with probability 1. For a given $(x,u,w)$$\in$$\mathcal{X}$$\times$$\mathcal{U}$$\times$$\mathcal{W}$, we encode the payoff function $\mathcal{R}(x,u,w)$ as the sum of two functions such that $\mathcal{R}(x,u,w)$$=$$\mathcal{R}_1(x,u,w)$$+$$\mathcal{R}_2(x,u,w)$ where \begin{align*} &\mathcal{R}_1(x,u,w)=\sum_{x'[1]=\text{G}}\mathcal{P}(x' | x,u,w), \\ &\mathcal{R}_2(x,u,w)=\begin{cases} -\mathcal{P}(x' | x,u,w) & \text{if} \ \ x'[1]=x'[2]\neq \text{G}\\ 0 & \text{otherwise}.\end{cases} \end{align*} Note that the payoff function defines a zero-sum game which is won by the player 1, if it reaches the goal state before getting caught, and by the player 2, if it catches the player 1 before the player 1 reaches the goal state. We first compute Nash equilibrium strategies in the absence of causal entropy terms, i.e., $\beta_1$$=$$\beta_2$$=$$\infty$, by employing the standard linear programming formulation \cite{Miltersen} for zero-sum games. Starting from the initial state, an equilibrium strategy for the player 1 is to move towards the goal state by taking the action $right$ deterministically, and for the player 2 is to chase the player 1 by taking the action $up$ in the first two stages, and then taking the action $right$ until reaching the goal state. Therefore, a perfectly rational player 1 wins the game with probability 1 no matter what strategy is followed by the player 2.
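For illustration, a minimal sketch of this payoff encoding on a hypothetical one-dimensional corridor (three cells with the goal at the right end; the toy geometry and all identifiers are ours, not the paper's $5$$\times$$5$ world), specialized to deterministic transitions:

```python
# Toy 1-D corridor with cells 0..2; player 1 pursues the goal G at cell 2.
GOAL = 2

def move(s, a):
    # Deterministic single-player move; the grid boundary acts as a wall,
    # so moves off the corridor leave the position unchanged.
    d = {"right": 1, "left": -1, "stay": 0}[a]
    return min(2, max(0, s + d))

def step(x, u, w):
    # Joint deterministic transition for the state x = (s1, s2).
    return (move(x[0], u), move(x[1], w))

def payoff(x, u, w):
    # R = R1 + R2 as in the text, specialized to deterministic transitions:
    # R1 rewards player 1's successor cell being the goal, and R2 penalizes
    # a collision with player 2 away from the goal.
    s1, s2 = step(x, u, w)
    r1 = 1.0 if s1 == GOAL else 0.0                  # player 1 reaches the goal
    r2 = -1.0 if (s1 == s2 and s1 != GOAL) else 0.0  # player 1 is caught
    return r1 + r2
```

With deterministic transitions, the sums over $\mathcal{P}(x' | x,u,w)$ in the displayed definitions collapse to the two indicators above, and the resulting payoff remains zero-sum.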
To illustrate the drawback of playing with perfect rationality, we assume that there is another wall in the environment about which the players have no information while they compute the equilibrium strategies, i.e., the players use the nominal environment (top left) to compute the equilibrium strategies. First, we consider the case that the wall is between the goal state and the player 1, as shown in Figure \ref{grid_graph} (middle left). In this case, if the player 1 follows the Nash equilibrium strategy, the probability that it reaches the goal state becomes zero. Therefore, following the Nash equilibrium strategy makes the player 1 significantly vulnerable to such changes in the environment. \begin{figure} \caption{ (Top left) The nominal environment players use for computing their strategies. (Top right) The probability that the player 1 wins the game when it plays the strategy computed by using $\beta_1=\beta_2=\beta$ against the perfectly rational player 2. (Middle and bottom left) The actual environments where the game is played. (Middle and bottom right) The probability of winning for the player 1 when it employs strategies computed by using different $\beta$ values against the perfectly rational player 2. } \label{grid_graph} \end{figure} To investigate the tradeoff between rationality and randomness, we compute 9 different strategies for player 1 by using $\beta_1$$=$$\beta_2$$=$$2,3,\ldots,10$, and let it play against the perfectly rational player 2 which follows its Nash equilibrium strategy computed on the nominal environment (top left). The winning probabilities of player 1 under different strategies are shown in Figure \ref{grid_graph} (right) for the corresponding environments given in Figure \ref{grid_graph} (left). 
This specific scenario demonstrates that, by choosing $\beta$$=$$6$, the player 1 can obtain a robust behavior against unforeseen changes in the environment, i.e., the winning probability is around 20\%, without sacrificing too much of its optimal performance, i.e., around 15\%, if the structure of the environment remains the same. It is worth noting that the asymptotic performance of the player 1 as $\beta$$\rightarrow$$\infty$ approaches its performance under the Nash equilibrium strategy, as discussed in \cite{McKelvey}. Additionally, the importance of randomness for the player 1 increases as $\beta$$\rightarrow$$0$, and using smaller $\beta$ values negatively affects the performance below a critical point, i.e., $\beta$$=$$4$. Finally, one can argue that the tradeoff occurs in this specific scenario only if the unpredicted wall is between the player 1 and the goal state. To justify the choice of the $\beta$ value, we also consider the scenario in which the unexpected wall occupies another state, which is shown in Figure \ref{grid_graph} (bottom left). In this case, as shown in Figure \ref{grid_graph} (bottom right), the use of $\beta$$=$$6$ results in a strategy that guarantees a winning probability of around 80\%. Therefore, the entropy-regularized strategy of the player 1 still provides an advantage against unpredicted changes without sacrificing too much of the optimal performance.
In the numerical example, we applied the proposed approach to a motion planning scenario and observed that, by tuning the regularization parameter, a player can synthesize robust strategies that perform well in different environments against a perfectly rational opponent. Extending this work to multi-agent reinforcement learning settings, as discussed in \cite{Jordi}, is an interesting future direction. Future work can also investigate the effect of the entropy regularization term on the convergence rate of learning algorithms, as discussed in \cite{Fox}. \section*{Appendix A} \textbf{Proof of Proposition \ref{Markovian_finite}:} We show the sufficiency of Markovian strategies only for the maximin problem. The proof for the minimax formulation follows along the same lines as the arguments provided below. The proof is based on backward induction on the stage number $1$$\leq$$k$$\leq$$N$. Let \begin{align*} &\mathcal{V}_k:=\max_{\boldsymbol{\sigma}\in \Gamma_u}\min_{\boldsymbol{\tau}\in\Gamma_w}\sum_{l=k}^N\mathbb{E}^{\overline{\mu}_l}\Big[\mathcal{R}(X_l, U_l, W_l)\nonumber\\ &-\frac{\log \sigma_l(U_l | H_l )}{\beta_1}+\frac{\log \tau_l(W_l | H_l )}{\beta_2}\Big]+\mathbb{E}^{{\mu}_{N+1}}\mathcal{R}(X_{N+1}) \end{align*} be the value of the $N$$-$$k$ stage problem. Then, we can write the value of the $N$$-$$k$ stage problem recursively as \begin{align}\label{recursion} &\mathcal{V}_k=\max_{\sigma_k}\min_{\tau_k}\mathbb{E}^{\overline{\mu}_k}\Big[\mathcal{R}(X_k, U_k, W_k)-\frac{1}{\beta_1}\log \sigma_k(U_k | H_k )\nonumber\\ &\qquad \qquad\ +\frac{1}{\beta_2}\log \tau_k(W_k | H_k )+\mathbb{E}^{\mathcal{P}}[\mathcal{V}_{k+1}]\Big]. \end{align} \underline{Base step: $k$$=$$N$.} Let $\sigma_N$ and $\tau_N^{{\star}}$ be an arbitrary strategy for player 1 and the optimal strategy for player 2 at stage $N$, respectively.
Let \begin{align*} &\lambda_N(h_N, u_N, w_N):=\mu_N(x^N,u^{N-1},w^{N-1})\\ &\qquad\qquad\qquad\qquad\qquad\qquad\times\sigma_N(u_N| h_N)\tau_N^{{\star}}(w_N| h_N) \end{align*} be the joint distribution induced by $\sigma_N(u_N| h_N)$ and $\tau_N^{\star}(w_N| h_N)$. Additionally, let $\lambda_N(x_N,w_N)$ and $\lambda_N(x_N)$ be the marginal distributions of $\lambda_N(h_N, u_N, w_N)$. We construct a new strategy for player 2 as $\overline{\tau}_N(w_N| x_N)$$:=$$\frac{\lambda_N(x_N,w_N)}{\lambda_N(x_N)}$. Let \begin{align*} &\overline{\lambda}_N(h_N, u_N, w_N):=\mu_N(x^N, u^{N-1},w^{N-1})\\ &\qquad\qquad\qquad\qquad\qquad\qquad\times\sigma_N(u_N| h_N)\overline{\tau}_N(w_N| x_N) \end{align*} be the joint distribution induced by $\overline{\tau}_N(w_N| x_N)$. Then, by construction, we have $\overline{\lambda}_N(x_N, u_N, w_N)$$=$${\lambda}_N(x_N, u_N, w_N)$, which can be easily verified by calculating the corresponding marginal distributions. (For a similar strategy construction, see Theorem 5.5.1 in \cite{Puterman}.) The inner optimization problem in \eqref{recursion} for $k$$=$$N$ reads \begin{align} \mathcal{V}_N=\min_{\tau_N}J_N^c(\lambda_N)-J_N^H(\lambda_N) \end{align} where \begin{align*} &J_N^c(\lambda_N)=\mathbb{E}^{\lambda_N, \mathcal{P}}[\mathcal{R}(X_N,U_N, W_N)+\mathcal{R}(X_{N+1})\\ &\qquad\qquad\qquad\qquad \qquad-\frac{1}{\beta_1}\log \sigma_N(U_N | H_N)]\\ &J_N^H(\lambda_N)=\frac{1}{\beta_2}H_{\lambda_N}(W_N| X^N, U^{N-1}, W^{N-1}). \end{align*} Since the strategy $\sigma_N$ is arbitrarily chosen, it is sufficient to show that $J_N^c(\lambda_N)$$=$$J_N^c(\overline{\lambda}_N)$ and $J_N^H(\lambda_N)$$\leq$$J_N^H(\overline{\lambda}_N)$ in order to establish the sufficiency of Markovian strategies for player 2. The first equality holds by construction. (Note that the $\log(\cdot)$ term is indifferent to changes in the strategy of player 2.) 
The second inequality can be derived as \begin{align}\label{ent_1} H_{\lambda_N}(W_N| X^N, U^{N-1}, W^{N-1})\leq H_{\lambda_N}(W_N| X_N)&\\\label{ent_2} = H_{\overline{\lambda}_N}(W_N| X_N)&\\\label{ent_3} = H_{\overline{\lambda}_N}(W_N| X^N, U^{N-1}, W^{N-1})& \end{align} where \eqref{ent_1} holds since conditioning reduces entropy \cite{Cover}, \eqref{ent_2} holds because $\lambda_N$$=$$\overline{\lambda}_N$ by construction, and \eqref{ent_3} is due to the fact that $\overline{\tau}_N(w_N | x_N )$ is a Markovian strategy. Consequently, for any strategy chosen by player 1 in stage $N$, player 2 has a best response strategy in the space of Markovian strategies. Next, we can assume that player 2 uses a Markovian strategy and show, through a strategy construction similar to the one explained above, that player 1 has an optimal strategy in the space of Markovian strategies. As a result, the value $\mathcal{V}_N$ depends on the joint distribution $\mu_N(x^N, u^{N-1}, w^{N-1})$ only through its marginal $\mu_N(x_N)$ and becomes a function of the marginal $\mu_N(x_N)$ only. \underline{Inductive step: $k$$=$$t$.} Assume that Markovian strategies suffice for both players for $k$$=$$t$$+$$1,t$$+$$2,\ldots,N$. Then, by the induction hypothesis, $\mathcal{V}_{t+1}$ is a function of $\mu_{t+1}(x_{t+1})$ only. Therefore, using a construction similar to the one for the case $k$$=$$N$, we can construct Markovian strategies $\overline{\sigma}_t(u_t | x_t)$ and $\overline{\tau}_t(w_t | x_t)$ such that the objective function on the right-hand side of \eqref{recursion} attained by $\overline{\sigma}_t$ and $\overline{\tau}_t$ is equal to the value of the $N$$-$$t$ stage problem. As a result, we conclude that Markovian strategies are sufficient for both players to solve the maximin problem \eqref{entropy_regularized}.
$\quad \Box$ \textbf{Proof of Proposition \ref{optimal_strategy_finite}:} Since the one-shot game has a value, without loss of generality, we focus on the problem $\max_{\sigma_t\in \Delta(\mathcal{U})}\min_{\tau_t\in\Delta(\mathcal{W})}\mathcal{V}_t^{\sigma_t,\tau_t}(x_t)$. For notational convenience, we rewrite the problem as \begin{align*} &\max_{Q^{ij}}\min_{Q^{ik}}\sum_{jk}Q^{ij}Q^{ik}\rho_{ijk}-\frac{1}{\beta_1}\sum_{jk}Q^{ik}Q^{ij}\log Q^{ij}\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\quad+\frac{1}{\beta_2}\sum_{jk}Q^{ij}Q^{ik}\log Q^{ik}\\ &\text{subject to}\ \sum_jQ^{ij}=1, \ \sum_kQ^{ik}=1, \ Q^{ij}\geq 0, \ Q^{ik}\geq 0, \end{align*} where $Q^{ij}$$=$$\sigma_t(u_t|x_t)$, $Q^{ik}$$=$$\tau_t(w_t|x_t)$ and $\rho_{ijk}$$=$$\rho_t(x_t,u_t,w_t)$. Note that due to the constraints $\sum_jQ^{ij}$$=$$1$ and $\sum_kQ^{ik}$$=$$1$, we can replace $\frac{1}{\beta_1}\sum_{jk}Q^{ik}Q^{ij}\log Q^{ij}$ and $\frac{1}{\beta_2}\sum_{jk}Q^{ij}Q^{ik}\log Q^{ik}$ by $\frac{1}{\beta_1}\sum_{j}Q^{ij}\log Q^{ij}$ and $\frac{1}{\beta_2}\sum_{k}Q^{ik}\log Q^{ik}$, respectively. For now, we neglect the non-negativity constraints and write the Lagrangian for the above optimization problem as \begin{align*} L=&\sum_{jk}Q^{ij}Q^{ik}\rho_{ijk}-\frac{1}{\beta_1}\sum_{j}Q^{ij}\log Q^{ij}\\ &\qquad\qquad+\frac{1}{\beta_2}\sum_{k}Q^{ik}\log Q^{ik}+\lambda^j(\sum_jQ^{ij}-1)\\ &\qquad\qquad+\lambda^k(\sum_kQ^{ik}-1) \end{align*} where $\lambda^j$,$\lambda^k$ are Lagrange multipliers. Then, taking the derivative with respect to $Q^{ik}$ and equating it to zero, we obtain \begin{align*} \frac{\partial L}{\partial Q^{ik}}=\sum_{j}Q^{ij}\rho_{ijk}+\frac{1}{\beta_2}\log Q^{ik}_{\star}+\frac{1}{\beta_2}+\lambda^k=0. \end{align*} Rearranging terms and using the constraint $\sum_kQ^{ik}=1$, we obtain \begin{align*} Q^{ik}_{\star}=\frac{\exp(-\beta_2\sum_jQ^{ij}\rho_{ijk})}{\sum_{k}\exp(-\beta_2\sum_jQ^{ij}\rho_{ijk})} \end{align*} which is the same as \eqref{player_2_st}.
Note that the resulting strategy also satisfies the non-negativity constraint. Plugging $Q^{ik}_{\star}$ into the Lagrangian $L$, we obtain the optimization problem given in \eqref{player_1_st}, whose optimal variables correspond to the optimal strategy of player 1. Similarly, the optimal value \eqref{value_game_opt} of the resulting optimization problem is the value of the game. Uniqueness of the value follows from the fact that the value of the game is the optimal value of the convex optimization problem given in \eqref{player_1_st}. $\quad \Box$ \end{document}
\begin{document} \title[Seven-game series vs. five-game series]{Are seven-game baseball playoffs fairer than five-game series when home-field advantage is considered?} \author{Brian Dean} \address{Department of Mathematics and Computer Science\\ Salisbury University\\ Salisbury, MD 21801} \email{[email protected]} \date{} \begin{abstract} Conventional wisdom in baseball circles holds that a seven-game playoff series is fairer than a five-game series. In an earlier paper, E. Lee May, Jr. showed that, treating each game as an independent event, a seven-game series is not significantly fairer. In this paper, we take a different approach, taking home-field advantage into account. That is, we consider a given series to consist of two disjoint sets of independent events---the home games and the road games. We will take the probability of winning a given road game to be different from the probability of winning a given home game. Our analysis again shows that a seven-game series is not significantly fairer. \end{abstract} \maketitle \section{Introduction}\label{intro} It is often said in baseball that a seven-game playoff series is fairer than a five-game series. The argument is that, in a five-game series, the team without home-field advantage need only win its two home games, and take just one out of three on the road, in order to win the series. On the other hand, to win a seven-game series, the team would have to either win all three home games and one of four on the road, or win at least two out of four on the road. Analyzing this question is a useful exercise in mathematical modeling and probability. In \cite{Ma}, E. Lee May, Jr. showed that a seven-game series is not significantly fairer. (By \textit{significantly fairer}, we mean that there is at least a four percent greater probability of winning the seven-game series than winning the five-game series.) 
May approached the problem as follows: he let $p$ be the probability that the better team would win a given game in the series, and treated each game equally as an independent event without regard to where the game was being played. In this paper, we will examine the same problem while attempting to account for home-field advantage. From now on, $p$ will represent the probability that the team with home-field advantage in the series will win a given home game. The probability that that team will win a given road game will be $rp$, where $r$, the \textit{road multiplier}, will be discussed in Section~\ref{roadmultiplier}. Each home game will be treated as an independent event, and each road game will be treated as an independent event. Since May approached the problem from the point of view of the better team, he necessarily had $p\in [0.5,1]$. In this paper, where we approach the problem from the point of view of the team with home-field advantage, that will still be the case most of the time---in the Division Series and League Championship Series, home-field advantage goes to the better team. However, in the World Series, this is not always the case. Home-field advantage in the World Series alternated between the American and National Leagues through 2002; since 2003, it has been given to the champion of the league which had won that year's All-Star Game. Still, in most cases, if a team is good enough to reach the World Series, then the probability that it will win a given home game is still likely to be at least 0.5, regardless of the opposition. Nevertheless, it is possible that $p$ could be below 0.5, so we will only require $p\in [0,1]$. Practically speaking, it seems unlikely that $p$ would ever be below, say, 0.4, but we will not require that to be the case. 
\section{The Road Multiplier}\label{roadmultiplier} As discussed in the Introduction, we will take the probability that the team with home-field advantage will win a given road game to be $rp$, where $r$ is a fixed number which we will call the \textit{road multiplier}. For an individual team, the road multiplier is obtained by dividing the team's road winning percentage by its home winning percentage, i.e., $$\mbox{road multiplier}\,\, =\frac{\frac{RW}{RW+RL}}{\frac{HW}{HW+HL}}$$ where $RW$, $RL$, $HW$, and $HL$, are the number of the team's road wins, road losses, home wins, and home losses, respectively, in that season. Our value $r$ will be the average of the road multipliers of the 96 teams which have made the playoffs in the wildcard era (1995-2006). This ends up giving us (to 9 decimal places) $$r=0.894762228,$$ that is, we will consider the team with home-field advantage to be about 89.5 percent as likely to win a given road game as they are to win a given home game. We will not list the results for all 96 teams here. However, we will make a few comments. The five highest and five lowest road multipliers of the 96 are as follows: \begin{tabular}{lccc} Team & Home Record & Road Record & Road Multiplier \\ \hline 2001 Braves & 40-41 & 48-33 & 1.2 \\ 1997 Orioles & 46-35 & 52-29 & 1.130434783 \\ 2001 Astros & 44-37 & 49-32 & 1.113636364 \\ 2005 White Sox & 47-34 & 52-29 & 1.106382979 \\ 2006 Tigers & 46-35 & 49-32 & 1.065217391 \\ (tie) 2000 White Sox & 46-35 & 49-32 & 1.065217391 \\ \hline 2000 Mets & 55-26 & 39-42 & 0.709090909 \\ 2005 Braves & 53-28 & 37-44 & 0.698113208 \\ 2006 Cardinals & 49-31 & 34-47 & 0.685311162 \\ 2003 Athletics & 57-24 & 39-42 & 0.684210526 \\ 2005 Astros & 53-28 & 36-45 & 0.679245283 \end{tabular} Of the 96 teams, 23 of them had road multipliers of 1 or higher (meaning that about a quarter of the teams did at least as well on the road as they did at home), while 12 of the teams had road multipliers of 0.75 or below. 
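The road-multiplier computation defined above is straightforward; as a check, the following minimal sketch reproduces the 2001 Braves entry of the table.

```python
def road_multiplier(hw, hl, rw, rl):
    """Road winning percentage divided by home winning percentage."""
    return (rw / (rw + rl)) / (hw / (hw + hl))

# 2001 Braves: 40-41 at home, 48-33 on the road (first row of the table).
braves_2001 = road_multiplier(40, 41, 48, 33)
assert abs(braves_2001 - 1.2) < 1e-9
```

When a team plays equally many home and road games, as here (81 each), the denominators cancel and the multiplier reduces to $RW/HW$, e.g. $48/40=1.2$.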
12 of the 16 highest road multipliers belong to American League teams, while 11 of the 16 lowest road multipliers belong to National League teams. The road multipliers for the 12 World Series champions of the wildcard era, from highest to lowest, are as follows: \begin{tabular}{lccc} Team & Home Record & Road Record & Road Multiplier \\ \hline 2005 White Sox & 47-34 & 52-29 & 1.106382979 \\ 1995 Braves & 44-28 & 46-26 & 1.045454545 \\ 1999 Yankees & 48-33 & 50-31 & 1.041666667 \\ 2000 Yankees & 44-35 & 43-39 & 0.941518847 \\ 2001 Diamondbacks & 48-33 & 44-37 & 0.916666667 \\ 1996 Yankees & 49-31 & 43-39 & 0.856147337 \\ 1998 Yankees & 62-19 & 52-29 & 0.838709677 \\ 2002 Angels & 54-27 & 45-36 & 0.833333333 \\ 2004 Red Sox & 55-26 & 43-38 & 0.781818182 \\ 1997 Marlins & 52-29 & 40-41 & 0.769230769 \\ 2003 Marlins & 53-28 & 38-43 & 0.716981131 \\ 2006 Cardinals & 49-31 & 34-47 & 0.685311162 \end{tabular} \section{Comparing Three-Game Series and Five-Game Series}\label{threeversusfive} Before comparing seven-game series and five-game series, we will first look at five-game series versus three-game series, as that case is a bit easier to dive right into. Throughout the next two sections, we will use the following notation: we will use capital letters (W and L) to denote games in which the team with home-field advantage wins and loses at home, and lowercase letters (w and l) to denote games in which that team wins and loses on the road. Thus, each instance of W will have probability $p$, each L will have probability $1-p$, each w will have probability $rp$, and each l will have probability $1-rp$. \subsection{Three-game series} There have never been three-game playoff series in baseball, except to break ties (most notably the playoff between the New York Giants and Brooklyn Dodgers following the 1951 season). 
However, if there were, they would likely be in one of two formats---either a 1-1-1 format (in which the team with home-field advantage plays games one and three at home, and game two on the road), or a 1-2 format (in which they play game one on the road and games two and three at home). The scenarios for that team to win the series, in a 1-1-1 format, are as follows. \begin{tabular}{lc} Scenario & Probability \\ \hline Ww & $p(rp)$ \\ WlW & $p^2(1-rp)$ \\ LwW & $p(rp)(1-p)$ \end{tabular} Adding these probabilities, we see that the total probability that the team with home-field advantage will win the series, in a 1-1-1 format, is $(2r+1)p^2-2rp^3$. The following are the corresponding scenarios if the series were played in a 1-2 format. \begin{tabular}{lc} Scenario & Probability \\ \hline wW & $p(rp)$ \\ wLW & $p(rp)(1-p)$ \\ lWW & $p^2(1-rp)$ \end{tabular} Again, the total probability of victory in this format is $(2r+1)p^2-2rp^3$. So, the probability that the team with home-field advantage will win a three-game series is the same in either format. \subsection{Five-game series} Major League Baseball employed five-game playoff series for the League Championship Series from 1969-1984. (Prior to 1969, the playoffs consisted solely of the teams with the best records in each league meeting in the World Series.) Since 1985, the League Championship Series have been in a best-of-seven format. However, five-game series returned with the advent of the wildcard system; since 1995, each league has had two five-game Division Series, with the winners advancing to the seven-game League Championship Series. Two formats for best-of-five series have been used over the years: a 2-3 format (in which the team with home-field advantage plays the first two games on the road and the final three games at home), and a 2-2-1 format (in which that team plays games one, two, and five at home, and games three and four on the road). 
We will examine each format separately; as with the two formats for three-game series, we will see that the probability that the team with home-field advantage will win the series is independent of the format. First, we examine the scenarios in which the team with home-field advantage will win the series, if the series is in a 2-3 format. \begin{tabular}{lc} Scenario & Probability \\ \hline wwW & $p(rp)^2$ \\ lwWW & $p^2(rp)(1-rp)$ \\ wlWW & $p^2(rp)(1-rp)$ \\ wwLW & $p(rp)^2(1-p)$ \\ llWWW & $p^3(1-rp)^2$ \\ lwLWW & $p^2(rp)(1-p)(1-rp)$ \\ lwWLW & $p^2(rp)(1-p)(1-rp)$ \\ wlLWW & $p^2(rp)(1-p)(1-rp)$ \\ wlWLW & $p^2(rp)(1-p)(1-rp)$ \\ wwLLW & $p(rp)^2(1-p)^2$ \end{tabular} Summing these, we see that the total probability that the team with home-field advantage will win the series, in a 2-3 format, is $$(3r^2+6r+1)p^3-(9r^2+6r)p^4+6r^2p^5$$ Next, we look at the corresponding scenarios for a 2-2-1 format. \begin{tabular}{lc} Scenario & Probability \\ \hline WWw & $p^2(rp)$ \\ LWww & $p(rp)^2(1-p)$ \\ WLww & $p(rp)^2(1-p)$ \\ WWlw & $p^2(rp)(1-rp)$ \\ LLwwW & $p(rp)^2(1-p)^2$ \\ LWlwW & $p^2(rp)(1-p)(1-rp)$ \\ LWwlW & $p^2(rp)(1-p)(1-rp)$ \\ WLlwW & $p^2(rp)(1-p)(1-rp)$ \\ WLwlW & $p^2(rp)(1-p)(1-rp)$ \\ WWllW & $p^3(1-rp)^2$ \end{tabular} Again, if we add these, we see that the total probability of victory is $$(3r^2+6r+1)p^3-(9r^2+6r)p^4+6r^2p^5$$ and so the probability that the team with home-field advantage will win a five-game series is the same in either format. \subsection{Comparing the two} To find the difference in probabilities in winning a five-game series and a three-game series, we just subtract the two: the probability of winning a five-game series, minus the probability of winning a three-game series, is the function $$f(p)=6r^2p^5-(9r^2+6r)p^4+(3r^2+8r+1)p^3-(2r+1)p^2,\;\;p\in [0,1]$$ We will find the extreme values of $f$ using the Extreme Value Theorem. 
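The three- and five-game closed forms above, and the fact that they do not depend on the format, can be verified by brute force. The sketch below enumerates complete outcome sequences; playing every scheduled game out, rather than stopping once the series is decided, does not change who wins the series, which keeps the enumeration simple.

```python
from itertools import product

def series_win_prob(pattern, p, r):
    """Win probability for the home-advantage team in a best-of-n series.

    pattern is a string of 'H'/'R', one letter per scheduled game; the
    team wins a home game with probability p and a road game with
    probability r*p, each game independent.
    """
    need = len(pattern) // 2 + 1
    total = 0.0
    for outcome in product([True, False], repeat=len(pattern)):
        prob = 1.0
        for site, win in zip(pattern, outcome):
            chance = p if site == 'H' else r * p
            prob *= chance if win else 1.0 - chance
        if sum(outcome) >= need:
            total += prob
    return total

r, p = 0.894762228, 0.55
# Three-game series: 1-1-1 and 1-2 formats agree with (2r+1)p^2 - 2rp^3.
three = (2*r + 1)*p**2 - 2*r*p**3
assert abs(series_win_prob('HRH', p, r) - three) < 1e-12
assert abs(series_win_prob('RHH', p, r) - three) < 1e-12
# Five-game series: 2-3 and 2-2-1 formats agree with the closed form above.
five = (3*r**2 + 6*r + 1)*p**3 - (9*r**2 + 6*r)*p**4 + 6*r**2*p**5
assert abs(series_win_prob('RRHHH', p, r) - five) < 1e-12
assert abs(series_win_prob('HHRRH', p, r) - five) < 1e-12
```

The sample value $p=0.55$ is illustrative; the format-independence holds identically in $p$ and $r$.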
The derivative of $f$ is $$f'(p)=30r^2p^4-(36r^2+24r)p^3+(9r^2+24r+3)p^2-(4r+2)p;$$ keeping in mind that $r=0.894762228$, the derivative is 0 for $$p=0$$ $$p\approx 0.294269665$$ $$p\approx 0.756820873$$ and for a value of $p$ between 1 and 2 (as can be verified using the Intermediate Value Theorem). Checking the values of $f$ at the critical points and the endpoints, we get $$f(0)=0$$ $$f(0.294269665)\approx -0.056156576$$ $$f(0.756820873)\approx 0.047338476$$ $$f(1)=0$$ So, a five-game series is at most about 4.73\% fairer than a three-game series, and at worst about 5.62\% less fair. However, as mentioned in the Introduction, it is extremely unlikely that $p$ would ever be as low as $0.294$. If we look at the value of $f$ at a more realistic lower bound for $p$, we get $$f(0.4)=-0.0431953192$$ and so a five-game series is about 4.32\% less fair than a three-game series for that value of $p$. In summary, there does appear to be a significant difference between three-game and five-game series for certain values of $p$. The value of $p$ in $[0,1]$ for which $f(p)=0$ is approximately 0.537783; for $p$ less than that, three-game series are fairer (or, put another way, five-game series are fairer for the team without home-field advantage), while for $p$ greater than that, five-game series are fairer. \section{Comparing Five-Game Series and Seven-Game Series}\label{fiveversusseven} We are now ready to examine the question of interest to us, comparing a five-game series and a seven-game series. We have already shown that the probability that the team with home-field advantage will win a five-game series, regardless of format, is $$(3r^2+6r+1)p^3-(9r^2+6r)p^4+6r^2p^5$$ \subsection{Seven-game series} A seven-game series in baseball is played under a 2-3-2 format---the team with home-field advantage plays games one, two, six, and seven at home, and the middle three games on the road. 
There are a total of 35 possible scenarios for victory, so we will not list each separately. However, we will list each scenario lasting four, five, or six games. \begin{tabular}{lc} Scenario & Probability \\ \hline WWww & $p^2(rp)^2$ \\ LWwww & $p(rp)^3(1-p)$ \\ WLwww & $p(rp)^3(1-p)$ \\ WWlww & $p^2(rp)^2(1-rp)$ \\ WWwlw & $p^2(rp)^2(1-rp)$ \\ LLwwwW & $p(rp)^3(1-p)^2$ \\ LWlwwW & $p^2(rp)^2(1-p)(1-rp)$ \\ LWwlwW & $p^2(rp)^2(1-p)(1-rp)$ \\ LWwwlW & $p^2(rp)^2(1-p)(1-rp)$ \\ WLlwwW & $p^2(rp)^2(1-p)(1-rp)$ \\ WLwlwW & $p^2(rp)^2(1-p)(1-rp)$ \\ WLwwlW & $p^2(rp)^2(1-p)(1-rp)$ \\ WWllwW & $p^3(rp)(1-rp)^2$ \\ WWlwlW & $p^3(rp)(1-rp)^2$ \\ WWwllW & $p^3(rp)(1-rp)^2$ \end{tabular} There are a total of 20 scenarios for victory which last the full seven games. Rather than list each one separately, we will just list the various combinations of W, L, w, and l, give the probability of each occurrence, and give the number of ways each scenario occurs. For example, occurrences of the first type include LLlwwWW, LWwlwLW, and WLwwlLW. \begin{tabular}{lcc} Scenario & Probability & Occurrences \\ \hline 2 W, 2 w, 2 L, 1 l & $p^2(rp)^2(1-p)^2(1-rp)$ & 9 \\ 3 W, 1 w, 1 L, 2 l & $p^3(rp)(1-p)(1-rp)^2$ & 9 \\ 1 W, 3 w, 3 L, 0 l & $p(rp)^3(1-p)^3$ & 1 \\ 4 W, 0 w, 0 L, 3 l & $p^4(1-rp)^3$ & 1 \end{tabular} Adding together all of the probabilities for the 35 victory scenarios, we see that the total probability that the team with home-field advantage will win a seven-game series is $$(4r^3+18r^2+12r+1)p^4-(24r^3+48r^2+12r)p^5+(40r^3+30r^2)p^6-20r^3p^7$$ \subsection{Comparing the two} If we take the probability of winning a seven-game series, and subtract the probability of winning a five-game series, we get the function \begin{eqnarray*} s(p) &=& -20r^3p^7+(40r^3+30r^2)p^6-(24r^3+54r^2+12r)p^5 \\ & & \,\, +(4r^3+27r^2+18r+1)p^4-(3r^2+6r+1)p^3 \end{eqnarray*} where $p\in [0,1]$. 
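The seven-game total above can be confirmed by the same brute-force enumeration of complete outcome sequences (the winner of a best-of-seven is unchanged if all seven games are played out). The sample values of $p$ and $r$ below are illustrative.

```python
from itertools import product

r, p = 0.894762228, 0.55
pattern = 'HHRRRHH'   # the 2-3-2 format
total = 0.0
for outcome in product([True, False], repeat=7):
    prob = 1.0
    for site, win in zip(pattern, outcome):
        chance = p if site == 'H' else r * p
        prob *= chance if win else 1.0 - chance
    if sum(outcome) >= 4:   # at least 4 wins out of 7
        total += prob

closed = ((4*r**3 + 18*r**2 + 12*r + 1)*p**4
          - (24*r**3 + 48*r**2 + 12*r)*p**5
          + (40*r**3 + 30*r**2)*p**6
          - 20*r**3*p**7)
assert abs(total - closed) < 1e-12
```

As sanity checks, the closed form reduces to the familiar $35p^4-84p^5+70p^6-20p^7$ at $r=1$, and equals 1 at $p=1$ for every $r$.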
The derivative of this function is \begin{eqnarray*} s'(p) &=& -140r^3p^6 + (240r^3+180r^2)p^5-(120r^3+270r^2+60r)p^4 \\ & & \,\, +(16r^3+108r^2+72r+4)p^3-(9r^2+18r+3)p^2 \end{eqnarray*} Again using the fact that we are taking $r=0.894762228$, the derivative $s'$ is 0 for $$p=0$$ $$p\approx 0.329786090$$ $$p\approx 0.723663130$$ and for a value of $p$ between 1 and 1.05, and a value of $p$ between 1.05 and 1.1. (These last two can be verified using the Intermediate Value Theorem.) Checking the values of $s$ at the critical points and the endpoints, we get $$s(0)=0$$ $$s(0.329786090)\approx -0.038565024$$ $$s(0.723663130)\approx 0.034221072$$ $$s(1)=0$$ So, a seven-game series is at most about 3.42\% fairer than a five-game series, and at worst about 3.86\% less fair (and that occurs for a value of $p$ which is likely too small to occur in practice). Therefore, there is no significant difference between a five-game series and a seven-game series. The value of $p$ in $[0,1]$ for which $s(p)=0$ is approximately 0.533711; for $p$ less than that, five-game series are fairer (i.e., seven-game series are fairer for the team without home-field advantage), while for $p$ greater than that, seven-game series are fairer. \section{Further Questions}\label{furtherquestions} There are a few ways in which this model could be amended. First, instead of finding a fixed value of $r$ for the road multiplier, we could keep $r$ as a variable (with appropriate upper and lower bounds for $r$), and then treat the functions $f$ and $s$ as functions of two variables. Another approach would be to account for morale. In \cite{Re}, S. Reske approaches the problem as May did in \cite{Ma}---that is, with $p$ representing the probability that the better team would win a given game, without regard to home-field advantage. 
However, if the better team has a lead in the series, then its probability of winning the next game would be $p+a$, while if it trails in the series, then its probability of winning the next game would be $p-a$, where $a$ may be either positive or negative. The idea is that, if the team leads the series, its increase in morale (and subsequent decrease in the other team's morale) could actually make it more likely to win the next game, and vice versa if it trails the series. In that case, $a>0$. The case $a<0$ would correspond to what happens if the team leads the series but then gets overconfident, making it less likely to win the next game. With this approach, Reske again shows that there is no significant difference between a five-game series and a seven-game series. This could be easily adapted to account for home-field advantage, with the fixed value of $r$ we used in this paper: if the team with home-field advantage leads the series, and the next game is at home, its probability of winning would be $p+a$, while if the next game were on the road, it would be $r(p+a)$; similarly if the team with home-field advantage trails the series, its probability of winning the next game would be $p-a$ if at home, and $r(p-a)$ if on the road. This would again be a two-variable problem, with variables $p$ and $a$. If we do not require $r$ to be fixed, then it would become a three-variable problem. A final approach could be one of cumulative morale. That is, if the team with home-field advantage leads the series by one game, then its probability of winning the next game would be $p+a$ or $r(p+a)$, if it leads the series by two games, its probability of winning the next game would be $p+2a$ or $r(p+2a)$, and so forth. The idea here would be that, the further ahead the team is, the greater its morale would get (if $a>0$), or the more overconfident it would get (if $a<0$). \end{document}
\begin{document} \begin{abstract} The $q$-semicircular distribution is a probability law that interpolates between the Gaussian law and the semicircular law. There is a combinatorial interpretation of its moments in terms of matchings where $q$ follows the number of crossings, whereas for the free cumulants one has to restrict the enumeration to connected matchings. The purpose of this article is to describe combinatorial properties of the classical cumulants. We show that like the free cumulants, they are obtained by an enumeration of connected matchings, the weight being now an evaluation of the Tutte polynomial of a so-called crossing graph. The case $q=0$ of these cumulants was studied by Lassalle using symmetric functions and hypergeometric series. We show that the underlying combinatorics is explained through the theory of heaps, which is Viennot's geometric interpretation of the Cartier-Foata monoid. This method also gives results for the classical cumulants of the free Poisson law. \end{abstract} \maketitle \tableofcontents \section{Introduction} Let us consider the sequence $\{m_n(q)\}_{n\geq0}$ defined by the generating function \[ \sum_{n\geq 0} m_n(q) z^n = \cfrac{1}{1 - \cfrac{ [1]_q z^2}{1 - \cfrac{ [2]_q z^2}{1 - \ddots }}} \] where $[i]_q=\frac{1-q^i}{1-q}$. For example, $m_0(q)=m_2(q)=1$, $m_4(q)=2+q$, and the odd values are 0. The generating function being a Stieltjes continued fraction, $m_n(q)$ is the $n$th moment of a symmetric probability measure on $\mathbb{R}$ (at least when $0\leq q\leq 1$). An explicit formula for the density $w(x)$ such that $m_n(q)=\int x^n w(x) {\rm d}x$ is given by Szegő~\cite{szego}: \[ w(x) = \begin{cases} \frac 1\pi \sqrt{1-q} \sin\theta \prod\limits_{n=1}^\infty (1-q^n) |1-q^ne^{2i\theta}|^2 & \text{ if } -2\leq x\sqrt{1-q} \leq 2, \\ 0 & \text{otherwise,} \end{cases} \] where $\theta\in[0,\pi]$ is such that $2 \cos \theta = x \sqrt{1-q}$. 
At $q=0$, it is the semicircular distribution with density $(2\pi)^{-1}\sqrt{4-x^2}$ supported on $[-2,2]$, whereas at the limit $q\to 1$ it becomes the Gaussian distribution with density $(2\pi)^{-1/2}e^{-x^2/2}$. This law is therefore known either as the $q$-Gaussian or the $q$-semicircular law. It can be conveniently characterized by its orthogonal polynomials, defined by the relation $xH_n(x|q) = H_{n+1}(x|q) + [n]_q H_{n-1}(x|q)$ together with $H_1(x|q)=x$ and $H_0(x|q)=1$, and called the continuous $q$-Hermite polynomials (but we do not insist on this point of view since the notion of cumulant is not particularly relevant for orthogonal polynomials). The semicircular law is the analogue in free probability of the Gaussian law \cite{hiai,nica}. More generally, the $q$-semicircular measure plays an important role in noncommutative probability theories \cite{anshelevitch,blitvic,bozejko1,bozejko2,leeuwen1,leeuwen2}. This was initiated by Bożejko and Speicher \cite{bozejko1,bozejko2} who used creation and annihilation operators in a twisted Fock space to build generalized Brownian motions. The goal of this article is to examine the combinatorial meaning of the classical cumulants $k_n(q)$ of the $q$-semicircular law (we recall the definition in the next section). The first values lead to the observation that \begin{equation*} \tilde k_{2n}(q) = \frac{ k_{2n}(q) }{ (q-1)^{n-1} } \end{equation*} is a polynomial in $q$ with nonnegative coefficients. For example: \begin{equation*} \tilde k_2(q)=\tilde k_4(q)=1, \qquad \tilde k_6(q)=q+5, \qquad \tilde k_8(q)= q^3+7q^2+28q+56. \end{equation*} We actually show in Theorem~\ref{cumultutte} that this $ \tilde k_{2n}(q)$ can be given a meaning as a generating function of connected matchings, i.e. the same objects that give a combinatorial meaning to the free cumulants of the $q$-semicircular law. 
However, the weight function that we use here on connected matchings is not as simple as in the case of free cumulants: it is given by the value at $(1,q)$ of the Tutte polynomial of a graph attached to each connected matching, called the crossing graph. There are various points where the evaluation of a Tutte polynomial has a combinatorial meaning, in particular $(1,0)$, $(1,1)$ and $(1,2)$. In the first and third cases ($q=0$ and $q=2$), they can be used to give an alternative proof of Theorem~\ref{cumultutte}; these will be provided in Section~\ref{sec:heaps} and Section~\ref{sec:q=2}, respectively. The integers $\tilde k_{2n}(0)$ were recently considered by Lassalle \cite{lassalle}, who defines them as a sequence simply related to the Catalan numbers; they were further studied in \cite{vignat}. Since these are the (classical) cumulants of the semicircular law, a law belonging to the world of free probability, it might seem unnatural to consider them; but on the other hand, the free cumulants of the Gaussian also have numerous properties (see \cite{belinschi}). The interesting feature is that this particular case $q=0$ can be treated via the theory of heaps \cite{cartier,viennot}. As for the case $q=2$, even though the $q$-semicircular law is only defined when $|q|<1$, its moments and cumulants, and the link between them, still exist because \eqref{relmk} can be seen as an identity between formal power series in $z$. The proof for $q=2$ is an application of the exponential formula. \section{Preliminaries} Let us first make precise some terms used in the introduction. 
Besides the moments $\{m_n(q)\}_{n\geq0}$, the $q$-semicircular law can be characterized by its {\it cumulants} $\{k_n(q)\}_{n\geq1}$ formally defined by \begin{equation} \label{relmk} \sum_{n\geq 1} k_n(q) \frac{z^n}{n!} = \log \Bigg( \sum_{n\geq 0} m_n(q) \frac{z^n}{n!} \Bigg), \end{equation} or by its {\it free cumulants} $\{c_n(q)\}_{n\geq1}$ \cite{nica} formally defined by \[ C(zM(z)) = M(z)\quad \text{ where } M(z)=\sum_{n\geq0} m_n(q)z^n,\quad C(z) = 1+\sum_{n\geq1} c_n(q) z^n. \] These relations can be reformulated using set partitions. For any finite set $V$, let $\mathcal{P}(V)$ denote the lattice of set partitions of $V$, and let $\mathcal{P}(n)=\mathcal{P}(\{1,\dots,n\})$. We will denote by $\hat 1$ the maximal element and by $\mu$ the Möbius function of these lattices, without mentioning $V$ explicitly. Although we will not use it, let us mention that $\mu(\pi,\hat 1) = (-1)^{\# \pi -1} (\#\pi -1)!$ where $\#\pi$ is the number of blocks in $\pi$. See \cite[Chapter~3]{stanley} for details. When we have some sequence $(u_n)_{n\geq 1}$, for any $\pi\in\mathcal{P}(V)$ we will use the notation: \[ u_\pi = \prod_{ b \in \pi } u_{\# b}. \] Then the relations between moments and cumulants read: \begin{equation} \label{inversion} m_n(q) = \sum_{ \pi \in \mathcal{P}(n) } k_\pi(q), \qquad k_n(q) = \sum_{ \pi \in \mathcal{P}(n) } m_\pi(q) \mu(\pi,\hat 1). \end{equation} These are equivalent via the Möbius inversion formula and both can be obtained from \eqref{relmk} using Faà di Bruno's formula. When $V\subset\mathbb{N}$, let $\mathcal{NC}(V)\subset\mathcal{P}(V)$ denote the subset of {\it noncrossing partitions}, which form a sublattice with Möbius function $\mu^{NC}$. Then we have \cite{hiai,nica}: \begin{equation} \label{inversionfree} m_n(q) = \sum_{ \pi \in \mathcal{NC}(n) } c_\pi(q), \qquad c_n(q) = \sum_{ \pi \in \mathcal{NC}(n) } m_\pi(q) \mu^{NC}(\pi,\hat 1). 
\end{equation} Equations \eqref{inversion} and \eqref{inversionfree} can be used to compute the first non-zero values: \begin{equation*}\begin{array}{lll} k_2(q)=1,\qquad & k_4(q)=q-1, \qquad & k_6(q) =q^3+3q^2-9q+5, \\[2mm] c_2(q)=1,\qquad & c_4(q)=q, \qquad & c_6(q) =q^3+3q^2. \end{array}\end{equation*} Let $\mathcal{M}(V)\subset\mathcal{P}(V)$ denote the set of {\it matchings}, i.e. set partitions all of whose blocks have size 2. As is customary, a block of $\sigma\in\mathcal{M}(V)$ will be called an {\it arch}. When $V\subset\mathbb{N}$, a {\it crossing} \cite{ismail} of $\sigma\in\mathcal{M}(V)$ is a pair of arches $\{i,j\}$ and $\{k,\ell\}$ such that $i<k<j<\ell$. Let $\cro(\sigma)$ denote the number of crossings of $\sigma\in\mathcal{M}(V)$. Let $\mathcal{N}(V) = \mathcal{M}(V) \cap \mathcal{NC}(V)$ denote the set of {\it noncrossing matchings}, i.e. those such that $\cro(\sigma)=0$. Let also $\mathcal{M}(2n) = \mathcal{M}(\{1,\dots,2n\})$ and $\mathcal{N}(2n) = \mathcal{N}(\{1,\dots,2n\})$. Let $\mathcal{P}^c(n) \subset \mathcal{P}(n)$ denote the set of {\it connected} set partitions, i.e. $\pi$ such that no proper interval of $\{1,\dots,n\}$ is a union of blocks of $\pi$, and let $\mathcal{M}^c(2n) = \mathcal{M}(2n) \cap \mathcal{P}^c(2n)$ denote the set of connected matchings. It is known \cite{ismail} that for any $n\geq0$, the moment $m_{2n}(q)$ counts matchings on $2n$ points according to the number of crossings: \begin{equation} \label{mucro} m_{2n}(q) = \sum_{\sigma\in\mathcal{M}(2n)} q^{\cro(\sigma)}. \end{equation} It was shown by Lehner~\cite{lehner} that \eqref{inversionfree} and \eqref{mucro} give a combinatorial meaning for the free cumulants: \[ c_{2n}(q) = \sum_{\sigma\in\mathcal{M}^c(2n)} q^{\cro(\sigma)}. \] See \cite{belinschi} for various properties of connected matchings in the context of free probability. Let us also mention that both quantities $m_{2n}(q)$ and $c_{2n}(q)$ are considered in an article by Touchard \cite{touchard}. 
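The first values above are easy to reproduce by machine. The following sketch enumerates matchings, computes the moments via \eqref{mucro}, the classical cumulants via the standard recursion $k_n = m_n - \sum_{j=1}^{n-1}\binom{n-1}{j-1}k_j m_{n-j}$ (equivalent to \eqref{relmk}), and the free cumulants via Lehner's formula; polynomials in $q$ are stored as sparse coefficient dictionaries.

```python
from itertools import combinations
from math import comb

def matchings(points):
    """All perfect matchings of a tuple of points (none for odd sizes)."""
    if not points:
        yield ()
        return
    first, rest = points[0], points[1:]
    for i in range(len(rest)):
        for m in matchings(rest[:i] + rest[i+1:]):
            yield ((first, rest[i]),) + m

def crossings(m):
    return sum(1 for a, b in combinations(m, 2)
               if a[0] < b[0] < a[1] < b[1] or b[0] < a[0] < b[1] < a[1])

def is_connected(m):
    """Connectivity of the matching, i.e. of its crossing graph."""
    if not m:
        return False
    seen, stack = {m[0]}, [m[0]]
    while stack:
        a = stack.pop()
        for b in m:
            if b not in seen and (a[0] < b[0] < a[1] < b[1]
                                  or b[0] < a[0] < b[1] < a[1]):
                seen.add(b)
                stack.append(b)
    return len(seen) == len(m)

def padd(a, b, c=1):     # a += c*b, polynomials as {degree: coefficient}
    for d, v in b.items():
        a[d] = a.get(d, 0) + c * v
        if a[d] == 0:
            del a[d]
    return a

def pmul(a, b):
    out = {}
    for d1, v1 in a.items():
        for d2, v2 in b.items():
            out[d1 + d2] = out.get(d1 + d2, 0) + v1 * v2
    return out

N = 6
m, c = {0: {0: 1}}, {}
for n in range(1, N + 1):
    m[n], c[n] = {}, {}
    for sigma in matchings(tuple(range(n))):
        padd(m[n], {crossings(sigma): 1})
        if is_connected(sigma):
            padd(c[n], {crossings(sigma): 1})
k = {}
for n in range(1, N + 1):
    k[n] = dict(m[n])
    for j in range(1, n):
        padd(k[n], pmul(k[j], m[n - j]), -comb(n - 1, j - 1))

assert m[4] == {0: 2, 1: 1}                      # m_4(q) = 2 + q
assert k[4] == {1: 1, 0: -1}                     # k_4(q) = q - 1
assert k[6] == {3: 1, 2: 3, 1: -9, 0: 5}         # k_6(q) = q^3 + 3q^2 - 9q + 5
assert c[4] == {1: 1} and c[6] == {3: 1, 2: 3}   # c_4 = q, c_6 = q^3 + 3q^2
# k_6 = (q-1)^2 (q+5), matching tilde-k_6(q) = q + 5 from the introduction.
assert k[6] == pmul(pmul({1: 1, 0: -1}, {1: 1, 0: -1}), {1: 1, 0: 5})
```

Connectivity is tested on the crossing graph defined in the next section, which is equivalent to connectedness in the interval sense used here.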
\section{\texorpdfstring{A combinatorial formula for $k_n(q)$}{A combinatorial formula for kn(q)} } We will use the Möbius inversion formula in Equation~\eqref{inversion}, but we first need to consider the combinatorial meaning of the products $m_{\pi}(q)$. \begin{lem} \label{lemmpi} For any $\sigma\in\mathcal{M}(2n)$ and $\pi\in\mathcal{P}(2n)$ such that $\sigma \leq \pi$, let $\cro(\sigma,\pi)$ be the number of crossings $(\{i,j\},\{k,\ell\})$ of $\sigma$ such that $\{i,j,k,\ell\}\subset b$ for some $b\in\pi$. Then we have: \begin{equation} \label{mpi} m_\pi(q) = \sum_{ \substack{ \sigma \in\mathcal{M}(2n) \\ \sigma \leq \pi} } q^{\cro(\sigma,\pi)}. \end{equation} \end{lem} \begin{proof} Denoting $\sigma|_b = \{ x\in\sigma \; : \; x\subset b \}$, the map $\sigma \mapsto (\sigma|_b)_{b\in\pi}$ is a natural bijection between the set $\{\sigma \in\mathcal{M}(2n) \; : \; \sigma\leq\pi \}$ and the product $\Pi_{b\in\pi} \mathcal{M}(b) $, in such a way that $\cro(\sigma,\pi) = \sum_{b\in\pi} \cro (\sigma|_b)$. This allows us to factorize the right-hand side in \eqref{mpi} and obtain $m_{\pi}(q)$. \end{proof} From Equation~\eqref{inversion} and the previous lemma, we have: \begin{equation} \label{kW} \begin{split} k_{2n}(q) &= \sum_{ \pi \in \mathcal{P}(2n) } m_\pi(q) \mu(\pi,\hat 1) = \sum_{ \pi \in \mathcal{P}(2n) } \sum_{\substack{ \sigma \in \mathcal{M}(2n) \\ \sigma \leq \pi}} q^{\cro(\sigma,\pi)} \mu(\pi,\hat 1) \\ &= \sum_{ \sigma \in \mathcal{M}(2n) } \sum_{ \substack{ \pi\in\mathcal{P}(2n) \\ \pi \geq \sigma}} q^{\cro(\sigma,\pi)} \mu(\pi,\hat 1) = \sum_{ \sigma \in \mathcal{M}(2n) } W(\sigma), \end{split}\end{equation} where for each $\sigma\in\mathcal{M}(2n)$ we have introduced: \begin{equation} \label{W1} W(\sigma) = \sum_{\substack{ \pi\in\mathcal{P}(2n) \\ \pi \geq \sigma}} q^{\cro(\sigma,\pi)} \mu(\pi,\hat 1). 
\end{equation} A key point is to note that $W(\sigma)$ only depends on how the arches of $\sigma$ cross with respect to each other, which can be encoded in a graph. This leads to the following: \begin{defn} Let $\sigma\in\mathcal{M}(2n)$. The {\it crossing graph} $G(\sigma)=(V,E)$ is as follows. The vertex set $V$ contains the arches of $\sigma$ (i.e. $V=\sigma$), and the edge set $E$ contains the crossings of $\sigma$ (i.e. there is an edge between the vertices $\{i,j\}$ and $\{k,\ell\}$ if and only if $i<k<j<\ell$). \end{defn} See Figure~\ref{crogra} for an example. Note that the graph $G(\sigma)$ is connected if and only if $\sigma$ is a connected matching in the sense of the previous section. \begin{figure} \caption{A matching $\sigma$ and its crossing graph $G(\sigma)$. } \label{crogra} \end{figure} \begin{lem} \label{Wgraph} Let $\sigma\in\mathcal{M}(2n)$ and $G(\sigma)=(V,E)$ be its crossing graph. If $\pi\in\mathcal{P}(V)$, let $i(E,\pi)$ be the number of elements in the edge set $E$ such that both endpoints are in the same block of $\pi$. Then we have: \begin{equation} \label{W2} W(\sigma) = \sum_{\pi \in \mathcal{P}(V)} q^{i(E,\pi)} \mu(\pi,\hat 1). \end{equation} \end{lem} \begin{proof} There is a natural bijection between the interval $[\sigma,\hat 1]$ in $\mathcal{P}(2n)$ and the set $\mathcal{P}(V)$, in such a way that $\cro(\sigma,\pi) = i(E,\pi)$. Hence Equation~\eqref{W2} is just a rewriting of \eqref{W1} in terms of the graph $G(\sigma)$. \end{proof} Now we can use Proposition~\ref{proptutte} from the next section. It allows us to recognize $(q-1)^{-n+1}W(\sigma)$ as an evaluation of the Tutte polynomial $T_{G(\sigma)}$, except that it is 0 when the graph is not connected. Gathering Equations~\eqref{kW}, \eqref{W2}, and Proposition~\ref{proptutte} from the next section, we have proved: \begin{thm} \label{cumultutte} For any $n\geq 1$, \[ \tilde k_{2n}(q) = \sum_{\sigma \in \mathcal{M}^c(2n)} T_{G(\sigma)} (1,q). 
\] In particular $\tilde k_{2n}(q)$ is a polynomial in $q$ with nonnegative coefficients. \end{thm} \section{The Tutte polynomial of a connected graph} For any graph $G=(V,E)$, let $T_G(x,y)$ denote its Tutte polynomial; we give a short definition here and refer to \cite[Chapter~9]{aigner} for details. This graph invariant can be computed recursively via edge deletion and edge contraction. Let $e\in E$, let $G\backslash e = (V,E\backslash e)$ and $G/e = ( V/e , E\backslash e)$ where $V/e$ is the quotient set where both endpoints of the edge $e$ are identified. Then the recursion is: \begin{equation} \label{recurtutte} T_G(x,y) = \begin{cases} xT_{G/e}(x,y) & \text{if $e$ is a bridge,} \\ yT_{G\backslash e}(x,y) & \text{if $e$ is a loop,} \\ T_{G/e}(x,y)+T_{G\backslash e}(x,y) & \text{otherwise.} \end{cases} \end{equation} The initial case is that $T_G(x,y)=1$ if the graph $G$ has no edge. Here, a {\it bridge} is an edge $e$ such that $G\backslash e$ has one more connected component than $G$, and a {\it loop} is an edge whose two endpoints coincide. \begin{prop} \label{proptutte} Let $G=(V,E)$ be a graph (possibly with multiple edges and loops). Let $n=\#V$. With $i(E,\pi)$ defined as in Lemma~\ref{Wgraph}, we have: \begin{equation} \label{tutte} \frac{1}{(q-1)^{n-1}} \sum_{\pi\in\mathcal{P}(V)} q^{i(E,\pi)} \mu(\pi,\hat 1) = \begin{cases} T_G(1,q) & \hbox{ if $G$ is connected,} \\ 0 & \hbox{otherwise.} \end{cases} \end{equation} \end{prop} \begin{proof} Denote by $U_G$ the left-hand side in \eqref{tutte} and let $e$ be an edge of $G$. If $e\in E$ is a loop, it is clear that $i(E\backslash e,\pi)=i(E,\pi)-1$, so $U_G = qU_{G\backslash e}$. Then suppose $e$ is not a loop, and let $x$ and $y$ be its endpoints. We have: \[ U_G - U_{G\backslash e} = \frac{1}{(q-1)^{n-1}} \sum_{\pi\in\mathcal{P}(V)} \Big(q^{i(E,\pi)} - q^{i(E\backslash e,\pi)} \Big) \mu(\pi,\hat 1). \] In this sum, all terms where $x$ and $y$ are in different blocks of $\pi$ vanish.
So we can keep only $\pi$ such that $x$ and $y$ are in the same block, and these can be identified with elements of $\mathcal{P}(V/e)$ and satisfy $i(E\backslash e,\pi)=i(E,\pi)-1$. We obtain: \[ U_G - U_{G\backslash e} = \frac{1}{(q-1)^{n-2}} \sum_{\pi\in\mathcal{P}(V/e)} q^{i(E\backslash e,\pi)} \mu(\pi,\hat 1) = U_{G/e}. \] This is a recurrence relation which determines $U_G$, and it remains to describe the initial case. So, suppose the graph $G$ has $n$ vertices and no edge, i.e. $G=(V,\emptyset)$. We have $i(\emptyset,\pi)=0$. By the definition of the Möbius function, we have: \[ \sum_{\pi\in\mathcal{P}(V)} \mu(\pi,\hat 1) = \delta_{n1}, \] hence $U_G=\delta_{n1}$ as well in this case. We thus have a recurrence relation for $U_G$, and it remains to show that the right-hand side of \eqref{tutte} satisfies the same relation. This holds because, when $x=1$ and we consider the variant of the Tutte polynomial which is 0 for a non-connected graph, the first case of \eqref{recurtutte} becomes a particular instance of the third case. \end{proof} \begin{rem} The proposition of this section can also be derived from results of Burman and Shapiro \cite{burman}, at least in the case where $G$ is connected. More precisely, in the light of \cite[Theorem~9]{burman} we can recognize the sum in the left-hand side of \eqref{tutte} as the {\it external activity polynomial} $C_G(w)$, where all edge variables are specialized to $q-1$. It is known to be related to $T_G(1,q)$, see for example \cite[Section 2.5]{sokal}.
\end{rem} \section{\texorpdfstring{The case $q=0$, Lassalle's sequence and heaps}{The case q=0, Lassalle's sequence and heaps}} \label{sec:heaps} In the case $q=0$, the substitution $z \to iz$ recasts Equation~\eqref{relmk} as \begin{equation} \label{relmk2} - \log\bigg( \sum_{n\geq 0} (-1)^n C_n \frac{z^{2n}}{(2n)!} \bigg) = \sum_{n\geq 1} \tilde k_{2n}(0) \frac{z^{2n}}{(2n)!}, \end{equation} where $C_n = \frac{1}{n+1}\tbinom {2n}n $ is the $n$th Catalan number, known to be the cardinality of $\mathcal{N}(2n)$, see \cite{stanley}. The integer sequence $\{\tilde k_{2n}(0)\}_{n\geq1}=(1,1,5,56,\dots)$ was previously defined by Lassalle \cite{lassalle} via an equation equivalent to \eqref{relmk2}, and Theorem 1 from \cite{lassalle} states that the integers $\tilde k_{2n}(0)$ are positive and increasing (stronger results are also true, see \cite{lassalle,vignat}). The goal of this section is to give a meaning to \eqref{relmk2} in the context of the theory of heaps \cite{viennot} \cite[Appendix 3]{cartier}. This will give an alternative proof of Theorem~\ref{cumultutte} for the case $q=0$, based on a classical result on the evaluation $T_G(1,0)$ of a Tutte polynomial in terms of some orientations of the graph $G$. \begin{defn} A graph $G=(V,E)$ is {\it rooted} when it has a distinguished vertex $ r \in V$, called the {\it root}. An orientation of $G$ is {\it root-connected} if for any vertex $v\in V$ there exists a directed path from the root to $v$. \end{defn} \begin{prop}[Greene \& Zaslavsky \cite{greene}] \label{tutte10} If $G$ is a rooted and connected graph, $T_G(1,0)$ is the number of its root-connected acyclic orientations. \end{prop} The notion of heap was introduced by Viennot \cite{viennot} as a geometric interpretation of elements in the Cartier-Foata monoid \cite{cartier}, and has various applications in enumeration. We refer to \cite[Appendix 3]{cartier} for a modern presentation of this subject (and a comprehensive bibliography).
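The numerical content of Equation~\eqref{relmk2} is easy to check by machine. The following Python sketch (illustrative code, not part of the paper) truncates the alternating Catalan series, takes $-\log$ of the truncation, and recovers the first terms $1,1,5,56$ of Lassalle's sequence.

```python
from fractions import Fraction
from math import comb, factorial

# Recover Lassalle's sequence k~_{2n}(0) = 1, 1, 5, 56, ... as (2n)! times
# the coefficients of -log( sum_n (-1)^n C_n z^{2n}/(2n)! ).
N = 4  # number of terms to extract

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# a[k] = coefficient of z^{2k} in the alternating Catalan series
a = [Fraction((-1) ** k * catalan(k), factorial(2 * k)) for k in range(N + 1)]

def mul(p, q):  # truncated product of coefficient lists (in the variable z^2)
    return [sum(p[i] * q[k - i] for i in range(k + 1)) for k in range(N + 1)]

# -log(1 + u) = sum_{j >= 1} (-1)^j u^j / j, with u the series minus 1
u = [Fraction(0)] + a[1:]
b = [Fraction(0)] * (N + 1)
power = [Fraction(1)] + [Fraction(0)] * N  # running power u^j
for j in range(1, N + 1):
    power = mul(power, u)
    for k in range(N + 1):
        b[k] += Fraction((-1) ** j, j) * power[k]

lassalle = [b[n] * factorial(2 * n) for n in range(1, N + 1)]
print(lassalle)
```

Exact rational arithmetic is used so that the four extracted coefficients come out as the integers $1, 1, 5, 56$.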
Let $M$ be the monoid built on the generators $(x_{ij})_{1\leq i < j}$ subject to the relations $x_{ij}x_{k\ell} = x_{k\ell} x_{ij} $ if $i<j<k<\ell$ or $i<k<\ell<j$. We call it the Cartier-Foata monoid (but in other contexts it could be called a partially commutative free monoid or a trace monoid as well). Following \cite{viennot}, we call an element of $M$ a {\it heap}. Any heap can be represented as a ``pile'' of segments, as in the left part of Figure~\ref{heapposet} (this is reminiscent of \cite{bousquet}). This pile is described inductively: the generator $x_{ij}$ corresponds to a single segment whose extremities have abscissas $i$ and $j$, and the product $m_1m_2$ is obtained by placing the pile of segments corresponding to $m_2$ above the one corresponding to $m_1$. In terms of segments, the relation $x_{ij}x_{k\ell} = x_{k\ell} x_{ij} $ if $i<j<k<\ell$ has a geometric interpretation: segments are allowed to move vertically as long as they do not intersect (this is the case of $x_{34}$ and $x_{67}$ in Figure~\ref{heapposet}). Similarly, the other relation $x_{ij}x_{k\ell} = x_{k\ell} x_{ij} $ if $i<k<\ell<j$ can be treated by thinking of each segment as the projection of an arch as in the central part of Figure~\ref{heapposet}. In this three-dimensional representation, all the commutation relations are translated in terms of arches that are allowed to move along the dotted lines as long as they do not intersect. A heap can also be represented as a poset. Consider two segments $s_1$ and $s_2$ in a pile of segments; the order relation is defined by saying that $s_1<s_2$ if $s_1$ is always below $s_2$, after any movement of the arches (along the dotted lines and as long as they do not intersect, as above). This way, a heap can be identified with a poset where each element is labeled by a generator of $M$, and two elements whose labels do not commute are comparable.
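As a small illustration (assumed helper code, not from the paper), this dependence order can be computed directly from a word in the generators: two generators commute exactly when their segments are disjoint or strictly nested, and in a non-commuting pair the earlier letter lies below the later one.

```python
# Dependence order of a heap given as a word in the generators x_{ij}.
# Generators x_{ij}, x_{kl} commute iff the segments are disjoint or
# strictly nested; otherwise the earlier letter is below the later one.

def commutes(s, t):
    (i, j), (k, l) = sorted([s, t])
    return j < k or l < j  # disjoint, or (k,l) strictly nested in (i,j)

def below(word):
    """Set of index pairs (a, b) with letter a strictly below letter b."""
    n = len(word)
    rel = {(a, b) for a in range(n) for b in range(a + 1, n)
           if not commutes(word[a], word[b])}
    changed = True
    while changed:  # naive transitive closure
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

def maximal_letters(word):
    rel = below(word)
    return [word[a] for a in range(len(word))
            if not any(x == a for (x, _) in rel)]

# The heap of Figure "heapposet": m = x46 x67 x34 x16 x47
m = [(4, 6), (6, 7), (3, 4), (1, 6), (4, 7)]
print(maximal_letters(m))
```

Running this on the word of Figure~\ref{heapposet} reports a single maximal segment, $(4,7)$, so that heap is a pyramid in the sense defined below.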
See the right part of Figure~\ref{heapposet} for an example and \cite[Appendix 3]{cartier} for details. \begin{figure} \caption{The heap $m=x_{46}x_{67}x_{34}x_{16}x_{47}$ as a pile of segments and the Hasse diagram of the associated poset. } \label{heapposet} \end{figure} \begin{defn} For any heap $m\in M$, let $|m|$ denote its length as a product of generators. Moreover, $m\in M$ is called a {\it trivial heap} if it is a product of pairwise commuting generators. Let $M^\circ\subset M $ denote the set of trivial heaps. \end{defn} Let $\mathbb{Z}[[M]]$ denote the ring of formal power series in $M$, i.e. all formal sums $\sum_{m\in M} \alpha_m m$ with multiplication induced by that of $M$. A fundamental result of Cartier and Foata \cite{cartier} is the following identity in $\mathbb{Z}[[M]]$: \begin{equation} \label{cartierfoata} \bigg( \sum_{m \in M^\circ } (-1)^{|m|} m \bigg)^{-1} = \sum_{m\in M} m. \end{equation} Note that $M^\circ$ contains the neutral element of $M$, so that the sum in the left-hand side is invertible, being a formal power series with constant term equal to 1. \begin{defn} \label{defpyr} An element $m\in M$ is called a {\it pyramid} if the associated poset has a unique maximal element. Let $P\subset M$ denote the subset of pyramids. \end{defn} A fundamental result of the theory of heaps links the generating function of pyramids with that of all heaps \cite{cartier,viennot}. It essentially relies on the exponential formula for labeled combinatorial objects, and reads: \begin{equation} \label{logpyr1} \log \bigg( \sum_{m\in M} m \bigg) =_{\text{comm}} \sum_{p \in P} \frac{1}{|p|} p, \end{equation} where the sign $=_{\text{comm}}$ means that the equality holds in any commutative quotient of $\mathbb{Z}[[M]]$. Combining \eqref{cartierfoata} and \eqref{logpyr1}, we obtain: \begin{equation} \label{logpyr2} - \log \bigg( \sum_{m\in M^\circ} (-1)^{|m|} m \bigg) =_{\text{comm}} \sum_{p \in P} \frac{1}{|p|} p.
\end{equation} Now, let us examine how to apply this general equality to the present case. The following lemma is a direct consequence of the definitions, and allows trivial heaps to be identified with noncrossing matchings. \begin{lem} \label{Phi} The map \begin{equation} \label{defphi} \Phi : x_{i_1j_1} \cdots x_{i_nj_n} \mapsto \{\{i_1,j_1\},\dots,\{i_n,j_n\}\} \end{equation} defines a bijection between the set of trivial heaps $M^\circ$ and the disjoint union of the $\mathcal{N}(V)$, where $V$ runs through the finite subsets (of even cardinality) of $\mathbb{N}_{>0}$. \end{lem} For a general heap $m\in M$, we can still define $\Phi(m)$ via \eqref{defphi} but it may not be a matching, for example $\Phi(x_{1,2}x_{2,3}) = \{\{1,2\},\{2,3\}\}$. Let us first consider the case of $m\in M$ such that $\Phi(m)$ is indeed a matching. \begin{lem} \label{ac_or} Let $\sigma\in\mathcal{M}(V)$ for some $V\subset \mathbb{N}_{>0}$. Then the heaps $m\in M$ such that $\Phi(m)=\sigma$ are in bijection with acyclic orientations of $G(\sigma)$. Thus, such a heap $m\in M$ can be identified with a pair $(\sigma,r)$ where $r$ is an acyclic orientation of the graph $G(\sigma)$. \end{lem} \begin{proof} An acyclic orientation $r$ on $G(\sigma)$ defines a partial order on $\sigma$ by saying that two arches $x$ and $y$ satisfy $x<y$ if there is a directed path from $y$ to $x$. In this partial order, two crossing arches are always comparable since they are adjacent in $G(\sigma)$. We recover the description of heaps in terms of posets, as described above, so each pair $(\sigma,r)$ corresponds to a heap $m\in M$ with $\Phi(m)=\sigma$. \end{proof} To treat the case of $m\in M$ such that $\Phi(m)$ is not a matching, such as $x_{12}x_{23}$, we are led to introduce a set of commuting variables $(a_i)_{ i \geq 1}$ such that $a_i^2=0$, and consider the specialization $x_{ij}\mapsto a_ia_j$, which defines a morphism of algebras $\omega : \mathbb{Z}[[M]] \to \mathbb{Z}[[a_1,a_2,\dots]] $.
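Lemma~\ref{ac_or} can be checked on small examples. In the sketch below (hypothetical code), heaps with $\Phi(m)=\sigma$ are counted as commutation classes of words in the arches of $\sigma$, where two arches commute if and only if they do not cross (arches of a matching never share an endpoint), and the result is compared with a brute-force count of acyclic orientations of $G(\sigma)$.

```python
from itertools import combinations, permutations, product

def crosses(a, b):
    (i, j), (k, l) = sorted([a, b])
    return i < k < j < l

def equivalent(w1, w2):
    # search through adjacent swaps of commuting (non-crossing) letters
    seen, stack = {tuple(w1)}, [tuple(w1)]
    while stack:
        w = list(stack.pop())
        if w == w2:
            return True
        for a in range(len(w) - 1):
            if not crosses(w[a], w[a + 1]):
                v = w[:]
                v[a], v[a + 1] = v[a + 1], v[a]
                if tuple(v) not in seen:
                    seen.add(tuple(v))
                    stack.append(tuple(v))
    return False

def heap_count(sigma):
    """Commutation classes of words using each arch of sigma exactly once."""
    classes = []
    for w in permutations(sigma):
        if not any(equivalent(list(w), c) for c in classes):
            classes.append(list(w))
    return len(classes)

def is_acyclic(adj):  # Kahn's algorithm
    indeg = {v: 0 for v in adj}
    for v in adj:
        for w in adj[v]:
            indeg[w] += 1
    queue = [v for v in adj if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for w in adj[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(adj)

def acyclic_orientation_count(sigma):
    edges = [e for e in combinations(sorted(sigma), 2) if crosses(*e)]
    count = 0
    for choice in product([0, 1], repeat=len(edges)):
        adj = {v: [] for v in sigma}
        for (u, v), c in zip(edges, choice):
            (adj[u] if c else adj[v]).append(v if c else u)
        if is_acyclic(adj):
            count += 1
    return count

sigma = [(1, 3), (2, 5), (4, 6)]
print(heap_count(sigma), acyclic_orientation_count(sigma))  # both equal 4
```

For the matching $\{\{1,3\},\{2,5\},\{4,6\}\}$, whose crossing graph is a path on three vertices, both counts equal $4$, and for the "triangle" matching $\{\{1,4\},\{2,5\},\{3,6\}\}$ both equal $6$, as the lemma predicts.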
This way, for any $m\in M$ we have either $\omega(m)=0$, or $\Phi(m) \in \mathcal{M}(V)$ for some $V\subset \mathbb{N}_{>0}$. Let $m\in M$ be such that $\omega(m)\neq0$. As seen in Lemma~\ref{ac_or}, it can be identified with the pair $(\sigma,r)$ where $\sigma=\Phi(m)$, and $r$ is an acyclic orientation of $G(\sigma)$. Then the condition defining pyramids is easily translated in terms of $(\sigma,r)$: indeed, we have $m\in P$ if and only if the acyclic orientation $r$ has a unique source (where a {\it source} is a vertex having no incoming arrows). Under the specialization $\omega$, the generating function of trivial heaps is: \begin{equation} \label{omega1} \omega\bigg( \sum_{m \in M^\circ } (-1)^{|m|} m \bigg) = \sum_{n\geq 0} (-1)^n C_n e_{2n}, \end{equation} where $e_{2n}$ is the $2n$th elementary symmetric function in the $a_i$'s. Indeed, let $V\subset \mathbb{N}_{>0}$ with $\# V = 2n$; then the coefficient of $\prod_{i\in V} a_i $ in the left-hand side of \eqref{omega1} is $(-1)^n \# \mathcal{N}(V)= (-1)^n C_n$, as can be seen using Lemma~\ref{Phi}. In particular, it only depends on $n$, so that this generating function can be expressed in terms of the $e_{2n}$. Moreover, since the variables $a_i$ have vanishing squares, their elementary symmetric functions satisfy \[ e_{2n} = \frac{1}{(2n)!} e_1^{2n}, \] so that the right-hand side of \eqref{omega1} is actually the exponential generating function of the Catalan numbers (evaluated at $e_1$). It remains to understand the meaning of taking the logarithm of the left-hand side of \eqref{omega1} using pyramids and Equation~\eqref{logpyr2}. Note that the relation $=_{\text{comm}}$ becomes a true equality after the specialization $x_{ij}\mapsto a_ia_j$. So taking the image of \eqref{logpyr2} under $\omega$ and using \eqref{omega1}, this gives \[ - \log\bigg( \sum_{n \geq 0} (-1)^n C_n e_{2n} \bigg) = \sum_{p\in P} \frac{1}{|p|} \omega(p).
\] The argument used to obtain \eqref{omega1} shows as well that the right-hand side of the previous equation is $\sum_{n\geq 1} \frac{x_n}{n} e_{2n}$ where $x_n=\#\{ p\in P \; : \; \omega(p)=a_1 \cdots a_{2n} \}$. So we have \[ - \log\bigg( \sum_{n \geq 0} (-1)^n C_n e_{2n} \bigg) = \sum_{n \geq 1} \frac{x_n}{n} e_{2n}, \] and comparing this with \eqref{relmk2}, we obtain $ \tilde k_{2n} (0) = \frac {x_n}{n}$. Clearly, a graph with an acyclic orientation always has a source, and it has a unique source only when it is root-connected (for an appropriate root, viz. the source). So a pyramid $p$ such that $\omega(p)\neq0$ can be identified with a pair $(\sigma,r)$ where $r$ is a root-connected acyclic orientation of $G(\sigma)$. Then using Proposition~\ref{tutte10}, it follows that \[ x_n = n \sum_{\sigma \in \mathcal{M}^c(2n) } T_{G(\sigma)}(1,0). \] Here, the factor $n$ in the right-hand side accounts for the $n$ possible choices of the source in each graph $G(\sigma)$. Finally, we obtain \begin{equation} \label{cumultutte0} \tilde k _{2n}(0) = \sum_{\sigma \in \mathcal{M}^c(2n) } T_{G(\sigma)} (1,0), \end{equation} i.e. we have proved the particular case $q=0$ of Theorem~\ref{cumultutte}. Let us state the result again in an equivalent form. We can consider that if $\sigma\in\mathcal{M}(2n)$, the graph $G(\sigma)$ has a canonical root, which is the arch containing 1. Then, Equation \eqref{cumultutte0} gives a combinatorial model for the integers $\tilde k_{2n}(0)$: \begin{thm} The integer $\tilde k_{2n}(0)$ counts pairs $(\sigma,r)$ where $\sigma\in\mathcal{M}^c(2n)$, and $r$ is an acyclic orientation of $G(\sigma)$ whose unique source is the arch of $\sigma$ containing 1. \end{thm} From this, it is possible to give a combinatorial proof that the integers $\tilde k_{2n}(0)$ are increasing, as suggested by Lassalle \cite{lassalle}, who gave an algebraic proof.
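The theorem is easy to test for small $n$ by brute force (illustrative code, not from the paper): list the matchings of $\{1,\dots,2n\}$ with connected crossing graph, then count the acyclic orientations whose unique source is the arch containing 1.

```python
from itertools import combinations, product

def crosses(a, b):
    (i, j), (k, l) = sorted([a, b])
    return i < k < j < l

def matchings(points):
    if not points:
        yield []
        return
    x, rest = points[0], points[1:]
    for idx in range(len(rest)):
        for m in matchings(rest[:idx] + rest[idx + 1:]):
            yield [(x, rest[idx])] + m

def is_acyclic(adj):  # Kahn's algorithm
    indeg = {v: 0 for v in adj}
    for v in adj:
        for w in adj[v]:
            indeg[w] += 1
    queue = [v for v in adj if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for w in adj[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(adj)

def lassalle_term(n):
    total = 0
    for sigma in matchings(list(range(1, 2 * n + 1))):
        edges = [e for e in combinations(sigma, 2) if crosses(*e)]
        # keep only matchings whose crossing graph is connected
        comp = {v: frozenset([v]) for v in sigma}
        for u, v in edges:
            merged = comp[u] | comp[v]
            for w in merged:
                comp[w] = merged
        if len(comp[sigma[0]]) != len(sigma):
            continue
        root = next(a for a in sigma if 1 in a)
        for choice in product([0, 1], repeat=len(edges)):
            adj = {v: [] for v in sigma}
            indeg = {v: 0 for v in sigma}
            for (u, v), c in zip(edges, choice):
                s, t = (u, v) if c else (v, u)
                adj[s].append(t)
                indeg[t] += 1
            sources = [v for v in sigma if indeg[v] == 0]
            if sources == [root] and is_acyclic(adj):
                total += 1
    return total

print([lassalle_term(n) for n in (1, 2, 3)])  # [1, 1, 5]
```

The first three values $1, 1, 5$ agree with Lassalle's sequence, as the theorem requires.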
Indeed, we can check that pairs $(\sigma,r)$ where $\{1,3\}$ is an arch of $\sigma$ are in bijection with the same objects but of size one less, hence $\tilde k_{2n}(0) \leq \tilde k_{2n+2}(0)$. Before ending this section, note that the left-hand side of \eqref{relmk2} is $-\log( \frac 1z J_1(2z))$ where $J_1$ is the Bessel function of order 1. There are quite a few other cases where the combinatorics of Bessel functions is related to the theory of heaps, see the articles of Fédou \cite{fedou1,fedou2} and of Bousquet-Mélou and Viennot \cite{bousquet}. \section{\texorpdfstring{The case $q=2$, the exponential formula}{The case q=2, the exponential formula}} \label{sec:q=2} The specialization at $(1,2)$ of a Tutte polynomial has combinatorial significance in terms of connected spanning subgraphs (see \cite[Chapter 9]{aigner}), so it is natural to consider the case $q=2$ of Theorem~\ref{cumultutte}. This case is special because the factor $(q-1)^{n-1}$ disappears, so that $\tilde k_{2n}(2) = k_{2n}(2)$. We can then interpret the logarithm in the sense of combinatorial species, by showing that $\tilde k_{2n}(2)$ counts some {\it primitive} objects and $m_{2n}(2)$ counts {\it assemblies} of those, just like permutations are formed by assembling cycles (this is the exponential formula for labeled combinatorial objects, see \cite[Chapter 3]{aigner}). What we obtain is a second, more direct proof of Theorem~\ref{cumultutte}, based on an interpretation of $T_G(1,2)$ as follows. \begin{prop}[Gioan \cite{gioan}] \label{propgioan} If $G$ is a rooted and connected graph, $T_G(1,2)$ is the number of its root-connected orientations. \end{prop} This differs from the more traditional interpretation of $T_G(1,2)$ in terms of connected spanning subgraphs mentioned above, but it is what naturally appears in this context. \begin{defn} Let $\mathcal{M}^+(2n)$ be the set of pairs $(\sigma,r)$ where $\sigma\in\mathcal{M}(2n)$ and $r$ is an orientation of the graph $G(\sigma)$.
Such a pair is called an {\it augmented matching}, and is depicted with the convention that the arch $\{i,j\}$ lies above the arch $\{k,\ell\}$ if there is an oriented edge $\{i,j\} \rightarrow \{k,\ell\}$, and behind it if there is an oriented edge $\{k,\ell\} \rightarrow \{i,j\}$. \end{defn} See Figure~\ref{aug} for an example. Clearly, $\#\mathcal{M}^+(2n) = m_{2n}(2)$. Indeed, each graph $G(\sigma)=(V,E)$ has $2^{\# E}$ orientations, and $\# E= \cro(\sigma)$, so this follows from \eqref{mucro}. \begin{figure} \caption{An augmented matching $(\sigma,r)$ and the corresponding orientation of $G(\sigma)$. } \label{aug} \end{figure} Notice that if there is no directed cycle in the oriented graph $(G(\sigma),r)$, the augmented matching $(\sigma,r)$ can be identified with a heap $m\in M$ as defined in the previous section. The one in Figure~\ref{aug} would be $x_{3,5}x_{4,11}x_{10,12}x_{1,6}x_{7,9}x_{2,8}$. Actually, the application of the exponential formula in the present section is quite reminiscent of the link between heaps and pyramids seen in the previous section. \begin{defn} Recall that each graph $G(\sigma)$ is rooted with the convention that the root is the arch containing 1. Let $\mathcal{I}(2n) \subset \mathcal{M}^+(2n)$ be the set of augmented matchings $(\sigma,r)$ such that $\sigma$ is connected and $r$ is a root-connected orientation of $G(\sigma)$. The elements of $\mathcal{I}(2n)$ are called {\it primitive} augmented matchings. For any $V\subset \mathbb{N}_{>0}$ with $\#V=2n$, we also define the set $\mathcal{I}(V)$, with the same combinatorial description as $\mathcal{I}(2n)$ except that matchings are based on the set $V$ instead of $\{1,\dots,2n\}$. \end{defn} Using Proposition~\ref{propgioan}, we have \[ \# \mathcal{I}(2n) = \sum_{\sigma\in\mathcal{M}^c(2n)} T_{G(\sigma)}(1,2), \] so that the particular case $q=2$ of Theorem~\ref{cumultutte} is the equality $\# \mathcal{I}(2n) = k_{2n}(2)$.
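This equality can also be verified numerically for small $n$ (a sketch, using the identity $m_{2n}(2)=\sum_\sigma 2^{\cro(\sigma)}$ noted above): compute the classical cumulants of the moment sequence by the moment-cumulant recursion, and compare with a direct count of root-connected orientations.

```python
from itertools import combinations, product
from math import comb

def crosses(a, b):
    (i, j), (k, l) = sorted([a, b])
    return i < k < j < l

def matchings(points):
    if not points:
        yield []
        return
    x, rest = points[0], points[1:]
    for idx in range(len(rest)):
        for m in matchings(rest[:idx] + rest[idx + 1:]):
            yield [(x, rest[idx])] + m

def moment(n):  # m_{2n}(2) = sum over matchings of 2^{cro(sigma)}
    return sum(2 ** sum(crosses(*e) for e in combinations(s, 2))
               for s in matchings(list(range(1, 2 * n + 1))))

def cumulants(N):  # k_2, ..., k_{2N} via m_n = sum_j C(n-1,j-1) k_j m_{n-j}
    m = {0: 1}
    for n in range(1, 2 * N + 1):
        m[n] = moment(n // 2) if n % 2 == 0 else 0
    k = {}
    for n in range(1, 2 * N + 1):
        k[n] = m[n] - sum(comb(n - 1, j - 1) * k[j] * m[n - j]
                          for j in range(1, n))
    return [k[2 * n] for n in range(1, N + 1)]

def primitive_count(n):  # |I(2n)|: root-connected orientations
    total = 0
    for sigma in matchings(list(range(1, 2 * n + 1))):
        edges = [e for e in combinations(sigma, 2) if crosses(*e)]
        root = next(a for a in sigma if 1 in a)
        for choice in product([0, 1], repeat=len(edges)):
            adj = {v: [] for v in sigma}
            for (u, v), c in zip(edges, choice):
                if c:
                    adj[u].append(v)
                else:
                    adj[v].append(u)
            seen, stack = {root}, [root]
            while stack:  # reachability from the root
                for w in adj[stack.pop()]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            if len(seen) == len(sigma):
                total += 1
    return total

print(cumulants(3), [primitive_count(n) for n in (1, 2, 3)])
```

Both computations give $1, 1, 7$ for $n=1,2,3$, illustrating $\#\mathcal{I}(2n) = k_{2n}(2)$. No acyclicity check is needed here: for $q=2$ the orientations counted by Gioan's result need not be acyclic.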
To prove this from \eqref{relmk} using the exponential formula, we have to see how an augmented matching can be decomposed into an assembly of primitive ones, as stated in Proposition~\ref{propdecomp} below. This decomposition thus proves the case $q=2$ of Theorem~\ref{cumultutte}. Note also that the bijection given below is equivalent to the first identity in \eqref{inversion}. \begin{prop} \label{propdecomp} There is a bijection \[ \mathcal{M}^+(2n) \longrightarrow \biguplus_{\pi\in\mathcal{P}(n)} \; \prod_{ V\in \pi } \mathcal{I}(V). \] \end{prop} \begin{proof} Let $(\sigma,r) \in \mathcal{M}^+(2n)$; the bijection is defined as follows. Consider the vertices of $G(\sigma)$ which are accessible from the root. This set of vertices defines a matching on a subset $V_1\subset \{1,\dots,2n\}$. For example, in the case in Figure~\ref{aug}, the root is $\{1,6\}$ and the only other accessible vertex is $\{2,8\}$, so $V_1=\{1,2,6,8\}$. Together with the restriction of the orientation $r$ to this subset of vertices, this defines an augmented matching $(\sigma_1,r_1)\in\mathcal{M}^+(V_1)$ which by construction is primitive. By repeating this operation on the set $\{1,\dots,2n\}\backslash V_1$, we find $V_2\subset \{1,\dots,2n\}\backslash V_1$ and $(\sigma_2,r_2)\in\mathcal{I}(V_2)$, and so on. See Figure~\ref{decomp} for the result, in the case of the augmented matching in Figure~\ref{aug}. The inverse bijection is easily described. If $(\sigma_i,r_i)\in\mathcal{I}(V_i)$ for any $1\leq i\leq k$ where $\pi=\{V_1,\dots,V_k\}$, let $\sigma=\sigma_1 \cup \dots \cup \sigma_k$, and define the orientation $r$ of $G(\sigma)$ as follows. Let $e$ be an edge of $G(\sigma)$ and $x_1$, $x_2$ be its endpoints, with $x_1\in\sigma_{j_1}$ and $x_2\in\sigma_{j_2}$. If $j_1=j_2$, the edge $e$ is oriented in accordance with the orientation $r_{j_1}=r_{j_2}$. Otherwise, say $j_1<j_2$; then the edge $e$ is oriented in the direction $x_1 \leftarrow x_2$.
\end{proof} \begin{figure} \caption{Decomposition of an augmented matching into primitive ones. } \label{decomp} \end{figure} \section{Cumulants of the free Poisson law} The free Poisson law appears in free probability and random matrix theory, and can be characterized by the fact that all its free cumulants are equal to some $\lambda>0$, see \cite{nica}. It follows from \eqref{inversionfree} that its moments $m_n(\lambda)$ count noncrossing partitions, and consequently the coefficients are given by the Narayana numbers (see \cite{stanley}): \[ m_n(\lambda)= \sum_{\pi\in\mathcal{NC}(n)} \lambda^{\# \pi} = \sum_{k=1}^n \frac {\lambda^k}{n} \binom{n}{k}\binom{n}{k-1}. \] The corresponding cumulants are defined as before by \begin{equation} \label{defklam} \sum_{n\geq1} k_n(\lambda) \frac{z^n}{n!} =\log \Bigg( \sum_{n\geq0} m_n(\lambda) \frac{z^n}{n!} \Bigg). \end{equation} For any set partition $\pi\in\mathcal{P}(V)$ for some $V\subset \mathbb{N}$, we can define a crossing graph $G(\pi)$, whose vertices are the blocks of $\pi$, and there is an edge between $b,c\in\pi$ if $\{b,c\}$ is not a noncrossing partition. Note that $\pi$ is connected if and only if the graph $G(\pi)$ is connected. The two proofs given for the semicircular cumulants also establish the following: \begin{thm} \label{cumulpoisson} For any $n\geq1$, we have: \[ k_n(\lambda) = - \sum_{\pi\in\mathcal{P}^c(n)} (-\lambda)^{\# \pi } T_{G(\pi)}(1,0). \] \end{thm} Let us sketch the proofs. If $\pi\in\mathcal{P}(n)$, similarly to Lemma~\ref{lemmpi} we have: \[ m_{\pi}(\lambda) = \sum_{\substack{ \rho \in \mathcal{P}(n) \\ \rho \unlhd \pi } } \lambda^{\# \rho} \] where the relation $\rho \unlhd \pi $ means that $\rho \leq \pi $ and $\rho|_b$ is a noncrossing partition for each $b\in\pi$. Indeed, the map $\rho \mapsto (\rho|_b)_{b\in\pi}$ is a bijection between $\{ \rho\in\mathcal{P}(n) : \rho \unlhd \pi \}$ and $\prod_{b\in\pi} \mathcal{NC}(b)$.
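As an aside, the Narayana formula for $m_n(\lambda)$ displayed above is easy to test by machine (hypothetical helper code): enumerate the set partitions of $\{1,\dots,n\}$, keep the noncrossing ones, and compare the resulting coefficients of $\lambda^k$ with the Narayana numbers.

```python
from itertools import combinations
from math import comb

def set_partitions(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for p in set_partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

def noncrossing(p):
    # blocks B, C cross iff a < x < b < y with a, b in B and x, y in C
    for bl, cl in combinations(p, 2):
        for a1, a2 in combinations(sorted(bl), 2):
            if any(a1 < x < a2 for x in cl) and any(x < a1 or x > a2 for x in cl):
                return False
    return True

def moment_coeffs(n):
    """Coefficients of lambda^1, ..., lambda^n in m_n(lambda)."""
    coeffs = [0] * (n + 1)
    for p in set_partitions(list(range(1, n + 1))):
        if noncrossing(p):
            coeffs[len(p)] += 1
    return coeffs[1:]

def narayana(n):
    return [comb(n, k) * comb(n, k - 1) // n for k in range(1, n + 1)]

print(moment_coeffs(4))  # [1, 6, 6, 1], matching narayana(4)
```

For $n=4$ this gives $[1, 6, 6, 1]$, the fourth row of the Narayana triangle, summing to the Catalan number $14 = \#\mathcal{NC}(4)$.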
The same computation as in \eqref{kW} and \eqref{W1} gives \begin{equation} \label{invpoisson} k_n(\lambda) = \sum_{\pi\in\mathcal{P}(n)} m_{\pi}(\lambda) \mu(\pi,\hat 1) = \sum_{ \substack{ \rho,\pi \in \mathcal{P}(n) \\ \rho \unlhd \pi } } \lambda^{ \# \rho } \mu(\pi,\hat 1) = \sum_{\rho \in \mathcal{P}(n) } \lambda^{\# \rho} W(\rho), \end{equation} where \[ W(\rho) = \sum_{ \substack{ \pi \in \mathcal{P}(n) \\ \rho \unlhd \pi } } \mu(\pi,\hat 1). \] Denoting by $G(\rho)=(V,E)$ the crossing graph of $\rho$, the previous equality can be rewritten as $W(\rho) = \sum \mu(\pi,\hat 1)$ where the sum is over $\pi\in\mathcal{P}(V)$ such that for any $b\in\pi$ and $e\in E$, the block $b$ does not contain both endpoints of the edge $e$. Then the case $q=0$ of Proposition~\ref{proptutte} shows that \[ W(\rho) = \begin{cases} (-1)^{\# \rho +1 } T_{G(\rho)}(1,0) & \text{if } \rho\in\mathcal{P}^c(n), \\ 0 & \text{otherwise.} \end{cases} \] Together with \eqref{invpoisson}, this completes the first proof of Theorem~\ref{cumulpoisson}. As for the second proof, we follow the outline of Section~\ref{sec:heaps}, but with other definitions for $M$, $M^\circ$, $P$ and $\omega$. Let $M$ be the monoid with generators $(x_V)$ where $V$ runs through finite subsets of $\mathbb{N}_{>0}$, and with relations $x_Vx_W=x_Wx_V$ if $\{V,W\}$ is a noncrossing partition. We also denote by $M^\circ \subset M$ the corresponding set of trivial heaps, i.e. products of pairwise commuting generators. The subset $P\subset M$ is characterized by Definition~\ref{defpyr}. Now, we consider the morphism $\omega$ defined by \[ \omega(x_V) = \lambda \prod_{i\in V} a_i. \] We have: \begin{align*} \omega \bigg( \sum_{m\in M^\circ} (-1)^{|m|} m \bigg) &= \sum_{V} \sum_{\pi\in\mathcal{NC}(V)} (-1)^{\#\pi} \prod_{b\in \pi} \omega(x_b) \\ &= \sum_{V} \sum_{\pi\in\mathcal{NC}(V)} (-\lambda)^{\#\pi} \prod_{i\in V} a_i \\ &= \sum_{n\geq 0} m_n(-\lambda) e_{n} = \sum_{n\geq 0} m_n(-\lambda) \frac{e_{1}^n}{n!}.
\end{align*} We still understand that $V\subset \mathbb{N}_{>0}$ is finite, $(a_i)_{i\geq 1}$ are commuting variables with vanishing squares, and $e_n$ is the $n$th elementary symmetric function in the $a_i$'s. Equation \eqref{logpyr2} is still valid as such with the new definitions of $M^\circ$ and $P$, and taking the image by $\omega$ gives: \[ -\log \bigg( \sum_{n\geq 0} m_n(-\lambda) \frac{e_{1}^n}{n!} \bigg) = \sum_{p\in P} \frac{1}{|p|} \omega(p). \] Comparing with \eqref{defklam} and taking the coefficient of $e_1^n$, we get: \[ -k_n(-\lambda) = \sum_{\substack{ p\in P \\ \omega(p) = a_1 \cdots a_n }} \frac{\lambda^{|p|}}{|p|} . \] Let $p\in P$ be such that $\omega(p) = a_1 \cdots a_n$. Following the idea in Lemma~\ref{ac_or}, we can write $p=x_{V_1}\cdots x_{V_k}$ where $V_1,\dots,V_k$ are the blocks of a set partition $\pi\in\mathcal{P}(n)$, and $p$ is characterized by $\pi$ together with an acyclic orientation of the graph $G(\pi)$ having a unique source. Following the idea at the end of Section~\ref{sec:heaps}, we thus complete the second proof of Theorem~\ref{cumulpoisson}. \section{Final remarks} It would be interesting to explain why the same combinatorial objects appear both for $c_{2n}(q)$ and $k_{2n}(q)$. This suggests that there exists some quantity that interpolates between the classical and free cumulants of the $q$-semicircular law. However, building a noncommutative probability theory that encompasses the classical and free ones appears to be elusive (see \cite{leeuwen2} for a precise statement). This means that building such an interpolation would rely not only on the $q$-semicircular law and its moments, but on its realization as a noncommutative random variable. This might be feasible using $q$-Fock spaces \cite{bozejko1,bozejko2} but is beyond the scope of this article. \section*{Acknowledgment} This work was initiated during the trimester ``Bialgebras in Free Probability'' at the Erwin Schrödinger Institute in Vienna.
In particular, I thank Franz Lehner, Michael Anshelevich and Natasha Blitvić for helpful conversations. \end{document}
\begin{document} \title{A finiteness theorem for algebraic cycles} \author{Peter O'Sullivan} \address{Centre for Mathematics and its Applications\\ Australian National University, Canberra ACT 0200 \\ Australia} \email{[email protected]} \thanks{The author was partly supported by ARC Discovery Project grant DP0774133.} \subjclass[2000]{14C15, 14C25} \keywords{algebraic cycle, Chow ring} \date{} \dedicatory{} \begin{abstract} Let $X$ be a smooth projective variety. Starting with a finite set of cycles on powers $X^m$ of $X$, we consider the $\mathbf Q$\nobreakdash-\hspace{0pt} vector subspaces of the $\mathbf Q$\nobreakdash-\hspace{0pt} linear Chow groups of the $X^m$ obtained by iterating the algebraic operations and pullback and push forward along those morphisms $X^l \to X^m$ for which each component $X^l \to X$ is a projection. It is shown that these $\mathbf Q$\nobreakdash-\hspace{0pt} vector subspaces are all finite-dimensional, provided that the $\mathbf Q$\nobreakdash-\hspace{0pt} linear Chow motive of $X$ is a direct summand of that of an abelian variety. \end{abstract} \maketitle \section{Introduction} Let $X$ be a smooth projective variety over a field $F$. Starting with a finite set of cycles on powers $X^m$ of $X$, consider the $\mathbf Q$\nobreakdash-\hspace{0pt} vector subspaces $C_m$ of the $\mathbf Q$\nobreakdash-\hspace{0pt} linear Chow groups $CH(X^m)_\mathbf Q$ formed by iterating the algebraic operations and pullback $p^*$ and push forward $p_*$ along those morphisms $p:X^l \to X^m$ for which each component $X^l \to X$ is a projection. It is plausible that the $C_m$ are always finite-dimensional, because when $F$ is finitely generated over the prime field it is plausible that the $CH(X^m)_\mathbf Q$ themselves are finite-dimensional. In this paper we prove the finite-dimensionality of the $C_m$ when the $\mathbf Q$\nobreakdash-\hspace{0pt} linear Chow motive of $X$ is a direct summand of that of an abelian variety over $F$. 
More precisely, suppose given an adequate equivalence relation $\sim$ on $\mathbf Q$\nobreakdash-\hspace{0pt} linear cycles on smooth projective varieties over $F$. We say that $X$ is a \emph{Kimura variety for $\sim$} if, in the category of $\mathbf Q$\nobreakdash-\hspace{0pt} linear Chow motives modulo $\sim$, the motive of $X$ is the direct sum of one for which some exterior power is $0$ and one for which some symmetric power is $0$. A Kimura variety for $\sim$ is also a Kimura variety for any coarser equivalence relation. It is known (e.g.\ \cite{Kim}, Corollary~4.4) that if the $\mathbf Q$\nobreakdash-\hspace{0pt} linear Chow motive of $X$ is a direct summand of that of an abelian variety, then $X$ is a Kimura variety for any $\sim$. The main result is now Theorem~\ref{t:fin} below. In addition to a finiteness result, it contains also a nilpotence result. By a filtration $C^\bullet$ on a graded $\mathbf Q$\nobreakdash-\hspace{0pt} algebra $C$ we mean a descending sequence $C = C^0 \supset C^1 \supset \dots$ of graded ideals of $C$ such that $C^r.C^s \subset C^{r+s}$ for every $r$ and $s$. The morphisms $p:X^l \to X^m$ in Theorem~\ref{t:fin}~\ref{i:pullpush} are exactly those defined by maps $[1,m] \to [1,l]$. \begin{thm}\label{t:fin} Let $X$ be a smooth projective variety over $F$ which is a Kimura variety for the equivalence relation ${\sim}$. For $n = 0,1,2, \dots $, let $Z_n$ be a finite subset of $CH(X^n)_\mathbf Q/{\sim}$ , with $Z_n$ empty for $n$ large. Then there exists for each $n$ a graded $\mathbf Q$\nobreakdash-\hspace{0pt} subalgebra $C_n$ of $CH(X^n)_\mathbf Q/{\sim}$, and a filtration $(C_n)^\bullet$ on $C_n$, with the following properties. \begin{enumerate} \renewcommand{(\alph{enumi})}{(\alph{enumi})} \item\label{i:pullpush} If $p:X^l \to X^m$ is a morphism for which each component $X^l \to X$ is a projection, then $p^*$ sends $C_m$ to $C_l$ and $p_*$ sends $C_l$ to $C_m$, and $p^*$ and $p_*$ respect the filtrations on $C_l$ and $C_m$. 
\item\label{i:fin} For every $n$, the $\mathbf Q$\nobreakdash-\hspace{0pt} algebra $C_n$ is finite-dimensional and contains $Z_n$. \item\label{i:nilp} For every $n$, the cycles in $C_n$ which are numerically equivalent to $0$ lie in $(C_n)^1$, and we have $(C_n)^r = 0$ for $r$ large. \end{enumerate} \end{thm} The finiteness result of Theorem~\ref{t:fin} is non-trivial only if $Z_n$ is non-empty for some $n > 1$. Indeed it will follow from Proposition~\ref{p:Chowsub} below that for any smooth projective variety $X$ over $F$ and finite subset $Z_1$ of $CH(X)_\mathbf Q$, there is a finite-dimensional graded $\mathbf Q$\nobreakdash-\hspace{0pt} subalgebra $C_n$ of $CH(X^n)_\mathbf Q$ for each $n$ such that $C_1$ contains $Z_1$ and $p^*$ sends $C_m$ to $C_l$ and $p_*$ sends $C_l$ to $C_m$ for every $p$ as in \ref{i:pullpush} of Theorem~\ref{t:fin}. If $X$ is a Kimura variety for $\sim$, then the ideal of correspondences numerically equivalent to $0$ in the algebra $CH(X \times_F X)_\mathbf Q/{\sim}$ of self-correspondences of $X$ has been shown by Kimura (\cite{Kim}, Proposition~7.5) to consist of nilpotent elements, and by Andr\'e and Kahn (\cite{AndKah}, Proposition~9.1.14) to be in fact nilpotent. The nilpotence result of Theorem~\ref{t:fin} implies that of Kimura, but neither implies nor is implied by that of Andr\'e and Kahn. If $\sim$ is numerical equivalence, then the $CH(X^m)_\mathbf Q/{\sim} = \overline{CH}(X^m)_\mathbf Q$ are all finite dimensional. The following result shows that they are generated in a suitable sense by the $\overline{CH}(X^m)_\mathbf Q$ for $m$ not exceeding some fixed $n$. \begin{thm}\label{t:num} Let $X$ be a smooth projective variety over $F$ which is a Kimura variety for numerical equivalence. 
Then there exists an integer $n \ge 0$ with the following property: for every $m$, the $\mathbf Q$\nobreakdash-\hspace{0pt} vector space $\overline{CH}(X^m)_\mathbf Q$ is generated by elements of the form \begin{equation}\label{e:numgen} p_*((p_1)^*(z_1).(p_2)^*(z_2). \cdots .(p_r)^*(z_r)), \end{equation} where $z_i$ lies in $\overline{CH}(X^{m_i})_\mathbf Q$ with $m_i \le n$, and $p:X^l \to X^m$ and the $p_i:X^l \to X^{m_i}$ are morphisms for which each component $X^l \to X$ is a projection. \end{thm} Theorem~\ref{t:fin} will be proved in Section~\ref{s:finproof} and Theorem~\ref{t:num} in Section~\ref{s:numproof}. Both theorems are deduced from the following fact. Given a Kimura variety $X$ for $\sim$, there is a reductive group $G$ over $\mathbf Q$, a finite-dimensional $G$\nobreakdash-\hspace{0pt} module $E$, a central $\mathbf Q$\nobreakdash-\hspace{0pt} point $\rho$ of $G$ with $\rho^2 = 1$, and a commutative algebra $R$ in the tensor category $\REP(G,\rho)$ of $G$\nobreakdash-\hspace{0pt} modules with symmetry modified by $\rho$, with the following property. If we write $\mathcal M_{\sim}(F)$ for the category of ungraded motives over $F$ modulo ${\sim}$ and $E_R$ for the free $R$\nobreakdash-\hspace{0pt} module $R \otimes_k E$ on $E$, then there exist isomorphisms \[ \xi_{r,s}:\Hom_{G,R}((E_R)^{\otimes r},(E_R)^{\otimes s}) \xrightarrow{\sim} \Hom_{\mathcal M_{\sim}(F)}(h(X)^{\otimes r},h(X)^{\otimes s}) \] which are compatible with composition and tensor products of morphisms and with symmetries interchanging the factors $E_R$ and $h(X)$. These isomorphisms arise because there is a fully faithful tensor functor from the category of finitely generated free $R$\nobreakdash-\hspace{0pt} modules to $\mathcal M_{\sim}(F)$, which sends $E_R$ to $h(X)$ (see \cite{O}, Lemma~3.3 for a similar result). However to keep the exposition as brief as possible, the $\xi_{r,s}$ will simply be constructed directly here, in Sections~\ref{s:Kimobj} and \ref{s:Kimvar}. 
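A basic example of the hypothesis, not needed in what follows, may be kept in mind. If $X$ is a smooth projective curve of genus $g$ over $F$, then the $\mathbf Q$\nobreakdash-\hspace{0pt} linear Chow motive of $X$ decomposes as
\[
h(X) = h^0(X) \oplus h^1(X) \oplus h^2(X),
\]
with $h^0(X)$ and $h^2(X)$ invertible and $h^1(X)$ a direct summand of the Chow motive of the Jacobian of $X$. We have $\bigwedge^3(h^0(X) \oplus h^2(X)) = 0$, and by \cite{Kim} also $S^{2g+1}h^1(X) = 0$, so that $X$ is a Kimura variety for any $\sim$.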
Now we have an equality \[ CH(X^r)_\mathbf Q/{\sim} = \Hom_{\mathcal M_{\sim}(F)}(\mathbf 1,h(X)^{\otimes r}) \] of $\mathbf Q$\nobreakdash-\hspace{0pt} algebras, and pullback along a morphism $f:X^l \to X^m$ is given by composition with $h(f)$. There is a canonical autoduality $h(X)^{\otimes 2} \to \mathbf 1$ on $h(X)$, and push forward along $f$ is given by composing with the transpose of $h(f)$ defined by this autoduality. Using the isomorphisms $\xi_{r,s}$, Theorems~\ref{t:fin} and \ref{t:num} then reduce to easily solved problems about the $G$\nobreakdash-\hspace{0pt} algebra $R$. \section{Generated spaces of cycles} The following result gives an explicit description of the spaces of cycles generated by a given set of cycles on the powers of an arbitrary smooth projective variety $X$. By the top Chern class of $X$ we mean the element $(\Delta_X)^*(\Delta_X)_*(1)$ of $CH(X)_\mathbf Q$, where $\Delta_X:X \to X^2$ is the diagonal. We define the tensor product $z \otimes z'$ of $z$ in $CH(X)_\mathbf Q$ and $z'$ in $CH(X')_\mathbf Q$ as $\pr_1{}\!^*(z).\pr_2{}\!^*(z')$ in $CH(X \times_F X')_\mathbf Q$. The push forward of $z \otimes z'$ along a morphism $f \times f'$ is then $f_*(z) \otimes f'{}_*(z')$. \begin{prop}\label{p:Chowsub} Let $X$ be a smooth projective variety over $F$. For $m = 0,1,2, \dots$ let $Z_m$ be a subset of $CH(X^m)_\mathbf Q$, such that $Z_1$ contains the top Chern class of $X$. Denote by $C_m$ the $\mathbf Q$\nobreakdash-\hspace{0pt} vector subspace of $CH(X^m)_\mathbf Q$ generated by elements of the form \begin{equation}\label{e:pullpush} p_*((p_1)^*(z_1).(p_2)^*(z_2). \cdots .(p_r)^*(z_r)), \end{equation} where $z_i$ lies in $Z_{m_i}$, and $p:X^j \to X^m$ and the $p_i:X^j \to X^{m_i}$ are morphisms for which each component $X^j \to X$ is a projection. Then $C_m$ is a $\mathbf Q$\nobreakdash-\hspace{0pt} subalgebra of $CH(X^m)_\mathbf Q$ for each $m$. 
If $q:X^l \to X^m$ is a morphism for which each component $X^l \to X$ is a projection, then $q^*$ sends $C_m$ into $C_l$ and $q_*$ sends $C_l$ into $C_m$. \end{prop} \begin{proof} Write $\mathcal P_{l,m}$ for the set of morphisms $X^l \to X^m$ for which each component $X^l \to X$ is a projection. Then the composite of an element of $\mathcal P_{j,l}$ with an element of $\mathcal P_{l,m}$ lies in $\mathcal P_{j,m}$. Thus $q_*(C_l) \subset C_m$ for $q$ in $\mathcal P_{l,m}$. Similarly the product of two elements of $\mathcal P_{j,m}$ lies in $\mathcal P_{2j,2m}$. Thus the tensor product of two elements of $C_m$ lies in $C_{2m}$. Since the product of two elements of $CH(X^m)_\mathbf Q$ is the pullback of their tensor product along $\Delta_{X^m} \in \mathcal P_{m,2m}$, it remains only to show that \begin{equation}\label{e:qC} q^*(C_m) \subset C_l \end{equation} for $q$ in $\mathcal P_{l,m}$. This is clear when $l = m$ and $q$ is a symmetry $\sigma$ permuting the factors $X$ of $X^m$, because $\sigma^* = (\sigma^{-1})_*$. An arbitrary $q$ factors for some $l'$ as $q'' \circ q'$ with $q'$ in $\mathcal P_{l,l'}$ a projection and $q''$ in $\mathcal P_{l',m}$ a closed immersion. It thus is enough to show that \eqref{e:qC} holds when $q$ is a projection or $q$ is a closed immersion. Suppose that $q$ is a projection. If we write \[ w_{s,n}:X^{s+n} \to X^n \] for the projection onto the last $n$ factors, then $q$ is a composite of a symmetry with $w_{l-m,m}$. Thus \eqref{e:qC} holds because $w_{l-m,m}{}^* = 1 \otimes -$ sends $C_m$ into $C_l$. Suppose that $q$ is a closed immersion. Then $q$ is a composite of the \[ e_s = X^{s-2} \times \Delta_X:X^{s-1} \to X^s \] for $s \ge 2$ and symmetries. To prove \eqref{e:qC}, we may thus suppose that $m \ge 2$ and $q = e_m$. Denote by $W_m$ the $\mathbf Q$\nobreakdash-\hspace{0pt} subalgebra of $CH(X^m)_\mathbf Q$ generated by the $v^*(Z_{m'})$ for any $m'$ and $v$ in $\mathcal P_{m,m'}$. 
Then $u^*$ sends $W_m$ into $W_l$ for any $u$ in $\mathcal P_{l,m}$, and by the projection formula $C_m$ is a $W_m$\nobreakdash-\hspace{0pt} submodule of $CH(X^m)_\mathbf Q$. Since $C_m$ is the sum of the $p_*(W_j)$ with $p$ in $\mathcal P_{j,m}$, it is to be shown that \begin{equation}\label{e:emp} (e_m)^*p_*(W_j) \subset C_{m-1} \end{equation} for every $p$ in $\mathcal P_{j,m}$. We have $p = w_{j,m} \circ \Gamma_p$ with $\Gamma_p$ the graph of $p$, and \[ (e_m)^* \circ (w_{j,m})_* = (w_{j,m-1})_* \circ (e_{j+m})^*. \] Thus \eqref{e:emp} will hold provided that $(e_{j+m})^*(\Gamma_p)_*(W_j) \subset C_{j+m-1}$. Replacing $m$ by $j+m$ and $p$ by $\Gamma_p$, we may thus suppose that $p$ has a left inverse in $\mathcal P_{m,j}$. In that case any $y$ in $W_j$ is of the form $p^*(x)$ with $x$ in $W_m$, and then \[ (e_m)^*p_*(y) = (e_m)^*(p_*(1).x) = (e_m)^*p_*(1).(e_m)^*(x). \] Thus \eqref{e:emp} will hold provided that $(e_m)^*p_*(1)$ lies in $C_{m-1}$. To see that $(e_m)^*p_*(1)$ lies in $C_{m-1}$, note that $e_m$ has a left inverse $f$ in $\mathcal P_{m,m-1}$. Then \[ (e_m)^*p_*(1) = f_*(e_m)_*(e_m)^*p_*(1) = f_*(p_*(1).(e_m)_*(1)) = f_*p_*p^*(e_m)_*(1). \] Since $(e_m)_*(1) = (w_{m-2,2})^*(\Delta_X)_*(1)$, we reduce finally to showing that $h^*(\Delta_X)_*(1)$ lies in $C_j$ for every $h$ in $\mathcal P_{j,2}$. Such an $h$ factors as a projection followed by either a symmetry of $X^2$ or $\Delta_X$, so we may suppose that $j = 1$ and $h = \Delta_X$. Then $h^*(\Delta_X)_*(1)$ is the top Chern class of $X$, which by hypothesis lies in $Z_1 \subset C_1$. \end{proof} \section{Group representations}\label{s:grprep} Let $k$ be a field. By a $k$\nobreakdash-\hspace{0pt} linear category we mean a category equipped with a structure of $k$\nobreakdash-\hspace{0pt} vector space on each hom-set such that the composition is $k$\nobreakdash-\hspace{0pt} bilinear. 
A $k$\nobreakdash-\hspace{0pt} linear category is said to be pseudo-abelian if it has a zero object and direct sums, and if every idempotent endomorphism has an image. A \emph{$k$\nobreakdash-\hspace{0pt} tensor category} is a pseudo-abelian $k$\nobreakdash-\hspace{0pt} linear category $\mathcal C$, together with a structure of symmetric monoidal category on $\mathcal C$ (\cite{Mac},~VII~7) such that the tensor product $\otimes$ is $k$\nobreakdash-\hspace{0pt} bilinear on hom-spaces. Thus $\mathcal C$ is equipped with a unit object $\mathbf 1$, and natural isomorphisms \[ (L \otimes M) \otimes N \xrightarrow{\sim} L \otimes (M \otimes N), \] the associativities, \[ M \otimes N \xrightarrow{\sim} N \otimes M, \] the symmetries, and $\mathbf 1 \otimes M \xrightarrow{\sim} M$ and $M \otimes \mathbf 1 \xrightarrow{\sim} M$, which satisfy appropriate compatibilities. We assume in what follows that $\mathbf 1 \otimes M \xrightarrow{\sim} M$ and $M \otimes \mathbf 1 \xrightarrow{\sim} M$ are identities: this can always be arranged by replacing if necessary $\otimes$ by an isomorphic functor. Brackets in multiple tensor products will often be omitted when it is of no importance which bracketing is chosen. The tensor product of $r$ copies of $M$ will then be written as $M^{\otimes r}$, and similarly for morphisms. There is a canonical action $\tau \mapsto M^{\otimes \tau}$ of the symmetric group $\mathfrak{S}_r$ on $M^{\otimes r}$, defined using the symmetries. It extends to a homomorphism of $k$\nobreakdash-\hspace{0pt} algebras from $k[\mathfrak{S}_r]$ to $\End(M^{\otimes r})$. When $k$ is of characteristic $0$, the symmetrising idempotent in $k[\mathfrak{S}_r]$ maps to an idempotent endomorphism $e$ of $M^{\otimes r}$, and we define the $r$th symmetric power $S^r M$ of $M$ as the image of $e$. Similarly we define the $r$th exterior power $\bigwedge^r M$ of $M$ using the antisymmetrising idempotent in $k[\mathfrak{S}_r]$. Let $G$ be a linear algebraic group over $k$. 
We write $\REP(G)$ for the category of $G$\nobreakdash-\hspace{0pt} modules. The tensor product $\otimes_k$ over $k$ defines on $\REP(G)$ a structure of $k$\nobreakdash-\hspace{0pt} tensor category. Recall (\cite{Wat}, 3.3) that every $G$\nobreakdash-\hspace{0pt} module is the filtered colimit of its finite-dimensional $G$\nobreakdash-\hspace{0pt} submodules. If $E$ is a finite-dimensional $G$\nobreakdash-\hspace{0pt} module, then regarding $\REP(G)$ as a category of comodules (\cite{Wat}, 3.2) shows that $\Hom_G(E,-)$ preserves filtered colimits. When $k$ is algebraically closed, a $k$\nobreakdash-\hspace{0pt} vector subspace of a $G$\nobreakdash-\hspace{0pt} module is a $G$\nobreakdash-\hspace{0pt} submodule provided it is stable under every $k$\nobreakdash-\hspace{0pt} point of $G$. This is easily seen by reducing to the finite-dimensional case. We suppose from now on that $k$ has characteristic $0$. Let $\rho$ be a central $k$\nobreakdash-\hspace{0pt} point of $G$ with $\rho^2 = 1$. Then $\rho$ induces a $\mathbf Z/2$\nobreakdash-\hspace{0pt} grading on $\REP(G)$, with the $G$\nobreakdash-\hspace{0pt} modules pure of degree $i$ those on which $\rho$ acts as $(-1)^i$. We define as follows a $k$\nobreakdash-\hspace{0pt} tensor category $\REP(G,\rho)$. The underlying $k$\nobreakdash-\hspace{0pt} linear category, tensor product and associativities of $\REP(G,\rho)$ are the same as those of $\REP(G)$, but the symmetry $M \otimes N \xrightarrow{\sim} N \otimes M$ is given by multiplying that in $\REP(G)$ by $(-1)^{ij}$ when $M$ is of degree $i$ and $N$ of degree $j$, and then extending by linearity. When $\rho = 1$, the $k$\nobreakdash-\hspace{0pt} tensor categories $\REP(G)$ and $\REP(G,\rho)$ coincide. An algebra in a $k$\nobreakdash-\hspace{0pt} tensor category is defined as an object $R$ together with a multiplication $R \otimes R \to R$ and unit $\mathbf 1 \to R$ satisfying the usual associativity and identity conditions. 
Since the symmetry is not used in this definition, an algebra in $\REP(G,\rho)$ is the same as an algebra in $\REP(G)$, or equivalently a $G$\nobreakdash-\hspace{0pt} algebra. An algebra $R$ in $\REP(G,\rho)$ will be said to be finitely generated if its underlying $k$\nobreakdash-\hspace{0pt} algebra is. It is equivalent to require that $R$ be generated as a $k$\nobreakdash-\hspace{0pt} algebra by a finite-dimensional $G$\nobreakdash-\hspace{0pt} submodule. A (left) module over an algebra $R$ is an object $N$ equipped with an action $R \otimes N \to N$ satisfying the usual associativity and identity conditions. If $R$ is an algebra in $\REP(G,\rho)$ or $\REP(G)$, we also speak of a $(G,R)$\nobreakdash-\hspace{0pt} module. A $(G,R)$\nobreakdash-\hspace{0pt} module is said to be finitely generated if it is finitely generated as a module over the underlying $k$\nobreakdash-\hspace{0pt} algebra of $R$. It is equivalent to require that it be generated as a module over the $k$\nobreakdash-\hspace{0pt} algebra $R$ by a finite-dimensional $G$\nobreakdash-\hspace{0pt} submodule. An algebra $R$ in a $k$\nobreakdash-\hspace{0pt} tensor category is said to be commutative if composition with the symmetry interchanging the factors $R$ in $R \otimes R$ leaves the multiplication unchanged. If $R$ is an algebra in $\REP(G,\rho)$, this notion of commutativity does not in general coincide with that of the underlying $k$\nobreakdash-\hspace{0pt} algebra, but it does when $\rho$ acts as $1$ on $R$. Coproducts exist in the category of commutative algebras in a $k$\nobreakdash-\hspace{0pt} tensor category: the coproduct of $R$ and $R'$ is $R \otimes R'$ with multiplication the tensor product of the multiplications of $R$ and $R'$ composed with the appropriate symmetry. 
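Concretely, suppose that $R = R_0 \oplus R_1$ is a commutative algebra in $\REP(G,\rho)$, with $\rho$ acting as $(-1)^i$ on $R_i$. Then the commutativity of $R$ amounts to the graded commutation rule
\[
xy = (-1)^{ij} yx
\]
for $x$ in $R_i$ and $y$ in $R_j$. In particular, since $k$ is of characteristic $0$, every $x$ in $R_1$ satisfies $x^2 = 0$.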
To any map $[1,m] \to [1,l]$ and commutative algebra $R$ there is then associated a morphism $R^{\otimes m} \to R^{\otimes l}$, defined using symmetries $R^{\otimes \tau}$ and the unit and multiplication of $R$ and their tensor products and composites, such that each component $R \to R^{\otimes l}$ is the embedding into one of the factors. Let $R$ be a commutative algebra in $\REP(G,\rho)$. Then the symmetry in $\REP(G,\rho)$ defines on any $R$\nobreakdash-\hspace{0pt} module a canonical structure of $(R,R)$\nobreakdash-\hspace{0pt} bimodule. The category of $(G,R)$\nobreakdash-\hspace{0pt} modules has a structure of $k$\nobreakdash-\hspace{0pt} tensor category, with the tensor product $N \otimes_R N'$ of $N$ and $N'$ defined in the usual way as the coequaliser of the two morphisms \[ N \otimes_k R \otimes_k N' \to N \otimes_k N' \] given by the actions of $R$ on $N$ and $N'$, and the tensor product $f \otimes_R f'$ of $f:M \to N$ and $f':M' \to N'$ as the unique morphism rendering the square \[ \begin{CD} M \otimes_R M' @>{f \otimes_R f'}>> N \otimes_R N' \\ @AAA @AAA \\ M \otimes_k M' @>{f \otimes_k f'}>> N \otimes_k N' \end{CD} \] commutative. Let $P$ be an object in $\REP(G,\rho)$. We write $P_R$ for the object $R \otimes_k P$ in the $k$\nobreakdash-\hspace{0pt} tensor category of $(G,R)$\nobreakdash-\hspace{0pt} modules. A morphism of commutative algebras $R' \to R$ in $\REP(G,\rho)$ induces by tensoring with $P$ a morphism of $R'$\nobreakdash-\hspace{0pt} modules $P_{R'} \to P_R$. 
For each $l$ and $m$, extension of scalars along $R' \to R$ then gives a $k$\nobreakdash-\hspace{0pt} linear map \begin{equation}\label{e:extscal} \Hom_{G,R'}((P_{R'})^{\otimes m},(P_{R'})^{\otimes l}) \to \Hom_{G,R}((P_R)^{\otimes m},(P_R)^{\otimes l}) \end{equation} Explicitly, \eqref{e:extscal} sends $f'$ to the unique morphism of $(G,R)$\nobreakdash-\hspace{0pt} modules $f$ that renders the square \[ \begin{CD} (P_R)^{\otimes m} @>{f}>> (P_R)^{\otimes l} \\ @AAA @AAA \\ (P_{R'})^{\otimes m} @>{f'}>> (P_{R'})^{\otimes l} \end{CD} \] commutative, where the vertical arrows are those defined by $P_{R'} \to P_R$. If $P$ is finite-dimensional, then for given commutative algebra $R$ and $f$, there is a finitely generated $G$\nobreakdash-\hspace{0pt} subalgebra $R'$ of $R$ such that $f$ is in the image of \eqref{e:extscal}. This can be seen by writing $R$ as the filtered colimit $\colim_\lambda R_\lambda$ of its finitely generated $G$\nobreakdash-\hspace{0pt} subalgebras, and noting that since $P^{\otimes m}$ is finite-dimensional, the composite of $P^{\otimes m} \to (P_R)^{\otimes m}$ with $f$ factors through some $(P_{R_\lambda})^{\otimes l}$. Suppose that $G$ is reductive, or equivalently that $\Hom_G(P,-)$ is exact for every $G$\nobreakdash-\hspace{0pt} module $P$. Then $\Hom_G(P,-)$ preserves colimits for $P$ finite-dimensional. In particular $(-)^G = \Hom_G(k,-)$ preserves colimits. If $R$ is a commutative algebra in $\REP(G,\rho)$ with $R^G = k$, then $R$ has a unique maximal $G$\nobreakdash-\hspace{0pt} ideal. Indeed $J^G = 0$ for $J \ne R$ a $G$\nobreakdash-\hspace{0pt} ideal of $R$, while $(J_1)^G = 0$ and $(J_2)^G = 0$ implies $(J_1 + J_2)^G = 0$. \begin{lem}\label{l:repfin} Let $G$ be a reductive group over a field $k$ of characteristic $0$ and $\rho$ be a central $k$\nobreakdash-\hspace{0pt} point of $G$ with $\rho^2 = 1$. 
Let $R$ be a finitely generated commutative algebra in $\REP(G,\rho)$ with $R^G = k$, and $N$ be a finitely generated $R$\nobreakdash-\hspace{0pt} module. \begin{enumerate} \item\label{i:algfin} The $k$\nobreakdash-\hspace{0pt} vector space $N^G$ is finite-dimensional. \item\label{i:idealcompl} For every $G$\nobreakdash-\hspace{0pt} ideal $J \ne R$ of $R$, we have $(J^rN)^G = 0$ for $r$ large. \end{enumerate} \end{lem} \begin{proof} Every object $P$ of $\REP(G,\rho)$ decomposes as $P_0 \oplus P_1$ where $\rho$ acts as $(-1)^i$ on $P_i$. In particular $R = R_0 \oplus R_1$ with $R_0$ a $G$\nobreakdash-\hspace{0pt} subalgebra of $R$. Suppose that $R$ is generated as an algebra by the finite-dimensional $G$\nobreakdash-\hspace{0pt} submodule $M$. Then $R_0$ is generated as an algebra by $M_0 + M_1{}\!^2$, and hence is finitely generated. Since $R$ is a commutative algebra in $\REP(G,\rho)$, it is generated as an $R_0$\nobreakdash-\hspace{0pt} module by $M_1$. Hence any finitely generated $R$\nobreakdash-\hspace{0pt} module is finitely generated as an $R_0$\nobreakdash-\hspace{0pt} module. To prove \ref{i:algfin}, we reduce after replacing $R$ by $R_0$ to the case where $R = R_0$. Then $R$ is a commutative $G$\nobreakdash-\hspace{0pt} algebra in the usual sense. In this case it is well known that $N^G$ is finite-dimensional over $k = R^G$ (e.g.\ \cite{ShaAlgIV}, II~Theorem~3.25). To prove \ref{i:idealcompl}, note that $J_0 \ne R_0$ is an ideal of $R_0$. Since $R$ is a finitely generated $R_0$\nobreakdash-\hspace{0pt} module, so also is $R_1$. If $x_1, x_2, \dots ,x_s$ generate $R_1$ over $R_0$, then since each $x_i$ has square $0$ we have $R_1{}\!^r = 0$ and hence $J_1{}\!^r = 0$ for $r>s$. Thus for $r > s$ we have \[ J^r N = (J_0 + J_1)^r N = J_0{}\!^r N + J_0{}\!^{r-1}J_1 N + \dots + J_0{}\!^{r-s}J_1{}\!^s N \subset J_0{}\!^{r-s} N. 
\] Replacing $R$ by $R_0$ and $J$ by $J_0$, we thus reduce again to the case where $R = R_0$ is a commutative $G$\nobreakdash-\hspace{0pt} algebra in the usual sense. We may suppose further that $k$ is algebraically closed. By \ref{i:algfin}, it is enough to show that $\bigcap_{r=0}^\infty J^r N = 0$, or equivalently (\cite{BAC-1} III \S 3 No.~2 Proposition~5 and IV \S 1 No.~1 Proposition~2, Corollaire~2) that $J + \mathfrak{p} \ne R$ for every associated prime $\mathfrak{p}$ of $N$. Fix such a $\mathfrak{p}$, and consider the intersection $\mathfrak{p}'$ of the $g\mathfrak{p}$ for $g$ in $G(k)$. It is stable under $G(k)$, and hence since $k$ is algebraically closed is a $G$\nobreakdash-\hspace{0pt} ideal of $R$. Thus $J + \mathfrak{p}' \ne R$, because $J$ and $\mathfrak{p}'$ are contained in the unique maximal $G$\nobreakdash-\hspace{0pt} ideal of $R$. Since each $g\mathfrak{p}$ lies in the finite set of associated primes of $N$, it follows that $J + g\mathfrak{p} \ne R$ for some $g$ in $G(k)$. Applying $g^{-1}$ then shows that $J + \mathfrak{p} \ne R$. \end{proof} Let $l_0$ and $l_1$ be integers $\ge 0$. Write \begin{equation}\label{e:Gdef} G = GL_{l_0} \times_k GL_{l_1}, \end{equation} $E_i$ for the standard representation of $GL_{l_i}$, regarded as a $G$\nobreakdash-\hspace{0pt} module, and \begin{equation}\label{e:Edef} E = E_0 \oplus E_1. \end{equation} We may identify the endomorphism of $E$ that sends $E_i$ to itself and acts on it as $(-1)^i$ with a central $k$\nobreakdash-\hspace{0pt} point $\rho$ of $G$ with $\rho^2 = 1$. Consider the semidirect product \begin{equation}\label{e:semidir} \Gamma_r = (\mathbf Z/2)^r \rtimes \mathfrak{S}_r, \end{equation} where the symmetric group $\mathfrak{S}_r$ acts on $(\mathbf Z/2)^r$ through its action on $[1,r]$. 
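Explicitly, an element of $\Gamma_r$ may be written as a pair $(\pi,\tau)$ with $\pi$ in $(\mathbf Z/2)^r$ and $\tau$ in $\mathfrak{S}_r$, the multiplication being given by
\[
(\pi,\tau)(\pi',\tau') = (\pi + \tau(\pi'),\tau\tau'),
\]
where $\tau(\pi')$ is obtained from $\pi'$ by permuting its components according to $\tau$. Thus $\Gamma_r$ has order $2^r r!$, and may be identified with the group of signed permutations of $[1,r]$.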
For each $r$, the group $\Gamma_r$ acts on $E^{\otimes r}$, with the action of $(\mathbf Z/2)^r$ the tensor product of the actions $i \mapsto \rho^i$ of $\mathbf Z/2$ on $E$, and the action of $\mathfrak{S}_r$ that defined by the symmetries in $\REP(G,\rho)$. Thus we obtain a homomorphism \begin{equation}\label{e:semidirhom} k[\Gamma_r] \to \End_G(E^{\otimes r}) \end{equation} of $k$\nobreakdash-\hspace{0pt} algebras. For $r \le r'$ we may regard $\Gamma_r$ as a subgroup of $\Gamma_{r'}$, and hence $k[\Gamma_r]$ as a $k$\nobreakdash-\hspace{0pt} subalgebra of $k[\Gamma_{r'}]$, by identifying $(\mathbf Z/2)^r$ with the subgroup of $(\mathbf Z/2)^{r'}$ with the last $r'-r$ factors the identity and $\mathfrak{S}_r$ with the subgroup of $\mathfrak{S}_{r'}$ which leaves the last $r'-r$ elements of $[1,r']$ fixed. Write $e_0$ for the idempotent of $k[\mathbf Z/2]$ given by half the sum of the two elements of $\mathbf Z/2$, and $e_1$ for $1-e_0$. Given $\pi = (\pi_1,\pi_2,\dots,\pi_r)$ in $(\mathbf Z/2)^r$, we then have an idempotent \[ e_\pi = e_{\pi_1} \otimes e_{\pi_2} \otimes \dots \otimes e_{\pi_r} \] in $k[\mathbf Z/2]^{\otimes r} = k[(\mathbf Z/2)^r] \subset k[\Gamma_r]$. When every component of $\pi$ is $i \in \mathbf Z/2$, we write $e_{i,r}$ for $e_\pi$. We also write $a_{0,r}$ for the antisymmetrising idempotent and $a_{1,r}$ for the symmetrising idempotent in $k[\mathfrak{S}_r]$, and for $i \in \mathbf Z/2$ we write \begin{equation}\label{e:xir} x_{i,r} = e_{i,l_i +1} a_{i,l_i +1} = e_{i,l_i +1} a_{i,l_i +1} e_{i,l_i +1} = a_{i,l_i +1} e_{i,l_i +1} \in k[\Gamma_{l_i +1}] \subset k[\Gamma_r] \end{equation} if $r > l_i$ and $x_{i,r} = 0$ otherwise. \begin{lem}\label{l:GL} \begin{enumerate} \item\label{i:hom0} If $r \ne r'$ then $\Hom_G(E^{\otimes r},E^{\otimes r'}) = 0$. \item\label{i:homsurj} The homomorphism \eqref{e:semidirhom} is surjective, with kernel the ideal of $k[\Gamma_r]$ generated by $x_{0,r}$ and $x_{1,r}$. 
\end{enumerate} \end{lem} \begin{proof} \ref{i:hom0} The action of $G$ on $E$ restricts along the appropriate $\mathbf G_m \to G$ to the homothetic action of $\mathbf G_m$ on $E$. \ref{i:homsurj} Write $I$ for the ideal of $k[\Gamma_r]$ generated by $x_{0,r}$ and $x_{1,r}$. The image of $e_\pi$ under \eqref{e:semidirhom} is the projection onto the direct summand \[ E_\pi = E_{\pi_1} \otimes_k E_{\pi_2} \otimes_k \dots \otimes_k E_{\pi_r} \] of $E^{\otimes r}$. The $e_\pi$ give a decomposition of the identity of $k[\Gamma_r]$ into orthogonal idempotents, and \eqref{e:semidirhom} is the direct sum over $\pi$ and $\pi'$ of the homomorphisms \begin{equation}\label{e:semidirhompi} e_{\pi'} k[\Gamma_r] e_\pi \to \Hom_G(E_\pi,E_{\pi'}) \end{equation} it induces on direct summands of $k[\Gamma_r]$ and $\End_G(E^{\otimes r})$. It is thus enough to show that \eqref{e:semidirhompi} is surjective, with kernel $e_{\pi'} I e_\pi$. Restricting to the centre of $G$ shows that the target of \eqref{e:semidirhompi} is $0$ unless $\pi'$ and $\pi$ have the same number of components $0$ or $1$, or equivalently $\pi' = \tau \pi \tau^{-1}$ for some $\tau \in \mathfrak{S}_r$. The same holds for the source of \eqref{e:semidirhompi}, because \[ \tau e_\pi \tau^{-1} = e_{\tau \pi \tau^{-1}} \] for every $\tau$ and $\pi$. Since further the image of $\tau \in \mathfrak{S}_r$ under \eqref{e:semidirhom} induces an isomorphism from $E_\pi$ to $E_{\tau \pi \tau^{-1}}$, to show that \eqref{e:semidirhompi} has the required properties we may suppose that $\pi' = \pi$ and that $r = r_0 + r_1$ where the first $r_0$ components of $\pi$ are $0$ and the last $r_1$ are $1$. Then the source of \eqref{e:semidirhompi} has a basis $e_\pi \tau e_\pi = e_\pi \tau$ with $\tau$ in the subgroup $\mathfrak{S}_{r_0} \times \mathfrak{S}_{r_1}$ of $\mathfrak{S}_r$ that permutes the first $r_0$ and last $r_1$ elements of $[1,r]$ among themselves. 
Thus we may identify \[ k[\mathfrak{S}_{r_0}] \otimes_k k[\mathfrak{S}_{r_1}] = k[\mathfrak{S}_{r_0} \times \mathfrak{S}_{r_1}] \] with the (non-unitary) $k$\nobreakdash-\hspace{0pt} subalgebra $e_\pi k[\Gamma_r] e_\pi$ of $k[\Gamma_r]$. Similarly we may identify \[ \End_G(E_0{}\!^{\otimes r_0}) \otimes_k \End_G(E_1{}\!^{\otimes r_1}) = \End_G(E_0{}\!^{\otimes r_0} \otimes_k E_1{}\!^{\otimes r_1}) \] with the (non-unitary) $k$\nobreakdash-\hspace{0pt} subalgebra $\End_G(E_\pi)$ of $\End_G(E)$. Now given $\tau$ and $\tau'$ in $\mathfrak{S}_r$, the element $e_\pi \tau' x_{i,r} \tau^{-1} e_\pi$ is $0$ unless both $\tau$ and $\tau'$ send $[1,l_i + 1]$ into $[1,r_0]$ if $i = 0$ or into $[r_0+1,r]$ if $i = 1$. With the above identifications, $e_\pi I e_\pi$ is thus the ideal of $k[\mathfrak{S}_{r_0}] \otimes_k k[\mathfrak{S}_{r_1}]$ generated by $y_0 \otimes 1$ and $1 \otimes y_1$, where $y_i$ is $a_{i,l_i+1}$ in $k[\mathfrak{S}_{l_i +1}] \subset k[\mathfrak{S}_{r_i}]$ if $r_i > l_i$ and $y_i = 0$ otherwise. Further \eqref{e:semidirhompi} is the tensor product of the homomorphisms \begin{equation}\label{e:symhomi} k[\mathfrak{S}_{r_i}] \to \End_G(E_i{}\!^{\otimes r_i}) \end{equation} of $k$\nobreakdash-\hspace{0pt} algebras sending $\tau \in \mathfrak{S}_{r_i}$ to $E_i{}\!^{\otimes \tau}$ in $\REP(G,\rho)$. It will thus suffice to prove that \eqref{e:symhomi} is surjective with kernel generated by $y_i$. If $i = 0$, \eqref{e:symhomi} may be identified with the homomorphism defined by the action of $\mathfrak{S}_{r_i}$ by symmetries on the $r_i$th tensor power in $\REP(GL_{l_i})$ of the standard representation of $GL_{l_i}$, while if $i = 1$, the composite of the automorphism $\tau \mapsto \mathrm{sgn}(\tau) \tau$ of $k[\mathfrak{S}_{r_i}]$ with \eqref{e:symhomi} may be so identified. The required result is thus classical (e.g. \cite{FulHar}, Theorem~6.3). \end{proof} \section{Duals}\label{s:dual} Let $\mathcal C$ be a $k$\nobreakdash-\hspace{0pt} tensor category. 
By a duality pairing in $\mathcal C$ we mean a quadruple $(L,L^\vee,\eta,\varepsilon)$ consisting of objects $L$ and $L^\vee$ of $\mathcal C$ and morphisms $\eta:\mathbf 1 \to L^\vee \otimes L$, the unit, and $\varepsilon:L \otimes L^\vee \to \mathbf 1$, the counit, satisfying triangular identities analogous to those of an adjunction (\cite{Mac}, p.~85). Explicitly, it is required that, modulo associativities, the composite of $L \otimes \eta$ with $\varepsilon \otimes L$ should be $1_L$, and of $\eta \otimes L^\vee$ with $L^\vee \otimes \varepsilon$ should be $1_{L^\vee}$. When such an $(L,L^\vee,\eta,\varepsilon)$ exists for a given $L$, it is said to be a duality pairing for $L$, and $L$ is said to be dualisable, and $L^\vee$ to be dual to $L$. We then have a dual pairing $(L^\vee,L,\widetilde{\eta},\widetilde{\varepsilon})$ for $L^\vee$, with $\widetilde{\eta}$ and $\widetilde{\varepsilon}$ obtained from $\eta$ and $\varepsilon$ by composing with the appropriate symmetries. In verifying the properties of duals recalled below, it is useful to reduce to the case where $\mathcal C$ is strict, i.e.\ where all associativities of $\mathcal C$ are identities. This can be done by taking (see \cite{Mac},~XI~3, Theorem~1) a $k$\nobreakdash-\hspace{0pt} linear strong symmetric monoidal functor (\cite{Mac},~XI~2) $\mathcal C \to \mathcal C'$ giving an equivalence to a strict $k$\nobreakdash-\hspace{0pt} tensor category $\mathcal C'$. Suppose given duality pairings $(L,L^\vee,\eta,\varepsilon)$ for $L$ and $(L',L'{}^\vee,\eta',\varepsilon')$ for $L'$. Then we have a tensor product duality pairing for $L \otimes L'$, with dual $L^\vee \otimes L'{}^\vee$, and unit and counit obtained from $\eta \otimes \eta'$ and $\varepsilon \otimes \varepsilon'$ by composing with the appropriate symmetries. 
Further any morphism $f:L \to L'$ has a transpose $f^\vee:L'{}^\vee \to L^\vee$, characterised by the condition \[ \varepsilon \circ (L \otimes f^\vee) = \varepsilon' \circ (f \otimes L'{}^\vee), \] or by a similar condition using $\eta$ and $\eta'$. Explicitly, $f^\vee$ is given modulo associativities by the composite of $\eta \otimes L'{}^\vee$ with $L^\vee \otimes f \otimes L'{}^\vee$ and $L^\vee \otimes \varepsilon'$. We have $(1_L)^\vee = 1_{L^\vee}$ and $(f' \circ f)^\vee = f^\vee \circ f'{}^\vee$, and, with the transpose of $f^\vee$ taken using the dual pairing, we have $f^{\vee \vee} = f$. In particular taking $L = L'$ shows that a duality pairing for $L$ is unique up to unique isomorphism. Let $L$ be a dualisable object of $\mathcal C$. Then we have a $k$\nobreakdash-\hspace{0pt} linear map \[ \tr_L:\Hom_\mathcal C(N \otimes L,N' \otimes L) \xrightarrow{\sim} \Hom_\mathcal C(N,N'), \] natural in $N$ and $N'$, which sends $f$ to its contraction $\tr_L(f)$ with respect to $L$, defined as follows. Modulo associativities, $\tr_L(f)$ is the composite of $N \otimes \widetilde{\eta}$ with $f \otimes L^\vee$ and $N' \otimes \varepsilon$, with $L^\vee$ and $\varepsilon$ as above and $\widetilde{\eta}$ the composite of $\eta$ with the symmetry interchanging $L^\vee$ and $L$. It does not depend on the choice of duality pairing for $L$. When $N = N' = \mathbf 1$, the contraction $\tr_L(f)$ is the trace $\tr(f)$ of the endomorphism $f$ of $L$. The rank of $L$ is defined as $\tr(1_L)$. Modulo associativities, $\tr_{L \otimes L'}$ is given by successive contraction with respect to $L'$ and $L$, and $\tr_L$ commutes with $M \otimes -$. By the appropriate triangular identity for $L$ we have \begin{equation}\label{e:gcomp} g'' \circ g' = \tr_L((g'' \otimes g') \circ \sigma) \end{equation} for $g':M' \to L$ and $g'':L \to M''$, with $\sigma$ the symmetry interchanging $M'$ and $L$. 
Let $L$ be a dualisable object of $\mathcal C$, and let $\tau$ be a permutation of $[1,r+1]$ and $f_1,f_2, \dots ,f_{r+1}$ be endomorphisms of $L$. Write $\tau'$ for the permutation of $[1,r]$ obtained by omitting $r+1$ from the cycle of $\tau$ containing it, and define endomorphisms $c$ of $\mathbf 1$ and $f'{}\!_1,f'{}\!_2, \dots ,f'{}\!_r$ of $L$ as follows. If $\tau$ leaves $r+1$ fixed, then $c = \tr(f_{r+1})$ and $f'{}\!_i = f_i$ for $i\le r$. If $\tau$ sends $r+1$ to $i_0 \le r$, then $c = 1$, and $f'{}\!_i$ for $i\le r$ is $f_i$ when $i \ne i_0$ and $f_{i_0} \circ f_{r+1}$ when $i = i_0$. We then have \begin{equation}\label{e:symcontr} \tr_L((f_1 \otimes f_2 \otimes \dots \otimes f_{r+1}) \circ L^{\otimes \tau}) = c ((f'{}\!_1 \otimes f'{}\!_2 \otimes \dots \otimes f'{}\!_r) \circ L^{\otimes \tau'}). \end{equation} To see this, reduce to the case where $\tau$ leaves all but the last two elements of $[1,r+1]$ fixed, by composing on the left and right with appropriate morphisms $L^{\otimes \tau_0} \otimes L$ with $\tau_0$ a permutation of $[1,r]$. Let $L$, $L'$, $M$ and $M'$ be objects in $\mathcal C$, and $(L,L^\vee,\eta,\varepsilon)$ and $(L',L'{}^\vee,\eta',\varepsilon')$ be duality pairings for $L$ and $L'$. Then we have a canonical isomorphism \[ \Hom_\mathcal C(M,M' \otimes L) \xrightarrow{\sim} \Hom_\mathcal C(M \otimes L^\vee,M') \] which modulo associativities sends $f:M \to M'\otimes L$ to the composite of $f \otimes L^\vee$ with $M' \otimes \varepsilon$. Its inverse is defined using $\eta$. We also have a canonical isomorphism \[ \Hom_\mathcal C(L' \otimes M,M') \xrightarrow{\sim} \Hom_\mathcal C(M,L'{}^\vee \otimes M') \] defined using $\eta'$. 
Replacing $M$ by $L' \otimes M$ in the first of these isomorphisms and by $M \otimes L^\vee$ in the second, and using the symmetries interchanging $M$ and $L'$ and $L'{}^\vee$ and $M'$, then gives a canonical isomorphism \[ \delta_{M,L;M',L'}: \Hom_\mathcal C(M \otimes L',M' \otimes L) \xrightarrow{\sim} \Hom_\mathcal C(M \otimes L^\vee,M' \otimes L'{}^\vee). \] Modulo associativities, $\delta_{M,L;M',L'}$ sends $f$ to the composite of $M \otimes \widetilde{\eta}' \otimes L^\vee$, the tensor product of $f$ with the symmetry interchanging $L'{}^\vee$ and $L^\vee$, and $M' \otimes \varepsilon \otimes L'{}^\vee$, where $\widetilde{\eta}'$ is $\eta'$ composed with the symmetry interchanging $L'{}^\vee$ and $L'$. With the transpose taken using the chosen duality pairings for $L$ and $L'$, we have \begin{equation}\label{e:deltahg} \delta_{M,L;M',L'}(h \otimes g) = h \otimes g^\vee. \end{equation} With the duality pairing $(\mathbf 1,\mathbf 1,1_\mathbf 1,1_\mathbf 1)$ for $\mathbf 1$, we have \begin{equation}\label{e:deltaf} \delta_{M,L;\mathbf 1,\mathbf 1}(f) = \varepsilon \circ (f \otimes L^\vee). \end{equation} With the tensor product duality pairings for $L_1 \otimes L_2$ and $L_1{}\!' \otimes L_2{}\!'$, we have \begin{multline}\label{e:deltatens} \sigma''' \circ (\delta_{M_1,L_1;M_1{}\!',L_1{}\!'}(f_1) \otimes \delta_{M_2,L_2;M_2{}\!',L_2{}\!'}(f_2)) \circ \sigma'' = \\ = \delta_{M_1 \otimes M_2,L_1 \otimes L_2;M_1{}\!' \otimes M_2{}\!',L_1{}\!' \otimes L_2{}\!'} (\sigma' \circ (f_1 \otimes f_2) \circ \sigma) \end{multline} where each of $\sigma$, $\sigma'$, $\sigma''$ and $\sigma'''$ is a symmetry interchanging the middle two factors in a tensor product $(- \otimes -) \otimes (- \otimes -)$. 
If $M'$ is dualisable, we have \begin{equation}\label{e:deltaMcomp} \delta_{M',L';M'',L''}(f') \circ \delta_{M,L;M',L'}(f) = \delta_{M,L;M'',L''}(\tr_{M' \otimes L'} (\sigma_2 \circ (f' \otimes f) \circ \sigma_1)) \end{equation} for $\sigma_1$ the symmetry interchanging $M$ and $M'$ and $\sigma_2$ the symmetry interchanging $L'$ and $L$. This can be seen by showing that modulo associativities both sides of \eqref{e:deltaMcomp} coincide with a morphism obtained from \[ f' \otimes f \otimes L^\vee \otimes L''{}^\vee \otimes M'{}^\vee \otimes L'{}^\vee \] as follows: compose on the left and right with appropriate symmetries, then on the left with the tensor product of $M'' \otimes L''{}^\vee$ and the counits for $L$, $L'$ and $M'$ and on the right with the tensor product of $M \otimes L^\vee$ with the units for $L''$, $L'$ and $M'$. To show this in the case of the left hand side of \eqref{e:deltaMcomp}, write it as a contraction with respect to $M' \otimes L'{}^\vee$ using \eqref{e:gcomp} and contract first with respect to $L'{}^\vee$, using the triangular identity. With the duality pairing $(\mathbf 1,\mathbf 1,1_\mathbf 1,1_\mathbf 1)$ for $\mathbf 1$ and the tensor product duality pairing for $L \otimes N$, we have \begin{equation}\label{e:deltaNcomp} \delta_{M,N;\mathbf 1,\mathbf 1}(g \circ f) = (\delta_{M,L;\mathbf 1,\mathbf 1}(f) \otimes \delta_{L,N;\mathbf 1,\mathbf 1}(g)) \circ \sigma \circ \delta_{M,N;M \otimes L,L \otimes N}(\alpha), \end{equation} with $\alpha:M \otimes (L \otimes N) \xrightarrow{\sim} (M \otimes L) \otimes N$ the associativity and $\sigma$ the symmetry interchanging $L$ and $L^\vee$ in the tensor product of $M \otimes L$ and $L^\vee \otimes N^\vee$. Indeed modulo associativities $\sigma \circ \delta_{M,N;M \otimes L,L \otimes N}(\alpha)$ is $1_M \otimes \eta \otimes 1_{N^\vee}$ by the triangular identity for $N$, and \eqref{e:deltaNcomp} then follows by the triangular identity for $L$. 
Let $(L,L^\vee,\eta,\varepsilon)$ be a duality pairing for the object $L$ of $\mathcal C$. Its $r$th tensor power $(L^{\otimes r},(L^\vee)^{\otimes r},\eta_r,\varepsilon_r)$ is a duality pairing for $L^{\otimes r}$. We write \[ L^{r,s} = L^{\otimes r} \otimes (L^\vee)^{\otimes s}. \] Then $L^{r,0} = L^{\otimes r}$. We define a $k$\nobreakdash-\hspace{0pt} bilinear product $\widetilde{\otimes}$ on morphisms between the $L^{r,s}$ by requiring that the square \begin{equation}\label{e:tildedef} \begin{CD} L^{r_1,s_1} \otimes L^{r_2,s_2} @>{\sim}>> L^{r_1+r_2,s_1+s_2} \\ @V{f_1 \otimes f_2}VV @VV{f_1 \mathbin{\widetilde{\otimes}} f_2}V \\ L^{r_1{}\!',s_1{}\!'} \otimes L^{r_2{}\!',s_2{}\!'} @>{\sim}>> L^{r_1{}\!'+r_2{}\!',s_1{}\!'+s_2{}\!'} \end{CD} \end{equation} commute, with the top isomorphism the symmetry interchanging the two factors $(L^\vee)^{\otimes s_1}$ and $L^{\otimes r_2}$ and the bottom that interchanging $(L^\vee)^{\otimes s'{}\!_1}$ and $L^{\otimes r'{}\!_2}$. Then $\widetilde{\otimes}$ preserves composites, is associative, and we have \begin{equation}\label{e:tildecom} f_2 \mathbin{\widetilde{\otimes}} f_1 = \sigma' \circ (f_1 \mathbin{\widetilde{\otimes}} f_2) \circ \sigma^{-1}, \end{equation} where $\sigma$ interchanges the first $r_1$ with the last $r_2$ factors $L$ and the first $s_1$ with the last $s_2$ factors $L^\vee$ of $ L^{r_1+r_2,s_1+s_2}$, and similarly for $\sigma'$. We define an isomorphism \begin{equation}\label{e:deltaLdef} \delta_{L;r,s;r',s'}: \Hom_\mathcal C(L^{\otimes (r+s')},L^{\otimes (r'+s)}) \xrightarrow{\sim} \Hom_\mathcal C(L^{r,s},L^{r',s'}) \end{equation} by taking $L^{\otimes r},L^{\otimes s},L^{\otimes r'},L^{\otimes s'}$ for $M,L,M',L'$ in $\delta_{M,L;M',L'}$. It follows from \eqref{e:deltahg} that \begin{equation}\label{e:deltaLhg} \delta_{L;r,s;r',s'}(h \otimes g) = h \otimes g^\vee. 
\end{equation} It follows from \eqref{e:deltaf} that \begin{equation}\label{e:deltaLf} \delta_{L;r,s;0,0}(f) = \varepsilon_s \circ (f \otimes (L^\vee)^{\otimes s}). \end{equation} By \eqref{e:deltatens}, we have \begin{multline}\label{e:deltaLtens} \delta_{L;r_1,s_1;r_1{}\!',s_1{}\!'}(f_1) \mathbin{\widetilde{\otimes}} \delta_{L;r_2,s_2;r_2{}\!',s_2{}\!'}(f_2) = \\ = \delta_{L;r_1 + r_2,s_1 + s_2;r_1{}\!' + r_2{}\!',s_1{}\!' + s_2{}\!'} (\sigma' \circ (f_1 \otimes f_2) \circ \sigma) \end{multline} for appropriate symmetries $\sigma$ and $\sigma'$. By \eqref{e:deltaMcomp}, we have \begin{equation}\label{e:deltaLcomp} \delta_{L;r',s';r'',s''}(f') \circ \delta_{L;r,s;r',s'}(f) = \delta_{L;r,s;r'',s''}(\tr_{L^{\otimes (r'+s')}} (\sigma_2 \circ (f' \otimes f) \circ \sigma_1)) \end{equation} for appropriate symmetries $\sigma_1$ and $\sigma_2$. We have \begin{equation}\label{e:deltacomp} \delta_{L;r,t;0,0}(g \circ f) = (\delta_{L;r,s;0,0}(f) \mathbin{\widetilde{\otimes}} \delta_{L;s,t;0,0}(g)) \circ \delta_{L;r,t;r+s,s+t}(1_{L^{\otimes (r+s+t)}}), \end{equation} by \eqref{e:deltaNcomp}. Let $G$ be a linear algebraic group over $k$ and $\rho$ be a central $k$\nobreakdash-\hspace{0pt} point of $G$ with $\rho^2 = 1$. Let $E$ be a finite-dimensional $G$\nobreakdash-\hspace{0pt} module and $R$ be a commutative algebra in $\REP(G,\rho)$. Then $E$ in $\REP(G,\rho)$ and $E_R$ in the $k$\nobreakdash-\hspace{0pt} tensor category of $(G,R)$\nobreakdash-\hspace{0pt} modules are dualisable. Suppose chosen duality pairings for $E$ and $E_R$. Then we have a $G$\nobreakdash-\hspace{0pt} module $E^{r,s}$ and a $(G,R)$\nobreakdash-\hspace{0pt} module $(E_R)^{r,s}$ for every $r$ and $s$. We have canonical embeddings $E \to E_R$ and $E^\vee \to (E_R)^\vee$, which are compatible with the units and counits of the chosen duality pairings for $E$ and $E_R$.
They define a canonical embedding $E^{r,s} \to (E_R)^{r,s}$, which induces an isomorphism of $(G,R)$\nobreakdash-\hspace{0pt} modules $(E^{r,s})_R \xrightarrow{\sim} (E_R)^{r,s}$. Given $u:E^{r,s} \to E^{r',s'}$, we write $u_{R;r,s;r',s'}$ for the unique morphism of $(G,R)$\nobreakdash-\hspace{0pt} modules for which the square \begin{equation}\label{e:daggerdef} \begin{CD} (E_R)^{r,s} @>{u_{R;r,s;r',s'}}>> (E_R)^{r',s'} \\ @AAA @AAA \\ E^{r,s} @>{u}>> E^{r',s'} \end{CD} \end{equation} commutes, with the vertical arrows the canonical embeddings. Then $(-)_{R;r,s;r',s'}$ preserves identities and composites, counits $E^{r,r} \to E^{0,0}$ and $(E_R)^{r,r} \to (E_R)^{0,0}$ and (with the identification $E^{r,0} = E^{\otimes r}$) commutes with the isomorphisms $\delta_E$ and $\delta_{E_R}$. For each $r$ and $s$ we have an isomorphism \begin{equation}\label{e:psidef} \psi_{r,s}:\Hom_{G,R}((E_R)^{r,s},R) \xrightarrow{\sim} \Hom_G(E^{r,s},R), \end{equation} given by composing with the canonical embedding $E^{r,s} \to (E_R)^{r,s}$. Then \begin{equation}\label{e:psinat} \psi_{r,s}(w' \circ u_{R;r,s;r',s'}) = \psi_{r',s'}(w') \circ u \end{equation} for every $w':(E_R)^{r',s'} \to R$ and $u:E^{r,s} \to E^{r',s'}$. Suppose that $G$ is reductive and that $R^G = k$, so that $R$ has a unique maximal $G$\nobreakdash-\hspace{0pt} ideal $J$. Let $N$ be a dualisable $(G,R)$\nobreakdash-\hspace{0pt} module and $f:R \to N$ be a morphism of $(G,R)$\nobreakdash-\hspace{0pt} modules which does not factor through $J N$. Then $f$ has a left inverse. Indeed $f^\vee$ does not factor through $J$, because $f$ is the composite of the unit for $N^\vee$ with $N \otimes_R f^\vee$. Hence $f^\vee$ is surjective, and there is an $x$ in its source fixed by $G$ with $f^\vee(x) = 1$. Thus $f^\vee$ has a unique right inverse $g$ with $g(1) = x$, and $g^\vee$ is left inverse to $f = f^{\vee \vee}$. 
\section{Kimura objects}\label{s:Kimobj} Let $k$ be a field of characteristic $0$ and $\mathcal C$ be a $k$\nobreakdash-\hspace{0pt} tensor category with $\End_\mathcal C(\mathbf 1) = k$. An object $L$ of $\mathcal C$ will be called positive (resp.\ negative) if it is dualisable and $\bigwedge^{r+1} L$ (resp.\ $S^{r+1} L$) is $0$ for some $r$. An object of $\mathcal C$ will be called a Kimura object if it is the direct sum of a positive and a negative object of $\mathcal C$. Let $L$ be a Kimura object of $\mathcal C$. Then $L = L_0 \oplus L_1$ with $L_0$ positive and $L_1$ negative. Denote by $l_0$ (resp.\ $l_1$) the least $r$ such that $\bigwedge^{r+1} L_0$ (resp.\ $S^{r+1} L_1$) is $0$, and let $G$ and $E$ be as in \eqref{e:Gdef} and \eqref{e:Edef}, and $\rho$ be the central $k$\nobreakdash-\hspace{0pt} point of $G$ which acts as $(-1)^i$ on $E_i$. The goal of this section is to construct a commutative algebra $R$ in $\REP(G,\rho)$ and an isomorphism \begin{equation}\label{e:xiiso} \xi_{r,s}:\Hom_{G,R}((E_R)^{\otimes r},(E_R)^{\otimes s}) \xrightarrow{\sim} \Hom_\mathcal C(L^{\otimes r},L^{\otimes s}) \end{equation} for every $r$ and $s$, such that the $\xi$ preserve composites and symmetries and are compatible with $\otimes_R$ and $\otimes$. Given an object $M$ of $\mathcal C$, write $a_{M,0,r}$ (resp.\ $a_{M,1,r}$) for the image of the antisymmetrising (resp.\ symmetrising) idempotent of $k[\mathfrak{S}_r]$ under the $k$\nobreakdash-\hspace{0pt} homomorphism to $\End(M^{\otimes r})$ that sends $\tau$ in $\mathfrak{S}_r$ to $M^{\otimes \tau}$. If $M$ is dualisable of rank $d$, then applying \eqref{e:symcontr} with the $f_j$ the identities shows that \[ (r+1)\tr_M(a_{M,i,r+1}) = (d - (-1)^i r)a_{M,i,r} \] for $i = 0,1$. If $M$ is positive (resp.\ negative), it follows that $d$ (resp.\ $-d$) is the least $r$ for which $\bigwedge^{r+1} M$ (resp.\ $S^{r+1} M$) is $0$. Thus $L_i$ has rank $(-1)^i l_i$. 
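As a check, take for $\mathcal C$ the category of finite-dimensional $k$\nobreakdash-\hspace{0pt} vector spaces and $M$ of dimension $d$, so that $\tr(a_{M,0,r}) = \dim \bigwedge^r M = \binom{d}{r}$. Taking traces of both sides of the displayed identity with $i = 0$ gives
\[
(r+1)\binom{d}{r+1} = (d-r)\binom{d}{r},
\]
the standard recursion for binomial coefficients; similarly for $i = 1$, with $\tr(a_{M,1,r}) = \dim S^r M = \binom{d+r-1}{r}$.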
Write $b$ for the automorphism of $L$ that sends $L_i$ to $L_i$ and acts on it as $(-1)^i$. Then for every $r$, the group $\Gamma_r$ of \eqref{e:semidir} acts on $L^{\otimes r}$ with the action of $(\mathbf Z/2)^r$ the $r$th tensor power of the action $i \mapsto b^i$ of $\mathbf Z/2$ on $L$, and the action of $\mathfrak{S}_r$ that given by $\tau \mapsto L^{\otimes \tau}$. Thus we obtain a homomorphism \[ \alpha_r:k[\Gamma_r] \to \End_\mathcal C(L^{\otimes r}) \] of $k$\nobreakdash-\hspace{0pt} algebras. If $l_i < r$, then $\alpha_r$ sends the element $x_{i,r}$ of \eqref{e:xir} to the projection onto the direct summand $\bigwedge^{l_0 + 1}L_0 \otimes L^{\otimes (r-l_0-1)}$ when $i = 0$ and $S^{l_1 + 1}L_1 \otimes L^{\otimes (r-l_1-1)}$ when $i = 1$. Since $\bigwedge^{l_0 + 1}L_0$ and $S^{l_1 + 1}L_1$ are $0$, both $x_{0,r}$ and $x_{1,r}$ lie in the kernel of $\alpha_r$. If we write $\beta_r$ for \eqref{e:semidirhom}, it follows by Lemma~\ref{l:GL}~\ref{i:homsurj} that the kernel of $\alpha_r$ contains that of $\beta_r$. Hence by Lemma~\ref{l:GL} there is for each $r$ and $r'$ a unique $k$\nobreakdash-\hspace{0pt} linear map \[ \varphi_{r;r'}:\Hom_G(E^{\otimes r},E^{\otimes r'}) \to \Hom_\mathcal C(L^{\otimes r},L^{\otimes r'}) \] such that \[ \alpha_r = \varphi_{r;r} \circ \beta_r \] for every $r$. By construction, the $\varphi_{r;r'}$ preserve symmetries, identities and composites, and they are compatible with $\otimes_k$ and $\otimes$. Applying \eqref{e:symcontr} $t$ times shows that for $v:E^{\otimes (r+t)} \to E^{\otimes (r'+t)}$ we have \begin{equation}\label{e:phicontr} \varphi_{r;r'}(\tr_{E^{\otimes t}}(v)) = \tr_{L^{\otimes t}}(\varphi_{r + t;r' + t}(v)), \end{equation} because $\tr(\rho^i) = \tr(b^i) = l_0 - (-1)^il_1$.
For every $r,s$ and $r',s'$, we define a $k$\nobreakdash-\hspace{0pt} linear map $\varphi_{r,s;r',s'}$ by requiring that the square \[ \begin{CD} \Hom_G(E^{r,s},E^{r',s'}) @>{\varphi_{r,s;r',s'}}>> \Hom_\mathcal C(L^{r,s},L^{r',s'}) \\ @A{\delta_{E;r,s;r',s'}}AA @AA{\delta_{L;r,s;r',s'}}A \\ \Hom_G(E^{\otimes (r+s')},E^{\otimes (r'+s)}) @>{\varphi_{r+s';r'+s}}>> \Hom_\mathcal C(L^{\otimes (r+s')},L^{\otimes (r'+s)}) \end{CD} \] commute, with the $\delta$ the isomorphisms of \eqref{e:deltaLdef}. Then by \eqref{e:deltaLhg} the $\varphi_{r,s;r',s'}$ preserve identities, and by \eqref{e:deltaLcomp} and \eqref{e:phicontr} they preserve composites. By \eqref{e:deltaLtens}, they are compatible with the bilinear products $\widetilde{\otimes}_k$ on $G$\nobreakdash-\hspace{0pt} homomorphisms between the $E^{r,s}$ and $\widetilde{\otimes}$ on morphisms between the $L^{r,s}$, defined as in \eqref{e:tildedef}. By \eqref{e:deltaLhg}, they send symmetries permuting the factors $E$ or $E^\vee$ of $E^{r,s}$ to the corresponding symmetries of $L^{r,s}$. We now define as follows a commutative algebra $R$ in $\REP(G,\rho)$. Consider the small category $\mathcal L$ whose objects are triples $(r,s,f)$ with $r$ and $s$ integers $\ge 0$ and $f:L^{r,s} \to \mathbf 1$, where a morphism from $(r,s,f)$ to $(r',s',f')$ in $\mathcal L$ is a morphism $u:E^{r,s} \to E^{r',s'}$ such that \[ f = f' \circ \varphi_{r,s;r',s'}(u). \] Then we define $R$ as the colimit \[ R = \colim_{(r,s,f) \in \mathcal L} E^{r,s} \] in $\REP(G,\rho)$. Write the colimit injection at $(r,s,f)$ as \[ i_{(r,s,f)}:E^{r,s} \to R. \] We define the unit $\mathbf 1 \to R$ of $R$ as $i_{(0,0,1_\mathbf 1)}$.
We define the multiplication $R \otimes_k R \to R$ by requiring that for every $((r_1,s_1,f_1),(r_2,s_2,f_2))$ in $\mathcal L \times \mathcal L$ the square \begin{equation}\label{e:musquare} \begin{CD} E^{r_1,s_1} \otimes_k E^{r_2,s_2} @>{\sim}>> E^{r_1+r_2,s_1+s_2} \\ @V{i_{(r_1,s_1,f_1)} \otimes_k i_{(r_2,s_2,f_2)}}VV @VV{i_{(r_1+r_2,s_1+s_2,f_1 \mathbin{\widetilde{\otimes}} f_2)}}V \\ R \otimes_k R @>>> R \end{CD} \end{equation} should commute, where the top isomorphism is that of \eqref{e:tildedef} with $E$ for $L$. Such an $R \otimes_k R \to R$ exists and is unique because the left vertical arrows of the squares \eqref{e:musquare} form a colimiting cone by the fact that $\otimes_k$ preserves colimits, while their top right legs form a cone by the compatibility of the $\varphi_{r,s;r',s'}$ with $\widetilde{\otimes}_k$ and $\widetilde{\otimes}$. The associativity of the multiplication can be checked by writing $R \otimes_k R \otimes_k R$ as a colimit over $\mathcal L \times \mathcal L \times \mathcal L$ and using the associativity of $\widetilde{\otimes}$. The commutativity follows from \eqref{e:tildecom} and the compatibility of the $\varphi_{r,s;r',s'}$ with the symmetries. Since $G$ is reductive, each $\Hom_G(E^{r,s},-)$ preserves colimits. Hence the \[ \Hom_G(E^{r,s},i_{(r',s',f')}):\Hom_G(E^{r,s},E^{r',s'}) \to \Hom_G(E^{r,s},R) \] form a colimiting cone of $k$\nobreakdash-\hspace{0pt} vector spaces. Thus for every $r$ and $s$ there is a unique homomorphism \[ \theta_{r,s}:\Hom_G(E^{r,s},R) \to \Hom_\mathcal C(L^{r,s},\mathbf 1) \] whose composite with $\Hom_G(E^{r,s},i_{(r',s',f')})$ sends $u:E^{r,s} \to E^{r',s'}$ to \[ f' \circ \varphi_{r,s;r',s'}(u). \] Further $\theta_{r,s}$ is an isomorphism, with inverse sending $f:L^{r,s} \to \mathbf 1$ to $i_{(r,s,f)}$. Thus every $E^{r,s} \to R$ can be written uniquely in the form $i_{(r,s,f)}$.
It follows that \begin{equation}\label{e:theta0nat} \theta_{r,s}(v' \circ u) = \theta_{r',s'}(v') \circ \varphi_{r,s;r',s'}(u) \end{equation} for $v':E^{r',s'} \to R$, that $\theta_{0,0}$ sends the unit $k \to R$ of $R$ to $1_{\mathbf 1}$, and that \begin{equation}\label{e:theta0tens} \theta_{r_1+r_2,s_1+s_2}(v) = \theta_{r_1,s_1}(v_1) \mathbin{\widetilde{\otimes}} \theta_{r_2,s_2}(v_2) \end{equation} for $v_1:E^{r_1,s_1} \to R$ and $v_2:E^{r_2,s_2} \to R$, where $v$ is defined by a diagram of the form \eqref{e:musquare} with left arrow $v_1 \otimes_k v_2$ and right arrow $v$. Composing the isomorphisms $\psi_{r,s}$ of \eqref{e:psidef} and $\theta_{r,s}$ gives an isomorphism \[ \widehat{\theta}_{r,s} = \theta_{r,s} \circ \psi_{r,s}: \Hom_{G,R}((E_R)^{r,s},R) \xrightarrow{\sim} \Hom_\mathcal C(L^{r,s},\mathbf 1). \] Then with $u_{R;r,s;r',s'}$ as in \eqref{e:daggerdef}, we have by \eqref{e:psinat} and \eqref{e:theta0nat} \begin{equation}\label{e:thetanat} \widehat{\theta}_{r,s}(w' \circ u_{R;r,s;r',s'}) = \widehat{\theta}_{r',s'}(w') \circ \varphi_{r,s;r',s'}(u) \end{equation} for every $w':(E_R)^{r',s'} \to R$ and $u:E^{r,s} \to E^{r',s'}$. Also $\widehat{\theta}_{0,0}(1_R) = 1_{\mathbf 1}$, and \begin{equation}\label{e:thetatens} \widehat{\theta}_{r_1+r_2,s_1+s_2}(w_1 \mathbin{\widetilde{\otimes}}_R w_2) = \widehat{\theta}_{r_1,s_1}(w_1) \mathbin{\widetilde{\otimes}} \widehat{\theta}_{r_2,s_2}(w_2) \end{equation} for every $w_1:(E_R)^{r_1,s_1} \to R$ and $w_2:(E_R)^{r_2,s_2} \to R$, by \eqref{e:theta0tens}. We now define the isomorphism \eqref{e:xiiso} by requiring that the square \[ \begin{CD} \Hom_{G,R}((E_R)^{\otimes r},(E_R)^{\otimes s}) @>{\xi_{r,s}}>> \Hom_\mathcal C(L^{\otimes r},L^{\otimes s}) \\ @V{\delta_{E_R;r,s;0,0}}VV @VV{\delta_{L;r,s;0,0}}V\\ \Hom_{G,R}((E_R)^{r,s},R) @>{\widehat{\theta}_{r,s}}>> \Hom_\mathcal C(L^{r,s},\mathbf 1) \end{CD} \] commute.
The $\xi$ preserve composites by \eqref{e:deltacomp}, \eqref{e:thetanat}, \eqref{e:thetatens}, and the fact that $(-)_{R;r,s;r',s'}$ preserves identities and is compatible with $\delta_E$ and $\delta_{E_R}$. They are compatible with $\otimes_R$ and $\otimes$ by \eqref{e:deltaLtens}, where the relevant $\sigma$ and $\sigma'$ reduce to associativities, and \eqref{e:thetatens}. They are compatible with the symmetries by \eqref{e:thetanat} with $w' = 1_R$ and $u$ the composite of $\sigma \otimes_k (E^\vee)^{\otimes r}$, for $\sigma$ a symmetry of $E^{\otimes r}$, with the counit $E^{r,r} \to k$, using \eqref{e:deltaLf} and the compatibility of $(-)_{R;r,s;r',s'}$ with symmetries, composites, and counits. \section{Kimura varieties}\label{s:Kimvar} We denote by $\mathcal M_{\sim}(F)$ the category of ungraded $\mathbf Q$\nobreakdash-\hspace{0pt} linear motives over $F$ for the equivalence relation ${\sim}$. It is a $\mathbf Q$\nobreakdash-\hspace{0pt} tensor category. There is a contravariant functor $h$ from the category $\mathcal V_F$ of smooth projective varieties over $F$ to $\mathcal M_{\sim}(F)$, which sends products in $\mathcal V_F$ to tensor products in $\mathcal M_{\sim}(F)$. We then have \begin{equation}\label{e:HomChow} \Hom_{\mathcal M_{\sim}(F)}(h(X'),h(X)) = CH(X' \times_F X)_\mathbf Q/{\sim}, \end{equation} and the composite $z \circ z'$ of $z':h(X'') \to h(X')$ with $z:h(X') \to h(X)$ is given by \[ z \circ z' = (\pr_{13})_*((\pr_{12})^*(z').(\pr_{23})^*(z)), \] where the projections are from $X'' \times_F X' \times_F X$. Further $h(q)$ for $q:X \to X'$ is the push forward of $1$ in $CH(X)_\mathbf Q/{\sim}$ along $X \to X' \times_F X$ with components $q$ and $1_X$. The images under $h$ of the structural morphism and diagonal of $X$ define on $h(X)$ a canonical structure of commutative algebra in $\mathcal M_{\sim}(F)$.
With this structure \eqref{e:HomChow} reduces when $X' = \Spec(F)$ to an equality of algebras \[ \Hom_{\mathcal M_{\sim}(F)}(\mathbf 1,h(X)) = CH(X)_\mathbf Q/{\sim}. \] Also $h(X)$ is canonically autodual: we have a canonical duality pairing \[ (h(X),h(X),\eta_X,\varepsilon_X), \] with both $\eta_X$ and $\varepsilon_X$ the class in $CH(X \times_F X)_\mathbf Q/{\sim}$ of the diagonal of $X$. The canonical duality pairing for $h(X \times_F X')$ is the tensor product of those for $h(X)$ and $h(X')$. The canonical duality pairings define a transpose $(-)^\vee$ for morphisms $h(X') \to h(X)$, given by pullback of cycles along the symmetry interchanging $X$ and $X'$. For $q:X \to X'$ and $z \in CH(X)_\mathbf Q/{\sim}$ and $z' \in CH(X')_\mathbf Q/{\sim}$, we have \begin{equation}\label{e:pull} q^*(z') = h(q) \circ z' \end{equation} and \begin{equation}\label{e:push} q_*(z) = h(q)^\vee \circ z. \end{equation} A \emph{Kimura variety for ${\sim}$} is a smooth projective variety $X$ over $F$ such that $h(X)$ is a Kimura object in $\mathcal M_{\sim}(F)$. If the motive of $X$ in the category of \emph{graded} motives for $\sim$ is a Kimura object, then $X$ is a Kimura variety for $\sim$. The converse also holds, as can be seen by factoring out the tensor ideals of tensor nilpotent morphisms, but this will not be needed. Let $X$ be a Kimura variety for ${\sim}$. We may apply the construction of Section~\ref{s:Kimobj} with $k = \mathbf Q$, $\mathcal C = \mathcal M_{\sim}(F)$ and $L = h(X)$. For appropriate $l_0$ and $l_1$, we then have with $G$, $E$ and $\rho$ as in Section~\ref{s:Kimobj} a commutative algebra $R$ in $\REP(G,\rho)$ and isomorphisms \[ \xi_{r,s}:\Hom_{G,R}((E_R)^{\otimes r},(E_R)^{\otimes s}) \xrightarrow{\sim} \Hom_{\mathcal M_{\sim}(F)}(h(X)^{\otimes r},h(X)^{\otimes s}) \] which are compatible with composites, tensor products, and symmetries.
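For example, taking for $q$ the structural morphism $X \to \Spec(F)$ in \eqref{e:push} gives
\[
q_*(z) = h(q)^\vee \circ z \in CH(\Spec(F))_\mathbf Q/{\sim} = \mathbf Q,
\]
the degree of the zero-dimensional component of $z$; this is the form in which push forwards along structural morphisms appear in Sections~\ref{s:finproof} and~\ref{s:numproof}.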
The homomorphisms of $R$\nobreakdash-\hspace{0pt} modules $\iota$ and $\mu$ with respective images under $\xi_{0,1}$ and $\xi_{2,1}$ the unit and multiplication of $h(X)$ define a structure of commutative $R$\nobreakdash-\hspace{0pt} algebra on $E_R$. Also the homomorphisms $\eta_1$ and $\varepsilon_1$ with respective images $\eta_X$ and $\varepsilon_X$ under $\xi_{0,2}$ and $\xi_{2,0}$ are the unit and counit of a duality pairing $(E_R,E_R,\eta_1,\varepsilon_1)$ for $E_R$. We denote by \[ ((E_R)^{\otimes r},(E_R)^{\otimes r},\eta_r,\varepsilon_r) \] its $r$th tensor power. Then $\xi_{0,2r}(\eta_r) = \eta_{X^r}$ and $\xi_{2r,0}(\varepsilon_r) = \varepsilon_{X^r}$. For any $(G,R)$\nobreakdash-\hspace{0pt} homomorphism $f$ from $(E_R)^{\otimes m}$ to $(E_R)^{\otimes l}$ we have \[ \xi_{l,m}(f^\vee) = \xi_{m,l}(f)^\vee, \] where the transpose of $f$ is taken using the duality pairings just defined. Further \[ \xi_{0,n}:\Hom_{G,R}(R,(E_R)^{\otimes n}) \xrightarrow{\sim} \Hom_{\mathcal M_{\sim}(F)}(\mathbf 1,h(X)^{\otimes n}) \] is an isomorphism of $\mathbf Q$\nobreakdash-\hspace{0pt} algebras. We note that \[ R^G = \Hom_{G,R}(R,R) = CH(\Spec(F))_\mathbf Q/{\sim} = \mathbf Q, \] by the isomorphism $\xi_{0,0}$.
It is to be shown that there is a $C$ in $\mathcal A$ which is graded, i.e.\ such that $(C_n)^i$ is a graded $\mathbf Q$\nobreakdash-\hspace{0pt} vector subspace of $CH(X^n)_\mathbf Q/{\sim}$ for each $n$ and $i$. For $\lambda \in \mathbf Q^*$, define an endomorphism $z \mapsto \lambda * z$ of the algebra $CH(X^n)_\mathbf Q/{\sim}$ by taking $\lambda * z = \lambda^j z$ when $z$ is homogeneous of degree $j$. Then the graded subspaces of $CH(X^n)_\mathbf Q/{\sim}$ are those that are stable under each $\lambda * -$. For each $C$ in $\mathcal A$ we have a $\lambda * C$ in $\mathcal A$ with $((\lambda * C)_n)^i$ the image under $\lambda * -$ of $(C_n)^i$. Indeed $(\lambda * C)_n$ contains $Z_n$ by the homogeneity assumption on $Z_n$, and $p_*$ sends $((\lambda * C)_l)^i$ to $((\lambda * C)_m)^i$ for $p$ as in \ref{i:pullpush}, because $C_l$ contains the classes of the equidimensional components of each factor $X$ of $X^l$ by the assumption on $Z_1$. The $C$ in $\mathcal A$ that are graded are then those fixed by each $\lambda * -$. Now if $\mathcal A$ is non-empty, it has a least element for the ordering of the $C$ by inclusion of the $(C_n)^i$. Such a least element will be fixed by the $\lambda * -$, and hence graded. It will thus suffice to show that $\mathcal A$ is non-empty. Let $G$, $E$, $\rho$, $R$, $\xi_{r,s}$, $\eta_r$, $\varepsilon_r$, $\iota$ and $\mu$ be as in Section~\ref{s:Kimvar}. 
With the identification \begin{equation}\label{e:CHXn} \Hom_{\mathcal M_{\sim}(F)}(\mathbf 1,h(X)^{\otimes n}) = CH(X^n)_\mathbf Q/{\sim}, \end{equation} there exists a finitely generated $G$\nobreakdash-\hspace{0pt} subalgebra $R'$ of $R$ such that if we write $\beta_{m,n}$ for the homomorphism \eqref{e:extscal} with $P = E$, then $(\xi_{0,n})^{-1}(Z_n)$ is contained in the image of $\beta_{0,n}$ for every $n$, and $\eta_1 = \beta_{0,2}(\eta'{}\!_1)$, $\varepsilon_1 = \beta_{2,0}(\varepsilon'{}\!_1)$, $\iota = \beta_{0,1}(\iota')$ and $\mu = \beta_{2,1}(\mu')$ for some $\varepsilon'{}\!_1$, $\eta'{}\!_1$, $\iota'$ and $\mu'$. We then have a duality pairing $(E_{R'},E_{R'},\eta'{}\!_1,\varepsilon'{}\!_1)$ for $E_{R'}$, and if its $r$th tensor power is \[ ((E_{R'})^{\otimes r},(E_{R'})^{\otimes r},\eta'{}\!_r,\varepsilon'{}\!_r), \] we have $\eta_r = \beta_{0,2r}(\eta'{}\!_r)$ and $\varepsilon_r = \beta_{2r,0}(\varepsilon'{}\!_r)$. Further $\iota'$ and $\mu'$ define a structure of commutative $(G,R')$\nobreakdash-\hspace{0pt} algebra on $E_{R'}$. The $\beta_{m,l}$, and hence their composites \[ \xi'{}\!_{m,l}:\Hom_{G,R'}((E_{R'})^{\otimes m},(E_{R'})^{\otimes l}) \to \Hom_{\mathcal M_{\sim}(F)}(h(X)^{\otimes m},h(X)^{\otimes l}) \] with the $\xi_{m,l}$, preserve identities, composition, tensor products, and transposes defined using the $\eta'{}\!_r$ and $\varepsilon'{}\!_r$. Further $\xi'{}\!_{0,n}$ is a homomorphism of $\mathbf Q$\nobreakdash-\hspace{0pt} algebras. We have $R'{}^G = R^G = \mathbf Q$. Thus by Lemma~\ref{l:repfin}~\ref{i:algfin}, the $\mathbf Q$\nobreakdash-\hspace{0pt} algebra \[ \Hom_{G,R'}(R',(E_{R'})^{\otimes n}) \xrightarrow{\sim} \Hom_G(k,(E_{R'})^{\otimes n}) = ((E_{R'})^{\otimes n})^G \] is finite-dimensional for every $n$. Denote by $J'$ the unique maximal $G$\nobreakdash-\hspace{0pt} ideal of $R'$.
Then we have for each $n$ a filtration of the $(G,R')$\nobreakdash-\hspace{0pt} algebra $(E_{R'})^{\otimes n}$ by the $G$\nobreakdash-\hspace{0pt} ideals \[ J'{}^r (E_{R'})^{\otimes n}, \] and hence a filtration of the $\mathbf Q$\nobreakdash-\hspace{0pt} algebra $\Hom_{G,R'}(R',(E_{R'})^{\otimes n})$ by the ideals \begin{equation}\label{e:homideal} \Hom_{G,R'}(R',J'{}^r(E_{R'})^{\otimes n}). \end{equation} Since \eqref{e:homideal} is isomorphic to $(J'{}^r(E_{R'})^{\otimes n})^G$, it is $0$ for $r$ large, by Lemma~\ref{l:repfin}~\ref{i:idealcompl}. We now define an element $C$ of $\mathcal A$ as follows. With the identification \eqref{e:CHXn}, take for $C_n$ the image of $\xi'{}\!_{0,n}$, and for $(C_n)^r$ the image under $\xi'{}\!_{0,n}$ of \eqref{e:homideal}. Then \ref{i:fin} holds. Let $z = \xi'{}\!_{0,n}(x)$ be an element of $C_n$ which does not lie in $(C_n)^1$. Then $x$ does not factor through $J'(E_{R'})^{\otimes n}$. As was seen at the end of Section~\ref{s:dual}, this implies that $x$ has a left inverse. Hence $z$ has a left inverse $y:h(X)^{\otimes n} \to \mathbf 1$. Identifying $y$ with an element of $CH(X^n)_\mathbf Q/{\sim}$, the composite $y \circ z = 1_\mathbf 1$ is the push forward of $y.z$ along the structural morphism of $X^n$. Thus $z$ is not numerically equivalent to $0$. The first statement of \ref{i:nilp} follows. The second statement of \ref{i:nilp} follows from the fact that \eqref{e:homideal} is $0$ for $r$ large. Let $p:X^l \to X^m$ be as in \ref{i:pullpush}. If $p$ is defined by $\nu:[1,m] \to [1,l]$, then \[ h(p):h(X)^{\otimes m} \to h(X)^{\otimes l} \] is the morphism of commutative algebras in $\mathcal M_{\sim}(F)$ defined by $\nu$. Thus \[ h(p) = \xi'{}\!_{m,l}(f) \] for $f:(E_{R'})^{\otimes m} \to (E_{R'})^{\otimes l}$ the morphism of commutative $(G,R')$\nobreakdash-\hspace{0pt} algebras defined by $\nu$. 
That $p^*$ sends $C_m$ to $C_l$ and respects the filtrations now follows from \eqref{e:pull} and the compatibility of the $\xi'{}\!_{m,l}$ with composites. That $p_*$ sends $C_l$ to $C_m$ and respects the filtrations follows from \eqref{e:push} and the compatibility of the $\xi'{}\!_{m,l}$ with composites and transposes. Thus \ref{i:pullpush} holds. \section{Proof of Theorem~\ref{t:num}}\label{s:numproof} Let $G$, $E$, $\rho$, $R$, $\xi_{r,s}$, $\eta_r$ and $\varepsilon_r$ be as in Section~\ref{s:Kimvar}, and suppose that the equivalence relation $\sim$ is numerical equivalence. We show first that $R$ is $G$\nobreakdash-\hspace{0pt} simple, i.e.\ has no $G$\nobreakdash-\hspace{0pt} ideals other than $0$ and $R$. Any non-zero $z:h(X)^{\otimes m} \to \mathbf 1$ has a right inverse $y$: the composite $z \circ y$ is the push forward of $z.y$ along the structural morphism of $X^m$, and since $\sim$ is numerical equivalence and $z$ is non-zero, a suitable choice of $y$ makes this push forward equal to $1_\mathbf 1$. The isomorphisms $\xi$ then show that any non-zero $(E_R)^{\otimes m} \to R$ has a right inverse, and is thus surjective. Let $J \ne 0$ be a $G$\nobreakdash-\hspace{0pt} ideal of $R$. Since $G$ is reductive and $E$ is a faithful representation of $G$, the category of finite-dimensional representations of $G$ is the pseudo-abelian hull of its full subcategory with objects the $E^{r,s}$ (\cite{Wat},~3.5). Thus for some $r,s$ there is a non-zero homomorphism of $G$\nobreakdash-\hspace{0pt} modules from $E^{r,s}$ to $R$ which factors through $J$. It defines by the isomorphism \eqref{e:psidef} a non-zero homomorphism of $(G,R)$\nobreakdash-\hspace{0pt} modules $f$ from $(E_R)^{r,s}$ to $R$ which also factors through $J$. Since $(E_R)^{r,s}$ is isomorphic by autoduality of $E_R$ to $(E_R)^{\otimes (r+s)}$, it follows that $f$ is surjective, so that $J = R$. Thus $R$ has no $G$\nobreakdash-\hspace{0pt} ideals other than $0$ and $R$.
If $R_1$ is the $G$\nobreakdash-\hspace{0pt} submodule of $R$ on which $\rho$ acts as $-1$, then the ideal of $R$ generated by $R_1$ is a $G$\nobreakdash-\hspace{0pt} ideal $\ne R$, because the elements of $R_1$ have square $0$. Thus $R_1 = 0$, so that $R$ is commutative as an algebra in $\REP(G)$. By a theorem of Magid (\cite{Mag}, Theorem~4.5), the $G$\nobreakdash-\hspace{0pt} simplicity of $R$ and the fact that $R^G = \mathbf Q$ then imply that $\Spec(R)_k$ is isomorphic to $G_k/H$ for some extension $k$ of $\mathbf Q$ and closed subgroup $H$ of $G_k$. Thus $R$ is a finitely generated $\mathbf Q$\nobreakdash-\hspace{0pt} algebra. Hence there exists an $n$ such that a set of generators of $R$ is contained in the sum of the images of the $G$\nobreakdash-\hspace{0pt} homomorphisms $E^{r,s} \to R$ for $r+s \le n$. We may suppose that $n \ge 2$. We show that $n$ satisfies the requirements of Theorem~\ref{t:num}. Denote by $U_m$ the $\mathbf Q$\nobreakdash-\hspace{0pt} vector subspace of $\overline{CH}(X^m)_\mathbf Q = CH(X^m)_\mathbf Q/\sim$ generated by the elements \eqref{e:numgen}, and by \[ U_{m,l} \subset \Hom_{G,R}((E_R)^{\otimes m},(E_R)^{\otimes l}) \] the inverse image of \[ U_{m+l} \subset \Hom_{\mathcal M_{\sim}(F)}(h(X)^{\otimes m},h(X)^{\otimes l}) = \overline{CH}(X^{m+l})_\mathbf Q \] under $\xi_{m,l}$. The symmetries of $(E_R)^{\otimes m}$ lie in $U_{m,m}$, because by Proposition~\ref{p:Chowsub} the symmetries of $h(X)^{\otimes m}$ lie in $U_{2m}$. Similarly the composite of an element of $U_{m,m'}$ with an element of $U_{m',m''}$ lies in $U_{m,m''}$, the tensor product of an element of $U_{m,l}$ with an element of $U_{m',l'}$ lies in $U_{m+m',l+l'}$, and $\eta_m$ lies in $U_{0,2m}$ and $\varepsilon_m$ lies in $U_{2m,0}$. Also $U_{m,l}$ coincides with $\Hom_{G,R}((E_R)^{\otimes m},(E_R)^{\otimes l})$ for $m+l \le n$. Since $E_R$ is canonically autodual, we may identify $(E_R)^{r,s}$ with $(E_R)^{\otimes (r+s)}$. 
The morphism $u_{R;r,s;r',s'}$ of \eqref{e:daggerdef} may then be identified with a morphism of $R$\nobreakdash-\hspace{0pt} modules \[ u_{R;r,s;r',s'}:(E_R)^{\otimes (r+s)} \to (E_R)^{\otimes (r'+s')}, \] and the isomorphism $\psi_{r,s}$ of \eqref{e:psidef} with an isomorphism \[ \psi_{r,s}:\Hom_{G,R}((E_R)^{\otimes (r+s)},R) \xrightarrow{\sim} \Hom_G(E^{r,s},R). \] Then \eqref{e:psinat} still holds. Also we have a commutative square \begin{equation}\label{e:psicompat} \begin{CD} E^{r_1,s_1} \otimes_\mathbf Q E^{r_2,s_2} @>{\sim}>> E^{r_1 + r_2,s_1 + s_2} \\ @V{\psi_{r_1,s_1}(f_1) \otimes_\mathbf Q \psi_{r_2,s_2}(f_2)}VV @VV{\psi_{r_1+r_2,s_1+s_2}(f)}V \\ R \otimes_\mathbf Q R @>>> R \end{CD} \end{equation} where the top isomorphism is that of \eqref{e:tildedef} with $E$ for $L$, the bottom arrow is the multiplication of $R$, and $f$ is the composite of the appropriate symmetry of $(E_R)^{\otimes (r_1+r_2+s_1+s_2)}$ with $f_1 \otimes_R f_2$. By Lemma~\ref{l:GL}, a non-zero $w:E^{r',0} \to E^{r,0}$ exists only if $r = r'$, in which case any such $w$ is a composite of symmetries and tensor products of endomorphisms of $E$. Thus $w_{R;r',0;r,0}$ lies in $U_{r',r}$ for such a $w$, because $n \ge 2$. Since $(-)_{R;r,s;r',s'}$ commutes with the isomorphisms $\delta$ of \eqref{e:deltaLdef}, it follows that $w_{R;r',s';r,s}$ lies in $U_{r'+s',r+s}$ for any $w:E^{r',s'} \to E^{r,s}$. To prove Theorem~\ref{t:num}, write \[ W_{r,s} = \psi_{r,s}(U_{r+s,0}). \] Consider the smallest $G$\nobreakdash-\hspace{0pt} submodule $R'$ of $R$ such that $a:E^{r,s} \to R$ factors through $R'$ for each $r$, $s$, and $a$ in $W_{r,s}$. By \eqref{e:psicompat}, $R'$ is a subalgebra of $R$. Since every $E^{r,s} \to R$ lies in $W_{r,s}$ when $r+s \le n$, the algebra $R'$ contains a set of generators of $R$. Hence $R' = R$. Given $a:E^{r,s} \to R$, there are thus $a_i$ in $W_{r_i,s_i}$ for $i = 1,2,\dots,t$ such that the image of $a$ lies in the sum of the images of the $a_i$.
By semisimplicity of $\REP(G,\rho)$, it follows that \[ a = a_1 \circ w_1 + a_2 \circ w_2 + \dots + a_t \circ w_t \] for some $w_i$. Hence by \eqref{e:psinat}, $a$ lies in $W_{r,s}$. Thus $W_{r,s}= \Hom_G(E^{r,s},R)$ for every $r$ and $s$. It follows that $U_{m,0} = \Hom_{G,R}((E_R)^{\otimes m},R)$, and hence $U_m = \overline{CH}(X^m)_\mathbf Q$, for every $m$. This proves Theorem~\ref{t:num}. \section{Concluding remarks} Theorem~\ref{t:fin} is easily generalised to the case where instead of cycles on the powers of a single Kimura variety $X$ for ${\sim}$, we consider also cycles on products of a finite number of such varieties: it suffices to take for $X$ their disjoint union and to include in $Z_1$ their fundamental classes. Similarly in the condition on $X^l \to X^m$ in \ref{i:pullpush}, we may consider a finite number of morphisms $X^l \to X$ additional to the projections: it suffices to include in the $Z_i$ the classes of their graphs. Suppose for example that $X$ is an abelian variety, and let $\Gamma$ be a finitely generated subgroup of $X(k)$. Then we may consider in \ref{i:pullpush} pullback and push forward along any morphism $X^l \to X^m$ which sends the identity of $X(k)^l$ to an element of $\Gamma^m$. More generally, we can construct a small category $\mathcal V$, an equivalence $T$ from $\mathcal V$ to the category of Kimura varieties over $F$ for ${\sim}$, a filtered family $(\mathcal V_\lambda)_{\lambda \in \Lambda}$ of (not necessarily full) subcategories $\mathcal V_\lambda$ of $\mathcal V$ with union $\mathcal V$, and for each $\lambda$ in $\Lambda$ and $V$ in $\mathcal V_\lambda$ a finite-dimensional graded $\mathbf Q$\nobreakdash-\hspace{0pt} subalgebra $C_\lambda(V)$ of $CH(T(V))_\mathbf Q/{\sim}$ and a filtration $C_\lambda(V)^\bullet$ on $C_\lambda(V)$, with the following properties. 
\begin{enumerate} \renewcommand{\labelenumi}{(\alph{enumi})} \item Finite products exist in the $\mathcal V_\lambda$, and the embeddings $\mathcal V_\lambda \to \mathcal V$ preserve them. \item We have $C_\lambda(V)^r \subset C_{\lambda'}(V)^r$ for $\lambda \le \lambda'$ and $V$ in $\mathcal V_\lambda$, and $CH(T(V))_\mathbf Q/{\sim}$ for $V$ in $\mathcal V_\lambda$ is the union of the $C_{\lambda'}(V)$ for $\lambda' \ge \lambda$. \item $T(f)^*$ sends $C_\lambda(V')$ into $C_\lambda(V)$ and $T(f)_*$ sends $C_\lambda(V)$ into $C_\lambda(V')$ for $f:V \to V'$ in $\mathcal V_\lambda$, and $T(f)^*$ and $T(f)_*$ preserve the filtrations. \item\label{i:surjnilp} For $V$ in $\mathcal V_\lambda$, the projection from $C_\lambda(V)$ to $\overline{CH}(T(V))_\mathbf Q$ is surjective with kernel $C_\lambda(V)^1$, and $C_\lambda(V)^r$ is $0$ for $r$ large. \end{enumerate} By applying the usual construction for motives (say ungraded) to $\mathcal V$ and the $CH(T(V))_\mathbf Q/{\sim}$ we obtain a $\mathbf Q$\nobreakdash-\hspace{0pt} tensor category $\mathcal M$ and a cohomology functor from $\mathcal V$ to $\mathcal M$, and $T$ defines a fully faithful functor from $\mathcal M$ to $\mathcal M_{\sim}(F)$. Similarly we obtain from $\mathcal V_\lambda$ and the $C_\lambda(V)$ a (not necessarily full) $\mathbf Q$\nobreakdash-\hspace{0pt} tensor subcategory $\mathcal M_\lambda$ of $\mathcal M$. Then each $\mathcal M_\lambda$ has finite-dimensional hom-spaces, and $\mathcal M$ is the filtered union of the $\mathcal M_\lambda$. A question involving a finite number of Kimura varieties, a finite number of morphisms between their products, and a finite number of morphisms between the motives of such products, thus reduces to a question in some $\mathcal V_\lambda$ and $\mathcal M_\lambda$. 
By \ref{i:surjnilp}, the projection from $\mathcal M_\lambda$ to the quotient of $\mathcal M_{\sim}(F)$ by its unique maximal tensor ideal is full, with kernel the unique maximal tensor ideal $\mathcal J_\lambda$ of $\mathcal M_\lambda$. Further $\mathcal M_\lambda$ is the limit of the $\mathcal M_\lambda/(\mathcal J_\lambda)^r$. Thus we can argue by lifting successively from the semisimple abelian category $\mathcal M_\lambda/\mathcal J_\lambda$ to the $\mathcal M_\lambda/(\mathcal J_\lambda)^r$. Theorems~\ref{t:fin} and \ref{t:num} extend easily to the case where the base field $F$ is replaced by a non-empty connected smooth quasi-projective scheme $S$ over $F$. For the category of ungraded motives over $S$ we then have $\End(\mathbf 1) = CH(S)_\mathbf Q/{\sim}$, which is a local $\mathbf Q$\nobreakdash-\hspace{0pt} algebra with residue field $\mathbf Q$ and nilpotent maximal ideal. All the arguments carry over to this case, provided that Lemma~\ref{l:repfin} is proved in the more general form where the hypothesis ``$R^G = k$'' is replaced by ``$R^G$ is a local $k$\nobreakdash-\hspace{0pt} algebra with residue field $k$''. \end{document}
\begin{document} \begin{abstract} We define and study Cartan--Betti numbers of a graded ideal $J$ in the exterior algebra over an infinite field which include the usual graded Betti numbers of $J$ as a special case. Following ideas of Conca regarding Koszul--Betti numbers over the symmetric algebra, we show that Cartan--Betti numbers increase by passing to the generic initial ideal and the squarefree lexsegment ideal respectively. Moreover, we characterize the cases where the inequalities become equalities. As combinatorial applications of the first part of this note and some further symmetric algebra methods we establish results about algebraic shifting of simplicial complexes and use them to compare different shifting operations. In particular, we show that each shifting operation does not decrease the number of facets, and that exterior algebraic shifting is the best among the exterior shifting operations in the sense that it increases the number of facets the least. \end{abstract} \maketitle \section{Introduction} Let $S=K[x_1,\dots,x_n]$ be a polynomial ring over a field $K$ of characteristic $0$, let $I \subset S$ be a graded ideal and denote by $\beta^S_{ij}(S/I)=\dim_K \Tor_i^S(S/I,K)_j$ the graded Betti numbers of $S/I$. In the last decades the graded Betti numbers $\beta^S_{ij}(S/I)$ have been studied intensively. To $I$ one associates several important monomial ideals like the generic initial ideal $\gin(I)$ with respect to the reverse lexicographic order, or the lexsegment ideal $\lex(I)$. It is well-known by work of \cite{BI93}, \cite{HU93} and \cite{PA96} that there are inequalities $\beta^S_{ij}(S/I) \leq \beta^S_{ij}(S/\gin(I)) \leq \beta^S_{ij}(S/\lex(I))$ for all $i,j$. The cases where these are equalities for all $i,j$ have been characterized by Aramova--Herzog--Hibi \cite{ARHEHI00ideal} and \cite{HEHI99}. 
The first inequality is an equality for all $i,j$ if and only if $I$ is a componentwise linear ideal, and we have equalities everywhere for all $i,j$ if and only if $I$ is a Gotzmann ideal. All these results were generalized to so-called Koszul--Betti numbers in \cite{CO04}. Let $E=K\langle e_1,\dots,e_n \rangle$ be the exterior algebra over an infinite field $K$ and $J \subset E$ a graded ideal. Aramova--Herzog--Hibi (\cite{ARHEHI97}) showed that the constructions and results mentioned above hold similarly for $J$. More precisely, there exists a generic initial ideal $\gin(J)\subset E$ with respect to the reverse lexicographic order, and the unique squarefree lexsegment ideal $\lex(J)\subset E$ with the same Hilbert function as $J$. Let $\beta^E_{ij}(E/J)=\dim_K \Tor_i^E(E/J,K)_j$ be the graded Betti numbers of $E/J$. Then we also have the inequalities $\beta^E_{ij}(E/J) \leq \beta^E_{ij}(E/\gin(J)) \leq \beta^E_{ij}(E/\lex(J))$ for all integers $i,j$. The first inequality is again an equality for all $i,j$ if and only if $J$ is a componentwise linear ideal as was observed in \cite{ARHEHI00ideal}. The first part of this paper is devoted to extending the latter results to statements about Cartan--Betti numbers, similarly as Conca did in the symmetric algebra case for Koszul--Betti numbers. Over the exterior algebra there exists the construction of the Cartan complex and Cartan homology (see Section \ref{cartan} for details), which behave in many ways like the Koszul complex and Koszul homology. Following Conca's ideas we define the Cartan--Betti numbers as $ \beta^E_{ijp}(E/J)= \dim_K H_i(f_1,\dots,f_p;E/J)_j $ where $H_i(f_1,\dots,f_p;E/J)$ is the $i$-th Cartan homology of $E/J$ with respect to a generic sequence of linear forms $f_1,\dots,f_p$ for $1\leq p \leq n$. Observe that this definition does not depend on the chosen generic sequence and that these modules are naturally graded. We set $H_i(f_1,\dots,f_p;E/J)=0$ for $i>p$. 
Note that $\beta^E_{ijn}(E/J)= \beta^E_{ij}(E/J)$ are the graded Betti numbers of $E/J$. Observe that a generic initial ideal $\gin_\tau(J)$ can be defined with respect to any term order $\tau$ on $E$. Our first main result, Theorem \ref{mainleq}, shows that, for all integers $i,j,p$: $$ \beta^E_{ijp}(E/J) \leq \beta^E_{ijp}(E/\gin_\tau(J)) \leq \beta^E_{ijp}(E/\lex(J)) . $$ Analogously to the results in \cite{CO04} we can characterize precisely the cases where the inequalities become equalities. At first we have: \begin{thm} Let $J \subset E$ be a graded ideal. The following conditions are equivalent: \begin{enumerate} \item $\beta_{ijp}^E(E/J)= \beta_{ijp}^E(E/\gin(J))$ for all $i,j,p$; \item $\beta_{1jn}^E(E/J)= \beta_{1jn}^E(E/\gin(J))$ for all $j$; \item $J$ is a componentwise linear ideal; \item A generic sequence of linear forms $f_1,\dots,f_n$ is a proper sequence of $E/J$. \end{enumerate} \end{thm} We refer to Section \ref{cartan} for the definition of a proper sequence. Moreover, we show: \begin{thm} For each graded ideal $J \subset E$, the following statements are equivalent: \begin{enumerate} \item $\beta^E_{ijp}(E/J)= \beta^E_{ijp}(E/\lex(J))$ for all $i,j,p$; \item $\beta^E_{1jn}(E/J)= \beta^E_{1jn}(E/\lex(J))$ for all $j$; \item $J$ is a Gotzmann ideal in the exterior algebra; \item $\beta^E_{0jp}(E/J) = \beta^E_{0jp}(E/\lex(J))$ for all $j,p$ and $J$ is componentwise linear. \end{enumerate} \end{thm} Section \ref{cartan} ends with a discussion of which generic initial ideal increases the Cartan--Betti numbers the least. In Theorem \ref{mainginleq} we show that for all $i,j,p$ and for any term order $\tau$: $$ \beta_{ijp}^E(E/J)\leq \beta_{ijp}^E(E/\gin(J)) \leq \beta_{ijp}^E(E/\gin_\tau(J)) . $$ The second part of this note presents combinatorial applications of the results mentioned above and some symmetric algebra methods. More precisely, we study properties of simplicial complexes under shifting operations. 
Let $\mathcal{C}([n])$ be the set of simplicial complexes on $n$ vertices. Following \cite{KA84}, \cite{KA01}, a shifting operation is a map $\shift \colon \mathcal{C}([n]) \to \mathcal{C}([n])$ satisfying certain conditions. We refer to Section \ref{sec-def} for precise definitions. Intuitively, shifting replaces each complex $\Gamma$ with a combinatorially simpler complex $\Delta (\Gamma)$ that still captures some properties of $\Gamma$. Shifting has become an important technique that has been successfully applied in various contexts and deserves to be investigated in its own right (cf., for example, \cite{ARHEHI00}, \cite{ARHE}, \cite{BNT}, \cite{BNT2}, \cite{MH1}, \cite{MH2}, \cite{Nevo}). Several shifting operations can be interpreted algebraically. Denote by $I_{\Gamma}$ the Stanley--Reisner ideal of $\Gamma$ in the polynomial ring $S = K[x_1,\ldots,x_n]$. Then symmetric shifting $\Delta^s$ can be realized by passing to a kind of polarization of the generic initial ideal of $I_{\Gamma}$ with respect to the reverse lexicographic order (cf.\ Example \ref{exsymmetric}). Similarly, denote by $J_{\Gamma}$ the exterior Stanley--Reisner ideal in the exterior algebra $E = K\langle e_1,\dots,e_n\rangle$. The passage from $J_{\Gamma}$ to its generic initial ideal with respect to any term order $\tau$ on $E$ leads to the exterior algebraic shifting operation $\Delta^{\tau}$ (Example \ref{exexterior}). If $\tau$ is the reverse-lexicographic order, we call $\Delta^e := \Delta^{\tau}$ the exterior algebraic shifting. While each shifting operation preserves the $f$-vector, the number of facets may change. However, we show in Theorem \ref{thm-adeg-incr} that it cannot decrease. After recalling basic definitions in Section \ref{sec-def}, we first study symmetric algebraic shifting. To this end we use degree functions. 
In Section \ref{sec-degree} we show that the smallest extended degree is preserved under symmetric algebraic shifting whereas the arithmetic degree $\adeg (\Gamma)$ may increase because it equals the number of facets. However, the smallest extended degree and the arithmetic degree of each shifted complex agree. Furthermore, we show that exterior algebraic shifting is the best exterior shifting operation in the sense that it increases the number of facets the least, i.~e.\ for any term order $\tau$ $$ \adeg \Gamma \leq \adeg \Delta^e(\Gamma) \leq \adeg \Delta^\tau(\Gamma). $$ Moreover, $\Delta^e$ preserves the arithmetic degree of $\Gamma$ if and only if $\Gamma$ is sequentially Cohen--Macaulay. These results rely on the full strength of the results in Section \ref{cartan}. The proof also uses Alexander duality and a reinterpretation of the arithmetic degree over the exterior algebra. In Section \ref{ringprop} we use degree functions to give a short new proof of the fact that $\Delta^s (\Gamma)$ is a pure complex if and only if $\Gamma$ is Cohen--Macaulay. We then derive a combinatorial interpretation of the smallest extended degree using iterated Betti numbers and show that these numbers agree with the $h$-triangle if and only if $\Gamma$ is sequentially Cohen--Macaulay. For notions and results related to commutative algebra we refer to \cite{EI}, \cite{BRHE98} and \cite{VA}. For details on combinatorics we refer to the book \cite{ST96} and \cite{KA01}. \section{Cartan--Betti numbers} \label{cartan} In this section we introduce and study Cartan--Betti numbers of graded ideals in the exterior algebra $E=K\langle e_1,\dots,e_n \rangle$ over an infinite field $K$. Our results rely on techniques from Gr\"obner basis theory in the exterior algebra. Its basics are treated in \cite{ARHEHI97}. We begin with establishing some extensions of the theory that are analogous to results over the polynomial ring. 
Recall that a monomial of degree $k$ in $E$ is an element $e_F=e_{a_1} \wedge \ldots \wedge e_{a_k}$ where $F=\{a_1,\ldots,a_k\}$ is a subset of $[n]$ with $a_1< \ldots < a_k$. To simplify notation, we sometimes write $f g = f \wedge g$ for any two elements $f, g \in E$. We will use only term orders $\tau$ on $E$ that satisfy $ e_1 >_{\tau} e_2 >_{\tau} \ldots >_{\tau} e_n.$ Given a term order $\tau$ on $E$ and a graded ideal $J\subset E$ we denote by $\ini_\tau(J)$ and $\gin_\tau(J)$ respectively the {\em initial ideal} of $J$ and the {\em generic initial ideal} of $J$ with respect to $\tau$. We also write $\ini(J)$ and $\gin(J)$ for the initial ideal of $J$ and the generic initial ideal of $J$ with respect to the reverse lexicographic order on $E$ induced by $e_1>\cdots>e_n$. Recall that a monomial ideal $J \subset E$ is called a {\em squarefree strongly stable ideal} with respect to $e_1>\dots>e_n$ if for all $F\subseteq [n]$ with $e_F \in J$ and all $i \in F$, $j<i$, $j \not\in F$ we have $e_j\wedge e_{F \setminus\{i\}} \in J$. Given an ideal $J\subset E$ and a term order $\tau$ on $E$, note that $\gin_\tau(J)$ is always squarefree strongly stable and that $\gin_\tau(J)=J$ if and only if $J$ is squarefree strongly stable, which can be seen analogously to the case of a polynomial ring. As expected, the passage to initial ideals increases the graded Betti numbers. The next result generalizes \cite[Proposition 1.8]{ARHEHI97}. It is the exterior analogue of a well-known fact for the polynomial ring (cf.\ \cite[Lemma 2.1]{CO04}). \begin{prop} \label{helpergb} Let $J, J' \subset E$ be graded ideals and let $\tau$ be any term order on $E$. Then, for all $i, j \in \mathbb{Z}$: $$ \dim_K [\Tor^E_i (E/J, E/J')]_j \leq \dim_K [\Tor^E_i (E/\ini_{\tau} (J), E/\ini_{\tau} (J'))]_j. $$ \end{prop} \begin{proof} This follows from standard deformation arguments. 
The proof is verbatim the same as the one of the symmetric version in \cite[Lemma 2.1]{CO04} after replacing the polynomial ring $S$ with the exterior algebra $E$. \end{proof} In the following we present exterior versions of results of Conca \cite{CO04}. We follow the strategy of his proofs and replace symmetric algebra methods and the Koszul complex by exterior algebra methods and the Cartan complex. We first recall the construction of the Cartan complex (see, e.g., \cite{ARHEHI97} or \cite{HE01}). For important results on this complex, we refer to \cite{ARAVHE00}, \cite{ARHEHI97}, \cite{ARHEHI98}, \cite{ARHEHI00ideal}. For a sequence $\vb=v_1,\ldots, v_m \subseteq E_1$, the Cartan complex $C_{{\hbox{\large\bf.}}}(\vb;E)$ is defined to be the free divided power algebra $E\langle x_1,\ldots,x_m \rangle$ together with a differential $\partial$. The free divided power algebra $E\langle x_1,\ldots,x_m \rangle$ is generated over $E$ by the divided powers $x_{i}^{(j)}$, \ $i=1,\ldots,m$ and $j\geq 0$, that satisfy the relations $x_{i}^{(j)}x_{i}^{(k)}=\frac{(j+k)!}{j!k!}x_{i}^{(j+k)}$. We set $x_{i}^{(0)}=1$ and $x_{i}^{(1)}=x_{i}$ for $i=1,\ldots, m$. Hence $C_{{\hbox{\large\bf.}}}(\vb;E)$ is a free $E$-module with basis $x^{(a)}:=x_{1}^{(a_1)}\ldots x_{m}^{(a_m)}$, $a = (a_1,\ldots,a_m) \in \mathbb{N}^{m}$. We set $\deg x^{(a)}=i$ if $|a|:=a_1+\ldots+a_m=i$ and $C_{i}(\vb;E)=\bigoplus_{|a|=i} Ex^{(a)}$. The $E$-linear differential $\partial$ on $C_{{\hbox{\large\bf.}}}(\vb;E)$ is defined as follows. For $x^{(a)}=x_{1}^{(a_1)}\ldots x_{m}^{(a_m)}$ we set $\partial(x^{(a)})=\sum_{a_i>0} v_{i} \cdot x_{1}^{(a_1)}\ldots x_{i}^{(a_{i}- 1)}\ldots x_{m}^{(a_m)}.$ One easily checks that $\partial\circ\partial=0$, thus $C_{{\hbox{\large\bf.}}}(\vb;E)$ is indeed a complex. We denote by $\mathcal{M}$ the category of finitely generated graded left and right $E$-modules $M$, satisfying $ax=(-1)^{|\deg(a)||\deg(x)|}xa$ for all homogeneous $a\in E$ and $x\in M$. 
For example, every graded ideal $J\subseteq E$ belongs to $\mathcal{M}$. For $M \in \mathcal{M}$ and $\vb=v_1,\ldots, v_m \subseteq E_1$, the complex $C_{{\hbox{\large\bf.}}}(\vb;M)=C_{{\hbox{\large\bf.}}}(\vb;E)\otimes_E M$ is called the {\em Cartan complex} of $M$ with respect to $\vb$. Its homology is denoted by $H_{{\hbox{\large\bf.}}}(\vb;M)$; it is called {\em Cartan homology}. There is a natural grading of this complex and its homology. We set $\deg x_i =1$ and $C_{j}(\vb;M)_{i}:=\text{span}_{K}\{m_{a}x^{(b)} \; | \; m_a \in M_a, a+|b|=i,|b|=j \}.$ Cartan homology can be computed recursively. For $j=1,\ldots,m-1$, the following sequence is exact: \begin{eqnarray} \label{eq-cartan} \hspace*{.5cm} 0 \to C_{{\hbox{\large\bf.}}}(v_1,\ldots,v_j;M) \overset{\iota}{\to} C_{{\hbox{\large\bf.}}}(v_1,\ldots,v_{j+1};M)\overset{\varphi}{\to}C_{{\hbox{\large\bf.}}-1}(v_1,\ldots,v_{j+1};M)(-1)\to 0. \end{eqnarray} Here $\iota$ is a natural inclusion map and $\varphi$ is given by $$ \varphi(g_0 + g_1x_{j+1}+\ldots+g_{k}x_{j+1}^{(k)})=g_1 + g_2x_{j+1}+\ldots+g_{k}x_{j+1}^{(k-1)} \text{ for } g_i \in C_{k-i}(v_1,\ldots,v_{j};M). $$ The associated long exact homology sequence is $$ \ldots \to H_{i}(v_1,\ldots,v_j;M) \overset{\alpha_i}{\to} H_{i}(v_1,\ldots,v_{j+1};M)\overset{\beta_i}{\to} H_{i-1}(v_1,\ldots,v_{j+1};M)(-1) $$ $$ \overset{\delta_{i-1}}{\to} H_{i-1}(v_1,\ldots,v_j;M) \overset{\alpha_{i-1}}{\to} H_{i-1}(v_1,\ldots,v_{j+1};M)\overset{\beta_{i-1}}{\to}\ldots, $$ where $\alpha_{i}$ and $\beta_{i}$ are induced by $\iota$ and $\varphi$, respectively. For a cycle $z=g_0 + g_1x_{j+1}+\ldots+g_{i-1}x_{j+1}^{(i-1)}$ in $C_{i-1}(v_1,\ldots,v_{j+1};M)$ one has $\delta_{i-1}([z])=[g_0v_{j+1}]$. It is now easy to see that e.~g.\ for $e_t,\dots,e_n$, the Cartan complex $C_{{\hbox{\large\bf.}}}(e_t,\dots,e_n;E)$ is a free resolution of $E/(e_t,\dots,e_n)$. 
Hence, for each module $M \in \mathcal{M}$, there are isomorphisms \begin{equation} \label{tor} \Tor_i^{E}(E/(e_t,\dots,e_n),M)\cong H_{i}(e_t,\dots,e_n;M). \end{equation} In particular, for $\eb=e_1,\dots,e_n$ we have $K\cong E/(\eb)$ and there are isomorphisms of graded $K$-vector spaces $ \Tor_i^{E}(K,M)\cong H_{i}(\eb;M)$. Cartan homology is useful to study resolutions of graded ideals in the exterior algebra. For example, it has been used to show that the Castelnuovo--Mumford regularity $\reg(E/J)=\max\{j \mid \exists\, i \text{ such that } \Tor_i(K,E/J)_{i+j}\neq 0 \}$ does not change by passing to the generic initial ideal with respect to the reverse lexicographic order, i.~e.\ $ \reg(E/J)= \reg(E/\gin(J))$ (cf.\ \cite{ARHE}, Theorem 5.3). There are several other algebraic invariants which behave similarly. See for example \cite{HETE} for results in this direction. Here, we are interested in the following numbers which, for $p = n$, include the graded Betti numbers of $E/J$: \begin{defi} \label{def-CB} Let $J \subset E$ be a graded ideal and $f_1,\dots,f_p$ be a generic sequence of linear forms. Let $C(f_1,\dots,f_p;E/J)$ be the Cartan complex of $E/J$ with respect to that sequence and $H(f_1,\dots,f_p;E/J)$ the corresponding Cartan homology. We denote by $$ \beta^E_{ijp}(E/J)= \dim_K H_i(f_1,\dots,f_p;E/J)_j $$ the {\em Cartan--Betti numbers} of $E/J$ for $i=0,\dots,p$. We set $H_i(f_1,\dots,f_p;E/J)=0$ for $i>p$. \end{defi} A generic sequence means here that there exists a non-empty Zariski open set $U$ in $K^{p\times n}$ such that if one chooses the $p\times n$ coefficients that express $f_1,\dots,f_p$ as linear combinations of $e_1,\dots,e_n$ inside $U$, then the ranks of the maps in each degree in the Cartan complex are as large as possible. In particular, the $K$-vector space dimension of the homology is the same for every generic sequence and thus the definition of $\beta^E_{ijp}(E/J)$ does not depend on the chosen generic sequence. 
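To illustrate Definition \ref{def-CB} in the simplest non-trivial case, consider the following standard example, included here only as a sanity check and not taken from the literature cited above. Let $n=2$ and $J=(e_1) \subset E=K\langle e_1,e_2 \rangle$. Since the annihilator of $e_1$ in $E$ is exactly the ideal $(e_1)$, the minimal graded free resolution of $E/J$ is \[ \cdots \longrightarrow E(-2) \stackrel{e_1}{\longrightarrow} E(-1) \stackrel{e_1}{\longrightarrow} E \longrightarrow E/J \longrightarrow 0. \] Hence $\beta^E_{iin}(E/J)=\beta^E_{ii}(E/J)=1$ for all $i \geq 0$, and $\beta^E_{ijn}(E/J)=0$ for $j \neq i$. In particular, in contrast to the situation over the polynomial ring, minimal free resolutions over the exterior algebra are in general infinite.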
We are going to explain that it is easy to compute the Cartan--Betti numbers for squarefree strongly stable ideals in the exterior algebra. In fact, given a generic sequence of linear forms $f_1,\dots,f_p$ there exists an invertible upper triangular matrix $g$ such that the linear space spanned by $f_1,\dots,f_p$ is mapped by the induced isomorphism $g \colon E \to E $ to the one generated by $e_{n-p+1},\dots,e_n$. It follows that, as graded $K$-vector spaces, $ H_i(f_{1},\dots,f_p;E/J)\cong H_i(e_{n-p+1},\dots,e_n;E/g(J))$. Since $J$ is squarefree strongly stable we have that $g(J)=J$ and thus we get: \begin{lem} \label{cartanhelper} Let $J \subset E$ be a squarefree strongly stable ideal. Then we have for all $i,j,p$: $$ \beta_{ijp}^E(E/J)=\dim_K H_i(e_{n-p+1},\dots,e_n;E/J)_j. $$ \end{lem} Aramova, Herzog and Hibi (\cite{ARHEHI97}) computed the Cartan--Betti numbers for squarefree strongly stable ideals, which admit Eliahou--Kervaire type resolutions in the exterior algebra. To present this result we need some more notation. Recall that for a monomial ideal $J\subset E$ we denote by $G(J)$ the unique minimal system of monomial generators of $J$. For a monomial $e_F \in E$ where $F \subseteq [n]$ we set $\max(e_F)=\max\{i \in F\}.$ Given a set $G$ of monomials we define: \begin{eqnarray*} m_i(G) & = & |\{e_F \in G : \max(e_F)=i \}|, \\ m_{\leq i}(G) & = & |\{e_F \in G : \max(e_F) \leq i \}|, \\ m_{ij}(G) & = & |\{e_F \in G : \max(e_F)=i,\ |F|=j \}|. \end{eqnarray*} For a monomial ideal or a vector space generated by monomials $J$, we denote by $m_i(J)$, $m_{\leq i}(J)$ and $m_{ij}(J)$ the numbers $m_i(G)$, $m_{\leq i}(G)$ and $m_{ij}(G)$ where $G$ is the set of minimal monomial generators of $J$. Using Lemma \ref{cartanhelper} we restate the result of Aramova, Herzog and Hibi as follows: \begin{thm} \label{cartanbetti} Let $J \subset E$ be a squarefree strongly stable ideal. 
Then the Cartan--Betti numbers are given by the following formulas: \begin{enumerate} \item{(Aramova--Herzog--Hibi)} For all $i>0$, $p \in [n]$ and every $j \in \mathbb{Z}$ we have $$ \beta_{ijp}^E(E/J) = \sum_{k = n-p+1}^n m_{k,j-i+1}(J) \binom{k+p-n+i-2}{i-1}. $$ \item For all $p \in [n]$ and every $j \in \mathbb{Z}$ we have $$ \beta_{0jp}^E(E/J) = \binom{n-p}{j}-m_{\leq n-p}(J_j) $$ where $J_j$ is the degree $j$ component of the ideal $J$. \end{enumerate} \end{thm} \begin{proof} Only (ii) needs a proof since (i) immediately follows from the proof of Proposition 3.1 of \cite{ARHEHI97}. But since $e_{n-p+1},\dots,e_n$ is generic for $E/J$ we can compute $\beta_{0jp}^E(E/J)$ from the Cartan homology with respect to this sequence. Then the formula follows from a direct computation. \end{proof} Note that for $p=n$ we get the graded Betti numbers of $E/J$. Using the numbers $m_{\leq i}(J_j)$, it is possible to compare the Cartan--Betti numbers of squarefree strongly stable ideals. In fact we have the following: \begin{prop} \label{maintechnical} Let $J, J' \subset E$ be squarefree strongly stable ideals with the same Hilbert function such that $m_{\leq i}(J_j)\leq m_{\leq i}(J'_j)$ for all $i,j$. Then $ \beta_{ijp}^E(E/J') \leq \beta_{ijp}^E(E/J) $ for all $i,j,p$ with equalities everywhere if and only if $m_{\leq i}(J_j)= m_{\leq i}(J'_j)$ for all $i,j$. \end{prop} \begin{proof} It follows from Theorem \ref{cartanbetti} (ii) and the assumption that \begin{eqnarray*} \beta_{0jp}^E(E/J) - \beta_{0jp}^E(E/J') &=& \binom{n-p}{j} - m_{\leq n-p}(J_j) - \binom{n-p}{j} + m_{\leq n-p}(J'_j)\\ &=& m_{\leq n-p}(J'_j) - m_{\leq n-p}(J_j)\\ &\geq& 0. \end{eqnarray*} This shows the desired inequalities for $i=0$. 
Next assume that $i>0$ and observe \begin{eqnarray*} \lefteqn{ m_{k}(J_{j-i+1})=|\{e_F \in J : \max (e_F) = k, \ |F| = j-i+1\}| }\\ &=& |\{e_F \in G(J) : \max (e_F) = k, |F| = j-i+1\}| \\ &+& | \{e_F \in J : \exists e_H \in J_{j-i}, \max (e_H)\leq k-1, e_F=e_H \wedge e_k\} |\\ &=& m_{k,j-i+1}(J) + m_{\leq k-1}(J_{j-i}).\\ \end{eqnarray*} We compute \begin{eqnarray*} \lefteqn{ \beta_{ijp}^E(E/J') } \\ &=& \sum_{k = n-p+1}^n m_{k,j-i+1}(J') \binom{k+p-n+i-2}{i-1}\\ &=& \sum_{k = n-p+1}^n \bigl( m_{k}(J'_{j-i+1}) - m_{\leq k-1}(J'_{j-i}) \bigr) \binom{k+p-n+i-2}{i-1}\\ &=& \sum_{k = n-p+1}^n \bigl( m_{\leq k}(J'_{j-i+1})- m_{\leq k-1}(J'_{j-i+1}) - m_{\leq k-1}(J'_{j-i}) \bigr) \binom{k+p-n+i-2}{i-1}\\ &=& m_{\leq n}(J'_{j-i+1}) \binom{p+i-2}{i-1} - m_{\leq n-p}(J'_{j-i+1})\\ &+& \sum_{k = n-p+1}^{n-1} m_{\leq k}(J'_{j-i+1}) \bigl( \binom{k+p-n+i-2}{i-1} - \binom{k+p-n+i-1}{i-1} \bigr)\\ &-& \sum_{k = n-p+1}^n m_{\leq k-1}(J'_{j-i}) \binom{k+p-n+i-2}{i-1}\\ &=& \dim_K (J'_{j-i+1}) \binom{p+i-2}{i-1} - m_{\leq n-p}(J'_{j-i+1})\\ &-& \sum_{k = n-p+1}^{n-1} m_{\leq k}(J'_{j-i+1}) \binom{k+p-n+i-2}{i-2} \\ &-& \sum_{k = n-p+1}^n m_{\leq k-1}(J'_{j-i}) \binom{k+p-n+i-2}{i-1}\\ \end{eqnarray*} and similarly $\beta_{ijp}^E(E/J)$. We know by assumption that $\dim_K (J'_{j-i+1})=\dim_K (J_{j-i+1})$ and $m_{\leq i}(J_j)\leq m_{\leq i}(J'_j)$ for all $i,j$. Hence the last equation of the computation implies $\beta_{ijp}^E(E/J)- \beta_{ijp}^E(E/J') \geq 0$ because the left-hand side is a sum of various $m_{\leq i}(J'_j)-m_{\leq i}(J_j)$ times some binomial coefficients. Moreover, we have equalities everywhere if and only if $m_{\leq i}(J_j)= m_{\leq i}(J'_j)$ for all $i,j$. 
\end{proof} Using the above results as well as \cite[Lemma 3.7]{ARHEHI98} instead of Proposition 3.5 in \cite{CO04}, we get the following rigidity statement that is proved analogously to Proposition 3.7 in \cite{CO04}: \begin{cor} \label{rigid} Let $J, J' \subset E$ be squarefree strongly stable ideals with the same Hilbert function and $m_{\leq i}(J_j)\leq m_{\leq i}(J'_j)$ for all $i,j$. Then the following statements are equivalent: \begin{enumerate} \item $\beta_{ijp}^E(E/J')=\beta_{ijp}^E(E/J)$ for all $i,j$ and $p$; \item $\beta_{ij}^E(E/J')=\beta_{ij}^E(E/J)$ for all $i$ and $j$; \item $\beta_{1j}^E(E/J')=\beta_{1j}^E(E/J)$ for all $j$; \item $\beta_{1}^E(E/J')=\beta_{1}^E(E/J)$; \item $m_{i}(J'_j)=m_{i}(J_j)$ for all $i,j$; \item $m_{\leq i}(J'_j)=m_{\leq i}(J_j)$ for all $i,j$. \end{enumerate} \end{cor} Given a graded ideal $J \subset E$ there exists the unique {\em squarefree lexsegment ideal} $\lex(J) \subset E$ with the same Hilbert function as $J$ (see \cite{ARHEHI97} for details). Next we present a mild variation of a crucial result in \cite{ARHEHI98} which was an important step in comparing Betti numbers of squarefree strongly stable ideals and their corresponding squarefree lexsegment ideals. \begin{thm}[Aramova--Herzog--Hibi] \label{lexext} Let $J \subset E$ be a squarefree strongly stable ideal and let $L \subset E$ be its squarefree lexsegment ideal. Then $$m_{\leq i}(L_j) \leq m_{\leq i}(J_j) \text{ for all } i,j.$$ \end{thm} \begin{proof} It is a consequence of \cite[Theorem 3.9]{ARHEHI98} by observing the following facts. For $j\in \mathbb{N}$ consider the ideals $J'=(J_j)$ and $L'=(L_j)$ of $E$. Then $J'$ is a squarefree strongly stable ideal, $L'$ is a squarefree lexsegment ideal, and $\dim_K L'_t \leq \dim_K J'_t$ for all $t$ by \cite[Theorem 4.2]{ARHEHI97}. \end{proof} As in \cite[Lemma 2]{CO03}, one shows for Cartan--Betti numbers: \begin{lem} \label{someeq} Let $J \subset E$ be a graded ideal. 
Then $$ \beta_{0jp}^E(E/J) = \beta_{0jp}^E(E/\gin(J)) \text{ for all } j,p. $$ \end{lem} \begin{proof} Consider the graded $K$-algebra homomorphism $g\colon E\to E$ that is induced by a generic matrix in $\GL_n(K)$. Then the linear forms $f_i := g^{-1}(e_{n-p+i})$, $i=1,\ldots,p$, are generic too and the Hilbert functions of $E/(J+L)$ and $E/(g(J)+L')$ coincide, where $L=(f_1,\dots,f_p)$ and $L' = (e_{n-p+1},\ldots,e_n)$. Note that passing to the initial ideals does not change the Hilbert function and that $\ini(g(J)+L')=\ini(g(J))+L'$ because the chosen term order is revlex. Furthermore, we have that $\ini(g(J))=\gin(J)$ because $g$ is generic. Hence, the Hilbert function of $E/(J+L)$ equals that of $E/(\gin(J)+L')$. Since $ \beta_{0jp}^E(E/J)= \dim_K (E/(J+L))_j $ and $ \beta_{0jp}^E(E/\gin(J))= \dim_K (E/(\gin(J)+L'))_j $, our claim follows. \end{proof} The following result generalizes \cite[Prop. 1.8, Theorem 4.4]{ARHEHI97}. The proof is similar to \cite[Theorem 4.3]{CO04}. \begin{thm} \label{mainleq} Let $J \subset E$ be a graded ideal and $\tau$ an arbitrary term order on $E$. Then \begin{enumerate} \item $ \beta_{ijp}^E(E/J) \leq \beta_{ijp}^E(E/\gin_\tau(J)) $ for all $i,j,p$. \item $ \beta_{ijp}^E(E/J) \leq \beta_{ijp}^E(E/\lex(J)) $ for all $i,j,p$. \end{enumerate} \end{thm} \begin{proof} (i): Let $g\in \GL_n(K)$ be a generic matrix and let $f_i$ be the preimage of $e_{n-p+i}$ under the induced $K$-algebra isomorphism $g \colon E\to E$ where $i=1,\dots,p$. Then $$ H_i(f_1,\dots, f_p;E/J) \cong H_i(e_{n-p+1},\dots,e_{n};E/g(J)) \cong \Tor^E_i(E/g(J), E/(e_{n-p+1},\dots,e_{n})) $$ where the last isomorphism was noted above in (\ref{tor}). Proposition \ref{helpergb} provides: \begin{eqnarray*} \beta^E_{ijp}(E/J) &\leq& \dim_K \Tor^E_i(E/\ini_\tau(g(J)), E/(e_{n-p+1},\dots,e_{n}))_j\\ &=& \dim_K H_i(e_{n-p+1},\dots,e_{n}; E/\ini_\tau(g(J)))_j. \end{eqnarray*} Since $g$ is a generic matrix, $\ini_\tau(g(J))=\gin_\tau(J)$ is squarefree strongly stable. 
Thus we can apply Lemma \ref{cartanhelper} and (i) follows. (ii): By (i) we may replace $J$ by $\gin(J)$, thus we may assume that $J$ is squarefree strongly stable. Now (ii) follows from Theorem \ref{lexext} and Proposition \ref{maintechnical}. \end{proof} The next results answer the natural question of when the inequalities in Theorem \ref{mainleq} become equalities. Denote by $J_{\langle t\rangle}$ the ideal that is generated by all degree $t$ elements of the graded ideal $J$. Then $J \subset E$ is called {\em componentwise linear} if, for all $t\in \mathbb{N}$, the ideal $J_{\langle t\rangle}$ has a $t$-linear resolution, i.~e.\ $\Tor_i^E(J_{\langle t\rangle},K)_{i+j}=0$ for $j \neq t$. Given a module $M \in \mathcal{M}$, we call, in analogy to the symmetric case, the sequence $v_1,\dots,v_m \in E$ a {\em proper $M$-sequence} if for all $i\geq 1$ and all $1\leq j< m$ the maps $$ \delta_i \colon H_i(v_1,\dots,v_{j+1};M)(-1) \to H_i(v_1,\dots,v_{j};M) $$ are zero maps. The following result generalizes \cite[Theorem 2.1]{ARHEHI00ideal}. We follow the idea of the proof of \cite[Theorem 4.5]{CO04}. \begin{thm} \label{maincl} Let $J \subset E$ be a graded ideal. The following conditions are equivalent: \begin{enumerate} \item $\beta_{ijp}^E(E/J)= \beta_{ijp}^E(E/\gin(J))$ for all $i,j,p$; \item $\beta_{1jn}^E(E/J)= \beta_{1jn}^E(E/\gin(J))$ for all $j$; \item $J$ is a componentwise linear ideal; \item A generic sequence of linear forms $f_1,\dots,f_n$ is a proper sequence of $E/J$. \end{enumerate} \end{thm} \begin{proof} (i) $\Rightarrow$ (ii): This is trivial. (ii) $\Rightarrow$ (iii): This follows from \cite[Theorem 1.1 and Theorem 2.1]{ARHEHI00ideal}. In fact, the proof of Aramova, Herzog and Hibi shows this stronger result. (iii) $\Rightarrow$ (iv): Assume that $J$ is componentwise linear. Let $f_1,\dots,f_n$ be a generic sequence of linear forms. 
We have to show that for all $i\geq 1$ and all $1\leq j< n$ the homomorphisms $ \delta_i \colon H_i(f_1,\dots,f_{j+1};E/J)(-1) \to H_i(f_1,\dots,f_{j};E/J)$ are zero maps.
Assume first that $J$ is generated in a single degree $d$. Since the regularity does not change by passing to the generic initial ideal with respect to revlex, it follows that also $\gin(J)$ is generated in degree $d$. By Theorem \ref{cartanbetti} (i) we get that for $i\geq 1$ the homology module $H_i(f_1,\dots,f_j;E/\gin(J))$ is concentrated in the single degree $i+d-1$. Theorem \ref{mainleq} (i) shows that the same is true also for $H_i(f_1,\dots,f_j;E/J)$. This implies that $H_i(f_1,\dots,f_{j+1};E/J)(-1)$ has non-trivial homogeneous elements only in degree $i+d$ and the module $H_i(f_1,\dots,f_{j};E/J)$ has only homogeneous elements in degree $i+d-1$. Since $\delta_i$ is a homogeneous homomorphism of degree zero, this implies that $\delta_i$ is the zero map simply for degree reasons.
Now we consider the general case. Recall that for a cycle $c$ in some complex we write $[c]$ for the corresponding homology class. For any element $e \in E$ we denote by $\bar e$ the corresponding residue class in $E/J$ to distinguish notation. Now we fix some $i$ and $j$. Let $a \in H_i(f_1,\dots,f_j;E/J)(-1)$ be a homogeneous element of degree $s$. We have to show that $\delta_i(a)=0$. There exists a homogeneous element $z \in C_i(f_1,\dots,f_j;E)$ of degree $s-1$ such that $\bar z \in C_i(f_1,\dots,f_j;E/J)$ is a cycle representing $a$, i.~e. $\partial_i(\bar z)=0$ and $a=[\bar z]$.
Since $\partial_i(\bar z)=0$ we have that $\partial_i(z) \in JC_{i-1}(f_1,\dots,f_j;E)$ because $JC_{i-1}(f_1,\dots,f_j;E)=C_{i-1}(f_1,\dots,f_j;J)$ and we have the commutative diagram
$$
\begin{array}{ccccccc}
0 \ \rightarrow & C_i(f_1,\dots,f_j;J) & \longrightarrow & C_i(f_1,\dots,f_j;E) & \longrightarrow & C_i(f_1,\dots,f_j;E/J) & \rightarrow \ 0\\
& \downarrow \partial_i & & \downarrow \partial_i & & \downarrow \partial_i & \\
0 \ \rightarrow & C_{i-1}(f_1,\dots,f_j;J) & \longrightarrow & C_{i-1}(f_1,\dots,f_j;E) & \longrightarrow & C_{i-1}(f_1,\dots,f_j;E/J) & \rightarrow \ 0\\
\end{array}
$$
\noindent Since $\partial_i (z) $ is homogeneous of degree $s-1$, we obtain that $\partial_i(z)$ is in $J_k C_{i-1}(f_1,\dots,f_j;E)$ for $k=(s-1)-(i-1)=s-i$. Set $J' := J_{\langle k\rangle}$. Then $[\bar z]$ can also be considered as the homology class of $z$ in $H_i(f_1,\dots,f_j;E/J')$. By construction, $J'$ is generated in a single degree and, by assumption, it has a linear resolution. Hence we know already that $\delta_i([\bar z])=0$ as an element of $H_i(f_1,\dots,f_{j-1};E/J')$ (in degree $s$). The natural homomorphism $H_i(f_1,\dots,f_{j-1};E/J') \to H_i(f_1,\dots,f_{j-1};E/J)$ induced by the short exact sequence $0\to J/J' \to E/J' \to E/J \to 0$ shows that $\delta_i(a)=\delta_i([\bar z])=0$ as an element of $H_i(f_1,\dots,f_{j-1};E/J)$ (in degree $s$), which we wanted to show.
(iv) $\Rightarrow$ (i): By Lemma \ref{someeq} we know that $\beta^E_{0jp}(E/J)=\beta^E_{0jp}(E/\gin(J))$ for all $j,p$. Observe that $\beta^E_{ij1}(E/J)=\beta^E_{ij1}(E/\gin(J))$ for $i>0$ and $j\in \mathbb{Z}$ as one easily checks using properties of $\gin$. We prove (i) by showing that the numbers $\beta_{ijp}^E(E/J)$ only depend on the numbers $\beta_{0jp}^E(E/J)$ and $\beta_{ij1}^E(E/J)$.
Since $f_1,\dots,f_n$ is a proper sequence of $E/J$, the long exact Cartan homology sequence splits into short exact sequences $$ 0 \to H_1(f_1,\dots,f_{p-1};E/J) \to H_1(f_1,\dots,f_{p};E/J) \to H_0(f_1,\dots,f_{p};E/J)(-1) $$ $$ \to H_0(f_1,\dots,f_{p-1};E/J) \to H_0(f_1,\dots,f_{p};E/J) \to 0 $$ and, for $i>1$, $$ 0 \to H_i(f_1,\dots,f_{p-1};E/J) \to H_i(f_1,\dots,f_{p};E/J) \to H_{i-1}(f_1,\dots,f_{p};E/J)(-1) \to 0. $$ Thus $$ \beta^E_{1jp}(E/J) = \beta^E_{1jp-1}(E/J) + \beta^E_{0 j-1 p}(E/J) - \beta^E_{0jp-1}(E/J) + \beta^E_{0jp}(E/J) $$ and, for $i > 1$, $$ \beta^E_{ijp}(E/J) = \beta^E_{ijp-1}(E/J) + \beta^E_{i-1 j-1 p}(E/J). $$ Since $\beta^E_{ijp}(E/J)=0$ for all $i>p$, it is easy to see that these equalities imply (i). \end{proof} Recall that a graded ideal $J \subset E$ is called a {\em Gotzmann ideal} if the growth of its Hilbert function is the least possible, i.\ e.\ $\dim_K E_1 \cdot J_j = \dim_K E_1\cdot \lex(J)_j$ for all $j$. The next result is an exterior version of the corresponding result \cite[Theorem 4.6]{CO04} in the polynomial ring. \begin{thm} \label{maingotz} For each graded ideal $J \subset E$, the following statements are equivalent: \begin{enumerate} \item $\beta^E_{ijp}(E/J)= \beta^E_{ijp}(E/\lex(J))$ for all $i,j,p$; \item $\beta^E_{1jn}(E/J)= \beta^E_{1jn}(E/\lex(J))$ for all $j$; \item $J$ is a Gotzmann ideal in the exterior algebra; \item $\beta^E_{0jp}(E/J) = \beta^E_{0jp}(E/\lex(J))$ for all $j,p$ and $J$ is componentwise linear. \end{enumerate} \end{thm} \begin{proof} (i) $\Rightarrow$ (ii): This is trivial. (ii) $\Leftrightarrow$ (iii): This follows immediately from the definition of Gotzmann ideals. (ii) $\Rightarrow$ (iv): By Theorems \ref{mainleq} and \ref{maincl}, a Gotzmann ideal is componentwise linear and $\beta^E_{0jp}(E/J) = \beta^E_{0jp}(E/\lex(J))$ for all $j,p$. (iv) $\Rightarrow$ (i): From Theorem \ref{cartanbetti} (ii) we know that $m_{\leq n-p}(\gin(J)_j) = m_{\leq n-p}(\lex(J)_j)$ for all $j, p$. 
Then it follows from Corollary \ref{rigid} that $\beta^E_{ijp}(E/\gin(J)) = \beta^E_{ijp}(E/\lex(J))$ for all $i,j,p$. Since $J$ is componentwise linear, the assertion follows from Theorem \ref{maincl}. \end{proof} We conclude this section by comparing the Cartan--Betti numbers of generic initial ideals. The goal is to show that in the family $$ \gins(J) :=\{\gin_\tau(J) : \tau \text{ a term order of } E\} $$ of all generic initial ideals of $J$, the revlex-gin has the smallest Cartan--Betti numbers. We need a lemma and some more notation. Let $V \subset E_i$ be a $d$-dimensional subspace. Then $\bigwedge^d V$ is a $1$-dimensional subspace of $\bigwedge^d E_i$. We identify it with any of its nonzero elements. An exterior monomial in $\bigwedge^d E_i$ is by definition an element of the form $m_1 \wedge \ldots \wedge m_d$ where $m_1,\ldots,m_d$ are distinct monomials in $E_i$. It is called a {\em $\tau$-standard} exterior monomial if $m_1 >_{\tau} \ldots >_{\tau} m_d$. The vector space $\bigwedge^d E_i$ has a $K$-basis of $\tau$-standard exterior monomials that we order lexicographically by $ m_1 \wedge \ldots \wedge m_d >_{\tau} n_1 \wedge \ldots \wedge n_d $ if $m_i >_{\tau} n_i$ for the smallest index $i$ such that $m_i \neq n_i$. Using this order one defines the initial (exterior) monomial $\inn_{\tau} (f)$ of any $f \in \bigwedge^d E_i$ as the maximal $\tau$-standard exterior monomial with respect to $>_{\tau}$ which appears in the unique representation of $f$ as a sum of $\tau$-standard exterior monomials. Similarly, the initial subspace $\inn_{\tau}(V)$ of any subspace $V$ of $\bigwedge^d E_i$ is defined as the subspace generated by all $\inn_{\tau} (f)$ for $f \in V$. The following result and its proof are analogous to \cite[Corollary 1.6]{CO03}. \begin{lem} \label{helperdim} Let $\tau$ and $\sigma$ be term orders on $E$ and let $V \subset E_i$ be any $d$-dimensional subspace. 
If $m_1 \wedge \ldots \wedge m_d$ and $n_1 \wedge \ldots \wedge n_d$ are $\tau$-standard monomials such that $m_1,\ldots,m_d $ is a $K$-basis of $\gin_{\tau} (V) $ and $n_1,\ldots,n_d$ is a $K$-basis of $\gin_{\tau} (\inn_{\sigma} (V)) $, then $m_i \geq_{\tau} n_i$ for all $i = 1,\ldots,d$.
\end{lem}
Now we get analogously to the symmetric case \cite[Theorem 5.1]{CO04}:
\begin{thm}
\label{mainginleq}
Let $J \subset E$ be a graded ideal and $\tau$ a term order on $E$. Then
$$
\beta_{ijp}^E(E/\gin(J)) \leq \beta_{ijp}^E(E/\gin_\tau(J)) \quad \text{ for all } i,j,p.
$$
\end{thm}
\begin{proof}
Set $J'=\gin(J)$ and $J''=\gin_\tau(J)$. Note that $J'$ and $J''$ are squarefree strongly stable ideals with the same Hilbert function as $J$. Thus by Proposition \ref{maintechnical}, it is enough to show that $m_{\leq i}(J''_j) \leq m_{\leq i}(J'_j)$ for all $i$ and $j$. Let $a_1,\dots,a_k$ be the generators of $J'_j$ and let $b_1,\dots,b_k$ be those of $J''_j$. We may assume that the $a_r$'s and the $b_r$'s are listed in the revlex order. Then we claim:
$$
a_r\geq b_r \text{ in the revlex order for all } r.
$$
This implies that $\max(a_r)\leq \max(b_r)$ for all $r$, and hence $m_{\leq i}(J''_j)\leq m_{\leq i}(J'_j)$. Thus it remains to show the claim.
Let $V,W \subset E_j$ be two monomial vector spaces of the same dimension where $V$ has a monomial $K$-basis $v_1,\dots,v_k$ and $W$ has a monomial $K$-basis $w_1,\dots,w_k$. Define $V \geq_{\revlex} W$ if $v_i \geq_{\revlex} w_i$, $v_i>_{\revlex} v_{i+1}$ and $w_i>_{\revlex} w_{i+1}$ for all $i$. Then Lemma \ref{helperdim} provides:
$$
\gin_{\revlex}(V) \geq_{\revlex} \gin_{\revlex}(\ini_\tau(V)).
$$
If $V \subset E_j$ is a generic subspace, then $\ini_\tau(V)= \gin_\tau(V)$. Since $\gin_{\revlex}(\gin_\tau(V))=\gin_\tau(V)$, we then get $ \gin_{\revlex}(V) \geq_{\revlex} \gin_\tau(V). $ Choosing $V=J_j'$ proves the claim.
\end{proof}
Observe that inequalities similar to the crucial $m_{\leq i}((\gin_\tau(J))_j) \leq m_{\leq i}(\gin(J)_j)$ above were shown by \cite[Proposition 2.4]{M05}.
\section{Simplicial complexes and algebraic shifting}
\label{sec-def}
In the second part of this paper we give some combinatorial applications of the exterior algebra methods presented in Section \ref{cartan} and some symmetric algebra methods which will be presented below. For this we first introduce some notation and discuss some basic concepts.
Recall that $\Gamma$ is called a {\em simplicial complex} on the vertex set $[n]=\{1,\dots,n\}$ if $\Gamma$ is a subset of the power set of $[n]$ which is closed under inclusion, i.~e.\ if $F \subseteq G$ and $G \in \Gamma$, then $F\in \Gamma$. The elements $F$ of $\Gamma$ are called {\em faces}, and the maximal elements under inclusion are called {\em facets}. We denote the set of all facets by $\facets(\Gamma)$. If $F$ consists of $d+1$ elements of $[n]$, then $F$ is called a {\em $d$-dimensional} face, and we write $\dim F = d$. The empty set is a face of dimension $-1$. Faces of dimension 0, 1 are called {\em vertices} and {\em edges}, respectively. $\Gamma$ is called {\em pure} if all facets have the same dimension. If $\Gamma$ is non-empty, then the {\em dimension} $\dim \Gamma $ is the maximum of the dimensions of the faces of $\Gamma$. Let $f_i$ be the total number of $i$-dimensional faces of $\Gamma$ for $i=-1,\dots,\dim \Gamma$. The vector $f(\Gamma)=(f_{-1},\dots,f_{\dim \Gamma})$ is called the {\em $f$-vector} of $\Gamma$.
Several constructions associate other simplicial complexes to a given one. For example, the {\em Alexander dual} of $\Gamma$ is defined as $\Gamma^* =\{F \subseteq [n] : F^c \not\in \Gamma\}$ where $F^c=[n] \setminus F$. This gives a duality operation on simplicial complexes in the sense that $\Gamma^*$ is indeed a simplicial complex on the vertex set $[n]$ and we have that $(\Gamma^*)^*=\Gamma$.
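To illustrate these notions, consider the simplicial complex $\Gamma$ on the vertex set $[3]$ with $\facets(\Gamma)=\{\{1,2\},\{3\}\}$, i.~e.\ $\Gamma=\{\emptyset,\{1\},\{2\},\{3\},\{1,2\}\}$. Then $\Gamma$ is not pure, $\dim \Gamma=1$ and $f(\Gamma)=(1,3,1)$. The subsets of $[3]$ that are not faces of $\Gamma$ are $\{1,3\}$, $\{2,3\}$ and $\{1,2,3\}$; taking complements yields $\Gamma^*=\{\emptyset,\{1\},\{2\}\}$, and one checks directly that $(\Gamma^*)^*=\Gamma$.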
The connection to algebra is due to the following construction. Let $K$ be a field and $S=K[x_1,\dots,x_n]$ be the polynomial ring in $n$ indeterminates. $S$ is a graded ring by setting $\deg x_i=1$ for $i=1,\dots,n$. We define the {\em Stanley--Reisner ideal} $I_\Gamma=(\prod_{i \in F} x_i : F\subseteq [n],\ F\not\in \Gamma)$ and the corresponding {\em Stanley--Reisner ring} $K[\Gamma]=S/I_\Gamma$. For a subset $F \subset [n]$, we write $x_F$ for the squarefree monomial $\prod_{i \in F} x_i$. We will say that $\Gamma$ has an algebraic property like Cohen--Macaulayness if and only if $K[\Gamma]$ has this property. Note that these definitions may depend on the ground field $K$. There is an analogous exterior algebra construction due to Aramova, Herzog and Hibi (e.g.\ see \cite{HE01}). Let $E=K \langle e_1,\dots,e_n\rangle$ be the exterior algebra on $n$ exterior variables. $E$ is also a graded ring by setting $\deg e_i=1$ for $i=1,\dots,n$. One defines the {\em exterior Stanley--Reisner ideal} as $J_\Gamma=(e_F : F \not\in \Gamma)$ and the {\em exterior face ring} $K\{\Gamma\}=E/J_\Gamma$. We keep these notations throughout the paper. Further information on simplicial complexes can be found, for example, in the books \cite{BRHE98} and \cite{ST96}. Let $\mathcal{C}([n])$ be the set of simplicial complexes on $[n]$. Following constructions of Kalai, we define axiomatically the concept of algebraic shifting. See \cite{HE01} and \cite{KA01} for surveys on this subject. We call a map $\shift \colon \mathcal{C}([n]) \to \mathcal{C}([n])$ a {\em shifting operation} if the following conditions are satisfied: \begin{enumerate} \item[(S1)] If $\Gamma \in \mathcal{C}([n])$, then $\shift(\Gamma)$ is a {\em shifted complex}, i.~e.\ for all $F \in \shift(\Gamma)$ and $j > i \in F$ we have that $(F \setminus \{i\}) \cup \{j\} \in \shift(\Gamma)$. \item[(S2)] If $\Gamma \in \mathcal{C}([n])$ is shifted, then $\shift(\Gamma)=\Gamma$. 
\item[(S3)] If $\Gamma \in \mathcal{C}([n])$, then $f(\Gamma)=f(\shift(\Gamma))$.
\item[(S4)] If $\Gamma', \Gamma \in \mathcal{C}([n])$ and $\Gamma' \subseteq \Gamma$ is a subcomplex, then $\shift(\Gamma')$ is a subcomplex of $ \shift(\Gamma)$.
\end{enumerate}
See \cite{BNT2} for variations of these axioms.
Note that a simplicial complex $\Gamma$ is shifted if and only if the Stanley--Reisner ideal $I_\Gamma \subset S=K[x_1,\dots,x_n]$ is a {\em squarefree strongly stable ideal} with respect to $x_1>\dots>x_n$, i.~e.\ for all $F\subseteq [n]$ with $x_F \in I_\Gamma$ and all $i \in F$, $j<i$, $j \not\in F$ we have that $(x_jx_F)/x_i \in I_\Gamma$. Analogously, $\Gamma$ is shifted if and only if the exterior Stanley--Reisner ideal $J_\Gamma \subset E=K\langle e_1,\dots,e_n \rangle$ is a {\em squarefree strongly stable ideal} with respect to $e_1>\dots>e_n$.
Two of the most important examples of such operations are defined as follows.
\begin{exa}{(Symmetric algebraic shifting)}
\label{exsymmetric}
We first recall symmetric algebraic shifting, which was introduced in \cite{KA84}, \cite{KA91}. Here, we follow the algebraic approach of Aramova, Herzog and Hibi \cite{ARHEHI00}. Assume that $K$ is a field of characteristic $0$ and let $S=K[x_1,\dots,x_n]$. We consider the following operation on monomial ideals of $S$. For a monomial $m=x_{i_1}\cdots x_{i_t}$ with $i_1\leq i_2 \leq \dots \leq i_t$ of $S$ we set $m^\sigma=x_{i_1}x_{i_2+1}\dots x_{i_t+ t-1}$. For a monomial ideal $I$ with unique minimal system of generators $G(I)=\{m_1,\dots,m_s\}$ we set $I^\sigma=(m_1^\sigma,\dots,m_s^\sigma)$ in a suitable polynomial ring with sufficiently many variables.
Let $\Gamma$ be a simplicial complex on the vertex set $[n]$ with Stanley--Reisner ideal $I_{\Gamma}$. The {\em symmetric algebraic shifted complex} of $\Gamma$ is the unique simplicial complex $\Delta^s(\Gamma)$ on the vertex set $[n]$ such that
$$
I_{\Delta^s(\Gamma)} = \bigl(\gin(I_\Gamma)\bigr)^\sigma \subset S.
$$ It is not obvious that $\Delta^s(\cdot)$ is indeed a shifting operation. The first difficulty is to show that $I_{\Delta^s(\Gamma)}$ is an ideal of $S$. This and the proofs of the other properties can be found in \cite{ARHEHI00} or \cite{HE01}. \end{exa} \begin{exa}{(Exterior algebraic shifting)} \label{exexterior} Exterior algebraic shifting was also defined by Kalai in \cite{KA84}. Let $E=K\langle e_1,\dots,e_n\rangle$ where $K$ is any infinite field. Let $\Gamma$ be a simplicial complex on the vertex set $[n]$. The {\em exterior algebraic shifted complex} of $\Gamma$ is the unique simplicial complex $\Delta^e(\Gamma)$ on the vertex set $[n]$ such that $$ J_{\Delta^e(\Gamma)} = \gin(J_{\Gamma}). $$ For an introduction to the theory of Gr\"obner bases in the exterior algebra see \cite{ARHEHI97}. As opposed to symmetric algebraic shifting, it is much easier to see that $\Delta^ e(\cdot)$ is indeed a shifting operation since generic initial ideals in the exterior algebra are already squarefree strongly stable. See again \cite{HE01} for details. \end{exa} There are several other shifting operations. Since the proof of \cite[Proposition 8.8]{HE01} works for arbitrary term orders $\tau$, one can take a generic initial ideal $\gin_\tau(\cdot)$ in $E$ with respect to any term order $\tau$ to obtain, analogously to Example \ref{exexterior}, the {\em exterior algebraic shifting operation} $\Delta^\tau(\cdot)$ with respect to the term order $\tau$. \section{Degree functions of simplicial complexes I} \label{sec-degree} In commutative algebra degree functions are designed to provide measures for the size and the complexity of the structure of a given module. Given a simplicial complex $\Gamma$ on the vertex set $[n]$, we recall the definition of several degree functions on the Stanley--Reisner ring $K[\Gamma]$ in terms of combinatorial data. The first important invariant is the degree (or multiplicity) $\deg \Gamma$ of $\Gamma$. 
By definition $\deg \Gamma=\deg K[\Gamma]$ is the degree of the Stanley--Reisner ring of $\Gamma$. It can be combinatorially described as
\begin{equation}
\label{eq-comb-deg}
\deg \Gamma=f_{\dim \Gamma}(\Gamma), \; \text{ the number of faces of maximal dimension.}
\end{equation}
Since any algebraic shifting operation $\shift(\cdot)$ preserves the $f$-vector, we have that
\begin{equation}
\label{eq-deg-inv}
\deg \Gamma = \deg \shift(\Gamma).
\end{equation}
In case $\Gamma$ is a Cohen--Macaulay complex, this invariant gives a lot of information about $\Gamma$, because $\Gamma$ is pure and thus $\deg \Gamma$ counts all facets. There are several other degree functions which take into account more information about $\Gamma$. Next we study the {\em arithmetic degree}. Algebraically it is defined as
$$
\adeg \Gamma = \adeg K[\Gamma] = \sum l(H^0_{\p}(K[\Gamma]_{\p})) \cdot \deg \bigl(K[\Gamma]/\p \bigr)
$$
where the sum runs over the associated prime ideals of $K[\Gamma]$ and $l(\cdot)$ denotes the length function of a module. In \cite{VA-98} and \cite{BNT} it was noted that the results of \cite{STTRVO} imply that
\begin{equation}
\label{eq-comb-adeg}
\adeg \Gamma = |\{F \in \facets(\Gamma)\}|
\end{equation}
is the number of facets of $\Gamma$. Similarly, one defines for $i=0,\dots,\dim K[\Gamma]$
$$
\adeg_i \Gamma = \adeg_i K[\Gamma] = \sum l(H^0_{\p}(K[\Gamma]_{\p})) \cdot \deg \bigl(K[\Gamma]/\p \bigr)
$$
where the sum runs over the associated prime ideals of $K[\Gamma]$ such that $\dim K[\Gamma]/\p=i$ and one gets
\begin{equation}
\label{eq-comb-adeg-i}
\adeg_i \Gamma = |\{F \in \facets(\Gamma): \dim F=i-1\}|.
\end{equation}
By definition, shifting preserves the $f$-vector and the number of facets of maximal dimension of a simplicial complex. However, the total number of facets may change under shifting. But it may only increase, as we show now.
\begin{thm}
\label{thm-adeg-incr}
Let $\Delta$ be a shifting operation and $\Gamma$ a $(d-1)$-dimensional simplicial complex.
Then
$$
\adeg_i \Gamma \leq \adeg_i \Delta (\Gamma) \text{ for } i=0,\dots,d.
$$
In particular, $|\facets (\Gamma)|\leq |\facets (\Delta (\Gamma))|$.
\end{thm}
\begin{proof}
Let $m-1$ be the smallest dimension of a facet of $\Gamma$. We prove the assertion by induction on the number $(d-1)-(m-1)=d-m \geq 0$.
If $d-m = 0$, then $\Gamma$ is a pure complex and only $\adeg_d \Gamma$ is different from zero. Using Identity \eqref{eq-deg-inv} we get $\adeg_d \Gamma = \deg \Gamma = \deg \shift (\Gamma) = \adeg_d \shift (\Gamma)$ as claimed.
Let $d > m$. Observe that $\adeg_i \Gamma=0$ for $i<m$. Denote by $F_1,\ldots,F_s \in \Gamma$ the $(m-1)$-dimensional facets of $\Gamma$ and let $\Gamma'$ be the subcomplex of $\Gamma$ whose facets are the remaining facets of $\Gamma$. Thus, we have in particular:
$$
f_i (\Gamma') = f_i (\Gamma) \quad \mbox{for all} ~ i > m-1.
$$
Since shifting does not change the $f$-vector and $\shift (\Gamma')$ is a subcomplex of $\shift (\Gamma)$, we conclude that
$$
f_i (\shift (\Gamma')) = f_i (\shift (\Gamma)) \quad \mbox{for all} ~ i > m-1,
$$
and, in particular, that each facet of $\shift (\Gamma')$ of dimension $> m-1$ is also a facet of $\shift (\Gamma)$. It follows from the induction hypothesis that
$$
\adeg_i \Gamma = \adeg_i \Gamma' \leq \adeg_i \Delta(\Gamma') = \adeg_i \Delta(\Gamma) \text{ for } i>m.
$$
Hence, it remains to show that $\shift (\Gamma)$ has at least $s=\adeg_{m} \Gamma$ facets of dimension $m-1$. To this end note that, by definition of $\Gamma'$, $f_{m-1} (\Gamma) = f_{m-1} (\Gamma') + s$, thus we get for the shifted complexes $f_{m-1} (\shift (\Gamma)) = f_{m-1} (\shift (\Gamma')) + s$. Let $G \in \shift (\Gamma)$ be any $(m-1)$-dimensional face that is not in $\shift (\Gamma')$. Assume that $G$ is strictly contained in a face $\widetilde{G}$ of $\shift (\Gamma)$.
Then $\dim \widetilde{G} > m-1$ and, using the fact that the faces of $\shift (\Gamma')$ and $\shift (\Gamma)$ of dimension $> m-1$ coincide, we conclude that $\widetilde{G} \in \shift (\Gamma')$. Thus $G \in \shift (\Gamma')$. This contradiction to the choice of $G$ shows that $G$ is a facet of $\shift (\Gamma)$ and we are done. \end{proof} Observe that we only used axioms (S3) and (S4) of a shifting operation to prove Theorem \ref{thm-adeg-incr}. It would be interesting to know if there are further results in this direction like: \begin{quest} \label{importantquestion} Is there a shifting operation $\shift(\cdot)$ such that $\adeg$ increases the least, i.~e.\, for every simplicial complex $\Gamma$, we have $\adeg \shift (\Gamma) \leq \adeg \shift' (\Gamma)$ for each algebraic shifting operation $\shift'(\cdot)$? \end{quest} We use the results in Section \ref{cartan} to give a partial answer to the previous question. We begin by observing that shifting and Alexander duality commute. \begin{lem} \label{shifthelper} Let $\Gamma$ be a simplicial complex on the vertex set $[n]$. Let $\Delta^{\tau}(\cdot)$ be the exterior shifting operation with respect to a term order $\tau$ on $E$. Then Alexander duality commutes with shifting, i.\ e.\ we have $ \Delta^\tau(\Gamma)^* = \Delta^\tau(\Gamma^*). $ \end{lem} \begin{proof} In \cite{HETE} this was proved for $\tau$ being the revlex order. But the proof works for any term order $\tau$. \end{proof} As a further preparation, we need an interpretation of the arithmetic degree over the exterior face ring using the socle. The socle $\Soc N$ of a finitely generated $E$-module $N$ is the set of elements $x \in N$ such that $(e_1,\dots,e_n)x=0$. Observe that $\Soc N$ is always a finite-dimensional $K$-vector space. \begin{lem} \label{arithdesc} Let $\Gamma$ be a $(d-1)$-dimensional simplicial complex on the vertex set $[n]$. Then $$ \adeg_i \Gamma = \dim_K \Soc K\{\Gamma\}_i \text{ for } i=0,\dots,d. 
$$
In particular, $\adeg \Gamma = \dim_K \Soc K\{\Gamma\}$.
\end{lem}
\begin{proof}
The residue classes of the monomials $e_F$, $F \in \Gamma$, form a $K$-vector space basis of $K\{\Gamma\}$. Hence, the facets of $\Gamma$ correspond to a $K$-basis of $\Soc K\{\Gamma\}$.
\end{proof}
We have seen in Theorem \ref{thm-adeg-incr} that each shifting operation can only increase the number of facets. Below we show that among the exterior shifting operations, standard exterior shifting with respect to revlex leads to the least possible increase.
Now we recall the concept of a sequentially Cohen--Macaulay module. Let $M$ be a finitely generated graded $S$-module. The module $M$ is said to be {\em sequentially Cohen--Macaulay} (sequentially CM for short) if there exists a finite filtration
\begin{equation}
\label{filter}
0=M_0 \subset M_1 \subset M_2 \subset \dots \subset M_r=M
\end{equation}
of $M$ by graded submodules of $M$ such that each quotient $M_i/M_{i-1}$ is Cohen--Macaulay and $\dim M_1/M_0 <\dim M_2/M_1 <\dots<\dim M_r/M_{r-1}$ where $\dim$ denotes the Krull dimension of $M$.
\begin{thm}
\label{shifted}
Let $\Gamma$ be a $(d-1)$-dimensional simplicial complex on the vertex set $[n]$. For any term order $\tau$ on $E$, we have
$$
\adeg_i \Gamma \leq \adeg_i \Delta^e(\Gamma) \leq \adeg_i \Delta^\tau(\Gamma) \text{ for } i=0,\dots,d.
$$
In particular, $\adeg\Gamma \leq \adeg\Delta^e(\Gamma) \leq \adeg \Delta^\tau(\Gamma)$. Moreover, $\adeg \Gamma = \adeg \Delta^e(\Gamma)$ if and only if $\Gamma$ is sequentially Cohen--Macaulay.
\end{thm}
\begin{proof}
Using $\adeg_i \Gamma = \dim_K \Soc (E/J_{\Gamma})_i$ and the definition of Alexander duality, it is easy to see that $\adeg_i \Gamma$ coincides with the number of minimal generators of $J_{\Gamma^*}$ of degree $n-i$, which is $\beta_{0n-i}^E(J_{\Gamma^*})$, i.~e.\ we have $ \adeg_i \Gamma = \beta_{0n-i}^E(J_{\Gamma^*}) = \beta_{1n-i}^E(E/J_{\Gamma^*}).$ Hence Lemma \ref{shifthelper} provides
$$
\adeg_i \Gamma = \beta_{1n-i}^E(E/J_{\Gamma^*}) \leq \beta_{1n-i}^E(E/\gin(J_{\Gamma^*})) = \beta_{1n-i}^E(E/J_{\Delta^e(\Gamma^*)})
$$
$$
= \adeg_i \Delta^e(\Gamma^*)^* = \adeg_i \Delta^e(\Gamma)
$$
for $i=0,\dots,d$.
Since $ \beta_{1j}^E(E/J_{\Gamma^*}) \leq \beta_{1j}^E(E/\gin(J_{\Gamma^*})) $ for all $j$, it follows from Theorem \ref{maincl} that we have equality if and only if $J_{\Gamma^*}$ is componentwise linear. By \cite{HEHI99}, this is the case if and only if the Stanley--Reisner ideal $I_{\Gamma^*} \subset S=K[x_1,\dots,x_n]$ is componentwise linear. But the latter is equivalent to $\Gamma$ being sequentially Cohen--Macaulay according to \cite{HEREWE}.
Combining the above argument and Theorem \ref{mainginleq}, we obtain also
$$
\adeg_i \Delta^e(\Gamma) = \beta_{1n-i}^E(E/\gin(J_{\Gamma^*})) \leq \beta_{1n-i}^E(E/\gin_\tau(J_{\Gamma^*})) = \adeg_i \Delta^\tau(\Gamma),
$$
and the proof is complete.
\end{proof}
\begin{rem} \
\begin{enumerate}
\item Observe that Theorem \ref{mainginleq} is essentially equivalent to the inequalities $m_{\leq i}((\gin_\tau(J))_j) \leq m_{\leq i}(\gin(J)_j)$. Thus one could argue in the above proof of Theorem \ref{shifted} by using the latter numbers instead of the corresponding graded Betti numbers. (Again see also \cite[Proposition 2.4]{M05} for the needed inequalities.)
\item The fact that $\adeg \Gamma = \adeg \Delta^e(\Gamma)$ if and only if $\Gamma$ is sequentially Cohen--Ma\-caulay can also be shown by refining the arguments in the proof of Theorem \ref{thm-adeg-incr} and using the following facts: a simplicial complex $\Gamma$ is sequentially Cohen--Macaulay if and only if each subcomplex $\Gamma^{[i]}$ generated by the $i$-faces of $\Gamma$ is Cohen--Macaulay (see \cite{DU}); $\Gamma$ is Cohen--Macaulay if and only if $\Delta^e(\Gamma)$ is pure.
\end{enumerate}
\end{rem}
The arithmetic degree has nice properties especially for Cohen--Macaulay $K$-algebras. However, there are some disadvantages for non-Cohen--Macaulay $K$-algebras. See \cite{VA} for a discussion. Vasconcelos axiomatically defined the following concept. Recall that $S=K[x_1,\dots,x_n]$ where $K$ is a field. A numerical function $\Deg$ that assigns to every finitely generated graded $S$-module a non-negative integer is said to be an {\em extended degree function} if it satisfies the following conditions:
\begin{enumerate}
\item If $L=H^0_{\m}(M)$, then $\Deg M =\Deg M/L +l(L).$
\item If $y\in S_1$ is sufficiently general and $M$-regular, then $\Deg M \geq \Deg M/yM.$
\item If $M$ is a Cohen--Macaulay module, then $\Deg M=\deg M.$
\end{enumerate}
There exists a {\em smallest extended degree function} $\sdeg$ in the sense that $\sdeg M \leq \Deg M$ for every other extended degree function.
\begin{rem}
\label{sdeghelper}
We need the following properties of $\sdeg$. (For proofs see \cite{NR}.) For simplicity we state them only for a graded $K$-algebra $S/I$ where $I \subset S$ is a graded ideal.
\begin{enumerate}
\item $S/I$ is Cohen--Macaulay if and only if $\sdeg S/I= \deg S/I$. (This is true for any extended degree function.)
\item $\adeg S/I\leq \sdeg S/I$.
\item If $S/I$ is sequentially Cohen--Macaulay, then $\adeg S/I=\sdeg S/I$.
\item $\sdeg S/I = \sdeg S/\gin(I)$.
\end{enumerate}
\end{rem}
Then we have:
\begin{thm}
\label{sedgadegshift}
Let $\Gamma$ be a simplicial complex on the vertex set $[n]$. We have that:
\begin{enumerate}
\item $\sdeg \shift(\Gamma)=\adeg \shift(\Gamma)$ for each algebraic shifting operation $\shift$.
\item $\deg \Gamma \leq \adeg \Gamma \leq \sdeg \Gamma = \sdeg \shift^s(\Gamma)=\adeg \shift^s(\Gamma)$.
\end{enumerate}
\end{thm}
\begin{proof}
In the proof we use that $\shift(\Gamma)$ is always sequentially Cohen--Macaulay, i.\ e.\ $K[\shift(\Gamma)]$ is a sequentially CM-ring (cf.\ Proposition \ref{prop-shifted-sCM} below). This fact and the properties of $\sdeg$ observed in Remark \ref{sdeghelper} imply (i). The only critical equality in (ii) is $\sdeg \Gamma = \sdeg \shift^s(\Gamma)$. We compute
\begin{eqnarray*}
\sdeg \Gamma &=& \sdeg S/I_\Gamma\\
&=& \sdeg S/\gin(I_\Gamma)\\
&=& \adeg S/\gin(I_\Gamma)\\
&=& \adeg S/I_{\shift^s(\Gamma)}\\
&=& \sdeg S/I_{\shift^s(\Gamma)}
\end{eqnarray*}
where the first equality is the definition of $\sdeg \Gamma$, the second one was noted above and the third equality follows from the fact that $S/\gin(I_\Gamma)$ is sequentially Cohen--Macaulay (see \cite[Theorem 2.2]{HESB}). The fourth equality follows again from \cite[Theorem 6.6]{BNT} and the last equality is (i) which we proved already.
\end{proof}
We already mentioned that $\deg \Gamma$ is the number of facets of maximal dimension of $\Gamma$ and $\adeg \Gamma$ is the number of facets of $\Gamma$. Theorem \ref{sedgadegshift} provides a combinatorial interpretation of $\sdeg \Gamma$. It is the number of facets of $\Delta^s(\Gamma)$.
\section{The Cohen--Macaulay property and iterated Betti numbers}
\label{ringprop}
In this section we use degree functions to relate properties of the shifted complex and iterated Betti numbers to the original complex. Our first observation is well-known to specialists. It states that shifted complexes have a nice algebraic structure.
\begin{prop}
\label{prop-shifted-sCM}
Shifted simplicial complexes are sequentially Cohen--Macaulay. In particular, if $\Gamma$ is a simplicial complex on the vertex set $[n]$ and $\shift(\cdot)$ is an arbitrary algebraic shifting operation, then $\shift(\Gamma)$ is sequentially Cohen--Macaulay.
\end{prop}
\begin{proof}
Let $\Gamma$ be a shifted simplicial complex. Recall that then $I_{\Gamma} \subset S$ is a squarefree strongly stable ideal with respect to $x_1>\dots>x_n$, i.~e.\ for all $x_F=\prod_{l \in F}x_l \in I_{\Gamma}$ and $i$ with $x_i|x_F$ we have for all $j<i$ with $x_j\nmid x_F$ that $(x_F/x_i)x_j\in I_{\Gamma}$. It is easy to see that $I_{\Gamma}$ is squarefree strongly stable if and only if $I_{\Gamma^*}$ is squarefree strongly stable where $\Gamma^*$ is the Alexander dual of $\Gamma$. It is well-known that in this situation $I_{\Gamma^*}$ is a so-called componentwise linear ideal (see Section \ref{cartan} for the definition) and thus the assertion follows from \cite[Theorem 9]{HEREWE}.
\end{proof}
\begin{rem}
Alternatively one can prove Proposition \ref{prop-shifted-sCM} using more combinatorial arguments as follows. Shifted simplicial complexes are non-pure shellable by \cite{BW97} and thus sequentially Cohen--Macaulay by \cite{ST96}.
\end{rem}
It is easy to decide whether a shifted simplicial complex is Cohen--Macaulay because of the following well-known fact:
\begin{prop}
Let $\Gamma$ be a shifted simplicial complex on the vertex set $[n]$. Then the following statements are equivalent:
\begin{enumerate}
\item $\Gamma$ is Cohen--Macaulay;
\item $\Gamma$ is pure.
\end{enumerate}
\end{prop}
\begin{proof}
If $\Gamma$ is Cohen--Macaulay, then it is well-known that $\Gamma$ is pure.
Assume now that $\Gamma$ is a pure shifted simplicial complex. We compute
$$
\sdeg S/I_{\Gamma} = \adeg S/I_{\shift^s(\Gamma)} = \adeg S/I_{\Gamma} = \deg S/I_{\Gamma}.
$$
The first equality was shown in Theorem \ref{sedgadegshift}, the second one holds because $\Gamma$ is shifted, and the third one because $\Gamma$ is pure. Hence $\sdeg S/I_{\Gamma}=\deg S/I_{\Gamma}$ and this implies by Remark \ref{sdeghelper} that $\Gamma$ is Cohen--Macaulay.
\end{proof}
Intuitively, shifting leads to a somewhat simpler complex and one would like to transfer properties from $\shift (\Gamma)$ to $\Gamma$. In this respect, we have:
\begin{prop}
\label{justnoted}
Let $\Gamma$ be a simplicial complex on the vertex set $[n]$ and let $\shift(\cdot)$ be any algebraic shifting operation such that $\sdeg \Gamma \leq \sdeg \shift(\Gamma)$. If $\shift(\Gamma)$ is Cohen--Macaulay, then $\Gamma$ is Cohen--Macaulay.
\end{prop}
\begin{proof}
Suppose that $\shift(\Gamma)$ is Cohen--Macaulay. Then $\shift(\Gamma)$ is pure and we have that $\adeg \shift(\Gamma) = \deg \shift(\Gamma)$. Thus, we get
$$
\deg \Gamma \leq \sdeg \Gamma \leq \sdeg \shift(\Gamma) = \adeg \shift(\Gamma) = \deg \shift(\Gamma) = \deg \Gamma.
$$
The only critical equality is $\sdeg \shift(\Gamma) = \adeg \shift(\Gamma)$ which follows from Remark \ref{sdeghelper} (iii) and Proposition \ref{prop-shifted-sCM}. Hence $\deg \Gamma=\sdeg \Gamma$ and it follows that $\Gamma$ is Cohen--Macaulay.
\end{proof}
For exterior algebraic shifting, the following result was proved in \cite{KA01}. For symmetric algebraic shifting, this result follows from general homological arguments given in \cite{BYCHPO}, as noted in \cite{BNT}; it essentially first appeared in \cite[Theorem 6.4]{KA91}. Below, we provide a very short proof using only degree functions.
\begin{thm}
\label{cmnice}
Let $\Gamma$ be a simplicial complex on the vertex set $[n]$. Then the following statements are equivalent:
\begin{enumerate}
\item $\Gamma$ is Cohen--Macaulay;
\item $\shift^{s}(\Gamma)$ is Cohen--Macaulay;
\item $\shift^{s}(\Gamma)$ is pure.
\end{enumerate} \end{thm} \begin{proof} We already know that (ii) and (iii) are equivalent. Suppose that (i) holds. Then $$ \deg \shift^s(\Gamma) = \deg \Gamma = \sdeg \Gamma = \adeg \shift^s(\Gamma) \geq \adeg \Gamma \geq \deg \Gamma = \deg \shift^s(\Gamma). $$ The first equality is trivial. The second one holds because $\Gamma$ is Cohen--Macaulay, and the third one follows from Theorem \ref{sedgadegshift} (ii). The first inequality is due to Theorem \ref{thm-adeg-incr} and the second one follows from the definition of $\adeg$. Hence $\adeg \shift^s(\Gamma) = \deg \shift^s(\Gamma)$, and thus $\shift^s(\Gamma)$ is pure, as claimed in (iii). The fact that (iii) implies (i) follows from $\sdeg \Gamma=\adeg \shift^s(\Gamma) \leq \sdeg \shift^s (\Gamma)$ and Proposition \ref{justnoted}. \end{proof} \begin{rem} Using Theorem \ref{shifted} one can prove in a similar way that Theorem \ref{cmnice} holds also for the exterior shifting $\Delta^e$. \end{rem} For the next results, we need some further notation and definitions. We first recall a refinement of the $f$- and $h$-vector of a simplicial complex due to \cite{BW96}. \begin{defi} Let $\Gamma$ be a $(d-1)$-dimensional simplicial complex on the vertex set $[n]$. We define $$ f_{i,r}(\Gamma) = |\{ F \in \Gamma : \deg_{\Gamma} F = i \text{ and } |F| = r \}| $$ where $\deg_{\Gamma} F= \max \{|G| : F \subseteq G \in \Gamma\}$ is the {\em degree} of a face $F \in \Gamma$. The triangular integer array $f(\Gamma) = (f_{i,r}(\Gamma))_{0\leq r \leq i\leq d}$ is called the {\em $f$-triangle} of $\Gamma$. Further we define $$ h_{i,r}(\Gamma) = \sum_{s=0}^r (-1)^{r-s}\binom{i-s}{r-s} f_{i,s}(\Gamma). $$ The triangular array $h(\Gamma)= (h_{i,r}(\Gamma))_{0\leq r\leq i\leq d}$ is called the {\em $h$-triangle} of $\Gamma$. \end{defi} It is easy to see that, given the $h$-triangle, one can also compute the $f$-triangle; thus these triangles determine each other.
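The conversion between the two triangles is a triangular binomial transform, so their equivalence can be checked mechanically. The following Python sketch is an illustration of ours (not part of the text): it implements the formula above together with its binomial inverse, $f_{i,r} = \sum_{s=0}^r \binom{i-s}{r-s} h_{i,s}$, and verifies the round trip on the boundary complex of a triangle and on a random triangular array.

```python
import random
from math import comb

def h_from_f(f):
    """h_{i,r} = sum_{s=0}^{r} (-1)^{r-s} C(i-s, r-s) f_{i,s}."""
    return {(i, r): sum((-1) ** (r - s) * comb(i - s, r - s) * f[(i, s)]
                        for s in range(r + 1)) for (i, r) in f}

def f_from_h(h):
    """Binomial inversion: f_{i,r} = sum_{s=0}^{r} C(i-s, r-s) h_{i,s}."""
    return {(i, r): sum(comb(i - s, r - s) * h[(i, s)]
                        for s in range(r + 1)) for (i, r) in h}

# boundary of a triangle: pure, one-dimensional, f-triangle row f_{2,r} = (1, 3, 3)
f_tri = {(2, 0): 1, (2, 1): 3, (2, 2): 3}
assert h_from_f(f_tri) == {(2, 0): 1, (2, 1): 1, (2, 2): 1}

# round trip on a random triangular array (0 <= r <= i <= d)
d = 4
f = {(i, r): random.randint(0, 10) for i in range(d + 1) for r in range(i + 1)}
assert f_from_h(h_from_f(f)) == f
```

The inverse formula follows from the identity $\sum_j (-1)^j \binom{n}{j}\binom{n-j}{k-j} = \delta_{k,0}$, applied row by row in the index $i$.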
For $F \subseteq [n]$ let $\init(F)=\{k, \dots, n\}$ if $k,\dots,n \in F$, but $k-1 \not\in F$. (Here we set $\init(F)=\emptyset$ if no such $k$ exists.) Duval proved in \cite[Corollary 6.2]{DU} that if $\Gamma$ is sequentially Cohen--Macaulay, then $h_{i,r} (\Gamma)= |\{F \in \facets(\Gamma) : |\init(F)|=i-r,\ |F|=i \}|$. In particular, this formula holds for shifted simplicial complexes. Using this fact we define: \begin{defi} Let $\Gamma$ be a simplicial complex on the vertex set $[n]$ and $\shift(\cdot)$ an arbitrary algebraic shifting operation. We call $$ b^\Delta_{i,r}(\Gamma) = h_{i,r}(\shift(\Gamma)) = |\{F \in \facets(\shift(\Gamma)) : |\init(F)|=i-r,\ |F|=i \}| $$ the {\em $\Delta$-iterated Betti numbers} of $\Gamma$. \end{defi} For symmetric and exterior shifting these notions were defined and studied in \cite{BNT} and \cite{DURO}. Since $\shift(\shift(\Gamma))=\shift(\Gamma)$ we have that $$ b^\Delta_{i,r}(\Gamma) = b^\Delta_{i,r}(\shift(\Gamma)) $$ and therefore $\Delta$-iterated Betti numbers are invariant under each algebraic shifting operation $\shift(\cdot)$. As noted in \cite{BNT}, there is the following comparison result. \begin{thm} \label{thm-it-Betti} Let $\Gamma$ be a simplicial complex on the vertex set $[n]$ and $\Delta(\cdot)$ be the symmetric shifting $\Delta^s$ or the exterior shifting $\Delta^e $. Then the following conditions are equivalent: \begin{enumerate} \item $\Gamma$ is sequentially Cohen--Macaulay; \item $b^{\shift}_{i,r}(\Gamma)=h_{i,r}(\Gamma)$ for all $i,r$. \end{enumerate} In particular, if $\Gamma$ is sequentially Cohen--Macaulay, then all iterated Betti numbers of $\Gamma$ with respect to the symmetric shifting and the exterior algebraic shifting coincide. \end{thm} \begin{proof} Duval proved in \cite[Theorem 5.1]{DU} the equivalence of (i) and (ii) for exterior algebraic shifting $\shift^e$. 
But his proof works also for the symmetric shifting since he used only axioms (S1)--(S4) of an algebraic shifting operation and the fact that Theorem \ref{cmnice} holds also for the exterior shifting. \end{proof} The next result extends Theorem 6.6 of \cite{BNT} where the case of the symmetric algebraic shifting operation was studied. It follows by combining Identities \eqref{eq-comb-deg}, \eqref{eq-comb-adeg}, Proposition \ref{prop-shifted-sCM}, and Remark \ref{sdeghelper} (iii). \begin{thm} Let $\Gamma$ be a simplicial complex on the vertex set $[n]$ of dimension $d-1$ and let $\shift(\cdot)$ be any algebraic shifting operation. Then we have: \begin{enumerate} \item $ \deg \shift(\Gamma) = \sum_{r=0}^{d} b^{\shift}_{d,r}(\Gamma) = |\{F \in \facets(\shift(\Gamma)),\ \dim F=d-1 \}|. $ \item $ \sdeg \shift(\Gamma)= \adeg \shift(\Gamma) = \sum_{r,i} b^{\shift}_{i,r}(\Gamma) = |\{F \in \facets(\shift(\Gamma))\}|. $ \end{enumerate} \end{thm} The last two results suggest the following question: \begin{quest} \ What is the relationship between $b^{\shift}_{i,r}(\Gamma)$ for different shifting operations? For example, in \cite{BNT} it is conjectured that $b^{\shift^s}_{i,r}(\Gamma) \leq b^{\shift^e}_{i,r}(\Gamma)$ for all $i,r$. \end{quest} \section*{Acknowledgments} The authors would like to thank the referee for the careful reading and the very helpful comments. \end{document}
\begin{document} \title{Analysis of quantum-state disturbance in a \\protective measurement of a spin-1/2 particle} \author{Maximilian Schlosshauer} \affiliation{\small Department of Physics, University of Portland, 5000 North Willamette Boulevard, Portland, Oregon 97203, USA} \begin{abstract} A distinguishing feature of protective measurement is the possibility of obtaining information about expectation values while making the disturbance of the initial state arbitrarily small. Quantifying this state disturbance is of paramount importance. Here we derive exact and perturbative expressions for the state disturbance and the faithfulness of the measurement outcome in a model describing a spin-$\frac{1}{2}$ particle protectively measured by an inhomogeneous magnetic field. We determine constraints on the experimentally required field strengths from bounds imposed on the allowable state disturbance. We also demonstrate that the protective measurement may produce an incorrect result even when the particle's spin state is unaffected by the measurement, and show that successive measurements using multiple magnetic fields produce the same state disturbance as a single measurement involving a superposition of these fields. Our results supply comprehensive understanding of a paradigmatic model for protective measurement, may aid the experimental implementation of the measurement scheme, and illustrate fundamental properties of protective measurements. \\[-.1cm] \noindent Journal reference: \emph{Phys.\ Rev.\ A\ }\textbf{92}, 062116 (2015), DOI: \href{http://dx.doi.org/10.1103/PhysRevA.92.062116}{10.1103/PhysRevA.92.062116} \end{abstract} \pacs{03.65.Ta, 03.65.Wj} \maketitle \section{Introduction} Any quantum measurement changes (``disturbs'') the quantum state of the measured system. 
Protective measurement \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Dass:1999:az,Vaidman:2009:po,Gao:2014:cu} is a scheme for measuring expectation values of a quantum system in a way that makes this state disturbance arbitrarily small. In such a measurement, the system is coupled weakly to the apparatus, and the initial state of the system is required to be a nondegenerate eigenstate of its Hamiltonian. Neither the initial state nor the Hamiltonian need to be known. Besides being an important and unusual instance of a quantum measurement, protective measurement offers the possibility of successively carrying out measurements of expectation values of different observables while the system is likely to remain in its initial quantum state. This state may then be reconstructed from the measured expectation values with in principle arbitrarily high fidelity \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Dass:1999:az}. In this way, protective measurement may provide a route toward state tomography of single quantum systems. Two essential aspects of protective measurement remain insufficiently studied. The first is to actually quantify the state disturbance, rather than to simply consider the idealization of an infinitely weak and infinitely long measurement interaction, for which no state disturbance occurs. The second is to study concrete models for implementing protective measurements. In particular, what has not yet been done is to study the issue of state disturbance in the context of such a model, so one can better understand the physics and parameter choices that would need to go into an experiment realizing a protective measurement with minimal state disturbance. Our paper addresses this open problem. Specifically, we consider the protective measurement of the state of a spin-$\frac{1}{2}$ particle by an inhomogeneous magnetic field using a Stern--Gerlach-like setup.
This paradigmatic and experimentally relevant model was first studied in Refs.~\cite{Aharonov:1993:qa, Dass:1999:az}, but those studies merely demonstrated one basic feature of protective measurement, namely, how information about the expectation value becomes encoded in the shift of the apparatus pointer. Those studies considered only the limit of an infinitesimally weak inhomogeneous magnetic field applied for an infinite amount of time (and infinitely rapidly turned on and off), without attending to the crucial issue of state disturbance. Here, we revisit this model but develop a significantly more general account of it that allows us to precisely quantify the amount of state disturbance as a function of the relevant physical parameters. Given the importance of two-level systems (qubits) in quantum mechanics and quantum information processing, our model is of great theoretical and practical interest. Because the model is exactly solvable, we can also use it as a tool for assessing the accuracy of approximate and perturbative solutions that are needed to treat most other models of protective measurement. By applying a recently developed perturbative approach \cite{Schlosshauer:2014:pm}, we are also able to explore how time-dependent couplings between system and apparatus help reduce the state disturbance. The main result of our paper is the derivation of both exact and approximate expressions for the state disturbance predicted by the model. This has direct physical implications, because it allows us to estimate the values of the magnetic-field parameters that would be required to implement a protective measurement that has a suitably low probability of disturbing the initial state. We also show that multiple simultaneous protective measurements do not result in smaller cumulative state disturbance when compared to a successive implementation of these measurements.
More generally, our analysis identifies and illustrates properties of protective measurements and of quantum measurements in general, such as complementarity and the tradeoff between information gain and disturbance. This paper is organized as follows. In Sec.~\ref{sec:model-prot-meas} we describe the model for the protective quantum measurement of a spin-$\frac{1}{2}$ particle. In Sec.~\ref{sec:state-disturbance} we derive exact and perturbative solutions for the state disturbance incurred during the protective measurement. In Sec.~\ref{sec:quant-state-reconstr}, we compare the state disturbance for successive and simultaneous protective measurements. In Sec.~\ref{sec:pointer-shift} we explore a hitherto overlooked issue, namely, the possibility of a protective measurement's failing due to a reversed momentum shift. We discuss our results in Sec.~\ref{sec:disc-concl}. \section{\label{sec:model-prot-meas}Measurement model} In a protective measurement \cite{Aharonov:1993:qa,Aharonov:1993:jm,Aharonov:1996:fp,Dass:1999:az,Vaidman:2009:po}, the quantum system interacts weakly with a measuring apparatus via an interaction Hamiltonian given by \begin{equation}\label{eq:lalaa} \op{H}_\text{int}(t) = g(t)\op{O} \otimes \op{P}. \end{equation} Here $\op{O}$ is the observable protectively measured on the system, and $\op{P}$ is the operator that generates a corresponding shift of the pointer of the apparatus. The coupling function $g(t)$ describes the time dependence of the strength of the system--apparatus interaction and is normalized according to \begin{equation}\label{eq:lalaanorm} \int_{0}^{T} \text{d} t\, g(t) =1, \end{equation} where $t=0$ marks the onset of the protective measurement and $T$ is the total measurement time [thus $g(t)=0$ for $t<0$ and $t>T$]. It follows that the average coupling strength over the course of the measurement is equal to $1/T$ for all choices of $g(t)$. 
The normalization condition \eqref{eq:lalaanorm} not only imposes an inversely proportional relationship between this average coupling strength and the measurement time $T$, but also ensures that the total pointer shift produced by the protective measurement is independent of the particular functional form of $g(t)$ \cite{Schlosshauer:2014:pm}. Consider now the following model of a protective measurement of a spin-$\frac{1}{2}$ particle by a magnetic field, first discussed in Ref.~\cite{Aharonov:1993:jm} (see also Sec.~II\,E of Ref.~\cite{Dass:1999:az}). A spin-$\frac{1}{2}$ particle travels through a uniform magnetic field of unknown direction and magnitude. The field provides the protection of the particle's initial spin state, which is quantized along the direction of the field. Information about expectation values of spin components along different axes is obtained by introducing weak inhomogeneous magnetic fields in different directions, which produce a change in the particle's momentum (resulting in a displacement of its trajectory). The expectation values of different spin components can then be obtained from measuring these momentum shifts, and the spin state of the particle may be reconstructed from the measured expectation values. Denoting the uniform magnetic field by $\bvec{B}_0=B_0\bvec{e}_z$, the spin part of the Hamiltonian of the particle is \begin{equation}\label{eq:vshvbjfdjhvs} \op{H}_S = - \mu \bopvecgr{\sigma} \cdot \bvec{B}_0 = -\mu B_0 \op{\sigma}_z, \end{equation} where $\mu$ is the magnetic moment of the particle \footnote{It is important to emphasize that although we have specified the direction of the field $\bvec{B}_0$ to enable the subsequent mathematical description of the model, in an actual realization of the protective measurement the field's direction and magnitude will be \emph{a priori} unknown (see also the discussion in Refs.~\cite{Aharonov:1993:jm} and \cite{Dass:1999:az}). 
Indeed, an observer who knows $\bvec{B}_0$ would also know $\op{H}_S$ and could therefore perform a simple projective measurement in the eigenbasis of $\op{H}_S$ to determine the spin state, eliminating the need to perform a protective measurement.}. The eigenstates of $\op{H}_S$ are the eigenstates $\ket{\pm}$ of $\op{\sigma}_z$, with eigenvalues $E_\pm=\mp \mu B_0$ and corresponding transition frequency \begin{equation}\label{eq:vshvbjs} \omega_0 = \frac{2\mu B_0}{\hbar}. \end{equation} The particle is assumed to be in one of these two eigenstates. Between $t=0$ and $t=T$, the particle traverses a region in which an additional inhomogeneous time-dependent magnetic field \begin{equation}\label{eq:measfield} \bvec{B}_1(\bvec{x},t) = g(t) \beta q \bvec{n}, \end{equation} is present, where $\bvec{n}$ is the (known) direction, $g(t)$ is the time dependence of the field strength, and $q$ is the position coordinate in the direction of $\bvec{n}$ \footnote{As already noted in Ref.~\cite{Aharonov:1993:jm}, since Eq.~\eqref{eq:measfield} has nonzero divergence, it violates Maxwell's equations and cannot represent a real physical magnetic field. However, a suitable divergence-free inhomogeneous field is easily constructed \cite{Anandan:1993:uu}. Such a field effectively acts as a superposition of three fields of the kind given by Eq.~\eqref{eq:measfield} and leads to the same cumulative momentum shift and state disturbance (see also Sec.~\ref{sec:quant-state-reconstr}). Without loss of generality, we may therefore restrict our attention to the field defined by Eq.~\eqref{eq:measfield}.}. We refer to $\bvec{B}_0$ as the protection field and $\bvec{B}_1$ as the measurement field. The interaction Hamiltonian is taken to be \begin{equation}\label{eq:1dvhjbbdhvbdhjv} \op{H}_\text{int}(\bvec{x}, t) = - \mu \bopvecgr{\sigma} \cdot \bvec{B}_1(\bvec{x},t) = -g(t) \mu\beta (\bopvecgr{\sigma} \cdot \bvec{n}) \otimes \op{q}. 
\end{equation} Comparison with Eq.~\eqref{eq:lalaa} shows that the system observable to be protectively measured is the spin component $\bopvecgr{\sigma} \cdot \bvec{n}$ in the direction $\bvec{n}$ of the measurement field, while $\op{q}$ is the apparatus observable that couples to the spin component. The apparatus observable $\op{q}$ does not commute with the self-Hamiltonian $\op{H}_A =\bopvec{p}^2/2m$ of the apparatus associated with the phase-space degree of freedom of the particle, and therefore $\op{q}$ is not a constant of motion. Because this noncommutativity complicates the mathematical treatment while leaving unaffected the possibility or the physics of a protective measurement \cite{Dass:1999:az}, Refs.~\cite{Aharonov:1993:jm,Dass:1999:az} have considered the particle in its rest frame, such that $\op{H}_A =0$. Adopting this approach, the state of the particle for $t<0$ is \begin{equation}\label{eq:b333hjsvbhsaasas} \Psi(\bvec{x},t) = \ket{\pm}\exp \left( \pm \frac{\text{i} \mu B_0 t}{\hbar} \right) = \ket{\pm}\exp \left( \pm \frac{\text{i} \omega_0 t}{2} \right), \end{equation} and the total Hamiltonian $\op{H}(t)$ defining our model is \begin{align}\label{eq:bhjsvbhsaasas} \op{H}(\bvec{x}, t) &= \op{H}_S + \op{H}_\text{int}(\bvec{x}, t) \notag \\ &= -\mu B_0 \op{\sigma}_z -g(t) \mu\beta (\bopvecgr{\sigma} \cdot \bvec{n}) \otimes \op{q}. \end{align} \section{\label{sec:state-disturbance}State disturbance} \subsection{Constant coupling} We now derive our main result, an expression for the state disturbance of the initial spin state by the protective measurement. We first consider the time-independent coupling function (hereafter ``constant coupling'') defined by \begin{equation}\label{eq:lbivdddhv} g(t) = \begin{cases} 1/T, & 0 \le t \le T, \\ 0, & \text{otherwise}. 
\end{cases} \end{equation} Then the Hamiltonian~\eqref{eq:bhjsvbhsaasas} is time-independent, \begin{equation}\label{eq:vihdgsv} \op{H}(\bvec{x}) = - \mu\bopvecgr{\sigma} \cdot \bvec{B}(\bvec{x}), \end{equation} where \begin{equation}\label{eq:vihdgsv22} \bvec{B}(\bvec{x}) = B_0 \bvec{e}_z + \frac{1}{T} \beta q \bvec{n}, \end{equation} which shows that the strength of the measurement field scales as $1/T$. The eigenvectors of the Hamiltonian~\eqref{eq:vihdgsv} are \begin{subequations}\label{eq:huhuhu} \begin{align} \ket{+}_\bvec{x} &= \cos\frac{\theta(\bvec{x})}{2}\ket{+} + \sin\frac{\theta (\bvec{x})}{2}\text{e}^{\text{i} \phi (\bvec{x})}\ket{-}, \label{eq:huhuhu1}\\ \ket{-}_\bvec{x} &= \sin\frac{\theta(\bvec{x})}{2}\ket{+} - \cos\frac{\theta (\bvec{x})}{2}\text{e}^{\text{i} \phi (\bvec{x})}\ket{-},\label{eq:huhuhu2} \end{align} \end{subequations} with corresponding eigenvalues $E_\pm(\bvec{x})=\mp \mu B(\bvec{x})$. Here $\theta (\bvec{x})$ and $\phi (\bvec{x})$ are the polar and azimuthal angles of $\bvec{B}(\bvec{x})$. Note that $\theta (\bvec{x})$ is also the angle between $\bvec{B}(\bvec{x})$ and $\bvec{B}_0$, and that $\phi (\bvec{x})$ is equal to the (fixed) azimuthal angle $\eta$ of the field-direction vector $\bvec{n}$. If the initial spin state is $\ket{+}= \cos\frac{\theta(\bvec{x})}{2}\ket{+} _\bvec{x} + \sin\frac{\theta(\bvec{x})}{2}\ket{-}_\bvec{x}$, then at $t=T$ it is \begin{align}\label{eq:vihdgs7cf6gv} \ket{\psi(\bvec{x}, T)} &= \cos\frac{\theta(\bvec{x})}{2} \exp\left( \frac{\text{i} \mu B(\bvec{x}) T}{\hbar}\right) \ket{+} _\bvec{x} \notag \\ &\quad + \sin\frac{\theta(\bvec{x})}{2} \exp\left(- \frac{\text{i} \mu B(\bvec{x}) T}{\hbar}\right) \ket{-}_\bvec{x}. \end{align} Existing treatments of the model \cite{Aharonov:1993:jm,Dass:1999:az} have only considered the limit of (infinitely) large measurement times $T$, such that $\bvec{B}_1 \ll \bvec{B}_0$. 
Then $\theta (\bvec{x}) \ll 1$, and thus \begin{equation}\label{eq:hbvdvbhj} B(\bvec{x}) \approx \bvec{B}(\bvec{x}) \cdot \bvec{e}_z = B_0 + \frac{1}{T} \beta q \cos\gamma, \end{equation} where $\gamma$ is the polar angle of $\bvec{n}$, which is also the angle between $\bvec{B}_1(\bvec{x})$ and $\bvec{B}_0$. In this limit, the state~\eqref{eq:vihdgs7cf6gv} becomes \begin{equation} \ket{\psi(\bvec{x}, T)} \approx \exp\left( \frac{\text{i} \omega_0 T}{2} \right) \exp\left(\frac{\text{i} \mu \beta q \cos\gamma }{\hbar} \right) \ket{+} . \end{equation} This is the familiar result of protective measurement originally derived in Ref.~\cite{Aharonov:1993:jm}. The system remains, with arbitrarily large probability, in its initial state $\ket{+}$, while the term $\exp\left( \text{i} \mu \beta q \cos\gamma /\hbar\right)$ induces a change in momentum (pointer shift) in the direction of $\bvec{n}$ of size $\Delta p = \mu\beta\cos\gamma$. Since $\cos\gamma = \bra{+} \bopvecgr{\sigma} \cdot \bvec{n} \ket{+}$, the momentum shift can be written as $\Delta \bvec{p} = \mu\beta \bra{+} \bopvecgr{\sigma} \cdot \bvec{n} \ket{+} \bvec{n}$, which is proportional to the expectation value of the system observable $\bopvecgr{\sigma} \cdot \bvec{n}$ in the initial state $\ket{+}$. State disturbance manifests itself in a nonzero probability amplitude for finding the system in the state $\ket{-}$ at $t=T$. We write the final state as \begin{align}\label{eq:1fbhjsbfk4554jaa} \ket{\Psi(\bvec{x},T)} &= A_+(\bvec{x},T)\ket{+} + A_-(\bvec{x},T) \ket{-}, \end{align} where the phase-space part has been absorbed into $A_\pm(\bvec{x},T) = \braket{\pm}{\psi(\bvec{x}, T)} $. The amplitude $A_-(\bvec{x},T)$ is the probability amplitude for the transition $\ket{+} \rightarrow \ket{-}$, and we therefore take $\abs{A_-(\bvec{x},T)}^2$ as a measure for the state disturbance. 
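As an independent numerical cross-check of the preceding results (a sketch of ours with arbitrarily chosen illustrative parameter values, not part of the original treatment), one can verify with a few lines of Python that the states of Eq.~\eqref{eq:huhuhu} are indeed eigenvectors of $-\mu\bopvecgr{\sigma}\cdot\bvec{B}$ with eigenvalues $\mp\mu B$, and that $\bra{+}\bopvecgr{\sigma}\cdot\bvec{n}\ket{+}=\cos\gamma$, the proportionality underlying the pointer shift:

```python
import numpy as np

# Pauli matrices in the sigma_z eigenbasis {|+>, |->}
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

mu, B, theta, phi = 1.0, 1.3, 0.7, 0.4     # illustrative values (our choice)
Bvec = B * np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
H = -mu * (Bvec[0] * sx + Bvec[1] * sy + Bvec[2] * sz)

# eigenvectors as in the text: |+>_x and |->_x, with eigenvalues -/+ mu B
plus_x = np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * phi)])
minus_x = np.array([np.sin(theta / 2), -np.cos(theta / 2) * np.exp(1j * phi)])
assert np.allclose(H @ plus_x, -mu * B * plus_x)
assert np.allclose(H @ minus_x, +mu * B * minus_x)

# pointer-shift proportionality: <+|sigma.n|+> = cos(gamma)
gamma, eta = 0.6, 1.1
n = np.array([np.sin(gamma) * np.cos(eta),
              np.sin(gamma) * np.sin(eta),
              np.cos(gamma)])
sigma_n = n[0] * sx + n[1] * sy + n[2] * sz
plus = np.array([1.0, 0.0], dtype=complex)
assert np.isclose(np.real(plus.conj() @ sigma_n @ plus), np.cos(gamma))
```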
From Eqs.~\eqref{eq:huhuhu} and \eqref{eq:vihdgs7cf6gv}, we find \begin{equation}\label{eq:sy7syvvfvihdgs7cf6gv} A_-(\bvec{x},T)= \text{i} \text{e}^{\text{i} \eta} \sin \theta(\bvec{x}) \sin \left( \frac{\mu B(\bvec{x}) T}{\hbar}\right). \end{equation} Using Eq.~\eqref{eq:vihdgsv22}, we have \begin{align}\label{eq:vdh5478457} B(\bvec{x}) &=\sqrt{ \bvec{B}(\bvec{x}) \cdot \bvec{B}(\bvec{x}) } =B_0 \sqrt{1 + \xi^2 + 2 \xi \cos\gamma}, \end{align} where \begin{equation}\label{eq:xi} \xi=\xi(q,T)=\frac{\beta q}{B_0T} \end{equation} measures the relative field strength $B_1/B_0$, where $B_1$ has been evaluated at position $q$ along direction $\bvec{n}$. We may associate $q$ with the measured location of the particle when it has completed its passage through the measurement field. Explicitly evaluating the term $\sin \theta(\bvec{x})$ appearing in Eq.~\eqref{eq:sy7syvvfvihdgs7cf6gv}, \begin{align}\label{eq:vbdiubdfiubdfisaks} \sin \theta (\bvec{x}) &= \frac{\sqrt{ \left[\bvec{B}(\bvec{x}) \cdot \bvec{e}_x\right]^2 + \left[\bvec{B}(\bvec{x}) \cdot \bvec{e}_y\right]^2}}{B(\bvec{x})} \notag\\&= \frac{\xi \sin\gamma}{\sqrt{ 1 + \xi^2 + 2 \xi \cos\gamma}}, \end{align} the transition amplitude can be written as \begin{multline}\label{eq:sy7syvvfvihdgs7cf6gv0} A_-(\bvec{x},T) \equiv A_-(q,\gamma,T) \\ = \text{i} \text{e}^{\text{i} \eta}\frac{\mu\beta q}{\hbar} \sin\gamma\,\mathrm{sinc} \left( \frac{\omega_0 T }{2} \sqrt{1 + \xi^2 + 2 \xi \cos\gamma}\right), \end{multline} where $\mathrm{sinc}(x)=\sin(x)/x$. Note that the quantity $\xi$ appearing in Eq.~\eqref{eq:sy7syvvfvihdgs7cf6gv0} depends on $T$, $B_0$, $\beta$, and $q$ [see Eq.~\eqref{eq:xi}]. We have also explicitly added the variable $\gamma$ to the argument to indicate that the transition amplitude depends on this angle approximately as $\sin\gamma$. Note that Eq.~\eqref{eq:sy7syvvfvihdgs7cf6gv0} is zero if the sinc function is equal to zero.
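The closed-form amplitude can be validated against a direct numerical propagation of the initial state under the full Hamiltonian. The following Python sketch (our own consistency check, with arbitrarily chosen illustrative values for $\xi$, $\gamma$, $\eta$, and $T$) diagonalizes $\op{H}=-\mu\bopvecgr{\sigma}\cdot\bvec{B}$ and compares the resulting amplitude on $\ket{-}$ with Eq.~\eqref{eq:sy7syvvfvihdgs7cf6gv}:

```python
import numpy as np

# Pauli matrices in the sigma_z eigenbasis {|+>, |->}
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

hbar = mu = B0 = 1.0                       # convenient units
xi, gamma, eta, T = 0.3, 0.7, 0.4, 2.0     # illustrative values (our choice)

# total field B = B0 e_z + xi B0 n, with n = (sin(g)cos(e), sin(g)sin(e), cos(g))
n = np.array([np.sin(gamma) * np.cos(eta),
              np.sin(gamma) * np.sin(eta),
              np.cos(gamma)])
Bvec = B0 * np.array([0.0, 0.0, 1.0]) + xi * B0 * n
B = np.linalg.norm(Bvec)
H = -mu * (Bvec[0] * sx + Bvec[1] * sy + Bvec[2] * sz)

# propagate |+> for the measurement time T by diagonalizing H
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * T / hbar)) @ evecs.conj().T
psi_T = U @ np.array([1.0, 0.0], dtype=complex)
A_minus = psi_T[1]                         # amplitude on |->

# closed form: A_- = i e^{i eta} sin(theta) sin(mu B T / hbar)
root = np.sqrt(1 + xi**2 + 2 * xi * np.cos(gamma))
sin_theta = xi * np.sin(gamma) / root
A_exact = 1j * np.exp(1j * eta) * sin_theta * np.sin(mu * B * T / hbar)

assert np.isclose(B, B0 * root)            # checks Eq. for B(x)
assert np.allclose(A_minus, A_exact)       # checks the transition amplitude
```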
However, we cannot deliberately target these zeros to evade state disturbance, since it would require precise \emph{a priori} knowledge of $B_0$ and $\gamma$ that is unavailable in a protective measurement. We may therefore disregard the oscillations of $\mathrm{sinc}(x)$ and replace it by its decay envelope given by $1/x$ for $x \gtrsim 1$, such that \begin{equation}\label{eq:sydvhiuvhuis7cf6gv} A_-(\xi,\gamma)= \text{i} \text{e}^{\text{i} \eta} \sin\gamma\frac{\xi}{\sqrt{1 + \xi^2 + 2 \xi \cos\gamma}}, \end{equation} and thus the transition probability is \begin{equation}\label{eq:sydvhiuvhuii90hj232} P_-(\xi,\gamma) = \abs{A_-(\xi,\gamma)}^2 = \frac{\xi^2 \sin^2\gamma}{1 + \xi^2 + 2 \xi \cos\gamma}. \end{equation} We have written the argument as $(\xi,\gamma)$ to emphasize that the amount of state disturbance depends only on the relative strength $\xi$ of the measurement field (which is inversely proportional to $T$) and the angle $\gamma$ (which is determined by the measured observable $\bopvecgr{\sigma} \cdot \bvec{n}$). \begin{figure} \caption{(Color online) Amount of state disturbance introduced during the protective measurement as quantified by the probability $P_-(\xi,\gamma)$ of transitioning to the orthogonal state, shown for constant coupling as a function of the dimensionless parameter $\xi=\beta q/B_0T$ for three different values of $\gamma$. The parameter $\xi$ measures the strength of the measurement field $\bvec{B}_1(\bvec{x})$ relative to the strength of the protection field $\bvec{B}_0$, and $\gamma$ represents the angle between $\bvec{B}_1(\bvec{x})$ and $\bvec{B}_0$.
In addition to the probabilities calculated from the exact expression \eqref{eq:sydvhiuvhuii90hj232}, we also show the $O(1/T^2)$ approximation obtained from Eq.~\eqref{eq:bgb11j} for $\gamma=45^\circ$.} \label{fig:dist} \end{figure} Figure~\ref{fig:dist} shows the state disturbance calculated from the exact expression~\eqref{eq:sydvhiuvhuii90hj232} as a function of $\xi$ for $\gamma=22.5^\circ$, $45^\circ$, and $90^\circ$. The reduction of the state disturbance achieved by decreasing $\xi$ is clearly seen, where a decrease in $\xi$ corresponds to an increase in the measurement time $T$ with its inversely proportional effect on the strength of the measurement field [see Eq.~\eqref{eq:vihdgsv22}]. We also see how the state disturbance grows with $\gamma$ as the direction of the measured spin component moves further away from the direction of the protection field $\bvec{B}_0$. Specifically, in the worst-case scenario $\gamma=90^\circ$ for state disturbance, in order not to exceed a probability $P_\text{max}$ of state disturbance we need $\xi \le \sqrt{P_\text{max}/(1-P_\text{max})}$. It follows that $\xi=0.1$ is sufficient for remaining within a 1\% probability of transitioning to the orthogonal state, regardless of the particular value of $\gamma$ (making such thresholds hold for all possible values of $\gamma$ is important because in an experimental setting the value of $\gamma$ will not be known \emph{a priori}). Assuming weak measurement ($\xi \ll 1$), we can Taylor-expand Eq.~\eqref{eq:sydvhiuvhuis7cf6gv} to first order in $\xi$, \begin{align}\label{eq:bgb11jy78g} A_-(\xi, \gamma)&= \text{i} \text{e}^{\text{i} \eta} \xi \sin\gamma + O(\xi^2)\notag\\&= \text{i} \text{e}^{\text{i} \eta} \frac{\beta q}{B_0 T} \sin\gamma + O(1/T^2). \end{align} To second order in $1/T$, the corresponding transition probability is therefore \begin{equation}\label{eq:bgb11j} P_-(\xi, \gamma) \approx \xi^2\sin^2\gamma = \left(\frac{\beta q}{B_0}\right)^2 \frac{\sin^2\gamma }{T^2}. 
\end{equation} This expression exhibits the $1/T^2$ dependence familiar from first-order perturbation theory \cite{Schlosshauer:2014:tp} and the sinusoidal dependence on $\gamma$. It is shown in Fig.~\ref{fig:dist} (dashed-dotted line) as a function of $\xi$ for $\gamma=45^\circ$. We see that it represents an excellent approximation to the exact transition probability given by Eq.~\eqref{eq:sydvhiuvhuii90hj232} for the case $\xi \ll 1$ relevant to protective measurement. For larger $\xi \lesssim 1$, it produces a small overestimate of the state disturbance. Let us investigate the extent to which the weak-measurement condition of small $\xi$ may hold in a concrete experimental setting. We consider a Stern--Gerlach experiment based on evaporated potassium atoms ($\mu = \unit[9.3 \times 10^{-24}]{J/T}$) and a movable hot-wire detector, a common modern implementation \cite{Daybell:1967:sg}, and estimate the required inhomogeneity of the measurement field to achieve an appreciable displacement. The magnitude of the momentum shift is $\Delta p = \mu\beta \cos\gamma$, and thus the force on the particle due to the measurement field is $F = \mu(\beta /T) \cos\gamma$, where $\beta/T$ corresponds to the measurement-field gradient $\abs{\mbox{\boldmath$\nabla$} B_1}$. With a typical oven temperature $T_\text{oven}=\unit[500]{K}$, the most probable velocity of a potassium atom emitted from the oven is $v=\sqrt{2 k_BT_\text{oven}/m} \approx \unit[450]{m/s}$. Then, writing the transit time as $T=d/v$, where $d$ is the length of the measurement region, the spatial displacement in the direction $\mbox{\boldmath$\nabla$} B_1$ of the inhomogeneity is \begin{equation} \Delta s = \frac{\mu\beta \cos\gamma}{2m T} T^2 = \frac{\mu \abs{\mbox{\boldmath$\nabla$} B_1}\cos\gamma}{2mv^2} d^2 = \frac{\mu \abs{\mbox{\boldmath$\nabla$} B_1}\cos\gamma}{4 k_BT_\text{oven}} d^2.
\end{equation} To achieve a displacement $\Delta s = \unit[0.5]{mm}$, the required measurement-field gradient (setting $\gamma =45^\circ$ from here on) is $\mbox{\boldmath$\nabla$} B_1 \approx \unit[20]{T/m}$, which is a typical value in a Stern--Gerlach experiment. Experimentally achievable strengths of a continuous uniform magnetic field are around $\unit[10]{T}$. For $B_0=\unit[10]{T}$, the weak-measurement condition of small $\xi$, which is here given by $(\mbox{\boldmath$\nabla$} B_1) d/B_0$, is reasonably well fulfilled, since $(\mbox{\boldmath$\nabla$} B_1) d/B_0 \approx 0.2$. From Eq.~\eqref{eq:bgb11j}, this value implies a transition probability of 2\%. Improvement is possible by increasing the size $d$ of the measurement region, since $\Delta s \propto d^2$ while $\xi$ increases only linearly with $d$. For example, for $d=\unit[1]{m}$, the same displacement $\Delta s = \unit[0.5]{mm}$ requires only $\mbox{\boldmath$\nabla$} B_1 \approx \unit[0.2]{T/m}$. We may then lower the uniform field to $B_0=\unit[1]{T}$ while maintaining the previous values for the field ratio $(\mbox{\boldmath$\nabla$} B_1) d/B_0 \approx 0.2$ and the state disturbance. Alternatively, if we maintain the uniform field at $B_0=\unit[10]{T}$, the field ratio is reduced to 0.02, leading to a 100-fold reduction in state disturbance. One experimental challenge will be to supply a sufficiently strong uniform magnetic field with a small but well-defined inhomogeneity over an extended region in space. Furthermore, since the magnitude of the displacement depends on the atomic velocities, which follow a thermal distribution upon the emission of the atoms from the oven, resolving the $\cos\gamma$ dependence of the momentum shift will necessitate a selection stage that produces atoms with a narrow range of velocities and directions. 
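The simple estimates above can be reproduced numerically. The following Python sketch is a rough cross-check of ours (the potassium mass is an assumed value not quoted in the text): it confirms the worst-case threshold $\xi_\text{max}=\sqrt{P_\text{max}/(1-P_\text{max})}$, the most probable oven speed of roughly $\unit[450]{m/s}$, and the quadratic growth of the displacement $\Delta s$ with the size $d$ of the measurement region:

```python
import numpy as np

# worst-case (gamma = 90 deg) disturbance P = xi^2/(1+xi^2) -> xi_max
P_max = 0.01
xi_max = np.sqrt(P_max / (1 - P_max))
assert np.isclose(xi_max**2 / (1 + xi_max**2), P_max)
assert 0.1 <= xi_max < 0.11      # so xi = 0.1 keeps the disturbance within 1%

# most probable oven speed v = sqrt(2 kB T_oven / m) for potassium
kB = 1.380649e-23                # J/K
m_K = 39.0983 * 1.66054e-27      # kg (assumed potassium-39 atomic mass)
T_oven = 500.0                   # K
v = np.sqrt(2 * kB * T_oven / m_K)
assert 440 < v < 475             # ~450 m/s, as quoted in the text

# displacement Delta s = mu |grad B1| cos(gamma) d^2 / (4 kB T_oven)
mu = 9.3e-24                     # J/T
def delta_s(grad_B1, d, gamma=np.pi / 4):
    return mu * grad_B1 * np.cos(gamma) * d**2 / (4 * kB * T_oven)

# quadratic growth with the region size d, while xi grows only linearly with d
assert np.isclose(delta_s(20.0, 0.2), 4 * delta_s(20.0, 0.1))
```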
Experimental challenges of this kind notwithstanding, our numerical estimates suggest that the protective measurement considered in this paper may be experimentally realizable, at least in principle, using a standard Stern--Gerlach experiment with a superposed strong uniform magnetic field of practically achievable strength. \subsection{\label{sec:state-dist-time}State disturbance for time-dependent measurement fields} Explicitly time-dependent couplings $g(t)$ between system and apparatus, such as those describing a gradual turn-on and turnoff of the measurement interaction, are of great relevance to protective measurement, since they allow for a significant reduction of the state disturbance compared to the case of constant coupling \cite{Schlosshauer:2014:pm}. Here we illustrate this reduction in the context of our model. We consider the interaction Hamiltonian~\eqref{eq:1dvhjbbdhvbdhjv} as a time-dependent perturbation to $\op{H}_S =-\mu B_0 \op{\sigma}_z$ and express the state-vector amplitudes $A_\pm(\bvec{x},T)$ appearing in Eq.~\eqref{eq:1fbhjsbfk4554jaa} as a perturbative series (Dyson series), $A_\pm(\bvec{x},T) = \sum_{\ell=0}^\infty A_\pm^{(\ell)}(\bvec{x},T)$. Here $A^{(\ell)}_\pm(\bvec{x},T)$ is the expression for the $\ell$th-order correction to the zeroth-order amplitudes $A^{(0)}_+(\bvec{x},T)=1$ and $A^{(0)}_-(\bvec{x},T)=0$. For protective measurements, the first-order transition amplitude $A_\pm^{(1)}(\bvec{x},T)$ is a reliable measure of the state disturbance \cite{Schlosshauer:2014:pm}. Applying the formalism of Ref.~\cite{Schlosshauer:2014:pm} to our model, we find \begin{align}\label{eq:8dhj7gr7ss82} A_-^{(1)}(q,\gamma,T) &= \text{i} \text{e}^{-\text{i} \omega_0 T/2} \frac{\mu\beta q}{\hbar} \,\text{e}^{\text{i} \eta} \sin\gamma \int_{0}^{T} \text{d} t\, \text{e}^{\text{i} \omega_0 t} g(t). 
\end{align} For the case of constant coupling considered before, this becomes (again disregarding the oscillations of the sinc function) \begin{align}\label{eq:bdghv7vgh7g87vgy} A_-^{(1)}(q,\gamma,T) &= \text{i} \frac{\mu\beta q}{\hbar} \sin\gamma \,\text{e}^{\text{i} \eta}\frac{1}{\omega_0 T/2} = \text{i} \text{e}^{\text{i} \eta}\xi \sin\gamma, \end{align} which is the same as Eq.~\eqref{eq:bgb11jy78g}, the approximation of Eq.~\eqref{eq:sy7syvvfvihdgs7cf6gv0} for weak measurement fields. Indeed, by evaluating the higher-order corrections $A_-^{(\ell)} (q,\gamma,T)$, one finds that $A_-^{(\ell)} (q,\gamma,T)$ is equal to the term of order $\ell$ in $\xi$ in the Taylor expansion of the exact solution for $A_-(q,\gamma,T)$, Eq.~\eqref{eq:sy7syvvfvihdgs7cf6gv0}. This agreement can be explained by noting that the Dyson series is a power series in the perturbation-strength parameter, here represented by $\xi$. \begin{figure} \caption{(Color online) (a) Raised-cosine function defined by Eq.~\eqref{eq:jfkhjkvhjkvhjkv11881} and the ``optimized'' coupling function defined by Eq.~\eqref{eq:bvdhkjbvd11}. Both coupling functions describe a gradual turn-on and turnoff of the measurement field. The horizontal axis is in units of the measurement time $T$ and the vertical axis is in units of $1/T$. (b) Corresponding reduction of state disturbance. We plot the ratio $p_-=P_-^{(1)}/P_-^\text{const}$ of the transition probability $P_-^{(1)}$ for each coupling to the transition probability $P_-^\text{const}$ for constant coupling, shown as a function of the dimensionless parameter $\omega_0 T$.} \label{fig:rc} \end{figure} To illustrate the reduction of the state disturbance obtained for time-dependent couplings, we consider two examples. 
The raised-cosine function was already studied, more generally, in Ref.~\cite{Schlosshauer:2014:pm} and is defined by \begin{equation}\label{eq:jfkhjkvhjkvhjkv11881} g(t)=\frac{1}{T}\left[ 1+\cos\left(\frac{2\pi (t-T/2)}{T}\right)\right] \qquad \text{for $0 \le t \le T$}, \end{equation} and $g(t)=0$ otherwise [Fig.~\ref{fig:rc}(a)]. From Eq.~\eqref{eq:8dhj7gr7ss82}, the first-order transition amplitude is \begin{align}\label{eq:8dhj7gr7ss82aaa} A_-^{(1)}(q,\gamma,T) &= \text{i} \frac{\mu\beta q}{\hbar}\sin\gamma \,\text{e}^{\text{i} \eta} \,\frac{\mathrm{sinc}(\omega_0 T/2)}{1-(\omega_0 T/ 2\pi)^2}. \end{align} In a protective measurement, $\omega_0 T$ is chosen to be large to minimize the state disturbance (since the sinc function decays as the inverse of its argument). In this case the damping factor is well approximated by $-(\omega_0 T/ 2\pi)^{-2}$. Disregarding as usual the oscillations represented by the sinc function, Eq.~\eqref{eq:8dhj7gr7ss82aaa} becomes \begin{align}\label{eq:8dhj7g117gr82as78978} A_-^{(1)}(q,\gamma,T) &\approx - \text{i} \frac{\mu\beta q}{\hbar}\sin\gamma \,\text{e}^{\text{i} \eta} \frac{\pi^2}{(\omega_0 T/ 2)^3}. \end{align} This expression scales as $1/T^3$, which is to be compared to the $1/T$ scaling for constant coupling [see Fig.~\ref{fig:rc}(b)]. We can further reduce the state disturbance by making the turn-on and turnoff of the raised-cosine function \eqref{eq:jfkhjkvhjkvhjkv11881} even more gradual, \begin{align}\label{eq:bvdhkjbvd11} g(t) &= \frac{1}{T}\biggl[ 1 +\frac{4}{3}\cos\biggl(\frac{2\pi (t-T/2)}{T}\biggr) \notag \\ &\quad +\frac{1}{3}\cos\biggl(\frac{4\pi (t-T/2)}{T}\biggr)\biggr] \qquad \text{for $0 \le t \le T$}, \end{align} and $g(t)=0$ otherwise [Fig.~\ref{fig:rc}(a)]. 
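The damping factors for both couplings can be verified by direct numerical evaluation of the coupling integral $\int_0^T \text{d} t\, \text{e}^{\text{i}\omega_0 t} g(t)$ appearing in Eq.~\eqref{eq:8dhj7gr7ss82}. The following sketch (our own check, not part of the derivation) confirms the closed-form raised-cosine result and the $4\pi^4/(\omega_0 T/2)^5$ envelope of the more gradual coupling evaluated at a point where $|\sin(\omega_0 T/2)|=1$.

```python
import numpy as np

def g_raised_cosine(t, T):
    # Raised-cosine coupling on [0, T], Eq. (jfkhjkvhjkvhjkv11881).
    return (1.0 / T) * (1.0 + np.cos(2.0 * np.pi * (t - T / 2.0) / T))

def g_optimized(t, T):
    # Coupling with a more gradual turn-on/turnoff, Eq. (bvdhkjbvd11).
    s = 2.0 * np.pi * (t - T / 2.0) / T
    return (1.0 / T) * (1.0 + (4.0 / 3.0) * np.cos(s) + (1.0 / 3.0) * np.cos(2.0 * s))

def coupling_integral(g, omega0, T, n=2_000_000):
    # Midpoint rule for the integral of exp(i*omega0*t) * g(t) over [0, T].
    dt = T / n
    t = (np.arange(n) + 0.5) * dt
    return np.sum(np.exp(1j * omega0 * t) * g(t, T)) * dt

T = 1.0

# Raised cosine: the integral equals
# exp(i*omega0*T/2) * sinc(omega0*T/2) / [1 - (omega0*T / 2*pi)**2].
# (np.sinc(x/pi) is the unnormalized sinc, sin(x)/x.)
for omega0 in [0.5, 5.0, 9.0]:      # stay away from the removable pole omega0*T = 2*pi
    x = omega0 * T / 2.0
    analytic = np.exp(1j * x) * np.sinc(x / np.pi) / (1.0 - (x / np.pi) ** 2)
    assert abs(coupling_integral(g_raised_cosine, omega0, T) - analytic) < 1e-8

# Gradual coupling: for omega0*T >> 1 the envelope decays as
# 4*pi**4 / (omega0*T/2)**5; at x = pi/2 + 20*pi we have |sin x| = 1,
# so the envelope is attained (up to O(1/x^2) corrections).
x = np.pi / 2.0 + 20.0 * np.pi
numeric = abs(coupling_integral(g_optimized, 2.0 * x / T, T))
envelope = 4.0 * np.pi**4 / x**5
assert abs(numeric / envelope - 1.0) < 0.05
```

The first loop reproduces the exact raised-cosine damping factor; the final assertion exhibits the $1/T^5$ scaling of the first-order amplitude for the more gradual coupling.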
For $\omega_0 T \gg 1$ and disregarding the oscillations of the sinc function, we find \begin{equation}\label{eq:bvdhkjbvd20000} A_-^{(1)}(q,\gamma,T) \approx -\frac{\mu\beta q}{\hbar} \,\text{e}^{\text{i} \eta} \sin\gamma \frac{4\pi^4}{(\omega_0 T/2)^5}, \end{equation} which scales as $1/T^{5}$, a significant additional reduction of the state disturbance compared to the $1/T^3$ dependence for the raised-cosine function [see Fig.~\ref{fig:rc}(b)]. The condition $\omega_0T\gg 1$ is easily achieved in an experimental setting. As an example, we take again the Stern--Gerlach experiment with potassium atoms. For $B_0=\unit[1]{T}$, $\omega_0^{-1}$ is on the order of $\unit[10^{-11}]{s}$, which is to be compared to the time $T$ required for the atom to traverse the region over which the measurement field is applied. For a width $d=\unit[0.1]{m}$ of this region and an atomic velocity $v = \unit[450]{m/s}$, we have $T \approx \unit[0.2]{ms}$, which is seven orders of magnitude larger than $\omega_0^{-1}$. \section{\label{sec:quant-state-reconstr}Successive versus simultaneous protective measurements} To reconstruct a quantum state, we need to protectively measure multiple observables. To minimize the total state disturbance, should we measure those observables successively or simultaneously? One might expect that a simultaneous measurement will be superior, because for successive measurements the disturbed state produced by an earlier measurement will become the initial state for a subsequent measurement, thus propagating the error. Here we show that, for our model, this is not so: both methods will result in the same state disturbance. State reconstruction requires the protective measurement of three spin directions. 
Consider the measurement fields $\bvec{B}_1^{(k)}(\bvec{x}) = \frac{1}{T} \beta_k q_k \bvec{n}_k$ for orthogonal directions $\bvec{n}_k$, with $k=1,2,3$ (we assume constant coupling), and also consider the combined field $\bvec{B}_1(\bvec{x}) = \sum_{k=1}^3 \bvec{B}_1^{(k)}(\bvec{x})$. The first-order transition amplitude is now found to be \begin{equation}\label{eq:huhu44} A_-^{(1)}(\bvec{x},T) = \frac{\text{i}}{\hbar} \mu \left( \sum_{k=1}^3 \beta_k q_k \sin\gamma_k \,\text{e}^{\text{i} \eta_k}\right) \mathrm{sinc}\left( \frac{\omega_0 T}{2}\right), \end{equation} where $\gamma_k$ is the angle between $\bvec{n}_k$ and $\bvec{B}_0$, and $\eta_k$ is the azimuthal angle of $\bvec{n}_k$. This is simply a sum over the first-order transition amplitudes~\eqref{eq:8dhj7gr7ss82} evaluated separately for the three measurement fields $\bvec{B}_1^{(k)}(\bvec{x})$. Equation~\eqref{eq:huhu44} is to be compared to the transition amplitude for a measurement procedure consisting of three successive protective measurements. We model this procedure as a single protective measurement of duration $3T$ with interaction Hamiltonian \begin{multline}\label{eq:1dvhjaabbdhvbdhjv} \op{H}_\text{int}(\bvec{x}, t) = \\ \begin{cases} -\frac{1}{T} \mu\beta_k (\bopvecgr{\sigma} \cdot \bvec{n}_k) \otimes \op{q}_k, & (k-1)T \le t \le kT, \\ & \quad k=1,2,3, \\ 0, & \text{otherwise}. \end{cases} \end{multline} The corresponding first-order transition amplitude is \begin{align}\label{eq:huhu445} A_-^{(1)}(\bvec{x},T) &= \frac{\text{i}}{\hbar} \mu \left( \sum_{k=1}^3\beta_k q_k \sin\gamma_k \,\text{e}^{\text{i} \eta_k}\,\text{e}^{\text{i} (k-2) \omega_0 T}\right) \notag \\ &\qquad \times \,\mathrm{sinc}\left( \frac{\omega_0 T}{2}\right), \end{align} which, apart from the relative phase factors $\text{e}^{\text{i} (k-2) \omega_0 T}$, is the same as Eq.~\eqref{eq:huhu44}. 
These phase factors describe rapid oscillations as a function of $T$ and can therefore be considered experimentally irrelevant in the same way as we have disregarded the oscillations of the sinc function. Thus, we have confirmed that successive and simultaneous protective measurements introduce identical amounts of state disturbance in our model. \section{\label{sec:pointer-shift}Momentum-shift reversals} Finally, let us explore a hitherto overlooked caveat of protective measurement: even if no state disturbance occurs, the measured direction of the momentum shift may be reversed, resulting in a reconstructed state that can differ drastically from the initial state. Consider the amplitude $A_+(\bvec{x},T) = \braket{+}{\psi(\bvec{x}, T)}$ for the system to be found in the initial state $\ket{+}$ at the conclusion of the measurement. Assuming constant coupling, Eq.~\eqref{eq:vihdgs7cf6gv} gives \begin{align}\label{eq:sy658hhgvfvihdgs7cf6gv} A_+(\bvec{x},T) &= \frac{1+\cos\theta(\bvec{x})}{2} \exp\left( \frac{\text{i} \mu B(\bvec{x}) T}{\hbar}\right) \notag \\ &\quad +\frac{1-\cos\theta(\bvec{x})}{2} \exp\left(- \frac{\text{i} \mu B(\bvec{x}) T}{\hbar}\right). \end{align} Using $B(\bvec{x}) \approx B_0 + \frac{\beta q}{T} {\bra{+} \bopvecgr{\sigma} \cdot \bvec{n} \ket{+}}$ [see Eq.~\eqref{eq:hbvdvbhj}], Eq.~\eqref{eq:sy658hhgvfvihdgs7cf6gv} shows that the state $\ket{+}$ in the final state vector is associated with a superposition of opposite momentum shifts $\pm \Delta \bvec{p} = \pm \mu\beta \bra{+} \bopvecgr{\sigma} \cdot \bvec{n} \ket{+} \bvec{n}$. Only the state corresponding to $+\Delta \bvec{p}$, however, represents the correct pointer shift. 
From Eq.~\eqref{eq:sy658hhgvfvihdgs7cf6gv}, and using that \begin{align} \cos \theta (\bvec{x}) &= \frac{\bvec{B}(\bvec{x}) \cdot \bvec{e}_z}{B(\bvec{x})} = \frac{1 + \xi \cos\gamma}{ \sqrt{1 + \xi^2 + 2 \xi \cos\gamma}}, \end{align} the probability for a measurement of the particle's momentum shift to yield the incorrect value $-\Delta \bvec{p}$ conditional on the system's being found in the state $\ket{+}$ is, to lowest order in $\xi$, \begin{align}\label{eq:djdjzaaa} P (\xi, \gamma) \approx \left(\frac{1}{2}\xi\sin\gamma\right)^4. \end{align} Because this probability is of order $O(\xi^4)$, it is typically negligible. The fact that the probability amplitude corresponding to Eq.~\eqref{eq:djdjzaaa} is of order $1/T^2$ explains why it was missed not only in the zeroth-order limit of infinitely large $T$, but also in the first-order treatment of Refs.~\cite{Schlosshauer:2014:pm,Schlosshauer:2014:tp}. Even though the probability is typically small, we may still ask what the consequence of measuring $-\Delta \bvec{p}$ instead of $+\Delta \bvec{p}$ would be, as far as state reconstruction is concerned. The reversed momentum shift translates to the expectation value of $\bopvecgr{\sigma} \cdot \bvec{n}$ in the orthogonal state $\ket{-}$. Suppose that one of three successive protective measurements of observables $\bopvecgr{\sigma} \cdot \bvec{n}_i$ has resulted in a reversed momentum shift, and take $\bopvecgr{\sigma} \cdot \bvec{n}_3$ to be this failed measurement. Then the expectation value of $\bopvecgr{\sigma} \cdot \bvec{n}_3$ in the initial state $\ket{+}$ determined from the momentum shift would be $-\cos\gamma$, where $\gamma$ is the angle between $\bvec{n}_3$ and $\bvec{B}_0$. 
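The lowest-order expression~\eqref{eq:djdjzaaa} can be checked numerically against the exact branch weight: the reversed-shift branch in Eq.~\eqref{eq:sy658hhgvfvihdgs7cf6gv} carries the weight $(1-\cos\theta)/2$, whose square should approach $(\xi\sin\gamma/2)^4$ as $\xi\to 0$. A small sketch (our own check, with arbitrarily chosen angles):

```python
import math

# Sanity check of P(xi, gamma) ~ (xi * sin(gamma) / 2)**4: the weight of the
# reversed-shift branch is (1 - cos(theta))/2 with
# cos(theta) = (1 + xi*cos(gamma)) / sqrt(1 + xi**2 + 2*xi*cos(gamma)),
# and its square is the probability of measuring -Delta p, to lowest order.

def prob_reversed(xi, gamma):
    c = math.cos(gamma)
    cos_theta = (1.0 + xi * c) / math.sqrt(1.0 + xi * xi + 2.0 * xi * c)
    return ((1.0 - cos_theta) / 2.0) ** 2

xi = 1e-3
for gamma in [0.3, 0.7, 1.2]:
    exact = prob_reversed(xi, gamma)
    approx = (0.5 * xi * math.sin(gamma)) ** 4
    assert abs(exact / approx - 1.0) < 1e-2   # agreement up to O(xi) corrections
```

For $\xi = 10^{-3}$ the exact and approximate probabilities agree to better than one percent, consistent with the neglected corrections being of relative order $\xi$.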
In the $\{\ket{+},\ket{-}\}$ basis, the corresponding density matrix reconstructed from the three protective measurements of $\bopvecgr{\sigma} \cdot \bvec{n}_i$ would then be \begin{align}\label{eq:djd7g8ygt8j} \op{\rho}= \begin{pmatrix} \sin^2\gamma & -\text{i}\sin\gamma\cos\gamma\,\text{e}^{-\text{i}\eta} \\ \text{i}\sin\gamma\cos\gamma\,\text{e}^{\text{i}\eta} & \cos^2\gamma \end{pmatrix}, \end{align} rather than $\op{\rho}= \ketbra{+}{+}=\left(\begin{smallmatrix} 1&0\\0&0 \end{smallmatrix}\right)$, where $\eta$ is the azimuthal angle of $\bvec{n}_3$. The pure state corresponding to Eq.~\eqref{eq:djd7g8ygt8j} is $\ket{\psi}=\sin\gamma\ket{+}+\text{i} \text{e}^{\text{i}\eta}\cos\gamma\ket{-}$. For example, if $\gamma=45^\circ$, then $\ket{\psi}=2^{-1/2} \left(\ket{+}+\text{i} \text{e}^{\text{i}\eta}\ket{-}\right)$, an equal-weight superposition of $\ket{+}$ and $\ket{-}$. The functional dependence of the fidelity $F(\op{\rho},\ket{+})=\sqrt{\bra{+}\op{\rho}\ket{+}}=\sin \gamma$ may be understood as follows. If $\gamma=0$, then the failed measurement is in the direction of the quantization axis of the initial spin state $\ket{+}$ and indicates the expectation value $-1$ associated with the state $\ket{-}$, while the expectation values of the protective measurements in the two orthogonal directions are zero. Thus the conjunction of these three measurements would lead one to conclude that the system's state must be $\ket{-}$, and therefore the fidelity will be zero. Conversely, if $\gamma=90^\circ$, then the expectation value of the protective measurement along this direction is zero, and thus a sign flip of this expectation value leaves the fidelity unaffected. \section{\label{sec:disc-concl}Discussion and conclusions} Physically realizable protective measurements inevitably disturb the initial state of the system, implying a nonzero probability for the measurement to fail. 
Fundamentally, this disturbance is rooted in the tradeoff between quantum-state disturbance and information gain in a quantum measurement \cite{Fuchs:1996:op}, as well as in the fact that the maximum possible information gain does not depend on the particular implementation of the quantum measurement \cite{Ariano:1996:om}. To determine the likelihood of success of a protective measurement and the fidelity of quantum-state reconstruction based on protective measurements, one needs to be able to quantify the state disturbance, as well as the faithfulness of the measurement outcome. In this paper we have analyzed these issues in the context of a concrete model that may be experimentally realizable using a setup of the Stern--Gerlach type. While this model has been used previously \cite{Aharonov:1993:qa, Dass:1999:az} to illustrate basic features of protective measurement, the essential issues studied in this paper had not yet been considered. One of our main results is that if the strength of the weak inhomogeneous magnetic field producing the measurement interaction does not vary in time during the measurement interval, then the amount of disturbance of the initial state is completely quantified by two parameters. The first parameter is the ratio between the measurement field and the strong uniform magnetic field providing the protection of the initial state. The second parameter is the angle between the unknown direction of the protection field and the experimentally chosen direction of the measurement field. We found that the transition probability shows, to good approximation, a quadratic dependence on the field-strength parameter and a sinusoidal dependence on the angle parameter. Thus, weakening the measurement field reduces the state disturbance, despite the fact that the measurement time $T$ is simultaneously increased by the same factor (this is so because the measurement field is inversely proportional to $T$). 
The increase of the state disturbance as a function of the angle parameter can be understood from the complementarity principle, since increasing the angle corresponds to measuring a spin component in a direction further away from the direction represented by the quantization axis of the initial spin state. The state disturbance can be reduced not only by decreasing the relative strength of the measurement field, but also by turning the field on and off in a gradual fashion, as was already shown more generally in Ref.~\cite{Schlosshauer:2014:pm}. Our results illustrate that such a gradual turn-on and turnoff accomplishes a much more effective reduction of the state disturbance than could be achieved by merely making the measurement field weaker. Although it is in principle easy to realize any desired smooth time dependence of the measurement field by gradually changing the current in the electromagnet, the experimental challenge lies in appropriately timing the field such that it is gradually turned on just as the particle enters the measurement region. This also requires that only a single particle traverses the measurement field at any given time. Among other measures, reaching the single-particle regime may require a suitably narrow collimating slit for minimizing the particle flux issuing from the source. For a protective measurement to be successful, it must not only leave the system in its initial state at the conclusion of the measurement; the shift of the apparatus pointer must also be a faithful representation of the expectation value of the measured observable in this initial state. Our analysis shows that the issues of state disturbance and faithful pointer shift (here realized as a momentum transfer) are distinct and require individual attention. 
In particular, we found that even when the system is left in its initial state at the end of the measurement interaction, the measurement may still result in the wrong pointer shift (albeit with a probability proportional to $1/T^4$ that is likely to be negligibly small in practice). This pointer shift corresponds to a change in the particle's momentum in the opposite direction from the direction associated with the expectation value of the measured spin component in the initial quantum state. As we have shown, such an error can have severe consequences for the fidelity of the quantum-state reconstruction. If one uses a measurement field that is inhomogeneous in all three directions in space, one can correspondingly impart a momentum shift with three distinct spatial components. Measuring these components provides the same information as gained from successive protective measurements using three different (nonphysical) measurement fields that are inhomogeneous in only one direction. We showed that the resulting state disturbance is also the same, in agreement with what one would expect from the general relationship between information gain and disturbance in a quantum measurement. This suggests a more general result concerning protective measurements, namely, that multiple successive measurements are equivalent, both in terms of the resulting pointer shifts and the cumulative state disturbance, to carrying out the same measurements simultaneously. Nonetheless, in a concrete experimental situation one of these two possible measurement procedures may be easier to realize. For example, in the spin measurement considered in this paper, a simultaneous implementation using a measurement field that is inhomogeneous in all three directions in space is not only the physically relevant case (since the field must be divergence-free), but also makes it unnecessary to arrange three separate Stern--Gerlach apparatuses. 
Despite several promising theoretical proposals \cite{Aharonov:1993:jm,Anandan:1993:uu,Nussinov:1998:yy,Dass:1999:az}, the experimental realization of protective measurements is still an open challenge. Our analysis indicates that an implementation of the measurement scheme studied in this paper may be within the parameter regime of existing Stern--Gerlach experiments, such as those based on a beam of evaporated potassium atoms. Both the dimensions of the apparatus and the field inhomogeneities typically found in such experiments are suited for producing an appreciable, macroscopic pointer shift, although meeting the weak-measurement condition will require a sufficiently strong uniform magnetic field in the vicinity of \unit[1--10]{T}. While a more careful estimate may need to include consideration of experimental imperfections and other factors, our aim here was to focus on the issue of state disturbance and to show how it constrains the experimental parameters. While the experimental challenges are considerable, our analysis suggests that an implementation of the measurement scheme studied here may well be feasible in the near future. \end{document}
\begin{document} \date\today \title[A note on exhaustion of hyperbolic complex manifolds]{A note on exhaustion of hyperbolic complex manifolds} \author{Ninh Van Thu\textit{$^{1,2}$} and Trinh Huy Vu\textit{$^{1}$}} \address{Ninh Van Thu} \address{\textit{$^{1}$}~Department of Mathematics, Vietnam National University, Hanoi, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam} \address{\textit{$^{2}$}~Thang Long Institute of Mathematics and Applied Sciences, Nghiem Xuan Yem, Hoang Mai, HaNoi, Vietnam} \email{[email protected]} \address{Trinh Huy Vu} \address{\textit{$^{1}$}~Department of Mathematics, Vietnam National University at Hanoi, 334 Nguyen Trai str., Hanoi, Vietnam} \email{[email protected]} \subjclass[2010]{Primary 32H02; Secondary 32M05, 32F18.} \keywords{Hyperbolic complex manifold, exhausting sequence, $h$-extendible domain} \begin{abstract} The purpose of this article is to investigate a hyperbolic complex manifold $M$ exhausted by a pseudoconvex domain $\Omega$ in $\mathbb C^n$ via an exhausting sequence $\{f_j\colon \Omega\to M\}$ such that $f_j^{-1}(a)$ converges to a boundary point $\xi_0 \in \partial \Omega$ for some point $a\in M$. \end{abstract} \maketitle \section{Introduction} Let $M$ and $\Omega$ be two complex manifolds. One says that \emph{$\Omega$ can exhaust $M$} or \emph{$M$ can be exhausted by $\Omega$} if for any compact subset $K$ of $M$ there is a holomorphic embedding $f_K \colon \Omega \to M$ such that $f_K(\Omega)\supset K$. In particular, one says that \emph{$M$ is a monotone union of $\Omega$} via a sequence of holomorphic embeddings $f_j\colon \Omega\to M$ if $f_j(\Omega)\subset f_{j+1}(\Omega)$ for all $j$ and $M=\bigcup_{j=1}^\infty f_j(\Omega)$ (see \cite{FS77, Fr83}). By \cite[Theorem $1$]{Fr86}, there exists a bounded domain $D$ in $\mathbb C^n$ such that $D$ can exhaust any domain in $\mathbb C^n$. 
In addition, the unit ball $\mathbb B^n$ in $\mathbb C^n$ can exhaust many complex manifolds, which are not biholomorphically equivalent to each other (see \cite{For04, FS77}). However, if $M$ in addition is hyperbolic, then $M$ must be biholomorphically equivalent to $\mathbb B^n$ (cf. \cite{FS77}). Furthermore, any $n$-dimensional hyperbolic complex manifold, exhausted by a homogeneous bounded domain $D$ in $\mathbb C^n$, is biholomorphically equivalent to $D$. As a consequence, although the polydisc $\mathbb U^n$ and the unit ball $\mathbb B^n$ are both homogeneous, and although there is a domain $U$ in $\mathbb B^n$ that contains almost all of $\mathbb B^n$, i.e., $\mathbb B^n\setminus U$ has measure zero (cf. \cite[Theorem $1$]{FS77}), and is biholomorphically equivalent to $\mathbb U^n$, the polydisc $\mathbb U^n$ cannot exhaust the unit ball $\mathbb B^n$, since it is well known that $\mathbb U^n$ is not biholomorphically equivalent to $\mathbb B^n$. Let $M$ be a hyperbolic complex manifold exhausted by a bounded domain $\Omega\subset \mathbb C^n$ via an exhausting sequence $\{f_j\colon \Omega\to M\}$. Let us fix a point $a\in M$. Then, thanks to the boundedness of $\Omega$, without loss of generality we may assume that $f_j^{-1}(a)\to p\in \overline{\Omega}$ as $j\to \infty$. If $p\in \Omega$, then $M$ is always biholomorphically equivalent to $\Omega$ (cf. Lemma \ref{orbitinside} in Section \ref{S2}). The purpose of this paper is to investigate such a complex manifold $M$ with $p\in \partial \Omega$. More precisely, our first main result is the following theorem. \begin{theorem}\label{togetmodel} Let $M$ be an $(n+1)$-dimensional hyperbolic complex manifold and let $\Omega$ be a pseudoconvex domain in $\mathbb{C}^{n+1}$ with $C^\infty$-smooth boundary. Suppose that $M$ can be exhausted by $\Omega$ via an exhausting sequence $\{f_j: \Omega \to M\}$. 
If there exists a point $a \in M$ such that the sequence $f_j^{-1}(a)$ converges $\Lambda$-nontangentially to an $h$-extendible boundary point $\xi_0 \in \partial \Omega$ (see Definition \ref{def-order} in Section \ref{S2} for the definitions of $\Lambda$-nontangential convergence and of $h$-extendibility), then $M$ is biholomorphically equivalent to the associated model $M_P$ for $\Omega$ at $\xi_0$. \end{theorem} When $\xi_0$ is a strongly pseudoconvex boundary point, we do not need the condition that the sequence $f_j^{-1}(a)$ converges $\Lambda$-nontangentially to $\xi_0$ as $j\to \infty$. Moreover, in this circumstance, the model $M_P$ is in fact biholomorphically equivalent to $M_{|z|^2}$, which is biholomorphically equivalent to the unit ball $\mathbb B^{n+1}$. More precisely, our second main result is the following theorem. \begin{theorem}\label{togetmodelstronglypsc} Let $M$ be an $(n+1)$-dimensional hyperbolic complex manifold and let $\Omega$ be a pseudoconvex domain in $\mathbb{C}^{n+1}$. Suppose that $\partial\Omega$ is $\mathcal{C}^2$-smooth near a strongly pseudoconvex boundary point $\xi_0 \in \partial \Omega$. Suppose also that $M$ can be exhausted by $\Omega$ via an exhausting sequence $\{f_j: \Omega \to M\}$. If there exists a point $a \in M$ such that the sequence $\eta_j := f_j^{-1}(a)$ converges to $\xi_0$, then $M$ is biholomorphically equivalent to the unit ball $\mathbb{B}^{n+1}$. 
By applying Theorem \ref{togetmodelstronglypsc} and Lemma \ref{orbitinside}, we also prove that if a hyperbolic complex manifold $M$ is exhausted by a general ellipsoid $D_P$ (see Section \ref{S4} for the definition of $D_P$), then $M$ is biholomorphically equivalent either to $D_P$ or to the unit ball $\mathbb B^n$ (cf. Proposition \ref{generalellipsoid} in Section \ref{S4}). In particular, when $D_P$ is an ellipsoid $E_m\; (m\in \mathbb Z_{\geq 1})$, given by $$ E_m=\left\{(z,w)\in \mathbb C^2 \colon |w|^2+|z|^{2m}<1\right\}, $$ Proposition \ref{generalellipsoid} is in fact a generalization of \cite[Theorem $1$]{Liu18}. The organization of this paper is as follows: In Section~\ref{S2} we provide some results concerning the normality of a sequence of biholomorphisms and the $h$-extendibility. In Section \ref{S3}, we give our proofs of Theorem \ref{togetmodel} and Theorem \ref{togetmodelstronglypsc}. Finally, the proof of Proposition \ref{generalellipsoid} is given in Section \ref{S4}. \section{The normality and the $h$-extendibility}\label{S2} \subsection{The normality of a sequence of biholomorphisms} First of all, we recall the following definition (see \cite{GK} or \cite{DN09}). \begin{define} Let $\{\Omega_i\}_{i=1}^\infty$ be a sequence of open sets in a complex manifold $M$ and $\Omega_0 $ be an open set of $M$. The sequence $\{\Omega_i\}_{i=1}^\infty$ is said to converge to $\Omega_0 $ (written $\lim\Omega_i=\Omega_0$) if and only if \begin{enumerate} \item[(i)] For any compact set $K\subset \Omega_0,$ there is an $i_0=i_0(K)$ such that $i\geq i_0$ implies that $K\subset \Omega_i$; and \item[(ii)] If $K$ is a compact set which is contained in $\Omega_i$ for all sufficiently large $i,$ then $K\subset \Omega_0$. \end{enumerate} \end{define} Next, we recall the following proposition, which is a generalization of the theorem of H. Cartan (see \cite{DN09, GK, TM}). 
\begin{proposition} \label{T:7} Let $\{A_i\}_{i=1}^\infty$ and $\{\Omega_i\}_{i=1}^\infty$ be sequences of domains in a complex manifold $M$ with $\lim A_i=A_0$ and $\lim \Omega_i=\Omega_0$ for some (uniquely determined) domains $A_0$, $\Omega_0$ in $M$. Suppose that $\{f_i: A_i \to \Omega_i\} $ is a sequence of biholomorphic maps. Suppose also that the sequence $\{f_i: A_i\to M \}$ converges uniformly on compact subsets of $ A_0$ to a holomorphic map $F:A_0\to M $ and the sequence $\{g_i:=f^{-1}_i: \Omega_i\to M \}$ converges uniformly on compact subsets of $\Omega_0$ to a holomorphic map $G:\Omega_0\to M $. Then either of the following assertions holds. \begin{enumerate} \item[(i)] The sequence $\{f_i\}$ is compactly divergent, i.e., for each compact set $K\subset A_0$ and each compact set $L\subset \Omega_0$, there exists an integer $i_0$ such that $f_i(K)\cap L=\emptyset$ for $i\geq i_0$; or \item[(ii)] There exists a subsequence $\{f_{i_j}\}\subset \{f_i\}$ such that the sequence $\{f_{i_j}\}$ converges uniformly on compact subsets of $A_0$ to a biholomorphic map $F: A_0 \to \Omega_0$. \end{enumerate} \end{proposition} \begin{remark} \label{r1} By \cite[Proposition $2.1$]{Ber94} or \cite[Proposition $2.2$]{DN09} and by the hypotheses of Theorem \ref{togetmodel} and Theorem \ref{togetmodelstronglypsc}, it follows that for each compact subset $K\Subset M$ and each neighborhood $U$ of $\xi_0$ in $\mathbb C^{n+1}$, there exists an integer $j_0=j_0(K)$ such that $K\subset f_j(\Omega\cap U)$ for all $j\geq j_0$. Consequently, the sequence of domains $\{f_j(\Omega\cap U)\}$ converges to $M$. \end{remark} We will finish this subsection by recalling the following lemma (cf. \cite[Lemma $1.1$]{Fr83}). \begin{lemma}[see \cite{Fr83}]\label{orbitinside} Let $M$ be a hyperbolic manifold of complex dimension $n$. Assume that $M$ can be exhausted by $\Omega$ via an exhausting sequence $\{f_j: \Omega \to M\}$, where $\Omega$ is a bounded domain in $\mathbb{C}^n$. 
Suppose that there is an interior point $a \in M$ such that $f_j^{-1} (a) \to p \in \Omega$. Then, $M$ is biholomorphically equivalent to $\Omega$. \end{lemma} \subsection{The $h$-extendibility } In this subsection, we recall some definitions and notations given in \cite{Cat84, Yu95}. Let $\Omega$ be a smooth pseudoconvex domain in $\mathbb C^{n+1}$ and $p\in \partial\Omega$. Let $\rho$ be a local defining function for $\Omega$ near $p$. Suppose that the multitype $\mathcal{M}(p)=(1,m_1,\ldots,m_n)$ is finite. (See \cite{Cat84} for the notion of multitype.) Let us denote by $\Lambda=\left(1/m_1,\ldots,1/m_n\right)$. Then, there are distinguished coordinates $(z,w)=(z_1,\ldots,z_n,w)$ such that $p=0$ and $\rho(z,w)$ can be expanded near $0$ as follows: $$ \rho(z,w)=\mathrm{Re}(w)+P(z)+R(z,w), $$ where $P$ is a $\Lambda$-homogeneous plurisubharmonic polynomial that contains no pluriharmonic terms, $R$ is smooth and satisfies $$ |R(z,w)|\leq C \left( |w|+ \sum_{j=1}^n |z_j|^{m_j} \right)^\gamma, $$ for some constant $\gamma>1$ and $C>0$. Here and in what follows, a polynomial $P$ is called $\Lambda$-homogeneous if $$ P(t^{1/m_1}z_1,t^{1/m_2}z_2, \ldots,t^{1/m_n}z_n)=P(z),\; \forall t>0, \forall z\in \mathbb C^n. $$ \begin{define}[see \cite{NN19}]\label{def-order} The domain $M_P=\{(z,w)\in \mathbb C^n\times \mathbb C\colon \mathrm{Re}(w)+P(z)<0\}$ is called an \emph{associated model} of $\Omega$ at $p$. A boundary point $p\in \partial \Omega$ is called \emph{$h$-extendible} if its associated model $M_P$ is \emph{$h$-extendible}, i.e., $M_P$ is of finite type (see \cite[Corollary $2.3$]{Yu94}). In this circumstance, we say that a sequence $\{\eta_j=(\alpha_j,\beta_j)\}\subset \Omega$ \emph{converges $\Lambda$-nontangentially to $p$} if $|\mathrm{Im}(\beta_j)|\lesssim |\mathrm{dist}(\eta_j,\partial \Omega)|$ and $ \sigma(\alpha_j) \lesssim |\mathrm{dist}(\eta_j,\partial \Omega)|$, where $$ \sigma(z)=\sum_{k=1}^n |z_k|^{m_k}. 
$$ \end{define} Throughout this paper, we use $\lesssim$ and $\gtrsim$ to denote inequalities up to a positive multiplicative constant. Moreover, we use $\approx $ for the combination of $\lesssim$ and $\gtrsim$. In addition, $\mathrm{dist}(z,\partial\Omega)$ denotes the Euclidean distance from $z$ to $\partial\Omega$. Furthermore, for $\mu>0$ we denote by $\mathcal{O}(\mu,\Lambda)$ the set of all smooth functions $f$ defined near the origin of $\mathbb C^n$ such that $$ D^\alpha \overline{D}^\beta f(0)=0~\text{whenever}~ \sum_{j=1}^n (\alpha_j+\beta_j)\dfrac{1}{m_j} \leq \mu. $$ If $n=1$ and $\Lambda = (1)$ then we use $\mathcal{O}(\mu)$ to denote the functions vanishing to order at least $\mu$ at the origin (cf. \cite{Cat84, Yu95}). \section{Proofs of Theorem \ref{togetmodel} and Theorem \ref{togetmodelstronglypsc}}\label{S3} This section is devoted to our proofs of Theorem \ref{togetmodel} and Theorem \ref{togetmodelstronglypsc}. First of all, let us recall the definition of the Kobayashi infinitesimal pseudometric and the Kobayashi pseudodistance as follows: \begin{define} Let $M$ be a complex manifold. The Kobayashi infinitesimal pseudometric $F_M \colon M\times T^{1,0}M\to \mathbb R$ is defined by $$ F_M(p,X)=\inf \left\{c>0\;|\; \exists \; f\colon \Delta \to M \;\text{holomorphic with}\; f(0)=p, f'(0)=X/c \right\}, $$ for any $p\in M$ and $X\in T^{1,0}M$, where $\Delta $ is the unit open disk of $\mathbb C$. Moreover, the Kobayashi pseudodistance $d_M^K\colon M\times M \to \mathbb R$ is defined by $$ d_M^K(p,q)=\inf_\gamma\int_0^1 F_M(\gamma(t),\gamma'(t)) dt, $$ for any $p,q\in M$ where the infimum is taken over all differentiable curves $\gamma:[0,1] \to M$ joining $p$ and $q$. A complex manifold $M$ is called hyperbolic if $d_M^K(p,q)$ is actually a distance, i.e., $d_M^K(p,q)>0$ whenever $p\ne q$. \end{define} Next, we need the following lemma, whose proof will be given in Appendix for the convenience of the reader, and the following proposition. 
\begin{lemma}\label{conti-kob} Assume that $\{D_j\}$ is a sequence of domains in $\mathbb C^{n+1}$ converging to a model $M_P$ of finite type. Then, we have $$ \lim_{j\to \infty} F_{D_j}(z,X)=F_{M_P}(z,X),~\forall (z,X)\in M_P\times \mathbb C^{n+1}. $$ Moreover, the convergence takes place uniformly over compact subsets of $M_P\times \mathbb C^{n+1}$. \end{lemma} \begin{proposition}[see \cite{NN19}]\label{pro-scaling} Assume that $\{D_j\}$ is a sequence of domains in $\mathbb C^{n+1}$ converging to a model $M_P$ of finite type. Assume also that $\omega$ is a domain in $\mathbb C^k$ and $\sigma_j: \omega \to D_j$ is a sequence of holomorphic mappings such that $\{\sigma_j(a)\}\Subset M_P$ for some $a\in \omega$. Then $\{\sigma_j\}$ contains a subsequence that converges locally uniformly to a holomorphic map $\sigma: \omega \to M_P$. \end{proposition} Now we are ready to prove Theorem \ref{togetmodel} and Theorem \ref{togetmodelstronglypsc}. \begin{proof}[Proof of Theorem \ref{togetmodel}] Let $\rho$ be a local defining function for $\Omega$ near $\xi_0$ and suppose that the multitype $\mathcal{M}(\xi_0)=(1,m_1,\ldots,m_n)$ is finite. In what follows, denote by $\Lambda=(1/m_1,\ldots,1/m_n)$. Since $\xi_0$ is an $h$-extendible point, there exist local holomorphic coordinates $(z,w)$ in which $\xi_0=0$ and $\Omega$ can be described in a neighborhood $U_0$ of $0$ as follows: $$ \Omega\cap U_0=\left\{\rho(z,w)=\mathrm{Re}(w)+ P(z) +R_1(z) + R_2(\mathrm{Im} w)+(\mathrm{Im} w) R(z)<0\right\}, $$ where $P$ is a $\Lambda$-homogeneous plurisubharmonic real-valued polynomial containing no pluriharmonic terms, $R_1\in \mathcal{O}(1, \Lambda),R\in \mathcal{O}(1/2, \Lambda) $, and $R_2\in \mathcal{O}(2)$. (See the proof of Theorem $1.1$ in \cite{NN19} or the proof of Lemma $4.11$ in \cite{Yu95}.) By assumption, there exists a point $a\in M$ such that the sequence $\eta_j:=f^{-1}_j(a)$ converges $\Lambda$-nontangentially to $\xi_0$. 
Without loss of generality, we may assume that the sequence $\{\eta_j\}\subset \Omega\cap U_0$ and we write $\eta_j=(\alpha_j,\beta_j)=(\alpha_{j1},\ldots,\alpha_{jn},\beta_j)$ for all $j$. Then, the sequence $\{\eta_j=f_j^{-1}(a)\}$ has the following properties: \begin{itemize} \item[(a)] $|\mathrm{Im}(\beta_j)|\lesssim |\mathrm{dist}(\eta_j,\partial \Omega)|$; \item[(b)] $|\alpha_{jk}|^{m_k}\lesssim |\mathrm{dist}(\eta_j,\partial \Omega)|$ for $1\leq k\leq n$. \end{itemize} With the sequence $\{\eta_j=(\alpha_j,\beta_j)\}$ we associate a sequence of points $\eta_j'=(\alpha_{j1}, \ldots, \alpha_{jn},\beta_j +\epsilon_j)$, where $\epsilon_j>0$, such that $\eta_j'$ is in the hypersurface $\{\rho=0\}$ for all $j$. We note that $\epsilon_j\approx \mathrm{dist}(\eta_j,\partial \Omega)$. Now let us consider the sequences of dilations $\Delta^{\epsilon_j}$ and translations $L_{\eta_j'}$, defined respectively by $$ \Delta^{\epsilon_j}(z_1,\ldots,z_n,w)=\left(\frac{z_1}{\epsilon_j^{1/m_1}},\ldots,\frac{z_n}{\epsilon_j^{1/m_n}},\frac{w}{\epsilon_j}\right) $$ and $$ L_{\eta_j'}(z,w)=(z,w)-\eta'_j=(z-\alpha'_j,w-\beta'_j). $$ Under the change of variables $(\tilde z,\tilde w):=\Delta^{\epsilon_j}\circ L_{\eta'_j}(z,w)$, i.e., \[ \begin{cases} w-\beta'_j= \epsilon_j\tilde{w}\\ z_k-\alpha'_{j k}=\epsilon_j^{1/m_k}\tilde{z}_k,\, k=1,\ldots,n, \end{cases} \] one can see that $\Delta^{\epsilon_j}\circ L_{\eta_j'}(\alpha_j,\beta_j)=(0,\cdots,0,-1)$ for all $j$. Moreover, as in \cite{NN19}, after taking a subsequence if necessary, we may assume that the sequence of domains $\Omega_j:=\Delta^{\epsilon_j}\circ L_{\eta_j'}(\Omega\cap U_0) $ converges to the following model $$ M_{P,\alpha}:=\left \{(\tilde z,\tilde w)\in \mathbb C^n\times\mathbb C\colon \mathrm{Re}(\tilde w)+P(\tilde z+\alpha)-P(\alpha)<0\right\}, $$ which is obviously biholomorphically equivalent to the model $M_P$. Without loss of generality, in what follows we always assume that $\{\Omega_j\}$ converges to $M_P$. 
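In the special case where $\rho$ is exactly the model (no error terms), the effect of this change of variables can be verified by direct computation. The following numerical sketch does so for $n=1$, $P(z)=|z|^4$ (so $m_1=4$), with $\alpha_j=\epsilon_j^{1/4}a$ as dictated by $\Lambda$-nontangential convergence; it is an illustration of this toy case only, checking that $\eta_j$ is sent to $(0,-1)$ and that $\epsilon_j^{-1}\rho$ becomes exactly $\mathrm{Re}(\tilde w)+P(\tilde z+a)-P(a)$.

```python
import numpy as np

# Toy check of the scaling: rho(z, w) = Re(w) + |z|^4 (exact model, n = 1,
# m_1 = 4), boundary point eta' = (alpha, beta'), interior point
# eta = (alpha, beta' - eps), with alpha = eps^{1/4} * a.
def P(z):
    return abs(z) ** 4               # Lambda-homogeneous: P(t^{1/4} z) = t P(z)

def rho(z, w):
    return w.real + P(z)

rng = np.random.default_rng(0)
a = 0.7 + 0.2j
for eps in (1e-1, 1e-3, 1e-6):
    alpha = eps ** 0.25 * a
    beta_prime = -P(alpha) + 0.3j    # chosen so that rho(alpha, beta') = 0
    assert abs(rho(alpha, beta_prime)) < 1e-12
    # Delta^eps o L_{eta'} sends eta = (alpha, beta' - eps) to (0, -1)
    assert abs((beta_prime - eps - beta_prime) / eps + 1) < 1e-12
    # the rescaled defining function equals Re(w~) + P(z~ + a) - P(a)
    for _ in range(5):
        zt = rng.normal() + 1j * rng.normal()
        wt = rng.normal() + 1j * rng.normal()
        z = alpha + eps ** 0.25 * zt          # inverse change of variables
        w = beta_prime + eps * wt
        lhs = rho(z, w) / eps
        rhs = wt.real + P(zt + a) - P(a)
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

For the exact model the identity holds with no error term, precisely because of the $\Lambda$-homogeneity $P(\epsilon^{1/4}\zeta)=\epsilon P(\zeta)$; the remainder terms $R_1,R_2,R$ above are what produce the $o(1)$ discrepancy handled by the convergence of domains.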
Now we first consider the sequence of biholomorphisms $F_j:= T_j\circ f_j^{-1}\colon M\supset f_j(\Omega\cap U_0)\to \Omega_j$, where $T_j:=\Delta^{\epsilon_j}\circ L_{\eta_j'}$. Since $F_j(a)=(0',-1)$ and $f_j(\Omega\cap U_0)$ converges to $M$ as $j\to \infty$ (see Remark \ref{r1}), by Proposition \ref{pro-scaling} we may assume, without loss of generality, that the sequence $\{F_j\}$ converges uniformly on every compact subset of $M$ to a holomorphic map $F$ from $M$ to $\mathbb C^{n+1}$. Note that $F(M)$ contains a neighborhood of $(0',-1)$ and $F(M)\subset \overline{M_P}$. Since $\{F_j\}$ converges locally uniformly, the Cauchy integral formula implies that $\{J(F_j)\}$ converges uniformly on every compact subset of $M$ to $J(F)$, where $J(F)$ denotes the Jacobian determinant of $F$. Moreover, $J(F_j)$ is nowhere zero for every $j$, because $F_j$ is a biholomorphism. Then, the Hurwitz theorem implies that $J(F)$ is either identically zero or nowhere zero. In the case $J(F)\equiv 0$, every point of $M$ is a critical point of $F$, so by the Sard theorem $F(M)$ has measure zero; this contradicts the fact that $F(M)$ contains a neighborhood of $(0',-1)$. This yields that $J(F)$ is nowhere zero and hence $F$ is regular everywhere on $M$. By \cite[Lemma 0]{FS77}, it follows that $F(M)$ is open and $F(M)\subset M_P$. Next, we shall prove that $F$ is one-to-one. Indeed, let $z_1, z_2\in M$ be arbitrary. Fix a compact subset $L \Subset M$ such that $z_1,z_2\in L$. Then, by Remark \ref{r1} there is a $j_0(L)>0$ such that $L\subset f_j(\Omega\cap U_0)$ and $F_j(L)\subset K\Subset M_P$ for all $j>j_0(L)$, where $K$ is a compact subset of $M_P$. 
By Lemma \ref{conti-kob} and the decreasing property of the Kobayashi distance, one has \begin{align*} d^K_M(z_1,z_2)&\leq d^K_{f_j(\Omega\cap U_0)}(z_1,z_2)=d^K_{\Omega_j}(F_j(z_1),F_j(z_2))\leq C \cdot d^K_{M_P}(F_j(z_1),F_j(z_2))\\ &\leq C \left( d^K_{M_P}(F(z_1),F(z_2))+ d^K_{M_P}(F_j(z_1),F(z_1))+d^K_{M_P}(F_j(z_2),F(z_2))\right), \end{align*} where $C>1$ is a constant. Letting $j\to \infty$, we obtain $$ d^K_M(z_1,z_2)\leq C \cdot d^K_{M_P}(F(z_1),F(z_2)). $$ Since $M$ is hyperbolic, it follows that if $F(z_1)=F(z_2)$, then $z_1=z_2$. Consequently, $F$ is one-to-one, as desired. Finally, since $F$ is a biholomorphism from $M$ onto $F(M)\subset M_P$ and $M_P$ is taut (cf. \cite{Yu95}), it follows that the sequence $ F_j^{-1}=f_j\circ T_j^{-1} \colon T_j(\Omega \cap U_0)\to f_j(\Omega \cap U_0) \subset M$ is also normal. Moreover, since $T_j\circ f_j^{-1}(a)=(0',-1)\in M_P$, it follows that the sequence $T_j\circ f_j^{-1}$ is not compactly divergent. Therefore, by Proposition \ref{T:7}, after passing to a subsequence we may assume that $T_j\circ f_j^{-1}$ converges uniformly on every compact subset of $M$ to a biholomorphism from $M$ onto $M_P$. Hence, the proof is complete. \end{proof} \begin{remark} If $M$ is a bounded domain in $\mathbb C^{n+1}$, the normality of the sequence $F_j^{-1}$ can be shown by using the Montel theorem. Thus, the proof of Theorem \ref{togetmodel} simply follows from Proposition \ref{T:7}. \end{remark} \begin{proof}[Proof of Theorem \ref{togetmodelstronglypsc}] Let $\rho$ be a local defining function for $\Omega$ near $\xi_0$. We may assume that $\xi_0=0$. 
After a linear change of coordinates, one can find local holomorphic coordinates $(\tilde z,\tilde w)=(\tilde z_1,\cdots, \tilde z_n,\tilde w)$, defined on a neighborhood $U_0$ of $\xi_0$, such that \begin{equation*} \begin{split} \rho(\tilde z,\tilde w)=\mathrm{Re}(\tilde w)+ \sum_{j=1}^{n}|\tilde z_j|^2+ O(|\tilde w| \|\tilde z\|+\|\tilde z\|^3). \end{split} \end{equation*} By \cite[Proposition 3.1]{DN09} (or Subsection $3.1$ in \cite{Ber06} for the case $n=1$), for each point $\eta$ in a small neighborhood of the origin, there exists an automorphism $\Phi_\eta$ of $\mathbb C^{n+1}$ such that \begin{equation*} \begin{split} \rho(\Phi_{\eta}^{-1}(z,w))-\rho(\eta)=\mathrm{Re}(w)+ \sum_{j=1}^{n}|z_j|^2+ O(|w| \|z\|+\|z\|^3). \end{split} \end{equation*} Let us define an anisotropic dilation $\Delta^\epsilon$ by $$ \Delta^\epsilon (z_1,\cdots,z_n,w)= \left(\frac{z_1}{\sqrt{\epsilon}},\cdots,\frac{z_{n}}{\sqrt{\epsilon}},\frac{w}{\epsilon}\right). $$ For each $\eta\in \partial \Omega$, if we set $\rho_\eta^\epsilon(z, w)=\epsilon^{-1}\rho\circ \Phi_\eta^{-1}\circ( \Delta^\epsilon)^{-1}(z,w)$, then \begin{equation*} \rho_\eta^\epsilon(z, w)= \mathrm{Re}(w)+\sum_{j=1}^{n}|z_j|^2+O(\sqrt{\epsilon}). \end{equation*} By assumption, the sequence $\eta_j:=f^{-1}_j(a)$ converges to $\xi_0$. Then we associate a sequence of points ${\eta}_j'=(\eta_{j1}, \cdots, \eta_{jn},\eta_{j(n+1)}+\epsilon_j)$, where $ \epsilon_j>0$, such that ${\eta}_j'$ is in the hypersurface $\{\rho=0\}$. Then $ \Delta^{\epsilon_j}\circ \Phi_{{\eta'}_j}({\eta}_j)=(0,\cdots,0,-1)$ and one can see that $ \Delta^{\epsilon_j}\circ \Phi_{{\eta'}_j}(\{\rho=0\}) $ is defined by an equation of the form \begin{equation*} \begin{split} \mathrm{Re}(w)+\sum_{j=1}^{n}|z_j|^2+O(\sqrt{\epsilon_j})=0. 
\end{split} \end{equation*} Therefore, it follows that, after taking a subsequence if necessary, $\Omega_j:=\Delta^{\epsilon_j}\circ \Phi_{{\eta'}_j}(\Omega\cap U_0)$ converges to the following domain \begin{equation}\label{Eq29} \mathcal{E}:=\{\hat\rho:= \mathrm{Re}(w)+\sum_{j=1}^{n}|z_j|^2<0\}, \end{equation} which is biholomorphically equivalent to the unit ball $\mathbb B^{n+1}$. Now let us consider the sequence of biholomorphisms $F_j:= T_j\circ f_j^{-1} \colon M\supset f_j(\Omega \cap U_0)\to T_j(\Omega \cap U_0)$, where $T_j:= \Delta^{\epsilon_j}\circ \Phi_{{\eta'}_j}$. Since $F_j(a)=(0',-1)$, by \cite[Theorem 3.11]{DN09}, without loss of generality, we may assume that the sequence $F_j$ converges uniformly on every compact subset of $M$ to a holomorphic map $F$ from $M$ to $\mathbb C^{n+1}$. Note that $F(M)$ contains a neighborhood of $(0',-1)$ and $F(M)\subset \overline{\mathcal{E}}$. Following the argument in the proof of Theorem \ref{togetmodel}, we conclude that $F$ is a biholomorphism from $M$ onto $\mathcal{E}$, and thus $M$ is biholomorphically equivalent to $\mathbb B^{n+1}$, as desired. \end{proof} By Lemma \ref{orbitinside} and Theorem \ref{togetmodelstronglypsc}, we obtain the following corollary, proved by F. S. Deng and X. J. Zhang \cite[Theorem 2.4]{DZ19} and by B. L. Fridman \cite[Theorem I]{Fr83}. \begin{corollary} \label{str-psc-ex} Let $D$ be a bounded strictly pseudoconvex domain in $\mathbb C^n$ with $\mathcal{C}^2$-smooth boundary. If a bounded domain $\Omega$ can be exhausted by $D$, then $\Omega$ is biholomorphically equivalent to $D$ or the unit ball $\mathbb B^n$. \end{corollary} \section{Exhausting a complex manifold by a general ellipsoid}\label{S4} In this section, we are going to prove that if a complex manifold $M$ can be exhausted by a general ellipsoid $D_P$ (see the definition of $D_P$ below), then $M$ is biholomorphically equivalent to either $D_P$ or the unit ball $\mathbb B^n$. 
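In both Corollary \ref{str-psc-ex} and the result of this section, the unit ball enters through the model $\mathcal{E}$ of (\ref{Eq29}). The equivalence $\mathcal{E}\cong \mathbb B^{n+1}$ is realized by a Cayley-type transform; the sketch below (an illustration only, written for $n=2$ with the sign convention $\mathrm{Re}(w)+\|z\|^2<0$; the explicit formulas are the classical ones) verifies it numerically on random points.

```python
import numpy as np

# Cayley-type map B^{n+1} -> E = { (z, w) : Re(w) + |z|^2 < 0 } and its
# inverse (classical formulas, assumed here; n = 2 below).
def to_model(z, w):
    return z / (1 + w), (w - 1) / (1 + w)

def to_ball(zt, wt):
    return 2 * zt / (1 - wt), (1 + wt) / (1 - wt)

rng = np.random.default_rng(1)
n = 2
for _ in range(200):
    # random point of the unit ball in C^{n+1}, kept away from the boundary
    v = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)
    v *= 0.95 * rng.uniform() / np.linalg.norm(v)
    z, w = v[:n], v[n]
    zt, wt = to_model(z, w)
    # the image lies in the model E
    assert wt.real + np.linalg.norm(zt) ** 2 < 0
    # the inverse map recovers the original point of the ball
    z2, w2 = to_ball(zt, wt)
    assert np.allclose(z2, z) and np.allclose(w2, w)
```

The identity behind the first assertion is $\mathrm{Re}(\tilde w)+\|\tilde z\|^2=(\|z\|^2+|w|^2-1)/|1+w|^2$, which is negative precisely when $(z,w)$ lies in the ball.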
First of all, let us fix $n-1$ positive integers $m_1,\ldots, m_{n-1}$ and denote by $\Lambda:=\left(\frac{1}{m_1}, \ldots, \frac{1}{m_{n-1}}\right)$. We assign weights $\frac{1}{m_1}, \ldots, \frac{1}{m_{n-1}}, 1$ to $z_1,\ldots,z_n$. For an $(n-1)$-tuple $K = (k_1,\ldots,k_{n-1}) \in \mathbb{Z}^{n-1}_{\geq 0}$, denote the weight of $K$ by $$wt(K) := \sum_{j=1}^{n-1} \dfrac{k_j}{m_j}.$$ Next, we consider the general ellipsoid $D_P$ in $\mathbb C^n\;(n\geq2)$, defined by \begin{equation*} \begin{split} D_P &:=\{(z',z_n)\in \mathbb C^n\colon |z_n|^2+P(z')<1\}, \end{split} \end{equation*} where \begin{equation}\label{series expression of P on D_P} P(z')=\sum_{wt(K)=wt(L)=1/2} a_{KL} {z'}^K \bar{z'}^L, \end{equation} with $a_{KL}\in \mathbb C$, $a_{KL}=\bar{a}_{LK}$, satisfying that $P(z')>0$ whenever $z' \in \mathbb{C}^{n-1} \setminus \{0'\}$. We would like to emphasize here that the polynomial $P$ given in (\ref{series expression of P on D_P}) is $\Lambda$-homogeneous and the assumption that $P(z')>0$ whenever $z'\ne 0$ ensures that $D_P$ is bounded in $\mathbb{C}^n$ (cf. \cite[Lemma 6]{NNTK19}). Moreover, since $P(z')>0$ for $z'\ne 0$ and by the $\Lambda$-homogeneity, there are two constants $c_1,c_2>0$ such that $$ c_1 \sigma_\Lambda(z') \leq P(z')\leq c_2 \sigma_\Lambda(z'), \; \forall z'\in \mathbb C^{n-1}, $$ where $\sigma_\Lambda(z')=|z_1|^{m_1}+\cdots+|z_{n-1}|^{m_{n-1}}$. In addition, $D_P$ is called a WB-domain if it is strongly pseudoconvex at every boundary point outside the set $\{(0',e^{i\theta})\colon \theta\in \mathbb R\}$ (cf. \cite{AGK16}). Now we prove the following proposition. \begin{proposition} \label{generalellipsoid} Let $M$ be an $n$-dimensional hyperbolic complex manifold. Suppose that $M$ can be exhausted by the general ellipsoid $D_P$ via an exhausting sequence $\{f_j: D_P \to M\}$. If $D_P$ is a $WB$-domain, then $M$ is biholomorphically equivalent to either $D_P$ or the unit ball $\mathbb{B}^n$. 
\end{proposition} \begin{remark} The possibility that $M$ is biholomorphic to the unit ball $\mathbb B^n$ is not excluded because $D_P$ can exhaust the unit ball $\mathbb B^n$ by \cite[Corollary $1.4$]{FM95}. \end{remark} \begin{proof}[Proof of Proposition \ref{generalellipsoid}] Let $q$ be an arbitrary point in $M$. Then, thanks to the boundedness of $D_P$, after passing to a subsequence if necessary we may assume that the sequence $\{f_j^{-1}(q)\}_{j=1}^{\infty}$ converges to a point $p\in \overline{D_P}$ as $j \to \infty$. We now divide the argument into two cases as follows: \noindent {\bf Case 1.} $f_j^{-1}(q)\to p\in D_P$. Then, it follows from Lemma \ref{orbitinside} that $M$ is biholomorphically equivalent to $D_P$. \noindent {\bf Case 2.} $f_j^{-1}(q)\to p\in\partial D_P$. Let us write $f_j^{-1}(q)=(a_j', a_{jn})\in D_P$ and $p=(a',a_n)\in \partial D_P$. As in \cite{NNTK19}, for each $j\in \mathbb N^*$ we consider $\psi_j\in \mathrm{Aut}(D_P)$, defined by $$ \psi_j(z)=\left( \frac{(1-|a_{jn}|^2)^{1/m_1}}{(1-\bar{a}_{jn}z_n)^{2/m_1}} z_1,\ldots, \frac{(1-|a_{jn}|^2)^{1/m_{n-1}}}{(1-\bar{a}_{jn}z_n)^{2/m_{n-1}}} z_{n-1}, \frac{z_n-a_{jn}}{1-\bar{a}_{jn} z_n}\right). $$ Then $\psi_j\circ f_j^{-1}(q)=(b_j,0)$, where $$ b_j= \left( \frac{a_{j1}}{(1-|a_{jn}|^2)^{1/m_1}} ,\ldots, \frac{a_{j (n-1)}}{(1-|a_{jn}|^2)^{1/m_{n-1}}}\right),\; \forall j\in \mathbb N^*. $$ Without loss of generality, one may assume that $b_j\to b\in \mathbb C^{n-1}$ as $j\to \infty$. Since $D_P$ is a $WB$-domain, two possibilities may occur: \noindent {\bf Subcase 1:} $p=(a',a_n)$ is a strongly pseudoconvex boundary point. In this subcase, it follows directly from Theorem \ref{togetmodelstronglypsc} that $M$ is biholomorphically equivalent to $\mathbb B^{n}$. \noindent {\bf Subcase 2:} $p=(0',e^{i\theta})$ is a weakly pseudoconvex boundary point. In this subcase, one must have $a_j'\to 0'$ and $a_{jn}\to e^{i\theta}$ as $j\to \infty$. 
Denote by $\rho(z):=|z_n|^2-1+P(z')$ a defining function for $D_P$. Then $\text{dist}(a_j, \partial D_P)\approx -\rho(a_j)= 1-|a_{jn}|^2-P(a_j')$. Suppose that $\{a_j\}$ converges $\Lambda$-nontangentially to $p$, i.e., $P(a_j')\approx \sigma_\Lambda(a_j')\lesssim \text{dist}(a_j, \partial D_P)$, or equivalently $P(a_j')\leq C(1-|a_{jn}|^2-P(a_j')),\; \forall j\in \mathbb N^*$, for some $C>0$. This implies that $$ P(a_j')\leq \dfrac{C}{1+C}(1-|a_{jn}|^2),\; \forall j\in \mathbb N^*, $$ and thus $P(b_j)=\dfrac{1}{1-|a_{jn}|^2}P(a_j')\leq \dfrac{C}{1+C}<1,\; \forall j\in \mathbb N^*$. This yields $ \psi_j\circ f_j^{-1}(q)=(b_j,0)\to (b,0)\in D_P$ as $j\to \infty$. So, again by Lemma \ref{orbitinside} one concludes that $M$ is biholomorphically equivalent to $D_P$. Now let us consider the case that the sequence $\{a_j\}$ does not converge $\Lambda$-nontangentially to $p$, i.e., after passing to a subsequence, $P(a_j')\geq c_j \text{dist}(a_j, \partial D_P), \; \forall j\in \mathbb N^*$, where $0<c_j\to +\infty$. This implies that $P(a_j')\geq c_j'(1-|a_{jn}|^2-P(a_j')),\; \forall j\in \mathbb N^*$, for some $0<c_j'\to +\infty$, and hence $$ P(a_j')\geq \dfrac{c_j'}{1+c_j'}(1-|a_{jn}|^2),\; \forall j\in \mathbb N^*. $$ Thus, one obtains that $P(b_j)=\dfrac{1}{1-|a_{jn}|^2}P(a_j')\geq \dfrac{c_j'}{1+c_j'}$, which implies that $P(b)=1$. Consequently, $\psi_j\circ f_j^{-1}(q)$ converges to the strongly pseudoconvex boundary point $p'=(b,0)\in \partial D_P$. Hence, as in Subcase $1$, it follows from Theorem \ref{togetmodelstronglypsc} that $M$ is biholomorphically equivalent to $\mathbb B^{n}$. This completes the proof of Proposition \ref{generalellipsoid}. \end{proof} \section*{Appendix} \begin{proof}[Proof of Lemma \ref{conti-kob}] We shall follow the proof of \cite[Theorem $2.1$]{Yu95} with minor modifications. To do this, let us fix compact subsets $K\Subset M_P$ and $L\Subset \mathbb C^{n+1}$. 
Then it suffices to prove that $F_{D_j}(z,X)$ converges to $F_{M_P}(z,X)$ uniformly on $K\times L$. Indeed, suppose otherwise. Then, there exist $\epsilon_0>0$, a sequence of points $\{z_{j_\ell}\}\subset K$, and a sequence $\{X_{j_\ell}\}\subset L$ such that $$ |F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})-F_{M_P}(z_{j_\ell},X_{j_\ell})|>\epsilon_0,~\forall~\ell\geq 1. $$ By the homogeneity of the Kobayashi metric $F(z,X)$ in $X$, we may assume that $\|X_{j_\ell}\|=1$ for all $\ell\geq 1$. Moreover, passing to subsequences, we may also assume that $z_{j_\ell}\to z_0\in K$ and $X_{j_\ell}\to X_0\in L$ as $\ell \to \infty$. Since $M_P$ is taut (see \cite[Theorem $3.13$]{Yu95}), for each $(z,X)\in M_P\times \mathbb C^{n+1}$ with $X\ne 0$, there exists an analytic disc $\varphi\in \mathrm{Hol}(\Delta, M_P)$ such that $\varphi(0)=z$ and $\varphi'(0)=X/F_{M_P}(z,X)$. This implies that $F_{M_P}(z,X)$ is continuous on $M_P\times \mathbb C^{n+1}$. Hence, we obtain $$ F_{M_P}(z_{j_\ell},X_{j_\ell})\to F_{M_P}(z_0,X_0), $$ and thus we have \begin{align}\label{eq136-0} |F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})-F_{M_P}(z_0,X_0)|>\epsilon_0/2 \end{align} for $\ell$ big enough. By definition, for any $\delta\in (0,1)$ there exists a sequence of analytic discs $\varphi_{j_\ell}\in \mathrm{Hol}(\Delta, D_{j_\ell})$ such that $\varphi_{j_\ell}(0)=z_{j_\ell},\varphi_{j_\ell}'(0)= \lambda_{j_\ell} X_{j_\ell}$, where $\lambda_{j_\ell}>0$, and \begin{align*} F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})\geq \frac{1}{\lambda_{j_\ell}}-\delta. \end{align*} It follows from Proposition \ref{pro-scaling} that every subsequence of the sequence $\{\varphi_{j_\ell}\}$ has a subsequence converging to some analytic disc $\psi\in \mathrm{Hol}(\Delta, M_P)$ such that $\psi(0)=z_0,\psi'(0)= \lambda X_0$, for some $\lambda>0$. Thus, one obtains that $$ F_{M_P}(z_0,X_0)\leq \frac{1}{\lambda} $$ for any such $\psi$. 
Therefore, one has \begin{align} \label{eq136-1} \liminf_{\ell\to \infty} F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})\geq F_{M_P}(z_0,X_0)-\delta. \end{align} On the other hand, as in \cite{Yu95}, by the tautness of $M_P$, there exists an analytic disc $\varphi \in \mathrm{Hol}(\Delta, M_P)$ such that $\varphi(0)=z_0, \varphi'(0)=\lambda X_0$, where $\lambda=1/F_{M_P}(z_0,X_0)$. Now for $\delta\in (0,1)$, let us define an analytic disc $\psi_{j_\ell}^\delta:\Delta\to \mathbb C^{n+1}$ by setting \begin{align*} \psi_{j_\ell}^\delta(\zeta):= \varphi((1-\delta)\zeta)+\lambda (1-\delta) (X_{j_\ell}-X_0)\zeta+(z_{j_\ell}-z_0)\; \text{for all}\; \zeta \in \Delta. \end{align*} Since $\varphi((1-\delta)\overline{\Delta})$ is a compact subset of $M_P$ and $X_{j_\ell}\to X_0$, $z_{j_\ell}\to z_0$ as $\ell\to \infty$, it follows that $\psi_{j_\ell}^\delta(\Delta)\subset D_{j_\ell}$ for all sufficiently large $\ell$, that is, $\psi_{j_\ell}^\delta\in \mathrm{Hol}(\Delta, D_{j_\ell})$. Moreover, by construction, $\psi_{j_\ell}^\delta(0)=z_{j_\ell}$ and $\left(\psi_{j_\ell}^\delta\right)'(0)=(1-\delta)\lambda X_{j_\ell}$. Therefore, again by definition, one has \begin{align*} F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})\leq \frac{1}{(1-\delta) \lambda}=\frac{1}{(1-\delta)} F_{M_P}(z_0,X_0) \end{align*} for all large $\ell$. Thus, letting $\delta\to 0^+$, one concludes that \begin{align}\label{eq136-2} \limsup_{\ell\to \infty} F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})\leq F_{M_P}(z_0,X_0). \end{align} Since $\delta\in (0,1)$ was arbitrary, (\ref{eq136-1}) and (\ref{eq136-2}) together yield $\lim_{\ell\to \infty} F_{D_{j_\ell}}(z_{j_\ell},X_{j_\ell})=F_{M_P}(z_0,X_0)$, which contradicts (\ref{eq136-0}). Hence, the proof is complete. \end{proof} \end{document}
\begin{document} \begin{center} \LARGE{\bf Describing groups using first-order language}\\[10pt] \large{Yuki Maehara}\\ \large{Supervisor: Andr\'e Nies} \end{center} \section{Introduction} How can large groups be described efficiently? Of course one can always use natural language, or give presentations to be more rigorous, but how about using formal language? In this paper, we will investigate two notions concerning such descriptions: \emph{quasi-finite axiomatizability}, concerning infinite groups, and \emph{polylogarithmic compressibility}, concerning classes of finite groups. An infinite group is said to be \emph{quasi-finitely axiomatizable} if it can be described by a single first-order sentence, together with the information that the group is finitely generated (which is not first-order expressible). In first-order language, the only parts of a sentence that can contain an infinite amount of information are the quantifiers $\exists$ and $\forall$. Since the variables correspond to the elements in the language of groups, we can only ``talk'' about elements, but not, for example, about all subgroups of a given group. We give several examples of groups that can be described in this restricted language, with proofs intended to be understandable even to undergraduate students. We say a class of finite groups is \emph{polylogarithmically compressible} if each group in the class can be described by a first-order sentence whose length is polylogarithmic (i.e.~polynomial in $\log$) in the size of the group. We need a restriction on the length of the sentence because each finite group can be described by a first-order sentence. The most standard (and inefficient) way to do so is to describe the whole Cayley table, in which case the length of the sentence is of order the square of the size of the group. The examples given in this paper include the class of finite simple groups (excluding a certain family), and the class of finite abelian groups. 
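The quadratic cost of the Cayley-table description is easy to observe experimentally. The sketch below (an illustration with an assumed, ad hoc ASCII syntax for the sentence; only the growth rate matters) writes out the naive sentence for the cyclic group $\mathbb Z/n$ — existentially quantified elements, pairwise distinctness, the full multiplication table, and a closure clause — and checks that doubling $n$ roughly quadruples its length.

```python
# Naive Cayley-table sentence for Z/n, in an ad hoc ASCII notation:
#   exists x_0 .. x_{n-1} [ pairwise distinct  &  x_i * x_j = x_{(i+j) mod n}
#                           &  forall y ( y = x_0 | ... | y = x_{n-1} ) ]
def cayley_sentence(n):
    vs = [f"x{i}" for i in range(n)]
    distinct = " & ".join(f"{vs[i]} != {vs[j]}"
                          for i in range(n) for j in range(i + 1, n))
    table = " & ".join(f"{vs[i]} * {vs[j]} = {vs[(i + j) % n]}"
                       for i in range(n) for j in range(n))
    closure = "forall y (" + " | ".join(f"y = {v}" for v in vs) + ")"
    return f"exists {' '.join(vs)} ({distinct} & {table} & {closure})"

lengths = {n: len(cayley_sentence(n)) for n in (8, 16, 32, 64)}
for n in (8, 16, 32):
    ratio = lengths[2 * n] / lengths[n]
    # quadratic growth: doubling the group size roughly quadruples the length
    assert 3.5 < ratio < 5.0
```

The $n^2$ terms of the multiplication-table clause dominate; the variable names contribute only the logarithmic factor absorbed in "roughly".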
\section{Preliminaries} \label{preliminaries} \subsection{Homomorphisms} \label{homomorphisms} A surjective homomorphism is called an {\it epimorphism}. A homomorphism from a structure into itself is called an {\it endomorphism}. If $G$ is an abelian group, then endomorphisms of $G$ form a ring under the usual addition and composition (see \cite[\S 5]{Kar.Mer:79}). In particular, if $x$ (not necessarily in $G$) is such that conjugation by $x$ is an automorphism of $G$, and if $P(X)=\sum_{i \in I}\alpha_i X^i$ is a polynomial over $\mathbb{Z}$ where $I$ is a finite subset of $\mathbb{Z}$, then for any element $g \in G$, we write \[ g^{P(x)}=\sum_{i \in I} \alpha_i g^{x^i} \] if $G$ is written additively, and similarly \[ g^{P(x)}=\prod_{i \in I} (g^{x^i})^{\alpha_i} \] if $G$ is written multiplicatively. If $G$ is a group and $x \in G$, then the map defined by $g \mapsto x^{-1}gx$ is an automorphism of $G$ called {\it conjugation} by $x$. We denote $x^{-1}gx$ by $g^x$, or $\mathtt{Conj}(g,x)$. \subsection{Presentations} \label{presentations} \emph{Presentations} are a way of describing groups. A group $G$ has a presentation \[ P = \langle~X~|~R~\rangle \] where $X$ is the set of generators and $R$ is the set of relators written in terms of the generators in $X$, iff \[ G \cong F(X)/N \] where $F(X)$ is the free group generated by $X$ and $N$ is the least normal subgroup containing $R$. $G$ is said to be \emph{finitely presented} (\emph{f.p.}) if $X,R$ can be chosen to be finite. \subsection{Metabelian groups} \label{metabelian} For any groups $G,A,C$, one says $G = A \rtimes C$ ($G$ is a {\it semidirect product} of $A$ and $C$) if \begin{center} $AC=G$, $A \triangleleft G$, and $A\cap C= \{1\}$. \end{center} In particular, $G$ is said to be {\it metabelian} if $G = A \rtimes C$ for some abelian groups $A, C$. In particular, $G'$ is then abelian. 
\begin{lemma}[Nies \cite{Nies:03}] \label{metabelian commutators} Let $G = A \rtimes C$ where $A$ and $C$ are abelian. Then, the commutator subgroup of $G$ is $G' = [A, C]$. Moreover, if $C$ is generated by a single element $d$, then $G' = \{[u,d]~|~u \in A \}$. In particular, $G'$ coincides with the set of commutators. \end{lemma} \begin{proof} Let $u,v \in A$ and $x,y \in C$. Then, $[ux,vy] = [u^x, y][x, v^y]$ using the commutator rule ${[ab,c] = [a,c]^b[b,c]}$ (\cite[3.2.(3)]{Kar.Mer:79}) and since $A$ and $C$ are abelian. Since both $[u^x, y]$ and $[x, v^y]$ are in $[A,C]$, $G' = [A, C]$. Now suppose $C = \langle d \rangle$. We use additive notation in $A$. Then, clearly $S = {\{[u,d] \ | \ u \in A \}}$ is a subset of $G' = [A, \langle d \rangle]$. Since $[u,d][v,d]=[u+v,d]$ and $[u,d]^{-1}=[-u,d]$, $S$ forms a subgroup of $G$. Also, since $[u,x^{-1}]=[-u^{x^{-1}},x]$ and ${[u, d^{n+1}]=[u^{d^n}+ \ldots +u^d+u,d]}$ for all $x \in C$, $S$~contains all commutators. Hence $G'=S$. \end{proof} \subsection{Finitely generated groups} \label{f.g.} One says a group $G$ is {\it finitely generated} ({\it f.g.}) if $G$ is generated by some finite subset. \begin{lemma} \label{f.g. factor} Let $G$ be a f.g.\ group and let $N$ be a normal subgroup of $G$. Then the factor group $G/N$ is also f.g. \end{lemma} \begin{proof} Let $S = \{g_1, \ldots, g_n\}$ be a finite generating set of $G$. Then $G/N$ is generated by $SN = \{g_i N \ | \ 1 \le i \le n \}$, which is clearly finite. \end{proof} \begin{lemma}[Kargapolov and Merzljakov \cite{Kar.Mer:79}] \label{f.g. abelian} Each f.g.\ abelian group is a finite direct sum of cyclic groups. \end{lemma} \begin{lemma} \label{prod conj} If a group $G$ is generated by a subset $X \cup Y$ of $G$, then each element $g \in G$ can be written as a product \begin{center} $g = w \cdot \prod \mathtt{Conj}(x_i, y_i)$ \ where $x_i \in X$, \ $w, y_i \in \langle Y \rangle$. 
\end{center} In particular, if $A$ is a normal subgroup of $G$ such that $X \subseteq A$ and $\langle Y \rangle \cap A$ is trivial, then each element $u \in A$ can be written as \begin{center} $u = \prod \mathtt{Conj}(x_i, y_i)$ \ where $x_i \in X$, \ $y_i \in \langle Y \rangle$. \end{center} \end{lemma} \begin{proof} Let $g \in G$. Then since $X \cup Y$ generates $G$, it can be written as a product \[ g = \prod_{j=1}^n v_j u_j \] for some positive integer $n$, where $u_j \in \langle X \rangle$, $v_j \in \langle Y \rangle$ for each $j$. The lemma can be proven by induction on $n$. \end{proof} \subsection{Rings} \label{rings} In this paper, we mean by a \textquotedblleft ring\textquotedblright \ an associative ring with the multiplicative identity $1 \neq 0$. An ideal of a ring is said to be {\it principal} if it is generated by a single element. One says a ring $\mathcal{R}$ is {\it principal} if every ideal of $\mathcal{R}$ is principal. Let $\mathcal{R}$ be a ring. A non-zero element $u \in \mathcal{R}$ is called a {\it zero-divisor} if there exists a non-zero element $v \in \mathcal{R}$ such that $uv=0$ or $vu=0$. $\mathcal{R}$ is said to be {\it entire} if it is commutative and does not contain any zero-divisors. Let $\mathcal{R}$ be a commutative ring and let $S$ be a subset of $\mathcal{R}$ containing $1$, closed under multiplication. Define a relation $\sim$ on the set ${\{(a,s) \ | \ a \in \mathcal{R}, \ s \in S\}}$ by \begin{center} $(a,s) \sim (a',s')$ iff there exists $s_1 \in S$ such that $s_1(s'a-sa')=0$, \end{center} then clearly $\sim$ is an equivalence relation. Let $S^{-1}\mathcal{R}$ be the set of equivalence classes and let $a/s$ denote the equivalence class containing $(a,s)$. 
Then $S^{-1}\mathcal{R}$ forms a ring under the following operations: \begin{itemize} \item addition is defined by $a/s + a'/s' = (s'a+sa')/ss'$ \item multiplication is defined by $(a/s) \cdot (a'/s') = aa'/ss'$ \end{itemize} This ring is called the {\it ring of fractions} of $\mathcal{R}$ by $S$. For the proofs that these operations are well-defined and $S^{-1}\mathcal{R}$ forms a ring, see \cite[\S 3]{Lang:84}. \subsection{Modules} \label{modules} Let $\mathcal{R}$ be a ring. A {\it module} over $\mathcal{R}$, or an {\it $\mathcal{R}$-module} $M$, written additively, is an abelian group with multiplication by elements in $\mathcal{R}$ defined in such a way that for any $a,b \in \mathcal{R}$ and for any $x,y \in M$, \begin{itemize} \item $(a+b)x=ax+bx$, \item $a(x+y)=ax+ay$, \item $(ab)x=a(bx)$, \ and \item $1x=x$. \end{itemize} Modules are a generalization of abelian groups in the sense that every abelian group is a $\mathbb{Z}$-module. A {\it generating set} $S$ of an $\mathcal{R}$-module $M$ is a subset of $M$ such that every element of $M$ can be written as a sum of terms in the form $a_i s_i$ where $a_i \in \mathcal{R}$, $s_i \in S$. A module is said to be {\it finitely generated} if it possesses a finite generating set. One says an $\mathcal{R}$-module $M$ is {\it torsion-free} if for any $a \in \mathcal{R}$, $x \in M$, $ax=0$ implies $a=0$ or $x=0$. An $\mathcal{R}$-module $M$ is {\it free} if it is isomorphic to $\bigoplus_{i \in I} R_i$ for some finite index set $I$, where each $R_i$ is isomorphic to $\mathcal{R}$ seen as a module over itself in the natural way. \section{Quasi-finitely axiomatizable groups} \label{QFA} \begin{definition} \label{QFA def} An infinite f.g.\ group $H$ is \emph{quasi-finitely axiomatizable (QFA)} if there exists a first-order sentence $\psi$ such that $H \models \psi$ and if $G$ is a f.g.\ group and $G \models \psi$, then $G \cong H$. \end{definition} The idea of quasi-finite axiomatizability was introduced by Nies in \cite{Nies:03}. 
It was originally used to determine the expressiveness of first-order logic in group theory. Later, it turned out to be interesting even from an algebraic point of view. For example, Oger and Sabbagh \cite{Oger.Sabbagh:06} showed that if $G$ is a f.g.~nilpotent group, then $G$ is QFA iff each element $z \in Z(G)$ satisfies $z^n \in G'$ for some positive integer $n$. In each of the proofs below, we give the sentence $\psi$, suppose a f.g.~group $G$ satisfies $\psi$ and show that $G$ must be isomorphic to $H$. Since we ``do not know'' whether $G \cong H$ holds until the end of the proof, we want to talk about $G$ and $H$ separately. So we refer to the group $H$ as the {\it standard case}, as opposed to the {\it general case} $G$. \subsection{Finitely presented groups} \label{BS} Our first example is the Baumslag-Solitar groups, which are finitely presented (see below for presentations). They are relatively easy to describe in first-order logic because the whole presentation can be a part of the sentence. Although most of the QFA groups we give in this paper are described as semidirect products, this is the only case where we can define the action in first-order logic. A Baumslag-Solitar group is a group with a presentation of the form \[ \langle \ a,d \ | \ d^{-1}a^n d= a^m \rangle \] for some integers $m,n$. We show that each Baumslag-Solitar group with $n=1$ and $m \ge 2$ is QFA. For each $m \ge 2$, define \[ H_m = \langle~a,d \ | \ d^{-1}ad = a^m \rangle. \] Then $H_m$ is the semidirect product of $A = \mathbb{Z}[1/m] = \{z m^{-i} \ |\ z \in \mathbb{Z}, i \in {\mathbb{N}} \}$ by $C = \langle d \rangle$, where the action of $d$ on $A$ is given by $d^{-1}ud = um$ for $u \in A$. \begin{theorem}[Nies \cite{Nies:07}] \label{BS thm} $H_m$ is QFA for each integer $m \ge 2$. \end{theorem} \begin{proof} We prove the theorem by actually giving a sentence $\exists d \ \varphi_m(d)$ describing the group. 
We list sufficiently many first-order properties (P0)-(P9) of $H_m$ and its element $d$ so that whenever a f.g.\ group $G$ has an element $d$ satisfying the conjunction $\varphi_m(d) \equiv {{\rm[(P0)} \wedge \ldots \wedge {\rm(P9)]}}$, we must have $G \cong H_m$. That is, \ $G = A \rtimes C$ where each of $A,C$ is isomorphic to that of the standard case, and $C$ is generated by $d$. The first two are properties of $d$. \begin{itemize} \item[(P0)] $d \neq 1$ \item[(P1)] $\forall g \, [g^i \neq d]$, for each $i$, $1 < i \le m$. \end{itemize} The next five formulas define the subgroups $G', A, C$ and describe some of their properties. Fix a prime $q$ that does not divide $m$. \begin{itemize} \item[(P2)] The commutators form a subgroup (so that $G'$ is definable) \item[(P3)] $A=\{g \ | \ g^{m-1} \in G'\}$ and $C=C(d)$ are abelian, and $G = A \rtimes C$ \item[(P4)] $|C:C^2|=2$ \item[(P5)] $|A:A^q|=q$ \item[(P6)] The map $u \mapsto u^q$ is 1-1 in $A$. \end{itemize} We know (P2) holds in $H_m$ from Lemma \ref{metabelian commutators}. (P4) can be expressed as \textquotedblleft there is an element which is not the square of any element, and for any three elements $x_1, x_2, x_3$, the formula $\exists y \ [x_i = x_j y^2]$ is satisfied for some $1 \le i < j \le 3$\textquotedblright, and similarly for (P5). We show that (P3) actually defines $A$ in the standard case. If $g \in \mathbb{Z}[1/m]$, then $[g,d]=g^{-1}d^{-1}gd=g^{-1}g^{m}=g^{m-1}$, so $g^{m-1} \in H'_m$. Conversely, since $\mathbb{Z}[1/m]$ is closed under taking roots (because $H_m/\mathbb{Z}[1/m]$ is torsion-free), $g \notin \mathbb{Z}[1/m]$ implies $g^{m-1} \notin \mathbb{Z}[1/m]$. Since $H_m/\mathbb{Z}[1/m]$ is abelian, $H'_m \le \mathbb{Z}[1/m]$ and so $g^{m-1} \notin H'_m$. The last three describe how $C$ acts on $A$.
\begin{itemize} \item[(P7)] $\forall u \in A \ [ d^{-1}u d= u^m]$ \item[(P8)] $u^x \in A-\{1,u\}$ for $u \in A- \{1\}$, $x \in C -\{1\}$ \item[(P9)] $u^x \neq u^{-1}$ for $u \in A-\{1\}$, $x \in C$ \end{itemize} (P8) says that $C-\{1\}$ acts on $A-\{1\}$ without fixed points, and (P9) says that the orbit of $u \in A$ under $C$ does not contain $u^{-1}$, unless $u=1$. Now let $G$ be a f.g.\ group and suppose $d \in G$ satisfies (P0)-(P9). First, we show that the order of $d$ is infinite. If $d^r = 1$ for some $r > 0$, then for each $u \in A$ we have $u = d^{-r}ud^r = u^{m^r}$. So $A$ is a periodic group of some exponent $k \le m^r-1$. If $q$ divides $k$, then there exists an element $v \in A$ of order $q$. This makes the map $u \mapsto u^q$ not 1-1, contrary to (P6). If $q$ does not divide $k$, then the map is an automorphism of $A$ and so $A^q=A$, contrary to (P5). Let $\mathcal{R} = \mathbb{Z}[1/m]$ viewed as a ring. Then $A$ can be seen as an $\mathcal{R}$-module by defining $u(zm^{-i}) = \mathtt{Conj}(u^z,d^{-i}) \ ( = u^{zm^{-i}}$, so well-defined) for $z \in \mathbb{Z}$, $i \in {\mathbb{N}}$. Now we show that $A$ is f.g.\ and torsion-free as an $\mathcal{R}$-module. Since $C \cong G/A$ and $G$ is f.g., $C$ is f.g.\ abelian by Lemma \ref{f.g. factor} and (P3). So $C$ is a direct sum of cyclic groups by Lemma \ref{f.g. abelian}, and has only one infinite cyclic factor by (P4). Since $d$ has infinite order, we can choose a generator $c \in C$ of this factor that satisfies $c^s = d$ for some $s \ge 1$. Then, $C = \langle c \rangle \times F$ where $F = T(C)$ is the torsion subgroup of $C$. Since $G = AC$, $G$~has a finite generating set of the form $B \cup \{ c \} \cup F$, where $B \subseteq A$. We may assume $B$ is closed under taking inverses, and under conjugation by elements of the set $F \cup \{ c^i \ | \ 1 \le i < s\}$.
If $u \in A$, then $u$ can be written as a product of the terms $\mathtt{Conj}(b, xc^z)$ where $x \in F$, $z \in \mathbb{Z}$, $b \in B$, by Lemma \ref{prod conj}. Hence by the closure properties of $B$, $u$ is a product of the terms $\mathtt{Conj}(b', d^w)$ where $b' \in B$ and $w \in \mathbb{Z}$. This shows that $A$ is f.g.\ as an $\mathcal{R}$-module. Suppose $u(zm^{-i}) = \mathtt{Conj}(u^z, d^{-i}) = 1$ for some $u \neq 1$, $i \ge 0$, $z \neq 0$. Then $u^z = 1$ by (P8), so conjugation by $d$ is an automorphism of the finite subgroup $\langle u \rangle$ by (P7). Hence some power of $d$ has a fixed point, contrary to (P8). Therefore $A$ is torsion-free as an $\mathcal{R}$-module. Since $\mathcal{R}$ is a principal entire ring, $A$ is a free $\mathcal{R}$-module by \cite[Thm.\ XV.2.2]{Lang:84}, so that $A$ as a group is isomorphic to $\bigoplus_{1 \le i \le k} R_i$ for some positive integer $k$, where each $R_i$ is isomorphic to the additive group of $\mathcal{R}$. But then $|A:A^q| = q^k$, so $k = 1$ by (P5). Now we show that $F$ is trivial. For, suppose $x \in F$. Then the action of $x$ is an automorphism of $\mathbb{Z} [1/m]$ of finite order. Note $\mathtt{Aut}(\mathbb{Z}[1/m])$ is isomorphic to $\mathbb{Z} \times \mathbb{Z}_2$ where the first factor is generated by the map $u \mapsto um$ and the second by the map $u \mapsto -u$. So the action of $x$ is either the identity or inversion. But $x$ cannot act as inversion by (P9), and a non-trivial $x$ cannot act as the identity by (P8), so $F = \{1\}$. Recall that $s$ is the positive integer satisfying $c^s=d$. Then $s \le m$ because the automorphism $u \mapsto um$ is not an $i$th power in $\mathtt{Aut}(\mathbb{Z} [1/m])$ for any $i > m$. So $s = 1$ by (P1), meaning $\langle d \rangle = C$. Hence $G \cong H_m$. \end{proof} (P1) was needed in the last part because $d$ might be a proper power of $c$.
For example, if $m=4$, then $\mathbb{Z}[1/4]=\mathbb{Z}[1/2]$ and the map $u \mapsto 4u$ is clearly the square of the map $u \mapsto 2u$, which is an automorphism of $\mathbb{Z}[1/4]$. \subsection{Non-finitely presented groups} \label{wreath} As mentioned in the last subsection, the groups in this example are also described as semidirect products, but in this case we cannot define the action explicitly. Instead, we use a relationship between (definable) subgroups to restrict our possibilities. The restricted wreath product $\mathbb{Z}_p \wr \mathbb{Z}$ is the semidirect product $H_p = A \rtimes C$ where $A = \bigoplus_{z \in \mathbb{Z}} \mathbb{Z}_p^{(z)}$, $\mathbb{Z}_p^{(z)}$ is a copy of $\mathbb{Z}_p$, $C = \langle d \rangle$ with $d$ of infinite order, and $d$ acts on $A$ by shifting, i.e.\ $(\mathbb{Z}_p^{(z)})^d = \mathbb{Z}_p^{(z+1)}$. It has a presentation \begin{equation} \label{presentation 1} \langle~a,d \ | \ a^p, [v_r, v_s] (r,s \in \mathbb{Z}, r<s) \rangle \end{equation} where $a$ corresponds to a generator of $\mathbb{Z}_p^{(0)}$ and $v_r = a^{d^r}$. \begin{theorem}[Nies \cite{Nies:03}] \label{wreath thm} $\mathbb{Z}_p \wr \mathbb{Z}$ is QFA for each prime $p$. \end{theorem} \begin{proof} The general idea of the proof is the same as that of the previous example. We express the group as a semidirect product of its subgroups $A,C$, and then show that each of them is isomorphic to that in the standard case. We also need to make sure that $C$ acts on $A$ correctly in this case, since the action of $d$ cannot be expressed in first-order. Let $H_p = \mathbb{Z}_p \wr \mathbb{Z}$. Then the sentence describing $H_p$ is $\exists a \exists d \ \varphi_p(a,d)$ where $\varphi_p(a,d) \equiv$ [(P0) $\wedge \ \ldots \ \wedge$ (P6)]. We use additive notation in $A$. The first formula says that neither of $a,d$ is the identity. 
Note both 0 and 1 refer to the identity here, since $a$ is in $A$ where we use additive notation, while $d$ is in $C$ where we use multiplicative notation. \begin{itemize} \item[(P0)] $a \neq 0$, $p \cdot a = 0$, $d \neq 1$. \end{itemize} The next five define the subgroups $G', A, C$ and describe some of their properties. Note that $\langle a \rangle$ is first-order definable since it is finite by (P0) above. \begin{itemize} \item[(P1)] The commutators form a subgroup (so that $G'$ is definable) \item[(P2)] $A = G' + \langle a \rangle = G' \oplus \langle a \rangle$ and $C = C(d)$ are abelian, and $G = A \rtimes C$ \item[(P3)] $|C:C^2|=2$ \item[(P4)] $\forall u \in A \ [p \cdot u=0]$ \item[(P5)] No element in $C-\{1\}$ has order $< p$. \end{itemize} The last thing we need to say is that $C-\{1\}$ acts on $A-\{0\}$ without fixed points. \begin{itemize} \item[(P6)] $u^x \in A-\{0, u\}$ for $u \in A-\{0\}$, $x \in C-\{1\}$. \end{itemize} First, we show that ${A = H'_p \oplus \langle a \rangle}$ holds in the standard case. We know from Lemma~\ref{metabelian commutators} that ${H'_p = \{[u,d] \ | \ u \in A \}}$, and so it is a subgroup of $A$. Consider the group $\widetilde{H}_p$ with a presentation \begin{equation} \label{presentation 2} \langle~\widetilde{a}, \widetilde{d} \ | \ {\widetilde{a}}^p, [\widetilde{a},\widetilde{d}]~\rangle. \end{equation} Since each relator in (\ref{presentation 1}) holds in $\widetilde{H}_p$ when $a,d$ are replaced by $\widetilde{a}, \widetilde{d}$, there exists an epimorphism $\Psi: H_p \rightarrow \widetilde{H}_p$ mapping $a, d$ to $\widetilde{a}, \widetilde{d}$ respectively. As $\mathtt{Ker}(\Psi)$ is properly contained in $A$ and $\widetilde{H}_p$ is abelian, $H'_p$ is properly contained in $A$. Now for each $z \neq 0$, $\mathbb{Z}_p^{(z)}$ is generated by $a^{d^z}=a+[a,d^z]$, so $H'_p+\langle a \rangle=A$.
If $a^r \in H'_p$ for some $0<r<p$, then there exists $u \in A$ such that $a^r=[u,d]$ or equivalently $u+a^r=u^d$ i.e.\ $a^r$ shifts $u$, which is impossible. Hence $H'_p \cap \langle a \rangle$ is trivial and so $A = H'_p \oplus \langle a \rangle$. Now let $G$ be a f.g.\ group and suppose $a,d \in G$ satisfy (P0)-(P6). We first prove that $C$ is infinite cyclic. Since $C$ is f.g.\ abelian of torsion-free rank 1 by (P3), it suffices to show that $C$ is torsion-free by Lemma \ref{f.g. abelian}. For, suppose $t \in C-\{1\}$ has finite order $r$. Then every orbit in $A-\{0\}$ under the action of $t$ has size $r$, because if some orbit has size $s<r$, then $t^s \in C-\{1\}$ has a fixed point. Let $A$ be viewed as a vector space over $\mathbb{Z}_p$ and let $U$ be the $t$-invariant subspace of $A$ generated by $a$. Then $|U| = p^n$ for some $1 \le n \le r$, because $U = \left\{\sum_{0 \le i < r} m_i \cdot a^{t^i} \ | \ m_i \in \mathbb{Z}_p \right\}$. But $|U-\{0\}| \ge p$ by (P5) and so $n > 1$. Now consider the size of $G' \cap U$, which is also $t$-invariant because $G'$ is normal in~$G$. Since $a$ is not in $G'$, $|U:G' \cap U| > 1$. We also know that $|U:G' \cap U| \le |A:G'|=p$ \mbox{from \cite[Exercise 2.4.4]{Kar.Mer:79}}. Hence the only possible size is $p^{n-1}$ since it must divide $|U|=p^n$. As every orbit has size $r$ and the orbits partition each $t$-invariant subspace excluding the identity, $r$ divides $p^n-1$ and $p^{n-1}-1$. But $(p^n-1)-p(p^{n-1}-1)=p-1$, so $r$ also divides $p-1$. In particular, $r \le p-1$, contrary to (P5). Choose a generator $c$ of $C$ and let $\mathcal{R}$ be the ring of fractions of $\mathbb{Z}_p[c]$ by the multiplicative subset $\{c^n \ | \ n \ge 0\}$. Then $\mathcal{R}$ is a principal entire ring because the polynomial ring $\mathbb{Z}_p[c]$ is principal entire (see \cite[Section II.3 and Exercise 4]{Lang:84}). 
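Concretely, $\mathcal{R}$ is the ring $\mathbb{Z}_p[c,c^{-1}]$ of Laurent polynomials over $\mathbb{Z}_p$: inverting the powers of $c$ amounts to allowing negative exponents. The following sketch is an illustration of ours, not part of the proof; it assumes the sample value $p=3$ and represents a Laurent polynomial as a dictionary mapping exponents to coefficients.

```python
# Illustrative sketch (assumed sample value p = 3): elements of the ring of
# fractions R = S^{-1} Z_p[c] are Laurent polynomials over Z_p, stored as
# {exponent: coefficient} dictionaries with zero coefficients dropped.
P = 3

def laurent_mul(f, g):
    """Multiply two Laurent polynomials over Z_p."""
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = (h.get(i + j, 0) + a * b) % P
    return {k: v for k, v in h.items() if v}

# c * c^{-1} = 1, so c really is invertible in R:
assert laurent_mul({1: 1}, {-1: 1}) == {0: 1}

# (c + 2) * c^{-1} = 1 + 2c^{-1}:
assert laurent_mul({1: 1, 0: 2}, {-1: 1}) == {0: 1, -1: 2}
```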
Now, $A$ can be seen as an $\mathcal{R}$-module by defining $u \cdot P = \sum_{i=r}^s\alpha_i u^{c^i}$ for $u \in A$, $P = \sum_{i=r}^s\alpha_ic^i \in \mathcal{R}$. We show that $A$ is f.g.\ and torsion-free as an $\mathcal{R}$-module. Let $B=\{b_1,\ldots,b_m\}$ be a finite generating set of $G$. Then, since each $b_i \in B$ can be written in the form $b_i = u_i c^{z_i}$ where $u_i \in A$, $z_i \in \mathbb{Z}$, the set $S \cup \{c\}$ also generates $G$ where $S = \{u_1,\ldots,u_m\}$. Hence every element $u$ in $A$ can be written as a sum of the terms $\mathtt{Conj}(u_j, c^{z_j})$ where $u_j \in S$, $z_j \in \mathbb{Z}$ by Lemma \ref{prod conj}, meaning $A$ is f.g.\ as an $\mathcal{R}$-module. Suppose $u \cdot P=0$ for some $u \in A - \{0\}$, $P = \sum_{i=r}^s\alpha_ic^i \in \mathcal{R} - \{0\}$. Then $P$ must consist of more than one term, for if $\alpha u^{c^z} = 0$ for some $\alpha \neq 0$, then $u^{c^z} = 0$ by~(P4), contrary to (P6). We can assume that the leading coefficient of $P$ is $-1$, so that $u^{c^s} = \sum_{i=r}^{s-1} \alpha_i u^{c^i}$. But then, for each $w \ge s$, $u^{c^w}$ is in the finite subspace of~$A$ generated by ${\{ u^{c^i} \ | \ r \le i \le s-1 \}}$. \footnote{e.g. \[ \begin{split} u^{c^{s+1}}&={\sum_{i=r}^{s-1} \alpha_i u^{c^{i+1}}}\\ &={\alpha_{s-1} u^{c^s} + \sum_{i=r}^{s-2} \alpha_i u^{c^{i+1}}}\\ &={\sum_{i=r}^{s-1} [\alpha_{s-1} \alpha_i + \alpha_{i-1}]u^{c^i}} \end{split} \] for the same $\alpha_i$ as above except $\alpha_{r-1}=0$.} Hence the action of some power of $c$ has a fixed point, contrary to (P6). This shows that $A$ is torsion-free as an $\mathcal{R}$-module. Recall $\mathcal{R}$ is a principal entire ring. Since $A$ is f.g.\ and torsion-free as an $\mathcal{R}$-module, it is a free module by \cite[Thm.\ XV.2.2]{Lang:84} so that $A$ as a group is isomorphic to $\bigoplus_{1 \le i \le k} R_i$ for some positive integer $k$, where each $R_i$ is isomorphic to the additive group of $\mathcal{R}$.
Observe that $R_i \rtimes C \cong H_p$ for each $i$, where the action of $c$ on $R_i$ is defined by $P \mapsto P \cdot c$, and so $|R_i:[R_i,C]| = p$. If $k >1$, then $|A:G'| = \left|\bigoplus_i R_i:\left[\bigoplus_i R_i,C\right]\right| > p$, contrary to (P2). The last thing we need to show is that the action of $c$ on $A$ is correct. To avoid confusion, here we denote by $d_H$, $A_H$ one of the generators and the normal subgroup of $H_p$ respectively. Since $c$ has infinite order and each power of $c$ (except the identity) acts without fixed points, the action of $c$ on $A$ is equivalent to the action of $d_H^m$ on $A_H$ for some $m \ge 1$. But if $m > 1$, then $A \nsubseteq G' \oplus \langle a \rangle$, contrary to~(P2). \end{proof} \subsection{Semidirect products of f.g.~groups} \label{Oger} In \cite{Oger:06}, Oger gave examples of QFA groups, which are semidirect products of $\mathbb{Z}[u]$ and infinite cyclic $\langle u \rangle$ where $u$ is a complex number satisfying certain conditions. Since both $\mathbb{Z}[u], \langle u \rangle$ are f.g.~abelian, we can talk about the rank of $\mathbb{Z}[u]$ as a free abelian group, making the proof fairly different from the previous examples. Let $\mathcal{R}$ be a commutative ring. An element $\alpha$ of $\mathcal{R}$ is said to be {\it integral} over $\mathcal{R}$ if there exists a monic (i.e.\ the leading coefficient is 1) polynomial $P$ over $\mathcal{R}$ such that $P(\alpha)=0$. Let $\mathcal{S}$ be a commutative ring containing $\mathcal{R}$ as a subring. Then, the elements of $\mathcal{S}$ integral over $\mathcal{R}$ form a subring of $\mathcal{S}$. This ring is called the {\it integral closure} of $\mathcal{R}$ in $\mathcal{S}$ (see \cite[IX, \S1]{Lang:84}). 
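To illustrate integrality (a side computation of ours, not drawn from \cite{Oger:06}): $\sqrt{3}$ is integral over $\mathbb{Z}$, being a root of the monic polynomial $X^2-3$, whereas $1/2$ is not, since a monic integer polynomial evaluated at $1/2$ takes a value of the form $(1+2k)/2^n \neq 0$. A minimal sketch verifying the first claim with exact arithmetic on pairs $(a,b)$ standing for $a+b\sqrt{3}$:

```python
# Pairs (a, b) encode a + b*sqrt(3) with integer a, b, so all arithmetic is exact.
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c + 3 * b * d, a * d + b * c)

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

SQRT3 = (0, 1)
# sqrt(3) is a root of the monic polynomial X^2 - 3, hence integral over Z:
assert add(mul(SQRT3, SQRT3), (-3, 0)) == (0, 0)
```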
\begin{theorem}[Oger \cite{Oger:06}] \label{Oger thm} Let $u$ be a complex number such that \begin{itemize} \item $\mathbb{Z}[u]$ is the integral closure of $\mathbb{Z}$ in $\mathbb{Q}[u]$ \item the multiplicative group $(\mathbb{Z}[u]^*, \times)$ is infinite and generated by $u$ and $-1$. \end{itemize} Then there exists a first-order sentence $\psi$ which characterizes, among f.g.\ groups, those which are isomorphic to semidirect products $A \rtimes \langle u \rangle$, where $A$ is a non-zero ideal of $\mathbb{Z}[u]$, and the action of $u$ on $A$ is defined by $x \mapsto xu$. \end{theorem} \begin{corollary}[\cite{Oger:06}] \label{Oger col} If $u$ satisfies the conditions above and $\mathbb{Z}[u]$ is principal, then $\mathbb{Z}[u] \rtimes \langle u \rangle$ is QFA. \end{corollary} \begin{proof} Let $u$ be a complex number that satisfies all of these conditions. Let $A$ be a non-zero ideal of $\mathbb{Z}[u]$. Then, since $\mathbb{Z}[u]$ is principal, there exists $a \in \mathbb{Z}[u]$ such that $A =a \cdot\mathbb{Z}[u]$. If we define a map ${\Phi : \mathbb{Z}[u] \rightarrow A}$ by $\Phi(x) = ax$, then $\Phi$ is clearly a group isomorphism ${(\mathbb{Z}[u], +) \rightarrow (A, +)}$. Since $\Phi$ also preserves the action of~$u$ (as $\Phi(xu)=axu=\Phi(x) \cdot u$), $\Phi$ can be extended to an isomorphism $\mathbb{Z}[u] \rtimes \langle u \rangle \rightarrow A \rtimes \langle u \rangle$. \end{proof} One example of such $u$ was given in \cite{Oger:06}, namely $u=2+\sqrt{3}$. Clearly ${\mathbb{Z}[u]=\mathbb{Z}[\sqrt{3}]}$ is the integral closure of $\mathbb{Z}$ in $\mathbb{Q}[\sqrt{3}]$. One can show that each invertible element $x\in \mathbb{Z}[\sqrt{3}]$ has the form $x=\pm (2+\sqrt{3})^n$ for some integer $n$ by considering the sequence $\{x_k\}$ defined by $x_0=x$, $x_{k+1}=x_k \cdot (2+\sqrt{3})^{-1}$. 
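As a brute-force sanity check of this claim (our own illustration, not part of the argument), one can enumerate the units with small coefficients, detected via the norm condition $|a^2-3b^2|=1$ discussed below, and verify that each reduces to $\pm 1$ under repeated multiplication by $(2+\sqrt{3})^{\pm 1}$:

```python
# Illustrative check (not part of the proof): every unit x = a + b*sqrt(3)
# with |a|, |b| <= 50 has the form +/-(2 + sqrt(3))^n.  Pairs (a, b)
# represent a + b*sqrt(3); units are detected via |a^2 - 3b^2| = 1.
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c + 3 * b * d, a * d + b * c)

U, U_INV = (2, 1), (2, -1)   # u = 2 + sqrt(3) and its inverse 2 - sqrt(3)

def is_power_of_u(x):
    # Try to reach +/-1 by repeatedly multiplying by u^{-1}, then by u.
    for step in (U_INV, U):
        y = x
        for _ in range(64):
            if y in ((1, 0), (-1, 0)):
                return True
            y = mul(y, step)
    return False

units = [(a, b) for a in range(-50, 51) for b in range(-50, 51)
         if abs(a * a - 3 * b * b) == 1]
assert units and all(is_power_of_u(x) for x in units)
```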
Since the norm function $N:\mathbb{Z}[\sqrt{3}] \rightarrow {\mathbb{N}}$ defined by $N(a+b\sqrt{3})=|a^2-3b^2|$ satisfies the conditions \begin{itemize} \item if $y_1, y_2 \neq 0$ then $N(y_1) \le N(y_1 \cdot y_2)$ \item if $y_2 \neq 0$ then there exist $q,r \in \mathbb{Z}[\sqrt{3}]$ such that $y_1=q \cdot y_2+r$ and ${N(r) < N(y_2)}$ \end{itemize} for any $y_1, y_2 \in \mathbb{Z}[\sqrt{3}]$, the ring $\mathbb{Z}[\sqrt{3}]$ is a Euclidean domain and so is principal (see \cite[5.5]{Ribenboim:01}). \begin{proof}[Proof of Theorem \ref{Oger thm}] The first-order sentence describing the semidirect products is $\exists y \exists z \ \varphi(y,z)$ where $\varphi(y,z) \equiv$ [(P0) $\wedge \ \ldots \ \wedge$ (P6)]. Let $P$ be the minimal polynomial of $u$ over $\mathbb{Z}$, and let $n = \mathtt{deg}(P)$. We use additive notation in $A$. First, we state that $y,z$ are non-identity elements. Note both $0,1$ refer to the identity element. \begin{itemize} \item[(P0)] $y \neq 0$, $z \neq 1$ \end{itemize} Next, we define $A,C$ and describe some of their properties. \begin{itemize} \item[(P1)] $A = C(y)$ and $C = C(z)$ are abelian, and $G = A \rtimes C$ \item[(P2)] $|A:2A|=2^n$ \item[(P3)] $|C:C^2|=2$ \item[(P4)] $x^k \neq 1$ for $x \in C-\{1\}$, $1 \le k \le n+1$ \end{itemize} The remaining two properties are the following. \begin{itemize} \item[(P5)] $\mathtt{Conj}(w,x) \neq w$ for $w \in A-\{0\}$, $x \in C-\{1\}$ \item[(P6)] $P(f)=0$ for the automorphism $f$ of $A$ defined by $w \mapsto w^z$ \end{itemize} (P6) is equivalent to saying $P(z)=0$, but we need to express it this way because the group operation (which is multiplication when considering $C$) is the only operation we are allowed to use. Let $G$ be a f.g.\ model of $\psi$. First, we show that $z$ has infinite order. For, suppose $z^t = 1$ for some positive integer $t>1$. Then $f$ is a root of the polynomial $X^t-1$ and so $P$ divides $X^t-1$ since $P$ is also the minimal polynomial of $f$.
But then $u^t-1=0$, contrary to the fact that $u,-1$ generate the infinite multiplicative group $\mathbb{Z}[u]^*$. Hence $z$ has infinite order; in particular, $|C:\langle z \rangle|$ is finite as $C$ is f.g.\ abelian of torsion-free rank 1 by (P1), (P3). Let $w_1, \ldots, w_r \in A$, $x_1, \ldots, x_r \in C$ such that $\{ w_1 x_1, \ldots, w_r x_r \}$ generates $G$, and let $z_1, \ldots, z_s \in C$ such that $C = z_1 \langle z \rangle \cup \ldots \cup z_s \langle z \rangle$ i.e.\ $\{z_1, \ldots, z_s\}$ contains at least one representative from each coset of $\langle z \rangle$ in $C$. Then by Lemma \ref{prod conj}, \[ \begin{split} A&=\langle \{ \mathtt{Conj}(w_i,x) \ | \ 1 \le i \le r, \ x \in C \} \rangle\\ &=\langle \{ \mathtt{Conj}(\mathtt{Conj}(w_i,z_j),z^k) \ | \ 1 \le i \le r, \ 1 \le j \le s, \ k \in \mathbb{Z} \} \rangle \end{split} \] because each $x \in C$ can be written in the form $x=z_j \cdot z^k$ for some \mbox{$1 \le j \le s$, $k \in \mathbb{Z}$.} Since $\langle \{ \mathtt{Conj}(w,z^k) \ | \ k \in \mathbb{Z} \} \rangle$ is f.g.\ for each $w \in A$ by (P6), this means that $A$ is f.g. Now we show that $A$ is torsion-free. For, suppose $w \in A-\{0\}$ is a torsion element. Then ${\{f^k(w) \ | \ k \in \mathbb{Z} \}}$ is contained in the torsion subgroup of $A$, which is finite since $A$ is f.g.\ abelian (see \cite[Exercise 8.1.5]{Kar.Mer:79}). Hence there exist $k_1,k_2 \in \mathbb{Z}$ with $k_1 < k_2$ such that $f^{k_1}(w) = f^{k_2}(w)$. But this means that $z^{k_2-k_1} \in C-\{1\}$ fixes $f^{k_1}(w)=\mathtt{Conj}(w,z^{k_1}) \in A-\{0\}$, contrary to (P5). Since $A$ is f.g.\ torsion-free, it is free abelian of rank $n$ by (P2). Also, the subgroup $A_{(y)} = \langle \{ f^k(y) \ | \ 0 \le k \le n-1 \} \rangle$ of $A$ has rank $n$ by (P6) and the minimality of~$P$.
Hence the action of $z$ on $A_{(y)}$, which has finite index in $A$, is equivalent to the action of $u$ on a non-zero ideal of $\mathbb{Z}[u]$, meaning that the action of $z$ on $A$ is also equivalent to the action of $u$ on a non-zero ideal of $\mathbb{Z}[u]$. Now we show that $C$ is torsion-free. Otherwise, there exists ${x \in C-\{1\}}$ of prime order $p \ge n+2$ by (P4). But then $1 = y^{x^p-1} = (y^{x^{p-1} + \ldots + x + 1})^{x-1}$, or equivalently, $Y^x = Y$ where ${Y = y^{x^{p-1} + \ldots + x + 1}}$, and so $Y=1$ by (P5). Since $A$ is torsion-free (in particular, $y$ has infinite order) and ${X^{p-1}+ \ldots + X+1}$ is irreducible \mbox{(see \cite[Exercise IV.5.6]{Lang:05})}, ${y^{x^{p-1} + \ldots + x + 1}=1}$ means that the set $\{y^{x^k} \ | \ 0 \le k \le p-2\}$ generates a free abelian group of rank $p-1 \ge n+1$, which is a subgroup of~$A$. But $A$ is free abelian of rank $n$, contradiction. We know $C$ is f.g.\ torsion-free abelian of rank 1, or equivalently, infinite cyclic. Choose a generator $c$ of $C$. Then there exists $k \in \mathbb{Z}$ such that $c^k = z$. Define an automorphism $g$ of $A$ by $w \mapsto w^c$ and let $Q$ be the minimal polynomial of $g$ over $\mathbb{Z}$. We show that $\mathtt{deg}(Q)=n$. Because $g$ is an automorphism of a free abelian group of rank~$n$, $\mathtt{deg}(Q)\le n$. Also, since ${\langle\{g^{kr}(y) \ | \ 0 \le r \le n-1\}\rangle} = {\langle\{ f^r(y) \ | \ 0 \le r \le n-1 \}\rangle}$ has rank $n$, $\mathtt{deg}(Q) \ge n$. Choose a root $v \in \mathbb{C}$ of $Q$ and an ideal $I$ of the integral closure of $\mathbb{Z}$ in $\mathbb{Q}[v]$ so that the action of $c$ on $A$ is equivalent to the action of $v$ on $I$. Because $g^k = f$, we can assume $v^k = u$ and so $\mathbb{Q}[u] \subseteq \mathbb{Q}[v]$. But since both fields have dimension~$n$ over $\mathbb{Q}$, $\mathbb{Q}[u] = \mathbb{Q}[v]$. Now $v$ belongs to $\mathbb{Z}[v]=\mathbb{Z}[u]$ and is invertible there (since $v^k = u$ is invertible), so $v \in \mathbb{Z}[u]^* = \langle u,-1 \rangle$. Hence $k = \pm 1$ and $C= \langle c \rangle= \langle z \rangle$.
\end{proof} \subsection{Nilpotent groups} \label{UT} Our last example is a nilpotent group. We give the definition of (class 2) nilpotency later, and for now we only mention that it has a non-trivial center. This fact prevents us from describing it as a semidirect product, because we no longer have the main weapon ``no action has a fixed point''. Let $U$ be the discrete Heisenberg group $UT_3(\mathbb{Z})$, the group of upper unitriangular matrices (i.e.~the entries on the main diagonal are all $1$ and the entries below the diagonal are all $0$) over $\mathbb{Z}$. Then $U$ is a nilpotent group of \mbox{class 2}. That is, $U'$ is contained in the center $Z$. In fact, by \cite[Exercise 16.1.3]{Kar.Mer:79}, $U$ is isomorphic to the free class 2 nilpotent group $F$ with two generators. Let $t_{mn}(k)$ denote the 3-by-3 matrix with $1$ in its diagonal entries, $k$ in the $m$-th row $n$-th column entry and $0$ everywhere else. Then the generators of $F$ correspond to $a = t_{23}(1)$ and $b = t_{12}(1)$. The following is a well-known fact about nilpotent groups. \begin{lemma} \label{f.g. nilpotent} If $G$ is a nilpotent group such that $G/G'$ is f.g., then every subgroup of $G$ is f.g. In particular, every subgroup of a f.g.\ nilpotent group is f.g. \end{lemma} \begin{proof} See Robinson \cite[3.1.6, 5.2.17]{Robinson:82} for the proof of the first part. The second part follows because every factor group of a f.g.~group is f.g.~(Lemma \ref{f.g. factor}). \end{proof} QFAness of $U$ can be shown using Oger and Sabbagh's criterion (\cite[Thm.10]{Oger.Sabbagh:06}), but here we give a sentence describing $U$ to make it easier to see how $U$ can be characterized in first-order. The following facts will be used in the proof of \mbox{Theorem \ref{UT thm}}. \begin{lemma} \label{nilpotent commutators} Let $G$ be a nilpotent group of class 2 and let $x,y \in G$. Then, $[x^{m_1}y^{n_1},x^{m_2}y^{n_2}]=[x,y]^{m_1n_2-m_2n_1}$ for any $m_1,m_2,n_1,n_2 \in \mathbb{Z}$.
\end{lemma} \begin{proof} First, we show that $[x^m,y^n]=[x,y]^{mn}$ for any $m,n \in \mathbb{Z}$. Note \[ \begin{split} y^{-1}x&=xx^{-1} \cdot y^{-1}x \cdot yy^{-1}\\ &=x \cdot [x,y] \cdot y^{-1} \end{split} \] (i.e.\ we get $[x,y]$ every time we swap $y^{-1}$ and~$x$). Since $G$ is class 2 nilpotent, $[x,y] \in G' \subseteq Z(G)$ and so \[ \begin{split} [x^m,y^n]&=x^{-m}y^{-n}x^my^n\\ &=x^{-m}y^{-(n-1)}xy^{-1}x^{m-1}y^n[x,y]\\ &\hspace{50pt}\vdots\\ &=x^{-m}x^my^{-n}y^n[x,y]^{mn}\\ &=[x,y]^{mn} \end{split} \] holds for positive $m,n$. Also, since \[ \begin{split} [x^{-1},y]&=xy^{-1}x^{-1}y \cdot xx^{-1}\\ &=[y,x]^{x^{-1}}\\ &=[y,x]=[x,y]^{-1} \end{split} \] and similarly $[x,y^{-1}]=[x,y]^{-1}$ holds in $G$, $[x^m,y^n]=[x,y]^{mn}$ holds for any $m,n \in \mathbb{Z}$. Now, since $G' \subseteq Z(G)$, the commutator rule ${[ab,c] = [a,c]^b[b,c]}$ (\cite[3.2.(3)]{Kar.Mer:79}) can be reduced to ${[ab,c] = [a,c][b,c]}$, and by taking inverses we also get $[c,ab]=[c,b][c,a]$. So we have \[ \begin{split} [x^{m_1}y^{n_1},x^{m_2}y^{n_2}]&=[x^{m_1},x^{m_2}y^{n_2}][y^{n_1},x^{m_2}y^{n_2}]\\ &=[x^{m_1},y^{n_2}][x^{m_1},x^{m_2}][y^{n_1},y^{n_2}][y^{n_1},x^{m_2}]\\ &=[x,y]^{m_1n_2} \cdot 1 \cdot 1 \cdot [y,x]^{m_2n_1}\\ &=[x,y]^{m_1n_2-m_2n_1} \end{split} \] as required. \end{proof} \begin{lemma} \label{UT center} The center $Z$ of $UT_3(\mathbb{Z})$ is the infinite cyclic group generated by $c = [a,b] = t_{13}(1)$, which coincides with the set of commutators. \end{lemma} \begin{proof} Since \[ \left[ \left( \begin{array}{@{}ccc@{}} 1 & \alpha_1 & \beta_1 \\ 0 & 1 & \gamma_1 \\ 0 & 0 & 1 \end{array} \right), \left( \begin{array}{@{}ccc@{}} 1 & \alpha_2 & \beta_2 \\ 0 & 1 & \gamma_2 \\ 0 & 0 & 1 \end{array} \right) \right] = t_{13}(\alpha_1\gamma_2-\alpha_2\gamma_1), \] the center is precisely $Z = \{t_{13}(z) \ | \ z \in \mathbb{Z}\} = \langle c \rangle$.
Now by \cite[Exercise 16.1.3]{Kar.Mer:79}, each element $u \in U$ can be written as $u = a^mb^nc^l$ for some $m,n,l \in \mathbb{Z}$ and so \[ \begin{split} [u,v]&=[a^{m_1}b^{n_1}c^{l_1},a^{m_2}b^{n_2}c^{l_2}]\\ &=[a^{m_1}b^{n_1},a^{m_2}b^{n_2}]\\ &=[a,b]^{m_1n_2-m_2n_1}\\ &=c^{m_1n_2-m_2n_1} \in \langle c \rangle \end{split} \] for any $u=a^{m_1}b^{n_1}c^{l_1},v=a^{m_2}b^{n_2}c^{l_2} \in U$ by Lemma \ref{nilpotent commutators}. Hence $U' = \langle c \rangle$. \end{proof} As a part of the sentence describing $U$, we use the modified version of a formula first introduced by Mal'cev \cite{Malcev:71}. The formula $\mu(x,y;a,b)$ with parameters $a,b$ defines the ``square'' operation $M_{a,b}$ on the center $Z$ in the sense that $(Z, \circ, M_{a,b}) \cong (\mathbb{Z}, +, Q)$ where $Q=\{ (t,t^2) | \ t \in \mathbb{Z}\}$. The formula is \[ \begin{split} \mu(x,y;a,b)\equiv\exists u \exists v\{&[u,a]=[v,b]=1 \ \wedge\\ &x=[a,v]=[u,b] \ \wedge\\ &y=[u,v]\}. \end{split} \] This defines the ``square'' because $[a^m,b^n]=[a,b]^{mn}$ holds in $U$ by Lemma \ref{nilpotent commutators}. \begin{theorem}[Nies \cite{Nies:03}] \label{UT thm} $UT_3(\mathbb{Z})$ is QFA. \end{theorem} \begin{proof} The sentence $\psi_U$ consists of four formulas; \begin{itemize} \item[(P1)] the center $Z$ coincides with the set of commutators \item[(P2)] $\exists r \exists s \ \gamma(r,s)$ where $\gamma(r,s)$ is as described below \item[(P3)] $|Z:Z^2|=2$ \item[(P4)] $|B:B^2|=4$ where $B=G/Z$. \end{itemize} Roughly speaking, $\gamma(r,s)$ says $Z$ is linearly orderable, using Lagrange's theorem: an integer is non-negative iff it is the sum of four squares of integers. Formally, $\gamma(r,s)$ is a formula expressing \begin{itemize} \item $\mu(x,y;r,s)$ defines a unary operation $M_{r,s}$ on $Z$ \item let $P_{r,s}=\{u\ |\ \exists v_1 \ldots \exists v_4 \ u=M_{r,s}(v_1) \circ \ldots \circ M_{r,s}(v_4)\}$. 
Then $x \le y \leftrightarrow y-x \in P_{r,s}$ defines a linear order which turns $Z$ into an ordered abelian group with $[r,s]$ being the least positive element. \end{itemize} Let $G$ be a f.g.\ model of $\psi_U$. Since $Z$ is linearly orderable by (P2), it is torsion-free. Now we show that $B$ is also torsion-free. If $u \in G-Z$, there exists $v \in G$ such that $[u,v] \neq 1$. Then for each positive integer $n$, $[u^n, v]=[u,v]^n\neq 1$ by Lemma \ref{nilpotent commutators}. Hence $u^n \notin Z$. Since $G$ is f.g.\ and (class 2) nilpotent by (P1), $Z$ is f.g.\ by Lemma \ref{f.g. nilpotent}. Also, since $Z=G'$ by (P1), $B=G/Z=G/G'$ is abelian. So we know that $Z, B$ are both f.g.\ torsion-free abelian and have rank 1,2 respectively by (P3) and (P4) i.e.~$Z \cong \mathbb{Z}$, $B \cong \mathbb{Z} \oplus \mathbb{Z}$. Now we show that $G$ is generated by two elements. Let $c,d \in G$ such that the cosets $Zc,Zd$ generate $B$, and let $g,h \in G$ such that the commutator $[g,h]$ generates~$Z$. Then, there exist $x,y,z,w \in \mathbb{Z}$ and $u,v \in Z$ such that $g=uc^xd^y$ and $h=vc^zd^w$. Hence $[g,h]=[c^xd^y,c^zd^w]=[c,d]^{xw-yz}$ by Lemma \ref{nilpotent commutators}. But also $[g,h]^r=[c,d]$ for some $r \in \mathbb{Z}$ because $Z$ is generated by $[g,h]$. Since $Z$ is torsion-free, it follows that $xw-yz=r=\pm1$. Thus $[c,d]$ also generates $Z$ and so the two elements $c,d$ generate $G$. Because $U$ is the free class 2 nilpotent group of rank 2, there exists an epimorphism $h:U \rightarrow G$ mapping $a,b$ to $c,d$ respectively. If $h$ is not $1-1$, then $\mathtt{Ker}(h)$ is non-trivial and so it must intersect $Z(U)$ non-trivially as $U$ is nilpotent, \mbox{by \cite[Thm.\ 16.2.3]{Lang:84}}. But this is impossible, because $h([a,b])=[c,d]$ and so $h$ induces an isomorphism $Z(U) \rightarrow Z(G)$. Hence $h$ is $1-1$, or equivalently, $h$ is itself an isomorphism. 
\end{proof} \section{Polylogarithmic compressibility} \label{PLC} As an analogue of quasi-finite axiomatizability, we define \emph{polylogarithmic compressibility} below as a property of a class of finite groups, namely that the groups in the class can be described by ``short'' first-order sentences in the sense made precise below. It makes sense to define it as a property of a class of groups rather than a single group, because the length of the sentence is always constant (and so cannot be compared to the size of the group) if we have only one group. We define the length $|\psi|$ of a first-order formula $\psi$ to be the number of symbols used in $\psi$. We assume we have infinitely many variables and so each variable is counted as one symbol. This assumption usually shortens each (sufficiently long) formula by a factor of $O(\mathtt{log} \ n)$, where $n$ is the number of variables used in the sentence: if we had only finitely many variables, then (when $n$ is sufficiently large) the variables in the sentence would require extra indices, which have length $O(\mathtt{log} \ n)$. The indices can sometimes be avoided by reusing the same variables; e.g.\ the sentence \[ \forall x_1 \forall x_2 [x_1,x_2]=1 \rightarrow \exists x_3 \exists x_4 [x_3,x_4]=1 \] is equivalent to \[ \forall x \forall y [x,y]=1 \rightarrow \exists x \exists y [x,y]=1. \] \begin{definition} \label{PLC def} A class $\mathcal{C}$ of finite groups is \emph{polylogarithmically compressible (PLC)} if for any $H \in \mathcal{C}$, there exists a first-order sentence $\psi_H$ such that $H \models \psi_H$, $|\psi_H| = O(\mathtt{log}^k|H|)$ for some fixed $k$, and if $G \models \psi_H$ then $G \cong H$. In particular, we say $\mathcal{C}$ is \emph{logarithmically-compressible (LC)} if $k = 1$. \end{definition} Since we allow a polynomial change in the length, PLCness is independent of the particular way we define the first-order language.
For example, it does not matter whether we use parentheses or Polish notation (which allows us to write parenthesis-free formulas without ambiguity). Here we give an example of an LC class to illustrate the definition, namely the cyclic groups of order $2^n$. The sentence describing $\mathbb{Z}_{2^n}$, written additively, consists of three formulas; $\psi \equiv \forall x[\psi_1 \wedge \psi_2 \wedge \psi_3]$ where \[ \begin{split} \psi_1(x) &\equiv \forall y [2y \neq x] \ \vee \ \exists z \exists w [(z \neq w) \wedge (2z=x) \wedge (2w=x) \wedge \forall t [(2t=x) \rightarrow (t=z \vee t=w)]]\\ \psi_2(x) &\equiv \neg \exists x_2 \ldots \exists x_{n+1} \left[2x=x_2 \wedge \bigwedge_{2 \leq i < n+1} 2x_i=x_{i+1}\wedge x_{n+1} \neq 0\right]\\ \psi_3(x) &\equiv \exists x_1 \ldots \exists x_n \left[\bigwedge_{1 \leq i < n} 2x_i=x_{i+1}\wedge x_n \neq 0 \right]. \end{split} \] Note that each part has length $O(n)$. The first formula $\psi_1$ says that for each element $x$ of the group, either no element $y$ satisfies $2y=x$, or there are exactly 2 such $y$. This is true in $\mathbb{Z}_{2^n}$ because, if $x$ is odd then no $y$ satisfies $2y=x$, and if $2m=x$ for some $m \in \mathbb{Z}_{2^n}$ then precisely $y_1=m$ and $y_2=m+2^{n-1}$ satisfy the equation in $\mathbb{Z}_{2^n}$. The next formula says $2^n x=0$ for any element $x$ (i.e.\ every element has order $2^i$ where $i \le n$), and the last formula says there exists an element $x_1$ such that $2^{n-1} x_1 \neq 0$. Clearly both of them hold in $\mathbb{Z}_{2^n}$. Now let $G$ be a group written additively such that $G \models \psi$. Then since $0 \in G$ and $0+0=0$, there exists exactly one element of order $2^1$ from $\psi_1$. Similarly, it can be shown that $G$ has at most $2^{i-1}$ elements of order $2^i$ for each $i$. Since every element of $G$ has order $2^i$ for some $i \le n$ from $\psi_2$, the maximum number of elements $G$ can have is $1+\sum_{1 \le i \le n} 2^{i-1}=2^n$.
But there exists an element of order $2^n$ from~$\psi_2,\psi_3$ and so the cyclic subgroup generated by this element must coincide with the whole group $G$. In other words, $G \cong \mathbb{Z}_{2^n}$. In this section, we give more examples of PLC and LC classes. The proofs follow the scheme described below except for the last example: \begin{itemize} \item[(i)] We give a presentation for the group $H$ so that if $G \models \psi_H$, then $G$ contains a subgroup $\widetilde{G}$ isomorphic to some factor of $H$. \item[(ii)] We express that the generators of $\widetilde{G}$ generate the whole group $G$. \item[(iii)] We express that $\widetilde{G} \cong H$. \end{itemize} The following lemmas are used repeatedly. \begin{lemma} \label{presentation} Given a finite presentation for a group $H$ with generators $a_1,\ldots,a_m$, there exists a first-order formula $\zeta(x_1,\ldots,x_m)$ such that $H \models \zeta(a_1,\ldots,a_m)$, and if $G \models \zeta(b_1,\ldots,b_m)$ for some group $G$ and its elements $b_1,\ldots,b_m$, then the subgroup $\langle b_1,\ldots,b_m \rangle$ of $G$ is isomorphic to $H/N$ for some normal subgroup $N$ of~$H$. \end{lemma} Of course, this lemma is used in part (i) of the scheme. The length of the formula $\zeta$ depends on the length of the relators. \begin{proof} Let $P=\langle~a_1,\ldots,a_m~|~t_1,\ldots,t_n~\rangle$ be a presentation for $H$. Note that each relator $t_i$ is first-order definable with parameters $a_1,\ldots,a_m$ since it is a product of the generators and their inverses \mbox{(i.e.~$t_i=t_i(a_1,\ldots,a_m)$)}. Then the formula is \[ \zeta(x_1,\ldots,x_m) \equiv \bigwedge_{1 \le i \le n} t_i(x_1,\ldots,x_m)=1.
\] If $G \models \zeta(b_1,\ldots,b_m)$ for some group $G$ and its elements $b_1,\ldots,b_m$, then the subgroup $\widetilde{G}=\langle b_1,\ldots,b_m \rangle$ of $G$ has a presentation \[\langle~x_1,\ldots,x_m~|~t_1,\ldots,t_n, u_1,\ldots~\rangle\] where each $x_j$ corresponds to $b_j$, and $t_i=t_i(x_1,\ldots,x_m)$, $u_k=u_k(x_1,\ldots,x_m)$ for each $i,k$. Hence $\widetilde{G} \cong H/N$ where $N$ is the normal subgroup of $H$ generated \mbox{by $\{u_k(a_1,\ldots,a_m)~|~1 \le k\}$.} In particular, if $N$ is trivial then $\widetilde{G} \cong H$. \end{proof} \begin{lemma} \label{repeated squaring} For each positive integer $n$, there exists a first-order formula $\theta_n(x,y)$ of length $O(\mathtt{log}\ n)$ such that $G \models \theta_n(x,y)$ iff $x^n=y$ in the group $G$. \end{lemma} The method used here is called \emph{repeated squaring}. The formulas $\psi_2, \psi_3$ in the example above also use this technique. \begin{proof} Let $n=\alpha_1\ldots\alpha_k$ written in binary, where $k=\lfloor \mathtt{log_2}\ n \rfloor + 1$ is the number of binary digits of $n$. Then the formula $\theta_n$ is \[ \theta_n(x,y) \equiv \exists y_1 \ldots \exists y_k \left[y_1=x \ \wedge \ y_k=y \ \wedge \ \bigwedge_{1 \le i < k} y_{i+1}=y_i \cdot y_i \cdot x^{\alpha_{i+1}}\right] \] where $x^{\alpha_i}=x$ if ${\alpha_i}=1$ and $x^{\alpha_i}=1_G$ if ${\alpha_i}=0$. Clearly $\theta_n$ has length $O(\mathtt{log}\ n)$. Now we show that the formula is correct, by induction on $k$. If $k=1$, then the only possibility is $n=1$ and correctness is obvious because the formula is reduced to ${\theta_1(x,y) \equiv \exists y_1 [x=y_1=y]}$. Suppose $\theta_n(x,y)$ is correct for all $n < 2^{k-1}$ for some $k > 1$. Let $N \in {\mathbb{N}}$ such that $2^{k-1} \le N < 2^{k}$ and let $N=\beta_1 \ldots \beta_k$ written in binary.
Then, \[ \begin{split} \theta_N(x,y)&\equiv \exists y_1 \ldots \exists y_{k} \left[\bigwedge_{1 \le i < k} y_{i+1}=y_i \cdot y_i \cdot x^{\beta_{i+1}} \ \wedge \ y_1=x \ \wedge \ y_{k}=y\right]\\ &\equiv \exists y_{k-1} \exists y_k \left[\theta_{\widetilde{N}}(x,y_{k-1}) \ \wedge \ y_k=y_{k-1} \cdot y_{k-1} \cdot x^{\beta_k} \ \wedge \ y_k=y \right] \end{split} \] where $\widetilde{N}=\beta_1 \ldots \beta_{k-1}$. If $\theta_N(x,y)$ holds in $G$ with witnesses $y_1,\ldots,y_k$, then we have ${y_{k-1}=x^{\widetilde{N}}}$ by the inductive hypothesis because $\widetilde{N} < 2^{k-1}$. Since $N = 2\widetilde{N}+{\beta_k}$, it follows that $y_k=y_{k-1} \cdot y_{k-1} \cdot x^{\beta_k}=x^{2\widetilde{N}+{\beta_k}}=x^N$, as required. \end{proof} \begin{lemma} \label{finite product} Given a generating set $S$ of a finite group $G$, every element of $G$ can be written as a product of at most $|G|$ generators in $S$. \end{lemma} This lemma, combined with the next one, is used in part (ii) of the scheme. The basic idea of the proof is the pigeonhole principle. \begin{proof} Let $S=\{s_1,\ldots,s_n\}$ be a generating set of $G$. Then, each element $g \in G$ can be written as a product \[ g=\prod_{1 \le i \le m}t_i \] for some $m$ where $t_i \in S$ for each $i$. If $m > |G|$, then there exist $j,k \in {\mathbb{N}}$ with $j < k \le m$ and \[ \prod_{1 \le i \le j}t_i = \prod_{1 \le i \le k}t_i \] and so $g$ can also be written as \[ g=\left(\prod_{1 \le i \le j}t_i\right) \cdot \left(\prod_{k < i \le m}t_i\right) \] which is a product of $m-(k-j)$ generators. We can repeat the same procedure until $g$ is written as a product of no more than $|G|$ generators. \end{proof} \begin{lemma} \label{generation} Let $G$ be a finite group. Then for each positive integer $n$, there exists a first-order formula $\pi_n(g;x_1,\ldots,x_n)$ with parameters $x_1,\ldots,x_n$ of length $O(n+\mathtt{log}|G|)$ such that ${G \models \pi_n(g;x_1,\ldots,x_n)}$ iff ${g \in \langle x_1,\ldots,x_n \rangle}$.
In other words, $\pi_n$ defines the subgroup ${\langle x_1,\ldots,x_n \rangle}$ of $G$. \end{lemma} As mentioned above, this lemma is usually used in part (ii) of the scheme. A modified version of the formula is also used in the proof of Theorem \ref{abelian thm} to define a subset of the group consisting of some powers of a certain element. \begin{proof} We use a device that originated in computational complexity, where it is used to show that the set of true quantified Boolean formulas is PSPACE-complete \cite[Thm 8.9]{Sipser:97}. We define the formulas $\delta_i(g;x_1,\ldots,x_n)$ with parameters $x_1,\ldots,x_n$ for each $i \in {\mathbb{N}}$ inductively. For $i=0$, \[ \delta_0(g;x_1,\ldots,x_n) \equiv \bigvee_{1 \le j \le n}g=x_j \ \vee \ g=1 \] and for $i>0$, \[ \begin{split} \delta_i(g;x_1,\ldots,x_n) \equiv \exists u_i \exists v_i [&g = u_i v_i \ \wedge\\ &\forall w_i [(w_i = u_i \vee w_i = v_i) \rightarrow \delta_{i-1}(w_i;x_1,\ldots,x_n)]]. \end{split} \] Note $\delta_i$ has length $O(n+i)$, and $G \models \delta_i(g;x_1,\ldots,x_n)$ iff $g$ can be written as a product of at most $2^i$ $x$'s. Now let $\widetilde{G}=\langle x_1,\ldots,x_n \rangle$. Then by Lemma \ref{finite product}, each $g \in \widetilde{G}$ can be written as a product of at most $|\widetilde{G}|$ generators of $\widetilde{G}$ (i.e.\ $x_1,\ldots,x_n$). Hence by defining $\pi_n(g;x_1,\ldots,x_n) \equiv \delta_k(g;x_1,\ldots,x_n)$ where $k = \lceil \mathtt{log_2}|G| \rceil$ (and so $2^k \ge |G| \ge |\widetilde{G}|$), we get the required formula. \end{proof} \subsection{Simple groups} \label{simple} It is known that the finite simple groups can be classified into 18 infinite families, together with 26 exceptions, the so-called sporadic groups.
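As an aside, the repeated-squaring device of Lemma \ref{repeated squaring} is the first-order counterpart of the binary powering algorithm: the witnesses $y_1,\ldots,y_k$ of $\theta_n$ are exactly the intermediate values of that algorithm. A minimal Python sketch (ours, purely illustrative; the function name and calling convention are assumptions, not part of any cited source):

```python
def theta_witnesses(x, n, op, identity):
    """Compute x^n in a group whose operation is `op`, recording the
    intermediate values y_1, ..., y_k used as witnesses by theta_n.
    The binary digits alpha_1 ... alpha_k of n (k = floor(log2 n) + 1)
    drive the recursion y_{i+1} = y_i * y_i * x^{alpha_{i+1}},
    starting from y_1 = x (alpha_1 = 1 for every n >= 1)."""
    digits = bin(n)[2:]          # alpha_1 ... alpha_k
    y = x                        # y_1 = x
    witnesses = [y]
    for d in digits[1:]:
        # square, then multiply by x exactly when the next digit is 1
        y = op(op(y, y), x if d == '1' else identity)
        witnesses.append(y)
    return witnesses             # witnesses[-1] equals x^n
```

For instance, in $(\mathbb{Z}_{32},+)$ with $x=3$ and $n=13$ the recorded witnesses are $3, 9, 18, 7$, and indeed $13 \cdot 3 \equiv 7 \pmod{32}$; the list has length $\lfloor \mathtt{log_2}\,13 \rfloor + 1 = 4$.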
In \cite{Babai:97}, L.~Babai {\it et al.}\ showed that all finite simple groups have `short' presentations, with the possible exception of three families: the projective special unitary groups $PSU_3(q) = {^2A_2(q)}$ where $q$ is a prime-power, the Suzuki groups $Sz(q)={^2B_2(q)}$ where $q=2^{2e+1}$ for some positive integer $e > 1$, and the Ree groups $R(q)={^2G_2(q)}$ where $q=3^{2e+1}$ for some positive integer $e > 1$. They defined the length $l(P)$ of a presentation $P$ to be the number of characters required to write all the relations (or equivalently relators) in $P$, where the exponents are written in binary, and proved that each of these groups has a presentation of length $O(\mathtt{log}^2|G|)$ where $|G|$ is the size of the group. Note that $l(P)$ is an upper bound on the number of generators in $P$ that the relations can `talk' about. Since each generator must appear in the relations at least once (otherwise it will have infinite order), this means the number of generators in $P$ is also $O(\log^2|G|)$. Two of the three families they missed were subsequently shown to have short presentations. One is $PSU_3(q)$, shown by Hulpke and Seress \cite{Hulpke:01}. Given a finite field $F_{q^2}$ for some prime-power $q$, an order-2 automorphism $\alpha$ of $F_{q^2}$ can be defined by $x \mapsto x^q$, and it can be extended in the natural way to the (multiplicative) groups of matrices over $F_{q^2}$. The {\it special unitary group} $SU_3(q)$ is \[ SU_3(q) = \{A \in SL_3(q^2) \ | \ A \omega \overline{A}^T = \omega\} \] where $\overline{A}=A^{\alpha}$ and $\omega = \left( \begin{array}{@{}ccc@{}} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array} \right)$, and the {\it projective special unitary group} $PSU_3(q)$ is the factor of $SU_3(q)$ by its center. The other family shown to have short presentations is the Suzuki groups.
In fact, the presentation was given in the original paper by Suzuki \cite{Suzuki:62}, an observation attributed to J.~Thompson (personal communication to W.~Kantor) by Hulpke and Seress \cite{Hulpke:01}. We use these results to prove the following theorem. \begin{theorem} The class of finite simple groups, excluding the family of the Ree groups $R(q)={^2G_2(q)}$, is PLC. \end{theorem} \begin{proof} Let $H$ be a finite simple group not belonging to the family $R(q)$ and let $P=\langle~a_1,\ldots,a_m~|~t_1,\ldots,t_n~\rangle$ be a presentation for $H$ with $l(P)=O(\mathtt{log}^2|H|)$. Then the sentence describing $H$ is $\psi \equiv \exists a_1 \ldots \exists a_m [\psi_1 \wedge \psi_2 \wedge \psi_3]$ where the formulas $\psi_1,\psi_2,\psi_3$ correspond to (i),(ii),(iii) in the scheme respectively. First, we show that $\psi_1$, the `presentation' for the group (which corresponds to the formula $\zeta$ in Lemma \ref{presentation}), has length $O(\mathtt{log}^2|H|)$. It suffices to show that each relator can be expressed by a formula of appropriate length. If $t=a_{\varphi(1)}^{z_1} \ldots a_{\varphi(k)}^{z_k}$ is a relator, where each $a_{\varphi(i)}$ is a generator in $P$, then the number of characters required to write $t$ is $l(t)=k+\sum_{1 \le i \le k} \lfloor \mathtt{log}_2 z_i \rfloor$. Since the formula \[ \tau(a_1,\ldots,a_m) \equiv \exists b_1 \ldots \exists b_k \left[\bigwedge_{1 \le i \le k} \theta_{z_i}(a_{\varphi(i)},b_i) \wedge \prod_{1 \le i \le k} b_i=1\right] \] expresses $t=1$, and has length $\simeq 5k + 10 \sum_{1 \le i \le k} \lfloor \mathtt{log_2}~z_i \rfloor$ by Lemma \ref{repeated squaring}, we obtain the required result. That is, the formula \[ \psi_1(a_1,\ldots,a_m) \equiv \bigwedge_{1 \le j \le n} \tau_j(a_1,\ldots,a_{m}) \] where each $\tau_j$ corresponds to the relator $t_j$, has length approximately \[ \sum_{1 \le j \le n} \left[ 5k + 10 \sum_{1 \le i \le k} \lfloor \mathtt{log_2}~z_i \rfloor \right] \] where $k$ depends on $j$.
The second formula $\psi_2$ expresses that $a_1,\ldots,a_m$ generate $H$, using Lemma \ref{generation}; \[ \psi_2(a_1,\ldots,a_m) \equiv \forall h [\pi_m(h;a_1,\ldots,a_m)]. \] (Recall that $H \models \pi_m(h;a_1,\ldots,a_m)$ iff $h \in \langle a_1,\ldots,a_m \rangle$.) Clearly $\psi_2$ has length $O(m+\mathtt{log}|H|)=O(\mathtt{log}^2|H|)$. Now let $G$ be a group and let $x_1,\ldots,x_m \in G$ such that $G \models \psi_1 \wedge \psi_2(x_1,\ldots,x_m)$. Then we know that $G$~is generated by the elements $x_1,\ldots,x_m$ and is isomorphic to some factor group of $H$. But since $H$ is simple, we must have $G \cong H$ unless $G$~is trivial. Hence the last formula $\psi_3$ is \[ \psi_3(a_1,\ldots,a_m) \equiv [a_1 \neq 1] \] assuming $a_1$ is not the identity element in $H$. We can make this assumption safely because if $a_1=1$, then we can get a shorter presentation for $H$ by excluding $a_1$ from $P$. Clearly $\psi_3$ has constant length and so the length of the whole sentence $\psi$ is $O(\mathtt{log}^2|H|)$. \end{proof} \subsection{Symmetric groups} \label{symmetric} In the previous subsection, it was easy to obtain the formula $\psi_3$ because the groups considered were simple. Something similar happens for the case of the symmetric groups, because for each $n \ge 5$, the alternating group $A_n$ is the only non-trivial proper normal subgroup of $S_n$ (see \cite[3.2.3]{Robinson:82}). In \cite{Bray:11}, Bray {\it et al.}\ found presentations of length $O(\mathtt{log}(n))$ for the symmetric groups $S_n$, one of whose generators corresponds to the $n$-cycle $(1,\ldots,n)$. They defined the length of a presentation in a slightly different way from Babai {\it et al.}~\cite{Babai:97}, but this does not affect the order of the length of a presentation (they included the number of generators). Note that their presentation for $S_n$ has length $O(\mathtt{log \ log}|S_n|)$ because $|S_n| = n!$. \begin{theorem} \label{symmetric thm} The class of symmetric groups $S_n$ is LC.
\end{theorem} \begin{proof} We can assume $n \ge 5$, and so $A_n$ is the only non-trivial proper normal subgroup of $S_n$. The sentence is $\psi \equiv \exists \eta \exists \sigma_2 \ldots \exists \sigma_k [\psi_1 \wedge \psi_2 \wedge \psi_3]$ where $k$ is the number of generators in the short presentation for $S_n$, $\eta$ corresponds to the $n$-cycle $(1,\ldots,n)$ and $\sigma_2, \ldots, \sigma_k$ correspond to the rest of the generators. The constructions of $\psi_1, \psi_2$ are exactly the same as those in the previous theorem, and so $\psi_1,\psi_2$ have length $O(\mathtt{log} \ n)$, $O(\mathtt{log}|S_n|)$ respectively. Now let $G$ be a group and let $x_1, \ldots, x_k \in G$ such that $G \models \psi_1 \wedge \psi_2 (x_1, \ldots, x_k)$. Then, since $\{1\}$, $A_n$, $S_n$ are the only normal subgroups of $S_n$, the group $G$ is isomorphic to $S_n$, $\mathbb{Z}_2$ or $\{1\}$. So the formula \[ \psi_3(\eta, \sigma_2, \ldots, \sigma_k) \equiv [\eta \neq 1 \ \wedge \ \eta^2 \neq 1] \] guarantees that if $G \models \psi$ then $G \cong S_n$. Since $\psi_3$ has constant length, the length of the whole sentence $\psi$ is $O(\mathtt{log}|S_n|)$. \end{proof} \subsection{Abelian groups} \label{abelian} It is known that each finite abelian group is isomorphic to a direct product of (finite) cyclic groups, and $\mathbb{Z}_m \oplus \mathbb{Z}_n \cong \mathbb{Z}_{mn}$ iff $m,n$ are coprime. Hence each finite abelian group can be written as a unique direct product of cyclic groups of prime-power order, up to permutation of the factors. \begin{theorem} \label{abelian thm} The class of finite abelian groups is LC. \end{theorem} \begin{proof} Let $H=\bigoplus_{1 \le i \le n} \mathbb{Z}_{q_i}$ where each $q_i$ has the form $q_i = p_i^{z_i}$ for some prime $p_i$ and some positive integer $z_i$. Then $H$ has a presentation \[ \langle~a_1,\ldots,a_n~|~a_1^{q_1},\ldots,a_n^{q_n}, [a_j,a_k]~(1 \le j < k \le n)~\rangle \] where each $a_i$ corresponds to a generator of $\mathbb{Z}_{q_i}$.
We follow the scheme again (i.e.\ the sentence $\psi$, written additively, has the form ${\psi \equiv\exists a_1 \ldots \exists a_n [\psi_1 \wedge \psi_2 \wedge \psi_3]}$), but $\psi_1$ is slightly different here. Since the single formula $\forall g \forall h~[g,h]=0$, which says that all elements commute, is shorter than listing the commutator relators $[a_j,a_k]$ for every pair $j < k$, the formula $\psi_1$ is \[ \psi_1(a_1,\ldots,a_n) \equiv \forall g \forall h [g,h]=0 \ \wedge \ \bigwedge_{1 \le i \le n} \theta_{q_i}(a_i,0). \] (Recall that $H \models \theta_n(x,y)$ iff $x^n=y$ holds in $H$; written additively, $\theta_{q_i}(a_i,0)$ says that $q_i a_i=0$.) Clearly it has length $O(\mathtt{log}|H|)$. The second formula $\psi_2$ is exactly the same as that of the previous example; \[ \psi_2(a_1,\ldots,a_n) \equiv \forall g [\pi_n(g;a_1,\ldots,a_n)]. \] It has length $O(\mathtt{log}|H|)$. Now let $G$ be a group written additively and let $x_1,\ldots,x_n \in G$ such that ${G \models \psi_1 \wedge \psi_2(x_1,\ldots,x_n)}$. Then we know $G$ is abelian and generated by $x_1,\ldots,x_n$. For $G$ to be isomorphic to $H$, it suffices that $p_i^{z_i-1} \cdot x_i \neq 0$ for each $i$, and that $x_1,\ldots,x_n$ form an independent set in the sense that if $\sum_{1 \le i \le n} \alpha_i x_i = 0$ for some integers $\alpha_i$, then $\alpha_i x_i = 0$ for each~$i$. Recall that each $x_i$ satisfies $q_i x_i = p_i^{z_i} x_i = 0$ where $p_i$ is prime. We define a relation $\sim$ on $\{1,\ldots,n\}$ by $i \sim j$ iff $p_i=p_j$. Now clearly $\sim$ is an equivalence relation. We denote by $[i], \Omega$ the equivalence class containing $i$ and the set of all the equivalence classes respectively. Then it is easy to see that if $S_{[i]} = \{x_j~|~j \in [i]\}$ is independent for each $[i] \in \Omega$, then the whole set $\{x_1,\ldots,x_n\}$ is independent.
So the last formula $\psi_3$ is \[ \psi_3(a_1,\ldots,a_n) \equiv \bigwedge_{1 \le i \le n} \neg \left[ \theta_{p_i^{z_i-1}}(a_i,0) \right] \ \wedge \ \bigwedge_{[i] \in \Omega} \xi_{[i]}(a_1,\ldots,a_n) \] where each $\xi_{[i]}$ expresses that $S_{[i]}+p_i G = \{x_j + p_i G~|~j \in [i]\}$ is independent. This formula is correct because $S_{[i]}$ is independent iff $G/p_i G \cong \bigoplus_{j \in [i]} \mathbb{Z}_{p_i}^{(j)}$, where each $\mathbb{Z}_{p_i}^{(j)}$ is a copy of $\mathbb{Z}_{p_i}$, which in turn holds iff $S_{[i]}+p_i G$ is independent. (Strictly speaking, we need the assumption that $S_{[i]}+p_i G$ does not contain the identity element $p_i G$, which is the first part of $\xi_{[i]}$ below.) As a part of $\xi_{[i]}$, we use a modified version of the formula~$\pi_n$; \[ \pi'_i(g;x) \equiv \delta_k(g;x) \] where $k = \lceil \mathtt{log_2} \ p_i \rceil$ and $\delta_k$ is as defined in Lemma \ref{generation}. So $G \models \pi'_i(g;x)$ iff $g=z \cdot x$ for some non-negative integer $z \le r$ where $r$ is the smallest power of $2$ not smaller than $p_i$. In particular, $G \models \pi'_i(z \cdot x;x)$ for all $0 \le z < p_i$. Note that $\pi'_i$ has length $O(\mathtt{log} \ p_i)$. Now we are ready to write~$\xi_{[i]}$; \[ \begin{split} \xi_{[i]}(a_1,\ldots,a_n) \equiv \ &\bigwedge_{j \in [i]} \nexists b \left[ \theta_{p_i}(b,a_j) \right] \ \wedge\\ &\forall b_1 \ldots \forall b_{\lambda(i)} \left[ \left( \bigwedge_{j \in [i]} \pi'_i(b_{\varphi(j)};a_j) \wedge \exists c \left[ \theta_{p_i} \left(c,\sum_{j \in [i]} b_{\varphi(j)}\right) \right] \right) \right.\rightarrow\\ &\hspace{125pt}\left. \exists c_1 \ldots \exists c_{\lambda(i)} \bigwedge_{j \in [i]} \theta_{p_i} (c_{\varphi(j)},b_{\varphi(j)}) \right] \end{split} \] where $\lambda(i)$ is the size of $[i]$ and $\varphi$ is a bijection from $[i]$ to $\{1,\ldots,\lambda(i)\}$.
The second part says that if a linear combination $\sum_{j \in [i]} z_j a_j$ is in $p_i G$ for some non-negative integers ${z_j \le r = 2^{\lceil \mathtt{log_2}~p_i \rceil}}$, then $z_j a_j \in p_i G$ for each $j$. Now consider the length of $\psi_3$. For each $i$, $\psi_3$ contains: \begin{itemize} \item one $\theta_{p_i^{z_i-1}}$, which has length $\simeq 10(z_i-1)\lfloor \mathtt{log_2}~p_i \rfloor$ \item three $\theta_{p_i}$, each of which has length $\simeq 10\lfloor \mathtt{log_2}~p_i \rfloor$ \item one $\pi'_i$, which has length $\simeq 26 \lceil \mathtt{log_2}~p_i \rceil$. \end{itemize} Hence the length of the formula $\psi_3$ has order of \[ \sum_{1 \le i \le n} z_i \ \mathtt{log} \ p_i = \mathtt{log} \left( \prod_{1 \le i \le n} p_i^{z_i} \right) = \mathtt{log}|H| \] and so the length of the whole sentence $\psi$ is $O(\mathtt{log}|H|)$. \end{proof} \subsection{Upper unitriangular matrix groups} \label{unitriangular} In Subsection \ref{UT}, we analyzed the structure and some properties of the group $UT_3(\mathbb{Z})$. Using some of those results, and also some part of the previous theorem, we consider similar finite groups, namely ${UT_3(n)=UT_3(\mathbb{Z}_n)}$ where $n$ is any positive integer. Each $UT_3(n)$ is isomorphic to the free 2-generated class 2 nilpotent group with exponent $n$. Their freeness can be shown in a similar way to the case of $UT_3(\mathbb{Z})$ (see \cite[Exercise 16.1.3]{Kar.Mer:79}). \begin{proposition} \label{UT finite thm} The class of the unitriangular groups $UT_3(n)$ is LC. \end{proposition} \begin{proof} Let ${a=t_{23}(1)}$, ${b=t_{12}(1)}$. (Recall that $t_{mn}(k)$ denotes the 3-by-3 matrix with $1$ in its diagonal entries, $k$ in the $m$-th row $n$-th column entry and $0$ everywhere else.) We begin by analyzing the structure of the group $H=UT_3(n)$. 
Since \[ \left[ \left( \begin{array}{@{}ccc@{}} 1 & \alpha_1 & \beta_1 \\ 0 & 1 & \gamma_1 \\ 0 & 0 & 1 \end{array} \right), \left( \begin{array}{@{}ccc@{}} 1 & \alpha_2 & \beta_2 \\ 0 & 1 & \gamma_2 \\ 0 & 0 & 1 \end{array} \right) \right] = t_{13}(\alpha_1\gamma_2-\alpha_2\gamma_1), \] holds in $H$, the center $Z$ of $H$ is \[ Z=\{t_{13}(z)~|~z \in \mathbb{Z}_n\}=\langle c \rangle \] where $c=[a,b]=t_{13}(1)$. Now $H/Z$ is isomorphic to $\mathbb{Z}_n \oplus \mathbb{Z}_n$ (generated by $aZ,bZ$) which is abelian, so $H$ is class 2 nilpotent. Hence each element $h \in H$ can be written as a product of the form $h=xyz$ where $x \in \langle a \rangle$, $y \in \langle b \rangle$, $z \in Z$, and $H'$~coincides with the set of commutators by Lemma \ref{nilpotent commutators}. Let $\varphi$ be the formula \[ \begin{split} \varphi(h,x,y,z;a,b) \equiv \exists u \exists v \{&\pi'_n(u;a)~\wedge~\pi'_n(v;b)~\wedge~\forall w~[z,w]=1~\wedge\\ &h=uvz~\wedge~x=[u,b]~\wedge~y=[a,v]\} \end{split} \] with parameters $a,b$ where $\pi'_n(r;s) \equiv \delta_k(r;s)$, $k=\lceil \mathtt{log_2}~n \rceil$ (i.e.~$H \models \pi'_n(r;s)$ iff $r=s^z$ for some $z \in \mathbb{Z}_n$). Then it defines a bijection $\Phi : H \rightarrow Z \times Z \times Z$ such that $\Phi(h)=(x,y,z)$ for ${h = \left( \begin{array}{@{}ccc@{}} 1 & y & z \\ 0 & 1 & x \\ 0 & 0 & 1 \end{array} \right) \in H}$. Note that $\varphi$ has length $O(\mathtt{log}~n)$. Now we are ready to write the sentence $\psi$ describing $H$. Let $n = \prod_{1 \le i \le m} p_i^{z_i}$ be the prime decomposition of $n$.
Then the sentence is $\psi \equiv \exists a \exists b [\psi_1 \wedge \ldots \wedge \psi_6]$ where $\psi_1$ says that $a,b$ have order dividing $n$ \[ \psi_1(a,b) \equiv \theta_n(a,1)~\wedge~\theta_n(b,1) \] $\psi_2$ says that $c$ has order $n$ (using the previous result) \[ \begin{split} \psi_2(a,b) \equiv~&\theta_n([a,b],1)~\wedge \\ &\exists c_1 \ldots \exists c_m \left[ \bigwedge_{1 \le i \le m} \left\{ \pi'_n(c_i;[a,b])~\wedge~\theta_{p_i^{z_i}}(c_i,1)~\wedge~\neg \theta_{p_i^{z_i-1}}(c_i,1) \right\} \right] \end{split} \] $\psi_3$ says that $H'=Z=\langle c \rangle$ coincides with the set of commutators \[ \begin{split} \psi_3(a,b) \equiv~&\forall r \forall s \forall t \forall u \exists v \exists w~[r,s][t,u]=[v,w]~\wedge~\forall r \forall s \forall h [[r,s],h]=1~\wedge \\ &\forall z \{ \forall h~[z,h]=1 \rightarrow (\exists r \exists s~[r,s]=z~\wedge~\pi'_n(z;[a,b])) \} \end{split} \] $\psi_4,\psi_5$ say that $\Phi$ is a function $H \rightarrow Z \times Z \times Z$ \[ \begin{split} \psi_4(a,b) \equiv~&\forall h \exists x \exists y \exists z~\varphi(h,x,y,z;a,b)\\ \psi_5(a,b) \equiv~&\forall h \forall x_1 \forall x_2 \forall y_1 \forall y_2 \forall z_1 \forall z_2\\ &[\{\varphi(h,x_1,y_1,z_1;a,b)~\wedge~\varphi(h,x_2,y_2,z_2;a,b)\} \rightarrow\\ &\hspace{100pt}\{x_1=x_2~\wedge~y_1=y_2~\wedge~z_1=z_2\}] \end{split} \] and $\psi_6$ says that $\Phi$ is surjective \[ \psi_6(a,b) \equiv~\forall x \forall y \forall z \{\forall g~[x,g]=[y,g]=[z,g]=1 \rightarrow \exists h~\varphi(h,x,y,z;a,b)\}. \] It can be easily seen that $\psi$ has length $O(\mathtt{log}~n)$. Now let $G$ be a group satisfying $\psi$ with witnesses $a,b \in G$. Then from $\psi_2,\psi_3$, the center $Z$ of $G$ is cyclic of order $n$ generated by $c=[a,b]$. Since $\varphi$ defines a surjective function $\Phi : G \rightarrow Z \times Z \times Z$ by $\psi_4,\psi_5,\psi_6$, $G$ has size at least $n^3$.
But since $a,b$ have order at most $n$ from $\psi_1$ and each element $g \in G$ can be written as a product of the form $g=uvz$ where $u \in \langle a \rangle$, $v \in \langle b \rangle$, $z \in Z$ from $\psi_4$, $G$ cannot have more than $n^3$ elements. Hence $G$ has precisely $n^3$ elements, and has the form $G=\{a^{\alpha} b^{\beta} c^{\gamma}~|~\alpha, \beta, \gamma \in \mathbb{Z}_n \}$. Since $G$ is class 2 nilpotent from $\psi_3$ and $c=[a,b]$, one can deduce the equation \[ a^{\alpha_1} b^{\beta_1} c^{\gamma_1} \cdot a^{\alpha_2} b^{\beta_2} c^{\gamma_2} = a^{\alpha_1+\alpha_2} b^{\beta_1+\beta_2} c^{\gamma_1+\gamma_2-\alpha_2 \beta_1} \] and it determines the group uniquely up to isomorphism. \end{proof} The short presentation conjecture \cite{Babai:97} asks whether there exists a constant $c$ such that every finite group $G$ has a presentation of length $O({\mathtt {log}}^c~|G|)$. In analogy, we ask: \begin{question} Is the class of finite groups polylogarithmically compressible (PLC)? Is it in fact logarithmically compressible? \end{question} \end{document}
\begin{document} \title{RMCMC:\ A System for Updating Bayesian Models} \begin{abstract} A system to update estimates from a sequence of probability distributions is presented. The aim of the system is to quickly produce estimates with a user-specified bound on the Monte Carlo error. The estimates are based upon weighted samples stored in a database. The stored samples are maintained such that the accuracy of the estimates and quality of the samples is satisfactory. This maintenance involves varying the number of samples in the database and updating their weights. New samples are generated, when required, by a Markov chain Monte Carlo algorithm. The system is demonstrated using a football league model that is used to predict the end of season table. Correctness of the estimates and their accuracy is shown in a simulation using a linear Gaussian model. \end{abstract} {\bf Key words:} Importance sampling; Markov chain Monte Carlo methods; Monte Carlo techniques; Streaming data; Sports modelling \section{Introduction}\label{sec:intro} We are interested in producing estimates from a sequence of probability distributions. The aim is to quickly report these estimates with a user-specified bound on the Monte Carlo error. We assume that it is possible to use Markov chain Monte Carlo (MCMC) methods to draw samples from the target distributions. For example, the sequence can be the posterior distributions of parameters from a Bayesian model as additional data becomes available, with the aim of reporting the posterior means with the variance of the Monte Carlo error being less than $0.01$. We present a general system that addresses this problem. Our system involves saving the samples produced from the MCMC sampler in a database. The samples are updated each time the target distribution changes. The update involves weighting or transiting the samples, depending on whether the sample space changes or not. In order to control the accuracy of the estimates, the samples in the database are maintained.
This maintenance involves increasing or decreasing the number of samples in the database. This maintenance also involves monitoring the quality of the samples using their effective sample size. See Table \ref{tab:control_variates} for a summary of the control variables. Another feature of our system is that the MCMC sampler is paused whenever the estimate is accurate enough. The MCMC sampler can later be resumed if a more accurate estimate is required. Therefore, it may be the case that no new samples are generated for some targets. Hence the system is efficient, as it reuses samples and only generates new samples when necessary. Our approach has similar steps to those used in sequential Monte Carlo (SMC) methods \citep{smcm,liu2008monte}, such as an update (or transition) step and re-weighting of the samples. Despite the similarities, SMC methods are unable to achieve the desired aims considered in this paper. Specifically, even though SMC methods are able to produce estimates from a sequence of distributions, it is unclear how to control the accuracy of this estimate without restarting the whole procedure. For example, consider the simulations in \cite{gordon} where the bootstrap particle filter, a particular SMC method, is introduced. In these simulations the posterior mean is reported with the interval between $2.5$ and $97.5$ percentile points. As these percentile points are fixed, there is no way to reduce the length of the interval. The only hope of reducing the interval is to rerun the particle filter with more particles, although there is no guarantee. This conflicts with the aim of reporting the estimates quickly. In practice, most SMC methods are concerned with models where only one observation is revealed at a time (see simulations in e.g.\ \cite{kitagawa2014computational}, \cite{del2006sequential}, \cite{chopin2002sequential}). 
Our framework allows for observations to be revealed in batches of varying sizes; see the application presented in \S \ref{sec:football}. \begin{table}[tbp] \caption{Summary of control variables} \centering \begin{tabular}{lll} \toprule Control Variable & Measurement & Target Interval \\ \midrule Accuracy of estimates ($A$) & Standard deviation of estimates & $\left[\beta_1,\beta_2\right]$ \\ Quality of samples ($Q$) & $\text{Effective sample size of database}/N_{\text{MAX}}$& $\left[\gamma_1,\gamma_2\right]$ \\ \bottomrule \end{tabular} \label{tab:control_variates} \end{table} A potential application of the system is monitoring the performance of multiple hospitals, where the data observed are patient records and the estimate of interest relates to the quality of patient care at each hospital. Controlling the accuracy of this estimate may relate to controlling the proportion of falsely inspected hospitals. Another realistic application of the system is a football league model (see \S \ref{sec:football}), where the data revealed are the match results and the estimate of interest is the end of season league table ranking. Controlling the accuracy of the estimated ranks may be of interest to sports pundits and gamblers. In \S \ref{sec:process_descriptions} we define, in detail, the setup we are considering. We then describe the separate processes of the system. We also describe how to combine the weighted samples to form the estimate of interest. Then in \S \ref{sec:est_acc} we present a modified batch mean approach that we use to compute the accuracy of the estimate. In \S \ref{sec:football} we investigate the performance of the system using a model for a football league. For this application, the aim is to provide quick and accurate predictions of the end of season team ranks as football match results are revealed. We examine the performance of the system as the amount of data received increases.
Currently, there is no theoretical proof that the proposed system is stable; however, simulations verify the correctness of the reported accuracy and the estimate. We present such a simulation in \S \ref{sec:kalman_compare}, where we apply the system using a linear Gaussian model and simulated data. We conclude in \S \ref{sec:conclusion} with a discussion of potential future topics of research. \section{Description of the System}\label{sec:process_descriptions} \subsection{Setup}\label{sec:setup} We now describe the settings we consider and the necessary operations required for our system to function. Let $(S_n,\mathcal{S}_n,\pi_n)_{n\in\mathbb{N}}$ be a sequence of probability spaces. We are interested in reporting $\pi_ng_n=\int g_n(x)d\pi_n(x)$ where $g_n$ is a, possibly multivariate, random variable on $(S_n,\mathcal{S}_n,\pi_n)$. In order to implement our system, the following operations are required:\ \begin{enumerate} \item \textit{MCMC}: For all $n\in\mathbb{N}$, generate samples from an MCMC with stationary distribution $\pi_n$. \item \textit{Weighting Samples}: For all $k\in \overline{D}:=\{j \in \mathbb{N}: S_{j-1}=S_j \text{ and }\pi_j\ll\pi_{j-1} \}$, the Radon-Nikodym derivative $\frac{d\pi_k}{d\pi_{k-1}}$ can be evaluated. \item \textit{Transiting Samples}: For all $k\in D:=\mathbb{N}\setminus\overline{D}$, there exists $f_k : S_{k-1}\times [0,1]\rightarrow S_k$ such that $\xi\sim \pi_{k-1}$, $U\sim U[0,1] \implies f_k(\xi,U)\sim\pi_k$. \end{enumerate} The weighting operation enables us to weight previously generated samples according to the latest measure. In the case where the sample space changes or the Radon-Nikodym derivative is not defined, the transition operation allows us to map the samples to the latest measure. If such a transition function, in operation 3, is unavailable, then the following may be used instead. \begin{enumerate} \item[$3^\prime.$] \textit{Transiting Samples}: For all $k\in D$, there exists $f_k : S_{k-1}\times [0,1]\rightarrow S_k$.
\end{enumerate} This alternative transition operation allows us to map the samples into the latest sample space of interest. \subsection{Global Variables}\label{sec:sample database} The samples produced by the RMCMC process (\S \ref{sec:RMCMC}) are stored in a database. Each sample is recorded in the database with a production date and an information cut-off. The production date is the date and time the sample was written to the database by the RMCMC process. The information cut-off refers to the measure the MCMC was targeting when the sample was produced. Lastly, each sample enters the database with weight $1$. The maximum number of samples allowed in the database is $N_{\text{MAX}}$. In \S \ref{sec:control} we explain how the control process varies $N_{\text{MAX}}$ over time. Further, we shall refer to the current number of samples in the database as $N$. The deletion process (\S \ref{sec:deletion}) ensures that $N\leq N_{\text{MAX}}$ by removing samples from the database when necessary. A summary of the system's global variables is provided in Table \ref{tab:global_variables} along with their descriptions. \begin{table}[tbp] \caption{Description of the global variables in the rolling MCMC system.} \centering \begin{tabular}{l p{5in}} \toprule Variable & Description \\ \midrule \textsc{rmcmc\_on} & Indicator if RMCMC process is supposed to be producing new samples.\\ $N_{\text{MAX}}$ & Maximum number of samples allowed in the database.\\ $N$ & Number of samples contained in the database.\\ $(\xi_i,w_i)_{i=1,\dots,N}$ & The samples and their corresponding weights in the database. \\ \bottomrule \end{tabular} \label{tab:global_variables} \end{table} \subsection{RMCMC Process}\label{sec:RMCMC} The RMCMC process, summarized in Algorithm \ref{code:rmcmc_process}, is an MCMC method that changes its target without the need to restart. When the target of interest changes from $\pi_{j-1}$ to $\pi_{j}$, so does the target of the MCMC. 
The Markov chain continues from the latest sample, making a transition step using $f_j$ if there is a change of sample space. This ensures the Markov chain explores the correct space. We continue from this sample in the hope that the Markov chain converges faster to the updated target distribution than it would from a randomly chosen starting point. To allow the Markov chain to move toward the updated target distribution, we use a burn-in period in which the first $B_0$ samples are discarded after the target changes. This burn-in period also weakens the dependence between samples from different target distributions. As this MCMC method is never reset and continues from the last generated sample, we refer to it as a rolling MCMC (RMCMC) process. \begin{algorithm}[tbp] \caption{RMCMC process} \label{code:rmcmc_process} \begin{algorithmic}[1] \Statex Parameters:\ MCMC algorithm, $B_0$. \Repeat{ indefinitely.} \If{target changes to $\pi_{j}$} \State Set $B=B_0$. \State Update target of MCMC to $\pi_{j}$. \If{$j\in D$} \State Set current position of MCMC to $f_j(\xi,U)$ where $U\sim U[0,1]$ and $\xi$ is the latest sample generated. \EndIf \EndIf \If{\textsc{rmcmc\_on=true}} \State Perform MCMC step. \If{$B=0$} write sample to database with weight $1$. \Else $\text{ }$ $B\leftarrow B-1$. \EndIf \Else $\text{ }$sleep for some time. \EndIf \Until{} \end{algorithmic} \end{algorithm} The RMCMC process is only active when new samples are required, as it can be paused and resumed by the control process (\S \ref{sec:control}). If the process is paused for long periods, it may be the case that no samples are produced for some targets. In Algorithm \ref{code:rmcmc_process} the generated samples are written to the database individually. In practice, however, it may be more convenient to write the samples to the database in batches. This is permissible and does not affect the functioning of the system. 
\subsection{Update Process}\label{sec:reweight} The update process, presented in Algorithm \ref{code:reweight_process}, ensures that the samples are weighted correctly each time the target changes. There are two types of updates, depending on the measures and their sample spaces. More precisely, consider a change of target from $\pi_{j-1}$ to $\pi_{j}$. If $j\in D$, that is, the sample spaces differ or the Radon-Nikodym derivative $d \pi_{j}/d\pi_{j-1}$ is not defined, then the function $f_j$ is used to map the samples in the database onto the new space. On the other hand, if $j\notin D$, the samples are first re-weighted according to $d \pi_{j}/d\pi_{j-1}$ and then scaled. We now discuss these re-weighting and scaling steps in more detail. Suppose that the RMCMC process produces the samples $\xi_1,\dots,\xi_m\sim \pi_{j-1}$ where $\pi_{j-1}$ is the target of interest. Next, suppose the target changes from $\pi_{j-1}$ to $\pi_{j}$. In order to use the samples from the previous measure, $\pi_{j-1}$, for estimating $\pi_{j}g_j$, the weights are updated as follows. For $i=1,\dots,m$ define the updated weight $W_i$ from $w_i$ as \begin{equation*} W_i= w_i v_i\quad\text{where}\quad v_i\propto \frac{d\pi_{j}}{d\pi_{j-1}}(\xi_i). \end{equation*} The weights are then scaled such that their sum equals their effective sample size. More precisely, define the scaled weight $\widehat{w}_i$ from $W_i $ as \begin{equation*} \widehat{w}_i = W_i \frac{\sum_{k=1}^mW_k}{\sum_{k=1}^mW_k^2} \quad (i=1,\dots,m). \end{equation*} Straightforward calculations show that scaling in this fashion ensures the effective sample size of the database is the sum of the effective sample sizes of the most recently weighted samples and the newly generated samples. 
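The re-weighting and scaling steps just described can be sketched as follows (a minimal Python illustration; the function name \texttt{update\_and\_scale} is ours, not part of the system):

```python
def update_and_scale(weights, rn_values):
    """Re-weight samples for a change of target pi_{j-1} -> pi_j
    (W_i = w_i * v_i, with v_i the Radon-Nikodym values up to a constant),
    then scale so that the weights sum to their effective sample size."""
    W = [w * v for w, v in zip(weights, rn_values)]
    d = sum(W) / sum(x * x for x in W)
    return [d * x for x in W]

# Property used in the text: the scaled weights sum to the
# effective sample size of the re-weighted samples.
w_hat = update_and_scale([1.0, 1.0, 1.0, 1.0], [0.5, 2.0, 1.0, 1.5])
ess = sum(w_hat) ** 2 / sum(x * x for x in w_hat)
```

Because the effective sample size is invariant under rescaling of the weights, \texttt{sum(w\_hat)} equals \texttt{ess}; this is the property that makes the database effective sample size additive over the newly generated weight-$1$ samples.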
\begin{algorithm}[H] \caption{Update Process} \label{code:reweight_process} \begin{algorithmic}[1] \Repeat{ indefinitely.} \If{the target changes from $\pi_{j-1}$ to $\pi_{j}$} \State{Label the out-of-date samples $\xi_1,\dots,\xi_m$ with corresponding weights $w_1,\dots,w_m$.} \If{$j\notin D$} \State Update the weight $w_i\leftarrow w_i v_i$ where $v_i\propto \frac{d\pi_{j}}{d\pi_{j-1}}(\xi_i)$ for $i=1,\dots,m$. \State Compute $d= \left(\sum_{k=1}^mw_k\right)/\left(\sum_{k=1}^mw_k^2\right)$. \State Set $w_i\leftarrow d w_i $ for $i=1,\dots,m$. \State Write the weights into the sample database. \EndIf \If{$j\in D$} \State Replace samples by $f_j(\xi_1,U_1),\dots,f_j(\xi_m,U_m)$ where $U_1,\dots,U_m\stackrel{\text{iid}}{\sim} U[0,1]$, leaving the weights unchanged. \EndIf \Else $\text{ }$ sleep for some time. \EndIf \Until \end{algorithmic} \end{algorithm} \subsection{Control Process}\label{sec:control} The control process determines when the RMCMC process is paused and resumed, and it changes the maximum number of samples contained in the database. This is done to maintain the accuracy of the estimate of interest and the quality of the samples. We now discuss each of these in turn. At any given time, denote the samples in the database by $\xi_1,\dots,\xi_N$. For $i=1,\dots,N$ denote the $i$th sample weight in the database as $w_i$. To estimate the quantity of interest $\pi_k g_k$, for some $k\in \mathbb{N}$, we use the estimator \begin{equation*} T=\frac{\sum_{i=1}^Nw_ig_k(\xi_i)}{\sum_{i=1}^Nw_i}. \end{equation*} The accuracy of the estimate, $A$, is defined as the standard deviation of $T$ (in \S \ref{sec:est_acc} we discuss how to estimate $A$). The process aims to control the accuracy $A$ such that $A<\epsilon$ for some fixed $\epsilon>0$. When considering multiple estimates, i.e.\ a multivariate $g_k$, we require the standard deviations of all the estimates to be below the threshold $\epsilon$. 
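The estimator $T$ is simply a self-normalised weighted average of $g_k$ over the database; a minimal sketch (Python, our own function name):

```python
def weighted_estimate(samples, weights, g):
    """T = sum_i w_i g(xi_i) / sum_i w_i over the database samples."""
    return sum(w * g(x) for x, w in zip(samples, weights)) / sum(weights)

# With equal weights, T reduces to the plain sample mean of g.
```

Note that $T$ depends on the weights only through their ratios, which is why the scaling performed by the update process leaves the estimate itself unchanged.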
One approach to control the accuracy would be to pause the RMCMC process each time $A<\epsilon$ and resume it if $A\geq\epsilon$. However, this may lead to the RMCMC process being paused and resumed each time a new observation is revealed, as a small change in the accuracy will inevitably occur. Therefore, we use $0<\beta_1<\beta_2\leq\epsilon$ so that if $A\leq \beta_1$ the RMCMC process is paused and if $A>\beta_2$ the RMCMC process is resumed. The control process also controls the quality of the samples in the database. The process aims to hold a good mixture of samples in the hope that a future change of measure does not require the RMCMC process to be resumed. We define the quality of the samples in the database as \begin{equation*} Q=\frac{\text{ESS}}{N_{\text{MAX}}}\quad\text{where}\quad\text{ESS}=\frac{\left(\sum_{i=1}^Nw_i\right)^2}{\sum_{i=1}^Nw_i^2}. \end{equation*} The quality of the samples, $Q$, is the effective sample size of all the weights in the database divided by the optimal effective sample size of the database, $N_{\text{MAX}}$. The optimal effective sample size corresponds to a database with $N_{\text{MAX}}$ samples, all with weight 1. As with the accuracy, we aim to maintain the quality such that $\gamma_1<Q<\gamma_2$ for some $0<\gamma_1<\gamma_2\leq 1$. The control process is summarised in Algorithm \ref{code:control_process}. To ensure that the database is never depleted, a minimum number of samples $N_{\text{MIN}}>0$ is imposed, such that neither the number of samples $N$ nor $N_{\text{MAX}}$ can drop below $N_{\text{MIN}}$. Therefore, when the RMCMC process is paused, $Q<\gamma_1$ and $N_{\text{MAX}}=N_{\text{MIN}}$, we cannot decrease $N_{\text{MAX}}$ any further. In this case, the RMCMC process is resumed to generate new samples that replace the poor quality samples in the database. 
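One pass of this control logic can be sketched as follows (Python; the $10$\% adjustment of $N_{\text{MAX}}$ is one possible choice, matching the one we use in the application of \S \ref{sec:football}):

```python
def control_step(A, Q, N, N_max, N_min,
                 beta1, beta2, gamma1, gamma2, rmcmc_on):
    """One iteration of the control process; returns (rmcmc_on, N_max)."""
    if A < beta1 and N >= N_min:
        rmcmc_on = False                      # accurate enough: pause sampling
    if A > beta2 or (not rmcmc_on and Q < gamma1 and N == N_min):
        rmcmc_on = True                       # resume to restore accuracy/quality
    if not rmcmc_on and Q < gamma1 and N > N_min:
        N_max = max(N_min, int(0.9 * N_max))  # shed poor-quality samples
    if rmcmc_on and Q > gamma2:
        N_max = int(1.1 * N_max)              # make room for more good samples
    return rmcmc_on, N_max
```

For example, if the accuracy has degraded past $\beta_2$, the sampler is switched back on without touching $N_{\text{MAX}}$; if the sampler is paused and the quality has dropped below $\gamma_1$, the database is shrunk instead.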
\begin{algorithm}[btp] \caption{Control Process}\label{code:control_process} \begin{algorithmic}[1] \Statex{Parameters:\ $\beta_1,\beta_2,\gamma_1,\gamma_2,N_{\text{MIN}}$.} \Repeat{ indefinitely.} \State Compute $Q$ and $A$. \If{$A < \beta_1$ and $N\geq N_{\text{MIN}}$} set \textsc{rmcmc\_on=false}. \EndIf \If{($A >\beta_2$) or (\textsc{rmcmc\_on=false} and $Q<\gamma_1$ and $N=N_{\text{MIN}}$)} set \textsc{rmcmc\_on=true}. \EndIf \If{\textsc{rmcmc\_on=false} and $Q<\gamma_1$ and $N>N_{\text{MIN}}$} decrease $N_{\text{MAX}}$. \EndIf \If{\textsc{rmcmc\_on=true} and $Q>\gamma_2$} increase $N_{\text{MAX}}$. \EndIf \Until \end{algorithmic} \end{algorithm} \subsection{Deletion Process}\label{sec:deletion} This process deletes samples from the database if the current number of samples, $N$, exceeds the maximum number allowed, $N_{\text{MAX}}$. Removing samples from the database reduces the computational work performed by the update process and in calculating the estimates. Moreover, lowering the number of samples is how the control process maintains the quality of the samples. For simplicity, if $N>N_{\text{MAX}}$, the $N-N_{\text{MAX}}$ samples that were produced the earliest are removed. The deletion process is summarized in Algorithm \ref{code:deletion_process}. \begin{algorithm}[btp] \caption{Deletion Process}\label{code:deletion_process} \begin{algorithmic}[1] \Repeat{ indefinitely.} \If{$N > N_{\text{MAX}}$} delete samples from the database. \Else $\text{ }$ sleep for some time. \EndIf \Until \end{algorithmic} \end{algorithm} \subsection{Modifying Batch Means to Estimate the Accuracy}\label{sec:est_acc} There are several methods to estimate the variance of MCMC estimates, such as block bootstrapping \citep[][Chapter 3]{lahiri_resampling}, batch means \citep{flegal_batch} and initial sequence estimators \citep{practical_MCMC_geyer}. In our system the samples in the database carry weights, which complicates estimating the variance. 
The aforementioned methods cannot be used directly as they essentially treat all samples with equal weight. We now present a version of the batch means approach that is modified to account for the sample weights. Assume the estimate of interest is $\pi_n g_n$ for some $n\in\mathbb{N}$. First, order the samples in the database $\xi_1,\dots,\xi_N$ and their corresponding weights $w_1,\dots,w_N$ by their production date. This ensures that the dependence structure of the samples is maintained. Then we divide the samples into batches, i.e.\ intervals of total weight $b$. More precisely, let $D_0=0$, $D_j=\sum_{i=1}^jw_i$ and $L=\lceil \sum_{i=1}^Nw_i/b \rceil$ be the number of batches. A weight may span more than one interval; therefore we split each weight according to the proportion of it that falls within a given interval. For the $i$th interval and $u$th sample define $\kappa_i(u)=\left[ \min\left\{D_u,ib \right\}-\max\left\{D_{u-1},(i-1)b \right\} \right]^+$, where $\left[ x \right]^+=\max\left( 0, x\right)$, for $i=1,\dots,L$. Then $\kappa_i(u)$ is the batch weight of $\xi_u$ in interval $i$. The mean of the weighted samples in the $i$th interval is \begin{equation*} \widehat{\mu}_i=\frac{\sum_{u=1}^N\kappa_i(u)g_n(\xi_u)}{\sum_{u=1}^N\kappa_i(u)}. \end{equation*} Finally, we estimate the squared accuracy by \begin{equation*} \hat{A}^2=\frac{1}{L} \sum_{i=1}^L \left(\widehat{\mu}_i-\widehat{\mu} \right)^2 ,\quad \widehat{\mu}=\frac{1}{L}\sum_{j=1}^L\widehat{\mu}_j\text{ }. \end{equation*} The batch length $b$ should be large enough to capture the correlation between samples, yet small enough to give a stable estimate of the variance. In practice we recommend using several batch lengths in order to obtain a conservative estimate of $A$. Moreover, the batch means estimate should not be trusted when the number of batches, $L$, is low. This can occur because $\sum_{i=1}^Nw_i$ can become very small. 
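The modified batch means computation can be sketched as follows (plain Python; \texttt{g} is the function of interest and \texttt{b} the batch length):

```python
import math

def weighted_batch_means(samples, weights, g, b):
    """Estimate A^2 by batching weighted samples (ordered by production
    date) into intervals of weight-length b."""
    D = [0.0]                                 # cumulative weights D_0,...,D_N
    for w in weights:
        D.append(D[-1] + w)
    L = math.ceil(D[-1] / b)                  # number of batches
    batch_means = []
    for i in range(1, L + 1):
        num = den = 0.0
        for u in range(1, len(samples) + 1):
            # kappa_i(u): the part of weight w_u that falls in interval i
            kappa = max(0.0, min(D[u], i * b) - max(D[u - 1], (i - 1) * b))
            num += kappa * g(samples[u - 1])
            den += kappa
        batch_means.append(num / den)
    mu = sum(batch_means) / L
    return sum((m - mu) ** 2 for m in batch_means) / L, L
```

For example, four unit-weight samples $1,2,3,4$ with $b=2$ give batch means $1.5$ and $3.5$, hence $\hat{A}^2=1$.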
When the number of batches is low, we suggest nominally setting the accuracy $A$ to $-1$. This prompts the control process to remove samples from the database and then resume the RMCMC process. This action effectively replenishes the database with new samples. In practice, we recommend taking this action when $L<20$. \subsection{Remarks}\label{sec:practical-notes} \subsubsection{Effective sample size for correlated samples.} The quality, $Q$, uses the effective sample size defined for independent samples, not for the correlated samples produced by our system. In the system, consider the extreme case where all samples have the same value, i.e.\ $\xi_1=\dots=\xi_N$, produced from the same target. Each of these samples will have the same weight and therefore $Q=1$, suggesting the optimal quality has been achieved. Further, the accuracy of the estimate, $A$, will be very low since the weights and samples are all the same. Hence, in this extreme case, the control process would take no action. This is clearly undesirable. Ideally, the effective sample size used to calculate $Q$ should take into account the autocorrelation of the samples, where high autocorrelation (in absolute value) leads to a lower effective sample size. However, we use the version of the effective sample size for independent samples as it is quick and simple to compute. \subsubsection{Degeneracy of the Sample Weights.} We now discuss how the system handles two types of degeneracy of the sample weights. The first is where a single sample in the database has most of the total weight and all other samples have $0$ or nearly $0$ weight. If this were to occur, the effective sample size, and therefore the quality, $Q$, would be very low. In this case, the control process will remove samples from the database before resuming the RMCMC process. The second is where all sample weights are $0$ or nearly $0$. As a consequence, the sum of the weights, $\sum_{i=1}^Nw_i$, will be very low. 
Recall that the batch means approach uses $L=\lceil \sum_{i=1}^Nw_i/b \rceil$ batches where $b$ is the batch length. Further, if $L<20$ the control process removes samples from the database and resumes the RMCMC process. Therefore, in the case where $\sum_{i=1}^Nw_i$ drops too low, the sample database is replenished. To summarise, the system does not attempt to avoid these types of degeneracy, but takes remedial action when they occur. \subsubsection{Burn-in Periods.} In the RMCMC process, we perform a burn-in each time a change of measure occurs. In some cases, however, it may not be necessary, as we now discuss. Assume we have the samples $\xi_1,\dots,\xi_m\sim \pi_{j-1}$. Next, consider a change of measure from $\pi_{j-1}$ to $\pi_j$ such that $j\in D$. In this case, a burn-in period is unnecessary as the new chain starts at a representative of $\pi_j$, namely $f_j(\xi_m,U)\sim\pi_j$ where $U\sim U[0,1]$. On the other hand, if either the samples $\xi_1,\dots,\xi_m$ are not from $\pi_{j-1}$ or the transition function in operation $3^\prime$, but not operation $3$, is available, then a burn-in period is required. In any case, performing a burn-in is mostly harmless. \subsubsection{Subsampling.} The samples produced by an MCMC method are correlated. If the correlation of the samples is high then a large number of samples is required to achieve the desired accuracy of the estimate. As a consequence, the update process and the calculation of the estimate would take a long time. To alleviate this problem we use subsampling. Subsampling within an MCMC method entails saving only some of the samples produced. More precisely, with a subsampling size $k$, every $k$th sample is saved and the rest are discarded. To choose the subsampling size $k$, we suggest performing a pre-initialization run of the MCMC on the initial set of data. 
One approach, which we use in our implementation of the system, is to vary $k$ until $\rho:=\varsigma^2/\text{var}\left\{g_1(\xi_1)\right\}\approx 2$ where $ \varsigma^2=\text{var}\left\{g_1(\xi_1)\right\}+2\sum_{j=1}^\infty\text{cov}\left\{g_1(\xi_1),g_1(\xi_{1+j})\right\}$. We found that setting $\rho\approx 2$ worked well in our implementations of the system; however, it may not be appropriate in all applications. In practice, methods such as initial sequence estimators \citep{practical_MCMC_geyer} or a batch means approach \citep[][\S 1.10.1]{MonteCarloHandbook} can be used to estimate $\varsigma^2$. We chose to use the batch means approach in our system. If the initial Markov chain $\xi_1,\xi_2,\dots$ is Harris recurrent and stationary with invariant distribution $\pi$, then by the Markov chain central limit theorem \citep[e.g.][]{jones2004markov} \begin{equation*} \sqrt{n}\left\{\frac{1}{n}\sum_{i=1}^ng(\xi_i)- \int g(x)\pi(dx)\right\} \xrightarrow[]{d} N\left(0,\varsigma^2\right)\quad\text{as}\quad n\rightarrow\infty. \end{equation*} Thus $\varsigma^2$ is the asymptotic variance of the Markov chain. Hence, by choosing $\rho\approx 2$, we obtain \begin{equation*} 2\sum_{j=1}^\infty\text{cov}\left\{g(\xi_1),g(\xi_{1+j})\right\}\approx \text{var}\left\{g(\xi_1)\right\} \end{equation*} i.e.\ the sum of all covariance terms contributes as much as $\text{var}\left\{g(\xi_1)\right\}$ to $\varsigma^2$. This way, the covariance between the samples is prevented from getting too large relative to $\text{var}\left\{g(\xi_1)\right\}$. \subsubsection{Choice of Scaling.} As discussed in \S \ref{sec:RMCMC}, the database will consist of weighted samples from different target distributions. In \S \ref{sec:control} the weighted sample average, $T$, is used to estimate $\pi_j g_j$ for some $j\in\mathbb{N}$. In this subsection we show that, due to the scaling of the weights (\S \ref{sec:reweight}), the variance of $T$ is minimised under certain assumptions. 
A similar calculation can be found in \cite{gramacy}. We begin by showing that $T$ can be decomposed according to two sets of samples. Denote the invariant measure of the RMCMC process at a given time instance as $\pi_j$ for some known $j\in\mathbb{N}$. Further, label the samples produced from this MCMC targeting $\pi_j$ as $\xi_{m+1},\dots,\xi_N$ for some $m\in\{0,\dots,N\}$. The case $m=N$ corresponds to the situation where no samples have been produced from $\pi_j$. Label the remaining samples as $\xi_{1},\dots,\xi_m$. These samples will have already been weighted and scaled in previous iterations. The estimator, $T$, can be decomposed according to the two sets of samples as \begin{equation*} T = \frac{\sum_{i=1}^m\widehat{w}_ig_j(\xi_i)+\sum_{i=m+1}^Ng_j(\xi_i)}{\sum_{i=1}^m\widehat{w}_i+(N-m)} \end{equation*} since $w_i=1$ for $i=m+1,\dots,N$. In terms of the updated weights, $T$ can be written as $T = \alpha T_1+(1-\alpha) T_2$ where $T_1=\sum_{i=1}^mW_ig_j(\xi_i)/ \sum_{i=1}^mW_i$ and $T_2=\sum_{i=m+1}^Ng_j(\xi_i)/(N-m)$ are the individual estimators of $\pi_jg_j$ given by the two sets of samples and \begin{equation*} \alpha=\frac{\text{ESS}_m}{\text{ESS}_m+(N-m)}\quad\text{where}\quad\text{ESS}_m=\frac{\left(\sum_{i=1}^mW_i\right)^2}{\sum_{i=1}^mW_i^2}. \end{equation*} The choice of the scaling performed in the update process (\S \ref{sec:reweight}) led to this choice of $\alpha$. We now show that this choice of $\alpha$, under certain assumptions, minimises the variance of $T$. Assume $\phi\in\mathbb{R}$ is a constant. Then the variance of the estimator $T=\phi T_1 +(1-\phi) T_2$ is $\text{var}(T)=\phi^2\text{var}(T_1)+(1-\phi)^2\text{var}(T_2)$ where we assume that $T_1$ and $T_2$, or more specifically the two sets of samples $\xi_1,\dots,\xi_m$ and $\xi_{m+1},\dots,\xi_{N}$, are independent. 
The variances of the individual estimators are \begin{equation*} \text{var}(T_1)=\frac{\sigma^2}{\text{ESS}_m}\quad\text{and}\quad \text{var}(T_2)=\frac{\sigma^2}{N-m} \end{equation*} where we assume $\text{var}\left\{g_j(\xi_i)\right\}=\sigma^2$, for $i=1,\dots,N$, and that the weights are constants. Upon differentiating we find that setting $\phi$ to $\text{ESS}_m/\{ \text{ESS}_m+(N-m)\}$ minimises $\text{var}(T)$, thus recovering $\alpha$. These assumptions are unrealistic in our setting. However, this motivates the use of a burn-in period within the RMCMC process after new data are observed. Although we cannot guarantee independence between the sets of samples, the burn-in period at least weakens their dependence. \section{Application to a Model of a Football League}\label{sec:football} In this section we demonstrate how the system performs on a model of a football league. The data we use are the English Premier League results from the $2005/06$ to $2012/13$ seasons. In a season, a team plays all other teams twice. For each match played, a team receives points based on the number of goals they and their opponent score. If a team scores more goals than their opponent they receive $3$ points. If a team scores the same number of goals as their opponent they receive $1$ point. If a team scores fewer goals than their opponent they receive $0$ points. The rank of each team is determined by their total number of points, with the team with the highest number of points ranked $1$st. A tie in ranks is then broken by goal difference and then by the number of goals scored. We are interested in the probability of each rank position for all teams at the end of a season. The aim is to estimate these rank probabilities to a given accuracy. Thus, in this application we are concerned with maintaining the accuracy of multiple predictions. Throughout this section, we use the following notation. 
Let $\bs I_p$ be the $p \times p$ identity matrix and $\bs 1_p$ be a vector of $1$s of length $p$. Further, let $\text{N}(\bs\mu,\bs\Sigma)$ denote a multivariate normal distribution with mean $\bs\mu$ and covariance matrix $\bs\Sigma$. Denote the cardinality of a set $A$ by $\vnorm{A}$. We shall reserve the index $t=1,\dots,T$ for reference to seasons. Lastly, let $\text{logN}(\mu,\sigma^2)$ denote a log-normal distribution i.e.\ if $X\sim \text{N}(\mu,\sigma^2)$ then $\exp(X)\sim \text{logN}(\mu,\sigma^2)$. We begin by presenting a model for football game outcomes. The model we use is similar to that presented in \cite{doi:10.1080/01621459.1998.10474084} and \cite{RSSC:RSSC065}. \subsection{Football League Model}\label{sec:PL_model} Consider a model with hidden Markov process $X_t$ ($t\in\mathbb{N}$), observed process $Y_t$ ($t\in\mathbb{N}$) and parameter $\theta$. The observation $Y_t$ contains all observations for state $X_t$. Denote the $j$th observation of state $t$ as $Y_{j,t}$. Next define the $k$th observation batch of state $t$ as $\widetilde{Y}_{k,t}$ for $k=1,\dots,c_t$ for some $c_t\geq 1$. For instance, if the observations are batched in groups of $10$, the $k$th batch of state $t$ is $\widetilde{Y}_{k,t}=Y_{10k-9 ,t},\dots,Y_{10k,t}$. In this application section, we are interested in the model \begin{equation}\label{eq:batch_ssm} \begin{cases} p(x_t|x_{1:t-1},\theta)=p(x_t|x_{t-1},\theta)\\ p(\widetilde{y}_{k,t}|\widetilde{y}_{1:(k-1),t},y_{1:(t-1)},x_{1:t},\theta)=p(\widetilde{y}_{k,t}|x_t,\theta) \\ p(x_1|\theta), p(\theta) \end{cases}. \end{equation} where $\widetilde{y}_{1:0,t}$ is an empty observation batch introduced for notational convenience. In this section, the sequence of target distributions is defined as follows. Let $\varpi_{k,t}=p(x_{1:t},\theta|\widetilde{y}_{1:k,t},y_{1:(t-1)})$ for $t=1,2,\dots$ and $k=0,\dots,c_t$. 
Then, we are interested in the targets $\pi_n=\varpi_{\varphi_1(n),\varphi_2(n)}$ for $n\in\mathbb{N}$ where \begin{equation*} \varphi_2(n)=\max\left\{j\in\mathbb{N} : (n-1) \geq \sum_{i=1}^{j-1}(c_i+1) \right\} ,\quad \varphi_1(n)=n-1-\sum_{i=1}^{\varphi_2(n)-1}(c_i+1), \end{equation*} where we set $\sum_{i=1}^0(c_i+1)=0$. The transition steps occur at $n\in D=\{n\in\mathbb{N}:\varphi_1(n)=0 \}$. In this application, the transition functions $f_k$ ($k\in D$) are dictated by the model, namely $p(x_t|x_{t-1},\theta)$ in \eqref{eq:batch_ssm}. We now describe the states $X_t$, the observations $Y_t$ and the parameter $\theta$ in this football application. Each team is assumed to have a strength value (in $\mathbb{R}$) which remains constant within a season. Let $U_t$ be the set of teams that play in season $t$, $X_{i,t}$ be the strength of team $i$ in season $t$ and $\bs X_t=(X_{i,t})_{i\in U_t}$. To condense notation, for any set $A\subset U_t$ define $\bs X_{A,t}:=(X_{i,t})_{i\in A}$ and form the parameter vector $\bs\theta=(\lambda_H,\lambda_A,\sigma_p,\sigma_s,\eta,\mu_p)$, whose components we now define. At the end of every season, some teams are relegated and new teams are promoted to the league. Denote the set of promoted teams that begin season $t$ by $W_t$ and let $V_t=U_t\backslash W_t$ be the set of teams that remain in the league from season $t-1$ to $t$. The promoted teams' strengths are introduced such that $ \bs X_{W_t,t}|(\bs\theta,\bs X_{t-1}=\bs x_{t-1})\sim \text{N}\left(\mu_p \bs 1_{\left\vert W_t \right\vert} ,\sigma_p^2\bs I_{\left\vert W_t \right\vert}\right)$. Thus a promoted team's previous history in the league is not used. From season $t-1$ to $t$, the strengths of the teams that were not relegated are evolved such that $ \bs X_{V_t,t}|(\bs\theta,\bs X_{t-1}=\bs x_{t-1})\sim \text{N}\left(\eta \bs C_t \bs x_{V_t,t-1},\sigma_s^2\bs I_{\left\vert V_t \right\vert}\right)$, where $\bs C_t$ is a centering matrix. 
Thus between seasons, the strengths of the teams that are not relegated are centered around $0$ and expanded ($\eta>1$) or contracted ($\eta<1$). Next, consider a match, in season $t$, between home team $j$ and away team $k$ ($j,k\in U_t$). We assume the numbers of home goals $G_{j,H}^k$ and away goals $G_{k,A}^j$ are modelled by $ G_{j,H}^k|(\bs\theta,\bs X_t) \sim \text{Poisson} \left(\lambda_H\exp\left\{x_{j,t}-x_{k,t}\right\}\right)$ and $ G_{k,A}^j|(\bs\theta,\bs X_t)\sim \text{Poisson}\left(\lambda_A\exp\left\{x_{k,t}-x_{j,t}\right\}\right) $, independently of each other. The parameters $\lambda_H$ and $\lambda_A$ are strictly positive and pertain to the home and away advantage (or disadvantage), which is assumed to be the same across all teams and all seasons. More precisely, $\lambda_H$ ($\lambda_A$) is the expected number of home (away) goals in a match between two teams of equal strength. Finally, denote the results of season $t$ by $Y_t$: the numbers of home and away goals for all games in season $t$. For this football application, the sample space is $S_n=\mathbb{R}^{20\varphi_2(n)+2}\times(\mathbb{R}^+)^4$. For the first season strengths, we use an improper flat prior. For the home and away advantage we take Gamma priors with shapes $5$ and $2$ and scales $5$ and $1$, respectively. For $(\eta,\sigma_s)$ and $(\mu_p,\sigma_p)$ we take their Jeffreys priors. The Jeffreys prior was used for both $(\eta,\sigma_s)$ and $(\mu_p,\sigma_p)$ after considering the amount of information available for each parameter. For instance, if 10 seasons are considered, only 9 transitions between seasons are available for the likelihood of $(\eta,\sigma_s)$. Thus, using an informative prior would greatly influence the posterior distribution. This can also be argued for the promotion parameters $(\mu_p,\sigma_p)$. 
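To make the match model concrete, the goal distributions and the points rule can be sketched as follows (Python; the Poisson sampler uses Knuth's method since one is not in the standard library, and the strength and rate values below are purely illustrative):

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's method for Poisson(lam); adequate for the small rates here.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_match(x_home, x_away, lam_H, lam_A, rng):
    """Home and away goals: independent Poissons whose rates depend
    on the difference in team strengths."""
    g_home = sample_poisson(lam_H * math.exp(x_home - x_away), rng)
    g_away = sample_poisson(lam_A * math.exp(x_away - x_home), rng)
    return g_home, g_away

def points(goals_for, goals_against):
    """League scoring: 3 for a win, 1 for a draw, 0 for a loss."""
    return 3 if goals_for > goals_against else 1 if goals_for == goals_against else 0

rng = random.Random(1)
g_h, g_a = simulate_match(0.2, -0.1, 1.45, 1.03, rng)  # illustrative values
```

Simulating every fixture of a season in this way, once per posterior sample, is what produces the predicted rank tables used later.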
\subsection{The MCMC Step}\label{sec:mcmc-step} For the MCMC step in the RMCMC process (Algorithm \ref{code:rmcmc_process}), we use a Metropolis-Hastings algorithm \citep{annealing2,HASTINGS01041970}. In general, a different, potentially more complex MCMC method can be used. However, the system does not rely on the choice of MCMC method, and will work with a simple sampler, as demonstrated in this application. We use independent proposal densities for the separate parameters. Due to the high dimension of the combined states and parameter, we implement block updates \citep[Section 21.3.2]{MonteCarloHandbook}. This entails proposing changes to only part of the state and parameter at each step. The proposal densities used and the block updating are summarized in Algorithm \ref{code:MH-alg}. In the algorithm we propose new strengths for a single season $80$\% of the time and part of the parameter $\theta$ the remaining $20$\%. This was done so that the exploration of the chain was mainly focused on the states. The proposal density parameters were determined by consideration of the acceptance rate in a pre-initialization run of the MCMC. Lastly, the samples were written into the database in batches of $1,000$. \begin{algorithm}[tbp] \caption{Block Proposals for the Metropolis-Hastings Algorithm}\label{code:MH-alg} \begin{algorithmic}[1] \Statex Given $(x_{1:t},\lambda_H,\lambda_A,\eta,\sigma_s,\mu_p,\sigma_p)$. \State Generate $u\sim\text{Uniform}(0,1)$. \If{$u < 0.8$} \State Generate $v\sim\text{Uniform}\{1,\dots,t\}$. \State Propose $x_v^*|x_v\sim\text{N}(\bs x_v,0.0002 I_{\vnorm{U_v}})$ \EndIf \If{$u \geq 0.8$} Generate $w\sim\text{Uniform}\{1,\dots,4\}$. \If{$w=1$} propose $\lambda_H^*|\lambda_H\sim\text{logN}(\log(\lambda_H),0.01^2)$. \EndIf \If{$w=2$} propose $\lambda_A^*|\lambda_A\sim\text{logN}(\log(\lambda_A),0.01^2)$. \EndIf \If{$w=3$} propose $\eta^*|\eta\sim\text{N}(\eta,0.01)$ and $\sigma_s^*|\sigma_s\sim\text{logN}(\log(\sigma_s),0.005)$. 
\EndIf \If{$w=4$} propose $\mu_p^*|\mu_p\sim \text{N}(\mu_p,0.0002)$ and $\sigma_p^*|\sigma_p\sim\text{logN}(\log(\sigma_p),0.002)$. \EndIf \EndIf \end{algorithmic} \end{algorithm} \subsection{League Predictions}\label{sec:league-predictions} In \S \ref{sec:PL_model} we introduced a model for the team strengths and the outcomes of football matches, in terms of goals scored. In \S \ref{sec:mcmc-step} we presented the MCMC method which produces the samples used to estimate the states and parameters of the model. We now explain how these samples are used to predict the end of season ranks of each team, which are our estimates of interest, i.e.\ $\pi_n g_n$. For each sample, all games in a season are simulated once. Thus each sample gives a predicted end of season rank table. The distribution across these predicted rank tables gives the estimated probabilities of the ranks of each team. This distribution is the posterior summary of interest whose accuracy we aim to control. \subsection{System Parameters}\label{sec:system-parameters} As mentioned in \S \ref{sec:practical-notes}, we performed a pre-initialization run using $10,000$ samples to determine the subsampling size. Based on the results from the 2005/06 to the 2009/10 season, we found that a subsampling size of $80$ gave $\rho\approx 2$. We used a burn-in period of $B_0=10,000$ within the RMCMC process. Within the control process we use $\beta_1=0.01$ and $\beta_2=0.0125$ for the accuracy thresholds and $\gamma_1=0.1$ and $\gamma_2=0.75$ for the quality thresholds. Whenever the control process demanded a change in $N_{\text{MAX}}$, it was increased or decreased by $10$\% of its current value. Finally, we set $N_{\text{MIN}}=1,000$. As mentioned in \S \ref{sec:league-predictions}, our estimate consists of rank probabilities for each team, i.e.\ each team has estimated probabilities for ending the season ranked $1\text{st},\dots,20\text{th}$. 
The accuracy of each of the $400$ rank probabilities is calculated using the method presented in \S \ref{sec:est_acc} with two batch lengths $b=10$ and $b=50$. To be conservative, the maximum of the two standard deviations is reported as the accuracy of the estimate. \subsection{Results}\label{sec:results} The system is initialized with the results from the 2005/06 to 2009/10 seasons of the English Premier League. Using the samples from this initialisation, we proceeded with $3$ separate runs of the system. The system itself remained unchanged in each of the runs; however, the way the results for the next $2$ seasons were revealed varied. The match results were revealed individually, in batches of $7$ days and in batches of $30$ days. A new data batch was revealed only once the RMCMC process was paused. In Table \ref{tab:batch_sizes} we present the system results of each run. We see that for larger data batches, the RMCMC process is resumed more often. Further, the percentage of new samples generated after new data are revealed increases with the size of the data batch. The average percentage of new samples is calculated as follows. Before a new data batch is revealed, the percentage of new samples in the database generated after the introduction of the latest data is calculated. The average of these percentages is then taken over the data batches. This means that for larger data batches the RMCMC process will often be resumed to generate new samples that replace most of the samples already in the database. \begin{table}[tbp] \caption{System summary for various data batch sizes.} \label{tab:batch_sizes} \centering \begin{tabular}{llll} \toprule & individual & 7 day & 30 day \\ \midrule No. of batches & 760 & 70 & 20 \\ Range of games per batch & [1,1] & [3,21] & [10,53] \\ No. times RMCMC resumed & 39 & 31 & 18 \\ Total No.
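The batch-means accuracy computation just described can be sketched as follows (a minimal illustration of the approach of \S \ref{sec:est_acc}; the function names are ours):

```python
import numpy as np

def batch_means_sd(chain, b):
    # Monte Carlo standard deviation of the sample mean of a (possibly
    # autocorrelated) chain, estimated from non-overlapping batch means.
    chain = np.asarray(chain, dtype=float)
    m = len(chain) // b                            # number of complete batches
    batch_means = chain[: m * b].reshape(m, b).mean(axis=1)
    return np.sqrt(batch_means.var(ddof=1) / m)

def conservative_accuracy(chain, lengths=(10, 50)):
    # Report the larger of the two batch-length estimates, mirroring the
    # conservative choice described in the text.
    return max(batch_means_sd(chain, b) for b in lengths)

rng = np.random.default_rng(0)
chain = rng.normal(size=5000)    # iid chain: the truth is 1/sqrt(5000) ~ 0.014
acc = conservative_accuracy(chain)
assert 0.005 < acc < 0.05
```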
MCMC steps & 24,010,000 & 14,230,000 & 9,240,000 \\ Average \% of new samples & 2\% & 20\% & 53.6\% \\ \bottomrule \end{tabular} \end{table} In Table \ref{tab:parameter_estimates} we present the estimated posterior mean of the components of $\theta$ at the end of the run for each batch size. As expected, being based on the same data, these final estimates are almost identical for the various batch sizes. In Table \ref{tab:condensed_rank_tab} we present the predicted end of 2012/13 season ranks for selected teams and ranks. Each team and rank has $3$ predictions given by the runs using different batch sizes. For each batch size, these predictions are controlled. More precisely, for every rank of every team the standard deviation of the prediction is controlled below $\beta_2=0.0125$. This is consistent with the predictions across the various batch sizes. The predictions for all teams and ranks can be found in \S \ref{sec:more results} in the Appendix. \begin{table}[tbp] \caption{End-of-run estimated parameter means with $95$\% credible intervals.} \centering \begin{tabular}{clll} \toprule Parameter& individual & 7 day & 30 day \\ \midrule $\lambda_H$ & 1.446 (1.406,1.497) & 1.447 (1.406,1.496) & 1.446 (1.406,1.494) \\ $\lambda_A$ & 1.031 (0.998,1.073) & 1.032 (0.995,1.073) & 1.032 (0.997,1.077) \\ $\eta$ & 0.970 (0.865,1.054) & 0.967 (0.865,1.049) & 0.964 (0.864,1.048) \\ $\sigma_s$ & 0.083 (0.061,0.117) & 0.084 (0.059,0.113) & 0.086 (0.059,0.116) \\ $-\mu_p$ & 0.245 (0.172,0.316) & 0.242 (0.167,0.322) & 0.244 (0.171,0.315) \\ $\sigma_p$ & 0.116 (0.049,0.204) & 0.117 (0.063,0.191) & 0.114 (0.06,0.202) \\ \bottomrule \end{tabular} \label{tab:parameter_estimates} \end{table} \begin{table}[tbp] \caption{End of 2012/13 season rank predictions for selected teams and ranks.
Each team and rank has $3$ predictions given by (from top to bottom) the individual, $7$ day and $30$ day batch run.} \label{tab:condensed_rank_tab} \centering \begin{tabular}{lllllll} \toprule &\multicolumn{6}{c}{Rank}\\ \cmidrule(r){2-7} Team & 1 & 2 & 3 & $\dots$ & 18 & 19 \\ \midrule \multirow{3}{*}{Arsenal} & 8\% & 14\% & 17\% & $\dots$ & 0\% & 0\% \\ & 8\% & 14\% & 17\% & $\dots$ & 0\% & 0\% \\ & 8\% & 15\% & 19\% & $\dots$ & 0\% & 0\% \\ \midrule \multirow{3}{*}{Aston Villa} & 0\% & 0\% & 1\% & $\dots$ & 6\% & 5\% \\ & 0\% & 0\% & 1\% & $\dots$ & 6\% & 5\% \\ & 0\% & 0\% & 1\% & $\dots$ & 6\% & 5\% \\ \midrule \multirow{3}{*}{Chelsea} & 9\% & 15\% & 19\% & $\dots$ & 0\% & 0\% \\ & 9\% & 15\% & 21\% & $\dots$ & 0\% & 0\% \\ & 10\% & 16\% & 20\% & $\dots$ & 0\% & 0\% \\ \midrule \multirow{3}{*}{Everton} & 1\% & 2\% & 6\% & $\dots$ & 1\% & 1\% \\ & 1\% & 3\% & 5\% & $\dots$ & 1\% & 1\% \\ & 1\% & 2\% & 5\% & $\dots$ & 1\% & 1\% \\ \midrule \qquad \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ \midrule \multirow{3}{*}{Wigan} & 0\% & 0\% & 0\% & $\dots$ & 10\% & 11\% \\ & 0\% & 0\% & 0\% & $\dots$ & 10\% & 11\% \\ & 0\% & 0\% & 0\% & $\dots$ & 11\% & 12\% \\ \bottomrule \end{tabular} \end{table} In the following, we present some results for the $7$ day batch run only. Further results for all the batch sizes are presented in \S \ref{sec:more results} in the Appendix. In Figure \ref{fig:system_res_weekly} we display the accuracy of the predictions ($A$), the quality of the samples ($Q$) and the number of samples in the database ($N$) as new data are revealed. In Figure \ref{fig:here_weekly_A}, the control process attempts to keep the accuracy of the predictions between $\beta_1=0.01$ and $\beta_2=0.0125$. Occasionally, after new data are revealed, the accuracy exceeds the upper threshold $\beta_2$. The accuracy drops to essentially $0$ at the end of each season prior to the introduction of the next season's fixtures.
Similarly, in Figure \ref{fig:here_weekly_Q}, the control process attempts to keep the quality of the samples between $\gamma_1=0.1$ and $\gamma_2=0.75$. In Figure \ref{fig:here_weekly_N}, we see that the number of samples in the database, $N$, varies over time. More precisely, after $5$ batches of data, $19,246$ samples are used. However, later the number of samples used decreases to approximately $14,000$ samples. Similar features are seen for the different batch sizes. The change in the accuracy of the predictions and the quality of the samples gets smaller as the batch size decreases. Figure \ref{fig:KM_bulk} is a plot of the Kaplan-Meier estimator \citep{km} of the survival function of the samples in the database as new data are revealed. More precisely, let $U$ be the number of new data batches observed before a sample is deleted. Then Figure \ref{fig:KM_bulk} is a plot of the Kaplan-Meier estimator of $S(u)=P(U>u)$. The Kaplan-Meier estimator takes into consideration the right-censoring due to the end of the simulation, i.e.\ samples that could have survived longer after the simulation ended. We see that samples survive as new data are observed, e.g.\ a sample survives $10$ or more batches with probability $0.33$. Thus samples are reused as envisaged in \S \ref{sec:reweight}. Lastly, from using the different batch sizes (see \S \ref{sec:more results} in the Appendix) we see that samples survive more data batches as the size of the batch gets smaller.
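The survival estimate is straightforward to reproduce. The sketch below (ours, run on toy lifetimes rather than the data behind Figure \ref{fig:KM_bulk}) computes the Kaplan-Meier estimator of $S(u)=P(U>u)$ with right-censoring:

```python
import numpy as np

def kaplan_meier(times, observed):
    # Kaplan-Meier estimate of S(u) = P(U > u) for lifetimes measured in
    # data batches; observed=False marks right-censoring at the end of a run.
    times = np.asarray(times)
    observed = np.asarray(observed, dtype=bool)
    survival = {}
    s = 1.0
    for t in np.unique(times[observed]):
        at_risk = np.sum(times >= t)              # still alive just before t
        deaths = np.sum((times == t) & observed)  # deletions at t
        s *= 1.0 - deaths / at_risk
        survival[int(t)] = s
    return survival

# Toy lifetimes (in batches); the last two samples were censored when the
# run ended, so they contribute to the risk sets but not to the deaths.
S = kaplan_meier([2, 3, 3, 5, 8, 8], [True, True, True, True, False, False])
assert S[5] < S[3] < S[2] <= 1.0
```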
\begin{figure} \caption{System results using $7$ day batches where (a) is the accuracy ($A$) of the predictions, (b) is the quality ($Q$) of the samples, (c) is the number of samples in the database ($N$) as new data are revealed and (d) is the Kaplan-Meier estimator of the samples' lifetime.} \label{fig:here_weekly_A} \label{fig:here_weekly_Q} \label{fig:here_weekly_N} \label{fig:KM_bulk} \label{fig:system_res_weekly} \end{figure} In order to determine the quality of the predicted ranks given by the system, we performed a separate run and considered the coverage of the prediction intervals. For this run the initial observations consisted of the 2005/06 to 2007/08 seasons results. We then introduced the match results for the 2008/09 to 2012/13 seasons in $7$ day batches. Over the $5$ seasons there were $178$ batches of intervals for each team. Before each batch of results was introduced, conservative $50$\% and $95$\% intervals were formed for the predicted end of season rank of each team. These confidence intervals are conservative due to the discreteness of ranks. The true mass contained in the conservative $50$\% intervals was on average $76.1$\%. Similarly, the true mass contained in the conservative $95$\% intervals was on average $98.8$\%. When compared with the true end of season ranks, $74.2$\% of the true ranks lay in the conservative $50$\% intervals and $99.3$\% lay in the $95$\% intervals. \section{Application to a Linear Gaussian Model}\label{sec:kalman_compare} In \S \ref{sec:football} we are unable to check if the strengths of the teams and the other parameters (i.e.\ the states and parameters) are being estimated accurately as their true distributions are unknown. In this section we inspect the estimates given by the RMCMC system using simulated data. We use a linear Gaussian model such that the Kalman filter \citep{kalman1960new} can be applied. This simulation will allow us to compare the RMCMC system and the Kalman filter estimates.
This linear Gaussian model was chosen to resemble the football model described in \S \ref{sec:football}. For this model the Kalman filter gives the exact conditional distribution. Therefore, the Kalman filter will provide the benchmark estimates to compare against. Consider the model defined as follows:\ \begin{equation} \label{eq:lgm} \begin{cases} \text{State}:\quad\quad\quad X_t=A X_{t-1}+\Phi_t, & \Phi_t\stackrel{\text{iid}}{\sim}\text{N}(0,\Sigma)\\ \text{Observation}:\quad Y_t=B X_{t}+\Psi_t, & \Psi_t\stackrel{\text{iid}}{\sim}\text{N}(0,\Xi) \end{cases},\quad\text{for } t=1,2,\dots \end{equation} and prior distribution $X_0\sim \text{N}(\mu_0,\Sigma_0)$. For this particular simulation we chose $A=0.7 (\mathbb{I}_{20}-\frac{1}{20}1_{20}1_{20}^T)$, $\Sigma=0.05\mathbb{I}_{20}$ and $\Xi=0.02 \mathbb{I}_{380}$. The matrix $B$ is constructed according to the football matches in the English Premier League in the $2005/06$ season. More precisely, each row of $B$ consists of zeros apart from two entries at positions $i$ and $j$ corresponding to a football match between home team $i$ and away team $j$. A $2$ is put in the $i$th position and a $1$ at the $j$th. The rows are ordered chronologically from top to bottom. For the prior distribution, we set $\mu_0$ to be a vector of zeroes and $\Sigma_0=\mathbb{I}_{20}$. Denote the $i$th component of $X_t$ as $X_{i,t}$. A single realisation of the states and observations was generated for $t=1,\dots,7$. Using these observations, the RMCMC system was run $100$ times to estimate the posterior means. This was compared with the estimates given by the Kalman filter. Each run of the RMCMC system was initialised using the observations from states $t=1,\dots,5$. The sequence of targets is similar to that used in \S \ref{sec:PL_model} with $\varpi_{k,t}=p(x_{1:t}|\widetilde{y}_{1:k,t},y_{1:(t-1)})$. For the transition function $f_j$ ($j\in D$) we use the observation equation in \eqref{eq:lgm}.
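For completeness, the Kalman recursions that provide the benchmark can be sketched as follows (our illustration on toy dimensions; the model above uses $20$ states and $380$ observation rows, and the Joseph-form covariance update is a standard numerically stable choice, not a detail taken from the paper):

```python
import numpy as np

def kalman_filter(ys, A, B, Sigma, Xi, mu0, Sigma0):
    # Exact filtering means/covariances for X_t = A X_{t-1} + N(0, Sigma),
    # Y_t = B X_t + N(0, Xi), with prior X_0 ~ N(mu0, Sigma0).
    mu, P = mu0, Sigma0
    means, covs = [], []
    I = np.eye(len(mu0))
    for y in ys:
        mu_p, P_p = A @ mu, A @ P @ A.T + Sigma              # predict
        S = B @ P_p @ B.T + Xi                               # innovation cov.
        K = P_p @ B.T @ np.linalg.inv(S)                     # Kalman gain
        mu = mu_p + K @ (y - B @ mu_p)                       # filtered mean
        P = (I - K @ B) @ P_p @ (I - K @ B).T + K @ Xi @ K.T # Joseph form
        means.append(mu)
        covs.append(P)
    return means, covs

rng = np.random.default_rng(0)
d, m = 4, 6                                  # toy sizes instead of 20 and 380
A = 0.7 * (np.eye(d) - np.ones((d, d)) / d)  # same centring structure as above
Sigma, Xi = 0.05 * np.eye(d), 0.02 * np.eye(m)
B = rng.integers(0, 3, size=(m, d)).astype(float)
x = np.zeros(d)
ys = []
for _ in range(5):                           # simulate from the model itself
    x = A @ x + rng.multivariate_normal(np.zeros(d), Sigma)
    ys.append(B @ x + rng.multivariate_normal(np.zeros(m), Xi))
means, covs = kalman_filter(ys, A, B, Sigma, Xi, np.zeros(d), np.eye(d))
assert np.allclose(covs[-1], covs[-1].T)     # filtered covariance is symmetric
```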
Finally, we take $g_n$ to be the identity function, so that our estimate of interest is the posterior mean. The observations were revealed in batches of $10$, so that each state consisted of $38$ batches. Specifically, the vector $Y_t$ contains all $380$ observations, where we denote the $j$th observation as $Y_{j,t}$. The $k$th observation batch of state $t$ is $\widetilde{Y}_{k , t}:=Y_{10k-9,t},\dots,Y_{10k,t}$. Therefore, after initialisation, the batches $\widetilde{Y}_{1 , 6},\dots,\widetilde{Y}_{38 , 6},\widetilde{Y}_{1 , 7},\dots \widetilde{Y}_{38 , 7}$ are revealed. Within the control process we again use $\beta_1=0.01$, $\beta_2=0.0125$ and $\gamma_1=0.1$, $\gamma_2=0.75$. Also, we set $N_{\text{MIN}}=1,000$. For this simulation, controlling the accuracy $A$ pertains to controlling the accuracy of the posterior mean of each component of every state as new data are observed. We use a Gibbs sampler \citep[see e.g.][]{geman1984stochastic} as the conditional distributions for the states can be explicitly computed for this model. Each Gibbs sampler step consists of updating a single randomly chosen state as outlined in Algorithm \ref{code:gibbs-sampler}. We used no subsampling and a burn-in period of $B_0=1,000$. The accuracy was calculated using the batch mean approach described in \S \ref{sec:est_acc} with batch lengths $10$ and $25$. The RMCMC process wrote $500$ samples to the database at a time. \begin{algorithm}[tbp] \caption{Gibbs Sampler:\ Single step}\label{code:gibbs-sampler} \begin{algorithmic}[1] \Statex Given $X_1,\dots,X_n$ and $Y_1,\dots,Y_n$. \State Generate $s\sim\text{Uniform}\{1,\dots,n\}$. \If{$s = 1$} Draw $Z_s$ from the pdf $f(x_1|X_2,\dots,X_n,Y_1,\dots,Y_n)$. \EndIf \If{$s >1$ and $s < n$} Draw $Z_s$ from the pdf $f(x_s|X_1,\dots,X_{s-1},X_{s+1},\dots,X_n,Y_1,\dots,Y_n)$. \EndIf \If{$s=n$} Draw $Z_s$ from the pdf $f(x_s|X_1,\dots,X_{n-1},Y_1,\dots,Y_n)$. \EndIf \State Let $X_s=Z_s$.
\end{algorithmic} \end{algorithm} Figure \ref{fig:kf-plots} presents results comparing the Kalman filter estimates with the $100$ RMCMC estimates as the observations are revealed. The upper row of Figure \ref{fig:kf-plots} shows violin plots \citep[see e.g.\ ][]{violin-plots} of the difference between the Kalman filter and the $100$ RMCMC system posterior means of selected states and components. Violin plots are smoothed histograms either side of a box plot of the data. The estimate may be biased due to the scaling and normalisation of the weights carried out by the update process (\S \ref{sec:reweight}) (see for example \cite{hesterberg1995weighted} for the bias in weighted importance sampling). This is apparent in the posterior mean for $X_{18,6}$ (Fig.~\ref{fig:team18}), as in $81$ out of the $100$ runs the RMCMC process remained paused after $\widetilde{Y}_{37,6}$ was revealed. For these $81$ runs, the posterior mean was formed using weighted importance sampling. In contrast, we see nearly no bias in the posterior mean for $X_{5,6}$ after $\widetilde{Y}_{1,6}$ was revealed (Fig.~\ref{fig:team5}), where the RMCMC process was started in every run (the posterior mean given by the Gibbs sampler is unbiased). Table \ref{tab:MSE} shows the estimated bias of the $100$ RMCMC system posterior means with respect to the estimate given by the Kalman filter. Table \ref{tab:SDs} shows the standard deviation of the $100$ RMCMC system posterior means. We see that the standard deviation (the accuracy $A$) is controlled below the imposed threshold of $\beta_2=0.0125$. The lower row of Figure \ref{fig:kf-plots} shows Q-Q plots of the Kalman filter estimate and the weighted RMCMC samples posterior distribution at the $1\%,2\%,\dots,99\%$ quantiles from one of the $100$ runs. The Q-Q plots for other components and RMCMC runs are similar to those presented. These Q-Q plots indicate that the two distributions are roughly similar.
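Constructing such Q-Q plots requires quantiles of a weighted sample. A minimal sketch (ours; with equal weights it reduces to ordinary empirical quantiles) is:

```python
import numpy as np

def weighted_quantiles(samples, weights, qs):
    # Empirical quantiles of a weighted sample: sort the draws, accumulate
    # the normalised weights into a CDF, then invert it by interpolation.
    order = np.argsort(samples)
    s = np.asarray(samples, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(qs, cdf, s)

rng = np.random.default_rng(0)
draws = rng.normal(size=20000)
weights = np.ones_like(draws)       # equal weights: ordinary quantiles
qs = np.arange(0.01, 1.0, 0.01)     # the 1%,...,99% grid used in the text
q_hat = weighted_quantiles(draws, weights, qs)
assert abs(q_hat[49]) < 0.1         # the median of N(0,1) is near 0
```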
\begin{figure} \caption{Simulation results:\ (a) and (b) are violin plots of the difference between the $100$ RMCMC and the Kalman filter posterior means. (c) and (d) are Q-Q plots of the Kalman filter and RMCMC system posterior distribution from a single run (black).} \label{fig:team5} \label{fig:team18} \label{fig:QQ_1} \label{fig:QQ_2} \label{fig:kf-plots} \end{figure} Comparison of the two distributions is difficult as the RMCMC samples are not only weighted but are also dependent. Thus tests, such as the Kolmogorov-Smirnov test \citep[see e.g.\ p.\ 35 of][]{opac-b1079143}, cannot be applied. \begin{table}[tbp] \centering \caption{Estimated bias of the $100$ system posterior means with respect to the Kalman filter posterior mean.} \label{tab:MSE} \setlength{\tabcolsep}{2.5pt} \begin{tabular}{rddd} \toprule & \multicolumn{3}{c}{Last batch revealed}\\ \cmidrule(r){2-4} & \multicolumn{1}{c}{$\widetilde{Y}_{1,6}$} & \multicolumn{1}{c}{$\widetilde{Y}_{15,6}$} & \multicolumn{1}{c}{$\widetilde{Y}_{37,6}$}\\ \midrule $X_{5,6}$ & 0.0009 & 0.0004 & 0.0004\\ $X_{18,6}$ & -0.0012 & -0.0001 & 0.0049\\ \bottomrule \end{tabular} \end{table} \begin{table}[tbp] \centering \caption{Empirical standard deviations of the posterior means given by $100$ runs of the system.}\label{tab:SDs} \setlength{\tabcolsep}{2.5pt} \begin{tabular}{rddd} \toprule & \multicolumn{3}{c}{Last batch revealed}\\ \cmidrule(r){2-4} & \multicolumn{1}{c}{$\widetilde{Y}_{1,6}$} & \multicolumn{1}{c}{$\widetilde{Y}_{15,6}$} & \multicolumn{1}{c}{$\widetilde{Y}_{37,6}$}\\ \midrule $X_{5,6}$ & 0.0109 & 0.0023 & 0.0016\\ $X_{18,6}$ & 0.0110 & 0.0019 &
0.0030 \\ \bottomrule \end{tabular} \hspace{13pt} \begin{tabular}{rddd} \toprule & \multicolumn{3}{c}{Last batch revealed}\\ \cmidrule(r){2-4} & \multicolumn{1}{c}{$\widetilde{Y}_{3,7}$} & \multicolumn{1}{c}{$\widetilde{Y}_{10,7}$} & \multicolumn{1}{c}{$\widetilde{Y}_{20,7}$}\\ \midrule $X_{5,7}$ & 0.0065 & 0.0030 & 0.0026\\ $X_{18,7}$ & 0.0077 & 0.0037 & 0.0022\\ \bottomrule \end{tabular} \end{table} \section{Conclusion}\label{sec:conclusion} We have presented a new method that produces estimates from a sequence of distributions while maintaining their accuracy at a user-specified level. In \S \ref{sec:football} we demonstrated that the system is not resumed each time an observation is revealed; thus the samples are reused. Therefore, we proceed with importance sampling whenever possible. Further, we attempt to reduce the size of the sample database whenever possible (\S \ref{sec:control}), thus limiting the computational effort of the update process and the calculation of the estimates or predictions. In \S \ref{sec:kalman_compare} we used a linear Gaussian model to show that the system produced estimates comparable to those given by the Kalman filter. Proving exactness of the estimates produced by the system, if possible, is a topic for future work. For our system, we advocate using a standard MCMC method such as a Metropolis-Hastings algorithm before resorting to another more complicated method such as particle MCMC methods \citep{PMCMC} or SMC$^2$ \citep{RSSB:RSSB1046}. By starting with a standard MCMC approach, we avoid choosing the number of particles, choosing the transition densities and the resampling step that come with using a particle filter, not to mention the higher computational cost. \appendix \section{Further System Results}\label{sec:more results} In this section we present further results from \S 4.5 in the main article.
In Figure \ref{fig:system_parameters} we display the change of the accuracy of the predictions ($A$), the quality of the samples ($Q$) and the number of samples in the database ($N$) as new data are revealed. As expected, the larger the batch size the more frequently the accuracy of the predictions and the quality of the samples exceed the thresholds. Tables \ref{tab:pred_single_ranks}, \ref{tab:pred_weekly_ranks} and \ref{tab:pred_monthly_ranks} present the predicted end of 2012/13 English Premier League ranks for the various batch sizes. The predictions are similar for all batch sizes. This is unsurprising since the predictions are based on the same data. Each probability (percentage) in Tables \ref{tab:pred_single_ranks}, \ref{tab:pred_weekly_ranks} and \ref{tab:pred_monthly_ranks} is controlled. More precisely, for each team and each rank the standard deviation of the reported probability (percentage) is controlled below $\beta_2=0.0125$ (as set in the simulation in the main article \S 4.4). The survival of the samples as new data are observed varies greatly depending on the batch size (Figure \ref{fig:system_survival}). From the Kaplan-Meier estimators in Figures \ref{fig:single_survival}, \ref{fig:weekly_survival} and \ref{fig:monthly_survival} we observe that smaller batches increase the number of batches a sample survives. Hence, using smaller data batches results in samples being reused more. \begin{figure} \caption{Panel of plots of the system variables as new data, of varying size, are observed. Columns, from left to right, are the accuracy of the predictions ($A$), the quality of the samples ($Q$) and the number of samples in the database ($N$).
Rows, from top to bottom are for individual, weekly and monthly data sizes.} \label{fig:single_A} \label{fig:single_Q} \label{fig:single_N} \label{fig:weekly_A} \label{fig:weekly_Q} \label{fig:weekly_N} \label{fig:monthly_A} \label{fig:monthly_Q} \label{fig:monthly_N} \label{fig:system_parameters} \end{figure} \begin{figure} \caption{Kaplan-Meier estimators of the survival of the samples as new data are observed using (a) individual results, (b) 7 day batches and (c) 30 day batches.} \label{fig:single_survival} \label{fig:weekly_survival} \label{fig:monthly_survival} \label{fig:system_survival} \end{figure} \begin{table}[H] \caption{Predicted end of season 2012/13 ranks for the English Premier League using individual results reported in percent.} \label{tab:pred_single_ranks} \centering \tabcolsep=0.16cm \centering \begin{tabular}{lllllllllllllllllllll} \toprule & \multicolumn{20}{c}{Rank} \\ \cmidrule(r){2-21} Team & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ \midrule Arsenal & 8 & 14 & 17 & 17 & 15 & 10 & 7 & 4 & 3 & 2 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Aston Villa & 0 & 0 & 1 & 1 & 3 & 3 & 5 & 6 & 7 & 7 & 8 & 8 & 8 & 7 & 8 & 7 & 6 & 6 & 5 & 4 \\ Chelsea & 9 & 15 & 19 & 17 & 13 & 9 & 6 & 4 & 2 & 2 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Everton & 1 & 2 & 6 & 8 & 11 & 11 & 12 & 10 & 9 & 7 & 6 & 5 & 4 & 3 & 2 & 1 & 2 & 1 & 1 & 0 \\ Fulham & 0 & 1 & 2 & 4 & 5 & 7 & 8 & 9 & 9 & 8 & 8 & 6 & 6 & 7 & 5 & 4 & 4 & 3 & 2 & 1 \\ Liverpool & 2 & 5 & 8 & 11 & 13 & 13 & 10 & 9 & 7 & 6 & 5 & 3 & 2 & 2 & 2 & 1 & 1 & 0 & 0 & 0 \\ Man City & 32 & 28 & 18 & 10 & 5 & 3 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Man United & 46 & 26 & 12 & 7 & 4 & 2 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Newcastle & 0 & 1 & 2 & 3 & 5 & 6 & 8 & 9 & 10 & 8 & 8 & 7 & 6 & 6 & 5 & 5 & 4 & 3 & 2 & 2 \\ Norwich & 0 & 0 & 0 & 1 & 1 & 2 & 3 & 4 & 4 & 6 & 6 & 8 & 8 & 8 & 8 & 9 & 9 & 8 & 9 & 7 \\ QPR & 0 & 0 & 0 & 0 
& 0 & 1 & 2 & 2 & 3 & 5 & 5 & 6 & 6 & 7 & 8 & 9 & 9 & 11 & 13 & 12 \\ Reading & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 2 & 4 & 4 & 5 & 5 & 7 & 7 & 7 & 9 & 9 & 10 & 12 & 14 \\ Southampton & 0 & 0 & 0 & 1 & 1 & 2 & 2 & 3 & 3 & 4 & 5 & 5 & 6 & 7 & 8 & 8 & 10 & 11 & 11 & 13 \\ Stoke & 0 & 0 & 1 & 1 & 2 & 3 & 4 & 6 & 7 & 7 & 7 & 8 & 8 & 7 & 8 & 7 & 7 & 7 & 7 & 5 \\ Sunderland & 0 & 0 & 1 & 2 & 4 & 5 & 6 & 8 & 8 & 8 & 8 & 8 & 7 & 7 & 6 & 6 & 5 & 4 & 4 & 3 \\ Swansea & 0 & 0 & 0 & 1 & 1 & 3 & 4 & 5 & 6 & 6 & 7 & 8 & 8 & 8 & 8 & 8 & 8 & 7 & 7 & 6 \\ Tottenham & 2 & 7 & 12 & 14 & 14 & 12 & 10 & 8 & 6 & 4 & 3 & 2 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 0 \\ West Brom & 0 & 0 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 8 & 7 & 8 & 8 & 8 & 7 & 8 & 7 & 7 & 5 & 5 \\ West Ham & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 3 & 3 & 4 & 5 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 11 & 14 \\ Wigan & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 9 & 10 & 10 & 11 & 13 \\ \bottomrule \end{tabular} \end{table} \begin{table}[H] \caption{Predicted end of season 2012/13 ranks for the English Premier League using 7 day batches reported in percent.} \label{tab:pred_weekly_ranks} \centering \tabcolsep=0.16cm \begin{tabular}{lllllllllllllllllllll} \toprule & \multicolumn{20}{c}{Rank} \\ \cmidrule(r){2-21} Team & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ \midrule Arsenal & 8 & 14 & 17 & 19 & 13 & 9 & 6 & 4 & 3 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Aston Villa & 0 & 0 & 1 & 1 & 3 & 4 & 5 & 6 & 7 & 7 & 7 & 9 & 8 & 8 & 7 & 7 & 6 & 6 & 5 & 4 \\ Chelsea & 9 & 15 & 21 & 16 & 12 & 9 & 6 & 4 & 3 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Everton & 1 & 3 & 5 & 8 & 11 & 12 & 13 & 10 & 8 & 6 & 5 & 4 & 3 & 3 & 2 & 2 & 1 & 1 & 1 & 0 \\ Fulham & 0 & 1 & 2 & 3 & 5 & 6 & 8 & 9 & 9 & 9 & 9 & 7 & 7 & 5 & 5 & 4 & 4 & 3 & 3 & 2 \\ Liverpool & 2 & 4 & 7 & 11 & 15 & 13 & 10 & 9 & 7 & 6 & 4 & 3 & 3 & 2 & 2 & 1 & 1 & 1 & 0 & 0 \\ Man City & 29 & 27 & 18 & 10 & 6 & 4 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 
0 & 0 & 0 & 0 & 0 \\ Man United & 47 & 26 & 14 & 7 & 3 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Newcastle & 0 & 1 & 2 & 3 & 5 & 7 & 9 & 9 & 8 & 9 & 8 & 7 & 7 & 6 & 6 & 4 & 3 & 3 & 2 & 1 \\ Norwich & 0 & 0 & 0 & 0 & 2 & 2 & 3 & 4 & 5 & 6 & 6 & 8 & 7 & 8 & 7 & 9 & 9 & 9 & 9 & 8 \\ QPR & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 2 & 3 & 4 & 5 & 6 & 6 & 7 & 7 & 8 & 9 & 12 & 13 & 13 \\ Reading & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 2 & 3 & 4 & 5 & 6 & 5 & 7 & 8 & 8 & 10 & 11 & 11 & 15 \\ Southampton & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 3 & 5 & 5 & 5 & 6 & 7 & 8 & 9 & 9 & 10 & 12 & 13 \\ Stoke & 0 & 0 & 0 & 1 & 2 & 3 & 4 & 6 & 6 & 7 & 8 & 8 & 8 & 7 & 7 & 8 & 7 & 7 & 7 & 5 \\ Sunderland & 0 & 0 & 1 & 3 & 3 & 5 & 7 & 8 & 8 & 8 & 8 & 8 & 8 & 7 & 7 & 5 & 5 & 4 & 3 & 2 \\ Swansea & 0 & 0 & 0 & 1 & 1 & 2 & 3 & 4 & 6 & 6 & 7 & 7 & 8 & 8 & 8 & 8 & 9 & 8 & 7 & 6 \\ Tottenham & 4 & 8 & 11 & 14 & 14 & 13 & 10 & 8 & 6 & 4 & 3 & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ West Brom & 0 & 0 & 0 & 1 & 2 & 3 & 4 & 6 & 7 & 7 & 7 & 7 & 8 & 7 & 8 & 8 & 7 & 6 & 5 & 5 \\ West Ham & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 3 & 3 & 5 & 4 & 5 & 7 & 7 & 9 & 9 & 9 & 11 & 11 & 14 \\ Wigan & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 3 & 3 & 5 & 5 & 6 & 7 & 9 & 8 & 9 & 10 & 10 & 11 & 12 \\ \bottomrule \end{tabular} \end{table} \begin{table}[H] \caption{Predicted end of season 2012/13 ranks for the English Premier League using 30 day batches reported in percent.} \label{tab:pred_monthly_ranks} \centering \tabcolsep=0.16cm \centering \begin{tabular}{lllllllllllllllllllll} \toprule & \multicolumn{20}{c}{Rank} \\ \cmidrule(r){2-21} Team & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ \midrule Arsenal & 8 & 15 & 19 & 16 & 12 & 10 & 6 & 5 & 3 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Aston Villa & 0 & 0 & 1 & 1 & 2 & 4 & 6 & 6 & 7 & 7 & 7 & 8 & 8 & 7 & 6 & 7 & 6 & 6 & 5 & 3 \\ Chelsea & 10 & 16 & 20 & 16 & 12 & 9 & 6 & 4 & 2 & 2 & 2 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ Everton & 1 & 2 & 5 & 
8 & 11 & 10 & 10 & 10 & 9 & 8 & 5 & 5 & 3 & 3 & 2 & 1 & 2 & 1 & 1 & 0 \\ Fulham & 0 & 1 & 2 & 4 & 5 & 8 & 9 & 9 & 8 & 9 & 8 & 8 & 6 & 5 & 6 & 4 & 3 & 3 & 2 & 1 \\ Liverpool & 2 & 4 & 7 & 11 & 13 & 12 & 11 & 9 & 7 & 6 & 4 & 4 & 3 & 2 & 2 & 1 & 1 & 1 & 0 & 0 \\ Man City & 29 & 28 & 17 & 11 & 6 & 3 & 2 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Man United & 47 & 24 & 14 & 7 & 4 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Newcastle & 0 & 1 & 2 & 4 & 5 & 6 & 9 & 9 & 9 & 9 & 9 & 7 & 7 & 6 & 4 & 4 & 3 & 3 & 2 & 1 \\ Norwich & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 4 & 5 & 6 & 7 & 7 & 7 & 8 & 8 & 8 & 8 & 8 & 9 & 9 \\ QPR & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 3 & 4 & 4 & 4 & 7 & 7 & 8 & 9 & 10 & 11 & 12 & 14 \\ Reading & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 2 & 3 & 5 & 4 & 5 & 7 & 7 & 8 & 8 & 9 & 10 & 12 & 14 \\ Southampton & 0 & 0 & 0 & 1 & 1 & 2 & 2 & 2 & 3 & 4 & 6 & 6 & 6 & 7 & 9 & 8 & 10 & 10 & 10 & 13 \\ Stoke & 0 & 0 & 0 & 1 & 2 & 2 & 4 & 6 & 5 & 7 & 7 & 8 & 7 & 8 & 8 & 8 & 8 & 7 & 6 & 5 \\ Sunderland & 0 & 0 & 1 & 2 & 4 & 6 & 6 & 7 & 8 & 8 & 7 & 8 & 8 & 7 & 6 & 6 & 4 & 5 & 4 & 2 \\ Swansea & 0 & 0 & 0 & 1 & 2 & 3 & 4 & 5 & 7 & 6 & 6 & 7 & 8 & 9 & 8 & 9 & 7 & 7 & 7 & 5 \\ Tottenham & 3 & 7 & 10 & 14 & 15 & 13 & 10 & 7 & 6 & 4 & 3 & 2 & 2 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ West Brom & 0 & 0 & 1 & 1 & 2 & 4 & 4 & 5 & 7 & 8 & 8 & 7 & 8 & 7 & 7 & 7 & 6 & 7 & 6 & 5 \\ West Ham & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 3 & 3 & 4 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 10 & 12 & 13 \\ Wigan & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 2 & 3 & 4 & 5 & 6 & 6 & 7 & 7 & 9 & 11 & 11 & 12 & 13 \\ \bottomrule \end{tabular} \end{table} \end{document}
\begin{document} \let\realverbatim=\verbatim \let\realendverbatim=\endverbatim \renewcommand\verbatim{\par\addvspace{6pt plus 2pt minus 1pt}\realverbatim} \renewcommand\endverbatim{\realendverbatim\addvspace{6pt plus 2pt minus 1pt}} \makeatletter \newcommand\verbsize{\@setfontsize\verbsize{10}\@xiipt} \renewcommand\verbatim@font{\verbsize\normalfont\ttfamily} \makeatother \newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{lemma}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \begin{abstract} For any fixed nonzero integer $h$, we show that a positive proportion of integral binary quartic forms $F$ do locally everywhere represent $h$, but do not globally represent $h$. We order classes of integral binary quartic forms by the two generators of their ring of ${\rm GL}_{2}({\mathbb Z})$-invariants, classically denoted by $I$ and $J$. \end{abstract} \maketitle \section{Introduction}\label{Intro} Let $h\in{\mathbb Z}$ be nonzero. We will prove the existence of many integral quartic forms that do not represent $h$. Specifically, we aim to show many quartic {\it Thue equations} \begin{equation} F(x,y)=h \end{equation} have no solutions in integers $x$ and $y$, where $F(x , y)$ is an irreducible binary quartic form with coefficients in the integers. Let $$ F(x , y) = a_{0}x^{4} + a_{1}x^{3}y + a_{2}x^{2}y^{2} + a_{3}xy^{3} + a_{4}y^{4} \in \mathbb{Z}[x , y]. $$ The discriminant $D$ of $F(x, y)$ is given by $$ D = D_{F} = a_{0}^{6} (\alpha_{1} - \alpha_{2})^{2} (\alpha_{1} - \alpha_{3})^{2} (\alpha_{1} - \alpha_{4})^{2} (\alpha_{2} - \alpha_{3})^{2} (\alpha_{2} - \alpha_{4})^{2} (\alpha_{3} - \alpha_{4})^{2} , $$ where $\alpha_{1}$, $\alpha_{2}$, $\alpha_{3}$ and $\alpha_{4}$ are the roots of $$ F(x , 1) = a_{0}x^{4} + a_{1}x^{3} + a_{2}x^{2} + a_{3}x + a_{4} . $$ Let $ A = \bigl( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \bigr)$ be a $2 \times 2$ matrix, with $a, b, c, d \in {\mathbb Z}$. 
We define the integral binary quartic form $F^{A}(x , y)$ by $$ F^{A}(x , y) : = F(ax + by ,\ cx + dy). $$ It follows that \begin{equation}\label{St6} D_{F^{A}} = (\textrm{det} A)^{12} D_F. \end{equation} If $A \in {\rm GL}_{2}(\mathbb{Z})$, then we say that $\pm F^{A}$ is {\it equivalent} to $F$. The ${\rm GL}_{2}({\mathbb Z})$-invariants of a generic binary quartic form, which will be called \emph{invariants}, form a ring that is generated by two invariants. These two invariants are denoted by $I$ and $J$ and are algebraically independent. For $F(x , y) = a_{0}x^{4} + a_{1}x^{3}y + a_{2}x^{2}y^{2} + a_{3}xy^{3} + a_{4}y^{4}$, these invariants are defined as follows: \begin{equation}\label{defofI} I = I_{F} = a_{2}^{2} - 3a_{1}a_{3} + 12a_{0}a_{4} \end{equation} and \begin{equation}\label{defofJ} J = J_{F} = 2a_{2}^{3} - 9a_{1}a_{2}a_{3} + 27 a_{1}^{2}a_{4} - 72 a_{0}a_{2}a_{4} + 27a_{0}a_{3}^{2}. \end{equation} Every invariant is a polynomial in $I$ and $J$. Indeed, the discriminant $D$, which is an invariant, satisfies $$ 27D = 4I^3 - J^2. $$ Following \cite{BaShSel}, we define the height $\mathcal{H}(F)$ of an integral binary quartic form $F(x , y)$ as follows, \begin{equation}\label{Bash} \mathcal{H}(F) : = \mathcal{H}(I , J) := \max\left\{\left|I^3\right|, \frac{J^2}{4}\right\}, \end{equation} where $I = I_F$ and $J = J_F$. We note that if $F(x,y)=h$ has no solution, and $G$ is a {\it proper subform} of $F$, i.e., \begin{equation}\label{defofsubform} G(x,y)=F(ax+by,cx+dy) \end{equation} for some integer matrix $A=\bigl(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\bigr)$ with $|\!\det A|>1$, then clearly $G(x,y)=h$ will also have no integer solutions. We will call a binary form {\it maximal} if it is not a proper subform of another binary form. Our goal in this paper is to show that many (indeed, a positive proportion) of integral binary quartic forms are not proper subforms, locally represent $h$ at every place, but globally do not represent~$h$. 
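These identities are easy to verify numerically. The following sketch (an illustration added here, not part of the paper's argument) computes $I$, $J$ and the height $\mathcal{H}(F)$ from the coefficients and checks the syzygy $27D = 4I^{3} - J^{2}$ against the classical expansion of the quartic discriminant:

```python
from fractions import Fraction

def invariants(a0, a1, a2, a3, a4):
    # I and J of F(x, y) = a0 x^4 + a1 x^3 y + a2 x^2 y^2 + a3 x y^3 + a4 y^4,
    # following the definitions of I and J above.
    I = a2**2 - 3*a1*a3 + 12*a0*a4
    J = 2*a2**3 - 9*a1*a2*a3 + 27*a1**2*a4 - 72*a0*a2*a4 + 27*a0*a3**2
    return I, J

def discriminant(a, b, c, d, e):
    # Classical expansion of the discriminant of a x^4 + b x^3 + c x^2 + d x + e.
    return (256*a**3*e**3 - 192*a**2*b*d*e**2 - 128*a**2*c**2*e**2
            + 144*a**2*c*d**2*e - 27*a**2*d**4 + 144*a*b**2*c*e**2
            - 6*a*b**2*d**2*e - 80*a*b*c**2*d*e + 18*a*b*c*d**3
            + 16*a*c**4*e - 4*a*c**3*d**2 - 27*b**4*e**2
            + 18*b**3*c*d*e - 4*b**3*d**3 - 4*b**2*c**3*e + b**2*c**2*d**2)

def height(a0, a1, a2, a3, a4):
    # H(F) = max(|I^3|, J^2 / 4), computed exactly with rational arithmetic.
    I, J = invariants(a0, a1, a2, a3, a4)
    return max(abs(Fraction(I)**3), Fraction(J)**2 / 4)

# F(x, y) = x^4 + x^2 y^2 + y^4: reducible over Q but with distinct roots.
I, J = invariants(1, 0, 1, 0, 1)
D = discriminant(1, 0, 1, 0, 1)
assert (I, J, D) == (13, -70, 144)
assert 27*D == 4*I**3 - J**2          # the syzygy 27D = 4I^3 - J^2
```

Since $J$ enters the syzygy only through $J^{2}$, the sign convention for $J$ (which differs across the literature) does not affect the check.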
The following is our main result. \begin{thm}\label{mainquartic} Let $h$ be any nonzero integer. When maximal integral binary quartic forms $F(x , y) \in \mathbb{Z}[x , y]$ are ordered by their height $\mathcal{H}(I, J)$, a positive proportion of the ${\rm GL}_2({\mathbb Z})$-classes of these forms $F$ have the following properties: \begin{enumerate}[{\rm (i)}] \item they locally everywhere represent $h$ $($i.e., $F(x , y) = h$ has a solution in~${\mathbb R}^2$ and in~${\mathbb Z}_p^2$ for all $p);$ and \item they globally do not represent $h$ $($i.e., $F(x , y) = h$ has no solution in~$\mathbb{Z}^2)$. \end{enumerate} \end{thm} In other words, we show that a positive proportion of quartic Thue equations $F(x,y)=h$ fail the integral Hasse principle, when classes of integral binary quartic forms $F$ are ordered by the height $\mathcal{H}(I , J)$ defined in \eqref{Bash}. We will construct a family of quartic forms that do not represent a given integer $h$ and obtain a lower bound $\mu > 0$ for the density of such forms. The value for $\mu$ is expressed explicitly in \eqref{finaldensity}. Moreover, our method yields an explicit construction of this positive density of forms. It is conjectured that, for any $n \geq 3$, a density of $100\%$ of integral binary forms of degree $n$ that locally represent a fixed integer $h$ do not globally represent $h$. The positive lower bound $\mu$ in \eqref{finaldensity} is much smaller than the conjectured density $1$. In joint work with Manjul Bhargava \cite{AB}, we proved a result similar to Theorem \ref{mainquartic}. In \cite{AB} we consider integral binary forms of any given degree ordered by na\"ive height (the maximum of absolute values of their coefficients). 
Theorem \ref{mainquartic} is new, as we use a different ordering of integral binary quartic forms, which is more interesting for at least two reasons: here integral binary quartic forms are ordered by the two quantities $I$ and $J$, as opposed to their five coefficients, and $I$ and $J$, unlike the coefficients, are ${\rm GL}_{2}({\mathbb Z})$-invariant. In \cite{AB}, for any fixed integer $h$, we showed that a positive proportion of binary forms of degree $n \geq 3$ do not represent $h$, when binary $n$-ic forms are ordered by their na\"ive heights. Moreover, for $n =3$, we established the same conclusion when cubic forms are ordered by their absolute discriminants. The Davenport-Heilbronn Theorem, which states that the number of equivalence classes of irreducible binary cubic forms per discriminant is a constant on average, was an essential part of our argument in \cite{AB} for cubic forms. More importantly, we made crucial use of the asymptotic counts given by the Davenport-Heilbronn Theorem for the number of equivalence classes of integral cubic forms with bounded absolute discriminant (see the original work in \cite{DH}, and \cite{AB} for application and further references). Such results are not available for binary forms of degree larger than $3$. For quartic forms, we are fortunately able to rely on beautiful results due to Bhargava and Shankar that give asymptotic formulas for the number of ${\rm GL}_{2}({\mathbb Z})$-equivalence classes of irreducible integral binary quartic forms having bounded invariants. These results will be discussed in Section \ref{BaShsec}. This paper is organized as follows. In Section \ref{perilim} we discuss some upper bounds for the number of primitive solutions of quartic Thue equations. Section \ref{BaShsec} contains important results, all cited from \cite{BaShSel}, about the height $\mathcal{H}(I, J)$.
In Sections \ref{splitsection} and \ref{localsection} we impose conditions on the splitting behavior of the forms used in our construction modulo different primes to make sure we produce a large enough number of forms (which in fact form a subset of integral quartic forms with positive density) that do not represent $h$, without any local obstruction. In Section \ref{completesection}, we summarize the assumptions made in Sections \ref{splitsection} and \ref{localsection}, and apply essential results cited in Sections \ref{perilim} and \ref{BaShsec} to conclude that the quartic forms that we construct form a subset of integral binary quartic forms with positive density. \section{Primitive Solutions of Thue Equations}\label{perilim} Let $F(x , y) \in {\mathbb Z}[x , y]$ and $m \in {\mathbb Z}$. A pair $(x_{0} , y_{0}) \in \mathbb{Z}^2$ is called a {\it primitive solution} to the Thue equation $F(x , y) = m$ if $F(x_{0} , y_{0}) = m$ and $\gcd(x_{0} , y_{0}) = 1$. We will use the following result from \cite{AkhQuaterly} to obtain upper bounds for the number of primitive solutions of Thue equations. \begin{prop}[\cite{AkhQuaterly}, Theorem 1.1]\label{maineq4} Let $F(x , y) \in \mathbb{Z}[x , y]$ be an irreducible binary form of degree $4$ and discriminant $D$. Let $m$ be an integer with $$ 0 < m \leq \frac{|D|^{\frac{1}{6} - \epsilon} } {(3.5)^{2} 4^{ \frac{2}{3 } } }, $$ where $ 0< \epsilon < \frac{1}{6}$. Then the equation $|F(x , y)| = m$ has at most \[ 36 + \frac{4}{3 \epsilon} \] primitive solutions. In addition to the above assumptions, if we assume that the polynomial $F(X , 1)$ has $2 \mathtt{i}$ non-real roots, with $\mathtt{i} \in\{0, 1, 2\}$, then the number of primitive solutions does not exceed \[ 36 -16\mathtt{i} + \frac{4-\mathtt{i}}{3 \epsilon}. 
\] \end{prop} If the integral binary forms $F_{1}$ and $F_{2}$ are equivalent, as defined in the introduction, then there exists $A \in {\rm GL}_2({\mathbb Z})$ such that $$ F_2(x , y) = F_1^{A}(x , y) \, \, \textrm{or} \, \, F_2(x , y) = -F_1^{A}(x , y). $$ Therefore, $D_{F_{1}} = D_{F_{2}}$, and for every fixed integer $h$, the number of primitive solutions to $F_1(x , y) = \pm h$ equals the number of primitive solutions to $F_2(x , y) = \pm h$. The invariants $I_F$ and $J_F$ of an integral quartic form $F$ that are defined in \eqref{defofI} and \eqref{defofJ} have weights $4$ and $6$, respectively. This means \begin{equation}\label{Idet} I_{F^{A}} = (\textrm{det} A)^{4} I_F, \end{equation} and \begin{equation}\label{Jdet} J_{F^{A}} = (\textrm{det} A)^{6} J_F. \end{equation} Consequently, by definition of the height $\mathcal{H}$ in \eqref{Bash}, we have \begin{equation}\label{Hdet} \mathcal{H}(F^{A}) = (\textrm{det} A)^{12} \mathcal{H}(F), \end{equation} and \begin{equation*} \mathcal{H}(-F^{A}) = (\textrm{det} A)^{12} \mathcal{H}(F). \end{equation*} \section{On the Bhargava--Shankar height $\mathcal{H}(I, J)$}\label{BaShsec} In \cite{BaShSel}, Bhargava and Shankar introduce the height $\mathcal{H}(F)$ (see \eqref{Bash} for the definition) for any integral binary quartic form $F$. In this section we present some of the asymptotic results in \cite{BaShSel}, which will be used in our proofs. Indeed, these asymptotic formulas are the reason that we are able to order quartic forms with respect to their $I$ and $J$ invariants. One may ask which integer pairs $(I , J)$ can actually occur as the invariants of an integral binary quartic form. The following result of Bhargava and Shankar provides a complete answer to this question.
\begin{thm}[\cite{BaShSel}, Theorem 1.7]\label{BaSh-thm1.7} A pair $(I , J) \in \mathbb{Z} \times \mathbb{Z}$ occurs as the invariants of an integral binary quartic form if and only if it satisfies one of the following congruence conditions: \begin{eqnarray*} (a) \, \, I \equiv 0 \, \, (\textrm{mod}\, \, 3) &\textrm{and}\, & J \equiv 0\, \, (\textrm{mod}\, \, 27),\\ (b)\, \, I \equiv 1 \, \, (\textrm{mod}\, \, 9) &\textrm{and}\, & J \equiv \pm 2\, \, (\textrm{mod}\, \, 27),\\ (c)\, \, \, I \equiv 4 \, \, (\textrm{mod}\, \, 9) &\textrm{and}\, & J \equiv \pm 16\, \, (\textrm{mod}\, \, 27),\\ (d)\, \, I \equiv 7 \, \, (\textrm{mod}\, \, 9) &\textrm{and}\, & J \equiv \pm 7\, \, (\textrm{mod}\, \, 27). \end{eqnarray*} \end{thm} Let $V_{{\mathbb R}}$ denote the vector space of binary quartic forms over the real numbers ${\mathbb R}$. The group ${\rm GL}_{2}({\mathbb R})$ naturally acts on $V_{{\mathbb R}}$. The action of ${\rm GL}_{2}({\mathbb Z})$ on $V_{{\mathbb R}}$ preserves the lattice $V_{{\mathbb Z}}$ consisting of the integral elements of $V_{{\mathbb R}}$. The elements of $V_{{\mathbb Z}}$ are the forms that we are interested in. Let $V^{(\mathtt{i})}_{{\mathbb Z}}$ denote the set of elements in $V_{{\mathbb Z}}$ having nonzero discriminant and $\mathtt{i}$ pairs of complex conjugate roots and $4 -2\mathtt{i}$ real roots. For any ${\rm GL}_{2}({\mathbb Z})$-invariant set $S \subseteq V_{{\mathbb Z}}$, let $N(S ; X)$ denote the number of ${\rm GL}_{2}({\mathbb Z})$-equivalence classes of irreducible elements $f \in S$ satisfying $\mathcal{H}(f) < X$. For any set $S$ in $V_{{\mathbb Z}}$ that is definable by congruence conditions, following \cite{BaShSel}, we denote by $\mu_{p}(S)$ the $p$-adic density of the $p$-adic closure of $S$ in $V_{{\mathbb Z}_p}$, where we normalize the additive measure $\mu_p$ on $V_{{\mathbb Z}_p}$ so that $\mu_p(V_{{\mathbb Z}_p})= 1$. The following is a combination of Theorem 2.11 and Theorem 2.21 of \cite{BaShSel}. 
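The "only if" direction of Theorem \ref{BaSh-thm1.7} can be checked by machine on small examples. The following sketch (an illustration only, not used in the sequel) computes $(I , J)$ for every integral quartic form with coefficients in a small box and verifies that one of the congruence conditions (a)--(d) holds.

```python
from itertools import product

def invariants(a0, a1, a2, a3, a4):
    """Invariants I and J of a0 x^4 + a1 x^3 y + a2 x^2 y^2 + a3 x y^3 + a4 y^4."""
    I = a2**2 - 3*a1*a3 + 12*a0*a4
    J = 2*a2**3 - 9*a1*a2*a3 + 27*a1**2*a4 - 72*a0*a2*a4 + 27*a0*a3**2
    return I, J

def eligible(I, J):
    """Congruence conditions (a)-(d) on a pair (I, J), as in the theorem above."""
    return ((I % 3 == 0 and J % 27 == 0) or
            (I % 9 == 1 and J % 27 in (2, 25)) or
            (I % 9 == 4 and J % 27 in (16, 11)) or
            (I % 9 == 7 and J % 27 in (7, 20)))

# the "only if" direction, on every quartic form with coefficients in [-3, 3]
assert all(eligible(*invariants(*c)) for c in product(range(-3, 4), repeat=5))
# a pair failing all four conditions, e.g. (I, J) = (1, 1), occurs for no form
assert not eligible(1, 1)
```

The conditions are stable under $J \mapsto -J$, so the check is again independent of the sign convention for $J$.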
\begin{thm}[Bhargava--Shankar]\label{BaSh-thm2.11} Suppose $S$ is a subset of $V_{{\mathbb Z}}$ defined by congruence conditions modulo finitely many prime powers, or even a suitable infinite set of prime powers. Then we have \begin{equation} N(S \cap V_{{\mathbb Z}}^{(\mathtt{i})}; X) \sim N( V_{{\mathbb Z}}^{(\mathtt{i})}; X) \prod_{p} \mu_{p} (S). \end{equation} \end{thm} The statement of Theorem \ref{BaSh-thm2.11} for a finite number of congruence conditions follows directly from Theorem 2.11 of \cite{BaShSel}. In Subsection 2.7 of \cite{BaShSel}, some congruence conditions are specified that are suitable for inclusion of infinitely many primes in the statement of Theorem \ref{BaSh-thm2.11} (see Theorem 2.21 of \cite{BaShSel}). A function $\phi : V_{{\mathbb Z}} \rightarrow [0, 1]$ is said to be \emph{defined by congruence conditions} if, for all primes $p$, there exist functions $\phi_p : V_{{\mathbb Z}_p} \rightarrow [0, 1]$ satisfying the following conditions:\newline (1) for all $F \in V_{{\mathbb Z}}$, the product $\prod_{p} \phi_{p}(F)$ converges to $\phi(F)$,\newline (2) for each prime $p$, the function $\phi_p$ is locally constant outside some closed set $S_p \subset V_{{\mathbb Z}_p}$ of measure zero. Such a function $\phi$ is called \emph{acceptable} if, for sufficiently large primes $p$, we have $\phi_p(F) = 1$ whenever $p^2 \nmid D_F$. For our purposes, particularly in order to impose congruence conditions modulo the infinitely many primes that are discussed in Subsection \ref{largeprimesubsection}, we define the acceptable function $\phi : V_{{\mathbb Z}} \rightarrow \{0, 1\}$ to be the characteristic function of a certain subset of integral binary quartic forms. More specifically, for $p < 49$, we define $\phi_p$ to be the constant function $1$.
For $p > 49$, we define $\phi_p : V_{{\mathbb Z}_p} \rightarrow \{0, 1\}$ to be the characteristic function of the set of integral binary quartic forms that are not factored as $c_p M_p (x , y)^2$ modulo $p$, with $c_p \in \mathbb{F}_p$ and $M_p(x , y)$ any quadratic form over $\mathbb{F}_p$. Then \begin{equation}\label{defofaccept} \phi(F) = \prod_{p} \phi_{p}(F) \end{equation} is the characteristic function of the set of integral binary quartic forms that are not factored as $c_p M_p (x , y)^2$ over $\mathbb{F}_p$ for any $p > 49$. We denote by $\lambda(p)$ the $p$-adic density $\int_{F \in V_{{\mathbb Z}_p}} \phi_p(F) dF $. The value of $\lambda(p)$ will be computed in \eqref{largedensity}. It turns out that in Theorem \ref{mainquartic}, the positive proportion of integral binary quartic forms that do not represent $h$ is bounded below by $$ \mu = \kappa(h) \prod_{p} \lambda(p), $$ where $p$ ranges over all primes and $\kappa(h)$ is a constant that only depends on $h$ and can be explicitly determined from \eqref{finaldensity} in Section 6. Later in our proofs, in order to construct many inequivalent quartic forms, it will be important to work with quartic forms that have no non-trivial stabilizer in ${\rm GL}_2(\mathbb{Z})$. We note that the stabilizer in ${\rm GL}_2(\mathbb{Z})$ of an element in $V_{\mathbb{R}}$ always contains the identity matrix and its negative, and has size at least $2$. We will appeal to another important result due to Bhargava and Shankar, which bounds the number of ${\rm GL}_{2}(\mathbb{Z})$-equivalence classes of integral binary quartic forms having large stabilizers inside ${\rm GL}_{2}(\mathbb{Z})$. \begin{prop}[\cite{BaShSel}, Lemma 2.4]\label{BSL2.4} The number of $\textrm{GL}_{2}(\mathbb{Z})$-orbits of integral binary quartic forms $F \in V_{\mathbb{Z}}$ such that $D_F \neq 0$ and $\mathcal{H}(F) < X$ whose stabilizer in ${\rm GL}_{2}(\mathbb{Q})$ has size greater than $2$ is $O(X^{3/4 + \epsilon})$. 
\end{prop} \section{Quartic Forms Splitting Modulo a Prime}\label{splitsection} \textbf{Definition}. We define the subset $V'_{\mathbb{Z}}$ of integral binary quartic forms $V_{\mathbb{Z}}$ to be those forms $F$ whose stabilizer in ${\rm GL}_{2}(\mathbb{Q})$ is trivial (of size $2$, consisting of the identity matrix and its negative). By Proposition \ref{BSL2.4}, all but $O(X^{3/4 + \epsilon})$ of the ${\rm GL}_{2}(\mathbb{Z})$-equivalence classes of forms of height less than $X$ lie in $V'_{\mathbb{Z}}$, so selecting our forms from $V'_{\mathbb{Z}}$ will not alter the $p$-adic densities that we will present later. From now on we will work only with classes of forms in $V'_{\mathbb{Z}}$. \textbf{Definition}. Assume that $F(x , y)$ is an irreducible quartic form. We say that $F(x , y)$ \emph{splits completely} modulo a prime number $p$, if either \begin{equation}\label{splitgI} F(x , y) \equiv m_{0} (x - b_{1}y)(x-b_{2}y) (x-b_{3}y)(x- b_{4}y)\, \, (\textrm{mod} \, \, p), \end{equation} or \begin{equation}\label{splitgII} F(x , y) \equiv m_{0} y(x-b_{2}y) (x-b_{3}y)(x- b_{4}y)\, \, (\textrm{mod} \, \, p), \end{equation} where $m_{0} \not \equiv 0$ (mod $p$), and $b_{1}, b_{2}, b_{3}, b_{4}$ are distinct integers modulo $p$, and further \begin{equation}\label{assumemore} b_{2}, b_{3}, b_{4} \not \equiv 0 \, \, \qquad (\textrm{mod} \, \, p). \end{equation} In case \eqref{splitgI}, we call $b_1$, $b_2$, $b_3$, and $b_4$ the \emph{simple roots} of the binary form $F(x , y)$ modulo $p$. In case \eqref{splitgII}, we call $\infty$, $b_2$, $b_3$, and $b_4$ the \emph{simple roots} of the binary form $F(x , y)$ modulo $p$. Let $p \geq 5$ be a prime. The $p$-adic density of binary quartic forms that split completely modulo $p$ is given by \begin{eqnarray}\label{splitdensity} \mu_{p} &= & \frac{ (p -1) \left( \frac{p (p-1)(p-2) (p-3) }{4!} + \frac{(p-1)(p-2)(p-3)} {3!} \right) }{p^5}\\ \nonumber & =& \frac{ (p -1)^2 (p+4) (p-2) (p-3) }{4!
\, p^5}, \end{eqnarray} where in the first identity in \eqref{splitdensity}, the summand $\frac{p (p-1)(p-2) (p-3) }{4!}$ in the numerator counts the corresponding forms in \eqref{splitgI} and the summand $\frac{(p-1)(p-2)(p-3)} {3!}$ counts the corresponding forms in \eqref{splitgII}. Clearly the factor $p -1$ in the numerator counts the number of possibilities for $m_{0}$ modulo $p$ and the denominator $p^5$ counts all quartic forms with all choices for their five coefficients modulo $p$. Now assume $F(x , y)$ is an irreducible integral quartic form that splits completely modulo $p$. For $j\in \{ 1, 2, 3, 4\}$, we define \begin{equation}\label{defofFb} F_{b_{j}}(x , y) : = F(p x + b_{j} y, y), \end{equation} and additionally in case \eqref{splitgII}, \begin{equation}\label{defofFinf} F_{\infty}(x , y) := F (y , p x). \end{equation} We claim that the four forms $F_{b_{1}}(x , y)$ (or $F_{\infty}(x,y)$), $F_{b_{2}}(x , y)$, $F_{b_{3}}(x , y)$, and $F_{b_{4}}(x , y)$ are pairwise inequivalent. Indeed, any transformation $B\in{\rm GL}_2({\mathbb Q})$ taking, say, $F_{b_i}(x,y)$ to $F_{b_j}(x,y)$ must be of the form $B=\bigl(\begin{smallmatrix}p&b_i\\ 0& 1\end{smallmatrix}\bigr)^{-1}\!A\bigl(\begin{smallmatrix}p &b_j\\ 0& 1\end{smallmatrix}\bigr)$, where $A\in{\rm GL}_2({\mathbb Q})$ stabilizes $F(x,y)$. Since we assumed $F \in V'_{\mathbb{Z}}$, the $2 \times 2$ matrix $A$ must be the identity matrix or its negative, and so $B= \pm \bigl(\begin{smallmatrix}p&b_i\\ 0& 1\end{smallmatrix}\bigr)^{-1}\bigl(\begin{smallmatrix}p&b_j\\ 0& 1\end{smallmatrix}\bigr)$. But $B\notin{\rm GL}_2({\mathbb Z})$, as $p \nmid (b_i-b_j)$. Therefore, for $i \neq j$, the quartic forms $F_{b_i}(x,y)$ and $F_{b_j}(x,y)$ are not ${\rm GL}_2({\mathbb Z})$-equivalent. Similarly in case \eqref{splitgII}, any transformation $B\in{\rm GL}_2({\mathbb Q})$ taking $F_{\infty}(x,y)$ to $F_{b_j}(x,y)$ must be of the form $B= \bigl(\begin{smallmatrix}0&1\\ p& 0\end{smallmatrix}\bigr)^{-1}\!A\bigl(\begin{smallmatrix}p &b_j\\ 0& 1\end{smallmatrix}\bigr)$, where $A\in{\rm GL}_2({\mathbb Q})$ stabilizes $F(x,y)$. For $A = \pm \textrm{Id}$ this change-of-variable matrix equals $\pm \bigl(\begin{smallmatrix}0& 1/p\\ p& b_{j}\end{smallmatrix}\bigr)$, which does not belong to ${\rm GL}_{2}(\mathbb{Z})$. Therefore, $F_{\infty}(x , y)$, $F_{b_2}(x , y)$, $F_{b_{3}}(x , y)$, and $F_{b_{4}}(x , y)$ are pairwise inequivalent. Starting with a form $F$ that belongs to $V'_{\mathbb{Z}}$ and splits completely modulo $p$, we can construct $4$ integral quartic forms that are pairwise inequivalent. Let $ F(x , y) = a_{0}x^4 + a_{1} x^{3} y +a_{2} x^{2} y^2 + a_{3} x y^3+ a_{4}y^4 \in \mathbb{Z}[x , y], $ with content $1$ (i.e., the integers $a_{0}, a_{1}, a_{2}, a_{3}, a_{4}$ have no common prime divisor). If $F(x , y)$ satisfies \eqref{splitgII}, then the coefficient $a_{0} = F(1 , 0)$ is divisible by $p$, and consequently \begin{equation}\label{deftildeinf} \tilde{F}_{\infty}(x , y):= \frac{ F_{\infty}(x , y)}{p} \in \mathbb{Z}[x , y], \end{equation} where $F_{\infty}(x , y)$ is defined in \eqref{defofFinf}. Suppose that \begin{equation}\label{alessp4} F (b , 1) \equiv 0 \, \, \, (\textrm{mod}\, \, p), \, \, \, \textrm{with}\, \, b \in \mathbb{Z}. \end{equation} By \eqref{defofFb}, \begin{equation*}\label{Faei4} F_{b}(x , y) = F(p x + by , y) = e_{0} x^4 + e_{1} x^{3} y +e_{2} x^{2} y^2 + e_{3} x y^3+ e_{4}y^4, \end{equation*} with \begin{equation}\label{dotss4} e_{4-j} = p^j \sum_{i=0}^{4-j} a_{i} \, b^{4-i-j} {4-i \choose j}, \end{equation} for $j=0, 1, 2, 3, 4$. If $j \geq 1$, clearly $e_{4-j}$ is divisible by $p$. Since $e_{4} = F(b , 1)$, by \eqref{alessp4}, $e_{4}$ is also divisible by $p$.
Therefore, \begin{equation}\label{deftildeb} \tilde{F_{b}}(x , y): = \frac{F_{b}(x , y)}{p} \in \mathbb{Z}[x , y]. \end{equation} Since $e_{3} = p f'(b)$, where $f'(X)$ denotes the derivative of polynomial $f(X) = F(X , 1)$, if $b$ is a simple root modulo $p$ then $f'(b)\not \equiv 0\, \, (\textrm{mod}\, p)$ and \begin{equation}\label{yL4} \tilde{F_{b}}(x , y) \equiv y^{3} L(x , y)\, \, (\textrm{mod}\, \, p), \end{equation} where $L(x , y)= l_{1}x + l_{2}y$ is a linear form modulo $p$, with $l_{1} \not \equiv 0\pmod p$. We also note that $\mathcal{H}(F_b)$, defined in \eqref{Bash}, as well as the invariants of the form $F_b$, can be expressed in terms of invariants of the form $F$, as $F_b$ is obtained under the action of a $2 \times 2$ matrix of determinant $\pm p$ on $F$. By \eqref{St6}, \eqref{Idet}, \eqref{Jdet}, and \eqref{Hdet}, we have \begin{eqnarray*} D_{F_{b}} & =& p^{12} D_F,\\ I_{{F_{b}}} &= & p^4 I_{{F}}, \\ J_{{F_{b}}} &= & p^6 J_{F}, \\ \mathcal{H}\left({F_{b}}\right) & =& \mathcal{H}\left(I_{{F_{b}}}, J_{{F_{b}}} \right) = p^{12} \mathcal{H}(F). \end{eqnarray*} After multiplication of the form $F_{b}(x , y)$ by $p^{-1}$, we therefore have \begin{eqnarray}\nonumber D_{\tilde{F_{b}}} & =& p^{6} D_F,\\ \nonumber I_{\tilde{F_{b}}} &= & p^2 I_{{F}}, \\ \nonumber J_{\tilde{F_{b}}} &= & p^3 J_{F}, \\ \label{Hoftilde} \mathcal{H}\left(\tilde{F_{b}}\right) & =& \mathcal{H}\left(I_{\tilde{F_{b}}}, J_{\tilde{F_{b}}} \right) = p^{6} \mathcal{H}(F). \end{eqnarray} Now let us consider the quartic Thue equation $$ F(x , y) = m, $$ where $m = p_{1} p_{2} p_{3} h$, and $p_{1}$, $p_{2}$, and $p_{3}$ are three distinct primes greater than $4$, and $\gcd(h, p_{k}) = 1$, for $k\in \{ 1, 2, 3\}$. We will further assume that the quartic form $F(x , y)$ splits completely modulo $p_{1}$, $p_{2}$, and $p_{3}$.
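Both counting claims of this section can be confirmed by brute force for a fixed small prime. The sketch below (an illustration only; the test form, prime, and root are arbitrary choices) first reproduces the count behind the density \eqref{splitdensity} for $p = 5$ and $p = 7$, and then checks for one form $F$ and one simple root $b$ that $F(px + by , y)$ has coefficients divisible by $p$, that the quotient has the shape \eqref{yL4} modulo $p$, and that the invariants scale by $p^{2}$ and $p^{3}$.

```python
from itertools import combinations

def mul(P, Q):
    """Product of homogeneous binary forms, given as coefficient lists."""
    R = [0] * (len(P) + len(Q) - 1)
    for i, u in enumerate(P):
        for j, v in enumerate(Q):
            R[i + j] += u * v
    return R

def split_count(p):
    """Number of quartic forms over F_p that split completely, in the sense of
    the definition above (four distinct finite roots, or a root at infinity)."""
    forms = set()
    for m0 in range(1, p):
        for roots in combinations(range(p), 4):       # four distinct finite roots
            f = [m0]
            for b in roots:
                f = mul(f, [1, -b])
            forms.add(tuple(c % p for c in f))
        for roots in combinations(range(1, p), 3):    # factor y, nonzero roots
            f = mul([m0], [0, 1])
            for b in roots:
                f = mul(f, [1, -b])
            forms.add(tuple(c % p for c in f))
    return len(forms)

for p in (5, 7):
    assert split_count(p) == (p - 1)**2 * (p + 4) * (p - 2) * (p - 3) // 24

def reduce_at(F, p, b):
    """Coefficients of F(p*x + b*y, y)/p, assuming p divides F(b, 1)."""
    e = [0] * 5
    for i, ai in enumerate(F):                        # term a_i x^(4-i) y^i
        t = [ai]
        for _ in range(4 - i):
            t = mul(t, [p, b])                        # substitute x -> p*x + b*y
        for k, v in enumerate(t):
            e[k + i] += v
    assert all(c % p == 0 for c in e)
    return [c // p for c in e]

def inv_I(F):
    a0, a1, a2, a3, a4 = F
    return a2 * a2 - 3 * a1 * a3 + 12 * a0 * a4

def inv_J(F):
    a0, a1, a2, a3, a4 = F
    return (2 * a2**3 - 9 * a1 * a2 * a3 + 27 * a1 * a1 * a4
            - 72 * a0 * a2 * a4 + 27 * a0 * a3 * a3)

F, p, b = [1, -6, 11, -6, 385], 5, 1                  # b = 1 is a simple root mod 5
G = reduce_at(F, p, b)
assert inv_I(G) == p**2 * inv_I(F) and inv_J(G) == p**3 * inv_J(F)
assert all(c % p == 0 for c in G[:3]) and G[3] % p != 0   # y^3 * (linear form) mod p
```

The count $(p-1)\bigl(\binom{p}{4} + \binom{p-1}{3}\bigr)$ produced by the enumeration agrees with the closed form, confirming that distinct factorization data give distinct coefficient vectors.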
In Lemma \ref{corresponds-sol}, we will construct $64$ integral binary quartic forms $G_{j}(x , y)$, for $1 \leq j \leq 4^3$, and will make a one-to-one correspondence between the set of primitive solutions of $F(x , y) = m$ and the union of the sets of primitive solutions of $G_{j}(x , y) = h$, for $1 \leq j \leq 4^3$. First we need two auxiliary lemmas. \begin{lemma}\label{lem1corres} Let $F(x , y) \in \mathbb{Z}[x , y]$ be a binary quartic form that splits completely modulo $p$, and let $m = p m_1$, with $p \nmid m_1$. The primitive solutions of the Thue equation $F(x , y) = m$ are in one-to-one correspondence with the union of the sets of primitive solutions to four Thue equations $$ \tilde{F}_{i}(x , y) = m_1, $$ where $\tilde{F}_{i}(x , y)$ are defined in \eqref{deftildeinf} and \eqref{deftildeb}, and $i=1, 2, 3, 4$. \end{lemma} \begin{proof} Assume that $(x_{0}, y_{0}) \in \mathbb{Z}^2$ is a primitive solution to $F(x , y) = m = p m_{1}$. If $$ F(x , y) \equiv m_{0} (x - b_{1}y)(x-b_{2}y) (x-b_{3}y)(x- b_{4}y)\, \, (\textrm{mod} \, \, p), $$ then since $ p| F(x_0 , y_0) $, we have $$ p| (x_{0}- b_{i} y_0) $$ for some $i \in \{1, 2, 3, 4\}$. Note that $p \nmid y_{0}$, as otherwise $p$ would divide $x_{0}$ as well, contradicting $\gcd(x_{0} , y_{0}) = 1$. The value of $i$ is then uniquely determined by the solution $(x_{0}, y_{0})$, as the $b_{j}$ are distinct modulo $p$ and $y_{0}$ is invertible modulo $p$. Therefore, \begin{equation}\label{x0X} x_{0} = p X_{0} + b_{i} y_{0}, \end{equation} for some $ X_{0} \in \mathbb{Z}$, and $(X_{0}, y_{0})$ is a solution to \begin{equation}\label{redmp} \tilde{F}_{i}(x , y) = \frac{1}{p} F(p x + b_{i} y , y) = m_{1} = \frac{m}{p}. \end{equation} Conversely, assume for a fixed $i \in \{1, 2, 3, 4\}$ that $(X_{0}, y_{0}) \in \mathbb{Z}^2$ is a solution to $$ \tilde{F}_{i}(x , y) = \frac{1}{p} F(p x + b_{i} y , y) = m_{1} = \frac{m}{p}. $$ First we observe that $p \nmid y_{0}$: otherwise $p$ would also divide $x_{0} = p X_{0} + b_{i} y_{0}$, so that $p^{4} \mid F(x_{0} , y_{0}) = p m_{1}$, giving $p^{3} \mid m_{1}$ and contradicting $p \nmid m_{1}$.
Now, by the construction of the form $\tilde{F}_{i}(x , y)$, the pair $(x_0 , y_{0})$, with $$ x_{0} = p X_{0} + b_{i} y_{0}, $$ clearly satisfies the equation $F(x , y) = m$. Further, if $(X_{0}, y_{0})$ is a primitive solution of $\tilde{F}_{i}(x , y) = \frac{m}{p}$, since $p \nmid y_0$, we have $\gcd(x_0 , y_0) = 1$. Assume that $$ F(x , y) \equiv m_{0} y (x-b_{2}y) (x-b_{3}y)(x- b_{4}y)\, \, (\textrm{mod} \, \, p). $$ The pair $(x_{0} , y_{0}) \in \mathbb{Z}^2$ with $p \nmid y_{0}$ is a primitive solution of $$ F(x , y) = p m_1, $$ if and only if $p \mid (x_0-b_{2}y_0) (x_0-b_{3}y_0)(x_0- b_{4}y_0)$. In this case, for a unique $i \in \{2, 3, 4\}$, we have \eqref{x0X}, and $(X_0, y_0)$ is a primitive solution to the Thue equation \eqref{redmp}. Similarly, the pair $(x_{1} , y_{1}) \in \mathbb{Z}^2$ with $p \mid y_{1}$ is a primitive solution of $$ F(x , y) = p m_1, $$ if and only if $(Y_1, x_{1})$, with $Y_1 = \frac{y_1}{p}$, is a primitive solution to $$\tilde{F}_{\infty}(x , y) = \frac{m}{p}.$$ \end{proof} \begin{lemma}\label{lem2} If $F(x, y)$ splits completely modulo $p_1$ and $p_2$, then $\tilde{F}_{b}(x , y)$ will also split completely modulo $p_2$, for any simple root $b$ (possibly $\infty$) of $F(x , y)$ modulo $p_1$. \end{lemma} \begin{proof} If $$ F(x , y) \equiv m_{0} (x - b_1 y) (x - b_2 y) (x - b_3 y) (x - b_4 y) \, \qquad (\textrm{mod} \, \, p_{1}) $$ and \begin{equation}\label{ciroots} F(x , y) \equiv m'_{0} (x - c_1 y) (x - c_2 y) (x - c_3 y) (x - c_4 y) \, \qquad (\textrm{mod} \, \, p_{2}), \end{equation} then for any $b \in \{ b_{1}, b_{2}, b_{3}, b_{4}\}$, we have $$ \tilde{F_{b}}(x , y) \equiv m''_{0}(x - c'_1 y) (x - c'_2 y) (x - c'_3 y) (x - c'_4 y) \, \qquad (\textrm{mod} \, \, p_{2}), $$ where $$ c'_{j} \equiv (c_{j} - b)\, p_{1}^{-1} \, \qquad (\textrm{mod} \, \, p_{2}). $$ The integers $c'_1, c'_2 , c'_3 , c'_4$ are indeed distinct modulo $p_{2}$, as $c_1, c_2 , c_3 , c_4$ are so and $p_{1}$ is invertible modulo $p_{2}$.
We conclude that the quartic form $ \tilde{F}_{b}(x , y)$ splits completely modulo $p_{2}$, as well. If $$ F(x , y) \equiv m_{0} y (x - b_2 y) (x - b_3 y) (x - b_4 y) \, \qquad (\textrm{mod} \, \, p_{1}) $$ and \eqref{ciroots} holds, then $$ \tilde{F}_{\infty}(x , y) \equiv m''_{0}(x - c'_1 y) (x - c'_2 y) (x - c'_3 y) (x - c'_4 y) \, \qquad (\textrm{mod} \, \, p_{2}), $$ with $c'_i \equiv (p_{1} c_{i})^{-1}$ modulo $p_2$, where $0$ and $\infty$ are considered to be inverses of each other modulo $p_2$. Namely, if $c_1 =0$ modulo $p_2$, we get $$ \tilde{F}_{\infty}(x , y) \equiv m''_{0} y (x - c'_2 y) (x - c'_3 y) (x - c'_4 y) \, \qquad (\textrm{mod} \, \, p_{2}). $$ If $$ F(x , y) \equiv m_{0} y (x - b_2 y) (x - b_3 y) (x - b_4 y) \, \qquad (\textrm{mod} \, \, p_{1}) $$ and \begin{equation*} F(x , y) \equiv m'_{0} y (x - c_2 y) (x - c_3 y) (x - c_4 y) \, \qquad (\textrm{mod} \, \, p_{2}), \end{equation*} then $$ \tilde{F}_{\infty}(x , y) \equiv m''_{0}x (x - c'_2 y) (x - c'_3 y) (x - c'_4 y) \, \qquad (\textrm{mod} \, \, p_{2}), $$ with $c'_i \equiv (p_{1} c_{i})^{-1}$ modulo $p_2$. Therefore, if $F(x, y)$ splits completely modulo $p_1$ and $p_2$, then $\tilde{F}_{b}(x , y)$ will also split completely modulo $p_2$, for any simple root $b$ of $F(x , y)$ modulo $p_1$. \end{proof} \begin{lemma}\label{corresponds-sol} Let $h$ be an integer, and $p_1$, $p_{2}$, and $p_{3}$ be three distinct primes greater than $4$ that do not divide $h$. Let $F(x, y) \in \mathbb{Z}[x , y]$ be a binary quartic form that splits completely modulo primes $p_{1}$, $p_{2}$, and $p_{3}$.
Then there are $64$ binary quartic forms $G_{j}(x , y) \in \mathbb{Z}[x , y]$, with $1 \leq j \leq 64$, such that every primitive solution $(x_{\mathit{l}}, y_{\mathit{l}})$ of the equation $F(x , y)= h \, p_{1} p_{2} p_{3}$ corresponds uniquely to a triple $(j, x_{l, j}, y_{l, j})$, with $$ j \in \{1, 2, \ldots, 64\},\, \, x_{\mathit{l}, j}, y_{\mathit{l}, j} \in \mathbb{Z}, \, \, \gcd(x_{\mathit{l}, j} , y_{\mathit{l}, j}) =1, $$ and $$ G_{j} (x_{\mathit{l}, j} , y_{\mathit{l}, j}) = h. $$ Furthermore, \begin{equation*} \mathcal{H}\left( G_{j} \right) = \left(p_1 p_2 p_3\right)^{6} \mathcal{H}(F), \end{equation*} for $j = 1, \ldots, 64$. \end{lemma} \begin{proof} Let $m = p_1 p_2 p_3 h$. By Lemma \ref{lem1corres}, we may reduce the Thue equation $F(x , y) = m$ modulo $p_1$ to obtain $4$ quartic Thue equations \begin{equation}\label{reduceto4} \tilde{F}_{i}(x , y) = \frac{m}{p_1}, \end{equation} with $i = 1, 2, 3, 4$, such that every primitive solution of $F(x , y)= h \, p_{1} p_{2} p_{3} = m$ corresponds uniquely to a primitive solution of exactly one of the equations in \eqref{reduceto4}. By Lemma \ref{lem2}, every binary quartic form $\tilde{F}_{i}(x , y)$ in \eqref{reduceto4} splits completely modulo $p_2$. Applying Lemma \ref{lem1corres} modulo $p_2$, we construct $4$ binary quartic forms from each equation in \eqref{reduceto4}. Therefore, we obtain $4^2$ Thue equations \begin{equation}\label{reduceto16} \tilde{F}_{i, k}(x , y) = \frac{m}{p_1p_2}, \end{equation} with $i, k =1, 2, 3, 4$, such that every primitive solution of $F(x , y)= h \, p_{1} p_{2} p_{3} = m$ corresponds uniquely to a primitive solution of exactly one of the equations in \eqref{reduceto16}. By \eqref{Hoftilde}, \begin{equation}\label{HofFij} \mathcal{H}\left( \tilde{F}_{i, k} \right) = \left(p_1 p_2 \right)^{6} \mathcal{H}(F). \end{equation} By Lemma \ref{lem2}, each form $\tilde{F}_{i, k}(x , y)$ splits completely modulo $p_3$.
We may apply Lemma \ref{lem1corres} once again to each equation in \eqref{reduceto16}. This way we obtain $4^3$ equations \begin{equation}\label{reduceto64} G_{j}(x , y) = \frac{m}{ p_1 p_2 p_3} = h. \end{equation} The construction of these equations ensures a one-to-one correspondence between the primitive solutions of the equation $F(x , y) = m$ and the union of the sets of the primitive solutions of Thue equations in \eqref{reduceto64}. By \eqref{Hoftilde} and \eqref{HofFij}, \begin{equation}\label{HofGj} \mathcal{H}\left( G_{j} \right) = \left(p_1 p_2 p_3\right)^{6} \mathcal{H}(F), \end{equation} for $j = 1, \ldots, 64$. \end{proof} We note that if $F(x , y)$ is irreducible over $\mathbb{Q}$, its associated forms $G_{j} (x , y)$, which are constructed in the proof of Lemma \ref{corresponds-sol}, will also be irreducible over $\mathbb{Q}$, as all of the matrix actions are rational. Furthermore, the forms $G_{j}(x , y)$ are not constructed as proper subforms of the binary quartic form $F(x , y)$. Indeed, they are maximal over~${\mathbb Z}_p$ for all $p\notin \{p_1,p_{2},p_3\}$ (being equivalent, up to a unit constant, to $F(x,y)$ over~${\mathbb Z}_p$ in that case), while for $p\in\{p_1,p_2, p_3\}$, we have $p\nmid D_F$ (as $F$ has four distinct roots in ${\mathbb P}^{1}({\mathbb F}_p)$, having split completely modulo $p$), implying $p^6 || D_{G_j}$, and so $G_j(x , y)$ cannot be a subform over ${\mathbb Z}_p$ of any form by equation \eqref{St6} (see the definition of a subform in \eqref{defofsubform}). We remark that the reduction of Thue equations $F(x , y) = m$ modulo prime divisors of $m$ is a classical approach, and some sophisticated applications of it to bound the number of solutions of Thue equations can be found in \cite{Bom, Ste}.
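The full three-step reduction behind Lemma \ref{corresponds-sol} can be carried out explicitly. The sketch below (an illustration with arbitrarily chosen data, not part of the proof) uses $p_1, p_2, p_3 = 5, 7, 11$ and $F(x , y) = x^4 - 6x^3y + 11x^2y^2 - 6xy^3 + 385y^4$, which is congruent to $x(x - y)(x - 2y)(x - 3y)$ modulo each of these primes and hence splits completely modulo all three; it produces one of the forms $G_{j}$, choosing a finite simple root at each step, and confirms the height relation \eqref{HofGj}.

```python
def mul(P, Q):
    """Product of homogeneous binary forms, given as coefficient lists."""
    R = [0] * (len(P) + len(Q) - 1)
    for i, u in enumerate(P):
        for j, v in enumerate(Q):
            R[i + j] += u * v
    return R

def reduce_at(F, p, b):
    """Coefficients of F(p*x + b*y, y)/p, assuming p divides F(b, 1)."""
    e = [0] * 5
    for i, ai in enumerate(F):
        t = [ai]
        for _ in range(4 - i):
            t = mul(t, [p, b])
        for k, v in enumerate(t):
            e[k + i] += v
    assert all(c % p == 0 for c in e)
    return [c // p for c in e]

def simple_root(F, p):
    """A finite simple root of F modulo p (assumed to exist for this data)."""
    for b in range(p):
        f = sum(c * b**(4 - i) for i, c in enumerate(F))
        fprime = sum((4 - i) * c * b**(3 - i) for i, c in enumerate(F[:4]))
        if f % p == 0 and fprime % p != 0:
            return b
    raise ValueError("no finite simple root modulo %d" % p)

def inv_I(F):
    a0, a1, a2, a3, a4 = F
    return a2 * a2 - 3 * a1 * a3 + 12 * a0 * a4

def inv_J(F):
    a0, a1, a2, a3, a4 = F
    return (2 * a2**3 - 9 * a1 * a2 * a3 + 27 * a1 * a1 * a4
            - 72 * a0 * a2 * a4 + 27 * a0 * a3 * a3)

def height4(F):
    """4 * H(F) = max(4|I|^3, J^2), kept integral to avoid fractions."""
    I, J = inv_I(F), inv_J(F)
    return max(4 * abs(I)**3, J * J)

F = [1, -6, 11, -6, 385]        # congruent to x(x-y)(x-2y)(x-3y) mod 5, 7, 11
G = F
for p in (5, 7, 11):
    G = reduce_at(G, p, simple_root(G, p))

# I -> (p1 p2 p3)^2 I and J -> (p1 p2 p3)^3 J, hence H(G_j) = (p1 p2 p3)^6 H(F)
assert inv_I(G) == (5 * 7 * 11)**2 * inv_I(F)
assert inv_J(G) == (5 * 7 * 11)**3 * inv_J(F)
assert height4(G) == (5 * 7 * 11)**6 * height4(F)
```

Each call to `reduce_at` checks the divisibility of all five coefficients, so the run also confirms the integrality of the intermediate forms.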
\section{Avoiding Local Obstructions}\label{localsection} In the previous section, we constructed $4^3$ binary quartic forms $G_{j}(x , y)$ and established Lemma \ref{corresponds-sol}, which puts each primitive solution of $F(x , y) = h p_1 p_2 p_3$ in one-to-one correspondence with a primitive solution of one of the equations $G_{j}(x , y) = h$, for $1 \leq j \leq 4^3$. Using Proposition \ref{maineq4}, we will obtain a small upper bound for the number of integral solutions to the equation $F(x , y) = m = p_{1} p_{2} p_{3} h$, which will lead us to conclude that some of the newly constructed Thue equations $G_{j}(x , y)= h$ cannot have any solutions. In this section we will work with a proper subset of the set of all quartic forms to construct forms such that the associated Thue equations have no local obstructions to solubility. We will impose some extra congruence conditions in our choice of forms $F(x , y)$, resulting in the construction of $4^3$ forms $G_j(x , y)$ that locally represent $h$. For each prime $p$, we will make some congruence assumptions modulo $p$ and present $p$-adic densities for the subset of quartic forms that satisfy our assumptions to demonstrate that we will be left with a subset of $V_{\mathbb{Z}}$ with positive density. Before we divide up our discussion modulo different primes, we note that by \eqref{defofsubform}, if a form is non-maximal, then either it is not primitive, or after an ${\rm SL}_2({\mathbb Z})$-transformation it is of the form $a_0x^4+a_1x^{3}y+a_2 x^2 y^2 +a_3 xy^3+a_4 y^4$, where $p^i\mid a_i$, $i=0,1,2, 3, 4$, for some prime~$p$. In particular, integral binary quartic forms that are non-maximal must factor modulo some prime $p$ as a constant times the fourth power of a linear form. It turns out that all integral binary quartic forms that are discussed in this section are indeed maximal.
\subsection{Quartic Forms Modulo $2$.} To ensure that a quartic Thue equation $F(x , y) =h$ has a solution in $\mathbb{Z}_2$, it is sufficient to assume that $$ F(x , y) \equiv L_1(x,y) L_2(x , y)^{3} \, \, (\textrm{mod}\, \, 2^{4}), $$ where $L_1(x,y)$ and $L_2(x,y)$ are linearly independent linear forms modulo $2$. The system of two linear equations $$ L_1(x,y) \equiv h \, \, (\textrm{mod}\, 2^{4}) $$ $$ L_2(x , y) \equiv 1 \, \, (\textrm{mod}\, 2^{4}), $$ has a solution and therefore, by Hensel's Lemma, $F(x , y) = h$ is soluble in $\mathbb{Z}_2$. The 2-adic density of quartic forms $F(x , y)$ such that $ F(x , y) \equiv L_1(x,y) L_2(x , y)^{3}$ modulo $2^{4}$ is \begin{equation}\label{2-adicdensity} \frac{6}{2^5} = \frac{3}{16}, \end{equation} where the linear forms $L_1$ and $L_2$ can be chosen from the three linear forms $x$, $y$, or $x + y$. It is indeed necessary to consider integral quartic forms modulo $16$, as a $2$-adic unit $u$ belongs to $\mathbb{Q}^{4}_{2}$ if and only if $u \equiv 1$ modulo $16 \mathbb{Z}_2$. More specifically, assume that $(x_0: y_0: z_0)$ is a $\mathbb{Z}_2$-point on the projective curve $C: hz^4 = F(x , y)$ and $u = z_0^4$, with $z_0$ a unit in $\mathbb{Z}_2$. Therefore, $z_0 = 1 +2 t$ for some $t \in \mathbb{Z}_2$ and $$ z_{0}^{4} = (1 + 2 t)^4 \equiv 1 + 8 \left(t(3t+1)\right) \equiv 1 \, \, (\textrm{mod}\, \, 16). $$ \subsection{Quartic Forms Modulo Large Primes}\label{largeprimesubsection} Let us consider the curve $C: h z^4 = F(x , y)$ of genus~$g = 3$ over the finite field $\mathbb{F}_{q}$ of order $q$. By the Leep-Yeomans generalization of Hasse-Weil bound in \cite{LeYe}, the number of points $N$ on the curve $C$ satisfies the inequality \begin{equation}\label{HWLeYe} \left| N - (q+1) \right| \leq 2g \sqrt{q}. \end{equation} Let $p$ be a prime $p> (2g+1)^2 = 49$, $p \not \in \{p_1, p_2, p_3\}$, $p \nmid h$. 
Since $p+1 \geq 2g \sqrt{p} +1$, the lower bound in \eqref{HWLeYe} is nontrivial, implying that there must be an $\mathbb{F}_p$-rational point on the curve $h z^4 = F(x , y)$. If there exists $a \in {\mathbb Z}$ such that \begin{equation}\label{SimpleRoot} F(x, y) \equiv (x- ay) A(x , y)\, \, (\textrm{mod}\, p), \end{equation} with $A(x , y)$ an integral cubic binary form for which \begin{equation}\label{SimpleRoota} A(a , 1) \not \equiv 0 \, \, (\textrm{mod}\, p), \end{equation} then by Hensel's lemma, the smooth $\mathbb{F}_p$-point $(x_0: y_0 : z_0) = (a : 1 : 0)$ will lift to a $\mathbb{Z}_p$-point on the curve $h z^4 = F(x , y)$. Similarly, if \begin{equation*} F(x, y) \equiv y A(x , y)\, \, (\textrm{mod}\, p), \end{equation*} with $A(x , y)$ an integral cubic binary form for which \begin{equation*} A(1 , 0) \not \equiv 0 \, \, (\textrm{mod}\, p), \end{equation*} the smooth $\mathbb{F}_p$-point $(x_0: y_0 : z_0) = (1 : 0 : 0)$ will lift to a $\mathbb{Z}_p$-point on the curve $h z^4 = F(x , y)$. A quartic form over ${\mathbb F}_p$ that has a triple root must have a simple root, as well. So we will assume that $F(x,y)$ does not factor as $cM(x,y)^2$ modulo~$p$ for any quadratic binary form $M(x,y)$ and constant $c$ over ${\mathbb F}_p$. By definition, these forms are maximal over ${\mathbb Z}_p$. It follows from this assumption on $F(x ,y)$ that the curves $hz^4=F(x,y)$ are irreducible over ${\mathbb F}_p$ and there is at least one smooth $\mathbb{F}_p$-rational point on $h z^4 = F(x , y)$, which lifts to a $\mathbb{Z}_p$-point. We conclude that the integral quartic forms $G_{j}(x, y)$, constructed as described in Section \ref{splitsection} from such a form $F(x , y)$, all represent $h$ in ${\mathbb Z}_p$ for primes $p> (2g+1)^2$ as well. 
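To illustrate the strength of the bound \eqref{HWLeYe} (a routine numerical check), take the smallest prime $p$ exceeding $(2g+1)^2 = 49$, namely $p = 53$. Since $g = 3$, we obtain
$$
N \geq p + 1 - 2g\sqrt{p} = 54 - 6\sqrt{53} > 54 - 6 \times 7.3 = 10.2,
$$
so the curve $h z^4 = F(x , y)$ has at least $11$ points over $\mathbb{F}_{53}$.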
The $p$-adic density of binary quartic forms that are primitive and not constant multiples of the second powers of quadratic binary forms modulo~$p$ is \begin{equation}\label{largedensity} 1 - \frac{ (p-1)(p+1) p }{ 2p^5} - \frac{ (p-1)(p+1) }{p^5}, \end{equation} where the summand $-\frac{ (p-1)(p+1) p }{ 2p^5}$ eliminates forms of the shape $ c M^2(x, y) = c (x-b_{1}y)^2 (x-b_{2}y)^2 $ or $ c M^2(x, y) = c (x-b_{1}y)^2 y^2$ (mod $p$), and the summand $- \frac{ (p-1)(p+1) }{p^5}$ eliminates forms of the shape $ c L(x , y)^4$ (mod $p$), with $L(x , y)$ a linear form modulo $p$. \subsection{Quartic Forms Modulo Special Odd Primes}\label{specialoddprime} For $p \mid h$ we will assume that $$ F(x , y) \equiv L_1(x,y) L_2(x , y)^{3}\, \, \textrm{ (mod}\, \, p), $$ where $L_1(x,y)$ and $L_2(x,y)$ are two linearly independent linear forms modulo $p$. To find ${\mathbb Z}_p$-points on the curve $C: hz^4 = F(x , y)$, we consider the equation $F(x , y)=0$ (mod $p$). Since $L_1(x , y)$ and $L_2(x , y)$ are linearly independent modulo $p$, the system of linear equations $$ L_1(x,y) \equiv 0 \textrm{ (mod} \, \, p) $$ and $$ L_2(x,y) \equiv 0 \textrm{ (mod} \, \, p) $$ has exactly one solution. Since $L_1(x , y)=0$ has at least three points over ${\mathbb F}_p$, the equation $F(x , y)=0$ (mod $p$) has at least two solutions over ${\mathbb F}_p$ that provide smooth ${\mathbb F}_p$-points on the curve $C: hz^4 = F(x , y)$ (i.e., all points other than the intersection point of the two lines defined by $L_1(x , y)$ and $L_2(x , y)$). By Hensel's Lemma, these smooth points will lift to ${\mathbb Z}_p$-points. Thus the equations $F(x,y)=h$ and $G_j(x,y)=h$ will be locally soluble modulo $p$. 
Similarly, for every odd prime $p \not \in \{ p_{1}, p_2, p_{3}\}$, with $ p < 49$ and $p \nmid h$ (these are the primes not considered in Subsection \ref{largeprimesubsection}), we will assume that $$ F(x , y) \equiv L_1(x,y) L_2(x , y)^{3}\, \, \textrm{ (mod}\, \, p), $$ where $L_1(x,y)$ and $L_2(x,y)$ are linear forms that are linearly independent modulo~$p$. This condition implies that $F(x , y) \equiv h \textrm{ (mod} \, \, p)$ has solutions in integers, for $L_1(x,y)$ and $L_2(x,y)$ are linearly independent and therefore we can find $x_{0} , y_{0} \in \mathbb{Z}$ satisfying the following system of linear equations: $$ L_1(x_0,y_0) \equiv h \textrm{ (mod} \, \, p) $$ and $$ L_2(x_0,y_0) \equiv 1 \textrm{ (mod} \, \, p). $$ The smooth ${\mathbb F}_p$-point $(x_0 : y_0 : 1)$ lifts to a ${\mathbb Z}_p$-point on the curve $C: hz^4 = F(x , y)$. The $p$-adic density of primitive binary quartic forms of the shape \begin{equation}\label{modhigherp} L_1(x,y) L_2(x , y)^{3}\, \, (\textrm{mod} \, \, p), \end{equation} where $L_1(x,y)$ and $L_2(x,y)$ are linearly independent linear forms modulo $p$, is \begin{equation}\label{specialdensity} \frac{(p+1)p(p-1)}{p^{5}}. \end{equation} The above density is calculated by considering the unique factorization of the form $F$ modulo $p$ as $$ m_{0} (x - b_{1}y)(x - b_{2} y)^3, $$ with $m_{0}$ non-zero, and $b_{1}$ and $b_{2}$ distinct roots (possibly $\infty$) modulo $p$: there are $p-1$ choices for $m_{0}$ and $(p+1)p$ ordered pairs of distinct roots in $\mathbb{P}^{1}(\mathbb{F}_p)$. Such forms are maximal over ${\mathbb Z}_p$. \section{Completing the proof}\label{completesection} For $i=1, 2, 3$, let $p_{i}$ be the $i$-th prime greater than $4$ such that $p_{i}\nmid h$ and set $$ m = h\, p_1 p_2 p_3, $$ and $$ \mathcal{P} = \{ p_1, p_2, p_3\}. $$ For example, if $h=1$, we will choose $p_1 = 5$, $p_2 = 7$, and $p_3 = 11$. 
Let $F(x , y)$ be a maximal primitive irreducible integral binary quartic form which has a trivial stabilizer in ${\rm GL}_{2}(\mathbb{Q})$, with $$ \left|D_F \right| > (3.5)^{24} \, 4^{8} \left( \prod_{i=1}^{3} p_{i} \right)^{12}. $$ We note that the above assumption on the size of the discriminant of quartic forms excludes only finitely many ${\rm GL}_{2}(\mathbb{Z})$-equivalence classes of quartic forms (see \cite{BM, EG1}). In order to ensure that $h$ is represented by $F$ in $\mathbb{R}$, we assume that the leading coefficient of $F$ is positive if $h$ is positive and negative otherwise. Assume further that $F(x , y)$ splits completely modulo the primes $p_{1}$, $p_2$, $p_{3}$. Assume that for every prime $p \not \in \{ p_{1}, p_2, p_3\} = \mathcal{P}$, with $ p < 49$, we have $$ F(x , y) \equiv L_1(x,y) L_2(x , y)^{3}\, \, \textrm{ (mod}\, \, p), $$ where $L_1(x,y)$ and $L_2(x,y)$ are linear forms that are linearly independent modulo~$p$. Finally, assume, for each prime $p > 49$, that $F(x,y)$ does not factor as $cM(x,y)^2$ modulo~$p$ for any quadratic binary form $M(x,y)$ and constant $c$ over ${\mathbb F}_p$. By Proposition \ref{maineq4}, taking $\epsilon = \frac{1}{12}$, there are at most \[ 36 -16\mathtt{i} + \frac{4-\mathtt{i}}{\frac{1}{4}} = 52 - 20 \mathtt{i} \] primitive solutions to the equation $$ F(x , y) = m = h\, p_1 p_2 p_3, $$ where $2 \, \mathtt{i}$ is the number of non-real roots of the polynomial $F(X, 1)$. By Lemma \ref{corresponds-sol}, each primitive solution $(x_{0}, y_{0})$ of $F(x , y) = m$ corresponds uniquely to a solution of $G_{i}(x , y) = h$, where $1 \leq i \leq 4^3$ is also uniquely determined by $(x_{0}, y_{0})$. Since $$ 4^3 - 52 + 20 \mathtt{i} = 12 + 20 \mathtt{i} \geq 12, $$ we conclude that at least $12$ of the $64$ equations $G_{i}(x , y) = h$ have no solutions in integers $x, y$. 
By \eqref{HofGj}, and Theorems \ref{BaSh-thm1.7} and \ref{BaSh-thm2.11}, we have the following lower bound $\mu$ for the density of integral quartic forms that represent $h$ locally, but not globally, \begin{equation}\label{finaldensity} \mu = \frac{12 } {\left(p_1 p_2 p_3\right)^{5}} \, \, \delta_{2} \prod_{p \in \mathcal{P}} \sigma(p) \prod_{p\geq 49, \, p\not\in \mathcal{P}, \, p\nmid h} \lambda(p) \prod_{p \mid h \, \textrm{or}\, p < 49} \gamma_{p}, \end{equation} where, via \eqref{splitdensity}, \eqref{2-adicdensity}, \eqref{largedensity}, \eqref{specialdensity}, $$\delta_2 = \frac{3}{16},$$ $$\sigma(p)= \frac{ (p -1)^2 (p+4) (p-2) (p-3) }{4! \, p^5},$$ \begin{equation}\label{lambdacal} \lambda(p) = 1 - \frac{ (p-1)(p+1) p }{ 2p^5} - \frac{ (p-1)(p+1) }{p^5}, \end{equation} and $$\gamma_{p} = \frac{(p+1)p(p-1)}{p^{5}}.$$ In \eqref{finaldensity} all products are over rational primes. For all but finitely many primes $p$, the density $ \lambda(p)$ in \eqref{lambdacal} contributes to the product in \eqref{finaldensity}. Since $$ \prod_p \left(1 - \frac{ (p-1)(p+1)(p+2)}{ 2p^5}\right) $$ is a convergent Euler product (its general factor is $1 - O(p^{-2})$), the lower bound $\mu$ is a real number satisfying $0 <\mu <1$. \section*{Acknowledgements.} I am grateful to the anonymous referee for their careful reading of an earlier version of this manuscript and insightful comments. I would like to thank Arul Shankar for very helpful conversations and for answering my questions, especially regarding the height $\mathcal{H}(I , J)$, which is the key tool in the present paper. I would also like to thank Mike Bennett and Manjul Bhargava for their insights and suggestions. This project was initiated during my visit to the Max Planck Institute for Mathematics, in Bonn, in the academic year 2018-2019. I acknowledge the support from the MPIM. 
In different stages of this project, my research has been partly supported by the National Science Foundation award DMS-2001281 and by the Simons Foundation Collaboration Grants, Award Number 635880. \end{document}
\begin{document} \begin{frontmatter} \title{Stochastic Navier-Stokes equations with Caputo derivative driven by fractional noises} \author[author1]{Guang-an Zou} \author[author1]{Guangying Lv} \author[author2]{Jiang-Lun Wu\corref{cor1}} \cortext[cor1]{Corresponding author} \ead{[email protected]} \address[author1]{School of Mathematics and Statistics, Henan University, Kaifeng 475004, P. R. China} \address[author2]{Department of Mathematics, Swansea University, Swansea SA2 8PP, United Kingdom} \begin{abstract} In this paper, we consider the extended stochastic Navier-Stokes equations with Caputo derivative driven by fractional Brownian motion. We firstly derive the pathwise spatial and temporal regularity of the generalized Ornstein-Uhlenbeck process. Then we discuss the existence, uniqueness, and H\"{o}lder regularity of mild solutions to the given problem under certain sufficient conditions, which depend on the fractional order $\alpha$ and Hurst parameter $H$. The results obtained in this study improve some results in the existing literature. \end{abstract} \begin{keyword} Caputo derivative, stochastic Navier-Stokes equations, fractional Brownian motion, mild solutions. \end{keyword} \end{frontmatter} \section{Introduction} Stochastic Navier-Stokes equations (SNSEs) are widely regarded as one of the most fascinating problems in fluid mechanics; in particular, stochasticity could even lead to a better understanding of the physical phenomena and mechanisms of turbulence in fluids. Furthermore, the presence of noises could give rise to some statistical features and important phenomena; for example, a unique invariant measure and ergodic behavior for the SNSEs driven by degenerate noise have been established, which cannot be found in the deterministic Navier-Stokes equations \cite{Flandoli-Maslowski-1995,Hairer-2006}. Since the seminal work of Bensoussan and Temam \cite{Bensoussan-1973}, the SNSEs have been intensively investigated in the literature. 
The existence and uniqueness of solutions for the SNSEs with multiplicative Gaussian noise were proved in \cite{Da-2002,Mikulevicius-2005,Taniguchi-2011}. The large deviation principle for SNSEs with multiplicative noise was established in \cite{Wang-2015,Xu-2009}. The study of random attractors of SNSEs can be found in \cite{Brzezniak-2013, Flandoli-1995}, to mention just a few. On the other hand, fractional calculus has gained considerable popularity during the past decades owing to its demonstrated ability to describe physical systems possessing long-term memory and long-range spatial interactions, which play important roles in diverse areas of science and engineering. Some theoretical analysis and experimental data have shown that the fractional derivative can be recognized as one of the best tools to model anomalous diffusion processes \cite{Podlubny-1999,Srivastava-2006,Zhou-Wang-2016}. Consequently, the generalized Navier-Stokes equations with fractional derivative can be introduced to simulate anomalous diffusion in fractal media \cite{Momani-2006,Zhou-2017}. Recently, time-fractional Navier-Stokes equations have been investigated from the perspective of both analytical and numerical solutions; see \cite{De-2015,Ganji-2010,Kumar-2015,Li-2016,Zhou-Peng-2017,Zou-Zhou-2017} for more details. We would like to emphasize that it is natural and also important to study the generalized SNSEs with time-fractional derivatives, which might be useful to reasonably model the phenomenon of anomalous diffusion with intrinsic random effects. 
In this paper, we are concerned with the following generalized stochastic Navier-Stokes equations with time-fractional derivative on a finite time interval $[0,T]$ driven by fractional noise, defined on a domain $D\subset\mathbb{R}^d,d\ge1,$ with regular boundary $\partial D$ \begin{align*} ^{C}D_{t}^{\alpha}u=\nu\Delta u-(u\cdot\nabla)u-\nabla p+f(u)+\dot{B}^{H},~x\in D,~t>0, \tag{1.1} \end{align*} with the incompressibility condition: \begin{align*} \mathrm{div} u=0,~x\in D,t\geq0, \tag{1.2} \end{align*} subject to the initial condition: \begin{align*} u(x,0)=u_{0}(x),~x\in D,t=0, \tag{1.3} \end{align*} and the Dirichlet boundary conditions: \begin{align*} u(x,t)=0,~x\in \partial D,t\geq0, \tag{1.4} \end{align*} in which $u=u(x,t)$ represents the velocity field of the fluid; $\nu>0$ is the viscosity coefficient; $p=p(x,t)$ is the associated pressure field; $f(u)$ stands for the deterministic external forces. The term $\dot{B}^{H}=\frac{d}{dt}B^{H}(t)$, where $B^{H}(t)$ is a cylindrical fractional Brownian motion (fBm) with Hurst parameter $H\in(0,1)$, describes a state-dependent random noise. Here, $^{C}D_{t}^{\alpha}$ denotes the Caputo-type derivative of order $\alpha$ ($0<\alpha<1$) for a function $u(x,t)$ with respect to time $t$ defined by \begin{align*} ^{C}D_{t}^{\alpha}u(x,t)=\begin{cases} \frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{\partial u(x,s)}{\partial s}\frac{ds}{(t-s)^{\alpha}},~0<\alpha<1,\\ \frac{\partial u(x,t)}{\partial t}, ~~~~~~~~~~~~~~~~~~~~~~ \alpha =1,\\ \end{cases} \tag{1.5} \end{align*} where $\Gamma(\cdot)$ stands for the gamma function $\Gamma(x)=\int_{0}^{\infty}t^{x-1}e^{-t}dt$. 
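As a simple illustration of definition (1.5) (a standard computation, recorded here for the reader's convenience), take $u(t)=t$ with $0<\alpha<1$; then \begin{align*} ^{C}D_{t}^{\alpha}t=\frac{1}{\Gamma(1-\alpha)}\int_{0}^{t}\frac{ds}{(t-s)^{\alpha}}=\frac{t^{1-\alpha}}{(1-\alpha)\Gamma(1-\alpha)}=\frac{t^{1-\alpha}}{\Gamma(2-\alpha)}, \end{align*} which recovers the classical derivative $\frac{d}{dt}t=1$ as $\alpha\rightarrow1^{-}$.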
Define the Stokes operator subject to the no-slip homogeneous Dirichlet boundary condition (1.4) as the formula \begin{align*} Au:=-\nu P_{H}\Delta u, \end{align*} where $P_{H}$ is the Helmholtz-Hodge projection operator. We also define the nonlinear operator $B$ as \begin{align*} B(u,v)=-P_{H}(u\cdot\nabla)v, \end{align*} with a slight abuse of notation $B(u):=B(u,u)$. By applying the Helmholtz-Hodge operator $P_{H}$ to each term of the time-fractional SNSEs, we can rewrite the Eqs.(1.1)-(1.4) as follows in the abstract form: \begin{align*} \begin{cases} ^{C}D_{t}^{\alpha}u=-Au+B(u)+f(u)+\dot{B}^{H},t>0,\\ u(0)=u_{0}, \end{cases} \tag{1.6} \end{align*} where we shall also use the same notation $f(u)$ instead of $P_{H}f$, and a solution of problem (1.6) is also a solution of Eqs.(1.1)-(1.4). Note that the fractional Brownian motions (fBms) form a subclass of Gaussian processes, whose increments are positively correlated for $H\in(1/2,1)$ and negatively correlated for $H\in(0,1/2)$, while for $H=1/2$ one recovers the standard Brownian motion. So it is interesting to consider stochastic differential equations with fBm, and the subject of stochastic calculus with respect to fBm has attracted much attention \cite{Biagini-2008,Duncan-2009,Jiang-2012,Mishura-2008}. In recent years, the existence and uniqueness of solutions for stochastic Burgers equations with fBm have been examined in \cite{Jiang-2012,Wang-Zeng-2010}. In addition, Zou and Wang investigated the time-fractional stochastic Burgers equation with standard Brownian motion \cite{Zou-2017}. However, to the best of our knowledge, the study of time-fractional SNSEs with fBm has not been addressed yet, which is indeed a fascinating and practical problem. 
The objective of the present paper is to establish the existence and uniqueness of mild solutions by means of the Banach fixed point theorem and Mainardi's Wright-type function; the key difficulty is how to deal with the stochastic convolution. We also prove the H\"{o}lder regularity of mild solutions to the time-fractional SNSEs. Our consideration extends and improves the existing results carried out in previous studies \cite{De-2015,Flandoli-Maslowski-1995,Mikulevicius-2005,Wang-Zeng-2010,Zhou-2017}. The rest of the paper is organized as follows. In the next section, we introduce several notions and give certain preliminaries needed in our later analysis. In Section 3, we establish the pathwise spatial and temporal regularity of the generalized Ornstein-Uhlenbeck process. In Section 4, we show the existence and uniqueness of mild solutions to the time-fractional SNSEs. We end our paper by proving the H\"{o}lder regularity of the mild solution. \section{Notations and preliminaries} In this section, we give some notions and certain important preliminaries, which will be used in the subsequent discussions. Let $(\Omega,\mathcal{F},\mathds{P},\{\mathcal{F}_{t}\}_{t\geq0})$ be a filtered probability space with a normal filtration $\{\mathcal{F}_{t}\}_{t\geq0}$. We assume that the operator $A$ is self-adjoint and that there exist eigenvectors $e_{k}$ corresponding to eigenvalues $\gamma_{k}$ such that \begin{align*} Ae_{k}=\gamma_{k}e_{k},~e_{k}(x)=\sqrt{2}\sin(k\pi x),~\gamma_{k}=\pi^{2}k^{2},~k\in \mathbb{N}^{+}. \end{align*} For any $\sigma>0$, $A^{\frac{\sigma}{2}}e_{k}=\gamma_{k}^{\frac{\sigma}{2}}e_{k}, k=1,2,\ldots$, and let $\dot{H}^{\sigma}$ be the domain of the fractional power defined by \begin{align*} \dot{H}^{\sigma}=\mathcal{D}(A^{\frac{\sigma}{2}})=\{v\in L^{2}(D)~\textrm{s.t.}~\|v\|_{ \dot{H}^{\sigma}}^{2}=\sum\limits_{k=1}^{\infty}\gamma_{k}^{\sigma}v_{k}^{2}<\infty\}, \end{align*} where $v_{k}=(v,e_{k})$ and the norm $\|v\|_{ \dot{H}^{\sigma}}=\|A^{\frac{\sigma}{2}}v\|$. 
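Formally, in the one-dimensional scalar case $D=(0,1)$ with viscosity $\nu=1$ (stated here only as an illustration of the spectral data above), one checks directly that \begin{align*} -\frac{d^{2}}{dx^{2}}\left(\sqrt{2}\sin(k\pi x)\right)=k^{2}\pi^{2}\sqrt{2}\sin(k\pi x)=\gamma_{k}e_{k}(x), \end{align*} and $\int_{0}^{1}2\sin^{2}(k\pi x)dx=1$, so that $\{e_{k}\}_{k\geq1}$ is an orthonormal system of eigenfunctions satisfying the homogeneous Dirichlet boundary conditions.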
Let $L^{2}(\Omega,H)$ be the Hilbert space of $H$-valued random variables equipped with the inner product $\mathbb{E}(\cdot,\cdot)$ and norm $\mathbb{E}\|\cdot\|$; it is given by \begin{align*} L^{2}(\Omega,H)=\{\chi:\mathbb{E}\|\chi\|_{H}^{2}=\int_{\Omega}\|\chi(\omega)\|_{H}^{2}d\mathds{P}(\omega)<\infty,\omega\in \Omega\}. \end{align*} \textbf{Definition 2.1.} For $H\in(0,1)$, a continuous centered Gaussian process $\{\beta^{H}(t),t\in[0,\infty)\}$ with covariance function \begin{align*} R_{H}(t,s)=\mathbb{E}[\beta^{H}(t)\beta^{H}(s)]=\frac{1}{2}(t^{2H}+s^{2H}-|t-s|^{2H}),~t,s\in [0,\infty) \end{align*} is called a one-dimensional fractional Brownian motion (fBm), and $H$ is the Hurst parameter. In particular, when $H=\frac{1}{2}$, $\beta^{H}(t)$ represents a standard Brownian motion. Now let us introduce the Wiener integral with respect to the fBm. To begin with, we represent $\beta^{H}(t)$ as follows (see \cite{Biagini-2008}) \begin{align*} \beta^{H}(t)=\int_{0}^{t}K_{H}(t,s)dW(s), \end{align*} where $W=\{W(t),t\in[0,T]\}$ is a Wiener process on the space $(\Omega,\mathcal{F},\mathds{P},\{\mathcal{F}_{t}\}_{t\geq0})$ and the kernel $K_{H}(t,s), 0\le s< t\le T$, is given by \begin{align*} K_{H}(t,s):=c_{H}(t-s)^{H-\frac{1}{2}}+c_{H}(\frac{1}{2}-H)\int_{s}^{t}(u-s)^{H-\frac{3}{2}}(1-(\frac{s}{u})^{\frac{1}{2}-H})du, \tag{2.1} \end{align*} for $0<H<\frac{1}{2}$, where $c_{H}=(\frac{2H\Gamma(\frac{3}{2}-H)}{\Gamma(H+\frac{1}{2})\Gamma(2-2H)})^{\frac{1}{2}}$ is a constant. When $\frac{1}{2}<H<1$, there holds \begin{align*} K_{H}(t,s)=c_{H}(H-\frac{1}{2})s^{\frac{1}{2}-H}\int_{s}^{t}(u-s)^{H-\frac{3}{2}}u^{H-\frac{1}{2}}du.\tag{2.2} \end{align*} It is easy to verify that \begin{align*} \frac{\partial K_{H}}{\partial t}(t,s)=c_{H}(H-\frac{1}{2})(\frac{s}{t})^{\frac{1}{2}-H}(t-s)^{H-\frac{3}{2}}. \tag{2.3} \end{align*} We denote by $\mathcal{H}$ the reproducing kernel Hilbert space of the fBm. 
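Indeed, for $\frac{1}{2}<H<1$ the identity (2.3) follows from (2.2) by the fundamental theorem of calculus, since $t$ enters (2.2) only through the upper limit of integration: \begin{align*} \frac{\partial K_{H}}{\partial t}(t,s)=c_{H}(H-\frac{1}{2})s^{\frac{1}{2}-H}(t-s)^{H-\frac{3}{2}}t^{H-\frac{1}{2}}=c_{H}(H-\frac{1}{2})(\frac{s}{t})^{\frac{1}{2}-H}(t-s)^{H-\frac{3}{2}}. \end{align*}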
Let $K_{\tau}^{*}:\mathcal{H}\rightarrow L^{2}([0,T])$ be the linear map given by \begin{align*} (K_{\tau}^{*}\psi)(s)=\psi(s)K_{H}(\tau,s)+\int_{s}^{\tau}(\psi(t)-\psi(s))\frac{\partial K_{H}}{\partial t}(t,s)dt \tag{2.4} \end{align*} for $0<H<\frac{1}{2}$, and if $\frac{1}{2}<H<1$, we denote \begin{align*} (K_{\tau}^{*}\psi)(s)=\int_{s}^{\tau}\psi(t)\frac{\partial K_{H}}{\partial t}(t,s)dt. \tag{2.5} \end{align*} We refer the reader to \cite{Mishura-2008} for the proof of the fact that $K_{\tau}^{*}$ is an isometry between $\mathcal{H}$ and $L^{2}([0,T])$. Moreover, for any $\psi\in \mathcal{H}$, we have the following relation between the Wiener integral with respect to fBm and the It\^{o} integral with respect to the Wiener process \begin{align*} \int_{0}^{t}\psi(s)d\beta^{H}(s)=\int_{0}^{t}(K_{\tau}^{*}\psi)(s)dW(s),~t\in [0,T]. \end{align*} Generally, following the standard approach for $H=\frac{1}{2}$, we consider a $Q$-Wiener process with linear bounded covariance operator $Q$ such that $\mathrm{Tr} (Q)<\infty$. Furthermore, there exist eigenvalues $\lambda_{k}$ and corresponding eigenfunctions $e_{k}$ satisfying $Q e_{k}=\lambda_{k}e_{k},k=1,2,\ldots$; then we define the infinite dimensional fBm with covariance $Q$ as \begin{align*} B^{H}(t):=\sum\limits_{k=1}^{\infty}\lambda^{1/2}_{k}e_{k}\beta_{k}^{H}(t), \end{align*} where $\beta_{k}^{H}$ are real-valued independent fBm's. In order to define Wiener integrals with respect to $Q$-fBm, we introduce the space $\mathcal{L}_{2}^{0}:=\mathcal{L}_{2}^{0}(Y,X)$ of all $Q$-Hilbert-Schmidt operators $\psi:Y\rightarrow X$, where $Y$ and $X$ are two real separable Hilbert spaces. We equip the $Q$-Hilbert-Schmidt operators $\psi$ with the norm \begin{align*} \|\psi\|_{\mathcal{L}_{2}^{0}}^{2}=\sum\limits_{k=1}^{\infty}\|\lambda^{1/2}_{k}\psi e_{k}\|^{2}<\infty. 
\end{align*} As a consequence, for $\psi\in \mathcal{L}_{2}^{0}(Y,X)$, the Wiener integral of $\psi$ with respect to $B^{H}(t)$ is defined by \begin{align*} \int_{0}^{t}\psi(s)dB^{H}(s)=\sum\limits_{k=1}^{\infty}\int_{0}^{t}\lambda^{1/2}_{k}\psi(s)e_{k}d\beta_{k}^{H}(s)=\sum\limits_{k=1}^{\infty}\int_{0}^{t}\lambda^{1/2}_{k}(K_{\tau}^{*}\psi e_{k})(s)d\beta_{k}(s), \tag{2.6} \end{align*} where $\beta_{k}$ is the standard Brownian motion. \textbf{Definition 2.2.} An $\mathcal{F}_{t}$-adapted stochastic process $(u(t),t\in[0,T])$ is called a mild solution to (1.6) if the following integral equation is satisfied \begin{align*} u(t)&=E_{\alpha}(t)u_{0}+\int_{0}^{t}(t-s)^{\alpha-1}E_{\alpha,\alpha}(t-s)[B(u(s))+f(u(s))]ds\\ &~~~+\int_{0}^{t}(t-s)^{\alpha-1}E_{\alpha,\alpha}(t-s)dB^{H}(s), \tag{2.7} \end{align*} where the generalized Mittag-Leffler operators $E_{\alpha}(t)$ and $E_{\alpha,\alpha}(t)$ are defined, respectively, by \begin{align*} E_{\alpha}(t):=\int_{0}^{\infty}\xi_{\alpha}(\theta)T(t^{\alpha}\theta)d\theta, \end{align*} and \begin{align*} E_{\alpha,\alpha}(t):=\int_{0}^{\infty}\alpha\theta\xi_{\alpha}(\theta)T(t^{\alpha}\theta)d\theta, \end{align*} where $T(t)=e^{-tA},t\geq0$ is an analytic semigroup generated by the operator $-A$, and the Mainardi's Wright-type function with $\alpha\in (0,1)$ is given by \begin{align*} \xi_{\alpha}(\theta)=\sum_{k=0}^{\infty}\frac{(-1)^{k}\theta^{k}}{k!\Gamma(1-\alpha(1+k))}. \end{align*} Furthermore, for any $\alpha\in (0,1)$ and $-1<\nu<\infty$, it is not difficult to verify that \begin{align*} \xi_{\alpha}(\theta)\geq0 \quad\textrm{and}\quad \int_{0}^{\infty}\theta^{\nu}\xi_{\alpha}(\theta)d\theta=\frac{\Gamma(1+\nu)}{\Gamma(1+\alpha\nu)}, \tag{2.8} \end{align*} for all $\theta\geq0$. For the derivation of the mild solution (2.7), we refer to \cite{Zou-2017}. 
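For example (a standard special case, recorded here for illustration), when $\alpha=\frac{1}{2}$ the Wright-type function admits the closed form $\xi_{1/2}(\theta)=\frac{1}{\sqrt{\pi}}e^{-\theta^{2}/4}$, and the moment identity (2.8) with $\nu=1$ can be checked directly: \begin{align*} \int_{0}^{\infty}\theta\,\xi_{1/2}(\theta)d\theta=\frac{1}{\sqrt{\pi}}\int_{0}^{\infty}\theta e^{-\theta^{2}/4}d\theta=\frac{2}{\sqrt{\pi}}=\frac{\Gamma(2)}{\Gamma(\frac{3}{2})}. \end{align*}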
The operators $\{E_{\alpha}(t)\}_{t\geq0}$ and $\{E_{\alpha,\alpha}(t)\}_{t\geq0}$ in (2.7) have the following properties \cite{Zou-2017}: \textbf{Lemma 2.1.} For any $t>0$, $E_{\alpha}(t)$ and $E_{\alpha,\alpha}(t)$ are linear and bounded operators. Moreover, for $0<\alpha<1$ and $0\leq\nu<2$, there exists a constant $C>0$ such that \begin{align*} \|E_{\alpha}(t)\chi\|_{\dot{H}^{\nu}}\leq Ct^{-\frac{\alpha\nu}{2}}\|\chi\|,~\|E_{\alpha,\alpha}(t)\chi\|_{\dot{H}^{\nu}}\leq Ct^{-\frac{\alpha\nu}{2}}\|\chi\|. \end{align*} \textbf{Lemma 2.2.} For any $t>0$, the operators $E_{\alpha}(t)$ and $E_{\alpha,\alpha}(t)$ are strongly continuous. Moreover, for $0<\alpha<1$, $0\leq\nu<2$, and $0\leq t_{1}< t_{2}\leq T$, there exists a constant $C>0$ such that \begin{align*} \|(E_{\alpha}(t_{2})-E_{\alpha}(t_{1}))\chi\|_{\dot{H}^{\nu}}\leq C(t_{2}-t_{1})^{\frac{\alpha\nu}{2}}\|\chi\|, \end{align*} and \begin{align*} \|(E_{\alpha,\alpha}(t_{2})-E_{\alpha,\alpha}(t_{1}))\chi\|_{\dot{H}^{\nu}}\leq C(t_{2}-t_{1})^{\frac{\alpha\nu}{2}}\|\chi\|. \end{align*} Throughout the paper, we assume that the mapping $f: \Omega\times H \rightarrow H$ satisfies the following global Lipschitz and growth conditions \begin{align*} \|f(u)-f(v)\|^{2}\leq C\|u-v\|^{2},~\|f(u)\|^{2}\leq C(1+\|u\|^{2})\tag{2.9} \end{align*} for any $u,v\in H$. \section{Regularity of the stochastic convolution} In this section, we state and prove the basic properties of the stochastic convolution. Firstly, we introduce the following generalized Ornstein-Uhlenbeck process \begin{equation*} Z(t):=\int_{0}^{t}(t-s)^{\alpha-1}E_{\alpha,\alpha}(t-s)dB^{H}(s). \tag{3.1} \end{equation*} Obviously, it is very important to establish the basic properties of the stochastic integral (3.1) in the study of the problem (1.6). For the sake of convenience, we introduce the following operator and show some of its properties. 
\textbf{Lemma 3.1.} Let $\mathcal{S}_{\alpha}(t)=t^{\alpha-1}E_{\alpha,\alpha}(t)$. For $0\leq\nu< 2$ and $0<\alpha<1$, there exists a constant $C>0$ such that \begin{align*} \|\mathcal{S}_{\alpha}(t)\chi\|_{\dot{H}^{\nu}}\leq Ct^{\frac{(2-\nu)\alpha-2}{2}}\|\chi\|,~\|[\mathcal{S}_{\alpha}(t_{2})-\mathcal{S}_{\alpha}(t_{1})]\chi\|_{\dot{H}^{\nu}}\leq C(t_{2}-t_{1})^{\frac{ 2-(2-\nu)\alpha}{2}}\|\chi\| \end{align*} for any $0\leq t_{1}<t_{2}\leq T$. \textbf{Proof.} By Lemma 2.1, we get \begin{align*} \|\mathcal{S}_{\alpha}(t)\chi\|_{\dot{H}^{\nu}}=\|t^{\alpha-1}E_{\alpha,\alpha}(t)\chi\|_{\dot{H}^{\nu}}\leq Ct^{\frac{(2-\nu)\alpha-2}{2}}\|\chi\|. \end{align*} Next, utilizing the property of the semigroup $\|A^{\sigma}e^{-tA}\|\leq Ct^{-\sigma}$ for $\sigma\geq0$, we have \begin{align*} \|\frac{d}{dt}\mathcal{S}_{\alpha}(t)\chi \|_{\dot{H}^{\nu}}&=\|(\alpha-1)t^{\alpha-2}E_{\alpha,\alpha}(t)\chi-\int_{0}^{\infty}\alpha^{2}t^{2\alpha-2}\theta^{2}\xi_{\alpha}(\theta)AT(t^{\alpha}\theta)\chi d\theta\|_{\dot{H}^{\nu}}\\ &\leq (1-\alpha)t^{\alpha-2}\|E_{\alpha,\alpha}(t)\chi\|_{\dot{H}^{\nu}}+\int_{0}^{\infty}\alpha^{2}t^{2\alpha-2}\theta^{2}\xi_{\alpha}(\theta)\|A^{1+\frac{\nu}{2}}e^{-t^{\alpha}\theta A}\chi\|d\theta\\ &\leq C(1-\alpha)t^{\frac{(2-\nu)\alpha-4}{2}}\|\chi\|+\frac{\alpha^{2}\Gamma(2-\frac{\nu}{2})}{\Gamma(1+\alpha(1-\frac{\nu}{2}))}t^{\frac{(2-\nu)\alpha-4}{2}}\|\chi\|\\ &\leq Ct^{\frac{(2-\nu)\alpha-4}{2}}\|\chi\|. 
\end{align*} Hence, we have the following \begin{align*} \|[\mathcal{S}_{\alpha}(t_{2})-\mathcal{S}_{\alpha}(t_{1})]\chi\|_{\dot{H}^{\nu}}&=\|\int_{t_{1}}^{t_{2}}\frac{d}{dt}\mathcal{S}_{\alpha}(t)\chi dt\|_{\dot{H}^{\nu}}\\ &\leq\int_{t_{1}}^{t_{2}}Ct^{\frac{(2-\nu)\alpha-4}{2}}\|\chi\|dt\\ &=\frac{2C}{2-(2-\nu)\alpha}[t_{1}^{\frac{(2-\nu)\alpha-2}{2}}-t_{2}^{\frac{(2-\nu)\alpha-2}{2}}]\|\chi\|\\ &\leq \frac{2C}{[2-(2-\nu)\alpha]T_{0}^{2-(2-\nu)\alpha}}(t_{2}-t_{1})^{\frac{2-(2-\nu)\alpha}{2}}\|\chi\|, \end{align*} where $0<T_{0}\leq t_{1}<t_{2}\leq T$, and we have used $t_{2}^{\omega}-t_{1}^{\omega}\leq C(t_{2}-t_{1})^{\omega}$ for $0\leq \omega\leq1$, in the above derivation. In what follows, let us establish the pathwise spatial-temporal regularity of the stochastic convolution (3.1). \textbf{Theorem 3.1.} For $0\leq\nu<2$ and $0<\alpha<1$, the generalized Ornstein-Uhlenbeck process $(Z(t))_{t\geq0}$ with the Hurst parameter $\frac{1}{4}<H<1$ is well defined. Moreover, there holds \begin{align*} \sup\limits_{t\in[0,T]}\mathbb{E}\|Z(t)\|_{\dot{H}^{\nu}}^{2}\leq C(H,Q)T^{\sigma}<\infty, \end{align*} where the index $\sigma$ should satisfy $\sigma=\min\{(2-\nu)\alpha+4H-3,(2-\nu)\alpha+2H-1\}>0$. 
\textbf{Proof.} Using the Wiener integral with respect to fBm and noticing the expression of $K_{t}^{*}$ and the properties of It\^{o} integral, for $0<H<\frac{1}{2}$, we get \begin{align*} \mathbb{E}\|Z(t)\|_{\dot{H}^{\nu}}^{2}&=\mathbb{E}\|\int_{0}^{t}(t-s)^{\alpha-1}E_{\alpha,\alpha}(t-s)dB^{H}(s)\|_{\dot{H}^{\nu}}^{2}\\ &=\sum\limits_{k=1}^{\infty}\mathbb{E}\|\int_{0}^{t}\lambda^{1/2}_{k}(K_{t}^{*}\mathcal{S}_{\alpha}(t-s)e_{k})(s)d\beta_{k}(s)\|_{\dot{H}^{\nu}}^{2}\\ &=\sum\limits_{k=1}^{\infty}\int_{0}^{t}\mathbb{E}\|\lambda^{1/2}_{k}(K_{t}^{*}\mathcal{S}_{\alpha}(t-s)e_{k})(s)\|_{\dot{H}^{\nu}}^{2}ds\\ &=\sum\limits_{k=1}^{\infty}\int_{0}^{t}\mathbb{E}\|\lambda^{1/2}_{k}\mathcal{S}_{\alpha}(t-s)K_{H}(t,s)e_{k}\\ &\hspace{2mm}+\int_{s}^{t}\lambda^{1/2}_{k}[\mathcal{S}_{\alpha}(t-r)-\mathcal{S}_{\alpha}(t-s)]\frac{\partial K_{H}}{\partial r}(r,s)e_{k}dr\|_{\dot{H}^{\nu}}^{2}ds\\ &\leq 2\sum\limits_{k=1}^{\infty}\int_{0}^{t}\mathbb{E}\|\lambda^{1/2}_{k}\mathcal{S}_{\alpha}(t-s)K_{H}(t,s)e_{k}\|_{\dot{H}^{\nu}}^{2}ds\\ &\hspace{2mm}+2\sum\limits_{k=1}^{\infty}\int_{0}^{t}\mathbb{E}\|\int_{s}^{t}\lambda^{1/2}_{k}[\mathcal{S}_{\alpha}(t-r)-\mathcal{S}_{\alpha}(t-s)]\frac{\partial K_{H}}{\partial r}(r,s)e_{k}dr\|_{\dot{H}^{\nu}}^{2}ds\\ &=:I_{1}+I_{2}. 
\tag{3.2} \end{align*} With the help of the following inequality (see \cite{Wang-Zeng-2010}) \begin{align*} K_{H}(t,s)\leq C(H)(t-s)^{H-\frac{1}{2}}s^{H-\frac{1}{2}}, \end{align*} and further combining Lemma 3.1 and the H\"{o}lder inequality, we obtain \begin{align*} I_{1}&=2\sum\limits_{k=1}^{\infty}\int_{0}^{t}\mathbb{E}\|\lambda^{1/2}_{k}\mathcal{S}_{\alpha}(t-s)K_{H}(t,s)e_{k}\|_{\dot{H}^{\nu}}^{2}ds\\ &\leq 2C(H)(\int_{0}^{t}(t-s)^{(2-\nu)\alpha+2H-3}s^{2H-1}\sum\limits_{k=1}^{\infty}\mathbb{E}\|\lambda^{1/2}_{k}e_{k}\|^{2}ds)\\ &\leq 2C(H)Tr(Q)(\int_{0}^{t}(t-s)^{2[(2-\nu)\alpha+2H-3]}ds)^{\frac{1}{2}}(\int_{0}^{t}s^{2(2H-1)}ds)^{\frac{1}{2}}\\ &\leq C(H,Q)t^{(2-\nu)\alpha+4H-3}, \tag{3.3} \end{align*} and on the other hand, utilizing the expression (2.3), we get \begin{align*} I_{2}&=2\sum\limits_{k=1}^{\infty}\int_{0}^{t}\mathbb{E}\|\int_{s}^{t}\lambda^{1/2}_{k}[\mathcal{S}_{\alpha}(t-r)-\mathcal{S}_{\alpha}(t-s)]\frac{\partial K_{H}}{\partial r}(r,s)e_{k}dr\|_{\dot{H}^{\nu}}^{2}ds\\ &\leq 2\sum\limits_{k=1}^{\infty}\int_{0}^{t}\mathbb{E}(\int_{s}^{t}\|[\mathcal{S}_{\alpha}(t-r)-\mathcal{S}_{\alpha}(t-s)]\frac{\partial K_{H}}{\partial r}(r,s)\|_{\dot{H}^{\nu}}^{2}dr)(\int_{s}^{t}\mathbb{E}\|\lambda^{1/2}_{k}e_{k}\|^{2}dr)ds\\ &\leq2C_{H}^{2}(H-\frac{1}{2})^{2}Tr(Q)\int_{0}^{t}(t-s)(\int_{s}^{t}|(s-r)^{\frac{(2-\nu)\alpha}{2}}(\frac{s}{r})^{\frac{1}{2}-H}(r-s)^{H-\frac{3}{2}}|^{2}dr)ds\\ &\leq C(H,Q)(\int_{0}^{t}(t-s)^{(2-\nu)\alpha+4H-3}s^{1-2H}ds)\\ &\leq C(H,Q)t^{(2-\nu)\alpha+2H-1}. \tag{3.4} \end{align*} When $\frac{1}{4}<H<\frac{1}{2}$ and $\sigma=\min\{(2-\nu)\alpha+4H-3,(2-\nu)\alpha+2H-1\}>0$, by combining (3.2)-(3.4), one can easily get that \begin{align*} \mathbb{E}\|Z(t)\|_{\dot{H}^{\nu}}^{2}\leq C(H,Q)t^{\sigma}\leq C(H,Q)T^{\sigma}<\infty. 
\end{align*} Similarly, for $\frac{1}{2}<H<1$, one can derive that \begin{align*} &\mathbb{E}\|Z(t)\|_{\dot{H}^{\nu}}^{2}\\ &=\mathbb{E}\|\int_{0}^{t}(t-s)^{\alpha-1}E_{\alpha,\alpha}(t-s)dB^{H}(s)\|_{\dot{H}^{\nu}}^{2}\\ &=\sum\limits_{k=1}^{\infty}\int_{0}^{t}\mathbb{E}\|\lambda^{1/2}_{k}(K_{t}^{*}\mathcal{S}_{\alpha}(t-s)e_{k})(s)\|_{\dot{H}^{\nu}}^{2}ds\\ &=\sum\limits_{k=1}^{\infty}\int_{0}^{t}\mathbb{E}\|\int_{s}^{t}\lambda^{1/2}_{k}\mathcal{S}_{\alpha}(t-r)\frac{\partial K_{H}}{\partial r}(r,s)e_{k}dr\|_{\dot{H}^{\nu}}^{2}ds\\ &\leq C_{H}^{2}(H-\frac{1}{2})^{2}\int_{0}^{t}\mathbb{E}(\int_{s}^{t}\|\mathcal{S}_{\alpha}(t-r)(\frac{s}{r})^{\frac{1}{2}-H}(r-s)^{H-\frac{3}{2}}\|_{\dot{H}^{\nu}}^{2}dr)(\int_{s}^{t}\mathbb{E}\|\lambda^{1/2}_{k}e_{k}\|^{2}dr)ds\\ &\leq C(H,Q)(\int_{0}^{t}(t-s)^{(2-\nu)\alpha+4H-3}s^{1-2H}ds)\\ &\leq C(H,Q)t^{(2-\nu)\alpha+2H-1}. \end{align*} Thus, if $\frac{1}{2}<H<1$ and $(2-\nu)\alpha+2H-1>0$, one can directly obtain $\mathbb{E}\|Z(t)\|_{\dot{H}^{\nu}}^{2}<C(H,Q)T^{(2-\nu)\alpha+2H-1}<\infty$. When $H=\frac{1}{2}$, $B^{H}(t)$ is the standard Brownian motion and it is easy to see that $Z(t)$ is well defined. This completes the proof. $\square$ \textbf{Theorem 3.2.} For $0\leq\nu<2$ and $0<\alpha<1$, the stochastic process $(Z(t))_{t\geq0}$ with $\frac{1}{4}<H<1$ is continuous and it satisfies \begin{align*} \mathbb{E}\|Z(t_{2})-Z(t_{1})\|_{\dot{H}^{\nu}}^{2}\leq C(H,Q)(t_{2}-t_{1})^{\gamma},~ 0\leq t_{1}<t_{2}\leq T, \end{align*} where the index $\gamma=\min\{2-(2-\nu)\alpha,(2-\nu)\alpha+4H-3,(2-\nu)\alpha+2H-1\}>0$. 
\textbf{Proof.} From (2.7), according to the relation between the Wiener integral and fBm, we have \begin{align*} Z(t_{2})-Z(t_{1})&=\int_{0}^{t_{2}}(t_{2}-s)^{\alpha-1}E_{\alpha,\alpha}(t_{2}-s)dB^{H}(s)-\int_{0}^{t_{1}}(t_{1}-s)^{\alpha-1}E_{\alpha,\alpha}(t_{1}-s)dB^{H}(s)\\ &=\int_{0}^{t_{1}}(\mathcal{S}_{\alpha}(t_{2}-s)-\mathcal{S}_{\alpha}(t_{1}-s))dB^{H}(s)+\int_{t_{1}}^{t_{2}}\mathcal{S}_{\alpha}(t_{2}-s)dB^{H}(s)\\ &=\sum\limits_{k=1}^{\infty}\int_{0}^{t_{1}}\lambda^{1/2}_{k}(K_{t}^{*}(\mathcal{S}_{\alpha}(t_{2}-s)-\mathcal{S}_{\alpha}(t_{1}-s))e_{k})(s)d\beta_{k}(s)\\ &\hspace{2mm}+\sum\limits_{k=1}^{\infty}\int_{t_{1}}^{t_{2}}\lambda^{1/2}_{k}(K_{t}^{*}\mathcal{S}_{\alpha}(t_{2}-s)e_{k})(s)d\beta_{k}(s)\\ &=:J_{1}+J_{2}. \tag{3.5} \end{align*} For the term $J_{1}$, we get \begin{align*} &\mathbb{E}\|J_{1}\|_{\dot{H}^{\nu}}^{2}\\ &=\mathbb{E}\|\sum\limits_{k=1}^{\infty}\int_{0}^{t_{1}}\lambda^{1/2}_{k}(K_{t}^{*}(\mathcal{S}_{\alpha}(t_{2}-s)-\mathcal{S}_{\alpha}(t_{1}-s))e_{k})(s)d\beta_{k}(s)\|_{\dot{H}^{\nu}}^{2}\\ &=\sum\limits_{k=1}^{\infty}\int_{0}^{t_{1}}\mathbb{E}\|\lambda^{1/2}_{k}(K_{t}^{*}(\mathcal{S}_{\alpha}(t_{2}-s)-\mathcal{S}_{\alpha}(t_{1}-s))e_{k})(s)\|_{\dot{H}^{\nu}}^{2}ds\\ &=\sum\limits_{k=1}^{\infty}\int_{0}^{t_{1}}\mathbb{E}\|\lambda^{1/2}_{k}(\mathcal{S}_{\alpha}(t_{2}-s)-\mathcal{S}_{\alpha}(t_{1}-s))K_{H}(t,s)e_{k}\\ &\hspace{2mm}+\int_{s}^{t}\lambda^{1/2}_{k}[(\mathcal{S}_{\alpha}(t_{2}-r)-\mathcal{S}_{\alpha}(t_{1}-r))-(\mathcal{S}_{\alpha}(t_{2}-s)-\mathcal{S}_{\alpha}(t_{1}-s))]\frac{\partial K_{H}}{\partial r}(r,s)e_{k}dr\|_{\dot{H}^{\nu}}^{2}ds\\ &\leq 2 (t_{2}-t_{1})^{2-(2-\nu)\alpha}\int_{0}^{t}(\|K_{H}(t,s)\|^{2}\mathbb{E}\|\lambda^{1/2}_{k}e_{k}\|^{2}+2\mathbb{E}(\int_{s}^{t}\|\frac{\partial K_{H}}{\partial r}(r,s)\|^{2}dr)(\int_{s}^{t}\mathbb{E}\|\lambda^{1/2}_{k}e_{k}\|^{2}dr))ds\\ &\leq 
C(H)Tr(Q)(t_{2}-t_{1})^{2-(2-\nu)\alpha}\int_{0}^{t}[(t-s)^{2H-1}s^{2H-1}+(t-s)(\int_{s}^{t}(\frac{s}{r})^{1-2H}(r-s)^{2H-3}dr)]ds\\ &\leq C(H,Q)t^{2H}(t_{2}-t_{1})^{2-(2-\nu)\alpha}. \tag{3.6} \end{align*} Applying Lemma 3.1 and the H\"{o}lder inequality, we obtain \begin{align*} \mathbb{E}\|J_{2}\|_{\dot{H}^{\nu}}^{2}&=\mathbb{E}\|\sum\limits_{k=1}^{\infty}\int_{t_{1}}^{t_{2}}\lambda^{1/2}_{k}(K_{t}^{*}\mathcal{S}_{\alpha}(t_{2}-s)e_{k})(s)d\beta_{k}(s)\|_{\dot{H}^{\nu}}^{2}\\ &=\sum\limits_{k=1}^{\infty}\int_{t_{1}}^{t_{2}}\mathbb{E}\|\lambda^{1/2}_{k}(K_{t}^{*}\mathcal{S}_{\alpha}(t_{2}-s)e_{k})(s)\|_{\dot{H}^{\nu}}^{2}ds\\ &=\sum\limits_{k=1}^{\infty}\int_{t_{1}}^{t_{2}}\mathbb{E}\|\lambda^{1/2}_{k}\mathcal{S}_{\alpha}(t_{2}-s)K_{H}(t,s)e_{k}\\ &\hspace{2mm}+\int_{s}^{t}\lambda^{1/2}_{k}[\mathcal{S}_{\alpha}(t_{2}-r)-\mathcal{S}_{\alpha}(t_{2}-s)]\frac{\partial K_{H}}{\partial r}(r,s)e_{k}dr\|_{\dot{H}^{\nu}}^{2}ds\\ &\leq 2\int_{t_{1}}^{t_{2}}\|\mathcal{S}_{\alpha}(t_{2}-s)K_{H}(t,s)\|_{\dot{H}^{\nu}}^{2}\mathbb{E}\|\lambda^{1/2}_{k}e_{k}\|^{2}ds\\ &\hspace{2mm}+2\sum\limits_{k=1}^{\infty}\int_{t_{1}}^{t_{2}}\mathbb{E}\|\int_{s}^{t}\lambda^{1/2}_{k}[\mathcal{S}_{\alpha}(t_{2}-r)-\mathcal{S}_{\alpha}(t_{2}-s)]\frac{\partial K_{H}}{\partial r}(r,s)e_{k}dr\|_{\dot{H}^{\nu}}^{2}ds\\ &\leq C(H)[(t_{2}-t_{1})^{(2-\nu)\alpha+4H-3}+(t_{2}-t_{1})^{(2-\nu)\alpha+2H-1}]. 
\tag{3.7} \end{align*} In a similar manner, for $\frac{1}{2}<H<1$, we have \begin{align*} &\mathbb{E}\|Z(t_{2})-Z(t_{1})\|_{\dot{H}^{\nu}}^{2}\\ &\leq 2\sum\limits_{k=1}^{\infty}\mathbb{E}\|\int_{0}^{t_{1}}\lambda^{1/2}_{k}(K_{t}^{*}(\mathcal{S}_{\alpha}(t_{2}-s)-\mathcal{S}_{\alpha}(t_{1}-s))e_{k})(s)d\beta_{k}(s)\|_{\dot{H}^{\nu}}^{2}\\ &\hspace{2mm}+2\sum\limits_{k=1}^{\infty}\mathbb{E}\|\int_{t_{1}}^{t_{2}}\lambda^{1/2}_{k}(K_{t}^{*}\mathcal{S}_{\alpha}(t_{2}-s)e_{k})(s)d\beta_{k}(s)\|_{\dot{H}^{\nu}}^{2}\\ &= 2\sum\limits_{k=1}^{\infty}\int_{0}^{t_{1}}\mathbb{E}\|\int_{s}^{t}\lambda^{1/2}_{k}(\mathcal{S}_{\alpha}(t_{2}-r)-\mathcal{S}_{\alpha}(t_{1}-r))\frac{\partial K_{H}}{\partial r}(r,s)e_{k}dr\|_{\dot{H}^{\nu}}^{2}ds\\ &\hspace{2mm}+2\sum\limits_{k=1}^{\infty}\int_{t_{1}}^{t_{2}}\|\int_{s}^{t}\lambda^{1/2}_{k}\mathcal{S}_{\alpha}(t_{2}-r)\frac{\partial K_{H}}{\partial r}(r,s)e_{k}dr\|_{\dot{H}^{\nu}}^{2}ds\\ &\leq 2 (t_{2}-t_{1})^{2-(2-\nu)\alpha}\int_{0}^{t_{1}}(\int_{s}^{t}\|\frac{\partial K_{H}}{\partial r}(r,s)\|^{2}dr)(\int_{s}^{t}\mathbb{E}\|\lambda^{1/2}_{k}e_{k}\|^{2}dr)ds\\ &\hspace{2mm}+2\int_{t_{1}}^{t_{2}}(\int_{s}^{t}\|\mathcal{S}_{\alpha}(t_{2}-r)\frac{\partial K_{H}}{\partial r}(r,s)\|_{\dot{H}^{\nu}}^{2}dr)(\int_{s}^{t}\mathbb{E}\|\lambda^{1/2}_{k}e_{k}\|^{2}dr)ds\\ &\leq C(H,Q)[t^{2H}(t_{2}-t_{1})^{2-(2-\nu)\alpha}+(t_{2}-t_{1})^{(2-\nu)\alpha+2H-1}]. \tag{3.8} \end{align*} When $H=\frac{1}{2}$, we can deduce that \begin{align*} \mathbb{E}\|Z(t_{2})-Z(t_{1})\|_{\dot{H}^{\nu}}^{2}\leq C(H,Q)(t_{2}-t_{1})^{2-(2-\nu)\alpha}. \tag{3.9} \end{align*} Therefore, setting $\gamma=\min\{2-(2-\nu)\alpha,(2-\nu)\alpha+4H-3,(2-\nu)\alpha+2H-1\}>0$ with $\frac{1}{4}<H<1$, taking expectations on both sides of (3.5) and combining (3.6)-(3.9) completes the proof. 
$\square$ \section{Existence and regularity of mild solution} In this section, the existence and uniqueness of the mild solution to (1.6) will be proved by the Banach fixed point theorem. Let $K>0$ be a constant to be determined later. We define the following space \begin{align*} B_{R}^{T}:=\{u:u\in C([0,T];\dot{H}^{\sigma}),\sup\limits_{t\in[0,T]}\|u(t)\|_{ \dot{H}^{\sigma}}\leq K,~\forall t\in[0,T],\sigma\geq0\}, \end{align*} where we denote $\dot{H}^{0}:=L^{2}(D)$. The following statement holds. \textbf{Theorem 4.1.} For $0\leq\nu<2$ and $0<\alpha<1$, there exists a stopping time $T^{*}>0$ such that (1.6) has a unique mild solution in $L^{2}(\Omega,B_{R}^{T^{*}})$. \textbf{Proof.} We first define a map $\mathcal{F}:B_{R}^{T}\rightarrow C([0,T];\dot{H}^{\sigma})$ in the following manner: for any $u\in B_{R}^{T}$, \begin{align*} (\mathcal{F}u)(t)&=E_{\alpha}(t)u_{0}+\int_{0}^{t}(t-s)^{\alpha-1}E_{\alpha,\alpha}(t-s)[B(u(s))+f(u(s))]ds\\ &~~~+\int_{0}^{t}(t-s)^{\alpha-1}E_{\alpha,\alpha}(t-s)dB^{H}(s),~t\in [0,T]. \tag{4.1} \end{align*} To begin with, we need to show that $\mathcal{F}u\in B_{R}^{T}$ for $u\in B_{R}^{T}$. Making use of Lemma 3.1, Theorem 3.1 and the H\"{o}lder inequality, and based on $\|B(u)\|\leq C\|u\|\|A^{\frac{1}{2}}u\|$, we get \begin{align*} \mathbb{E}\|\mathcal{F}u\|_{\dot{H}^{\nu}}^{2}&\leq 3\mathbb{E}\|E_{\alpha}(t)u_{0}\|_{\dot{H}^{\nu}}^{2}+3\mathbb{E}\|\int_{0}^{t}\mathcal{S}_{\alpha}(t-s)[B(u)+f(u)]ds\|_{\dot{H}^{\nu}}^{2}+3\mathbb{E}\|Z(t)\|_{\dot{H}^{\nu}}^{2}\\ &\leq C\mathbb{E}\|u_{0}\|_{\dot{H}^{\nu}}^{2}+Ct^{(2-\nu)\alpha-1}\mathbb{E}(\int_{0}^{t}(\|B(u)\|^{2}+\|f(u)\|^{2})ds)+C(H,Q)t^{\sigma}\\ &\leq C\mathbb{E}\|u_{0}\|_{\dot{H}^{\nu}}^{2}+Ct^{(2-\nu)\alpha}(1+K^{2}+K^{4})+C(H,Q)t^{\sigma}, \tag{4.2} \end{align*} which implies that $\mathcal{F}u\in B_{R}^{T}$ when $T>0$ is sufficiently small and $K$ is sufficiently large. By a calculation similar to that leading to (4.2), we obtain the continuity of $\mathcal{F}u$. 
Given any $u,v\in B_{R}^{T}$, it follows from Lemma 3.1 that \begin{align*} \mathbb{E}\|\mathcal{F}u-\mathcal{F}v\|_{\dot{H}^{\nu}}^{2}&\leq 2\mathbb{E}\|\int_{0}^{t}\mathcal{S}_{\alpha}(t-s)[B(u)-B(v)]ds\|_{\dot{H}^{\nu}}^{2}+2\mathbb{E}\|\int_{0}^{t}\mathcal{S}_{\alpha}(t-s)[f(u)-f(v)]ds\|_{\dot{H}^{\nu}}^{2}\\ &\leq Ct^{2\alpha-1}\mathbb{E}(\int_{0}^{t}K^{2}\|u-v\|_{\dot{H}^{\nu}}^{2}ds)+Ct^{2\alpha-1}\mathbb{E}(\int_{0}^{t}\|u-v\|_{\dot{H}^{\nu}}^{2}ds),\tag{4.3} \end{align*} which further implies \begin{align*} \sup\limits_{t\in[0,T]}\mathbb{E}\|\mathcal{F}u-\mathcal{F}v\|_{\dot{H}^{\nu}}^{2}\leq C(T^{*})^{2\alpha}(1+K^{2})\sup\limits_{t\in[0,T]}\mathbb{E}\|u-v\|_{\dot{H}^{\nu}}^{2}.\tag{4.4} \end{align*} Next, let us take $T^{*}$ such that \begin{align*} 0<C(T^{*})^{2\alpha}(1+K^{2})<1, \end{align*} so that $\mathcal{F}$ is a strict contraction mapping on $B_{R}^{T}$. By the Banach fixed point theorem, there exists a unique fixed point $u\in L^{2}(\Omega,B_{R}^{T^{*}})$, which is a mild solution of (1.6). This completes the proof. $\square$ Our final main result is devoted to the H\"{o}lder regularity of the mild solution and is stated as follows. \textbf{Theorem 4.2.} For $0\leq\nu<2$, $\frac{1}{4}<H<1$ and $0<\alpha<1$, there exists a unique mild solution $u(t)$ satisfying \begin{align*} \mathbb{E}\|u(t_{2})-u(t_{1})\|_{\dot{H}^{\nu}}^{2}< (t_{2}-t_{1})^{\beta}, ~0\leq t_{1}<t_{2}\leq T, \end{align*} where $\beta=\min\{\alpha\nu,(2-\nu)\alpha,2-(2-\nu)\alpha,(2-\nu)\alpha+4H-3,(2-\nu)\alpha+2H-1\}>0$. 
\textbf{Proof.} From (2.7) we have \begin{align*} u(t_{2})-u(t_{1})&=E_{\alpha}(t_{2})u_{0}-E_{\alpha}(t_{1})u_{0}+\int_{0}^{t_{2}}\mathcal{S}_{\alpha}(t_{2}-s)B(u(s))ds-\int_{0}^{t_{1}}\mathcal{S}_{\alpha}(t_{1}-s)B(u(s))ds\\ &\hspace{2mm}+\int_{0}^{t_{2}}\mathcal{S}_{\alpha}(t_{2}-s)f(u(s))ds-\int_{0}^{t_{1}}\mathcal{S}_{\alpha}(t_{1}-s)f(u(s))ds+Z(t_{2})-Z(t_{1})\\ &=:J_{1}+J_{2}+J_{3}+J_{4}, \tag{4.5} \end{align*} where we define \begin{align*} J_{1}:=E_{\alpha}(t_{2})u_{0}-E_{\alpha}(t_{1})u_{0},~J_{4}:=Z(t_{2})-Z(t_{1}), \end{align*} and \begin{align*} J_{2}:&=\int_{0}^{t_{2}}\mathcal{S}_{\alpha}(t_{2}-s)B(u(s))ds-\int_{0}^{t_{1}}\mathcal{S}_{\alpha}(t_{1}-s)B(u(s))ds\\ &=\int_{0}^{t_{1}}[\mathcal{S}_{\alpha}(t_{2}-s)-\mathcal{S}_{\alpha}(t_{1}-s)]B(u(s))ds+\int_{t_{1}}^{t_{2}}\mathcal{S}_{\alpha}(t_{2}-s)B(u(s))ds\\ &=:J_{21}+J_{22}, \end{align*} and \begin{align*} J_{3}&:=\int_{0}^{t_{2}}\mathcal{S}_{\alpha}(t_{2}-s)f(u(s))ds-\int_{0}^{t_{1}}\mathcal{S}_{\alpha}(t_{1}-s)f(u(s))ds\\ &=\int_{0}^{t_{1}}[\mathcal{S}_{\alpha}(t_{2}-s)-\mathcal{S}_{\alpha}(t_{1}-s)]f(u(s))ds+\int_{t_{1}}^{t_{2}}\mathcal{S}_{\alpha}(t_{2}-s)f(u(s))ds\\ &=:J_{31}+J_{32}. \end{align*} Applying Lemma 2.2 yields \begin{align*} \mathbb{E}\|J_{1}\|_{\dot{H}^{\nu}}^{2}=\mathbb{E}\|E_{\alpha}(t_{2})u_{0}-E_{\alpha}(t_{1})u_{0}\|_{\dot{H}^{\nu}}^{2}\leq (t_{2}-t_{1})^{\alpha\nu}\mathbb{E}\|u_{0}\|^{2}. 
\tag{4.6} \end{align*} Applying Lemma 3.1 and the H\"{o}lder inequality, we get \begin{align*} \mathbb{E}\|J_{2}\|_{\dot{H}^{\nu}}^{2}&\leq 2\mathbb{E}\|J_{21}\|_{\dot{H}^{\nu}}^{2}+2\mathbb{E}\|J_{22}\|_{\dot{H}^{\nu}}^{2}\\ &\leq2\mathbb{E}(\int_{0}^{t_{1}}\|\mathcal{S}_{\alpha}(t_{2}-s)-\mathcal{S}_{\alpha}(t_{1}-s)\|_{\dot{H}^{\nu}}^{2}ds)(\int_{0}^{t_{1}}\|B(u(s))\|^{2}ds)\\ &\hspace{2mm}+2\mathbb{E}(\int_{t_{1}}^{t_{2}}\|\mathcal{S}_{\alpha}(t_{2}-s)\|_{\dot{H}^{\nu}}^{2}ds)(\int_{t_{1}}^{t_{2}}\|B(u(s))\|^{2}ds)\\ &\leq CK^{4}T^{2}(t_{2}-t_{1})^{2-(2-\nu)\alpha}+CK^{4}(t_{2}-t_{1})^{(2-\nu)\alpha}, \tag{4.7} \end{align*} and \begin{align*} \mathbb{E}\|J_{3}\|_{\dot{H}^{\nu}}^{2}&\leq 2\mathbb{E}\|J_{31}\|_{\dot{H}^{\nu}}^{2}+2\mathbb{E}\|J_{32}\|_{\dot{H}^{\nu}}^{2}\\ &\leq2\mathbb{E}(\int_{0}^{t_{1}}\|\mathcal{S}_{\alpha}(t_{2}-s)-\mathcal{S}_{\alpha}(t_{1}-s)\|_{\dot{H}^{\nu}}^{2}ds)(\int_{0}^{t_{1}}\|f(u(s))\|^{2}ds)\\ &\hspace{2mm}+2\mathbb{E}(\int_{t_{1}}^{t_{2}}\|\mathcal{S}_{\alpha}(t_{2}-s)\|_{\dot{H}^{\nu}}^{2}ds)(\int_{t_{1}}^{t_{2}}\|f(u(s))\|^{2}ds)\\ &\leq C(1+K^{2})T^{2}(t_{2}-t_{1})^{2-(2-\nu)\alpha}+C(1+K^{2})(t_{2}-t_{1})^{(2-\nu)\alpha}.\tag{4.8} \end{align*} By Theorem 3.2, we have \begin{align*} \mathbb{E}\|J_{4}\|_{\dot{H}^{\nu}}^{2}=\mathbb{E}\|Z(t_{2})-Z(t_{1})\|_{\dot{H}^{\nu}}^{2}\leq C(H,Q)(t_{2}-t_{1})^{\gamma}.\tag{4.9} \end{align*} Taking expectations on both sides of (4.5) and combining (4.6)-(4.9), the proof of Theorem 4.2 is then completed. $\square$ \end{document}
\begin{document} \pagestyle{plain} \pagenumbering{arabic} \title{Prescription for experimental determination of \ the dynamics of a quantum black box} \begin{abstract} We give an explicit prescription for experimentally determining the evolution operators which completely describe the dynamics of a quantum mechanical black box -- an arbitrary open quantum system. We show necessary and sufficient conditions for this to be possible, and illustrate the general theory by considering specifically one and two quantum bit systems. These procedures may be useful in the comparative evaluation of experimental quantum measurement, communication, and computation systems. \end{abstract} \pacs{PACS numbers: 03.65.Bz, 89.70.+c,89.80.th,02.70.--c} \begin{multicols}{2}[] \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\begin{eqnarray}{\begin{eqnarray}} \def\end{eqnarray}{\end{eqnarray}} \defA^\dagger{A^\dagger} \newcommand{\mattwoc}[4]{\left[ \begin{array}{cc}{#1}&{#2}\\{#3}&{#4}\end{array}\right]} \newcommand{\ket}[1]{\mbox{$|#1\rangle$}} \newcommand{\bra}[1]{\mbox{$\langle #1|$}} \def\rangle{\rangle} \def\langle{\langle} \section{Introduction} Consider a black box with an input and an output. Given that the transfer function is linear, if the dynamics of the box are described by classical physics, well known recipes exist to completely determine the response function of the system. Now consider a {\em quantum-mechanical} black box whose input may be an arbitrary quantum state (in a finite dimensional Hilbert space), with internal dynamics and an output state (of same dimension as the input) determined by quantum physics. The box may even be connected to an external reservoir, or have other inputs and outputs which we wish to ignore. Can we determine the quantum transfer function of the system? The answer is yes. 
Simply stated, the most arbitrary transfer function of a quantum black box is to map one density matrix into another, $\rho_{in} {\rightarrow} \rho_{out}$, and this is determined by a linear mapping ${\cal E}$ which we shall give a prescription for obtaining. The interesting observation is that this black box may be an attempt to realize a useful quantum device. For example, it may be a quantum cryptography channel\cite{Bennett92,Hughes95} (which might include an eavesdropper!), a quantum computer in which decoherence occurs, limiting its performance\cite{Unruh94,Chuang95a}, or just an imperfect quantum logic gate\cite{Turchette95,Monroe95}, whose performance you wish to characterize to determine its usefulness. How many parameters are necessary to describe a quantum black box acting on an input with a state space of $N$ dimensions? And how may these parameters be experimentally determined? Furthermore, how is the resulting description of ${\cal E}$ useful as a performance characterization? We consider these questions in this paper. After summarizing the relevant mathematical formalism, we prove that ${\cal E}$ may be determined completely by a matrix of complex numbers $\chi$, and provide an accessible experimental prescription for obtaining $\chi$. We then give explicit constructions for the cases of one and two quantum bits (qubits), and then conclude by describing related performance estimation quantities derivable from $\chi$. \section{State Change Theory} A general way to describe the state change experienced by a quantum system is by using {\em quantum operations}, sometimes also known as {\em superscattering operators} or {\em completely positive maps}. This formalism is described in detail in \cite{Kraus83a}, and is given a brief but informative review in the appendix to \cite{Schumacher96a}. 
A quantum operation is a linear map ${\cal E}$ which completely describes the dynamics of a quantum system, \begin{equation} \rho \rightarrow \frac{{\cal E}(\rho)}{\mbox{tr}({\cal E}(\rho))} \,. \label{eq:rhomapfirst} \end{equation} A particularly useful description of quantum operations for theoretical applications is the so-called {\em operator-sum representation}: \begin{equation} \label{eqtn: op sum rep} {\cal E}(\rho) = \sum_i A_i \rho A_i^{\dagger} \,. \label{eq:eeffect} \end{equation} The $A_i$ are operators acting on the system alone, yet they completely describe the state changes of the system, including any possible unitary operation (quantum logic gate), projection (generalized measurement), or environmental effect (decoherence). In the case of a ``non-selective'' quantum evolution, such as arises from uncontrolled interactions with an environment (as in the decoherence of quantum computers), the $A_i$ operators satisfy an additional completeness relation, \begin{eqnarray} \sum_i A_i^{\dagger} A_i = I \,. \label{eq:completeness} \end{eqnarray} This relation ensures that the trace factor $\mbox{tr}({\cal E}(\rho))$ is always equal to one, and thus the state change experienced by the system can be written \begin{equation} \rho \rightarrow {\cal E}(\rho) \,. \label{eq:rhomap} \end{equation} Such quantum operations are in a one to one correspondence with the set of transformations arising from the joint unitary evolution of the quantum system and an initially uncorrelated environment\cite{Kraus83a}. In other words, the quantum operations formalism also describes the master equation and quantum Langevin pictures widely used in quantum optics \cite{Louisell,Gardiner91}, where the system's state change arises from an interaction Hamiltonian between the system and its environment\cite{Mabuchi96}. 
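To make the operator-sum picture concrete, here is a minimal numerical sketch (the helper names are ours, not from the paper; numpy assumed) that applies a set of operators $A_i$ to a density matrix and checks the completeness relation Eq.(\ref{eq:completeness}):

```python
import numpy as np

def apply_operation(kraus_ops, rho):
    """Apply E(rho) = sum_i A_i rho A_i^dagger for a list of operators A_i."""
    return sum(A @ rho @ A.conj().T for A in kraus_ops)

def is_trace_preserving(kraus_ops, tol=1e-10):
    """Check the completeness relation sum_i A_i^dagger A_i = I, which
    guarantees tr(E(rho)) = 1 for every density matrix rho."""
    dim = kraus_ops[0].shape[0]
    total = sum(A.conj().T @ A for A in kraus_ops)
    return np.allclose(total, np.eye(dim), atol=tol)
```

For instance, the standard amplitude-damping pair $A_0=\mathrm{diag}(1,\sqrt{1-\gamma})$, $A_1$ with single entry $\sqrt{\gamma}$ satisfies the completeness relation and sends $\ket{1}\bra{1}$ to $\gamma\ket{0}\bra{0}+(1-\gamma)\ket{1}\bra{1}$.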
Our goal will be to describe the state change process by determining the operators $A_i$ which describe ${\cal E}$, (and until Section~\ref{sec:meas} we shall limit ourselves to those which satisfy Eq.(\ref{eq:completeness})). Once these operators have been determined many other quantities of great interest, such as the {\em fidelity}, {\em entanglement fidelity} and {\em quantum channel capacity} can be determined. Typically, the $A_i$ operators are derived from a {\em theoretical} model of the system and its environment; for example, they are closely related to the Lindblad operators. However, what we propose here is different: to determine systematically from {\em experiment} what the $A_i$ operators are for a specific quantum black box. \section{General Experimental Procedure} The experimental procedure may be outlined as follows. Suppose the state space of the system has $N$ dimensions; for example, $N=2$ for a single qubit. $N^2$ pure quantum states $|\psi_1\rangle\langle\psi_1|, \ldots,|\psi_{N^2}\rangle\langle\psi_{N^2}|$ are experimentally prepared, and the output state ${\cal E}(|\psi_j\rangle\langle\psi_j|)$ is measured for each input. This may be done, for example, by using quantum state tomography\cite{Raymer94a,Leonhardt96,Leibfried96a}. In principle, the quantum operation ${\cal E}$ can now be determined by a linear extension of ${\cal E}$ to all states. We prove this below. The goal is to determine the unknown operators $A_i$ in Eq.(\ref{eq:eeffect}). However, experimental results involve numbers (not operators, which are a theoretical concept). To relate the $A_i$ to measurable parameters, it is convenient to consider an equivalent description of ${\cal E}$ using a {\em fixed} set of operators $\tilde{A}_i$, which form a basis for the set of operators on the state space, so that \begin{eqnarray} A_i = \sum_m a_{im} \tilde{A}_m \label{eq:atildedef} \end{eqnarray} for some set of complex numbers $a_{im}$. 
Eq.(\ref{eq:eeffect}) may thus be rewritten as \begin{equation} \label{eqtn: two sided rep} {\cal E}(\rho) = \sum_{mn} \tilde{A}_m \rho \tilde{A}_{n}^{\dagger} \chi_{mn} \,, \end{equation} where $\chi_{mn} \equiv \sum_i a_{im} a_{in}^*$ is a ``classical'' {\em error correlation matrix} which is positive Hermitian by definition. This shows that ${\cal E}$ can be completely described by a complex number matrix, $\chi$, once the set of operators $\tilde{A}_i$ has been fixed. In general, $\chi$ will contain $N^4-N^2$ independent parameters, because a general linear map of $N$ by $N$ matrices to $N$ by $N$ matrices is described by $N^4$ independent parameters, but there are $N^2$ additional constraints due to the fact that the trace of $\rho$ remains one. We will show how to determine $\chi$ experimentally, and then show how an operator sum representation of the form Eq.(\ref{eqtn: op sum rep}) can be recovered once the $\chi$ matrix is known. Let $\rho_j$, $1\leq j \leq N^2$ be a set of linearly independent basis elements for the space of $N$$\times$$N$ matrices. A convenient choice is the set of projectors $\ket{n}\bra{m}$. Experimentally, the output state ${\cal E}(\ket{n}\bra{m})$ may be obtained by preparing the input states $\ket{n}$, $\ket{m}$, $\ket{n_+} = (\ket{n}+\ket{m})/\sqrt{2}$, and $\ket{n_-} = (\ket{n}+i\ket{m})/\sqrt{2}$ and forming linear combinations of ${\cal E}(\ket{n}\bra{n})$, ${\cal E}(\ket{m}\bra{m})$, ${\cal E}(\ket{n_+}\bra{n_+})$, and ${\cal E}(\ket{n_-}\bra{n_-})$. Thus, it is possible to determine ${\cal E}(\rho_j)$ by state tomography, for each $\rho_j$. Furthermore, each ${\cal E}(\rho_j)$ may be expressed as a linear combination of the basis states, \begin{equation} {\cal E}(\rho_j) = \sum_k \lambda_{jk} \rho_k \,, \end{equation} and since ${\cal E}(\rho_j)$ is known, $\lambda_{jk}$ can thus be determined. 
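The linear combination that recovers ${\cal E}(\ket{n}\bra{m})$ from the four prepared inputs can be sketched as follows (a hypothetical helper, assuming numpy; it follows from expanding the two superposition projectors):

```python
import numpy as np

def offdiag_output(out_nn, out_mm, out_plus, out_minus):
    """Reconstruct E(|n><m|) from tomographically measured outputs for the
    inputs |n>, |m>, |n_+> = (|n>+|m>)/sqrt(2), |n_-> = (|n>+i|m>)/sqrt(2).
    Expanding the projectors gives
    E(|n><m|) = E(|n_+><n_+|) + i E(|n_-><n_-|)
                - (1+i)/2 * (E(|n><n|) + E(|m><m|)).
    """
    return out_plus + 1j * out_minus - 0.5 * (1 + 1j) * (out_nn + out_mm)
```

By linearity of ${\cal E}$, applying the same combination to the measured outputs yields the output for any off-diagonal basis element.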
To proceed, we may write \begin{equation} \tilde{A}_m \rho_j \tilde{A}_n^\dagger = \sum_k \beta^{mn}_{jk} \rho_k \,, \label{eq:betadef} \end{equation} where $\beta^{mn}_{jk}$ are complex numbers which can be determined by standard algorithms given the $\tilde{A}_m$ operators and the $\rho_j$ operators. Combining the last two expressions we have \begin{equation} \sum_k \sum_{mn} \chi_{mn} \beta^{mn}_{jk} \rho_k = \sum_k \lambda_{jk}\rho_k \,. \end{equation} {} From independence of the $\rho_k$ it follows that for each $k$, \begin{equation} \label{eqtn: chi condition} \sum_{mn} \beta^{mn}_{jk} \chi_{mn} = \lambda_{jk} \,. \end{equation} This relation is a necessary and sufficient condition for the matrix $\chi$ to give the correct quantum operation ${\cal E}$. One may think of $\chi$ and $\lambda$ as vectors, and $\beta$ as a $N^4$$\times$$N^4$ matrix with columns indexed by $mn$, and rows by $jk$. To show how $\chi$ may be obtained, let $\kappa$ be the generalized inverse for the matrix $\beta$, satisfying the relation \begin{equation} \beta^{mn}_{jk} = \sum_{st,xy} \beta_{jk}^{st} \kappa_{st}^{xy} \beta_{xy}^{mn} \,. \end{equation} Most computer algebra packages are capable of finding such generalized inverses. In appendix \ref{appendix: chi} it is shown that $\chi$ defined by \begin{eqnarray} \chi_{mn} = \sum_{jk} \kappa_{jk}^{mn} \lambda_{jk} \label{eqtn:chidefn} \end{eqnarray} satisfies the relation (\ref{eqtn: chi condition}). The proof is somewhat subtle, but it is not relevant to the application of the present algorithm. Having determined $\chi$ one immediately obtains the operator sum representation for ${\cal E}$ in the following manner. Let the unitary matrix $U^\dagger$ diagonalize $\chi$, \begin{eqnarray} \chi_{mn} = \sum_{xy} U_{mx} d_{x} \delta_{xy} U^*_{ny} . 
\end{eqnarray} {} From this it can easily be verified that \begin{eqnarray} A_i = \sqrt{d_i} \sum_j U_{ij} \tilde{A}_j \end{eqnarray} gives an operator-sum representation for the quantum operation ${\cal E}$. Our algorithm may thus be summarized as follows: $\lambda$ is experimentally measured, and given $\beta$, determined by a choice of $\tilde{A}$, we find the desired parameters $\chi$ which completely describe ${\cal E}$. \section{One and Two Qubits} The above general method may be illustrated by the specific case of a black box operation on a single quantum bit (qubit). A convenient choice for the fixed operators $\tilde{A}_i$ is \begin{eqnarray} \tilde{A}_0 &=& I \label{eq:fixedonebit} \\ \tilde{A}_1 &=& \sigma_x \\ \tilde{A}_2 &=& -i \sigma_y \\ \tilde{A}_3 &=& \sigma_z \,, \label{eq:fixedonebitend} \end{eqnarray} where the $\sigma_i$ are the Pauli matrices. There are 12 parameters, specified by $\chi$, which determine an arbitrary single qubit black box operation ${\cal E}$; three of these describe arbitrary unitary transforms $\exp(i\sum_k r_k\sigma_k)$ on the qubit, and nine parameters describe possible correlations established with the environment $E$ via $\exp(i\sum_{jk} \gamma_{jk} \sigma_j\otimes\sigma^E_k)$. Two combinations of the nine parameters describe physical processes analogous to the $T_1$ and $T_2$ spin-spin and spin-lattice relaxation rates familiar to us from classical magnetic spin systems. However, the dephasing and energy loss rates determined by $\chi$ do not simply describe ensemble behavior; rather, $\chi$ describes the dynamics of a {\em single quantum system}. Thus, the decoherence of a single qubit must be described by {\em more than just two parameters}. {\em Twelve} are needed in general. These 12 parameters may be measured using four sets of experiments. 
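The pipeline just summarized — build $\beta$ from the fixed operator basis, obtain $\chi=\kappa\lambda$ via a generalized inverse, then diagonalize to recover the $A_i$ — can be sketched end-to-end numerically. In this sketch (helper names are ours, assuming numpy) the Moore-Penrose pseudoinverse plays the role of $\kappa$, and `eigh` returns $\chi = V\,\mathrm{diag}(d)\,V^\dagger$ with eigenvectors as columns, so each $A_i$ is read off a column of $V$:

```python
import numpy as np

def beta_matrix(A_tilde, rho_basis):
    """beta[j*K+k, m*K+n] defined by A~_m rho_j A~_n^dag = sum_k beta rho_k.
    Assumes rho_basis is orthonormal in the Hilbert-Schmidt inner product
    (true for the projectors |n><m|), so coefficients are tr(rho_k^dag M)."""
    K = len(rho_basis)
    beta = np.zeros((K * K, K * K), dtype=complex)
    for j, rj in enumerate(rho_basis):
        for m, Am in enumerate(A_tilde):
            for n, An in enumerate(A_tilde):
                M = Am @ rj @ An.conj().T
                for k, rk in enumerate(rho_basis):
                    beta[j * K + k, m * K + n] = np.trace(rk.conj().T @ M)
    return beta

def chi_from_lambda(lam_vec, beta):
    """chi_{mn} = sum_{jk} kappa_{jk}^{mn} lambda_{jk}, with kappa the
    Moore-Penrose pseudoinverse as one valid generalized inverse."""
    K = int(round(np.sqrt(beta.shape[1])))
    return (np.linalg.pinv(beta) @ lam_vec).reshape(K, K)

def kraus_from_chi(chi, A_tilde, tol=1e-12):
    """Diagonalize the Hermitian, positive chi and recover operators with
    E(rho) = sum_i A_i rho A_i^dag (numerically zero modes are dropped)."""
    d, V = np.linalg.eigh(chi)           # chi = V @ diag(d) @ V^dag
    return [np.sqrt(di) * sum(V[m, i] * A_tilde[m] for m in range(len(A_tilde)))
            for i, di in enumerate(d) if di > tol]
```

Since the linear system $\beta\vec\chi=\vec\lambda$ is consistent whenever $\lambda$ comes from a genuine quantum operation, the pseudoinverse solution reproduces ${\cal E}$ exactly up to numerical precision.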
As a specific example, suppose the input states $\ket{0}$, $\ket{1}$, $\ket{+}=(\ket{0}+\ket{1})/\sqrt{2}$ and $\ket{-} = (\ket{0}+i\,\ket{1})/\sqrt{2}$ are prepared, and the four matrices \begin{eqnarray} \rho'_1 &=& {\cal E}(\ket{0}\bra{0}) \\ \rho'_4 &=& {\cal E}(\ket{1}\bra{1}) \\ \rho'_2 &=& {\cal E}(\ket{+}\bra{+}) - i {\cal E}(\ket{-}\bra{-}) - (1-i)(\rho'_1 + \rho'_4)/2 \\ \rho'_3 &=& {\cal E}(\ket{+}\bra{+}) + i {\cal E}(\ket{-}\bra{-}) - (1+i)(\rho'_1 + \rho'_4)/2 \end{eqnarray} are determined using state tomography. These correspond to $\rho'_j = {\cal E}(\rho_j)$, where \begin{equation} \rho_1 = \mattwoc{1}{0}{0}{0} \,, \end{equation} $\rho_2 = \rho_1 \sigma_x$, $\rho_3=\sigma_x\rho_1$, and $\rho_4 = \sigma_x \rho_1\sigma_x$. From Eq.(\ref{eq:betadef}) and Eqs.(\ref{eq:fixedonebit}-\ref{eq:fixedonebitend}) we may determine $\beta$, and similarly $\rho'_j$ determines $\lambda$. However, due to the particular choice of basis, and the Pauli matrix representation of $\tilde{A}_i$, we may express the $\beta$ matrix as the Kronecker product $\beta = \Lambda\otimes \Lambda$, where \begin{equation} \Lambda = \frac{1}{2} \mattwoc{I}{\sigma_x}{\sigma_x}{-I} \,, \end{equation} so that $\chi$ may be expressed conveniently as \begin{equation} \chi = \Lambda \mattwoc{\rho'_1}{\rho'_2}{\rho'_3}{\rho'_4} \Lambda \,, \end{equation} in terms of block matrices. 
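The closed-form block expression is easy to check numerically. The sketch below (assuming numpy, and the projector convention $\rho_2=\ket{0}\bra{1}$, $\rho_3=\ket{1}\bra{0}$) evaluates $\chi = \Lambda\,[\rho'_j]\,\Lambda$; for the identity channel it returns a $\chi$ whose only nonzero entry is the unit weight on $\tilde A_0=I$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Lam = 0.5 * np.block([[I2, sx], [sx, -I2]])   # the block matrix Lambda

def chi_one_qubit(r1, r2, r3, r4):
    """chi = Lambda @ [[rho'_1, rho'_2], [rho'_3, rho'_4]] @ Lambda,
    built from the four tomographically measured output matrices."""
    return Lam @ np.block([[r1, r2], [r3, r4]]) @ Lam
```

This avoids the generalized-inverse step entirely for the single-qubit case.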
Likewise, it turns out that the parameters $\chi_2$ describing the black box operations on two qubits can be expressed as \begin{equation} \chi_2 = \Lambda_2 \overline{\rho}' \Lambda_2 \,, \end{equation} where $\Lambda_2 = \Lambda \otimes \Lambda$, and $\overline{\rho}'$ is a matrix of sixteen measured density matrices, \begin{equation} \overline{\rho}' = P^T \left[\begin{array}{cccc} \rho'_{11} & \rho'_{12} & \rho'_{13} & \rho'_{14} \\ \rho'_{21} & \rho'_{22} & \rho'_{23} & \rho'_{24} \\ \rho'_{31} & \rho'_{32} & \rho'_{33} & \rho'_{34} \\ \rho'_{41} & \rho'_{42} & \rho'_{43} & \rho'_{44} \end{array}\right] P \,, \end{equation} where $\rho'_{nm} = {\cal E}(\rho_{nm})$, $\rho_{nm} = T_n \ket{00}\bra{00} T_m$, $T_1 = I\otimes I$, $T_2 = I\otimes \sigma_x$, $T_3 = \sigma_x \otimes I$, $T_4 = \sigma_x \otimes \sigma_x$, and $P = I\otimes [(\rho_{00}+\rho_{12}+\rho_{21}+\rho_{33})\otimes I]$ is a permutation matrix. Similar results hold for $k>2$ qubits. Note that in general, a quantum black box acting on $k$ qubits is described by $16^k-4^k$ independent parameters. There is a particularly elegant geometric view of quantum operations for a single qubit. This is based on the Bloch vector, $\vec \lambda$, which is defined by \begin{equation} \rho = \frac{I+\vec \lambda \cdot \vec \sigma}{2}, \end{equation} satisfying $| \vec \lambda | \leq 1$. The map Eq.(\ref{eq:rhomap}) is equivalent to a map of the form \begin{equation} \vec \lambda \stackrel{\cal E}{\rightarrow} \vec \lambda' = M \vec \lambda + \vec c \,, \label{eqtn: affine map} \end{equation} where $M$ is a $3$$\times$$3$ matrix, and $\vec c$ is a constant vector. This is an {\em affine map}, mapping the Bloch sphere into itself. 
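The affine pair $(M,\vec c)$ can be extracted from any single-qubit operation by probing it with a handful of inputs: $I/2$ has $\vec\lambda=0$ and returns $\vec c$, while $(I+\sigma_k)/2$ has $\vec\lambda=\hat e_k$ and returns the $k$-th column of $M$ plus $\vec c$. A sketch (hypothetical helper names, assuming numpy):

```python
import numpy as np

# Pauli matrices sigma_x, sigma_y, sigma_z
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def bloch(rho):
    """Bloch vector lambda_k = tr(rho sigma_k) of a 2x2 density matrix."""
    return np.real(np.array([np.trace(rho @ s) for s in sig]))

def affine_parts(channel):
    """Recover M and c in lambda -> M lambda + c for a qubit channel,
    given `channel` as a function acting on 2x2 density matrices."""
    I2 = np.eye(2, dtype=complex)
    c = bloch(channel(I2 / 2))
    M = np.column_stack([bloch(channel((I2 + s) / 2)) - c for s in sig])
    return M, c
```

For the unitary channel $\rho\mapsto\sigma_x\rho\sigma_x$, for example, this yields $M=\mathrm{diag}(1,-1,-1)$ and $\vec c=0$, a rotation of the Bloch sphere by $\pi$ about the $x$ axis.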
If the $A_i$ operators are written in the form \begin{eqnarray} A_i = \alpha_i I + \sum_{k=1}^3 a_{ik} \sigma_k, \end{eqnarray} then it is not difficult to check that \begin{eqnarray} M_{jk} & = & \sum_l \left[ \begin{array}{l} a_{lj} a_{lk}^* + a_{lj}^* a_{lk} + \\ \left( |\alpha_l|^2- \sum_p a_{lp} a_{lp}^* \right) \delta_{jk} + \\ i \sum_p \epsilon_{jkp} ( \alpha_l a_{lp}^* - \alpha_l^* a_{lp} ) \end{array} \right] \\ c_k &=& 2i \sum_l \sum_{jp} \epsilon_{jpk} a_{lj} a_{lp}^* \,, \end{eqnarray} where we have made use of Eq.(\ref{eq:completeness}) to simplify the expression for $\vec c$. The meaning of the affine map Eq.(\ref{eqtn: affine map}) is made clearer by considering the polar decomposition \cite{Horn91a} of the matrix $M$. Any real matrix $M$ can always be written in the form \begin{eqnarray} M = O S \,, \end{eqnarray} where $O$ is a real orthogonal matrix with determinant $1$, representing a proper rotation, and $S$ is a real symmetric matrix. Viewed this way, the map Eq.(\ref{eqtn: affine map}) is just a deformation of the Bloch sphere along principal axes determined by $S$, followed by a proper rotation due to $O$, followed by a displacement due to $\vec c$. Various well-known decoherence measures can be identified from $M$ and $\vec c$; for example, $T_1$ and $T_2$ are related to the magnitude of $\vec c$ and the norm of $M$. Other measures are described in the following section. \section{Related Quantities} We have described how to determine an unknown quantum operation ${\cal E}$ by systematically exploring the response to a complete set of states in the system's Hilbert space. Once the operators $A_i$ have been determined, many other interesting quantities can be evaluated. A quantity of particular importance is the {\em entanglement fidelity} \cite{Schumacher96a,Nielsen96c}. This quantity can be used to measure how closely the dynamics of the quantum system under consideration approximates that of some ideal quantum system. 
Suppose the target quantum operation is a unitary quantum operation, ${\cal U}(\rho) = U \rho U^{\dagger}$, and the actual quantum operation implemented experimentally is ${\cal E}$. The entanglement fidelity can be defined as \cite{Nielsen96c} \begin{eqnarray} F_e(\rho,{\cal U},{\cal E}) & \equiv & \sum_i \left| \mbox{tr}(U^{\dagger} A_i \rho) \right|^2 \\ &=& \sum_{mn} \chi_{mn} \mbox{tr} (U^{\dagger} \tilde{A}_m \rho) \mbox{tr}(\rho \tilde{A}_n^{\dagger} U) \,. \end{eqnarray} The second expression follows from the first by using Eq.(\ref{eq:atildedef}), and shows that errors in the experimental determination of ${\cal E}$ (resulting from errors in preparation and measurement) propagate linearly to errors in the estimation of entanglement fidelity. The minimum value of $F_e$ over all possible states $\rho$ is a single parameter which describes how well the experimental system implements the desired quantum logic gate. One may also be interested in the minimum {\em fidelity} of the gate operation. This is given by the expression, \begin{eqnarray} F \equiv \min_{|\psi\rangle} \langle \psi | U^{\dagger} {\cal E}(|\psi\rangle \langle \psi|) U |\psi \rangle, \end{eqnarray} where the minimum is over all pure states, $|\psi\rangle$. As for the entanglement fidelity, we may show that this quantity can be determined robustly, because of its linear dependence on the experimental errors. Another quantity of interest is the {\em quantum channel capacity}, defined by Lloyd \cite{Lloyd96a,Schumacher96b} as a measure of the amount of quantum information that can be sent using a quantum communication channel, such as an optical fiber. 
In terms of the parameters discussed in this paper, \begin{eqnarray} C({\cal E}) \equiv \max_{\rho} S({\cal E}(\rho)) - S_e(\rho,{\cal E}) \,, \end{eqnarray} where $S({\cal E}(\rho))$ is the von Neumann entropy of the density operator ${\cal E}(\rho)$, $S_e(\rho,{\cal E})$ is the {\em entropy exchange} \cite{Schumacher96a}, and the maximization is over all density operators $\rho$ which may be used as input to the channel. It is a measure of the amount of quantum information that can be sent reliably using a quantum communications channel which is described by a quantum operation ${\cal E}$. One final observation is that our procedure can in principle be used to determine the form of the Lindblad operator, ${\cal L}$, used in Markovian master equations of the form \begin{eqnarray} \dot \rho = {\cal L}(\rho), \end{eqnarray} where for convenience time is measured in dimensionless units, to make ${\cal L}$ dimensionless. This result follows from the fact that Lindblad operators ${\cal L}$ are just the logarithms of quantum operations; that is, $\exp({\cal L})$ is a quantum operation for any Lindblad operator, ${\cal L}$, and $\log {\cal E}$ is a Lindblad operator for any quantum operation ${\cal E}$. This observation may be used in the future to experimentally determine the form of the Lindblad operator for systems, but will not be explored further here. \section{Quantum Measurements} \label{sec:meas} Quantum operations can also be used to describe measurements. For each measurement outcome, $i$, there is associated a quantum operation, ${\cal E}_i$. The corresponding state change is given by \begin{eqnarray} \rho \rightarrow \frac{{\cal E}_i(\rho)}{\mbox{tr}({\cal E}_i(\rho))} \,, \end{eqnarray} where the probability of the measurement outcome occurring is $p_i = \mbox{tr}({\cal E}_i(\rho))$. Note that this mapping may be {\em nonlinear}, because of this renormalization factor. 
Despite the possible nonlinearity, the procedure we have described may be adapted to evaluate the quantum operations describing a measurement. To determine ${\cal E}_i$ we proceed exactly as before, except now we must perform the measurement a large enough number of times that the probability $p_i$ can be reliably estimated, for example by using the frequency of occurrence of outcome $i$. Next, $\rho'_j$ is determined using tomography, allowing us to obtain \begin{eqnarray} {\cal E}_i(\rho_j) = \mbox{tr}({\cal E}_i(\rho_j)) \rho'_j, \end{eqnarray} for each input $\rho_j$ which we prepare, since each term on the right hand side is known. Now we proceed exactly as before to evaluate the quantum operation ${\cal E}_i$. This procedure may be useful, for example, in evaluating the effectiveness of a quantum-nondemolition (QND) measurement\cite{braginsky92}. \section{Conclusion} In this paper we have shown how the dynamics of a quantum system may be experimentally determined using a systematic procedure. This elementary {\em system identification} step \cite{Ljung87} opens the way for robust experimental determination of a wide variety of interesting quantities. Amongst those that may be of particular interest are the quantum channel capacity, the fidelity, and the entanglement fidelity. We expect these results to be of great use in the experimental study of quantum computation, quantum error correction, quantum cryptography, quantum coding and quantum teleportation. \section*{Acknowledgments} We thank C.~M.~Caves, R.~Laflamme, Y.~Yamamoto, and W.~H.~Zurek for many useful discussions about quantum information and quantum optics. This work was supported in part by the Office of Naval Research (N00014-93-1-0116), the Phillips Laboratory (F29601-95-0209), and the Army Research Office (DAAH04-96-1-0299). We thank the Institute for Theoretical Physics for its hospitality and for the support of the National Science Foundation (PHY94-07194). 
ILC acknowledges financial support from the Fannie and John Hertz Foundation, and MAN acknowledges financial support from the Australian-American Educational Foundation (Fulbright Commission). \appendix \section{Proof of the $\chi$ relation} \label{appendix: chi} The difficulty in verifying that $\chi$ defined by (\ref{eqtn:chidefn}) satisfies (\ref{eqtn: chi condition}) is that in general $\chi$ is not uniquely determined by the latter set of equations. For convenience we will rewrite these equations in matrix form as \begin{eqnarray} \label{eqtn: chi cond app} \beta \vec \chi & = & \vec \lambda \\ \label{eqtn: chi defn app} \vec \chi & \equiv & \kappa \vec \lambda \,. \end{eqnarray} {} From the construction that led to equation (\ref{eqtn: two sided rep}) we know there exists at least one solution to equation (\ref{eqtn: chi cond app}), which we shall call $\vec \chi '$. Thus $\vec \lambda = \beta \vec \chi '$. The generalized inverse satisfies $\beta \kappa \beta = \beta$. Premultiplying the definition of $\vec \chi$ by $\beta$ gives \begin{eqnarray} \beta \vec \chi & = & \beta \kappa \vec \lambda \\ & = & \beta \kappa \beta \vec \chi ' \\ & = & \beta \vec \chi ' \\ & = & \vec \lambda \,. \end{eqnarray} Thus $\chi$ defined by (\ref{eqtn: chi defn app}) satisfies equation (\ref{eqtn: chi cond app}), as was required to show. \end{multicols} \end{document}
\begin{document} \title{Addressing Over-Smoothing in Graph Neural Networks via Deep Supervision} \begin{abstract} Learning useful node and graph representations with graph neural networks (GNNs) is a challenging task. It is known that deep GNNs suffer from over-smoothing where, as the number of layers increases, node representations become nearly indistinguishable and model performance on the downstream task degrades significantly. To address this problem, we propose deeply-supervised GNNs (DSGNNs), i.e., GNNs enhanced with deep supervision where representations learned at all layers are used for training. We show empirically that DSGNNs are resilient to over-smoothing and can outperform competitive benchmarks on node and graph property prediction problems. \end{abstract} \section{Introduction} \label{sec:introduction} We live in a connected world and generate vast amounts of graph-structured or network data. Reasoning with graph-structured data has many important applications such as traffic speed prediction, product recommendation, and drug discovery~\citep{zhou2020graph, 10.1093/bib/bbab159}. Graph neural networks (GNNs), first introduced by~\citet{scarselli_gnn}, have emerged as the dominant solution for graph representation learning, which is the first step in building predictive models for graph-structured data. One of the most important applications of GNNs is that of \emph{node property prediction}, as in semi-supervised classification of papers (nodes) in a citation network \citep[see, e.g.,][]{kipf2016semi}. Another exciting and popular application of GNNs is that of \emph{graph property prediction}, as in, for example, graph classification and regression. In this setting, we are given a set of graphs and corresponding labels, one for each graph, and the goal is to learn a mapping from the graph to its label. In both problems, node and graph property prediction, the labels can be binary, multi-class, multi-label, or continuous. 
Even though GNNs have been shown to be a powerful tool for graph representation learning, they are limited in depth, that is, in the number of GNN layers. Indeed, deep GNNs suffer from the problem of over-smoothing where, as the number of layers increases, the node representations become nearly indistinguishable and model performance on the downstream task deteriorates significantly. Previous work has analyzed and quantified the over-smoothing problem \citep{deep_gnn_meng20,Zhao2020PairNorm,chen2020measuring} as well as proposed methodologies to address it explicitly \citep{Li_Han_Wu_2018,Zhao2020PairNorm,jumping-knowledge-networks-xu18c}. Some of the most recent approaches have mainly focused on enforcing diversity in latent node representations via normalization \citep[see, e.g.,][]{zhou-et-al-neurips-2020,Zhao2020PairNorm}. However, while these approaches have tackled the over-smoothing problem in node-property prediction tasks with reasonable success, they have largely overlooked the graph-property prediction problem. In this paper we show that over-smoothing is also a critical problem in graph-property prediction and propose a different approach to overcome it. In particular, our method trains predictors using node/graph representations from all layers, each contributing to the loss function equally, thereby encouraging the GNN to learn discriminative features at all GNN depths. Inspired by the work of \citet{pmlr-v38-lee15a}, we name our approach deeply-supervised graph neural networks (DSGNNs). Compared to approaches such as those by \citet{deep_gnn_meng20}, our method only requires a small number of additional parameters that grow linearly (instead of quadratically) with the number of GNN layers. Furthermore, our approach can easily be combined with previously proposed methods such as normalization~\citep{Zhao2020PairNorm}, and we explore the effectiveness of this combination empirically.
Finally, our approach is suitable for tackling \emph{both} graph and node property prediction problems. In summary, our contributions are the following, \begin{itemize} \item We propose the use of deep supervision for training GNNs, which encourages learning of discriminative features at all GNN layers. We refer to these types of methods as deeply-supervised graph neural networks (DSGNNs); \item DSGNNs can be used to tackle both node and graph-level property prediction tasks; \item DSGNNs are general and can be combined with any state-of-the-art GNN adding only a small number of additional parameters that grows linearly with the number of hidden layers and not the size of the graph; \item and we show that DSGNNs are resilient to the over-smoothing problem in deep networks and can outperform competing methods on challenging datasets. \end{itemize} \section{Related Work} \label{sec:related-work} GNNs have received a lot of attention over the last few years with several extensions and improvements on the original model of \citet{scarselli_gnn} including attention mechanisms \citep{velickovic2018graph} and scalability to large graphs \citep{hamilton2017inductive, klicpera2018predict, chiang2019cluster, zeng2019graphsaint}. While a comprehensive introduction to graph representation learning can be found in \citet{hamilton-grl-book-2020}, below we discuss previous work on graph property prediction and over-smoothing in GNNs focused on the node property prediction problem. \subsection{Graph Property Prediction} A common approach for graph property prediction is to first learn node representations using any of many existing GNNs~\citep{kipf2016semi, hamilton2017inductive, velickovic2018graph, xu2018powerful} and then aggregate the node representations to output a graph-level representation. Aggregation is performed using a readout (also known as pooling) function applied to the node representations output at the last GNN layer. 
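Readout functions applied to the last layer's node representations must not depend on the arbitrary ordering of the nodes. This is easy to check numerically; the following sketch (illustrative only, with a randomly generated feature matrix) verifies that sum, mean and max readouts give the same graph-level vector under any relabeling of the nodes:

```python
import numpy as np

# Minimal check (illustrative, not the paper's code): order-invariant readouts.
# H holds one row of features per node; permuting rows models relabeling
# the nodes of an isomorphic graph.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))             # 5 nodes, 4-dimensional features
perm = rng.permutation(5)
H_perm = H[perm]                        # same graph, nodes renumbered

for readout in (np.sum, np.mean, np.max):
    hG = readout(H, axis=0)             # graph-level vector
    hG_perm = readout(H_perm, axis=0)
    assert np.allclose(hG, hG_perm)     # invariant to the node order

print(np.max(H, axis=0).shape)          # (4,): one vector per graph
```

By contrast, a readout that depends on row order (e.g., flattening $\mathbf{H}$) would assign different representations to isomorphic graphs.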
A key requirement is that the readout function handle isomorphic graphs consistently: since isomorphic graphs differ only in the ordering of their nodes, the readout should be invariant to node permutations. Such invariance can be achieved via readout functions that do not depend on the node order, such as the sum, mean or max. Several more sophisticated readout functions have also been proposed. For example,~\citet{sag_pool} proposed a weighted average readout using self-attention (SAGPool).~\citet{zhang2018end} proposed a pooling layer (SortPool) that sorts nodes based on their structural role in the graph; sorting makes the resulting representation invariant to the node order such that representations can be learnt using $1$D convolutional layers. \Citet{graph-u-nets-gao19a} combine pooling with graph coarsening to train hierarchical graph neural networks. Similarly,~\citet{ying2018hierarchical} proposed differentiable pooling (DiffPool) where the pooling layer learns a soft assignment vector for each node to a cluster. Each cluster is represented by a single super-node and collectively all super-nodes represent a coarse version of the graph; representations for each super-node are learnt using a graph convolutional layer. These hierarchical methods progressively coarsen the graph at each convolutional layer, eventually reducing it to a single node whose representation is used as input to a classifier. \subsection{Over-smoothing in Node-Property Prediction} \Citet{Li_Han_Wu_2018} focus on semi-supervised node classification in a setting with low label rates. They identify the over-smoothing problem as a consequence of the neighborhood aggregation step in GNNs; they show that the latter is equivalent to repeated application of Laplacian smoothing, leading to over-smoothing. They propose a solution that increases the number of training examples using a random walk-based procedure to identify similar nodes. The expanded set of labeled examples is used to train a Graph Convolutional Network~\citep[GCN,][]{kipf2016semi}.
The subset of nodes that the GCN model predicts most confidently are then added to the training set and the model is further fine-tuned; they refer to the latter process as self-training. This approach is not suitable for the graph property prediction setting where node-level labels are not available and the graphs are too small for this scheme to be effective. \Citet{deep_gnn_meng20} propose Deep Adaptive Graph Neural Networks (DAGNNs) for training deep GNNs by separating feature transformation from propagation. DAGNN uses a Multi-layer Perceptron (MLP) for feature transformation and smoothing via powers of the adjacency matrix for propagation similarly to~\citet{klicpera2018combining} and~\citet{wu2019simplifying}. However, the cost of their propagation operation increases quadratically as a function of the number of nodes in the graph hence DAGNNs do not scale well to large graphs. Furthermore, DAGNN's ability to combine local and global neighborhood information is of limited use in the graph property prediction setting where the graphs are small and the distinction between local and global node neighborhoods is difficult to make. \Citet{Zhao2020PairNorm} also analyze the over-smoothing problem and quantify it by measuring the row and column-wise similarity of learnt node representations. They introduce a normalization layer called PairNorm that during training forces these representations to remain distinct across node clusters. They show that generally, the normalization layer reduces the effects of over-smoothing for deep GNNs. To evaluate their approach, they identify the Missing Features (MF) setting such that when test node features are missing then GNNs with PairNorm substantially outperform GNNs without it. PairNorm is a general normalisation layer and it can be used with any graph GNN architecture including ours introduced in \cref{sec:ds_gnns}. It is applicable to both node and graph-level representation learning tasks. 
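As a rough sketch of the idea behind PairNorm (our paraphrase of one common variant: center the node representations and rescale them to a fixed average row norm $s$; this is not the authors' reference implementation):

```python
import numpy as np

# Sketch of a PairNorm-style normalization layer (our paraphrase of
# Zhao & Akoglu, 2020): center node features across nodes, then rescale
# so that the mean squared row norm is approximately s**2.
def pairnorm(H, s=1.0, eps=1e-6):
    H = H - H.mean(axis=0, keepdims=True)            # center across nodes
    scale = np.sqrt((H ** 2).sum(axis=1).mean()) + eps
    return s * H / scale                             # fixed average row norm

H = np.random.default_rng(1).normal(size=(6, 3)) * 10.0
Hn = pairnorm(H)

# After normalization the mean squared row norm is ~ s**2, which keeps
# node representations from collapsing together as depth grows.
print(round((Hn ** 2).sum(axis=1).mean(), 3))
```

Keeping the total pairwise distance between node representations roughly constant across layers is what counteracts the smoothing effect of repeated neighborhood aggregation.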
\citet{zhou-et-al-neurips-2020} also adapt group normalisation~\citep{wu2018group} from standard neural networks to the graph domain. They show their approach tackles over-smoothing better than PairNorm. Group normalisation is most suited to node classification tasks and requires clustering nodes into groups, which is posed as part of the learning problem. The number of groups is difficult to determine and must be tuned as a hyper-parameter. Finally, in an approach closely related to ours, \citet{jumping-knowledge-networks-xu18c} propose jumping knowledge networks (JKNets) that make use of jump connections wiring the outputs of the hidden graph convolutional layers directly to the output layer. These vectors are combined and used as input to a classification or regression layer. JKNets combine learnt node representations aggregated over neighborhoods of different sizes in order to alleviate the problem of over-smoothing. We propose a different approach that, instead of combining hidden representations across layers, introduces a classification or regression layer attached to the output of each hidden graph convolutional layer. \section{Graph Neural Networks} \label{sec:graph_convolutional_networks} Let a graph be represented as the tuple $G=(V, E)$ where $V$ is the set of nodes and $E$ the set of edges. The graph has $|V| = N$ nodes. We assume that each node $v \in V$ is also associated with an attribute vector $\mathbf{x}_v \in \mathbb{R}^d$ and let $\mathbf{X} \in \mathbb{R}^{N \times d}$ represent the attribute vectors for all nodes in the graph. Let $\mathbf{A} \in \mathbb{R}^{N \times N}$ represent the graph adjacency matrix; here we assume that $\mathbf{A}$ is a symmetric and binary matrix such that $\mathbf{A}_{ij} \in \{0, 1\}$, where $\mathbf{A}_{ij}=1$ if there is an edge between nodes $i$ and $j$, i.e., $(v_i, v_j) \in E$, and $\mathbf{A}_{ij}=0$ otherwise. Also, let $\mathbf{D}$ represent the diagonal degree matrix such that $\mathbf{D}_{ii} = \sum_{j=0}^{N-1}\mathbf{A}_{ij}$.
Typical GNNs learn node representations via a neighborhood aggregation function. Assuming a GNN with $K$ layers, we define such a neighborhood aggregation function centred on node $v$ at layer $l$ as follows, \begin{equation} \mathbf{h}^{(l)}_v = \activ{l} \left( f\left( g\left(\mathbf{h}^{(l-1)}_v, \mathbf{h}^{(l-1)}_u~\forall u \in \neigh{v} \right) \right) \right), \label{eq:gnn_layer} \end{equation} where $\neigh{v}$ is the set of node $v$'s neighbors in the graph, $g$ is an aggregation function, $f$ is a linear transformation that could be the identity function, and $\activ{l}$ is a non-linear function applied element-wise. Let $\mathbf{H}^{(l)} \in \mathbb{R}^{N \times d^{(l)}}$ denote the representations for all nodes at the $l$-th layer with output dimension $d^{(l)}$; we set $\mathbf{H}^{(0)} \defeq \mathbf{X}$ and $d^{(0)} \defeq d$. A common aggregation function $g$ that calculates the weighted average of the node features, where the weights are a deterministic function of the node degrees, is $\hat{\mathbf{A}}\mathbf{H}$ as proposed by~\citet{kipf2016semi}. Here $\hat{\mathbf{A}}$ represents the symmetrically normalized adjacency matrix with self loops given by $\hat{\mathbf{A}} = \hat{\mathbf{D}}^{-1/2}(\mathbf{A}+\mathbf{I})\hat{\mathbf{D}}^{-1/2}$ where $\hat{\mathbf{D}}$ is the degree matrix for $\mathbf{A}+\mathbf{I}$ and $\mathbf{I} \in \mathbb{R}^{N \times N}$ is the identity matrix. Substituting this aggregation function in \cref{eq:gnn_layer}, specifying $f$ to be a linear projection with weights $\mathbf{W}$ and defining the matrix $\nestedmathbold{\Omega}$ such that $\Omega_{ij} \defeq \hat{A}_{ij}$, gives rise to the graph convolutional layer of \citet{kipf2016semi}, \begin{equation} \mathbf{H}^{(l)}=\activ{l}(\nestedmathbold{\Omega} \mathbf{H}^{(l-1)}\mathbf{W}^{(l)}), \label{eq:gcn_layer} \end{equation} where, as before, $\activ{l}$ is a non-linear function, typically the element-wise rectified linear unit (ReLU) activation \citep{nair2010rectified}.
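The graph convolutional layer $\mathbf{H}^{(l)}=\mathrm{ReLU}(\hat{\mathbf{A}} \mathbf{H}^{(l-1)}\mathbf{W}^{(l)})$ can be sketched in a few lines of Python (a dense, illustrative implementation with a toy graph; practical systems use sparse operations):

```python
import numpy as np

# Illustrative dense GCN layer: A_hat is the symmetrically normalized
# adjacency matrix with self-loops, D^{-1/2} (A + I) D^{-1/2}.
def gcn_layer(A, H, W):
    A_tilde = A + np.eye(A.shape[0])               # add self-loops
    d = A_tilde.sum(axis=1)                        # degrees of A + I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt
    return np.maximum(A_hat @ H @ W, 0.0)          # element-wise ReLU

# Toy graph: a path on 3 nodes with 2-dimensional input features.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H0 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W = np.eye(2)                                      # identity weights for clarity
H1 = gcn_layer(A, H0, W)
print(H1.shape)  # (3, 2)
```

Each output row mixes a node's features with those of its neighbors, which is exactly the smoothing behavior that, when repeated over many layers, leads to over-smoothing.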
Many other aggregation functions have been proposed, most notably the sampled mean aggregator in GraphSAGE \citep{hamilton2017inductive} and the attention-based weighted mean aggregator in graph attention networks \citep[GAT,][]{velickovic2018graph}. In our work, we employ GAT-based graph convolutional layers, as they have been shown by \citet{dwivedi2020benchmarking} to be more expressive than the graph convolutional network (GCN) architecture of \citet{kipf2016semi}. In this case we make $\Omega_{ij} \defeq \omega_{ij}$ with \begin{equation} \omega_{ij} = \frac{\exp \left(\mathrm{LeakyReLU}\left(\mathbf{\nestedmathbold{\alpha}}^T[\mathbf{W}\mathbf{h}_i\|\mathbf{W}\mathbf{h}_j]\right)\right)}{\sum_{k \in \neigh{i}}\exp\left(\mathrm{LeakyReLU}\left(\mathbf{\nestedmathbold{\alpha}}^T[\mathbf{W}\mathbf{h}_i\|\mathbf{W}\mathbf{h}_k]\right)\right)} \label{eq:gat} \end{equation} for $j \in \neigh{i}$, where $\neigh{i}$, as before, is the set of node $i$'s neighbors; $\mathbf{\nestedmathbold{\alpha}}$ and $\mathbf{W}$ are a trainable weight vector and matrix, respectively, and $\|$ is the concatenation operation. \subsection{Node Property Prediction} \Cref{eq:gcn_layer} is a realization of \cref{eq:gnn_layer} and constitutes the so-called spatial graph convolutional layer. More than one such layer can be stacked together to define GNNs. When paired with a task-specific loss function, these GNNs can be used to learn node representations in a semi-supervised setting using full-batch gradient descent. For example, in semi-supervised node classification, it is customary to use the row-wise softmax function at the output layer along with the cross-entropy loss over the training (labeled) nodes. \subsection{Graph Property Prediction} In the graph property prediction setting, we are given a set of $M$ graphs $\bar{G}=\{G_0, G_1, ..., G_{M-1}\}$ and corresponding properties (labels) $\nestedmathbold{Y} = \{\nestedmathbold{y}_0, \nestedmathbold{y}_1, ..., \nestedmathbold{y}_{M-1}\}$.
The goal is to learn a function that maps a graph to its properties. The standard approach is to first learn node representations using a $K$-layer GNN followed by a readout function that outputs a graph-level vector representation. This graph-level representation can be used as input to a classifier or regressor. The readout function for a graph $G$ is generally defined as, \begin{equation} \mathbf{h}_{G}=r(\mathbf{h}^{(K-1)}_v \,|\, v \in G), \label{eq:redout} \end{equation} where $\mathbf{h}_G \in \mathbb{R}^{d_G}$ such that $d_G$ is the dimensionality of the graph-level representation vectors. Note that Equation~\ref{eq:redout} aggregates representations from all nodes in the graph. \begin{figure*} \caption{GNN architectures for graph property prediction. \textit{Left}: the standard architecture using a single readout layer after the last graph convolution but also shown with optional jump connections (dashed lines). \textit{Right}: the proposed architecture with deep supervision. For node property prediction the readout layers are removed from their corresponding architecture.} \label{fig:gnn_architectures} \end{figure*} Figure~\ref{fig:gnn_architectures} (left) shows a diagram of the standard GNN architecture with optional jump connections; for node property prediction tasks the architecture is the same but with the Readout layers removed. Jump connections can be applied at the node level, i.e., concatenate node representations output from each convolutional layer, or at the graph-level as shown in Figure~\ref{fig:gnn_architectures}. Furthermore, we include a multi-layer Perceptron (MLP) as the classifier/regressor so that the network can be trained end-to-end using stochastic gradient descent. The MLP is optional when using the standard architecture but necessary when employing jump connections. 
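The graph-level jump connections shown in \cref{fig:gnn_architectures} (left, dashed lines) amount to concatenating the per-layer readouts before the MLP; a minimal sketch with toy per-layer outputs (illustrative values, not our trained model):

```python
import numpy as np

# Sketch of graph-level jump connections: a readout is taken from every
# convolutional layer and the resulting vectors are concatenated to form
# the MLP input.  The per-layer node features here are random stand-ins.
layer_outputs = [np.random.default_rng(k).normal(size=(5, 4)) for k in range(3)]

graph_reps = [H.max(axis=0) for H in layer_outputs]   # per-layer max readout
h_jump = np.concatenate(graph_reps)                   # jump-connection input
print(h_jump.shape)  # (12,) = 3 layers x 4 features per readout
```

The MLP head then maps this fixed-size concatenation to the task output, which is why the MLP is mandatory when jump connections are used.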
Given a suitable loss function such as the cross-entropy for classification or the root mean squared error (RMSE) for regression, we can train predictive models in a supervised setting for graph-level tasks and semi-supervised setting for node-level tasks. \section{Deeply-supervised Graph Neural Networks} \label{sec:ds_gnns} Deeply-supervised nets \citep[DSNs,][]{pmlr-v38-lee15a} were proposed as a solution to several problems in training deep neural networks. By using companion objective functions attached to the output of each hidden layer, DSNs tackle the issue of vanishing gradients. Furthermore, in standard neural networks with shallow architectures, deep supervision operates as a regularizer of the loss at the last hidden layer. Lastly, and more importantly, for deep networks, it encourages the estimation of discriminative features at all network layers \citep{pmlr-v38-lee15a}. Therefore, inspired by this work, we introduce deeply supervised graph neural networks (DSGNNs), i.e., graph neural network architectures trained with deep supervision. Thus, we hypothesize that DSGNNs are resilient to over-smoothing and test this hypothesis by evaluating and analyzing their performance in training shallow and deep networks in \cref{sec:empirical_evaluation}. Once we have defined node-level representations, as described in \cref{sec:graph_convolutional_networks}, our first step to construct and train DSGNNs is to compute graph-level representations at each layer using $\nestedmathbold{h}_G^{(l)} = r(\nestedmathbold{H}^{(l)})$, where $\nestedmathbold{H}^{(l)}$ is obtained using the recurrent relation in \cref{eq:gcn_layer}. As before, $r(\cdot)$ is a readout (or pooling) function. Simple examples of readout functions are the mean and the maximum of the features across all the nodes in the graph. 
This is followed by a linear layer and (potentially) a non-linearity that computes the output for each layer, $\nestedmathbold{z}_G^{(l)} = h_G(\nestedmathbold{h}_G^{(l)} \nestedmathbold{W}_G^{(l)})$. For example, $h_G(\cdot)$ can be the softmax function or the identity function for classification or regression, respectively. Finally, given a loss function over our true and predicted outputs $\{(\nestedmathbold{y}_G, \nestedmathbold{z}_G^{l})\}$ we learn all model parameters by optimizing the average loss function across all layers. \subsection{Graph Classification with a 2-Layer Network} As an illustrative example, here we consider a graph classification problem with $C$ classes using a $2$-layer GAT model as shown in \cref{fig:gnn_architectures} (right). See Section \ref{sec:ds_for_node_classification} in the Appendix for an example of the node classification setting. \textbf{(i) Layer-dependent graph features}: We first compute, for each graph, layer-dependent graph features as: \begin{align} \nestedmathbold{H}^{(1)} = \mathrm{ReLU}(\nestedmathbold{\Omega} \nestedmathbold{X} \nestedmathbold{W}^{(1)}), \quad \nestedmathbold{h}_G^{(1)} = \max (\nestedmathbold{H}^{(1)}), \\ \nestedmathbold{H}^{(2)} = \mathrm{ReLU}(\nestedmathbold{\Omega} \nestedmathbold{H}^{(1)} \nestedmathbold{W}^{(2)}), \quad \nestedmathbold{h}_G^{(2)} = \max (\nestedmathbold{H}^{(2)}), \end{align} where the $\mathrm{ReLU}$ activations are element-wise and the $\max$ readouts operate across rows. \textbf{(ii) Layer-dependent outputs}: We then compute the outputs for each layer as: \begin{equation} \nestedmathbold{z}_G^{(l)} = \mathrm{softmax}(\nestedmathbold{h}_G^{(l)} \nestedmathbold{W}_G^{(l)}), \ \quad l = 1, 2, \end{equation} where we note the new parameters $\{\nestedmathbold{W}_G^{(l)}\}$, which are different from the previous weight matrices $\{\nestedmathbold{W}^{(l)}\}$. 
\textbf{(iii) Layer-dependent losses}: We now compute the cross-entropy loss for each layer: \begin{equation} \mathcal{L}^{(l)}_{\bar{G}} = -\sum_{g \in G_L} \sum_{c=0}^{C-1} \mathbf{Y}_{g, c} \log(\nestedmathbold{Z}^{(l)}_{g, c}), \quad l=1,2, \label{eq:cross_entropy} \end{equation} where $G_L \subseteq \bar{G}$ is the set of training graphs, $\nestedmathbold{Z}^{(l)}_{g,c}$ is the predicted probability for class $c$ and graph $g$, and $\mathbf{Y}_{g,c}$ is the corresponding ground truth label. \textbf{(iv) Total loss}: The DSGNN loss is the mean of the losses of all predictive layers; for $K=2$, we have: \begin{equation} \mathcal{L}_{\bar{G}} = \frac{1}{K}\sum_{k=1}^{K} \mathcal{L}^{(k)}_{\bar{G}}, \label{eq:loss_ds} \end{equation} where each of the individual losses is given by \cref{eq:cross_entropy}. We estimate the model parameters using gradient-based optimization so as to minimize the total loss in \cref{eq:loss_ds}. Unlike \citet{pmlr-v38-lee15a}, we do not decay the contribution of the surrogate losses as a function of the training epoch. Consequently, at prediction time we average the outputs from all classifiers and then apply the softmax function to make a single prediction for each input graph. \subsection{Advantages of Deep Supervision} As mentioned before, over-smoothing leads to node representations with low discriminative power at the last GNN layer. This hinders the deep GNN's ability to perform well on predictive tasks. DSGNNs circumvent this issue as the learned node representations from all hidden layers inform the final decision. The distributed loss encourages node representations learned at all hidden layers to be discriminative such that network predictions do not rely only on the discriminative power of the last layer's representations. Furthermore, deep supervision increases the number of model parameters only linearly with the number of GNN layers.
Consider a classification model with $K$ hidden layers, $d_G$-dimensional graph-level representations, and a single-layer MLP. If the number of classes is $C$, then a DSGNN model requires $(K-1) \times d_G \times C$ more parameters than a standard GNN. \section{Empirical Evaluation} \label{sec:empirical_evaluation} We aim to empirically evaluate the performance of DSGNNs on a number of challenging graph and node classification and regression tasks. We investigate whether the addition of deep supervision provides an advantage over the standard GNN and JKNet \citep{jumping-knowledge-networks-xu18c} architectures shown in \cref{fig:gnn_architectures}. We implemented\footnote{We will release the source code upon publication acceptance} the standard GNN, JKNet, and DSGNN architectures using PyTorch and the Deep Graph Library \citep[DGL,][]{wang2019dgl}. The version of the datasets we use is that available via DGL\footnote{\url{https://github.com/dmlc/dgl}} and DGL-LifeSci\footnote{\url{https://github.com/awslabs/dgl-lifesci}}. All experiments were run on a workstation with $8$GB of RAM, an Nvidia Tesla P100 GPU, and an Intel Xeon processor. \begin{table}[t] \caption{Graph regression and classification performance for the standard GNN, JKNet, and DSGNN architectures. The performance metric is mean test RMSE for ESOL\xspace and Lipophilicity\xspace and mean test accuracy for Enzymes\xspace calculated using $10$ repeats of $10$-fold cross validation. Standard deviation is given in parentheses and the model depth that achieved the best performance in square brackets.
Bold and underline indicate the best and second best models for each dataset.} \vskip 0.15in \begin{center} \begin{tabular}{llll} \toprule Model & ESOL\xspace & Lipophilicity\xspace & Enzymes\xspace \\ & \multicolumn{2}{c}{RMSE $\downarrow$} & \multicolumn{1}{c}{Accuracy $\uparrow$} \\ \midrule GNN & \underline{0.726 (0.063) [6]} & \underline{0.618 (0.033) [8]} & \underline{64.1 (6.8) [2]} \\ JKNet & 0.728 (0.074) [8] & 0.633 (0.035) [10] & \textbf{65.7 (5.8) [2]} \\ DSGNN & \textbf{0.694 (0.065) [16]} & \textbf{0.594 (0.033) [16]} & 63.3 (7.7) [2] \\ \bottomrule \end{tabular} \label{tab:results_summary_graph_datasets} \end{center} \vskip -0.15in \end{table} \begin{table}[t] \caption{Node classification performance for the standard GNN, JKNet, and DSGNN architectures with and without PairNorm (PN). The performance metric is mean test accuracy calculated over $20$ repeats of fixed train/val/test splits. Standard deviation is given in parentheses and the model depth that achieved the best performance in square brackets. Bold and underline indicate the best and second best performing models for each dataset.} \vskip 0.15in \begin{center} \begin{tabular}{l l l l} \toprule Model & Cora\xspace & Citeseer\xspace & Pubmed\xspace \\ & \multicolumn{3}{c}{Accuracy $\uparrow$} \\ \midrule GNN & \textbf{82.6 (0.6) [2]} & \textbf{71.1 (0.6) [2]} & \underline{77.2 (0.5) [9]} \\ JKNet & \underline{81.4 (0.6) [3]} & 68.5 (0.4) [2] & 76.9 (0.9) [11] \\ DSGNN & 81.1 (1.0) [4] & \underline{69.9 (0.4) [3]} & \textbf{77.5 (0.5) [12]} \\ GNN-PN & 77.9 (0.4) [2] & 68.0 (0.7) [3] & 75.5 (0.7) [15] \\ DSGNN-PN & 73.1 (0.8) [7] & 59.4 (1.6) [2] & 75.9 (0.5) [7] \\ \bottomrule \end{tabular} \label{tab:results_summary_node_datasets} \end{center} \vskip -0.15in \end{table} \begin{table}[t] \caption{\textbf{Missing features setting} node classification performance comparison between the standard GNN, JKNet and DSGNN architectures with and without PairNorm (PN).
Results shown are mean test accuracy and standard deviation over $20$ repeats of fixed train/val/test splits. Standard deviation is given in parentheses and the model depth that achieved the highest accuracy in square brackets. Bold and underline indicate the best and second best performing models.} \vskip 0.15in \begin{center} \begin{tabular}{l l l l} \toprule Model & Cora\xspace & Citeseer\xspace & Pubmed\xspace \\ & \multicolumn{3}{c}{Accuracy $\uparrow$} \\ \midrule GNN & \textbf{77.5 (0.8) [10]} & \textbf{62.8 (0.7) [2]} & \underline{76.8 (0.7) [9]} \\ JKNet & 74.9 (1.0) [15] & 61.8 (0.8) [2] & 76.4 (0.7) [9] \\ DSGNN & \underline{76.8 (0.8) [11]} & 61.0 (1.0) [2] & \textbf{77.1 (0.4) [10]} \\ GNN-PN & 75.8 (0.4) [6] & \underline{62.1 (0.6) [4]} & 75.0 (0.7) [15] \\ DSGNN-PN & 73.5 (0.9) [15] & 52.8 (1.2) [9] & 74.7 (0.9) [25] \\ \bottomrule \end{tabular} \label{tab:results_summary_node_datasets_mv} \end{center} \vskip -0.15in \end{table} \subsection{Datasets and Experimental Set-up} \label{sec:datasets} DSGNN is a general architecture in that it can use any combination of graph convolutional and readout layers. We focus the empirical evaluation on a small number of representative methods. For graph convolutions we use graph attention networks \citep[GAT,][]{velickovic2018graph} with multi-head attention. We average or concatenate the outputs of the attention heads (we treat this operation as a hyper-parameter) and use the resulting node vectors as input to the next layer. For DSGNN and JKNet, the last GAT layer is followed by a fully connected layer with an activation suitable for the downstream task, e.g., softmax for classification. The linear layer is necessary to map the GAT layer representations to the correct dimension for the downstream task. So, a DSGNN or JKNet model with $K$ layers comprises $K-1$ GAT layers and one linear layer. For a standard GNN model, the last layer is also GAT following \citet{velickovic2018graph} such that a $K$-layer model comprises $K$ GAT layers.
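Before describing the datasets, the deep-supervision objective of \cref{sec:ds_gnns} can be summarized in a short numerical sketch (an illustrative numpy mock-up with random logits standing in for the per-layer head outputs; our actual implementation uses PyTorch and DGL, and for simplicity we average logits rather than post-softmax outputs here):

```python
import numpy as np

# Sketch of the deep-supervision objective: one predictive head per hidden
# layer, each contributing equally to the averaged loss, with head outputs
# averaged at prediction time.  Logits here are toy random values.
def cross_entropy(logits, labels):
    z = logits - logits.max(axis=1, keepdims=True)          # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
K, n_graphs, C = 3, 8, 4                     # layers, batch size, classes
per_layer_logits = [rng.normal(size=(n_graphs, C)) for _ in range(K)]
labels = rng.integers(0, C, size=n_graphs)

# Total loss: the mean of the per-layer cross-entropy losses.
total_loss = np.mean([cross_entropy(z, labels) for z in per_layer_logits])

# Prediction: average the head outputs, then take the arg max per graph.
avg_logits = np.mean(per_layer_logits, axis=0)
preds = avg_logits.argmax(axis=1)
print(total_loss > 0, preds.shape)           # True (8,)
```

Because every layer's head appears in the loss, gradients reach shallow layers directly instead of flowing only through the final head.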
We used the evaluation protocol proposed by~\citet{errica2019fair} and present results for six benchmark datasets. Of these, three are graph tasks and three are node tasks. We include detailed dataset statistics in Table \ref{tab:datasets} in the Appendix. \subsubsection{Graph Property Prediction} \label{sec:graph_regression_datasets} The graph property prediction datasets are from biochemistry where graphs represent molecules. The task is to predict molecular properties. We base our empirical evaluation on three datasets: ESOL\xspace from \citet{delaney2004esol}, Lipophilicity\xspace from \citet{gaulton2017chembl}, and Enzymes\xspace from \citet{schomburg2004brenda}. Enzymes\xspace is a graph classification task whereas ESOL\xspace and Lipophilicity\xspace are regression tasks. We use $10$-fold cross validation and repeat each experiment $10$ times. We optimize the root mean square error (RMSE) for regression and the cross-entropy loss for classification. For all architectures, we set all hidden layers to size $512$ with $4$ attention heads (total $2048$ features). All layers are followed by a $\mathrm{ReLU}$ activation~\citep{nair2010rectified} except the last one; the last layer uses a $\mathrm{softmax}$ activation for classification and no activation for regression. We varied model depth in the range $\{ 2,4,6,\ldots,20 \}$. For readout, we use the non-parametric max function. We perform grid search for learning rate: \{$0.01$, $0.001$, $0.0001$\} and weight decay: \{$0.001$, $0.0001$\}. We use batch size $64$ and train for a maximum of $500$ epochs using mini-batch SGD with momentum set to $0.9$. \subsubsection{Node Property Prediction} \label{sec:node_classification_datasets} The node classification datasets are the citation networks Cora\xspace, Citeseer\xspace and Pubmed\xspace from \citet{sen2008collective}. The task for all datasets is semi-supervised node classification in a regime with few labeled nodes.
We used the splits from \citet{yang2016revisiting} and validation set performance for hyper-parameter selection. We repeat each experiment $20$ times. We optimize the cross-entropy loss and use accuracy for model selection and performance comparison. We set all hidden layers to size $8$ with $8$ attention heads (total $64$ features) and $\mathrm{ELU}$~\citep{clevert2015fast} activation for all GAT layers. We varied model depth in the range $\{ 2,3,\ldots,12,15,20,25 \}$. We performed grid search for the initial learning rate: \{$0.01, 0.002, 0.005$\}, weight decay: \{$0.0, 0.005, 0.0005$\}, and dropout (both feature and attention): \{$0.2, 0.5$\}. We trained all models using the Adam optimiser~\citep{kingma2014adam} for a maximum of $1000$ epochs and decayed the learning rate by a factor of $0.5$ every $250$ epochs. \begin{figure*} \caption{Graph classification and regression performance of standard GNN, JKNet, and DSGNN architectures as a function of model depth. Results shown are for the ESOL\xspace (left), Lipophilicity\xspace (middle) and Enzymes\xspace (right) datasets. The performance metric for ESOL\xspace and Lipophilicity\xspace is test RMSE and for Enzymes\xspace test accuracy. All metrics are mean over $10$ runs with $1$ standard deviation error bars.} \label{fig:deep_gnn_graph_performance} \end{figure*} \begin{figure*} \caption{Node classification performance comparison of standard GNN, JKNet, and DSGNN with and without PairNorm (PN) architectures as a function of model depth. Results shown are for Cora (left), Citeseer (middle), and Pubmed (right) datasets. The performance metric is mean test accuracy over $20$ runs with $1$ standard deviation error bars.} \label{fig:deep_gnn_node_performance} \end{figure*} \begin{figure*} \caption{\textbf{Missing features setting} node classification performance comparison of standard GNN, JKNet, and DSGNN with and without PairNorm (PN) architectures as a function of model depth.
Results shown are for Cora (left), Citeseer (middle), and Pubmed (right) datasets. The performance metric is mean test accuracy over $20$ runs with $1$ standard deviation error bars.} \label{fig:deep_gnn_node_mv_performance} \end{figure*} \subsection{Results} \label{sec:results} Tables \ref{tab:results_summary_graph_datasets} and \ref{tab:results_summary_node_datasets} show the performance of each architecture on all datasets. Also shown for each model is the number of layers required to achieve the best performance. For each dataset, model selection across model depth is based on validation set performance and the tables report performance on the test set for the selected models. \subsubsection{Graph Regression and Classification} \label{sec:results_graph_regression_and_classification} We see in \cref{tab:results_summary_graph_datasets} that for ESOL\xspace and Lipophilicity\xspace, models enhanced with deep supervision achieved the best outcome. In addition, DSGNN performs best at a much larger depth than the other architectures. On the graph classification dataset (Enzymes\xspace), all models perform similarly with JKNet having a small advantage. For graph classification, all architectures performed best using only $2$ convolutional layers. Figure~\ref{fig:deep_gnn_graph_performance} shows the performance of each architecture as a function of model depth. For the regression datasets, all architectures benefit from increasing depth up to a point. For GNN, performance starts to decrease beyond $6$ layers on ESOL\xspace (left) and $8$ layers on Lipophilicity\xspace (middle); similarly for JKNet beyond $8$ and $10$ layers respectively. In contrast, DSGNN performance continues to improve for up to $16$ layers on both regression datasets before it flattens out.
Our evidence suggests that for graph-level tasks DSGNNs can exploit larger model depth to achieve improved performance when compared to standard GNNs and JKNets. \subsubsection{Node Classification} \label{sec:results_node_classification} Table~\ref{tab:results_summary_node_datasets} shows that the standard GNN architecture outperformed the others on two of the three node classification datasets, namely Cora\xspace and Citeseer\xspace. DSGNN demonstrated a small advantage on the larger Pubmed\xspace dataset. As previously reported in the literature, shallow architectures perform best on these citation networks. We observe the same: for the smaller Cora\xspace and Citeseer\xspace, all architectures performed best with $2$ to $4$ layers. Only on Pubmed\xspace did all architectures benefit from increased model depth. We attribute this to the graph's larger size, where more graph convolutional layers allow information from larger neighborhoods to inform the inferred node representations. It can be seen in Figure~\ref{fig:deep_gnn_node_performance} that for the smaller Cora\xspace and Citeseer\xspace, performance degrades for all models as model depth increases. As we noted earlier, this performance degradation has been attributed to the over-smoothing problem in GNNs. In our experiments, DSGNN demonstrated consistently higher resilience to over-smoothing than competing methods. \Cref{tab:results_summary_node_datasets_25} in the Appendix shows the performance of all architectures with $25$ layers and clearly indicates that DSGNN outperforms the standard GNN and JKNet for Cora\xspace and Pubmed\xspace. JKNet is best for Citeseer\xspace with DSGNN a close second. As expected, DSGNN and JKNet are more resilient to over-smoothing than the standard GNN with more than $12$ layers. For all datasets, and especially Citeseer\xspace, performance of the standard GNN degrades substantially as a function of model depth.
Lastly, we note that for all datasets the addition of pair normalization \citep[PairNorm,][]{Zhao2020PairNorm} to the standard GNN hurts performance. This finding is consistent with the results in \citet{Zhao2020PairNorm}. However, as we will see in \cref{sec:missing_features}, PairNorm can be beneficial in the missing feature setting. \Cref{tab:results_summary_node_datasets} shows that the performance of GNN with PN drops by approximately $4.7\%$ and $3.1\%$ on Cora\xspace and Citeseer\xspace respectively. DSGNN with PairNorm is the worst performing architecture across all node classification datasets. Consequently, we do not recommend the combination of deep supervision and PairNorm. \subsubsection{Node Classification with Missing Features} \label{sec:missing_features} \Citet{Zhao2020PairNorm} introduced the missing features setting for the node classification task and demonstrated that GNNs with PairNorm achieve the best results, and do so with deeper models. In the missing features setting, a proportion of nodes in the validation and test sets have their feature vectors zeroed. This setting simulates the missing data scenario common in real-world applications. The missing data proportion can vary from $0\%$, where all nodes have known attributes, to $100\%$, where all nodes in the validation and test sets have missing attributes. Here we consider the performance of standard GNN, JKNet, and DSGNN for the latter setting only. Table~\ref{tab:results_summary_node_datasets_mv} and Figure~\ref{fig:deep_gnn_node_mv_performance} show the performance of the three architectures in the missing features setting. We note that in comparison to the results in Table~\ref{tab:results_summary_node_datasets} and excluding Citeseer\xspace, all models achieved their best performance at larger depth. Interestingly, and in contrast to \citet{Zhao2020PairNorm}, we found that the standard GNN architecture performed best for the smaller Cora\xspace and Citeseer\xspace graphs.
We attribute the standard GNN's good performance to our use of a model with high capacity ($8$ attention heads and $8$-dimensional embeddings for each head) as well as careful tuning of relevant hyper-parameters. \Citet{Zhao2020PairNorm} use a simpler model, e.g., one attention head, and do not tune important hyper-parameters such as learning rate, dropout and weight decay. However, on the larger Pubmed\xspace dataset, DSGNN with $10$ layers achieves the highest test accuracy. A DSGNN model with $25$ layers, as shown in Figure~\ref{fig:deep_gnn_node_mv_performance} and Table~\ref{tab:results_summary_node_datasets_25_mv} in the Appendix, achieves the highest test accuracy even when compared to the $10$-layer DSGNN model; the latter was selected for inclusion in Table~\ref{tab:results_summary_node_datasets_mv} because it achieved the highest validation accuracy, which we used for model selection across model depth. We provide additional analysis of DSGNN's ability to learn more discriminative node representations and alleviate over-smoothing in Appendix Section \ref{sec:learned-representations}. We conclude that DSGNN is the architecture most robust to the over-smoothing problem in the missing feature setting, especially for larger graphs. \section{Conclusion} \label{sec:conclusion} We introduced deeply-supervised graph neural networks (DSGNNs) and demonstrated their effectiveness in training high-performing models for graph and node property prediction problems. DSGNNs are GNNs enhanced with deep supervision, which introduces companion losses attached to the hidden layers, guiding the learning algorithm to learn discriminative features at all model depths. DSGNNs overcome the over-smoothing problem in deep models, achieving competitive performance when compared with standard GNNs enhanced with PairNorm and jump connections. We provided empirical evidence supporting this for both graph and node property prediction and in the missing feature setting.
We found that combining deep supervision with PairNorm degrades model performance. DSGNNs are more resilient to the over-smoothing problem achieving substantially higher accuracy for deep models. In future work, we plan to investigate the application of DSGNNs on larger graphs where we expect deep supervision will be beneficial. \appendix \onecolumn \section{Appendix} \label{sec:appendix} \subsection{Deep Supervision for Node Classification} \label{sec:ds_for_node_classification} In Section~\ref{sec:ds_gnns} we extended graph neural networks with deep supervision focused on the graph property prediction setting. Here, we explain how deep supervision can be applied for node property prediction with a focus on classification tasks. We are given a graph represented as the tuple $G=(V, E)$ where $V$ is the set of nodes and $E$ the set of edges. The graph has $|V| = N$ nodes. We assume that each node $v \in V$ is also associated with an attribute vector $\mathbf{x}_v \in \mathbb{R}^d$ and let $\mathbf{X} \in \mathbb{R}^{N \times d}$ represent the attribute vectors for all nodes in the graph. A subset of $M$ nodes, $V_l \subset V$, has known labels. Each label represents one of $C$ classes using a one-hot vector representation such that $\mathbf{Y} \in \mathbb{R}^{M\times C}$. The node property prediction task is to learn a function $f: V \rightarrow Y$ that maps node representations to class probabilities. Consider the case of a $2$-layer GNN with GAT~\citep{velickovic2018graph} layers and one attention head. 
The node representations output by each of the $2$ GAT layers are given by, \begin{align} \nestedmathbold{H}^{(1)} = \mathrm{ReLU}(\nestedmathbold{\Omega} \nestedmathbold{X} \nestedmathbold{W}^{(1)}), \\ \nestedmathbold{H}^{(2)} = \mathrm{ReLU}(\nestedmathbold{\Omega} \nestedmathbold{H}^{(1)} \nestedmathbold{W}^{(2)}), \end{align} where the $\mathrm{ReLU}$ activations are element-wise, $\mathbf{\Omega}$ are the attention weights given by Equation~\ref{eq:gat}, and $\mathbf{W}^{(i)}$ are trainable layer weights. Let each GAT layer be followed by a linear layer with $\mathrm{softmax}$ activation calculating class probabilities for all nodes in the graph such that, \begin{equation} \mathbf{Z}^{(l)} = \mathrm{softmax}(\mathbf{H}^{(l)}\mathbf{\widehat{W}}^{(l)}), \quad l=1,2, \label{eq:linear_gat_node_example} \end{equation} where $\mathbf{Z}^{(l)}$ are the class probabilities for all nodes as predicted by the $l$th layer, and $\mathbf{\widehat{W}}^{(l)}$ are the layer's trainable weights. Now we can compute layer-dependent losses as: \begin{equation} \mathcal{L}_N^{(l)} = -\sum_{v \in V_l} \sum_{c=0}^{C-1} \nestedmathbold{Y}_{v, c}\log(\mathbf{Z}^{(l)}_{v, c}), \quad l=1,2. \label{eq:node_cross_entropy} \end{equation} For a standard GNN, in order to estimate the weights $\{ \mathbf{W}^{(1)}, \mathbf{W}^{(2)}, \mathbf{\widehat{W}}^{(2)} \}$, we optimize the cross-entropy loss calculated over the set of nodes with known labels only using $\mathcal{L}_N^{(2)}$. Deep supervision adds a linear layer corresponding to each GAT layer in the model such that, in our example, the model makes two predictions for each node, $\mathbf{Z}^{(1)}$ and $\mathbf{Z}^{(2)}$. We now estimate the weights \{$ \mathbf{{W}}^{(1)}, \mathbf{W}^{(2)}, \mathbf{\widehat{W}}^{(1)}, \mathbf{\widehat{W}}^{(2)}$\}, and optimize the mean loss, which for our example is given by, \begin{equation} \mathcal{L}_N = \frac{1}{2}\sum_{k=1}^{2} \mathcal{L}^{(k)}_N.
\label{eq:ds_node_loss} \end{equation} \subsection{Datasets} \label{sec:dataset_stats} \begin{table}[ht] \caption{Statistics of the datasets we used for the empirical evaluation. The number of nodes for ESOL\xspace, Lipophilicity\xspace, and Enzymes\xspace is the average of the number of nodes in all the graphs in each dataset. A value of `-' for train/val/test for the graph datasets indicates that 10-fold cross validation was used.} \vskip 0.15in \centering \begin{tabular}{lccccc} \toprule Name & Graphs & Nodes & Classes & Node features & \# train/val/test \\ \midrule Enzymes\xspace & 600 & 33 (avg) & 6 & 18 & -\\ ESOL\xspace & 1144 & 13 (avg) & Regr. & 74 & -\\ Lipophilicity\xspace & 4200 & 27 (avg) & Regr. & 74 & -\\ Cora\xspace & 1 & 2708 & 7 & 1433 & 140/500/1000\\ Citeseer\xspace & 1 & 3327 & 6 & 3703 & 120/500/1000\\ Pubmed\xspace & 1 & 19717 & 3 & 500 & 60/500/1000\\ \bottomrule \end{tabular} \label{tab:datasets} \vskip -0.1in \end{table} Table~\ref{tab:datasets} gives detailed information about the datasets we used for the empirical evaluation of the different architectures. Cora\xspace, Citeseer\xspace, and Pubmed\xspace are citation networks where the goal is to predict the subject of a paper. Edges represent citation relationships. We treat these graphs as undirected, as is common in the GNN literature. The datasets have known train/val/test splits from \citet{yang2016revisiting}. Training sets are small with the number of labeled nodes equal to $140$ ($20$ for each of $7$ classes), $120$ ($20$ for each of $6$ classes), and $60$ ($20$ for each of $3$ classes) for Cora\xspace, Citeseer\xspace, and Pubmed\xspace respectively. Enzymes\xspace is a graph classification dataset where the goal is to predict enzyme class as it relates to the reactions catalyzed. ESOL\xspace is a regression dataset where the goal is to predict molecular solubility.
Lastly, Lipophilicity\xspace is a regression dataset where the goal is to predict the octanol/water distribution coefficient for a large number of compounds. \subsection{Additional Experimental Results} \label{sec:additional_experimental_results} \subsubsection{Deep Model Performance} \label{sec:node_deep_model_performance} In Sections \ref{sec:results_node_classification} and \ref{sec:missing_features}, we noted that for deep models with $25$ layers, DSGNNs demonstrate better resilience to the over-smoothing problem. Our conclusion holds for both the normal and missing feature settings as can be seen in Figures \ref{fig:deep_gnn_graph_performance}, \ref{fig:deep_gnn_node_performance} and \ref{fig:deep_gnn_node_mv_performance}. Tables \ref{tab:results_summary_node_datasets_25} and \ref{tab:results_summary_node_datasets_25_mv} focus on the node classification performance of models with $25$ layers. In the normal setting (Table \ref{tab:results_summary_node_datasets_25}), DSGNN outperforms the others on Cora\xspace and Pubmed\xspace by $0.9\%$ and $1.3\%$ respectively whereas it is second best to JKNet on Citeseer\xspace trailing by $1.2\%$. In the missing feature setting (Table \ref{tab:results_summary_node_datasets_25_mv}), DSGNN outperforms the second best model on all three datasets by $1.7\%$, $2.3\%$, and $3.2\%$ for Cora\xspace, Citeseer\xspace, and Pubmed\xspace respectively. In the missing features setting, DSGNN outperforms a standard GNN with PairNorm by $4.4\%$, $2.3\%$, and $3.5\%$ for Cora\xspace, Citeseer\xspace, and Pubmed\xspace respectively. This evidence supports our conclusion that enhancing GNNs with deep supervision as opposed to PairNorm or jump connections is a more suitable solution to the over-smoothing problem for deep GNNs. \begin{table}[ht] \caption{Node classification performance comparison between the standard GNN, Jumping Knowledge Network (JKNet) and the Deeply-Supervised GNN (DSGNN) architectures with and without PairNorm (PN). 
All models have $25$ layers. Results shown are mean accuracy and standard deviation on the test sets for $20$ repeats for fixed train/val/test splits. For each dataset, we use bold font to indicate the best performing model and underline the second best.} \vskip 0.15in \begin{center} \begin{tabular}{l l l l} \toprule Model & Cora\xspace & Citeseer\xspace & Pubmed\xspace \\ & \multicolumn{3}{c}{Accuracy $\uparrow$} \\ \midrule GNN & 74.3 $\pm$ 1.3 & 45.8 $\pm$ 3.3 & 75.9 $\pm$ 0.9 \\ JKNet & \underline{76.0 $\pm$ 1.5} & \textbf{63.3 $\pm$ 2.3} & \underline{76.6 $\pm$ 0.8} \\ DSGNN & \textbf{76.9 $\pm$ 0.8} & \underline{62.1 $\pm$ 1.0} & \textbf{77.9 $\pm$ 0.5} \\ GNN-PN & 71.5 $\pm$ 0.9 & 57.0 $\pm$ 1.4 & 75.6 $\pm$ 0.7 \\ DSGNN-PN & 72.2 $\pm$ 0.6 & 51.3 $\pm$ 0.9 & 74.2 $\pm$ 1.0 \\ \bottomrule \end{tabular} \label{tab:results_summary_node_datasets_25} \end{center} \vskip -0.1in \end{table} \begin{table}[ht] \caption{\textbf{Missing feature setting} node classification performance comparison between the standard GNN, Jumping Knowledge Network (JKNet) and the Deeply-Supervised GNN (DSGNN) architectures with and without PairNorm (PN). All models have $25$ layers. Results shown are mean accuracy and standard deviation on the test sets for $20$ repeats for fixed train/val/test splits. 
For each dataset, we use bold font to indicate the best performing model and underline the second best.} \vskip 0.15in \begin{center} \centering \begin{tabular}{l l l l} \toprule Model & Cora & Citeseer & Pubmed \\ & \multicolumn{3}{c}{Accuracy $\uparrow$} \\ \midrule GNN & 72.6 $\pm$ 1.4 & 40.6 $\pm$ 2.9 & 75.7 $\pm$ 1.0\\ JKNet & \underline{73.9 $\pm$ 1.5} & 50.3 $\pm$ 2.4 & \underline{76.7 $\pm$ 0.9} \\ DSGNN & \textbf{75.6 $\pm$ 0.9} & \textbf{56.3 $\pm$ 2.2} & \textbf{77.9 $\pm$ 0.4} \\ GNN-PN & 71.2 $\pm$ 0.9 & \underline{54.0 $\pm$ 1.6} & 74.4 $\pm$ 1.1 \\ DSGNN-PN & 72.5 $\pm$ 0.8 & 50.3 $\pm$ 1.4 & 74.7 $\pm$ 0.9 \\ \bottomrule \end{tabular} \label{tab:results_summary_node_datasets_25_mv} \end{center} \vskip -0.1in \end{table} \subsubsection{Analysis of Learned Representations} \label{sec:learned-representations} We provide additional evidence that DSGNNs learn more discriminative node representations at all hidden graph convolutional layers, leading to the performance benefits outlined above. We focus on the node classification domain. We adopt the metrics suggested by \citet{Zhao2020PairNorm} for measuring how discriminative node representations and node features are. Given a graph with $N$ nodes, let $\mathbf{H}^{(i)} \in \mathbb{R}^{N\times d}$ hold the $d$-dimensional node representations output by the $i$-th GAT layer. The row difference (row-diff) measures the average pairwise distance between the node representations (rows of $\mathbf{H}^{(i)}$). The column difference (col-diff) measures the average pairwise distance between the $L_1$-normalized columns of $\mathbf{H}^{(i)}$. The former measures node-wise over-smoothing and the latter feature-wise over-smoothing~\citep{Zhao2020PairNorm}. We consider the row-diff and col-diff metrics for the deepest models we trained, those with $25$ layers of which $24$ are GAT. We calculate the two metrics for the node representations output by each of the GAT layers.
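The two metrics can be sketched in a few lines. This is a minimal reading of the definitions above (Euclidean distance between rows, and between $L_1$-normalized columns), not reference code from \citet{Zhao2020PairNorm}:

```python
import numpy as np
from itertools import combinations

def row_diff(H):
    """Average pairwise Euclidean distance between node representations
    (rows of H); small values indicate node-wise over-smoothing."""
    n = H.shape[0]
    dists = [np.linalg.norm(H[i] - H[j])
             for i, j in combinations(range(n), 2)]
    return float(np.mean(dists))

def col_diff(H):
    """Average pairwise distance between the L1-normalized columns of H;
    small values indicate feature-wise over-smoothing."""
    C = H / (np.abs(H).sum(axis=0, keepdims=True) + 1e-12)
    k = C.shape[1]
    dists = [np.linalg.norm(C[:, i] - C[:, j])
             for i, j in combinations(range(k), 2)]
    return float(np.mean(dists))
```

A fully over-smoothed layer, where all rows of $\mathbf{H}^{(i)}$ coincide, gives a row-diff of exactly zero, which is why a plateau of these curves signals that additional layers no longer add discriminative power.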
Figures \ref{fig:row_diff_25_layers} and \ref{fig:col_diff_25_layers} show plots of the row-diff and col-diff metrics respectively. We note that for all datasets, DSGNN node representations are the most separable for the majority of layers. For all models, row-diff plateaus after the first few layers. We interpret this as a point of convergence for the learnt node representations such that adding more layers can only harm the model's performance as indicated in Figure \ref{fig:deep_gnn_node_performance}. We further demonstrate this point by visualising the node embeddings for Cora\xspace. Figure \ref{fig:node_embeddings_cora} shows a visual representation of the learnt node embeddings for a subset of the GAT layers in the trained $25$-layer models. We used t-SNE~\citep{JMLR:v9:vandermaaten08a} to project the $64$-dimensional node features to $2$ dimensions. We can see that all architectures learn clusters of nodes with similar labels. However, for the standard GNN and JKNet, these clusters remain the same for the $10$-th layer and above. On the other hand, DSGNN continues to adapt the clusters for all layers as we would expect given the effect of the companion losses associated with each GAT layer. \begin{figure} \caption{Row difference metric from \citet{Zhao2020PairNorm} calculated for the output of each GAT layer in a $25$-layer model. Metric shown for Cora\xspace (left), Citeseer\xspace (middle), and Pubmed\xspace (right) datasets.} \label{fig:row_diff_25_layers} \end{figure} \begin{figure} \caption{Column difference metric from \citet{Zhao2020PairNorm} calculated for the output of each GAT layer in a $25$-layer model. Metric shown for Cora\xspace (left), Citeseer\xspace (middle), and Pubmed\xspace (right) datasets.} \label{fig:col_diff_25_layers} \end{figure} \begin{figure} \caption{Cora\xspace node embeddings for the standard GNN (top), DSGNN (middle) and JKNet (bottom) models each with $25$ layers. 
We show the node embeddings output from layers $1$ (the first GAT layer), $5$, $10$, $15$, and $24$ (the last GAT layer in all models). The colors indicate node class.} \label{fig:node_embeddings_cora} \end{figure} \end{document}
\begin{document} \title{Low-dimensional quite noisy bound entanglement with cryptographic key} \author{\L{}ukasz Pankowski} \affiliation{Institute of Informatics, University of Gda\'nsk, Gda\'nsk, Poland} \affiliation{Institute of Theoretical Physics and Astrophysics, University of Gda\'nsk, Gda\'nsk, Poland} \author{Micha{\l} Horodecki} \affiliation{Institute of Theoretical Physics and Astrophysics, University of Gda\'nsk, Gda\'nsk, Poland} \begin{abstract} We provide a class of bound entangled states that have a positive distillable secure key rate. The smallest state of this kind is $4 \otimes 4$. Our class is a generalization of the class presented in \cite{lowdim-pptkey}. It is much wider, containing, in particular, states from the boundary of PPT entangled states (all of the states in the class in \cite{lowdim-pptkey} were of this kind) but also states inside the set of PPT entangled states, even approaching the separable states. This generalization comes with a price: for the wider class, a positive key rate requires, in general, not only the \emph{one-way} Devetak-Winter protocol (used in \cite{lowdim-pptkey}) but also recurrence preprocessing, and thus is effectively a \emph{two-way} protocol. We also analyze the amount of noise that can be admixed to the states of our class without losing the key distillability property, which may be crucial for experimental realization. The wider class contains key-distillable states with higher entropy (up to 3.524, as opposed to 2.564 for the class in \cite{lowdim-pptkey}). \end{abstract} \maketitle \section{Introduction} Quantum cryptography, pioneered by Wiesner \cite{Wiesner}, allows one to obtain a cryptographic key based on the physical impossibility of eavesdropping. Namely, if the transmitted signal is encoded into quantum states, then by reading it, an eavesdropper always introduces noise into the signal.
Thus Alice and Bob -- the parties who want to communicate privately -- can measure the level of noise and detect whether their transmission is secure (even if the noise was solely due to eavesdropping). There are two types of quantum key distribution protocols: \emph{prepare and measure} (as the original BB84 protocol \cite{BB84}) and protocols based on a shared entangled state (originating from Ekert's protocol \cite{Ekert91}). For quite some time, security proofs of prepare and measure protocols were based on showing equivalence to the distillation (by local operations and classical communication) of maximally entangled states (the first such proof is due to Shor and Preskill \cite{ShorPreskill}). This led to the belief that the security of quantum cryptography is always connected to the distillation of maximally entangled states (this issue was perhaps first touched upon by Gisin and Wolf \cite{GisinWolf_linking}). This belief suggested that one could not obtain secure key from bound entangled states \cite{bound}, i.e., states from which maximally entangled states cannot be distilled. On the contrary, key-distillable bound entangled states have been found \cite{pptkey} and examples of low-dimensional states have been provided \cite{lowdim-pptkey}. The multipartite case was also considered \cite{AugusiakH2008-multi}. There are two approaches to obtaining cryptographic key from bound entangled PPT states: one is based on approximating a private bit with a PPT state \cite{pptkey,keyhuge} and the other on mixing orthogonal private bits \cite{lowdim-pptkey}. This paper builds on the second approach. The low-dimensional key-distillable states with positive partial transpose \footnote{If a state has positive partial transpose (PPT) then one cannot distill a maximally entangled state from it.
It is a long-standing open question whether PPT is also a necessary condition for non-distillability of maximal entanglement \cite{RMPK-quant-ent} (for recent developments, see \cite{PankowskiPHH-npt2007}).} (hence, bound entangled) presented in \cite{lowdim-pptkey} lay on the boundary of the PPT states, and the existence of key-distillable states in the interior of the PPT set was argued by a continuity argument, without giving the explicit form of those inner states. In this paper we present a wider class of PPT entangled key-distillable states, including states inside the set of PPT states, even approaching the set of separable states. We analyze properties of this class, as well as provide some more general criteria of key distillability, by exploiting the criterion provided in \cite{acin-2006-73}. This criterion was earlier applied to analyze some PPT states in \cite{Bae2008} (see also \cite{PhysRevA.75.032306} in this context). The motivation behind the search for new bound entangled states with distillable key is two-fold. First of all, there is a fundamental open question whether one can draw secure key from all entangled states. To approach this question, one needs, in particular, to gather more phenomenology on the issue of drawing key from bound entangled states. In this paper, we have pushed this question a bit by showing explicitly that PPT key-distillable states can be in the interior of the PPT states, even approaching the set of separable states. Also, our general criterion of key distillability can serve to explore to what extent entanglement can provide secure key. Another motivation comes from recent experiments, where bound entanglement was implemented in labs \cite{Experimental-bound-Bourennane,Experimental-bound-Lavoie,2010PhRvA..81d0304K,2010arXiv1005.1965B,2010arXiv1006.4651D}.
In these experiments, usually a four-partite bound entangled Smolin state was used, which allows for a number of non-classical effects that are manifestations of the true entanglement content of such a state. We believe that low-dimensional bound entangled key-distillable states are also good candidates for experimental implementation, providing a non-classical effect -- the possibility of distilling secure key. This requires states which are robust against noise, to facilitate the process of preparing them in a lab. In this paper, we analyze the robustness of key-distillable states as well as provide very noisy states, having, in particular, relatively large entropy (approximately 3.5 bits versus the maximal possible 4 bits). Last but not least, the key-distillable bound entangled states are closely related to the effect of superactivation of quantum capacity \cite{SmithYard-2008}, and our class may be further analyzed in this respect (in this paper, we have provided some exemplary calculations). The paper is organized as follows. In \mbox{Sec.} \ref{sec:prelim} we review basic facts about the general theory of distillation of secure key from quantum states of \cite{keyhuge}. In particular, we describe a technique called privacy squeezing. In \mbox{Sec.} \ref{sec:class} we introduce our class of states which are PPT and key-distillable. We verify that they lie inside the set of PPT states, touching the set of separable states. Moreover, we check the robustness of the key-distillability property. We also give the explicit form of an important subset of our states as mixtures of pure states in \mbox{Sec.} \ref{sec:exp}. In \mbox{Secs.} \ref{sec:entropy} and \ref{sec:erasure} we examine entropic properties of our states and their relation to the Smith-Yard superactivation of quantum capacity phenomenon. Finally, in \mbox{Sec.} \ref{sec:general-key}, we provide a general sufficient condition for distilling private key from quantum states of local dimension not less than 4.
\section{Preliminaries} \label{sec:prelim} Let us first recall some important concepts of the distillation of classical secure key from quantum states, covered in detail in \cite{keyhuge}. A general state containing at least one bit of perfectly secure key is called a \emph{private bit} or \emph{pbit} \cite{keyhuge}. A private bit in its so-called $X$-form is given by \begin{align} \label{eq:pbit} \gamma(X) = \frac12 \begin{bmatrix} \sqrt{XX^\dagger} & 0 & 0 & X \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ X^\dagger & 0 & 0 & \sqrt{X^\dagger X} \end{bmatrix} \end{align} where $X$ is an arbitrary operator satisfying $\|X\|=1$ (here and throughout the paper, we use the trace norm, that is, the sum of the singular values of an operator). The private bit has four subsystems $ABA'B'$, where the block matrix \eqref{eq:pbit} represents the $AB$ subsystem and the blocks are operators acting on the $A'B'$ subsystem. Subsystems $A$ and $B$ are single-qubit subsystems, while the dimensions of $A'$ and $B'$ must be greater than or equal to 2; we assume the dimensions of $A'$ and $B'$ are equal and denote them by $d$. Subsystem $AA'$ belongs to Alice while subsystem $BB'$ belongs to Bob. Every state presented in block matrix form throughout the paper has this structure. The bit of key contained in a private bit is obtained by measuring subsystems $A$ and $B$ in the standard basis; therefore, subsystem $AB$ is called the \emph{key part} of the state, while subsystem $A'B'$ is called the \emph{shield} of the state, as it protects the correlations contained in the key part from an eavesdropper. Note that it may happen that Eve possesses a copy of the shield subsystem (when, e.g., the shield consists of two flag states -- states with disjoint support), yet this does no harm because the very presence of the shield subsystem in Alice and Bob's hands protects the bit of key.
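As a quick numerical illustration of \eqref{eq:pbit} (our own toy check, not taken from the paper: we pick $X=U/4$ with $U$ a random $4\times4$ unitary, so that $\|X\|=1$), one can verify that $\gamma(X)$ is a valid state whose key part is perfectly correlated:

```python
import numpy as np

# Toy private bit gamma(X) from the X-form, with shield dimension d = 2.
# Assumption: the choice X = U/4 for a 4x4 unitary U is our own example,
# chosen so that the trace norm of X equals 1.
d = 2
rng = np.random.default_rng(0)
A = rng.standard_normal((d * d, d * d)) + 1j * rng.standard_normal((d * d, d * d))
U, _ = np.linalg.qr(A)             # random unitary on the shield A'B'
X = U / (d * d)                    # ||X||_tr = 1

sqrtXXd = np.eye(d * d) / (d * d)  # sqrt(X X^dagger) = I/4 since U is unitary
Z = np.zeros((d * d, d * d), dtype=complex)
gamma = 0.5 * np.block([
    [sqrtXXd, Z, Z, X],
    [Z, Z, Z, Z],
    [Z, Z, Z, Z],
    [X.conj().T, Z, Z, sqrtXXd],
])

# gamma is a valid density matrix ...
assert abs(np.trace(gamma).real - 1) < 1e-12
assert np.min(np.linalg.eigvalsh(gamma)) > -1e-12

# ... and measuring the key part AB in the standard basis gives perfectly
# correlated outcomes: P(00) = P(11) = 1/2, P(01) = P(10) = 0.
probs = [np.trace(gamma[i * 4:(i + 1) * 4, i * 4:(i + 1) * 4]).real
         for i in range(4)]
print([round(q, 12) for q in probs])   # -> [0.5, 0.0, 0.0, 0.5]
```

The same check works for any operator $X$ with unit trace norm, provided the diagonal blocks are replaced by the corresponding $\sqrt{XX^\dagger}$ and $\sqrt{X^\dagger X}$.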
For a general state with $ABA'B'$ subsystems (i.e., not necessarily a private bit) one can infer the possibility of distilling private key using a method called \emph{privacy squeezing} \cite{keyhuge}. Namely, we consider the following type of protocol: one measures the key part in the standard basis and classically processes the outcomes (\mbox{cf.} \cite{acin-2006-73} for two-qubit states). Given a protocol of this type, we would like to know whether it can distill key from the state. To this end, we construct a two-qubit state in the following way: one applies to the original state a so-called \emph{twisting} operation, i.e., a unitary transformation of the form \begin{align} U = \sum_{ij} |ij\rangle_{AB}\langle ij| \otimes U_{ij}^{A'B'} \end{align} and performs the partial trace over $A'B'$. Now, it turns out that if we apply the protocol to the original state we obtain no less key than we would obtain from the above two-qubit state using the same protocol. Therefore, if we apply a cleverly chosen twisting, we may infer key-distillability of the original state by examining a two-qubit state (i.e., a much simpler object). This technique is called \emph{privacy squeezing}. The role of twisting is to `squeeze' the privacy present in the original state into its key part, where it is then more easily detectable, e.g., by protocols designed for two-qubit states (see, e.g., \cite{Gottesman-Lo,acin-2006-73,Renner2006-PhD}). To explain why the two-qubit state cannot give more key than the original state (within the considered class of protocols) we invoke the following result of \cite{keyhuge}. One considers a state of three systems: a quantum one -- Eve's system -- and two classical ones -- the registers holding the outcomes of the measurement of the key part (the state is therefore called a \emph{ccq state}). Now, it turns out that twisting does not change this state.
However, in the considered class of protocols Alice and Bob use only the classical registers, so the output of such protocols depends solely on the ccq state. Thus the key obtained with and without twisting is exactly the same. This holds even though twisting is a non-local operation and the resulting state can be more powerful in all other respects (such as drawing key by some other type of protocol). Next, if we additionally trace out the shield, i.e., the subsystem $A'B'$, the resulting ccq state differs from the original ccq state only in that Eve holds, in addition, the shield. Thus, if any key can be obtained from it, it can only be less secure than the key obtained from the original ccq state. It turns out that for any `spider' state, i.e., a state of the form \begin{align} \rho = \begin{bmatrix} C & & & D \\ & E & F \\ & F^{\dagger} & E' \\ D^{\dagger} & & & C' \end{bmatrix} \label{eq:spider} \end{align} \begin{figure*} \caption{Block matrix form of a mixture of four private bits.} \label{eq:the-class-blocks} \label{fig:the-class-blocks} \end{figure*} (where we have omitted zero blocks for clarity) there exists a twisting operation such that the matrix elements of the two-qubit state, obtained by tracing out the $A'B'$ subsystem after applying the twisting, are equal to the trace norms of the corresponding blocks in the original state: \begin{align} \sigma = \begin{bmatrix} \|C\| & & & \|D\| \\ & \|E\| & \|F\| \\ & \|F\| & \|E'\| \\ \|D\| & & & \|C'\| \end{bmatrix} \label{eq:ps-spider} \end{align} (we use here that $\|A\|=\|A^\dagger\|$ for the trace norm). This twisting is in a sense optimal for the spider states. We call the two-qubit state \eqref{eq:ps-spider} the \emph{privacy-squeezed state} of the original state. If a spider state satisfies $\|C\|=\|C'\|$ and $\|E\|=\|E'\|$ then its privacy-squeezed state is a Bell diagonal state. For a deeper discussion of privacy squeezing see \cite{keyhuge}, although the name \emph{spider state} is not used there.
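The map from the blocks of a spider state to its privacy-squeezed state \eqref{eq:ps-spider} is simple enough to sketch numerically (our own illustration; the random blocks below do not form a physical state, they only exercise the block-to-norm map):

```python
import numpy as np

def tr_norm(M):
    """Trace norm ||M||: the sum of the singular values of M."""
    return np.linalg.svd(M, compute_uv=False).sum()

# Random blocks standing in for C, D, E, F, E', C' of a spider state.
# Hermiticity and positivity of the full state are not enforced here.
rng = np.random.default_rng(1)
blk = lambda: rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
C, D, E, F, Ep, Cp = (blk() for _ in range(6))

# Privacy-squeezed two-qubit matrix: each block replaced by its trace norm.
sigma = np.array([
    [tr_norm(C), 0,          0,           tr_norm(D)],
    [0,          tr_norm(E), tr_norm(F),  0],
    [0,          tr_norm(F), tr_norm(Ep), 0],
    [tr_norm(D), 0,          0,           tr_norm(Cp)],
])

# The construction relies on ||A|| = ||A^dagger|| for the trace norm:
assert abs(tr_norm(D) - tr_norm(D.conj().T)) < 1e-9
```

Note how the single norm $\|D\|$ (respectively $\|F\|$) appears in both symmetric positions, which is exactly what makes the resulting matrix real and symmetric.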
\section{Distilling key from PPT mixtures of private states} \label{sec:class} Here, we construct a class of bound entangled states which are key-distillable. They are mixtures of four orthogonal private bits of some special form. We provide a sufficient condition to distill cryptographic key from our class. The condition given in this section is generalized to an arbitrary state in \mbox{Sec.} \ref{sec:general-key}. \subsection{Definition of the class} Let us consider a class of states \begin{align} \label{eq:the-class} \varrho = \lambda_1 \gamma_1^+ + \lambda_2 \gamma_1^- + \lambda_3 \gamma_2^+ + \lambda_4 \gamma_2^- \end{align} which is a mixture of four orthogonal private bits which can be considered analogues of the Bell states. The construction is possible in dimension $2d \otimes 2d$, with $d\geq 2$. The four private bits are given by \begin{align} \label{eq:the-class-gamma_i} \gamma_1^\pm = \gamma(\pm X), \quad \gamma_2^\pm = \sigma_x^A \gamma(\pm Y) \sigma_x^A \end{align} where $\sigma_x^A$ is the Pauli matrix $\sigma_x$ applied on subsystem $A$, and by $\gamma(X)$ we mean a private bit written in its $X$-form \eqref{eq:pbit}. States given by \eqref{eq:the-class} and \eqref{eq:the-class-gamma_i} have the block matrix form \eqref{eq:the-class-blocks} given in figure \ref{fig:the-class-blocks}. \begin{definition} We define the class ${\cal C}$ as the class of states given by \eqref{eq:the-class} and \eqref{eq:the-class-gamma_i} with operators $X$ and $Y$ related by \begin{align} \label{eq:Y} Y =\frac{X^\Gamma}{\|X^\Gamma\|} \end{align} where the superscript $\Gamma$ denotes the partial transposition in the Alice versus Bob cut, and satisfying the following condition: the diagonal blocks of \eqref{eq:the-class-blocks}, i.e., the operators $\sqrt{XX^\dagger}$, $\sqrt{X^\dagger X}$, $\sqrt{YY^\dagger}$, $\sqrt{Y^\dagger Y}$, are all PPT-invariant, i.e., each satisfies $A=A^\Gamma$.
\end{definition} (The relation \eqref{eq:Y} and the PPT-invariance of the diagonal blocks are necessary to obtain the simple conditions for the state to be PPT given in \mbox{Sec.} \ref{sec:ppt}.) In particular, the PPT-invariance of the diagonal blocks holds for \begin{align} \label{eq:X} X = \frac{1}{u} \sum_{i,j=0}^{d-1} u_{ij} |ij\rangle\langle ji| \end{align} where $u_{ij}$ are the elements of some unitary matrix on ${\cal C}^d$ and \begin{align} \label{eq:u} u = \sum_{i,j=0}^{d-1} |u_{ij}|. \end{align} For the operator $X$ given by \eqref{eq:X} we have \begin{align} \|X^\Gamma\| = \frac{d}{u}, \qquad \frac{1}{\sqrt{d}} \leq \|X^\Gamma\| \leq 1 \end{align} where the minimum is achieved for a unimodular unitary \cite{lowdim-pptkey} and the maximum for the identity matrix. We will sometimes write $\varrho_U$ to denote the subclass of the class ${\cal C}$ with operator $X$ given by \eqref{eq:X}, or to stress the use of a concrete unitary in the definition of $X$; in particular, we will consider the subclass $\varrho_H$ where $u_{ij}$ are the elements of the Hadamard unitary matrix. In the case $d=2$ we will also consider the subclass of the class ${\cal C}$ with operators $X$ and $Y$ given by \begin{align} \label{eq:spider-Y} Y = q \, Y_{U_1} + (1 - q) \, \sigma_x^{A'} Y_{U_2} \sigma_x^{A'}, \quad X = \frac{Y^\Gamma}{\|Y^\Gamma\|} \end{align} where \begin{align} \label{eq:spider-Y_U} Y_U = \frac{1}{d} \sum_{i,j=0}^{d-1} u_{ij} |ii\rangle\langle jj|. \end{align} The unitaries $U_1$ and $U_2$ must have the same global phase, i.e., $\alpha_1=\alpha_2$ in the parametrization of a single-qubit unitary given by \eqref{eq:qubit-unitary} in the appendix. In particular, one may take $U_1=U_2$. We also use an alternative parametrization in terms of $p$, $\alpha$, and $\beta$ given by \begin{align} p &\equiv \lambda_1 + \lambda_2 \in [0, 1] \\ \alpha &\equiv \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2} \in [-1, 1] \\ \beta &\equiv \frac{\lambda_3 - \lambda_4}{\lambda_3 + \lambda_4} \in [-1, 1].
\end{align} On the other hand, the original parameters $\lambda_i$ can be expressed in terms of $p$, $\alpha$, and $\beta$ as follows: \begin{align} \lambda_{1,2} &= \frac{1 \pm \alpha}{2} p \\ \lambda_{3,4} &= \frac{1 \pm \beta}{2} (1 - p). \end{align} Both parametrizations are directly related to the privacy-squeezed version of the state given by \eqref{eq:the-class} and \eqref{eq:the-class-gamma_i}, constructed according to formula \eqref{eq:ps-spider}: \begin{align} \label{eq:ps-class} \sigma=\sum_i \lambda_i |\psi_i\rangle\langle\psi_i|=\frac12 \begin{bmatrix} p & & & \alpha p \\ \, & (1-p) & \beta (1-p) \\ \, & \beta (1-p) & (1-p) \\ \alpha p & & & p \end{bmatrix} \end{align} where the Bell states $\psi_i$ are given by \begin{align} \label{eq:bell-states} |\psi_{1,2}\rangle &= \frac{1}{\sqrt2} (|00\rangle\pm|11\rangle) \nonumber \\ |\psi_{3,4}\rangle &= \frac{1}{\sqrt2} (|01\rangle\pm|10\rangle). \end{align} Thus, the $\lambda_i$ are the eigenvalues of the privacy-squeezed state, $p$ reports the balance between correlations and anti-correlations, while $\alpha$ and $\beta$ report how the coherences are damped. A subclass of the class ${\cal C}$ with $X$ defined by \eqref{eq:X} has been considered in \cite{lowdim-pptkey}: \begin{align} \label{eq:subclass} \tilde\varrho = \lambda_1 \gamma_1^+ + \lambda_3 \gamma_2^+. \end{align} The class ${\cal C}$ is much wider than \eqref{eq:subclass}; in particular, it contains key-distillable PPT states arbitrarily close to the separable states, but this comes at a price: in general, we have to use the recurrence preprocessing to obtain a positive key rate for ${\cal C}$, while for \eqref{eq:subclass} the sole Devetak-Winter protocol is enough \cite{lowdim-pptkey}.
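The two parametrizations above are mutually inverse; a short sanity check (our own, with arbitrary weights $\lambda_i$ summing to one):

```python
# Round-trip check of the (lambda_i) <-> (p, alpha, beta) parametrization.
lam = [0.35, 0.25, 0.3, 0.1]          # arbitrary mixing weights, sum to 1

p = lam[0] + lam[1]
alpha = (lam[0] - lam[1]) / (lam[0] + lam[1])
beta = (lam[2] - lam[3]) / (lam[2] + lam[3])

# Invert: lambda_{1,2} = (1 +/- alpha)/2 * p, lambda_{3,4} = (1 +/- beta)/2 * (1-p)
lam_back = [(1 + alpha) / 2 * p, (1 - alpha) / 2 * p,
            (1 + beta) / 2 * (1 - p), (1 - beta) / 2 * (1 - p)]

print([round(x, 12) for x in lam_back])   # -> [0.35, 0.25, 0.3, 0.1]
```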
\subsection{Sufficient PPT conditions} \label{sec:ppt} For the states of the class ${\cal C}$ to be PPT (so that maximal entanglement cannot be distilled from them) it is sufficient to satisfy the following conditions \begin{align} |\lambda_1 - \lambda_2| &\leq (1 - \lambda_1 - \lambda_2) \|X^\Gamma\|^{-1} \\ \label{eq:ppt-cond-1b} |\lambda_3 - \lambda_4| &\leq (\lambda_1 + \lambda_2) \|X^\Gamma\| \end{align} or equivalently \begin{align} \label{eq:ppt-cond-alpha} |\alpha| &\leq \min(1, \alpha_1) \\ \label{eq:ppt-cond-beta} |\beta| &\leq \min(1, \alpha_1^{-1}) \end{align} where \begin{align} \label{eq:alpha1} \alpha_1 = \frac{1- p}{p} \|X^\Gamma\|^{-1}. \end{align} In particular, if $p=\tilde{\lambda}_1$ where $\tilde{\lambda}_1$ is given by \eqref{eq:subclass-ppt-cond}, we have $\alpha_1=1$. Moreover, if $\alpha=\alpha_1\beta$ then $\varrho$ is a PPT-invariant state. For the subclass \eqref{eq:subclass}, the above PPT conditions collapse to a single PPT-invariant state, lying on the boundary of PPT states, which satisfies \begin{align} \label{eq:subclass-ppt-cond} \lambda_1 = \tilde{\lambda}_1 \equiv \frac{1}{1 + \|X^\Gamma\|}. \end{align} \subsection{Key distillability} \label{sec:key} We shall derive here a general sufficient condition for key-distillability of spider states with a Bell diagonal privacy-squeezed state, which easily follows from combining the privacy squeezing technique with the result of \cite{acin-2006-73} on key distillation from two-qubit states. This is enough for our purposes, as the states of our class are of that form. (In \mbox{Sec.} \ref{sec:general-key} we shall extend the key-distillability condition to arbitrary states by exploiting twirling.) \begin{proposition} \label{prop:key} Let $\rho$ be a state of the form \begin{align} \label{eq:th-key-rho} \rho = \begin{bmatrix} C & & & D \\ & E & F \\ & F^{\dagger} & E' \\ D^{\dagger} & & & C' \end{bmatrix} \end{align} satisfying $\|C\|=\|C'\|$ and $\|E\|=\|E'\|$, i.e., $\rho$ is a state having a Bell diagonal privacy-squeezed state.
If \begin{align} \label{eq:th-key-cond} \max(\|D\|,\|F\|) > \sqrt{\|C\| \|E\|} \end{align} then Alice and Bob can distill cryptographic key by first measuring the key part of many copies of the state $\rho$ and then using the recurrence \cite{Maurer_key_agreement,BDSW1996} and the Devetak-Winter protocol \cite{DevetakWinter-hash}. \end{proposition} \noindent \begin{remark} Note that, interestingly, the condition \eqref{eq:th-key-cond} is equivalent to requiring that one of the matrices \begin{align} \begin{bmatrix} \|C\| & \|D\| \\ \|D^\dagger\| & \|E\| \end{bmatrix},\quad \begin{bmatrix} \|C\| & \|F\| \\ \|F^\dagger\| & \|E\| \end{bmatrix}\quad \end{align} is not positive semi-definite. \end{remark} \begin{remark} Note that the right-hand side of \mbox{Eq.} \eqref{eq:th-key-cond} can also be written as $\frac12 \sqrt{p_e (1-p_e)}$ where $p_e$ is the probability of error (i.e., anticorrelation) when the key part is measured in the standard basis. \end{remark} \begin{proof}[Proof of Proposition \ref{prop:key}] We apply the privacy squeezing technique described in \mbox{Sec.} \ref{sec:prelim}, i.e., we show that the privacy-squeezed state of $\rho$ is key-distillable by a protocol based on measuring the state locally in the standard basis and classical postprocessing. This implies that $\rho$ is also key-distillable. The privacy-squeezed state is precisely of the form \eqref{eq:ps-spider} with $\|C\|=\|C'\|$ and $\|E\|=\|E'\|$, i.e., it is a Bell diagonal state which can be written as \begin{align} \sigma = \frac12 \begin{bmatrix} a & & & d \\ & e & f \\ & f & e \\ d & & & a \end{bmatrix}. \end{align} For such a state it was shown in \cite{acin-2006-73} that if $\max(|d|,|f|)>\sqrt{ae}$ then one can distill key by measuring the state locally in the standard basis and processing the resulting classical data (in fact, by using the recurrence followed by the Devetak-Winter protocol).
This is precisely the type of protocol allowed by the privacy-squeezing technique described in \mbox{Sec.} \ref{sec:prelim}. In our case, the above conditions are simply the ones given in \eqref{eq:th-key-cond}. \end{proof} Due to the form \eqref{eq:ps-class} of the privacy-squeezed state of the states from our class, we immediately obtain suitable conditions: \begin{corollary} \label{cor:key} Let $\rho$ be a state defined by formulas \eqref{eq:the-class} and \eqref{eq:the-class-gamma_i} with arbitrary $X$ and $Y$ satisfying $\|X\|=\|Y\|=1$. If \begin{align} \label{eq:key-cond-lambda} |\lambda_1 - \lambda_2| > \sqrt{(\lambda_1 + \lambda_2) (1 - \lambda_1 - \lambda_2)} \end{align} or equivalently if \begin{align} \label{eq:key-cond-alpha} |\alpha| > \sqrt{\frac{1 - p}{p}} \end{align} then Alice and Bob can distill cryptographic key by first measuring the key part of many copies of the state $\rho$ and then using the recurrence and the Devetak-Winter protocol. \end{corollary} Corollary \ref{cor:key} also holds if one uses $|\lambda_3 - \lambda_4|$ as the left-hand side of \eqref{eq:key-cond-lambda}, or equivalently $|\beta|$ as the left-hand side of \eqref{eq:key-cond-alpha}; however, in this paper, we do not use these conditions. \begin{observation} \label{obs:p-range-of-ppt-key} For a state of the class ${\cal C}$ to be both PPT and key-distillable using Corollary~\ref{cor:key} it must satisfy both \eqref{eq:ppt-cond-alpha} and \eqref{eq:key-cond-alpha}. For a given value of the parameter $p$ there exists $\alpha$ satisfying both conditions iff $p \in (\frac12, p_{\max})$ where \begin{align} p_{\max} = \frac{1}{1 + \|X^\Gamma\|^2}. \end{align} \end{observation} \subsection{Tolerable white noise} We say that $\delta$ is the \emph{tolerable noise} of a key distillation protocol for a state $\varrho$ if for any $\varepsilon<\delta$ the state $\varrho_\varepsilon$ with a fraction $\varepsilon$ of white noise admixed, \begin{align} \varrho_\varepsilon = (1-\varepsilon) \varrho + \varepsilon \frac{I}{d^2}, \end{align} remains key-distillable with that protocol.
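Returning to Observation \ref{obs:p-range-of-ppt-key}: for the Hadamard subclass $\varrho_H$ with $d=2$ one has $\|X^\Gamma\|=1/\sqrt2$, so $p_{\max}=2/3$. The following check (our own) confirms that for a sample $p$ inside $(\frac12,p_{\max})$ the PPT condition \eqref{eq:ppt-cond-alpha} and the key condition \eqref{eq:key-cond-alpha} leave room for a common $\alpha$:

```python
import math

# ||X^Gamma|| for the Hadamard choice with d = 2 (each |u_ij| = 1/sqrt(2),
# so u = 2*sqrt(2) and ||X^Gamma|| = d/u = 1/sqrt(2)).
norm_XG = 1 / math.sqrt(2)
p_max = 1 / (1 + norm_XG ** 2)
assert abs(p_max - 2 / 3) < 1e-12

p = 0.6                                  # any sample p in (1/2, p_max)
alpha1 = (1 - p) / p / norm_XG           # alpha_1 from Eq. (alpha1)
key_lower = math.sqrt((1 - p) / p)       # key condition: |alpha| > key_lower
ppt_upper = min(1.0, alpha1)             # PPT condition: |alpha| <= min(1, alpha_1)

print(key_lower < ppt_upper)             # -> True: an admissible alpha exists
```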
For $p>\frac12$, the tolerable noise of the Devetak-Winter protocol with the recurrence preprocessing for the class ${\cal C}$ is given by \begin{align} \label{eq:tolerable-noise} \delta &= 1-{{1}\over{\sqrt{8(\lambda_1^2 + \lambda_2^2) - 4(\lambda_1 + \lambda_2) + 1}}} \\ &= 1-{{1}\over{\sqrt{4\left(1 + \alpha^2\right)\,p^2-4\,p+1}}}. \end{align} In particular, for the key-distillable PPT state $\tilde{\varrho}_H$ with $\lambda_1=\tilde{\lambda}_1$, where $\tilde{\lambda}_1$ is given by \eqref{eq:subclass-ppt-cond}, the tolerable noise \eqref{eq:tolerable-noise} for the Devetak-Winter protocol with the recurrence preprocessing is approximately 0.155, while for the sole Devetak-Winter protocol it is approximately 0.005, i.e., 31 times smaller. See figure~\ref{fig:noise}. \begin{figure} \caption{Comparison of the tolerable noise of $\tilde\varrho_H$ in the case of using the Devetak-Winter protocol with and without the recurrence preprocessing.} \label{fig:noise} \end{figure} \subsection{Separability} Given a state $\varrho_U$ of the class ${\cal C}$ with $X$ given by \eqref{eq:X} and $d=2$, i.e., a state of a $4 \otimes 4$ system, we may try to decompose it into a mixture of four two-qubit states. The particular decomposition which we propose below is possible if \begin{align} \label{eq:sep-precond-lambda} |\lambda_3 - \lambda_4| \leq (1 - \lambda_1 - \lambda_2) \|X^\Gamma\| \end{align} \begin{figure*} \caption{The form of the two-qubit Bell diagonal states from the decomposition of a state $\varrho_U$ with $d=2$.} \label{eq:rho_ij} \label{fig:rho_ij} \end{figure*} or equivalently if \begin{align} \label{eq:sep-precond-beta} |\beta| \leq \|X^\Gamma\|. \end{align} All four two-qubit states in our decomposition are Bell diagonal states with the same set of eigenvalues. Thus, the two-qubit states are separable (and, hence, $\varrho$ is separable) if all their eigenvalues are less than or equal to $\frac12$ \footnote{This can be directly verified by positivity of the partial transpose \cite{Peres96,sep1996}}.
For our decomposition this happens if, in addition to \eqref{eq:sep-precond-lambda}, the following conditions are satisfied \begin{align} \lambda_1 &\leq \frac12 \\ \lambda_2 &\leq \frac12 \\ \label{eq:sep-as-ppt-1} |\lambda_3 - \lambda_4| &\leq (\lambda_1 + \lambda_2) \|X^\Gamma\| \end{align} or equivalently, in addition to \eqref{eq:sep-precond-beta}, the following conditions are satisfied \begin{align} \label{eq:sep-alpha-cond} |\alpha| & \leq \frac{1-p}{p} \\ \label{eq:sep-as-ppt-2} |\beta| & \leq \frac{p}{1-p} \|X^\Gamma\|. \end{align} Note that conditions \eqref{eq:sep-as-ppt-1} and \eqref{eq:sep-as-ppt-2} are identical to the PPT conditions for $\varrho$ given by \eqref{eq:ppt-cond-1b} and \eqref{eq:ppt-cond-beta}, respectively. The decomposition into the four two-qubit states has the form \begin{multline} \label{eq:4x2quit-decomposition} \varrho_U = \frac{|u_{00}|}{u}\rho_{00}(|00\rangle_{AA'}, |10\rangle_{AA'}; |00\rangle_{BB'}, |10\rangle_{BB'}) \\ + \frac{|u_{01}|}{u}\rho_{01}(|00\rangle_{AA'}, |11\rangle_{AA'}; |01\rangle_{BB'}, |10\rangle_{BB'}) \\ + \frac{|u_{10}|}{u}\rho_{10}(|01\rangle_{AA'}, |10\rangle_{AA'}; |00\rangle_{BB'}, |11\rangle_{BB'}) \\ + \frac{|u_{11}|}{u}\rho_{11}(|01\rangle_{AA'}, |11\rangle_{AA'}; |01\rangle_{BB'}, |11\rangle_{BB'}) \end{multline} where $u_{ij}$ are the elements of the unitary matrix on ${\cal C}^2$ used to define the operator $X$ in \eqref{eq:X}, $u$~is given by \eqref{eq:u}, and $\rho_{ij}$ denote the two-qubit states given by \eqref{eq:rho_ij} in figure \ref{fig:rho_ij}, where $\phi_{ij}$ comes from the polar decomposition of $u_{ij}$ \begin{align} u_{ij} = |u_{ij}|e^{i \phi_{ij}}. \end{align} The local bases of Alice and Bob for each of the two-qubit states are given in \eqref{eq:4x2quit-decomposition} in parentheses. \subsection{PPT key arbitrarily close to separability} One can obtain key from some $4 \otimes 4$ PPT states lying arbitrarily close to the set of separable states.
That is, one can easily select a single-parameter subclass of the class ${\cal C}$ satisfying the PPT conditions and approaching some separable state with $p=\frac12$, such that for any other state in this subclass, no matter how close to the separable state, the key condition \eqref{eq:key-cond-alpha} is satisfied. Note that if we chose a separable state with $p \ne \frac12$ as the final state, the key condition would be violated before reaching that final state; thus, the key-distillable states would not come arbitrarily close to the set of separable states. \begin{figure}\label{fig:key-up-to-sep} \end{figure} Such a class of states, a subclass of $\varrho_H$, is illustrated in figure \ref{fig:key-up-to-sep}. The dashed line represents the subclass $\tilde\varrho_H$, given by \eqref{eq:subclass}, a mixture of two pbits ($\gamma_1^+$ and $\gamma_2^+$), which in the alternative parametrization corresponds to $p\in[0,1]$ and $\alpha=\beta=1$. As shown in \cite{lowdim-pptkey}, this class contains exactly one (boundary) PPT entangled state, obtained by setting $p=\tilde{\lambda}_1$ where $\tilde{\lambda}_1$ is given by \eqref{eq:subclass-ppt-cond}; otherwise the states are NPT. The solid line represents a class of PPT key-distillable states obtained by setting $p\in(\frac12,p_{\max})$, $\alpha=\min(1,\alpha_1)$, and $\beta=\min(1, \alpha_1^{-1})$, where $p_{\max}=(1+\|X^\Gamma\|^2)^{-1}=\frac23$ (see Observation \ref{obs:p-range-of-ppt-key}), while $\alpha_1$ is given by \eqref{eq:alpha1}, i.e., $\alpha_1=\frac{1-p}{p}\sqrt{2}$ in the considered case. In the range $p\in(\frac12,\tilde{\lambda}_1]$ the class is represented as a straight line which starts from the PPT state of the previous class $\tilde{\varrho}_H$ on one end ($p=\tilde{\lambda}_1$) and approaches arbitrarily close to the separable state $\varrho_{\mathrm{sep}}$ ($p=\frac12$) on the other end.
In the range $p\in[\tilde{\lambda}_1, p_{\max})$ the states are PPT-invariant and lie on the boundary of PPT entangled states; they are represented as an arc which starts from the PPT state of the previous class $\tilde{\varrho}_H$ on one end ($p=\tilde{\lambda}_1$) and approaches arbitrarily close to the state $\varrho_{\max}$ ($p=p_{\max}$) on the other end. In the range $p\in(\tilde{\lambda}_1, p_{\max})$ one could take $\alpha<\alpha_1$, such that the key condition \eqref{eq:key-cond-alpha} is still satisfied, to move into the interior of the set of PPT states. \section{States $\varrho_H$ as mixtures of Bell states with `flags'} \label{sec:exp} States of the class $\varrho_H$ are separable in the $AB:A'B'$ cut, i.e., the subsystems $AB$ and $A'B'$ of $\varrho_H$ are only classically correlated. A state from $\varrho_H$ can be decomposed into a mixture of four states. Each of the four states has a Bell state $\psi_i$ on the subsystem $AB$ and some corresponding state on $A'B'$. One can select parameters $p \in [0,1]$, $\alpha \in [-1,1]$, and $\beta \in [-1,1]$ satisfying both the PPT conditions \eqref{eq:ppt-cond-alpha} and \eqref{eq:ppt-cond-beta} and the key condition \eqref{eq:key-cond-alpha}, and prepare a corresponding PPT key-distillable state from the class $\varrho_H$ which has the form \begin{align} \varrho_H = \sum_{i=1}^4 q_i \, |\psi_i\rangle\langle\psi_i|_{AB} \otimes \varrho_{A'B'}^{(i)} \end{align} where the Bell states $\psi_i$ are given by \eqref{eq:bell-states} and the correlated states are the following: \begin{align} \varrho^{(1)} &= \alpha \frac12 (P_{00} + P_{\psi_3}) + (1-\alpha) \frac{I}{4} \\ \varrho^{(2)} &= \alpha \frac12 (P_{11} + P_{\psi_4}) + (1-\alpha) \frac{I}{4} \\ \varrho^{(3,4)} &= \beta P_{\chi_\pm} + (1-\beta) \frac12 (P_{00} + P_{11}) \end{align} where $P_{\psi}$ denotes the projector onto a pure state $\psi$ and \begin{align} \chi_\pm &= \frac{1}{\sqrt{2 \pm \sqrt2}} (|00\rangle \pm |\psi_1\rangle) \\ q_1 &= q_2 = \frac{p}{2} \\ q_3 &= q_4 = \frac{1 - p}{2}.
\end{align} \section{Maximizing von Neumann Entropy} \label{sec:entropy} In this section, we find $4\otimes4$ key-distillable PPT states with quite high von Neumann entropy for two subclasses of the class ${\cal C}$, and summarize the results in a table. \subsection{For states of the class $\varrho_U$} \label{sec:sup-S-rho_U} Here, we find the supremum of the von Neumann entropy over the subclass $\varrho_U$ of the class ${\cal C}$ with $X$ given by \eqref{eq:X}, restricted to states that are both PPT and key-distillable by Corollary~\ref{cor:key}. Let us denote this set of states by ${\cal PK}_d$, subscripted with the dimension of the unitary used to define the operator $X$. As $\varrho$ is a mixture of four orthogonal private bits, its von Neumann entropy is given by \begin{multline} \label{eq:S} S(\varrho_U) = H(p) + p \left(H\left(\frac{1-\alpha}{2}\right) + S(\sqrt{X^\dagger X})\right) \\ + (1 - p) \left(H\left(\frac{1-\beta}{2}\right) + S(\sqrt{Y^\dagger Y})\right) \end{multline} where \begin{align} \label{eq:S-XX} S(\sqrt{X^\dagger X}) &\leq 2\log_2 d \\ S(\sqrt{Y^\dagger Y}) &= \log_2 d \end{align} and the maximal value in \eqref{eq:S-XX} is achieved if the unitary used to define $X$ in \eqref{eq:X} is unimodular. A unimodular unitary also maximizes the allowed range of $p$ given by Observation \ref{obs:p-range-of-ppt-key}, as it achieves the minimum of $\|X^\Gamma\|$. Hence, to maximize the entropy, it is enough to consider a unimodular unitary. The supremum is approached by a state with $p=p_{\max}$, $\beta=0$, and $\alpha=\sqrt{\frac{1-p}{p}}$ (which no longer satisfies our key-distillability condition); thus \begin{multline} \sup_{\varrho_U\in {\cal PK}_d} S(\varrho_U) = \sup_{p\in(\frac12, p_{\max})} \Bigg( (1+p)\log_2 d + (1 - p) \\ + H(p) + p H\left(\textstyle\frac{1-\sqrt{\frac{1 - p}{p}}}{2}\right) \Bigg) \end{multline} where $p_{\max}= (1 + \|X^\Gamma\|^2)^{-1}$ comes from Observation \ref{obs:p-range-of-ppt-key}.
In particular, for $d=2$, i.e., for $\varrho$ being a $4 \otimes 4$ state, the supremum is achieved for the state having $p=p_{\max}=2/3$, which gives \begin{align} \sup_{\varrho_U\in {\cal PK}_2} S(\varrho_U) \approx 3.319. \end{align} The supremum corresponds to the state $\varrho_{\max}$ in figure \ref{fig:key-up-to-sep}, but with $\beta=0$. \subsection{For states of a class larger than $\varrho_U$} \label{sec:max-S-spider-Y} For the subclass of the class ${\cal C}$ with $d=2$ and $X$ and $Y$ given by \eqref{eq:spider-Y}, we are able to obtain \begin{align} S(\rho) \approx 3.524 \end{align} for $U_1=U_2=H$, $q \approx 0.683$, $\beta=0$, and $\alpha$, $p$ taken as in the previous subsection. This appears to be the supremum of the von Neumann entropy for this selection of the operators $X$ and $Y$. \subsection{Summary} Here, we summarize the results of maximizing the von Neumann entropy of $4\otimes4$ key-distillable PPT states in the following table: \noindent \begin{tabular}{lp{7.5cm}} \toprule $S(\rho)$ & $\rho$ satisfying the PPT and key conditions \\ \midrule 2.564 & class $\tilde{\varrho}$ from \cite{lowdim-pptkey} with $p=\tilde{\lambda}_1$; the maximum is achieved for $U=H$ \\ 3.319 & class $\varrho_U$; the supremum is described in \mbox{Sec.} \ref{sec:sup-S-rho_U} \\ 3.524 & class ${\cal C}$ with $Y$ given by \eqref{eq:spider-Y}; a conjectured supremum is described in \mbox{Sec.} \ref{sec:max-S-spider-Y} \\ \bottomrule \end{tabular} \section{Distillability via erasure channel} \label{sec:erasure} In \cite{SmithYard-2008}, it was shown that two zero-capacity channels, if combined together, can have nonzero capacity. One of the channels was related (through the so-called Choi-Jamio\l{}kowski (CJ) isomorphism) to a bound entangled but key-distillable state, while the other was a so-called symmetrically extendable channel. In particular, they considered an example where the first channel had a $4 \otimes 4$ CJ state from the class \eqref{eq:subclass} while the second one was the 50\%-erasure channel.
In \cite{Jonathan-two-wrongs-make-right} a simpler scheme was proposed which also allows one to observe this curious phenomenon. The second approach amounts to sending the subsystem $A'$ of a state defined on systems $ABA'B'$ through the 50\%-erasure channel and checking the coherent information of the resulting state. If it is positive, one concludes that the capacity of the combined channel is also positive. Here, we shall use this approach to see how the presence of the coherence $\beta$ influences the phenomenon. The coherent information after sending the $A'$ subsystem through the 50\%-erasure channel is given by \begin{align} I_{\mathrm{coh}} = \frac12 (S_{A'BB'} - S) + \frac12 (S_{BB'} - S_{ABB'}) \end{align} where $S$, $S_{A'BB'}$, and $S_{BB'}$ are given by \eqref{eq:S}, \eqref{eq:S_A'BB'}, and \eqref{eq:S_BB'}, respectively. For a PPT state $\tilde{\varrho}$ given by \eqref{eq:subclass} with $X$ given by \eqref{eq:X}, based on a unimodular unitary and with $\lambda_1=\tilde{\lambda}_1$, where $\tilde{\lambda}_1$ is given by \eqref{eq:subclass-ppt-cond}, the coherent information is positive starting from $d=11$. For a similar state of our class with $p=\tilde{\lambda}_1$, $\alpha=1$, and $\beta=0$ the coherent information is positive starting from $d=22$. The formulas for $S_{A'BB'}$ and $S_{BB'}$ are as follows: \begin{multline} \label{eq:S_A'BB'} S(\varrho_{A'BB'}) = 1 + \frac12 S\left( p \sqrt{X X^\dagger} + (1 - p) \sqrt{Y^\dagger Y}\right) \\ + \frac12 S\left( p \sqrt{X^\dagger X} + (1 - p) \sqrt{Y Y^\dagger}\right) \end{multline} \begin{multline} \label{eq:S_BB'} S(\varrho_{BB'}) = 1 + \frac12 S_B\left( p \sqrt{X X^\dagger} + (1 - p) \sqrt{Y^\dagger Y}\right) \\ + \frac12 S_B\left( p \sqrt{X^\dagger X} + (1 - p) \sqrt{Y Y^\dagger}\right). \end{multline} \section{Condition for drawing secure key from general states} \label{sec:general-key} From \mbox{Sec.} \ref{sec:key}, we have a sufficient condition for drawing key, in terms of the norms of the nonzero blocks, from states having a Bell diagonal privacy-squeezed state.
In this section, we generalize that condition to the case of an arbitrary state. Let us define two twirling operations (\mbox{cf.} \cite{BDSW1996}) \begin{align} \Lambda_{XX} &= \frac12 ( \hat I \otimes \hat I + \hat X \otimes \hat X ) \\ \Lambda_{ZZ} &= \frac12 ( \hat I \otimes \hat I + \hat Z \otimes \hat Z ) \end{align} and one twirling with flags \begin{align} \Lambda_{XX}'(\rho) &= \frac12 ( \rho \otimes |0\rangle\langle0| + \hat X \otimes \hat X (\rho) \otimes |1\rangle\langle1| ) \end{align} where $\hat U \rho = U \rho U^\dagger$, and $X$ and $Z$ are Pauli matrices. Now, we give a sufficient condition to obtain key from a general state. \begin{proposition} \label{prop:generic-key} For an arbitrary state \begin{align} \varrho = \begin{bmatrix} A & B & C & D \\ B^{\dagger} & E & F & G \\ C^{\dagger} & F^{\dagger} & H & I \\ D^{\dagger} & G^{\dagger} & I^{\dagger} & J \end{bmatrix} \end{align} if \begin{align} \label{eq:th-general} \max(\|D\|,\|F\|) > \frac12 \sqrt{(\|A\|+\|J\|)(\|E\| + \|H\|)} \end{align} then Alice and Bob can distill cryptographic key by first applying the twirling $\Lambda_{XX}' \circ \Lambda_{ZZ}$ to the key part, measuring the key part of many copies of the state $\varrho$, and then using the recurrence and the Devetak-Winter protocol. \end{proposition} \begin{remark} Note that the right-hand side of \mbox{Eq.} \eqref{eq:th-general} can also be written as $\frac12 \sqrt{p_e (1-p_e)}$ where $p_e$ is the probability of error (i.e., anticorrelation) when the key part is measured in the standard basis. \end{remark} \begin{proof}[Proof of Proposition \ref{prop:generic-key}] Alice and Bob first apply the twirling $\Lambda_{XX}' \circ \Lambda_{ZZ}$ (an LOCC operation) to the key part and obtain the following state \begin{multline} \Lambda'_{XX} \circ \Lambda_{ZZ} (\varrho)\\ =\begin{bmatrix} A \oplus J & & & D\oplus D^\dagger \\ & E \oplus H & F \oplus F^\dagger \\ & F \oplus F^\dagger & E \oplus H \\ D \oplus D^\dagger & & & A \oplus J \end{bmatrix}. \end{multline} This state is now of the spider form and, thanks to the flags, we have direct sums within the blocks.
Now, the privacy-squeezed state has the following Bell diagonal form \begin{multline} \sigma = \\ \small\begin{bmatrix} \|A\| + \|J\| & & & \|D\| + \|D^\dagger\| \\ & \|E\| + \|H\| & \|F\| + \|F^\dagger\| \\ & \|F\| + \|F^\dagger\| & \|E\| + \|H\| \\ \|D\| + \|D^\dagger\| & & & \|A\| + \|J\| \end{bmatrix}. \end{multline} Then the proof follows from Proposition~\ref{prop:key}. \end{proof} Note that in the proof above we use $\Lambda_{XX}'$, a twirling with flags. If $\Lambda_{XX}$, a twirling without flags, were used instead, we would have to replace $\|D\|$ with $\|D + D^\dagger\|$ in \eqref{eq:th-general} (and analogously for $\|F\|$), which can be much smaller than $\|D\|$, and even equal to zero in the extreme case of an antihermitian $D$, i.e., $D^\dagger=-D$; so in this case no key can be distilled from $\Lambda_{XX}(\varrho)$ by this criterion, even if $\varrho$ is a private state, i.e., $\varrho=\gamma(D)$. Note also that in the proof we have first applied the twirling with flags to the original state, and then the privacy-squeezing operation. Actually, the same state would be obtained if we first applied the privacy squeezing and then applied (standard) twirling. This is illustrated by the following diagram \begin{align} \begin{CD} \varrho @>{\Lambda'_{XX} \circ \Lambda_{ZZ}}>> \varrho' \\ @V{P_{sq}}VV @VV{P_{sq}}V \\ \sigma @>{\Lambda_{XX} \circ \Lambda_{ZZ}}>> \sigma' \end{CD} \end{align} where $P_{sq}$ stands for the privacy squeezing. As explained above, this diagram would not commute if we used solely the twirling without flags. Thus, to seek key-distillable states, one can take the alternative route, i.e., first compute the privacy-squeezed state and then, by twirling, obtain a Bell diagonal state. Now, if $\Lambda_{XX} \circ \Lambda_{ZZ}(\sigma)$ satisfies the necessary security condition for realistic QKD on a Pauli channel from \cite{acin-2006-73}, i.e., its eigenvalues $\lambda_i$ satisfy \eqref{eq:key-cond-lambda}, then $\varrho$ is key-distillable using Proposition~\ref{prop:generic-key}.
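The action of the key-part twirling $\Lambda_{ZZ}$ used in the proof can be verified directly: conjugation by $Z\otimes Z$ flips the sign of exactly the blocks $B$, $C$, $G$, $I$ (and their adjoints), so averaging removes them while leaving the diagonal blocks and $D$, $F$ intact. A minimal numerical sketch (our own; a random density matrix stands in for $\varrho$, with total shield dimension 2 as an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
ds = 2                                   # total shield dimension (assumption)
n = 4 * ds
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = M @ M.conj().T
rho /= np.trace(rho).real                # random density matrix on key x shield

# Z (x) Z on the two key qubits acts as diag(1,-1,-1,1) on the 4 key levels.
signs = np.kron(np.diag([1.0, -1.0, -1.0, 1.0]), np.eye(ds))
rho_zz = 0.5 * (rho + signs @ rho @ signs)   # Lambda_ZZ applied to rho

def block(R, i, j):
    return R[i * ds:(i + 1) * ds, j * ds:(j + 1) * ds]

# Blocks (i,j) whose sign product is -1, i.e. B, C, G, I, must vanish;
# the blocks D = (0,3) and F = (1,2) survive.
killed = [(0, 1), (0, 2), (1, 3), (2, 3)]
leak = max(np.abs(block(rho_zz, i, j)).max() for i, j in killed)
kept = np.abs(block(rho_zz, 0, 3)).max()
print(leak)                              # -> 0.0 (blocks removed exactly)
```

The flag twirling $\Lambda_{XX}'$ can be modeled the same way on a doubled space; it is the flags that turn sums like $D+D^\dagger$ into the direct sums $D\oplus D^\dagger$ appearing in the proof.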
\section{Appendix} The parametrization of a single-qubit unitary \cite{Nielsen-Chuang}: \begin{align} \label{eq:qubit-unitary} U &= e^{i\alpha} \begin{bmatrix} e^{i\left(-\frac{\beta}{2}-\frac{\delta}{2}\right)} \cos\left(\frac{\gamma}{2}\right) & -e^{i\left(-\frac{\beta}{2} + \frac{\delta}{2}\right)} \sin\left(\frac{\gamma}{2}\right)\\[0.2ex] e^{i\left(\frac{\beta}{2}-\frac{\delta}{2}\right)} \sin\left(\frac{\gamma}{2}\right) & e^{i\left(\frac{\beta}{2}+\frac{\delta}{2}\right)} \cos\left(\frac{\gamma}{2}\right) \end{bmatrix}. \end{align} \section{Acknowledgment} This work is supported by the Polish Ministry of Science and Higher Education grant \mbox{no.}~3582/B/H03/2009/36 and by the European Commission through the Integrated Project FET/QIPC QESSENCE. This work was done in the National Quantum Information Centre of Gda\'nsk. \end{document}
\begin{document} \begin{abstract} We consider the defocusing energy-critical nonlinear Schr\"odinger equation in the exterior of a smooth compact strictly convex obstacle in three dimensions. For the initial-value problem with Dirichlet boundary condition we prove global well-posedness and scattering for all initial data in the energy space. \end{abstract} \maketitle \tableofcontents \section{Introduction}\label{S:Introduction} We consider the defocusing energy-critical NLS in the exterior domain $\Omega$ of a smooth compact strictly convex obstacle in ${\mathbb{R}}^3$ with Dirichlet boundary conditions: \begin{align}\label{nls} \begin{cases} i u_t+\Delta u=|u|^4 u, \\ u(0,x)=u_0(x),\\ u(t,x)|_{x\in \partial\Omega}=0. \end{cases} \end{align} Here $u:{\mathbb{R}}\times\Omega\to{\mathbb{C}}$ and the initial data $u_0(x)$ will only be required to belong to the energy space, which we will describe shortly. The proper interpretation of the \emph{linear} Schr\"odinger equation with such boundary conditions was an early difficulty in mathematical quantum mechanics, but is now well understood. Let us first whisk through these matters very quickly; see \cite{Kato:pert,RS1,RS2} for further information. We write $-\Delta_\Omega$ for the Dirichlet Laplacian on $\Omega$. This is the unique self-adjoint operator acting on $L^2(\Omega)$ associated with the closed quadratic form $$ Q: H^1_0(\Omega) \to [0,\infty) \qtq{via} Q(f):=\int_\Omega \overline{\nabla f(x)} \cdot \nabla f(x) \,dx. $$ The operator $-\Delta_\Omega$ is unbounded and positive semi-definite. All functions of this operator will be interpreted via the Hilbert-space functional calculus. In particular, $e^{it\Delta_\Omega}$ is unitary and provides the fundamental solution to the linear Schr\"odinger equation $i u_t+\Delta_\Omega u=0$, even when the naive notion of the boundary condition $u(t,x)|_{x\in\partial\Omega}=0$ no longer makes sense. 
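As a purely illustrative sanity check of the functional-calculus definition of the propagator (an editorial sketch, not part of the paper's argument), one can discretize the simplest model with a boundary: the Dirichlet Laplacian on an interval. The code below builds the standard finite-difference Dirichlet Laplacian, defines $e^{it\Delta}$ spectrally, and verifies that the resulting propagator is unitary, so the $L^2$ norm is conserved:

```python
import numpy as np

# Editorial toy model: 1-D finite-difference Dirichlet Laplacian on (0,1),
# with e^{it*Delta} defined through the spectral (functional-calculus)
# decomposition, as described in the text.

n = 200                      # number of interior grid points
h = 1.0 / (n + 1)
# -Delta with Dirichlet boundary conditions: tridiagonal (2,-1,-1)/h^2
L = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

lam, V = np.linalg.eigh(L)   # -Delta_Omega is positive (semi-)definite
assert lam.min() > 0

t = 0.37
# e^{it*Delta} = e^{-it*(-Delta)}, applied via the spectral decomposition
U = V @ np.diag(np.exp(-1j * t * lam)) @ V.T

# the propagator is unitary, so mass (the L^2 norm) is conserved
f = np.sin(np.pi * np.linspace(h, 1 - h, n))   # lowest Dirichlet mode
assert np.allclose(U @ U.conj().T, np.eye(n))
assert np.isclose(np.linalg.norm(U @ f), np.linalg.norm(f))
print("discrete Dirichlet propagator is unitary")
```

The same spectral recipe defines all functions of $-\Delta_\Omega$ in the discrete model, mirroring the Hilbert-space functional calculus used above.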
We now define the natural family of homogeneous Sobolev spaces associated to the operator $-\Delta_\Omega$ via the functional calculus: \begin{defn}[Sobolev spaces] For $s\geq 0$ and $1<p<\infty$, let $\dot H_D^{s,p}(\Omega)$ denote the completion of $C^\infty_c(\Omega)$ with respect to the norm $$ \| f \|_{\dot H_D^{s,p}(\Omega)} := \| (-\Delta_\Omega)^{s/2} f \|_{L^p(\Omega)}. $$ Omission of the index $p$ indicates $p=2$. \end{defn} For $p=2$ and $s=1$, this coincides exactly with the definition of $\dot H^1_0(\Omega)$. For other values of parameters, the definition of $\dot H^{s,p}_D(\Omega)$ deviates quite sharply from the classical definitions of Sobolev spaces on domains, such as $\dot H^{s,p}(\Omega)$, $\dot H^{s,p}_0(\Omega)$, and the Lions--Magenes spaces $\dot H^{s,p}_{00}(\Omega)$. Recall that all of these spaces are defined via the Laplacian in the whole space and its fractional powers. For bounded domains ${\mathcal O}\subseteq{\mathbb{R}}^d$, the relation of $\dot H^{s,p}_D({\mathcal O})$ to the classical Sobolev spaces has been thoroughly investigated. See, for instance, the review \cite{Seeley:ICM} and the references therein. The case of exterior domains is much less understood; moreover, new subtleties appear. For example, for bounded domains $\dot H^{1,p}_D({\mathcal O})$ is equivalent to the completion of $C^\infty_c({\mathcal O})$ in the space $\dot H^{1,p}({\mathbb{R}}^d)$. However, this is no longer true in the case of exterior domains; indeed, it was observed in \cite{LSZ} that this equivalence fails for $p>3$ in the exterior of the unit ball in ${\mathbb{R}}^3$, even in the case of spherically symmetric functions. As the reader will quickly appreciate, little can be said about the problem \eqref{nls} without some fairly thorough understanding of the mapping properties of functions of $-\Delta_\Omega$ and of the Sobolev spaces $\dot H_D^{s,p}(\Omega)$, in particular. 
The analogue of the Mikhlin multiplier theorem is known for this operator and it is possible to develop a Littlewood--Paley theory on this basis; see \cite{IvanPlanch:square,KVZ:HA} for further discussion. To obtain nonlinear estimates, such as product and chain rules in $\dot H_D^{s,p}(\Omega)$, we use the main result of \cite{KVZ:HA}, which we record as Theorem~\ref{T:Sob equiv} below. By proving an equivalence between $\dot H_D^{s,p}(\Omega)$ and the classical Sobolev spaces (for a restricted range of exponents), Theorem~\ref{T:Sob equiv} allows us to import such nonlinear estimates directly from the Euclidean setting. After this slight detour, let us return to the question of the proper interpretation of a solution to \eqref{nls} and the energy space. For the linear Schr\"odinger equation with Dirichlet boundary conditions, the energy space is the domain of the quadratic form associated to the Dirichlet Laplacian, namely, $\dot H^1_D(\Omega)$. For the nonlinear problem \eqref{nls}, the energy space is again $\dot H^1_D(\Omega)$ and the energy functional is given by \begin{align}\label{energy} E(u(t)):=\int_{\Omega} \tfrac12 |\nabla u(t,x)|^2 + \tfrac16 |u(t,x)|^6\, dx. \end{align} Note that the second summand here, which is known as the potential energy, does not alter the energy space by virtue of Sobolev embedding, more precisely, the embedding $\dot H^1_D(\Omega)\hookrightarrow L^6(\Omega)$. The PDE \eqref{nls} is the natural Hamiltonian flow associated with the energy functional \eqref{energy}. Correspondingly, one would expect this energy to be conserved by the flow. This is indeed the case, provided we restrict ourselves to a proper notion of solution. \begin{defn}[Solution]\label{D:solution} Let $I$ be a time interval containing the origin. 
A function $u: I \times \Omega \to {\mathbb{C}}$ is called a (strong) \emph{solution} to \eqref{nls} if it lies in the class $C_t(I'; \dot H^1_D(\Omega)) \cap L_t^{5}L_x^{30}(I'\times\Omega)$ for every compact subinterval $I'\subseteq I$ and it satisfies the Duhamel formula \begin{equation}\label{E:duhamel} u(t) = e^{it\Delta_\Omega} u_0 - i \int_0^t e^{i(t-s)\Delta_\Omega} |u(s)|^4 u(s)\, ds, \end{equation} for all $t \in I$. \end{defn} For brevity we will sometimes refer to such functions as solutions to $\text{NLS}_\Omega$. It is not difficult to verify that strong solutions conserve energy. We now have sufficient preliminaries to state the main result of this paper. \begin{thm}\label{T:main} Let $u_0\in \dot H^1_D(\Omega)$. Then there exists a unique strong solution $u$ to \eqref{nls} which is global in time and satisfies \begin{align}\label{E:T:main} \iint_{{\mathbb{R}}\times\Omega} |u(t,x)|^{10} \,dx\, dt\le C(E(u)). \end{align} Moreover, $u$ scatters in both time directions, that is, there exist asymptotic states $u_\pm\in\dot H^1_D(\Omega)$ such that \begin{align*} \|u(t) - e^{it\Delta_\Omega}u_\pm\|_{\dot H^1_D(\Omega)}\to 0 \qtq{as} t\to\pm\infty. \end{align*} \end{thm} There is much to be said in order to give a proper context for this result. In particular, we would like to discuss the defocusing NLS in ${\mathbb{R}}^3$ with general power nonlinearity: \begin{equation}\label{GNLS} i u_t+\Delta u=|u|^p u. \end{equation} A key indicator for the local behaviour of solutions to this equation is the scaling symmetry \begin{align}\label{GNLSrescale} u(t,x)\mapsto u^\lambda(t,x):=\lambda^{\frac2p} u(\lambda^2 t, \lambda x) \qtq{for any} \lambda>0, \end{align} which leaves the class of solutions to \eqref{GNLS} invariant. Notice that when $p=4$ this rescaling also preserves the energy associated with \eqref{GNLS}, namely, $$ E(u(t)) = \int_{{\mathbb{R}}^3} \tfrac12|\nabla u(t,x)|^2+\tfrac1{p+2}|u(t,x)|^{p+2}\,dx. 
$$ For this reason, the quintic NLS in three spatial dimensions is termed energy-critical. The energy is the \emph{highest regularity} conservation law that is known for NLS; this has major consequences for the local and global theories for this equation when $p\geq 4$. When $p>4$, the equation is ill-posed in the energy space; see \cite{CCT}. For $p=4$, which is the focus of this paper, well-posedness in the energy space is delicate, as will be discussed below. For $0\leq p<4$, the equation is called energy-subcritical. Indeed, the energy strongly suppresses the short-scale behaviour of solutions, as can be read off from its transformation under the rescaling \eqref{GNLSrescale}: $$ E(u^\lambda) = \lambda^{\frac4p - 1} E(u). $$ Accordingly, it is not very difficult to prove local well-posedness for initial data in $H^1({\mathbb{R}}^3)$. This follows by contraction mapping in Strichartz spaces and yields a local existence time that depends on the $H^1_x$ norm of the initial data. Using the conservation of mass (= $L^2_x$-norm) and energy, global well-posedness follows immediately by iteration. Notice that this procedure gives almost no information about the long-time behaviour of the solution. The argument just described does not extend to $p=4$. In this case, the local existence time cannot depend solely on the energy, which is a scale-invariant quantity. Nevertheless, a different form of local well-posedness was proved by Cazenave and Weissler \cite{cw0,cw1}, in which the local existence time depends upon the \emph{profile} of the initial data, rather than solely on its norm. Therefore, the iteration procedure described above cannot be used to deduce global existence. In fact, as the energy is the highest regularity conservation law that is known, global existence is non-trivial \emph{even} for Schwartz initial data. 
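For completeness, the scaling relation $E(u^\lambda)=\lambda^{\frac4p-1}E(u)$ invoked above follows from a direct change of variables $y=\lambda x$ (a routine check, included here for the reader's convenience):

```latex
\begin{align*}
\int_{{\mathbb{R}}^3} |\nabla u^\lambda(t,x)|^2\,dx
  &= \lambda^{\frac4p+2}\int_{{\mathbb{R}}^3} |(\nabla u)(\lambda^2 t,\lambda x)|^2\,dx
   = \lambda^{\frac4p-1}\int_{{\mathbb{R}}^3} |\nabla u(\lambda^2 t,y)|^2\,dy,\\
\int_{{\mathbb{R}}^3} |u^\lambda(t,x)|^{p+2}\,dx
  &= \lambda^{\frac{2(p+2)}{p}-3}\int_{{\mathbb{R}}^3} |u(\lambda^2 t,y)|^{p+2}\,dy
   = \lambda^{\frac4p-1}\int_{{\mathbb{R}}^3} |u(\lambda^2 t,y)|^{p+2}\,dy.
\end{align*}
```

Both summands in the energy carry the same factor $\lambda^{\frac4p-1}$, and the exponent $\frac4p-1$ vanishes precisely when $p=4$; this is the sense in which the quintic problem is energy-critical.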
In \cite{cw0, cw1}, the time of existence is shown to be positive via the monotone convergence theorem; on the basis of subsequent developments, we now understand that this time is determined by the spread of energy on the Fourier side. In the case of the \emph{focusing} equation, the existence time obtained in these arguments is not fictitious; there are solutions with a fixed energy that blow up arbitrarily quickly. The Cazenave--Weissler arguments also yield global well-posedness and scattering for initial data with \emph{small} energy, for both the focusing and defocusing equations. Indeed, in this regime the nonlinearity can be treated perturbatively. The first key breakthrough for the treatment of the large-data energy-critical NLS was the paper \cite{borg:scatter}, which proved global well-posedness and scattering for spherically symmetric solutions in ${\mathbb{R}}^3$ and ${\mathbb{R}}^4$. This paper introduced the induction on energy argument, which has subsequently become extremely influential in the treatment of dispersive equations at the critical regularity. We will also be using this argument, so we postpone a further description until later. The induction on energy method was further advanced by Colliander, Keel, Staffilani, Takaoka, and Tao in their proof \cite{CKSTT:gwp} of global well-posedness and scattering for the quintic NLS in ${\mathbb{R}}^3$, for all initial data in the energy space. This result, which is the direct analogue of Theorem~\ref{T:main} for NLS in the whole space, will play a key role in the analysis of this paper. Let us state it explicitly: \begin{thm}[\cite{CKSTT:gwp}]\label{T:gopher} Let $u_0\in \dot H^1({\mathbb{R}}^3)$. Then there exists a unique strong solution $u$ to the quintic NLS in ${\mathbb{R}}^3$ which is global in time and satisfies \begin{align*} \iint_{{\mathbb{R}}\times{\mathbb{R}}^3} |u(t,x)|^{10} \,dx\, dt\le C(E(u)). 
\end{align*} Moreover, $u$ scatters in both time directions, that is, there exist asymptotic states $u_\pm\in\dot H^1({\mathbb{R}}^3)$ such that \begin{align*} \|u(t) - e^{it\Delta_{{\mathbb{R}}^3}}u_\pm\|_{\dot H^1({\mathbb{R}}^3)} \to 0 \qtq{as} t\to\pm\infty. \end{align*} \end{thm} We will also be employing the induction on energy argument, but in the style pioneered by Kenig and Merle \cite{KenigMerle}. The main result of that paper was the proof of global well-posedness and scattering for the focusing energy-critical equation with data below the soliton threshold. This result was for spherically symmetric data and dimensions $3\leq d \leq 5$; currently, the analogous result for general data is only known in dimensions five and higher \cite{Berbec}. The proof of Theorem~\ref{T:gopher} was revisited within this framework in \cite{KV:gopher}, which also incorporates innovations of Dodson \cite{Dodson:3+}. Let us now turn our attention to the problem of NLS on exterior domains. This is a very popular and challenging family of problems. While we will discuss many contributions below, to get a proper sense of the effort expended in this direction one should also consult the references within the works we cite. In the Euclidean setting, the problem is invariant under space translations; this means that one may employ the full power of harmonic analytic tools. Indeed, much of the recent surge of progress in the analysis of dispersive equations is based on the incorporation of this powerful technology. Working on exterior domains breaks space translation invariance and so deprives us of many of the tools that one could rely on in the Euclidean setting. The companion paper \cite{KVZ:HA} allows us to transfer many basic harmonic analytic results from the Euclidean setting to that of exterior domains. Many more subtle results, particularly those related to the long-time behaviour of the propagator, require a completely new analysis; we will discuss examples of this below. 
Working on exterior domains also destroys the scaling symmetry. Due to the presence of a boundary, suitable scaling and space translations lead to the study of NLS in \emph{different} geometries. While equations with broken symmetries have been analyzed before, the boundary causes the geometric changes in this paper to be of a more severe nature than those treated previously. An additional new difficulty is that we must proceed without a dispersive estimate, which is currently unknown in this setting. Before we delve into the difficulties of the energy-critical problem in exterior domains, let us first discuss the energy-subcritical case. The principal difficulty in this case has been to obtain Strichartz estimates (cf. Theorem~\ref{T:Strichartz}). The first results in this direction hold equally well in interior and exterior domains. There is a strong parallel between compact manifolds and interior domains, so we will also include some works focused on that case. For both compact manifolds and bounded domains, one cannot expect estimates of the same form as for the Euclidean space. Finiteness of the volume means that there can be no long-time dispersion of wave packets; there is simply nowhere for them to disperse to. Indeed, in the case of the torus ${\mathbb{R}}^d/{\mathbb{Z}}^d$, solutions to the linear Schr\"odinger equation are periodic in time. Because of this, all Strichartz estimates must be local in time. Further, due to the existence of conjugate points for the geodesic flow, high frequency waves can reconcentrate; moreover, they can do so arbitrarily quickly. Correspondingly, Strichartz estimates in the finite domain/compact manifold setting lose derivatives relative to the Euclidean case. Nevertheless, the resulting Strichartz estimates are still strong enough to prove local (and so global) well-posedness, at least for a range of energy-subcritical nonlinearity exponents $p$. 
See the papers \cite{BFHM:Polygon,BSS:PAMS,BSS:schrodinger,borg:torus,BGT:compact} and references therein for further information. For exterior domains, the obstructions just identified no longer apply, at least in the case of non-trapping obstacles (we do not wish to discuss resonator cavities, or similar geometries). Thus one may reasonably expect all Strichartz estimates to hold, just as in the Euclidean case. There are many positive results in this direction, as will be discussed below; however, the full answer remains unknown, even for the exterior of a convex obstacle (for which there are no conjugate points). In the Euclidean case, the explicit form of the propagator guarantees the following dispersive estimate: \begin{equation}\label{E:EuclidDisp} \| e^{it\Delta_{{\mathbb{R}}^d}} f \|_{L^\infty({\mathbb{R}}^d)} \lesssim |t|^{-\frac d2} \| f\|_{L^1({\mathbb{R}}^d)}, \qtq{for all} t\neq0. \end{equation} This and the unitarity of the propagator on $L^2({\mathbb{R}}^d)$ are all that is required to obtain all known Strichartz estimates. For the basic estimates the argument is elementary; see, for example, \cite{gv:strichartz}. The endpoint cases and exotic retarded estimates are more delicate; see \cite{Foschi,KeelTao, Vilela}. It is currently unknown whether or not the dispersive estimate holds outside a convex obstacle, indeed, even for the exterior of a sphere. The only positive result in this direction is due to Li, Smith, and Zhang \cite{LSZ}, who proved the dispersive estimate for spherically symmetric functions in the exterior of a sphere in ${\mathbb{R}}^3$. Relying on this dispersive estimate and employing an argument of Bourgain \cite{borg:scatter} and Tao \cite{tao:radial}, these authors proved Theorem~\ref{T:main} for spherically symmetric initial data when $\Omega$ is the exterior of a sphere in ${\mathbb{R}}^3$. 
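It is worth recalling in one line why \eqref{E:EuclidDisp} holds in the Euclidean case (a standard computation, included here for the reader's convenience). From the explicit kernel of the free propagator,

```latex
\bigl[e^{it\Delta_{{\mathbb{R}}^d}}f\bigr](x)
  = (4\pi i t)^{-\frac d2}\int_{{\mathbb{R}}^d} e^{\frac{i|x-y|^2}{4t}} f(y)\,dy,
```

taking absolute values inside the integral immediately gives $\|e^{it\Delta_{{\mathbb{R}}^d}}f\|_{L^\infty({\mathbb{R}}^d)} \le (4\pi|t|)^{-\frac d2}\|f\|_{L^1({\mathbb{R}}^d)}$. It is precisely such an explicit kernel that is unavailable in exterior domains.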
In due course, we will explain how the lack of a dispersive estimate outside convex obstacles is one of the major hurdles we needed to overcome in order to prove Theorem~\ref{T:main}. Note that the dispersive estimate will not hold outside a generic non-trapping obstacle, since concave portions of the boundary can act as mirrors and refocus wave packets. Even though the question of dispersive estimates outside convex obstacles is open, global in time Strichartz estimates are known to hold. Indeed, in \cite{Ivanovici:Strichartz}, Ivanovici proves all classical Strichartz estimates except the endpoint cases. Her result will be crucial in what follows and is reproduced below as Theorem~\ref{T:Strichartz}. We also draw the reader's attention to the related papers \cite{Anton08,BSS:schrodinger,BGT04,HassellTaoWunsch,PlanchVega,RobZuily,StaffTataru,Tataru:Strichartz}, as well as the references therein. The key input for the proof of Strichartz estimates in exterior domains is the local smoothing estimate; one variant is given as Lemma~\ref{L:local smoothing} below. In the Euclidean setting, this result can be proved via harmonic analysis methods (cf. \cite{ConsSaut,Sjolin87,Vega88}). For the exterior of a convex obstacle, the usual approach is the method of positive commutators, which connects it to both Kato smoothing (cf. \cite[\S XIII.7]{RS4}) and the Morawetz identity; this is the argument used to prove Lemma~\ref{L:local smoothing} here. Local smoothing is also known to hold in the exterior of a non-trapping obstacle; see \cite{BGT04}. The local smoothing estimate guarantees that wave packets only spend a bounded amount of time next to the obstacle. This fact together with the fact that Strichartz estimates hold in the whole space can be used to reduce the problem of proving Strichartz inequalities to the local behaviour near the obstacle, locally in time. 
Using this argument, Strichartz estimates have been proved for merely non-trapping obstacles; for further discussion see \cite{BSS:schrodinger, BGT04, IvanPlanch:IHP, PlanchVega, StaffTataru}. While both local smoothing and Strichartz estimates guarantee that wave packets can only concentrate for a bounded amount of time, they do not guarantee that this period of time is one contiguous interval. In the context of a large-data nonlinear problem, this is a severe handicap when compared to the dispersive estimate: Once a wave packet begins to disperse, the nonlinear effects are reduced and the evolution is dominated by the linear part of the equation. If this evolution causes the wave packet to refocus, then nonlinear effects will become strong again. These nonlinear effects are very hard to control and one must fear the possibility that when the wave packet finally breaks up again we find ourselves back at the beginning of the scenario we have just been describing. Such an infinite loop is inconsistent with scattering and global spacetime bounds. In Section~\ref{S:Linear flow convergence} we will prove a new kind of convergence result that plays the role of a dispersive estimate in precluding such periodic behaviour. The next order of business is to describe what direct information the existing Strichartz estimates give us toward the proof of Theorem~\ref{T:main}. This is how we shall begin the outline of the proof. \subsection{Outline of the proof} For small initial data, the nonlinearity can be treated perturbatively, provided one has the right linear estimates, of course! In this way, both \cite{Ivanovici:Strichartz} and \cite{BSS:schrodinger} use the Strichartz inequalities they prove to obtain small energy global well-posedness and scattering for $\text{NLS}_\Omega$. Actually, there is one additional difficulty that we have glossed over here, namely, estimating the derivative of the nonlinearity. 
Notice that in order to commute with the free propagator, the derivative in question must be the square root of the Dirichlet Laplacian (rather than simply the gradient). In \cite{BSS:schrodinger} an $L^4_t L^\infty_x$ Strichartz inequality is proved, which allows the authors to use the equivalence of $\dot H^1_0$ and $\dot H^1_D$. In \cite{IvanPlanch:square} a Littlewood--Paley theory is developed, which allows the use of Besov space arguments (cf. \cite{IvanPlanch:IHP}). Indeed, the paper \cite{IvanPlanch:IHP} of Ivanovici and Planchon goes further, proving small data global well-posedness in the exterior of non-trapping obstacles. The main result of \cite{KVZ:HA}, which is repeated as Theorem~\ref{T:Sob equiv} below, allows us to transfer the existing local well-posedness arguments directly from the Euclidean case. Actually, a little care is required to ensure all exponents used lie within the regime where norms are equivalent; nevertheless, this can be done as documented in \cite{KVZ:HA}. Indeed, this paper shows that our problem enjoys a strong form of continuous dependence, known under the rubric `stability theory'; see Theorem~\ref{T:stability}. Colloquially, this says that every function that almost solves \eqref{nls} and has bounded spacetime norm lies very close to an actual solution to \eqref{nls}. This is an essential ingredient in any induction on energy argument. All the results just discussed are perturbative, in particular, they are blind to the sign of the nonlinearity. As blowup can occur for the focusing problem, any large-data global theory must incorporate some deeply nonlinear ingredient which captures the dynamical effects of the sign of the nonlinearity. At present, the only candidates for this role are the identities of Morawetz/virial type and their multi-particle (or interaction) counterparts. 
Historically, the Morawetz identity was first introduced for the linear wave equation and soon found application in proving energy decay in exterior domain problems and in the study of the nonlinear wave equation; see \cite{Morawetz75}. As noticed first by Struwe, this type of tool also provides the key non-concentration result to prove global well-posedness for the energy-critical wave equation in Euclidean spaces. See the book \cite{ShatahStruwe} for further discussion and complete references. More recently, this result (plus scattering) has been shown to hold outside convex obstacles \cite{SS10} and (without scattering) in interior domains \cite{BLP08}. In both instances, the Morawetz identity provides the crucial non-concentration result. There is a significant difference between the Morawetz identities for the nonlinear wave equation and the nonlinear Schr\"odinger equation, which explains why the solution of the well-posedness problem for the energy-critical NLS did not follow closely on the heels of that for the wave equation: \emph{scaling}. In the wave equation case, the Morawetz identity has energy-critical scaling. This ensures that the right-hand side of the inequality can be controlled in terms of the energy alone; it also underscores why it can be used to guarantee non-concentration of solutions. The basic Morawetz inequality for solutions $u$ to the defocusing quintic NLS in ${\mathbb{R}}^3$ (see \cite{LinStrauss}) reads as follows: $$ \frac{d\ }{dt} \int_{{\mathbb{R}}^3} \frac{x}{|x|} \cdot 2 \Im\bigl\{ \bar u(t,x) \nabla u(t,x)\bigr\} \,dx \geq \int _{{\mathbb{R}}^3} \frac{8|u|^6}{3|x|}\,dx. $$ The utility of this inequality is best seen by integrating both sides over some time interval~$I$; together with Cauchy--Schwarz, this leads directly to \begin{equation}\label{GNLSmor} \int_I \int _{{\mathbb{R}}^3} \frac{|u|^6}{|x|}\,dx \,dt \lesssim \| u \|_{L^\infty_t L^2_x (I\times{\mathbb{R}}^3)} \| \nabla u \|_{L^\infty_t L^2_x (I\times{\mathbb{R}}^3)}. 
\end{equation} Obviously the right-hand side cannot be controlled solely by the energy; indeed, the inequality has the scaling of $\dot H^{1/2}$. Nevertheless, the right-hand side can be controlled by the conservation of both mass and energy; this was one of the key ingredients in the proof of scattering for the inter-critical problem (i.e. $\frac43<p<4$) in \cite{GinibreVelo}. However, at both the mass-critical endpoint $p=\frac43$ and energy-critical endpoint $p=4$, solutions can undergo dramatic changes of scale without causing the mass or energy to diverge. In particular, by simply rescaling an energy-critical solution as in \eqref{GNLSrescale} one may make the mass as small as one wishes. Our comments so far have concentrated on RHS\eqref{GNLSmor}, but these concerns apply equally well to LHS\eqref{GNLSmor}. Ultimately, the Morawetz identity together with mass and energy conservation are each consistent with a solution that blows up by focusing \emph{part} of its energy at a point, even at the origin. A scenario where \emph{all} of the energy focuses at a single point would not be consistent with the conservation of mass. The key innovation of Bourgain \cite{borg:scatter} was the induction on energy procedure, which allowed him to reduce the analysis of general solutions to $\text{NLS}_{{\mathbb{R}}^3}$ to those which have a clear intrinsic characteristic length scale (at least for the middle third of their evolution). This length scale is time dependent. In this paper we write $N(t)$ for the reciprocal of this length, which represents the characteristic frequency scale of the solution. The fact that the solution lives at a single scale precludes the scenario described in the previous paragraph. By using suitably truncated versions of the Morawetz identity (cf. 
Lemma~\ref{L:morawetz} below) and the mass conservation law, Bourgain succeeded in proving not only global well-posedness for the defocusing energy-critical NLS in ${\mathbb{R}}^3$, but also global $L^{10}_{t,x}$ spacetime bounds for the solution. As noted earlier, the paper \cite{borg:scatter} treated the case of spherically symmetric solutions only. The general case was treated in \cite{CKSTT:gwp}, which also dramatically advanced the induction on energy method, including reducing treatment of the problem to the study of solutions that not only live at a single scale $1/N(t)$, but are even well localized in space around a single point $x(t)$. The dispersive estimate is needed to prove this strong form of localization. Another key ingredient in \cite{CKSTT:gwp} was the newly introduced interaction Morawetz identity; see \cite{CKSTT:interact}. As documented in \cite{CKSTT:gwp}, there are major hurdles to be overcome in frequency localizing this identity in the three dimensional setting. In particular, the double Duhamel trick is needed to handle one of the error terms. This relies \emph{crucially} on the dispersive estimate; thus, we are unable to employ the interaction Morawetz identity as a tool with which to tackle our Theorem~\ref{T:main}. In four or more spatial dimensions, strong spatial localization is not needed to employ the interaction Morawetz identity. This was first observed in \cite{RV, thesis:art}. Building upon this, Dodson \cite{Dodson:obstacle} has shown how the interaction Morawetz identity can be applied to the energy-critical problem in the exterior of a convex obstacle in four dimensions. He relies solely on frequency localization; one of the key tools that makes this possible is the long-time Strichartz estimates developed by him in the mass-critical Euclidean setting \cite{Dodson:3+} and adapted to the energy-critical setting in \cite{Visan:IMRN}. 
For the three dimensional problem, these innovations do not suffice to obviate the need for a dispersive estimate, even in the Euclidean setting; see \cite{KV:gopher}. The variant of the induction on energy technique that we will use in this paper was introduced by Kenig and Merle in \cite{KenigMerle}. This new approach has significantly streamlined the induction on energy paradigm; in particular, it has made it modular by completely separating the induction on energy portion from the rest of the argument. It has also sparked a rapid and fruitful development of the method, which has now been applied successfully to numerous diverse PDE problems, including wave maps and the Navier--Stokes system. Before we can discuss the new difficulties associated with implementing the induction on energy method to prove Theorem~\ref{T:main}, we must first explain what it is. We will do so rather quickly; readers not already familiar with this technique may benefit from the introduction to the subject given in the lecture notes \cite{ClayNotes}. The argument is by contradiction. Suppose Theorem~\ref{T:main} were to fail, which is to say that there is no function $C:[0,\infty)\to[0,\infty)$ so that \eqref{E:T:main} holds. Then there must be some sequence of solutions $u_n:I_n\times\Omega\to{\mathbb{C}}$ so that $E(u_n)$ is bounded, but $S_{I_n}(u_n)$ diverges. Here we introduce the notation \begin{align*} S_I(u):=\iint_{I\times\Omega}|u(t,x)|^{10}\, dx\, dt, \end{align*} which is known as the \emph{scattering size} of $u$ on the time interval $I$. By passing to a subsequence, we may assume that $E(u_n)$ converges. Moreover, without loss of generality, we may assume that the limit $E_c$ is the smallest number that can arise as a limit of $E(u_n)$ for solutions with $S_{I_n}(u_n)$ diverging. This number is known as the \emph{critical energy}. 
It has the following equivalent interpretation: If \begin{align*} L(E):=\sup\{S_I(u) : \, u:I\times\Omega\to {\mathbb{C}}\mbox{ such that } E(u)\le E\}, \end{align*} where the supremum is taken over all solutions $u$ to \eqref{nls} defined on some spacetime slab $I\times\Omega$ and having energy $E(u)\le E$, then \begin{align}\label{E:induct hyp} L(E)<\infty \qtq{for} E<E_c \quad \qtq{and} \quad L(E)=\infty \qtq{for} E\ge E_c. \end{align} (The fact that we can write $E\geq E_c$ here rather than merely $E>E_c$ relies on the stability result Theorem~\ref{T:stability}.) This plays the role of the inductive hypothesis; it says that Theorem~\ref{T:main} is true for energies less than $E_c$. The argument is called induction on energy precisely because this is then used (via an extensive argument) to show that $L(E_c)$ is finite and so obtain the sought-after contradiction. Note that by the small-data theory mentioned earlier, we know that $E_c>0$. Indeed, in the small-data regime, one obtains very good quantitative bounds on $S_{\mathbb{R}}(u)$. As one might expect given the perturbative nature of the argument, the bounds are comparable to those for the linear flow; see \eqref{SbyE}. One would like to pass to the limit of the sequence of solutions $u_n$ to exhibit a solution $u_\infty$ that has energy $E_c$ and infinite scattering size. Notice that by virtue of \eqref{E:induct hyp}, such a function would be a \emph{minimal energy blowup solution}. This is a point of departure of the Kenig--Merle approach from \cite{borg:scatter,CKSTT:gwp}, which worked with merely almost minimal almost blowup solutions, in essence, the sequence $u_n$. Proving the existence of such a minimal energy blowup solution will be the key difficulty in this paper; even in the Euclidean setting it is highly non-trivial. In the Euclidean setting, existence was first proved by Keraani \cite{keraani-l2} for the (particularly difficult) mass-critical NLS; see also \cite{BegoutVargas,CarlesKeraani}. 
Existence of a minimal blowup solution for the Euclidean energy-critical problem was proved by Kenig--Merle \cite{KenigMerle} (see also \cite{BahouriGerard,keraani-h1} for some ingredients), who were also the first to realize the value of this result for well-posedness arguments. Let us first describe how the construction of minimal blowup solutions proceeds in the Euclidean setting. We will then discuss the difficulties encountered on exterior domains and how we overcome these. As $\text{NLS}_{{\mathbb{R}}^3}$ has the \emph{non-compact} symmetries of rescaling and spacetime translations, we cannot expect any subsequence of the sequence $u_n$ of almost minimal almost blowup solutions to converge. This is a well-known dilemma in the calculus of variations and led to the development of \emph{concentration compactness}. In its original form, concentration compactness presents us with three possibilities: a subsequence converges after applying symmetry operations (the desired \emph{compactness} outcome); a subsequence splits into one or more bubbles (this is called \emph{dichotomy}); or the sequence is completely devoid of concentration (this is called \emph{vanishing}). The vanishing scenario is easily precluded. If the solutions $u_n$ concentrate at no point in spacetime (at any scale), then we expect the nonlinear effects to be weak and so expect spacetime bounds to follow from perturbation theory and the Strichartz inequality (which provides spacetime bounds for linear solutions). As uniform spacetime bounds for the solutions $u_n$ would contradict how these were chosen in the first place, this rules out the vanishing scenario. Actually, this discussion is slightly too naive; one needs to show that failure to concentrate actually guarantees that the linear solution has small spacetime bounds, which then allows us to treat the nonlinearity perturbatively. The tool that allows us to complete the argument just described is an inverse Strichartz inequality (cf.
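For concreteness, let us record how these non-compact symmetries act. For the quintic equation in three dimensions, a solution $u$ of $\text{NLS}_{{\mathbb{R}}^3}$ gives rise to the family of solutions
\begin{align*}
u^{\lambda}(t,x):=\lambda^{\frac12}\,u(\lambda^2 t,\lambda x) \qtq{and} u^{(t_0,x_0)}(t,x):=u(t+t_0,x+x_0),
\end{align*}
and a direct change of variables shows that both operations leave the energy and the scattering size unchanged. It is precisely this invariance that precludes any naive compactness for the sequence $u_n$.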
Proposition~\ref{P:inverse Strichartz}), which says that linear flows can only have non-trivial spacetime norm if they contain at least one bubble of concentration. Applying this result inductively to the functions $e^{it\Delta_{{\mathbb{R}}^3}}u_n(0)$, one finds all the bubbles of concentration in a subsequence of these linear solutions together with a remainder term. This is expressed in the form of a \emph{linear profile decomposition} (cf. Theorem~\ref{T:LPD}). Two regions of concentration are determined to be separate bubbles if their relative characteristic length scales diverge as $n\to\infty$, or if their spatial/temporal separation diverges relative to their characteristic scale; see~\eqref{E:LP5}. If there is only one bubble and no remainder term, then (after a little untangling) we find ourselves in the desired compactness regime, namely, that after applying symmetry operations to $u_n(0)$ we obtain a subsequence that converges strongly in $\dot H^1({\mathbb{R}}^3)$. Moreover this limit gives initial data for the needed minimal blowup solution (cf. Theorem~\ref{T:mmbs}). But what if we find ourselves in the unwanted dichotomy scenario where there is more than one bubble? This is where the inductive hypothesis comes to the rescue, as we will now explain. To each profile in the linear profile decomposition, we associate a nonlinear profile, which is a solution to $\text{NLS}_{{\mathbb{R}}^3}$. For bubbles of concentration that overlap time $t=0$, these are simply the nonlinear solutions with initial data given by the bubble. For bubbles of concentration that are temporally well separated from $t=0$, they are nonlinear solutions that have matching long-time behaviour (i.e. matching scattering state). If there is more than one bubble (or a single bubble but non-zero remainder), all bubbles have energy strictly less than $E_c$. (Note that energies are additive due to the strong separation of distinct profiles.) 
But then by the inductive hypothesis \eqref{E:induct hyp}, each one of the nonlinear profiles will be global in time and obey spacetime bounds. Adding the nonlinear profiles together (and incorporating the linear flow of the remainder term) we obtain an approximate solution to $\text{NLS}_{{\mathbb{R}}^3}$ with finite global spacetime bounds. The fact that the sum of the nonlinear profiles is an approximate solution relies on the separation property of the profiles (this is, after all, a \emph{nonlinear} problem). Thus by perturbation theory, for $n$ sufficiently large there is a true solution to $\text{NLS}_{{\mathbb{R}}^3}$ with initial data $u_n(0)$ and bounded global spacetime norms. This contradicts the criterion by which $u_n$ were chosen in the first place and so precludes the dichotomy scenario. This completes the discussion of how one proves the existence of minimal energy blowup solutions for the energy-critical problem in the Euclidean setting. The argument gives slightly more, something we call (by analogy with the calculus of variations) a \emph{Palais--Smale condition} (cf. Proposition~\ref{P:PS}). This says the following: Given an optimizing sequence of solutions for the scattering size with the energy converging to $E_c$, this sequence has a convergent subsequence (modulo the symmetries of the problem). Note that by the definition of $E_c$, such optimizing sequences have diverging scattering size. Recall that one of the key discoveries of \cite{borg:scatter,CKSTT:gwp} was that it was only necessary to consider solutions that have a well-defined (time-dependent) location and characteristic length scale. Mere existence of minimal blowup solutions is not sufficient; they need to have this additional property in order to overcome the intrinsic limitations of non-scale-invariant conservation/monotonicity laws. Fortunately, this additional property follows neatly from the Palais--Smale condition. 
If $u(t)$ is a minimal energy blowup solution and $t_n$ is a sequence of times, then $u_n(t)=u(t+t_n)$ is a sequence to which we may apply the Palais--Smale result. Thus, applying symmetry operations to $u(t_n)$ one may find a subsequence that is convergent in $\dot H^1({\mathbb{R}}^3)$. This is precisely the statement that the solution $u$ is \emph{almost periodic}, which is to say, the orbit is cocompact modulo spatial translations and rescaling. This compactness guarantees that the orbit is tight in both the physical and Fourier variables (uniformly in time). Let us now turn to the problem on exterior domains. Adapting the concentration compactness argument to this setting will cause us a great deal of trouble. Naturally, NLS in the exterior domain $\Omega$ does not enjoy scaling or translation invariance. Nevertheless, both the linear and nonlinear profile decompositions must acknowledge the possibility of solutions living at any scale and in any possible location. It is important to realize that in certain limiting cases, these profiles obey \emph{different} equations. Here are the three main examples: \begin{CI} \item Solutions with a characteristic scale much larger than that of the obstacle evolve as if in ${\mathbb{R}}^3$. \item Solutions very far from the obstacle (relative to their own characteristic scale) also evolve as if in ${\mathbb{R}}^3$. \item Very narrowly concentrated solutions lying very close to the obstacle evolve as if in a halfspace. \end{CI} This picture is both an essential idea that we will develop in what follows and, at the same time, extremely naive. In each of the three scenarios just described, there are serious omissions from this superficial picture, as we will discuss below.
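Schematically, if a profile has characteristic length scale $\lambda_n$ and is centred at a point $x_n\in\Omega$ (our illustrative parametrization; the precise formulation appears in the linear profile decomposition), the three regimes just listed correspond to
\begin{align*}
\lambda_n\to\infty &\implies \text{evolution as in } {\mathbb{R}}^3,\\
\dist(x_n,\Omega^c)/\lambda_n\to\infty &\implies \text{evolution as in } {\mathbb{R}}^3,\\
\lambda_n\to 0 \qtq{with} \dist(x_n,\Omega^c)/\lambda_n \text{ bounded} &\implies \text{evolution as in a halfspace.}
\end{align*}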
Nevertheless, the Palais--Smale condition we obtain in this paper (see Proposition~\ref{P:PS}) is so strong that it proves the existence of minimal counterexamples in the following form: \begin{thm}[Minimal counterexamples]\label{T:mincrim} \hskip 0em plus 1em Suppose Theorem \ref{T:main} failed. Then there exist a critical energy\/ $0<E_c<\infty$ and a global solution $u$ to \eqref{nls} with $E(u)=E_c$, infinite scattering size both in the future and in the past $$ S_{\ge 0}(u)=S_{\le 0}(u)=\infty, $$ and whose orbit $\{ u(t):\, t\in {\mathbb{R}}\}$ is precompact in $\dot H_D^1(\Omega)$. \end{thm} As evidence of the strength of this theorem, we note that it allows us to complete the proof of Theorem~\ref{T:main} very quickly indeed (see the last half-page of this paper). Induction on energy has been adapted to scenarios with broken symmetries before and we would like to give a brief discussion of some of these works. Our efforts here diverge from these works in the difficulty of connecting the limiting cases to the original model. The lack of a dispersive estimate is a particular facet of this. In \cite{KVZ:quadpot}, the authors proved global well-posedness and scattering for the energy-critical NLS with confining or repelling quadratic potentials. The argument was modelled on that of Bourgain \cite{borg:scatter} and Tao \cite{tao:radial}, and correspondingly considered only spherically symmetric data. Radiality helps by taming the lack of translation invariance; the key issue was to handle the broken scaling symmetry. This problem has dispersive estimates, albeit only for short times in the confining (i.e. harmonic oscillator) case. In \cite{LSZ}, the Bourgain--Tao style of argument is adapted to spherically symmetric data in the exterior of a sphere in ${\mathbb{R}}^3$. A key part of their argument is to prove that a dispersive estimate holds in this setting.
The paper \cite{KKSV:gKdV} considers the mass-critical generalized Korteweg--de Vries equation, using the concentration compactness variant of induction on energy. This paper proves a minimal counterexample theorem in the style of Theorem~\ref{T:mincrim}. Dispersive estimates hold; the main obstruction was to overcome the broken Galilei invariance. In the limit of highly oscillatory solutions (at a fixed scale) the gKdV equation is shown to resemble a \emph{different} equation, namely, the mass-critical NLS. This means that both the linear and nonlinear profile decompositions contain profiles that are embeddings of solutions to the linear/nonlinear Schr\"odinger equations, carefully embedded to mimic solutions to Airy/gKdV. An analogous scenario arises in the treatment of the cubic Klein--Gordon equation in two spatial dimensions, \cite{KSV:2DKG}. Dispersive estimates hold for this problem. Here the scaling symmetry is broken and strongly non-relativistic profiles evolve according to the mass-critical Schr\"odinger equation, which also breaks the Lorentz symmetry. Linear and nonlinear profile decompositions that incorporate Lorentz boosts were one of the novelties of this work. In the last two examples, the broken symmetries have led to dramatic changes in the equation, though the geometry has remained the same (all of Euclidean space). Next, we describe some instances where the geometry changes, but the equation is essentially the same. The paper \cite{IPS:H3} treats the energy-critical NLS on three-dimensional hyperbolic space. Theorem~\ref{T:gopher} is used to treat highly concentrated profiles, which are embedded in hyperbolic space using the strongly Euclidean structure at small scales. Some helpful ingredients in hyperbolic space are the mass gap for the Laplacian and its very strong dispersive and Morawetz estimates. More recently, dramatic progress has been made on the energy-critical problem on the three dimensional flat torus.
Global well-posedness for small data was proved in \cite{HerrTataruTz:torus} and the large-data problem was treated in \cite{IonPaus}. (See also \cite{HaniPaus,Herr:Zoll,HerrTataruTz:mixed,IonPaus1} for results in related geometries.) While the manifold in question may be perfectly flat, the presence of closed geodesics and corresponding paucity of Strichartz estimates made this a very challenging problem. The large data problem was treated via induction on energy, using the result for Euclidean space (i.e. Theorem~\ref{T:gopher}) as a black box to control highly concentrated profiles. The local-in-time frequency localized dispersive estimate proved by Bourgain \cite{borg:torus} plays a key role in ensuring the decoupling of profiles. While the methods employed in the many papers we have discussed so far inform our work here, they do not suffice for the treatment of Theorem~\ref{T:main}. Indeed, even the form of perturbation theory needed here spawned the separate paper \cite{KVZ:HA}. Moreover, in this paper we encounter not only changes in geometry, but also changes in the equation; after all, the Dirichlet Laplacian on exterior domains is very different from the Laplacian on ${\mathbb{R}}^3$. We have emphasized the dispersive estimate because it has been an essential ingredient in the concentration compactness variant of induction on energy; it is the tool that guarantees that profiles contain a single bubble of concentration and so underwrites the decoupling of different profiles. Up to now, no one has succeeded in doing this without the aid of a dispersive-type estimate. Moreover, as emphasized earlier, the dispersive estimate plays a seemingly irreplaceable role in the treatment of the energy-critical problem in ${\mathbb{R}}^3$. Thus, we are confronted with the problem of finding and then proving a suitable substitute for the dispersive estimate.
One of the key messages of this paper is the manner in which this issue is handled, in particular, that the weakened form of dispersive estimate we prove, namely Theorem~\ref{T:LF}, is strong enough to complete the construction of minimal blowup solutions. The result we prove is too strong to hold for general non-trapping obstacles; convexity plays an essential role here. Section~\ref{S:Linear flow convergence} is devoted entirely to the proof of Theorem~\ref{T:LF}. Three different methods are used depending on the exact geometric setting, but in all cases, the key result is an \emph{infinite-time} parametrix that captures the action of $e^{it\Delta_\Omega}$ up to a \emph{vanishing fraction} of the mass/energy. Both this level of accuracy and the fact that it holds for all time are essential features for the rest of the argument. The most difficult regime in the proof of Theorem~\ref{T:LF} is when the initial data is highly concentrated, say at scale ${\varepsilon}$, at a distance $\delta$ from the obstacle with ${\varepsilon}\lesssim \delta\lesssim 1$. To treat this regime, we subdivide into two cases: ${\varepsilon}\lesssim \delta\lesssim{\varepsilon}^{\frac67}$ and ${\varepsilon}^{\frac67}\lesssim\delta\lesssim 1$, which are called Cases~(iv) and~(v), respectively. In Case~(iv), the initial data sees the obstacle as a (possibly retreating) halfspace. To handle this case, we first approximate the initial data by a linear combination of Gaussian wave packets (with characteristic scale ${\varepsilon}$). Next we use the halfspace evolution of these wave packets (for which there is an exact formula) to approximate their linear evolution in $\Omega$. As the halfspace evolution does not match the Dirichlet boundary condition, we have to introduce a correction term $w$. Moreover, we have to choose the parameters in the definition of $w$ carefully, so that the resulting error terms can be controlled for the full range of $\delta$.
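To fix ideas, we record the standard exact formula for the free Schr\"odinger evolution of a centred Gaussian at scale ${\varepsilon}$; the wave packets used in the construction are modulated and translated versions of this, so the normalization here is purely illustrative:
\begin{align*}
\bigl[e^{it\Delta_{{\mathbb{R}}^3}} e^{-|\cdot|^2/(2{\varepsilon}^2)}\bigr](x)
=\Bigl(1+\tfrac{2it}{{\varepsilon}^2}\Bigr)^{-\frac32}
\exp\Bigl(-\frac{|x|^2}{2{\varepsilon}^2\bigl(1+\frac{2it}{{\varepsilon}^2}\bigr)}\Bigr).
\end{align*}
In particular, such a packet retains its scale for $|t|\lesssim{\varepsilon}^2$ and disperses thereafter; the halfspace evolution with Dirichlet boundary conditions is then obtained from this formula by the method of images.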
In Case~(v), the obstacle is far from the initial data relative to the data's own scale, but close relative to the scale of the obstacle. We decompose the initial data into a linear combination of Gaussian wave packets, whose characteristic scale $\sigma$ is chosen carefully to allow reflection off the obstacle to be treated by means of geometric optics. In particular, $\sigma$ is chosen so that the wave packets do not disperse prior to their collision with the obstacle, but do disperse shortly thereafter. We divide these wave packets into three categories: those that miss the obstacle, those that are near-grazing, and those that collide non-tangentially with the obstacle. Wave packets in the last category are the most difficult to treat. For these, we build a Gaussian parametrix for the reflected wave. To achieve the needed degree of accuracy, this parametrix must be very precisely constructed; in particular, it must be matched to the principal curvatures of the obstacle at the collision point. This parametrix does not match the Dirichlet boundary condition perfectly, and it is essential to wring the last drops of cancellation from this construction in order to ensure that it is not overwhelmed by the resulting errors. Further, the term $w$ that we introduce to match the boundary condition is carefully chosen so that it is non-resonant; note the additional phase factor in the definition of $w^{(3)}$. This is needed so that the error terms are manageable. An example of how the results of Section~\ref{S:Linear flow convergence} play a role can be seen in the case of profiles that are highly concentrated at a bounded distance from the obstacle. These live far from the obstacle relative to their own scale, and so we may attempt to approximate them by solutions to $\text{NLS}_{{\mathbb{R}}^3}$ whose existence is guaranteed by Theorem~\ref{T:gopher}. Such solutions scatter and so eventually dissolve into outward propagating radiation.
However, the obstacle blocks a positive fraction of directions and so a non-trivial fraction of the energy of the wave packet will reflect off the obstacle. Theorem~\ref{T:LF3} guarantees that this reflected energy will not refocus. Only with this additional input can we truly say that such profiles behave as if in Euclidean space. Now consider the case when the profile is much larger than the obstacle. In this case the equivalence of the linear flows follows from Theorem~\ref{T:LF1}. However, the argument does not carry over to the nonlinear case. Embedding the nonlinear profiles requires a special argument; one of the error terms is simply not small. Nevertheless, we are able to control it by proving that it is non-resonant; see Step~2 in the proof of Theorem~\ref{T:embed2}. The third limiting scenario identified above was when the profile concentrates very close to the obstacle. In this regime the limiting geometry is the halfspace ${\mathbb{H}}$. Note that spacetime bounds for $\text{NLS}_{\mathbb{H}}$ follow from Theorem~\ref{T:gopher} by considering solutions that are odd under reflection in $\partial{\mathbb{H}}$. The linear flow is treated in Theorem~\ref{T:LF2} and the embedding of nonlinear profiles is the subject of Theorem~\ref{T:embed4}. Note that in this regime, the spacetime region where the evolution is highly nonlinear coincides with the region of collision with the boundary. In the far-field regime, the finite size of the obstacle affects the radiation pattern; thus it is essential to patch the halfspace linear evolution together with that in $\Omega$. Our discussion so far has emphasized how to connect the free propagator in the limiting geometries with that in $\Omega$. The complexity of energy-critical arguments is such that we also need to understand the relations between other spectral multipliers, such as Littlewood--Paley projectors and fractional powers. This is the subject of Section~\ref{S:Domain Convergence}. 
After much toil, we show that nonlinear profiles arising from all limiting geometries obey spacetime bounds, which plays a role analogous to that of the induction on energy hypothesis. Thus, when the nonlinear profile decomposition is applied to a Palais--Smale sequence, we can show that there can be only one profile and it cannot belong to either of the limiting geometries ${\mathbb{R}}^3$ or ${\mathbb{H}}$; it must live at approximately unit scale and at approximately unit distance from the obstacle. This is how we obtain Theorem~\ref{T:mincrim}. The proof of this theorem occupies most of Section~\ref{S:Proof}. The last part of that section deduces Theorem~\ref{T:main} from this result. To close this introduction, let us quickly recount the contents of this paper by order of presentation. Section~\ref{S:Preliminaries} mostly reviews existing material that is needed for the analysis: equivalence of Sobolev spaces and the product rule for the Dirichlet Laplacian; Littlewood--Paley theory and Bernstein inequalities; Strichartz estimates; local and stability theories for $\text{NLS}_\Omega$; persistence of regularity for solutions of NLS that obey spacetime bounds (this is important for the embedding of profiles); the Bourgain-style Morawetz identity; and local smoothing. Section~\ref{S:Domain Convergence} proves results related to the convergence of functions of the Dirichlet Laplacian as the underlying domains converge. Convergence of Green's functions at negative energies is proved via direct analysis making use of the maximum principle. This is extended to complex energies via analytic continuation and the Phragm\'en--Lindel\"of principle. General functions of the operator are represented in terms of the resolvent via the Helffer--Sj\"ostrand formula. Section~\ref{S:Linear flow convergence} analyses the behaviour of the linear propagator under domain convergence. In all cases, high-accuracy infinite-time parametrices are constructed.
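For the reader's convenience, we recall the Helffer--Sj\"ostrand formula in one standard normalization: if $\tilde f$ is an almost analytic extension of $f$, that is, $\tilde f|_{{\mathbb{R}}}=f$ and $\bar\partial\tilde f(z)=O(|\Im z|^N)$ as $\Im z\to 0$, then
\begin{align*}
f(-\Delta_\Omega)=\frac1\pi\int_{{\mathbb{C}}} \bar\partial \tilde f(z)\,(-\Delta_\Omega-z)^{-1}\,dx\,dy, \qquad z=x+iy.
\end{align*}
In this way, convergence of resolvents at complex energies yields convergence of general functions of the Dirichlet Laplacian.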
When the geometry guarantees that a vanishing fraction of the wave actually hits the obstacle, a simple truncation argument is used (Theorem~\ref{T:LF1}). For disturbances close to the obstacle, we base our approximation on the exact solution of the halfspace linear problem with Gaussian initial data; see Theorem~\ref{T:LF2}. For highly concentrated wave packets a bounded distance from the obstacle, we build a parametrix based on a Gaussian beam technique; see Theorem~\ref{T:LF3}. The fact that Gaussian beams are exact linear solutions in Euclidean space prevents the accumulation of errors at large times. Section~\ref{S:LPD} first proves refined and inverse Strichartz inequalities (Lemma~\ref{lm:refs} and Proposition~\ref{P:inverse Strichartz}). These show that linear evolutions with non-trivial spacetime norms must contain a bubble of concentration. This is then used to obtain the linear profile decomposition, Theorem~\ref{T:LPD}. The middle part of this section contains additional results related to the convergence of domains, which combine the tools from Sections~\ref{S:Domain Convergence} and~\ref{S:Linear flow convergence}. Section~\ref{S:Nonlinear Embedding} shows how nonlinear solutions in the limiting geometries can be embedded in $\Omega$. As nonlinear solutions in the limiting geometries admit global spacetime bounds (this is how Theorem~\ref{T:gopher} enters our analysis), we deduce that solutions to $\text{NLS}_\Omega$ whose characteristic length scale and location conform closely to one of these limiting cases inherit these spacetime bounds. These solutions to $\text{NLS}_\Omega$ appear again as nonlinear profiles in Section~\ref{S:Proof}. Section~\ref{S:Proof} contains the proofs of the Palais--Smale condition (Proposition~\ref{P:PS}), as well as the existence and almost periodicity of minimal blowup solutions (Theorem~\ref{T:mmbs}).
Because of all the ground work laid in the previous sections, the nonlinear profile decomposition, decoupling, and induction on energy arguments all run very smoothly. This section closes with the proof of Theorem~\ref{T:main}; the needed contradiction is obtained by combining the space-localized Morawetz identity introduced in Lemma~\ref{L:morawetz} with the almost periodicity of minimal blowup solutions. \section{Preliminaries}\label{S:Preliminaries} \subsection{Some notation} We write $X \lesssim Y$ or $Y \gtrsim X$ to indicate $X \leq CY$ for some absolute constant $C>0$, which may change from line to line. When the implicit constant depends on additional quantities, this will be indicated with subscripts. We use $O(Y)$ to denote any quantity $X$ such that $|X| \lesssim Y$. We use the notation $X \sim Y$ whenever $X \lesssim Y \lesssim X$. We write $o(1)$ to indicate a quantity that converges to zero. Throughout this paper, $\Omega$ will denote the exterior domain of a smooth compact strictly convex obstacle in ${\mathbb{R}}^3$. Without loss of generality, we assume that $0\in \Omega^c$. We use $\diam:=\diam(\Omega^c)$ to denote the diameter of the obstacle and $d(x):=\dist(x,\Omega^c)$ to denote the distance of a point $x\in{\mathbb{R}}^3$ to the obstacle. In order to prove decoupling of profiles in $L^p$ spaces (when $p\neq 2$) in Section~\ref{S:LPD}, we will make use of the following refinement of Fatou's Lemma, due to Br\'ezis and Lieb: \begin{lem}[Refined Fatou, \cite{BrezisLieb}]\label{lm:rf} Let $0<p<\infty$. Suppose $\{f_n\}\subseteq L^p({\mathbb{R}}^d)$ with $\limsup\|f_n\|_{L^p}<\infty$. If $f_n\to f$ almost everywhere, then \begin{align*} \int_{{\mathbb{R}}^d}\Bigl||f_n|^p-|f_n-f|^p-|f|^p \Bigr| \,dx\to 0. \end{align*} In particular, $\|f_n\|_{L^p}^p-\|f_n-f\|_{L^p}^p \to \|f\|_{L^p}^p$. \end{lem} As described in the introduction, we need adaptations of a wide variety of harmonic analysis tools to the setting of exterior domains. 
Most of these were discussed in our paper \cite{KVZ:HA}. One of the key inputs for that paper is the following (essentially sharp) estimate for the heat kernel: \begin{thm}[Heat kernel bounds, \cite{qizhang}]\label{T:heat} Let $\Omega$ denote the exterior of a smooth compact convex obstacle in ${\mathbb{R}}^d$ for $d\geq 3$. Then there exists $c>0$ such that \begin{align*} |e^{t\Delta_{\Omega}}(x,y)|\lesssim \Bigl(\frac{d(x)}{\sqrt t\wedge \diam}\wedge 1\Bigr)\Bigl(\frac{d(y)}{\sqrt t\wedge \diam}\wedge 1\Bigr) e^{-\frac{c|x-y|^2}t} t^{-\frac d 2}, \end{align*} uniformly in $x, y\in \Omega$ and $t>0$; recall that $A\wedge B = \min\{A,B\}$. Moreover, the reverse inequality holds after suitable modification of $c$ and the implicit constant. \end{thm} The most important result from \cite{KVZ:HA} for our applications here is the following, which identifies Sobolev spaces defined with respect to the Dirichlet Laplacian with those defined via the usual Fourier multipliers. Note that the restrictions on the regularity $s$ are necessary, as demonstrated by the counterexamples discussed in~\cite{KVZ:HA}. \begin{thm}[Equivalence of Sobolev spaces, \cite{KVZ:HA}]\label{T:Sob equiv} Let $d\geq 3$ and let $\Omega$ denote the complement of a compact convex body $\Omega^c\subset{\mathbb{R}}^d$ with smooth boundary. Let $1<p<\infty$. If $0\leq s<\min\{1+\frac1p,\frac dp\}$ then \begin{equation}\label{E:equiv norms} \bigl\| (-\Delta_{{\mathbb{R}}^d})^{s/2} f \bigr\|_{L^p} \sim_{d,p,s} \bigl\| (-\Delta_\Omega)^{s/2} f \bigr\|_{L^p} \qtq{for all} f\in C^\infty_c(\Omega). \end{equation} \end{thm} This result allows us to transfer several key results directly from the Euclidean setting, provided we respect the restrictions on $s$ and $p$. This includes such basic facts as the $L^p$-Leibniz (or product) rule for first derivatives. Indeed, the product rule for the operator $(-\Delta_\Omega)^{1/2}$ is non-trivial; there is certainly no pointwise product rule for this operator.
We also need to consider derivatives of non-integer order. The $L^p$-product rule for fractional derivatives in Euclidean spaces was first proved by Christ and Weinstein \cite{ChW:fractional chain rule}. Combining their result with Theorem~\ref{T:Sob equiv} yields the following: \begin{lem}[Fractional product rule]\label{lm:product} For all $f, g\in C_c^{\infty}(\Omega)$, we have \begin{align}\label{fp} \| (-\Delta_\Omega)^{\frac s2}(fg)\|_{L^p} \lesssim \| (-\Delta_\Omega)^{\frac s2} f\|_{L^{p_1}}\|g\|_{L^{p_2}}+ \|f\|_{L^{q_1}}\| (-\Delta_\Omega)^{\frac s2} g\|_{L^{q_2}} \end{align} with the exponents satisfying $1<p, p_1, q_2<\infty$, $1<p_2,q_1\le \infty$, \begin{align*} \tfrac1p=\tfrac1{p_1}+\tfrac1{p_2}=\tfrac1{q_1}+\tfrac1{q_2}, \qtq{and} 0<s<\min\bigl\{ 1+\tfrac1{p_1}, 1+\tfrac1{q_2},\tfrac3{p_1},\tfrac3{q_2} \bigr\}. \end{align*} \end{lem} \subsection{Littlewood--Paley theory on exterior domains} Fix $\phi:[0,\infty)\to[0,1]$ a smooth non-negative function obeying \begin{align*} \phi(\lambda)=1 \qtq{for} 0\le\lambda\le 1 \qtq{and} \phi(\lambda)=0\qtq{for} \lambda\ge 2. \end{align*} For each dyadic number $N\in 2^{\mathbb{Z}}$, we then define \begin{align*} \phi_N(\lambda):=\phi(\lambda/N) \qtq{and} \psi_N(\lambda):=\phi_N(\lambda)-\phi_{N/2}(\lambda); \end{align*} notice that $\{\psi_N(\lambda)\}_{N\in 2^{\Z}} $ forms a partition of unity for $(0,\infty)$. With these functions in place, we can now introduce the Littlewood--Paley projections adapted to the Dirichlet Laplacian on $\Omega$ and defined via the functional calculus for self-adjoint operators: \begin{align*} P^{\Omega}_{\le N} :=\phi_N\bigl(\sqrt{-\Delta_\Omega}\,\bigr), \quad P^{\Omega}_N :=\psi_N\bigl(\sqrt{-\Delta_\Omega}\,\bigr), \qtq{and} P^{\Omega}_{>N} :=I-P^{\Omega}_{\le N}. \end{align*} For brevity we will often write $f_N := P^{\Omega}_N f$ and similarly for the other projections.
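It may be helpful to spell out the telescoping identity behind the partition of unity property: for dyadic $N'\le M$ and $\lambda>0$,
\begin{align*}
\sum_{N'\le N\le M}\psi_N(\lambda)=\phi_M(\lambda)-\phi_{N'/2}(\lambda),
\end{align*}
and $\phi_{N'/2}(\lambda)=0$ as soon as $N'\le\lambda$, while $\phi_M(\lambda)=1$ as soon as $\lambda\le M$; letting $N'\to 0$ and $M\to\infty$ gives $\sum_{N\in 2^{\Z}}\psi_N(\lambda)=1$ for every $\lambda>0$.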
We will write $P_N^{{\mathbb{R}}^3}$, and so forth, to represent the analogous operators associated to the usual Laplacian in the full Euclidean space. We will also need the analogous operators on the halfspace ${\mathbb{H}}=\{x\in{\mathbb{R}}^3 : x \cdot e_3 >0\}$ where $e_3=(0,0,1)$, which we denote by $P^{{\mathbb{H}}}_N$, and so forth. Just like their Euclidean counterparts, these Littlewood--Paley projections obey Bernstein estimates. Indeed, these follow quickly from heat kernel bounds and the analogue of the Mikhlin multiplier theorem for the Dirichlet Laplacian. See \cite{KVZ:HA} for further details. \begin{lem}[Bernstein estimates] Let $1<p<q\le \infty$ and $-\infty<s<\infty$. Then for any $f\in C_c^{\infty}(\Omega)$, we have \begin{align*} \|P^{\Omega}_{\le N} f \|_{L^p(\Omega)}+\|P^{\Omega}_N f\|_{L^p(\Omega)}&\lesssim \|f\|_{L^p(\Omega)},\\ \|P^{\Omega}_{\le N} f\|_{L^q(\Omega)}+\|P^{\Omega}_N f\|_{L^q(\Omega)}&\lesssim N^{d(\frac 1p-\frac1q)}\|f\|_{L^p(\Omega)},\\ N^s\|P^{\Omega}_N f\|_{L^p(\Omega)}&\sim \|(-\Delta_{\Omega})^{\frac s2}P^{\Omega}_N f\|_{L^p(\Omega)}. \end{align*} \end{lem} A deeper application of the multiplier theorem for the Dirichlet Laplacian is the proof of the square function inequalities. Both are discussed in \cite{IvanPlanch:square}, as well as \cite{KVZ:HA}, and further references can be found therein. \begin{lem}[Square function estimate]\label{sq} Fix $1<p<\infty$. For all $f\in C_c^{\infty}(\Omega)$, \begin{align*} \|f\|_{L^p(\Omega)}\sim \Bigl\|\Bigl(\sum_{N\in 2^{\Z}}|P^{\Omega}_{N} f|^2\Bigr)^{\frac12}\Bigr\|_{L^p(\Omega)}. \end{align*} \end{lem} Implicit in this lemma is the fact that each $f$ coincides with $\sum f_N$ in $L^p(\Omega)$ sense for $1<p<\infty$. This relies on the fact that $0$ is not an eigenvalue of $-\Delta_\Omega$, as follows from Lemma~\ref{L:local smoothing}. 
\subsection{Strichartz estimates and the local theory} As the endpoint Strichartz inequality is not known for exterior domains, some care needs to be taken when defining the natural Strichartz spaces. For any time interval $I$, we define \begin{align*} S^0(I)&:=L_t^{\infty} L_x^2(I\times\Omega)\cap L_t^{2+{\varepsilon}}L_x^{\frac{6(2+{\varepsilon})}{2+3{\varepsilon}}}(I\times\Omega)\\ \dot S^1(I) &:= \{u:I\times\Omega\to {\mathbb{C}} :\, (-\Delta_\Omega)^{1/2}u\in S^0(I)\}. \end{align*} By interpolation, \begin{align}\label{Sspaces} \|u\|_{L_t^q L_x^r(I\times\Omega)}\leq \|u\|_{S^0(I)} \qtq{for all} \tfrac2q+\tfrac3r=\tfrac32 \qtq{with} 2+{\varepsilon}\leq q\leq \infty. \end{align} Here ${\varepsilon}>0$ is chosen sufficiently small so that all Strichartz pairs of exponents used in this paper are covered. For example, combining \eqref{Sspaces} with Sobolev embedding and the equivalence of Sobolev spaces Theorem~\ref{T:Sob equiv}, we obtain the following lemma. \begin{lem}[Sample spaces] We have \begin{align*} \|u\|_{L_t^\infty \dot H^1_D} &+ \|(-\Delta_{\Omega})^{\frac12}u\|_{L_t^{10} L_x^{\frac{30}{13}}} + \|(-\Delta_{\Omega})^{\frac12}u\|_{L_t^5L_x^{\frac{30}{11}}} + \|(-\Delta_{\Omega})^{\frac 12}u\|_{L_{t,x}^{\frac{10}3}}\\ & + \|(-\Delta_{\Omega})^{\frac 12}u\|_{L_t^{\frac83} L_x^4} +\|u\|_{L_t^\infty L_x^6}+\|u\|_{L^{10}_{t,x}}+\|u\|_{L_t^5 L_x^{30}}\lesssim \|u\|_{\dot S^1(I)}, \end{align*} where all spacetime norms are over $I\times \Omega$. \end{lem} We define $N^0(I)$ to be the dual Strichartz space and $$ \dot N^1(I):=\{F:I\times\Omega\to {\mathbb{C}}:\, (-\Delta_\Omega)^{1/2} F\in N^0(I)\}. $$ For the case of exterior domains, Strichartz estimates were proved by Ivanovici \cite{Ivanovici:Strichartz}; see also \cite{BSS:schrodinger}. These estimates form an essential foundation for all the analysis carried out in this paper.
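For completeness, let us verify that the pair $(q,r)=\bigl(2+{\varepsilon},\tfrac{6(2+{\varepsilon})}{2+3{\varepsilon}}\bigr)$ appearing in the definition of $S^0(I)$ satisfies the Schr\"odinger admissibility relation $\frac2q+\frac3r=\frac32$ used in \eqref{Sspaces}:
\begin{align*}
\frac{2}{2+{\varepsilon}}+3\cdot\frac{2+3{\varepsilon}}{6(2+{\varepsilon})}
=\frac{4+(2+3{\varepsilon})}{2(2+{\varepsilon})}=\frac{3(2+{\varepsilon})}{2(2+{\varepsilon})}=\frac32.
\end{align*}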
\begin{thm}[Strichartz estimates]\label{T:Strichartz} Let $I$ be a time interval and let $\Omega$ be the exterior of a smooth compact strictly convex obstacle in ${\mathbb{R}}^3$. Then the solution $u$ to the forced Schr\"odinger equation $i u_t + \Delta_\Omega u = F$ satisfies the estimate \begin{align*} \|u\|_{S^0(I)}\lesssim \|u(t_0)\|_{L^2(\Omega)}+\|F\|_{N^0(I)} \end{align*} for any $t_0\in I$. In particular, as $(-\Delta_\Omega)^{1/2}$ commutes with the free propagator $e^{it\Delta_\Omega}$, \begin{align*} \|u\|_{\dot S^1(I)}\lesssim \|u(t_0)\|_{\dot H^1_D(\Omega)}+\|F\|_{\dot N^1(I)} \end{align*} for any $t_0\in I$. \end{thm} When $\Omega$ is the whole Euclidean space ${\mathbb{R}}^3$, we may take ${\varepsilon}=0$ in the definition of Strichartz spaces; indeed, for the linear propagator $e^{it\Delta_{{\mathbb{R}}^3}}$, Strichartz estimates for the endpoint pair of exponents $(q,r)=(2,6)$ were proved by Keel and Tao \cite{KeelTao}. Embedding functions on the halfspace ${\mathbb{H}}$ as functions on ${\mathbb{R}}^3$ that are odd under reflection in $\partial{\mathbb{H}}$, we immediately see that the whole range of Strichartz estimates, including the endpoint, also hold for the free propagator $e^{it\Delta_{{\mathbb{H}}}}$. The local theory for \eqref{nls} is built on contraction mapping arguments combined with Theorem~\ref{T:Strichartz} and the equivalence of Sobolev spaces Theorem~\ref{T:Sob equiv}. We record below a stability result for \eqref{nls}, which is essential in extracting a minimal counterexample to Theorem~\ref{T:main}. Its predecessor in the Euclidean case can be found in \cite{CKSTT:gwp}; for versions in higher dimensional Euclidean spaces see \cite{ClayNotes, RV, TaoVisan}. \begin{thm}[Stability for $\text{NLS}_{\Omega}$, \cite{KVZ:HA}]\label{T:stability} Let $\Omega$ be the exterior of a smooth compact strictly convex obstacle in ${\mathbb{R}}^3$. 
Let $I$ be a compact time interval and let $\tilde u$ be an approximate solution to \eqref{nls} on $I\times \Omega$ in the sense that $$ i\tilde u_t + \Delta_\Omega \tilde u = |\tilde u|^4\tilde u + e $$ for some function $e$. Assume that \begin{align*} \|\tilde u\|_{L_t^\infty \dot H_D^1(I\times \Omega)}\le E \qtq{and} \|\tilde u\|_{L_{t,x}^{10}(I\times \Omega)} \le L \end{align*} for some positive constants $E$ and $L$. Let $t_0 \in I$ and let $u_0\in \dot H_D^1(\Omega)$ satisfy \begin{align*} \|u_0-\tilde u(t_0)\|_{\dot H_D^1}\le E' \end{align*} for some positive constant $E'$. Assume also the smallness condition \begin{align}\label{E:stab small} \bigl\|\sqrt{-\Delta_\Omega}\; e^{i(t-t_0)\Delta_\Omega}\bigl[u_0-\tilde u(t_0)\bigr]\bigr\|_{L_t^{10}L_x^{\frac{30}{13}}(I\times \Omega)} +\bigl\|\sqrt{-\Delta_\Omega}\; e\bigr\|_{N^0(I)}&\le{\varepsilon} \end{align} for some $0<{\varepsilon}<{\varepsilon}_1={\varepsilon}_1(E,E',L)$. Then there exists a unique strong solution $u:I\times\Omega\to {\mathbb{C}}$ to \eqref{nls} with initial data $u_0$ at time $t=t_0$ satisfying \begin{align*} \|u-\tilde u\|_{L_{t,x}^{10}(I\times \Omega)} &\leq C(E,E',L){\varepsilon}\\ \bigl\|\sqrt{-\Delta_\Omega}\; (u-\tilde u)\bigr\|_{S^0(I\times\Omega)} &\leq C(E,E',L)\, E'\\ \bigl\|\sqrt{-\Delta_\Omega}\; u\bigr\|_{S^0(I\times\Omega)} &\leq C(E,E',L). \end{align*} \end{thm} There is an analogue of this theorem for $\Omega$ an exterior domain in ${\mathbb{R}}^d$ with $d=4,5,6$; see \cite{KVZ:HA}. For dimensions $d\geq 7$, this is an open question. The proof of the stability result in ${\mathbb{R}}^d$ with $d\geq 7$ relies on fractional chain rules for H\"older continuous functions and `exotic' Strichartz estimates; see \cite{ClayNotes,TaoVisan}. The equivalence of Sobolev spaces Theorem~\ref{T:Sob equiv} guarantees that the fractional chain rule can be imported directly from the Euclidean setting.
However, the `exotic' Strichartz estimates are derived from the dispersive estimate \eqref{E:EuclidDisp} and it is not known whether they hold in exterior domains. Applying Theorem~\ref{T:stability} with $\tilde u\equiv0$, we recover the standard local well-posedness theory for \eqref{nls}. Indeed, for an arbitrary (large) initial data $u_0\in \dot H^1_D(\Omega)$, the existence of some small time interval $I$ on which the smallness hypothesis \eqref{E:stab small} holds is guaranteed by the monotone convergence theorem combined with Theorem~\ref{T:Strichartz}. Moreover, if the initial data $u_0$ has small norm in $\dot H^1_D(\Omega)$ (that is, $E'$ is small), then Theorem~\ref{T:Strichartz} yields \eqref{E:stab small} with $I={\mathbb{R}}$. Therefore, both local well-posedness for large data and global well-posedness for small data follow from Theorem~\ref{T:stability}. These special cases of Theorem~\ref{T:stability} have appeared before, \cite{BSS:schrodinger, IvanPlanch:IHP}; induction on energy, however, requires the full strength of Theorem~\ref{T:stability}. In Section~\ref{S:Nonlinear Embedding}, we will embed solutions to NLS in various limiting geometries back inside $\Omega$. To embed solutions to $\text{NLS}_{{\mathbb{R}}^3}$ in $\Omega$, we will make use of the following persistence of regularity result for this equation: \begin{lem}[Persistence of regularity for $\text{NLS}_{{\mathbb{R}}^3}$, \cite{CKSTT:gwp}]\label{lm:persistencer3} Fix $s\ge 0$ and let $I$ be a compact time interval and $u:I\times{\mathbb{R}}^3\to {\mathbb{C}}$ be a solution to $\text{NLS}_{{\mathbb{R}}^3}$ satisfying \begin{align*} E(u)\leq E<\infty \qtq{and} \|u\|_{L_{t,x}^{10}(I\times{\mathbb{R}}^3)}\leq L<\infty. \end{align*} If $u(t_0)\in \dot H^s({\mathbb{R}}^3)$ for some $t_0\in I$, then \begin{align*} \|(-\Delta_{{\mathbb{R}}^3})^{\frac s2}u\|_{S^0(I)}\leq C(E,L) \|u(t_0)\|_{\dot H^s({\mathbb{R}}^3)}. 
\end{align*} \end{lem} We will also need a persistence of regularity result for $\text{NLS}_{{\mathbb{H}}}$. This follows by embedding solutions on the halfspace as solutions on ${\mathbb{R}}^3$ that are odd under reflection in $\partial{\mathbb{H}}$. In particular, one may regard $-\Delta_{\mathbb{H}}$ as the restriction of $-\Delta_{{\mathbb{R}}^3}$ to odd functions. For example, one can see this equivalence in the exact formula for the heat kernel in ${\mathbb{H}}$. \begin{lem} [Persistence of regularity for $\text{NLS}_{{\mathbb{H}}}$]\label{lm:persistenceh} Fix $s\ge 0$ and let $I$ be a compact time interval and $u:I\times{\mathbb{H}}\to {\mathbb{C}}$ be a solution to $\text{NLS}_{{\mathbb{H}}}$ satisfying \begin{align*} E(u)\leq E<\infty \qtq{and} \|u\|_{L_{t,x}^{10}(I\times{\mathbb{H}})}\leq L<\infty. \end{align*} If $u(t_0)\in \dot H^s_D({\mathbb{H}})$ for some $t_0\in I$, then \begin{align*} \|(-\Delta_{{\mathbb{H}}})^{\frac s2}u\|_{S^0(I)}\leq C(E,L) \|u(t_0)\|_{\dot H^s_D({\mathbb{H}})}. \end{align*} \end{lem} \subsection{Morawetz and local smoothing} We preclude the minimal counterexample to Theorem~\ref{T:main} in Section~\ref{S:Proof} with the use of the following one-particle Morawetz inequality; cf. \cite{borg:scatter, LinStrauss}. \begin{lem}[Morawetz inequality]\label{L:morawetz} Let $I$ be a time interval and let $u$ be a solution to \eqref{nls} on $I$. Then for any $A\ge 1$ with $A|I|^{1/2}\geq \diam(\Omega^c)$ we have \begin{align}\label{mora} \int_I\int_{|x|\le A|I|^{\frac 12}, x\in \Omega}\frac{|u(t,x)|^6}{|x|} \,dx\,dt\lesssim A|I|^{\frac 12}, \end{align} where the implicit constant depends only on the energy of $u$. \end{lem} \begin{proof} Let $\phi(x)$ be a smooth radial bump function such that $\phi(x)=1$ for $|x|\le 1$ and $\phi(x)=0$ for $|x|>2$. Let $R\geq \diam(\Omega^c)$ and define $a(x):=|x|\phi\bigl(\frac x R\bigr)$. 
Then for $|x|\le R$ we have \begin{align}\label{cd1} \partial_j\partial_k a(x) \text{ is positive definite,} \quad \nabla a(x)=\frac x{|x|}, \qtq{and} \Delta \Delta a(x)<0, \end{align} while for $|x|>R$ we have the following rough estimates: \begin{align}\label{cd2} |\partial_k a(x)|\lesssim 1, \quad |\partial_j\partial_k a(x)|\lesssim \frac 1R,\qtq{and} |\Delta\Delta a(x)|\lesssim \frac 1{R^3}. \end{align} To continue, we use the local momentum conservation law \begin{align}\label{lmc} \partial_t \Im(\bar u \partial_k u)=-2\partial_j \Re(\partial_k u\partial_j\bar u)+\frac 12\partial_k\Delta(|u|^2)-\frac 23\partial_k(|u|^6). \end{align} Multiplying both sides by $\partial_k a$ and integrating over $\Omega$ we obtain \begin{align} \partial_t \Im\int_\Omega \bar u \partial_k u \partial_k a \,dx &=-2 \Re\int_\Omega\partial_j(\partial_k u\partial_j \bar u)\partial_k a\,dx\notag\\ &\quad+\frac 12\int_\Omega\partial_k\Delta(|u|^2)\partial_k a \,dx-\frac 23\int_\Omega\partial_k(|u|^6)\partial_k a \,dx.\label{17} \end{align} The desired estimate \eqref{mora} will follow from an application of the fundamental theorem of calculus combined with an upper bound on $\text{LHS}\eqref{17}$ and a lower bound on $\text{RHS}\eqref{17}$. The desired upper bound follows immediately from H\"older followed by Sobolev embedding: \begin{align}\label{upper bound} \Im\int_\Omega \bar u \partial_k u \partial_k a dx \lesssim \|u\|_{L^6(\Omega)} \|\nabla u\|_{L^2(\Omega)} \|\nabla a\|_{L^3(\Omega)}\lesssim R\|\nabla u\|_{L^2(\Omega)}^2. \end{align} Next we seek a lower bound on $\text{RHS}\eqref{17}$. 
From the divergence theorem, we obtain \begin{align*} -2\Re\int_\Omega\partial_j(\partial_k u\partial_j \bar u)\partial_k a\,dx &=-2\Re\int_\Omega\partial_j(\partial_k u\partial_j \bar u\partial_k a) \,dx+ 2\Re\int_\Omega\partial_k u\partial_j\bar u\partial_j\partial_k a \,dx\\ &=2\Re\int_{\partial\Omega}\partial_ku\partial_k a\partial_j \bar u\vec n_j d\sigma(x)+2\Re\int_{|x|\le R}\partial_k u\partial_j\bar u\partial_j\partial_k a\,dx\\ &\qquad + 2\Re\int_{|x|\ge R}\partial_k u\partial_j\bar u\partial_j\partial_k a \,dx, \end{align*} where $\vec n$ denotes the outer normal to $\Omega^c$. We write \begin{align*} \partial_j \bar u\vec n_j=\nabla \bar u\cdot\vec n=\bar u_n. \end{align*} Moreover, from the Dirichlet boundary condition, the tangential derivative of $u$ vanishes on the boundary; thus, \begin{align*} \nabla u=(\nabla u\cdot {\vec n})\vec n=u_n\vec n \qtq{and} \partial_k u\partial_k a=u_n a_n. \end{align*} Using this, \eqref{cd1}, and \eqref{cd2} we obtain \begin{align*} -2\Re\int_\Omega\partial_j(\partial_k u\partial_j\bar u)\partial_k a\,dx &\ge 2\int_{\partial\Omega} a_n|u_n|^2 d\sigma(x)+2\Re\int_{|x|\ge R}\partial_k u\partial_j \bar u\partial_j\partial_k a \,dx\\ &\ge 2\int_{\partial\Omega} a_n|u_n|^2 d\sigma(x)-\frac CR\|\nabla u\|_{L^2(\Omega)}^2.
\end{align*} Similarly, we can estimate the second term on $\text{RHS}\eqref{17}$ as follows: \begin{align*} \frac 12\int_\Omega \partial_k\Delta(|u|^2)\partial_k a \,dx &=\frac 12\int_\Omega\partial_k\bigl[\Delta(|u|^2)\partial_k a\bigr] \,dx-\frac12\int_{\Omega}\Delta(|u|^2)\Delta a \,dx\\ &=-\frac 12\int_{\partial\Omega}\Delta(|u|^2)\partial_k a\vec n_k d\sigma(x)-\frac 12\int_{\Omega}|u|^2\Delta\Delta a \,dx\\ &=-\int_{\partial\Omega}|\nabla u|^2 a_n d\sigma(x)-\frac12\int_{|x|\le R}|u|^2 \Delta\Delta a \,dx\\ &\quad -\frac 12\int_{|x|\geq R}|u|^2 \Delta\Delta a \,dx\\ &\ge-\int_{\partial\Omega}|u_n|^2 a_n d\sigma(x)-\frac CR \|\nabla u\|_{L^2(\Omega)}^2; \end{align*} to obtain the last inequality we have used \eqref{cd1}, \eqref{cd2}, H\"older, and Sobolev embedding. Finally, to estimate the third term on $\text{RHS}\eqref{17}$ we use \eqref{cd1} and \eqref{cd2}: \begin{align*} -\frac 23\int_{\Omega}\partial_k(|u|^6)\partial_k a \,dx &=\frac23\int_\Omega|u|^6 \Delta a \,dx\ge\frac 43\int_{|x|\le R}\frac{|u|^6}{|x|} \,dx-\frac CR\|u\|_{L^6(\Omega)}^6. \end{align*} Collecting all these bounds and using the fact that $a_n\geq 0$ on $\partial \Omega$, we obtain \begin{align} \text{RHS}\eqref{17}\gtrsim \int_{|x|\le R}\frac{|u|^6}{|x|} \,dx - R^{-1} \bigl[ \|\nabla u\|_{L^2(\Omega)}^2 +\|u\|_{L^6(\Omega)}^6 \bigr].\label{lower bound} \end{align} Integrating \eqref{17} over $I$ and using \eqref{upper bound} and \eqref{lower bound} we derive \begin{align*} \int_I\int_{|x|\le R, x\in \Omega}\frac{|u|^6}{|x|} \,dx\,dt\lesssim R+\frac{|I|}{R}. \end{align*} Taking $R=A|I|^{\frac 12}$ yields \eqref{mora}. This completes the proof of the lemma. \end{proof} We record next a local smoothing result. While the local smoothing estimate does guarantee local energy decay, it falls short of fulfilling the role of a dispersive estimate. In particular, local smoothing does not preclude the possibility that energy refocuses finitely many times.
Indeed, it is known to hold in merely non-trapping geometries. Nevertheless, it does play a key role in the proof of the Strichartz estimates. The version we need requires uniformity under translations and dilations; this necessitates some mild modifications of the usual argument. \begin{lem}[Local smoothing]\label{L:local smoothing} Let $u=e^{it\Delta_\Omega} u_0$. Then \begin{align*} \int_{\mathbb{R}} \int_\Omega |\nabla u(t,x)|^2 \bigl\langle R^{-1} (x-z)\bigr\rangle^{-3} \,dx\,dt \lesssim R \| u_0 \|_{L^2(\Omega)} \|\nabla u_0 \|_{L^2(\Omega)}, \end{align*} uniformly for $z\in {\mathbb{R}}^3$ and $R>0$. \end{lem} \begin{proof} We adapt the proof of local smoothing using the Morawetz identity from the Euclidean setting. For the level of generality needed here, we need to combine two Morawetz identities: one adapted to the obstacle and a second adapted to the $R$ ball around $z$. Recall that the origin is an interior point of $\Omega^c$. Given $x\in\partial\Omega$, let $\vec n(x)$ denote the outward normal to the obstacle at this point. As $\Omega^c$ is convex, there is a constant $C>0$ independent of $z$ so that \begin{align}\label{E:ls geom} \bigl| \tfrac{R^{-1}(x-z)}{\langle R^{-1}(x-z)\rangle} \cdot \vec n(x) \bigr| \leq C \tfrac{x}{|x|}\cdot \vec n(x) \qtq{for all} x\in\partial\Omega. \end{align} Indeed, the right-hand side is bounded away from zero uniformly for $x\in\partial\Omega$, while the set of vectors $\frac{R^{-1}(x-z)}{\langle R^{-1}(x-z)\rangle}$ is compact. For $C>0$ as above, let $$ F(t) := \int_\Omega \Im( \bar u \nabla u) \cdot \nabla a\, dx \qtq{with} a(x) := C |x| + R \langle R^{-1} (x-z) \bigr\rangle. $$ After integrating by parts several times (cf. 
Lemma~\ref{L:morawetz}) and using that $$ -\Delta\Delta a(x) \geq 0 \qtq{and} \partial_j\partial_k a(x) \geq \tfrac{1}{R} \langle R^{-1} (x-z) \bigr\rangle^{-3} \delta_{jk} \quad \text{(as symmetric matrices)} $$ one obtains $$ \partial_t F(t) \geq 2 \int_\Omega \frac{|\nabla u(t,x)|^2 \,dx}{R \langle R^{-1} (x-z) \rangle^{3}} + \int_{\partial\Omega} |\nabla u(t,x)|^2 \bigl[ \nabla a(x)\cdot \vec n(x) \bigr] \,d\sigma(x). $$ Moreover, by \eqref{E:ls geom} the integral over $\partial\Omega$ is positive since $$ \nabla a(x)\cdot \vec n(x) = \bigl[C \tfrac{x}{|x|} + \tfrac{R^{-1}(x-z)}{\langle R^{-1}(x-z)\rangle} \bigr]\cdot \vec n(x) \geq 0 \qtq{for} x\in\partial\Omega. $$ Noting that $|F(t)|\leq (C+1) \| u(t) \|_{L^2(\Omega)} \| \nabla u(t) \|_{L^2(\Omega)}$, the lemma now follows by applying the fundamental theorem of calculus. \end{proof} The remainder term in the linear profile decomposition Theorem~\ref{T:LPD} goes to zero in $L^{10}_{t,x}$; however, in order to prove the approximate solution property (cf. Claim~3 in the proof of Proposition~\ref{P:PS}), we need to show smallness in Strichartz spaces with one derivative. This is achieved via local smoothing (cf. Lemma~3.7 from \cite{keraani-h1}); the uniformity in Lemma~\ref{L:local smoothing} is essential for this application. \begin{cor}\label{C:Keraani3.7} Given $w_0\in \dot H^1_D(\Omega)$, $$ \| \nabla e^{it\Delta_\Omega} w_0 \|_{L^{\frac52}_{t,x}([\tau-T,\tau+T]\times\{|x-z|\leq R\})} \lesssim T^{\frac{31}{180}} R^{\frac7{45}} \| e^{it\Delta_\Omega} w_0 \|_{L^{10}_{t,x}({\mathbb{R}}\times\Omega)}^{\frac1{18}} \| w_0 \|_{\dot H^1_D(\Omega)}^{\frac{17}{18}}, $$ uniformly in $w_0$ and the parameters $R,T > 0$, $\tau\in{\mathbb{R}}$, and $z\in{\mathbb{R}}^3$. \end{cor} \begin{proof} Replacing $w_0$ by $e^{i\tau\Delta_\Omega} w_0$, we see that it suffices to treat the case $\tau=0$. 
By H\"older's inequality, \begin{align*} \| \nabla e^{it\Delta_\Omega} & w_0 \|_{L^{\frac52}_{t,x}([-T,T]\times\{|x-z|\leq R\})} \\ &\lesssim \| \nabla e^{it\Delta_\Omega} w_0 \|_{L^2_{t,x}([-T,T]\times\{|x-z|\leq R\})}^{\frac13} \|\nabla e^{it\Delta_\Omega} w_0 \|_{L^{\frac{20}7}_{t,x}([-T,T]\times\Omega)}^{\frac23}. \end{align*} We will estimate the two factors on the right-hand side separately. By the H\"older and Strichartz inequalities, as well as the equivalence of Sobolev spaces, we estimate \begin{align*} \|\nabla e^{it\Delta_\Omega} w_0 \|_{L^{\frac{20}7}_{t,x}([-T,T]\times\Omega)} &\lesssim T^{\frac 18} \| (-\Delta_\Omega)^{\frac12} e^{it\Delta_\Omega} w_0 \|_{L^{\frac{40}9}_t L^{\frac{20}7}_x} \lesssim T^{\frac 18} \|w_0\|_{\dot H^1_D(\Omega)} . \end{align*} In this way, the proof of the corollary reduces to showing \begin{align}\label{E:LS1022} \| \nabla e^{it\Delta_\Omega} w_0 \|_{L^2_{t,x}([-T,T]\times\{|x-z|\leq R\})} \lesssim T^{\frac4{15}} R^{\frac7{15}} \| e^{it\Delta_\Omega} w_0 \|_{L^{10}_{t,x}}^{\frac16} \| w_0 \|_{\dot H^1_D(\Omega)}^{\frac56}. \end{align} Given $N>0$, using the H\"older, Bernstein, and Strichartz inequalities, as well as the equivalence of Sobolev spaces, we have \begin{align*} \bigl\| \nabla e^{it\Delta_\Omega} P_{< N}^\Omega & w_0 \bigr\|_{L^2_{t,x}([-T,T]\times\{|x-z|\leq R\})} \\ &\lesssim T^{\frac25} R^{\frac9{20}} \bigl\| \nabla e^{it\Delta_\Omega} P_{< N}^\Omega w_0 \bigr\|_{L^{10}_tL^{\frac{20}7}_x} \\ &\lesssim T^{\frac25} R^{\frac9{20}} N^{\frac14} \| (-\Delta_\Omega)^{\frac38} e^{it\Delta_\Omega} P_{< N}^\Omega w_0 \|_{L^{10}_t L^{\frac{20}7}_x} \\ &\lesssim T^{\frac25} R^{\frac9{20}} N^{\frac14} \| e^{it\Delta_\Omega} w_0 \|_{L^{10}_{t,x}}^{\frac14} \| (-\Delta_\Omega)^{\frac12} e^{it\Delta_\Omega} w_0 \|_{L^{10}_t L^{\frac{30}{13}}_x}^{\frac34} \\ &\lesssim T^{\frac25} R^{\frac9{20}} N^{\frac14} \| e^{it\Delta_\Omega} w_0 \|_{L^{10}_{t,x}}^{\frac14} \| w_0 \|_{\dot H^1_D(\Omega)}^{\frac34}. 
\end{align*} We estimate the high frequencies using Lemma~\ref{L:local smoothing} and the Bernstein inequality: \begin{align*} \bigl\| \nabla e^{it\Delta_\Omega} P_{\geq N}^\Omega w_0 \bigr\|_{L^2_{t,x}([-T,T]\times\{|x-z|\leq R\})} ^2 &\lesssim R \| P_{\geq N}^\Omega w_0 \|_{L^2_x} \| \nabla P_{\geq N}^\Omega w_0 \|_{L^2_x} \\ &\lesssim R N^{-1} \| w_0 \|_{\dot H^1_D(\Omega)}^2. \end{align*} The estimate \eqref{E:LS1022} now follows by optimizing in the choice of $N$. \end{proof} \section{Convergence of domains}\label{S:Domain Convergence} The region $\Omega$ is not invariant under scaling or translation; indeed, under suitable choices of such operations, the obstacle may shrink to a point, march off to infinity, or even expand to fill a halfspace. The objective of this section is to prove some rudimentary statements about the behaviour of functions of the Dirichlet Laplacian under such circumstances. In the next section, we address the much more subtle question of the convergence of propagators in Strichartz spaces. We begin by defining a notion of convergence of domains that is general enough to cover the scenarios discussed in this paper, without being so general as to make the arguments unnecessarily complicated. Throughout, we write $$ G_{\mathcal O}(x,y;z) := (-\Delta_{\mathcal O} - z)^{-1}(x,y) $$ for the Green's function of the Dirichlet Laplacian in a general open set ${\mathcal O}$. This function is symmetric under the interchange of $x$ and $y$. \begin{defn}\label{D:converg} Given a sequence ${\mathcal O}_n$ of open subsets of ${\mathbb{R}}^3$ we define $$ \tlim {\mathcal O}_n := \{ x\in {\mathbb{R}}^3 :\, \liminf_{n\to\infty} \dist(x,{\mathcal O}_n^c) > 0\}. 
$$ Writing $\tilde{\mathcal O}=\tlim {\mathcal O}_n$, we say ${\mathcal O}_n\to{\mathcal O}$ if the following two conditions hold: ${\mathcal O}\triangle\tilde{\mathcal O}$ is a finite set and \begin{align}\label{cr2} G_{{\mathcal O}_n}(x,y;z)\to G_{{\mathcal O}}(x,y;z) \end{align} for all $z\in (-2,-1)$, all $x\in \tilde{\mathcal O}$, and uniformly for $y$ in compact subsets of $\tilde{\mathcal O}\setminus \{x\}$. \end{defn} The arguments that follow adapt immediately to allow the symmetric difference ${\mathcal O}\triangle\tilde{\mathcal O}$ to be a set of vanishing Minkowski $1$-content, rather than being merely finite. The role of this hypothesis is to guarantee that this set is removable for $\dot H^1_D({\mathcal O})$; see Lemma~\ref{L:dense} below. We restrict $z$ to the interval $(-2,-1)$ in \eqref{cr2} for simplicity and because it allows us to invoke the maximum principle when checking this hypothesis. Nevertheless, this implies convergence for all $z\in{\mathbb{C}}\setminus[0,\infty)$, as we will show in Lemma~\ref{lm:allz}. \begin{lem}\label{L:dense} If ${\mathcal O}_n\to {\mathcal O}$, then $C^\infty_c(\tilde{\mathcal O})$ is dense in $\dot H^1_D({\mathcal O})$. \end{lem} \begin{proof} By definition, $C^\infty_c({\mathcal O})$ is dense in $\dot H^1_D({\mathcal O})$. Given $f\in C^\infty_c({\mathcal O})$ and ${\varepsilon}>0$ define $ f_{\varepsilon}(x) := f(x) \prod_{k=1}^m \theta\bigl(\tfrac{x-x_k}{{\varepsilon}}\bigr) $ where $\{x_k\}_{k=1}^m$ enumerates ${\mathcal O}\triangle\tilde{\mathcal O}$ and $\theta:{\mathbb{R}}^3\to[0,1]$ is a smooth function that vanishes when $|x|\leq1$ and equals one when $|x|\geq2$. Then $f_{\varepsilon} \in C^\infty_c(\tilde{\mathcal O})\cap C^\infty_c({\mathcal O})$ and $$ \| f - f_{\varepsilon} \|_{\dot H^1({\mathbb{R}}^3)} \lesssim \sqrt{m{\varepsilon}^3} \; \|\nabla f\|_{L^\infty} + \sqrt{m{\varepsilon}}\; \|f\|_{L^\infty}. $$ As ${\varepsilon}$ can be chosen arbitrarily small, the proof is complete. 
\end{proof} In what follows, we will need some crude bounds on the Green's function that hold uniformly for the rescaled domains we consider. While several existing methods could be used to obtain more precise results (cf. \cite{Hislop}), we prefer to give a simple argument that yields satisfactory bounds and for which the needed uniformity is manifest. \begin{lem}\label{L:G bnds} For all open sets ${\mathcal O}\subseteq{\mathbb{R}}^3$ and $z\in{\mathbb{C}}\setminus[0,\infty)$, \begin{align}\label{moron} \bigl|G_{\mathcal O}(x,y;z)\bigr| \lesssim \frac{|z|^2}{(\Im z)^2} e^{-\frac12\Re \sqrt{-z}|x-y|} \Bigl(\frac1{|x-y|} + \sqrt{|\Im z|}\Bigr). \end{align} Moreover, if\/ $\Re z\leq 0$, then \begin{align}\label{moron'} \bigl|G_{\mathcal O}(x,y;z)\bigr| \lesssim e^{-\frac12\Re \sqrt{-z}|x-y|} \Bigl(\frac1{|x-y|} + \sqrt{|\Im z|}\Bigr). \end{align} \end{lem} \begin{proof} By the parabolic maximum principle, $0\leq e^{t\Delta_{{\mathcal O}}}(x,y) \leq e^{t\Delta_{{\mathbb{R}}^3}}(x,y)$. Thus, $$ |G_{{\mathcal O}} (x,y;z)| = \biggl| \int_0^\infty e^{tz + t\Delta_{{\mathcal O}}}(x,y) \,dt \biggr| \leq \int_0^\infty e^{t\Re(z) + t\Delta_{{\mathbb{R}}^3}}(x,y) \,dt = G_{{\mathbb{R}}^3} (x,y;\Re z) $$ for all $\Re z \leq 0$. Using the explicit formula for the Green's function in ${\mathbb{R}}^3$, we deduce \begin{equation}\label{Go bound} | G_{{\mathcal O}} (x,y;z) | \leq \frac{e^{-\sqrt{-\Re z}|x-y|}}{4\pi|x-y|} \qtq{whenever} \Re z\leq0. \end{equation} (When $z\in(-\infty,0]$ this follows more simply from the elliptic maximum principle.) Note that the inequality \eqref{Go bound} implies \eqref{moron'} in the sector $\Re z < -|\Im z|$. Indeed, in this region we have $\Re \sqrt{-z} \leq \sqrt{|z|} \leq 2^{\frac14} \sqrt{-\Re z}$. In the remaining cases of \eqref{moron'}, namely, $-|\Im z|\leq \Re z\leq0$, we have $1\leq\frac{|z|^2}{(\Im z)^2}\leq 2$ and so in this case \eqref{moron'} follows from \eqref{moron}. Thus, it remains to establish \eqref{moron}.
To obtain the result for general $z\in{\mathbb{C}}\setminus[0,\infty)$, we combine \eqref{Go bound} with a crude bound elsewhere and the Phragm\'en--Lindel\"of principle. From \eqref{Go bound} and duality, we have $$ \bigl\| (-\Delta_{{\mathcal O}}+|z|)^{-1} \bigr\|_{L^1\to L^2} = \bigl\| (-\Delta_{{\mathcal O}}+|z|)^{-1} \bigr\|_{L^2\to L^\infty} \lesssim |z|^{-1/4}. $$ Combining this information with the identity $$ (-\Delta_{{\mathcal O}}-z)^{-1} = (-\Delta_{{\mathcal O}}+|z|)^{-1} + (-\Delta_{{\mathcal O}}+|z|)^{-1}\biggl[\frac{(z+|z|)(-\Delta_{{\mathcal O}}+|z|)}{-\Delta_{{\mathcal O}}-z}\biggr](-\Delta_{{\mathcal O}}+|z|)^{-1} $$ and elementary estimations of the $L^2$-norm of the operator in square brackets, we deduce that \begin{equation}\label{Go bound'} |G_{{\mathcal O}} (x,y;z)| \lesssim \frac{1}{|x-y|} + \frac{|z|^{3/2}}{|\Im z|} \qtq{for all} z\in{\mathbb{C}}\setminus[0,\infty). \end{equation} Using \eqref{Go bound} when $\Re z \leq 0$ and \eqref{Go bound'} when $\Re z >0$, we see that for given ${\varepsilon}>0$ we have \begin{equation}\label{log Go} \log \biggl|\frac{G_{{\mathcal O}} (x,y;z)}{(z+i{\varepsilon})^2}\biggr| \leq -|x-y|\Re\bigl(\sqrt{i{\varepsilon}-z}\bigr) + \log\bigl(\tfrac1{|x-y|}+\sqrt{\varepsilon}\bigr) + 2\log\bigl(\tfrac1{\varepsilon}\bigr) + C \end{equation} for a universal constant $C$ and all $z$ with $\Im z ={\varepsilon}$. By the Phragm\'en--Lindel\"of principle, this inequality extends to the entire halfspace $\Im z \geq {\varepsilon}$. Indeed, LHS\eqref{log Go} is subharmonic and converges to $-\infty$ at infinity, while RHS\eqref{log Go} is harmonic and grows sublinearly. To obtain the lemma at a fixed $z$ in the upper halfplane we apply \eqref{log Go} with ${\varepsilon}=\frac12\Im z$ and use the elementary inequality $$ \Re \sqrt{-u-\smash[b]{\tfrac i2 v}} \geq \tfrac12 \Re \sqrt{-u-iv} \qtq{for all} u\in{\mathbb{R}} \qtq{and} v>0. $$ The result for the lower halfplane follows by complex conjugation symmetry. 
\end{proof} \begin{lem}\label{lm:allz} If ${\mathcal O}_n\to {\mathcal O}$, then \eqref{cr2} holds uniformly for $z$ in compact subsets of ${\mathbb{C}}\setminus[0,\infty)$, $x\in \tilde{\mathcal O}$, and $y$ in compact subsets of $\tilde{\mathcal O}\setminus \{x\}$. \end{lem} \begin{proof} We argue by contradiction. Suppose not. Then there exist an $x\in \tilde{\mathcal O}$ and a sequence $y_n\to y_\infty\in \tilde {\mathcal O}\setminus \{x\}$ so that \begin{align*} f_n(z):=G_{{\mathcal O}_n}(x, y_n;z) \end{align*} does not converge uniformly to $G_{\mathcal O}(x,y_\infty;z)$ on some compact subset of ${\mathbb{C}}\setminus[0,\infty)$. By Lemma~\ref{L:G bnds}, we see that $\{f_n\}$ form a normal family and so, after passing to a subsequence, converge uniformly on compact sets to some $f(z)$. As $G_{{\mathcal O}_n}(x, y_n;z)\to G_{\mathcal O}(x,y_\infty;z)$ whenever $z\in (-2,-1)$, the limit must be $f(z)=G_{\mathcal O}(x,y_\infty;z)$. This shows that it was unnecessary to pass to a subsequence, thus providing the sought-after contradiction. \end{proof} Given sequences of scaling and translation parameters $N_n\in 2^{\mathbb{Z}}$ and $x_n\in\Omega$, we wish to consider the domains $N_n(\Omega-\{x_n\})$. Writing $d(x_n):=\dist(x_n,\Omega^c)$ and passing to a subsequence, we identify four specific scenarios: \begin{CI} \item Case 1: $N_n\equiv N_\infty$ and $x_n\to x_\infty\in \Omega$. Here we set $\Omega_n:=\Omega$. \item Case 2: $N_n\to 0$ and $-N_n x_n \to x_\infty\in{\mathbb{R}}^3$. Here $\Omega_n:=N_n(\Omega-\{x_n\})$. \item Case 3: $N_nd(x_n)\to \infty$. Here $\Omega_n:=N_n(\Omega-\{x_n\})$. \item Case 4: $N_n\to\infty$ and $N_n d(x_n)\to d_\infty>0$. Here $\Omega_n:=N_nR_n^{-1}(\Omega-\{x_n^*\})$, where $x_n^*\in\partial\Omega$ and $R_n\in SO(3)$ are chosen so that $d(x_n)=|x_n-x_n^*|$ and $R_n e_3 = \frac{x_n-x_n^*}{|x_n-x_n^*|}$.
\end{CI} The seemingly missing possibility, namely, $N_n \gtrsim 1$ and $N_n d(x_n)\to 0$ will be precluded in the proof of Proposition~\ref{P:inverse Strichartz}. In Case~1, the domain modifications are so tame as to not require further analysis, as is reflected by the choice of $\Omega_n$. The definition of $\Omega_n$ in Case~4 incorporates additional translations and rotations to normalize the limiting halfspace to be $$ {\mathbb{H}} := \{ x\in{\mathbb{R}}^3 : e_3 \cdot x >0 \} \qtq{where} e_3:=(0,0,1). $$ In Cases~2 and~3, the limiting domain is ${\mathbb{R}}^3$, as we now show. \begin{prop}\label{P:convdomain} In Cases~2 and~3, $\Omega_n\to {\mathbb{R}}^3;$ in Case 4, $\Omega_n\to {\mathbb{H}}$. \end{prop} \begin{proof} In Case 2, we have $\tlim \Omega_n = {\mathbb{R}}^3\setminus\{x_\infty\}$. It remains to show convergence of the Green's functions. Let $C_0>0$ be a constant to be chosen later. We will show that for $z\in(-2,-1)$ and $n$ sufficiently large, \begin{align}\label{G to R lb} G_{\Omega_n}(x,y;z)\ge G_{{\mathbb{R}}^3}(x,y;z)-C_0N_nG_{{\mathbb{R}}^3}(x,-x_nN_n;z) \end{align} for $x\in {\mathbb{R}}^3\setminus\{ x_\infty\}$ fixed and $y$ in any compact subset $K$ of ${\mathbb{R}}^3\setminus\{x,x_\infty\}$. Indeed, for $n$ large enough we have $x\in \Omega_n$ and $K\subseteq\Omega_n$. Also, for $x_0\in \partial\Omega_n$ we have $|x_0+x_n N_n|\le \diam(\Omega^c)N_n$. Thus, for $z\in(-2,-1)$ we estimate \begin{align*} G_{{\mathbb{R}}^3}(x_0,y;z)-C_0N_nG_{{\mathbb{R}}^3}(x_0,-x_nN_n;z)&=\frac{e^{-\sqrt{-z}|x_0-y|}}{4\pi|x_0-y|} -C_0N_n\frac{e^{-\sqrt{-z}|x_0+x_nN_n|}}{4\pi|x_0+x_nN_n|}\\ &\le \frac{e^{-\sqrt{-z}|x_0-y|}}{4\pi|x_0-y|}-C_0\frac{e^{-\diam\sqrt{-z}N_n}}{4\pi \diam}<0, \end{align*} provided $C_0>\sup_{y\in K}\frac {\diam}{|x_0-y|}$ and $n$ is sufficiently large. Thus \eqref{G to R lb} follows from the maximum principle. The maximum principle also implies $G_{{\mathbb{R}}^3}(x,y;z)\ge G_{\Omega_n}(x,y;z) \geq 0$. 
Combining this with \eqref{G to R lb}, we obtain \begin{align*} G_{{\mathbb{R}}^3}(x,y;z)-C_0N_nG_{{\mathbb{R}}^3}(x,-x_nN_n;z)\le G_{\Omega_n}(x,y;z)\le G_{{\mathbb{R}}^3}(x,y;z) \end{align*} for $n$ sufficiently large. As $N_n\to0$ and $-x_nN_n\to x_\infty$, this proves the claim in Case~2. Next we consider Case 3. From the condition $N_nd(x_n)\to\infty$ it follows easily that $\tlim\Omega_n={\mathbb{R}}^3$. It remains to show the convergence of the Green's functions. By the maximum principle, $G_{{\mathbb{R}}^3}(x,y;z)\ge G_{\Omega_n}(x,y;z)$; thus, it suffices to prove a suitable lower bound. To this end, let ${\mathbb{H}}_n$ denote the halfspace containing $0$ for which $\partial{\mathbb{H}}_n$ is the hyperplane perpendicularly bisecting the line segment from $0$ to the nearest point on $\partial\Omega_n$. Note that $\dist(0,\partial{\mathbb{H}}_n)\to\infty$ as $n\to\infty$. Given $x\in{\mathbb{R}}^3$ and a compact set $K\subset{\mathbb{R}}^3\setminus\{x\}$, the maximum principle guarantees that \begin{align*} G_{\Omega_n}(x,y;z)\ge G_{{\mathbb{H}}_n}(x,y;z) \qtq{for all} y\in K \qtq{and} z\in(-2,-1), \end{align*} as long as $n$ is large enough that $x\in {\mathbb{H}}_n$ and $K\subset {\mathbb{H}}_n$. Now $$ G_{{\mathbb{H}}_n}(x,y;z)=G_{{\mathbb{R}}^3}(x,y;z)-G_{{\mathbb{R}}^3}(x,y_n;z), $$ where $y_n$ is the reflection of $y$ across $\partial{\mathbb{H}}_n$. Thus, \begin{align*} G_{{\mathbb{H}}_n}(x,y;z)=\frac{e^{-\sqrt{-z}|x-y|}}{4\pi|x-y|} - \frac{e^{-\sqrt{-z}|x-y_n|}}{4\pi|x-y_n|}\to G_{{\mathbb{R}}^3}(x,y;z) \qtq{as} n\to \infty. \end{align*} This completes the treatment of Case 3. It remains to prove the convergence in Case 4, where $\Omega_n=N_nR_n^{-1}(\Omega-\{x_n^*\})$, $N_n\to\infty$, and $N_n d(x_n)\to d_\infty>0$. It is elementary to see that $\tlim \Omega_n = {\mathbb{H}}$; in particular, ${\mathbb{H}}\subset \Omega_n$ for all $n$. 
We need to verify that \begin{align*} G_{\Omega_n}(x,y;z)\to G_{{\mathbb{H}}}(x,y;z)\qtq{for} z\in (-2,-1), \quad x\in {\mathbb{H}}, \end{align*} and uniformly for $y$ in a compact set $K\subset{\mathbb{H}}\setminus\{x\}$. By the maximum principle, $G_{\Omega_n}(x,y;z)\ge G_{\mathbb{H}}(x,y;z)$. On the other hand, we will show that \begin{align}\label{s21} G_{\Omega_n}(x,y;z)\le G_{\mathbb{H}}(x,y;z)+ C{N_n^{-{\varepsilon}}}e^{-\sqrt{-z}x_3}, \end{align} for any $0<{\varepsilon}<\frac 13$ and a large constant $C$ depending on $K$. As $N_n\to\infty$, these two bounds together immediately imply the convergence of the Green's functions. We now prove the upper bound \eqref{s21}. From the maximum principle it suffices to show that this holds just for $x\in \partial\Omega_n$, which amounts to \begin{align}\label{s22} |G_{{\mathbb{H}}}(x,y;z)| \le C {N_n^{-{\varepsilon}}}e^{-\sqrt{-z}x_3} \quad \text{for all } z\in (-2,-1), \ x\in \partial\Omega_n,\text{ and } y\in K. \!\! \end{align} Note that $G_{{\mathbb{H}}}$ is negative for such $x$. Recall also that \begin{align*} G_{{\mathbb{H}}}(x,y;z)=\frac 1{4\pi}\biggl(\frac1{|x-y|}e^{-\sqrt{-z}|x-y|}-\frac 1{|x-\bar y|}e^{-\sqrt{-z}|x-\bar y|}\biggr), \end{align*} where $\bar y=(y^{\perp},-y_3)$ denotes the reflection of $y$ across $\partial{\mathbb{H}}$. If $x\in\partial\Omega_n$ with $|x|\ge N_n^{\varepsilon}$ then we have $|x-y|\sim|x-\bar y|\gtrsim N_n^{\varepsilon}$ for $n$ large and so \begin{align*} |G_{{\mathbb{H}}}(x,y;z)|\le CN_n^{-{\varepsilon}}e^{-\sqrt{-z}x_3}, \end{align*} provided we choose $C \gtrsim \sup_{y\in K} \exp\{\sqrt{2}\,|y_3|\}$. Now suppose $x\in\partial\Omega_n$ with $|x|\le N_n^{{\varepsilon}}$. As the curvature of $\partial\Omega_n$ is $O(N_n^{-1})$, for such points we have $|x_3| \lesssim N_n^{2{\varepsilon}-1}$. Correspondingly, $$ 0 \leq |x-y| - |x-\bar y| = \frac{|x-y|^2 - |x-\bar y|^2}{|x-y|+|x-\bar y|} = \frac{4|x_3||y_3|}{|x-y|+|x-\bar y|} \lesssim_K N_n^{2{\varepsilon}-1}. 
$$ Thus, by the Lipschitz character of $r\mapsto e^{-\sqrt{-z}r}/r$ on compact subsets of $(0,\infty)$, $$ |G_{{\mathbb{H}}}(x,y;z)| \lesssim_K N_n^{2{\varepsilon}-1}. $$ On the other hand, since $|x_3| \lesssim N_n^{2{\varepsilon}-1}\to 0$ as $n\to\infty$, $$ {N_n^{-{\varepsilon}}}e^{-\sqrt{-z}x_3} \gtrsim N_n^{-{\varepsilon}}. $$ As $0<{\varepsilon}<\frac13$, this completes the justification of \eqref{s22} and so the proof of the lemma in Case~4. \end{proof} We conclude this section with two results we will need in Sections~\ref{S:LPD} and~\ref{S:Nonlinear Embedding}. \begin{prop}\label{P:converg} Assume $\Omega_n\to \Omega_\infty$ in the sense of Definition~\ref{D:converg} and let $\Theta\in C_c^{\infty}((0,\infty))$. Then \begin{align}\label{E:P converg1} \|[\Theta(-\Delta_{\Omega_n})-\Theta(-\Delta_{\Omega_\infty})]\delta_y\|_{\dot H^{-1}({\mathbb{R}}^3)} \to 0 \end{align} uniformly for $y$ in compact subsets of $\,\tlim \Omega_n$. Moreover, for any fixed $t\in {\mathbb{R}}$ and $h\in C_c^{\infty}(\tlim \Omega_n)$, we have \begin{align}\label{E:P converg2} \lim_{n\to\infty}\|e^{it\Delta_{\Omega_n}}h-e^{it\Delta_{\Omega_\infty}} h\|_{\dot H^{-1}({\mathbb{R}}^3)}=0. \end{align} \end{prop} \begin{proof} By the Helffer-Sj\"ostrand formula (cf. \cite[p. 172]{HelfferSjostrand}), we may write \begin{align*} \Theta(-\Delta_{{\mathcal O}})(x,y)=\int_{{\mathbb{C}}} G_{{\mathcal O}}(x,y;z)\rho_\Theta(z) \, d{\hbox{Area}}, \end{align*} where $\rho_\Theta\in C_c^\infty({\mathbb{C}})$ with $|\rho_{\Theta}(z)|\lesssim |\Im z|^{20}$. Note that by Lemma~\ref{L:G bnds} this integral is absolutely convergent; moreover, we obtain the following bounds: \begin{align}\label{s18} |\Theta(-\Delta_{{\mathcal O}})(x,y)|\lesssim |x-y|^{-1}\langle x-y\rangle^{-10}, \end{align} uniformly for any domain ${\mathcal O}$.
As $\Omega_n\to \Omega_\infty$, applying dominated convergence in the Helffer-Sj\"ostrand formula also guarantees that $\Theta(-\Delta_{\Omega_n})(x,y)\to \Theta(-\Delta_{\Omega_\infty})(x,y)$ for each $x\in \tilde\Omega_\infty:=\tlim \Omega_n$ fixed and uniformly for $y$ in compact subsets of $\tilde\Omega_\infty\setminus\{x\}$. Combining this with \eqref{s18} and applying the dominated convergence theorem again yields \begin{align*} \|\Theta(-\Delta_{\Omega_n})\delta_y-\Theta(-\Delta_{\Omega_\infty})\delta_y\|_{L_x^{\frac 65}}\to 0, \end{align*} which proves \eqref{E:P converg1} since by Sobolev embedding $L_x^{6/5}({\mathbb{R}}^3)\subseteq \dot H^{-1}_x({\mathbb{R}}^3)$. We turn now to \eqref{E:P converg2}. From the $L^{6/5}_x$-convergence of Littlewood--Paley expansions (cf. \cite[\S4]{KVZ:HA}), we see that given ${\varepsilon}>0$ and $h\in C_c^{\infty}(\tlim \Omega_n)$, there is a smooth function $\Theta:(0,\infty)\to[0,1]$ of compact support so that $$ \| [1 - \Theta(-\Delta_{\Omega_\infty})] h \|_{\dot H^{-1}({\mathbb{R}}^3)} \leq {\varepsilon}. $$ Combining this with \eqref{E:P converg1} we deduce that $$ \limsup_{n\to\infty} \| [1 - \Theta(-\Delta_{\Omega_n})] h \|_{\dot H^{-1}({\mathbb{R}}^3)} \leq {\varepsilon}. $$ In this way, the proof of \eqref{E:P converg2} reduces to showing $$ \lim_{n\to\infty}\bigl\|e^{it\Delta_{\Omega_n}}\Theta(-\Delta_{\Omega_n}) h-e^{it\Delta_{\Omega_\infty}} \Theta(-\Delta_{\Omega_\infty}) h\bigr\|_{\dot H^{-1}({\mathbb{R}}^3)}=0, $$ which follows immediately from \eqref{E:P converg1}. \end{proof} \begin{lem}[Convergence of $\dot H^1_D$ spaces]\label{L:n3} Let $\Omega_n\to \Omega_\infty$ in the sense of Definition~\ref{D:converg}. Then we have \begin{align}\label{n4} \|(-\Delta_{\Omega_n})^{\frac 12} f-(-\Delta_{\Omega_\infty})^{\frac 12} f\|_{L^2({\mathbb{R}}^3)}\to 0 \qtq{for all} f\in C^\infty_c(\tlim\Omega_n).
\end{align} \end{lem} \begin{proof} By the definition of $\tlim\Omega_n$, any $f\in C_c^{\infty}(\tlim \Omega_n)$ obeys $\supp(f)\subseteq\Omega_n$ for all sufficiently large $n$, and for such $n$ we have \begin{align}\label{normequal} \| (-\Delta_{\Omega_n})^{\frac 12} f \|_{L^2({\mathbb{R}}^3)} =\|\nabla f\|_{L^2({\mathbb{R}}^3)} = \| (-\Delta_{\Omega_\infty})^{\frac 12} f \|_{L^2({\mathbb{R}}^3)}. \end{align} Given ${\varepsilon}>0$, there exists $\Theta_{\varepsilon}\in C_c^\infty((0,\infty))$ such that \begin{align*} \sup_{\lambda\in[0,\infty)}\ \biggl|\frac{\sqrt\lambda}{1+\lambda}-\Theta_{\varepsilon}(\lambda) \biggr| <{\varepsilon}. \end{align*} Thus for any $g\in C_c^{\infty}({\mathbb{R}}^3)$, we have \begin{align*} \langle g, (-\Delta_{\Omega_n})^{\frac 12} f\rangle &=\bigl\langle g, \frac{(-\Delta_{\Omega_n})^{\frac12}}{1-\Delta_{\Omega_n}}(1-\Delta) f\bigr\rangle =\langle g, \Theta_{\varepsilon}(-\Delta_{\Omega_n})(1-\Delta)f\rangle +O({\varepsilon}). \end{align*} Using Proposition \ref{P:converg} and the same reasoning, we obtain \begin{align*} \lim_{n\to\infty} \langle g, \Theta_{\varepsilon}(-\Delta_{\Omega_n})&(1-\Delta) f\rangle = \langle g, \Theta_{\varepsilon}(-\Delta_{\Omega_\infty})(1-\Delta)f\rangle = \langle g, (-\Delta_{\Omega_\infty})^{\frac 12} f\rangle + O({\varepsilon}). \end{align*} Putting these two equalities together and using the fact that ${\varepsilon}>0$ was arbitrary, we deduce that $(-\Delta_{\Omega_n})^{\frac 12} f \rightharpoonup (-\Delta_{\Omega_\infty})^{\frac 12} f$ weakly in $L^2({\mathbb{R}}^3)$. Combining this with \eqref{normequal} gives strong convergence in $L^2({\mathbb{R}}^3)$ and so proves the lemma.
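Indeed, expanding the square and invoking \eqref{normequal} together with the weak convergence just established, we find \begin{align*} \bigl\|(-\Delta_{\Omega_n})^{\frac 12} f-(-\Delta_{\Omega_\infty})^{\frac 12} f\bigr\|_{L^2({\mathbb{R}}^3)}^2 &=\bigl\|(-\Delta_{\Omega_n})^{\frac 12} f\bigr\|_{L^2}^2 -2\Re\bigl\langle(-\Delta_{\Omega_n})^{\frac 12} f,\,(-\Delta_{\Omega_\infty})^{\frac 12} f\bigr\rangle +\bigl\|(-\Delta_{\Omega_\infty})^{\frac 12} f\bigr\|_{L^2}^2\\ &\to 2\bigl\|(-\Delta_{\Omega_\infty})^{\frac 12} f\bigr\|_{L^2}^2-2\bigl\|(-\Delta_{\Omega_\infty})^{\frac 12} f\bigr\|_{L^2}^2=0.\end{align*}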
\end{proof} \section{Convergence of linear flows}\label{S:Linear flow convergence} In this section we prove convergence of free propagators in Strichartz spaces, as we rescale and translate the domain $\Omega$ by parameters $N_n\in 2^{\mathbb{Z}}$ and $x_n\in \Omega$ conforming to one of the following three scenarios: \begin{equation}\label{scenarios} \left\{ \ \begin{aligned} &\text{(i) $N_n\to 0$ and $-N_n x_n \to x_\infty\in {\mathbb{R}}^3$}\\ &\text{(ii) $N_n d(x_n)\to \infty$}\\ &\text{(iii) $N_n\to\infty$ and $N_n d(x_n) \to d_\infty>0$.} \end{aligned}\right. \end{equation} Here we use the shorthand $d(x_n)=\dist(x_n, \Omega^c)$. Notice that these are Cases~2--4 discussed in Section~\ref{S:Domain Convergence}. We will not discuss Case~1 of Section~\ref{S:Domain Convergence} here; there is no change in geometry in Case~1, which renders the results of this section self-evident. As seen in Section~\ref{S:Domain Convergence}, the limiting geometry in the first and second scenarios is the whole space ${\mathbb{R}}^3$, while in the third scenario the limiting geometry is the halfspace ${\mathbb{H}}$ (after a suitable normalization). More precisely, in the first and second scenarios writing $\Omega_n=N_n(\Omega-\{x_n\})$, Proposition~\ref{P:convdomain} gives $\Omega_n\to {\mathbb{R}}^3$. In the third scenario, we define $\Omega_n=N_nR_n^{-1}(\Omega-\{x_n^*\})$, where $x_n^*\in\partial\Omega$ and $R_n\in SO(3)$ are chosen so that $d(x_n)=|x_n-x_n^*|$ and $R_n e_3 = \frac{x_n-x_n^*}{|x_n-x_n^*|}$; in this scenario, Proposition~\ref{P:convdomain} gives $\Omega_n\to {\mathbb{H}}=\{x\in{\mathbb{R}}^3:x\cdot e_3>0\}$. The main result in this section is the following: \begin{thm}[Convergence of linear flows in Strichartz spaces]\label{T:LF}\hskip 0em plus 1em Let $\Omega_n$ be as above and let $\Omega_\infty$ be such that $\Omega_n\to \Omega_\infty$. 
Then \begin{align*} \lim_{n\to \infty}\| e^{it\Delta_{\Omega_n}} \psi-e^{it\Delta_{\Omega_\infty}}\psi\|_{L_t^q L_x^r({\mathbb{R}}\times{\mathbb{R}}^3)}=0 \end{align*} for all $\psi\in C_c^{\infty}(\tlim \Omega_n)$ and all pairs $(q,r)$ satisfying $\frac2q+\frac3r=\frac32$ with $2<q<\infty$ and $2<r<6$. \end{thm} In this paper we are considering an energy-critical problem and so need an analogue of this theorem with the corresponding scaling. To this end, we prove the following corollary, which will be used to obtain a linear profile decomposition for the propagator $e^{it\Delta_\Omega}$ in the following section. \begin{cor}[Convergence of linear flows in $L_{t,x}^{10}$]\label{C:LF} Let $\Omega_n$ be as above and let $\Omega_\infty$ be such that $\Omega_n\to \Omega_\infty$. Then \begin{align*} \lim_{n\to \infty}\| e^{it\Delta_{\Omega_n}} \psi-e^{it\Delta_{\Omega_\infty}}\psi\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}=0 \end{align*} for all $\psi\in C_c^{\infty}(\tlim \Omega_n)$. \end{cor} \begin{proof} By H\"older's inequality, \begin{align*} \| e^{it\Delta_{\Omega_n}} \psi-e^{it\Delta_{\Omega_\infty}}\psi\|_{L_{t,x}^{10}} &\lesssim \| e^{it\Delta_{\Omega_n}} \psi-e^{it\Delta_{\Omega_\infty}}\psi\|_{L_{t,x}^{10/3}}^{1/3} \| e^{it\Delta_{\Omega_n}} \psi-e^{it\Delta_{\Omega_\infty}}\psi\|_{L_{t,x}^{\infty}}^{2/3}. \end{align*} The corollary then follows from Theorem~\ref{T:LF} and the following consequence of Sobolev embedding: $$ \| e^{it\Delta_{\Omega_n}} \psi\|_{L_{t,x}^{\infty}} + \| e^{it\Delta_{\Omega_\infty}}\psi\|_{L_{t,x}^{\infty}} \lesssim \| (1-\Delta_{\Omega_n}) \psi \|_{L_t^{\infty} L_x^2} + \| (1-\Delta_{\Omega_\infty}) \psi \|_{L_t^{\infty} L_x^2} \lesssim_\psi 1. $$ Note that the implicit constant here does not depend on $n$, because the domains obey the interior cone condition with uniform constants. \end{proof} The proof of Theorem~\ref{T:LF} will occupy the remainder of this lengthy section. 
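For clarity, we record the elementary H\"older splitting used in the proof of Corollary~\ref{C:LF}: \begin{align*} \|F\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}\le\|F\|_{L_{t,x}^{10/3}({\mathbb{R}}\times{\mathbb{R}}^3)}^{\frac 13}\,\|F\|_{L_{t,x}^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)}^{\frac 23}, \qtq{since} \tfrac1{10}=\tfrac13\cdot\tfrac3{10}+\tfrac23\cdot 0. \end{align*}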
We will consider three different regimes of behaviour for $N_n$ and $x_n$. These do not exactly correspond to the three scenarios above, but rather are dictated by the method of proof. The first such case is when $N_n\to 0$ or $d(x_n)\to \infty$. The limiting geometry in this case is the whole of ${\mathbb{R}}^3$. \begin{thm} \label{T:LF1} Let $\Omega_n=N_n(\Omega-\{x_n\})$ and assume that $N_n\to 0$ or $d(x_n)\to\infty$. Then \begin{align*} \lim_{n\to\infty}\|e^{it\Delta_{\Omega_n}}\psi-e^{it\Delta_{{\mathbb{R}}^3}}\psi\|_{L_t^q L_x^r({\mathbb{R}}\times{\mathbb{R}}^3)}=0 \end{align*} for all $\psi\in C_c^{\infty}(\tlim \Omega_n)$ and all pairs $(q,r)$ satisfying $\frac2q+\frac3r=\frac32$ with $2<q<\infty$ and $2<r<6$. \end{thm} \begin{proof} By interpolation and the Strichartz inequality, it suffices to prove convergence in the symmetric Strichartz space $q=r=\frac{10}3$. To ease notation, we will simply write $-\Delta$ for $-\Delta_{{\mathbb{R}}^3}$. Let $\Theta$ be a smooth radial cutoff such that \begin{align*} \Theta(x)=\begin{cases}0, &|x|\le \frac 14\\1, &|x|\ge \frac12\end{cases} \end{align*} and let $\chi_n(x):=\Theta\bigl(\frac{\dist(x,\Omega_n^c)}{\diam(\Omega_n^c)}\bigr)$. Note that if $N_n\to 0$ then $\diam(\Omega_n^c)\to 0$ and so $\supp(1-\chi_n)$ is a collapsing neighbourhood of the point $-N_n x_n$. On the other hand, if $d(x_n)\to\infty$ then we have $\frac{\dist(0,\Omega_n^c)}{\diam (\Omega_n^c)}\to\infty$. As for $x\in \supp(1-\chi_n)$ we have $\dist(x,\Omega_n^c)\le \frac 12\diam(\Omega_n^c)$, this gives \begin{align*} |x|\ge\dist(0,\Omega_n^c)-\dist(x,\Omega_n^c)\ge\dist(0,\Omega_n^c)-\tfrac 12\diam(\Omega_n^c)\to\infty \qtq{as} n\to \infty. \end{align*} Now fix $\psi\in C_c^{\infty}(\tlim \Omega_n)$. From the considerations above, for $n$ sufficiently large we have $\supp \psi\subseteq\{x\in {\mathbb{R}}^3:\, \chi_n(x)=1\}$. 
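For later use, we record the pointwise identity behind the commutator estimates that follow: \begin{align*} [\Delta,\chi_n]f=\Delta(\chi_n f)-\chi_n\Delta f=(\Delta\chi_n)f+2\nabla\chi_n\cdot\nabla f, \end{align*} so that $[\Delta,\chi_n]$ is a first-order operator whose coefficients are supported where $\chi_n$ is non-constant; this explains the appearance of $\Delta\chi_n$ and $\nabla\chi_n$ in the bounds below.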
Moreover, if $N_n\to 0$ or $d(x_n)\to\infty$, the monotone convergence theorem together with the Strichartz inequality give \begin{align*} \lim_{n\to\infty}\bigl\|(1-\chi_n)e^{it\Delta}\psi\bigr\|_{L_{t,x}^{\frac{10}3} ({\mathbb{R}}\times{\mathbb{R}}^3)}=0. \end{align*} We are thus left to estimate $e^{it\Delta_{\Omega_n}}\psi-\chi_n e^{it\Delta}\psi$. From the Duhamel formula, \begin{align*} e^{it\Delta_{\Omega_n}}\psi=\chi_n e^{it\Delta}\psi+i\int_0^t e^{i(t-s)\Delta_{\Omega_n}}[\Delta, \chi_n] e^{is\Delta}\psi \,ds. \end{align*} Using the Strichartz inequality, we thus obtain \begin{align}\label{1:43} \|e^{it\Delta_{\Omega_n}}\psi-\chi_n e^{it\Delta}\psi\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{R}}^3)} \lesssim\bigl\|[\Delta,\chi_n] e^{it\Delta}\psi\bigr\|_{(L_{t,x}^{\frac{10}7}+L_t^1L_x^2)({\mathbb{R}}\times{\mathbb{R}}^3)}. \end{align} To estimate the right-hand side of \eqref{1:43}, we discuss separately the cases: $(1)$ $N_n\to 0$ and $(2)$ $d(x_n)\to \infty$ with $N_n\gtrsim 1$. In the first case, we estimate \begin{align*} \|&[\Delta, \chi_n]e^{it\Delta}\psi\bigr\|_{L_{t,x}^{\frac{10}7}} \lesssim \Bigl[\|\Delta\chi_n\|_{L_x^{\frac{10}7}}+\|\nabla \chi_n\|_{L_x^{\frac{10}7}}\Bigr] \Bigl[\|e^{it\Delta}\psi\|_{L_t^{\frac{10}7}L_x^\infty} + \|e^{it\Delta}\nabla\psi\|_{L_t^{\frac{10}7}L_x^\infty} \Bigr]\\ &\lesssim\Bigl[\diam(\Omega_n^c)^{-2}+\diam(\Omega_n^c)^{-1}\Bigr]\diam(\Omega_n^c)^{\frac{21}{10}} \Bigl[\|e^{it\Delta}\psi\|_{L_t^{\frac{10}7}L_x^\infty} + \|e^{it\Delta}\nabla\psi\|_{L_t^{\frac{10}7}L_x^\infty} \Bigr] \\ &\lesssim\Bigl[N_n^{\frac 1{10}}+N_n^{\frac{11}{10}}\Bigr] \Bigl[\|e^{it\Delta}\psi\|_{L_t^{\frac{10}7}L_x^\infty} + \|e^{it\Delta}\nabla\psi\|_{L_t^{\frac{10}7}L_x^\infty} \Bigr].
\end{align*} From the dispersive estimate and Sobolev embedding, \begin{align}\label{E:4disp} \|e^{it\Delta}\psi\|_{L_x^{\infty}}\lesssim \langle t\rangle^{-\frac32}\bigl[\|\psi\|_{L_x^1}+\|\psi\|_{H^2_x}\bigr]\lesssim_\psi\langle t\rangle^{-\frac 32}, \end{align} and similarly with $\psi$ replaced by $\nabla\psi$. Thus we obtain \begin{align*} \lim_{n\to\infty}\|[\Delta,\chi_n] e^{it\Delta} \psi\|_{L_{t,x}^{\frac{10}7}({\mathbb{R}}\times{\mathbb{R}}^3)}=0. \end{align*} Consider now the case $N_n\gtrsim 1$ and $d(x_n)\to\infty$. Then \begin{align*} \|[\Delta,\chi_n]e^{it\Delta}\psi\|_{L_t^1L_x^2} &\lesssim \bigl[\|\Delta\chi_n\|_{L_x^{\infty}}+\|\nabla\chi_n\|_{L_x^{\infty}}\bigr]\|e^{it\Delta}\langle\nabla\rangle \psi\|_{L_t^1L_x^2(\dist(x,\Omega_n^c)\sim N_n)}\\ &\lesssim \bigl[N_n^{-2}+N_n^{-1}\bigr]\|e^{it\Delta}\langle\nabla\rangle\psi\|_{L_t^1L_x^2(\dist(x,\Omega_n^c)\sim N_n)}. \end{align*} Using H\"older's inequality and \eqref{E:4disp}, we obtain \begin{align*} \|e^{it\Delta}\langle\nabla\rangle\psi\|_{L_x^2(\dist(x,\Omega_n^c)\sim N_n)} &\lesssim N_n^{\frac 32}\|e^{it\Delta}\langle\nabla\rangle\psi\|_{L_x^\infty} \lesssim_{\psi} N_n^{\frac 32}\langle t\rangle^{-\frac 32}. \end{align*} On the other hand, from the virial identity, \begin{align*} \|xe^{it\Delta}\langle\nabla\rangle\psi\|_{L_x^2}\lesssim_\psi\langle t\rangle \end{align*} and so, \begin{align*} \|e^{it\Delta}\langle\nabla\rangle\psi\|_{L_x^2(\dist(x,\Omega_n^c)\sim N_n)} &\lesssim\frac 1{\dist(0,\Omega_n^c)}\|xe^{it\Delta}\langle \nabla\rangle\psi\|_{L_x^2}\lesssim_\psi\frac{\langle t\rangle}{N_nd(x_n)}. 
\end{align*} Collecting these estimates we obtain \begin{align*} \|e^{it\Delta}\langle\nabla\rangle\psi\|_{L_t^1L_x^2(\dist(x,\Omega_n^c)\sim N_n)} &\lesssim_\psi\int_0^{\infty}\min\biggl\{\frac{N_n^{\frac 32}}{\langle t\rangle^{\frac 32}}, \ \frac{\langle t\rangle}{N_nd(x_n)}\biggr\}\,dt\\ &\lesssim_\psi N_nd(x_n)^{-\frac 15}+\min\bigl\{N_n^{\frac 32}, N_n^{-1}d(x_n)^{-1}\bigr\} \end{align*} and so, \begin{align*} \|[\Delta,\chi_n]e^{it\Delta}\psi\|_{L_t^1L_x^2}\lesssim_\psi d(x_n)^{-\frac 15}+N_n^{-2}d(x_n)^{-1}\to 0 \qtq{as} n\to \infty. \end{align*} This completes the proof of the theorem. \end{proof} Theorem~\ref{T:LF1} settles Theorem~\ref{T:LF} for $N_n$ and $x_n$ conforming to scenario (i) in \eqref{scenarios}, as well as part of scenario (ii). The missing part of the second scenario is $N_n d(x_n)\to \infty$ with $N_n\to \infty$ and $d(x_n)\lesssim 1$. Of course, we also have to prove Theorem~\ref{T:LF} for $N_n$ and $x_n$ conforming to scenario (iii), namely, $N_n d(x_n) \to d_\infty>0$ and $N_n\to\infty$. We will cover these remaining cases in two parts: \begin{SL}\addtocounter{smalllist}{3} \item $N_n\to \infty$ and $1\lesssim N_nd(x_n) \leq N_n^{1/7}$ \item $N_n\to \infty$ and $N_n^{1/7}\leq N_nd(x_n) \lesssim N_n$. \end{SL} Note that in case (iv) the obstacle $\Omega_n^c$ grows in diameter much faster than its distance to the origin. As seen from the origin, the obstacle is turning into a (possibly retreating) halfspace. By comparison, case (v) includes the possibility that the obstacle grows at a rate comparable to its distance to the origin. The two cases will receive different treatments. In Case~(iv), we use a parametrix construction adapted to the halfspace evolution. We also prove that when the halfspace is retreating, the halfspace propagators converge to $e^{it\Delta_{{\mathbb{R}}^3}}$; see Proposition~\ref{P:HtoR}. In Case~(v), the parametrix construction will be inspired by geometric optics considerations and will require a very fine analysis.
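For orientation, we note how the condition defining case (iv) translates under the rescaling ${\varepsilon}:=N_n^{-1}$ and $\delta:=d(x_n)$ employed below: since $N_nd(x_n)=\delta/{\varepsilon}$, \begin{align*} 1\lesssim N_nd(x_n)\le N_n^{\frac 17} \quad\Longleftrightarrow\quad {\varepsilon}\lesssim\delta\le{\varepsilon}^{\frac 67}, \end{align*} which is precisely the parameter regime appearing in Theorem~\ref{T:LF2} below.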
We now turn to the details of the proof of Theorem~\ref{T:LF} in Case~(iv). \subsection{Case (iv)} After rescaling we find ourselves in the setting shown schematically in Figure~\ref{F:case4} below, with ${\varepsilon} = N_n^{-1}$. This restores the obstacle to its original size. We further rotate and translate the problem so that the origin lies on the boundary of the obstacle, the outward normal is $e_3$ at this point, and the wave packet $\psi$ is centered around the point $\delta e_3$. Abusing notation, we will write $\Omega$ for this new rotated/translated domain. As before, we write ${\mathbb{H}}=\{(x_1, x_2,x_3)\in{\mathbb{R}}^3:\, x_3>0\}$; by construction, $\partial{\mathbb{H}}$ is the tangent plane to $\Omega^c$ at the origin. Throughout this subsection, we write $x^{\perp}:=(x_1,x_2)$; also $\bar x:=(x_1,x_2,-x_3)$ denotes the reflection of $x$ in $\partial{\mathbb{H}}$. \begin{figure} \caption{Depiction of Case~(iv); here ${\varepsilon}\leq\delta\leq{\varepsilon}^{6/7}$ and ${\varepsilon}\to 0$.} \label{F:case4} \end{figure} This subsection will primarily be devoted to the proof of the following \begin{thm}\label{T:LF2} Fix $\psi\in C_c^{\infty}({\mathbb{H}} - \{e_3\})$ and let \begin{align*} \psi_{{\varepsilon},\delta}(x):={\varepsilon}^{-\frac 32}\psi\biggl(\frac{ x-\delta e_3}{\varepsilon}\biggr). \end{align*} Then for any pair $(q,r)$ satisfying $\frac2q+\frac3r=\frac32$ with $2<q<\infty$ and $2<r<6$ we have \begin{align}\label{cas4} \|e^{it\Delta_{\Omega({\varepsilon})}}\psi_{{\varepsilon},\delta}-e^{it\Delta_{{\mathbb{H}}}}\psi_{{\varepsilon},\delta}\|_{L_t^q L_x^r({\mathbb{R}}\times{\mathbb{R}}^3)} \to 0 \end{align} as ${\varepsilon}\to0$ with any $\delta=\delta({\varepsilon})$ obeying ${\varepsilon}\leq\delta\leq{\varepsilon}^{6/7}$. Here $\Omega({\varepsilon})$ is any family of affine images (i.e. 
rotations and translations) of $\Omega$ for which ${\mathbb{H}}\subseteq\Omega({\varepsilon})$ and $\partial{\mathbb{H}}$ is the tangent plane to $\Omega({\varepsilon})$ at the origin. \end{thm} Theorem~\ref{T:LF2} gives Theorem~\ref{T:LF} for $N_n$ and $x_n$ conforming to scenario (iii) in \eqref{scenarios}. Indeed, one applies Theorem~\ref{T:LF2} to the function $\tilde\psi(x)=\psi(x+e_3)$ with $\delta={\varepsilon}=N_n^{-1}$ and $\Omega({\varepsilon})=R_n^{-1}(\Omega-\{x_n^*\})$. With the aid of Proposition~\ref{P:HtoR} below, Theorem~\ref{T:LF2} also implies Theorem~\ref{T:LF} for $N_n$ and $x_n$ conforming to scenario (ii) with the additional restriction that $N_n^{1/7}\geq N_nd(x_n) \to \infty$. In this case, we apply Theorem~\ref{T:LF2} to the function $\tilde\psi(x)=\psi_\infty(x-\rho e_3)$ with $\rho=\sup\{|x|:x\in\supp(\psi)\}$, ${\varepsilon}=N_n^{-1}$, $\delta=d(x_n)-{\varepsilon}\rho$, $\Omega({\varepsilon})=R_n^{-1}(\Omega-\{x_n^*\})$, and $\psi_\infty$ being any subsequential limit of $\psi\circ R_n$. As $\psi\circ R_n\to\psi_\infty$ in the $L^2$ sense, the Strichartz inequality controls the resulting errors. \begin{prop}\label{P:HtoR} Fix $\psi\in C_c^{\infty}({\mathbb{H}} - \{e_3\})$ and let \begin{align*} \psi_{{\varepsilon},\delta}(x):={\varepsilon}^{-\frac 32}\psi\biggl(\frac{ x-\delta e_3}{\varepsilon}\biggr). \end{align*} Then for any pair $(q,r)$ satisfying $\frac2q+\frac3r=\frac32$ with $2<q<\infty$ and $2<r<6$ we have \begin{align*} \|e^{it\Delta_{{\mathbb{H}}}}\psi_{{\varepsilon},\delta}-e^{it\Delta_{{\mathbb{R}}^3}}\psi_{{\varepsilon},\delta}\|_{L_t^q L_x^r({\mathbb{R}}\times{\mathbb{R}}^3)} \to 0 \end{align*} as ${\varepsilon}\to0$ with any $\delta=\delta({\varepsilon})$ obeying $\frac{\delta}{{\varepsilon}}\to \infty$. \end{prop} \begin{proof} We will prove the proposition in the special case $q=r=\frac{10}3$.
The result for general exponents follows from the Strichartz inequality and interpolation, or by a simple modification of the arguments that follow. Using the exact formulas for the propagator in ${\mathbb{R}}^3$ and ${\mathbb{H}}$ and rescaling reduces the question to \begin{align}\label{E:H2R1} \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^{\frac{10}3}_{t,x}({\mathbb{R}}\times{\mathbb{H}})} \to 0 \qtq{where} \tilde\psi_{{\varepsilon},\delta}(y) = \psi( \bar y - \tfrac\delta{\varepsilon} e_3 ). \end{align} Notice that $\tilde\psi_{{\varepsilon},\delta}$ is supported deeply inside the complementary halfspace ${\mathbb{R}}^3\setminus{\mathbb{H}}$. For large values of $t$ we estimate as follows: Combining the $L^1_x\to L^\infty_x$ dispersive estimate with mass conservation gives $$ \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^{\frac{10}3}_{x}({\mathbb{R}}^3)} \lesssim |t|^{-3/5} \|\tilde\psi_{{\varepsilon},\delta}\|_{L^1_x}^{\frac25} \|\tilde\psi_{{\varepsilon},\delta}\|_{L^2_x}^{\frac35} \lesssim_\psi |t|^{-3/5}. $$ We use this bound when $|t| \geq T:=\sqrt{\delta/{\varepsilon}}$ to obtain \begin{align*} \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^{\frac{10}3}_{t,x}(\{|t|\geq T\}\times{\mathbb{H}})} &\lesssim_\psi \Bigl( \int_{T}^\infty t^{-2}\,dt\Bigr)^{\frac{3}{10}} \to 0 \qtq{as} {\varepsilon}\to 0. \end{align*} For $|t|\leq T$, we use the virial estimate $$ \bigl\| \bigl( y + \tfrac{\delta}{{\varepsilon}} e_3) e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta} \bigr\|_{L^2_{x}({\mathbb{R}}^3)}^2 \lesssim \bigl\| \bigl( y + \tfrac{\delta}{{\varepsilon}} e_3) \tilde\psi_{{\varepsilon},\delta} \bigr\|_{L^2_{x}({\mathbb{R}}^3)}^2 + t^2 \bigl\| \nabla \tilde\psi_{{\varepsilon},\delta} \bigr\|_{L^2_{x}({\mathbb{R}}^3)}^2 \lesssim_\psi \tfrac{\delta}{{\varepsilon}}. 
$$ This together with the H\"older and Strichartz inequalities gives \begin{align*} \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^{\frac{10}3}_{t,x}(\{|t|\leq T\}\times{\mathbb{H}})} &\lesssim \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^\infty_t L^2_x(\{|t|\leq T\}\times{\mathbb{H}})}^{\frac25} \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta}\|_{L^2_t L^6_x({\mathbb{R}}\times{\mathbb{H}})}^{\frac35} \\ &\lesssim \bigl(\tfrac{{\varepsilon}}{\delta}\bigr)^{\frac25}\bigl\| \bigl( y + \tfrac{\delta}{{\varepsilon}} e_3) e^{it\Delta_{{\mathbb{R}}^3}} \tilde\psi_{{\varepsilon},\delta} \bigr\|_{L^\infty_t L^2_x(\{|t|\leq T\}\times{\mathbb{H}})}^{\frac25} \|\psi\|_{L_x^2}^{\frac35}\\ &\lesssim_\psi \bigl(\tfrac{\varepsilon}\delta\bigr)^{\frac15} \to 0 \qtq{as} {\varepsilon}\to 0. \end{align*} This completes the proof of the proposition. \end{proof} We begin the proof of Theorem~\ref{T:LF2} by showing that we can approximate $\psi_{{\varepsilon},\delta}$ by Gaussians. \begin{lem}[Approximation by Gaussians] \label{lm:exp4} Let $\psi\in C_c^{\infty}({\mathbb{H}}-\{e_3\})$. Then for any $\eta>0$, $0<{\varepsilon}\leq 1$, and $\delta\geq{\varepsilon}$ there exist $N>0$, points $\{y^{(n)}\}_{n=1}^N \subset {\mathbb{H}}$, and constants $\{c_n\}_{n=1}^N\subset {\mathbb{C}}$ such that \begin{align*} \biggl\|\psi_{{\varepsilon},\delta}(x)-\sum_{n=1}^N c_n (2\pi{\varepsilon}^2)^{-\frac34}\Bigl[\exp\bigl\{-\tfrac {|x-\delta y^{(n)}|^2}{4{\varepsilon}^2}\bigr\} -\exp\bigl\{-\tfrac {|x-\delta\bar{ y}^{(n)}|^2}{4{\varepsilon}^2}\bigr\}\Bigr]\biggr\|_{L_x^2({\mathbb{H}})}<\eta. \end{align*} Here, $\bar y^{(n)}$ denotes the reflection of $y^{(n)}$ in $\partial{\mathbb{H}}$. Moreover, we may ensure that $$ \sum_n |c_n| \lesssim_\eta 1 \qtq{and} \sup_n |y^{(n)}-e_3| \lesssim_\eta {\varepsilon}\delta^{-1}, $$ uniformly in ${\varepsilon}$ and $\delta$. 
\end{lem} \begin{proof} Wiener showed that linear combinations of translates of a fixed function in $L^2({\mathbb{R}}^d)$ are dense in this space if and only if the Fourier transform of this function is a.e. non-vanishing. (Note that his Tauberian theorem is the analogous statement for $L^1$.) In this way, we see that we can choose vectors $z^{(n)}\in {\mathbb{R}}^3$ and numbers $\tilde c_n$ so that \begin{align*} \biggl\| \psi(x) - \sum_{n=1}^N \tilde c_n (2\pi)^{-\frac 34} e^{-\frac{|x-z^{(n)}|^2}4}\biggr\|_{L_x^2({\mathbb{R}}^3)}<\tfrac12 \eta. \end{align*} Rescaling, translating, and combining with the reflected formula, we deduce immediately that \begin{align*} \biggl\| \psi_{{\varepsilon},\delta}(x) - \psi_{{\varepsilon},\delta}(\bar x) - \sum_{n=1}^N c_n (2\pi{\varepsilon}^2)^{-\frac34} \Bigl[ e^{-\frac {|x-\delta y^{(n)}|^2}{4{\varepsilon}^2}} - e^{-\frac {|x-\delta\bar{ y}^{(n)}|^2}{4{\varepsilon}^2}}\Bigr] \biggr\|_{L_x^2({\mathbb{R}}^3)}<\eta, \end{align*} where $y^{(n)} = {\varepsilon}\delta^{-1} z^{(n)} + e_3$ and $c_n=\tilde c_n$ when $y^{(n)}\in {\mathbb{H}}$; otherwise we set $\bar y^{(n)} = {\varepsilon}\delta^{-1} z^{(n)} + e_3$ and $c_n= - \tilde c_n$, which ensures $y^{(n)} \in {\mathbb{H}}$. As $\psi_{\delta,{\varepsilon}}(x)$ is supported wholely inside ${\mathbb{H}}$, so $\psi_{\delta,{\varepsilon}}(\bar x)$ vanishes there. Thus the lemma now follows. \end{proof} By interpolation and the Strichartz inequality, it suffices to prove Theorem~\ref{T:LF2} for the symmetric Strichartz pair $q=r=\frac{10}3$. Also, to ease notation, we simply write $\Omega$ for $\Omega({\varepsilon})$ in what follows. 
Combining Lemma~\ref{lm:exp4} with the Strichartz inequality for both propagators $e^{it\Delta_{\Omega}}$ and $e^{it\Delta_{{\mathbb{H}}}}$, we obtain \begin{align*} &\|e^{it\Delta_{\Omega}}\psi_{{\varepsilon}, \delta}-e^{it\Delta_{{\mathbb{H}}}}\psi_{{\varepsilon},\delta}\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}\\ &\leq \sum_{n=1}^N |c_n|\Bigl\|\bigl[e^{it\Delta_{\Omega}}\chi_{{\mathbb{H}}}-e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}}\bigr] (2\pi{\varepsilon}^2)^{-\frac34}\bigl[e^{-\frac{|x-\delta y^{(n)}|^2}{4{\varepsilon}^2}}-e^{-\frac{|x-\delta\bar y^{(n)}|^2}{4{\varepsilon}^2}}\bigr]\Bigr\|_{L_{t,x}^{\frac{10}3} ({\mathbb{R}}\times\Omega)}\\ &\qquad+C\biggl\|\psi_{{\varepsilon},\delta} -\sum_{n=1}^N c_n(2\pi{\varepsilon}^2)^{-\frac34}\bigl[e^{-\frac{|x-\delta y^{(n)}|^2}{4{\varepsilon}^2}}-e^{-\frac{|x-\delta \bar y^{(n)}|^2}{4{\varepsilon}^2}}\bigr]\biggr\|_{L^2({\mathbb{H}})}. \end{align*} Therefore, Theorem~\ref{T:LF2} is reduced to showing \begin{align}\label{fr} \Bigl\|\bigl[e^{it\Delta_{\Omega}}\chi_{{\mathbb{H}}}-e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}}\bigr](2\pi{\varepsilon}^2)^{-\frac34}\bigl[e^{-\frac {|x-\delta y|^2}{4{\varepsilon}^2}} -e^{-\frac {|x-\delta\bar{y}|^2}{4{\varepsilon}^2}}\bigr]\Bigr\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}=o(1) \end{align} as ${\varepsilon}\to0$ with any $\delta=\delta({\varepsilon})$ obeying ${\varepsilon}\leq\delta\leq{\varepsilon}^{6/7}$, and $y$ as in Lemma~\ref{lm:exp4}. Next we show that we can further simplify our task to considering only $y\in{\mathbb{H}}$ of the form $y=(0,0,y_3)$ in the estimate \eqref{fr}. Given $y\in{\mathbb{H}}$ with $|y-e_3|\lesssim{\varepsilon}\delta^{-1}$ that is not of this form, let ${\mathbb{H}}_y$ denote the halfspace containing $\delta y$ with $\partial{\mathbb{H}}_y$ being the tangent plane to $\partial\Omega$ at the point nearest $\delta y$. Moreover, let $\delta\tilde y$ be the reflection of $\delta y$ in $\partial{\mathbb{H}}_y$.
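The following elementary bound will be used to verify \eqref{619} below; it follows by writing the difference of Gaussians as the integral of a gradient along the segment joining the two centers: for any $a,b\in{\mathbb{R}}^3$, \begin{align*} {\varepsilon}^{-\frac 32}\bigl\|e^{-\frac{|x-a|^2}{4{\varepsilon}^2}}-e^{-\frac{|x-b|^2}{4{\varepsilon}^2}}\bigr\|_{L_x^2({\mathbb{R}}^3)} \le {\varepsilon}^{-\frac 32}\,|a-b|\,\bigl\|\nabla e^{-\frac{|x|^2}{4{\varepsilon}^2}}\bigr\|_{L_x^2({\mathbb{R}}^3)} \lesssim \frac{|a-b|}{{\varepsilon}}, \end{align*} since $\|\nabla e^{-|x|^2/4{\varepsilon}^2}\|_{L^2_x({\mathbb{R}}^3)}\sim{\varepsilon}^{\frac 12}$. In particular, shifting the center by $o({\varepsilon})$ produces an error that is $o(1)$ in $L^2$ after the ${\varepsilon}^{-3/2}$ normalization.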
Elementary geometric considerations show that the angle between $\partial{\mathbb{H}}$ and $\partial{\mathbb{H}}_y$ is $O({\varepsilon})$. Correspondingly, $|\delta\tilde y - \delta\bar y|\lesssim\delta{\varepsilon}$ and so \begin{align}\label{619} {\varepsilon}^{-\frac 32}\bigl\|e^{-\frac {|x-\delta \bar y|^2}{4{\varepsilon}^2}}-e^{-\frac{|x-\delta\tilde y|^2}{4{\varepsilon}^2}}\bigr\|_{L^2({\mathbb{R}}^3)}\to 0 \qtq{as} {\varepsilon} \to 0. \end{align} As we will explain, \eqref{fr} (and so Theorem~\ref{T:LF2}) follows by combining \eqref{619} with the Strichartz inequality and Proposition~\ref{P:LF2} below. Indeed, the only missing ingredient is the observation that $$ {\varepsilon}^{-\frac32} \Bigl\| e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}} \bigl[e^{-\frac {|x-\delta y|^2}{4{\varepsilon}^2}} -e^{-\frac {|x-\delta\bar{y}|^2}{4{\varepsilon}^2}}\bigr] - e^{it\Delta_{{\mathbb{H}}_y}}\chi_{{\mathbb{H}}_y}\bigl[e^{-\frac {|x-\delta y|^2}{4{\varepsilon}^2}} -e^{-\frac {|x-\delta\tilde{y}|^2}{4{\varepsilon}^2}}\bigr] \Bigr\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{R}}^3)} $$ is $o(1)$ as ${\varepsilon}\to0$, which follows from \eqref{619} and the exact formula for the propagator in halfspaces. Therefore, it remains to justify the following proposition, whose proof will occupy the remainder of this subsection. \begin{prop} \label{P:LF2} We have \begin{align}\label{fn4} \Bigl\|\bigl[e^{it\Delta_{\Omega}}\chi_{{\mathbb{H}}}-e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}}\bigr](2\pi{\varepsilon}^2)^{-\frac34}\bigl[e^{-\frac {|x-\delta y|^2}{4{\varepsilon}^2}} -e^{-\frac {|x-\delta\bar{y}|^2}{4{\varepsilon}^2}}\bigr]\Bigr\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}=o(1) \end{align} as ${\varepsilon}\to0$ with any $\delta=\delta({\varepsilon})$ obeying ${\varepsilon}\leq\delta\leq{\varepsilon}^{6/7}$, uniformly for $y=(0,0,y_3)$ and $y_3$ in a compact subset of $(0,\infty)$. 
\end{prop} \begin{proof} To prove \eqref{fn4}, we will build a parametrix for the evolution in $\Omega$ and show that this differs little from the evolution in ${\mathbb{H}}$, for which we have an exact formula: \begin{align*} &(2\pi{\varepsilon}^2)^{-\frac 34} e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}}\bigl[e^{-\frac{|x-\delta y|^2}{4{\varepsilon}^2}}-e^{-\frac {|x-\delta\bar y|^2}{4{\varepsilon}^2}}\bigr] =(2\pi)^{-\frac 34}\bigl(\tfrac{{\varepsilon}}{{\varepsilon}^2+it}\bigr)^{\frac 32} \bigl[e^{-\frac {|x-\delta y|^2}{4({\varepsilon}^2+it)}} -e^{-\frac {|x-\delta\bar y|^2}{4({\varepsilon}^2+it)}}\bigr], \end{align*} for all $t\in {\mathbb{R}}$ and $x\in {\mathbb{H}}$. We write \begin{align*} u(t,x):=(2\pi)^{-\frac 34}\biggl(\frac {\varepsilon}{{\varepsilon}^2+it}\biggr)^{\frac 32}e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}}, \end{align*} and so for all $t\in{\mathbb{R}}$ and $x\in{\mathbb{H}}$, \begin{align*} (2\pi{\varepsilon}^2)^{-\frac 34} e^{it\Delta_{{\mathbb{H}}}}\chi_{{\mathbb{H}}}\bigl[e^{-\frac{|x-\delta y|^2}{4{\varepsilon}^2}}-e^{-\frac {|x-\delta\bar y|^2}{4{\varepsilon}^2}}\bigr] =u(t,x)-u(t,\bar x). \end{align*} We start by showing that a part of the halfspace evolution does not contribute to the $L_{t,x}^{10/3}$ norm. Let $\phi:[0, \infty)\to {\mathbb{R}}$ and $\theta:{\mathbb{R}}\to {\mathbb{R}}$ be smooth functions such that \begin{align*} \phi(r)=\begin{cases} 0, & 0\le r\le \frac 12\\ 1, & r\geq1\end{cases} \quad \qtq{and}\quad \theta(r)=\begin{cases} 1, & r\le 0 \\ 0, &r\geq 1. \end{cases} \end{align*} We define \begin{align*} v(t,x):=\bigl[u(t,x)-u(t, \bar x)\bigr]\Bigl[1-\phi\Bigl(\frac{x_1^2+x_2^2}{\varepsilon}\Bigr)\theta\Bigl(\frac{x_3}{\varepsilon}\Bigr)\Bigr]\chi_{\{x_3\ge-\frac 12\}}. \end{align*} We will prove that $v$ is a good approximation for the halfspace evolution. \begin{figure} \caption{The role of the cutoffs defining $v(t,x)$. The cutoff function takes values between $0$ and $1$ in the shaded region. 
We depict only one half of a cross-section; one obtains the full 3D figure by rotating about the~$x_3$-axis.} \label{F:v} \end{figure} \begin{lem} \label{L:v matters} We have \begin{align*} \|u(t,x)-u(t,\bar x)-v(t,x)\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{H}})}=o(1) \end{align*} as ${\varepsilon}\to0$ with any $\delta=\delta({\varepsilon})$ obeying ${\varepsilon}\leq\delta\leq{\varepsilon}^{6/7}$, uniformly for $y=(0,0,y_3)$ and $y_3$ in a compact subset of $(0,\infty)$. \end{lem} \begin{proof} By the definition of $v$, we have to prove \begin{align*} \biggl\|\bigl[u(t,x)-u(t,\bar x)\bigr]\phi\biggl(\frac{x_1^2+x_2^2}{\varepsilon}\biggr)\theta\biggl(\frac{x_3}{\varepsilon}\biggr)\biggr\|_ {L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{H}})}=o(1) \qtq{as} {\varepsilon}\to 0, \end{align*} which, considering the supports of $\phi$ and $\theta$, amounts to showing \begin{align}\label{pts} &\|u(t,x)\|_{L_{t,x}^{\frac{10}3}(|x^{\perp}|\ge \sqrt{{\varepsilon}/2},\ 0\le x_3\le {\varepsilon})}+\| u(t,\bar x)\|_{L_{t,x}^{\frac{10}3}(|x^{\perp}|\ge \sqrt{{\varepsilon}/2},\ 0\le x_3\le {\varepsilon})}=o(1). \end{align} We only prove \eqref{pts} for $u(t,x)$ with $t\in[0,\infty)$; the proof for negative times and for $u(t,\bar x)$ is similar. Let $T:={\varepsilon}^2\log(\frac 1{\varepsilon})$. We will consider separately the short time contribution $[0,T]$ and the long time contribution $[T, \infty)$. The intuition is that for short times the wave packet does not reach the cutoff, while for large times the wave packet has already disintegrated. Thus, we do not need to take advantage of the cancelation between $u(t,x)$ and $u(t,\bar x)$. We start with the long time contribution. 
A simple change of variables yields \begin{align*} \int_T^\infty\int_{{\mathbb{R}}^3} |u(t,x)|^{\frac{10}3} \,dx\,dt &\lesssim\int_T^\infty\int_{{\mathbb{R}}^3}\biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac52} e^{-\frac{5{\varepsilon}^2|x-\delta y|^2}{6({\varepsilon}^4+t^2)}} \,dx \,dt\\ &\lesssim \int_T^\infty \biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac52-\frac 32} \,dt\lesssim{\varepsilon}^2\int_T^\infty t^{-2} \,dt\lesssim \log^{-1}(\tfrac 1{\varepsilon}). \end{align*} For short times, we estimate \begin{align*} \int_0^T\int_{|x^{\perp}|\ge\sqrt{{\varepsilon}/2},0\le x_3\le{\varepsilon}}|u(t,x)|^{\frac{10}3} \,dx\,dt &\lesssim {\varepsilon}\int_0^T \biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac52}\int_{\sqrt{{\varepsilon}/2}}^\infty e^{-\frac{5{\varepsilon}^2r^2}{6({\varepsilon}^4+t^2)}}r \,dr\,dt\\ &\lesssim {\varepsilon}\int_0^T \biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac52-1}e^{-\frac{5{\varepsilon}^3}{12({\varepsilon}^4+t^2)}} \,dt\\ &\lesssim {\varepsilon} e^{-\frac{5 {\varepsilon}^3}{24 {\varepsilon}^4\log^2(\frac1{\varepsilon})}}\int_0^T \biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac 32} \,dt\\ &\le {\varepsilon}^{100}. \end{align*} This completes the proof of the lemma. \end{proof} In view of Lemma~\ref{L:v matters}, Proposition~\ref{P:LF2} reduces to showing \begin{align}\label{compl} \Bigl\| (2\pi{\varepsilon}^2)^{-\frac 34} e^{it\Delta_\Omega}\chi_{{\mathbb{H}}}\bigl[e^{-\frac{|x-\delta y|^2}{4{\varepsilon}^2}}-e^{-\frac{|x-\delta\bar y|^2}{4{\varepsilon}^2}}\bigr]-v(t,x)\Bigr\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}=o(1).
\end{align} To achieve this, we write \begin{align*} (2\pi{\varepsilon}^2)^{-\frac 34}e^{it\Delta_\Omega}\chi_{{\mathbb{H}}}\bigl[e^{-\frac{|x-\delta y|^2}{4{\varepsilon}^2}}-e^{-\frac{|x-\delta\bar y|^2}{4{\varepsilon}^2}}\bigr] =v(t,x)-w(t,x)-r(t,x), \end{align*} where $w$ is essentially $v$ evaluated on the boundary of $\Omega$ and $r(t,x)$ is the remainder term. More precisely, \begin{align*} w(t,x):=\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\biggl[1-\phi\biggl(\frac{x_1^2+x_2^2}{\varepsilon}\biggr)\theta\biggl(\frac{x_3}{\varepsilon}\biggr)\biggr] \theta\biggl(\frac{\dist(x,\Omega^c)}{\varepsilon}\biggr)\chi_{\{x_3\ge -\frac 12\}}, \end{align*} where $x_*$ denotes the point on $\partial\Omega$ such that $x_*^{\perp}=x^\perp$ and $\bar x_*$ denotes the reflection of $x_*$ in $\partial{\mathbb{H}}$. Note that for $x\in\partial\Omega$, we have $w(t,x)=v(t,x)$. Thus, on ${\mathbb{R}}\times\Omega$ the remainder $r(t,x)$ satisfies \begin{align*} (i\partial_t+\Delta_\Omega )r=(i\partial_t+\Delta )(v-w). \end{align*} Therefore, by the Strichartz inequality, \eqref{compl} will follow from \begin{align}\label{E:case4 estimates} \|w\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)} + \|(i\partial_t+\Delta)v\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)} + \|(i\partial_t+\Delta)w\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}=o(1). \end{align} \begin{figure} \caption{The shaded area indicates the support of $w(t,x)$. 
As in Figure~\ref{F:v} we depict only one half of a cross-section.} \label{F:w} \end{figure} To prove \eqref{E:case4 estimates}, we will make repeated use of the following \begin{lem} For $\alpha\geq 0$, \begin{align} \int_{|x^\perp|\le \sqrt {\varepsilon}}|x^\perp|^{\alpha}e^{-\frac{{\varepsilon}^2|x^\perp|^2}{2({\varepsilon}^4+t^2)}} \,dx^\perp &\lesssim \min\biggl\{\biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}2},{\varepsilon}^{\frac{\alpha+2}2}\biggr\}\label{estsmall}\\ \int_{|x^{\perp}|\ge \sqrt{{\varepsilon}/2}}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}}|x^{\perp}|^\alpha \,dx^{\perp} &\lesssim\biggl(\frac{\eps^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}2}\min\biggl\{1,\biggl(\frac{\eps^4+t^2}{{\varepsilon}^3}\biggr)^{20}\biggr\}.\label{estbig} \end{align} In particular, for $\alpha\geq0$, $\beta>\frac12$, and $\gamma=\min\{3-4\beta+\frac\alpha2,2-3\beta+\frac\alpha4\}$, \begin{align}\label{512} \int_0^{\infty}({\varepsilon}^4+t^2)^{-\beta}\biggl(\int_{|x^\perp|\le\sqrt{{\varepsilon}}}|x^\perp|^{\alpha}e^{-\frac{{\varepsilon}^2|x^\perp|^2}{2({\varepsilon}^4+t^2)}}\,d x^\perp\biggr)^{\frac 12} \,dt\lesssim {\varepsilon}^\gamma. \end{align} Moreover, for $\frac12<\beta<10$, \begin{align} \label{514} \int_0^\infty({\varepsilon}^4+t^2)^{-\beta}\min\biggl\{1, \biggl(\frac {{\varepsilon}^4+t^2}{{\varepsilon}^3}\biggr)^{10}\biggr\}\,dt\lesssim {\varepsilon}^{\frac32-3\beta}.
\end{align} \end{lem} \begin{proof} Passing to polar coordinates, we estimate \begin{align*} \text{LHS}\eqref{estsmall}=\int_0^{\sqrt{\varepsilon}}e^{-\frac{{\varepsilon}^2r^2}{2({\varepsilon}^4+t^2)}} r^{\alpha+1} \,dr &\lesssim \biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}2}\int_0^{\frac{{\varepsilon}^{\frac32}}{\sqrt{{\varepsilon}^4+t^2}}} e^{-\frac {\rho^2}2} \rho^{\alpha+1} \,d\rho\\ &\lesssim \biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}2}\min\biggl\{1,\biggl(\frac{{\varepsilon}^{\frac32}}{\sqrt{{\varepsilon}^4+t^2}}\biggr)^{\alpha+2}\biggr\}, \end{align*} which settles \eqref{estsmall}. The proof of \eqref{estbig} follows along similar lines. Using \eqref{estsmall}, we estimate \begin{align*} \text{LHS}\eqref{512} &\lesssim \int_0^\infty({\varepsilon}^4+t^2)^{-\beta}\min\biggl\{\biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{\alpha+2}4},\,{\varepsilon}^{\frac{\alpha+2}4}\biggr\} \,dt\\ &\lesssim \int_0^{{\varepsilon}^{\frac 32}}({\varepsilon}^4+t^2)^{-\beta+\frac{\alpha+2}4}{\varepsilon}^{-\frac{\alpha+2}2} \,dt +\int_{{\varepsilon}^{\frac 32}}^\infty({\varepsilon}^4+t^2)^{-\beta}{\varepsilon}^{\frac{\alpha+2}4} \,dt\\ &\lesssim {\varepsilon}^{-\frac{\alpha+2}2}\int_0^{{\varepsilon}^2} {\varepsilon}^{-4\beta+\alpha+2}\,dt + {\varepsilon}^{-\frac{\alpha+2}2}\int_{{\varepsilon}^2}^{{\varepsilon}^{\frac32}}t^{-2\beta+\frac{\alpha+2}2}\, dt +{\varepsilon}^{\frac{\alpha+2}4}{\varepsilon}^{\frac 32(1-2\beta)}\\ &\lesssim {\varepsilon}^{\frac{\alpha+2}2} {\varepsilon}^{2-4\beta}+ {\varepsilon}^{-\frac{\alpha+2}2}{\varepsilon}^{\frac 32(-2\beta+\frac{\alpha+2}2+1)}+{\varepsilon}^{\frac{\alpha+2}4}{\varepsilon}^{\frac 32(1-2\beta)}\\ &\lesssim {\varepsilon}^{3-4\beta+\frac\alpha2} +{\varepsilon}^{2-3\beta+\frac\alpha4}. \end{align*} To establish \eqref{514} one argues as for \eqref{512}; we omit the details. 
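In brief, splitting the $t$-integral at ${\varepsilon}^{\frac32}$ as before, and using $\beta<10$ together with ${\varepsilon}^4+t^2\lesssim{\varepsilon}^3$ on the first piece and $\beta>\frac12$ on the second, one finds
\begin{align*}
\text{LHS}\eqref{514}
&\lesssim \int_0^{{\varepsilon}^{\frac32}}\frac{({\varepsilon}^4+t^2)^{10-\beta}}{{\varepsilon}^{30}}\,dt+\int_{{\varepsilon}^{\frac32}}^\infty t^{-2\beta}\,dt
\lesssim {\varepsilon}^{\frac32}\,{\varepsilon}^{3(10-\beta)-30}+{\varepsilon}^{\frac32(1-2\beta)}\lesssim{\varepsilon}^{\frac32-3\beta}.
\end{align*}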
\end{proof} We are now ready to prove \eqref{E:case4 estimates}, which will complete the proof of Proposition~\ref{P:LF2}. We will estimate each of the three summands appearing on the left-hand side of \eqref{E:case4 estimates}. We start with the first one. \begin{lem}[Estimate for $w$]\label{L:we} We have \begin{align}\label{we} \|w\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}\lesssim \delta{\varepsilon}^{-\frac15}. \end{align} \end{lem} \begin{proof} We first obtain a pointwise bound for $w$. Note that on the support of $w$, \begin{align*} |x^\perp|\le {\varepsilon}^{\frac 12}, \quad |x_3|\lesssim {\varepsilon}, \qtq{and} |x_{*3}|\lesssim |x^\perp|^2\lesssim {\varepsilon}, \end{align*} where the last two estimates follow from the finite curvature assumption. Here we use the notation $x_{*3}:=x_*\cdot e_3$. Thus, using the fact that $|\bar x_* -\delta y|=|x_*+\delta y|$ and the mean value theorem, on the support of $w$ we have \begin{align}\label{dif} \biggl| e^{-\frac{|x_*-\delta y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|\bar x_*-\delta y|^2}{4({\varepsilon}^2+it)}}\biggr| &=\biggl| e^{-\frac{|x^\perp|^2}{4({\varepsilon}^2+it)}}\biggl(e^{-\frac{|x_{*3}-\delta y_3|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|x_{*3}+\delta y_3|^2}{4({\varepsilon}^2+it)}}\biggr)\biggr|\notag\\ &\lesssim e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4({\varepsilon}^4+t^2)}}\frac\delta{\sqrt{{\varepsilon}^4+t^2}}|x_{*3}|\notag\\ &\lesssim \delta {({\varepsilon}^4+t^2)}^{-\frac12} |x^\perp|^2 e^{-\frac{{\varepsilon}^2|x^\perp|^2}{4({\varepsilon}^4+t^2)}}. \end{align} Therefore, \begin{align}\label{ptw} |w(t,x)|\le |u(t,x_*)-u(t,\bar x_*)|\lesssim \delta{\varepsilon}^{\frac32}({\varepsilon}^4+t^2)^{-\frac 54} |x^\perp|^2e^{-\frac{{\varepsilon}^2|x^\perp|^2}{4({\varepsilon}^4+t^2)}}. 
\end{align} To control the $L_{t,x}^{\frac {10}3}$ norm of $w$ we use \eqref{ptw} together with \eqref{estsmall}, as follows: \begin{align*} \int_{\mathbb{R}}\int_\Omega|w(t,x)|^{\frac{10}3} \,dx\,dt &\lesssim \delta^{\frac{10}3}{\varepsilon}^{-\frac73}\int _0^{\infty}\biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac{25}6} \!\! \int_{|x^\perp|\le{\varepsilon}^{\frac 12}} e^{-\frac{5{\varepsilon}^2|x^\perp|^2}{6({\varepsilon}^4+t^2)}}|x^\perp|^{\frac {20}3} \,dx^\perp dt\\ &\lesssim \delta^{\frac{10}3}{\varepsilon}^{-\frac73}\int_0^{\infty}\biggl(\frac{{\varepsilon}^2}{{\varepsilon}^4+t^2}\biggr)^{\frac{25}6}\min\biggl\{\biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^2}\biggr)^{\frac{13}3},{\varepsilon}^{\frac{13}3}\biggr\}\,dt\\ &\lesssim \delta^{\frac{10}3}{\varepsilon}^{-\frac 83}\int_0^{{\varepsilon}^{\frac 32}}({\varepsilon}^4+t^2)^{\frac 16}\,dt+\delta^{\frac{10}3}{\varepsilon}^{\frac{31}3}\int_{{\varepsilon}^{\frac 32}}^\infty({\varepsilon}^4+t^2)^{-\frac{25}6}\,dt\\ &\lesssim \delta^{\frac{10}3}{\varepsilon}^{-\frac 23}. \end{align*} This completes the proof of the lemma. \end{proof} \begin{lem}[Estimate for $(i\partial_t+\Delta)v$]\label{L:ve} We have \begin{align}\label{ve} \|(i\partial_t+\Delta)v\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}\lesssim \delta {\varepsilon}^{-\frac34}. 
\end{align} \end{lem} \begin{proof} Using the definition of $v(t,x)$, we compute \begin{align} (i\partial_t+\Delta)v(t,x) &=(i\partial_t+\Delta)\Bigl\{\bigl[u(t,x)-u(t,\bar x)\bigr]\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\chi_{\{x_3\ge -\frac 12\}}\Bigr\}\notag\\ &=\bigl[u(t,x)-u(t,\bar x)\bigr]\Delta\Bigl\{\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\chi_{\{x_3\ge-\frac 12\}}\Bigr\}\label{1v}\\ &\quad+2\nabla\bigl[u(t,x)-u(t,\bar x)\bigr]\cdot \nabla\Bigl\{\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\chi_{\{x_3\ge -\frac 12\}}\Bigr\}.\label{2v} \end{align} We first consider the contribution of \eqref{1v}. A direct analysis yields that for $x\in \Omega$ in the support of \eqref{1v}, \begin{align*} |x_3|\lesssim {\varepsilon}, \quad |x^{\perp}|\ge \sqrt{{\varepsilon}/2}, \qtq{and} \Bigl|\Delta\Bigl\{\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\chi_{\{x_3\ge -\frac 12\}}\Bigr\}\Bigr|\lesssim {\varepsilon}^{-2}. 
\end{align*} Thus, by the mean value theorem, \begin{align} \biggl| e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|\bar x-\delta y|^2}{4({\varepsilon}^2+it)}}\biggr| &\lesssim e^{-\frac{{\varepsilon}^2|x^\perp|^2}{4({\varepsilon}^4+t^2)} }\frac{\delta|x_3|}{\sqrt{{\varepsilon}^4+t^2}} \lesssim \delta{\varepsilon}({\varepsilon}^4+t^2)^{-\frac12}e^{-\frac{{\varepsilon}^2|x^\perp|^2}{4({\varepsilon}^4+t^2)}}.\label{7c} \end{align} This yields the pointwise bound \begin{align} \eqref{1v}\lesssim {\varepsilon}^{-2}|u(t,x)-u(t,\bar x)|\lesssim \delta{\varepsilon}^{\frac 12}({\varepsilon}^4+t^2)^{-\frac54}e^{-\frac{{\varepsilon}^2|x^\perp|^2}{4({\varepsilon}^4+t^2)}}.\label{p1v} \end{align} Using \eqref{p1v} followed by \eqref{estbig} (with $\alpha=0$) and \eqref{514} (with $\beta=\frac 34$), we obtain \begin{align*} \|\eqref{1v}\|_{L_t^1 L_x^2({\mathbb{R}}\times\Omega)} & \lesssim {\varepsilon}^{\frac12}\delta{\varepsilon}^{\frac 12} \int_0^{\infty}({\varepsilon}^4+t^2)^{-\frac 54}\biggl(\int_{|x^\perp|\ge \sqrt{{\varepsilon}/2}}e^{-\frac{{\varepsilon}^2|x^\perp|^2}{2({\varepsilon}^4+t^2)}}\,dx^\perp\biggr)^{\frac 12} \,dt\\ &\lesssim \delta\int_0^{\infty}({\varepsilon}^4+t^2)^{-\frac 54+\frac12}\min\biggl\{1,\biggl(\frac{{\varepsilon}^4+t^2}{{\varepsilon}^3}\biggr)^{10}\biggr\}\,dt\\ &\lesssim \delta{\varepsilon}^{-\frac 34}. \end{align*} We now consider the contribution of \eqref{2v}. For $x\in \Omega$ in the support of \eqref{2v}, we have \begin{align*} |x_3|\lesssim {\varepsilon}, \quad |x^{\perp}|\ge \sqrt{{\varepsilon}/2}, \qtq{and} \Bigl|\nabla\Bigl\{\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\chi_{\{x_3\ge -\frac 12\}}\Bigr\}\Bigr|\lesssim {\varepsilon}^{-1}. 
\end{align*} Using that $|x-\delta \bar y|=|\bar x-\delta y|$, we compute \begin{align*} \nabla \biggl(e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|\bar x- \delta y|^2}{4({\varepsilon}^2+it)}}\biggr) &=-\frac{x-\delta y}{2({\varepsilon}^2+it)}e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}}+ \frac{x-\delta\bar y}{2({\varepsilon}^2+it)}e^{-\frac{|x-\delta\bar y|^2}{4({\varepsilon}^2+it)}}\\ &=-\frac x{2({\varepsilon}^2+it)}\biggl(e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|x-\delta \bar y|^2}{4({\varepsilon}^2+it)}}\biggr)\\ &\quad+ \frac{\delta y_3 e_3}{2({\varepsilon}^2+it)}\biggl(e^{-\frac{|x-\delta y|^2}{4({\varepsilon}^2+it)}}+e^{-\frac{|x-\delta \bar y|^2}{4({\varepsilon}^2+it)}}\biggr). \end{align*} Thus, for $x\in \Omega$ in the support of \eqref{2v} we have \begin{align*} \bigl|\nabla\bigl[ & u(t,x)-u(t,\bar x)\bigr]\bigr|\\ &\lesssim\biggl(\frac{{\varepsilon}^2}{\eps^4+t^2}\biggr)^{\frac34}\biggl\{\frac{|x|}{\sqrt{\eps^4+t^2}}\frac{{\varepsilon}\delta}{\sqrt{\eps^4+t^2}}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}+\frac{\delta}{\sqrt{\eps^4+t^2}}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\biggr\} \\ &\lesssim\Bigl\{{\varepsilon}^{\frac 72}\delta(\eps^4+t^2)^{-\frac 74}+{\varepsilon}^{\frac 52}\delta(\eps^4+t^2)^{-\frac 74}|x^{\perp}|+{\varepsilon}^{\frac 32}\delta(\eps^4+t^2)^{-\frac 54}\Bigr\}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\\ &\lesssim \Bigl\{{\varepsilon}^{\frac 32}\delta(\eps^4+t^2)^{-\frac 54}+{\varepsilon}^{\frac52}\delta(\eps^4+t^2)^{-\frac74}|x^{\perp}|\Bigr\}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}, \end{align*} which yields the pointwise bound \begin{align*} |\eqref{2v}|\lesssim \Bigl\{{\varepsilon}^{\frac 12}\delta(\eps^4+t^2)^{-\frac54}+{\varepsilon}^{\frac 32}\delta(\eps^4+t^2)^{-\frac74}|x^{\perp}|\Bigr\}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}. 
\end{align*} Using \eqref{estbig} followed by \eqref{514}, we estimate the contribution of \eqref{2v} as follows: \begin{align*} \|\eqref{2v}\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)} &\lesssim {\varepsilon}^{\frac12}{\varepsilon}^{\frac 12}\delta\int_0^\infty(\eps^4+t^2)^{-\frac 54}\biggl(\int_{|x^{\perp}|\ge\sqrt{{\varepsilon}/2}}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}} \,dx^{\perp}\biggr)^{\frac12}\,dt\\ &\quad+{\varepsilon}^{\frac 12}{\varepsilon}^{\frac32}\delta\int_0^\infty(\eps^4+t^2)^{-\frac74}\biggl(\int_{|x^{\perp}|\ge\sqrt{{\varepsilon}/2}}|x^{\perp}|^2e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}} \,dx^{\perp}\biggr)^{\frac 12}\,dt\\ &\lesssim \delta\int_0^{\infty}(\eps^4+t^2)^{-\frac 34}\min\biggl\{1,\biggl(\frac{\eps^4+t^2}{{\varepsilon}^3}\biggr)^{10}\biggr\} \,dt\\ &\lesssim \delta{\varepsilon}^{-\frac 34}. \end{align*} This completes the proof of the lemma. \end{proof} \begin{lem}[Estimate for $(i\partial_t+\Delta)w$]\label{L:we1} We have \begin{align}\label{we1} \|(i\partial_t+\Delta)w\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}\lesssim \delta {\varepsilon}^{-\frac34} + \delta^3{\varepsilon}^{-2}. \end{align} \end{lem} \begin{proof} We compute \begin{align} (i\partial_t + \Delta)w\!\!&\notag\\ &=\Bigl\{(i\partial_t+\Delta)\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\label{w1}\\ &\quad+2\nabla\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\cdot \nabla\label{w2}\\ &\quad+\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\ \Delta \Bigr\}\bigl[1-\phi\bigl(\tfrac{x_1^2+x_2^2}{\varepsilon}\bigr)\theta\bigl(\tfrac{x_3}{\varepsilon}\bigr)\bigr]\theta\bigl(\tfrac{\dist(x,\Omega^c)}{{\varepsilon}}\bigr)\chi_{\{x_3\ge-\frac 12\}}\label{w3}. \end{align} We first consider the contribution of \eqref{w3}. Using \eqref{ptw}, we obtain the pointwise bound \begin{align*} |\eqref{w3}|\lesssim \delta{\varepsilon}^{-\frac 12}(\eps^4+t^2)^{-\frac 54}|x^{\perp}|^2 e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}. 
\end{align*} Thus using \eqref{512} and the fact that $|x_3|\lesssim {\varepsilon}$ for $x\in\supp w$, we obtain \begin{align*} \|\eqref{w3}\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)} &\lesssim \delta\int_0^\infty(\eps^4+t^2)^{-\frac 54} \biggl(\int_{|x^{\perp}|\le\sqrt {\varepsilon}}|x^{\perp}|^4e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}} \,dx^{\perp}\biggr)^{\frac 12} \,dt \lesssim\delta{\varepsilon}^{-\frac 34}. \end{align*} Next we consider the contribution of \eqref{w2}. As $\frac{\partial x_*}{\partial x_3}=0$, $\nabla[u(t,x_*)-u(t,\bar x_*)]$ has no component in the $e_3$ direction. For the remaining directions we have \begin{equation*} \begin{aligned} \nabla_{\perp}\bigl[u(t,x_*)-u(t,\bar x_*)\bigr] &= \tfrac{-x^{\perp}}{2({\varepsilon}^2+it)}\bigl[u(t,x_*)-u(t,\bar x_*)\bigr] \\ &\quad - (\nabla_\perp x_{*3}) \bigl[\tfrac{x_{*3}-\delta y_3}{2({\varepsilon}^2+it)} u(t,x_*)- \tfrac{x_{*3}+\delta y_3}{2({\varepsilon}^2+it)} u(t,\bar x_*)\bigr]. \end{aligned} \end{equation*} Using \eqref{ptw}, $|\nabla_\perp x_{*3}|\lesssim |x^{\perp}|$, and $|x_{*3}|\lesssim {\varepsilon}$, we deduce \begin{align*} \bigl|\nabla_{\perp}\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\bigr| &\lesssim \bigl[ \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac74}|x^{\perp}|^3 + \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac54}|x^{\perp}| \bigr] e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}. \end{align*} This gives the pointwise bound \begin{align*} |\eqref{w2}|&\lesssim \bigl[\delta{\varepsilon}^{\frac 12} (\eps^4+t^2)^{-\frac74}|x^{\perp}|^3 + \delta{\varepsilon}^{\frac 12} (\eps^4+t^2)^{-\frac54}|x^{\perp}| \bigr]e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}. 
\end{align*} Using \eqref{512}, we thus obtain \begin{align*} \|\eqref{w2}\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)} &\lesssim {\varepsilon}\delta\int_0^\infty (\eps^4+t^2)^{-\frac74}\biggl(\int_{|x^{\perp}|\le\sqrt{\varepsilon}}|x^{\perp}|^6e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}} \,dx^{\perp}\biggr)^{\frac 12} \,dt\\ &\quad + {\varepsilon}\delta\int_0^\infty (\eps^4+t^2)^{-\frac54}\biggl(\int_{|x^{\perp}|\le\sqrt{\varepsilon}}|x^{\perp}|^2e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{2(\eps^4+t^2)}} \,dx^{\perp}\biggr)^{\frac 12} \,dt\\ &\lesssim \delta{\varepsilon}^{-\frac34} + \delta{\varepsilon}^{-\frac14}\lesssim \delta{\varepsilon}^{-\frac34}. \end{align*} Lastly, we consider \eqref{w1}. We begin with the contribution coming from the term $\partial_t\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]$, which we denote by $\eqref{w1}_{\partial_t}$. We start by deriving a pointwise bound on this term. A straightforward computation using \eqref{dif} yields \begin{align*} &\bigl|\partial_t\bigl[u(t,x_*)-u(t,\bar x_*)\bigr]\bigr|\\ &\lesssim \frac{{\varepsilon}^{\frac32}}{(\eps^4+t^2)^{\frac 54}}\Bigl| e^{-\frac{|x_*-\delta y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|\bar x_*-\delta y|^2}{4({\varepsilon}^2+it)}}\Bigr|\\ &\quad +\biggl(\frac {{\varepsilon}^2}{\eps^4+t^2}\biggr)^{\frac 34}\frac 1{\eps^4+t^2}\biggl||x_*-\delta y|^2 e^{-\frac{|x_*-\delta y|^2}{4({\varepsilon}^2+it)}}-|\bar x_*-\delta y|^2 e^{-\frac{|\bar x_*-\delta y|^2}{4({\varepsilon}^2+it)}}\biggr|\\ &\lesssim\bigl[{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac54}+{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac74}|x^{\perp}|^2\bigr]\Bigl|e^{-\frac{|x_*-\delta y|^2}{4({\varepsilon}^2+it)}}-e^{-\frac{|\bar x_*-\delta y|^2}{4({\varepsilon}^2+it)}}\Bigr|\\ &\quad +{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac 74}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\Bigl||x_{*3}-\delta y_3|^2e^{-\frac{|x_{*3}-\delta y_3|^2}{4({\varepsilon}^2+it)}}-|x_{*3}+\delta y_3|^2 e^{-\frac{|x_{*3}+\delta
y_3|^2}{4({\varepsilon}^2+it)}}\Bigr|\\ &\lesssim \bigl[{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac 54}+{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac 74}|x^{\perp}|^2\bigr]e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\delta(\eps^4+t^2)^{-\frac12}|x^{\perp}|^2\\ &\quad +{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac 74}e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\bigl[\delta|x_{*3}|+(|x_{*3}|^2+\delta^2)\delta (\eps^4+t^2)^{-\frac12}|x^\perp|^2\bigr]\\ &\lesssim e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}\Bigl[\delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac 74}|x^{\perp}|^2+ \delta{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac94}|x^{\perp}|^4+{\varepsilon}^{\frac 32}\delta^3(\eps^4+t^2)^{-\frac94}|x^{\perp}|^2\Bigr], \end{align*} where in order to obtain the third inequality we have used the identity $2(ab-cd)=(a-c)(b+d)+(a+c)(b-d)$. Using \eqref{512} as before, we obtain \begin{align*} \|\eqref{w1}_{\partial_t}\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)} &\lesssim \delta{\varepsilon}^{-\frac 14} + \delta{\varepsilon}^{-\frac 34} + \delta^3{\varepsilon}^{-2}\lesssim \delta{\varepsilon}^{-\frac 34} + \delta^3{\varepsilon}^{-2}. \end{align*} We now turn to the Laplacian term in \eqref{w1}, which we denote by $\eqref{w1}_\Delta$. For a generic function $f:{\mathbb{R}}^3\to{\mathbb{C}}$, \begin{equation}\label{bits&pieces} \Delta f(x_*) = [\Delta_\perp f + 2 (\nabla_\perp x_{*3})\cdot (\nabla_\perp \partial_3 f) + (\Delta_\perp x_{*3})(\partial_3 f) + |\nabla_\perp x_{*3}|^2(\partial_3^2 f) ](x_*). \end{equation} Using this formula with $f(x) := u(t,x) - u(t,\bar x)$, we first derive a pointwise bound on $\eqref{w1}_\Delta$. A direct computation gives \begin{align*} (\Delta_{\perp}f)(x_*)=\biggl[-\frac{1}{{\varepsilon}^2+it} + \frac{|x^{\perp}|^2}{4({\varepsilon}^2+it)^2}\biggr] \bigl[u(t,x_*)-u(t,\bar x_*)\bigr]. 
\end{align*} Therefore, using \eqref{ptw} we obtain the pointwise bound \begin{align*} \bigl|(\Delta_{\perp}f)(x_*)\bigr| &\lesssim \bigl[\delta{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac74}|x^{\perp}|^2+\delta{\varepsilon}^{\frac 32}(\eps^4+t^2)^{-\frac 94}|x^{\perp}|^4\bigr] e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}. \end{align*} Next, we combine $|\nabla_\perp x_{*3}|\lesssim |x^\perp|$ with $$ (\nabla_\perp \partial_3 f)(x_*) = \frac{(x_{*3}-\delta y_3)x^\perp}{4({\varepsilon}^2+it)^2} u(t,x_*) - \frac{(x_{*3}+\delta y_3)x^\perp}{4({\varepsilon}^2+it)^2} u(t,\bar x_*), $$ $|x_{*3}|\lesssim |x^\perp|^2$, \eqref{ptw}, and the crude bound \begin{equation}\label{u size} |u(t,x_*)|+|u(t,\bar x_*)| \lesssim {\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac34} e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}} \end{equation} to obtain \begin{align*} \bigl|(\nabla_\perp x_{*3})\cdot (\nabla_\perp \partial_3 f)\bigr| &\lesssim \bigl[\delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac94}|x^\perp|^6 + \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac74}|x^\perp|^2\bigr]e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}. \end{align*} Next we use $|\Delta_\perp x_{*3}|\lesssim 1$ and $|x_{*3}\pm\delta y_3|\lesssim \delta$ together with elementary computations to find $$ \bigl|(\Delta_\perp x_{*3})(\partial_3 f)\bigr| \lesssim \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac54} e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}.
$$ Toward our last pointwise bound, we compute $$ (\partial_3^2 f)(x_*) = \biggl[\frac{-1}{2({\varepsilon}^2+it)}+\frac{|x_{*3}-\delta y_3|^2}{4({\varepsilon}^2+it)^2}\biggr] u(t,x_*) - \biggl[\frac{-1}{2({\varepsilon}^2+it)}+\frac{|x_{*3}+\delta y_3|^2}{4({\varepsilon}^2+it)^2}\biggr] u(t,\bar x_*). $$ Combining this with \eqref{ptw}, \eqref{u size}, $|\nabla_\perp x_{*3}|\lesssim |x^\perp|$, and $|x_{*3}|\lesssim |x^\perp|^2\lesssim{\varepsilon}\lesssim\delta$ yields \begin{align*} &\bigl||\nabla_\perp x_{*3}|^2(\partial_3^2 f) (x_*)\bigr|\\ &\lesssim \bigl[\delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac74}|x^\perp|^4 + \delta^3{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac94}|x^\perp|^4 + \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac74}|x^\perp|^2\bigr]e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}. \end{align*} We now put together all the pieces from \eqref{bits&pieces}. Using $|x^\perp|^2 \lesssim \delta \lesssim 1$ so as to keep only the largest terms, we obtain $$ \bigl| \Delta f (x_*) \bigr| \lesssim \bigl[ \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac54} + \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac74}|x^\perp|^2 + \delta{\varepsilon}^{\frac32}(\eps^4+t^2)^{-\frac94}|x^\perp|^4\bigr] e^{-\frac{{\varepsilon}^2|x^{\perp}|^2}{4(\eps^4+t^2)}}. $$ Using \eqref{512} as before, we thus obtain \begin{align*} \|\eqref{w1}_\Delta\|_{L_t^1L_x^2({\mathbb{R}}\times\Omega)}\lesssim \delta+\delta{\varepsilon}^{-\frac 14}+\delta{\varepsilon}^{-\frac 34}\lesssim \delta{\varepsilon}^{-\frac 34}. \end{align*} This completes the proof of the lemma. \end{proof} Collecting Lemmas~\ref{L:we}, \ref{L:ve}, and \ref{L:we1} and recalling that ${\varepsilon}\leq \delta\leq {\varepsilon}^{6/7}$, we derive \eqref{E:case4 estimates}. This in turn yields \eqref{compl}, which combined with Lemma~\ref{L:v matters} proves Proposition~\ref{P:LF2}. \end{proof} This completes the proof of Theorem~\ref{T:LF2} and so the discussion of Case~(iv).
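The analysis of Case~(iv) rests on the exact formula for the free evolution of a Gaussian quoted at the start of the proof of Proposition~\ref{P:LF2}. As an illustrative aside (not part of the argument), the one-dimensional analogue of that identity can be checked against a direct spectral computation; the grid parameters below are arbitrary.

```python
import numpy as np

# 1D analogue of the exact Gaussian evolution used above: with the
# convention (e^{it Delta} f)^hat(k) = e^{-i t k^2} fhat(k),
#   e^{it Delta} e^{-x^2/(4 sigma^2)}
#     = (sigma^2/(sigma^2+it))^{1/2} e^{-x^2/(4(sigma^2+it))}.
sigma, t = 1.0, 0.5
N, Lbox = 4096, 80.0
x = (np.arange(N) - N // 2) * (Lbox / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)

psi0 = np.exp(-x**2 / (4 * sigma**2))
# spectral propagation of the initial Gaussian
numeric = np.fft.ifft(np.exp(-1j * t * k**2) * np.fft.fft(psi0))
# closed-form evolution (principal branch of the square root)
a = sigma**2 + 1j * t
exact = np.sqrt(sigma**2 / a) * np.exp(-x**2 / (4 * a))

err = np.max(np.abs(numeric - exact))
assert err < 1e-8
```

By oddness, the image combination $u(t,x)-u(t,\bar x)$ built from this formula vanishes on $\partial{\mathbb{H}}$, which is the Dirichlet boundary condition the halfspace parametrix encodes.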
\subsection{Case (v)} In this case we have $N_n\to \infty$ and $N_n^{-6/7}\leq d(x_n) \lesssim 1$. As in Case~(iv), we rescale so that the obstacle is restored to its original (unit) size. Correspondingly, the initial data has characteristic scale ${\varepsilon}:= N_n^{-1}\to 0$ and is supported within a distance $O({\varepsilon})$ of the origin, which is at a distance $\delta:=d(x_n)$ from the obstacle. A schematic representation is given in Figure~\ref{F:case5}. There and below, \begin{equation}\label{E:psi eps defn} \psi_{\varepsilon}(x) := {\varepsilon}^{-3/2} \psi\bigl(\tfrac x{{\varepsilon}}\bigr). \end{equation} In this way, the treatment of Case~(v) reduces to the following assertion: \begin{thm}\label{T:LF3} Fix $\psi\in C^\infty_c({\mathbb{R}}^3)$ and let $\psi_{\varepsilon}$ be as in \eqref{E:psi eps defn}. Then for any pair $(q,r)$ satisfying $\frac2q+\frac3r=\frac32$ with $2<q<\infty$ and $2<r<6$, we have \begin{align}\label{main} \|e^{it\Delta_{\Omega({\varepsilon})}}\psi_{\varepsilon}-e^{it\Delta}\psi_{\varepsilon}\|_{L_t^q L_x^r({\mathbb{R}}\times{\mathbb{R}}^3)}\to 0 \qtq{as} {\varepsilon}\to 0, \end{align} for any ${\varepsilon}$-dependent family of domains $\Omega({\varepsilon})$ that are affine images of $\Omega$ with the property that $\delta:=\dist(0,\Omega({\varepsilon})^c) \geq {\varepsilon}^{6/7}$ and $\delta\lesssim 1$. \end{thm} \begin{figure} \caption{Depiction of Case~(v); here ${\varepsilon}^{6/7}\leq \delta\lesssim1 $ and ${\varepsilon}\to 0$.} \label{F:case5} \end{figure} We now begin the proof of Theorem~\ref{T:LF3}. By interpolation and the Strichartz inequality, it suffices to treat the case $q=r=\frac{10}3$. By time-reversal symmetry, it suffices to consider positive times only, which is what we will do below. To ease notation, we write $\Omega$ for $\Omega({\varepsilon})$ for the remainder of this subsection. 
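Before turning to the decomposition itself, we record an illustrative numerical check (not used in the proof) of the near-orthogonality that drives it: in one dimension, the correlation of two Gaussian wave packets $\gamma_n(x)=(2\pi\sigma^2)^{-1/4}e^{-x^2/4\sigma^2+inx/L}$ equals $e^{-\frac{\sigma^2}{2L^2}|n-m|^2}$, the analogue of \eqref{E:gamma inner prod} below. The values of $\sigma$ and $L$ in the sketch are arbitrary.

```python
import numpy as np

# Correlations of 1D Gaussian wave packets: numerical check of
#   <gamma_n, gamma_0> = exp(-sigma^2 n^2 / (2 L^2)),
# the almost-orthogonality behind the wave packet decomposition.
sigma, L = 1.0, 2.0
x = np.linspace(-30.0, 30.0, 60001)
dx = x[1] - x[0]

def gamma(n):
    return (2 * np.pi * sigma**2) ** (-0.25) * np.exp(
        -x**2 / (4 * sigma**2) + 1j * n * x / L)

for n in range(4):
    inner = np.sum(gamma(n) * np.conj(gamma(0))) * dx
    predicted = np.exp(-sigma**2 * n**2 / (2 * L**2))
    assert abs(inner - predicted) < 1e-8
```

The Gaussian decay in $|n-m|$ is what makes the double sums over $n$ and $m$ in the proof of Lemma~\ref{decomposition} summable.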
The first step in the proof is to write $\psi_{\varepsilon}$ as a superposition of Gaussian wave packets; we will then investigate the evolution of the individual wave packets. The basic decomposition is given by the following lemma. The parameter $\sigma$ denotes the initial width of the Gaussian wave packets. It is chosen large enough so that the wave packets hold together until they collide with the obstacle. This ensures that they reflect in an almost particle-like manner and allows us to treat the reflected wave in the realm of geometric optics. Indeed, the particle-like regime lasts for time $\sim\sigma^2$, while the velocity of the wave packet is $2\xi=\tfrac{2n}{L}$, which is $\sim{\varepsilon}^{-1}$ for the dominant terms, up to double logarithmic factors (cf. \eqref{auto}). As the obstacle is $\delta$ away from the origin, it takes the dominant wave packets time $\sim\delta{\varepsilon}$ to reach the obstacle (up to $\log\log(\frac1{\varepsilon})$ factors), which is much smaller than $\sigma^{2}=\delta{\varepsilon}\log^2(\frac1{\varepsilon})$. Moreover, $\sigma$ is chosen small enough that the individual wave packets disperse shortly after this collision. In addition to the main geometric parameters ${\varepsilon}$ and $\delta$, we also need two further degrees of smallness; these are $[\log(\frac1{\varepsilon})]^{-1}$ and $[\log\log(\frac1{\varepsilon})]^{-1}$. \begin{lem}[Wave packet decomposition] \label{decomposition} Fix $\psi\in C_c^{\infty}({\mathbb{R}}^3)$ and let $0<{\varepsilon}\ll 1$, $$ \sigma:=\sqrt{{\varepsilon}\delta}\log(\tfrac 1{{\varepsilon}}) \qquad\text{and}\qquad L:=\sigma \log\log(\tfrac 1{{\varepsilon}}).
$$ Then there exist coefficients $\{c_n^{{\varepsilon}}\}_{n\in{\mathbb{Z}}^3}$ so that \begin{equation}\label{expand} \biggl\|\psi_{\varepsilon}(x)-\sum_{n\in{\mathbb{Z}}^3}c_n^{{\varepsilon}}(2\pi\sigma^2)^{-\frac34}\exp\Bigl\{-\frac {|x|^2}{4\sigma^2}+in\cdot\frac xL\Bigr\}\biggr\|_{L^2({\mathbb{R}}^3)}=o(1) \end{equation} as ${\varepsilon}\to 0$. Moreover, \begin{equation}\label{bdforc} |c_n^{{\varepsilon}}|\lesssim_{k,\psi} \frac {(\sigma{\varepsilon})^{\frac 32}}{L^3}\min\biggl\{1,\Bigl(\frac L{{\varepsilon}|n|}\Bigr)^k\biggr\} \qtq{for all} k\in {\mathbb{N}} \end{equation} and \eqref{expand} remains true if the summation is only taken over those $n$ belonging to \begin{align}\label{auto} \mathcal S:=\biggl\{n\in {\mathbb{Z}}^3: \,\frac 1{{\varepsilon}\log\log(\frac 1{{\varepsilon}})}\leq \frac {|n|}L \leq \frac {\log\log(\frac1{{\varepsilon}})}{{\varepsilon}}\biggr\}. \end{align} \end{lem} \begin{proof} For $n\in{\mathbb{Z}}^3$, let $$ \gamma_n(x):=(2\pi\sigma^2)^{-\frac 34} \exp\bigl\{-\tfrac{|x|^2}{4\sigma^2}+in\cdot\tfrac xL\bigr\}. $$ Note that $\|\gamma_n\|_{L^2({\mathbb{R}}^3)}=1$. We define $$ c_n^{{\varepsilon}}:=(2\pi L)^{-3}\int_{[-\pi L,\pi L]^3}\psi_{\varepsilon}(x)(2\pi\sigma^2)^{\frac 34} \exp\bigl\{\tfrac{|x|^2}{4\sigma^2}-in\cdot\tfrac xL\bigr\} \, dx. $$ Then by the convergence of Fourier series we have \begin{equation}\label{eq1} \psi_{\varepsilon}(x)=\sum_{n\in {\mathbb{Z}}^3}c_n^{{\varepsilon}} \gamma_n(x) \quad\text{for all}\quad x\in[-\pi L,\pi L]^3. \end{equation} Taking ${\varepsilon}$ sufficiently small, we can guarantee that $\supp \psi_{\varepsilon}\subseteq[-\frac{\pi L}2, \frac{\pi L}2]^3$. Thus, to establish \eqref{expand} we need to show that outside the cube $[-\pi L,\pi L]^3$, the series only contributes a small error. 
Indeed, let $k\in {\mathbb{Z}}^3\setminus \{ 0 \}$ and $Q_k:=2\pi kL+[-\pi L,\pi L]^3$; using the periodicity of Fourier series, we obtain $$ \Bigl\|\sum_{n\in {\mathbb{Z}}^3}c_n^{{\varepsilon}}\gamma_n\Bigr\|_{L^2(Q_k)}^2=\int_{[-\pi L,\pi L]^3}|\psi_{\varepsilon}(x)|^2\exp\bigl\{\tfrac{|x|^2}{2\sigma^2} - \tfrac{|x+2\pi kL|^2}{2\sigma^2}\bigr\} \, dx. $$ As on the support of $\psi_{\varepsilon}$ we have $|x|\le \tfrac12 \pi L\leq \tfrac 14 |2\pi kL|$, we get $$ \Bigl\|\sum_{n\in {\mathbb{Z}}^3} c_n^{{\varepsilon}}\gamma_n\Bigr\|_{L^2(Q_k)}^2 \lesssim \|\psi\|_{L^2({\mathbb{R}}^3)}^2 \exp\bigl\{ -\tfrac{\pi^2 k^2L^2}{2\sigma^2} \bigr\}. $$ Summing in $k$ and using \eqref{eq1}, we obtain \begin{align*} \Bigl\|\psi_{\varepsilon}-\sum_{n\in {\mathbb{Z}}^3}c_n^{{\varepsilon}}\gamma_n\Bigr\|_{L^2({\mathbb{R}}^3)} &\lesssim \sum_{k\in {\mathbb{Z}}^3\setminus\{0\}} \Bigl\|\sum_{n\in {\mathbb{Z}}^3}c_n^{{\varepsilon}}\gamma_n\Bigr\|_{L^2(Q_k)}\\ &\lesssim \|\psi\|_{L^2({\mathbb{R}}^3)}\sum_{k\in{\mathbb{Z}}^3\setminus \{0\}}\exp \bigl\{ -\tfrac {\pi^2 k^2L^2}{4\sigma^2} \bigr\}\\ &\lesssim_{\psi} e^{-\frac {\pi^2 L^2}{4\sigma^2}}=o(1) \qtq{as} {\varepsilon}\to 0. \end{align*} This proves \eqref{expand}. Next we prove the upper bound \eqref{bdforc}. From the definition of $c_n^{{\varepsilon}}$, we immediately obtain \begin{align*} |c_n^{{\varepsilon}}|&\lesssim \frac {\sigma^{\frac32}}{L^3}\bigl\|\psi_{\varepsilon}(x)e^{\frac{|x|^2}{4\sigma^2}}\bigr\|_{L^1({\mathbb{R}}^3)} \lesssim\frac{(\sigma{\varepsilon})^{\frac 32}}{L^3}. \end{align*} To derive the other upper bound, we use integration by parts. Let $\mathbb D:= i\frac {Ln}{|n|^2}\cdot\nabla$; note that $\mathbb D^k e^{-in\frac xL}=e^{-in\frac xL}$. The adjoint of $\mathbb D$ is given by $\mathbb D^t=-i\nabla\cdot\frac{Ln}{|n|^2}$. 
We thus obtain \begin{align*} |c_n^{{\varepsilon}}| &=(2\pi L)^{-3}\biggl|\int_{{\mathbb{R}}^3} \mathbb D^k e^{-in\frac xL}\psi_{\varepsilon}(x)(2\pi\sigma^2)^{\frac 34} e^{\frac{|x|^2}{4\sigma^2}}dx\biggr|\\ &=(2\pi L)^{-3}\biggl|\int_{{\mathbb{R}}^3} e^{-in\frac xL}(\mathbb D^t)^k\Bigl[ {\varepsilon}^{-\frac 32}\psi\Bigl(\frac x{\varepsilon}\Bigr)(2\pi\sigma^2)^{\frac 34}e^{\frac{|x|^2}{4\sigma^2}}\Bigr]\,dx\biggr|\\ &\lesssim L^{-3}\Bigl(\frac L{|n|}\Bigr)^k\Bigl(\frac {\sigma}{{\varepsilon}}\Bigr)^{\frac 32}\sum_{|\alpha|\leq k}\Bigl\|\partial^\alpha\Bigl[\psi\Bigl(\frac x{{\varepsilon}}\Bigr)e^{\frac {|x|^2}{4\sigma^2}}\Bigr]\Bigr\|_{L^1({\mathbb{R}}^3)}\\ &\lesssim_{k,\psi}L^{-3}\Bigl(\frac L{|n|}\Bigr)^k\Bigl(\frac {\sigma}{{\varepsilon}}\Bigr)^{\frac 32}{\varepsilon}^{3-k}\\ &\lesssim_{k,\psi}\frac {({\varepsilon}\sigma)^{\frac 32}}{L^3}\Bigl(\frac L{{\varepsilon}|n|}\Bigr)^k. \end{align*} This proves \eqref{bdforc}. To derive the last claim, we first note that \begin{equation}\label{E:gamma inner prod} \int_{{\mathbb{R}}^3} \gamma_n(x)\overline{\gamma_m(x)}\,dx=e^{-\frac{\sigma^2}{2L^2}|n-m|^2}. \end{equation} Now fix $N\in {\mathbb{N}}$. For $|n|\le N$, we use the first upper bound for $c_n^{{\varepsilon}}$ to estimate \begin{align*} \Bigl\| \sum_{|n|\le N}c_n^{{\varepsilon}}\gamma_n \Bigr\|_{L^2({\mathbb{R}}^3)}^2 &\lesssim_{\psi}\frac {(\sigma{\varepsilon})^3}{L^6}\sum_{|n|,|m|\le N} e^{-\frac{\sigma^2}{2L^2}|n-m|^2} \lesssim_{\psi}\frac {(\sigma{\varepsilon})^3}{L^6}N^3\Bigl(\frac L{\sigma}\Bigr)^3 \lesssim_\psi \Bigl(\frac {{\varepsilon} N}L\Bigr)^3.
\end{align*} For $|n|\geq N$, we use the second upper bound for $c_n^{{\varepsilon}}$ (with $k=3$) to estimate \begin{align*} \biggl\|\sum_{|n|\ge N}c_n^{{\varepsilon}}\gamma_n \biggr\|_{L^2}^2 &\lesssim_{\psi}\frac {(\sigma{\varepsilon})^3}{L^6}\Bigl(\frac L{{\varepsilon}}\Bigr)^6 \sum_{|n|\ge |m|\ge N} \frac 1{|n|^3} \frac 1{|m|^3} e^{-\frac {\sigma^2}{2L^2}|n-m|^2}\\ &\lesssim_{\psi} \Bigl(\frac\sigma{{\varepsilon}}\Bigr)^3\sum_{|n|\ge|m|\ge N}\frac 1{|m|^6} e^{-\frac {\sigma^2}{2L^2}|n-m|^2}\\ &\lesssim_\psi \Bigl(\frac {\sigma}{{\varepsilon}}\Bigr)^3\Bigl(\frac L{\sigma}\Bigr)^3 \sum_{|m|\ge N}\frac 1{|m|^6}\\ &\lesssim_\psi \Bigl(\frac L{{\varepsilon} N}\Bigr)^3. \end{align*} Thus, \begin{align}\label{error} \biggl\|&\sum_{|n|\le \frac L{{\varepsilon}\log\log(\frac 1{{\varepsilon}})}}c_n^{{\varepsilon}}\gamma_n\biggr\|_{L^2_x}^2+\biggl\|\sum_{|n|\ge {\frac L{\varepsilon}\log\log(\frac 1{{\varepsilon}})}}c_n^{{\varepsilon}}\gamma_n\biggr\|_{L^2_x}^2\lesssim_{\psi}[\log\log(\tfrac 1{{\varepsilon}})]^{-3}=o(1) \end{align} as ${\varepsilon}\to 0$. This completes the proof of Lemma~\ref{decomposition}. \end{proof} Combining the Strichartz inequality with Lemma~\ref{decomposition}, proving Theorem~\ref{T:LF3} reduces to showing \begin{align*} \Bigl\|\sum_{n\in \mathcal S} c_n^{{\varepsilon}}\bigl[{e^{it\Delta_{\Omega}}}(1_\Omega\gamma_n)-e^{it\Delta_{{\mathbb{R}}^3}}\gamma_n\bigr]\Bigr\|_{L_{t,x}^{\frac {10}3}({\mathbb{R}}\times{\mathbb{R}}^3)}=o(1) \qtq{as}{\varepsilon}\to 0. \end{align*} Recall that the linear Schr\"odinger evolution of a Gaussian wave packet in the whole space has a simple explicit expression: \begin{align*} u_n(t,x):=[e^{it\Delta_{{\mathbb{R}}^3}}\gamma_n](x)=\frac 1{(2\pi)^{\frac34}}\biggl(\frac {\sigma}{\sigma^2+it}\biggr)^{\frac32}\exp\biggl\{ix\cdot \xi_n-it|\xi_n|^2-\frac {|x-2\xi_nt|^2}{4(\sigma^2+it)}\biggr\}, \end{align*} where $\xi_n:=\frac nL$. \begin{defn}[Missing, near-grazing, and entering rays] \label{D:MEG} Fix $n\in \mathcal S$.
We say $u_n$ \emph{misses the obstacle} if \begin{align*} \dist(2t\xi_n, \Omega^c)\ge \frac{|2t\xi_n|}{[\log\log(\tfrac 1\eps)]^4} \qtq{for all} t\geq 0. \end{align*} Let $$\mathcal M=\{n\in \mathcal S : u_n\mbox{ misses the obstacle}\}.$$ If the ray $2t\xi_n$ intersects the obstacle, let $t_c\geq0$ and $x_c=2t_c\xi_n\in \partial\Omega$ denote the time and location of first incidence, respectively. We say $u_n$ \emph{enters the obstacle} if in addition $$ \frac{|\xi_n\cdot\nu|}{|\xi_n|} \geq [\log\log(\tfrac 1\eps)]^{-4}, $$ where $\nu$ denotes the unit normal to the obstacle at the point $x_c$. Let \begin{align*} \mathcal E=\{n\in \mathcal S : u_n\mbox{ enters the obstacle}\}. \end{align*} Finally, we say $u_n$ is \emph{near-grazing} if it neither misses nor enters the obstacle. Let \begin{align*} \mathcal G=\{n\in \mathcal S : u_n \mbox{ is near-grazing}\}. \end{align*} \end{defn} We first control the contribution of the near-grazing directions. \begin{lem}[Counting $\mathcal G$] \label{L:counting G} The set of near-grazing directions constitutes a vanishing fraction of the total directions. More precisely, $$ \# \mathcal G \lesssim [\log\log(\tfrac 1\eps)]^{-4} \# \mathcal S \lesssim \Bigl(\frac L{\varepsilon}\Bigr)^3 [\log\log(\tfrac 1\eps)]^{-1}. $$ \end{lem} \begin{proof} We claim that the near-grazing directions are contained in a neighbourhood of width $O( [\log\log(\frac1{\varepsilon})]^{-4} )$ around the set of grazing rays, that is, rays that are tangent to $\partial\Omega$. We will first verify this claim and then explain how the lemma follows. The objects of interest are depicted in Figure~\ref{Fig.NG}. Two rays are shown: one collides with the obstacle and the other does not. The horizontal line represents the nearest grazing ray. The origin, from which the rays emanate, is marked $O$.
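As an aside, the area estimate invoked at the end of this proof can be illustrated in a toy model of our own: if the grazing directions formed the equator of the sphere, the lattice directions within angular width $w$ of it would make up a fraction of size $O(w)$ of all directions.

```python
# Toy illustration (our addition): fraction of lattice directions n/|n|,
# |n| <= N, lying within angular width w of the equator is roughly w.
def near_equator_fraction(N, w):
    near = total = 0
    for a in range(-N, N + 1):
        for b in range(-N, N + 1):
            for c in range(-N, N + 1):
                r2 = a * a + b * b + c * c
                if 0 < r2 <= N * N:
                    total += 1
                    if c * c <= w * w * r2:     # |n_3| / |n| <= w
                        near += 1
    return near / total

for w in (0.2, 0.1, 0.05):
    print(w, near_equator_fraction(30, w))      # fraction is roughly w itself
```

The actual grazing set is a smooth curve rather than a great circle, but the same thickness-times-length count applies.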
\begin{figure} \caption{Near-grazing rays.} \label{Fig.NG} \end{figure} For rays that collide with the obstacle, the condition to be near-grazing is that $\sin(\phi) \leq [\log\log(\frac1{\varepsilon})]^{-4}$. Here $\phi$ is the angle between the ray and the tangent plane to the obstacle at the point of collision. Convexity of the obstacle guarantees that $\phi \geq \theta$, where $\theta$ denotes the angle between the ray and the nearest grazing ray, as depicted in Figure~\ref{Fig.NG}. From this we deduce $\theta\lesssim [\log\log(\frac1{\varepsilon})]^{-4}$, in accordance with the claim made above. Let us now consider rays that do not collide with the obstacle. We recall that to be near-grazing in this case there must be some time $t>0$ so that $X=2\xi t$ is within a distance $2|\xi|t[\log\log(\frac1{\varepsilon})]^{-4}$ of a point $P$ on the obstacle. Then $\theta\leq\tan\theta \leq \frac{|XP|}{|OX|} \leq [\log\log(\frac1{\varepsilon})]^{-4}$. This finishes the proof of the claim. The set of directions corresponding to grazing rays is a smooth curve whose length is uniformly bounded in terms of the geometry of $\Omega$ alone. Moreover, we have shown that all near-grazing directions lie within a neighbourhood of this curve of thickness $O( [\log\log(\frac1{\varepsilon})]^{-4} )$. Noting that the directions $\{\frac n{|n|} : n\in\mathcal S\}$ are uniformly distributed on the sphere and much more tightly packed than the width of this neighbourhood, the lemma follows from a simple area estimate. \end{proof} \begin{prop}[The near-grazing contribution]\label{P:ng} We have $$ \Bigl\|\sum_{n\in \mathcal G} c_n^{{\varepsilon}}\bigl[{e^{it\Delta_{\Omega}}}(1_\Omega\gamma_n)-u_n\bigr]\Bigr\|_{L_{t,x}^{\frac {10}3}({\mathbb{R}}\times{\mathbb{R}}^3)}=o(1) \qtq{as}{\varepsilon}\to 0. $$ \end{prop} \begin{proof} From the Strichartz inequality, it suffices to prove \begin{align*} \Bigl\| \sum_{n\in \mathcal G} c_n^{{\varepsilon}}\gamma_n \Bigr\|_{L^2({\mathbb{R}}^3)}=o(1) \qtq{as} {\varepsilon}\to 0.
\end{align*} Using \eqref{bdforc}, \eqref{E:gamma inner prod}, and Lemma~\ref{L:counting G}, we estimate \begin{align*} \Bigl\|\sum_{n\in \mathcal G} c_n^{{\varepsilon}} \gamma_n\Bigr\|_{L^2({\mathbb{R}}^3)}^2 &\lesssim \sum_{n,m\in \mathcal G}\frac {(\sigma {\varepsilon})^3}{L^6} e^{-\frac {\sigma^2}{2L^2}|n-m|^2} \lesssim\sum_{n\in \mathcal G} \frac {(\sigma{\varepsilon})^3}{L^6}\Bigl(\frac L{\sigma}\Bigr)^3\lesssim [\log\log(\tfrac 1{{\varepsilon}})]^{-1}, \end{align*} which converges to $0$ as ${\varepsilon}\to 0$. \end{proof} We now consider the contribution of rays that miss the obstacle in the sense of Definition~\ref{D:MEG}. \begin{prop}[Contribution of rays that miss the obstacle]\label{P:missing} Assume $n\in \mathcal M$. Then \begin{equation}\label{432} \|e^{it\Delta_{\Omega}}(1_{\Omega}\gamma_n)-u_n\|_{L_{t,x}^{\frac {10}3}({\mathbb{R}}\times {\mathbb{R}}^3)}\lesssim {\varepsilon}^{100} \end{equation} for sufficiently small ${\varepsilon}$. Furthermore, we have \begin{align}\label{249} \Bigl\|\sum_{n\in \mathcal M} c_n^{\varepsilon} \bigl[e^{it\Delta_{\Omega}}(1_\Omega\gamma_n)-u_n\bigr]\Bigr\|_{L_{t,x}^{\frac{10}3} ({\mathbb{R}}\times{\mathbb{R}}^3)}=o(1) \qtq{as} {\varepsilon}\to 0. \end{align} \end{prop} \begin{proof} We first notice that \eqref{249} is an immediate consequence of \eqref{432}. Indeed, using the upper bound \eqref{bdforc} for $c_n^{\varepsilon}$, we estimate \begin{align*} \Bigl\|\sum_{n\in \mathcal M} c_n^{\varepsilon}\bigl[e^{it\Delta_\Omega}(1_\Omega\gamma_n)-u_n\bigr]\Bigr\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{R}}^3)} &\lesssim\sum_{|n|\le \frac L{\varepsilon}\log\log(\tfrac 1\eps)}\frac{(\sigma{\varepsilon})^{\frac32}}{L^3}{\varepsilon}^{100}\\ &\lesssim (\sigma{\varepsilon})^{\frac 32}{\varepsilon}^{97}[\log\log(\tfrac 1\eps)]^3=o(1). \end{align*} We are thus left to prove \eqref{432}. 
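As a sanity check on the explicit formula for $u_n$ recalled above, note that the free evolution factors over coordinates, so it suffices to check the one-dimensional analogue (prefactor exponent $\frac12$ in place of $\frac32$) against the equation $iu_t+u_{xx}=0$. A finite-difference verification, with toy values of $\sigma$ and $\xi$ of our choosing:

```python
# Finite-difference check (our addition, toy values) that the 1D analogue of
# u_n solves i u_t + u_xx = 0; the residual is only discretization error.
import cmath
import math

sigma, xi = 1.1, 2.0

def u(t, x):
    a = sigma**2 + 1j * t
    return ((2 * math.pi) ** -0.25 * cmath.sqrt(sigma / a)
            * cmath.exp(1j * x * xi - 1j * t * xi**2 - (x - 2 * xi * t)**2 / (4 * a)))

t0, x0, h = 0.3, 0.7, 1e-4
u_t = (u(t0 + h, x0) - u(t0 - h, x0)) / (2 * h)
u_xx = (u(t0, x0 + h) - 2 * u(t0, x0) + u(t0, x0 - h)) / h**2
print(abs(1j * u_t + u_xx))          # ~ 0, up to O(h^2) error
```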
As $u_n$ misses the obstacle, we have \begin{align}\label{845} \dist(2t\xi_n,\Omega^c)\ge \tfrac 12\delta[\log\log(\tfrac 1\eps)]^{-4} \qtq{for all} t\geq 0. \end{align} Indeed, when $|2t\xi_n|<\frac \delta 2$, the triangle inequality gives $\dist(2t\xi_n,\Omega^c)\ge \frac \delta 2$; when $|2t\xi_n|\geq\frac \delta 2$, this bound follows immediately from Definition~\ref{D:MEG}. Now let $\chi$ be a smooth cutoff that vanishes on the obstacle and equals $1$ when \begin{align*} \dist(x,\Omega^c)\ge \delta \log^{-1}(\tfrac 1{\varepsilon}). \end{align*} This cutoff can be chosen to also obey the following: \begin{equation}\label{533} |\nabla \chi|\lesssim \delta^{-1}\log(\tfrac 1\eps), \quad |\Delta \chi|\lesssim \delta^{-2}\log^2(\tfrac 1{\varepsilon}),\quad |\supp(\Delta\chi)|\lesssim \delta\log^{-1}(\tfrac 1{\varepsilon}). \end{equation} From \eqref{845} and the triangle inequality, we obtain \begin{align}\label{1001} \dist(2t\xi_n, \supp(1-\chi))&\ge\dist(2t\xi_n,\Omega^c)-\delta\log^{-1}(\tfrac 1{\varepsilon})\notag\\ &\ge \tfrac12 \delta[\log\log(\tfrac 1\eps)]^{-4}-\delta\log^{-1}(\tfrac 1{\varepsilon})\ge \tfrac14\delta[\log\log(\tfrac 1\eps)]^{-4}. \end{align} Moreover, when $|t|\ge \sigma^2$, we observe that \begin{align}\label{1002} \dist(2t\xi_n, \supp(1-\chi))&\ge \dist(2t\xi_n,\Omega^c)-\delta\log^{-1}(\tfrac 1{\varepsilon})\notag\\ &\ge \frac{|2t\xi_n|}{[\log\log(\tfrac 1\eps)]^4}-\frac\delta{\log(\tfrac 1\eps)}\ge\frac{|t\xi_n|}{[\log\log(\tfrac 1\eps)]^4}. \end{align} Here we have used the fact that $\delta\ll |2t\xi_n|$ for $t\ge\sigma^2$. With these preliminaries out of the way, we are ready to begin proving \eqref{432}. 
By the triangle inequality, \begin{align} \text{LHS}\eqref{432}\le\|e^{it\Delta_{\Omega}}(1_{\Omega}\gamma_n)-\chi u_n\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{R}}^3)}+\|\chi u_n-u_n\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times{\mathbb{R}}^3)}.\label{E:M} \end{align} We begin with the first term on the right-hand side of \eqref{E:M}. Using the Duhamel formula, we write \begin{align*} e^{it\Delta_{\Omega}}(1_{\Omega}\gamma_n)-\chi u_n &=e^{it\Delta_{\Omega}}(1_{\Omega}\gamma_n)-e^{it\Delta_{\Omega}}(\chi\gamma_n)+e^{it\Delta_{\Omega}}(\chi \gamma_n)-\chi u_n\\ &=e^{it\Delta_{\Omega}}[(1_{\Omega}-\chi)\gamma_n]+i\int_0^te^{i(t-s)\Delta_{\Omega}}\bigl[\Delta\chi u_n+2\nabla \chi \cdot \nabla u_n\bigr]\,ds. \end{align*} Similarly, for the second term on the right-hand side of \eqref{E:M} we have \begin{align*} (1-\chi)u_n&=e^{it\Delta}(1-\chi)\gamma_n+i\int_0^te^{i(t-s)\Delta}\bigl[\Delta \chi u_n+2\nabla \chi \cdot \nabla u_n\bigr](s)\,ds. \end{align*} Thus, using the Strichartz inequality we obtain \begin{align} \text{LHS}\eqref{432} &\lesssim\|(1-\chi)\gamma_n\|_{L^2({\mathbb{R}}^3)}+\|\Delta \chi u_n\|_{L^1_tL_x^2({\mathbb{R}}\times {\mathbb{R}}^3)} +\|\nabla \chi \cdot \nabla u_n\|_{L_t^1L_x^2({\mathbb{R}}\times {\mathbb{R}}^3)}.\label{530} \end{align} The first term on the right-hand side of \eqref{530} can be easily controlled: \begin{align*} \|(1-\chi)\gamma_n\|_{L^2({\mathbb{R}}^3)}^2 &\lesssim \sigma^{-3}\int_{\supp(1-\chi)}e^{-\frac {|x|^2}{2\sigma^2}}dx \\ &\lesssim \sigma^{-3}\sigma^3 \exp\Bigl\{-\frac {\dist^2(0,\supp(1-\chi))}{4 \sigma^2}\Bigr\}\\ &\lesssim \exp\Bigl\{-\frac{\delta^2}{8{\varepsilon}\delta\log^2(\tfrac1{\varepsilon})}\Bigr\}\le{\varepsilon}^{200}. 
\end{align*} To estimate the remaining terms on the right-hand side of \eqref{530}, we first observe that \begin{align*} |\nabla u_n|\lesssim |\xi_n||u_n|+\frac {|x-2\xi_n t|}{\sqrt{\sigma^4+t^2}}|u_n| &\lesssim \bigl[|\xi_n |+\sigma^{-1}\bigr]\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 34}e^{-\frac{{\sigma^2}|x-2\xi_n t|^2}{8(\sigma^4+t^2)}}. \end{align*} As $\sigma^{-1}\leq |\xi_n|\le \frac {\log\log(\frac 1{{\varepsilon}})}{{\varepsilon}}$, we obtain \begin{align}\label{1244} |u_n|+ |\nabla u_n| \lesssim \frac{\log\log(\tfrac 1\eps)}{{\varepsilon}}\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 34}e^{-\frac{{\sigma^2}|x-2\xi_n t|^2}{8(\sigma^4+t^2)}}. \end{align} To estimate the contribution of these terms, we discuss short and long times separately. For $0\leq t\le \sigma^2$, we use \eqref{1001} to estimate \begin{align*} \|u_n&\|_{L_t^1L_x^2(t\le \sigma^2, \ x\in \supp(1-\chi))} + \|\nabla u_n\|_{L_t^1L_x^2(t\le \sigma^2, \ x\in \supp(1-\chi))}\\ &\lesssim \frac{\log\log(\tfrac 1\eps)}{{\varepsilon}} \sigma^2\sup_{0\leq t\le\sigma^2}\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 34}\biggl\|\exp\Bigl\{-\frac{\sigma^2|x-2t\xi_n|^2}{8(\sigma^4+t^2)}\Bigr\}\biggr\|_{L_x^2(\supp(1-\chi))}\\ &\lesssim\frac{\log\log(\tfrac 1\eps)}{{\varepsilon}} \sigma^2 \sup_{0\leq t\le\sigma^2}\biggl\|\exp\Bigl\{-\frac{\sigma^2|x-2t\xi_n|^2}{16(\sigma^4+t^2)}\Bigr\}\biggr\|_{L_x^{\infty} (\supp(1-\chi))}\\ &\lesssim \frac{\log\log(\tfrac 1\eps)}{{\varepsilon}}\sigma^2 \exp\biggl\{-\frac\delta{{\varepsilon}\log^3(\tfrac 1{\varepsilon})}\biggr\}\\ &\le{\varepsilon}^{110}. 
\end{align*} For $|t|>\sigma^2$, we use \eqref{533} and \eqref{1002} to obtain \begin{align*} \|u_n&\|_{L_t^1L_x^2(t>\sigma^2, \ x\in \supp(1-\chi))} + \|\nabla u_n\|_{L_t^1L_x^2(t> \sigma^2, \ x\in \supp(1-\chi))}\\ &\lesssim\bigl[\delta\log^{-1}(\tfrac 1{\varepsilon})\bigr]^{\frac 12}\bigl\||u_n|+|\nabla u_n|\bigr\|_{L_t^1L_x^{\infty}(t>\sigma^2, \ x\in\supp(1- \chi))}\\ &\lesssim \frac{\delta^{\frac 12}\sigma^{\frac 32}\log\log(\tfrac 1\eps)}{{\varepsilon}\log^{\frac 12}(\tfrac 1{\varepsilon})}\biggl\|t^{-\frac 32}\exp\Bigl\{-\frac{\sigma^2\dist^2(2t\xi_n, \supp(1-\chi))}{8(\sigma^4+t^2)}\Bigr\}\biggr\|_{L_t^1(t>\sigma^2)}\\ &\lesssim \frac{\delta^{\frac 12}\sigma^{\frac 12}\log\log(\tfrac 1\eps)}{{\varepsilon}\log^{\frac 12}(\tfrac 1{\varepsilon})} \exp\Bigl\{-\frac\delta{\varepsilon}\Bigr\}\\ &\le {\varepsilon}^{110}. \end{align*} Putting these two pieces together, we find \begin{align*} \|\Delta \chi u_n\|_{L_t^1L_x^2({\mathbb{R}}\times{\mathbb{R}}^3)}+\|\nabla \chi \cdot \nabla u_n\|_{L_t^1L_x^2({\mathbb{R}}\times {\mathbb{R}}^3)} \lesssim\delta^{-2}\log^2(\tfrac 1{\varepsilon}){\varepsilon}^{110}\le {\varepsilon}^{100}. \end{align*} This completes the proof of Proposition~\ref{P:missing}. \end{proof} In order to complete the proof of Theorem~\ref{T:LF3}, we need to estimate the contribution from the Gaussian wave packets $\gamma_n$ that collide non-tangentially with the obstacle, that is, for $n\in\mathcal E$. This part of the argument is far more subtle than the treatment of $n\in \mathcal G$ or $n\in \mathcal M$. Naturally, the entering wave packets reflect off the obstacle and we will need to build a careful parametrix to capture this reflection. Moreover, the convexity of the obstacle enters in a crucial way --- it ensures that the reflected waves do not refocus. The treatment of the entering rays will occupy the remainder of this subsection. We begin with the simplest part of the analysis, namely, the short time contribution. 
Here, `short times' refers to times well before the wave packets have reached the obstacle. The estimate applies equally well to all wave packets, irrespective of whether $n\in\mathcal E$ or not. \begin{prop}[The contribution of short times]\label{P:short times} Let $T:=\frac{{\varepsilon}\delta}{10\log\log(\tfrac 1\eps)}$. Then \begin{align}\label{st} \sum_{n\in \mathcal S} |c_n^{\varepsilon}|\bigl\|e^{it\Delta_{\Omega}}(1_\Omega\gamma_n)-u_n\bigr\|_{L_{t,x}^{\frac{10}3} ([0,T]\times{\mathbb{R}}^3)}=o(1) \qtq{as} {\varepsilon}\to 0. \end{align} \end{prop} \begin{proof} Let $\chi$ be a smooth cutoff that vanishes on the obstacle and equals $1$ when $\dist(x,\Omega^c)>\frac \delta{10}$. This cutoff can be chosen to also satisfy \begin{align}\label{deta} |\nabla \chi|\lesssim \delta^{-1} \qtq{and} |\Delta \chi|\lesssim \delta^{-2}. \end{align} Moreover, for $t\in[0,T]$ we have \begin{align*} |2t\xi_n |\le 2\frac{{\varepsilon}\delta}{10\log\log(\tfrac 1\eps)}\cdot\frac{\log\log(\tfrac 1\eps)}{{\varepsilon}}=\frac15 \delta \end{align*} and so \begin{align}\label{615} \dist(2t\xi_n, \supp(1-\chi))\ge \tfrac12 \delta \qtq{for all} t\in[0, T]. \end{align} The proof of this proposition is almost identical to that of Proposition~\ref{P:missing}, with the roles of \eqref{1001} and \eqref{1002} being played by \eqref{615}. Indeed, using the Duhamel formula and the Strichartz inequality as in the proof of Proposition~\ref{P:missing}, \begin{align*} \|e^{it\Delta_{\Omega}}(&1_{\Omega}\gamma_n)-u_n\|_{L_{t,x}^{\frac{10}3}([0,T]\times{\mathbb{R}}^3)}\\ &\le\|e^{it\Delta_{\Omega}}(1_{\Omega}\gamma_n)-\chi u_n\|_{L_{t,x}^{\frac{10}3}([0,T]\times{\mathbb{R}}^3)}+\|\chi u_n-u_n\|_{L_{t,x}^{\frac{10}3}([0,T]\times{\mathbb{R}}^3)}\\ &\lesssim \|(1-\chi)\gamma_n\|_{L^2({\mathbb{R}}^3)}+\|\Delta \chi u_n\|_{L^1_tL_x^2([0,T]\times{\mathbb{R}}^3)} +\|\nabla \chi \cdot \nabla u_n\|_{L_t^1L_x^2([0,T]\times{\mathbb{R}}^3)}.
\end{align*} The first term is estimated straightforwardly \begin{align*} \|(1-\chi)\gamma_n\|_{L^2({\mathbb{R}}^3)}^2 &\lesssim\sigma^{-3}\int_{\supp(1-\chi)} e^{-\frac{|x|^2}{2\sigma^2}} \,dx \lesssim e^{-\frac{\dist^2(0,\supp(1-\chi))}{4\sigma^2}} \lesssim e^{-\frac{\delta^2}{16\sigma^2}}\le {\varepsilon}^{200}. \end{align*} For the remaining two terms, we use \eqref{1244} and \eqref{615} to estimate \begin{align*} \|u_n\|_{L_t^1L_x^2([0,T]\times\supp(1-\chi))} & + \|\nabla u_n\|_{L_t^1L_x^2([0,T]\times\supp(1-\chi))}\\ &\lesssim\delta\sup_{t\in[0,T]} \biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac34}\Bigl\|e^{-\frac{\sigma^2|x-2\xi_n t|^2}{8(\sigma^4+t^2)}}\Bigr\|_{L_x^2(\supp(1-\chi))}\\ &\lesssim \delta \sup_{t\in[0,T]}\Bigl\|e^{-\frac{\sigma^2|x-2\xi_nt|^2}{16(\sigma^4+t^2)}}\Bigr\|_{L_x^{\infty}(\supp(1-\chi))}\\ &\lesssim \delta \sup_{t\in[0,T]} \exp\Bigl\{-\frac{\sigma^2\dist^2(2t\xi_n,\supp(1-\chi))}{32\sigma^4}\Bigr\} \\ &\lesssim \delta e^{-\frac{\delta^2}{128\sigma^2}}\le {\varepsilon}^{110}. \end{align*} This implies \begin{align*} \|\Delta \chi u_n\|_{L^1_tL_x^2([0,T]\times {\mathbb{R}}^3)} +\|\nabla \chi \cdot \nabla u_n\|_{L_t^1L_x^2([0,T]\times {\mathbb{R}}^3)} \lesssim \delta^{-2}{\varepsilon}^{110}\le {\varepsilon}^{100}. \end{align*} Collecting these estimates and using \eqref{bdforc} we obtain \begin{align*} \text{LHS}\eqref{st}\lesssim \sum_{n\in \mathcal S} \frac{(\sigma{\varepsilon})^{\frac32}}{L^3} {\varepsilon}^{100}=o(1) \qtq{as} {\varepsilon}\to 0. \end{align*} This finishes the proof of Proposition~\ref{P:short times}. \end{proof} Now take $n\in \mathcal E$, which means that the wave packet $u_n(t,x)$ enters the obstacle. We write $t_c$ for the first time of intersection and $x_c=2t_c\xi_n$ for the location of this collision. Naturally both $t_c$ and $x_c$ depend on $n$; however, as most of the analysis will focus on one wave packet at a time, we suppress this dependence in the notation. 
We approximate the wave generated by $u_n$ reflecting off $\partial\Omega$ by a Gaussian wave packet $v_n$ (or more accurately by $-v_n$ since the Dirichlet boundary condition inverts the profile), which we define as follows: \begin{align}\label{forv} v_n(t,x):=&\Bigl(\frac {\sigma^2}{2\pi}\Bigr)^{\frac 34}\frac {(\det\Sigma)^{\frac 12}}{(\sigma^2+it_c)^{\frac 32}} [\det(\Sigma+i(t-t_c))]^{-\frac12} \exp\Bigl\{i(x-x_c)\cdot\eta-it|\eta|^2\notag\\ &\qquad\qquad\qquad\qquad+ix_c\cdot \xi-\tfrac14(x-x(t))^T(\Sigma+i(t-t_c))^{-1}(x-x(t))\Bigr\}, \end{align} where for simplicity we write $\xi=\xi_n$. The parameters $\eta$, which represents the momentum of the reflected wave packet, and $\Sigma$, which gives its covariance structure, will be specified shortly. Correspondingly, $x(t):=x_c+2\eta(t-t_c)$ represents the center of the reflected wave packet. We define an orthonormal frame $(\vec\tau,\vec \gamma,\vec \nu)$ at the point $x_c\in\partial\Omega$, where $\vec \tau,\vec \gamma$ are two tangent vectors to $\partial \Omega$ in the directions of the principal curvatures $\frac 1{R_1},\ \frac 1{R_2}$ and $\vec \nu$ is the unit outward normal to the obstacle. Note that the obstacle being strictly convex amounts to $1\lesssim R_1,R_2<\infty$. Without loss of generality, we may assume $R_1\le R_2$. With this frame, we define $\eta:=\xi-2(\xi\cdot\vec\nu)\vec\nu$ as the reflection of $\xi$, in accordance with the basic law of reflection, namely, the angle of incidence equals the angle of reflection.
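The reflection law is easy to verify directly: it preserves the speed and the tangential components while flipping the normal component. A minimal check, with a toy vector and frame of our own choosing:

```python
# Toy check (our vectors) of the reflection law eta = xi - 2 (xi . nu) nu.
import math

xi = (1.0, 2.0, -3.0)
nu = (0.0, 0.0, 1.0)                         # unit outward normal in the frame
d = sum(a * b for a, b in zip(xi, nu))       # xi . nu
eta = tuple(a - 2 * d * b for a, b in zip(xi, nu))

print(eta)                                   # (1.0, 2.0, 3.0): tangential parts kept, normal flipped
print(math.isclose(sum(a * a for a in xi), sum(a * a for a in eta)))   # True: |eta| = |xi|
diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, eta)))
print(math.isclose(eta[2], diff / 2))        # True: eta_3 = |xi - eta| / 2
```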
In this frame, $\Sigma^{-1}$ is defined as follows: \begin{align}\label{E:Sigma defn} \Sigma^{-1}=\frac 1{\sigma^2+it_c}\mathrm{Id}+iB, \end{align} where \begin{align*} B=\begin{pmatrix} \frac {4\xi_3}{R_1} & 0 &\frac {4\xi_1}{R_1}\\ 0 &\frac {4\xi_3}{R_2} &\frac {4\xi_2}{R_2}\\ \frac {4\xi_1}{R_1} &\frac {4\xi_2}{R_2} &\frac{4\xi_1^2}{R_1\xi_3}+\frac {4\xi_2^2}{R_2\xi_3} \end{pmatrix} \end{align*} and \begin{align*} \eta_1:=\eta\cdot\vec\tau&=\xi\cdot\vec\tau=:\xi_1 \\ \eta_2:=\eta\cdot \vec\gamma&=\xi\cdot \vec\gamma=:\xi_2\\ \eta_3:=\eta\cdot\vec\nu=-\xi\cdot\vec\nu&=:-\xi_3=\tfrac12 |\xi-\eta|. \end{align*} The matrix $B$ encodes the additional spreading of the reflected wave packet induced by the curvature of the obstacle; incorporating this subtle effect is essential for the analysis that follows. The structure of the matrix $B$ captures the basic rule of mirror manufacture: the radius of curvature equals twice the focal length. \begin{lem}[Bounds for collision times and locations]\label{L:xc} For rays that enter the obstacle, we have $$ \xi_3 < 0, \quad |\xi_3| \geq |\xi| [\log\log(\tfrac 1\eps)]^{-4}, \qtq{and} \delta \leq |x_c| \lesssim \delta[\log\log(\tfrac1{\varepsilon})]^8. $$ In particular, $\delta|\xi|^{-1}\leq 2t_c\lesssim \delta|\xi|^{-1}[\log\log(\tfrac 1\eps)]^8$. \end{lem} \begin{proof} The first inequality simply expresses the fact that the ray approaches the obstacle from without. The second inequality is an exact repetition of $n\in \mathcal E$ as given in Definition~\ref{D:MEG}. The lower bound on $|x_c|$ follows directly from the fact that $\delta=\dist(0,\Omega^c)$. The proof of the upper bound on $|x_c|$ divides into two cases. When $\delta\gtrsim[\log\log(\tfrac1{\varepsilon})]^{-8}$, the result follows from $|x_c|\leq \dist(0,\Omega^c) + \diam(\Omega^c) \lesssim1$. It remains to consider the case when $\delta\leq \tfrac{1}{8C} [\log\log(\tfrac1{\varepsilon})]^{-8}$ for some fixed large $C=C(\Omega)$. 
By approximating $\partial\Omega$ from within by a paraboloid, this case reduces to the analysis of the following system of equations: \begin{align*} y &= m x \quad \text{and} \quad y = Cx^2 + \delta \quad \text{with} \quad m\geq [\log\log(\tfrac1{\varepsilon})]^{-4}. \end{align*} The first equation represents the ray, whose slope is restricted by that permitted for an entering ray. (Note that the convexity of the obstacle implies that the angle between the ray and $\partial\Omega$ is larger than the angle between the ray and the axis $y=0$.) Using the quadratic formula, we see that the solution obeys $$ |x_c|\leq \sqrt{ x^2 + y^2 } = \frac{2\delta \sqrt{1+m^2}}{m + \sqrt{m^2 - 4C\delta}} \sim \frac{\delta\sqrt{1+m^2}}{m}, $$ where we used the restriction on $\delta$ in the last step. \end{proof} \begin{lem}[Reflected waves diverge]\label{L:diverging rays} For $j=1,2$, let $x^{(j)}(t)$ denote the broken ray beginning at the origin, moving with velocity $2\xi^{(j)}$ and reflecting off the convex body $\Omega^c$. Then $$ | x^{(1)}(t) - x^{(2)}(t) | \geq 2| \xi^{(1)} - \xi^{(2)} | \, t $$ whenever $t \geq \max\{t_c^{(1)},t_c^{(2)}\}$, that is, greater than the larger collision time. \end{lem} \begin{proof} In the two dimensional case, this result follows from elementary planar geometry. A particularly simple argument is to reflect the outgoing rays across the line joining the two collision points. By convexity, the continuations of the incoming rays will both lie between the reflected outgoing rays. Note that the geometry involved is dictated solely by the two tangent lines at the collision points, not by the shape of the convex body in between. 
We note that given two vectors $v^{(j)}\in{\mathbb{R}}^2$ and two points $y^{(j)}\in{\mathbb{R}}^2$, there is a convex curve passing through these points and having these vectors as outward normals at these points if and only if \begin{equation}\label{Convex position} v^{(1)} \cdot \bigl(y^{(1)}-y^{(2)}\bigr) \geq 0\quad\text{and}\quad v^{(2)} \cdot \bigl(y^{(2)}-y^{(1)}\bigr) \geq0. \end{equation} Indeed, by convexity, $\Omega^c\subseteq\{x:\, (x-y^{(j)})\cdot v^{(j)}\leq 0\}$ for $j=1,2$. We will use this two dimensional case as a stepping stone to treat three dimensions. (The argument carries over to higher dimensions also.) If $\xi^{(1)}$ and $\xi^{(2)}$ are parallel, then the analysis is one-dimensional and totally elementary. In what follows, we assume that these vectors are not parallel. Let $\nu^{(1)}$ and $\nu^{(2)}$ denote the unit outward normals to $\partial\Omega$ at the collision points. These are linearly independent. We write $P$ for the orthogonal projection into the plane that they span and $Q=\mathrm{Id}-P$ for the complementary projection. By the law of reflection, $Q [ x^{(j)}(t) ] = Q [2\xi^{(j)} t]$ and the broken rays $P [ x^{(j)}(t) ]$ make equal angles of incidence and reflection with the projected normals $P[\nu^{(j)}]=\nu^{(j)}$ at the projected collision points. We now apply the two-dimensional result. To do this, we need to see that the projected collision points and the projected normals obey the chord/normal condition \eqref{Convex position}; this follows immediately from the convexity of the original obstacle. Using the two-dimensional result, we get \begin{align*} \bigl| x^{(1)}(t) - x^{(2)}(t) \bigr|^2 &= \bigl| P[x^{(1)}(t)] - P[x^{(2)}(t)] \bigr|^2 + \bigl| Q[x^{(1)}(t)] - Q[x^{(2)}(t)]\bigr|^2\\ &\geq 4 \bigl| P [ \xi^{(1)} ] - P [ \xi^{(2)} ] \bigr|^2 t^2 +4\bigl| Q [ \xi^{(1)} ] - Q [ \xi^{(2)} ] \bigr|^2 t^2 \\ &= 4 | \xi^{(1)}- \xi^{(2)} |^2 t^2, \end{align*} which proves the lemma. 
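The divergence bound can also be observed numerically. The sketch below (our construction, not part of the proof) takes the obstacle to be a disk in two dimensions, traces two broken rays from the origin, and checks the bound at several times past both collisions:

```python
# Toy 2D check (our construction) of the divergence bound for broken rays
# reflecting off a disk of center C and radius R.
import math

C, R = (3.0, 0.0), 1.0

def broken_ray(xi):
    # first collision time of t -> 2 xi t with the circle |x - C| = R
    a = 4 * (xi[0] ** 2 + xi[1] ** 2)
    b = -4 * (xi[0] * C[0] + xi[1] * C[1])
    c = C[0] ** 2 + C[1] ** 2 - R ** 2
    tc = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)
    xc = (2 * xi[0] * tc, 2 * xi[1] * tc)
    nu = ((xc[0] - C[0]) / R, (xc[1] - C[1]) / R)      # outward unit normal
    d = xi[0] * nu[0] + xi[1] * nu[1]
    eta = (xi[0] - 2 * d * nu[0], xi[1] - 2 * d * nu[1])
    return tc, xc, eta

def pos(ray, t):
    tc, xc, eta = ray
    return (xc[0] + 2 * eta[0] * (t - tc), xc[1] + 2 * eta[1] * (t - tc))

xi1, xi2 = (1.0, 0.05), (1.0, -0.08)
r1, r2 = broken_ray(xi1), broken_ray(xi2)
dv = math.hypot(xi1[0] - xi2[0], xi1[1] - xi2[1])
for t in (2.0, 5.0, 20.0):
    p1, p2 = pos(r1, t), pos(r2, t)
    print(math.hypot(p1[0] - p2[0], p1[1] - p2[1]) >= 2 * dv * t)   # True
```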
\end{proof} Next we investigate in more detail the properties of the matrix $\Sigma$. \begin{lem}[Bounds for the covariance matrix]\label{L:matrix} Let $n\in \mathcal E$. Then \begin{align} \Re \vec v^{\;\!T}(\Sigma+i(t-t_c))^{-1}\vec v&\ge\frac{\sigma^2}{[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+\log^4(\tfrac 1{\varepsilon})t^2]} |\vec v|^2\label{sig41}\\ \|(\Sigma+i(t-t_c))^{-1}\|_{\max}&\leq\frac{\log^{5}(\frac1{\varepsilon})}{\sqrt{\sigma^4+t^2}}\label{sig42}\\ |\det( \mathrm{Id} +i(t-t_c)\Sigma^{-1})|^{-\frac 12}&\leq \log^{\frac52}(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^4+t_c^2}{\sigma^4+t^2}\biggr)^{\frac34}\label{sig3} \end{align} for all $t\geq 0$ and $\vec v\in {\mathbb{R}}^3$. If in addition $|t-t_c|\le 4\frac{\sigma\log(\tfrac 1\eps)}{|\xi|}$, then \begin{align} \|(\Sigma+i(t-t_c))^{-1}-\Sigma^{-1}\|_{HS}&\le{\varepsilon}^{-\frac 12}\delta^{-\frac 32}\log^3(\tfrac 1{\varepsilon})\label{sig1}\\ \bigl|1-\det(\mathrm{Id}+i(t-t_c)\Sigma^{-1})^{-\frac12}\bigr|&\le{\varepsilon}^{\frac12}\delta^{-\frac12}\log^3(\tfrac 1{\varepsilon})\label{sig2}. \end{align} Here $\|\cdot\|_{HS}$ denotes the Hilbert--Schmidt norm: for a matrix $A=(a_{ij})$, this is given by $\|A\|_{HS}=(\sum_{i,j}|a_{ij}|^2)^{\frac 12}$. Also, $\|A\|_{\max}$ denotes the operator norm of $A$. \end{lem} \begin{proof} We first prove \eqref{sig1}. Using Lemma~\ref{L:xc}, we get \begin{align}\label{sig} \|\Sigma^{-1}\|_{HS}\leq \frac{\sqrt{3}}{|\sigma^2+it_c|}+\|B\|_{HS} \le \sqrt{3}\sigma^{-2}+\frac {4\sqrt{10}|\xi|^2}{ R_1|\xi_3|} &\lesssim\sigma^{-2}+|\xi|[\log\log(\tfrac 1{{\varepsilon}})]^4 \notag\\ &\lesssim {\varepsilon}^{-1}\delta^{-1}[\log\log(\tfrac 1{{\varepsilon}})]^5. \end{align} Thus, for $|t-t_c|\le 4\frac{\sigma\log(\tfrac 1\eps)}{|\xi|}$ we obtain \begin{align}\label{306} \|(t-t_c)\Sigma^{-1}\|_{HS}&\lesssim {\varepsilon}^{\frac12}\delta^{-\frac12}\log^2(\tfrac 1{\varepsilon})[\log\log(\tfrac 1\eps)]^6\ll1. 
\end{align} Combining this with the resolvent formula \begin{align*} (\Sigma+i(t-t_c))^{-1}-\Sigma^{-1}&=-i(t-t_c)\Sigma^{-1}(\Sigma+i(t-t_c))^{-1}\\ &=-i(t-t_c)\Sigma^{-2}(\mathrm{Id}+i(t-t_c)\Sigma^{-1})^{-1} \end{align*} and using \eqref{306}, we estimate \begin{align*} \|(\Sigma+i(t-t_c))^{-1}-\Sigma^{-1}\|_{HS}&\le |t-t_c|\|\Sigma^{-1}\|_{HS}^2\|(\mathrm{Id}+i(t-t_c)\Sigma^{-1})^{-1}\|_{HS}\\ &\lesssim \frac{4\sigma\log(\tfrac 1\eps)}{|\xi|}{\varepsilon}^{-2}\delta^{-2}[\log\log(\tfrac 1\eps)]^{10}\\ &\lesssim {\varepsilon}^{-\frac 12}\delta^{-\frac 32}\log^2(\tfrac1{\varepsilon})[\log\log(\tfrac 1\eps)]^{11}\\ &\le {\varepsilon}^{-\frac 12}\delta^{-\frac 32}\log^3(\tfrac 1{\varepsilon}). \end{align*} This settles \eqref{sig1}. The estimate \eqref{sig2} follows from \eqref{306} and the fact that the determinant function is Lipschitz on a small neighborhood of the identity. We now turn to the remaining estimates; the key is to understand the real symmetric matrix $B$. A direct computation gives \begin{align*} \det(\lambda \mathrm{Id}-B)=\lambda\biggl[\lambda^2-4\biggl(\frac{\xi_1^2+\xi_3^2}{R_1\xi_3}+\frac{\xi_2^2+\xi_3^2}{R_2\xi_3}\biggr)\lambda +\frac{16|\xi|^2}{R_1R_2}\biggr]. \end{align*} Hence one eigenvalue is $0$ and it is easy to check that $\eta$ is the corresponding eigenvector. We write $-\infty<\lambda_2\le\lambda_1<0$ for the remaining eigenvalues. Moreover, as \begin{align*} \lambda_1\lambda_2=\frac{16|\xi|^2}{R_1R_2}\qtq{and} |\lambda_1|+|\lambda_2|=4\Bigl(\frac{\xi_1^2+\xi_3^2}{R_1|\xi_3|}+\frac{\xi_2^2+\xi_3^2}{R_2|\xi_3|}\Bigr), \end{align*} using Lemma~\ref{L:xc} we get \begin{align*} [\log\log(\tfrac 1\eps)]^{-4}|\xi|\lesssim |\lambda_1|\le |\lambda_2|\lesssim\frac{|\xi|^2}{|\xi_3|}\lesssim |\xi|[\log\log(\tfrac 1\eps)]^4. \end{align*} In particular, \begin{align}\label{B norm} \| B \|_{\max} \lesssim |\xi|[\log\log(\tfrac 1\eps)]^4 \lesssim {\varepsilon}^{-1} [\log\log(\tfrac 1\eps)]^5. 
\end{align} The orthonormal eigenbasis for $B$ is also an eigenbasis for $\Sigma^{-1}$ with eigenvalues \begin{align*} \frac 1{\sigma^2+it_c}, \quad \frac 1{\sigma^2+it_c}+i\lambda_1,\qtq{and} \frac1{\sigma^2+it_c}+i\lambda_2. \end{align*} In this basis, $(\Sigma+i(t-t_c))^{-1}$ is diagonal with diagonal entries \begin{align*} \frac 1{\sigma^2+it}, \ \Bigl[\Bigl(\frac1{\sigma^2+it_c}+i\lambda_1\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1},\ \Bigl[\Bigl(\frac1{\sigma^2+it_c}+i\lambda_2\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1}. \end{align*} An exact computation gives \begin{align*} \Re \Bigl[\Bigl(\frac1{\sigma^2+it_c}+i\lambda_j\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1}=\frac{\sigma^2}{\sigma^4[1-\lambda_j(t-t_c)]^2 +[t-\lambda_jt_c(t-t_c)]^2}. \end{align*} Using $\delta\lesssim 1$, the upper bound for $t_c$ given by Lemma~\ref{L:xc}, and the upper bound for $\lambda_j$ obtained above, we get \begin{align*} |\lambda_j t_c|&\lesssim |\xi|[\log\log(\tfrac 1\eps)]^4\frac{\delta[\log\log(\tfrac 1\eps)]^8}{|\xi|}\lesssim [\log\log(\tfrac 1\eps)]^{12}\\ \lambda_j^2 t_c^4&\lesssim |\xi|^2[\log\log(\tfrac 1\eps)]^8\frac{\delta^4[\log\log(\tfrac 1\eps)]^{32}}{|\xi|^4}\lesssim\delta^4{\varepsilon}^2[\log\log(\tfrac 1\eps)]^{42}\le \sigma^4\\ \sigma^4\lambda_j^2&\lesssim {\varepsilon}^2\delta^2\log^4(\tfrac 1{\varepsilon})|\xi|^2[\log\log(\tfrac 1\eps)]^8 \lesssim \log^4(\tfrac1{\varepsilon})[\log\log(\tfrac 1\eps)]^{10}. \end{align*} Therefore, \begin{align*} \sigma^4[1-\lambda_j(t-t_c)]^2 &+[t-\lambda_jt_c(t-t_c)]^2\\ &\lesssim \sigma^4(1+\lambda_j^2t^2+\lambda_j^2t_c^2)+t^2+\lambda_j^2t_c^2t^2+\lambda_j^2t_c^4\\ &\lesssim\sigma^4(1+\lambda_j^2t_c^2)+\lambda_j^2t_c^4+t^2(1+\lambda_j^2t_c^2+\sigma^4\lambda_j^2)\\ &\lesssim \sigma^4[\log\log(\tfrac 1\eps)]^{24}+t^2\log^4(\tfrac1{\varepsilon})[\log\log(\tfrac 1\eps)]^{10}\\ &\le [\log\log(\tfrac 1\eps)]^{25}[\sigma^4+\log^4(\tfrac 1{\varepsilon}) t^2]. 
\end{align*} Thus, \begin{align*} \Re \Bigl[\Bigl(\frac 1{\sigma^2+it_c}+i\lambda_j\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1}\ge\frac{ \sigma^2}{[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+t^2\log^4(\frac 1{\varepsilon})]}. \end{align*} As $\Re \frac 1{\sigma^2+it}$ admits the same lower bound, we derive \eqref{sig41}. We now turn to \eqref{sig42}. Our analysis is based on the identity \begin{align*} \biggl|\Bigl[\Bigl(\frac1{\sigma^2+it_c}+i\lambda_j\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1}\biggr|^2 &=\biggl|\frac{1-\lambda_jt_c+i\lambda_j\sigma^2}{\sigma^2[1-\lambda_j(t-t_c)]+i[t-\lambda_jt_c(t-t_c)]}\biggr|^2\\ &=\frac{(1-\lambda_jt_c)^2+(\lambda_j\sigma^2)^2}{\sigma^4[1-\lambda_j(t-t_c)]^2+[t-\lambda_jt_c(t-t_c)]^2}. \end{align*} We have \begin{align*} (1-\lambda_jt_c)^2+(\lambda_j\sigma^2)^2\lesssim 1+ [\log\log(\tfrac 1\eps)]^{24} + \log^4(\tfrac1{\varepsilon})[\log\log(\tfrac 1\eps)]^{10} \leq \log^5(\tfrac1{\varepsilon}). \end{align*} To estimate the denominator we use Lemma~\ref{L:xc} to see that $t_c\ll \sigma^2$ and so \begin{align}\label{823} \sigma^4[1-\lambda_j(t-t_c)]^2+[t-\lambda_jt_c(t-t_c)]^2 &\geq t_c^2\Bigl\{[1-\lambda_j(t-t_c)]^2 + \bigl[ \tfrac{t}{t_c} -\lambda_j(t-t_c)\bigr]^2\Bigr\}\notag\\ &=2\bigl[\tfrac{t+t_c}2-\lambda_j t_c (t-t_c)\bigr]^2 + \tfrac12(t-t_c)^2\notag\\ &\gtrsim [\log\log(\tfrac 1\eps)]^{-24} (t+t_c)^2\notag\\ &\gtrsim \frac{\sigma^4+t^2}{\log^4(\frac1{{\varepsilon}}) [\log\log(\tfrac 1\eps)]^{26}}, \end{align} where we have used the bound $|\lambda_j t_c|\lesssim [\log\log(\tfrac 1\eps)]^{12}$ to derive the penultimate inequality. Combining these bounds we obtain \begin{align*} \biggl|\Bigl[\Bigl(\frac1{\sigma^2+it_c}+i\lambda_j\Bigr)^{-1}+i(t-t_c)\Bigr]^{-1}\biggr|^2 &\lesssim \frac{\log^9(\frac1{\varepsilon})[\log\log(\tfrac 1\eps)]^{26}}{\sigma^4+t^2} \le \frac{\log^{10}(\frac1{{\varepsilon}})}{\sigma^4+t^2}. \end{align*} As $(\Sigma+i(t-t_c))^{-1}$ is orthogonally diagonalizable, this bound on its eigenvalues yields \eqref{sig42}. 
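The ``exact computation'' invoked above, together with the single-fraction identity that opens the treatment of \eqref{sig42}, can be reproduced by a short symbolic calculation. The following sketch uses the computer-algebra package sympy; the variable names are ours and merely mirror the notation $\sigma$, $t$, $t_c$, $\lambda_j$.

```python
import sympy as sp

# Real symbols mirroring sigma, t, t_c, and a generic eigenvalue lambda_j
s, t, tc, lam = sp.symbols('sigma t t_c lambda', real=True)

# The diagonal entry w = [ (1/(sigma^2 + i t_c) + i lam)^(-1) + i (t - t_c) ]^(-1)
w = 1/(1/(1/(s**2 + sp.I*tc) + sp.I*lam) + sp.I*(t - tc))

# Single-fraction form used in the magnitude estimate
num = 1 - lam*tc + sp.I*lam*s**2
den = s**2*(1 - lam*(t - tc)) + sp.I*(t - lam*tc*(t - tc))
assert sp.simplify(w - num/den) == 0

# Re(w) = Re(num * conj(den)) / |den|^2, and the numerator collapses to sigma^2
assert sp.expand(sp.re(sp.expand(num*sp.conjugate(den)))) == s**2

# |den|^2 is exactly the common denominator D appearing in both formulas
D = s**4*(1 - lam*(t - tc))**2 + (t - lam*tc*(t - tc))**2
assert sp.expand(sp.re(den)**2 + sp.im(den)**2 - D) == 0
```

Since $|\mathrm{den}|^2=D$, the second assertion gives exactly $\Re\,w=\sigma^2/D$, while the first reproduces the identity on which the proof of \eqref{sig42} is based.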
Finally, we compute \begin{align*} |\det(\mathrm{Id}+i(t-t_c)\Sigma^{-1})| &=\biggl|\Bigl(1+\frac{i(t-t_c)}{\sigma^2+it_c}\Bigr)\prod_{j=1,2}\Bigl[1+i(t-t_c)\Bigl(\frac 1{\sigma^2+it_c}+i\lambda_j\Bigr)\Bigr]\biggr|\\ &=\biggl|\frac{\sigma^2+it}{(\sigma^2+it_c)^3}\prod_{j=1,2}\Bigl\{\sigma^2[1-\lambda_j(t-t_c)]+i[t-\lambda_jt_c(t-t_c)]\Bigr\}\biggr|. \end{align*} Using \eqref{823} we obtain \begin{align*} &|\det(\mathrm{Id}+i(t-t_c)\Sigma^{-1})|^{-1}\\ &\quad\le \frac{(\sigma^4+t_c^2)^{\frac 32}}{(\sigma^4+t^2)^{\frac12}} \prod_{j=1,2}\Bigl\{\sigma^4[1-\lambda_j(t-t_c)]^2+[t-\lambda_jt_c(t-t_c)]^2\Bigr\}^{-\frac12}\\ &\quad\leq \log^5(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^4+t_c^2}{\sigma^4+t^2}\biggr)^{\frac32}. \end{align*} This completes the proof of the lemma. \end{proof} Using this lemma, we will see that the reflected wave $v_n$ agrees with $u_n$ to high order on $\partial\Omega$, at least for $x$ near $x_c^{(n)}$ and $t$ near $t_c^{(n)}$; compare \eqref{A} with $|u_n(t_c^{(n)},x_c^{(n)})|\sim \sigma^{-3/2}$. Indeed, requiring this level of agreement can be used to derive the matrix $B$ given above. Without this level of accuracy we would not be able to show that the contribution of entering rays is $o(1)$ as ${\varepsilon}\to0$. \begin{lem} \label{L:uv match} Fix $n\in \mathcal E$. For each $x\in \Omega$, let $x_*=x_*(x)$ denote the nearest point to $x$ in $\partial\Omega$. Let $$ A_n(t,x):=\exp\{it|\xi_n|^2-i\xi_n\cdot(x_*-x_c^{(n)})\}\bigl[u_n(t,x_*)-v_n(t,x_*)\bigr]. 
$$ Then for each $(t,x)\in{\mathbb{R}}\times\Omega$ such that $|x_*-x_c^{(n)}|\le\sigma \log(\frac 1{{\varepsilon}})$ and $|t-t_c^{(n)}|\le \frac {4\sigma\log(\frac 1{{\varepsilon}})}{|\xi_n|}$ we have \begin{align} |A_n(t,x)|&\lesssim{\varepsilon}^{-\frac 14}\delta^{-\frac 54}\log^{12}(\tfrac 1{{\varepsilon}}) \label{A}\\ |\nabla A_n(t,x)|&\lesssim {\varepsilon}^{-\frac 34}\delta^{-\frac 74}\log^{12}(\tfrac 1{{\varepsilon}}) \label{deriv A}\\ |\partial_t A_n(t,x)| + |\Delta A_n(t,x)|&\lesssim {\varepsilon}^{-\frac 74}\delta^{-\frac 74}\log^9(\tfrac 1{{\varepsilon}}) \label{laplace A}. \end{align} \end{lem} \begin{proof} Throughout the proof, we will suppress the dependence on $n\in\mathcal E$; indeed, all estimates will be uniform in $n$. Let \begin{align*} F(t,x) &:= \biggl( \frac{\sigma^2+it_c}{\sigma^2+it}\biggr)^{\frac32} e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)} } - \det (1+i(t-t_c)\Sigma^{-1})^{-\frac12} e^{\Phi(t,x)} \end{align*} with \begin{align*} \Phi(t,x) &:= i(x-x_c)(\eta-\xi) -\tfrac14(x-x(t))^T(\Sigma +i(t-t_c))^{-1}(x-x(t)), \end{align*} so that \begin{equation}\label{AfromF} A(t,x) = \Bigl(\frac {\sigma^2}{2\pi}\Bigr)^{\frac 34}(\sigma^2+it_c)^{-\frac 32} e^{ix_c \xi} F(t,x_*). \end{equation} We further decompose \begin{align*} F(t,x)=F_1(t,x)+F_2(t,x)+F_3(t,x), \end{align*} where \begin{align*} F_1(t,x)&:= \biggl[\biggl(\frac{\sigma^2+it_c}{\sigma^2+it}\biggr)^{\frac32} -1\biggr]e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)} }\\ F_2(t,x)&:=\bigl[1-\det(1+i(t-t_c)\Sigma^{-1})^{-\frac 12}\bigr] e^{-\frac {|x-2\xi t|^2}{4(\sigma^2+it)}}\\ F_3(t,x)&:= \det (1+i(t-t_c)\Sigma^{-1})^{-\frac12}\Bigl\{e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)}}-e^{\Phi(t,x)}\Bigr\}. \end{align*} We begin by estimating the time derivative of $F$ on $\partial\Omega$. 
We will make repeated use of the following bounds: \begin{align}\label{E:bounds1} t\sim t_c \ll \sigma^2 \qtq{and} |x-2\xi t| + |x-x(t)|\lesssim\sigma\log(\tfrac1{{\varepsilon}}), \end{align} for all $|x-x_c|\le\sigma \log(\frac 1{{\varepsilon}})$ and $|t-t_c|\le \frac {4\sigma\log(\frac 1{{\varepsilon}})}{|\xi|}$. Moreover, from \eqref{sig1}, \eqref{sig2}, and \eqref{sig}, we obtain \begin{align}\label{E:bounds2} \bigl|\partial_t\det(1+i(t-t_c)\Sigma^{-1})^{-\frac 12}\bigr| &=\tfrac12 \bigl|\det(1+i(t-t_c)\Sigma^{-1})^{-\frac 12}\bigr| \bigl| \Tr (\Sigma+i(t-t_c))^{-1}\bigr|\notag\\ &\lesssim \|(\Sigma+i(t-t_c))^{-1}\|_{HS}\lesssim {\varepsilon}^{-1}\delta^{-1}[\log\log(\tfrac 1\eps)]^5. \end{align} Lastly, as $\xi-\eta$ is normal to $\partial\Omega$ at $x_c$, we see that \begin{align}\label{E:bounds3} |(\xi-\eta)\cdot(x-x_c)| \lesssim |\xi| \, |x-x_c|^2 \lesssim \delta \log^5(\tfrac1{\varepsilon}), \end{align} for all $x\in \partial\Omega$ with $|x-x_c|\lesssim \sigma\log(\tfrac 1\eps)$. A straightforward computation using \eqref{E:bounds1} gives \begin{align*} |\partial_tF_1(t,x)|&\lesssim \sigma^{-2} + |t-t_c|\sigma^{-2}\bigl[\sigma^{-2}|\xi||x-2\xi t| + \sigma^{-4}|x-2\xi t|^2\bigr]\lesssim {\varepsilon}^{-1}\delta^{-1}. \end{align*} Using also \eqref{sig2} and \eqref{E:bounds2} we obtain \begin{align*} |\partial_tF_2(t,x)|&\lesssim {\varepsilon}^{-1}\delta^{-1}[\log\log(\tfrac 1\eps)]^5 + {\varepsilon}^{\frac12}\delta^{-\frac12} \log^3(\tfrac1{\varepsilon})\bigl[\sigma^{-2}|\xi||x-2\xi t| + \sigma^{-4}|x-2\xi t|^2\bigr]\\ &\lesssim {\varepsilon}^{-1}\delta^{-1} \log^{4}(\tfrac1{\varepsilon}). \end{align*} As $|\partial_t A| \lesssim \sigma^{-3/2} |\partial_t F|$, the contributions of $\partial_t F_1$ and $\partial_t F_2$ are consistent with \eqref{laplace A}. We now turn to $F_3$. 
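Before continuing, we record one elementary identity used in the next display: the bracketed factor there is the logarithmic time derivative of the free Gaussian $e^{-|x-2\xi t|^2/4(\sigma^2+it)}$. A symbolic sanity check (sympy; the variable names are ours):

```python
import sympy as sp

t = sp.symbols('t', real=True)
s = sp.symbols('sigma', positive=True)
x = sp.Matrix(sp.symbols('x1 x2 x3', real=True))
xi = sp.Matrix(sp.symbols('xi1 xi2 xi3', real=True))

a = s**2 + sp.I*t                     # sigma^2 + i t
u = x - 2*xi*t                        # x - 2 xi t
E = -(u.T*u)[0]/(4*a)                 # exponent of the free Gaussian

# d/dt E = xi.(x - 2 xi t)/(sigma^2 + i t) + i |x - 2 xi t|^2 / (4 (sigma^2 + i t)^2)
bracket = (xi.T*u)[0]/a + sp.I*(u.T*u)[0]/(4*a**2)
assert sp.simplify(sp.diff(E, t) - bracket) == 0
```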
In view of \eqref{sig41}, \eqref{sig2}, and \eqref{E:bounds2}, \begin{align*} &|\partial_t F_3(t,x)| \lesssim{\varepsilon}^{-1}\delta^{-1}[\log\log(\tfrac 1\eps)]^5+\biggl|\Bigl[\frac{\xi(x-2\xi t)}{\sigma^2+it}+\frac{i|x-2\xi t|^2}{4(\sigma^2+it)^2}\Bigr]e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)}}\\ &- \Bigl[ \eta^T(\Sigma + i(t-t_c))^{-1}(x-x(t)) +\tfrac i4(x-x(t))^T(\Sigma + i(t-t_c))^{-2}(x-x(t)) \Bigr] e^{\Phi(t,x)}\biggr|. \end{align*} To simplify this expression we use the following estimates \begin{align*} \biggl| \frac{\xi(x-2\xi t)}{\sigma^2+it}+\frac{i|x-2\xi t|^2}{4(\sigma^2+it)^2} - \frac{\xi(x-2\xi t)}{\sigma^2+it_c} - \frac{i|x-2\xi t|^2}{4(\sigma^2+it_c)^2} \biggr| &\lesssim {\varepsilon}^{-1}\delta^{-1} \\ \bigl| \eta^T \bigl[ (\Sigma +i(t-t_c))^{-1} - \Sigma^{-1} \bigr] (x-x(t)) \bigr| &\lesssim {\varepsilon}^{-1}\delta^{-1} \log^6(\tfrac1{\varepsilon}) \\ \bigl| (x-x(t))^T\bigl[ (\Sigma +i(t-t_c))^{-2} - \Sigma^{-2} \bigr] (x-x(t)) \bigr| &\lesssim {\varepsilon}^{-\frac12}\delta^{-\frac32} \log^8(\tfrac1{\varepsilon}) \\ \biggl| \frac{|x-2\xi t|^2}{4(\sigma^2+it)} - \frac{|x-2\xi t|^2}{4(\sigma^2+it_c)} \biggr| &\lesssim {\varepsilon}^{\frac12} \delta^{-\frac12} \log^3(\tfrac1{\varepsilon}) \\ \bigl| (x-x(t))^T\bigl[ (\Sigma +i(t-t_c))^{-1} - \Sigma^{-1} \bigr] (x-x(t)) \bigr| &\lesssim {\varepsilon}^{\frac12} \delta^{-\frac12} \log^7(\tfrac1{\varepsilon}), \end{align*} which follow from \eqref{sig1}, \eqref{sig}, and \eqref{E:bounds1}. 
Combining these estimates with the fact that $z\mapsto e^{z}$ is $1$-Lipschitz on the region $\Re z <0$ yields \begin{align*} &|\partial_t F_3(t,x)|\lesssim{\varepsilon}^{-1}\delta^{-1} \log^{10}(\tfrac1{\varepsilon}) \\ &\quad+\biggl| \frac{\xi(x-2\xi t)}{\sigma^2+it_c}+\frac{i|x-2\xi t|^2}{4(\sigma^2+it_c)^2} - \eta^T \Sigma^{-1}(x-x(t)) - \tfrac i4(x-x(t))^T\Sigma^{-2}(x-x(t)) \biggr| \\ &\quad+ {\varepsilon}^{-\frac32}\delta^{-\frac12} \log(\tfrac 1\eps) \biggl|\frac{|x-2\xi t|^2}{4(\sigma^2+it_c)} + i(x-x_c)(\eta-\xi)-\tfrac14(x-x(t))^T \Sigma^{-1}(x-x(t)) \biggr|. \end{align*} As $|\partial_t A| \lesssim \sigma^{-3/2} |\partial_t F|$, the first term on the right-hand side is consistent with \eqref{laplace A}. Thus to complete our analysis of $ |\partial_t F|$, it remains only to bound the second and third lines in the display above. Recalling \eqref{E:Sigma defn}, the fact that $\eta$ belongs to the kernel of the symmetric matrix $B$, and $x(t)=x_c+2\eta(t-t_c)$, we can simplify these expressions considerably. First, we have \begin{align*} \Bigl| \tfrac{\xi(x-2\xi t)}{\sigma^2+it_c} & +\tfrac{i|x-2\xi t|^2}{4(\sigma^2+it_c)^2} - \eta^T \Sigma^{-1}(x-x(t)) - \tfrac i4(x-x(t))^T\Sigma^{-2}(x-x(t)) \Bigr| \\ ={}& \Bigl|\tfrac{(\xi-\eta)(x-x_c)}{\sigma^2+it_c} - i \tfrac{(t-t_c)(\xi-\eta)(x-x_c)}{(\sigma^2+it_c)^2} + \tfrac{(x-x_c)^T B (x-x_c)}{2(\sigma^2+it_c)} + i\tfrac{(x-x_c)^T B^2(x-x_c)}{4} \Bigr| \\ \lesssim{}& {\varepsilon}^{-1} \log^5(\tfrac1{\varepsilon}), \end{align*} where we used \eqref{E:bounds1}, \eqref{E:bounds3}, and \eqref{B norm} to obtain the inequality. This shows that the second line in the estimate on $\partial_t F_3$ is acceptable for \eqref{laplace A}. 
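The cancellations behind the identity just displayed can also be confirmed symbolically. In the orthonormal frame $(\vec\tau,\vec\gamma,\vec\nu)$ one has $\xi=(\xi_1,\xi_2,\xi_3)$ and $\eta=(\xi_1,\xi_2,-\xi_3)$, the collision point is $x_c=2\xi t_c$, the ray is $x(t)=x_c+2\eta(t-t_c)$, the shared eigenbasis description above gives $\Sigma^{-1}=(\sigma^2+it_c)^{-1}\mathrm{Id}+iB$, and $B$ is the matrix with quadratic form $\tfrac14 z^TBz=\tfrac{(\xi_3z_1+\xi_1z_3)^2}{\xi_3R_1}+\tfrac{(\xi_3z_2+\xi_2z_3)^2}{\xi_3R_2}$ recorded later in the proof. Granting these expressions, the identity is exact; a sympy sketch (variable names are ours):

```python
import sympy as sp

xi1, xi2, xi3, R1, R2 = sp.symbols('xi1 xi2 xi3 R1 R2', real=True)
s, t, tc = sp.symbols('sigma t t_c', real=True)
x = sp.Matrix(sp.symbols('x1 x2 x3', real=True))

xi  = sp.Matrix([xi1, xi2,  xi3])    # incoming frequency
eta = sp.Matrix([xi1, xi2, -xi3])    # reflected frequency
v1, v2 = sp.Matrix([xi3, 0, xi1]), sp.Matrix([0, xi3, xi2])
B = 4/(xi3*R1)*v1*v1.T + 4/(xi3*R2)*v2*v2.T   # matrix of the quadratic form
assert sp.simplify(B*eta) == sp.zeros(3, 1)    # eta spans the kernel of B

a = s**2 + sp.I*tc
Sinv = sp.eye(3)/a + sp.I*B                    # Sigma^{-1}
xc, xt = 2*xi*tc, 2*xi*tc + 2*eta*(t - tc)     # collision point, ray x(t)
y, u, w = x - xc, x - 2*xi*t, x - xt

lhs = ((xi.T*u)[0]/a + sp.I*(u.T*u)[0]/(4*a**2)
       - (eta.T*Sinv*w)[0] - sp.I*(w.T*Sinv*Sinv*w)[0]/4)
rhs = (((xi - eta).T*y)[0]/a - sp.I*(t - tc)*((xi - eta).T*y)[0]/a**2
       + (y.T*B*y)[0]/(2*a) + sp.I*(y.T*B*B*y)[0]/4)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```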
For the last line of our estimate on $\partial_t F_3$ above, we use the same tools to obtain \begin{align*} {\varepsilon}^{-\frac32}\delta^{-\frac12} &\log(\tfrac 1\eps)\Bigl|\tfrac{|x-2\xi t|^2}{4(\sigma^2+it_c)} + i(x-x_c)(\eta-\xi)-\tfrac14(x-x(t))^T \Sigma^{-1}(x-x(t)) \Bigr| \\ ={}& {\varepsilon}^{-\frac32}\delta^{-\frac12} \log(\tfrac 1\eps)\Bigl| \tfrac{(t-t_c)(\xi-\eta)(x-x_c)}{\sigma^2+it_c} + i(x-x_c)(\xi-\eta) + \tfrac{i}4 (x-x_c)^T B (x-x_c) \Bigr| \\ \lesssim {}& {\varepsilon}^{-1} \log^7(\tfrac1{\varepsilon}) + {\varepsilon}^{-\frac32}\delta^{-\frac12} \log(\tfrac 1\eps)\Bigl| (x-x_c)(\xi-\eta) + \tfrac14 (x-x_c)^T B (x-x_c) \Bigr|. \end{align*} The first summand is acceptable. To bound the second summand, we need to delve deeper. Using the orthonormal frame introduced earlier, we write \begin{align}\label{y1} y_1:=(x-x_c)\cdot \vec{\tau},\quad y_2:=(x-x_c) \cdot \vec{\gamma},\qtq{and} y_3:=(x-x_c) \cdot \vec{\nu}. \end{align} Then \begin{align}\label{xi3y3} (x-x_c)\cdot(\xi-\eta)=2\xi_3(x-x_c)\cdot{\vec{\nu}}=2\xi_3y_3. \end{align} For $x\in\partial \Omega$ near $x_c$, we have \begin{align}\label{y3} y_3=-\frac {y_1^2}{2R_1}-\frac {y_2^2}{2R_2}+O(|y_1|^3+|y_2|^3). \end{align} On the other hand, for any $z\in {\mathbb{R}}^3$ a direct computation shows that $$ \frac 14 z^TBz=\frac 1{\xi_3R_1}(\xi_3z_1+\xi_1z_3)^2+\frac1{\xi_3R_2}(\xi_3z_2+\xi_2z_3)^2. $$ Applying this to \begin{align*} z=x-x(t)&=x-x_c-2\eta(t-t_c) =\begin{pmatrix} y_1-2\xi_1(t-t_c)\\ y_2-2\xi_2(t-t_c)\\ 2\xi_3(t-t_c)\\ \end{pmatrix} + \begin{pmatrix} 0\\ 0\\ y_3\\ \end{pmatrix} \end{align*} and noting that $|y_3|\lesssim |y_1|^2+|y_2|^2\lesssim \sigma^2\log^2(\frac 1{\varepsilon})$, we get \begin{align*} \frac 14(x-x(t))^T B(x-x(t)) &=\frac{(y_1\xi_3+y_3\xi_1)^2}{\xi_3R_1}+\frac{(y_2\xi_3+y_3\xi_2)^2}{\xi_3R_2}\\ &=\xi_3\frac {y_1^2}{R_1}+\xi_3\frac {y_2^2}{R_2}+O\Bigl(\sigma^3\log^3(\tfrac 1{{\varepsilon}})\cdot \frac{|\xi|^2}{|\xi_3|}\Bigr). 
\end{align*} Combining this with \eqref{xi3y3}, \eqref{y3}, and Lemma~\ref{L:xc}, we deduce \begin{align}\label{B cancel} \Bigl|(x-x_c)\cdot(\xi-\eta)+\tfrac14 (x-x(t))^T B(x-x(t))\Bigr| &\lesssim\sigma^3\log^3(\tfrac 1{\varepsilon})\frac{|\xi|^2}{|\xi_3|} \notag \\ &\lesssim {\varepsilon}^{\frac 12}\delta^{\frac 32}\log^7(\tfrac 1{\varepsilon}). \end{align} This is the missing piece in our estimate of $|\partial_t F_3|$. Putting everything together yields $$ |\partial_t A(t,x)| \lesssim \sigma^{-\frac32} |\partial_t F(t,x)| \lesssim \sigma^{-\frac32} {\varepsilon}^{-1}\delta^{-1} \log^{10}(\tfrac1{\varepsilon}) \lesssim {\varepsilon}^{-\frac74}\delta^{-\frac74} \log^{9}(\tfrac1{\varepsilon}), $$ which proves the first half of \eqref{laplace A}. This bound on the time derivative of $A$ allows us to deduce \eqref{A} by just checking its validity at $t=t_c$. Note that both $F_1$ and $F_2$ vanish at this point and so, by the fundamental theorem of calculus, we have \begin{align*} |A(t,x)| &\lesssim |t-t_c| {\varepsilon}^{-\frac74}\delta^{-\frac74} \log^{9}(\tfrac1{\varepsilon}) + \sigma^{-\frac32} |F_3(t_c,x)| \\ &\lesssim {\varepsilon}^{-\frac14}\delta^{-\frac54} \log^{12}(\tfrac1{\varepsilon}) + \sigma^{-\frac32} \bigl| (x-x_c)(\xi-\eta) + \tfrac14(x-x_c)^T B (x-x_c) \bigr|. \end{align*} Combining this with \eqref{B cancel} yields \eqref{A}. It remains to estimate the spatial derivatives of $A$. Notice that this corresponds to derivatives of $F$ in directions parallel to $\partial\Omega$. To compute these, we need to determine the unit normal $\vec\nu_x$ to $\partial\Omega$ at a point $x\in\partial\Omega$; indeed, the projection matrix onto the tangent space at $x$ is given by $\mathrm{Id} - \vec\nu_x^{\vphantom{T}} \vec\nu_x^T$.
Writing $y=x-x_c$ as in \eqref{y1} and \eqref{y3}, we have \begin{equation}\label{nu_x} \vec\nu_x = \begin{pmatrix} y_1/R_1 \\ y_2/R_2 \\ 1 \end{pmatrix} + |y|^2 \vec \psi(y), \end{equation} where $\vec \psi$ is a smooth function with all derivatives bounded. However, the Laplacian of $A$ does not involve only the tangential derivatives of $F$; due to the curvature of the obstacle, the normal derivative of $F$ also enters: $$ |\Delta A| \lesssim \sigma^{-\frac32} \Bigl\{ |\nabla F|_{{\mathbb{R}}^3} + |\partial^2 F|_{T_x\partial\Omega} \Bigr\}. $$ Here $\partial^2 F$ denotes the full matrix of second derivatives of $F$, while the subscript $T_x\partial\Omega$ indicates that only the tangential components are considered; a subscript ${\mathbb{R}}^3$, or no subscript at all, indicates that all components are considered. In this way, verifying \eqref{deriv A} and the remaining part of \eqref{laplace A} reduces to proving \begin{equation}\label{E:lap A needs} \begin{gathered} |\nabla F|_{T_x\partial\Omega} \lesssim \delta^{-1} \log^{13}(\tfrac1{\varepsilon}) \qtq{and} |\nabla F| + |\partial^2 F|_{T_x\partial\Omega} \lesssim {\varepsilon}^{-1}\delta^{-1} \log^{10}(\tfrac1{\varepsilon}). \end{gathered} \end{equation} Again we decompose $F$ into the three parts $F_1$, $F_2$, and $F_3$. The first two are easy to estimate; indeed, we do not even need to consider normal and tangential components separately: \begin{align*} |\nabla F_1(t,x)| \lesssim \frac{|x-2\xi t|}{\sigma^2} \frac{|t-t_c|}{\sigma^2} \lesssim \delta^{-1} \log\log(\tfrac 1\eps) \end{align*} and similarly, using \eqref{sig2}, \begin{align*} |\nabla F_2(t,x)| \lesssim \frac{|x-2\xi t|}{\sigma^2} {\varepsilon}^{\frac12}\delta^{-\frac12} \log^3(\tfrac1{\varepsilon}) \lesssim \delta^{-1} \log^3(\tfrac1{\varepsilon}). \end{align*} These are both consistent with the needs of \eqref{E:lap A needs}.
We can bound the second derivatives of $F_1$ and $F_2$ in a similar manner: \begin{align*} |\partial^2 F_1(t,x)| &\lesssim \Bigl[ \sigma^{-2} + \frac{|x-2\xi t|^2}{\sigma^4} \Bigr] \frac{|t-t_c|}{\sigma^2} \lesssim {\varepsilon}^{-\frac12}\delta^{-\frac32} \log\log(\tfrac 1\eps) \\ |\partial^2 F_2(t,x)| &\lesssim \Bigl[ \sigma^{-2} + \frac{|x-2\xi t|^2}{\sigma^4} \Bigr] {\varepsilon}^{\frac12}\delta^{-\frac12} \log^3(\tfrac1{\varepsilon}) \lesssim {\varepsilon}^{-\frac12}\delta^{-\frac32} \log^3(\tfrac1{\varepsilon}). \end{align*} Both are acceptable for \eqref{E:lap A needs}. This leaves us to estimate the derivatives of $F_3$; now it will be important to consider tangential derivatives separately. We have \begin{align*} |\nabla F_3(t,x)|_{T_x\partial\Omega} &\lesssim \frac{|x-2\xi t|}{\sigma^2} |F_3(t,x)| \\ & \quad + \biggl| \frac{x-2\xi t}{2(\sigma^2+it)} - i(\xi-\eta) - \tfrac12 (\Sigma +i(t-t_c))^{-1}(x-x(t)) \biggr|_{T_x\partial\Omega}. \end{align*} From the proof of \eqref{A}, we know that $|F_3| \lesssim {\varepsilon}^{1/2}\delta^{-1/2}\log^{13}(\frac1{\varepsilon})$. To estimate the second line we begin by simplifying it. Using \eqref{E:bounds1} and \eqref{sig1}, we have \begin{align} |\nabla F_3(t,x)|_{T_x\partial\Omega} &\lesssim \frac{\sigma\log(\frac1{\varepsilon})}{\sigma^2} {\varepsilon}^{\frac12}\delta^{-\frac12}\log^{13}(\tfrac1{\varepsilon}) \notag\\ &\quad + \frac{|t-t_c|}{\sigma^{4}} |x-2\xi t| + \|(\Sigma +i(t-t_c))^{-1}-\Sigma^{-1}\|_{HS}|x-x(t)| \notag\\ & \quad + \biggl| \frac{x-2\xi t}{2(\sigma^2+it_c)}-i(\xi-\eta) - \tfrac12 \Sigma^{-1}(x-x(t)) \biggr|_{T_x\partial\Omega} \notag\\ & \lesssim \delta^{-1}\log^{13}(\tfrac1{\varepsilon})+ \biggl| \frac{(\xi-\eta)(t-t_c)}{\sigma^2+it_c} \biggr|_{T_x\partial\Omega} + \biggl| \xi-\eta + \tfrac12 B (x-x_c) \biggr|_{T_x\partial\Omega}. \label{nab F3} \end{align} Thus far, we have not used the restriction to tangential directions. 
Thus, using \eqref{B norm} we may pause to deduce $$ |\nabla F_3(t,x)|_{{\mathbb{R}}^3} \lesssim \delta^{-1}\log^{13}(\tfrac1{\varepsilon}) + \sigma^{-1}\log(\tfrac 1\eps) + |\xi| \lesssim {\varepsilon}^{-1}\log\log(\tfrac 1\eps), $$ which is consistent with \eqref{E:lap A needs}. We now return to \eqref{nab F3}. To estimate the last two summands we write $x-x_c=y$ and use \eqref{nu_x} to obtain \begin{equation*} \bigl(\mathrm{Id} - \vec\nu_x^{\vphantom{T}} \vec\nu_x^T\bigr) (\xi-\eta) = - \begin{pmatrix} 2\xi_3y_1/R_1\\ 2\xi_3y_2/R_2 \\ 0 \end{pmatrix} + O(|\xi| \, |y|^2). \end{equation*} Similarly, \begin{equation*} \tfrac12 \bigl(\mathrm{Id} - \vec\nu_x^{\vphantom{T}} \vec\nu_x^T\bigr) B (x-x_c)= \begin{pmatrix} 2\xi_3y_1/R_1\\ 2\xi_3y_2/R_2 \\ 0 \end{pmatrix} + O(\|B\|_{\max} |y|^2). \end{equation*} Using \eqref{B norm}, this allows us to deduce that \begin{equation}\label{E:T cancel} \bigl| \xi-\eta + \tfrac12 B (x-x_c) \bigr|_{T_x\partial\Omega} \lesssim \|B\|_{\max} |y|^2 + |\xi| \, |y|^2 \lesssim \delta\log^5(\tfrac1{\varepsilon}). \end{equation} Therefore, $$ |\nabla F_3(t,x)|_{T_x\partial\Omega} \lesssim \delta^{-1}\log^{13}(\tfrac1{\varepsilon}) + \log^{2}(\tfrac1{\varepsilon}) + \delta\log^{5}(\tfrac1{\varepsilon}) \lesssim \delta^{-1}\log^{13}(\tfrac1{\varepsilon}). $$ This is consistent with \eqref{E:lap A needs}, thereby completing the proof of \eqref{deriv A}. Estimating the second order derivatives of $F_3$ is very messy. We get \begin{align*} \partial_k \partial_l e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)}} &=\biggl[ \frac{-\delta_{kl}}{2(\sigma^2+it)} + \frac{(x-2\xi t)_k(x-2\xi t)_l}{4(\sigma^2+it)^2}\biggr] e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)}} \\ &=\biggl[ \frac{-\delta_{kl}}{2(\sigma^2+it_c)} + \frac{(x-2\xi t)_k(x-2\xi t)_l}{4(\sigma^2+it_c)^2}\biggr] e^{-\frac{|x-2\xi t|^2}{4(\sigma^2+it)}} + O\biggl(\frac{\log\log(\frac1{\varepsilon})}{{\varepsilon}^{\frac12}\delta^{\frac32}}\biggr). 
\end{align*} Proceeding similarly and using \eqref{sig1} and \eqref{sig} yields \begin{align*} & \partial_k \partial_l e^{\Phi(t,x)} = \partial_k \partial_l e^{i(x-x_c)(\eta-\xi)-\frac14(x-x(t))^T(\Sigma +i(t-t_c))^{-1}(x-x(t))} \\ &=\biggl[ -\tfrac12\Sigma^{-1}_{kl} + \Bigl\{ i(\eta-\xi) - \tfrac12\Sigma^{-1}(x-x(t))\Bigr\}_k \Bigl\{ i(\eta-\xi) - \tfrac12\Sigma^{-1}(x-x(t))\Bigr\}_l\biggr] e^{\Phi(t,x)}\\ &\quad + O\bigl({\varepsilon}^{-1}\delta^{-1}\log^6(\tfrac1{\varepsilon})\bigr). \end{align*} We now combine these formulas, using \eqref{B norm}, \eqref{E:T cancel}, and the definition of $\Sigma^{-1}$ in the process. This yields \begin{align*} |\partial^2 F_3|_{T_x\partial\Omega} &\lesssim \frac{\log^2(\frac1{\varepsilon})}{\sigma^2} |F_3| + |B|_{T_x\partial\Omega} + {\varepsilon}^{-\frac12}\delta^{\frac12}\log^5(\tfrac1{\varepsilon}) +{\varepsilon}^{-1}\delta^{-1}\log^6(\tfrac1{\varepsilon})\\ &\lesssim {\varepsilon}^{-\frac12}\delta^{-\frac32}\log^{13}(\tfrac1{\varepsilon}) + {\varepsilon}^{-1} \log(\tfrac1{\varepsilon}) + {\varepsilon}^{-\frac12}\delta^{\frac12}\log^5(\tfrac1{\varepsilon}) +{\varepsilon}^{-1}\delta^{-1}\log^6(\tfrac1{\varepsilon})\\ &\lesssim {\varepsilon}^{-1}\delta^{-1} \log^{6}(\tfrac1{\varepsilon}). \end{align*} This completes the proof of \eqref{E:lap A needs} and so that of \eqref{laplace A}. \end{proof} With all these preparations, we are ready to begin estimating the contribution of the wave packets that enter the obstacle. In view of Proposition~\ref{P:short times}, it suffices to prove the following \begin{prop}[The long time contribution of entering rays]\label{P:long times} We have \begin{equation}\label{enter} \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}\bigl[{e^{it\Delta_{\Omega}}} (1_\Omega\gamma_n)-u_n\bigr]\Bigr\|_{L_{t,x}^{\frac {10}3}([T,\infty)\times{\mathbb{R}}^3)}=o(1) \qtq{as} {\varepsilon}\to 0, \end{equation} where $T=\frac1{10}{\varepsilon}\delta[\log\log(\tfrac 1\eps)]^{-1}$. 
\end{prop} Now fix $n\in \mathcal E$. We denote by $\chi^{u_n}(t)$ a smooth time cutoff such that $$ \chi^{u_n}(t)=1 \qtq{for} t\in \bigl[0, t_c+2\tfrac{\sigma\log(\frac1{\varepsilon})}{|\xi_n|}\bigr] \qtq{and} \chi^{u_n}(t)=0 \qtq{for} t\ge t_c+4\tfrac{\sigma\log(\frac1{\varepsilon})}{|\xi_n|}. $$ Denote by $\chi^{v_n}(t)$ a smooth time cutoff such that $$ \chi^{v_n}(t)=1 \qtq{for} t\ge t_c-2\tfrac{\sigma\log(\frac1{\varepsilon})}{|\xi_n|} \qtq{and} \chi^{v_n}(t)=0 \qtq{for} t\in\bigl[0,t_c- 4\tfrac{\sigma\log(\frac1{\varepsilon})}{|\xi_n|}\bigr]. $$ We then define \begin{align*} \tilde u_n(t,x) :=\chi^{u_n}(t) u_n(t,x) \qtq{and} \tilde v_n(t,x):=\chi^{v_n}(t)v_n(t,x). \end{align*} The cutoff $\chi^{u_n}$ kills $u_n$ shortly after it enters the obstacle; the additional time delay (relative to $t_c$) guarantees that the bulk of the wave packet is deep inside $\Omega^c$ when the truncation occurs. Note that the cutoff also ensures that $u_n$ does not exit the obstacle. Analogously, $\chi^{v_n}$ turns on the reflected wave packet shortly before it leaves the obstacle. By the triangle inequality, \begin{align} \text{LHS}\eqref{enter} &\lesssim \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}(u_n-1_\Omega\tilde u_n)\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times{\mathbb{R}}^3)} + \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}\tilde v_n\Bigr\|_{L_{t,x}^{\frac {10}3}([T,\infty)\times\Omega)}\notag\\ &\quad + \Bigl\|\sum_{ n\in \mathcal E} c_n^{{\varepsilon}}\bigl[{e^{it\Delta_{\Omega}}}(1_\Omega\gamma_n)-(\tilde u_n -\tilde v_n)\bigr]\Bigr\|_{L_{t,x}^{\frac {10}3}([T,\infty)\times\Omega)}.\label{E:control 462} \end{align} We prove that the first two summands are $o(1)$ in Lemmas~\ref{L:small u} and \ref{L:bdfv}, respectively. Controlling the last summand is a much lengthier enterprise and follows from Lemma~\ref{L:rem}. 
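Before giving the proofs, we note that the two pointwise identities for sums of squared distances used in the proof of Lemma~\ref{L:small u} below are instances of the parallelogram (variance) identity; they can be verified symbolically (a sympy sketch; variable names are ours):

```python
import sympy as sp

t = sp.symbols('t', real=True)
x = sp.Matrix(sp.symbols('x1 x2 x3', real=True))
xis = [sp.Matrix(sp.symbols('xi%d1 xi%d2 xi%d3' % (j, j, j), real=True))
       for j in range(1, 5)]

def sq(v):
    """Squared Euclidean norm of a column vector."""
    return (v.T*v)[0]

# Two-packet identity:
#   |x - 2t xi_n|^2 + |x - 2t xi_m|^2 = 2|x - (xi_n+xi_m)t|^2 + 2t^2|xi_n-xi_m|^2
lhs2 = sq(x - 2*t*xis[0]) + sq(x - 2*t*xis[1])
rhs2 = 2*sq(x - (xis[0] + xis[1])*t) + 2*t**2*sq(xis[0] - xis[1])
assert sp.expand(lhs2 - rhs2) == 0

# Four-packet identity:
#   (1/4) sum_j |x - 2 xi_j t|^2
#     = |x - (sum_j xi_j / 2) t|^2 + (t^2/4) sum_{j<k} |xi_j - xi_k|^2
lhs4 = sum(sq(x - 2*xi*t) for xi in xis)/4
rhs4 = (sq(x - sum(xis, sp.zeros(3, 1))*t/2)
        + t**2*sum(sq(xis[j] - xis[k])
                   for j in range(4) for k in range(j + 1, 4))/4)
assert sp.expand(lhs4 - rhs4) == 0
```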
This will complete the proof of Proposition~\ref{P:long times}, which together with Propositions~\ref{P:ng}, \ref{P:missing}, and \ref{P:short times} yields Theorem~\ref{T:LF3}. \begin{lem}\label{L:small u} We have \begin{align*} \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}(u_n-1_\Omega\tilde u_n)\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times{\mathbb{R}}^3)}=o(1) \qtq{as} {\varepsilon}\to 0. \end{align*} \end{lem} \begin{proof} By the triangle inequality and H\"older, \begin{align} \Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon}(u_n-1_\Omega\tilde u_n)\Bigr\|_{L_{t,x}^{\frac{10}3}} &\lesssim\Bigl\|\sum_{n\in \mathcal S}|c_n^{\varepsilon}||u_n|\Bigr\|_{L_{t,x}^{\frac{10}3}}\notag\\ &\lesssim \Bigl\|\sum_{n\in \mathcal S}|c_n^{\varepsilon}| |u_n|\Bigr\|_{L_t^\infty L_x^2}^{\frac15} \Bigl\|\sum_{n\in \mathcal S}|c_n^{\varepsilon}||u_n|\Bigr\|_{L_t^{\frac83}L_x^4}^{\frac 45},\label{interp} \end{align} where all spacetime norms are over $[T,\infty)\times{\mathbb{R}}^3$. To estimate the first factor on the right-hand side of \eqref{interp}, we use \begin{align*} |x-2t\xi_n|^2+|x-2t\xi_m|^2=2|x-(\xi_n+\xi_m)t|^2+2t^2|\xi_n-\xi_m|^2 \end{align*} together with \eqref{bdforc} to get \begin{align*} \Bigl\|\sum_{n\in \mathcal S}& |c_n^{\varepsilon}||u_n|\Bigr\|_{L_t^\infty L_x^2([T,\infty)\times{\mathbb{R}}^3)}^2\\ &\lesssim \biggl\|\sum_{n,m\in \mathcal S}\frac{(\sigma{\varepsilon})^3}{L^6}\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac32} \int_{{\mathbb{R}}^3} e^{-\frac{\sigma^2|x-2t\xi_n|^2}{4(\sigma^4+t^2)}-\frac{\sigma^2|x-2t\xi_m|^2}{4(\sigma^4+t^2)}}\,dx\biggr\|_{L_t^{\infty}([T,\infty))}\\ &\lesssim \biggl\|\sum_{n, m\in \mathcal S}\frac{(\sigma{\varepsilon})^3}{L^6}\exp\Bigl\{-\frac{\sigma^2 t^2|\xi_n-\xi_m|^2}{2(\sigma^4+t^2)}\Bigr\}\biggr\|_{L_t^{\infty}([T,\infty))}.
\end{align*} For $t\geq T$ we have \begin{align*} \frac{\sigma^2 t^2|\xi_n-\xi_m|^2}{2(\sigma^4+t^2)} &\ge\frac{\sigma^2|n-m|^2 T^2}{2L^2(\sigma^4+T^2)}\ge\frac{|n-m|^2T^2}{4\sigma^4[\log\log(\tfrac 1\eps)]^2}\ge\frac{|n-m|^2}{\log^5(\tfrac 1{\varepsilon})}, \end{align*} and so derive \begin{align*} \Bigl\|\sum_{n\in \mathcal S}|c_n^{\varepsilon}||u_n|\Bigr\|_{L_t^\infty L_x^2([T,\infty)\times{\mathbb{R}}^3)}^2 &\lesssim \sum_{n, m\in \mathcal S}\frac{(\sigma{\varepsilon})^3}{L^6}\exp\Bigl\{-\frac{|n-m|^2}{\log^5(\tfrac 1{\varepsilon})}\Bigr\}\\ &\lesssim \sum_{n\in \mathcal S}\frac{(\sigma{\varepsilon})^3}{L^6}\log^{\frac{15}2}(\tfrac 1{\varepsilon})\\ &\lesssim\frac{(\sigma{\varepsilon})^3}{L^6}\log^{\frac {15}2}(\tfrac1{\varepsilon})\cdot\biggl(\frac L{\varepsilon}\log\log(\tfrac 1\eps)\biggr)^3\\ &\lesssim \log^{\frac{15}2}(\tfrac 1{\varepsilon}). \end{align*} We now turn to the second factor on the right-hand side of \eqref{interp}. As \begin{align*} \sum_{j=1}^4\frac 14 |x-2\xi_{n_j} t|^2&= \biggl|x-\frac {\sum_{j=1}^4\xi_{n_j}}2 t\biggr|^2+\frac {t^2}4\sum_{j<k}|\xi_{n_j} -\xi_{n_k}|^2, \end{align*} we have \begin{align}\label{129} \Bigl\|\sum_{n\in \mathcal S}&|c_n^{{\varepsilon}}| |u_n|\Bigr\|_{L_x^4({\mathbb{R}}^3)}^4\notag\\ &\lesssim \sum_{n_1,\cdots,n_4\in \mathcal S}\bigl|c_{n_1}^{{\varepsilon}}c_{n_2}^{{\varepsilon}} c_{n_3}^{{\varepsilon}}c_{n_4}^{\varepsilon}\bigr| \biggl(\frac {\sigma^2}{\sigma^4+t^2}\biggr)^3\int_{{\mathbb{R}}^3} \exp\Bigl\{-\sum_{j=1}^4 \frac{\sigma^2|x-2\xi_{n_j} t|^2}{4(\sigma^4+t^2)}\Bigr\}\,dx\notag\\ &\lesssim \sum_{n_1,\cdots,n_4\in \mathcal S} \frac{(\sigma{\varepsilon})^6}{L^{12}}\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 32} \exp\Bigl\{-\frac {\sigma^2t^2}{4(\sigma^4+t^2)}\sum_{j<k}|\xi_{n_j}-\xi_{n_k}|^2\Bigr\}. \end{align} To estimate the sum in \eqref{129} we divide it into two parts. Let $N:=\log^3(\frac1{{\varepsilon}}).$ \textbf{Part 1:} $|n_j-n_k|\ge N$ for some $1\le j\neq k\le 4$. 
We estimate the contribution of the summands conforming to this case to LHS\eqref{129} by \begin{align*} \frac{(\sigma{\varepsilon})^6}{L^{12}}\biggl(\frac L{{\varepsilon}}\log\log(\tfrac 1{{\varepsilon}})\biggr)^{12}& \biggl(\frac {\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 32} \exp\Bigl\{-\frac {\sigma^2N^2t^2}{4L^2(\sigma^4+t^2)} \Bigr\}\\ &\lesssim t^{-3} \sigma^9{\varepsilon}^{-6}[\log\log(\tfrac 1\eps)]^{12}\exp\Bigl\{-\frac{\sigma^2N^2t^2}{4L^2(\sigma^4+t^2)}\Bigr\}. \end{align*} For $T\le t\le \sigma^2$, we estimate \begin{align*} \exp\Bigl\{-\frac{\sigma^2N^2t^2}{4L^2(\sigma^4+t^2)}\Bigr\}\le\exp\Bigl\{-\frac{T^2N^2}{8\sigma^2L^2}\Bigr\}\le{\varepsilon}^{100} \end{align*} while for $t\ge \sigma^2$, \begin{align*} \exp\Bigl\{-\frac{\sigma^2N^2t^2}{4L^2(\sigma^4+t^2)}\Bigr\}\le\exp\Bigl\{-\frac{\sigma^2N^2}{8L^2}\Bigr\}\le {\varepsilon}^{100}. \end{align*} Thus the contribution of Part 1 is $O( {\varepsilon}^{80}t^{-3})$. \textbf{Part 2:} $|n_j-n_k|\le N$ for all $1\le j\neq k\le 4$. We estimate the contribution of the summands conforming to this case to LHS\eqref{129} by \begin{align*} \frac {(\sigma{\varepsilon})^6}{L^{12}}\biggl(\frac {\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 32}N^9 \biggl(\frac L{{\varepsilon}}\log\log(\tfrac 1{{\varepsilon}})\biggr)^3 \lesssim \frac{\sigma^9N^9{\varepsilon}^3}{L^9t^3}[\log\log(\tfrac 1\eps)]^3\lesssim \frac{{\varepsilon}^3\log^{27}(\tfrac1{\varepsilon})}{[\log\log(\tfrac 1\eps)]^6} t^{-3}. \end{align*} Collecting the two parts and integrating in time, we obtain \begin{align*} \Bigl\|\sum_{n\in \mathcal S}|c_n^{\varepsilon}||u_n|\Bigr\|_{L_t^{\frac83}L_x^4([T,\infty)\times{\mathbb{R}}^3)} \lesssim \frac{{\varepsilon}^{\frac34}\log^{\frac{27}4}(\tfrac1{\varepsilon})}{[\log\log(\tfrac 1\eps)]^{\frac 32}}\cdot T^{-\frac 38} \lesssim {\varepsilon}^{\frac38}\delta^{-\frac38}\log^8(\tfrac 1{\varepsilon}). 
\end{align*} Putting everything together and invoking \eqref{interp} we get $$ \Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon}(u_n-1_\Omega\tilde u_n)\Bigr\|_{L_{t,x}^{\frac{10}3}([T, \infty)\times{\mathbb{R}}^3)} \lesssim {\varepsilon}^{\frac3{10}}\delta^{-\frac3{10}}\log^{\frac34+\frac{32}5}(\tfrac1{\varepsilon})=o(1) \qtq{as}{\varepsilon}\to0. $$ This completes the proof of the lemma. \end{proof} \begin{lem}\label{L:bdfv} We have $$ \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}} \tilde v_n\Bigr\|_{L_{t,x}^{\frac {10}3}([T,\infty)\times{\mathbb{R}}^3)}=o(1) \qtq{as}{\varepsilon}\to0. $$ \end{lem} \begin{proof} By H\"older's inequality, \begin{align}\label{E:interp} \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}} \tilde v_n\Bigr\|_{L_{t,x}^{\frac {10}3}} \lesssim \Bigl\|\sum_{n\in \mathcal E}c_n^{\varepsilon} \tilde v_n\Bigr\|_{L_t^\infty L_x^2}^{\frac15} \Bigl\|\sum_{n\in \mathcal E}c_n^{\varepsilon} \tilde v_n\Bigr\|_{L_t^{\frac83}L_x^4}^{\frac 45}, \end{align} where all spacetime norms are over $[T,\infty)\times{\mathbb{R}}^3$. First we note that from \eqref{sig41} and \eqref{sig3}, we can bound \begin{align}\label{bdfv} |v_n(t,x)| &\lesssim \log^{\frac52}(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac34}\exp\Bigl\{-\frac{\sigma^2|x-x_n(t)|^2}{4[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+t^2\log^4(\frac1{\varepsilon})]}\Bigr\}. \end{align} Using this bound, we estimate \begin{align*} &\int_{{\mathbb{R}}^3} \! |\tilde v_{n_1}||\tilde v_{n_2}| \,dx \lesssim \log^5(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 32} \!\!\int_{{\mathbb{R}}^3}\!\!
\exp\Bigl\{-\frac{\sigma^2[|x-x_1(t)|^2+|x-x_2(t)|^2]}{4[\log\log(\frac 1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]} \Bigr\} \,dx\\ &\lesssim\log^5(\tfrac1{{\varepsilon}})\biggl[\frac {[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac1{{\varepsilon}})]}{\sigma^4+t^2}\biggr]^{\frac 32} \exp\Bigl\{-\frac{\sigma^2|x_1(t)-x_2(t)|^2}{8[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}\Bigr\}\\ &\lesssim \log^{12}(\tfrac 1{{\varepsilon}})\exp\Bigl\{-\frac{\sigma^2|x_1(t)-x_2(t)|^2}{8[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}\Bigr\}, \end{align*} where $x_j(t)$ denotes the trajectory of $v_{n_j}$, that is, $$ x_j(t):=2\xi^{(j)} t_c^{(j)}+2\eta^{(j)} (t-t_c^{(j)}) $$ with $\xi^{(j)}:=\xi_{n_j}$, $\eta^{(j)}:=\eta_{n_j}$, and $t_c^{(j)}$ representing the corresponding collision times. Therefore, \begin{align}\label{E:tilde v} \Bigl\|\sum_{n\in\mathcal E} c_n^{{\varepsilon}}\tilde v_n\Bigr\|_{L_t^{\infty}L_x^2}^2 &\lesssim \sup_t \sum_{ n_1, n_2\in \mathcal E}|c_{n_1}^{{\varepsilon}}||c_{n_2}^{{\varepsilon}}|\log^{12}(\tfrac 1{{\varepsilon}}) e^{-\frac {\sigma^2|x_1(t)-x_2(t)|^2}{8[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}}, \end{align} where the supremum in $t$ is taken over the region \begin{align*} t\ge \max\biggl\{t_c^{(1)}-4\frac {\sigma \log(\frac1{{\varepsilon}})}{|\xi^{(1)}|},\ t_c^{(2)}-4\frac {\sigma\log(\frac1{{\varepsilon}})}{|\xi^{(2)}|}\biggr\}. \end{align*} Next we show that for $|n_1-n_2|\geq \log^4(\frac1{\varepsilon})$ and all such $t$, \begin{equation}\label{difray} |x_1(t)-x_2(t)|\ge |\xi^{(1)}-\xi^{(2)}|t. \end{equation} We discuss two cases. When $t\ge \max\{t_c^{(1)},t_c^{(2)}\}$, this follows immediately from Lemma~\ref{L:diverging rays}. 
It remains to prove \eqref{difray} for $$ \max\biggl\{t_c^{(1)}-4\frac {\sigma\log(\frac1{{\varepsilon}})}{|\xi^{(1)}|},\ t_c^{(2)}-4\frac {\sigma\log(\frac1{{\varepsilon}})}{|\xi^{(2)}|}\biggr\}\le t\le \max\bigl\{t_c^{(1)}, \ t_c^{(2)}\bigr\}. $$ Without loss of generality, we may assume $t_c^{(1)}\ge t_c^{(2)}$. Using Lemmas~\ref{L:xc} and~\ref{L:diverging rays} and the fact that $|n_1-n_2|\ge \log^4(\frac 1{\varepsilon})$, we estimate \begin{align*} |x_1(t)-x_2(t)|&\ge |x_1(t_c^{(1)})-x_2(t_c^{(1)})|-|x_1(t)-x_1(t_c^{(1)})|-|x_2(t)-x_2(t_c^{(1)})|\\ &\ge 2|\xi^{(1)}-\xi^{(2)}|t_c^{(1)}-2|\xi^{(1)}||t-t_c^{(1)}|-2|\xi^{(2)}||t-t_c^{(1)}|\\ &\ge 2\frac {|n_1-n_2|}L t_c^{(1)}-8\sigma \log(\tfrac 1{{\varepsilon}})- 8\frac {|\xi^{(2)}|\sigma\log(\tfrac 1\eps)}{|\xi^{(1)}|}\\ &\ge 2\frac {|n_1-n_2|}{L}t_c^{(1)} -16\sigma\log(\tfrac 1{{\varepsilon}})[\log\log(\tfrac 1{{\varepsilon}})]^2\\ &\ge\frac {|n_1-n_2|}{L} t=|\xi^{(1)}-\xi^{(2)}| t. \end{align*} This completes the verification of \eqref{difray}. 
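For the reader's convenience, the Gaussian integral evaluated at the start of this proof (the bound on $\int|\tilde v_{n_1}||\tilde v_{n_2}|\,dx$) rests on the two-center analogue of the four-center identity used below; the following sketch records the algebra, with $c>0$ standing for the constant $\frac{\sigma^2}{4[\log\log(\frac 1{\varepsilon})]^{25}[\sigma^4+t^2\log^4(\frac 1{\varepsilon})]}$ appearing there.

```latex
% Completing the square about the midpoint of the two centers:
%   |x-x_1(t)|^2 + |x-x_2(t)|^2
%     = 2|x - (x_1(t)+x_2(t))/2|^2 + |x_1(t)-x_2(t)|^2/2,
% so that for any c>0,
\begin{align*}
\int_{{\mathbb{R}}^3} e^{-c[|x-x_1(t)|^2+|x-x_2(t)|^2]}\,dx
&= e^{-\frac c2|x_1(t)-x_2(t)|^2}\int_{{\mathbb{R}}^3} e^{-2c|x-\frac{x_1(t)+x_2(t)}2|^2}\,dx\\
&=\Bigl(\frac{\pi}{2c}\Bigr)^{\frac 32} e^{-\frac c2|x_1(t)-x_2(t)|^2}.
\end{align*}
```

With the indicated choice of $c$, the prefactor $(\frac{\pi}{2c})^{\frac32}$ and the factor $\frac c2$ in the exponent reproduce the second line of that estimate.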
Using \eqref{bdforc} and \eqref{difray}, \eqref{E:tilde v} implies \begin{align*} \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}\tilde v_n\Bigr\|_{L_x^2}^2 &\lesssim \sum_{|n_1-n_2|\ge \log^4(\frac1{{\varepsilon}}),\ n_i\in \mathcal E}\frac {(\sigma{\varepsilon})^3}{L^6}\log^{12}(\tfrac1{{\varepsilon}}) e^{-\frac{\sigma^2|n_1-n_2|^2t^2}{8L^2[\log\log(\frac 1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}}\\ &\quad +\sum_{|n_1-n_2|\le\log^4(\frac 1{{\varepsilon}}),\ n_i\in \mathcal E}\frac{(\sigma{\varepsilon})^3}{L^6}\log^{12}(\tfrac 1{{\varepsilon}})\\ &\lesssim \frac {(\sigma{\varepsilon})^3}{L^6}\log^{12}(\tfrac 1{{\varepsilon}})\biggl(\frac {L\log\log(\tfrac 1{{\varepsilon}})}{{\varepsilon}}\biggr)^3 \biggl(\frac{[\log\log(\tfrac 1{{\varepsilon}})]^{27}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}{t^2}\biggr)^{\frac 32}\\ &\quad+\frac{(\sigma{\varepsilon})^3}{L^6}\log^{24}(\tfrac 1{{\varepsilon}})\biggl(\frac {L\log\log(\frac 1{{\varepsilon}})}{{\varepsilon}}\biggr)^3\\ &\lesssim \log^{13}(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^6}{t^3}+\log^6(\tfrac 1{\varepsilon})\biggr) + \log^{25}(\tfrac 1{{\varepsilon}}). \end{align*} Thus, \begin{align}\label{E:536} \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}\tilde v_n\Bigr\|_{L_t^{\infty}L_x^2([T,\infty)\times{\mathbb{R}}^3)}\lesssim \log^{\frac{25}2}(\tfrac1{{\varepsilon}}). \end{align} We now turn to estimating the second factor on the right-hand side of \eqref{E:interp}. 
Combining \eqref{bdfv} with \begin{align*} \frac 14\sum_{j=1}^4|x-x_j(t)|^2=\biggl|x-\frac 14\sum_{j=1}^4 x_j(t)\biggr|^2+\frac 1{16}\sum_{j<l}|x_j(t)-x_l(t)|^2, \end{align*} we get \begin{align*} &\int_{{\mathbb{R}}^3}|\tilde v_{n_1}||\tilde v_{n_2}||\tilde v_{n_3}||\tilde v_{n_4}|\,dx\\ &\lesssim \log^{10}(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^3\!\!\int_{{\mathbb{R}}^3}\!\!\exp\Bigl\{ -\frac {\sigma^2}{4[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac1{{\varepsilon}})]}\sum_{j=1}^4|x-x_j(t)|^2\Bigr\}\,dx\\ &\lesssim \log^{10}(\tfrac1{{\varepsilon}})\biggl(\frac {\sigma^2}{\sigma^4+t^2}\biggr)^3\biggl(\frac{[\log\log(\frac 1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}{\sigma^2}\biggr)^{\frac 32} e^{-\frac{\sigma^2\sum_{j<l}|x_j(t)-x_l(t)|^2}{16[\log\log(\frac 1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}}\\ &\lesssim\sigma^3t^{-3}\log^{17}(\tfrac 1{{\varepsilon}})e^{-\frac{\sigma^2\sum_{j<l}|x_j(t)-x_l(t)|^2}{16[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}}. \end{align*} Combining this with \eqref{bdforc}, we obtain \begin{align}\label{359} \Bigl\|\sum_{n\in \mathcal E} &c_n^{{\varepsilon}} \tilde v_n(t)\Bigr\|_{L_x^4}^4 \lesssim \!\sum_{n_1,\ldots,n_4\in \mathcal E}\! \frac{(\sigma{\varepsilon})^6}{L^{12}}\sigma^3t^{-3}\log^{17}(\tfrac1{{\varepsilon}}) e^{-\frac{\sigma^2\sum_{j<l}|x_j(t)-x_l(t)|^2}{16[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}}. \end{align} To estimate the sum above we break it into two parts. Let $N:= \log^4(\frac 1{{\varepsilon}})$. \textbf{Part 1:} $|n_j-n_k|\ge N$ for some $1\le j\neq k\le 4$. By \eqref{difray}, we have $$ |x_j(t)-x_k(t)|\ge \frac {|n_j-n_k|}{L} t \qtq{for all} t\in \supp \prod_{l=1}^4 \tilde v_{n_l}. 
$$ As $t\geq T$, we estimate the contribution of the summands conforming to this case to RHS\eqref{359} by \begin{align*} \frac {(\sigma{\varepsilon})^6}{L^{12}}\sigma^3t^{-3}& \log^{17}(\tfrac1{{\varepsilon}})\biggl[\frac {L\log\log(\frac 1{{\varepsilon}})}{{\varepsilon}}\biggr]^{12} \exp\Bigl\{-\frac{\sigma^2t^2N^2}{16L^2[\log\log(\frac1{{\varepsilon}})]^{25}[\sigma^4+t^2\log^4(\frac 1{{\varepsilon}})]}\Bigr\}\\ &\lesssim \frac {\sigma^9}{{\varepsilon}^6}t^{-3}\log^{18}(\tfrac 1{{\varepsilon}})\exp\Bigl\{-\frac{N^2}{\log^5(\frac 1{{\varepsilon}})}\Bigr\} \le{\varepsilon}^{100}t^{-3}. \end{align*} \textbf{Part 2:} $|n_j-n_k|\le N$ for all $1\le j<k \le 4$. We estimate the contribution of the summands conforming to this case to RHS\eqref{359} by \begin{align*} \frac {(\sigma {\varepsilon})^6}{L^{12}}&\sigma^3 t^{-3} \log^{17}(\tfrac1{{\varepsilon}}) N^9\biggl(\frac {L\log\log(\frac 1{{\varepsilon}})}{{\varepsilon}}\biggr)^3 \lesssim \biggl(\frac {\sigma N}L\biggr)^9{\varepsilon}^3 t^{-3}\log^{18}(\tfrac1{{\varepsilon}})\le {\varepsilon}^3 t^{-3}\log^{56}(\tfrac 1{{\varepsilon}}). \end{align*} Combining the estimates from the two cases, we obtain \begin{align*} \Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon} \tilde v_n\Bigr\|_{L_t^{\frac83}L_x^4([T,\infty)\times{\mathbb{R}}^3)} &\lesssim \bigl[{\varepsilon}^{25} + {\varepsilon}^{\frac 34}\log^{14}(\tfrac 1{{\varepsilon}})\bigr] T^{-\frac 38} \lesssim {\varepsilon}^{\frac 38}\delta^{-\frac 38}\log^{15}(\tfrac1{\varepsilon}). \end{align*} Combining this with \eqref{E:interp} and \eqref{E:536} completes the proof of Lemma~\ref{L:bdfv}. \end{proof} To complete the proof of Proposition~\ref{P:long times}, we are left to estimate the last term on RHS\eqref{E:control 462}.
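For completeness, the time integration in the last display of the proof just completed is elementary: both parts produce a bound of the form $C({\varepsilon})^4\, t^{-3}$ for the $L_x^4$ norm raised to the fourth power, so after taking fourth roots the claim reduces to the following sketch (with $T$ as fixed earlier in the argument).

```latex
% Fourth root of the pointwise-in-time bounds:
%   \| \sum_{n} c_n^\eps \tilde v_n(t) \|_{L_x^4} \lesssim C(\eps)\, t^{-3/4},
% and then
\begin{align*}
\bigl\| t^{-\frac 34}\bigr\|_{L_t^{\frac 83}([T,\infty))}
=\Bigl(\int_T^{\infty} t^{-2}\,dt\Bigr)^{\frac 38}
=T^{-\frac 38}.
\end{align*}
% Taking C(\eps) = \eps^{25} + \eps^{3/4}\log^{14}(1/\eps) and
% T = \tfrac 1{10}\eps\delta[\log\log(1/\eps)]^{-1} yields the stated bound
% \eps^{3/8}\delta^{-3/8}\log^{15}(1/\eps).
```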
For each $n\in \mathcal E$, we write \begin{equation}\label{E:259} {e^{it\Delta_{\Omega}}} (1_\Omega\gamma_n)-(\tilde u_n -\tilde v_n)=-(w_n+r_n), \end{equation} where $w_n$ is chosen to agree with $\tilde u_n-\tilde v_n$ on $\partial \Omega$ and $r_n$ is the remainder. More precisely, let $\phi\in C_c^{\infty}([0,\infty))$ with $\phi\equiv 1$ on $[0,\frac 12]$ and $\phi\equiv 0$ on $[1,\infty)$. For each $x\in \Omega$, let $x_*\in\partial\Omega$ denote the point obeying $|x-x_*|=\dist(x,\Omega^c)$. Now we define $$ w_n(t,x):=w_n^{(1)}(t,x)+w_n^{(2)}(t,x)+w_n^{(3)}(t,x) $$ and \begin{align*} w_n^{(j)}(t,x):=(\tilde u_n-\tilde v_n)(t,x_*)\phi\bigl(\tfrac{|x-x_*|}{\sigma}\bigr) \!\begin{cases} (1-\phi)\bigl(\frac {|x_*-x_c^{(n)}|}{\sigma\log(\frac 1{{\varepsilon}})}\bigr), & j=1,\\[2mm] \phi\bigl(\frac {|x_*-x_c^{(n)}|}{\sigma\log(\frac 1{{\varepsilon}})}\bigr)(1-\phi)\bigl(\frac {|t-t_c^{(n)}||\xi_n|}{2\sigma\log(\frac 1{{\varepsilon}})}\bigr), &j=2,\\[2mm] \phi\bigl(\frac {|x_*-x_c^{(n)}|}{\sigma\log(\frac 1{{\varepsilon}})}\bigr) \phi\bigl(\frac {|t-t_c^{(n)}||\xi_n|}{2\sigma\log(\frac 1{{\varepsilon}})}\bigr)e^{i\xi_n\cdot (x-x_*)},\!\! &j=3. \end{cases} \end{align*} We will estimate $w_n$ by estimating each $w_n^{(j)}$ separately. Note that $w_n^{(3)}$ is the most significant of the three; spatial oscillation has been introduced into this term to ameliorate the temporal oscillation of $\tilde u_n-\tilde v_n$. This subtle modification is essential to achieve satisfactory estimates. To estimate $r_n$, we use \eqref{E:259} to write $$ 0=(i\partial_t +\Delta_\Omega)(\tilde u_n-\tilde v_n-w_n-r_n)=(i\partial_t+\Delta)(\tilde u_n-\tilde v_n-w_n)-(i\partial_t+\Delta_\Omega) r_n, $$ which implies $$ (i\partial_t+\Delta_\Omega) r_n=iu_n\partial_t \chi^{u_n} -iv_n\partial_t \chi^{v_n} -(i\partial_t+\Delta)w_n.
$$ Using the Strichartz inequality, we estimate \begin{align*} \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}r_n\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times\Omega)} &\lesssim \Bigl\|\sum_{n\in \mathcal E}c_n^{{\varepsilon}}\bigl[u_n\partial_t\chi^{u_n}-v_n\partial_t\chi^{v_n}\bigr]\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)}\\ &\quad + \Bigl\|\sum_{n\in \mathcal E}c_n^{{\varepsilon}}(i\partial_t+\Delta)w_n\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)}. \end{align*} Putting everything together, we are thus left to prove the following \begin{lem}\label{L:rem} As ${\varepsilon}\to0$, we have \begin{align} &\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}} u_n\partial_t \chi^{u_n}\Bigr\|_{L_t^1L_x^2([T,\infty)\times \Omega)} +\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}} v_n \partial_t \chi^{v_n}\Bigr\|_{L_t^1L_x^2([T,\infty)\times \Omega)}=o(1)\label{rem1}\\ &\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}w_n^{(j)}\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times\Omega)} +\Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}(i\partial_t+\Delta)w_n^{(j)}\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)}=o(1),\label{rem2} \end{align} for each $j=1,2,3$. As previously, $T=\frac1{10}{\varepsilon}\delta[\log\log(\frac1{\varepsilon})]^{-1}$. \end{lem} \begin{proof} We first prove the estimate \eqref{rem1} for $u_n$. Recall the following bound for $u_n$: \begin{align*} |u_n(t,x)|\lesssim \biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 34} \exp\Bigl\{-\frac{\sigma^2|x-2\xi_n t|^2}{4(\sigma^4+t^2)}\Bigr\}. \end{align*} Also, for $t\in \supp \partial_t \chi^{u_n}=[t_c^{(n)}+\frac{2\sigma\log(\frac 1{{\varepsilon}})}{|\xi_n|},t_c^{(n)}+\frac{4\sigma\log(\frac 1{{\varepsilon}})}{|\xi_n|}]$ we have $t\le \sigma^2$ and, by the definition of $\mathcal E$, \begin{align*} \dist(2\xi_n t,\Omega)\gtrsim \frac{|\xi_n||t-t_c^{(n)}|}{[\log\log(\tfrac 1\eps)]^4}\ge\frac {\sigma\log(\frac1{{\varepsilon}})}{[\log\log(\frac 1{{\varepsilon}})]^5}. 
\end{align*} Thus, \begin{align*} |\partial_t\chi^{u_n}|^2\int_{\Omega}|u_n(t,x)|^2\,dx &\lesssim\biggl(\frac {\sigma^2}{\sigma^4+t^2}\biggr)^{\frac 32}|\partial_t\chi^{u_n}|^2\int_{\Omega}\exp\Bigl\{-\frac {\sigma^2|x-2\xi_{n}t|^2}{2(\sigma^4+t^2)}\Bigr\}\,dx\\ &\lesssim \sigma^{-3}\biggl(\frac{|\xi_n|}{\sigma\log(\tfrac 1\eps)}\biggr)^2 \int_{|y|\ge\frac{\sigma\log(\frac1{{\varepsilon}})}{[\log\log(\frac 1{{\varepsilon}})]^5}}e^{-\frac {|y|^2}{4\sigma^2}}\,dy\\ &\lesssim {\varepsilon}^{200}. \end{align*} Summing in $n$ and using \eqref{bdforc}, we obtain \begin{align*} \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}u_n\partial_t\chi^{u_n}\Bigr\|_ {L_t^1L_x^2([T,\infty)\times\Omega)} &\lesssim \sum_{n\in \mathcal E} \frac {(\sigma{\varepsilon})^{\frac 32}}{L^3}{\varepsilon}^{100} \frac{\sigma\log(\frac1{{\varepsilon}})}{|\xi_n|}\leq {\varepsilon}^{90}. \end{align*} The estimate for $v_n$ is similar. Note that by the definition of $\mathcal E$, for $t\in\supp \partial_t \chi^{v_n}=[t_c^{(n)}-4\frac {\sigma \log(\frac1{{\varepsilon}})}{|\xi_n|},t_c^{(n)}-2\frac {\sigma\log(\frac1{{\varepsilon}})}{|\xi_n|}]$ we have \begin{align}\label{124} \dist(x_n(t), \Omega)\gtrsim\frac{|\xi_n||t-t_c^{(n)}|}{[\log\log(\tfrac 1\eps)]^4}\ge \frac {\sigma\log(\frac1{{\varepsilon}})} {[\log\log(\frac 1{{\varepsilon}})]^5} \end{align} and, by Lemma \ref{L:xc}, \begin{align*} t\leq t_c^{(n)}\lesssim \frac{\delta}{|\xi_n|}[\log\log(\tfrac 1\eps)]^8 \qtq{and} t^2\log^4(\tfrac1{\varepsilon})\le\sigma^4[\log\log(\tfrac 1\eps)]^{19}. 
\end{align*} Therefore, using \eqref{bdfv} we get \begin{align*} |(\partial_t\chi^{v_n})v_n(t,x)|^2 &\lesssim \log^5(\tfrac1{{\varepsilon}})\biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac32}\exp\biggl\{-\frac{\sigma^2|x-x_n(t)|^2}{2[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+t^2\log^4(\tfrac 1{\varepsilon})]}\biggr\}\cdot\frac{|\xi_n|^2}{[\sigma\log(\tfrac 1\eps)]^2}\\ &\lesssim \sigma^{-5}|\xi_n|^2\log^{3}(\tfrac1{\varepsilon})\exp\biggl\{-\frac{|x-x_n(t)|^2}{4\sigma^2[\log\log(\tfrac 1\eps)]^{44}}\biggr\}. \end{align*} Using \eqref{124} and computing as for $u_n$, we obtain \begin{align*} \int_{\Omega} |\partial _t\chi^{v_n} v_n(t,x)|^2 \,dx\lesssim {\varepsilon}^{200} \end{align*} and then $$ \Bigl\|\sum_{n\in \mathcal E} c_n^{{\varepsilon}}v_n\partial_t\chi^{v_n}\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)} \le {\varepsilon}^{90}. $$ This completes the proof of \eqref{rem1}. We now turn to estimating \eqref{rem2}. We begin with the contribution from $w_n^{(1)}$. Using the definitions of $\tilde u_n(t,x)$ and $\tilde v_n(t,x)$, as well as \eqref{sig42}, \eqref{sig3}, and the fact that $\partial_t[\det M(t)]^{-1/2} = -\frac12 [\det M(t)]^{-1/2} \Tr[M(t)^{-1}\partial_t M(t)]$, we estimate \begin{align} |w_n^{(1)}(t,x)|&+|(i\partial_t+\Delta)w_n^{(1)}(t,x)|\notag\\ &\lesssim \Bigl[\sigma^{-2}+|\xi_n|^2+\frac{|x_*-2\xi_nt|^2}{\sigma^4}\Bigr]| u_n(t,x_*)| \chi_1(t,x)\label{cun}\\ &\quad+\Bigl[\sigma^{-2}+|\xi_n|^2+\frac{\log^{10}{(\frac1{\varepsilon})}}{\sigma^4+t^2}|x_*-x_n(t)|^2\Bigr]|v_n(t,x_*)| \chi_2(t,x)\label{cvn}, \end{align} where $\chi_1(t,x)$ is a cutoff to the spacetime region \begin{align*} \biggl\{(t,x)\in [0,\infty) \times\Omega: \, |x-x_*|\le \sigma, \ |x_*-x_c^{(n)}|\geq\tfrac12 \sigma\log(\tfrac 1\eps),\ t\leq t_c^{(n)}+4\frac{\sigma\log(\tfrac 1\eps)}{|\xi_n|}\biggr\} \end{align*} and $\chi_2(t,x)$ is a cutoff to the spacetime region \begin{align*} \biggl\{(t,x)\in [0,\infty) \times\Omega: \, |x-x_*|\le \sigma, \ 
|x_*-x_c^{(n)}|\geq\tfrac12\sigma\log(\tfrac 1\eps),\ t\ge t_c^{(n)}-4\frac{\sigma\log(\tfrac 1\eps)}{|\xi_n|}\biggr\}. \end{align*} Note that \begin{align}\label{E:chi} \int_{\Omega} \chi_1(t,x) + \chi_2(t,x) \, dx\lesssim \sigma \qtq{for all} t\geq 0. \end{align} To estimate the contribution from \eqref{cun}, we note that on the spacetime support of this term we have $t\leq \sigma^2$ and, by the definition of $\mathcal E$, \begin{align*} \frac{|x_*-x_c^{(n)}|}{[\log\log(\tfrac 1\eps)]^4}&\lesssim |x_*-2\xi_n t|\leq |x_*| + |x_c^{(n)}| + 2|\xi_n(t-t_c^{(n)})|\lesssim 1. \end{align*} Thus we can estimate \begin{align} \eqref{cun} &\lesssim\bigl[\sigma^{-2}+|\xi_n|^2+\sigma^{-4}\bigr]|u_n(t,x_*)|\chi_1(t,x)\notag\\ &\lesssim {\varepsilon}^{-2}\delta^{-2}[\log\log(\tfrac 1\eps)]^2 \sigma^{-\frac32}\exp\Bigl\{-\frac{c\sigma^2|x_*-x_c^{(n)}|^2}{\sigma^4[\log\log(\tfrac 1\eps)]^8}\Bigr\}\chi_1(t,x)\notag\\ &\lesssim {\varepsilon}^{-2}\delta^{-2}\sigma^{-\frac32}[\log\log(\tfrac 1\eps)]^2\exp\Bigl\{-\frac{\log^2(\tfrac1{\varepsilon})}{[\log\log(\tfrac 1\eps)]^9}\Bigr\}\chi_1(t,x)\notag\\ &\le {\varepsilon}^{100}\chi_1(t,x).\label{E:cun} \end{align} To estimate the contribution from \eqref{cvn}, we discuss long and short times separately. If $t\geq \delta[\log\log(\tfrac 1\eps)]^{10}|\xi_n|^{-1}$, then $t\gg t_c^{(n)}$ and so $2 |t-t_c^{(n)}|\geq t$. Using the definition of $\mathcal E$, we thus obtain \begin{align*} |x_*-x_n(t)|\ge \dist(x_n(t), \partial\Omega)\gtrsim \frac{|2\xi_n(t-t_c^{(n)})|}{[\log\log(\tfrac 1\eps)]^4}\geq \frac{|\xi_nt|}{[\log\log(\tfrac 1\eps)]^5}. \end{align*} Noting also that $\sigma^4\le t^2\log^4(\tfrac 1{\varepsilon})$, we estimate \begin{align}\label{l2} \frac{\sigma^2|x_*-x_n(t)|^2}{4[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+t^2\log^4(\tfrac1{\varepsilon})]} &\ge \frac{\sigma^2 |\xi_n|^2t^2}{8[\log\log(\tfrac 1\eps)]^{35}t^2\log^4(\tfrac 1{\varepsilon})}\ge \frac\delta{{\varepsilon}\log^3(\frac 1{\varepsilon})}. 
\end{align} Using the crude upper bound $$ |x_*-x_n(t)|\leq |x_*|+|x_c^{(n)}| +2|\xi_n|(t-t_c^{(n)})\lesssim 1 + |\xi_n|t, $$ together with \eqref{bdfv} and \eqref{l2}, we obtain \begin{align*} \eqref{cvn} &\lesssim \log^{10}{(\tfrac1{\varepsilon})}{\varepsilon}^{-2}\delta^{-2}[\log\log(\tfrac 1\eps)]^2 \log^{\frac52}{(\tfrac1{\varepsilon})}\sigma^{\frac 32}t^{-\frac32}\exp\Bigl\{-\frac \delta{{\varepsilon}\log^3(\tfrac1{\varepsilon})}\Bigr\}\chi_2(t,x)\\ &\le t^{-\frac 32}{\varepsilon}^{100}\chi_2(t,x) \end{align*} for $t\geq \delta[\log\log(\tfrac 1\eps)]^{10}|\xi_n|^{-1}$. Now consider the regime $t_c^{(n)}-4\sigma\log(\tfrac 1\eps)|\xi_n|^{-1}\leq t\leq \delta[\log\log(\tfrac 1\eps)]^{10}|\xi_n|^{-1}$. By the definition of $\mathcal E$, we have \begin{align}\label{E:515} |x_*-x_n(t)|\gtrsim \frac{|x_*-x_c^{(n)}|}{[\log\log(\tfrac 1\eps)]^4}\ge\frac{\sigma\log(\tfrac 1\eps)}{[\log\log(\tfrac 1\eps)]^5}. \end{align} For the times under consideration, \begin{align*} \sigma^4+t^2\log^4(\tfrac 1{\varepsilon})\le \sigma^4+\delta^2{\varepsilon}^2\log^4(\tfrac 1{\varepsilon})[\log\log(\tfrac 1\eps)]^{22}\le \sigma^4[\log\log(\tfrac 1\eps)]^{23}, \end{align*} and so we obtain \begin{align}\label{l1} \frac{\sigma^2|x_*-x_n(t)|^2}{4[\log\log(\tfrac 1\eps)]^{25}[\sigma^4+t^2\log^4(\tfrac1{\varepsilon})]}&\geq\frac{\log^2(\tfrac 1{\varepsilon})}{[\log\log(\tfrac 1\eps)]^{60}}. \end{align} Using the crude upper bound \begin{align*} |x_*-x_n(t)|\lesssim |x_*|+|x_c^{(n)}|+|\xi_n t|\lesssim [\log\log(\tfrac 1\eps)]^{10} \end{align*} together with \eqref{bdfv} and \eqref{l1}, we obtain \begin{align*} \eqref{cvn} &\lesssim {\varepsilon}^{-2}\delta^{-2}\log^6(\tfrac1{{\varepsilon}})[\log\log(\tfrac 1\eps)]^{20} \log^{\frac52}{(\tfrac1{\varepsilon})}\sigma^{-\frac 32}\exp\Bigl\{-\frac{\log^2(\tfrac 1{\varepsilon})}{[\log\log(\tfrac 1\eps)]^{60}}\Bigr\}\chi_2(t,x)\\ &\le {\varepsilon}^{100}\chi_2(t,x) \end{align*} in the short time regime. 
Collecting our estimates for long and short times, we get \begin{align*} \eqref{cvn}\lesssim \langle t\rangle^{-\frac32}{\varepsilon}^{100} \chi_2(t,x). \end{align*} Combining this with \eqref{bdforc}, \eqref{E:chi}, and the bound \eqref{E:cun} for \eqref{cun}, we obtain \begin{align*} \Bigl\|\sum_{n\in\mathcal E} c_n^{{\varepsilon}}w_n^{(1)}\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times\Omega)}+\Bigl\|\sum_{n\in\mathcal E} c_n^{{\varepsilon}}(i\partial_t+\Delta) w_n^{(1)}\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)}=o(1). \end{align*} This proves \eqref{rem2} for $w_n^{(1)}$. Next we consider the term $w_n^{(2)}$. Just as for $w_n^{(1)}$, we have the following pointwise bound: \begin{align*} |w_n^{(2)}(t,x)| & +|(i\partial_t+\Delta )w_n^{(2)}(t,x)|\lesssim \biggl\{\Bigl[\sigma^{-2}+|\xi_n|^2+\frac{|x_*-2\xi_nt|^2}{\sigma^4}\Bigr]|\tilde u_n(t,x_*)|\\ &\qquad+\Bigl[\sigma^{-2}+|\xi_n|^2+\frac{\log^{10}(\frac1{\varepsilon})}{\sigma^4+t^2}|x_*-x_n(t)|^2\Bigr]|\tilde v_n(t,x_*)|\biggr\}\cdot \chi(t,x), \end{align*} where $\chi(t,x)$ is a cutoff to the spacetime region \begin{align*} \biggl\{(t,x)\in [0,\infty)\times\Omega: \, |x-x_*|\le \sigma, \ |x_*-x_c^{(n)}|\le \sigma\log(\tfrac 1\eps),\ |t-t_c^{(n)}|\ge \frac{\sigma\log(\tfrac 1\eps)}{|\xi_n|}\biggr\}. \end{align*} On the support of $\tilde u_n(t,x_*) \chi(t,x)$ we have $t\le \sigma^2$ and \begin{align*} |x_*-2\xi_n t|\ge |2\xi_n(t-t_c^{(n)})|-|x_*-x_c^{(n)}|\ge \sigma\log(\tfrac 1\eps). \end{align*} Hence \begin{align*} |\tilde u_n(t,x_*)|\chi(t,x)&\lesssim \biggl(\frac{\sigma^2}{\sigma^4+t^2}\biggr)^{\frac34}\exp\Bigl\{-\frac{\sigma^2|x_*-2\xi_nt|^2}{4(\sigma^4+t^2)}\Bigr\} \lesssim \sigma^{-\frac 32}\exp\bigl\{-\tfrac 18 \log^2(\tfrac 1{\varepsilon})\bigr\}\\ &\le{\varepsilon}^{100}. 
\end{align*} As before, this estimate is good enough to deduce \begin{align*} \Bigl\|\Bigl[\sigma^{-2}+|\xi_n|^2+\frac{|x_*-2\xi_nt|^2}{\sigma^4}\Bigr]|\tilde u_n(t,x_*)|\chi(t,x)\Bigr\|_{L_{t,x}^{\frac{10}3}\cap L_t^1L_x^2([T,\infty)\times\Omega)}\le {\varepsilon}^{90}. \end{align*} To estimate the contribution of the $\tilde v_n$ term, we split into short and long times as in the treatment of the corresponding term in $w_n^{(1)}$. Indeed, the treatment of the regime $t\ge \delta[\log\log(\tfrac 1\eps)]^{10}|\xi_n|^{-1}$ follows verbatim as there. For the complementary set of times $t_c^{(n)}-4\sigma\log(\tfrac 1\eps)|\xi_n|^{-1}\leq t\leq \delta[\log\log(\tfrac 1\eps)]^{10}|\xi_n|^{-1}$, we estimate \begin{align*} |x_*-x_n(t)|&\ge|x_c^{(n)}-x_n(t)|-|x_c^{(n)}-x_*|=2|\xi_n(t-t_c^{(n)})|-|x_c^{(n)}-x_*|\ge \sigma\log(\tfrac 1\eps). \end{align*} This plays the role of \eqref{E:515}; indeed, it is a stronger bound. With this in place, arguing as for $w_n^{(1)}$ we obtain \begin{align*} \Bigl\|\Bigl[\sigma^{-2}+|\xi_n|^2+\frac{\log^{10}(\frac1{\varepsilon})}{\sigma^4+t^2}|x_*-x_n(t)|^2\Bigr]|\tilde v_n(t,x_*)|\chi(t,x)\Bigr\|_{L_{t,x}^{\frac{10}3}\cap L_t^1L_x^2([T,\infty)\times\Omega)}\le {\varepsilon}^{90}. \end{align*} Combining the two estimates and using \eqref{bdforc} yields \eqref{rem2} for $w_n^{(2)}$. It remains to prove \eqref{rem2} for $w_n^{(3)}$, which is the most subtle of all. \begin{lem}[Almost disjointness of the $w^{(3)}_n$]\label{L:disjoint w3} Fix $(t,x)\in{\mathbb{R}}\times\Omega$. Then \begin{equation}\label{E:disjoint w3} \# \big\{ n\in \mathcal E : w^{(3)}_n(t,x) \neq 0 \bigr\} \lesssim \log(\tfrac 1\eps)^{12}. 
\end{equation} \end{lem} \begin{proof} From the definition of $w^{(3)}_n$ we see that if $w^{(3)}_n(t,x) \cdot w^{(3)}_m(t,x) \neq 0$, then \begin{align*} |t_c^{(n)}-t_c^{(m)}| \leq |t_c^{(n)}-t| + |t-t_c^{(m)}| \le 2\bigl( |\xi_n|^{-1}+ |\xi_m|^{-1} \bigr)\sigma\log(\tfrac 1\eps) \end{align*} and \begin{align*} |x_c^{(n)}-x_c^{(m)}| &\leq |x_c^{(n)}-x_*| + |x_*-x_c^{(m)}| \le 2\sigma\log(\tfrac 1\eps) . \end{align*} Combining these with \begin{align*} |x_c^{(n)}-x_c^{(m)}| &= 2 |\xi_n t_c^{(n)}-\xi_m t_c^{(m)} | \\ &= \bigl| (\xi_n+\xi_m) (t_c^{(n)}-t_c^{(m)}) + (\xi_n-\xi_m)(t_c^{(n)}+t_c^{(m)}) \bigr| \end{align*} and ${\varepsilon}^{-1} [\log\log(\tfrac 1\eps)]^{-1} \leq |\xi_n|,|\xi_m|\leq {\varepsilon}^{-1}\log\log(\tfrac 1\eps)$ yields \begin{align*} |\xi_n-\xi_m| \, (t_c^{(n)}+t_c^{(m)}) \lesssim \sigma\log(\tfrac 1\eps) + \sigma\log(\tfrac 1\eps)[\log\log(\tfrac 1\eps)]^{2} \lesssim \sigma\log(\tfrac 1\eps)[\log\log(\tfrac 1\eps)]^{2}. \end{align*} From Lemma~\ref{L:xc} we have $t_c^{(n)}+t_c^{(m)} \geq \delta{\varepsilon}[\log\log(\tfrac 1\eps)]^{-1}$ and so $$ |n-m| = L |\xi_n-\xi_m| \lesssim \frac{L\sigma}{\delta{\varepsilon}}\log(\tfrac 1\eps)[\log\log(\tfrac 1\eps)]^3 = [\log(\tfrac 1\eps)]^3[\log\log(\tfrac 1\eps)]^4 \leq [\log(\tfrac 1\eps)]^4. $$ The lemma now follows; RHS\eqref{E:disjoint w3} bounds the number of lattice points in a ball of this radius. \end{proof} To continue, we note that on the support of $w_n^{(3)}$ we have $\tilde u_n(t,x)=u_n(t,x)$ and $\tilde v_n(t,x)=v_n(t,x)$. We rewrite $w_n^{(3)}$ as follows: \begin{align*} w^{(3)}_n(t,x)&=\exp\{it|\xi_n|^2-i\xi_n\cdot(x_*-x_c^{(n)})\}\bigl[u_n(t,x_*)-v_n(t,x_*)\bigr]\\ &\qquad\quad \cdot \phi\biggl(\frac{|x-x_*|}{\sigma}\biggr)\phi\biggl(\frac{|x_*-x_c^{(n)}|}{\sigma\log(\tfrac 1\eps)}\biggr) \phi\biggl(\frac{|t-t_c^{(n)}| |\xi_n|}{2\sigma\log(\tfrac 1\eps)}\biggr)\\ &\qquad\quad \cdot\exp\{-it|\xi_n|^2+i\xi_n\cdot(x-x_c^{(n)})\}\\ &=:A_n(t,x)\cdot B_n(t,x)\cdot C_n(t,x). 
\end{align*} We have the following pointwise bounds on $A_n,B_n,C_n$, and their derivatives that are uniform in $n$: \begin{align*} &\begin{cases} |C_n(t,x)|\le1,\quad |\nabla C_n(t,x)|\le |\xi_n|\lesssim {\varepsilon}^{-1}\log\log(\tfrac 1\eps),\\ (i\partial_t+\Delta) C_n(t,x)=0, \end{cases}\\ &\begin{cases} |B_n(t,x)|\le 1, \ |\nabla B_n(t,x)|\lesssim \sigma^{-1}+[\sigma\log(\tfrac 1{\varepsilon})]^{-1}\lesssim {\varepsilon}^{-\frac12}\delta^{-\frac12}, \\ |(i\partial_t+\Delta)B_n(t,x)|\lesssim \sigma^{-2} +[\sigma\log(\tfrac 1\eps)]^{-2}+\frac{|\xi_n|}{\sigma\log(\frac1{\varepsilon})}\lesssim {\varepsilon}^{-\frac32}\delta^{-\frac12}, \end{cases}\\ &\begin{cases} |A_n(t,x)|\lesssim {\varepsilon}^{-\frac 14}\delta^{-\frac54}\log^{12}(\tfrac 1{\varepsilon}), \ |\nabla A_n(t,x)|\lesssim {\varepsilon}^{-\frac34}\delta^{-\frac 74}\log^{12}(\tfrac 1{\varepsilon}),\\ |(i\partial_t+\Delta) A_n(t,x)|\lesssim {\varepsilon}^{-\frac 74}\delta^{-\frac74}\log^9(\tfrac 1{\varepsilon}), \end{cases} \end{align*} on the support of $w_n^{(3)}$. Indeed, the bounds on $C_n$ and $B_n$ follow from direct computations, while the bounds on $A_n$ were proved in Lemma~\ref{L:uv match}. Using these bounds we immediately get \begin{align} \bigl\|w_n^{(3)}\bigr\|_{L_{t,x}^\infty([T,\infty)\times\Omega)}&\lesssim{\varepsilon}^{-\frac 14}\delta^{-\frac 54}\log^{12}(\tfrac 1{\varepsilon})\label{E:w3}\\ \bigl\|(i\partial_t+\Delta)w_n^{(3)}\bigr\|_{L_{t,x}^\infty([T,\infty)\times\Omega)}&\lesssim{\varepsilon}^{-\frac 74}\delta^{-\frac 74}\log^{13}(\tfrac 1{\varepsilon}),\label{E:laplace w3} \end{align} uniformly for $n\in \mathcal E$. Additionally, the spacetime support of $w_n^{(3)}$ has measure $$ \bigl|\supp w_n^{(3)}\bigr| \lesssim \bigl[\sigma \log(\tfrac1{\varepsilon}) {\varepsilon}\log\log(\tfrac 1\eps)] \sigma \bigl[\sigma\log(\tfrac1{\varepsilon})\bigr]^2 \lesssim \sigma^4{\varepsilon}\log^3(\tfrac1{\varepsilon})\log\log(\tfrac 1\eps). 
$$ Using this together with \eqref{bdforc}, Lemma~\ref{L:disjoint w3}, \eqref{E:w3}, and H\"older's inequality, we estimate \begin{align*} &\Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon} w_n^{(3)}\Bigr\|_{L_{t,x}^{\frac{10}3}([T,\infty)\times\Omega)}^{\frac{10}3}\\ &\lesssim \sum_{n_1, \ldots, n_4\in \mathcal E} |c_{n_1}^{\varepsilon}|^{\frac56} \cdot \ldots \cdot |c_{n_4}^{\varepsilon}|^{\frac56} \int_T^\infty\int_\Omega |w_{n_1}^{(3)}(t,x)|^{\frac56}\cdot\ldots\cdot |w_{n_4}^{(3)}(t,x)|^{\frac56}\, dx\, dt\\ &\lesssim \frac{(\sigma {\varepsilon})^5}{L^{10}}\log^{36}(\tfrac1{\varepsilon})\Bigl[\frac L{\varepsilon}\log\log(\tfrac 1\eps)\Bigr]^3 \bigl[{\varepsilon}^{-\frac 14}\delta^{-\frac 54}\log^{12}(\tfrac 1{\varepsilon})\bigr]^{\frac{10}3} \sigma^4{\varepsilon}\log^3(\tfrac1{\varepsilon})\log\log(\tfrac 1\eps)\\ &\lesssim {\varepsilon}^{\frac{19}6}\delta^{-\frac{19}6}\log^{82}(\tfrac1{\varepsilon}) = o(1) \qtq{as} {\varepsilon}\to 0. \end{align*} Arguing similarly and using \eqref{E:laplace w3} in place of \eqref{E:w3}, we obtain \begin{align*} &\Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon} (i\partial_t+\Delta)w_n^{(3)}\Bigr\|_{L_{t,x}^2([T,\infty)\times\Omega)}^2\\ &\lesssim \sum_{n_1,n_2\in \mathcal E} |c_{n_1}^{\varepsilon}||c_{n_2}^{\varepsilon}|\int_T^\infty\int_\Omega \bigl| (i\partial_t+\Delta)w_{n_1}^{(3)}(t,x)\bigr|\bigl| (i\partial_t+\Delta)w_{n_2}^{(3)}(t,x)\bigr|\, dx\, dt\\ &\lesssim \frac{(\sigma {\varepsilon})^3}{L^6}\log^{12}(\tfrac1{\varepsilon})\Bigl[\frac L{\varepsilon}\log\log(\tfrac 1\eps)\Bigr]^3 \bigl[{\varepsilon}^{-\frac 74}\delta^{-\frac 74}\log^{13}(\tfrac 1{\varepsilon})\bigr]^2 \sigma^4{\varepsilon}\log^3(\tfrac1{\varepsilon})\log\log(\tfrac 1\eps)\\ &\lesssim {\varepsilon}^{-\frac12}\delta^{-\frac32}\log^{46}(\tfrac1{\varepsilon}). 
\end{align*} To convert this to a bound in $L^1_tL^2_x$, we need the following consequence of Lemma~\ref{L:xc}: \begin{align*} \bigl| \bigl\{ t : {\textstyle\sum_{n\in \mathcal E}} c_n^{\varepsilon} w_n^{(3)}(t,x)\not\equiv 0\bigr\} \bigr| &\leq \max_{n,m\in\mathcal E} |t_c^{(n)}- t_c^{(m)}| +\tfrac{2\sigma\log(\frac1{\varepsilon})}{|\xi_n|} + \tfrac{2\sigma\log(\frac1{\varepsilon})}{|\xi_m|}\\ &\lesssim {\varepsilon}\delta[\log\log(\tfrac 1\eps)]^9. \end{align*} Applying H\"older's inequality in the time variable, we get \begin{align*} \Bigl\|\sum_{n\in\mathcal E} c_n^{\varepsilon} (i\partial_t+\Delta) & w_n^{(3)}\Bigr\|_{L_t^1L_x^2([T,\infty)\times\Omega)}\\ &\lesssim \bigl[{\varepsilon}\delta[\log\log(\tfrac1{\varepsilon})]^9\bigr]^{\frac12}\Bigl\|\sum_{n\in \mathcal E} c_n^{\varepsilon} (i\partial_t+\Delta)w_n^{(3)}\Bigr\|_{L_{t,x}^2([T,\infty)\times\Omega)}\\ &\lesssim {\varepsilon}^{\frac14}\delta^{-\frac 14}\log^{24}(\tfrac 1{\varepsilon}) = o(1) \qtq{as} {\varepsilon}\to 0. \end{align*} This proves \eqref{rem2} for $w_n^{(3)}$ and so completes the proof of Lemma~\ref{L:rem}. \end{proof} Combining Lemmas~\ref{L:small u}, \ref{L:bdfv}, and \ref{L:rem} yields Proposition~\ref{P:long times}, which controls the contribution for large times of rays that enter the obstacle. The contribution from short times was estimated in Proposition~\ref{P:short times}, while the contributions of near-grazing rays and rays that miss the obstacle were estimated in Propositions~\ref{P:ng} and \ref{P:missing}, respectively. Putting everything together completes the proof of Theorem~\ref{T:LF3} and so the discussion of Case~(v). \section{Linear profile decomposition}\label{S:LPD} The purpose of this section is to prove a linear profile decomposition for the propagator $e^{it\Delta_\Omega}$ for data in $\dot H^1_D(\Omega)$; see Theorem~\ref{T:LPD}.
As we will see below, the profiles can live in different limiting geometries; this is one of the principal differences relative to previous analyses. Throughout this section, $\Theta:{\mathbb{R}}^3\to[0,1]$ denotes a smooth function such that \begin{align*} \Theta(x)=\begin{cases} 0, & |x|\le \frac 14,\\1, & |x| \ge \frac 12.\end{cases} \end{align*} We also write $\Theta^c(x):=1-\Theta(x)$ and $d(x):=\dist(x,\Omega^c)$. \begin{lem}[Refined Strichartz estimate]\label{lm:refs} Let $f\in \dot H^1_D(\Omega)$. Then we have \begin{align*} \|e^{it\ld} f\|_{L_{t,x}^{10}(\R\times\Omega)}\lesssim \|f\|_{\dot H^1_D(\Omega)}^{\frac 15} \sup_{N\in 2^{\Z}}\|e^{it\ld} f_N\|_{L_{t,x}^{10}(\R\times\Omega)}^{\frac45}. \end{align*} \end{lem} \begin{proof} From the square function estimate Lemma~\ref{sq}, Bernstein, and Strichartz inequalities, \begin{align*} \|e^{it\ld} & f\|_{L^{10}_{t,x}}^{10} \lesssim \iint_{{\mathbb{R}}\times\Omega} \Bigl(\sum_{N\in 2^{\Z}}|e^{it\ld} f_N|^2 \Bigr)^5 \,dx \,dt\\ &\lesssim \sum_{N_1\le \cdots\le N_5}\iint_{{\mathbb{R}}\times\Omega} |e^{it\ld} f_{N_1}|^2 \cdots |e^{it\ld} f_{N_5}|^2 \,dx\,dt\\ &\lesssim \sum_{N_1\le\cdots\le N_5} \|e^{it\ld} f_{N_1}\|_{L^\infty_{t,x}}\|e^{it\ld} f_{N_1}\|_{L^{10}_{t,x}} \prod_{j=2}^4\|e^{it\ld} f_{N_j}\|_{L^{10}_{t,x}}^2 \\ &\qquad\qquad \qquad\cdot\|e^{it\ld} f_{N_5}\|_{L^{10}_{t,x}}\|e^{it\ld} f_{N_5}\|_{L^5_{t,x}}\\ &\lesssim \sup_{N\in 2^{\Z}}\|e^{it\ld} f_N\|_{L^{10}_{t,x}}^8 \sum_{N_1\le N_5} \bigr[1+\log\bigl(\tfrac {N_5}{N_1}\bigr)\bigr]^3 N_1^{\frac32}\| e^{it\ld} f_{N_1}\|_{L^\infty_t L^2_x}\\ &\qquad\qquad\qquad\cdot N_5^{\frac 12}\|e^{it\ld} f_{N_5}\|_{L^5_t L^{\frac{30}{11}}_x} \\ &\lesssim \sup_{N\in 2^{\Z}}\|e^{it\ld} f_N\|_{L^{10}_{t,x}}^8 \sum_{N_1\le N_5} \bigr[1+\log\bigl(\tfrac {N_5}{N_1}\bigr)\bigr]^3 \bigl(\tfrac{N_1}{N_5}\bigr)^{\frac 12} \|f_{N_1}\|_{\dot H^1_D(\Omega)} \|f_{N_5}\|_{\dot H^1_D(\Omega)}\\ &\lesssim \sup_{N\in 2^{\Z}}\|e^{it\ld} f_N\|_{L^{10}_{t,x}}^8 \|f\|_{\dot 
H^1_D(\Omega)}^2, \end{align*} where all spacetime norms are over ${\mathbb{R}}\times\Omega$. Raising this to the power $\frac 1{10}$ yields the lemma. \end{proof} The refined Strichartz inequality shows that linear solutions with non-trivial spacetime norm must concentrate on at least one frequency annulus. The next proposition goes one step further and shows that they contain a bubble of concentration around some point in spacetime. A novelty in our setting is that the bubbles of concentration may live in one of the limiting geometries identified earlier. \begin{prop}[Inverse Strichartz inequality]\label{P:inverse Strichartz} Let $\{f_n\}\subset \dot H^1_D(\Omega)$. Suppose that \begin{align*} \lim_{n\to \infty}\|f_n\|_{\dot H^1_D(\Omega)}=A < \infty \qtq{and} \lim_{n\to\infty}\|e^{it\ld} f_n\|_{L_{t,x}^{10}(\R\times\Omega)}={\varepsilon} >0. \end{align*} Then there exist a subsequence in $n$, $\{\phi_n\}\subset \dot H^1_D(\Omega)$, $\{N_n\}\subset 2^{\Z}$, $\{(t_n, x_n)\}\subset {\mathbb{R}}\times\Omega$ conforming to one of the four cases listed below such that \begin{gather} \liminf_{n\to\infty}\|\phi_n\|_{\dot H^1_D(\Omega)}\gtrsim {\varepsilon}(\tfrac{{\varepsilon}}A)^{\frac 78}, \label{nontri}\\ \liminf_{n\to\infty}\Bigl\{ \|f_n\|_{\dot H^1_D(\Omega)}^2-\|f_n-\phi_n\|_{\dot H^1_D(\Omega)}^2\Bigr\} \gtrsim A^2 (\tfrac{\varepsilon} A)^{\frac{15}4},\label{dech}\\ \liminf_{n\to\infty}\Bigl\{ \|e^{it\ld} f_n\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}-\|e^{it\ld} (f_n-\phi_n)\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}\Bigr\} \gtrsim {\varepsilon}^{10}(\tfrac{\varepsilon} A)^{\frac{35}4}.\label{dect} \end{gather} The four cases are: \begin{CI} \item Case 1: $N_n\equiv N_\infty \in 2^{\mathbb{Z}}$ and $x_n\to x_{\infty}\in \Omega$. In this case, we choose $\phi\in \dot H^1_D(\Omega)$ and the subsequence so that $e^{it_n\Delta_{\Omega}}f_n\rightharpoonup \phi$ weakly in $\dot H^1_D(\Omega)$ and we set $\phi_n:=e^{-it_n\Delta_{\Omega}}\phi$. 
\item Case 2: $N_n\to 0$ and $-N_n x_n\to x_\infty\in {\mathbb{R}}^3$. In this case, we choose ${\tilde\phi}\in \dot H^1(\R^3)$ and the subsequence so that $$ g_n(x) :=N_n^{-\frac 12}(e^{it_n\Delta_{\Omega}}f_n)(N_n^{-1}x+x_n) \rightharpoonup {\tilde\phi}(x) \quad\text{weakly in} \quad \dot H^1({\mathbb{R}}^3) $$ and we set $$ \phi_n(x):=N_n^{\frac 12} e^{-it_n\Delta_{\Omega}}[(\chi_n\tilde\phi)(N_n(x-x_n))], $$ where $\chi_n(x)=\chi(N_n^{-1}x+x_n)$ and $\chi(x)=\Theta(\frac{d(x)}{\diam (\Omega^c)})$. \item Case 3: $N_n d(x_n)\to\infty$. In this case, we choose $\tilde\phi\in \dot H^1(\R^3)$ and the subsequence so that $$ g_n(x) :=N_n^{-\frac 12}(e^{it_n\Delta_{\Omega}}f_n)(N_n^{-1}x+x_n) \rightharpoonup {\tilde\phi}(x) \quad\text{weakly in} \quad \dot H^1({\mathbb{R}}^3) $$ and we set $$ \phi_n(x) :=N_n^{\frac12}e^{-it_n\Delta_{\Omega}}[(\chi_n\tilde\phi)(N_n(x-x_n))], $$ where $\chi_n(x)=1-\Theta(\frac{|x|}{N_n d(x_n)})$. \item Case 4: $N_n\to \infty$ and $N_n d(x_n)\to d_{\infty}>0$. In this case, we choose $ \tilde\phi \in \dot H^1_D({\mathbb{H}})$ and the subsequence so that $$ g_n(x) := N_n^{-\frac12}(e^{it_n\Delta_{\Omega}}f_n)(N_n^{-1}R_nx+x^*_n)\rightharpoonup {\tilde\phi}(x) \quad\text{weakly in} \quad \dot H^1(\R^3) $$ and we set $$ \phi_n(x) :=N_n^{\frac 12} e^{-it_n\Delta_{\Omega}}[\tilde\phi(N_nR_n^{-1}(\cdot-x^*_n))], $$ where $R_n\in SO(3)$ satisfies $R_n e_3 = \frac{x_n-x^*_n}{|x_n-x^*_n|}$ and $x^*_n\in \partial \Omega$ is chosen so that $d(x_n)=|x_n-x^*_n|$. \end{CI} \end{prop} \begin{rem} The analogue of $\tilde \phi$ in Case 1 is related to $\phi$ via $\phi(x)= N_\infty^{\frac12} \tilde \phi(N_\infty (x-x_\infty))$; see \eqref{1converg}. \end{rem} \begin{proof} From Lemma \ref{lm:refs} and the conditions on $f_n$, we know that for each $n$ there exists $N_n\in 2^{\Z}$ such that \begin{align*} \|e^{it\ld} P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{10}(\R\times\Omega)}\gtrsim {\varepsilon}^{\frac 54}A^{-\frac 14}. 
\end{align*} On the other hand, from the Strichartz and Bernstein inequalities we get \begin{align*} \|e^{it\ld} P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{\frac {10}3}({\mathbb{R}}\times\Omega)}\lesssim \| P_{N_n}^{\Omega} f_n\|_{L_x^2(\Omega)} \lesssim N_n^{-1} A. \end{align*} By H\"older's inequality, these imply \begin{align*} A^{-\frac 14}{\varepsilon}^{\frac 54}&\lesssim \|e^{it\ld}P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{10}(\R\times\Omega)}\\ &\lesssim \|e^{it\ld}P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{\frac{10}3}({\mathbb{R}}\times\Omega)}^{\frac 13}\|e^{it\ld}P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times\Omega)}^{\frac 23} \\ &\lesssim N_n^{-\frac 13}A^{\frac 13}\|e^{it\ld} P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times\Omega)}^{\frac 23}, \end{align*} and so \begin{align*} \|e^{it\ld} P_{N_n}^{\Omega} f_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times\Omega)} \gtrsim N_n^{\frac 12}{\varepsilon} (\tfrac {\varepsilon} A)^{\frac78}. \end{align*} Thus there exist $(t_n,x_n)\in {\mathbb{R}}\times \Omega$ such that \begin{align}\label{cncen} \Bigl|(e^{it_n\Delta_{\Omega}}P_{N_n}^{\Omega} f_n)(x_n)\Bigr|\gtrsim N_n^{\frac12}{\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78}. \end{align} The cases in the statement of the proposition are determined solely by the behaviour of $x_n$ and $N_n$. We will now show \begin{align}\label{lb} N_n d(x_n)\gtrsim (\tfrac{\varepsilon} A)^{\frac{15}8} \qtq{whenever} N_n \gtrsim 1, \end{align} which explains the absence of the scenario $N_n \gtrsim 1$ with $N_nd(x_n)\to0$. The proof of \eqref{lb} is based on Theorem~\ref{T:heat}, which implies \begin{align*} \int_\Omega \bigl| e^{\Delta_\Omega / N_n^2}(x_n,y) \bigr|^2\,dy &\lesssim N_n^{6} \int_\Omega \Bigl| \bigl[N_n d(x_n)\bigr]\bigl[N_n d(x_n)+N_n|x_n-y| \bigr] e^{-c N_n^2|x_n-y|^2} \Bigr|^2 \,dy \\ &\lesssim [N_n d(x_n)]^2[N_nd(x_n) + 1]^2 N_n^3, \end{align*} whenever $N_n\gtrsim 1$. 
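For completeness, we indicate the computation behind the last estimate. Extending the integral to all of ${\mathbb{R}}^3$, passing to polar coordinates, and substituting $\rho=N_n|x_n-y|$, we find
\begin{align*}
N_n^{6} \int_\Omega \Bigl| \bigl[N_n d(x_n)\bigr]\bigl[N_n d(x_n)+N_n|x_n-y| \bigr] e^{-c N_n^2|x_n-y|^2} \Bigr|^2 \,dy
&\lesssim \bigl[N_n d(x_n)\bigr]^2 N_n^{3} \int_0^\infty \bigl[N_n d(x_n)+\rho\bigr]^2 e^{-2c\rho^2}\rho^2\,d\rho\\
&\lesssim [N_n d(x_n)]^2[N_nd(x_n) + 1]^2 N_n^3,
\end{align*}
since $\int_0^\infty (a+\rho)^2 e^{-2c\rho^2}\rho^2\,d\rho\lesssim a^2+1\sim (a+1)^2$ for $a\geq 0$.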
Writing $$ (e^{it_n\Delta_{\Omega}} P_{N_n}^{\Omega} f_n)(x_n) = \int_\Omega e^{\Delta_\Omega / N_n^2}(x_n,y) \, \bigl[ P^\Omega_{\leq 2 N_n} e^{ - \Delta_\Omega / N_n^2} e^{it_n\Delta_{\Omega}} P_{N_n}^{\Omega} f_n \bigr](y) \,dy $$ and using \eqref{cncen} and Cauchy--Schwarz gives \begin{align*} N_n^{\frac12}{\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78} &\lesssim \bigl[N_n d(x_n)\bigr] \bigl[N_nd(x_n) + 1\bigr] N_n^{\frac32} \bigl\| P^\Omega_{\leq 2 N_n} e^{ - \Delta_\Omega / N_n^2} e^{it_n\Delta_{\Omega}} P_{N_n}^{\Omega} f_n \bigr\|_{L^2_x} \\ & \lesssim \bigl[N_n d(x_n)\bigr] \bigl[N_nd(x_n) + 1\bigr] N_n^{\frac12} \| f_n \|_{\dot H^1_D(\Omega)}. \end{align*} The inequality \eqref{lb} now follows. Thanks to the lower bound \eqref{lb}, after passing to a subsequence, we only need to consider the four cases below, which correspond to the cases in the statement of the proposition. \textbf{Case 1:} $N_n\sim 1$ and $N_n d(x_n)\sim 1$. \textbf{Case 2:} $N_n\to 0$ and $N_n d(x_n) \lesssim 1$. \textbf{Case 3:} $N_n d(x_n)\to \infty$ as $n\to \infty$. \textbf{Case 4:} $N_n\to \infty$ and $N_n d(x_n)\sim 1$. We will address these cases in order. The geometry in Case~1 is simplest and it allows us to introduce the basic framework for the argument. The main new difficulty in the remaining cases is the variable geometry, which is where Proposition~\ref{P:converg} and Corollary~\ref{C:LF} play a crucial role. Indeed, as we will see below, the four cases above reduce to the ones discussed in Sections~\ref{S:Domain Convergence} and~\ref{S:Linear flow convergence} after passing to a further subsequence. With Proposition~\ref{P:converg} and Corollary~\ref{C:LF} already in place, the arguments in the four cases parallel each other rather closely. There are four basic steps. The most important is to embed the limit object ${\tilde\phi}$ back inside $\Omega$ in the form of $\phi_n$ and to show that $f_n-\phi_n$ converges to zero in suitable senses. 
The remaining three steps use this information to prove the three estimates \eqref{nontri}, \eqref{dech}, and \eqref{dect}. \textbf{Case 1:} Passing to a subsequence, we may assume \begin{align*} N_n\equiv N_\infty\in 2^{\Z} \quad\text{and}\quad x_n\to x_\infty\in \Omega. \end{align*} To prefigure the treatment of the later cases we set $$ g_n(x) :=N_n^{-\frac 12}(e^{it_n\Delta_{\Omega}}f_n)(N_n^{-1}x+x_n), $$ even though the formulation of Case~1 does not explicitly include this sequence. As $f_n$ is supported in $\Omega$, so $g_n$ is supported in $\Omega_n :=N_n(\Omega-\{x_n\})$. Moreover, $$ \|g_n\|_{\dot H^1_D(\Omega_n)}=\|f_n\|_{\dot H^1_D(\Omega)}\lesssim A. $$ Passing to a subsequence, we can choose $\tilde \phi$ so that $g_n\rightharpoonup \tilde\phi$ weakly in $\dot H^1(\R^3)$. Rescaling the relation $g_n\rightharpoonup \tilde\phi$ yields \begin{align}\label{1converg} (e^{it_n\Delta_{\Omega}}f_n)(x)\rightharpoonup \phi(x) :=N_\infty^{\frac 12}\tilde \phi(N_\infty(x-x_\infty)) \quad \text{weakly in} \quad \dot H^1_D(\Omega). \end{align} To see that $\phi\in\dot H^1_D(\Omega)$ when defined in this way, we note that $\dot H^1_D(\Omega)$ is a weakly closed subset of $\dot H^1(\R^3)$; indeed, a convex set is weakly closed if and only if it is norm closed. The next step is to prove \eqref{nontri} by showing that $\phi$ is non-trivial. Toward this end, let $h:=P^{\Omega}_{N_\infty}\delta(x_\infty)$. Then from the Bernstein inequality we have \begin{align}\label{h bd} \|(-\Delta_{\Omega})^{-\frac 12}h\|_{L^2(\Omega)}=\|(-\Delta_{\Omega})^{-\frac 12}P_{N_\infty}^\Omega\delta(x_\infty)\|_{L^2(\Omega)}\lesssim N_\infty^{\frac 12}. \end{align} In particular, $h\in \dot H^{-1}_D(\Omega)$. 
On the other hand, we have \begin{align}\label{h meets phi} \langle \phi,h\rangle &=\lim_{n\to\infty}\langle e^{it_n\Delta_{\Omega}}f_n,h\rangle=\lim_{n\to\infty}\langle e^{it_n\Delta_{\Omega}} f_n,P_{N_\infty}^\Omega\delta(x_\infty)\rangle \notag \\ &=\lim_{n\to\infty}(e^{it_n\Delta_{\Omega}}P_{N_n}^{\Omega} f_n)(x_n)+\lim_{n\to\infty}\langle e^{it_n\Delta_{\Omega}}f_n, P_{N_\infty}^{\Omega}[\delta({x_\infty})-\delta({x_n})]\rangle. \end{align} The second limit in \eqref{h meets phi} vanishes. Indeed, basic elliptic theory shows that \begin{align}\label{elliptic est} \| \nabla v \|_{L^\infty(\{|x|\leq R\})} \lesssim R^{-1} \| v \|_{L^\infty(\{|x|\leq 2R\})} + R \| \Delta v \|_{L^\infty(\{|x|\leq 2R\})}, \end{align} which we apply to $v(x) = (P_{N_\infty}^{\Omega} e^{it_n\Delta_{\Omega}}f_n )(x+x_n)$ with $R=\frac12 d(x_n)$. By hypothesis, $d(x_n) \sim 1$, while by the Bernstein inequalities, $$ \| P_{N_\infty}^{\Omega} e^{it_n\Delta_{\Omega}}f_n \|_{L^\infty_x} \lesssim N_\infty^{\frac12} A \qtq{and} \| \Delta P_{N_\infty}^{\Omega} e^{it_n\Delta_{\Omega}}f_n \|_{L^\infty_x} \lesssim N_\infty^{\frac52} A. $$ Thus by the fundamental theorem of calculus and \eqref{elliptic est}, for $n$ sufficiently large, \begin{align}\label{6:37} |\langle e^{it_n\Delta_{\Omega}} f_n,P_{N_\infty}^{\Omega}[\delta(x_\infty)-\delta(x_n)]\rangle| &\lesssim |x_\infty-x_n| \, \| \nabla P_{N_\infty}^{\Omega} e^{it_n\Delta_{\Omega}} f_n\|_{L^\infty(\{|x|\leq R\})}\notag\\ &\lesssim A \bigl[\tfrac{N_\infty^{\frac12}}{d(x_n)} + N_\infty^{\frac52} d(x_n)\bigr] |x_\infty-x_n|, \end{align} which converges to zero as $n\to \infty$. 
Therefore, using \eqref{cncen}, \eqref{h bd}, and \eqref{h meets phi}, we have \begin{align} N_\infty^{\frac12} {\varepsilon} \bigl(\tfrac{{\varepsilon}}A\bigr)^{\frac78} \lesssim |\langle \phi, h\rangle| \lesssim \|\phi\|_{\dot H^1_D(\Omega)}\|h\|_{\dot H^{-1}_D(\Omega)} \lesssim N_\infty^{\frac12}\|\phi\|_{\dot H^1_D(\Omega)}.\label{lbf} \end{align} As $e^{it_n\ld}$ is unitary on $\dot H^1_D(\Omega)$ we have $\|\phi_n\|_{\dot H^1_D(\Omega)}=\|\phi\|_{\dot H^1_D(\Omega)}$, and so \eqref{lbf} yields \eqref{nontri}. Claim \eqref{dech} follows immediately from \eqref{nontri} and \eqref{1converg} since $\dot H^1_D(\Omega)$ is a Hilbert space. The only remaining objective is to prove decoupling for the $L_{t,x}^{10}$ norm. Note \begin{align*} (i\partial_t)^{\frac 12} e^{it\ld} =(-\Delta_{\Omega})^{\frac 12} e^{it\ld}. \end{align*} Thus, by H\"older, on any compact domain $K$ in ${\mathbb{R}}\times{\mathbb{R}}^3$ we have \begin{align*} \|e^{it\ld} e^{it_n\ld} f_n\|_{H^{\frac 12}_{t,x}(K)}\lesssim \| \langle-\Delta_{\Omega}\rangle ^{\frac12} e^{i(t+t_n)\Delta_\Omega}f_n \|_{L^2_{t,x}(K)}\lesssim_K A. \end{align*} From Rellich's Lemma, passing to a subsequence, we get \begin{align*} e^{it\ld} e^{it_n\ld} f_n \to e^{it\ld} \phi \qtq{strongly in} L_{t,x}^{2}(K) \end{align*} and so, passing to a further subsequence, $e^{it\ld}e^{it_n\ld} f_n(x)\to e^{it\ld} \phi(x)$ a.e. on $K$. Using a diagonal argument and passing again to a subsequence, we obtain \begin{align*} e^{it\ld}e^{it_n\ld} f_n(x)\to e^{it\ld} \phi(x) \quad\text{a.e. in ${\mathbb{R}}\times {\mathbb{R}}^3$}. \end{align*} Using the Fatou Lemma of Br\'ezis and Lieb (cf. 
Lemma~\ref{lm:rf}) and a change of variables, we get \begin{align*} \lim_{n\to \infty}\Bigl\{\|e^{it\ld} f_n\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}-\|e^{it\ld} (f_n-\phi_n)\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}\Bigr\} = \|e^{it\ld} \phi\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}, \end{align*} from which \eqref{dect} will follow once we prove \begin{align}\label{want} \|e^{it\ld} \phi\|_{L_{t,x}^{10}(\R\times\Omega)}\gtrsim {\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac{7}{8}}. \end{align} To see this, we use \eqref{lbf}, the Mikhlin multiplier theorem (for $e^{it\Delta_\Omega} P^\Omega_{\leq 2N_\infty}$), and Bernstein to estimate \begin{align*} N_\infty^{\frac12} {\varepsilon} \bigl(\tfrac{{\varepsilon}}A\bigr)^{\frac78}\lesssim |\langle \phi, h\rangle| &=|\langle e^{it\Delta_\Omega} \phi, e^{it\Delta_\Omega} h\rangle| \lesssim \|e^{it\Delta_\Omega}\phi\|_{L_x^{10}}\|e^{it\Delta_\Omega}h\|_{L_x^{\frac{10}9}}\\ &\lesssim \|e^{it\Delta_\Omega}\phi\|_{L_x^{10}} \|h\|_{L_x^{\frac{10}9}} \lesssim N_\infty^{\frac3{10}} \|e^{it\Delta_\Omega}\phi\|_{L_x^{10}}, \end{align*} for each $|t|\le N_{\infty}^{-2}$. Thus $$ \|e^{it\Delta_\Omega}\phi\|_{L_x^{10}} \gtrsim N_\infty^{\frac15} {\varepsilon} \bigl(\tfrac{{\varepsilon}}A\bigr)^{\frac78}, $$ uniformly in $|t|\le N_{\infty}^{-2}$. Integrating in $t$ leads to \eqref{want}. \textbf{Case 2:} As $N_n\to 0$, the condition $N_nd(x_n)\lesssim 1$ guarantees that $\{N_nx_n\}_{n\geq 1}$ is a bounded sequence; thus, passing to a subsequence, we may assume $-N_n x_n\to x_\infty\in {\mathbb{R}}^3$. As in Case 1, we define $\Omega_n :=N_n(\Omega-\{x_n\})$. Note that the rescaled obstacles $\Omega_n^c$ shrink to $x_\infty$ as $n\to \infty$; this is the defining characteristic of Case~2. As $f_n$ is bounded in $\dot H^1_D(\Omega)$, so the sequence $g_n$ is bounded in $\dot H^1_D(\Omega_n)\subseteq\dot H^1(\R^3)$. Thus, passing to a subsequence, we can choose ${\tilde\phi}$ so that $g_n \rightharpoonup {\tilde\phi}$ in $\dot H^1(\R^3)$. 
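Incidentally, the normalization in the definition of $g_n$ is precisely the $\dot H^1$-invariant scaling in three space dimensions: writing $F:=e^{it_n\Delta_{\Omega}}f_n$, so that $g_n(x)=N_n^{-\frac12}F(N_n^{-1}x+x_n)$, the change of variables $y=N_n^{-1}x+x_n$ gives
\begin{align*}
\int_{{\mathbb{R}}^3}|\nabla g_n(x)|^2\,dx = N_n^{-3}\int_{{\mathbb{R}}^3}\bigl|(\nabla F)(N_n^{-1}x+x_n)\bigr|^2\,dx = \int_{{\mathbb{R}}^3}|\nabla F(y)|^2\,dy.
\end{align*}
In particular, the uniform bound on $g_n$ in $\dot H^1({\mathbb{R}}^3)$ follows directly from $\|f_n\|_{\dot H^1_D(\Omega)}\lesssim A$.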
We cannot expect ${\tilde\phi}$ to belong to $\dot H^1_D(\Omega_n)$, since it has no reason to vanish on $\Omega_n^c$. This is the role of $\chi_n$ in the definition of $\phi_n$. Next we show that this does not deform ${\tilde\phi}$ too gravely; more precisely, \begin{align}\label{E:no deform} \chi_n{\tilde\phi} \to {\tilde\phi}, \qtq{or equivalently,} \bigl[ 1 - \chi(N_n^{-1}x+x_n)\bigr]{\tilde\phi}(x) \to 0 \quad \text{in $\dot H^1(\R^3)$.} \end{align} Later, we will also need to show that the linear evolution of $\chi_n{\tilde\phi}$ in $\Omega_n$ closely approximates the whole-space linear evolution of ${\tilde\phi}$. To prove \eqref{E:no deform} we first set $B_n:=\{ x\in {\mathbb{R}}^3 : \dist(x,\Omega_n^c) \leq \diam(\Omega_n^c)\}$, which contains $\supp(1-\chi_n)$ and $\supp(\nabla\chi_n)$. Note that because $N_n\to0$, the measure of $B_n$ shrinks to zero as $n\to\infty$. By H\"older's inequality, \begin{align*} \bigl\| [ 1 &- \chi(N_n^{-1}x+x_n)]{\tilde\phi}(x) \bigr\|_{\dot H^1(\R^3)} \\ &\lesssim \bigl\| [ 1 - \chi(N_n^{-1}x+x_n)]\nabla {\tilde\phi}(x) \bigr\|_{L^2({\mathbb{R}}^3)} + \bigl\| N_n^{-1} \bigl(\nabla\chi\bigr)(N_n^{-1}x+x_n) {\tilde\phi}(x) \bigr\|_{L^2({\mathbb{R}}^3)} \\ &\lesssim \| \nabla {\tilde\phi} \|_{L^2(B_n)} + \| {\tilde\phi} \|_{L^6(B_n)}, \end{align*} which converges to zero by the dominated convergence theorem. With \eqref{E:no deform} in place, the proofs of \eqref{nontri} and \eqref{dech} now follow their Case~1 counterparts very closely; this will rely on key inputs from Section~\ref{S:Domain Convergence}. We begin with the former. Let $h:=P_1^{{\mathbb{R}}^3} \delta(0)$; then \begin{align*} \langle \tilde \phi, h\rangle=\lim_{n\to\infty} \langle g_n, h\rangle=\lim_{n\to\infty}\langle g_n, P_1^{\Omega_n} \delta(0)\rangle+\lim_{n\to\infty}\langle g_n, (P_1^{{\mathbb{R}}^3}- P_1^{\Omega_n})\delta(0)\rangle. 
\end{align*} The second term vanishes due to Proposition~\ref{P:converg} and the uniform boundedness of $\|g_n\|_{\dot H^1(\R^3)}$. Therefore, \begin{align} |\langle\tilde\phi, h\rangle|&=\Bigl|\lim_{n\to \infty}\langle g_n,P_1^{\Omega_n}\delta(0)\rangle\Bigr|\notag\\ &=\Bigl|\lim_{n\to\infty} \langle e^{it_n\ld} f_n, N_n^{\frac52} (P_1^{\Omega_n}\delta(0))(N_n(x-x_n))\rangle\Bigr|\notag\\ &=\Bigl|\lim_{n\to\infty}\langle e^{it_n\Delta_{\Omega}}f_n, N_n^{-\frac12}P_{N_n}^{\Omega}\delta(x_n)\rangle\Bigr|\gtrsim{\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78},\label{256} \end{align} where the last inequality follows from \eqref{cncen}. Thus, \begin{align*} \|{\tilde\phi}\|_{\dot H^1(\R^3)}\gtrsim {\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78} \end{align*} as in \eqref{lbf}. Combining this with \eqref{E:no deform}, for $n$ sufficiently large we obtain \begin{align*} \|\phi_n\|_{\dot H^1_D(\Omega)}= \| \chi_n \tilde \phi\|_{\dot H^1_D(\Omega_n)}\gtrsim {\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78}. \end{align*} This proves \eqref{nontri}. To prove the decoupling in $\dot H^1_D(\Omega)$, we write \begin{align*} &\|f_n\|_{\dot H^1_D(\Omega)}^2 -\|f_n-\phi_n\|_{\dot H^1_D(\Omega)}^2 = 2\langle f_n, \phi_n\rangle_{\dot H^1_D(\Omega)}-\|\phi_n\|_{\dot H^1_D(\Omega)}^2\\ &\quad=2\Bigl\langle N_n^{-\frac 12} (e^{it_n\ld} f_n)(N_n^{-1} x+x_n),\, {\tilde\phi}(x)\chi_n(x)\Bigr\rangle_{\dot H^1_D(\Omega_n)}-\| \chi_n \tilde \phi\|_{\dot H^1_D(\Omega_n)}^2\\ &\quad=2\langle g_n, \tilde \phi\rangle_{\dot H^1(\R^3)}-2\bigl\langle g_n, {\tilde\phi} (1-\chi_n) \bigr\rangle_{\dot H^1(\R^3)} -\| \chi_n \tilde \phi\|_{\dot H^1_D(\Omega_n)}^2. \end{align*} From the weak convergence of $g_n$ to ${\tilde\phi}$, \eqref{E:no deform}, and \eqref{nontri}, we deduce \begin{align*} \lim_{n\to\infty}\Bigl\{\|f_n\|_{\dot H^1_D(\Omega)}^2-\|f_n-\phi_n\|_{\dot H^1_D(\Omega)}^2\Bigr\}=\|\tilde\phi\|_{\dot H^1(\R^3)}^2 \gtrsim {\varepsilon}^2 (\tfrac{{\varepsilon}}A)^{\frac 74}.
\end{align*} This completes the verification of \eqref{dech}. We now turn to proving decoupling of the $L_{t,x}^{10}(\R\times\Omega)$ norm, which we will achieve by showing that \begin{align}\label{305} \liminf_{n\to\infty}\biggl\{\|e^{it\Delta_\Omega} f_n\|_{L_{t,x}^{10}(\R\times\R^3)}^{10}-\|e^{it\Delta_\Omega}(f_n-\phi_n)&\|_{L_{t,x}^{10}(\R\times\Omega)}^{10}\biggr\} = \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\phi\|_{L_{t,x}^{10}(\R\times\R^3)}^{10}. \end{align} Notice that \eqref{dect} then follows from the lower bound \begin{align}\label{328} \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde \phi\|_{L_{t,x}^{10}(\R\times\R^3)}^{10}\gtrsim {\varepsilon}^{10} (\tfrac {\varepsilon} A)^{\frac{35}4}, \end{align} which we prove in much the same way as in Case~1: From \eqref{256} and the Mikhlin multiplier theorem, we have \begin{align*} {\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac 78}&\lesssim |\langle \tilde\phi, h\rangle|\lesssim \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\phi\|_{L^{10}({\mathbb{R}}^3)}\|e^{it\Delta_{{\mathbb{R}}^3}} P_1^{{\mathbb{R}}^3}\delta(0)\|_{L^{\frac {10}9}({\mathbb{R}}^3)}\lesssim \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\phi\|_{L^{10}({\mathbb{R}}^3)} \end{align*} uniformly for $|t|\leq 1$. Integrating in time yields \eqref{328} and then plugging this into \eqref{305} leads to \eqref{dect} in Case 2. To establish \eqref{305} we need two ingredients: The first ingredient is \begin{align}\label{c2i1} e^{it\Delta_{\Omega_n}}[g_n-\chi_n\tilde\phi]\to 0 \quad \text{a.e. in } {\mathbb{R}}\times{\mathbb{R}}^3, \end{align} while the second ingredient is \begin{align}\label{c2i2} \|e^{it\Delta_{\Omega_n}}[\chi_n\tilde \phi]-e^{it\Delta_{{\mathbb{R}}^3}}\tilde\phi\|_{L_{t,x}^{10}(\R\times\R^3)}\to 0. \end{align} Combining these and passing to a subsequence if necessary we obtain $$ e^{it\Delta_{\Omega_n}}g_n-e^{it\Delta_{{\mathbb{R}}^3}}\tilde\phi\to 0 \quad \text{a.e. in } {\mathbb{R}}\times{\mathbb{R}}^3, $$ which by the Fatou Lemma of Br\'ezis and Lieb (cf. 
Lemma~\ref{lm:rf}) yields \begin{align*} \liminf_{n\to\infty}\Bigl\{\|e^{it\Delta_{\Omega_n}}g_n\|_{L_{t,x}^{10}(\R\times\R^3)}^{10}-\|e^{it\Delta_{\Omega_n}}g_n-e^{it\Delta_{{\mathbb{R}}^3}} &\tilde\phi\|_{L_{t,x}^{10}(\R\times\R^3)}^{10}\Bigr\}\\ &= \|e^{it\Delta_{{\mathbb{R}}^3}} \tilde\phi\|_{L_{t,x}^{10}(\R\times\R^3)}^{10}. \end{align*} Combining this with \eqref{c2i2} and rescaling yields \eqref{305}. We start with the first ingredient \eqref{c2i1}. Using the definition of $\tilde \phi$ together with \eqref{E:no deform}, we deduce \begin{align*} g_n-\chi_n\tilde \phi \rightharpoonup 0 \quad \text{weakly in} \quad \dot H^1({\mathbb{R}}^3). \end{align*} Thus, by Proposition~\ref{P:converg}, \begin{align*} e^{it\Delta_{\Omega_n}}[g_n-\chi_n\tilde \phi]\rightharpoonup 0 \quad \text{weakly in} \quad \dot H^1({\mathbb{R}}^3) \end{align*} for each $t\in {\mathbb{R}}$. By the same argument as in Case 1, using the fact that $( i\partial_t)^{1/2} e^{it\Delta_{\Omega_n}}=(-\Delta_{\Omega_n})^{1/2} e^{it\Delta_{\Omega_n}}$ and passing to a subsequence, we obtain \eqref{c2i1}. To establish \eqref{c2i2} we will make use of Corollary~\ref{C:LF}. Note that $\tlim \Omega_n={\mathbb{R}}^3\setminus\{x_\infty\}$ and by Lemma~\ref{L:dense}, $\tilde \phi$ can be well approximated in $\dot H^1({\mathbb{R}}^3)$ by $\psi\in C^\infty_c(\tlim \Omega_n)$. By \eqref{E:no deform}, for $n$ sufficiently large, $\chi_n\tilde \phi$ are also well approximated in $\dot H^1({\mathbb{R}}^3)$ by the same $\psi\in C^\infty_c(\tlim \Omega_n)$. Thus, \eqref{c2i2} follows by combining Corollary~\ref{C:LF} with the Strichartz inequality. \textbf{Case 3:} The defining characteristic of this case is that the rescaled obstacles $\Omega_n^c$ march off to infinity; specifically, $\dist(0,\Omega_n^c)=N_nd(x_n)\to \infty$, where $\Omega_n:=N_n(\Omega-\{x_n\})$. The treatment of this case parallels that of Case~2. 
The differing geometry of the two cases enters only in the use of Proposition~\ref{P:converg}, Corollary~\ref{C:LF}, and the analogue of the estimate \eqref{E:no deform}. As these first two inputs have already been proven in all cases, our only obligation is to prove \begin{align}\label{E:no deform 3} \chi_n{\tilde\phi} \to {\tilde\phi}, \qtq{or equivalently,} \Theta\bigl(\tfrac{|x|}{\dist(0,\Omega_n^c)}\bigr) {\tilde\phi}(x) \to 0 \quad \text{in $\dot H^1(\R^3)$}. \end{align} To this end, let $B_n:= \{x\in {\mathbb{R}}^3: \, |x|\geq \frac14\dist(0, \Omega_n^c)\}$. Then by H\"older, \begin{align*} \bigl\| \Theta\bigl(\tfrac{|x|}{\dist(0,\Omega_n^c)}\bigr) {\tilde\phi}(x)\bigr\|_{\dot H^1(\R^3)} \lesssim \|\nabla {\tilde\phi}(x) \|_{L^2(B_n)}+ \| {\tilde\phi} \|_{L^6(B_n)}. \end{align*} As $1_{B_n} \to 0$ almost everywhere, \eqref{E:no deform 3} follows from the dominated convergence theorem. \textbf{Case 4:} Passing to a subsequence, we may assume $N_nd(x_n)\to d_\infty>0$. By weak sequential compactness of balls in $\dot H^1({\mathbb{R}}^3)$, we may find a subsequence and a ${\tilde\phi}\in \dot H^1({\mathbb{R}}^3)$ so that $g_n \rightharpoonup {\tilde\phi}$ weakly in this space. However, the proposition claims that ${\tilde\phi}\in \dot H^1_D({\mathbb{H}})$. This is a closed subspace isometrically embedded in $\dot H^1({\mathbb{R}}^3)$; indeed, $$ \dot H^1_D({\mathbb{H}}) = \bigl\{ g\in\dot H^1({\mathbb{R}}^3) : {\textstyle\int_{{\mathbb{R}}^3}} g(x)\psi(x) \,dx = 0 \text{ for all } \psi\in C^\infty_c(-{\mathbb{H}}) \bigr\}. $$ Using this characterization, it is not difficult to see that ${\tilde\phi}\in \dot H^1_D({\mathbb{H}})$ since for any compact set $K$ in the open halfspace $-{\mathbb{H}}$, we have $K\subset \Omega_n^c$ for $n$ sufficiently large. Here $\Omega_n:=N_n R_n^{-1}(\Omega-\{x_n^*\})$, which is where $g_n$ is supported.
As ${\tilde\phi}\in \dot H^1_D({\mathbb{H}})$ we have $\phi_n \in \dot H^1_D(\Omega)$, as is easily seen from $$ x\in{\mathbb{H}} \iff N_n^{-1} R_n^{} x + x^*_n \in {\mathbb{H}}_n := \{ y : (x_n - x_n^*)\cdot (y-x_n^*) >0 \} \subseteq \Omega; $$ indeed, $\partial {\mathbb{H}}_n$ is the tangent plane to $\partial\Omega$ at $x_n^*$. This inclusion further shows that \begin{align}\label{6:20} \bigl\| {\tilde\phi} \bigr\|_{\dot H^1_D({\mathbb{H}})} = \bigl\| \phi_n \bigr\|_{\dot H^1_D({\mathbb{H}}_n)} = \bigl\| \phi_n \bigr\|_{\dot H^1_D(\Omega)}. \end{align} To prove claim \eqref{nontri} it thus suffices to show a lower bound on $\| {\tilde\phi} \|_{\dot H^1_D({\mathbb{H}})}$. To this end, let $h:=P_1^{{\mathbb{H}}}\delta_{d_\infty e_3}$. From the Bernstein inequality we have \begin{align}\label{6:40} \|(-\Delta_{{\mathbb{H}}})^{-\frac 12}h\|_{L^2({\mathbb{H}})}\lesssim 1. \end{align} In particular, $h\in \dot H^{-1}_D({\mathbb{H}})$. Now let $\tilde x_n:= N_nR_n^{-1}(x_n-x_n^*)$; by hypothesis, $\tilde x_n \to d_\infty e_3$. Using Proposition~\ref{P:converg} we obtain \begin{align*} \langle \tilde \phi,h\rangle &=\lim_{n\to\infty}\Bigl\{\langle g_n,P_1^{\Omega_n}\delta_{\tilde x_n}\rangle + \langle g_n,[P_{1}^{{\mathbb{H}}}-P_1^{\Omega_n}]\delta_{d_\infty e_3}\rangle + \langle g_n,P_1^{\Omega_n}[\delta_{d_\infty e_3} - \delta_{\tilde x_n}]\rangle\Bigr\}\\ &=\lim_{n\to\infty}\Bigl\{N_n^{-\frac12}(e^{it_n\Delta_{\Omega}}P_{N_n}^{\Omega} f_n)(x_n)+\langle g_n,P_1^{\Omega_n}[\delta_{d_\infty e_3} - \delta_{\tilde x_n}]\rangle\Bigr\}. \end{align*} Arguing as in the treatment of \eqref{6:37} and applying \eqref{elliptic est} to $v(x)=(P_1^{\Omega_n}g_n)(x+\tilde x_n)$ with $R=\frac12 N_nd(x_n)$, for $n$ sufficiently large we obtain \begin{align*} |\langle g_n,P_1^{\Omega_n}[\delta_{d_\infty e_3} - \delta_{\tilde x_n}]\rangle|&\lesssim A \bigl(d_\infty^{-1}+d_\infty\bigr) |d_\infty e_3- \tilde x_n|\to 0\qtq{as} n\to \infty.
\end{align*} Therefore, we have \begin{align*} |\langle \tilde \phi,h\rangle|\gtrsim {\varepsilon} (\tfrac{{\varepsilon}}A)^{\frac78}, \end{align*} which together with \eqref{6:20} and \eqref{6:40} yields \eqref{nontri}. Claim \eqref{dech} is elementary; indeed, \begin{align*} \|f_n\|_{\dot H^1_D(\Omega)}^2 -\|f_n-\phi_n\|_{\dot H^1_D(\Omega)}^2 &= 2\langle f_n, \phi_n\rangle_{\dot H^1_D(\Omega)}-\|\phi_n\|_{\dot H^1_D(\Omega)}^2\\ &= 2\langle g_n,\, {\tilde\phi}\rangle_{\dot H^1_D(\Omega_n)}-\|\tilde\phi\|_{\dot H^{1}_D({\mathbb{H}})}^2 \to \|\tilde\phi\|_{\dot H^{1}_D({\mathbb{H}})}^2. \end{align*} The proof of \eqref{dect} differs little from the cases treated previously: One uses the Rellich Lemma and Corollary~\ref{C:LF} to show $e^{it\Delta_{\Omega_n}} g_n\to e^{it\Delta_{\mathbb{H}}}{\tilde\phi}$ almost everywhere and then the Fatou Lemma of Br\'ezis and Lieb to see that $$ \text{LHS\eqref{dect}} = \| e^{it\Delta_{\mathbb{H}}} {\tilde\phi} \|_{L^{10}({\mathbb{R}}\times{\mathbb{H}})}^{10}. $$ The lower bound on this quantity comes from pairing with $h$; see Cases~1 and~2. \end{proof} To prove a linear profile decomposition for the propagator $e^{it\Delta_\Omega}$ we will also need the following weak convergence results. \begin{lem}[Weak convergence]\label{L:converg} Assume $\Omega_n\equiv\Omega$ or $\{\Omega_n\}$ conforms to one of the three scenarios considered in Proposition~\ref{P:convdomain}. Let $f\in C_c^\infty(\tlim \Omega_n)$ and let $\{(t_n,x_n)\}_{n\geq 1}\subset{\mathbb{R}}\times{\mathbb{R}}^3$. Then \begin{align}\label{lc} e^{it_n\Delta_{\Omega_n}}f(x+x_n) \rightharpoonup 0 \quad \text{weakly in } \dot H^1({\mathbb{R}}^3) \quad \text{as } n\to \infty \end{align} whenever $|t_n|\to \infty$ or $|x_n|\to \infty$. \end{lem} \begin{proof} By the definition of $\tlim\Omega_n$, we have $f\in C^\infty_c(\Omega_n)$ for $n$ sufficiently large. Let $\Omega_\infty$ denote the limit of $\Omega_n$ in the sense of Definition~\ref{D:converg}. 
We first prove \eqref{lc} when $t_n\to \infty$; the proof when $t_n\to-\infty$ follows symmetrically. Let $\psi\in C_c^\infty({\mathbb{R}}^3)$ and let $$ F_n(t):=\langle e^{it\Delta_{\Omega_n}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}. $$ To establish \eqref{lc}, we need to show \begin{align}\label{lc1} F_n(t_n)\to 0 \qtq{as} n\to \infty. \end{align} We compute \begin{align*} |\partial_t F_n(t)|&= \bigl|\langle i\Delta_{\Omega_n} e^{it\Delta_{\Omega_n}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)} \bigr|\\ &= \bigl|\langle \Delta_{\Omega_n} e^{it\Delta_{\Omega_n}}f(x+x_n), \Delta \psi\rangle_{L^2({\mathbb{R}}^3)} \bigr| \lesssim \|f\|_{\dot H^2} \|\psi\|_{\dot H^2}\lesssim_{f,\psi}1. \end{align*} On the other hand, \begin{align*} \|F_n\|_{L_t^{\frac{10} 3}([t_n,\infty))} &\lesssim \|e^{it\Delta_{\Omega_n}}f\|_{L_{t,x}^{\frac{10}3}([t_n,\infty)\times{\mathbb{R}}^3)}\|\Delta \psi\|_{L_x^{\frac{10}7}({\mathbb{R}}^3)}\\ &\lesssim_\psi \|[e^{it\Delta_{\Omega_n}}-e^{it\Delta_{\Omega_\infty}}]f\|_{L_{t,x}^{\frac{10}3}([0,\infty)\times{\mathbb{R}}^3)} +\|e^{it\Delta_{\Omega_\infty}}f\|_{L_{t,x}^{\frac{10}3}([t_n,\infty)\times{\mathbb{R}}^3)}. \end{align*} The first term converges to zero as $n\to \infty$ by Theorem~\ref{T:LF}, while convergence to zero of the second term follows from the Strichartz inequality combined with the dominated convergence theorem. Putting everything together, we derive \eqref{lc1}: setting $a_n:=|F_n(t_n)|$, the derivative bound gives $|F_n(t)|\geq \frac12 a_n$ on an interval of length $\gtrsim a_n$ to the right of $t_n$, and hence $a_n^{\frac{13}{10}}\lesssim \|F_n\|_{L_t^{\frac{10}3}([t_n,\infty))}\to 0$. This proves \eqref{lc} when $t_n\to \infty$. Now assume $\{t_n\}_{n\geq 1}$ is bounded, but $|x_n|\to \infty$ as $n\to \infty$. Without loss of generality, we may assume $t_n\to t_\infty\in {\mathbb{R}}$ as $n\to \infty$. Let $\psi\in C_c^\infty({\mathbb{R}}^3)$ and $R>0$ such that $\supp\psi\subseteq B(0,R)$.
We write \begin{align*} \langle e^{it_n\Delta_{\Omega_n}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)} &=\langle e^{it_\infty\Delta_{\Omega_\infty}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)} \\ &\quad +\langle [e^{it_\infty\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_\infty}}]f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}\\ &\quad +\langle [e^{it_n\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_n}}]f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}. \end{align*} By the Cauchy--Schwarz inequality, \begin{align*} \bigl|\langle e^{it_\infty\Delta_{\Omega_\infty}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}\bigr| \lesssim \|e^{it_\infty\Delta_{\Omega_{\infty}}}f\|_{L^2(|x|\ge |x_n|-R)} \|\Delta \psi\|_{L^2({\mathbb{R}}^3)}, \end{align*} which converges to zero as $n\to \infty$, by the dominated convergence theorem. By duality and Proposition~\ref{P:converg}, \begin{align*} \bigl|\langle [e^{it_\infty\Delta_{\Omega_n}}-&e^{it_\infty\Delta_{\Omega_\infty}}]f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}\bigr|\\ &\lesssim \|[e^{it_\infty\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_\infty}}]f\|_{\dot H^{-1}({\mathbb{R}}^3)} \|\Delta\psi\|_{\dot H^1({\mathbb{R}}^3)} \to 0 \qtq{as} n\to \infty. \end{align*} Finally, by the fundamental theorem of calculus, \begin{align*} \bigl|\langle [e^{it_n\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_n}}]f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)}\bigr| \lesssim |t_n-t_\infty| \|\Delta_{\Omega_n} f\|_{L^2} \|\Delta \psi\|_{L^2}, \end{align*} which converges to zero as $n\to \infty$. Putting everything together we deduce $$ \langle e^{it_n\Delta_{\Omega_n}}f(x+x_n), \psi\rangle_{\dot H^1({\mathbb{R}}^3)} \to 0 \qtq{as} n\to \infty. $$ This completes the proof of the lemma. \end{proof} \begin{lem}[Weak convergence]\label{L:compact} Assume $\Omega_n\equiv\Omega$ or $\{\Omega_n\}$ conforms to one of the three scenarios considered in Proposition~\ref{P:convdomain}.
Let $f_n\in \dot H_D^1(\Omega_n)$ be such that $f_n\rightharpoonup 0$ weakly in $\dot H^1({\mathbb{R}}^3)$ and let $t_n\to t_\infty\in {\mathbb{R}}$. Then \begin{align*} e^{it_n\Delta_{\Omega_n}} f_n\rightharpoonup 0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3). \end{align*} \end{lem} \begin{proof} For any $\psi\in C_c^{\infty}({\mathbb{R}}^3)$, \begin{align*} \bigl|\langle [e^{it_n\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_n}}]f_n, \psi\rangle_{\dot H^1({\mathbb{R}}^3)}\bigr| &\lesssim \|[e^{it_n\Delta_{\Omega_n}}-e^{it_\infty\Delta_{\Omega_n}}]f_n\|_{L^2} \|\Delta\psi\|_{L^2}\\ &\lesssim |t_n-t_\infty|^{\frac12} \|(-\Delta_{\Omega_n})^{\frac12}f_n\|_{L^2} \|\Delta\psi\|_{L^2}, \end{align*} which converges to zero as $n\to \infty$. To obtain the last inequality above, we have used the spectral theorem together with the elementary inequality $|e^{it_n\lambda}-e^{it_\infty\lambda}|\lesssim |t_n-t_\infty|^{1/2}\lambda^{1/2}$ for $\lambda\geq 0$. Thus, we are left to prove \begin{align*} \int_{{\mathbb{R}}^3} \nabla \bigl[e^{it_\infty\Delta_{\Omega_n}} f_n\bigr](x) \nabla \bar\psi(x)\,dx = \int_{{\mathbb{R}}^3} e^{it_\infty\Delta_{\Omega_n}} f_n (x) (-\Delta\bar\psi)(x)\, dx \to 0 \qtq{as} n\to \infty \end{align*} for all $\psi\in C_c^\infty({\mathbb{R}}^3)$. By Sobolev embedding, $$ \|e^{it_\infty\Delta_{\Omega_n}} f_n\|_{L^6}\lesssim \|f_n\|_{\dot H^1({\mathbb{R}}^3)}\lesssim 1 \qtq{uniformly in} n\geq 1, $$ and so using a density argument and the dominated convergence theorem (using the fact that the measure of $\Omega_n\triangle(\tlim \Omega_n)$ converges to zero), it suffices to show \begin{align}\label{9:38am} \int_{{\mathbb{R}}^3} e^{it_\infty\Delta_{\Omega_n}} f_n (x) \bar\psi(x)\, dx \to 0 \qtq{as} n\to \infty \end{align} for all $\psi\in C_c^\infty(\tlim \Omega_n)$. 
To see that \eqref{9:38am} is true, we write \begin{align*} \langle e^{it_\infty\Delta_{\Omega_n}} f_n, \psi \rangle =\langle f_n, [e^{-it_\infty\Delta_{\Omega_n}} -e^{-it_\infty\Delta_{\Omega_\infty}}]\psi \rangle + \langle f_n,e^{-it_\infty\Delta_{\Omega_\infty}}\psi \rangle, \end{align*} where $\Omega_\infty$ denotes the limit of $\Omega_n$ in the sense of Definition~\ref{D:converg}. The first term converges to zero by Proposition~\ref{P:converg}. As $f_n\rightharpoonup 0$ in $\dot H^1({\mathbb{R}}^3)$, to see that the second term converges to zero, we merely need to prove that $e^{-it_\infty\Delta_{\Omega_\infty}}\psi\in \dot H^{-1}({\mathbb{R}}^3)$ for all $\psi\in C_c^\infty(\tlim \Omega_n)$. Toward this end, we use the Mikhlin multiplier theorem and Bernstein's inequality to estimate \begin{align*} \|e^{-it_\infty\Delta_{\Omega_\infty}}\psi\|_{\dot H^{-1}({\mathbb{R}}^3)} &\lesssim\|e^{-it_\infty\Delta_{\Omega_\infty}}P_{\leq 1}^{\Omega_\infty} \psi\|_{L^{\frac65}}+\sum_{N\geq 1}\|e^{-it_\infty\Delta_{\Omega_\infty}}P_N^{\Omega_\infty}\psi\|_{L^{\frac65}}\\ &\lesssim\|\psi\|_{L^{\frac65}}+\sum_{N\geq 1} \langle N^2t_\infty\rangle^2\|P_N^{\Omega_\infty}\psi\|_{L^{\frac65}}\\ &\lesssim \|\psi\|_{L^{\frac65}} + \|(-\Delta_{\Omega_\infty})^3\psi\|_{L^{\frac65}}\lesssim_\psi 1. \end{align*} This completes the proof of the lemma. \end{proof} Finally, we turn to the linear profile decomposition for the propagator $e^{it\Delta_\Omega}$ in $\dot H^1_D(\Omega)$. This is proved by the inductive application of Proposition~\ref{P:inverse Strichartz}. To handle the variety of cases in as systematic a way as possible, we introduce operators $G_n^j$ that act unitarily in $\dot H^1({\mathbb{R}}^3)$. \begin{thm}[$\dot H^1_D(\Omega)$ linear profile decomposition]\label{T:LPD} Let $\{f_n\}$ be a bounded sequence in $\dot H^1_D(\Omega)$. 
After passing to a subsequence, there exist $J^*\in \{0, 1, 2, \ldots,\infty\}$, $\{\phi_n^j\}_{j=1}^{J^*}\subset \dot H_D^1(\Omega)$, $\{\lambda_n^j\}_{j=1}^{J^*}\subset(0,\infty)$, and $\{(t_n^j,x_n^j)\}_{j=1}^{J^*}\subset {\mathbb{R}}\times \Omega$ conforming to one of the following four cases for each $j$: \begin{CI} \item Case 1: $\lambda_n^j\equiv \lambda_\infty^j$, $x_n^j\to x_\infty^j$, and there is a $\phi^j\in \dot H^1_D(\Omega)$ so that \begin{align*} \phi_n^j = e^{it_n^j (\lambda_n^j)^2\Delta_\Omega}\phi^j. \end{align*} We define $[G_n^j f] (x) := (\lambda_n^j)^{-\frac 12} f\bigl(\tfrac{x-x_n^j}{\lambda_n^j} \bigr)$ and $\Omega_n^j:=(\lambda_n^j)^{-1}(\Omega - \{x_n^j\})$. \item Case 2: $\lambda_n^j\to \infty$, $-(\lambda_n^j)^{-1}x_n^j \to x^j_\infty\in{\mathbb{R}}^3$, and there is a $\phi^j\in \dot H^1({\mathbb{R}}^3)$ so that \begin{align*} \phi_n^j(x)= G_n^j \bigl[e^{it_n^j \Delta_{\Omega_n^j}} (\chi_n^j \phi^j)\bigr] (x) \qtq{with} [G_n^j f] (x) := (\lambda_n^j)^{-\frac 12} f\bigl(\tfrac{x-x_n^j}{\lambda_n^j} \bigr), \end{align*} $\Omega_n^j := (\lambda_n^j)^{-1}(\Omega - \{x_n^j\})$, $\chi_n^j(x)=\chi(\lambda_n^jx+x_n^j)$, and $\chi(x)=\Theta(\tfrac{d(x)}{\diam(\Omega^c)})$. \item Case 3: $\frac{d(x_n^j)}{\lambda_n^j}\to \infty$ and there is a $\phi^j\in \dot H^1({\mathbb{R}}^3)$ so that \begin{align*} \phi_n^j(x)= G_n^j \bigl[e^{it_n^j \Delta_{\Omega_n^j}} (\chi_n^j \phi^j)\bigr] (x) \qtq{with} [G_n^j f] (x) := (\lambda_n^j)^{-\frac 12} f\bigl(\tfrac{x-x_n^j}{\lambda_n^j} \bigr), \end{align*} $\Omega_n^j := (\lambda_n^j)^{-1}(\Omega - \{x_n^j\})$, and $\chi_n^j(x)=1-\Theta(\tfrac{\lambda_n^j|x|}{d(x_n^j)})$. 
\item Case 4: $\lambda_n^j\to 0$, $\frac{d(x_n^j)}{\lambda_n^j}\to d_\infty^j>0$, and there is a $\phi^j\in \dot H^1_D({\mathbb{H}})$ so that \begin{align*} \phi_n^j(x)= G_n^j \bigl[ e^{it_n^j \Delta_{\Omega_n^j}} \phi^j\bigr] (x) \qtq{with} [G_n^j f](x) := (\lambda_n^j)^{-\frac12} f\bigl(\tfrac{(R_n^j)^{-1}(x-(x_n^j)^*)}{\lambda_n^j}\bigr), \end{align*} $\Omega_n^j := (\lambda_n^j)^{-1}(R_n^j)^{-1}(\Omega - \{(x_n^j)^*\})$, $(x_n^j)^*\in \partial\Omega$ is defined by $d(x_n^j)=|x_n^j-(x_n^j)^*|$, and $R_n^j\in SO(3)$ satisfies $R_n^j e_3=\tfrac{x_n^j-(x_n^j)^*}{|x_n^j-(x_n^j)^*|}$. \end{CI} Further, for any finite $0\le J\le J^*$, we have the decomposition \begin{align*} f_n=\sum_{j=1}^ J \phi_n^j +w_n^J, \end{align*} with $w_n^J\in \dot H^1_D(\Omega)$ satisfying \begin{gather} \lim_{J\to J^*} \limsup_{n\to\infty} \|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}=0, \label{E:LP1}\\ \lim_{n\to\infty}\Bigl\{\|f_n\|_{\dot H^1_D(\Omega)}^2-\sum_{j=1}^J\|\phi_n^j\|_{\dot H_D^1(\Omega)}^2-\|w_n^J\|_{\dot H^1_D(\Omega)}^2\Bigr\}=0, \label{E:LP2}\\ \lim_{n\to\infty}\Bigl\{\|f_n\|_{L^6(\Omega)}^6-\sum_{j=1}^J \|\phi_n^j\|_{L^6(\Omega)}^6-\|w_n^J\|_{L^6(\Omega)}^6\Bigr\}=0, \label{E:LP3}\\ e^{it_n^J\Delta_{\Omega_n^J}}(G_n^J)^{-1}w_n^J\rightharpoonup 0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3), \label{E:LP4} \end{gather} and for all $j\neq k$ we have the asymptotic orthogonality property \begin{align}\label{E:LP5} \lim_{n\to\infty} \ \frac{\lambda_n^j}{\lambda_n^k}+\frac{\lambda_n^k}{\lambda_n^j}+ \frac{|x_n^j-x_n^k|^2}{\lambda_n^j\lambda_n^k}+\frac{|t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2|}{\lambda_n^j\lambda_n^k}=\infty. \end{align} Lastly, we may additionally assume that for each $j$ either $t_n^j\equiv 0$ or $t_n^j\to \pm \infty$. \end{thm} \begin{proof} We will proceed inductively and extract one bubble at a time. To start with, we set $w_n^0:=f_n$. Suppose we have a decomposition up to level $J\geq 0$ obeying \eqref{E:LP2} through \eqref{E:LP4}. 
(Note that conditions \eqref{E:LP1} and \eqref{E:LP5} will be verified at the end.) Passing to a subsequence if necessary, we set \begin{align*} A_J:=\lim_{n\to\infty} \|w_n^J\|_{\dot H^1_D(\Omega)} \qtq{and} {\varepsilon}_J:=\lim_{n\to \infty} \|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}. \end{align*} If ${\varepsilon}_J=0$, we stop and set $J^*=J$. If not, we apply the inverse Strichartz inequality Proposition \ref{P:inverse Strichartz} to $w_n^J$. Passing to a subsequence in $n$ we find $\{\phi_n^{J+1}\}\subset \dot H^1_D(\Omega)$, $\{\lambda_n^{J+1}\}\subset 2^{\mathbb Z}$, and $\{(t_n^{J+1}, x_n^{J+1})\}\subset{\mathbb{R}}\times\Omega$, which conform to one of the four cases listed in the theorem. Note that we rename the parameters given by Proposition~\ref{P:inverse Strichartz} as follows: $\lambda_n^{J+1} := N_n^{-1}$ and $t_n^{J+1} := - N_n^{2} t_n$. The profiles are defined as weak limits in the following way: \begin{align*} \tilde \phi^{J+1}=\wlim_{n\to\infty}(G_n^{J+1})^{-1} \bigl[ e^{-it_n^{J+1}(\lambda_n^{J+1})^2\Delta_\Omega}w_n^J\bigr] =\wlim_{n\to\infty}e^{-it_n^{J+1}\Delta_{\Omega_n^{J+1}}}[(G_n^{J+1})^{-1}w_n^J], \end{align*} where $G_n^{J+1}$ is defined in the statement of the theorem. In Cases 2, 3, 4, we define $\phi^{J+1}:=\tilde \phi^{J+1}$, while in Case 1, $$ \phi^{J+1}(x):= G_\infty^{J+1}\tilde \phi^{J+1}(x):=(\lambda_\infty^{J+1})^{-\frac12} \tilde \phi^{J+1}\bigl(\tfrac{x-x_\infty^{J+1}}{\lambda_\infty^{J+1}} \bigr). $$ Finally, $\phi_n^{J+1}$ is defined as in the statement of the theorem. Note that in Case 1, we can rewrite this definition as $$ \phi_n^{J+1}=e^{it_n^{J+1}(\lambda_n^{J+1})^2\Delta_\Omega}\phi^{J+1}=G_\infty^{J+1}e^{it_n^{J+1}\Delta_{\Omega_\infty^{J+1}}}\tilde \phi^{J+1}, $$ where $\Omega_\infty^{J+1}:=(\lambda_\infty^{J+1})^{-1}(\Omega - \{x_\infty^{J+1}\})$. 
Note that in all four cases, \begin{align}\label{strong} \lim_{n\to \infty}\|e^{-it_n^{J+1}\Delta_{\Omega_n^{J+1}}}(G_n^{J+1})^{-1}\phi_n^{J+1}-\tilde \phi^{J+1}\|_{\dot H^1({\mathbb{R}}^3)}=0; \end{align} see also \eqref{E:no deform} and \eqref{E:no deform 3} for Cases 2 and 3. Now define $w_n^{J+1}:=w_n^J-\phi_n^{J+1}$. By \eqref{strong} and the construction of $\tilde \phi^{J+1}$ in each case, \begin{align*} e^{-it_n^{J+1}\Delta_{\Omega_n^{J+1}}}(G_n^{J+1})^{-1}w_n^{J+1} \rightharpoonup 0 \quad \text{weakly in }\dot H^1({\mathbb{R}}^3). \end{align*} This proves \eqref{E:LP4} at the level $J+1$. Moreover, from Proposition \ref{P:inverse Strichartz} we also have \begin{align*} \lim_{n\to\infty}\Bigl\{\|w_n^J\|_{\dot H^1_D(\Omega)}^2-\|\phi_n^{J+1}\|_{\dot H^1_D(\Omega)}^2-\|w_n^{J+1}\|_{\dot H^1_D(\Omega)}^2\Bigr\}=0. \end{align*} This, together with the inductive hypothesis, gives \eqref{E:LP2} at the level $J+1$. A similar argument establishes \eqref{E:LP3} at the same level. From Proposition~\ref{P:inverse Strichartz}, passing to a further subsequence we have \begin{equation}\label{new a,eps} \begin{aligned} &A_{J+1}^2=\lim_{n\to\infty}\|w_n^{J+1}\|_{\dot H^1_D(\Omega)}^2\le A_J^2\Bigl[1 -C \bigl(\tfrac{{\varepsilon}_J}{A_J}\bigr)^{\frac{15}4}\Bigr]\leq A_J^2,\\ &{\varepsilon}_{J+1}^{10}=\lim_{n\to\infty}\|e^{it\Delta_{\Omega}}w_n^{J+1}\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}^{10}\le{\varepsilon}_J^{10}\Bigl[1-C\bigl(\tfrac{{\varepsilon}_J}{A_J}\bigr)^{\frac{35}4}\Bigr]. \end{aligned} \end{equation} If ${\varepsilon}_{J+1}=0$ we stop and set $J^*=J+1$; moreover, \eqref{E:LP1} is automatic. If ${\varepsilon}_{J+1}>0$ we continue the induction. If the algorithm does not terminate in finitely many steps, we set $J^*=\infty$; in this case, \eqref{new a,eps} implies ${\varepsilon}_J\to 0$ as $J\to \infty$ and so \eqref{E:LP1} follows. Next we verify the asymptotic orthogonality condition \eqref{E:LP5}. We argue by contradiction.
Assume \eqref{E:LP5} fails to be true for some pair $(j,k)$. Without loss of generality, we may assume $j<k$ and \eqref{E:LP5} holds for all pairs $(j,l)$ with $j<l<k$. Passing to a subsequence, we may assume \begin{align}\label{cg} \frac{\lambda_n^j}{\lambda_n^k}\to \lambda_0\in (0,\infty), \quad \frac{x_n^j-x_n^k}{\sqrt{\lambda_n^j\lambda_n^k}}\to x_0, \qtq{and} \frac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{\lambda_n^j\lambda_n^k}\to t_0. \end{align} From the inductive relation \begin{align*} w_n^{k-1}=w_n^j-\sum_{l=j+1}^{k-1}\phi_n^l \end{align*} and the definition for $\tilde \phi^k$, we obtain \begin{align} \tilde \phi^k&=\wlim_{n\to\infty}e^{-it_n^k\Delta_{\Omega_n^k}}[(G_n^k)^{-1}w_n^{k-1}]\notag\\ &=\wlim_{n\to\infty}e^{-it_n^k\Delta_{\Omega_n^k}}[(G_n^k)^{-1}w_n^j]\label{tp1}\\ &\quad-\sum_{l=j+1}^{k-1} \wlim_{n\to \infty}e^{-it_n^k\Delta_{\Omega_n^k}}[(G_n^k)^{-1}\phi_n^l]\label{tp2}. \end{align} We will prove that these weak limits are zero and so obtain a contradiction to the nontriviality of $\tilde \phi^k$. We rewrite \eqref{tp1} as follows \begin{align*} e^{-it_n^k\Delta_{\Omega_n^k}}[(G_n^k)^{-1}w_n^j] &=e^{-it_n^k\Delta_{\Omega_n^k}}(G_n^k)^{-1}G_n^je^{it_n^j\Delta_{\Omega_n^j}}[e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}w_n^j]\\ &=(G_n^k)^{-1}G_n^je^{i\bigl(t_n^j-t_n^k\tfrac{(\lambda_n^k)^2}{(\lambda_n^j)^2}\bigr)\Delta_{{\Omega_n^j}}}[e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}w_n^j]. \end{align*} Note that by \eqref{cg}, \begin{align*} t_n^j-t_n^k\frac{(\lambda_n^k)^2}{(\lambda_n^j)^2}=\frac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{\lambda_n^j\lambda_n^k}\cdot\frac{\lambda_n^k} {\lambda_n^j}\to \frac{t_0}{\lambda_0}. \end{align*} Using this together with \eqref{E:LP4}, Lemma~\ref{L:compact}, and the fact that the adjoints of the unitary operators $(G_n^k)^{-1}G_n^j$ converge strongly, we obtain $\eqref{tp1}=0$. To complete the proof of \eqref{E:LP5}, it remains to show $\eqref{tp2}=0$. 
For all $j<l<k$ we write \begin{align*} e^{-it_n^k{\Delta_{\Omega_n^k}}}(G_n^k)^{-1}\phi_n^l =(G_n^k)^{-1}G_n^je^{i\bigl(t_n^j-t_n^k\tfrac{(\lambda_n^k)^2}{(\lambda_n^j)^2}\bigr)\Delta_{{\Omega_n^j}}}[e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}\phi_n^l]. \end{align*} Arguing as for \eqref{tp1}, it thus suffices to show \begin{align*} e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}\phi_n^l\rightharpoonup 0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3). \end{align*} Using a density argument, this reduces to \begin{align}\label{need11} I_n:=e^{-it_n^j\Delta_{\Omega_n^j}}(G_n^j)^{-1}G_n^le^{it_n^l\Delta_{\Omega_n^l}}\phi\rightharpoonup 0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3), \end{align} for all $\phi\in C_c^\infty(\tlim \Omega_n^l)$. In Case 1, we also used the fact that $(G_n^l)^{-1} G_\infty^l$ converges strongly to the identity. Depending on which cases $j$ and $l$ fall into, we can rewrite $I_n$ as follows: \begin{CI} \item Case a): If both $j$ and $l$ conform to Case 1, 2, or 3, then \begin{align*} I_n=\biggl(\frac{\lambda_n^j}{\lambda_n^l}\biggr)^{\frac12}\biggl[e^{i\bigl(t_n^l-t_n^j\bigl(\frac{\lambda_n^j} {\lambda_n^l}\bigr)^2\bigr)\Delta_{\Omega_n^l}}\phi\biggr]\biggl(\frac{\lambda_n^j x+x_n^j- x_n^l}{\lambda_n^l}\biggr). \end{align*} \item Case b): If $j$ conforms to Case 1, 2, or 3 and $l$ to Case 4, then \begin{align*} I_n=\biggl(\frac{\lambda_n^j}{\lambda_n^l}\biggr)^{\frac12}\biggl[e^{i\bigl(t_n^l-t_n^j\bigl(\frac{\lambda_n^j} {\lambda_n^l}\bigr)^2\bigr) \Delta_{\Omega_n^l}}\phi\biggr]\biggl(\frac{(R_n^l)^{-1}(\lambda_n^j x+x_n^j-(x_n^l)^*)}{\lambda_n^l}\biggr). \end{align*} \item Case c): If $j$ conforms to Case 4 and $l$ to Case 1, 2, or 3, then \begin{align*} I_n=\biggl(\frac{\lambda_n^j}{\lambda_n^l}\biggr)^{\frac12}\biggl[e^{i\bigl(t_n^l-t_n^j\bigl(\frac{\lambda_n^j} {\lambda_n^l}\bigr)^2\bigr) \Delta_{\Omega_n^l}}\phi\biggr]\biggl(\frac{R_n^j\lambda_n^j x+(x_n^j)^*-x_n^l}{\lambda_n^l}\biggr). 
\end{align*} \item Case d): If both $j$ and $l$ conform to Case 4, then \begin{align*} I_n=\biggl(\frac{\lambda_n^j}{\lambda_n^l}\biggr)^{\frac12}\biggl[e^{i\bigl(t_n^l-t_n^j\bigl(\frac{\lambda_n^j} {\lambda_n^l}\bigr)^2\bigr) \Delta_{\Omega_n^l}}\phi\biggr]\biggl(\frac{(R_n^l)^{-1}(R_n^j\lambda_n^j x+(x_n^j)^*-(x_n^l)^*)}{\lambda_n^l}\biggr). \end{align*} \end{CI} We first prove \eqref{need11} when the scaling parameters are not comparable, that is, \begin{align*} \lim_{n\to\infty}\frac{\lambda_n^j}{\lambda_n^l}+\frac{\lambda_n^l}{\lambda_n^j}=\infty. \end{align*} We treat all cases simultaneously. By Cauchy--Schwarz, \begin{align*} \bigl|\langle I_n, \psi\rangle_{\dot H^1({\mathbb{R}}^3)}\bigr| &\lesssim \min\Bigl\{\|\Delta I_n\|_{L^2({\mathbb{R}}^3)}\|\psi\|_{L^2({\mathbb{R}}^3)}, \|I_n\|_{L^2({\mathbb{R}}^3)}\|\Delta\psi\|_{L^2({\mathbb{R}}^3)}\Bigr\}\\ &\lesssim \min\biggl\{\frac{\lambda_n^j}{\lambda_n^l}\|\Delta\phi\|_{L^2({\mathbb{R}}^3)}\|\psi\|_{L^2({\mathbb{R}}^3)}, \frac{\lambda_n^l}{\lambda_n^j}\|\phi\|_{L^2({\mathbb{R}}^3)}\|\Delta\psi\|_{L^2({\mathbb{R}}^3)}\biggr\}, \end{align*} which converges to zero as $n\to \infty$, for all $\psi\in C_c^\infty({\mathbb{R}}^3)$. Thus, in this case $\eqref{tp2}=0$ and we get the desired contradiction. Henceforth we may assume \begin{align*} \lim_{n\to \infty}\frac{\lambda_n^j}{\lambda_n^l}=\lambda_0\in (0,\infty). \end{align*} We now suppose the time parameters diverge, that is, \begin{align*} \lim_{n\to \infty}\frac{|t_n^j(\lambda_n^j)^2-t_n^l(\lambda_n^l)^2|}{\lambda_n^j\lambda_n^l}=\infty; \end{align*} then we also have \begin{align*} \biggl|t_n^l-t_n^j\biggl(\frac{\lambda_n^j}{\lambda_n^l}\biggr)^2\biggr| =\frac{|t_n^l(\lambda_n^l)^2-t_n^j(\lambda_n^j)^2|}{\lambda_n^l\lambda_n^j}\cdot\frac{\lambda_n^j}{\lambda_n^l}\to \infty \qtq{as} n\to\infty. \end{align*} We first discuss Case a). 
Under the above condition, \eqref{need11} follows from \begin{align*} \lambda_0^{\frac 12}\biggl(e^{i\bigl(t_n^l-t_n^j\bigl(\frac{\lambda_n^j}{\lambda_n^l}\bigr)^2\bigr)\Delta_{\Omega_n^l}}\phi\biggr)\bigl(\lambda_0 x+(\lambda_n^l)^{-1}(x_n^j-x_n^l)\bigr)\rightharpoonup 0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3), \end{align*} which is an immediate consequence of Lemma \ref{L:converg}. In Cases b), c), and d), the proof proceeds similarly since $SO(3)$ is a compact group; indeed, passing to a subsequence we may assume that $R_n^j\to R_0$ and $R_n^l\to R_1$, which places us in the same situation as in Case a). Finally, we deal with the situation when \begin{align}\label{cdition} \frac{\lambda_n^j}{\lambda_n^l}\to \lambda_0, \quad \frac{t_n^l(\lambda_n^l)^2-t_n^j(\lambda_n^j)^2}{\lambda_n^j\lambda_n^l}\to t_0, \qtq{but} \frac{|x_n^j-x_n^l|^2}{\lambda_n^j\lambda_n^l}\to \infty. \end{align} Then we also have $t_n^l-t_n^j(\lambda_n^j)^2/(\lambda_n^l)^2\to \lambda_0t_0$. Thus, in Case a) it suffices to show \begin{align}\label{524} \lambda_0^{\frac 12} e^{it_0\lambda_0\Delta_{\Omega_n^l}}\phi(\lambda_0x+y_n)\rightharpoonup0 \qtq{weakly in} \dot H^1({\mathbb{R}}^3), \end{align} where \begin{align*} y_n:=\frac{x_n^j-x_n^l}{\lambda_n^l}=\frac{x_n^j-x_n^l}{\sqrt{\lambda_n^l\lambda_n^j}}\sqrt{\frac{\lambda_n^j}{\lambda_n^l}}\to \infty \qtq{as} n\to \infty. \end{align*} The desired weak convergence \eqref{524} follows from Lemma \ref{L:converg}. As $SO(3)$ is a compact group, in Case b) we can proceed similarly if we can show \begin{align*} \frac{|x_n^j-( x_n^l)^*|}{\lambda_n^l}\to \infty \qtq{as} n\to \infty. \end{align*} But this is immediate from an application of the triangle inequality: for $n$ sufficiently large, \begin{align*} \frac{|x_n^j-(x_n^l)^*|}{\lambda_n^l}\ge\frac{|x_n^j-x_n^l|}{\lambda_n^l}-\frac{|x_n^l-(x_n^l)^*|}{\lambda_n^l} \ge\frac{|x_n^j-x_n^l|}{\lambda_n^l}-2d_\infty^l\to \infty. \end{align*} Case c) can be treated symmetrically. 
Finally, in Case d) we note that for $n$ sufficiently large, \begin{align*} \frac{|(x_n^j)^*-(x_n^l)^*|}{\lambda_n^l}&\ge\frac{|x_n^j-x_n^l|}{\lambda_n^l}-\frac{|x_n^j-(x_n^j)^*|}{\lambda_n^l}-\frac{|x_n^l-(x_n^l)^*|}{\lambda_n^l}\\ &\ge\frac{|x_n^j-x_n^l|}{\sqrt{\lambda_n^j\lambda_n^l}}\sqrt{\frac{\lambda_n^j}{\lambda_n^l}}-\frac{d(x_n^j)}{\lambda_n^j}\frac{\lambda_n^j}{\lambda_n^l}-\frac{d(x_n^l)}{\lambda_n^l}\\ &\ge \frac12\sqrt{\lambda_0}\frac{|x_n^j-x_n^l|}{\sqrt{\lambda_n^j\lambda_n^l}}-2\lambda_0d_\infty^j-2d_\infty^l\to \infty \qtq{as} n\to \infty. \end{align*} The desired weak convergence follows again from Lemma \ref{L:converg}. Finally, we prove the last assertion in the theorem regarding the behaviour of $t_n^j$. For each $j$, by passing to a subsequence we may assume $t_n^j\to t^j\in [-\infty, \infty]$. Using a standard diagonal argument, we may assume that the limit exists for all $j\ge 1$. Given $j$, if $t^j=\pm\infty$, there is nothing more to be proved; thus, let us suppose that $t^j\in (-\infty, \infty)$. We claim that we may redefine $t_n^j\equiv 0$, provided we replace the original profile $\phi^j$ by $\exp\{it^j\Delta_{\Omega^j_\infty}\} \phi^j$, where $\Omega^j_\infty$ denotes the limiting geometry dictated by the case to which $j$ conforms. Underlying this claim is the assertion that the errors introduced by these changes can be incorporated into $w_n^J$. The exact details of proving this depend on the case to which $j$ conforms; however, the principal ideas are always the same. Let us give the details in Case~2 alone (for which $\Omega_\infty^j={\mathbb{R}}^3$). Here, the claim boils down to the assertion that \begin{align}\label{s1} \lim_{n\to\infty} \bigl\|e^{it_n^j(\lambda_n^j)^2\Delta_\Omega}[G_n^j(\chi_n^j\phi^j)] - G_n^j(\chi_n^j e^{it^j\Delta_{{\mathbb{R}}^3}} \phi^j) \bigr\|_{\dot H^1_D(\Omega)} = 0. 
\end{align} To prove \eqref{s1} we first invoke Lemma~\ref{L:dense} to replace $\phi^j$ by a function $\psi\in C^\infty_c(\tlim \Omega^j_n)$. Moreover, for such functions $\psi$ we have $\chi_n^j\psi=\psi$ for $n$ sufficiently large. Doing this and also changing variables, we reduce \eqref{s1} to \begin{align}\label{s1'} \lim_{n\to\infty} \bigl\|e^{it_n^j\Delta_{\Omega_n^j}} \psi - \chi_n^j e^{it^j\Delta_{{\mathbb{R}}^3}} \psi \bigr\|_{\dot H^1_D(\Omega_n^j)} = 0. \end{align} We prove this by breaking it into three pieces. First, by the fundamental theorem of calculus in the time variable, we have $$ \bigl\|e^{it_n^j\Delta_{\Omega_n^j}} \psi - e^{it^j\Delta_{\Omega_n^j}}\psi \bigr\|_{\dot H^1({\mathbb{R}}^3)} \leq | t_n^j - t^j | \| \Delta \psi \|_{\dot H^1({\mathbb{R}}^3)}, $$ which converges to zero since $t_n^j \to t^j$. Secondly, we claim that $$ e^{it^j\Delta_{\Omega_n^j}}\psi \to e^{it^j\Delta_{{\mathbb{R}}^3}}\psi \qtq{strongly in} \dot H^1({\mathbb{R}}^3) \qtq{as} n\to\infty. $$ Indeed, the $\dot H^1({\mathbb{R}}^3)$ norms of both the proposed limit and all terms in the sequence are the same, namely, $\|\psi\|_{\dot H^1({\mathbb{R}}^3)}$. Thus, strong convergence can be deduced from weak convergence, which follows from Proposition~\ref{P:converg}. The third and final part of \eqref{s1'}, namely, \begin{align*} \bigl\| (1 - \chi_n^j) e^{it^j\Delta_{{\mathbb{R}}^3}} \psi \bigr\|_{\dot H^1({\mathbb{R}}^3)} \to 0 \qtq{as} n\to\infty, \end{align*} can be shown by direct computation using that $\lambda_n^j\to\infty$; see the proof of \eqref{E:no deform}. This completes the proof of Theorem~\ref{T:LPD}. \end{proof} \section{Embedding of nonlinear profiles}\label{S:Nonlinear Embedding} The next step in the proof of Theorem~\ref{T:main} is to use the linear profile decomposition obtained in the previous section to derive a Palais--Smale condition for minimizing sequences of blowup solutions to \eqref{nls}.
This essentially amounts to proving a nonlinear profile decomposition for solutions to $\text{NLS}_\Omega$; in the next section, we will prove this decomposition and then combine it with the stability result Theorem~\ref{T:stability} to derive the desired compactness for minimizing sequences of solutions. This leads directly to the Palais--Smale condition. In order to prove a nonlinear profile decomposition for solutions to \eqref{nls}, we have to address the possibility that the nonlinear profiles we will extract are solutions to the energy-critical equation in \emph{different} limiting geometries. In this section, we will see how to embed these nonlinear profiles corresponding to different limiting geometries back inside $\Omega$. Specifically, we need to approximate these profiles \emph{globally in time} by actual solutions to \eqref{nls} that satisfy \emph{uniform} spacetime bounds. This section contains three theorems, one for each of the Cases 2, 3, and 4 discussed in the previous sections. As in Section~\ref{S:LPD}, throughout this section $\Theta:{\mathbb{R}}^3\to [0,1]$ denotes a smooth function such that \begin{align*} \Theta(x)=\begin{cases}0, \ & |x|\le \frac 14, \\1, \ & |x|\geq \frac 12. \end{cases} \end{align*} We will also use the following notation: $$ \dot X^1(I\times\Omega):=L_{t,x}^{10}(I\times\Omega)\cap L_t^5\dot H^{1,\frac{30}{11}}_D(I\times\Omega). $$ Our first result in this section concerns the scenario when the rescaled obstacles $\Omega_n^c$ are shrinking to a point (cf. Case 2 in Theorem~\ref{T:LPD}). \begin{thm}[Embedding nonlinear profiles for shrinking obstacles]\label{T:embed2} Let $\{\lambda_n\}\subset 2^{\mathbb Z}$ be such that $\lambda_n\to \infty$. Let $\{t_n\}\subset{\mathbb{R}}$ be such that $t_n\equiv0$ or $t_n\to \pm\infty$. Let $\{x_n\}\subset \Omega$ be such that $-\lambda_n^{-1}x_n\to x_\infty\in {\mathbb{R}}^3$. 
Let $\phi\in \dot H^1({\mathbb{R}}^3)$ and \begin{align*} \phi_n(x)=\lambda_n^{-\frac12}e^{it_n\lambda_n^2\Delta_\Omega}\bigl[(\chi_n\phi)\bigl(\tfrac{x-x_n}{\lambda_n}\bigr)\bigr], \end{align*} where $\chi_n(x)=\chi(\lambda_n x+x_n)$ with $\chi(x)=\Theta(\tfrac{d(x)}{\diam(\Omega^c)})$. Then for $n$ sufficiently large there exists a global solution $v_n$ to $\text{NLS}_{\Omega}$ with initial data $v_n(0)=\phi_n$ which satisfies \begin{align*} \|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lesssim 1, \end{align*} with the implicit constant depending only on $\|\phi\|_{\dot H^1}$. Furthermore, for every ${\varepsilon}>0$ there exists $N_{\varepsilon}\in {\mathbb{N}}$ and $\psi_{\varepsilon}\in C_c^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)$ such that for all $n\geq N_{\varepsilon}$ we have \begin{align}\label{dense2} \|v_n(t-\lambda_n^2 t_n,x+x_n)-\lambda_n^{-\frac12}\psi_{\varepsilon}(\lambda_n^{-2}t,\lambda_n^{-1} x)\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<{\varepsilon}. \end{align} \end{thm} \begin{proof} The proof contains five steps. In the first step we construct global solutions to the energy-critical NLS in the limiting geometry ${\mathbb{R}}^3$ and we record some of their properties. In the second step we construct a candidate for the sought-after solution to $\text{NLS}_\Omega$. In the third step we prove that our candidate asymptotically matches the initial data $\phi_n$, while in the fourth step we prove that it is an approximate solution to \eqref{nls}. In the last step we invoke the stability result Theorem~\ref{T:stability} to find $v_n$ and then prove the approximation result \eqref{dense2}. To ease notation, throughout the proof we will write $-\Delta=-\Delta_{{\mathbb{R}}^3}$. \textbf{Step 1:} Constructing global solutions to $\text{NLS}_{{\mathbb{R}}^3}$. Let $\theta:=\frac 1{100}$. The construction of the solutions to $\text{NLS}_{{\mathbb{R}}^3}$ depends on the behaviour of $t_n$. 
If $t_n\equiv0$, let $w_n$ and $w_\infty$ be solutions to $\text{NLS}_{{\mathbb{R}}^3}$ with initial data $w_n(0)=\phi_{\le \lambda_n^{\theta}}$ and $w_\infty(0)=\phi$. If instead $t_n\to \pm\infty$, let $w_n$ be the solution to $\text{NLS}_{{\mathbb{R}}^3}$ such that \begin{align*} \|w_n(t)-e^{it\Delta}\phi_{\le \lambda_n^{\theta}}\|_{\dot H^1({\mathbb{R}}^3)}\to 0 \qtq{as} t\to \pm\infty. \end{align*} Similarly, we define $w_\infty$ as the solution to $\text{NLS}_{{\mathbb{R}}^3}$ such that \begin{align}\label{n24} \|w_\infty(t)-e^{it\Delta}\phi\|_{\dot H^1({\mathbb{R}}^3)}\to 0 \qtq{as} t\to \pm\infty. \end{align} By \cite{CKSTT:gwp}, in all cases $w_n$ and $w_\infty$ are global solutions and satisfy \begin{align}\label{258} \|w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}+\|w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}\lesssim 1, \end{align} with the implicit constant depending only on $\|\phi\|_{\dot H^1}$. Moreover, by the perturbation theory described in that paper, \begin{align}\label{258'} \lim_{n\to \infty}\|w_n-w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}=0. \end{align} By Bernstein's inequality, \begin{align*} \|\phi_{\le \lambda_n^{\theta}}\|_{\dot H^s({\mathbb{R}}^3)}\lesssim \lambda_n^{\theta(s-1)} \qtq{for any} s\ge 1, \end{align*} and so the persistence of regularity result Lemma~\ref{lm:persistencer3} gives \begin{align}\label{persist2} \||\nabla|^s w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}\lesssim \lambda_n^{\theta s} \qtq{for any} s\ge 0, \end{align} with the implicit constant depending solely on $\|\phi\|_{\dot H^1}$. Combining this with the Gagliardo--Nirenberg inequality \begin{align*} \|f\|_{L^\infty_x}\lesssim \|\nabla f\|_{L^2_x}^{\frac 12}\|\Delta f\|_{L^2_x}^{\frac12}, \end{align*} we obtain \begin{align}\label{259} \||\nabla|^s w_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)}\lesssim \lambda_n^{\theta(s+\frac12)}, \end{align} for all $s\geq 0$. 
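In detail, applying the Gagliardo--Nirenberg inequality to $|\nabla|^s w_n(t)$ and invoking \eqref{persist2} at orders $s$ and $s+1$ gives \begin{align*} \||\nabla|^s w_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)} \lesssim \|\nabla |\nabla|^s w_n\|_{L_t^\infty L_x^2}^{\frac12}\, \|\Delta |\nabla|^s w_n\|_{L_t^\infty L_x^2}^{\frac12} \lesssim \bigl(\lambda_n^{\theta s}\bigr)^{\frac12}\bigl(\lambda_n^{\theta(s+1)}\bigr)^{\frac12} =\lambda_n^{\theta(s+\frac12)}, \end{align*} where we have used $\|\Delta |\nabla|^s w_n\|_{L_x^2}=\|\nabla |\nabla|^{s+1} w_n\|_{L_x^2}$.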
Finally, using the equation, we get \begin{align}\label{260} \|\partial_t w_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)}\le \|\Delta w_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)}+\|w_n\|_{L_{t,x}^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)}^5 \lesssim\lambda_n^{\frac 52\theta}. \end{align} \textbf{Step 2:} Constructing the approximate solution to $\text{NLS}_\Omega$. As previously in this scenario, let $\Omega_n:=\lambda_n^{-1}(\Omega-\{x_n\})$. The most naive way to embed $w_n(t)$ into $\Omega_n$ is to choose $\tilde v_n(t) = \chi_n w_n(t)$; however, closer investigation reveals that this is \emph{not} an approximate solution to $\text{NLS}_\Omega$, unless one incorporates some high-frequency reflections off the obstacle, namely, \begin{align*} z_n(t):=i\int_0^t e^{i(t-s)\Delta_{\Omega_n}}(\Delta_{\Omega_n}\chi_n)w_n(s,-\lambda_n^{-1}x_n)\,ds. \end{align*} The source of these waves is nonresonant in spacetime: it varies slowly in time relative to the small spatial scale involved. This allows us to estimate these reflected waves; indeed, we have the following lemma: \begin{lem} For any $T>0$, we have \begin{align} \limsup_{n\to\infty}\|z_n\|_{\dot X^1([-T,T]\times\Omega_n)}&=0\label{209}\\ \|(-\Delta_{\Omega_n})^{\frac s2}z_n\|_{L_t^\infty L_x^2([-T,T]\times\Omega_n)}&\lesssim \lambda_n^{s-\frac 32+\frac52\theta}(T+\lambda_n^{-2\theta}) \qtq{for all} 0\le s<\tfrac 32.\label{209'} \end{align} \end{lem} \begin{proof} Throughout the proof, all spacetime norms will be over $[-T,T]\times\Omega_n$. We write \begin{align*} z_n(t)&=-\int_0^t [e^{it\Delta_{\Omega_n}}\partial_se^{-is\Delta_{\Omega_n}}\chi_n]w_n(s,-\lambda_n^{-1}x_n) \,ds\\ &=-\chi_nw_n(t,-\lambda_n^{-1}x_n)+e^{it\Delta_{\Omega_n}}[\chi_nw_n(0,-\lambda_n^{-1}x_n)]\\ &\quad +\int_0^t [e^{i(t-s)\Delta_{\Omega_n}}\chi_n]\partial_sw_n(s,-\lambda_n^{-1}x_n)\,ds. \end{align*} We first estimate the $L_t^5\dot H^{1,\frac{30}{11}}_D$ norm of $z_n$.
Using the Strichartz inequality, the equivalence of Sobolev spaces Theorem~\ref{T:Sob equiv}, \eqref{259}, and \eqref{260}, we get \begin{align*} \|z_n\|_{L_t^5\dot H_D^{1,\frac{30}{11}}} &\lesssim \|\nabla\chi_n(x) w_n(t, -\lambda_n^{-1}x_n)\|_{L_t^5L_x^\frac{30}{11}} +\|\nabla \chi_n(x)w_n(0,-\lambda_n^{-1}x_n)\|_{L^2_x}\\ &\quad+\|\nabla \chi_n(x) \partial_t w_n(t,-\lambda_n^{-1}x_n)\|_{L_t^1L_x^2}\\ &\lesssim T^{\frac 15}\|\nabla \chi_n\|_{L^{\frac{30}{11}}_x}\|w_n\|_{L_{t,x}^\infty}+\|\nabla\chi_n\|_{L^2_x}\|w_n\|_{L_{t,x}^\infty} +T\|\nabla\chi_n\|_{L^2_x}\|\partial_t w_n\|_{L_{t,x}^\infty}\\ &\lesssim T^{\frac15}\lambda_n^{-\frac{1}{10}+\frac{\theta}2}+\lambda_n^{-\frac12+\frac{\theta}2}+T\lambda_n^{-\frac 12+\frac 52\theta}\to 0\qtq{as} n\to \infty. \end{align*} Similarly, using also Sobolev embedding we obtain \begin{align}\label{zn in L10} \|z_n\|_{L_{t,x}^{10}}&\lesssim\|(-\Delta_{\Omega_n})^{\frac 12}z_n\|_{L_t^{10}L_x^{\frac{30}{13}}}\notag\\ &\lesssim \|\nabla \chi_n(x) w_n(t,-\lambda_n^{-1}x_n)\|_{L_t^{10}L_x^{\frac{30}{13}}}+\|\nabla\chi_n(x)w_n(0,-\lambda_n^{-1}x_n)\|_{L^2_x}\notag\\ &\quad+\|\nabla\chi_n(x)\partial_tw_n(t,-\lambda_n^{-1}x_n)\|_{L_t^1L_x^2}\notag\\ &\lesssim T^{\frac 1{10}}\|\nabla\chi_n\|_{L_x^{\frac{30}{13}}}\|w_n\|_{L_{t,x}^\infty}+\|\nabla\chi_n\|_{L_x^2}\|w_n\|_{L_{t,x}^\infty} +T\|\nabla\chi_n\|_{L_x^2}\|\partial_t w_n\|_{L_{t,x}^\infty}\notag\\ &\lesssim T^{\frac1{10}}\lambda_n^{-\frac{3}{10}+\frac{\theta}2}+\lambda_n^{-\frac12+\frac{\theta}2}+T\lambda_n^{-\frac 12+\frac 52\theta} \to 0\qtq{as} n\to \infty. \end{align} This proves \eqref{209}. 
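The powers of $\lambda_n$ appearing above simply record the scaling of $\chi_n$: since $\chi_n(x)=\chi(\lambda_nx+x_n)$ and $\nabla\chi$ is bounded and supported in the bounded set where $d(x)\le \tfrac12\diam(\Omega^c)$, a change of variables gives \begin{align*} \|\nabla\chi_n\|_{L_x^p({\mathbb{R}}^3)}=\lambda_n^{1-\frac 3p}\|\nabla\chi\|_{L_x^p({\mathbb{R}}^3)} \qtq{for all} 1\le p\le \infty. \end{align*} Taking $p=\frac{30}{11}$, $p=2$, and $p=\frac{30}{13}$ yields the factors $\lambda_n^{-\frac1{10}}$, $\lambda_n^{-\frac12}$, and $\lambda_n^{-\frac3{10}}$, respectively.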
To establish \eqref{209'}, we argue as before and estimate \begin{align*} \|(-\Delta_{\Omega_n})^{\frac s2}z_n\|_{L_t^\infty L_x^2} &\lesssim \|(-\Delta)^{\frac s2}\chi_n w_n(t,-\lambda_n^{-1}x_n)\|_{L_t^{\infty}L_x^2}+\|(-\Delta)^{\frac s2}\chi_nw_n(0,-\lambda_n^{-1}x_n)\|_{L_x^2}\\ &\quad+\|(-\Delta)^{\frac s2}\chi_n\partial_tw_n(t,-\lambda_n^{-1}x_n)\|_{L_t^1L_x^2}\\ &\lesssim \|(-\Delta)^{\frac s2}\chi_n\|_{L_x^2}\|w_n\|_{L_{t,x}^\infty}+T\|(-\Delta)^{\frac s2}\chi_n\|_{L_x^2}\|\partial_t w_n\|_{L_{t,x}^\infty}\\ &\lesssim \lambda_n^{s-\frac 32+\frac{\theta}2}+T\lambda_n^{s-\frac 32+\frac 52\theta}\\ &\lesssim \lambda_n^{s-\frac 32+\frac 52\theta}(T+\lambda_n^{-2\theta}). \end{align*} This completes the proof of the lemma. \end{proof} We are now in a position to introduce the approximate solution \begin{align*} \tilde v_n(t,x):=\begin{cases} \lambda_n^{-\frac12}(\chi_nw_n+z_n)(\lambda_n^{-2} t, \lambda_n^{-1}(x-x_n)), &|t|\le\lambda_n^2 T,\\ e^{i(t-\lambda_n^2 T)\Delta_{\Omega}}\tilde v_n(\lambda_n^2 T,x), & t>\lambda_n^2 T, \\ e^{i(t+\lambda_n^2 T)\Delta_\Omega}\tilde v_n(-\lambda_n^2 T,x), & t<-\lambda_n^2 T, \end{cases} \end{align*} where $T>0$ is a parameter to be chosen later. Note that $\tilde v_n$ has finite scattering size. 
Indeed, using a change of variables, the Strichartz inequality, \eqref{258}, \eqref{209'}, and \eqref{zn in L10}, we get \begin{align}\label{tildevn2} \|\tilde v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)} &\lesssim \|\chi_nw_n +z_n\|_{L_{t,x}^{10}([-T,T]\times\Omega_n)}+\|(\chi_nw_n+z_n)(\pm T)\|_{\dot H^1_D(\Omega_n)}\notag\\ &\lesssim \|w_n\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{R}}^3)}+\|z_n\|_{L_{t,x}^{10}([-T,T]\times\Omega_n)}+\|\chi_n\|_{L_x^\infty}\|\nabla w_n\|_{L_t^\infty L_x^2({\mathbb{R}}\times{\mathbb{R}}^3)}\notag\\ &\quad+\|\nabla \chi_n\|_{L_x^3}\|w_n\|_{L_t^\infty L_x^6({\mathbb{R}}\times{\mathbb{R}}^3)}+\|(-\Delta_{\Omega_n})^{\frac12}z_n\|_{L_t^\infty L_x^2([-T,T]\times\Omega_n)}\notag\\ &\lesssim 1+ T^{\frac1{10}}\lambda_n^{-\frac{3}{10}+\frac{\theta}2}+\lambda_n^{-\frac12+\frac{\theta}2}+T\lambda_n^{-\frac 12+\frac 52\theta} . \end{align} \textbf{Step 3:} Asymptotic agreement of the initial data. In this step, we show (cf. the smallness hypothesis in Theorem~\ref{T:stability}) \begin{align}\label{match2} \lim_{T\to \infty}\limsup_{n\to\infty}\|(-\Delta_\Omega)^{\frac 12}e^{it\Delta_\Omega}[\tilde v_n(\lambda_n^2t_n)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}=0. \end{align} We first prove \eqref{match2} in the case when $t_n\equiv0$. Using the Strichartz inequality, a change of variables, and H\"older, we estimate \begin{align*} \|(-\Delta_\Omega)^{\frac 12} e^{it\Delta_\Omega}&[\tilde v_n(0)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\\ &\lesssim \|(-\Delta_\Omega)^{\frac 12}[\tilde v_n(0)-\phi_n]\|_{L_x^2}\\ &\lesssim \|\nabla[\chi_n\phi_{\le\lambda_n^{\theta}}-\chi_n\phi]\|_{L_x^2}\\ &\lesssim \|\nabla\chi_n\|_{L_x^3}\|\phi_{\le\lambda_n^{\theta}}-\phi\|_{L_x^6}+\|\chi_n\|_{L_x^\infty}\|\nabla[\phi_{\le\lambda_n^\theta}-\phi]\|_ {L_x^2}, \end{align*} which converges to zero as $n\to \infty$. 
It remains to prove \eqref{match2} in the case $t_n\to \infty$; the case $t_n\to -\infty$ can be treated similarly. As $T>0$ is fixed, for sufficiently large $n$ we have $t_n>T$ and so \begin{align*} \tilde v_n(\lambda_n^2t_n,x)&=e^{i(t_n-T)\lambda_n^2\Delta_\Omega}\tilde v_n(\lambda_n^2 T,x) =e^{i(t_n-T)\lambda_n^2\Delta_\Omega}\bigl[\lambda_n^{-\frac12}\bigl(\chi_nw_n+z_n\bigr)\bigl(T,\tfrac{x-x_n}{\lambda_n}\bigr)\bigr]. \end{align*} Thus by a change of variables and the Strichartz inequality, \begin{align*} \|(-\Delta_\Omega)^{\frac 12}& e^{it\Delta_\Omega}[\tilde v_n(\lambda_n^2 t_n)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\\ &=\|(-\Delta_{\Omega_n})^{\frac 12}\{e^{i(t-T)\Delta_{\Omega_n}}(\chi_nw_n+z_n)(T)-e^{it\Delta_{\Omega_n}}(\chi_n\phi)\}\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\ &\lesssim \|(-\Delta_{\Omega_n})^{\frac 12}z_n(T)\|_{L_x^2}+\|(-\Delta_{\Omega_n})^{\frac 12}[\chi_n(w_n-w_\infty)(T)]\|_{L_x^2}\\ &\quad+\|(-\Delta_{\Omega_n})^{\frac 12}e^{it\Delta_{\Omega_n}}[e^{-iT\Delta_{\Omega_n}}(\chi_nw_\infty(T))-\chi_n\phi]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}. \end{align*} Using \eqref{258'} and \eqref{209'}, we see that \begin{align*} &\|(-\Delta_{\Omega_n})^{\frac 12}z_n(T)\|_{L_x^2}+\|(-\Delta_{\Omega_n})^{\frac 12}[\chi_n(w_n-w_\infty)(T)]\|_{L_x^2}\\ &\lesssim \lambda_n^{-\frac 12+\frac 52\theta}(T+\lambda_n^{-2\theta})+\|\nabla\chi_n\|_{L_x^3}\|w_n-w_\infty\|_{L_t^\infty L_x^6}+\|\chi_n\|_{L_x^\infty}\|\nabla(w_n- w_\infty)\|_{L_t^\infty L_x^2}, \end{align*} which converges to zero as $n\to \infty$. Thus, to establish \eqref{match2} we are left to prove \begin{align}\label{n23} \lim_{T\to \infty}\limsup_{n\to\infty}\|(-\Delta_{\Omega_n})^{\frac12}e^{it\Delta_{\Omega_n}}[e^{-iT\Delta_{\Omega_n}}(\chi_nw_\infty(T))-\chi_n\phi]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}=0.
\end{align} Using the triangle and Strichartz inequalities, we obtain \begin{align*} \|(-\Delta_{\Omega_n})^{\frac12}&e^{it\Delta_{\Omega_n}}[e^{-iT\Delta_{\Omega_n}}(\chi_nw_\infty(T))-\chi_n\phi]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\ &\lesssim \|(-\Delta_{\Omega_n})^{\frac 12}(\chi_n w_\infty(T))-\chi_n(-\Delta)^{\frac 12}w_\infty(T)\|_{L_x^2}\\ &\quad+\|[e^{i(t-T)\Delta_{\Omega_n}}-e^{i(t-T)\Delta}][\chi_n(-\Delta)^{\frac12}w_{\infty}(T)]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\ &\quad+\|e^{-iT\Delta}[\chi_n(-\Delta)^{\frac12}w_\infty(T)]-\chi_n(-\Delta)^{\frac12}\phi\|_{L_x^2}\\ &\quad+\|[e^{it\Delta_{\Omega_n}}-e^{it\Delta}][\chi_n(-\Delta)^{\frac12}\phi]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\ &\quad+ \|(-\Delta_{\Omega_n})^{\frac12}(\chi_n\phi)-\chi_n(-\Delta)^{\frac12}\phi\|_{L_x^2}. \end{align*} The fact that the second and fourth terms above converge to zero as $n\to \infty$ follows from Theorem~\ref{T:LF} and the density in $L_x^2$ of $C_c^{\infty}$ functions supported in ${\mathbb{R}}^3$ minus a point. To see that the first and fifth terms above converge to zero, we note that for any $f\in \dot H^1({\mathbb{R}}^3)$, \begin{align*} \|(-\Delta_{\Omega_n})^{\frac 12}(\chi_n f)-\chi_n(-\Delta)^{\frac12}f\|_{L_x^2} &\le \|(1-\chi_n)(-\Delta)^{\frac 12}f\|_{L_x^2}+\|(-\Delta)^{\frac12}[(1-\chi_n)f]\|_{L_x^2}\\ &\quad+\|(-\Delta_{\Omega_n})^{\frac 12}(\chi_n f)-(-\Delta)^{\frac12}(\chi_n f)\|_{L_x^2}\to 0 \end{align*} as $n\to \infty$ by Lemma~\ref{L:n3} and the monotone convergence theorem. 
Finally, for the third term we use \eqref{n24} and the monotone convergence theorem to obtain \begin{align*} \|e^{-iT\Delta}[\chi_n(-\Delta)^{\frac12}&w_\infty(T)]-\chi_n(-\Delta)^{\frac 12}\phi\|_{L_x^2}\\ &\lesssim \|(1-\chi_n)(-\Delta)^{\frac12}w_\infty(T)\|_{L^2_x}+\|(1-\chi_n)(-\Delta)^{\frac 12}\phi\|_{L_x^2}\\ &\quad+\|e^{-iT\Delta}(-\Delta)^{\frac12}w_\infty(T)-(-\Delta)^{\frac 12}\phi\|_{L_x^2} \to 0, \end{align*} by first taking $n\to \infty$ and then $T\to \infty$. This completes the proof of \eqref{n23} and so the proof of \eqref{match2}. \textbf{Step 4:} Proving that $\tilde v_n$ is an approximate solution to $\text{NLS}_{\Omega}$ in the sense that \begin{align*} i\partial_t \tilde v_n+\Delta_\Omega\tilde v_n=|\tilde v_n|^4\tilde v_n+e_n \end{align*} with \begin{align}\label{error2} \lim_{T\to\infty}\limsup_{n\to\infty}\|e_n\|_{\dot N^1({\mathbb{R}}\times\Omega)}=0. \end{align} We start by verifying \eqref{error2} on the large time interval $t>\lambda_n^2 T$; symmetric arguments can be used to treat $t<-\lambda_n^2 T$. By the definition of $\tilde v_n$, in this regime we have $e_n=-|\tilde v_n|^4\tilde v_n$. Using the equivalence of Sobolev spaces, Strichartz, and \eqref{209'}, we estimate \begin{align*} \|e_n\|_{\dot N^1(\{t>\lambda_n^2 T\}\times\Omega)} &\lesssim \|(-\Delta_\Omega)^{\frac12}(|\tilde v_n|^4\tilde v_n)\|_{L_t^{\frac53} L_x^{\frac{30}{23}}(\{t>\lambda_n^2 T\}\times\Omega)}\\ &\lesssim \|(-\Delta_\Omega)^{\frac12}\tilde v_n\|_{L_t^5 L_x^{\frac{30}{11}}(\{t>\lambda_n^2 T\}\times\Omega)} \|\tilde v_n\|^4_{L_{t,x}^{10}(\{t>\lambda_n^2 T\}\times\Omega)}\\ &\lesssim \|(-\Delta_{\Omega_n})^{\frac12}[\chi_n w_n (T)+z_n(T)] \|_{L_x^2}\|\tilde v_n\|^4_{L_{t,x}^{10}(\{t>\lambda_n^2 T\}\times\Omega)}\\ &\lesssim \bigl[ 1+ \lambda_n^{-\frac12+\frac52\theta}(T+\lambda_n^{-2\theta}) \bigr] \|\tilde v_n\|^4_{L_{t,x}^{10}(\{t>\lambda_n^2 T\}\times\Omega)}. 
\end{align*} Thus, to establish \eqref{error2} it suffices to show \begin{align}\label{largetime2} \lim_{T\to\infty}\limsup_{n\to\infty}\|e^{i(t-\lambda_n^2 T)\Delta_{\Omega}}\tilde v_n(\lambda_n^2T)\|_{L_{t,x}^{10}(\{t>\lambda_n^2 T\}\times\Omega)}=0, \end{align} to which we now turn. As a consequence of the spacetime bounds \eqref{258}, the global solution $w_\infty$ scatters. Let $w_+$ denote the forward asymptotic state, that is, \begin{align}\label{as2} \|e^{-it\Delta}w_\infty(t)- w_+\|_{\dot H^1({\mathbb{R}}^3)}\to 0 \qtq{as} t\to \infty. \end{align} (Note that in the case when $t_n\to \infty$, from the definition of $w_\infty$ we have $w_+=\phi$.) Using a change of variables, \eqref{209'}, the Strichartz and H\"older inequalities, and Sobolev embedding, we obtain \begin{align*} \|&e^{i(t-\lambda_n^2 T)\Delta_\Omega}\tilde v_n(\lambda_n^2T)\|_{L_{t,x}^{10}((\lambda_n^2T,\infty)\times\Omega)}\\ &=\|e^{it\Delta_{\Omega_n}}(\chi_nw_n(T)+z_n(T))\|_{L_{t,x}^{10}([0,\infty)\times\Omega_n)}\\ &\lesssim \|(-\Delta_{\Omega_n})^{\frac12}z_n(T)\|_{L_x^2}+\|(-\Delta_{\Omega_n})^{\frac12}[\chi_n(w_n(T)-w_\infty(T))]\|_{L_x^2}\\ &\quad+\|(-\Delta_{\Omega_n})^{\frac12}[\chi_n(w_{\infty}(T)-e^{iT\Delta}w_+)]\|_{L_x^2}+\|e^{it\Delta_{\Omega_n}}[\chi_ne^{iT\Delta}w_+]\|_{L_{t,x}^{10}([0,\infty)\times\Omega_n)}\\ &\lesssim \lambda_n^{-\frac12+\frac 52\theta}(T+\lambda_n^{-2\theta})+\|w_n(T)-w_\infty(T)\|_{\dot H^1_x}+\|w_\infty(T)-e^{iT\Delta}w_+\|_{\dot H^1_x}\\ &\quad+\|[e^{it\Delta_{\Omega_n}}-e^{it\Delta}][\chi_ne^{iT\Delta}w_+]\|_{L_{t,x}^{10}([0,\infty)\times{\mathbb{R}}^3)} +\|\nabla [(1-\chi_n)e^{iT\Delta}w_+]\|_{L_x^2}\\ &\quad+\|e^{it\Delta}w_+\|_{L_{t,x}^{10}((T,\infty)\times{\mathbb{R}}^3)}, \end{align*} which converges to zero by first letting $n\to \infty$ and then $T\to \infty$ by \eqref{258'}, \eqref{as2}, Corollary~\ref{C:LF}, and the monotone convergence theorem. We are left to prove \eqref{error2} on the middle time interval $|t|\leq \lambda_n^2T$. 
For these values of time, we compute \begin{align*} e_n(t,x)&=[(i\partial_t+\Delta_\Omega)\tilde v_n-|\tilde v_n|^4\tilde v_n](t,x)\\ &=-\lambda_n^{-\frac52}[\Delta\chi_n](\lambda_n^{-1}(x-x_n))w_n(\lambda_n^{-2}t,-\lambda_n^{-1}x_n)\\ &\quad+\lambda_n^{-\frac 52}[\Delta\chi_n w_n](\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))\\ &\quad+2\lambda_n^{-\frac 52}(\nabla\chi_n\cdot\nabla w_n)(\lambda_n^{-2}t, \lambda_n^{-1}(x-x_n))\\ &\quad+\lambda_n^{-\frac52}[\chi_n|w_n|^4w_n-|\chi_nw_n+z_n|^4(\chi_nw_n+z_n)](\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n)). \end{align*} Thus, using a change of variables and the equivalence of Sobolev norms (Theorem~\ref{T:Sob equiv}), we estimate \begin{align} \|e_n\|_{\dot N^1(\{|t|\leq\lambda_n^2 T\}\times\Omega)} &\lesssim \|(-\Delta_\Omega)^{\frac12} e_n\|_{L_{t,x}^{\frac{10}7}(\{|t|\le\lambda_n^2T\}\times\Omega)}\notag\\ &\lesssim\|\nabla[\Delta\chi_n(w_n(t,x)-w_n(t,-\lambda_n^{-1}x_n))]\|_{L_{t,x}^{\frac{10}7}([-T,T]\times\Omega_n)}\label{51}\\ &\quad+\|\nabla[\nabla\chi_n\cdot\nabla w_n]\|_{L_{t,x}^{\frac{10}7}([-T,T]\times\Omega_n)}\label{52}\\ &\quad+\|\nabla[\chi_n|w_n|^4 w_n-|\chi_nw_n+z_n|^4(\chi_nw_n+z_n)]\|_{L_{t,x}^{\frac{10}7}([-T,T]\times\Omega_n)}.\label{53} \end{align} Using H\"older, the fundamental theorem of calculus, and \eqref{259}, we estimate \begin{align*} \eqref{51}&\lesssim T^{\frac 7{10}}\|\Delta\chi_n\|_{L_x^{\frac{10}7}}\|\nabla w_n\|_{L_{t,x}^\infty}\\ &\quad+T^{\frac7{10}}\|\nabla\Delta\chi_n\|_{L_x^\frac{10}7}\|w_n(t,x)-w_n(t,-\lambda_n^{-1}x_n)\|_{L_{t,x}^\infty({\mathbb{R}}\times\supp\Delta\chi_n)}\\ &\lesssim T^{\frac 7{10}}\lambda_n^{-\frac 1{10}+ \frac32\theta}+T^{\frac7{10}}\lambda_n^{\frac 9{10}}\lambda_n^{-1}\|\nabla w_n\|_{L_{t,x}^\infty}\\ &\lesssim T^{\frac 7{10}}\lambda_n^{-\frac 1{10}+\frac 32\theta} \to 0 \qtq{as} n\to \infty. \end{align*} Notice that the cancellations induced by the introduction of $z_n$ were essential in order to control this term.
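To make the gain from this cancellation explicit (a brief elaboration; the geometric facts about $\supp\Delta\chi_n$ used below are assumptions consistent with the bounds just invoked, namely that it has diameter $O(\lambda_n^{-1})$ and lies at distance $O(\lambda_n^{-1})$ from the point $-\lambda_n^{-1}x_n$): the fundamental theorem of calculus gives, for $x\in\supp\Delta\chi_n$,
\begin{align*}
|w_n(t,x)-w_n(t,-\lambda_n^{-1}x_n)| \le |x+\lambda_n^{-1}x_n|\,\|\nabla w_n(t)\|_{L_x^\infty} \lesssim \lambda_n^{-1}\|\nabla w_n\|_{L_{t,x}^\infty},
\end{align*}
which supplies the factor $\lambda_n^{-1}$ appearing in the estimate above. Without the subtraction of $w_n(t,-\lambda_n^{-1}x_n)$, the corresponding term would instead carry $\|w_n\|_{L_{t,x}^\infty}$, with no decay in $n$.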
Next, \begin{align*} \eqref{52} &\le T^{\frac7{10}}\bigl[\|\Delta\chi_n\|_{L_x^{\frac{10}7}}\|\nabla w_n\|_{L_{t,x}^\infty}+\|\nabla \chi_n\|_{L_x^{\frac{10}7}}\|\Delta w_n\|_{L_{t,x}^\infty}\bigr]\\ &\le T^{\frac 7{10}}[\lambda_n^{-\frac 1{10}+\frac 32\theta}+\lambda_n^{-\frac{11}{10}+\frac 52\theta}]\to 0 \qtq{as} n\to \infty. \end{align*} Finally, we turn our attention to \eqref{53}. A simple algebraic computation yields \begin{align*} \eqref{53}&\lesssim T^{\frac7{10}}\Bigl\{ \|\nabla[(\chi_n-\chi_n^5)w_n^5] \|_{L_t^\infty L_x^{\frac{10}7}} + \|z_n^4\nabla z_n\|_{L_t^\infty L_x^{\frac{10}7}}\\ &\quad+\sum_{k=1}^4\Bigl[ \|w_n^{k-1}z_n^{5-k}\nabla (\chi_n w_n) \|_{L_t^\infty L_x^{\frac{10}7}}+ \|w_n^k z_n^{4-k}\nabla z_n\|_{L_t^\infty L_x^{\frac{10}7}}\Bigr]\Bigr\}, \end{align*} where all spacetime norms are over $[-T,T]\times\Omega_n$. Using H\"older and \eqref{259}, we estimate \begin{align*} \|\nabla[(\chi_n-\chi_n^5)w_n^5] \|_{L_t^\infty L_x^{\frac{10}7}} &\lesssim \|\nabla \chi_n\|_{L_x^{\frac{10}7}}\|w_n\|_{L_{t,x}^\infty}^5 + \|\chi_n-\chi_n^5\|_{L_x^{\frac{10}7}}\|w_n\|_{L_{t,x}^\infty}^4\|\nabla w_n\|_{L_{t,x}^\infty} \\ &\lesssim \lambda_n^{-\frac{11}{10}+\frac 52\theta} + \lambda_n^{-\frac{21}{10}+\frac 72\theta}. \end{align*} Using also \eqref{209'}, Sobolev embedding, and Theorem~\ref{T:Sob equiv}, we obtain \begin{align*} \|z_n^4\nabla z_n\|_{L_t^{\infty}L_x^{\frac{10}7}} \lesssim\|\nabla z_n\|_{L_t^\infty L_x^2}\|z_n\|_{L_t^\infty L_x^{20}}^4 &\lesssim \|\nabla z_n\|_{L_t^\infty L_x^2}\||\nabla |^{\frac{27}{20}} z_n\|_{L_t^\infty L_x^2}^4\\ &\lesssim \lambda_n^{-\frac{11}{10}+\frac{25}2\theta}(T+\lambda_n^{-2\theta})^5. 
\end{align*} Similarly, \begin{align*} \| & w_n^{k-1}z_n^{5-k}\nabla (\chi_n w_n) \|_{L_t^\infty L_x^{\frac{10}7}} \\ &\lesssim \|\nabla\chi_n\|_{L_x^3}\|w_n\|_{L_t^\infty L_x^{\frac{150}{11}}}^k \|z_n\|_{L_t^\infty L_x^{\frac{150}{11}}}^{5-k} + \|\nabla w_n\|_{L_t^\infty L_x^2}\|w_n\|_{L_t^\infty L_x^{20}}^{k-1}\|z_n\|_{L_t^\infty L_x^{20}}^{5-k} \\ &\lesssim \||\nabla|^{\frac{32}{25}}w_n\|_{L_t^\infty L_x^2}^k\||\nabla|^{\frac{32}{25}}z_n\|_{L_t^\infty L_x^2}^{5-k} + \||\nabla|^{\frac{27}{20}}w_n\|_{L_t^\infty L_x^2}^{k-1}\||\nabla|^{\frac{27}{20}}z_n\|_{L_t^\infty L_x^2}^{5-k} \\ &\lesssim \lambda_n^{\frac7{25}\theta k+(-\frac{11}{50}+\frac 52\theta)(5-k)}(T+\lambda_n^{-2\theta})^{5-k} + \lambda_n^{\frac 7{20}\theta(k-1)}\lambda_n^{(-\frac 3{20}+\frac 52\theta)(5-k)}(T+\lambda_n^{-2\theta})^{5-k} \end{align*} and \begin{align*} \|w_n^k z_n^{4-k}\nabla z_n\|_{L_t^\infty L_x^{\frac{10}7}} &\lesssim \|\nabla z_n\|_{L_t^\infty L_x^2}\|w_n\|_{L_t^\infty L_x^{20}}^k\|z_n\|_{L_t^\infty L_x^{20}}^{4-k}\\ &\lesssim \lambda_n^{-\frac 12+\frac 52\theta}\lambda_n^{\frac7{20}\theta k}\lambda_n^{(-\frac3{20}+\frac 52\theta)(4-k)}(T+\lambda_n^{-2\theta})^{5-k}. \end{align*} Putting everything together and recalling $\theta=\frac1{100}$, we derive \begin{align*} \eqref{53}\to 0 \qtq{as} n\to \infty. \end{align*} Therefore, \begin{align*} \lim_{T\to\infty}\limsup_{n\to\infty}\|e_n\|_{\dot N^1(\{|t|\leq \lambda_n^2T\}\times\Omega)}=0, \end{align*} which together with \eqref{largetime2} gives \eqref{error2}. \textbf{Step 5:} Constructing $v_n$ and approximation by $C_c^\infty$ functions. Using \eqref{tildevn2}, \eqref{match2}, and \eqref{error2}, and invoking the stability result Theorem~\ref{T:stability}, for $n$ (and $T$) sufficiently large we obtain a global solution $v_n$ to $\text{NLS}_\Omega$ with initial data $v_n(0)=\phi_n$ and \begin{align*} \|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lesssim 1.
\end{align*} Moreover, \begin{align}\label{vncase2} \lim_{T\to\infty}\limsup_{n\to\infty}\|v_n(t-\lambda_n^2 t_n)-\tilde v_n(t)\|_{\dot S^1({\mathbb{R}}\times\Omega)}=0. \end{align} To complete the proof of the theorem, it remains to prove the approximation result \eqref{dense2}, to which we now turn. From the density of $C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ in $\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)$, for any ${\varepsilon}>0$ there exists $\psi_{\varepsilon}\in C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ such that \begin{align}\label{approxwinfty2} \|w_\infty-\psi_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<\tfrac {\varepsilon} 3. \end{align} Using a change of variables, we estimate \begin{align*} \|v_n(t-\lambda_n^2 t_n, x+x_n)&-\lambda_n^{-\frac12}\psi_{\varepsilon}(\lambda_n^{-2}t, \lambda_n^{-1}x)\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}\\ &\le \|v_n(t-\lambda_n^2 t_n)-\tilde v_n(t)\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}+\|w_\infty-\psi_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}\\ &\quad+\|\tilde v_n(t,x)-\lambda_n^{-\frac12}w_\infty(\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}. \end{align*} In view of \eqref{vncase2} and \eqref{approxwinfty2}, proving \eqref{dense2} reduces to showing \begin{align}\label{remaincase2} \|\tilde v_n(t,x)-\lambda_n^{-\frac 12}w_\infty(\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}< \tfrac {\varepsilon} 3 \end{align} for sufficiently large $n$ and $T$. To prove \eqref{remaincase2} we discuss two different time regimes.
On the middle time interval $|t|\leq \lambda_n^2 T$, we have \begin{align*} &\|\tilde v_n(t,x)-\lambda_n^{-\frac 12}w_\infty(\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))\|_{\dot X^1(\{|t|\leq \lambda_n^2T\}\times{\mathbb{R}}^3)}\\ &\lesssim\|\chi_n w_n+z_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}\\ &\lesssim \|(1-\chi_n)w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}+\|\chi_n(w_n-w_\infty)\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}+\|z_n\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}, \end{align*} which converges to zero by \eqref{258}, \eqref{258'}, and \eqref{209}. We now consider $|t|> \lambda_n^2 T$; by symmetry, it suffices to control the contribution of positive times. Using the Strichartz inequality, we estimate \begin{align*} \|\tilde v_n(t,x)&-\lambda_n^{-\frac 12}w_\infty(\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))\|_{\dot X^1((\lambda_n^2T, \infty)\times{\mathbb{R}}^3)}\\ &=\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_nw_n(T)+z_n(T)]-w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\ &\lesssim \|z_n(T)\|_{\dot H^1_D(\Omega_n)}+\|\nabla[\chi_n(w_\infty-w_n)]\|_{L_x^2} +\|w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\ &\quad +\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_nw_\infty(T)]\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\ &=o(1) +\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_nw_\infty(T)]\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)} \qtq{as} n, T\to \infty \end{align*} by \eqref{209'}, \eqref{258'}, and the monotone convergence theorem. 
Using the triangle and Strichartz inequalities, we estimate the last term as follows: \begin{align*} \|&e^{i(t-T)\Delta_{\Omega_n}}[\chi_n w_\infty(T)]\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\ &\lesssim\|[e^{i(t-T)\Delta_{\Omega_n}}-e^{i(t-T)\Delta}][\chi_nw_\infty(T)]\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}+\|\nabla[(1-\chi_n)w_\infty(T)]\|_{L_x^2}\\ &\quad+\|\nabla[e^{-iT\Delta}w_\infty(T)-w_+]\|_{L_x^2} + \|e^{it\Delta}w_+\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}, \end{align*} which converges to zero by first letting $n\to \infty$ and then $T\to \infty$, in view of Theorem~\ref{T:LF}, \eqref{as2}, and the monotone convergence theorem. Putting everything together, we obtain \eqref{remaincase2} and so \eqref{dense2}. This completes the proof of Theorem~\ref{T:embed2}. \end{proof} Our next result concerns the scenario when the rescaled obstacles $\Omega_n^c$ are retreating to infinity (cf. Case 3 in Theorem~\ref{T:LPD}). \begin{thm}[Embedding nonlinear profiles for retreating obstacles]\label{T:embed3} Let $\{t_n\}\subset {\mathbb{R}}$ be such that $t_n\equiv0$ or $t_n\to \pm\infty$. Let $\{x_n\}\subset \Omega$ and $\{\lambda_n\}\subset 2^{{\mathbb{Z}}}$ be such that $\frac{d(x_n)}{\lambda_n}\to \infty$. Let $\phi\in\dot H^1({\mathbb{R}}^3)$ and define \begin{align*} \phi_n(x)=\lambda_n^{-\frac 12}e^{i\lambda_n^2 t_n\Delta_\Omega}\bigl[(\chi_n\phi)\bigl(\tfrac{x-x_n}{\lambda_n}\bigr)\bigr], \end{align*} where $\chi_n(x)=1-\Theta(\lambda_n|x|/d(x_n))$. Then for $n$ sufficiently large there exists a global solution $v_n$ to $\text{NLS}_\Omega$ with initial data $v_n(0)=\phi_n$ which satisfies \begin{align*} \|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lesssim 1, \end{align*} with the implicit constant depending only on $\|\phi\|_{\dot H^1}$.
Furthermore, for every ${\varepsilon}>0$ there exist $N_{\varepsilon}\in {\mathbb{N}}$ and $\psi_{\varepsilon}\in C_c^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)$ such that for all $n\ge N_{\varepsilon}$ we have \begin{align}\label{apcase3} \|v_n(t-\lambda_n^2 t_n, x+x_n)-\lambda_n^{-\frac12}\psi_{\varepsilon}(\lambda_n^{-2}t, \lambda_n^{-1} x)\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<{\varepsilon}. \end{align} \end{thm} \begin{proof} The proof of this theorem follows the general outline of the proof of Theorem~\ref{T:embed2}. It consists of the same five steps. Throughout the proof we will write $-\Delta=-\Delta_{{\mathbb{R}}^3}$. \textbf{Step 1:} Constructing global solutions to $\text{NLS}_{{\mathbb{R}}^3}$. Let $\theta:=\frac 1{100}$. As in the proof of Theorem~\ref{T:embed2}, the construction of the solutions to $\text{NLS}_{{\mathbb{R}}^3}$ depends on the behaviour of $t_n$. If $t_n\equiv0$, we let $w_n$ and $w_\infty$ be solutions to $\text{NLS}_{{\mathbb{R}}^3}$ with initial data $w_n(0)=\phi_{\le(d(x_n)/\lambda_n)^{\theta}}$ and $w_\infty(0)=\phi$. If $t_n\to \pm\infty$, we let $w_n$ and $w_\infty$ be solutions to $\text{NLS}_{{\mathbb{R}}^3}$ satisfying \begin{align*} \|w_n(t)-e^{it\Delta}\phi_{\le (d(x_n)/\lambda_n)^{\theta}}\|_{\dot H^1({\mathbb{R}}^3)}\to0 \qtq{and} \|w_\infty(t)-e^{it\Delta}\phi\|_{\dot H^1({\mathbb{R}}^3)}\to 0 \end{align*} as $t\to \pm \infty$. In all cases, \cite{CKSTT:gwp} implies that $w_n$ and $w_\infty$ are global solutions obeying global spacetime bounds.
Moreover, arguing as in the proof of Theorem~\ref{T:embed2} and invoking perturbation theory and the persistence of regularity result Lemma \ref{lm:persistencer3}, we see that $w_n$ and $w_\infty$ satisfy the following: \begin{equation}\label{cond3} \left\{ \quad \begin{aligned} &\|w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}+\|w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}\lesssim 1,\\ &\lim_{n\to \infty}\|w_n-w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}= 0,\\ &\||\nabla|^s w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)}\lesssim \bigl(\tfrac{d(x_n)}{\lambda_n}\bigr)^{s\theta} \qtq{for all} s\ge 0. \end{aligned} \right. \end{equation} \textbf{Step 2:} Constructing the approximate solution to $\text{NLS}_{\Omega}$. Fix $T>0$ to be chosen later. We define \begin{align*} \tilde v_n(t,x):=\begin{cases} \lambda_n^{-\frac12}[\chi_nw_n](\lambda_n^{-2}t, \lambda_n^{-1}(x-x_n)), & |t|\le \lambda_n^2 T, \\ e^{i(t-\lambda_n^2 T)\Delta_\Omega}\tilde v_n(\lambda_n^2 T,x), & t>\lambda_n^2 T, \\ e^{i(t+\lambda_n^2 T)\Delta_\Omega}\tilde v_n(-\lambda_n^2 T,x), & t<-\lambda_n^2 T. \end{cases} \end{align*} Note that $\tilde v_n$ has finite scattering size; indeed, using a change of variables, the Strichartz inequality, H\"older, Sobolev embedding, and \eqref{cond3}, we get \begin{align}\label{tildevn3} \|\tilde v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)} &\lesssim \|\chi_n w_n\|_{L_{t,x}^{10}([-T,T]\times\Omega_n)}+\|\chi_n w_n(\pm T)\|_{\dot H^1_D(\Omega_n)} \notag\\ &\lesssim \|w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{R}}^3)} + \|\nabla\chi_n\|_{L_x^3} \|w_n(\pm T)\|_{L_x^6} + \|\chi_n\|_{L_x^\infty} \|\nabla w_n(\pm T)\|_{L_x^2}\notag\\ &\lesssim 1, \end{align} where $\Omega_n:=\lambda_n^{-1}(\Omega-\{x_n\})$. 
\textbf{Step 3:} Asymptotic agreement of the initial data: \begin{align}\label{n0} \lim_{T\to\infty}\limsup_{n\to\infty}\|(-\Delta_\Omega)^{\frac12}e^{it\Delta_\Omega}[\tilde v_n(\lambda_n^2t_n)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}=0. \end{align} We first consider the case when $t_n\equiv0$. By Strichartz and a change of variables, \begin{align*} \|&(-\Delta_{\Omega})^{\frac 12}e^{it\Delta_\Omega}[\tilde v_n(0)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\\ &\lesssim \|(-\Delta_{\Omega_n})^{\frac 12}[\chi_n\phi_{>(d(x_n)/\lambda_n)^{\theta}}]\|_{L^2_x(\Omega_n)}\\ &\lesssim \|\nabla \chi_n\|_{L_x^3}\|\phi_{>(d(x_n)/\lambda_n)^{\theta}}\|_{L_x^6}+\|\chi_n\|_{L_x^\infty}\|\nabla \phi_{>(d(x_n)/\lambda_n)^{\theta}}\|_{L_x^2}\to0\qtq{as} n\to \infty. \end{align*} It remains to prove \eqref{n0} when $t_n\to \infty$; the case $t_n\to-\infty$ can be treated similarly. As $T$ is fixed, for sufficiently large $n$ we have $t_n>T$ and so \begin{align*} \tilde v_n(\lambda_n^2t_n,x)=e^{i(t_n-T)\lambda_n^2\Delta_{\Omega}}\bigl[\lambda_n^{-\frac 12}(\chi_nw_n(T))\bigl(\tfrac{x-x_n}{\lambda_n}\bigr)\bigr]. \end{align*} Thus, by a change of variables and the Strichartz inequality, \begin{align} \|(-\Delta_{\Omega})^{\frac 12}& e^{it\Delta_{\Omega}}[\tilde v_n(\lambda_n^2 t_n)-\phi_n]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\notag\\ &=\|(-\Delta_{\Omega_n})^{\frac 12}e^{it\Delta_{\Omega_n}}[e^{-iT\Delta_{\Omega_n}}(\chi_nw_n(T))-\chi_n\phi]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\notag\\ &\lesssim \|(-\Delta_{\Omega_n})^{\frac 12}[\chi_n(w_n(T)-w_\infty(T))]\|_{L^2(\Omega_n)}\label{n1}\\ &\quad +\|(-\Delta_{\Omega_n})^{\frac12}e^{it\Delta_{\Omega_n}}[e^{-iT\Delta_{\Omega_n}}(\chi_nw_\infty(T))-\chi_n\phi]\|_{L_t^{10}L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\label{n2}.
\end{align} Using \eqref{cond3} and Sobolev embedding, we see that \begin{align*} \eqref{n1}\lesssim \|\nabla\chi_n\|_{L_x^3}\|w_n(T)-w_\infty(T)\|_{L_x^6}+\|\chi_n\|_{L_x^\infty}\|\nabla[w_n(T)-w_\infty(T)]\|_{L_x^2} \to 0 \end{align*} as $n\to \infty$. The proof of $$ \lim_{T\to\infty}\limsup_{n\to\infty}\eqref{n2}=0 $$ is identical to the proof of \eqref{n23} in Theorem~\ref{T:embed2} and we omit it. This completes the proof of \eqref{n0}. \textbf{Step 4:} Proving that $\tilde v_n$ is an approximate solution to $\text{NLS}_{\Omega}$ in the sense that \begin{align}\label{n6} \lim_{T\to\infty}\limsup_{n\to\infty}\|(-\Delta_\Omega)^{\frac12}[(i\partial_t+\Delta_\Omega)\tilde v_n-|\tilde v_n|^4\tilde v_n]\|_{\dot N^0({\mathbb{R}}\times\Omega)}=0. \end{align} We first verify \eqref{n6} for $|t|>\lambda_n^2 T$. By symmetry, it suffices to consider positive times. Arguing as in the proof of Theorem~\ref{T:embed2}, we see that in this case \eqref{n6} reduces to \begin{align}\label{n7} \lim_{T\to \infty}\limsup_{n\to\infty}\|e^{i(t-\lambda_n^2T)\Delta_{\Omega}}\tilde v_n(\lambda_n^2T)\|_{L_{t,x}^{10}((\lambda_n^2T,\infty)\times\Omega)}=0. \end{align} Let $w_+$ denote the forward asymptotic state of $w_\infty$. 
Using a change of variables and the Strichartz inequality, we get \begin{align*} &\|e^{i(t-\lambda_n^2 T)\Delta_\Omega}\tilde v_n(\lambda_n^2T)\|_{L_{t,x}^{10}((\lambda_n^2T,\infty)\times\Omega)}\\ &=\|e^{it\Delta_{\Omega_n}}[\chi_nw_n(T)]\|_{L_{t,x}^{10}((0,\infty)\times\Omega_n)}\\ &\lesssim \|e^{it\Delta_{\Omega_n}}[\chi_ne^{iT\Delta}w_+]\|_{L_{t,x}^{10}((0,\infty)\times\Omega_n)}+\|\chi_n[w_\infty(T)-e^{iT\Delta}w_+]\|_{\dot H^1({\mathbb{R}}^3)}\\ &\quad+\|\chi_n[w_\infty(T)-w_n(T)]\|_{\dot H^1({\mathbb{R}}^3)}\\ &\lesssim \|[e^{it\Delta_{\Omega_n}}-e^{it\Delta}][\chi_n e^{iT\Delta}w_+]\|_{L_{t,x}^{10}((0,\infty)\times{\mathbb{R}}^3)}+\|(1-\chi_n)e^{iT\Delta}w_+\|_{\dot H^1({\mathbb{R}}^3)}\\ &\quad +\|e^{it\Delta}w_+\|_{L_{t,x}^{10}((T,\infty)\times{\mathbb{R}}^3)}+\|w_\infty(T) -e^{iT\Delta}w_+\|_{\dot H^1({\mathbb{R}}^3)}+\|w_\infty(T)-w_n(T)\|_{\dot H^1({\mathbb{R}}^3)}, \end{align*} which converges to zero by letting $n\to \infty$ and then $T\to \infty$ in view of Corollary~\ref{C:LF} (and the density of $C_c^\infty({\mathbb{R}}^3)$ functions in $\dot H^1({\mathbb{R}}^3)$), \eqref{cond3}, the definition of $w_+$, and the monotone convergence theorem. Next we show \eqref{n6} on the middle time interval $|t|\le \lambda_n^2 T$. We compute \begin{align*} [(i\partial_t+\Delta_{\Omega})\tilde v_n-|\tilde v_n|^4\tilde v_n](t,x) &=\lambda_n^{-\frac 52}[(\chi_n-\chi_n^5)|w_n|^4w_n](\lambda_n^{-2}t,\lambda_n^{-1}(x-x_n))\\ &\quad+2\lambda_n^{-\frac 52}[\nabla\chi_n \cdot\nabla w_n](\lambda_n^{-2} t, \lambda_n^{-1}(x-x_n))\\ &\quad+\lambda_n^{-\frac 52}[\Delta\chi_nw_n](\lambda_n^{-2} t,\lambda_n^{-1}(x-x_n)). 
\end{align*} Thus, using a change of variables and the equivalence of Sobolev spaces (Theorem~\ref{T:Sob equiv}), we obtain \begin{align} \|(-\Delta_\Omega)^{\frac 12}&[(i\partial_t+\Delta_\Omega)\tilde v_n-|\tilde v_n|^4\tilde v_n]\|_{\dot N^0(\{|t|\le \lambda_n^2 T\}\times\Omega)}\notag\\ &\lesssim \|\nabla[(\chi_n-\chi_n^5)|w_n|^4 w_n]\|_{\dot N^0([-T,T]\times\Omega_n)}\label{n9}\\ &\quad+\|\nabla(\nabla\chi_n\cdot \nabla w_n)\|_{\dot N^0([-T,T]\times\Omega_n)}+\|\nabla (\Delta\chi_nw_n)\|_{\dot N^0([-T,T]\times\Omega_n)}.\label{n10} \end{align} Using H\"older, we estimate the contribution of \eqref{n9} as follows: \begin{align*} \eqref{n9}&\lesssim \|(\chi_n-\chi_n^5)|w_n|^4 \nabla w_n\|_{L_{t,x}^{\frac{10}7}}+\|\nabla \chi_n(1-5\chi_n^4)w_n^5\|_{L_t^{\frac 53} L_x^{\frac{30}{23}}}\\ &\lesssim\|\nabla w_n\|_{L_{t,x}^{\frac{10}3}}\Bigl[\|w_n-w_\infty\|_{L_{t,x}^{10}}^4+\|1_{|x|\sim \frac{d(x_n)}{\lambda_n}} w_\infty\|_{L_{t,x}^{10}}^4\Bigr]\\ &\quad+\|w_n\|_{L_t^5 L_x^{30}}\|\nabla \chi_n\|_{L_x^3}\Bigl[\|w_n-w_\infty\|_{L_{t,x}^{10}}^4+\|1_{|x|\sim\frac{d(x_n)}{\lambda_n}}w_\infty\|_{L_{t,x}^{10}}^4\Bigr]\to0, \end{align*} by the dominated convergence theorem and \eqref{cond3}. Similarly, \begin{align*} \eqref{n10}&\lesssim T \Bigl[\|\Delta\chi_n\|_{L_x^\infty} \|\nabla w_n\|_{L_t^\infty L_x^2} +\|\nabla \chi_n\|_{L_x^\infty}\|\Delta w_n\|_{L_t^\infty L_x^2} +\|\nabla\Delta\chi_n\|_{L_x^3}\|w_n\|_{L_t^\infty L_x^6}\Bigr]\\ &\lesssim T\Bigl[\bigl(\tfrac{d(x_n)}{\lambda_n}\bigr)^{-2}+\bigl(\tfrac{d(x_n)}{\lambda_n}\bigr)^{\theta-1}\Bigr] \to 0 \qtq{as} n\to \infty. \end{align*} This completes the proof of \eqref{n6}. \textbf{Step 5:} Constructing $v_n$ and approximation by $C_c^\infty$ functions.
Using \eqref{tildevn3}, \eqref{n0}, and \eqref{n6}, and invoking the stability result Theorem~\ref{T:stability}, for $n$ sufficiently large we obtain a global solution $v_n$ to $\text{NLS}_\Omega$ with initial data $v_n(0)=\phi_n$ which satisfies \begin{align*} \|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lesssim 1 \qtq{and} \lim_{T\to\infty}\limsup_{n\to \infty}\|v_n(t-\lambda_n^2t_n)-\tilde v_n (t)\|_{\dot S^1({\mathbb{R}}\times\Omega)}=0. \end{align*} It remains to prove the approximation result \eqref{apcase3}. From the density of $C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ in $\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)$, for any ${\varepsilon}>0$ we can find $\psi_{\varepsilon}\in C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ such that \begin{align*} \|w_\infty-\psi_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}< \tfrac {\varepsilon} 3. \end{align*} Thus, to prove \eqref{apcase3} it suffices to show \begin{align}\label{n11} \|\tilde v_n(t,x)-\lambda_n^{-\frac 12}w_\infty(\lambda_n^{-2}t, \lambda_n^{-1}(x-x_n))\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<\tfrac {\varepsilon} 3 \end{align} for $n, T$ sufficiently large. A change of variables gives \begin{align*} \text{LHS}\eqref{n11} &\le \|\chi_n w_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}+\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_nw_n(T)]-w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\ &\quad+\|e^{i(t+T)\Delta_{\Omega_n}}[\chi_nw_n(-T)]-w_\infty\|_{\dot X^1((-\infty,-T)\times{\mathbb{R}}^3)}. \end{align*} We estimate the contribution from each term separately. For the first term we use the monotone convergence theorem and \eqref{cond3} to see that \begin{align*} \|\chi_n w_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}&\lesssim \|(1-\chi_n)w_\infty\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}+\|w_n-w_\infty\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}\to 0, \end{align*} as $n\to \infty$. 
For the second term we use Strichartz to get \begin{align*} &\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_n w_n(T)]-w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\ &\lesssim \|w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}+\|\chi_n[w_\infty(T)-w_n(T)]\|_{\dot H^1({\mathbb{R}}^3)}\\ &\quad+\|e^{i(t-T)\Delta_{\Omega_n}}[\chi_n w_\infty(T)]\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\ &\lesssim \|w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}+\|w_\infty(T)-w_n(T)\|_{\dot H^1({\mathbb{R}}^3)}+\|(1-\chi_n)w_\infty(T)\|_{\dot H^1({\mathbb{R}}^3)}\\ &\quad+\|[e^{i(t-T)\Delta_{\Omega_n}}-e^{i(t-T)\Delta}][\chi_nw_\infty(T)]\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}+\|e^{it\Delta}w_+\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\ &\quad+\|w_+-e^{-iT\Delta}w_\infty(T)\|_{\dot H^1({\mathbb{R}}^3)}\to 0 \qtq{as} n\to \infty \qtq{and then} T\to \infty \end{align*} by Theorem~\ref{T:LF}, \eqref{cond3}, the definition of the asymptotic state $w_+$, and the monotone convergence theorem. The third term can be treated analogously to the second term. This completes the proof of \eqref{n11} and with it, the proof of Theorem~\ref{T:embed3}. \end{proof} Our final result in this section treats the case when the obstacle expands to fill a halfspace (cf. Case~4 in Theorem~\ref{T:LPD}). \begin{thm}[Embedding $\text{NLS}_{{\mathbb{H}}}$ into $\text{NLS}_{\Omega}$]\label{T:embed4} Let $\{t_n\}\subset {\mathbb{R}}$ be such that $t_n\equiv0$ or $t_n\to\pm\infty$. Let $\{\lambda_n\}\subset 2^{{\mathbb{Z}}}$ and $\{x_n\}\subset \Omega$ be such that \begin{align*} \lambda_n\to 0 \qtq{and} \tfrac{d(x_n)}{\lambda_n}\to d_\infty>0. \end{align*} Let $x_n^*\in \partial\Omega$ be such that $|x_n-x_n^*|=d(x_n)$ and let $R_n\in SO(3)$ be such that $R_n e_3=\frac{x_n-x_n^*}{|x_n-x_n^*|}$. Finally, let $\phi\in \dot H^1_D({\mathbb{H}})$ and define \begin{align*} \phi_n(x)=\lambda_n^{-\frac 12}e^{i\lambda_n^2t_n\Delta_\Omega}\bigl[\phi\bigl(\tfrac{R_n^{-1}(x-x_n^*)}{\lambda_n}\bigr)\bigr].
\end{align*} Then for $n$ sufficiently large there exists a global solution $v_n$ to $\text{NLS}_\Omega$ with initial data $v_n(0)=\phi_n$ which satisfies \begin{align*} \|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lesssim 1, \end{align*} with the implicit constant depending only on $\|\phi\|_{\dot H^1}$. Furthermore, for every ${\varepsilon}>0$ there exist $N_{\varepsilon}\in {\mathbb{N}}$ and $\psi_{\varepsilon}\in C_c^\infty({\mathbb{R}}\times{\mathbb{H}})$ such that for all $n\geq N_{\varepsilon}$ we have \begin{align}\label{ap4} \|v_n(t-\lambda_n^2 t_n, R_nx+x_n^*)-\lambda_n^{-\frac12}\psi_{\varepsilon}(\lambda_n^{-2}t, \lambda_n^{-1}x)\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<{\varepsilon}. \end{align} \end{thm} \begin{proof} Again, the proof follows the outline of the proofs of Theorems~\ref{T:embed2} and \ref{T:embed3}. \textbf{Step 1:} Constructing global solutions to $\text{NLS}_{{\mathbb{H}}}$. Let $\theta:=\frac 1{100}$. If $t_n\equiv0$, let $w_n$ and $w_\infty$ be solutions to $\text{NLS}_{{\mathbb{H}}}$ with initial data $w_n(0)=\phi_{\le \lambda_n^{-\theta}}$ and $w_\infty(0)=\phi$. If $t_n\to \pm\infty$, let $w_n$ and $w_\infty$ be solutions to $\text{NLS}_{{\mathbb{H}}}$ that satisfy \begin{align}\label{m12} \|w_n(t)-e^{it\Delta_{{\mathbb{H}}}}\phi_{\le \lambda_n^{-\theta}}\|_{\dot H^1_D({\mathbb{H}})}\to 0 \qtq{and} \|w_\infty(t)-e^{it\Delta_{{\mathbb{H}}}}\phi\|_{\dot H^1_D({\mathbb{H}})}\to 0, \end{align} as $t\to \pm \infty$. In all cases, \cite{CKSTT:gwp} implies that $w_n$ and $w_\infty$ are global solutions and obey \begin{align*} \|w_n\|_{\dot S^1({\mathbb{R}}\times{\mathbb{H}})}+\|w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{H}})}\lesssim 1, \end{align*} with the implicit constant depending only on $\|\phi\|_{\dot H^1_D({\mathbb{H}})}$. Indeed, we may interpret such solutions as solutions to $\text{NLS}_{{\mathbb{R}}^3}$ that are odd under reflection in $\partial{\mathbb{H}}$. 
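The reflection trick mentioned here can be recorded explicitly (a standard observation; the notation $w^{\mathrm{ext}}$ is introduced only for this remark, and we take ${\mathbb{H}}=\{x\in{\mathbb{R}}^3: x_3>0\}$, as suggested by the coordinates $x=(x^\perp,x_3)$ used below): given a solution $w$ to $\text{NLS}_{{\mathbb{H}}}$, its odd extension
\begin{align*}
w^{\mathrm{ext}}(t,x^\perp,x_3):=\begin{cases} w(t,x^\perp,x_3), & x_3> 0,\\ -w(t,x^\perp,-x_3), & x_3<0, \end{cases}
\end{align*}
solves $\text{NLS}_{{\mathbb{R}}^3}$, since the quintic nonlinearity $u\mapsto|u|^4u$ maps odd functions to odd functions; conversely, an odd solution of $\text{NLS}_{{\mathbb{R}}^3}$ vanishes on $\partial{\mathbb{H}}$ and so restricts to a solution of $\text{NLS}_{{\mathbb{H}}}$ with Dirichlet boundary conditions. This is how the results of \cite{CKSTT:gwp} transfer to the halfspace.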
Moreover, arguing as in the proof of Theorems~\ref{T:embed2} and \ref{T:embed3} and using the stability result from \cite{CKSTT:gwp} and the persistence of regularity result Lemma~\ref{lm:persistenceh}, we have \begin{align}\label{cond4} \begin{cases} \lim_{n\to \infty}\|w_n-w_\infty\|_{\dot S^1({\mathbb{R}}\times{\mathbb{H}})}=0,\\ \|(-\Delta_{\mathbb{H}})^{\frac k2}w_n\|_{L_t^\infty L_x^2({\mathbb{R}}\times{\mathbb{H}})}\lesssim\lambda_n^{-\theta(k-1)} \qtq{for} k=1,2,3. \end{cases} \end{align} \textbf{Step 2:} Constructing approximate solutions to $\text{NLS}_\Omega$. Let $\Omega_n:=\lambda_n^{-1}R_n^{-1}(\Omega-\{x_n^*\})$ and fix $T>0$ to be chosen later. On the middle time interval $|t|<\lambda_n^2 T$, we embed $w_n$ by using a boundary straightening diffeomorphism $\Psi_n$ of a neighborhood of zero in $\Omega_n$ of size $L_n:=\lambda_n^{-2\theta}$ into a corresponding neighborhood in ${\mathbb{H}}$. To this end, we define a smooth function $\psi_n$ on the set $|x^\perp|\le L_n$ so that $x^\perp\mapsto (x^\perp, -\psi_n(x^\perp))$ traces out $\partial\Omega_n$. Here and below we write $x\in {\mathbb{R}}^3$ as $x=(x^\perp, x_3)$. By our choice of $R_n$, $\partial \Omega_n$ has unit normal $e_3$ at zero. Moreover, $\partial\Omega_n$ has curvatures that are $O(\lambda_n)$. Thus, $\psi_n$ satisfies the following: \begin{align}\label{psin} \begin{cases} &\psi_n(0)=0, \quad \nabla\psi_n(0)=0, \quad |\nabla\psi_n(x^\perp)|\lesssim \lambda_n^{1-2\theta},\\ &|\partial^{\alpha}\psi_n(x^\perp)|\lesssim\lambda_n^{|\alpha|-1} \qtq{for all} |\alpha|\ge 2. \end{cases} \end{align} We now define the map $\Psi_n: \Omega_n\cap\{|x^\perp|\le L_n\}\to {\mathbb{H}}$ and a cutoff $\chi_n:{\mathbb{R}}^3\to[0,1]$ via \begin{align*} \Psi_n(x):=(x^{\perp}, x_3+\psi_n(x^\perp)) \qtq{and} \chi_n(x):=1-\Theta\bigl(\tfrac{x}{L_n}\bigr).
\end{align*} Note that on the domain of $\Psi_n$, which contains $\supp\chi_n$, we have \begin{align}\label{detpsin} |\det(\partial \Psi_n)|\sim 1 \qtq{and} |\partial\Psi_n|\lesssim 1. \end{align} We are now ready to define the approximate solution. Let $\tilde w_n:=\chi_nw_n$ and define \begin{align*} \tilde v_n(t,x):=\begin{cases} \lambda_n^{-\frac12}[\tilde w_n(\lambda_n^{-2}t)\circ\Psi_n](\lambda_n^{-1}R_n^{-1}(x-x_n^*)), &|t|\le \lambda_n^2 T, \\ e^{i(t-\lambda_n^2 T)\Delta_\Omega}\tilde v_n(\lambda_n^2 T,x), &t>\lambda_n^2 T,\\ e^{i(t+\lambda_n^2 T)\Delta_\Omega}\tilde v_n(-\lambda_n^2T,x), &t<-\lambda_n^2 T . \end{cases} \end{align*} We first prove that $\tilde v_n$ has finite scattering size. Indeed, by the Strichartz inequality, a change of variables, and \eqref{detpsin}, \begin{align}\label{tildevn4} \|\tilde v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)} &\lesssim \|\tilde w_n\circ\Psi_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega_n)}+\|\tilde w_n(\pm T)\circ\Psi_n\|_{\dot H^1_D(\Omega_n)}\notag\\ &\lesssim \|\tilde w_n\|_{L_{t,x}^{10}({\mathbb{R}}\times{\mathbb{H}})} + \|\tilde w_n(\pm T)\|_{\dot H^1_D({\mathbb{H}})}\lesssim 1. \end{align} \textbf{Step 3:} In this step we prove asymptotic agreement of the initial data, namely, \begin{align}\label{match4} \lim_{T\to\infty}\limsup_{n\to \infty}\|(-\Delta_\Omega)^{\frac12}e^{it\Delta_\Omega}[\tilde v_n(\lambda_n^2 t_n)-\phi_n]\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}=0. \end{align} We discuss two cases. If $t_n\equiv0$, then by Strichartz and a change of variables, \begin{align*} \| & (-\Delta_{\Omega})^{\frac 12} e^{it\Delta_\Omega}[\tilde v_n(0)-\phi_n]\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\\ &\lesssim \|(\chi_n\phi_{\le \lambda_n^{-\theta}})\circ\Psi_n-\phi\|_{\dot H^1_D(\Omega_n)}\\ &\lesssim \|\nabla[(\chi_n\phi_{>\lambda_n^{-\theta}})\circ\Psi_n]\|_{L^2_x}+\|\nabla[(\chi_n\phi)\circ\Psi_n-\chi_n\phi]\|_{L^2_x}+\|\nabla[(1-\chi_n)\phi]\|_{L^2_x}. 
\end{align*} Since $\lambda_n\to 0$, we have $\|\nabla \phi_{>\lambda_n^{-\theta}}\|_{L^2_x}\to 0$ as $n\to \infty$; thus, using \eqref{detpsin} we see that the first term converges to $0$. For the second term, we note that $\Psi_n(x)\to x$ in $C^1$; thus, approximating $\phi$ by $C_c^\infty({\mathbb{H}})$ functions we see that the second term converges to $0$. Finally, by the dominated convergence theorem and $L_n=\lambda_n^{-2\theta}\to \infty$, the last term converges to $0$. It remains to prove \eqref{match4} when $t_n\to +\infty$; the case when $t_n\to -\infty$ can be treated similarly. Note that as $T>0$ is fixed, for $n$ sufficiently large we have $t_n>T$ and so \begin{align*} \tilde v_n(\lambda_n^2t_n,x)&=e^{i(t_n-T)\lambda_n^2\Delta_\Omega}[\lambda_n^{-\frac12}(\tilde w_n(T)\circ\Psi_n)(\lambda_n^{-1}R_n^{-1}(x-x_n^*))]. \end{align*} Thus, a change of variables gives \begin{align} \|(-\Delta_\Omega)^{\frac12} &e^{it\Delta_\Omega}[\tilde v_n(\lambda_n^2 t_n)-\phi_n]\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega)}\notag\\ &\lesssim \|(-\Delta_{\Omega_n})^{\frac 12}[\tilde w_n(T)\circ\Psi_n-w_\infty(T)]\|_{L^2_x}\label{n13}\\ &\quad+\|(-\Delta_{\Omega_n})^{\frac 12}[e^{i(t-T)\Delta_{\Omega_n}}w_\infty(T)-e^{it\Delta_{\Omega_n}}\phi]\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}.\label{n12} \end{align} Using the triangle inequality, \begin{align*} \eqref{n13} &\lesssim\|(-\Delta_{\Omega_n})^{\frac12}[(\chi_nw_\infty(T))\circ\Psi_n-w_\infty(T)]\|_{L^2_x}\\ &\quad+\|(-\Delta_{\Omega_n})^{\frac 12}[(\chi_n(w_n(T)-w_\infty(T)))\circ\Psi_n]\|_{L^2_x}, \end{align*} which converges to zero as $n\to \infty$ by \eqref{cond4} and the fact that $\Psi_n(x)\to x$ in $C^1$.
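Incidentally, the bounds \eqref{detpsin} admit a direct verification, which we record here for completeness: as $\Psi_n$ is a vertical shear,
\begin{align*}
\partial\Psi_n(x)=\begin{pmatrix} \mathrm{Id}_{2\times 2} & 0\\ \nabla\psi_n(x^\perp)^{T} & 1\end{pmatrix},\qquad \det(\partial\Psi_n)\equiv 1,\qquad |\partial\Psi_n|\le 1+|\nabla\psi_n(x^\perp)|\lesssim 1,
\end{align*}
where the final bound uses \eqref{psin}.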
Using Strichartz, Lemma~\ref{L:n3}, Theorem~\ref{T:LF}, and \eqref{m12}, we see that \begin{align*} \eqref{n12} &\lesssim \|e^{i(t-T)\Delta_{\Omega_n}}(-\Delta_{{\mathbb{H}}})^{\frac12}w_\infty(T)-e^{it\Delta_{\Omega_n}}(-\Delta_{{\mathbb{H}}})^{\frac12}\phi\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\ &\quad +\|[(-\Delta_{\Omega_n})^{\frac 12}-(-\Delta_{{\mathbb{H}}})^{\frac12}]w_\infty(T)\|_{L^2_x}+\|[(-\Delta_{\Omega_n})^{\frac 12}-(-\Delta_{{\mathbb{H}}})^{\frac 12}]\phi\|_{L^2_x}\\ &\lesssim\|[e^{i(t-T)\Delta_{\Omega_n}}-e^{i(t-T)\Delta_{{\mathbb{H}}}}](-\Delta_{{\mathbb{H}}})^{\frac 12}w_\infty(T)\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\ &\quad+\|[e^{it\Delta_{\Omega_n}}-e^{it\Delta_{{\mathbb{H}}}}](-\Delta_{{\mathbb{H}}})^{\frac12}\phi\|_{L_t^{10} L_x^{\frac{30}{13}}({\mathbb{R}}\times\Omega_n)}\\ &\quad+\|e^{-iT\Delta_{{\mathbb{H}}}}w_\infty(T)-\phi\|_{\dot H^1_D({\mathbb{H}})}+o(1), \end{align*} and that this converges to zero by first taking $n\to \infty$ and then $T\to \infty$. \textbf{Step 4:} In this step we prove that $\tilde v_n$ is an approximate solution to $\text{NLS}_\Omega$ in the sense that \begin{align}\label{n14} \lim_{T\to\infty}\limsup_{n\to\infty}\|(-\Delta_\Omega)^{\frac12}[(i\partial_t+\Delta_\Omega)\tilde v_n-|\tilde v_n|^4\tilde v_n]\|_{\dot N^0({\mathbb{R}}\times\Omega)}=0. \end{align} We first control the contribution of $|t|\ge \lambda_n^2T$. As seen previously, this reduces to proving \begin{align}\label{n15} \lim_{T\to\infty}\limsup_{n\to\infty}\|e^{i(t-\lambda_n^2T)\Delta_{\Omega}}\tilde v_n(\lambda_n^2 T)\|_{L_{t,x}^{10}((\lambda_n^2 T,\infty)\times\Omega)}=0, \end{align} together with the analogous estimate in the opposite time direction, which can be treated similarly. Let $w_+$ denote the forward asymptotic state of $w_\infty$.
Using Strichartz and our earlier estimate on \eqref{n13}, we see that \begin{align*} \|&e^{i(t-\lambda_n^2 T)\Delta_{\Omega}}\tilde v_n(\lambda_n^2T)\|_{L_{t,x}^{10}((\lambda_n^2 T,\infty)\times\Omega)}\\ &=\|e^{i(t-T)\Delta_{\Omega_n}}[\tilde w_n(T)\circ \Psi_n]\|_{L_{t,x}^{10}((T,\infty)\times\Omega_n)}\\ &\lesssim \|e^{i(t-T)\Delta_{\Omega_n}}[e^{iT\Delta_{{\mathbb{H}}}}w_+]\|_{L_{t,x}^{10}((T,\infty)\times\Omega_n)}+\|w_\infty(T)-e^{iT\Delta_{{\mathbb{H}}}}w_+\|_{\dot H^1_D({\mathbb{H}})}\\ &\quad+\|\tilde w_n(T)\circ\Psi_n-w_\infty(T)\|_{\dot H^1_D(\Omega_n)}\\ &\lesssim \|[e^{i(t-T)\Delta_{\Omega_n}}-e^{i(t-T)\Delta_{{\mathbb{H}}}}][e^{iT\Delta_{{\mathbb{H}}}}w_+]\|_{L_{t,x}^{10}((0,\infty)\times\Omega_n)}\\ &\quad+\|e^{it\Delta_{\mathbb{H}}}w_+\|_{L_{t,x}^{10} ((T,\infty)\times{\mathbb{H}})}+o(1), \end{align*} and that this converges to zero, by Theorem~\ref{T:LF} and the monotone convergence theorem, upon first taking $n\to \infty$ and then $T\to \infty$. Thus \eqref{n15} is proved. Next we control the contribution of the middle time interval $\{|t|\le \lambda_n^2 T\}$ to \eqref{n14}. We compute \begin{align*} \Delta(\tilde w_n\circ \Psi_n)&=(\partial_k\tilde w_n\circ\Psi_n)\Delta\Psi_n^k+(\partial_{kl}\tilde w_n\circ\Psi_n)\partial_j\Psi_n^l\partial_j\Psi_n^k, \end{align*} where $\Psi_n^k$ denotes the $k$th component of $\Psi_n$ and repeated indices are summed. As $\Psi_n(x)=x+(0,\psi_n(x^{\perp}))$, we have \begin{align*} &\Delta\Psi_n^k=O(\partial^2\psi_n), \quad \partial_j\Psi_n^l=\delta_{jl}+O(\partial\psi_n), \\ &\partial_j\Psi_n^l\partial_j\Psi_n^k=\delta_{jl}\delta_{jk}+O(\partial\psi_n)+O((\partial\psi_n)^2), \end{align*} where we use $O$ to denote a collection of similar terms. For example, $O(\partial\psi_n)$ contains terms of the form $c_j\partial_{x_j}\psi_n$ for some constants $c_j\in {\mathbb{R}}$, which may depend on the indices $k$ and $l$ appearing on the left-hand side.
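Before expanding further, we record the scaling of the cutoff $\chi_n$, which is used repeatedly in the estimates below (a bookkeeping remark added here): since $\chi_n(x)=1-\Theta(x/L_n)$, every derivative costs a factor of $L_n^{-1}$ and all derivatives of $\chi_n$ are supported where $|x|\sim L_n$, so that
\begin{align*}
\|\partial^k\chi_n\|_{L_x^p({\mathbb{R}}^3)}\lesssim L_n^{-k}\,\bigl|\{|x|\sim L_n\}\bigr|^{\frac1p}\sim L_n^{\frac3p-k} \qtq{for} k\ge 1;
\end{align*}
in particular, $\|\nabla\chi_n\|_{L_x^3}\lesssim 1$, $\|\partial^2\chi_n\|_{L_x^3}\lesssim L_n^{-1}$, $\|\partial^3\chi_n\|_{L_x^3}\lesssim L_n^{-2}$, and $\|\partial^2\chi_n\|_{L_x^\infty}\lesssim L_n^{-2}$. With these recorded, we return to the chain rule expansion above.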
Therefore, \begin{align*} (\partial_k\tilde w_n\circ\Psi_n)\Delta\Psi_n^k&=O\bigl((\partial\tilde w_n\circ\Psi_n)(\partial^2\psi_n)\bigr),\\ (\partial_{kl}\tilde w_n\circ\Psi_n)\partial_j\Psi_n^l\partial_j\Psi_n^k &=\Delta\tilde w_n\circ\Psi_n+O\bigl(\bigl(\partial^2\tilde w_n\circ\Psi_n\bigr)\bigl(\partial\psi_n+(\partial\psi_n)^2\bigr)\bigr) \end{align*} and so \begin{align*} (i\partial_t+\Delta_{\Omega_n})&(\tilde w_n\circ \Psi_n)-(|\tilde w_n|^4\tilde w_n)\circ\Psi_n\\ &=[(i\partial_t+\Delta_{{\mathbb{H}}})\tilde w_n-|\tilde w_n|^4\tilde w_n]\circ \Psi_n \\ &\quad+O\bigl((\partial\tilde w_n\circ\Psi_n)(\partial^2\psi_n)\bigr)+O\bigl(\bigl(\partial^2\tilde w_n\circ\Psi_n\bigr)\bigl(\partial\psi_n+(\partial\psi_n)^2\bigr)\bigr). \end{align*} By a change of variables and \eqref{detpsin}, we get \begin{align} \|(-\Delta_\Omega)^{\frac 12}&[(i\partial_t+\Delta_\Omega)\tilde v_n-|\tilde v_n|^4\tilde v_n]\|_{L_t^1L_x^2((|t|\le \lambda_n^2T)\times\Omega)}\notag\\ &=\|(-\Delta_{\Omega_n})^{\frac12}[(i\partial_t+\Delta_{\Omega_n})(\tilde w_n\circ\Psi_n)-(|\tilde w_n|^4\tilde w_n)\circ \Psi_n]\|_{L_t^1L_x^2((|t|\le T)\times\Omega_n)}\notag\\ &\lesssim \|(-\Delta_{\Omega_n})^{\frac12}[((i\partial_t+\Delta_{{\mathbb{H}}})\tilde w_n-|\tilde w_n|^4\tilde w_n)\circ\Psi_n]\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\notag\\ &\quad+\|(-\Delta_{\Omega_n})^{\frac 12}[(\partial\tilde w_n\circ \Psi_n)\partial^2\psi_n]\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\notag\\ &\quad+\bigl\|(-\Delta_{\Omega_n})^{\frac 12}\bigl[(\partial^2\tilde w_n\circ\Psi_n)\bigl(\partial\psi_n+(\partial\psi_n)^2\bigr)\bigr]\bigr\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\notag\\ &\lesssim \|\nabla[(i\partial_t+\Delta_{{\mathbb{H}}})\tilde w_n -|\tilde w_n|^4\tilde w_n]\|_{L_t^1L_x^2([-T,T]\times{\mathbb{H}})}\label{n18}\\ &\quad+\|\nabla[(\partial \tilde w_n\circ\Psi_n)\partial^2\psi_n]\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\label{n16}\\ &\quad+\bigl\|\nabla\bigl[(\partial^2 \tilde w_n\circ 
\Psi_n)\bigl(\partial\psi_n+(\partial\psi_n)^2\bigr)\bigr]\bigr\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\label{n17}. \end{align} Using \eqref{cond4}, \eqref{psin}, and \eqref{detpsin}, we can control the last two terms as follows: \begin{align*} \eqref{n16} &\lesssim\|(\partial\tilde w_n\circ\Psi_n)\partial^3\psi_n\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}+\|(\partial^2\tilde w_n\circ\Psi_n)\partial\Psi_n\partial^2\psi_n\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\\ &\lesssim T\lambda_n^2\|\nabla \tilde w_n\|_{L_t^\infty L_x^2}+T\lambda_n\|\partial^2\tilde w_n\|_{L_t^\infty L_x^2}\\ &\lesssim T\lambda_n^2\bigl[\|\nabla \chi_n\|_{L^3_x}\|w_n\|_{L_t^\infty L^6_x}+\|\nabla w_n\|_{L_t^\infty L_x^2}\bigr]\\ &\quad+T\lambda_n\bigl[\|\partial^2 \chi_n\|_{L^3_x}\|w_n\|_{L_t^\infty L^6_x}+\|\nabla \chi_n\|_{L_x^\infty}\|\nabla w_n\|_{L_t^\infty L_x^2} +\|\partial^2w_n\|_{L_t^\infty L_x^2}\bigr]\\ &\lesssim T\lambda_n^2+T\lambda_n[L_n^{-1}+\lambda_n^{-\theta}]\to 0\qtq{as} n\to \infty \end{align*} and similarly, \begin{align*} \eqref{n17} &\lesssim \|(\partial^2 \tilde w_n\circ\Psi_n)(\partial^2\psi_n+\partial\psi_n\partial^2\psi_n)\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\\ &\quad+ \|(\partial^3\tilde w_n\circ\Psi_n)[\partial\Psi_n(\partial\psi_n+(\partial\psi_n)^2)]\|_{L_t^1L_x^2([-T,T]\times\Omega_n)}\\ &\lesssim T[\lambda_n+\lambda_n^{2-2\theta}] \|\partial^2\tilde w_n\|_{L_t^\infty L_x^2}+T[\lambda_n^{1-2\theta}+\lambda_n^{2-4\theta}]\|\partial^3\tilde w_n\|_{L_t^\infty L_x^2}\\ &\lesssim T\lambda_n[L_n^{-1}+\lambda_n^{-\theta}]+T\lambda_n^{1-2\theta}\bigl[\|\partial^3\chi_n\|_{L^3_x}\|w_n\|_{L_t^\infty L^6_x}+\|\partial^2\chi_n\|_{L_x^\infty}\|\nabla w_n\|_{L^2_x}\\ &\quad+\|\nabla\chi_n\|_{L_x^\infty}\|\partial^2w_n\|_{L_t^\infty L^2_x}+\|\partial^3 w_n\|_{L_t^\infty L^2_x}\bigr]\\ &\lesssim T\lambda_n[L_n^{-1}+\lambda_n^{-\theta}]+T\lambda_n^{1-2\theta}\bigl[L_n^{-2}+L_n^{-1}\lambda_n^{-\theta}+\lambda_n^{-2\theta}\bigr]\to 0\qtq{as} n\to \infty.
\end{align*} Finally, we consider \eqref{n18}. A direct computation gives \begin{align*} (i\partial_t+\Delta_{{\mathbb{H}}})\tilde w_n-|\tilde w_n|^4\tilde w_n=(\chi_n-\chi_n^5)|w_n|^4w_n+2\nabla\chi_n\cdot\nabla w_n+\Delta\chi_n w_n. \end{align*} We then bound each term as follows: \begin{align*} \|\nabla(\Delta \chi_n w_n)\|_{L_t^1L_x^2([-T,T]\times{\mathbb{H}})} &\lesssim T\bigl[ \|\partial^3\chi_n\|_{L^3_x}\|w_n\|_{L^\infty_t L^6_x}+\|\partial^2 \chi_n\|_{L^\infty_x} \|\nabla w_n\|_{L^\infty_t L^2_x} \bigr] \\ &\lesssim TL_n^{-2} \to 0 \qtq{as} n\to \infty\\ \|\nabla(\nabla\chi_n\cdot \nabla w_n)\|_{L_t^1L_x^2([-T,T]\times{\mathbb{H}})} &\lesssim T\bigl[ \|\partial^2 \chi_n\|_{L^\infty_x} \|\nabla w_n\|_{L^\infty_t L^2_x} + \|\nabla\chi_n\|_{L^\infty_x} \|\partial^2 w_n\|_{L^\infty_t L^2_x}\bigr]\\ &\lesssim T[L_n^{-2}+L_n^{-1}\lambda_n^{-\theta}] \to 0 \qtq{as} n\to \infty. \end{align*} Finally, for the first term, we have \begin{align*} \| & \nabla[(\chi_n-\chi_n^5)|w_n|^4w_n]\|_{\dot N^0([-T,T]\times{\mathbb{H}})}\\ &\lesssim \|(\chi_n-\chi_n^5) |w_n|^4\nabla w_n\|_{L_{t,x}^{\frac{10}7}([-T,T]\times{\mathbb{H}})}+\| |w_n|^5\nabla \chi_n\|_{L_t^{\frac53}L_x^{\frac{30}{23}}([-T,T]\times{\mathbb{H}})}\\ &\lesssim \|w_n 1_{|x|\sim L_n}\|_{L^{10}_{t,x}}^4 \|\nabla w_n\|_{L^{\frac{10}3}_{t,x}} + \|\nabla \chi_n\|_{L^3_x} \|w_n 1_{|x|\sim L_n}\|_{L^{10}_{t,x}}^4 \|\nabla w_n\|_{L^5_t L^\frac{30}{11}_x}\\ &\lesssim \|1_{|x|\sim L_n}w_\infty \|_{L^{10}_{t,x}}^4+\|w_\infty-w_n\|_{L^{10}_{t,x}}^4 \to 0 \qtq{as} n\to \infty. \end{align*} This completes the proof of \eqref{n14}. \textbf{Step 5:} Constructing $v_n$ and approximating by $C_c^{\infty}$ functions. Using \eqref{tildevn4}, \eqref{match4}, and \eqref{n14}, and invoking the stability result Theorem~\ref{T:stability}, for $n$ large enough we obtain a global solution $v_n$ to $\text{NLS}_\Omega$ with initial data $v_n(0)=\phi_n$ and \begin{align*} \|v_n\|_{L_{t,x}^{10}({\mathbb{R}}\times\Omega)}\lesssim 1. 
\end{align*} Moreover, \begin{align}\label{n19} \lim_{T\to\infty}\limsup_{n\to\infty}\|v_n(t-\lambda_n^2t_n)-\tilde v_n(t)\|_{\dot S^1({\mathbb{R}}\times\Omega)}=0. \end{align} It remains to prove the approximation result \eqref{ap4}. By the density of $C_c^{\infty}({\mathbb{R}}\times{\mathbb{H}})$ in $\dot X^1({\mathbb{R}}\times{\mathbb{H}})$, for every ${\varepsilon}>0$ there exists $\psi_{\varepsilon}\in C_c^{\infty}({\mathbb{R}}\times{\mathbb{H}})$ such that \begin{align*} \|w_\infty-\psi_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{H}})}<\tfrac {\varepsilon} 3. \end{align*} This together with \eqref{n19} reduces matters to showing \begin{align}\label{c4e3} \|\tilde v_n(t,x)-\lambda_n^{-\frac 12}w_\infty(\lambda_n^{-2}t, \lambda_n^{-1}R_n^{-1}(x-x_n^*))\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<\tfrac{{\varepsilon}}{3} \end{align} for $n$ and $T$ sufficiently large. A change of variables shows that \begin{align*} \text{LHS\eqref{c4e3}}&\lesssim \|\tilde w_n\circ \Psi_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)} \\ & \quad +\|e^{i(t-T)\Delta_{\Omega_n}}(\tilde w_n(T)\circ\Psi_n)-w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\\ &\quad+\|e^{i(t+T)\Delta_{\Omega_n}}(\tilde w_n(-T)\circ\Psi_n)-w_\infty\|_{\dot X^1((-\infty,-T)\times{\mathbb{R}}^3)}. \end{align*} The first term can be controlled as follows: \begin{align*} \|\tilde w_n\circ\Psi_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)} & \lesssim \| (\chi_n w_\infty)\circ\Psi_n-w_\infty\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}\\ &\quad +\|[\chi_n(w_n-w_\infty)]\circ\Psi_n\|_{\dot X^1([-T,T]\times{\mathbb{R}}^3)}, \end{align*} which converges to zero as $n\to\infty$ by \eqref{cond4} and the fact that $\Psi_n(x)\to x$ in $C^1$. Similarly, we can use the Strichartz inequality to replace $\tilde w_n(T)\circ \Psi_n$ by $w_\infty(T)$ in the second term by making an $o(1)$ error as $n\to \infty$.
Then we can use the convergence of propagators result Theorem~\ref{T:LF} to replace $e^{i(t-T)\Delta_{\Omega_n}}$ by $e^{i(t-T)\Delta_{{\mathbb{H}}}}$ with an additional $o(1)$ error. It then suffices to show \begin{align*} \|e^{i(t-T)\Delta_{{\mathbb{H}}}}w_\infty(T)-w_\infty\|_{\dot X^1((T,\infty)\times{\mathbb{R}}^3)}\to 0 \qtq{as}T\to \infty, \end{align*} which follows from the fact that $w_\infty$ scatters forward in time, just as in the proofs of Theorems~\ref{T:embed2} and~\ref{T:embed3}. The treatment of the third term is similar. This completes the proof of \eqref{ap4} and so the proof of Theorem~\ref{T:embed4}. \end{proof} \section{Palais--Smale and the proof of Theorem~\ref{T:main}}\label{S:Proof} In this section we prove a Palais--Smale condition for minimizing sequences of blowup solutions to \eqref{nls}. This will allow us to conclude that failure of Theorem~\ref{T:main} would imply the existence of special counterexamples that are almost periodic. At the end of this section, we rule out these almost periodic solutions by employing a spatially truncated (one-particle) Morawetz inequality in the style of \cite{borg:scatter}. This will complete the proof of Theorem~\ref{T:main}. We first define operators $T_n^j$ on general functions of spacetime. These act on linear solutions in a manner corresponding to the action of $G_n^j \exp\{it_n^j\Delta_{\Omega_n^j}\}$ on initial data in Theorem~\ref{T:LPD}. As in that theorem, the exact definition depends on the case to which the index $j$ conforms. In Cases~1, 2,~and~3, we define \begin{align*} (T_n^j f)(t,x) :=(\lambda_n^j)^{-\frac 12}f\bigl((\lambda_n^j)^{-2} t+t_n^j, (\lambda_n^j)^{-1}(x-x_n^j)\bigr). \end{align*} In Case 4, we define \begin{align*} (T_n^j f)(t,x):=(\lambda_n^j)^{-\frac12}f\bigl((\lambda_n^j)^{-2}t+t_n^j, (\lambda_n^j)^{-1}(R_n^j)^{-1}(x-(x_n^j)^*)\bigr). \end{align*} Here, the parameters $\lambda_n^j, t_n^j, x_n^j, (x_n^j)^*$, and $R_n^j$ are as defined in Theorem~\ref{T:LPD}.
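For the computations that follow it is convenient to record the inverses of these operators; the formula below (spelled out here for the reader, in the Case 1--3 normalization) is obtained by solving for $f$:
\begin{align*}
[(T_n^j)^{-1} g](t,x)=(\lambda_n^j)^{\frac12}\, g\bigl((\lambda_n^j)^{2}(t-t_n^j),\ \lambda_n^j x+x_n^j\bigr),
\end{align*}
as one checks by composing with $T_n^j$. Composing $(T_n^j)^{-1}$ with $T_n^k$ then yields the explicit formulas appearing in the proof of Lemma~\ref{L:ortho} below.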
Using the asymptotic orthogonality condition \eqref{E:LP5}, it is not hard to prove the following \begin{lem}[Asymptotic decoupling]\label{L:ortho} Suppose that the parameters associated to $j,k$ are orthogonal in the sense of \eqref{E:LP5}. Then for any $\psi^j, \psi^k\in C_c^{\infty}({\mathbb{R}}\times{\mathbb{R}}^3)$, \begin{align*} \|T_n^j\psi^j T_n^k\psi^k\|_{L_{t,x}^5({\mathbb{R}}\times{\mathbb{R}}^3)}+\|T_n^j\psi^j \nabla(T_n^k\psi^k)\|_{L_{t,x}^{\frac 52}({\mathbb{R}}\times{\mathbb{R}}^3)} +\|\nabla(T_n^j\psi^j)\nabla (T_n^k\psi^k)\|_{L_{t,x}^{\frac53}({\mathbb{R}}\times{\mathbb{R}}^3)} \end{align*} converges to zero as $n\to\infty$. \end{lem} \begin{proof} From a change of variables, we get \begin{align*} \|T_n^j&\psi^jT_n^k \psi^k\|_{L_{t,x}^5} + \|T_n^j\psi^j\nabla(T_n^k\psi^k)\|_{L_{t,x}^{\frac 52}} +\|\nabla(T_n^j\psi^j)\nabla (T_n^k\psi^k)\|_{L_{t,x}^{\frac53}}\\ &= \|\psi^j (T_n^j)^{-1}T_n^k\psi^k\|_{L_{t,x}^5}+\|\psi^j\nabla(T_n^j)^{-1}T_n^k\psi^k\|_{L_{t,x}^{\frac 52}} +\|\nabla \psi^j\nabla (T_n^j)^{-1}T_n^k\psi^k\|_{L_{t,x}^{\frac53}}, \end{align*} where all spacetime norms are over ${\mathbb{R}}\times{\mathbb{R}}^3$. Depending on the cases to which $j$ and $k$ conform, $(T_n^j)^{-1}T_n^k$ takes one of the following forms: \begin{CI} \item Case a): $j$ and $k$ each conform to one of Cases 1, 2, or 3. \begin{align*} [(T_n^j)^{-1}T_n^k\psi^k](t,x)=\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{\frac12} \psi^k\Bigl(\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{2} \bigl(t-\tfrac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{(\lambda_n^j)^2}\bigr), \tfrac{\lambda_n^j}{\lambda_n^k}\bigl( x - \tfrac{x_n^k-x_n^j}{\lambda_n^j}\bigr) \Bigr). \end{align*} \item Case b): $j$ conforms to Case 1, 2, or 3 and $k$ conforms to Case 4. 
\begin{align*} [(T_n^j)^{-1}&T_n^k\psi^k](t,x) \\ &=\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{\frac12} \psi^k\Bigl(\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{2} \bigl(t-\tfrac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{(\lambda_n^j)^2}\bigr), \tfrac{\lambda_n^j}{\lambda_n^k} (R_n^k)^{-1}\bigl( x - \tfrac{(x_n^k)^*-x_n^j}{\lambda_n^j}\bigr) \Bigr). \end{align*} \item Case c): $j$ conforms to Case 4 and $k$ to Case 1, 2, or 3. \begin{align*} [(T_n^j)^{-1}T_n^k\psi^k](t,x)=\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{\frac12} \psi^k\Bigl(\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{2} \bigl(t-\tfrac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{(\lambda_n^j)^2}\bigr), \tfrac{\lambda_n^j}{\lambda_n^k} \bigl( R_n^j x - \tfrac{x_n^k-(x_n^j)^*}{\lambda_n^j}\bigr) \Bigr). \end{align*} \item Case d): Both $j$ and $k$ conform to Case 4. \begin{align*} [(T_n^j&)^{-1}T_n^k\psi^k](t,x) \\ &=\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{\frac12} \psi^k\Bigl(\bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{2} \bigl(t-\tfrac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{(\lambda_n^j)^2}\bigr), \tfrac{\lambda_n^j}{\lambda_n^k}(R_n^k)^{-1}\bigl( R_n^j x - \tfrac{(x_n^k)^*-(x_n^j)^*}{\lambda_n^j}\bigr) \Bigr). \end{align*} \end{CI} We only present the details for decoupling in the $L_{t,x}^5$ norm; the argument for decoupling in the other norms is very similar. We first assume $\frac{\lambda_n^j}{\lambda_n^k}+\frac{\lambda_n^k}{\lambda_n^j}\to\infty$. Using H\"older and a change of variables, we estimate \begin{align*} \|\psi^j(T_n^j)^{-1}T_n^k\psi^k\|_{L^5_{t,x}} &\le\min\bigl\{\|\psi^j\|_{L^\infty_{t,x}}\|(T_n^j)^{-1}T_n^k\psi^k\|_{L^5_{t,x}},\ \|\psi^j\|_{L^5_{t,x}}\|(T_n^j)^{-1}T_n^k\psi^k\|_{L^\infty_{t,x}}\bigr\}\\ &\lesssim \min\Bigl\{ \bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{-\frac12}, \bigl(\tfrac{\lambda_n^j}{\lambda_n^k}\bigr)^{\frac12}\Bigr\} \to 0 \qtq{as} n\to \infty. \end{align*} Henceforth, we may assume $\frac{\lambda_n^j}{\lambda_n^k}\to \lambda_0\in (0,\infty)$.
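The scaling behind the last display can be verified directly (a computation added for the reader): with $a:=\lambda_n^j/\lambda_n^k$, a change of variables gives
\begin{align*}
\|(T_n^j)^{-1}T_n^k\psi^k\|_{L^5_{t,x}({\mathbb{R}}\times{\mathbb{R}}^3)}=a^{-\frac12}\|\psi^k\|_{L^5_{t,x}({\mathbb{R}}\times{\mathbb{R}}^3)} \qtq{and} \|(T_n^j)^{-1}T_n^k\psi^k\|_{L^\infty_{t,x}}=a^{\frac12}\|\psi^k\|_{L^\infty_{t,x}},
\end{align*}
the rotations and translations leaving both norms unchanged; thus the minimum above is $\lesssim\min\{a^{\frac12},a^{-\frac12}\}$, which vanishes precisely when $a+a^{-1}\to\infty$.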
If $\frac{|t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2|}{\lambda_n^k\lambda_n^j}\to \infty$, it is easy to see that the temporal supports of $\psi^j$ and $(T_n^j)^{-1}T_n^k\psi^k$ are disjoint for $n$ sufficiently large. Hence \begin{align*} \lim_{n\to \infty}\|\psi^j(T_n^j)^{-1}T_n^k\psi^k\|_{L^5_{t,x}} = 0. \end{align*} The only case left is when \begin{align}\label{s13} \tfrac{\lambda_n^j}{\lambda_n^k}\to\lambda_0, \quad \tfrac{t_n^j(\lambda_n^j)^2-t_n^k(\lambda_n^k)^2}{\lambda_n^k\lambda_n^j} \to t_0, \qtq{and} \tfrac{|x_n^j-x_n^k|}{\sqrt{\lambda_n^j\lambda_n^k}}\to\infty. \end{align} In this case we will verify that the spatial supports of $\psi^j$ and $(T_n^j)^{-1}T_n^k\psi^k$ are disjoint for $n$ sufficiently large. Indeed, in Case a), \begin{align*} \tfrac{|x_n^j-x_n^k|}{\lambda_n^j}=\tfrac{|x_n^j-x_n^k|}{\sqrt{\lambda_n^j\lambda_n^k}} \sqrt{\tfrac{\lambda_n^k}{\lambda_n^j}}\to \infty \qtq{as} n\to\infty. \end{align*} In Case b), for $n$ sufficiently large we have \begin{align*} \tfrac{|x_n^j-(x_n^k)^*|}{\lambda_n^j} &\ge \tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-\tfrac{|x_n^k-(x_n^k)^*|}{\lambda_n^j} \ge\tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-2\tfrac{d^k_\infty}{\lambda_0}, \end{align*} which converges to infinity as $n\to \infty$. In Case c), for $n$ sufficiently large we have \begin{align*} \tfrac{|(x_n^j)^*-x_n^k|}{\lambda_n^j} &\ge \tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-\tfrac{|x_n^j-(x_n^j)^*|}{\lambda_n^j} \ge \tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-2d^j_{\infty}, \end{align*} which converges to infinity as $n\to \infty$. Finally, in Case d) for $n$ sufficiently large, \begin{align*} \tfrac{|(x_n^j)^*-(x_n^k)^*|}{\lambda_n^j} &\ge \tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-\tfrac{|x_n^j-(x_n^j)^*|}{\lambda_n^j} -\tfrac{|x_n^k-(x_n^k)^*|}{\lambda_n^j} \ge \tfrac{|x_n^j-x_n^k|}{\lambda_n^j}-2d_\infty^j-2\tfrac{d^k_\infty}{\lambda_0}, \end{align*} which converges to infinity as $n\to\infty$. 
Thus, in all cases, \begin{align*} \lim_{n\to \infty}\|\psi^j(T_n^j)^{-1}T_n^k\psi^k\|_{L^5_{t,x}} = 0. \end{align*} This completes the proof of the lemma. \end{proof} Theorem~\ref{T:main} claims that for any initial data $u_0\in \dot H^1_D(\Omega)$ there is a global solution $u:{\mathbb{R}}\times\Omega\to {\mathbb{C}}$ to \eqref{nls} with $S_{\mathbb{R}}(u)\leq C(\|u_0\|_{\dot H^1_D(\Omega)})$. Recall that for a time interval~$I$, the scattering size of $u$ on $I$ is given by \begin{align*} S_I(u)=\iint_{I\times\Omega}|u(t,x)|^{10} \,dx\,dt. \end{align*} Supposing that Theorem~\ref{T:main} failed, there would be a critical energy $0<E_c<\infty$ so that \begin{align}\label{LofE} L(E)<\infty \qtq{for} E<E_c \qtq{and} L(E)=\infty \qtq{for} E\geq E_c. \end{align} Recall from the introduction that $L(E)$ is the supremum of $S_I(u)$ over all solutions $u:I\times\Omega\to {\mathbb{C}}$ with $E(u)\leq E$ and defined on any interval $I\subseteq {\mathbb{R}}$. The positivity of $E_c$ follows from small data global well-posedness. Indeed, the argument proves the stronger statement \begin{align}\label{SbyE} \|u\|_{\dot X^1({\mathbb{R}}\times\Omega)}\lesssim E(u_0)^{\frac12} \quad \text{for all data with } E(u_0)\leq \eta_0, \end{align} where $\eta_0$ denotes the small data threshold. Recall $\dot X^1= L_{t,x}^{10} \cap L_t^5\dot H^{1,\frac{30}{11}}$. Using the induction on energy argument together with \eqref{LofE} and the stability result Theorem~\ref{T:stability}, we now prove a compactness result for optimizing sequences of blowup solutions. \begin{prop}[Palais--Smale condition]\label{P:PS} Let $u_n: I_n\times\Omega\to {\mathbb{C}}$ be a sequence of solutions with $E(u_n)\to E_c$, for which there is a sequence of times $t_n\in I_n$ so that \begin{align*} \lim_{n\to\infty} S_{\ge t_n}(u_n)=\lim_{n\to\infty}S_{\le t_n}(u_n)=\infty. \end{align*} Then the sequence $u_n(t_n)$ has a subsequence that converges strongly in $\dot H^1_D(\Omega)$.
\end{prop} \begin{proof} Using the time translation symmetry of \eqref{nls}, we may take $t_n\equiv0$ for all $n$; thus, \begin{align}\label{scat diverge} \lim_{n\to\infty} S_{\ge 0}(u_n)=\lim_{n\to \infty} S_{\le 0} (u_n)=\infty. \end{align} Applying Theorem~\ref{T:LPD} to the bounded sequence $u_n(0)$ in $\dot H^1_D(\Omega)$ and passing to a subsequence if necessary, we obtain the linear profile decomposition \begin{align}\label{s0} u_n(0)=\sum_{j=1}^J \phi_n^j+w_n^J \end{align} with the properties stated in that theorem. In particular, for any finite $0\leq J \leq J^*$ we have the energy decoupling condition \begin{align}\label{s01} \lim_{n\to \infty}\Bigl\{E(u_n)-\sum_{j=1}^J E(\phi_n^j)-E(w_n^J)\Bigr\}=0. \end{align} To prove the proposition, we need to show that $J^*=1$, that $w_n^1\to 0$ in $\dot H^1_D(\Omega)$, that the only profile $\phi^1_n$ conforms to Case~1, and that $t_n^1\equiv 0$. All other possibilities will be shown to contradict \eqref{scat diverge}. We discuss two scenarios: \textbf{Scenario I:} $\sup_j \limsup_{n\to \infty} E(\phi_n^j) =E_c$. From the non-triviality of the profiles, we have $\liminf_{n\to \infty} E(\phi_n^j)>0$ for every finite $1\leq j\leq J^*$; indeed, $\|\phi_n^j\|_{\dot H^1_D(\Omega)}$ converges to $\|\phi^j\|_{\dot H^1}$. Thus, passing to a subsequence, \eqref{s01} implies that there is a single profile in the decomposition \eqref{s0} (that is, $J^*=1$) and we can write \begin{equation}\label{s11} u_n(0)=\phi_n +w_n \qtq{with} \lim_{n\to \infty} \|w_n\|_{\dot H_D^1(\Omega)}=0. \end{equation} If $\phi_n$ conforms to Cases 2, 3, or 4, then by the Theorems~\ref{T:embed2}, \ref{T:embed3}, or \ref{T:embed4}, there are global solutions $v_n$ to $\text{NLS}_\Omega$ with data $v_n(0)=\phi_n$ that admit a uniform spacetime bound. By Theorem~\ref{T:stability}, this spacetime bound extends to the solutions $u_n$ for $n$ large enough. However, this contradicts \eqref{scat diverge}. 
Therefore, $\phi_n$ must conform to Case~1 and \eqref{s11} becomes \begin{equation}\label{s11'} u_n(0)=e^{it_n\lambda_n^2\Delta_\Omega}\phi +w_n \qtq{with} \lim_{n\to \infty} \|w_n\|_{\dot H_D^1(\Omega)}=0 \end{equation} and $t_n\equiv 0$ or $t_n\to \pm \infty$. If $t_n\equiv 0$, then we obtain the desired compactness. Thus, we only need to preclude the possibility that $t_n\to\pm\infty$. Let us suppose $t_n\to \infty$; the case $t_n\to -\infty$ can be treated symmetrically. In this case, the Strichartz inequality and the monotone convergence theorem yield \begin{align*} S_{\ge 0}(e^{it\Delta_\Omega}u_n(0))=S_{\ge 0}(e^{i(t+t_n\lambda_n^2)\Delta_{\Omega}}\phi+e^{it\Delta_{\Omega}}w_n) \to 0 \qtq{as} n\to \infty. \end{align*} By the small data theory, this implies that $S_{\geq 0}(u_n)\to 0$, which contradicts \eqref{scat diverge}. \textbf{Scenario II:} $\sup_j \limsup_{n\to \infty} E(\phi_n^j) \leq E_c-2\delta$ for some $\delta>0$. We first observe that for each finite $J\leq J^*$ we have $E(\phi_n^j) \leq E_c-\delta$ for all $1\leq j\leq J$ and $n$ sufficiently large. This is important for constructing global nonlinear profiles for $j$ conforming to Case~1, via the induction on energy hypothesis~\eqref{LofE}. If $j$ conforms to Case~1 and $t_n^j\equiv 0$, we define $v^j:I^j\times\Omega\to {\mathbb{C}}$ to be the maximal-lifespan solution to \eqref{nls} with initial data $v^j(0)=\phi^j$. If instead $t_n^j\to \pm \infty$, we define $v^j:I^j\times\Omega\to {\mathbb{C}}$ to be the maximal-lifespan solution to \eqref{nls} which scatters to $e^{it\Delta_\Omega}\phi^j$ as $t\to \pm\infty$. Now define $v_n^j(t,x):=v^j(t+t_n^j(\lambda_n^j)^2,x)$. Then $v_n^j$ is also a solution to \eqref{nls} on the time interval $I_n^j:=I^j-\{t_n^j(\lambda_n^j)^2\}$. In particular, for $n$ sufficiently large we have $0\in I_n^j$ and \begin{align}\label{bb1} \lim_{n\to\infty}\|v_n^j(0)-\phi_n^j\|_{\dot H^1_D(\Omega)}=0.
\end{align} Combining this with $E(\phi_n^j) \leq E_c-\delta$ and \eqref{LofE}, we deduce that for $n$ sufficiently large, $v_n^j$ (and also $v^j$) are global solutions that obey \begin{align*} S_{\mathbb{R}}(v^j)=S_{\mathbb{R}}(v_n^j)\le L(E_c-\delta)<\infty. \end{align*} Combining this with the Strichartz inequality shows that all Strichartz norms of $v_n^j$ are finite and, in particular, the $\dot X^1$ norm. This allows us to approximate $v_n^j$ in $\dot X^1({\mathbb{R}}\times\Omega)$ by $C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ functions. More precisely, for any ${\varepsilon}>0$ there exist $N_{\varepsilon}^j\in {\mathbb{N}}$ and $\psi^j_{\varepsilon}\in C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ so that for $n\geq N_{\varepsilon}^j$ we have \begin{align}\label{ap case1} \|v_n^j - T_n^j\psi^j_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<{\varepsilon}. \end{align} Specifically, choosing $\tilde\psi_{\varepsilon}^j\in C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ such that \begin{align*} \|v^j-\tilde \psi_{\varepsilon}^j\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<\tfrac{{\varepsilon}}{2}, \qtq{we set} \psi_{\varepsilon}^j(t,x):=(\lambda_\infty^j)^{\frac 12}\tilde\psi_{\varepsilon}^j \bigl((\lambda_\infty^j)^2 t, \lambda_\infty^j x+x_\infty^j\bigr). \end{align*} When $j$ conforms to Cases 2, 3, or 4, we apply the nonlinear embedding theorems of the previous section to construct the nonlinear profiles. More precisely, let $v_n^j$ be the global solutions to $\text{NLS}_\Omega$ constructed in Theorems~\ref{T:embed2}, \ref{T:embed3}, or \ref{T:embed4}, as appropriate. In particular, these $v_n^j$ also obey \eqref{ap case1} and $\sup_{n,j} S_{\mathbb{R}}(v_n^j)<\infty$. In all cases, we may use \eqref{SbyE} together with our bounds on the spacetime norms of $v_n^j$ and the finiteness of $E_c$ to deduce \begin{align}\label{s2} \|v_n^j\|_{\dot X^1({\mathbb{R}}\times\Omega)}\lesssim_{E_c, \delta} E(\phi_n^j)^{\frac12} \lesssim_{E_c, \delta}1.
\end{align} Combining this with \eqref{s01} we deduce \begin{align}\label{s2lim} \limsup_{n\to \infty} \sum_{j=1}^J \|v_n^j\|_{\dot X^1({\mathbb{R}}\times\Omega)}^2 \lesssim_{E_c, \delta} \limsup_{n\to \infty} \sum_{j=1}^J E(\phi_n^j) \lesssim_{E_c,\delta} 1, \end{align} uniformly for finite $J\leq J^*$. The asymptotic orthogonality condition \eqref{E:LP5} gives rise to asymptotic decoupling of the nonlinear profiles. \begin{lem}[Decoupling of nonlinear profiles] \label{L:npd} For $j\neq k$ we have \begin{align*} \lim_{n\to \infty} \|v_n^j v_n^k\|_{L_{t,x}^5({\mathbb{R}}\times\Omega)} +\|v_n^j \nabla v_n^k\|_{L_{t,x}^{\frac52}({\mathbb{R}}\times\Omega)} +\|\nabla v_n^j \nabla v_n^k\|_{L_t^{\frac52} L_x^{\frac{15}{11}}({\mathbb{R}}\times\Omega)}=0. \end{align*} \end{lem} \begin{proof} Recall that for any ${\varepsilon}>0$ there exist $N_{\varepsilon}\in {\mathbb{N}}$ and $\psi_{\varepsilon}^j,\psi_{\varepsilon}^k\in C_c^\infty({\mathbb{R}}\times{\mathbb{R}}^3)$ so that for all $n\geq N_{\varepsilon}$ we have \begin{align*} \|v_n^j - T_n^j\psi^j_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)} + \|v_n^k - T_n^k\psi^k_{\varepsilon}\|_{\dot X^1({\mathbb{R}}\times{\mathbb{R}}^3)}<{\varepsilon}. \end{align*} Thus, using \eqref{s2} and Lemma~\ref{L:ortho} we get \begin{align*} \|v_n^j v_n^k\|_{L_{t,x}^5} &\leq \|v_n^j(v_n^k-T_n^k\psi_{{\varepsilon}}^k)\|_{L_{t,x}^5}+\|(v_n^j-T_n^j\psi_{\varepsilon}^j)T_n^k\psi_{{\varepsilon}}^k\|_{L_{t,x}^5} +\|T_n^j\psi_{\varepsilon}^j\, T_n^k\psi_{\varepsilon}^k\|_{L_{t,x}^5}\\ &\lesssim \|v^j_n\|_{\dot X^1} \|v_n^k - T_n^k\psi^k_{\varepsilon}\|_{\dot X^1} + \|v_n^j - T_n^j\psi^j_{\varepsilon}\|_{\dot X^1} \|\psi_{\varepsilon}^k\|_{\dot X^1} + \|T_n^j\psi_{\varepsilon}^j\, T_n^k\psi_{\varepsilon}^k\|_{L_{t,x}^5}\\ &\lesssim_{E_c,\delta} {\varepsilon} + o(1) \qtq{as}n\to \infty. \end{align*} As ${\varepsilon}>0$ was arbitrary, this proves the first asymptotic decoupling statement. The second decoupling statement follows analogously.
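Before treating the third assertion, we record the exponent arithmetic behind the interpolation used there (a check added for the reader): the pair $(\tfrac52,\tfrac{15}{11})$ arises from
\begin{align*}
\tfrac{2}{5}=\tfrac{2/3}{5/3}+\tfrac{1/3}{\infty} \qtq{and} \tfrac{11}{15}=\tfrac{2/3}{5/3}+\tfrac{1/3}{1},
\end{align*}
that is, $L_t^{\frac52}L_x^{\frac{15}{11}}$ interpolates between $L_{t,x}^{\frac53}$ and $L_t^\infty L_x^1$ with weights $\tfrac23$ and $\tfrac13$, respectively.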
For the third assertion, a little care has to be used to estimate the error terms, due to the asymmetry of the spacetime norm and to the restrictions placed by Theorem~\ref{T:Sob equiv}. Using the same argument as above and interpolation, we estimate \begin{align*} \|\nabla v_n^j \nabla v_n^k\|_{L_t^{\frac52} L_x^{\frac{15}{11}}} &\leq \|\nabla v_n^j(\nabla v_n^k-\nabla T_n^k\psi_{{\varepsilon}}^k)\|_{L_t^{\frac52} L_x^{\frac{15}{11}}}+\|(\nabla v_n^j-\nabla T_n^j\psi_{\varepsilon}^j)\nabla T_n^k\psi_{{\varepsilon}}^k\|_{L_t^{\frac52} L_x^{\frac{15}{11}}}\\ &\quad +\|\nabla T_n^j\psi_{\varepsilon}^j\, \nabla T_n^k\psi_{\varepsilon}^k\|_{L_t^{\frac52} L_x^{\frac{15}{11}}}\\ &\lesssim_{E_c,\delta}{\varepsilon} + \|\nabla T_n^j\psi_{\varepsilon}^j\, \nabla T_n^k\psi_{\varepsilon}^k\|_{L_{t,x}^{\frac53}}^{\frac23}\|\nabla T_n^j\psi_{\varepsilon}^j\, \nabla T_n^k\psi_{\varepsilon}^k\|_{L_t^\infty L_x^1}^{\frac13}\\ &\lesssim_{E_c,\delta}{\varepsilon} + \|\nabla T_n^j\psi_{\varepsilon}^j\, \nabla T_n^k\psi_{\varepsilon}^k\|_{L_{t,x}^{\frac53}}^{\frac23}\|\nabla\psi_{\varepsilon}^j\|_{L_x^2}^{\frac13}\|\nabla\psi_{\varepsilon}^k\|_{L_x^2}^{\frac13}\\ &\lesssim_{E_c,\delta}{\varepsilon} + o(1) \qtq{as}n\to \infty, \end{align*} where we used Lemma~\ref{L:ortho} in the last step. As ${\varepsilon}>0$ was arbitrary, this proves the last decoupling statement. \end{proof} As a consequence of this decoupling we can bound the sum of the nonlinear profiles in $\dot X^1$, as follows: \begin{align}\label{sum vnj} \limsup_{n\to \infty} \Bigl\|\sum_{j=1}^J v_n^j\Bigr\|_{\dot X^1({\mathbb{R}}\times\Omega)}\lesssim_{E_c,\delta}1 \quad\text{uniformly for finite $J\leq J^*$}. 
\end{align} Indeed, by Young's inequality, \eqref{s2}, \eqref{s2lim}, and Lemma~\ref{L:npd}, \begin{align*} S_{\mathbb{R}}\Bigl(\sum_{j=1}^J v_n^j\Bigr) &\lesssim \sum_{j=1}^J S_{\mathbb{R}}(v_n^j)+J^8\sum_{j\neq k}\iint_{{\mathbb{R}}\times\Omega}|v_n^j||v_n^k|^9 \,dx\,dt\\ &\lesssim_{E_c,\delta}1 + J^8 \sum_{j\neq k}\|v_n^jv_n^k\|_{L_{t,x}^5}\|v_n^k\|_{L_{t,x}^{10}}^8\\ &\lesssim_{E_c,\delta}1 + J^8 o(1) \qtq{as} n\to \infty. \end{align*} Similarly, \begin{align*} \Bigl\|\sum_{j=1}^J \nabla v_n^j\Bigr\|_{L_t^5L_x^{\frac{30}{11}}}^2 &= \Bigl\|\Bigl(\sum_{j=1}^J \nabla v_n^j\Bigr)^2\Bigr\|_{L_t^{\frac52}L_x^{\frac{15}{11}}}\lesssim \sum_{j=1}^J \|\nabla v_n^j\|_{L_t^5L_x^{\frac{30}{11}}}^2 + \sum_{j\neq k} \|\nabla v_n^j \nabla v_n^k\|_{L_t^{\frac52} L_x^{\frac{15}{11}}}\\ &\lesssim_{E_c,\delta}1 + o(1)\qtq{as} n\to \infty. \end{align*} This completes the proof of \eqref{sum vnj}. The same argument combined with \eqref{s01} shows that given $\eta>0$, there exists $J'=J'(\eta)$ such that \begin{align}\label{sum vnj tail} \limsup_{n\to \infty} \Bigl\|\sum_{j=J'}^J v_n^j\Bigr\|_{\dot X^1({\mathbb{R}}\times\Omega)}\leq \eta \quad \text{uniformly in $J\geq J'$}. \end{align} Now we are ready to construct an approximate solution to $\text{NLS}_\Omega$. For each $n$ and $J$, we define \begin{align*} u_n^J:=\sum_{j=1}^J v_n^j+e^{it\Delta_{\Omega}}w_n^J. \end{align*} Obviously $u_n^J$ is defined globally in time. In order to apply Theorem~\ref{T:stability}, it suffices to verify the following three claims for $u_n^J$: Claim 1: $\|u_n^J(0)-u_n(0)\|_{\dot H^1_D(\Omega)}\to 0$ as $n\to \infty$ for any $J$. Claim 2: $\limsup_{n\to \infty} \|u_n^J\|_{\dot X^1({\mathbb{R}}\times\Omega)}\lesssim_{E_c, \delta} 1$ uniformly in $J$. Claim 3: $\lim_{J\to\infty}\limsup_{n\to\infty}\|(i\partial_t+\Delta_{\Omega})u_n^J-|u_n^J|^4u_n^J\|_{\dot N^1({\mathbb{R}}\times\Omega)}=0$. 
The three claims imply that for sufficiently large $n$ and $J$, $u_n^J$ is an approximate solution to \eqref{nls} with finite scattering size, which asymptotically matches $u_n(0)$ at time $t=0$. Using the stability result Theorem~\ref{T:stability} we see that for $n, J$ sufficiently large, the solution $u_n$ inherits\footnote{In fact, we obtain a nonlinear profile decomposition for the sequence of solutions $u_n$ with an error that goes to zero in $L^{10}_{t,x}$.} the spacetime bounds of $u_n^J$, thus contradicting \eqref{scat diverge}. Therefore, to complete the treatment of the second scenario, it suffices to verify the three claims above. The first claim follows trivially from \eqref{s0} and \eqref{bb1}. To derive the second claim, we use \eqref{sum vnj} and the Strichartz inequality, as follows: \begin{align*} \limsup_{n\to \infty}\|u_n^J\|_{\dot X^1({\mathbb{R}}\times\Omega)}&\lesssim \limsup_{n\to \infty}\Bigl\|\sum_{j=1}^J v_n^j\Bigr\|_{\dot X^1({\mathbb{R}}\times\Omega)}+\limsup_{n\to \infty}\|w_n^J\|_{\dot H^1_D(\Omega)}\lesssim_{E_c,\delta}1. \end{align*} Next we verify the third claim. Adopting the notation $F(z)=|z|^4 z$, a direct computation gives \begin{align} (i\partial_t+\Delta_\Omega)u_n^J-F(u_n^J) &=\sum_{j=1}^JF(v_n^j)-F(u_n^J)\notag\\ &=\sum_{j=1}^J F(v_n^j)-F\biggl(\sum_{j=1}^J v_n^j\biggr)+F\bigl(u_n^J-e^{it\Delta_{\Omega}}w_n^J\bigr)-F(u_n^J)\label{s6}. 
\end{align} Taking the derivative, we estimate \begin{align*} \biggl|\nabla\biggl\{ \sum_{j=1}^JF(v_n^j)-F\biggl(\sum_{j=1}^J v_n^j\biggr)\biggr\} \biggr| \lesssim_{J} \sum_{j\neq k}|\nabla v_n^j||v_n^k|^4+|\nabla v_n^j||v_n^j|^3|v_n^k| \end{align*} and hence, using \eqref{s2} and Lemma~\ref{L:npd}, \begin{align*} \biggl\|\nabla\biggl\{ \sum_{j=1}^JF(v_n^j)-F\biggl(\sum_{j=1}^J v_n^j\biggr)\bigg\}\biggr \|_{\dot N^0({\mathbb{R}}\times \Omega)} &\lesssim_{J} \sum_{j\neq k} \bigl\| |\nabla v_n^j| |v_n^k|^4 + |\nabla v_n^j||v_n^j|^3|v_n^k| \bigr\|_{L^{\frac {10}7}_{t,x}} \\ &\lesssim_{J} \sum_{j\neq k} \bigl\|\nabla v_n^j v_n^k\bigr\|_{L^{\frac 52}_{t,x}}\bigl[\|v_n^k\|_{L^{10}_{t,x}}^3 +\|v_n^j\|_{L^{10}_{t,x}}^3\bigr]\\ &\lesssim_{J,E_c,\delta} o(1) \qtq{as} n\to \infty. \end{align*} Thus, using the equivalence of Sobolev spaces Theorem~\ref{T:Sob equiv}, we obtain \begin{equation}\label{s7} \lim_{J\to \infty}\limsup_{n\to \infty} \biggl\| \sum_{j=1}^J F(v_n^j)-F\biggl(\sum_{j=1}^J v_n^j\biggr) \biggr\|_{\dot N^1({\mathbb{R}}\times \Omega)}=0. \end{equation} We now turn to estimating the second difference in \eqref{s6}. We will show \begin{equation}\label{s8} \lim_{J\to \infty} {\limsup_{n\to \infty}} \bigl\| F(u_n^J-e^{it\Delta_{\Omega}}w_n^J)-F(u_n^J) \bigr\|_{\dot N^1({\mathbb{R}}\times \Omega)}=0. \end{equation} By the equivalence of Sobolev spaces, it suffices to estimate the usual gradient of the difference in dual Strichartz spaces. Taking the derivative, we get \begin{align*} \bigl|\nabla \bigl\{F\bigl(u_n^J-e^{it\Delta_{\Omega}}w_n^J\bigr)-F(u_n^J)\bigr\}\bigr| &\lesssim \sum_{k=0}^3 |\nabla u_n^J| |u_n^J|^k |e^{it\Delta_{\Omega}}w_n^J|^{4-k} \\ &\quad + \sum_{k=0}^4 |\nabla e^{it\Delta_{\Omega}}w_n^J| |u_n^J|^k |e^{it\Delta_{\Omega}}w_n^J|^{4-k}. 
\end{align*} Using H\"older and the second claim, we obtain \begin{align*} \sum_{k=0}^3\bigl\||\nabla u_n^J| |u_n^J|^k |e^{it\Delta_{\Omega}}w_n^J|^{4-k}\bigr\|_{L^{\frac53}_t L^{\frac{30}{23}}_x} &\lesssim \sum_{k=0}^3\|\nabla u_n^J\|_{L^5_tL^{\frac {30}{11}}_x} \|u_n^J\|_{L_{t,x}^{10}}^k \|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}}^{4-k}\\ &\lesssim_{E_c, \delta} \sum_{k=0}^3\|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}}^{4-k}, \end{align*} which converges to zero as $n,J\to \infty$ by \eqref{E:LP1}. The same argument gives \begin{align*} \lim_{J\to \infty}\limsup_{n\to \infty} \sum_{k=0}^3\bigl\||\nabla e^{it\Delta_{\Omega}}w_n^J| |u_n^J|^k |e^{it\Delta_{\Omega}}w_n^J|^{4-k}\bigr\|_{L^{\frac53}_t L^{\frac{30}{23}}_x}=0. \end{align*} This leaves us to prove \begin{align}\label{708} \lim_{J\to \infty}\limsup_{n\to \infty}\||\nabla e^{it\Delta_{\Omega}}w_n^J| |u_n^J |^4\|_{L_{t,x}^{\frac{10}7}}=0. \end{align} Using H\"older, the second claim, Theorem~\ref{T:Sob equiv}, and the Strichartz inequality, we get \begin{align*} \||\nabla e^{it\Delta_{\Omega}}w_n^J| |u_n^J|^4\|_{L_{t,x}^{\frac{10}7}} &\lesssim \|u_n^J \nabla e^{it\Delta_{\Omega}}w_n^J \|_{L_{t,x}^{\frac52}} \|u_n^J\|_{L_{t,x}^{10}}^3\\ &\lesssim_{E_c,\delta}\|e^{it\Delta_{\Omega}}w_n^J\nabla e^{it\Delta_{\Omega}}w_n^J \|_{L_{t,x}^{\frac52}} +\Bigl\|\sum_{j=1}^J v_n^j \nabla e^{it\Delta_{\Omega}}w_n^J \Bigr\|_{L_{t,x}^{\frac52}}\\ &\lesssim_{E_c,\delta}\|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}}^{\frac2{11}}\|e^{it\Delta_{\Omega}}w_n^J\|_{L_t^{\frac92}L_x^{54}}^{\frac9{11}}\|\nabla e^{it\Delta_{\Omega}}w_n^J \|_{L_t^5 L_x^{\frac{30}{11}}} \\ &\quad +\Bigl\|\sum_{j=1}^J v_n^j \nabla e^{it\Delta_{\Omega}}w_n^J \Bigr\|_{L_{t,x}^{\frac52}}\\ &\lesssim_{E_c,\delta}\|e^{it\Delta_{\Omega}}w_n^J\|_{L_{t,x}^{10}}^{\frac2{11}}+\Bigl\|\sum_{j=1}^J v_n^j \nabla e^{it\Delta_{\Omega}}w_n^J \Bigr\|_{L_{t,x}^{\frac52}}. \end{align*} By \eqref{E:LP1}, the contribution of the first term to \eqref{708} is acceptable. 
We now turn to the second term. By \eqref{sum vnj tail}, \begin{align*} \limsup_{n\to \infty} \Bigl\| \Bigl(\sum_{j=J'}^J v_n^j\Bigr) \nabla e^{it\Delta_\Omega} w_n^J \Bigr\|_{L_{t,x}^{\frac52}} &\lesssim\limsup_{n\to \infty}\Bigl\|\sum_{j=J'}^J v_n^j\Bigr\|_{\dot X^1}\|\nabla e^{it\Delta_\Omega} w_n^J\|_{L_t^5L_x^{\frac{30}{11}}}\lesssim_{E_c,\delta} \eta, \end{align*} where $\eta>0$ is arbitrary and $J'=J'(\eta)$ is as in \eqref{sum vnj tail}. Thus, proving \eqref{708} reduces to showing \begin{align}\label{834} \lim_{J\to\infty} \limsup_{n \to \infty}\| v_n^j \nabla e^{it\Delta_\Omega} w_n^J \|_{L_{t,x}^{\frac52}}=0 \qtq{for each} 1\leq j\leq J'. \end{align} To this end, fix $1\leq j\leq J'$. Let ${\varepsilon}>0$, $\psi_{\varepsilon}^j\in C^\infty_c({\mathbb{R}}\times{\mathbb{R}}^3)$ be as in \eqref{ap case1}, and let $R,T>0$ be such that $\psi^j_{\varepsilon}$ is supported in the cylinder $[-T,T]\times\{|x|\leq R\}$. Then $$ \supp(T_n^j \psi^j_{\varepsilon} ) \subseteq [(\lambda_n^j)^2 (-T-t_n^j), (\lambda_n^j)^2 (T-t_n^j)]\times\{|x -x_n^j|\leq \lambda_n^j R\} $$ and $\|T_n^j \psi^j_{\varepsilon}\|_{L^\infty_{t,x}} \lesssim (\lambda_n^j)^{-\frac12} \| \psi^j_{\varepsilon}\|_{L^\infty_{t,x}}$. If $j$ conforms to Case~4, then $x_n^j$ above should be replaced by $(x_n^j)^*$. Thus, using Corollary~\ref{C:Keraani3.7} we deduce that \begin{align*} \| (T_n^j \psi^j_{\varepsilon}) \nabla e^{it\Delta_\Omega} w_n^J \|_{L_{t,x}^{\frac52}} &\lesssim T^{\frac{31}{180}} R^{\frac7{45}} \| \psi^j_{\varepsilon}\|_{L^\infty_{t,x}} \| e^{it\Delta_\Omega} w_n^J \|_{L^{10}_{t,x}}^{\frac1{18}} \| w_n^J \|_{\dot H^1_D(\Omega)}^{\frac{17}{18}} \\ &\lesssim_{\psi^j_{\varepsilon},E_c} \| e^{it\Delta_\Omega} w_n^J \|_{L^{10}_{t,x}}^{\frac1{18}}. 
\end{align*} Combining this with \eqref{ap case1} and using Theorem~\ref{T:Sob equiv} and Strichartz, we deduce that \begin{align*} \| v_n^j \nabla e^{it\Delta_\Omega} w_n^J \|_{L_{t,x}^{\frac52}} & \lesssim \| v^j_n - T_n^j \psi^j_{\varepsilon} \|_{\dot X^1} \| \nabla e^{it\Delta_\Omega} w_n^J \|_{L_t^5 L_x^{\frac{30}{11}}} + C(\psi^j_{\varepsilon},E_c) \| e^{it\Delta_\Omega} w_n^J \|_{L^{10}_{t,x}}^{\frac1{18}} \\ &\lesssim {\varepsilon} E_c + C(\psi^j_{\varepsilon},E_c) \| e^{it\Delta_\Omega} w_n^J \|_{L^{10}_{t,x}}^{\frac1{18}}. \end{align*} Using \eqref{E:LP1} we get $\text{LHS\eqref{834}} \lesssim_{E_c} {\varepsilon}$. As ${\varepsilon}>0$ was arbitrary, this proves \eqref{834}. This completes the proof of \eqref{708} and with it, the proof of \eqref{s8}. Combining \eqref{s7} and \eqref{s8} yields the third claim. This completes the treatment of the second scenario and so the proof of the proposition. \end{proof} As an immediate consequence of the Palais--Smale condition, we obtain that the failure of Theorem~\ref{T:main} implies the existence of almost periodic counterexamples: \begin{thm}[Existence of almost periodic solutions]\label{T:mmbs} Suppose Theorem \ref{T:main} fails to be true. Then there exist a critical energy \/$0<E_c<\infty$ and a global solution $u$ to \eqref{nls} with $E(u)=E_c$, which blows up in both time directions in the sense that $$ S_{\ge 0}(u)=S_{\le 0}(u)=\infty, $$ and whose orbit $\{ u(t):\, t\in {\mathbb{R}}\}$ is precompact in $\dot H_D^1(\Omega)$. Moreover, there exists $R>0$ so that \begin{align}\label{E:unif L6} \int_{\Omega\cap\{|x|\leq R\}} |u(t,x)|^6\, dx\gtrsim 1 \quad \text{uniformly for } t\in {\mathbb{R}}. \end{align} \end{thm} \begin{proof} If Theorem~\ref{T:main} fails to be true, there must exist a critical energy $0<E_c<\infty$ and a sequence of solutions $u_n:I_n\times\Omega\to {\mathbb{C}}$ such that $E(u_n)\to E_c$ and $S_{I_n}(u_n)\to \infty$. 
Let $t_n\in I_n$ be such that $S_{\ge t_n}(u_n)=S_{\le t_n}(u_n)=\frac 12 S_{I_n}(u_n)$; then \begin{align}\label{s12} \lim_{n\to\infty} S_{\ge t_n}(u_n)=\lim_{n\to\infty}S_{\le t_n}(u_n)=\infty. \end{align} Applying Proposition \ref{P:PS} and passing to a subsequence, we find $\phi\in \dot H^1_D(\Omega)$ such that $u_n(t_n)\to \phi$ in $\dot H^1_D(\Omega)$. In particular, $E(\phi)=E_c$. We take $u:I\times\Omega\to {\mathbb{C}}$ to be the maximal-lifespan solution to \eqref{nls} with initial data $u(0)=\phi$. From the stability result Theorem~\ref{T:stability} and \eqref{s12}, we get \begin{align}\label{s14} S_{\ge 0}(u)=S_{\le 0}(u)=\infty. \end{align} Next we prove that the orbit of $u$ is precompact in $\dot H_D^1(\Omega)$. For any sequence $\{t'_n\}\subset I$, \eqref{s14} implies $S_{\ge t_n'}(u)=S_{\le t_n'}(u)=\infty$. Thus by Proposition~\ref{P:PS}, we see that $u(t_n')$ admits a subsequence that converges strongly in $\dot H^1_D(\Omega)$. Therefore, $\{u(t): t\in {\mathbb{R}}\}$ is precompact in $\dot H^1_D(\Omega)$. We now show that the solution $u$ is global in time. We argue by contradiction; suppose, for example, that $\sup I<\infty$. Let $t_n\to \sup I$. Invoking Proposition~\ref{P:PS} and passing to a subsequence, we find $\phi\in \dot H^1_D(\Omega)$ such that $u(t_n)\to \phi$ in $\dot H^1_D(\Omega)$. From the local theory, there exist $T=T(\phi)>0$ and a unique solution $v:[-T,T]\times\Omega\to {\mathbb{C}}$ to \eqref{nls} with initial data $v(0)=\phi$ such that $S_{[-T,T]}v<\infty$. By the stability result Theorem~\ref{T:stability}, for $n$ sufficiently large we find a unique solution $\tilde u_n:[t_n-T,t_n+T]\times\Omega\to {\mathbb{C}}$ to \eqref{nls} with data $\tilde u_n(t_n)=u(t_n)$ and $S_{[t_n-T,t_n+T]}(\tilde u_n)<\infty$. From uniqueness of solutions to \eqref{nls}, we must have $\tilde u_n=u$. 
Thus taking $n$ sufficiently large, we see that $u$ can be extended beyond $\sup I$, which contradicts the fact that $I$ is the maximal lifespan of $u$. Finally, we prove the uniform lower bound \eqref{E:unif L6}. We again argue by contradiction. Suppose there exist sequences $R_n\to \infty$ and $\{t_n\}\subset {\mathbb{R}}$ along which $$ \int_{\Omega\cap\{|x|\leq R_n\}} |u(t_n,x)|^6\, dx \to 0. $$ Passing to a subsequence, we find $u(t_n)\to \phi$ in $\dot H^1_D(\Omega)$ for some non-zero $\phi\in \dot H^1_D(\Omega)$. Note that if $\phi$ were zero, then the solution $u$ would have energy less than the small data threshold, which would contradict \eqref{s14}. By Sobolev embedding, $u(t_n)\to\phi$ in $L^6$, and since $R_n\to\infty$, $$ \int_\Omega |\phi(x)|^6\,dx = \lim_{n\to\infty} \int_{\Omega\cap\{|x|\leq R_n\}} |\phi(x)|^6\, dx = \lim_{n\to\infty} \int_{\Omega\cap\{|x|\leq R_n\}} |u(t_n,x)|^6\, dx =0. $$ This contradicts the fact that $\phi \neq 0$ and completes the proof of the theorem. \end{proof} Finally, we are able to prove the main theorem. \begin{proof}[Proof of Theorem~\ref{T:main}] We argue by contradiction. Suppose Theorem~\ref{T:main} fails. By Theorem~\ref{T:mmbs}, there exists a minimal energy blowup solution $u$ that is global in time, whose orbit is precompact in $\dot H_D^1(\Omega)$, and that satisfies $$ \int_{\Omega\cap\{|x|\leq R\}} |u(t,x)|^6\, dx \gtrsim 1 \quad \text{uniformly for } t\in {\mathbb{R}} $$ for some large $R>1$. Integrating over a time interval of length $|I|\geq 1$, we obtain \begin{align*} |I|\lesssim R\int_{I}\int_{\Omega\cap\{|x|\leq R\}} \frac{|u(t,x)|^6}{|x|}\,dx\,dt \lesssim R\int_{I}\int_{\Omega\cap\{|x|\leq R|I|^{\frac12}\}}\frac{|u(t,x)|^6}{|x|}\,dx\,dt. 
\end{align*} On the other hand, for $R|I|^{\frac12}\geq 1$, the Morawetz inequality Lemma~\ref{L:morawetz} gives \begin{align*} \int_{I}\int_{\Omega\cap\{|x|\leq R|I|^{\frac12}\}}\frac{|u(t,x)|^6}{|x|}\,dx\,dt \lesssim R|I|^{\frac12}, \end{align*} with the implicit constant depending only on $E(u)=E_c$. Taking $I$ sufficiently large depending on $R$ and $E_c$ (which is possible since $u$ is global in time), we derive a contradiction. This completes the proof of Theorem~\ref{T:main}. \end{proof} \end{document}
\begin{document} \title{The Bipartition Polynomial of a Graph: Reconstruction, Decomposition, and Applications} \begin{abstract} The bipartition polynomial of a graph, introduced in \cite{Dod2015}, is a generalization of many other graph polynomials, including the domination, Ising, matching, independence, cut, and Euler polynomial. We show in this paper that it is also a powerful tool for proving graph properties. In addition, we can show that the bipartition polynomial is polynomially reconstructible, which means that we can recover it from the multiset of bipartition polynomials of one-edge-deleted subgraphs. \end{abstract} \section{Introduction} The \emph{bipartition polynomial} $B(G;x,y,z)$ of a simple graph $G$ has been introduced in \cite{Dod2015}. The bipartition polynomial is related to the set of bipartite subgraphs of $G$; it generalizes the Ising polynomial \cite{Andren2009}, the matching polynomial \cite{Farrell1979}, the independence polynomial (in case of regular graphs) \cite{Levit2005,Gutman1983}, the domination polynomial \cite{Arocha2000}, the Eulerian subgraph polynomial \cite{Aigner2007}, and the cut polynomial of a graph. In this paper, we consider the natural generalization of the bipartition polynomial to graphs with parallel edges. Let $G=(V,E)$ be a simple undirected graph with vertex set $V$ and edge set $E$. The open neighborhood of a vertex $v$ of $G$ is denoted by $N(v)$ or $N_G(v)$. It is the set of all vertices of $G$ that are adjacent to $v$. The closed neighborhood of $v$ is defined by $N_G[v]=N_G(v)\cup \{v\}$. The neighborhood of a vertex subset $W \subseteq V$ is: \begin{align*} N_G(W) &= \bigcup_{w\in W}N_G(w) \setminus W, \\ N_G[W] &= N_G(W) \cup W. \end{align*} The \emph{edge boundary} $\partial W$ of a vertex subset $W$ of $G$ is \[ \partial W = \{\{u,v \} \mid u\in W\text{ and }v\in V\setminus W \}, \] i.e., the set of all edges of $G$ with exactly one end vertex in $W$. 
Throughout this paper, we denote by $n$ the order, $n=|V|$, by $m$ the size, $m=|E|$, and by $k(G)$ the number of components of $G$. The bipartition polynomial of a graph $G$ is defined by \begin{equation} B(G;x,y,z)=\sum_{W\subseteq V} x^{|W|}\sum_{F\subseteq \partial W} y^{|N_{(V,F)}(W)|} z^{|F|}. \label{eq:bipart_def} \end{equation} Note that the definitions of neighborhood and edge boundary, as well as Equation \eqref{eq:bipart_def}, can easily be extended to graphs with parallel edges. From now on, unless otherwise stated, we allow graphs to have parallel edges. Note that adding loops does not change the bipartition polynomial. \begin{figure} \caption{Illustration of the definition of the bipartition polynomial} \label{fig:bipart_def} \end{figure} Figure \ref{fig:bipart_def} provides an illustration of the definition given in Equation (\ref{eq:bipart_def}). First we select a vertex subset $W$, which is located within the left gray-shaded bubble in Figure \ref{fig:bipart_def}. The cardinality of the set $W$ is counted in the exponent of the variable $x$. The edge boundary $\partial W$ consists of all edges that stick out from that bubble. Assume we select the edges shown in bold as the subset $F\subseteq \partial W$. The end vertices of these edges outside $W$ lie within the next bubble, which is labeled $Y$. The cardinality of the set $Y$ is counted in the exponent of the variable $y$ of the bipartition polynomial. The third variable, $z$, counts the edges in $F$. We see that the choice of $F$ always defines a bipartite subgraph of $G$, which is the reason for the name `bipartition polynomial'. If $H=(S\cup T,F)$ is a connected bipartite graph, then the partition sets $S$ and $T$ are uniquely defined (up to order).
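For small graphs, the defining sum in Equation \eqref{eq:bipart_def} can be evaluated directly by enumerating all pairs $(W,F)$. The following Python sketch does exactly that using SymPy; the function name is our own choice and the enumeration is exponential in $n$ and $|\partial W|$, so this is only a sanity check for tiny graphs, not an efficient algorithm.

```python
from itertools import combinations
import sympy as sp

x, y, z = sp.symbols("x y z")

def bipartition_polynomial(vertices, edges):
    """Brute-force evaluation of the defining sum:
    B(G;x,y,z) = sum over W <= V and F <= boundary(W) of
    x^|W| * y^|N_{(V,F)}(W)| * z^|F|.
    Edges are given as 2-tuples; parallel edges may simply be repeated."""
    B = sp.Integer(0)
    for r in range(len(vertices) + 1):
        for W in combinations(vertices, r):
            Wset = set(W)
            # edge boundary of W: edges with exactly one end vertex in W
            bd = [e for e in edges if (e[0] in Wset) != (e[1] in Wset)]
            for s in range(len(bd) + 1):
                for F in combinations(bd, s):
                    # neighbours of W in the subgraph (V, F)
                    NF = {v for e in F for v in e if v not in Wset}
                    B += x**len(Wset) * y**len(NF) * z**len(F)
    return sp.expand(B)

# Single edge K_2: B = 1 + 2x + 2xyz + x^2
print(bipartition_polynomial([0, 1], [(0, 1)]))
```

For $K_2$ one can read off $n$ and $m$ as in Equations \eqref{eq:n} and \eqref{eq:m}: the degree of $B(G;x,1,1)=1+4x+x^2$ is $2$, and half the coefficient of $xyz$ is $1$.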
Equation (\ref{eq:bipart_def}) implies that we can derive the order and size of a graph from its bipartition polynomial: \begin{eqnarray} n = \deg B(G;x,1,1), \label{eq:n} \\ m = \frac{1}{2}[xyz]B(G;x,y,z), \label{eq:m} \end{eqnarray} where $[xyz]B(G;x,y,z)$ denotes the coefficient of $xyz$ in $B$. \begin{proposition}\label{prop:bipart} A loopless graph $G$ is bipartite if and only if \[ \frac{1}{2}[xyz]B(G;x,y,z) = \deg B(G;1,1,z). \] \end{proposition} \begin{proof} The left-hand side is, according to Equation (\ref{eq:m}), the number of edges of $G$. A graph $G=(V,E)$ is bipartite if and only if there is a vertex subset $W\subseteq V$ with $\partial W = E$. Equation (\ref{eq:bipart_def}) shows that in this case the degree of $z$ in $B(G;x,y,z)$ is equal to $m$. \end{proof} In the remainder of this paper, we present different representations and decompositions of the bipartition polynomial (Section \ref{sect:decom}), derive relations to other graph polynomials (Section \ref{sec:poly}), prove its polynomial reconstructibility (Section \ref{sec:recon}), and provide some applications for proving graph properties (Section \ref{sec:appl}). \section{Representations and Decomposition}\label{sect:decom} In this section, we provide some different representations of the bipartition polynomial and decomposition formulae with respect to vertex and edge deletions. \subsection{Representations of the Bipartition Polynomial} \begin{theorem}[product representation, \cite{Dod2015}]\label{theo:prod_representation} The bipartition polynomial of a graph $G$ can be represented as \begin{equation} B(G;x,y,z) = \sum_{W\subseteq V} x^{|W|} \prod_{v\in N_{G}(W)} \left[y\left[(1+z)^{|\partial v \cap \partial W|}-1 \right] +1 \right]. \label{eq:prod_representation} \end{equation} The bipartition polynomial of a simple graph $G=(V,E)$ satisfies \begin{equation} B(G;x,y,z) = \sum_{W\subseteq V} x^{|W|} \prod_{v\in N_{G}(W)} \left[y\left[(1+z)^{|N_{G}(v)\cap W|}-1 \right] +1 \right].
\label{eq:prod_representation_simple} \end{equation} \end{theorem} \begin{corollary}\label{coro:k(G)} The number of components of a graph $G$ is $\log_{2}B(G;1,1,-1)$. \end{corollary} \begin{proof} From Equation (\ref{eq:prod_representation}), we obtain \[ B(G;1,1,-1) = \sum_{W\subseteq V}\prod_{v\in N_{G}(W)} 0^{\left\vert \partial v \cap \partial W\right\vert }. \] The product vanishes for all $W\subseteq V$ with $N_{G}(W)\neq\emptyset$, since $\partial v \cap \partial W\neq\emptyset$ for all $v\in N_{G}(W)$. The product equals 1 if $N_{G}(W)$ is empty. We have $N_{G}(W)=\emptyset$ if and only if $W$ is the (possibly empty) union of vertex sets of components of $G$. For a graph with $k$ components, there are $2^{k}$ ways to form a union of the vertex sets of the components. Hence we obtain $B(G;1,1,-1)=2^{k}$. \end{proof} The proof of the last proposition also yields the following statement. \begin{corollary}\label{coro:comp_sizes} If $G$ consists of $k$ components $G_{1},...,G_{k}$ such that the order of $G_{i}$ is $k_{i}$, then \[ B(G;x,1,-1)=\prod\limits_{i=1}^{k}(1+x^{k_{i}}). \] \end{corollary} Consequently, we can derive the order of all components of $G$ by the following simple procedure. The order of the first component is the smallest positive power, say $k_{1}$, of $x$ in $B(G;x,1,-1)$. Now divide $B(G;x,1,-1) $ by $(1+x^{k_{1}})$ and proceed step by step with the resulting polynomial in the same manner until you obtain the constant polynomial 1. A connected bipartite graph with at least one edge is called \emph{proper}. For any given graph $G$, we denote by $\text{Comp}(G)$ the set of proper components of $G$. As an abbreviation, we use $\text{Comp}(V,E)$ instead of $\text{Comp}((V,E))$. The number of isolated vertices of a graph $G=(V,E)$ is denoted by $\text{iso}(G)$ or by $\text{iso}(V,E)$. 
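The step-by-step division procedure described above for recovering the component orders from $B(G;x,1,-1)$ can be sketched in a few lines of Python. The helper name is hypothetical and SymPy's exact polynomial quotient is used for the division; by Corollary \ref{coro:comp_sizes}, the input is assumed to be a product $\prod_i (1+x^{k_i})$.

```python
import sympy as sp

x = sp.symbols("x")

def component_orders(Bx):
    """Recover the multiset of component orders k_1, ..., k_r from the
    specialization B(G; x, 1, -1) = prod_i (1 + x^{k_i})."""
    orders = []
    p = sp.expand(Bx)
    while p != 1:
        # smallest positive power of x with a nonzero coefficient
        k = min(e[0] for e in sp.Poly(p, x).monoms() if e[0] > 0)
        orders.append(k)
        p = sp.expand(sp.quo(p, 1 + x**k, x))  # divide off (1 + x^k)
    return sorted(orders)

# a graph with components of orders 1, 2, and 3:
print(component_orders((1 + x) * (1 + x**2) * (1 + x**3)))  # [1, 2, 3]
```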
\begin{theorem}[bipartite representation, \cite{Dod2015}]\label{theo:bip_representation} The bipartition polynomial of a graph $G=(V,E)$ satisfies \begin{equation} B(G;x,y,z) = \hspace{-14pt}\sum_{\substack{F\subseteq E \\(V,F)\text{ is bipartite}}} \hspace{-20pt} z^{|F|}(1+x)^{\mathrm{iso}(V,F)} \hspace{-20pt} \prod_{(S\cup T,A)\in\text{Comp}(V,F)} \hspace{-25pt}(x^{|S|}y^{|T|}+x^{|T|}y^{|S|}). \end{equation} \end{theorem} For another representation of the bipartition polynomial using so-called \emph{activity}, we assume that the edge set $E=\{e_1,\ldots,e_m\}$ of the graph $G=(V,E)$ is linearly ordered, that is $e_1<e_2<\cdots <e_m$. Let $H$ be a \emph{spanning forest} of $G$, which is a forest $H=(V,F)$ with the same vertex set as $G$ and $F\subseteq E$. An edge $e\in E\setminus F$ is \emph{externally active} with respect to the forest $H$ if it is the largest edge in a cycle of even length of $H+e$. We denote by $\mathrm{ext}(H)$ the number of externally active edges of $H$. Note that our definition of external activity is slightly different from that of Tutte \cite{Tutte1954}. \begin{theorem}[forest representation, \cite{Dod2015}]\label{theo:forest_representation} The bipartition polynomial of a graph $G=(V,E)$ satisfies \begin{equation} \begin{array}{l} B(G;x,y,z) = {\displaystyle\sum_{\substack{H \text{ is spanning}\\ \text{ forest of }G }} \hspace{-10pt} (1+x)^{\mathrm{iso}(H)} z^{n-k(H)} (1+z)^{\mathrm{ext}(H)}} \\ {\displaystyle\hspace{70pt}\times \hspace{-10pt} \prod_{(S\cup T,A)\in\mathrm{Comp}(H)} \hspace{-25pt}(x^{|S|}y^{|T|}+x^{|T|}y^{|S|})}. \end{array} \end{equation} \end{theorem} \begin{remark} In \cite{Dod2015}, Theorems 2, 3, and 4 are proven for simple graphs only. However, the generalization to non-simple graphs is straightforward. \end{remark} We also need the following result, which is likewise proven in \cite{Dod2015}. \begin{theorem}\label{theo:bip_multiplicative} Let $G$ be a graph consisting of $k$ components $G_1,\ldots,G_k$.
Then \[ B(G;x,y,z) = \prod_{i=1}^{k} B(G_i;x,y,z). \] \end{theorem} \subsection{Vertex and Edge Deletion} First we consider decompositions for the bipartition polynomial of a graph with respect to local vertex and edge operations. \begin{theorem} The bipartition polynomial of a graph $G=(V,E)$ satisfies for each vertex $v\in V$ the relation \begin{align*} B(G;x,y,z) &= (1+x) B(G-v;x,y,z) \\ & + \hspace{-10pt}\sum_{\substack{(S\cup T,F) \text{ conn. bip.}\\v\in S\cup T}} \hspace{-20pt} z^{|F|}(x^{|S|}y^{|T|}+x^{|T|}y^{|S|})B(G-(S\cup T);x,y,z) \\ &= (1+x) B(G-v;x,y,z) + \hspace{-10pt}\sum_{\substack{(S\cup T,F) \text{ tree of } G\\v\in S\cup T}} \hspace{-20pt} z^{|F|-1}(1+z)^{\mathrm{ext}(S\cup T,F)} \\ & \times (x^{|S|}y^{|T|}+x^{|T|}y^{|S|})B(G-(S\cup T);x,y,z), \end{align*} where the first sum is taken over all proper subgraphs of $G$ that contain $v$, and the second sum is over all nontrivial trees (having at least one edge) of $G$ that contain the vertex $v$. \end{theorem} \begin{proof} We prove the first equality; the second one can be shown similarly. From Theorem \ref{theo:bip_representation}, we obtain \begin{align*} B(G;x,y,z) &= \hspace{-14pt}\sum_{\substack{F\subseteq E \\(V,F)\text{ is bipartite}}} \hspace{-20pt} z^{|F|}(1+x)^{\mathrm{iso}(V,F)} \hspace{-20pt} \prod_{(S\cup T,A)\in\mathrm{Comp}(V,F)} \hspace{-25pt}(x^{|S|}y^{|T|}+x^{|T|}y^{|S|}) \\ &= \hspace{-14pt}\sum_{\substack{F\subseteq E\setminus \partial v \\(V,F)\text{ is bipartite}}} \hspace{-20pt} z^{|F|}(1+x)^{\mathrm{iso}(V,F)} \hspace{-20pt} \prod_{(S\cup T,A)\in\mathrm{Comp}(V,F)} \hspace{-25pt}(x^{|S|}y^{|T|}+x^{|T|}y^{|S|}) \\ &+ \hspace{-14pt}\sum_{\substack{F\subseteq E,\, F\cap \partial v\neq \emptyset \\(V,F)\text{ is bipartite}}} \hspace{-20pt} z^{|F|}(1+x)^{\mathrm{iso}(V,F)} \hspace{-20pt} \prod_{(S\cup T,A)\in\mathrm{Comp}(V,F)} \hspace{-25pt}(x^{|S|}y^{|T|}+x^{|T|}y^{|S|}) \\ &= (1+x) B(G-v;x,y,z) \\ & + \hspace{-10pt}\sum_{\substack{(S\cup T,F) \text{ conn.
bip.}\\v\in S\cup T}} \hspace{-20pt} z^{|F|}(x^{|S|}y^{|T|}+x^{|T|}y^{|S|})B(G-(S\cup T);x,y,z). \end{align*} The last equality results from factoring out the term of the product that corresponds to the component containing $v$ and applying Theorem \ref{theo:bip_multiplicative}. \end{proof} The proof of the next statement can be performed in the same way. \begin{theorem} Let $G=(V,E)$ be a graph and $e\in E$; then \begin{align*} B(G;x,y,z) &= B(G-e;x,y,z) \\ & + \hspace{-10pt}\sum_{\substack{(S\cup T,F) \text{ conn. bip.}\\e\in F}} \hspace{-20pt} z^{|F|}(x^{|S|}y^{|T|}+x^{|T|}y^{|S|})B(G-(S\cup T);x,y,z). \end{align*} \end{theorem} \section{Graph Polynomials that can be Derived from the Bipartition Polynomial} \label{sec:poly} Several well-known graph polynomials can be obtained by substitution of the variables of the bipartition polynomial and (in some cases) by multiplication with a certain factor that can easily be obtained from graph parameters like order, size, and the number of components. First we recall some results from \cite{Dod2015}. \paragraph{Domination polynomial} The \emph{domination polynomial} of a graph $G=(V,E)$, introduced in \cite{Arocha2000}, is the ordinary generating function for the number of dominating sets of $G$. Let $d_k(G)$ be the number of dominating sets of size $k$ of $G$. We define the domination polynomial of $G$ by \[ D(G,x) = \sum_{k=0}^{n}d_k(G)x^k. \] The domination polynomial satisfies, \cite{Dod2015}, \begin{equation} D(G,x) =(1+x)^{n}B\left(G;\frac{-1}{1+x},\frac{x}{1+x},-1\right). \label{eq:dom_bip} \end{equation} A \emph{generalized domination polynomial} is given by \[ B(G;x,1-y,-1) = \sum_{W\subseteq V} x^{|W|}y^{|N_G(W)|}, \] which follows directly from Theorem \ref{theo:prod_representation}. The variable $y$ counts here the number of vertices outside $W$ that are dominated by the set $W$.
Consequently, the coefficient of $x^iy^j$ in $B(G;x,1-y,-1)$ gives the number of vertex subsets of cardinality $i$ that dominate a vertex set of size $j$. \paragraph{Ising polynomial} The \emph{Ising polynomial} of a graph $G$ is defined by \[ Z(G;x,y) = x^ny^m\sum_{W\subseteq V}x^{-|W|}y^{-|\partial W|}. \] The Ising polynomial has been introduced differently in \cite{Andren2009} by \[ \tilde{Z}(G;x,y) = \sum_{\sigma\in \Omega}x^{\epsilon(\sigma)}y^{M(\sigma)}, \] where $\sigma : V\rightarrow \{-1,1\}$ is a \emph{state} of $G=(V,E)$, $\sigma(v)$ the \emph{magnetization} of $v\in V$, and $\Omega$ the set of all states of $G$. The sum \[ M(\sigma)=\sum\limits_{v\in V}\sigma(v) \] is called the \emph{magnetization} of $G$ with respect to $\sigma$. The parameter $\epsilon(\sigma,e)$ defines the \emph{energy} of the edge $e\in E$. The energy of $G$ with respect to $\sigma$ is \[ \epsilon(\sigma)=\sum\limits_{e\in E}\epsilon(\sigma,e). \] These notions stem from the interpretation of the Ising polynomial (Ising model) in statistical physics. The relation between the two representations of the Ising polynomial given above is \[ \tilde{Z}(G;x,y) = x^{-n}y^{-m}Z(G;x^2,y^2). \] Generalizations and modifications of the Ising polynomial and their efficient computation in graphs of bounded clique-width are considered in \cite{Kotek2015}. The Ising polynomial can be obtained from the bipartition polynomial by, \cite{Dod2015}, \begin{equation} Z(G;x,y) = x^{n}y^{m}B\left(G;\frac{1}{x},1,\frac{1}{y}-1\right). \label{eq:Ising_bip} \end{equation} \paragraph{Cut polynomial} The \emph{cut polynomial} of a graph $G=(V,E)$ is the ordinary generating function for the number of cuts of $G$, \[ C(G,z) = \frac{1}{2^{k(G)}}\sum_{W\subseteq V}z^{|\partial W|}. \] The relation between the cut polynomial and the bipartition polynomial is given by, see \cite{Dod2015}, \begin{equation} C(G,z) = \frac{1}{2^{k(G)}} B(G;1,1,z-1).
\label{eq:cut_bip} \end{equation} \begin{corollary}\label{coro:forest} A graph $G$ of order $n$ with $k$ components is a forest if and only if $C(G,z)=(1+z)^{n-k}$. \end{corollary} \begin{proof} The statement follows from the fact that forests are exactly the graphs in which every edge subset is a cut. \end{proof} The polynomial \begin{equation} B(G;x,1,z-1) = \sum_{W \subseteq V} x^{|W|}z^{|\partial W|} \label{eq:gen_cut} \end{equation} can be considered a generalized cut polynomial; it is equivalent to the Ising polynomial. Equation (\ref{eq:gen_cut}) also implies \begin{equation} \left. \frac{\partial}{\partial x}B(G;x,1,t-1) \right\vert_{x=0} = \sum_{v\in V} t^{\deg v}, \label{eq:deg_gen} \end{equation} which is the \emph{degree generating function} of $G$. \paragraph{Euler polynomial} An \emph{Eulerian subgraph} of a graph $G=(V,E)$ is a spanning subgraph of $G$ in which all vertices have even degree. The \emph{Euler polynomial} of $G$ is defined by \[ \mathcal{E}(G,z) = \sum_{\substack{F\subseteq E \\ (V,F)\text{ is Eulerian}}} \hspace{-20pt}z^{|F|}. \] In \cite{Aigner2007} it is shown that the Euler polynomial is related to the Tutte polynomial via \[ \mathcal{E}(G,z) = (1-z)^{m-n+k(G)}z^{n-k(G)} T\left(G;\frac{1}{z},\frac{1+z}{1-z} \right). \] There is also a nice direct relation between the cut polynomial and the Euler polynomial, likewise shown in \cite{Aigner2007}, \begin{equation} C(G,z) = \frac{(1+z)^{|E|}}{2^{|E|-|V|+k(G)}} \;\mathcal{E}\left(G,\frac{1-z}{1+z}\right). \label{eq:cut_Euler} \end{equation} Solving Equation (\ref{eq:cut_Euler}) for $\mathcal{E}(G,z)$ and substituting $C$ according to Equation (\ref{eq:cut_bip}) yields \begin{equation} \mathcal{E}(G,z) = \frac{(1+z)^{|E|}}{2^{|V|}}B\left(G;1,1,\frac{-2z}{1+z} \right). \label{eq:euler_bip} \end{equation} Let $G$ be a plane graph (a planar graph with a given embedding in the plane) and $G^*$ its geometric dual.
The set of cycles of $G$ is in one-to-one correspondence with the set of cuts of $G^*$, which yields \begin{equation*} \mathcal{E}(G,z) = C(G^*,z) \end{equation*} or, corresponding to Equations (\ref{eq:cut_bip}) and (\ref{eq:euler_bip}), \begin{equation} (1+z)^m B\left(G;1,1,\frac{-2z}{1+z}\right) = 2^{n-1}B(G^*;1,1,z-1). \label{eq:planar} \end{equation} \paragraph{Van der Waerden polynomial} The definition of this polynomial is presented in \cite{Andren2009} and is based on an idea given in \cite{Waerden1941}. Let $G=(V,E)$ be a graph of order $n$ and size $m$. Let $w_{ij}(G)$ be the number of subgraphs of $G$ with exactly $j$ edges and $i$ vertices of odd degree. The \emph{van der Waerden polynomial} of $G$ is defined by \[ W(G;x,y) = \sum_{i=0}^{n}\sum_{j=0}^{m}w_{ij}(G)x^i y^j. \] From \cite{Andren2009} (Theorem 2.9), we easily obtain \[ W(G;x,y) = \left(\frac{1-x}{2}\right)^n(1-y)^m Z\left(G;\frac{1+x}{1-x},\frac{1+y}{1-y}\right), \] where $Z$ is the Ising polynomial. The van der Waerden polynomial can be derived from the bipartition polynomial by \begin{equation} W(G;x,y) = \left(\frac{1+x}{2}\right)^n(1+y)^m B\left(G;\frac{1-x}{1+x},1,-\frac{2y}{1+y}\right), \label{eq:vdwarden_bipart} \end{equation} where we use Equation (\ref{eq:Ising_bip}). \paragraph{Matching polynomial} The \emph{matching polynomial}, see \cite{Farrell1979}, of $G$ is defined by \[ M(G,x) = \sum_{\substack{F\subseteq E\\ F\text{ matching in }G} } \hspace{-20pt}x^{|F|}. \] Notice that the definition given here corresponds to the \emph{matching generating polynomial} from \cite{Lovasz2009}. A subgraph of $G$ with exactly $k$ edges and exactly $2k$ vertices of odd degree is a matching in $G$, which yields \[ M(G,t) = \lim_{y\rightarrow 0} W(G;t y^{-\frac{1}{2}},y).
\] Substituting Equation (\ref{eq:vdwarden_bipart}) for $W$, we obtain \begin{equation} M(G,t) = \lim_{y\rightarrow 0} \left(\frac{\sqrt{y}+t}{2\sqrt{y}}\right)^n (1+y)^m B\left(G;\frac{\sqrt{y}-t}{\sqrt{y}+t},1,-\frac{2y}{1+y}\right). \label{eq:match_bipart} \end{equation} \paragraph{Independence polynomial} The \emph{independence polynomial} of a graph $G=(V,E)$ is the ordinary generating function for the number of independent sets of $G$, \[ I(G,x) = \sum_{\substack{W\subseteq V\\ W\text{ independent in }G}} \hspace{-25pt}x^{|W|}. \] If $G$ is a simple $r$-regular graph, then \[ I(G,t) = \lim_{x\rightarrow0}B\left(G;tx^{r},1,\frac{1}{x}-1\right). \] The proof of this relation is given in \cite{Dod2015}. We can easily rewrite the last equation in order to avoid the limit: \[ I(G,t) = \frac{1}{2\pi}\int_{0}^{2\pi}B(G;te^{irx},1,e^{-ix}-1)dx. \] The substitution $x\mapsto e^{ix}$ transforms each power of $x$ into a periodic function whose period divides $2\pi$, so that the integration over $[0,2\pi]$ yields the constant term (with respect to $x$) multiplied by $2\pi$. \section{Polynomial Reconstruction} \label{sec:recon} One of the important questions about graph polynomials is their distinguishing power, which can be stated as follows: Let $\mathcal{C}$ be a graph class and let $P$ be a polynomial-valued isomorphism invariant defined on $\mathcal{C}$. Are there nonisomorphic graphs $G$ and $H$ in $\mathcal{C}$ such that $P(G) = P(H)$? Although we know that the bipartition polynomial cannot distinguish all graphs up to isomorphism, see \cite{Dod2015}, we do not know yet whether there are two nonisomorphic trees with the same bipartition polynomial. Instead, as trees are well-known to be `reconstructible' in various senses, we show that the bipartition polynomial of a graph is edge-reconstructible from its polynomial-deck, which shall be defined precisely below. For a graph $G$, its \emph{poly\-no\-mi\-al-deck\xspace} is the multiset $\{B(G-e)\}_{e \in E(G)}$.
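Such questions can be explored by brute force for small graphs. The following sketch is our own illustration, not the paper's code: it tabulates $B$ from a subset expansion, $B(G;x,y,z)=\sum_{W\subseteq V} x^{|W|}\prod_{v\notin W}\big[y\big((1+z)^{|N(v)\cap W|}-1\big)+1\big]$, which we assume here because it is consistent with the substitution identities of the previous section, and it exhibits a pair of graphs with equal polynomial-decks but different bipartition polynomials.

```python
# Brute-force bipartition polynomial of a simple graph, stored as a dict
# mapping monomials (x-exp, y-exp, z-exp) to integer coefficients.
# Assumed subset expansion (consistent with the substitution identities):
#   B(G;x,y,z) = sum_{W subset V} x^|W| prod_{v not in W} [y((1+z)^{|N(v) cap W|} - 1) + 1].
from itertools import combinations
from collections import Counter
from math import comb

def padd(p, q):
    """Sum of two monomial-dict polynomials."""
    r = Counter(p)
    r.update(q)
    return {m: c for m, c in r.items() if c != 0}

def pmul(p, q):
    """Product of two monomial-dict polynomials."""
    r = Counter()
    for (a1, b1, c1), u in p.items():
        for (a2, b2, c2), v in q.items():
            r[(a1 + a2, b1 + b2, c1 + c2)] += u * v
    return {m: c for m, c in r.items() if c != 0}

def bipartition_poly(n, edges):
    """B(G;x,y,z) for the simple graph on vertices 0..n-1 with the given edges."""
    B = {}
    for size in range(n + 1):
        for W in combinations(range(n), size):
            Wset = set(W)
            term = {(size, 0, 0): 1}                  # x^|W|
            for v in range(n):
                if v in Wset:
                    continue
                d = sum(1 for a, b in edges
                        if (a == v and b in Wset) or (b == v and a in Wset))
                # factor y((1+z)^d - 1) + 1, expanded in z
                factor = {(0, 0, 0): 1}
                for t in range(1, d + 1):
                    factor[(0, 1, t)] = comb(d, t)
                term = pmul(term, factor)
            B = padd(B, term)
    return B

def poly_deck(n, edges):
    """Multiset of B(G-e) over all edges e (the polynomial-deck)."""
    return Counter(tuple(sorted(bipartition_poly(n, [f for f in edges if f != e]).items()))
                   for e in edges)

# C_3 + P_1 (triangle plus an isolated vertex) versus the star K_{1,3}:
c3_p1 = (4, [(0, 1), (1, 2), (0, 2)])
star = (4, [(0, 1), (0, 2), (0, 3)])
print(poly_deck(*c3_p1) == poly_deck(*star))                # True: equal decks
print(bipartition_poly(*c3_p1) == bipartition_poly(*star))  # False: different polynomials
```

Deleting any edge of either graph leaves a labeled copy of $P_3 + P_1$, so the two decks coincide, while the polynomials differ (for example, only the star has a bipartite spanning subgraph with three edges).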
We show that the bipartition polynomial is `edge-reconstructible' in most cases in the following sense: A graph $G$ is \emph{bp-re\-con\-struc\-ti\-ble\xspace} if whenever a graph $H$ has the same poly\-no\-mi\-al-deck\xspace as $G$ we have $B(H) = B(G)$. Unfortunately, there are some graphs with few edges that are not bp-re\-con\-struc\-ti\-ble\xspace. To describe such examples, let $P_s$ and $C_s$ denote the path and the cycle on $s$ vertices, respectively. We denote by $C_s + tP_1$ the disjoint union of $C_s$ and $t$ isolated vertices, and the graphs $P_s + tP_1$, $sP_2 + tP_1$ etc. are defined similarly. The graphs in each of the following lines have the same poly\-no\-mi\-al-deck\xspace but different bipartition polynomials. \begin{itemize} \item $C_2 + (t+2)P_1$, $P_3 + (t+1)P_1$ and $2P_2 + tP_1$ for $t \geq 0$. \item $C_3 + (t+1)P_1$ and $K_{1,3} + tP_1$ for $t \geq 0$. \end{itemize} Note that the graphs on each line for fixed $t$ not only have the same poly\-no\-mi\-al-deck\xspace but also have the same collection of one-edge-deleted subgraphs. We prove the following in this section. \begin{theorem} \label{theo:reconstruct} A graph $G$ is bp-re\-con\-struc\-ti\-ble\xspace unless $G$ is one of the exceptions above. In particular, all graphs with at least four edges are bp-re\-con\-struc\-ti\-ble\xspace. \end{theorem} \subsection{Graphs with Isolated Vertices} We shall use the following information on graphs that is deducible from the bipartition polynomial. The statement combines the results given in Equations (\ref{eq:n}), (\ref{eq:m}), (\ref{eq:deg_gen}), Proposition \ref{prop:bipart}, and Corollaries \ref{coro:comp_sizes}, \ref{coro:k(G)}, \ref{coro:forest}. \begin{theorem} \label{theo:bip-info-collection} Let $G$ be a graph. The bipartition polynomial of $G$ yields $|V(G)|$, $|E(G)|$, $k(G)$, $\text{iso}(G)$, the degree sequence, and the multiset of orders of all components of $G$.
We can also decide from $B(G)$ whether $G$ is bipartite, a forest, a path, or connected. (The last two properties follow from the other ones.) \end{theorem} We begin the proof of Theorem \ref{theo:reconstruct} with the case when two graphs $G$ and $H$ have different numbers of isolated vertices but the same poly\-no\-mi\-al-deck\xspace. Note that from Theorem \ref{theo:bip-info-collection}, we know that two graphs with a different number of isolated vertices have a different bipartition polynomial. \begin{lemma} \label{lem:dif-isol} Let $G$ and $H$ be two graphs having different numbers of isolated vertices. If $G$ and $H$ have the same poly\-no\-mi\-al-deck\xspace, then there exists $t \geq 0$ such that either \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \setlength{\parskip}{0pt} \item $\{G, H\} \subset \{C_2 + (t+2)P_1, P_3 + (t+1)P_1, 2P_2 + tP_1 \}$ or \item $\{G, H\} = \{C_3 + (t+1)P_1, K_{1,3} + tP_1 \}$. \end{itemize} \end{lemma} \begin{proof} Suppose $G$ and $H$ have the same poly\-no\-mi\-al-deck\xspace and $\text{iso}(G) = t$ while $\text{iso}(H) > t$. Since $\text{iso}(G-e) \leq t+2$ for every edge $e \in E(G)$, we have $\text{iso}(H) = t+1$ or $t+2$. As $\text{iso}(H-f) > t$ for all $f \in E(H)$, we have $\text{iso}(G-e) > \text{iso}(G)$ for all $e \in E(G)$, implying that every edge of $G$ is incident with a vertex of degree 1. That is, the components of $G$ are stars and isolated vertices. By Theorem \ref{theo:bip-info-collection}, we deduce that $H-f$ is a forest for every $f \in E(H)$. Hence either $H$ itself is a forest or $H = C_s + t'P_1$ for some $s$ and $t+1 \leq t' \leq t+2$. If $\text{iso}(H) = t+2$ then every edge removal from $G$ produces two new isolated vertices, so that $G = s'P_2 + tP_1$ for some $s'$. Moreover, no edge of $H$ is incident with a vertex of degree 1, that is, $H = C_s + (t+2)P_1$. Since $G$ and $H$ have the same order and an equal number of edges, we conclude $G = 2P_2 + tP_1$ and $H = C_2 + (t+2)P_1$.
Now we assume $\text{iso}(H) = t+1$. If $H = C_s + (t+1)P_1$ for some $s$, then for all $f \in E(H)$, $\text{iso}(H-f) = t+1$ and $H-f$ has maximum degree at most two. As $G$ and $H$ have the same poly\-no\-mi\-al-deck\xspace, the same holds for $G-e$ for all $e \in E(G)$. Since $G$ is a disjoint union of stars with $t$ isolated vertices, the only possibility for this case is $G = K_{1,3} + tP_1$ and $H = C_3 + (t+1)P_1$. Now we also assume that $H$ is a forest. Theorem \ref{theo:bip-info-collection} states that we can decide the orders of the components from the bipartition polynomial. If $G-e$ for some $e \in E(G)$ has three $P_2$-components, then so does $H-f$ for some $f \in E(H)$, and $H$ has a $P_2$-component. Removing its edge produces a subgraph with $t+3$ isolated vertices, which cannot be obtained from $G$ by removing only one edge. Thus for all $e \in E(G)$, $G-e$ can have at most two $P_2$-components and $G$ may have at most three $P_2$-components. If $G$ has three $P_2$-components, then they are the only nontrivial components of $G$. On the other hand, as $H$ is a forest, each nontrivial component of $H$ has at least two leaves, and removing a leaf-edge produces $t+2$ isolated vertices. The number of leaves of $H$ must be equal to the number of $P_2$-components of $G$, so that either $G = 3P_2 + tP_1$ or $H = P_s + (t+1)P_1$ for some $s$. It is easy to check that in this case the only possibility for a non-isomorphic pair $G$ and $H$ with the same poly\-no\-mi\-al-deck\xspace is $G = 2P_2 + tP_1$ and $H = P_3 + (t+1)P_1$. \end{proof} \subsection{Cyclic Graphs} Because of Lemma \ref{lem:dif-isol} we only need to compare those graphs without isolated vertices. The remaining part of our proof of Theorem \ref{theo:reconstruct} is presented in the following order. \begin{enumerate} \item Every non-bipartite graph except $C_3 + tP_1$ for $t \geq 1$ is bp-re\-con\-struc\-ti\-ble\xspace.
\item Every bipartite graph with a cycle except $C_2 + tP_1$ for $t \geq 2$ is bp-re\-con\-struc\-ti\-ble\xspace. \item Every forest except $P_3 + (t+1)P_1$, $2P_2 + tP_1$, $K_{1,3} + tP_1$ for $t \geq 0$ is bp-re\-con\-struc\-ti\-ble\xspace. \end{enumerate} The first two cases are simple, but the proof of the third is a bit longer, so we defer it to Section \ref{subsec:forest-recon}. Given a proper bipartite graph $K$ with bipartition $(U_1,U_2)$, let $m(K) = \min(|U_1|, |U_2|)$ and $M(K) = \max(|U_1|, |U_2|)$. Theorem \ref{theo:bip_representation} states \begin{equation} B(G;x,y,z) = \hspace{-15pt}\sum_{\substack{F \subseteq E \\ (V,F) \text{ bipartite}}} \hspace{-15pt} z^{|F|} (1+x)^{\text{iso}(V,F)} \hspace{-15pt} \prod_{K \in \text{Comp}(V,F)} \hspace{-10pt} [x^{M(K)} y^{m(K)} + x^{m(K)} y^{M(K)}]. \label{eq:th5} \end{equation} Let us define, for each $F \subseteq E$, a polynomial $\phi_G(F;x,y)$ or simply $\phi_G(F)$ as \begin{equation*} \phi_G(F) := \left\lbrace \begin{array}{l} (1+x)^{\text{iso}(V,F)} \hspace{-20pt} \prod\limits_{K \in \text{Comp}(V,F)} \hspace{-17pt} [x^{M(K)} y^{m(K)} + x^{m(K)} y^{M(K)}]\text{ if $(V,F)$ is bipartite,} \\ 0 \text{ otherwise.} \end{array}\right. \end{equation*} We also write $B(G)$ instead of $B(G;x,y,z)$ for convenience. With this definition, Equation (\ref{eq:th5}) simplifies to \begin{equation} \label{eq:biprep_short} B(G) = \sum_{F \subseteq E} z^{|F|} \phi_G(F). \end{equation} We consider the sum $B'(G) = \sum\limits_{e \in E} B(G-e)$ for a given multiset $\{ B(G-e)\}_{e \in E}$. For each $F \subsetneq E$, the term $z^{|F|} \phi_G(F)$ appears precisely $|E| - |F|$ times on the right-hand side, so that \begin{equation*} B'(G) = \sum_{F \subsetneq E} \big( |E| - |F| \big) z^{|F|} \phi_G(F). \end{equation*} Thus for each $k = 0, 1, \ldots, |E|-1$, the coefficient of $z^k$ in $B(G)$ is the coefficient of $z^k$ in $B'(G)$ divided by $|E| - k$. The only remaining term to decide $B(G)$ is $z^{|E|} \phi_G(E)$.
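This division step can be verified numerically on a small example. The sketch below is our own illustration: it recomputes $B$ by brute force from a subset expansion that we assume because it is consistent with the substitution identities of the previous section, and it checks on $C_4$ that every coefficient of $z^k$ with $k < |E|$ is recovered from the deck sum.

```python
# Numerical check of the deck-division step on C_4 (a sketch; the subset
# expansion below is an assumption consistent with the paper's substitution
# identities, and all function names are ours).
from itertools import combinations
from collections import Counter
from math import comb

def padd(p, q):
    """Sum of two monomial-dict polynomials {(x-exp, y-exp, z-exp): coeff}."""
    r = Counter(p)
    r.update(q)
    return {mono: c for mono, c in r.items() if c != 0}

def pmul(p, q):
    """Product of two monomial-dict polynomials."""
    r = Counter()
    for (a1, b1, c1), u in p.items():
        for (a2, b2, c2), v in q.items():
            r[(a1 + a2, b1 + b2, c1 + c2)] += u * v
    return {mono: c for mono, c in r.items() if c != 0}

def bipartition_poly(n, edges):
    """B(G;x,y,z) via B = sum_W x^|W| prod_{v not in W} [y((1+z)^{|N(v) cap W|} - 1) + 1]."""
    B = {}
    for size in range(n + 1):
        for W in combinations(range(n), size):
            Wset = set(W)
            term = {(size, 0, 0): 1}
            for v in range(n):
                if v in Wset:
                    continue
                d = sum(1 for a, b in edges
                        if (a == v and b in Wset) or (b == v and a in Wset))
                factor = {(0, 0, 0): 1}
                for t in range(1, d + 1):
                    factor[(0, 1, t)] = comb(d, t)
                term = pmul(term, factor)
            B = padd(B, term)
    return B

n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0)]   # the cycle C_4, |E| = 4
B = bipartition_poly(n, edges)
deck_sum = {}                                     # B'(G) = sum_e B(G - e)
for e in edges:
    deck_sum = padd(deck_sum, bipartition_poly(n, [f for f in edges if f != e]))
m = len(edges)
for k in range(m):
    # the z^k part of B'(G) equals (|E| - k) times the z^k part of B(G)
    lhs = {(i, j): c for (i, j, t), c in deck_sum.items() if t == k}
    rhs = {(i, j): (m - k) * c for (i, j, t), c in B.items() if t == k}
    assert lhs == rhs
print("z^0 .. z^%d coefficients of B(C_4) recovered from its deck" % (m - 1))
```

Only the top coefficient, $z^{|E|}\phi_G(E)$, is not determined this way, exactly as the argument above states.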
Therefore, to show $G$ is bp-re\-con\-struc\-ti\-ble\xspace, it is enough to show that if $H = (V',E')$ is another graph with the same poly\-no\-mi\-al-deck\xspace as $G$, then $\phi_H(E') = \phi_G(E)$. Now we show that nonbipartite graphs are bp-re\-con\-struc\-ti\-ble\xspace except $C_3 + tP_1$ for $t \geq 1$. \begin{lemma} \label{lem:recon1} Every nonbipartite graph except $C_3 + tP_1$ for $t \geq 1$ is bp-re\-con\-struc\-ti\-ble\xspace. \end{lemma} \begin{proof} By Lemma \ref{lem:dif-isol}, it is enough to show that if $G$ is not bipartite, $\text{iso}(G) = \text{iso}(H)$ and $H$ has the same poly\-no\-mi\-al-deck\xspace as $G$, then $B(G) = B(H)$. We may additionally assume that $G$ and $H$ have no isolated vertices. Suppose $G = (V,E)$ is not bipartite, $\text{iso}(G) = 0$ and let $D = \{B(G-e)\}_{e \in E}$. Let $H = (V',E')$ be a graph with $\text{iso}(H) = 0$ whose poly\-no\-mi\-al-deck\xspace is equal to $D$ as a multiset. If $G-e$ is not bipartite for some $e \in E$ then from the corresponding bipartition polynomial, we infer that $H-e'$ is nonbipartite for some $e' \in E'$ and $\phi_H(E') = 0$, that is, $B(G) = B(H)$. If there is no such $e$, then $G$ is an odd cycle. Applying Theorem \ref{theo:bip-info-collection} to $D$, we deduce that every one-edge-deleted subgraph of $H$ is a path with an odd number of vertices, and the only graph with this property is an odd cycle, and hence $\phi_H(E') = 0$. \end{proof} We now suppose that $G = (V,E)$ is bipartite. Note that the degree of $\phi_G(F)$ is precisely $|V|$. Since \[ x^{M(K)}y^{m(K)} + x^{m(K)}y^{M(K)} = (xy)^{m(K)} \left( x^{M(K) - m(K)} + y^{M(K) - m(K)} \right), \] to decide $\phi_G(E)$ we only need $M(K) - m(K)$ for each nontrivial component $K$ in $G$.
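These per-component imbalances $M(K) - m(K)$ can be read off by one 2-coloring pass per component. A minimal stdlib sketch (our own illustration; the function name is ours):

```python
# Per-component bipartition imbalances ||U| - |V|| of a bipartite graph,
# plus the number of isolated vertices; a sketch (names are ours).
from collections import deque

def component_imbalances(n, edges):
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    color, imbalances, isolated = {}, [], 0
    for s in range(n):
        if s in color:
            continue
        color[s] = 0
        counts = [1, 0]               # sizes of the two color classes
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in color:
                    color[w] = 1 - color[v]
                    counts[color[w]] += 1
                    queue.append(w)
                elif color[w] == color[v]:
                    raise ValueError("graph is not bipartite")
        if counts[0] + counts[1] == 1:
            isolated += 1             # a trivial component
        else:
            imbalances.append(abs(counts[0] - counts[1]))
    return sorted(imbalances), isolated

# K_{1,3} + 2 P_1: one nontrivial component with imbalance 2, two isolated vertices
print(component_imbalances(6, [(0, 1), (0, 2), (0, 3)]))   # ([2], 2)
```

Together with the count of isolated vertices, this multiset is precisely the data recorded in the type notation introduced next.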
If $G$ has $k$ nontrivial components with bipartitions $(U_i, V_i)$ for $i = 1, 2, \ldots, k$ and $t$ isolated vertices, then we say $G$ has \emph{type} \[ (a_1, a_2, \ldots, a_k, *, *, \ldots, *) \] where $a_i = \big||U_i| - |V_i| \big|$ and the number of $*$'s is $t$. We will ignore the order of entries in types. From now on we consider the type instead of $\phi_G(E)$. \begin{lemma} \label{lem:recon2} Let $G$ be a bipartite graph. If $G$ has a cycle then $G$ is bp-re\-con\-struc\-ti\-ble\xspace unless $G = C_2 + tP_1$ for some $t \geq 2$. \end{lemma} \begin{proof} By Lemmas \ref{lem:dif-isol} and \ref{lem:recon1}, it is enough to show that if $H = (V',E')$ is another bipartite graph with the same poly\-no\-mi\-al-deck\xspace as $G$ and $\text{iso}(G) = \text{iso}(H) = 0$, then the type of $H$ is uniquely determined. Note that the exceptions $C_2 + tP_1$ are automatically excluded since $t \geq 2$. If $G$ is connected, then $G-e$ is connected for some $e$, since $G$ has a cycle. Let $e' \in E'$ be an edge such that $B(H-e') = B(G-e)$. We know that $H$ is bipartite and, by Theorem \ref{theo:bip-info-collection}, $H-e'$ is connected. The coefficient of $z^{|E'| - 1}$ in $B(H-e') = B(G-e)$ tells us the type of $H-e'$, which must be the same as the type of $H$ and hence $\phi_H(E') = \phi_G(E)$. Suppose $G$ is not connected. Then $G-e$ contains a cycle for some $e \in E$, and by Theorem \ref{theo:bip-info-collection}, $H$ also has an edge $e'$ such that $H-e'$ contains a cycle. Thus $H$ has a cycle, and we choose $e'' \in E'$ such that $H-e''$ has the minimum number of components among the one-edge-deleted subgraphs of $H$. The components of $H$ are vertex-wise the same as the components of $H-e''$ and have precisely the same bipartitions. That is, the type of $H$ is the type of $H-e''$, which is again equal to the type of $G$.
\end{proof} \subsection{Bipartition Polynomials of Forests} \label{subsec:forest-recon} In this section we prove the following lemma, thereby completing the proof of Theorem \ref{theo:reconstruct}. \begin{lemma} \label{lem:recon3} Every forest except $2P_2 + tP_1$, $P_3 + (t+1)P_1$ and $K_{1,3} + tP_1$ for $t \geq 0$ is bp-re\-con\-struc\-ti\-ble\xspace. \end{lemma} To prove Lemma \ref{lem:recon3} for forests with at least four edges we show the following: \begin{lemma} \label{lem:forestType} Let $F$ be a forest. The type of $F$ is uniquely determined from the degree sequence of $F$ and the multiset consisting of types of $F-e$ for all $e \in E(F)$. \end{lemma} In \cite{Delorme2002}, it was shown that if $G$ has at least four edges, then the degree sequence of $G$ is completely determined from the degree sequences of its one-edge-deleted subgraphs. Theorem \ref{theo:bip-info-collection} states that the degree sequence is obtainable from the bipartition polynomial, so that Lemma \ref{lem:recon3} follows for forests with at least four edges. The missing cases for Lemma \ref{lem:recon3} without isolated vertices ($P_2$, $3P_2$, $P_3 + P_2$, and $P_4$ among the simple graphs, as well as the non-simple ones) are easy to check. We shall use some lemmas about trees. For the definition of the type of a bipartite graph see the discussion preceding Lemma \ref{lem:recon2}. In a tree, a vertex of degree 1 is a \emph{leaf} and an edge incident with a leaf is a \emph{leaf-edge}. An edge is \emph{internal} if it is not a leaf-edge. \begin{lemma} \label{lem:treetypes} Let $T$ be a tree with at least one edge. Let $(U,V)$ be the bipartition of $T$. \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item If $U$ has all the leaves, then $|U| > |V|$. \item If $V$ has only one leaf, then $|U| \geq |V|$. \item If $T$ has type $(a)$ for $a \geq 1$, then $T$ has two edges $e_1, e_2$ such that both $T-e_1$ and $T-e_2$ have type $(a-1,*)$. \item Suppose $T$ has type $(0)$.
If $T-e$ has type $(1,1)$ for every internal edge $e$, then the degrees of vertices of $T$ are all odd. \item Suppose $T$ has type $(2)$. If the types of $T-e$ with $*$ are all $(1,*)$, then either $T$ is $K_{1,3}$ or $T-f$ has type $(0,2)$ for some edge $f$. \end{enumerate} \end{lemma} \begin{proof} Let $u$ be a vertex in $U$. Consider $u$ as a root and direct every edge of $T$ away from $u$. Then \[ |U| = 1 + \sum_{v \in V} d^+(v) = 1 + \sum_{v \in V} \big( d(v) - 1 \big) = 1 + \sum_{v \in V} \big( d(v) - 2 \big) + |V|, \] so that \[ |U| - |V| = 1 + \sum_{v \in V} \big( d(v) - 2 \big) \tag{*} \] and (i) and (ii) follow immediately. Suppose $T$ has type $(a)$ for some $a \geq 1$. We may assume $|U| - |V| = a$. By (i) and (ii), $U$ contains at least two leaves, and removing the leaf-edge of either of them produces a forest of type $(a-1, *)$. Thus (iii) holds. Now we consider (iv). Suppose $T$ has type $(0)$ and $T-e$ has type $(1,1)$ for every internal edge $e$ of $T$. If $T$ consists of only one edge then the conclusion holds. Thus we assume that $T$ has at least one internal edge. Suppose, for a contradiction, that $T$ has a vertex $v$ of even degree. Since $T$ has type $(0)$, $v$ is incident to at least one internal edge. Let us say $v$ is adjacent to $s$ leaves and is incident to $t$ internal edges $e_1, e_2, \ldots, e_t$ where $e_i = vu_i$ for $i = 1, 2, \ldots, t$. Let us denote by $(U_i, V_i)$ the bipartition of the component of $T-e_i$ containing $u_i$ such that $u_i \in U_i$. By the assumptions on $T$, we know $|U_i| - |V_i|$ is odd for all $i$. We consider the component of $T-e_1$ containing $v$. Its type is given by \[ \left\vert \sum_{i=2}^t |U_i| + s - \sum_{i=2}^t |V_i| - 1 \right\vert, \] which is an even number, contradicting the assumption that $T-e_1$ has type $(1,1)$. Hence (iv) follows. Lastly we show (v). Let $T$ be a tree with bipartition $(U,V)$ such that $|U| - |V| = 2$. Suppose that for every leaf-edge $e$ of $T$, the forest $T-e$ has type $(1,*)$.
That is, $U$ contains all the leaves. From Equation $(*)$, we deduce that $V$ has a unique vertex of degree 3 and all other vertices in $V$ have degree 2. If $V$ has no vertex of degree 2 then $|V| = 1$ and $T$ is $K_{1,3}$. Suppose $V$ has a vertex, say $v$, of degree 2. Let $e,f$ be the edges incident with $v$. If $e$ is a leaf-edge then $T-f$ has type $(0,2)$. We assume that both $e$ and $f$ are internal edges. The graph $T-e$ has two components, say $T_1 = (W_1, E_1)$ and $T_2 = (W_2, E_2)$ where $f \in E_2$. Note that all the leaves of $T_1$ are in $U$ and all but one leaf of $T_2$ are in $U$. By (i) and (ii), we have \[ |U \cap W_1| > |V \cap W_1| \quad \text{ and } \quad |U \cap W_2| \geq |V \cap W_2|. \] Since $2 = |U| - |V| = (|U \cap W_1| - |V \cap W_1| ) + (|U \cap W_2| - |V \cap W_2| )$, we have either \[ |U \cap W_1| - |V \cap W_1| = 2 \text{ and } |U \cap W_2| - |V \cap W_2| = 0 \] or \[ |U \cap W_1| - |V \cap W_1| = 1 \text{ and } |U \cap W_2| - |V \cap W_2| = 1. \] In the former case, $T-e$ has type $(0,2)$; in the latter case, $T-f$ has type $(0,2)$. Thus (v) holds. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:forestType}] We shall divide the proof of Lemma \ref{lem:forestType} into three cases, depending on whether the forest $F$ has one component, two components, or more than two components. In each case we show how to retrieve the type of $F$. We call the multiset of the types of $F-e$ over all edges $e$ the \emph{type-deck} of $F$. The sub-multiset consisting of those types with $*$ is called the \emph{$*$-deck\xspace}. We can assume the following in all three cases. The reasoning is given below. \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \setlength{\parskip}{0pt} \item Each type in the $*$-deck\xspace contains precisely one $*$. \item If $F$ has more than one component, then at least one type in the $*$-deck\xspace has a zero as an entry. \item No type in the $*$-deck\xspace has more than one zero.
\end{itemize} As the degree sequence of $F$ is given by the assumption, we may assume that $F$ has no isolated vertices. If the $*$-deck\xspace of $F$ contains a type with two $*$'s, then the two isolated vertices came from deleting an edge in a $P_2$-component of $F$, so that we can recover the type of $F$ by replacing the two $*$'s with a zero. Thus we may assume that each type in the $*$-deck\xspace contains precisely one $*$. Let $m$ be the minimum of the integral entries in the types in the $*$-deck\xspace. If $m>0$, then $F$ cannot have a component of type $(m)$, by Lemma \ref{lem:treetypes} (iii). If $m \neq 1$, the entry $m$ is obtained from a component of type $(m+1)$, and the type of $F$ is obtained by replacing $(m,*)$ with $m+1$. If $m =1$ and $F$ has more than one component, then $F$ cannot have a component of type $(0)$, so that again, by replacing $(m,*)$ with $m+1$ we retrieve the type of $F$. Hence we may assume that some types in the $*$-deck\xspace have a zero. Suppose that a type in the $*$-deck\xspace has at least two zeros. Then $F$ has a component of type $(0)$. Among the types in the $*$-deck\xspace, we choose one with the minimum number of zeros and denote this type by $X$. The zeros in $X$ came directly from $F$, and there must be a 1 and a $*$, which were obtained by removing a leaf-edge of a component with type $(0)$. Thus we replace $(1,*)$ by $(0)$ to retrieve the type of $F$. Now we may assume that no type in the $*$-deck\xspace has more than one zero. Now we prove Lemma \ref{lem:forestType}. \textbf{Case 1.} $F$ is a tree. If the $*$-deck\xspace has $(a,*)$ and $(a+2,*)$ for some $a$, then the only possible type of $F$ is $(a+1)$. Suppose $(a,*)$ is the unique type in the $*$-deck\xspace. If $a \neq 1$ then by Lemma \ref{lem:treetypes} (iii) the type of $F$ is $(a+1)$. Suppose $(1,*)$ is the only type in the $*$-deck\xspace. $F$ has two possibilities: $(0)$ and $(2)$. If the type-deck has $(0,2)$ then clearly $(2)$ is the case.
If the degree sequence of $F$ is $(3,1,1,1)$, then $F$ is $K_{1,3}$. If the type-deck does not have $(0,2)$ and $F$ is not $K_{1,3}$, then by Lemma \ref{lem:treetypes} (v) the type of $F$ is $(0)$. This completes Case 1. \textbf{Case 2.} $F$ has precisely two components. By the assumptions, the $*$-deck\xspace has $(0,a,*)$ for some $a \geq 1$. If the $*$-deck\xspace also has $(0,a+2,*)$, then $F$ has type $(0,a+1)$. Thus we assume $(0,a,*)$ is the unique type in the $*$-deck\xspace with a 0. First we assume $a \geq 2$ and then consider the case $a=1$. Suppose $a \geq 2$. If $F$ had type $(0,a-1)$, then by Lemma \ref{lem:treetypes} (iii) its $*$-deck\xspace must have $(0,a-2,*)$, which is a contradiction. Thus $F$ has type $(0,a+1)$ or $(1,a)$. If the $*$-deck\xspace has $(1,a-1,*)$ then the type of $F$ cannot be $(0,a+1)$ and hence it is $(1,a)$. If $F$ has type $(1,a)$, then by Lemma \ref{lem:treetypes} (iii) the $*$-deck\xspace has $(1,a-1,*)$. That is, $F$ has type $(1,a)$ if and only if the $*$-deck\xspace has $(1,a-1,*)$, implying that we can decide the type of $F$ from its $*$-deck\xspace. Now we assume $a=1$ and the types in the $*$-deck\xspace with a 0 are $(0,1,*)$. $F$ has one of three types: $(0,0)$, $(1,1)$, and $(0,2)$. If the type-deck has $(0,0,2)$ then $F$ has type $(0,2)$. Suppose the type-deck does not have $(0,0,2)$. If $F$ has type $(0,2)$, then every leaf-edge of a component of type $(2)$ produces $(1,*)$ when deleted. By Lemma \ref{lem:treetypes} (v), the component is $K_{1,3}$ and the $*$-deck\xspace of $F$ has precisely three $(0,1,*)$. On the other hand, if $F$ had type $(1,1)$ then by Lemma \ref{lem:treetypes} (iii) there are at least four $(0,1,*)$ in the $*$-deck\xspace of $F$.
Now we assume the $*$-deck\xspace contains $(0,1,*)$ at least four times. $F$ has type $(0,0)$ or $(1,1)$. We may assume: \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item the $*$-deck\xspace has only $(0,1,*)$. \item the type-deck consists of precisely $(0,1,*)$ and $(0,1,1)$. \end{enumerate} The first assumption holds because the only other possible type in the $*$-deck\xspace is $(1,2,*)$, in which case $F$ has type $(1,1)$. For the second assertion, recall that we assumed that no component is $P_2$. Thus a component of type $(0)$ has an internal edge, and by removing an internal edge from a forest of type $(0,0)$ we get a type $(0,a,a)$. If $a \neq 1$ then $F$ has type $(0,0)$. If $F$ has type $(1,1)$, then assumption (i) above implies that, for both components, one part of the bipartition contains all the leaves. From the proof of Lemma \ref{lem:treetypes} (i), all vertices in the other part have degree 2. On the other hand, if $F$ has type $(0,0)$, then assumption (ii) above implies that for each component of $F$, every internal edge produces a forest of type $(1,1)$ when removed. Lemma \ref{lem:treetypes} (iv) asserts that all vertices of $F$ have odd degree. Therefore, $F$ has type $(1,1)$ if $F$ has a vertex of degree 2, and $F$ has type $(0,0)$ otherwise. That is, we can determine the type of $F$ given all the assumptions so far. This completes Case 2. \textbf{Case 3.} $F$ has more than two components. By the assumptions stated after Lemma \ref{lem:treetypes}, the $*$-deck\xspace has a type $(0,a_1, a_2, \ldots, a_k, *)$ where $a_i > 0$ for all $i$. The question is to decide whether $F$ has a component of type $(0)$ or not. If it does, then the type-$(0)$ component is unique and the $*$-deck\xspace has a type without a zero. We replace $(1,*)$ in it with $(0)$ to recover the type of $F$. If $F$ does not have a component of type $(0)$, then $F$ has type $(1, a_1, a_2, \ldots, a_k)$.
Suppose the $*$-deck\xspace of $F$ has another type $(0,b_1, b_2, \ldots, b_k, *)$ where $\{a_i : 1 \leq i \leq k\} \neq \{b_i : 1 \leq i \leq k\}$ as multisets. Then $F$ has a component of type $(0)$, since otherwise the zeros in the $*$-deck\xspace types come from a component of type $(1)$ of $F$ and all such types must be the same up to order of entries. Thus we assume $(0,a_1, a_2, \ldots, a_k, *)$ is the only type in the $*$-deck\xspace with a 0. If the type of $F$ had a 0 and two distinct nonzero numbers then the $*$-deck\xspace would contain two distinct types with a 0. Hence we may assume that \[ (0,a_1, a_2, a_3, \ldots, a_k, *) = (0, a-1, a, a, \ldots, a, *) \quad \text{ for some $a \geq 2$}. \] The type of $F$ is either $(0,a,a,a,\ldots,a)$ or $(1,a-1,a,a,\ldots,a)$. Lemma \ref{lem:treetypes} (iii) implies that in the latter case the $*$-deck\xspace has $(1,a-1,a-1,a,\ldots,a,*)$, whereas in the former case the $*$-deck\xspace cannot contain this type. Thus we can decide the type of $F$ from the $*$-deck\xspace in Case 3. In all cases we can decide the type of $F$ from the degree sequence and the type-deck of $F$. Therefore Lemma \ref{lem:forestType} holds and Lemma \ref{lem:recon3} follows. \end{proof} \section{Applications}\label{sec:appl} In this section we prove some facts about Euler polynomials, the number of dominating sets, and sums over spanning forests of a graph. The common theme of these results is that very simple proofs are obtained just by using different representations of the bipartition polynomial. We denote by $\mathcal{F}(G)$ the set of spanning forests of the graph $G$. \begin{theorem} The Euler polynomial of a graph $G$ of order $n$ and size $m$ satisfies \[ \mathcal{E}(G,z) = (1+z)^{m-n}(-z)^n \sum_{H\in\mathcal{F}(G)} \left(-\frac{1+z}{z}\right)^{k(H)} \left(\frac{1-z}{1+z}\right)^{\mathrm{ext}(H)}.
\] \end{theorem} \begin{proof} We use the forest representation of the bipartition polynomial that is given in Theorem \ref{theo:forest_representation} and Equation (\ref{eq:euler_bip}); we obtain \begin{eqnarray*} \mathcal{E}(G,z) &=& \frac{(1+z)^m}{2^n}B\left(G;1,1,\frac{-2z}{1+z} \right) \\ &=& \frac{(1+z)^m}{2^n} \hspace{-5pt}\sum_{H\in\mathcal{F}(G)} \hspace{-7pt} 2^{\mathrm{iso}(H)} \left(\frac{-2z}{1+z}\right)^{n-k(H)} \left(\frac{1-z}{1+z}\right)^{\mathrm{ext}(H)} \hspace{-3pt} 2^{k(H)-\mathrm{iso}(H)} \\ &=& (1+z)^{m-n}(-z)^n \sum_{H\in\mathcal{F}(G)} \left(-\frac{1+z}{z}\right)^{k(H)} \left(\frac{1-z}{1+z}\right)^{\mathrm{ext}(H)}. \end{eqnarray*} For the second equality, we used the simple relation $|\mathrm{Comp}(H)| + \mathrm{iso}(H) = k(H)$. \end{proof} The next theorem provides a representation of the Euler polynomial as a sum ranging over subsets of the vertex set. \begin{theorem} The Euler polynomial of a graph $G=(V,E)$ satisfies \[ \mathcal{E}(G,z) = \frac{(1+z)^{|E|}}{2^{|V|}}\sum_{W\subseteq V} \left(\frac{1-z}{1+z}\right)^{|\partial W|}. \] \end{theorem} \begin{proof} The result can be obtained via the multiplicative representation of the bipartition polynomial according to Theorem \ref{theo:prod_representation}. The substitution of this representation for the bipartition polynomial in Equation (\ref{eq:euler_bip}) yields \begin{eqnarray*} \mathcal{E}(G,z) &=& \frac{(1+z)^m}{2^n}B\left(G;1,1,\frac{-2z}{1+z} \right) \\ &=& \frac{(1+z)^m}{2^n} \sum_{W\subseteq V} \prod_{v\in N_{G}(W)} \left[\left(1-\frac{2z}{1+z}\right)^{|\partial v\cap \partial W|} \right] \\ &=& \frac{(1+z)^m}{2^n} \sum_{W\subseteq V} \left(\frac{1-z}{1+z}\right)^{|\partial W|}. \end{eqnarray*} The last equality follows from the fact that, for each $v\in N_G(W)$, $|\partial v\cap \partial W|$ is the number of edges that connect $v$ to a vertex of $W$. Hence, when we take the product over all vertices in $N_G(W)$, we count each edge in $\partial W$ exactly once.
\end{proof} The following statement can also be proven via the principle of inclusion--exclusion. However, our knowledge about the bipartition polynomial offers an even faster proof. \begin{theorem} The number $d(G)$ of dominating sets of a graph $G$ satisfies \[ d(G) = 2^n \sum_{W\subseteq V}(-1)^{|W|} \left(\frac{1}{2}\right)^{|N_G[W]|}. \] \end{theorem} \begin{proof} Here we use the product representation of the bipartition polynomial of a simple graph. The restriction to simple graphs does not change the domination polynomial. According to Equation (\ref{eq:dom_bip}), we obtain \begin{align*} d(G) &= D(G,1) \\ &= 2^n B\left(G;-\frac{1}{2},\frac{1}{2},-1\right) \\ &= 2^n \sum_{W\subseteq V}\left(-\frac{1}{2}\right)^{|W|} \prod_{v\in N_G(W)} \left[\left(\frac{1}{2}\right)\left[0^{|N_{G}(v)\cap W|}-1 \right] +1 \right]\\ &= 2^n \sum_{W\subseteq V}\left(-\frac{1}{2}\right)^{|W|} \prod_{v\in N_G(W)} \left(\frac{1}{2}\right) \\ &= 2^n \sum_{W\subseteq V}\left(-\frac{1}{2}\right)^{|W|} \left(\frac{1}{2}\right)^{|N_G(W)|}, \end{align*} which is equivalent to the statement of the theorem. \end{proof} \begin{theorem} Let $G=(V,E)$ be a graph with a linearly ordered edge set and $\mathcal{F}_0(G)$ the set of all spanning forests of $G$ with external activity 0. Then \[ \sum_{H\in\mathcal{F}_0(G)}(-2)^{k(H)} = (-1)^n 2^{k(G)}. \] \end{theorem} \begin{proof} The statement follows immediately from the forest representation of the bipartition polynomial that is given in Theorem \ref{theo:forest_representation} by substituting $x=1$, $y=1$, $z=-1$ in $B(G;x,y,z)$. \end{proof} \begin{theorem}\label{theo:biolor} Let $G=(V,E)$ be a simple undirected graph with $n$ vertices. The number of bicolored subgraphs of $G$ with exactly $i$ isolated vertices and exactly $j$ edges is given by the coefficient of $x^iz^j$ in the polynomial \[ (2x-1)^n B\left(G;\frac{1}{2x-1},\frac{1}{2x-1},z\right).
\] \end{theorem} \begin{proof} An edge subset $F\subseteq E$ of $G$ induces a subgraph that can be properly colored with two colors if and only if $(V,F)$ is a bipartite graph. The number of bicolored graphs with edge set $F$ is then given by $2^{k(V,F)}$. Substituting $x$ and $y$ by $1/(2x-1)$ in $B(G;x,y,z)$ and multiplying the resulting expression by $(2x-1)^n$ yield \[ (2x-1)^n \hspace{-14pt}\sum_{\substack{F\subseteq E \\(V,F)\text{ is bipartite}}} \hspace{-20pt} z^{|F|}\left(\frac{2x}{2x-1}\right)^{\mathrm{iso}(V,F)} \hspace{-20pt} \prod_{(U,A)\in\mathrm{Comp}(V,F)} \hspace{-5pt}\left(\frac{2}{2x-1}\right)^{|U|}. \] Using the equations $\mathrm{iso}(V,F)+|\mathrm{Comp}(V,F)|=k(V,F)$ and \[ \mathrm{iso}(V,F) + \sum_{(U,A)\in\mathrm{Comp}(V,F)} |U| = n, \] we obtain \[ (2x-1)^n B\left(G;\frac{1}{2x-1},\frac{1}{2x-1},z\right) = \sum_{\substack{F\subseteq E \\(V,F)\text{ is bipartite}}} \hspace{-20pt} 2^{k(V,F)} x^{\mathrm{iso}(V,F)} z^{|F|}, \] which proves the statement. \end{proof} \begin{corollary} Let $G$ be a simple graph of order $n$. The number of bicolored subgraphs of $G$ without any isolated vertices is \[ (-1)^nB(G;-1,-1,1). \] \end{corollary} \begin{proof} This follows immediately from the last line of the proof of Theorem \ref{theo:biolor} by substituting $x=0$ and $z=1$. \end{proof} \section{Conclusions and Open Problems} The bipartition polynomial emerges as a powerful tool for proving equations in graphical enumeration. It shows nice relations to other graph polynomials, offers several useful representations, and is polynomially reconstructible. However, there are still many open questions for the bipartition polynomial; we consider the following ones most interesting: \begin{itemize} \item Equation (\ref{eq:planar}) gives a relation between the bipartition polynomial of a planar graph and its dual. However, this equation is restricted to the evaluation of $B(G;x,y,z)$ at $x=y=1$. Is there a way to generalize this result?
\item The Ising and matching polynomials of a graph $G$ can be derived from the corresponding polynomials of the complement $\bar{G}$. Can we calculate the bipartition polynomial of a graph from the bipartition polynomial of its complement? \item There are known pairs of nonisomorphic graphs with the same bipartition polynomial. However, despite extensive computer search, we could not find a pair of nonisomorphic trees with coinciding bipartition polynomials. We know that no such pair of trees of order less than 19 exists. Is the bipartition polynomial able to distinguish all nonisomorphic trees? \end{itemize} \section{Acknowledgments} We thank Jo Ellis-Monaghan, Andrew Goodall, Johann A. Makowsky, and Iain Moffatt -- the organizers of the Dagstuhl seminar \emph{Graph Polynomials: Towards a Comparative Theory} (2016). This excellent and fruitful workshop initiated the collaboration of the authors of this paper. \printbibliography \end{document}
\begin{document} \title{A Convex Sparse PCA for Feature Analysis} \author{Xiaojun Chang, Feiping Nie, Yi Yang, and~Heng~Huang \IEEEcompsocitemizethanks{\IEEEcompsocthanksitem Xiaojun Chang and Yi Yang are with School of Information Technology and Electrical Engineering, The University of Queensland, Australia. (E-mail: [email protected]; [email protected]).\protect\\ \IEEEcompsocthanksitem Feiping Nie and Heng Huang are with Department of Computer Science and Engineering, University of Texas at Arlington. (E-mail: [email protected], [email protected]).} \thanks{}} \markboth{Journal of \LaTeX\ Class Files,~Vol.~6, No.~1, January~2007} {Shell \MakeLowercase{\textit{et al.}}: Bare Demo of IEEEtran.cls for Computer Society Journals} \IEEEcompsoctitleabstractindextext{ \begin{abstract} Principal component analysis (PCA) has been widely applied to dimensionality reduction and data pre-processing for different applications in engineering, biology and social science. Classical PCA and its variants seek linear projections of the original variables to obtain a low dimensional feature representation with maximal variance. One limitation is that it is very difficult to interpret the results of PCA. In addition, the classical PCA is sensitive to noisy data. In this paper, we propose a convex sparse principal component analysis (CSPCA) algorithm and apply it to feature analysis. First we show that PCA can be formulated as a low-rank regression optimization problem. Based on this observation, we incorporate $l_{2,1}$-norm minimization into the objective function to make the regression coefficients sparse and thereby robust to outliers. In addition, based on the sparse model used in CSPCA, an optimal weight is assigned to each of the original features, which in turn provides the output with good interpretability. With the output of our CSPCA, we can effectively analyze the importance of each feature under the PCA criteria.
The objective function is convex, and we propose an iterative algorithm to optimize it. We apply the CSPCA algorithm to feature selection and conduct extensive experiments on six different benchmark datasets. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art unsupervised feature selection algorithms. \end{abstract} \begin{keywords} Principal Component Analysis, Convex PCA, Sparse PCA, Feature Analysis \end{keywords}} \maketitle \IEEEdisplaynotcompsoctitleabstractindextext \IEEEpeerreviewmaketitle \section{Introduction} In many machine learning and data mining applications, such as face recognition \cite{eigenfaces} \cite{PCAFR}, conceptual indexing \cite{ConceptualIndexing}, and collaborative filtering \cite{collaborativefiltering}, the dimensionality of the input data is usually very high. It is computationally expensive to analyze the high-dimensional data directly. Meanwhile, the noise in a representation may dramatically increase as the dimensionality grows \cite{lineardp} \cite{fsclustering} \cite{vpca}. To improve efficiency and accuracy, researchers have demonstrated that dimensionality reduction is one of the most effective approaches for data analysis, and it plays a significant role in data mining. Because of its simplicity and effectiveness, Principal Component Analysis (PCA) has been widely applied in various applications. The goal of PCA is to find a projection matrix that maximizes the variance of the samples after the projection, while preserving the structure of the original dataset as much as possible. PCA seeks a linear projection of the original high-dimensional feature vectors so as to obtain a low dimensional representation of the data, which captures as much information as possible. One may obtain principal components (PCs) by performing singular value decomposition (SVD) of the original data matrix and choose the first $k$ PCs to represent the data, which gives a more compact feature representation.
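To make the SVD route just described concrete, the following is a minimal NumPy sketch with toy data. Note that, for convenience, samples are stacked as rows here, whereas later sections of this paper stack them as columns; the data matrix, its dimensions, and $k$ are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))  # toy data: 100 samples (rows), 20 features

# Center the data, then take the SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep the first k principal components (rows of Vt) as the compact representation.
k = 5
PCs = Vt[:k]          # k x 20: the top-k principal directions
Z = Xc @ PCs.T        # 100 x k: low-dimensional representation of the data
```

The singular values `s` come out in descending order, so the retained components are exactly the directions of maximal variance.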
There are two main reasons why PCA usually obtains good performance in real-world applications: (1) all the PCs are uncorrelated; (2) minimal information loss is guaranteed by the fact that PCs sequentially capture the maximum variability among columns of the data matrix. Nevertheless, PCA still has some inherent drawbacks, which this paper will address. One problem of the classical PCA is that each PC is obtained by a linear combination of the original variables and the loadings are normally non-zero, which often makes it difficult to interpret the results. To address this problem, Zou et al. integrate the lasso penalty \cite{lasso}, which is a variable selection technique, into the regression criterion in \cite{spca}. In their paper, they propose a new approach for estimating PCs with sparse loadings, called sparse principal component analysis (SPCA). The lasso penalty is implemented via the elastic net, a generalization of the lasso proposed in \cite{elastic}. However, their algorithm is non-convex and it is difficult to obtain the global optimum. Thus the performance may vary dramatically across different local optima. Another drawback of the classical PCA methods is that they are least-squares estimation approaches, which are commonly known not to be robust in the sense that outlying measurements can arbitrarily skew the solution away from the desired solution \cite{robust}. To make PCA robust to outliers, Xu et al. \cite{rpcaso} propose to recover a low-rank matrix from highly corrupted measurements. It has been experimentally demonstrated in \cite{robust} that robust PCA gains promising performance on noisy data analysis. However, despite its robustness to outliers, the algorithm proposed in \cite{robust} is transductive, and is not able to deal with out-of-sample data which are unseen during the training phase. It is very restrictive to have all the data beforehand.
Therefore, the robust PCA algorithm proposed in \cite{rpcaso} is less practical for many real-world applications. In this paper, we propose a novel convex sparse PCA for feature analysis. It has been demonstrated in \cite{spca} that the sparse model is a good measure for feature analysis, especially for feature weighting. We therefore impose the $l_{2,1}$-norm on the regression coefficients so as to make our algorithm able to evaluate the importance of each feature. In addition, we adopt an $l_{2,1}$-norm based loss function, which is robust to outliers, to achieve robust performance. Different from \cite{robustPCAMAYI}, our algorithm is inductive and can be directly used to map unseen data which are outside the training set. We name the proposed algorithm Convex Sparse PCA (CSPCA). The main contributions of this paper can be summarized as follows: \begin{enumerate} \item We have theoretically proved the equivalence of the classical PCA and low-rank regression. \item The proposed algorithm combines the recent advances of sparsity and robust PCA into a joint framework to leverage their mutual benefit. To the best of our knowledge, this is the first convex sparse and robust PCA algorithm, which ensures that our algorithm always achieves the global optimum. \item Different from the existing robust PCA algorithms \cite{robustPCAMAYI} \cite{RPCAOP}, which can only deal with in-sample data, our algorithm is capable of mapping data which are unseen during the training phase. \item We propose an effective iterative algorithm to optimize the objective function, which simultaneously handles the $l_{2,1}$-norm minimization and the trace norm minimization. \end{enumerate} The rest of this paper is organized as follows. We briefly review related work on PCA, sparse PCA and robust PCA in Section 2. Then we elaborate on the formulation of our method in Section 3, followed by the proposed solution in Section 4.
Extensive experiments are conducted in Section 5 to evaluate the performance of the proposed algorithm. Section 6 concludes this paper. \section{Related Work} In this section, we briefly review three topics related to our work: the classical PCA, sparse PCA and robust PCA. To begin with, we first define the terms and notations which will be frequently used in this paper: (1) the data matrix, denoted by $X = [x_1, x_2, \cdots , x_n]$, where $x_i \in \mathbb{R}^d ( 1 \leq i \leq n)$ is the $i$-th datum and $n$ is the total number of samples; (2) the projection matrix, denoted by $W$; (3) the Frobenius norm, denoted by $\|X\|_F$; (4) the trace norm, denoted by $\|W\|_*$. \subsection{The Classical PCA} The classical PCA is a statistical technique for dimensionality reduction. Classical PCA techniques, also known as Karhunen-Loeve methods, look for a dimensionality-reducing linear projection that maximizes the total scatter of all projected data points. To be more specific, PCA computes the PCs by performing an eigenvalue decomposition of the covariance matrix of all training data. In general, the entries of the corresponding PCs are dense and non-zero. The objective function of classical PCA is \begin{equation}\nonumber \max_{W^TW=I} Tr(W^TXX^TW), \end{equation} where $Tr(\cdot)$ denotes the trace operator. \subsection{Sparse PCA} A common limitation of the classical PCA is the lack of interpretability. All principal components are linear combinations of the variables and most of the factor coefficients are non-zero. To get more interpretable results, sparse PCA has been proposed, which leads to reduced computation time and improved generalization. There are numerous implementations of sparse PCA in the literature \cite{spcasemi} \cite{fullregSPCA} \cite{spectralSPCA} \cite{lassospca} \cite{sparsepcalowrank}. All of these methods aim to reduce the dimensionality and the number of explicitly used variables.
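For reference, the matrix norms used throughout this paper (the Frobenius norm, the trace norm, and the $l_{2,1}$-norm introduced later) can each be computed directly. The following is a small NumPy sketch on an arbitrary hypothetical matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((6, 4))  # an arbitrary matrix for illustration

fro = np.sqrt((W ** 2).sum())                          # Frobenius norm ||W||_F
trace_norm = np.linalg.svd(W, compute_uv=False).sum()  # trace norm ||W||_*: sum of singular values
l21 = np.linalg.norm(W, axis=1).sum()                  # l_{2,1}-norm: sum of row l2-norms
```

Both the trace norm and the $l_{2,1}$-norm dominate the Frobenius norm, since a sum of nonnegative numbers is at least the square root of the sum of their squares.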
A straightforward way is to manually set factor coefficients with values below a threshold to zero. This simple and naive thresholding method is often adopted in various applications. Nevertheless, it can be potentially misleading in several respects. Jolliffe et al. propose SCoTLASS to obtain modified principal components with possibly zero factor coefficients \cite{lassospca}. The lasso \cite{lasso} has been shown to be an effective variable selection method in a variety of applications. To further improve the lasso, Zou et al. propose the elastic net in \cite{elastic} for sparsity-based mining. Based on the fact that PCA can be reformulated as a regression-type optimization problem, Zou et al. \cite{spca} propose sparse PCA (SPCA) for estimating PCs with sparse factor coefficients, which can be formulated as follows: \begin{equation}\nonumber \begin{aligned} &\min_{A, B} \sum_{i=1}^n \|x_i - AB^Tx_i\|^2 + \lambda \sum_{j=1}^k \|\beta _j \|^2 + \sum_{j=1}^k \lambda _{1,j} \|\beta _j \|_1, \\ & ~~~ s.t. ~A^TA=I \end{aligned} \end{equation} where the columns $\beta_j$ of $B$ are the lasso estimates. All $k$ components share the same $\lambda$, and different $\lambda_{1,j}$'s are allowed for penalizing the loadings of different principal components. Although the algorithm has good performance and has attracted increasing attention, it is non-convex and it is difficult to find the global optimum. \subsection{Robust PCA} The goal of robust PCA is to recover a low-rank matrix $D$ from highly corrupted measurements $X = D + E$. The errors $E$ are supposed to be sparsely supported. Motivated by recent research on the robust solution of over-determined linear systems of equations in the presence of arbitrary but sparse errors and on computing low-rank matrix solutions to underdetermined linear equations, Wright et al. \cite{robustPCAMAYI} propose exact recovery of corrupted low-rank matrices by convex optimization.
A straightforward solution to robust PCA is to seek the matrix with the lowest rank that could have generated the data under the constraint of sparse errors. The objective function of robust PCA is formulated as follows: \begin{equation} \label{robustPCA1} \min_{D} \|X - D \|_0 + \gamma rank(D) \end{equation} However, since Eq. \eqref{robustPCA1} involves the $l_0$-norm, the objective function is highly non-convex and it is difficult to find an efficient solution. To obtain a tractable optimization problem, it is natural to replace the $l_0$-norm with the $l_1$-norm and the rank with the trace norm. The objective function can be rewritten as: \begin{equation} \min_{D} \|X - D\|_1 + \gamma \|D\|_* \end{equation} To make the objective function robust to outliers, we further replace the $l_1$-norm with the $l_{2,1}$-norm, since the $l_{2,1}$-norm was shown in \cite{RPCAOP} to make the objective function robust to outliers. The objective function then becomes: \begin{equation} \min_{D} \|X - D\|_{2,1} + \gamma \|D\|_* \end{equation} Although robust PCA has attracted much research attention in recent years, it still has a major limitation. As robust PCA is transductive, despite its good performance, it cannot be applied to out-of-sample problems. In other words, it cannot map data which are outside the training set into the low dimensional subspace. \section{The Proposed Method} In this section, we first demonstrate the equivalence of PCA and regression, followed by the formulation of the convex sparse PCA method. Then we describe a detailed approach to solve the objective function. \subsection{The Equivalence of Classical PCA and Regression} The proposed CSPCA is built upon our finding that the classical PCA can be reformulated as a regression problem. This conclusion provides new insights into PCA from a different perspective, and enables us to design the new convex sparse PCA algorithm. We begin with the following theorem.
\newtheorem{theorem}{Theorem} \begin{theorem} The classical PCA can be reformulated as a low-rank regression optimization problem as follows: \begin{equation} \min_{rank(W)=k} \|W^TX - X\|_F^2 \label{obj1} \end{equation} \end{theorem} \begin{proof} As we have the constraint $rank(W) = k$, we can write $W = BA^T$, where $A \in \mathbb{R}^{d \times k}$ has orthonormal columns, $B \in \mathbb{R}^{d \times k}$, and both $A$ and $B$ have rank $k$. The above objective function can be rewritten as follows: \begin{equation} \begin{aligned} & \min_{A,B \in \mathbb{R}^{d \times k}, A^TA=I} \|AB^TX - X \|_F^2 \\ = & \min_{A,B \in \mathbb{R}^{d \times k}, A^TA=I} Tr(B^TXX^TB) - 2Tr(B^TXX^TA). \end{aligned} \label{der1} \end{equation} By setting the derivatives of \eqref{der1} w.r.t $B$ to zero, we have: \begin{equation} XX^TB = XX^TA \end{equation} By denoting $X=U\Sigma V^T$, letting $U^\perp$ be a matrix whose columns form an orthonormal basis of the orthogonal complement of the column space of $U$, and writing $B=U\alpha + U^\perp \beta$ ($\beta$ an arbitrary matrix), we have the following deduction: \begin{equation} \begin{aligned} & XX^TB = XX^TA \\ \Rightarrow & U\Sigma ^2U^T(U\alpha + U^\perp \beta) = U \Sigma^2 U^TA \\ \Rightarrow & U\Sigma^2\alpha = U\Sigma^2U^TA \\ \Rightarrow & \alpha = U^TA. \end{aligned} \end{equation} Hence, we have $B=UU^TA + U^\perp\beta$. By incorporating $B$ into \eqref{obj1}, we obtain: \begin{equation} \begin{aligned} & \min_{\beta, A^TA=I} \|AA^TUU^TX + A\beta ^T (U^\perp)^TX - X \|_F^2 \\ \Rightarrow & \min_{\beta , A^TA=I} \|AA^TUU^TU\Sigma V^T + A\beta^T(U^\perp)^TU\Sigma V^T - X\|_F^2 \\ \Rightarrow & \min_{A^TA=I} \|AA^TX - X\|_F^2. \end{aligned} \end{equation} Hence, we have $A=U_1Q$, where $U_1$ consists of the first $k$ columns of $U$ and $Q$ is an arbitrary $k \times k$ orthogonal matrix.
And we can get \begin{equation} B = UU^TA + U^\perp\beta = UU^TU_1Q + U^\perp\beta = U_1Q + U^\perp \beta \end{equation} With the obtained $A$ and $B$, we can get: \begin{equation} \begin{aligned} W^T & = AB^T = U_1Q(U_1Q + U^\perp\beta)^T \\ & = U_1U_1^T + U_1Q\beta ^T {U^\perp}^T \end{aligned} \end{equation} The projected samples can be obtained as follows: \begin{equation} \begin{aligned} W^TX & = AB^TX = U_1U_1^TU\Sigma V^T + U_1Q\beta^T{U^\perp}^TU\Sigma V^T \\ & = U_1\Sigma _1V_1^T, \end{aligned} \end{equation} which is equivalent to the projected samples obtained by classical PCA. \end{proof} \emph{\textbf{The connection between the stated Theorem 1 and Theorem 2 in \cite{spca}:}} Zou et al. claim that when $\lambda > 0$, the PCA problem can be transformed into a regression-type problem by the following theorem: \begin{theorem} For any $\lambda > 0$, let \begin{equation} \begin{aligned} & (\hat{\alpha}, \hat{\beta}) = \arg\min_{\alpha , \beta} \sum_{i=1}^n \|x_i - \alpha \beta ^T x_i \|^2 + \lambda \|\beta\|^2 \\ & s.t.~\|\alpha\|^2 = 1. \end{aligned} \end{equation} \end{theorem} In the above theorem, $\beta$ is the lasso estimate and $\hat{\beta}$ is proportional to the leading principal component loading. Compared with Theorem 2 proposed in \cite{spca}, our contribution is that we prove that when $\lambda = 0$, the PCA problem is completely equivalent to a regression-type problem. \subsection{The Proposed Objective Function} In this section, we detail the proposed objective function of CSPCA. Motivated by previous work \cite{l21norm}, which demonstrates that the $l_{2,1}$-norm of $W$ is capable of making $W$ sparse, we propose our sparse PCA algorithm as follows: \begin{equation} \min_{rank(W)=k} \|(W^TX - X)^T \|_F^2 + \alpha \|W\|_{2,1}, \end{equation} where the $l_{2,1}$-norm of $W$ is defined as \begin{equation}\nonumber \|W\|_{2,1} = \sum_{i=1}^d \sqrt{\sum_{j=1}^d W_{ij}^2}.
\end{equation} In the above function, $\|(W^TX - X)^T\|_F^2$ is the most commonly used least-squares loss function and is mathematically tractable and easily implemented. However, there are still some issues which need further consideration. For example, it is well known that the least-squares loss function is very sensitive to outliers \cite{l21norm}. To address this issue, it is important for us to adopt a more robust loss function in the objective. In \cite{l21norm}, Nie et al. demonstrate that the $l_{2,1}$-norm is more capable of dealing with noisy data. Therefore, our proposed objective is rewritten as follows: \begin{equation} \min_{rank(W) = k} \|(W^TX - X)^T\|_{2,1} + \alpha \|W\|_{2,1} \end{equation} In the above formulation, the loss function $\|(W^TX - X)^T\|_{2,1}$ is robust to outliers, as proven in \cite{l21norm}. Meanwhile, $\|W\|_{2,1}$ in the regularization term is guaranteed to make $W$ sparse in rows. Next, we give the definition of the trace norm. The trace norm of $W$ is defined as \begin{equation} \|W\|_{*} = Tr\left((WW^T)^{\frac{1}{2}}\right). \end{equation} Following the work in \cite{robustPCAMAYI} \cite{ratiorules}, we restrict $W$ to be a low-rank matrix. To make the problem tractable, we propose to minimize the trace norm of $W$, which is the convex envelope of the rank of $W$. The objective function of the proposed algorithm is then given by: \begin{equation} \min_W \|(W^TX - X)^T\|_{2,1} + \alpha \|W\|_{2,1} + \beta \|W\|_{*} \label{finalobj} \end{equation} Compared with directly minimizing the rank of $W$, our proposed objective function as shown in \eqref{finalobj} is convex. We therefore name the proposed algorithm convex sparse PCA (CSPCA). Different from the previous robust PCA algorithms \cite{robustPCAMAYI} \cite{RPCAOP}, the proposed algorithm is inductive, and able to deal with out-of-sample data which are unseen in the training phase.
Given a new testing data point $x_t$, we can get its low dimensional representation by $W^Tx_t$ directly. \subsection{Optimization} As can be seen from Eq. \eqref{finalobj}, the proposed objective involves the $l_{2,1}$-norm, which is non-smooth, so the problem cannot be solved in closed form. Hence, we propose to solve it as follows. For an arbitrary matrix $A$, we denote its $i$-th row by $a^i$. By setting the derivatives w.r.t $W$ to zero, we have \begin{equation}\nonumber XD_1X^TW + \alpha D_2W + \beta D_3W = XD_1X^T. \end{equation} Then we have \begin{equation} W = (XD_1X^T + \alpha D_2 + \beta D_3)^{-1} (XD_1X^T), \label{devri} \end{equation} where $D_1$ and $D_2$ are diagonal matrices and $D_3$ is defined as follows. $ D_1 = \begin{bmatrix} \frac{1}{2\|e^1\|_2} & & \\ & \ddots & \\ & & \frac{1}{2\|e^n\|_2} \\ \end{bmatrix}$, where $E = (W^TX-X)^T$ and $e^i$ is the $i$-th row of $E$. $ D_2 = \begin{bmatrix} \frac{1}{2\|w^1\|_2} & & \\ & \ddots & \\ & & \frac{1}{2\|w^d\|_2} \\ \end{bmatrix}$, where $w^i$ is the $i$-th row of $W$. $D_3 = \frac{1}{2}(WW^T)^{-\frac{1}{2}}$ Based on the above deduction, we propose an iterative algorithm to optimize the objective function Eq. \eqref{finalobj}, which is summarized in Algorithm 1. In each iteration, $E$, $D_1$, $D_2$ and $D_3$ are updated using the current $W$, and then $W$ is updated based on the newly calculated $E$, $D_1$, $D_2$ and $D_3$. Once $W$ is obtained, for a new data point $x_i$ we get the projected representation by computing $W^Tx_i$. As the projection matrix $W$ is sparse, it effectively assigns a weight to each feature dimension and thus can be used for feature analysis. The importance score of each feature can be computed by $\|w^i\|_2(1\leq i \leq d)$. Then we can rank the features according to this score. In this sense, $W$ can be readily used for feature selection, and we only select the top $k$ features based on the score $\|w^i\|_2(1\leq i \leq d)$.
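The iterative updates just described can be sketched in NumPy as follows. This is a toy illustration under stated assumptions, not the authors' implementation: the small constant `eps` and the eigenvalue clamp inside the inverse square root are numerical-stability safeguards added here, and the data are random.

```python
import numpy as np

def inv_sqrt_psd(M, eps=1e-8):
    """(M)^{-1/2} for a symmetric PSD matrix via eigendecomposition.

    Clamping eigenvalues at eps is a stability assumption, not part of
    the paper's derivation."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, eps))) @ vecs.T

def cspca(X, alpha=0.1, beta=0.1, iters=30, eps=1e-8):
    """Iterative CSPCA-style updates; X is d x n with samples as columns."""
    d, n = X.shape
    W = np.random.default_rng(0).standard_normal((d, d))
    for _ in range(iters):
        E = (W.T @ X - X).T                                          # n x d residuals
        D1 = np.diag(1.0 / (2.0 * np.linalg.norm(E, axis=1) + eps))  # n x n
        D2 = np.diag(1.0 / (2.0 * np.linalg.norm(W, axis=1) + eps))  # d x d
        D3 = 0.5 * inv_sqrt_psd(W @ W.T, eps)                        # d x d
        XD1Xt = X @ D1 @ X.T
        # W = (X D1 X^T + alpha D2 + beta D3)^{-1} (X D1 X^T)
        W = np.linalg.solve(XD1Xt + alpha * D2 + beta * D3, XD1Xt)
    return W

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 40))    # toy data: d = 8 features, n = 40 samples
W = cspca(X)
scores = np.linalg.norm(W, axis=1)  # feature importance scores ||w^i||_2
top3 = np.argsort(-scores)[:3]      # indices of the 3 highest-scoring features
```

Ranking the rows of `W` by their $l_2$-norms, as in the last two lines, is exactly the feature-selection rule described above.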
\begin{algorithm} \caption{Algorithm to solve the problem in \eqref{finalobj}} \SetAlgoLined \KwData{Data matrix $X$ \\ ~~~~~~~~Parameters $\alpha$, $\beta$} \KwResult{W} Set $t$ = 0 \; Initialize $W_0 \in \mathbb{R}^{d \times d}$ randomly \; \Repeat{Convergence}{ Compute $E_t$ according to $E_t = (W_t^TX - X)^T$ \; Compute the diagonal matrix $D_{1t}$ as follows: \begin{equation}\nonumber D_{1t} = \begin{bmatrix} \frac{1}{2\|e_t^1\|_2} & & \\ & \ddots & \\ & & \frac{1}{2\|e_t^n\|_2} \\ \end{bmatrix}; \end{equation} \\ Compute the diagonal matrix $D_{2t}$ as follows: \begin{equation}\nonumber D_{2t} = \begin{bmatrix} \frac{1}{2\|w_t^1\|_2} & & \\ & \ddots & \\ & & \frac{1}{2\|w_t^d\|_2} \\ \end{bmatrix}; \end{equation} \\ Compute $D_{3t}$ according to \begin{equation}\nonumber D_{3t} = \frac{1}{2}(W_tW_t^T)^{-\frac{1}{2}}; \end{equation} \\ Update $W_{t+1}$ according to \begin{equation}\nonumber W_{t+1} = (XD_{1t}X^T + \alpha D_{2t} + \beta D_{3t})^{-1} (XD_{1t}X^T); \end{equation} \\ $t = t + 1$ \; } Return $W = W_t$. \end{algorithm} \subsection{Convergence Analysis} In this section, we validate Algorithm 1 shown above. Specifically, we prove by the following theorem that the objective function value decreases monotonically. \begin{theorem} The objective function value shown in Eq. \eqref{finalobj} monotonically decreases in each iteration until convergence using the iterative approach in Algorithm 1.
\end{theorem} \begin{proof} According to the 8th step of Algorithm 1, it can be safely inferred that: \begin{equation}\nonumber \begin{aligned} W_{t+1} & = \arg \min Tr((W^TX - X)D_1(W^TX - X)) \\ & + \alpha Tr(W^TD_2W) + \beta Tr(W^TD_3W) \end{aligned} \end{equation} Therefore, we have: \begin{equation}\nonumber \begin{aligned} & Tr((W_{t+1}^TX - X)D_1(W_{t+1}^TX - X)) \\ & + \alpha Tr(W_{t+1}^TD_2W_{t+1}) + \beta Tr(W_{t+1}D_3W_{t+1}) \\ \\ \leq & Tr((W_t^TX - X)D_1(W_t^TX - X)) \\ & + \alpha Tr(W_t^TD_2W_t) + \beta Tr(W_tD_3W_t) \end{aligned} \end{equation} \begin{equation}\nonumber \begin{aligned} \Rightarrow & \sum_{i=1}^n \frac{\|W_{t+1}^Tx_i - x_i\|_2^2}{2\|W_{t+1}^Tx_i - x_i\|_2} + \alpha \sum_{i=1}^d \frac{\|w_{t+1}^i\|_2^2}{2\|w_t^i\|_2} \\ & + \frac{\beta}{2} Tr(W_{t+1}^T(W_tW_t^T)^{-\frac{1}{2}}W_{t+1}) \\ \leq & \sum_{i=1}^n \frac{\|W_{t}^Tx_i - x_i\|_2^2}{2\|W_{t}^Tx_i - x_i\|_2} + \alpha \sum_{i=1}^d \frac{\|w_{t}^i\|_2^2}{2\|w_t^i\|_2} \\ & + \frac{\beta}{2} Tr(W_{t}^T(W_tW_t^T)^{-\frac{1}{2}}W_{t}) \end{aligned} \end{equation} \begin{equation}\nonumber \begin{aligned} \Rightarrow & \sum_{i=1}^n \|W_{t+1}^Tx_i - x_i \|_2 - \sum_{i=1}^n \|W_{t+1}^Tx_i - x_i \|_2 \\& + \frac{\|W_{t+1}^Tx_i - x_i\|_2^2}{2\|W_{t+1}^Tx_i - x_i\|_2} + \alpha \sum_{i=1}^d \|w_{t+1}^i\|_2 - \alpha \sum_{i=1}^d \|w_{t+1}^i\|_2 \\ & + \alpha \sum_{i=1}^d \frac{\|w_{t+1}^i\|_2^2}{2\|w_t^i\|_2} + \frac{\beta}{2} Tr((W_{t+1}W_{t+1}^T)^{\frac{1}{2}}) \\ & - \frac{\beta}{2} Tr((W_{t+1}W_{t+1}^T)^{\frac{1}{2}}) + \frac{\beta}{2} Tr(W_{t+1}^T(W_tW_t^T)^{-\frac{1}{2}}W_{t+1}) \\ \leq & \sum_{i=1}^n \|W_{t}^Tx_i - x_i \|_2 - \sum_{i=1}^n \|W_{t}^Tx_i - x_i \|_2 + \frac{\|W_{t}^Tx_i - x_i\|_2^2}{2\|W_{t}^Tx_i - x_i\|_2} \\ & + \alpha \sum_{i=1}^d \|w_{t+1}^i\|_2 - \alpha \sum_{i=1}^d \|w_{t}^i\|_2 + \alpha \sum_{i=1}^d \frac{\|w_{t}^i\|_2^2}{2\|w_t^i\|_2} \\ & + \frac{\beta}{2} Tr((W_{t}W_{t}^T)^{\frac{1}{2}}) - \frac{\beta}{2} Tr((W_{t}W_{t}^T)^{\frac{1}{2}}) \\ & + \frac{\beta}{2} 
Tr(W_{t}^T(W_tW_t^T)^{-\frac{1}{2}}W_{t}) \end{aligned} \end{equation} \begin{equation}\nonumber \begin{aligned} \Rightarrow & \sum_{i=1}^n \|W_{t+1}^Tx_i - x_i\|_2 + \alpha \sum_{i=1}^d \|w_{t+1}^i\|_2 + \frac{\beta}{2} Tr((W_{t}W_{t}^T)^{\frac{1}{2}}) \\ & - \alpha (\sum_{i=1}^d \|w_{t+1}^i\|_2 - \sum_{i=1}^d \frac{\|w_{t+1}^i\|_2^2}{2\|w_t^i\|_2}) \\ & -\frac{\beta}{2} (Tr((W_{t+1}W_{t+1}^T)^{\frac{1}{2}}) - Tr(W_{t+1}^T(W_tW_t^T)^{-\frac{1}{2}})W_{t+1})) \\ \leq & \sum_{i=1}^n \|W_{t}^Tx_i - x_i\|_2 + \alpha \sum_{i=1}^d \|w_{t}^i\|_2 + \frac{\beta}{2} Tr((W_{t}W_{t}^T)^{\frac{1}{2}}) \\ & - \alpha (\sum_{i=1}^d \|w_{t}^i\|_2 - \sum_{i=1}^d \frac{\|w_{t}^i\|_2^2}{2\|w_t^i\|_2}) \\ & -\frac{\beta}{2} (Tr((W_{t}W_{t}^T)^{\frac{1}{2}}) - Tr(W_{t}^T(W_tW_t^T)^{-\frac{1}{2}})W_{t})) \end{aligned} \end{equation} It has been proven in \cite{l21norm} that for arbitrary non-zero vectors ${v_t^i}|_{i=1}^r$ we have: \begin{equation}\nonumber \sum_i \|v_{t+1}^i\|_2 - \sum_i \frac{\|v_{t+1}^i\|_2^2}{2\|v_t^i\|_2} \leq \sum_i \|v_{t}^i\|_2 - \sum_i \frac{\|v_{t}^i\|_2^2}{2\|v_t^i\|_2}, \end{equation} where $r$ is any non-zero number. Thus, we can obtain the following inequality: \begin{equation}\nonumber \begin{aligned} & \sum_{i=1}^n \|W_{t+1}^Tx_i - x_i \|_2 + \alpha \sum_{i=1}^d \|w_{t+1}^i\|_2 + \frac{\beta}{2} Tr((W_{t+1}W_{t+1})^{\frac{1}{2}}) \\ \leq & \sum_{i=1}^n \|W_{t}^Tx_i - x_i \|_2 + \alpha \sum_{i=1}^d \|w_{t}^i\|_2 + \frac{\beta}{2} Tr((W_{t}W_{t})^{\frac{1}{2}}) \end{aligned} \end{equation} \begin{equation}\nonumber \begin{aligned} \Rightarrow & \|W_{t+1}^TX - X\|_{2,1} + \alpha \|W_{t+1}\|_{2,1} + \beta \|W_{t+1}\|_{*} \\ \leq & \|W_{t}^TX - X\|_{2,1} + \alpha \|W_{t}\|_{2,1} + \beta \|W_{t}\|_{*} \end{aligned} \end{equation} which indicates that the objective function value of Eq. \eqref{finalobj} monotonically decreases until converging to the optimal $W$ via the proposed approach in Algorithm 1. 
\end{proof} Going a step further, we prove in Theorem \ref{gloablopt} that the proposed algorithm converges to the global optimum. \begin{theorem} \label{gloablopt} The objective function value shown in Eq. \eqref{finalobj} converges to the global optimum using Algorithm 1. \end{theorem} \begin{proof} Suppose the objective function has converged using Algorithm 1, returning $W^*$. According to Eq. \eqref{devri}, we get the following equation: \begin{equation}\nonumber XD_1X^TW^* + \alpha D_2W^* + \beta D_3W^* - XD_1X^T = 0 \end{equation} We can see that the derivative w.r.t $W$ equals zero, so $W^*$ is a stationary point of the objective function. Note that the proposed problem is convex. Hence, according to the Karush-Kuhn-Tucker (KKT) conditions, we conclude that the objective function converges to the global optimum using Algorithm 1. \end{proof} \begin{table*}[tb] \small \caption{SETTINGS OF THE DATA SETS} \centering \begin{tabular}{|c||r|c|c|c|} \hline Dataset & Size(n) & No. of variables & Class Number & Number of Selected Features \\ \hline YaleB & 2414 & 1024 & 38 & $\{500, 600, 700, 800, 900, 1000\}$ \\ \hline ORL & 400 & 1024 & 40 & $\{500, 600, 700, 800, 900, 1000\}$ \\ \hline JAFFE & 213 & 676 & 10 & $\{350, 390, 430, \cdots 590, 610, 650\}$ \\ \hline HumanEVA & 10000 & 168 & 10 & $\{50, 60, 70, 80, 90, 100\}$ \\ \hline Coil20 & 1440 & 1024 & 20 & $\{170, 190, 210, 230, 250, 270, 290\}$ \\ \hline USPS & 9298 & 256 & 10 & $\{120, 140, 160, 180, 200, 220, 240\}$ \\ \hline \end{tabular} \label{setting} \end{table*} \section{Experiments} In this section, we evaluate the performance of the proposed algorithm, which can be applied to many applications, such as dimension reduction and unsupervised feature selection.
Following previous unsupervised feature selection algorithms \cite{laplacianscore} \cite{SPEC} \cite{UDFS}, we only evaluate the performance of CSPCA for feature selection and compare it with related state-of-the-art unsupervised feature selection methods.
\subsection{Experimental Settings}
To demonstrate the effectiveness of the proposed algorithm for feature selection, we compare it with one baseline and several unsupervised feature selection methods. The compared algorithms are described as follows.
\begin{enumerate}
\item Using all features (All-Fea): We directly adopt the original features without performing feature selection. This approach is used as a baseline.
\item Max Variance: This is a feature selection method using the classical PCA criterion. Features with maximum variances are chosen for subsequent tasks.
\item Laplacian Score: To best preserve the local manifold structure, features consistent with the Gaussian Laplacian matrix are selected \cite{laplacianscore}. The importance of each feature is determined by its locality preserving power.
\item SPEC: This is a state-of-the-art spectral-regression-based feature selection algorithm. Features are selected one by one by leveraging spectral graph theory \cite{SPEC}.
\item MCFS: Features are selected based on spectral analysis and a sparse regression problem \cite{MCFS}. Specifically, features are selected such that the multi-cluster structure of the data can be best preserved.
\item UDFS: Features are selected by a joint framework of discriminative analysis and $l_{2,1}$-norm minimization \cite{UDFS}. UDFS selects the most discriminative feature subset from the whole feature set in batch mode.
\end{enumerate}
For each algorithm, all the parameters (if any) are tuned in the range of $\{10^{-6}, 10^{-4}, 10^{-2}, 10^0, 10^2, 10^4, 10^6\}$ and the best results are reported. Some parameters need to be set in advance. For LS, MCFS and UDFS, we empirically set $k = 5$ for all the datasets to specify the size of neighborhoods.
The number of selected features is set as described in Table 1 for all the datasets. For all the compared algorithms, we report the best clustering result with the optimal parameters. In the experiments, we utilize the K-means algorithm to cluster samples based on the selected features. Note that the performance of K-means varies with different initializations. We therefore repeat the clustering 30 times with random initializations for each setup and report the average results with standard deviation.
\subsection{Datasets}
The datasets used in our experiments are described as follows.
\begin{enumerate}
\item Face Image Data: We use three face image datasets for face recognition, namely YaleB \cite{yaleb}, ORL \cite{ORL} and JAFFE \cite{JAFFE}. The YaleB dataset contains 2414 near frontal images of 38 persons under different illuminations. We resize each image to $32 \times 32$. The ORL dataset consists of 40 different subjects with 10 images each. We also resize each image to $32 \times 32$. The Japanese Female Facial Expression (JAFFE) dataset consists of 213 images of different facial expressions from 10 Japanese female models. The images are resized to $26 \times 26$.
\item 3D Motion Data: The HumanEVA dataset is used to evaluate the performance of our algorithm in terms of 3D motion annotation \footnote{http://vision.cs.brown.edu/humaneva/}. This dataset contains five types of motions. Based on the 16 joint coordinates in 3D space, 1590 geometric pose descriptors are extracted using the method proposed in \cite{humaneva} to represent the 3D motion data.
\item Object Image Data: We use the Coil20 dataset \cite{COIL20} for object recognition. This dataset includes 1440 gray-scale images of 20 different objects. In our experiment, we resize each image to $32 \times 32$.
\item Handwritten Digit Data: We use the USPS dataset to validate the performance on handwritten digit recognition. The dataset consists of 9298 gray-scale handwritten digit images. We resize the images to $16 \times 16$.
\end{enumerate}
\begin{table*}[!ht] \small \renewcommand{\arraystretch}{1.3} \caption{Performance comparison (ACC $\pm~std$ \%) of All-Fea, MaxVar, LScore, SPEC, MCFS, UDFS and CSPCA. The best results are highlighted in bold. From this table, we can observe that our proposed algorithm outperforms the other algorithms on all the used datasets.} \centering \begin{tabular}{|c||c|c|c|c|c|c|} \hline & YaleB & ORL & JAFFE & HumanEVA & Coil20 & USPS \\ \hline \hline All-Fea & $12.2 \pm 2.8$ & $69.0 \pm 1.7$ & $90.6 \pm 2.8$ & $47.2 \pm 2.0$ & $71.8 \pm 3.7$ & $71.1 \pm 2.7$ \\ \hline MaxVar & $12.7 \pm 2.6$ & $67.8 \pm 1.9$ & $95.3 \pm 2.1$ & $47.6 \pm 2.7$ & $71.7 \pm 3.2$ & $71.0 \pm 2.8$ \\ \hline LScore & $12.3 \pm 2.9$ & $70.2 \pm 2.2$ & $94.8 \pm 2.6$ & $47.8 \pm 2.2$ & $71.4 \pm 3.5$ & $73.7 \pm 2.4$ \\ \hline SPEC & $12.9 \pm 2.5$ & $67.0 \pm 1.8$ & $96.2 \pm 2.7$ & $50.3 \pm 2.5$ & $72.0 \pm 3.8$ & $72.2 \pm 2.5$ \\ \hline MCFS & $14.5 \pm 2.4$ & $69.4 \pm 1.5$ & $96.4 \pm 1.9$ & $53.6 \pm 2.9$ & $72.4 \pm 3.4$ & $73.1 \pm 2.8$ \\ \hline UDFS & $15.7 \pm 2.8$ & $69.9 \pm 1.2$ & $96.8 \pm 2.4$ & $56.3 \pm 2.4$ & $73.3 \pm 3.0$ & $73.7 \pm 2.9$ \\ \hline CSPCA & $\mathbf{19.3 \pm 2.2}$ & $\mathbf{71.3 \pm 1.0}$ & $\mathbf{97.2 \pm 2.2}$ & $\mathbf{56.4 \pm 2.1}$ & $\mathbf{75.2 \pm 2.9}$ & $\mathbf{76.9 \pm 2.1}$ \\ \hline \end{tabular} \label{ACC} \end{table*}
\begin{table*} \small \renewcommand{\arraystretch}{1.3} \caption{Performance comparison (NMI $\pm~std$ \%) of All-Fea, MaxVar, LScore, SPEC, MCFS, UDFS and CSPCA. The best results are highlighted in bold.
From this table, we can observe that our proposed algorithm outperforms the other algorithms on all the used datasets.} \centering \begin{tabular}{|c||c|c|c|c|c|c|} \hline & YaleB & ORL & JAFFE & HumanEVA & Coil20 & USPS \\ \hline \hline All-Fea & $19.3 \pm 3.3$ & $83.7 \pm 2.4$ & $91.7 \pm 3.4$ & $52.2 \pm 4.3$ & $79.1 \pm 5.1$ & $61.7 \pm 3.4$ \\ \hline MaxVar & $21.3 \pm 2.9$ & $83.0 \pm 2.8$ & $93.0 \pm 3.1$ & $52.7 \pm 3.9$ & $79.5 \pm 4.8$ & $63.7 \pm 3.7$ \\ \hline LScore & $19.4 \pm 3.5$ & $85.3 \pm 2.6$ & $92.9 \pm 2.9$ & $53.2 \pm 4.1$ & $80.1 \pm 5.2$ & $63.1 \pm 3.2$ \\ \hline SPEC & $20.3 \pm 3.1$ & $81.6 \pm 2.1$ & $94.7 \pm 3.8$ & $55.4 \pm 3.7$ & $81.2 \pm 5.5$ & $62.3 \pm 3.6$ \\ \hline MCFS & $22.7 \pm 2.7$ & $83.6 \pm 2.2$ & $95.1 \pm 3.7$ & $57.1 \pm 3.8$ & $82.5 \pm 4.9$ & $63.1 \pm 4.0$ \\ \hline UDFS & $23.5 \pm 3.2$ & $84.2 \pm 3.1$ & $95.6 \pm 3.3$ & $60.0 \pm 3.4 $ & $83.2 \pm 5.3$ & $63.8 \pm 3.9$ \\ \hline CSPCA & $\mathbf{29.4 \pm 2.1}$ & $\mathbf{84.7 \pm 2.8}$ & $\mathbf{96.0 \pm 3.8}$ & $\mathbf{62.4 \pm 3.9}$ & $\mathbf{85.1 \pm 5.1}$ & $\mathbf{65.5 \pm 3.1}$ \\ \hline \end{tabular} \label{NMI} \end{table*}
\subsection{Evaluation Metrics}
Following related unsupervised feature selection work \cite{laplacianscore}, we adopt clustering accuracy (ACC) and normalized mutual information (NMI) as the evaluation metrics in our experiments. Let $q_i$ represent the clustering label produced by a clustering algorithm and $p_i$ represent the corresponding ground truth label of an arbitrary data point $x_i$. Then $ACC$ is defined as follows:
\begin{equation} ACC = \frac{\sum_{i=1}^n \delta (p_i, map(q_i))}{ n }, \end{equation}
where $\delta(x, y) = 1$ if $x=y$ and $\delta (x, y) = 0$ otherwise. $map(q_i)$ is the best mapping function that permutes the clustering labels to match the ground truth labels, computed using the Kuhn-Munkres algorithm. A larger ACC indicates better clustering performance.
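As a small illustration of this metric (our own sketch, not code from the paper: the paper uses the Kuhn-Munkres algorithm, while for readability we brute-force the best label permutation, which is only feasible for a handful of clusters):

```python
from itertools import permutations

def clustering_acc(truth, pred):
    """ACC: fraction of points whose predicted cluster label matches
    the ground truth under the best one-to-one relabeling of clusters.

    Brute force over label permutations; assumes both labelings use the
    same number of cluster labels.  The Kuhn-Munkres algorithm does the
    same matching in polynomial time, which is what makes the metric
    practical for the 10-40 classes of the datasets used here.
    """
    pred_labels = sorted(set(pred))
    best = 0
    for perm in permutations(sorted(set(truth))):
        # Candidate relabeling: predicted label -> ground truth label.
        mapping = dict(zip(pred_labels, perm))
        hits = sum(1 for p, q in zip(truth, pred) if p == mapping[q])
        best = max(best, hits)
    return best / len(truth)
```

For example, `clustering_acc([0, 0, 1, 1], [1, 1, 0, 0])` is 1.0, since swapping the two predicted cluster labels matches the ground truth exactly.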
For two arbitrary variables $P$ and $Q$, NMI is defined as follows \cite{NMI}:
\begin{equation} NMI = \frac{I(P, Q)}{\sqrt{H(P)H(Q)}}, \end{equation}
where $I(P, Q)$ computes the mutual information between $P$ and $Q$, and $H(P)$ and $H(Q)$ are the entropies of $P$ and $Q$. Let $t_l$ represent the number of data points in the cluster $\mathcal{C}_l(1 \leq l \leq c)$ generated by a clustering algorithm and $\widetilde{t_h}$ represent the number of data points from the $h$-th ground truth class. The NMI metric is then computed as follows \cite{NMI}:
\begin{equation} NMI = \frac{\sum_{l=1}^c \sum_{h=1}^c t_{l,h} \log(\frac{n \times t_{l, h}}{t_l\widetilde{t_h}})}{\sqrt{(\sum_{l=1}^c t_l \log \frac{t_l}{n})(\sum_{h=1}^c \widetilde{t_h} \log \frac{\widetilde{t_h}}{n})}}, \end{equation}
where $t_{l,h}$ is the number of data samples that lie in the intersection of $\mathcal{C}_l$ and the $h$-th ground truth class. Similarly, a larger NMI indicates better clustering performance.
\subsection{Experimental Results}
Empirical studies are conducted on six real-world data sets to validate the performance of the proposed algorithm and compare it to state-of-the-art algorithms. Table \ref{ACC} and Table \ref{NMI} summarize the ACC and NMI comparison results of all the compared algorithms on the used datasets. From the experimental results, we have the following observations.
\begin{enumerate}
\item The feature selection algorithms generally perform better than the baseline All-Fea, which demonstrates that feature selection is necessary and effective. It can significantly reduce the number of features as well as improve performance.
\item Both SPEC and MCFS utilize a two-step approach (spectral regression) for feature selection. The difference between them is that MCFS selects features in a batch mode while SPEC evaluates them separately. MCFS obtains better results than SPEC, which suggests that analyzing features jointly is the better strategy for feature selection.
\item We can see from the result tables that UDFS achieves the second best results, which indicates that it is beneficial to analyze features jointly and to simultaneously exploit discriminative information and the local structure of the data distribution.
\item From the experimental results, we can observe that the proposed CSPCA consistently outperforms the other compared algorithms. This demonstrates that the proposed algorithm is able to select the most informative features.
\end{enumerate}
\begin{figure*} \caption{Performance variation w.r.t the number of selected features when we fix $\alpha$ and $\beta$ at 1 using the proposed algorithm. From this figure, we have the following observations: (1) When the number of selected features is too small, the clustering ACC is not competitive with using all features without feature selection. (2) As the number of selected features increases, the clustering ACC generally rises before reaching its peak on all the used datasets. (3) The trend of clustering ACC varies across datasets. (4) With all the features used, the clustering ACC is generally lower than the peak level on all the datasets. (a) YaleB (b) ORL (c) Jaffe (d) HumanEva (e) Coil20 (f) USPS} \label{featureselection} \end{figure*}
\subsection{Influence of Selected Features}
As the goal of feature selection is to boost accuracy and computational efficiency, experiments are conducted to learn how the number of selected features affects the clustering performance. From these experiments we can see the general trade-off between performance and computational efficiency on all the used datasets. Fig. \ref{featureselection} shows the performance variation with respect to the number of selected features in terms of clustering ACC.
From the results, we have the following observations:
\begin{enumerate}
\item When the number of selected features is too small, the clustering ACC is not competitive with using all features without feature selection, which is mainly caused by too much information loss. For example, when only 500 features are selected on YaleB, the clustering ACC is relatively low, at only 0.164.
\item As the number of selected features increases, the clustering ACC generally rises before reaching its peak on all the used datasets. The number of features required to reach the peak differs across datasets.
\item The trend of clustering ACC varies across datasets. For example, the clustering ACC remains stable from 800 to 1000 selected features on YaleB while it drops on the other datasets. The different behavior on the six datasets is presumably related to their individual properties.
\item When all the features are used (without feature selection), the clustering ACC is generally lower than the peak level on all the datasets. We can therefore conclude that the proposed algorithm is capable of reducing noise and selecting the most discriminative features.
\end{enumerate}
\begin{figure*} \caption{Performance variation w.r.t $\alpha$ and $\beta$ when we fix the number of selected features for clustering. This figure shows different clustering performance when using different values of $\alpha$ and $\beta$. The impact of different combinations of regularization parameters is presumably related to the individual properties of the datasets.
On the used datasets, we can observe that better experimental results are obtained when the two regularization parameters $\alpha$ and $\beta$ are comparable. (a) YaleB (b) ORL (c) Jaffe (d) HumanEva (e) Coil20 (f) USPS.} \label{ParameterSensitivity} \end{figure*}
\subsection{Parameter Sensitivity}
Our proposed algorithm involves two regularization parameters, denoted as $\alpha$ and $\beta$ in Eq. \eqref{finalobj}. It is beneficial to learn how they influence the feature selection and consequently the clustering performance. In this section, we conduct several experiments on parameter sensitivity. We use the clustering ACC to reflect the performance variation. Fig. \ref{ParameterSensitivity} shows the clustering ACC variation w.r.t $\alpha$ and $\beta$ on the six datasets. From this figure, we learn that the clustering performance changes with different combinations of $\alpha$ and $\beta$. The impact of different combinations of regularization parameters is presumably related to the individual properties of the datasets. On the used datasets, we can observe that better experimental results are obtained when the two regularization parameters $\alpha$ and $\beta$ are comparable.
\subsection{Performance Variance w.r.t Different Initializations}
In this section, experiments are conducted to evaluate how the performance varies with different initializations. Clustering ACC is again used to reflect the performance variation. The K-means algorithm uses the same initialization in all cases. We test different initializations of $W$, including setting all the diagonal elements of $W$ to 0.5 (1st initialization), 1 (2nd initialization), 2 (3rd initialization), setting all the elements of $W$ to 0.5 (4th initialization), 1 (5th initialization), 2 (6th initialization) and random values (7th initialization). The experimental results are shown in Table \ref{initialization}.
From the experimental results, we can observe that the proposed algorithm always attains the global optimum regardless of the initialization.
\begin{table*}[!ht] \small \renewcommand{\arraystretch}{1.3} \caption{Performance variance w.r.t different initializations, including setting all the diagonal elements of $W$ to 0.5, 1, 2, and setting all the elements of $W$ to 0.5, 1, 2 and random values. In this experiment, the K-means clustering algorithm uses the same initialization. It can be seen that our algorithm always converges to the global optimum regardless of the initialization.} \centering \begin{tabular}{|c||c|c|c|c|c|c|} \hline & YaleB & ORL & JAFFE & HumanEVA & Coil20 & USPS \\ \hline \hline 1st initialization & $19.3$ & $71.3$ & $97.2$ & $56.4$ & $75.2$ & $76.9$ \\ \hline 2nd initialization & $19.3$ & $71.3$ & $97.2$ & $56.4$ & $75.2$ & $76.9$ \\ \hline 3rd initialization & $19.3$ & $71.3$ & $97.2$ & $56.4$ & $75.2$ & $76.9$ \\ \hline 4th initialization & $19.3$ & $71.3$ & $97.2$ & $56.4$ & $75.2$ & $76.9$ \\ \hline 5th initialization & $19.3$ & $71.3$ & $97.2$ & $56.4$ & $75.2$ & $76.9$ \\ \hline 6th initialization & $19.3$ & $71.3$ & $97.2$ & $56.4$ & $75.2$ & $76.9$ \\ \hline 7th initialization & $19.3$ & $71.3$ & $97.2$ & $56.4$ & $75.2$ & $76.9$ \\ \hline \end{tabular} \label{initialization} \end{table*}
\begin{figure*} \caption{Convergence curves of the objective function value in Eq. \eqref{finalobj} using Algorithm 1. The figure indicates that the objective function value monotonically decreases until convergence by utilizing the proposed algorithm. (a) YaleB (b) ORL (c) Jaffe (d) HumanEva (e) Coil20 (f) USPS.} \label{Convergence} \end{figure*}
\subsection{Convergence Study}
In the previous section, we have proven that the objective function in Eq. \eqref{finalobj} monotonically decreases by using the proposed algorithm. It is interesting to learn how fast our algorithm converges.
In this section, we conduct several experiments to validate the convergence of the proposed algorithm. We fix the two regularization parameters $\alpha$ and $\beta$ at 1, which is the median value of the range from which the regularization parameters are tuned. Fig. \ref{Convergence} shows the convergence curves of the proposed algorithm in terms of the objective function value in Eq. \eqref{finalobj}. From these figures, we can observe that the objective function value converges quickly. To be more specific, the proposed algorithm converges within 10 iterations on all the used datasets, which is very efficient.
\section{Conclusion}
In this paper, we have proposed a novel convex sparse PCA and applied it to feature analysis. We first prove that PCA can be formulated as a low-rank regression optimization problem. We further incorporate $l_{2,1}$-norm minimization into the proposed algorithm to make the regression coefficients sparse and the model robust to outliers. Different from state-of-the-art robust PCA methods, the proposed algorithm is capable of solving out-of-sample problems. Additionally, we propose an efficient algorithm to optimize the objective function. To validate the performance of our algorithm for feature analysis, we conduct clustering experiments on six real-world datasets. It can be seen from the experimental results that the proposed algorithm outperforms the other state-of-the-art unsupervised feature selection algorithms as well as the baseline using all features. Therefore, we conclude that the proposed algorithm is a robust sparse feature analysis method, and its benefits make it especially suitable for feature selection.
\ifCLASSOPTIONcaptionsoff \fi { } \end{document}
\begin{document} \title{A relation between the cube polynomials of partial cubes and the clique polynomials of their crossing graphs} \footnotetext[1]{The work is partially supported by the National Natural Science Foundation of China (Grant No. 12071194, 11571155, 11961067).} \author{Yan-Ting Xie$^1$, ~Yong-De Feng$^{1, ~2}$, ~Shou-Jun Xu$^1, \thanks{Corresponding author. E-mail address: [email protected] (S.-J. Xu).}$} \date{\small $^1$ School of Mathematics and Statistics, Gansu Center for Applied Mathematics, \\Lanzhou University, Lanzhou, Gansu 730000, China\\ $^2$ College of Mathematics and Systems Science, Xinjiang University, Urumqi,\\ Xinjiang 830046, P.R. China} \maketitle
\begin{abstract}
Partial cubes are the graphs that can be isometrically embedded into hypercubes. The {\em cube polynomial} of a graph $G$ is a counting polynomial of induced hypercubes of $G$, defined as $C(G,x):=\sum_{i\geqslant 0}\alpha_i(G)x^i$, where $\alpha_i(G)$ is the number of induced $i$-cubes (hypercubes of dimension $i$) of $G$. The {\em clique polynomial} of $G$ is defined as $Cl(G,x):=\sum_{i\geqslant 0}a_i(G)x^i$, where $a_i(G)$ ($i\geqslant 1$) is the number of $i$-cliques (complete subgraphs of order $i$) in $G$ and $a_0(G)=1$. Equivalently, $Cl(G, x)$ is exactly the independence polynomial of the complement $\overline{G}$ of $G$. The {\em crossing graph} $G^{\#}$ of a partial cube $G$ is the graph whose vertices correspond to the $\Theta$-classes (the equivalence classes of the edge set) of $G$, where $\theta_1$ and $\theta_2$ are adjacent in $G^{\#}$ if and only if they cross in $G$. In the present paper, we prove that for a partial cube $G$, $C(G,x)\leqslant Cl(G^{\#}, x+1)$ and that equality holds if and only if $G$ is a median graph (a special partial cube). Since every graph can be represented as the crossing graph of a median graph [SIAM J.
Discrete Math., 15 (2002) 235--251], the above necessary-and-sufficient result shows that the study of the cube polynomials of median graphs can be transformed into the study of the clique polynomials of general graphs (equivalently, of the independence polynomials of their complements). In addition, we disprove the conjecture that the cube polynomials of median graphs are unimodal.
\setlength{\baselineskip}{17pt} {} \vskip 0.1in \noindent \textbf{Keywords:} Partial cubes; Cube polynomials; Crossing graphs; Clique polynomials; Median graphs.
\end{abstract}
\section{Introduction}
In this paper all graphs we consider are undirected, finite and simple. A {\em hypercube of dimension $n$} (or {\em $n$-cube} for short), denoted by $Q_n$, is a graph whose vertex set corresponds to the set of 0-1 sequences $x_1x_2\cdots x_n$ with $x_i\in \{0,1\}$, $i=1, 2, \cdots, n$. Two vertices are adjacent if the corresponding 0-1 sequences differ in exactly one digit. A graph $G$ is called a {\em partial cube} if it is isomorphic to an isometric subgraph of $Q_n$ for some $n$. It is known that a relation $\Theta$ on the edge set, called the Djokovi\'c-Winkler relation (see \cite{dj73,w84}), plays an important role in the study of partial cubes. This relation was used by Winkler \cite{w84} to characterize the partial cubes as those bipartite graphs for which $\Theta$ is an equivalence relation on the edges. Its equivalence classes are called {\em $\Theta$-classes}. Let $G$ be a partial cube and let $\theta_1,\theta_2$ be two $\Theta$-classes. We say $\theta_1$ and $\theta_2$ {\em cross} in $G$ if $\theta_2$ occurs in both components of $G-\theta_1$. The {\em crossing graph} $G^{\#}$ of $G$ is the graph whose vertices correspond to the $\Theta$-classes of $G$, where $\theta_1$ and $\theta_2$ are adjacent in $G^{\#}$ if and only if they cross in $G$.
The crossing graph was introduced by Bandelt and Dress \cite{bd92} under the name of incompatibility graph and was extensively studied by Klav\v zar and Mulder \cite{km02}. A graph $G$ is called a {\em median graph} if, for every three vertices $u, v, w$ of $G$, there exists a unique vertex, called the {\em median} of $u,v,w$, that lies on shortest paths between each pair of $u,v,w$ simultaneously. Median graphs are an important subclass of partial cubes, with many applications in such diverse areas as evolutionary theory, chemistry, literary history, location theory, consensus theory, and computer science. For the structural properties of median graphs, see \cite{bv87,km99,mu78,mu80a,mu80b,mu90,mu11} and Chapter 12 of the book \cite{hik11}. To study properties of median graphs, Bre\v sar et al. \cite{bks03} introduced a counting polynomial of the hypercubes of a graph $G$, called the {\em cube polynomial}, as follows:
\begin{equation*} C(G,x):=\sum_{i\geqslant 0}\alpha_i(G)x^i, \end{equation*}
where $\alpha_i(G)$ is the number of induced $i$-cubes of $G$. In particular, $\alpha_0(G)$ is the number of vertices and $\alpha_1(G)$ is the number of edges of $G$. An {\em $i$-clique} of a graph $G$ is a complete subgraph with $i$ vertices. Define $a_i(G)$ to be the number of $i$-cliques in $G$ for $i\geqslant 1$, and set $a_0(G)=1$. The {\em clique polynomial} of $G$, introduced by Hoede and Li \cite{hl94}, is defined as follows.
\begin{equation*} Cl(G,x):=\sum_{i\geqslant 0}a_i(G)x^i. \end{equation*}
Let $P(x)=\sum\limits_{i=0}^mp_ix^i$ and $Q(x)=\sum\limits_{i=0}^nq_ix^i$ be two polynomials with nonnegative coefficients. We say $P(x)\leqslant Q(x)$ if $m\leqslant n$ and $p_i\leqslant q_i$ for $0\leqslant i\leqslant m$. If $P(x)\leqslant Q(x)$ and $P(x)\neq Q(x)$, we say $P(x)<Q(x)$. Let $G$ be a median graph. Zhang et al.
\cite{zss13} proved that $C(G,x)=\sum\limits_{i=0}^mb_i(G)(x+1)^i$ where $b_0(G)=1$ and $b_i(G)$ is a positive integer for each $i$ with $1\leqslant i\leqslant m$, and further gave an expression of $b_i(G)$. In the present paper, we reveal a combinatorial meaning of $b_i(G)$: the number of $i$-cliques of $G^{\#}$, i.e., $b_i(G)=a_i(G^{\#})$. Moreover, we obtain the following relation between the cube polynomials of partial cubes and the clique polynomials of their crossing graphs. \begin{thm}\label{thm:CubePandCliqueP} Let $G$ be a partial cube and $G\neq K_1$. Then \begin{equation}\label{eq:CubePandCliqueP} C(G,x)\leqslant Cl(G^{\#},x+1) \end{equation} and the equality holds if and only if $G$ is a median graph. \end{thm} For a general graph $G$, the {\em simplex graph} $S(G)$ of $G$ is the graph whose vertices are the cliques of $G$ (including the empty graph), two vertices being adjacent if, as cliques of $G$, they differ in exactly one vertex (see \cite{bv91,km02}). The simplex graph $S(G)$ of $G$ is a median graph. About the crossing graphs of median graphs, Klav\v zar and Mulder obtained the following theorem. \begin{thm}{\em\cite{km02}}\label{thm:EveryGraphisaCrossingGraph} Every graph is a crossing graph of some median graph. More precisely, for any graph $G$ we have $G=S(G)^{\#}$. \end{thm} Combined with the fact that the independence polynomial of a graph is equal to the clique polynomial of its complement, Theorem \ref{thm:CubePandCliqueP} shows that the study on the cube polynomials of median graphs can be transformed to the one on the clique polynomials and the independence polynomials of general graphs. 
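As a quick sanity check of Theorem \ref{thm:CubePandCliqueP} (the following computations are our illustration and are routine), consider hypercubes and trees, both of which are median graphs:

```latex
\begin{align*}
  &\text{Hypercubes: } C(Q_n,x)=\sum_{i=0}^{n}\binom{n}{i}2^{\,n-i}x^{i}=(x+2)^n,
    \qquad Q_n^{\#}=K_n,\\
  &\qquad Cl(Q_n^{\#},x+1)=\bigl(1+(x+1)\bigr)^{n}=(x+2)^n=C(Q_n,x);\\
  &\text{Trees: } C(T,x)=n+(n-1)x \text{ for a tree } T \text{ on } n
    \text{ vertices},\qquad T^{\#}=\overline{K_{n-1}},\\
  &\qquad Cl(T^{\#},x+1)=1+(n-1)(x+1)=n+(n-1)x=C(T,x).
\end{align*}
```

Here $Q_n^{\#}=K_n$ because every two $\Theta$-classes of $Q_n$ cross, while $T^{\#}$ is edgeless because every $\Theta$-class of a tree is a single edge and hence no two classes cross.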
A sequence $(s_1,s_2,\cdots,s_n)$ of nonnegative real numbers is {\em unimodal} if
\begin{equation*} s_1\leqslant s_2\leqslant\cdots\leqslant s_m\geqslant\cdots\geqslant s_{n-1}\geqslant s_n \end{equation*}
for some integer $1\leqslant m\leqslant n$, and {\em log-concave} if
\begin{equation*} s_{i-1}s_{i+1}\leqslant s^2_i,\qquad\mbox{for }2\leqslant i\leqslant n-1. \end{equation*}
The sequence is said to {\em have no internal zeros} if there are no three indices $i < j < k$ such that $s_i,s_k>0$ and $s_j=0$. In particular, positive sequences have no internal zeros. It is well-known that a log-concave sequence with no internal zeros is unimodal \cite{b89}. A polynomial is called {\em unimodal} (resp. {\em log-concave}) if the sequence of its coefficients is unimodal (resp. log-concave). By the definitions, the coefficient sequences of the cube polynomials, the clique polynomials and the independence polynomials of graphs are positive. Thus, for these graph polynomials, log-concavity is stronger than unimodality. When graph polynomials are studied, unimodality and log-concavity are frequently considered. For instance, it has been proved that the matching polynomials of graphs \cite{hl72}, the independence polynomials of claw-free graphs \cite{h90} and the signless chromatic polynomials of graphs \cite{h12} are log-concave, but the conjecture on the unimodality of the independence polynomials of trees is still open \cite{amse87}. Zhang et al. conjectured:
\begin{con}{\em\cite{zss13}}\label{con:CubePsareUnimodal} The cube polynomials of median graphs are unimodal. \end{con}
We disprove this conjecture by offering counterexamples, which are obtained from $Q_n$ ($n\geqslant 9$) by attaching sufficiently many pendant vertices. The paper is organized as follows. In the next section, we introduce some terminology and properties of partial cubes and median graphs.
Then, we prove the main theorem (i.e., Theorem \ref{thm:CubePandCliqueP}) in Section 3 and disprove Conjecture \ref{con:CubePsareUnimodal} in Section 4. Finally, we conclude the paper and pose some problems for future work in Section 5.
\section{Preliminaries}
Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$. For $S\subseteq V(G)$, the subgraph induced by $S$ is denoted by $G[S]$. For $v\in V(G)$, $N_G(v)$ is the neighbourhood of $v$ in $G$, i.e., $N_G(v):=\{u\in V(G)|uv\in E(G)\}$. The subgraph $G[N_G(v)]$ is simply written as $G_v$. For $u, v\in V(G)$, the {\em distance} $d_{G}(u,v)$ (we drop the subscript $G$ when no confusion can occur) is the length of a shortest path between $u$ and $v$ in $G$. We call a shortest path from $u$ to $v$ a $u,v$-{\em geodesic}. A subgraph $H$ of $G$ is called {\em isometric} if for any $u,v\in V(H)$, $d_H(u,v)=d_G(u,v)$; if, moreover, for any $u,v\in V(H)$ all $u,v$-geodesics are contained in $H$, we call $H$ a {\em convex} subgraph of $G$. Obviously, convex subgraphs are isometric, and isometric subgraphs are induced and connected. A {\em hypercube of dimension $n$} (or {\em $n$-cube} for short), denoted by $Q_n$, is a graph whose vertex set corresponds to the set of 0-1 sequences $x_1x_2\cdots x_n$ with $x_i\in \{0,1\}$, $i=1, 2, \cdots, n$. Two vertices are adjacent if the corresponding 0-1 sequences differ in exactly one digit. A graph $G$ is called a {\em partial cube} if it is isomorphic to an isometric subgraph of $Q_n$ for some $n$. The {\em Djokovi\'c-Winkler relation} (see \cite{dj73,w84}) $\Theta_{G}$ is a binary relation on $E(G)$ defined as follows: for two edges $e=uv$ and $f=xy$ of $G$, $e\,\Theta_{G}\,f\iff d_{G}(u,x)+d_{G}(v,y)\neq d_{G}(u,y)+d_{G}(v,x)$. If $G$ is bipartite, there is an equivalent definition of the Djokovi\'c-Winkler relation: $e\,\Theta_{G}\,f\iff d_{G}(u,x)=d_{G}(v,y)$ and $d_{G}(u,y)=d_{G}(v,x)$.
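The bipartite form of the relation is straightforward to compute from breadth-first-search distances. The following is our own illustrative sketch (the function names are ours), which groups the edges of a small bipartite graph into $\Theta$-classes by checking all $O(|E|^2)$ pairs of edges:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    """All shortest-path distances from s by breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def theta_classes(adj, edges):
    """Group the edges of a bipartite graph into Djokovic-Winkler classes.

    Uses the bipartite criterion: uv Theta xy iff d(u,x) = d(v,y) and
    d(u,y) = d(v,x).  In a partial cube, Theta is an equivalence
    relation, so a union-find over related pairs yields the classes.
    """
    dist = {v: bfs_dist(adj, v) for v in adj}
    parent = {e: e for e in edges}

    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]  # path halving
            e = parent[e]
        return e

    for (u, v), (x, y) in combinations(edges, 2):
        if dist[u][x] == dist[v][y] and dist[u][y] == dist[v][x]:
            parent[find((u, v))] = find((x, y))

    classes = {}
    for e in edges:
        classes.setdefault(find(e), []).append(e)
    return list(classes.values())
```

On the $6$-cycle, for instance, this returns the three $\Theta$-classes formed by the three pairs of antipodal edges.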
Winkler \cite{w84} proved that a graph $G$ is a partial cube if and only if $G$ is bipartite and $\Theta_{G}$ is an equivalence relation on $E(G)$. The following property can be obtained easily from the definition.
\begin{obs}\label{obs:ThetaRestrictonSubgraph} Let $G$, $G'$, $G''$ be three graphs. If $G$ is an isometric subgraph of $G'$ and $G'$ is an isometric subgraph of $G''$, then $G$ is an isometric subgraph of $G''$. In particular, if $G'$ is a partial cube (in this case, $G''=Q_n$ for some integer $n$), then $G$ is also a partial cube. Moreover, for any $e,f\in E(G)$, $e\,\Theta_{G}\,f\iff e\,\Theta_{G'}\,f$. \end{obs}
Let $G$ be a partial cube. We call the equivalence classes on $E(G)$ {\em $\Theta$-classes}. The {\em isometric dimension} of $G$, denoted by $\mathrm{idim}(G)$, is the smallest integer $n$ such that $G$ is an isometric subgraph of $Q_n$; it coincides with the number of $\Theta$-classes \cite{dj73}. For $e=uv\in E(G)$, we denote the $\Theta$-class containing $uv$ by $F^G_{uv}$, i.e., $F^G_{uv}:=\{f\in E(G)|f\,\Theta_{G}\,e\}$. When the particular edge is irrelevant, we also denote the $\Theta$-classes by $\theta_1,\theta_2,\cdots$. Moreover, we denote $W^G_{uv}:=\{w\in V(G)|d_{G}(u,w)<d_{G}(v,w)\}$ and $U^G_{uv}:=\{w\in V(G)|w\in W^G_{uv}\mbox{ and }w\mbox{ is incident with an edge in }F^G_{uv}\}$. Except in Subsection 3.2, we will drop the subscript of $\Theta_G$ and the superscript of $F^G_{uv}$, $W^G_{uv}$ and $U^G_{uv}$ in the following. The next property is obvious.
\begin{obs}\label{obs:FInduceIsomorphism} Let $G$ be a partial cube and let $uv$ be an edge of $G$. $F_{uv}$ induces an isomorphism between the induced subgraphs $G[U_{uv}]$ and $G[U_{vu}]$. Further, if $u_1u_2$, $v_1v_2$ are corresponding edges in $G[U_{uv}]$ and $G[U_{vu}]$ respectively, then $u_1u_2\,\Theta\,v_1v_2$.
\end{obs}
About $W_{uv}$, Djokovi\'c obtained the following property:
\begin{pro}{\em\cite{dj73}}\label{pro:WisConvex} Let $G$ be a partial cube. Then $G[W_{uv}]$ and $G[W_{vu}]$ are convex in $G$ for any $uv\in E(G)$. \end{pro}
\begin{comment} About the geodesics in partial cubes, the following proposition is useful. \begin{pro}{\em\cite{ik00}}\label{pro:DWRelationonGeodesic} Let $G$ be a partial cube with a path $P$ in it. $P$ is a geodesic in $G$ if and only if no two distinct edges on it are in relation $\Theta$. In particular, two adjacent edges in a partial cube can't be in relation $\Theta$. \end{pro} \end{comment}
Let $H$ be an induced subgraph of $G$. Denote $\partial\, H=\{uv\in E(G)|u\in V(H),v\not\in V(H)\}$. About the convex subgraphs of bipartite graphs, Imrich and Klav\v zar obtained the following proposition:
\begin{pro}{\em\cite{ik98}}\label{pro:ConvexityLemma} An induced connected subgraph $H$ of a bipartite graph $G$ is convex if and only if no edge of $\partial\, H$ is in relation $\Theta$ to an edge in $H$. \end{pro}
In particular, we have
\begin{obs}\label{obs:ConvexSubgraphofHypercube} Every convex subgraph of the hypercube $Q_n$ is a hypercube $Q_r$ for some integer $r\leqslant n$. \end{obs}
$G$ is called a {\em median graph} if for every three distinct vertices $u,v,w\in V(G)$, there exists exactly one vertex $x\in V(G)$ (possibly $x\in\{u,v,w\}$), called the {\em median} of $u,v,w$, satisfying $d(u,x)+d(x,v)=d(u,v)$, $d(u,x)+d(x,w)=d(u,w)$ and $d(v,x)+d(x,w)=d(v,w)$; that is, $x$ lies on a geodesic between each pair of $u,v,w$. If $H$ is a convex subgraph of a median graph $G$, then $H$ is also a median graph. There are many equivalent characterizations of median graphs (see \cite{km99}). The most famous one is the following proposition:
\begin{pro}{\em\cite{ik00}}\label{pro:UisConvex} A graph $G$ is a median graph if and only if $G$ is a partial cube and $G[U_{uv}]$ is convex in $G$ for every $uv\in E(G)$.
\end{pro} Let $G$ be a graph and $G_1,G_2$ two isometric subgraphs with $G=G_1\cup G_2$ and $G_0=G_1\cap G_2$ not empty, where there are no edges between $G_1- G_2$ and $G_2- G_1$. Let $G^*_i$ be an isomorphic copy of $G_i$ for $i=1,2$. For every $u\in V(G_0)$, let $u_i$ be the corresponding vertex in $G^*_i$ ($i=1,2$). The {\em expansion} $G^*$ of $G$ with respect to $\{G_1,G_2\}$ is the graph obtained from the disjoint union of $G^*_1$ and $G^*_2$ by adding an edge between the corresponding vertices $u_1$ and $u_2$ for each vertex $u\in G_0$. It is known that partial cubes are characterized as the graphs that can be obtained from $K_1$ by a sequence of expansions \cite{c88}. If $G$ is a partial cube, all edges $u_1u_2$ for $u\in G_0$ form a new $\Theta$-class by the definition of the Djokovi\'c-Winkler relation. So $\mathrm{idim}(G^*)=\mathrm{idim}(G)+1$. The expansion is {\em convex} if $G_0$ is convex, and {\em peripheral} if $G_0=G_1$ or $G_0=G_2$. The second equivalent characterization of median graphs we will use is: \begin{pro}{\em\cite{mu90}}\label{pro:ConvexPeripheralExpansions} Let $G$ be a connected graph. $G$ is a median graph if and only if it can be obtained from $K_1$ by a sequence of peripheral convex expansions. \end{pro} Let $G$ be a partial cube other than $K_1$, and let $F_{ab},F_{uv}$ be two $\Theta$-classes of $G$. We say that $F_{ab},F_{uv}$ {\em cross} if $W_{ab}\cap W_{uv}\neq\emptyset$, $W_{ba}\cap W_{uv}\neq\emptyset$, $W_{ab}\cap W_{vu}\neq\emptyset$ and $W_{ba}\cap W_{vu}\neq\emptyset$. For a subgraph $H$ of $G$, we say that $F_{uv}$ {\em occurs} in $H$ if there is an edge of $F_{uv}$ in $E(H)$. Another equivalent definition of crossing is: $F_{ab}$ and $F_{uv}$ cross if $F_{ab}$ occurs in both $G[W_{uv}]$ and $G[W_{vu}]$. The {\em crossing graph} of $G$ (see \cite{km02}), denoted by $G^{\#}$, is the graph whose vertices correspond to the $\Theta$-classes of $G$, in which $\theta_1=F_{ab},\theta_2=F_{uv}$ are adjacent if and only if they cross in $G$.
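The Djokovi\'c--Winkler relation, the sets $W_{uv}$, and the crossing test above can all be computed directly from graph distances. The following Python sketch (the helper names are ours, not from any library) assumes the distance form of the relation used throughout the paper, $xy\,\Theta\,uv \iff d(x,u)+d(y,v)\neq d(x,v)+d(y,u)$, and illustrates it on the 4-cycle $Q_2$:

```python
from collections import deque

def bfs_dist(adj, s):
    """Distances from s in an unweighted graph given as an adjacency dict."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def theta(adj, e, f):
    """Djokovic-Winkler: xy Theta uv iff d(x,u)+d(y,v) != d(x,v)+d(y,u)."""
    (x, y), (u, v) = e, f
    dx, dy = bfs_dist(adj, x), bfs_dist(adj, y)
    return dx[u] + dy[v] != dx[v] + dy[u]

def W(adj, u, v):
    """W_uv: vertices strictly closer to u than to v."""
    du, dv = bfs_dist(adj, u), bfs_dist(adj, v)
    return {w for w in adj if du[w] < dv[w]}

def cross(adj, ab, uv):
    """F_ab and F_uv cross iff all four W-intersections are nonempty."""
    (a, b), (u, v) = ab, uv
    return all(W(adj, p, q) & W(adj, r, s)
               for p, q in ((a, b), (b, a))
               for r, s in ((u, v), (v, u)))

# Q2 = the 4-cycle 0-1-2-3-0, a partial cube with two crossing Theta-classes.
Q2 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

On $Q_2$ the opposite edges $01$ and $32$ are $\Theta$-related, adjacent edges are not, and the two $\Theta$-classes cross, matching the definitions above.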
The following proposition is an equivalent expression of the crossing relation. \begin{pro}{\em\cite{km02}}\label{pro:CrossinIsometricCycle} Let $G$ be a partial cube, $\theta_1,\theta_2$ two $\Theta$-classes. Then $\theta_1$ and $\theta_2$ cross if and only if they occur on a common isometric cycle $C$, i.e., $E(C)\cap\theta_1\neq\emptyset$, $E(C)\cap\theta_2\neq\emptyset$. \end{pro} \begin{comment} The {\em cube polynomial} of a graph $G$, introduced by Bre\v sar et al. \cite{bks03}, is defined as follows: \begin{equation*} C(G,x)=\sum_{i\geqslant 0}\alpha_i(G)x^i, \end{equation*} where $\alpha_i(G)$ is the number of induced $i$-cubes of $G$. The {\em clique polynomial} of $G$, introduced by Hoede and Li \cite{hl94}, is defined as follows: \begin{equation*} Cl(G,x)=\sum_{i\geqslant 0}a_i(G)x^i. \end{equation*} where $a_i(G)$ ($i\geqslant 1$) is the number of $i$-cliques in $G$ and we define $a_0(G)=1$. \end{comment} \section{Proof of Theorem \ref{thm:CubePandCliqueP}} The proof of Theorem \ref{thm:CubePandCliqueP} is organized as follows. First, we prove that equality holds in (\ref{eq:CubePandCliqueP}) if $G$ is a median graph. Then, we prove that $C(G,x)<Cl(G^{\#},x+1)$ if $G$ is not a median graph. \subsection{$C(G,x)=Cl(G^{\#},x+1)$ if $G$ is a median graph} Klav\v zar and Mulder obtained the following lemma: \begin{lem}{\em\cite{km02}}\label{lem:WeakCrossin4Cycle} Let $G$ be a median graph and $uv,uw\in E(G)$. If $F_{uv}$ and $F_{uw}$ cross, then $v,u,w$ are in a 4-cycle. \end{lem} Let $G$ be a partial cube, $\theta_1,\theta_2$ two $\Theta$-classes. A 4-cycle $uvwxu$ is said to be {\em $\theta_1,\theta_2$-alternating} if $uv,xw\in\theta_1$ and $ux,vw\in\theta_2$. We have the following lemma, which is stronger than Lemma \ref{lem:WeakCrossin4Cycle}. \begin{lem}\label{lem:Crossin4Cycle} Let $G$ be a median graph, $\theta_1,\theta_2$ two $\Theta$-classes. Then $\theta_1$ and $\theta_2$ cross if and only if there exists a $\theta_1,\theta_2$-alternating 4-cycle.
\end{lem} \begin{proof} {\em Sufficiency.} Assume $C=uvwxu$ is a 4-cycle where $uv,xw\in\theta_1$ and $ux,vw\in\theta_2$. Since 4-cycles must be isometric in a bipartite graph, by Proposition \ref{pro:CrossinIsometricCycle}, $\theta_1$ and $\theta_2$ cross. {\em Necessity.} Denote the two components of $G-\theta_1$ as $G_1,G_2$. By the definition of crossing, there exist $a_1b_1,a_2b_2\in\theta_2$ such that $a_1b_1\in E(G_1)$, $a_2b_2\in E(G_2)$ and $d(a_1,a_2)=d(b_1,b_2)=d(a_1,b_2)-1=d(b_1,a_2)-1$. Let $P$ be an $a_1,a_2$-geodesic. Since $a_1\in V(G_1)$, $a_2\in V(G_2)$ and $\theta_1$ is an edge cutset, there exists an edge in $E(P)\cap\theta_1$, denoted by $c_1c_2$. Since $c_1,c_2$ are on $P$, which is an $a_1,a_2$-geodesic, by Proposition \ref{pro:UisConvex}, $c_1,c_2\in U_{a_1b_1}$. Thus, there exist edges $c_1d_1,c_2d_2\in\theta_2$. By Observation \ref{obs:FInduceIsomorphism}, $d_1,d_2$ are adjacent and $d_1d_2\in\theta_1$, that is, the 4-cycle $c_1c_2d_2d_1c_1$ is $\theta_1,\theta_2$-alternating. \end{proof} Let $G$ be a partial cube, $H$ a convex subgraph of $G$ and $\theta_1,\theta_2$ two crossing $\Theta$-classes. If there is a $\theta_1,\theta_2$-alternating 4-cycle in $H$, we say that $\theta_1$ and $\theta_2$ {\em cross in} $H$. \begin{lem}\label{lem:CrossinConvexSubgraph} Let $G$ be a median graph, $H$ a convex subgraph of $G$ and $\theta_1,\theta_2$ two crossing $\Theta$-classes. If both $\theta_1$ and $\theta_2$ occur in $H$, then they cross in $H$. \end{lem} \begin{proof} Assume $e=uv$, $f=xw$ are two edges such that $e\in E(H)\cap\theta_1$ and $f\in E(H)\cap\theta_2$. We distinguish two cases. \textbf{Case 1.} $e$ and $f$ are adjacent. W.l.o.g., assume $v=x$. By Lemma \ref{lem:WeakCrossin4Cycle}, $u,v,w$ are in a 4-cycle, say $uvwyu$. Since $u,v,w\in V(H)$, the 4-cycle $uvwyu$ is in $H$ by the convexity of $H$. Thus, $\theta_1$ and $\theta_2$ cross in $H$. \textbf{Case 2.} $e$ and $f$ are not adjacent.
W.l.o.g., assume $d(v,x)=d(u,x)-1=d(v,w)-1$. Since $\theta_1$ and $\theta_2$ cross, by Lemma \ref{lem:Crossin4Cycle}, there exists a $\theta_1,\theta_2$-alternating 4-cycle, denoted by $C=abcda$. W.l.o.g., we assume $ab,cd\in\theta_1$, $ad,bc\in\theta_2$, $a\in U_{vu}\cap U_{xw}$, $b\in U_{uv}\cap U_{xw}$, $c\in U_{uv}\cap U_{wx}$ and $d\in U_{vu}\cap U_{wx}$. If $C$ is in $H$, the lemma holds. If some vertices of $C$ are in $H$ and others are not, then there are some edges of $C$ in $\partial\, H$, contradicting Proposition \ref{pro:ConvexityLemma}. Now assume $C$ is in $G-H$. By the definition of median graphs, let $v'$ be the median of $a,v,x$. Since $a,v\in U_{vu}$, $a,x\in U_{xw}$ and $v'$ is on both an $a,v$-geodesic and an $a,x$-geodesic, by Proposition \ref{pro:UisConvex}, $v'\in U_{vu}\cap U_{xw}$. Then there exist edges $v'u'$, $v'w'$ such that $v'u'\in \theta_1$ and $v'w'\in \theta_2$ (see Fig. \ref{fig:CrossG0}). Since $v'$ is on a $v,x$-geodesic, $v'\in V(H)$ by the convexity of $H$. Then $u',w'\in V(H)$ by Proposition \ref{pro:ConvexityLemma}. Applying the argument of Case 1 to the edges $v'u'$, $v'w'$, we obtain that $\theta_1$ and $\theta_2$ cross in $H$. \end{proof} \begin{figure} \caption{Illustration for Case 2 in proof of Lemma \ref{lem:CrossinConvexSubgraph}} \label{fig:CrossG0} \end{figure} Now, we consider the crossing graphs. Let $G$ be a partial cube, $G^{\#}$ its crossing graph. If $v$ is a vertex in $G^{\#}$, we denote its corresponding $\Theta$-class in $G$ by $\theta_v$ in what follows. Let $H$ be a convex subgraph of $G$. The crossing graph of $H$ is denoted by $H^{\#}$ (by Observation \ref{obs:ThetaRestrictonSubgraph}, $H^{\#}$ is well-defined since a convex subgraph of a partial cube is itself a partial cube). Then, the statement `$v\in V(H^{\#})$' corresponds to `$\theta_v$ occurs in $H$', and `$uv\in E(H^{\#})$' corresponds to `$\theta_u$, $\theta_v$ cross in $H$'.
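The crossing graph just discussed can be computed mechanically: partition $E(G)$ into $\Theta$-classes and join two classes whenever all four $W$-intersections are nonempty. A minimal Python sketch (helper names are ours; the greedy class partition is valid only when $\Theta$ is transitive, i.e. for partial cubes):

```python
from collections import deque
from itertools import combinations

def all_dists(adj):
    """All-pairs distances by repeated BFS."""
    D = {}
    for s in adj:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        D[s] = d
    return D

def crossing_graph(adj):
    """Return the Theta-classes of a partial cube and its crossing graph G#."""
    D = all_dists(adj)
    edges = {tuple(sorted((u, w))) for u in adj for w in adj[u]}
    # Partition edges into Theta-classes by comparing with one representative.
    classes = []
    for x, y in sorted(edges):
        for cls in classes:
            u, v = cls[0]
            if D[x][u] + D[y][v] != D[x][v] + D[y][u]:  # Winkler relation
                cls.append((x, y))
                break
        else:
            classes.append([(x, y)])
    W = lambda u, v: {w for w in adj if D[u][w] < D[v][w]}
    G_sharp = {i: set() for i in range(len(classes))}
    for i, j in combinations(range(len(classes)), 2):
        (a, b), (u, v) = classes[i][0], classes[j][0]
        if all(W(p, q) & W(r, s) for p, q in ((a, b), (b, a))
                                 for r, s in ((u, v), (v, u))):
            G_sharp[i].add(j)
            G_sharp[j].add(i)
    return classes, G_sharp

# The 6-cycle has three pairwise-crossing Theta-classes, so its crossing
# graph is K3; in a tree no two classes cross, so the crossing graph is edgeless.
C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
P3 = {0: [1], 1: [0, 2], 2: [1]}
```

Here the three $\Theta$-classes of $C_6$ cross pairwise because all of them occur on the isometric cycle $C_6$ itself, consistent with Proposition \ref{pro:CrossinIsometricCycle}.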
By Lemma \ref{lem:CrossinConvexSubgraph}, we easily obtain the following lemma. \begin{lem}\label{lem:ConvexandInducedSubgraph} Let $G$ be a median graph. If $H$ is a convex subgraph of $G$, then $H^{\#}$ is an induced subgraph of $G^{\#}$. \end{lem} A pair of induced subgraphs $\{G_1,G_2\}$ of $G$ is called a {\em cubical cover} if $G=G_1\cup G_2$ and every induced hypercube in $G$ is contained in at least one of $G_1$ and $G_2$. Bre\v sar et al. gave a recursive formula for cube polynomials under an expansion with respect to a cubical cover: \begin{lem}{\em\cite{bks03}}\label{lem:RecursiveFormulaofCubeP} Let $G$ be a graph constructed by the expansion with respect to the cubical cover $\{G_1,G_2\}$ with $G_0=G_1\cap G_2$. Then \begin{equation*} C(G,x)=C(G_1,x)+C(G_2,x)+xC(G_0,x). \end{equation*} \end{lem} If an expansion with respect to $\{G_1,G_2\}$ with $G_0=G_1\cap G_2$ is peripheral, i.e., $G_0=G_2$, then it is obvious that $\{G_1,G_2\}$ is a cubical cover. Thus, by Proposition \ref{pro:ConvexPeripheralExpansions}, for median graphs, we have \begin{cor}\label{lem:RecursiveFormulaforMedianGraph} Let $G$ be a median graph constructed by the peripheral convex expansion with respect to $\{G_0,G_1\}$ where $G_0$ is a convex subgraph of $G_1$. Then \begin{equation}\label{eq:RecursiveFormulaforMedianGraph} C(G,x)=C(G_1,x)+(x+1)C(G_0,x). \end{equation} \end{cor} Recall that $G_v=G[N_G(v)]$. The recursive formula for clique polynomials is given by the following lemma: \begin{lem}{\em\cite{hl94}}\label{lem:RecursiveFormulaofCliqueP} Let $G$ be a graph, $v$ a vertex of $G$. Then \begin{equation}\label{eq:RecursiveFormulaofCliqueP} Cl(G,x)=Cl(G-v,x)+xCl(G_v,x). \end{equation} \end{lem} Now, we give the main result in this subsection. \begin{thm}\label{thm:=forMedianGraph} Let $G$ be a median graph and $G\neq K_1$. Then \begin{equation}\label{eq:=forMedianGraph} C(G,x)=Cl(G^{\#},x+1).
\end{equation} \end{thm} \begin{proof} We prove the equality (\ref{eq:=forMedianGraph}) by induction on $\mathrm{idim}(G)$ (or equivalently, $|V(G^{\#})|$). When $\mathrm{idim}(G)=1$, $G\cong K_2$, $G^{\#}\cong K_1$, and further $C(G,x)=x+2$, $Cl(G^{\#},x)=x+1$. Thus, the equality (\ref{eq:=forMedianGraph}) holds for the base case of the induction. Now, assume (\ref{eq:=forMedianGraph}) holds for all median graphs with isometric dimension at most $n-1$. Let $G$ be a median graph with $\mathrm{idim}(G)=n$. By Proposition \ref{pro:ConvexPeripheralExpansions}, $G$ can be obtained from a median graph $G_1$ by a peripheral convex expansion with respect to $\{G_0,G_1\}$ where $G_0$ is a convex subgraph of $G_1$. Then, by the definition of median graphs, $G_0$ is a median graph and further $\mathrm{idim}(G_1)=n-1$, $\mathrm{idim}(G_0)\leqslant n-1$. By (\ref{eq:RecursiveFormulaforMedianGraph}) and the induction hypothesis, we have \begin{equation}\label{eq:InductionHypothesis} C(G,x)=C(G_1,x)+(x+1)C(G_0,x)=Cl(G_1^{\#},x+1)+(x+1)Cl(G_0^{\#},x+1). \end{equation} Let $\theta_v$ be the $\Theta$-class obtained by the peripheral convex expansion. By Lemma \ref{lem:RecursiveFormulaofCliqueP}, we only need to prove that $G_1^{\#}=G^{\#}-v$ and $G_0^{\#}=(G^{\#})_v$ (to avoid confusion, we denote the subgraph induced by the neighbourhood of $v$ in $G^{\#}$ by $(G^{\#})_v$). Let $ab$ be an edge in $\theta_v$. W.l.o.g., assume $G[W_{ab}]=G[U_{ab}]=G_0$, $G[W_{ba}]=G_1$. We denote $G[U_{ba}]:=G'_0$. Since $G_0$ and $G'_0$ are isomorphic and their corresponding edges are $\Theta$-related by Observation \ref{obs:FInduceIsomorphism}, two $\Theta$-classes $\theta_u$ and $\theta_w$ ($u,w\neq v$) cross in $G_0$ if and only if they cross in $G'_0$; the same holds in $G_1$. Thus, $u,w$ are adjacent in $G^{\#}$ if and only if they are adjacent in $G_1^{\#}$. Then $G_1^{\#}=G^{\#}-v$. Let us now consider $G_0^{\#}$.
By the definition of crossing, $\theta_u$ occurs in $G_0$ if and only if it crosses $\theta_v$ in $G$. Thus $V(G_0^{\#})=N_{G^{\#}}(v)$. Since $G_0$ is a convex subgraph of $G$ by Proposition \ref{pro:WisConvex}, we obtain $G_0^{\#}=(G^{\#})_v$ by Lemma \ref{lem:ConvexandInducedSubgraph}. By (\ref{eq:RecursiveFormulaofCliqueP}), we obtain \begin{equation*} Cl(G_1^{\#},x+1)+(x+1)Cl(G_0^{\#},x+1)=Cl(G^{\#}-v,x+1)+(x+1)Cl((G^{\#})_v,x+1)=Cl(G^{\#},x+1). \end{equation*} This completes the induction. \end{proof} \subsection{$C(G,x)<Cl(G^{\#},x+1)$ if $G$ is not a median graph} Let $G$ be a graph. For any two vertices $u,v\in V(G)$, the {\em interval} $I_G(u,v)$ between $u$ and $v$ is a subset of $V(G)$ defined as follows: $I_G(u,v):=\{w\in V(G)|w\mbox{ is on a geodesic between }u\mbox{ and }v\}$. Let $X$ be a set. The power set $\mathcal{P}(X)$ is the set of all subsets of $X$, i.e., $\mathcal{P}(X):=\{Y|Y\subseteq X\}$. Let $S$ be a subset of $V(G)$. Let $\ell_G$ be the self-map of $\mathcal{P}(V(G))$ defined by $\ell_G(S):=\bigcup\limits_{u,v\in S}I_G(u,v)$. Let us denote $\ell^1_G(S):=\ell_G(S)$ and $\ell^i_G(S):=\ell_G(\ell^{i-1}_G(S))$ for each integer $i\geqslant 2$. The {\em convex hull} of $S$ in $G$ is defined as $co_G(S)=\bigcup\limits_{i\in\mathbb{N}}\ell^i_G(S)$. We can see that $G[co_G(S)]$ is the smallest convex subgraph containing $S$. In particular, if $H$ is a subgraph of $G$, we denote $co_G(H):=G[co_G(V(H))]$ and call it the {\em convex hull of the subgraph $H$} in $G$. The third equivalent characterization of median graphs we will use is: \begin{lem}{\em\cite{b82,km99}}\label{lem:ConvexHullofIsometricCycle} Let $G$ be a connected graph. $G$ is a median graph if and only if the convex hull of any isometric cycle of $G$ is a hypercube. \end{lem} Let $G$ be a graph. Let $\mathcal{C}(G)$ be the set of all isometric cycles of $G$.
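The interval and convex-hull operators just defined can be computed verbatim by iterating $\ell_G$ to a fixed point. A minimal Python sketch (function names are ours), illustrated on the 4-cycle, whose proper convex subgraphs are exactly its vertices and edges:

```python
from collections import deque

def all_dists(adj):
    """All-pairs distances by repeated BFS."""
    D = {}
    for s in adj:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        D[s] = d
    return D

def convex_hull(adj, S):
    """co_G(S): iterate the interval operator ell_G until a fixed point."""
    D = all_dists(adj)
    S = set(S)
    while True:
        # ell_G(S) = union of intervals I_G(u, v) over u, v in S,
        # where w lies in I_G(u, v) iff d(u, w) + d(w, v) = d(u, v).
        T = {w for u in S for v in S for w in adj
             if D[u][w] + D[w][v] == D[u][v]}
        if T == S:
            return S
        S = T

C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

An edge of $C_4$ is already convex, while the hull of two antipodal vertices is the whole cycle, since both geodesics between them must be included.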
We define a binary relation `$\leqslant_{\mathcal{C}(G)}$' on $\mathcal{C}(G)$ as follows: For any $C,C'\in\mathcal{C}(G)$, $C\leqslant_{\mathcal{C}(G)}C'\iff co_G(V(C))\subseteq co_G(V(C'))$. It is easy to see that `$\leqslant_{\mathcal{C}(G)}$' is a partial order on $\mathcal{C}(G)$. We say that $C\in\mathcal{C}(G)$ is {\em maximal} if it is a maximal element in the partial order `$\leqslant_{\mathcal{C}(G)}$'. The set of maximal isometric cycles in $G$ is denoted by $\mathcal{C}_{\max}(G)$. Let $G$ be a partial cube with $\mathrm{idim}(G)=n$. Now, we construct $G^{+}$ based on $G$. Denote $G^{(0)}:=G$ and $S_0:=V(G)$, and further, for $i\geqslant 0$, \begin{equation}\label{eq:Si} S_{i+1}:=S_i\cup\bigcup_{C\in\mathcal{C}_{\max}(G^{(i)})}co_{Q_n}(V(C)) \end{equation} and \begin{equation}\label{eq:Gi} G^{(i+1)}:=Q_n[S_{i+1}] \end{equation} recursively. Since $G^{(i)}$ is a subgraph of $Q_n$ for each $i\geqslant 0$, every subgraph of $G^{(i)}$ is also a subgraph of $Q_n$. Thus, every $co_{Q_n}(V(C))$ in (\ref{eq:Si}) is well-defined, and so are $S_{i+1}$ and $G^{(i+1)}$. Since $Q_n$ is finite, there must exist a smallest integer $l\geqslant 0$ such that all $S_i$'s (resp. $G^{(i)}$'s) are equal for $i\geqslant l$. We define $G^{+}:=G^{(l)}$. Concerning $G^{+}$, we have \begin{lem}\label{lem:G+isMedianGraph} For any partial cube $G$, $G^{+}$ is a median graph. \end{lem} \begin{proof} First, we prove that $G^{+}$ is connected. In order to do this, we prove that $G^{(i)}$ is connected by induction on $i$. Since $G^{(0)}:=G$ is a partial cube, it is connected. Now, assume $G^{(i)}$ ($0\leqslant i\leqslant l-1$) is connected. By the definition of convex hulls, for each $C\in\mathcal{C}_{\max}(G^{(i)})$, $co_{Q_n}(C)$ is connected. In particular, for any $u\in co_{Q_n}(V(C))\setminus V(C)$, $v\in V(C)$, there exists a path between $u$ and $v$. Since $G^{(i)}$ is connected, it follows that $G^{(i+1)}$ is connected. Thus, $G^{+}(=G^{(l)})$ is connected by induction.
Then, we prove that the convex hull of any isometric cycle $C$ of $G^{+}(=G^{(l)})$ is a hypercube. {\bf Case 1.} $C\in\mathcal{C}_{\max}(G^{(l)})$. Since $S_{l+1}=S_l$, $co_{Q_n}(V(C))\subseteq S_l$. Then $co_{Q_n}(V(C))=co_{G^{(l)}}(V(C))$. By Observation \ref{obs:ConvexSubgraphofHypercube}, it induces a hypercube. {\bf Case 2.} $C$ is not maximal. By the definition of `$\leqslant_{\mathcal{C}(G^{(l)})}$', there exists a maximal isometric cycle $C'$ such that $co_{G^{(l)}}(V(C))\subset co_{G^{(l)}}(V(C'))$. Similarly to Case 1, $co_{G^{(l)}}(C')$ is a hypercube. By the definition of convexity, $co_{G^{(l)}}(C)$ is also a convex subgraph of $co_{G^{(l)}}(C')$. By Observation \ref{obs:ConvexSubgraphofHypercube}, $co_{G^{(l)}}(C)$ is a hypercube. In conclusion, the convex hull of every isometric cycle of $G^{+}(=G^{(l)})$ is a hypercube. By Lemma \ref{lem:ConvexHullofIsometricCycle}, $G^{+}$ is a median graph. \end{proof} Let $G$ be a partial cube and $H$ a subgraph of $G$. We denote $\mathcal{F}(H):=\{\theta|\theta\mbox{ is a }\Theta\mbox{-class of }G\mbox{ occurring in }H\}$. Now, we give a lemma. \begin{lem}\label{lem:ConvexHullProperty} Let $G$ be a partial cube and $H$ a subgraph of $G$. If $H$ is connected, then $\mathcal{F}(H)=\mathcal{F}(co_G(H))$. \end{lem} \begin{proof} Since $H$ is a subgraph of $co_G(H)$, it is obvious that $\mathcal{F}(H)\subseteq\mathcal{F}(co_G(H))$. Suppose to the contrary that $\mathcal{F}(H)\subset\mathcal{F}(co_G(H))$. For convenience, we denote $H':=co_G(H)$. Since $H'$ is an isometric subgraph of $G$, $H'$ is a partial cube by Observation \ref{obs:ThetaRestrictonSubgraph}. Assume $F_{uv}\in\mathcal{F}(H')\setminus\mathcal{F}(H)$. Since $H$ is connected, $H$ is a subgraph of $H'[W_{uv}^{H'}]$ or $H'[W_{vu}^{H'}]$. W.l.o.g., assume $H$ is a subgraph of $H'[W_{uv}^{H'}]$.
Combined with Proposition \ref{pro:WisConvex} and the transitivity of convex subgraphs, $H'[W_{uv}^{H'}]$ is a convex subgraph of $G$, contradicting the fact that $H'$ is the smallest convex subgraph containing $H$. Thus, $\mathcal{F}(H)=\mathcal{F}(co_G(H))$. \end{proof} Now, we consider the crossing graphs. Let $G$ be a partial cube. By the construction of $G^{(i)}$, $G^{(i-1)}$ is a subgraph of $G^{(i)}$ for any $1\leqslant i\leqslant l$. Moreover, for $u,v\in V(G^{(i-1)})$, we have \begin{equation}\label{eq:Isometric} d_{G^{(i-1)}}(u,v)=d_{G^{(i)}}(u,v). \end{equation} Thus, $G^{(i-1)}$ is an isometric subgraph of $G^{(i)}$. By Observation \ref{obs:ThetaRestrictonSubgraph} and Lemma \ref{lem:G+isMedianGraph}, $G^{(1)},G^{(2)},\cdots,$ $G^{(l-1)}$ are partial cubes. Set $\mathrm{idim}(G)=n$. By Lemma \ref{lem:ConvexHullProperty}, no new $\Theta$-classes arise from $G^{(i-1)}$ to $G^{(i)}$ for any $1\leqslant i\leqslant l$. Then, we can deduce that $\mathrm{idim}(G^{(1)})=\mathrm{idim}(G^{(2)})=\cdots=\mathrm{idim}(G^{(l)})=n$. Moreover, for any $ab\in E(G^{(i-1)})$ ($1\leqslant i\leqslant l$), $F_{ab}^{G^{(i-1)}}\subseteq F_{ab}^{G^{(i)}}$ by Observation \ref{obs:ThetaRestrictonSubgraph}. For convenience, we may stipulate that for any $ab\in E(G^{(i-1)})$, the corresponding vertex of $F_{ab}^{G^{(i-1)}}$ in $(G^{(i-1)})^{\#}$ and the one of $F_{ab}^{G^{(i)}}$ in $(G^{(i)})^{\#}$ are the same. Thus, \begin{equation}\label{eq:VG} V(G^{\#})=V((G^{(1)})^{\#})=\cdots=V((G^{(l)})^{\#}). \end{equation} \begin{lem}\label{lem:EqualCrossingGraph} For any partial cube $G$ with $G\neq K_1$, $G^{\#}=(G^+)^{\#}$. \end{lem} \begin{proof} We already know that $V(G^{\#})=V((G^+)^{\#})$ by (\ref{eq:VG}). Let $u,v$ be two vertices in $G^{\#}$, $ab,cd$ two edges in $E(G)$ such that $F_{ab}^G$ and $F_{cd}^G$ are the $\Theta$-classes of $G$ corresponding to $u$ and $v$ respectively. Then $u$ (resp. $v$) is also the corresponding vertex of $F_{ab}^{G^{(i)}}$ (resp.
$F_{cd}^{G^{(i)}}$) in $(G^{(i)})^{\#}$ for any $1\leqslant i\leqslant l$ by (\ref{eq:VG}). Now, we prove {\bf Claim 1.} $uv\in E(G^{\#})\Longrightarrow uv\in E((G^+)^{\#})$. By Proposition \ref{pro:CrossinIsometricCycle}, $F_{ab}^G$, $F_{cd}^G$ occur on an isometric cycle $C$ in $G$. Combined with the fact that $G$ is an isometric subgraph of $G^+$ and Observation \ref{obs:ThetaRestrictonSubgraph}, $C$ is also an isometric cycle in $G^+(=G^{(l)})$. Moreover, $F_{ab}^{G^+}$, $F_{cd}^{G^+}$ occur on $C$ in $G^+$. Thus, $uv\in E((G^+)^{\#})$ by Proposition \ref{pro:CrossinIsometricCycle}. Then, we prove {\bf Claim 2.} $uv\not\in E(G^{\#})\Longrightarrow uv\not\in E((G^+)^{\#})$. Suppose to the contrary that $F_{ab}^{G^{+}}$ and $F_{cd}^{G^{+}}$ cross in $G^{+}$. By the construction of $G^+$, there exists an integer $i$ ($1\leqslant i\leqslant l$) such that $F_{ab}^{G^{(i)}}$ and $F_{cd}^{G^{(i)}}$ cross in $G^{(i)}$ but $F_{ab}^{G^{(i-1)}}$ and $F_{cd}^{G^{(i-1)}}$ do not cross in $G^{(i-1)}$. By the definition of crossing, w.l.o.g., assume $F_{ab}^{G^{(i-1)}}$ does not occur in $G^{(i-1)}[W_{cd}^{G^{(i-1)}}]$. However, since $F_{ab}^{G^{(i)}}$ and $F_{cd}^{G^{(i)}}$ cross in $G^{(i)}$, $F_{ab}^{G^{(i)}}$ occurs in $G^{(i)}[W_{cd}^{G^{(i)}}]$. Then there exists a maximal isometric cycle $C\in\mathcal{C}_{\max}(G^{(i-1)})$ such that $F_{ab}^{G^{(i)}}$ occurs in $co_{Q_n}(C)\cap G^{(i)}[W_{cd}^{G^{(i)}}]$. Since $F_{ab}^{G^{(i)}}\cap E(co_{Q_n}(C))\neq\emptyset$, by Lemma \ref{lem:ConvexHullProperty}, $F_{ab}^{G^{(i)}}$ occurs on $C$, and so does $F_{ab}^{G^{(i-1)}}$. Considering that $co_{Q_n}(V(C))\cap W_{cd}^{G^{(i)}}\neq\emptyset$, it can be deduced that \begin{equation}\label{eq:CcapWcd} V(C)\cap W_{cd}^{G^{(i-1)}}\neq\emptyset. \end{equation} Since $F_{ab}^{G^{(i-1)}}$ does not occur in $G^{(i-1)}[W_{cd}^{G^{(i-1)}}]$, all edges of $F_{ab}^{G^{(i-1)}}$ are in $E(G^{(i-1)}[W_{dc}^{G^{(i-1)}}])$.
Combined with the fact that $F_{ab}^{G^{(i-1)}}$ occurs on $C$, we can obtain that \begin{equation}\label{eq:CcapWdc} V(C)\cap W_{dc}^{G^{(i-1)}}\neq\emptyset. \end{equation} Since $C$ is connected, $F_{cd}^{G^{(i-1)}}\cap E(C)\neq\emptyset$ by (\ref{eq:CcapWcd}) and (\ref{eq:CcapWdc}). That is, both $F_{ab}^{G^{(i-1)}}$ and $F_{cd}^{G^{(i-1)}}$ occur on the isometric cycle $C$, a contradiction with Proposition \ref{pro:CrossinIsometricCycle}. Combined with Claim 1, Claim 2 and (\ref{eq:VG}), we obtain that $G^{\#}=(G^+)^{\#}$. \end{proof} Now, we give the main result in this subsection. \begin{thm}\label{thm:<forPartialCube} Let $G$ be a partial cube and $G\neq K_1$. If $G$ is not a median graph, then \begin{equation}\label{eq:<forPartialCube} C(G,x)<Cl(G^{\#},x+1). \end{equation} \end{thm} \begin{proof} By the construction of $G^+$, $G$ is an induced subgraph of $G^+$, so every induced $i$-cube in $G$ is also an induced $i$-cube in $G^+$ for each $i\geqslant 0$. Thus, $\alpha_i(G)\leqslant\alpha_i(G^+)$. Since $G$ is not a median graph, $G\neq G^{+}$. Then $\alpha_0(G)=|V(G)|<|V(G^+)|=\alpha_0(G^+)$, and further $C(G,x)\neq C(G^+,x)$. Thus, \begin{equation}\label{eq:CubePolynomialforSubgraph} C(G,x)<C(G^+,x). \end{equation} Since $G^+$ is a median graph, by Theorem \ref{thm:=forMedianGraph}, \begin{equation}\label{eq:MedianGraph} C(G^+,x)=Cl((G^+)^{\#},x+1). \end{equation} By Lemma \ref{lem:EqualCrossingGraph}, \begin{equation}\label{eq:EqualCrossingGraph} Cl((G^+)^{\#},x+1)=Cl(G^{\#},x+1). \end{equation} Combined with (\ref{eq:CubePolynomialforSubgraph}), (\ref{eq:MedianGraph}) and (\ref{eq:EqualCrossingGraph}), we obtain that $C(G,x)<Cl(G^{\#},x+1)$. \end{proof} Combined with Theorem \ref{thm:=forMedianGraph} and Theorem \ref{thm:<forPartialCube}, Theorem \ref{thm:CubePandCliqueP} is proved. \begin{comment} Let's denote the set of all 0-1 sequences of length $n$ by $\mathcal{B}_n$, i.e., $V(Q_n)=\mathcal{B}_n$. 
Now, we introduce two binary Boolean operations on $\mathcal{B}_n$: `$\wedge$' and `$\vee$'. Let $(x_1x_2\cdots x_n)$ and $(y_1y_2\cdots y_n)$ be two 0-1 sequences of length $n$. $(z_1z_2\cdots z_n):=(x_1x_2\cdots x_n)\wedge(y_1y_2\cdots y_n)$ (resp. $(w_1w_2\cdots w_n):=(x_1x_2\cdots x_n)\vee(y_1y_2\cdots y_n)$) is defined as follows: If $x_i=y_i=1$ (resp. $x_i=y_i=0$), then $z_i=1$ (resp. $w_i=0$); otherwise $z_i=0$ (resp. $w_i=1$) for $1\leqslant i\leqslant n$. Let $v_1,v_2,\cdots,v_m$ be $m$ 0-1 sequences of length $n$, where $v_j=(x_1^jx_2^j\cdots x_n^j)$ for $1\leqslant j\leqslant m$. We know that the operations `$\wedge$' and `$\vee$' satisfy the commutative laws and the associative laws, so we can define $(z_1z_2\cdots z_n):=\bigwedge\limits_{j=1}^mv_j$ (resp. $(w_1w_2\cdots w_n):=\bigvee\limits_{j=1}^mv_j$): If $x^1_i=x^2_i=\cdots=x^m_i=1$ (resp. $x^1_i=x^2_i=\cdots=x^m_i=0$), then $z_i=1$ (resp. $w_i=0$); otherwise $z_i=0$ (resp. $w_i=1$) for any $i\in [n]$. For a subset $S$ of $\mathcal{B}_n$, we denote $\bigwedge S:=\bigwedge\limits_{v\in S}v$ (resp. $\bigvee S:=\bigvee\limits_{v\in S}v$). We know that $\mathcal{B}_n$ forms a distributive lattice under `$\wedge$' and `$\vee$'. \end{comment} \section{Disproving Conjecture \ref{con:CubePsareUnimodal}} Recall that a sequence $(s_1,s_2,\cdots,s_n)$ of nonnegative numbers is {\em unimodal} if \begin{equation*} s_1\leqslant s_2\leqslant\cdots\leqslant s_m\geqslant\cdots\geqslant s_{n-1}\geqslant s_n \end{equation*} for some integer $1\leqslant m\leqslant n$ and {\em log-concave} if \begin{equation*} s_{i-1}s_{i+1}\leqslant s^2_i,\qquad\mbox{for }2\leqslant i\leqslant n-1. \end{equation*} The clique polynomial of a graph is not necessarily unimodal, as shown in the following example: \begin{exa}\label{exa:NonUnimodalCliqueP} Let $n,m$ be nonnegative integers with $n\geqslant 6$, $G:=K_n\cup mK_1$ the graph of the disjoint union of a complete graph $K_n$ and $m$ single vertices. 
Then \begin{equation*} Cl(G,x)=(x+1)^n+mx. \end{equation*} A direct calculation shows that $Cl(G,x)$ is log-concave when $m\leqslant\left\lfloor\frac{n^2+n}{2n-4}\right\rfloor$; $Cl(G,x)$ is unimodal but not log-concave when $\left\lfloor\frac{n^2+n}{2n-4}\right\rfloor+1\leqslant m\leqslant\frac{n^2-3n}{2}$; $Cl(G,x)$ is not unimodal when $m\geqslant\frac{n^2-3n}{2}+1$. \end{exa} Combining Theorem \ref{thm:=forMedianGraph} with Theorem \ref{thm:EveryGraphisaCrossingGraph}, we can construct median graphs with non-unimodal cube polynomials, as shown in the following example: \begin{exa}\label{exa:NonUnimodalCubeP} Let $n,m$ be nonnegative integers with $n\geqslant 9$, $G$ the graph formed by the $n$-cube $Q_n$ with $m$ pendant vertices attached. We can obtain that $G$ is a median graph and $G^{\#}\cong K_n\cup mK_1$. Then \begin{equation*} C(G,x)=Cl(K_n\cup mK_1,x+1)=(x+2)^n+m(x+1). \end{equation*} A direct calculation shows that $C(G,x)$ is log-concave when $m\leqslant\left\lfloor\frac{n^2+n}{n-2}\cdot 2^{n-2}\right\rfloor$; $C(G,x)$ is unimodal but not log-concave when $\left\lfloor\frac{n^2+n}{n-2}\cdot 2^{n-2}\right\rfloor+1\leqslant m\leqslant\frac{n^2-5n}{2}\cdot 2^{n-2}$; $C(G,x)$ is not unimodal when $m\geqslant\frac{n^2-5n}{2}\cdot 2^{n-2}+1$. \end{exa} More generally, let $G$ be any median graph satisfying $\alpha_3(G)>\alpha_2(G)$ (for example, the above $Q_n$ with $n\geqslant 9$). Whether $C(G,x)$ is unimodal or not, the cube polynomial of the graph obtained from $G$ by attaching sufficiently many ($\geqslant \alpha_2(G)-\alpha_1(G)+1$) pendant vertices is not unimodal. Thus, Conjecture \ref{con:CubePsareUnimodal} is false. \section{Conclusion and problems} In the present paper, we obtain an inequality relation between the cube polynomials of partial cubes $G$ and the clique polynomials of their crossing graphs, i.e., $C(G,x)\leqslant Cl(G^{\#},x+1)$. Moreover, equality holds if and only if $G$ is a median graph.
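The threshold claims in Example \ref{exa:NonUnimodalCliqueP} are easy to spot-check numerically. The short Python sketch below (function names are ours) tests unimodality and log-concavity of a coefficient sequence and builds the coefficients of $Cl(K_n\cup mK_1,x)=(x+1)^n+mx$:

```python
from math import comb

def is_unimodal(s):
    """True if s weakly rises and then weakly falls."""
    i = 0
    while i + 1 < len(s) and s[i] <= s[i + 1]:
        i += 1
    return all(s[j] >= s[j + 1] for j in range(i, len(s) - 1))

def is_log_concave(s):
    """True if s[i-1] * s[i+1] <= s[i]^2 for all interior i."""
    return all(s[i - 1] * s[i + 1] <= s[i] ** 2 for i in range(1, len(s) - 1))

def cl_coeffs(n, m):
    """Coefficients of Cl(K_n u mK_1, x) = (x+1)^n + m x, index = degree."""
    c = [comb(n, k) for k in range(n + 1)]
    c[1] += m
    return c

# For n = 6 the example predicts: log-concave up to m = floor(42/8) = 5,
# unimodal up to m = n(n-3)/2 = 9, and not unimodal from m = 10 on.
```

For $n=6$, $m=10$ gives the coefficient sequence $(1,16,15,20,15,6,1)$, which dips at degree $2$ and rises again, so it is indeed not unimodal.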
A {\em hexagonal system} (or {\em benzenoid system}) is a 2-connected finite plane graph such that every interior face is a regular hexagon of side length one. Let $H$ be a hexagonal system. The Clar covering polynomial (or Zhang-Zhang polynomial) $\zeta(H, x)$ is an important graph polynomial in mathematical chemistry, which was introduced by Zhang and Zhang \cite{zz96}. Zhang et al. \cite{zss13} proved that $\zeta(H,x)=C(R(H),x)$, where $R(H)$ is the resonance graph of $H$ (which is moreover a median graph \cite{zls08}). Although Conjecture \ref{con:CubePsareUnimodal} associated with median graphs is false, the conjecture on the resonance graphs of hexagonal systems (a subclass of median graphs) is still open; it was proposed by Zhang and Zhang as follows: \begin{con}{\em\cite{zz00}}\label{con:ClarPisUnimodal} For a hexagonal system $H$, $C(R(H),x)$ (i.e., $\zeta(H,x)$) is unimodal. \end{con} Further, since all coefficients of $\zeta(H,x)$ are positive, the following stronger conjecture was proposed by Li et al. \cite{lpw20} after extensive numerical checks. \begin{con}{\em\cite{lpw20}}\label{con:ClarPisLogConcave} For a hexagonal system $H$, the Clar covering polynomial $\zeta(H,x)$ is log-concave. \end{con} By Theorem \ref{thm:=forMedianGraph} and Theorem \ref{thm:EveryGraphisaCrossingGraph}, the cube polynomials of median graphs, the clique polynomials and the independence polynomials of general graphs can be transformed into each other. Concerning independence polynomials, the following famous conjecture was proposed by Alavi, Malde, Schwenk and Erd\"os in 1987: \begin{con}{\em\cite{amse87}}\label{con:IndependencePofTreeisUnimodal} The independence polynomial of every tree is unimodal. \end{con} The following conjecture is stronger. \begin{con}{\em\cite{bg21}}\label{con:IndependencePofTreeisLC} The independence polynomial of every tree is log-concave. \end{con} Concerning log-concavity, the following proposition is well-known.
\begin{pro}{\em\cite{h74}}\label{pro:P(x+1)isLC} If the polynomial $P(x)$ with positive coefficients is log-concave, then so is $P(x+1)$. \end{pro} A graph is called a {\em co-tree} if it is the complement of a tree. In view of Theorem \ref{thm:=forMedianGraph} and Proposition \ref{pro:P(x+1)isLC}, we propose the following conjecture, which is weaker than Conjecture \ref{con:IndependencePofTreeisLC}. \begin{con}\label{con:CubePsareLC} Let $G$ be a median graph. The cube polynomial $C(G,x)$ is log-concave if $G^{\#}$ is a co-tree. \end{con} Thus, the structural properties of median graphs whose crossing graphs are co-trees are worthy of future study. \vskip 0.2 cm \noindent{\bf Acknowledgements:} This work is partially supported by National Natural Science Foundation of China (Grants No. 12071194, 11571155, 11961067). \end{document}
\begin{document} \title{Biased tomography schemes: an objective approach} \author{Z. Hradil$^1$, D. Mogilevtsev$^2$, J. \v{R}eh\'{a}\v{c}ek$^1$} \affiliation{Department of Optics, Palacky University, 17. listopadu 50, 772 00 Olomouc, Czech Republic$^1$} \affiliation{Institute of Physics, Belarus National Academy of Sciences, F.Skarina Ave. 68, Minsk 220072 Belarus; \\ Departamento de F\'{\i}sica, Universidade Federal de Alagoas Cidade Universit\'{a}ria, 57072-970, Macei\'{o}, AL, Brazil$^2$} \date{} \begin{abstract} We report on an intrinsic relationship between the maximum-likelihood quantum-state estimation and the representation of the signal. A quantum analogy of the transfer function determines the space where the reconstruction should be done without the need for any \emph{ad hoc} truncations of the Hilbert space. An illustration of this method is provided by a simple yet practically important tomography of an optical signal registered by realistic binary detectors. \end{abstract} \pacs{03.65.Wj, 42.50.Lc} \maketitle The development of effective and robust methods of quantum state reconstruction is a task of crucial importance for quantum optics and information. Such methods are needed for quantum diagnostics: for the verification of quantum state preparation, for the analysis of quantum dynamics and decoherence, and for information retrieval. Since the original proposal for quantum tomography and its experimental verification \cite{{tomogr},{tomogr1}}, this discipline has made significant progress and is nowadays considered a routine experimental technique. Reconstruction has been successfully applied to probing the structure of entangled states of light and ions, operations (quantum gates) with entangled states of light and ions, or the internal angular momentum structure of correlated beams, just to mention a few examples \cite{lnp}. All these applications exhibit common features.
Any successful quantum tomography scheme relies on three key ingredients: the availability of a tomographically complete measurement, a suitable representation of quantum states, and an adequate mathematical algorithm for inverting the measured data. In addition, the entire reconstruction scheme must be robust with respect to noise. In real experiments the presence of noise is unavoidable due to losses and due to the fact that detectors are not ideal. The presence of losses poses a limit on the accuracy of a reconstruction. However, the very presence of losses can be turned to advantage and used for reconstruction purposes. As has been predicted in Ref.~\cite{moghrad98}, imperfect detectors, which are able to distinguish only between the presence and absence of a signal (binary detectors), provide sufficient data for the reconstruction of the quantum state of a light mode provided their quantum efficiencies are less than $100\%$. The presence of losses is thus a necessary condition for a successful reconstruction: an ideal binary detector would measure only the probability of finding the signal in the vacuum state. The required robustness of a tomography scheme with respect to noise is often difficult to meet, especially if the scheme is biased, that is, if some aspects of the quantum system in question are observed more efficiently than others. Since our ability to design and control measurements is severely limited, this situation will typically arise when one wants to characterize a system with a large or infinite number of degrees of freedom, for instance in the quantum tomography of a light mode mentioned above. The standard approach is to truncate the Hilbert space by a certain cut-off, drastically reducing the number of parameters involved \cite{lvovsky}. Needless to say, such an \emph{ad hoc} truncation lacks physical foundation. It may have a detrimental impact on the accuracy of the reconstruction or, conversely, it may lead to spuriously regular results.
The latter case may easily happen when an experimentalist seeks the result in the neighborhood of the true state. Such a tacitly accepted assumption may be crucial, as it allows the elimination of an infinite number of unwanted free parameters. This drawback erodes the notion of tomography as an objective scheme. In this Letter we propose a reconstruction procedure that is optimized with respect to the experimental set-up, representation and inversion, designed for dealing with biased tomography schemes. The recommended approach to the generic problem of quantum state tomography will be demonstrated on the scheme of a light mode adopting elements of linear optics (a beam splitter) with realistic binary detectors detecting only the presence or absence of the signal. In addition, we will, for the first time, present a statistically correct description of such a tomographic scheme. Let us develop a generic formalism for the maximum-likelihood (ML) inversion of the measured data. Assume that detections of a signal are enumerated by the generic index $j$. Their probabilities are predicted by quantum theory by means of positive-operator-valued measure (POVM) elements ${\bf A}_{j}$, \begin{equation}\label{probab} p_{j}= {\rm Tr}[ {\bf A}_{j} \rho ], \qquad 0\le {\bf A}_{j}\le 1, \end{equation} $\rho $ being the quantum state. The observations ${\bf A}_j$ are assumed to be tomographically complete in the Hilbert subspace we are interested in. No other specific assumptions about the operators ${\bf A}_{j}$, their commutation relations or group properties will be made. In general, the probabilities $p_j$ are not normalized to one, as the operator sum \begin{equation}\label{closure} \sum_{j} {\bf A}_{j} = G \ge 0 \end{equation} may differ from the identity operator. The theoretical probabilities $p_{j}$ can be sampled experimentally by means of the registered data $ N_{j}$. The aim is to find the quantum state $\rho$ from the data $N_j$.
The ML scenario hinges upon a likelihood functional associated with the statistics of the experiment. In the following, we will adopt the generic form of the likelihood for un-normalized probabilities \cite{barlow} \begin{equation}\label{likeli} \log {\cal L} = \sum_{j} N_{j} \log \left[ \frac{ p_{j}}{ \sum_{j' }p_{j'} }\right], \end{equation} which should be maximized with respect to $\rho$. Here the index $j$ runs over all registered data. The extremal equation for the maximum-likelihood state can be derived in three steps: (i) the positivity of $\rho$ is made explicit by decomposing it as $\rho=\sigma^\dag\sigma$; (ii) the likelihood (\ref{likeli}) is varied with respect to the independent matrix $\sigma$ using $\delta (\log p_j)/\delta \sigma=A_j \sigma^\dag/p_j$; (iii) the obtained variation is set equal to zero and multiplied from the right by $\sigma$, with the result \begin{equation} \label{correctMaxLik} R \rho = G \rho, \qquad R = \sum_{j} \frac{\sum_{j'}p_{j'}}{ \sum_{j'} N_{j'} } \frac{N_j}{p_j(\rho)} {\bf A}_j, \end{equation} where the operator $G$ is defined by Eq.~(\ref{closure}) and the operator $R$ depends on the particular choice of $\cal{L}$. Notice that this equation may be cast in the form of the expectation-maximization (EM) algorithm \cite{em1} \begin{equation} \label{extremal_equation} R_{G} \rho_G = \rho_G, \end{equation} where $R_G = G^{-1/2} R G^{-1/2}$ and $\rho_G = G^{1/2} \rho G^{1/2}$. This extremal equation may be solved by iterations in a fixed orthogonal basis. Keeping the positive semi-definiteness of $\rho_G$ [by combining Eq.~(\ref{correctMaxLik}) with its Hermitian conjugate], the $(n+1)$th iteration reads \[ \rho_G^{(n+1)} = R_G^{(n)} \rho_G^{(n)} R_G^{(n)}, \quad R_G^{(n)}=G^{-1/2} R(\rho^{(n)}) G^{-1/2}. \] Starting with some initial guess $\rho_G^{(0)}$ the iterations are repeated until the fixed point is reached. In terms of $\rho_G$, the desired solution is then given by \begin{equation}\label{inverse_map} \rho = G^{-1/2} \rho_G G^{-1/2}.
\end{equation} Going back to the likelihood in Eq.~(\ref{likeli}), we now see that the operator $G$, coming from the mutual normalization of probabilities, $\sum_j p_j=\mathrm{Tr}[\rho G]$, provides a complete (normalized) POVM, which is equivalent to the original biased observations $A_j$: $\sum_j G^{-1/2} A_j G^{-1/2}=1_G$. This establishes the preferred basis for a reconstruction. Due to the division by the operator $G$ in Eq.~(\ref{inverse_map}) and in the sentence above, the reconstruction can be done only in the subspace spanned by the non-zero eigenvalues of $G$. The spectrum of $G$ therefore plays the role of a tomographic transfer function, analogous to the transfer function in optical imaging. It quantifies the resolution of the reconstruction in the Hilbert space. A large eigenvalue of $G$ indicates that many observations overlap in the corresponding Hilbert subspace, so this part of the Hilbert space is more visible. The Hilbert subspace where the reconstruction is done is clearly not a matter of free choice in a proper statistical analysis. This is the main result of this Letter. This also gives a clue as to how to approximate the solution in the infinite-dimensional case, simply by taking the subspace corresponding to the dominant eigenvalues. The result of the reconstruction can easily be checked in the preferred basis afterwards. If the reconstructed state exhibits dominant contributions for the components with relatively small eigenvalues of $G$, the result cannot be trusted. The essence of the correct reconstruction inheres in the following recommended scenario: after collecting all the data, the optimal basis for the reconstruction is identified as the eigenvectors of the operator $G$. The truncation is achieved by taking into account only those with dominant eigenvalues, and in the resulting subspace the ML extremal equation should be solved keeping the positive semi-definiteness of the density matrix.
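As a minimal numerical sketch of the iteration $\rho_G^{(n+1)}=R_G^{(n)}\rho_G^{(n)}R_G^{(n)}$ mapped back through Eq.~(\ref{inverse_map}) (this example is our own illustration, not part of the Letter: the toy qubit POVM, the noise-free data, and all parameter choices are assumptions), one may iterate the equivalent update $\rho \mapsto G^{-1}R\rho R G^{-1}$ for a small biased POVM:

```python
import numpy as np

d = 2  # toy Hilbert-space dimension (a qubit)

# A biased but tomographically complete POVM: weighted projectors whose
# sum G differs from the identity operator, as in Eq. (2).
vecs = [np.array([1.0, 0.0]),
        np.array([0.0, 1.0]),
        np.array([1.0, 1.0]) / np.sqrt(2),
        np.array([1.0, 1.0j]) / np.sqrt(2)]
A = [0.5 * np.outer(v, v.conj()) for v in vecs]
G = sum(A)
G_inv = np.linalg.inv(G)

# Noise-free "data" N_j proportional to the true probabilities p_j = Tr[A_j rho].
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho_true = 0.9 * np.outer(psi, psi.conj()) + 0.1 * np.eye(d) / 2
N = np.array([np.trace(a @ rho_true).real for a in A])

rho = np.eye(d) / d  # initial guess
for _ in range(5000):
    p = np.array([np.trace(a @ rho).real for a in A])
    # R of Eq. (4); the prefactor restores the mutual normalization.
    R = sum((p.sum() / N.sum()) * (Nj / pj) * a for Nj, pj, a in zip(N, p, A))
    # rho -> G^{-1} R rho R G^{-1}, i.e. one EM step in the G-preferred basis.
    rho = G_inv @ R @ rho @ R @ G_inv
    rho /= np.trace(rho).real  # keep unit trace
```

Each step preserves Hermiticity and positive semi-definiteness by construction, and the fixed point satisfies $R\rho=G\rho$ of Eq.~(\ref{correctMaxLik}).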
This establishes quantum tomography as an objective tool for the analysis of infinite-dimensional quantum systems. Indeed, previously reported results of tomographic schemes have always chosen the space for reconstruction \emph{ad hoc}: if one knows what the result should be, it is not really difficult to get it. Let us illustrate this procedure on the following simple realistic detection set-up: the signal state (described by the density matrix $\rho$) of the input mode $a$ is mixed on a beam splitter with the probe coherent state $|\beta\rangle$ of the mode $b$, and the mixed field is detected on a single on/off detector. Then the probability $p$ of having \textit{no} counts on the detector is measured. Such non-ideal measurements have already been used for tomography purposes. The inference of a photon number distribution was proposed in \cite{mog98} and experimentally realized in \cite{paris_exp}. A more advanced setup based on a multichannel fiber loop detector was developed and experimentally verified earlier in \cite{loop}. As proposed in \cite{Wallent} and \cite{Banaszek1}, the reconstruction of a full density matrix can be done by measuring a coherently shifted signal. This reconstruction technique has also been implemented experimentally as a direct counting of the Wigner function \cite{Banaszek2}. However, the algorithms used for the quantum state reconstruction were not robust, as indicated by the fact that they could give non-physical results. This is due to the a priori constraints put on a quantum object, namely the positive semi-definiteness of the density matrix, $\rho \ge 0 $, which is not guaranteed in the above mentioned schemes. While it seems to be intractable to implement the condition of positive semi-definiteness in the Wigner representation, it can be done in the general formalism adopting the maximum-likelihood estimation.
The probability of registering no counts on the detector is given by Mandel's formula \cite{mandel}: \begin{equation} p=\langle:\exp{\{-\nu_cc^{\dagger}c\}}:\rangle, \label{p01} \end{equation} where $\nu_c$ is the efficiency of the detector, $c^{\dagger}$ and $c$ are the creation and annihilation operators of the output mode, and $::$ denotes normal ordering. For simplicity, we assume here that in the absence of the signal the detector does not produce any clicks; dark counts are ignored. Let us assume that the beam splitter transforms the input modes $a$ and $b$ in the following way: $ c=a\cos(\alpha)+b\sin(\alpha) $. Averaging over the probe mode $b$, from Eq.~(\ref{p01}) one obtains \begin{equation} \label{probability} p=\sum\limits_{n=0}^{\infty}(1-{\bar\nu})^n\langle n|D^{\dagger}(\gamma)\rho D(\gamma)|n\rangle, \end{equation} where $ {\bar\nu}=\nu_c\cos^2(\alpha), \quad \gamma=-\beta \tan(\alpha), \quad D(\gamma)=\exp{\{\gamma a^{\dagger}-\gamma^*a\}}$ is the coherent shift operator, and $|n\rangle$ denotes a Fock state of the signal mode $a$. Using the operator notation $ {\bf R}_{n,\gamma} = D(\gamma)|n \rangle \langle n| D^{\dagger}(\gamma)$, the no-count probability is generated by the POVM elements ${\bf A}_{\nu, \gamma}= \sum_n (1- \nu)^n {\bf R}_{n,\gamma}$ and, defining the collective index $j = \{ \nu, \gamma\}$, the counted probability coincides with Eq.~(\ref{probab}). \begin{figure} \caption{ Eigenvalues of the matrix $G$ (\ref{closure}) truncated at $N_{tr}=15$. The simulated measurement was done at $N_p$ $\gamma$-points equidistantly distributed in the regions: (a) and (b) $Re(\gamma)\in [-2,2]$, $Im(\gamma)\in [-2,2]$; (c) $Re(\gamma)\in [-1,1]$, $Im(\gamma)=0$; (d) $Re(\gamma)\in [1,1.01]$, $Im(\gamma)=0$.
In all panels, $10$ equidistant values of the detector efficiency were chosen from the interval $\eta\in [0.1,0.9]$.} \label{newfig1} \end{figure} Figure~\ref{newfig1} shows how a suitable choice of $\gamma$-points for a fixed truncation number $N_{tr}=15$ can be achieved. Obviously, the amount of data used in Fig.~\ref{newfig1}(a) as compared to Fig.~\ref{newfig1}(b) is excessive for the reconstruction. On the other hand, when the number of points is too small, or they are chosen in an inappropriate way, the eigenvalues of $G$ differ strongly, making the reconstruction unfeasible. For example, in Fig.~\ref{newfig1}(d) the last eigenvalue is only $\sim 10^{-5}$. However, one needs to mention that the analysis of $G$ provides a necessary but not sufficient condition for the feasibility of the reconstruction. In particular, a measurement at a single $\gamma$ point is not sufficient (just as a measurement at $\gamma=0$ gives only the diagonal elements). Measurements at two or more different non-zero $\gamma$ points are needed. The confidence interval on the reconstructed density matrix elements can be provided with the help of the variance $\sigma(\rho_{mn})=\left(F(\rho_{mn})N_{mes}\right)^{-1/2}$, where $N_{mes}$ is the total number of measurements, and the Fisher information $F$ can be defined for the real part of the density matrix elements as \cite{fisher}: \begin{equation} F(Re[\rho_{mn}])=\sum_{j} {\sum_{j' }p_{j'}\over p_{j}} \left[ {\partial\over\partial Re(\rho_{mn})}\frac{ p_{j}}{ \sum_{j' }p_{j'} }\right]^2, \label{fish} \end{equation} and similarly with $Re$ changed to $Im$ for the imaginary part of $\rho$. To illustrate our discussion, let us consider a reconstruction of the following state (Figure~\ref{newfig2}): \begin{equation} \label{state} |\phi\rangle= \left(|0\rangle+\exp\{0.5i\}|2\rangle \right)/\sqrt{2}. \end{equation} The simulation was done using a total of $10^7$ measurements collected at five different points of the phase plane $\gamma$.
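The spectrum of $G$ discussed above can be examined numerically with a short sketch (our own illustration, with an assumed small grid of $\gamma$ points and efficiencies; it builds the no-count POVM elements ${\bf A}_{\nu,\gamma}$ in a truncated Fock basis and diagonalizes their sum):

```python
import numpy as np

N_tr = 15  # Fock-space truncation, as in Fig. 1
a = np.diag(np.sqrt(np.arange(1.0, N_tr)), k=1)  # truncated annihilation operator

def displacement(gamma):
    # D(gamma) = exp(gamma a^dag - gamma^* a); the exponent K is anti-Hermitian,
    # so we exponentiate via the eigendecomposition of the Hermitian matrix -iK,
    # which keeps D exactly unitary even after truncation.
    K = gamma * a.conj().T - np.conj(gamma) * a
    lam, V = np.linalg.eigh(-1j * K)
    return V @ np.diag(np.exp(1j * lam)) @ V.conj().T

def povm_no_count(nu, gamma):
    # A_{nu,gamma} = sum_n (1-nu)^n D(gamma)|n><n|D^dag(gamma)
    D = displacement(gamma)
    return D @ np.diag((1.0 - nu) ** np.arange(N_tr)) @ D.conj().T

gammas = [0.0, 0.3, -0.3, 0.3j, -0.3j]  # assumed measurement points
nus = np.linspace(0.1, 0.9, 10)         # assumed detector efficiencies
G = sum(povm_no_count(nu, g) for nu in nus for g in gammas)

transfer = np.linalg.eigvalsh(G)  # the "tomographic transfer function"
```

For such small $|\gamma|$ the dominant eigenvalues concentrate near the vacuum, while high Fock components are weakly probed, mirroring the decay of the spectra in Fig.~\ref{newfig1}.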
\begin{figure} \caption{ A reconstruction of the state (\ref{state}) according to procedure (\ref{extremal_equation}). The following measurements were used: $Re(\gamma)=-0.2,-0.1,0,0.1,0.2$; $Im(\gamma)=0.1,-0.5,0,0.5,0.1$; $20$ equidistantly distributed detector efficiencies in the interval $[0.1,0.9]$ were used; the Hilbert space was truncated at $N_{tr}=5$. Panel (a) shows the eigenvalues of the matrix $G$. Panels (b) and (d) show the real and imaginary parts of the reconstructed matrix (in the Fock basis). They were obtained using $10^6$ iterations of the EM algorithm. Panel (c) shows the variances of the real part ($n\le m$) and imaginary part ($n>m$) of the reconstructed elements given by Eq. (\ref{fish}). } \label{newfig2} \end{figure} In Fig.~\ref{newfig2}(a) one can see the eigenvalues of the matrix $G$ (\ref{closure}). Obviously, the chosen set of points is suitable for the reconstruction. Notice the correlation between decreasing eigenvalues and increasing errors in Figs.~\ref{newfig2}(a) and (c). \begin{figure} \caption{A reconstruction of the signal coherent state $\alpha=\exp\{i\pi/4\}$; (a) the reconstructed Wigner function; (b) the diagonal elements of the reconstructed density matrix; (c) the difference between the exact and the reconstructed Wigner functions; (d) the variance $\sigma(\gamma,\gamma^*)$. The Wigner function was reconstructed point-wise at $2500$ points of the phase plane from $N_r=10^4$ measurements per point using $N_{it}=10^3$ iterations of the EM algorithm. The Hilbert space was truncated at $N_{tr}=12$; $30$ different values of the detector efficiency were used, equidistantly distributed in the interval $[0.1,0.9]$.} \label{newfig3} \end{figure} This objective approach may be compared with alternative schemes based on the reconstruction of the Wigner function. A measurement at any given $\gamma$ point is able to give the value of the Wigner function at that point.
Indeed \cite{wig}, \begin{eqnarray} W(\gamma) = {2\over\pi}\sum\limits_{n=0}^{\infty}(-1)^n R_{n}(\gamma),\label{wign} \end{eqnarray} where $R_{n}(\gamma) = {\rm Tr}[\rho {\bf R}_{n,\gamma}] \equiv \langle n| D^{\dagger}(\gamma)\rho D(\gamma)|n \rangle$. For a fixed value of the amplitude $\gamma$ one should seek the set of non-negative matrix elements $ R_{n}(\gamma)$ and plug these values into the definition of the Wigner function (\ref{wign}). These matrix elements can be found by inverting the counted statistics (\ref{probability}) measured with a set of different efficiencies, i.e. by solving a linear positive inverse problem. This can be accomplished by means of the EM algorithm, similarly to the approach used in \cite{Banaszek3}. An example of such a reconstruction is shown in Fig.~\ref{newfig3}. Though the reconstruction seems to be faithful, one should keep in mind that even very small deviations from the true Wigner function might make it non-physical. Such a Wigner function would not correspond to any physical, positive definite density matrix. This is due to the fact that the operators ${\bf R}_{n,\gamma}$ do not commute for different $\gamma$s, so noisy measurements may give inconsistent results. Going back from the Wigner function to the density matrix using Glauber's formula \cite{glaub}, $\rho=2\int d^2\gamma\, W(\gamma^*,\gamma)D(2\gamma)(-1)^{\hat{n}}$, one can see in Fig.~\ref{newfig3}(b) that some diagonal elements of the reconstructed matrix are negative. A generic biased tomography scheme, addressing some aspects of the quantum system more efficiently than others, was introduced. Its performance is characterized by a quantum analogue of the transfer function, which may be further optimized to achieve the desired resolution. This establishes tomography as an objective tool for quantum diagnostics.
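For a coherent input the alternating series (\ref{wign}) can be checked directly (a sketch of our own, not from the Letter: it assumes ideal noiseless data, for which $R_n(\gamma)$ is a Poisson distribution with mean $|\alpha-\gamma|^2$, so the series sums to the Gaussian $(2/\pi)\exp\{-2|\gamma-\alpha|^2\}$):

```python
import numpy as np

alpha = np.exp(1j * np.pi / 4)  # signal coherent state, as in Fig. 3

def wigner_point(gamma, n_max=60):
    # R_n(gamma) = |<n|D^dag(gamma)|alpha>|^2 = exp(-mu) mu^n / n!,
    # with mu = |alpha - gamma|^2, computed by a stable recurrence.
    mu = abs(alpha - gamma) ** 2
    R = np.empty(n_max)
    R[0] = np.exp(-mu)
    for n in range(1, n_max):
        R[n] = R[n - 1] * mu / n
    signs = (-1.0) ** np.arange(n_max)
    return (2.0 / np.pi) * float(np.sum(signs * R))
```

At $\gamma=\alpha$ this returns the peak value $2/\pi$, and elsewhere it reproduces the expected Gaussian profile of the coherent-state Wigner function.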
The recommended approach was demonstrated on a simple, robust and effective quantum tomography scheme using detectors that are only capable of distinguishing between the presence and absence of photons. The authors acknowledge the support from Research Project MSM6198959213 of the Czech Ministry of Education, Grant No. 202/06/0307 of the Czech Grant Agency, EU project COVAQIAL FP6-511004 (J.R. and Z.H.), and project BRFFI of Belarus and CNPq of Brazil (D.M.). \end{document}
\begin{document} \begin{abstract} Let $X$, $Y$ and $Z$ be Banach spaces and let $U$ be a subspace of $\mathcal{L}(X^*,Y)$, the Banach space of all operators from $X^*$ to~$Y$. An operator $S: U \to Z$ is said to be $(\ell^s_p,\ell_p)$-summing (where $1\leq p <\infty$) if there is a constant $K\geq 0$ such that $$ \Big( \sum_{i=1}^n \|S(T_i)\|_Z^p \Big)^{1/p} \le K \sup_{x^* \in B_{X^*}} \Big(\sum_{i=1}^n \|T_i(x^*)\|_Y^p\Big)^{1/p} $$ for every $n\in \mathbb{N}$ and every $T_1,\dots,T_n \in U$. In this paper we study this class of operators, introduced by Blasco and Signes as a natural generalization of the $(p,Y)$-summing operators of Kislyakov. On one hand, we discuss Pietsch-type domination results for $(\ell^s_p,\ell_p)$-summing operators. In this direction, we provide a negative answer to a question raised by Blasco and Signes, and we also give new insight on a result by Botelho and Santos. On the other hand, we extend to this setting the classical theorem of Kwapie\'{n} characterizing those operators which factor as $S_1\circ S_2$, where $S_2$ is absolutely $p$-summing and $S_1^*$ is absolutely $q$-summing ($1<p,q<\infty$ and $1/p+1/q \leq 1$). \end{abstract} \title{A class of summing operators acting in spaces of operators} \section{Introduction} Summability of series in Banach spaces is a central classical topic in mathematical analysis. This study is approached from an abstract point of view as a part of the general analysis of the summability properties of operators, using some remarkable results of the theory of operator ideals. Pietsch's Factorization Theorem is nowadays the central tool in this topic, and different versions of this result adapted to other contexts are currently known. This theorem establishes that operators that transform weakly $p$-summable sequences into absolutely $p$-summable ones can always be dominated by an integral, and factored through a subspace of an $L_p$-space.
Some related relevant results can also be formulated in terms of integral domination and factorization of operators. For example, recall that an operator between Banach spaces $S:X \to Y$ is said to be $(p,q)$-dominated (where $1<p,q<\infty$ and $1/p+1/q=1/r\leq 1$) if for every couple of finite sequences $(x_i)_{i=1}^n$ in~$X$ and $(y_i^*)_{i=1}^n$ in~$Y^*$, the strong $\ell_r$-norm of the sequence $(\langle S(x_i), y_i^* \rangle )_{i=1}^n$ is bounded above by the product of the weak $\ell_p$-norm of $(x_i)_{i=1}^n$ and the weak $\ell_q$-norm of $(y_i^*)_{i=1}^n$ (up to a multiplying constant independent of both sequences and their length). Kwapie\'{n}'s Factorization Theorem~\cite{kwa} states that an operator is $(p,q)$-dominated if and only if it can be written as the composition $S_1\circ S_2$ of operators such that $S_2$ is absolutely $p$-summing and the adjoint $S_1^*$ is absolutely $q$-summing (cf. \cite[\S 19]{def-flo}). The aim of this paper is to continue with the specific study of the summability properties of operators defined on spaces of operators. Throughout this paper $X$, $Y$ and $Z$ are Banach spaces. \begin{definition}[Blasco-Signes, \cite{bla-sig}]\label{definition:pPettisSumming} Let $1\leq p<\infty$ and let $U$ be a subspace of $\mathcal L(X^*,Y)$. An operator $S: U \to Z$ is said to be {\em $(\ell^s_p,\ell_p)$-summing} if there is a constant $K\geq 0$ such that \begin{equation}\label{eqn:psumming} \Big( \sum_{i=1}^n \|S(T_i)\|_Z^p \Big)^{1/p} \le K \sup_{x^* \in B_{X^*}} \Big(\sum_{i=1}^n \|T_i(x^*)\|_Y^p\Big)^{1/p} \end{equation} for every $n\in \mathbb{N}$ and every $T_1,\dots,T_n \in U$. \end{definition} Some fundamental properties of this type of operators are already known, as well as the main picture of their summability properties. The works of Blasco and Signes~\cite{bla-sig} and Botelho and Santos \cite{bot-san} fixed the framework and solved a great part of the natural problems appearing in this context. 
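As a finite-dimensional toy illustration of Definition~\ref{definition:pPettisSumming} (our own numerical sketch, not part of the paper: the Euclidean spaces, the choice of $S$ as evaluation at a fixed $x_0^*\in B_{X^*}$, and the Monte-Carlo approximation of the supremum are all assumptions), the evaluation operator $S(T)=T(x_0^*)$ satisfies the defining inequality with constant $K=1$, since the left-hand side samples only one point of the ball over which the right-hand side takes a supremum:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n, p = 4, 3, 6, 2  # dim of X* and Y, number of operators T_i, the exponent p

# Fixed unit vector x0 in B_{X*}; S(T) = T(x0) is (l^s_p, l_p)-summing with K = 1.
x0 = rng.normal(size=d)
x0 /= np.linalg.norm(x0)
Ts = [rng.normal(size=(m, d)) for _ in range(n)]  # operators T_i: X* -> Y as matrices

lhs = sum(np.linalg.norm(T @ x0) ** p for T in Ts) ** (1.0 / p)

# Monte-Carlo approximation of the sup over the unit sphere of X*;
# including x0 itself guarantees the sampled sup dominates lhs exactly.
xs = rng.normal(size=(2000, d))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
xs = np.vstack([xs, x0])
rhs = max(sum(np.linalg.norm(T @ x) ** p for T in Ts) ** (1.0 / p) for x in xs)
```

Here `lhs <= rhs` holds by construction, which is exactly the inequality of Definition~\ref{definition:pPettisSumming} with $K=1$ for this particular $S$.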
In the particular case when $U$ is the injective tensor product $X \hat{\otimes}_\varepsilon Y$ (naturally identified as a subspace of~$\mathcal{L}(X^*,Y)$), $(\ell^s_p,\ell_p)$-summing operators had been studied earlier by Kislyakov~\cite{kis} as ``$(p,Y)$-summing'' operators. In particular, he gave a Pietsch-type domination theorem for $(\ell^s_p,\ell_p)$-summing operators defined on $X \hat{\otimes}_\varepsilon Y$ (see \cite[Theorem~1.1.6]{kis}). This led to the natural question of whether a Pietsch-type domination theorem holds for arbitrary $(\ell^s_p,\ell_p)$-summing operators, see \cite[Question~5.2]{bla-sig}. Botelho and Santos extended Kislyakov's result by showing that this is the case when $U$ is Schwartz's $\varepsilon$-product $X\varepsilon Y$, i.e. the subspace of all operators from~$X^*$ to~$Y$ which are ($w^*$-to-norm) continuous when restricted to~$B_{X^*}$ (see \cite[Theorem~3.1]{bot-san}). This paper is organized as follows. In Section~\ref{section:Pietsch} we give new insight on the Botelho-Santos theorem and we provide a negative answer to the aforementioned question, see Example~\ref{example:counterBS}. To this end, we characterize those $(\ell^s_p,\ell_p)$-summing operators admitting a Pietsch-type domination by means of the strong operator topology (Theorem~\ref{theorem:equiv}). All of this is naturally connected with a discussion on measurability properties of operators which might be of independent interest. In Section~\ref{section:Kwapien} we start a general analysis of the summability properties of operators defined on spaces of operators that imply similar properties for the adjoint maps. Our main result along this way is a Kwapie\'{n}-type theorem involving the special summation that arises in this setting related to the strong operator topology, see Theorem~\ref{theorem:equiv2}. \subsubsection*{Notation and terminology} All our Banach spaces are real and all our topological spaces are Hausdorff. 
By a {\em subspace} of a Banach space we mean a norm-closed linear subspace. By an {\em operator} we mean a continuous linear map between Banach spaces. The norm of a Banach space~$X$ is denoted by $\|\cdot\|_X$ or simply $\|\cdot\|$. We write $B_X=\{x\in X:\|x\|\leq 1\}$ (the closed unit ball of~$X$). The topological dual of~$X$ is denoted by~$X^*$ and we write $w^*$ for its weak$^*$-topology. The evaluation of a functional $x^*\in X^*$ at $x\in X$ is denoted by either $\langle x,x^*\rangle$ or $\langle x^*,x\rangle$. We write $X\not \supseteq \ell_1$ to say that $X$ does not contain subspaces isomorphic to~$\ell_1$. We denote by $\mathcal{L}(X^*,Y)$ the Banach space of all operators from~$X^*$ to~$Y$, equipped with the operator norm. The {\em strong operator topology} ({\em SOT} for short) on $\mathcal{L}(X^*,Y)$ is the locally convex topology for which the sets $$ \{T\in \mathcal{L}(X^*,Y): \, \|T(x^*)\|_Y<\varepsilon\}, \quad x^*\in X^*, \quad \varepsilon>0, $$ are a subbasis of open neighborhoods of~$0$. That is, a net $(T_\alpha)$ in $\mathcal{L}(X^*,Y)$ is SOT-convergent to~$0$ if and only if $\|T_\alpha(x^*)\|_Y\to 0$ for every $x^*\in X^*$. Given a compact topological space~$L$, we denote by $C(L)$ the Banach space of all real-valued continuous functions on~$L$, equipped with the supremum norm. Thanks to Riesz's representation theorem, the elements of $C(L)^*$ are identified with regular Borel signed measures on~$L$. We denote by $P(L) \subseteq C(L)^*$ the convex $w^*$-compact set of all regular Borel probability measures on~$L$. For each $t\in L$, we write $\delta_t\in P(L)$ to denote the evaluation functional at~$t$, i.e. $\delta_t(h):=h(t)$ for all $h\in C(L)$. A function defined on~$L$ with values in a Banach space is said to be {\em universally strongly measurable} if it is strongly $\mu$-measurable for all $\mu \in P(L)$. We will mostly consider the case when $L$ is the dual closed unit ball $B_{X^*}$ equipped with the weak$^*$-topology. 
\section{Pietsch-type domination of $(\ell^s_p,\ell_p)$-summing operators}\label{section:Pietsch} Throughout this section we fix $1\leq p<\infty$. The aforementioned Pietsch-type domination theorem for $(\ell^s_p,\ell_p)$-summing operators proved in~\cite[Theorem~3.1]{bot-san} reads as follows: \begin{theorem}[Botelho-Santos]\label{theorem:BS} Let $U$ be a subspace of $X\varepsilon Y$ and let $S:U\to Z$ be an $(\ell^s_p,\ell_p)$-summing operator. Then there exist a constant $K\geq 0$ and $\mu \in P(B_{X^*})$ such that \begin{equation}\label{eqn:BotelhoSantos} \|S(T)\|_Z \leq K \Big(\int_{B_{X^*}}\|T(\cdot)\|_{Y}^p \, d\mu\Big)^{1/p} \end{equation} for every $T\in U$. \end{theorem} A first comment is that the integral of inequality~\eqref{eqn:BotelhoSantos} is always well-defined for any $T\in X\varepsilon Y$ and $\mu\in P(B_{X^*})$. Indeed, the restriction $T|_{B_{X^*}}$ is ($w^*$-to-norm) continuous, so it is universally strongly measurable. Since in addition $T|_{B_{X^*}}$ is bounded, it belongs to the Lebesgue-Bochner space $L_p(\mu,Y)$. \begin{remark}\label{remark:BS} Actually, Theorem~\ref{theorem:BS} is proved in~\cite[Theorem~3.1]{bot-san} for operators~$S$ defined on a subspace~$U$ contained in $$ \mathcal{L}_{w^*,\|\cdot\|}(X^*,Y)=\{T\in \mathcal L(X^*,Y):\, T \text{ is ($w^*$-to-norm) continuous}\}. $$ The proof given there is based on the abstract Pietsch-type domination theorem of Botelho, Pellegrino and Rueda~\cite{bot-pel-rue}, and the argument works for subspaces of $X\varepsilon Y$ as well. We stress that $\mathcal{L}_{w^*,\|\cdot\|}(X^*,Y)$ consists of finite rank operators, one has $$ \overline{\mathcal{L}_{w^*,\|\cdot\|}(X^*,Y)}^{\|\cdot\|}=X\hat{\otimes}_\varepsilon Y \subseteq X \varepsilon Y $$ and, in general, $\mathcal{L}_{w^*,\|\cdot\|}(X^*,Y)\neq X\varepsilon Y$. \end{remark} We next provide a more direct proof of Theorem~\ref{theorem:BS}. Yet another approach will be presented at the end of this section. 
\begin{proof}[Proof of Theorem~\ref{theorem:BS}] For any $n\in \mathbb{N}$ and $\bar{T}=(T_1,\dots,T_n)\in U^n$, we define $$ \Delta_{\bar{T}}: P(B_{X^*}) \to \mathbb{R}, \quad \Delta_{\bar{T}}(\mu):=\sum_{i=1}^n \|S(T_i)\|_Z^p - K^{p} \int_{B_{X^*}} \sum_{i=1}^n \|T_i(\cdot)\|_{Y}^p \, d\mu, $$ where $K\geq 0$ is a constant as in Definition~\ref{definition:pPettisSumming}. Clearly, $\Delta_{\bar{T}}$ is convex and $w^*$-continuous, because the real-valued function $$ x^*\mapsto \sum_{i=1}^n \|T_i(x^*)\|_Y^p $$ is $w^*$-continuous on~$B_{X^*}$. This function attains its supremum at some $x_{\bar{T}}^*\in B_{X^*}$. Bearing in mind that $S$ is $(\ell^s_p,\ell_p)$-summing, we get $\Delta_{\bar{T}}(\delta_{x^*_{\bar{T}}})\leq 0$. Note also that the collection of all functions of the form $\Delta_{\bar{T}}$ is a convex cone in~$\mathbb{R}^{P(B_{X^*})}$. Indeed, given $\bar{T}=(T_1,\dots,T_n)\in U^n$, $\bar{R}=(R_1,\dots,R_m)\in U^m$, $\alpha\geq 0$ and $\beta \geq 0$, we have $\alpha\Delta_{\bar{T}}+\beta\Delta_{\bar{R}}=\Delta_{\bar{H}}$, where $$ \bar{H}=(\alpha^{1/p}T_1,\dots,\alpha^{1/p}T_n,\beta^{1/p}R_1,\dots,\beta^{1/p}R_m). $$ Therefore, by Ky Fan's Lemma (see e.g. \cite[Lemma~9.10]{die-alt}), there is $\mu \in P(B_{X^*})$ such that $\Delta_{\bar{T}}(\mu)\leq 0$ for all functions of the form $\Delta_{\bar{T}}$. In particular, inequality~\eqref{eqn:BotelhoSantos} holds for all $T\in U$. \end{proof} Clearly, in order to extend the statement of Theorem~\ref{theorem:BS} to other subspaces $U$ of $\mathcal{L}(X^*,Y)$, the real-valued map $\|T(\cdot )\|_Y$ needs to be $\mu$-measurable for every $T\in U$. This holds automatically if $U$ is a subspace of $$ \mathcal{UM}(X^*,Y):=\{T\in \mathcal{L}(X^*,Y): \, T|_{B_{X^*}} \mbox{ is universally strongly measurable}\}. $$ Note that $\mathcal{UM}(X^*,Y)$ is a SOT-sequentially closed subspace of $\mathcal{L}(X^*,Y)$. 
\begin{example}\label{example:UM1} \begin{enumerate} \item[(i)] We have $X\varepsilon Y \subseteq \mathcal{UM}(X^*,Y)$ according to the comment preceding Remark~\ref{remark:BS}. \item[(ii)] More generally, {\em every ($w^*$-to-weak) continuous operator from~$X^*$ to~$Y$ belongs to $\mathcal{UM}(X^*,Y)$.} Indeed, just bear in mind that any weakly continuous function from a compact topological space to a Banach space is universally strongly measurable, see \cite[Proposition~4]{ari-alt}. We stress that, by the Banach-Dieudonn\'{e} theorem, an operator $T:X^*\to Y$ is ($w^*$-to-weak) continuous if and only if the restriction $T|_{B_{X^*}}$ is ($w^*$-to-weak) continuous. \item[(iii)] In particular, {\em if $X$ is reflexive, then $\mathcal{L}(X^*,Y)=\mathcal{UM}(X^*,Y)$}. \end{enumerate} \end{example} \begin{example}\label{example:UM2} {\em If $X \not \supseteq \ell_1$, then every $T\in \mathcal{L}(X^*,Y)$ with separable range belongs to~$\mathcal{UM}(X^*,Y)$.} Indeed, a result of Haydon~\cite{hay-J} (cf. \cite[Theorem~6.9]{van}) states that $X^{**}=\mathcal{UM}(X^*,\mathbb{R})$ if and only if $X\not\supseteq \ell_1$. The conclusion now follows from Pettis' measurability theorem applied to $T|_{B_{X^*}}$ and each $\mu\in P(B_{X^*})$, see e.g. \cite[p.~42, Theorem~2]{die-uhl-J}. \end{example} So, we will look for conditions ensuring that an $(\ell^s_p,\ell_p)$-summing operator defined on a subspace of $\mathcal{UM}(X^*,Y)$ is $(\ell^s_p,\ell_p)$-controlled, according to the following: \begin{definition}\label{definition:dominated} Let $U$ be a subspace of $\mathcal{UM}(X^*,Y)$. An operator $S: U \to Z$ is said to be {\em $(\ell^s_p,\ell_p)$-controlled} if there exist a constant $K\geq 0$ and $\mu \in P(B_{X^*})$ such that \begin{equation}\label{eqn:domi} \|S(T)\|_Z \leq K \Big(\int_{B_{X^*}} \|T(\cdot)\|_{Y}^p\, d\mu\Big)^{1/p} \end{equation} for every $T\in U$. 
\end{definition} \begin{proposition}\label{proposition:facto} Let $U$ be a subspace of $\mathcal{UM}(X^*,Y)$ and let $S: U \to Z$ be an operator. Then $S$ is $(\ell^s_p,\ell_p)$-controlled if and only if there exist $\mu\in P(B_{X^*})$, a subspace $W \subseteq L_p(\mu,Y)$ and an operator $\tilde{S}:W \to Z$ such that $S$ factors as $$ \xymatrix@R=3pc@C=3pc{U \ar[r]^{S} \ar[d]_{i_\mu|_U} & Z\\ W \ar@{->}[ur]_{\tilde{S}} & \\ } $$ where $i_\mu:\mathcal{UM}(X^*,Y)\to L_p(\mu,Y)$ is the operator that maps each $T\in \mathcal{UM}(X^*,Y)$ to the equivalence class of $T|_{B_{X^*}}$ in~$L_p(\mu,Y)$. \end{proposition} \begin{proof} It is clear that such a factorization implies that $S$ is $(\ell^s_p,\ell_p)$-controlled. Conversely, inequality~\eqref{eqn:domi} in Definition~\ref{definition:dominated} allows us to define a continuous linear map $\tilde{S}_0: i_\mu(U) \to Z$ by declaring $\tilde{S}_0(i_\mu(T)):=S(T)$ for all $T\in U$. Now, we can extend $\tilde{S}_0$ to an operator $\tilde{S}$ from $W:=\overline{i_\mu(U)}$ to~$Z$. Clearly, we have $\tilde{S}\circ i_\mu|_U=S$. \end{proof} We next give a couple of applications of Proposition~\ref{proposition:facto} related to topological properties of $(\ell^s_p,\ell_p)$-controlled operators. The class of Banach spaces~$X$ such that $L_1(\mu)$ is separable for every $\mu \in P(B_{X^*})$ is rather wide. It contains, for instance, all weakly compactly generated Banach spaces (cf. \cite[Theorem~13.20 and Corollary~14.6]{fab-ultimo}) as well as all Banach spaces not containing subspaces isomorphic to~$\ell_1$ (see \cite[Proposition~B.1]{avi-mar-ple}). For such spaces we have: \begin{corollary}\label{corollary:SeparableRange} Suppose that $L_1(\mu)$ is separable for every $\mu \in P(B_{X^*})$ and that $Y$ is separable. Let $U$ be a subspace of $\mathcal{UM}(X^*,Y)$ and let $S:U \to Z$ be an $(\ell^s_p,\ell_p)$-controlled operator. Then $S$ has separable range.
\end{corollary} \begin{proof} Under such assumptions, $L_p(\mu,Y)$ is separable for any $\mu\in P(B_{X^*})$. The result now follows from Proposition~\ref{proposition:facto}. \end{proof} A subset of a Banach space is said to be {\em weakly precompact} if every sequence in it admits a weakly Cauchy subsequence. Rosenthal's $\ell_1$-theorem~\cite{ros} (cf. \cite[Theorem~5.37]{fab-ultimo}) characterizes weakly precompact sets as those which are bounded and contain no sequence equivalent to the unit basis of~$\ell_1$. An operator between Banach spaces is said to be {\em weakly precompact} if it maps bounded sets to weakly precompact sets; this is equivalent to saying that it factors through a Banach space not containing subspaces isomorphic to~$\ell_1$. For more information on weakly precompact operators we refer the reader to~\cite{gon-abe}. \begin{corollary}\label{corollary:IdealProperties} Let $U$ be a subspace of $\mathcal{UM}(X^*,Y)$ and let $S:U \to Z$ be an $(\ell^s_p,\ell_p)$-controlled operator. Then: \begin{enumerate} \item[(i)] $S$ is weakly compact whenever $Y$ is reflexive. \item[(ii)] $S$ is weakly precompact whenever $Y \not\supseteq \ell_1$. \end{enumerate} \end{corollary} \begin{proof} We consider a factorization of~$S$ as in Proposition~\ref{proposition:facto} and we distinguish two cases: {\em Case $1<p<\infty$.} If $Y$ is reflexive, then so is $L_p(\mu,Y)$ (see e.g. \cite[p.~100, Corollary~2]{die-uhl-J}) and the same holds for~$W$, hence $S$ is weakly compact. On the other hand, if $Y \not\supseteq \ell_1$, then $L_p(\mu,Y) \not\supseteq\ell_1$ (see e.g. \cite[Theorem~2.2.2]{cem-men}) and so $W\not\supseteq\ell_1$, hence $S$ is weakly precompact. {\em Case $p=1$.} Let $j: L_2(\mu,Y)\to L_1(\mu,Y)$ be the identity operator. Since $$ i_\mu(B_U) \subseteq j(B_{L_2(\mu,Y)}), $$ we deduce that $i_\mu(B_U)$ is relatively weakly compact (resp. weakly precompact) whenever $Y$ is reflexive (resp.
$Y \not\supseteq \ell_1$), and the same holds for $S(B_U)=\tilde{S}(i_\mu(B_U))$. \end{proof} The following result shows the link between $(\ell^s_p,\ell_p)$-controlled and $(\ell^s_p,\ell_p)$-summing operators. \begin{theorem}\label{theorem:equiv} Let $U$ be a subspace of~$\mathcal{UM}(X^*,Y)$ and let $S:U\to Z$ be an operator. Let us consider the following statements: \begin{enumerate} \item[(i)] $S$ is $(\ell^s_p,\ell_p)$-controlled. \item[(ii)] $S$ is $(\ell^s_p,\ell_p)$-summing and (SOT-to-norm) sequentially continuous. \end{enumerate} Then (i)$\Rightarrow$(ii). Moreover, both statements are equivalent whenever $U \cap X\varepsilon Y$ is SOT-sequentially dense in~$U$. \end{theorem} \begin{proof} Suppose first that $S$ is $(\ell^s_p,\ell_p)$-controlled and consider a factorization of~$S$ as in Proposition~\ref{proposition:facto}. We will deduce that $S$ is $(\ell^s_p,\ell_p)$-summing and (SOT-to-norm) sequentially continuous by checking that so is~$i_\mu$. On one hand, $i_\mu$ is $(\ell^s_p,\ell_p)$-summing, because for every $n\in \mathbb{N}$ and $T_1,\dots,T_n\in \mathcal{UM}(X^*,Y)$ we have $$ \sum_{i=1}^n \|i_\mu(T_i)\|_{L_p(\mu,Y)}^p= \int_{B_{X^*}}\sum_{i=1}^n \|T_i(\cdot)\|_Y^p \, d\mu \leq \sup_{x^*\in B_{X^*}} \sum_{i=1}^n \|T_i(x^*)\|_Y^p. $$ On the other hand, $i_\mu$ is (SOT-to-norm) sequentially continuous. Indeed, let $(T_n)$ be a sequence in~$\mathcal{UM}(X^*,Y)$ which SOT-converges to~$0$, i.e. $\|T_n(x^*)\|_Y\to 0$ for every $x^*\in X^*$. By the Banach-Steinhaus theorem, $\sup\{\|T_n\|:\, n\in\mathbb{N}\}<\infty$. From Lebesgue's dominated convergence theorem it follows that $(i_\mu(T_n))$ converges to~$0$ in the norm topology of~$L_p(\mu,Y)$. Suppose now that (ii) holds and that $U \cap X\varepsilon Y$ is SOT-sequentially dense in~$U$. 
The restriction $S|_{U \cap X\varepsilon Y}$ is $(\ell^s_p,\ell_p)$-summing and so Theorem~\ref{theorem:BS} and Proposition~\ref{proposition:facto} ensure the existence of $\mu \in P(B_{X^*})$, a subspace $W \subseteq L_p(\mu,Y)$ and an operator $\tilde{S}:W\to Z$ such that $i_\mu(U\cap X\varepsilon Y)\subseteq W$ and $$ \tilde{S}\circ i_\mu|_{U\cap X\varepsilon Y}=S|_{U \cap X\varepsilon Y}. $$ Then we have $i_\mu(U)\subseteq W$ and $\tilde{S}\circ i_\mu|_{U}=S$, because $S$ and $i_\mu$ are (SOT-to-norm) sequentially continuous and $U \cap X\varepsilon Y$ is SOT-sequentially dense in~$U$. Therefore, $S$ is $(\ell^s_p,\ell_p)$-controlled. \end{proof} We are now ready to present a negative answer to \cite[Question 5.2]{bla-sig}: \begin{example}\label{example:counterBS} Suppose that $X$ is not reflexive and $X^*$ is separable (e.g. $X=c_0$). Then $X^{**}=\mathcal{UM}(X^*,\mathbb{R})$, every $S\in X^{***}$ is $(\ell^s_p,\ell_p)$-summing, but no $S\in X^{***}\setminus X^*$ is $(\ell^s_p,\ell_p)$-controlled (as operators from $X^{**}$ to~$\mathbb{R}$). \end{example} \begin{proof} The equality $X^{**}=\mathcal{UM}(X^*,\mathbb{R})$ follows from the fact that $X\not\supseteq\ell_1$, according to Haydon's result which we already mentioned in Example~\ref{example:UM2}. Every $S\in X^{***}$ is easily seen to be $(\ell^s_p,\ell_p)$-summing as an operator from $X^{**}$ to~$\mathbb{R}$ (use that $B_{X^*}$ is $w^*$-dense in~$B_{X^{***}}$, by Goldstine's theorem). On the other hand, if $S\in X^{***}$ is $(\ell^s_p,\ell_p)$-controlled, then it is $w^*$-sequentially continuous by Theorem~\ref{theorem:equiv} (bear in mind that SOT$=w^*$ on~$X^{**}$). Since $(B_{X^{**}},w^*)$ is metrizable (because $X^*$ is separable), the restriction $S|_{B_{X^{**}}}$ is $w^*$-continuous and so, by the Banach-Dieudonn\'{e} theorem, $S$ is $w^*$-continuous, i.e. $S\in X^*$. 
\end{proof} In order to apply Theorem~\ref{theorem:equiv}, we need subspaces $U$ of $\mathcal{UM}(X^*,Y)$ for which $U \cap X\varepsilon Y$ is SOT-sequentially dense in~$U$; there are many examples of such subspaces. An operator $T:X^*\to Y$ is said to be {\em affine Baire-1} (we write $T\in \mathcal{AB}(X^*,Y)$ for short) if there is a sequence in $X\varepsilon Y$ which SOT-converges to~$T$. Affine Baire-1 operators were studied by Mercourakis and Stamati~\cite{mer-sta} and Kalenda and Spurn\'{y}~\cite{kal-spu}. We present below some examples. Recall first that a Banach space $Y$ has the {\em approximation property} ({\em AP}) if for each norm-compact set $C \subseteq Y$ and each $\varepsilon>0$ there is a finite rank operator $R:Y \to Y$ such that $\|R(y)-y\|_Y\leq \varepsilon$ for all $y\in C$. If in addition $R$ can be chosen in such a way that $\|R\| \leq \lambda$ for some constant $\lambda\geq 1$ (independent of~$C$ and~$\varepsilon$), then $Y$ is said to have the {\em $\lambda$-bounded approximation property} ({\em $\lambda$-BAP}). A Banach space is said to have the {\em bounded approximation property} ({\em BAP}) if it has the $\lambda$-BAP for some $\lambda\geq 1$. For instance, every Banach space with a Schauder basis has the BAP. In general, the AP and the BAP are different. However, a separable dual Banach space has the AP if and only if it has the $1$-BAP. For more information on these properties we refer the reader to~\cite{casazza}. \begin{example}\label{example:weak} {\em Suppose that $Y$ has the BAP. If $T \in \mathcal{L}(X^*,Y)$ is ($w^*$-to-weak) continuous and has separable range, then $T\in \mathcal{AB}(X^*,Y)$.} \end{example} \begin{proof} Let $\lambda\geq 1$ be a constant such that $Y$ has the $\lambda$-BAP. Given any countable set $D \subseteq Y$, there is a sequence $(R_n)$ of finite rank operators on~$Y$ such that $\|R_n\|\leq \lambda$ for all $n\in \mathbb{N}$ and $\|R_n(y)-y\|_Y \to 0$ for every $y\in D$.
Therefore, $\|R_n(y)-y\|_Y \to 0$ for every $y\in \overline{D}$ (the norm-closure of~$D$). In particular, if this argument is applied to any countable set $D$ such that $D\subseteq T(X^*) \subseteq \overline{D}$, we get that the sequence $(R_n\circ T)$ is SOT-convergent to~$T$ in~$\mathcal{L}(X^*,Y)$. Note that each $R_n\circ T$ is ($w^*$-to-weak) continuous (because so is~$T$) and has finite rank, hence it belongs to~$\mathcal{L}_{w^*,\|\cdot\|}(X^*,Y)\subseteq X\varepsilon Y$. \end{proof} \begin{example}\label{example:MS} {\em Suppose that $X^*$ is separable and that either $X^*$ or $Y$ has the BAP. Then} $$ \mathcal{L}(X^*,Y) = \mathcal{AB}(X^*,Y), $$ see \cite[Theorems~2.18 and~2.19]{mer-sta}. The proofs of these results contain a gap which was pointed out and corrected in \cite[Remark~4.4]{kal-spu}. Note that the separability assumption on~$Y$ that appears in the statement of \cite[Theorem~2.19]{mer-sta} can be removed by using the arguments of~\cite{kal-spu}. \end{example} Clearly, $\mathcal{AB}(X^*,Y)$ is a linear subspace of $\mathcal{L}(X^*,Y)$. It is norm-closed whenever $Y$ has the BAP, as we next show. To this end, we use an argument similar to the usual proof that the uniform limit of a sequence of real-valued Baire-1 functions is Baire-1 (see e.g. \cite[Proposition~A.126]{luk-alt}). However, some technicalities arise since we need to approximate with operators instead of arbitrary continuous maps. \begin{lemma}\label{lem:closed} \it If $Y$ has the BAP, then $\mathcal{AB}(X^*,Y)$ is norm-closed in $\mathcal{L}(X^*,Y)$. \end{lemma} \begin{proof} Fix $\lambda\geq 1$ such that $Y$ has the $\lambda$-BAP. Let $T \in \overline{\mathcal{AB}(X^*,Y)}^{\|\cdot\|}$ with $\|T\|=1$. Let $(U_k)$ be a sequence in $\mathcal{AB}(X^*,Y)$ such that $\|U_k\|\leq 2^{-k+1}$ for all $k\in \mathbb{N}$ and $T=\sum_{k\in \mathbb{N}}U_k$ in the operator norm.
Given $k\in \mathbb{N}$, we can apply to~$U_k$ the vector-valued version of Mokobodzki's theorem proved in \cite[Theorem~2.2]{kal-spu} to obtain a sequence $(S_{k,n})_{n\in \mathbb{N}}$ in $X\varepsilon Y$ such that \begin{itemize} \item $(S_{k,n})_{n\in \mathbb{N}}$ SOT-converges to~$U_k$; \item $\|S_{k,n}\|\leq \lambda 2^{-k+1}$ for all $n\in \mathbb{N}$. \end{itemize} Define a sequence $(T_n)$ in $X \varepsilon Y$ by $$ T_n:=\sum_{k=1}^n S_{k,n} \quad\mbox{for all }n\in \mathbb{N}. $$ It is easy to check that $(T_n)$ SOT-converges to~$T$, hence $T\in \mathcal{AB}(X^*,Y)$. \end{proof} As usual, we denote by $\mathcal{K}(X^*,Y)$ the subspace of $\mathcal{L}(X^*,Y)$ consisting of all compact operators from~$X^*$ to~$Y$. Clearly, we have $X\varepsilon Y \subseteq \mathcal{K}(X^*,Y)$. \begin{example}\label{example:MScompact} \it Suppose that $X$ is separable and $X\not \supseteq \ell_1$. \begin{enumerate} \item[(i)] Every finite rank operator $T:X^* \to Y$ is affine Baire-1. \item[(ii)] If $Y$ has the BAP, then $$ \mathcal{K}(X^*,Y) \subseteq \mathcal{AB}(X^*,Y). $$ \end{enumerate} \end{example} \begin{proof} (i) It suffices to check it for rank one operators. Fix $x^{**}\in X^{**}$ and $y\in Y$ in such a way that $T(x^*)=\langle x^{**},x^*\rangle y$ for all $x^*\in X^*$. Since $X$ is $w^*$-sequentially dense in~$X^{**}$ (by the Odell-Rosenthal theorem~\cite{ode-ros}, cf. \cite[Theorem~4.1]{van}), there is a sequence $(x_n)$ in~$X$ which $w^*$-converges to~$x^{**}$. For each $n\in \mathbb{N}$ we define $T_n\in \mathcal{L}_{w^*,\|\cdot\|}(X^*,Y) \subseteq X\varepsilon Y$ by declaring $T_n(x^*):=\langle x_n,x^*\rangle y$ for all $x^*\in X^*$. Clearly, $(T_n)$ is SOT-convergent to~$T$. (ii) Take any $T\in \mathcal{K}(X^*,Y)$. Since $Y$ has the AP, there is a sequence $(T_n)$ of finite rank operators from~$X^*$ to~$Y$ converging to~$T$ in the operator norm. Each $T_n$ is affine Baire-1 by~(i). An appeal to Lemma~\ref{lem:closed} ensures that $T\in \mathcal{AB}(X^*,Y)$. 
\end{proof} The proof of Theorem~\ref{theorem:BS} makes essential use of the $w^*$-continuity on~$B_{X^*}$ of the real-valued map $\|T(\cdot)\|_Y$ for $T\in X\varepsilon Y$. We next present an abstract Pietsch-type domination theorem for $(\ell^s_p,\ell_p)$-summing operators that does not require that continuity assumption, at the price of dominating with a {\em finitely additive} measure. As a consequence of this result, we will obtain another proof of Theorem~\ref{theorem:BS}. Given a measurable space~$(\Omega,\Sigma)$, we denote by $B(\Sigma)$ the Banach space of all bounded $\Sigma$-measurable real-valued functions on~$\Omega$, equipped with the supremum norm. The dual $B(\Sigma)^*$ can be identified with the Banach space ${\rm ba}(\Sigma)$ of all bounded finitely additive real-valued measures on~$\Sigma$, equipped with the variation norm. The duality is given by integration, that is, $\langle h,\nu \rangle=\int_\Omega h \, d\nu$ for every $h\in B(\Sigma)$ and $\nu\in {\rm ba}(\Sigma)$, see e.g. \cite[p.~77, Theorem~7]{die-J}. \begin{theorem}\label{theorem:FA} Let $\Sigma$ be a $\sigma$-algebra on~$B_{X^*}$ and let $U$ be a subspace of $\mathcal{L}(X^*,Y)$ such that the restriction of $\|T(\cdot)\|_Y$ to~$B_{X^*}$ is $\Sigma$-measurable for every $T\in U$. Let $S:U \to Z$ be an $(\ell^s_p,\ell_p)$-summing operator. Then there exist a constant $K\geq 0$ and a finitely additive probability $\nu$ on~$\Sigma$ such that \begin{equation}\label{eqn:FA} \|S(T)\|_Z \leq K \Big(\int_{B_{X^*}} \|T(\cdot)\|_{Y}^p \, d\nu \Big)^{1/p} \end{equation} for every $T\in U$. \end{theorem} \begin{proof} For each $T\in U$ we define $\psi_T\in B(\Sigma)$ by $$ \psi_T(x^*):=\|T(x^*)\|^p_Y \quad \mbox{for all }x^*\in B_{X^*}. $$ Let $L \subseteq {\rm ba}(\Sigma)=B(\Sigma)^*$ be the convex $w^*$-compact set of all finitely additive probabilities on~$\Sigma$. 
For any $n\in \mathbb{N}$ and $\bar{T}=(T_1,\dots,T_n)\in U^n$, we define $$ \Delta_{\bar{T}}: L \to \mathbb{R}, \quad \Delta_{\bar{T}}(\nu):=\sum_{i=1}^n \|S(T_i)\|_Z^p - K^{p} \int_{B_{X^*}} \sum_{i=1}^n \psi_{T_i} \, d\nu, $$ where $K\geq 0$ is a constant as in Definition~\ref{definition:pPettisSumming}. Clearly, $\Delta_{\bar{T}}$ is convex and $w^*$-continuous. Moreover, by the Hahn-Banach theorem there is $\eta_{\bar{T}} \in{\rm ba}(\Sigma)$ with $\|\eta_{\bar{T}}\|_{{\rm ba}(\Sigma)}=1$ such that $$ \Big\langle \sum_{i=1}^n \psi_{T_i},\eta_{\bar{T}} \Big\rangle= \Big\|\sum_{i=1}^n \psi_{T_i}\Big\|_{B(\Sigma)}. $$ Bearing in mind that $\sum_{i=1}^n \psi_{T_i}\geq 0$, it follows that the variation $|\eta_{\bar{T}}| \in L$ satisfies $$ \Big\langle \sum_{i=1}^n \psi_{T_i},|\eta_{\bar{T}}| \Big\rangle=\sup_{x^*\in B_{X^*}}\sum_{i=1}^n \psi_{T_i}(x^*). $$ Therefore, inequality~\eqref{eqn:psumming} in Definition~\ref{definition:pPettisSumming} yields $$ \Delta_{\bar{T}}\big(|\eta_{\bar{T}}|\big) = \sum_{i=1}^n \|S(T_i)\|_Z^p - K^p \Big\langle \sum_{i=1}^n \psi_{T_i},|\eta_{\bar{T}}| \Big\rangle \leq 0. $$ The collection of all functions of the form $\Delta_{\bar{T}}$ is easily seen to be a convex cone in~$\mathbb{R}^{L}$. By Ky Fan's Lemma (see e.g. \cite[Lemma~9.10]{die-alt}), there is $\nu \in L$ such that $\Delta_{\bar{T}}(\nu)\leq 0$ for all functions of the form $\Delta_{\bar{T}}$. In particular, \eqref{eqn:FA} holds for every $T\in U$. \end{proof} \begin{proof}[Another proof of Theorem~\ref{theorem:BS}] Let $\Sigma:={\rm Borel}(B_{X^*},w^*)$. Let $K$ and $\nu$ be as in Theorem~\ref{theorem:FA}. Define $\varphi\in B(\Sigma)^*$ by $\langle h,\varphi\rangle:=\int_{B_{X^*}}h \, d\nu$ for all $h\in B(\Sigma)$. Let $\mu\in C(B_{X^*})^*$ be the restriction of $\varphi$ to~$C(B_{X^*})$ (as a subspace of $B(\Sigma)$).
Then $\mu\in P(B_{X^*})$ and~\eqref{eqn:FA} now reads as $$ \|S(T)\|_Z \leq K \Big(\int_{B_{X^*}} \|T(\cdot)\|_{Y}^p \, d\mu \Big)^{1/p} $$ for every $T\in U \subseteq X\varepsilon Y$. \end{proof} \section{Kwapie\'{n}-type theorem for $(\ell^s_p,\ell^s_q)$-dominated operators}\label{section:Kwapien} Throughout this section we fix $1< p, q< \infty$ such that $1/p + 1/q \leq 1$. Let $1\leq r < \infty$ be defined by $1/p + 1/q =1/r$. An operator $S:X\to Y$ is said to be {\em $(p,q)$-dominated} if there is a constant $K\geq 0$ such that $$ \Big( \sum_{i=1}^n | \langle S(x_i),y^*_i \rangle|^r \Big)^{1/r} \\ \le K \sup_{x^* \in B_{X^*}} \Big(\sum_{i=1}^n |\langle x_i,x^*\rangle|^p\Big)^{1/p} \cdot \sup_{y \in B_{Y}} \Big(\sum_{i=1}^n | \langle y,y^*_i \rangle|^q\Big)^{1/q} $$ for every $n\in \mathbb{N},$ every $x_1,\dots,x_n \in X$ and every $y^*_1, \dots, y^*_n \in Y^*$. The classical result of Kwapie\'{n}~\cite{kwa} mentioned in the introduction says that an operator between Banach spaces is $(p,q)$-dominated if and only if it can be written as $S_1\circ S_2$ for some operators $S_1$ and $S_2$ such that $S_2$ is absolutely $p$-summing and $S_1^*$ is absolutely $q$-summing (cf. \cite[\S 19]{def-flo}). Our aim in this section is to extend Kwapie\'{n}'s result to the framework of $(\ell^s_p,\ell_p)$-summing operators, see Theorem~\ref{theorem:equiv2} below. From now on we assume that $Z^*$ is a subspace of~$\mathcal{UM}(E^*,F)$ for some fixed Banach spaces $E$ and $F$. Accordingly, the adjoint of any operator taking values in~$Z$ is defined on a subspace of~$\mathcal{UM}(E^*,F)$ and we can discuss whether it is $(\ell^s_q,\ell_q)$-summing or $(\ell^s_q,\ell_q)$-controlled. \begin{definition}\label{definition:pqdom} Let $U$ be a subspace of $\mathcal L(X^*,Y)$. 
An operator $S: U \to Z$ is said to be {\em $(\ell^s_p,\ell^s_q)$-dominated} if there is a constant $K\geq 0$ such that \begin{multline}\label{eqn:pqdom} \Big( \sum_{i=1}^n | \langle S(T_i),z^*_i \rangle|^r \Big)^{1/r} \\ \le K \sup_{x^* \in B_{X^*}} \Big(\sum_{i=1}^n \|T_i(x^*)\|_Y^p\Big)^{1/p} \cdot \sup_{e^* \in B_{E^*}} \Big(\sum_{i=1}^n \| z^*_i (e^*)\|_{F}^q\Big)^{1/q} \end{multline} for every $n\in \mathbb{N},$ every $T_1,\dots,T_n \in U$ and every $z^*_1, \dots, z^*_n \in Z^*$. \end{definition} \begin{theorem}\label{theorem:equiv2} Let $U$ be a subspace of~$\mathcal{UM}(X^*,Y)$ and let $S:U\to Z$ be an operator. Consider the following statements: \begin{enumerate} \item[(i)] $S$ is $(\ell^s_p,\ell^s_q)$-dominated. \item[(ii)] There exist a constant $K\geq 0$ and measures $\mu \in P(B_{X^*})$ and $\eta \in P(B_{E^{*}})$ such that \begin{equation}\label{eqn:intpqdom} | \langle S(T), z^*\rangle| \leq K \Big(\int_{B_{X^*}}\|T(\cdot)\|^p_{Y} \, d\mu\Big)^{1/p} \cdot \Big(\int_{B_{E^*}}\| z^*(\cdot)\|^q_{F} \, d\eta \Big)^{1/q} \end{equation} for every $T \in U\cap X\varepsilon Y$ and every $z^* \in Z^*\cap E \varepsilon F$. \item[(iii)] There exist a constant $K\geq 0$ and measures $\mu \in P(B_{X^*})$ and $\eta \in P(B_{E^{*}})$ such that \eqref{eqn:intpqdom} holds for every $T \in U$ and every $z^* \in Z^*$. \item[(iv)] There exist a Banach space $W$, an $(\ell^s_p,\ell_p)$-controlled operator $S_2:U\to W$ and an operator $S_1:W\to Z$ with $(\ell^s_q,\ell_q)$-controlled adjoint such that $S$ factors as $S= S_1 \circ S_2$. \item[(v)] There exist a Banach space $W$, an $(\ell^s_p,\ell_p)$-summing operator $S_2:U\to W$ and an operator $S_1:W\to Z$ with $(\ell^s_q,\ell_q)$-summing adjoint such that $S$ factors as~$S= S_1 \circ S_2$. \end{enumerate} Then (iii)$\Rightarrow$(iv)$\Rightarrow$(v)$\Rightarrow$(i)$\Rightarrow$(ii). 
All statements are equivalent if, in addition, we assume that: \begin{enumerate} \item[(a)] the identity map on~$Z^*$ is (SOT-to-$w^*$) sequentially continuous; \item[(b)] $Z^* \cap E\varepsilon F$ is SOT-sequentially dense in~$Z^*$; \item[(c)] $U \cap X\varepsilon Y$ is SOT-sequentially dense in~$U$; \item[(d)] $S$ is (SOT-to-norm) sequentially continuous. \end{enumerate} \end{theorem} For the sake of brevity it is convenient to introduce the following: \begin{definition}\label{definition:admissible} We say that the triple $(Z,E,F)$ is {\em admissible} if conditions~(a) and~(b) above hold. \end{definition} Before embarking on the proof of Theorem~\ref{theorem:equiv2} we present some examples of admissible triples. Recall that the {\em weak operator topology} ({\em WOT} for short) on $\mathcal{L}(E^*,F)$ is the locally convex topology for which the sets $$ \{R\in \mathcal{L}(E^*,F): \, |\langle R(e^*),f^*\rangle|<\varepsilon\}, \quad e^*\in E^*, \quad f^*\in F^*, \quad \varepsilon>0, $$ are a subbasis of open neighborhoods of~$0$. So, a net $(R_\alpha)$ in $\mathcal{L}(E^*,F)$ is WOT-convergent to~$0$ if and only if $(R_\alpha(e^*))$ is weakly null in~$F$ for every $e^*\in E^*$. \begin{example}\label{example:wot-vs-weak} {\em If $Z^* \subseteq E \varepsilon F$, then $(Z,E,F)$ is admissible.} Indeed, (b) holds trivially, while (a) follows from the fact that a sequence in $E\varepsilon F$ is WOT-convergent to~$0$ if and only if it is weakly null in~$E\varepsilon F \subseteq \mathcal{L}(E^*,F)$ (see e.g. \cite[Theorem~1.3]{col-rue}). \end{example} \begin{example}\label{example:triple-l1} Suppose that $E \not\supseteq \ell_1$. Take $Z:=E^*$ and $F:=\mathbb{R}$. Then we have $Z^{*} = E^{**} = \mathcal{UM}(E^{*},F)$ (see Example~\ref{example:UM2}) and, of course, SOT $=w^*$ on~$Z^*$, so that (a) holds. If in addition $E$ is separable, then (b) also holds, i.e. $E\varepsilon F = E$ is $w^*$-sequentially dense in $E^{**}$, by the Odell-Rosenthal theorem~\cite{ode-ros} (cf. 
\cite[Theorem~4.1]{van}). \end{example} \begin{example}\label{example:projective} Suppose that $F:=X_0^*$ for a Banach space~$X_0$. Take $Z:=E^* \hat{\otimes}_\pi X_0$ (the projective tensor product of~$E^*$ and~$X_0$). Then: \begin{enumerate} \item[(i)] $Z^*=\mathcal{L}(E^*,F)$ in the natural way (see e.g. \cite[p.~230, Corollary~2]{die-uhl-J}). \item[(ii)] The identity map on~$Z^*$ is (WOT-to-$w^*$) sequentially continuous. \item[(iii)] If $E^*$ is separable and either $E^*$ or~$F$ has the BAP, then $Z^*=\mathcal{UM}(E^*,F)$ and $(Z,E,F)$ is admissible. \end{enumerate} \end{example} \begin{proof} (ii) Let $(\varphi_n)$ be a sequence in~$Z^*=\mathcal{L}(E^*,F)$ which WOT-converges to~$0$. Then it is bounded (by the Banach-Steinhaus theorem) and $$ \langle e^*\otimes x_0,\varphi_n \rangle=\langle x_0,\varphi_n(e^*)\rangle \to 0 \quad\mbox{for all }e^*\in E^*\mbox{ and }x_0\in X_0, $$ hence $(\varphi_n)$ is $w^*$-null. (iii) Under such assumptions $E\varepsilon F$ is SOT-sequentially dense in $\mathcal{L}(E^*,F)$ (see Example~\ref{example:MS}). In particular, we have $\mathcal{L}(E^*,F)=\mathcal{UM}(E^*,F)$. Bearing in mind~(ii) it follows that $(Z,E,F)$ is admissible. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem:equiv2}] (iii)$\Rightarrow$(iv) By assumption we have $$ |\langle S(T),z^*\rangle| \leq K \|i_\mu(T)\|_{L_p(\mu,Y)} \|z^*\|_{Z^*} \quad\mbox{for every }T\in U \mbox{ and }z^*\in Z^*, $$ hence $$ \|S(T)\|_Z \leq K \|i_\mu(T)\|_{L_p(\mu,Y)} \quad \mbox{for every }T\in U. $$ Write $W:=\overline{i_\mu(U)}$. By the previous inequality, there is an operator $S_1: W \to Z$ such that $S_1\circ i_\mu|_U=S$ (cf. the proof of Proposition~\ref{proposition:facto}). Of course, $S_2:=i_\mu|_U$ is $(\ell^s_p,\ell_p)$-controlled. We claim that $S_1^*:Z^* \to W^*$ is $(\ell^s_q,\ell_q)$-controlled. 
Indeed, inequality~\eqref{eqn:intpqdom} reads as $$ |\langle i_\mu(T), S_1^* (z^*) \rangle | \le K \, \|i_\mu(T)\|_{L_p(\mu,Y)} \, \| i_\eta(z^*)\|_{L_q(\eta,F)} $$ for every $T\in U$ and $z^*\in Z^*$. Thus, $\|S_1^*(z^*)\|_{W^*} \leq K \| i_\eta(z^*)\|_{L_q(\eta,F)}$ for every $z^*\in Z^*$, so that $S_1^*$ is $(\ell^s_q,\ell_q)$-controlled. (iv)$\Rightarrow$(v) This follows from Theorem~\ref{theorem:equiv}. (v)$\Rightarrow$(i) Fix $n\in \mathbb{N}$ and take $T_1,\dots,T_n \in U$ and $z^*_1, \dots, z^*_n \in Z^*$. Then H\"{o}lder's inequality and the fact that $S_2$ (resp.~$S_1^*$) is $(\ell^s_p,\ell_p)$-summing (resp. $(\ell^s_q,\ell_q)$-summing) yield \begin{multline*} \Big(\sum_{i=1}^n | \langle S(T_i),z^*_i \rangle|^r\Big)^{1/r}= \Big(\sum_{i=1}^n | \langle S_2(T_i),S_1^*(z^*_i) \rangle|^r\Big)^{1/r} \\\leq \Big(\sum_{i=1}^n \|S_2(T_i)\|_W^r \cdot\|S_1^*(z^*_i)\|_{W^*}^r\Big)^{1/r} \leq \Big( \sum_{i=1}^n \|S_2(T_i)\|_W^p\Big)^{1/p} \cdot \Big( \sum_{i=1}^n \|S_1^*(z_i^*)\|_{W^*}^q\Big)^{1/q} \\ \leq K \sup_{x^* \in B_{X^*}} \Big(\sum_{i=1}^n \|T_i(x^*)\|_Y^p\Big)^{1/p} \cdot \sup_{e^* \in B_{E^*}} \Big(\sum_{i=1}^n \| z^*_i (e^*)\|_{F}^q\Big)^{1/q} \end{multline*} for some constant $K\geq 0$ independent of the $T_i$'s and $z_i^*$'s. This shows that $S$ is $(\ell^s_p,\ell^s_q)$-dominated. (i)$\Rightarrow$(ii) Observe that $L:=P(B_{X^*}) \times P(B_{E^{*}})$ is a compact convex subset of the locally convex space $C(B_{X^*})^* \times C(B_{E^*})^*$, equipped with the product of the corresponding $w^*$-topologies. Fix $n\in \mathbb{N}$, $$ \bar{T}=(T_1,\dots,T_n ) \in (U\cap X\varepsilon Y)^n \ \ \mbox{and} \ \ \bar{z^*}=(z^*_1,\dots, z^*_n) \in (Z^*\cap E\varepsilon F)^n.
$$ Consider the function $\Delta_{\bar{T}, \bar{z^*} }: L \to \mathbb{R}$ given by $$ \Delta_{\bar{T},\bar{z^*} }(\mu,\eta):= $$ $$ \sum_{i=1}^n |\langle S(T_i), z^*_i \rangle|^r - K^r \frac{r}{p} \int_{B_{X^*}} \sum_{i=1}^n \|T_i(\cdot)\|_{Y}^p \, d\mu - K^r \frac{r}{q} \int_{{B_{E^{*}}}} \sum_{i=1}^n \|z^*_i(\cdot)\|_{F}^q \, d\eta, $$ where $K\geq 0$ is a constant as in Definition~\ref{definition:pqdom}. Clearly, $\Delta_{\bar{T}, \bar{z^*} }$ is convex and continuous, because $T_i\in X\varepsilon Y$ and $z_i^*\in E\varepsilon F$ for every~$i=1,\dots,n$. We claim that $\Delta_{\bar{T}, \bar{z^*}}(\mu,\eta)\leq 0$ for some $(\mu,\eta)\in L$. Indeed, since the functions $$ x^*\mapsto \sum_{i=1}^n \|T_i(x^*)\|_Y^p \quad\mbox{and}\quad e^{*} \mapsto \sum_{i=1}^n \|z^*_i(e^*)\|_F^q $$ are $w^*$-continuous on~$B_{X^*}$ and $B_{E^{*}}$, they attain their suprema at some $x_{\bar{T}}^*\in B_{X^*}$ and $e^{*}_{\bar{z^*}} \in B_{E^{*}}$, respectively. By taking into account Young's inequality, we have \begin{multline}\label{eqn:ultima} \sum_{i=1}^n | \langle S(T_i),z^*_i \rangle|^r \stackrel{\eqref{eqn:pqdom}}{\le} K^r \Big(\sum_{i=1}^n \|T_i(x_{\bar{T}}^*)\|_Y^p\Big)^{r/p} \cdot \Big(\sum_{i=1}^n \|z^*_i (e^{*}_{\bar{z^*}})\|_{F}^q\Big)^{r/q} \\ \leq K^r\frac{r}{p}\sum_{i=1}^n \|T_i(x_{\bar{T}}^*)\|_Y^p+K^r\frac{r}{q}\sum_{i=1}^n \|z^*_i (e^{*}_{\bar{z^*}})\|_{F}^q. \end{multline} If we write $\mu:=\delta_{x^*_{\bar{T}}}\in P(B_{X^*})$ and $\eta:=\delta_{e^{*}_{\bar{z^*}}}\in P(B_{E^*})$, then \eqref{eqn:ultima} yields $\Delta_{\bar{T}, \bar{z^*}}(\mu,\eta)\leq 0$, as required. The collection $\mathcal{C}$ of all functions $\Delta_{\bar{T},\bar{z^*}}$ as above is a convex cone in $\mathbb{R}^{L}$. Indeed, $\mathcal{C}$ is obviously closed under sums and we have $$ \alpha\Delta_{\bar{T},\bar{z^*}}=\Delta_{(\alpha^{1/p}T_1,\dots,\alpha^{1/p}T_n),(\alpha^{1/q}z_1^*,\dots,\alpha^{1/q}z_n^*)} $$ for all $\alpha\geq 0$. By Ky Fan's Lemma (see e.g. 
\cite[Lemma~9.10]{die-alt}), there is $(\mu, \eta) \in L$ such that $\Delta_{\bar{T},\bar{z^*}}(\mu,\eta) \leq 0$ for every $\Delta_{\bar{T},\bar{z^*}}\in \mathcal{C}$. In particular, \begin{multline}\label{eqn:fromKF} |\langle S(T), z^*\rangle|^r \le K^r \frac{r}{p} \int_{B_{X^*}} \|T(\cdot)\|_{Y}^p \, d\mu + K^r \frac{r}{q} \int_{B_{E^{*}}} \|z^*(\cdot)\|_{F}^q \, d\eta \\ \quad\mbox{for every }T \in U\cap X \varepsilon Y \mbox{ and }z^*\in Z^*\cap E \varepsilon F. \end{multline} Fix $T \in U\cap X \varepsilon Y$ and $z^*\in Z^*\cap E \varepsilon F$. We will check that \eqref{eqn:intpqdom} holds. Write $$ a:= \Big(\int_{B_{X^*}} \|T(\cdot)\|_{Y}^p \, d\mu \Big)^{1/p} \quad\mbox{and}\quad b:= \Big(\int_{B_{E^{*}}} \|z^*(\cdot)\|_{F}^q \, d\eta \Big)^{1/q}. $$ If either $a=0$ or $b=0$, then $\langle S(T), z^*\rangle=0$. Indeed, if $a=0$, then for each $n\in \mathbb{N}$ inequality~\eqref{eqn:fromKF} applied to the pair $(nT,z^*)$ yields $$ |\langle S(T), z^*\rangle|^r =\frac{1}{n^r}\cdot |\langle S(nT), z^*\rangle|^r \leq \frac{1}{n^r} \cdot \frac{K^r r b^q}{q}, $$ hence $\langle S(T), z^*\rangle=0$. A similar argument works for the case $b=0$. On the other hand, if $a\neq 0$ and $b\neq 0$, then inequality~\eqref{eqn:fromKF} applied to the pair $(\frac{1}{a}T,\frac{1}{b}z^*)$ yields $$ |\langle S(T), z^*\rangle|^r = a^r \, b^r \, \Big|\Big\langle S\Big(\frac{1}{a}T\Big),\frac{1}{b}z^*\Big\rangle\Big|^r $$ $$ \le K^r \, a^r \, b^r \left( \frac{r}{p \, a^p} \int_{B_{X^*}} \|T(\cdot)\|_{Y}^p \, d\mu + \frac{r}{q \, b^q} \int_{B_{E^{*}}} \|z^*(\cdot) \|_{F}^q \, d\eta \right) = K^r \, a^r \, b^r. $$ This proves~\eqref{eqn:intpqdom} when $T \in U\cap X \varepsilon Y$ and $z^*\in Z^*\cap E \varepsilon F$. Finally, we prove the implication (ii)$\Rightarrow$(iii) under the additional assumptions. Fix $T \in U$ and $z^*\in Z^*$. By~(c) (resp.~(b)), we can take a sequence $(T_n)$ (resp. $(z_n^*)$) in $U\cap X \varepsilon Y$ (resp.
$Z^*\cap E \varepsilon F$) which SOT-converges to~$T$ (resp.~$z^*$). For each $n\in \mathbb{N}$ we have \begin{equation}\label{eqn:tothelimit} |\langle S(T_n),z_n^*\rangle| \leq K \Big(\int_{B_{X^*}}\|T_n(\cdot)\|_Y^p \, d\mu\Big)^{1/p} \cdot \Big(\int_{B_{E^*}}\|z_n^*(\cdot)\|_F^q \, d\eta\Big)^{1/q}. \end{equation} Since the operators $i_\mu$ and $i_\eta$ are (SOT-to-norm) sequentially continuous (see the proof of Theorem~\ref{theorem:equiv}), we have $$ \lim_{n\to \infty} \Big(\int_{B_{X^*}}\|T_n(\cdot)\|_Y^p \, d\mu\Big)^{1/p} = \Big(\int_{B_{X^*}}\|T(\cdot)\|_Y^p \, d\mu\Big)^{1/p} $$ and $$ \lim_{n\to \infty} \Big(\int_{B_{E^*}}\|z_n^*(\cdot)\|_F^q \, d\eta\Big)^{1/q} = \Big(\int_{B_{E^*}}\|z^*(\cdot)\|_F^q \, d\eta\Big)^{1/q}. $$ Moreover, $S$ is (SOT-to-norm) sequentially continuous by assumption~(d), so the sequence $(S(T_n))$ converges to~$S(T)$ in the norm topology. Since $(z_n^*)$ is $w^*$-convergent to~$z^*$ (by~(a)), we conclude that \begin{multline*} |\langle S(T),z^*\rangle| =\lim_{n\to \infty} |\langle S(T_n),z_n^*\rangle| \\ \stackrel{\eqref{eqn:tothelimit}}{\leq} K \Big(\int_{B_{X^*}}\|T(\cdot)\|_Y^p \, d\mu\Big)^{1/p} \cdot \Big(\int_{B_{E^*}}\|z^*(\cdot)\|_F^q \, d\eta\Big)^{1/q}, \end{multline*} as we wanted. The proof is finished. \end{proof} \begin{remark}\label{remark:SOT-norm} Statement (iv) in Theorem~\ref{theorem:equiv2} implies that $S_2$ is (SOT-to-norm) sequentially continuous (by Theorem~\ref{theorem:equiv}) and so is~$S$. \end{remark} \begin{corollary}\label{corollary:Kwapien} Suppose that $Z^* \subseteq E \varepsilon F$. Let $U$ be a subspace of~$X\varepsilon Y$ and let $S:U\to Z$ be an operator. Then the following statements are equivalent: \begin{enumerate} \item[(i)] $S$ is $(\ell^s_p,\ell^s_q)$-dominated. 
\item[(ii)] There exist a constant $K\geq 0$ and measures $\mu \in P(B_{X^*})$ and $\eta \in P(B_{E^{*}})$ such that $$ | \langle S(T), z^*\rangle| \leq K \Big(\int_{B_{X^*}}\|T(\cdot)\|^p_{Y} \, d\mu\Big)^{1/p} \cdot \Big(\int_{B_{E^*}}\| z^*(\cdot)\|^q_{F} \, d\eta \Big)^{1/q} $$ for every $T \in U$ and every $z^* \in Z^*$. \item[(iii)] There exist a Banach space $W$, an $(\ell^s_p,\ell_p)$-summing operator $S_2:U\to W$ and an operator $S_1:W\to Z$ with $(\ell^s_q,\ell_q)$-summing adjoint such that $S$ factors as~$S= S_1 \circ S_2$. \end{enumerate} \end{corollary} \end{document}
\begin{document} \title{On the Optimality of Stein Factors} \author{Adrian R\"ollin} \date{National University of Singapore} \maketitle \begin{abstract}The application of Stein's method for distributional approximation often involves so-called \emph{Stein factors} (also called \emph{magic factors}) in the bounds on the solutions to Stein equations. However, in some cases these factors contain additional (undesirable) logarithmic terms. It has been shown for many Stein factors that the known bounds are sharp and thus that these additional logarithmic terms cannot be avoided in general. However, no probabilistic examples have appeared in the literature that would show that these terms in the Stein factors are not just unavoidable artefacts, but that they are there for a good reason. In this article we close this gap by constructing such examples. This also leads to a new interpretation of the solutions to Stein equations. \end{abstract} \section{Introduction} Stein's method for distributional approximation, introduced by \cite{Stein1972}, has been used to obtain bounds on the distance between probability measures for a variety of distributions in different metrics. There are two main steps involved in the implementation of the method. The first step is to set up the so-called \emph{Stein equation}, involving a \emph{Stein operator}, and to obtain bounds on its solutions and their derivatives or differences; this can be done either analytically, as for example in \cite{Stein1972}, or by means of the probabilistic method introduced by \cite{Barbour1988}. In the second step one then needs to bound the expectation of a functional of the random variable under consideration. There are various techniques to do this, such as the local approach by \cite{Stein1972} and \cite{Chen2004a} or the exchangeable pair coupling by \cite{Stein1986}; see \cite{Chen2009} for a unification of these and many other approaches.
To successfully implement the method, so-called \emph{Stein factors} play an important role. In this article we will use the term \emph{Stein factor} to refer to the asymptotic behaviour of the bounds on the solution to the Stein equation as some of the involved parameters tend to infinity or zero. Some of the known Stein factors are not satisfactory, because they contain terms which often lead to non-optimal bounds in applications. Additional work is then necessary to circumvent this problem; see for example \cite{Brown2000}. There are also situations where the solutions can grow exponentially fast, as has been shown by \cite{Barbour1992c} and \cite{Barbour1998a} for some specific compound Poisson distributions, which limits the usability of Stein's method in these cases. To make matters worse, for many of these Stein factors it has been shown that they cannot be improved; see \cite{Brown1995}, \cite{Barbour1992c} and \cite{Barbour2005}. However, these articles do not address the question whether the problematic Stein factors express a fundamental ``flaw'' in Stein's method or whether there are examples in which these additional terms are truly needed if Stein's method is employed to express the distance between the involved probability distributions in the specific metric. The purpose of this note is to show that the latter statement is in fact true. We will present a general method to construct corresponding probability distributions; this construction not only explains the presence of problematic Stein factors, but also gives new insight into Stein's method. In the next section, we recall the general approach of Stein's method in the context of Poisson approximation in total variation. Although in the univariate case the Stein factors do not contain problematic terms, it will demonstrate the basic construction of the examples. 
Then, in the remaining two sections, we apply the construction to the multivariate Poisson distribution and Poisson point processes, as in these cases the Stein factors contain a logarithmic term which may lead to non-optimal bounds in applications. \section{An illustrative example} In order to explain how to construct examples which illustrate the nature of Stein factors and also to recall the basic steps of Stein's method, we start with the Stein-Chen method for univariate Poisson approximation (see \cite{Barbour1992}). Let the total variation distance between two non-negative, integer-valued random variables $W$ and $Z$ be defined as \ben \label{1} \mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(W),\mathscr{L}(Z)} := \sup_{h\in\%H_{{\mathrm{TV}}}}\babs{\mathbbm{E} h(W) - \mathbbm{E} h(Z)}, \ee where the set $\%H_{{\mathrm{TV}}}$ consists of all indicator functions on the non-negative integers~$\mathbbm{Z}_+$. Assume now that $Z\sim\mathop{\mathrm{Po}}(\lambda)$. Stein's idea is to replace the difference between the expectations on the right hand side of \eqref{1} by \be \mathbbm{E}\{\lambda g_h(W+1) - W g_h(W)\}, \ee where $g_h$ is the solution to the Stein equation \ben \label{2} \lambda g_h(j+1) - j g_h(j) = h(j) - \mathbbm{E} h(Z), \quad j\in\mathbbm{Z}_+. \ee The left hand side of \eqref{2} is an operator that characterises the Poisson distribution; that is, for $\%A g(j) := \lambda g(j+1) - jg(j)$, \be \text{$\mathbbm{E}\%A g(Y) = 0$~~for all bounded $g$} \quad\iff\quad Y\sim\mathop{\mathrm{Po}}(\lambda). \ee Assume for simplicity that $W$ has the same support as $\mathop{\mathrm{Po}}(\lambda)$. With \eqref{2}, we can now write \eqref{1} as \ben \label{3} \mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(W),\mathop{\mathrm{Po}}(\lambda)} = \sup_{h\in\%H_{{\mathrm{TV}}}} \babs{\mathbbm{E}\%A g_h(W)}. \ee It turns out that \eqref{3} is often easier to bound than \eqref{1}. 
\cite{Barbour1983} and \cite{Barbour1992} showed that, for all functions $h\in\%H_{\mathrm{TV}}$, \ben \label{4} \norm{g_h} \leqslant 1\wedge\sqrt{\frac{2}{\lambda e}}, \qquad \norm{\Delta g_h}\leqslant \frac{1-e^{-\lambda}}{\lambda}, \ee where $\norm{\cdot}$ denotes the supremum norm and $\Delta g(j) := g(j+1)-g(j)$. So here, if one is interested in the asymptotic $\lambda\to\infty$, the Stein factors are of order $\lambda^{-1/2}$ and $\lambda^{-1}$, respectively. With this we have finished the first main step of Stein's method. As an example for the second step and also as a motivation for the main part of this paper, assume that $W$ is a non-negative integer-valued random variable and assume that $\tau$ is a function such that \ben \label{5} \mathbbm{E}\bklg{(W-\lambda)g(W)} = \mathbbm{E}\bklg{\tau(W)\Delta g(W)} \ee for all bounded functions $g$; see \cite{Cacoullos1994} and \cite{Papathanasiou1995} for more details on this approach. To estimate the distance between $\mathscr{L}(W)$ and the Poisson distribution with mean $\lambda$, we simply use \eqref{3} in connection with \eqref{5} to obtain \besn \label{6} \mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(W),\mathop{\mathrm{Po}}(\lambda)} &= \sup_{h\in\%H_{\mathrm{TV}}}\babs{\mathbbm{E}\%A g_h(W)} \\ &= \sup_{h\in\%H_{\mathrm{TV}}}\babs{\mathbbm{E}\bklg{\lambda g_h(W+1) - W g_h(W)}} \\ &=\sup_{h\in\%H_{\mathrm{TV}}}\babs{\mathbbm{E}\bklg{\lambda\Delta g_h(W) - (W-\lambda)g_h(W)}}\\ &= \sup_{h\in\%H_{\mathrm{TV}}}\babs{\mathbbm{E}\bklg{(\lambda-\tau(W))\Delta g_h(W)}} \\ &\leqslant \frac{1-e^{-\lambda}}{\lambda}\mathbbm{E}\babs{\tau(W)-\lambda}, \ee where for the last step we used \eqref{4}. Thus, \eqref{6} expresses the $\mathop{d_{\mathrm{TV}}}$-distance between $\mathscr{L}(W)$ and $\mathop{\mathrm{Po}}(\lambda)$ in terms of the average fluctuation of $\tau$ around~$\lambda$. It is not difficult to show that $\tau \equiv \lambda$ if and only if $W \sim \mathop{\mathrm{Po}}(\lambda)$. 
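As a numerical sanity check (ours, not part of the original article), the solution of \eqref{2} admits the explicit form $g_h(j+1) = \frac{j!}{\lambda^{j+1}}\sum_{k=0}^{j}(h(k)-\mathbbm{E} h(Z))\frac{\lambda^k}{k!}$, which makes the bounds \eqref{4} easy to test. The Python sketch below (function names, the choice $\lambda=5$ and the truncation points are ours) computes $g_h$ for singleton indicators $h=\mathbf{1}_{\{a\}}$ and compares $\sup_j\abs{\Delta g_h(j)}$ with the second Stein factor:

```python
import math

def poisson_pmf(lam, n):
    """Po(lam) point probability, computed in log space for stability."""
    return math.exp(-lam + n * math.log(lam) - math.lgamma(n + 1))

def stein_solution(lam, h, jmax):
    """Solve lam*g(j+1) - j*g(j) = h(j) - E h(Z) for Z ~ Po(lam), using
    g(j+1) = j!/lam^(j+1) * sum_{k=0}^{j} (h(k) - Eh) * lam^k / k!."""
    Eh = sum(h(k) * poisson_pmf(lam, k) for k in range(120))
    g = [0.0]          # g(0) is not determined by the equation; set it to 0
    partial = 0.0
    for j in range(jmax):
        partial += (h(j) - Eh) * lam ** j / math.factorial(j)
        g.append(math.factorial(j) / lam ** (j + 1) * partial)
    return g

lam = 5.0
bound = (1 - math.exp(-lam)) / lam        # second Stein factor in (4)
worst = 0.0
for a in range(20):                       # test functions h = 1_{{a}}
    h = lambda j, a=a: 1.0 if j == a else 0.0
    g = stein_solution(lam, h, 25)
    worst = max(worst, max(abs(g[j + 1] - g[j]) for j in range(1, 20)))
print(worst, bound)
```

The recursion is exact (the Stein equation determines $g_h(j)$ uniquely for $j\geqslant 1$), so the printed supremum stays below the bound for every indicator tested, as \eqref{4} guarantees.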
Assume now that, for a fixed positive integer $k$, $\tau(w) = \lambda + \delta_k(w)$, where $\delta_k(w)$ is the Kronecker delta, and assume that $W_k$ is a random variable satisfying \eqref{5} for this~$\tau$. In this case we can in fact replace the last inequality in \eqref{6} by an equality to obtain \ben \label{7} \mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(W_k),\mathop{\mathrm{Po}}(\lambda)} = \mathbbm{P}[W_k=k]\sup_{h\in\%H_{\mathrm{TV}}}\abs{\Delta g_h(k)}. \ee From Eq.~(1.22) of the proof of Lemma 1.1.1 of \cite{Barbour1992} we see that, for $k=\floor{\lambda}$, \ben \label{8} \sup_{h\in\%H_{\mathrm{TV}}}\abs{\Delta g_h(k)} \asymp \lambda^{-1} \ee as $\lambda\to\infty$. Thus, \eqref{7} gives \ben \label{9} \mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(W_k),\mathop{\mathrm{Po}}(\lambda)} \asymp \mathbbm{P}[W_k=k]\lambda^{-1} \ee as $\lambda \to\infty$. Note that, irrespective of the order of $\mathbbm{P}[W_k=k]$, the asymptotic \eqref{9} makes full use of the second Stein factor of \eqref{4}. To see that $\mathscr{L}(W_k)$ in fact exists, we rewrite \eqref{5} as $\mathbbm{E}\%B_k g(W_k) = 0$, where \besn \label{10} \%B_k g(w) & = \%A g(w) + \delta_k(w)\Delta g(w) \\ & = \bklr{\lambda+\delta_k(w)}g(w+1) - \bklr{w+\delta_k(w)}g(w). \ee Recall from \cite{Barbour1988} that $\%A$ can be interpreted as the generator of a Markov process; in our case, as an immigration-death process, with immigration rate $\lambda$, per capita death rate~$1$ and $\mathop{\mathrm{Po}}(\lambda)$ as its stationary distribution. Likewise, we can interpret $\%B_k$ as a perturbed immigration-death process with the same transition rates, except at point~$k$, where the immigration rate is increased to $\lambda+1$ and the per capita death rate is increased to $1+1/k$. Thus, $\mathscr{L}(W_k)$ can be seen as the stationary distribution of this perturbed process. 
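Since $\%B_k$ generates a one-dimensional birth-death chain, its stationary distribution is available in closed form via detailed balance, so the estimate \eqref{6} (which here specialises to $\mathop{d_{\mathrm{TV}}}\leqslant \mathbbm{P}[W_k=k](1-e^{-\lambda})/\lambda$, since $\mathbbm{E}\abs{\tau(W_k)-\lambda}=\mathbbm{P}[W_k=k]$) can be illustrated numerically. The sketch below is our own; the truncation level $N$, the choice $\lambda=25$ and the helper names are not from the paper:

```python
import math

def poisson_pmf(lam, n):
    return math.exp(-lam + n * math.log(lam) - math.lgamma(n + 1))

def perturbed_stationary(lam, k, N=400):
    """Stationary law of the birth-death chain with birth rate lam + 1{w=k}
    and total death rate w + 1{w=k} (the generator B_k), computed from
    detailed balance: pi(w) * birth(w) = pi(w+1) * death(w+1)."""
    log_pi = [0.0]
    for w in range(N):
        birth = lam + (1.0 if w == k else 0.0)
        death = (w + 1) + (1.0 if w + 1 == k else 0.0)
        log_pi.append(log_pi[-1] + math.log(birth) - math.log(death))
    m = max(log_pi)
    pi = [math.exp(x - m) for x in log_pi]
    s = sum(pi)
    return [p / s for p in pi]

lam = 25.0
k = int(lam)                    # k = floor(lam)
pi = perturbed_stationary(lam, k)
dtv = 0.5 * sum(abs(pi[j] - poisson_pmf(lam, j)) for j in range(len(pi)))
# estimate (6): dTV <= P[W_k = k] * (1 - e^(-lam)) / lam
print(dtv, pi[k])
```

The printed values show $\mathbbm{P}[W_k=k]$ close to $\mathop{\mathrm{Po}}(\lambda)\{k\}$, in line with the heuristic discussed in the text.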
If $k=\floor{\lambda}$, the perturbation of the transition rates at point $k$ is of smaller order than the transition rates of the corresponding unperturbed immigration-death process at $k$. Thus, heuristically, $\mathbbm{P}[W_k=k]$ is of the same order as the probability $\mathop{\mathrm{Po}}(\lambda)\{k\}$ of the stationary distribution of the unperturbed process, hence $\mathbbm{P}[W_k=k] \asymp\lambda^{-1/2}$, and \eqref{9} is of order $\lambda^{-3/2}$. We omit a rigorous proof of this statement. \begin{remark} Note that by rearranging \eqref{7} we obtain \ben \label{14} \sup_{h\in\%H_{{\mathrm{TV}}}}\babs{\Delta g_h(k)} = \frac{\mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(W_k),\mathscr{L}(Z)}}{\mathbbm{P}[W_k = k]} \ee for positive $k$. We can assume without loss of generality that $g_h(0)=g_h(1)$ for all test functions $h$ because the value of $g_h(0)$ is not determined by \eqref{2} and can in fact be arbitrarily chosen. Thus $\Delta g_h(0) = 0$ and, taking the supremum over all $k\in\mathbbm{Z}_+$, we obtain \ben \label{15} \sup_{h\in\%H_{{\mathrm{TV}}}}\norm{\Delta g_h} = \sup_{k\geq1}\frac{\mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(W_k),\mathscr{L}(Z)}}{\mathbbm{P}[W_k = k]}. \ee This provides a new interpretation of the bound $\norm{\Delta g_h}$ (a similar statement can be made for $\norm{g_h}$, but then with a different family of perturbations), namely as the ratio of the total variation distance between some very specific perturbed Poisson distributions and the Poisson distribution, and the probability mass at the location of these perturbations. Let us quote \cite{Chen1998}, page 98: \begin{quote} \small Stein's method may be regarded as a method of constructing certain kinds of identities which we call Stein identities, and making comparisons between them. In applying the method to probability approximation we construct two identities, one for the approximating distribution and the other for the distribution to be approximated. 
The discrepancy between the two distributions is then measured by comparing the two Stein identities through the use of the solution of an equation, called Stein equation. To effect the comparison, bounds on the solution and its smoothness are used. \end{quote} Equations \eqref{14} and \eqref{15} make this statement precise. They express how certain elementary deviations from the Stein identity of the approximating distribution will influence the distance of the resulting distributions in the specific metric, and they establish a simple link to the properties of the solutions to \eqref{2}. We can thus see $W$ from \eqref{5} as a `mixture' of such perturbations, which is what is effectively expressed by Estimate~\eqref{6}. \end{remark} Thus, to understand why in some of the applications the Stein factors are not as satisfying as in the above Poisson example, we will in the following sections analyse the corresponding perturbed distributions in the cases of multivariate Poisson and Poisson point processes. In order to define the perturbations to obtain an equation of the form \eqref{7}, some care is needed, though. The attempt to simply add the perturbation as in \eqref{10} may lead to an operator that is not interpretable as the generator of a Markov process, and thus the existence of the perturbed distribution would not be guaranteed as easily. It turns out that with suitable symmetry assumptions we can circumvent this problem. \section{Multivariate Poisson distribution} \label{sec2} Let $d\geqslant 2$ be an integer, $\mu = (\mu_1,\dots,\mu_d)\in\mathbbm{R}_+^d$ such that $\sum \mu_i=1$, and let $\lambda>0$. Let $\mathop{\mathrm{Po}}(\lambda\mu)$ be the distribution on $\mathbbm{Z}_+^d$ defined as $\mathop{\mathrm{Po}}(\lambda\mu) = \mathop{\mathrm{Po}}(\lambda\mu_1)\otimes\dots\otimes \mathop{\mathrm{Po}}(\lambda\mu_d)$. Stein's method for multivariate Poisson approximation was introduced by \cite{Barbour1988}; but see also \cite{Arratia1989}. 
Let $\varepsilon^{(i)}$ denote the $i$th unit vector. Using the Stein operator \be \%A g (w) := \sum_{i=1}^d \lambda\mu_i\bklg{g(w+\varepsilon^{(i)}) - g(w)} + \sum_{i=1}^d w_i\bklg{g(w-\varepsilon^{(i)}) - g(w)} \ee for $w\in\mathbbm{Z}_+^d$, it is proved in Lemma~3 of \cite{Barbour1988} that the solution $g_A$ to the Stein equation $\%A g_A (w) = \delta_A(w)- \mathop{\mathrm{Po}}(\lambda\mu)\{A\}$ for $A\subset\mathbbm{Z}_+^d$, satisfies the bound \ben \label{16} \bbbnorm{\sum_{i,j=1}^d \alpha_i\alpha_j \Delta_{i j} g_A}\leqslant \min\bbbklg{\frac{1 + 2\log^+(2\lambda)}{2\lambda} \sum_{i=1}^d\frac{\alpha_i^2}{\mu_i},\sum_{i=1}^d \alpha_i^2} \ee for any $\alpha\in\mathbbm{R}^d$, where \be \Delta_{i j}g(w) := g(w+\varepsilon^{(i)} +\varepsilon^{(j)}) - g(w+\varepsilon^{(i)}) - g(w+\varepsilon^{(j)}) + g(w). \ee Let now $m_i = \floor{\lambda\mu_i}$ for $i=1,\dots,d$ and define \ben \label{18} A_1 = \{w\in \mathbbm{Z}_+^d : 0\leqslant w_1 \leqslant m_1, 0\leqslant w_2 \leqslant m_2\}. \ee \cite{Barbour2005} proves that, if $\mu_1,\mu_2>0$ and $\lambda\geqslant (e/32\pi)(\mu_1\wedge\mu_2)^{-2}$, then \ben \label{19} \babs{\Delta_{12} g_{A_1}(w)} \geqslant \frac{\log\lambda}{20\lambda\sqrt{\mu_1\mu_2}} \ee for any $w$ with $(w_1,w_2) = (m_1,m_2)$. It is in fact not difficult to see from the proof of \eqref{19} that this bound also holds for the other quadrants having corner~$(m_1,m_2)$. \begin{figure} \caption{ A rough illustration of the perturbed process defined by the generator (\ref{20}). Between any two connected points on the lattice $\mathbbm{Z}_+^2$, we assume the transition dynamics of an unperturbed immigration-death process, that is, in each coordinate immigration rate $\lambda\mu_i$ and per capita death rate $1$. The arrows symbolise the additional perturbations with respect to the unperturbed immigration-death process; each arrow indicates an increase by $1/2$ of the corresponding transition rate. 
The resulting differences of the point probabilities between the equilibrium distributions of the perturbed and unperturbed processes are indicated by the symbols $+$ and $-$. The corresponding signs in each of the quadrants are heuristically obvious, but they can be verified rigorously using the Stein equation, Eq.~\eqref{22}, and Eq.~$(2.8)$ of \cite{Barbour2005}.} \label{fig2} \end{figure} \begin{example} \label{ex1} Assume that $W$ is a random vector having the equilibrium distribution of the $d$-dimensional birth-death process with generator \besn \label{20} \%B_K g(w) & = \%Ag(w) \\ &\quad +\ahalf\delta_{K+\varepsilon^{(2)}}(w)\bkle{g(w+\varepsilon^{(1)}) - g(w)} +\ahalf\delta_{K+\varepsilon^{(1)}}(w)\bkle{g(w+\varepsilon^{(2)}) - g(w)}\\ &\quad +\ahalf\delta_{K+\varepsilon^{(1)}}(w)\bkle{g(w-\varepsilon^{(1)}) - g(w)} +\ahalf\delta_{K+\varepsilon^{(2)}}(w)\bkle{g(w-\varepsilon^{(2)}) - g(w)} \\ & = \sum_{i=1}^d \bklr{\lambda\mu_i + \ahalf\delta_1(i)\delta_{K+\varepsilon^{(2)}}(w) +\ahalf\delta_2(i)\delta_{K+\varepsilon^{(1)}}(w)}\bkle{g(w+\varepsilon^{(i)}) - g(w)}\\ &\quad + \sum_{i=1}^d \bklr{w_i+\ahalf\delta_1(i)\delta_{K+\varepsilon^{(1)}}(w) +\ahalf\delta_2(i)\delta_{K+\varepsilon^{(2)}}(w)}\bkle{g(w-\varepsilon^{(i)}) - g(w)} \ee where $K=(m_1,m_2,\dots,m_d)$. Assume further that $\mu_1 = \mu_2$, and thus $m_1 = m_2$ (the `symmetry condition'). See Figure~\ref{fig2} for an illustration of this process. As the perturbations are symmetric in the first two coordinates, the stationary distribution will also be symmetric in the first two coordinates. 
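For $d=2$ this symmetry can be checked numerically by solving $\pi Q=0$ for the perturbed generator \eqref{20} on a truncated grid. The Python sketch below is our own illustration (grid size, rate values and variable names are ad hoc choices, not from the paper):

```python
import numpy as np

lam, mu = 8.0, (0.5, 0.5)                 # symmetric case mu_1 = mu_2
n = 21                                     # truncate each coordinate to 0..n-1
K = (int(lam * mu[0]), int(lam * mu[1]))   # K = (m_1, m_2)

def idx(w):
    return w[0] * n + w[1]

# rate matrix of the perturbed process (20) for d = 2 on the truncated grid
Q = np.zeros((n * n, n * n))
for w1 in range(n):
    for w2 in range(n):
        w = (w1, w2)
        for i, e in enumerate([(1, 0), (0, 1)]):
            birth = lam * mu[i]
            if i == 0 and w == (K[0], K[1] + 1):
                birth += 0.5               # extra birth in coord 1 at K + e2
            if i == 1 and w == (K[0] + 1, K[1]):
                birth += 0.5               # extra birth in coord 2 at K + e1
            death = float(w[i])
            if i == 0 and w == (K[0] + 1, K[1]):
                death += 0.5               # extra death in coord 1 at K + e1
            if i == 1 and w == (K[0], K[1] + 1):
                death += 0.5               # extra death in coord 2 at K + e2
            up = (w1 + e[0], w2 + e[1])
            dn = (w1 - e[0], w2 - e[1])
            if up[0] < n and up[1] < n:
                Q[idx(w), idx(up)] += birth
                Q[idx(w), idx(w)] -= birth
            if dn[0] >= 0 and dn[1] >= 0 and death > 0:
                Q[idx(w), idx(dn)] += death
                Q[idx(w), idx(w)] -= death

# stationary distribution: solve pi Q = 0 together with sum(pi) = 1
A = np.vstack([Q.T, np.ones(n * n)])
b = np.zeros(n * n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
p1 = pi[idx((K[0] + 1, K[1]))]            # P[W = K + e1]
p2 = pi[idx((K[0], K[1] + 1))]            # P[W = K + e2]
print(p1, p2)
```

The two printed probabilities agree up to numerical precision, confirming the symmetry; the same $\pi$ can also be used to inspect the expectation neutrality of the perturbation.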
Now, noting that for any bounded $g$ we have $\mathbbm{E} \%B_K g(W)=0$, \besn \label{22} \mathbbm{E}\%A g(W) & = \mathbbm{E}\%Ag(W) - \mathbbm{E}\%B_K g(W)\\ & = - \ahalf\mathbbm{P}[W=K+\varepsilon^{(2)}] \bkle{g(K+\varepsilon^{(2)}+\varepsilon^{(1)})- g(K+\varepsilon^{(2)})}\\ &\quad- \ahalf\mathbbm{P}[W=K+\varepsilon^{(1)}] \bkle{g(K+\varepsilon^{(1)}+\varepsilon^{(2)})- g(K+\varepsilon^{(1)})}\\ &\quad- \ahalf\mathbbm{P}[W=K+\varepsilon^{(1)}]\bkle{g(K) - g(K+\varepsilon^{(1)})} \\ &\quad- \ahalf\mathbbm{P}[W=K+\varepsilon^{(2)}]\bkle{g(K) - g(K+\varepsilon^{(2)})} \\ & = - \mathbbm{P}[W=K+\varepsilon^{(1)}]\Delta_{12}g(K), \ee where we used $\mathbbm{P}[W=K+\varepsilon^{(1)}]=\mathbbm{P}[W=K+\varepsilon^{(2)}]$ for the last equality. Thus \bes \mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(W),\mathop{\mathrm{Po}}(\lambda\mu)} & = \sup_{h\in\%H_{\mathrm{TV}}}\babs{\mathbbm{E}\%Ag_h(W)}\\ & = \mathbbm{P}[W=K+\varepsilon^{(1)}]\sup_{h\in\%H_{\mathrm{TV}}}\abs{\Delta_{12}g_h(K)}\\ & \geqslant \frac{\mathbbm{P}[W=K+\varepsilon^{(1)}]\log\lambda}{20\lambda\sqrt{\mu_1\mu_2}}. \ee On the other hand, from \eqref{16} for $\alpha=\varepsilon^{(1)}$, $\alpha=\varepsilon^{(2)}$ and $\alpha=\varepsilon^{(1)}+\varepsilon^{(2)}$ respectively, it follows that \be \abs{\Delta_{12}g_h(w)} \leqslant \frac{\bklr{1+2\log^+(2\lambda)}(\mu_1+\mu_2)}{2\lambda\mu_1\mu_2}. \ee This yields the upper estimate \bes \mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(W),\mathop{\mathrm{Po}}(\lambda\mu)} & = \mathbbm{P}[W=K+\varepsilon^{(1)}]\sup_{h\in\%H_{\mathrm{TV}}}\abs{\Delta_{12}g_h(K)} \\ &\leqslant \mathbbm{P}[W=K+\varepsilon^{(1)}]\frac{\bklr{1+2\log^+(2\lambda)}(\mu_1+\mu_2)}{ 2\lambda\mu_1\mu_2}, \ee and thus we finally have \ben \label{23} \mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(W),\mathop{\mathrm{Po}}(\lambda\mu)} \asymp \frac{\mathbbm{P}[W=K+\varepsilon^{(1)}]\log\lambda}{\lambda}. 
\ee Now, again heuristically, $\mathbbm{P}[W=K+\varepsilon^{(1)}]$ will be of the order $\mathop{\mathrm{Po}}(\lambda\mu_1)\klg{m_1} \times\cdots\times\mathop{\mathrm{Po}}(\lambda\mu_d)\klg{m_d}\asymp\lambda^{-d/2}$, so that \eqref{23} will be of order $\log\lambda/\lambda^{1+d/2}$. Recalling that the test function \eqref{18} and also the corresponding test functions for the other three quadrants are responsible for the logarithmic term in \eqref{23}, we may conclude a situation as illustrated in Figure~\ref{fig2} for $d=2$. In contrast to the one-dimensional case, where the perturbation moves probability mass from the point of the perturbation to the rest of the support in a uniform way, the perturbations of the form \eqref{20} affect the rest of the support in a non-uniform way. However, further analysis is needed to find the exact distribution of the probability mass differences within each of the quadrants. Note that the perturbation \eqref{20} is `expectation neutral', that is, $W$ also has expectation $\lambda\mu$, which can be seen by using $\mathbbm{E}\%B g(W)=0$ with the function $g_i(w) = w_i$ for each coordinate $i$. \end{example} \section{Poisson point processes} Stein's method for Poisson point process approximation was derived by \cite{Barbour1988} and \cite{Barbour1992b}. They use the Stein operator \be \%A g(\xi) = \int_{\Gamma}\bkle{g(\xi+\delta_\alpha) - g(\xi)} \lambda(d\alpha) +\int_{\Gamma}\bkle{g(\xi-\delta_\alpha) - g(\xi)} \xi(d\alpha), \ee where $\xi$ is a point configuration on a compact metric space $\Gamma$ and $\lambda$ denotes the mean measure of the process. The most successful approximation results have been obtained in the so-called $d_2$-metric; see for example \cite{Barbour1992b}, \cite{Brown2000} and \cite{Schuhmacher2005}. Assume that $\Gamma$ is equipped with a metric $d_0$ which is, for convenience, bounded by $1$. 
Let $\%F$ be the set of functions $f:\Gamma\to\mathbbm{R}$, satisfying \be \sup_{x\neq y\in\Gamma} \frac{\abs{f(x)-f(y)}}{d_0(x,y)}\leqslant 1. \ee Define the metric $d_1$ on the set of finite measures on $\Gamma$ as \be d_1(\xi,\eta) = \begin{cases} 1 & \text{if $\xi(\Gamma)\neq\eta(\Gamma)$,}\\ \xi(\Gamma)^{-1}\sup\limits_{f\in\%F}\bbabs{\int f d\xi-\int f d\eta} & \text{if $\xi(\Gamma)=\eta(\Gamma)$.} \end{cases} \ee Let now $\%H_2$ be the set of all functions from the set of finite measures into $\mathbbm{R}$ satisfying \be \sup_{\eta\neq\xi} \frac{\abs{h(\eta)-h(\xi)}}{d_1(\xi,\eta)}\leqslant 1. \ee We then define for two random measures $\Phi$ and $\Psi$ on $\Gamma$ the $d_2$-metric as \be d_2\bklr{\mathscr{L}(\Phi),\mathscr{L}(\Psi)} := \sup_{h\in\%H_2}\babs{\mathbbm{E} h(\Phi) - \mathbbm{E} h(\Psi)}; \ee for more details on the $d_2$-metric see \cite{Barbour1992} and \cite{Schuhmacher2008}. If $h\in\%H_2$ and $g_h$ solves the Stein equation $\%Ag_h(\xi) = h(\xi) - \mathop{\mathrm{Po}}(\lambda)h$, \cite{Barbour1992b} prove the uniform bound \ben \label{24} \norm{\Delta_{\alpha\beta}g_h(\xi)} \leqslant 1\wedge \frac{5}{2\abs{\lambda}}\bbbklr{1+2\log^+\bbbklr{\frac{2\abs{\lambda}}{5}}}, \ee where $\abs{\lambda}$ denotes the $L_1$-norm of $\lambda$ and where \be \Delta_{\alpha\beta}g(\xi) = g(\xi+\delta_\alpha+\delta_\beta)-g(\xi+\delta_\beta) -g(\xi+\delta_\alpha)+g(\xi). \ee It has been shown by \cite{Brown1995} that the $\log$-term in \eqref{24} is unavoidable. However, \cite{Brown2000} have shown that it is possible to obtain results without the~$\log$ using a non-uniform bound on~$\Delta_{\alpha\beta}g_h$. Following the construction of \cite{Brown1995}, assume that $\Gamma = S\cup\{a\}\cup\{b\}$, where $S$ is a compact metric space, $a$ and $b$ are two additional points with $d_0(a,b)= d_0(b,x) = d_0(a,x) = 1$ for all $x\in S$. 
Assume further that the measure $\lambda$ satisfies $\lambda(\{a\})=\lambda(\{b\}) = 1/\abs{\lambda}$ (again, the `symmetry condition') and thus $\lambda(S) = \abs{\lambda}-2/\abs{\lambda}$. For $m_a,m_b\in\{0,1\}$, define now the test functions \ben \label{25} h(\xi) = \begin{cases} \frac{1}{\xi(\Gamma)} & \text{if $\xi(\{a\})=m_a$, $\xi(\{b\})=m_b$, $\xi\neq 0$}\\ 0 & \text{else.} \end{cases} \ee It is shown by direct verification that $h\in\%H_2$. \cite{Brown1995} prove that, for $m_a=m_b=1$, the corresponding solution $g_h$ to the Stein equation satisfies the asymptotic \ben \label{26} \abs{\Delta_{ab}g_h(0)} \asymp \frac{\log\abs{\lambda}}{\abs{\lambda}} \ee as $\abs{\lambda}\to\infty$, so that \eqref{24} is indeed sharp, but it is easy to see from their proof that \eqref{26} will hold for the other values of $m_a$ and $m_b$, as well. \begin{figure} \caption{ Illustration of the perturbed process defined by the generator \eqref{28} using the same conventions as in Figure~\ref{fig2}. The corresponding signs can be obtained through the Stein equation, Eq.~\eqref{29} and the representation of the solution of the Stein equation as in \cite{Brown1995}, for the different test functions \eqref{25}.} \label{fig3} \end{figure} \begin{example} \label{ex2} Let $\Gamma$ and $\lambda$ be as above with the simplifying assumption that $S$ is finite. 
Let $\Psi$ be a random point measure with equilibrium distribution of a Markov process with generator \besn \label{28} \%B_0 g(\xi) & = \%A g(\xi) + \ahalf\delta_{\delta_a}(\xi)\bkle{g(\xi+\delta_b)-g(\xi)} + \ahalf\delta_{\delta_b}(\xi)\bkle{g(\xi+\delta_a)-g(\xi)} \\ &\quad + \ahalf\delta_{\delta_a}(\xi)\bkle{g(\xi-\delta_a)-g(\xi)} + \ahalf\delta_{\delta_b}(\xi)\bkle{g(\xi-\delta_b)-g(\xi)}\\ &= \int_{\Gamma}\bkle{g(\xi+\delta_\alpha)- g(\xi)} \bklr{\lambda+\ahalf\delta_{\delta_a}(\xi)\delta_b +\ahalf\delta_{\delta_b}(\xi)\delta_a} (d\alpha)\\ &\quad+\int_{\Gamma}\bkle{g(\xi-\delta_\alpha) - g(\xi)} \bklr{\xi+\ahalf\delta_{\delta_a}(\xi)\delta_a +\ahalf\delta_{\delta_b}(\xi)\delta_b}(d\alpha). \ee See Figure~\ref{fig3} for an illustration of this process. Note that the situation here is different from that in Section~\ref{sec2}. Firstly, we consider a weaker metric and, secondly, we impose a different structure on~$\lambda$. Whereas in Section~\ref{sec2} we assumed that the mean of each coordinate is of the same order $\abs{\lambda}$, we assume now that there are two special points $a$ and $b$ with $\mathrm{o}(|\lambda|)$ mass attached to them. Again, in order to obtain a stationary distribution that is symmetric with respect to $a$ and $b$, we impose the condition that the immigration rates at the two coordinates $a$ and $b$ are the same. Now, for any bounded function $g$, \besn \label{29} \mathbbm{E}\%A g(\Psi) & = \mathbbm{E}\%Ag(\Psi) - \mathbbm{E}\%B_0 g(\Psi)\\ & = - \ahalf\mathbbm{P}[\Psi=\delta_b] \bkle{g(\delta_a + \delta_b) - g(\delta_a)} - \ahalf\mathbbm{P}[\Psi=\delta_a] \bkle{g(\delta_b + \delta_a) - g(\delta_b)}\\ &\quad - \ahalf\mathbbm{P}[\Psi=\delta_a] \bkle{g(0) - g(\delta_a)} - \ahalf\mathbbm{P}[\Psi=\delta_b] \bkle{g(0) - g(\delta_b)}\\ & = - \mathbbm{P}[\Psi = \delta_a]\Delta_{ab}g(0), \ee where we used that $\mathbbm{P}[\Psi = \delta_a]=\mathbbm{P}[\Psi = \delta_b]$. 
Thus, using \eqref{26}, \ben \label{30} d_2\bklr{\mathscr{L}(\Psi),\mathop{\mathrm{Po}}(\lambda)} = \mathbbm{P}[\Psi=\delta_a]\sup_{h\in\%H_2}\abs{\Delta_{ab} g_h(0)} \asymp \frac{\mathbbm{P}[\Psi=\delta_a]\log\abs{\lambda}}{\abs{\lambda}}. \ee Figure~\ref{fig3} illustrates the situation for $\abs{\Gamma}=3$. If the process $\Phi_t$ is somewhere on the bottom plane, that is, $\Phi(S) = 0$, it will most of the time quickly jump upwards, parallel to the $S$-axis, before jumping between the parallels, as the immigration rate into $S$ is far larger than the jump rates between the parallels. Thus, because of the perturbations, probability mass is moved---as illustrated in Figure~\ref{fig3}---not only between the perturbed points but also between the parallels. Although indicator functions are not in $\%H_2$, the test functions in \eqref{25} decay slowly enough to detect this difference. \end{example} \begin{remark} Note again, as in Example~\ref{ex1}, that the perturbation in the above example is neutral with respect to the measure $\lambda$. It is also interesting to compare the total number of points to a Poisson distribution with mean $\abs{\lambda}$ in the $\mathop{d_{\mathrm{TV}}}$-distance. Note that \eqref{29} holds in particular for functions $g_h$ which depend only on the number of points of $\Psi$. Thus, using \eqref{3} in combination with \eqref{29} yields \be \mathop{d_{\mathrm{TV}}}\bklr{\mathscr{L}(\abs{\Psi}),\mathop{\mathrm{Po}}(\abs{\lambda})} = \mathbbm{P}[\Psi = \delta_a]\sup_{h\in\%H_{\mathrm{TV}}}\abs{\Delta^2 g_h(0)} \asymp \frac{\mathbbm{P}[\Psi = \delta_a]}{\abs{\lambda}}, \ee where $\Delta^2 g(w) = \Delta g(w+1) - \Delta g(w)$ (which corresponds to the first difference in~\eqref{4}) and where we used the fact that $\abs{\Delta^2 g_h(0)}\asymp\abs{\lambda}^{-1}$, again obtained from the proof of Lemma~1.1.1 of \cite{Barbour1992}. 
Thus we have effectively constructed an example where the attempt to match not only the number but also the location of the points introduces an additional factor $\log\abs{\lambda}$ when using the $d_2$-metric. \end{remark} \end{document}
\begin{document} \begin{abstract} When $S$ is a discrete subsemigroup of a discrete group $G$ such that $G = S^{-1} S$, it is possible to extend circle-valued multipliers {}from $S$ to $G$; to dilate (projective) isometric representations of $S$ to (projective) unitary representations of $G$; and to dilate/extend actions of $S$ by injective endomorphisms of a C*-algebra to actions of $G$ by automorphisms of a larger C*-algebra. These dilations are unique provided they satisfy a minimality condition. The (twisted) semigroup crossed product corresponding to an action of $S$ is isomorphic to a full corner in the (twisted) crossed product by the dilated action of $G$. This shows that crossed products by semigroup actions are Morita equivalent to crossed products by group actions, making powerful tools available to study their ideal structure and representation theory. The dilation of the system giving the Bost--Connes Hecke C*-algebra from number theory is constructed explicitly as an application: it is the crossed product $C_0(\mathbb A_f) \rtimes \Q^*_+$, corresponding to the multiplicative action of the positive rationals on the additive group $\mathbb A_f $ of finite adeles. \end{abstract} \maketitle \section*{Introduction} In recent years there has been renewed interest in crossed products by semigroups of endomorphisms, viewed now as universal algebras in contrast to their original presentation as corners in crossed products by groups. This new approach, initiated by Stacey \cite{sta} following a strategy pioneered by Raeburn for crossed products by group actions \cite{rae}, is based on the explicit formulation of a semigroup crossed product as the universal C*-algebra of a covariance relation. As such, it motivated the development of specific techniques and brought about new insights and applications, e.g. \cite{sta,alnr,murnew,sri,twi-units,quasilat,bc-alg, hecke5,diri}. 
Nevertheless, the implicit view of semigroup crossed products as corners continues to have a very important role: it is often invoked to prove the existence of nontrivial universal objects and it allows one to import results from the well-developed theory of crossed products by groups. When the endomorphisms are injective and the semigroup is abelian the two approaches are equivalent, and the proof involves using a direct limit to transform the endomorphisms into automorphisms and the isometries into unitaries. This has been done when the abelian semigroup is $\mathbb N$ \cite{cun,sta}, when it is totally ordered \cite{sri}, and, in general, when it is cancellative \cite{murnew}. As crossed products by more general (nonabelian) semigroups are being considered from the universal property point of view, the need arises to determine whether a realization as corners in crossed products by groups is true and useful in those cases too. This is the main task undertaken in the present work. A step away from commutativity of the acting semigroup was taken in \cite{semico} where isometric representations and multipliers of {\em normal} cancellative semigroups were extended using the same direct limits (the semigroup $S$ is normal if $xS = Sx$ for every $x \in S$, in which case the natural notions of right and left orders on $S$ coincide). Here we will go further and consider discrete semigroups that can be embedded in a discrete group and for which the right order is cofinal; since cofinality is a key ingredient of a directed system, this class is, arguably, the most general one for which the usual direct limit construction would work without a major modification. 
Based on the results presented below one may argue that the relevant object is the action of an ordered group, and that there are two ways of looking at it; the first is as an automorphic action on a C*-algebra {\em taken together with a distinguished subalgebra which is invariant under the action of the positive cone}, and the second is simply as the endomorphic action of this positive cone on the invariant subalgebra. We show that these two points of view are equivalent: to go from the former to the latter one just cuts down the automorphisms to endomorphisms of the invariant subalgebra and restricts to the positive cone, and the process is reversed by way of a dilation-extension construction, \thmref{dil-ext}, which constitutes our first main result. We also explicitly state and prove two additional features of this equivalence that, in our opinion, have not previously received enough attention. The first one is that the minimal automorphic dilation is canonically {\em unique}, which for instance allows one to test a good candidate, as done in Subsection \ref{diladeles} below. The second one is that the crossed product by the semigroup action is realized as a {\em full} corner in the crossed product by a group action, so the equivalence of the two approaches technically translates into the strong Morita equivalence of the crossed products. This is done in \thmref{fulcor}, which is our second main result. A modicum of extra work shows that these results are also valid for twisted crossed products and projective isometric representations with circle-valued multipliers. This requires the easy generalization, to Ore semigroups, of results known for semigroups that are abelian \cite{arv,din,che,kle} or normal \cite{semico,murpro}, which is done in the preliminary subsections \ref{multipls} and \ref{twi.cross.prod}. 
The arguments given are for projective isometric representations and twisted crossed products, but setting all multipliers to be identically $1$ will lighten the burden slightly for those interested in the dilation-extension itself and not in projective representations, twisted crossed products, and extensions of multipliers. In the final section we give an application to the semigroup dynamical system from number theory \cite{bc-alg} which has the Bost-Connes Hecke C*-algebra \cite{bos-con} as its crossed product. Starting with the $p$-adic version of the system \cite[Section 5.4]{diri} we show how one is quite naturally led to consider the ring of finite adeles with the multiplicative action of the positive rationals. This establishes a natural heuristic link between the Bost-Connes Hecke C*-algebra and the space $\mathcal A/\mathbb Q^*$, which lies at the heart of Connes's recent formulation of the Riemann Hypothesis as a trace formula \cite{con-cr,con-rzf}. \section{Preliminaries} In this first section we gather the basic definitions and results concerning the semigroups in which we will be interested. We also generalize other results about isometries and crossed products that are valid, with more or less the same proofs, in the present setting, although they were originally stated for particular cases. \subsection{Ore semigroups.} \begin{definition} An {\em Ore semigroup} $S$ is a cancellative semigroup such that $Ss \cap St \neq \emptyset $ for every pair $s, t \in S$. Ore semigroups are also known as {\em right--reversible} semigroups. (We leave the obvious symmetric consideration of left--reversibility to the reader.) \end{definition} \begin{theorem}[Ore, Dubreil] A semigroup $S$ can be embedded in a group $G$ with $S^{-1} S = G$ if and only if it is an Ore semigroup. 
In this case, the group $G$ is determined up to canonical isomorphism and every semigroup homomorphism $\phi$ from $S$ into a group $\mathcal G$ extends uniquely to a group homomorphism $\varphi : G \to \mathcal G$. \end{theorem} \begin{proof} See e.g. theorems 1.23, 1.24 and 1.25 in \cite{cli-pre} for the first part. We only need to prove the assertion about extending $\phi$. Since $G = S^{-1} S$, given $x,y \in S$ there exist $u,v \in S$ such that $v^{-1} u = y x^{-1}$, and hence the element $ux = vy $ is in $S x \cap S y$, proving that $S$ is directed by the relation defined by $s \preceq_r t$ if $t\in Ss$. An easy argument shows that $\varphi(x^{-1} y) = \phi(x)^{-1} \phi(y) $ defines a group homomorphism from $G = S^{-1} S $ to $\mathcal G$ that extends $\phi$. \end{proof} \begin{remark} The last assertion of the theorem generalizes \cite[Lemma 1.1]{semico}. Here we have found it more convenient, for compatibility with the rest of \cite{semico}, to work with the {\em right order} $\preceq_r$ determined by $S$ on $G$ via $x \preceq_r y$ if $y \in S x$. \end{remark} To illustrate the class of semigroups being considered, we list a few examples which have appeared recently in the context of semigroup actions: \begin{itemize} \item Abelian semigroups, (notably the multiplicative nonzero integers in an algebraic number field \cite{hecke5}); \item Semigroups obtained by pulling back the positive cone from a totally ordered quotient \cite{phi-rae}; \item Normal semigroups, in particular semidirect products \cite{semico,murpro}; \item Groups of matrices over the integers having positive determinant \cite[Example 4.3]{bre}; \end{itemize} \subsection{Extending multipliers and dilating isometries.}\label{multipls} Let $\lambda$ be a circle--valued multiplier on $S$, that is, a function $\lambda : S\times S \to \mathbb T$ such that $$ \lambda(r,s) \lambda(rs,t) = \lambda(r,st) \lambda(s,t), \quad r,s,t \in S. 
$$ A {\em projective isometric representation} of $S$ with multiplier $\lambda$ on a Hilbert space $H$ (an isometric $\lambda$--representation of $S$ on $H$) is a family $\{V_s: s\in S\}$ of isometries on $H$ such that $V_s V_t = \lambda(s,t) V_{st}$. A twisted version of Ito's dilation theorem \cite{ito} was obtained in \cite[Theorem 2.1]{semico}, where projective isometric representations of normal semigroups were dilated to projective unitary representations. Essentially the same proof, inspired by Douglas's \cite{dou}, works for Ore semigroups and gives the following. \begin{theorem}\label{dilation} Suppose $S$ is an Ore semigroup and let $\{V_s: s \in S\}$ be an isometric $\lambda$--representation of $S$ on a Hilbert space $H$, where $\lambda$ is a multiplier on $S$. Then there exists a unitary $\lambda$--representation of $S$ on a Hilbert space $\mathcal H$ containing a copy of $H$ such that \begin{enumerate} \item[(i)] $U_s$ leaves $H$ invariant and $U_s |_H = V_s$; and \item[(ii)] $\bigcup_{s\in S} U_s^*H$ is dense in $\mathcal H$. \end{enumerate} \end{theorem} \begin{proof} Verbatim from the proof of \cite[Theorem 2.1]{semico}, except for the following minor modification of the part of the argument where normality is used to obtain an admissible value for the function $f_t $. The value $st$ used there has to be replaced by any (fixed) $z \in Ss \cap St$, and accordingly the fourth paragraph there should read as follows. Suppose now that $f \in H_0$ and $t \in S$, and consider the function $f_t $ defined by $f_t(x) = \lambda(x,t) f(xt)$ for $x \in S$. If $s \in S$ is admissible for $f$, let $z \in Ss \cap S t$. We will show that $s_0 := zt^{-1}$ is admissible for $f_t$. 
For every $x \in Ss_0$, $xt \in Sz$, and since $z$ is admissible for $f$, \begin{eqnarray*} \lambda(x,t)f(xt) &= &\lambda(x,t) \overline{\lambda(xtz^{-1}, z)}V_{xtz^{-1}} f(z)\\ & = & \overline{\lambda(xtz^{-1}, zt^{-1})}V_{xtz^{-1}}\lambda(zt^{-1},t)f(zt^{-1}t)\\ & = & \overline{\lambda(xs_0^{-1}, s_0)}V_{xs_0^{-1}} f_t(s_0) \end{eqnarray*} where the second equality holds by the multiplier property applied to the elements $xtz^{-1}$, $zt^{-1}$, and $t$ in $S$. This proves that $s_0$ is admissible for $f_t$, so $f_t \in H_0$. \end{proof} Since the results of \cite{semico} concerning discrete normal semigroups depend only on this dilation theorem and on the unique extension of group--valued homomorphisms, they too are valid for Ore semigroups and we list them here for reference. \begin{theorem}\label{semimult} Suppose $S$ is an Ore semigroup and let $G = S^{-1} S$. Then \begin{enumerate} \item Every multiplier on $S$ extends to a multiplier on $G$. \item Restriction of multipliers on $G$ to multipliers on $S$ gives an isomorphism of $H^2(G,\mathbb T)$ onto $H^2(S,\mathbb T)$. \item Suppose $\lambda$ is a multiplier on $S$ and let $V$ be a $\lambda$--representation of $S$ by isometries on $H$. Assume $\mu$ is a multiplier on $G$ extending $\lambda$. Then there exists a unitary $\mu$--representation $U$ of $G$ on a Hilbert space $\mathcal H$ containing a copy of $H$ such that $U_s|H = V_s$ for $s \in S$, and $\bigcup_{s\in S}U_s^* H$ is dense in $\mathcal H$. Moreover, $U$ and $\mathcal H$ are unique up to canonical isomorphism. \end{enumerate} \end{theorem} \begin{proof} The proofs of all but the last statement about uniqueness are as in Theorem 2.2, Corollary 2.3 and Corollary 2.4 of \cite{semico}, provided one considers the left-quotients $x = t^{-1} s$ instead of the right-quotients used there. 
In order to prove the uniqueness statement suppose $(U', \mathcal H ')$ is another unitary $\mu$-representation such that ${U'}_s|H = V_s$ and $\bigcup_{s\in S}{U'}_s^* H$ is dense in $\mathcal H'$. It is easy to see that the map $$ W: U_s^* h \mapsto {U'}_s^* h , \qquad s\in S, h \in H $$ is isometric, and that it extends to an isomorphism of $\mathcal H $ to $ \mathcal H'$ because of the density condition. It only remains to show that $W$ intertwines $U$ and $U'$. Since $S$ is an Ore semigroup, for every $x$ and $s$ in $S$ there exist $z $ and $t$ in $S$ such that $x s^{-1} = t ^{-1} z$. Then $tx = zs$, so \begin{eqnarray*} W U_x (U_s^* h) &=& W U_x U_{tx}^* U_{zs} U_s^* h = \mu(t,x) \overline{\mu(z,s)} W U_t^* U_{z} h = \mu(t,x) \overline{\mu(z,s)} W U_t^* (V_z h)\\ &=& \mu(t,x) \overline{\mu(z,s)} {U'}_t^* (V_z h) = \mu(t,x) \overline{\mu(z,s)}{U'}_t^* {U'}_z h = {U'}_x ({U'}_s^* h ) \\ &=& {U'}_x W (U_s^* h) \end{eqnarray*} This shows that $WU_x = U'_x W$ for every $x\in S$, hence for every $x \in G$. \end{proof} \subsection{Twisted semigroup crossed products}\label{twi.cross.prod} Suppose $A$ is a unital C*-algebra and let $\alpha$ be an action of the discrete semigroup $S$ by not necessarily unital endomorphisms of $A$. Let $\lambda$ be a circle-valued multiplier on $S$. A {\em twisted covariant representation} of the semigroup dynamical system $(A, S, \alpha)$ with multiplier $\lambda$ is a pair $(\pi, V)$ in which \begin{enumerate} \item $\pi$ is a unital representation of $A$ on $H$, \item $V: S \to Isom( H)$ is a projective isometric representation of $S$ with multiplier $\lambda$, i.e., $V_s V_t = \lambda(s,t) V_{st}$, and \item the covariance condition $\pi(\alpha_t(a)) = V_t \pi(a) V_t^*$ holds for every $a\in A$ and $ t\in S$. \end{enumerate} When dealing with twisted covariant representations with a specific multiplier $\lambda$, we will refer to the dynamical system as a twisted dynamical system and denote it by $(A, S, \alpha, \lambda)$. 
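The two algebraic conditions introduced so far, right--reversibility and the multiplier identity, are easy to experiment with numerically. The following Python sketch (our own illustration, not part of the original text) checks the Ore condition for the nonabelian affine semigroup $\mathbb N \rtimes \N^{\times}$, a semidirect product of the kind listed among the examples above, and verifies the cocycle identity for the multiplier $\lambda(r,s) = e^{2\pi i \theta r_2 s_1}$ on the abelian semigroup $\mathbb N^2$; the explicit witnesses in `common_left_multiple` and the choice $\theta = 0.3$ are ours.

```python
import cmath
from itertools import product

# Affine semigroup N x| N^x: (a, b) represents the map x -> a*x + b, a >= 1, b >= 0.
def mul(s, t):
    # (a, b)(c, d) acts as x -> a*(c*x + d) + b = (a*c)*x + (a*d + b)
    return (s[0] * t[0], s[0] * t[1] + s[1])

def common_left_multiple(s, t):
    # Explicit witnesses u, v with u*s == v*t, so Ss meets St (Ore condition).
    u = (t[0], max(0, s[0] * t[1] - t[0] * s[1]))
    v = (s[0], max(0, t[0] * s[1] - s[0] * t[1]))
    return u, v

for s, t in product(product(range(1, 4), range(3)), repeat=2):
    u, v = common_left_multiple(s, t)
    assert mul(u, s) == mul(v, t)

# A multiplier on the abelian semigroup N^2 (restriction of a
# noncommutative-torus cocycle): lambda(r, s) = exp(2*pi*i*theta*r2*s1).
theta = 0.3
def lam(r, s):
    return cmath.exp(2j * cmath.pi * theta * r[1] * s[0])

def add(r, s):
    return (r[0] + s[0], r[1] + s[1])

# Verify the cocycle identity lambda(r,s)lambda(rs,t) = lambda(r,st)lambda(s,t).
for r, s, t in product(product(range(3), repeat=2), repeat=3):
    lhs = lam(r, s) * lam(add(r, s), t)
    rhs = lam(r, add(s, t)) * lam(s, t)
    assert abs(lhs - rhs) < 1e-12
print("Ore condition and cocycle identity verified on samples")
```

The brute-force ranges are small, but the witnesses $u = (t_1, \max(0, s_1 t_2 - t_1 s_2))$ and $v = (s_1, \max(0, t_1 s_2 - s_1 t_2))$ work for all of $\mathbb N \rtimes \N^{\times}$, which is the content of the Ore condition for this example.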
The (twisted) crossed product associated to $(A, S, \alpha, \lambda)$ is a C*-algebra $A\rtimes_{\alpha, \lambda}S$ together with a unital homomorphism $i_A :A \rightarrow A\rtimes_{\alpha, \lambda}S$ and a projective $\lambda$-representation of $S$ as isometries $i_{S}: S \rightarrow A\rtimes_{\alpha, \lambda}S $ such that \begin{enumerate} \item $(i_A, i_{S})$ is a twisted covariant representation for $(A, S, \alpha, \lambda)$, \item for any other covariant representation $(\pi, V)$ there is a representation $\pi \times V$ of $A\rtimes_{\alpha, \lambda}S$ such that $\pi = (\pi\times V )\circ i_A$ and $V = (\pi\times V) \circ i_{S}$, and \item $A\rtimes_{\alpha, \lambda}S$ is generated by $i_A(A)$ and $i_{S} (S)$ as a C*-algebra. \end{enumerate} The existence of a nontrivial universal object associated to $(A, S, \alpha, \lambda)$ depends on the existence of a nontrivial twisted covariant representation with multiplier $\lambda$. For general endomorphisms such representations need not exist, even in the untwisted case. For instance, the action of $\mathbb N$ by surjective shift-endomorphisms of $c_0$ described in Example 2.1(a) of \cite{sta} does not admit any nontrivial covariant representations. We will assume that our endomorphisms are injective, hence nontriviality of the semigroup crossed product will follow from its realization as a corner in a nontrivial classical crossed product. See \cite{sta,murnew} for abelian semigroups, and Remark \ref{nontriv} below. There are other possible covariance conditions which yield nontrivial crossed products even if the endomorphisms fail to be injective, see e.g. \cite{murpac} and \cite{pet}. We will not deal with them here, but we refer to \cite{lam} for an interesting comparative discussion of the different constructions. 
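A minimal illustration of the dilation machinery of \thmref{dilation} (trivial multiplier, $S = \mathbb N$) is the classical passage from the unilateral shift to the bilateral shift. The sketch below is our own toy model, with finitely supported vectors on $\mathbb Z$ standing in for $\ell^2$, and checks conditions (i) and (ii) of that theorem.

```python
# Toy model of the dilation theorem for S = N, trivial multiplier:
# H = span{e_k : k >= 0} inside the big space span{e_k : k in Z}.
# U_s is the bilateral shift e_k -> e_{k+s}; V_s is its restriction to H.

def shift(vec, s):
    """Apply e_k -> e_{k+s} to a finitely supported vector (dict k -> coeff)."""
    return {k + s: c for k, c in vec.items()}

def in_H(vec):
    return all(k >= 0 for k in vec)

e = lambda k: {k: 1.0}

# (i) U_s leaves H invariant and restricts to the isometry V_s.
for s in range(1, 4):
    for k in range(5):
        assert in_H(shift(e(k), s))          # U_s H is contained in H

# V_s^* V_s = identity on H: shifting forward then back recovers the vector.
v = {0: 1.0, 3: -2.0}
assert shift(shift(v, 2), -2) == v

# (ii) the union over s of U_s^* H is everything: e_{-m} = U_s^* e_{s-m} for s >= m.
for m in range(4):
    s = m + 1
    assert shift(e(s - m), -s) == e(-m)
print("dilation conditions (i) and (ii) hold in the shift model")
```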
\begin{remark} It is immediate from the definition that the crossed product $A \rtimes S$ is generated, as a C*-algebra, by the monomials $v_x^* a v_y $ with $a \in A$, and $ x,y \in S$, but more is true for Ore semigroups: the products of such monomials can be simplified using covariance to obtain another monomial of the same type. Specifically, in order to simplify the product $v_x^* a v_y v_r^* b v_s$ we begin by finding elements $t$ and $z$ in $S$ such that $y r^{-1}= t^{-1} z$, so that $ty = zr$, (such elements do exist because $S$ is an Ore semigroup). It follows that \begin{eqnarray*} v_x^* a v_y v_r^* b v_s &=& \lambda(y,t) \overline{\lambda(z,r)} v_x^* a v_y v_{ty}^* v_{zr} v_r^* b v_s\\ &=& \lambda(y,t) \overline{\lambda(z,r)} v_x^* a v_yv_y^* v_t^* v_z v_rv_r^* b v_s\\ &=& \lambda(y,t) \overline{\lambda(z,r)} v_x^* v_t^* \alpha_t(a\alpha_y(1)) \alpha_z(\alpha_r(1) b ) v_z v_s\\ &=& \lambda(y,t) \overline{\lambda(z,r)} \overline{\lambda(t,x)} \lambda(z,s) v_{tx}^* \alpha_t(a\alpha_y(1)) \alpha_z(\alpha_r(1) b ) v_{zs}, \end{eqnarray*} hence the linear span of such monomials is dense in the crossed product. \end{remark} \section{The minimal automorphic dilation.} \label{min-aut-ext} There are two steps in realizing a semigroup crossed product as a corner in a crossed product by a group action. The first one is the dilation-extension of a semigroup action by injective endomorphisms to a group action by automorphisms, and the second one is the corresponding dilation-extension of covariant representations of the semigroup dynamical system to covariant representations of the dilated system. \subsection{A dilation-extension theorem.} \begin{theorem}\label{dil-ext} Assume $S$ is an Ore semigroup with enveloping group $G = S^{-1} S$ and let $\alpha$ be an action of $S$ by injective endomorphisms of a unital C*-algebra $A$. 
Then there exists a C*-dynamical system $(B, G, \beta)$, unique up to isomorphism, consisting of an action $\beta$ of $G$ by automorphisms of a C*-algebra $B$ and an embedding $i: A \to B$ such that \begin{enumerate} \item $\beta$ dilates $\alpha$, that is, $\beta_s\circ i = i \circ \alpha_s$ for $s \in S$, and \item $(B, G, \beta)$ is minimal, that is, $\bigcup_{s\in S}\beta_s^{-1}(i(A))$ is dense in $B$. \end{enumerate} \end{theorem} \begin{proof} By right--reversibility, $S$ is directed by $\preceq_r$ so one may follow the argument of \cite[Section 2]{murnew}. However, extra work is needed here: since $G$ need not be abelian, the choice of embeddings in the directed system must be carefully matched to the choice of right-order $\preceq_r$ on $S$. Consider the directed system of C*-algebras determined by the maps $\alpha_y^x = \alpha_{yx^{-1}}$ from $A_x := A $ into $A_y := A$, for $x \in S$ and $y \in Sx$, i.e. for $x \preceq_r y$ in $S$. By \cite[Proposition 11.4.1(i)]{kad-rin} there exists an inductive limit C*-algebra $A_\infty$ together with embeddings $\alpha^x : A_x \to A_\infty$ such that $\alpha^x = \alpha^y \circ \alpha^x_y$ whenever $x\preceq_r y$, and such that $\bigcup_{x\in S} \alpha^x(A_x)$ is dense in $A_\infty$. The next step is to extend the endomorphism $\alpha_s$ to an automorphism of $A_\infty$. For any fixed $s \in S$ the subset $Ss$ of $S$ is cofinal, so $A_\infty$ is also the inductive limit of the directed subsystem $(A_x, x\in Ss)$, and, for this subsystem, we may consider new embeddings $\psi^x : A_x \to A_\infty$ defined by $\psi^x (a) = \alpha^{xs^{-1}}(a)$ for $x \in Ss$ and $a \in A_x$. By \cite[Proposition 11.4.1(ii)]{kad-rin} there is an automorphism $\tilde{\alpha}_s$ of $A_\infty$ such that $\tilde{\alpha}_s \circ \alpha^x = \psi^x$ for every $x \in Ss$. 
Since $ \alpha^1 = \alpha^s \circ \alpha^1_s$ and $\psi^x = \alpha^{xs^{-1}}$, the choice $x = s$ gives $$ \tilde{\alpha}_s \circ \alpha^1 = \tilde{\alpha}_s \circ \alpha^s \circ \alpha^1_s = \alpha^1 \circ \alpha_s $$ so that (1) holds with $\beta = \tilde{\alpha}$ and $i = \alpha^1 : A_1 \to A_\infty$. Since $\tilde{\alpha}_s^{-1}(i(A)) = \alpha^s (A_s)$, (2) also holds. Uniqueness of the dilated system follows from \cite[Proposition 11.4.1(ii)]{kad-rin}: $A_\infty$ is the closure of the union of the subalgebras $\tilde{\alpha}_s^{-1}(i(A))$ with $s \in S$; if $(B, G, \beta)$ is another minimal dilation with embedding $j : A \to B$ then there is an isomorphism $\theta: A_\infty \rightarrow B$ given by $\theta \circ \tilde{\alpha}_s^{-1}( i(a)) = \beta_s^{-1} (j(a))$ for $a \in A$, and this isomorphism intertwines $\tilde{\alpha}$ and $\beta$. \end{proof} \begin{definition} A system $(B, G, \beta)$ satisfying the conditions (1) and (2) of \thmref{dil-ext} is called the {\em minimal automorphic dilation} of $(A,S,\alpha)$. If $\lambda$ is a multiplier on $S$ with extension $\mu$ to $G$, we say that the twisted system $(B, G, \beta, \mu)$ is the minimal automorphic dilation of the twisted system $(A,S,\alpha,\lambda)$. (By \thmref{semimult} the extended multiplier exists and is unique up to a coboundary.) \end{definition} \begin{lemma}\label{dil-cov-rep} Let $(\pi,V)$ be a covariant representation for the twisted system $(A,S,\alpha,\lambda)$ on the Hilbert space $H$, and let $\tilde{V}$ be the minimal projective unitary dilation of $V$ on $\mathcal H$ given by \thmref{dilation}. Then there exists a representation $\tilde{\pi}$ of $B$ on $\mathcal H$ such that $(\tilde{\pi},\tilde{V})$ is covariant for the minimal automorphic dilation $(B, G, \beta, \mu)$ and $\tilde{\pi} \circ i = \pi$ on $H$. 
\end{lemma} \begin{proof} We work with the dense subspace $\mathcal H_0 = \bigcup_{t\in S} \tilde{V}_t^* H$ of $\mathcal H$ and the dense subalgebra $B_0 = \bigcup_{s\in S} \beta_s^{-1}(i(A))$. If $\xi \in \mathcal H_0$ there exists $t \in S$ such that $\tilde{V}_t \xi \in H$; assume $b = \beta_t^{-1}(i(a))$. Since we want $(\tilde{\pi}, \tilde{V})$ to be covariant, the only choice is to define $\tilde{\pi}$ by $$ \tilde{\pi}(b) \xi = \tilde{\pi}(\beta_t^{-1}(i(a))) \xi = \tilde{V}_t^* \tilde{\pi}(i(a)) \tilde{V}_t \xi = \tilde{V}_t^* \pi(a) \tilde{V}_t \xi $$ because $\tilde{\pi}$ restricted to $i(A)$ and cut down to $H$ has to be equal to $\pi$. Of course we have to show that this actually defines an operator $\tilde{\pi} (b)$ on $\mathcal H$ for each $b \in B_0$, that $\tilde{\pi}$ extends to a homomorphism from all of $B$ to $B(\mathcal H)$, and that $(\tilde{\pi},\tilde{V})$ is covariant. The first step is to define $\tilde{\pi}(b)$ on $\mathcal H_0$ for a fixed $b \in B_0$. We begin by fixing $a \in A$ and $s\in S$ such that $b = \beta_s^{-1}(i(a))$. For $\xi \in \tilde{V}^*_{t_0} H$ with $t_0$ in the cofinal set $Ss$, we let \begin{equation} \label{dil-rep} \varphi(b) \xi = \tilde{V}_{t_0}^* \pi (\alpha_{{t_0}s^{-1}}(a)) \tilde{V}_{t_0} \xi. \end{equation} If $t \in St_0$ then $\xi \in \tilde{V}^*_t H$, and \begin{eqnarray*} \tilde{V}_{t}^* \pi (\alpha_{{t}s^{-1}}(a)) \tilde{V}_{t} \xi &=& \tilde{V}_{t_0}^* \tilde{V}_{tt_0^{-1}}^* \pi(\alpha_{t t_0^{-1}} \circ \alpha_{{t_0}s^{-1}}(a)) \tilde{V}_{tt_0^{-1}} \tilde{V}_{t_0} \xi \\ & = & \tilde{V}_{t_0}^* \tilde{V}_{tt_0^{-1}}^* V_{t t_0^{-1}} \pi( \alpha_{{t_0}s^{-1}}(a)) V_{t t_0^{-1}}^* \tilde{V}_{tt_0^{-1}} \tilde{V}_{t_0} \xi \\ & = & \tilde{V}_{t_0}^* \pi( \alpha_{{t_0}s^{-1}}(a)) \tilde{V}_{t_0} \xi. \end{eqnarray*} So the definition of $\varphi (b) \xi $ could have been given using any $t \in St_0$ in place of $t_0$. 
Next we show that $\varphi (b) \xi $ is also independent of $s$ and $a$, in the sense that if $b$ is also equal to $ \beta_{s'}^{-1}(i(a')) $ then $\alpha_{{t}{s'}^{-1}}(a')$ is equal to $\alpha_{{t} s^{-1}}(a)$ for $t$ in a cofinal set. To see this let $t \in Ss \cap Ss'$. Then $\alpha^t \circ \alpha^{s'}_t (a') = \alpha^{s'}(a') = \beta_{s'}^{-1}(i(a')) = \beta_{s}^{-1}(i(a)) = \alpha^{s}(a) = \alpha^t \circ \alpha^{s}_t (a)$, and since the embedding $\alpha^t$ is injective, it follows that $\alpha_{t{s'}^{-1}}(a') = \alpha_{ts^{-1}}(a)$. The map $\varphi(b) : \mathcal H_0 \to \mathcal H_0$ is clearly linear, and since the endomorphisms are injective, $\| \varphi (b) \xi\| \leq \|b\| \| \xi \|$. Thus $\varphi(b)$ can be uniquely extended to a bounded linear operator (also denoted $\varphi(b)$) on all of $\mathcal H$ such that $\| \varphi(b)\| \leq \|b\|$. For any $s$ the map $\operatorname{Ad}_{\tilde{V}_{t_0}^*} \circ \pi \circ \alpha_{{t_0}s^{-1}}$ is a *-homomorphism on $A$, and by cofinality of $\preceq_r$, for any $b_1$ and $ b_2 $ in $B_0$ there exist $s \in S$ and $a_1$ and $a_2$ in $A$ such that $b_1 = \beta_s^{-1}(i(a_1))$ and $b_2 = \beta_s^{-1}(i(a_2))$. It follows easily from (\ref{dil-rep}) that $\varphi : B_0 \to B(\mathcal H)$ is a *-homomorphism which can be extended to a representation $\tilde{\pi}$ of $B$ on $\mathcal H$. Putting $a = 1$ in (\ref{dil-rep}) shows that $\tilde{\pi}$ is nondegenerate, and it only remains to check that $(\tilde{\pi},\tilde{V})$ is a covariant pair for $(B,G, \beta, \mu)$. Suppose first $x \in S$ and $b \in B_0$; we can assume that $b = \beta^{-1}_s (i(a))$ for some $a \in A$ and $s \in Sx$. Let $\xi \in \tilde{V}^*_t H$; we can assume $t \in Ss \subset Sx$, and we observe that $\tilde{V}_x \xi \in \tilde{V}^*_{tx^{-1}} H$. 
Then \begin{eqnarray*} \tilde{\pi}(\beta_x(b)) \tilde{V}_x \xi & = & \tilde{\pi}(\beta_{x s^{-1}}(i(a))) \tilde{V}_x \xi\\ & = & \tilde{\pi}(\beta^{-1}_{sx^{-1}}(i(a))) \tilde{V}_x \xi\\ & = & \tilde{V}^*_{tx^{-1}} \pi(\alpha_{tx^{-1} xs^{-1}}(a)) \tilde{V}_{tx^{-1}}\tilde{V}_x \xi\\ & = & \tilde{V}^*_{tx^{-1}} \pi(\alpha_{ts^{-1}}(a)) \tilde{V}_{tx^{-1}}\tilde{V}_x \xi\\ & = & \tilde{V}^*_{x^{-1}}\tilde{V}^*_{t} \pi(\alpha_{ts^{-1}}(a)) \tilde{V}_t \xi\\ & = & \tilde{V}_{x} \tilde{\pi}(\beta^{-1}_{s}(i(a))) \xi, \end{eqnarray*} and since $\mathcal H_0$ is dense in $\mathcal H$ and $B_0$ is dense in $B$, the pair $(\tilde{\pi}, \tilde{V})$ satisfies the covariance relation. \end{proof} \subsection{Full corners.} Once we know how to dilate covariant representations from the semigroup action to the group action we can establish the relation between the respective crossed products. Before proving our main result we recall that if $p$ is a projection in the C*-algebra $A$ then the algebra $p A p$ is a {\em corner} in $A$, which is said to be {\em full} if the linear span of $ApA$ is dense in $A$. The most relevant feature of full corners is that if $pAp$ is a full corner in $A$, then $pA$ is a full Hilbert bimodule implementing the Morita equivalence, in the sense of Rieffel, of $pAp$ and $A$. \begin{theorem}\label{fulcor} Suppose $(A,S,\alpha, \lambda)$ is a twisted semigroup dynamical system in which $S$ is an Ore semigroup acting by injective endomorphisms and $\lambda $ is a multiplier on $S$. Let $(B,G,\beta, \mu)$ be the minimal automorphic dilation, with embedding $i: A \to B$. Then $A\rtimes_{\alpha,\lambda} S$ is canonically isomorphic to $i(1) (B\rtimes_{\beta, \mu} G) i(1)$, which is a full corner. As a consequence, the crossed product $A\rtimes_{\alpha,\lambda} S$ is Morita equivalent to $B \rtimes_{\beta, \mu} G$. 
\end{theorem} \begin{proof} Let $U$ be the canonical projective unitary representation of $G$ in the multiplier algebra of $B\rtimes_{\beta, \mu} G$, and notice that $$i(1) U_s i(1) = U_s i(1), \qquad s \in S, $$ because $i(A)$ is invariant under $\beta_s$. Define $v_s = U_s i(1) $. Then $v_s^* v_s = i(1) U_s ^* U_s i(1) = i(1)$ and $v_s v_t = U_s i(1) U_t i(1) = U_s U_t i(1) = \mu(s,t) U_{st} i(1) = \lambda(s,t) v_{st}$, so $v$ is a projective isometric representation of $S$ with multiplier $\lambda$. Since $i(1) (B\rtimes_{\beta, \mu} G) i(1)$ is generated by the elements $i(1)U_x^* i(a) U_y i(1) = v_x^* i(a) v_y$, the isomorphism will be established by uniqueness of the crossed product once we show that the pair $(i,v)$ is universal. Suppose $(\pi,V)$ is a covariant representation for the twisted system $(A,S,\alpha,\lambda)$, and let $(\tilde{\pi},\tilde{V})$ be the corresponding dilated covariant representation of $(B,G,\beta, \mu)$ given by \lemref{dil-cov-rep}. By the universal property of $ B\rtimes_{\beta, \mu} G $ there is a homomorphism $$ (\tilde{\pi} \times \tilde{V}) : B\rtimes_{\beta, \mu} G \to C^*(\tilde{\pi}, \tilde{V}) $$ such that $\tilde{\pi}(b) \tilde{V}_s = (\tilde{\pi} \times \tilde{V}) (i_B(b) U_s)$. Let $\rho $ be the restriction of $ (\tilde{\pi} \times \tilde{V}) $ to $i(1) (B\rtimes_{\beta, \mu} G) i(1)$, cut down to the invariant subspace $H$. By \lemref{dil-cov-rep} $$ \rho(i(a)) = (\tilde{\pi} \times \tilde{V})(i(a)) = \tilde{\pi} \circ i(a) = \pi(a), \qquad a \in A, $$ while $$ \rho(v_s) = (\tilde{\pi} \times \tilde{V}) (U_s i(1)) = \tilde{V}_s \pi(1) = V_s, \qquad s \in S. $$ Thus $\rho \circ i = \pi$ and $\rho \circ v = V$, so $(i,v)$ is universal for $(A,S,\alpha, \lambda)$. Finally we prove that the corner is full, i.e., that the linear span of the elements of the form $X i(1) Y$ with $X, Y \in B \rtimes_{\beta, \mu} G $ is a dense subset of $ B \rtimes_{\beta, \mu} G$. 
It is easy to see that the elements of the form $U_s^* b U_t$ span a dense subset of $ B \rtimes_{\beta, \mu} G$ because $G = S^{-1} S$, where $b$ may be replaced with $U_r^* i(a) U_r$ by minimality of the dilation. Thus the elements $U_y^* i(\alpha_z(a)) U_x$ with $x,y,z \in S$ and $a \in A$ span a dense subset of $B \rtimes_{\beta, \mu} G$, and since $i(\alpha_z(a)) = i(1) i(\alpha_z(a))$, the proof is finished. \end{proof} \begin{remark} \label{nontriv} If one drops the assumption of injectivity of the endomorphisms, it is still possible to carry out the constructions and the arguments in the proofs of the preceding theorems. However, the resulting homomorphism $i: A \to B$ may not be an embedding any more. Indeed, Example 2.1(a) of \cite{sta} shows that the limit algebra $B$ may turn out to be the $0$ C*-algebra, yielding a trivial dilated system. We notice that the dilated system $(B,G,\beta,\mu)$ has nontrivial covariant representations if and only if $B \neq 0$, and these representations, when cut down to $i(A)$, give nontrivial covariant representations of the original semigroup system $(A,S,\alpha,\lambda)$. Thus, following \cite[Proposition 2.2]{sta} which deals with the case $S = \mathbb N$, we conclude that the crossed product $A\rtimes_{\alpha,\lambda} S$ is nontrivial if and only if the limit algebra $B$ is not $0$. Clearly, this is the case when, for instance, the endomorphisms are injective. \end{remark} \section{An example from number theory.} As an application of the preceding theory we consider the semigroup dynamical system from \cite{bc-alg} whose crossed product is the Bost-Connes Hecke C*-algebra \cite{bos-con}. Since Morita equivalence implies that the representation theory of the semigroup dynamical system is equivalent to that of the dilated system, it is quite useful to have an explicit formulation of the dilation. 
We point out that since the semigroup in question is abelian, this application is somewhat independent of the rest of the material on nonabelian semigroups. In fact, the example could be dealt with by enhancing \cite[Section 2]{murnew} with the uniqueness and fullness properties discussed above, which are easier to prove for abelian semigroups. \subsection{Finite Adeles.} The natural setting for identifying the ingredients of the minimal automorphic dilation of the semigroup dynamical system introduced in \cite{bc-alg} will be the (dual) $p$-adic picture described in \cite[Proposition 32]{diri}, in which the algebra is $C(\prod_p \mathbb Z_p)$ and the endomorphisms $\alpha_n$ consist of `division by $n$' in $\prod_p \mathbb Z_p$: $$\alpha_n (f) (x) = \left\{\begin{array}{ll} f(x/n)&\mbox{if $n | x$}\\ 0&\mbox{ otherwise}. \end{array} \right. $$ By \cite[Corollary 2.10]{bc-alg} the crossed product associated to this system is canonically isomorphic to the Bost-Connes Hecke C*-algebra $\mathcal C_{\mathbb Q}$. The ring ${\mathcal Z} := \prod_p \mathbb Z_p$ has many zero divisors and hence no field of fractions. However, the diagonally embedded copy of $\N^{\times}$ is a multiplicative set with no zero divisors, and we may enlarge ${\mathcal Z}$ to a ring in which division by an element of $\N^{\times}$ is always possible. Our motivation is to extend the endomorphisms $\alpha_n$ defined above to automorphisms. The algebraic part is easy: we consider the ring $(\N^{\times})^{-1} {\mathcal Z}$ of formal fractions ${z}/{n}$ with $z\in {\mathcal Z}$ and $n \in \N^{\times}$, with the obvious rules of addition and multiplication (and simplification!), \cite[II.\S3]{lan-alg}. This ring has a universal property with respect to homomorphisms of ${\mathcal Z}$ that send $\N^{\times}$ into units. Since $\N^{\times} $ has no zero divisors, the canonical map $z \mapsto {z}/{1}$ is an embedding of ${\mathcal Z}$ in $(\N^{\times})^{-1} {\mathcal Z}$. 
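The `division by $n$' formula can be tested in a toy model in which the spectrum $\prod_p \mathbb Z_p$ is replaced by the index set $\{1,2,3,\dots\}$, the representation is diagonal, $\pi(f)e_k = f(k)e_k$, and the isometries act by $V_n e_k = e_{nk}$. This model is our own simplification (only the multiplicative bookkeeping survives), but it exhibits both the semigroup law $\alpha_n \circ \alpha_m = \alpha_{nm}$ and the covariance $\pi(\alpha_n(f)) = V_n \pi(f) V_n^*$.

```python
# Toy check of the covariance pi(alpha_n(f)) = V_n pi(f) V_n^* for the
# 'division by n' endomorphisms, with the compact space replaced by the
# index set {1, 2, 3, ...} and the Hilbert space spanned by e_k, k >= 1.

def alpha(n, f):
    # alpha_n(f)(x) = f(x/n) if n | x, else 0
    return lambda x: f(x // n) if x % n == 0 else 0.0

def pi_f_on_basis(f, k):
    # pi(f) e_k = f(k) e_k ; return the coefficient
    return f(k)

def V_pi_Vstar(n, f, k):
    # V_n pi(f) V_n^* e_k : V_n^* e_k = e_{k/n} if n | k, else 0
    if k % n != 0:
        return 0.0
    return pi_f_on_basis(f, k // n)  # V_n maps e_{k/n} back to e_k

f = lambda x: 1.0 / x  # any bounded function of the index

# Semigroup property alpha_2 alpha_3 = alpha_6 ...
for x in range(1, 50):
    assert alpha(2, alpha(3, f))(x) == alpha(6, f)(x)
# ... and covariance on basis vectors.
for n in (2, 3, 5):
    for k in range(1, 50):
        assert alpha(n, f)(k) == V_pi_Vstar(n, f, k)
print("covariance holds in the toy model")
```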
The topological aspect requires a moment's thought, after which we declare that the subring ${\mathcal Z}$ must retain its compact topology and be relatively open. Since we want division by $n\in \N^{\times}$ to be an automorphism, this determines a topology on the compact open sets $(1/n) {\mathcal Z}$ and hence on their union, $(\N^{\times})^{-1} {\mathcal Z}$, which becomes a locally compact ring containing ${\mathcal Z}$ as a compact open subring. The ring we have just defined is (isomorphic to) the locally compact ring $\mathbb A_f$ of finite adeles, which is usually defined as the restricted product, over the primes $p \in \mathcal P$, of the $p$-adic numbers $\mathbb Q_p$ with respect to the $p$-adic integers $\mathbb Z_p$: $$ \mathbb A_f : = \{ (a_p) : a_p \in\mathbb Q_p \text{ and } a_p \in \mathbb Z_p \text{ for all but finitely many }p \in \mathcal P \}, $$ with $ \prod_p \mathbb Z_p$ as its maximal compact open subring. The isomorphism is implemented by the map from $(\N^{\times})^{-1} {\mathcal Z}$ into $\mathbb A_f$ given by the universal property; this map is clearly injective and, since every finite adele can be written as $z/n$ with $z\in {\mathcal Z}$ and $n \in \N^{\times}$, it is also surjective. Specifically, for each $a_p \in \mathbb Q_p$ there exists $k_p$ such that $p^{k_p} a_p = z_p \in \mathbb Z_p$ and a sequence $a = (a_p)_{p\in \mathcal P}$ is an adele if and only if $k_p$ can be taken to be $0$ for all but finitely many $p$'s, in which case $ n = \prod_p p^{k_p} \in \N^{\times}$ and $a = (na)/n$, with $na = (na_p)_{p\in \mathcal P} \in \prod_p \mathbb Z_p$. 
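The denominator-clearing argument at the end of the last paragraph is easy to implement: given finitely many rational components $a_p$ (all others being $p$-adic integers already), take $k_p = \max(0, -v_p(a_p))$, so that $n = \prod_p p^{k_p} \in \N^{\times}$ satisfies $na \in \prod_p \mathbb Z_p$. The following sketch, with sample data of our choosing, checks this with exact rational arithmetic.

```python
from fractions import Fraction

def vp(q, p):
    """p-adic valuation of a nonzero rational q."""
    q = Fraction(q)
    v = 0
    num, den = q.numerator, q.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

# A finite adele given by rational components at finitely many primes
# (all unspecified components are p-adic integers already).
a = {2: Fraction(3, 8), 3: Fraction(5, 1), 5: Fraction(7, 25)}

# k_p = max(0, -v_p(a_p)) makes p^{k_p} a_p a p-adic integer, and
# n = prod p^{k_p} in N^x clears all denominators at once: a = (n a)/n.
n = 1
for p, ap in a.items():
    k = max(0, -vp(ap, p))
    n *= p ** k

for p, ap in a.items():
    assert vp(n * ap, p) >= 0   # n*a_p lies in Z_p
print("n =", n)                  # here n = 8 * 25 = 200
```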
\subsection{The minimal automorphic dilation of $(C(\mathcal Z), \N^{\times}, \alpha)$.}\label{diladeles} The rational numbers are embedded in $\mathbb A_f$, and division by a nonzero rational is clearly a homeomorphism so $$ \beta_r(f)(a) = f(r^{-1} a), \quad a \in\mathbb A_f, r \in \Q^*_+ $$ defines an action of $\Q^*_+ = (\N^{\times})^{-1} \N^{\times} $ by automorphisms of $C_0(\mathbb A_f)$. Since ${\mathcal Z}$ is compact and open, its characteristic function $ 1_{{\mathcal Z}}$ is a projection in $C_0(\mathbb A_f)$ and there is an obvious embedding $i$ of $C({\mathcal Z})$ as the corresponding ideal of $ C_0(\mathbb A_f)$, given by $$ i(f) (a) = \left\{ \begin{array}{ll} f(a) & \text{ if } a \in {\mathcal Z}\\ 0 & \text{ if } a \notin {\mathcal Z}. \end{array} \right. $$ \begin{proposition}\label{BC-min-aut-ext} The C*-dynamical system $(C_0(\mathbb A_f), \Q^*_+, \beta)$ is the minimal automorphic dilation of the semigroup dynamical system $(C({\mathcal Z}), \N^{\times}, \alpha)$, and hence $\mathcal C_{\mathbb Q}$ is the full corner of $C_0(\mathbb A_f)\rtimes_\beta \Q^*_+$ determined by the projection $1_{\mathcal Z}$. \end{proposition} \begin{proof} The embedding clearly intertwines $\alpha_n$ and $\beta_n$, in the sense that $\beta_n (i(f)) = i(\alpha_n(f))$, and the union of the compact subgroups $(1/n) {\mathcal Z}$ is dense in $\mathbb A_f$, so the union of the subalgebras $\beta_{1/n}(i( C({\mathcal Z})) )$ is dense in $C_0(\mathbb A_f) $, and the result follows from \thmref{dil-ext} and \thmref{fulcor}. \end{proof} Since the discrete multiplicative group $\Q^*_+$ acts by homotheties on the locally compact additive group $\mathbb A_f$, and since $\mathbb A_f$ is self-dual, we obtain another characterization of $\mathcal C_{\mathbb Q}$ as a full corner in the group C*-algebra of the semidirect product $\mathbb A_f \rtimes \Q^*_+$. One should bear in mind, however, that the self duality of $\mathbb A_f$ is not canonical. 
\begin{corollary} Let $e_{{\mathcal Z}} \in C^*(\mathbb A_f)$ be the Fourier transform of ${1_{\mathcal Z}} \in C_0(\mathbb A_f)$. Then $$ \mathcal C_{\mathbb Q} \cong e_{{\mathcal Z}} C^*(\mathbb A_f \rtimes \Q^*_+)e_{{\mathcal Z}}.$$ \end{corollary} \begin{proof} The action of $\Q^*_+$ on $\mathbb A_f$ is by homotheties, which are group automorphisms, so $C^*(\mathbb A_f \rtimes \Q^*_+)$ is isomorphic to the crossed product $C^*(\mathbb A_f) \rtimes_\beta \Q^*_+$. Moreover, the self-duality of the additive group of $\mathbb A_f$ satisfies $\langle rx,y\rangle = \langle x,ry\rangle$ for $r \in \Q^*_+$, thus $C^*(\mathbb A_f)$ is covariantly isomorphic to $C_0(\mathbb A_f)$, so $C^*(\mathbb A_f) \rtimes_\beta \Q^*_+$ is isomorphic to $C_0(\mathbb A_f) \rtimes_\beta \Q^*_+$, and the claim follows from Proposition \ref{BC-min-aut-ext}. \end{proof} \begin{remark} One of the principles of noncommutative geometry advocates that if $G$ is a group acting on a space $X$, then the quotient space $X/G$ has a noncommutative version in the associated crossed product $C_0(X) \rtimes G$, which is often more tractable. Accordingly, if we allow back in the all-important place at infinity which is left out of $\mathbb A_f$ and if we replace $\Q^*_+$ by $\mathbb Q^*$, cf. \cite[Remarks 33]{bos-con}, then our \proref{BC-min-aut-ext} gives an explicit path leading from the Bost-Connes Hecke C*-algebra to the space $\mathcal A/\mathbb Q^*$, on which the construction of \cite{con-cr, con-rzf} is based. \end{remark} \end{document}
\begin{document} \begin{center} {\large \sc \bf {Coherence evolution and transfer supplemented by the state-restoring} } \vskip 15pt {\large E.B.~Fel'dman and A.I.~Zenchuk } \vskip 8pt {\it Institute of Problems of Chemical Physics, RAS, Chernogolovka, Moscow reg., 142432, Russia}. \end{center} \begin{abstract} The evolution of quantum coherences is accompanied by a set of conservation laws provided that the Hamiltonian governing this evolution conserves the spin-excitation number. Moreover, coherences of different orders do not intertwine during the evolution. Using a transmission line and a receiver in the initial ground state, we can transfer the coherences to the receiver without interaction between them, although the matrix elements contributing to each particular coherence intertwine in the receiver's state. Therefore we propose a tool, based on a unitary transformation at the receiver side, to untwist these elements and thus restore (at least partially) the structure of the sender's initial density matrix. A communication line with a two-qubit sender and receiver is considered as an example of the implementation of this technique. \end{abstract} \maketitle \section{Introduction} \label{Section:Introduction} Multiple-quantum (MQ) NMR dynamics is a basic tool of the well-developed MQ NMR spectroscopy studying the nuclear spin distribution in various systems \cite{BMGP,DMF}. Working with spin polarization, we essentially deal with the diagonal elements of the density matrix. However, the MQ NMR method allows us to split the whole density matrix into $N+1$ parts, each of which contributes to a specific observable quantity called a coherence intensity. Thus the study of coherence intensities and of methods of manipulating them becomes an important direction in the development of MQ NMR. For instance, the problem of relaxation of MQ coherences was studied in \cite{KS1,KS2,AS,CCGR,BFVV}. A similar problem for a nanopore was considered in \cite{DFZ}.
In an MQ NMR experiment, a special sequence of magnetic pulses is used to generate the so-called two-spin/two-quantum Hamiltonian ($H_{MQ}$), which is the non-secular part of the dipole-dipole interaction Hamiltonian averaged over fast oscillations. It was shown in the approximation of nearest-neighbor interactions that the $H_{MQ}$ Hamiltonian can be reduced to the flip-flop XX-Hamiltonian ($H_{XX}$) \cite{Mattis} via a unitary transformation \cite{DMF}. Notice that $H_{MQ}$ does not commute with the $z$-projection of the total spin momentum $I_z$, while $[H_{XX},I_z]=0$. In this paper we consider the evolution problem for the created MQ coherences. Therefore, after creating the coherences, we switch off the irradiation and allow the coherences to evolve independently under a Hamiltonian commuting with $I_z$ (this can be, for instance, the $H_{dz}$ Hamiltonian \cite{Abragam,Goldman} or the $H_{XX}$ flip-flop Hamiltonian). We show that the coherences do not interact during the evolution governed by a Hamiltonian conserving the $z$-projection of the total spin momentum. This fact gives rise to a set of conservation laws associated with such dynamics: the coherence intensity of each order is conserved. However, the density-matrix elements contributing to the same-order coherence do intertwine. In addition, the coherences created in some subsystem (sender) can be transferred to another subsystem (receiver) through a transmission line without interaction between coherences, provided both the receiver and the transmission line are initially in a state carrying only the zero-order coherence. This process can be considered as a particular implementation of the remote state creation in spin systems \cite{Z_2014,BZ_2015}. We show that the sender's density-matrix elements collected in the receiver's state can be untwisted using a method based on a unitary transformation of the receiver or, more effectively, of an extended receiver.
The theoretical arguments are supplemented with a particular model of a communication line having a two-node sender and receiver. Notice that the extended receiver was already used in previous papers concerning the remote state creation \cite{BZ_2016} with the purpose of properly correcting the created state of the receiver and improving the characteristics of the remote state creation \cite{Z_2014,BZ_2015}. The paper is organized as follows. In Sec.\ref{Section:DC} we select the matrices $\rho^{(n)}$ responsible for forming the $n$-order coherence intensity and study some extremal values of the coherence intensities. The evolution of the coherence intensities is considered in Sec.\ref{Section:ev}. The transfer of the coherences from the sender to the receiver is studied in Sec.\ref{Section:cohtr}. In Sec.\ref{Section:model} we apply the results of the previous sections to a particular model of a chain with a 2-qubit sender and receiver. A brief discussion of the obtained results is given in Sec.\ref{Section:conclusion}. \section{Density matrix and coherences} \label{Section:DC} It was shown (for instance, see \cite{FL}) that the density matrix of a quantum state can be written as a sum \begin{eqnarray}\label{RhoC} \rho = \sum_{n={-N}}^N \rho^{(n)}, \end{eqnarray} where each submatrix $ \rho^{(n)}$ consists of the elements of $\rho$ responsible for the spin-state transitions changing the total $z$-projection of the spin momentum by $n$. These elements contribute to the so-called $n$-order coherence intensity $I_n$, which can be registered using MQ NMR methods. To select the density-matrix elements contributing to the $n$-order coherence we turn to the { density-matrix representation in the multiplicative basis \begin{eqnarray}\label{multb} |i_1\dots i_N\rangle,\;\;i_k=0,1,\;\;k=1,\dots,N, \end{eqnarray} where $i_k$ denotes the state of the $k$th spin.
Thus, the correspondence between the computational basis and the multiplicative one reads} \begin{eqnarray}\label{mult} \rho_{ij}= \rho_{i_1\dots i_N;j_1\dots j_N},\;\;\; i=\sum_{n=1}^N i_n 2^{n-1} +1,\;\; j=\sum_{n=1}^N j_n 2^{n-1} +1. \end{eqnarray} Then, according to the definition, \begin{eqnarray}\label{defI} I_n(\rho) ={\mbox{Tr}} \Big(\rho^{(n)}\rho^{(-n)}\Big) = \sum_{\sum_k (j_k - i_k) = n} |\rho_{i_1\dots i_N;j_1\dots j_N}|^2,\;\; |n|\le N. \end{eqnarray} \subsection{Extremal values of coherence intensities} First of all we find the extremal values of the zero-order coherence intensity of $\rho$ provided that all other coherences are absent, so that $\rho=\rho^{(0)}$. By definition (\ref{defI}), \begin{eqnarray} I_0={\mbox{Tr}} \Big(\rho^{(0)} \rho^{(0)}\Big) = {\mbox{Tr}} \left(U_0\Lambda_0 U_0^+\right)^2 = {\mbox{Tr}} \Lambda_0^2 = \sum_{i=1}^{2^N} \lambda_{0i}^2, \end{eqnarray} where $N$ is the number of spins in the system, $\Lambda_0={\mbox{diag}}(\lambda_{01},\dots,\lambda_{02^N})$ and $U_0$ are, respectively, the eigenvalue and eigenvector matrices of $\rho$. Therefore we have to find the extremum of $I_0$ under the normalization condition $\sum_{i=1}^{2^N} \lambda_{0i} =1$. Introducing the Lagrange multiplier $\alpha$ we reduce the problem to finding the extremum of the function \begin{eqnarray} \tilde I_0 = \sum_{i=1}^{2^N} \lambda_{0i}^2 - \alpha \left( \sum_{i=1}^{2^N} \lambda_{0i} -1\right). \end{eqnarray} Differentiating with respect to $\lambda_{0i}$ and equating the result to zero we obtain the system of equations \begin{eqnarray} 2\lambda_{0i}=\alpha,\;\;i=1,\dots,2^N, \end{eqnarray} so that $\lambda_{0i}=\frac{\alpha}{2}$. The normalization yields $\alpha=\frac{1}{2^{N-1}}$, hence $\lambda_{0i}=\frac{1}{2^N}$. The second derivative of $\tilde I_0$ shows that this is a minimum. Thus, we have \begin{eqnarray} I_0^{min}=\frac{1}{2^N}, \;\;\rho|_{I_{0}^{min}} = \frac{1}{2^N}E, \end{eqnarray} where $E$ is the $2^N\times 2^N$ identity matrix.
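As an illustration, the decomposition (\ref{RhoC}) and definition (\ref{defI}) are easy to check numerically. The following sketch (a hypothetical helper, with the excitation number of a basis state obtained as its binary bit count) computes all intensities $I_n$ and reproduces $I_0^{min}=2^{-N}$ for the maximally mixed state and $I_0^{max}=1$ for a pure computational-basis state:

```python
import numpy as np

def coherence_intensities(rho):
    """I_n = sum of |rho_ij|^2 over elements whose excitation
    difference (bit count of j minus bit count of i) equals n."""
    d = rho.shape[0]
    N = d.bit_length() - 1                      # d = 2**N
    exc = np.array([bin(i).count("1") for i in range(d)])
    I = {n: 0.0 for n in range(-N, N + 1)}
    for i in range(d):
        for j in range(d):
            I[exc[j] - exc[i]] += abs(rho[i, j]) ** 2
    return I

N = 3
rho_mixed = np.eye(2 ** N) / 2 ** N             # rho = E / 2^N
rho_pure = np.zeros((2 ** N, 2 ** N))
rho_pure[0, 0] = 1.0                            # diag(1, 0, ..., 0)

I_mixed = coherence_intensities(rho_mixed)
I_pure = coherence_intensities(rho_pure)
```

For the maximally mixed state only the zero-order intensity survives and equals $1/2^N$, in agreement with the minimum derived above.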
To find the maximum value of $I_0$ we observe that \begin{eqnarray} \sum_{i=1}^{2^N} \lambda_{0i}^2 =\left(\sum_{i=1}^{2^N} \lambda_{0i}\right)^2 -\sum_{i\neq j} \lambda_{0i}\lambda_{0j}=1-\sum_{i\neq j} \lambda_{0i}\lambda_{0j} \le 1. \end{eqnarray} The value $1$ is achieved if there is only one nonzero eigenvalue, $\lambda_{01}=1$. Thus \begin{eqnarray} I_0^{max}=1, \;\;\rho|_{I_{0}^{max}} = {\mbox{diag}}(1,\underbrace{0,0,\dots0}_{2^N-1}). \end{eqnarray} Now we proceed to the analysis of the $n$-order coherence intensity for a matrix having only three non-zero coherences, of zero and $\pm n$ order, assuming that the zero-order coherence intensity $I_{0}$ is minimal, i.e., \begin{eqnarray}\label{rhoin} \rho=\frac{1}{2^N}E + \tilde \rho^{(n)} = U_n \left(\frac{1}{2^N}E +\Lambda_n\right) U^+_n,\;\;\;\tilde \rho^{(n)}=\rho^{(n)} + \rho^{(-n)}, \end{eqnarray} where $\Lambda_n={\mbox{diag}}(\lambda_{n1},\dots,\lambda_{n2^N})$ and $U_n$ are the matrices of eigenvalues and eigenvectors of $\tilde \rho^{(n)}$. Of course, $U_n$ is also the eigenvector matrix for the whole $\rho$ in this case, and \begin{eqnarray}\label{constr2} \sum_{i=1}^{2^N} \lambda_{ni} =0. \end{eqnarray} Now we prove an interesting property of the eigenvalues for the considered case. {\bf Proposition 1.} The eigenvalues $\lambda_{ni}$ appear in pairs: \begin{eqnarray}\label{pairs} \lambda_{n(2i-1)}= \eta_{ni}, \;\;\lambda_{n(2i)}= -\eta_{ni}, \;\;\;i=1,\dots,2^{N-1}. \end{eqnarray} {\it Proof.} First we show that the odd powers of $\tilde \rho^{(n)}$ are traceless. For instance, let us show that \begin{eqnarray}\label{rr} {\mbox{Tr}}(\tilde \rho^{(n)})^3 = \sum_{i,j,k} \tilde \rho^{(n)}_{ij} \tilde \rho^{(n)}_{jk} \tilde \rho^{(n)}_{ki} = 0. \end{eqnarray} Using the multiplicative basis for the density-matrix elements in the rhs of eq.
(\ref{rr}), we remark that only such elements $\tilde \rho_{ij}$, $\tilde \rho_{jk}$ and $\tilde \rho_{ki}$ are nonzero for which, respectively, $\sum_m i_{m} -\sum_m j_{m} = \pm n$, $\sum_m j_{m} -\sum_m k_{m} = \pm n$ and $\sum_m k_{m} -\sum_m i_{m} = \pm n$. However, summing all these equalities we obtain identically zero in the lhs and either $\pm 3 n$ or $\pm n$ in the rhs. This contradiction means that each term of the sum (\ref{rr}) contains a zero matrix element, i.e., the trace is zero. A similar consideration works for higher odd powers of $\tilde \rho^{(n)}$ (however, the sum $\tilde \rho^{(n)} + \tilde \rho^{(k)}$, $k\neq n$, does not possess this property, i.e., the traces of its odd powers are non-zero in general). Consequently, along with (\ref{constr2}), the following equalities hold: \begin{eqnarray}\label{sumni} \sum_{i=1}^{2^N} \lambda_{ni}^m =0 \;\;{\mbox{for any odd}}\;\;m. \end{eqnarray} Condition (\ref{sumni}) holds for any odd $m$ only if the eigenvalues $\lambda_{ni}$ appear in pairs (\ref{pairs}). {To prove this statement, first we assume that all eigenvalues are non-degenerate and let the eigenvalue $\lambda_{n1}$ be maximal in absolute value. We divide sum (\ref{sumni}) by $\lambda_{n1}^m$: \begin{eqnarray}\label{sumni2} 1+\sum_{i=2}^{2^N} \left(\frac{\lambda_{ni}}{\lambda_{n1}}\right)^m =0, \;\;{\mbox{for odd}}\;\;m. \end{eqnarray} Each term in the sum does not exceed one in absolute value. Now we take the limit $m\to\infty$ in eq.(\ref{sumni2}). It is clear that all the terms with $\left|\frac{\lambda_{ni}}{\lambda_{n1}}\right|<1$ vanish. Since the sum is zero, there must be an eigenvalue $\lambda_{n2}$ such that $\lambda_{n2} = -\lambda_{n1}$; the corresponding term in (\ref{sumni2}) yields $-1$ for odd $m$. Thus the first two terms in sum (\ref{sumni2}) cancel each other, which reduces (\ref{sumni2}) to \begin{eqnarray}\label{sumni3} \sum_{i=3}^{2^N} \lambda_{ni}^m =0, \;\;{\mbox{for odd}}\;\;m. \end{eqnarray} Next, we select the maximal (in absolute value) of the remaining eigenvalues, repeat our arguments and conclude that there are two more eigenvalues equal in absolute value and having opposite signs, and so on. Finally, after $2^{N-1}$ steps we arrive at the conclusion that all eigenvalues appear in pairs (\ref{pairs}). If some eigenvalues are degenerate, the argument is modified as follows. Let the eigenvalue selected at the $(2k+1)$th step be $s$-fold degenerate, i.e., $\lambda_{n(2k+1)} =\dots = \lambda_{n(2k+s)}$. Then sum (\ref{sumni}) takes the form \begin{eqnarray}\label{sumni4} \sum_{i=2k+1}^{2^N} \left(\frac{\lambda_{ni}}{\lambda_{n(2k+1)}}\right)^m = s +\sum_{i=2k+s+1}^{2^N} \left(\frac{\lambda_{ni}}{\lambda_{n(2k+1)}}\right)^m,\;\; s\in{\mathbb N},\;\;s\le N-2k,\;\;{\mbox{odd}} \;\;m. \end{eqnarray} Now, to compensate $s$, we need an $s$-fold degenerate eigenvalue such that $\lambda_{n(2k+s+1)} = \dots = \lambda_{n(2k+2s)} = - \lambda_{n(2k+1)}$. Thus, to any $s$-fold positive eigenvalue there corresponds an $s$-fold negative eigenvalue of the same absolute value. This ends the proof.} $\Box$ Next, since all the eigenvalues of $\rho$ must be non-negative and the density matrix $\rho$ has the structure (\ref{rhoin}), the negative eigenvalues $\eta_{ni}$ cannot exceed $\frac{1}{2^N}$ in absolute value. Therefore, the maximal $n$-order coherence intensity corresponds to the case \begin{eqnarray} \eta_{ni} =\frac{1}{2^N}. \end{eqnarray} Consequently, \begin{eqnarray} I_n^{max}+I_{-n}^{max} = 2 I_n^{max} =\sum_{i=1}^{N_n} \lambda_{ni}^2 =\frac{N_n}{2^{2N}}\le \frac{1}{2^N}, \end{eqnarray} where $I_n^{max}=I_{-n}^{max}$ and $N_n$ is the number of nonzero eigenvalues of $\tilde \rho^{(n)}$. This number equals the rank of $\tilde \rho^{(n)}$, which, in turn, can be found as follows.
{\bf Proposition 2.} The rank of the matrix $\tilde \rho^{(n)}$ can be calculated by the formula \begin{eqnarray}\label{ran} N_n={\mbox{rank}}\;\tilde \rho^{(n)} = \sum_{k=0}^{N} \min \left( \left(N\atop k \right) ,\left(N\atop k+n \right)+\left(N\atop k-n \right) \;\; \right), \end{eqnarray} where the binomial coefficients $\left(N\atop m \right)=0$ for $m<0$ and $m>N$. {\it Proof.} The number of states with $k$ excited spins equals $ \left(N\atop k \right)$. The $\pm n$-order coherence collects the elements of $\rho$ responsible for transitions from the states with $k$ excited spins to the states with $k\pm n$ excited spins. All together, there are $\left(N\atop k+n \right)+\left(N\atop k-n \right)$ such final states. These transitions can be collected into a matrix of $ \left(N\atop k \right)$ columns and $\left(N\atop k+n \right)+\left(N\atop k-n \right)$ rows, whose maximal rank equals $\min \left( \left(N\atop k \right) ,\left(N\atop k+n \right)+\left(N\atop k-n \right)\right)$. The rank of $\tilde \rho^{(n)}$ equals the sum of these ranks over $k=0,\dots,N$, i.e., we obtain formula (\ref{ran}).$\Box$ {\bf Consequence.} For the coherence intensity of the first order ($n=1$), eq.(\ref{ran}) yields \begin{eqnarray}\label{ran1} N_1= \sum_{k=0}^{N} \left(N\atop k \right) = 2^N. \end{eqnarray} {\it Proof.} We have to show that in this case \begin{eqnarray}\label{con} \left(N\atop k \right) \le \left(N\atop k+1 \right)+\left(N\atop k-1 \right),\;\;0\le k \le N. \end{eqnarray} First we consider the case $1<k<N$. Then \begin{eqnarray}\label{intermed1} \left(N\atop k+1 \right)+\left(N\atop k-1 \right) = \left(N\atop k \right) \left(\frac{N-k}{k+1} + \frac{k}{N-k+1}\right). \end{eqnarray} Let us show that the expression inside the parentheses is $\ge 1$.
After simple transformations, this condition takes the form \begin{eqnarray}\label{ge} 3 k^2 - 3 N k +N^2 -1\ge 0, \end{eqnarray} where the lhs is a quadratic polynomial in $k$ with the roots \begin{eqnarray} k_{1,2}=\frac{3 N \pm\sqrt{12-3 N^2}}{6}, \end{eqnarray} which are complex for $N>2$ and coincide for $N=2$. Therefore the parabola $3 k^2 - 3 N k +N^2-1$ does not go below the $k$-axis for $N\ge 2$, and consequently condition (\ref{ge}) holds for $N\ge2$. In our case, the minimal $N$ is 2, which corresponds to the 1-qubit sender and 1-qubit receiver without a transmission line between them. If $k=1$ then, instead of (\ref{intermed1}), we have \begin{eqnarray} \left(N\atop 2 \right)+\left(N\atop 0 \right) = \left(N\atop 2 \right) +1 = \left(N\atop 1 \right) \frac{N-1}{2} +1 \ge \left(N\atop 1 \right),\;\;N\in{\mathbb N} , \end{eqnarray} so condition (\ref{con}) is satisfied. If $k=0$, then $\left(N\atop 0 \right)=1$ and \begin{eqnarray} \left(N\atop 1 \right)+\left(N\atop -1 \right) = \left(N\atop 1 \right) \ge\left(N\atop 0 \right) , \end{eqnarray} so condition (\ref{con}) is again satisfied. The case $k=N$ can be considered in a similar way. $\Box$ Thus, $N_1$ equals the maximal possible rank, $N_1={\mbox{rank}} \;\tilde \rho^{(1)}$, so that $\displaystyle 2 I_1^{max}= \frac{1}{2^{N}}$. Similarly, for the $N$-order coherence there are only two nonzero terms in (\ref{ran}), which give $N_N=2$ and $2 I_N^{max} =\frac{1}{2^{2N-1}}$. For the intensities of the other-order coherences we do not have a similar closed-form result for arbitrary $N$. The maximal coherence intensities of the $n$-order ($n>0$) for $N=2,\dots,5$ are given in Table \ref{Table1}. This table shows the ordering of $I_n^{max}$: \begin{eqnarray}\label{order} I_0^{max} > I_1^{max}> \dots >I_N^{max}.
\end{eqnarray} \begin{table} \begin{tabular}{|c|cc|ccc|cccc|ccccc|} \hline $N$ & \multicolumn{2}{|c|}{2} & \multicolumn{3}{|c|}{3}&\multicolumn{4}{|c|}{4}&\multicolumn{5}{|c|}{5}\cr \hline $n$ & 1 & 2 & 1 & 2 &3 & 1 & 2 &3 &4 &1&2&3&4&5 \cr $N_n$& 4 & 2 & 8 & 4 &2& 16 & 12 &4&2&32&24&14&4&2\cr $2 I_n^{max}$&$\displaystyle \frac{1}{4}$ & $\displaystyle \frac{1}{8}$& $\displaystyle \frac{1}{8}$ & $\displaystyle \frac{1}{16}$ & $\displaystyle \frac{1}{32}$&$\displaystyle \frac{1}{16}$ & $\displaystyle \frac{3}{64}$ & $\displaystyle \frac{1}{64}$&$\displaystyle \frac{1}{128}$ & $\displaystyle \frac{1}{32}$ & $\displaystyle \frac{3}{128}$ & $\displaystyle \frac{7}{512}$&$\displaystyle \frac{1}{256}$ & $\displaystyle \frac{1}{512}$ \cr \hline \end{tabular} \caption{The maximal coherence intensities $I_n^{max}$ of the $n$-order coherence and the rank $N_n$ of $\tilde \rho^{(n)}$ for different numbers of nodes $N$ in a spin system. }\label{Table1} \end{table} The minimum of any non-zero-order coherence intensity is obviously \begin{eqnarray} I_n^{min} = 0. \end{eqnarray} \section{Evolution of coherences} \label{Section:ev} \subsection{Conservation laws} First of all we recall a well-known conservation law which holds for any evolutionary quantum system. {\bf Proposition 3.} The sum of all coherence intensities is conserved: \begin{eqnarray}\label{Lrho2} \frac{d}{d t} \sum_{n=-N}^N I_n = \frac{d}{d t}\sum_{n=-N}^N{\mbox{Tr}} \Big( \rho^{(n)}\rho^{(-n)}\Big) =0. \end{eqnarray} {\it Proof.} Consider the Liouville equation \begin{eqnarray}\label{L} i \frac{d \rho}{dt} =[H,\rho]. \end{eqnarray} Using this equation we have \begin{eqnarray} i{\mbox{Tr}}\frac{d \rho^2}{dt} = {\mbox{Tr}} [H,\rho^2] =0. \end{eqnarray} Therefore, since only the products of submatrices of opposite coherence orders have nonzero traces, \begin{eqnarray} {\mbox{Tr}}\rho^2 = {\mbox{Tr}}\left(\sum_{n=-N}^N \rho^{(n)}\rho^{(-n)}\right) = \sum_{n=-N}^N {\mbox{Tr}} (\rho^{(n)}\rho^{(-n)}) = \sum_{n=-N}^N I_n\equiv {\mbox{const}}, \end{eqnarray} which is equivalent to eq.(\ref{Lrho2}).
$\Box$ In addition, if the system evolves under a Hamiltonian commuting with $I_z$, \begin{eqnarray}\label{comm} [H,I_z]=0, \end{eqnarray} then there is a family of conservation laws specified as follows. {\bf Consequence.} If (\ref{comm}) holds, then each coherence intensity is conserved separately, i.e., \begin{eqnarray}\label{cohI} \frac{dI_n}{dt} = 0,\;\;\; |n|\le N . \end{eqnarray} {\it Proof.} From eq.(\ref{L}) we have \begin{eqnarray} i \rho^{(n)} \frac{d \rho}{dt} + i \frac{d \rho}{dt}\rho^{(-n)} = \rho^{(n)} [ H,\rho] + [H,\rho] \rho^{(-n)} . \end{eqnarray} The trace of this equation reads \begin{eqnarray}\label{Tr0} && {\mbox{Tr}} \left(i \rho^{(n)} \frac{d \rho}{dt} + i \frac{d \rho}{dt}\rho^{(-n)} \right) = i \frac{d}{dt } {\mbox{Tr}} \Big( \rho^{(n)} \rho^{(-n)}\Big) \equiv \\\nonumber && i \frac{d I_n}{dt } = {\mbox{Tr}}\Big(\rho^{(n)} H\rho-\rho H\rho^{(n)}\Big) - {\mbox{Tr}}\Big( \rho H \rho^{(-n)}-\rho^{(-n)} H \rho\Big). \end{eqnarray} We can introduce the factors $e^{i \phi I_z}$ and $e^{-i \phi I_z} $ under the trace, substitute expansion (\ref{RhoC}) for $\rho$ and use commutation relation (\ref{comm}). Then we have \begin{eqnarray}\label{TrTr} && {\mbox{Tr}} \Big(e^{i \phi I_z} (\rho^{(n)} H\rho-\rho H\rho^{(n)} )e^{-i \phi I_z}\Big) -{\mbox{Tr}} \Big(e^{i \phi I_z} (\rho H \rho^{(-n)}-\rho^{(-n)} H \rho)e^{-i \phi I_z}\Big) =\\\nonumber && \sum_{k=-N}^N \left({\mbox{Tr}} \Big( e^{i \phi (n+k) } (\rho^{(n)} H\rho^{(k)} -\rho^{(k)} H\rho^{(n)})\Big) - {\mbox{Tr}}\Big( e^{i \phi (k-n)} (\rho^{(k)} H \rho^{(-n)}-\rho^{(-n)} H \rho^{(k)})\Big) \right). \end{eqnarray} Since this trace must be independent of $\phi$, only the terms with $k=-n$ in the first trace and $k=n$ in the second survive, and these cancel each other. Therefore expression (\ref{TrTr}) is identically zero, and eq.(\ref{Tr0}) yields the set of conservation laws (\ref{cohI}). $\Box$ Equalities (\ref{cohI}) represent the set of conservation laws associated with the dynamics of a spin system under a Hamiltonian $H$ commuting with $I_z$.
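These conservation laws are straightforward to verify numerically. The sketch below (a hypothetical check using the nearest-neighbour XX coupling, which commutes with $I_z$, and a random initial density matrix) propagates $\rho(t)=e^{-iHt}\rho(0)e^{iHt}$ and compares the intensities $I_n$ before and after the evolution:

```python
import numpy as np

def kron_chain(single, site, N):
    """Embed a single-qubit operator at position `site` of an N-qubit chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, single if k == site else np.eye(2))
    return out

def xx_hamiltonian(N, D=1.0):
    """Nearest-neighbour XX Hamiltonian; it commutes with the total I_z."""
    sx = np.array([[0, 1], [1, 0]]) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    H = np.zeros((2 ** N, 2 ** N), dtype=complex)
    for i in range(N - 1):
        H += D * (kron_chain(sx, i, N) @ kron_chain(sx, i + 1, N)
                  + kron_chain(sy, i, N) @ kron_chain(sy, i + 1, N))
    return H

def intensities(rho):
    d = rho.shape[0]
    N = d.bit_length() - 1
    exc = np.array([bin(i).count("1") for i in range(d)])
    return {n: float(sum(abs(rho[i, j]) ** 2
                         for i in range(d) for j in range(d)
                         if exc[j] - exc[i] == n))
            for n in range(-N, N + 1)}

N, t = 3, 1.7
H = xx_hamiltonian(N)
rng = np.random.default_rng(0)
A = rng.normal(size=(2 ** N, 2 ** N)) + 1j * rng.normal(size=(2 ** N, 2 ** N))
rho0 = A @ A.conj().T
rho0 /= np.trace(rho0).real                     # random density matrix

w, U = np.linalg.eigh(H)                        # H is Hermitian
V = U @ np.diag(np.exp(-1j * w * t)) @ U.conj().T   # V = exp(-iHt)
rho_t = V @ rho0 @ V.conj().T

I_before, I_after = intensities(rho0), intensities(rho_t)
```

Every intensity $I_n$ agrees before and after the evolution, within numerical precision, as eq.(\ref{cohI}) demands.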
\subsection{On the map $\rho^{(n)}(0) \to \rho^{(n)}(t)$ } Here we derive an important consequence of conservation laws (\ref{cohI}) describing the dependence of the elements of the evolved matrix $\rho^{(n)}(t)$ on the elements of the initial matrix $\rho^{(n)}(0)$. First of all we notice that a Hamiltonian commuting with $I_z$ has the following block structure: \begin{eqnarray}\label{Hn} H=\sum_{l=0}^N H^{(l)}, \end{eqnarray} where the block $H^{(l)}$ governs the dynamics of states with $l$ excited spins (the $l$-excitation block). Then any matrix $\rho^{(n)}$ can also be represented as \begin{eqnarray} \rho^{(n)}=\sum_{l=0}^{N-n} \rho^{(l,l+n)},\;\; \rho^{(-n)}=\sum_{l=n}^{N} \rho^{(l,l-n)},\;\;n=0,1,\dots,N, \end{eqnarray} where $\rho^{(l,l')}$ is the block of $\rho$ connecting the states with $l$ and $l'$ excited spins. Then, introducing the evolution operators \begin{eqnarray} V(t)=e^{-i H t},\;\;\; V^{(l)}(t)=e^{-i H^{(l)} t}, \end{eqnarray} we can write the evolution of the density matrix as \begin{eqnarray} && \rho(t)=V(t) \rho(0) V^+(t) = \sum_{n=-N}^N V(t) \rho^{(n)}(0) V^+(t) =\\\nonumber && \sum_{n=0}^N \sum_{l=0}^{N-n} V^{(l)}(t) \rho^{(l,l+n)}(0) (V^{(l+n)}(t))^+ + \sum_{n=-N}^{-1} \sum_{l=-n}^{N} V^{(l)}(t) \rho^{(l,l+n)}(0) (V^{(l+n)}(t))^+ .
\end{eqnarray} Since the operators $V^{(l)}$ do not change the excitation number, we can write \begin{eqnarray}\label{In0} && \rho(t) =\sum_{n=-N}^N \rho^{(n)}(t),\\\label{In} && \rho^{(n)}(t) = \sum_{l=0}^{N-n} V^{(l)}(t) \rho^{(l,l+n)}(0) (V^{(l+n)}(t))^+\equiv P^{(n)} \left[t, \rho^{(n)}(0)\right],\\\nonumber && \rho^{(-n)}(t) = (\rho^{(n)}(t))^+ = \sum_{l=n}^{N} V^{(l)}(t) \rho^{(l,l-n)}(0) (V^{(l-n)}(t))^+\equiv P^{(-n)} \left[t, \rho^{(-n)}(0)\right], \end{eqnarray} where we introduce the linear evolution operators $P^{(n)}$ and $P^{(-n)}$ mapping the initial matrices $\rho^{(n)}(0)$ and $\rho^{(-n)}(0)$ into the evolved matrices $\rho^{(n)}(t)$ and $\rho^{(-n)}(t)$ responsible for the same coherence order; i.e., the operator $P^{(n)}$ applied to the matrix of the $n$-order coherence does not generate coherences of a different order. We notice that, in a certain sense, formulas (\ref{In}) are similar to the Liouville representation \cite{Fano}. Hereafter we do not write $t$ in the arguments of $P^{(n)}$ for simplicity. \section{Coherence transfer from sender to receiver} \label{Section:cohtr} \subsection{Coherence transfer as the map $\rho^{(S)}(0)\to \rho^{(R)}(t)$} \label{Section:map} Now we consider the process of the coherence transfer from the $M$-qubit sender ($S$) to the $M$-qubit receiver ($R$) connected by the transmission line ($TL$). The receiver's density matrix reads \begin{eqnarray}\label{rhoR} \rho^{(R)}(t)={\mbox{Tr}}_{/R}\rho(t)= \sum_{n=-M}^M \rho^{(R;n)}(t), \end{eqnarray} where the trace is taken over all the nodes of the quantum system except the receiver, and $\rho^{(R;n)}$ is the submatrix of $\rho^{(R)}$ contributing to the $n$-order coherence.
To proceed further, we consider the tensor product initial state \begin{eqnarray} \rho(0)=\rho^{(S)}(0)\otimes \rho^{(TL,R)}(0). \end{eqnarray} Obviously, \begin{eqnarray} \rho^{(n)}(0) = \sum_{n_1+n_2=n} \rho^{(S;n_1)}(0)\otimes \rho^{(TL,R;n_2)}(0), \end{eqnarray} where $\rho^{(S;n)}$ and $\rho^{(TL,R;n)}$ are the submatrices contributing to the $n$-order coherences of, respectively, $\rho^{(S)}$ and $\rho^{(TL,R)}$. Using expansion (\ref{In0}) and the operators $P^{(n)}$ defined in (\ref{In}) we can write \begin{eqnarray} \rho^{(R)} = {\mbox{Tr}}_{/R} \sum_{n=-N}^N P^{(n)} \left[\rho^{(n)}(0)\right]= {\mbox{Tr}}_{/R} \sum_{n=-N}^N \sum_{n_1+n_2=n} P^{(n)} \left[\rho^{(S;n_1)}(0)\otimes \rho^{(TL,R;n_2)}(0)\right]. \end{eqnarray} Next we need the following proposition. {\bf Proposition 4.} The partial trace of the matrix $\rho$ does not mix coherences of different orders and, in addition, \begin{eqnarray}\label{PT} {\mbox{Tr}}_{/R} \rho^{(n)} = 0,\;\; |n|>M. \end{eqnarray} {\it Proof.} We split the whole multiplicative basis of the quantum system into the $2^M$-dimensional sub-basis $B^{(R)}$ of the receiver's states and the $2^{N-M}$-dimensional sub-basis $B^{(S,TL)}$ of the states of the subsystem consisting of the sender and the transmission line, i.e., $|i\rangle = |i^{S,TL}\rangle \otimes |i^R\rangle $. Then the elements of the density matrix $\rho$ are enumerated by the double indices $i=(i^{S,TL},i^R)$ and $j=(j^{S,TL},j^R)$, i.e., \begin{eqnarray} \rho_{ij}=\rho_{(i^{S,TL},i^R),(j^{S,TL},j^R)}. \end{eqnarray} Then eq.(\ref{rhoR}) written in components reads \begin{eqnarray} \rho^{(R)}_{i^Rj^R} = \left({\mbox{Tr}}_{/R} \rho\right)_{i^Rj^R} = \sum_{i^{S,TL}} \rho_{(i^{S,TL},i^R),(i^{S,TL},j^R)}. \end{eqnarray} Therefore the coherences in the matrix $\rho^{(R)}$ are formed only by the transitions in the subspace spanned by $B^{(R)}$, and the matrix $\rho^{(R;n)}$ forming the $n$-order coherence of the receiver consists of elements contributing to the $n$-order coherence of the whole quantum system: for the surviving elements the excitation-number difference is determined by the receiver's indices alone.
Consequently, the partial trace does not mix coherences. Since the receiver is an $M$-qubit subsystem, it can form only coherences of order $n$ with $|n|\le M$, which justifies condition (\ref{PT}). $\Box$ This proposition allows us to conclude that \begin{eqnarray}\label{Rn} \rho^{(R;n)} = {\mbox{Tr}}_{/R} \sum_{n_1+n_2=n} P^{(n)}\left[ \rho^{(S;n_1)}(0)\otimes \rho^{(TL,R;n_2)}(0)\right],\;\; |n|\le M. \end{eqnarray} Formula (\ref{Rn}) shows that, in general, the coherences of all orders of $\rho^{(S)}(0)$ are mixed in any particular-order coherence of the receiver's density matrix $\rho^{(R)}$. However, this is not the case if the initial state $\rho^{(TL,R)}(0)$ consists of elements contributing only to the zero-order coherence. Then (\ref{Rn}) takes the form \begin{eqnarray}\label{Rn2} \rho^{(R;n)} = {\mbox{Tr}}_{/R} \Big( P^{(n)} \Big[\rho^{(S;n)}(0)\otimes \rho^{(TL,R;0)}(0)\Big]\Big),\;\; |n|\le M. \end{eqnarray} In this case the elements contributing to the $n$-order coherence of $\rho^{(S)}(0)$ contribute only to the $n$-order coherence of $\rho^{(R)}(t)$. \subsection{Restoring the sender's state at the receiver's side} \label{Section:selecting} In Sec.\ref{Section:map} we showed that, although the coherences of the sender's initial state are properly separated in the receiver's state, the elements contributing to a particular $n$-order coherence of $\rho^{(S)}(0)$ are mixed in $\rho^{(R;n)}$. We would like to separate the elements of $\rho^{(S)}(0)$ in $\rho^{(R)}(t)$, so that, in the ideal case, \begin{eqnarray}\label{rhoij} &&\rho^R_{ij}(t) = f_{ij}(t) \rho^S_{ij},\;\;(i,j)\neq (2^M,2^M),\\\nonumber &&\rho^R_{2^M2^M}(t) = 1- \sum_{i=1}^{2^M-1} f_{ii}(t) \rho^S_{ii}. \end{eqnarray} We refer to a state with elements satisfying (\ref{rhoij}) as a completely restored state. Relation (\ref{rhoij}) might not be realizable for all elements of $\rho^{(R)}$; in other words, the complete restoring of the sender's state is impossible in the general case.
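Proposition 4 and condition (\ref{PT}) admit a simple numerical illustration. In the sketch below (a hypothetical check with $N=3$ chain qubits and an $M=1$ receiver taken as the last qubit), the order-$n$ part of a random density matrix is traced over the first $N-M$ qubits; the result is the order-$n$ part of the reduced matrix for $|n|\le M$ and zero for $|n|>M$:

```python
import numpy as np

def coherence_part(rho, n):
    """Submatrix rho^(n): keep elements whose excitation difference is n."""
    d = rho.shape[0]
    exc = np.array([bin(i).count("1") for i in range(d)])
    mask = (exc[None, :] - exc[:, None]) == n
    return np.where(mask, rho, 0)

def ptrace_keep_last(rho, keep):
    """Partial trace over all qubits except the last `keep` ones."""
    d_env = rho.shape[0] // 2 ** keep
    r = rho.reshape(d_env, 2 ** keep, d_env, 2 ** keep)
    return np.einsum("ajak->jk", r)

N, M = 3, 1
rng = np.random.default_rng(1)
A = rng.normal(size=(2 ** N, 2 ** N)) + 1j * rng.normal(size=(2 ** N, 2 ** N))
rho = A @ A.conj().T
rho /= np.trace(rho).real                       # random density matrix

# reduce each order-n submatrix of the whole system to the receiver
reduced = {n: ptrace_keep_last(coherence_part(rho, n), M)
           for n in range(-N, N + 1)}
```

The reduced submatrices of orders $|n|>M$ vanish, and the surviving ones carry a single, unmixed coherence order, exactly as the proposition states.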
However, a simple case of complete restoring is the transfer of a one-qubit sender state to a one-qubit receiver, because in this case there is only one element $\rho^S_{12}$ of $\rho^S$ contributing to the first-order coherence of $\rho^R$ and one independent element $\rho^S_{11}$ contributing to the zero-order coherence. In addition, we notice that the highest-order coherences have the form (\ref{rhoij}) in the general case, because there is only one element of the density matrix contributing to the $\pm M$-order coherence. Regarding the other coherences, we can try to restore at least some of the elements using a local unitary transformation at the receiver's side. \subsubsection{Unitary transformation of the extended receiver as a state-restoring tool} \label{Section:U} Thus we can use a unitary transformation at the receiver's side to (partially) restore the initial sender's state $\rho^{(S)}(0)$ in the density matrix $\rho^{(R)}(t)$ at some time instant $t$ in the sense of definition (\ref{rhoij}). It is simple to estimate that the number of parameters in the unitary transformation $U^{(R)}$ of the receiver itself is not enough to restore all the elements of the density matrix $\rho^{(S)}(0)$. To make the complete restoring possible we must increase the number of parameters in the unitary transformation by extending the receiver to $M^{(ext)}>M$ nodes and using the transformation $U^{(ext)}$ of this extended receiver to restore the state $\rho^{(S)}(0)$. Thus we consider the $M^{(ext)}$-qubit extended receiver and require that the above-mentioned unitary transformation not mix different submatrices $\rho^{(n)}$. This is possible if $U^{(ext)}$ commutes with the $z$-projection of the total spin momentum of the extended receiver.
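This commutation requirement is easy to probe combinatorially: a product of single-spin operators $E$, $I_z$, $I^+$, $I^-$ commutes with the total $I_z$ exactly when it contains equal numbers of raising and lowering factors. The hypothetical sketch below counts such products for subsystems of one to four qubits, reproducing the basis dimensions 2, 6, 20 and 70 listed below; incidentally, the count for $n$ qubits equals the central binomial coefficient $\binom{2n}{n}$.

```python
from itertools import product
from math import comb

def count_iz_commuting(n):
    """Count n-fold products of {E, Iz, I+, I-} that commute with the
    total Iz, i.e. contain as many raising as lowering factors."""
    return sum(1 for ops in product("EZ+-", repeat=n)
               if ops.count("+") == ops.count("-"))

counts = [count_iz_commuting(n) for n in (1, 2, 3, 4)]
```

The brute-force enumeration and the binomial formula agree, which gives a quick consistency check on the dimensions of the restricted bases.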
In this case the matrix $\rho^R$ can be obtained from $\rho$ in three steps: (i) reducing $\rho(t)$ to the density matrix $\rho^{R_{ext}}(t)$ of the extended receiver, (ii) applying the restoring unitary transformation $U^{(ext)}$, and (iii) reducing the resulting density matrix $U^{(ext)}\rho^{R_{ext}}(t)(U^{(ext)})^+$ to $\rho^{R}$. To find the general form of the unitary transformation we consider this transformation in the basis constructed from the matrices $I^{\pm}_j$ and $I_{zj}$. This basis reads: for the one-qubit subsystem (the $i$th qubit of the whole quantum system), \begin{eqnarray}\label{B1} B^{(i)}: E, I_{zi}, I^+_i, I^-_i; \end{eqnarray} for the two-qubit subsystem (the $i$th and $j$th qubits), \begin{eqnarray}\label{B2} B^{(ij)}=B^{(i)}\otimes B^{(j)}; \end{eqnarray} for the three-qubit subsystem (the $i$th, $j$th and $k$th qubits), \begin{eqnarray}\label{B3} B^{(ijk)}=B^{(ij)}\otimes B^{(k)}; \end{eqnarray} for the four-qubit subsystem (the $i$th, $j$th, $k$th and $m$th qubits), \begin{eqnarray}\label{B4} B^{(ijkm)}=B^{(ij)}\otimes B^{(km)}, \end{eqnarray} and so on. The elements of the basis commuting with $I_z$ are formed by the pairs $I^+_p I^-_q$ and by the diagonal matrices $I_{zk}$, $E$. Thus, the one-qubit basis (\ref{B1}) involves two elements commuting with $I_z$: \begin{eqnarray}\label{B1U} B^{(C;i)}: E, I_{zi}. \end{eqnarray} The two-qubit basis (\ref{B2}) involves $6$ such elements: \begin{eqnarray}\label{B2U} B^{(C;ij)}: E, \;\;I_{zi},\;\; I_{zj}, \;\;I_{zi} I_{zj}, \;\;I^+_i I^-_j,\;\; I^+_j I^-_i. \end{eqnarray} The three-qubit basis (\ref{B3}) involves 20 such elements: \begin{eqnarray}\label{B3U} B^{(C;ijk)}: E,\;\; I_{zp},\;\; I_{zp} I_{zs},\;\; I_{zi} I_{zj}I_{zk},\;\; I^+_p I^-_s,\;\;I^+_p I^-_s I_{zr}, \;\; p,s,r\in \{i,j,k\}, \;r\neq p \neq s .
\end{eqnarray} The four-qubit basis (\ref{B4}) involves 70 such elements: \begin{eqnarray}\label{B4U} B^{(C;ijkm)} &:& E, \;\;I_{zp}, \;\; I_{zp} I_{zs},\;\; I_{zp} I_{zs}I_{zr},\;\;I_{zi} I_{zj} I_{zk} I_{zm},\;\; I^+_p I^-_s,\;\;I^+_p I^-_s I_{zr},\;\;I^+_p I^-_s I_{zr} I_{zq}, \\\nonumber && I^+_p I^-_s I^+_r I^-_q,\;\;p,s,r,q \in \{i,j,k,m\},\;\; p\neq s \neq r \neq q , \end{eqnarray} and so on. However, there is a common phase which cannot affect the elements of the density matrix. Therefore, the number of parameters in the above unitary transformations which can affect the density-matrix elements is less than the dimensionality of the bases (\ref{B1U})--(\ref{B4U}) by one. \section{Particular model} \label{Section:model} As a particular model, we consider a spin-1/2 chain with a two-qubit sender and receiver and the tensor product initial state \begin{eqnarray}\label{in2} \rho(0)=\rho^S(0) \otimes \rho^{TL,R}(0), \end{eqnarray} where $\rho^S(0)$ is an arbitrary initial state of the sender and $\rho^{TL,R}(0)$ is the initial thermal equilibrium state of the transmission line and receiver, \begin{eqnarray} \label{inTLB} \rho^{TL,R} &=&\frac{e^{bI_{z}}}{Z},\;\;Z=\left(2 \cosh\frac{b}{2}\right)^{N-2}, \end{eqnarray} where $b=\frac{1}{k T}$, $T$ is the temperature and $k$ is the Boltzmann constant. Thus, both $\rho^{(S)}$ and $\rho^{(R)}$ are $4\times 4$ matrices. Let the evolution of the spin chain be governed by the nearest-neighbor $XX$-Hamiltonian \cite{Mattis} \begin{eqnarray}\label{XY} H=\sum_{i=1}^{N-1} D (I_{ix}I_{(i+1)x} +I_{iy}I_{(i+1)y}), \end{eqnarray} where $D$ is a coupling constant. Obviously, $[H,I_z]=0$. Using the Jordan-Wigner transformation \cite{JW,CG} we can derive an explicit formula for the density matrix (\ref{rhoR}) of the two-qubit receiver, but we omit the details of this derivation for the sake of brevity. To proceed further, let us write out formulas (\ref{Rn}) for each particular coherence as follows.
For the zero order coherence we have \begin{eqnarray}\label{coh0} \rho^{(R;0)}_{ij}&=& \alpha_{ij;11} \rho^S_{11} + \alpha_{ij;22} \rho^S_{22} + \alpha_{ij;33} \rho^S_{33} + \alpha_{ij;44} \rho^S_{44} + \alpha_{ij;23} \rho^S_{23} + \alpha_{ij;32} (\rho^S_{23})^* ,\\\nonumber &&(i,j)= (1,1),(2,2),(3,3),(2,3),\\\nonumber \rho^{(R;0)}_{44} &=& 1- \sum_{i=1}^3 \rho^R_{ii},\;\;\alpha_{ii;32}=\alpha_{ii;23}^*. \end{eqnarray} Here there are $12$ real parameters $\alpha_{ii;jj}$, $i=1,2,3$, $j=1,2,3,4$, and $9$ complex parameters $\alpha_{ii;23}$, $i=1,2,3$, $\alpha_{23;ii}$, $i=1,2,3,4$, $\alpha_{23;23}$ and $\alpha_{23;32}$, i.e., 30 real parameters. For the first order coherence, \begin{eqnarray}\label{coh1} (\rho^R_1)_{ij}= \alpha_{ij;12} \rho^S_{12} + \alpha_{ij;13} \rho^S_{13} + \alpha_{ij;24} \rho^S_{24} + \alpha_{ij;34} \rho^S_{34},\;\; (i,j)= (1,2),(1,3),(2,4),(3,4), \end{eqnarray} there are 16 complex parameters, or 32 real ones. Finally, for the second order coherence we have \begin{eqnarray}\label{coh2} \rho^R_{14}= \alpha_{14;14} \rho^S_{14}, \end{eqnarray} with one complex parameter (two real ones). In all these formulas, the $\alpha_{ij;nm}$ are defined by the interaction Hamiltonian and depend on the time $t$. \subsection{Simple example of $\rho^{(S;1)}$-restoring} We see that there are 64 real parameters we would like to adjust in eqs.~(\ref{coh0})-(\ref{coh2}). For complete restoring of an arbitrary state we need an extended receiver of $M=4$ nodes, so that the number of effective parameters in the unitary transformation described in Sec.~\ref{Section:U} would be 69. However, for the sake of simplicity, here we use the unitary transformation of the two-qubit receiver to perform a complete restoring of the $\pm1$-order coherence matrices $\rho^{(S;\pm 1)}(0)$ of a special form, namely \begin{eqnarray}\label{inS} \rho^{(S;1)} + \rho^{(S;-1)} = \left( \begin{array}{cccc} 0&a&a&0\cr a^*&0&0&a\cr a^*&0&0&0\cr 0&a^*&0&0 \end{array} \right).
\end{eqnarray} The unitary transformation constructed on the basis (\ref{B2U}) reads: \begin{eqnarray}\label{U2q} U=e^{i \phi_1 ( I_1^+I_2^- + I_1^-I_2^+)} e^{ \phi_2 ( I_1^+I_2^- - I_1^-I_2^+)} e^{i \Phi}, \end{eqnarray} where $\Phi={\mbox{diag}}(\phi_3,\dots,\phi_6)$ is a diagonal matrix and $\phi_i$, $i=1,\dots,6$, are arbitrary real parameters. Eqs. (\ref{coh1}) reduce to \begin{eqnarray}\label{coh1ex} (\rho^R_1)_{ij}=\alpha_{ij} a ,\;\; \alpha_{ij}= \alpha_{ij;12} + \alpha_{ij;13} + \alpha_{ij;24},\;\; (i,j)= (1,2),(1,3),(2,4),(3,4). \end{eqnarray} We consider a chain of $N=20$ nodes and set $b=10$. The time instant for the state registration at the receiver is chosen so as to maximize the maximal-order coherence intensity (the second order in this model), because this intensity has the least maximal possible value according to (\ref{order}). This time instant was found numerically; it equals $D t=24.407$. Next, using the parameters $\phi_i$ of the unitary transformation (\ref{U2q}), we can set the coefficient $\alpha_{34}$ to zero and thus obtain the completely restored matrices $\rho^{(R;\pm1)}$ in the form \begin{eqnarray}\label{Rt} \rho^{(R;1)} + \rho^{(R;-1)} = \left( \begin{array}{cccc} 0&\alpha_{12} a&\alpha_{13}a&0\cr \alpha_{12}^*a^*&0&0&\alpha_{24}a\cr \alpha_{13}^*a^*&0&0&0\cr 0&\alpha_{24}^*a^*&0&0 \end{array} \right). \end{eqnarray} The appropriate values of the parameters $\phi_i$ are the following: \begin{eqnarray} \phi_1=2.41811,\;\;\phi_2=1.57113,\;\;\phi_k=0,\;\;k=3,\dots,6. \end{eqnarray} With these values, \begin{eqnarray} \alpha_{12}=0.00021 + 0.63897 i,\;\;\;\alpha_{13}=0.00010 - 0.30585 i,\;\;\alpha_{24}=0.00010-0.30582 i . \end{eqnarray} Thus, using the unitary transformation of the receiver, we restore the sender's initial matrices $\rho^{(S;\pm1)}(0)$ in the sense of definition (\ref{rhoij}). This result holds for arbitrary admissible initial matrices $\rho^{(S;0)}(0)$ and $\rho^{(S;2)}(0)$.
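As a sanity check on (\ref{U2q}), the transformation can be assembled numerically for the quoted parameter values. The sketch below (not from the original text; Python with numpy and scipy) confirms that $U$ is unitary. Note that the second factor carries no imaginary unit: its generator $I_1^+I_2^- - I_1^-I_2^+$ is anti-Hermitian, so its exponential with a real coefficient is already unitary.

```python
import numpy as np
from scipy.linalg import expm

# Two-qubit raising/lowering operators, basis |00>, |01>, |10>, |11>.
Ip = np.array([[0, 1], [0, 0]], dtype=complex)  # I^+
Im = Ip.conj().T                                 # I^-

I1p_I2m = np.kron(Ip, Im)
I1m_I2p = np.kron(Im, Ip)

def restoring_unitary(phi):
    """U of eq. (U2q): two exchange-type factors and a diagonal phase."""
    phi1, phi2 = phi[0], phi[1]
    Phi = np.diag(phi[2:6])
    return (expm(1j * phi1 * (I1p_I2m + I1m_I2p))
            @ expm(phi2 * (I1p_I2m - I1m_I2p))
            @ expm(1j * Phi))

# The parameter values found in the text.
U = restoring_unitary([2.41811, 1.57113, 0, 0, 0, 0])
print(np.max(np.abs(U @ U.conj().T - np.eye(4))))  # ~0: U is unitary
```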
\section{Conclusion} \label{Section:conclusion} The MQ coherence intensities are characteristics of a density matrix which can be measured in MQ NMR experiments. We show that the coherences evolve independently provided the Hamiltonian governing the spin dynamics conserves the total $z$-projection of the spin momentum. This is an important property of quantum coherences which allows us to store them, in the sense that the family of density-matrix elements contributing to a particular-order coherence does not mix with other elements during the evolution. In addition, if we connect the spin system with formed coherences (called the sender in this case) to the transmission line and receiver, we can transfer these coherences without mixing them, provided the initial state $\rho^{(TL,R)}(0)$ contains only the zero-order coherence. We also describe a restoring method which allows one (at least partially) to reconstruct the sender's initial state. This state-restoring is based on a unitary transformation at the receiver side involving, in general, the so-called extended receiver, with the purpose of enlarging the number of parameters in the unitary transformation. Partial state-restoring of the two-qubit receiver via a unitary transformation on it is performed as the simplest example. Examples of more accurate restoring involving the extended receiver require substantial technical work and will be presented in a specialized paper. This work is partially supported by the Program of RAS ``Element base of quantum computers'' and by the Russian Foundation for Basic Research, grants No.15-07-07928 and 16-03-00056. \end{document}
\begin{document} \frontmatter \title{Foundations of Quantum Decoherence} \ClearShipoutPicture \disscopyright I gratefully acknowledge the loving help and support of my parents, John and Clare Gamble, and of my fianc\'ee, Katherine Kelley. I extend sincere thanks to my advisors, John Lindner and Derek Newland, for their long hours and dedication to this project. I also thank Jon Breitenbucher for painstakingly assembling and maintaining this \LaTeX{} template, which made the writing process significantly more enjoyable than it would have been otherwise. Finally, I am grateful to The Gallows program for providing me an environment in which I could grow, learn, and succeed. \pagebreak \begin{abstract} The conventional interpretation of quantum mechanics, though it permits a correspondence to classical physics, leaves the exact mechanism of transition unclear. Though this was only of philosophical importance throughout the twentieth century, over the past decade new technological developments, such as quantum computing, require a more thorough understanding of not just the \textit{result} of quantum emergence, but also its \textit{mechanism}. Quantum decoherence theory is the model that developed out of necessity to deal with the quantum-classical transition explicitly, and without external observers. In this thesis, we present a self-contained and rigorously argued full derivation of the master equation for quantum Brownian motion, one of the key results in quantum decoherence theory. We accomplish this from a foundational perspective, only assuming a few basic axioms of quantum mechanics and deriving their consequences. We then consider a physical example of the master equation and show that quantum decoherence successfully represents the transition from a quantum to classical system.
\end{abstract} \if@xetex \cleardoublepage \phantomsection \addcontentsline{toc}{chapter}{Contents} \else \ifpdf \cleardoublepage \phantomsection \addcontentsline{toc}{chapter}{Contents} \else \cleardoublepage \addcontentsline{toc}{chapter}{Contents} \fi \fi \tableofcontents \listoffigures \chapter{Preface}\label{chap:intro} \addcontentsline{toc}{chapter}{Preface} \lettrine[lines=2, lhang=0.33, loversize=0.1]{T}his thesis is designed to serve a dual purpose. First, it is a stand-alone treatment of contemporary decoherence theory, accomplishing this mostly within a rigorous framework more detailed than is used in typical undergraduate quantum mechanics courses. It assumes no prior knowledge of quantum mechanics, although a basic understanding obtained through a standard introductory quantum mechanics or modern physics course would be helpful for depth of meaning. Although the mathematics used is introduced thoroughly in chapter \ref{chap:math_background}, the linear algebra can get quite complicated. Readers who have not had a formal course in linear algebra would benefit from having ref. \cite{poole} on-hand during some components, especially chapters \ref{chap:quantum_formal} and \ref{chap:dynamics}. The bulk of the work specifically related to decoherence is found in the last three chapters, and readers familiar with quantum mechanics desiring a better grasp of decoherence theory should proceed to the discussion of quantum mechanics in phase-space, found in chapter \ref{chap:wigner}. Second, this thesis is an introduction to the rigorous study of the foundations of quantum mechanics, and is again stand-alone in this respect. It develops the bulk of quantum mechanics from several standard postulates and the invariance of physics under the Galilei group\index{Group!Galilei} of transformations, outlined in sections \ref{sec:posts} and \ref{sec:galgroup}, respectively. 
Readers interested in this part of the thesis should study the first three chapters, where many fundamental results of quantum mechanics are developed. We now begin with a motivating discussion of quantum decoherence. One of the fundamental issues in physics today is the emergence of the familiar macroscopic physics that governs everyday objects from the strange, underlying microscopic laws for the motion of atoms and molecules. This collection of laws governing small bodies is called quantum mechanics, and operates entirely differently from classical Newtonian physics. However, since all macroscopic objects are made from microscopic particles, which obey quantum mechanics, there should be some way to link the two worlds: the macro and the micro. The conventional interpretation of quantum mechanics answers questions about the transition from quantum to classical mechanics, known as quantum emergence\index{Quantum Emergence}, through a special \textit{measurement}\index{Measurement} process, which is distinct from the other rules of quantum mechanics \cite{griffiths}.\footnote{In fact, the motion of a system not being measured is considered \textit{unitary}, and hence reversible, while the measurement process is conventionally considered discontinuous, and hence irreversible. So, not only are they treated separately, but they are considered fundamentally different processes!} However, when this measurement concept is used, problems arise. The most famous of these problems is known as Schr\"odinger's cat\index{Schr\"odinger's cat}, which asks about the nature of measurement through a paradox \cite{omnes}. The problem creates ambiguity about \begin{enumerate} \item when a measurement occurs, and \item who (or what) performs it.\end{enumerate} When all is said and done, the conventional interpretation leaves a bitter taste in the mouths of many physicists; what they want is a theory of quantum measurement that does not depend on subjectively defined observation.
If no external observers are permitted, how can classical mechanics ever emerge from quantum mechanics? The answer is that complex systems, in essence, measure themselves, which leads us to decoherence. \section{Decoherence and the Measurement Problem} \begin{figure}\label{fig:informationexchange} \end{figure} Quantum decoherence theory is a quantitative model of how this transition from quantum to classical mechanics occurs, which involves systems performing local measurements on themselves. More precisely, we divide our universe into two pieces: a simple system component, which is treated quantum mechanically, and a complex environmental component, which is treated statistically.\footnote{The words statistical and classical are being tossed around here a bit. What we mean is statistical in the thermodynamic sense, for example probability distributions prepared by random coin-tosses. These random, statistical distributions are contrasted against quantum states, which may \textit{appear} to be random when observed, but actually carry quantum interference information.} Since the environment is treated statistically, it obeys the rules of classical (statistical) mechanics, and we call it a \textbf{mixture}\index{Impure State} \cite{ballentine}. When the environment is coupled to the system, any quantum mechanical information that the system transfers to the environment is effectively lost, hence the system becomes a mixture over time, as indicated in figure \ref{fig:informationexchange}. In the macroscopic world, ordinary forces are huge compared to the subtle effects of quantum mechanics, and thus large systems are very difficult to isolate from their environments. Hence, the time it takes large objects to turn to mixtures, called the \textbf{decoherence time}\index{Decoherence Time}, is very short. It is important to keep in mind that decoherence is inherently local. 
That is, if we consider our entire universe, the system plus the environment, quantum mechanically, classical effects do not emerge. Rather, we need to ``focus'' on a particular component, and throw away the quantum mechanical information having to do with the environment \cite{omnes}. In order to clarify this notion of decoherence, we examine the following unpublished example originally devised by Herbert J. Bernstein\index{Bernstein, H. J.} \cite{greenstein}. To start, consider an electron gun, as shown in figure \ref{bernsteindevice}. Electrons are an example of a two-state system, and as such they possess a quantum-mechanical property called spin \cite{nielsenchuang}. As we develop in detail later in section \ref{sec:quantumsup}, the spin of a two-state system\index{Two-State System} can be represented as a vector pointing on the unit two-sphere. Further, any possible spin can be formed as a linear combination of a spin pointing up in the $\hat z$ direction, and a spin pointing down in the $-\hat z$ direction.\footnote{In linear algebra terminology, we call the spin vectors pointing in $+ \hat z$ and $- \hat z$ a \textbf{basis} for the linear vector space of all possible states. We deal with bases precisely in section \ref{sec:linearvecspace}.} We suppose that our electron gun fires electrons of random spin, and then we use some angular control device to fix the electron's spin to some angle (that we set) in the $xy$-plane. Then, we use a Stern-Gerlach\index{Stern-Gerlach Analyzer} analyzer adjusted to some angle to measure the resulting electron. The Stern-Gerlach analyzer measures how close its control angle is to the spins of the electrons in the beam passing through it \cite{greenstein}. It reads out a number on a digital display, with $1$ corresponding to perfect alignment and $0$ corresponding to anti-alignment. 
\begin{figure}\label{bernsteindevice} \end{figure} So far, we can always use the analyzer to measure the quantum-mechanical spin of each electron in our beam. We simply turn the analyzer's angular control until its digital display reads one, and then read the value of the angular control. Similarly, if we were to turn the analyzer's control to the angle opposite from the beam's angle, the display would read zero. The fact that these two special angles always exist is fundamental to quantum mechanics, resulting from a purely non-classical phenomenon called \textbf{superposition}.\index{Superposition}\footnote{The precise nature of quantum superposition is rather subtle, and we discuss it at length in section \ref{sec:quantumsup}.} We next insert another component into the path of the electron beam. By turning on a switch, we activate a second device that adjusts the angle of our beam in the $xy$-plane by adding $\theta$. The trick is that this device is actually attached to a modified roulette wheel, which we spin every time an electron passes. The roulette wheel is labeled in radians, and determines the value of $\theta$ \cite{greenstein}. We now frantically spin the angular control attached to our analyzer, attempting to find the initial angle of our electron beam. However, much to our surprise, the display appears to be stuck on $0.5$ \cite{greenstein}. This reading turns out to be no mistake, since the angles of the electrons that the analyzer is measuring are now randomly distributed (thanks to the randomness of the roulette wheel) throughout the $xy$-plane. No matter how steadfastly we attempt to measure the spin of the electrons in our beam, we cannot while the roulette wheel is active. Essentially, the roulette wheel is absorbing the spin information of the electrons, as we apparently no longer have access to it. This absorption of quantum information is the exact process that the environment performs in quantum decoherence theory. 
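Bernstein's roulette-wheel setup is easy to simulate. The sketch below (not part of the original text) models the analyzer display as the standard spin-1/2 overlap probability $\cos^2(\Delta/2)$ for an in-plane angle difference $\Delta$, a modeling assumption consistent with the described readings of $1$ for alignment and $0$ for anti-alignment. Once the wheel randomizes the beam, the average reading sticks near $0.5$ regardless of the analyzer angle:

```python
import math
import random

def analyzer_reading(beam_angle, analyzer_angle):
    """Model the display as the spin-1/2 overlap probability
    cos^2(delta/2) for the in-plane angle difference delta."""
    delta = beam_angle - analyzer_angle
    return math.cos(delta / 2) ** 2

random.seed(1)
beam = analyzer = 0.7                      # perfectly aligned beam
print(analyzer_reading(beam, analyzer))    # display reads 1

# Switch on the roulette wheel: each electron's angle is shifted randomly.
readings = [analyzer_reading(beam + random.uniform(0, 2 * math.pi), analyzer)
            for _ in range(100_000)]
print(round(sum(readings) / len(readings), 2))  # stuck near 0.5
```

Analytically the same follows from $\cos^2(\Delta/2) = (1 + \cos\Delta)/2$, whose average over a full turn is $1/2$.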
In both cases, the information is lost due to statistical randomness, which forces a quantum system to be classically random as well. The roulette wheel in this simplified example, just like the environment in reality, is blocking our access to quantum properties of a system. In chapter \ref{chap:applications}, we return to a more physical example of decoherence using the quantitative tools we develop in this thesis. First, we need to discuss the mathematical underpinnings of quantum mechanics. \section{Notational Conventions} Throughout this thesis, we adopt a variety of notational conventions, some more common than others. Here, we list them for clarity. \begin{itemize} \item The symbol $(\equiv)$ will always be used in the case of a definition. It indicates that the equality does not follow from previous work. The $(=)$ sign indicates equality that logically follows from previous work. \item An integral symbol without bounds, \[\left( \int \right), \] is a definite integral from $- \infty$ to $+ \infty$, rather than the antiderivative, unless otherwise noted. \item Usually, the differential term in an integrand will be grouped with the integral symbol and separated by $(\cdot)$. This is standard multiplication, and is only included for notational clarity. \item Vectors are always given in Dirac kets, $\left( \, \left| \cdot \right> \, \right)$, operators on abstract vector or Hilbert spaces are always given with hats, $\left( \, \hat{\cdot}\, \right)$, linear functionals over vector spaces are given in Dirac bras, $\left( \, \left< \cdot \right| \, \right) $, and operators on function spaces are given with checks, $\left( \, \check{\cdot} \, \right)$. \item Both partial and total derivatives are given either in standard Leibniz notation or in the contracted form $d_x$, where \[ d_x \equiv \frac{d}{dx}. \] \item The symbol $(\leftrightarrow )$ is used to denote a special representation of a particular structure. Its precise definition is made clear by context.
\item The symbol $(*)$ is used to denote the complex conjugate of a complex number. \end{itemize} \mainmatter \chapter{Mathematical background}\label{chap:math_background} \lettrine[lines=2, lhang=0.33, loversize=0.1]{B}efore we begin our discussion of quantum mechanics, we take this chapter to review the mathematical concepts that might be unfamiliar to the average undergraduate physics major wishing a more detailed understanding of quantum mechanics. We begin with a discussion of linear vector spaces and linear operators. We next generalize these basic concepts to product spaces, and finally consider spaces of infinite dimension. Quantum mechanics is much more abstract than other areas of physics, such as classical mechanics, and so the immediate utility of the techniques introduced here is not evident. However, for the treatment in this thesis to be mostly self-contained, we proceed slowly and carefully. \section{Linear Vector Spaces}\label{sec:linearvecspace} In this section, we introduce linear vector spaces, which will be the stages for all of our subsequent work.\footnote{Well, actually we will work in a triplet of abstract spaces called a \textbf{rigged Hilbert space}, which is a special type of linear vector space. However, most textbooks on quantum mechanics, and even most physicists, do not bother much with the distinction. We will look at this issue in more detail in section \ref{sec:infdim}.} We begin with the elementary topic of vector spaces \cite{poole}. \begin{boxeddefn}{Vector space\index{Linear!Vector Space}}{defn:vecspace} Let $F$ be a field with addition $(+)$ and multiplication $(\cdot)$. A set $V$ is a \textbf{vector space} under the operation $(\oplus)$ over $F$ if for all $\left| u \right> , \left| v \right>, \left| w \right> \in V$ and $a,b \in F$: \begin{enumerate} \item $\left| u \right> \oplus \left| v \right> = \left| v \right> \oplus \left| u \right> $.
\item $(\left| u \right> \oplus \left| v \right> ) \oplus \left| w \right> = \left| u \right> \oplus (\left| v \right> \oplus \left| w \right> )$. \item There exists $\left| 0 \right> \in V$ such that $\left| 0 \right> \oplus \left| u \right> = \left| u \right>$. \item There exists $- \left| u \right> \in V$ such that $ - \left| u \right> \oplus \left| u \right> = \left| 0\right> $. \item $a \cdot ( b \left| u \right> ) = (a \cdot b ) \left| u\right> $. \item $(a + b) \left| u \right> = a \left| u \right> + b \left| u\right> $. \item $a ( \left| u \right> + \left| v \right> ) = a \left| u \right> + a \left| v\right> $. \item For the unity of $F$, $1$, $1 \left| u \right> = \left| u \right>$. \end{enumerate} \end{boxeddefn} If $V$ satisfies the criteria for a vector space, the members $\left| u \right> \in V$ are called \textbf{vectors}\index{Vector}, and the members $a \in F$ are called \textbf{scalars}\index{Scalar}. For the purposes of quantum mechanics, the field $F$ we are concerned with is almost always $\mathbb C$, the field of complex numbers, and $V$ has the usual (Euclidean) topology.\footnote{The fields\index{Field (algebraic)} we refer to here are those from abstract algebra, and should not be confused with force fields (such as the electric and magnetic fields) used in physics. Loosely speaking, most of the sets of numbers we deal with in physics are algebraic fields, such as the real and complex numbers. For more details, see ref. \cite{anderson}.} Since the operation $(\oplus)$ is by definition interchangeable with the field operation $(+)$, it is conventional to use the symbol $(+)$ for both, and we do so henceforth \cite{anderson}.\footnote{In definition \ref{defn:lindep}, we use the notation $\alpha \in \Lambda$, which might be foreign to some readers. $\Lambda$ is considered an index set, or a set of all possible allowed values for $\alpha$. Then, by $\alpha \in \Lambda$, we are letting $\alpha$ run over the entire index set.
Using this powerful notation, we can treat almost any type of general sum or integral. For more information, see ref. \cite{gamelin}} \begin{boxeddefn}{Linear dependence\index{Linear!Dependence}}{defn:lindep} A collection of vectors $\{ \left| v _{\alpha} \right> \}_{\alpha \in \Lambda}$, where $\Lambda$ is some index set, belonging to a vector space $V$ over $F$ is \textbf{linearly dependent} if there exists a set $\{a_{\alpha}\}_{\alpha \in \Lambda}$ such that \begin{equation} \sum_{\alpha \in \Lambda} a_{\alpha} \left| v_{\alpha}\right> = \left| 0\right> \end{equation} with at least one $a_{\alpha} \neq 0$. \end{boxeddefn} This means that, if a set of vectors is linearly dependent, we can express one of the member vectors in terms of the others. If a set of vectors is not linearly dependent, we call it \textbf{linearly independent}\index{Linear!Independence}, in which case we would not be able to express one of the member vectors in terms of the others \cite{poole}. \begin{boxeddefn}{Dimension\index{Dimension}}{defndimension} Consider the vector space $V$ and let $\{ \left| v_{\alpha}\right> \}_{\alpha \in \Lambda} \subseteq V$ be an arbitrary set of linearly independent vectors. Then, if $\Lambda$ is always finite, the \textbf{dimension} of $V$ is the maximum number of elements in $\Lambda$. If $\Lambda$ is not always finite, then $V$ is said to have \textbf{infinite dimension}. \end{boxeddefn} \begin{boxeddefn}{Basis\index{Basis}}{def:basis} Let $B=\{\left| v_{\alpha}\right> \}_{\alpha \in \Lambda} \subseteq V$, where $V$ is a vector space over the field $F$. If $\left| v_{\alpha} \right>$ and $\left| v_{\beta} \right>$ are linearly independent whenever $\alpha \neq \beta$ and an arbitrary vector $\left| u \right> \in V$ can be written as a linear combination of $\left| v _{\alpha}\right>$'s, i.e.
\begin{equation} \left| u \right> = \sum_{\alpha \in \Lambda} c_{\alpha} \left| v _{\alpha}\right>, \end{equation} with $c_{\alpha} \in F$, we say $\{\left| v_{\alpha}\right> \}_{\alpha \in \Lambda}$ is a \textbf{basis set} or \textbf{basis} for $V$. \end{boxeddefn} It follows directly from this definition that, in any vector space with finite dimension $D$, any basis set will have precisely $D$ members. Because quantum mechanics deals with a Euclidean vector space over the complex numbers, it is advantageous to precisely define the inner product of two vectors within that special case \cite{ballentine}. \begin{boxeddefn}{Inner product}{defn:innderproduct} Let $V$ be a vector space over the field of complex numbers $\mathbb C$. Then, $g:V \times V \rightarrow \mathbb C$ is an \textbf{inner product} if, for all $\left| u \right>, \left| v \right>, \left| w \right> \in V$ and $a, b \in \mathbb C$, \begin{enumerate} \item $g \left( \left| u \right> , \left| v \right> \right) = g \left( \left| v \right> , \left| u \right> \right)^ *$, \item $ g \left( \left| u \right> , a \left| v \right> + b \left| w \right> \right) = a \cdot g \left( \left| u \right> , \left| v \right> \right)+ b \cdot g \left( \left| u \right> , \left| w \right> \right) $, \item $ g \left( \left| u \right> , \left| u \right> \right) \geq 0$ with $ g \left( \left| u \right> , \left| u \right> \right) = 0 \Leftrightarrow \left| u \right> = \left| 0 \right>$. \end{enumerate} \end{boxeddefn} Although it is not immediately clear, the inner product is closely related to the space of linear functionals on $V$, called the dual space of $V$ and denoted $V^*$. Below, we define these concepts precisely and then show their connection through the Riesz representation theorem \cite{ballentine}.
\begin{boxeddefn}{Linear functional\index{Linear!Functional}}{} A \textbf{linear functional} on a vector space $V$ over $\mathbb C$ is any function $F: V \rightarrow \mathbb C$ such that for all $a, b \in \mathbb C$ and for all $\left| u \right>, \left| v \right> \in V$, \begin{equation} F \left( a \left| u \right>+ b \left| v \right> \right) = a \cdot F \left( \left| u \right> \right) + b \cdot F \left( \left| v \right> \right). \end{equation} We say that the space occupied by the linear functionals on $V$ is the \textbf{dual space}\index{Dual Space} of $V$, and we denote it by $V^*$. \end{boxeddefn} We connect the inner product with the dual space $V^*$ using the Riesz representation theorem \cite{ballentine}. \begin{boxedthm}{Riesz representation\index{Riesz Representation Theorem}}{thm:rieszthm} Let $V$ be a finite-dimensional vector space and $V^*$ be its dual space. Then, there exists a bijection $h: V^* \rightarrow V$ defined by $h(F) = \left| f \right> $ for $F \in V^*$ and $\left| f \right> \in V$ such that $F \left( \left| u \right> \right) = g \left( \left| f \right>, \left| u \right> \right)\, \forall \left| u \right> \in V$, where $g$ is an inner product of $V$ \cite{ballentine}. \end{boxedthm} The proof of this theorem is straightforward, but too lengthy for our present discussion, so we will reference a simple proof for the interested reader \cite{ballentine}. The consequences of this theorem are far-reaching. It is obviously true that the inner product of two vectors, which maps them to a scalar, is a linear functional. However, the Riesz theorem asserts that any linear functional can be represented as an inner product. This means that every linear functional has precisely one object in the dual space, corresponding to a vector in the vector space.
For this reason, we call the linear functional associated with $\left| u \right>$ a dual vector and write it as \begin{equation} \left< u \right| \in V^*, \end{equation} and we contract our notation for the inner product of two vectors $\left| u \right>$ and $\left| v \right>$ to \begin{equation} g\left( \left| u \right>, \left| v \right> \right) \equiv \left< u \big| v \right>, \end{equation} a notational convention first established by P. A. M. Dirac.\index{Dirac, P. A. M.} The vectors in $V$ are called \textbf{kets}\index{Ket} and the dual vectors, or linear functionals associated with vectors in $V^*$, are called \textbf{bras}\index{Bra}. Hence, when we adjoin a bra and a ket, we get a bra-ket or bracket, which is an inner product. Note that by the definition of the inner product, we have \begin{equation}\label{eqn:diracinner} \left< u \big | v \right> = \left< v \big | u \right>^*, \end{equation} so if we multiply some vector $\left| v \right>$ by a (complex) scalar $\alpha$, the corresponding dual vector is $\alpha^* \left< v \right|$. When we form dual vectors from vectors, we must always remember to conjugate such scalars. As another note, when choosing a basis, we frequently pick it as \textbf{orthonormal}, which we define below \cite{poole}. \begin{boxeddefn}{Orthonormality of a Basis\index{Basis!Orthonormality of}}{defn:orthobasis} A basis $B$ for some vector space $V$ is \textbf{orthonormal} if any two vectors $\left| \phi_i \right>$ and $\left| \phi_j \right>$ in $B$ satisfy \begin{equation} \left< \phi_i \big | \phi_j \right> = \begin{cases} 1 & \text{if $i=j$} \\ 0& \text{if $i \neq j$} \end{cases}. \end{equation} \end{boxeddefn} For any vector space, we can always find such a basis, so we do not lose any generality by always choosing to use one.\footnote{The process for finding an orthonormal basis is called the Gram-Schmidt algorithm\index{Gram-Schmidt Algorithm}, and allows us to construct an orthonormal basis from any basis.
For details, see ref. \cite{poole}.} A useful example that illustrates the use of vectors and dual vectors can be found by constraining our vector space to a finite number of dimensions.\index{Matrix Representation!of Vectors}\index{Matrix Representation!of Linear Functionals} Working in such a space, we represent vectors as column matrices and dual vectors as row matrices \cite{nielsenchuang}. For example, in three dimensions we might have \begin{equation} \left| e_1 \right> \leftrightarrow \left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right) \end{equation} and \begin{equation} i \left| e_2 \right> \leftrightarrow \left( \begin{array}{c} 0 \\ i \\0 \end{array} \right), \end{equation} where $\left| e_1 \right>$ and $\left| e_2 \right>$ are the unit vectors from basic physics \cite{hrw}. Then, the linear functional corresponding to $\left| e_2 \right>$ is\footnote{Here, notice that to generate the representation for $\left< e_2 \right|$ from $\left| e_2 \right>$, we must take the complex conjugate. This is necessary due to the complex symmetry of the inner product established in eqn. \ref{eqn:diracinner}.} \begin{equation} \left< e_2 \right| \leftrightarrow i^* \left( \begin{array}{ccc} 0 & 1 & 0 \end{array} \right) = \left( \begin{array}{ccc} 0 & -i & 0 \end{array} \right) . \end{equation} We represent the inner product as matrix multiplication, so we write \begin{equation} - i \left< e_2 \big |e_1 \right> \leftrightarrow \left( \begin{array}{ccc} 0 & -i & 0 \end{array} \right) \left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right) = 0, \end{equation} which indicates that $\left| e_1 \right> $ and $\left| e_2\right>$ are orthogonal, as we expect. \section{Linear Operators} So far, we have looked at two main types of objects in a vector space: vectors and linear functionals. In this section, we focus on a third: the linear operator. Recall that linear functionals take vectors to numbers. Similarly, linear operators are objects that take vectors to other vectors. 
Formally, this is the following definition \cite{riley}. \begin{boxeddefn}{Linear Operator\index{Linear!Operator!Abstract}}{} Let $\left| u \right>, \left| v \right> \in V$ be vectors and $\alpha, \beta$ be scalars in the field associated with $V$. Then, we say $\hat A$ is a \textbf{linear operator} on $V$ if \begin{equation} \hat A \left| v \right> \in V \end{equation} and \begin{equation} \hat A \left( \alpha \left| u \right> + \beta \left| v \right> \right) = \alpha \hat A \left| u \right> + \beta \hat A \left| v \right>. \end{equation} \end{boxeddefn} Throughout the rest of this thesis, whenever we discuss an operator on a vector space, we will always use a hat to avoid confusion with a scalar. In a finite dimensional vector space, as indicated previously, we often represent vectors by column matrices and dual vectors by row matrices. Similarly, we represent operators by square matrices \cite{nielsenchuang}.\index{Matrix Representation!of Linear Operators} For example, if \begin{equation} \hat{A} \leftrightarrow \left( \begin{array}{ccc} 0 & 0 & 0\\ 1 & 0 & 0\\0 & 0 & 0 \end{array} \right), \end{equation} then \begin{equation} \hat{A} \left| e_1 \right> \leftrightarrow \left( \begin{array}{ccc} 0 & 0 & 0\\ 1 & 0 & 0\\0 & 0 & 0 \end{array} \right) \left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right) = \left( \begin{array}{c} 0 \\ 1 \\0 \end{array} \right) \leftrightarrow \left| e_2 \right>. \end{equation} We can also use our formalism to access individual elements of an operator in its matrix representation. Working in the three-dimensional standard, orthonormal basis from the example above, we specify $\hat B$ as \begin{equation} \hat{B} \left| u \right> = \left| v \right>, \end{equation} where \begin{equation} \left| u \right> = u_1 \left| e_1 \right> + u_2 \left| e_2 \right> + u_3 \left| e_3 \right> \end{equation} and \begin{equation} \left| v \right> = v_1 \left| e_1 \right> + v_2 \left| e_2 \right> + v_3 \left| e_3 \right>. 
\end{equation} Then, \begin{eqnarray} \left<e_i \right| \hat B\left| u \right>&=&\left< e_i \right| \hat B \left( u_1\left| e_1 \right> +u_2\left| e_2 \right> + u_3\left| e_3 \right> \right) \nonumber \\ &=& \left<e_i \right| \hat B \sum_{j=1}^3 u_j \left| e_j \right> \nonumber \\ &=& \left<e_i \big | v \right> \nonumber \\ &=&\sum_{j=1}^3 v_j \left<e_i \big | e_j \right> \nonumber \\ &=& v_i, \end{eqnarray} which is just the matrix equation \cite{ballentine} \begin{equation} \sum_{j=1}^3 B(i,j)u_j=v_i, \end{equation} where we made the definition \begin{equation}\label{eqn:matrixelem} B_{ij}=B(i,j)\equiv \big < e_i \big | \hat B \left| e_j \right>. \end{equation} We call $B(i,j)$ the \textbf{matrix element}\index{Matrix Element} corresponding to the operator $\hat B$. Note that the matrix elements of an operator depend on our choice of basis set. Using this expression for a matrix element, we define the trace of an operator. This definition is very similar to the elementary notion of the trace of a matrix as the sum of the elements in the main diagonal.\footnote{Since the individual matrix elements of an operator depend on the basis chosen, it might seem as if the trace would vary with basis, as well. However, the trace turns out to be independent of basis choice \cite{ballentine}.} \begin{boxeddefn}{Trace}{defn:trace} Let $\hat A$ be an operator on the vector space $V$ and let $B=\{\left| v_{\alpha}\right> \}_{\alpha \in \Lambda} \subseteq V$ be an orthonormal basis for $V$. Then, the \textbf{trace} of $\hat{A}$ is \begin{equation} \mathrm{Tr} \left( \hat A \right) \equiv \sum_{\alpha \in \Lambda} \left< v _{\alpha }\right| \hat A \left| v_{\alpha} \right>. \end{equation} \end{boxeddefn} So far, we have defined operators as acting to the right on vectors. However, since the Riesz theorem guarantees a bijection between vectors and dual vectors (linear functionals in the dual space), we expect operators to also act to the left on dual vectors.
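As a quick numerical aside (not part of the formal development), both eqn. \ref{eqn:matrixelem} and the basis-independence of the trace are easy to check in a finite-dimensional matrix representation. The sketch below assumes Python with numpy; the matrix $B$ and the random unitary are arbitrary samples, not anything fixed by the text. Its last lines also show a dual vector acting on an operator from the left, as a row matrix times a square matrix.

```python
import numpy as np

# An arbitrary sample operator in its standard-basis matrix representation.
B = np.array([[0, 1j, 2],
              [1, 0, -1j],
              [0, 3, 1]], dtype=complex)

# Matrix elements B(i, j) = <e_i| B |e_j>, recovered one bracket at a time.
basis = np.eye(3, dtype=complex)
elems = np.array([[basis[:, i].conj() @ B @ basis[:, j] for j in range(3)]
                  for i in range(3)])
assert np.allclose(elems, B)

# The trace, summed as <v_a| B |v_a> over ANY orthonormal basis, is unchanged.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
tr_standard = sum(basis[:, a].conj() @ B @ basis[:, a] for a in range(3))
tr_rotated = sum(U[:, a].conj() @ B @ U[:, a] for a in range(3))
assert np.isclose(tr_standard, tr_rotated)  # both equal Tr(B) = 1

# A dual vector acts on the operator from the left: row times square matrix.
left_action = basis[:, 1].conj() @ B   # <e_2| B, again a row matrix
assert np.allclose(left_action, B[1, :])
```

Here the columns of the unitary `U` play the role of a second orthonormal basis $\{\left| v_{\alpha}\right>\}$, so the two trace sums agreeing is exactly the basis-independence claimed in the footnote above.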
To make this concept precise, we write a definition. \begin{boxeddefn}{Adjoint\index{Adjoint}}{def:adjoint} Suppose $\left| u \right>, \left| v \right> \in V$ such that an operator on $V$, $\hat A$, follows \begin{equation} \hat A \left| u \right> = \left| v \right>. \end{equation} Then, we define the \textbf{adjoint} of $\hat A$, $\hat A ^{\dagger}$, as \begin{equation} \left< u \right| \hat A^{\dagger} \equiv \left< v \right|. \end{equation} \end{boxeddefn} From this definition, it follows that \begin{eqnarray} \label{eqn:adjointapp} \left( \left< u \right| \hat A^{\dagger} \left| w \right> \right)^* &=& \left<v \big | w \right>^* \nonumber \\ &=& \left< w \big| v \right> \nonumber \\ &=& \left< w \right| \hat A \left| u \right>, \end{eqnarray} which is an important result involving the adjoint, and is sometimes even used as its definition. This correctly suggests that the adjoint for operators is very similar to the conjugate transpose for square matrices; the two operations are equivalent for the matrix representations of finite-dimensional vector spaces.\footnote{Many physicists, seeing that linear functionals are represented as row matrices and vectors are represented as column matrices, will write $\left| v \right> = \left< v \right| ^{\dagger}$. This is not \textit{technically} correct, as the formal definition \ref{def:adjoint} only defined the adjoint operation for an operator, not a functional. However, though it is an abuse of notation, it turns out that nothing breaks as a result \cite{ballentine}. For clarity, we will be careful not to use the adjoint in this way.} Although the matrix representation of an operator is useful, we also need to express operators using Dirac's bra-ket notation. To do this, we define the outer product \cite{nielsenchuang}. \begin{boxeddefn}{Outer Product\index{Outer Product}}{} Let $ \left| u \right>, \left| v \right> \in V$ be vectors.
We define the \textbf{outer product} of $\left| u \right>$ and $\left| v \right>$ as the operator $\hat A$ such that \begin{equation} \hat A \equiv \left| u \right> \left< v \right|. \end{equation} \end{boxeddefn} Note that this is clearly linear, and is an operator, as \begin{equation} \big( \left| u \right> \left< v \right| \big) \left| w \right> = \left| u \right> \left< v \big | w \right> = \left< v \big | w \right> \left| u \right> \in V \end{equation} for $\left| u \right>, \left| v \right>, \left| w \right> \in V$, a vector space. Further, if an operator is constructed in such a way, eqn. \ref{eqn:adjointapp} tells us that its adjoint is \begin{equation} \left( \left| u \right> \left< v \right| \right)^{\dagger} = \left| v \right> \left< u \right|. \end{equation} Self-adjoint operators\index{Linear!Operator!Self-Adjoint}, i.e. operators such that \begin{equation} \hat A^{\dagger} = \hat A, \end{equation} are especially important in quantum mechanics. The main properties that make self-adjoint operators useful concern their eigenvectors and eigenvalues.\footnote{We assume that the reader has seen eigenvalues and eigenvectors. However, if not, see ref. \cite{poole} or any other linear algebra text for a thorough introduction.} We summarize them formally in the following theorem \cite{ballentine}. \begin{boxedthm}{Eigenvectors and Eigenvalues of Self-adjoint Operators}{} Let $\hat A$ be a self-adjoint operator. Then, all its eigenvalues are real and any two eigenvectors corresponding to two distinct eigenvalues are orthogonal. \end{boxedthm} \begin{proof} Let $ \hat A \left| u \right> = u \left| u \right>$ and $\hat A \left| v \right> = v \left| v \right>$ so that $\left| u \right>$ and $\left| v \right>$ are arbitrary (nonzero) eigenvectors of $\hat A$ corresponding to the eigenvalues $u$ and $v$. Then, using eqn.
\ref{eqn:adjointapp}, we deduce \cite{ballentine} \begin{eqnarray} u \left< u \big | u \right> &=& \left< u \right| u \left| u \right> \nonumber \\ &=& \left< u \right| \hat A^{\dagger} \left| u \right>^* \nonumber \\ &=& \left< u \right| \hat A \left| u \right>^* \nonumber \\ &=& \left< u \right| u \left| u \right>^* \nonumber \\ &=& u^* \left< u \big| u \right>^* \nonumber \\ &=& u^* \left< u \big| u \right>. \end{eqnarray} Since $\left| u \right> \neq 0$, we get $u=u^*$, so $u$ is real. Hence, any arbitrary eigenvalue of a self-adjoint operator is real. Next, we consider combinations of two eigenvectors. That is, \begin{eqnarray} 0 &=& \left< u \right| \hat A \left| v \right> - \left< u \right| \hat A \left| v \right> \nonumber \\ &=& \left< u \right| \hat A \left| v \right> - \left< v \right| \hat A^{\dagger} \left| u \right>^* \nonumber \\ &=& \left< u \right| \hat A \left| v \right> - \left< v \right| \hat A \left| u \right>^* \nonumber \\ &=& \left< u \right| v \left| v \right> - \left< v \right| u \left| u \right>^* \nonumber \\ &=& \left( v - u \right) \left< u \big | v \right>. \end{eqnarray} Thus, if $v \neq u$, $\left< u \big | v \right> = 0$, so $\left| u \right>$ and $\left| v \right>$ are orthogonal as claimed. \end{proof} Now that we have shown this orthogonality of distinct eigenvectors of an operator, we would like to claim that these eigenvectors form a basis for the vector space in which the operator acts. For finite dimensional spaces, this turns out to be the case, although the proof is quite technical, so we omit it and refer the reader to \cite{ballentine}. However, infinite dimensional cases produce problems mathematically, hence the eigenvectors of an operator in such a space need not form a basis for that space \cite{ballentine}. For the moment, we will proceed anyway, returning to this issue in section \ref{sec:infdim}.
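Both properties of the theorem are easy to witness numerically in finite dimensions. The following sketch is purely illustrative (it assumes Python with numpy, and the Hermitian matrix is an arbitrary sample): it diagonalizes a self-adjoint operator, confirms that the eigenvalues are real and the eigenvectors orthonormal, and reconstructs the operator from its eigenvalues and eigenvectors.

```python
import numpy as np

# A self-adjoint (Hermitian) operator: A equals its own conjugate transpose.
A = np.array([[2, 1j, 0],
              [-1j, 3, 1],
              [0, 1, 1]], dtype=complex)
assert np.allclose(A, A.conj().T)

vals, vecs = np.linalg.eigh(A)   # eigh is numpy's solver for Hermitian matrices

# The eigenvalues come out real, and the eigenvectors are orthonormal (V†V = 1),
assert np.all(np.isreal(vals))
assert np.allclose(vecs.conj().T @ vecs, np.eye(3))

# so they form a basis, and A is recovered as  sum_a  a_a |v_a><v_a|.
A_rebuilt = sum(vals[a] * np.outer(vecs[:, a], vecs[:, a].conj()) for a in range(3))
assert np.allclose(A_rebuilt, A)
```

The reconstruction in the last step is precisely the finite-dimensional eigenvector expansion developed below, with `np.outer(v, v.conj())` playing the role of the outer product $\left| v_a \right> \left< v_a \right|$.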
Suppose that $\{\left| v_{\alpha}\right> \}_{\alpha \in \Lambda}$ is the set of all eigenvectors of the self-adjoint operator $\hat A$. Since eigenvectors are only determinable up to a scaling factor, as long as our vectors are of finite magnitude, we may rescale all of these vectors to be an orthonormal set of basis vectors \cite{poole}. (Eigenvectors belonging to distinct eigenvalues are orthogonal by the theorem above; within a degenerate eigenspace, an orthogonal set can always be chosen via the Gram-Schmidt procedure.) By our assumption, this set forms a basis for our vector space, $V$. Thus, for any $\left| u \right> \in V$, we can write \begin{equation} \left| u \right> = \sum_{\alpha \in \Lambda} u_{\alpha} \left| v_{\alpha} \right> = \sum_{\alpha \in \Lambda} \left| v_{\alpha} \right>u_{\alpha} . \end{equation} Noting that, since the basis vectors are orthonormal, \begin{equation} \left<v_i \big | u \right> = \sum_{\alpha \in \Lambda} u_{\alpha} \left< v_i \big| v_\alpha \right> = u_i, \end{equation} we get \begin{equation} \left| u \right> = \sum_{\alpha \in \Lambda} \left| v_{\alpha} \right> \left< v_{\alpha} \big | u \right> = \left(\sum_{\alpha \in \Lambda} \left| v_{\alpha} \right> \left< v_{\alpha}\right| \right) \left| u \right> . \end{equation} It follows immediately that \begin{boxedeqn}{eqn:projector} \sum_{\alpha \in \Lambda} \left| v_{\alpha} \right> \left< v_{\alpha}\right| = \hat 1, \end{boxedeqn} which is called the \textbf{resolution of the identity}\index{Resolution of the Identity}. This leads us to a result that allows us to represent self-adjoint operators in terms of their eigenvector bases, the spectral theorem \cite{ballentine}. \begin{boxedthm}{Spectral Theorem\index{Spectral Theorem}}{thm:spectral} Let $\hat A$ be an operator on the vector space $V$. Assuming that the spectrum of eigenvectors of $\hat A$, $\{\left| v_{\alpha}\right> \}_{\alpha \in \Lambda}$, forms a basis for $V$, $\hat A$ can be expressed as \begin{equation} \hat A = \sum_{\alpha \in \Lambda} a_{\alpha} \left| v_{\alpha} \right> \left< v_{\alpha} \right|, \end{equation} where $\{ a_{\alpha} \}_{\alpha \in \Lambda}$ are the eigenvalues of $\hat A$.
\end{boxedthm} \begin{proof} Let $\left| u \right> \in V$ be an arbitrary vector. Then, since $\{\left| v_{\alpha}\right> \}_{\alpha \in \Lambda}$ is a basis for $V$, we can write \begin{equation} \left| u \right> = \sum_{\alpha \in \Lambda} u_{\alpha} \left| v_{\alpha} \right>. \end{equation} Hence, \begin{equation} \hat A \left| u \right> = \sum_{\alpha \in \Lambda} u_{\alpha} \hat A \left| v_{\alpha} \right> = \sum_{\alpha \in \Lambda} u_{\alpha} a_{\alpha} \left| v_{\alpha} \right>. \end{equation} Now, we consider the other side of the equation. We get \cite{ballentine} \begin{eqnarray} \left( \sum_{\alpha \in \Lambda} a_{\alpha} \left| v_{\alpha} \right> \left< v_{\alpha} \right| \right) \left| u \right> &=& \left( \sum_{\alpha \in \Lambda} a_{\alpha} \left| v_{\alpha} \right> \left< v_{\alpha} \right| \right) \sum_{\beta \in \Lambda} u_{\beta} \left| v_{\beta} \right> \nonumber \\ &=& \sum_{\alpha \in \Lambda} \sum_{\beta \in \Lambda} a_{\alpha}u_{\beta} \left| v_{\alpha} \right> \left< v_{\alpha} \big| v_{\beta} \right> \nonumber \\ &=& \sum_{\alpha \in \Lambda} a_{\alpha}u_{\alpha} \left| v_{\alpha} \right> \nonumber \\ &=& \hat A \left| u \right>, \end{eqnarray} where we used the orthonormality of our basis vectors. This holds for arbitrary $\left| u \right> \in V$, so \cite{ballentine} \begin{equation} \hat A = \sum_{\alpha \in \Lambda} a_{\alpha} \left| v_{\alpha} \right> \left< v_{\alpha} \right| , \end{equation} as desired. \end{proof} Since we assumed that the eigenvectors for any self-adjoint operator formed a basis for the operator's space, we may use the spectral theorem to decompose self-adjoint operators into basis elements, which we make use of later. \section{The Tensor Product} So far, we have discussed two types of products in vector spaces: inner and outer. 
The tensor product falls into the same category as the outer product in that it involves arraying all possible combinations of two sets, and is sometimes referred to as the Cartesian or direct product \cite{anderson}. We formally define the tensor product operation $( \otimes )$ below \cite{nielsenchuang}. \begin{boxeddefn}{Tensor Product\index{Tensor Product}}{defn:tensor} Suppose $V$ and $W$ are two vector spaces spanned by the orthonormal bases $\{\left| v_{\alpha}\right> \}_{\alpha \in \Lambda}$ and $\big\{\left| w_{\beta}\right> \big \}_{\beta \in \Gamma}$, respectively. Then, we define the \textbf{tensor product space}, or product space, as the space spanned by the basis set \begin{equation} \big \{ \left( \left| x \right>, \left| y \right> \right) \, : \, \left| x \right> \in \{\left| v_{\alpha}\right> \}_{\alpha \in \Lambda}, \left| y \right> \in \big\{\left| w_{\beta}\right> \big \}_{\beta \in \Gamma} \big \} \end{equation} and denote the space as $V \otimes W$. We call each ordered pair of vectors a \textbf{tensor product} of the two vectors and denote it as $\left| x \right> \otimes \left| y \right>$. We require \begin{equation} \left< \left( \left| x_1 \right> \otimes \left| y_1 \right> \right) \big| \left( \left| x_2 \right> \otimes \left| y_2 \right> \right) \right> \equiv \left< x_1 \big | x_2 \right> \left< y_1 \big | y_2 \right>, \end{equation} the ordinary product of the two inner products, since each factor is a scalar. \end{boxeddefn} The tensor product is linear in the normal sense, in that it is distributive and can absorb scalar constants \cite{nielsenchuang}. Further, we define linear operators on a product space by \begin{equation} \label{eqn:tensorop} \left( \hat A \otimes \hat B \right) \left| v \right> \otimes \left| w \right> \equiv \hat A \left| v \right> \otimes \hat B \left| w \right>. \end{equation} The definition for the tensor product is quite abstract, so we now consider a special case in a matrix representation for clarity.
Consider a two-dimensional vector space, $V$, and a three-dimensional vector space $W$. We let the operator \begin{equation} \hat A \leftrightarrow \left( \begin{array}{cc} 1 & -i \\ 0 & 2 \end{array} \right) \end{equation} act over $V$, and the operator \begin{equation} \hat B \leftrightarrow \left( \begin{array}{ccc} i & 2 & -1 \\ 0 & 1 & -2 \\ 2i & -1 & 0 \end{array} \right) \end{equation} act over $W$. Then, operating on arbitrary vectors, we find \begin{equation} \hat A \left| v \right> \leftrightarrow \left( \begin{array}{cc} 1 & -i \\ 0 & 2 \end{array} \right) \left( \begin{array}{c} v_1 \\ v_2 \end{array} \right) = \left( \begin{array}{c} v_1-i v_2 \\ 2 v_2 \end{array} \right) \end{equation} and \begin{equation} \hat B \left| w \right> \leftrightarrow \left( \begin{array}{ccc} i & 2 & -1 \\ 0 & 1 & -2 \\ 2i & -1 & 0 \end{array} \right) \left( \begin{array}{c} w_1 \\ w_2 \\ w_3 \end{array} \right) = \left( \begin{array}{c} iw_1+2w_2-w_3 \\ w_2 -2 w_3\\ 2iw_1 - w_2 \end{array} \right). \end{equation} The representation of the tensor product as a matrix operation is called the \textbf{Kronecker product}, and is formed by nesting matrices from right to left and distributing via standard multiplication \cite{nielsenchuang}. We now illustrate it by working our example.
\begin{eqnarray} \hat A \left| v \right> \otimes \hat B \left| w \right> &\leftrightarrow& \left( \begin{array}{c} v_1-i v_2 \\ 2 v_2 \end{array} \right) \otimes \left( \begin{array}{c} iw_1+2w_2-w_3 \\ w_2 -2 w_3\\ 2iw_1 - w_2 \end{array} \right) \nonumber \\ &=& \left( \begin{array}{c} \left( v_1-i v_2 \right)\left( \begin{array}{c} iw_1+2w_2-w_3 \\ w_2 -2 w_3\\ 2iw_1 - w_2 \end{array} \right) \\ 2 v_2 \left( \begin{array}{c} iw_1+2w_2-w_3 \\ w_2 -2 w_3\\ 2iw_1 - w_2 \end{array} \right)\end{array} \right) \nonumber \\ &=& \left( \begin{array}{c} \left( v_1-i v_2 \right) \left(iw_1+2w_2-w_3 \right) \\ \left( v_1-i v_2 \right)\left(w_2 -2 w_3 \right) \\ \left( v_1-i v_2 \right) \left( 2iw_1 - w_2 \right) \\ 2 v_2 \left( iw_1+2w_2-w_3 \right) \\ 2 v_2 \left( w_2 -2 w_3 \right) \\ 2 v_2 \left( 2iw_1 - w_2 \right) \end{array} \right) . \end{eqnarray} But by eqn. \ref{eqn:tensorop}, we should be able to first construct the tensor product of the operators $\hat A$ and $\hat B$ and apply the resulting operator to the tensor product of $\left| v \right>$ and $\left| w \right>$.
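Before verifying this by hand, note that the check can also be done mechanically: numpy's `np.kron` implements the Kronecker product representation of $\otimes$ described above. The sketch below (assuming Python with numpy, and with arbitrary sample numbers substituted for the components of $\left| v \right>$ and $\left| w \right>$) confirms eqn. \ref{eqn:tensorop} for the operators of this example.

```python
import numpy as np

# The operators A and B from the example above.
A = np.array([[1, -1j],
              [0, 2]], dtype=complex)
B = np.array([[1j, 2, -1],
              [0, 1, -2],
              [2j, -1, 0]], dtype=complex)

# Arbitrary sample components standing in for v1, v2 and w1, w2, w3.
v = np.array([1.0, 2.0], dtype=complex)
w = np.array([3.0, -1.0, 0.5], dtype=complex)

# np.kron nests the right factor inside the left, as in the Kronecker product.
lhs = np.kron(A, B) @ np.kron(v, w)   # (A ⊗ B)(|v> ⊗ |w>)
rhs = np.kron(A @ v, B @ w)           # (A|v>) ⊗ (B|w>)
assert np.allclose(lhs, rhs)
```

Because `np.kron` uses one fixed nesting convention for both the $6 \times 6$ matrix and the $6$-component column, the identity holds automatically; the hand computation that follows makes the same bookkeeping explicit.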
Working this out using the Kronecker product\index{Kronecker Product}, we have \begin{eqnarray} \hat A \otimes \hat B &\leftrightarrow&\left( \begin{array}{cc} 1 & -i \\ 0 & 2 \end{array} \right) \otimes \left( \begin{array}{ccc} i & 2 & -1 \\ 0 & 1 & -2 \\ 2i & -1 & 0 \end{array} \right) \nonumber \\ &=& \left( \begin{array}{cc} 1 \left( \begin{array}{ccc} i & 2 & -1 \\ 0 & 1 & -2 \\ 2i & -1 & 0 \end{array} \right) & -i \left( \begin{array}{ccc} i & 2 & -1 \\ 0 & 1 & -2 \\ 2i & -1 & 0 \end{array} \right) \\ 0 \left( \begin{array}{ccc} i & 2 & -1 \\ 0 & 1 & -2 \\ 2i & -1 & 0 \end{array} \right) & 2 \left( \begin{array}{ccc} i & 2 & -1 \\ 0 & 1 & -2 \\ 2i & -1 & 0 \end{array} \right)\end{array} \right) \nonumber \\ &=& \left( \begin{array}{cccccc} i & 2 & -1 & 1 & -2i & i \\ 0 & 1 & -2 & 0 & -i & 2i\\ 2i & -1 & 0 & 2 & i & 0 \\ 0 & 0 & 0 & 2i & 4 & -2 \\ 0 & 0 & 0 & 0 & 2 & -4 \\ 0 & 0 & 0 & 4i & -2 & 0 \end{array} \right) \end{eqnarray} and \begin{equation} \left| v \right> \otimes \left| w \right> \leftrightarrow \left( \begin{array}{c} v_1 \\ v_2 \end{array} \right) \otimes \left( \begin{array}{c} w_1 \\ w_2 \\ w_3 \end{array} \right) = \left( \begin{array}{c} v_1 w_1 \\ v_1 w_2 \\ v_1 w_3 \\ v_2 w_1 \\ v_2 w_2 \\ v_2 w_3 \end{array} \right), \end{equation} where the same right-to-left nesting fixes the ordering of the components, so \begin{eqnarray} \left( \hat A \otimes \hat B \right) \left( \left| v \right> \otimes \left| w \right> \right) &\leftrightarrow & \left( \begin{array}{cccccc} i & 2 & -1 & 1 & -2i & i \\ 0 & 1 & -2 & 0 & -i & 2i\\ 2i & -1 & 0 & 2 & i & 0 \\ 0 & 0 & 0 & 2i & 4 & -2 \\ 0 & 0 & 0 & 0 & 2 & -4 \\ 0 & 0 & 0 & 4i & -2 & 0 \end{array} \right) \left( \begin{array}{c} v_1 w_1 \\ v_1 w_2 \\ v_1 w_3 \\ v_2 w_1 \\ v_2 w_2 \\ v_2 w_3 \end{array} \right) \nonumber \\ &=& \left( \begin{array}{c} iv_1w_1+2v_1w_2-v_1w_3+v_2w_1-2iv_2w_2+iv_2w_3 \\ v_1w_2-2v_1w_3-iv_2w_2+2iv_2w_3 \\ 2iv_1w_1-v_1w_2+2v_2w_1+iv_2w_2 \\ 2iv_2w_1+4v_2w_2-2v_2w_3\\ 2v_2w_2-4v_2w_3\\4iv_2w_1-2v_2w_2 \end{array} \right) \nonumber \\ &=& \left(
\begin{array}{c} \left( v_1-i v_2 \right) \left(iw_1+2w_2-w_3 \right) \\ \left( v_1-i v_2 \right)\left(w_2 -2 w_3 \right) \\ \left( v_1-i v_2 \right) \left( 2iw_1 - w_2 \right) \\ 2 v_2 \left( iw_1+2w_2-w_3 \right) \\ 2 v_2 \left( w_2 -2 w_3 \right) \\ 2 v_2 \left( 2iw_1 - w_2 \right) \end{array} \right) \nonumber \\ &\leftrightarrow& \hat A \left| v \right> \otimes \hat B \left| w \right>, \end{eqnarray} and we confirm that this example follows \begin{equation} \left( \hat A \otimes \hat B\right) \left| v \right> \otimes \left| w \right> = \hat A \left| v \right> \otimes \hat B \left| w \right> \end{equation} when we use the Kronecker product representation for the tensor product. Since the matrix representation is very convenient for finite dimensional vector spaces, we frequently use the Kronecker product to calculate the tensor product and then shift back to the abstract Dirac notation. \section{Infinite Dimensional Spaces}\label{sec:infdim} So far, we have largely ignored the main complication that arises when we move from a finite dimensional space to an infinite one: the spectrum of eigenvectors for a self-adjoint operator is no longer guaranteed to form a basis for the space. To deal with this problem, we will have to work in a slightly more specific kind of vector space, called a Hilbert space, denoted $\mathcal H$. A Hilbert space is defined below \cite{ballentine}. \begin{boxeddefn}{Hilbert Space\index{Hilbert Space}}{} Let $W$ be a general linear vector space and suppose that $V\subseteq W$ is a vector space formed by any finite linear combinations of the basis set $\{\left| v_{\alpha}\right> \}_{\alpha \in \Lambda}$. That is, if \begin{equation} \left| u \right> = \sum_{i=1}^n u_{\alpha_i} \left| v \right>_{\alpha_i}, \end{equation} for some finite $n$, then $\left| u \right> \in V$. 
We say the \textbf{Hilbert space} $\mathcal H$ formed by completing $V$ contains any vector that can be written as \begin{equation} \left| u \right> = \lim_{n \rightarrow \infty} \sum_{i=1}^n u_{\alpha_i} \left| v \right>_{\alpha_i}, \end{equation} provided \begin{equation} \sum_{i=1}^{\infty} \left | u_{\alpha_i} \right|^2 \end{equation} exists and is finite. \end{boxeddefn} Note that for the vector spaces described in the above definition, the Hilbert space associated with them always follows $ V \subseteq \mathcal H \subseteq W$, and that $W=\mathcal H = V$ holds if (but \textit{not} only if) $W$ has finite dimension. Without spending too much time on the technicalities, there is a generalized spectral theorem that applies to spaces very closely related to, but larger than, Hilbert spaces \cite{ballentine}. To determine precisely what this space should be, we must first develop a certain subspace of a Hilbert space, which we define by including all vectors $\left| u \right>$, with expansion coefficients $u_{\alpha_n}$, subject to \begin{equation} \sum_{n=1}^{\infty} \left| u_{\alpha_n} \right|^2 n^{m} \end{equation} converging for all $m \in \mathbb N$. For a Hilbert space, we require a much weaker condition, as we do not have the rapidly increasing $n^{m}$ in each term of the summand. We define this space as $\Omega$, and note that always $\Omega \subseteq \mathcal H$ \cite{ballentine}. The extra normalization requirement for a vector to be in $\Omega$ can be thought of as demanding an extremely fast decay of the coefficients as $n \rightarrow \infty$. We now define the space of interest, called the conjugate space\index{Conjugate Space} of $\Omega$, and written as $\Omega^{\times}$ in terms of its member vectors \cite{gamelin}.
Any vector $\left| w \right>$ belongs to $\Omega^{\times}$ if \begin{equation} \left< w \big | u \right>= \sum_{n=1}^{\infty} w_n^* u_n \end{equation} converges for all $\left| u \right> \in \Omega$ and $\left< w \right|$ is continuous on $\Omega$. Since we noted that for a vector $\left| u \right>$ to be in $\Omega$, it must vanish very quickly at infinity, $\left| w \right>$ is not nearly as restricted as a vector in $\mathcal H$. Thus, we have the triplet \begin{equation} \Omega \subseteq \mathcal H \subseteq \Omega^{\times}, \end{equation} which is called a \textbf{rigged Hilbert space triplet}\index{Rigged Hilbert Space Triplet}, and is shown in figure \ref{fig:riggedhilbertspace} \cite{ballentine}.\footnote{The argument used here is rather subtle. If the reader is not clear on the details, it will not impair the comprehension of later sections. To thoroughly understand this material, we recommend first reading the treatment of normed linear spaces in ref. \cite{gamelin}, and then the discussion of rigged Hilbert spaces in refs. \cite{ballentine} and \cite{sudbery}.} \begin{figure} \centering \caption{The rigged Hilbert space triplet: $\Omega \subseteq \mathcal H \subseteq \Omega^{\times}$.} \label{fig:riggedhilbertspace} \end{figure} We noted earlier that the set of eigenvectors of a self-adjoint operator need not form a basis for that operator's space if the space has infinite dimension. This means that the spectral theorem would break down, which is what we wish to avoid. Fortunately, a generalized spectral theorem\index{Spectral Theorem!Generalized} has been proven for rigged Hilbert space triplets, which states that any self-adjoint operator in $\mathcal H$ has eigenvectors in $\Omega^{\times}$ that form a basis for $\mathcal H$ \cite{ballentine}. Due to this, we will work in a rigged Hilbert space triplet, which we will normally denote by the corresponding Hilbert space, $\mathcal H$. We do this with the understanding that to be completely rigorous, it might be necessary to switch between the component sets of the triplet on a case-by-case basis.
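To make the gap between $\Omega$ and $\mathcal H$ concrete, consider the hypothetical coefficient sequence $u_{\alpha_n} = 1/n$: the squared magnitudes sum to $\pi^2/6$, so the corresponding vector lies in $\mathcal H$, but already for $m=2$ the weighted sum $\sum_n \left| u_{\alpha_n} \right|^2 n^m$ diverges, so the vector is not in $\Omega$. A minimal numerical sketch of this (assuming Python with numpy, and truncating the sums at a finite cutoff):

```python
import numpy as np

# Coefficients u_n = 1/n with respect to some orthonormal basis (illustrative).
n = np.arange(1, 100001, dtype=float)
u = 1.0 / n

# sum |u_n|^2 converges (to pi^2/6), so the vector lies in the Hilbert space H.
h_norm = np.sum(np.abs(u) ** 2)

# But for m = 2, each term of sum |u_n|^2 n^m is 1, so the sum diverges:
# the partial sum just grows linearly with the cutoff, and the vector is not in Omega.
omega_partial = np.sum(np.abs(u) ** 2 * n ** 2)

print(h_norm)         # close to pi^2/6 ≈ 1.6449
print(omega_partial)  # ≈ 100000, i.e. the cutoff itself
```

Of course, a finite partial sum cannot prove divergence; the point is only that the $n^m$ weight turns a convergent tail into a sum that tracks the cutoff, which is exactly the "extremely fast decay" requirement failing.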
Now that we have outlined the space in which we will be working, there is an important special case of an infinite dimensional basis that we need to examine. If our basis is \textbf{continuous}\index{Basis!Continuous}, then we can convert all of our abstract summation formulas into integral forms, which are used very frequently in quantum mechanics, since the two most popular bases (position and momentum) are usually continuous.\footnote{A common form of confusion when first studying quantum mechanics is the abstract notion of vectors. In classical mechanics, a vector might point to a particular spot in a physical space. However, in quantum mechanics, a vector can have infinite dimensionality, and so can effectively point to every point in a configuration space simultaneously, with varying magnitude. For this reason, a very clear distinction must be drawn between the vectors used in the formalism of quantum mechanics and the everyday vectors used in classical mechanics. } Specifically, suppose we have a continuous, orthonormal basis for a rigged Hilbert space $\mathcal H$ given by $\big\{ \left| \phi \right> \big \}_{\phi \in \Phi}$, where $\Phi$ is a real interval. Then, if we have \cite{ballentine} \begin{equation} \left| u \right> = \sum_{\phi \in \Phi} u_{\phi} \left| \phi \right>, \,\, \left| v \right> = \sum_{\phi \in \Phi} v_{\phi} \left| \phi \right>, \end{equation} we find a special case of eqn. \ref{eqn:diracinner}. This is\index{Integral Form!Inner Product}\index{Integral Form!Trace}\index{Integral Form!Spectral Theorem} \begin{equation} \left< u \big| v \right> = \int_{\Phi}d \phi \cdot u^*_{\phi} v_{\phi}, \end{equation} where the integral is taken over the real interval $\Phi$. 
Similarly, for an operator $\hat A$, definition \ref{defn:trace} becomes \cite{ballentine} \begin{equation} \mathrm{Tr} \left( \hat A \right) = \int_{\Phi} d \phi \cdot \left< \phi \right| \hat A \left| \phi \right>, \end{equation} and for self-adjoint $\hat A$, theorem \ref{thm:spectral} is \begin{equation} \hat A = \int_{\Phi}d \phi \cdot a_{\phi} \left| \phi \right> \left< \phi \right|. \end{equation} When working in a continuous basis, these integral forms of the inner product, trace, and spectral theorem will often be more useful in calculations than their abstract sum counterparts, and we make extensive use of them in chapter \ref{chap:dynamics}. \chapter{Formal Structure of Quantum Mechanics}\label{chap:quantum_formal} \lettrine[lines=2, lhang=0.33, loversize=0.1]{W}e now use the mathematical tools developed in the last chapter to set the stage for quantum mechanics. We begin by listing the correspondence rules that tell us how to represent physical objects mathematically. Then, we develop the fundamental quantum mechanical concept of the state and its associated operator. Next, we investigate the treatment of composite quantum mechanical systems. Throughout this chapter, we work in discrete bases to simplify our calculations and improve clarity. However, following the rigged Hilbert space formalism developed in section \ref{sec:infdim}, translating the definitions in this section to an infinite-dimensional space is straightforward both mathematically and physically. \section{Fundamental Correspondence Rules of Quantum Mechanics} \label{sec:posts} At the core of the foundation of quantum mechanics are three rules. The first two tell us how to represent a physical object and describe its physical properties mathematically, and the third tells us how the object and its properties are connected.
These three rules permit us to state a physical problem mathematically, work the problem mathematically, and then interpret the mathematical result physically \cite{ballentine}. The first physical object of concern is the \textbf{state}, which completely describes the physical aspects of some system \cite{ballentine}. For instance, we might speak of the state of a hydrogen atom, the state of a photon, or a state of thermal equilibrium between two thermal baths. \begin{boxedaxm}{State Operator\index{State Operator}}{axm:state} We represent each physical state as a unique linear operator that is self-adjoint, nonnegative, and of unit trace, which acts on a rigged Hilbert space $\mathcal H$. We write this operator $\hat{\rho}$ and call it the \textbf{state operator}. \end{boxedaxm} Now that we have introduced the state, we can discuss the physical concepts used to describe states. These concepts include momentum, energy, and position, and are collectively known as dynamical variables \cite{ballentine}. \begin{boxedaxm}{Observable\index{Observable}}{} We represent each dynamical variable as a Hermitian linear operator acting on a rigged Hilbert space $\mathcal H$ whose eigenvalues represent all possible values of the dynamical variable. We write this operator using our hat $\left(\, \hat{ }\, \right)$ notation, and call it an \textbf{observable}. \end{boxedaxm} We now link the first two axioms with the third \cite{ballentine}. \begin{boxedaxm}{Expectation Value\index{Expectation Value}}{axm:expectation} The \textbf{expectation value}, or average measured value\index{Measurement}, of an observable $\hat{\mathcal O}$ over infinitely many identically prepared states (called a virtual ensemble of states) is written as $\left< \hat{\mathcal O} \right>$ and given by \begin{equation} \left< \hat{\mathcal O} \right> \equiv \mathrm{Tr} \left( \hat{\rho} \hat{\mathcal O } \right).
\end{equation} \end{boxedaxm} Though we claimed that these three axioms form the fundamental framework of modern quantum mechanics, they likely seem foreign to a reader who has seen only undergraduate material. In the next section, we work with the state operator and show that, in a special case, the formalism following from the correspondence rules outlined above is identical to that used in introductory quantum mechanics courses. \section{The State Operator} In axiom \ref{axm:state}, we defined $\hat{\rho}$, the state operator. However, the formal definition is very abstract, so in this section we investigate some of the properties of the state operator in an attempt to solidify its meaning. Physicists divide quantum mechanical states, and thus state operators, into two broad categories. Any given state is either called \textbf{pure} or \textbf{impure}. Sometimes, impure states are also referred to as mixtures or mixed states. We now precisely define a pure state \cite{ballentine}. \begin{boxeddefn}{Pure State\index{State!Pure}\index{State!Impure}\index{State!Vector}}{defn:pure} A given state is called \textbf{pure} if its corresponding unique state operator, $\hat{\rho}$, can be written as \begin{equation} \hat{\rho} \equiv \left| \psi \right> \left< \psi \right|, \end{equation} where $\left| \psi \right> \in \mathcal H$ is called the state vector in a rigged Hilbert space $\mathcal H$, $\left< \psi \right| \in \mathcal H^*$ is the linear functional corresponding to $\left| \psi \right>$, and $\left< \psi \big| \psi \right>=1$. If a state cannot be so represented, it is called \textbf{impure}. \end{boxeddefn} Although the importance of pure and impure states is not yet evident, we will eventually need an efficient method of distinguishing between them. The definition, which is phrased as an existence argument, is not well-suited to this purpose. To generate a more useful relationship, consider a pure state.
We have \begin{equation} \hat{\rho}^2=\hat{\rho} \hat{\rho} = \left( \left| \psi \right> \left< \psi \right| \right) \left( \left| \psi \right> \left< \psi \right| \right) = \left| \psi \right> \left( \left< \psi \big| \psi \right> \right) \left< \psi \right| = \left| \psi \right>( 1 )\left< \psi \right| = \left| \psi \right> \left< \psi \right| = \hat{ \rho}. \end{equation} Thus, if a state is pure, it necessarily follows \cite{ballentine} \begin{equation} \hat{\rho}^2 = \hat{\rho}. \end{equation} Although seemingly a weaker condition, this result turns out to also be sufficient to describe a pure state. To show this, we suppose that our state space is discrete and has dimension $D$.\footnote{This is mainly for our convenience. The argument for an infinite-dimensional space is similar, but involves the generalized spectral theorem on our rigged Hilbert space.} Invoking the spectral theorem, theorem \ref{thm:spectral}, we write \begin{equation} \hat{ \rho} = \sum_{n=1}^{D} \rho_n \left| \phi_n \right> \left< \phi_n \right|, \label{eqn:specrho1} \end{equation} where $\{\rho_n\}_{n=1}^D$ is the spectrum of eigenvalues for $\hat{\rho}$, corresponding to the unit-normed eigenvectors of $\hat{\rho}$, $\big\{\left| \phi_n \right> \big \}_{n=1}^D$. If we consider some $1 \leq j \leq D$ with $ j,D\in \mathbb Z$ and let $\hat{\rho} = \hat{\rho}^2$, we have \begin{equation} \hat{\rho} \left| \phi_{j} \right> = \hat{\rho}^2 \left| \phi_{j} \right>, \end{equation} which is \begin{equation} \rho_{j} \left| \phi_{j} \right> = \rho_{j}^2 \left| \phi_{j} \right>, \end{equation} so \begin{equation} \rho_{j}=\rho_{j}^2 \end{equation} or \begin{equation} \rho_j \left(1- \rho_j \right) = 0. \end{equation} Since all of the eigenvalues of $\hat{\rho}$ must also follow this relationship, they must all either be one or zero. But by axiom \ref{axm:state}, $\mathrm{Tr}\left( \hat{\rho} \right) = 1$, so exactly one of the eigenvalues must be one, while all the others are zero. Thus, eqn. 
\ref{eqn:specrho1} becomes \begin{equation} \hat{ \rho} = \left| \phi_{q_1} \right> \left< \phi_{q_1} \right|, \end{equation} where $q_1$ labels the single eigenvector whose eigenvalue is one. Evidently, $\hat{ \rho}$ is a pure state, and we have shown sufficiency \cite{ballentine}. At this point, it is logical to inquire about the necessity of the state operator, as opposed to a state vector alone. After all, most states treated in introductory quantum mechanics are readily represented as state vectors. However, there are many states that are prepared statistically, and so cannot be represented as a state vector. An example of such a case is found in section \ref{sec:bellstate}. These impure states or mixtures\index{State!Impure} turn out to be of the utmost importance when we begin to discuss quantum decoherence, the main focus of this thesis \cite{zurek}. We now turn our attention to the properties of pure states, and illustrate that the state vectors defining pure state operators behave as expected under our correspondence rules. By axiom \ref{axm:expectation}, we know that the expectation value of the dynamical variable (observable) $\hat{A}$ of a state $\hat{\rho}$ is \begin{equation} \left< \hat A \right> = \mathrm{Tr}\left( \hat{\rho} \hat{A} \right). \end{equation} If $\hat{\rho}$ is a pure state, then we can write \begin{equation} \hat{\rho} = \left| \psi \right> \left< \psi \right|.
\end{equation} Hence, $\left<\hat A \right>$ becomes \begin{equation} \left< \hat A \right> = \mathrm{Tr}\left( \left| \psi \right> \left< \psi \right| \hat{A} \right), \end{equation} which, by definition \ref{defn:trace}, is \cite{ballentine} \begin{eqnarray} \label{eqn:recoverexp} \left< \hat A \right> &=& \sum_{n=1}^{D} \left< \phi_n \right| \left( \left| \psi \right> \left< \psi \right| \hat{A} \right) \left| \phi_n \right> \nonumber\\ &=& \sum_{n=1}^{D}\left( \left< \phi_n \big | \psi \right>\right) \left( \left< \psi \right| \hat{A} \left| \phi_n \right> \right) \nonumber\\ &=& \left< \psi \right| \hat{A} \left| \psi \right>, \end{eqnarray} where we have used definition \ref{defn:orthobasis} to pick the basis $\big \{ \left| \phi_{n}\right> \big \}_{n=1}^D$ to be orthonormal and contain the vector $\left| \psi \right>$.\footnote{This works since $\left| \psi \right>$ is guaranteed to have unit magnitude by definition \ref{defn:pure}.} This is the standard definition for an expectation value in introductory quantum mechanics, which we recover by letting $\hat{\rho}$ be pure \cite{griffiths, cohtan}. \section{Composite Systems}\label{sec:composite} In order to model complex physical situations, we will often have to consider multiple, non-isolated states. To facilitate this, we need to develop a method for calculating the state operator of a composite, or combined, quantum system \cite{ballentine}. \begin{boxedaxm}{Composite State\index{Composite!State Operator}}{} Suppose we have a pure composite system composed of $n$ substates, $\big \{ \hat{\rho}_i \big\}_{i=1}^n$. Then, the \textbf{composite state operator} $\hat{\rho}$ of this combined system is given by\index{Composite!State Vector} \begin{equation} \hat{\rho} \equiv \hat{\rho}_1 \otimes \hat{\rho}_2 \otimes \cdot \cdot \cdot \otimes \hat{\rho}_n, \end{equation} where $(\otimes)$ is the tensor product, given in definition \ref{defn:tensor}.
\end{boxedaxm} Note that if $\hat{\rho}$ is pure, there exists some characteristic state vector $\left| \psi \right>$ of $\hat{ \rho}$ where \begin{equation} \left| \psi \right> = \left| \psi_1 \right> \otimes \left| \psi_2 \right> \otimes \cdot \cdot \cdot \otimes \left| \psi_n \right> \label{eqn:stateveccomp} \end{equation} and each $\left| \psi_i \right>$ corresponds to $\hat{ \rho}_i$. As an important notational aside, eqn. \ref{eqn:stateveccomp} is frequently shortened to \cite{nielsenchuang} \begin{equation} \left| \psi \right> = \left| \psi_1\psi_2 ... \psi_n \right> , \end{equation} where the tensor products are taken as implicit in the notation. Just as we discussed dynamical variables associated with certain states, so can we associate dynamical variables with composite systems. In general, an observable of a composite system with $n$ substates is formed by \cite{nielsenchuang}\index{Composite!Observable} \begin{equation} \hat{\mathcal O} \equiv \hat{\mathcal O}_1 \otimes \hat{\mathcal O}_2 \otimes \cdot \cdot \cdot \otimes \hat{\mathcal O}_n, \end{equation} where each $\hat{\mathcal O}_i$ is an observable of the $i$th substate. We have now extended the concepts of state and dynamical variable to composite systems, so it is logical to treat an expectation value of a composite system. Of course, since a composite system is a state, axiom \ref{axm:expectation} applies, so we have \begin{equation} \left< \hat{ \mathcal O } \right> = \mathrm{Tr} \left(\hat{\rho} \hat{ \mathcal O } \right). \end{equation} However, composite systems afford us opportunities that single systems do not. Namely, just as we trace over the degrees of freedom of a system to calculate expectation values on that system, we can trace over some of the degrees of freedom of a composite state to focus on a specific subsystem.\footnote{Here, a degree of freedom of a state can be thought of as its dimensionality.
It is used analogously with the notion in a general system in classical mechanics, where the dimensionality of a system's configuration space corresponds to the number of degrees of freedom it possesses. For more on this, see ref. \cite{thornton}.} We call this operation the partial trace over a composite system, and we define it precisely below \cite{nielsenchuang}. \begin{boxeddefn}{Partial Trace\index{Partial Trace}}{def:partialtrace} Suppose we have an operator \begin{equation} \hat{ \mathcal Q} = \hat{\mathcal Q}_1 \otimes \hat{\mathcal Q}_2 \otimes \cdot \cdot \cdot \otimes \hat{\mathcal Q}_n. \end{equation} The \textbf{partial trace} of $\hat{\mathcal Q}$ over $\hat{\mathcal Q}_i$ is defined by \begin{equation} \mathrm{Tr}_i \left( \hat{ \mathcal Q} \right) \equiv \mathrm{Tr} \left( \hat{\mathcal Q}_i \right) \, \hat{\mathcal Q}_1 \otimes \hat{\mathcal Q}_2 \otimes \cdot \cdot \cdot \otimes \hat{\mathcal Q}_{i-1} \otimes \hat{\mathcal Q}_{i+1} \otimes \cdot \cdot \cdot \otimes \hat{\mathcal Q}_n, \end{equation} where the scalar $\mathrm{Tr} \left( \hat{\mathcal Q}_i \right)$ multiplies the remaining tensor product. The partial trace is extended to operators that are not of this product form by linearity. \end{boxeddefn} If the partial trace is applied to a composite system repeatedly such that all but one of the subsystem state operators are traced out, the remaining operator is called a reduced state operator \cite{nielsenchuang}. \begin{boxeddefn}{Reduced State Operator\index{Reduced State Operator}}{def:redstate} Suppose we have a composite system $\hat{\rho}$ with $n$ subsystems. The \textbf{reduced state operator for subsystem $i$} is defined by \begin{equation} \hat{\rho}^{(i)} = \mathrm{Tr}_1 \circ \mathrm{Tr}_2 \circ \cdot \cdot \cdot \circ \mathrm{Tr}_{i-1} \circ \mathrm{Tr}_{i+1} \circ \cdot \cdot \cdot \circ \mathrm{Tr}_n \left( \hat{\rho} \right). \end{equation} \end{boxeddefn} The partial trace and reduced state operator turn out to be essential in the analysis of composite systems, although that fact is not immediately obvious.
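Definition \ref{def:partialtrace} translates directly into a numerical procedure: for a two-subsystem operator, we may reshape its matrix into a four-index tensor and sum over one pair of indices. The following Python sketch (our own illustrative helper, \texttt{partial\_trace\_2}, not drawn from the references) verifies the definition on a product operator:

```python
import numpy as np

def partial_trace_2(Q, d1, d2, keep):
    """Partial trace of a (d1*d2 x d1*d2) matrix Q over one subsystem.

    keep=1 traces out subsystem 2 (returns a d1 x d1 matrix);
    keep=2 traces out subsystem 1 (returns a d2 x d2 matrix).
    """
    # Reshape into a rank-4 tensor T[i1, i2, j1, j2].
    T = Q.reshape(d1, d2, d1, d2)
    if keep == 1:
        return np.trace(T, axis1=1, axis2=3)  # sum over i2 = j2
    return np.trace(T, axis1=0, axis2=2)      # sum over i1 = j1

# Check against the definition on a product operator Q = Q1 (x) Q2,
# where the matrices Q1 and Q2 are arbitrary choices:
Q1 = np.array([[1.0, 2.0], [3.0, 4.0]])
Q2 = np.array([[0.0, 1.0], [1.0, 0.0]])
Q = np.kron(Q1, Q2)
# Tr_2(Q1 (x) Q2) = Tr(Q2) Q1 and Tr_1(Q1 (x) Q2) = Tr(Q1) Q2.
print(np.allclose(partial_trace_2(Q, 2, 2, keep=1), Q1 * np.trace(Q2)))  # True
print(np.allclose(partial_trace_2(Q, 2, 2, keep=2), Q2 * np.trace(Q1)))  # True
```

The reshape works because the Kronecker product orders the composite indices as $\left(i_1, i_2\right)$ row-major, so the four-index tensor recovers the subsystem indices exactly.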
To illustrate this, we consider some observable $\hat{\mathcal O}_m$ that acts only on the $m$th subsystem of a composite system. We choose a basis $\big\{\left| \Phi_k \right> \big \}_{k}$ for the composite space, where each element is formed by the Kronecker product of the basis elements of the corresponding subsystems. That is, each basis vector has the form $\left| \Phi_k \right> = \left| \phi_1 \phi_2 ... \phi_n \right>$, where each $\phi_l$ is one of the orthonormal basis vectors of the $l$th substate space. Then, from axiom \ref{axm:expectation}, we have \begin{eqnarray} \left< \hat{\mathcal O}_m \right> &=& \mathrm{Tr} \left( \hat{ \rho} \hat{ \mathcal O}_m \right) \nonumber \\ &=& \sum_{k} \left< \Phi_k \right|\hat{ \rho} \hat{ \mathcal O}_m \left| \Phi_k \right> \nonumber \\ &=& \sum_{k_1,k_2,...,k_n} \left< \phi_{k_1} \phi_{k_2} ... \phi_{k_n} \right| \hat{ \rho}\hat{ \mathcal O}_m \left| \phi_{k_1} \phi_{k_2} ... \phi_{k_n}\right>. \end{eqnarray} We use the resolution of the identity, eqn. \ref{eqn:projector}, to write our expectation value as \begin{equation} \sum_{k_1,k_2,...,k_n} \left< \phi_{k_1} \phi_{k_2} ... \phi_{k_n} \right| \hat{ \rho} \left( \sum_{j_1,j_2,...,j_n} \left| \phi_{j_1} \phi_{j_2} ... \phi_{j_n} \right> \left< \phi_{j_1} \phi_{j_2} ... \phi_{j_n}\right| \right)\hat{ \mathcal O}_m \left| \phi_{k_1} \phi_{k_2} ... \phi_{k_n}\right>, \end{equation} where $\left| \phi_{j_1} \phi_{j_2} ... \phi_{j_n} \right>$ corresponds to a basis vector. This becomes \begin{equation} \sum_{k,j} \left< \phi_{k_1} \phi_{k_2} ... \phi_{k_n} \right| \hat{ \rho} \left| \phi_{j_1} \phi_{j_2} ... \phi_{j_n} \right> \left< \phi_{j_1} \phi_{j_2} ... \phi_{j_n}\right| \hat{ \mathcal O}_m \left| \phi_{k_1} \phi_{k_2} ... \phi_{k_n}\right>. \end{equation} If the observable $\hat{\mathcal O}_m$ acts as identity on all but the $m$th subsystem, by eqn. \ref{eqn:tensorop}, we have \begin{equation} \sum_{k,j} \left< \phi_{k_1} \phi_{k_2} ...
\phi_{k_n} \right| \hat{ \rho} \left| \phi_{j_1} \phi_{j_2} ... \phi_{j_n} \right> \left< \phi_{j_m} \right| \hat{ \mathcal O}_m \left| \phi_{k_m} \right>\left< \phi_{j_1} ... \phi_{j_{m-1}} \phi_{j_{m+1}}...\phi_{j_n} \big | \phi_{k_1} ... \phi_{k_{m-1}} \phi_{k_{m+1}}...\phi_{k_n} \right>. \end{equation} Since our chosen basis is orthonormal, for any non-zero term in the sum, we must have $j=k$ (except for $j_m$ and $k_m$), in which case the final inner product is unity. Hence, we get \begin{equation} \sum_{k_1,k_2,...,k_n,j_m} \left< \phi_{k_1} \phi_{k_2} ... \phi_{k_n} \right| \hat{ \rho} \left| \phi_{k_1} ...\phi_{k_{m-1}} \phi_{j_m} \phi_{k_{m+1}}... \phi_{k_n} \right> \left< \phi_{j_m} \right| \hat{ \mathcal O}_m \left| \phi_{k_m} \right>. \end{equation} If we apply eqn. \ref{eqn:tensorop}, letting $\hat{\rho}=\hat{\rho}_1 \otimes \hat{\rho}_2 \otimes \cdot \cdot \cdot \otimes \hat{\rho}_n$, we have \begin{equation} \sum_{k_1,k_2,...,k_n,j_m} \left< \phi_{k_1} \right| \hat{\rho}_1 \left| \phi_{k_1} \right> \left< \phi_{k_2} \right| \hat{\rho}_2 \left| \phi_{k_2}\right> \cdot \cdot \cdot \left< \phi_{k_m} \right| \hat{\rho}_m \left| \phi_{j_m} \right> \cdot \cdot \cdot \left< \phi_{k_n} \right| \hat{\rho}_n \left| \phi_{k_n} \right>\left< \phi_{j_m} \right| \hat{ \mathcal O}_m \left| \phi_{k_m} \right>, \end{equation} or \begin{equation} \sum_{k_m,j_m} \mathrm{Tr}\left( \hat{\rho}_1 \right) \mathrm{Tr}\left( \hat{\rho}_2 \right) \cdot \cdot \cdot \left< \phi_{k_m} \right| \hat{\rho}_m \left| \phi_{j_m} \right> \cdot \cdot \cdot \mathrm{Tr}\left( \hat{\rho}_n \right) \left< \phi_{j_m} \right| \hat{ \mathcal O}_m \left| \phi_{k_m} \right>.
\end{equation} Since each trace is just a scalar, we can write \begin{equation} \sum_{k_m}\left< \phi_{k_m}\right| \mathrm{Tr}\left( \hat{\rho}_1 \right) \mathrm{Tr}\left( \hat{\rho}_2 \right) \cdot \cdot \cdot \hat{\rho}_m \cdot \cdot \cdot \mathrm{Tr}\left( \hat{\rho}_n \right) \left( \sum_{j_m} \left| \phi_{j_m} \right> \left< \phi_{j_m} \right| \right) \hat{ \mathcal O}_m \left| \phi_{k_m} \right>. \end{equation} Recognizing the definition \ref{def:redstate} for the reduced state operator and the resolution of the identity from eqn. \ref{eqn:projector}, we find \cite{ballentine} \begin{boxedeqn}{} \left< \hat{\mathcal O}_m \right> = \sum_{k_m} \left< \phi_{k_m}\right| \hat{\rho}^{(m)} \left( \, \hat 1 \, \right) \hat{\mathcal O}_m \left| \phi_{k_m} \right> = \mathrm{Tr} \left( \hat{\rho}^{(m)} \hat{\mathcal O}_m \right). \end{boxedeqn} Due to this remarkable result, we know that the reduced state operator for a particular subsystem is enough to tell us about any observable that only depends on the subsystem. Further, we end up with a formula for the expectation value of a component observable very similar to axiom \ref{axm:expectation} for observables of the full system. \section{Quantum Superposition}\label{sec:quantumsup} Though we have introduced some of the basic formalism of the state, we are still missing one of the key facets of quantum mechanics. This piece is the superposition principle, which, at the time of this writing, is one of the core aspects of quantum mechanics that no one fully understands. However, due to repeated experimental evidence, we take it as an axiom. \begin{boxedaxm}{Superposition Principle\index{Superposition Principle}}{axm:sup} Suppose that a system can be in two possible states, represented by the state vectors $\left| 0 \right>$ and $\left| 1 \right>$. 
Then, \begin{equation} \left| \psi \right> = \alpha \left| 0 \right> + \beta \left| 1 \right>, \end{equation} where $\alpha, \beta \in \mathbb C$, is also a valid state of the system, provided that $\left| \alpha \right|^2 + \left| \beta \right|^2 = 1$. \end{boxedaxm} The superposition principle allows us to create new and intriguing states that we would not have access to otherwise. In fact, if we have $n$ linearly independent states of a system, any normalized linear combination of them corresponds to a valid state of the system.\footnote{The reader might wonder why the superposition principle is necessary; after all, we know that state vectors exist in a Hilbert space, and Hilbert spaces act linearly. However, we were not guaranteed until now that any vector of unit norm in Hilbert space represents a valid physical situation. The superposition principle gives us this, which allows us great freedom in constructing states.} If we consider a two-state system with an orthonormal basis $\big \{ \left| 0 \right> , \left| 1 \right> \big \}$, the set of possible pure states (up to an unobservable overall phase) forms a 2-sphere, which is conveniently visualized embedded in 3-space. This visualization of a two-state system\index{Two-State System} is called the \textbf{Bloch sphere representation}\index{Bloch!Sphere}, and is pictured in figure \ref{fig:bloch_sphere} \cite{nielsenchuang}. \begin{figure}\label{fig:bloch_sphere} \end{figure} To calculate the position of a system in Bloch space, we use the formula \begin{equation} \label{eqn:blochvec} \hat{\rho} \leftrightarrow r_0 1 + \left< r \big| \sigma \right>, \end{equation} where $\left| r \right>$ is the 3-vector, \begin{equation} \left| r \right> \equiv r_1 \left| e_1 \right> + r_2 \left| e_2 \right> + r_3 \left| e_3 \right>, \end{equation} and $\left| \sigma \right>$ is the vector of Pauli spin matrices, \begin{equation} \left| \sigma \right> \equiv \sigma_x \left| e_1 \right> + \sigma_y \left| e_2 \right> + \sigma_z \left| e_3 \right>.
\label{eqn:paulivec} \end{equation} The Pauli matrices are \begin{equation} \hat{\sigma}_x \leftrightarrow \left(\begin{array}{cc}0 & 1 \\1 & 0\end{array}\right), \end{equation} \begin{equation} \hat{\sigma}_y \leftrightarrow \left(\begin{array}{cc}0 & -i \\i & 0\end{array}\right), \end{equation} and \begin{equation} \hat{\sigma}_z \leftrightarrow \left(\begin{array}{cc}1 & 0 \\0 & -1\end{array}\right). \end{equation} Writing eqn. \ref{eqn:blochvec} explicitly, we find \begin{equation} \hat \rho \leftrightarrow \left( \begin{array}{cc} r_0 + r_3 & r_1 -i r_2 \\ r_1+i r_2 & r_0 - r_3 \end{array} \right). \end{equation} The identity and the three Pauli matrices trivially form a basis for all two-by-two matrices, so we can indeed represent any $\hat \rho$ by eqn. \ref{eqn:blochvec}. Further, if we use the fact that $\mathrm{Tr}\left(\hat \rho \right) = 1$, we know \begin{equation} \mathrm{Tr}\left(\hat \rho \right) \leftrightarrow ( r_0 + r_3) + (r_0 - r_3) = 2r_0 = 1, \end{equation} so $r_0=1/2$. With this constraint in mind, it is conventional to write eqn. \ref{eqn:blochvec} as \cite{nielsenchuang} \begin{boxedeqn}{eqn:bloch2} \hat{\rho} \leftrightarrow \frac{1 + \left< r \big| \sigma \right>}{2}. \end{boxedeqn} Also, since $\hat \rho$ is self-adjoint, the diagonal entries must all be real, so $r_3 \in \mathbb R$. Moreover, using the orthogonality relation $\mathrm{Tr}\left( \hat{\sigma}_i \hat{\sigma}_j \right) = 2 \delta_{ij}$, the components may be computed directly as \begin{equation} r_i = \mathrm{Tr}\left( \hat{\rho} \hat{\sigma}_i \right), \end{equation} and the trace of a product of two self-adjoint operators is always real. Hence, $r_1$ and $r_2$ are also real, and $\left| r \right>$ is a real-valued vector.
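As a quick numerical check of this representation, the Python sketch below (our own construction; the helper names are illustrative) recovers the Bloch vector of a pure state from $r_i = \mathrm{Tr}\left( \hat{\rho} \hat{\sigma}_i \right)$ and reassembles the state operator via eqn. \ref{eqn:bloch2}:

```python
import numpy as np

# Pauli matrices and the identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def bloch_vector(rho):
    # r_i = Tr(rho sigma_i); real for any self-adjoint, unit-trace rho
    return np.real([np.trace(rho @ s) for s in (sx, sy, sz)])

def bloch_to_rho(r):
    # rho = (1 + <r|sigma>) / 2
    return (I2 + r[0] * sx + r[1] * sy + r[2] * sz) / 2

# Round trip on the pure state |psi> = (|0> + i|1>)/sqrt(2):
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
r = bloch_vector(rho)
print(r)                                   # [0. 1. 0.]: on the unit sphere
print(np.allclose(bloch_to_rho(r), rho))   # True
```

The unit length of the recovered Bloch vector reflects the purity of the chosen state.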
Since $\left| r \right>$ is real, we use it as a position vector that tells us the location of the system in Bloch space and call it the Bloch vector.\index{Bloch!Vector} If we have a pure state \begin{equation} \left| \psi \right> = \alpha \left| 0 \right> + \beta \left| 1 \right>, \end{equation} we can express the location of the state in terms of the familiar polar and azimuthal angles of polar-spherical coordinates. Taking into account our redefined, conventional $\left| r \right>$, eqn. \ref{eqn:bloch2} is \begin{equation} \frac{1 + \left< r \big| \sigma \right>}{2} \leftrightarrow \frac{1}{2} \left(\begin{array}{cc}1+r_z & r_x-i r_y \\r_x+ir_y & 1-r_z\end{array}\right). \end{equation} We use the polar-spherical coordinate identities for unit vectors \begin{eqnarray} r_x &=& \sin \theta \cos \phi, \nonumber \\ r_y &=& \sin \theta \sin \phi, \nonumber \\ r_z &=& \cos \theta, \end{eqnarray} to determine \begin{eqnarray} \frac{1}{2} \left(\begin{array}{cc}1+r_z & r_x-i r_y \\r_x+ir_y & 1-r_z\end{array}\right) &=& \frac{1}{2} \left(\begin{array}{cc}1+\cos \theta & \sin \theta \cos \phi-i \sin \theta \sin \phi \\ \sin \theta \cos \phi+i \sin \theta \sin \phi & 1- \cos \theta \end{array}\right) \nonumber \\ &=& \frac{1}{2} \left(\begin{array}{cc}1+\cos \theta & \sin \theta e^{-i\phi} \\ \sin \theta e^{i \phi} & 1- \cos \theta \end{array}\right) \nonumber \\ &=& \frac{1}{2} \left(\begin{array}{cc} 2 \cos^2 \left( \frac{\theta}{2} \right) & 2 \sin\left( \frac{\theta}{2} \right) \cos \left( \frac{\theta}{2} \right) e^{-i\phi} \\ 2 \sin\left( \frac{\theta}{2} \right) \cos \left( \frac{\theta}{2} \right) e^{i \phi} & 2 \sin^2 \left( \frac{\theta}{2} \right) \end{array}\right) \nonumber \\ &=& \left(\begin{array}{cc} \cos^2 \left( \frac{\theta}{2} \right) & \sin\left( \frac{\theta}{2} \right) \cos \left( \frac{\theta}{2} \right) e^{-i\phi} \\ \sin\left( \frac{\theta}{2} \right) \cos \left( \frac{\theta}{2} \right) e^{i \phi} & \sin^2 \left( \frac{\theta}{2} \right) 
\end{array}\right). \end{eqnarray} If we let $\alpha \equiv \cos \left( \theta / 2 \right)$ and $\beta \equiv e^{i \phi} \sin \left( \theta / 2 \right)$, the right side of eqn. \ref{eqn:bloch2} becomes \begin{equation} \left(\begin{array}{cc} \left| \alpha \right| ^2 & \alpha \beta^* \\ \beta \alpha^* & \left| \beta \right|^2 \end{array}\right) \leftrightarrow \left| \psi \right> \left< \psi \right| = \hat{\rho}. \end{equation} Hence, the state vector of the pure state is \cite{nielsenchuang} \begin{boxedeqn}{} \left| \psi \right> =\cos \left( \frac{\theta}{2} \right) \left| 0 \right> + e^{i \phi} \sin \left( \frac{\theta}{2} \right) \left| 1 \right>. \label{eqn:bloch_def} \end{boxedeqn} We note that the coefficient on $\left| 0 \right>$ is apparently restricted to be real. However, unlike state operators, state vectors are not unique; physically identical state vectors may differ by a phase factor $e^{i \gamma}$ \cite{ballentine}. The notion of superposition\index{Superposition} also enables us to refine our classification of composite systems. Besides distinguishing between pure and impure states, physicists subdivide composite pure states into two categories: entangled states and product states. \begin{boxeddefn}{Product State\index{Product State}\index{Entangled State}}{} Suppose $\hat{\rho}$ is a pure composite quantum system with associated state vector $\left| \psi \right>$. If there exist state vectors $\left| \phi_1 \right>$ and $\left| \phi_2 \right> $ such that \begin{equation} \left| \psi \right> = \left| \phi_1 \right> \otimes \left| \phi_2 \right>, \end{equation} then we call $\left| \psi \right>$ a \textbf{product state}. If no such vectors exist, then we say $\left| \psi \right>$ is \textbf{entangled}. \end{boxeddefn} To construct entangled states, we take product states and put them into superposition. To illustrate this concept, we consider the following example.
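Before turning to that example, we note that the angular parametrization above is easy to verify numerically: for arbitrary $\theta$ and $\phi$, the state vector of eqn. \ref{eqn:bloch_def} should reproduce the state operator built from the unit Bloch vector $\left( \sin \theta \cos \phi, \sin \theta \sin \phi, \cos \theta \right)$. A short sketch of this check (our own, with arbitrarily chosen angles):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta, phi = 1.1, 2.3  # arbitrary angles on the sphere

# |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
rho_from_psi = np.outer(psi, psi.conj())

# rho = (1 + <r|sigma>)/2 with the unit Bloch vector in spherical coordinates
r = (np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta))
rho_from_bloch = (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz) / 2

print(np.allclose(rho_from_psi, rho_from_bloch))  # True
```

The agreement holds for any choice of the two angles, which is exactly the half-angle identity used in the derivation above.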
\section{Example: The Bell State}\label{sec:bellstate} An important example of an entangled state of two two-state systems is called the \textbf{Bell State}\index{Bell State}. Before we define this system, we need to develop some machinery to work with two-state\index{Two-State System} systems. We use the orthonormal basis set introduced previously for a single, pure, two-state system, $\big \{ \left| 0 \right> , \left| 1 \right> \big \}$, which we represent as column matrices by \begin{eqnarray} \left| 0 \right>&\leftrightarrow& \left( \begin{array}{c} 1 \\ 0 \end{array} \right), \nonumber \\ \left| 1 \right> &\leftrightarrow& \left( \begin{array}{c} 0 \\ 1 \end{array} \right). \end{eqnarray} In this representation, we define an orthonormal basis for two of these two-state systems as \cite{nielsenchuang} \begin{equation} \big \{ \left| 0 \right>\otimes\left| 0 \right> , \left| 0 \right>\otimes \left| 1 \right>,\left| 1 \right> \otimes \left| 0 \right> ,\left| 1 \right>\otimes\left| 1 \right>\big \} = \big \{ \left| 00 \right>,\left| 01 \right>,\left| 10 \right>,\left| 11 \right> \big \}, \end{equation} which have matrix representations \begin{eqnarray} &\left| 00 \right>& \leftrightarrow \left( \begin{array}{c} 1 \\ 0 \\0 \\0 \end{array} \right), \, \, \, \left| 01 \right> \leftrightarrow \left( \begin{array}{c} 0 \\ 1\\0\\0 \end{array} \right), \nonumber \\ &\left| 10 \right>& \leftrightarrow \left( \begin{array}{c} 0 \\ 0 \\1\\0 \end{array} \right), \, \, \, \left| 11 \right> \leftrightarrow \left( \begin{array}{c} 0 \\ 0\\0\\1 \end{array} \right). \end{eqnarray} By the superposition principle, we define the state \begin{equation} \left| \psi_B \right> \equiv \frac{\left| 00 \right> + \left| 11 \right> }{\sqrt{2}} \leftrightarrow \left(\begin{array}{c} \frac{1}{\sqrt 2} \\ 0 \\ 0\\ \frac{1}{\sqrt 2} \end{array}\right), \end{equation} which is the Bell state. 
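This construction is straightforward to reproduce numerically; the sketch below (our own, using NumPy's \texttt{kron} as the tensor product) builds the composite basis vectors and the Bell state:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Composite basis vectors |00> and |11> via the Kronecker product
ket00 = np.kron(ket0, ket0)
ket11 = np.kron(ket1, ket1)

# The Bell state |psi_B> = (|00> + |11>)/sqrt(2)
psi_B = (ket00 + ket11) / np.sqrt(2)
print(psi_B)                         # [0.707..., 0, 0, 0.707...]
print(np.isclose(psi_B @ psi_B, 1))  # normalized: True
```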
To check if this state is entangled, we see if we can write $\left| \psi_B \right> = \left| \phi_A \right> \otimes \left| \phi_B \right>$ for some vectors $\left| \phi_A \right>$ and $\left| \phi_B \right>$. As matrices, this equation is \begin{equation} \left(\begin{array}{c} \frac{1}{\sqrt 2} \\ 0 \\ 0\\ \frac{1}{\sqrt 2} \end{array}\right) = \left(\begin{array}{c} a_1\\ a_2 \end{array}\right) \otimes \left(\begin{array}{c} b_1 \\ b_2 \end{array}\right) = \left(\begin{array}{c}a_1 b_1 \\ a_1 b_2 \\ a_2 b_1\\ a_2 b_2 \end{array}\right). \end{equation} This is a system of four simultaneous equations, $\frac{1}{ \sqrt 2} = a_1 b_1$, $0 = a_1 b_2$, $0 = a_2 b_1$, and $\frac{1}{ \sqrt 2} = a_2 b_2$. Since $\frac{1}{ \sqrt 2} = a_1 b_1$, $a_1\neq 0$ and $b_1 \neq 0$. Then, since $a_1 b_2 = 0$, $b_2=0$. But $\frac{1}{ \sqrt 2} = a_2 b_2$, so $b_2 \neq 0 $, which is a contradiction. Hence, $\left| \phi_A \right> $ and $\left| \phi_B \right>$ do not exist, so $\left| \psi_B \right>$ is entangled \cite{nielsenchuang}. Next, we compute the state operator corresponding to $\left| \psi_B \right>$. By definition \ref{defn:pure}, since the Bell state is pure by construction, its state operator is \begin{eqnarray} \hat{ \rho} &=& \left| \psi_B \right> \left< \psi_B \right| \nonumber \\ &=& \left( \frac{\left| 00 \right> + \left| 11 \right> }{\sqrt{2}} \right) \left( \frac{\left< 00 \right| + \left< 11 \right| }{\sqrt{2}} \right) \nonumber \\ &=& \frac{\left| 00 \right> \left< 00 \right| +\left| 00 \right> \left< 11\right| +\left| 11 \right> \left< 00 \right| +\left| 11 \right> \left< 11 \right|}{2} \nonumber \\ &\leftrightarrow& \left( \begin{array}{cccc} \frac{1}{2} & 0 & 0 & \frac{1}{2} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \frac{1}{2} & 0 & 0 & \frac{1}{2} \\ \end{array} \right). \end{eqnarray} Even though we constructed the Bell state from a state vector, we will explicitly verify its purity as an example.
We find \begin{eqnarray} \left( \hat{ \rho} \right)^2 &=& \left(\frac{\left| 00 \right> \left< 00 \right| +\left| 00 \right> \left< 11\right| +\left| 11 \right> \left< 00 \right| +\left| 11 \right> \left< 11 \right|}{2} \right) \left(\frac{\left| 00 \right> \left< 00 \right| +\left| 00 \right> \left< 11\right| +\left| 11 \right> \left< 00 \right| +\left| 11 \right> \left< 11 \right|}{2} \right) \nonumber \\ &=& \frac{2 \left(\left| 00 \right> \left< 00 \right| +\left| 00 \right> \left< 11\right| +\left| 11 \right> \left< 00 \right| +\left| 11 \right> \left< 11 \right| \right)}{4} \nonumber \\ &=& \frac{ \left(\left| 00 \right> \left< 00 \right| +\left| 00 \right> \left< 11\right| +\left| 11 \right> \left< 00 \right| +\left| 11 \right> \left< 11 \right| \right)}{2} \nonumber \\ &=& \hat{\rho}, \end{eqnarray} which confirms that the Bell state is pure. Next, suppose we want to measure some particular facet of the first subsystem. Since the Bell state is entangled, we cannot ``eyeball'' the result, but rather we need to use the reduced state machinery we developed in definition \ref{def:redstate}.
The reduced state operator for the first subsystem is \begin{eqnarray}\label{eqn:thisisamixture} \hat{\rho}^{(1)} &=& \mathrm{Tr}_2 \left( \hat{\rho} \right) \nonumber \\ &=& \mathrm{Tr}_2 \left( \frac{\left| 00 \right> \left< 00 \right| +\left| 00 \right> \left< 11\right| +\left| 11 \right> \left< 00 \right| +\left| 11 \right> \left< 11 \right|}{2} \right) \nonumber \\ &=& \frac{1}{2} \mathrm{Tr}_2 \left( \left| 0 \right> \left< 0 \right| \otimes \left| 0 \right> \left< 0 \right| + \left| 1 \right> \left< 0 \right| \otimes \left| 1 \right> \left< 0 \right| + \left| 0 \right> \left< 1 \right| \otimes \left| 0 \right> \left< 1 \right| + \left| 1 \right> \left< 1 \right| \otimes \left| 1 \right> \left< 1 \right| \right) \nonumber \\ &=& \frac{1}{2} \big ( \left| 0 \right> \left< 0 \right| \cdot \mathrm{Tr}\left( \left| 0 \right> \left< 0 \right| \right) + \left| 1 \right> \left< 0 \right| \cdot \mathrm{Tr}\left( \left| 1 \right> \left< 0 \right|\right) + \left| 0 \right> \left< 1 \right| \cdot \mathrm{Tr}\left( \left| 0 \right> \left< 1 \right| \right)+ \left| 1 \right> \left< 1 \right| \cdot \mathrm{Tr}\left( \left| 1 \right> \left< 1 \right| \right) \big ) \nonumber \\ &=& \frac{1}{2} \big ( \left| 0 \right> \left< 0 \right| \cdot 1 + \left| 1 \right> \left< 0 \right| \cdot 0+ \left| 0 \right> \left< 1 \right| \cdot 0+ \left| 1 \right> \left< 1 \right| \cdot 1 \big ) \nonumber \\ &=& \frac{ \left| 0 \right> \left< 0 \right| + \left| 1 \right> \left< 1 \right| }{2} \nonumber \\ &\leftrightarrow& \left(\begin{array}{cc}\frac{1}{2} & 0 \\0 & \frac{1}{2}\end{array}\right). \end{eqnarray} Oddly enough, \begin{equation} \left( \hat{\rho}^{(1)} \right) ^2 = \frac{ \left| 0 \right> \left< 0 \right| + \left| 1 \right> \left< 1 \right| }{4} \neq \hat{\rho}^{(1)}, \end{equation} so $\hat{\rho}^{(1)}$ is impure \cite{nielsenchuang}. Surprisingly, a pure composite system does not necessarily contain pure subsystems.
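The calculation in eqn. \ref{eqn:thisisamixture} can also be confirmed numerically. The sketch below (our own; the reshape-based partial trace is an illustrative implementation, not taken from the references) traces out the second subsystem of the Bell state operator and checks that the composite state is pure while the reduced state is maximally mixed:

```python
import numpy as np

# Bell state operator rho = |psi_B><psi_B|
psi_B = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)
rho = np.outer(psi_B, psi_B)

# Partial trace over subsystem 2: reshape to T[i1, i2, j1, j2], sum i2 = j2
rho1 = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho1)                                      # [[0.5, 0], [0, 0.5]]

# Purity check via Tr(rho^2): 1 for pure states, < 1 for mixtures
print(np.isclose(np.trace(rho @ rho), 1.0))      # True  (composite is pure)
print(np.isclose(np.trace(rho1 @ rho1), 0.5))    # True  (subsystem is impure)
```

The value $\mathrm{Tr}\big( (\hat{\rho}^{(1)})^2 \big) = 1/2$ is the smallest possible for a two-state system, matching the totally mixed character of the reduced state.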
If we express $\hat{\rho}^{(1)}$ in terms of the Pauli matrices and the identity as in eqn. \ref{eqn:blochvec}, we find that the Bloch vector corresponding to $\hat{\rho}^{(1)}$ is $\left| r\right> = 0$. We already noted that in Bloch space, the unit two-sphere represents all the possible pure state configurations for a two-state system. However, the unit ball represents all state configurations; the impure states have $\left< r \big| r \right> < 1$ \cite{nielsenchuang}. The reduced Bell state, with $\left< r\big| r \right>=0$, is a special case of a totally mixed or impure state, meaning that the subsystem is entirely statistical (classical). By symmetry, if we had traced out the first subsystem rather than the second, we would find $\hat{\rho}^{(1)}=\hat{\rho}^{(2)}$, so we actually have an entangled state composed of totally classical subsystems. \section{Projection Onto a Basis}\label{sec:projonbasis} So far, we have worked mostly in an abstract Hilbert space, although we have taken brief forays into matrix representations of states and observables. In this section, we formalize the notion of a representation of an operator in a basis. We are mainly interested in infinite and continuous bases\index{Basis!Continuous}, which we use to define a very useful structure \cite{cohtan}. \begin{boxeddefn}{Wavefunction\index{Wavefunction}}{defn:wavefunction} Suppose that we have an infinite and continuous basis for $\mathcal H$, $\big\{ \left| x \right> \big\}_{x \in \mathbb R}$. Then, for some pure state vector $\left| \psi \right> \in \mathcal H$, we form the \textbf{wavefunction} \begin{equation} \psi : \mathbb R \rightarrow \mathbb C \end{equation} defined by \begin{equation} \psi(x) \equiv \left< x \big | \psi \right>.
\end{equation} \end{boxeddefn} We note that if \begin{equation} \left| \psi \right> = \sum_{x \in \mathbb R} a_x \left| x \right>, \end{equation} then \begin{equation} \left< \psi \right| = \sum_{x \in \mathbb R} a_x^* \left< x \right|, \end{equation} so \begin{equation} \mathrm{Tr}\left( \hat{\rho} \right) = \sum_{x \in \mathbb R} \left< x \big | \psi \right> \left< \psi \big| x \right> = \sum_{x \in \mathbb R } \psi^*(x) \psi(x), \end{equation} where we have used the conjugate symmetry of the inner product given by eqn. \ref{eqn:diracinner}. But since this is a sum over a continuous interval, it can be written as an integral. We obtain \begin{equation} \mathrm{Tr}\left( \hat{\rho} \right) = \int dx \cdot \psi^*(x) \psi(x) = \int dx \cdot \left| \psi(x)\right|^2 = 1, \end{equation} as the state operator has unit trace. Since the integral ranges over all of $\mathbb R$, it must be that $\psi(x)$ decays at infinity sufficiently fast that the integral converges. This special class of functions is known as the set of square-normalizable functions, and is often denoted as $L^2$. Physically, this means that the wavefunction must be \textit{localized} in some sense, so that at extreme distances it is effectively zero. Just as we projected a vector into a basis and obtained a function, we can project a linear operator acting in Hilbert space onto a basis to obtain a linear operator in function space. We denote such an operator with a check $\left( \, \check{ } \, \right)$, and define it by \cite{cohtan}\index{Linear!Operator!on a Function Space} \begin{equation}\label{eqn:checkop} \check{\mathcal O} \psi(x) \equiv \left< x \right| \hat{\mathcal O} \left| \psi \right>, \end{equation} where $\hat{\mathcal O}$ is an operator on a Hilbert space. An interesting application of this considers the matrix elements, given by eqn. \ref{eqn:matrixelem}, of a state operator $\hat{\rho}$ in the position basis.
If $\hat \rho$ is pure, then \begin{boxedeqn}{} \rho(x,y) = \left< x \right| \hat \rho \left| y \right> = \left< x \big| \psi \right> \left< \psi \big| y \right> = \psi(x) \psi^*(y). \end{boxedeqn} Since we previously established that every valid wavefunction must vanish quickly at infinity, it follows that sufficiently off-diagonal elements of the state operator, as well as elements at distant points along the diagonal, must also vanish quickly. \chapter{Dynamics}\label{chap:dynamics} \lettrine[lines=2, lhang=0.33, loversize=0.1]{Q}uantum dynamics is the framework that evolves a quantum state forward in time. We begin by considering the Galilei group\index{Group!Galilei} of transformations, under which all non-relativistic physics is believed to be invariant. We show that this group leads to the fundamental commutator relations that govern quantum dynamics, and then use them to derive the famous Schr\"odinger equation. Finally, we consider the free particle in the position basis. \section{The Galilei Group}\label{sec:galgroup} Fundamental to the notion of dynamics is the physical assumption that certain transformations will not change the physics of a situation \cite{ballentine}. All known experimental evidence supports this assumption, and it seems reasonable mathematically. This set of transformations forms a \textit{group}, called the Poincar\'e group\index{Group!Poincar\'e} of space translations, time translations, and Lorentz transformations.\footnote{The term group\index{Group} here is used in the formal, mathematical sense. We will not dwell on many of the subtleties that arise due to this here, and the interested reader is directed to ref. \cite{jones}.} However, for our purposes we take $v \ll c$, so the Poincar\'e group becomes the classical Galilei group, which we take as an axiom. For clarity, we assume a pure state in one temporal and one spatial dimension, but this treatment can be readily extended to impure states in three-dimensional space \cite{lindner}.
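The three transformations that generate this group, defined in the axiom below, can also be sketched concretely as operations on ordinary functions. The following minimal Python sketch is purely illustrative (the Gaussian test function, the parameter $\epsilon = 0.3$, and the evaluation point are assumptions); it shows that boosts and time translations fail to commute, which is the key fact exploited in the derivation that follows the axiom.

```python
import math

# Illustrative sketch of the Galilei transformations acting on a wavefunction
# psi(x, t). The traveling Gaussian and the parameter values are assumptions
# chosen only for demonstration.
def psi(x, t):
    # A traveling Gaussian bump centered at x = t.
    return math.exp(-((x - t) ** 2))

def S(eps, f):
    # Space translation:  (S_eps f)(x, t) = f(x + eps, t).
    return lambda x, t: f(x + eps, t)

def T(eps, f):
    # Time translation:   (T_eps f)(x, t) = f(x, t + eps).
    return lambda x, t: f(x, t + eps)

def L(eps, f):
    # Galilei boost:      (L_eps f)(x, t) = f(x + eps * t, t).
    return lambda x, t: f(x + eps * t, t)

eps, x0, t0 = 0.3, 1.0, 2.0

# A boost followed by a time translation differs from the reverse order,
# so the two transformations do not commute.
lt = L(eps, T(eps, psi))(x0, t0)
tl = T(eps, L(eps, psi))(x0, t0)
print(abs(lt - tl) > 1e-6)      # True: order matters
```

The non-commutativity demonstrated here is exactly what the composite identity in the text quantifies.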
\begin{boxedaxm}{Invariance Under the Galilei Group\index{Group!Galilei}}{} Let $G$ be the Galilei group, which contains elements generated by the composition of the operators \begin{eqnarray} \check S_{\epsilon} \psi(x,t) &=& \psi(x +\epsilon,t) \nonumber \\ \check{T}_{\epsilon} \psi(x,t) &=& \psi(x,t+\epsilon) \nonumber \\ \check{L}_{\epsilon} \psi(x,t) &=& \psi(x+\epsilon t, t) , \end{eqnarray} where $\psi (x,t)$, given by definition \ref{defn:wavefunction}, is a function of position and time, and $(\check{\,})$ represents an operator on the space of such functions, as defined in eqn. \ref{eqn:checkop}. Let $\check g \in G$ and let $\hat A$ be an observable of the state $\left| \psi \right>$ with eigenvectors $\big\{ \left| \phi_n \right> \big\}_{n \in \mathbb R}$ and eigenvalues $\{ a_n \}_{n \in \mathbb R}$. Then, if $\hat A \left| \phi_n \right> = a_n \left| \phi_n \right>$ and we write $v' \equiv \check g v$ for every wavefunction $v(x,t)$, we assert \begin{equation}\label{eqn:galobservables} \hat A ' \left| \phi_n' \right> \equiv a_n \left| \phi_n' \right> \end{equation} and \begin{equation}\label{eqn:galstates} \left| \left< \phi_n \big | \psi \right> \right|^2 \equiv \left| \left< \phi_n' \big| \psi' \right> \right|^2. \end{equation} \end{boxedaxm} In essence, eqns. \ref{eqn:galobservables} and \ref{eqn:galstates} assert the invariance of the possible measurement\index{Measurement} outcomes and the invariance of their probabilities, and thus the invariance of all physics, under the Galilei group. We now write a motivating identity using the Galilei group.
Considering the state wavefunction $\psi(x,t)$, we find \cite{lindner} \begin{eqnarray} \label{eqn:comm1} \check L_{\epsilon}^{-1} \check T_{\epsilon}^{-1} \check L_{\epsilon} \check T_{\epsilon} \psi(x,t) &=& \check L_{-\epsilon} \check T_{-\epsilon} \check L_{\epsilon} \check T_{\epsilon} \psi(x,t) \nonumber \\ &=& \check L_{-\epsilon} \check T_{-\epsilon} \check L_{\epsilon} \psi(x,t+\epsilon) \nonumber \\ &=& \check L_{-\epsilon} \check T_{-\epsilon} \psi(x+\epsilon(t+\epsilon),t+\epsilon) \nonumber \\ &=& \check L_{-\epsilon} \psi(x+\epsilon(t+\epsilon),t) \nonumber \\ &=& \psi(x+\epsilon(t+\epsilon)-\epsilon t,t) \nonumber \\ &=& \psi(x+ \epsilon^2,t) \nonumber \\ &=& \check S_{\epsilon^2} \psi (x,t) . \end{eqnarray} We conclude that these transformations do not commute, which will play a major role in the dynamics of quantum mechanics. Before we move to a Hilbert space, we need to convert our Galilei group into a more useful form. Due to eqn. \ref{eqn:galstates}, we can make use of Wigner's theorem, which guarantees that any Galilei transformation corresponds to a \textbf{unitary} operator $\hat U$ on a Hilbert space that obeys\footnote{Wigner's theorem is complicated to prove. See ref. \cite{bargmann} for a thorough treatment.} \begin{equation} \hat U \hat U^{\dagger} = \hat U^{\dagger} \hat U= \hat 1. \end{equation} Now suppose that $\hat U$ is a unitary representative of a Galilei group member and that $\hat A$ is an observable. If we take \begin{equation} \left| u' \right> \equiv \hat U \left| u \right> \end{equation} for all $\left| u \right> \in \mathcal H$, we have \begin{equation} \hat A ' \left| \phi_n '\right> = a_n \left| \phi_n ' \right> \Rightarrow \hat A ' \hat U \left| \phi_n \right> = a_n \hat U \left| \phi_n \right>, \end{equation} so \begin{equation} \hat U^{\dagger} \hat A' \hat U \left| \phi_n \right> = \left( \hat U^{\dagger} \hat U \right) a_n \left| \phi_n \right> = a_n \left| \phi_n \right>.
\end{equation} Hence, we get \begin{equation} \hat A \left| \phi_n \right> - \hat U^{\dagger} \hat A' \hat U \left| \phi_n \right> = a_n \left| \phi_n \right> - a_n \left| \phi_n \right> = \hat 0. \end{equation} Since this equation holds for all eigenvectors of $\hat A$, we have \cite{ballentine} \begin{equation} \label{eqn:operatortrans} \hat A - \hat U^{\dagger} \hat A' \hat U = 0 \Rightarrow \hat A = \hat U^{\dagger} \hat A' \hat U \Rightarrow \hat A' = \hat U \hat A \hat U^{\dagger}. \end{equation} We now take our unitary transformation to be a function of a single parameter, $t$, subject to $\hat U(t_1+t_2) = \hat U(t_1) \hat U(t_2)$ and $\hat U(0)=\hat 1$. Then, for small $t$, we take the Taylor expansion of $\hat U$ about $t=0$ to get\footnote{We will be making frequent use of the Taylor expansion. Readers unfamiliar with it are advised to see ref. \cite{riley}.} \begin{equation} \hat U(t) = \hat 1 + t \frac{d \hat U}{dt}\Big | _{t=0}+... \, . \end{equation} Similarly, we know that \cite{lindner} \begin{eqnarray} \hat 1 &=&\hat U \hat U^{\dagger} \nonumber \\ &=& \hat 1 + t \frac{d\hat U \hat U^{\dagger}}{dt}\Big|_{t=0} + ... \nonumber \\ &=& \hat 1 + t \left( \frac{d\hat U }{dt}\hat U^{\dagger} +\hat U\frac{d\hat U^{\dagger} }{dt} \right) _{t=0}+ ... \nonumber \\ &\sim& \hat 1 + t \left( \frac{d\hat U }{dt} +\frac{d\hat U^{\dagger} }{dt} \right) _{t=0}+ ... \, , \nonumber \\ \end{eqnarray} as $t \sim 0$ and $\hat U(0)=\hat U^{\dagger}(0)=\hat 1$. Since $\hat 1 = \hat U \hat U^{\dagger}$ for all $t$, it must be that \begin{equation} \left( \frac{d\hat U }{dt}+\frac{d\hat U^{\dagger} }{dt} \right) _{t=0} = \hat 0. \end{equation} We now let \begin{equation} \frac{d\hat U }{dt}\Big |_{t=0} \equiv i \hat K, \end{equation} and the unitarity condition above then forces $\hat K$ to be self-adjoint, since $i \hat K + (i \hat K)^{\dagger} = \hat 0$ implies $\hat K = \hat K^{\dagger}$. We impose the boundary condition $\hat U (0) = \hat 1$ to find the solution to this first-order differential equation, \begin{equation} \hat U(t) = e^{i \hat K t}.
\end{equation} Since any unitary operator can be represented in this form, we now define the three generating operators of the Galilei group\index{Group!Galilei!Unitary Representatives}. They are \cite{lindner} \begin{eqnarray}\label{eqn:galdefexp} \check S_{x} \psi(x) &=& \left< x \right| \hat S_x \left| \psi \right> \equiv \left< x \right| e^{-i x \hat p} \left| \psi \right> \nonumber \\ \check T_t \psi(x)&=& \left< x \right| \hat T_t \left| \psi \right>\equiv \left< x \right| e^{-i t \hat h} \left| \psi \right>\nonumber \\ \check L_v \psi (x)& = & \left< x \right| \hat L_v \left| \psi \right>\equiv \left< x \right| e^{i v \hat f }\left| \psi \right>, \end{eqnarray} where $\hat f$, $\hat h$, and $\hat p$ are self-adjoint, and the particular signs and parameters associated with the transformations are matters of convention. \section{Commutator Relationships} We next introduce three particular observables. First, the position operator, $\hat Q$\index{Position Operator}, obeys the eigenvalue equation \begin{equation} \hat Q \left| x \right> = x \left| x \right>, \end{equation} where $\left| x \right>$ is an eigenvector of the position, i.e. a state of definite position. Second, the momentum operator, $\hat P$\index{Momentum Operator}, follows \begin{equation} \hat P \left| p \right> = p \left| p \right>. \end{equation} We require that the expectation values of these operators follow the classical relationship $p = m \, dx/dt$ \cite{griffiths}, \begin{equation}\label{eqn:classcorsp} \left< \hat P \right> \equiv m \frac{d \left<\hat Q\right> }{dt}. \end{equation} Further, we define the energy operator $\hat H$, also known as the \textbf{Hamiltonian}\index{Hamiltonian}, in analogy to the classical total energy of a system, which is the kinetic energy $P^2/(2m)$ plus some potential energy $V$. It is \begin{equation}\label{eqn:eop} \hat H \equiv \frac{1}{2m} \hat P^2 + V.
\end{equation} First, note that \begin{equation} \hat H \hat P = \left( \frac{1}{2m} \hat P ^2 + V\right) \hat P = \hat P \left(\frac{1}{2m} \hat P ^2 + V \right) = \hat P \hat H, \end{equation} so $\left[ \hat H , \hat P \right] = 0$, where we have treated the potential $V$ as a constant scalar (as for a free particle), so that it commutes with $\hat P$. Next, recall that \begin{equation}\label{eqn:schropic} \left| \psi(t + \epsilon) \right> = \hat T_{\epsilon} \left| \psi(t) \right>. \end{equation} By the definition of the derivative, we have \cite{lindner} \begin{eqnarray} \frac{d}{dt} \left| \psi(t) \right> &=& \lim_{\epsilon \rightarrow 0} \frac{\left| \psi(t+\epsilon) \right> - \left| \psi(t) \right>}{\epsilon} \nonumber \\ &=& \lim_{\epsilon \rightarrow 0} \frac{e^{-i \epsilon \hat h} \left| \psi(t) \right> - \left| \psi(t) \right>}{\epsilon} \nonumber \\ &=& \lim_{\epsilon \rightarrow 0} \frac{\left(1-i \epsilon \hat h+\left(-i\epsilon \hat h \right)^2/2 +... \right) \left| \psi(t) \right> - \left| \psi(t) \right>}{\epsilon} \nonumber \\ &=& \lim_{\epsilon \rightarrow 0} \left( -i \hat h \left| \psi(t)\right> - \frac{\epsilon}{2} \hat h^2 \left| \psi(t) \right> + ... \right) \nonumber \\ &=& -i \hat h \left| \psi(t)\right> . \end{eqnarray} Following identical logic, we find \cite{lindner} \begin{equation} \frac{d}{dt} \left< \psi(t) \right| = +i \left< \psi(t)\right| \hat h. \end{equation} Since $\left| \psi(t) \right>$ is pure, we use eqn.
\ref{eqn:recoverexp} to write \begin{eqnarray} \frac{d}{dt} \left< \hat Q \right>(t) &=& \frac{d}{dt} \left< \psi(t) \right| \hat Q \left| \psi(t) \right> \nonumber \\ &=& \left( \frac{d}{dt} \left< \psi(t) \right|\right) \hat Q \left| \psi(t) \right> +\left< \psi(t) \right| \hat Q \left( \frac{d}{dt} \left| \psi(t) \right> \right) \nonumber \\ &=& i \left< \psi(t)\right| \hat h \hat Q \left| \psi(t) \right> -\left< \psi(t) \right| \hat Q i \hat h \left| \psi(t)\right> \nonumber \\ &=& \left< \psi(t)\right| i \left( \hat h \hat Q - \hat Q \hat h \right) \left| \psi(t) \right> \nonumber \\ &=& \left< \psi(t)\right| i \left[ \hat h, \hat Q \right] \left| \psi(t) \right>, \end{eqnarray} so \begin{equation} \frac{d}{dt} \left< \hat Q \right>(t) = \left< i \left[ \hat h, \hat Q \right] \right>. \end{equation} Then, by eqn. \ref{eqn:classcorsp}, we have \begin{equation} \frac{1}{m} \left< \hat P \right> = \left< i \left[ \hat h, \hat Q \right] \right> \Leftrightarrow \left< \psi(t) \right| \frac{1}{m} \hat P \left| \psi(t) \right> = \left< \psi(t) \right| i \left[ \hat h, \hat Q \right] \left| \psi(t) \right>. \end{equation} Since this result holds for arbitrary $\left| \psi(t) \right>$, we get \begin{equation} \frac{1}{m}\hat P = i \left[ \hat h, \hat Q \right] , \end{equation} or \cite{lindner} \begin{equation} \left[ \hat Q, \hat h \right] = i \frac{1}{m} \hat P. \end{equation} We next continue working with the position operator to derive a second relation. Recall that from eqn. \ref{eqn:operatortrans}, a unitary transformation defined by \begin{equation} \left| \psi ' \right> = \hat U \left| \psi \right> \end{equation} transforms an operator as \begin{equation} \hat A ' = \hat U \hat A \hat U^{\dagger}. 
\end{equation} So, if our unitary operator is $\hat S_{x_0} = e^{-i x_0 \hat p}$, we can transform the position operator $\hat Q$ to $\hat Q '$ according to \begin{equation}\label{eqn:posexps} \hat Q ' = \hat S_{x_0} \hat Q \hat S_{x_0}^{\dagger} =e^{-i x_0 \hat p} \hat Q e^{+i x_0 \hat p}. \end{equation} By our definition of $\hat Q$, we know\footnote{This is because $\left| x \right>$ and $\left| x' \right>$ are valid eigenvectors of $\hat Q$, as the spectrum of allowed positions (the eigenvalues for $\hat Q$) is the entire real line.} \begin{equation} \hat Q \left| x \right> = x \left| x \right> \Rightarrow \hat Q \left| x' \right> = x' \left| x' \right>. \end{equation} Further, eqn. \ref{eqn:galobservables} tells us \begin{equation} \hat Q ' \left| x ' \right> = x \left| x ' \right>. \end{equation} Thus, \begin{equation} \left( \hat Q ' - \hat Q \right) \left| x' \right> = (x-x') \left| x' \right> = \left( x - (x+x_0) \right) \left| x ' \right> = - x_0 \left| x' \right>.\footnote{Note that $x'=x+x_0$, since $\check S_{x_0} \psi(x) = \psi(x_0+x) = \psi(x')$.} \end{equation} Note that this relationship holds for arbitrary $x_0$, and hence for all $\left| x' \right>$. This implies \cite{lindner} \begin{equation} \hat Q ' = \hat Q - x_0 . \end{equation} Recalling our definition for $\hat Q '$, we have \begin{equation} \label{eqn:qpcomm3} e^{-i x_0 \hat p} \hat Q e^{+i x_0 \hat p} = \hat Q - x_0 . \end{equation} As before, we expand the exponential terms in a Taylor series to obtain \begin{eqnarray} \left( \sum_{n=0}^{\infty}\frac{ \left( -i x_0 \hat p \right)^n}{n!} \right) \hat Q \left( \sum_{n=0}^{\infty}\frac{ \left( i x_0 \hat p \right)^n}{n!} \right) &=& \left( 1 - i x_0 \hat p + ... \right) \hat Q \left( 1 + i x_0 \hat p + ... \right) \nonumber \\ &=& \hat Q - i x_0 \hat p \hat Q + i x_0 \hat Q \hat p +... \nonumber \\ &=& \hat Q + i x_0 \left( \hat Q \hat p - \hat p \hat Q \right) + ... \nonumber \\ &=& \hat Q + i x_0 \left[ \hat Q , \hat p \right] +...
\nonumber \\ &=& \hat Q - x_0 . \end{eqnarray} Hence, in the limit as $x_0 \rightarrow 0$, eqn. \ref{eqn:qpcomm3} is \cite{lindner} \begin{equation} i \left[ \hat Q, \hat p \right] = -1 \Rightarrow \left[ \hat Q, \hat p \right] = i . \end{equation} Next, we examine the momentum operator. Taking our unitary operator to be $\hat L_{v_0}=e^{+i v_0 \hat f}$, we get \begin{equation} \hat P ' = e^{+i v_0 \hat f} \hat P e^{-iv_0 \hat f}. \end{equation} If we operate on states of definite momentum, we know \begin{equation} \hat P \left| p \right> = p \left| p \right> = m v \left| p \right>. \end{equation} By direct analogy with the states of definite position above, we find \cite{lindner} \begin{equation} \hat P ' = e^{+i v_0 \hat f} \hat P e^{-iv_0 \hat f} = \hat P - m v_0 . \end{equation} As above, we find the Taylor expansion of the exponentials to obtain \begin{eqnarray} \left( \sum_{n=0}^{\infty}\frac{ \left( +i v_0 \hat f \right)^n}{n!} \right) \hat P \left( \sum_{n=0}^{\infty}\frac{ \left( -i v_0 \hat f \right)^n}{n!} \right) &=& \left( 1 + i v_0 \hat f + ... \right) \hat P \left( 1 - i v_0 \hat f + ... \right) \nonumber \\ &=& \hat P + i v_0 \hat f \hat P - i v_0 \hat P \hat f +... \nonumber \\ &=& \hat P + i v_0 \left( \hat f \hat P - \hat P \hat f \right) + ... \nonumber \\ &=& \hat P + i v_0 \left[ \hat f , \hat P \right] +... \nonumber \\ &=& \hat P - mv_0 . \end{eqnarray} In the limit as $v_0 \rightarrow 0$, we have \begin{equation} i \left[ \hat f , \hat P \right] = - m \Rightarrow \left[ \hat f , \hat P \right] = i m . \end{equation} It is conventional to define $\hat f \equiv m \hat q$, in which case we have \cite{lindner} \begin{equation} \left[ \hat q , \hat P \right] = i . \end{equation} We now have \begin{eqnarray} \label{eqn:commsecondset} \left[ \hat H, \hat P \right] &=& 0, \nonumber \\ \left[ \hat Q, \hat h \right] &=& i \frac{1}{m} \hat P, \nonumber \\ \left[ \hat Q, \hat p \right] &=& i, \nonumber \\ \left[ \hat q , \hat P \right] &=& i.
\end{eqnarray} We make the standard definition for the position, momentum, and energy operators in terms of the Galilei\index{Group!Galilei!Generators} group generators. It is \cite{lindner} \begin{equation} \hat Q \equiv \hbar \hat q, \, \,\, \hat P \equiv \hbar \hat p , \, \, \, \hat H \equiv \hbar \hat h, \end{equation} where $\hbar$ is a proportionality constant known as Planck's reduced constant, and is experimentally determined to be \begin{equation} \hbar \approx 1.05 \times 10^{-34} \, \mathrm{joule-seconds} \end{equation} in SI units. Then, eqn. \ref{eqn:commsecondset} reads \cite{lindner} \begin{eqnarray}\label{eqn:goodcomm} \left[ \hat P , \hat H \right] &=& 0, \nonumber \\ \left[ \hat Q, \hat H \right] &=& i \hbar \frac{1}{m} \hat P, \nonumber \\ \left[ \hat Q, \hat P \right] &=& i \hbar, \end{eqnarray} where \begin{boxedeqn}{} \left[ \hat Q, \hat P \right] = i \hbar \end{boxedeqn}is especially important, and is called the \textbf{canonical commutator}\index{Canonical Commutator}. As a consequence of our work so far this chapter, we are now in a position to evolve a state operator $\hat \rho$ in time. From eqn. \ref{eqn:operatortrans}, we have \begin{equation} \hat A' = \hat U \hat A \hat U^{\dagger} \end{equation} for an arbitrary observable $\hat A$. Letting $\hat A = \hat{\rho}$, the state operator, and $\hat U = \hat T_t= e^{-i t \hat H/\hbar}$, we have \begin{equation}\label{eqn:freestateop} \hat{\rho}' = e^{-i t \hat H/\hbar} \hat{\rho} e^{+i t \hat H/\hbar}. \end{equation} Thus, by the definition of the derivative, \begin{equation} \hat{\partial}_t \hat{\rho} = \lim_{t \rightarrow 0} \frac{\hat{\rho}' - \hat{\rho}}{t} = \lim_{t \rightarrow 0} \frac{e^{-i t \hat H/\hbar} \hat{\rho} e^{i t \hat H/\hbar}- \hat{\rho}}{t}. \end{equation} Expanding the exponential terms in a Taylor series, we get\index{Equation of Motion!of the State Operator} \begin{eqnarray} \hat{\partial}_t \hat{\rho} &=& \lim_{t \rightarrow 0} \frac{\left(1 - \frac{it \hat H}{\hbar}+...
\right) \hat{\rho} \left(1 + \frac{it \hat H}{\hbar}+... \right) - \hat{\rho}}{t} \nonumber \\ &=& \lim_{t \rightarrow 0}\left( -\frac{i \hat H}{\hbar} \hat {\rho} + \hat{\rho} \frac{i \hat H}{\hbar} + ... \right) \nonumber \\ &=&- \frac{i \hat H}{\hbar} \hat {\rho} + \hat{\rho} \frac{i \hat H}{\hbar} \nonumber \\ &=& \frac{i}{\hbar} \left[ \hat{\rho} , \hat H \right] , \end{eqnarray} so the equation of motion for the state operator is \begin{boxedeqn}{eqn:heispic} \hat{\partial}_t \hat{\rho}= \frac{i}{\hbar} \left[ \hat{\rho} , \hat H \right] . \end{boxedeqn} \section{The Schr\"odinger Wave Equation} Now that we have the commutator relations in eqn. \ref{eqn:goodcomm}, we can touch base with elementary quantum mechanics by deriving the Schr\"odinger wave equation. We work in the position basis, where our basis vectors follow \begin{equation} \hat Q \left| x \right> = x \left| x \right>. \end{equation} Considering some state vector \begin{equation} \left| \psi \right> = \sum_{x \in \mathbb R} a_x \left| x \right>, \end{equation} its wavefunction, given by definition \ref{defn:wavefunction}, is \begin{equation}\label{eqn:wavefunction} \psi (x) = \left< x \big | \psi \right> = \left< x \right| \left( \sum_{x \in \mathbb R} a_x \left| x \right> \right)= a_x. \end{equation} Considering $\hat Q$, we find by eqn. \ref{eqn:checkop} that \begin{equation} \check Q \psi (x) = \left< x \right| \hat Q \left| \psi \right>= \left< x \right| x \left| \psi \right> = x \left< x \big| \psi \right> = x \psi(x). \end{equation} So, in the position basis, $\check Q$ turns out to be multiplication by $x$. Using this result with eqn.
\ref{eqn:recoverexp}, we find \cite{griffiths} \begin{eqnarray} \left< \hat Q \right> &=& \left< \psi \right| \hat Q \left| \psi \right> \nonumber \\ &=& \left( \sum_{x \in \mathbb R} a_x^* \left< x \right| \right)\hat Q \left( \sum_{y \in \mathbb R} a_y \left| y \right> \right) \nonumber \\ &=& \left( \sum_{x \in \mathbb R} a_x^* \left< x \right| \right) \left( \sum_{y \in \mathbb R} a_y \hat Q \left| y \right> \right) \nonumber \\ &=& \left( \sum_{x \in \mathbb R} a_x^* \left< x \right| \right) \left( \sum_{y \in \mathbb R} a_y y \left| y \right> \right) \nonumber \\ &=& \sum_{x \in \mathbb R} a_x^* x a_x \nonumber \\ &=& \int dx \cdot \psi(x)^* x \psi(x) \nonumber \\ &=& \int dx \cdot \psi(x)^* \check Q \psi(x). \end{eqnarray} We would like to find a similar expression for momentum within the position basis. To do this, we consider the canonical commutator from eqn. \ref{eqn:goodcomm}, $\left[ \hat Q, \hat P \right] = i \hbar$, which, projected onto $\left< x \right|$, corresponds to \begin{equation} \left[ \check Q, \check P \right] \psi(x) = i \hbar \psi(x). \end{equation} Considering some dummy function $f(x)$, we have \cite{griffiths} \begin{eqnarray} i \hbar f(x) &=& x \frac{\hbar}{i} \frac{d f}{dx} - x \frac{\hbar}{i} \frac{df}{dx} + i \hbar f \nonumber \\ &=& x \frac{\hbar}{i} \frac{d f}{dx} - x \frac{\hbar}{i} \frac{df}{dx} - \frac{ \hbar}{i} f \nonumber \\ &=& x \frac{\hbar}{i} \frac{d f}{dx} - \frac{\hbar}{i} \frac{d }{dx}\left( x \cdot f \right) \nonumber \\ &=& \left( x \frac{\hbar}{i} \frac{d}{dx} - \frac{\hbar}{i} \frac{d }{dx} x \right)f \nonumber \\ &=& \left( \check Q \check P - \check P \check Q \right) f. \end{eqnarray} Since we know $\check Qf =xf$ in the position basis, \begin{equation} \label{eqn:mominpos} \check P f= \frac{\hbar}{i} \frac{d}{dx}f.
\end{equation} We now drop our test function to obtain the famous operator relationship\index{Momentum Operator!in Position Basis} \begin{boxedeqn}{} \check P =\frac{\hbar}{i} \frac{d}{dx}. \end{boxedeqn} Now, recall from eqn. \ref{eqn:schropic}, \begin{equation} \left| \psi_{t=\epsilon} \right> = \hat T_{\epsilon} \left| \psi_{t=0} \right> = e^{-i \epsilon \hat H/{\hbar}} \left| \psi_0 \right>. \end{equation} It follows that \begin{equation} \hat{ \partial_t} \left| \psi_t \right> = \hat{\partial_t} \left( e^{-i t \hat H/{\hbar}} \left| \psi_0 \right> \right) = -\frac{i \hat H}{\hbar} e^{-i t \hat H/{\hbar}} \left| \psi_0 \right> = -\frac{i \hat H}{\hbar}\left| \psi_{t} \right>, \end{equation} so \begin{equation} \check{\partial_t} \psi_t(x) = \left< x \right| \hat{\partial_t} \left| \psi_t \right> = \left< x \right| -\frac{i \hat H}{\hbar} \left| \psi_t \right> = -\frac{i}{\hbar} \left< x \right| \hat H \left| \psi_t \right> = -\frac{i}{\hbar} \check H \psi_t(x). \end{equation} But by eqn. \ref{eqn:eop}, \begin{equation} \label{eqn:hcheck} \check H \psi_t(x) = \left< x \right| \hat H \left| \psi_t \right> =\left< x \right| \left( \frac{1}{2m} \hat P^2 + V \right) \left| \psi_t \right> = \frac{1}{2m} \check P ^2 \psi_t(x) + V \psi_t(x). \end{equation} Hence, we have \begin{equation} \check{\partial}_t \psi_t(x) = -\frac{i}{\hbar} \frac{1}{2m} \check P ^2 \psi_t(x) - \frac{i}{\hbar} V\psi_t(x) = -\frac{i}{\hbar} \frac{1}{2m} \left( \frac{\hbar}{i} \check{\partial}_x \right)^2 \psi_t(x)- \frac{i}{\hbar} V\psi_t(x). \end{equation} This is rewritten as \begin{boxedeqn}{} i \hbar \frac{\partial \psi(x,t)}{\partial t} = - \frac{\hbar^2}{2m} \frac{\partial^2 \psi(x,t)}{\partial x ^2} + V \psi (x,t), \end{boxedeqn} and is the \textbf{time-dependent Schr\"odinger equation}\index{Schr\"odinger Equation!Time Dependent} \cite{griffiths}. Remarkably, so long as $V$ is time-independent, this equation turns out to be separable, so we can effectively pull off the time-dependence.
To do this, we suppose \cite{griffiths} \begin{equation} \psi(x,t) \equiv \psi(x) \varphi(t), \end{equation} and substitute into the Schr\"odinger equation. We have \begin{equation} i \hbar \frac{\partial \psi(x) \varphi(t)}{\partial t} = - \frac{\hbar^2}{2m} \frac{\partial^2 \psi(x) \varphi(t)}{\partial x ^2} + V \psi(x) \varphi(t), \end{equation} which is \begin{equation} i \hbar \psi(x) \frac{\partial \varphi(t)}{\partial t} + \frac{\hbar^2}{2m}\varphi(t) \frac{\partial^2 \psi(x)}{\partial x ^2} = V \psi(x) \varphi(t), \end{equation} or \begin{equation} i \hbar \frac{1}{\varphi(t)} \frac{\partial \varphi(t)}{\partial t} = - \frac{\hbar^2}{2m}\frac{1}{\psi(x)} \frac{\partial^2 \psi(x)}{\partial x ^2} + V, \end{equation} provided $\varphi(t), \psi(x) \neq 0$. We now have a function of $t$ alone set equal to a function of $x$ alone, so each side must be equal to some constant, which we name $E$. That is, we have \cite{griffiths} \begin{eqnarray} E &=& i \hbar \frac{1}{\varphi(t)} \check d_t \varphi(t), \nonumber \\ E &=& - \frac{\hbar^2}{2m}\frac{1}{\psi(x)} \check {d}_x^2 \psi(x)+ V , \end{eqnarray} where we have let the partial derivatives become ordinary derivatives, since we now have single-variable functions. The time-dependent piece has the solution \begin{equation} \varphi(t) = e^{-i E t / \hbar}, \end{equation} and the time-independent piece is usually written as \cite{griffiths} \begin{boxedeqn}{eqn:schrotimeind} - \frac{\hbar^2}{2m} \check d _x^2 \psi (x) + V \psi(x) = E \psi (x), \end{boxedeqn} which is the \textbf{time-independent Schr\"odinger equation}\index{Schr\"odinger Equation!Time Independent}. Although this result cannot be reduced further without specifying $V$, we can use eqn. \ref{eqn:hcheck} to find \begin{equation} \check H \psi(x) = E \psi(x).
\end{equation} This means that the values for the separation constant $E$ are actually the possible eigenvalues for $\check H$, the position representation of the Hamiltonian (energy operator). Further, if we find $\psi(x)$, we can construct $\psi(x,t)$ by \begin{equation} \psi(x,t) = \varphi(t) \psi(x) = e^{-i E t / \hbar}\psi(x) . \end{equation} If we compare this to eqn. \ref{eqn:schropic}, \begin{equation} \left| \psi_t \right>=\hat T_{t} \left| \psi_{t=0} \right> = e^{-i t \hat H/{\hbar}} \left| \psi_0 \right>, \end{equation} we find a distinct similarity between the form of time evolution in Hilbert space using the unitary $\hat T_t$ operator and time evolution in position space using the complex exponential of the eigenvalues of the associated $\check H$ operator on function space. \section{The Free Particle}\label{sec:freeparticle}\index{Free Particle!in Position Basis} Now that we have derived the Schr\"odinger equation, we will put it to use by treating the case of a free particle, for which the potential $V=0$. In this case, the time-independent Schr\"odinger equation (eqn. \ref{eqn:schrotimeind}) reads \begin{equation} - \frac{\hbar^2}{2m} \check d _x^2 \psi (x) = E \psi (x), \end{equation} which we write as \begin{equation} \check d _x^2 \psi (x) = -k^2 \psi (x), \end{equation} where \begin{equation} k \equiv \frac{\sqrt{2mE}}{\hbar}. \end{equation} This equation has a solution \cite{cohtan} \begin{equation}\label{eqn:planewave1} \psi(x) = Ae^{ ik x}, \end{equation} which is sinusoidal with amplitude $A$. Note that we identified the constant in our equation as $k$ with some foresight, as it turns out to be the wave number, $k=2\pi/\lambda$, of the solution. However, this solution does not decay at infinity, so the condition imposed by definition \ref{defn:pure} is violated.
That is \cite{griffiths}, \begin{eqnarray} \left< \psi \big| \psi \right> &=& \left( \sum_{x \in \mathbb R }a^*_x \left<x \right| \right) \left( \sum_{y \in \mathbb R} a_y \left| y \right> \right) \nonumber \\ &=& \sum_{x \in \mathbb R } \sum_{y \in \mathbb R} a^*_x a_y\left<x \big| y \right> \nonumber \\ &=& \sum_{x \in \mathbb R } a^*_x a_x \nonumber \\ &=& \int dx \cdot \psi^*(x) \psi(x) \nonumber \\ &=& \int dx \cdot A^* e^{-ikx} A e^{ikx} \nonumber \\ &=& \left| A \right| ^2 \int dx \nonumber \\ &=& \infty, \end{eqnarray} so we cannot pick an appropriate $A$ such that $\left< \psi \big| \psi \right> =1$. Hence, $\left| \psi \right>$ cannot be a physically realizable state. The resolution to this problem is to use a linear combination of such states with different values of $k$. The general formula for this linear combination is \cite{griffiths} \begin{boxedeqn}{} \psi(x) = \int dk \cdot \phi(k) e^{ikx}, \end{boxedeqn} where $\phi(k)$ is the coefficient function that replaces $A$ in our linear combination. Each of the component states of this integral is called a \textbf{plane wave}\index{Plane Wave}, while the linear combination is called a \textbf{wave packet}\index{Wave Packet}. We will make use of the plane wave components for free particles later, so we need to investigate their form further. Consider the eigenvalue problem \begin{equation} \check P f_p(x) = p f_p(x), \end{equation} where $f_p$ is an eigenfunction and $p$ is an eigenvalue of the momentum operator in the position basis. Using eqn. \ref{eqn:mominpos}, we write \begin{equation} \frac{\hbar}{i} \check d_x f_p(x) = p f_p(x). \end{equation} This has a solution \cite{griffiths} \begin{equation}\label{eqn:planewave} f_p(x) = Ae^{ipx/\hbar}, \end{equation} which is of identical form to eqn. \ref{eqn:planewave1}.
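As a brief numerical aside, the eigenvalue equation for the plane wave can be checked directly with a finite-difference derivative. This is a minimal Python sketch, not part of the formal development; the momentum value, test point, and step size are arbitrary assumptions, and $\hbar$ is set to $1$ for convenience.

```python
import cmath

# Check that a plane wave is an eigenfunction of the momentum operator in the
# position basis, P = (hbar / i) d/dx. Parameter values are assumptions made
# only for illustration.
hbar, p = 1.0, 2.0

def f(x):
    # Plane wave f_p(x) = e^{i p x / hbar}, with amplitude A = 1.
    return cmath.exp(1j * p * x / hbar)

def P_check(g, x, h=1e-5):
    # Momentum operator via a central finite difference for d/dx.
    return (hbar / 1j) * (g(x + h) - g(x - h)) / (2 * h)

# The eigenvalue equation P f_p = p f_p holds pointwise, up to the
# O(h^2) discretization error of the central difference.
x0 = 0.7
err = abs(P_check(f, x0) - p * f(x0))
print(err < 1e-8)                          # True

# Non-normalizability: |f_p(x)|^2 = 1 everywhere, so the integral of
# |f_p|^2 over [-L, L] is 2L, which diverges as L grows.
print(abs(abs(f(123.4)) - 1.0) < 1e-12)    # True
```

The second check restates the divergence computed above: no choice of amplitude $A$ can make a constant-modulus function square-normalizable over the whole real line.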
If we identify the eigenfunctions of the momentum operator with the plane wave states, we get the famous de Broglie relation\index{De Broglie Relation} \cite{griffiths, ballentine, sudbery, cohtan}, \begin{equation} p = \hbar k. \end{equation} Recall that plane wave states are not normalizable, and thus cannot be physically realizable states. This means that in the position basis, states of definite momentum are not permissible, which is a famous consequence of the Heisenberg uncertainty principle.\footnote{The uncertainty principle reads $\Delta x \Delta p \geq \hbar /2$ \cite{griffiths}. If we have a state of definite momentum, $\Delta p = 0$, so, roughly, $\Delta x = \infty$. This is the result that we have already seen; states of definite momentum are not square-normalizable in the position basis.} The wave packet, then, can be thought of as a superposition\index{Superposition} of states of definite momentum, giving rise to a localized, normalizable state. That is \cite{griffiths}, \begin{equation} \psi(x) = \int dp \cdot \phi(p) e^{ipx}, \end{equation} where we have switched to units in which $\hbar \equiv 1$, as we will do for the remainder of this thesis. \chapter{The Wigner Distribution}\label{chap:wigner} \lettrine[lines=2, lhang=0.33, loversize=0.1]{T}he Wigner distribution was the first quasi-probability distribution used in physics. Introduced in 1932 by E.P. Wigner\index{Wigner, E.P.}, it remains in wide use today in many areas, especially quantum mechanics and signal analysis \cite{wigner}. The Wigner distribution has been used to develop an entirely new formalism of quantum mechanics in phase space, a space of position vs. momentum, which we touch on briefly in section \ref{sec:wig_harmonic} \cite{zachos2}. In the following chapter, we first define the Wigner distribution and derive some of its fundamental properties. Next, we discuss the Wigner distribution of a combined system and treat a free particle.
Following that, we extend the distribution to its associated transform. We then create a table of useful inverse relationships between the state operator and Wigner distribution required in subsequent sections. Finally, we construct the Wigner distribution for a simple harmonic oscillator as an example and observe its correspondence to a classical phase-space probability distribution. \section{Definition and Fundamental Properties} In this section, we explore the basic properties of the Wigner distribution, starting with its definition, which is stated below \cite{zachos}. \begin{boxeddefn}{The Wigner distribution\index{Wigner Distribution!Definition}}{def:WignerDist} Consider the matrix elements of some state operator, given by $\rho(x,y)=\left< x \right| \hat \rho \left| y \right>$. Then, the Wigner distribution $W$ associated with $\rho$ is given by \begin{equation}\label{eqn:wigdef1} W(\bar x,p,t ) \equiv \frac{1}{2 \pi} \int d \delta \cdot e^{-i p \delta } \rho(x,y,t), \end{equation} where $\bar x = (x+y)/2$ and $\delta = x-y$. This is also usefully written \begin{equation}\label{eqn:wigalt} W(\bar x,p) = \frac{1}{2 \pi} \int d \delta \cdot e^{-ip \delta}\rho \left( \bar x +\frac{1}{2} \delta, \bar x - \frac{1}{2} \delta \right), \end{equation} where time dependence is understood and not written explicitly. \end{boxeddefn} Note that the Wigner distribution is given by a special case of the \textbf{Fourier transform} of the state operator with respect to the mean ($\bar x$) and difference ($\delta$) coordinates.\footnote{We assume some basic familiarity with the Fourier transform. If this topic is unfamiliar, the reader is advised to see ref. \cite{riley}.} Using this definition, we now list and verify some of the well-known properties of the Wigner distribution.
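Before turning to those properties, definition \ref{def:WignerDist} can be sketched numerically. The following minimal Python example assumes a Gaussian ground-state wavefunction, $\psi(x) = \pi^{-1/4} e^{-x^2/2}$, chosen only because its Wigner distribution is known in closed form, $W(\bar x, p) = \frac{1}{\pi} e^{-\bar x^2 - p^2}$ (with $\hbar = 1$); the grid spacing and evaluation point are likewise assumptions.

```python
import cmath
import math

# Discretize eqn. (wigalt): W(xbar, p) = (1/2 pi) * integral d(delta)
# e^{-i p delta} * rho(xbar + delta/2, xbar - delta/2), for the pure
# Gaussian state rho(x, y) = psi(x) * psi(y).
def psi(x):
    # Normalized Gaussian ground-state wavefunction (an assumed example).
    return math.pi ** -0.25 * math.exp(-x * x / 2.0)

def W(xbar, p, d=0.02, L=12.0):
    # Riemann sum over delta in [-L, L] with step d.
    n = int(L / d)
    total = 0.0 + 0.0j
    for k in range(-n, n + 1):
        delta = k * d
        total += (cmath.exp(-1j * p * delta)
                  * psi(xbar + delta / 2.0) * psi(xbar - delta / 2.0))
    return total * d / (2.0 * math.pi)

w = W(0.5, -1.0)
exact = math.exp(-(0.5 ** 2) - (1.0 ** 2)) / math.pi
print(abs(w.imag) < 1e-10)          # True: W is real-valued
print(abs(w.real - exact) < 1e-6)   # True: matches the closed form
```

The vanishing imaginary part previews the reality property proved later in this section.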
\subsection{Inverse Distribution} As one might guess, just as the Wigner distribution is defined in terms of the state operator, it is possible to define the state operator in terms of the Wigner distribution. This distinction is arbitrary: valid formulations of quantum mechanics have been made with the Wigner distribution as the primary object, while the state operator plays a secondary role. However, historically the state operator and its associated vector have been the objects of primary importance in the development of quantum mechanics \cite{styer}. If we wish to express a state operator in terms of an associated Wigner distribution, we can make use of the relation \cite{halliwell}\index{Wigner Distribution!Inverse} \begin{equation} \rho(x,y) = \int d p \cdot e^{i p \delta} W \left( \bar x, p \right). \end{equation} In order to show that this is well-defined, we note that the Plancherel theorem states \cite{griffiths} \begin{equation} f( p ) = \frac{1}{2 \pi} \int d \delta \cdot e^{- i p \delta} \mathcal{F}\left( f(\delta) \right) \Leftrightarrow \mathcal{F}\left( f(\delta) \right) = \int d p \cdot e^{i p \delta} f( p), \end{equation} for some function $f$ and its Fourier transform, $\mathcal F (f) $, so long as the functions decay sufficiently fast at infinity. From this, it is evident that the state operator is a kind of Fourier transform of the Wigner distribution, as we claimed in the previous section, so our inverse relationship is indeed appropriate. \subsection{Reality of the Wigner Distribution}\index{Wigner Distribution!Reality} One of the most important of the basic properties we will cover is that the Wigner distribution is always real-valued.\footnote{Although it is real-valued, the Wigner distribution is \textit{not} always positive. It is called a quasi-probability distribution since it is analogous to a true probability distribution, but has negative regions. 
We will deal with these apparent negative probabilities further in section \ref{sec:wig_harmonic}.} That is, \begin{boxedeqn}{} W(\bar x, p, t) \in \mathbb R. \label{eqn:reality} \end{boxedeqn} In order to show this, we will take the complex conjugate of $W(\bar x, p)$. This gives us \cite{cohen} \begin{eqnarray} W^*(\bar x, p) &=& \frac{1}{2 \pi} \int_{\delta=- \infty}^{\infty} d \delta \cdot e^{ip \delta}\rho^* \left( \bar x +\frac{1}{2} \delta, \bar x - \frac{1}{2} \delta \right) \nonumber \\ &=& \frac{1}{2 \pi} \int_{\delta= \infty}^{-\infty} \left( - d \delta \right) \cdot e^{-ip \delta}\rho^* \left( \bar x -\frac{1}{2} \delta, \bar x + \frac{1}{2} \delta \right) \nonumber \\ &=& \frac{1}{2 \pi} \int_{\delta= -\infty}^{\infty} d \delta \cdot e^{-ip \delta}\rho^{\dagger} \left( \bar x +\frac{1}{2} \delta, \bar x - \frac{1}{2} \delta \right) \nonumber \\ &=& \frac{1}{2 \pi} \int_{\delta= -\infty}^{\infty} d \delta \cdot e^{-ip \delta}\rho \left( \bar x +\frac{1}{2} \delta, \bar x - \frac{1}{2} \delta \right) \nonumber \\ &=& W(\bar x, p), \end{eqnarray} where we used eqn. \ref{eqn:adjointapp} for the self-adjoint operator $\hat{\rho}$. Since we found $W^*(\bar x, p) = W(\bar x, p)$, we have $W(\bar x, p) \in \mathbb R$, as we claimed in eqn. \ref{eqn:reality}. \subsection{Marginal Distributions}\label{sec:marginals}\index{Wigner Distribution!Marginal Distributions} Based on our definition of the Wigner distribution, we note two important marginal distributions. They are \cite{hillery} \begin{boxedeqn}{} \int dp \cdot W(\bar x,p) = \left< \bar x \right| \hat \rho \left| \bar x \right> \label{eqn:marginal1} \end{boxedeqn} and \begin{boxedeqn}{} \int d\bar x \cdot W(\bar x,p) = \left< p \right| \hat{\rho} \left| p \right>. \end{boxedeqn} To show these results, we recall the definition of the Wigner distribution. 
We have \begin{eqnarray} \int dp \cdot W(\bar x,p) &=& \int dp \cdot \frac{1}{2 \pi} \int d \delta \cdot e^{-ip \delta}\rho \left( \bar x +\frac{1}{2} \delta, \bar x - \frac{1}{2} \delta \right) \nonumber \\ &=& \frac{1}{2 \pi} \int d \delta \cdot \rho \left( \bar x +\frac{1}{2} \delta, \bar x - \frac{1}{2} \delta \right) \int dp \cdot e^{-ip \delta}\nonumber \\ &=& \frac{1}{2 \pi} \int d \delta \cdot \rho \left( \bar x +\frac{1}{2} \delta, \bar x - \frac{1}{2} \delta \right) 2 \pi \delta_D(\delta) \nonumber \\ &=& \int d \delta \cdot \delta_D(\delta) \rho \left( \bar x +\frac{1}{2} \delta, \bar x - \frac{1}{2} \delta \right) \nonumber \\ &=& \rho \left( \bar x , \bar x \right) \nonumber \\ &=& \left< \bar x \right| \hat{\rho} \left| \bar x \right>, \end{eqnarray} where $\delta_D$ is called the $\textbf{Dirac delta}$\index{Dirac Delta},\footnote{The Dirac delta is roughly a sharp spike at a point, and zero elsewhere. Technically, it is not quite a function, but it is a very useful construct in theoretical physics. For more information, see ref. \cite{riley}.} and has the important properties \cite{riley} \begin{equation} \int dy \cdot \delta_D(y) f(x+y) \equiv f(x) \end{equation} and \begin{equation} \int dx \cdot e^{-i x y} \equiv 2 \pi \delta_D(y). \end{equation} In preparation for calculating the position marginal distribution, it is useful to discuss the momentum representation of the state operator\index{State Operator!in Momentum Basis}. Analogous to the position representation, we define \begin{boxedeqn}{} \tilde{\rho}(p,p') \equiv \left< p \right| \hat \rho \left| p' \right>, \end{boxedeqn} where the $(\, \tilde{} \, )$ is used to distinguish momentum matrix elements from position matrix elements. 
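Eqn. \ref{eqn:marginal1} can also be verified numerically. The following sketch (Python with NumPy; the harmonic-oscillator ground state $\psi_0(x)=\pi^{-1/4}e^{-x^2/2}$, in units $m=\omega=\hbar=1$, is an assumed test state) tabulates $W(\bar x, p)$ on a grid of momenta and checks that integrating out $p$ reproduces the diagonal matrix element $\rho(\bar x, \bar x)$.

```python
import numpy as np

def wigner_point(rho, xbar, p, dmax=12.0, n=4001):
    # W(xbar, p) = (1/2pi) * integral d(delta) e^{-i p delta} rho(xbar + delta/2, xbar - delta/2)
    delta = np.linspace(-dmax, dmax, n)
    vals = np.exp(-1j * p * delta) * rho(xbar + delta / 2, xbar - delta / 2)
    return (np.sum(vals) * (delta[1] - delta[0]) / (2 * np.pi)).real

# Assumed test state: harmonic-oscillator ground state, m = omega = hbar = 1.
psi0 = lambda x: np.pi ** -0.25 * np.exp(-x ** 2 / 2)
rho0 = lambda x, y: psi0(x) * psi0(y)

xbar = 0.7
ps = np.linspace(-8.0, 8.0, 801)
W_slice = np.array([wigner_point(rho0, xbar, p) for p in ps])
marginal = np.sum(W_slice) * (ps[1] - ps[0])    # integral dp W(xbar, p)
diagonal = rho0(xbar, xbar)                     # <xbar| rho |xbar> = |psi0(xbar)|^2
```

For this Gaussian state both sides equal $\pi^{-1/2}e^{-\bar x^2}$, so the quadrature result should match the diagonal matrix element to high precision.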
In terms of the momentum representation, the Wigner distribution is\index{Wigner Distribution!in Momentum Basis} \begin{boxedeqn}{eqn:wigmomrep} W( \bar x, p ) \leftrightarrow W_P( x, \bar p ) \equiv \frac{1}{2 \pi} \int d \lambda \cdot e^{-i x \lambda}\tilde{\rho} \left( \bar p +\frac{1}{2} \lambda, \bar p - \frac{1}{2} \lambda \right), \end{boxedeqn} where $\bar p = \frac{p + p'}{2}$ and $\lambda = p - p'$ are the average and difference momentum coordinates, in direct analogy to $\bar x$ and $\delta$. We are now ready to calculate the position marginal distribution. We have \begin{eqnarray} \int d\bar{x} W(\bar x, p) &\leftrightarrow&\int dx \cdot W_P( x, \bar p) \nonumber \\ &=& \int dx \frac{1}{2 \pi} \int d \lambda \cdot e^{-i x \lambda}\tilde{\rho} \left( \bar p +\frac{1}{2} \lambda, \bar p - \frac{1}{2} \lambda \right)\nonumber \\ &=& \frac{1}{2 \pi} \int d \lambda \cdot \tilde{\rho} \left( \bar p +\frac{1}{2} \lambda, \bar p - \frac{1}{2} \lambda \right) \int dx e^{-i x \lambda}\nonumber \\ &=& \frac{1}{2 \pi} \int d \lambda \cdot \tilde{\rho} \left( \bar p +\frac{1}{2} \lambda, \bar p - \frac{1}{2} \lambda \right) 2 \pi \delta_D(\lambda) \nonumber \\ &=& \int d \lambda \cdot \delta_D(\lambda) \tilde{\rho} \left( \bar p +\frac{1}{2} \lambda, \bar p - \frac{1}{2} \lambda \right) \nonumber \\ &=& \tilde \rho (\bar p , \bar p) \nonumber \\ &\leftrightarrow& \tilde \rho (p , p) \nonumber \\ &=& \left< p \right| \hat \rho \left| p \right>, \end{eqnarray} which is what we claimed. \section{Wigner Distributions of Combined Systems} \label{sec:combinedwig} Recall that in section \ref{sec:composite}, we defined the state operator of a composite system as \begin{equation} \hat{\rho}_{1+2} = \hat{\rho}_1 \otimes \hat{\rho}_2, \end{equation} where $( \otimes )$ is the tensor product. 
In analogy to this, we define the Wigner distribution of a composite system to be \cite{halliwell} \begin{equation}\label{eqn:combinedwig} W_{1+2}(x_1,x_2,p_1,p_2) \equiv W_1(x_1,p_1)W_2(x_2,p_2). \index{Wigner Distribution!Composite systems} \end{equation} In section \ref{sec:composite}, we also developed the partial trace, which was a method for extracting information about a single sub-state operator in a composite state operator. Not surprisingly, we define an analogous operation, which effectively annihilates one of the sub-Wigner distributions in a composite distribution by integrating out the degrees of freedom of the sub-distribution. Formally, we call this the projection function $\mathcal A:W_{1+2}\rightarrow W_1$ and define it as \begin{equation} \mathcal A \left( W_{1+2} \right) \equiv \int dx_2 dp_2 W_{1+2}. \label{eqn:annihilator}\index{Wigner Distribution!Projection Function} \end{equation} To understand how it works, we evaluate it on the initial total Wigner distribution. This is \begin{eqnarray} \mathcal A \left(W_{1+2} \right) &=& \mathcal A \left(W_1(x_1,p_1 ) W_2 (x_2,p_2 )\right) \nonumber \\ &=& \int dx_2 dp_2 W_1(x_1,p_1 ) W_2 (x_2,p_2 ) \nonumber \\ &=& W_1(x_1,p_1 ) \int dx_2 dp_2 W_2 (x_2,p_2 ) \nonumber \\ &=& W_1(x_1,p_1 ) \int dx_2 \rho_2(x_2,x_2) \nonumber \\ &=& W_1(x_1,p_1 ) \mathrm{Tr}\left( \rho_2 \right) \nonumber \\ &=& W_1(x_1,p_1 ), \end{eqnarray} where we have used eqn. \ref{eqn:marginal1} and definition \ref{defn:trace} to integrate $W_2$ and perform the full trace of $\rho_2$. Thus, $\mathcal A$ behaves as desired, in direct analogy to the partial trace on composite state operators. \section{Equation of Motion for a Free Particle} \label{sec:freesys} Now that we have laid out the basic properties of the Wigner distribution, we need to understand how to use it to describe a physical system. In this section, we investigate how a Wigner distribution evolves in time in the absence of a potential. 
Recall that in section \ref{sec:freeparticle}, we established the Hamiltonian of a free system as \begin{equation} \hat H=\frac{\hat P^2}{2m}. \end{equation} Given the Hamiltonian, we can calculate the time evolution of the state operator of the system via the commutator relation \begin{equation} \partial_t \hat{\rho} = - i \left[ \hat H, \hat{\rho} \right], \end{equation} developed in eqn. \ref{eqn:heispic}, to obtain \begin{equation} \partial_t \hat{\rho} = i\left(\hat{\rho} \hat H - \hat H \hat{\rho} \right)= \frac{i}{2m} \left( \hat{\rho} \hat P^2 - \hat P^2 \hat{\rho} \right), \end{equation} noting that $m$ is a scalar. So far, we have a general operator equation for the evolution of the system. If we want to know more specific information about its motion, we need to choose a basis onto which we may project our equation. Choosing momentum, we multiply both sides of the equation by $\left< p \right|$ from the left and $\left| p' \right>$ from the right, where $p$ and $p'$ are two arbitrary momentum states of our system. This gives us \begin{equation} \left< p \right| \partial_t \hat {\rho} \left| p' \right>= \left< p\right| \frac{i}{2m} \left( \hat{\rho} \hat P^2 - \hat P^2 \hat{\rho} \right) \left| p' \right>. \end{equation} Since $\left| p \right>$ and $\left| p' \right>$ are states of definite momentum, they are eigenstates of $\hat P$ with eigenvalues $p$ and $p'$. Hence, $\hat{P} \left| p \right>= p\left| p \right>$, $\left<p \right| \hat P = \left<p \right| p$, and likewise for $p'$. 
So, our equation of motion becomes \begin{eqnarray} \left< p \right| \partial_t \hat{\rho} \left| p ' \right> &=& \left< p \right| \frac{i}{2m} \left( \hat{\rho} \hat p^2 - \hat p^2 \hat{\rho} \right) \left| p' \right> \nonumber \\ &=& \frac{i}{2m} \left< p \right| \hat{\rho} \hat p^2 \left| p' \right> - \frac{i}{2m} \left< p \right| \hat p^2 \hat{\rho} \left| p' \right> \nonumber \\ &=& \frac{i}{2m} \left< p \right| \hat{\rho} \hat p p' \left| p' \right> - \frac{i}{2m} \left< p \right| p \hat p \hat{\rho} \left| p' \right> \nonumber \\ &=& \frac{i}{2m} \left< p \right| \hat{\rho} p' \left| p' \right> p' - \frac{i}{2m} \left< p \right| p \hat{\rho} \left| p' \right> p\nonumber \\ &=& \frac{i}{2m} \left< p \right| \hat{\rho} \left| p' \right> \left( {p'}^2-p^2 \right) \nonumber \\ &=& \frac{i}{2m} \left< p\right| \hat{\rho} \left| p' \right> \left( p'-p \right) \left( p' +p \right) . \end{eqnarray} Next, we substitute difference and mean variables, $\lambda$ and $\bar{p}$, for $p$ and $p'$ by defining $\lambda\equiv p-p' $ and $2 \bar{p} \equiv p+p'$. This substitution is algebraically equivalent to $p=\bar{p}+\lambda/2$ and $p'=\bar{p}-\lambda/2$, so \begin{equation} \left< \bar{p}+\lambda/2 \right| \partial_t \hat {\rho} \left| \bar{p}-\lambda/2 \right> = \frac{i}{2m} \left< \bar{p}+\lambda/2\right| \hat{\rho} \left| \bar{p}-\lambda/2 \right> 2\bar{p}\lambda. \end{equation} We multiply both sides of the equation by $d\lambda\cdot e^{-i\lambda x}/(2 \pi)$ (the kernel of the Fourier transform) and integrate from $\lambda=- \infty$ to $\lambda= +\infty$. 
The left hand side is \begin{eqnarray} LHS &=&\int d\lambda \cdot \frac{1}{2 \pi} e^{-i\lambda x}\left< \bar{p}+\lambda/2 \right| \partial_t \hat {\rho} \left| \bar{p}-\lambda/2 \right> \nonumber \\ &=& \int d\lambda \cdot \partial_t \frac{1}{2 \pi} e^{-i\lambda x}\left< \bar{p}+\lambda/2 \right| \hat {\rho} \left| \bar{p}-\lambda/2 \right> \nonumber \\ &=& \partial_t \frac{1}{2 \pi}\int d\lambda \cdot e^{-i\lambda x}\left< \bar{p}+\lambda/2 \right| \hat {\rho} \left| \bar{p}-\lambda/2 \right> \nonumber \\ &=& \partial_t W_P(x,\bar p,t) \nonumber \\ &\leftrightarrow& \partial_t W(\bar x, p,t), \end{eqnarray} where we use the fact that the only explicitly time dependent piece of the integrand is $\hat{\rho}$. We also assume that the integral converges and, in the last step, we use eqn. \ref{eqn:wigmomrep} for the Wigner distribution in the momentum basis. Proceeding in a similar fashion on the right hand side, we get \begin{eqnarray} RHS &=& \int d\lambda\cdot \frac{1}{2 \pi} e^{-i\lambda x} \frac{i}{2m} \left< \bar{p}+\lambda/2\right| \hat{\rho} \left| \bar{p}-\lambda/2 \right> 2\bar{p}\lambda \nonumber \\ &=& \frac{i \cdot i\bar{p}}{m 2 \pi} \int d\lambda\cdot \left( \frac{\lambda}{i}e^{-i\lambda x} \right) \left< \bar{p}+\lambda /2\right| \hat{\rho} \left| \bar{p}-\lambda /2 \right> \nonumber \\ &=& - \frac{\bar{p}}{m} \frac{1}{2 \pi} \int d\lambda \cdot \left( -i\lambda e^{-i\lambda x} \right) \left< \bar{p}+\lambda /2\right| \hat{\rho} \left| \bar{p}-\lambda /2 \right> \nonumber \\ &=& - \frac{\bar{p}}{m} \frac{1}{2 \pi} \int d\lambda \cdot \partial_{x} \left(e^{-i\lambda x} \right) \left< \bar{p}+\lambda /2\right| \hat{\rho} \left| \bar{p}-\lambda /2 \right> \nonumber \\ &=& - \frac{\bar{p}}{m} \partial_{x} \frac{1}{2 \pi} \int d\lambda \cdot e^{-i\lambda x} \left< \bar{p}+\lambda /2\right| \hat{\rho} \left| \bar{p}-\lambda /2 \right> \nonumber \\ &=& - \frac{\bar{p}}{m} \partial_{x} W_P(x,\bar{p},t) \nonumber \\ &\leftrightarrow& - \frac{\bar{p}}{m} \partial_{\bar x} 
W(\bar x,p,t), \end{eqnarray} where we use the fact that $e^{-i\lambda x}$ was the only factor in the integrand that explicitly depended on $x$. We again assume that the integral converges and use eqn. \ref{eqn:wigmomrep} for the Wigner distribution in the momentum basis. Thus, equating the right hand and left hand sides in the position representation leaves \cite{hillery} \begin{boxedeqn}{eqn:wigfree} \partial_t W(\bar x,p,t) = - \frac{p}{m} \partial_{\bar x} W(\bar x,p,t),\index{Wigner Distribution!Free Evolution} \end{boxedeqn} which is the equation of motion for a free system in terms of its Wigner distribution. Although it might seem convoluted to introduce the Wigner form of this equation rather than using the evolution of a free particle in terms of its state operator, the power of the Wigner distribution is that it allows us to treat position and momentum simultaneously. \section{Associated Transform and Inversion Properties} Now that we have determined some of the properties of the Wigner distribution, it is useful to define the Wigner transform of an arbitrary distribution of two variables. \begin{boxeddefn}{The Wigner transform\index{Wigner Transform!Definition}}{def:WignerTrans} Let $D(x,y)$ be an arbitrary distribution of two variables, $x$ and $y$, possibly with an implicit temporal dependence. Then, the \textbf{Wigner transform} $\mathcal W$ of $D$, a special case of the Fourier transform, is defined as \begin{equation} \mathcal W \left( D (x,y ) \right) \equiv \frac{1}{2 \pi} \int d \delta \cdot e^{-ip \delta } D(x,y), \end{equation} where $\delta=x-y$, as identified in definition \ref{def:WignerDist}. \end{boxeddefn} By definition, we know the Wigner transform of $\rho(x,y)$ immediately. It is \begin{equation} \mathcal W \left( \rho (x,y) \right) = \frac{1}{2 \pi} \int d \delta \cdot e^{-ip \delta} \rho(x,y) = W(\bar x, p). 
\end{equation} We arrive at a more interesting result by considering \begin{equation} \mathcal W \left( \partial_t \rho(x,y) \right)=\frac{1}{2 \pi}\int d \delta \cdot e^{-ip \delta } \partial_t \rho(x,y). \end{equation} Clearly, neither $\delta$ nor $e^{-ip \delta }$ depend explicitly on time. Assuming that the integral converges, we have \begin{equation} \frac{1}{2 \pi}\int d \delta \cdot e^{-ip \delta } \partial_t \rho(x,y) = \partial_t \frac{1}{2 \pi}\int d \delta \cdot e^{-ip \delta } \rho(x,y) = \partial_t W(\bar x,p), \end{equation} where we have applied definition \ref{def:WignerTrans}. So, \begin{equation} \mathcal W \left( \partial_t \rho(x,y) \right) = \partial_t W(\bar x,p), \end{equation} as desired. In the following sections, we work out some of the Wigner transforms of functions that we will need later. The results of these derivations are summarized in table \ref{tab:inversions} below. \begin{table}[h] \caption{Wigner transforms of important quantities, where $\bar x = \frac{x+y}{2}$ and $\delta = x - y$. \index{Wigner Transform!of Important Quantities}} \label{tab:inversions}\centering \begin{tabular}{|cc|} \hline Expression & Transform \\ \hline $ \rho (x,y)$ & $W(\bar x, p)$ \\ $\partial_t \rho(x,y)$ & $\partial_t W(\bar x,p)$ \\ $ \frac{i}{2} \left( \partial_x^2 - \partial_y^2 \right) \rho(x,y)$ & $-p \partial_{\bar x} W \left( \bar x , p\right)$\\ $(x-y) \left( \partial_x - \partial_y \right) \rho (x,y)$ & $- 2 \partial_p \left( p \cdot W( \bar x, p) \right)$ \\ $ \left(x - y \right)^2 \rho( x, y)$ & $- \partial_p^2W( \bar x, p)$ \\ \hline \end{tabular} \end{table} \subsection{The Wigner transform of $\left(i/2 \right)\left( \partial_x^2 - \partial_y^2 \right) \rho(x,y)$} By definition \ref{def:WignerTrans}, \begin{equation} V \equiv \mathcal W \left( \frac{i}{2} \left( \partial_x^2 - \partial_y^2 \right) \rho(x,y)\right)=\int d \delta \cdot e^{-ip \delta } \frac{i}{2} \left( \partial_x^2 - \partial_y^2 \right) \rho(x,y), \end{equation} where we have defined $V$ for our convenience. 
Then, since the second partial derivatives of the state operator are continuous, Clairaut's theorem tells us that the mixed partials commute, $\partial_x \partial_y \rho (x,y) = \partial_y \partial_x \rho (x,y)$ \cite{stewart}. $V$ then expands to \begin{equation} V = \frac{i}{2} \int d \delta \cdot e^{-ip \delta } \left( \partial_x^2 - \partial_y^2 + \partial_x \partial_y - \partial_y\partial_x \right) \rho(x,y). \end{equation} We next note that by definition \ref{def:WignerDist}, $x=\bar x + 1/2 \cdot \delta$ and $y = \bar x - 1/2 \cdot \delta$, which implies $\partial_{\delta}x = 1/2$ and $\partial_{\delta} y = -1/2$. Hence, $2 \partial_{\delta} x = 1$ and $2 \partial_{\delta} y = -1$. We rework $V$ to be \begin{eqnarray} V &=& \frac{i}{2}\cdot 2 \int d \delta \cdot e^{-ip \delta } \left( \left( \partial_{\delta}x\right)\partial_x^2 + \left( \partial_{\delta}y\right)\partial_y^2 + \left( \partial_{\delta}x\right)\partial_x \partial_y + \left( \partial_{\delta}y\right) \partial_y\partial_x \right) \rho(x,y) \nonumber \\ &=& i \int d \delta \cdot e^{-ip \delta }\left( \frac{\partial }{\partial x}\left( \frac{ \partial \rho(x,y)}{ \partial x} \right) \frac{\partial x}{\partial \delta} + \frac{\partial }{\partial y}\left( \frac{ \partial \rho(x,y)}{ \partial y} \right) \frac{\partial y}{\partial \delta} + \frac{\partial }{\partial x}\left( \frac{ \partial \rho(x,y)}{ \partial y} \right) \frac{\partial x}{\partial \delta} +\frac{\partial }{\partial y}\left( \frac{ \partial \rho(x,y)}{ \partial x} \right) \frac{\partial y}{\partial \delta} \right)\nonumber \\ &=& i \int d \delta \cdot e^{-ip \delta } \partial _{\delta} \left( \partial_x \rho(x,y) + \partial_y \rho (x,y) \right). 
\end{eqnarray} Now, by definition \ref{def:WignerDist}, $\partial_{\bar x } x = \partial_{\bar x } y=1$, so \begin{equation} \frac{\partial \rho(x,y)}{\partial \bar x } = \frac{ \partial \rho (x,y) }{\partial x} \frac{\partial x}{\partial \bar x} + \frac{ \partial \rho (x,y) }{\partial y} \frac{\partial y}{\partial \bar x} = \frac{ \partial \rho (x,y) }{\partial x}+ \frac{ \partial \rho (x,y) }{\partial y} = \partial_x \rho(x,y) + \partial_y \rho (x,y). \end{equation} Thus, we have \begin{equation} V = i \int d \delta \cdot e^{-ip \delta } \partial _{\delta} \partial_{\bar x} \rho(x,y). \end{equation} Next, we integrate by parts to get \begin{equation} V = i \left( e^{-ip \delta } \partial_{\bar x} \rho(x,y) \right)\Big |_{\delta = - \infty}^{\infty} - i \int d \delta \cdot \left( \partial _{\delta} e^{-ip \delta } \right) \partial_{\bar x} \rho(x,y). \end{equation} Noting that the state operator is continuous in the position basis, we find \begin{eqnarray} \lim_{\delta \rightarrow \pm \infty} \partial_{\bar x} \rho(x,y) &=& \lim_{\delta \rightarrow \pm \infty} \partial_{\bar x} \rho \left( \bar x + \frac{1}{2} \delta, \bar x - \frac{1}{2} \delta \right) \nonumber \\ &=& \partial_{\bar x}\lim_{\delta \rightarrow \pm \infty} \rho \left( \bar x + \frac{1}{2} \delta, \bar x - \frac{1}{2} \delta \right) \nonumber \\ &=& 0, \end{eqnarray} since the matrix elements of the state operator vanish far off the diagonal. Further, \begin{equation} 0 \leq \left| e^{-ip \delta} \right| \leq 1 \, \forall \delta, \end{equation} so \begin{equation} i \left( e^{-ip \delta } \partial_{\bar x} \rho(x,y) \right)\Big |_{\delta = - \infty}^{\infty} = 0. 
\end{equation} Hence, \begin{equation} V = - i \int d \delta \cdot \left( \partial _{\delta} e^{-ip \delta } \right) \partial_{\bar x} \rho(x,y)= - i \int d \delta \cdot \left(-ip e^{-ip \delta } \right) \partial_{\bar x} \rho(x,y)= -p \partial_{\bar x} \int d \delta \cdot e^{-ip \delta } \rho(x,y). \end{equation} That is, by definition \ref{def:WignerDist}, \begin{boxedeqn}{} \mathcal W \left( \frac{i}{2} \left( \partial_x^2 - \partial_y^2 \right) \rho(x,y)\right) = -p \partial_{\bar x} \int d \delta \cdot e^{-ip \delta } \rho(x,y) = -p \partial_{\bar x} W \left( \bar x , p\right), \end{boxedeqn} which is what we wanted to show. \subsection{The Wigner transform of $ (x-y) \left( \partial_x - \partial_y \right) \rho (x,y) $} By definition \ref{def:WignerTrans}, \begin{equation} V \equiv \mathcal W \left( (x-y) \left( \partial_x - \partial_y \right) \rho (x,y) \right)=\int d \delta \cdot e^{-ip \delta } (x-y) \left( \partial_x - \partial_y \right) \rho (x,y) , \end{equation} where we have again defined $V$ for our convenience. Since $\delta = x - y $, we have \begin{equation} V = \int d \delta \cdot \delta e^{-ip \delta } \left( \partial_x - \partial_y \right) \rho (x,y). \end{equation} As we did in the previous section, we note that $\partial_{\delta} x = 1/2$ and $\partial_{\delta} y = - 1/2$, so $2 \partial_{\delta} x = 1$ and $2 \partial_{\delta} y = - 1$. Thus, \begin{equation} V = 2 \int d \delta \cdot \delta e^{-ip \delta } \left( \frac{\partial \rho(x,y)}{\partial x} \frac{\partial x}{\partial \delta}+ \frac{\partial \rho(x,y)}{\partial y} \frac{\partial y}{\partial \delta} \right). \end{equation} Next, we use the chain rule to find \begin{equation} V = 2 \int d \delta \cdot \delta e^{-ip \delta } \partial_{\delta} \rho(x,y). 
\end{equation} After that, we use integration by parts to get \begin{equation} V = 2 \left( \delta e^{-ip \delta } \rho(x,y) \right)\Big |_{\delta = - \infty}^{\infty} - 2 \int d \delta \cdot \rho(x,y) \partial_{\delta} \left( \delta e^{-ip \delta } \right) . \end{equation} As before, we investigate the boundary term. The non-oscillatory component follows \begin{equation} \lim_{\delta \rightarrow \pm \infty} \delta \rho(x,y) = 0, \end{equation} since $\rho(x,y)$ goes to zero rapidly off the diagonal, as we noted in section \ref{sec:projonbasis}. Since \begin{equation} 0 \leq \left| e^{-ip \delta} \right| \leq 1 \, \, \forall \delta, \end{equation} we have \begin{equation} \lim_{ \delta \rightarrow \pm \infty} e^{-ip \delta} \delta \rho(x,y) = 0, \end{equation} so \begin{equation} V = - 2 \int d \delta \cdot \rho(x,y) \partial_{\delta} \left( \delta e^{-ip \delta } \right). \end{equation} Finally, note that \begin{equation} \partial_{\delta} \left( \delta e^{-ip \delta } \right) = \partial_{p} \left( p e^{-ip \delta } \right), \end{equation} hence \begin{equation} V = - 2 \int d \delta \cdot \rho(x,y) \partial_{p} \left( p e^{-ip \delta } \right) = - 2 \partial_{p} \left( p \int d \delta \cdot e^{-ip \delta } \rho(x,y) \right) = - 2 \partial_p \left( p \cdot W( \bar x, p) \right). \end{equation} That is, \begin{boxedeqn}{} \mathcal W \left( (x-y) \left( \partial_x - \partial_y \right) \rho (x,y) \right) = - 2 \partial_p \left( p \cdot W( \bar x, p) \right), \end{boxedeqn} as desired. \subsection{The Wigner transform of $\left(x - y \right)^2 \rho( x, y) $} By definition \ref{def:WignerTrans}, \begin{equation} V \equiv \mathcal W \left( \left(x - y \right)^2 \rho( x, y) \right) =\int d \delta \cdot e^{-ip \delta } \left(x - y \right)^2 \rho( x, y) , \end{equation} where, as in the past two sections, we have defined $V$ for our convenience. 
By definition \ref{def:WignerDist}, $\delta = x-y$, so \begin{equation} V = \int d \delta \cdot e^{-ip \delta } \delta^2 \rho( x, y)=\int d \delta \cdot \delta^2 e^{-ip \delta } \rho( x, y). \end{equation} Now, since \begin{equation} \delta^2 e^{-ip \delta }= -\left(i^2 \delta^2 \right) e^{-ip \delta } =- \partial_p^2 e^{-ip \delta } , \end{equation} we have \begin{equation} V= \int d \delta \cdot \left( - \partial_p^2 e^{-ip \delta } \right) \rho( x, y) = - \partial_p^2 \int d \delta \cdot e^{-ip \delta } \rho( x, y) = - \partial_p^2W( \bar x, p), \end{equation} or \begin{boxedeqn}{} \mathcal W \left( \left(x - y \right)^2 \rho( x, y) \right) = - \partial_p^2W( \bar x, p), \end{boxedeqn} which is what we wanted to show. \section{Example: The Wigner Distribution of a Harmonic Oscillator}\label{sec:wig_harmonic} We will next develop the Wigner distribution for the quantum harmonic oscillator. The Hamiltonian is \cite{griffiths} \begin{equation} \hat H = \frac{\hat P^2}{2m} + \frac{1}{2} k x^2, \end{equation} where $k$ is the spring constant, and the angular frequency is \begin{equation} \omega = \sqrt{\frac{k}{m}}. \end{equation} From the time-independent Schr\"odinger equation, eqn. \ref{eqn:schrotimeind}, we have \begin{equation} - \frac{\hbar^2}{2m} \partial_x^2 \psi (x) + \frac{1}{2} k x^2 \psi(x) = E \psi (x). \end{equation} This equation is readily solved using power series, and has the well-known family of solutions \cite{ballentine,griffiths,cohtan} \begin{equation} \psi_n(x) = \frac{1}{\sqrt{n!}}\left( \frac{1}{\sqrt{2 m \omega}}\left( m \omega x - \partial_x \right) \right)^n \left( \frac{m \omega}{\pi} \right)^{1/4}e^{-\frac{m \omega}{2}x^2}, \end{equation} \begin{figure}\label{fig:harmonic_wavefunctions} \end{figure} which correspond to states of constant energy \cite{griffiths} \begin{equation} E_n=\left( n + \frac{1}{2} \right) \omega. 
\end{equation} For the purposes of this example, we will concentrate on the ground state ($n=0$) and the first three excited states, shown in figure \ref{fig:harmonic_wavefunctions}. In order to calculate the Wigner distribution of these states, we must use eqn. $\ref{eqn:wigalt}$, so we need explicit forms for the matrix elements of the state operators in the position basis. Fortunately, since the harmonic oscillator is pure, we easily obtain these by \begin{equation} \rho_n (x,y) = \left< x \right| \hat{\rho}_n \left| y \right> = \left< x \big| \psi_n \right> \left< \psi_n \big| y \right> = \psi_n^*(x) \psi_n(y), \end{equation} where we used eqn. \ref{eqn:wavefunction} to identify the wavefunction $\psi_n(y)$ and its complex conjugate $\psi_n^*(x)$. Since $\psi(x)$ is real we have, \begin{boxedeqn}{eqn:stateopharmonic} \rho_n(x,y) = \frac{1}{n!} \left( \frac{m \omega}{\pi} \right)^{1/2} \left( \frac{1}{2 m \omega}\right)^n\left( m \omega x - \partial_x \right)^n \left( m \omega y - \partial_y \right)^n e^{-\frac{m \omega}{2}x^2} e^{-\frac{m \omega}{2}y^2}. \end{boxedeqn} \begin{figure}\label{fig:harmonic_stateops} \end{figure} Particularly, for $n=0$ through $n=3$, in units where $m=\omega = \hbar = 1$, we have \begin{eqnarray} \rho_0(x,y)&=& \frac{1}{\sqrt{ \pi}}e^{ - \frac{ x^2+y^2}{2}} \nonumber, \\ \rho_1(x,y)&=& 2 x y \frac{1}{\sqrt{ \pi}}e^{ - \frac{ x^2+y^2}{2}} \nonumber, \\ \rho_2(x,y)&=& \frac{1}{8}\left( -2 e^{-x^2}+4e^{-x^2}x^2\right) \left( -2 e^{-y^2}+4e^{-y^2}y^2\right) \frac{1}{\sqrt{ \pi}}e^{ \frac{ x^2+y^2}{2}} \nonumber, \\ \rho_3(x,y)&=& \frac{1}{48} \left( 12 e^{-x^2} x - 8e^{-x^2}x^3\right) \left( 12 e^{-y^2} y - 8e^{-y^2}y^3 \right) \frac{1}{\sqrt{ \pi}}e^{ + \frac{ x^2+y^2}{2}} \nonumber, \\ \end{eqnarray} which we plot in figure \ref{fig:harmonic_stateops}. Now that we have the general form of the state operator matrix elements, it is just a matter of evaluating eqn. \ref{eqn:wigalt} to get the corresponding Wigner distributions. 
Starting with $n=0$, we have \begin{eqnarray} W_0(\bar x,p) &=& \frac{1}{2 \pi} \int d \delta \cdot e^{-ip \delta} \psi_0 \left( \bar x +\frac{1}{2} \delta \right) \psi_0 \left( \bar x - \frac{1}{2} \delta \right) \nonumber \\ &=& \frac{1}{2 \pi} \int d \delta \cdot e^{-ip \delta} \left( \left( \frac{\omega m }{\pi}\right) ^{1/4} e^{-\frac{1}{2} \omega m \left( \bar x +\frac{1}{2} \delta \right)^2 }\right) \left( \left( \frac{\omega m }{\pi}\right)^{1/4} e^{-\frac{1}{2} \omega m \left( \bar x -\frac{1}{2} \delta \right)^2 }\right) \nonumber \\ &=& \frac{1}{2 \pi} \left( \frac{\omega m }{\pi}\right) ^{1/2} e^{-\omega m \bar x^2} \int d \delta \cdot e^{-ip \delta} e^{-\frac{1}{4} \omega m \delta^2 } \nonumber \\ &=& \frac{1}{\pi} e^{- p^2/(\omega m) - \omega m \bar x^2} \nonumber \\ &=& \frac{1}{\pi} e^{- p^2 - \bar x^2}, \end{eqnarray} \begin{figure}\label{fig:harmonic_wigs} \end{figure} where we evaluated the standard Gaussian integral and set $m = \omega = 1$ in the last step. The result is just a Gaussian distribution in phase-space. The calculations involved for the excited states are similar, but the algebra is significantly more involved. They are easily performed using a computer algebra system, so we state the result. The Wigner distributions are\index{Harmonic Oscillator!Wigner Distribution Solutions} \begin{eqnarray} W_0(\bar x,p) &=& \frac{1}{\pi} e^{- p^2 - \bar x^2} \nonumber \\ W_1(\bar x,p) &=& \frac{2p^2+2\bar x^2 - 1}{ \pi} e^{- p^2 - \bar x^2} \nonumber \\ W_2(\bar x,p) &=& \frac{2 p^4 + 2 \bar x^4 +4 p^2 \bar x^2 -4 p^2 -4 \bar x^2+1}{ \pi} e^{- p^2 - \bar x^2} \\ W_3(\bar x,p) &=& \frac{4 \bar x^6 + 12 p^2 \bar x^ 4 - 18 \bar x^4 +12 p^4 \bar x^2 - 36 p^2 \bar x^2 +18 \bar x^2 + 4 p^6 - 18 p^4 + 18 p^2 - 3}{3 \pi} e^{- p^2 - \bar x^2}, \nonumber \end{eqnarray} \begin{figure}\label{fig:convolution} \end{figure} which are plotted in figure \ref{fig:harmonic_wigs}. Note how $W_0 > 0$ for all values of $x$ and $p$, but the higher energy states are sometimes negative. 
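These closed forms are straightforward to spot-check. The sketch below (Python with NumPy; it assumes the standard first excited state $\psi_1(x)=\sqrt{2}\,\pi^{-1/4}\,x\,e^{-x^2/2}$ in units $m=\omega=\hbar=1$) evaluates eqn. \ref{eqn:wigalt} numerically for $n=1$ and confirms both agreement with the listed formula for $W_1$ and the negative value $W_1(0,0)=-1/\pi$ at the origin.

```python
import numpy as np

def wigner_point(rho, xbar, p, dmax=12.0, n=4001):
    # W(xbar, p) = (1/2pi) * integral d(delta) e^{-i p delta} rho(xbar + delta/2, xbar - delta/2)
    delta = np.linspace(-dmax, dmax, n)
    vals = np.exp(-1j * p * delta) * rho(xbar + delta / 2, xbar - delta / 2)
    return np.sum(vals) * (delta[1] - delta[0]) / (2 * np.pi)

# First excited oscillator state (m = omega = hbar = 1) and its pure-state matrix elements.
psi1 = lambda x: np.sqrt(2.0) * np.pi ** -0.25 * x * np.exp(-x ** 2 / 2)
rho1 = lambda x, y: psi1(x) * psi1(y)

# Closed form quoted in the text.
W1_closed = lambda xb, p: (2 * p ** 2 + 2 * xb ** 2 - 1) / np.pi * np.exp(-p ** 2 - xb ** 2)

origin = wigner_point(rho1, 0.0, 0.0).real   # should be -1/pi: a negative "probability"
away = wigner_point(rho1, 1.0, 0.0).real     # should match the closed form
```

The negative value at the origin is exactly the quasi-probability behavior visible in figure \ref{fig:harmonic_wigs}.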
As we mentioned briefly before, the Wigner distribution is motivated by classical phase-space probability distributions, but is permitted to have negative values. These ``negative probabilities'' are a distinctive signature of a quantum mechanical system. To make this analogy more concrete, we consider $W_{10}(\bar x, p)$, shown in figure \ref{fig:convolution}. At the high energy of $n=10$, the oscillations inside the Wigner distribution become increasingly rapid. In the classical limit, as $n \rightarrow \infty$, we expect the negative portions to overlap and cancel with the positive components, giving us a positive-definite, classical probability distribution. In order to exhibit this behavior at $n=10$, we perform a careful function smoothing, known as a convolution\index{Convolution}, of $W_{10}$ with a simple Gaussian. Mathematically, this is \cite{hecht} \begin{equation} W^c_{10}(X,P) \equiv \int d \bar x d p \cdot W_{10}(\bar x, p) e^{-(X - \bar x)^2 -(P - p)^2}. \end{equation} As shown in figure \ref{fig:convolution}, this averages the inner oscillations to zero, but retains a large, outer, positive ring. This is what we expect, since a classical simple harmonic oscillator has elliptical orbits in phase-space. \chapter{The Master Equation for Quantum Brownian Motion}\label{text} \lettrine[lines=2, lhang=0.33, loversize=0.1]{I}n this chapter, we develop the fundamental equation of quantum decoherence, the master equation for quantum Brownian motion. The master equation dictates the time-evolution of a \textbf{system} and an \textbf{environment} with which the system interacts (these terms will be precisely defined later). To facilitate this, we use the formalism of the Wigner distribution developed in chapter \ref{chap:wigner}, since it incorporates both position and momentum simultaneously, and consider how the system's Wigner distribution changes with time. 
Then, we invert the Wigner transformation to get the master equation, written in terms of the system's state operator. After the equation is developed in this chapter, we examine its physical meaning and work through an example in chapter \ref{chap:applications}. \section{The System and Environment} The idea of collisions between a system and environment can be represented intuitively in a physical picture, as shown in figure \ref{fig:collision}. However, before we begin, we need to define precisely the notion of \textbf{system} and \textbf{environment}. Further, we need to specify what we mean by an interaction or \textbf{collision} between the system and environment. \begin{boxeddefn}{System\index{System}}{def:system} A \textbf{system}, or system particle, denoted as $\mathcal S$, is a single, one-dimensional point particle. It has momentum $p_S$, mass $m_S$, and position $x_S$. \end{boxeddefn} \begin{boxeddefn}{Environment\index{Environment}}{def:environment} An \textbf{environment} of a system $\mathcal S$ is denoted $\mathcal E_S$, and consists of an ideal one-dimensional gas of light particles. Each of these particles has momentum $p_E$, mass $m_E$, and position $x_E$. We will often abbreviate $\mathcal E_S$ to $\mathcal E$ if it is clear to what system $\mathcal S$ the environment belongs. \end{boxeddefn} \begin{boxeddefn}{Collision\index{Collision}}{def:collision} A \textbf{collision} between a particle of an environment $\mathcal E$ and a system $\mathcal S$ is defined as an instantaneous transfer of momentum that conserves both kinetic energy and momentum. \end{boxeddefn} It is important to note that the system we are considering is very large (massive) when compared to the individual environmental particles. Precisely, we take \cite{halliwell} \begin{equation}\label{eqn:massapprox} \frac{m_E}{m_S} \ll 1, \end{equation} and we will typically neglect terms of second or higher order in this factor.
\begin{figure}\label{fig:collision} \end{figure} Now that we have defined the key objects treated by the master equation, we begin to investigate its structure. As stated above, we first want to consider how the Wigner distribution of the system, $W_S$, changes with time. Quantum mechanically, we separate this change into two pieces. First, $W_S$ undergoes standard unitary time evolution, with the system treated as a free particle. Second, $\mathcal S$ collides with environment particles, and the collisions alter the system's energy and momentum. In section \ref{sec:freesys}, we considered the change in the Wigner distribution of a particle due to its free evolution, which we will make use of later. Now, we begin to consider the influence of an environment on a system. \section{Collisions Between Systems and Environment Particles} Before we begin to examine how a system behaves in the presence of an environment, we first consider the collision between a system particle and one particle from an environment. For each collision, we derive equations for momentum and position change. First, we address momentum change. Let $p_S$ and $p_E$ denote the initial momenta of a system and an environment particle. By definition \ref{def:collision}, the interaction between the two particles is totally elastic. That is, both kinetic energy and momentum are conserved. We write kinetic energy conservation as \cite{hrw} \begin{equation} \frac{p_S^2}{2m_S}+\frac{p_E^2}{2m_E}=\frac{\bar p_S^2}{2m_S}+\frac{\bar p_E^2}{2m_E}, \end{equation} which is equivalent to \begin{equation}\label{eqn:kecons} m_S\left( p_E- \bar p_E \right) \left( p_E + \bar p_E \right)=-m_E\left( p_S- \bar p_S \right) \left( p_S + \bar p_S \right), \end{equation} and momentum conservation as \cite{hrw} \begin{equation} p_S+p_E=\bar p_S + \bar p_E, \end{equation} which is also written as \begin{equation}\label{eqn:momcons} \left( p_E- \bar p_E \right) =-\left(p_S - \bar p_S \right).
\end{equation} We then assume that, since a collision occurred, the momenta of both the system and environment particle have changed, i.e. $ p_E- \bar p_E \neq 0$ and $p_S - \bar p_S \neq 0$. So, we divide eqn. \ref{eqn:kecons} by eqn. \ref{eqn:momcons} to get \begin{equation} \frac{m_S\left( p_E- \bar p_E \right) \left( p_E + \bar p_E \right)}{ \left( p_E- \bar p_E \right)}=\frac{-m_E\left( p_S- \bar p_S \right) \left( p_S + \bar p_S \right)}{-\left(p_S - \bar p_S \right)}, \end{equation} which implies \begin{equation}\label{eqn:redkecons} m_S\left( p_E + \bar p_E \right)=m_E \left( p_S + \bar p_S \right). \end{equation} Then, we solve eqns. \ref{eqn:momcons} and \ref{eqn:redkecons} simultaneously for both $\bar p_S$ and $\bar p_E$. We have \cite{hrw} \begin{eqnarray}\label{eqn:pebar} m_S\left( p_E + \bar p_E \right)=m_E \left( p_S + p_S+p_E-\bar p_E \right) &\Rightarrow& -(m_S-m_E)p_E+2m_Ep_S=(m_E+m_S)\bar p_E \nonumber \\ &\Rightarrow& \bar p_E = -\frac{m_S-m_E}{m_S+m_E}p_E+\frac{2m_E}{m_S+m_E}p_S \end{eqnarray} and \begin{eqnarray}\label{eqn:psbar} m_S\left( p_E + p_E+p_S -\bar p_S \right)=m_E \left( p_S + \bar p_S\right) &\Rightarrow& (m_S-m_E)p_S+2m_Sp_E=(m_S+m_E)\bar p_S\nonumber \\ &\Rightarrow& \bar p_S=\frac{m_S-m_E}{m_S+m_E}p_S+\frac{2m_S}{m_S+m_E}p_E , \end{eqnarray} which are the post-collision momenta of the environment particle and the system. Now that we have investigated the momentum change that results from a collision, we will develop the corresponding position change. To do this, we use the plane wave treatment for the total system we developed in section \ref{sec:freeparticle} and note how changes in momentum imply changes in position.
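The collision formulas just derived are easy to sanity-check numerically. The sketch below (Python with NumPy; the particular masses, momenta, and Monte Carlo parameters are illustrative choices of ours) verifies that eqns. \ref{eqn:pebar} and \ref{eqn:psbar} conserve momentum and kinetic energy, and also hints at the dissipative effect of many such collisions, which we will derive formally later:

```python
import numpy as np

def collide(p_S, p_E, m_S, m_E):
    """Post-collision momenta for a one-dimensional elastic collision."""
    pbar_S = (m_S - m_E) / (m_S + m_E) * p_S + 2 * m_S / (m_S + m_E) * p_E
    pbar_E = -(m_S - m_E) / (m_S + m_E) * p_E + 2 * m_E / (m_S + m_E) * p_S
    return pbar_S, pbar_E

m_S, m_E = 1.0, 1e-2
p_S, p_E = 2.0, -0.5
pbar_S, pbar_E = collide(p_S, p_E, m_S, m_E)

# momentum conservation
assert abs((p_S + p_E) - (pbar_S + pbar_E)) < 1e-12
# kinetic energy conservation
ke = lambda pS, pE: pS ** 2 / (2 * m_S) + pE ** 2 / (2 * m_E)
assert abs(ke(p_S, p_E) - ke(pbar_S, pbar_E)) < 1e-9

# Repeated collisions with a zero-mean gas damp the mean system momentum:
# each collision shrinks the average by lam = (m_S - m_E)/(m_S + m_E).
rng = np.random.default_rng(0)
n_traj, n_coll = 2000, 100
p = np.full(n_traj, 50.0)                  # heavy-particle momenta, many trajectories
for _ in range(n_coll):
    p, _ = collide(p, rng.normal(0.0, 1.0, n_traj), m_S, m_E)
lam = (m_S - m_E) / (m_S + m_E)
assert 0.5 * lam ** n_coll * 50.0 < p.mean() < 1.5 * lam ** n_coll * 50.0
```
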
The wavefunction of the composite system containing both the system and environment particle, a product state, is given by\footnote{Since the product state vector is $\left| \phi \right> = \left| \phi_S \right> \otimes \left| \phi_E \right>$, the wavefunction form of the composite state takes ordinary multiplication.} \begin{equation} \phi = \phi_S \phi_E. \end{equation} Using equation \ref{eqn:planewave}, we can form the composite plane wave, $\phi_i$, from the individual incident plane wave of each particle.\footnote{Remember that plane waves are states of definite momentum. We are using them in this case because we are conserving the momentum in the collision.} This is\begin{equation} \phi_i=e^ {ip_Sx_S} e^ {ip_Ex_E}. \end{equation} After collision, using the momentum representation, the plane wave, $\phi_f$, becomes \begin{equation} \phi_f=e^ {i \bar p_S x_S}e^{i \bar p_E x_E}. \end{equation} By eqns. \ref{eqn:pebar} and \ref{eqn:psbar}, this can be written as \begin{eqnarray} \left( \text{Exponent of $\phi_f$} \right) &=& \left(i \left( \frac{m_S-m_E}{m_S+m_E}p_S+\frac{2m_S}{m_S+m_E}p_E\right) x_S \right) + \left( i\left(-\frac{m_S-m_E}{m_S+m_E}p_E+\frac{2m_E}{m_S+m_E}p_S\right) x_E \right)\nonumber \\ &=& i \left( \frac{m_S-m_E}{m_S+m_E}p_S x_S+\frac{2m_S}{m_S+m_E}p_E x_S-\frac{m_S-m_E}{m_S+m_E}p_E x_E +\frac{2m_E}{m_S+m_E}p_S x_E \right) \nonumber \\ &=& \left( ip_S\left(\frac{m_S-m_E}{m_S+m_E}x_S+\frac{2m_E}{m_S+m_E}x_E\right)\right) + \left( ip_E\left( \frac{2m_S}{m_S+m_E}x_S-\frac{m_S-m_E}{m_S+m_E}x_E \right) \right). \nonumber \end{eqnarray} We define \begin{equation} \phi_f=e^ {i \bar p_S x_S} e^{i \bar p_E x_E} \equiv e^{ip_S\bar x_S }e^{ip_E\bar x_E}, \end{equation} where \cite{halliwell} \begin{equation} \bar x_S = \frac{m_S-m_E}{m_S+m_E}x_S+\frac{2m_E}{m_S+m_E}x_E \end{equation} and \begin{equation} \bar x_E = \frac{2m_S}{m_S+m_E}x_S-\frac{m_S-m_E}{m_S+m_E}x_E. 
\end{equation} This way, we now have position and momentum representations of the collision. As is common in physics, we need to require that these collision interactions are local\index{Collision!Locality}.\footnote{It is important to emphasize that locality is \textit{not} an approximation, but is necessary to include in our treatment. Ideally, we would work this into our equations formally. However, for simplicity, we can achieve local interactions by requiring this condition.} Thus, throughout the collision, we take \begin{equation} \label{eqn:locality1} \left| x_S-x_E \right| \ll \left| x_S \right| \end{equation} and \begin{equation} \left| x_S-x_E \right| \ll \left| x_E \right|, \end{equation} since the potential energy, $V \left(x_S-x_E \right) \rightarrow 0$ as $\left| x_S-x_E \right| \rightarrow \infty$. Recalling that from eqn. \ref{eqn:massapprox} \begin{equation} \frac{m_E}{m_S} \ll 1, \end{equation} it is reasonable to ignore contributions to distances of order \begin{equation} \frac{m_E}{m_S} (x_S-x_E) \ll x_S,x_E. \end{equation} Enforcing the locality of collision, we find \begin{eqnarray} \bar x_S &=& \frac{m_S-m_E}{m_S+m_E}x_S+\frac{2m_E}{m_S+m_E}x_E \nonumber \\ &=& \left( \frac{m_S+m_E}{m_S+m_E}-\frac{2m_E}{m_S+m_E} \right)x_S+\frac{2m_E}{m_S+m_E}x_E \nonumber \\ &=& \left( 1-\frac{2m_E}{m_S+m_E} \right)x_S+\frac{2m_E}{m_S+m_E}x_E \nonumber \\ &=& x_S-\frac{2m_E}{m_S+m_E} x_S+\frac{2m_E}{m_S+m_E}x_E \nonumber \\ &=& x_S+ \frac{2m_E}{m_S+m_E}\left( x_E- x_S \right) \nonumber \\ &=& x_S +2 \frac{m_E}{m_S}\left( x_E- x_S \right) - 2 \left( \frac{m_E}{m_S} \right)^2\left( x_E- x_S \right) + ... 
\nonumber \\ &\sim& x_S \end{eqnarray} and \begin{eqnarray} \bar x_E &=&\frac{2m_S}{m_S+m_E}x_S-\frac{m_S-m_E}{m_S+m_E}x_E\nonumber \\ &=&\left( \frac{2m_S+2m_E}{m_S+m_E}-\frac{2m_E}{m_S+m_E} \right)x_S+\left( \frac{2m_E}{m_S+m_E}-\frac{m_S+m_E}{m_S+m_E} \right) x_E\nonumber \\ &=&\left( 2-\frac{2m_E}{m_S+m_E}\right)x_S+\left(\frac{2m_E}{m_S+m_E}-1 \right) x_E\nonumber \\ &=&2x_S-x_E+\frac{2m_E}{m_S+m_E}\left(x_E-x_S\right) \nonumber \\ &=& 2x_S-x_E+2 \frac{m_E}{m_S}\left( x_E- x_S \right) - 2 \left( \frac{m_E}{m_S} \right)^2\left( x_E- x_S \right) + ... \nonumber \\ &\sim&2x_S-x_E, \end{eqnarray} which amounts to a phase shift in our plane wave state. We have now worked out all the position and momentum components we will need to treat the full case of a system coupled to an environment. In summary, we have\index{Two-body Collisions} \begin{eqnarray} \label{eqn:summary} \bar p_S &=& \frac{m_S-m_E}{m_S+m_E}p_S+\frac{2m_S}{m_S+m_E}p_E, \nonumber \\ \bar p_E &=& -\frac{m_S-m_E}{m_S+m_E}p_E+\frac{2m_E}{m_S+m_E}p_S, \nonumber \\ \bar x_S &\sim& x_S, \nonumber \\ \bar x_E &\sim& 2x_S-x_E. \end{eqnarray} \section{Effect of Collision on a Wigner Distribution} In this section, we consider the change in the Wigner distribution of the system, $W_S$, from one collision with an environment particle. Since we have a composite state of environment and system particle, we use equation \ref{eqn:combinedwig} to write the Wigner distribution for the system and environment as \begin{equation} W_{S+E}=W_SW_E. \end{equation} It follows that the change in the total Wigner distribution for the system and environment, $\Delta W_{S+E}$, due to one collision is \cite{halliwell} \begin{eqnarray} \Delta W _{S+E} &=& \overline{W}_{S+E}-W_{S+E} \nonumber \\ &=&\overline{W}_{S}\overline{W}_{E}-W_{S}W_{E} \nonumber \\ &=& W_S\left(\bar x_S,\bar p_S \right) W_E \left(\bar x_E, \bar p_E \right) - W_S(x_S,p_S ) W_E (x_E,p_E ). 
\end{eqnarray} Now that we have the change in the total (system and environment) Wigner distribution, we use eqn. \ref{eqn:annihilator} developed in section \ref{sec:combinedwig} to deduce $\Delta W$, the change in the system's Wigner distribution, by summing (integrating) over all environmental configurations. We have \cite{ballentine} \begin{eqnarray}\label{eqn:deltawintegrals} \Delta W &=& \mathcal A \left(\Delta W _{S+E} \right) \nonumber \\ &=& \int dp_E dx_E \cdot\left( W_S\left(\bar x_S,\bar p_S \right) W_E \left(\bar x_E, \bar p_E \right) - W_S(x_S,p_S ) W_E (x_E,p_E ) \right) \nonumber \\ &=& \int dp_E dx_E \cdot W_S\left(\bar x_S,\bar p_S \right) W_E \left(\bar x_E, \bar p_E \right) - \int dp_E dx_E \cdot W_S(x_S,p_S ) W_E (x_E,p_E ). \end{eqnarray} To evaluate these integrals, we need to perform some algebraic manipulations on the first term in eqn. \ref{eqn:deltawintegrals}. From eqn. \ref{eqn:summary}, we know that the first term of eqn. \ref{eqn:deltawintegrals} is (approximately) given by \begin{equation}\label{eqn:deltawfirstterm} \int dp_E dx_E \cdot W_S\left(x_S,\frac{m_S-m_E}{m_S+m_E}p_S+\frac{2m_S}{m_S+m_E}p_E \right) W_E \left(2x_S-x_E,-\frac{m_S-m_E}{m_S+m_E}p_E+\frac{2m_E}{m_S+m_E}p_S \right). \end{equation} We make the substitution \begin{eqnarray}\label{eqn:subs} u & \equiv& 2x_S-x_E \nonumber \\ v & \equiv & -\frac{m_S-m_E}{m_S+m_E}p_E+\frac{2m_E}{m_S+m_E}p_S , \end{eqnarray} from which it follows that \begin{eqnarray}\label{eqn:subsdiff} dx_E &=& -du \nonumber \\ dp_E&=&-\frac{m_S+m_E}{m_S-m_E} dv.
\end{eqnarray} Further, since \begin{equation} p_E=\left( \frac{m_S+m_E}{m_S-m_E} \right) \left( \frac{2m_E}{m_S+m_E}p_S - v \right), \end{equation} we have \begin{eqnarray} \label{eqn:substaylor} \frac{m_S-m_E}{m_S+m_E}p_S+\frac{2m_S}{m_S+m_E}p_E &=& \frac{m_S-m_E}{m_S+m_E}p_S+\frac{2m_S}{m_S+m_E}\left( \frac{m_S+m_E}{m_S-m_E} \right) \left( \frac{2m_E}{m_S+m_E}p_S - v \right) \nonumber \\ &=& \frac{m_S-m_E}{m_S+m_E}p_S+ \frac{4 m_E m_S}{\left(m_S - m_E\right) \left( m_E +m_S \right)} p_S - \frac{2 m_S}{m_S - m_E} v \nonumber \\ &=& \frac{m_E+m_S}{m_S-m_E} p_S - \frac{2 m_S}{m_S - m_E} v \nonumber \\ &=& p_S + \frac{2 \left( m_E p_S - m_S v \right)}{m_S-m_E}. \end{eqnarray} Thus, substituting eqns. \ref{eqn:subs}, \ref{eqn:subsdiff}, and \ref{eqn:substaylor} into eqn. \ref{eqn:deltawfirstterm} gives \begin{equation} \frac{m_S+m_E}{m_S-m_E} \int dv du \cdot W_S\left(x_S, p_S + \frac{2 \left( m_E p_S - m_S v \right)}{m_S-m_E} \right) W_E \left(u,v \right). \end{equation} Next, we make the substitution $u \equiv x_E$ and $v \equiv p_E$, so eqn. \ref{eqn:deltawfirstterm} becomes \begin{equation} \frac{m_S+m_E}{m_S-m_E} \int dp_E dx_E \cdot W_S\left(x_S, p_S + \frac{2 \left( m_E p_S - m_S p_E \right)}{m_S-m_E} \right) W_E \left(x_E,p_E \right). \end{equation} Now, eqn. \ref{eqn:deltawintegrals} is \begin{eqnarray}\label{eqn:deltawintegrals2} \Delta W &=& \frac{m_S+m_E}{m_S-m_E} \int dp_E dx_E \cdot W_S\left(x_S, p_S + \frac{2 \left( m_E p_S - m_S p_E \right)}{m_S-m_E} \right) W_E \left(x_E,p_E \right) \\ &-& \int dp_E dx_E \cdot W_S(x_S,p_S ) W_E (x_E,p_E ) \nonumber \\ &=& \int dp_E dx_E \cdot \left( \frac{m_S+m_E}{m_S-m_E} W_S\left(x_S, p_S + \frac{2 \left( m_E p_S - m_S p_E \right)}{m_S-m_E} \right) -W_S(x_S,p_S ) \right)W_E \left(x_E,p_E \right). \nonumber \end{eqnarray} Next, we expand \begin{equation} W_S\left(x_S, p_S + \frac{2 \left( m_E p_S - m_S p_E \right)}{m_S-m_E} \right) \end{equation} using a Taylor series expansion in momentum about $p=p_S$.
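The change of variables and the small-mass-ratio expansions in the remainder of this derivation involve a fair amount of rational-function algebra. A short symbolic check (Python with SymPy; entirely supplementary to the derivation) confirms the substitution identity of eqn. \ref{eqn:substaylor}, along with the expansion of the momentum shift and of the coefficients $A$, $B$, and $C$ that appear below:

```python
import sympy as sp

m_S, m_E = sp.symbols('m_S m_E', positive=True)
p_S, p_E, v = sp.symbols('p_S p_E v', real=True)
r = sp.symbols('r', positive=True)

# Invert the substitution v = -((m_S - m_E)/(m_S + m_E)) p_E + (2 m_E/(m_S + m_E)) p_S
pE_of_v = (m_S + m_E) / (m_S - m_E) * (2 * m_E / (m_S + m_E) * p_S - v)

# The momentum argument of W_S, before and after the change of variables
lhs = (m_S - m_E) / (m_S + m_E) * p_S + 2 * m_S / (m_S + m_E) * pE_of_v
rhs = p_S + 2 * (m_E * p_S - m_S * v) / (m_S - m_E)
assert sp.simplify(lhs - rhs) == 0

# Small-mass-ratio expansion (r = m_E/m_S) of the momentum shift
shift = (2 * (m_E * p_S - m_S * p_E) / (m_S - m_E)).subs(m_E, r * m_S)
assert sp.simplify(sp.series(sp.simplify(shift), r, 0, 2).removeO()
                   - (-2 * p_E + 2 * (p_S - p_E) * r)) == 0

# Expansions of the coefficients A, B, and C to first order in r
A = (1 + r) / (1 - r) - 1
B = (1 + r) / (1 - r) * 2 * (r * p_S - p_E) / (1 - r)
C = (1 + r) / (1 - r) * (2 * p_E**2 - 4 * r * p_E * p_S + 2 * r**2 * p_S**2) / (1 - r)**2
assert sp.simplify(sp.series(A, r, 0, 2).removeO() - 2 * r) == 0
assert sp.simplify(sp.series(B, r, 0, 2).removeO()
                   - (-2 * p_E + (2 * p_S - 6 * p_E) * r)) == 0
assert sp.simplify(sp.series(C, r, 0, 2).removeO()
                   - (2 * p_E**2 + (8 * p_E**2 - 4 * p_E * p_S) * r)) == 0
```
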
This is \begin{eqnarray}\label{eqn:taylor1} W_S\left(x_S, p_S + \frac{2 \left( m_E p_S - m_S p_E \right)}{m_S-m_E} \right) &=& W_S(x_S,p_S) + \frac{2 \left( m_E p_S - m_S p_E \right)}{m_S-m_E} \frac{\partial W_S}{ \partial p_S}(x_S,p_S) \nonumber \\ &+& \frac{1}{2} \left( \frac{2 \left( m_E p_S - m_S p_E \right)}{m_S-m_E} \right)^2 \frac{\partial ^2 W_S}{ \partial p_S^2}(x_S,p_S)+... \end{eqnarray} In order to justify dropping the high-order terms of the expansion, we need to show that \begin{equation} \frac{2 \left( m_E p_S - m_S p_E \right)}{m_S-m_E} \ll \left| p_S \right| , \end{equation} which is not readily apparent. If we expand the term in $m_E/m_S$, we have \begin{equation} \frac{2 \left( m_E p_S - m_S p_E \right)}{m_S-m_E} = -2 p_E + 2 \left( p_S - p_E \right) \frac{m_E}{m_S} + 2 \left( p_S - p_E \right) \left( \frac{m_E}{m_S} \right)^2 + ... \end{equation} Recalling eqn. \ref{eqn:massapprox}, it is obvious that while the terms of first order and higher in $m_E/m_S$ are small compared to $p_S$, the first term, $-2 p_E$, is not necessarily small with respect to $p_S$. Fortunately, since we are expanding in an integrand and the average value of $p_E$ is zero, we can neglect this term and so we are justified in dropping high order terms in our Taylor expansion.\footnote{The fact that the average value of $p_E$ is zero is dealt with explicitly in eqn. \ref{eqn:peavgiszero}.} Simplifying coefficients and dropping terms of third order or higher, eqn. \ref{eqn:taylor1} is approximately \begin{equation} W_S(x_S,p_S) + \frac{2 \left( m_E p_S - m_S p_E \right)}{m_S-m_E} \frac{\partial W_S}{\partial p_S}(x_S,p_S) +\left( \frac{2m_S^2 p_E^2-4m_Em_Sp_Ep_S+2m_E^2p_S^2}{\left( m_E - m_S \right)^2} \right) \frac{\partial ^2 W_S}{\partial p_S^2}(x_S,p_S).
\end{equation} Hence, we write $\Delta W$ as \cite{halliwell} \begin{equation} \label{eqn:deltawap1} \Delta W \sim \int dp_E dx_E \cdot\left( A W_S(x_S,p_S) W_E \left(x_E,p_E \right) + B \partial_{p_S}W_S(x_S,p_S) W_E \left(x_E,p_E \right)+ C\partial_{p_S}^2 W_S(x_S,p_S) W_E \left(x_E,p_E \right)\right), \end{equation} for some $A$, $B$, and $C$. We now work out the values of these coefficients, starting with $A$, which is \begin{eqnarray} A &=& \frac{m_S+m_E}{m_S-m_E} - 1 \nonumber \\ &=& \frac{2 m_E}{m_S-m_E} \nonumber \\ &=& \frac{2 m_E}{m_S-m_E} \cdot \frac{1/m_S}{1/m_S} \nonumber \\ &=& \frac{2m_E}{m_S} \cdot \frac{1}{1-m_E/m_S} \nonumber \\ &=& \frac{2m_E}{m_S} \left( 1 + \frac{m_E}{m_S} + \left( \frac{m_E}{m_S} \right)^2+... \right) \nonumber \\ &=& 2 \frac{m_E}{m_S} + 2 \left( \frac{m_E}{m_S} \right)^2 + ... \nonumber \\ &\sim& 2 \frac{m_E}{m_S}, \end{eqnarray} where we used the approximation in eqn. \ref{eqn:massapprox} to neglect the terms of order two or higher in $m_E/m_S$. We now turn to $B$, given by \begin{equation} B = \left( \frac{m_S+m_E}{m_S-m_E} \right) \frac{2 \left( m_E p_S - m_S p_E \right)}{m_S-m_E}. \end{equation} Anticipating a series expansion, we change variables to $r = m_E/m_S$ so that $m_E=r m_S$. $B$ is then \begin{equation} B =\left( \frac{m_S+r m_S}{m_S-r m_S} \right) \frac{2 \left( r m_S p_S - m_S p_E \right)}{m_S-r m_S} = \left( \frac{1+r}{1-r} \right) \frac{2 \left( r p_S - p_E \right)}{1-r} . \end{equation} We also calculate the first and second derivatives of $B$ with respect to $r$. They are \begin{equation} \frac{d B}{dr} = \frac{2 p_E (3 - r) - 2 (p_S +3 p_S r)}{(r-1)^3} \end{equation} and \begin{equation} \frac{d^2 B}{dr^2}= \frac{4 p_E (5+r) - 12 p_S (1+r)}{(r-1)^4}. \end{equation} Taking the Taylor series expansion of $B$ in $r$ about $r=0$, we find \begin{eqnarray} B &=& B\big|_{r=0}+ \frac{d B}{dr}\Big|_{r=0} \cdot r + \frac{d^2 B}{dr^2}\Big|_{r=0} \cdot \frac{r^2}{2}+... 
\nonumber \\ &=& -2p_E +\left(2p_S-6p_E \right) \cdot r + \left( 12p_S - 20 p_E \right) \cdot \frac{r^2}{2} + ... \nonumber \\ &=& -2p_E +\left(2p_S-6p_E \right) \cdot \frac{m_E}{m_S} + \left( 6p_S - 10 p_E \right) \cdot \left( \frac{m_E}{m_S}\right)^2+ ... \nonumber \\ &\sim& -2p_E +\left(2p_S-6p_E \right) \cdot \frac{m_E}{m_S} \nonumber \\ &=& 2 p_S \frac{m_E}{m_S} - \left( 2 + 6 \frac{m_E}{m_S} \right) p_E, \end{eqnarray} where we used eqn. \ref{eqn:massapprox} to neglect the terms of order two or higher in $m_E/m_S$. Finally, we consider the coefficient $C$, given by \begin{equation} C = \left( \frac{m_S+m_E}{m_S-m_E} \right) \frac{2m_S^2 p_E^2-4m_Em_Sp_Ep_S+2m_E^2p_S^2}{\left( m_E - m_S \right)^2}. \end{equation} In the same way we worked out coefficient $B$, we make the substitution $m_E = rm_S$, which gives us \begin{equation} C= \left( \frac{1+r}{1-r} \right) \frac{2 p_E^2-4rp_Ep_S+2r^2p_S^2}{\left( r - 1 \right)^2}, \end{equation} \begin{equation} \frac{d C}{dr} = \frac{4 \left( p_E^2 (2+r) + p_S^2 r (1+2 r) - p_E p_S \left(1+4r +r^2\right) \right)}{(r-1)^4}, \end{equation} and \begin{equation} \frac{d^2 C}{dr^2} = \frac{4 \left( 2 p_E p_S \left( 4 + 7r +r^2 \right) - 3 p_E^2 (3+r ) - p_S^2 \left( 1 + 7r +4r^2 \right) \right)}{(r-1)^5}. \end{equation} When we take the Taylor series expansion of $C$ in $r$ about $r=0$, we have \begin{eqnarray} C &=& C\big|_{r=0}+ \frac{d C}{dr}\Big|_{r=0} \cdot r + \frac{d^2 C}{dr^2}\Big|_{r=0} \cdot \frac{r^2}{2}+... \nonumber \\ &=& 2p_E^2 +\left(8 p_E^2- 4 p_E p_S \right) \cdot r + \left( 36 p_E^2 - 32 p_E p_S + 4 p_S^2 \right) \cdot \frac{r^2}{2} + ... \nonumber \\ &=& 2p_E^2 +\left(8 p_E^2- 4 p_E p_S \right) \cdot \frac{m_E}{m_S} +\left( 18 p_E^2 - 16 p_E p_S + 2 p_S^2 \right) \cdot \left( \frac{m_E}{m_S}\right)^2+ ... \nonumber \\ &\sim& 2p_E^2 +\left(8 p_E^2- 4 p_E p_S \right)\cdot \frac{m_E}{m_S} \nonumber \\ & = & \left( 2 + 8 \frac{m_E}{m_S} \right) p_E^2 - 4 p_E p_S \frac{m_E}{m_S}. 
\end{eqnarray} Thus, using eqn. \ref{eqn:deltawap1}, we can write $\Delta W$ as \begin{equation} \label{eqn:wignersimple} \Delta W \sim X + Y+ Z, \end{equation} where \begin{equation} X =\left(2 \frac{m_E}{m_S} \right) \int dp_E dx_E \cdot W_E(x_E,p_E) W_S(x_S,p_S), \end{equation} \begin{equation} Y = \left(2 p_S \frac{m_E}{m_S} \right) \int dp_E dx_E \cdot W_E(x_E,p_E) \partial_{p_S}W_S(x_S,p_S) - \left(2+6 \frac{m_E}{m_S} \right) \int dp_E dx_E \cdot p_E W_E(x_E,p_E) \partial_{p_S}W_S(x_S,p_S), \end{equation} and \begin{equation} Z =\left( 2 + 8 \frac{m_E}{m_S} \right) \int dp_E dx_E \cdot p_E^2 W_E(x_E,p_E) \partial_{p_S}^2W_S(x_S,p_S) - 4 p_S \frac{m_E}{m_S} \int dp_E dx_E \cdot p_E W_E(x_E,p_E) \partial_{p_S}^2W_S(x_S,p_S). \end{equation} Now, we recall from our preliminary discussion on the marginal distributions of the Wigner distribution in section \ref{sec:marginals} that \begin{eqnarray} \int dp_E dx_E \cdot O(p_E) W_E(x_E,p_E) W_S(x_S,p_S) &=& W_S(x_S,p_S) \int dp_E \cdot O(p_E) \int dx_E \cdot W_E(x_E,p_E) \nonumber \\ &=& W_S(x_S,p_S) \int dp_E \cdot O(p_E) \tilde \rho\left( p_E, p_E \right) \nonumber \\ &=& W_S(x_S,p_S) \mathrm{Tr} \left(\hat O \hat{\rho} \right) \nonumber \\ &=& W_S(x_S,p_S) \left< \hat O \right>, \end{eqnarray} where $O$ is an observable. Hence, our previous calculations yield \begin{equation} X = \left(2 \frac{m_E}{m_S} \right) \left< 1 \right> W_S(x_S,p_S)= 2 \frac{m_E}{m_S}W_S(x_S,p_S), \end{equation} \begin{equation} Y = \left(2 p_S \frac{m_E}{m_S} \right) \partial_{p_S} W_S(x_S,p_S) - \left(2+6 \frac{m_E}{m_S} \right) \left< p_E \right> \partial_{p_S} W_S(x_S,p_S), \end{equation} and \begin{equation} Z = \left( 2 + 8 \frac{m_E}{m_S} \right) \left< p_E^2 \right> \partial_{p_S}^2 W_S(x_S,p_S)- 4 p_S \frac{m_E}{m_S}\left< p_E\right> \partial_{p_S}^2 W_S(x_S,p_S). 
\end{equation} However, we originally took the environment to be an ideal (one-dimensional) gas, so any measurement\index{Measurement} of an environment particle's momentum is equally likely to yield either sign, i.e. $\left< p_E \right> = 0$. Eqn. \ref{eqn:wignersimple} then becomes \begin{equation}\label{eqn:peavgiszero} \Delta W \sim 2 \frac{m_E}{m_S} W_S(x_S,p_S) + \left(2 p_S \frac{m_E}{m_S} \right) \partial_{p_S} W_S(x_S,p_S)+\left( 2 + 8 \frac{m_E}{m_S} \right) \left< p_E^2 \right> \partial_{p_S}^2 W_S(x_S,p_S). \end{equation} We notice that \begin{eqnarray} 2 \frac{m_E}{m_S} W_S(x_S,p_S) + \left(2 p_S \frac{m_E}{m_S} \right) \partial_{p_S} W_S(x_S,p_S) &=& 2 \frac{m_E}{m_S} \left( W_S(x_S,p_S) + p_S \partial_{p_S} W_S(x_S,p_S) \right) \nonumber \\ &=& 2 \frac{m_E}{m_S} \partial_{p_S} \left( p_S W_S(x_S,p_S) \right), \end{eqnarray} so we write the change in the Wigner distribution of the system due to one environmental collision as \begin{boxedeqn}{eqn:environcol} \Delta W \sim 2 \frac{m_E}{m_S} \partial_{p_S} \left( p_S W_S(x_S,p_S) \right)+\left( 2 + 8 \frac{m_E}{m_S} \right) \left< p_E^2 \right> \partial_{p_S}^2 W_S(x_S,p_S). \end{boxedeqn} \section{The Master Equation for Quantum Brownian Motion} In our simple model, the system is only under the influence of environmental particles, and is free otherwise. Thus, the total change in the system's Wigner distribution with time is given by its free particle term added to some contribution due to the environment. Since the environment acts on the system through collisions, if we define $\Gamma$ to be the statistical number of collisions per unit time between the system and environmental particles, we combine eqns.
\ref{eqn:wigfree} and \ref{eqn:environcol} to get \cite{halliwell}\index{Master Equation!Wigner Form} \begin{equation} \frac{\partial W_S}{\partial t} = - \frac{p_S}{m_S} \partial_{x_S} W_S(x_S,p_S,t) + \Gamma \left( 2 \frac{m_E}{m_S} \partial_{p_S} \left( p_S W_S(x_S,p_S) \right)+\left( 2 + 8 \frac{m_E}{m_S} \right) \left< p_E^2 \right> \partial_{p_S}^2 W_S(x_S,p_S) \right), \end{equation} an expression for the total change in the system's Wigner distribution with time. We use table \ref{tab:inversions} to convert our equation for the Wigner distribution to an equation for the state operator of the system. This is \begin{eqnarray} \mathcal W \left( \partial_t \rho_S(x,y) \right) &=& \frac{1}{m_S} \mathcal W \left( \frac{i}{2} \left( \partial_x^2 - \partial_y^2 \right) \rho_S(x,y) \right) - \Gamma \frac{m_E}{m_S} \mathcal W \left( (x-y) \left( \partial_x - \partial_y \right) \rho_S (x,y) \right) \nonumber \\ &-& \Gamma \left( 2 + 8 \frac{m_E}{m_S} \right)\left< p_E^2 \right> \mathcal W \left( \left(x - y \right)^2 \rho_S( x, y) \right). \end{eqnarray} Noting that this is true for all $\rho_S (x,y)$, we have \begin{equation} \partial_t \rho_S(x,y) = \frac{i}{2m_S} \left( \partial_x^2 - \partial_y^2 \right) \rho_S(x,y) - \Gamma \frac{m_E}{m_S} (x-y) \left( \partial_x - \partial_y \right) \rho_S (x,y) - \Gamma \left( 2 + 8 \frac{m_E}{m_S} \right) \left< p_E^2 \right> \left(x - y \right)^2 \rho_S( x, y). \end{equation} We take the standard definition for the dissipation\index{Dissipation} rate $\gamma$ to be \cite{halliwell} \begin{equation} \label{eqn:dissipation} \gamma \equiv \frac{m_E}{m_S} \Gamma, \end{equation} so \begin{equation} \partial_t \rho_S(x,y) = \frac{i}{2m_S} \left( \partial_x^2 - \partial_y^2 \right) \rho_S(x,y) - \gamma (x-y) \left( \partial_x - \partial_y \right) \rho_S (x,y) -\gamma \frac{m_S}{m_E} \left( 2 + 8 \frac{m_E}{m_S} \right) \left< p_E^2 \right> \left(x - y \right)^2\rho_S( x, y).
\end{equation} To express this result in standard form, we use the definition for temperature in one dimension from statistical mechanics, which is \cite{kittelkroemer}\index{Temperature} \begin{equation} \label{eqn:temp} \frac{1}{2} k T\equiv \frac{\left< p_E^2 \right> }{2 m_E}, \end{equation} where $T$ is temperature and $k$ is the Boltzmann constant. Using this definition, we examine the last term more closely and find \begin{eqnarray} \gamma \frac{m_S}{m_E} \left( 2 + 8 \frac{m_E}{m_S} \right) \left< p_E^2 \right> \left(x - y \right)^2\rho_S( x, y) &=& \gamma \frac{m_S}{m_E} \left( 2 + 8 \frac{m_E}{m_S} \right) m_E k T \left(x - y \right)^2\rho_S( x, y) \nonumber \\ &=& \gamma \left( 2m_S + 8 m_E \right) k T \left(x - y \right)^2\rho_S( x, y) \nonumber \\ &\sim& \gamma 2m_S k T \left(x - y \right)^2\rho_S( x, y), \end{eqnarray} where we have used the fact that $m_E \ll m_S$. Thus, our final result is \cite{halliwell}\index{Master Equation!State Operator Form} \begin{boxedeqn}{eqn:masterequation} \partial_t \rho_S(x,y) = \frac{i}{2m_S} \left( \partial_x^2 - \partial_y^2 \right) \rho_S(x,y) - \gamma (x-y) \left( \partial_x - \partial_y \right) \rho_S (x,y) - 2m_S \gamma k T \left(x - y \right)^2\rho_S( x, y), \end{boxedeqn} which is the accepted master equation for quantum Brownian motion \cite{omnes, zurek, halliwell}. Using dimensional analysis, we can reinsert $\hbar$ to bring the master equation into SI units. This is \begin{boxedeqn}{eqn:masterequationSI} \partial_t \rho_S(x,y) = \frac{i \hbar}{2m_S} \left( \partial_x^2 - \partial_y^2 \right) \rho_S(x,y) - \gamma (x-y) \left( \partial_x - \partial_y \right) \rho_S (x,y) - \frac{2m_S}{\hbar^2} \gamma k T \left(x - y \right)^2\rho_S( x, y). \end{boxedeqn} The assumptions used to derive this equation are listed in table \ref{tab:asum}. \begin{table}[h] \caption{Assumptions used for the derivation of eqn.
\ref{eqn:masterequation} \label{tab:asum}}\centering \begin{tabular}{|ccc|} \hline Assumption & Equation & Label \\ \hline Small mass ratio & $m_E/m_S \ll 1$ & \ref{eqn:massapprox} \\ Locality & $\left| x_S-x_E \right| \ll \left| x_S \right| $ & \ref{eqn:locality1} \\ Statistical environment & $\left<p_E\right> = 0$ & \ref{eqn:peavgiszero} \\ Dissipation & $ \gamma = m_E/m_S \cdot \Gamma$ & \ref{eqn:dissipation}\\ Temperature & $1/2 \cdot k T = \left< p_E^2 \right>/(2m_E)$ & \ref{eqn:temp} \\ \hline \end{tabular} \end{table} \chapter{Consequences of the Master Equation}\label{chap:applications} \lettrine[lines=2, lhang=0.33, loversize=0.1]{W}e now explore the physical ramifications of the master equation for quantum Brownian motion, developed in the previous chapter. First, we investigate its physical meaning term by term. Next, we consider the simple example of a quantum harmonic oscillator undergoing decoherence. Finally, we offer some closing remarks on decoherence theory in general and suggestions for further reading. \section{Physical Significance of the First Two Terms} In the realm of master equations, eqn. \ref{eqn:masterequation} for quantum Brownian motion is actually \textit{simple} \cite{zurek}. Even so, the purpose of each term is not immediately obvious. In this section, we examine the physical meaning of the first and second terms. The first term is the free system evolution, as it is the transform of eqn. \ref{eqn:wigfree}. It does not hurt to verify this explicitly, without employing the Wigner distribution. If we switch to SI units via eqn. \ref{eqn:masterequationSI}, the first term is \begin{eqnarray} \frac{i\hbar}{2m_S} \left( \partial_x^2 - \partial_y^2 \right) \rho_S(x,y)
&=& \frac{i\hbar}{2m_S} \left( \partial_x^2 - \partial_y^2 \right) \left< x \right| \hat{\rho_S} \left| y \right> \nonumber \\ &=& \frac{i\hbar}{2m_S} \partial_x^2 \left< x \right| \hat{\rho_S} \left| y \right> - \frac{i\hbar}{2m_S} \partial_y^2 \left< x \right| \hat{\rho_S} \left| y \right> \nonumber \\ &=& \frac{i\hbar}{2m_S}\frac{-1}{\hbar^2} \left( \frac{\hbar}{i} \partial_x \right) ^2 \left< x \right| \hat{\rho_S} \left| y \right> - \frac{i\hbar}{2m_S} \frac{-1}{\hbar^2} \left( \frac{\hbar}{i} \partial_y \right) ^2 \left< x \right| \hat{\rho_S} \left| y \right> \nonumber \\ &=& \frac{i\hbar}{2m_S}\frac{-1}{\hbar^2} \check P_x^2 \left< x \right| \hat{\rho_S} \left| y \right> - \frac{i\hbar}{2m_S} \frac{-1}{\hbar^2} \check P_y ^2 \left< x \right| \hat{\rho_S} \left| y \right> \nonumber \\ &=&- \frac{i}{2m_S\hbar}\left(\check P_x^2 \left< x \right|\right) \left( \hat{\rho_S} \left| y \right>\right) +\frac{i}{2m_S\hbar} \left( \left< x \right| \hat{\rho_S} \right) \left(\check P_y ^2 \left| y \right> \right) \nonumber \\ &=&- \frac{i}{2m_S\hbar}\left( \left< x \right| \hat P^2 \right) \left( \hat{\rho_S} \left| y \right>\right) +\frac{i}{2m_S\hbar} \left( \left< x \right| \hat{\rho_S} \right) \left(\hat P ^2 \left| y \right> \right) \nonumber \\ &=&- \frac{i}{\hbar}\left( \left< x \right| \frac{\hat P^2}{2m_S} \hat{\rho_S} \left| y \right> - \left< x \right| \hat{\rho_S} \frac{\hat P ^2}{2m_S} \left| y \right> \right)\nonumber \\ &=& -\frac{i}{ \hbar }\left< x \right| \left( \frac{\hat P^2}{2m_S} \hat{\rho_S} - \hat{\rho}_S \frac{\hat P ^2}{2m_S} \right) \left| y \right>. \end{eqnarray} By eqn. 
\ref{eqn:eop}, the free system (for which $V=0$) has a Hamiltonian of \begin{equation} \hat H_f = \frac{\hat P^2}{2m_S}, \end{equation} so our equation becomes \begin{equation} -\frac{i}{ \hbar }\left< x \right| \left( \hat H_f \hat{\rho_S} - \hat{\rho_S} \hat H_f \right) \left| y \right> = \left< x \right| \frac{i}{ \hbar} \left[ \hat{\rho}_S ,\hat H_f \right] \left| y \right>, \end{equation} which is \begin{equation} \left< x \right| \hat{\partial}_t \hat{\rho}_S \left| y \right> = \check{\partial}_t \rho_S(x,y) \end{equation} by eqn. \ref{eqn:heispic}. Thus, we confirm that the first term in the master equation is the free evolution of the state operator. The second term is not so obvious, and turns out to be responsible for damping our system's motion. To explain this, we use the master equation to calculate the rate of change of the expectation value of momentum due to the second term. In the position basis, the second term reduces to \cite{omnes} \begin{eqnarray} \partial_t \left< \hat P \right>_2 &=& \partial_t \mathrm{Tr}\left(\hat P \hat \rho \right)_2 \nonumber \\ &=& \mathrm{Tr}\left(\hat P \partial_t \hat \rho \right)_2 \nonumber \\ &=& - \gamma \mathrm{Tr} \left( \check P _x (x-y) \left( \partial_x - \partial_y \right) \rho(x,y) \right) \nonumber \\ &=& - \gamma \mathrm{Tr} \left( \frac{1}{i} \partial_x\left[ (x-y) \left( \partial_x - \partial_y \right) \rho(x,y)\right] \right) \nonumber \\ &=& - \gamma \mathrm{Tr} \left( \frac{1}{i} \left( \partial_x - \partial_y \right) \rho(x,y) \right)- \gamma \mathrm{Tr} \left( \frac{1}{i} (x-y) \left( \partial_x^2 - \partial_x \partial_y \right) \rho(x,y) \right) \nonumber \\ &=&- \gamma \int dx \cdot \frac{1}{i} \left( \partial_x - \partial_y \right) \rho(x,x) - \gamma \int dx \cdot \frac{1}{i} (x-x) \left( \partial_x^2 - \partial_x \partial_y \right) \rho(x,x) \nonumber \\ &=&- \gamma \int dx \cdot \frac{1}{i} \left( \partial_x - 0 \right) \rho(x,x) + 0 \nonumber \\ &=& - \gamma \mathrm{Tr} \left(
\frac{1}{i} \partial_x \rho(x,y) \right) \nonumber \\ &=& - \gamma \mathrm{Tr} \left(\hat P \hat \rho \right) \nonumber \\ &=& - \gamma \left< \hat P \right> , \end{eqnarray} which is \begin{boxedeqn}{} \partial_t \left< \hat P \right>_2 = - \gamma \left< \hat P \right>. \end{boxedeqn} Hence, the second term's contribution to the rate of change of momentum is the dissipation\index{Dissipation} (a scalar) times the momentum, directed opposite to the momentum. This is precisely a damping effect, which is what we wanted to show \cite{thornton}. \section{The Decoherence Term} \begin{figure}\label{fig:decoherence0} \end{figure} The last term of the master equation turns out to cause decoherence of the system, so it is central to our discussion. To interpret it, we will make some reasonable approximations. To get a better idea of the relative size of the terms, we use the SI version of the master equation, eqn. \ref{eqn:masterequationSI}. Notice that the last term contains a numerical factor of $1/\hbar^2 \approx 10^{68}$, while the other terms are either first or zeroth order in $1/\hbar$. Thus, we surmise that for sufficiently large $\left| x - y \right|$, the last term will dominate the equation.\footnote{In the matrix representation of a state operator, this corresponds to the \textit{off-diagonal} elements. Recall that the totally mixed state in eqn. \ref{eqn:thisisamixture} had a diagonal state operator. This confirms that decoherence works on the off-diagonal elements of the state operator.} Hence, our drastically simplified master equation is \cite{omnes} \begin{boxedeqn}{eqn:simplifiedmaster} \partial_t \rho_S(x,y) \sim - \frac{2m_S \gamma k T}{\hbar^2} \left(x - y \right)^2\rho_S( x, y), \end{boxedeqn} which has the standard solution \begin{equation} \label{eqn:approxdecoherence} \rho_S(x,y,t) = \rho_S(x,y,0) e^{ -\frac{2m_S \gamma k T}{\hbar^2}\left(x - y \right)^2t}.
\end{equation} Since the argument of the exponential must be dimensionless, $\frac{\hbar^2}{2m_S \gamma k T\left(x - y \right)^2}$ has units of time. Customarily, we identify \cite{omnes}\index{Decoherence Time} \begin{boxedeqn}{} t_d \equiv \frac{\hbar^2}{2m_S \gamma k T\left(x - y \right)^2} \end{boxedeqn} as the (characteristic) decoherence time of the system, which is its $e$-folding time.\footnote{When $t=t_d$, $\rho(x,y,t_d) = \frac{1}{e} \rho(x,y,0)$.} Notice also that the decoherence time varies with location in state-space, as it depends on both $x$ and $y$. Thus, we are not surprised to find that some regions decay faster than others. Further, since $\hbar^2 \approx 10^{-68}$ in SI units, the decoherence time for any reasonably large system is incredibly small.\footnote{For example, if we suppose our environment is an ideal, one-dimensional gas at room temperature with a mass of $10^{-26}$ kg per particle and a collision rate with the system of $\Gamma \approx 10^{10}$ collisions per second (atmospheric conditions), we find the decoherence time of the system for length scales of nanometers, $(x-y)^2 \approx 10^{-18}$ m$^2$, to be of order $t_d =\frac{\hbar^2}{2 m_E \Gamma k T (x-y)^2} \approx \frac{10^{-68}}{2 \cdot 10^{-26} \cdot 10^{10} \cdot 10^{-23} \cdot 300 \cdot 10^{-18}} \approx 10^{-14}$ s.} Next, we consider an example to show how decoherence operates in a simple situation. \section{Example: The Harmonic Oscillator in a Thermal Bath} \begin{figure}\label{fig:decoherence3} \end{figure} So far, we have supposed that $\rho_S$ is a free particle. However, note that our simplified master equation, eqn. \ref{eqn:simplifiedmaster}, does not explicitly depend on the system's Hamiltonian (this was contained in the first term), so we are free to replace our initial state operator with some other state operator of a different system. We choose, due to its utility and familiarity, the harmonic oscillator.
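Before turning to that example, the order-of-magnitude estimate of $t_d$ from the footnote above can be checked directly. The following is a minimal sketch using SI constants; the particle mass, collision rate, temperature, and nanometer length scale are the illustrative values assumed in the footnote, not measured quantities.

```python
# Order-of-magnitude check of the decoherence time
# t_d = hbar^2 / (2 * m * Gamma * k * T * dx^2)
hbar = 1.054e-34   # reduced Planck constant, J s
k = 1.381e-23      # Boltzmann constant, J / K

def decoherence_time(m, Gamma, T, dx):
    """Characteristic e-folding time of the off-diagonal elements rho(x, y)."""
    return hbar**2 / (2 * m * Gamma * k * T * dx**2)

# Illustrative values: ideal-gas particle mass, atmospheric collision
# rate, room temperature, and a nanometer coherence length.
t_d = decoherence_time(m=1e-26, Gamma=1e10, T=300.0, dx=1e-9)
print(t_d)  # on the order of 1e-14 seconds
```

Even for this microscopic setup the off-diagonal elements decay in tens of femtoseconds; for macroscopic masses and separations the time is smaller by many further orders of magnitude.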
From our work in section \ref{sec:wig_harmonic}, we know that the state operator for the harmonic oscillator, eqn. \ref{eqn:stateopharmonic}, is \begin{equation} \rho_n(x,y,t=0) = \frac{1}{n!} \left( \frac{m \omega}{\pi} \right)^{1/2} \left( \frac{1}{2 m \omega}\right)^n\left( m \omega x - \partial_x \right)^n \left( m \omega y - \partial_y \right)^n e^{-\frac{m \omega}{2}x^2} e^{-\frac{m \omega}{2}y^2}. \end{equation} If we place this state operator in a thermal bath, we expect the system to evolve approximately according to eqn. \ref{eqn:approxdecoherence}, so the time-dependent state operator of the harmonic oscillator is\index{Harmonic Oscillator!Decoherence of} \begin{boxedeqn}{eqn:shodecoh} \rho_n(x,y,t) = \frac{1}{n!} \left( \frac{m \omega}{\pi} \right)^{1/2} \left( \frac{1}{2 m \omega}\right)^n\left( m \omega x - \partial_x \right)^n \left( m \omega y - \partial_y \right)^n e^{-\frac{m \omega}{2}x^2} e^{-\frac{m \omega}{2}y^2}e^{ -2m_S \gamma k T\left(x - y \right)^2t}. \end{boxedeqn} In figures \ref{fig:decoherence0} and \ref{fig:decoherence3}, we plot the state operators for $n=0$ and $n=3$. As is evident from the form of eqn. \ref{eqn:shodecoh}, the off-diagonal matrix elements (where $x \neq y$) quickly vanish with time. Physically, the off-diagonal elements of the state operator represent the quantum interference terms, terms that can interact only with other quantum systems. These interference terms are what give the entangled states we explored in sections \ref{sec:quantumsup} and \ref{sec:bellstate} their interesting qualities. By zeroing the off-diagonal elements, we take a quantum mechanical system and force it into a classical distribution. As it turns out, this interpretation becomes obvious as $t \rightarrow \infty$. By eqn.
\ref{eqn:shodecoh}, this is \begin{figure}\label{fig:decoherenceall} \end{figure} \begin{equation} \lim_{t \rightarrow \infty} \rho(x,y,t) = \begin{cases} 0 & \text{if $x\neq y$}, \\ \psi^*(x)\psi(x) = \left| \psi(x) \right|^2 & \text{if $x =y$}, \end{cases} \end{equation} as shown in figure \ref{fig:decoherenceall}. This quantity is a statistical probability distribution, and as we saw with the roulette wheel at the beginning of this thesis, decoherence has effectively blocked us from accessing any of the quantum mechanical information present in our initial system. \section{Concluding Remarks} We have now developed and applied the master equation for quantum Brownian motion, and used it to clarify how a macroscopic, classical object might emerge from quantum mechanics. We started by setting the stage with the mathematics and formalism we would need to develop quantum mechanics. Then, we used the tools we made to derive the Schr\"odinger equation and the equation of motion for the state operator. We then shifted perspective and considered quantum mechanics in phase-space, where the central object is the Wigner distribution. Next, we explored some of its key properties and described an example of its application using the harmonic oscillator. After that, we used it to derive the simple master equation for one-dimensional quantum Brownian motion. We explained each of the terms physically, and finally considered an example of decoherence, where the master equation transformed a quantum harmonic oscillator into a classical probability distribution. The debate still rages in the physics community: does decoherence theory \textit{solve} the philosophical problems brought about by paradoxes like Schr\"odinger's cat\index{Schr\"odinger's cat}, or does it merely postpone the problem, pushing the fundamental issue into an environmental black box \cite{meuffels,hobson,schlosshauer}?
Regardless, it provides a practical framework for performing objective measurements\index{Measurement} without an observer, which is of key importance to the emerging fields of quantum computation and quantum information. Efforts are currently underway to probe decoherence directly, both experimentally and theoretically. Through the use of mesoscopic systems, scientists have been able to manufacture tiny oscillators that are getting very close to the quantum regime \cite{lahaye, blencowe}. Theoretical predictions of what should be observed at the quantum-classical barrier have also been made, with the promise of experimental feasibility within a few years \cite{katz}. Just last year, scientists performed experiments involving ultra-cold chlorophyll, confirming that even photosynthesis is a quantum-emergence phenomenon, and thus governed by decoherence theory \cite{engel}. The group went so far as to suggest that chloroplasts were actually performing quantum computation algorithms on themselves to speed up reaction times. This idea of selective self-measurement is intriguing, but largely undeveloped theoretically. It, along with the many other application areas of quantum decoherence theory, is sure to occupy physicists for years to come. \index{Mixture|see{State, Impure}} \index{Mixed State|see{State, Impure}} \index{Dual Vector|see{Linear Functional}} \index{Energy Operator|see{Hamiltonian}} \index{Operator|see{Linear Operator}} \index{Vector Space|see{Linear Vector Space}} \index{Basis!of Eigenvectors|see{Spectral Theorem}} \index{Free Particle!Wigner Distribution|see{Wigner}} \backmatter \renewcommand\bibname{References} \nocite{*} \if@xetex \cleardoublepage \phantomsection \addcontentsline{toc}{chapter}{Index} \else \ifpdf \cleardoublepage \phantomsection \addcontentsline{toc}{chapter}{Index} \else \cleardoublepage \addcontentsline{toc}{chapter}{Index} \fi \fi \printindex \end{document}
\begin{document} \title{Voronoi-based estimation of Minkowski tensors from finite point samples} \begin{abstract} Intrinsic volumes and Minkowski tensors have been used to describe the geometry of real world objects. This paper presents an estimator that allows these quantities to be approximated from digital images. It is based on a generalized Steiner formula for Minkowski tensors of sets of positive reach. When the resolution tends to infinity, the estimator converges to the true value if the underlying object is a set of positive reach. The underlying algorithm is based on a simple expression in terms of the cells of a Voronoi decomposition associated with the image. \end{abstract} \section{Introduction} Intrinsic volumes, such as volume, surface area, and Euler characteristic, are widely used tools to capture geometric features of an object; see, for instance, \cite{meckeEtAl,OM,milesSerra}. Minkowski tensors are tensor valued generalizations of the intrinsic volumes, associating with every sufficiently regular compact set in $\mathbb{R}^d$ a symmetric tensor, rather than a scalar. They carry information about geometric features of the set such as position, orientation, and eccentricity. For instance, the volume tensor -- defined formally in Section \ref{minkowski} -- of rank $0$ is just the volume of the set, while the volume tensors of rank $1$ and $2$ are closely related to the center of gravity and the tensor of inertia, respectively. For this reason, Minkowski tensors are used as shape descriptors in materials science \cite{mickel,aste}, physics \cite{kapfer}, and biology \cite{beisbart,ziegel}. The main purpose of this paper is to present estimators that approximate all the Min\-kow\-ski tensors of a set $K$ when only weak information on $K$ is available. More precisely, we assume that a finite set $K_0$ which is close to $K$ in the Hausdorff metric is known.
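For finite point sets, closeness in the Hausdorff metric (defined formally in Section \ref{minkowski}) is straightforward to evaluate. The following sketch computes $d_H$ for two finite samples; it is a toy illustration of the notion of closeness, not part of the paper's algorithm.

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in R^d."""
    def dist(p, q):
        return math.dist(p, q)
    # sup over A of the distance to B, and vice versa
    d_ab = max(min(dist(a, b) for b in B) for a in A)
    d_ba = max(min(dist(a, b) for a in A) for b in B)
    return max(d_ab, d_ba)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.1), (1.0, 0.0), (2.0, 0.0)]
print(hausdorff(A, B))  # 1.0: the point (2, 0) is far from A
```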
The estimators are based on the Voronoi decomposition of $\mathbb{R}^d$ associated with the finite set $K_0$, following an idea of M\'{e}rigot et al.\ \cite{merigot}. What makes these estimators so interesting is that they are consistent; that is, they converge to the respective Minkowski tensors of $K$ when applied to a sequence of finite approximations converging to $K$ in the Hausdorff metric. We emphasize that the notion of `estimator' is used here in the sense of digital geometry \cite{digital} meaning `approximation of the true value based on discrete input' and should not be confused with the statistical concept related to the inference from data with random noise. The main application we have in mind is the case where $K_0$ is a digitization of $K$. This is detailed in the following. As data is often only available in digital form, there is a need for estimators that allow us to approximate the {Minkowski} tensors from digital images. In a black-and-white image of a compact geometric object $K\subseteq \mathbb{R}^d$, each pixel (or voxel) is colored black if {its} midpoint belongs to $K$ and white otherwise. Thus, the information about $K$ contained in the image is the set of black pixel (voxel) midpoints $K_0=K\cap a\mathbb{L}$, where $\mathbb{L}$ is the lattice formed by {all} pixel (voxel) midpoints and $a^{-1}$ is the resolution. A natural criterion for {the reliability of} a digital estimator is that it yields the correct tensor when $a\to 0_+$. If this property holds for all objects in a given family of sets, for instance, for all sets with smooth boundary, then the estimator is called \emph{multigrid convergent} for this class. Digital estimators for the scalar Minkowski tensors, that is, for the intrinsic volumes, are widespread in the digital geometry literature; see, e.g.,~\cite{digital,OM,OS} and the references therein. 
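The digitization model $K_0=K\cap a\mathbb{L}$ described above can be sketched for a disk on a square lattice of spacing $a$. This is a toy setup assumed for illustration; the disk, lattice extent, and resolution are arbitrary choices.

```python
def digitize_disk(center, radius, a, extent=2.0):
    """Return the lattice points of spacing a that lie in the disk K."""
    cx, cy = center
    n = int(extent / a)
    points = []
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            x, y = i * a, j * a
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                points.append((x, y))
    return points

# At resolution a^{-1}, the number of black pixels times a^2
# approximates the area of K (here pi for the unit disk).
K0 = digitize_disk((0.0, 0.0), 1.0, a=0.01)
print(len(K0) * 0.01 ** 2)  # close to pi ~ 3.14
```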
For Minkowski tensors up to rank two, estimators based on binary images are given in \cite{turk} for the two-dimensional and in \cite{mecke} for the three-dimensional case. Even for the class of convex sets, multigrid convergence has not been proven for any of the above mentioned estimators. The only exceptions are volume-related quantities. Most of the above mentioned estimators are \emph{$n$-local} for some given fixed $n\in \mathbb{N}$. We call an estimator $n$-local if it depends on the image only through the histogram of all $n\times \dotsm \times n$ configurations of black and white points. For instance, a natural surface area estimator \cite{lindblad} in three-dimensional space scans the image with a voxel cube of size $2\times 2\times2$ and assigns a surface contribution to each observed configuration. The sum of all contributions is then the surface area estimator, which is clearly $2$-local. The advantage of $n$-local estimators is that they are intuitive, easy to implement, and the computation time is linear in the number of pixels or voxels. However, many {$n$-local} estimators are not multigrid convergent for convex sets; see \cite{am3} and the detailed discussion in Section \ref{known}. This implies that many established estimators, like the one mentioned in \cite{lindblad}, cannot be multigrid convergent for convex sets. All the estimators of 2D-Minkowski tensors in \cite{turk} are $2$-local. By the results in \cite{am3}, the estimators for the perimeter and the Euler characteristic can thus not be multigrid convergent for convex sets. The multigrid convergence of the other estimators has not been investigated. The algorithms for 3D-Minkowski tensors in \cite{mecke} have as input a triangulation of the object's boundary, and the way this triangulation is obtained determines whether the resulting estimators are $n$-local or not. There are no known results on multigrid convergence for these estimators either.
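The $2$-local scheme discussed above can be sketched in two dimensions: scan a binary image with a $2\times 2$ window and build the histogram of the $2^4=16$ configurations. Any $2$-local estimator is then a weighted sum over this histogram. The weights themselves (e.g., those of \cite{lindblad}) are omitted here; the image is an assumed toy example.

```python
def config_histogram(img):
    """Histogram of 2x2 black/white configurations in a binary image.

    img is a list of rows of 0/1 entries; each 2x2 window is encoded
    as a 4-bit integer, so the histogram has 16 bins.
    """
    rows, cols = len(img), len(img[0])
    hist = [0] * 16
    for i in range(rows - 1):
        for j in range(cols - 1):
            code = (img[i][j]
                    | img[i][j + 1] << 1
                    | img[i + 1][j] << 2
                    | img[i + 1][j + 1] << 3)
            hist[code] += 1
    return hist

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
h = config_histogram(img)
print(h[15])  # 1: the single all-black window in the center
```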
Summarizing, to the best of our knowledge, this paper presents for the first time estimators of all Minkowski tensors of arbitrary rank that come with a multigrid convergence proof for a class of sets that is considerably larger than the class of convex sets. The present work is inspired by \cite{merigot}, and we therefore start by recalling some basic notions from this paper. For a nonempty compact set $K$, the authors of \cite{merigot} define a tensor valued measure, which they call the \emph{Voronoi covariance measure}, defined on a Borel set $A\subseteq \mathbb{R}^d$ by \begin{equation*} \mathcal{V}_R(K;A) = \int_{ K^R }\mathds{1}_A(p_K(y)) (y-p_K(y))(y-p_K(y))^\top\,dy. \end{equation*} Here, $K^R$ is the set of points at distance at most $R>0$ from $K$ and $p_K$ is the \emph{metric projection} on $K$: the point $p_K(x)$ is the point in $K$ closest to $x$, provided that this closest point is unique. The metric projection of $K$ is well-defined on $\mathbb{R}^d$ with the possible exception of a set of Lebesgue-measure zero; see, e.g., \cite{fremlin}. The paper \cite{merigot} uses the Voronoi covariance measure to determine local features of surfaces. It is proved there that if $K \subseteq \mathbb{R}^3$ is a smooth surface, then \begin{equation}\label{eigen} \mathcal{V}_R(K;B(x,r)) \approx \frac{2\pi}{3}R^3r^2\bigg(u(x)u(x)^\top + \frac{r^2}{4}\sum_{i=1,2}k_i(x)^2P_i(x)P_i(x)^\top\bigg), \end{equation} where $B(x,r)$ is the Euclidean ball with midpoint $x\in K$ and radius $r$, $u(x)$ is one of the two surface unit normals at $x\in K$, $P_1(x),P_2(x)$ are the principal directions and $k_1(x),k_2(x)$ the corresponding principal curvatures. Hence, the eigenvalues and -directions of the Voronoi covariance measure carry information about local curvatures and normal directions. Assuming that a compact set $K_0$ approximates $K$, \cite{merigot} suggests estimating $\mathcal{V}_R(K;\cdot) $ by $\mathcal{V}_R(K_0;\cdot)$.
It is shown in that paper that $\mathcal{V}_R(K_0;\cdot)$ converges to $\mathcal{V}_R(K;\cdot)$ in the bounded Lipschitz metric when $K_0 \to K$ in the Hausdorff metric. Moreover, if $K_0$ is a finite set, then the Voronoi covariance measure can be expressed in the form \begin{equation*} \mathcal{V}_R(K_0;A) = \sum_{x\in K_0 \cap A} \int_{B(x,R)\cap V_x(K_0) } (y-x)(y-x)^\top \,dy. \end{equation*} Here, $V_x(K_0)$ is the Voronoi cell of $x$ in the Voronoi decomposition of $\mathbb{R}^d$ associated with $K_0$. Thus, the estimator which is used to approximate $\mathcal{V}_R(K;A)$ is easily computed. Given the Voronoi cells of $K_0$, each Voronoi cell contributes with a simple integral. Figure \ref{fig} (a) shows the Voronoi cells of a finite set of points on an ellipse. The Voronoi cells are elongated in the normal direction. This is the intuitive reason why they can be used to approximate \eqref{eigen}. The Voronoi covariance measure $\mathcal{V}_R(K;A) $ can be identified with a symmetric 2-tensor. In the present work, we explore how natural extensions of the Voronoi covariance measure can be used to estimate general Minkowski tensors. The generalizations of the Voronoi covariance measure, which we will introduce, will be called \emph{Voronoi tensor measures}. {We will then show how the Minkowski tensors can be recovered from these}. When we apply the results to digital images, we will work with full-dimensional sets $K$, and the finite point sample $K_0$ is obtained from the representation $K_0=K\cap a\mathbb{L}$ of a digital image of $K$. The Voronoi cells associated with $K_0=K\cap a\mathbb{L}$ are sketched in Figure~\ref{fig}~(b). Taking point samples from $K$ with increasing resolution, convergence results will follow from an easy generalization of the convergence proof in \cite{merigot}. \begin{figure} \caption{(a). The Voronoi cells of a finite set of points on a surface. (b). 
A digital image and the associated Voronoi cells.} \label{fig} \end{figure} The paper is structured as follows: In Section~\ref{minkowski}, we recall the definition of Minkowski tensors and the classical as well as a local Steiner formula for sets of positive reach. In Section~\ref{construction}, we define the Voronoi tensor measures, discuss how they can be estimated from finite point samples, and explain how the Steiner formula can be used to connect the Voronoi tensor measures with the Minkowski tensors. Section \ref{convergence} is concerned with the convergence of the estimator. The results are specialized to digital images in Section \ref{DI}. Finally, the estimator is compared with existing approaches in Section \ref{known}. \section{Minkowski tensors}\label{minkowski} We work in Euclidean space $\mathbb{R}^d$ with scalar product $\langle\cdot\,,\cdot\rangle$ and norm $|\cdot|$. The Euclidean ball with center $x\in\mathbb{R}^d$ and radius $r\ge 0$ is denoted by $B(x,r)$, and we write $S^{d-1}$ for the unit sphere in $\mathbb{R}^d$. Let $\partial A$ and $\text{int}A$ be the boundary and the interior of a set $A\subseteq{\mathbb R}^d$, respectively. The $k$-dimensional Hausdorff-measure in $\mathbb{R}^d$ is denoted by ${\mathcal H}^k$, $0\le k\le d$. Let ${\mathcal C}^d$ be the family of nonempty compact subsets of $\mathbb{R}^d$ and ${\mathcal K}^d\subseteq \mathcal{C}^d$ the subset of nonempty compact convex sets. For two compact sets $K,M \in{\mathcal C}^d$, we define their \emph{Hausdorff distance} by \begin{equation*} d_H(K,M) = \inf\{\varepsilon>0\mid K\subseteq M^\varepsilon, M \subseteq K^\varepsilon\}. \end{equation*} Let $\mathbb{T}^p$ denote the space of symmetric $p$-tensors (tensors of rank $p$) over $\mathbb{R}^d$. Identifying $\mathbb{R}^d$ with its dual (via the scalar product), a symmetric $p$-tensor defines a symmetric multilinear map $(\mathbb{R}^d)^p\to \mathbb{R}$.
Letting $e_1,\dots,e_d$ be the standard basis in $\mathbb{R}^d$, a tensor $T\in \mathbb{T}^p$ is determined by its coordinates \begin{equation*} T_{i_1\dots i_p}=T(e_{i_1},\dots,e_{i_p}) \end{equation*} with respect to the standard basis, for all choices of ${i_1},\dots,{i_p} \in \{1,\dots,d\}$. We use the norm on $\mathbb{T}^p$ given by \begin{equation*} |T|=\sup\big\{|T(v_1,\dots,v_p)| \,\mid \, |v_1|=\dots =|v_p|=1\big\} \end{equation*} for $T\in \mathbb{T}^p$. The same definition is used for arbitrary tensors of rank $p$. The symmetric tensor product of $y_1,\ldots, y_m\in \mathbb{R}^{d}$ is given by the symmetrization $y_1\odot\cdots\odot y_m=(m!)^{-1}\sum \otimes_{i=1}^m y_{\sigma(i)}$, where the sum extends over all permutations $\sigma$ of $\{1,\ldots,m\}$ and $\otimes$ is the usual tensor product. We write $x^r$ for the $r$-fold tensor product of $x\in \mathbb{R}^d$. For two symmetric tensors of the form $T_1=y_1 \odot \cdots \odot y_r$ and $T_2=y_{r+1} \odot \cdots \odot y_{r+s}$, where $y_1, \ldots , y_{r+s} \in\mathbb{R}^d$, the symmetric tensor product $T_1\odot T_2$ of $T_1$ and $T_2$, which we often abbreviate by $T_1T_2$, is the symmetric tensor product of $y_1, \ldots, y_{r+s} $. This is extended to general symmetric tensors $T_1$ and $T_2$ by linearity. Moreover, it follows from the preceding definitions that $$ |y_1\odot\cdots\odot y_m|\le |y_1|\cdots |y_m|, $$ $y_1,\ldots, y_m\in \mathbb{R}^{d}$. For any compact set $K\subseteq \mathbb{R}^d$, we can define an element of $\mathbb{T}^r$ called the \emph{$r$th volume tensor} \begin{equation*} \Phi_{d}^{r,0}(K) = \frac{1}{r!} \int_{K} x^r \,dx. \end{equation*} For $s\geq 1$ we define $\Phi_{d}^{r,s}(K)=0$. Some of the volume tensors have well-known physical interpretations. For instance, $\Phi_{d}^{0,0}(K)$ is the usual volume of $K$, $\Phi_{d}^{1,0}(K)$ is up to normalization the center of gravity, and $\Phi_{d}^{2,0}(K)$ is closely related to the tensor of inertia. 
All three tensors together can be used to find the best approximating ellipsoid of a particle \cite{ziegel}. The sequence of all volume tensors $(\Phi_{d}^{r,0}(K))_{r=0}^\infty$ determines the compact set $K$ uniquely. For convex sets in the plane even the following stability result \cite[Remark 4.4.]{JuliaAstrid} holds: If $K, L\in {\mathcal K}^2$ are contained in the unit square and have coinciding volume tensors up to rank $r$, then their distance, measured in the symmetric difference metric ${\mathcal H}^2\big((K\setminus L) \cup (L\setminus K)\big)$, is of order $O(r^{-1/2})$ as $r\to \infty$. We will now define \emph{Minkowski surface tensors}. These can also be used to characterize the shape of an object or the structure of a material as in \cite{beisbart,kapfer}. They require stronger regularity assumptions on $K$. Usually, like in \cite[Section 5.4.2]{schneider}, the set $K$ is assumed to be convex. However, as Minkowski tensors are tensor-valued integrals with respect to the generalized curvature measures (also called support measures) of $K$, they can be defined whenever the latter are available. We will use this to define Minkowski tensors for sets of positive reach. First, we recall the definition of a set of positive reach and explain how curvature measures of such sets are determined (see \cite{Federer59,zahle}). For a compact set $K\in {\mathcal C}^d$, we let $d_K(x)$ denote the distance from $x\in \mathbb{R}^d$ to $K$. Then, for $R\ge 0$, $K^R=\{x\in \mathbb{R}^d \mid d_K(x)\leq R\}$ is the $R$-parallel set of $K$. The \emph{reach} $\reach(K)$ of $K$ is defined as the supremum over all $R\geq 0$ such that for all $x\in \mathbb{R}^d$ with $d_K(x)<R$ there is a unique closest point $p_K(x)$ in $K$. We say that $K$ has positive reach if $\reach(K)>0$. Smooth surfaces (of class $C^{1,1}$) are examples of sets of positive reach, and compact convex sets are characterized by having infinite reach. 
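As a concrete instance of the metric projection used throughout, for a Euclidean ball the projection can be written down in closed form. The following is a minimal planar sketch; since balls are convex, the reach is infinite and $p_K$ is defined everywhere.

```python
import math

def p_ball(x, c, r):
    """Metric projection of the point x onto the closed ball B(c, r) in R^2.

    Points inside the ball are fixed; points outside are mapped to the
    nearest boundary point along the ray from the center.
    """
    dx, dy = x[0] - c[0], x[1] - c[1]
    d = math.hypot(dx, dy)
    if d <= r:
        return x
    return (c[0] + r * dx / d, c[1] + r * dy / d)

print(p_ball((3.0, 4.0), (0.0, 0.0), 1.0))  # (0.6, 0.8)
```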
By definition, the map $p_K$ is defined everywhere on $K^R$ if $R<\reach(K)$. Let $K\subseteq \mathbb{R}^d$ be a (compact) set of positive reach. The (global) Steiner formula for sets with positive reach states that for all $R<\reach(K)$ the $R$-parallel volume of $K$ is a polynomial, that is, \begin{align}\label{gloSt} \mathcal{H}^d( K^R){}&= \sum_{k=0}^d \kappa_{d-k} R^{d-k} \Phi_{k}^{0,0}(K). \end{align} Here $\kappa_j$ is the volume of the unit ball in $\mathbb{R}^j$ and the numbers $\Phi^{0,0}_0(K),\ldots, \allowbreak \Phi_d^{0,0}(K)$ are the so-called \emph{intrinsic volumes} of $K$. They are special cases of the Minkowski tensors to be defined below. Some of them have well-known interpretations. As mentioned, $\Phi^{0,0}_d(K)$ is the volume of $K$. Moreover, $2\Phi^{0,0}_{d-1}(K)$ is the surface area, $\Phi^{0,0}_{d-2}(K)$ is proportional to the total mean curvature, and $\Phi^{0,0}_0(K)$ is the Euler characteristic of $K$. For convex sets, \eqref{gloSt} is the classical Steiner formula which holds for all $R\ge 0$. Z\"ahle \cite{zahle} showed that a local version of \eqref{gloSt} can be established giving rise to the \emph{generalized curvature measures} $\Lambda_k(K;\cdot)$ of $K$, for $k=0,\dots,d-1$. An extension to general closed sets is considered in \cite{last}. The generalized curvature measures (also called support measures) are measures on $\Sigma = \mathbb{R}^d\times S^{d-1}$. They are determined by the following {\em local} Steiner formula which holds for all $R < \reach(K)$ and all Borel sets $B\subseteq \Sigma$: \begin{equation}\label{clasSteiner} \mathcal{H}^d\left(\left\{x\in K^R \backslash K \mid \Big(p_K(x), \tfrac{x-p_K(x)}{|x-p_K(x)|}\Big)\in B\right\}\right) = \sum_{k=0}^{d-1} R^{d-k} \kappa_{d-k} \Lambda_k(K;B). \end{equation} The coefficients $\Lambda_k(K;B)$ on the right side of \eqref{clasSteiner} are signed Borel measures $\Lambda_k(K;\cdot)$ evaluated on $B\subseteq\Sigma$.
These measures are called the {\em generalized curvature measures} of $K$. Since the pairs of points in $B$ on the left side of \eqref{clasSteiner} always consist of a boundary point of $K$ and an outer unit normal of $K$ at that point, each of the measures $\Lambda_k(K,\cdot)$ is concentrated on the set of all such pairs. For this reason, the generalized curvature measures $\Lambda_k(K;\cdot)$, $k\in\{0,\ldots,d-1\}$, are also called {\em support measures}. They describe the local boundary behavior of the part of $\partial K$ that consists of points $x$ with an outer unit normal $u$ such that $(x,u)\in B$. A description of the generalized curvature measures $\Lambda_k(K,\cdot)$ by means of generalized curvatures living on the normal bundle of $K$ was first given in \cite{zahle} (see also \cite[\S 2.5 and p.~217]{schneider} and the references given there). The total measures $\Lambda_k(K,\Sigma)$ are the intrinsic volumes. Based on the generalized curvature measures, for every $k\in\{0,\dots,d-1\}$, $r,s\geq 0$ and every set $K\subseteq\mathbb{R}^d$ with positive reach, we define the {\em Minkowski tensor} \begin{equation*} \Phi_{k}^{r,s}(K) = \frac{1}{r!s!}\frac{\omega_{d-k}}{\omega_{d-k+s}}\int_{\Sigma} x^r u^{s} \Lambda_k(K;d(x,u)) \end{equation*} in $\mathbb{T}^{r+s}$. Here $\omega_k$ is the surface area of the unit sphere $S^{k-1}$ in $\mathbb{R}^k$. More information on Minkowski tensors can for instance be found in \cite{hug,mcmullen,schuster,KVJLNM}. As in the case of volume tensors, the Minkowski tensors carry strong information on the underlying set. For instance, already the sequence $(\Phi_{1}^{0,s}(K))_{s=0}^\infty$ determines any $K\in {\mathcal K}^d$ up to a translation. 
A stability result also holds: if $K$ and $L$ are both contained in a fixed ball and have the same tensors $\Phi_{1}^{0,s}$ of rank $s\le s_0$, then a translation of $K$ is close to $L$ in the Hausdorff metric and the distance is $O(s_0^{-\beta})$ as $s_0\to \infty$ for any $0<\beta<3/(n+1)$; see \cite[Theorem 4.9]{AstridMarkus}. One can define \emph{local Minkowski tensors} in a similar way (see \cite{HS14}). For a Borel set $B\subseteq \Sigma$, for $k\in\{0,\dots,d-1\}$, $r,s\geq 0$ and a set $K\subseteq\mathbb{R}^d$ with positive reach, we put \begin{equation*} \Phi_{k}^{r,s}(K;B) = \frac{1}{r!s!}\frac{\omega_{d-k}}{\omega_{d-k+s}}\int_{B} x^r u^{s} \,\Lambda_k(K;d(x,u)) \end{equation*} and, for a Borel set $A \subseteq \mathbb{R}^d$, \begin{equation*} \Phi_{d}^{r,0}(K;A) = \frac{1}{r!} \int_{K\cap A} x^r \,dx. \end{equation*} In order to avoid a distinction of cases, we also write $\Phi_{d}^{r,0}(K;A\times S^{d-1})$ instead of $\Phi_{d}^{r,0}(K;A)$. Moreover, we define $\Phi_{d}^{r,s}(K;\cdot)=0$ if $s\ge 1$. The local Minkowski tensors can be used to describe local boundary properties. For instance, local 1- and 2-tensors are used for the detection of sharp edges and corners on surfaces in \cite{clarenz}. They also carry information about normal directions and principal curvatures as explained in the introduction. We conclude this section with a general remark on continuity properties of the Minkowski tensors. Although the functions $K\mapsto \Phi_{k}^{r,s}(K)$ are continuous when considered in the metric space $(\mathcal{K}^d,d_H)$, they are not continuous on ${\mathcal C}^d$. (For instance, the volume tensors of a finite set are always vanishing, but finite sets can be used to approximate any compact set in the Hausdorff metric.) This is the reason why our approach requires an approximation argument with parallel sets as outlined below. The consistency of our estimator is mainly based on a continuity result for the metric projection map. 
We quote this result \cite[Theorem 3.2]{chazal} in a slightly different formulation which is symmetric in the two bodies involved. Let $\|f\|_{L^1(E)}$ be the usual $L^1$-norm of the restriction of $f$ to a Borel set $E\subseteq \mathbb{R}^d$. \begin{proposition}\label{CHAZProp} Let $\rho>0$ and let $E\subseteq \mathbb{R}^d$ be a bounded measurable set. Then there is a constant $C_1=C_1\left(d,\diam(E\cup\{0\}),\rho\right)>0$ such that \[ \|p_K-p_{K_0}\|_{L^1(E)} \le C_1 d_H(K,K_0)^{\frac 12} \] for all $K,K_0\in {\mathcal C}^d$ with $K,K_0\subseteq B(0,\rho)$. \end{proposition} \begin{proof} Let $E'$ be the convex hull of $E$ and observe that \begin{equation*} \|p_K-p_{K_0}\|_{L^1(E)} \leq \|p_K-p_{K_0}\|_{L^1(E')}. \end{equation*} It is shown in \cite[Lemma 3.3]{chazal} (see also \cite[Theorem 4.8]{Federer59}) that the map $v_K:\mathbb{R}^d\to\mathbb{R}$ given by $v_K(x)=|x|^2-d_K^2(x)$ is convex and that its gradient coincides almost everywhere with $2p_K$. Since $E'$ has rectifiable boundary, \cite[Theorem~3.5]{chazal} implies that \begin{align*} \|p_K-p_{K_0}\|_{L^1(E')} \le {}& c_1(d) ({\mathcal H}^d(E')+(c_2+\|d_K^2-d_{K_0}^2\|_{\infty,E'}^{\frac 12}){\mathcal H}^{d-1}(\partial E'))\\ &\times \|d_K^2-d_{K_0}^2\|_{\infty,E'}^{\frac 12}. \end{align*} Here $c_2=\diam(2p_K(E')\cup 2p_{K_0}(E'))\le 2\diam (K\cup K_0)\le 4\rho$ and the supremum-norm $\|\cdot\|_{\infty,E'}$ on $E'$ can be estimated by \begin{align*} \|d_K^2-d_{K_0}^2\|_{\infty,E'}&\le 2\diam(E'\cup K\cup K_0) \|d_K-d_{K_0}\|_{\infty,E'} \\&\le 2\left[\diam(E'\cup\{0\})+2\rho\right]d_H(K,K_0). \end{align*} Moreover, intrinsic volumes are increasing on the class of convex sets, so \begin{align*} \mathcal{H}^d(E'){}&\leq \mathcal{H}^d(B(0, \diam(E'\cup \{0\})))\\ {\mathcal H}^{d-1}(\partial E') {}&\leq \mathcal{H}^{d-1}(\partial B(0, \diam(E'\cup \{0\}))). 
\end{align*} Together with the trivial estimate $d_H(K,K_0)\le2\rho$ and with the equality $\diam(E\cup\{0\})=\diam(E'\cup\{0\})$, this yields the claim. \end{proof} The authors of \cite{chazal} argue that the exponent $1/2$ in Proposition \ref{CHAZProp} is best possible. \section{Construction of the estimator} \label{construction} In Section \ref{VTM} below, we define the Voronoi tensor measures and show how the Minkowski tensors can be obtained from these. We then explain in Section \ref{finite} how the Voronoi tensor measures can be estimated from finite point samples. As a special case, we obtain estimators for all intrinsic volumes. This is detailed in Section \ref{intvol}. \subsection{The Voronoi tensor measures} \label{VTM} Let $K$ be a compact set. Here and in the following subsections, we let $r,s\in\mathbb{N}_0$ and $R\ge 0$. Define the $ \mathbb{T}^{r+s}$-valued measures $\mathcal{V}_{R}^{r,s}(K;\cdot)$ given on a Borel set $A\subseteq \mathbb{R}^d$ by \begin{equation}\label{star} \mathcal{V}_{R}^{r,s}(K;A) = \int_{K^R }\mathds{1}_A(p_K(x)) \,p_K(x)^r(x-p_K(x))^s \, dx. \end{equation} When $K$ is a smooth surface, $\mathcal{V}_{R}^{0,2}(K;\cdot)$ {corresponds to} the Voronoi covariance measure in \cite{merigot}. We will refer to the measures defined in \eqref{star} as the \emph{Voronoi tensor measures}. Note that if $f:\mathbb{R}^d \to \mathbb{R}$ is a bounded Borel function, then \begin{equation}\label{integralf} \int_{\mathbb{R}^d} f(x) \,\mathcal{V}_{R}^{r,s}(K;dx) = \int_{K^R}f(p_K(x))\,p_K(x)^r(x-p_K(x))^s \, dx \in \mathbb{T}^{r+s}. \end{equation} Suppose now that $K$ has positive reach with $\reach(K)>R$. 
Then a special case of the generalized Steiner formula derived in \cite{last} (or an extension of \eqref{clasSteiner}) implies the following version of the local Steiner formula for the Voronoi tensor measures: \begin{align}\nonumber \mathcal{V}_{R}^{r,s}(K;A) {}&= \sum_{k=1}^{d} \omega_{k} \int_{\Sigma} \int_{0}^R \mathds{1}_{A}(x) t^{s+k-1} x^r u^s \, dt\, \Lambda_{d-k}(K;d(x,u))\nonumber\\ &\qquad +\mathds{1}_{\{s = 0\}}\int_{K\cap A} x^r \,dx\nonumber\\ &= r!s! \sum_{k=0}^d \kappa_{k+s} R^{s+k} \Phi_{d-k}^{r,s}(K;A\times S^{d-1}),\label{steiner} \end{align} where $A\subseteq {\mathbb R}^d$ is a Borel set. In particular, the total measure is \begin{equation*} \mathcal{V}_{R}^{r,s}(K)=\mathcal{V}_{R}^{r,s}(K;\mathbb{R}^d) = r!s!\sum_{k=0}^d \kappa_{k+s} R^{s+k} \Phi_{d-k}^{r,s}(K) . \end{equation*} Note that the special case $r=s=0$ is the Steiner formula \eqref{gloSt} for sets with positive reach. Equation \eqref{steiner}, used for different parallel distances $R$, can be solved for the Minkowski tensors. More precisely, choosing $d+1$ different values $0<R_0<\ldots <R_d<\reach(K)$ for $R$, we obtain a system of $d+1$ linear equations: \begin{align}\label{matrixeq} \begin{pmatrix} \mathcal{V}_{R_0}^{r,s}(K;A)\\ \vdots \\ \mathcal{V}_{R_d}^{r,s}(K;A) \end{pmatrix} =r!s! \begin{pmatrix}\kappa_s R_0^{s} & \dots & \kappa_{s+d}R_0^{s+d} \\ \vdots & & \vdots \\ \kappa_sR_{d}^{s} & \dots & \kappa_{s+d}R_{d}^{s+d} \end{pmatrix} \begin{pmatrix}\Phi_{d}^{r,s}(K;{A\times S^{d-1}})\\ \vdots \\ \Phi_{0}^{r,s}(K;{A\times S^{d-1}}) \end{pmatrix}. 
\end{align} Since the Vandermonde-type matrix \begin{align}\label{matrixA} A_{R_0,\ldots,R_d}^{r,s} = {r!s!} \begin{pmatrix} \kappa_s R_0^{s} & \dots & \kappa_{s+d}R_0^{s+d} \\ \vdots & & \vdots \\ \kappa_s R_{d}^{s} & \dots & \kappa_{s+d}R_{d}^{s+d} \end{pmatrix}\in \mathbb{R}^{(d+1)\times(d+1)} \end{align} in \eqref{matrixeq} is invertible, the system can be solved for the tensors, and thus we get \begin{align}\label{matrix} \begin{pmatrix}{\Phi}_{d}^{r,s}(K;A{\times S^{d-1}})\\ \vdots \\ {\Phi}_{0}^{r,s}(K;{A{\times S^{d-1}}}) \end{pmatrix} =\left(A_{R_0,\ldots,R_d}^{r,s}\right)^{-1} \begin{pmatrix} \mathcal{V}_{R_0}^{r,s}(K;A)\\ \vdots \\ \mathcal{V}_{R_d}^{r,s}(K;A) \end{pmatrix}. \end{align} If $s>0$, then ${\Phi}_{d}^{r,s}(K;A\times S^{d-1})=0$ by definition, so we may omit one of the equations in the system \eqref{matrixeq}. \subsection{Estimation of Minkowski tensors}\label{finite} Let $K$ be a compact set of positive reach. Suppose that we are given a compact set $K_0$ that is close to $K$ in the Hausdorff metric. In the applications we have in mind, $K_0$ is a finite subset of $K$, but this is not necessary for the algorithm to work. Based on $K_0$, we want to estimate the local Minkowski tensors of $K$. We do this by approximating $\mathcal{V}_{R_k}^{r,s}(K;A)$ in Formula \eqref{matrix} by $\mathcal{V}_{R_k}^{r,s}(K_0;A)$, for $k=0,\dots,d$ and $A\subseteq \mathbb{R}^d$ a Borel set. This leads to the following set of estimators for $\Phi_k^{r,s}(K;A\times S^{d-1})$, $k\in\{0,\ldots,d\}$: \begin{align} \begin{pmatrix}\hat{\Phi}_{d}^{r,s}(K_0;A\times S^{d-1})\\ \vdots \\ \hat{\Phi}_{0}^{r,s}(K_0;A\times S^{d-1}) \end{pmatrix} =\left(A_{R_0,\ldots,R_d}^{r,s}\right)^{-1} \begin{pmatrix} \mathcal{V}_{R_0}^{r,s}(K_0;A)\\ \vdots \\ \mathcal{V}_{R_d}^{r,s}(K_0;A) \end{pmatrix}\label{defEst} \end{align} with $A_{R_0,\ldots,R_d}^{r,s}$ given by \eqref{matrixA}. 
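The inversion step in \eqref{matrix} is easy to check with a few lines of code. The following sketch (our own illustration, with an arbitrary choice of radii) recovers the intrinsic volumes of a planar disk of radius $\rho=1$ from its exact parallel volumes $\mathcal{H}^2(K^R)=\pi(\rho+R)^2$, using $d=2$, $r=s=0$ and $\kappa_0=1$, $\kappa_1=2$, $\kappa_2=\pi$:

```python
import numpy as np

# kappa[j] = volume of the j-dimensional unit ball
kappa = [1.0, 2.0, np.pi]
d = 2

# three parallel radii 0 < R_0 < R_1 < R_2 (a convex body has infinite reach)
R = np.array([0.5, 0.75, 1.0])

# Vandermonde-type matrix from (matrixA) for r = s = 0; row i has entries
# kappa_k * R_i^k for k = 0, ..., d, multiplying (Phi_2, Phi_1, Phi_0)
A = np.array([[kappa[k] * Ri**k for k in range(d + 1)] for Ri in R])

rho = 1.0                       # K = disk of radius rho
V = np.pi * (rho + R) ** 2      # exact parallel volumes H^2(K^R)

# solve the linear system (matrix) for the intrinsic volumes
area, half_perimeter, euler = np.linalg.solve(A, V)
```

Up to floating-point error, the solution is $(\pi,\pi,1)$: the area, half the boundary length, and the Euler characteristic of the unit disk.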
Setting $A=\mathbb{R}^d$ in \eqref{defEst}, we obtain estimators \[ \hat{\Phi}_{k}^{r,s}(K_0)=\hat{\Phi}_{k}^{r,s}(K_0;\mathbb{R}^d\times S^{d-1}) \] of the {Minkowski tensors}. Note that this approach requires an estimate for the reach of $K$, because we need to choose $0<R_0<\dots<R_d <\reach(K)$. The idea of inverting the Steiner formula is not new. It was used in \cite{chazal} to approximate curvature measures of sets of positive reach. In \cite{spodarev} and \cite{jan} it was used to estimate intrinsic volumes, but without a proof of convergence for the resulting estimators. We now consider the case where $K_0$ is finite. Let \begin{equation*} V_x(K_0)=\{y\in \mathbb{R}^d \mid p_{K_0}(y)=x\} \end{equation*} denote the Voronoi cell of $x\in K_0$ with respect to the set $K_0$. Since $\mathbb{R}^d$ is the union of the finitely many Voronoi cells of $K_0$, the set $K^R_0$ is the union of the $R$-bounded parts $B(x,R)\cap V_x(K_0)$, $x\in K_0$, of the Voronoi cells, which have pairwise disjoint interiors. Thus \eqref{star} simplifies to \begin{equation}\label{algorithm} \mathcal{V}_{R}^{r,s}(K_0;A)= \sum_{x\in K_0\cap A } x^r \int_{B(x,R)\cap V_x(K_0)} (y-x)^s \, dy. \end{equation} Like the Voronoi covariance measure, the Voronoi tensor measure $\mathcal{V}_{R}^{r,s}(K_0;A)$ is a sum of simple contributions from the individual Voronoi cells. An example of a Voronoi decomposition associated with a digital image is sketched in Figure~\ref{redblue}. The original set $K$ is the disk bounded by the inner black circle, and the disk bounded by the outer black circle is its $R$-parallel set $K^R$. The finite point sample is $K_0 = K \cap \mathbb{Z}^2$, shown as the set of red dots, and the red curve is the boundary of its $R$-parallel set. The Voronoi cells of $K_0$ are indicated by blue lines. The $R$-bounded part of one of the Voronoi cells is the part that is cut off by the red arc.
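For small samples, the integrals in \eqref{algorithm} can be approximated by Monte Carlo sampling over each ball $B(x,R)$, keeping a sample $y$ for the point $x$ only if $x$ is the nearest point of $K_0$ to $y$. The sketch below (our own illustration; the function name and sample size are arbitrary) does this for $r=s=0$ in the plane, where $\mathcal{V}_{R}^{0,0}(K_0)$ is the area of $K_0^R$:

```python
import numpy as np

rng = np.random.default_rng(0)

def voronoi_measure_00(points, R, n=100_000):
    """Monte Carlo estimate of V_R^{0,0}(K_0) = H^2(K_0^R) for a finite
    planar set K_0, summing the areas of the R-bounded Voronoi cells."""
    points = np.asarray(points, dtype=float)
    total = 0.0
    for i, x in enumerate(points):
        # uniform samples in the square x + [-R, R]^2 around the i-th point
        y = x + rng.uniform(-R, R, size=(n, 2))
        in_ball = np.sum((y - x) ** 2, axis=1) <= R**2
        # y lies in the Voronoi cell V_x(K_0) iff x is its nearest point of K_0
        d2 = np.sum((y[:, None, :] - points[None, :, :]) ** 2, axis=2)
        in_cell = np.argmin(d2, axis=1) == i
        total += (2 * R) ** 2 * np.mean(in_ball & in_cell)
    return total
```

For a single point, this returns approximately $\pi R^2$; for two points at distance $1$ with $R=1$, it approximates the area $2\pi-\big(2\arccos\tfrac12-\tfrac{\sqrt 3}{2}\big)\approx 5.05$ of the union of the two unit disks.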
\begin{figure} \caption{The Voronoi decomposition (blue lines) and $R$-parallel set (red curve) associated with a digital image.} \label{redblue} \end{figure} \subsection{The case of intrinsic volumes}\label{intvol} Recall that $\Phi_k^{0,0}(K)=\Lambda_k(K;\mathbb{R}^d)$ is the $k$th intrinsic volume. Thus, Section \ref{finite} provides estimators for all intrinsic volumes as a special case. This case is particularly simple. The measure $\mathcal{V}_{R}^{0,0}(K;A)$ is simply the volume of a local parallel set \begin{align*} \mathcal{V}_{R}^{0,0}(K;A){}&=\mathcal{H}^d\left(\{ x \in K^R\mid p_K(x) \in A\}\right),\\ \mathcal{V}_{R}^{0,0}(K){}&=\mathcal{H}^d( K^R). \end{align*} In particular, if $K\subseteq \mathbb{R}^d$ is a compact set with $\reach(K)>R$, then Equation~\eqref{steiner} reduces to the usual local Steiner formula \begin{align*} \mathcal{H}^d( \{ x \in K^R\mid p_K(x) \in A\})&= \sum_{k=0}^d \kappa_{k} R^{k} \Lambda_{d-k}(K;A\times S^{d-1}), \end{align*} and to the (global) Steiner formula \eqref{gloSt} if $A=\mathbb{R}^d$. In this case, our algorithm approximates the parallel volume $\mathcal{H}^d(K^R)$ by $\mathcal{H}^d(K_0^R)$. In the example in Figure \ref{redblue}, this corresponds to approximating the volume of the larger black disk by the volume of the region bounded by the red curve. This volume is again the sum of the volumes of the regions bounded by the red and blue curves. In other words, it is the sum of volumes of the $R$-bounded Voronoi cells on the right-hand side of the equation \begin{equation*} \mathcal{V}_{R}^{0,0}(K_0;A)= \sum_{x\in K_0\cap A } \mathcal{H}^d(B(x,R) \cap V_x(K_0)). \end{equation*} \subsection{Estimators for general local Minkowski tensors}\label{general1} In Section \ref{finite} we have only considered estimators for local tensors of the form $\Phi_k^{r,s}(K;A \times S^{d-1})$, where $K\subseteq\mathbb{R}^d$ is a set with positive reach. 
The natural way to estimate $\Phi_k^{r,s}(K;B)$, for a measurable set $B\subseteq \Sigma $, would be to copy the idea in Section \ref{finite} with $\mathcal{V}_{R}^{r,s}(K;A)$ replaced by the following generalization of the Voronoi tensor measures, \begin{equation}\label{baddef} \mathcal{W}_{R}^{r,s}(K;B) = \int_{K^R \backslash K}\mathds{1}_B(p_K(x),u_K(x))p_K(x)^r (x-p_K(x))^s \, dx, \end{equation} where $u_K(x) = ({x-p_K(x)})/{|x-p_K(x)|}$ estimates the normal direction. Of course, this definition works for any $K\in\mathcal{C}^d$. Moreover, we could define estimators related to \eqref{baddef} whenever we have a set $K_0$ which approximates $K$. However, even if $K$ has positive reach, the map $x\mapsto u_K(x)$ is not Lipschitz on $K^R\backslash K$, and therefore the convergence results in Section \ref{convergence} will not work with this definition. Since the map $x\mapsto u_K(x)$ is Lipschitz on $K^R\backslash K^{R/2}$, it is natural to proceed as follows. For any $K\in\mathcal{C}^d$, we define \begin{align}\label{modify} \overline{\mathcal{V}}_{R}^{r,s}(K;B) {}&= \int_{K^R \backslash K^{R/2}}\mathds{1}_B(p_K(x),u_K(x))p_K(x)^r (x-p_K(x))^s \, dx. \end{align} Note that \begin{equation}\label{Wdifference} \overline{\mathcal{V}}_{R}^{r,s}(K;\cdot)=\mathcal{W}_{R}^{r,s}(K;\cdot)-\mathcal{W}_{R/2}^{r,s}(K;\cdot), \end{equation} where $\mathcal{W}_{R}^{r,s}(K;\cdot)$ is defined as in \eqref{baddef}. We will not use the notation $\mathcal{W}_{R}^{r,s}(K;\cdot)$ in the following. If $K$ has positive reach and $0<R<\text{reach}(K)$, then the generalized Steiner formula yields \begin{align*} {\overline{\mathcal{V}}_{R}^{r,s}(K;B)} &= r!s! \sum_{k=1}^d \kappa_{s+k} R^{s+k} (1-2^{-(s+k)}){ {\Phi}_{d-k}^{r,s}(K;B)}. 
\end{align*} Again, choosing $0<R_1<\ldots<R_d<\text{reach}(K)$, we can recover the Minkowski tensors from \begin{align*} \begin{pmatrix}{\Phi}_{d-1}^{r,s}(K;B)\\ \vdots \\ {\Phi}_{0}^{r,s}(K;B) \end{pmatrix} = \left( \overline{A}_{R_1,\ldots,R_d}^{r,s} \right)^{-1} \begin{pmatrix} \overline{\mathcal{V}}_{R_1}^{r,s}(K;B)\\ \vdots \\ \overline{\mathcal{V}}_{R_d}^{r,s}(K;B) \end{pmatrix} \end{align*} where \[ \overline {A}_{R_1,\ldots,R_d}^{r,s}= {r!s!} \begin{pmatrix} \kappa_{s+1} (1-2^{-(s+1)}) R_1^{s+1} & \dots & \kappa_{s+d}(1-2^{-(s+d)})R_1^{s+d} \\ \vdots & & \vdots \\ \kappa_{s+1} (1-2^{-(s+1)}) R_{d}^{s+1} & \dots & \kappa_{s+d}(1-2^{-(s+d)})R_{d}^{s+d} \end{pmatrix} \] is a regular matrix. Using this, we can define estimators for ${\Phi}_{k}^{r,s}(K;B)$, for $0\le k\leq d-1$, by \begin{align*} \begin{pmatrix}\overline{\Phi}_{d-1}^{r,s}(K_0;B)\\ \vdots \\ \overline{\Phi}_{0}^{r,s}(K_0;B) \end{pmatrix} = \left( \overline{A}_{R_1,\ldots,R_d}^{r,s} \right)^{-1} \begin{pmatrix} \overline{\mathcal{V}}_{R_1}^{r,s}(K_0;B)\\ \vdots \\ \overline{\mathcal{V}}_{R_d}^{r,s}(K_0;B) \end{pmatrix}, \end{align*} where $K_0$ is a compact set which approximates $K$. Convergence of these modified estimators will be discussed in Section \ref{convergence}. The estimators $\overline{\Phi}_{k}^{r,s}$ can be used to approximate local tensors of the form $\Phi_k^{r,s}(K;B)$ where the set $B\subseteq \Sigma$ involves normal directions. Thus, they are more general than $\hat{\Phi}_{k}^{r,s}$. However, \eqref{Wdifference} shows that estimating $\overline{\mathcal{V}}_{R}^{r,s}(K;B)$ requires an approximation of two parallel sets, rather than one. We therefore expect more severe numerical errors for $\overline{\Phi}_{k}^{r,s}$. \section{Convergence properties}\label{convergence} In this section we prove the main convergence results. The first of these, Theorem \ref{converge}, is an immediate generalization of \cite[Theorem 5.1]{merigot}.
\subsection{The convergence theorem} For a bounded Lipschitz function $f:\mathbb{R}^d \to \mathbb{R}$, we let $|f|_\infty$ denote the usual supremum norm, \begin{equation*} |f|_L = \sup \bigg\{ \frac{|f(x)-f(y)|}{|x-y|} \mid x\neq y\bigg\} \end{equation*} the Lipschitz semi-norm, and \begin{equation*} |f|_{bL}=|f|_L + |f|_\infty \end{equation*} the bounded Lipschitz norm. Let $d_{bL} $ be the bounded Lipschitz metric on the space of bounded $\mathbb{T}^p$-valued Borel measures on $\mathbb{R}^d$. For any two such measures $\mu$ and $\nu$ on $\mathbb{R}^d$, the distance with respect to $d_{bL}$ is defined by \begin{equation*} d_{bL}(\mu,\nu) = \sup \bigg\{\bigg|\int f \, d\mu - \int f \, d\nu\bigg| \mid |f|_{bL} \leq 1\bigg\}, \end{equation*} where the supremum extends over all bounded Lipschitz functions $f:\mathbb{R}^d\to \mathbb{R}$ with $ |f|_{bL} \leq 1$. The following theorem shows that the map \begin{equation*} K \mapsto \mathcal{V}_{R}^{r,s}(K;\cdot) \end{equation*} is H\"{o}lder continuous with exponent $ \frac{1}{2}$ with respect to the Hausdorff metric on $\mathcal{C}^d$ (restricted to compact subsets of a fixed ball) and the bounded Lipschitz metric. In the proof, we use the symmetric difference $A\Delta B=(A\setminus B)\cup(B\setminus A)$ of sets $A,B\subseteq \mathbb{R}^d$. \begin{thm}\label{converge} Let $R,\rho>0$ and $r,s\in {\mathbb N}_0$ be given. Then there is a positive constant $C_2=C_2(d,R,\rho,r,s)$ such that \begin{align*} d_{bL}(\mathcal{V}_{R}^{r,s}(K;\cdot),\mathcal{V}_{R}^{r,s}(K_0;\cdot )) \leq C_2 d_H(K,K_0)^{\frac{1}{2}} \end{align*} for all compact sets $K,K_0\subseteq B(0,\rho)$. \end{thm} \begin{proof} Let $f$ with $|f|_{bL} \leq 1$ be given. 
Then \eqref{integralf} yields \begin{align}\nonumber &\bigg|\int_{\mathbb{R}^d} f(x)\, \mathcal{V}_{R}^{r,s}(K;dx)-\int_{\mathbb{R}^d} f(x)\, \mathcal{V}_{R}^{r,s}(K_0;dx)\bigg|\\ &=\bigg|\int_{K^R }f(p_K(x))\,p_K(x)^r(x-p_K(x))^s \, dx\nonumber\\ &\qquad\qquad\qquad\qquad-\int_{K_0^R }f(p_{K_0}(x))p_{K_0}(x)^r(x-p_{K_0}(x))^s \, dx\bigg| \nonumber \\ &\leq {I}+{II},\label{AB} \end{align} where $I$ is the integral \begin{align*} \int_{K^R \cap K_0^R }|f(p_K(x))p_K(x)^r(x-p_K(x))^s-f(p_{K_0}(x))\,p_{K_0}(x)^r(x-p_{K_0}(x))^s |\,dx \end{align*} and \begin{align*} II={{\rho}^r}R^s \mathcal{H}^{d}(K^R \Delta K_0^R). \end{align*} By \cite[Corollary 4.4]{chazal}, there is a constant $c_1=c_1(d,R,\rho)>0$ such that \begin{equation}\label{symdif} \mathcal{H}^{d}(K^R \Delta K_0^R) \leq c_1\,d_H(K,K_0) \end{equation} when $d_H(K,K_0)\leq {R}/{2}$. Replacing $c_1$ by a possibly even bigger constant, we can ensure that \eqref{symdif} also holds when $R/2\le d_H(K,K_0)\leq 2 \rho$. Hence, \begin{equation}\label{B} II \leq {c_2}\,d_H(K,K_0)^{\frac 12} \end{equation} with some constant $c_2=c_2(d,R,\rho,r,s)>0$. 
Using the inequalities (and interpreting empty products as 1) \begin{align}\label{product} \bigg|\bigodot_{i=1}^m y_i - \bigodot_{i=1}^m z_i\bigg|\leq \bigg|\bigotimes_{i=1}^m y_i - \bigotimes_{i=1}^m z_i\bigg|\leq \sum_{j=1}^m |y_j-z_j|\prod_{i=1}^{j-1} |y_i| \prod_{i=j+1}^m |z_i|, \end{align} with $m=r+s$ and the rank-one tensors \[ \begin{array}{lcl} y_1=\ldots=y_r=p_K(x), &\qquad& y_{r+1}=\ldots=y_{r+s}=x-p_K(x),\\ z_1=\ldots=z_r=p_{K_0}(x), &\qquad& z_{r+1}=\ldots=z_{r+s}=x-p_{K_0}(x), \end{array} \] we get \begin{align*} |f{}&(p_K(x))\, p_K(x)^r(x-p_K(x))^s-f(p_{K_0}(x))\,p_{K_0}(x)^r(x-p_{K_0}(x))^s | \\ &\leq |f(p_K(x))-f(p_{K_0}(x)) | |p_K(x)|^{r} |x-p_{K}(x)|^s \\ &+|f(p_{K_0}(x))| \sum_{j=1}^r |p_K(x)-p_{K_0}(x)||p_K(x)|^{j-1} |p_{K_0}(x)|^{r-j}|x-p_{K_0}(x)|^s \\ &+|f(p_{K_0}(x))| \sum_{j=1}^s |p_K(x)-p_{K_0}(x)||p_K(x)|^{r} |x-p_{K}(x)|^{j-1}|x-p_{K_0}(x)|^{s-j}. \end{align*} Since we assumed that $|f|_{bL}\le 1$, we obtain \begin{align}\nonumber I&\leq (r+s+1)\max\{\rho,1\}^r\max\{R,1\}^s\int_{K^R \cap K_0^R } |p_K(x)-p_{K_0}(x)|\, dx\\ &\leq c_3\, d_H(K,K_0)^{\frac{1}{2}}.\label{A} \end{align} The existence of the constant $c_3=c_3(d,R,\rho,r,s)$ in the last inequality is guaranteed by Proposition \ref{CHAZProp} with $K^R \cap K_0^R$ as the set $E$, because this choice of $E$ satisfies $\diam(E \cup \{0\})\leq 2(\rho + R)$. \end{proof} When $r=s=0$ and $f=1$, the above proof simplifies to Inequality \eqref{symdif} as $I$ vanishes. Hence we obtain the following strengthening of the theorem, which is relevant for the estimation of intrinsic volumes. \begin{thm}\label{IVconverge} Let $R,\rho>0$. Then there is a constant $C_3=C_3(d,R,\rho)>0$ such that \begin{equation*} \Big|\mathcal{V}_{R}^{0,0}(K)-\mathcal{V}_{R}^{0,0}(K_0) \Big|\leq C_3\, d_H(K,K_0) \end{equation*} for all compact sets $K,K_0\subseteq B(0,\rho)$. \end{thm} For local tensors, the proof of Theorem \ref{converge} can also be adapted to show a convergence result.
\begin{thm}\label{locallip} Let $r,s\in\mathbb{N}_0$ and $R>0$. If $K_i \to K$ with respect to the Hausdorff metric on ${\mathcal C}^d$, as $i\to \infty$, then $\mathcal{V}_{R}^{r,s}(K_i;A)\to \mathcal{V}_{R}^{r,s}(K;A)$ in the tensor norm, for every Borel set $A\subseteq\mathbb{R}^d$ which satisfies \begin{equation}\label{4.3exceptional} \mathcal{H}^d(p_K^{-1}(\partial A)\cap K^R)=0. \end{equation} \end{thm} \begin{proof} Convergence of tensors is equivalent to coordinate-wise convergence. Hence, it is enough to show that the coordinates satisfy $$\mathcal{V}_{R}^{r,s}(K_i;A)_{i_1\dots i_{r+s}}\to \mathcal{V}_{R}^{r,s}(K;A)_{i_1\dots i_{r+s}}\qquad\text{as $i\to\infty$},$$ for all choices of indices ${i_1\dots i_{r+s}}$; see the notation at the beginning of Section~\ref{minkowski}. We write $T_K(x)=p_K(x)^r(x-p_K(x))^s$. Then \begin{equation*} \mathcal{V}_{R}^{r,s}(K;A)_{i_1\dots i_{r+s}}=\int_{K^R} \mathds{1}_A(p_K(x))T_K(x)_{i_1\dots i_{r+s}}\, dx \end{equation*} is a signed measure. Let $T_K(x)_{i_1\dots i_{r+s}}^+$ and $T_K(x)_{i_1\dots i_{r+s}}^-$ denote the positive and negative part of $T_K(x)_{i_1\dots i_{r+s}}$, respectively. Then \begin{equation*} \mathcal{V}_{R}^{r,s}(K;A)^{\pm}_{i_1\dots i_{r+s}}=\int_{K^R} \mathds{1}_A(p_K(x))T_K(x)_{i_1\dots i_{r+s}}^{\pm}\,dx \end{equation*} are non-negative measures such that \begin{equation*} \mathcal{V}_{R}^{r,s}(K;\cdot)_{i_1\dots i_{r+s}}=\mathcal{V}_{R}^{r,s}(K;\cdot)_{i_1\dots i_{r+s}}^+-\mathcal{V}_{R}^{r,s}(K;\cdot)_{i_1\dots i_{r+s}}^-. \end{equation*} The proof of Theorem \ref{converge} can immediately be generalized to show that $\mathcal{V}_{R}^{r,s}(K_i;\cdot)^{\pm}_{i_1\dots i_{r+s}}$ converges to $\mathcal{V}_{R}^{r,s}(K;\cdot)^{\pm}_{i_1\dots i_{r+s}}$ in the bounded Lipschitz norm (as $i\to\infty$), and hence the measures converge weakly. In particular, they converge on every continuity set of $\mathcal{V}_{R}^{r,s}(K;\cdot)^{\pm}_{i_1\dots i_{r+s}}$. 
If $\mathcal{H}^d(p_K^{-1}(\partial A)\cap K^R)=0$, then $A$ is such a continuity set. \end{proof} \begin{remark} Though relatively mild, the condition $\mathcal{H}^d(p_K^{-1}(\partial A)\cap K^R)=0$ can be hard to control if $K$ is unknown. It is satisfied if, for instance, $K$ and $A$ are smooth and their boundaries intersect transversely. A special case of this is when $K$ is a smooth surface and $A$ is a small ball centered on the boundary of $K$. This is the case in the application from \cite{merigot} that was described in the introduction. Examples where it is not satisfied are when $A=K$ or when $K$ is a polytope intersecting $\partial A$ at a vertex. \end{remark} \begin{remark}\label{Rem4.6new} Let $f:\mathbb{R}^d\to\mathbb{R}$ be a bounded measurable function. We define $$ \mathcal{V}_{R}^{r,s}(K;f):=\int_{\mathbb{R}^d} f(x)\, \mathcal{V}_{R}^{r,s}(K;dx). $$ Hence $\mathcal{V}_{R}^{r,s}(K;A)=\mathcal{V}_{R}^{r,s}(K;\mathds{1}_A)$ for every Borel set $A\subseteq\mathbb{R}^d$. Then, Theorem \ref{locallip} is equivalent to saying that, for all continuous test functions $f:\mathbb{R}^d\to\mathbb{R}$, $$ \mathcal{V}_{R}^{r,s}(K_i;f)\to \mathcal{V}_{R}^{r,s}(K;f),\quad \text{as }i\to\infty, $$ in the tensor norm, whenever $K_i \to K$ with respect to the Hausdorff metric on ${\mathcal C}^d$, as $i\to \infty$. Thus, if one is interested in the local behaviour of $\Phi^{r,s}_k(K; \cdot)$ at a neighborhood $A$, like in \cite{merigot}, then one can study $$ \Phi^{r,s}_k(K;f):=\int_{\Sigma} f(x)x^ru^s\, \Lambda_k(K;d(x,u)), $$ where $f$ is a continuous function with support in $A$. This avoids the extra condition \eqref{4.3exceptional}. \end{remark} As the matrix $A_{R_0,\ldots,R_d}^{r,s}$ in the definition \eqref{defEst} of $\hat \Phi_k^{r,s}(K_0;A\times S^{d-1})$ does not depend on the set $K_0$, the above results immediately yield a consistency result for the estimation of the Minkowski tensors. We formulate this only for $A=\mathbb{R}^d$. 
\begin{corollary}\label{corNew} Let $\rho>0$ and $K$ be a compact subset of $B(0,\rho)$ of positive reach such that $\reach(K)>R_d>\ldots>R_0>0$. Let $K_0\subseteq B(0,\rho)$ be a compact set. Then there is a constant $C_4=C_4(d,R_0,\ldots,R_d,\rho)$ such that \[ \left| \hat{\Phi}^{0,0}_k(K_0)-\Phi^{0,0}_k(K)\right|\le C_4\, d_H(K_0,K), \] for all $k\in\{0,\ldots,d\}$. For $r,s\in {\mathbb N}_0$ there is a constant $C_5=C_5(d,R_0,\ldots,R_d,\rho,r,s)$ such that \[ \left| \hat{\Phi}^{r,s}_k(K_0)-\Phi^{r,s}_k(K)\right|\le C_5\, d_H(K_0,K)^{\frac12}, \] for all $k\in\{0,\ldots,d-1\}$. \end{corollary} Finally, we state the convergence results for the modified estimators for $\Phi_k^{r,s}(K;B)$, where $B\subseteq \Sigma$ is a Borel set, that were defined in Section \ref{general1}. The map $x\mapsto {x}/{|x|}$ is Lipschitz on $\mathbb{R}^d \backslash \mathrm{int}({B(0,{R}/{2})})$ with Lipschitz constant ${4}/{R}$, and therefore the mapping $u_K$, which was defined after \eqref{baddef}, satisfies \begin{equation*} |u_K(x)-u_{K_0}(x)|\leq \tfrac{4}{R}|p_K(x)-p_{K_0}(x)|, \end{equation*} for $x\in (K^R \backslash K^{R/2}) \cap(K_0^R \backslash K_0^{R/2})$. Moreover, \begin{equation*} \left(K^R \backslash K^{R/2}\right) \Delta \left(K_0^R \backslash K_0^{R/2}\right) \subseteq \left(K^R \Delta K_0^{R}\right) \cup \left(K^{R/2} \Delta K_0^{R/2}\right). \end{equation*} Using this, it is straightforward to generalize the proofs of Theorems \ref{converge} and \ref{locallip} to obtain the following result. \begin{thm}\label{convergeloc2} Let $R,\rho>0$ and $r,s\in {\mathbb N}_0$ be given. Then there is a positive constant $C_6=C_6(d,R,\rho,r,s)$ such that \begin{align*} d_{bL}(\overline{\mathcal{V}}_{R}^{r,s}(K;\cdot),\overline{\mathcal{V}}_{R}^{r,s}(K_0;\cdot )) \leq C_6 d_H(K,K_0)^{\frac{1}{2}} \end{align*} for all compact sets $K,K_0\subseteq B(0,\rho)$. \end{thm} This in turn leads to the next convergence result. \begin{thm} Let $r,s\in\mathbb{N}_0$ and $R>0$.
If $K,K_i\in \mathcal{C}^d$ are compact sets such that $K_i\to K$ in the Hausdorff metric, as $i\to\infty$, then $\overline{\mathcal{V}}_{R}^{r,s}(K_i;B)$ converges to $\overline{\mathcal{V}}_{R}^{r,s}(K;B)$ in the tensor norm, for any measurable set $B\subseteq \Sigma$ satisfying \begin{equation*} \mathcal{H}^d(\{x\in K^R \mid (p_K(x), u_K(x))\in \partial B\})=0. \end{equation*} Here $\partial B$ is the boundary of $B$ as a subset of $\Sigma$. If $B$ satisfies this condition and $\text{reach}(K)>R_d$, where $0<R_1<\ldots<R_d$ are the radii used in the definition of $\overline{\Phi}_{k}^{r,s}$, then \begin{equation*} \lim_{i \to \infty} \overline{\Phi}_{k}^{r,s}(K_i;B) = {\Phi_{k}^{r,s}(K;B)} . \end{equation*} \end{thm} \begin{remark} We can argue as in Remark \ref{Rem4.6new} to see that if $K,K_i\in \mathcal{C}^d$ are compact sets such that $K_i\to K$ in the Hausdorff metric, as $i\to\infty$, then $$ \overline{\mathcal{V}}_{R}^{r,s}(K_i;g)\to \overline{\mathcal{V}}_{R}^{r,s}(K;g),\quad \text{as }i\to\infty, $$ whenever $g:\Sigma\to\mathbb{R}$ is a continuous test function and $\overline{\mathcal{V}}_{R}^{r,s}(K;g)$ is defined analogously. If $K$ satisfies $\text{reach}(K)>R_d$, we get $\overline{\Phi}_{k}^{r,s}(K_i;g) \to {\Phi_{k}^{r,s}(K;g)}$, as $i\to\infty$. \end{remark} \section{Application to digital images}\label{DI} Our main motivation for this paper is the estimation of Minkowski tensors from digital images. Recall that we model a black-and-white digital image of $K\subseteq \mathbb{R}^d$ as the set $K\cap a\mathbb{L}$, where $\mathbb{L}\subseteq \mathbb{R}^d$ is a fixed lattice and $a>0$. We refer to \cite{barvinok02} for basic information about lattices. The lower-dimensional parts of $K$ are generally invisible in the digital image. When dealing with digital images, we will therefore always assume that the underlying set is topologically regular, which means that it is the closure of its own interior.
In digital stereology, the underlying object $K$ is often assumed to belong to one of the following two set classes: \begin{itemize} \item $K$ is called \emph{$\delta$-regular} if it is topologically regular and the reach of its closed complement ${\rm cl}({\mathbb{R}^d \backslash K})$ and the reach of $K$ itself are both at least $\delta>0$. This is a kind of smoothness condition on the boundary, ensuring in particular that $\partial K$ is a $C^1$ manifold (see the discussion after Definition 1 in \cite{svane15b}). \item $K$ is called \emph{polyconvex} if it is a finite union of compact convex sets. While convex sets have infinite reach, note that polyconvex sets generally do not have positive reach. Also note that for a compact convex set $K\subseteq\mathbb{R}^d$, the set ${\rm cl}({\mathbb{R}^d \backslash K})$ need not have positive reach. \end{itemize} Observe that, for a compact set $K\subseteq \mathbb{R}^d$, both assumptions imply that the boundary of $K$ is a $(d-1)$-rectifiable set in the sense of \cite{Federer69} (i.e., $\partial K$ is the image of a bounded subset of $\mathbb{R}^{d-1}$ under a Lipschitz map), a much weaker property that suffices for the analysis in Section \ref{volten}. \subsection{The volume tensors}\label{volten} Simple and efficient estimators for the volume tensors $\Phi_d^{r,0}(K)$ of a (topologically regular) compact set $K$ are already known and are usually based on the approximation of $K$ by the union of all pixels (voxels) with midpoint in $K$. This leads to the estimator \begin{equation*} \phi_d^{r,0}(K\cap a\mathbb{L} ) = \frac1 {r!} \sum_{z \in K\cap a\mathbb{L}} \int_{z+aV_0(\mathbb{L})}x^r\,dx, \end{equation*} where $V_0(\mathbb{L})$ is the Voronoi cell of 0 in the Voronoi decomposition generated by $\mathbb{L}$.
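For the lattice $\mathbb{L}=\mathbb{Z}^d$, the Voronoi cell is $V_0(\mathbb{Z}^d)=[-\frac12,\frac12]^d$, and for $r\in\{0,1\}$ the cell integrals equal $a^d$ and $a^d z$, respectively, so the estimator reduces to a plain sum over the foreground lattice points. The sketch below (our own illustration; the disk and the spacing are arbitrary choices) estimates the volume tensors of rank $0$ and $1$ of a unit disk from its digital image:

```python
import numpy as np

a = 0.02          # lattice spacing
rho = 1.0         # K = disk of radius rho centred at the origin

# the digital image K ∩ aZ²
m = int(np.ceil(rho / a))
g = np.arange(-m, m + 1) * a
X, Y = np.meshgrid(g, g)
Z = np.stack([X.ravel(), Y.ravel()], axis=1)
sample = Z[np.sum(Z**2, axis=1) <= rho**2]

# rank-0 and rank-1 volume tensor estimates: the area of K and ∫_K x dx
vol_hat = a**2 * len(sample)
t1_hat = a**2 * sample.sum(axis=0)
```

Here \texttt{vol\_hat} approximates the area $\pi\rho^2$ with an error of order $O(a)$, and \texttt{t1\_hat} approximates $\int_K x\,dx=0$, which vanishes by symmetry.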
This, in turn, can be approximated by \begin{equation*} \hat{\phi}_d^{r,0}(K\cap a\mathbb{L} ) = \frac{a^{d}}{r!} \mathcal{H}^d\left(V_0(\mathbb{L})\right) \sum_{z \in K\cap a\mathbb{L}} z^r. \end{equation*} When $r\in \{0,1\}$, we even have ${\phi}_d^{r,0}(K\cap a\mathbb{L} )=\hat{\phi}_d^{r,0}(K\cap a\mathbb{L} )$. Choose $C>0$ such that $V_0(\mathbb{L}) \subseteq B(0,C)$. Then $$ K\Delta \bigcup_{z\in K\cap a\mathbb{L}} (z+aV_0(\mathbb{L}))\subseteq (\partial K)^{ aC}. $$ In fact, if $x\in \left[\bigcup_{z\in K\cap a\mathbb{L}} (z+aV_0(\mathbb{L}))\right]\setminus K$, then there is some $z\in K\cap a\mathbb{L}$ such that $x\in z+aV_0(\mathbb{L})$ and $x\notin K$. Since $z\in K$ and $x\notin K$, we have $[x,z]\cap\partial K\neq\emptyset$. Moreover, $x-z\in aV_0(\mathbb{L})\subseteq B(0,aC)$, and hence $|x-z|\le aC$. This shows that $x\in(\partial K)^{aC}$. Now assume that $x\in K$ and $x\notin (\partial K)^{aC}$. Then $B(x,\rho)\subseteq K$ for some $\rho>aC$. Since $\bigcup_{z\in a\mathbb{L}}(z+aV_0(\mathbb{L}))=\mathbb{R}^d$, there is some $z\in a\mathbb{L}$ such that $x\in z+aV_0(\mathbb{L})$. Hence $x-z\in aV_0(\mathbb{L})\subseteq B(0,aC)$. We conclude that $z\in B(x,aC)\subseteq K$, therefore $z\in K\cap a\mathbb{L}$ and thus $x\in \bigcup_{z\in K\cap a\mathbb{L}} (z+aV_0(\mathbb{L}))$. Hence \begin{equation}\label{Oabound} |{\phi}_d^{r,0}(K\cap a\mathbb{L} ) - {\Phi}_d^{r,0}(K)| \leq \frac{1} {r!} \int_{(\partial K)^{ aC}}|x|^r \, dx. \end{equation} If $\mathcal{H}^{d}(\partial K)=0$, then the integral on the right-hand side goes to zero by monotone convergence, so \begin{equation}\label{convzero} \lim_{a\to 0_+}{\phi}_d^{r,0}(K\cap a\mathbb{L} ) ={\Phi}_d^{r,0}(K). \end{equation} If $\partial K$ is $(d-1)$-rectifiable in the sense of \cite[Section 3.2.14]{Federer69}, that is, $\partial K$ is the image of a bounded subset of $\mathbb{R}^{d-1}$ under a Lipschitz map, then $\mathcal{H}^{d}(\partial K)=0$. 
Since $\partial K$ is compact, \cite[Theorem 3.2.39]{Federer69} implies that $\lim_{a\to 0_+}\mathcal{H}^d((\partial K)^{ aC})/a $ exists and equals a fixed multiple of $\mathcal{H}^{d-1}(\partial K)$, which is finite. Hence, \eqref{Oabound} shows that the speed of convergence in \eqref{convzero} is $O(a)$ as $a\to 0_+$. Inequality \eqref{product} yields that $|x^r-z^{r}|\leq aC r(|x|+aC)^{r-1}$ whenever $x\in z+ aV_0(\mathbb{L})$ and $r\ge 1$. Therefore, \begin{align*} |\hat{\phi}_d^{r,0}(K\cap a\mathbb{L} ) - \phi_d^{r,0}(K\cap a \mathbb{L})|{}& \leq \frac{aC } {(r-1)!} \sum_{z \in K\cap a\mathbb{L}} \int_{z+aV_0(\mathbb{L})}(|x|+aC)^{r-1}\, dx\\ & \leq \frac{aC } {(r-1)!} \int_{K^{aC}} (|x|+aC)^{r-1} \,dx, \end{align*} which shows that \begin{equation*} \lim_{a\to 0_+}\hat{\phi}_d^{r,0}(K\cap a\mathbb{L} ) ={\Phi}_d^{r,0}(K), \end{equation*} provided that $\mathcal{H}^d(\partial K)=0$. If $\partial K$ is $(d-1)$-rectifiable, then the speed of convergence is of the order $O(a)$. Hence, we suggest simply using the estimators $\hat{\phi}_d^{r,0}(K\cap a\mathbb{L} )$ for the volume tensors. These estimators can be computed much faster and more directly than $\hat{\Phi}_d^{r,0}(K\cap a\mathbb{L} )$. Moreover, they do not require an estimate for the reach of $K$, and they converge for a much larger class of sets than those of positive reach. \subsection{Convergence for digital images} For the estimation of the remaining tensors we suggest using the Voronoi tensor measures. Choosing $K_0=K \cap a\mathbb{L}$ in \eqref{algorithm}, we obtain \begin{equation}\label{algorithm2} \mathcal{V}_{R}^{r,s}(K\cap a\mathbb{L} ;A)= \sum_{x\in K \cap a\mathbb{L} \cap A } x^r \int_{B(x,R)\cap V_x(K\cap a\mathbb{L})} (y-x)^s \,dy, \end{equation} where $A\subseteq\mathbb{R}^d$ is a Borel set. To prepare the convergence results in Corollary \ref{convercor} below, we first note that the digital image converges to the original set in the Hausdorff metric.
\begin{lemma}\label{dHbounds} If $K$ is compact and topologically regular, then \begin{equation*} \lim_{a\to 0_+} d_H(K,K\cap a\mathbb{L}) = 0. \end{equation*} If $K $ is $\delta$-regular, then $d_H(K,K\cap a\mathbb{L})$ is of order $O(a)$. The same holds if $K$ is topologically regular and polyconvex. \end{lemma} \begin{proof} Recall from \cite[p.~311]{barvinok02} that $ \mu(\mathbb{L})=\max_{x\in\mathbb{R}^d}\text{dist}(x,\mathbb{L}) $ is well defined and denotes the covering radius of $\mathbb{L}$. Let $\varepsilon>0$ be given. Since $K$ is compact, there are points $x_1,\ldots,x_m\in K$ such that $$ K\subseteq\bigcup_{i=1}^m B(x_i,\varepsilon). $$ Using the fact that $K$ is topologically regular, we conclude that there are points $y_i\in\text{int}(K)\cap \text{int}(B(x_i,2\varepsilon))$ for $i=1,\ldots,m$. Hence, there are $\varepsilon_i\in (0,2\varepsilon)$ such that $ B(y_i,\varepsilon_i)\subseteq K\cap B(x_i,2\varepsilon)$ for $i=1,\ldots,m$. Let $0<a<\min\{\varepsilon_i/\mu(\mathbb{L}) \mid i=1,\ldots,m\}$. Since $\varepsilon_i/a>\mu(\mathbb{L})$ it follows that $a\mathbb{L} \cap B(y_i,\varepsilon_i)\neq\emptyset$, for $i=1,\ldots,m$. Thus we can choose $z_i\in a\mathbb{L} \cap B(y_i,\varepsilon_i)\subseteq a\mathbb{L}\cap K$ for $i=1,\ldots,m$. By the triangle inequality, we have $|z_i-x_i|\le \varepsilon_i+2\varepsilon\le 4\varepsilon$, and hence $x_i\in (K\cap a\mathbb{L})+B(0,4\varepsilon)$, for $i=1,\ldots,m$. Therefore, $K\subseteq (K\cap a\mathbb{L}) +B(0,5\varepsilon)$ if $a>0$ is sufficiently small. Assume that $K$ is $\delta$-regular, for some $\delta>0$. We choose $0<a<\delta/(2\mu(\mathbb{L}))$. Since $a\mu(\mathbb{L})<\delta/2$, for any $x\in K$ there is a ball $B(y,a\mu(\mathbb{L}))$ of radius $a\mu(\mathbb{L})$ such that $x\in B(y,a\mu(\mathbb{L}))\subseteq K$. From $a\mathbb{L}\cap B(y,a\mu(\mathbb{L}))\neq\emptyset$ we conclude that there is a point $z\in K\cap a\mathbb{L}$ with $|x-z|\le 2a\mu(\mathbb{L})$. 
Hence $x\in (K\cap a\mathbb{L}) +B(0,2a\mu(\mathbb{L}))$, and therefore $d_H(K,K\cap a\mathbb{L})\le 2a\mu(\mathbb{L})$. Finally, we assume that $K$ is topologically regular and polyconvex. Then $K$ is the union of finitely many compact convex sets with interior points. Hence, for the proof we may assume that $K$ is convex with $B(0,\rho)\subseteq K$ for a fixed $\rho>0$. Choose $0<a<\rho/(2\mu(\mathbb{L}))$ and put $r=2a\mu(\mathbb{L})<\rho$. If $x\in K$, then $B((1-r/\rho)x,r)\subseteq K$ and $B((1-r/\rho)x,r)$ contains a point $z\in a\mathbb{L}$. Since $$ |x-z|\le r+({r}/{\rho})|x|\le 2a\mu(\mathbb{L})\left(1+\text{diam}(K)/\rho\right), $$ we get $$ K\subseteq (K\cap a\mathbb{L}) +B\big(0,2a\mu(\mathbb{L})\left(1+\text{diam}(K)/\rho \right)\big), $$ which completes the argument. \end{proof} Thus Theorems \ref{converge} and \ref{IVconverge} and Corollary \ref{corNew} together with Lemma \ref{dHbounds} yield the following result. \begin{corollary}\label{convercor} If $K $ is compact and topologically regular, then \begin{align*} &\lim_{a\to 0_+} d_{bL}(\mathcal{V}_{R}^{r,s}(K;\cdot),\mathcal{V}_{R}^{r,s}(K\cap a\mathbb{L};\cdot)) = 0,\\ &\lim_{a\to 0_+} \mathcal{V}_{R}^{r,s}(K\cap a\mathbb{L}) = \mathcal{V}_{R}^{r,s}(K). \end{align*} If, in addition, $K$ has positive reach, then \begin{align}\label{multigrid} &\lim_{a\to 0_+} \hat{\Phi}^{r,s}_k(K\cap a\mathbb{L}) = {\Phi}^{r,s}_k(K). \end{align} If $K$ is $\delta$-regular or a topologically regular convex set, then the speed of convergence is $O(a)$ when $r=s=0$ and $O(\sqrt{a})$ otherwise. \end{corollary} The property \eqref{multigrid} means that $\hat{\Phi}^{r,s}_k(K\cap a\mathbb{L})$ is multigrid convergent for the class of sets of positive reach as defined in the introduction. A similar statement about local tensors, but without the speed of convergence, can be made. We omit this here. 
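The Voronoi tensor measure \eqref{algorithm2} can be approximated numerically without constructing Voronoi cells explicitly: a point $y$ lies in $V_x(K\cap a\mathbb{L})$ exactly when $x$ is the nearest lattice point of the digital sample, so uniform sampling plus nearest-neighbor assignment gives a Monte Carlo estimate. The following sketch (not the authors' implementation; the function name and interface are illustrative) works in the plane and uses the scalar simplification $|x|^r|y-x|^s$ in place of the symmetric tensor powers $x^r(y-x)^s$:

```python
import numpy as np

def voronoi_tensor_mc(points, r, s, R, n_samples=200_000, rng=None):
    """Monte Carlo sketch of the Voronoi tensor measure of eq. (algorithm2)
    for a finite point set K ∩ aL (rows of `points`), with A = R^d and the
    scalar simplification |x|^r |y-x|^s of the tensor powers.  Each sample
    y, drawn uniformly in a box covering the R-parallel set, is assigned to
    its nearest point x (i.e. y ∈ V_x(K ∩ aL)) and contributes only when
    |y - x| <= R."""
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=float)
    lo = points.min(axis=0) - R
    hi = points.max(axis=0) + R
    vol_box = float(np.prod(hi - lo))
    y = rng.uniform(lo, hi, size=(n_samples, points.shape[1]))
    # Nearest lattice point by brute force (a k-d tree scales better).
    d2 = ((y[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    idx = d2.argmin(axis=1)
    x = points[idx]
    dist = np.sqrt(d2[np.arange(n_samples), idx])
    mask = dist <= R
    vals = np.linalg.norm(x[mask], axis=1) ** r * dist[mask] ** s
    return vals.sum() * vol_box / n_samples
```

As a sanity check, for $r=s=0$ and a single point the measure reduces to the area of the ball $B(x,R)$, since the Voronoi cell of a single point is the whole plane.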
\subsection{Possible refinements of the algorithm for digital images}\label{refinement} We first describe how the number of necessary radii $R_0<R_1<\ldots<R_d$ in \eqref{defEst} can be reduced by one if $s=0$ and $A=\mathbb{R}^d$. Setting $s=0$ and $A=\mathbb{R}^d$ and subtracting $(r!)\Phi_d^{r,0}(K)$ on both sides of Equation \eqref{steiner} yields \begin{align}\label{modstein} \int_{K^R\backslash K} p_K(x)^r \,dx = \mathcal{V}_{R}^{r,0}(K)-(r!)\Phi_d^{r,0}(K) = (r!) \sum_{k=1}^d \kappa_{k} R^{k} \Phi_{d-k}^{r,0}(K). \end{align} As mentioned in Section \ref{volten}, the volume tensor $\Phi_d^{r,0}(K)$ can be estimated by $\hat{\phi}_d^{r,0}(K\cap a\mathbb{L})$. We may take $\mathcal{V}_{R}^{r,0}(K\cap a\mathbb{L})-(r!)\hat{\phi}_d^{r,0}(K\cap a\mathbb{L})$ as an improved estimator for \eqref{modstein}. This corresponds to replacing the integration domains $B(x,R)\cap V_x(K\cap a\mathbb{L})$ in \eqref{algorithm2} by \[ (B(x,R)\cap V_x(K\cap a\mathbb{L}))\backslash V_x(a\mathbb{L}). \] This makes sense since $V_x(a\mathbb{L})$ is likely to be contained in $K$ while the left-hand side of \eqref{modstein} is an integral over $K^R\backslash K$. The Minkowski tensors can now be isolated from only $d$ equations of the form \eqref{modstein} with $d$ different values of $R$. We now suggest a slightly modified estimator for the Minkowski tensors satisfying the same convergence results as $\hat{\Phi}_k^{r,s}(K\cap a\mathbb{L})$ but where the number of summands in \eqref{algorithm2} is considerably reduced. As the volume tensors can easily be estimated with the estimators in Section \ref{volten}, we focus on the tensors with $k<d$. Let $K$ be a compact set. We define the {\em Voronoi neighborhood} $N_\mathbb{L}(0)$ of $0$ to be the set of points $y\in \mathbb{L}$ such that the Voronoi cells $V_0(\mathbb{L})$ and $V_y(\mathbb{L})$ of $0$ and $y$, respectively, have exactly one common $(d-1)$-dimensional face. 
Similarly, for $z\in \mathbb{L}$ the Voronoi neighborhood $N_\mathbb{L}(z)$ of $z$ is defined, and thus clearly $N_\mathbb{L}(z)=z+N_\mathbb{L}(0)$. When $\mathbb{L}\subset \mathbb{R}^2$ is the standard lattice, $N_\mathbb{L}(z)$ consists of the four points in $\mathbb{L}$ that are neighbors of $z$ in the usual $4$-neighborhood \cite{OM}. Define $I(K\cap a\mathbb{L})$ to be the set of points $z\in K\cap a\mathbb{L}$ such that $N_{a\mathbb{L}}(z)\subseteq K\cap a\mathbb{L}$. The relative complement $B(K\cap a\mathbb{L})=(K\cap a\mathbb{L})\setminus I(K\cap a\mathbb{L})$ of $I(K\cap a\mathbb{L})$ can be considered as the set of lattice points in $K\cap a\mathbb{L}$ that are close to the boundary of the given set $K$. We modify \eqref{algorithm2} by removing contributions from $I(K\cap a\mathbb{L})$ and define \begin{equation}\label{algorithm3} \tilde{\mathcal{V}}_{R}^{r,s}(K\cap a\mathbb{L} ;A)= \sum_{x\in B(K \cap a\mathbb{L}) \cap A } x^r \int_{B(x,R)\cap V_x(K\cap a\mathbb{L})} (y-x)^s\, dy. \end{equation} Assuming that $K$ has positive reach, let $0<R_0<R_1<\ldots<R_d< \textrm{Reach}(K)$. We write again $K_0$ for $K\cap a\mathbb{L}$. Then we obtain the estimators \begin{align} \begin{pmatrix} {\tilde{\Phi}}_{d}^{r,s}(K_0;A\times S^{d-1})\\ \vdots \\ {\tilde{\Phi}}_{0}^{r,s}(K_0;A\times S^{d-1}) \end{pmatrix} =\left(A_{R_0,\ldots,R_d}^{r,s}\right)^{-1} \begin{pmatrix} \tilde{\mathcal{V}}_{R_0}^{r,s}(K_0;A)\\ \vdots \\ \tilde{\mathcal{V}}_{R_d}^{r,s}(K_0;A) \end{pmatrix}\label{defEstcheck} \end{align} with $A_{R_0,\ldots,R_d}^{r,s}$ given by \eqref{matrixA}. Working with $\tilde{\mathcal{V}}_{R}^{r,s}(K\cap a\mathbb{L};A)$ reduces the workload considerably. For instance, when $K$ is $\delta$-regular or polyconvex and topologically regular, the number of elements in $I(K\cap a\mathbb{L})$ increases with $a^{-d}$, whereas the number of elements in $B(K \cap a\mathbb{L})$ only increases with $a^{-(d-1)}$ as $a\to 0_+$. 
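For the standard lattice, where the Voronoi neighborhood is the usual $2d$-neighborhood (the $4$-neighborhood in the plane), the interior set $I(K\cap a\mathbb{L})$, and hence the boundary set $B(K\cap a\mathbb{L})$, can be read off the binary image with one pass of a neighborhood filter. A minimal sketch, with illustrative names, treating pixels at the image border as having background outside:

```python
import numpy as np

def interior_mask(img):
    """Split the foreground pixels of a binary d-dimensional image (the
    digital sample K ∩ aL on the standard lattice) into the interior set I
    and the boundary set B: a foreground pixel lies in I iff all of its 2d
    axis neighbors (its Voronoi neighborhood) are also foreground."""
    img = np.asarray(img).astype(bool)
    interior = img.copy()
    for axis in range(img.ndim):
        for shift in (1, -1):
            nb = np.roll(img, shift, axis=axis)
            # np.roll wraps around; overwrite the wrapped-in slice with
            # background so border pixels never count as interior.
            sl = [slice(None)] * img.ndim
            sl[axis] = 0 if shift == 1 else -1
            nb[tuple(sl)] = False
            interior &= nb
    return interior  # boundary set: img & ~interior
```

For a $3\times 3$ foreground block only the center pixel is interior; the remaining eight foreground pixels form $B(K\cap a\mathbb{L})$.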
The set $I(K\cap a\mathbb{L})$ can be obtained from the digital image of $K$ in linear time using a linear filter. Moreover, we have the following convergence result. \begin{proposition} Let $K$ be a topologically regular compact set with positive reach and let $C$ be such that $V_0(\mathbb{L})\subseteq B(0,C)$. If $A$ is a Borel set in $\mathbb{R}^d$ and $aC<R_0<R_1<\ldots<R_d<\mathrm{Reach}(K)$ and $K_0=K\cap a\mathbb{L}$, then \[ \tilde{\Phi}_{k}^{r,s}(K_0;A\times S^{d-1})=\hat{\Phi}_{k}^{r,s}(K_0;A\times S^{d-1}) \] for all $k\in\{0,\ldots,d-1\}$, whenever $s=0$ or $s$ is odd. If $s$ is even and $k\in\{0,\ldots,d-1\}$, then \begin{equation*} \lim_{a\to 0_+} \tilde{\Phi}_{k}^{r,s}(K_0;A\times S^{d-1})=\lim_{a\to 0_+}\hat{\Phi}_{k}^{r,s}(K_0;A\times S^{d-1}). \end{equation*} \end{proposition} \begin{proof} Let $aC<R<\mathrm{Reach}(K)$. For $x\in I(K\cap a\mathbb{L})$, we have \[ B(x,R)\cap V_{x}(K\cap a\mathbb{L})=V_{x}(a\mathbb{L}), \] so the contribution of $x$ to the sum in \eqref{algorithm2} is $(s!)x^r\Phi^{s,0}_d(V_{0}(a\mathbb{L}))$. It follows that \begin{align}\label{Vred} {\mathcal{V}}_{R}^{r,s}(K\cap a\mathbb{L} ;A)-\tilde{\mathcal{V}}_{R}^{r,s}(K\cap a\mathbb{L} ;A)= (s!)\Phi^{s,0}_d(V_{0}(a\mathbb{L}))\sum_{x\in I(K\cap a\mathbb{L})\cap A}x^r. \end{align} For odd $s$ we have $\Phi^{s,0}_d(V_{0}(a\mathbb{L}))=0$, so the claim follows. For $s=0$ the right-hand side of \eqref{Vred} does not vanish, but it is independent of $R$. A combination of \[ \left(A_{R_0,\ldots,R_d}^{r,0}\right)^{-1} \begin{pmatrix} 1\\1\\ \vdots \\ 1 \end{pmatrix}= \begin{pmatrix} (r!)^{-1}\\ 0\\ \vdots \\ 0 \end{pmatrix}, \] with \eqref{Vred}, \eqref{defEst} and \eqref{defEstcheck} gives the claim. 
For even $s>0$, we have that $\Phi^{s,0}_d(V_{0}(a\mathbb{L}))=a^{d+s}\Phi^{s,0}_d(V_{0}(\mathbb{L}))$, while \begin{align*} \left|\sum_{x\in I(K\cap a\mathbb{L})\cap A}x^r \right| &\leq \sum_{x\in I(K\cap a\mathbb{L})}|x|^r \\ &\leq \sup_{x\in K}|x|^r\sum_{x\in I(K\cap a\mathbb{L})} \left(a^{d}{\mathcal H}^d(V_{0}(\mathbb{L}))\right)^{-1}{\mathcal H}^d(V_{x}(a\mathbb{L}))\\ &\leq \sup_{x\in K}|x|^r \cdot a^{-d}\cdot {\mathcal H}^d(V_0(\mathbb{L}))^{-1}\cdot \mathcal{H}^d(K^{aC}). \end{align*} Therefore, the expression on the right-hand side of \eqref{Vred} converges to $0$. \end{proof} It should be noted that a similar modification for $\overline \Phi_k^{r,s}$ is not necessary. In fact the modified Voronoi tensor measure \eqref{modify} with $K=K_0$ has the advantage that small Voronoi cells that are completely contained in the $R_0/2 $-parallel set of $K\cap a\mathbb{L}$ do not contribute. In particular, contributions from $I(K\cap a\mathbb{L})$ are automatically ignored when $a$ is sufficiently small. \section{Comparison to known estimators}\label{known} Most {existing} estimators of intrinsic volumes \cite{digital,lindblad,OM} and Minkowski tensors \cite{turk,mecke} are $n$-local for some $n\in \mathbb{N}$. The idea is to look at all $n\times \dotsm \times n$ pixel blocks in the image and count how many times each of the $2^{n^d}$ possible configurations of black and white points occur. Each configuration is weighted by an element of $\mathbb{T}^{r+s}$ and $\Phi^{r,s}_k(K)$ is estimated as a weighted sum of the configuration counts. It is known that estimators of this type for intrinsic volumes other than ordinary volume are not multigrid convergent, even when $K$ is known to be a convex polytope; see \cite{am3}. {It is not difficult to see that there cannot be a multigrid convergent $n$-local estimator for the (even rank) tensors $\Phi_k^{0,2s}(K)$ with $k=0,\ldots,d-1$, $s\in\mathbb{N}$, for polytopes $K$, either. 
In fact, repeatedly taking the trace of such an estimator would lead to a multigrid convergent $n$-local estimator of the $k$th intrinsic volume, in contradiction to \cite{am3}.} The algorithm presented in this paper is not $n$-local for any $n\in \mathbb{N}$. It is required in the convergence proof that the parallel radius $R$ is fixed while the resolution $a^{-1}$ goes to infinity. {The non-local operation in the definition of our estimator is the calculation of the Voronoi diagram.} The computation time for Voronoi diagrams of $k$ points is $O(k\log k + k^{\lfloor d/2\rfloor})$, see \cite{chazelle}, which is somewhat slower than $n$-local algorithms for which the computation time for $k$ data points is $O(k)$. The computation time can be improved by ignoring interior points as discussed in Section \ref{refinement}. The idea to base digital estimators for intrinsic volumes on an inversion of the Steiner formula as in \eqref{matrix} has occurred before in \cite{spodarev,jan}. In both references, the authors define estimators for polyconvex sets which are not necessarily of positive reach. This more ambitious aim leads to problems with the convergence. In \cite{spodarev}, the authors use a version of the Steiner formula for polyconvex sets given in terms of the Schneider index, see \cite{schneider}. Since its definition is, however, $n$-local in nature, the authors choose an $n$-local algorithm to estimate it. As already mentioned, such algorithms are not multigrid convergent. In \cite{jan}, it is used that the intrinsic volumes of a polyconvex set can, on the one hand, be approximated by those of a parallel set with small parallel radius, and on the other hand, the closed complement of this parallel set has positive reach, so that its intrinsic volumes can be computed via the Steiner formula. The authors employ a discretization of the parallel volumes of digital images, but without showing that the convergence is preserved. 
It is likely that the ideas of the present paper combined with the ones of \cite{jan} could be used to construct multigrid convergent digital algorithms for polyconvex sets. The price for this is that the notion of convergence in \cite{jan} is slightly artificial for practical purposes, requiring very small parallel radii in order to get good approximations and at the same time large radii compared to the resolution. In \cite{svane}, $n$-local algorithms based on grey-valued images are suggested. They are shown to converge to the true value when the resolution tends to infinity. However, they only apply to surface tensors and certain mean curvature tensors. Moreover, they are hard to apply in practice, since they require detailed information about the underlying point spread function, which specifies the representation of the object as a grey-value image. If grey-value images are given, the algorithm of the present paper could be applied to thresholded images, but there may be more efficient ways to exploit the additional information of the grey-values. \end{document}
\begin{document} \title{Concordance Rate of a Four-Quadrant Plot for Repeated Measurements} \begin{abstract} Before new clinical measurement methods are implemented in clinical practice, it must be confirmed whether their results are equivalent to those of existing methods. The agreement of the trend between two methods is evaluated using the four-quadrant plot, which describes the trend of change in each difference of the two measurement methods' values at sequential time points, and the plot's concordance rate, which is calculated as the number of data points in the four-quadrant plot that agree in trend divided by the number of all accepted data points. However, the conventional concordance rate does not consider the covariance between the data on individual subjects, which may affect its proper evaluation. Therefore, we propose a new concordance rate calculated for each individual according to the number of agreements. Moreover, the proposed method can set a parameter specifying the minimum number of concordant changes between the two measurement techniques. This parameter provides a more detailed interpretation of the degree of agreement. A numerical simulation conducted under several factors indicated that the proposed method results in a more accurate evaluation. We also present a real-data example and compare the proposed method with the conventional approach. We conclude with a discussion of the implementation in clinical studies. \end{abstract} \keywords{Clinical trial, \and Method comparison, \and Monte Carlo Simulation, \and Trending agreement} \section{Introduction} New clinical measurements and new technologies such as cardiac output (CO) monitoring continue to be introduced, and it must be verified whether the results of the new testing measurement methods are equivalent to those of the standard measurement methods before implementing them in clinical practice. 
For example, an improved cardiac index (CI) tracking device was compared to a traditional method of measuring CI by transpulmonary thermodilution to assess whether it reliably tracks the changes in CI induced by changes in norepinephrine dose during operations (Monnet {\it{et al}}., 2012). In the study of Cox {\it{et al}}. (2017), bioimpedance electrical cardiometry, another experimental measurement device for CI, was examined against continuous pulmonary artery thermodilution catheterization as the gold standard, with measurements taken before, during, and after cardiac surgery. Various statistical methods have been proposed to assess the equivalence of new testing measurement methods with gold standards (e.g., Carstensen, 2010; Choudhary and Nagaraja, 2005; Choudhary and Nagaraja, 2017). In Altman and Bland (1983), Bland and Altman (1986), and Bland and Altman (1996), the Bland-Altman analysis was proposed to evaluate the accuracy of a new clinical test based on the differences between its values and those of a gold standard measurement, and on the means of the two tests' values. In addition, a method for calculating the sample size when conducting the Bland-Altman analysis in clinical trials has been proposed by Shieh (2019). The Bland-Altman analysis has also been extended to cases of repeated measurement (e.g., Bland and Altman, 2007; Bartko, 1976; Zou, 2013), which have been used in clinical studies. Asamoto {\it{et al}}. (2017) used the Bland-Altman analysis to evaluate the equivalence of the accuracy of a less invasive continuous CO monitor during two different surgeries. However, the Bland-Altman plot cannot describe the trending ability between the two compared measurements, because the Bland-Altman analysis does not consider the order of the observed data. Thus, to evaluate the trending ability, the researchers also drew the four-quadrant plot, which describes the changes of the measurement results, and calculated the concordance rate. 
In fact, in these equivalence comparison clinical trials, the four-quadrant plot and the concordance rate are often used along with the Bland-Altman analysis. As an assessment based on the degree of trending of the CO changes at each time point, the use of the four-quadrant plot and concordance rate has been proposed (Perrino {\it{et al}}., 1994; Perrino {\it{et al}}., 1998). The four-quadrant plot and four-quadrant concordance analysis are often employed together with the Bland-Altman analysis when evaluating the equivalence of two measurement methods (e.g., Monnet {\it{et al}}., 2012). The four-quadrant plot and concordance rate focus on the trending agreement between the differences of the two testing values, while the Bland-Altman analysis assesses the accuracy and precision of the values of the two measurement methods. In a four-quadrant plot, pairs of differences of the two testing values at sequential time points are plotted. For example, a point is plotted with the value at the second time point minus the value at the first time point, as measured by the gold standard, on the horizontal axis, and the difference between the same time points measured by the experimental method on the vertical axis. The evaluation of the four-quadrant plot is based on whether the trends of the differences between the new experimental measurement and the gold standard are concordant. When the trends of the two measurements increase or decrease together, those points are regarded as being in agreement (Saugel {\it{et al}}., 2015). Here, small difference values are not counted in the concordance rate; they are removed by introducing the ``exclusion zone". The concordance rate in a four-quadrant plot is calculated as the ratio of the number of agreements to the number of all accepted data points. However, this conventional concordance rate does not consider covariance within an individual, even though, in general, one subject is measured multiple times in clinical practice. 
In the case when the covariance within an individual is high, calculation without considering the covariance may lead to incorrect results. However, the concordance rate for the four-quadrant plot has not been extended to repeated measurements, unlike the Bland-Altman analysis. Thus, our study proposes a new concordance rate for the four-quadrant plot based on the multivariate normal distribution in order to take individual subjects into account. This new method can be applied to any number of repeated measurements. Specifically, the proposed concordance rate is formulated as the conditional probability of agreement given the event that no data points within an individual fall into the exclusion zone. In this study, we examine the case of three time points in a numerical simulation. The proposed method also has a parameter $m$, the minimum number of concordances between the two measurement methods required for them to be regarded as being in ``agreement". The number of concordances is counted as the number of ``agreements" out of the $T$ differences of measurement values. This parameter is the smallest number of concordant changes at which the trending of the two clinical measurement methods is judged to agree when calculating the concordance rate. For instance, when the parameter $m$ is $3$ and $T$ is $5$, the concordance rate evaluates the case of at least $3$ agreements out of $5$ differences. Analysts can set this parameter from a clinical perspective. In general, $T = m$ together with a high probability of concordance is ideal, but adjusting the parameter $m$ provides a more detailed interpretation of the degree of agreement. Accordingly, this study first proposes the new concordance rate for the four-quadrant plot in a general framework and then takes the calculation at three time points as an example. In detail, the remainder of this paper is organized as follows: Section 2 explains the conventional concordance rate for the four-quadrant plot. 
In Section 3, we introduce the new proposed concordance rate and present the case wherein the maximum number of agreements is two. Then, Section 4 presents the application of the proposed method to simulations and its results. Section 5 describes the results of the application to a real example. We conclude this paper in Section 6. \section{Concordance Rate} \label{sec:concordance} This section explains how to draw the four-quadrant plot and how to calculate the concordance rate using the conventional method. The assessment method for the trending agreement of two testing values using the four-quadrant plot was first proposed by Perrino {\it{et al}}. (1994). The four-quadrant plot uses each pair of differences between the values measured by the two clinical methods being compared. Let $x^{*}_{it^{*}} (i=1,2,\cdots,n;\quad t^{*}=1,2,\cdots,(T+1))$ denote the value of the gold standard for individual subject $i$ at the $t^{*}$th time point, and $y^{*}_{it^{*}} (i=1,2,\cdots,n;\quad t^{*}=1,2,\cdots,(T+1))$ the value of the experimental technique. Then, the $t$th difference of the values measured by the gold standard is \begin{align*} x_{it} = x^{*}_{i(t+1)} - x^{*}_{it} \quad (t=1,2,\cdots,T), \end{align*} and the $t$th difference of the values measured by the experimental technique is \begin{align*} y_{it} = y^{*}_{i(t+1)} - y^{*}_{it} \quad (t=1,2,\cdots,T). \end{align*} Plot 1 in Figure \ref{4q step} shows an example of measurement values in a time sequence comparing the two tests for one subject. Focusing on the first two data points in Plot 1, the difference between [2] and [1] can be described as [4] in the four-quadrant plot in Plot 2. At this time, both $x$ and $y$ increase, which indicates that the direction of change in $x$ and $y$ is the same. A point such as [4], plotted in the upper-right of the four-quadrant plot, can be evaluated as being in ``agreement." In contrast, the difference between [3] and [2] is plotted as [5] in the lower-right of Plot 2. 
In this case, $x$ increases but $y$ decreases, which means that the trend of $x$ and $y$ is recognized as being in ``disagreement." Similarly, if the difference in both $x$ and $y$ is negative, as plotted in the lower-left, the change is also in ``agreement," while the data points in the upper-left can be assessed as being in ``disagreement." \begin{figure} \caption{Plots for the step of drawing the four-quadrant plot. The horizontal axis denotes $x$, and the vertical axis denotes $y$. Plot 1: Data plotted for three pairs of values on Cartesian coordinates. Plot 2: Four-quadrant plot of the data in Plot 1.} \label{4q step} \end{figure} \begin{figure} \caption{Four-quadrant plot with artificial example data.} \label{4q plot} \end{figure} Figure \ref{4q plot} is a four-quadrant plot with artificial example data. In the figure, the red points in the upper-right and lower-left sections are counted as being in ``agreement." The blue dots, on the other hand, signify ``disagreement." When the difference value of the experimental technique is equal to that of the gold standard, the data dot is on the $45^\circ$ lines (dotted lines in Figure \ref{4q plot}). The concordance rate is calculated based on the idea above. 
The conventional concordance rate (CCR) is defined as follows: \begin{align} {\rm CCR}(a) = \frac{ \# {\rm SA} - \# {\rm AEz}(a) }{ nT - \# {\rm Ez} (a) }, \end{align} where \begin{align*} {\rm SA} =& \{(x_{it}, y_{it})| \; {\big(} (x_{it}\geq0,\;y_{it}\geq0)\ \cup\ (x_{it}<0,\;y_{it}<0){\big)}, \\ &i=1,2,\cdots,n;\;t=1,2,\cdots,T \}, \\ {\rm AEz}(a) =& \{(x_{it}, y_{it})| \; {\big(} (0\leq x_{it}\leq a,\;0\leq y_{it}\leq a)\ \cup\ (-a< x_{it}<0,\;-a<y_{it}<0){\big)}, \\ &i=1,2,\cdots,n;\;t=1,2,\cdots,T \}, \quad {\rm and } \\ {\rm Ez}(a)=&\{(x_{it},y_{it})|\;-a\leq x_{it}\leq a,\;-a \leq y_{it} \leq a, \\ &i=1,2,\cdots,n;\;t=1,2,\cdots,T\}. \end{align*} ${\rm SA}$ is the set of ``agreement" pairs of the differences between the values of the gold standard and the experimental technique. ${\rm Ez}(a)$ is the set of pairs plotted in the exclusion zone. In the four-quadrant plot, the exclusion zone (middle square in Figure \ref{4q plot}) is usually placed to remove data points close to the origin of the plot, because it is difficult to determine whether such small values have occurred due to measurement or mechanical errors (e.g., Critchley {\it{et al}}., 2010). The gray points plotted in the exclusion zone in Figure \ref{4q plot} are excluded when calculating the concordance rate. The range of the exclusion zone depends on $a$, which is set from a clinical point of view (e.g., Saugel {\it{et al}}., 2015). ${\rm AEz}(a)$ is the set of the ``agreement" pairs in the exclusion zone. \# signifies the cardinality of a set. The concordance rate in Eq. (1) is the ratio of the number of data points in the ``agreement" sections outside the exclusion zone to the number of all data points that fall outside the exclusion zone. This conventional concordance rate simply counts the number of data points that show the same trend of change. However, multiple measurements are generally taken for a single patient in a clinical setting. 
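A direct implementation of Eq. (1) is straightforward. The following sketch is illustrative (names are not from the paper, and both exclusion-zone boundaries are taken as closed for simplicity, whereas the definitions above mix strict and non-strict inequalities):

```python
import numpy as np

def conventional_ccr(dx, dy, a):
    """Conventional concordance rate CCR(a) of Eq. (1).
    dx, dy: arrays of shape (n, T) holding the successive differences
    x_{it}, y_{it} of the gold standard and the experimental technique.
    Pairs inside the exclusion zone (|x| <= a and |y| <= a) are dropped;
    the rate is the share of remaining pairs whose trends agree."""
    dx = np.asarray(dx, dtype=float).ravel()
    dy = np.asarray(dy, dtype=float).ravel()
    keep = ~((np.abs(dx) <= a) & (np.abs(dy) <= a))  # outside exclusion zone
    if keep.sum() == 0:
        return np.nan
    agree = ((dx >= 0) & (dy >= 0)) | ((dx < 0) & (dy < 0))
    return (agree & keep).sum() / keep.sum()
```

For example, with $n=2$ subjects and $T=3$ differences, a pair such as $(0.05, 0.01)$ is dropped for $a=0.1$, and the rate is computed over the remaining five pairs.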
Individual tendencies may influence the measurement results for a single subject. Therefore, individuals must be considered to calculate a more precise concordance rate. \section{Concordance Rate for the Four-quadrant Plot} \label{sec:proposal} \subsection{General framework of the proposed concordance rate} The proposed concordance rate evaluates the equivalence between the experimental technique and the gold standard through a calculation that considers the individual subjects. This proposed method includes the exclusion zone as well, and is defined as a conditional probability given the event that the data fall outside the exclusion zone at all time points. We estimate the parameters of the population with all the data. The calculation of the proposed method starts from the four-quadrant plot at each time point $t$. First, the quadrant sections are named $A_t$ to $D_t$. The events in which the $t$th pair falls in each section can be described in four ways: \begin{align*} A_{t}=&\{\omega |\; X_{t} (\omega) \geq 0, Y_{t} (\omega) \geq 0\}, \\ B_{t}=&\{\omega |\; X_{t} (\omega) < 0, Y_{t} (\omega) < 0\}, \\ C_{t}=&\{\omega |\; X_{t} (\omega) < 0, Y_{t} (\omega) \geq 0\}, \quad {\rm and} \\ D_{t}=&\{\omega |\; X_{t} (\omega) \geq 0, Y_{t} (\omega) < 0\} \quad (t = 1, 2, \cdots, T). \end{align*} Here, $X_t$ and $Y_t$ are the random variables of the $t$th differences of the values of the gold standard and the experimental technique, respectively; $X_t$ and $Y_t$ correspond to $x_{it}$ and $y_{it}$. ${\rm X} = (X_1, X_2, \cdots, X_T)$ and ${\rm Y} = (Y_1, Y_2, \cdots, Y_T)$ are assumed to follow multivariate normal distributions. $A_t$ in the upper-right and $B_t$ in the lower-left quadrants of the four-quadrant plot (Figure \ref{4q plot}) correspond to ``agreement," whereas $C_t$ in the upper-left and $D_t$ in the lower-right quadrants correspond to ``disagreement." 
Here, the family of sets is defined as follows: \begin{align*} \mathscr{W}_t = \{A_t \cup B_t, C_t \cup D_t\} \quad (t = 1, 2, \cdots, T). \end{align*} Then, the exclusion zone at the $t$th time point is \begin{align*} {\rm Ez_t}(a)=&\{\omega|\;-a \leq X_t(\omega) \leq a, -a \leq Y_t(\omega) \leq a\} \quad (t = 1, 2, \cdots, T). \end{align*} ${\rm Ez}(a)$ is also divided into four quadrant sections: \begin{align*} {\rm EzA_t}(a)=&\{\omega|\;0 \leq X_t(\omega) \leq a, 0 \leq Y_t(\omega) \leq a\}, \\ {\rm EzB_t}(a)=&\{\omega|\;-a \leq X_t(\omega) \leq 0, -a \leq Y_t(\omega) \leq 0\}, \\ {\rm EzC_t}(a)=&\{\omega|\;-a \leq X_t(\omega) \leq 0, 0 \leq Y_t(\omega) \leq a\}, \\ {\rm EzD_t}(a)=&\{\omega|\;0 \leq X_t(\omega) \leq a, -a \leq Y_t(\omega) \leq 0\}\quad (t = 1, 2, \cdots, T). \end{align*} The events that the random variables fall in $A_t, B_t, C_t$, and $D_t$, excluding the exclusion zone, are defined as follows: \begin{align*} A_t^{\dagger} =& A_t \cap {\rm EzA_t}(a)^c, \\ B_t^{\dagger} =& B_t \cap {\rm EzB_t}(a)^c, \\ C_t^{\dagger} =& C_t \cap {\rm EzC_t}(a)^c, \quad {\rm and}\\ D_t^{\dagger} =& D_t \cap {\rm EzD_t}(a)^c , \end{align*} where $Z^c$ is the complement of an arbitrary set $Z$. $A_t^{\dagger}$ and $B_t^{\dagger}$ are the events of ``agreement" that do not fall into the exclusion zone, whereas $C_t^{\dagger}$ and $D_t^{\dagger}$ are the events of ``disagreement" outside the exclusion zone. The proposed concordance rate is calculated under the condition that no pair $(X_t,Y_t)$ falls in the exclusion zone. This means that all data of one subject are excluded from the calculation if any pair of data points for that subject drops into the exclusion zone at least once. This can be described as \begin{align*} {\rm NEz}(a) = \Big \{ \omega|\; \forall t \; (t = 1,2, \cdots, T); \; \omega \notin {\rm Ez}_t(a)\Big \}. 
\end{align*} Here, the two clinical testing methods are regarded as equivalent if $X_t$ and $Y_t$ show the same direction of trends at least $m$ times out of $T$ per subject. The concordance rate is calculated for the event that the number of agreements is at least the specified number $m$, which is determined from a clinical perspective; $T$ is the number of differences of measurement values. Given this idea, we propose the new concordance rate, wherein the probability of ``agreement" at least $m$ times out of $T$ is defined as follows: \begin{align} P\Big[ \bigcup_{t = m}^T H_t \;\Big|\; {\rm NEz}(a) \Big] =& \frac{ P\Big[ \big(\bigcup_{t = m}^T H_t\big) \cap {\rm NEz}(a) \Big] } { P\Big [ {\rm NEz}(a) \Big] } \nonumber \\ =& \frac{ \sum_{t = m}^T P\Big[ H_t \cap {\rm NEz}(a) \Big] } { 1 - P\Big [ \bigcup_{s=1}^T {\rm Ez_s}(a) \Big] }, \label{condition1} \end{align} where \begin{align} H_t = \Big \{ \omega |\; (W_1(\omega), W_2(\omega), \cdots, W_T(\omega)) \in \prod_{s=1}^T \mathscr{W}_s, \; \sum_{s=1}^T I(W_s(\omega) = A_s \cup B_s) = t \Big \}. \label{condition2} \end{align} $H_t$ in Eq. (\ref{condition1}) is the subset of the sample space wherein the trends of $X$ and $Y$ agree exactly $t$ times. $I$ is the indicator function of the condition that the $s$th component equals the agreement set $A_s \cup B_s$. $\prod_{s=1}^T \mathscr{W}_s$ in Eq. (\ref{condition2}) denotes the Cartesian product. \subsection{Example of the proposed index, $T = 2$} Next, we explain the proposed concordance rate in the case of $m = 1$ and $T = 2$, that is, at three time points. The probability can be calculated as follows: \begin{align} P\Big[ \bigcup_{t = 1}^2 H_t \;\Big|\; {\rm NEz}(a) \Big] = \frac{ \sum_{t = 1}^2 P\Big[ H_t \cap {\rm NEz}(a) \Big] } { 1 - P\Big [ \bigcup_{s=1}^2 {\rm Ez}_s(a) \Big] }. \label{condition3} \end{align} We apply the definition at $T = 2$ to a four-quadrant plot. 
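Once the multivariate normal parameters have been fixed (in practice, estimated from all the data), the conditional probability in Eq. (\ref{condition1}) can be approximated by simulation. A hedged Monte Carlo sketch, with illustrative names, assuming the joint vector is ordered as $(X_1,\ldots,X_T,Y_1,\ldots,Y_T)$:

```python
import numpy as np

def proposed_cr_mc(mean, cov, m, T, a, n_sim=200_000, seed=0):
    """Monte Carlo sketch of the proposed concordance rate of Eq. (2):
    P[at least m agreements out of T | no pair falls in the exclusion zone].
    mean, cov: parameters of the 2T-dimensional normal distribution of the
    successive differences, ordered (X_1..X_T, Y_1..Y_T)."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(mean, cov, size=n_sim)
    x, y = z[:, :T], z[:, T:]
    in_ez = (np.abs(x) <= a) & (np.abs(y) <= a)   # Ez_t(a) per time point
    cond = ~in_ez.any(axis=1)                     # event NEz(a)
    if cond.sum() == 0:
        return np.nan
    agree = (x >= 0) == (y >= 0)                  # agreement quadrants A_t, B_t
    n_agree = agree.sum(axis=1)
    return ((n_agree >= m) & cond).sum() / cond.sum()
```

As a check, for $T=2$, $m=1$, a negligible exclusion zone, and all four components independent standard normal, each time point agrees with probability $1/2$, so the probability of at least one agreement is $3/4$.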
There are three patterns in the case of $T = 2$: agreement only at $t=1$, agreement only at $t=2$, and agreement at both $t=1$ and $t=2$. The probabilities in the numerator of the definition formula are \begin{align} P[H_1 \cap {\rm NEz}(a)] =& P[(A_1^{\dagger} \cup B_1^{\dagger}) \cap (C_2^{\dagger} \cup D_2^{\dagger})] + P[(C_1^{\dagger} \cup D_1^{\dagger}) \cap (A_2^{\dagger} \cup B_2^{\dagger})],\label{eq2} \\ P[H_2 \cap {\rm NEz}(a)] =& P[(A_1^{\dagger} \cup B_1^{\dagger}) \cap (A_2^{\dagger} \cup B_2^{\dagger})]. \label{eq3} \end{align} To describe each case, the range wherein the data point enters each quadrant of the plot is written as $F = \{ [0, \infty]^T, \; [-\infty, 0]^T \}$, and the range of the exclusion zone as $E = \{ [0, a]^T, \; [-a, 0]^T \}$. Vectors describing the ranges for the probability calculations are as follows: \begin{align*} {\bf v_1} = \left[ \begin{array}{c} v_{11} \\ v_{21} \\ \end{array} \right] , \quad {\bf v_2} = \left[ \begin{array}{c} v_{12} \\ v_{22} \\ \end{array} \right], \quad {\bf z_1} = \left[ \begin{array}{c} z_{11} \\ z_{21} \\ \end{array} \right], \quad {\bf z_2} = \left[ \begin{array}{c} z_{12} \\ z_{22} \\ \end{array} \right]. \end{align*} The first term of Eq. (\ref{eq2}) is the probability that the trend of $X_1$ and $Y_1$ is in agreement, whereas that of $X_2$ and $Y_2$ is not. 
This can also be expressed as \begin{align*} &P\Big[ (A_1^{\dagger} \cup B_1^{\dagger}) \cap (C_2^{\dagger} \cup D_2^{\dagger}) \Big] \nonumber \\ =& \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} \neq {\bf z_2} \\ {\bf v_1}, {\bf v_2}, {\bf z_1}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\ &+ \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} \neq {\bf z_2} \\ {\bf v_1},{\bf v_2}, {\bf z_1}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\ &- \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} \neq {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in F, \; {\bf v_2}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\ &- \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} \neq {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in E, \; {\bf v_2}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ). \end{align*} Then, the second term of Eq. (\ref{eq2}) is the probability when the trend of $X_1$ and $Y_1$ is in disagreement, but that of $X_2$ and $Y_2$ is in agreement. 
This can be rewritten similarly as \begin{align*} &P\Big[ (C_1^{\dagger} \cup D_1^{\dagger}) \cap (A_2^{\dagger} \cup B_2^{\dagger}) \Big] \nonumber \\ =& \sum_{\substack{\bf{v}_1 \neq {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf v_2}, {\bf z_1}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\ &+ \sum_{\substack{{\bf v_1} \neq {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf v_2}, {\bf z_1}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\ &- \sum_{\substack{{\bf v_1} \neq {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in F, \; {\bf v_2}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\ &- \sum_{\substack{{\bf v_1} \neq {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in E, \; {\bf v_2}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ). \end{align*} Eq. 
(\ref{eq3}) is the probability that the trends of $X_1$ and $Y_1$ and of $X_2$ and $Y_2$ are both concordant: \begin{align*} &P\Big[ (A_1^{\dagger} \cup B_1^{\dagger}) \cap (A_2^{\dagger} \cup B_2^{\dagger}) \Big] \nonumber \\ =& \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf v_2}, {\bf z_1}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\ &+ \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf v_2}, {\bf z_1}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\ &- \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in F, \; {\bf v_2}, {\bf z_2} \in E }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ) \nonumber \\ &- \sum_{\substack{{\bf v_1} = {\bf z_1}, {\bf v_2} = {\bf z_2} \\ {\bf v_1}, {\bf z_1} \in E, \; {\bf v_2}, {\bf z_2} \in F }} P(v_{11} < X_1 < v_{21},\; v_{12} < X_2 < v_{22},\; z_{11} < Y_1 < z_{21},\; z_{12} < Y_2 < z_{22} ). \end{align*} Finally, the probability of the denominator at $T = 2$ is \begin{align*} &1 - P\Big [ \bigcup_{s=1}^2 {\rm Ez}_s(a) \Big]\\ =& 1- P(-a < X_1 < a, -\infty < X_2 <\infty,\;-a < Y_1 <a, -\infty < Y_2 < \infty)\\ &- P(-\infty < X_1 < \infty, -a < X_2 <a,\;-\infty < Y_1 < \infty, -a < Y_2 < a)\\ &+ P(-a < X_1 < a, -a < X_2 <a,\;-a < Y_1 < a, -a < Y_2 < a). \end{align*} In the proposed concordance rate, we assume that all random variables follow a multivariate normal distribution. Therefore, we must estimate the mean vectors and covariance matrices to calculate the concordance rate. The method of estimating these parameters is described next. \subsection{Estimation}\label{sec3} First, we define $Z = (X_1,\; \cdots,\; X_T,\; Y_1,\; \cdots,\; Y_T) = (Z_1,\; \cdots,\; Z_{T}, Z_{T+1},\; \cdots,\; Z_{T+T})$.
Since the proposed method assumes that $Z$ follows a $T+T$-dimensional normal distribution, it is necessary to estimate the $T+T$-dimensional mean vector and variance-covariance matrix to calculate the concordance rate. The estimated mean vector in the proposed approach is $\bar{\bm z}=(\bar{x}_1,\;\cdots,\;\bar{x}_T,\; \bar{y}_1,\;\cdots,\;\bar{y}_T )^T$, where $\bar{x}_t$ and $\bar{y}_t$ are the means of the $t$th values of the gold standard and the experimental technique, respectively. The covariance matrix based on the differences between the times is ${\bm{S}} = (s_{tk})\quad (t,k=1,2,\cdots,T+T)$, where $s_{tk}$ is the covariance between the $t$th and $k$th components. By using these estimators, the proposed concordance rate in Eq. (\ref{condition1}), defined as the conditional probability, can be calculated. \section{Numerical Simulation} \label{sec:4} In this section, we describe the simulation design and present the simulation results. We set two types of evaluation for the simulation. First, we examined how close the concordance rates calculated with the conventional methods and the proposed approach were to the true concordance rate; the assessment of each method is expressed as the difference from the true concordance rate. The results of the proposed method cannot be directly compared with CCR$(a)$, since CCR$(a)$ does not account for repeated measurements. In order to enable a comparison with the conventional concordance rate, control1 and control2 were adjusted to allow repeated measurements, as detailed in Factor 7.\ Secondly, to compare the diagnosability of the proposed method with that of CCR$(a)$, we calculated ROC curves and the Area Under the Curve (AUC) (e.g. Pepe, 2003).
In this simulation, we used {\choosefont{pcr} RStudio Version 1.1.453.} \subsection{Simulation design} We set $T=2$, and the data generation procedure is as follows: \begin{align*} \bm{Z} \sim N (\bm{\mu}_{Z}, \bm{\Sigma}_{Z}) \end{align*} where ${\bf Z} = (X_1,X_2,Y_1,Y_2) ^{T}$. $X_t$ is the difference in the measurement values of the gold standard between the $t$th and $(t+1)$th times $(t=1,2)$, and $Y_t$ is that of the experimental technique. In addition, \begin{align*} \bm{\mu}_{Z} = \left[ \begin{array}{c} \bm{\mu}_{X}\\ \bm{\mu}_{Y}\\ \end{array} \right] , \quad \bm{\Sigma}_{Z} = \left[ \begin{array}{cc} \bm{\Sigma}_{X} & \bm{\Sigma}_{XY} \\ \bm{\Sigma}_{XY} & \bm{\Sigma}_{Y}\\ \end{array} \right], \end{align*} where $\bm{\mu}_{X} = (\mu_{x1},\mu_{x2})^{T}$ and $\bm{\mu}_{Y} = (\mu_{y1},\mu_{y2})^{T}$ are the mean vectors of the gold standard and the experimental technique, and $\bm{\Sigma}_{X}$ and $\bm{\Sigma}_{Y}$ are the corresponding covariance matrices, respectively.\\ Here, \begin{align*} \bm{\Sigma}_{X} = \left[ \begin{array}{cc} {\sigma}_{x1} & {\rho} \\ {\rho} & {\sigma}_{x2}\\ \end{array} \right], \quad \bm{\Sigma}_{Y} = \left[ \begin{array}{cc} {\sigma}_{y1} & {\rho} \\ {\rho} & {\sigma}_{y2}\\ \end{array} \right], \ {\rm and} \ \quad \bm{\Sigma}_{XY} = \left[ \begin{array}{cc} {\rho}_{XY} & {\rho}_{XY} \\ {\rho}_{XY} & {\rho}_{XY}\\ \end{array} \right]. \end{align*} We set $\sigma_{x1} = \sigma_{x2} = \sigma_{y1} = \sigma_{y2} = 1$. The factors set in the simulation are presented in Table \ref{T1}. The total number of patterns is $30\times3\times2\times2\times2\times2\times3=4320$. For each pattern, corresponding artificial data are generated 100 times and we evaluate the results. The levels of the seven factors are set as follows. \ \noindent {\bf Factor 1: Means} The means take one of 30 patterns, as shown in Table \ref{T2}. The setting depends on the combination of the magnitude of the mean value and the direction of change in $x$ and $y$.
\ \noindent {\bf Factor 2: Covariance between the difference values within each measurement method } The covariance between the difference values within each measurement method, ${\rho}$, is set to $0$, $1/3$ and $2/3$ for both $X$ and $Y$. \ \noindent {\bf Factor 3: Covariance between $X$ and $Y$} ${\rho}_{XY} = 0$ and $1/3$. \ \noindent {\bf Factor 4: Number of agreements} Factor 4 is the number of trending agreements between $X$ and $Y$. We set two different situations: (1) agreement at least once in $T=2$, and (2) agreement at both time points. \ \noindent {\bf Factor 5: Exclusion zone} The parameter $a$ of the exclusion zone ${\rm Ez}(a)$ is set to 0.5 and 1.0. \ \noindent {\bf Factor 6: Number of subjects} The number of subjects is set to 15 and 40. \ \noindent {\bf Factor 7: Methods} We calculate the concordance rate by four methods. Control1, control2, and the proposed method are used in the first evaluation, and CCR, control1, control2, and the proposed method are used in the second evaluation. We denote the proposed concordance rate as ``proposal". Control1, based on the binomial distribution, is calculated as follows: \begin{align*} \sum_{s=m}^{2} {}_{2}C_{s} p^s(1-p)^{(2-s)}, \end{align*} where \begin{align*} p = \frac{k_1+k_2}{n_1^{\dagger}+n_2^{\dagger}}. \end{align*} ${k_t}$ ($t = 1,2$) is the number of subjects that show the same trend between $X_t$ and $Y_t$ outside the exclusion zone. $n_t^{\dagger}$ is the number of subjects whose data points fall outside the exclusion zone. The concordance rate in control2 is calculated from the probability of each number of agreements: twice in two time points is $p_1p_2$, and once in two time points is $p_1(1-p_2)+(1-p_1)p_2$, where \begin{align*} p_t = \frac{k_t}{n_t^{\dagger}}\quad (t=1,2). \end{align*} \ Subjects whose difference values fall in the exclusion zone of the four-quadrant plot even once are excluded from the calculation of the concordance rate in both control1 and control2, in the same manner as in the proposed method.
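The two control rates of Factor 7 can be sketched as follows, with $T=2$ hardcoded as in the simulation. This is a hypothetical Python illustration (the paper used R; the names `control_rates`, `k`, and `n_dagger` are ours):

```python
from math import comb

def control_rates(k, n_dagger, m):
    """Sketch of control1 and control2 for T = 2.

    k[t]: number of subjects whose trends of X and Y agree at the t-th
    difference, outside the exclusion zone; n_dagger[t]: number of
    subjects outside the exclusion zone at the t-th difference.
    """
    T = 2
    # control1: one pooled agreement probability p, binomial tail from m to T
    p = sum(k) / sum(n_dagger)
    control1 = sum(comb(T, s) * p**s * (1 - p) ** (T - s) for s in range(m, T + 1))
    # control2: a separate probability per difference, p_1 and p_2
    p1, p2 = k[0] / n_dagger[0], k[1] / n_dagger[1]
    prob_by_count = {2: p1 * p2, 1: p1 * (1 - p2) + (1 - p1) * p2}
    control2 = sum(v for c, v in prob_by_count.items() if c >= m)
    return control1, control2
```

For example, with $k = (8, 6)$ and $n^{\dagger} = (10, 10)$, control1 pools $p = 0.7$, while control2 keeps $p_1 = 0.8$ and $p_2 = 0.6$ separate.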
\ The first evaluation index for the simulation results is the absolute value of the difference between the concordance rate based on the estimated parameters and the concordance rate computed with the true mean vector $\bm{\mu}_{Z}$ and the true covariance matrix $\bm{\Sigma}_{Z}$. A concordance rate approach is regarded as better if this absolute difference between the true and estimated values is smaller. For the second evaluation index, we assign a label to each pattern of means in Table \ref{T2}. If $\bm{\mu}_{X}$ and $\bm{\mu}_{Y}$ are concordant at both time points, we mark the corresponding mean pattern as ``$\circ$", and the rest as ``$\times$". Then, each of the 4320 $\times$ 100 generated data sets carries this label. ROC curves and AUC (e.g. Pepe, 2003) are calculated from the labels and the concordance rates of each method, and we compare the resulting AUC values among the proposed method, CCR$(a)$, control1 and control2. \begin{table} \begin{center} \caption{Factors of the simulation design} \label{T1} \begin{tabular}{llll} \hline Factor No. & Factor name & levels \\ \hline Factor 1 & Means & 30 \\ Factor 2 & Covariance between the difference values within each measurement method & 3\\ Factor 3 & Covariance between $X$ and $Y$ & 2 \\ Factor 4 & Number of agreements & 2 \\ Factor 5 & Exclusion zone & 2 \\ Factor 6 & Number of subjects & 2 \\ Factor 7 & Methods & 3 / 4 \\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption{Mean patterns in Factor 1: Label $\circ$ indicates that $\mu_{X}$ and $\mu_{Y}$ agree at both time points, and $\times$ indicates that they do not} \label{T2} \begin{tabular}{lrrrrclrrrrc} \hline Pattern No. &$\mu_{X1}$ & $\mu_{X2}$ & $\mu_{Y1}$ & $\mu_{Y2}$ & Label & Pattern No.
&$\mu_{X1}$ & $\mu_{X2}$ & $\mu_{Y1}$ & $\mu_{Y2}$ & Label\\ \hline Pattern1 & -1.5 & -1.5 & 1.5 & 1.5 & $\times$ & Pattern16 & 0.5 & 0.5 & -0.5 & -0.5 & $\times$\\ Pattern2 & -0.5 & -0.5 & 0.5 & 0.5 & $\times$ & Pattern17 & -0.5 & -1.5 & -0.5 & -1.5 & $\circ$\\ Pattern3 & -1.5 & 1.5 & 1.5 & 1.5 & $\times$ & Pattern18 & 0.5 & -1.5 & -0.5 & -1.5 & $\times$\\ Pattern4 & 0.5 & -0.5 & 0.5 & 0.5 & $\times$ & Pattern19 & -0.5 & 1.5 & -0.5 & -1.5 & $\times$\\ Pattern5 & 1.5 & 1.5 & 1.5 & 1.5 & $\circ$ & Pattern20 & 0.5 & 1.5 & -0.5 & -1.5 & $\times$\\ Pattern6 & 0.5 & 0.5 & 0.5 & 0.5 & $\circ$ & Pattern21 & -1.5 & -1.5 & -1.5 & 1.5 & $\times$\\ Pattern7 & -0.5 & -1.5 & 0.5 & 1.5 & $\times$ & Pattern22 & -0.5 & -0.5 & -0.5 & 0.5 & $\times$\\ Pattern8 & 0.5 & -1.5 & 0.5 & 1.5 & $\times$ & Pattern23 & -1.5 & 1.5 & -1.5 & 1.5 & $\circ$\\ Pattern9 & -0.5 & 1.5 & 0.5 & 1.5 & $\times$ & Pattern24 & 0.5 & -0.5 & -0.5 & 0.5 & $\times$\\ Pattern10 & 0.5 & 1.5 & 0.5 & 1.5 & $\circ$ & Pattern25 & 1.5 & 1.5 & -1.5 & 1.5 & $\times$\\ Pattern11 & -1.5 & -1.5 & -1.5 & -1.5 & $\circ$ & Pattern26 & 0.5 & 0.5 & -0.5 & 0.5 & $\times$\\ Pattern12 & -0.5 & -0.5 & -0.5 & -0.5 & $\circ$ & Pattern27 & -0.5 & -1.5 & -0.5 & 1.5 & $\times$\\ Pattern13 & -1.5 & 1.5 & -1.5 & -1.5 & $\times$ & Pattern28 & 0.5 & -1.5 & -0.5 & 1.5 & $\times$\\ Pattern14 & 0.5 & -0.5 & -0.5 & -0.5 & $\times$ & Pattern29 & -0.5 & 1.5 & -0.5 & 1.5 & $\circ$\\ Pattern15 & 1.5 & 1.5 & -1.5 & -1.5 & $\times$ & Pattern30 & 0.5 & 1.5 & -0.5 & 1.5 & $\times$\\ \hline \end{tabular} \end{center} \end{table} \subsection{Simulation results} \subsubsection{Difference between the true value and the estimation of each concordance rate method} In all simulation results, the proposed approach was closer to the true value than control methods. Figure \ref{fig.1} showed the result of this simulation. We also showed median, the first quartile and the third quartile by each factor in tables. 
The medians of the proposed method were smaller, and the interquartile ranges were narrower, than those of control1 and control2 for all factors. These results indicate that the variation of the proposed method was smaller than that of the two control methods; its estimation was stable.\ Table \ref{TF1} presents the results for each pattern of the means. In Patterns 3, 13, 21 and 25, the bias of control1 tended to be large. These patterns correspond to the situation in which all absolute mean values of $X$ and $Y$ are 1.5 and the directions of the trends disagree at both time points. Compared with the control methods, the proposed method was stable in all patterns. Table \ref{TF2} shows the results for the covariance of the difference values within each measurement method, and Table \ref{TF3} shows those for the covariance between $X$ and $Y$. The results of all methods were almost the same in terms of both covariances, and the proposed method was more stable than the conventional methods for both factors. The proposed method was closer to the true values than the control methods for both $m=1$ and $m=2$ (Table \ref{TF4}), which means that the proposal evaluates properly for every number of agreements in the case of $T=2$. Regarding the exclusion zone, the deviations were slightly larger for the larger exclusion zone (Table \ref{TF5}). The deviations of all methods were smaller for the larger number of subjects (Table \ref{TF6}). \subsubsection{Diagnosability of the estimation of each concordance method} To compare the diagnosability of the proposed method with that of the conventional methods, we plotted the ROC curves of the proposal, CCR, control1 and control2 in Figure \ref{fig.8.1} and calculated their AUC in Table \ref{T3}. Judging from the AUC results, the proposed method was better than the conventional methods. In other words, the diagnostic capability of the proposed method was superior to that of the conventional methods.
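The AUC comparison can be reproduced in essence with a minimal rank-based (Mann-Whitney) computation. This Python sketch uses our own naming and is not the paper's R code:

```python
import numpy as np

def auc(labels, scores):
    """Minimal AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs in which the positive case receives the
    higher score; ties count one half.

    labels: 1 for mean patterns marked 'o' (agreement), 0 for 'x';
    scores: the concordance rate estimated by one of the methods.
    """
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # pairwise score differences between every positive and negative case
    diff = pos[:, None] - neg[None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())
```

A method that assigns every agreeing pattern a higher concordance rate than every disagreeing pattern attains AUC 1; uninformative scores give 0.5.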
\begin{figure} \caption{Result of the simulation} \label{fig.1} \end{figure} \begin{table} \begin{center} \caption{The result of the simulation for Factor1: Means.}\label{TF1} \begin{tabular}{l|lll} \hline Pattern No. & control1 & control2 & proposal\\ \hline Pattern1 & 0.028 (0.011, 0.059) & 0.028 (0.011, 0.060) & 0.018 (0.007, 0.042) \\ Pattern2 & 0.076 (0.036, 0.137) & 0.079 (0.036, 0.141) & 0.047 (0.022, 0.086) \\ Pattern3 & 0.158 (0.131, 0.191) & 0.038 (0.019, 0.069) & 0.025 (0.012, 0.043)\\ Pattern4 & 0.066 (0.030, 0.117) & 0.071 (0.034, 0.123) & 0.046 (0.021, 0.080)\\ Pattern5 & 0.023 (0.010, 0.057) & 0.023 (0.011, 0.057) & 0.015 (0.006, 0.037)\\ Pattern6 & 0.072 (0.034, 0.124) & 0.074 (0.036, 0.127) & 0.043 (0.019, 0.079)\\ Pattern7 & 0.048 (0.022, 0.093) & 0.053 (0.024, 0.096) & 0.033 (0.015, 0.068)\\ Pattern8 & 0.070 (0.035, 0.118) & 0.051 (0.023, 0.093) & 0.033 (0.016, 0.062)\\ Pattern9 & 0.059 (0.029, 0.105) & 0.043 (0.020, 0.086) & 0.028 (0.013, 0.059)\\ Pattern10 & 0.035 (0.016, 0.081) & 0.039 (0.020, 0.085) & 0.025 (0.011, 0.060)\\ Pattern11 & 0.022 (0.010, 0.055) & 0.023 (0.011, 0.056) & 0.014 (0.006, 0.036)\\ Pattern12 & 0.072 (0.035, 0.124) & 0.074 (0.038, 0.126) & 0.042 (0.020, 0.077)\\ Pattern13 & 0.159 (0.132, 0.190) & 0.038 (0.019, 0.069) & 0.025 (0.012, 0.042)\\ Pattern14 & 0.065 (0.029, 0.117) & 0.069 (0.032, 0.127) & 0.045 (0.021, 0.082)\\ Pattern15 & 0.029 (0.011, 0.061) & 0.030 (0.011, 0.062) & 0.018 (0.007, 0.042)\\ Pattern16 & 0.079 (0.038, 0.137) & 0.082 (0.038, 0.141) & 0.047 (0.021, 0.086)\\ Pattern17 & 0.036 (0.016, 0.085) & 0.040 (0.021, 0.086) & 0.026 (0.011, 0.060)\\ Pattern18 & 0.059 (0.029, 0.104) & 0.043 (0.020, 0.090) & 0.030 (0.013, 0.061)\\ Pattern19 & 0.070 (0.034, 0.116) & 0.052 (0.024, 0.095) & 0.034 (0.015, 0.062)\\ Pattern20 & 0.047 (0.021, 0.092) & 0.052 (0.023, 0.092) & 0.033 (0.014, 0.067)\\ Pattern21 & 0.159 (0.132, 0.190) & 0.038 (0.019, 0.069) & 0.024 (0.011, 0.042)\\ Pattern22 & 0.066 (0.031, 0.117) & 0.073 
(0.035, 0.127) & 0.046 (0.021, 0.082)\\ Pattern23 & 0.015 (0.005, 0.056) & 0.014 (0.005, 0.056) & 0.009 (0.003, 0.036)\\ Pattern24 & 0.068 (0.030, 0.120) & 0.071 (0.031, 0.125) & 0.045 (0.021, 0.080)\\ Pattern25 & 0.158 (0.130, 0.190) & 0.038 (0.019, 0.069) & 0.025 (0.012, 0.043)\\ Pattern26 & 0.067 (0.030, 0.118) & 0.074 (0.034, 0.128) & 0.046 (0.022, 0.082)\\ Pattern27 & 0.072 (0.034, 0.117) & 0.050 (0.023, 0.094) & 0.034 (0.016, 0.064)\\ Pattern28 & 0.046 (0.020, 0.086) & 0.045 (0.020, 0.090) & 0.031 (0.013, 0.062)\\ Pattern29 & 0.042 (0.017, 0.089) & 0.036 (0.016, 0.084) & 0.022 (0.009, 0.055)\\ Pattern30 & 0.058 (0.028, 0.103) & 0.042 (0.019, 0.086) & 0.029 (0.013, 0.059)\\ \hline \end{tabular}\\ \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile) \end{table} \begin{table} \begin{center} \caption{The result of the simulation for Factor2: Covariance of the difference values within each measurement method.}\label{TF2} \begin{tabular}{l|lll} \hline & control1 & control2 & proposal \\ \hline ${\rho} = 0$ & 0.061 (0.022, 0.123) & 0.043 (0.017, 0.091) & 0.028 (0.011, 0.058)\\ ${\rho} = 1/3$ & 0.063 (0.022, 0.124) & 0.045 (0.019, 0.091) & 0.029 (0.012, 0.060)\\ ${\rho} = 2/3$ & 0.069 (0.029, 0.137) & 0.052 (0.025, 0.102) & 0.033 (0.014, 0.065)\\ \hline \end{tabular} \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile) \end{table} \begin{table} \begin{center} \caption{The result of the simulation for Factor3: Covariance between $X$ and $Y$}\label{TF3} \begin{tabular}{l|lll} \hline & control1 & control2 & proposal\\ \hline ${\rho}_{XY} = 0$ & 0.065 (0.025, 0.127) & 0.048 (0.02, 0.094) & 0.031 (0.013, 0.062)\\ ${\rho}_{XY} = 1/3$ & 0.064 (0.024, 0.130) & 0.046 (0.019, 0.095) & 0.029 (0.012, 0.060)\\ \hline \end{tabular} \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile) \end{table} \begin{table} \begin{center} \caption{The result of the 
simulation for Factor4: Number of agreements.}\label{TF4} \begin{tabular}{l|lll} \hline & control1 & control2 & proposal\\ \hline $m = 1$ & 0.059 (0.021, 0.122) & 0.042 (0.017, 0.087) & 0.027 (0.011, 0.056)\\ $m = 2$ & 0.070 (0.029, 0.135) & 0.052 (0.023, 0.102) & 0.034 (0.014, 0.066)\\ \hline \end{tabular} \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile) \end{table} \begin{table} \begin{center} \caption{The result of the simulation for Factor5: Exclusion zone. }\label{TF5} \begin{tabular}{l|lll} \hline & control1 & control2 & proposal\\ \hline $a = 0.5$ & 0.058 (0.023, 0.116) & 0.043 (0.019, 0.085) & 0.028 (0.012, 0.055)\\ $a = 1.0$ & 0.072 (0.026, 0.141) & 0.053 (0.021, 0.106) & 0.032 (0.013, 0.067)\\ \hline \end{tabular} \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile) \end{table} \begin{table} \begin{center} \caption{The result of the simulation for Factor6: Number of subjects. }\label{TF6} \begin{tabular}{l|lll} \hline & control1 & control2 & proposal\\ \hline $n = 15$ & 0.077 (0.029, 0.145) & 0.061 (0.025, 0.117) & 0.039 (0.016, 0.078)\\ $n = 40$ & 0.055 (0.022, 0.111) & 0.038 (0.016, 0.074) & 0.024 (0.010, 0.047)\\ \hline \end{tabular} \end{center} $\quad \quad \quad \quad \quad \quad \quad$ median(first quartile, third quartile) \begin{center} \caption{AUC of proposal method, CCR, control1 and control2 in the simulation}\label{T3} \begin{tabular}{l|llll} \hline & proposal & CCR & control1 & control2\\ \hline AUC & 0.930 & 0.898 & 0.898 & 0.909\\ \hline \end{tabular} \label{table3} \end{center} \end{table} \begin{figure} \caption{ROC curves of proposal method, CCR, control1 and control2 for the simulation} \label{fig.8.1} \end{figure} \section{Real Example} \label{sec:5} In this section, we show the usefulness of the proposed concordance rate by the diagnosability through a real example. 
We applied the proposed method to the blood pressure data in the package {\choosefont{pcr}MethComp} of {\choosefont{pcr} R software} (Carstensen {\it{et al}}., 2020). The data (Altman and Bland, 1991; Bland and Altman, 1999) comprise the blood pressure measurements of 85 subjects in 3 series: the series named J and R were measured by the gold standard by 2 different human observers, and S was measured by an automatic machine as the experimental method. The study was performed at three time points for each subject.\ The four-quadrant plots generated from the real data are presented in Figure \ref{fig.9}. Comparing 2 of the 3 measurement results to one another, there are 3 pairs: J(observer1) and R(observer2), R and S(auto machine), and J and S. Each pattern has 2 plots, (1) $t=1$ and (2) $t=2$. We calculated the concordance rate with the proposed method, CCR, control1 and control2 as described in Section 4. The concordance rate was computed in 2 cases: the trend of change agreeing at least once in the two time points ($m = 1$) and at both time points ($m = 2$). ${\rm Ez}(a)$ was set at the 10 percent quantile point in each pair (e.g. Critchley {\it{et al}}., 2010). \ To assess the methods, we compared the diagnostic feasibility of the proposal and the conventional methods CCR, control1 and control2. Specifically, 10 subjects out of the 85 were randomly selected $100$ times, and the concordance rates were obtained by the four methods for the case of $m=2$ only, in each pair. Based on the results, the AUC of the proposal, CCR, control1 and control2 was calculated, and ROC curves of the proposal and CCR were drawn to estimate the diagnosability. Each pattern of the four-quadrant plots in Figure \ref{fig.9} shows the characteristics of the real example.
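The quantile-based choice of ${\rm Ez}(a)$ can be sketched as follows (hypothetical Python; pooling the absolute difference values of the two methods before taking the quantile is our reading of the convention, not a step spelled out in the paper):

```python
import numpy as np

def exclusion_zone_a(dx, dy, q=0.10):
    """Set the half-width a of the exclusion zone Ez(a) at the q-quantile
    of the pooled absolute difference values of the two methods.

    dx, dy: arrays of difference values for the gold standard and the
    experimental technique, respectively.
    """
    pooled = np.abs(np.concatenate([np.ravel(dx), np.ravel(dy)]))
    return float(np.quantile(pooled, q))
```

With `q=0.10`, roughly the smallest ten percent of the pooled absolute differences fall inside $(-a, a)$ and are excluded from the trend comparison.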
The data of J and R in Pattern 1 have many red points, which show ``agreement" of the trend between the two series, and most of these points lie close to the $45^\circ$ line; this tendency naturally derives from the two series sharing the same established measurement method. On the other hand, the data of S, the experimental measurement, are collected in a different way; thus, the plots of Pattern 2 and Pattern 3 have more blue dots, indicating ``disagreement", than the plot of Pattern 1, and the data are more widely dispersed. Then, the data of Pattern 1 are given the ``agreement" label, and the data of both Pattern 2 and Pattern 3 the ``disagreement" label. For the evaluation, $10$ subjects out of the 85 are randomly selected, and the concordance rates are calculated by the proposal, CCR, control1, and control2 in all three patterns. The procedure was iterated $1000$ times, and the diagnostic performance of each method was evaluated. The AUC of the proposed method, CCR, control1 and control2 is shown in Table \ref{Table4}. Each concordance rate was estimated with high accuracy for $m=2$ on the example data. The proposal was better than CCR, control1 and control2. As for the ROC curves in Figure \ref{fig.10}, the proposed method drew a curve with an almost right angle, while the curve of CCR was more moderate. The AUC and ROC curves indicate that the proposed approach is more accurate than the conventional concordance rates. \begin{figure} \caption{Four-quadrant plots with real example data.
Pattern1: J(observer1) and R(observer2), Pattern2: R and S(automatic machine), and Pattern3: J and S.} \label{fig.9} \end{figure} \begin{figure} \caption{ROC of proposal and CCR} \label{fig.10} \end{figure} \begin{table}[htbp] \begin{center} \caption{AUC of CCR, control1, control2 and proposal in a real example}\label{R3} \begin{tabular}{llll} \hline proposal &CCR & control1 & control2 \\ \hline $0.999$ & $0.964$ & $0.964$ & $0.965$ \\ \hline \label{Table4} \end{tabular} \end{center} \end{table} \section{Discussion} \label{sec:6} The conventional concordance rate for a four-quadrant plot is one of the methods for evaluating the equivalence between a new testing method and a standard measurement method. In many clinical practice situations, these values are observed repeatedly for the same subjects. However, the conventional concordance rate for the four-quadrant plot does not consider individual subjects when evaluating the trend of measurement values between the two clinical testing methods being compared. Therefore, we proposed a new concordance rate based on the normal distribution that is calculated from the difference values of each measurement technique, depending on the number of agreements. The minimum number of agreements required to conclude equivalence, the hyperparameter $m$, can be set according to the total number of time points in the data and the clinical point of view. For most factors set in the simulation, the proposed concordance rate was closer to the true value than the conventional methods. In addition, the numerical simulations showed that the diagnosability of the proposed method was superior to both the existing concordance method and the control methods derived from it. Moreover, through the real example using blood pressure data, we demonstrated the superiority of the proposed method in diagnosability in terms of the AUC values.
We provided the results of the numerical simulations and a real example only for the case of $T = 2$ in this study; however, the proposed concordance rate can be calculated for any $T$. Here we mention the assumptions of the proposed method and its relation to existing statistical methods. In the proposed method, we assumed that the data follow a multivariate normal distribution. In practical situations, the concordance rate is used together with Bland-Altman analysis to evaluate the equivalence of two measurement methods, and Bland-Altman analysis assumes normally distributed data (e.g. Bland and Altman, 2007; Bartko, 1976; Zou, 2013). Therefore, the assumption of the proposed method is consistent with that of Bland-Altman analysis. Next, Goodman and Kruskal's gamma (Goodman and Kruskal, 1963) is similar to the concordance rate, although its range is different. The gamma statistic does not consider the exclusion zone and, in the practical situation of clinical trials, the concordance rate is usually used with Bland-Altman analysis. Finally, we discuss four points of future work for this study. First, as with the conventional concordance rate, there are no absolute criteria for the values of the proposed concordance rate. Although various criteria have been proposed, there are no commonly accepted criteria for the conventional concordance rate (e.g., Saugel {\it{et al}}., 2015). Therefore, it is difficult to judge a result as good, acceptable, or poor. Secondly, the results of the proposed concordance rate may also be affected by the time intervals between the measurement values, similar to the conventional concordance rate (e.g., Saugel {\it{et al}}., 2015). Thirdly, we have to determine the parameter of the exclusion zone (e.g., Critchley {\it{et al}}., 2011). Fourthly, in the proposed method, we introduced the hyperparameter $m$, which allows us to arrive at a flexible interpretation of the results.
While Bland-Altman analysis is sometimes used in confirmatory clinical trials based on statistical inference (e.g., Asamoto {\it{et al}}., 2017), our proposed concordance rate for the four-quadrant plot has not yet been established in this regard; the estimation of confidence intervals will be needed. In this study, we found that the conventional concordance rate is not an appropriate indicator for repeated measurements, while the proposed concordance rate enhances the accuracy by taking the number of agreements into account. As the proposed concordance rate evaluates the trending agreement from various perspectives, this new method is expected to contribute to clinical decisions as an exploratory analysis. Further consideration is thus required from these points of view. \ \noindent {\bf{Conflict of Interest}} \noindent {\it{The authors have declared no conflict of interest. }} \end{document}
\begin{document} \selectlanguage{english} \begin{abstract} We study spectral properties of unbounded Jacobi matrices with periodically modulated or blended entries. Our approach is based on uniform asymptotic analysis of generalized eigenvectors. We determine when the studied operators are self-adjoint. We identify regions where the point spectrum has no accumulation points. This allows us to completely describe the essential spectrum of these operators. \end{abstract} \maketitle \section{Introduction} Consider two sequences $a = (a_n : n \in \mathbb{N}_0)$ and $b = (b_n : n \in \mathbb{N}_0)$ such that $a_n > 0$ and $b_n \in \mathbb{R}$ for all $n \geq 0$. Let $A$ be the closure in $\ell^2(\mathbb{N}_0)$ of the operator acting on sequences having finite support by the matrix \[ \begin{pmatrix} b_0 & a_0 & 0 & 0 &\ldots \\ a_0 & b_1 & a_1 & 0 & \ldots \\ 0 & a_1 & b_2 & a_2 & \ldots \\ 0 & 0 & a_2 & b_3 & \\ \vdots & \vdots & \vdots & & \ddots \end{pmatrix}. \] The operator $A$ is called a \emph{Jacobi matrix}. Recall that $\ell^2(\mathbb{N}_0)$ is the Hilbert space of square summable complex valued sequences with the scalar product \[ \sprod{x}{y}_{\ell^2(\mathbb{N}_0)} = \sum_{n=0}^\infty x_n \overline{y_n}. \] The most thoroughly studied are bounded Jacobi matrices, see e.g. \cite{Simon2010Book}. Let us recall that the Jacobi matrix $A$ is bounded if and only if the sequences $a$ and $b$ are bounded. In this article we are exclusively interested in \emph{unbounded} Jacobi matrices. We shall consider two classes: periodically modulated and periodically blended. The first class has been introduced in \cite{JanasNaboko2002} and systematically studied since then. To be precise, let $N$ be a positive integer.
We say that $A$ has \emph{$N$-periodically modulated} entries if there are two $N$-periodic sequences $(\alpha_n : n \in \mathbb{Z})$ and $(\beta_n : n \in \mathbb{Z})$ of positive and real numbers, respectively, such that \begin{enumerate}[leftmargin=2em, label=\alph*)] \item $\begin{aligned}[b] \lim_{n \to \infty} a_n = \infty \end{aligned},$ \item $\begin{aligned}[b] \lim_{n \to \infty} \bigg| \frac{a_{n-1}}{a_n} - \frac{\alpha_{n-1}}{\alpha_n} \bigg| = 0 \end{aligned},$ \item $\begin{aligned}[b] \lim_{n \to \infty} \bigg| \frac{b_n}{a_n} - \frac{\beta_n}{\alpha_n} \bigg| = 0 \end{aligned}.$ \end{enumerate} This class contains sequences one can find in many applications. It is also rich enough to allow building an intuition about the general case. In particular, in this class there are examples of Jacobi matrices with purely absolutely continuous spectrum filling the whole real line (see \cite{JanasNaboko2002, PeriodicI, PeriodicII, SwiderskiTrojan2019, JanasNaboko2001}), having a bounded gap in absolutely continuous spectrum (see \cite{Dombrowski2004, Dombrowski2009, DombrowskiJanasMoszynskiEtAl2004, DombrowskiPedersen2002a, DombrowskiPedersen2002, JanasMoszynski2003, JanasNabokoStolz2004, Sahbani2016, Janas2012}), having absolutely continuous spectrum on the half-line (see \cite{Damanik2007, Janas2001, Janas2009, Motyka2015, Naboko2009, Naboko2010, Simonov2007, DombrowskiPedersen1995, Naboko2019}), having purely singular continuous spectral measure with explicit Hausdorff dimension (see \cite{Breuer2010}), having a dense point spectrum on the real line (see \cite{Breuer2010}), and having an empty essential spectrum (see \cite{Silva2004, Silva2007, Silva2007a, HintonLewis1978, Szwarc2002}). The second class, that is blended Jacobi matrices (see Definition~\ref{def:3}), has been introduced in \cite{Janas2011} as an example of unbounded Jacobi matrices having absolutely continuous spectrum equal to a finite union of compact intervals. 
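The modulation conditions a)--c) can be checked numerically. The sketch below uses the illustrative choice $a_n = (n+1)\,\alpha_{n \bmod N}$, $b_n = (n+1)\,\beta_{n \bmod N}$ with hypothetical $2$-periodic data $\alpha$, $\beta$; none of these concrete values come from the paper.

```python
# Numerical check of the N-periodic modulation conditions a)-c)
# for the illustrative choice a_n = (n+1)*alpha_{n mod N},
# b_n = (n+1)*beta_{n mod N} (hypothetical data, not from the paper).
N = 2
alpha = [1.0, 2.0]   # N-periodic, positive
beta = [0.5, -1.0]   # N-periodic, real

def a(n): return (n + 1) * alpha[n % N]
def b(n): return (n + 1) * beta[n % N]

for n in [10, 100, 1000, 10000]:
    cond_b = abs(a(n - 1) / a(n) - alpha[(n - 1) % N] / alpha[n % N])
    cond_c = abs(b(n) / a(n) - beta[n % N] / alpha[n % N])
    print(n, a(n), cond_b, cond_c)
# a(n) grows without bound while both differences tend to 0,
# so conditions a)-c) hold for this choice.
```

For this family condition c) holds exactly, while condition b) decays like $1/n$, which is the typical rate for such polynomially growing weights.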
It has been further studied in \cite{ChristoffelI, SwiderskiTrojan2019} in the context of orthogonal polynomials. Before we formulate the main results of this paper, let us introduce some definitions. In our investigation, the crucial r\^ole is played by the \emph{transfer matrix} defined as \[ B_j(x) = \begin{pmatrix} 0 & 1 \\ -\frac{a_{j-1}}{a_j} & \frac{x - b_j}{a_j} \end{pmatrix}. \] We say that a sequence $(x_n : n \in \mathbb{N})$ of vectors from a normed vector space $X$ belongs to $\mathcal{D}_r(X)$ for a certain $r \in \mathbb{N}_0$, if it is \emph{bounded,} and for each $j \in \{1, \ldots, r\}$, \[ \sum_{n = 1}^\infty \big\| \Delta^j x_n \big\|^\frac{r}{j} < \infty \] where \begin{align*} \Delta^0 x_n &= x_n, \\ \Delta^j x_n &= \Delta^{j-1} x_{n+1} - \Delta^{j-1} x_n, \qquad j \geq 1. \end{align*} If $X$ is the real line with the Euclidean norm, we abbreviate $\mathcal{D}_{r} = \mathcal{D}_{r}(X)$. Given a compact set $K \subset \mathbb{C}$ and a normed vector space $R$, we denote by $\mathcal{D}_{r}(K, R)$ the case when $X$ is the space of all continuous mappings from $K$ to $R$ equipped with the supremum norm. \begin{main_theorem} \label{thm:A} Suppose that $A$ is a Jacobi matrix with $N$-periodically modulated entries. Let \[ \mathcal{X}_0(x) = \lim_{n \to \infty} X_{nN}(x) \] where \[ X_n(x) = B_{n+N-1}(x) B_{n+N-2}(x) \cdots B_n(x). \] Assume that\footnote{For a real matrix $X$ we define its discriminant as $\operatorname{discr} X = (\operatorname{tr} X)^2 - 4 \det X$.} $\operatorname{discr} \mathcal{X}_0(0) > 0$.
If there are a compact set $K \subset \mathbb{R}$ with at least $N+1$ points, $r \in \mathbb{N}$ and $i \in \{0, 1, \ldots, N-1 \}$, so that\footnote{By $\operatorname{Mat}(d, \mathbb{R})$ we denote the real matrices of dimension $d \times d$ with the operator norm.} \begin{equation} \label{eq:131} \big( X_{nN+i} : n \in \mathbb{N} \big) \in \mathcal{D}_r \big( K, \operatorname{Mat}(2, \mathbb{R}) \big), \end{equation} then $A$ is self-adjoint and\footnote{For a self-adjoint operator $A$ we denote by $\sigma_{\mathrm{ess}}(A), \sigma_{\mathrm{ac}}(A)$ and $\sigma_{\mathrm{sing}}(A)$ its essential spectrum, absolutely continuous spectrum and singular spectrum, respectively.} $\sigma_{\mathrm{ess}}(A) = \emptyset$. \end{main_theorem} Recall that a sufficient condition for self-adjointness of the operator $A$ is \emph{Carleman's condition} (see e.g. \cite[Corollary 6.19]{Schmudgen2017}), that is \begin{equation} \label{eq:19} \sum_{n=0}^\infty \frac{1}{a_n} = \infty. \end{equation} The conclusion of Theorem~\ref{thm:A} is in strong contrast with the case when $\operatorname{discr} \mathcal{X}_0(0) < 0$. Indeed, if $\operatorname{discr} \mathcal{X}_0(0) < 0$, then by \cite[Theorem A]{SwiderskiTrojan2019}, the operator $A$ is self-adjoint if and only if Carleman's condition is satisfied. If this is the case, then $A$ is purely absolutely continuous and $\sigma(A) = \mathbb{R}$. Under Carleman's condition, the conclusion of Theorem~\ref{thm:A} for $r=1$ has been proven in \cite{JanasNaboko2002} by showing that the resolvent of $A$ is compact. Furthermore, by \cite[Theorem 8]{Chihara1962} (see also \cite[Theorem 2.6]{Szwarc2002}) it follows that if a self-adjoint Jacobi matrix $A$ is $1$-periodically modulated with $\operatorname{discr} \mathcal{X}_0(0) > 0$, then $\sigma_{\mathrm{ess}}(A) = \emptyset$, i.e. the condition \eqref{eq:131} is not necessary here. \begin{main_theorem} \label{thm:B} Suppose that $A$ is a Jacobi matrix with $N$-periodically blended entries.
Let \[ \mathcal{X}_1(x) = \lim_{n \to \infty} X_{n(N+2)+1}(x) \] where \[ X_n(x) = B_{n+N+1}(x) B_{n+N}(x) \cdots B_n(x). \] If there are a compact set $K \subset \mathbb{R}$ with at least $N+3$ points, $r \in \mathbb{N}$, and $i \in \{1,2, \ldots, N \}$, so that \[ \big( X_{n(N+2)+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{r} \big( K, \operatorname{Mat}(2, \mathbb{R}) \big), \] then $A$ is self-adjoint and \[ \sigma_{\mathrm{sing}}(A) \cap \Lambda = \emptyset \quad \text{and} \quad \sigma_{\mathrm{ac}}(A) = \sigma_{\mathrm{ess}}(A) = \overline{\Lambda} \] where \[ \Lambda = \big\{ x \in \mathbb{R} : \operatorname{discr} \mathcal{X}_1(x) < 0 \big \}. \] \end{main_theorem} For the proof of Theorem \ref{thm:B} see Theorem \ref{thm:6}. Let us comment that in Theorem \ref{thm:B}, the absolute continuity of $A$ follows from \cite[Theorem B]{SwiderskiTrojan2019}. Moreover, by \cite[Theorem 3.13]{ChristoffelI} it follows that $\Lambda$ is a union of $N$ open disjoint bounded intervals. For $r = 1$ and under certain very strong assumptions, Theorem \ref{thm:B} has been proven in \cite[Theorem 5]{Janas2011}. The following result concerns the case when $\operatorname{discr} \mathcal{X}_0(0) = 0$. For the proof see Theorem \ref{thm:10}. \begin{main_theorem} \label{thm:C} Let $A$ be a Jacobi matrix with $N$-periodically modulated entries, and let $X_n$ and $\mathcal{X}_0$ be defined as in Theorem~\ref{thm:A}. Suppose that $\mathcal{X}_0(0) = \sigma \operatorname{Id}$ for some $\sigma \in \{-1, 1\}$, and that there are two $N$-periodic sequences $(s_n : n \in \mathbb{N}_0)$ and $(z_n : n \in \mathbb{N}_0)$, such that \[ \lim_{n \to \infty} \bigg|\frac{\alpha_{n-1}}{\alpha_n} a_n - a_{n-1} - s_n\bigg| = 0, \qquad \lim_{n \to \infty} \bigg|\frac{\beta_n}{\alpha_n} a_n - b_n - z_n\bigg| = 0. \] Let $R_n = a_{n+N-1}(X_n - \sigma \operatorname{Id})$. Then $(R_{kN} : k \in \mathbb{N}_0)$ converges locally uniformly on $\mathbb{R}$ to $\mathcal{R}_0$.
If there are a compact set $K \subset \mathbb{R}$ with at least $N+1$ points and $i \in \{1,2, \ldots, N \}$, so that \[ \big( R_{nN+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{1} \big( K, \operatorname{Mat}(2, \mathbb{R}) \big), \] then $A$ is self-adjoint and \[ \sigma_{\mathrm{sing}}(A) \cap \Lambda = \emptyset \qquad \text{and} \qquad \sigma_{\mathrm{ac}}(A) = \sigma_{\mathrm{ess}}(A) = \overline{\Lambda} \] where \[ \Lambda = \big\{ x \in \mathbb{R} : \operatorname{discr} \mathcal{R}_0(x) < 0 \big\}. \] \end{main_theorem} In fact, Theorem~\ref{thm:C} completes the analysis started in \cite{PeriodicIII} where it has been shown that $\overline{\Lambda} \subset \sigma_{\mathrm{ac}}(A)$. Finally, we investigate the case when Carleman's condition \eqref{eq:19} is \emph{not} satisfied. \begin{main_theorem} \label{thm:D} Let $A$ be a Jacobi matrix with $N$-periodically modulated entries, and let $X_n$ and $\mathcal{X}_0$ be defined as in Theorem~\ref{thm:A}. Suppose that $\mathcal{X}_0(0) = \sigma \operatorname{Id}$ for some $\sigma \in \{-1, 1\}$, and that Carleman's condition is \emph{not} satisfied. Assume that there are $i \in \{0, 1, \ldots, N-1\}$ and a sequence of positive numbers $(\gamma_n : n \in \mathbb{N}_0)$ satisfying \[ \sum_{n=0}^\infty \frac{1}{\gamma_n} = \infty, \] such that $R_{nN+i}(0) = \gamma_n(X_{nN+i}(0) - \sigma \operatorname{Id})$ converges to a non-zero matrix $\mathcal{R}_i$. Suppose that \[ \big( R_{nN+i}(0) : n \in \mathbb{N} \big) \in \mathcal{D}_1 \big( \operatorname{Mat}(2, \mathbb{R}) \big). \] Then \begin{enumerate}[(i), leftmargin=2em] \item if $\operatorname{discr} \mathcal{R}_i < 0$, then $A$ is \emph{not} self-adjoint; \item if $\operatorname{discr} \mathcal{R}_i > 0$, then $\sigma_{\mathrm{ess}}(A) = \emptyset$ provided $A$ is self-adjoint. \end{enumerate} \end{main_theorem} In fact, in Theorem \ref{thm:8a} we characterize when $A$ is self-adjoint.
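The sign of $\operatorname{discr} \mathcal{X}_0(0)$ governs the trichotomy behind the theorems above. Here is a small numerical illustration (hypothetical data, not from the paper) for the simplest situation $N = 1$: when $a_n \to \infty$ with $a_{n-1}/a_n \to 1$ and $b_n/a_n \to \beta/\alpha$, the one-step transfer matrices $B_n(x)$ converge to an $x$-independent limit whose discriminant is $(\beta/\alpha)^2 - 4$.

```python
# Sketch (illustrative, N = 1): for a_n -> infinity with a_{n-1}/a_n -> 1
# and b_n/a_n -> beta/alpha, the transfer matrices B_n(x) converge to
#     X_0 = [[0, 1], [-1, -beta/alpha]]
# for every x, and discr X_0 = (tr X_0)^2 - 4 det X_0 = (beta/alpha)^2 - 4.
def limit_transfer(beta_over_alpha):
    return [[0.0, 1.0], [-1.0, -beta_over_alpha]]

def discr(X):
    tr = X[0][0] + X[1][1]
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return tr * tr - 4.0 * det

print(discr(limit_transfer(0.0)))   # beta = 0: discriminant -4 < 0
print(discr(limit_transfer(3.0)))   # beta/alpha = 3: discriminant 5 > 0
```

So $|\beta| > 2\alpha$ produces the positive-discriminant regime of Theorem~\ref{thm:A}, while $|\beta| < 2\alpha$ produces the negative-discriminant regime where the spectrum is purely absolutely continuous under Carleman's condition.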
To illustrate Theorem \ref{thm:D}, in Section \ref{sec:KM} we consider the $N$-periodically modulated Kostyuchenko--Mirzoev class. In this context we can precisely describe when the operator $A$ is self-adjoint. In our analysis the basic objects are \emph{generalized eigenvectors} of $A$. Let us recall that $(u_n : n \in \mathbb{N}_0)$ is a generalized eigenvector associated with $x \in \mathbb{C}$, if for all $n \geq 1$ \[ \begin{pmatrix} u_n \\ u_{n+1} \end{pmatrix} = B_n(x) \begin{pmatrix} u_{n-1} \\ u_{n} \end{pmatrix} \] for a certain $(u_0, u_1) \neq (0,0)$. The spectral properties of $A$ are intimately related to the asymptotic behavior of generalized eigenvectors. For example, $A$ is self-adjoint if and only if there is a generalized eigenvector associated with some $x_0 \in \mathbb{R}$ that is \emph{not} square-summable. In another vein, the theory of subordinacy (see \cite{Khan1992}) describes spectral properties of a self-adjoint $A$ in terms of the asymptotic behavior of generalized eigenvectors. In particular, it has been shown in \cite{Silva2007} that the subordinacy theory together with some general properties of self-adjoint operators imply the following: if $K \subset \mathbb{R}$ is a compact interval such that for each $x \in K$ there is a generalized eigenvector $(u_n(x) : n \in \mathbb{N}_0)$ associated with $x \in K$, so that \begin{equation} \label{eq:132} \sum_{n=0}^\infty \sup_{x \in K} |u_n(x)|^2 < \infty, \end{equation} then $\sigma_{\mathrm{ess}}(A) \cap K = \emptyset$. In \cite{Silva2007}, for some class of Jacobi matrices the condition~\eqref{eq:132} has been checked with the help of uniform discrete Levinson's type theorems. In this article we take a similar approach. In particular, in Theorems~\ref{thm:2} and \ref{thm:3}, we prove our uniform Levinson's theorems. They improve the existing results known in the literature.
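The transfer-matrix form of the eigenvalue equation is equivalent to the three-term recurrence $a_{n-1} u_{n-1} + b_n u_n + a_n u_{n+1} = x u_n$. A short numerical sketch (the Jacobi parameters below are hypothetical, not taken from the paper) confirms this equivalence.

```python
# Iterate (u_n, u_{n+1})^T = B_n(x) (u_{n-1}, u_n)^T with
# B_n(x) = [[0, 1], [-a_{n-1}/a_n, (x - b_n)/a_n]] and check that the
# resulting sequence satisfies the three-term recurrence
#     a_{n-1} u_{n-1} + b_n u_n + a_n u_{n+1} = x u_n.
# The Jacobi parameters are illustrative, not from the paper.
def a(n): return n + 1.0
def b(n): return 0.25 * (n + 1.0)

def generalized_eigenvector(x, u0, u1, nmax):
    u = [u0, u1]
    for n in range(1, nmax):
        u.append(-(a(n - 1) / a(n)) * u[n - 1] + ((x - b(n)) / a(n)) * u[n])
    return u

x = 1.5
u = generalized_eigenvector(x, 1.0, (x - b(0)) / a(0), 50)
residuals = [abs(a(n - 1) * u[n - 1] + b(n) * u[n] + a(n) * u[n + 1] - x * u[n])
             for n in range(1, 49)]
print(max(residuals))  # zero up to floating-point rounding
```

With the initial data $u_0 = 1$, $u_1 = (x - b_0)/a_0$, the iteration reproduces the orthonormal polynomials $p_n(x)$ defined in Section 2.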
More precisely, in the case of negative discriminant, Theorem~\ref{thm:2} with $r \geq 2$ improves the pointwise theorem \cite[Theorem 3.1]{Moszynski2006}. The case of positive discriminant for $r > 2$ has not been studied before, even pointwise. Concerning the uniformity, Theorem~\ref{thm:2} improves \cite{Silva2004}, where for $r=1$ it was assumed that the limiting matrix is constant. Our analysis shows that this condition can be dropped (see the comment after the proof of Theorem~\ref{thm:3}). We prove uniformity by constructing an explicit diagonalization of the relevant matrices. The case of positive discriminant presents more technical challenges than the negative one. If Carleman's condition is not satisfied, our Levinson's type theorems allow us to study the asymptotic behavior of generalized eigenvectors on the whole complex plane for a large class of sequences $a$ and $b$. In particular, our results cover the asymptotics recently obtained by Yafaev in \cite{Yafaev2019}, see Corollary~\ref{cor:3} for details. Let us emphasize that our approach is different from the one used in \cite{Yafaev2019}. The organization of the paper is as follows. In Section~\ref{sec:prelim} we collect basic properties and definitions. In particular, we prove auxiliary results concerning periodically modulated and blended Jacobi matrices. In Section~\ref{sec:stolz} we describe Stolz classes, and prove results needed in Section \ref{sec:levinson}, where we establish our Levinson's type theorems, which might be of independent interest. In Section~\ref{sec:essential} we apply them to deduce Theorems~\ref{thm:A}, \ref{thm:B} and \ref{thm:C}. Finally, in Section~\ref{sec:notCarleman} we prove Theorem~\ref{thm:D}, and study the Kostyuchenko--Mirzoev class of Jacobi matrices in detail. \subsection*{Notation} By $\mathbb{N}$ we denote the set of positive integers and $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$.
Throughout the whole article, we write $A \lesssim B$ if there is an absolute constant $c>0$ such that $A \le cB$. We write $A \asymp B$ if $A \lesssim B$ and $B \lesssim A$. Moreover, $c$ stands for a positive constant whose value may vary from occurrence to occurrence. \subsection*{Acknowledgment} The first author was partially supported by the Foundation for Polish Science (FNP) and by long term structural funding -- Methusalem grant of the Flemish Government. \section{Preliminaries} \label{sec:prelim} Given two sequences $a = (a_n : n \in \mathbb{N}_0)$ and $b = (b_n : n \in \mathbb{N}_0)$ of positive and real numbers, respectively, we define the $k$th associated \emph{orthonormal} polynomials as \[ \begin{gathered} p^{[k]}_0(x) = 1, \qquad p^{[k]}_1(x) = \frac{x - b_k}{a_k}, \\ a_{n+k-1} p^{[k]}_{n-1}(x) + b_{n+k} p^{[k]}_n(x) + a_{n+k} p^{[k]}_{n+1}(x) = x p^{[k]}_n(x), \qquad n \geq 1. \end{gathered} \] We usually omit the superscript if $k = 0$. Suppose that the Jacobi matrix $A$ corresponding to the sequences $a$ and $b$ is self-adjoint. Let us denote by $E_A$ its spectral resolution of the identity. Then for any Borel subset $B \subset \mathbb{R}$, we set \[ \mu(B) = \langle E_A(B) \delta_0, \delta_0 \rangle_{\ell^2(\mathbb{N}_0)} \] where $\delta_0$ is the sequence having $1$ on the $0$th position and $0$ elsewhere. The polynomials $(p_n : n \in \mathbb{N}_0)$ form an orthonormal basis of $L^2(\mathbb{R}, \mu)$. In this article, we are interested in Jacobi matrices associated with two classes of sequences that are defined in terms of periodic Jacobi parameters. The latter are described as follows. Let $(\alpha_n : n \in \mathbb{Z})$ and $(\beta_n : n \in \mathbb{Z})$ be two $N$-periodic sequences of positive and real numbers, respectively.
Let $(\mathfrak{p}_n : n \in \mathbb{N}_0)$ be the corresponding polynomials, that is \[ \begin{gathered} \mathfrak{p}_0(x) = 1, \qquad \mathfrak{p}_1(x) = \frac{x-\beta_0}{\alpha_0}, \\ \alpha_{n-1} \mathfrak{p}_{n-1}(x) + \beta_n \mathfrak{p}_n(x) + \alpha_{n} \mathfrak{p}_{n+1}(x) = x \mathfrak{p}_n(x), \qquad n \geq 1. \end{gathered} \] Let \[ \mathfrak{B}_n(x) = \begin{pmatrix} 0 & 1 \\ -\frac{\alpha_{n-1}}{\alpha_n} & \frac{x - \beta_n}{\alpha_n} \end{pmatrix}, \qquad\text{and}\qquad \mathfrak{X}_n(x) = \prod_{j = n}^{N+n-1} \mathfrak{B}_j(x), \qquad n \in \mathbb{Z} \] where for a sequence of square matrices $(C_n : n_0 \leq n \leq n_1)$ we have set \[ \prod_{k = n_0}^{n_1} C_k = C_{n_1} C_{n_1-1} \cdots C_{n_0}. \] \subsection{Periodic modulation} \begin{definition} \label{def:2} We say that the Jacobi matrix $A$ associated with $(a_n : n \in \mathbb{N}_0)$ and $(b_n : n \in \mathbb{N}_0)$ has \emph{$N$-periodically modulated entries,} if there are two $N$-periodic sequences $(\alpha_n : n \in \mathbb{Z})$ and $(\beta_n : n \in \mathbb{Z})$ of positive and real numbers, respectively, such that \begin{enumerate}[(a), leftmargin=2em] \item $\begin{aligned}[b] \lim_{n \to \infty} a_n = \infty \end{aligned},$ \item $\begin{aligned}[b] \lim_{n \to \infty} \bigg| \frac{a_{n-1}}{a_n} - \frac{\alpha_{n-1}}{\alpha_n} \bigg| = 0 \end{aligned},$ \item $\begin{aligned}[b] \lim_{n \to \infty} \bigg| \frac{b_n}{a_n} - \frac{\beta_n}{\alpha_n} \bigg| = 0 \end{aligned}.$ \end{enumerate} \end{definition} For a Jacobi matrix $A$ with $N$-periodically modulated entries, we set \[ X_n = \prod_{j = n}^{N+n-1} B_j. \] Then for each $i \in \{0, 1, \ldots, N-1\}$ the sequence $(X_{jN+i} : j \in \mathbb{N}_0)$ has a limit $\mathcal{X}_i$. In view of \cite[Proposition 3.8]{ChristoffelI}, we have $\mathcal{X}_i(x) = \mathfrak{X}_i(0)$ for all $x \in \mathbb{C}$. \begin{proposition} \label{prop:10} Let $N$ be a positive integer and $\sigma \in \{-1, 1\}$.
Let $A$ be a Jacobi matrix with $N$-periodically modulated entries so that $\mathfrak{X}_0(0) = \sigma \operatorname{Id}$. Suppose that there are two $N$-periodic sequences $(s_n : n \in \mathbb{Z})$ and $(z_n : n \in \mathbb{Z})$ such that \[ \lim_{n \to \infty} \bigg|\frac{\alpha_{n-1}}{\alpha_n} a_n - a_{n-1} - s_n\bigg| = 0, \qquad \lim_{n \to \infty} \bigg|\frac{\beta_n}{\alpha_n} a_n - b_n - z_n\bigg| = 0. \] Then for each $i \in \{0, 1, \ldots, N-1 \}$ the sequence $\big(a_{(k+1)N+i-1} (X_{kN+i} - \sigma \operatorname{Id}) : k \in \mathbb{N} \big)$ converges locally uniformly on $\mathbb{C}$ to $\mathcal{R}_i$, and \[ \operatorname{tr} \mathcal{R}_i = -\sigma \lim_{k \to \infty} \big( a_{(k+1)N+i-1} - a_{kN+i-1} \big). \] \end{proposition} \begin{proof} According to \cite[Proposition 9]{PeriodicIII}, we have \begin{equation} \label{eq:102a} \mathcal{R}_i(x) = \alpha_{i-1} \mathcal{C}_i(x)+ \alpha_{i-1} \mathcal{D}_i \end{equation} where \[ \mathcal{C}_i(x) = x \begin{pmatrix} -\frac{\alpha_{i-1}}{\alpha_i} \Big(\mathfrak{p}^{[i+1]}_{N-2}\Big)'(0) & \Big(\mathfrak{p}^{[i]}_{N-1}\Big)'(0) \\ -\frac{\alpha_{i-1}}{\alpha_i} \Big(\mathfrak{p}^{[i+1]}_{N-1}\Big)'(0) & \Big(\mathfrak{p}^{[i]}_{N}\Big)'(0) \end{pmatrix} \] and \begin{equation} \label{eq:100} \mathcal{D}_i = \sum_{j=0}^{N-1} \frac{1}{\alpha_{i+j}} \left\{ \prod_{m=j+1}^{N-1} \mathfrak{B}_{i+m} (0) \right\} \begin{pmatrix} 0 & 0 \\ s_{i+j} & z_{i+j} \end{pmatrix} \left\{ \prod_{m=0}^{j-1} \mathfrak{B}_{i+m} (0) \right\}. \end{equation} In view of \cite[Proposition 6]{PeriodicIII}, \begin{equation} \label{eq:102} \operatorname{tr} \mathcal{C}_i \equiv 0. \end{equation} Since the trace is linear and invariant under cyclic permutations, by \eqref{eq:100} we get \begin{equation} \label{eq:101} \operatorname{tr} \mathcal{D}_i = \sum_{j=0}^{N-1} \frac{1}{\alpha_{i+j}} \operatorname{tr} \left\{ \begin{pmatrix} 0 & 0 \\ s_{i+j} & z_{i+j} \end{pmatrix} \prod_{m=j+1}^{N+ j-1} \mathfrak{B}_{i+m} (0) \right\}.
\end{equation} Using \cite[Proposition 3]{PeriodicIII} \[ \prod_{m=j+1}^{N+j-1} \mathfrak{B}_{i+m} (0) = \begin{pmatrix} -\frac{\alpha_{i+j}}{\alpha_{i+j+1}} \mathfrak{p}^{[i+j+2]}_{N-3}(0) & \mathfrak{p}^{[i+j+1]}_{N-2}(0) \\ -\frac{\alpha_{i+j}}{\alpha_{i+j+1}} \mathfrak{p}^{[i+j+2]}_{N-2}(0) & \mathfrak{p}^{[i+j+1]}_{N-1}(0) \end{pmatrix}, \] thus \begin{align} \nonumber \operatorname{tr} \left\{ \begin{pmatrix} 0 & 0 \\ s_{i+j} & z_{i+j} \end{pmatrix} \prod_{m=j+1}^{N+ j-1} \mathfrak{B}_{i+m} (0) \right\} &= s_{i+j} \mathfrak{p}^{[i+j+1]}_{N-2}(0) + z_{i+j} \mathfrak{p}^{[i+j+1]}_{N-1}(0) \\ \label{eq:18} &= -\sigma \frac{\alpha_{i+j}}{\alpha_{i+j-1}} s_{i+j}, \end{align} where the last equality follows by \cite[formula (13)]{PeriodicIII}. Inserting \eqref{eq:18} into \eqref{eq:101} results in \[ \operatorname{tr} \mathcal{D}_i = -\sigma \sum_{j=0}^{N-1} \frac{s_{i+j}}{\alpha_{i+j-1}}. \] Hence, by \eqref{eq:102a} and \eqref{eq:102}, we get \[ \operatorname{tr} \mathcal{R}_i = \alpha_{i-1} \operatorname{tr} \mathcal{D}_i = -\sigma \alpha_{i-1} \sum_{j=0}^{N-1} \frac{s_{i+j}}{\alpha_{i+j-1}}. \] Finally, by \cite[Proposition 3]{christoffelII}, we obtain \[ \operatorname{tr} \mathcal{R}_i = -\sigma \lim_{k \to \infty} \big( a_{(k+1)N+i-1} - a_{kN+i-1} \big), \] which completes the proof. \end{proof} \subsection{Periodic blend} \begin{definition} The Jacobi matrix $A$ associated to $(a_n : n \in \mathbb{N}_0)$ and $(b_n : n \in \mathbb{N}_0)$ has \emph{asymptotically $N$-periodic entries} if there are two $N$-periodic sequences $(\alpha_n : n \in \mathbb{Z})$ and $(\beta_n : n \in \mathbb{Z})$ of positive and real numbers, respectively, such that \begin{enumerate}[(a), leftmargin=2em] \item $\begin{aligned}[b] \lim_{n \to \infty} \big|a_n - \alpha_n\big| = 0 \end{aligned}$, \item $\begin{aligned}[b] \lim_{n \to \infty} \big|b_n - \beta_n\big| = 0 \end{aligned}$. 
\end{enumerate} \end{definition} \begin{definition} \label{def:3} The Jacobi matrix $A$ associated with sequences $(a_n : n \in \mathbb{N}_0)$ and $(b_n : n \in \mathbb{N}_0)$ has \emph{$N$-periodically blended} entries if there are an asymptotically $N$-periodic Jacobi matrix $\tilde{A}$ associated with sequences $(\tilde{a}_n : n \in \mathbb{N}_0)$ and $(\tilde{b}_n : n \in \mathbb{N}_0)$, and a sequence of positive numbers $(\tilde{c}_n : n \in \mathbb{N}_0)$, such that \begin{enumerate}[(a), leftmargin=2em] \item $\begin{aligned}[b] \lim_{n \to \infty} \tilde{c}_n = \infty, \qquad\text{and}\qquad \lim_{m \to \infty} \frac{\tilde{c}_{2m+1}}{\tilde{c}_{2m}} = 1 \end{aligned}$, \item $\begin{aligned}[b] a_{k(N+2)+i} = \begin{cases} \tilde{a}_{kN+i} & \text{if } i \in \{0, 1, \ldots, N-1\}, \\ \tilde{c}_{2k} & \text{if } i = N, \\ \tilde{c}_{2k+1} & \text{if } i = N+1, \end{cases} \end{aligned}$ \item $\begin{aligned}[b] b_{k(N+2)+i} = \begin{cases} \tilde{b}_{kN+i} & \text{if } i \in \{0, 1, \ldots, N-1\}, \\ 0 & \text{if } i \in \{N, N+1\}. \end{cases} \end{aligned}$ \end{enumerate} \end{definition} If $A$ is a Jacobi matrix having $N$-periodically blended entries, we set \[ X_n(x) = \prod_{j = n}^{N+n+1} B_j(x). \] By \cite[Proposition 3.12]{ChristoffelI}, for each $i \in \{1, 2, \ldots, N-1\}$, \[ \lim_{j \to \infty} B_{j(N+2)+i}(x) = \mathfrak{B}_i(x), \] locally uniformly with respect to $x \in \mathbb{C}$, thus the sequence $(X_{j(N+2)+i}:j \in \mathbb{N})$ converges to $\mathcal{X}_i$ locally uniformly on $\mathbb{C}$ where \begin{equation} \label{eq:32} \mathcal{X}_i(x) = \bigg( \prod_{j = 1}^{i-1} \mathfrak{B}_j(x) \bigg) \mathcal{C}(x) \bigg(\prod_{j = i}^{N-1} \mathfrak{B}_j(x) \bigg), \end{equation} and \[ \mathcal{C}(x) = \begin{pmatrix} 0 & -1 \\ \frac{\alpha_{N-1}}{\alpha_0} & -\frac{2x-\beta_0}{\alpha_0} \end{pmatrix}. \] Moreover, we have the following proposition.
\begin{proposition} \label{prop:1} \begin{align} \label{eq:21a} \lim_{j \to \infty} B^{-1}_{j(N+2)}(x) &= \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \\ \label{eq:21b} \lim_{j \to \infty} B_{j(N+2)+N}(x) &= \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \\ \label{eq:21c} \lim_{j \to \infty} B_{j(N+2)+N+1}(x) &= \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \end{align} locally uniformly with respect to $x \in \mathbb{C}$. \end{proposition} \begin{proof} The proposition easily follows from Definition \ref{def:3}. Indeed, we have \[ B_{j(N+2)}^{-1}(x) = \begin{pmatrix} \frac{x-\tilde{b}_{jN}}{\tilde{c}_{2j-1}} & -\frac{\tilde{a}_{jN}}{\tilde{c}_{2j-1}} \\ 1 & 0 \end{pmatrix}, \] and \[ B_{j(N+2)+N}(x) = \begin{pmatrix} 0 & 1 \\ -\frac{\tilde{a}_{jN+N-1}}{\tilde{c}_{2j}} & \frac{x}{\tilde{c}_{2j}} \end{pmatrix}, \qquad B_{j(N+2)+N+1}(x) = \begin{pmatrix} 0 & 1 \\ -\frac{\tilde{c}_{2j}}{\tilde{c}_{2j+1}} & \frac{x}{\tilde{c}_{2j+1}} \end{pmatrix}. \] Thus, using Definition \ref{def:3}(a) and the boundedness of the sequence $(\tilde{a}_n : n \in \mathbb{N}_0)$, we can compute the limits. \end{proof} \section{Stolz class} \label{sec:stolz} In this section we define a proper class of slowly oscillating sequences which is motivated by \cite{Stolz1994}, see also \cite[Section 2]{SwiderskiTrojan2019}. Let $X$ be a normed space. We say that a bounded sequence $(x_n)$ belongs to $\mathcal{D}_{r, s}(X)$ for certain $r \in \mathbb{N}$ and $s \in \{0, 1, \ldots, r-1\}$, if for each $j \in \{1, \ldots, r-s\}$, \[ \sum_{n=1}^\infty \big\|\Delta^j x_n \big\|^{\frac{r}{j+s}} < \infty. \] Moreover, $(x_n) \in \mathcal{D}_{r, s}^0(X)$, if $(x_n) \in \mathcal{D}_{r,s}(X)$ and \[ \sum_{n=1}^\infty \|x_n\|^{\frac{r}{s}} < \infty. \] Let us observe that \[ \mathcal{D}_{r, s}(X) \subset \mathcal{D}_{r, 0}(X), \qquad\text{and}\qquad \mathcal{D}_{r, r-1}(X) = \mathcal{D}_{1, 0}(X).
\] To simplify the notation, if $X$ is the real line with Euclidean norm we shortly write $\mathcal{D}_{r, s} = \mathcal{D}_{r,s}(\mathbb{R})$. Given a compact set $K \subset \mathbb{C}$ and a normed vector space $R$, by $\mathcal{D}_{r, s}(K, R)$ we denote the case when $X$ is the space of all continuous mappings from $K$ to $R$ equipped with the supremum norm. Moreover, given a positive integer $N$, we say that $(x_n) \in \mathcal{D}^N_{r, s}(X)$ if for each $i \in \{0, 1, \ldots, N-1 \}$, \[ \big( x_{nN+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{r,s}(X). \] \begin{lemma} \label{lem:3} Let $d$ and $M$ be positive integers, and let $K_0 \subset \mathbb{C}$ be a set with at least $M+1$ points. Suppose that $(x_n : n \in \mathbb{N})$ is a sequence of elements from $\operatorname{Mat}(d, \mathbb{P}_M)$ where $\mathbb{P}_M$ denotes the linear space of complex polynomials of degree at most $M$. If there are $r \geq 1$ and $s \in \{0, 1, \ldots, r-1\}$ so that for all $z \in K_0$, \[ \big( x_n(z) : n \in \mathbb{N} \big) \in \mathcal{D}_{r,s} \big(\operatorname{Mat}(d, \mathbb{C}) \big), \] then for every compact set $K \subset \mathbb{C}$, \[ \big( x_n : n \in \mathbb{N} \big) \in \mathcal{D}_{r,s} \big(K, \operatorname{Mat}(d, \mathbb{C}) \big). \] \end{lemma} \begin{proof} Let $\{ z_0, z_1, \ldots, z_M \}$ be a subset of $K_0$ consisting of distinct points. By the Lagrange interpolation formula, we can write \[ x_n(z) = \sum_{j=0}^M \ell_j(z) x_n(z_j) \] where \[ \ell_j(z) = \prod_{\stackrel{m = 0}{m \neq j}}^M \frac{z - z_m}{z_j - z_m}. \] Let $K$ be a compact subset of $\mathbb{C}$. Then there is a constant $c>0$ such that for any $j \in \{0, 1, \ldots, M \}$, \[ \sup_{z \in K} |\ell_j(z)| \leq c. \] Since, for each $k \geq 0$, \[ \Delta^k x_n(z) = \sum_{j=0}^M \ell_j(z) \Delta^k x_n(z_j), \] we obtain \[ \sup_{z \in K} \big\| \Delta^k x_n(z) \big\| \leq c \sum_{j=0}^M \big\| \Delta^k x_n(z_j) \big\|, \] and the conclusion follows. 
\end{proof} The following lemma is well-known and its proof is straightforward. \begin{lemma} \label{lem:1} For any two sequences $(x_n)$ and $(y_n)$, and $j \in \mathbb{N}$, \[ \Delta^j(x_n y_n) = \sum_{k = 0}^j {j \choose k} \Delta^{j-k}x_n \cdot \Delta^k y_{n + j - k}. \] \end{lemma} \begin{corollary}[{\cite[Corollary 1]{SwiderskiTrojan2019}}] \label{cor:1} Let $r \in \mathbb{N}$ and $s \in \{0, \ldots, r-1\}$. \begin{enumerate}[(i), leftmargin=2em] \item If $(x_n) \in \mathcal{D}_{r, 0}(X)$ and $(y_n) \in \mathcal{D}_{r, s}^0(X)$ then $(x_n y_n) \in \mathcal{D}_{r, s}^0(X)$. \item If $(x_n), (y_n) \in \mathcal{D}_{r, s}(X)$, then $(x_n y_n) \in \mathcal{D}_{r, s}(X)$. \end{enumerate} \end{corollary} \begin{lemma}[{\cite[Lemma 2]{SwiderskiTrojan2019}}] \label{lem:2} Fix $r \in \mathbb{N}$, $s \in \{0, \ldots, r-1\}$ and a compact set $K \subseteq \mathbb{R}$. Let $(f_n : n \in \mathbb{N}) \in \mathcal{D}_{r, s}(K, \mathbb{R})$ be a sequence of real functions on $K$ with values in $I \subseteq \mathbb{R}$ and let $F \in \mathcal{C}^{r-s}(I, \mathbb{R})$. Then $(F \circ f_n : n \in \mathbb{N}) \in \mathcal{D}_{r, s}(K, \mathbb{R})$. \end{lemma} By Lemma \ref{lem:2}, we easily get the following corollary. \begin{corollary} \label{cor:2} Let $r \in \mathbb{N}$. If $(x_n) \in \mathcal{D}_{r, 0}(K, \mathbb{C})$, and \[ 0 < \delta \leq \abs{x_n(x)}, \] for all $n \in \mathbb{N}$ and $x \in K$, then $(x_n^{-1} : n \in \mathbb{N}) \in \mathcal{D}_{r, 0}(K, \mathbb{C})$. \end{corollary} The next theorem is the main result of this section. \begin{theorem} \label{thm:1} Fix two integers $r \geq 2$ and $s \in \{0, \ldots, r-2\}$, and a compact set $K \subset \mathbb{R}$.
Suppose that $(\lambda_n^+ : n \in \mathbb{N})$ and $(\lambda_n^- : n \in \mathbb{N})$ are two uniform Cauchy sequences from $\mathcal{D}_{r, 0}(K, \mathbb{R})$ so that for all $x \in K$ and $n \in \mathbb{N}$, \begin{equation} \label{eq:7} \begin{aligned} \lambda_n^+(x) \lambda_n^-(x) &> 0, \\ \abs{\lambda_n^+(x)} - \abs{\lambda_n^-(x)} &\geq \delta > 0. \end{aligned} \end{equation} Let $(X_n : n \in \mathbb{N}) \in \mathcal{D}_{r, s}\big(K, \operatorname{GL}(2, \mathbb{R})\big)$ be such that \begin{equation} \label{eq:8} \sup_{x \in K} \sup_{n \in \mathbb{N}} \big(\|X_n(x)\| + \|X_n^{-1}(x)\|\big) < \infty. \end{equation} Then there are sequences $(\mu_n^+ : n \in \mathbb{N}), (\mu_n^- : n \in \mathbb{N}) \in \mathcal{D}_{r, 0}(K, \mathbb{R})$ and $(Y_n : n \in \mathbb{N}) \in \mathcal{D}_{r, s+1}\big(K, \operatorname{GL}(2, \mathbb{R})\big)$ satisfying \begin{equation} \label{eq:20} \begin{pmatrix} \lambda^+_n & 0 \\ 0 & \lambda_n^- \end{pmatrix} X_n^{-1} X_{n-1} = Y_n \begin{pmatrix} \mu_n^+ & 0 \\ 0 & \mu_n^- \end{pmatrix} Y_n^{-1}, \end{equation} such that $(\mu_n^+ : n \in \mathbb{N})$ and $(\mu_n^- : n \in \mathbb{N})$ are uniform Cauchy sequences with \begin{align*} \mu_n^+(x) \mu_n^-(x) &> 0, \\ \abs{\mu_n^+(x)} - \abs{\mu_n^-(x)} &\geq \delta' > 0, \end{align*} for all $x \in K$ and $n \in \mathbb{N}$. Moreover, \begin{equation} \label{eq:10} \lim_{n \to \infty} \sup_{x \in K} \big\| Y_n(x) - \operatorname{Id} \big\| = 0. \end{equation} \end{theorem} \begin{proof} Let \[ D_n = \begin{pmatrix} \lambda_n^+ & 0 \\ 0 & \lambda_n^- \end{pmatrix}. \] We set \[ W_n = D_n X_n^{-1} X_{n-1} = D_n \big( \operatorname{Id} - X_n^{-1} \Delta X_{n-1}\big). \] By \eqref{eq:8}, we have \[ \sup_{K} \big\|W_n - D_n \big\| = \sup_{K} \big\|D_n X_n^{-1} \Delta X_{n-1} \big\| \leq c \sup_K \big\| \Delta X_{n-1} \big\|. 
\] Since $(X_n) \in \mathcal{D}_{r, s}\big(K, \operatorname{GL}(2, \mathbb{R})\big)$, \[ \lim_{n \to \infty} \sup_K \| \Delta X_n \|= 0, \] thus \[ \lim_{n \to \infty} \sup_K \big\| W_n - D_n \big\|= 0. \] In particular, for all $n$ sufficiently large, $W_n$ has positive discriminant. Let $\mu^+_n$ and $\mu_n^-$ be its eigenvalues with $\abs{\mu^+_n} > \abs{\mu^-_n}$. Then \[ \lim_{n \to \infty} \sup_K \big|\mu_n^+ - \lambda_n^+ \big| = 0, \qquad\text{and}\qquad \lim_{n \to \infty} \sup_K \big|\mu_n^- - \lambda_n^- \big| = 0, \] and hence $(\mu^+_n : n \in \mathbb{N})$ and $(\mu_n^- : n \in \mathbb{N})$ are uniform Cauchy sequences satisfying \eqref{eq:7}. Setting \[ X_n = \begin{pmatrix} x_{11}^{(n)} & x_{12}^{(n)} \\ x_{21}^{(n)} & x_{22}^{(n)} \end{pmatrix}, \qquad\text{and}\qquad W_n = \begin{pmatrix} w_{11}^{(n)} & w_{12}^{(n)} \\ w_{21}^{(n)} & w_{22}^{(n)} \end{pmatrix}, \] we obtain \[ W_n = \frac{1}{\det X_n} \begin{pmatrix} \lambda_n^+ \big(x_{11}^{(n-1)} x_{22}^{(n)} - x_{21}^{(n-1)} x_{12}^{(n)}\big) & \lambda_n^+ \big(x_{12}^{(n-1)} x_{22}^{(n)} - x_{22}^{(n-1)} x_{12}^{(n)}\big) \\ \lambda_n^- \big( x_{21}^{(n-1)} x_{11}^{(n)} - x_{11}^{(n-1)} x_{21}^{(n)}\big) & \lambda_n^- \big( x_{22}^{(n-1)} x_{11}^{(n)} - x_{12}^{(n-1)} x_{21}^{(n)}\big) \end{pmatrix}. \] By \eqref{eq:8} and Corollary \ref{cor:2}, we have \[ \bigg(\frac{1}{\det X_n} \bigg) \in \mathcal{D}_{r, 0}, \] hence by Corollary \ref{cor:1}(ii), we get that \[ \big(w_{11}^{(n)} : n \in \mathbb{N} \big), \big(w_{22}^{(n)} : n \in \mathbb{N} \big)\in \mathcal{D}_{r, 0}.
\] Moreover, \begin{align*} w_{12}^{(n)} &= \frac{\lambda_n^+}{\det X_n} \big(x_{12}^{(n-1)} x_{22}^{(n)} - x_{22}^{(n-1)} x_{12}^{(n)}\big) \\ &= \frac{\lambda_n^+}{\det X_n} \Big(\big(x_{22}^{(n)} - x_{22}^{(n-1)}\big) x_{12}^{(n)} - \big(x_{12}^{(n)}-x_{12}^{(n-1)}\big) x_{22}^{(n)}\Big), \end{align*} and \begin{align*} w_{21}^{(n)} &= \frac{\lambda_n^-}{\det X_n} \Big(\big(x_{22}^{(n)} - x_{22}^{(n-1)}\big) x_{21}^{(n)} - \big(x_{21}^{(n)}-x_{21}^{(n-1)}\big) x_{22}^{(n)}\Big), \end{align*} thus, by Corollary \ref{cor:1}(i), \[ \big(w_{12}^{(n)} : n \in \mathbb{N} \big), \big(w_{21}^{(n)} : n \in \mathbb{N} \big)\in \mathcal{D}_{r, s+1}^0. \] Next, we compute the eigenvalues. We obtain \[ \mu_n^+ = \frac{w_{11}^{(n)}+w_{22}^{(n)}}{2} + \frac{\sigma_n}{2} \sqrt{\operatorname{discr}{W_n}},\qquad\text{and}\qquad \mu_n^- = \frac{w_{11}^{(n)}+w_{22}^{(n)}}{2} - \frac{\sigma_n}{2} \sqrt{\operatorname{discr}{W_n}} \] where $\sigma_n = \sign{w_{11}^{(n)}}$, and \[ \operatorname{discr}{W_n} = \big(w_{22}^{(n)} - w_{11}^{(n)}\big)^2 + 4 w_{12}^{(n)} w_{21}^{(n)}. \] Since for all $n$ sufficiently large \begin{equation} \label{eq:11} \big| w_{11}^{(n)}-w_{22}^{(n)} \big| \geq \big|\lambda_n^+ - \lambda_n^-\big| - \big|w_{11}^{(n)}-\lambda_n^+ \big| - \big|w_{22}^{(n)}-\lambda_n^- \big| \geq \frac{\delta}{2}, \end{equation} by Lemma \ref{lem:2}, we have $(\mu_n^+),(\mu_n^-) \in \mathcal{D}_{r, 0}(K, \mathbb{R})$. It remains to compute the matrix $Y_n$. Suppose that the equations \begin{equation} \label{eq:30} W_n \begin{pmatrix} 1 \\ v_n^+ \end{pmatrix} = \mu_n^+ \begin{pmatrix} 1 \\ v_n^+ \end{pmatrix} \qquad\text{and}\qquad W_n \begin{pmatrix} v_n^- \\ 1 \end{pmatrix} = \mu_n^- \begin{pmatrix} v_n^- \\ 1 \end{pmatrix} \end{equation} both have solutions, then the matrix \[ Y_n = \begin{pmatrix} 1 & v_n^- \\ v_n^+ & 1 \end{pmatrix} \] satisfies \eqref{eq:20}. 
Observe that equations \eqref{eq:30} are equivalent to \begin{align} \label{eq:34} \left\{ \begin{aligned} w_{11}^{(n)} + v_n^+ w_{12}^{(n)} &= \mu_n^+, \\ w_{21}^{(n)} + v_n^+ w_{22}^{(n)} &= \mu_n^+ v_n^+, \end{aligned} \right. \qquad\text{and}\qquad \left\{ \begin{aligned} w_{11}^{(n)} v_n^- + w_{12}^{(n)} &= \mu_n^- v_n^-, \\ w_{21}^{(n)} v_n^- + w_{22}^{(n)} &= \mu_n^-. \end{aligned} \right. \end{align} If $\sigma_n = 1$ then by \eqref{eq:11}, \[ w_{22}^{(n)} - w_{11}^{(n)} - \sqrt{\operatorname{discr}{W_n}} \leq -\frac{\delta}{2}, \] otherwise \[ w_{22}^{(n)} - w_{11}^{(n)} + \sqrt{\operatorname{discr}{W_n}} \geq \frac{\delta}{2}. \] Thus \[ \big| w_{22}^{(n)} - w_{11}^{(n)} - \sigma_n \sqrt{\operatorname{discr}{W_n}} \big| \geq \frac{\delta}{2}, \] and \begin{align*} v_n^+ = \frac{-2 w_{21}^{(n)}}{w_{22}^{(n)} - w_{11}^{(n)} - \sigma_n \sqrt{\operatorname{discr} W_n}}, \quad\text{and}\quad v_n^- = \frac{2 w_{12}^{(n)}}{w_{22}^{(n)} - w_{11}^{(n)} - \sigma_n \sqrt{\operatorname{discr}{W_n}}}, \end{align*} satisfy the systems \eqref{eq:34}. In view of \eqref{eq:11}, Corollary \ref{cor:2} and Corollary \ref{cor:1}(i), we conclude that $(v_n^+), (v_n^-) \in \mathcal{D}_{r, s+1}^0(K, \mathbb{R})$. Finally, Lemma \ref{lem:2} implies that $(Y_n)$ belongs to $\mathcal{D}_{r, s+1}\big(K, \operatorname{GL}(2, \mathbb{R})\big)$. Because \[ \lim_{n \to \infty} \sup_K{\abs{v_n^+}} = \lim_{n \to \infty} \sup_K{\abs{v_n^-}} = 0, \] we easily obtain \eqref{eq:10}. \end{proof} \begin{corollary} The sequences $(\mu^-_n)$ and $(\mu^+_n)$ converge to the same limit as $(\lambda^-_n)$ and $(\lambda^+_n)$, respectively. \end{corollary} \section{Levinson's type theorems} \label{sec:levinson} In this section we develop discrete variants of the Levinson's theorem. There are two cases we need to distinguish according to whether the limiting matrix has two different eigenvalues or not. 
\subsection{Different eigenvalues} \begin{theorem} \label{thm:2} Let $(X_n : n \in \mathbb{N})$ be a sequence of continuous mappings defined on $\mathbb{R}$ with values in $\operatorname{GL}(2, \mathbb{R})$ that converges uniformly on a compact set $K$ to the mapping $\mathcal{X}$ with $\operatorname{discr} \mathcal{X}(x) \neq 0$ and $\det \mathcal{X}(x) > 0$ for each $x \in K$. If $\operatorname{discr} \mathcal{X} > 0$, we additionally assume that for all $x \in K$, \begin{equation} \label{eq:35} \big|[\mathcal{X}(x)]_{1, 1} - \lambda_1(x)\big| > 0 \quad\text{and}\quad \big|[\mathcal{X}(x)]_{2, 2} - \lambda_2(x)\big| > 0 \end{equation} where $\lambda_1$ and $\lambda_2$ are continuous functions on $K$ so that $\lambda_1(x)$ and $\lambda_2(x)$ are eigenvalues of $\mathcal{X}(x)$. Let $(E_n : n \in \mathbb{N})$ be a sequence of continuous mappings defined on $\mathbb{R}$ with values in $\operatorname{Mat}(2, \mathbb{C})$ such that \begin{equation} \label{eq:40} \sum_{n = 1}^\infty \sup_K{\| E_n\|} < \infty. \end{equation} If $(X_n : n \in \mathbb{N})$ belongs to $\mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{R}) \big)$ for a certain $r \geq 1$ and $\eta$ is a continuous eigenvalue of $\mathcal{X}$, then there are continuous mappings $\Phi_n: K \rightarrow \mathbb{C}^2$, $\mu_n: K \rightarrow \mathbb{C}$, and $v : K \rightarrow \mathbb{C}^2$, satisfying \[ \Phi_{n+1} = (X_n + E_n)\Phi_n \] and \[ \lim_{n \to \infty} \sup_{x \in K}{|\mu_n(x) - \eta(x)|} = 0, \] such that \begin{equation} \label{eq:14} \lim_{n \to \infty} \sup_{x \in K}{ \bigg\| \frac{\Phi_n(x)}{\prod_{j=1}^{n-1} \mu_j(x)} - v(x) \bigg\|} = 0 \end{equation} where $v(x)$ is an eigenvector of $\mathcal{X}(x)$ corresponding to $\eta(x)$ for each $x \in K$. \end{theorem} \begin{proof} Suppose that $\operatorname{discr} \mathcal{X}(x) > 0$ and $\det \mathcal{X}(x) > 0$ for all $x \in K$. In particular, since $\big(\operatorname{tr} \mathcal{X}(x)\big)^2 = \operatorname{discr} \mathcal{X}(x) + 4 \det \mathcal{X}(x) > 0$, we have $\operatorname{tr} \mathcal{X}(x) \neq 0$ for all $x \in K$.
Let $\lambda^+$ and $\lambda^-$ denote the eigenvalues of $\mathcal{X}$ such that $|\lambda^+| > |\lambda^-|$, namely we set \[ \lambda^+(x) = \frac{\operatorname{tr} \mathcal{X}(x) + \sigma \sqrt{\operatorname{discr} \mathcal{X}(x)}}{2}, \qquad\text{and}\qquad \lambda^-(x) = \frac{\operatorname{tr} \mathcal{X}(x) - \sigma \sqrt{\operatorname{discr} \mathcal{X}(x)}}{2} \] where $\sigma = \sign{\operatorname{tr} \mathcal{X}}$. Without loss of generality we can assume that \eqref{eq:35} is satisfied with $\lambda_1 = \lambda^+$ and $\lambda_2 = \lambda^-$, since otherwise we consider mappings conjugated by \[ J = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}. \] Select $\delta > 0$ such that for all $x \in K$, \[ \big|[\mathcal{X}(x)]_{1, 1} - \lambda^+(x)\big| \geq 2 \delta \quad\text{and}\quad \big|[\mathcal{X}(x)]_{2, 2} - \lambda^-(x)\big| \geq 2 \delta, \] and \[ \operatorname{discr} \mathcal{X}(x) \geq 2 \delta^2, \qquad \det \mathcal{X}(x) \geq 2 \delta^2. \] Since $(\operatorname{discr} X_n : n \in \mathbb{N})$ converges uniformly on $K$, there is $M \geq 1$ such that for all $n \geq M$ and $x \in K$, \begin{equation} \label{eq:43} \operatorname{discr} X_n(x) \geq \delta^2, \qquad \det X_n(x) \geq \delta^2. \end{equation} Hence, the matrix $X_n(x)$ has two eigenvalues \[ \lambda_n^+(x) = \frac{\operatorname{tr} X_n(x) + \sigma \sqrt{\operatorname{discr} X_n(x)}}{2}, \qquad\text{and}\qquad \lambda_n^-(x) = \frac{\operatorname{tr} X_n(x) - \sigma \sqrt{\operatorname{discr} X_n(x)}}{2}. \] By increasing $M$, we can also assume that for all $n \geq M$ and $x \in K$, \begin{equation} \label{eq:41} \big|[X_n(x)]_{1, 1} - \lambda^+_n(x)\big| \geq \delta \quad\text{and}\quad \big|[X_n(x)]_{2, 2} - \lambda^-_n(x)\big| \geq \delta.
\end{equation} Then setting \[ C_{n, 0} = \begin{pmatrix} \frac{[X_n]_{1,2}}{\lambda^+_n-[X_n]_{1,1}} & 1 \\ 1 & \frac{[X_n]_{2,1}}{\lambda^-_n - [X_n]_{2,2}} \end{pmatrix} \qquad\text{and}\qquad D_{n, 0} = \begin{pmatrix} \lambda_n^+ & 0 \\ 0 & \lambda_n^- \end{pmatrix}, \] we obtain \[ X_n = C_{n, 0} D_{n, 0} C_{n, 0}^{-1}. \] In view of \eqref{eq:43} and \eqref{eq:41}, by Corollaries \ref{cor:2} and \ref{cor:1}, the sequences $(C_{n, 0} : n \geq M)$ and $(D_{n, 0} : n \geq M)$ belong to $\mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{R})\big)$. If $r \geq 2$, in view of \eqref{eq:43} we can apply Theorem \ref{thm:1} to get two sequences of mappings \[ (C_{n,1} : n \geq M) \in \mathcal{D}_{r, 1}\big(K, \operatorname{GL}(2, \mathbb{R}) \big), \qquad\text{and}\qquad (D_{n, 1} : n \geq M) \in \mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{R}) \big), \] such that \[ D_{n, 0} C_{n, 0}^{-1} C_{n-1, 0} = C_{n, 1} D_{n, 1} C_{n, 1}^{-1}, \] and \[ D_{n, 1} = \begin{pmatrix} \gamma_{n, 1}^+ & 0 \\ 0 & \gamma_{n, 1}^- \end{pmatrix}. \] Then for $n \geq M+1$, \begin{align*} X_{n+1} X_n &= (C_{n+1,0} D_{n+1, 0} C_{n+1, 0}^{-1}) (C_{n, 0} D_{n, 0} C_{n, 0}^{-1}) \\ &= C_{n+1, 0} (D_{n+1, 0} C_{n+1, 0}^{-1} C_{n, 0}) (D_{n, 0} C_{n, 0}^{-1} C_{n-1, 0}) C_{n-1, 0}^{-1} \\ &= C_{n+1, 0} (C_{n+1, 1} D_{n+1, 1} C_{n+1,1}^{-1}) (C_{n, 1} D_{n, 1} C_{n, 1}^{-1}) C_{n-1, 0}^{-1} \\ &= C_{n+1, 0} C_{n+1, 1} (D_{n+1, 1} C_{n+1, 1}^{-1} C_{n, 1}) (D_{n, 1} C_{n, 1}^{-1} C_{n-1, 1}) (C_{n-1, 0} C_{n-1, 1})^{-1}. \end{align*} By repeated application of Theorem \ref{thm:1} for $k \in \{2, 3, \ldots, r-1\}$, we can find sequences \begin{equation} \label{eq:6} (C_{j, k} : j \geq M+k) \in \mathcal{D}_{r, k}\big(K, \operatorname{GL}(2, \mathbb{R})\big), \quad\text{and}\quad (D_{j, k} : j \geq M+k) \in \mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{R})\big), \end{equation} such that \[ D_{j, k-1} C_{j, k-1}^{-1} C_{j-1, k-1} = C_{j, k} D_{j, k} C_{j, k}^{-1}. 
\] Hence, \[ X_{n+1} X_n = Q_{n+1} \big(D_{n+1, r-1} C_{n+1, r-1}^{-1} C_{n, r-1}\big) \big(D_{n, r-1} C_{n, r-1}^{-1} C_{n-1, r-1} \big) Q_{n-1}^{-1} \] where \[ Q_m = C_{m, 0} C_{m, 1} \cdots C_{m, r-1}. \] Notice that \[ \lim_{m \to \infty} Q_m = \begin{pmatrix} \frac{[\mathcal{X}]_{1,2}}{\lambda^+-[\mathcal{X}]_{1,1}} & 1 \\ 1 & \frac{[\mathcal{X}]_{2,1}}{\lambda^--[\mathcal{X}]_{2,2}} \end{pmatrix} \] uniformly on $K$. Let us now consider the recurrence equation \begin{align*} \Psi_{k+1} &= Q_{2k+1}^{-1} (X_{2k+1} + E_{2k+1})(X_{2k} + E_{2k}) Q_{2k-1} \Psi_k \\ &=\big(Y_k + R_k + F_k\big) \Psi_k, \qquad k \geq M \end{align*} where \[ Y_k = D_{2k+1, r-1} D_{2k, r-1} = \begin{pmatrix} \gamma_{2k+1, r-1}^+ \gamma_{2k, r-1}^+ & 0 \\ 0 & \gamma_{2k+1, r-1}^- \gamma_{2k, r-1}^- \end{pmatrix}, \] \[ R_k = (D_{2k+1, r-1} C_{2k+1, r-1}^{-1} C_{2k, r-1}) (D_{2k,r-1} C_{2k, r-1}^{-1} C_{2k-1, r-1}) - D_{2k+1, r-1} D_{2k, r-1}, \] and \[ F_k = Q_{2k+1}^{-1} X_{2k+1} E_{2k} Q_{2k-1} + Q_{2k+1}^{-1} E_{2k+1} X_{2k} Q_{2k-1} + Q_{2k+1}^{-1} E_{2k+1} E_{2k} Q_{2k-1}. \] Since \begin{align*} R_k &= -D_{2k+1, r-1} C_{2k+1, r-1}^{-1} \big( \Delta C_{2k, r-1}\big) D_{2k, r-1} \\ &\phantom{=}- D_{2k+1, r-1} C_{2k+1, r-1}^{-1} C_{2k, r-1} D_{2k, r-1} C_{2k, r-1}^{-1} \big( \Delta C_{2k-1, r-1} \big), \end{align*} we easily see that \[ \|R_k\| \leq c \big(\big\|\Delta C_{2k, r-1}\big\| + \big\|\Delta C_{2k-1, r-1} \big\| \big), \] which together with \eqref{eq:6} implies that \[ \sum_{k = M}^\infty \sup_K \|R_k\| < \infty. \] In view of \eqref{eq:40} we also get \[ \sum_{k = M}^{\infty} \sup_K \|F_k\| < \infty. \] Let us consider the case $\eta = \lambda^-$. 
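Before continuing, the telescoping identity for $R_k$ used above can be sanity-checked numerically; the matrices below are arbitrary invertible stand-ins for $D_{2k+1}$, $D_{2k}$ and $C_{2k+1}$, $C_{2k}$, $C_{2k-1}$, and carry no meaning beyond the algebra.

```python
import numpy as np

# Arbitrary invertible stand-ins (illustrative values only).
D1 = np.diag([1.5, 0.5])
D0 = np.diag([1.2, 0.8])
C1 = np.array([[1.0, 0.1], [0.05, 1.0]])
C0 = np.array([[0.95, 0.2], [0.1, 1.05]])
Cm1 = np.array([[1.1, -0.1], [0.2, 0.9]])

inv = np.linalg.inv
dC0 = C1 - C0    # Delta C_{2k}   = C_{2k+1} - C_{2k}
dCm1 = C0 - Cm1  # Delta C_{2k-1} = C_{2k}   - C_{2k-1}

# R_k as a difference of products ...
R = (D1 @ inv(C1) @ C0) @ (D0 @ inv(C0) @ Cm1) - D1 @ D0
# ... equals the telescoped form, controlled by the increments Delta C.
R_tel = -D1 @ inv(C1) @ dC0 @ D0 - D1 @ inv(C1) @ C0 @ D0 @ inv(C0) @ dCm1

assert np.allclose(R, R_tel)
```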
The sequence $(\gamma_{n, r-1}^- : n \geq M)$ converges to $\lambda^-$, thus there are $n_0 \geq M$ and $\delta' > 0$, so that for all $n \geq n_0$, \[ \bigg|\frac{\gamma^+_{n, r-1}}{\gamma^-_{n, r-1}}\bigg| \geq 1 + \frac{\delta}{\abs{\gamma^-_{n, r-1}}} \geq 1 + \delta', \] hence for all $k_1 \geq k_0 \geq n_0$, \[ \prod_{j = k_0}^{k_1} \bigg|\frac{\gamma^+_{2j+1, r-1}}{\gamma^-_{2j+1, r-1}}\bigg| \cdot \bigg|\frac{\gamma^+_{2j, r-1}}{\gamma^-_{2j, r-1}} \bigg| \geq (1+ \delta')^{2(k_1-k_0)}. \] In particular, $(Y_k : k \geq n_0)$ satisfies the uniform Levinson's condition, see \cite[Definition 2.1]{Silva2004}. Therefore, in view of \cite[Theorem 4.1]{Silva2004}, there is a sequence $(\Psi_k : k \geq n_0)$ such that \[ \lim_{k \to \infty} \sup_{x \in K}{ \bigg\| \frac{\Psi_k(x)}{\prod_{j = n_0}^{k-1} \gamma^-_{2j+1, r-1}(x) \gamma^-_{2j, r-1}(x)} - e_2 \bigg\|} = 0 \] where \[ e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \] In fact, in the proof of \cite[Theorem 4.1]{Silva2004} the author used the supremum norm with respect to the parameter, thus when all the mappings in \cite[Theorem 4.1]{Silva2004} are continuous (or holomorphic) with respect to this parameter, the functions $\Psi_k$ are continuous (or holomorphic, respectively). We are now in a position to define $(\Phi_n : n \geq 2 n_0)$. Namely, for $x \in K$ and $n \geq 2n_0$, we set \[ \Phi_{n}(x) = \begin{cases} Q_{2k-1}(x) \Psi_k(x) & \text{if } n = 2k, \\ (X_{2k}(x) + E_{2k}(x)) Q_{2k-1}(x) \Psi_k(x) & \text{if } n = 2k+1. \end{cases} \] As is easy to check, $(\Phi_n : n \geq 2 n_0)$ satisfies \[ \Phi_{n+1} = (X_n + E_n) \Phi_n. \] Observe that for \[ v^- = \begin{pmatrix} 1 \\ \frac{[\mathcal{X}]_{2,1}}{\lambda^--[\mathcal{X}]_{2,2}} \end{pmatrix} \] we obtain \[ \lim_{k \to \infty} \sup_{x \in K}{ \big\|Q_{2k-1}(x) e_2 - v^-(x)\big\|} = 0, \] and \[ \lim_{k \to \infty} \sup_{x \in K}{ \bigg\| \frac{X_{2k}(x) + E_{2k}(x)}{\gamma^-_{2k, r-1}(x)} Q_{2k-1}(x) e_2 - v^-(x) \bigg\|} = 0.
\] Therefore, \eqref{eq:14} is satisfied for $(\mu_n : n \in \mathbb{N})$ defined on $K$ by the formula \[ \mu_n = \begin{cases} 1 & \text{for } n < 2 n_0, \\ \gamma^-_{n, r-1} &\text{for } n \geq 2 n_0. \end{cases} \] This completes the proof. The reasoning when $\eta = \lambda^+$ is analogous. If $\operatorname{discr} \mathcal{X}(x) < 0$ for $x \in K$, the argument is simpler. First, let us observe that since the matrix $\mathcal{X}(x)$ has real entries and negative discriminant, we must have $|[\mathcal{X}(x)]_{1, 2}| > 0$ for all $x \in K$. Since $(X_n : n \in \mathbb{N})$ converges uniformly on $K$, there are $\delta > 0$ and $M \geq 1$, such that for all $n \geq M$ and $x \in K$, \[ \operatorname{discr} X_n(x) \leq -\delta,\qquad\text{and}\qquad |[X_n(x)]_{1, 2}| > \delta. \] Therefore, for each $x \in K$, the matrix $X_n(x)$ has two eigenvalues $\lambda_n$ and $\overline{\lambda_n}$ where \[ \lambda_n(x) = \frac{\operatorname{tr} X_n(x) + i\sqrt{\abs{\operatorname{discr} X_n(x)}}}{2}. \] Hence, setting \[ C_{n, 0} = \begin{pmatrix} 1 & 1 \\ \frac{\lambda_n - [X_n]_{1,1}}{[X_n]_{1,2}} & \frac{\overline{\lambda_n} - [X_n]_{1,1}}{[X_n]_{1,2}} \end{pmatrix}, \qquad\text{and}\qquad D_{n, 0} = \begin{pmatrix} \lambda_n & 0 \\ 0 & \overline{\lambda_n} \end{pmatrix}, \] we obtain \[ X_n = C_{n, 0} D_{n, 0} C_{n, 0}^{-1}. \] Moreover, $(C_{n, 0} : n \geq M)$ and $(D_{n, 0} : n \geq M)$ belong to $\mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{C}) \big)$. If $r \geq 2$, then by \cite[Theorem 1]{SwiderskiTrojan2019}, there are two sequences of matrices \[ (C_{n, 1} : n \geq M) \in \mathcal{D}_{r, 1}\big(K, \operatorname{GL}(2, \mathbb{C}) \big), \qquad\text{and}\qquad (D_{n, 1} : n \geq M) \in \mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{C}) \big), \] such that \[ D_{n, 0} C_{n, 0}^{-1} C_{n-1, 0} = C_{n, 1} D_{n, 1} C_{n, 1}^{-1}, \] and \[ D_{n, 1} = \begin{pmatrix} \gamma_{n, 1} & 0 \\ 0 & \overline{\gamma_{n, 1}} \end{pmatrix}.
\] By repeated application of \cite[Theorem 1]{SwiderskiTrojan2019}, for $k \in \{2, 3, \ldots, r-1\}$, we can find sequences \[ (C_{j, k} : j \geq M+k) \in \mathcal{D}_{r, k}\big(K, \operatorname{GL}(2, \mathbb{C})\big), \qquad\text{and}\qquad (D_{j, k} : j \geq M+k) \in \mathcal{D}_{r, 0}\big(K, \operatorname{GL}(2, \mathbb{C})\big), \] such that \[ D_{j, k-1} C_{j, k-1}^{-1} C_{j-1, k-1} = C_{j, k} D_{j, k} C_{j, k}^{-1}. \] Hence, \[ X_{n+1}X_n = Q_{n+1} \big(D_{n+1, r-1} C_{n+1, r-1}^{-1} C_{n, r-1} \big) \big(D_{n, r-1} C_{n, r-1}^{-1} C_{n-1, r-1}\big) Q_{n-1}^{-1}, \] where \[ Q_m = C_{m, 0} C_{m, 1} \cdots C_{m, r-1}. \] Notice that \[ \lim_{m \to \infty} Q_m = \begin{pmatrix} 1 & 1 \\ \frac{\lambda - [\mathcal{X}]_{1,1}}{[\mathcal{X}]_{1,2}} & \frac{\overline{\lambda} - [\mathcal{X}]_{1,1}}{[\mathcal{X}]_{1,2}} \end{pmatrix} \] uniformly on $K$. We next consider the recurrence equation \begin{align*} \Psi_{k+1} &= Q_{2k+1}^{-1} (X_{2k+1} + E_{2k+1})(X_{2k} + E_{2k})Q_{2k-1} \Psi_k \\ &= (Y_k + R_k + F_k) \Psi_k, \qquad k \geq M \end{align*} where \[ Y_{k} = D_{2k+1, r-1} D_{2k, r-1} = \begin{pmatrix} \gamma_{2k+1, r-1} \gamma_{2k, r-1} & 0 \\ 0 & \overline{\gamma_{2k+1, r-1}} \overline{\gamma_{2k, r-1}} \end{pmatrix}, \] \[ R_k = (D_{2k+1, r-1} C_{2k+1, r-1}^{-1} C_{2k, r-1}) (D_{2k,r-1} C_{2k, r-1}^{-1} C_{2k-1, r-1})-D_{2k+1, r-1} D_{2k, r-1}, \] and \[ F_k = Q_{2k+1}^{-1} X_{2k+1} E_{2k} Q_{2k-1} + Q_{2k+1}^{-1} E_{2k+1} X_{2k} Q_{2k-1} +Q_{2k+1}^{-1} E_{2k+1} E_{2k} Q_{2k-1}. \] Suppose that $\eta = \overline{\lambda}$. Since for all $n \geq M$, \[ \bigg|\frac{\gamma_{n, r-1}}{\overline{\gamma_{n, r-1}}} \bigg| = 1, \] the sequence $(Y_k : k \geq M)$ satisfies the uniform Levinson's condition. Therefore, by \cite[Theorem 4.1]{Silva2004}, there is a sequence $(\Psi_k : k \geq M)$ such that \[ \lim_{k \to \infty}\sup_{x \in K}{ \bigg\|\frac{\Psi_k(x)} {\prod_{j = M}^{k-1} \overline{\gamma_{2j+1, r-1}(x)} \, \overline{\gamma_{2j, r-1}(x)}} - e_2\bigg\|} = 0.
\] Hence, $(\Phi_n : n \geq 2M)$ defined by the formula \[ \Phi_{n}(x) = \begin{cases} Q_{2k-1}(x) \Psi_k(x) & \text{if } n = 2k, \\ (X_{2k}(x) + E_{2k}(x)) Q_{2k-1}(x) \Psi_k(x) & \text{if } n = 2k+1. \end{cases} \] together with \[ \mu_n = \begin{cases} 1 & \text{for } n < 2 M, \\ \overline{\gamma_{n, r-1}} & \text{for } n \geq 2 M, \end{cases} \] satisfies \eqref{eq:14}. This completes the proof of the theorem. \end{proof} The following lemma guarantees that in the case of positive discriminant Theorem \ref{thm:2} can at least be applied locally. \begin{lemma} \label{lem:4} Suppose that $X$ is a continuous mapping defined on a closed interval $I \subset \mathbb{R}$ with values in $\operatorname{Mat}(2, \mathbb{R})$ that has positive discriminant on $I$. Let $\lambda_1, \lambda_2: I \rightarrow \mathbb{R}$ be continuous functions so that $\lambda_1(x)$ and $\lambda_2(x)$ are the distinct eigenvalues of $X(x)$. Then for each $x \in I$ there is an open interval $I_x$ containing $x$ such that \begin{enumerate}[(i), leftmargin=2em] \item for all $y \in I_x \cap I$, \[ \big([X(y)]_{1,1} - \lambda_1(y)\big)\big([X(y)]_{2,2} - \lambda_2(y)\big) \neq 0, \] \end{enumerate} or \begin{enumerate}[(i), resume, leftmargin=2em] \item for all $y \in I_x \cap I$, \[ \big([X(y)]_{1,1} - \lambda_2(y)\big)\big([X(y)]_{2,2} - \lambda_1(y)\big) \neq 0. \] \end{enumerate} \end{lemma} \begin{proof} Let $x \in I$. Since $\operatorname{discr} X(x) > 0$, we have $\lambda_1(x) \neq \lambda_2(x)$. By the continuity of $X$, it is enough to show that \[ \big([X(x)]_{1,1} - \lambda_1(x)\big)\big([X(x)]_{2,2} - \lambda_2(x)\big) \neq 0, \] or \[ \big([X(x)]_{1,1} - \lambda_2(x)\big)\big([X(x)]_{2,2} - \lambda_1(x)\big) \neq 0. \] If neither of the conditions is met, we would have \[ [X(x)]_{1,1} = \lambda_1(x) \quad\text{and}\quad [X(x)]_{2,2} = \lambda_1(x), \] or \[ [X(x)]_{1,1} = \lambda_2(x) \quad\text{and}\quad [X(x)]_{2,2} = \lambda_2(x).
\] Thus $\operatorname{tr} X(x)$ equals $2 \lambda_1(x)$ or $2 \lambda_2(x)$, but since $\operatorname{tr} X(x) = \lambda_1(x) + \lambda_2(x)$ and $\lambda_1(x) \neq \lambda_2(x)$, neither of the situations is possible. \end{proof} The following corollary gives a Levinson's type theorem in the case when the limit $\mathcal{X}$ is a constant matrix. It may be proved in much the same way as Theorem \ref{thm:2}. Here, the condition \eqref{eq:35} can be dropped since $\mathcal{X}$ is a constant matrix. \begin{corollary} \label{cor:4} Let $(X_n : n \in \mathbb{N})$ be a sequence of matrices belonging to $\mathcal{D}_1 \big(\operatorname{GL}(2, \mathbb{R})\big)$ convergent to the matrix $\mathcal{X}$. Let $(E_n : n \in \mathbb{N})$ be a sequence of continuous (or holomorphic) mappings defined on a compact set $K \subset \mathbb{C}$ with values in $\operatorname{Mat}(2, \mathbb{C})$, such that \[ \sum_{n = 1}^\infty \sup_K \|E_n\| < \infty. \] Suppose that $\operatorname{discr} \mathcal{X} \neq 0$ and $\det \mathcal{X} > 0$. If $\eta$ is an eigenvalue of $\mathcal{X}$, then there are continuous (or holomorphic, respectively) mappings $\Phi_n: K \rightarrow \mathbb{C}^2$, satisfying \[ \Phi_{n+1} = (X_n + E_n) \Phi_n \] such that \[ \lim_{n \to \infty} \sup_{x \in K}{\bigg\| \frac{\Phi_n(x)}{\prod_{j = 1}^{n-1} \mu_j} - v \bigg\|}=0 \] where $v$ is the eigenvector of $\mathcal{X}$ corresponding to $\eta$, and $\mu_n$ is the eigenvalue of $X_n$ such that \[ \lim_{n \to \infty} |\mu_n - \eta| = 0. \] \end{corollary} \subsection{Perturbations of the identity} \begin{theorem} \label{thm:3} Let $(X_n : n \in \mathbb{N})$ be a sequence of continuous mappings defined on $\mathbb{R}$ with values in $\operatorname{GL}(2, \mathbb{R})$ that converges uniformly on a compact interval $K$ to $\sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1\}$.
Suppose that there is a sequence of positive numbers $(\gamma_n : n \in \mathbb{N}_0)$ such that $R_n = \gamma_n(X_n - \sigma \operatorname{Id})$ converges uniformly on $K$ to the mapping $\mathcal{R}$ satisfying $\operatorname{discr} \mathcal{R}(x) \neq 0$ for all $x \in K$. If $\operatorname{discr} \mathcal{R} > 0$, we additionally assume that \begin{equation} \label{eq:110} \sum_{n=0}^\infty \frac{1}{\gamma_n} = \infty. \end{equation} Let $(E_n : n \in \mathbb{N})$ be a sequence of continuous mappings defined on $\mathbb{R}$ with values in $\operatorname{Mat}(2, \mathbb{C})$, such that \begin{equation} \label{eq:37} \sum_{n = 1}^\infty \sup_K{\|E_n\|} < \infty. \end{equation} If $(R_n : n \in \mathbb{N})$ belongs to $\mathcal{D}_{1, 0}\big(K, \operatorname{Mat}(2, \mathbb{R})\big)$ and $\eta$ is a continuous eigenvalue of $\mathcal{R}$, then there are $n_0 \geq 1$ and continuous mappings $\Phi_n : K \rightarrow \mathbb{C}^2$, $\mu_n : K \rightarrow \mathbb{C}$, and $v : K \rightarrow \mathbb{C}^2$ satisfying \[ \Phi_{n+1} = (X_n + E_n) \Phi_n, \] such that \begin{equation} \label{eq:17} \lim_{n \to \infty} \sup_{x \in K}{\bigg\|\frac{\Phi_n(x)}{\prod_{j = n_0}^{n-1} \big( \sigma + \gamma_j^{-1} \mu_j(x)\big)} - v(x) \bigg\|} = 0 \end{equation} where for each $x \in K$, $v(x)$ is an eigenvector of $\mathcal{R}(x)$ corresponding to $\eta(x)$, and $\mu_n(x)$ is an eigenvalue of $R_n(x)$ such that \[ \lim_{n \to \infty} \sup_{x \in K}{\big|\mu_n(x) - \eta(x)\big|} = 0. \] \end{theorem} \begin{proof} Let us first consider the case of positive discriminant. There is $\delta > 0$ such that for all $x \in K$, \[ \operatorname{discr} \mathcal{R}(x) \geq 2\delta^2. \] Then the matrix $\mathcal{R}(x)$ has two eigenvalues \[ \xi^+(x) = \frac{\operatorname{tr} \mathcal{R} (x) + \sigma \sqrt{\operatorname{discr} \mathcal{R}(x)}}{2}, \qquad\text{and}\qquad \xi^-(x) = \frac{\operatorname{tr} \mathcal{R} (x) - \sigma \sqrt{\operatorname{discr} \mathcal{R}(x)}}{2}. 
\] Since $(R_n : n \in \mathbb{N})$ converges uniformly on $K$, there is $M \geq 1$ such that for all $n \geq M$ and $x \in K$, \begin{equation} \label{eq:108} \operatorname{discr} R_n(x) \geq \delta^2, \qquad\text{and}\qquad \gamma_n \geq \delta. \end{equation} In particular, the matrix $R_{n}$ has two eigenvalues \begin{equation} \label{eq:107} \xi^+_n(x) = \frac{\operatorname{tr} R_n(x) + \sigma \sqrt{\operatorname{discr} R_n(x)}}{2}, \qquad\text{and}\qquad \xi^-_n(x) = \frac{\operatorname{tr} R_n(x) - \sigma \sqrt{\operatorname{discr} R_n(x)}}{2}. \end{equation} Now, let us consider the collection of intervals $\{I_x : x \in K\}$ determined in Lemma \ref{lem:4} for the mapping $\mathcal{R}$. By compactness of $K$ we can find a finite subcollection $\{I_1, \ldots, I_J\}$ that covers $K$. Let us consider the case $\eta = \xi^-$. It is clear that \[ \lim_{n \to \infty} \sup_{x \in K}{\big|\xi^-_n(x) - \eta(x)\big|} = 0. \] Suppose that on each $K_j = \overline{I_j} \cap K$, one can find $\Phi_n^{(j)}$ and $v^{(j)}$ so that \begin{equation} \label{eq:46} \lim_{n \to \infty} \sup_{x \in K_j}{ \bigg\| \frac{\Phi^{(j)}_n(x)}{\prod_{m = n_0}^{n-1} \big(\sigma + \gamma_m^{-1} \mu_m(x)\big)} - v^{(j)}(x) \bigg\| } =0. \end{equation} Let $\{\psi_1, \ldots, \psi_J\}$ be a continuous partition of unity subordinate to the covering $\{I_1, \ldots , I_J\}$, that is, $\psi_j$ is a continuous non-negative function with $\operatornamewithlimits{supp} \psi_j \subset I_j$, so that \[ \sum_{j = 1}^J \psi_j \equiv 1. \] We set \[ \Phi_n = \sum_{j = 1}^J \Phi^{(j)}_n \psi_j, \qquad\text{and}\qquad v = \sum_{j = 1}^J v^{(j)} \psi_j. \] Observe that $v(x)$ is an eigenvector of $\mathcal{R}(x)$ corresponding to $\eta(x)$ for all $x \in K$.
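The gluing by a partition of unity can be illustrated with a toy numerical example; the cover and the bump shapes below are illustrative choices, not taken from the construction.

```python
import numpy as np

# A toy continuous partition of unity subordinate to the cover
# I1 = (-0.1, 0.6), I2 = (0.4, 1.1) of K = [0, 1].
def bump(x, a, b):
    # continuous, positive on (a, b), vanishing outside
    return np.maximum(0.0, np.minimum(x - a, b - x))

x = np.linspace(0.0, 1.0, 101)
b1 = bump(x, -0.1, 0.6)
b2 = bump(x, 0.4, 1.1)
total = b1 + b2

psi1, psi2 = b1 / total, b2 / total  # normalise so they sum to 1 on K

assert np.all(total > 0)              # the cover has no gaps on K
assert np.allclose(psi1 + psi2, 1.0)  # partition of unity on K
```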
Moreover, since $\psi_j$ is supported inside $I_j$, \begin{align*} \lim_{n \to \infty} \sup_{x \in K}{ \bigg\| \frac{\Phi_n(x)}{\prod_{m = n_0}^{n-1} \big(\sigma + \gamma_m^{-1} \mu_m(x)\big)} - v(x) \bigg\|} \leq \lim_{n \to \infty} \sum_{j = 1}^J \sup_{x \in K_j}{ \bigg\| \frac{\Phi_n^{(j)}(x)}{\prod_{m = n_0}^{n-1} \big(\sigma + \gamma_m^{-1} \mu_m(x)\big)} - v^{(j)}(x) \bigg\|} = 0. \end{align*} Therefore, it is sufficient to prove \eqref{eq:46} for $K = K_j$ where $j \in \{ 1, \ldots, J \}$. To simplify the notation, we drop the dependence on $j$. Without loss of generality, we can assume that for each $x \in K$, \[ \big|[\mathcal{R}(x)]_{1,1} - \xi^+(x)\big| \geq 2\delta, \qquad\text{and}\qquad \big|[\mathcal{R}(x)]_{2,2} - \xi^-(x) \big| \geq 2\delta. \] Since $(R_n : n \in \mathbb{N})$ converges to $\mathcal{R}$ uniformly on $K$, there is $M \geq 1$ such that for all $x \in K$ and $n \geq M$, \begin{equation} \label{eq:49} \big| [R_n(x)]_{1,1} - \xi^+_n(x) \big| \geq \delta, \qquad\text{and}\qquad \big| [R_n(x)]_{2,2} - \xi^-_n(x) \big| \geq \delta. \end{equation} Now, we can define \[ C_n = \begin{pmatrix} \frac{[R_n]_{1,2}}{\xi^+_n - [R_n]_{1,1}} & 1 \\ 1 & \frac{[R_n]_{2,1}}{\xi^-_n-[R_n]_{2,2}} \end{pmatrix}, \qquad\text{and}\qquad \tilde{D}_n(x) = \begin{pmatrix} \xi^+_n(x) & 0 \\ 0 & \xi^-_n(x) \end{pmatrix}. \] Then \[ R_{n}(x) = C_n(x) \tilde{D}_n(x) C_n^{-1}(x), \] and in view of \eqref{eq:49}, \eqref{eq:108}, \eqref{eq:107}, Corollary~\ref{cor:1} and Lemma~\ref{lem:2}, we conclude that \begin{equation} \label{eq:113} (C_n : n \geq M) \in \mathcal{D}_{1,0} \big( K, \operatorname{GL}(2, \mathbb{R}) \big). \end{equation} Notice that \begin{equation} \label{eq:16} \lim_{n \to \infty} C_n = \begin{pmatrix} \frac{[\mathcal{R}]_{1,2}}{\xi^+ - [\mathcal{R}]_{1,1}} & 1 \\ 1 & \frac{[\mathcal{R}]_{2,1}}{\xi^- - [\mathcal{R}]_{2,2}} \end{pmatrix} \end{equation} uniformly on $K$.
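A quick numerical check of this diagonalisation with an eigenvector matrix of the above shape; the matrix $R$ below is an illustrative stand-in with positive discriminant and non-vanishing denominators, not a matrix arising from the construction.

```python
import numpy as np

# Illustrative stand-in for R_n.
R = np.array([[2.0, 1.0],
              [1.0, -1.0]])

tr = np.trace(R)
discr = tr ** 2 - 4.0 * np.linalg.det(R)
assert discr > 0  # positive discriminant, as assumed in the text

xi_p = 0.5 * (tr + np.sqrt(discr))  # xi^+
xi_m = 0.5 * (tr - np.sqrt(discr))  # xi^-

# Eigenvector matrix of the same shape as C_n, assuming the denominators
# xi^+ - R_11 and xi^- - R_22 are non-zero.
C = np.array([[R[0, 1] / (xi_p - R[0, 0]), 1.0],
              [1.0, R[1, 0] / (xi_m - R[1, 1])]])
D = np.diag([xi_p, xi_m])

# R = C diag(xi^+, xi^-) C^{-1}
assert np.allclose(R, C @ D @ np.linalg.inv(C))
```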
Since \[ X_{n} = \sigma \operatorname{Id} + \frac{1}{\gamma_n} R_{n}, \] we obtain \[ X_{n}(x) = C_n(x) D_n(x) C_n^{-1}(x) \] where \[ D_n = \sigma \operatorname{Id} + \frac{1}{\gamma_n} \tilde{D}_n. \] Hence, the eigenvalues of $X_n$ are \begin{equation} \label{eq:52} \lambda^+_n = \sigma + \frac{1}{\gamma_n} \xi^+_n, \qquad\text{and}\qquad \lambda^-_n = \sigma + \frac{1}{\gamma_n} \xi^-_n. \end{equation} Let us now consider the recurrence equation \begin{align*} \Psi_{n+1} &= C_{n+1}^{-1} (X_n + E_n)C_n \Psi_n\\ &=(D_n + F_n) \Psi_n \end{align*} where \[ F_n = -C^{-1}_{n+1} \big( \Delta C_n \big) D_n + C_{n+1}^{-1} E_n C_n. \] By \eqref{eq:16}, we easily see that \[ \| F_n \| \leq c \big(\| \Delta C_n \| + \|E_n\|\big), \] which together with \eqref{eq:113} and \eqref{eq:37} gives \[ \sum_{n=M}^\infty \sup_K \| F_n \| < \infty. \] Next, in view of \eqref{eq:52}, \eqref{eq:107} and \eqref{eq:108}, for $n \geq M$, \begin{align} \label{eq:126} \bigg| \frac{\lambda^+_n}{\lambda^-_n} \bigg| = \bigg| 1 + \frac{1}{\gamma_n} \frac{\sqrt{\operatorname{discr} R_n}}{1 + \frac{\sigma}{\gamma_n} \xi_n^-}\bigg| \geq 1 + \frac{\sqrt{\operatorname{discr} R_n}}{2 \gamma_n} \geq \exp\bigg(\frac{\delta}{4 \gamma_n} \bigg), \end{align} after possibly enlarging $M$. Therefore, for all $n_2 > n_1 \geq M$, \[ \prod_{n=n_1}^{n_2} \bigg| \frac{\lambda^+_n}{\lambda^-_n} \bigg| \geq \exp\bigg(\frac{\delta}{4} \sum_{n = n_1}^{n_2} \frac{1}{\gamma_n} \bigg). \] Hence, \eqref{eq:110} guarantees that the sequence $(D_n : n \geq M)$ satisfies the uniform Levinson's condition. Let us recall that we are considering $\eta = \xi^-$. In view of \cite[Theorem 4.1]{Silva2004}, there is a sequence $(\Psi_n : n \geq M)$ such that \[ \lim_{n \to \infty} \sup_{x \in K}{ \bigg\| \frac{\Psi_n(x)}{\prod_{j=M}^{n-1} \lambda_{j}^-(x)} - e_2 \bigg\|} = 0. \] Now, for $x \in K$ and $n \geq M$, we set \[ \Phi_n(x) = C_n(x) \Psi_n(x).
\] It is easy to verify that $(\Phi_n : n \geq M)$ satisfies \[ \Phi_{n+1} = (X_n + E_n) \Phi_n. \] Setting \[ v = \begin{pmatrix} 1 \\ \frac{[\mathcal{R}]_{2,1}}{\xi^-- [\mathcal{R}]_{2,2}} \end{pmatrix} \] by \eqref{eq:16}, we get \[ \lim_{n \to \infty} \sup_{x \in K}{\big\| C_n(x) e_2 - v(x) \big\|} = 0, \] which completes the proof of \eqref{eq:17} for $K = K_j$, and the case of positive discriminant follows. When $\operatorname{discr} \mathcal{R} < 0$ on $K$, the reasoning is similar. Since the matrix $\mathcal{R}$ has real entries, $[\mathcal{R}(x)]_{1,2} \neq 0$ for all $x \in K$. Therefore, for $n \geq M$, we can set \[ C_n = \begin{pmatrix} 1 & 1 \\ \frac{\xi^+_n - [R_n]_{1,1}}{[R_n]_{1, 2}} & \frac{\xi^-_n - [R_n]_{1,1}}{[R_n]_{1, 2}} \end{pmatrix} \] where \[ \xi^+_n(x) = \frac{\operatorname{tr} R_n(x) + i \sqrt{|\operatorname{discr} R_n(x)|}}{2}, \qquad\text{and}\qquad \xi^-_n(x) = \frac{\operatorname{tr} R_n(x) - i \sqrt{|\operatorname{discr} R_n(x)|}}{2}. \] Since \[ \bigg| \frac{\lambda^+_n}{\lambda^-_n} \bigg| = 1, \] the sequence $(D_n : n \geq M)$ satisfies the uniform Levinson's condition. The rest of the proof runs as before. \end{proof} The method of proof used in Theorem \ref{thm:3} can also be applied in the case of different eigenvalues and $r = 1$. In particular, the condition \eqref{eq:35} can be dropped. The proof of the following corollary is analogous to the proof of Theorem \ref{thm:3}. \begin{corollary} \label{cor:5} Let $(X_n : n \in \mathbb{N})$ be a sequence of matrices in $\operatorname{GL}(2, \mathbb{R})$ convergent to the matrix $\sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1\}$. Suppose that there is a sequence of positive numbers $(\gamma_n : n \in \mathbb{N}_0)$ such that $R_n = \gamma_n(X_n - \sigma \operatorname{Id})$ converges to the matrix $\mathcal{R}$ satisfying $\operatorname{discr} \mathcal{R} \neq 0$.
If $\operatorname{discr} \mathcal{R} > 0$, we additionally assume \[ \sum_{n = 0}^\infty \frac{1}{\gamma_n} = \infty. \] Let $(E_n : n \in \mathbb{N})$ be a sequence of continuous (or holomorphic) mappings on a compact set $K \subset \mathbb{C}$ with values in $\operatorname{Mat}(2, \mathbb{C})$, such that \[ \sum_{n = 1}^\infty \sup_K \|E_n\| < \infty. \] If $(R_n: n \in \mathbb{N})$ belongs to $\mathcal{D}_{1, 0}(\operatorname{Mat}(2, \mathbb{R}))$, and $\eta$ is an eigenvalue of $\mathcal{R}$, then there are $n_0 \geq 1$ and continuous (or holomorphic, respectively) mappings $\Phi_n: K \rightarrow \mathbb{C}^2$, satisfying \[ \Phi_{n+1} = (X_n + E_n) \Phi_n, \] and such that \[ \lim_{n \to \infty} \sup_{x \in K}{ \bigg\| \frac{\Phi_n(x)}{\prod_{j = n_0}^{n-1} (\sigma + \gamma_j^{-1} \mu_j)} - v \bigg\|}=0 \] where $v$ is an eigenvector of $\mathcal{R}$ corresponding to $\eta$, and $\mu_n$ is the eigenvalue of $R_n$ such that \[ \lim_{n \to \infty} |\mu_n - \eta| = 0. \] \end{corollary} In the following proposition we describe a way to estimate the denominator in \eqref{eq:17}. \begin{proposition} \label{prop:7} Let $(X_n : n \in \mathbb{N})$ be a sequence of mappings defined on $\mathbb{R}$ with values in $\operatorname{GL}(2, \mathbb{R})$ convergent on a compact set $K$ to $\sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1\}$. Suppose that there is a sequence of positive numbers $(\gamma_n : n \in \mathbb{N})$ satisfying \[ \lim_{n \to \infty} \Gamma_n = \infty \qquad \text{where} \qquad \Gamma_n = \sum_{j=1}^n \frac{1}{\gamma_j}, \] such that $R_n = \gamma_n (X_n - \sigma \operatorname{Id})$ converges uniformly on $K$ to the mapping $\mathcal{R}$. Assume that $\operatorname{discr} \mathcal{R}(x) > 0$ for all $x \in K$, and \begin{equation} \label{eq:50} \sum_{n=1}^\infty \Gamma_n \cdot \sup_{x \in K} \| R_{n+1}(x) - R_n(x) \| < \infty.
\end{equation} Then there is $n_0$ such that for all $n \geq n_0$ and $x \in K$, \begin{equation} \label{eq:38} \prod_{j = n_0}^n \big|\sigma + \gamma_j^{-1} \mu_j^-(x) \big|^2 \asymp \exp \Big(\Gamma_n \big( \sigma \operatorname{tr} \mathcal{R}(x) - \sqrt{\operatorname{discr} \mathcal{R}(x)} \big) \Big) \end{equation} and \begin{equation} \label{eq:39} \prod_{j = n_0}^n \big|\sigma + \gamma_j^{-1} \mu_j^+(x) \big|^2 \asymp \exp \Big(\Gamma_n \big( \sigma \operatorname{tr} \mathcal{R}(x) + \sqrt{\operatorname{discr} \mathcal{R}(x)} \big) \Big) \end{equation} where \begin{equation} \label{eq:56} \mu_j^- = \frac{1}{2} \Big(\operatorname{tr} R_{j} - \sigma \sqrt{\operatorname{discr} R_{j}}\Big), \qquad\text{and}\qquad \mu_j^+ = \frac{1}{2}\Big(\operatorname{tr} R_{j} + \sigma \sqrt{\operatorname{discr} R_{j}}\Big). \end{equation} The implicit constants in \eqref{eq:38} and \eqref{eq:39} are independent of $x$ and $n$. \end{proposition} \begin{proof} Since $\operatorname{discr} \mathcal{R} > 0$ on $K$, there is $n_0$ such that for all $j \geq n_0$ and $x \in K$, $\operatorname{discr} R_{j}(x) > 0$. Thus $R_{j}$ has two eigenvalues given by the formulas \eqref{eq:56}. By possibly enlarging $n_0$, for all $n \geq n_0$, we have \[ \log\Big( \prod_{j = n_0}^n \big|\sigma + \gamma_j^{-1} \mu_j^- \big|^2 \Big) \asymp \sum_{j = n_0}^n \frac{1}{\gamma_j} \Big(\sigma \operatorname{tr} R_{j} - \sqrt{\operatorname{discr} R_{j}}\Big) \] uniformly on $K$. Let \[ A_n^- = \sigma \operatorname{tr} R_{n} - \sqrt{\operatorname{discr} R_{n}}, \qquad A_{\infty}^- = \sigma \operatorname{tr} \mathcal{R} - \sqrt{\operatorname{discr} \mathcal{R}}. \] Since for $m \geq n$, \begin{align*} \big|A_n^- - A_m^- \big| \cdot \Gamma_n &\leq c \sum_{k = n}^\infty \big\|R_{k+1} - R_{k} \big\| \cdot \Gamma_n \\ &\leq c \sum_{k = n}^\infty \big\|R_{k+1} - R_{k} \big\| \cdot \Gamma_k, \end{align*} we obtain \begin{equation} \label{eq:59} \sup_K{\big| A_n^- - A_{\infty}^- \big|} \cdot \Gamma_n \leq c.
\end{equation} Now, by summation by parts, we get \begin{align*} \sum_{j = n_0}^n \frac{1}{\gamma_j} A_j^- &= (\Gamma_n - \Gamma_{n_0-1}) A_{\infty}^- + \sum_{j = n_0}^n (\Gamma_j - \Gamma_{j-1}) (A_j^- - A_{\infty}^-) \\ &= \Gamma_n A^-_{\infty} - \Gamma_{n_0-1} A_{n_0}^- + \Gamma_n (A_n^- - A_{\infty}^-) + \sum_{j = n_0}^{n-1} \Gamma_j (A_j^- - A_{j+1}^-), \end{align*} thus, by \eqref{eq:50} and \eqref{eq:59}, \[ \sup_K{\bigg| \sum_{j = n_0}^n \frac{1}{\gamma_j} A_j^- - A_{\infty}^- \cdot \Gamma_n\bigg|} \leq c. \] Hence, \[ \prod_{j = n_0}^n \big|\sigma + \gamma_j^{-1} \mu_j^- \big|^2 \asymp \exp \Big(\Gamma_n \big( \sigma \operatorname{tr} \mathcal{R} - \sqrt{\operatorname{discr} \mathcal{R}} \big) \Big), \] uniformly on $K$. The proof of \eqref{eq:39} is similar. \end{proof} \section{Essential spectrum for positive discriminant} \label{sec:essential} In this section we prove the main results of the paper. \begin{theorem} \label{thm:6} Let $N$ and $r$ be positive integers and $i \in \{1, 2, \ldots, N\}$. Let $A$ be a Jacobi matrix with $N$-periodically blended entries. If there is a compact set $K_0 \subset \mathbb{R}$ with at least $N+3$ points so that \begin{equation} \label{eq:127} \big( X_{n(N+2)+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{r,0} \big( K_0, \operatorname{Mat}(2, \mathbb{R}) \big), \end{equation} then $A$ is self-adjoint and \[ \sigma_{\mathrm{sing}}(A) \cap \Lambda = \emptyset \quad \text{and} \quad \sigma_{\mathrm{ac}}(A) = \sigma_{\mathrm{ess}}(A) = \overline{\Lambda} \] where \[ \Lambda = \big\{ x \in \mathbb{R} : \operatorname{discr} \mathcal{X}_1(x) < 0 \big\} \] wherein $\mathcal{X}_1$ is given by the formula \eqref{eq:32}. \end{theorem} \begin{proof} Fix $x_0 \in \mathbb{R} \setminus \overline{\Lambda}$. Let $I$ be an open interval containing $x_0$ such that $\overline{I} \subset \mathbb{R} \setminus \overline{\Lambda}$.
Since $\operatorname{discr} \mathcal{X}_1 = \operatorname{discr} \mathcal{X}_i$, we have $\operatorname{discr} \mathcal{X}_i > 0$ on $\overline{I}$. Thus the matrix $\mathcal{X}_i$ has two different eigenvalues $\lambda^+$ and $\lambda^-$. Since $\det \mathcal{X}_i \equiv 1$, we can select them in such a way that \[ \abs{\lambda^-} < 1 < \abs{\lambda^+}. \] Let $I_0$ be an open interval determined by Lemma \ref{lem:4} for $x_0$ and the mapping $\mathcal{X}_i: \overline{I} \rightarrow \operatorname{Mat}(2, \mathbb{R})$. Without loss of generality we can assume that, for all $x \in I_0$, \[ \big| [\mathcal{X}_i(x)]_{1, 1} - \lambda^-(x)\big| > 0, \qquad\text{and}\qquad \big| [\mathcal{X}_i(x)]_{2, 2} - \lambda^+(x)\big| > 0. \] Let $K = \overline{I_0}$. In view of Lemma \ref{lem:3}, \begin{equation} \label{eq:130} \big( X_{j(N+2)+i} : j \in \mathbb{N} \big) \in \mathcal{D}_{r,0} \big( K, \operatorname{GL}(2, \mathbb{R}) \big). \end{equation} Now, by Theorem \ref{thm:2}, there are sequences $(\Phi^\pm_n : n \geq n_0)$ and $(\mu_n^\pm : n \in \mathbb{N})$, such that \begin{equation} \label{eq:138} \lim_{n \to \infty} \sup_{x \in K}{ \bigg\| \frac{\Phi^\pm_n(x)}{\prod_{j=1}^n \mu_j^\pm(x)} - v^\pm(x) \bigg\|} = 0 \end{equation} where $v^\pm$ is a continuous eigenvector of $\mathcal{X}_i$ corresponding to $\lambda^\pm$. We set \[ \phi_1^\pm = B_1^{-1} \cdots B_{n_0(N+2)+i-1}^{-1} \Phi^\pm_{n_0}, \] and for $n \geq 1$, \begin{equation} \label{eq:24} \phi^\pm_{n+1} = B_n \phi^\pm_n. \end{equation} Then for $k(N+2)+i' > n_0(N+2) + i$ with $i' \in \{0, 1, \ldots, N+1\}$, we have \begin{equation} \label{eq:139} \phi^\pm_{k(N+2)+i'} = \begin{cases} B^{-1}_{k(N+2)+i'} B^{-1}_{k(N+2)+i'+1} \cdots B^{-1}_{k(N+2)+i-1} \Phi^\pm_k &\text{if } i' \in \{0, 1, \ldots, i-1\}, \\ \Phi^\pm_k &\text{if } i' = i, \\ B_{k(N+2)+i'-1} B_{k(N+2)+i'-2} \cdots B_{k(N+2)+i} \Phi_k^\pm &\text{if } i' \in \{i+1, \ldots, N+1\}.
\end{cases} \end{equation} Since for $i' \in \{1, \ldots, i-1\}$, \[ \lim_{k \to \infty} B^{-1}_{k(N+2)+i'} B^{-1}_{k(N+2)+i'+1} \cdots B^{-1}_{k(N+2)+i-1} = \mathfrak{B}^{-1}_{i'} \mathfrak{B}^{-1}_{i'+1} \cdots \mathfrak{B}^{-1}_{i-1}, \] we obtain \begin{equation} \label{eq:25} \lim_{k \to \infty} \sup_K{ \bigg\| \frac{\phi^{-}_{k(N+2)+i'}}{\prod_{j=1}^{k-1} \mu_j^-} - \mathfrak{B}^{-1}_{i'} \mathfrak{B}^{-1}_{i'+1} \cdots \mathfrak{B}^{-1}_{i-1} v^- \bigg\| } =0. \end{equation} Analogously, for $i' \in \{i+1, \ldots, N\}$, we get \begin{equation} \label{eq:26} \lim_{k \to \infty} \sup_K{ \bigg\| \frac{\phi^{-}_{k(N+2)+i'}}{\prod_{j=1}^{k-1} \mu_j^-} - \mathfrak{B}_{i'-1} \mathfrak{B}_{i'-2} \cdots \mathfrak{B}_i v^- \bigg\| } =0. \end{equation} Lastly, by Proposition \ref{prop:1}, \begin{equation} \label{eq:22} \lim_{k \to \infty} \sup_K \Bigg\| \frac{\phi^{-}_{k(N+2)}}{\prod_{j = 0}^{k-1} \mu_j^-} - \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \mathfrak{B}_{1}^{-1} \mathfrak{B}_{2}^{-1} \cdots \mathfrak{B}_{i-1}^{-1} v^- \Bigg\| =0 \end{equation} and \begin{equation} \label{eq:23} \lim_{k \to \infty} \sup_K \Bigg\| \frac{\phi^{-}_{k(N+2)+N+1}}{\prod_{j=1}^{k-1} \mu_j^-} - \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \mathfrak{B}_{N-1} \mathfrak{B}_{N-2} \cdots \mathfrak{B}_i v^- \Bigg\| =0. \end{equation} Since $(\phi^\pm_n : n \in \mathbb{N})$ satisfies \eqref{eq:24}, the sequence $(u^\pm_n(x) : n \in \mathbb{N}_0)$ defined as \[ u^\pm_n(x) = \begin{cases} \langle \phi^\pm_1(x), e_1 \rangle & \text{if } n = 0, \\ \langle \phi^\pm_n(x), e_2 \rangle & \text{if } n \geq 1, \end{cases} \] is a generalized eigenvector associated to $x \in K$. Observe that $(u^\pm_0, u^\pm_1) \neq (0, 0)$ on $K$. Indeed, otherwise there is $x \in K$ such that $\phi^\pm_1(x) = 0$, hence $\phi^\pm_n(x) = 0$ for all $n \in \mathbb{N}$. Therefore, $v^\pm(x) = 0$, which is impossible.
Now, in view of \eqref{eq:138} and \eqref{eq:139}, there are constants $c > 0$ and $\delta > 0$ such that for all $n \geq n_0$ and $x \in K$, \[ \big| u^+_{n(N+2)+i-1}(x) \big|^2 + \big| u^+_{n(N+2)+i}(x) \big|^2 = \big\| \phi^+_{n(N+2)+i}(x) \big\|^2 \geq c \prod_{j=n_0}^{n-1} |\mu^+_j(x)|^2 \geq c (1 + \delta)^n. \] Moreover, by \eqref{eq:25}--\eqref{eq:23}, for all $n \geq n_0$, $i' \in \{0, 1, \ldots, N+1 \}$, and $x \in K$, \[ \big\| \phi^-_{n(N+2)+i'}(x) \big\|^2 \leq c \prod_{j=n_0}^{n-1} |\mu^-_j(x)|^2 \leq c (1+\delta)^{-n}. \] Consequently, for any $x \in K$, \[ \sum_{n = 0}^\infty \abs{u^+_n(x)}^2 = \infty \] which shows that $A$ is self-adjoint (see \cite[Theorem 6.16]{Schmudgen2017}). Moreover, \[ \sum_{n = 0}^\infty \sup_{x \in K}{\abs{u^-_n(x)}}^2 < \infty, \] thus by the proof of \cite[Theorem 5.3]{Silva2007}, \[ \sigma_{\mathrm{ess}}(A) \cap K = \emptyset. \] Therefore, for all $x_0 \in \mathbb{R} \setminus \overline{\Lambda}$ there is an open interval $I_0$ containing $x_0$ such that $\sigma_{\mathrm{ess}}(A) \cap I_0 = \emptyset$. Consequently, $\sigma_{\mathrm{ess}}(A) \subseteq \overline{\Lambda}$. In view of \eqref{eq:130}, \cite[Theorem B]{SwiderskiTrojan2019} implies that $A$ is purely absolutely continuous on $\Lambda$, and $\overline{\Lambda} \subset \sigma_{\mathrm{ac}}(A)$. This completes the proof. \end{proof} \begin{remark} The proof of \cite[Corollary 6.7]{Swiderski2020} entails that \eqref{eq:127} is satisfied for any compact set $K \subset \mathbb{R}$, and all $i \in \{ 1, 2, \ldots, N \}$, provided that \[ \bigg( \frac{1}{a_n} : n \in \mathbb{N} \bigg), \big( b_n : n \in \mathbb{N} \big) \in \mathcal{D}^{N+2}_{r,0} \quad \text{and} \quad \bigg( \frac{a_{n(N+2)+N}}{a_{n(N+2)+N+1}} : n \in \mathbb{N} \bigg) \in \mathcal{D}_{r,0}. \] \end{remark} Essentially the same reasoning as in the proof of Theorem \ref{thm:6} leads to the following theorem. 
\begin{theorem} \label{thm:7} Let $N$ and $r$ be positive integers and $i \in \{0, 1, \ldots, N-1\}$. Let $A$ be a Jacobi matrix with $N$-periodically modulated entries. If $|\operatorname{tr} \mathfrak{X}_0(0)| > 2$ and there is a compact set $K_0 \subset \mathbb{R}$ with at least $N+1$ points so that \begin{equation} \label{eq:128} \big( X_{nN+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{r,0} \big( K_0, \operatorname{Mat}(2, \mathbb{R}) \big), \end{equation} then $A$ is self-adjoint and $\sigma_{\mathrm{ess}}(A) = \emptyset$. \end{theorem} \begin{remark} The proof of \cite[Corollary 8]{SwiderskiTrojan2019} implies that \eqref{eq:128} is satisfied for any compact set $K \subset \mathbb{R}$, and all $i \in \{0, 1, \ldots, N-1 \}$, provided that \[ \bigg( \frac{a_{n-1}}{a_n} : n \in \mathbb{N} \bigg), \bigg( \frac{b_n}{a_n} : n \in \mathbb{N} \bigg), \bigg( \frac{1}{a_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}^N_{r,0}. \] \end{remark} We next consider the case when $\mathfrak{X}_0$ has equal eigenvalues. \begin{theorem} \label{thm:10} Let $N$ and $r$ be positive integers and $i \in \{0, 1, \ldots, N-1\}$. Let $A$ be a Jacobi matrix with $N$-periodically modulated entries so that $\mathfrak{X}_0(0) = \sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1\}$. Suppose that there are two $N$-periodic sequences $(s_n : n \in \mathbb{N}_0)$ and $(z_n : n \in \mathbb{N}_0)$, such that \[ \lim_{n \to \infty} \bigg|\frac{\alpha_{n-1}}{\alpha_n} a_n - a_{n-1} - s_n\bigg| = 0, \qquad \lim_{n \to \infty} \bigg|\frac{\beta_n}{\alpha_n} a_n - b_n - z_n\bigg| = 0. \] Let \[ R_n = a_{n+N-1} \big(X_n - \sigma \operatorname{Id}\big). \] Then $(R_{nN} : n \in \mathbb{N})$ converges to $\mathcal{R}_0$ locally uniformly on $\mathbb{R}$. 
If there is a compact set $K_0 \subset \mathbb{R}$ with at least $N+1$ points such that \begin{equation} \label{eq:129} \big( R_{nN+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{1,0} \big( K_0, \operatorname{Mat}(2, \mathbb{R}) \big), \end{equation} then $A$ is self-adjoint and \[ \sigma_{\mathrm{sing}}(A) \cap \Lambda = \emptyset \qquad \text{and} \qquad \sigma_{\mathrm{ac}}(A) = \sigma_{\mathrm{ess}}(A) = \overline{\Lambda} \] where \[ \Lambda = \big\{ x \in \mathbb{R} : \operatorname{discr} \mathcal{R}_0(x) < 0 \big\}. \] \end{theorem} \begin{proof} In view of Proposition~\ref{prop:10}, there is $c > 0$ such that for all $k \geq 0$, \begin{align*} a_{kN+i} &= \sum_{j=0}^{k-1} \big( a_{(j+1)N+i} - a_{jN+i} \big) + a_i \\ &\leq c(k+1). \end{align*} Therefore, \[ \sum_{n = 0}^{k_0N+i} \frac{1}{a_n} \geq \sum_{k = 0}^{k_0} \frac{1}{a_{kN+i}} \geq \frac{1}{c} \sum_{k=1}^{k_0} \frac{1}{k}. \] Thus Carleman's condition is satisfied, and so $A$ is self-adjoint. Thanks to Lemma~\ref{lem:3}, for any compact set $K \subset \mathbb{R}$, \[ \big( X_{jN+i} : j \in \mathbb{N}_0 \big), \big( R_{nN+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{1,0} \big( K, \operatorname{Mat}(2, \mathbb{R}) \big). \] Since $\operatorname{discr} \mathcal{R}_0 = \operatorname{discr} \mathcal{R}_i$, by \cite[Criterion 5.8]{Moszynski2009} together with \cite[Proposition 5.7]{Moszynski2009} and \cite[Theorem 5.6]{Moszynski2009}, we conclude that $A$ is purely absolutely continuous on $\Lambda$ and $\overline{\Lambda} \subset \sigma_{\mathrm{ac}}(A)$. Hence, it remains to show that $\sigma_{\mathrm{ess}}(A) \subset \overline{\Lambda}$. To do so, we fix a compact set $K \subset \mathbb{R} \setminus \overline{\Lambda}$ with non-empty interior.
Since $\operatorname{discr} \mathcal{R}_i > 0$ on $K$, for each $x \in K$ the matrix $\mathcal{R}_i(x)$ has two distinct eigenvalues \[ \xi^+(x) = \frac{\operatorname{tr} \mathcal{R}_{i}(x) + \sigma \sqrt{\operatorname{discr} \mathcal{R}_i(x)}}{2}, \qquad\text{and}\qquad \xi^-(x) = \frac{\operatorname{tr} \mathcal{R}_i(x) - \sigma \sqrt{\operatorname{discr} \mathcal{R}_i(x)}}{2}. \] Moreover, $(\operatorname{discr} R_{jN+i} : j \in \mathbb{N})$ converges uniformly on $K$, thus there are $M \geq 1$ and $\delta > 0$ such that for all $j \geq M$ and $x \in K$, \[ \operatorname{discr} R_{jN+i}(x) \geq \delta. \] Therefore, $R_{jN+i}(x)$ has two distinct eigenvalues \[ \xi^+_j(x) = \frac{\operatorname{tr} R_{jN+i}(x) + \sigma \sqrt{\operatorname{discr} R_{jN+i}(x)}}{2}, \qquad\text{and}\qquad \xi^-_j(x) = \frac{\operatorname{tr} R_{jN+i}(x) - \sigma \sqrt{\operatorname{discr} R_{jN+i}(x)}}{2}. \] Since \[ X_n = \sigma \operatorname{Id} + \frac{1}{a_{n+N-1}} R_n, \] the eigenvalues of $X_{jN+i}(x)$ are \[ \lambda_j^+(x) = \sigma + \frac{\xi_j^+(x)}{a_{(j+1)N+i-1}}, \qquad\text{and}\qquad \lambda_j^-(x) = \sigma + \frac{\xi_j^-(x)}{a_{(j+1)N+i-1}}. \] By Theorem \ref{thm:3}, there is a sequence $(\Phi_n : n \geq n_0)$ such that \begin{equation} \label{eq:42} \lim_{n \to \infty} \sup_{x \in K} \bigg\|\frac{\Phi_n(x)}{\prod_{j = n_0}^{n-1} \lambda_j^-(x)} - v^-(x) \bigg\|= 0 \end{equation} where $v^-$ is a continuous eigenvector of $\mathcal{R}_i$ corresponding to $\xi^-$. We set \[ \phi_1 = B_1^{-1} \cdots B^{-1}_{n_0} \Phi_{n_0}, \] and for $n \geq 1$, \begin{equation} \label{eq:15} \phi_{n+1} = B_n \phi_n. \end{equation} Then for $kN+i' > n_0N+i$ with $i' \in \{0, 1, \ldots, N-1\}$, we have \[ \phi_{kN+i'} = \begin{cases} B_{kN+i'}^{-1} B_{kN+i'+1}^{-1} \cdots B_{kN+i-1}^{-1} \Phi_k &\text{if } i' \in \{0, 1, \ldots, i-1\}, \\ \Phi_k & \text{if } i' = i, \\ B_{kN+i'-1} B_{kN+i'-2} \cdots B_{kN+i} \Phi_k & \text{if } i' \in \{i+1, \ldots, N-1\}.
\end{cases} \] Since for $i' \in \{0, 1, \ldots, i-1\}$, \[ \lim_{k \to \infty} B_{kN+i'}^{-1} B_{kN+i'+1}^{-1} \cdots B_{kN+i-1}^{-1} = \mathfrak{B}_{i'}^{-1}(0) \mathfrak{B}_{i'+1}^{-1}(0) \cdots \mathfrak{B}_{i-1}^{-1}(0), \] we obtain \begin{equation} \label{eq:27} \lim_{k \to \infty} \sup_K{ \bigg\| \frac{\phi_{kN+i'}}{\prod_{j = n_0}^{k-1} \lambda^-_j} - \mathfrak{B}_{i'}^{-1}(0) \mathfrak{B}_{i'+1}^{-1}(0) \cdots \mathfrak{B}_{i-1}^{-1}(0) v^- \bigg\| } = 0. \end{equation} Analogously, for $i' \in \{i+1, \ldots, N-1\}$, \begin{equation} \label{eq:28} \lim_{k \to \infty} \sup_K{ \bigg\| \frac{\phi_{kN+i'}}{\prod_{j = n_0}^{k-1} \lambda^-_j} - \mathfrak{B}_{i'-1}(0) \mathfrak{B}_{i'-2}(0) \cdots \mathfrak{B}_{i}(0) v^- \bigg\|} =0. \end{equation} Since $(\phi_n : n \in \mathbb{N})$ satisfies \eqref{eq:15}, the sequence $(u_n(x) : n \in \mathbb{N}_0)$ defined as \[ u_n(x) = \begin{cases} \langle \phi_1(x), e_1 \rangle & \text{if } n = 0, \\ \langle \phi_n(x), e_2 \rangle & \text{if } n \geq 1, \end{cases} \] is a generalized eigenvector associated to $x \in K$. By \eqref{eq:42}, \eqref{eq:27} and \eqref{eq:28}, for each $i' \in \{0, 1, \ldots, N-1\}$, $n > n_0$, and $x \in K$, \begin{equation} \label{eq:29} |u_{nN+i'}(x)| \leq c \prod_{j = n_0}^{n-1} |\lambda_j^-(x)|. \end{equation} Since $(R_{nN+i} : n \in \mathbb{N})$ converges to $\mathcal{R}_i$ uniformly on $K$, and \[ \lim_{n \to \infty} a_n = \infty, \] there is $M \geq n_0$, such that for $n \geq M$, \[ \frac{|\operatorname{tr} R_{nN+i}(x)| + \sqrt{\operatorname{discr} R_{nN+i}(x)}}{ a_{(n+1)N+i-1}} \leq 1. \] Therefore, for $n \geq M$, \[ |\lambda_n^-(x)| = 1 + \frac{ \sigma \operatorname{tr} R_{nN+i}(x) - \sqrt{\operatorname{discr} R_{nN+i}(x)} }{ 2 a_{(n+1)N+i-1}}. \] We next claim the following holds true. 
\begin{claim} \label{clm:1} There are $\delta' > 0$ and $M_0 \geq M$ such that for all $n \geq M_0$ and $x \in K$, \begin{equation} \label{eq:103} n \frac{\sigma \operatorname{tr} R_{nN+i}(x) - \sqrt{\operatorname{discr} R_{nN+i}(x)}} {a_{(n+1)N+i-1}} \leq -1-\delta'. \end{equation} \end{claim} First observe that by the Stolz--Ces\'aro theorem and Proposition~\ref{prop:10}, we have \begin{equation} \label{eq:104} 0 \leq \lim_{n \to \infty} \frac{a_{(n+1)N+i-1}}{n} = \lim_{n \to \infty} \big( a_{(n+1)N+i-1} - a_{nN+i-1} \big) = -\sigma \operatorname{tr} \mathcal{R}_i(x). \end{equation} Since $(R_{nN+i} : n \in \mathbb{N})$ converges to $\mathcal{R}_i$ uniformly on $K$, \begin{equation} \label{eq:105} \lim_{n \to \infty} \Big( \sigma \operatorname{tr} R_{nN+i}(x) - \sqrt{\operatorname{discr} R_{nN+i}(x)} \Big) = \sigma \operatorname{tr} \mathcal{R}_i(x) - \sqrt{\operatorname{discr} \mathcal{R}_i(x)}. \end{equation} Thus, by \eqref{eq:104} and \eqref{eq:105} we get \[ \lim_{n \to \infty} n \frac{ \sigma \operatorname{tr} R_{nN+i}(x) - \sqrt{\operatorname{discr} R_{nN+i}(x)}}{a_{(n+1)N+i-1}} = \begin{cases} -\infty & \text{if } \operatorname{tr} \mathcal{R}_i = 0, \\ -1 - \tfrac{\sqrt{\operatorname{discr} \mathcal{R}_i}}{-\sigma \operatorname{tr} \mathcal{R}_i} & \text{otherwise,} \end{cases} \] which together with \eqref{eq:104} implies \eqref{eq:103}. Now, using Claim \ref{clm:1}, we conclude that for all $n \geq M_0$, \[ \sup_{x \in K} |\lambda^-_n(x)| \leq 1 - \frac{1+\delta'}{2n}. \] Consequently, by \eqref{eq:29}, there is $c' > 0$ such that for all $i' \in \{0, 1, \ldots, N-1\}$ and $n \geq M_0$, \begin{align*} \sup_{x \in K}{|u_{nN+i'}(x)|} &\leq c \prod_{j = n_0}^{n-1} \bigg(1 - \frac{1+\delta'}{2j} \bigg) \\ &\leq c' \exp\bigg(-\frac{1+\delta'}{2} \log (n-1) \bigg). \end{align*} Hence, \[ \sum_{n = 0}^\infty \sup_{x \in K}{|u_n(x)|^2} < \infty, \] thus by the proof of \cite[Theorem 5.3]{Silva2007} we conclude that $\sigma_{\mathrm{ess}}(A) \cap K = \emptyset$.
Since $K$ was any compact subset of $\mathbb{R} \setminus \overline{\Lambda}$, we obtain $\sigma_{\mathrm{ess}}(A) \subseteq \overline{\Lambda}$, and the theorem follows. \end{proof} \begin{remark} By \cite[Proposition 9]{PeriodicIII}, the regularity \eqref{eq:129} holds true for any compact set $K \subset \mathbb{R}$, and all $i \in \{0, 1, \ldots, N-1 \}$, if \[ \bigg( \frac{\alpha_{n-1}}{\alpha_n} a_n - a_{n-1} : n \in \mathbb{N} \bigg), \bigg( \frac{\beta_n}{\alpha_n} a_n - b_n : n \in \mathbb{N} \bigg), \bigg( \frac{1}{a_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1^N(\mathbb{R}). \] \end{remark} \section{Periodic modulations in non-Carleman setup} \label{sec:notCarleman} In this section we shall consider Jacobi matrices such that \[ \sum_{n=0}^\infty \frac{1}{a_n} < \infty. \] Let us start with the following general observation. \begin{proposition} \label{prop:3} Let $N$ be a positive integer and \[ X_n(z) = \prod_{j=n}^{n+N-1} B_j(z). \] Let $K$ be a compact subset of $\mathbb{C}$ containing $0$, and suppose that \begin{equation} \label{eq:115} \sup_{n \geq 1} \sup_{z \in K} \| B_n(z) \| < \infty. \end{equation} Then there is $c > 0$ such that \[ \sup_{z \in K} \| X_n(z) - X_n(0) \| \leq c \sum_{j = 0}^{N-1} \frac{1}{a_{n+j}}. \] In particular, if \begin{equation} \label{eq:116} \sum_{n=0}^\infty \frac{1}{a_n} < \infty, \end{equation} then \[ \sum_{n=1}^\infty \sup_{z \in K} \| X_n(z) - X_n(0) \| < \infty. \] \end{proposition} \begin{proof} Let us notice that \[ B_j(z) - B_j(0) = \begin{pmatrix} 0 & 0 \\ 0 & \frac{z}{a_j} \end{pmatrix}, \] thus \begin{equation} \label{eq:114} \| B_j(z) - B_j(0) \| \leq \frac{|z|}{a_j}.
\end{equation} Since \[ X_n(z) - X_n(0) = \sum_{j=0}^{N-1} \Bigg\{ \prod_{m=j+1}^{N-1} B_{n+m}(0) \Bigg\} \big( B_{n+j}(z) - B_{n+j}(0) \big) \Bigg\{ \prod_{m=0}^{j-1} B_{n+m}(z) \Bigg\}, \] we have \[ \| X_n(z) - X_n(0) \| \leq \sum_{j=0}^{N-1} \Bigg\{ \prod_{m=j+1}^{N-1} \big\| B_{n+m}(0) \big\| \Bigg\} \big\| B_{n+j}(z) - B_{n+j}(0) \big\| \Bigg\{ \prod_{m=0}^{j-1} \big\| B_{n+m}(z) \big\| \Bigg\}. \] Now the conclusion easily follows by \eqref{eq:115} and \eqref{eq:114}. \end{proof} The following corollary reproves, by a different method, the main result of \cite{Yafaev2019}. \begin{corollary}[Yafaev] \label{cor:3} Suppose that Carleman's condition is \emph{not} satisfied and \begin{equation} \label{eq:134} \bigg( \frac{a_n}{\sqrt{a_{n-1} a_{n+1}}} - 1 : n \in \mathbb{N} \bigg) \in \ell^1 \quad \text{and} \quad \bigg( \frac{b_n}{\sqrt{a_{n-1} a_n}} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1. \end{equation} Let \begin{equation} \label{eq:136} q = \lim_{n \to \infty} \frac{b_n}{2 \sqrt{a_{n-1} a_n}}. \end{equation} If $|q| \neq 1$, then for every $z \in \mathbb{C}$ there is a basis $\{ u^+(z), u^-(z) \}$ of generalized eigenvectors associated with $z$ such that \begin{equation} \label{eq:133} u^\pm_n(z) = \bigg(\prod_{j=1}^n \lambda_j^\pm(0) \bigg) \big( 1 + \epsilon^\pm_n(z) \big) \end{equation} where $\lambda^\pm_j(0)$ are the eigenvalues of $B_j(0)$, and $(\epsilon_n^\pm)$ is a sequence of holomorphic functions tending to zero uniformly on any compact subset of $\mathbb{C}$. \end{corollary} \begin{proof} By \cite[Lemma 2.1]{Yafaev2019}, \begin{equation} \label{eq:135} \bigg( \sqrt{\frac{a_{n+1}}{a_n}} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1 \quad \text{and} \quad \lim_{n \to \infty} \sqrt{\frac{a_{n+1}}{a_n}} \geq 1. \end{equation} By Corollary~\ref{cor:2} and Lemma~\ref{lem:2} this implies \begin{equation} \label{eq:57} \bigg( \frac{a_{n-1}}{a_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1.
\end{equation} Next, we write \begin{equation} \label{eq:137} \frac{b_n}{a_n} = \frac{b_n}{\sqrt{a_{n-1} a_n}} \sqrt{\frac{a_{n-1}}{a_n}}, \end{equation} thus by \eqref{eq:134}, \eqref{eq:135} and Corollary~\ref{cor:1} \begin{equation} \label{eq:58} \bigg( \frac{b_n}{a_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1. \end{equation} In particular, by \eqref{eq:57} and \eqref{eq:58} we conclude that \[ (B_n(0) : n \in \mathbb{N}) \in \mathcal{D}_1 \big( \operatorname{GL}(2, \mathbb{R}) \big). \] Now, in view of Proposition~\ref{prop:3}, \[ B_n(z) = B_n(0) + E_n(z) \] where for any compact set $K \subset \mathbb{C}$, \[ \sum_{n=1}^\infty \sup_{z \in K} \| E_n(z) \| < \infty. \] By \eqref{eq:57} and \eqref{eq:58}, there are $r, s \in \mathbb{R}$ such that \[ r = \lim_{n \to \infty} \frac{a_{n-1}}{a_n} \quad \text{and} \quad s = \lim_{n \to \infty} \frac{b_{n}}{a_n}. \] Then the limit of $(B_n(0) : n \in \mathbb{N})$ is \[ \mathcal{B} = \begin{pmatrix} 0 & 1 \\ -r & -s \end{pmatrix}. \] Notice that \[ \operatorname{discr} \mathcal{B} = s^2 - 4r = 4r \Big(\frac{s}{2 \sqrt{r}} - 1 \Big) \Big(\frac{s}{2 \sqrt{r}} + 1 \Big). \] On the other hand, by \eqref{eq:135}, \eqref{eq:136} and \eqref{eq:137}, we can easily deduce that \[ r \in (0, 1] \quad \text{and} \quad q = \frac{s}{2 \sqrt{r}}. \] Therefore, $\operatorname{discr} \mathcal{B} \neq 0$ whenever $\abs{q} \neq 1$. Fix a compact set $K \subset \mathbb{C}$. If $\operatorname{discr} \mathcal{B} > 0$, then $\mathcal{B}$ has two eigenvectors \[ v^+ = \begin{pmatrix} 1 \\ \lambda^+ \end{pmatrix} \qquad v^- = \begin{pmatrix} 1 \\ \lambda^- \end{pmatrix} \] corresponding to the eigenvalues \[ \lambda^+ = \frac{-s + \sqrt{s^2-4r}}{2}, \qquad \lambda^- = \frac{-s - \sqrt{s^2-4r}}{2}. \] Since $\det \mathcal{B} = r$, these eigenvalues are non-zero. Let us consider the system \[ \Phi_{n+1} = \big( B_n(0) + E_n \big) \Phi_n.
\] By Corollary \ref{cor:4}, there is a sequence of mappings $(\Phi^\pm_n : n \geq n_0)$ so that \begin{equation} \label{eq:36} \lim_{n \to \infty} \sup_{z \in K}{\bigg\|\frac{\Phi^\pm_n(z)}{\prod_{j=1}^{n-1} \lambda^\pm_j(0)} - v^\pm \bigg\|} = 0. \end{equation} Since $B_n$ is invertible for any $n$, we set \[ \phi^\pm_1 = B_1^{-1} \cdots B_{n_0}^{-1} \Phi^\pm_{n_0}. \] Then for $n \geq 1$, we define \[ \phi^\pm_{n+1} = B_n \phi_n^\pm. \] Finally, to obtain a generalized eigenvector associated with $z \in K$, we set \[ u_n^\pm(z) = \begin{cases} \langle \phi^\pm_1(z), e_1\rangle & \text{if } n = 0,\\ \langle \phi^\pm_n(z), e_2\rangle & \text{if } n \geq 1. \end{cases} \] Now, by \eqref{eq:36} we easily see that \[ u^\pm_n(z) = \bigg(\prod_{j = 1}^{n-1} \lambda_j^\pm(0) \bigg) \big(\lambda^\pm + \epsilon^\pm_n(z)\big) \] with \[ \lim_{n \to \infty} \sup_{z \in K}{|\epsilon^\pm_n(z)|} = 0. \] Since $(\lambda^\pm_j(0))$ converges to $\lambda^\pm$, we obtain \eqref{eq:133}. When $\operatorname{discr} \mathcal{B} < 0$, the reasoning is analogous. \end{proof} \subsection{Perturbation of the identity} \begin{theorem} \label{thm:8a} Let $N$ be a positive integer. Let $A$ be a Jacobi matrix with $N$-periodically modulated entries so that $\mathfrak{X}_0(0) = \sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1 \}$. Assume that there are $i \in \{0, 1, \ldots, N-1 \}$, and a sequence of positive numbers $(\gamma_n : n \in \mathbb{N}_0)$ satisfying \[ \sum_{n=0}^\infty \frac{1}{\gamma_n} = \infty, \] such that $R_{nN+i}(0) = \gamma_n \big(X_{nN+i}(0) - \sigma \operatorname{Id} \big)$ converges to the non-zero matrix $\mathcal{R}_i$. 
If $\big( R_{nN+i}(0) : n \in \mathbb{N} \big)$ belongs to $\mathcal{D}_1 \big( \operatorname{Mat}(2, \mathbb{R}) \big)$, $\operatorname{discr} \mathcal{R}_i > 0$, and \begin{equation} \label{eq:45} \sum_{n=0}^\infty \frac{1}{a_n} < \infty, \end{equation} then $A$ is self-adjoint if and only if there is $n_0 \geq 1$, such that \begin{equation} \label{eq:72} \sum_{n=n_0}^{\infty} \prod_{j=n_0}^n \bigg| 1 + \frac{\sigma \operatorname{tr} R_{jN+i}(0) + \sqrt{\operatorname{discr} R_{jN+i}(0)}}{2 \gamma_j} \bigg|^2 = \infty. \end{equation} Moreover, if $A$ is self-adjoint, then $\sigma_{\mathrm{ess}}(A) = \emptyset$. \end{theorem} \begin{proof} Since $\operatorname{discr} \mathcal{R}_i > 0$, there are $\delta > 0$ and $n_0 \in \mathbb{N}$, such that for all $j \geq n_0$, \[ \operatorname{discr} R_{jN+i}(0) > \delta. \] Hence, the matrix $R_{jN+i}(0)$ has two distinct eigenvalues \[ \xi^+_j(0) = \frac{\operatorname{tr} R_{jN+i}(0) + \sigma \sqrt{\operatorname{discr} R_{jN+i}(0)}}{2}, \qquad\text{and}\qquad \xi^-_j(0) = \frac{\operatorname{tr} R_{jN+i}(0) - \sigma \sqrt{\operatorname{discr} R_{jN+i}(0)}}{2}, \] thus the matrix $X_{jN+i}(0)= \sigma \operatorname{Id} + \gamma_j^{-1}R_{jN+i}(0)$ has two eigenvalues \[ \lambda^+_j(0) = \sigma + \frac{\xi^+_j(0)}{\gamma_j}, \qquad\text{and}\qquad \lambda^-_j(0) = \sigma + \frac{\xi^-_j(0)}{\gamma_j}. \] Let $K$ be any compact subset of $\mathbb{R}$. By Proposition~\ref{prop:3}, we can write \[ X_{nN+i}(x) = \sigma \operatorname{Id} + \frac{1}{\gamma_n} R_{nN+i}(0) + E_{nN+i}(x) \] where \[ \sum_{n=0}^\infty \sup_{x \in K} \| E_{nN+i}(x) \| < \infty. 
\] Since $(R_{jN+i}(0) : j \in \mathbb{N})$ belongs to $\mathcal{D}_1\big(\operatorname{Mat}(2, \mathbb{R})\big)$, by Corollary~\ref{cor:5}, there are two sequences $\big( \Phi_j^- : j \geq n_0 \big)$ and $\big( \Phi_j^+ : j \geq n_0 \big)$ satisfying \[ \Phi_{j+1} = \big( X_{jN+i}(0) + E_{jN+i}\big) \Phi_j, \] and such that \begin{equation} \label{eq:140} \lim_{n \to \infty} \sup_K \bigg\| \frac{\Phi_n^\pm}{\prod_{j = n_0}^{n-1} \lambda^\pm_j(0)} - v^\pm \bigg\| = 0 \end{equation} for certain $v^-, v^+ \neq 0$. Let \[ \phi_1^\pm = B_1^{-1} B_2^{-1} \cdots B_{n_0}^{-1} \Phi_{n_0}^\pm. \] For $n \geq 1$, we set \[ \phi_{n+1}^\pm = B_n \phi_n^\pm. \] Then for $kN+i' > n_0N+i$ with $i' \in \{0, 1, \ldots, N-1\}$, we have \[ \phi_{kN+i'}^\pm = \begin{cases} B_{kN+i'}^{-1} B_{kN+i'+1}^{-1} \cdots B_{kN+i-1}^{-1} \Phi_k^\pm &\text{if } i' \in \{0, 1, \ldots, i-1\}, \\ \Phi_k^\pm & \text{if } i' = i, \\ B_{kN+i'-1} B_{kN+i'-2} \cdots B_{kN+i} \Phi_k^\pm & \text{if } i' \in \{i+1, \ldots, N-1\}. \end{cases} \] Consequently, we obtain \begin{equation} \label{eq:71} \lim_{k \to \infty} \frac{\phi_{kN+i'}^\pm}{\prod_{j = n_0}^{k-1} \lambda^\pm_j(0)} = \begin{cases} \mathfrak{B}_{i'}^{-1}(0) \mathfrak{B}_{i'+1}^{-1}(0) \cdots \mathfrak{B}_{i-1}^{-1}(0) v^\pm &\text{if } i' \in \{0, 1, \ldots, i-1\}, \\ v^\pm &\text{if } i' = i, \\ \mathfrak{B}_{i'-1}(0) \mathfrak{B}_{i'-2}(0) \cdots \mathfrak{B}_{i}(0) v^\pm &\text{if } i' \in \{i+1, \ldots ,N-1\}, \end{cases} \end{equation} uniformly on $K$. Let \[ u_n^\pm(x) = \begin{cases} \langle \phi_1^\pm(x), e_1 \rangle & \text{if } n = 0, \\ \langle \phi_n^\pm(x), e_2 \rangle & \text{if } n \geq 1. \end{cases} \] Then $(u_n^+(x) : n \in \mathbb{N}_0)$ and $(u_n^-(x) : n \in \mathbb{N}_0)$ are two generalized eigenvectors associated with $x \in K$. Since their asymptotic behaviors are different, they are linearly independent.
By \eqref{eq:140} and \eqref{eq:71}, there is a constant $c>0$ such that for all $n > n_0$, and $x \in K$, \begin{equation} \label{eq:44} \big| u^{\pm}_{nN+i}(x) \big|^2 + \big| u^{\pm}_{nN+i-1}(x) \big|^2 = \big\| \phi^{\pm}_{nN+i}(x) \big\|^2 \geq c \prod_{j = n_0}^{n-1} \big| \lambda_j^{\pm}(0) \big|^2. \end{equation} Moreover, for all $n > n_0$, $i' \in \{0, 1, \ldots, N-1\}$, and $x \in K$, \begin{equation} \label{eq:48a} \big\| \phi^\pm_{nN+i'}(x) \big\|^2 \leq c \prod_{j = n_0}^{n-1} \big| \lambda_j^{\pm}(0) \big|^2. \end{equation} Since $|\lambda_{j}^{-}(0)| \leq |\lambda_{j}^{+}(0)|$, we obtain \begin{equation} \label{eq:73} \sum_{n=n_0+1}^\infty |u_n^-(x)|^2 \leq c \sum_{n=n_0+1}^\infty |u_n^+(x)|^2. \end{equation} Now, observe that if \eqref{eq:72} is satisfied then by \eqref{eq:44} the generalized eigenvector $(u_n^+(x) : n \in \mathbb{N}_0)$ is not square-summable, hence by \cite[Theorem 6.16]{Schmudgen2017}, the operator $A$ is self-adjoint. On the other hand, if \eqref{eq:72} is not satisfied, then by \eqref{eq:48a} and \eqref{eq:73}, all generalized eigenvectors are square-summable, thus by \cite[Theorem 6.16]{Schmudgen2017}, the operator $A$ is not self-adjoint. Finally, let us suppose that $A$ is self-adjoint. By the proof of \cite[Theorem 5.3]{Silva2007}, if \begin{equation} \label{eq:70} \sum_{n=0}^\infty \sup_{x \in K} | u^-_n(x) |^2 < \infty, \end{equation} then $\sigma_{\mathrm{ess}}(A) \cap K = \emptyset$, and since $K$ is any compact subset of $\mathbb{R}$ this implies that $\sigma_{\mathrm{ess}}(A) = \emptyset$. Therefore, to complete the proof of the theorem it is enough to show \eqref{eq:70}. Observe that $E_{nN+i}(0)=0$, thus \[ \lambda_j^+(0) \lambda_j^-(0) = \det X_{jN+i}(0) = \frac{a_{jN+i-1}}{a_{(j+1)N+i-1}}, \] and so \[ \prod_{j=n_0}^k \lambda^-_j(0) \lambda^+_j(0) = \frac{a_{n_0 N+i-1}}{a_{(k+1)N+i-1}}.
\] Consequently, by \eqref{eq:45}, \[ \sum_{k=n_0}^\infty \prod_{j=n_0}^k \big| \lambda^-_j(0) \lambda^+_j(0) \big| = \sum_{k=n_0}^\infty \frac{a_{n_0 N+i-1}}{a_{(k+1)N+i-1}} < \infty, \] which together with $| \lambda^-_j(0) | \leq | \lambda^+_j(0) |$ implies that \[ \sum_{k=n_0}^\infty \prod_{j=n_0}^k \big| \lambda^-_j(0) \big|^2 < \infty. \] Hence, by \eqref{eq:48a} we obtain \eqref{eq:70}, and the theorem follows. \end{proof} By similar reasoning one can prove the following theorem. \begin{theorem} \label{thm:8} Let $N$ be a positive integer. Let $A$ be a Jacobi matrix with $N$-periodically modulated entries so that $\mathfrak{X}_0(0) = \sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1 \}$. Assume that there are $i \in \{0, 1, \ldots, N-1 \}$ and a sequence of positive numbers $(\gamma_n : n \in \mathbb{N}_0)$ such that $R_{nN+i}(0) = \gamma_n \big(X_{nN+i}(0) - \sigma \operatorname{Id}\big)$ converges to the non-zero matrix $\mathcal{R}_i$. If $\big( R_{nN+i}(0) : n \in \mathbb{N} \big)$ belongs to $\mathcal{D}_1 \big( \operatorname{Mat}(2, \mathbb{R}) \big)$, $\operatorname{discr} \mathcal{R}_i < 0$, and \[ \sum_{n=0}^\infty \frac{1}{a_n} < \infty, \] then the operator $A$ is \emph{not} self-adjoint. \end{theorem} Proposition~\ref{prop:7} motivates the following notion. Given a sequence $(w_n : n \in \mathbb{N})$ such that $w_n > 0$ for all $n \in \mathbb{N}$, we introduce the weighted Stolz class. We say that a bounded sequence $(x_n)$ from a normed vector space $X$ belongs to $\mathcal{D}_1(X; w)$ if \[ \sum_{n = 1}^\infty \left\|\Delta x_n \right\| w_n < \infty. \] Moreover, given a positive integer $N$, we say that $x \in \mathcal{D}^N_{1}(X; w)$ if for each $i \in \{0, 1, \ldots, N-1 \}$, \[ \big( x_{nN+i} : n \in \mathbb{N} \big) \in \mathcal{D}_{1}(X; w). \] Similar reasoning to \cite[Corollary 1]{SwiderskiTrojan2019} leads to the following fact.
\begin{proposition} \label{prop:4} For any weight $(w_n)$: \begin{enumerate}[(i), leftmargin=2em] \item If $(x_n), (y_n) \in \mathcal{D}_1(X; w)$, then $(x_n y_n), (x_n + y_n) \in \mathcal{D}_1(X;w)$. \item If $(x_n) \in \mathcal{D}_1(K, \mathbb{C}; w)$, and $\|x_n(t)\| \geq c > 0$ for all $n \in \mathbb{N}$ and $t \in K$, then $(x_n^{-1}) \in \mathcal{D}_1(K, \mathbb{C}; w)$. \end{enumerate} \end{proposition} The following proposition describes a way to construct matrices $(R_n : n \in \mathbb{N})$ satisfying the hypotheses of Theorems \ref{thm:8a} and \ref{thm:8}. \begin{proposition} \label{prop:2} Let $N$ be a positive integer and $i \in \{0, 1, \ldots, N-1\}$. Let $A$ be a Jacobi matrix with $N$-periodically modulated entries so that $\mathfrak{X}_0(0) = \sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1 \}$. Assume that there is a sequence of positive numbers $(\gamma_n : n \in \mathbb{N}_0)$ such that $R_{nN+i}(0) = \gamma_n\big(X_{nN+i}(0) - \sigma \operatorname{Id}\big)$ converges to a non-zero matrix $\mathcal{R}_i$. If there are two $N$-periodic sequences $(\tilde{s}_{i'} : i' \in \mathbb{N}_0)$ and $(\tilde{z}_{i'} : i' \in \mathbb{N}_0)$ such that \begin{equation} \label{eq:117} \tilde{s}_{i'} = \lim_{n \to \infty} \gamma_n \Big(\frac{\alpha_{i'-1}}{\alpha_{i'}} - \frac{a_{nN+i'-1}}{a_{nN+i'}} \Big), \qquad\text{and}\qquad \tilde{z}_{i'} = \lim_{n \to \infty} \gamma_n \Big(\frac{\beta_{i'}}{\alpha_{i'}} - \frac{b_{nN+i'}}{a_{nN+i'}} \Big), \end{equation} then \begin{equation} \label{eq:118} \mathcal{R}_i = \sum_{j=0}^{N-1} \left\{ \prod_{m=j+1}^{N-1} \mathfrak{B}_{i+m}(0) \right\} \begin{pmatrix} 0 & 0 \\ \tilde{s}_{i+j} & \tilde{z}_{i+j} \end{pmatrix} \left\{ \prod_{m=0}^{j-1} \mathfrak{B}_{i+m}(0) \right\} \end{equation} and \begin{equation} \label{eq:118a} \operatorname{tr} \mathcal{R}_i = -\sigma \sum_{j=0}^{N-1} \tilde{s}_{i+j} \frac{\alpha_{i+j}}{\alpha_{i+j-1}}.
\end{equation} Moreover, if there is a weight $(w_n : n \in \mathbb{N})$ so that for all $i' \in \{0, 1, \ldots, N-1 \}$, \begin{equation} \label{eq:120} \bigg( \frac{1}{\gamma_n} : n \in \mathbb{N} \bigg), \bigg( \gamma_n \Big(\frac{\alpha_{i'-1}}{\alpha_{i'}} - \frac{a_{nN+i'-1}}{a_{nN+i'}} \Big) : n \in \mathbb{N} \bigg), \bigg( \gamma_n \Big(\frac{\beta_{i'}}{\alpha_{i'}} - \frac{b_{nN+i'}}{a_{nN+i'}} \Big) : n \in \mathbb{N} \bigg) \in \mathcal{D}_1(\mathbb{R}; w), \end{equation} then \begin{equation} \label{eq:121} \big( R_{nN+i}(0) : n \in \mathbb{N} \big) \in \mathcal{D}_1 \big( \operatorname{Mat}(2, \mathbb{R}); w\big). \end{equation} \end{proposition} \begin{proof} Since \[ X_n(0) - \mathfrak{X}_n(0) = \sum_{i'=0}^{N-1} \left\{ \prod_{m=i'+1}^{N-1} \mathfrak{B}_{n+m}(0) \right\} \big( B_{n+i'}(0) - \mathfrak{B}_{n+i'}(0) \big) \left\{ \prod_{m=0}^{i'-1} B_{n+m}(0) \right\}, \] and $\mathfrak{X}_i(0) = \sigma \operatorname{Id}$, we get \begin{equation} \label{eq:119} R_{nN+i}(0) = \sum_{i'=0}^{N-1} \left\{ \prod_{m=i'+1}^{N-1} \mathfrak{B}_{i+m}(0) \right\} \gamma_n \big( B_{nN+i+i'}(0) - \mathfrak{B}_{i+i'}(0) \big) \left\{ \prod_{m=0}^{i'-1} B_{nN+i+m}(0) \right\}. \end{equation} Observe that \begin{equation} \label{eq:122} \gamma_n \big( B_{nN+i+i'}(0) - \mathfrak{B}_{i+i'}(0) \big) = \gamma_n \begin{pmatrix} 0 & 0 \\ \frac{\alpha_{i+i'-1}}{\alpha_{i+i'}} - \frac{a_{nN+i+i'-1}}{a_{nN+i+i'}} & \frac{\beta_{i+i'}}{\alpha_{i+i'}} - \frac{b_{nN+i+i'}}{a_{nN+i+i'}} \end{pmatrix}, \end{equation} thus by \eqref{eq:117} we obtain \[ \lim_{n \to \infty} \gamma_n \big( B_{nN+i+i'}(0) - \mathfrak{B}_{i+i'}(0) \big) = \begin{pmatrix} 0 & 0 \\ \tilde{s}_{i+i'} & \tilde{z}_{i+i'} \end{pmatrix}. \] Now, \eqref{eq:118} easily follows from \eqref{eq:119}. The proof of \eqref{eq:118a} is analogous to that of Proposition \ref{prop:10}, cf. \eqref{eq:101} and \eqref{eq:18}. We proceed to show \eqref{eq:121}.
By \eqref{eq:120}, for each $i' \in \{0, 1, \ldots, N-1 \}$, \[ \bigg( \frac{a_{nN+i'-1}}{a_{nN+i'}} : n \in \mathbb{N} \bigg), \bigg( \frac{b_{nN+i'}}{a_{nN+i'}} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1(\mathbb{R}; w), \] thus, \begin{equation} \label{eq:123} \big( B_{nN+i'}(0) : n \in \mathbb{N} \big) \in \mathcal{D}_1 \big( \operatorname{GL}(2, \mathbb{R}); w\big). \end{equation} Moreover, in view of \eqref{eq:122}, the condition \eqref{eq:120} implies that \begin{equation} \label{eq:124} \Big( \gamma_n \big( B_{nN+i+i'}(0) - \mathfrak{B}_{i+i'}(0) \big) : n \in \mathbb{N} \Big) \in \mathcal{D}_1 \big( \operatorname{Mat}(2, \mathbb{R}); w\big). \end{equation} Now, \eqref{eq:123} and \eqref{eq:124} together with \eqref{eq:119} imply \eqref{eq:121}. \end{proof} \subsection{Periodic modulations of the Kostyuchenko--Mirzoev class} \label{sec:KM} Let $N$ be a positive integer. We say that a Jacobi matrix $A$ associated to $(a_n : n \in \mathbb{N}_0)$ and $(b_n: n \in \mathbb{N}_0)$ belongs to the $N$-periodically modulated Kostyuchenko--Mirzoev class if there are two $N$-periodic sequences $(\alpha_n : n \in \mathbb{Z})$ and $(\beta_n : n \in \mathbb{Z})$ of positive and real numbers, respectively, such that \[ a_n = \alpha_n \tilde{a}_n \Big( 1 + \frac{f_n}{\gamma_n} \Big) > 0, \qquad\text{and}\qquad b_n = \frac{\beta_n}{\alpha_n} a_n \] where $(f_n : n \in \mathbb{N}_0)$ is a bounded sequence, and $(\tilde{a}_n : n \in \mathbb{N}_0)$ is a positive sequence satisfying \[ \sum_{n=0}^\infty \frac{1}{\tilde{a}_n} < \infty \qquad \text{and} \qquad \lim_{n \to \infty} \gamma_n \Big( 1- \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) = \kappa > 0 \] for a certain positive sequence $(\gamma_n : n \in \mathbb{N}_0)$ tending to infinity. This class contains interesting examples of Jacobi matrices giving rise to self-adjoint operators which do not satisfy Carleman's condition. Moreover, we formulate certain conditions under which the essential spectrum is empty.
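The two defining conditions on $(\tilde{a}_n)$ and $(\gamma_n)$ are easy to test numerically for concrete data. The following Python sketch is purely illustrative; the sample choice $\tilde{a}_n = (n+1)^{2}$, $\gamma_n = n+1$ (i.e. $\kappa = 2$) anticipates the first example of the next subsection.

```python
import math

# Illustrative (assumed) modulating data: a_tilde(n) = (n+1)^kappa with
# kappa = 2, and gamma(n) = n + 1.
kappa = 2.0

def a_tilde(n):
    return (n + 1) ** kappa

def gamma(n):
    return n + 1

# Condition 1: the series sum 1/a_tilde(n) converges (here to pi^2/6).
partial = sum(1.0 / a_tilde(n) for n in range(100000))
assert abs(partial - math.pi ** 2 / 6) < 1e-4

# Condition 2: gamma(n) * (1 - a_tilde(n-1)/a_tilde(n)) tends to kappa > 0.
def r(n):
    return gamma(n) * (1.0 - a_tilde(n - 1) / a_tilde(n))

assert abs(r(10 ** 6) - kappa) < 1e-3
```

For this choice one has exactly $r(n) = (2n+1)/(n+1)$, so the limit $\kappa = 2$ is approached at rate $1/n$.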
This class has been studied in \cite{Kostyuchenko1999, JanasMoszynski2003, Silva2004, Silva2007, Silva2007a} in the case when $N$ is an even integer, $\alpha_n \equiv 1, \beta_n \equiv 0$, and \[ \tilde{a}_n = (n+1)^\kappa, \qquad \gamma_n = n+1 \] for some $\kappa > 1$. \begin{theorem} \label{thm:4} Let $N$ be a positive integer. Let $A$ be a Jacobi matrix from the $N$-periodically modulated Kostyuchenko--Mirzoev class so that $\mathfrak{X}_0(0) = \sigma \operatorname{Id}$ for a certain $\sigma \in \{-1, 1\}$. Suppose that there is a weight $(w_n : n \in \mathbb{N})$ such that \begin{equation} \label{eq:51} \bigg( \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) : n \in \mathbb{N} \bigg), \big( f_n : n \in \mathbb{N} \big), \bigg( \frac{\gamma_{n-1}}{\gamma_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1^N(\mathbb{R}; w), \end{equation} and \begin{equation} \label{eq:51a} \lim_{n \to \infty} \frac{\gamma_{n-1}}{\gamma_n} = 1. \end{equation} Then for all $i \in \{0, 1, \ldots, N-1\}$, the matrices $R_{nN+i}(0) = \gamma_{nN} \big(X_{nN+i}(0) - \sigma \operatorname{Id}\big)$ converge to a non-zero matrix $\mathcal{R}_i$, \begin{equation} \label{eq:33} \big( R_{nN+i}(0) : n \in \mathbb{N} \big) \in \mathcal{D}_1 \big(\operatorname{Mat}(2, \mathbb{R}); w\big), \end{equation} and \[ \sum_{n = 0}^\infty \sup_{z \in K}{ \big\|X_{nN+i}(z) - X_{nN+i}(0) \big\| } < \infty \] for every compact set $K \subset \mathbb{C}$.
Moreover, $\operatorname{tr} \mathcal{R}_i = -\kappa \sigma N$, and \begin{equation} \label{eq:31} \mathcal{R}_i = \sum_{j=0}^{N-1} \frac{\alpha_{i+j-1}}{\alpha_{i+j}} \big( \kappa + \mathfrak{f}_{i+j} - \mathfrak{f}_{i+j-1} \big) \left\{ \prod_{m=j+1}^{N-1} \mathfrak{B}_{i+m}(0) \right\} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \left\{ \prod_{m=0}^{j-1} \mathfrak{B}_{i+m}(0) \right\} \end{equation} where $(\mathfrak{f}_n : n \in \mathbb{Z})$ is the $N$-periodic sequence such that \begin{equation} \label{eq:63} \lim_{n \to \infty} |f_n - \mathfrak{f}_n| = 0. \end{equation} \end{theorem} \begin{proof} To prove \eqref{eq:33}, we are going to apply Proposition \ref{prop:2}. To do so, we need to check \eqref{eq:120}. In fact, it is enough to show that for any $i \in \{0, 1, \ldots, N-1 \}$, \begin{equation} \label{eq:61} \bigg( \gamma_{jN} \Big( \frac{\alpha_{i-1}}{\alpha_{i}} - \frac{a_{jN+i-1}}{a_{jN+i}} \Big) : j \in \mathbb{N} \bigg) \in \mathcal{D}_1(\mathbb{R}; w). \end{equation} We write \[ \gamma_{jN} \Big( \frac{\alpha_{i-1}}{\alpha_{i}} - \frac{a_{jN+i-1}}{a_{jN+i}} \Big) = \frac{\alpha_{i-1}}{\alpha_i} \gamma_{jN} \bigg( 1 - \frac{\tilde{a}_{jN+i-1}}{\tilde{a}_{jN+i}} \Big( 1 + \frac{e_{j}}{\gamma_{jN}} \Big) \bigg) \] where \begin{equation} \label{eq:65} e_{j} = \gamma_{jN} \bigg( \frac{1+\tfrac{f_{jN+i-1}}{\gamma_{jN+i-1}}}{1+\tfrac{f_{jN+i}}{\gamma_{jN+i}}} -1 \bigg) = \frac{\gamma_{jN}}{\gamma_{jN+i-1}} \frac{f_{jN+i-1}- f_{jN+i} \tfrac{\gamma_{jN+i-1}}{\gamma_{jN+i}}} {1+\tfrac{f_{jN+i}}{\gamma_{jN+i}}}. \end{equation} Thus \begin{equation} \label{eq:66} \gamma_{jN} \Big( \frac{\alpha_{i-1}}{\alpha_{i}} - \frac{a_{jN+i-1}}{a_{jN+i}} \Big) = \frac{\alpha_{i-1}}{\alpha_i} \gamma_{jN} \Big( 1 - \frac{\tilde{a}_{jN+i-1}}{\tilde{a}_{jN+i}} \Big) - \frac{\alpha_{i-1}}{\alpha_i} \frac{\tilde{a}_{jN+i-1}}{\tilde{a}_{jN+i}} e_j \end{equation} and by \eqref{eq:51} we easily obtain \eqref{eq:61}.
In view of \eqref{eq:51a} and \eqref{eq:63}, the formula \eqref{eq:65} gives \[ \lim_{j \to \infty} e_{j} = \mathfrak{f}_{i-1} - \mathfrak{f}_i. \] Thus, by \eqref{eq:66} \[ \lim_{j \to \infty} \gamma_{jN} \Big( \frac{\alpha_{i-1}}{\alpha_{i}} - \frac{a_{jN+i-1}}{a_{jN+i}} \Big) = \frac{\alpha_{i-1}}{\alpha_i} (\kappa + \mathfrak{f}_{i} - \mathfrak{f}_{i-1}). \] Fix a compact set $K \subset \mathbb{C}$. Since the condition \eqref{eq:116} is satisfied, by Proposition \ref{prop:3}, for all $z \in K$, \[ X_{jN+i}(z) = \sigma \operatorname{Id} + \frac{1}{\gamma_{jN}} R_{jN+i}(0) + E_{jN+i}(z) \] where \[ \sup_{z \in K} \|E_n(z)\| \leq \frac{c}{\tilde{a}_n}. \] Finally, by Proposition \ref{prop:2}, we obtain \eqref{eq:33} and \eqref{eq:31}. \end{proof} \subsubsection{Examples of modulated sequences} In this section we present examples of sequences $(\tilde{a}_n : n \in \mathbb{N}_0)$ and $(\gamma_n : n \in \mathbb{N}_0)$ satisfying the assumptions of Theorem~\ref{thm:4}. \begin{example} Let $\kappa > 1$ and \[ \tilde{a}_n = (n+1)^{\kappa} \quad \text{and} \quad \gamma_n = n+1. \] Then \[ \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) = \kappa - \frac{\kappa (\kappa-1)}{2n} + \mathcal{O} \Big( \frac{1}{n^2} \Big). \] \end{example} \begin{example} Let \[ \tilde{a}_n = (n+1) \log^2(n+2) \quad \text{and} \quad \gamma_n = n+1. \] Then \[ \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) = 1 + \frac{2}{\log n} - \frac{3}{n \log n} + \mathcal{O} \Big( \frac{1}{n \log^2 n} \Big). \] \end{example} \begin{proposition} \label{prop:5} Suppose that the hypotheses of Theorem~\ref{thm:4} are satisfied with $\gamma_n = n+1$. Assume that $\operatorname{discr} \mathcal{R}_0 > 0$.
Then \begin{enumerate}[(i), leftmargin=2em] \item \label{cas:1} if $-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} > -1$, then the operator $A$ is self-adjoint; \item \label{cas:2} if $-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} < -1$, then the operator $A$ is not self-adjoint. \end{enumerate} Moreover, if the operator $A$ is self-adjoint then $\sigma_{\mathrm{ess}}(A) = \emptyset$. \end{proposition} \begin{proof} We shall consider case \ref{cas:1} only, as the reasoning in case \ref{cas:2} is similar. By Theorem~\ref{thm:8a} it is enough to show that there is $n_0 \geq 1$ so that the series \begin{equation} \label{eq:75} \sum_{n=n_0}^\infty \prod_{j=n_0}^n \bigg| 1 + \frac{\sigma \operatorname{tr} R_{jN}(0) + \sqrt{\operatorname{discr} R_{jN}(0)}}{2 jN} \bigg|^2 \end{equation} diverges. Let us select $\delta > 0$ so that \begin{equation} \label{eq:47} - \kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} - \delta > -1. \end{equation} By Theorem \ref{thm:4}, $\operatorname{tr} \mathcal{R}_0 = -\kappa \sigma N$. Hence, there is $j_0 \in \mathbb{N}$ such that for all $j \geq j_0$, \[ \Big| \big( \sigma \operatorname{tr} R_{jN}(0) + \sqrt{\operatorname{discr} R_{jN}(0)}\big) - \big( -\kappa N + \sqrt{\operatorname{discr} \mathcal{R}_0}\big)\Big| \leq N \delta. \] Thus, \[ 1 + \frac{\sigma \operatorname{tr} R_{jN}(0) + \sqrt{\operatorname{discr} R_{jN}(0)}}{2 jN} \geq 1 + \frac{1}{2jN}\big(-\kappa N + \sqrt{\operatorname{discr} \mathcal{R}_0} - N \delta\big), \] and so \begin{align*} \log \bigg( \prod_{j=j_0}^n \bigg| 1 + \frac{\sigma \operatorname{tr} R_{jN}(0) + \sqrt{\operatorname{discr} R_{jN}(0)}}{2 jN} \bigg| \bigg) &\geq -c + \frac{1}{2} \Big(-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} - \delta \Big) \sum_{j = 1}^n \frac{1}{j} \\ &\geq -c' + \frac{1}{2} \Big( -\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} - \delta \Big) \log n.
\end{align*} Therefore, \[ \prod_{j=j_0}^n \bigg| 1 + \frac{\sigma \operatorname{tr} R_{jN}(0) + \sqrt{\operatorname{discr} R_{jN}(0)}}{2 jN} \bigg|^2 \geq c n^{-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} - \delta} \] which, in view of \eqref{eq:47}, implies that the series \eqref{eq:75} is divergent. \end{proof} \begin{example} For $0 < \tau < 1$ we set \[ \tilde{a}_n = \textrm{e}^{n^\tau} \qquad\text{and}\qquad \gamma_n = \max\{ n^{1-\tau}, 1\}. \] Let $m \in \mathbb{N}$ be chosen so that \[ 1 - \frac{1}{m-2} \leq \tau < 1 - \frac{1}{m-1}. \] Then \begin{align*} 1-\frac{\tilde{a}_{n-1}}{\tilde{a}_n} &= \sum_{j = 1}^{m-1} \frac{(-1)^{j+1}}{j!} \big(n^\tau - (n-1)^\tau \big)^j + \mathcal{O} \big(n^{m(\tau-1)}\big) \\ &= \sum_{j = 1}^{m-1} \frac{(-1)^{j+1}}{j!} n^{\tau j} \Big(1 - (1-n^{-1})^\tau\Big)^j + \mathcal{O}\big(n^{m(\tau-1)}\big). \end{align*} Since \[ 1 - (1-n^{-1})^\tau = \tau n^{-1} - \tfrac{\tau(\tau-1)}{2} n^{-2} + \mathcal{O}\big(n^{-3}\big), \] we obtain \begin{align*} 1-\frac{\tilde{a}_{n-1}}{\tilde{a}_n} &= n^{\tau}\Big(\tau n^{-1} - \tfrac{\tau(\tau-1)}{2} n^{-2} + \mathcal{O}\big(n^{-3}\big) \Big) -\sum_{j = 2}^{m-1} \frac{(-1)^j}{j!} n^{\tau j} \Big(\tau n^{-1} + \mathcal{O}\big(n^{-2}\big)\Big)^j + \mathcal{O}\big(n^{m(\tau-1)}\big)\\ &= \tau n^{\tau-1} - \tfrac{\tau(\tau-1)}{2} n^{\tau-2} + \mathcal{O}\big(n^{\tau-3}\big) - \sum_{j = 2}^{m-1} \frac{(-1)^j}{j!} n^{\tau j} \Big(\tau^j n^{-j} + \mathcal{O}\big(n^{-j-1}\big)\Big) +\mathcal{O}\big(n^{m(\tau-1)}\big)\\ &= \tau n^{\tau-1} - \tfrac{\tau(\tau-1)}{2} n^{\tau-2} - \sum_{j = 2}^{m-1} \frac{(-\tau)^j}{j!} n^{j(\tau-1)} + \mathcal{O}\big(n^{2\tau-3}\big) +\mathcal{O}\big(n^{m(\tau-1)}\big). \end{align*} Hence, \[ \gamma_n \bigg(1- \frac{\tilde{a}_{n-1}}{\tilde{a}_n}\bigg) = \tau + \tfrac{\tau(1-\tau)}{2} n^{-1} - \sum_{j = 2}^{m-1}\frac{(-\tau)^{j}}{j!} n^{-(j-1)(1-\tau)} +\mathcal{O}\big(n^{-2+\tau}\big) +\mathcal{O}\big(n^{-(m-1)(1-\tau)}\big). 
\] In particular, the assumptions of Theorem \ref{thm:4} are satisfied. \end{example} For a given sequence $(\gamma_n : n \in \mathbb{N}_0)$, the following proposition provides an explicit sequence $(\tilde{a}_n : n \in \mathbb{N}_0)$ satisfying the regularity assumptions of Theorem \ref{thm:4}. \begin{proposition} Suppose that $(\gamma_n : n \in \mathbb{N})$ is a positive sequence such that \[ \lim_{n \to \infty} \gamma_n = \infty, \qquad\text{and}\qquad \bigg( \frac{1}{\gamma_n} : n \in \mathbb{N} \bigg) \in \mathcal{D}_1^N(\mathbb{R}; w) \] where $w=(w_n : n \in \mathbb{N})$ is a weight. For $\kappa > 0$ we set \[ \tilde{a}_n = \exp \bigg(\sum_{j=1}^n \frac{\kappa}{\gamma_j} \bigg). \] Then \[ \lim_{n \to \infty} \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) = \kappa, \] and \[ \bigg( \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) : n \in \mathbb{N} \bigg) \in \mathcal{D}_1^N(\mathbb{R}; w). \] \end{proposition} \begin{proof} We have \[ \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) = \gamma_n \bigg( 1 - \exp \Big(-\frac{\kappa}{\gamma_n} \Big) \bigg) = f \Big(\frac{1}{\gamma_n} \Big) \] where \[ f(x) = \frac{1 - \textrm{e}^{-\kappa x}}{x}. \] Observe that \[ \lim_{x \to 0} f(x) = \kappa. \] Moreover, $f$ has an analytic extension to $\mathbb{R}$; thus, by the mean value theorem \[ \Big| f \Big(\frac{1}{\gamma_{n+N}} \Big) - f \Big(\frac{1}{\gamma_n} \Big) \Big| \leq c \Big| \frac{1}{\gamma_{n+N}} - \frac{1}{\gamma_{n}} \Big|, \] from which the conclusion follows. \end{proof} The following proposition settles the question of when Carleman's condition is satisfied in terms of the growth of the sequence $(\gamma_n : n \in \mathbb{N}_0)$.
\begin{proposition} \label{prop:6} Suppose that $(\gamma_n : n \in \mathbb{N})$ and $(\tilde{a}_n : n \in \mathbb{N}_0)$ are positive sequences satisfying \[ \lim_{n \to \infty} \gamma_n = \infty, \qquad \text{and} \qquad \lim_{n \to \infty} \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big) = \kappa > 0. \] Then \begin{enumerate}[(i), leftmargin=2em] \item \label{prop:6:a} if $\lim_{n \to \infty} \frac{\gamma_n}{n} = 0$, then $\sum_{n=0}^\infty \frac{1}{\tilde{a}_n} < \infty$; \item \label{prop:6:b} if $\lim_{n \to \infty} \frac{\gamma_n}{n} = \infty$, then $\sum_{n=0}^\infty \frac{1}{\tilde{a}_n} = \infty$. \end{enumerate} \end{proposition} \begin{proof} We shall prove \ref{prop:6:a} only, as the proof of \ref{prop:6:b} is similar. Let \[ r_n = \gamma_n \Big( 1 - \frac{\tilde{a}_{n-1}}{\tilde{a}_n} \Big). \] There is $n_0$ such that for $n \geq n_0$, \[ \frac{\gamma_n}{n} \leq \frac{\kappa}{4} \leq \frac{r_n}{2}. \] Hence, for $j \geq n_0$, \[ \frac{\tilde{a}_{j-1}}{\tilde{a}_j} = 1 - \frac{r_j}{\gamma_j} \leq 1 - \frac{2}{j}, \] and so \[ \frac{\tilde{a}_{n_0-1}}{\tilde{a}_n} = \prod_{j = n_0}^n \frac{\tilde{a}_{j-1}}{\tilde{a}_j} \leq \prod_{j=n_0}^n \bigg( 1 - \frac{2}{j} \bigg). \] Consequently, for a certain $c>0$, \[ \frac{\tilde{a}_{n_0-1}}{\tilde{a}_n} \leq c n^{-2}, \] which implies that \[ \sum_{n=0}^\infty \frac{1}{\tilde{a}_n} < \infty.\qedhere \] \end{proof} The following proposition has a proof similar to that of Proposition \ref{prop:5}. \begin{proposition} Suppose that the hypotheses of Theorem~\ref{thm:4} are satisfied for a sequence $(\gamma_n : n \in \mathbb{N}_0)$ such that \[ \lim_{n \to \infty} \frac{\gamma_n}{n} = 0. \] Assume that $\operatorname{discr} \mathcal{R}_0 > 0$.
Then \begin{enumerate}[(i), leftmargin=2em] \item if $-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} > 0$ then the operator $A$ is self-adjoint; \item if $-\kappa + \frac{1}{N} \sqrt{\operatorname{discr} \mathcal{R}_0} < 0$ then the operator $A$ is not self-adjoint. \end{enumerate} Moreover, if $A$ is self-adjoint then $\sigma_{\mathrm{ess}}(A) = \emptyset$. \end{proposition} \subsubsection{Construction of the modulating sequences} In this section we present examples of sequences $(\alpha_n : n \in \mathbb{N}_0)$ and $(\beta_n : n \in \mathbb{N}_0)$ for which one can compute $\operatorname{tr} \mathcal{R}_0$ and $\operatorname{discr} \mathcal{R}_0$. The first example illustrates that the sign of $\operatorname{discr} \mathcal{R}_0$ may be positive or negative. \begin{example} \label{ex:1} Let $N=3$, and \[ \alpha_n \equiv 1, \qquad\text{and}\qquad \beta_n \equiv 1. \] Then $\sigma = 1$ and \[ \mathcal{R}_0 = \begin{pmatrix} -\kappa - \mathfrak{f}_0 + \mathfrak{f}_2 & \kappa - \mathfrak{f}_0 + \mathfrak{f}_1 \\ -\kappa + \mathfrak{f}_1 - \mathfrak{f}_2 & -2 \kappa + \mathfrak{f}_0 - \mathfrak{f}_2 \end{pmatrix}. \] Consequently, \[ \operatorname{tr} \mathcal{R}_0 = - 3 \kappa \qquad \text{and} \qquad \operatorname{discr} \mathcal{R}_0 = 4 \Big( \mathfrak{f}_0^2 + \mathfrak{f}_1^2 + \mathfrak{f}_2^2 - \mathfrak{f}_0 \mathfrak{f}_1 - \mathfrak{f}_0 \mathfrak{f}_2 - \mathfrak{f}_1 \mathfrak{f}_2 \Big) - 3 \kappa^2. \] In particular, taking $\mathfrak{f}_0 = \mathfrak{f}_1 = 0$ and $\mathfrak{f}_2 = t$, we obtain \[ \sign{\operatorname{discr} \mathcal{R}_0} = \begin{cases} 1 & |t| > \frac{\sqrt{3}}{2} \kappa, \\ 0 & |t| = \frac{\sqrt{3}}{2} \kappa, \\ -1 & |t| < \frac{\sqrt{3}}{2} \kappa. \end{cases} \] \end{example} In the following example, the discriminant of $\mathcal{R}_0$ is non-negative regardless of $(\mathfrak{f}_n : n \in \mathbb{Z})$.
\begin{example} \label{ex:2} Let $N=4$, and \[ \alpha_n \equiv 1, \qquad \beta_{n} = \begin{cases} (-1)^{n/2} & \text{$n$ even}, \\ 0 & \text{otherwise.} \end{cases} \] Then $\sigma = 1$ and \[ \mathcal{R}_0 = \begin{pmatrix} -2 \kappa - \mathfrak{f}_0 + \mathfrak{f}_1 - \mathfrak{f}_2 + \mathfrak{f}_3 & -\mathfrak{f}_0 + 2 \mathfrak{f}_1 - \mathfrak{f}_2 \\ 0 & -2 \kappa + \mathfrak{f}_0 - \mathfrak{f}_1 + \mathfrak{f}_2 - \mathfrak{f}_3 \end{pmatrix}. \] Consequently, \[ \operatorname{tr} \mathcal{R}_0 = -4 \kappa \qquad \text{and} \qquad \operatorname{discr} \mathcal{R}_0 = 4 \bigg( \sum_{j=0}^3 (-1)^j \mathfrak{f}_j \bigg)^2 \geq 0. \] \end{example} The following theorem provides a large class of modulating sequences for which $\operatorname{discr} \mathcal{R}_0$ is always non-negative. \begin{theorem} \label{thm:5} Let $N$ be an even integer and $\kappa > 0$. Let $(\mathfrak{f}_n : n \in \mathbb{Z})$ be an $N$-periodic sequence of non-negative numbers and $(\alpha_n : n \in \mathbb{Z})$ be an $N$-periodic sequence of positive numbers satisfying \begin{equation} \label{eq:55} \alpha_0 \alpha_2 \cdots \alpha_{N-2} = \alpha_1 \alpha_3 \cdots \alpha_{N-1}. \end{equation} Let $\mathfrak{B}_n$ denote the transfer matrix associated with sequences $(\alpha_n : n \in \mathbb{Z})$ and $\beta_n \equiv 0$. We set \[ \mathcal{R}_0 = \sum_{j=0}^{N-1} \frac{\alpha_{j-1}}{\alpha_{j}} \big( \kappa + \mathfrak{f}_{j} - \mathfrak{f}_{j-1} \big) \left\{ \prod_{m=j+1}^{N-1} \mathfrak{B}_{m}(0) \right\} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \left\{ \prod_{m=0}^{j-1} \mathfrak{B}_{m}(0) \right\}. \] Then \[ \operatorname{tr} \mathcal{R}_0 = -(-1)^{N/2} N \kappa \qquad \text{and} \qquad \operatorname{discr} \mathcal{R}_0 = 4 \bigg( \sum_{j=0}^{N-1} (-1)^j \mathfrak{f}_j \bigg)^2. \] \end{theorem} \begin{proof} Let $N = 2M$.
By \cite[Proposition 3]{PeriodicIII}, for all $\ell \geq k \geq 0$ we have \begin{equation} \label{eq:53} \prod_{m=k}^{\ell} \mathfrak{B}_m(0) = \begin{pmatrix} -\frac{\alpha_{k-1}}{\alpha_k} \mathfrak{p}^{[k+1]}_{\ell-k-1}(0) & \mathfrak{p}^{[k]}_{\ell-k}(0) \\ -\frac{\alpha_{k-1}}{\alpha_k} \mathfrak{p}^{[k+1]}_{\ell-k}(0) & \mathfrak{p}^{[k]}_{\ell-k+1}(0) \end{pmatrix}. \end{equation} Observe that for $k \geq 1$ and $j \geq 0$, \begin{align*} \prod_{m=j}^{j+2k-1} \mathfrak{B}_m(0) &= \prod_{m=0}^{k-1} \Big( \mathfrak{B}_{j+2m+1}(0) \mathfrak{B}_{j+2m}(0) \Big) = \prod_{m=0}^{k-1} \begin{pmatrix} -\frac{\alpha_{j+2m-1}}{\alpha_{j+2m}} & 0 \\ 0 & -\frac{\alpha_{j+2m}}{\alpha_{j+2m+1}} \end{pmatrix} \\&= (-1)^k \begin{pmatrix} \frac{\alpha_{j+2k-3}}{\alpha_{j+2k-2}} \ldots \frac{\alpha_{j+1}}{\alpha_{j+2}} \frac{\alpha_{j-1}}{\alpha_j} & 0 \\ 0 & \frac{\alpha_{j+2k-2}}{\alpha_{j+2k-1}} \ldots \frac{\alpha_{j+2}}{\alpha_{j+3}} \frac{\alpha_{j}}{\alpha_{j+2}} \end{pmatrix}. \end{align*} In particular, by \eqref{eq:55}, we obtain \[ \prod_{m=0}^{N-1} \mathfrak{B}_m(0) = (-1)^M \operatorname{Id}. \] Moreover, by \eqref{eq:53}, for all $j \geq 0$ and $n \geq 0$, \begin{equation} \label{eq:54} \mathfrak{p}^{[j]}_{n}(0) = \begin{cases} (-1)^k \frac{\alpha_{j+2k-2}}{\alpha_{j+2k-1}} \ldots \frac{\alpha_{j+2}}{\alpha_{j+3}} \frac{\alpha_{j}}{\alpha_{j+2}} & n=2k, \\ 0 & \text{otherwise.} \end{cases} \end{equation} Setting \[ s_j = \kappa + \mathfrak{f}_j - \mathfrak{f}_{j-1}, \] by \eqref{eq:31}, we write \[ \mathcal{R}_0 = \sum_{j=0}^{N-1} \frac{\alpha_{j-1}}{\alpha_j} s_j \left\{ \prod_{m=j+1}^{N-1} \mathfrak{B}_{m}(0) \right\} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \left\{ \prod_{m=0}^{j-1} \mathfrak{B}_{m}(0) \right\}. 
\] Therefore, by \eqref{eq:53}, \[ \mathcal{R}_0 = \sum_{j=0}^{N-1} \frac{\alpha_{j-1}}{\alpha_j} s_j \begin{pmatrix} -\frac{\alpha_{j}}{\alpha_{j+1}} \mathfrak{p}^{[j+2]}_{N-j-3}(0) & \mathfrak{p}^{[j+1]}_{N-j-2}(0) \\ -\frac{\alpha_{j}}{\alpha_{j+1}} \mathfrak{p}^{[j+2]}_{N-j-2}(0) & \mathfrak{p}^{[j+1]}_{N-j-1}(0) \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} -\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{j-2}(0) & \mathfrak{p}^{[0]}_{j-1}(0) \\ -\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{j-1}(0) & \mathfrak{p}^{[0]}_{j}(0) \end{pmatrix}, \] and consequently, \[ \mathcal{R}_0 = \sum_{j=0}^{N-1} \frac{\alpha_{j-1}}{\alpha_j} s_j \begin{pmatrix} -\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{j-2}(0) \mathfrak{p}^{[j+1]}_{N-j-2}(0) & \mathfrak{p}^{[j+1]}_{N-j-2}(0) \mathfrak{p}^{[0]}_{j-1}(0) \\ -\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[j+1]}_{N-j-1}(0) \mathfrak{p}^{[1]}_{j-2}(0) & \mathfrak{p}^{[j+1]}_{N-j-1}(0) \mathfrak{p}^{[0]}_{j-1}(0) \end{pmatrix}. \] In view of \eqref{eq:54}, we have \[ \mathcal{R}_0 = \sum_{j=0}^{N-1} \frac{\alpha_{j-1}}{\alpha_j} s_j \begin{pmatrix} -\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{j-2}(0) \mathfrak{p}^{[j+1]}_{N-j-2}(0) & 0 \\ 0 & \mathfrak{p}^{[j+1]}_{N-j-1}(0) \mathfrak{p}^{[0]}_{j-1}(0) \end{pmatrix}. \] By considering even and odd $j$, the last formula can be written in the form \begin{align*} \mathcal{R}_0 &= \sum_{k=0}^{M-1} \frac{\alpha_{2k-1}}{\alpha_{2k}} s_{2k} \begin{pmatrix} -\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{2k-2}(0) \mathfrak{p}^{[2k+1]}_{N-2k-2}(0) & 0 \\ 0 & 0 \end{pmatrix} \\ &\phantom{=}+ \sum_{k=0}^{M-1} \frac{\alpha_{2k}}{\alpha_{2k+1}} s_{2k+1} \begin{pmatrix} 0 & 0 \\ 0 & \mathfrak{p}^{[2k+2]}_{N-2k-2}(0) \mathfrak{p}^{[0]}_{2k}(0) \end{pmatrix}. 
\end{align*} Now, using \eqref{eq:54} and \eqref{eq:55} we obtain \[ \mathfrak{p}^{[2k+2]}_{N-2k-2}(0) \mathfrak{p}^{[0]}_{2k}(0) = (-1)^{M-1} \frac{\alpha_0 \alpha_2 \ldots \alpha_{N-2}}{\alpha_{1} \alpha_3 \ldots \alpha_{N-1}} \cdot \frac{\alpha_{2k+1}}{\alpha_{2k}} = (-1)^{M-1} \cdot 1 \cdot \frac{\alpha_{2k+1}}{\alpha_{2k}}. \] Analogously one can show \[ -\frac{\alpha_{N-1}}{\alpha_0} \mathfrak{p}^{[1]}_{2k-2}(0) \mathfrak{p}^{[2k+1]}_{N-2k-2}(0) = (-1)^{M-1} \frac{\alpha_{2k}}{\alpha_{2k-1}}. \] Therefore, \begin{align*} \mathcal{R}_0 &= (-1)^{M-1} \begin{pmatrix} \sum_{k=0}^{M-1} s_{2k} & 0 \\ 0 & \sum_{k=0}^{M-1} s_{2k+1} \end{pmatrix} \\ &= -\sigma \begin{pmatrix} M \kappa + \sum_{k=0}^{M-1} \big( \mathfrak{f}_{2k} - \mathfrak{f}_{2k-1} \big) & 0 \\ 0 & M \kappa + \sum_{k=0}^{M-1} \big( \mathfrak{f}_{2k+1} - \mathfrak{f}_{2k} \big) \end{pmatrix}, \end{align*} and the conclusion readily follows. \end{proof} \begin{bibliography}{jacobi} \end{bibliography} \end{document}
\begin{document} \title{Atom diode: Variants, stability, limits, and adiabatic interpretation} \author{A. Ruschhaupt} \email[Email address: ]{[email protected]} \affiliation{Departamento de Qu\'\i mica-F\'\i sica, Universidad del Pa\'\i s Vasco, Apdo. 644, 48080 Bilbao, Spain} \author{J. G. Muga} \email[Email address: ]{[email protected]} \affiliation{Departamento de Qu\'\i mica-F\'\i sica, Universidad del Pa\'\i s Vasco, Apdo. 644, 48080 Bilbao, Spain} \begin{abstract} We examine and explain the stability properties of the ``atom diode'', a laser device that lets the ground state atom pass in one direction but not in the opposite direction. The diodic behavior and the variants that result from using different laser configurations may be understood with an adiabatic approximation. The conditions for the breakdown of the approximation, which also imply the failure of the diode, are analyzed. \end{abstract} \pacs{03.75.Be,42.50.Lc} \maketitle \section{Introduction} In a previous paper \cite{ruschhaupt_2004_diode} we proposed simple models for an ``atom diode'', a laser device that lets the neutral atom in its ground state pass in one direction (conventionally from left to right) but not in the opposite direction for a range of incident velocities. A diode is a very basic control element in a circuit, so many applications may be envisioned to trap or cool atoms, or to build logic gates for quantum information processing in atom chips or other setups. Similar ideas have been developed independently by Raizen and coworkers \cite{raizen.2005,dudarev.2005}. While their work has emphasized phase space compression, we looked for the laser interactions leading to the most effective diode. This led us to consider first STIRAP \cite{STIRAP} (stimulated Raman adiabatic passage) transitions and three level atoms, although we also proposed schemes for two-level atoms.
In this paper we continue the investigation on the atom diode, concentrating on its two-level version, by examining its stability with respect to parameter changes, and also several variants, including in particular the ones discussed in \cite{ruschhaupt_2004_diode} and \cite{raizen.2005}. We shall see that the behaviour of the diode, its properties, and its working parameter domain can be understood and quantified with the aid of an adiabatic basis (equivalently, partially dressed states) obtained by diagonalizing the effective interaction potential. We restrict the atomic motion, similarly to \cite{ruschhaupt_2004_diode}, to one dimension. This occurs when the atom travels in waveguides formed by optical fields \cite{schneble.2003}, or by electric or magnetic interactions due to charged or current-carrying structures \cite{folman.2002}. It can also be a good approximation in free space for atomic packets which are broad in the laser direction, perpendicular to the incident atomic direction \cite{HHM05}. Three-dimensional effects should not imply a dramatic disturbance, in any case, as we shall analyze elsewhere. \begin{figure} \caption{(a) Schematic action of the different lasers on the atom levels and (b) location of the different laser potentials.} \label{fig1} \end{figure} The basic setting can be seen in Fig. \ref{fig1}, and consists of three, partially overlapping laser fields: two of them are state-selective mirror lasers blocking the excited ($|2\rangle$) and ground ($|1\rangle$) states on the left and right, respectively, of a central pumping laser on resonance with the atomic transition. They are all assumed to be traveling waves perpendicular to the atomic motion direction.
The corresponding effective, time-independent, interaction-picture Hamiltonian for the two-level atom may be written, using $|1\rangle \equiv {1 \choose 0}$ and $|2\rangle \equiv {0 \choose 1}$, as \begin{eqnarray} \bm{H} = \frac{\hat{p}_x^2}{2m} + \underbrace{\frac{\hbar}{2} \left(\begin{array}{cc} W_1(x) & \Omega (x)\\ \Omega (x) & W_2(x) \end{array}\right)}_{\bm{M}(x)}, \label{ham2} \end{eqnarray} where $\Omega(x)$ is the Rabi frequency for the resonant transition and the effective reflecting potentials are $W_1(x)\hbar/2$ and $W_2(x)\hbar/2$. $\hat{p}_x = -i\hbar \frac{\partial}{\partial x}$ is the momentum operator and $m$ is the mass (corresponding to Neon in all numerical examples). Spontaneous decay is neglected here for simplicity, but it could be incorporated following \cite{ruschhaupt_2004_diode}. It implies both perturbing and beneficial effects for unidirectional transmission. Notice that in the ideal diode operation the ground state atom must be excited during its left-to-right crossing of the device. In principle, excited atoms could cross the diode ``backwards'', i.e., from right to left, but an irreversible decay from the excited state to the ground state would block any backward motion \cite{ruschhaupt_2004_diode}. The behaviour of this device is quantified by the scattering transmission and reflection amplitudes for left (l) and right (r) incidence. Using $\alpha$ and $\beta$ to denote the channels, $\alpha=1,2$, $\beta=1,2$, let us denote by $R^l_{\beta\alpha} (v)$ ($R^r_{\beta\alpha} (v)$) the scattering amplitudes for incidence with modulus of velocity $v>0$ from the left (right) in channel $\alpha$, and reflection in channel $\beta$. Similarly we denote by $T^l_{\beta\alpha} (v)$ ($T^r_{\beta\alpha} (v)$) the scattering amplitude for incidence in channel $\alpha$ from the left (right) and transmission in channel $\beta$. 
For some figures, it will be preferable to use an alternative notation in which the information of the superscript ($l/r$) is contained instead in the sign of the velocity argument $w$, positive for left incidence and negative otherwise, \begin{eqnarray} R_{\beta\alpha}(w):= \left\{ \begin{array}{ll} R^l_{\beta\alpha}(\fabs{w}),& {\rm{if}}\; w>0 \\ R^r_{\beta\alpha}(\fabs{w}),& {\rm{if}}\; w<0 \end{array} \right. \nonumber\\ T_{\beta\alpha}(w):= \left\{ \begin{array}{ll} T^l_{\beta\alpha}(\fabs{w}),& {\rm{if}}\; w>0 \\ T^r_{\beta\alpha}(\fabs{w}),& {\rm{if}}\; w<0 \end{array} \right. \nonumber \end{eqnarray} The ideal diode configuration must be such that \begin{eqnarray} \fabsq{T_{21}^l (v)}\approx \fabsq{R_{11}^r (v)} \approx 1, \label{di1} \\ \fabsq{R_{\beta 1}^l (v)} \approx \fabsq{T_{\beta 1}^r (v)} \approx \fabsq{R_{21}^r (v)} \approx \fabsq{T_{11}^l (v)} \approx 0, \label{di2} \end{eqnarray} with $\beta=1,2$, in an interval $v_{min}<v<v_{max}$ of the modulus of the velocity. In words, there must be full transmission for left incidence and full reflection for right incidence in the ground state. This was achieved in \cite{ruschhaupt_2004_diode} with a particular choice of the potential in which $\Omega(x)$, $W_1(x)$, and $W_2(x)$ were related to two partially overlapping functions $f_1(x)$, $f_2(x)$. However, other forms are also possible, so we shall deal here with the more general structure of Eq. (\ref{ham2}). We shall use Gaussian laser profiles \begin{eqnarray*} &W_1 (x) = \hat{W}_1 \;\Pi(x,d),\quad W_2 (x) = \hat{W}_2 \;\Pi(x,-d),&\\ &\Omega (x) = \hat{\Omega}\; \Pi (x,0),& \end{eqnarray*} where $$ \Pi (x,x_0)=\exp[-(x-x_0)^2/(2\Delta x^2)] $$ and $\Delta x = 15 \mum$. In section \ref{s2} we shall examine the stability and limits of the ``diodic'' behavior while the variants of the atom diode are presented in section \ref{s3}. They are explained in section \ref{s4} with the aid of an adiabatic basis and approximation. 
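For orientation, the coupling matrix $\bm{M}(x)$ built from these Gaussian profiles can be assembled and diagonalized numerically; its eigenvalues are the adiabatic potential curves invoked in section \ref{s4}. The following Python sketch uses NumPy with assumed, purely illustrative values for $\hat{\Omega}$, $\hat{W}_1$, $\hat{W}_2$, and $d$ (they are not the parameters of the figures):

```python
import numpy as np

# Illustrative magnitudes only -- Omega_hat, W1_hat, W2_hat are assumed
# sample strengths (in units of a reference frequency), not the values
# used in the paper's figures; lengths are in micrometers.
Omega_hat, W1_hat, W2_hat = 1.0, 2.0, 2.0
d, Delta_x = 30.0, 15.0   # Delta_x = 15 micrometers as in the text

def profile(x, x0):
    """Gaussian profile Pi(x, x0) = exp(-(x - x0)^2 / (2 Delta_x^2))."""
    return np.exp(-(x - x0) ** 2 / (2.0 * Delta_x ** 2))

def M(x):
    """2x2 coupling matrix M(x) in units of hbar/2: the ground-state
    mirror W_1 is centred at +d, the excited-state mirror W_2 at -d,
    and the pumping laser Omega at the origin (cf. Fig. 1(b))."""
    O = Omega_hat * profile(x, 0.0)
    return np.array([[W1_hat * profile(x, d), O],
                     [O, W2_hat * profile(x, -d)]])

# Adiabatic levels: eigenvalues of the symmetric matrix M(x) on a grid
# along the diode axis, returned in ascending order by eigvalsh.
xs = np.linspace(-4 * d, 4 * d, 401)
levels = np.array([np.linalg.eigvalsh(M(x)) for x in xs])
```

By construction the ground-state mirror sits at $x = +d$ and the excited-state mirror at $x = -d$, so the resulting level structure is asymmetric whenever $\hat{W}_1 \neq \hat{W}_2$, in line with the asymmetric variants discussed below.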
The paper ends with a summary and an appendix on the adiabaticity criterion. \section{``Diodic'' behaviour and its limits\label{s2}} \begin{figure}\label{fig2} \end{figure} The behavior of the two-level atom diode is examined by solving numerically the stationary Schr\"odinger equation, \begin{eqnarray} E_v \bm{\Psi} (x) = \bm{H} \bm{\Psi} (x), \label{stat} \end{eqnarray} with the Hamiltonian given by Eq. (\ref{ham2}) and $E_v = \frac{m}{2}v^2$. The results, obtained by the ``invariant imbedding method'' \cite{singer.1982,band.1994}, are shown in Fig. \ref{fig2} for different parameters. In the plotted velocity range, the ``diodic'' behaviour holds, i.e. Eqs. (\ref{di1}) and (\ref{di2}) are fulfilled. (The transmission and reflection probabilities for incidence in the ground state, $|R_{21}^{l/r}|^2$ and $|T_{11}^{l/r}|^2$, which are not shown in the Figure are zero.) The device may be asymmetric, i.e. even with $\hat{W}_1 \neq \hat{W}_2$ there can be a ``diodic'' behaviour, see some examples in Fig. \ref{fig2}. Note in passing that the device works as a diode for incidence in the excited state too, but in the opposite direction, namely, $|T^r_{12}(v)|^2 \approx |R^l_{22}(v)|^2\approx 1$, whereas all other probabilities for incidence in the excited state are approximately zero. Now let us examine the stability of the diode with respect to changes in the separation between laser field centers $d$. We define $v_{max}$ and $v_{min}$ as the upper and lower limits where diodic behaviour holds, by imposing that all scattering probabilities from the ground state be small except the ones in Eq. (\ref{di1}) (i.e., the transmission probability from $1$ to $2$ for left incidence and the reflection probability from $1$ to $1$ for right incidence). 
More precisely, $v_{max/min}$ are chosen as the limiting values such that $\sum_{\beta=1}^2 (|R_{\beta 1}^l|^2+|T^r_{\beta 1}|^2) +(|R^r_{21}|^2+|T^l_{11}|^2)+(1-|T_{21}^l|^2)+ (1-|R^r_{11}|^2)<\epsilon$ for all $v_{min}< v <v_{max}$. In Fig. \ref{fig3}, $v_{max/min}$ are plotted versus the distance between the laser centers, $d$, for different combinations of $\hat{\Omega}$, $\hat{W}_1$, and $\hat{W}_2$. For the intensities considered, $v_{max}$ is in the ultra-cold regime below $1$ m/s. In the $v_{max}$ surface, unfilled boxes indicate reflection failure for right incidence and filled circles indicate transmission failure for left incidence. In the $v_{min}$ surface, the failure is always due to transmission failure for left incidence. We see that the valid $d$ range for ``diodic'' behaviour can be increased by increasing the Rabi frequency $\hat{\Omega}$, compare e.g. (a) and (b), or (c) and (d). Moreover, higher mirror intensities increase $v_{max}$ at the plateau but also make it narrower, compare e.g. (b) and (d). This narrowing can be simply compensated by increasing $\hat{\Omega}$ too, compare e.g. (a) and (d). 
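The exact amplitudes discussed above were obtained with the invariant imbedding method. As an independent illustration of how such coupled-channel amplitudes can be computed, the following Python sketch (in dimensionless toy units with $\hbar=m=1$ and illustrative Gaussian intensities, not the SI parameters of the figures) integrates the stationary two-channel Schr\"odinger equation directly and extracts reflection and transmission amplitudes by plane-wave matching; flux conservation provides a built-in consistency check.

```python
import numpy as np

# Toy two-channel stationary scattering solver (hbar = m = 1, dimensionless
# units -- NOT the paper's SI parameters).  The coupled equation
# psi'' = 2 (V(x) - E) psi is integrated backwards from a pure outgoing
# wave; R and T follow from matching plane waves on the left.
E = 2.0                      # total energy E_v (above both toy barriers)
K = np.sqrt(2.0 * E)         # asymptotic wavenumber, equal in both channels
L = 6.0                      # half-width of the integration box; V(+-L) ~ 0

def V(x):
    """Toy 2x2 potential mimicking the W1 / Omega / W2 Gaussian layout."""
    w1 = 1.5 * np.exp(-(x - 1.0) ** 2 / 0.5)
    w2 = 1.5 * np.exp(-(x + 1.0) ** 2 / 0.5)
    om = 1.0 * np.exp(-x ** 2 / 0.5)
    return np.array([[w1, om], [om, w2]])

def rhs(x, y):
    psi, dpsi = y[:2], y[2:]
    return np.concatenate([dpsi, 2.0 * (V(x) - E * np.eye(2)) @ psi])

def integrate_back(psi_R, dpsi_R, n=6000):
    """Fixed-step RK4 from x = +L down to x = -L."""
    h = -2.0 * L / n
    x = L
    y = np.concatenate([psi_R, dpsi_R]).astype(complex)
    for _ in range(n):
        k1 = rhs(x, y)
        k2 = rhs(x + h / 2, y + h / 2 * k1)
        k3 = rhs(x + h / 2, y + h / 2 * k2)
        k4 = rhs(x + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y[:2], y[2:]

def scatter_left(channel=0):
    """R and T amplitudes for unit incidence from the left in `channel`."""
    A = np.zeros((2, 2), dtype=complex)   # incoming coefficients at x = -L
    B = np.zeros((2, 2), dtype=complex)   # reflected coefficients at x = -L
    phase = np.exp(1j * K * L)
    for j in range(2):
        e = np.zeros(2, dtype=complex)
        e[j] = 1.0
        # base solution j: pure transmitted wave e_j exp(i K x) at x = +L
        psi, dpsi = integrate_back(e * phase, 1j * K * e * phase)
        A[:, j] = 0.5 * (psi + dpsi / (1j * K)) * np.exp(1j * K * L)
        B[:, j] = 0.5 * (psi - dpsi / (1j * K)) * np.exp(-1j * K * L)
    inc = np.zeros(2, dtype=complex)
    inc[channel] = 1.0
    t = np.linalg.solve(A, inc)   # transmission amplitudes T_{j,channel}
    r = B @ t                     # reflection amplitudes R_{j,channel}
    return r, t
```

In this toy setting the energy lies above both adiabatic barriers, so the meaningful assertion is unitarity, $\sum_\beta (|R_{\beta 1}|^2+|T_{\beta 1}|^2)=1$, rather than diodic behaviour itself.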
\begin{figure} \caption{ Limit $v_{min}$ (solid lines) and $v_{max}$ (symbols connected with dashed lines) for ``diodic'' behaviour, $\epsilon = 0.01$; the circles (boxes) correspond to breakdown due to transmission (reflection); (a) $\hat{\Omega} = 0.2 \Msi$, $\hat{W}_1 = \hat{W}_2 = 20 \Msi$; (b) $\hat{\Omega} = 1 \Msi$, $\hat{W}_1 = \hat{W}_2 = 20 \Msi$; (c) $\hat{\Omega} = 0.2 \Msi$, $\hat{W}_1 = \hat{W}_2 = 100 \Msi$; (d) $\hat{\Omega} = 1 \Msi$, $\hat{W}_1 = \hat{W}_2 = 100 \Msi$.} \label{fig3} \end{figure} \begin{figure} \caption{ Limit $v_{min}$ (solid lines) and $v_{max}$ (symbols connected with dashed lines) for ``diodic'' behaviour versus the shift $\Delta$ for different $d$, $\epsilon = 0.01$; the circles (boxes) correspond to breakdown first for transmission (reflection); $\hat{\Omega} = 1 \Msi$, $\hat{W}_1 = \hat{W}_2 = 100 \Msi$; (a) $d=46\mum$, (b) $d=50\mum$, (c) $d=60\mum$, (d) $d=70\mum$.} \label{figx} \end{figure} Finally, we have also examined the stability with respect to a shift $\Delta$ of the central position of the pumping laser, see Fig. \ref{figx}. It turns out that there is a range, which depends on $d$, where the limits $v_{min}$ and $v_{max}$ practically do not change. \section{Variants of the atom diode\label{s3}} Is the mirror potential $W_2$ really necessary? If we want ground state atoms to pass from left to right but not from right to left, it is not intuitively obvious why we should add a reflection potential for the excited state on the left of the pumping potential $\Omega$, see again Fig. \ref{fig1}. In other words, it could appear that the pumping potential and a reflecting potential $W_1$ on its right would be enough to make a perfect diode. This simpler two-laser scheme, however, only works partially. In Fig. \ref{fig5} the scattering probabilities for the case $\hat{W}_1 >0$, $\hat{W}_2 = 0$ are represented. 
While there is still full reflection if the atom comes from the right, the transmission probability is only $1/2$ when the atom comes from the left; accordingly there is a $1/2$ reflection probability from the left, which is equally distributed between the ground and excited state channels. This is in contrast to the $\hat{W}_1>0$, $\hat{W}_2 > 0$ case of Fig. \ref{fig2}. We may thus conclude that the counterintuitive state-selective mirror $W_2$ is really important to attain a perfect diode. In Fig. \ref{fig5} the case $\hat{W}_1=0$, $\hat{W}_2>0$ is also plotted. For incidence from the right in the ground state there is no full reflection, so this case is not useful as a diode. But for incidence from the left there is equal transmission in ground and excited states, so that this device might be useful to build an interferometer. A very remarkable and useful property in this case, and in fact in all cases depicted in Figs. \ref{fig2} and \ref{fig5}, is the constant value of the transmission and reflection probabilities in a broad velocity range. This calls for an explanation. Moreover, why do they take the values $1$, $1/2$, or $1/4$? None of these facts is very intuitive, either within the representations and concepts we have put forward so far, or according to the following arguments: Let us consider again the simple two-laser configuration with $\hat{W}_2=0$ and $\hat{W}_1>0$. From a classical perspective, the atom incident from the left finds first the pumping laser and then the state-selective mirror potential for the ground state. According to this ``sequential'' model, one would expect an important effect of the velocity on the pumping efficiency. A different velocity implies a different traversal time and thus a different final phase for the Rabi oscillation, which should lead to a smooth, continuous variation of the final atomic state with the velocity. 
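This sequential expectation can be made quantitative with a back-of-the-envelope model (an illustration with placeholder values, not the paper's numerics): for a resonant atom crossing the Gaussian pumping beam alone at speed $v$, the accumulated pulse area is $\theta(v)=\hat{\Omega}\,\Delta x\,\sqrt{2\pi}/v$ and the sequential-model excitation probability is $\sin^2[\theta(v)/2]$, an oscillatory function of $v$.

```python
import numpy as np

def excitation_probability(v, omega_hat=1.0e6, dx=15e-6):
    """Sequential-model pumping probability for a resonant Gaussian beam.

    theta(v) = (1/v) * integral Omega(x) dx = omega_hat * dx * sqrt(2 pi) / v
    is the accumulated pulse area; the atom ends excited with probability
    sin^2(theta/2).  omega_hat and dx are illustrative placeholder values.
    """
    theta = omega_hat * dx * np.sqrt(2.0 * np.pi) / v
    return np.sin(theta / 2.0) ** 2

# In the ultra-cold range the predicted probability oscillates rapidly with v:
velocities = np.linspace(0.2, 1.0, 400)
p_exc = excitation_probability(velocities)
```

The sharp velocity dependence of this naive picture is exactly what the numerical results below fail to show once a mirror potential is present.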
In particular, the probability of the excited state after the pumping would oscillate with the velocity and therefore the final transmission after the right mirror should oscillate too, if the sequential model picture were valid. Indeed, these oscillations are clearly seen in Fig. \ref{fig6} when $\hat{W}_1=\hat{W}_2=0$ (above a low velocity threshold in which the Rabi oscillations are suppressed and all channels are equally populated, for a related effect see \cite{ROS}, see also the explanation of this low velocity regime in section \ref{s4}). Clearly, however, the oscillations are absent when $\hat{W}_1>0$, so the sequential, classical-like picture cannot be right. In summary, the mirror potentials added to the pumping laser imply a noteworthy stabilization of the probabilities and velocity independence. The failure of the sequential scattering picture must be due to some sort of quantum interference phenomenon. Interference effects are well known in scattering off composite potentials, but in comparison with, e.g., resonance peaks in a double barrier, the present results are of a different nature. There is indeed a clean explanation to all the mysteries we have dropped along the way as the reader will find out in the next section. \begin{figure}\label{fig5} \end{figure} \begin{figure}\label{fig6} \end{figure} \section{Adiabatic interpretation of the diode and its variants\label{s4}} Depending on the mirror potentials included in the device, let us label the four possible cases discussed in the previous section as follows: case ``0'': $\hat{W}_1=\hat{W}_2=0$; case ``1'': $\hat{W}_1>0$, $\hat{W}_2=0$; case ``2'': $\hat{W}_1=0$, $\hat{W}_2>0$; case ``12'': $\hat{W}_1>0$, $\hat{W}_2>0$. We diagonalize now the potential matrix $\bm{M}(x)$ \begin{eqnarray*} \bm{U}(x)\bm{M}(x)\bm{U}^+ (x) = \left(\begin{array}{cc} \lambda_-(x) & 0 \\ 0 & \lambda_+(x) \end{array}\right). 
\end{eqnarray*} The orthogonal matrix $\bm{U}(x)$ is given by \begin{eqnarray*} \bm{U}(x) = \left(\begin{array}{cc} \frac{W_-(x) - \mu(x)}{\sqrt{4\Omega^2(x)+[W_-(x) - \mu(x)]^2}} & \frac{W_-(x) + \mu(x)}{\sqrt{4\Omega^2(x)+[W_-(x) + \mu(x)]^2}}\\ \frac{2\Omega(x)}{\sqrt{4\Omega^2(x)+[W_-(x) - \mu(x)]^2}} & \frac{2\Omega(x)}{\sqrt{4\Omega^2(x)+[W_-(x) + \mu(x)]^2}} \end{array}\right) \end{eqnarray*} where \begin{eqnarray} W_- &=& W_1 - W_2, \nonumber\\ \mu&=&\sqrt{4\Omega^2(x)+W_-^2(x)}, \nonumber \end{eqnarray} and the eigenvalues of $\bm{M}(x)$ are $$ \lambda_{\mp} (x) = \frac{\hbar}{4}\left[W_1(x)+W_2(x) \mp \mu(x)\right] $$ with corresponding (normalized) eigenvectors $|\lambda_\mp(x)\rangle$. The asymptotic form of $\bm{U}$ varies for the different cases distinguished with a superscript, $U^{(j)}$, $j=0,1,2,12$. For $x\to-\infty$, the same $\bm{U}$ is found for cases $0$ and $1$, in which the left edge corresponds to the pumping potential. Similarly, the cases $2$ and $12$ share the same left edge potential $W_2$ and thus a common form of $\bm{U}$, \begin{eqnarray*} \bm{U}^{(0,1)}(-\infty)\!=\!\frac{1}{\sqrt{2}}\left(\begin{array}{cc} -1 & 1\\ 1 & 1 \end{array}\right),\quad\! \!\!\bm{U}^{(2,12)}(-\infty)\!=\!\left(\begin{array}{cc} -1 & 0\\ 0 & 1 \end{array}\right)\!. \end{eqnarray*} The corresponding analysis for $x\to \infty$ gives the asymptotic forms \begin{eqnarray*} \bm{U}^{(0,2)}(\infty) = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} -1 & 1\\ 1 & 1 \end{array}\right),\quad \bm{U}^{(1,12)}(\infty) = \left(\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\right). \end{eqnarray*} \begin{figure} \caption{ Eigenvalues (a) $\lambda_+$ and (b) $\lambda_-$; $d=50\mum$, $\hat{\Omega} = 1 \Msi$; $\hat{W}_1 = \hat{W}_2 = 100 \Msi$ (solid lines); $\hat{W}_1 = 100 \Msi$, $\hat{W}_2 = 0$ (boxes); $\hat{W}_1 = 0$, $\hat{W}_2 = 100 \Msi$ (circles).} \label{fig7} \end{figure} The eigenvalues $\lambda_\pm (x)$ for the same parameters of Fig. \ref{fig5} are plotted in Fig. 
\ref{fig7}. We see that $\lambda_+ (x) > 0$ has at least one high barrier whereas $\lambda_-(x) \approx 0$. If $\bm{\Psi}$ is a two-component solution of the stationary Schr\"odinger equation, Eq. (\ref{stat}), we define now the vector $$ \bm{\Phi}(x)= {\phi_-(x) \choose \phi_+(x)} := \bm{U}(x)\bm{\Psi}(x) $$ in a potential-adapted, ``adiabatic representation''. Note that if no approximation is made, $\bm{\Phi}$ and $\bm{\Psi}$ are both exact and contain the same information expressed in different bases. Starting from Eq. (\ref{stat}), using $\bm{\Psi}=\bm{U}^+\bm{\Phi}$, and multiplying from the left by $\bm{U}$, we arrive at the following equation for $\bm{\Phi}(x)$ \begin{eqnarray*} E_v \bm{\Phi}(x) &=& -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \bm{\Phi}(x) + \left(\begin{array}{cc} \!\lambda_-(x) & 0 \\ 0 & \lambda_+(x) \end{array}\!\right) \bm{\Phi} (x)\\ & & + \bm{Q} \bm{\Phi}(x), \end{eqnarray*} where \begin{eqnarray} \bm{Q} &=& -\frac{\hbar^2}{2m} \left(\bm{U}(x) \frac{\partial^2 \bm{U}^+}{\partial x^2}(x) + 2 \bm{U}(x)\frac{\partial \bm{U}^+}{\partial x}(x) \frac{\partial}{\partial x}\right) \nonumber \\ & = & \left(\begin{array}{cc} m\,B^2(x)/2 & -A(x) + i B(x) \hat{p}_x \\ A(x) - i B(x) \hat{p}_x & m\,B^2(x)/2\end{array}\right) \label{q} \end{eqnarray} is the coupling term in the adiabatic basis, and $A(x)$, $B(x)$ are real functions, \begin{eqnarray*} A(x)&=& \frac{1}{32 \mu^4(x)\Delta x^4 m} \Big\{\\ & & \fexp{-\frac{(x+d)^2}{\Delta x^2}} d^2 \hbar^6 \Omega(x) W_-(x)\\ & & + \fexp{\frac{(x+d)^2}{\Delta x^2}} \times\\ & & \left[-4\Omega^2(x) + W_1^2(x) + W_2^2(x) + 6 W_1(x)W_2(x)\right]\Big\},\\ B(x)&=& \frac{d \hbar^3}{4 \mu^2(x) \Delta x^2 m} \Omega(x) \left[W_1(x) + W_2(x)\right]. 
\end{eqnarray*} Let us consider incidence from the left and assume first that the coupling $\bm{Q}$ can be neglected so that there are two independent adiabatic modes ($\pm$) in which the internal state of the atom adapts to the position-dependent eigenstates $|\lambda_\pm\rangle$ of the laser potential $\bm{M}$, whereas the atom center-of-mass motion is affected in each mode by the effective adiabatic potentials $\lambda_\pm(x)$. Because $\lambda_- \approx 0$, an approximate solution for $\phi_- (x)$ is a fully transmitted wave, and because $\lambda_+$ consists of at least one ``high'' barrier (at any rate, the present argument is only applicable for energies below the barrier top), an approximate solution for $\phi_+ (x)$ is a wave which is fully reflected by a wall. So we can write for $x\ll 0$, \begin{eqnarray*} \bm{\Phi} (x) \approx \bm{\Phi}_{-\infty} (x) := \left(\begin{array}{c}c_-\\c_+\end{array}\right) e^{ikx} + \left(\begin{array}{c}0\\-c_+\end{array}\right) e^{-ikx}, \end{eqnarray*} and for $x\gg 0$, \begin{eqnarray*} \bm{\Phi} (x) \approx \bm{\Phi}_{\infty} (x) := \left(\begin{array}{c}c_-\\0\end{array}\right) e^{ikx}. \end{eqnarray*} In order to determine the amplitudes $c_\pm$ we have to compare with the asymptotic form of the scattering solution for left incidence, \begin{eqnarray*} {\bm{\Psi}}(x) \approx {\bm{\Psi}}_{-\infty} (x) := \left(\begin{array}{c} 1\\0 \end{array}\right) e^{i k x} + \left(\begin{array}{c} R_{11}^l\\R_{21}^l \end{array}\right) e^{-i k x} \end{eqnarray*} if $x\ll 0$ and \begin{eqnarray*} \bm{\Psi}(x) \approx \bm{\Psi}_{\infty} (x) := e^{i k x} \left(\begin{array}{c} T_{11}^l\\T_{21}^l \end{array}\right) \end{eqnarray*} if $x\gg 0$. The transmission and reflection coefficients can now be approximately calculated for each case from the boundary conditions $\bm{\Phi}_{-\infty}(x) = \bm{U}(-\infty) \bm{\Psi}_{-\infty} (x)$ and $\bm{\Phi}_{\infty}(x) = \bm{U}(\infty) \bm{\Psi}_{\infty} (x)$. 
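As a quick numerical cross-check of the adiabatic machinery (with $\hbar=1$ and arbitrary sample values, purely for illustration), the closed-form eigenvalues $\lambda_\mp = \frac{\hbar}{4}[W_1+W_2 \mp \mu]$, with $\mu=\sqrt{4\Omega^2+(W_1-W_2)^2}$, can be compared against a direct diagonalisation of $\bm{M}$:

```python
import numpy as np

def lambdas(w1, w2, om, hbar=1.0):
    """Closed-form eigenvalues lambda_-/+ of M = (hbar/2)[[W1, Om], [Om, W2]]."""
    mu = np.sqrt(4.0 * om ** 2 + (w1 - w2) ** 2)
    return hbar / 4.0 * (w1 + w2 - mu), hbar / 4.0 * (w1 + w2 + mu)

def direct(w1, w2, om, hbar=1.0):
    """Same eigenvalues by direct diagonalisation (returned in ascending order)."""
    M = hbar / 2.0 * np.array([[w1, om], [om, w2]])
    return np.linalg.eigvalsh(M)
```

Since `eigvalsh` returns eigenvalues in ascending order, its output matches $(\lambda_-,\lambda_+)$ term by term.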
\begin{table}[tbp] \label{tab} \caption{Reflection and transmission probability for the different variations of the atom diode} (a) incidence from the right: \begin{eqnarray*} \begin{array}{cc|c|c|c|c|c|c|} &\mbox{case} & c_-^{r} & c_+^{r} & R_{11}^{r}& R_{21}^{r} & T_{11}^{r} & T_{21}^{r}\\[0.1cm] \hline (0)& \hat{W}_1=\hat{W}_2=0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2}\\[0.1cm] \hline (1)& \hat{W}_1>0, \hat{W}_2=0 & 0 & 1 & -1 & 0 & 0 & 0\\[0.1cm] \hline (2)& \hat{W}_1=0, \hat{W}_2>0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{\sqrt{2}} & 0\\[0.1cm] \hline (12)& \hat{W}_1>0, \hat{W}_2>0 & 0 & 1 & -1 & 0 & 0 & 0\\[0.1cm] \hline \end{array} \end{eqnarray*} (b) incidence from the left: \begin{eqnarray*} \begin{array}{cc|c|c|c|c|c|c|} &\mbox{case} & c_-^{l} & c_+^{l} & R_{11}^{l}& R_{21}^{l} & T_{11}^{l} & T_{21}^{l}\\[0.1cm] \hline (0)& \hat{W}_1=\hat{W}_2=0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2}\\[0.1cm] \hline (1)& \hat{W}_1>0, \hat{W}_2=0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & -\frac{1}{2} & -\frac{1}{2} & 0 & -\frac{1}{\sqrt{2}}\\[0.1cm] \hline (2)& \hat{W}_1=0, \hat{W}_2>0 & -1 & 0 & 0 & 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\[0.1cm] \hline (12)& \hat{W}_1>0, \hat{W}_2>0 & -1 & 0 & 0 & 0 & 0 & -1\\[0.1cm] \hline \end{array} \end{eqnarray*} \end{table} The incidence from the right can be treated in a similar way. All the amplitudes are given in Table I, from which we can find, taking the squares, the transmission and reflection probabilities $1$, $1/2$, $1/4$, and $0$, of Figs. \ref{fig2}, \ref{fig5}, and \ref{fig6}. 
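Table I(b) can be reproduced mechanically from the asymptotic forms of $\bm{U}$ alone. The following sketch applies the boundary-condition recipe above ($\bm{\Phi}=\bm{U}\bm{\Psi}$, and since $\bm{U}$ is real orthogonal, $\bm{U}^+=\bm{U}^T$) for ground-state incidence from the left:

```python
import numpy as np

S = 1.0 / np.sqrt(2.0)
# Asymptotic forms of U quoted in the text:
U_pump = np.array([[-S, S], [S, S]])          # U^(0,1)(-inf) = U^(0,2)(+inf)
U_w    = np.array([[-1.0, 0.0], [0.0, 1.0]])  # U^(2,12)(-inf)
U_swap = np.array([[0.0, 1.0], [1.0, 0.0]])   # U^(1,12)(+inf)

def left_incidence(U_left, U_right):
    """Adiabatic amplitudes for ground-state incidence from the left.

    c = U(-inf) (1, 0)^T; the reflected part is U(-inf)^T (0, -c_+)^T and
    the transmitted part is U(+inf)^T (c_-, 0)^T.
    """
    c = U_left @ np.array([1.0, 0.0])          # (c_-, c_+)
    refl = U_left.T @ np.array([0.0, -c[1]])   # (R_11^l, R_21^l)
    trans = U_right.T @ np.array([c[0], 0.0])  # (T_11^l, T_21^l)
    return c, refl, trans

# Case "12" (both mirrors on): perfect diode channel, |T_21^l| = 1.
c12, r12, t12 = left_incidence(U_w, U_swap)
```

Running the same function with the other edge combinations reproduces rows (0), (1) and (2) of Table I(b), including the values $1/2$ and $1/\sqrt{2}$ whose squares give the plateaus $1/4$ and $1/2$ of the figures.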
\begin{figure} \caption{ $|\langle 1|\lambda_-^{(j)}\rangle|^2$ (solid lines) and $|\langle 2|\lambda_-^{(j)}\rangle|^2$ (dashed lines) for $d = 50 \mum$, $\hat{\Omega} = 1 \Msi$, $\hat{W}_1 = 100 \Msi$; (a) $\hat{W}_2 = 100 \Msi$ (case $j=12$), (b) $\hat{W}_2 = 0$ (case $j=1$).} \label{fig8} \end{figure} These results provide in summary a simple explanation of the behaviour of the diode and its variants. In particular, the perfect diode behavior of case $12$, occurs because the (approximately) ``freely'' moving mode $\phi_-$ transfers adiabatically the ground state to the excited state from left to right. To visualize this, let us represent the probabilities to find the ground and excited state in the eigenvectors $|\lambda^{(j)}_-(x)\rangle$ for the cases $j=12,1$. They are plotted in Fig. \ref{fig8}a for the case ``12'': the perfect adiabatic transfer can be seen clearly. On the other hand, the mode ``$+$'' (not plotted), which tends to the ground state on the right edge of the device, is blocked by a high barrier. The stability of this blocking effect with respect to incident velocities holds for energies smaller than the $\lambda_+$ barrier top, more on this below. In Fig. \ref{fig8}b the ground and excited state probabilities for case ``1'' are plotted. If the mirror potential laser $W_2$ is removed on the left edge of the device, the ground state is not any more an eigenstate of the potential for $x\ll 0$. The adiabatic transfer of the mode ``$-$'' occurs instead from $(|2\rangle-|1\rangle)/2^{1/2}$ on the left to $|2\rangle$ on the right, whereas the blocked mode ``$+$'' on the left corresponds to the linear combination $(|2\rangle+|1\rangle)/2^{1/2}$. This results in a $1/2$ reflection probability for ground-state incidence from the left. A similar analysis would be applicable in the other cases. 
\begin{figure} \caption{ Limits of the ``diodic'' behaviour $v_{min}$ (thick dashed line) and $v_{max}$ (filled circles connected with a dashed line, see also Fig. \ref{fig3}), $\epsilon = 0.01$; limits of condition (\ref{cond1}) $v_{\lambda,min}$ (lower solid line) and $v_{\lambda,max}$ (upper solid line); limit of the adiabatic approximation $v_{ad,max}$ (unfilled circles), $\epsilon = 0.01$; $\hat{W}_1 = \hat{W}_2 = 100 \Msi$, $\hat{\Omega} = 0.2 \Msi$.} \label{fig9} \end{figure} Of course, each approximation has a range of validity that depends on the potential parameters and determines the working conditions of the diode. Even though these conditions can be easily found numerically from the exact results, approximate breakdown criteria are helpful to understand the limits of the device and the different reasons for its failure. For the approximation that $\phi_-$ is a fully transmitted wave and $\phi_+$ a fully reflected one, a necessary condition is \begin{eqnarray} \mbox{max}_x \left[\lambda_-(x)\right] < E_v < \mbox{max}_x \left[\lambda_+ (x)\right]. \label{cond1} \end{eqnarray} This defines the limits \begin{eqnarray} v_{\lambda,min} &:=& \sqrt{\frac{2}{m}\mbox{max}_x \left[ \lambda_-(x)\right]}, \label{deflm}\\ v_{\lambda,max} &:=& \sqrt{\frac{2}{m}\mbox{max}_x \left[\lambda_+(x)\right]}, \label{deflp} \end{eqnarray} such that Eq. (\ref{cond1}) is fulfilled for all $v$ with $v_{\lambda,min} < v < v_{\lambda,max}$. The plateaus of $v_{max}$ seen e.g. in Fig. \ref{fig3} for a range of $d$-values are essentially coincident with $v_{\lambda,max}$. Fig. \ref{fig9} shows the exact limits $v_{min}$ and $v_{max}$ for the ``diodic'' behaviour, as in Fig. \ref{fig3}c, and also the limits $v_{\lambda,min}$, $v_{\lambda,max}$ resulting from the condition of Eq. (\ref{cond1}). 
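As an order-of-magnitude check of Eq. (\ref{deflp}) (assuming here that $\Msi$ denotes $10^6\,{\rm s}^{-1}$, taking the Neon mass, and fixing the separation at an assumed $d=50\,\mu$m), the following sketch evaluates $v_{\lambda,max}$ for the intensities of Fig. \ref{fig9}; the result indeed lands in the ultra-cold regime below $1$ m/s quoted in section \ref{s2}.

```python
import numpy as np

# SI-unit estimate of v_lambda,max = sqrt(2 max_x lambda_+ / m).
HBAR = 1.0545718e-34          # J s
MASS = 20.18 * 1.6605390e-27  # Neon atomic mass, kg
DX = 15e-6                    # laser width Delta x, m
D = 50e-6                     # assumed laser separation d, m

x = np.linspace(-4 * D, 4 * D, 4001)

def gauss(x0):
    return np.exp(-(x - x0) ** 2 / (2.0 * DX ** 2))

W1 = 100e6 * gauss(D)     # ground-state mirror, centred at +d  (units 1/s)
W2 = 100e6 * gauss(-D)    # excited-state mirror, centred at -d
Om = 0.2e6 * gauss(0.0)   # pumping laser

mu = np.sqrt(4.0 * Om ** 2 + (W1 - W2) ** 2)
lam_plus = HBAR / 4.0 * (W1 + W2 + mu)          # upper adiabatic potential, J
v_lambda_max = np.sqrt(2.0 * lam_plus.max() / MASS)   # m/s
```

With these numbers the barrier top sits at the mirror centres and gives $v_{\lambda,max}$ of a few tenths of a m/s, consistent with the plateau of $v_{max}$ in Fig. \ref{fig3}.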
We see that the exact limit $v_{min}$ coincides essentially with $v_{\lambda,min}$ so that the lower ``diodic'' velocity boundary can be understood by the breakdown of the condition that $\phi_-$ is fully transmitted due to a $\lambda_-$ barrier. This effect is only relevant for small distances $d$ between the lasers. Another reason for the breaking down of the diode may be that the adiabatic modes are no longer independent, i.e. that $\bm{Q}$, see Eq. (\ref{q}), cannot be neglected. An approximate criterion for adiabaticity, more precisely for neglecting the non-diagonal elements of $\bm{Q}$, see the Appendix, is \begin{eqnarray} q(v) &:=& \mbox{max}_{x \in I} \frac{\fabsq{A(x)} + 2m\fabsq{B(x)}[E_v - \lambda_-(x)]} {\fabsq{\lambda_+(x)-\lambda_-(x)}}\nonumber\\ &\ll& 1 \label{neglectQ} \end{eqnarray} with $I = [-d, d]$. A velocity boundary $v_{ad,max}$ defined by $q(v) < \epsilon$ for all $v_{\lambda,min} < v < v_{ad,max}$ is shown in Fig. \ref{fig9}. (Note that the condition of Eq. (\ref{neglectQ}) only makes sense if $E_v > \lambda_-(x)$, i.e. $v_{\lambda,min} < v$.) We see in Fig. \ref{fig9} that the breakdown of the diode at $v_{max}$ for large $d$ is due to a failure of the adiabatic approximation. \section{Summary\label{s5}} Summarizing, we have studied a two-level model for an ``atom diode'', a laser device in which ground state atoms can pass in one direction, conventionally from left to right, but not in the opposite direction. The proposed scheme includes three lasers: two of them are state-selective mirrors, one for the excited state on the left, and the other one for the ground state on the right, whereas the third one -located between the two mirrors- is a pumping laser on resonance with the atomic transition. We have shown that the ``diodic'' behaviour is very stable with respect to atom velocity in a given range, and with respect to changes in the distances between the centers of the lasers. 
The inclusion of the laser on the left, reflecting the excited state, is somewhat counterintuitive, but it is essential for a perfect diode effect; the absence of this laser leads to a $50\%$ drop in efficiency. The stability properties, as well as the actual mechanism of the diode, are explained with an adiabatic basis and an adiabatic approximation. The diodic transmission is due to the adiabatic transfer of population from left to right, from the ground state to the excited state in a free-motion adiabatic mode, while the other mode is blocked by a barrier. \begin{acknowledgments} AR acknowledges support by the Ministerio de Educaci\'on y Ciencia. This work has been supported by Ministerio de Educaci\'on y Ciencia (BFM2003-01003), and UPV-EHU (00039.310-15968/2004). \end{acknowledgments} \begin{appendix} \section{} To motivate Eq. (\ref{neglectQ}), see also \cite{messiah.book}, let us assume \begin{eqnarray} E \bm{\Phi}(x) &=& -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x^2} \bm{\Phi}(x) + \left(\begin{array}{cc} \lambda_- & 0 \\ 0 & \lambda_+ \end{array}\right) \bm{\Phi} (x)\nonumber\\ & + & \epsilon \left(\begin{array}{cc} 0 & -\tilde{A}+i \tilde{B} \hat{p}_x\\ \tilde{A} - i \tilde{B} \hat{p}_x & 0 \end{array}\right) \bm{\Phi}(x) \label{ap} \end{eqnarray} where $\lambda_\pm$, $\tilde{A}$ and $\tilde{B}$ are real and independent of $x$. 
We assume that $E > \lambda_-$ and that $\epsilon$ is small such that we can treat $\bm{\Phi}$ perturbatively, \begin{eqnarray*} \bm{\Phi}(x) \approx \left(\begin{array}{c}\phi_{0,-} (x) \\ \phi_{0,+} (x)\end{array}\right) + \epsilon \left(\begin{array}{c}\phi_{1,-} (x) \\ \phi_{1,+} (x)\end{array}\right) \end{eqnarray*} with \begin{eqnarray*} \phi_{0,-} (x) &=& \fexp{\frac{i}{\hbar} \sqrt{2m(E-\lambda_-)} x}\\ \phi_{0,+} (x) &=& 0 \end{eqnarray*} Then it follows from (\ref{ap}) for the first-order correction \begin{eqnarray*} \phi_{1,-} & = & 0\\ \phi_{1,+} & = & \left[E- \lambda_+ -\hat{p}^2_x/(2m)\right]^{-1} (\tilde{A} - i \tilde{B} \hat{p}_x)\; \phi_{0,-}\\ & = & \frac{\tilde{A} - i \tilde{B} \sqrt{2m(E-\lambda_-)}}{\lambda_- - \lambda_+}\; \phi_{0,-} \end{eqnarray*} because $\hat{p}_x\,\phi_{0,-} = \sqrt{2m(E-\lambda_-)}\,\phi_{0,-}$. If we want to neglect $\phi_+ = 0 + \epsilon\,\phi_{1,+}$ we get the condition \begin{eqnarray*} \epsilon^2 \frac{\fabsq{\tilde{A}} + \fabsq{\tilde{B}} 2m (E-\lambda_-)} {\fabsq{\lambda_- - \lambda_+}} \ll 1. \end{eqnarray*} If $\lambda_\pm$, $\tilde{A}$ and $\tilde{B}$ depend on $x$, we may use the condition \begin{eqnarray*} \mbox{max}_{x\in I} \frac{\fabsq{\epsilon \tilde{A}(x)} + \fabsq{\epsilon\tilde{B}(x)} 2m [E-\lambda_-(x)]} {\fabsq{\lambda_-(x) - \lambda_+(x)}} \ll 1 \end{eqnarray*} where $I$ is chosen in such a way that the assumption $\phi_{0,+}(x)=0$ is approximately valid. In Eq. (\ref{ap}), we have not included any diagonal elements in the coupling, compare with Eq. (\ref{q}). We neglect them in the condition (\ref{neglectQ}) but in principle it would be also possible to absorb them by defining effective adiabatic potentials $\tilde{\lambda}_{\pm} = \lambda_{\pm} + mB^2/2$. \end{appendix} \end{document}
\begin{document} \date{} \title{Central intersections of element centralisers} \begin{center} \small \textit{FB Mathematik, TU Kaiserslautern, Postfach 3049} \textit{67653 Kaiserslautern, Germany} \text{E-mail: [email protected]} \end{center} \paragraph{} \textit{MSC:} \textit{Primary: 20E34} \textit{Secondary: 20D99} \paragraph{} \textit{Keywords:} \textit{Finite groups, element centralisers, CA-groups, F-groups} \normalsize \begin{abstract} In 1970 R. Schmidt gave a structural classification of CA-groups. In this paper we consider a condition on the intersection of element centralisers which turns out to be equivalent to the definition of a CA-group. We then weaken which centralisers we choose to intersect and structurally classify this new family of groups. Furthermore we apply a similar weakening to the class of F-groups introduced by It{\^o} in 1953 and classified by Rebmann in 1971. \end{abstract} \section{Introduction} A finite group is called a CA-group if the centraliser of every non-central element is abelian. If $G$ is a CA-group and $x,y\in G\setminus Z(G)$, then $C_G(x)$ can never be properly contained in $C_G(y)$. Furthermore, we shall see in Lemma~\ref{TIC=CA} that $G$ being a CA-group is equivalent to saying that for all non-central elements $x$ and $y$ in $G$, either $C_G(x)=C_G(y)$ or $C_G(x)\cap C_G(y)=Z(G)$. In addition to the class of CA-groups, It{\^o} \cite{ItoTypeI} introduced the notion of an F-group. This is a group in which every non-central element centraliser properly contains no other non-central element centraliser. That is, $G$ is an F-group if for any $x\in G\setminus Z(G)$, $C_G(x) < C_G(y)$ implies $y\in Z(G)$. We shall also see in Lemma~\ref{CentralTIC=F} that $G$ being an F-group is equivalent to $Z(C_G(x))\cap Z(C_G(y))=Z(G)$ for all $x,y\in G\setminus Z(G)$ such that $C_G(x)\ne C_G(y)$. 
The aim of this paper is to consider these intersection conditions for a specific subset of centralisers, in particular, the set of minimal centralisers in a group (those which do not properly contain any other element centraliser). Thus we define a group to be a ${\rm CA}_{min}$-group if for two non-central elements $x$ and $y$ with minimal element centralisers in $G$ either $C_G(x)=C_G(y)$ or $C_G(x)\cap C_G(y)=Z(G)$. Similarly we call $G$ an ${\rm F}_{min}$-group if for two non-central elements $x$ and $y$ with minimal element centralisers in $G$ either $C_G(x)=C_G(y)$ or $Z(C_G(x))\cap Z(C_G(y))=Z(G)$. Note that the analogous condition with maximal centralisers in place of minimal ones was studied by Schmidt \cite{SchmidtCaGps} (although Schmidt defined such groups using subgroup centralisers). The main result of this paper is a structural classification of ${\rm CA}_{min}$-groups and ${\rm F}_{min}$-groups. \begin{thm}\label{MainThm} $G$ is a ${\rm CA}_{min}$-group {\rm (respectively ${\rm F}_{min}$)} if and only if $G$ has one of the following forms: \begin{enumerate} \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that both $K$ and $L$ are abelian. \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that $K$ is abelian, $L$ is a ${\rm CA}_{min}$-group {\rm (respectively ${\rm F}_{min}$)}, $Z(G)=Z(L)$ and $L/Z(L)$ is a $p$-group. \item $G/Z(G)\cong {\rm Sym}(4)$ and if $V/Z(G)\cong V_4$, then $V$ is non-abelian. \item $G$ has an abelian normal subgroup of index $p$ and $G$ is not abelian. \item $G\cong A\times P$, where $A$ is abelian and $P$ is a non-abelian $p$-group for some prime $p$; therefore $P$ is a ${\rm CA}_{min}$-group {\rm (respectively ${\rm F}_{min}$)}. \item $G/Z(G)\cong PGL_2(p^n)$ or $PSL_2(p^n)$ with $p^n > 3$. \end{enumerate} \end{thm} Note a similarity to Rebmann's structural classification of F-groups \cite{FGroups}. 
In fact the groups of type (1), (3) and (4) are CA-groups \cite{SchmidtCaGps}. Moreover, for the families (2) and (5), replacing ${\rm CA}_{min}$-group by F-group yields the corresponding family for F-groups, while the non-solvable case (6) contains all the non-solvable cases of F-groups. In particular we obtain the following corollary as given in Rebmann \cite{FGroups}. \begin{cor}\label{NonSolFIsCA} Let $G$ be a non-solvable F-group. Then $G$ is a CA-group. \end{cor} Using the above theorem it is easy to see that the class of CA-groups is strictly smaller than the class of ${\rm CA}_{min}$-groups, as $PSL_2(q)$ and $PGL_2(q)$ have a non-abelian centraliser. However, in Rebmann's paper no example of an F-group which is not a CA-group was provided. We finish this paper by providing a family of $p$-groups which are F-groups but not CA-groups. \begin{prop}\label{FNotCA} Let $G$ be an extraspecial group of order $p^{2n+1}$ with $n>1$. Then $G$ is an F-group which is not a CA-group. \end{prop} Note that this family is also a family of ${\rm F}_{min}$-groups which are not ${\rm CA}_{min}$-groups. Finally, observe that if there exists a solvable ${\rm CA}_{min}$-group which is not a CA-group, then such a $p$-group exists for some prime $p$. However, running over the GAP libraries we have been unable to find a $p$-group which is a ${\rm CA}_{min}$-group but not a CA-group. Note that \cite{RockeAbCent} studied such groups; however, we were unable to use the results and methods of that paper to produce such an example. \section{Preliminaries} \subsection{Conditions on element centraliser intersections} Let $G$ be a non-abelian CA-group with $x$ and $y$ non-central elements in $G$. Consider $C_G(x)\cap C_G(y)$. If $z\in C_G(x)\cap C_G(y)$, then $\langle C_G(x),C_G(y)\rangle\leq C_G(z)$. As these centralisers are abelian, either $C_G(x)= C_G(y)$ or $z\in Z(G)$. Moreover, we shall show in the next lemma that this condition is equivalent to $G$ being a CA-group. 
\begin{lm}\label{TIC=CA} Let $G$ be a finite non-abelian group. Then $G$ is a CA-group if and only if for any pair of non-central elements $x$ and $y$ such that $C_G(x)\ne C_G(y)$ we have $C_G(x)\cap C_G(y)=Z(G)$. \begin{proof} We commented above that any CA-group satisfies this condition. Hence it remains to show the converse. Let $z$ be a non-central element in $G$ and $x,y\in C_G(z)\setminus Z(G)$. Then $\langle z\rangle \leq C_G(y)\cap C_G(x)$ and so $C_G(x)\cap C_G(y)\ne Z(G)$. Therefore $C_G(x)= C_G(y)$, and so $x$ and $y$ commute. In particular, any two elements in $C_G(z)$ commute. Thus $C_G(z)$ is abelian and $G$ is a CA-group. \end{proof} \end{lm} Furthermore, with this observation we have the following corollary. \begin{cor} Let $G$ be a CA-group such that $Z_2(G)>Z(G)$. Then $G$ is meta-abelian. \begin{proof} We want to show that $G'$ is abelian. By \cite[Theorem III.2.11]{Huppert}, $G'\leq C_G(Z_2(G))$, so it is enough to show that $C_G(Z_2(G))$ is abelian. Let $x,y\in C_G(Z_2(G))\setminus Z(G)$. Then $Z(G)<Z_2(G)\leq C_G(x)\cap C_G(y)$. Thus $x$ and $y$ commute and so $C_G(Z_2(G))$ is abelian. \end{proof} \end{cor} We now make precise the definition of a minimal element centraliser for use in the definitions of ${\rm CA}_{min}$-groups and ${\rm F}_{min}$-groups. \begin{df} An element centraliser $C_G(x)$ for $x$ a non-central element is called a minimal centraliser if $C_G(y)\leq C_G(x)$ implies $C_G(y)=C_G(x)$. \end{df} Thus to relax the notion of a CA-group, we want to consider the intersection property for minimal element centralisers. Note that any non-abelian group must have at least two minimal non-central element centralisers. Otherwise, if $C=C_G(x)$ is the unique minimal centraliser in $G$, then for all $y\in G$ we have that $C\leq C_G(y)$. Therefore $x\in \cap_{y\in G} C_G(y)=Z(G)$. Thus $C=G$ and $x\in Z(G)$. Note that we could also consider maximal centralisers ($C_G(x)$ is called a maximal centraliser if $C_G(x)<C_G(y)$ implies $y\in Z(G)$). 
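The intersection condition of Lemma~\ref{TIC=CA} is easy to test computationally on small groups. The following sketch (using SymPy's permutation-group API purely as an illustration; GAP would be the natural tool for larger searches) verifies both the CA property and the intersection condition on ${\rm Sym}(3)$, whose centre is trivial:

```python
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(3)
Z = set(G.center().elements)
noncentral = [g for g in G.elements if g not in Z]
centralisers = [G.centralizer(g) for g in noncentral]

# CA property: every non-central element centraliser is abelian.
all_abelian = all(C.is_abelian for C in centralisers)

# Intersection condition of the lemma: any two distinct non-central
# element centralisers meet exactly in Z(G).
intersection_condition = all(
    set(C1.elements) & set(C2.elements) == Z
    for C1 in centralisers
    for C2 in centralisers
    if set(C1.elements) != set(C2.elements)
)
```

Both flags come out true for ${\rm Sym}(3)$, as the lemma predicts, since the centralisers there are the cyclic subgroups generated by the transpositions and the $3$-cycles.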
In fact Schmidt considered the set of groups in which any two distinct maximal non-central element centralisers have intersection equal to the center of the group \cite{SchmidtCaGps} (these were referred to as $\mathfrak{D}$-groups). However, he only classified the solvable $\mathfrak{D}$-groups, although he also discussed the non-solvable case in depth. \begin{df} Let ${\rm CA}_{min}$ denote the set of finite groups $G$ such that $C_G(x)\cap C_G(y)=Z(G)$ for any two distinct minimal centralisers $C_G(x)$ and $C_G(y)$. \end{df} By definition, a group is an F-group if and only if for any non-central element $x$ in $G$ we have that $C_G(x)$ is both a maximal and a minimal centraliser in $G$. Therefore, for an F-group, the definitions of $\mathfrak{D}$ and ${\rm CA}_{min}$ are equivalent. Furthermore, the following corollary follows from Lemma~\ref{TIC=CA}. \begin{cor}\label{CA=D+F} Let $G$ be a finite group. Then $G$ is a CA-group if and only if $G$ is an F-group and a ${\rm CA}_{min}$-group. \end{cor} As with the notion of CA-groups, we shall weaken the notion of an F-group. However, first we need a lemma analogous to Lemma~\ref{TIC=CA}. \begin{lm}\label{CentralTIC=F} Let $G$ be a finite non-abelian group. Then $G$ is an F-group if and only if $Z(C_G(x))\cap Z(C_G(y))=Z(G)$ for any pair of non-central elements $x$ and $y$ such that $C_G(x)\ne C_G(y)$. \begin{proof} Assume $G$ is an F-group and let $C_G(x)\ne C_G(y)$ for $x$ and $y$ non-central elements. If $z\in Z(C_G(x))\cap Z(C_G(y))$, then $\langle C_G(x),C_G(y)\rangle \leq C_G(z)$. Hence, as in an F-group every centraliser is both maximal and minimal, it follows that $C_G(z)=G$; in other words, $z\in Z(G)$. Therefore $Z(C_G(x))\cap Z(C_G(y))=Z(G)$. For the converse direction assume that $C_G(x)<C_G(y)$ for $x$ and $y$ both non-central elements in $G$. Then $Z(C_G(y))\leq Z(C_G(x))$ and therefore $Z(C_G(y))=Z(G)$, which implies $y\in Z(G)$, a contradiction. \end{proof} \end{lm} Thus we now make the following definition.
\begin{df} Let ${\rm F}_{min}$ denote the set of finite groups $G$ such that $Z(C_G(x))\cap Z(C_G(y))=Z(G)$ for any two distinct minimal centralisers $C_G(x)$ and $C_G(y)$. \end{df} Therefore we have the following inclusions (Theorem~\ref{MainThm} and Proposition~\ref{FNotCA} show that these inclusions are strict): \begin{center} \begin{tikzpicture} \node (1) {${\rm F}_{min}$-groups}; \node[below left of=1,node distance=10mm , rotate=45] (2) {$\subsetneq$}; \node[below right of=1,node distance=10mm , rotate=315] (3) {$\supsetneq$}; \node[below left of=1] (4) {${\rm CA}_{min}$-groups}; \node[below right of=1] (5) {F-groups}; \node[below left of=5] (6) {CA-groups}; \node[above right of=6,node distance=10mm , rotate=45] (7) {$\subsetneq$}; \node[above left of=6,node distance=10mm , rotate=315] (8) {$\supsetneq$}; \end{tikzpicture} \end{center} We finally observe that the intersection of the set of ${\rm CA}_{min}$-groups with the set of F-groups equals the set of CA-groups. \subsection{Exhibiting a partition} In the works of Rebmann \cite{FGroups} and Schmidt \cite{SchmidtCaGps}, exhibiting an abelian normal partition of the central quotient $G/Z(G)$ yielded a powerful tool for the structural classification of families of groups; in particular, it allowed them to apply the following classifications by Baer and Suzuki. Neither theorem appears as a single statement, but rather as several results spread across the papers; we therefore combine the results into one statement. \begin{thm}\cite{BaerPart1,BaerPart2}\label{BaerSolPart} Let $G$ be a solvable group with a normal non-trivial partition $\beta$. Then $G$ is one of the following: \begin{enumerate} \item A component of $\beta$ is self-normalising in $G$ and $G$ is a Frobenius group. \item $G\cong {\rm Sym}(4)$ and $\beta$ is the set of maximal cyclic subgroups of $G$. \item $G$ has a nilpotent normal subgroup $N$ which lies in $\beta$ with $|G:N|=p$ and every element in $G\setminus N$ has order $p$. \item $G$ is a $p$-group, for $p$ a prime.
\end{enumerate} \end{thm} \begin{thm}\cite{SuzPart}\label{SuzNonSolPart} Let $G$ be a non-solvable group with a normal non-trivial partition $\beta$. Then $G\cong PGL_2(p^n)$ or $PSL_2(p^n)$ for $p$ prime and $p^n>3$, $G\cong Sz(2^n)$ for $n\geq 3$, or a component of $\beta$ is self-normalising and $G$ is a Frobenius group. \end{thm} We aim to show that for $G$ a ${\rm CA}_{min}$-group or an ${\rm F}_{min}$-group, as for F-groups, the central quotient $G/Z(G)$ exhibits a normal abelian partition. For the case of ${\rm CA}_{min}$-groups we require the following preliminary result. \begin{lm}\label{CAminCenAb} Let $G$ be a ${\rm CA}_{min}$-group. Then each minimal centraliser is abelian. \begin{proof} Let $C$ be a minimal centraliser in $G$ and let $x\in C$; we claim that $x\in Z(C)$. There exists a minimal centraliser $D \leq C_G(x)$, and it is clear that $Z(C_G(x))\leq Z(D)$. In particular $x\in Z(C_G(x))\leq Z(D)$, and hence $x\in C\cap D$, which equals $Z(G)$ unless $C=D$. If $C\cap D=Z(G)$, then $x\in Z(G)\leq Z(C)$. If instead $C=D$, then $x\in Z(D)=Z(C)$. In either case $x\in Z(C)$, so $C$ is abelian. \end{proof} \end{lm} \begin{lm} Let $G$ be a ${\rm CA}_{min}$-group. Then \[ \beta = \{C/Z(G) \mid C \text{ a minimal centraliser in $G$}\} \] forms a non-trivial normal partition of $G/Z(G)$ consisting of abelian subgroups. \begin{proof} It is clear that the set $\beta$ is closed under conjugation and by Lemma~\ref{CAminCenAb} every subgroup in $\beta$ is abelian. Thus to show $\beta$ is a partition we need to show that every element in $G/Z(G)$ lies in a unique subgroup in $\beta$. Take $C/Z(G)$ and $D/Z(G)$ distinct in $\beta$. Then $C/Z(G)\cap D/Z(G)=(C\cap D)/Z(G)=1$. Thus it is enough to show that any $xZ(G)$ lies in some $C/Z(G)$. Consider $C_G(x)$, which contains some minimal centraliser $C$. Then as in Lemma~\ref{CAminCenAb}, $Z(C_G(x))\leq Z(C)$, hence $x\in C$. In particular $xZ(G)\in C/Z(G)$. \end{proof} \end{lm} \begin{lm} Let $G$ be an ${\rm F}_{min}$-group.
Then \[ \beta = \{Z(C)/Z(G) \mid C \text{ a minimal centraliser in $G$}\} \] forms a non-trivial normal partition of $G/Z(G)$ consisting of abelian subgroups. \begin{proof} It is clear that the set $\beta$ is closed under conjugation and every subgroup in $\beta$ is abelian. Thus to show $\beta$ is a partition we need to show that every element in $G/Z(G)$ lies in a unique subgroup in $\beta$. Take $Z(C)/Z(G)$ and $Z(D)/Z(G)$ distinct in $\beta$. Then $Z(C)/Z(G)\cap Z(D)/Z(G)=(Z(C)\cap Z(D))/Z(G)=1$. Thus it is enough to show that any $xZ(G)$ lies in some $Z(C)/Z(G)$. Consider $C_G(x)$, which contains some minimal centraliser $C$. Then as in Lemma~\ref{CAminCenAb}, $Z(C_G(x))\leq Z(C)$, hence $x\in Z(C)$. In particular $xZ(G)\in Z(C)/Z(G)$. \end{proof} \end{lm} \section{Proof of main theorem} We now aim to classify ${\rm CA}_{min}$-groups and ${\rm F}_{min}$-groups. In fact, the argument used for F-groups carries over almost unchanged when we replace F-group by ${\rm CA}_{min}$-group or ${\rm F}_{min}$-group. \subsection{Classifying ${\rm CA}_{min}$-groups} In view of the classifications of partitions by Baer and Suzuki, we first consider the solvable ${\rm CA}_{min}$-groups. \begin{thm}\label{SolCAminGp} Let $G$ be a solvable ${\rm CA}_{min}$-group. Then $G$ is one of the following: \begin{enumerate} \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that both $K$ and $L$ are abelian. \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that $K$ is abelian, $L$ is a ${\rm CA}_{min}$-group, $Z(L)=Z(G)$ and $L/Z(L)$ is a $p$-group. \item $G/Z(G)\cong {\rm Sym}(4)$ and if $V/Z(G)\cong V_4$, then $V$ is non-abelian. \item $G$ has an abelian normal subgroup of index $p$ and $G$ is not abelian. \item $G\cong A\times P$, where $A$ is abelian and $P$ is a non-abelian $p$-group for some prime $p$; therefore $P$ is a ${\rm CA}_{min}$-group.
\end{enumerate} \begin{proof} As $G/Z(G)$ admits a non-trivial normal partition, we apply the classification of Baer (Theorem~\ref{BaerSolPart}) to determine $G/Z(G)$. \underline{\bf Case (1)}\newline Let $L/Z(G)$ denote the Frobenius kernel of $G/Z(G)$. Furthermore, let $K/Z(G)$ denote an element in the partition of $G/Z(G)$ which is self-normalising. Then $K=C_G(x)$ for some minimal centraliser $C_G(x)$ in $G$. We want to show that $K/Z(G)$ is a Frobenius complement; thus we need $K\cap K^g=Z(G)$ for all $g\in G\setminus K$. As $K$ is a minimal centraliser in $G$, so is $K^g$, and therefore $K\cap K^g=Z(G)$ or $K$. However, if $K^g=K$, then $gZ(G)\in N_{G/Z(G)}(K/Z(G))=K/Z(G)$, as $K/Z(G)$ was chosen to be self-normalising. Thus $K/Z(G)$ is a Frobenius complement in $G/Z(G)$. Furthermore, as $K$ is a minimal centraliser in $G$, it is abelian (Lemma~\ref{CAminCenAb}). Let $x\in L\setminus Z(G)$. As $G/Z(G)$ is Frobenius with kernel $L/Z(G)$, \[ C_G(x)/Z(G)\leq C_{G/Z(G)}(xZ(G))\leq L/Z(G), \] so $C_G(x)\leq L$; or in other words $C_G(x)=C_L(x)$. If $L$ has a unique minimal centraliser, then $L$ is abelian (recall that any non-abelian group has at least two minimal centralisers) and $G$ is thus of type $(1)$. Hence assume $L$ has two distinct minimal centralisers $C_L(x)<L$ and $C_L(y)<L$ for $x,y\in L\setminus Z(G)$. Then $C_L(x)=C_G(x)$ and $C_L(y)=C_G(y)$. If $C_G(x)$ is not a minimal centraliser in $G$, then there exists $C_G(z)<C_G(x)$. As $C_G(z)<C_G(x)\leq L$, it follows that $z\in L\setminus Z(G)$. Thus $C_L(z)< C_L(x)$ and so $C_L(x)$ is not minimal, a contradiction. Therefore $C_G(x)$ and $C_G(y)$ are distinct minimal centralisers in $G$. As $G$ is a ${\rm CA}_{min}$-group, we have that $Z(L)\leq C_L(x)\cap C_L(y)=C_G(x)\cap C_G(y)=Z(G)$. However $Z(G)\leq L$ and therefore $Z(G)\leq Z(L)$, implying that $Z(L)=Z(G)$. Furthermore, we have shown that $L$ is a ${\rm CA}_{min}$-group.
By a theorem of Thompson \cite[Theorem V.8.7]{Huppert}, $L/Z(G)$ is nilpotent (as it is a Frobenius kernel). By \cite[Remark 2.4]{BaerPart1}, the only nilpotent groups with a non-trivial partition are $p$-groups for some prime $p$. This implies $G$ is of type $(2)$. \underline{\bf Case (2)}\newline In this case $G/Z(G)\cong {\rm Sym}(4)$. Let $V\leq G$ such that $V/Z(G)\cong V_4$, the Klein four-group. If $V$ is abelian, then for all $x\in V\setminus Z(G)$ we have $V\leq C_G(x)$. As $xZ(G)$ has order $2$, it follows that $C_{G/Z(G)}(xZ(G))\cong D_8$ or $V_4$ (when $xZ(G)$ is a double or single transposition respectively). We also observe that $V/Z(G)\leq C_G(x)/Z(G)\leq C_{G/Z(G)}(xZ(G))$. If there exists an $x\in V$ such that $C_G(x)/Z(G)\cong V_4$, then $C_G(x)=V$, which is abelian and thus a minimal centraliser. However, $V_4$ is not contained in the partition of $G/Z(G)$, which yields a contradiction. Thus for all $x\in V\setminus Z(G)$ we have that $C_G(x)/Z(G)=C_{G/Z(G)}(xZ(G))\cong D_8$ and $xZ(G)\in Z(D_8)$. In particular, it follows that $V/Z(G)$ must be the normal Klein four-subgroup of ${\rm Sym}(4)$. Inside $C_G(x)/Z(G)\cong D_8$ there exists a unique cyclic subgroup of order 4 and another copy of $V_4$ which is not normal in ${\rm Sym}(4)$. Let $N,M\leq C_G(x)$ such that $N/Z(G)\cong C_4$ and $M/Z(G)$ equals the non-normal copy of $V_4$. By Theorem~\ref{BaerSolPart}, $N/Z(G)$ lies in the partition and therefore $N$ is a minimal centraliser in $G$. The subgroup $M$ contains $x$ and therefore $\langle x,Z(G)\rangle \leq Z(M)$. In particular, we see that $M/Z(M)$ must be cyclic and so $M$ is abelian. However, $x\in N\cap M$ and so $M$ cannot be a minimal centraliser in $G$. Thus $C_G(y)>M$ for all $y\in M$. As $M/Z(G)$ lies in a unique maximal subgroup isomorphic to $D_8$, it follows that for each $y\in M\setminus Z(G)$, $C_G(y)/Z(G)$ equals the unique maximal subgroup containing $M/Z(G)$. Thus $M/Z(G)\leq Z(D_8)$, which is a contradiction.
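The centraliser shapes in ${\rm Sym}(4)$ invoked in Case (2), namely order $4$ (a Klein four-group) for a single transposition and order $8$ (dihedral, hence non-abelian) for a double transposition, can be confirmed by direct enumeration. The short Python sketch below is our own sanity check, not part of the argument.

```python
# Centralisers in Sym(4): a single transposition has centraliser of order 4
# (a Klein four-group), a double transposition one of order 8 (dihedral).
from itertools import permutations

S4 = list(permutations(range(4)))
identity = tuple(range(4))

def compose(p, q):
    # (p o q)(x) = p(q(x))
    return tuple(p[x] for x in q)

def cent(x):
    return [g for g in S4 if compose(g, x) == compose(x, g)]

single = (1, 0, 2, 3)   # the transposition (0 1)
double = (1, 0, 3, 2)   # the double transposition (0 1)(2 3)
```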
\underline{\bf Case (3)}\newline In this case $N/Z(G)$ is a component of $\beta$, which implies that $N$ is abelian; hence $G$ is of type $(4)$. \underline{\bf Case (4)}\newline In this case $G/Z(G)$ is a $p$-group and therefore $G$ is nilpotent. Therefore $G=A\times P$ for $A\leq Z(G)$ abelian and $P$ a $p$-subgroup which is a ${\rm CA}_{min}$-group, so $G$ is of type $(5)$. \end{proof} \end{thm} We next show that each case occurring in Theorem~\ref{SolCAminGp} yields a ${\rm CA}_{min}$-group. \begin{prop}\label{ListAreCAmin} Any solvable group occurring in Theorem~\ref{SolCAminGp} is a ${\rm CA}_{min}$-group. \begin{proof} The solvable groups in Theorem~\ref{SolCAminGp} are those of type $(1)-(5)$. Any group of type $(5)$ is easily seen to be a ${\rm CA}_{min}$-group. For the groups of type $(1),(3)$ and $(4)$, Schmidt \cite{SchmidtCaGps} has shown that they are CA-groups and hence ${\rm CA}_{min}$-groups. This leaves only those of type $(2)$. If $x\in L\setminus Z(G)$, then it was shown in the proof of Theorem~\ref{SolCAminGp} that $C_G(x)=C_L(x)$. If $x \in G\setminus L$, then as $G/Z(G)$ is a Frobenius group, $xZ(G)$ lies in some conjugate of $K/Z(G)$ \cite[Page 496]{Huppert}. Thus assume $x\in K$. As $K$ is abelian, then $K/Z(G)\leq C_G(x)/Z(G)\leq C_{G/Z(G)}(xZ(G))\leq K/Z(G)$. That is, $K=C_G(x)$. It now follows that any two distinct minimal non-central element centralisers have intersection equal to $Z(G)$. \end{proof} \end{prop} Thus it only remains to study the non-solvable ${\rm CA}_{min}$-groups. \begin{thm}\label{NonSolCAminGp} Let $G$ be a non-solvable group. Then $G$ is a ${\rm CA}_{min}$-group if and only if $G/Z(G)\cong PSL_2(p^n)$ or $PGL_2(p^n)$ with $p^n>3$. \begin{proof} As $G/Z(G)$ admits a non-trivial normal partition, we will use the classification of Suzuki (Theorem~\ref{SuzNonSolPart}) to determine $G/Z(G)$. \underline{\bf Case (1)}\newline If $G/Z(G)$ is a Frobenius group, then as in the solvable case the kernel $L/Z(G)$ is nilpotent and the complement $K/Z(G)$ is abelian.
However, this implies that $G/Z(G)$, and therefore $G$, is solvable. \underline{\bf Case (2)}\newline If $G/Z(G)$ is isomorphic to $Sz(2^n)$, then Schmidt \cite{SchmidtCaGps} showed that the Sylow $2$-subgroups of $Sz(2^n)$ are contained in components of any non-trivial partition. However, $Sz(2^n)$ has non-abelian Sylow $2$-subgroups, and therefore $G$ cannot be a ${\rm CA}_{min}$-group. \underline{\bf Case (3)}\newline Assume $G/Z(G)\cong PSL_2(p^n)$ or $PGL_2(p^n)$ with $p^n>3$. In this case we want to show that any group arising in this way is a ${\rm CA}_{min}$-group. It is well known that every element centraliser in $PGL_2(q)$ and $PSL_2(q)$ takes one of the following forms: \begin{enumerate} \item A cyclic group of order $q$, $q-1$ or $q+1$. \item A dihedral group of order $2(q+1)$ or $2(q-1)$. \end{enumerate} Let $x\in G\setminus Z(G)$ such that $C_G(x)$ is a minimal centraliser, and set $C$ to be the subgroup of $G$ such that $C_{G/Z(G)}(xZ(G))=C/Z(G)$. If $C/Z(G)$ is cyclic, then $C$ is abelian and thus $C_G(x)=C$. If $C/Z(G)$ is dihedral, then as $xZ(G)\in Z(C/Z(G))$, it follows that $xZ(G)\in C'/Z(G)$ for $C'/Z(G)$ the cyclic subgroup of index $2$ in $C/Z(G)$. Hence $C'\leq C_G(x)\leq C$. If $C_G(x)=C$, then there exists a $y\in G$ such that $C_{G/Z(G)}(yZ(G))=C'/Z(G)$ and so $C_G(y)<C_G(x)$, contradicting minimality. Thus every minimal centraliser in $G$ is abelian and its quotient is a centraliser in $G/Z(G)$. Let $x,y\in G\setminus Z(G)$ such that $C_G(x)$ and $C_G(y)$ are distinct minimal centralisers in $G$. Then $C_G(x)/Z(G)$ and $C_G(y)/Z(G)$ are centralisers in $G/Z(G)$. It is enough to show that $(C_G(x)/Z(G))\cap (C_G(y)/Z(G))$ is trivial in $G/Z(G)$. Let $kZ(G)\in (C_G(x)/Z(G))\cap (C_G(y)/Z(G))$. Then $C_G(x)/Z(G)$ and $C_G(y)/Z(G)$ are distinct abelian centralisers in $G/Z(G)$ which are subgroups of $C_{G/Z(G)}(kZ(G))$. However, no centraliser in $PSL_2(q)$ or $PGL_2(q)$ contains two distinct abelian centralisers in $PSL_2(q)$ or $PGL_2(q)$ respectively.
Therefore $kZ(G)=Z(G)$ and hence the intersection is trivial. In particular, we have shown that $C_G(x)\cap C_G(y)=Z(G)$ and $G$ is a ${\rm CA}_{min}$-group. \end{proof} \end{thm} \begin{cor*}[Corollary~\ref{NonSolFIsCA}] Any non-solvable F-group is a CA-group. \begin{proof} By Rebmann \cite{FGroups}, any non-solvable F-group must be of the form $G/Z(G)\cong PSL_2(q)$ or $PGL_2(q)$ with some extra condition on $G'$. However, by the previous result any group such that $G/Z(G)$ has this structure is a ${\rm CA}_{min}$-group. Hence $G$ is both an F-group and a ${\rm CA}_{min}$-group, and it follows from Corollary~\ref{CA=D+F} that it is a CA-group. \end{proof} \end{cor*} \subsection{Classifying ${\rm F}_{min}$-groups} We now repeat a similar argument to the one used in the case of ${\rm CA}_{min}$-groups. Most details are omitted; however, we include details for cases (1) and (2) in the solvable case to highlight that the partition now consists of quotients of centres of centralisers. \begin{thm} Let $G$ be a solvable ${\rm F}_{min}$-group. Then $G$ is one of the following: \begin{enumerate} \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that both $K$ and $L$ are abelian. \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$ such that $K$ is abelian, $L$ is an ${\rm F}_{min}$-group, $Z(L)=Z(G)$ and $L/Z(L)$ is a $p$-group. \item $G/Z(G)\cong {\rm Sym}(4)$ and if $V/Z(G)\cong V_4$, then $V$ is non-abelian. \item $G$ has an abelian normal subgroup of index $p$ and $G$ is not abelian. \item $G\cong A\times P$, where $A$ is abelian and $P$ is a non-abelian $p$-group for some prime $p$; therefore $P$ is an ${\rm F}_{min}$-group. \end{enumerate} \begin{proof} As $G/Z(G)$ admits a non-trivial normal partition, we apply the classification of Baer (Theorem~\ref{BaerSolPart}) to determine $G/Z(G)$ and then $G$. Note that cases (3) and (4) use exactly the same argument as in Theorem~\ref{SolCAminGp} and so we shall not repeat them.
\underline{\bf Case (1)}\newline Let $L/Z(G)$ denote the Frobenius kernel of $G/Z(G)$ and $K/Z(G)$ an element in the partition of $G/Z(G)$ which is self-normalising. Then $K=Z(C_G(x))$ for some minimal centraliser $C_G(x)$ in $G$. The same argument as in Theorem~\ref{SolCAminGp} shows that $K/Z(G)$ is a Frobenius complement and that $K$ is abelian. Furthermore, recall that for $x\in L\setminus Z(G)$ we have $C_G(x)=C_L(x)$, and that $C_L(x)$ being a minimal centraliser in $L$ implies that $C_G(x)$ is a minimal centraliser in $G$. If $L$ has a unique minimal centraliser, then $L$ is abelian (recall that any non-abelian group has at least two minimal centralisers) and $G$ is thus of type $(1)$. Hence assume $L$ has two distinct minimal centralisers $C_L(x)<L$ and $C_L(y)<L$ for $x,y\in L\setminus Z(G)$. Then $C_L(x)=C_G(x)$ and $C_L(y)=C_G(y)$ are distinct minimal centralisers in $G$. Moreover $Z(L)\leq Z(C_L(x))\cap Z(C_L(y))=Z(G)$ and so $Z(L)=Z(G)$. In particular, we have shown that $L$ is an ${\rm F}_{min}$-group. Repeating the argument in Theorem~\ref{SolCAminGp} also implies $G$ is of type $(2)$. \underline{\bf Case (2)}\newline In this case $G/Z(G)\cong {\rm Sym}(4)$. Let $V\leq G$ such that $V/Z(G)\cong V_4$. If $V$ is abelian, then for all $x\in V\setminus Z(G)$ we have $V\leq C_G(x)$. As $xZ(G)$ has order $2$, it follows that $C_{G/Z(G)}(xZ(G))\cong D_8$ or $V_4$ (when $xZ(G)$ is a double or single transposition respectively). We also observe that $V/Z(G)\leq C_G(x)/Z(G)\leq C_{G/Z(G)}(xZ(G))$. If there exists an $x\in V$ such that $C_G(x)/Z(G)\cong V_4$, then $C_G(x)=V$, which is abelian and thus a minimal centraliser. However, $V_4$ is not contained in the partition of $G/Z(G)$, which yields a contradiction. Thus for all $x\in V\setminus Z(G)$ we have that $C_G(x)/Z(G)=C_{G/Z(G)}(xZ(G))\cong D_8$ and $xZ(G)\in Z(D_8)$. In particular, it follows that $V/Z(G)$ must be the normal Klein four-subgroup of ${\rm Sym}(4)$.
Inside $C_G(x)/Z(G)\cong D_8$ there exists a unique cyclic subgroup of order 4 and another copy of $V_4$ which is not normal in ${\rm Sym}(4)$. Let $N,M\leq C_G(x)$ such that $N/Z(G)\cong C_4$ and $M/Z(G)$ equals the non-normal copy of $V_4$. By Theorem~\ref{BaerSolPart}, $N/Z(G)$ lies in the partition and therefore $N$ is the centre of a minimal centraliser in $G$. The subgroup $M$ contains $x$ and therefore $\langle x,Z(G)\rangle \leq Z(M)$. In particular, we see that $M/Z(M)$ must be cyclic and so $M$ is abelian. However, $x\in Z(N)\cap Z(M)$ and so $M$ cannot be a minimal centraliser in $G$. Thus $C_G(y)>M$ for all $y\in M$. As $M/Z(G)$ lies in a unique maximal subgroup isomorphic to $D_8$, it follows that for each $y\in M\setminus Z(G)$, $C_G(y)/Z(G)$ equals the unique maximal subgroup containing $M/Z(G)$. Thus $M/Z(G)\leq Z(D_8)$, which is a contradiction. \end{proof} \end{thm} Using the same arguments as in Proposition~\ref{ListAreCAmin} gives the analogous result for ${\rm F}_{min}$-groups. \begin{prop} Any solvable group occurring in Theorem~\ref{MainThm} is an ${\rm F}_{min}$-group. \end{prop} We are therefore left to study the non-solvable ${\rm F}_{min}$-groups. \begin{thm} Let $G$ be a non-solvable group. Then $G$ is an ${\rm F}_{min}$-group if and only if $G/Z(G)\cong PSL_2(p^n)$ or $PGL_2(p^n)$ with $p^n>3$. \begin{proof} As $G/Z(G)$ admits a non-trivial normal partition, we will use the classification of Suzuki (Theorem~\ref{SuzNonSolPart}) to determine $G/Z(G)$. Note that Cases (1) and (2) use exactly the same argument as in Theorem~\ref{NonSolCAminGp} and so shall not be repeated. \underline{\bf Case (3)}\newline Assume $G/Z(G)\cong PSL_2(p^n)$ or $PGL_2(p^n)$ with $p^n>3$. We saw that any such group is a ${\rm CA}_{min}$-group, which implies it is an ${\rm F}_{min}$-group. \end{proof} \end{thm} Moreover, we now obtain the analogous corollary of Rebmann for non-solvable ${\rm F}_{min}$-groups.
\begin{cor} Any non-solvable ${\rm F}_{min}$-group is a ${\rm CA}_{min}$-group. \end{cor} Finally, by combining the two theorems in this section we obtain Theorem~\ref{MainThm}. \section{A family of F-groups which are not CA-groups} As we saw in the introduction, given the classification of ${\rm CA}_{min}$-groups, it is easy to see that there is a non-solvable ${\rm CA}_{min}$-group which is not a CA-group. However, as noted by Rebmann \cite{FGroups}, any non-solvable F-group is also a CA-group. Thus we need to consider the solvable classification of Rebmann. In particular, using \cite[Corollary 5.1]{FGroups}, if $G$ is an F-group that is not a CA-group, then $G$ must take one of the two forms: \begin{enumerate} \item $G\cong A\times P$ where $A$ is abelian and $P$ is a non-abelian $p$-group which is also an F-group. \item $G/Z(G)$ is a Frobenius group with kernel $L/Z(G)$ and complement $K/Z(G)$, $K$ is abelian, $Z(L)=Z(G)$, $L$ is an F-group and $L/Z(L)$ is a $p$-group. \end{enumerate} Note that if we have an F-group which is not a CA-group of the second type, then the subgroup $L$ cannot be a CA-group. In particular, if there exists an F-group which is not a CA-group, then there exists such a $p$-group. Thus to find an F-group which is not a CA-group, the first place to look is among $p$-groups. In particular we shall consider the class of extraspecial groups. First we state the following lemma, which will be of use to us. \begin{lm} Let $G$ be a finite group in which the derived subgroup has order $p$ for some prime $p$. Then $G$ is an F-group. \begin{proof} This result follows from the observation that the conjugacy class $g^G$ is contained in the coset $gG'$. Therefore $|g^G|$ equals $p$ or $1$. Hence $|C_G(g)|=\frac{|G|}{p}$ or $|G|$, and so every non-central element centraliser is both maximal and minimal. In particular $G$ is an F-group.
\end{proof} \end{lm} Let $G$ be an extraspecial group, usually denoted by one of the two groups $p^{1+2n}_{\pm}$ for some positive integer $n$. Then we have $G'=\Phi(G)=Z(G)$ of order $p$. By the previous lemma $G$ is an F-group. Assume $n>1$; otherwise $G$ is of order $p^3$, and it is easy to see that such groups are CA-groups. Then $G$ is isomorphic to the central product of $H$ and $P$, where $H$ is an extraspecial group of order $p^{1+2(n-1)}$ and $P$ is extraspecial of order $p^3$. Take $x\in H\setminus Z(H)$. Then $x\not\in Z(G)$; however, $P\leq C_G(x)$ and so $C_G(x)$ is non-abelian. Thus $G$ is not a CA-group. \begin{prop*}[Proposition~\ref{FNotCA}] Let $G$ be an extraspecial group of order $p^{2n+1}$ with $n>1$. Then $G$ is an F-group which is not a CA-group. \end{prop*} Note that not all F-groups which are not CA-groups occur from extraspecial groups. In particular, using GAP we can find 5 groups of order 64 which are F-groups but not CA-groups. \section*{Acknowledgments} The author gratefully acknowledges financial support by the ERC Advanced Grant $291512$. In addition the author would like to thank Benjamin Sambale for reading and discussing a preliminary version of this paper. \end{document}
\begin{document} \title{Generalized uncertainty relations and entanglement dynamics in quantum Brownian motion models} \author{C. Anastopoulos\footnote{[email protected]}, S. Kechribaris\footnote{[email protected]}, and D. Mylonas\footnote{[email protected]}\\ Department of Physics, University of Patras, 26500 Patras, Greece} \maketitle \begin{abstract} We study entanglement dynamics in quantum Brownian motion (QBM) models. Our main tool is the Wigner function propagator. Time evolution in the Wigner picture is physically intuitive and it leads to a simple derivation of a master equation for any number of system harmonic oscillators and spectral density of the environment. It also provides generalized uncertainty relations, valid for any initial state, that allow a characterization of the environment in terms of the modifications it causes to the system's dynamics. In particular, the uncertainty relations are very informative about the entanglement dynamics of Gaussian states, and to a lesser extent about other families of states. For concreteness, we apply these techniques to a bipartite QBM model, describing the processes of entanglement creation, disentanglement, and decoherence at all temperatures and time scales. \end{abstract} \section{Introduction} The study of quantum entanglement is both of practical and theoretical significance: entanglement is viewed as a physical resource for quantum-information processing and it constitutes a major issue in the foundations of quantum theory. The quantification of entanglement is difficult in multipartite systems (see, for example, Refs. \cite{PeresBook,KarolBook,AlickiBook, 4Hor}); however, there are useful separability criteria and entanglement measures for bipartite states, pure and mixed \cite{Peres, Horod, Simon, Duan, Barnum, HHH, AlickiHorod, GMVT03}.
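To make one of these criteria concrete (this sketch is our own addition, not part of the paper), the Peres partial-transpose test \cite{Peres} can be carried out by hand for a two-qubit Werner state: the state is entangled exactly when the singlet weight $p$ exceeds $1/3$, which shows up as a negative eigenvalue, and hence a negative determinant, of the partially transposed density matrix. All function names below are illustrative.

```python
# Peres partial-transpose (PPT) test for a two-qubit Werner state, pure Python.
# rho = p |psi-><psi-| + (1 - p) I/4, entangled iff p > 1/3.

def werner(p):
    # |psi-> = (|01> - |10>)/sqrt(2); basis ordered |00>, |01>, |10>, |11>
    psi = [0.0, 2 ** -0.5, -(2 ** -0.5), 0.0]
    return [[p * psi[i] * psi[j] + (1 - p) / 4 * (i == j)
             for j in range(4)] for i in range(4)]

def partial_transpose_B(rho):
    # transpose only the second qubit: entry ((a,b),(c,d)) moves to ((a,d),(c,b))
    pt = [[0.0] * 4 for _ in range(4)]
    for i in range(4):
        for j in range(4):
            a, b = divmod(i, 2)
            c, d = divmod(j, 2)
            pt[2 * a + d][2 * c + b] = rho[i][j]
    return pt

def det(m):
    # Laplace expansion along the first row; fine for a 4x4 matrix
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))
```

For the Werner state the partial transpose has eigenvalues $(1+p)/4$ (three times) and $(1-3p)/4$, so its determinant changes sign exactly at $p=1/3$; a negative determinant certifies entanglement here because at most one eigenvalue can be negative.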
Realistic quantum systems, including multipartite ones, cannot avoid interactions with their environments, which can degrade their quantum coherence and entanglement. Thus quantum decoherence and disentanglement are obstacles to quantum-information processing \cite{RajRendell,Diosi,Dodd,DoddHal}. On the other hand, some environments act as intermediates that generate entanglement in multipartite systems, even if the components do not interact directly \cite{Braun, BFP, OK}. The theoretical study of entanglement dynamics in open quantum systems has uncovered important physical effects, such as the sudden death of entanglement \cite{YE, suddeath}, entanglement revival after sudden death \cite{FicTan06}, the significance of non-Markovian effects \cite{ASH, nonmark, CYH}, the possibility of a rich phase structure for the asymptotic behavior of entanglement \cite{PR1, PR2}, and intricacies in the evolution of entanglement in multipartite systems \cite{Li10}. Here, we study entanglement and decoherence in quantum Brownian motion (QBM) models \cite{HPZ, QBM, QBM2}, focusing on their description in terms of generalized uncertainty relations. Our main tool in this study is the Wigner function propagator. QBM models are defined by a quadratic total Hamiltonian, and they are characterized by a Gaussian propagator. This propagator is solely determined by two matrices: one corresponding to the classical dissipative equations of motion and one containing the effect of environment-induced diffusion. In Sec. II we provide explicit formulas for their determination. The simplicity of time evolution in the Wigner picture leads to a concise derivation of an exact master equation for general QBM models, with any number of system oscillators and spectral density. Moreover, time evolution in the Wigner picture allows for a derivation of generalized uncertainty relations, valid for {\em any} initial state, that incorporate the influence of the environment upon the system. 
These uncertainty relations generalize the ones of Ref. \cite{AnHa} to QBM models with an arbitrary number of system oscillators---see also Refs. \cite{HZ, CYH}. Their most important feature is that the lower bound is independent of the initial state, and for this reason, they allow for general statements about the process of decoherence and thermalization. The uncertainty relations are also related to separability criteria for bipartite systems \cite{Simon, GMVT03}. Hence, they provide an important tool for the study of entanglement dynamics. For Gaussian states, in particular, the uncertainty relations derived here provide a general characterization of processes such as entanglement creation and disentanglement without the need to specify detailed properties of the initial state. However, uncertainty relations do not suffice to distinguish all entangled {\em non-Gaussian} states. For such states, the description of entanglement dynamics from the uncertainty relations is rather partial, but still leads to nontrivial results. The uncertainty relations derived in this article apply to any open quantum system characterized by Gaussian propagation, and they are expressed solely in terms of the coefficients of the Wigner function propagator. They can be used for the study of entanglement dynamics, not only in bipartite but also in multipartite systems. To demonstrate their usefulness, we apply them to a concrete bipartite QBM model system that has been studied by Paz and Roncaglia \cite{PR1, PR2}. In this model, there exist two coupled subalgebras of observables, only one of which couples directly to the environment. For a special case of the system parameters, considered in Ref. \cite{PR1}, one of the subalgebras is completely decoupled, and thus there exists a decoherence-free subspace for the system. Here we focus on the generic case, also explored in Ref. \cite{PR2}.
We find that in the high-temperature regime, decoherence and disentanglement are generic and the uncertainty relations allow for an identification of the characteristic timescales, which in some cases may be of very different orders of magnitude. At low temperature, entanglement creation often occurs and we demonstrate that it is accompanied by ``entanglement oscillations'', that is, a sequence of entanglement sudden death and revivals at early times. In this regime, there is no decoherence, and disentanglement arises because of relaxation. At a time scale of the order of the relaxation time the system tends to a unique asymptotic state, which coincides with a thermal state in the weak-coupling limit. The generalized uncertainty relations allow for the determination of upper limits to disentanglement time with respect to all Gaussian initial states. The structure of the article is the following. In Sec. II we construct the Wigner function propagator for the most general QBM model and we provide explicit formulas for the propagator's coefficients. The master equation is then simply obtained from the propagator. In Sec. III we construct the generalized uncertainty relations valid for all QBM models, we show that they can be used for the study of multipartite entanglement, and we then consider their special case in the model of Refs. \cite{PR1, PR2}. In Sec. IV we employ the uncertainty relations for the study of decoherence, disentanglement, and entanglement creation in different regimes and time scales of this model. \section{Quantum Brownian motion models for multipartite systems} In this section, we consider the most general setup for quantum Brownian motion, namely, a system of $N$ harmonic oscillators of masses $M_r$ and frequencies $\Omega_r$ interacting with a heat bath. The heat bath is modeled by a set of harmonic oscillators of masses $m_i$ and frequencies $\omega_i$, initially at a thermal state of temperature $T$.
The Hamiltonian of the total system is a sum of three terms $\hat{H} = \hat{H}_{sys} + \hat{H}_{env} + \hat{H}_{int}$, where \begin{eqnarray} \hat{H}_{sys} &=& \sum_r\left( \frac{1}{2M_r} \hat{P}_r^2 + \frac{M_r \Omega^2_r}{2} \hat{X}_r^2\right) \label{ho}\\ \hat{H}_{env} &=& \sum_i \left(\frac{1}{2m_i} \hat{p}_i^2 + \frac{m_i \omega_i^2}{2} \hat{q}_i^2\right)\\ \hat{H}_{int} &=& \sum_i \sum_r c_{ir} \hat{X}_r \hat{q}_i, \label{hint} \end{eqnarray} where $\hat{X}_r$ and $\hat{P}_r$ are the position and momentum operators for the system oscillators and $\hat{q}_i$ and $\hat{p}_i$ are the position and momentum operators for the environment oscillators. The interaction Hamiltonian Eq. (\ref{hint}) involves different couplings $c_{ir}$ of each system oscillator to the bath. Thus it can also be used to describe systems different from the classic setup of Brownian motion, for example, particle detectors at different locations interacting with a quantum field \cite{LinHu}. For an initial state that is factorized in system and environment degrees of freedom the evolution of the reduced density matrix for the system variables is {\em autonomous}, and it can be expressed in terms of a master equation. For the issues we explore in this article, in particular entanglement dynamics, the determination of the propagator of the reduced density matrix is more important than the construction of the master equation, because it allows us to follow the time evolution of the relevant observables. The construction of the propagator is simpler in the Wigner picture. Instead of the density operator, we work with the Wigner function, defined by \begin{eqnarray} W({\bf X},{\bf P}) = \frac{1}{(2 \pi)^N} \int d^N \zeta e^{-i {\bf P} \cdot {\bf \zeta}} \hat{\rho}({\bf X} + \frac{1}{2}{\bf \zeta}, {\bf X}- \frac{1}{2}{\bf \zeta}). \end{eqnarray} Its inverse is \begin{eqnarray} \hat{\rho}({\bf X},{\bf X'}) = \int d^NP \; e^{i{\bf P} \cdot ({\bf X} - {\bf X'})} \; W(\frac{1}{2}({\bf X} + {\bf X'}), {\bf P}).
\end{eqnarray} For a factorized initial state, time evolution in QBM models is encoded in the density matrix propagator $J({\bf X}_f, {\bf Y}_f, t| {\bf X}_0, {\bf Y}_0,0)$, defined by \begin{eqnarray} \hat{\rho}_t({\bf X}_f, {\bf Y}_f) = \int d^NX_0 d^NY_0 \; \; J({\bf X}_f, {\bf Y}_f, t| {\bf X}_0, {\bf Y}_0, 0) \hat{\rho}_0({\bf X}_0, {\bf Y}_0). \label{jprop} \end{eqnarray} The Wigner function propagator is defined as \begin{eqnarray} K({\bf X}_f, {\bf P}_f, t| {\bf X}_0, {\bf P}_0, 0) = \int \frac{d^N \zeta_f d^N\zeta_0}{(2\pi)^N} e^{i {\bf P}_{0}\cdot{\bf \zeta}_0 - i {\bf P}_{f} \cdot {\bf \zeta}_f} \; J({\bf X}_f + \frac{{\bf \zeta}_f}{2}, {\bf X}_f - \frac{{\bf \zeta}_f}{2}, t| {\bf X}_0 + \frac{{\bf \zeta}_0}{2}, {\bf X}_0 - \frac{ {\bf \zeta}_0}{2},0). \label{wfprop} \end{eqnarray} Denoting the phase-space coordinates by the vector \begin{eqnarray} \xi_a = (X_1, P_1, X_2, P_2, \ldots, X_N, P_N), \hspace{1cm} a = 1, 2, \ldots, 2N, \label{xidef} \end{eqnarray} we write the Wigner function propagator compactly as $K_t(\xi_f, \xi_0)$ and express Eq. (\ref{jprop}) as \begin{eqnarray} W_t(\xi) = \int \frac{d^{2N} \xi_0}{(2 \pi)^N} K_t(\xi, \xi_0) W_0(\xi_0), \label{wt} \end{eqnarray} where $W_t$ and $W_0$ are the Wigner functions at times $t$ and $0$, respectively. In QBM models the Wigner function propagator is Gaussian. This follows from the fact that the total Hamiltonian for the system is quadratic and the initial state for the bath is Gaussian. The most general form of a Gaussian Wigner function propagator is \begin{eqnarray} K_t(\xi_f, \xi_0) = \frac{\sqrt{\det S^{-1}(t)}}{\pi^N} \exp \left[ - \frac{1}{2} [\xi_f^a - \xi_{cl}^a(t)] S^{-1}_{ab}(t) [\xi_f^b - \xi_{cl}^b(t)] \right], \label{gauss} \end{eqnarray} where $S^{-1}_{ab}(t)$ is a positive real-valued matrix, and $\xi_{cl}(t)$ is the solution of the corresponding classical equations of motion (including dissipation) with initial condition $\xi = \xi_0$ at $t = 0$.
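For orientation, Gaussian propagation can be illustrated numerically: under Eq. (\ref{gauss}), first moments evolve through $\xi_{cl}$ and second moments through the pair $(R, S)$ alone. A minimal sketch for a single oscillator, with toy matrices and assumed parameter values ($\hbar = k_B = 1$; the numbers are illustrative, not taken from the model):

```python
import numpy as np

def propagate_covariance(V0, R, S):
    """Gaussian propagation: classical evolution of the initial
    correlations (R V0 R^T) plus environment-induced fluctuations (S)."""
    return R @ V0 @ R.T + S

# Toy single-oscillator matrices: damped rotation for the classical flow,
# a noise term saturating to an assumed thermal value.
gamma, Omega, t = 0.1, 1.0, 30.0
c, s = np.cos(Omega * t), np.sin(Omega * t)
R = np.exp(-0.5 * gamma * t) * np.array([[c, s], [-s, c]])
S = (1.0 - np.exp(-gamma * t)) * 2.5 * np.eye(2)  # 2.5 = assumed (1/2)coth(Omega/2T)

V0 = 0.5 * np.eye(2)   # vacuum state: saturates the uncertainty relation
Vt = propagate_covariance(V0, R, S)
```

For $t \gg 1/\gamma$ the first term is exponentially suppressed and $V_t$ approaches $S$, illustrating the loss of memory of the initial state.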
The equations of motion are linear, so $\xi_{cl}(t)$ is of the form \begin{eqnarray} \xi^a_{cl}(t) = R^{a}_b(t) \xi_0^b, \label{ceq} \end{eqnarray} in terms of a matrix $R^a_b(t)$. Equation (\ref{gauss}) holds if there are no ``decoherence-free'' subalgebras, that is, if there is no subalgebra of the canonical variables that remains decoupled from the environment. Observables in such a decoupled subalgebra evolve with a delta-function propagator, rather than with a Gaussian. However, this case corresponds to a set of measure zero in the space of parameters, and it can be obtained as a weak limit of the generic expression, Eq. (\ref{gauss}). In order to specify the Wigner function propagator, we must construct the matrix-valued functions $R(t)$ and $S(t)$. To this end, we consider the two-point correlation matrix $V$ of a quantum state $\hat{\rho}$, defined by \begin{eqnarray} V_{ab} = \frac{1}{2} Tr\left[\hat{\rho} (\hat{\xi}_a \hat{\xi}_b + \hat{\xi}_b \hat{\xi}_a) \right] - Tr (\hat{\rho} \hat{\xi}_a) Tr(\hat{\rho} \hat{\xi}_b). \label{Vab} \end{eqnarray} Gaussian propagation decouples the evolution of two-point correlations from any higher-order correlations. From Eqs. (\ref{wt}) and (\ref{gauss}), we find that the two-point correlation matrix, Eq. (\ref{Vab}), at time $t$ is \begin{eqnarray} V_t = R(t)V_0 R^T(t) + S(t), \label{Vt} \end{eqnarray} where $V_0$ is the correlation matrix of the initial state. The first term in the right-hand side of Eq. (\ref{Vt}) corresponds to the evolution of the initial phase-space correlations according to the {\em classical} equations of motion. The second term incorporates the effect of environment-induced fluctuations and it does not depend on the initial state. Hence, the matrix $S$ can be explicitly constructed, by identifying the part of the correlation matrix that does not depend on the initial state. To this end, we proceed as follows.
From the Heisenberg-picture evolution of the bath oscillators, we obtain the equations \begin{eqnarray} \ddot{\hat{q}}_i (t) + \omega_i^2 \hat{q}_i (t) = \sum_r \frac{c_{ir}}{m_i} \hat{X}_r(t), \label{qeq} \end{eqnarray} with solution \begin{eqnarray} \hat{q}_i(t) = \hat{q}_i^0(t) + \sum_r\frac{c_{ir}}{m_i \omega_i} \int_0^t ds \sin\left(\omega_i(t-s)\right) \hat{X}_r(s), \label{q1} \end{eqnarray} where \begin{eqnarray} \hat{q}_i^0(t) = \hat{q}_i \cos \left(\omega_i t\right) + \frac{\hat{p}_i}{m_i \omega_i} \sin\left(\omega_it\right). \end{eqnarray} For the system variables, we obtain \begin{eqnarray} \ddot{\hat{X}}_r (t) + \Omega_r^2 \hat{X}_r (t) + \frac{2}{M_r} \sum_{r'} \int_0^t ds \gamma_{rr'}(t-s) \hat{X}_{r'}(s) = \sum_i \frac{c_{ir}}{M_r} \hat{q}^0_i(t) \label{Xeq}, \end{eqnarray} where \begin{eqnarray} \gamma_{rr'} (s) = - \sum_i \frac{c_{ir} c_{ir'}}{2 m_i \omega_i^2} \sin\left(\omega_i s\right) \end{eqnarray} is the dissipation kernel. In general, the matrix $\gamma_{rr'}$ is symmetric and has $\frac{1}{2}N(N+1)$ independent terms, each defining a different relaxation time-scale for the system. However, symmetries of the couplings $c_{ir}$ may reduce the number of independent components of the dissipation kernel. The solution of Eq. (\ref{Xeq}) is \begin{eqnarray} \hat{X}_r(t) = \sum_{r'} (\dot{v}_{rr'}(t) \hat{X}_{r'} + \frac{1}{M_{r'}}v_{rr'}(t) \hat{P}_{r'}) +\sum_{r'} \frac{1}{M_{r'}} \int_0^t ds v_{r r'}(t-s) \sum_i c_{ir'} \hat{q}_i^0(s), \label{Xsol} \end{eqnarray} where $v_{rr'}(t)$ is the solution of the homogeneous part of Eq. (\ref{Xeq}), with initial conditions $v_{rr'}(0) = 0 $ and $\dot{v}_{rr'}(0) = \delta_{rr'} $. It can be expressed as an inverse Laplace transform \begin{eqnarray} v(t) = {\cal L}^{-1} [A^{-1}(z)], \label{vt} \end{eqnarray} where $A_{rr'}(z) = (z^2 + \Omega_r^2)\delta_{rr'} + \frac{2}{M_r} \tilde{\gamma}_{rr'}(z)$ and $\tilde{\gamma}_{rr'}(z)$ is the Laplace transform of the dissipation kernel.
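As a sanity check on the definition of $v(t)$, consider a single oscillator with a local (ohmic-type) dissipation kernel, for which the homogeneous equation reduces to $\ddot{v} + 2\gamma \dot{v} + \Omega^2 v = 0$. A sketch with assumed parameter values, verifying the initial conditions and the dissipative contraction of phase-space areas:

```python
import numpy as np

gamma, Omega, M = 0.1, 1.0, 1.0          # assumed parameters, gamma << Omega
w = np.sqrt(Omega**2 - gamma**2)

def v(t):       # homogeneous solution with v(0) = 0, v'(0) = 1
    return np.exp(-gamma * t) * np.sin(w * t) / w

def vdot(t):
    return np.exp(-gamma * t) * (np.cos(w * t) - (gamma / w) * np.sin(w * t))

def vddot(t):   # from the equation of motion: v'' = -2 gamma v' - Omega^2 v
    return -2.0 * gamma * vdot(t) - Omega**2 * v(t)

def R(t):       # classical map (X, P) -> (X(t), P(t))
    return np.array([[vdot(t),      v(t) / M],
                     [M * vddot(t), vdot(t)]])
```

In this limit $\det R(t) = \dot{v}^2 - v\ddot{v} = e^{-2\gamma t}$ exactly: the phase-space area of each conjugate pair contracts at rate $2\gamma$, which is the origin of the factor $e^{-\gamma t}$ multiplying $\Omega$ in Eq. (\ref{gunc5}) below.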
The classical equations of motion follow from the expectation values of $\hat{X}_r$ and $\hat{P}_r = M_r \dot{X}_r$ in Eq. (\ref{Xsol}) \begin{eqnarray} \left(\begin{array}{c} X(t) \\ P(t) \end{array} \right) = \left( \begin{array}{cc} \dot{v}(t) & v(t) M^{-1}\\ M \ddot{v}(t) & M \dot{v}(t) M^{-1} \end{array} \right) \left(\begin{array}{c} X(0) \\ P(0) \end{array} \right), \label{ceq2} \end{eqnarray} where $M = \mbox{diag} ( M_1, \ldots, M_N)$ is the mass matrix for the system. The matrix $R$ of Eq. (\ref{ceq}) follows from Eq. (\ref{ceq2}) by a relabeling of coordinates according to the definition of the vector $\xi^a$, Eq. (\ref{xidef}). We next employ Eq. (\ref{Xsol}), in order to construct the correlation matrix Eq. (\ref{Vab}). Using the following equation for the symmetrized correlation functions of harmonic oscillators in a thermal state at temperature $T$, \begin{eqnarray} \frac{1}{2}\langle \hat{q}_i^0(s) \hat{q}_j^0(s') + \hat{q}_j^0(s') \hat{q}_i^0(s)\rangle_{T} = \delta_{ij} \frac{1}{2m_i\omega_i} \coth \left(\frac{\omega_i}{2T}\right) \cos \left(\omega_i(s-s')\right), \end{eqnarray} we find \begin{eqnarray} S_{X_r X_{r'}} &=& \sum_{qq'} \frac{1}{M_q M_{q'}} \int_0^t ds \int_0^t ds' v_{rq}(s) \nu_{qq'}(s-s') v_{q'r'}(s'),\label{sxx} \\ S_{P_r P_{r'}} &=& M_r M_{r'} \sum_{qq'} \frac{1}{M_q M_{q'}} \int_0^t ds \int_0^t ds' \dot{v}_{rq}(s) \nu_{qq'}(s-s') \dot{v}_{q'r'}(s'),\\ S_{X_r P_{r'}} &=& M_{r'} \sum_{qq'} \frac{1}{M_q M_{q'}} \int_0^t ds \int_0^t ds' v_{rq}(s) \nu_{qq'}(s-s') \dot{v}_{q'r'}(s') \label{sxp}, \end{eqnarray} where the symmetric matrix \begin{eqnarray} \nu_{rr'}(s) = \sum_i \frac{c_{ir} c_{ir'}}{2 m_i \omega_i^2} \coth \left( \frac{\omega_i}{2T}\right)\cos \left(\omega_is\right) \end{eqnarray} is the noise kernel. Similarly to the dissipation kernel, the noise kernel has $\frac{1}{2}N(N+1)$ independent components. Equations (\ref{sxx})--(\ref{sxp}) together with the classical equations of motion (\ref{ceq2}) fully specify the Wigner function propagator.
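A quick numerical consistency check of Eqs. (\ref{sxx})--(\ref{sxp}): for a single oscillator in the classical (high-temperature, white-noise) limit, the asymptotic fluctuations must reproduce equipartition. The sketch below assumes local dissipation with friction $2\gamma$ and a correspondingly normalized white-noise kernel, $\nu(s-s') \approx 4M\gamma T\,\delta(s-s')$, which collapses the double integrals to single ones ($\hbar = k_B = 1$; all parameter values and the white-noise normalization are assumptions of this illustration):

```python
import numpy as np

gamma, Omega, M, T = 0.05, 1.0, 1.0, 20.0   # high-temperature regime, T >> Omega
w = np.sqrt(Omega**2 - gamma**2)

s = np.linspace(0.0, 200.0, 400001)         # integrate well past 1/gamma
ds = s[1] - s[0]
v = np.exp(-gamma * s) * np.sin(w * s) / w  # homogeneous solution, v(0)=0, v'(0)=1
vd = np.exp(-gamma * s) * (np.cos(w * s) - (gamma / w) * np.sin(w * s))

def integral(f):                            # trapezoid rule on the uniform grid
    return ds * (f.sum() - 0.5 * (f[0] + f[-1]))

# White-noise limit: the double integrals in S_XX and S_PP reduce to
# single integrals weighted by the noise strength 4*M*gamma*T.
S_XX = (4.0 * gamma * T / M) * integral(v * v)
S_PP = 4.0 * M * gamma * T * integral(vd * vd)
```

The asymptotic values reproduce the classical equipartition results $\langle X^2\rangle = T/(M\Omega^2)$ and $\langle P^2\rangle = MT$, as expected of the environment-induced part of the correlation matrix at late times.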
The master equation in the Wigner representation easily follows, by taking the time derivative of Eq. (\ref{wt}) and using the identities \begin{eqnarray} \int \frac{d^{2N} \xi_0}{(2 \pi)^N} (\xi - \xi_{cl})^a K_t(\xi_f, \xi_0) W_0(\xi_0) &=& - S^{ab} \frac{\partial W_t(\xi)}{\partial \xi^b}, \\ \int \frac{d^{2N} \xi_0}{(2 \pi)^N} (\xi - \xi_{cl})^a (\xi - \xi_{cl})^b K_t(\xi_f, \xi_0) W_0(\xi_0) &=& S^{ab} W_t(\xi) + S^{ac}S^{bd} \frac{\partial^2 W_t(\xi)}{\partial \xi^c \partial \xi^d}. \end{eqnarray} The result is \begin{eqnarray} \frac{\partial W_t}{\partial t} = -(\dot{R}R^{-1})^a_b \frac{\partial (\xi^b W_t)} {\partial \xi^a} + \left(\frac{1}{2} \dot{S}^{ab} - (\dot{R}R^{-1})_c^{(a} S^{cb)}\right) \frac{\partial^2 W_t(\xi)}{\partial \xi^a \partial \xi^b}. \label{master} \end{eqnarray} The method leading to the master equation (\ref{master}) is a generalization of the approach in Ref. \cite{HalYu} for the derivation of the Hu-Paz-Zhang master equation for $N = 1$. To the best of our knowledge, the only other derivation of the QBM master equation in such a general setup (also including external force terms) is the one by Fleming, Roura and Hu, Ref. \cite{QBM2}. The benefit of the present derivation is that, by construction, it also provides the solution of the master equation, i.e., explicit formulas for the coefficients of the propagator. The first term in the right-hand side of Eq. (\ref{master}) corresponds to the Hamiltonian and dissipation terms, and the second one to diffusion, with diffusion functions $D^{ab}(t) = \frac{1}{2} \dot{S}^{ab} - (\dot{R}R^{-1})_c^{(a} S^{cb)}$. A necessary condition for the master equation to be Markovian is that dissipation is local, that is, that the matrix $A:= \dot{R}R^{-1}$ is time independent. Then $A$ is a generator of a one-parameter semi-group on the classical-state space. Moreover, the diffusion functions must be constant, which implies that $S$ must be a solution of the equation $\ddot{S} = A\dot{S}+\dot{S}A^T$.
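The Markov condition stated above can be checked numerically in the single-oscillator, local-dissipation example: the generator $\dot{R}R^{-1}$, estimated by finite differences, is time independent. A sketch with assumed parameter values:

```python
import numpy as np

gamma, Omega, M = 0.1, 1.0, 1.0              # assumed parameters
w = np.sqrt(Omega**2 - gamma**2)

def R(t):
    """Classical propagator for one oscillator with local (ohmic) dissipation."""
    v = np.exp(-gamma * t) * np.sin(w * t) / w
    vd = np.exp(-gamma * t) * (np.cos(w * t) - (gamma / w) * np.sin(w * t))
    vdd = -2.0 * gamma * vd - Omega**2 * v
    return np.array([[vd, v / M], [M * vdd, vd]])

def generator(t, eps=1e-6):
    """Finite-difference estimate of A(t) = R'(t) R(t)^{-1}."""
    Rdot = (R(t + eps) - R(t - eps)) / (2.0 * eps)
    return Rdot @ np.linalg.inv(R(t))

# For local dissipation, generator(t) equals the constant matrix A at every t,
# the drift generator of a one-parameter semigroup on phase space.
A = np.array([[0.0, 1.0 / M], [-M * Omega**2, -2.0 * gamma]])
```

For a nonlocal dissipation kernel the same finite-difference construction yields a genuinely time-dependent $A(t)$, signaling non-Markovian dynamics.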
\section{Generalized uncertainty relations} In this section, we derive generalized uncertainty relations for the QBM models described in Sec. II, which are relevant to the discussion of entanglement dynamics. \subsection{Background} Let ${\cal H} = L^2(R^N)$ be the Hilbert space of a quantum system corresponding to a classical phase-space $R^{2N}$. ${\cal H}$ carries a representation of the canonical commutation relations \begin{eqnarray} [\hat{q}_i, \hat{p}_j] = i \delta_{ij}, \hspace{0.5cm} i, j = 1, \ldots, N. \end{eqnarray} We employ a vector notation, analogous to Eq. (\ref{xidef}), for the canonical operators $\hat{q}_i$ and $\hat{p}_i$. Then the commutation relations take the form \begin{eqnarray} [\hat{\xi}_a, \hat{\xi}_b] = i \Omega_{ab}, \hspace{0.5cm} a,b = 1,2, \ldots 2N, \end{eqnarray} where \begin{eqnarray} \Omega = \left(\begin{array}{cccc} J &0 & \ldots &0 \\ 0& J & \ldots &0 \\ \ldots& & &\\ 0& 0& \ldots &J \end{array} \right) \hspace{1.2cm} J = \left(\begin{array}{cc} 0&1 \\ -1&0\end{array} \right). \end{eqnarray} The standard uncertainty relations for this system take the form \begin{eqnarray} V \geq - \frac{i}{2} \Omega. \label{V} \end{eqnarray} For a bipartite system, with $n$ degrees of freedom for the first subsystem and $N-n$ for the second, the Peres-Horodecki partial transpose operation defines a transformation $\xi \rightarrow \Lambda \xi$, where $\Lambda$ inverts the momenta of the second subsystem. Then, the correlation matrix of a separable state satisfies the inequality \cite{Simon} \begin{eqnarray} V \geq - \frac{i}{2} \tilde{\Omega}, \hspace{1cm} \tilde{\Omega} = \Lambda \Omega \Lambda. \label{V2} \end{eqnarray} Of special interest is the case $N = 2$, where Eqs. (\ref{V}) and (\ref{V2}) lead to a simple, if weaker, set of uncertainty relations. These have a simple generalization in the QBM model considered in this article.
We introduce the variables \begin{eqnarray} X_+ = \frac{1}{2}(X_1 + X_2) , \hspace{1cm} P_+ = P_1 + P_2 , \\ X_- = \frac{1}{2}(X_1 - X_2) , \hspace{1cm} P_- = P_1 - P_2. \end{eqnarray} The partial transpose operation then interchanges $P_+$ with $P_-$, that is, \begin{eqnarray} \Lambda (X_+, P_+, X_-, P_-) = (X_+, P_-, X_- , P_+). \end{eqnarray} Hence, the uncertainty relations, \begin{eqnarray} {\cal A}_{X_+P_+} := (\Delta X_+)^2 (\Delta P_+ )^2 - V_{X_+P_+}^2 \geq \frac{1}{4}, \hspace{0.8cm} {\cal A}_{X_-P_-} := (\Delta X_-)^2 (\Delta P_- )^2 - V_{X_-P_-}^2 \geq \frac{1}{4} \label{unc2a}, \end{eqnarray} satisfied by any pair of conjugate variables (they follow from the positivity of the $2\times 2$ diagonal subdeterminants of $V + \frac{i}{2}\Omega$), imply that a factorized state must satisfy the following relations \begin{eqnarray} {\cal A}_{X_+ P_-} := (\Delta X_+)^2 (\Delta P_- )^2 - V_{X_+ P_-}^2 \geq \frac{1}{4}, \hspace{0.8cm} {\cal A}_{X_- P_+} := (\Delta X_-)^2 (\Delta P_+ )^2 - V_{X_-P_+}^2 \geq \frac{1}{4} \label{unc2b}. \end{eqnarray} If either inequality in Eq. (\ref{unc2b}) is violated, then the state is entangled. Hence, the uncertainty functions ${\cal A}_{X_+ P_-}$ and ${\cal A}_{X_- P_+}$ provide witnesses of entanglement for any state. They are weaker than the full Eq. (\ref{V2}). Equation (\ref{V2}) fully specifies entanglement in all Gaussian states, while Eq. (\ref{unc2b}) does so only for pure Gaussian states.
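The witness functions in Eq. (\ref{unc2b}) are straightforward to evaluate numerically from a correlation matrix. A sketch with the ordering $(X_1, P_1, X_2, P_2)$ and a two-mode squeezed vacuum as test state ($\hbar = 1$; the squeezing value is assumed for illustration):

```python
import numpy as np

def witness_areas(V):
    """Uncertainty functions A_{X+P-} and A_{X-P+} for a correlation matrix
    ordered as (X1, P1, X2, P2); values below 1/4 witness entanglement."""
    T = np.array([[0.5, 0.0,  0.5,  0.0],   # X+ = (X1 + X2)/2
                  [0.0, 1.0,  0.0,  1.0],   # P+ = P1 + P2
                  [0.5, 0.0, -0.5,  0.0],   # X- = (X1 - X2)/2
                  [0.0, 1.0,  0.0, -1.0]])  # P- = P1 - P2
    W = T @ V @ T.T                         # covariance of (X+, P+, X-, P-)
    A_pm = W[0, 0] * W[3, 3] - W[0, 3]**2   # A_{X+P-}
    A_mp = W[2, 2] * W[1, 1] - W[2, 1]**2   # A_{X-P+}
    return A_pm, A_mp

# Two-mode squeezed vacuum with squeezing parameter r (assumed value):
r = 0.5
c, s = np.cosh(2 * r), np.sinh(2 * r)
V = 0.5 * np.array([[c, 0,  s,  0],
                    [0, c,  0, -s],
                    [s, 0,  c,  0],
                    [0, -s, 0,  c]])
A_pm, A_mp = witness_areas(V)
```

Here $A_{X_-P_+} = e^{-4r}/4 < \frac{1}{4}$ for any $r > 0$, so the squeezed pair $(X_-, P_+)$ witnesses the entanglement, while $A_{X_+P_-}$ stays above the bound; for the vacuum both functions equal exactly $\frac{1}{4}$.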
(\ref{gunc1}) depends only on the coefficients of the Wigner function propagator and not on any properties of the initial state. Hence, Eq. (\ref{gunc1}) provides a lower bound to the correlation matrix at time $t$, for a system that comes into contact with a heat bath at time $t = 0$. Equality in Eq. (\ref{gunc1}) is achieved for pure Gaussian states. The bound is to be understood in the sense of an envelope. No single Gaussian state saturates the bound in Eq. (\ref{gunc1}) at all moments of time, but equality is achieved by a different family of Gaussians at each moment $t$. \subsubsection{Bipartite entanglement} When applied to a bipartite system, Eq. (\ref{gunc1}) implies that the condition \begin{eqnarray} - \frac{i}{2} R(t) \Omega R^T(t) + S(t) < -\frac{i}{2} \tilde{\Omega} \label{gunc2} \end{eqnarray} is sufficient for the existence of entangled states at time $t$, irrespective of the degradation caused by the environment. For Gaussian initial states, this condition is also necessary. For a factorized initial state, Eqs. (\ref{Vt}) and (\ref{V2}) yield \begin{eqnarray} V_t \geq - \frac{i}{2} R(t) \tilde{\Omega} R^T(t) + S(t). \label{gunc3} \end{eqnarray} Inequality (\ref{gunc3}) is saturated for {\em factorized} pure Gaussian states, and, similarly to Eq. (\ref{gunc1}), the lower bound to the correlation matrix is to be understood as an envelope. If an initially factorized state remains factorized at time $t$, then $V_t \geq -\frac{i}{2}\tilde{\Omega}$. Then Eq. (\ref{gunc3}) implies that the inequality \begin{eqnarray} - \frac{i}{2} \left( R(t) \tilde{\Omega} R^T(t) - \tilde{\Omega} \right) + S(t) \leq 0 \label{gunc4} \end{eqnarray} is a necessary condition for the preservation of factorizability at time $t$. \subsubsection{Tripartite entanglement} Equations (\ref{Vt}) and (\ref{gunc1}) apply to systems of $N$ oscillators.
Used in conjunction with suitable separability criteria for multipartite systems \cite{DCT}, they also allow the derivation of uncertainty relations relevant to multipartite systems. For example, we can use the criteria of Ref. \cite{GKLC}, which apply to systems of three oscillators, labeled by the index $i = 1, 2, 3$. One defines the matrices $\Lambda_i$ that effect partial transposition with respect to the $i$th subsystem. Then, separable states satisfy \begin{eqnarray} V \geq - \frac{i}{2} \tilde{\Omega}_i, \hspace{1cm} \tilde{\Omega}_i = \Lambda_i \Omega \Lambda_i, \label{unc6} \end{eqnarray} for all $i$. There are some subtleties in the application of the criterion Eq. (\ref{unc6}) for Gaussians: there exist states that satisfy Eq. (\ref{unc6}) that are not fully separable, but only biseparable with respect to all possible bipartite splits---see Ref. \cite{GKLC} for details. However, the reasoning of Sec. III B 1 applies. The condition \begin{eqnarray} - \frac{i}{2} R(t) \Omega R^T(t) + S(t) < -\frac{i}{2} \tilde{\Omega}_i, \label{guncb1} \end{eqnarray} for all $i$, is sufficient for the existence of entangled states at time $t$, irrespective of the degradation caused by the environment. For a factorized initial state, Eqs. (\ref{Vt}) and (\ref{unc6}) yield \begin{eqnarray} V_t \geq - \frac{i}{2} R(t) \tilde{\Omega}_i R^T(t) + S(t), \label{guncb2} \end{eqnarray} for all $i$. Equation (\ref{guncb2}) implies that the condition \begin{eqnarray} - \frac{i}{2} \left( R(t) \tilde{\Omega}_i R^T(t) - \tilde{\Omega}_i \right) + S(t) \leq 0 \label{guncb3} \end{eqnarray} is necessary for the preservation of factorizability at time $t$. \subsection{A case model} The uncertainty relations (\ref{gunc1})--(\ref{gunc4}) hold for any Gaussian QBM system and depend only on the matrices $R$ and $S$ defining the density-matrix propagator, for which explicit expressions were given in Sec. II.
In what follows, we elaborate on these relations in the context of a specific QBM model for a bipartite system, which has been studied by Paz and Roncaglia \cite{PR1, PR2}. In this model, the system consists of two harmonic oscillators with equal masses $M$ and frequencies $\Omega_1$ and $\Omega_2$. We also consider symmetric coupling to the environment, that is, $c_{i1} = c_{i2}:= c_i$ in Eq. (\ref{hint}). The latter assumption is a strong simplification, because the dissipation and noise kernels then become scalars, \begin{eqnarray} \gamma(s) &=& \int d \omega I(\omega) \sin \left(\omega s\right) \left(\begin{array}{cc}1&1\\1&1 \end{array} \right), \\ \nu(s) &=& \int d \omega I(\omega) \coth \left(\frac{\omega}{2T}\right) \cos \left(\omega s\right) \left(\begin{array}{cc}1&1\\1&1 \end{array} \right), \end{eqnarray} where \begin{eqnarray} I(\omega) = \sum_i \frac{c_i^2}{2 m_i \omega_i^2} \delta (\omega - \omega_i) \end{eqnarray} is the bath's spectral density. A common form for $I(\omega)$ is \begin{eqnarray} I(\omega)=M\gamma\omega \left(\frac{\omega}{\tilde{\omega}} \right)^s e^{-\frac{\omega^2}{\Lambda^2}}, \end{eqnarray} where $\gamma$ is a dissipation constant, $\Lambda$ is a high-frequency cutoff, $\tilde{\omega}$ is a frequency scale, and the exponent $s$ characterizes the infrared behavior of the bath. For this model, it is convenient to employ the dimensionless parameter $\delta := \frac{\Omega_1^2 - \Omega_2^2}{ \Omega_1^2 + \Omega_2^2 }$, which measures how far the system is from resonance, and the scaled temperature $\theta := \frac{T}{\sqrt{ \Omega_1^2 + \Omega_2^2 }}$. In this model, the pair of oscillators is coupled to the environment only through the variable $X_+$. The variable $X_-$ is affected by the environment only through its coupling with $X_+$, which is proportional to $\Delta^2 = |\Omega_1^2 - \Omega_2^2|$.
For resonant oscillators ($\Omega_1 = \Omega_2$) this coupling vanishes, the subalgebra generated by $\hat{X}_-$ and $\hat{P}_-$ is isolated from the environment, and it is therefore decoherence free. This means in particular that some entanglement may persist even at late times. This case has been studied in detail in Ref. \cite{PR1}. For nonzero $\Delta$, the $\hat{X}_-$ and $\hat{P}_-$ subalgebra is not totally isolated from the environment. The uncertainty relations simplify when the environment is ohmic ($s = 0$). Then, dissipation is local and in the weak-coupling limit ($\gamma << \Omega_i$), the matrix $R$ describing classical evolution takes the form \begin{eqnarray} R(t) = e^{- \frac{1}{2}\gamma t} U(t), \end{eqnarray} where $U(t)$ is a canonical transformation: $U(t)\Omega U^T(t) = \Omega$. Hence, Eq. (\ref{gunc1}) becomes \begin{eqnarray} V_t \geq -\frac{i}{2} e^{- \gamma t} \Omega + S(t). \label{gunc5} \end{eqnarray} From Eq. (\ref{gunc5}) we see that dissipation tends to shrink phase-space areas, but this is compensated by the effects of diffusion incorporated into the definition of the matrix $S$. For an initial factorized state, we obtain \begin{eqnarray} V_t \geq -\frac{i}{2} e^{- \gamma t} F(t) + S(t), \label{gunc6b} \end{eqnarray} where $F(t) = U(t)\tilde{\Omega}U^T(t)$ is an oscillating function of time. The oscillations in $F(t)$ may lead to violation of the bound $V_t \geq -\frac{i}{2} \tilde{\Omega}$ for factorized states and thus to entanglement creation. However, the oscillating character of $F(t)$ implies that entanglement creation will in general be accompanied by entanglement death and revival. For times $t >> \gamma^{-1}$ the first term in the right-hand side of Eq. (\ref{gunc6b}) is suppressed. \paragraph{The Wigner function area.} According to Eq. (\ref{gunc5}), the matrix $V_t + \frac{i}{2}e^{- \gamma t} \Omega - S(t)$ is positive.
Its upper $2\times 2$ submatrix in the $X_+, P_+$ coordinates should also be positive; hence, \begin{eqnarray} [(\Delta X_+)^2 - S_{X_+X_+}][(\Delta P_+)^2 - S_{P_+P_+}] - (V_{X_+P_+} - S_{X_+P_+})^2 \geq \frac{1}{4} e^{- 2 \gamma t}. \label{gunc6} \end{eqnarray} By virtue of the Schwarz inequality, $(\Delta X_+)^2 S_{X_+X_+} + (\Delta P_+)^2 S_{P_+P_+} - V_{X_+P_+}S_{X_+P_+} \geq 0$; hence, \begin{eqnarray} {\cal A}_{X_+P_+} \geq \frac{1}{4} e^{- \gamma t} + \left(S_{X_+X_+} S_{P_+P_+} - S_{X_+P_+}^2\right ). \label{gunc7a} \end{eqnarray} Similarly, \begin{eqnarray} {\cal A}_{X_- P_-} &\geq& \frac{1}{4} e^{- \gamma t} + \left(S_{X_-X_-} S_{P_- P_-} - S_{X_- P_-}^2\right), \label{gunc7b} \\ {\cal A}_{X_+ P_-} &\geq& \left(S_{X_+X_+} S_{P_- P_-} - S_{X_+ P_-}^2 \right), \label{gunc7c}\\ {\cal A}_{X_- P_+} &\geq& \left(S_{X_-X_-} S_{P_+P_+} - S_{X_- P_+}^2 \right). \label{gunc7d} \end{eqnarray} The uncertainty functions ${\cal A}_{X_iP_j}$ correspond to the area of the projection of the Wigner function ellipse onto the two-dimensional subspace defined by $X_i$ and $P_j$. The right-hand sides of the inequalities are plotted in Fig. 1 as functions of time. Except possibly at early times, the functions increase monotonically and reach a constant asymptotic value at a time scale of order $\gamma^{-1}$. \begin{figure} \caption{ \small (Color online) (a) The lower bounds for ${\cal A}_{X_+P_+}$ in Eq. (\ref{gunc7a}) and ${\cal A}_{X_+P_-}$ in Eq. (\ref{gunc7c}) as functions of $\gamma t$, for parameter values $\theta = 0.7$ and $\delta = 0.38$. (b) The lower bound to ${\cal A}_{X_+P_-}$ as a function of $\gamma t$ for different values of the dimensionless temperature $\theta$.} \end{figure} \section{Entanglement dynamics} \subsection{Disentanglement at high temperature} A widely studied regime in quantum Brownian motion models is the so-called Fokker-Planck limit in ohmic environments, because in this limit the master equation is Markovian.
The Fokker-Planck limit is defined by the condition $T >> \Lambda$, and then taking $\Lambda \rightarrow \infty$, in order to obtain time-local dissipation and noise. In this regime, thermal noise is strong, resulting in loss of quantum coherence and entanglement at early times. It is convenient to work with the uncertainty functions ${\cal A}_{X_iP_j}$, because they can be explicitly evaluated\footnote{There is no loss of information in this choice, because of the rapid degradation of coherence. The sharper inequality, Eq. (\ref{gunc5}), gives the same estimate for the characteristic time scales of these processes.}. We find \begin{eqnarray} {\cal A}_{X_+P_+} &\geq& \frac{1}{4}(1 - \gamma t + \gamma^2 T^2 t^4), \label{gunc9a} \\ {\cal A}_{X_-P_-} &\geq& \frac{1}{4}[1 - \gamma t + \frac{\gamma^2 T^2}{2^{8}\cdot 3^4\cdot 35} \Delta^8 t^{12}], \label{gunc9b} \\ {\cal A}_{X_+P_-} &\geq& \frac{11 \gamma^2 T^2}{256} \Delta^4 t^8 , \label{gunc9c} \\ {\cal A}_{X_-P_+} &\geq& \frac{\gamma^2 T^2}{256} \Delta^4 t^8. \label{gunc9d} \end{eqnarray} The above equations are obtained far from resonance for the two oscillators, that is, for $\Delta >> \gamma$. Equations (\ref{gunc9a}) and (\ref{gunc9b}) represent the initial growth of fluctuations starting from purely quantum fluctuations at $t = 0$. The growth of the fluctuations for the variables $X_+$ and $P_+$ is faster than that of the variables $X_-$ and $P_-$, because the former couple directly to the bath. The $-\gamma t$ term in these equations indicates an initial decrease of the fluctuations, in apparent violation of the uncertainty principle. The violation in Eq. (\ref{gunc9a}) occurs at a timescale of order $(\gamma T^2)^{-1/3}$. This is because these equations are derived taking the infinite cut-off limit $\Lambda \rightarrow \infty$, which leads to violations of the positivity of the density operator at $t < \Lambda^{-1}$ \cite{AnHa}.
For $t > \Lambda^{-1} >> T^{-1}$, and $T$ sufficiently large so that $\gamma T^2/\Lambda^3 >> 1$, such violations do not arise. Ignoring the positivity-violating terms, Eq. (\ref{gunc9a}) leads to an expression $t_{th} \sim 1/\sqrt{\gamma T}$ for the time scale where the thermal fluctuations overcome the purely quantum ones. This is an upper limit to the decoherence time for the $X_+$ and $P_+$ variables \cite{AnHa}. From Eqs. (\ref{gunc9c}) and (\ref{gunc9d}) we obtain the characteristic time scale where ${\cal A}_{X_+ P_-}$ and ${\cal A}_{X_- P_+}$ reach the value $\frac{1}{4}$ starting from 0. This is indicative of the time scale for disentanglement $t_{dis}$ in this model: \begin{eqnarray} t_{dis} \sim \frac{1}{(\gamma T \Delta^2)^{1/4}}. \end{eqnarray} The characteristic scale for disentanglement is distinct from the time scale $t_{th}$ characterizing the growth of thermal fluctuations: \begin{eqnarray} t_{dis}/t_{th} = \left(\frac{\gamma T}{\Delta^2}\right)^{1/4}. \end{eqnarray} For sufficiently small values of $\Delta$, that is, weak coupling between the $+$ and $-$ variables, the disentanglement timescale may be much larger than the decoherence time scale for the $X_+$ and $P_+$ variables. Hence, even if the $X_-, P_-$ degrees of freedom are only partially protected from degradation from the environment, they can sustain entanglement long after the $X_+$ and $P_+$ variables have decohered. \subsection{Long-time limit} While entanglement may be preserved much longer than the coherence of the $X_+$ and $P_+$ degrees of freedom, the interaction with the environment sets the relaxation time scale $\gamma^{-1}$ as an upper limit for disentanglement time. 
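The scalings above are easy to check numerically. A sketch that solves the bound of Eq. (\ref{gunc9d}) for the crossing time ${\cal A}_{X_-P_+} = \frac{1}{4}$ and compares the ratio $t_{dis}/t_{th}$ with $(\gamma T/\Delta^2)^{1/4}$ (the parameter values are assumed, and the $O(1)$ prefactor $64^{1/8}$, dropped in the order-of-magnitude estimates, is kept explicit):

```python
import numpy as np

def t_dis(gamma, T, Delta):
    """Crossing time where (gamma^2 T^2 / 256) Delta^4 t^8 reaches 1/4."""
    return (64.0 / (gamma**2 * T**2 * Delta**4)) ** 0.125

def t_th(gamma, T):
    """Thermal-fluctuation time scale from gamma^2 T^2 t^4 ~ 1."""
    return 1.0 / np.sqrt(gamma * T)

g, T, D = 0.01, 100.0, 0.5     # assumed values, far from resonance (D >> g)
ratio = t_dis(g, T, D) / t_th(g, T)
```

With these numbers the lower bound at $t_{dis}$ equals exactly $\frac{1}{4}$, and the ratio reproduces $(\gamma T/\Delta^2)^{1/4}$ up to $64^{1/8} \approx 1.68$.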
For times $t >> \gamma^{-1}$, all states tend toward the stationary state $\hat{\rho}_{\infty}$ corresponding to a Wigner function, \begin{eqnarray} W_{\infty}(\xi) = \frac{\sqrt{\det S^{-1}_{\infty}}}{\pi} \exp[- \frac{1}{2} \xi S^{-1}_{\infty} \xi], \label{asympt} \end{eqnarray} where $S_{\infty}$ is the asymptotic value of the matrix $S$ as $t \rightarrow \infty$. In this limit, the correlation matrix $V$ coincides with $S$. Explicit evaluation of Eqs. (\ref{sxx}-\ref{sxp}) shows that, as $t\rightarrow \infty$, the only nonvanishing elements of the matrix $S$ are the diagonal ones: $S_{X_+X_+}, S_{X_-X_-}, S_{P_+P_+}, S_{P_-P_-}$ (see the Appendix). For states of this form, the uncertainty functions ${\cal A}_{X_+P_-}$ and ${\cal A}_{X_- P_+}$ fully determine entanglement. We further find that, to leading order in $\gamma/\Omega_i$ and $\Omega_i/\Lambda$, the asymptotic state coincides with the thermal state for the system Hamiltonian $\hat{H}_{sys}$ of Eq. (\ref{ho}); hence, it is factorized. However, at low temperatures the thermal states are close to the boundary that separates factorized from entangled states (for example, they satisfy ${\cal A}_{X_+P_-} \simeq \frac{1}{4}$). Hence, the corrections from the nonzero values of $\gamma/\Omega_i$ and $\Omega_i/\Lambda$ may lead the asymptotic state to retain some degree of entanglement, as was found in Ref. \cite{PR2}. We have verified numerically that the residual entanglement decreases with increasing values of the cutoff parameter $\Lambda$. This result applies to a system of nondegenerate oscillators. For degenerate oscillators, the $X_-$ and $ P_-$ subalgebra is protected from the environment. Hence, the asymptotic state is not unique and it may sustain entanglement or even be characterized by a nonterminating sequence of entanglement deaths and revivals. The analysis of Sec. II allows us to make a general characterization of the asymptotic state valid for any QBM model.
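Before turning to this general characterization, the near-boundary property of the low-temperature thermal product state noted above can be illustrated directly. A sketch evaluating ${\cal A}_{X_+P_-}$ for a product of two single-oscillator thermal states ($\hbar = k_B = 1$; cross-covariances vanish for a product of thermal states, and the parameter values are assumed):

```python
import numpy as np

def thermal_vars(M, Om, T):
    """(Delta X)^2 and (Delta P)^2 of a single-oscillator thermal state."""
    c = 1.0 / np.tanh(Om / (2.0 * T))        # coth(Omega / 2T)
    return c / (2.0 * M * Om), M * Om * c / 2.0

def A_plus_minus(M, Om1, Om2, T):
    """A_{X+P-} = Var(X+) Var(P-) for the product of two thermal states."""
    x1, p1 = thermal_vars(M, Om1, T)
    x2, p2 = thermal_vars(M, Om2, T)
    return 0.25 * (x1 + x2) * (p1 + p2)      # Var(X+) = (x1+x2)/4, Var(P-) = p1+p2
```

As $T \rightarrow 0$ with $\Omega_1 = \Omega_2$ the function tends to exactly $\frac{1}{4}$, i.e., the state sits on the separability boundary; detuned frequencies give $(\Omega_1+\Omega_2)^2/(16\,\Omega_1\Omega_2) > \frac{1}{4}$, and raising $T$ moves the state further away from the boundary.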
The key observation is that the uniqueness of the asymptotic state is solely determined from the classical equations of motion, that is, from the matrix $R^a_{b}$ in Eq. (\ref{ceq}). In the generic case the phase space contains no dissipation-free subspace, and $\xi^a_{cl}(t) \rightarrow 0$ as $t \rightarrow \infty$, irrespective of the initial condition. Hence, for times $t$ much larger than the relaxation time $\tau_{rel}$ the memory of the initial state is lost from the Wigner function propagator, Eq. (\ref{gauss}). Moreover, if $\xi^a_{cl}(t) \rightarrow 0$ sufficiently fast as $t \rightarrow \infty$, the limit $t \rightarrow \infty$ for the matrix $S$, Eqs. (\ref{sxx}--\ref{sxp}), is well defined. Thus a unique asymptotic state of the form (\ref{asympt}) is obtained. At the weak-coupling limit, one expects that the asymptotic state will be close to the thermal state at temperature $T$; hence, it will be factorized. If, on the other hand, the classical equations of motion admit a dissipation-free subspace, time evolution in this subspace is Hamiltonian, and there $\xi^a_{cl}(t)$ does not converge to a unique value as $t \rightarrow \infty$. This implies that the Wigner function propagator Eq. (\ref{gauss}) preserves its dependence on the initial variables even for $t >> \tau_{rel}$. As a consequence, an asymptotic state may not exist or, if it exists, it may not be unique. Hence, in this case asymptotic entanglement or a sequence of entanglement death and revivals is possible. Nonetheless, the case of a unique asymptotic state is the generic one. Dissipation-free subspaces exist only for a set of measure zero in the space of parameters (e.g., system-environment couplings) characterizing a QBM model. For example, even a small dependence of the coupling on the oscillator's position will prevent the existence of a dissipation-free subspace. 
Hence, unless some symmetry can be invoked that fully protects a subalgebra from degradation by the environment, we expect that the relaxation time sets an absolute upper limit to the time scale over which entanglement can be preserved in any oscillator system interacting with a QBM-type environment. \subsection{Entanglement creation} In general, two noninteracting quantum systems may become entangled by their interaction with a third system. In QBM the role of the third system can be played by the environment, and indeed, low-temperature baths have a tendency to create entanglement. The uncertainty relations Eqs. (\ref{gunc3}) and (\ref{gunc4}) are particularly useful for the study of entanglement creation. We apply them as follows. The positivity of the matrix $V_t + \frac{i}{2} \tilde{\Omega}$ is a necessary criterion for a state to be factorized at time $t$. Hence, in a factorized state, the minimal eigenvalue $\lambda_{min}(t)$ of $V_t + \frac{i}{2} \tilde{\Omega}$ is positive. By Eq. (\ref{gunc3}), $\lambda_{min}(t)$ is always bounded from below by the minimal eigenvalue of the matrix $-\frac{i}{2}(R\tilde{\Omega}R^T - \tilde{\Omega}) + S$, which we denote as $\tilde{\lambda}_{bound}(t)$. Hence, the function $\tilde{\lambda}_{bound}(t)$ determines the capacity of the environment to create entanglement irrespective of the initial state. In particular, the condition $\tilde{\lambda}_{bound}(t) \leq 0$ implies that at least some factorized states can develop entanglement at time $t$. Figure 2(b) provides a plot of the minimal eigenvalue $\lambda_{min}(t)$ of $V_t + \frac{i}{2} \tilde{\Omega}$ for an initial factorized Gaussian state together with the lower bound, $\tilde{\lambda}_{bound}(t)$, as functions of time. $\lambda_{min}(t)$ oscillates rapidly on a scale of $\Omega_i^{-1}$, so that at a time scale of order $\gamma^{-1}$ we can distinguish only two enveloping curves that bound it from above and below.
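The eigenvalue criterion used here can be illustrated numerically. A minimal sketch (assuming, in place of the time-evolved $V_t$ of the text, the textbook covariance matrix of a two-mode squeezed vacuum, with $\hbar = 1$ and vacuum variance $\frac{1}{2}$; $\tilde{\Omega}$ is the partially transposed symplectic form): a negative minimal eigenvalue of $V + \frac{i}{2}\tilde{\Omega}$ signals entanglement.

```python
import numpy as np

def omega_tilde():
    # Partially transposed symplectic form in (x1, p1, x2, p2) ordering:
    # partial transposition of mode 2 flips p2 -> -p2.
    w = np.array([[0.0, 1.0], [-1.0, 0.0]])
    Omega = np.block([[w, np.zeros((2, 2))], [np.zeros((2, 2)), w]])
    P = np.diag([1.0, 1.0, 1.0, -1.0])
    return P @ Omega @ P

def tmsv_covariance(r):
    # Two-mode squeezed vacuum covariance matrix (hbar = 1, vacuum variance 1/2).
    c, s = np.cosh(2 * r), np.sinh(2 * r)
    Z = np.diag([1.0, -1.0])
    return 0.5 * np.block([[c * np.eye(2), s * Z], [s * Z, c * np.eye(2)]])

def lambda_min(V):
    # Minimal eigenvalue of V + (i/2) Omega~; negative => entangled.
    return np.linalg.eigvalsh(V + 0.5j * omega_tilde()).min()

print(lambda_min(tmsv_covariance(0.0)))  # ~0: vacuum sits on the boundary
print(lambda_min(tmsv_covariance(0.5)))  # (e^{-1} - 1)/2 < 0: entangled
```

For this family of states the minimal eigenvalue works out analytically to $(e^{-2r} - 1)/2$, so the witness crosses zero exactly at $r = 0$.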
$\tilde{\lambda}_{bound}(t)$ is close to the lower enveloping curve of $\lambda_{min}(t)$, and we note that at specific instants the inequality $\lambda_{min}(t) \geq \tilde{\lambda}_{bound}(t)$ is saturated. \begin{figure} \caption{ \small (Color online) (a) The rapidly oscillating minimal eigenvalue $\lambda_{min}$ of $V_t + \frac{i}{2} \tilde{\Omega}$ for an initial factorized state $|0, 1 \rangle$, together with the lower bound $\tilde{\lambda}_{bound}(t)$ corresponding to the minimal eigenvalue of the matrix $-\frac{i}{2}(R\tilde{\Omega}R^T - \tilde{\Omega}) + S$. In this plot $\delta = 0.02$ and $\theta = 0.21$. (b) Same as in (a) but for an initial factorized Gaussian state.} \end{figure} For Gaussian states the criterion $V_t < - \frac{i}{2} \tilde{\Omega}$ completely specifies entanglement; hence, for times $t$ at which $\tilde{\lambda}_{bound}(t) > 0$, no initially factorized Gaussian state can sustain entanglement. In Figs. 2(b) and 3, we see that $\tilde{\lambda}_{bound}(t)$ exhibits oscillations around zero at low temperatures. This implies that, at least for Gaussian states, entanglement creation at low temperature is typically accompanied by a period of ``entanglement oscillations'', that is, a sequence of entanglement deaths and revivals, which terminates at a time scale of order $\gamma^{-1}$, when the system relaxes to an asymptotic factorized state. Figure 2(a) provides a plot of $\lambda_{min}(t)$ for an initial factorized energy eigenstate $|0, 1\rangle$, together with the bound $\tilde{\lambda}_{bound}(t)$. For non-Gaussian states, a positive value of $\lambda_{min}(t)$ does not imply factorizability of the state; information about entanglement is carried in higher-order correlation functions of the system. Nonetheless, a negative value of $\lambda_{min}$ is a definite sign of entanglement. Despite the fact that $\lambda_{min}(t)$ saturates the bound at some instants, in general its behavior is qualitatively different. In Fig.
3, the minimal eigenvalue $\tilde{\lambda}_{bound}(t)$ is plotted for different values of temperature. With increasing temperature the time intervals of persisting entanglement shrink and the entanglement oscillations are suppressed. At sufficiently high temperature (of order $\theta > 10$), no creation of entanglement occurs. \begin{figure} \caption{\small (Color online) The minimal eigenvalue $\tilde{\lambda}_{bound}(t)$ of the matrix $-\frac{i}{2}(R\tilde{\Omega}R^T - \tilde{\Omega}) + S$ for $\delta = 0.02$ and different values of temperature.} \end{figure} \subsection{Disentanglement at low temperature} We saw that at high temperature, the noise from the environment degrades the quantum state and causes rapid decoherence and disentanglement. At low temperatures ($\theta < 1$), however, the noise is not sufficiently strong to cause decoherence \cite{HPZ}, and entanglement is preserved longer. The physical mechanism responsible for disentanglement at low temperature is relaxation: the existence of a unique asymptotic factorized state implies that at a time scale of order $\gamma^{-1}$ all memory of the initial state (including entanglement) is lost. In other words, a low-temperature bath is much more efficient in creating and preserving entanglement, but relaxation to equilibrium will inevitably lead to a factorized state. By Eq. (\ref{gunc1}), the minimal eigenvalue $\lambda_{min}(t)$ of the matrix $V_t + \frac{i}{2} \tilde{\Omega}$ is always bounded from below by the minimal eigenvalue $\lambda_{bound}(t)$ of the matrix $-\frac{i}{2}(R\Omega R^T - \tilde{\Omega}) + S$. Hence, the condition $\lambda_{bound}(t) < 0$ is sufficient for the existence of entangled states at time $t$. Moreover, the condition $\lambda_{min}(t) > 0$ establishes that any {\em Gaussian} initial state has evolved into a factorized state at time $t$.
Figure 4 contains plots of the minimal eigenvalue $\lambda_{min}(t)$ of $V_t + \frac{i}{2} \tilde{\Omega}$ for two different initial states, together with the lower bound $\lambda_{bound}(t)$. In Fig. 4(a) the initial state is an entangled Gaussian, and in Fig. 4(b) the initial state is $\frac{1}{\sqrt{2(1 + e^{- |z|^2})}} (|z, 0\rangle +|0, z\rangle)$, where $|z\rangle$ is a coherent state. In both cases, $\lambda_{min}(t)$ approaches the lower bound only after a time scale of order $\gamma^{-1}$, when the system has started relaxing to a unique asymptotic state. We note that there are no entanglement oscillations for such states, only a gradual decay of entanglement. This behavior is typical for initial states that violate Eq. (\ref{V2}) by a substantial margin. However, the uncertainty relations do not provide any significant information about the entanglement dynamics of initial states that are entangled but do not violate the bound, Eq. (\ref{V2}). This is the case, for example, for states of the form $\frac{1}{\sqrt{2}}(|0,1\rangle + e^{i \theta} |1, 0\rangle)$. In order to study such states, we would have to obtain generalized uncertainty relations pertaining to correlation functions of order higher than 2. In Fig. 5, we plot the minimal eigenvalue $\lambda_{bound}(t)$ as a function of time $t$, for different temperatures. As expected, the time interval during which the system sustains entangled states [i.e., $\lambda_{bound}(t) < 0$] shrinks with increasing temperature. \begin{figure} \caption{ \small (Color online) The minimal eigenvalue $\lambda_{bound}(t)$ of the matrix $-\frac{i}{2}(R\Omega R^T - \tilde{\Omega}) + S$ for $\delta = 0.02$ and different values of temperature.} \end{figure} The uncertainty relation, Eq. (\ref{gunc1}), allows for the definition of the disentanglement time $t_{dis}$ as the instant at which $\lambda_{bound}(t) = 0$. Thus defined, $t_{dis}$ is an upper bound to the disentanglement time for {\em any} Gaussian initial state.
In general, non-Gaussian states may preserve entanglement for times larger than $t_{dis}$. However, $t_{dis}$ depends only on the matrices $S$ and $R$, and these same matrices alone govern the evolution of the higher-order correlation functions of non-Gaussian states. Moreover, $t_{dis} \sim \gamma^{-1}$ refers to the regime of relaxation to a unique thermal equilibrium state, hence, to the loss of any memory of the initial condition. For this reason, it is reasonable to assume that $t_{dis}$ provides a good estimate of the disentanglement time, valid for a larger class of initial states, at least as far as its qualitative dependence on temperature and other bath parameters is concerned. Figure 6 plots $t_{dis}$ as a function of temperature for different values of $\delta$. As expected, $t_{dis}$ decreases with temperature. However, there is no monotonic dependence of $t_{dis}$ on $\delta$, and for $\theta > 0.5$, $t_{dis}$ is largely insensitive to $\delta$. Finally, we note that the weaker uncertainty relations for the Wigner function areas ${\cal A}_{X_{\pm}P_{\mp}}$ also provide an estimate of the disentanglement time $t_{dis}$. Since these inequalities are weaker, the values of $t_{dis}$ thus obtained are smaller, but their dependence on the parameters $\delta$ and $\theta$ is qualitatively similar.
\begin{figure} \caption{\small (Color online) Disentanglement time $t_{dis}$ in units of $\gamma^{-1}$ as a function of the dimensionless temperature $\theta$ for different values of $\delta$.} \end{figure} \section{Conclusions} The main results of our article are the following: (i) the explicit construction of the Wigner function propagator for QBM models with any number of system oscillators and for any spectral density; the propagator allows for a simple derivation of the corresponding master equation; (ii) the identification of generalized uncertainty relations, valid in any QBM model, that provide a state-independent lower bound to the fluctuations induced by the environment; (iii) the application of the uncertainty relations to a concrete model, for the study of decoherence, disentanglement, and entanglement creation in different regimes. In particular, we showed that entanglement creation is often accompanied by entanglement oscillations at early times and that the uncertainty relations provide an upper bound to the disentanglement time with respect to all initial Gaussian states. In our opinion, the most important feature of the techniques developed in this article is that they can be immediately generalized to more complex systems and issues in the study of entanglement dynamics: for example, the derivation of uncertainty relations for higher-order correlation functions, or for information-theoretic quantities that contain more detailed information about the entanglement of general initial states, and the exploration of entanglement dynamics in multipartite systems, including its dependence on the spatial separation of the parties. \begin{appendix} \section{The coefficients in the Wigner function propagator} In this appendix, we sketch the calculations of the coefficients in the Wigner function propagator for the model presented in Sec. III C. We first compute the function $v(s)$ of Eq. (\ref{vt}) in the $X_+, X_-$ coordinates.
To leading order in $\gamma$ for the poles in the Laplace transform (\ref{vt}), we obtain \begin{eqnarray} v_{++}(s)&=&\frac{e^{-\frac{1}{2}\gamma s}}{4\Omega_1 \Omega_2(\Omega_2^2-\Omega_1^2)} \left[\Omega_2 \sin(\Omega_1 s)\left(\gamma^2 - 2\Omega_1^2 + 2 \Omega_2^2 \right) \right. \nonumber \\ &&\left. + 4 \gamma \Omega_1\Omega_2 \left(\cos(\Omega_2 s)-\cos(\Omega_1 s)\right)-\Omega_1 \sin(\Omega_2 s) \left(\gamma^2+2\Omega_1^2-2 \Omega_2^2\right)\right] \\ v_{+-}(s)&=&\frac{e^{-\frac{1}{2}\gamma s}}{2\Omega_1 \Omega_2}\left[\Omega_2\sin(\Omega_1 s)-\Omega_1\sin(\Omega_2 s)\right] \\ v_{--}(s)&=&\frac{e^{-\frac{1}{2}\gamma s}}{4\Omega_1 \Omega_2(\Omega_2^2-\Omega_1^2)}\left[\Omega_2\sin(\Omega_1 s)\left(\gamma^2-8\gamma\Omega_1-2\Omega_1^2+2\Omega_2^2\right)\right. \nonumber \\ && \left.+ 4\gamma\Omega_1\Omega_2\left(\cos(\Omega_2 s)-\cos(\Omega_1 s)\right)-\Omega_1\sin(\Omega_2 s)\left(\gamma^2-8\gamma\Omega_1+2\Omega_1^2- 2\Omega_2^2\right)\right] \hspace{1cm} \end{eqnarray} From $v(s)$ one constructs the matrix $R$ using Eq. (\ref{ceq2}) and the matrix $S$ using Eqs. (\ref{sxx})--(\ref{sxp}). To obtain the asymptotic state, we compute $S$ in the limit $t \rightarrow \infty$. The off-diagonal elements in the $X_+, X_-$ basis vanish, while \begin{eqnarray} S_{X_+X_+}&=&\frac{\gamma}{M \pi}\int_0^\infty d\omega\, \omega f(\omega) (-2\omega^2+\Omega_1^2+\Omega_2^2)^2 , \\ S_{P_+P_+}&=&\frac{4M\gamma}{\pi}\int_0^\infty d\omega\, \omega^3 f(\omega) (-2\omega^2+\Omega_1^2+\Omega_2^2)^2 , \\ S_{X_-X_-}&=&\frac{\gamma}{M\pi} (\Omega_1^2-\Omega_2^2)^2\int_0^\infty d\omega\, \omega f(\omega), \\ S_{P_- P_-}&=&\frac{4M\gamma}{\pi}(\Omega_1^2-\Omega_2^2)^2 \int_0^\infty d\omega\, \omega^3 f(\omega), \end{eqnarray} where \begin{eqnarray} f(\omega) = \frac{e^{-\frac{\omega^2}{\Lambda^2}}\coth\left(\frac{\omega}{2T}\right)} {[2(\omega^2-\Omega_1^2)^2+\gamma^2(\omega^2+\Omega_1^2)] [2(\omega^2-\Omega_2^2)^2+\gamma^2(\omega^2+\Omega_2^2)]}.
\end{eqnarray} The asymptotic values of $S$ can be evaluated to leading order in $\gamma/\Omega_i$ and $\Omega_i/\Lambda$ by replacing the Lorentzians in the integrals with delta functions, that is, $[(x-a)^2+\gamma^2]^{-1} \simeq \frac{\pi}{2 \gamma} \delta (x-a)$. This corresponds to the weak-damping limit of Ref. \cite{HZ}. We then obtain \begin{eqnarray} S_{X_+X_+} = S_{X_-X_-} = \frac{1}{8M} \left( \frac{\coth\frac{\Omega_1}{2T}}{\Omega_1} + \frac{\coth\frac{\Omega_2}{2T}}{\Omega_2} \right), \\ S_{P_+P_+} = S_{P_-P_-} = \frac{M}{2} \left(\Omega_1 \coth\frac{\Omega_1}{2T} + \Omega_2 \coth\frac{\Omega_2}{2T} \right), \end{eqnarray} which correspond to an asymptotic thermal state for the pair of oscillators. \end{appendix} \end{document}
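As a consistency check on the weak-damping expressions above, the asymptotic variances can be evaluated numerically. A minimal sketch (units $\hbar = k_B = 1$ and $M = 1$ assumed; the function name is ours): the uncertainty product $S_{X_+X_+} S_{P_+P_+}$ is bounded below by $\frac{1}{4}$, and the bound is approached for degenerate oscillators ($\Omega_1 = \Omega_2$) as $T \rightarrow 0$, consistent with the asymptotic state being close to a minimum-uncertainty thermal state.

```python
import numpy as np

def asymptotic_variances(O1, O2, T, M=1.0):
    # Weak-damping asymptotic variances quoted in the appendix (hbar = k_B = 1).
    coth = lambda x: 1.0 / np.tanh(x)
    S_X = (coth(O1 / (2 * T)) / O1 + coth(O2 / (2 * T)) / O2) / (8 * M)
    S_P = M * (O1 * coth(O1 / (2 * T)) + O2 * coth(O2 / (2 * T))) / 2
    return S_X, S_P

# Degenerate oscillators at T -> 0: minimum-uncertainty product S_X * S_P = 1/4.
SX, SP = asymptotic_variances(1.0, 1.0, 1e-6)
print(SX * SP)  # ~0.25

# Nondegenerate oscillators: the product strictly exceeds 1/4.
SX2, SP2 = asymptotic_variances(1.0, 2.0, 1e-6)
print(SX2 * SP2)  # 0.28125
```

The excess over $\frac{1}{4}$ in the nondegenerate case reflects the mixing of the two oscillator frequencies in each collective quadrature.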
\begin{document} \newcommand{\(g^{(2)}\)}{\(g^{(2)}\)} \newcommand{\(g^{(3)}\)}{\(g^{(3)}\)} \newcommand{\(g^{(1,1)}\)}{\(g^{(1,1)}\)} \title{Probing multimode squeezing with correlation functions} \author{Andreas Christ\(^{1,2}\), Kaisa Laiho\(^2\), Andreas Eckstein\(^2\), Kati\'{u}scia N. Cassemiro\(^2\), and Christine Silberhorn\(^{1,2}\)} \address{\(^1\)Applied Physics, University of Paderborn, Warburger Straße 100, 33098 Paderborn, Germany} \address{\(^2\)Max Planck Institute for the Science of Light,\\ G\"unther-Scharowsky Straße 1/Bau 24, 91058 Erlangen, Germany} \ead{[email protected]} \date{\today} \begin{abstract} Broadband multimode squeezers constitute a powerful quantum resource with promising potential for different applications in quantum information technologies such as information coding in quantum communication networks or quantum simulations in higher dimensional systems. However, the characterization of a large array of squeezers that coexist in a single spatial mode is challenging. In this paper we address this problem and propose a straightforward method to determine the number of squeezers and their respective squeezing strengths by using broadband multimode correlation function measurements. These measurements employ the large detection windows of state-of-the-art avalanche photodiodes to simultaneously probe the full Hilbert space of the generated state, which enables us to benchmark the squeezed states. Moreover, due to the structure of correlation functions, our measurements are not affected by losses. This is a significant advantage, since detectors with low efficiencies are sufficient. Our approach is less costly than tomographic methods relying on multimode homodyne detection, which are based on much more demanding measurement and analysis tools and appear impractical for large Hilbert spaces.
\end{abstract} \pacs{42.50.-p 42.65.Yj 42.65.Lm 42.65.Wi 03.65.Wj} \maketitle \section{Introduction} The study of correlation functions has a long history and lies at the heart of coherence theory \cite{mandel_optical_1995}. Intensity correlation measurements were first performed by Hanbury Brown and Twiss in the context of classical optics \cite{brown_correlation_1956}. Since then correlation functions have become a standard tool in quantum optical experiments to study the properties of laser beams \cite{chopra_higher-order_1973}, parametric downconversion sources \cite{blauensteiner_photon_2009, ivanova_multiphoton_2006} or heralded single photons \cite{tapster_photon_1998, uren_characterization_2005, bussieres_fast_2008}. Current state-of-the-art experiments are able to measure correlation functions up to the eighth order \cite{avenhaus_accessing_2010}, giving access to diverse characteristics of photonic states. The normalized second-order correlation function \(g^{(2)}(0)\) probes whether the generated photons are bunched or antibunched, with \(g^{(2)}(0) < 1\) being a genuine sign of non-classicality \cite{loudon_quantum_2000}. The measurement of all unnormalized moments \(G^{(n)}\) of a given optical quantum state provides complete access to the photon-number distribution for arbitrary single-mode input states \cite{mandel_optical_1995}. Moreover, it is possible to perform a full state tomography with the help of correlation function measurements \cite{shchukin_universal_2006}. The measurement of these correlation functions is, in general, performed in a time-resolved manner \(g^{(n)}(t_1, t_2, \dots t_n)\). Limited time resolution has been considered a detrimental effect and treated as an experimental imperfection \cite{tapster_photon_1998}. In contrast to previous work, we employ the finite time resolution of photo-detectors to gain access to the spectral character of broadband multimode quantum states.
Our scheme of measuring broadband multimode correlation functions of pulsed quantum light is especially useful for probing squeezed states. These states are commonly generated via the interaction of light with a crystal exhibiting a \(\chi^{(2)}\)-nonlinearity, a process referred to as parametric downconversion (PDC) \cite{rarity_quantum_1992, mauerer_how_2009, wasilewski_pulsed_2006, lvovsky_decomposing_2007, wenger_pulsed_2004, anderson_pulsed_1997}, or with optical fibers featuring a \(\chi^{(3)}\)-nonlinearity, called four-wave mixing (FWM) \cite{loudon_squeezed_1987, levenson_generation_1985}. In general the generated squeezed states exhibit multimode characteristics in the spectral degree of freedom, i.e. a set of independent squeezed states is created, with each squeezer residing in its own Hilbert space. This inherent multimode character renders these states powerful for coding quantum information, yet the same feature impedes a proper experimental characterization in a straightforward manner. Due to the sheer vastness of the corresponding Hilbert space, standard quantum tomography methods become time-consuming and ineffective. It is neither easy to determine the degree of squeezing in each mode, nor the number of generated independent squeezers. Nonetheless, these are the key benchmarks defining the potential of a source for quantum information and quantum cryptography applications. In the following we investigate how to overcome these issues and elaborate on an alternative approach to determine the properties of multimode squeezed states based on measuring broadband multimode correlation functions. This paper is structured as follows: In section \ref{sec:multimode_squeezer} we revisit the general structure of multimode twin-beam squeezers, paying special attention --- though not restricting ourselves --- to states generated by parametric downconversion and four-wave mixing.
Section \ref{sec:correlation_functions} presents the formalism of correlation functions, introduces the intricacies of finite time resolution and defines broadband multimode correlation measurements. Section \ref{sec:probing_twin_beam_squeezing} combines the findings of sections \ref{sec:multimode_squeezer} and \ref{sec:correlation_functions}: We analyze the relation between the number of generated squeezers, their respective squeezing strengths and broadband multimode correlation functions, which leads us to propose our scheme for characterizing multimode squeezing with the aid of broadband multimode correlation functions. \section{Multimode Squeezers}\label{sec:multimode_squeezer} In a squeezed state of light one quadrature of the field exhibits an uncertainty below the standard quantum limit at the expense of an increased variance in the conjugate quadrature, such that Heisenberg's uncertainty relation holds at its minimum attainable value. The standard description of squeezed states usually considers two different types of squeezers: single-beam squeezers and twin-beam squeezers. Single-beam squeezers create the squeezing in a single optical mode \(\hat{S} = \exp\left(-\zeta \hat{a}^{\dagger2} + \zeta^* \hat{a}^2 \right)\), whereas twin-beam squeezers consist of \textit{two} beams with inter-beam squeezing \(\hat{S}^{ab} = \exp\left(-\zeta \hat{a}^\dagger \hat{b}^\dagger + \zeta^* \hat{a} \hat{b}\right) \) \cite{barnett_methods_2003}. In these equations \(\zeta\) labels the squeezing strength and the operators \(\hat{a}^\dagger, \hat{b}^\dagger\) create photons in distinct optical modes. In this section we go beyond the standard description and discuss the theory of squeezed states generated by the interaction of ultrafast pump pulses with nonlinear crystals or optical fibers. Here, we concentrate on the spectral structure of the broadband output beams.
In general the utilized optical processes, typically called optical parametric amplification (OPA) or parametric downconversion (PDC), do not generate one but a variety of different squeezers in multiple frequency modes. A whole set of independent squeezed beams is generated in broadband orthogonal spectral modes within an optical beam. We refer to these states as frequency multimode single- or twin-beam squeezers \cite{wasilewski_pulsed_2006}. Here the \textit{multimode} prefix indicates that more than one squeezer is present in the optical beam, and the term \textit{single- or twin-beam} identifies whether one squeezed beam or two entangled squeezed beams are created. Due to the single-pass configuration of our sources, losses are negligible; hence we restrict ourselves to the analysis of pure squeezed states. \subsection{Multimode twin-beam squeezers} The subject of our analysis is twin-beam squeezing generated by the propagation of an ultrafast pump pulse through a nonlinear medium (single-beam squeezers are discussed in \ref{app:single_beam_squeezer}). For simplicity we focus on the collinear propagation of all involved fields, each generated in a single spatial mode. This description is rigorously fulfilled for PDC in waveguides \cite{mosley_direct_2009, christ_spatial_2009}, but can also be applied to other experimental configurations, since the approximation carries all the complexities of the multimode propagation in the spectral degree of freedom. If the pump field is undepleted, we can neglect its quantum fluctuations and describe this OPA process by the effective quadratic Hamiltonian (see \ref{app:multimode_two_mode_squeezer_generation} for a detailed derivation) \begin{eqnarray} \hat{H}_{OPA} = A \int \mathrm d \omega_s \int \mathrm d \omega_i\, f(\omega_s, \omega_i) \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c.
\, , \label{eq:effective_OPA_hamiltonian_two_mode} \end{eqnarray} in which the constant \(A\) denotes the overall efficiency of the OPA and the function \(f(\omega_s, \omega_i)\) describes the normalized output spectrum of the downconverted beam, which --- in many cases --- is close to a two-dimensional Gaussian distribution. The operators \(\hat{a}^\dagger_s(\omega_s)\) and \(\hat{a}^\dagger_i(\omega_i)\) are the photon creation operators in the two twin-beam arms, generally labelled signal and idler, respectively. The unitary transformation generated by the effective OPA Hamiltonian in equation \eref{eq:effective_OPA_hamiltonian_two_mode} can be written in the form \begin{eqnarray} \fl \qquad \hat{U}_{OPA} &= \exp\left[-\frac{\imath}{\hbar} \left( A \int \mathrm d \omega_s \int \mathrm d \omega_i\, f(\omega_s, \omega_i) \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c. \right) \right]. \label{eq:effective_OPA_unitary_two_mode_hamilton} \end{eqnarray} By virtue of the singular-value-decomposition theorem \cite{law_continuous_2000} we decompose the two terms in the exponential of equation \eref{eq:effective_OPA_unitary_two_mode_hamilton} as \begin{eqnarray} \nonumber -\frac{\imath}{\hbar} A f(\omega_s, \omega_i) = \sum_k r_k \psi^*_k(\omega_s) \phi^*_k(\omega_i), \,\,\,\, \mathrm{and}\\ -\frac{\imath}{\hbar} A^* f^*(\omega_s, \omega_i) = - \sum_k r_k \psi_k(\omega_s) \phi_k(\omega_i). \label{eq:singular_value_decomposition} \end{eqnarray} Here both \(\left\{\psi_k(\omega_s)\right\}\) and \(\left\{\phi_k(\omega_i)\right\}\) form a complete set of orthonormal functions. The weights \(r_k \in \mathbb{R}^+\) give the amplitudes of the generated modes \(\psi_k(\omega_s)\) and \(\phi_k(\omega_i)\).
Employing equation \eref{eq:singular_value_decomposition} and introducing a new broadband mode basis \cite{rohde_spectral_2007} for the generated state as: \begin{eqnarray} \hat{A}_k = \int \mathrm d \omega_s \psi_k(\omega_s) \hat{a}_s(\omega_s) \,\,\,\, \mathrm{and} \,\,\,\, \hat{B}_k = \int \mathrm d \omega_i \phi_k(\omega_i) \hat{a}_i(\omega_i), \end{eqnarray} we obtain the unitary transformation \cite{mauerer_how_2009} \begin{eqnarray} \nonumber \hat{U}_{OPA} &= \exp\left[\sum_k r_k \hat{A}_k^\dagger \hat{B}_k^\dagger - h.c. \right] \\ \nonumber &= \bigotimes_k \exp\left[r_k \hat{A}_k^\dagger \hat{B}_k^\dagger - h.c. \right] \\ &= \bigotimes_k \hat{S}^{ab}_k(-r_k). \label{eq:effective_OPA_unitary_two_mode} \end{eqnarray} In total the OPA generates a tensor product of distinct broadband twin-beam squeezers as defined in \cite{barnett_methods_2003} with squeezing amplitudes \(r_k\), related to the available amount of squeezing via: \(\mathrm{squeezing[dB]} = -10 \log_{10}\left(e^{-2 r_k}\right)\). The Heisenberg representation of the multimode twin-beam squeezers is given by independent input-output relations for each broadband beam \begin{eqnarray} \nonumber \hat{A}_k \Rightarrow \cosh(r_k) \hat{A}_k + \sinh(r_k)\hat{B}_k^\dagger \\ \hat{B}_k \Rightarrow \cosh(r_k) \hat{B}_k + \sinh(r_k)\hat{A}_k^\dagger . \label{eq:two_mode_squeezer_input_output_relation} \end{eqnarray} Note that the squeezer distribution \(r_k\) and basis modes \(\hat{A}_k\) and \(\hat{B}_k\) are unique and well-defined properties of the generated twin-beam. Their exact form is given by the Schmidt decomposition of the joint spectral amplitude \(-\frac{\imath}{\hbar} A f(\omega_s, \omega_i)\). This mathematical transformation directly yields the physical shape of the generated optical modes \(\psi_k(\omega_s)\), \(\phi_k(\omega_i)\) with each pair \(\hat{A}_k\) and \(\hat{B}_k\) being strictly correlated. 
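The relation between the squeezing amplitudes \(r_k\) and measurable squeezing can be made concrete with a short numerical sketch (illustrative values; the function name is ours): the Bogoliubov coefficients of the input-output relation \eref{eq:two_mode_squeezer_input_output_relation} satisfy \(\cosh^2 r_k - \sinh^2 r_k = 1\), which preserves the bosonic commutators, and the conversion to decibels follows the formula quoted above.

```python
import numpy as np

def squeezing_db(r):
    # squeezing[dB] = -10 log10(e^{-2 r}) = (20 / ln 10) * r
    return -10.0 * np.log10(np.exp(-2.0 * np.asarray(r, dtype=float)))

# Bogoliubov coefficients of the input-output relation preserve the
# commutator [A_k, A_k^dagger] = 1: cosh^2(r) - sinh^2(r) = 1.
r = 1.0
print(np.cosh(r)**2 - np.sinh(r)**2)  # 1.0
print(squeezing_db(r))                # ~8.686 dB for r = 1
```

Since the conversion is linear in \(r\), the dB scale inherits the same mode distribution \(\lambda_k\) up to an overall factor.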
In figure \ref{fig:schmidt_modes} we illustrate one possible squeezer distribution and the corresponding broadband modes. The joint spectral distribution \(f(\omega_s, \omega_i)\) of the generated twin beams shown in figure \ref{fig:schmidt_modes} defines the shape of the broadband signal and idler modes \(\hat{A}_k\) and \(\hat{B}_k\). In the special case of a Gaussian spectral distribution the squeezing modes resemble Hermite functions. The number of different squeezer modes is closely connected to the frequency correlations between the signal and idler beam. In the presented case the spectrally correlated beams lead to over 20 independent squeezers. The total amount of squeezing depends on the constant \(A\) appearing in the Hamiltonian in equation \eref{eq:effective_OPA_hamiltonian_two_mode}, which is directly related to the applied pump power \(I\) and the strength of the nonlinearity \(\chi^{(2)}\) of the medium \((A \propto \chi^{(2)} \sqrt{I})\). \begin{figure} \caption{Visualization of the singular value decomposition in equation \eref{eq:singular_value_decomposition}. The frequency distribution \(- \frac{\imath}{\hbar} A f(\omega_s, \omega_i)\) of the generated state defines the shape of the signal and idler modes \(\psi_k(\omega_s), \phi_k(\omega_i)\) and the squeezer distribution \(r_k\).} \label{fig:schmidt_modes} \end{figure} The OPA state is mainly characterized by the number of squeezed modes and the overall gain of the process, both being determined by the distribution of the individual squeezing amplitudes \(r_k\).
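The decomposition in equation \eref{eq:singular_value_decomposition} can be carried out numerically on a discretized grid. A minimal sketch (assuming, for illustration, a correlated two-dimensional Gaussian joint spectral amplitude; the grid and parameter values are ours): the singular values of the discretized amplitude give the squeezer distribution, and an effective number of squeezers is quantified by the Schmidt number \(K = 1/\sum_k \lambda_k^4\).

```python
import numpy as np

# Illustrative joint spectral amplitude: a 2D Gaussian with frequency
# correlation rho between signal and idler (dimensionless detunings).
w = np.linspace(-3.0, 3.0, 201)
ws, wi = np.meshgrid(w, w, indexing='ij')
rho = 0.7
f = np.exp(-(ws**2 - 2 * rho * ws * wi + wi**2) / (2 * (1 - rho**2)))

# Discretized Schmidt decomposition = singular value decomposition.
sv = np.linalg.svd(f, compute_uv=False)
lam = sv / np.sqrt(np.sum(sv**2))   # normalized distribution, sum lam^2 = 1
K = 1.0 / np.sum(lam**4)            # effective number of independent squeezers
print(K)  # K > 1 for a spectrally correlated joint amplitude
```

For an uncorrelated (factorable) Gaussian amplitude, \(\rho = 0\), the same procedure yields \(K \simeq 1\), i.e. a single broadband squeezer.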
In order to analyze the number of generated squeezers independently from the amount of squeezing, we split the distribution of squeezing weights \(r_k\) into a normalized distribution \(\lambda_k\) \(\left(\sum_k \lambda_k^2 = 1\right)\), which characterizes the probability of occupation of the different squeezers in the respective optical quantum state, and an overall gain of the process \(B \in \mathbb{R}^+\), quantifying the total amount of generated squeezing according to \begin{eqnarray} r_k = B \, \lambda_k. \end{eqnarray} The characterization of these two fundamental properties of a multimode twin-beam state is a major experimental challenge. While these states are easily generated in the lab, a tomography by means of homodyne detection would require matching, for each pair of squeezed modes \(\hat{A}_k\) and \(\hat{B}_k\), local oscillator beams with adapted temporal-spectral pulse shapes. Multimode homodyning \cite{beck_joint_2001} may provide a route to circumvent this difficulty, however an experimental implementation still appears challenging. \section{Correlation functions}\label{sec:correlation_functions} The n-th order (normalized) correlation function \(g^{(n)}(t_1, t_2, \dots, t_n)\) is generally defined as a time-dependent function of the electromagnetic field. For quantized electric field operators, it can be expressed as \cite{glauber_quantum_1963, loudon_quantum_2000, mandel_optical_1995, vogel_quantum_2006} \begin{eqnarray} g^{(n)}(t_1, t_2, \dots, t_n) =\frac{\left< \hat{E}^{(-)}(t_1) \dots \hat{E}^{(-)}(t_n)\hat{E}^{(+)}(t_1) \dots \hat{E}^{(+)}(t_n)\right>} {\left< \hat{E}^{(-)}(t_1)\hat{E}^{(+)}(t_1)\right> \dots \left< \hat{E}^{(-)}(t_n) \hat{E}^{(+)}(t_n) \right>}, \label{eq:correlation_function-time_resolved} \end{eqnarray} and it measures the (normalized) n-th order temporal correlations at different points in time.
Note that this definition of the correlation functions is independent of coupling losses and detection inefficiencies, yielding a loss-resilient measure \cite{avenhaus_accessing_2010}. Realistic detectors, however, suffer from internal jitter and finite gating times. We account for these resolution effects by weighting the correlation function with the appropriate detection window \(T(t)\) of the applied detectors, as presented in \cite{tapster_photon_1998}, and obtain \begin{figure} \caption{a) perfect time-resolved detection; b) finite detection gate; c) broadband detection gate exceeding the pulse duration, giving rise to different types of correlation measures.} \label{fig:correlation_function} \end{figure} \begin{eqnarray} \nonumber \fl g^{(n)}(t_1, t_2, \dots, t_n) = \\ \fl \qquad \frac{\int \mathrm d t_1 T(t_1) \dots \int \mathrm d t_n T(t_n) \left< \hat{E}^{(-)}(t_1) \dots \hat{E}^{(-)}(t_n)\hat{E}^{(+)}(t_1) \dots \hat{E}^{(+)}(t_n)\right>} { \int \mathrm d t_1 T(t_1) \left< \hat{E}^{(-)}(t_1)\hat{E}^{(+)}(t_1)\right> \dots \int \mathrm d t_n T(t_n) \left< \hat{E}^{(-)}(t_n) \hat{E}^{(+)}(t_n) \right>}. \label{eq:correlation_function-time_resolved_finite_gating} \end{eqnarray} If the employed photo-detectors exhibit flat detection windows exceeding the length of the investigated pulses (\(T(t) \rightarrow \mathrm{const.}\)), equation \eref{eq:correlation_function-time_resolved_finite_gating} simplifies to \begin{eqnarray} g^{(n)} = \frac{\int \mathrm d t_1 \dots \mathrm d t_n \left< \hat{E}^{(-)}(t_1) \dots \hat{E}^{(-)}(t_n)\hat{E}^{(+)}(t_1) \dots \hat{E}^{(+)}(t_n)\right>} { \int \mathrm d t_1 \left< \hat{E}^{(-)}(t_1)\hat{E}^{(+)}(t_1)\right> \dots \int \mathrm d t_n \left< \hat{E}^{(-)}(t_n) \hat{E}^{(+)}(t_n) \right>}. \label{eq:correlation_function-time_resolved_flat} \end{eqnarray} This theoretical model is adequate for the detection of ultrafast pulses with standard avalanche photodetectors.
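The loss independence of the normalized \(g^{(2)}\) can be illustrated with a short Monte Carlo sketch (our own toy model, not from the text): per-pulse photon numbers of a pulsed coherent state are Poissonian and those of a single-mode thermal state are geometric; applying binomial loss to each pulse leaves the estimated \(g^{(2)} = \langle n(n-1)\rangle / \langle n \rangle^2\) unchanged (1 for coherent, 2 for thermal light).

```python
import numpy as np

rng = np.random.default_rng(7)

def g2(n):
    # Time-integrated g2 estimated from per-pulse photon numbers.
    n = n.astype(float)
    return np.mean(n * (n - 1)) / np.mean(n)**2

N = 400_000
coherent = rng.poisson(2.3, size=N)           # Poissonian: g2 = 1
thermal = rng.geometric(1 / 3.3, size=N) - 1  # geometric, mean 2.3: g2 = 2

eta = 0.3                                     # strong loss (30% efficiency)
coherent_lossy = rng.binomial(coherent, eta)  # binomial thinning per pulse
thermal_lossy = rng.binomial(thermal, eta)

print(g2(coherent), g2(coherent_lossy))  # both ~1
print(g2(thermal), g2(thermal_lossy))    # both ~2
```

The invariance follows because binomial thinning rescales \(\langle n(n-1)\rangle\) by \(\eta^2\) and \(\langle n\rangle\) by \(\eta\), so the ratio is unchanged.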
Furthermore, equation \eref{eq:correlation_function-time_resolved_flat} exhibits the convenient property of time independence and represents our generalized broadband multimode correlation function. Despite its similarity to the common correlation functions as defined in equation \eref{eq:correlation_function-time_resolved}, the broadband multimode correlation function in equation \eref{eq:correlation_function-time_resolved_flat} should no longer be considered as a naive general measure of n-th order coherence. In figure \ref{fig:correlation_function} we illustrate the main difference between the time-integrated and time-resolved correlation measurements. Equation \eref{eq:correlation_function-time_resolved_flat} is still not optimal for our studies of squeezed light fields. We transform it further by replacing the electric field operators by photon number creation and destruction operators (\(\hat{E}^{(+)}(t_n) \propto \hat{a}(t_n)\)) and perform a Fourier transform from the time domain into the frequency domain (\( \hat{a}(t) = \int \mathrm d \omega\, \hat{a}(\omega) e^{-\imath \omega t}\)). Equation \eref{eq:correlation_function-time_resolved_flat} is then rewritten as \begin{eqnarray} \nonumber g^{(n)} &= \frac{\int \mathrm d \omega_1 \dots \mathrm d \omega_n \left< \hat{a}^\dagger(\omega_1) \dots \hat{a}^\dagger(\omega_n)\hat{a}(\omega_1) \dots \hat{a}(\omega_n)\right>} { \int \mathrm d \omega_1 \left< \hat{a}^\dagger(\omega_1)\hat{a}(\omega_1)\right> \dots \int \mathrm d \omega_n \left< \hat{a}^\dagger(\omega_n) \hat{a}(\omega_n) \right>} \\ &= \frac{\left<: \left( \int \mathrm d \omega \hat{a}^\dagger(\omega) \hat{a}(\omega) \right)^n: \right>}{\left< \int \mathrm d \omega \hat{a}^\dagger(\omega) \hat{a}(\omega) \right>^n}, \label{eq:correlation_function-time_resolved_flat_frequency_domain} \end{eqnarray} in which \(\left<: \cdots :\right>\) indicates normal ordering of the enclosed photon creation and destruction operators. 
In addition we adapt the correlation function to the basis of the measured quantum system, i.e. we perform a general basis transform from \(\hat{a}(\omega)\) to the basis of the measured multimode twin-beam squeezers \(\hat{A}_k\). This results in \begin{eqnarray} g^{(n)} = \frac{\left<:\left(\sum_k \hat{A}_k^\dagger\hat{A}_k\right)^n:\right>}{\left<\sum_k \hat{A}_k^\dagger\hat{A}_k\right>^n}. \label{eq:broadband_correlation_function_final} \end{eqnarray} Equations \eref{eq:correlation_function-time_resolved_flat}, \eref{eq:correlation_function-time_resolved_flat_frequency_domain} and \eref{eq:broadband_correlation_function_final} stress the key difference between time-resolved and time-integrated correlation function measurements. While time-resolved correlation functions probe specific temporal modes, time-integrating detectors directly measure a superposition of all the different modes. This specific feature of broadband multimode detection is essential for our analysis. The simultaneous measurement of all different optical modes gives us direct \textit{loss-independent} access to the squeezer distribution of the probed state. \subsection{Broadband multimode cross-correlation functions} In the previous section we restricted ourselves to intra-beam correlations. To allow for measurements of correlations between different beams, we extend our analysis. The identification of such inter-beam correlations is of special importance in quantum optics and quantum information applications, since they quantify the continuous variable entanglement between different subsystems, in our case the analyzed optical beams. In section \ref{sec:multimode_squeezer} we have already discussed one of the most widely employed entanglement sources: twin-beam squeezers. These states are not only entangled in their quadratures, but also in their spectral and spatial degrees of freedom \cite{braunstein_quantum_2005}.
In order to probe higher-order cross-correlations between the two different beams \cite{vogel_quantum_2006}, or subsystems \(a\) and \(b\), of order \(n\) and \(m\) respectively, we generalize equation \eref{eq:correlation_function-time_resolved} to \begin{eqnarray} \nonumber & \fl g^{(n, m)}(t^{(a)}_1, t^{(a)}_2, \dots, t^{(a)}_n; t^{(b)}_1, t^{(b)}_2, \dots, t^{(b)}_m) =\\ & \fl =\frac{\left< \hat{E}_a^{(-)}(t^{(a)}_1) \dots \hat{E}_a^{(-)}(t^{(a)}_n)\hat{E}_a^{(+)}(t^{(a)}_1) \dots \hat{E}_a^{(+)}(t^{(a)}_n) \times \hat{E}_b^{(-)}(t^{(b)}_1) \dots \hat{E}_b^{(-)}(t^{(b)}_m)\hat{E}_b^{(+)}(t^{(b)}_1) \dots \hat{E}_b^{(+)}(t^{(b)}_m)\right>} {\left< \hat{E}_a^{(-)}(t^{(a)}_1)\hat{E}_a^{(+)}(t^{(a)}_1)\right> \dots \left< \hat{E}_a^{(-)}(t^{(a)}_n) \hat{E}_a^{(+)}(t^{(a)}_n) \right> \times \left<\hat{E}_b^{(-)}(t^{(b)}_1)\hat{E}_b^{(+)}(t^{(b)}_1)\right> \dots \left<\hat{E}_b^{(-)}(t^{(b)}_m)\hat{E}_b^{(+)}(t^{(b)}_m)\right>}. \label{eq:cross_correlation_function-time_resolved} \end{eqnarray} Taking into account broadband detection windows --- exceeding the pulse duration --- the above formula can be reformulated as \begin{eqnarray} g^{(n,m)} = \frac{\left<:\left(\int\mathrm d t\,\hat{E}_a^{(-)}(t)\hat{E}_a^{(+)}(t)\right)^n: :\left(\int\mathrm d t \,\hat{E}_b^{(-)}(t)\hat{E}_b^{(+)}(t)\right)^m: \right>}{\left<\int\mathrm d t \,\hat{E}_a^{(-)}(t)\hat{E}_a^{(+)}(t)\right>^n\left<\int\mathrm d t \,\hat{E}_b^{(-)}(t)\hat{E}_b^{(+)}(t)\right>^m}. \end{eqnarray} Again we perform the same simplifications as in equation \eref{eq:correlation_function-time_resolved_flat_frequency_domain} in section \ref{sec:correlation_functions}: we replace the electric field operators by photon creation and destruction operators, apply the Fourier transform from the time to the frequency domain and finally adapt the measurement basis to the given optical state.
We find an extended version of equations \eref{eq:correlation_function-time_resolved_flat_frequency_domain} and \eref{eq:broadband_correlation_function_final} \begin{eqnarray} g^{(n,m)} &= \frac{\left<:\left(\int\mathrm d\omega \,\hat{a}^\dagger(\omega)\hat{a}(\omega)\right)^n: :\left(\int\mathrm d\omega \,\hat{b}^\dagger(\omega)\hat{b}(\omega)\right)^m: \right>}{\left<\int\mathrm d\omega \,\hat{a}^\dagger(\omega)\hat{a}(\omega)\right>^n\left<\int\mathrm d\omega \,\hat{b}^\dagger(\omega)\hat{b}(\omega)\right>^m} \\ &= \frac{\left<:\left(\sum_k \hat{A}_k^\dagger\hat{A}_k\right)^n::\left(\sum_k \hat{B}_k^\dagger\hat{B}_k\right)^m:\right>}{\left<\sum_k \hat{A}_k^\dagger\hat{A}_k\right>^n\left<\sum_k \hat{B}_k^\dagger\hat{B}_k\right>^m}. \label{eq:broadband_cross-correlation_function} \end{eqnarray} Further extensions of cross-correlation measurements to systems consisting of more than two different beams are possible \cite{mandel_optical_1995}, but are not necessary within the scope of this paper. \section{Probing frequency multimode squeezers via correlation functions}\label{sec:probing_twin_beam_squeezing} Using the theoretical description of squeezers as well as the derived broadband multimode correlation functions, we now combine the findings of sections \ref{sec:multimode_squeezer} and \ref{sec:correlation_functions}. We establish a connection between the broadband multimode correlation functions and the properties of the squeezing, i.e. the mode distribution \(\lambda_k\) and the optical gain \(B\). \subsection{Probing the number of modes via \(g^{(2)}\)-measurements}\label{sec:probing_twin_beam_squeezing_mode_distribution} The most important property of frequency multimode squeezers is the number of independent squeezers in the generated twin-beam state, which is specified by the mode distribution \(\lambda_k\).
In contrast to the optical gain \(B\), which is easily tuned by adjusting the pump power, the mode distribution \(\lambda_k\) is heavily constrained by the dispersion in the nonlinear material and hence --- in general --- not easily adjustable \cite{frequency_filter}. The effective number of modes in a multimode twin-beam state is given by the Schmidt number or cooperativity parameter \(K\) as defined in \cite{eberly_schmidt_2006, r_grobe_measure_1994} with \begin{eqnarray} K = 1 / \sum_k \lambda_k^4. \label{eq:schmidt_number} \end{eqnarray} Under the assumption of a uniform squeezer distribution, it directly reflects the number of occupied modes. \begin{figure} \caption{Setup to measure \(g^{(2)}\) of a multimode twin-beam squeezer.} \label{fig:mm_two_mode_squeezr_g2_setup} \end{figure} The mode number \(K\) of a multimode twin-beam squeezer can be directly accessed by measuring the broadband multimode \(g^{(2)}\)-correlation function in the signal or idler arm, as depicted in figure \ref{fig:mm_two_mode_squeezr_g2_setup}. This is a result of the structure of the second-order correlation function, which --- by using \eref{eq:broadband_correlation_function_final} and \eref{eq:two_mode_squeezer_input_output_relation} --- can be expressed as \begin{eqnarray} g^{(2)} &= 1 + \frac{\sum_k \sinh^4(r_k) } { \left[\sum_k \sinh^2(r_k)\right]^2 }. \label{eq:gtwo_mm_two_mode_squeezer_nonlinear} \end{eqnarray} For our further analysis it is useful to distinguish the low gain from the high gain regime, corresponding to low and high levels of squeezing.
In the low gain regime, corresponding to the biphotonic states typically referred to in the context of PDC experiments, we have \(\sinh(r_k) \approx r_k = B \lambda_k\) and can simplify equation \eref{eq:gtwo_mm_two_mode_squeezer_nonlinear} to \begin{eqnarray} \nonumber g^{(2)} &\approx 1 + \frac{\sum_k B^4 \lambda_k^4 } { \left(\sum_k B^2 \lambda_k^2\right)^2 } = 1 + \frac{\sum_k \lambda_k^4 } { \left(\sum_k \lambda_k^2\right)^2 } = 1 + \sum_k \lambda_k^4\\ &=1 + \frac{1}{K}. \label{eq:gtwo_mm_two_mode_squeezer} \end{eqnarray} Consequently, the effective number of modes is directly available from the correlation function measurement via \(K = 1 / ( g^{(2)} -1 )\). For a single twin-beam squeezer (\(K = 1\)) we obtain \(g^{(2)} = 2\), whereas for higher numbers of squeezers (\(K \gg 1\)) the contribution from the term \(\sum_k \lambda_k^4\) becomes negligible and \(g^{(2)}\) approaches one. This direct correspondence between \(g^{(2)}\) and the effective number of modes \(K\) is presented in figure \ref{fig:g2_mm_squeezer_results} (a). Another way of interpreting equation \eref{eq:gtwo_mm_two_mode_squeezer} is to approach the correlation function measurement from the photon-number point of view. The \(g^{(2)}\)-value of a single twin-beam squeezer, which exhibits a thermal photon-number distribution, evaluates to \(g^{(2)} = 2\). If more squeezers are involved, the detector cannot distinguish between the different thermal distributions, i.e. it measures a convolution of all the different thermal photon streams, which gives a Poissonian photon-number distribution \cite{mauerer_how_2009, avenhaus_photon_2008}. In fact, one can show that the \(g^{(2)}\)-correlation function in equation \eref{eq:gtwo_mm_two_mode_squeezer_nonlinear} is the convolution of the second-order moments of each individual squeezer.
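As a quick numerical sanity check of the low-gain relation \(g^{(2)} = 1 + 1/K\), the following Python sketch (with an illustrative uniform distribution of \(K = 5\) squeezers; the numbers are not taken from the text) evaluates the exact expression in equation \eref{eq:gtwo_mm_two_mode_squeezer_nonlinear} and recovers \(K\) via \(K = 1/(g^{(2)}-1)\):

```python
import math

def g2_exact(r):
    """Exact g2 = 1 + sum_k sinh^4(r_k) / (sum_k sinh^2(r_k))^2."""
    s2 = sum(math.sinh(rk)**2 for rk in r)
    s4 = sum(math.sinh(rk)**4 for rk in r)
    return 1.0 + s4 / s2**2

K = 5                           # illustrative number of equally occupied modes
lam = [1.0 / math.sqrt(K)] * K  # uniform distribution, sum_k lambda_k^2 = 1
B = 1e-3                        # low gain
r = [B * l for l in lam]        # squeezing weights r_k = B * lambda_k

g2 = g2_exact(r)
K_est = 1.0 / (g2 - 1.0)        # recovered effective mode number
print(g2, K_est)
```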
Once more, we stress that the \(g^{(2)}\)-measurement does not give access to the exact distribution of squeezers \(\lambda_k\), but to the \textit{effective} number of modes under the assumption that all squeezed states share an identical amount of squeezing. This is a rather crude model and does not fit many experimental realizations well. Fortunately, there is a common class of squeezed states for which a much more refined mode distribution \(\lambda_k\) is accessible: in the case of a two-dimensional Gaussian joint-spectral distribution \(f(\omega_s, \omega_i)\), the distribution \(\lambda_k\) is thermal, \(\lambda_k = \sqrt{1 - \mu^2} \mu^k\), and thus it can be characterized by a single distribution parameter \(\mu\) \cite{uren_photon_2003}. The latter can be retrieved from a \(g^{(2)}\)-measurement via \(\mu = \sqrt{2 / g^{(2)} - 1}\), as depicted in figure \ref{fig:g2_mm_squeezer_results} (b), where we illustrate how the detection of the \(g^{(2)}\)-function can provide us directly with comprehensive knowledge about the underlying spectral mode structure of the analyzed state. \begin{figure} \caption{a) Plot of the effective mode number \(K\) as a function of \(g^{(2)}\) for various effective numbers of modes. b) Visualization of \(\mu\) as a function of \(g^{(2)}\) for different thermal squeezer distributions.} \label{fig:g2_mm_squeezer_results} \end{figure} In conclusion, we have shown that by measuring the second-order correlation function \(g^{(2)}\) of a multimode broadband twin-beam state one can probe the corresponding distribution of spectral modes \(\lambda_k\). Our method has the advantage that correlation functions can be measured in a very practical way \cite{eckstein_highly_2011}, resulting in an approach that is much easier than realizing homodyne measurements, which require addressing individual modes.
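For the thermal distribution \(\lambda_k = \sqrt{1-\mu^2}\,\mu^k\), the inversion \(\mu = \sqrt{2/g^{(2)} - 1}\) can be verified numerically. The sketch below simulates a low-gain state with a hypothetical \(\mu = 0.6\) (a value chosen purely for illustration) and recovers the parameter from \(g^{(2)}\):

```python
import math

mu = 0.6                          # hypothetical thermal distribution parameter
N = 200                           # truncation of the mode sum
lam = [math.sqrt(1 - mu**2) * mu**k for k in range(N)]

B = 1e-3                          # low gain
r = [B * l for l in lam]          # squeezing weights r_k = B * lambda_k
s2 = sum(math.sinh(rk)**2 for rk in r)
s4 = sum(math.sinh(rk)**4 for rk in r)
g2 = 1.0 + s4 / s2**2             # g2 of the simulated state

mu_est = math.sqrt(2.0 / g2 - 1.0)  # inversion quoted in the text
print(mu, mu_est)
```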
As a side remark we would like to point out that one can also determine the effective number of squeezers from the higher moments \(g^{(n)}\, , n \ge 2\) similar to the presented approach, yet \(g^{(2)}\) is already sufficient for our purposes. \subsection{Probing the optical gain \(B\) of a multimode twin-beam squeezer via \(g^{(1,1)}\) measurements} In section \ref{sec:probing_twin_beam_squeezing_mode_distribution} we determined the number of modes in a loss-resilient way by measuring \(g^{(2)}\) for low gains \(B\). Here we investigate the amount of generated squeezing, determined by the overall optical gain \(B\). In order to probe this value the setup has to be changed to measure the correlation function \(g^{(1,1)}\) of the generated twin-beam squeezer, as presented in figure \ref{fig:g11_mm_squeezer_setup}. \begin{figure} \caption{Schematic setup to measure \(g^{(1,1)}\) of a multimode twin-beam squeezer generated via PDC.} \label{fig:g11_mm_squeezer_setup} \end{figure} Using equations \eref{eq:broadband_cross-correlation_function} and \eref{eq:two_mode_squeezer_input_output_relation} we obtain for \(g^{(1,1)}\) the form \begin{eqnarray} \nonumber \fl g^{(1,1)} = \frac{\sum_{k,l} \sinh^2(r_k) \sinh^2(r_l) + \sum_{k} \sinh^2(r_k) \cosh^2(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^2} \\ \fl \qquad = 1 + \underbrace{\frac{1}{\sum_k \sinh^2(r_k)}}_{1/\left<n\right>} + \underbrace{\frac{\sum_k \sinh^4(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^2}}_{g^{(2)}-1}. \label{eq:g11_two_mode_squeezer} \end{eqnarray} The relevant characteristic we exploit from this measurement is its dependence on both the number of modes in the system, as given by the \(g^{(2)}\)-function, \textit{and} the mean photon number in each arm, which is closely connected to the coupling coefficient \(B\).
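The decomposition in the second line of equation \eref{eq:g11_two_mode_squeezer} follows from the identity \(\sinh^2 r \cosh^2 r = \sinh^2 r + \sinh^4 r\). A short numerical check (with illustrative, hypothetical weights) confirms that both forms agree exactly:

```python
import math

# Hypothetical squeezing weights at moderate gain (illustrative values only).
r = [0.8, 0.4, 0.2]

s2 = sum(math.sinh(rk)**2 for rk in r)                 # mean photon number <n>
s4 = sum(math.sinh(rk)**4 for rk in r)
sc = sum((math.sinh(rk) * math.cosh(rk))**2 for rk in r)

g11_direct = (s2**2 + sc) / s2**2                      # first line of the equation
g2 = 1.0 + s4 / s2**2
g11_decomposed = 1.0 + 1.0 / s2 + (g2 - 1.0)           # decomposed form
print(g11_direct, g11_decomposed)
```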
In the low gain regime (\(\sinh(r_k) \approx r_k\)), \(g^{(1,1)}\) simplifies to \begin{eqnarray} g^{(1,1)} &\approx 1 + \frac{1}{B^2} + \underbrace{\frac{\sum_k \lambda_k^4}{\left[\sum_k \lambda_k^2 \right]^2}}_{g^{(2)}-1} \approx \underbrace{g^{(2)}}_{\le 2} + \underbrace{\frac{1}{B^2}}_{\gg 1} \approx \frac{1}{B^2}. \label{eq:g11_two_mode_squeezer_low_gain} \end{eqnarray} Hence, the optical gain is --- in the low gain regime --- obtained from the \(g^{(1,1)}\)-measurement via the simple relation \(B \approx 1 / \sqrt{g^{(1,1)}}\). Mode dependencies of the coupling value \(B\) only occur at high squeezing strengths, where the relation diverges from equation \eref{eq:g11_two_mode_squeezer_low_gain} and takes on a more complicated form. In figure \ref{fig:two_mode_squeezer_g11_analytic_inverse} we plot the dependence of the overall coupling value \(B\) on \(g^{(1,1)}\), as given by equation \eref{eq:g11_two_mode_squeezer}; the correlation function \(g^{(1,1)}\) takes on a high value for small optical gains \(B\) but rapidly decreases when the high gain regime is approached. \begin{figure} \caption{The optical gain \(B\) plotted as a function of \(g^{(1,1)}\). For small values of \(B\) the correlation function \(g^{(1,1)}\) takes on a high value, yet rapidly decreases when the high gain regime is approached.} \label{fig:two_mode_squeezer_g11_analytic_inverse} \end{figure} In total, measuring \(g^{(1,1)}\) gives direct \textit{loss-independent} access to the optical gain \(B\). This enables a loss tolerant probing of the generated mean photon number which, in the low gain regime, is even independent of the underlying mode structure. Taking into account the prior knowledge we gained from section \ref{sec:probing_twin_beam_squeezing_mode_distribution}, we can now ascertain all parameters needed to fully determine the highly complex multimode state. The optical gain \(B\) defines not only the photon distribution, but also quantifies the generated twin-beam squeezing, i.e.
the available CV-entanglement in each mode. Note that all modes exhibit different entanglement parameters. Depending on the state and its respective mode distribution, as determined by the \(g^{(2)}\)-measurement, all the entanglement could be generated in a single spectral mode, where it is readily available for quantum information experiments, or in a multitude of different squeezed modes. Note, however, that after the state generation process multiple squeezers cannot be combined into a single optical mode by using only Gaussian operations, since this operation would be equivalent to continuous-variable entanglement distillation \cite{eisert_distilling_2002, fiurascaronek_gaussian_2002, giedke_characterization_2002}. \section{Outlook} In this paper we focused on the state characterization of ultrafast twin-beam squeezers in the time domain and their experimental analysis. The presented approach, however, is not limited to twin-beam squeezers: on the one hand, our measurement technique also applies to probe the squeezing of ultrafast multimode \textit{single}-beam squeezers, as presented in \ref{app:single_beam_squeezer}. On the other hand, our approach is easily adapted to spatial multimode squeezed states \cite{treps_quantum_2005, chalopin_multimode_2010, lassen_generation_2006}. These are characterized by measuring correlation functions that are broadband in the spatial domain, in direct analogy to the spectral degree of freedom analyzed in this work. \section{Conclusion} We elaborated on the generation of multimode squeezed beams and their characterization with multimode broadband correlation functions. We expanded the formalism of correlation functions by including the effects of finite time resolution. These extended correlation function measurements serve as a versatile tool for the characterization of optical quantum states such as twin-beam squeezers.
They provide a simple, straightforward and \textit{loss independent} way to investigate the characteristics of multimode squeezed states. Our findings are important for the field of efficient quantum state characterization and have already proven to be a useful experimental tool in the laboratory \cite{laiho_testing_2010, eckstein_highly_2011}. \section{Acknowledgments} This work was supported by the EC under the grant agreements CORNER (FP7-ICT-213681), and QUESSENCE (248095). Kati\'{u}scia N. Cassemiro acknowledges support from the Alexander von Humboldt foundation. The authors thank Agata M. Bra\'nczyk, Malte Avenhaus and Benjamin Brecht for useful discussions and helpful comments. \\ \begin{appendix} \section{Multimode twin-beam squeezer generation via nonlinear optical processes}\label{app:multimode_two_mode_squeezer_generation} \subsection{Generation of multimode twin-beam squeezers via parametric downconversion}\label{app:multimode_two_mode_squeezer_generation_PDC} In the process of parametric downconversion squeezed states are generated by the interaction of a strong pump field with the \(\chi^{(2)}\)-nonlinearity of a crystal. Regarding the generation of twin-beam squeezers the Hamiltonian of the corresponding three-wave-mixing process is given by \cite{mauerer_how_2009, braczyk_optimized_2010, grice_spectral_1997}: \begin{eqnarray} \hat{H}_{PDC} = \int_{-\frac{L}{2}}^{\frac{L}{2}} \mathrm d z \, \chi^{(2)} \hat{E}^{(+)}_p(z,t) \hat{E}^{(-)}_s(z,t) \hat{E}_i^{(-)}(z,t) + h.c. \label{eq:pdc_twin-beam_hamiltonian} \end{eqnarray} where we focused on a collinear interaction of all three beams. In equation \eref{eq:pdc_twin-beam_hamiltonian} \(L\) labels the length of the medium, \( \chi^{(2)} \) the nonlinearity of the crystal, and \(\hat{E}_p^{(+)}(z,t)\), \(\hat{E}^{(-)}_s(z,t)\), \(\hat{E}^{(-)}_i(z,t)\) the pump, the signal and the idler fields. 
The electric field operators used in equation \eref{eq:pdc_twin-beam_hamiltonian} are defined as follows \begin{eqnarray} \fl \qquad \hat{E}_{x}^{(-)}(z,t) = \hat{E}_{x}^{(+)\dagger}(z,t) = C \int \mathrm d \omega_{x} \, \exp\left[-\imath\left(k_{x}(\omega)z + \omega t\right)\right] \hat{a}^\dagger_{x}(\omega) \label{eq:electric_field_operator}, \end{eqnarray} in which we have merged all constants and slowly varying field amplitudes in the overall parameter \(C\). In order to simplify the Hamiltonian we treat the strong pump field as a classical wave \begin{eqnarray} \hat{E}_p^{(+)}(z,t) \Rightarrow E_p(z,t) = \int \mathrm d \omega_p \, \alpha(\omega_p) \exp\left[\imath\left(k_p(\omega_p)z + \omega_p t\right)\right]. \label{eq:electric_pump_field} \end{eqnarray} Here \(\alpha(\omega_p) = A_p \exp\left[-(\omega_p - \mu_p)^2/ (2\sigma_p^2)\right]\) is the Gaussian pump envelope function generated by an ultrafast laser system, consisting of a field amplitude \(A_p\), a central pump frequency \(\mu_p\), and a pump width \(\sigma_p\). The PDC Hamiltonian in equation \eref{eq:pdc_twin-beam_hamiltonian} generates the following unitary transformation: \begin{eqnarray} \hat{U} = \exp\left[-\frac{\imath}{\hbar} \int_{-\infty}^{\infty}\mathrm d t' \, \hat{H}_{PDC}(t')\right] \label{eq:unitary_operator_two_mode_squeezer} \end{eqnarray} In the low downconversion regime we can ignore the time-ordering of the electric field operators \cite{wasilewski_pulsed_2006, lvovsky_decomposing_2007} and directly evaluate the time integration. This yields a delta-function \(2 \pi \delta(\omega_s + \omega_i - \omega_p)\) and hence allows us to perform the integral over the pump frequency \(\omega_p\). Equation \eref{eq:unitary_operator_two_mode_squeezer} can be re-expressed as \begin{eqnarray} \nonumber \fl \hat{U} = \exp \left[ -\frac{\imath}{\hbar} \left(A' \int_{-\frac{L}{2}}^{\frac{L}{2}} \mathrm d z \int \mathrm d \omega_s \int \mathrm d \omega_i \, \right. \right. \\ \left. \left.
\times \alpha(\omega_s + \omega_i) \exp\left[\imath \Delta k z\right] \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c. \right) \right], \end{eqnarray} in which \(\Delta k = k_p(\omega_s + \omega_i) - k_s(\omega_s) - k_i(\omega_i)\) is the so-called phase-mismatch and \(A'\) accumulates all constants. Finally, we perform the integration over the length of the crystal and obtain \begin{eqnarray} \fl \qquad \hat{U} = \exp\left[ -\frac{\imath}{\hbar} \left( A \int \mathrm d \omega_s \int \mathrm d \omega_i \, \alpha(\omega_s + \omega_i) \phi(\omega_s, \omega_i) \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c. \right)\right], \label{eq:unitary_typeII_process_derivation} \end{eqnarray} where \(\phi(\omega_s, \omega_i) = \mathrm{sinc}\left(\frac{\Delta k L}{2}\right)\) is referred to as the phasematching function. The latter combined with the pump distribution \(\alpha(\omega_s + \omega_i)\) gives the overall frequency distribution or joint spectral amplitude \(f(\omega_s, \omega_i)\) of the generated state. The final unitary squeezing operator of the downconversion process is \begin{eqnarray} \hat{U} = \exp\left[-\frac{\imath}{\hbar}\underbrace{\left( A \int \mathrm d \omega_s \int \mathrm d \omega_i f(\omega_s, \omega_i) \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c. \right)}_{\hat{H}_{eff}} \right]. \label{eq:unitary_PDC_twin_beam_derivation} \end{eqnarray} The \(\mathrm{sinc}\) function appearing in equation \eref{eq:unitary_PDC_twin_beam_derivation} can be approximated by a Gaussian distribution \begin{eqnarray} \fl \qquad \phi(\omega_s, \omega_i) = \mathrm{sinc}\left(\frac{\Delta k(\omega_s,\omega_i) L}{2}\right) \approx \exp\left[-0.193 \left(\frac{\Delta k(\omega_s, \omega_i) L}{2}\right)^2\right]. \label{eq:phasematching_function_approximation} \end{eqnarray} With this simplification the joint frequency distribution \(f(\omega_s, \omega_i)\) takes on the form of a two-dimensional Gaussian distribution.
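The quality of the Gaussian approximation in equation \eref{eq:phasematching_function_approximation} can be checked numerically. The short sketch below (sample points chosen arbitrarily for illustration) compares \(\mathrm{sinc}(x) = \sin(x)/x\) with \(\exp(-0.193\,x^2)\) for \(x = \Delta k L / 2\); the two curves agree closely out to the half-maximum of the \(\mathrm{sinc}\) amplitude:

```python
import math

def sinc(x):
    """sin(x)/x with the removable singularity at x = 0."""
    return 1.0 if x == 0 else math.sin(x) / x

gamma = 0.193  # coefficient quoted in the text

# Compare the exact phasematching profile with its Gaussian approximation
# on a few sample points of x = Delta_k * L / 2.
for x in [0.0, 0.5, 1.0, 1.5, 1.8955]:
    print(x, sinc(x), math.exp(-gamma * x**2))
```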
Applying this approximation the exact squeezer distribution is accessible as presented in section \ref{sec:probing_twin_beam_squeezing}. \subsection{Generation of multimode twin-beam squeezers via four-wave-mixing} In a four-wave-mixing (FWM) process two strong pump fields interact with the \(\chi^{(3)}\)-nonlinearity of a fiber to create two new electric fields. If the two generated fields are distinguishable the Hamiltonian of the process is given by \cite{chen_quantum_2007} \begin{eqnarray} \fl \qquad \hat{H}_{\mathrm{FWM}} = \int_{-\frac{L}{2}}^{\frac{L}{2}} \mathrm d z \, \chi^{(3)} \hat{E}^{(+)}_{p1}(z,t) \hat{E}^{(+)}_{p2}(z,t) \hat{E}^{(-)}_s(z,t) \hat{E}_i^{(-)}(z,t) + h.c. \,\, . \label{eq:fwm_twin-beam_hamiltonian} \end{eqnarray} Again, we assume a collinear interaction of all interacting beams. The electric fields for signal, idler and pump are defined in equations \eref{eq:electric_field_operator} and \eref{eq:electric_pump_field}. Performing the same steps as in \ref{app:multimode_two_mode_squeezer_generation_PDC} we obtain a similar unitary transformation \begin{eqnarray} \fl \qquad \hat{U} = \exp\left[ -\frac{\imath}{\hbar} \underbrace{\left( A \int \mathrm d \omega_s \int \mathrm d \omega_i \, f_{\mathrm{FWM}}(\omega_s, \omega_i) \hat{a}_s^\dagger(\omega_s) \hat{a}_i^\dagger(\omega_i) + h.c. \right)}_{\hat{H}_{eff}}\right] . \label{eq:unitary_FWM_twin_beam_derivation} \end{eqnarray} Equation \eref{eq:unitary_FWM_twin_beam_derivation} resembles equation \eref{eq:unitary_PDC_twin_beam_derivation} with the exception of the joint frequency distribution \(f_{\mathrm{FWM}}(\omega_s, \omega_i)\) which takes on a more complicated shape in comparison to the PDC case \begin{eqnarray} \fl \qquad f_{\mathrm{FWM}}(\omega_s, \omega_i) = \int \mathrm d \omega_p \, \alpha(\omega_{p}) \alpha(\omega_s + \omega_i - \omega_p)\, \mathrm{sinc}\left( \frac{\Delta k (\omega_p, \omega_s, \omega_i) L}{2} \right) . 
\end{eqnarray} By comparing the unitary transformations in equations \eref{eq:unitary_typeII_process_derivation} and \eref{eq:unitary_FWM_twin_beam_derivation} it is apparent that the two different processes create the same fundamental quantum state: multimode twin-beam squeezers. \section{Multimode single-beam squeezers}\label{app:single_beam_squeezer} In the main body of the paper we discussed the characterization of multimode twin-beam squeezers. Here we call attention to the fact that the broadband multimode correlation function formalism is also applicable to probe multimode single-beam squeezed states. \subsection{Generation of multimode single-beam squeezers}\label{app:single_beam_squeezer_generation} Single-beam squeezers are created by PDC and FWM processes similar to the twin-beam states. The difference between the twin-beam and single-beam squeezer generation is that in the latter the generated fields are emitted into the same optical mode, whereas in the former two different optical modes are generated, as discussed in \ref{app:multimode_two_mode_squeezer_generation}. The PDC Hamiltonian generating a single-beam squeezer is given by \begin{eqnarray} \hat{H} = \int_{-\frac{L}{2}}^{\frac{L}{2}} \mathrm d z \, \chi^{(2)} \hat{E}^{(+)}_p(z,t) \hat{E}^{(-)}(z,t) \hat{E}^{(-)}(z,t) + h.c. \,\, . \label{eq:pdc_single-beam_hamiltonian} \end{eqnarray} Performing the same steps as in the case of twin-beam generation we obtain the unitary transformation \begin{eqnarray} \hat{U} = \exp\left[-\frac{\imath}{\hbar}\underbrace{\left( A \int \mathrm d \omega_s \int \mathrm d \omega_i f(\omega_s, \omega_i) \hat{a}^\dagger(\omega_s) \hat{a}^\dagger(\omega_i) + h.c. \right)}_{\hat{H}_{eff}} \right].
\end{eqnarray} If the joint spectral distribution \(f(\omega_s, \omega_i)\) is engineered to be symmetric under permutation of signal and idler, the Schmidt decomposition is given by: \begin{eqnarray} -\frac{\imath}{\hbar} A f(\omega_s, \omega_i) = \sum_k r_k \phi_k^*(\omega_s) \phi_k^*(\omega_i) \,\,\, \mathrm{and}\\ -\frac{\imath}{\hbar} A^* f^*(\omega_s, \omega_i) = -\sum_k r_k \phi_k(\omega_s) \phi_k(\omega_i) \label{eq:singular_value_decomposition_single_beam} \end{eqnarray} Introducing broadband modes we obtain the multimode broadband unitary transformation \begin{eqnarray} \nonumber \hat{U} &= \exp\left[\sum_k r_k \hat{A}_k^\dagger \hat{A}_k^\dagger - h.c.\right] \\ \nonumber &= \bigotimes_k \exp\left[r_k \hat{A}_k^\dagger \hat{A}_k^\dagger - h.c.\right]\\ &= \bigotimes_k \hat{S}(-r_k). \label{eq:OPA_unitary_single_beam} \end{eqnarray} This is exactly the form of a frequency multimode single-beam squeezed state \cite{barnett_methods_2003}. In the Heisenberg picture, this transformation reads \begin{eqnarray} \hat{A}_k \rightarrow \cosh(r_k) \hat{A}_k + \sinh(r_k) \hat{A}_k^\dagger. \end{eqnarray} Single-beam squeezers are --- like twin-beam squeezers --- widely employed in quantum optics experiments \cite{zhu_photocount_1990, sasaki_multimode_2006}. As in the twin-beam squeezer case the same states are generated by properly engineered FWM processes. \subsection{Probing frequency multimode single-beam squeezers via correlation function measurements} In order to characterize the generated states we have to determine the optical gain \(B\) and mode distribution \(\lambda_k\), as in the case of multimode twin-beam squeezers (see section \ref{sec:probing_twin_beam_squeezing}). Therefore, we adapt the scheme presented in section \ref{sec:probing_twin_beam_squeezing} and probe the correlation functions \(g^{(2)}\) and \(g^{(3)}\) as sketched in figure \ref{fig:squeezer_two_mode_g2_g3_setup}.
\begin{figure} \caption{Schematic setup to measure a) \(g^{(2)}\) and b) \(g^{(3)}\) of a frequency multimode single-beam squeezer.} \label{fig:squeezer_two_mode_g2_g3_setup} \end{figure} For a multimode single-beam squeezer they can be written as: \begin{eqnarray} g^{(2)} =& 1 + 2 \frac{\sum_k \sinh^4(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^2} + \underbrace{\frac{1}{\sum_k \sinh^2(r_k)}}_{1/\left<n\right>} \,\,\,\,\, \mathrm{and}\\ \nonumber g^{(3)} =& 1 + 6 \frac{\sum_k \sinh^4(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^2} + 8 \frac{\sum_k \sinh^6(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^3}\\ & + \frac{3}{\sum_k \sinh^2(r_k)} + 6 \frac{\sum_k \sinh^4(r_k)}{\left[\sum_k \sinh^2(r_k)\right]^3}. \label{eq:mm_sm_squeezer_correlations} \end{eqnarray} In the single-beam case, however, \(g^{(2)}\) does not directly yield the effective number of modes \(K\) or the thermal mode distribution parameter \(\mu\) as for the multimode twin-beam squeezers in equation \eref{eq:gtwo_mm_two_mode_squeezer}. A joint measurement of \(g^{(2)}\) and \(g^{(3)}\) is necessary, as sketched in figure \ref{fig:g3_vs_g2_mm_squeezer_K_mu_dependence}. \begin{figure} \caption{\(g^{(3)}\) as a function of \(g^{(2)}\) for various multimode single-beam squeezers. The effective number of modes and the thermal mode distribution parameter \(\mu\) of a multimode single-beam squeezer are encoded in the slope.} \label{fig:g3_vs_g2_mm_squeezer_K_mu_dependence} \end{figure} Clearly, the effective mode number \(K\) and the thermal mode distribution \(\mu\) are given by the slope \(s\) of \(g^{(3)}\) vs. \(g^{(2)}\). In figure \ref{fig:g3_vs_g2_mm_squeezer_K_mu_analysis} we plot the explicit dependence of \(K\) and \(\mu\) on the slope \(s\). Surprisingly, the functions exhibit almost the same shape as in the twin-beam squeezer case. \begin{figure} \caption{a) Effective mode number \(K\) as a function of the slope of \(g^{(3)}[g^{(2)}]\).
b) Thermal mode distribution \(\mu\) as a function of the slope of \(g^{(3)}[g^{(2)}]\) for multimode single-beam squeezed states.} \label{fig:g3_vs_g2_mm_squeezer_K_mu_analysis} \end{figure} In order to obtain the gain of a multimode single-beam squeezer, a single \(g^{(2)}\)-measurement is sufficient, as it is sensitive to the coupling value \(B\), as presented in figure \ref{fig:g3_vs_g2_mm_squeezer_B} (similar to the \(g^{(1,1)}\)-measurement in the twin-beam squeezer case). In the low gain regime it is given via the relation \(B \approx 1 / \sqrt{g^{(2)}}\). Again, while describing a different system, the shape of the function \(B[g^{(2)}]\) is very similar to the twin-beam squeezer case. \begin{figure} \caption{Optical gain \(B\) as a function of \(g^{(2)}\) for a multimode single-beam squeezed state.} \label{fig:g3_vs_g2_mm_squeezer_B} \end{figure} In total, the theoretical description and derivation of multimode single-beam squeezers is very similar to the mathematics behind multimode twin-beam states. These similarities translate to the multimode correlation functions, which are able to probe the generated optical gain \(B\) and mode distribution \(\lambda_k\) as in the twin-beam case. \end{appendix} \end{document}
\begin{document} \begin{abstract} We develop an elementary theory of partially additive rings as a foundation of ${\mathbb F}_1$-geometry. Our approach is so concrete that an analog of classical algebraic geometry is established very straightforwardly. As applications, (1) we construct a kind of group scheme ${{\mathbb{G}\mathbb{L}}}_n$ whose value at a commutative ring $R$ is the group of $n\times n$ invertible matrices over $R$ and at ${\mathbb F}_1$ is the $n$-th symmetric group, and (2) we construct a projective space $\mathbb P^n$ as a kind of scheme and count the number of points of ${\mathbb P}^n({\mathbb F}_q)$ for $q=1$ or $q$ a power of a rational prime; we then explain the reason for the number 1 in the subscript of ${\mathbb F}_1$ even though it has two elements. \end{abstract} \title{Partially additive rings and group schemes over $\FF_1$} \tableofcontents \section{Introduction} It seems that the notion of a so-called field with one element was first proposed by J. Tits\cite{tits-sur-les-analogues-algebriques}. There have been many attempts to define an algebraic geometry over $\FF_1.$ We start from a partially additive algebraic system, partial monoids. We impose strict associativity on partial monoids, in the sense of G. Segal\cite{segal-configuration-spaces-and}, who used the topological version of this structure in the study of configuration spaces. Then we define a partial ring as a partial monoid equipped with a binary commutative, associative multiplication with unity. Our approach is so concrete that we can establish an analog of the classical theory of schemes based on commutative rings with unity and define so-called partial schemes, which are locally partial-ringed spaces that are isomorphic to an affine partial scheme in an open neighborhood of each point. 
Among others, rather concrete constructions of $\FF_1$-geometry from (partial) algebraic systems are those of Deitmar\cite{deitmar-schemes-over-f1}, Deitmar\cite{deitmar-congruence-schemes} and Lorscheid\cite{lorscheid-the-geometry-of-1}. In particular, the latter two treat partially additive algebraic systems, so our approach most closely resembles theirs. Indeed, the category of partial rings is embedded in the category of blueprints. The main outcome of our construction is that we can describe a group valued functor ${\mathbb{G}\mathbb{L}}_n$ by a partial group object in the category of partial rings, which means that we have constructed an affine partial group partial scheme. When this functor is applied to good partial rings, it takes values in the category of groups. For example, if $A$ is a commutative ring with unity, then ${\mathbb{G}\mathbb{L}}_n(A)$ is nothing but the general linear group of $n\times n$ matrices, and if $A=\FF_1, $ then ${\mathbb{G}\mathbb{L}}_n(\FF_1) =\mathfrak S_n$ is the $n$-th symmetric group. Another modest outcome of our approach is that we have an explanation of why the number 1 appears in the notation of our field even though it has two elements. Namely, we say that the number is there since we have only one element which can be added to 1 in the field $\FF_1 = \set{0,1},$ while in the usual finite field $\mathbb{F}_q,$ there are $q$ such elements. (See Example \ref{ex:projective}.) In this paper, $\mathbb{N}$ denotes the set of non-negative integers. {\it Acknowledgement.} Conversations with Bastiaan Cnossen about tensor products of partial monoids were very useful for the author. Indeed, the associative closure and the tensor product which appear in this article are contained in his argument \cite{bastiaan-cnossen-master-thesis}, explicitly or implicitly. Yoshifumi Tsuchimoto suggested to the author that the search for a group scheme in $\FF_1$-geometry is important. 
Bastiaan Cnossen, Katsuhiko Kuribayashi, Makoto Masumoto, Shuhei Masumoto, Jun-ichi Matsuzawa, Kazunori Nakamoto and Yasuhiro Omoda gave valuable comments on previous versions of this article. The author is grateful to these people. \section{Additive Part} \subsection{Partial Magmas and Monoids} \begin{defn}[partial magma and partial monoid] A (commutative and unital) {\bf partial magma} is \begin{enumerate} \item a set $A,$ with a distinguished element $0,$ \item a subset $A_2 $ of $A\times A,$ \item a map $+\colon A_2 \to A,$ \end{enumerate} such that \begin{enumerate}[label= (\alph*)] \item $(0,a)\in A_2, (a,0) \in A_2$ and $a+0 = a = 0+a,$ for all $a\in A,$ \item if $(a,b) \in A_2$ then $(b,a) \in A_2$ and $a+b = b+a,$ for all $a, b \in A.$ \end{enumerate} If, moreover, it satisfies the following condition, we say that it is a (commutative and unital) {\bf partial monoid} : \begin{enumerate}[label= (\alph*)] \setcounter{enumi}{2} \item $(a,b), (a+b,c) \in A_2$ if and only if $(b,c), (a,b+c)\in A_2$ for all $a, b, c \in A$ and in such a case, $(a+b)+c = a+(b+c).$ \end{enumerate} \end{defn} If, instead, a partial magma satisfies the following condition, we say that it is a (commutative and unital) {\bf weak partial monoid} (or weakly associative partial magma) : \begin{enumerate}[label= (\alph*)] \setcounter{enumi}{3} \item If $a_1 + \dots + a_r$ can be calculated in $A$ under a supplement of parentheses in two or more ways, then the results are equal, for all $r\in \mathbb{N}$ and $a_1, \dots, a_r \in A.$ \end{enumerate} \begin{defn} Let $A$ and $B$ be partial magmas. A map $f\colon A\to B$ is called a {\bf homomorphism} if $f(0) = 0, (f\times f)(A_2) \subseteq B_2$ and $f(a_1 + a_2) = f(a_1) + f(a_2)$ for all $(a_1, a_2) \in A_2.$ If $A$ and $B$ are partial monoids, a map $A\to B$ is a {\bf homomorphism} if it is a homomorphism of partial magmas. 
The categories of partial magmas and of partial monoids are denoted by ${\mathcal{P\!M}ag}$ and ${\mathcal{P\!M}on},$ respectively. \end{defn} \begin{example} A partial magma of order 1 is isomorphic to $0 = \Set{0},$ which is a partial monoid. A partial magma of order 2 is isomorphic to one of the following : \begin{enumerate} \item $\mathbb{F}_1 = \Set{0,1}$ where $1+1$ is undefined, \item $\mathbb{F}_2 = \Set{0,1}$ where $1+1 = 0$ and \item $\mathbb{B} = \Set{0,1}$ where $1+1 = 1.$ \end{enumerate} These are also partial monoids. \end{example} \begin{example} Any based set $(X,0)$ can be given a partial monoid structure by putting $X_2 = \Set{0} \times X \cup X\times \Set{0}.$ A homomorphism between such partial monoids is nothing but a based map between the given based sets. \end{example} \begin{example} Any abelian monoid $M$ can be given a partial monoid structure by putting $M_2 = M\times M.$ A homomorphism between such partial monoids is nothing but a homomorphism between the given abelian monoids. \end{example} These examples show that we can embed the category of based sets ${\mathcal{S}et}_0$ and that of abelian monoids ${\mathcal{A}b\mathcal{M}on}$ into ${\mathcal{P\!M}on}.$ In this paper, based sets and abelian monoids (and abelian groups) are always considered as partial monoids unless otherwise specified. It is easily shown that a homomorphism $f\colon A\to B$ is a monomorphism in the category of partial monoids if and only if it is an injective map of underlying sets. When $A$ and $B$ are partial magmas, we say that $A$ is contained in $B$ to mean that $A$ is a partial submagma of $B.$ Remark that a homomorphism $f\colon A\to B$ is an epimorphism if it is a surjective map of underlying sets, but the converse is false. For example, the map $\FF_1 \to \mathbb{N}$ determined by $1\mapsto 1$ is a monomorphism and an epimorphism, but not an isomorphism. 
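For instance, the homomorphism condition can be checked directly on the order-2 partial magmas listed above; the following verification is immediate from the definitions:

```latex
\begin{example}
Consider the identity map on the underlying set $\Set{0,1}.$
Both $\FF_1 \to \mathbb{F}_2$ and $\FF_1 \to \mathbb{B}$ are
homomorphisms, since $(\FF_1)_2$ consists only of pairs containing $0,$
and such pairs are summable with the same sum in any partial magma.
On the other hand, $f\colon \mathbb{F}_2 \to \mathbb{B}$ is not a
homomorphism: $(1,1) \in (\mathbb{F}_2)_2$ and $(1,1) \in \mathbb{B}_2,$
but $f(1+1) = f(0) = 0$ while $f(1) + f(1) = 1 + 1 = 1.$
\end{example}
```

Similarly, the identity maps $\mathbb{B}\to\mathbb{F}_2,$ $\mathbb{F}_2\to\FF_1$ and $\mathbb{B}\to\FF_1$ fail to be homomorphisms (the latter two because $(1,1)$ is not summable in $\FF_1$); among the order-2 partial magmas, only the maps out of $\FF_1$ are homomorphisms.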
\subsection{Monoid completion} In this section, we show that for any partial magma $A,$ there exists a monoid $A_{mon}$ and a homomorphism $\mu\colon A\to A_{mon}$ which is universal among the homomorphisms from $A$ to abelian monoids. Let $A$ be a partial magma and $\mathbb{N}[A]$ be the free abelian monoid generated by the underlying set of $A.$ More precisely, \[ \mathbb{N}[A] = \Set{ a_1 \dotplus \dots \dotplus a_r | r\in \mathbb{N}, a_i \in A\,(1\leq i\leq r)}, \] where the empty sum is the unit. By the injective homomorphism $A \to \mathbb{N}[A]~;~a\mapsto a$ of partial magmas, we regard $A$ as a partial submagma of $\mathbb{N}[A].$ Let $\sim$ be the equivalence relation on $\mathbb{N}[A]$ generated by $0\dotplus x\sim x$ and $(a_1+a_2) \dotplus x \sim a_1\dotplus a_2\dotplus x,$ where $x$ is any element of $\mathbb{N}[A]$ and $(a_1, a_2) \in A_2.$ We put $A_{mon} = \mathbb{N}[A]/\sim.$ Since $\sim$ is an additive equivalence relation, $A_{mon}$ has a monoid structure such that the projection $\mathbb{N}[A]\to A_{mon}$ is a homomorphism of monoids. The composite $A\to \mathbb{N}[A] \to A_{mon}$ is denoted by $\mu\colon A\to A_{mon}.$ \begin{prop} Let $A$ be a partial magma and $f\colon A\to B$ be a homomorphism to an abelian monoid $B.$ Then there exists a unique homomorphism $f_{mon}\colon A_{mon} \to B$ such that $f_{mon}\circ \mu = f.$ \end{prop} \begin{proof} We put $f_{mon}([a_1\dotplus \dots \dotplus a_r]) = f(a_1) + \dots + f(a_r),$ then we have a homomorphism $f_{mon} \colon A_{mon} \to B$ such that $f_{mon}\circ \mu = f,$ which is unique. \end{proof} Remark that $\mu$ is not necessarily a monomorphism. 
Indeed, $\mu(A)$ has a natural structure of partial submagma of $A_{mon}$ which is weakly associative and is denoted by $A_{wass}.$ \subsection{Associative closure} In this section, we show that for any partial magma $A,$ there exists a partial monoid $A_{ass}$ and a homomorphism $\alpha\colon A\to A_{ass}$ which is universal among the homomorphisms from $A$ to partial monoids. Let $A$ be a partial magma and $\Set{B^{(\lambda)}}$ be a family of partial submagmas of $A.$ If we put \[ B = \cap_\lambda B^{(\lambda)} \mbox{~and~} B_2 = \cap_\lambda B^{(\lambda)}_2 \] then $B$ is the largest partial submagma of $A$ which is contained in every $B^{(\lambda)}.$ If all the $B^{(\lambda)}$'s are partial submonoids of $A,$ then $B$ is also a partial submonoid of $A.$ On the other hand, let $A$ be a partial magma and $\Set{B^{(\lambda)}}$ be a family of partial submagmas of $A$ which is totally ordered by containment. If we put \[ B = \cup_\lambda B^{(\lambda)} \mbox{~and~} B_2 = \cup_\lambda B^{(\lambda)}_2 \] then $B$ is the smallest partial submagma of $A$ which contains every $B^{(\lambda)}.$ 
If all the $B^{(\lambda)}$'s are partial submonoids of $A,$ then $B$ is also a partial submonoid of $A.$ Let $A$ be a partial monoid and $B$ be a partial submagma of $A.$ We define $B_{ass, A}$ to be the smallest partial submonoid of $A$ which contains $B.$ We call $B_{ass,A}$ the associative closure of $B$ in $A.$ $B_{ass, A}$ can be constructed inductively as follows: Put $B^{(0)} = B$ and $B^{(0)}_2 = B_2.$ Suppose we have constructed a partial submagma $B^{(n-1)}\subseteq A.$ Consider the condition \[ (*)~~~(a+b) + c \mbox{~can be calculated in~} B^{(n-1)} \] for a triple $(a,b,c) \in B^{(n-1)}\times B^{(n-1)}\times B^{(n-1)}.$ If we put \begin{align*} B^{(n)} &= B^{(n-1)} \cup \Set{ b+c | (a,b,c) \mbox{~satisfies~}(*) }\mbox{~and~}\\ B^{(n)}_2 &= B^{(n-1)}_2\cup (\Set{0}\times B^{(n-1)} )\cup (B^{(n-1)}\times \Set{0} )\\ &\phantom{=} \cup \Set{ (b,c), (c,b), (a,b+c), (b+c, a) | (a,b,c)\mbox{~satisfies~}(*)}, \end{align*} then $B^{(n)}$ has a natural structure of partial submagma of $A.$ Constructing $B^{(n)}, n=0,1,2,\dots $ inductively, we put \[ B' = \cup_{n\geq 0} B^{(n)} \mbox{~and~} B'_2 = \cup_{n\geq 0} B^{(n)}_2. \] Now, $B'$ is the smallest partial submonoid of $A$ which contains $B,$ so $B' = B_{ass, A}.$ For any partial magma $A,$ we define $A_{ass}$ to be the smallest partial submonoid of $A_{mon}$ which contains $A_{wass}.$ The composite $A\to A_{wass} \to A_{ass}$ is denoted by $\alpha.$ \begin{prop} Let $A$ be a partial magma and $f\colon A\to B$ be a homomorphism to a partial monoid $B.$ Then there exists a unique homomorphism $f_{ass}\colon A_{ass} \to B$ such that $f_{ass}\circ \alpha = f.$ \end{prop} \subsection{Limits and colimits in ${\mathcal{P\!M}ag}$ and ${\mathcal{P\!M}on}$} \label{sec:pmag_pmon_complete_cocomplete} It is easily checked that ${\mathcal{P\!M}ag}$ has all small limits and all small colimits. 
It is also easily checked that in ${\mathcal{P\!M}on},$ all limits in ${\mathcal{P\!M}ag}$ are also limits in ${\mathcal{P\!M}on},$ so ${\mathcal{P\!M}on}$ has all small limits. If we construct a colimit in ${\mathcal{P\!M}ag}$ from a given diagram in ${\mathcal{P\!M}on},$ then it need not be a colimit in ${\mathcal{P\!M}on},$ but composing with the functor which takes the associative closure makes it a colimit in ${\mathcal{P\!M}on}.$ This shows that ${\mathcal{P\!M}on}$ has all small colimits. \subsection{Tensor product and $\mathop{\mathrm{Hom}}\nolimits$} In this section, we define the tensor product and $\mathop{\mathrm{Hom}}\nolimits.$ Propositions are given without proof since each of them can be proved by a formal argument. Let $A, B$ be partial monoids and $\mathbb{N}[A\times B]$ be the free abelian monoid generated by the set $A\times B.$ Let $\sim$ be the equivalence relation on $\mathbb{N}[A\times B]$ generated by \begin{enumerate} \item $(0, b) \dotplus x \sim x \sim (a,0)\dotplus x$ for all $a\in A, b\in B$ and $x\in \mathbb{N}[A\times B],$ \item $(a_1, b) \dotplus (a_2, b) \dotplus x \sim (a_1+a_2, b) \dotplus x$ for all $x\in \mathbb{N}[A\times B], b\in B$ and $(a_1, a_2) \in A_2,$ \item $(a, b_1) \dotplus (a, b_2) \dotplus x \sim (a, b_1+b_2) \dotplus x$ for all $x\in \mathbb{N}[A\times B], a\in A$ and $(b_1, b_2) \in B_2.$ \end{enumerate} We put $T(A, B) = \mathbb{N}[A\times B]/ \sim.$ Then $T(A,B)$ has an abelian monoid structure such that $\pi\colon \mathbb{N}[A\times B] \to T(A,B)$ is a homomorphism of monoids. An element of $T(A,B)$ represented by $(a,b) \in A\times B$ is denoted by $a\otimes b.$ We give $\pi(A\times B) \subseteq T(A,B)$ the maximal partial magma structure. \begin{defn}[tensor product] Let $A$ and $B$ be partial monoids. 
The associative closure of $\pi(A\times B)$ is denoted by $A\otimes B$ and is called the {\bf tensor product} of $A$ and $B.$ \end{defn} \begin{defn}[bilinear map] We say that a map $f\colon A\times B \to C$ is bilinear if for each $a\in A$ the map $B\to C,$ given by $b\mapsto f(a,b)$ is a partial monoid homomorphism and for each $b\in B$ the map $A\to C$ given by $a\mapsto f(a,b)$ is a partial monoid homomorphism. \end{defn} \begin{prop} Let $A, B, C$ be partial monoids and $f\colon A\times B \to C$ a bilinear map. Then there exists a unique partial monoid homomorphism $\tilde{f}\colon A\otimes B \to C$ which makes the following diagram commute: \begin{center} \begin{tikzcd} A\times B \ar{r}{f}\ar{d} & C\\ A\otimes B\ar{ru}[below]{\tilde{f}}, \end{tikzcd} \end{center} where the vertical map is the canonical map.\qed \end{prop} For partial magmas $A,B$ we would like to define \begin{align*} \mathop{\mathrm{Hom}}\nolimits_{\mathcal{P\!M}ag} (A, B) &= \Set{ f\colon A\to B | f : \mbox{homomorphism} }\\ \mathop{\mathrm{Hom}}\nolimits_{\mathcal{P\!M}ag} (A, B)_2 &= \Set{ (f,g) | (f(a), g(a)) \in B_2 \mbox{~for all~} a\in A }. \end{align*} But the formula $(f+g)( a ) = f(a) + g(a)$ does not define a homomorphism $f+g \colon A\to B$ unless $B$ is associative. If we assume that $B$ is a partial monoid, then $\mathop{\mathrm{Hom}}\nolimits_{\mathcal{P\!M}ag} (A,B)$ given above is a partial monoid. Thus we have a functor $\mathop{\mathrm{Hom}}\nolimits_{\mathcal{P\!M}ag}( -, - ) \colon {\mathcal{P\!M}ag}^{op}\times {\mathcal{P\!M}on} \to {\mathcal{P\!M}on}.$ We also have a functor $\mathop{\mathrm{Hom}}\nolimits_{\mathcal{P\!M}on}( -, - ) \colon {\mathcal{P\!M}on}^{op}\times {\mathcal{P\!M}on} \to {\mathcal{P\!M}on}.$ \begin{prop}\label{prop:bilinear_pmag_pmon} Let $A$ and $B$ be partial magmas and $C$ be a partial monoid. 
If $f \colon A\times B\to C$ is a bilinear map, then there exists a unique bilinear map $\tilde{f}\colon A_{ass}\times B_{ass} \to C$ which makes the following diagram commute: \begin{center} \begin{tikzcd} A\times B \ar{r}{f}\ar{d} & C\\ A_{ass}\times B_{ass}\ar{ru}[below]{\tilde{f}}. \end{tikzcd} \end{center}\qed \end{prop} \begin{prop} Let $A$ be a partial monoid, then we have an adjoint pair of functors \[ -\otimes A \colon {\mathcal{P\!M}on} \leftrightarrows {\mathcal{P\!M}on} : \mathop{\mathrm{Hom}}\nolimits(A, -). \] \qed \end{prop} \begin{prop} $({\mathcal{P\!M}on}, \otimes , \FF_1)$ constitutes a symmetric monoidal category. \qed \end{prop} \subsection{Equivalence relations} For a detailed discussion on equivalence relations, see \S 2.5 of \cite{borceux-handbook-2} or \cite{nlab-congruences}. As opposed to the terminology in \cite{nlab-congruences}, we reserve the word ``congruence'' for effective equivalence relations on partial rings, which appear in a later section of this paper. Let $A$ be a partial magma and $R$ be an equivalence relation on $A$ in ${\mathcal{P\!M}ag}.$ Thus $R$ is a partial submagma of $A\times A$ such that for any partial magma $X,$ \[ \mathop{\mathrm{Hom}}\nolimits_{{\mathcal{P\!M}ag}} (X, R) \subseteq \mathop{\mathrm{Hom}}\nolimits_{{\mathcal{P\!M}ag}} (X, A\times A) \cong \mathop{\mathrm{Hom}}\nolimits_{{\mathcal{P\!M}ag}}(X, A)\times \mathop{\mathrm{Hom}}\nolimits_{{\mathcal{P\!M}ag}}(X, A) \] is an equivalence relation on the set $\mathop{\mathrm{Hom}}\nolimits_{{\mathcal{P\!M}ag}} (X,A).$ Recall that $R$ is said to be effective if $R\rightrightarrows A$ is a kernel pair of some $f\colon A\to B,$ {\it i.e.} the following diagram is a pullback diagram: \[ \begin{tikzcd} R\ar{r}\ar{d} & A\ar{d}{f}\\ A\ar{r}[below]{f} & B. \end{tikzcd} \] \begin{defn} Let $A$ be a partial magma. 
We say that an equivalence relation $R$ on $A$ is {\bf additive} if $R_2 = (R\times R) \cap A_2.$ \end{defn} If $R$ is an additive equivalence relation on a partial magma $A,$ then we can define a partial magma by \begin{align*} &A/\!\!/ R = (\mbox{~the quotient set of~} A\mbox{~by~} R~),\\ &(A/\!\!/ R)_2 = \Set{ ([a_1], [a_2] ) | (a_1, a_2) \in A_2 },\\ &[a_1]+[a_2] = [a_1+a_2]. \end{align*} It is easily checked that the following diagram is a coequalizer diagram in ${\mathcal{P\!M}ag}:$ \begin{center} \begin{tikzcd} R \ar[shift left = 0.3em]{r}\ar[shift right = 0.3em]{r}& A \ar{r} & A/\!\!/ R. \end{tikzcd} \end{center} \begin{prop}\label{prop:effective-additive-pmag} An equivalence relation $R$ on a partial magma $A$ is effective if and only if it is additive. \end{prop} \begin{proof} Suppose that $R$ is effective. Then $R\rightrightarrows A$ is a kernel pair of some $f\colon A\to B.$ Let $C=\Set{ 0, a, b, a+b}$ be a partial monoid in which $a+b$ is the only non-trivial sum. If $a_1Ra_2, b_1Rb_2$ and $(a_1, b_1), (a_2, b_2) \in A_2$, we define $h_i\colon C\to A$ by $h_i(a) = a_i, h_i(b) = b_i\,(i=1,2).$ Since $f\circ h_1 = f\circ h_2$ we have a morphism $h\colon C\to R$ such that $h(a) = (a_1, a_2)$ and $h(b) = (b_1, b_2)$ by universality of the pullback. Then $((a_1, a_2) , (b_1, b_2))\in R_2,$ which means that $R$ is additive. Conversely, suppose that $R$ is additive. Let $\pi \colon A\to A/\!\!/ R$ be the projection morphism. To show that the diagram \[ \begin{tikzcd} R\ar{r}\ar{d} & A\ar{d}{\pi}\\ A\ar{r}[below]{\pi} & A/\!\!/ R \end{tikzcd} \] is a pullback diagram, let $g_1, g_2\colon B\to A$ be two morphisms such that $\pi\circ g_1 = \pi\circ g_2.$ It is easily checked that $(g_1,g_2) \colon B\to A\times A$ factors through a morphism $B\to R$ since $R$ is additive. This proves that $R$ is effective. 
\end{proof} Let $A$ be a partial monoid and $R$ be an additive equivalence relation on $A.$ Let $A/R = (A/\!\!/ R)_{ass}$ be the associative closure of $A/\!\!/ R.$ Then it is easily checked that the following diagram is a coequalizer diagram in ${\mathcal{P\!M}on}:$ \begin{center} \begin{tikzcd} R \ar[shift left = 0.3em]{r}\ar[shift right = 0.3em]{r}& A \ar{r} & A/R. \end{tikzcd} \end{center} \begin{prop} An equivalence relation $R$ on a partial monoid $A$ is effective if and only if it is additive and the canonical map $\alpha \colon A/\!\!/ R \to A/R$ is injective. \end{prop} \begin{proof} Suppose that $R$ is effective. Then we can show that $R$ is additive by the same argument as in the proof of Proposition \ref{prop:effective-additive-pmag}. It follows that $R\rightrightarrows A$ is the kernel pair of the canonical morphism $\pi \colon A\to A/R.$ We need to show that $\alpha$ is injective. Assume, on the contrary, that $\alpha$ is not injective and there exist $a_1, a_2 \in A$ such that $[a_1] \neq [a_2]$ in $A/\!\!/ R$ but $\alpha([a_1]) = \alpha([a_2]).$ Let $f_i \colon \FF_1 \to A$ be the morphism given by $f_i (1) = a_i\, (i=1,2).$ Since $R\rightrightarrows A$ is the kernel pair of $\pi \colon A\to A/R$ and $\pi\circ f_1 = \pi\circ f_2,$ there exists a morphism $f\colon \FF_1 \to R$ such that $f(1) = (f_1(1), f_2(1)) = (a_1, a_2).$ Then $(a_1, a_2) \in R,$ so $[a_1] = [a_2]$ in $A/\!\!/ R,$ a contradiction. This proves that $\alpha$ is injective. Conversely, suppose that $R$ is additive and the canonical map $\alpha \colon A/\!\!/ R \to A/R$ is injective. To show that $R\rightrightarrows A$ is the kernel pair of $\pi\colon A\to A/R,$ let $f_1, f_2\colon B\to A$ be morphisms such that $\pi \circ f_1 = \pi \circ f_2.$ If $\pi'\colon A\to A/\!\!/ R$ denotes the canonical morphism, then $\pi'\circ f_1 = \pi'\circ f_2,$ since $\alpha$ is injective. 
Since $R\rightrightarrows A$ is the kernel pair of $\pi'$ in ${\mathcal{P\!M}ag},$ we have a unique morphism $f\colon B\to R$ such that $f = (f_1, f_2)$ in ${\mathcal{P\!M}ag}.$ Since $B$ and $R$ are partial monoids, $f$ is a morphism in ${\mathcal{P\!M}on},$ which is uniquely determined. \end{proof} \begin{cor} Let $R$ be an additive equivalence relation on a partial monoid $A.$ Then $R$ is effective if and only if the following condition holds: \[ \mbox{(Condition)} \left\{ \begin{array}{l} \mbox{~if~} aRa', bRb', cRc', (a,b) \in A_2, (b',c')\in A_2,\\ (a+b)Rx, (b'+c')Rx', (x,c) \in A_2 \mbox{~and~} (a',x') \in A_2\\ \mbox{~then~}(x+c) R (a'+x'). \end{array}\right. \] \end{cor} \begin{cor} If $\Set{R_\lambda}$ is a family of effective equivalence relations on a partial monoid $A,$ then $\cap_\lambda R_{\lambda}$ is an effective equivalence relation on $A.$ \end{cor} The next theorem is not used in the following part of this paper. \begin{thm} The category of partial magmas is regular. \end{thm} \begin{proof} As noted in \S\ref{sec:pmag_pmon_complete_cocomplete}, ${\mathcal{P\!M}ag}$ is complete and cocomplete. To show that a pullback of a regular epimorphism is a regular epimorphism, let $f\colon A\to B$ be a regular epimorphism. If $K$ is the kernel pair of $f,$ then it is readily checked that $B \cong A/\!\!/ K.$ So we may assume that $B = A/\!\!/ K$ and $f=\pi \colon A\to A/\!\!/ K.$ Let \[ \begin{tikzcd} P\ar{r}\ar{d}{h} & A \ar{d}{\pi}\\ C \ar{r}{g} & A/\!\!/ K \end{tikzcd} \] be the pullback of $\pi$ along a morphism $g\colon C\to A/\!\!/ K.$ Since $\pi$ is a surjective map, so is $h.$ Let $L$ be the kernel pair of $h.$ We show that the diagram \[ \begin{tikzcd} L \ar[shift left = 0.3em]{r}{l_1}\ar[shift right = 0.3em]{r}[below]{l_2}& P \ar{r} & C. \end{tikzcd} \] is a coequalizer diagram. 
For that purpose, suppose that there exists a morphism $m\colon P\to D$ such that $m\circ l_1 = m\circ l_2.$ Since $h$ is surjective, we have a map $\tilde{m}\colon C\to D$ of sets which satisfies $\tilde{m} \circ h= m.$ To show that $\tilde{m}$ is a homomorphism of partial magmas, suppose $(c_1, c_2) \in C_2.$ Then $(g(c_1), g(c_2)) \in (A/\!\!/ K)_2.$ So we can take a summable pair $(a_1, a_2) \in A_2$ such that $(\pi(a_1), \pi(a_2)) =(g(c_1), g(c_2)).$ Thus, $((a_1, c_1) , (a_2, c_2)) \in P_2,$ which implies that $(\tilde{m}(c_1), \tilde{m}(c_2)) = (m(a_1, c_1) , m(a_2, c_2)) \in D_2.$ Therefore $\tilde{m}$ is the unique homomorphism which satisfies $\tilde{m} \circ h= m.$ \end{proof} \section{Partial Rings} \subsection{Partial Rings} \begin{defn}[partial ring] A {\bf partial ring} is a partial monoid $A$ with a bilinear operation $\cdot,$ called multiplication, which is associative, commutative, and has a unit $1.$ Here, bilinearity of $\cdot$ is, by definition, equivalent to the following condition: \begin{enumerate} \item $0\cdot a = 0$ for all $a\in A$ and \item $(a_1, a_2) \in A_2$ implies $(a_1\cdot x, a_2\cdot x) \in A_2$ and $a_1\cdot x + a_2\cdot x = (a_1+a_2)\cdot x$ for all $x\in A.$ \end{enumerate} \end{defn} \begin{defn} Let $A$ and $B$ be partial rings. 
A map $f\colon A\to B$ is a homomorphism of partial rings if it is a homomorphism of underlying partial monoids and satisfies $f(1) = 1$ and $f(ab) = f(a)f(b)$ for all $a, b\in A.$ The category of partial rings is denoted by ${\mathcal{P\!R}ing}.$ \end{defn} \begin{rem} A partial ring is nothing but a commutative monoid object in the symmetric monoidal category $({\mathcal{P\!M}on}, \otimes , \FF_1).$ \end{rem} \begin{example} A partial ring of order 2 is isomorphic to one of $\FF_1, \mathbb{F}_2$ and $\mathbb{B}.$ \end{example} \subsection{A partial ring given by generators and a summability list} In this section let $n$ be a fixed positive integer and $N=\mathbb{N}[x_1,\dots, x_n]$ be the set of polynomials in the indeterminates $x_1, \dots, x_n$ with coefficients in $\mathbb{N}.$ Let $S = \Set{s_1, \dots, s_r }$ be a subset of $N.$ Then let \[ \FF_1 \angles{x_1,\dots, x_n | \exists s_1, \dots, s_r } \] denote the smallest partial subring of $N$ which contains every subsum of one of $s_1, \dots , s_r,$ and in which every pair $(a,b)$ such that $a+b$ is a subsum of one of $s_1, \dots , s_r$ is summable. If $c_1, \dots, c_n$ are elements of a partial ring $A,$ and $s \in N,$ then $s(c_1, \dots, c_n) \in A$ denotes the value of the polynomial $s$ as usual. Let $s = m_1 + \dots + m_r$ be the unique factorization of $s$ into a sum of monomials with coefficient 1. 
We say that $s(c_1, \dots, c_n)$ {\bf can be calculated in $A$} if the sum $m_{i_1}(c_1, \dots, c_n) + \dots + m_{i_s}(c_1, \dots, c_n)$ can be calculated in $A$ for all subsets $\set{i_1,\dots , i_s} \subseteq \set{1, \dots, r}.$ \begin{prop}\label{prop:pring_hom_by_generators} Let $c_1, \dots, c_n$ be elements of a partial ring $A$ and $S = \Set{s_1, \dots, s_r }$ be a subset of $N.$ If $s_i (c_1, \dots, c_n)$ can be calculated in $A$ for all $i=1,\dots , r, $ then there exists a unique partial ring homomorphism \[ \varphi \colon \FF_1 \angles{x_1,\dots, x_n | \exists s_1, \dots, s_r }\to A \] such that $\varphi(x_i) = c_i\,(i=1,\dots, n).$ \end{prop} The remainder of this section is devoted to a proof of the above proposition. Let $W$ be a partial submagma of the underlying partial monoid of $N.$ We say that an element $w\in W$ {\bf is factorized in $W$} if the unique factorization $w = m_1 + \dots + m_r$ in $N$ can be calculated in $W$ under some appropriate reordering and supplement of parentheses. We say that $W$ {\bf has the factorization property} if every element of $W$ is factorized in $W.$ \begin{lem} Let $B \subseteq N$ be a partial submagma of the underlying partial monoid of $N.$ If $B$ has the factorization property, then so does its associative closure $B_{ass, N}$ in $N.$ \end{lem} \begin{proof} Recalling the inductive construction of the associative closure, it is sufficient to show that if $B^{(n-1)}$ has the factorization property then so does $B^{(n)}.$ So assume that $B^{(n-1)}$ has the factorization property. New elements in $B^{(n)}$ are of the form $b+c$ where there exists $a\in B^{(n-1)}$ such that $(a+b)+c$ can be calculated in $B^{(n-1)}.$ Let $b= m_1 + \dots + m_r$ and $c = m'_1 + \dots + m'_s$ be the unique factorizations of $b$ and $c$ in $N.$ By assumption, there exists a way to supply parentheses to these formulas so that they can be calculated in $B^{(n-1)}.$ Using these supplements of parentheses, $b+c$ can be calculated in $B^{(n)}.$ Thus $B^{(n)}$ has the factorization property. 
\end{proof} \begin{lem}\label{lem:inc_seq_of_submonoids_unique_factorization} There exists an increasing sequence $X^{(0)}\subseteq X^{(1)}\subseteq \dots$ of partial submonoids of $N$ such that \begin{enumerate} \item $X^{(i)}$ has the factorization property for all $i$, \item if $a,b \in X^{(i)}$ then $ab \in X^{(i+1)}$ for all $i$ and \item $\FF_1 \angles{x_1,\dots, x_n | \exists s_1, \dots, s_r } = \cup_{i\geq 0} X^{(i)}$ as a partial monoid. \end{enumerate} \end{lem} \begin{proof} If we put \begin{align*} Y^{(0)} &= \Set{ 0, 1 } \cup \Set{ s | s \mbox{~is a subsum of some~}s_i \in S},\\ (Y^{(0)})_2 &= \Set{ (a,b) | a+b \mbox{~is a subsum of some~}s_i \in S} \end{align*} then this is a partial submagma of $N,$ which has the factorization property. Let $X^{(0)} = (Y^{(0)})_{ass, N}$ be its associative closure in $N.$ By the previous lemma, $X^{(0)}$ has the factorization property. Suppose that we have constructed $X^{(0)}, \dots, X^{(k-1)}$ such that $X^{(i)}$ has the factorization property for $i=0,\dots, k-1$ and if $a,b \in X^{(i)}$ then $ab \in X^{(i+1)}$ for all $i = 0, \dots, k-2.$ Then we put \begin{align*} Y^{(k)} &= X^{(k-1)}\cup \Set{ ab | a, b \in X^{(k-1)} },\\ (Y^{(k)})_2 &= (X^{(k-1)})_2 \cup \Set{ (a_1 b, a_2 b) | (a_1, a_2) \in (X^{(k-1)})_2 , b\in X^{(k-1)} }. \end{align*} $Y^{(k)}$ is a partial submagma of $N.$ To show that $Y^{(k)}$ has the factorization property, let $a= m_1 + \dots + m_r$ and $b = m'_1 + \dots + m'_s$ be the unique factorizations of $a$ and $b$ in $N.$ By assumption, there exists a way to supply parentheses to these formulas so that they can be calculated in $X^{(k-1)}.$ Then the unique factorization of $ab$ is given by $ab = \sum m_i m'_j.$ Now, supply parentheses to $ab = m_1 b + \dots + m_r b$ in the way that we supplied parentheses to the formula $a = m_1 + \dots + m_r.$ This formula can be calculated in $Y^{(k)},$ once $b$ is calculated. 
So we supply parentheses to each $b = m'_1 + \dots + m'_s$ so that it is calculated in $Y^{(k)}.$ Thus $Y^{(k)}$ has the factorization property. Let $X^{(k)} = (Y^{(k)})_{ass, N}$ be its associative closure in $N.$ By the previous lemma, $X^{(k)}$ has the factorization property. We have constructed an increasing sequence $X^{(0)}\subseteq X^{(1)}\subseteq \dots$ of partial submonoids of $N$ which satisfies (1) and (2). Now it is clear that it satisfies (3). \end{proof} \begin{proof}[Proof of Proposition \ref{prop:pring_hom_by_generators}] Let $Y^{(k)}$ and $X^{(k)}$ be as in (the proof of) Lemma \ref{lem:inc_seq_of_submonoids_unique_factorization}. The assumption of the proposition that $s_i(c_1, \dots, c_n)$ can be calculated in $A$ for all $i$ is equivalent to the condition that a partial magma homomorphism $\varphi\colon Y^{(0)} \to A$ can be defined by $\varphi(x_i) = c_i.$ Suppose we have shown that $\varphi$ extends to a partial magma homomorphism $\varphi \colon Y^{(k-1)} \to A.$ If $(a + b) + c$ can be calculated in $Y^{(k-1)},$ then $(\varphi(b) , \varphi(c)) \in A_2,$ since $A$ is a partial monoid. If we put $\varphi(b+c) = \varphi(b)+\varphi(c),$ then this is well-defined since $Y^{(k-1)}$ has the factorization property. In this way, we can extend $\varphi$ uniquely to a partial monoid homomorphism $\varphi \colon X^{(k-1)}\to A.$ Let $a, b \in X^{(k-1)}.$ We put $\varphi(ab) = \varphi(a)\varphi(b).$ This is well-defined by the factorization property of $Y^{(k)}.$ So $\varphi$ uniquely extends to a partial magma homomorphism $\varphi\colon Y^{(k)} \to A.$ Continuing this process inductively, $\varphi$ extends uniquely to a partial monoid homomorphism $\varphi \colon \cup_{k} X^{(k)} \to A,$ which is clearly a partial ring homomorphism $\varphi \colon \FF_1 \angles{x_1,\dots, x_n | \exists s_1, \dots, s_r }\to A.$ \end{proof} \subsection{Congruences} \begin{defn}[Congruence] Let $A$ be a partial ring. 
By a {\bf congruence} on $A,$ we mean an effective equivalence relation on $A.$ \end{defn} \begin{prop} Let $A$ be a partial ring. A partial subring $R \subseteq A\times A$ is a congruence on $A$ if and only if its underlying partial monoid is an effective equivalence relation on the underlying partial monoid of $A.$ \end{prop} \begin{proof} Assume that $R$ is a congruence on $A.$ Then $R\rightrightarrows A$ is a kernel pair of some morphism $f\colon A\to B$ in ${\mathcal{P\!R}ing}.$ Since a limit of any diagram in ${\mathcal{P\!R}ing}$ is constructed by taking a limit of the corresponding diagram in ${\mathcal{P\!M}on},$ $R\rightrightarrows A$ is the kernel pair of the underlying homomorphism $f\colon A\to B$ in ${\mathcal{P\!M}on}.$ So the underlying partial monoid of $R$ is an effective relation on the underlying partial monoid of $A.$ Conversely, assume that the underlying partial monoid of $R$ is an effective relation on the underlying partial monoid of $A.$ We can define a map $m'\colon A/\!\!/ R \times A/\!\!/ R \to A/\!\!/ R$ by $([a_1], [a_2]) \mapsto [a_1 a_2],$ since $R$ is a partial subring of $A\times A.$ Then $m'$ is bilinear since the multiplication map of $A$ is bilinear. Then by Proposition \ref{prop:bilinear_pmag_pmon}, $m'$ induces a bilinear map $\bar{m}\colon A/R\times A/R \to A/R.$ This makes $A/R$ into a partial ring. It is clear that the canonical morphism $\pi\colon A\to A/R$ is a homomorphism of partial rings. To show that $R$ is the kernel pair of $\pi\colon A\to A/R,$ let $f_1, f_2 \colon B\to A$ be two morphisms such that $\pi\circ f_1 = \pi\circ f_2.$ Since the underlying partial monoid of $R$ is the kernel pair of $\pi$ in ${\mathcal{P\!M}on},$ there exists a unique morphism $f\colon B\to R$ of underlying partial monoids such that $f = (f_1, f_2).$ Since $R$ is a partial subring of $A\times A,$ $f$ is a homomorphism of partial rings.
\end{proof} \begin{cor} If $\Set{R_\lambda}$ is a family of congruences on a partial ring $A,$ then $\cap_{\lambda} R_{\lambda}$ is a congruence on $A.$ \end{cor} \begin{cor} Let $A$ be a partial ring. For any subset $S$ of $A\times A,$ there exists the smallest congruence on $A$ which contains $S.$ (It is denoted by $\angles{S}.$) \end{cor} \begin{prop} Let $R$ be a congruence on a partial ring $A.$ Then for any homomorphism $f\colon A\to B$ for which $f(a_1) = f(a_2)$ for all $(a_1, a_2) \in R,$ there exists a unique homomorphism $\tilde{f} \colon A/R \to B$ which makes the following diagram commute: \begin{center} \begin{tikzcd} A \ar{r}{f}\ar{d}[left]{\pi} & B\\ A/R \ar{ru}[below]{\tilde{f}}. \end{tikzcd} \end{center} \end{prop} \begin{proof} Let $f\colon A\to B$ be a morphism of partial rings such that $f(a_1) = f(a_2)$ for all $(a_1, a_2) \in R.$ Since $A\to A/\!\!/ R$ is a coequalizer of $R\rightrightarrows A$ in ${\mathcal{P\!M}ag}$ we have a homomorphism $A/\!\!/ R \to B$ of partial magmas. Similarly, we have a homomorphism $A/R \to B$ of partial monoids, and we have a commutative diagram \begin{center} \begin{tikzcd} A\ar{r}\ar{rd}[below]{f} & A/\!\!/ R \ar{r}\ar{d} & A/R\ar{ld}{\tilde{f}}\\ & B. \end{tikzcd} \end{center} In the following diagram, the largest rectangle is commutative, since $f$ is a homomorphism of partial rings and by the definition of $m' :$ \begin{center} \begin{tikzcd} A/\!\!/ R \times A/\!\!/ R\ar{r}\ar{d}{m'} & A/R \times A/R \ar{d}{\bar{m}} \ar{r} & B\times B \ar{d}{m_B}\\ A/\!\!/ R\ar{r} & A/R \ar{r}& B. \end{tikzcd} \end{center} Also, the left small rectangle is commutative by the definition of $\bar{m}.$ Then so is the right small rectangle, by Proposition \ref{prop:bilinear_pmag_pmon}. 
\end{proof} \subsection{Ideals} \begin{defn}[ideal] An {\bf ideal} of a partial ring $A$ is a partial submonoid $I$ of $A$ such that $I_2 = A_2 \cap (I\times I)$ and $ax \in I $ for any $a\in A$ and $x\in I.$ \end{defn} \begin{example} Let $T$ be a subset of $A.$ If we put \begin{align*} I &= \Set{ a_1 t_1 + \dots + a_r t_r | \begin{array}{l}r\in \mathbb{N}, a_i \in A, t_i \in T \\ (a_1 t_1 , \dots , a_r t_r)\in A_r \end{array} }, \end{align*} then $I$ is the smallest ideal which contains $T.$ This ideal $I$ is denoted by $(T).$ If $T = \Set{a}$ is a singleton, $(T)$ is also denoted by $(a).$ \end{example} Let $\fraka$ and $\frakb$ be two ideals of $A.$ The smallest ideal which contains $\fraka$ and $\frakb$ is denoted by $\fraka + \frakb.$ On the other hand, if we put \begin{align*} I &= \Set{ a_1 b_1 + \dots + a_r b_r | \begin{array}{l}r\in \mathbb{N}, a_i \in \fraka, b_i \in \frakb \\ (a_1 b_1 , \dots , a_r b_r)\in A_r \end{array} }, \end{align*} then $I$ is an ideal which is contained in both $\fraka$ and $\frakb.$ This ideal $I$ is denoted by $\fraka \frakb.$ Let $\varphi\colon A\to B$ be a homomorphism of partial rings and $J\subseteq B$ be an ideal.
If we put \[ I = \varphi^{-1}(J) \] then $I$ is an ideal of $A.$ This ideal $I$ is denoted by $\varphi^*(J).$ On the other hand, let $I$ be an ideal of $A.$ The smallest ideal of $B$ which contains $\varphi(I)$ is denoted by $\varphi_*(I).$ \begin{defn}[prime ideal] An ideal $I$ of a partial ring $A$ is called a {\bf prime ideal} if $I\neq A$ and $ab \in I$ implies $a\in I$ or $b\in I$ for any $a,b \in A.$ \end{defn} \subsection{Localization} Let $S$ be a multiplicative subset of $A.$ We put \begin{align*} S^{-1} A &= \Set{ a/s | a\in A, s\in S} \\ (S^{-1} A)_2 &= \Set{ (a/s, b/t) | \mbox{there exists~} u \in S \mbox{~s.t.~}(uta, usb) \in A_2}, \end{align*} where $a/s$ denotes the usual equivalence class such that $a/s = b/t$ if and only if there exists $u\in S$ such that $uta = usb.$ It is readily checked that $S^{-1}A$ is a partial ring by putting $\frac{a}{s}\frac{b}{t} = \frac{ab}{st}.$ The homomorphism $\lambda \colon A\to S^{-1}A$ given by $\lambda(a) = a/1$ has the universal property of localization. \begin{defn}[local pring] A partial ring $A$ is called a {\bf local partial-ring} (local pring for short) if it has a unique maximal ideal. \end{defn} If $\frakp$ is a prime ideal of a partial ring $A,$ then the localization $A_\frakp$ of $A$ by the multiplicative set $A\setminus \frakp$ is a local pring. If $A , B$ are local prings, a homomorphism $\varphi\colon A\to B$ is called a {\bf homomorphism of local prings} if $\varphi^*(\frakm_B) = \frakm_A,$ where $\frakm_A$ (resp. $\frakm_B$) is the maximal ideal of $A$ (resp. $B$). \begin{prop} Let $A$ be a partial ring and $\lambda\colon A\to S^{-1}A$ be the localization by a multiplicative subset $S$ of $A.$ For any ideal $I\subseteq A, \lambda^*\lambda_*(I) = I.$ \end{prop} \begin{defn}[partial field] A {\bf partial field} is a partial ring in which every non-zero element is multiplicatively invertible. \end{defn} A partial ring is a partial field iff the ideal $(0)$ is a maximal ideal.
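To make the localization construction concrete, here is a small computational sketch of my own (not part of the paper): an ordinary commutative ring such as $\mathbb{Z}/6\mathbb{Z}$ is a partial ring with totally defined sums, and localizing it at the multiplicative set $S = \{1,2,4\}$ shows why the auxiliary factor $u$ in the equivalence $a/s = b/t$ is needed in the presence of zero divisors. The function name `eq` and the chosen ring are assumptions made only for this illustration.

```python
# Localization S^{-1}A for A = Z/6Z and S = {1, 2, 4} (a toy illustration;
# an ordinary commutative ring is a partial ring with totally defined sums).
# a/s = b/t iff there exists u in S with u*t*a = u*s*b; the factor u matters
# here because A has zero divisors.
N = 6
S = (1, 2, 4)  # multiplicatively closed mod 6: 2*2 = 4, 2*4 = 2, 4*4 = 4

def eq(a, s, b, t):
    """Decide whether a/s == b/t in S^{-1}A."""
    return any((u * t * a) % N == (u * s * b) % N for u in S)

print(eq(3, 1, 0, 1))  # True: u = 2 gives 2*3 = 0 mod 6, although 3 != 0 in A
print(eq(1, 1, 0, 1))  # False: no u in S annihilates 1

# Count the elements of S^{-1}A by collecting representatives of the classes.
reps = []
for a in range(N):
    for s in S:
        if not any(eq(a, s, b, t) for (b, t) in reps):
            reps.append((a, s))
print(len(reps))  # 3: S^{-1}(Z/6Z) collapses to a copy of Z/3Z
```

The check confirms that fractions must be compared with the extra factor $u$: without it, $3/1$ and $0/1$ would wrongly be considered distinct.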
\subsection{Blueprints} We can compare partial rings to Lorscheid's blueprints\cite{lorscheid-the-geometry-of-1}. As is explained in that paper, Deitmar's sesquiads\cite{deitmar-congruence-schemes} are special kinds of blueprints. In this section, we show that partial rings are also special kinds of blueprints, as explained below. Let $A$ be a partial ring and let $\mathbb{N}[A]$ denote the monoid-semiring determined by the underlying multiplicative monoid of $A.$ We put \[ R_0(A) = \Set{ (a_1\dotplus a_2, a ) | (a_1, a_2) \in A_2, a_1+a_2 = a}\cup \Set{(0,\emptyset)}. \] Let $R(A)$ be the smallest additive equivalence relation on $\mathbb{N}[A]$ which contains $R_0(A).$ In other words, $R(A)$ is the smallest equivalence relation on $\mathbb{N}[A]$ which contains \[ R_1(A) = \Set{(0 \dotplus x, x) , (a_1 \dotplus a_2 \dotplus x, (a_1+a_2) \dotplus x ) | (a_1, a_2 ) \in A_2, x\in \mathbb{N}[A]}. \] \begin{lem}\label{lem:pring_to_blueprint_to_pring} If $(a_1\dotplus a_2, a ) \in R(A),$ then $(a_1,a_2) \in A_2$ and $a_1+a_2 = a.$ \end{lem} \begin{proof} Consider the following property for $x = a_1\dotplus \dots \dotplus a_r\in \mathbb{N}[A]$ \[ \mbox{(Property)}~~~\mbox{if~}\dotplus\mbox{~is replaced with~}+, x\mbox{~can be calculated in~}A. \] Since $A$ is a partial monoid, for any $(x,y) \in R_1(A), x$ has this property if and only if $y$ has, and the sum for $x$ and that for $y$ are equal. Then the same is true for any $(x,y) \in R(A).$ Suppose $(a_1 \dotplus a_2, a) \in R(A).$ Since $a$ can be calculated in $A,$ so can $a_1 \dotplus a_2,$ and $a_1 + a_2 = a.$ \end{proof} \begin{lem} $R(A)$ is multiplicative.
\end{lem} \begin{proof} If $(\alpha, \beta) \in R_0(A)$ and $c\in A$ then $(\alpha c, \beta c) \in R_0(A).$ Then it is readily checked that \begin{enumerate} \item if $(\alpha, \beta) \in R(A)$ and $c\in A$ then $(\alpha c, \beta c) \in R(A),$ \item if $(\alpha, \beta) \in R(A)$ and $\xi\in \mathbb{N}[A]$ then $(\alpha \xi, \beta \xi) \in R(A)$ and \item if $(\alpha, \beta) \in R(A)$ and $(\gamma, \delta)\in R(A)$ then $(\alpha \gamma, \beta \delta) \in R(A),$ \end{enumerate} which shows that $R(A)$ is multiplicative. \end{proof} It follows that $B(A) = (A, R(A))$ is a blueprint. Now, let $M$ be a commutative (multiplicative) monoid and $B = (M,R)$ be a blueprint. Put \[ R_0 = \Set{ (a_1\dotplus a_2, a ) \in R | a_1, a_2 , a \in M} \cup \Set{(0,\emptyset)}. \] \begin{defn} A blueprint $(M, R)$ is {\bf generated by binary operations} if $R$ is the smallest additive equivalence relation which contains $R_0.$ A blueprint $(M, R)$ is {\bf associative} if whenever $(a_1\dotplus a_2 \dotplus a_3, b)\in R$ with $a_i\,(i = 1,2,3), b\in M,$ there exists $c\in M$ such that $(a_1\dotplus a_2, c) \in R.$ \end{defn} \begin{lem} Let $A$ be a partial ring. The blueprint $B(A) = (A, R(A))$ is generated by binary operations and associative. \end{lem} \begin{proof} $B(A)$ is generated by binary operations by its construction. If $(a_1\dotplus a_2 \dotplus a_3, b)\in R(A)$ with $a_i\,(i = 1,2,3), b\in A,$ then by Lemma \ref{lem:pring_to_blueprint_to_pring}, $a_1 + a_2 + a_3 = b$ in $A.$ Then by the associativity of the partial monoid $A,$ there exists $c\in A$ such that $a_1 + a_2 = c $ in $A.$ This means that $B(A)$ is associative. \end{proof} \begin{prop} The category of partial rings is isomorphic to the category of proper cancellative and associative blueprints with zero which are generated by binary operations. \end{prop} \begin{proof} Let $B = (A, R)$ be a proper cancellative and associative blueprint with zero which is generated by binary operations.
We put \[ A_2 = \set{(a_1, a_2) \in A^2 | \exists a\in A \mbox{~s.t.~}(a_1 \dotplus a_2 , a ) \in R} \] and define $a_1 + a_2 = a$ if $(a_1 \dotplus a_2 , a ) \in R.$ Since $B$ is a proper blueprint with a zero, this determines a partial magma structure on $A.$ Since $B$ is associative, $A$ is a partial monoid. Finally, $A$ is a partial ring since $R$ is multiplicative. Conversely, if $A$ is a partial ring, then the lemmas above show that $B(A) = (A, R(A))$ is a proper cancellative and associative blueprint with zero which is generated by binary operations, and the two constructions are mutually inverse. \end{proof} \section{Partial Schemes} \subsection{Locally PRinged Spaces} \begin{defn}[locally pringed space] A {\bf locally partial-ringed space} (locally pringed space for short) is a pair $(X, \mathcal O_X)$ where $X$ is a topological space and $\mathcal O_X$ is a sheaf of partial rings on $X$ whose stalks are local prings. Let $(X,\mathcal O_X)$ and $(Y,\mathcal O_Y)$ be two locally pringed spaces. A morphism $(X, \mathcal O_X) \to (Y,\mathcal O_Y)$ is \begin{enumerate} \item a map $f\colon X\to Y$ of topological spaces and \item a homomorphism $f^\#\colon \mathcal O_Y\to f_* \mathcal O_X$ of sheaves of partial rings over $Y$ which induces a homomorphism of local prings $o_{Y,f(x)} \to o_{X,x}$ for every point $x\in X.$ \end{enumerate} \end{defn} \subsection{Affine Partial Schemes} In this section, we will define affine partial schemes. Most statements and proofs in \S 3.1 and \S 3.2 of \cite{lorscheid-the-geometry-of-1} about affine blue schemes can be read as statements and proofs about affine partial schemes. We will give proofs for some of them when ours are simpler than those in \cite{lorscheid-the-geometry-of-1}, to show the simplicity of our theory. Let $A$ be a partial ring and $X_A$ be the set of prime ideals of $A.$ For any $a\in A,$ let $D(a)$ be the set of prime ideals $\frakp$ such that $a\notin \frakp.$ We give $X_A$ the topology generated by $D(a)$'s for all $a\in A.$ This topology is called the Zariski topology.
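Prime ideals and the resulting spectrum can be explored by brute force in a small example (my own toy check, not taken from the paper): $\mathbb{Z}/12\mathbb{Z}$, viewed as a partial ring with total addition, has exactly two prime ideals, namely $(2)$ and $(3)$, and the classical identity $\sqrt{\fraka} = \bigcap_{\fraka\subseteq\frakp}\frakp$ from ordinary commutative algebra can be verified ideal by ideal. All function names below are hypothetical.

```python
# Prime ideals and radicals in the finite ring Z/12Z (an ordinary commutative
# ring, hence a partial ring with totally defined addition).
N = 12
R = range(N)

# Every ideal of Z/12Z is principal, so enumerate them as sets of multiples.
ideals = {frozenset((a * d) % N for a in R) for d in R}

def is_prime(P):
    """P is prime: P != ring, and ab in P implies a in P or b in P."""
    return len(P) < N and all(a in P or b in P
                              for a in R for b in R if (a * b) % N in P)

primes = sorted(sorted(P) for P in ideals if is_prime(P))
print(primes)  # the two primes (2) and (3)

def radical(I):
    """sqrt(I) = {a : a^r in I for some r >= 1}."""
    out = set()
    for a in R:
        x = a
        for _ in R:            # powers of a cycle within N steps
            if x in I:
                out.add(a)
                break
            x = (x * a) % N
    return out

# Brute-force check: sqrt(I) equals the intersection of the primes containing I
# (the intersection over the empty family being the whole ring).
for I in ideals:
    inter = set(R)
    for P in ideals:
        if is_prime(P) and I <= P:
            inter &= P
    assert radical(I) == inter
print(sorted(radical({0, 4, 8})))  # sqrt((4)) = (2)
```

For instance $\sqrt{(4)} = (2)$, matching the single prime $(2)$ containing $(4)$; the nilradical $\sqrt{(0)}$ comes out as $\{0,6\} = (2)\cap(3)$.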
For any ideal $\fraka,$ let $V(\fraka)$ denote the set of prime ideals which contain $\fraka.$ \begin{prop} If $\fraka$ does not meet a multiplicative set $S,$ then there exists a prime ideal $\frakp$ which contains $\fraka$ and does not meet $S.$ \end{prop} \begin{proof} Let $\Sigma$ be the family of all ideals which contain $\fraka$ and do not meet $S.$ Since $\fraka$ is an element of it, $\Sigma$ is not empty. By Zorn's lemma, there exists a maximal element $\frakp$ of $\Sigma.$ Suppose that $ab \in \frakp.$ Then $(\frakp + (a)) (\frakp + (b)) \subseteq \frakp.$ Since $\frakp$ does not meet $S,$ at least one of $(\frakp + (a))$ and $(\frakp + (b))$ does not meet $S.$ Since $\frakp$ is maximal with this property, one of $a$ and $b$ is an element of $\frakp.$ This proves that $\frakp$ is a prime ideal. \end{proof} \begin{cor} If $D(b) \subseteq D(a),$ then there exists $k\in \mathbb{N}$ such that $b^k \in (a).$ \end{cor} \begin{proof} Take $\fraka = (a)$ and $S = \Set{b^k | k\in \mathbb{N}}$ in the previous proposition. \end{proof} \begin{cor} If we put $\sqrt{\fraka} = \Set{ a\in A | \exists r \in \mathbb{N}\mbox{~s.t.~} a^r \in \fraka},$ then \[ \sqrt{\fraka} = \bigcap_{\fraka \subseteq \frakp,~\frakp\in X_A} \frakp \] \end{cor} \begin{proof} It is clear that $\sqrt{\fraka} \subseteq \bigcap \frakp.$ For the converse, assume that $a \in A$ is not in $\sqrt{\fraka},$ or equivalently, $\fraka$ does not meet the multiplicative subset $S = \Set{ a^r | r\in \mathbb{N}}$ of $A.$ Then by the proposition above, there exists a prime ideal $\frakp$ which contains $\fraka$ and does not meet $S.$ In particular, $a\notin \frakp$ and this proves that $\sqrt{\fraka} \supseteq \bigcap \frakp.$ \end{proof} \begin{prop} $D(a)$ is quasi-compact for all $a\in A.$ In particular $X_A$ is quasi-compact.
\end{prop} \begin{proof} It is sufficient to show that if $D(a) = \cup_{\lambda\in \Lambda} D(a_{\lambda})$ for some subset $\set{a_\lambda}\subseteq A,$ then there exist finitely many elements $\lambda_1, \dots , \lambda_r \in \Lambda$ such that $D(a) = D(a_{\lambda_1})\cup \dots \cup D(a_{\lambda_r}).$ So assume that $D(a) = \cup_{\lambda\in \Lambda} D(a_{\lambda})$ for some subset $\set{a_\lambda}\subseteq A.$ Then $a\in \frakp \iff \Set{a_\lambda}\subseteq \frakp$ for all $\frakp \in X_A,$ which implies that $\sqrt{(a)} = \sqrt{(\Set{a_\lambda})}.$ So there exists $r\in \mathbb{N}$ such that $a^r \in (\Set{a_\lambda}),$ which means that we can take some $a_{\lambda_1}, \dots , a_{\lambda_s}$ and $c_1, \dots, c_s \in A$ such that $(c_1a_{\lambda_1}, \dots , c_sa_{\lambda_s} )\in A_s$ and $a^r =c_1a_{\lambda_1}+ \dots + c_sa_{\lambda_s},$ so $a \in \sqrt{(a_{\lambda_1}, \dots , a_{\lambda_s})}.$ Then $\Set{a_{\lambda_1}, \dots , a_{\lambda_s}} \subseteq \frakp \implies a\in \frakp$ for all $\frakp \in X_A$ and this means that $D(a) \subseteq D(a_{\lambda_1}) \cup \dots \cup D(a_{\lambda_s}).$ \end{proof} For any open set $U$ of $X= X_A,$ let $S_U$ be the subset of $A$ consisting of all $a\in A$ such that $a\notin \frakp$ for all $\frakp \in U.$ Then $S_U$ is a multiplicative subset of $A.$ If we put $\mathcal O'_X(U) = S_U^{-1} A,$ then $\mathcal O'_X$ is a presheaf of partial rings on $X.$ Its sheafification is denoted by $\mathcal O_X.$ \begin{prop} If $x = \frakp\in X_A$ is a point, the stalk $o_{X,x}$ at $x$ coincides with the local pring $A_\frakp.$ \end{prop} The locally pringed space $(X_A, \mathcal O_X)$ constructed above is denoted by $\mathop{\mathrm{Spec}}\nolimits (A).$ \begin{defn}[affine pscheme] A locally pringed space is called an {\bf affine partial scheme} (affine pscheme for short) if it is isomorphic as a locally pringed space to $\mathop{\mathrm{Spec}}\nolimits A$ for a partial ring $A.$ \end{defn} Let $A, B$ be partial rings and $\varphi\colon A\to
B$ be a homomorphism. Suppose that $\mathop{\mathrm{Spec}}\nolimits(A) = (X,\mathcal O_X)$ and $\mathop{\mathrm{Spec}}\nolimits(B) = (Y,\mathcal O_Y).$ \begin{prop} Let $A$ be a partial ring and $(X,\mathcal O_X) = \mathop{\mathrm{Spec}}\nolimits (A).$ For any element $s\in A,$ there exists a monomorphism $A_s \to \mathcal O_X(D(s)).$ \end{prop} \begin{proof} We can define a homomorphism $\varphi\colon A_s \to \mathcal O_X(D(s))$ by putting\\ $\varphi(a/s^r) (\frakp) = a/s^r \in A_{\frakp}$ for all $\frakp \in X.$ Suppose $\varphi(a/s^r) = \varphi(b/s^q).$ This means that $a/s^r = b/s^q$ in $A_\frakp$ for all $\frakp\in D(s),$ that is, we have $h\notin \frakp$ such that $hs^q a = hs^r b.$ If we put \[ \fraka = \Set{ x\in A | xs^qa = xs^rb}, \] then $\fraka$ is an ideal of $A$ and $\fraka \not\subseteq \frakp$ for all $\frakp \in D(s).$ Then we have that $\sqrt{(s)} \subseteq \sqrt{\fraka}.$ If $s^k \in \fraka,$ then $s^{k+q} a = s^{k+r} b, $ which means that $a/s^r = b/s^q$ in $A_s.$ Therefore $\varphi$ is injective. \end{proof} If we take $s=1$ in the above proposition, we obtain a monomorphism $\gamma \colon A\to \mathcal O_X(X).$ The following lemma, which is an extraction from the proof of Lemma 3.16 in \cite{lorscheid-the-geometry-of-1} is useful. \begin{lem}\label{lem:integralization} Let $A$ be a partial ring and $(X,\mathcal O_X) = \mathop{\mathrm{Spec}}\nolimits(A).$ We put $B = \mathcal O_X(X).$ For any element $\sigma \in B,$ there exists an element $s\in A$ such that $s\sigma \in \gamma(A).$ \end{lem} \begin{proof} We can prove this similarly as in the second paragraph of the proof of Lemma 3.16 of \cite{lorscheid-the-geometry-of-1}. \end{proof} \begin{thm} Let $A$ be a partial ring and $(X,\mathcal O_X) = \mathop{\mathrm{Spec}}\nolimits(A).$ We put $B = \mathcal O_X(X)$ and let $(Y,\mathcal O_Y) = \mathop{\mathrm{Spec}}\nolimits (B).$ Then $(X, \mathcal O_X)$ and $(Y, \mathcal O_Y)$ are isomorphic. 
\end{thm} \begin{proof} This theorem is simply a combination of Lemma 3.16 and Lemma 3.18 of \cite{lorscheid-the-geometry-of-1}. At the final step of the proof of Lemma 3.18 of \cite{lorscheid-the-geometry-of-1}, we can use Lemma \ref{lem:integralization}. \end{proof} The following corollary is immediate from the theorem. \begin{cor}\label{cor:one-one-correspondence} There exists a one-to-one correspondence between the morphisms $\mathop{\mathrm{Spec}}\nolimits A\to \mathop{\mathrm{Spec}}\nolimits B$ of affine pschemes and the morphisms $B\to \Gamma ( \mathop{\mathrm{Spec}}\nolimits A )$ of partial rings. \end{cor} The following definition is a translation into our case from \cite{lorscheid-the-geometry-of-1}. \begin{defn} A partial ring of the form $\Gamma(X, \mathcal O)$ for some affine pscheme $(X,\mathcal O)$ is called {\bf global}. \end{defn} \begin{prop} Every partial field is global. \end{prop} \begin{proof} If $F$ is a partial field and $\mathop{\mathrm{Spec}}\nolimits(F) = (X, \mathcal O),$ then $X = \Set{ (0) }$ is a point. Then it is clear that $\mathcal O_X(X) = F.$ \end{proof} \subsection{Partial schemes} \begin{defn}[partial scheme] A locally pringed space is called a {\bf partial scheme} if it is locally an affine pscheme. \end{defn} \subsection{Projective Spaces} In the polynomial semiring $\mathbb{N}[y_0, \dots, y_n]$ of $n+1$ indeterminates, we can consider a partial monoid \begin{align*} \mathbb{N}_1[y_0, \dots, y_n]=\Set{ f(y_0, \dots, y_n) | \mbox{~the constant term of~}f \mbox{~is~}0\mbox{~or~}1 }, \end{align*} in which two polynomials are summable if their constant terms are summable in $\FF_1.$ Let $B$ be a partial subring of $\mathbb{N}_1[y_0, \dots, y_n]$ which contains $\angles{y_0, \dots, y_n},$ the multiplicatively written free commutative monoid thought of as a partial ring with trivial sum.
For the multiplicative subset $S_i = \Set{ y_i^r | r\in \mathbb{N}},$ the localization $S_i^{-1} B$ has a $\mathbb{Z}$-grading by the degree of polynomials. Let $A^{(i)}$ denote the 0-th part of $S_i^{-1} B\,(0\leq i \leq n).$ More precisely, \begin{align*} A^{(i)} &= \Set{y_i^{-r} f_r | r\in \mathbb{N}, f_r\in B \mbox{~: a homogeneous polynomial of degree~}r},\\ (A^{(i)})_2 &= \Set{ (y_i^{-r} f_r,y_i^{-r} g_r) | (f_r, g_r ) \in B_2}. \end{align*} If $A^{(i,j)}$ denotes the localization of $A^{(i)}$ by the multiplicative subset \[\Set{(y_j/y_i)^r | r\in \mathbb{N}},\] then we have $A^{(i,j)}=A^{(j,i)}.$ This observation ensures that we can glue the affine pschemes $\mathbb{A}_i = \mathop{\mathrm{Spec}}\nolimits A^{(i)}$ along the open subschemes $\mathop{\mathrm{Spec}}\nolimits A^{(i,j)},$ via the isomorphisms given by the equalities $A^{(i,j)}=A^{(j,i)}.$ Let $\mathbb{P}^n_V$ denote the resulting partial scheme, where $V = \mathop{\mathrm{Spec}}\nolimits(B),$ which is thought of as a vector space determined by $B.$ \begin{example}\label{ex:projective} Let \[ B = \FF_1 \angles {y_0, \dots, y_n | \exists(y_0 + \dots + y_n) }. \] At this point, we propose to think of this partial ring as the most fundamental one which is between $\angles{y_0, \dots, y_n}$ and $\mathbb{N}_1[y_0, \dots, y_n].$ For example, $\mathop{\mathrm{Hom}}\nolimits_{{\mathcal{A}lg}_{\FF_1}}(B, A)$ equals $A_{n+1},$ the summable $(n+1)$-tuples in $A,$ which may be thought of as the most fundamental $\FF_1$-module ``of rank $n+1$''.
In this example, $\mathbb{P}^n_V $ for this specific $B$ is abbreviated as $\mathbb{P}^n.$ In this case, \begin{align*} A^{(i)} &= \Set{\mbox{~subsum of~}(y_0/y_i + \dots + y_n/y_i)^r | r\in \mathbb{N}}\cup \set{0},\\ A^{(i,j)} &= \Set{y_i^s y_j^{-s}\left(\mbox{~subsum of~}(y_0/y_i + \dots + y_n/y_i)^r\right) |r\in \mathbb{N}, s\in \mathbb{Z}}\cup \set{0},\\ A^{(i,j,k)} &= \Set{ \begin{array}{l}y_i^s y_j^t y_k^u\times\\ \left(\mbox{~subsum of~}(y_0/y_i + \dots + y_n/y_i)^r\right) \end{array} |\begin{array}{l} r\in \mathbb{N}, \\ s,t,u\in \mathbb{Z},\\ s+t+u = 0 \end{array} }\cup \set{0},\\ \vdots \end{align*} and so on. Now let $F$ be a partial field. Then by Corollary \ref{cor:one-one-correspondence}, there is a one-to-one correspondence between the set of $F$-points $\mathop{\mathrm{Spec}}\nolimits F \to \mathbb{A}_i$ and \[ A_i(F) = \mathop{\mathrm{Hom}}\nolimits_{{\mathcal{A}lg}_{\FF_1}}(A^{(i)}, F). \] Then the set of $F$-points $\mathop{\mathrm{Spec}}\nolimits F\to \mathbb{P}^n$ can be identified with \[ \mathbb{P}^n(F) = \coprod_{i=0}^n A_i(F)/\sim, \] where for $v_i \in A_i(F)$ and $v_j \in A_j(F),$ $v_i\sim v_j$ if there exists a homomorphism $v\colon A^{(i,j)} \to F$ such that $v|_{A^{(i)}} = v_i$ and $v|_{A^{(j)}} = v_j.$ For any subset $\set{i_1,\dots, i_r} \subseteq \Set{0,1,\dots , n},$ we put \[ A_{i_1,\dots, i_r}(F) = \mathop{\mathrm{Hom}}\nolimits_{{\mathcal{A}lg}_{\FF_1}}\left(A^{(i_1,\dots, i_r)}, F \right). \] Then as sets, \[A_{i_1,\dots, i_r}(F) \cong \left(\begin{array}{l} n\mbox{-tuples~} (x_1, \dots, x_n)\mbox{~for which}\\ 1+x_1 + \dots +x_n \mbox{~can be calculated in~}F\\ \mbox{and~} x_1, \dots , x_{r-1} \neq 0 \end{array}\right). \] Then \[ \# A_{i_1,\dots, i_r}(F)= (\kappa-1)^{r-1}\kappa^{n-r+1}, \] where $\kappa = \kappa(F)$ denotes the number of elements of $F$ which are summable with 1.
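For an ordinary finite field $F = \mathbb{F}_q$, where $\kappa(F) = q$ and every tuple is summable, both the stratum count $(\kappa-1)^{r-1}\kappa^{n-r+1}$ above and the total number of projective points can be cross-checked by direct enumeration. The following sketch is my own illustration with arbitrarily chosen $q$ and $n$ (and $q$ prime, so that inverses can be computed by Fermat's little theorem).

```python
# Counting points of projective space over an ordinary finite field F_q, where
# kappa(F_q) = q and every tuple is summable (a toy check; q, n chosen freely).
from itertools import product

q, n = 5, 3
F = range(q)

# Stratum count: n-tuples (x_1,...,x_n) whose first r-1 coordinates are nonzero,
# matching #A_{i_1,...,i_r}(F) = (kappa-1)^(r-1) * kappa^(n-r+1).
def stratum(r):
    return sum(1 for x in product(F, repeat=n)
               if all(x[i] != 0 for i in range(r - 1)))

for r in range(1, n + 2):
    assert stratum(r) == (q - 1) ** (r - 1) * q ** (n - r + 1)

# Direct count of P^n(F_q): nonzero (n+1)-vectors, normalized so that the first
# nonzero coordinate becomes 1.
points = set()
for v in product(F, repeat=n + 1):
    if any(v):
        i = next(j for j, c in enumerate(v) if c)
        inv = pow(v[i], q - 2, q)  # inverse of the leading coefficient (q prime)
        points.add(tuple((c * inv) % q for c in v))
print(len(points), (q ** (n + 1) - 1) // (q - 1))  # both equal q^n + ... + q + 1
```

Both counts agree with $(q^{n+1}-1)/(q-1)$, the value produced by the inclusion-exclusion over the strata.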
Now we can calculate $\# \mathbb{P}^n(F)$ as \begin{align*} \# \mathbb{P}^n(F) &= \sum_{i=1}^{n+1} (-1)^{i-1} \binom{n+1}{i}(\kappa-1)^{i-1}\kappa^{n-i+1}\\ &= -\frac{(\kappa-(\kappa-1))^{n+1} - \kappa^{n+1}}{\kappa-1}\\ &= \frac{\kappa^{n+1} - 1}{\kappa-1} = \kappa^n + \dots + \kappa + 1. \end{align*} Of course, $\kappa(\mathbb{F}_q) = q,$ where $\mathbb{F}_q$ denotes the finite field with $q$ elements, and $\kappa(\FF_1 ) = 1.$ \end{example} \section{Affine Group PSchemes and Affine PGroup PSchemes} \subsection{$\mathbb{G}_a$} Put $K = \FF_1\angles{ x, y | \exists (x+y) }$ and let $R$ be the congruence $\angles{(x+y, 0)}.$ Then put $G = K/ R.$ If we put $t = [x] = -[y] \in G,$ then \[ G = \Set{ a_0 + a_1t + \dots + a_r t^r | r\in \mathbb{N}, a_0 =0\mbox{~or~}1, a_i \in \mathbb{Z} }. \] A cogroup structure on $G$ is given by \begin{align*} &m\colon G\to G\otimes G~;~ t\mapsto 1\otimes t + t\otimes 1\\ &e\colon G\to \FF_1 ~;~t \mapsto 0\\ &i\colon G\to G~;~t\mapsto -t. \end{align*} Then $\mathbb{G}_a = \mathop{\mathrm{Hom}}\nolimits_{{{\mathcal{A}lg}}_{\FF_1}} (G, -)$ is the additive group. \subsection{$\mathbb{G}_m$} Put $K= \angles{ x, y }$ and let $R$ be the congruence $\angles{(xy, 1)}.$ Then put $G = K/R.$ If we put $t = [x] \in G, $ then $G = \Set{ t^n | n\in \mathbb{Z}}.$ A cogroup structure on $G$ is given by \begin{align*} &m\colon G\to G\otimes G~;~ t\mapsto t\otimes t\\ &e\colon G\to \FF_1 ~;~t \mapsto 1\\ &i\colon G\to G~;~t\mapsto t^{-1}. \end{align*} Then $\mathbb{G}_m = \mathop{\mathrm{Hom}}\nolimits_{{{\mathcal{A}lg}}_{\FF_1}} (G, -)$ is the multiplicative group. \subsection{${\mathbb{G}\mathbb{L}}_n$} In this section, we construct an affine pgroup pscheme ${\mathbb{G}\mathbb{L}}_n,$ which induces a ${\mathcal{G}rp}$-valued functor when restricted to `good' partial rings. \subsubsection{Linear Algebra} Let $A$ be a partial ring.
\begin{defn}[$A$-modules] An $A$-module is a partial monoid $M$ equipped with a bilinear action $A\times M\to M$ by $A.$ \end{defn} For any natural number $n, A^n$ is an $A$-module in a natural way. Let $\varphi \colon A^n \to A^m$ be an $A$-module homomorphism. Let $\set{e_1,\dots, e_n}$ and $\set{f_1,\dots, f_m}$ be the canonical bases of $A^n$ and $A^m,$ respectively. Suppose $\varphi( e_j ) = \sum_{i=1}^m c_{ij} f_i.$ Let $(a_1,\dots, a_n)\in A^n$ be any element. Since $( a_1e_1, \dots, a_n e_n )$ is a summable $n$-tuple in $A^n,$ $( a_1\varphi(e_1), \dots, a_n \varphi(e_n) )$ is a summable $n$-tuple in $A^m.$ This implies that $( c_{i1} a_1 , c_{i2} a_2, \dots, c_{in} a_n ) $ is a summable $n$-tuple in $A$ for each $i.$ Now, we put \[ A_{(n)} = \Set{ (c_1, \dots, c_n) \in A^n | \begin{array}{l} (a_1 c_1 , \dots , a_n c_n) \mbox{~is a summable~}n\mbox{-tuple}\\ \mbox{for any } a_1, \dots, a_n \in A \end{array} }. \] If we think of $C = (c_{ij})$ as an $m\times n$ matrix, then we have shown above that each row of $C$ is an element of $A_{(n)}.$ The set of $m\times n$ matrices with this property is denoted by $M_{m,n}(A).$ Conversely, if we are given an $m\times n$ matrix $C = (c_{ij})\in M_{m,n}(A),$ we can define an $A$-module homomorphism $\varphi \colon A^n\to A^m$ by the usual matrix multiplication $\varphi(a_1,\dots, a_n) = C\,^t\!(a_1,\dots, a_n).$ If $m=n,$ matrix multiplication gives $M_n(A) = M_{n,n}(A)$ a non-commutative monoid structure. The invertible elements of $M_n(A)$ constitute a group, which is denoted by $GL_n(A).$ We introduce a weaker version $M'_{m,n}(A)$ of $M_{m,n}(A),$ in which matrices have their rows in $A_n,$ instead of $A_{(n)}.$ If $m=n,$ matrix multiplication gives $M'_n(A) = M'_{n,n}(A)$ a non-commutative partial magma structure in this case, as is illustrated below.
If we put \begin{align*} M &= M'_n(A), \mbox{~and}\\ M_2 &= \Set{ ( (c_{ij}), (d_{ij}) ) | (c_{i1}d_{1j}, \dots, c_{in}d_{nj}) \in A_n \mbox{~for each~} i, j }, \end{align*} then a multiplication $M_2 \to M$ is given by the matrix multiplication. The unit matrix gives a unit for the partial magma $M,$ which can be multiplied to any element of $M.$ As a definition of an invertible matrix in $M'_n(A),$ we adopt the following one (suggested by the group pscheme construction given in a later section): a matrix $C \in M'_n(A)$ is invertible if there exists a matrix $C'\in M'_n(A)$ such that we can form $CC'$ and $C'C$ in $M'_n(A)$ and $CC' = C'C = I_n,$ the unit matrix. The invertible elements of $M'_n(A)$ constitute a partial group, which is denoted by $GL'_n(A),$ with the following (ad hoc) definition of a partial group. \begin{defn}[partial group] A partial group is a non-commutative partial magma $G$ in which for every element $a\in G,$ there exists an element $b\in G$ such that $ab$ and $ba$ can be formed in $G$ and $ab=ba=1,$ where $1$ is the unit element of the partial magma $G.$ \end{defn} \begin{defn}[good partial ring] A partial ring $A$ is called good if $A_n = A_{(n)}$ for all $n\in \mathbb{N}.$ \end{defn} If $A$ is a good partial ring, $GL_n(A) = GL'_n(A).$ Note that commutative monoids and commutative rings are good partial rings.
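As a sanity check (my own sketch, not part of the paper), one can model $\FF_1$ as $\{0,1\}$ with $a+b$ defined only when $a=0$ or $b=0$, enumerate $M'_n(\FF_1)$ for a small $n$, and confirm that the matrices invertible in the sense just defined are exactly the $n!$ permutation matrices. The encoding of matrices as tuples and the function name `mul` are assumptions of this illustration.

```python
# GL'_n over a toy model of F_1 = {0, 1}: a + b is defined only when a = 0 or
# b = 0, so a sum of products is defined iff at most one term is nonzero.
from itertools import product
from math import factorial

n = 3
# Rows summable in F_1 have at most one nonzero entry: the unit rows and 0.
rows = [tuple(int(i == j) for j in range(n)) for i in range(n)] + [(0,) * n]
mats = list(product(rows, repeat=n))  # all of M'_n(F_1)

def mul(C, D):
    """Product in M'_n(F_1), or None when some entry sum is undefined in F_1."""
    P = []
    for i in range(n):
        row = []
        for j in range(n):
            terms = [C[i][k] * D[k][j] for k in range(n)]
            if sum(terms) > 1:   # two nonzero terms: not summable in F_1
                return None
            row.append(sum(terms))
        P.append(tuple(row))
    return tuple(P)

I = tuple(tuple(int(i == j) for j in range(n)) for i in range(n))
invertible = [C for C in mats
              if any(mul(C, D) == I and mul(D, C) == I for D in mats)]
print(len(invertible), factorial(n))  # 6 invertible matrices = |S_3|
```

The count $n! = 6$ matches the expectation that the invertible elements are the permutation matrices, in line with the identification of ${\mathbb{G}\mathbb{L}}_n(\FF_1)$ with the symmetric group.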
\subsubsection{Partial Cogroups} If $\mathcal C$ is a category with finite coproducts, then we can define a partial cogroup in $\mathcal C.$ Let $I$ be an initial object of $\mathcal C,$ and $\otimes $ be a binary coproduct in $\mathcal C.$ \begin{defn}[Partial cogroup] A partial cogroup in $\mathcal C$ is \begin{enumerate}[label = \arabic*)] \item an object $G,$ \item an object $H$ and an epimorphism $j\colon G\otimes G\to H,$ \item a morphism $e\colon G\to I,$ called the counit, and epimorphisms $e_L\colon H \to I\otimes G$ and $e_R \colon H \to G\otimes I,$ \item a morphism $m\colon G\to H,$ called the comultiplication, and \item a morphism $i\colon G\to G,$ called the inverse, and morphisms $i_L, i_R \colon H \to G$ \end{enumerate} which makes the following diagrams commute : \begin{enumerate}[label = \alph*)] \item \[ \begin{tikzcd} & G\otimes G \ar{ld}[above]{e\otimes id}\ar[two heads]{d}{j}\ar{rd}{id\otimes e}\\ I\otimes G\ar{d}[left]{(id,!)} & H \ar[two heads]{l}{e_L}\ar[two heads]{r}[below]{e_R} & G\otimes I\ar{d}{(!, id)}\\ G & G \ar[equal]{l}\ar{u}{m}\ar[equal]{r} & G \end{tikzcd} \] \item \[ \begin{tikzcd} & G\otimes G\ar{ld}[above]{i\otimes id}\ar{rd}{id\otimes i}\ar[two heads]{d}{j}\\ G & H\ar{l}{i_L}\ar{r}[below]{i_R} & G\\ I\ar{u}{!} & G\ar{u}{m}\ar{l}{e}\ar{r}[below]{e} & I\ar{u}{!}. \end{tikzcd} \] \end{enumerate} \end{defn} \subsubsection{${\mathbb{G}\mathbb{L}}_n$} Let $\mathbb{N}[x_{ij}, y_{ij}\,(1\leq i,j \leq n) ]$ be the semiring of polynomials of $2n^2$ indeterminates $x_{ij}, y_{ij}\,(1\leq i,j \leq n).$ Consider $n\times n$ matrices $X = (x_{ij}), Y = (y_{ij}), Z = XY = (z_{ij})$ and $W = YX = (w_{ij}).$ Let $K$ be the subset of $\mathbb{N}[x_{ij}, y_{ij}\,(1\leq i,j \leq n) ]$ consisting of $4n$ elements \[ \begin{array}{l} x_i = x_{i1}+\dots +x_{in} \,(1\leq i\leq n),\\ y_i = y_{i1}+\dots +y_{in} \,(1\leq i\leq n),\\ z_i = z_{i1}+\dots +z_{in} \,(1\leq i\leq n)\mbox{~and~}\\ w_i = w_{i1}+\dots +w_{in} \,(1\leq i\leq n). 
\end{array} \] We put $G' = \FF_1 \angles{x_{ij}, y_{ij}\,(1\leq i,j \leq n) | \exists t, \forall t\in K}.$ Let $Q$ be the smallest congruence on $G'$ which contains the $2n^2$ pairs $(z_{ij} ,\delta_{ij})$ and $(w_{ij}, \delta_{ij})\,(1\leq i, j\leq n).$ Then we put $G=G'/Q.$ Next, let $N = \mathbb{N}[x_{ij}, y_{ij}, x'_{ij}, y'_{ij}\,(1\leq i,j \leq n) ]$ be the semiring of polynomials of $4n^2$ indeterminates $x_{ij}, y_{ij}, x'_{ij}, y'_{ij}\,(1\leq i,j \leq n).$ Consider $n\times n$ matrices \begin{align*} &X = (x_{ij}), Y = (y_{ij}), Z = XY = (z_{ij}), W = YX = (w_{ij}), \\ &X' = (x'_{ij}), Y' = (y'_{ij}), Z' = X'Y' = (z'_{ij}), W' = Y'X' = (w'_{ij}), \\ &S = XX' = (s_{ij}), T = Y'Y = (t_{ij}), U = ST = (u_{ij}), V = TS = (v_{ij}). \end{align*} We put \[ L = \Set{ x_i, y_i, z_i, w_i, x'_i, y'_i, z'_i, w'_i, s_i, t_i, u_i, v_i | 1\leq i \leq n}, \] where $*_i$ denotes the sum of the $i$-th row of the matrix indicated by the capital of the same letter $*.$ We put $H' = \FF_1 \angles{x_{ij}, y_{ij},x'_{ij}, y'_{ij}\,(1\leq i,j \leq n) | \exists t, \forall t\in L}.$ Let $R$ be the smallest congruence on $H'$ which contains the $6n^2$ pairs $(z_{ij} ,\delta_{ij}),$ $(w_{ij}, \delta_{ij}),$ $(z'_{ij} ,\delta_{ij}),$ $(w'_{ij}, \delta_{ij}),$ $(u_{ij} ,\delta_{ij})$ and $(v_{ij}, \delta_{ij})\,(1\leq i, j\leq n).$ Then we put $H = H'/R.$ The following list of maps defines partial ring homomorphisms which give $G$ a partial cogroup structure: \begin{align*} & j\colon G\otimes G \to H~&;~ & j(x_{ij}\otimes 1) = x_{ij}, j(y_{ij}\otimes 1) = y_{ij},\\ & & & j(1\otimes x_{ij}) = x'_{ij}, j(1\otimes y_{ij}) = y'_{ij},\\ & e\colon G \to \FF_1~&;~& e(x_{ij}) = e(y_{ij}) = \delta_{ij},\\ & e_L\colon H \to \FF_1\otimes G~&;~& e_L(x_{ij}) = e_L(y_{ij}) = \delta_{ij}\otimes 1,\\ & & & e_L(x'_{ij}) = 1\otimes x_{ij}, e_L(y'_{ij}) = 1\otimes y_{ij},\\ & e_R\colon H \to G\otimes \FF_1~&;~& e_R(x_{ij}) = x_{ij}\otimes 1, e_R(y_{ij}) = y_{ij}\otimes 1,\\ & & & e_R(x'_{ij}) = e_R(y'_{ij}) =
1\otimes \delta_{ij},\\ & m\colon G\to H~&;~& m(x_{ij}) = s_{ij}, m(y_{ij}) = t_{ij},\\ & i\colon G\to G~&;~& i(x_{ij}) = y_{ij}, i(y_{ij}) = x_{ij},\\ & i_L\colon H\to G~&;~& i_L(x_{ij}) = y_{ij}, i_L(y_{ij}) = x_{ij},\\ & & & i_L(x'_{ij}) = x_{ij}, i_L(y'_{ij}) = y_{ij},\\ & i_R\colon H\to G~&;~& i_R(x_{ij}) = x_{ij}, i_R(y_{ij}) = y_{ij},\\ & & & i_R(x'_{ij}) = y_{ij}, i_R(y'_{ij}) = x_{ij}. \end{align*} \begin{thm} There exists a representable functor ${\mathbb{G}\mathbb{L}}_n \colon {\mathcal{P\!R}ing} \to \mathcal{PG}rp$ from the category of partial rings to the category of partial groups which enjoys the following properties: \begin{enumerate} \item its restriction to the category of good partial rings factors through ${\mathcal{G}rp},$ the category of groups, \item ${\mathbb{G}\mathbb{L}}_n (A)$ is the $n$-th general linear group with entries in $A$ if $A$ is a commutative ring with 1, and \item ${\mathbb{G}\mathbb{L}}_n (\FF_1) = \mathfrak S_n$ is the $n$-th symmetric group. \end{enumerate} \end{thm} \end{document}
\begin{document} \maketitle \begin{abstract} The notion of retrocell in a double category with companions is introduced and its basic properties established. Explicit descriptions in some of the usual double categories are given. Monads in a double category provide an important example where retrocells arise naturally. Cofunctors appear as a special case. The motivating example of vertically closed double categories is treated in some detail. \end{abstract} \tableofcontents \section*{Introduction} In \cite{Par21} an in-depth study of the double category $ {\mathbb R}{\rm ing} $ of rings, homomorphisms, bimodules and linear maps was made, and several interesting features were uncovered. It became apparent that considering this double category, rather than the category of rings and homomorphisms or the bicategory of bimodules, could provide some important insights into the nature of rings and modules. An important property of the bicategory of bimodules is that it is biclosed, i.e. the $ \otimes $ has right adjoints in each variable so that we have bijections of linear maps \begin{center} \begin{tabular}{c} $M \to N \obslash_T P $ \\[3pt] \hline \\[-12pt] $N \otimes_S M \to P$ \\[3pt] \hline \\[-12pt] $ N \to P \oslash_R M $ \end{tabular} \end{center} for bimodules $$ \bfig\scalefactor{.8} \Ctriangle/@{<-}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}/<400,300>[S`R`T\rlap{\ .};M`N`P] \efig $$ We use (a slight modification of) Lambek's notation for the hom bimodules \cite{Lam66}: $ P \oslash_R M $ is the $ T\mbox{-}S $ bimodule of $ R $-linear maps $ M \to P $, and $ N \obslash_T P $ is the bimodule of $ T $-linear maps $ N \to P $. Both $ P \oslash_R M $ and $ N \obslash_T P $ are covariant in $ P $ but contravariant in the other variables. 
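To fix ideas, we record the most familiar special case (standard module theory, included only as an illustration, and not specific to \cite{Par21}): when $ R = S = T $ is a commutative ring $ k $ and $ M $, $ N $, $ P $ are $ k $-modules, both internal homs reduce to $ \mathrm{Hom}_k $ and the bijections above become the usual tensor--hom adjunction
% Hedged illustration: a standard special case, not a construction of this paper.
$$
\mathrm{Hom}_k (M,\ \mathrm{Hom}_k (N, P)) \;\cong\; \mathrm{Hom}_k (N \otimes_k M,\ P) \;\cong\; \mathrm{Hom}_k (N,\ \mathrm{Hom}_k (M, P))\ .
$$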
This is for $ 2 $-cells in the bicategory $ {\cal{B}}{\it im} $ but it does not extend to cells in the double category $ {\mathbb R}{\rm ing} $, which casts a shadow on our contention that $ {\mathbb R}{\rm ing} $ works better than $ {\cal{B}}{\it im} $. The way out of this dilemma is hinted at in the commuter cells of \cite{GraPar08} (there called commutative cells) introduced to deal with the universal property of internal comma objects. That is, to use companions to define new cells, which we call retrocells below, and thus recover functoriality. After a quick review of companions in Section 1, we introduce retrocells in Section 2 and see that they are the cells of a new double category, and if we apply this construction twice, we get the original double category, up to isomorphism. Section 3 extends the mates calculus to double categories where we see retrocells appearing as the mates of standard cells. A careful study of dualities in Section 4 completes this. Retrocells in the standard double categories whose vertical arrows are spans, relations, profunctors or $ {\bf V} $-matrices are analyzed in Section 5. They correspond to various sorts of liftings reminiscent of fibrations. Section 6 studies retrocells in the context of monads in a double category. It is seen that, while Kleisli objects are certain kinds of universal cells, Eilenberg-Moore objects are universal retrocells. In $ {\mathbb S}{\rm pan} {\bf A} $, monads are category objects in $ {\bf A} $ and internal functors are cells preserving identities and multiplication. Retrocells, on the other hand, give cofunctors. In Section 7 we extend Shulman's closed equipments to general double categories, and establish the functoriality of internal homs, covariant in one variable and retrovariant in the other, formulated in terms of ``twisted cospans''. 
We end in Section 8 by re-examining commuter cells in the light of retrocells and see that this leads to an interesting triple category, though we do not pursue the triple category aspect. The results of this paper were presented in preliminary form at CT2019 in Edinburgh and in the MIT Categories Seminar in October 2020. We thank Bryce Clarke, Matt Di~Meglio and David Spivak for expressing their interest in retrocells. We also thank the anonymous referee for a careful reading and numerous suggestions resulting in a much better presentation. \section{Companions} The whole paper will be concerned with double categories that have companions, so we recall the definition, principal properties we will use, and establish some notation (see \cite{GraPar04} for more details). \begin{definition} Let $ f \colon A \to B $ be a horizontal arrow in a double category $ {\mathbb A} $. A {\em companion} for $ f $ is a vertical arrow $ v \colon A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy B $ together with two {\em binding cells} $ \alpha $ and $ \beta $ as below, such that $$ \bfig\scalefactor{.8} \square/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``v`f] \place(250,250)[{\scriptstyle \alpha}] \square(500,0)/>``=`=/[A`B`B`B;f```] \place(750,250)[{\scriptstyle \beta}] \place(1300,250)[=\ \ \id_f] \place(1700,250)[\mbox{and}] \square(2100,-250)/`@{>}|{\usebox{\bbox}}`=`=/[A`B`B`B\rlap{\ .};`v``] \place(2350,0)[{\scriptstyle \beta}] \square(2100,250)/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``v`f] \place(2350,500)[{\scriptstyle \alpha}] \place(3000,250)[= \ \ 1_v] \efig $$ \end{definition} We can always assume the vertical identities are strict and usually denote them by long equal signs in diagrams, as we just did. Of course horizontal identities are always strict, and we use a similar diagrammatic notation. 
The vertical identity on $ A $, $ \id_A $, is a companion to the horizontal identity $ 1_A $, with both binding cells the common value $ 1_{\id_A} = \id_{1_A} $, $$ \bfig\scalefactor{.8} \square/=`=`=`=/[A`A`A`A\rlap{\ .};```] \place(250,250)[{\scriptstyle 1}] \efig $$ If $ f \colon A \to B $ and $ g \colon B \to C $ have respective companions $ (v, \alpha, \beta) $ and $ (w, \gamma, \delta) $ then $ g f $ has $ w \bdot v $ as companion with binding cells $$ \bfig\scalefactor{.8} \square/`=`=`>/[A`B`A`B;```f] \place(250,250)[{\scriptstyle \id_f}] \square(500,0)/=``@{>}|{\usebox{\bbox}}`>/[B`B`B`C;``w`g] \place(750,250)[{\scriptstyle \gamma}] \square(0,500)/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``v`f] \place(250,750)[{\scriptstyle \alpha}] \square(500,500)/=``@{>}|{\usebox{\bbox}}`/[A`A`B`B;``v`] \place(750,750)[{\scriptstyle 1_v}] \place(1450,500)[\mbox{and}] \square(1900,0)/=`@{>}|{\usebox{\bbox}}``=/[B`B`C`C;`w``] \place(2150,250)[{\scriptstyle 1_w}] \square(2400,0)/>`@{>}|{\usebox{\bbox}}`=`=/[B`C`C`C\rlap{\ .};g`w``] \place(2650,250)[{\scriptstyle \delta}] \square(1900,500)/>`@{>}|{\usebox{\bbox}}`=`/[A`B`B`B;f`v``] \place(2150,750)[{\scriptstyle \beta}] \square(2400,500)/>``=`/[B`C`B`C;g```] \place(2650,750)[{\scriptstyle \id_g}] \efig $$ Two companions $ (v, \alpha, \beta) $ and $ (v', \alpha', \beta') $ for the same $ f $ are isomorphic by the globular isomorphism $$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`=`=/[A`B`B`B\rlap{\ .};f`v``] \place(250,250)[{\scriptstyle \beta}] \square(0,500)/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``v'`] \place(250,750)[{\scriptstyle \alpha'}] \efig $$ We usually choose a representative companion from each isomorphism class and call it $ (f_*, \psi_f, \chi_f) $ $$ \bfig\scalefactor{.8} \square/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``f_*`f] \place(250,250)[{\scriptstyle \psi_f}] \square(1200,0)/>`@{>}|{\usebox{\bbox}}`=`=/[A`B`B`B\rlap{\ .};f`f_*``] \place(1450,250)[{\scriptstyle \chi_f}] \efig $$ The choice is arbitrary 
but it simplifies things if we choose the companion of $ 1_A $ to be $ (\id_A, 1_{\id_A}, 1_{\id_A}) $. In all of our examples there is a canonical choice and for that $ (1_A)_* = \id_A $. To lighten the notation, we often write the binding cells $ \psi_f $ and $ \chi_f $ as corner brackets in diagrams: $$ \bfig\scalefactor{.8} \square/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``f_*`f] \place(250,200)[\ulcorner] \place(850,250)[\mbox{and}] \square(1200,0)/>`@{>}|{\usebox{\bbox}}`=`=/[A`B`B`B\rlap{\ .};f`f_*``] \place(1450,250)[\lrcorner] \efig $$ We also use $ = $ and $ {\mbox{\rule{.23mm}{2.3mm}\hspace{.6mm}\rule{.23mm}{2.3mm}}} $ for horizontal and vertical identity cells. There is a useful technique, called {\em sliding}, where we slide a horizontal arrow around a corner into a vertical one. Specifically, there are bijections natural in every way that makes sense, $$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/<1000,500>[A`C`D`E;`v`w`h] \morphism(0,500)|a|/>/<500,0>[A`B;f] \morphism(500,500)|a|/>/<500,0>[B`C;g] \place(500,250)[{\scriptstyle \alpha}] \place(1500,250)[\longleftrightarrow] \square(2000,-150)/>`@{>}|{\usebox{\bbox}}``>/<500,800>[A`B`D`E;f`v``h] \morphism(2500,650)|r|/@{>}|{\usebox{\bbox}}/<0,-400>[B`C;g_*] \morphism(2500,250)|r|/@{>}|{\usebox{\bbox}}/<0,-400>[C`E;w] \place(2250,250)[{\scriptstyle \beta}] \efig $$ and also $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/<1000,500>[A`B`C`E;f`v`w`] \morphism(0,0)|b|/>/<500,0>[C`D;g] \morphism(500,0)|b|/>/<500,0>[D`E;h] \place(500,250)[{\scriptstyle \alpha}] \place(1500,250)[\longleftrightarrow] \square(2000,-150)/>``@{>}|{\usebox{\bbox}}`>/<500,800>[A`B`D`E\rlap{\ .};f``w`h] \morphism(2000,650)|l|/@{>}|{\usebox{\bbox}}/<0,-400>[A`C;v] \morphism(2000,250)|l|/@{>}|{\usebox{\bbox}}/<0,-400>[C`D;g_*] \place(2250,250)[{\scriptstyle \beta}] \efig $$ If we combine the two we get a bijection $$ \bfig\scalefactor{.8} 
\square(0,150)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \place(250,400)[{\scriptstyle \alpha}] \place(1000,400)[\longleftrightarrow] \square(1500,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/<500,400>[C`B`D`D;`g_*`w`] \square(1500,400)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/<500,400>[A`A`C`B;`v`f_*`] \place(1750,400)[{\scriptstyle \widehat{\alpha}}] \efig $$ which is, in a sense, the conceptual basis for retrocells. That, and the idea that $ f $ and $ f_* $ are really the same morphism in different roles. We refer the reader to \cite{Gra20} for all unexplained double category matters. \section{Retrocells} Let $ {\mathbb A} $ be a double category in which every horizontal arrow has a companion and choose a companion for each (with $ \id_A $ as the companion of $ 1_A $). \begin{definition} A {\em retrocell} $ \alpha $ in $ {\mathbb A} $, denoted $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \morphism(170,250)/<=/<200,0>[`;\alpha] \efig $$ is a (standard) double cell of $ {\mathbb A} $ of the form $$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/<500,400>[B`C`D`D\rlap{\ .};`w`g_*`] \square(0,400)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/<500,400>[A`A`B`C;`f_*`v`] \place(250,400)[{\scriptstyle \alpha}] \efig $$ \end{definition} \begin{theorem} The objects, horizontal and vertical arrows of $ {\mathbb A} $ together with retrocells, form a double category $ {\mathbb A}^{ret} $. 
\end{theorem} \begin{proof} The horizontal composite $ \beta \alpha $ of retrocells $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \morphism(180,250)/<=/<200,0>[`;\alpha] \square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[B`E`D`F;h``x`k] \morphism(680,250)/<=/<200,0>[`;\beta] \efig $$ is given by $$ \bfig\scalefactor{.8} \square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[E`E`F`F;`x`x`] \place(250,250)[=] \square(0,500)/=`@{>}|{\usebox{\bbox}}``/<500,1000>[A`A`E`E;`(hf)_*``] \place(250,1000)[{\scriptstyle \cong}] \morphism(500,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*] \morphism(500,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`E;h_*] \square(500,0)/``@{>}|{\usebox{\bbox}}`=/[E`D`F`F;``k_*`] \place(750,500)[{\scriptstyle \beta}] \square(500,500)/=``@{>}|{\usebox{\bbox}}`/[B`B`E`D;``w`] \square(500,1000)/=``@{>}|{\usebox{\bbox}}`/[A`A`B`B;``f_*`] \place(750,1250)[=] \square(1000,0)/=``@{>}|{\usebox{\bbox}}`=/[D`D`F`F;``k_*`] \place(1250,250)[=] \square(1000,500)/``@{>}|{\usebox{\bbox}}`/[B`C`D`D;``g_*`] \place(1250,1000)[{\scriptstyle \alpha}] \square(1000,1000)/=``@{>}|{\usebox{\bbox}}`/[A`A`B`C;``v`] \square(1500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[C`C`F`F;``(kg)_*`] \place(1750,500)[{\scriptstyle \cong}] \square(1500,1000)/=``@{>}|{\usebox{\bbox}}`/[A`A`C`C;``v`] \place(1750,1250)[=] \efig $$ where the \ $ \cong $ \ represent the canonical isomorphisms $ (hf)_* \cong h_* \bdot f_* $ and $ (kg)_* \cong k_* \bdot g_* $, $$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`=`=/<1000,500>[A`E`E`E;`(hf)_*``] \place(500,250)[\lrcorner] \morphism(0,500)|a|/>/<500,0>[A`B;f] \morphism(500,500)|a|/>/<500,0>[B`E;h] \square(0,500)/>`=`=`/[A`B`A`B;f```] \place(250,750)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}] \square(500,500)/=``@{>}|{\usebox{\bbox}}`/[B`B`B`E;``h_*`] \place(750,750)[\ulcorner] \square(0,1000)/=`=`@{>}|{\usebox{\bbox}}`/[A`A`A`B;``f_*`] \place(250,1250)[\ulcorner] 
\square(500,1000)/=``@{>}|{\usebox{\bbox}}`/[A`A`B`B;``f_*`] \place(750,1250)[=] \place(1400,750)[\mbox{and}] \square(1850,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[D`D`F`F;`k_*`k_*`] \place(2100,250)[=] \square(2350,0)/>``=`=/[D`F`F`F\rlap{\ .};k```] \place(2600,250)[\lrcorner] \square(1850,500)/>`@{>}|{\usebox{\bbox}}`=`/[C`D`D`D;g`g_*``] \place(2100,750)[\lrcorner] \square(2350,500)/>``=`/[D`F`D`F;k```] \place(2600,750)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}] \square(1850,1000)/=`=`@{>}|{\usebox{\bbox}}`/<1000,500>[C`C`C`F;``(kg)_*`] \place(2350,1250)[\ulcorner] \efig $$ The vertical composite $ \alpha' \bdot \alpha $ of retrocells $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[C`D`C'`D';g`v'`w'`h] \morphism(180,250)/<=/<200,0>[`;\alpha'] \square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`B`C`D;f`v`w`] \morphism(180,750)/<=/<200,0>[`;\alpha] \efig $$ is $$ \bfig\scalefactor{.8} \square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[D`D`D'`D';`w'`w'`] \place(250,250)[=] \square(0,500)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`C`D`D;`w`g_*`] \square(0,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`C;`f_*`v`] \place(250,1000)[{\scriptstyle \alpha}] \square(500,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[D`C'`D'`D'\rlap{\ .};``h_*`] \place(750,500)[{\scriptstyle \alpha'}] \square(500,500)/=``@{>}|{\usebox{\bbox}}`/[C`C`D`C';``v'`] \square(500,1000)/=``@{>}|{\usebox{\bbox}}`/[A`A`C`C;``v`] \place(750,1250)[=] \efig $$ Horizontal and vertical identities are $$ \bfig\scalefactor{.8} \square(0,250)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`C`C;`v`v`] \morphism(180,500)/<=/<200,0>[`;1_v] \place(700,500)[=] \square(900,0)/`@{>}|{\usebox{\bbox}}`=`=/[A`C`C`C;`v``] \place(1150,500)[=] \square(900,500)/=`=`@{>}|{\usebox{\bbox}}`/[A`A`A`C;``v`] \place(1650,500)[\mbox{and}] \square(1900,250)/>`=`=`>/[A`B`A`B;f```f] \place(2150,500)[{\scriptstyle \id_f}] 
\place(2600,500)[=] \square(2900,0)/`=`@{>}|{\usebox{\bbox}}`=/[B`A`B`B\rlap{\ .};``f_*`] \place(3150,500)[=] \square(2900,500)/=`@{>}|{\usebox{\bbox}}`=`/[A`A`B`A;`f_*``] \efig $$ There are a number of things to check (horizontal and vertical unit laws and associativities as well as interchange), all of which are straightforward calculations and will be left to the reader. It is merely a question of writing out the diagrams and following the steps indicated schematically below. The identity laws are trivial because of our conventions that $ (1_A)_* = \id_A $ and vertical identities are as strict as in $ {\mathbb A} $. For retrocells $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A_0`A_1`C_0`C_1;f_1`v_0`v_1`g_1] \morphism(180,250)/<=/<200,0>[`;\alpha_1] \square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[A_1`A_2`C_1`C_2;f_2``v_2`g_2] \morphism(680,250)/<=/<200,0>[`;\alpha_2] \square(1000,0)/>``@{>}|{\usebox{\bbox}}`>/[A_2`A_3`C_2`C_3\rlap{\ ,};f_3``v_3`g_3] \morphism(1180,250)/<=/<200,0>[`;\alpha_3] \efig $$ $ \alpha_3 (\alpha_2 \alpha_1) $ is a composite of 17 cells arranged in a $ 4 \times 7 $ array represented schematically as \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(70,40) \put(0,0){\framebox(70,40){}} \put(10,0){\line(0,1){40}} \put(20,0){\line(0,1){40}} \put(30,0){\line(0,1){40}} \put(40,0){\line(0,1){40}} \put(50,0){\line(0,1){40}} \put(60,0){\line(0,1){40}} \put(0,10){\line(1,0){10}} \put(10,20){\line(1,0){10}} \put(20,10){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(30,10){\line(1,0){10}} \put(30,30){\line(1,0){10}} \put(40,20){\line(1,0){10}} \put(50,10){\line(1,0){10}} \put(50,30){\line(1,0){10}} \put(60,30){\line(1,0){10}} \put(5,25){\makebox(0,0){$\scriptstyle\cong$}} \put(15,10){\makebox(0,0){$\scriptstyle \alpha_3$}} \put(25,30){\makebox(0,0){$\scriptstyle \cong$}} \put(35,20){\makebox(0,0){$\scriptstyle \alpha_2$}} \put(45,30){\makebox(0,0){$\scriptstyle \alpha_1$}} 
\put(55,20){\makebox(0,0){$\scriptstyle \cong$}} \put(65,15){\makebox(0,0){$\scriptstyle \cong$}} \put(115,20){(1)} \put(75,0){.} \end{picture} \end{center} \noindent The empty rectangles are horizontal identities and the $ \cong $ represent canonical isomorphisms generated by companions. $ (\alpha_3 \alpha_2) \alpha_1 $ on the other hand is of the form \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(70,40) \put(0,0){\framebox(70,40){}} \put(10,0){\line(0,1){40}} \put(20,0){\line(0,1){40}} \put(30,0){\line(0,1){40}} \put(40,0){\line(0,1){40}} \put(50,0){\line(0,1){40}} \put(60,0){\line(0,1){40}} \put(0,10){\line(1,0){10}} \put(10,10){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(30,10){\line(1,0){10}} \put(30,30){\line(1,0){10}} \put(40,20){\line(1,0){10}} \put(40,30){\line(1,0){10}} \put(50,20){\line(1,0){10}} \put(60,30){\line(1,0){10}} \put(5,25){\makebox(0,0){$\scriptstyle\cong$}} \put(15,20){\makebox(0,0){$\scriptstyle \cong$}} \put(25,10){\makebox(0,0){$\scriptstyle \alpha_3$}} \put(35,20){\makebox(0,0){$\scriptstyle \alpha_2$}} \put(45,10){\makebox(0,0){$\scriptstyle \cong$}} \put(55,30){\makebox(0,0){$\scriptstyle \alpha_1$}} \put(65,20){\makebox(0,0){$\scriptstyle \cong$}} \put(115,20){(2)} \put(75,0){.} \end{picture} \end{center} \noindent It is now clear what to do. Switch $ \alpha_3 $ with $ \cong $ in (1) and $ \alpha_1 $ with $ \cong $ in (2) to get \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(30,40) \put(0,0){\framebox(30,40){}} \put(10,0){\line(0,1){40}} \put(20,0){\line(0,1){40}} \put(30,0){\line(0,1){40}} \put(0,20){\line(1,0){10}} \put(10,10){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(5,10){\makebox(0,0){$\scriptstyle \alpha_3$}} \put(15,20){\makebox(0,0){$\scriptstyle \alpha_2$}} \put(25,30){\makebox(0,0){$\scriptstyle \alpha_1$}} \end{picture} \end{center} \noindent in the middle in both cases. 
The $ 4 \times 2 $ block on the left in (1) becomes \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(20,40) \put(0,0){\framebox(20,40){}} \put(10,0){\line(0,1){40}} \put(20,0){\line(0,1){40}} \put(0,10){\line(1,0){10}} \put(10,20){\line(1,0){10}} \put(5,25){\makebox(0,0){$\scriptstyle \cong$}} \put(15,30){\makebox(0,0){$\scriptstyle \cong$}} \put(25,0){.} \end{picture} \end{center} \noindent which is not formally the same as $ 4 \times 2 $ block in (2), but they are equal by one of the coherence identities for $ (\ )_* $. We write it out $$ \bfig\scalefactor{.8} \square/=`@{>}|{\usebox{\bbox}}``=/<500,1500>[A_0`A_0`A_3`A_3;`(f_3 f_2 f_1)_*``] \place(250,750)[{\scriptstyle \cong}] \morphism(500,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-1000>[A_0`A_2;(f_2 f_1)_*] \morphism(500,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_2`A_3;f_{3*}] \square(500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,500>[A_2`A_2`A_3`A_3;``f_{3*}`] \place(750,250)[=] \square(500,0)/=```/<500,1500>[A_0`A_0`A_3`A_3;```] \morphism(1000,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_0`A_1;f_{1*}] \morphism(1000,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_1`A_2;f_{2*}] \place(750,1150)[{\scriptstyle \cong}] \place(1400,850)[=] \square(2000,0)/=`@{>}|{\usebox{\bbox}}``=/<500,1500>[A_0`A_0`A_3`A_3;`(f_3 f_2 f_1)_*``] \place(2250,750)[{\scriptstyle \cong}] \morphism(2500,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_0`A_1;f_{1*}] \morphism(2500,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-1000>[A_1`A_3;(f_3 f_2)_*] \square(2500,1000)/=``@{>}|{\usebox{\bbox}}`=/[A_0`A_0`A_1`A_1;``f_{1*}`] \place(2750,1250)[{\scriptstyle =}] \morphism(3000,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_1`A_2;f_{2*}] \morphism(3000,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A_2`A_3\rlap{\ .};f_{3*}] \morphism(2500,0)/=/<500,0>[A_3`A_3;] \place(2750,700)[{\scriptstyle \cong}] \efig $$ There may be something to worry about here because $ (f_2 f_1)_* \cong f_{2*} \bdot f_{1*} $ involves $ \chi_{f_2 f_1} $ whereas $ (f_3 f_2)_* \cong f_{3*} \bdot f_{2*} $ 
involves $ \chi_{f_3 f_2} $ which are unrelated. However both $ \chi_{f_2 f_1} $ and $ \chi_{f_3 f_2} $ cancel in the composites. The left hand side is \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(40,50) \put(0,0){\framebox(40,50){}} \put(10,10){\line(0,1){10}} \put(10,20){\line(0,1){10}} \put(20,0){\line(0,1){50}} \put(30,30){\line(0,1){20}} \put(40,0){\line(0,1){50}} \put(0,10){\line(1,0){20}} \put(0,20){\line(1,0){40}} \put(0,30){\line(1,0){40}} \put(20,40){\line(1,0){20}} \put(10,5){\makebox(0,0){$\scriptstyle \chi_{f_3 f_2 f_1}$}} \put(15,15){\makebox(0,0){$\scriptstyle \psi_{f_3}$}} \put(5,25){\makebox(0,0){$\scriptstyle \psi_{f_2 f_1}$}} \put(30,25){\makebox(0,0){$\scriptstyle\chi_{f_2 f_1}$}} \put(35,35){\makebox(0,0){$\scriptstyle \psi_{f_2}$}} \put(25,45){\makebox(0,0){$\scriptstyle \psi_{f_1}$}} \end{picture} \end{center} \noindent and when we cancel $ \chi_{f_2 f_1} $ with $ \psi_{f_2 f_1} $ leaving $ \id_{f_2 f_1} $, that composite reduces to \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(30,40) \put(0,0){\framebox(30,40){}} \put(10,10){\line(0,1){30}} \put(20,10){\line(0,1){30}} \put(30,0){\line(0,1){40}} \put(0,10){\line(1,0){30}} \put(0,20){\line(1,0){30}} \put(0,30){\line(1,0){30}} \put(15,5){\makebox(0,0){$\scriptstyle \chi_{f_3 f_2 f_1}$}} \put(25,15){\makebox(0,0){$\scriptstyle \psi_{f_3}$}} \put(15,25){\makebox(0,0){$\scriptstyle \psi_{f_2}$}} \put(5,35){\makebox(0,0){$\scriptstyle \psi_{f_1}$}} \end{picture} \end{center} \noindent as does the right hand side. The $ 4 \times 2 $ block on the right is the same with the roles of $ \psi $ and $ \chi $ interchanged. This completes the proof of associativity of horizontal composition of retrocells. The associativity for vertical composition is much simpler as it does not involve $ \psi $'s or $ \chi $'s, only the associativity isomorphisms of $ {\mathbb A} $. 
In particular if $ {\mathbb A} $ were strict, then $ {\mathbb A}^{ret} $ would be too, and the proof of associativity would be merely a question of writing down the two composites and observing that they are exactly the same. For interchange consider retrocells $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B_0`B_1`C_0`C_1;g_1`w_0`w_1`h_1] \morphism(180,250)/<=/<200,0>[`;\beta_1] \square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[B_1`B_2`C_1`C_2\rlap{\ .};g_2``w_2`h_2] \morphism(680,250)/<=/<200,0>[`;\beta_2] \square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A_0`A_1`B_0`B_1;f_1`v_0`v_1`] \morphism(180,750)/<=/<200,0>[`;\alpha_1] \square(500,500)/>``@{>}|{\usebox{\bbox}}`/[A_1`A_2`B_1`B_2;f_2``v_2`] \morphism(680,750)/<=/<200,0>[`;\alpha_2] \efig $$ Then the pattern for $ (\beta_2 \beta_1) \bdot (\alpha_2 \alpha_1) $ is \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(80,40) \put(0,0){\framebox(80,40){}} \put(10,0){\line(0,1){40}} \put(20,0){\line(0,1){40}} \put(30,0){\line(0,1){40}} \put(40,0){\line(0,1){40}} \put(50,0){\line(0,1){40}} \put(60,0){\line(0,1){40}} \put(70,0){\line(0,1){40}} \put(0,10){\line(1,0){50}} \put(60,10){\line(1,0){10}} \put(0,20){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(50,20){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(30,30){\line(1,0){50}} \put(5,30){\makebox(0,0){$\scriptstyle \cong$}} \put(15,20){\makebox(0,0){$\scriptstyle \alpha_2$}} \put(25,30){\makebox(0,0){$\scriptstyle \alpha_1$}} \put(35,20){\makebox(0,0){$\scriptstyle \cong$}} \put(45,20){\makebox(0,0){$\scriptstyle \cong$}} \put(55,10){\makebox(0,0){$\scriptstyle \beta_2$}} \put(65,20){\makebox(0,0){$\scriptstyle \beta_1$}} \put(75,15){\makebox(0,0){$\scriptstyle \cong$}} \end{picture} \end{center} \noindent and for $ (\beta_2 \bdot \alpha_2) (\beta_1 \bdot \alpha_1) $ it is \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(60,40) \put(0,0){\framebox(60,40){}} \put(10,0){\line(0,1){40}} 
\put(20,0){\line(0,1){40}} \put(30,0){\line(0,1){40}} \put(40,0){\line(0,1){40}} \put(50,0){\line(0,1){40}} \put(10,10){\line(1,0){10}} \put(30,10){\line(1,0){20}} \put(0,20){\line(1,0){10}} \put(20,20){\line(1,0){20}} \put(50,20){\line(1,0){10}} \put(10,30){\line(1,0){20}} \put(40,30){\line(1,0){10}} \put(5,30){\makebox(0,0){$\scriptstyle \cong$}} \put(15,20){\makebox(0,0){$\scriptstyle \alpha_2$}} \put(25,10){\makebox(0,0){$\scriptstyle \beta_2$}} \put(35,30){\makebox(0,0){$\scriptstyle \alpha_1$}} \put(45,20){\makebox(0,0){$\scriptstyle \beta_1$}} \put(55,10){\makebox(0,0){$\scriptstyle \cong$}} \put(65,0){.} \end{picture} \end{center} \noindent The two $ \cong $ in the middle of the first one are inverse to each other, $$ g_{2*} \bdot g_{1*} \to^{\cong} (g_2 g_1)_* \to^{\cong} g_{2*} \bdot g_{1*}\ , $$ so each of $ (\beta_2 \beta_1) \bdot (\alpha_2 \alpha_1) $ and $ (\beta_2 \bdot \alpha_2) (\beta_1 \bdot \alpha_1) $ is equal to \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(50,40) \put(0,0){\framebox(50,40){}} \put(10,0){\line(0,1){40}} \put(20,0){\line(0,1){40}} \put(30,0){\line(0,1){40}} \put(40,0){\line(0,1){40}} \put(50,0){\line(0,1){40}} \put(10,10){\line(1,0){10}} \put(30,10){\line(1,0){10}} \put(0,20){\line(1,0){10}} \put(20,20){\line(1,0){10}} \put(40,20){\line(1,0){10}} \put(10,30){\line(1,0){10}} \put(30,30){\line(1,0){10}} \put(5,30){\makebox(0,0){$\scriptstyle \cong$}} \put(15,20){\makebox(0,0){$\scriptstyle \alpha_2$}} \put(25,30){\makebox(0,0){$\scriptstyle \alpha_1$}} \put(25,10){\makebox(0,0){$\scriptstyle \beta_2$}} \put(35,20){\makebox(0,0){$\scriptstyle \beta_1$}} \put(45,10){\makebox(0,0){$\scriptstyle \cong$}} \end{picture} \end{center} \noindent completing the proof. \end{proof} \begin{theorem} \label{Thm-DoubDual} (1) $ {\mathbb A}^{ret} $ has a canonical choice of companions. 
\noindent (2) There is a canonical isomorphism of double categories with companions $$ {\mathbb A} \to^\cong {\mathbb A}^{ret\ ret} $$ which is the identity on objects and horizontal and vertical arrows. \end{theorem} \begin{proof} The companion of $ f \colon A \to B $ in $ {\mathbb A}^{ret} $ is $ f_* $, $ f $'s companion in $ {\mathbb A} $ with binding retrocells $$ \bfig\scalefactor{.8} \square(0,250)/>`@{>}|{\usebox{\bbox}}`=`=/[A`B`B`B;f`f_*``] \morphism(170,500)/<=/<200,0>[`;] \place(800,500)[=] \square(1200,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`B`B;`\id_B`\id_B`] \square(1200,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;`f_*`f_*`] \place(1450,500)[{\scriptstyle 1}] \efig $$ and $$ \bfig\scalefactor{.8} \square(0,250)/=`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`B;``f_*`f] \morphism(170,500)/<=/<200,0>[`;] \place(800,500)[=] \square(1200,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`B`B\rlap{\ .};`f_*`f_*`] \square(1200,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`A`A;`\id_A`\id_A`] \place(1450,500)[{\scriptstyle 1}] \efig $$ The binding equations only involve canonical isos so hold by coherence. A cell $ \alpha $ in $ {\mathbb A}^{ret\,ret} $, i.e. 
a retrocell in $ {\mathbb A}^{ret} $ is $$ \bfig\scalefactor{.8} \square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \morphism(170,500)/<=/<200,0>[`;\alpha] \place(800,500)[=] \square(1200,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`C`D`D;`w`g_*`] \square(1200,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`C;`f_*`v`] \place(1450,500)[{\scriptstyle \alpha}] \place(2200,500)[\mbox{in\ \ ${\mathbb A}^{ret}$}] \efig $$ $$ \bfig\scalefactor{.8} \place(800,750)[=] \place(100,0)[\ ] \square(1250,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[C`D`D`D;`g_*`\id_D`] \square(1250,500)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`B`C`D;`v`w`] \square(1250,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`A`B;`\id_A`f_*`] \place(1500,750)[{\scriptstyle \alpha}] \place(2200,750)[\mbox{in\ \ ${\mathbb A}$}] \efig $$ and these are in canonical bijection with $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D\rlap{\ .};f`v`w`g] \place(250,250)[{\scriptstyle \alpha'}] \efig $$ Checking that composition and identities are preserved is a straightforward calculation and is omitted. \end{proof} \begin{example}\rm If $ \cal{A} $ is a $ 2 $-category, the double category of quintets $ {\mathbb Q}\cal{A} $ has the same objects as $ \cal{A} $, the $ 1 $-cells of $ \cal{A} $ as both horizontal and vertical arrows, and cells $ \alpha $ $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`h`k`g] \place(250,250)[{\scriptstyle \alpha}] \place(850,250)[=] \square(1200,0)[A`B`C`D;f`h`k`g] \morphism(1550,350)/=>/<-140,-140>[`;\alpha] \efig $$ i.e. a $ 2 $-cell $ \alpha \colon k f \to g h $. Horizontal and vertical composition are given by pasting. Every horizontal arrow $ f \colon A \to B $ has a companion, $ f_* $, namely $ f $ itself considered as a vertical arrow. 
A retrocell $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`h`k`g] \morphism(180,250)/<=/<200,0>[`;\alpha] \efig $$ is $$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`C`D`D;`k`g_*`] \square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`C;`f_*`h`] \place(250,500)[{\scriptstyle \alpha}] \place(900,500)[=] \square(1400,250)[A`A`D`D;1_A`k f`g h`1_D] \morphism(1750,600)/=>/<-140,-140>[`;\alpha] \efig $$ i.e. a coquintet $$ \bfig\scalefactor{.8} \square[A`B`C`D\rlap{\ .};f`h`k`g] \morphism(220,200)/=>/<140,140>[`;\alpha] \efig $$ Thus $$ ({\mathbb Q} {\cal{A}})^{ret} = {\rm co}{\mathbb Q}{\cal{A}} = {\mathbb Q}({\cal{A}}^{co})\ . $$ \end{example} \section{Adjoints, companions, mates} The well-known mates calculus says that if we have functors $ F, G, H, K, U, V $ as below with $ F \dashv U $ and $ G \dashv V $, then there is a bijection between natural transformations $ t $ and $ u $ as below $$ \bfig\scalefactor{.8} \square[{\bf A}`{\bf B}`{\bf C}`{\bf D};H`U`V`K] \morphism(180,250)/=>/<200,0>[`;t] \place(900,250)[\longleftrightarrow] \square(1300,0)[{\bf C}`{\bf D}`{\bf A}`{\bf B}\rlap{\ .};K`F`G`H] \morphism(1680,250)/=>/<-200,0>[`;u] \efig $$ This is usually stated for bicategories but with the help of retrocells we can extend it to double categories (with companions). To say that two horizontal arrows are adjoint in a double category $ {\mathbb A} $ means they are so in the $ 2 $-category of horizontal arrows $ {\cal{H}}{\it or} {\mathbb A} $. 
So $ h $ left adjoint to $ f $ means we are given cells $$ \bfig\scalefactor{.8} \square/`=`=`=/<800,400>[A`A`A`A;```] \morphism(0,400)/>/<400,0>[A`B;f] \morphism(400,400)/>/<400,0>[B`A;h] \place(400,200)[{\scriptstyle \epsilon}] \place(1100,200)[\mbox{and}] \square(1400,0)/=`=`=`/<800,400>[B`B`B`B;```] \morphism(1400,0)|b|/>/<400,0>[B`A;h] \morphism(1800,0)|b|/>/<400,0>[A`B;f] \place(1800,200)[{\scriptstyle \eta}] \efig $$ satisfying the ``triangle'' identities $$ \bfig\scalefactor{.8} \square/`=`=`=/<1000,500>[A`A`A`A;```] \morphism(0,500)/>/<500,0>[A`B;f] \morphism(500,500)/>/<500,0>[B`A;h] \place(500,250)[{\scriptstyle \epsilon}] \square(1000,0)/>``=`>/[A`B`A`B;f```f] \place(1250,250)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}] \square(0,500)/>`=`=`/[A`B`A`B;f```] \place(250,750)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}] \square(500,500)/=``=`/<1000,500>[B`B`B`B;```] \place(1000,750)[{\scriptstyle \eta}] \place(1900,500)[=] \square(2300,250)/>`=`=`>/[A`B`A`B;f```f] \place(2550,500)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}] \efig $$ and $$ \bfig\scalefactor{.8} \square/>`=`=`>/[B`A`B`A;h```h] \place(250,250)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}] \square(500,0)/``=`=/<1000,500>[A`A`A`A;```] \place(1000,250)[{\scriptstyle \epsilon}] \morphism(500,500)/>/<500,0>[A`B;f] \morphism(1000,500)/>/<500,0>[B`A;h] \square(0,500)/=`=`=`/<1000,500>[B`B`B`B;```] \place(500,750)[{\scriptstyle \eta}] \square(1000,500)/>``=`/[B`A`B`A;h```] \place(1250,750)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}] \place(1900,500)[=] \square(2300,250)/>`=`=`>/[B`A`B`A;h```h] \place(2550,500)[{\mbox{\rule{.2mm}{2mm}\hspace{.5mm}\rule{.2mm}{2mm}}}] \place(2950,0)[.] \efig $$ To say that the vertical arrows are adjoint means that they are so in the vertical bicategory $ {\cal{V}}{\it ert} {\mathbb A} $. 
So $ x $ is left adjoint to $ v $ if we are given cells $$ \bfig\scalefactor{.8} \square/=``=`=/<550,1000>[A`A`A`A;```] \place(275,500)[{\scriptstyle \epsilon}] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`C;v] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[C`A;x] \place(1000,500)[\mbox{and}] \square(1450,0)/=`=``=/<550,1000>[C`C`C`C;```] \morphism(2000,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[C`A;x] \morphism(2000,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`C;v] \place(1750,500)[{\scriptstyle \eta}] \efig $$ also satisfying the triangle identities. Suppose we are given horizontal arrows $ f $ and $ h $ with cells $ \alpha_1 $ and $ \beta_1 $ as below. In the presence of companions we can use sliding to transform them. We have bijections $$ \bfig\scalefactor{.8} \square(0,0)/`=`=`=/<1000,500>[A`A`A`A;```] \place(500,250)[{\scriptstyle \alpha_1}] \morphism(0,500)/>/<500,0>[A`B;f] \morphism(500,500)/>/<500,0>[B`A;h] \place(1300,250)[\longleftrightarrow] \square(1600,0)/>`=`@{>}|{\usebox{\bbox}}`=/[A`B`A`A;f``h_*`] \place(1850,250)[{\scriptstyle \alpha_2}] \place(2450,250)[\longleftrightarrow] \square(2800,-250)/=`=``=/<500,1000>[A`A`A`A;```] \place(3050,250)[{\scriptstyle \alpha_3}] \morphism(3300,750)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*] \morphism(3300,250)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;h_*] \efig $$ $$ \bfig\scalefactor{.8} \square(0,0)/=`=`=`/<1000,500>[B`B`B`B;```] \place(500,250)[{\scriptstyle \beta_1}] \morphism(0,0)|b|/>/<500,0>[B`A;h] \morphism(500,0)|b|<500,0>[A`B;f] \place(1300,250)[\longleftrightarrow] \square(1600,0)/=`@{>}|{\usebox{\bbox}}`=`>/[B`B`A`B;`h_*``f] \place(1850,250)[{\scriptstyle \beta_2}] \place(2400,250)[\longleftrightarrow] \square(2800,-250)/=``=`=/<500,1000>[B`B`B`B\rlap{\ .};```] \place(3050,250)[{\scriptstyle \beta_3}] \morphism(2800,750)/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;h_*] \morphism(2800,250)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*] \efig $$ \begin{proposition} \label{Prop-AdjComp} $ h $ is left adjoint to $ f $ 
with adjunctions $ \alpha_1 $ and $ \beta_1 $ if and only if $ f_* $ is left adjoint to $ h_* $ with adjunctions $ \beta_3 $ and $ \alpha_3 $. \end{proposition} \begin{theorem} \label{Thm-Mates} Consider horizontal morphisms $ f $ and $ g $ and vertical morphisms $ v $ and $ w $ as in $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D\rlap{\ .};f`v`w`g] \efig $$ (1) If $ x $ is left adjoint to $ v $ and $ y $ left adjoint to $ w $, then there is a bijection between cells $ \alpha $ and retrocells $ \beta $ as in $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \place(250,250)[{\scriptstyle \alpha}] \place(900,250)[\longleftrightarrow] \square(1300,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[C`D`A`B\rlap{\ .};g`x`y`f] \morphism(1620,250)/=>/<-150,0>[`;\beta] \efig $$ (2) If $ h $ is left adjoint to $ f $ and $ k $ left adjoint to $ g $, then there is a bijection between cells $ \alpha $ and retrocells $ \gamma $ as in $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \place(250,250)[{\scriptstyle \alpha}] \place(900,250)[\longleftrightarrow] \square(1300,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B`A`D`C\rlap{\ .};h`w`v`k] \morphism(1620,250)/=>/<-150,0>[`;\gamma] \efig $$ \end{theorem} \begin{proof} (1) Standard cells $ \alpha $ are in bijection with $ 2 $-cells $ \widehat{\alpha} $ in the bicategory $ {\cal{V}}{\it ert} {\mathbb A} $ $$ \bfig\scalefactor{.8} \square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \place(250,500)[{\scriptstyle \alpha}] \place(900,500)[\longleftrightarrow] \square(1300,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[C`B`D`D;`g_*`w`] \square(1300,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`C`B;`v`\alpha`] \place(1550,500)[{\scriptstyle \widehat{\alpha}}] \efig $$ and retrocells $ \beta $ are defined to be $ 2 $-cells in $ {\cal{V}}{\it 
ert}{\mathbb A} $
$$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[D`A`B`B\rlap{\ .};`y`f_*`] \square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[C`C`D`A;`g_*`x`] \place(250,500)[{\scriptstyle \beta}] \efig $$
Then our claimed bijection is just the usual bijection from bicategory theory:
$$ \frac{\widehat{\alpha} \colon g_* \bdot v \to w \bdot f_*} {\beta \colon y \bdot g_* \to f_* \bdot x\rlap{\ .}} $$
(2) From the previous proposition we have that $ f_* $ is left adjoint to $ h_* $ and $ g_* $ left adjoint to $ k_* $, and again our bijection follows from the usual bicategory one:
$$ \frac{\widehat{\alpha} \colon g_* \bdot v \to w \bdot f_*} {\gamma \colon v \bdot h_* \to k_* \bdot w\rlap{\ .}} $$
\end{proof}
\begin{corollary}
(1) If $ f $ has a left adjoint $ h $, $ g $ a left adjoint $ k $, $ v $ a right adjoint $ x $ and $ w $ a right adjoint $ y $, then we have a bijection of cells
$$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \place(250,250)[{\scriptstyle \alpha}] \place(900,250)[\longleftrightarrow] \square(1300,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[D`C`B`A\rlap{\ .};k`y`x`h] \place(1550,250)[{\scriptstyle \delta}] \efig $$
(2) We get the same bijection if left and right are interchanged in all four adjunctions.
\end{corollary} \begin{proof} (1) We have the following bijections $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \place(250,250)[{\scriptstyle \alpha}] \place(800,250)[\longleftrightarrow] \square(1100,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B`A`D`C;h`w`v`k] \morphism(1450,250)/=>/<-200,0>[`;\beta] \place(1900,250)[\longleftrightarrow] \square(2200,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[D`C`B`A;k`y`x`h] \place(2450,250)[{\scriptstyle \delta}] \efig $$ the first by direct application of part (2) of Theorem \ref{Thm-Mates} and the second by applying part (1) of Theorem \ref{Thm-Mates} in $ {\mathbb A}^{ret} $ where $ x \dashv v $ and $ y \dashv w $. Finally $ \delta $ is a cell in $ ({\mathbb A}^{ret})^{ret} \cong {\mathbb A} $. \noindent (2) For this we use (1) first and then (2) in $ {\mathbb A}^{ret} $ $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \place(250,250)[{\scriptstyle \alpha}] \place(800,250)[\longleftrightarrow] \square(1100,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[C`D`A`B;g`x`y`f] \morphism(1450,250)/=>/<-200,0>[`;\gamma] \place(1900,250)[\longleftrightarrow] \square(2200,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[D`C`B`A\rlap{\ .};k`y`x`h] \place(2450,250)[{\scriptstyle \delta}] \efig $$ \end{proof} Note that the statement of the corollary does not refer to retrocells or companions but it does not seem possible to prove it directly without companions. The infamous pinwheel \cite{DawPar93B} pops up in all attempts to do so. \section{Coretrocells} There is a dual situation giving two more bijections in the presence of right adjoints, but the notion of retrocell is not self-dual. In fact there is a dual notion, coretrocell, which also comes up in practice as we will see later. Like for $ 2 $-categories there are duals op and co for double categories. 
$ {\mathbb A}^{op} $ has the horizontal direction reversed and $ {\mathbb A}^{co} $ the vertical. If $ {\mathbb A} $ has companions there is no reason why $ {\mathbb A}^{op} $ or $ {\mathbb A}^{co} $ should, and even if they did there is no relation between the retrocells there and those of $ {\mathbb A} $. Companions in $ {\mathbb A}^{op} $ or $ {\mathbb A}^{co} $ correspond to conjoints in $ {\mathbb A} $ and we will use these to define coretrocells. For completeness we recall the notion of conjoint. More details can be found in \cite{Gra20}.
\begin{definition}
Let $ f \colon A \to B $ be a horizontal arrow in $ {\mathbb A} $. A {\em conjoint} for $ f $ is a vertical arrow $ v \colon B \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy A $ together with two cells (conjunctions)
$$ \bfig\scalefactor{.8} \square/>`=`@{>}|{\usebox{\bbox}}`=/[A`B`A`A;f``v`] \place(250,250)[{\scriptstyle \alpha}] \place(900,250)[\mbox{and}] \square(1300,0)/=`@{>}|{\usebox{\bbox}}`=`>/[B`B`A`B;`v``f] \place(1550,250)[{\scriptstyle \beta}] \efig $$
such that
$$ \bfig\scalefactor{.8} \square(0,250)/>`=`@{>}|{\usebox{\bbox}}`=/[A`B`A`A;f``v`] \place(250,500)[{\scriptstyle \alpha}] \square(500,250)/=``=`>/[B`B`A`B;```f] \place(750,500)[{\scriptstyle \beta}] \place(1600,500)[= \mbox{\ \ $\id_f$\quad and\quad}] \square(2200,0)/>`=`@{>}|{\usebox{\bbox}}`=/[A`B`A`A;f``v`] \place(2450,250)[{\scriptstyle \alpha}] \square(2200,500)/=`@{>}|{\usebox{\bbox}}`=`/[B`B`A`B;`v``] \place(2450,750)[{\scriptstyle \beta}] \place(3100,500)[= \mbox{\ \ $1_v$}] \place(3200,0)[.] \efig $$
\end{definition}
As we said, this is the vertical dual of the notion of companion and therefore has the corresponding properties. Conjoints are unique up to globular isomorphism when they exist, and we choose a representative that we call $ f^* $. We have $ (g f)^* \cong f^* \bdot g^* $ and $ 1^*_A \cong \id_A $.
The choice is arbitrary but in practice there is a canonical one and for that $ 1^*_A $ is usually $ \id_A $, which we will assume. The dual of sliding is {\em flipping}: we have bijections, natural in every way that makes sense, $$ \bfig\scalefactor{.9} \square(0,250)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/<1000,500>[A`C`D`E;`v`w`h] \place(500,500)[{\scriptstyle \alpha}] \morphism(0,750)|a|/>/<500,0>[A`B;f] \morphism(500,750)|a|/>/<500,0>[B`C;g] \place(1500,500)[\longleftrightarrow] \square(2100,0)/>``@{>}|{\usebox{\bbox}}`>/<500,1000>[B`C`D`E;g``w`h] \place(2350,500)[{\scriptstyle \beta}] \morphism(2100,1000)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;f^*] \morphism(2100,500)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[A`D;v] \efig $$ and $$ \bfig\scalefactor{.8} \square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/<1000,500>[A`B`C`E;f`v`w`] \place(500,500)[{\scriptstyle \alpha}] \morphism(0,250)|b|/>/<500,0>[C`D;g] \morphism(500,250)|b|/>/<500,0>[D`E;h] \place(1500,500)[\longleftrightarrow] \square(2100,0)/>`@{>}|{\usebox{\bbox}}``>/<500,1000>[A`B`C`D\rlap{\ .};f`v``g] \place(2350,500)[{\scriptstyle \beta}] \morphism(2600,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`E;w] \morphism(2600,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[E`D;h^*] \efig $$ We now complete Proposition \ref{Prop-AdjComp}. 
\begin{proposition} Assuming only those companions and conjoints mentioned, we have the following natural bijections $$ \bfig\scalefactor{.67} \square(0,250)/`=`=`=/<1000,500>[A`A`A`A;```] \place(500,500)[{\scriptstyle \alpha_1}] \morphism(0,750)|a|/>/<500,0>[A`B;f] \morphism(500,750)|a|/>/<500,0>[B`A;h] \place(1350,500)[\longleftrightarrow] \square(1700,250)/>`=`@{>}|{\usebox{\bbox}}`=/[A`B`A`A;f``h_*`] \place(1950,500)[{\scriptstyle \alpha_2}] \place(2650,500)[\longleftrightarrow] \square(3100,0)/=`=``=/<500,1000>[A`A`A`A;```] \place(3350,500)[{\scriptstyle \alpha_3}] \morphism(3600,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*] \morphism(3600,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;h_*] \morphism(500,100)/<->/<0,-300>[`;] \morphism(1950,100)/<->/<0,-300>[`;] \square(250,-1000)/>`@{>}|{\usebox{\bbox}}`=`=/[B`A`A`A;h`f^*``] \place(500,-750)[{\scriptstyle \alpha_4}] \place(1250,-750)[\longleftrightarrow] \square(1700,-1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`A`A;`f^*`h_*`] \place(1950,-750)[{\scriptstyle \alpha_5}] \morphism(500,-1200)/<->/<0,-300>[`;] \square(250,-2700)/=``=`=/<500,1000>[A`A`A`A;```] \place(500,-2200)[{\scriptstyle \alpha_6}] \morphism(250,-1700)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;h^*] \morphism(250,-2200)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;f^*] \square(2850,-3950)/=`=`=`/<1000,500>[B`B`B`B;```] \place(3350,-3700)[{\scriptstyle \beta_1}] \morphism(2850,-3950)|b|/>/<500,0>[B`A;h] \morphism(3350,-3950)|b|/>/<500,0>[A`B;f] \place(2520,-3700)[\longleftrightarrow] \square(1700,-3950)/=`@{>}|{\usebox{\bbox}}`=`>/[B`B`A`B;`h_*``f] \place(1950,-3700)[{\scriptstyle \beta_2}] \place(1200,-3700)[\longleftrightarrow] \square(250,-4200)/=``=`=/<500,1000>[B`B`B`B;```] \place(500,-3700)[{\scriptstyle \beta_3}] \morphism(250,-3200)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;h_*] \morphism(250,-3700)|l|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*] \morphism(1950,-2900)/<->/<0,-300>[`;] \morphism(3350,-2900)/<->/<0,-300>[`;] 
\square(3100,-2700)/=`=`@{>}|{\usebox{\bbox}}`>/[B`B`B`A;``f^*`h] \place(3350,-2450)[{\scriptstyle \beta_4}] \place(2650,-2450)[\longleftrightarrow] \square(1700,-2700)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`A`A;`h_*`f^*`] \place(1950,-2450)[{\scriptstyle \beta_5}] \morphism(3350,-1750)/<->/<0,-300>[`;] \square(3100,-1550)/=`=``=/<500,1000>[B`B`B`B\rlap{\ .};```] \place(3350,-1050)[{\scriptstyle \beta_6}] \morphism(3600,-550)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;f^*] \morphism(3600,-1050)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;h^*] \efig $$
The following are then equivalent.
\begin{itemize}
\item[(1)] $ h $ is left adjoint to $ f $ with adjunctions $ \alpha_1 $ and $ \beta_1 $
\item[(2)] $ h_* $ is a conjoint for $ f $ with conjunctions $ \alpha_2 $ and $ \beta_2 $
\item[(3)] $ f_* $ is left adjoint to $ h_* $ with adjunctions $ \alpha_3 $ and $ \beta_3 $
\item[(4)] $ f^* $ is a companion for $ h $ with binding cells $ \alpha_4 $ and $ \beta_4 $
\item[(5)] $ f^* $ is isomorphic to $ h_* $ with inverse isomorphisms $ \alpha_5 $ and $ \beta_5 $
\item[(6)] $ f^* $ is left adjoint to $ h^* $ with adjunctions $ \alpha_6 $ and $ \beta_6 $.
\end{itemize}
\end{proposition}
\begin{definition}
Suppose that in $ {\mathbb A} $ every horizontal arrow $ f $ has a conjoint $ f^* $. Then a {\em coretrocell}
$$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \morphism(260,200)|r|/=>/<0,200>[`;\alpha] \efig $$
is a (standard) cell
$$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[D`A`C`C;`g^*`v`] \place(250,500)[{\scriptstyle \alpha}] \square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`D`A;`w`f^*`] \efig $$
in $ {\mathbb A} $.
\end{definition}
Coretrocells are retrocells in $ {\mathbb A}^{co} $. So all properties of retrocells dualize to coretrocells. In particular we have a double category $ {\mathbb A}^{cor} $ whose cells are coretrocells.
Dualities can be confusing so we list them here. \begin{proposition} (1) If $ {\mathbb A} $ has conjoints then $ {\mathbb A}^{op} $ and $ {\mathbb A}^{co} $ have companions and \noindent (a) $ ({\mathbb A}^{cor})^{op} = ({\mathbb A}^{op})^{ret} $ \noindent (b) $ ({\mathbb A}^{cor})^{co} = ({\mathbb A}^{co})^{ret} $ \noindent (2) If $ {\mathbb A} $ has companions then $ {\mathbb A}^{op} $ and $ {\mathbb A}^{co} $ have conjoints and \noindent (a) $ ({\mathbb A}^{ret})^{op} = ({\mathbb A}^{op})^{cor} $ \noindent (b) $ ({\mathbb A}^{ret})^{co} = ({\mathbb A}^{co})^{cor} $ \noindent (3) Under the above conditions \noindent (a) $ ({\mathbb A}^{ret})^{coop} = ({\mathbb A}^{coop})^{ret} $ \noindent (b) $ ({\mathbb A}^{cor})^{coop} = ({\mathbb A}^{coop})^{cor} $. \end{proposition} Passing between $ {\mathbb A} $ and $ {\mathbb A}^{co} $ switches left adjoints to right (both horizontal and vertical), switches companions and conjoints, and retrocells with coretrocells. Thus we get the dual theorem for mates. \begin{theorem} Assume $ {\mathbb A} $ has conjoints. 
(1) If $ x $ is right adjoint to $ v $ and $ y $ right adjoint to $ w $, then there is a bijection between cells $ \alpha $ and coretrocells $ \beta $ $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \place(250,250)[{\scriptstyle \alpha}] \place(900,250)[\longleftrightarrow] \square(1300,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[C`D`A`B\rlap{\ .};g`x`y`f] \morphism(1550,200)|r|/=>/<0,200>[`;\beta] \efig $$ (2) If $ h $ is right adjoint to $ f $ and $ k $ right adjoint to $ g $, then there is a bijection between cells $ \alpha $ and coretrocells $ \gamma $ $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \place(250,250)[{\scriptstyle \alpha}] \place(900,250)[\longleftrightarrow] \square(1300,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B`A`D`C\rlap{\ .};g`w`v`f] \morphism(1550,200)|r|/=>/<0,200>[`;\gamma] \efig $$ \end{theorem} Whereas we think of companions as vertical arrows isomorphic to horizontal ones, it makes sense to think of a cell $ \alpha $ as above as a cell $$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[C`B`D`D;`g_*`w`] \place(250,500)[{\scriptstyle \widehat{\alpha}}] \square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`C`B;`v`f_*`] \efig $$ (which it corresponds to bijectively) and reversing its direction would give a natural notion of a cell in the opposite direction, thus giving retrocells. Coretrocells, on the other hand, are less intuitive. We think of conjoints as vertical arrows adjoint to horizontal ones, and although there is a bijection between cells $ \alpha $ and cells $$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`D`C`C\rlap{\ ,};`v`g^*`] \place(250,500)[{\scriptstyle \alpha^\vee}] \square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`A`D;`f^*`w`] \efig $$ this is more in the nature of a proposition than a tautology. 
Nevertheless, formally the two bijections are dual, so have the same status. Reversing the direction of the $ \alpha^\vee $ gives us coretrocells, and they do come up in practice as we will see in the next sections. \section{Retrocells for spans and such} If $ {\bf A} $ is a category with pullbacks, we get a double category $ {\mathbb S}{\rm pan} {\bf A} $ whose horizontal part is $ {\bf A} $, whose vertical arrows are spans and whose cells are span morphisms, modified to account for the horizontal arrows $$ \bfig\scalefactor{.8} \square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`S`T`g] \place(250,500)[{\scriptstyle \alpha}] \place(900,500)[=] \square(1300,0)[S`T`C`D\rlap{\ .};\alpha`\sigma_1 `\tau_1`g] \square(1300,500)/>`<-`<-`/[A`B`S`T;f`\sigma_0`\tau_0`] \efig $$ $ {\mathbb S}{\rm pan} {\bf A} $ has companions $ f_* $ and conjoints $ f^* $: $$ \bfig\scalefactor{.7} \place(0,400)[f_*\ \ =] \morphism(400,400)|r|/>/<0,400>[A`A;1_A] \morphism(400,400)|r|/>/<0,-400>[A`B;f] \place(900,400)[\mbox{and}] \place(1400,400)[f^*\ \ =] \morphism(1800,400)|r|/>/<0,400>[A`B;f] \morphism(1800,400)|r|/>/<0,-400>[A`A\rlap{\ \ .};1_A] \efig $$ A retrocell $ \beta $ is $$ \bfig\scalefactor{.8} \square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`S`T`g] \morphism(350,500)/=>/<-200,0>[`;\beta] \place(900,500)[=] \square(1400,0)/>`>`>`=/<650,500>[T \times_B A`S`D`D;\beta`\tau_1 p_1`g \sigma_1`] \square(1400,500)/=`<-`<-`/<650,500>[A`A`T \times_B A`S;`p_2`\sigma_0`] \efig $$ where $$ \bfig\scalefactor{.8} \square[T \times_B A`A`T`B;p_2`p_1`f`\tau_0] \efig $$ is a pullback. 
A coretrocell $ \gamma $ is $$ \bfig\scalefactor{.8} \square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`S`T`g] \morphism(250,450)|r|/=>/<0,200>[`;\gamma] \place(900,500)[=] \square(1400,0)/>`>`>`=/<550,500>[C \times_D T`S`C`C\rlap{\ .};\gamma`p_1`\sigma_1`] \square(1400,500)/=`<-`<-`/<550,500>[B`B`C \times_D T`S;`\tau_0 p_2`f \sigma_0`] \efig $$ When $ {\bf A} = {\bf Set} $ we can represent an element $ s \in S $ with $ \sigma_0 s = a $ and $ \sigma_1 s = c $ by an arrow $ a \todo{s} c $. Then a morphism of spans $ \alpha $ is a function $$ (a \todo{s} c) \longmapsto (f a \todo{\alpha(s)} g c) . $$ For a retrocell $ \beta $, an element of $ T \times_B A $ is a pair $ (b \todo{t} d, a) $ such that $ f a = b $ so we can represent it as $ f a \todo{t} d $. Then $ \beta $ is a function $ (f a \todo{t} d) \longmapsto (a \todo{\beta t} \beta_1 t) $ with $ g \beta_1 t = d $. If we picture $ S $ as lying over $ T $ (thinking of (co)fibrations) then $ \beta $ is a lifting: for every $ t $ we are given a $ \beta t $ $$ \bfig\scalefactor{.8} \square/@{>}|{\usebox{\bbox}}`--`--`@{>}|{\usebox{\bbox}}/[a`\beta_1 t`f a`d;\beta t```t] \morphism(250,200)/|->/<0,200>[`;] \morphism(900,0)/--/<0,500>[T`S;] \efig $$ So it is like an opfibration but without any of the category structure around (in particular we cannot say that ``$ \beta t $ is over $ t $''). For a coretrocell $ \gamma $, an element of $ C \times_D T $ is a pair $ (c, b \todo{t} d) $ with $ g c = d $ which we can write as $ b \todo{t} g c $. $ \gamma $ then assigns to such a $ t $ an $ S $ element $ \gamma_0 t \to^{\gamma t} c $ with $ f \gamma_0 t = b $, i.e. a lifting from $ T $ to $ S $ $$ \bfig\scalefactor{.8} \square/@{>}|{\usebox{\bbox}}`--`--`@{>}|{\usebox{\bbox}}/[\gamma_0 t`c`b`g c\rlap{\ ,};\gamma t```t] \morphism(250,200)/|->/<0,200>[`;] \efig $$ much like a fibration, though without the category structure. 
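These liftings become transparent in a degenerate case: if $ f $ and $ g $ are identities then $ T \times_B A \cong T $, and a retrocell is precisely a morphism of spans in the opposite direction,
$$ \beta \colon T \to S \quad \mbox{with} \quad \sigma_0 \beta = \tau_0 \quad \mbox{and} \quad \sigma_1 \beta = \tau_1\ , $$
and similarly a coretrocell is a span morphism $ T \to S $, whereas a cell is a span morphism $ S \to T $.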
This example shows well the difference between retrocells and coretrocells and their comparison with actual cells. The story for relations is much the same. If $ {\bf A} $ is a regular category and
$$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`R`S`g] \efig $$
is a boundary in $ {\mathbb R}{\rm el} {\bf A} $, i.e. $ f $ and $ g $ are morphisms and $ R $ and $ S $ are relations, then in the internal language of $ {\bf A} $, there is a (necessarily unique) cell iff
$$ a \sim_R c \Rightarrow f a \sim_S g c\ , $$
there is a retrocell iff
$$ f a \sim_S d \Rightarrow \exists c (a \sim_R c \wedge g c = d) $$
and a coretrocell iff
$$ b \sim_S g c \quad\Rightarrow\quad \exists a (a \sim_R c \wedge f a = b) . $$
Profunctors are the relations of the $ {\cal C}{\it at} $ world. There is a double category which we call $ {\mathbb C}{\rm at} $ whose objects are small categories, horizontal arrows functors, vertical arrows profunctors, and cells the appropriate natural transformations. In a typical cell
$$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[{\bf A}`{\bf B}`{\bf C}`{\bf D};F`P`Q`G] \place(250,250)[{\scriptstyle t}] \efig $$
$ t $ is a natural transformation $ P (-, =) \ \to \ Q (F -, G =) $. $ {\mathbb C}{\rm at} $ has companions and conjoints:
$$ F_* (A, B) = {\bf B} (FA, B) $$
$$ F^* (B, A) = {\bf B} (B, FA) . $$
We denote an element $ p \in P (A, C) $ by an arrow $ p \colon A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy C $. So the action of $ t $ is
$$ t \colon (p \colon A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy C) \longmapsto (t p \colon FA \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy G C) $$
natural in $ A $ and $ C $, of course.
A retrocell
$$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[{\bf A}`{\bf B}`{\bf C}`{\bf D};F`P`Q`G] \morphism(350,250)/=>/<-200,0>[`;\phi] \efig $$
is a natural transformation $ \phi \colon Q \otimes_{\bf B} F_* \to G_* \otimes_{\bf C} P $. An element of $ Q \otimes_{\bf B} F_* (A, D) $ is an element of $ Q (FA, D) $, $ q \colon FA \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy D $. An element of $ G_* \otimes_{\bf C} P (A, D) $ is an equivalence class
$$ [p \colon A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy C, d \colon GC \to D]_C . $$
So a retrocell assigns to each element of $ Q $, $ q \colon FA \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy D $, an equivalence class
$$ [\phi (q) \colon A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy C, \ov{\phi} (q) \colon GC \to D] . $$
We can think of it as a lifting, like for spans
$$ \bfig\scalefactor{.8} \square/@{>}|{\usebox{\bbox}}`--``@{>}|{\usebox{\bbox}}/<600,800>[A`C`FA`D\rlap{\ .};\phi(q)```q] \morphism(600,800)/--/<0,-400>[C`GC;] \morphism(600,400)/>/<0,-400>[GC`D;] \morphism(300,250)/|->/<0,400>[`;] \efig $$
The lifting $ C $ does not lie over $ D $; there is merely a comparison $ GC \to D $. Furthermore the lifting is not unique, but any two liftings are connected by a zigzag of $ {\bf C} $ morphisms. We have not spelled out the details because we do not know of any occurrences of these retrocells in print. Coretrocells of profunctors are similar (dual). We get a ``lifting''
$$ \bfig\scalefactor{.8} \square/@{>}|{\usebox{\bbox}}``--`@{>}|{\usebox{\bbox}}/<600,800>[A`C`B`GC\rlap{\ .};p```q] \morphism(0,800)/--/<0,-400>[A`FA;] \morphism(0,400)/<-/<0,-400>[FA`B;] \morphism(300,250)/|->/<0,400>[`;] \efig $$
A final variation on the span theme is $ {\bf V} $-matrices. Let $ {\bf V} $ be a monoidal category with coproducts preserved by $ \otimes $ in each variable separately.
There is associated a double category which we call $ {\bf V} $-$ {\mathbb S}{\rm et} $. Its objects are sets and horizontal arrows functions. A vertical arrow $ A \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy C $ is an $ A \times C $ matrix of objects of $ {\bf V} $, $ [V_{ac}] $. A cell is a matrix of morphisms $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`{[}V_{ac}{]}`{[}W_{bd}{]}`g] \place(250,250)[{\scriptstyle {[}\alpha_{ac}{]}}] \efig $$ $$ \alpha_{ac} \colon V_{ac} \to W_{fa, gc} . $$ Vertical composition is matrix multiplication $$ [X_{ce}] \otimes [V_{ac}] = [\sum_{c\in C} X_{ce} \otimes V_{ac}] . $$ Every horizontal arrow has a companion $$ f_* = [\Delta_{fa, b}] $$ and a conjoint $$ f^* = [\Delta_{b, fa}] $$ where $ \Delta $ is the ``Kronecker delta'' $$ \Delta_{b, b'} = \left\{ \begin{array}{lll} I & \mbox{if} & b = b'\\ 0 & \mbox{if} & b \neq b'. \end{array} \right. $$ A retrocell $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`{[}V_{ac}{]}`{[}W_{bd}{]}`g] \morphism(340,240)/=>/<-200,0>[`;\phi] \efig $$ is an $ A \times D $ matrix $ [\phi_{ad}] $ $$ \phi_{ad} \colon W_{fa, d} \to \sum_{gc = d} V_{ac} \ . $$ A coretrocell $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`{[}V_{ac}{]}`{[}W_{bd}{]}`g] \morphism(250,200)|r|/=>/<0,200>[`;\psi] \efig $$ is a $ B \times C $ matrix $ [\psi_{bc}] $ $$ \psi_{bc} \colon W_{b, gc} \to \sum_{fa = b} V_{ac} \ . $$ For example, if $ {\bf V} = {\bf Ab} $, and we again represent elements of $ V_{ac} $ by arrows $ a \todo{v} c $ (resp. of $ W_{bd} $ by $ b \todo{w} d $), then $ \phi $ associates to each $ f a \todo{w} d $ a finite number of elements $ a \todo{v_i} c_i $ with $ g c_i = d $ $$ \bfig\scalefactor{.8} \square/@{>}|{\usebox{\bbox}}`--`--`@{>}|{\usebox{\bbox}}/[a`c_i`f a`d;v_i```w] \morphism(250,200)/|->/<0,200>[`;] \place(900,250)[(i = 1, ..., n)\ .] 
\efig $$ Of course the dual situation holds for coretrocells $ \psi $. So we see that (co)retrocells in each case give liftings but of a type adapted to the situation. For spans they are uniquely specified, for relations they exist but are not specified, for profunctors only up to a connectedness condition and for matrices of Abelian groups we get a finite number of them. \section{Monads} A monad in $ {\cal C}{\it at} $ is a quadruple $ ({\bf A}, T, \eta, \mu) $ where $ {\bf A} $ is a category, $ T \colon {\bf A} \to {\bf A} $ an endo\-functor, $ \eta \colon 1_{\bf A} \to T $ and $ \mu \colon T^2 \to T $ natural transformations satisfying the well-known unit and associativity laws. In \cite{Str72} Street introduced morphisms of monads $$ (F, \phi) \colon ({\bf A}, T, \eta, \mu) \to ({\bf B}, S, \kappa, \nu) $$ as functors $ F \colon {\bf A} \to {\bf B} $ together with a natural transformation $$ \bfig\scalefactor{.8} \square[{\bf A}`{\bf B}`{\bf A}`{\bf B};F`T`S`F] \morphism(340,320)/=>/<-140,-140>[`;\phi] \efig $$ respecting units and multiplications in the obvious way. He called these monad functors, now called lax monad morphisms (see \cite{Lei04}). This was done, not just in $ {\cal C}{\it at} $, but in a general $ 2 $-category. Using duality, he also considered what he called monad opfunctors, i.e. oplax morphisms of monads, with the $ \phi $ in the opposite direction. The lax morphisms work well with Eilenberg-Moore algebras, giving a functor $$ {\bf EM} (F, \phi) \colon {\bf EM} ({\mathbb T}) \to {\bf EM} ({\mathbb S}) $$ $$ (TA \to^a A) \longmapsto (SFA \to^{\phi A} FTA \to^{Fa} FA) $$ whereas the oplax ones give functors on the Kleisli categories $$ {\bf Kl} (F, \psi) \colon {\bf Kl} ({\mathbb T}) \to {\bf Kl} ({\mathbb S}) $$ $$ (A \to^f TB) \longmapsto (FA \to^{Ff} FTB \to^{\psi B} SFB) . $$ The story for monads in a double category is this (see \cite{FioGamKoc11, FioGamKoc12}, though note that there horizontal and vertical are reversed). 
In general we just get one kind of morphism, the oplax ones. If we have companions then we also get the lax ones, and if we also have conjoints we have another kind. The $ 2 $-category case considered by Street corresponds to the double category of coquintets which has companions but not conjoints. Let $ {\mathbb A} $ be a double category. A vertical {\em monad} in $ {\mathbb A} $, $ t = (A, t, \eta, \mu) $ consists of an object $ A $, a vertical endomorphism $ t $ and two cells $ \eta $ and $ \mu $ as below $$ \bfig\scalefactor{.8} \square(0,250)/=`=`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`] \place(250,500)[{\scriptstyle \eta}] \square(1100,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`] \place(1350,500)[{\scriptstyle \mu}] \morphism(1100,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t] \morphism(1100,500)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t] \efig $$ satisfying $$ \bfig\scalefactor{.8} \square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;`t`t`] \place(250,250)[{\scriptstyle =}] \square(0,500)/=`=`@{>}|{\usebox{\bbox}}`/[A`A`A`A;``t`] \place(250,750)[{\scriptstyle \eta}] \square(500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`] \place(750,500)[{\scriptstyle \mu}] \place(1300,500)[=] \square(1600,250)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;`t`t`] \place(1850,500)[{\scriptstyle 1_t}] \place(2400,500)[=] \square(2700,0)/=`=`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`] \place(2950,250)[{\scriptstyle \eta}] \square(2700,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`A`A;`t`t`] \place(2950,750)[{\scriptstyle =}] \square(3200,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`] \place(3450,500)[{\scriptstyle \mu}] \efig $$ $$ \bfig\scalefactor{.8} \square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;`t`t`] \place(250,250)[{\scriptstyle =}] \square(0,500)/=``@{>}|{\usebox{\bbox}}`/<500,1000>[A`A`A`A;``t`] \place(250,1000)[{\scriptstyle \mu}] \morphism(0,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t] 
\morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t] \square(500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1500>[A`A`A`A;``t`] \place(750,750)[{\scriptstyle \mu}] \place(1500,750)[=] \square(2000,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`] \place(2250,500)[{\scriptstyle \mu}] \square(2000,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`A`A;`t`t`] \place(2250,1250)[{\scriptstyle =}] \morphism(2000,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t] \morphism(2000,500)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t] \square(2500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1500>[A`A`A`A\rlap{\ .};``t`] \place(2750,750)[{\scriptstyle \mu}] \efig $$ A (horizontal) {\em morphism of monads} $ (f, \psi) \colon (A, t, \eta, \mu) \to (B, s, \kappa, \nu) $ consists of a horizontal arrow $ f $ and a cell $ \psi $ as below, such that $$ \bfig\scalefactor{.8} \square/=`=`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`] \place(250,250)[{\scriptstyle \eta}] \square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[A`B`A`B;f``s`f] \place(750,250)[{\scriptstyle \psi}] \place(1300,250)[=] \square(1600,0)/>`=`=`>/[A`B`A`B;f```f] \place(1850,250)[{\scriptstyle \id_f}] \square(2100,0)/=``@{>}|{\usebox{\bbox}}`=/[B`B`B`B;``s`] \place(2350,250)[{\scriptstyle \kappa}] \efig $$ and $$ \bfig\scalefactor{.8} \square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`] \place(250,500)[{\scriptstyle \mu}] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t] \square(500,0)/>``@{>}|{\usebox{\bbox}}`>/<500,1000>[A`B`A`B;f``s`f] \place(750,500)[{\scriptstyle \psi}] \place(1400,500)[=] \square(1800,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`A`B;f`t`s`f] \place(2050,250)[{\scriptstyle \psi}] \square(1800,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`B`A`B;f`t`s`] \place(2050,750)[{\scriptstyle \psi}] \square(2300,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[B`B`B`B\rlap{\ .};``s`] \place(2550,500)[{\scriptstyle \nu}] \efig $$ These are the oplax morphisms 
referred to above. There are also vertical morphisms of monads, ``bimodules'', whose composition requires certain well-behaved coequalizers. They are interesting (see e.g. \cite{Shu08}), of course, but will not concern us here. If $ {\mathbb A} $ has companions we can also define retromorphisms of monads. (See \cite{Cla22, DiM22}.) \begin{definition} A {\em retromorphism of monads} $ (f, \phi) \colon (A, t, \eta, \mu) \to (B, s, \kappa, \nu) $ consists of a horizontal arrow $ f $ and a retrocell $ \phi $ $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`A`B;f`t`s`f] \morphism(350,250)/=>/<-200,0>[`;\phi] \efig $$ satisfying $$ \bfig\scalefactor{.8} \square/=`=`>`=/[B`B`B`B;``s`] \place(250,250)[{\scriptstyle \kappa}] \square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;`f_*`f_*`] \place(250,750)[{\scriptstyle =}] \square(500,0)/``>`=/[B`A`B`B;``f_*`] \square(500,500)/=``@{>}|{\usebox{\bbox}}`/[A`A`B`A;``t`] \place(750,500)[{\scriptstyle \phi}] \place(1500,500)[=] \square(2000,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`B`B;`f_*`f_*`] \place(2250,250)[{\scriptstyle =}] \square(2000,500)/=`=`@{>}|{\usebox{\bbox}}`/[A`A`A`A;``t`] \place(2250, 750)[{\scriptstyle \eta}] \efig $$ $$ \bfig\scalefactor{.8} \square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[B`B`B`B;``s`] \place(250,500)[{\scriptstyle \nu}] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B;s] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B;s] \square(0,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;`f_*`f_*`] \place(250,1250)[{\scriptstyle =}] \morphism(500,1500)/=/<500,0>[A`A;] \morphism(1000,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-1000>[A`A;t] \morphism(1000,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;f_*] \morphism(500,0)/=/<500,0>[B`B;] \place(750,750)[{\scriptstyle \phi}] \place(1300,750)[=] \square(1700,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`B`B;`s`s`] \place(1950,250)[{\scriptstyle =}] 
\square(1700,500)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`A`B`B;`s`f_*`] \square(1700,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`A;`f_*`t`] \place(1950,1250)[{\scriptstyle \phi}] \square(2200,0)/``@{>}|{\usebox{\bbox}}`=/[B`A`B`B;``f_*`] \square(2200,500)/``@{>}|{\usebox{\bbox}}`/[A`A`B`A;``t`] \place(2450,500)[{\scriptstyle \phi}] \square(2200,1000)/=``@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`] \place(2450,1250)[{\scriptstyle =}] \square(2700,0)/=``@{>}|{\usebox{\bbox}}`=/[A`A`B`B;``f_*`] \square(2700,500)/=``@{>}|{\usebox{\bbox}}`/<500,1000>[A`A`A`A;``t`] \place(2950,1000)[{\scriptstyle \mu}] \efig $$ \end{definition} \begin{proposition} The identity retrocell is a retromorphism $ (A, t, \eta, \mu) \to (A, t, \eta, \mu) $. The composite of two retromorphisms of monads is again one. \end{proposition} \begin{proof} Easy calculation. \end{proof} For a monad $ t = (A, t, \eta, \mu) $, Kleisli is a colimit construction, a universal morphism of the form $$ (A, t, \eta, \mu) \to (X, \id_X, 1, 1) . 
$$ \begin{definition} The {\em Kleisli object} of a vertical monad in a double category, if it exists, is an object $ Kl(t) $, a horizontal arrow $ f $ and a cell $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`=`>/[A`Kl(t)`A`Kl(t);f`t``f] \place(250,250)[{\scriptstyle \pi}] \efig $$ such that \noindent (1) $$ \bfig\scalefactor{.8} \square/=`=`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`] \place(250,250)[{\scriptstyle \eta}] \square(500,0)/>``=`>/[A`Kl(t)`A`Kl(t);f```f] \place(750,250)[{\scriptstyle \pi}] \place(1500,250)[=] \square(2000,0)/>`=`=`>/[A`Kl(t)`A`Kl(t);f```f] \place(2250,250)[{\scriptstyle \id_f}] \efig $$ \noindent (2) $$ \bfig\scalefactor{.8} \square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A;``t`] \place(250,500)[{\scriptstyle \mu}] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A;t] \square(500,0)/>``=`>/<500,1000>[A`Kl(t)`A`Kl(t);f```f] \place(750,500)[{\scriptstyle \pi}] \place(1500,500)[=] \square(2000,0)/>`@{>}|{\usebox{\bbox}}`=`>/[A`Kl(t)`A`Kl(t);f`t``f] \place(2250,250)[{\scriptstyle \pi}] \square(2000,500)/>`@{>}|{\usebox{\bbox}}`=`/[A`Kl(t)`A`Kl(t);f`t``] \place(2250,750)[{\scriptstyle \pi}] \square(2500,0)/=``=`=/<600,1000>[Kl(t)`Kl(t)`Kl(t)`Kl(t);```] \place(2850,500)[{\scriptstyle \cong}] \efig $$ and universal with those properties. That is, for any $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`=`>/[A`B`A`B;X`t``X] \place(250,250)[{\scriptstyle \xi}] \efig $$ such that (1) $ \xi \eta = \id $ and (2) $ \xi \mu = \xi \cdot \xi $, there exists a unique $ h \colon Kl(t) \to B $ such that (1) $ h f = X $ and (2) $ h \pi = \xi $. 
\end{definition} Just by universality, if we have a morphism of monads $ (h, \psi) \colon (A, t, \eta, \mu) \to (B, s, \kappa, \nu) $ and the Kleisli objects $ Kl(t) $ and $ Kl(s) $ exist, we get a horizontal arrow $ Kl(h, \psi) $ such that $$ \bfig\scalefactor{.8} \square[A`Kl(t)`B`Kl(s)\rlap{\ .};f`h`Kl(h, \psi)`g] \efig $$ This does not work for Eilenberg-Moore objects. Asking for a universal morphism of the form $$ (X, \id_X, 1, 1) \to (A, t, \eta, \mu) $$ is not the right thing as can be seen from the usual $ {\cal C}{\it at} $ example, but also in general. For such a morphism $ (u, \theta) $, the unit law says $$ \bfig\scalefactor{.8} \square/=`=`=`=/[X`X`X`X;```] \place(250,250)[{\scriptstyle 1_{\id_X}}] \square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[X`A`X`A;u``t`u] \place(750,250)[{\scriptstyle \theta}] \place(1500,250)[=] \square(2000,0)/>`=`=`>/[X`A`X`A;u```u] \place(2250,250)[{\scriptstyle \id_u}] \square(2500,0)/=``@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`] \place(2750,250)[{\scriptstyle \eta}] \efig $$ i.e. $ \theta $ must be $ \eta u $ and this is a morphism. Thus monad morphisms $ (u, \theta) $ are in bijection with horizontal arrows $ X \to A $. The universal such is $ 1_A $, i.e. we get $$ \bfig\scalefactor{.8} \square/>`=`@{>}|{\usebox{\bbox}}`>/[A`A`A`A;1_A``t`1_A] \place(250,250)[{\scriptstyle \eta}] \efig $$ not the Eilenberg-Moore object. \begin{definition} The {\em Eilenberg-Moore object} of a vertical monad $ (A, t, \eta, \mu) $ is the universal retromorphism of monads $$ (X, \id_X, 1, 1) \to^{(u, \theta)} (A, t, \eta, \mu) $$ $$ \bfig\scalefactor{.8} \square/>`=`@{>}|{\usebox{\bbox}}`>/[X`A`X`A\rlap{\ .};u``t`u] \morphism(350,250)/=>/<-200,0>[`;\theta] \efig $$ \end{definition} \begin{proposition} Let $ \cal{A} $ be a $ 2 $-category and $ (A, t, \eta, \mu) $ a monad in $ \cal{A} $. 
Then $ (A, t, \eta, \mu) $ is also a monad in the double category of coquintets $ {\rm co}{\mathbb Q}{\cal{A}} $, and a retromorphism $$ (u, \theta) \colon (X, \id_X, 1, 1) \to (A, t, \eta, \mu) $$ is a $ 1 $-cell $ u \colon X \to A $ and a $ 2 $-cell $ \theta \colon t u \to u $ in $ \cal{A} $ satisfying the unit and associativity laws for a $ t $-algebra. The universal such is the Eilenberg-Moore object for $ t $. \end{proposition} \begin{proof} This is merely a question of interpreting the definition of retromorphism in $ {\rm co}{\mathbb Q}{\cal{A}} $. \end{proof} We now see immediately how a retrocell $ (f, \phi) \colon (A, t, \eta, \mu) \to (B, s, \kappa, \nu) $ produces, by universality, a horizontal arrow $$ \bfig\scalefactor{.8} \square<850,500>[EM(t)`EM(s)`A`B\rlap{\ .};EM(f, \phi)`u`u'`f] \efig $$ \begin{example}\rm Let $ \cal{A} $ be a $ 2 $-category and $ {\mathbb Q}{\cal{A}} $ the double category of quintets in $ \cal{A} $. Recall that a cell in $ {\mathbb Q}{\cal{A}} $ is a quintet in $ \cal{A} $ $$ \bfig\scalefactor{.8} \square[A`B`C`D\rlap{\ .};f`h`k`g] \morphism(330,330)/=>/<-140,-140>[`;\alpha] \efig $$ Every horizontal arrow $ f $ has a companion, namely $ f $ itself but viewed as a vertical arrow. A (vertical) monad in $ {\mathbb Q}{\cal{A}} $ is a comonad in $ \cal{A} $. A morphism of monads in $ {\mathbb Q}{\cal{A}} $ is then a lax morphism of comonads, and a retromorphism of monads in $ {\mathbb Q}{\cal{A}} $ is an oplax morphism of comonads in $ \cal{A} $. To make the connection with Street's monad functors and opfunctors, we must take coquintets (the $ \alpha $ in the opposite direction) $ {\rm co}{\mathbb Q}{\cal{A}} $. Now a monad in $ {\rm co}{\mathbb Q}{\cal{A}} $ is a monad in $ \cal{A} $, a monad morphism in $ {\rm co}{\mathbb Q}{\cal{A}} $ is an oplax morphism of monads, i.e. a monad opfunctor in $ \cal{A} $, whereas a retromorphism of monads is now a lax morphism of monads, i.e. a monad functor. 
It is unfortunate that the most natural morphisms from a double category point of view are not the established ones in the literature. At the time of \cite{Str72}, people were more interested in the Eilenberg-Moore algebras for a monad as a generalization of Lawvere theories and their algebras, so it was natural to choose the monad morphisms that worked well with those, namely lax morphisms, as monad functors. Now, with the advent of categorical computer science, Kleisli categories have come into their own, and it is not so clear what the leading concept is, and double category theory suggests that it may well be the oplax morphisms. \end{example} \begin{example}\rm Let $ {\bf C} $ be a category with (a choice of) pullbacks. As is well-known a monad in $ {\mathbb S}{\rm pan} {\bf C} $ is a category object in $ {\bf C} $. A morphism of monads in $ {\mathbb S}{\rm pan} {\bf C} $ is an internal functor. A retromorphism of monads $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A_0`B_0`A_0`B_0;F`A_1`B_1`F] \morphism(350,250)/=>/<-200,0>[`;\phi] \efig $$ is first of all a morphism $ F \colon A_0 \to B_0 $ and then a cell $$ \bfig\scalefactor{.8} \square/>`>`>`=/<800,500>[B_1 \times_{B_0} A_0`A_0`B_0`B_0;\phi`d_1 p_1`F d_1`] \square(0,500)/=`<-`<-`/<800,500>[A_0`A_0`B_1 \times_{B_0} A_0`A_0;`p_2`d_0`] \efig $$ which must satisfy the unit law $$ \bfig\scalefactor{.8} \qtriangle/>`>`>/<850,550>[A_0`B_1\times_{B_0} A_0`A_1; \langle \id F, 1_{A_0} \rangle`\id`\phi] \efig $$ and the composition law $$ \bfig\scalefactor{.9} \square/`>``>/<2200,1000>[B_1 \times_{B_0} B_1 \times_{B_0} A_0`B_1 \times_{B_0} A_0 \times_{A_0} A_1 `B_1 \times_{B_0} A_0`A_1\rlap{\ .};`\nu \times_{B_0} A_0``\phi] \morphism(0,1000)/>/<1100,0>[B_1 \times_{B_0} B_1 \times_{B_0} A_0`B_1 \times_{B_0} A_1;B_1 \times_{B_0} \phi] \morphism(1100,1000)/>/<1100,0>[B_1 \times_{B_0} A_1`B_1 \times_{B_0} A_0 \times_{A_0} A_1;\cong] \morphism(2200,1000)|r|/>/<0,-500>[B_1 \times_{B_0} 
A_0 \times_{A_0} A_1`A_1 \times_{A_0} A_1;\phi \times_{A_0} A_1] \morphism(2200,500)|r|/>/<0,-500>[A_1 \times_{A_0} A_1`A_1;\mu] \efig $$ This is precisely an internal cofunctor \cite{Agu97, Cla20}. When $ {\bf C} = {\bf Set} $, a cofunctor $ F \colon {\bf A} \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}+\hspace{-1mm}]\endxy {\bf B} $ consists of an object function $ F \colon {\rm Ob} {\bf A} \to {\rm Ob} {\bf B} $ and a lifting function $ \phi \colon (b \colon FA \to B) \longmapsto (a \colon A \to A') $ with $ FA' = B $ $$ \bfig\scalefactor{.8} \square/>`--`--`>/[A`A'`FA`B;a```b] \morphism(250,180)/|->/<0,200>[`;] \efig $$ satisfying \noindent (1) (unit law) $ \phi (A, 1_{FA}) = 1_A $ \noindent (2) (composition law) $$ \phi(A, b'b) = \phi (A', b') \phi (A, b) , $$ where $ A' $ is the codomain of $ \phi (A, b) $. So $ F $ is like a split opfibration given algebraically but without the functor part. \end{example} If $ {\mathbb A} $ has conjoints, we can define coretromorphisms of monads as retromorphisms in $ {\mathbb A}^{op} $ which now has companions, and monads in $ {\mathbb A}^{op} $ are the same as monads in $ {\mathbb A} $. 
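The unit and composition laws of a $ {\bf Set} $-cofunctor can be checked mechanically on a small finite instance. The following sketch is our own invented example, not data from the text: $ {\bf B} $ is the one-object category on the idempotent two-element monoid, and $ {\bf A} $ has two objects and a single non-identity arrow.

```python
# A machine-checked finite cofunctor F : A -/-> B in Set.  All data below
# is invented for illustration.  B is the one-object category on the
# idempotent monoid {id, b} (b.b = b); A has objects x, y, identities,
# and one non-identity arrow a : x -> y, with F sending both objects to *.

B_compose = {('id', 'id'): 'id', ('id', 'b'): 'b',
             ('b', 'id'): 'b',  ('b', 'b'): 'b'}   # second argument acts first
A_compose = {('id_x', 'id_x'): 'id_x', ('a', 'id_x'): 'a',
             ('id_y', 'a'): 'a',       ('id_y', 'id_y'): 'id_y'}

F_ob = {'x': '*', 'y': '*'}            # the object function (both over *)

# The lifting phi : (A, b : FA -> *) |-> (codomain A', lifted arrow a).
phi = {('x', 'id'): ('x', 'id_x'),
       ('x', 'b'):  ('y', 'a'),
       ('y', 'id'): ('y', 'id_y'),
       ('y', 'b'):  ('y', 'id_y')}

# (1) unit law: the identity lifts to the identity
for A0 in F_ob:
    assert phi[A0, 'id'] == (A0, 'id_' + A0)

# (2) composition law: lifting a composite = composing the two liftings
for A0 in F_ob:
    for b1 in ('id', 'b'):
        A1, a1 = phi[A0, b1]
        for b2 in ('id', 'b'):
            A2, a2 = phi[A1, b2]
            assert phi[A0, B_compose[b2, b1]] == (A2, A_compose[a2, a1])
print("cofunctor laws verified")
```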
Explicitly, a coretromorphism $$ (f, \theta) \colon (A, t, \eta, \mu) \to (B, s, \kappa, \nu) $$ consists of a horizontal morphism $ f \colon A \to B $ in $ {\mathbb A} $ and a coretrocell $ \theta $ $$ \bfig\scalefactor{.8} \square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`A`B;f`t`s`f] \morphism(250,450)|r|/=>/<0,200>[`;\theta] \place(800,500)[=] \square(1100,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`A`A`A;`f^*`t`] \place(1350,500)[{\scriptstyle \theta}] \square(1100,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`B`A;`s`f^*`] \efig $$ such that $$ \bfig\scalefactor{.8} \square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`A`A;`f^*`f^*`] \place(250,250)[{\scriptstyle =}] \square(0,500)/=`=`@{>}|{\usebox{\bbox}}`/[B`B`B`B;``s`] \place(250,750)[{\scriptstyle \kappa}] \square(500,0)/``@{>}|{\usebox{\bbox}}`=/[B`A`A`A;``t`] \place(750,500)[{\scriptstyle \theta}] \square(500,500)/=``@{>}|{\usebox{\bbox}}`/[B`B`B`A;``f^*`] \place(1500,500)[=] \square(2000,0)/=`=`@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`] \place(2250,250)[{\scriptstyle \eta}] \square(2000,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`A`A;`f^*`f^*`] \place(2250,750)[{\scriptstyle =}] \efig $$ and $$ \bfig\scalefactor{.8} \square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`A`A;`f^*`f^*`] \square(0,500)/=``@{>}|{\usebox{\bbox}}`/<500,1000>[B`B`B`B;``s`] \place(250,250)[{\scriptstyle =}] \place(250,1000)[{\scriptstyle \nu}] \morphism(0,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B;s] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B;s] \morphism(500,1500)/=/<500,0>[B`B;] \morphism(1000,1500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`A;f^*] \morphism(1000,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-1000>[A`A;t] \morphism(500,0)/=/<500,0>[A`A;] \place(750,750)[{\scriptstyle \theta}] \place(1300,750)[=] \square(1600,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`A`A`A;`f^*`t`] 
\square(1600,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`B`A;`s`f^*`] \square(1600,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`B`B;`s`s`] \place(1850,500)[{\scriptstyle \theta}] \place(1850,1250)[{\scriptstyle =}] \square(2100,0)/=``@{>}|{\usebox{\bbox}}`=/[A`A`A`A;``t`] \place(2350,250)[{\scriptstyle =}] \square(2100,500)/``@{>}|{\usebox{\bbox}}`/[B`A`A`A;``t`] \square(2100,1000)/=``@{>}|{\usebox{\bbox}}`/[B`B`B`A;``f^*`] \place(2350,1000)[{\scriptstyle \theta}] \square(2600,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`A`A\rlap{\ .};``t`] \place(2850,500)[{\scriptstyle \mu}] \square(2600,1000)/=``@{>}|{\usebox{\bbox}}`/[B`B`A`A;``f^*`] \place(2850,1250)[{\scriptstyle =}] \efig $$ Coretromorphisms do not come up in the formal theory of monads because the \nobreak{double} category of coquintets of a $ 2 $-category seldom has conjoints, but $ {\mathbb S}{\rm pan} {\bf C} $ does, and we get opcofunctors, i.e. cofunctors $ {\bf A}^{op} \to {\bf B}^{op} $. These consist of an object function $ F \colon {\rm Ob} {\bf A} \to {\rm Ob}{\bf B} $ and a lifting function $$ \theta \colon (b \colon B \to FA) \longmapsto (a \colon A' \to A) $$ with $ FA' = B $ $$ \bfig\scalefactor{.8} \square/>`--`--`>/[A'`A`B`FA;a```b] \morphism(250,200)/|->/<0,200>[`;] \efig $$ satisfying \noindent (1) $ \theta (A, 1_{FA}) = 1_A $ \noindent (2) $ \theta (A, bb') = \theta (A, b) \theta (A', b') $. This again illustrates well the difference between retromorphisms and coretromorphisms and, at the same time, the symmetry of the concepts. They all move objects forward. Functors move arrows forward $$ (a \colon A \to A') \longmapsto (Fa \colon FA \to FA') , $$ cofunctors move arrows of the form $ FA \to B $ backward $$ (b \colon FA \to B) \longmapsto (\phi b \colon A \to A') $$ and opcofunctors move arrows of the form $ B \to FA $ backward $$ (b \colon B \to FA) \longmapsto (\theta b \colon A' \to A). 
$$ All of this can be extended to the enriched setting for a monoidal category $ {\bf V} $ which has coproducts preserved by the tensor in each variable. Then a monad in $ {\bf V} $-$ {\mathbb S}{\rm et} $ is exactly a small $ {\bf V} $-category and the retromorphisms are exactly the enriched cofunctors of Clarke and Di~Meglio \cite{ClaDim22}, to which we refer the reader for further details. \section{Closed double categories} Many bicategories that come up in practice are closed, i.e. composition $ \otimes $ has right adjoints in each variable, $$ Q \otimes (-) \dashv Q \obslash (\ ) $$ $$ (\ )\otimes P \dashv (\ ) \oslash P\rlap{\ .} $$ Thus we have bijections \begin{center} \begin{tabular}{c} $P \to Q \obslash R $ \\[3pt] \hline \\[-12pt] $Q \otimes P \to R$ \\[3pt] \hline \\[-12pt] $Q \to R \oslash P $ \end{tabular} \end{center} We adapt (and adopt) Lambek's notation for the internal homs. $ \otimes $ is a kind of multiplication and $ \obslash $ and $ \oslash $ divisions. \begin{example}\rm The original example in \cite{Lam66}, though not expressed in bicategorical terms, was $ {\cal{B}} {\it im} $ the bicategory whose objects are rings, $ 1 $-cells bimodules and $ 2 $-cells linear maps. Composition is $ \otimes $ $$ \bfig\scalefactor{.8} \Atriangle/@{<-}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}/<400,300>[S`R`T\rlap{\ .};M`N`N\otimes_S M] \efig $$ ($ M $ is an $ S $-$ R $-bimodule, i.e. left $ S $ - right $ R $ bimodule, etc.) Given $ P \colon R \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy T $, we have the usual bijections \begin{center} \begin{tabular}{c} $N \to P \oslash_R M $\mbox{\quad $T$-$S$ linear} \\[3pt] \hline \\[-12pt] $N \otimes_S M \to P$ \mbox{\quad $T$-$R$ linear} \\[3pt] \hline \\[-12pt] $M \to N \obslash_T P$\mbox{\quad $S$-$R$ linear} \end{tabular} \end{center} where $$ P \oslash_R M = \Hom_R (M, P) $$ $$ N \obslash_T P = \Hom_T (N, P) $$ are the hom bimodules of $ R $-linear (resp. 
$ T $-linear) maps. \end{example} \begin{example}\rm The bicategory of small categories and profunctors is closed. For profunctors $$ \bfig\scalefactor{.8} \Atriangle/@{<-}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}/<400,300>[{\bf B}`{\bf A}`{\bf C};P`Q`R] \efig $$ we have $$ (Q \obslash_{\bf C} R) (A, B) = \{n.t. \ Q(B, -) \to R (A, -)\} $$ and $$ (R \oslash_{\bf A} P) (B, C) = \{n.t. \ P(-, B) \to R(-, C)\}\rlap{\ .} $$ \end{example} \begin{example}\rm If $ {\bf A} $ has finite limits, then it is locally cartesian closed if and only if the bicategory of spans in $ {\bf A} $, $ {\cal{S}}{\it pan} {\bf A} $, is closed (Day \cite{Day74}). For spans $ A \to/<-/^{p_0} R \to^{p_1} B $ and $ B \to/<-/^{\tau_0} T \to^{\tau_1} C $, the composite is given by the pullback $ T \times_B R $, which we could compute as the pullback $ P $ below and then composing with $ \tau_1 $ $$ \bfig\scalefactor{.8} \square/<-`>`>`<-/<700,500>[R`P`A \times B`A \times T;```A \times \tau_0] \morphism(700,0)|b|/>/<600,0>[A \times T`A \times C;A \times \tau_1] \place(350,270)[\mbox{$\scriptstyle PB$}] \efig $$ i.e. $ T \otimes_B (\ ) $ is the composite $$ {\bf A}/(A \times B) \to^{(A \times \tau_0)^*} {\bf A}/(A \times T) \to^{\sum_{A \times \tau_1}} {\bf A}/(A \times C)\ . $$ $ \sum_{A \times \tau_1} $ always has a right adjoint $ (A \times \tau_1)^* $ and if $ {\bf A} $ is locally cartesian closed so will $ (A \times \tau_0)^* $, namely $ \prod_{A \times \tau_0} $. So, for $ A \to/<-/^{\sigma_0} S \to^{\sigma_1} C $, $$ T \obslash_C S = \prod_{A \times \tau_0} (A \times \tau_1)^* S\rlap{\ .} $$ If we interpret this for $ {\bf A} = {\bf Set} $, in terms of fibers $$ (T \obslash_C S)_{ab} = \prod_c S_{ac}^{T_{bc}} \ . $$ The situation for $ \oslash_A $ is similar $$ (S \oslash_A R)_{bc} = \prod_a S_{ac}^{R_{ab}} \ . $$ \end{example} These bicategories, and in fact most bicategories that occur in practice, are the vertical bicategories of naturally occurring double categories. 
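The fiber formulas in the spans example admit a quick finite sanity check: counting fiberwise functions confirms that $ \Hom (T \otimes_B R, S) $ and $ \Hom (R, T \obslash_C S) $ have the same cardinality, as the adjunction requires. The sketch below is our own construction, with spans recorded by fiber cardinalities only and arbitrary small fiber sizes.

```python
# A finite sanity check (our own) of the fiber formula for left homs in
# Span(Set).  Spans are recorded by fiber cardinalities only; we verify
# that Hom(T o R, S) and Hom(R, T \ S) have the same size, as the
# adjunction  T (x)_B (-)  -|  T \ (-)  requires.
from math import prod

A, B, C = range(2), range(3), range(2)
R = {(a, b): (a + b) % 3 for a in A for b in B}       # |R_{ab}|
T = {(b, c): (b + 2 * c) % 3 for b in B for c in C}   # |T_{bc}|
S = {(a, c): a + c + 1 for a in A for c in C}         # |S_{ac}|

# Composite by pullback: |(T o R)_{ac}| = sum_b |T_{bc}| * |R_{ab}|
TR = {(a, c): sum(T[b, c] * R[a, b] for b in B) for a in A for c in C}

# Left hom fiber: |(T \ S)_{ab}| = prod_c |S_{ac}| ** |T_{bc}|
TbS = {(a, b): prod(S[a, c] ** T[b, c] for c in C) for a in A for b in B}

# A span morphism is a fiberwise function, so homs are counted by
# products of exponentials; the adjunction bijection forces equality.
count_hom = lambda X, Y: prod(Y[k] ** X[k] for k in X)
assert count_hom(TR, S) == count_hom(R, TbS)
print(count_hom(TR, S))
```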
So a definition of a (vertically) closed double category would seem in order. And indeed Shulman in \cite{Shu08} did give one. A double category is closed if its vertical bicategory is. This definition was taken up by Koudenburg \cite{Kou14} in his work on pointwise Kan extensions. But both were working with ``equipments'', double categories with companions and conjoints. Something more is needed for general double categories. \begin{definition} (Shulman) $ {\mathbb A} $ has {\em globular left homs} if for every $ y $, $ y \bdot (\ ) $ has a right adjoint $ y \bsd (\ ) $ in $ {\cal{V}}{\it ert} {\mathbb A} $. \end{definition} Thus for every $ z $ we have a bijection $$ \frac{y \bdot x \to z}{x \to y \bsd z} \mbox{\quad\quad in $ {\cal{V}}{\it ert} {\mathbb A} $} $$ $$ \bfig\scalefactor{.8} \square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C;``z`] \place(250,500)[{\scriptstyle \alpha}] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;x] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \morphism(900,-20)/-/<0,1060>[`;] \square(1300,250)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A`A`B`B;`x`y \bsd z`] \place(1550,500)[{\scriptstyle \beta}] \place(1900,0)[.] 
\efig $$ Of course there is the usual naturality condition on $ x $, which is guaranteed by expressing the above bijection as composition with an evaluation cell $ \epsilon \colon y \bdot (y \bsd z) \to z $ $$ \bfig\scalefactor{.8} \square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C\rlap{\ .};``z`] \place(250,500)[{\scriptstyle \epsilon}] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;y \bsd z] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \efig $$ The universal property is then: for every $ \alpha $ there is a unique $ \beta $, as below, such that $$ \bfig\scalefactor{.8} \square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`C`C;`y`y`] \place(250,250)[{\scriptstyle =}] \square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;`x`y \bsd z`] \place(250,750)[{\scriptstyle \beta}] \square(500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C;``z`] \place(750,500)[{\scriptstyle \epsilon}] \place(1400,500)[=] \square(1800,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C\rlap{\ .};``z`] \place(2050,500)[{\scriptstyle \alpha}] \morphism(1800,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;x] \morphism(1800,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \efig $$ This shows clearly that $ \bsd $ has nothing to do with horizontal arrows, whereas the interplay between the horizontal and vertical is at the very heart of double categories. 
\begin{definition} $ {\mathbb A} $ has {\em strong left homs (is left closed)} if for every $ y $ and $ z $ as below there is a vertical arrow $ y \bsd z $ and an evaluation cell $ \epsilon $ such that for every $ \alpha $ there is a unique $ \beta $ such that $$ \bfig\scalefactor{.8} \square/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`C`C;`y`y`] \place(250,250)[{\scriptstyle =}] \square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;f`x`y \bsd z`] \place(250,750)[{\scriptstyle \beta}] \square(500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C;``z`] \place(750,500)[{\scriptstyle \epsilon}] \place(1400,500)[=] \square(1800,0)/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[A'`A`C`C\rlap{\ .};f``z`] \place(2050,500)[{\scriptstyle \alpha}] \morphism(1800,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x] \morphism(1800,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \efig $$ \end{definition} \begin{proposition} If $ {\mathbb A} $ has companions and has globular left homs, then the strong universal property is equivalent to stability under companions: for every $ f $, the canonical morphism $$ (y \bsd z) \bdot f_* \to y \bsd (z \bdot f_*) $$ is an isomorphism. \end{proposition} \begin{proof} (Sketch) For every $ f $ and $ x $ as below we have the following natural bijections of cells $$ \bfig\scalefactor{.8} \square/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[A'`A`C`C;f``z`] \place(250,500)[{\scriptstyle \alpha}] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \morphism(900,-20)/-/<0,1060>[`;] \square(1300,0)/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`A`C`C;`y`z`] \place(1550,500)[{\scriptstyle \ov{\alpha}}] \square(1300,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A'`A'`B`A;`x`f_*`] \morphism(2200,-20)/-/<0,1060>[`;] \square(2600,250)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A'`A'`B`B;`x`y \bsd (z \bdot f_*)`] \place(2850,500)[{\scriptstyle \beta}] \place(3500,0)[.] 
\efig $$ $ y \bsd z $ is strong iff we have the following bijections $$ \bfig\scalefactor{.8} \square/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[A'`A`C`C;f``z`] \place(250,500)[{\scriptstyle \alpha}] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \morphism(900,-20)/-/<0,1060>[`;] \square(1300,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A'`A`B`B;f`x`y \bsd z`] \place(1550,500)[{\scriptstyle \gamma}] \morphism(2200,-20)/-/<0,1060>[`;] \square(2600,0)/=`@{>}|{\usebox{\bbox}}``=/<500,1000>[A'`A'`B`B;`x``] \place(2850,500)[{\scriptstyle \ov{\gamma}}] \morphism(3100,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A'`A;f_*] \morphism(3100,500)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[A`B\rlap{\ .};y \bsd z] \efig $$ \end{proof} \begin{proposition} If $ {\mathbb A} $ has conjoints, then the strong universal property is equivalent to the globular one. \end{proposition} \begin{proof} (Sketch) For every $ f $ and $ x $ as below we have the following natural bijections $$ \bfig\scalefactor{.8} \square/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[A'`A`C`C;f``z`] \place(250,500)[{\scriptstyle \alpha}] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \morphism(800,-320)/-/<0,1650>[`;] \square(1200,-250)/=``@{>}|{\usebox{\bbox}}`=/<600,1500>[A`A`C`C;``z`] \place(1500,550)[{\scriptstyle \widetilde{\alpha}}] \morphism(1200,1250)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A';f^*] \morphism(1200,750)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x] \morphism(1200,250)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \morphism(2150,-320)/-/<0,1650>[`;] \square(2500,0)/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`B`B;``y \bsd z`] \morphism(2500,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`A';f^*] \morphism(2500,500)/@{>}|{\usebox{\bbox}}/<0,-500>[A'`B;x] \place(2750,500)[{\scriptstyle \widetilde{\beta}}] \morphism(3300,-320)/-/<0,1650>[`;] \square(3600,250)/>`@{>}|{\usebox{\bbox}}`>`=/[A'`A`B`B;f`x`y \bsd z`] 
\place(3850,500)[{\scriptstyle \beta}] \efig $$ \end{proof} All of the examples above have conjoints so the left homs are automatically strong. Of course, $ y \bsd z $ is functorial in $ y $ and $ z $, contravariant in $ y $ and covariant in $ z $, but only for globular cells $ \beta $, $ \gamma $ $$ y' \to^\beta y \quad \& \quad z \to^\gamma z' \quad \leadsto \quad y \bsd z \to^{\beta \bsd \gamma} y' \bsd z' \ . $$ For general double category cells $ \beta $, $ \gamma $ $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B'`B`C'`C;b`y'`y`c] \place(250,250)[{\scriptstyle \beta}] \place(950,250)[\mbox{and}] \square(1400,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`C`C';a`z`z'`c'] \place(1650,250)[{\scriptstyle \gamma}] \efig $$ we would hope to get a cell $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`B`B';a`y \bsd z`y' \bsd z'`] \place(250,250)[{\scriptstyle \beta \bsd \gamma}] \efig $$ but $ b $ is in the wrong direction, and there are $ c $ and $ c' $ in opposite directions. If we reverse $ b $ and $ c $ then $ \beta $ is in the wrong direction. That was the motivation for retrocells. \begin{proposition} Suppose $ {\mathbb A} $ has companions and is (strongly) left closed. 
Then a retrocell $ \beta $ and a standard cell $ \gamma $ $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B`B'`C`C';b`y`y'`c] \morphism(350,250)/=>/<-200,0>[`;\beta] \square(1000,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`C`C';a`z`z'`c] \place(1250,250)[{\scriptstyle \gamma}] \efig $$ induce a canonical cell $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`B`B'\rlap{\ .};a`y \bsd z`y' \bsd z'`b] \place(250,250)[{\scriptstyle \beta \bsd \gamma}] \efig $$ \end{proposition} \begin{proof} (Sketch) A candidate $ \xi $ for $ \beta \bsd \gamma $ would satisfy the following bijections $$ \bfig\scalefactor{.8} \square(0,250)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`B`B';a`y \bsd z`y' \bsd z'`b] \place(250,500)[{\scriptstyle \xi}] \morphism(950,-120)/-/<0,1250>[`;] \square(1400,0)/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A'`B'`B';a``y' \bsd z'`] \place(1650,500)[{\scriptstyle \ov{\xi}}] \morphism(1400,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;y \bsd z] \morphism(1400,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B';b_*] \morphism(2300,-320)/-/<0,1650>[`;] \square(2800,-250)/>``@{>}|{\usebox{\bbox}}`=/<600,1500>[A`A'`C'`C';a``z'`] \place(3100,500)[{\scriptstyle \ov{\ov{\xi}} }] \morphism(2800,1250)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;y \bsd z] \morphism(2800,750)/@{>}|{\usebox{\bbox}}/<0,-500>[B`B';b_*] \morphism(2800,250)/@{>}|{\usebox{\bbox}}/<0,-500>[B'`C';y'] \efig $$ and there is indeed a canonical $ \ov{\ov{\xi}} $, namely $$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B'`C`C'`C';`y'`c_*`] \square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[B`B`B'`C;`b_*`y`] \square(0,1000)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`B;`y \bsd z`y \bsd z`] \square(500,0)/=``@{>}|{\usebox{\bbox}}`=/[C`C`C'`C';``c_*`] \square(500,500)/=``@{>}|{\usebox{\bbox}}`/<500,1000>[A`A`C`C;``z`] \square(1000,0)/>``=`=/[C`C'`C'`C'\rlap{\ .};c```] 
\square(1000,500)/>``@{>}|{\usebox{\bbox}}`/<500,1000>[A`A'`C`C';a``z'`] \place(250,500)[{\scriptstyle \beta}] \place(250,1250)[{\scriptstyle =}] \place(750,250)[{\scriptstyle =}] \place(750,1000)[{\scriptstyle \epsilon}] \place(1250,250)[{\lrcorner}] \place(1250,1000)[{\scriptstyle \gamma}] \efig $$ \end{proof} \noindent In fact the cell $ \beta \bsd \gamma $ is not only canonical but also functorial, i.e. $ (\beta' \beta) \bsd (\gamma' \gamma) = (\beta' \bsd \gamma') (\beta \bsd \gamma) $. To express this properly we must define the categories involved. The codomain of $ \bsd $ is simply $ {\bf A}_1 $, the category whose objects are vertical arrows of $ {\mathbb A} $ and whose morphisms are (standard) cells. The domain of $ \bsd $ is the category which, for lack of a better name, we call $ {\bf TC} ({\mathbb A}) $ (twisted cospans). Its objects are cospans of vertical arrows and its cells are pairs $ (\beta, \gamma) $ $$ \bfig\scalefactor{.8} \square/>`@{<-}|{\usebox{\bbox}}`@{<-}|{\usebox{\bbox}}`>/[C`C'`B`B';c`y`y'`b] \morphism(340,250)/=>/<-200,0>[`;\beta] \square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A'`C`C';a`z`z'`] \place(250,750)[{\scriptstyle \gamma}] \efig $$ where $ \beta $ is a retrocell and $ \gamma $ a standard cell. Also we must flesh out our sketchy construction of $ y \bsd z $. 
We can express the universal property of $ y \bsd z $ as representability of a functor $$ L_{y,z} \colon {\bf A}^{op}_1 \to {\bf Set}\rlap{\ .} $$ For $ v \colon \ov{A} \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy \ov{B} $, $ L_{y,z} (v) = \{(f, g, \alpha) | f, g, \alpha \mbox{\ \ as in}\ (*)\} $ $$ f \colon \ov{A} \to A , g \colon \ov{B} \to B $$ $$ \bfig\scalefactor{.8} \square/>``@{>}|{\usebox{\bbox}}`=/<700,1500>[\ov{A}`A`C`C\rlap{\ .};f``z`] \place(-1300,0)[\ ] \place(350,750)[{\scriptstyle \alpha}] \morphism(0,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \place(2000,750)[(*)] \efig $$ Some straightforward calculation will show that $ L_{y, z} $ is indeed a functor. The following bijections show that $ y \bsd z $ is a representing object for $ L_{y, z} $ $$ \bfig\scalefactor{.8} \square/>``@{>}|{\usebox{\bbox}}`=/<700,1500>[\ov{A}`A`C`C;f``z`] \place(250,750)[{\scriptstyle \alpha}] \morphism(0,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \morphism(1000,-50)/-/<0,1650>[`;] \square(1300,250)/>``@{>}|{\usebox{\bbox}}`=/<500,1000>[\ov{A}`A`B`B;f``y \bsd z`] \morphism(1300,1250)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v] \morphism(1300,750)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*] \place(1550,750)[{\scriptstyle \ov{\alpha}}] \morphism(2100,100)/-/<0,1250>[`;] \square(2400,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[\ov{A}`A`\ov{B}`B;f`v`y \bsd z`g] \place(2650,750)[{\scriptstyle{\ovv\alpha}}] \place(3000,0)[.] 
\efig $$ This gives the full double category universal property of $ \bsd $ : For every boundary $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[\ov{A}`A`\ov{B}`B;f`v`y \bsd z`g] \efig $$ and $ \alpha $ as below, there exists a unique fill-in $ \beta $ such that $$ \bfig\scalefactor{.8} \square/>``@{>}|{\usebox{\bbox}}`=/<600,1500>[\ov{A}`A`C`C;f``z`] \place(300,750)[{\scriptstyle \alpha}] \morphism(0,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \place(900,750)[=] \square(1200,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`C`C;`y`y`] \square(1200,500)/`@{>}|{\usebox{\bbox}}`=`/[\ov{B}`B`B`B;`g_*``] \square(1200,1000)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[\ov{A}`A`\ov{B}`B;f`v`y \bsd z`g] \place(1450,250)[{\scriptstyle =}] \place(1450,750)[{\lrcorner}] \place(1450,1250)[{\scriptstyle \beta}] \square(1700,0)/=``@{>}|{\usebox{\bbox}}`=/<600,1500>[A`A`C`C\rlap{\ .};``z`] \place(2000,750)[{\scriptstyle \epsilon}] \efig $$ For $ (\beta, \gamma) $ in $ {\bf TC}({\mathbb A}) $ we get a natural transformation $$ \phi_{\beta \gamma} \colon L_{y, z} \to L_{y', z'} $$ $$ \bfig\scalefactor{.8} \square(0,250)/>``@{>}|{\usebox{\bbox}}`=/<500,1500>[\ov{A}`A`C`C;f``z`] \place(250,1000)[{\scriptstyle \alpha}] \morphism(0,1750)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v] \morphism(0,1250)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*] \morphism(0,750)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \place(750,1000)[\longmapsto] \square(1150,0)/=`@{>}|{\usebox{\bbox}}``=/[B'`B'`C'`C';`y'``] \square(1150,500)/=`@{>}|{\usebox{\bbox}}``/<500,1000>[\ov{B}`\ov{B}`B'`B';`(bg)_*``] \square(1150,1500)/=`@{>}|{\usebox{\bbox}}``/[\ov{A}`\ov{A}`\ov{B}`\ov{B};`v``] \place(1400,250)[{\scriptstyle =}] \place(1400,1000)[{\scriptstyle \cong}] \place(1400,1750)[{\scriptstyle =}] 
\square(1650,0)/`@{>}|{\usebox{\bbox}}``=/[B'`C`C'`C';`y'``] \square(1650,500)/=`@{>}|{\usebox{\bbox}}``/[B`B`B'`C;`b_*``] \square(1650,1000)/=`@{>}|{\usebox{\bbox}}``/[\ov{B}`\ov{B}`B`B;`g_*``] \square(1650,1500)/=`@{>}|{\usebox{\bbox}}``/[\ov{A}`\ov{A}`\ov{B}`\ov{B};`v``] \place(1900,500)[{\scriptstyle \beta}] \place(1900,1250)[{\scriptstyle =}] \place(1900,1750)[{\scriptstyle =}] \square(2150,0)/=`@{>}|{\usebox{\bbox}}``=/[C`C`C'`C';`c_*``] \square(2150,500)/>``@{>}|{\usebox{\bbox}}`/<500,1500>[\ov{A}`A`C`C;f``z`] \morphism(2150,2000)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{A}`\ov{B};v] \morphism(2150,1500)/@{>}|{\usebox{\bbox}}/<0,-500>[\ov{B}`B;g_*] \morphism(2150,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;y] \place(2400,250)[{\scriptstyle =}] \place(2400,1250)[{\scriptstyle \alpha}] \square(2650,0)/`@{>}|{\usebox{\bbox}}`=`=/[C`C'`C'`C'\rlap{\ .};`c_*``] \square(2650,500)/>``@{>}|{\usebox{\bbox}}`>/<500,1500>[A`A'`C`C';a``z'`c] \place(2900,250)[{ \lrcorner}] \place(2900,1250)[{\scriptstyle \gamma}] \efig $$ Some calculation is needed to show naturality, which we leave to the reader. This natural transformation is what gives $ \beta \bsd \gamma $. We are now ready for the main theorem of the section. 
\begin{theorem} For $ {\mathbb A} $ a left closed double category with companions, the internal hom is a functor $$ \bsd\ \colon {\bf TC} ({\mathbb A}) \to {\bf A}_1\rlap{\ .} $$ \end{theorem} \begin{proof} Let $ (\beta, \gamma) $ and $ (\beta', \gamma') $ be composable morphisms in $ {\bf TC} ({\mathbb A}) $ $$ \bfig\scalefactor{.8} \square/>`@{<-}|{\usebox{\bbox}}`@{<-}|{\usebox{\bbox}}`>/[C`C'`B`B';c`y`y'`b] \square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A'`C`C';a`z`z'`] \square(500,0)/>``@{<-}|{\usebox{\bbox}}`>/[C'`C''`B'`B''\rlap{\ .};c'``y''`b'] \square(500,500)/>``@{>}|{\usebox{\bbox}}`/[A'`A''`C'`C'';a'``z''`] \morphism(350,250)/=>/<-200,0>[`;\beta] \place(250,750)[{\scriptstyle \gamma}] \morphism(850,250)/=>/<-200,0>[`;\beta'] \place(750,750)[{\scriptstyle \gamma'}] \efig $$ Then $$ L_{y,z} \to^{\phi_{\beta, \gamma}} L_{y', z'} \to^{\phi_{\beta', \gamma'}} L_{y'', z''} $$ takes $ v $ to the composite of 23 cells (most of which are bookkeeping -- identities, canonical isos, ...) 
arranged in a $ 5 \times 7 $ array, with 38 objects, and best represented schematically as \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(70,50) \put(0,0){\framebox(70,50){}} \put(10,0){\line(0,1){50}} \put(20,0){\line(0,1){50}} \put(30,0){\line(0,1){50}} \put(40,0){\line(0,1){50}} \put(50,0){\line(0,1){50}} \put(60,0){\line(0,1){50}} \put(0,10){\line(1,0){10}} \put(10,20){\line(1,0){20}} \put(20,10){\line(1,0){50}} \put(0,40){\line(1,0){40}} \put(30,30){\line(1,0){10}} \put(40,20){\line(1,0){30}} \put(5,25){\makebox(0,0){$\scriptstyle\cong$}} \put(15,10){\makebox(0,0){$\scriptstyle \beta'$}} \put(25,30){\makebox(0,0){$\scriptstyle \cong$}} \put(35,20){\makebox(0,0){$\scriptstyle \beta$}} \put(45,35){\makebox(0,0){$\scriptstyle \alpha$}} \put(55,35){\makebox(0,0){$\scriptstyle \gamma$}} \put(55,15){\makebox(0,0){$ \lrcorner$}} \put(65,35){\makebox(0,0){$\scriptstyle \gamma'$}} \put(65,5){\makebox(0,0){$ \lrcorner$}} \put(95,25){(*)} \end{picture} \end{center} \noindent whereas $$ L_{yz} \to^{\phi_{\beta' \beta, \gamma' \gamma}} L_{y'' z''} $$ takes $ v $ to \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(70,50) \put(0,0){\framebox(70,50){}} \put(10,0){\line(0,1){50}} \put(20,0){\line(0,1){50}} \put(30,0){\line(0,1){50}} \put(40,0){\line(0,1){50}} \put(50,0){\line(0,1){50}} \put(60,20){\line(0,1){30}} \put(0,10){\line(1,0){10}} \put(10,20){\line(1,0){10}} \put(20,10){\line(1,0){10}} \put(30,20){\line(1,0){40}} \put(10,30){\line(1,0){30}} \put(0,40){\line(1,0){40}} \put(5,25){\makebox(0,0){$\scriptstyle\cong$}} \put(15,10){\makebox(0,0){$\scriptstyle \beta'$}} \put(25,20){\makebox(0,0){$\scriptstyle \beta$}} \put(35,10){\makebox(0,0){$\scriptstyle \cong$}} \put(45,35){\makebox(0,0){$\scriptstyle \alpha$}} \put(55,35){\makebox(0,0){$\scriptstyle \gamma$}} \put(65,35){\makebox(0,0){$\scriptstyle \gamma'$}} \put(60,10){\makebox(0,0){$ \lrcorner$}} \put(90,25){(**)} \end{picture} \end{center} \noindent The three bottom right cells of (**) 
compose to the $ 2 \times 2 $ block on the bottom right of (*), so the $ 5 \times 3 $ part on the right of (*) is equal to the $ 5 \times 4 $ part on the right of (**). And the rest are equal too by coherence. It follows that $$ (\beta' \bsd \gamma') (\beta \bsd \gamma) = (\beta' \beta) \bsd (\gamma' \gamma) . $$ For identities $ 1_{y \bsd z} = 1_y \bsd 1_z $. \end{proof} Right closure is dual but the duality is op, switching the direction of vertical arrows which switches companions with conjoints and retrocells with coretrocells. We outline the changes. \begin{definition} (Shulman) $ {\mathbb A} $ has {\em globular right homs} if for every $ x $, $ (\ ) \bdot x $ has a right adjoint $ (\ ) \slashdot x $ in $ {\cal{V}}{\it ert} {\mathbb A} $, $$ \frac{y \bdot x \to z}{y \to z \slashdot x} \mbox{\quad in \ ${\cal{V}}{\it ert}{\mathbb A} $}. $$ This bijection is mediated by an evaluation cell $$ \bfig\scalefactor{.8} \square/=``@{>}|{\usebox{\bbox}}`=/<500,1000>[A`A`C`C\rlap{\ .};```] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;x] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C;z \slashdot x] \place(250,500)[{\scriptstyle \epsilon'}] \efig $$ The right homs are {\em strong} if $ z \slashdot x $ has the universal property for cells of the form $$ \bfig\scalefactor{.8} \square/=``@{>}|{\usebox{\bbox}}`>/<500,1000>[A`A`C'`C\rlap{\ .};``z`g] \morphism(0,1000)/@{>}|{\usebox{\bbox}}/<0,-500>[A`B;x] \morphism(0,500)/@{>}|{\usebox{\bbox}}/<0,-500>[B`C';y] \place(250,500)[{\scriptstyle \alpha}] \efig $$ \end{definition} \begin{proposition} If $ {\mathbb A} $ has conjoints and globular right homs, then the strong universal property is equivalent to the canonical morphism $$ g^* \bdot (z \slashdot x) \to (g^* \bdot z) \slashdot x $$ being an isomorphism. If instead $ {\mathbb A} $ has companions, then strong is equivalent to globular. 
\end{proposition} Finally, if $ {\mathbb A} $ has conjoints, $ z \slashdot x $ is functorial in $ z $ and $ x $, for standard cells in $ z $ and for coretrocells in $ x $. More precisely, $ \slashdot $ is defined on the category $ {\bf TS} ({\mathbb A}) $ whose objects are spans of vertical arrows, $ (x, z) $, as below, and whose morphisms are pairs of cells $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`A'`C`C';a`z`z'`c] \place(250,250)[{\scriptstyle \gamma}] \square(0,500)/>`@{<-}|{\usebox{\bbox}}`@{<-}|{\usebox{\bbox}}`/[B`B'`A`A';b`x`x'`] \morphism(250,850)/=>/<0,-200>[`;\alpha] \efig $$ where $ \alpha $ is a coretrocell and $ \gamma $ a standard one. \begin{theorem} If $ {\mathbb A} $ has conjoints and is right closed, then $ \slashdot $ is a functor $$ \slashdot \ \colon {\bf TS} ({\mathbb A}) \to {\bf A}_1 . $$ \end{theorem} For completeness sake, we end this section with a definition. \begin{definition} A double category $ {\mathbb A} $ is closed if it is right closed and left closed. \end{definition} \section{A triple category} As mentioned in the introduction, one of the inspirations for retrocells was the commuter cells of \cite{GraPar08}. \begin{definition} Let $ {\mathbb A} $ be a double category with companions. A cell $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \place(250,250)[{\scriptstyle \alpha}] \efig $$ is a {\em commuter} cell if the associated globular cell $ \widehat{\alpha} $ $$ \bfig\scalefactor{.8} \square/`>`=`=/[C`D`D`D;`g_*``] \square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[B`B`C`D;f`v`w`g] \square(0,1000)/=`=`>`/[A`A`B`B;``f_*`] \place(250,250)[{ \lrcorner}] \place(250,750)[{\scriptstyle \alpha}] \place(250,1250)[{ \ulcorner}] \efig $$ is a horizontal isomorphism. \end{definition} The intent is that the cell $ \alpha $ itself is an isomorphism making the square commute (up to isomorphism). 
The inverse of $ \widehat{\alpha} $ is a retrocell, so the question is, how do we express that a cell and a retrocell are inverse to each other? Cells and retrocells form a double category (and ultimately a triple category). For a double category with companions $ {\mathbb A} $, we define a new (vertical arrow) double category $ {\mathbb V}{\rm ar} ({\mathbb A}) $ as follows. Its objects are the vertical arrows of $ {\mathbb A} $, its horizontal arrows are standard cells of $ {\mathbb A} $, and its vertical arrows are retrocells. It is a thin double category with a unique cell $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[v`w`v'`w';\alpha`\beta`\gamma`\alpha'] \place(250,250)[{\scriptstyle !}] \efig $$ $$ \bfig\scalefactor{.9} \node a(300,0)[C'] \node b(800,0)[D'] \node c(0,250)[A'] \node d(300,500)[C] \node e(800,500)[D] \node f(0,800)[A] \node g(500,800)[B] \arrow|b|/>/[a`b;g'] \arrow|l|/@{>}|{\usebox{\bbox}}/[c`a;v'] \arrow|r|/>/[d`a;c] \arrow|r|/>/[e`b;d] \arrow|b|/>/[d`e;g] \arrow|l|/>/[f`c;a] \arrow|l|/@{>}|{\usebox{\bbox}}/[f`d;v] \arrow|r|/@{>}|{\usebox{\bbox}}/[g`e;w] \arrow|a|/>/[f`g;f] \place(430,650)[{\scriptstyle \alpha}] \morphism(160,320)|l|/=>/<0,200>[`;\beta] \node f'(1500,800)[A] \node g'(2000,800)[B] \node c'(1500,300)[A'] \node d'(2000,300)[B'] \node a'(1800,0)[C'] \node b'(2300,0)[D'] \node e'(2300,400)[D] \arrow|a|/>/[f'`g';f] \arrow|l|/>/[f'`c';a] \arrow|l|/>/[g'`d';b] \arrow|r|/@{>}|{\usebox{\bbox}}/[g'`e';w] \arrow|a|/>/[c'`d';f'] \arrow|l|/@{>}|{\usebox{\bbox}}/[c'`a';v'] \arrow|b|/>/[a'`b';g'] \arrow|r|/@{>}|{\usebox{\bbox}}/[d'`b';w'] \arrow|r|/>/[e'`b';d] \place(1950,150)[{\scriptstyle \alpha'}] \morphism(2150,320)|l|/=>/<0,200>[`;\gamma] \efig $$ if we have $$ f' a = b f $$ $$ g' c = d g\ , $$ and $$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[A'`C`C'`C';`v'`c_*`] \square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`A'`C;`a_*`v`] 
\place(250,500)[{\scriptstyle \beta}] \square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[C`D`C'`D';``d_*`g'] \square(500,500)/>``@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f``w`g] \place(750,250)[{\scriptstyle *}] \place(750,750)[{\scriptstyle \alpha}] \place(1300,500)[=] \square(1600,0)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A'`B'`C'`D';f'`v'`w'`g'] \square(1600,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`B`A'`B';f`a_*`b_*`] \place(1850,250)[{\scriptstyle \alpha'}] \place(1850,750)[{\scriptstyle *}] \square(2100,0)/``@{>}|{\usebox{\bbox}}`=/[B'`D`D'`D';``d_*`] \square(2100,500)/=``@{>}|{\usebox{\bbox}}`/[B`B`B'`D;``w`] \place(2350,500)[{\scriptstyle \gamma}] \efig $$ where the starred cells are the canonical ones gotten from the equations $ g' c = d g $ and $ f' a = b f $ by ``sliding''. \begin{proposition} $ {\mathbb V}{\rm ar} ({\mathbb A}) $ is a strict double category. \end{proposition} \begin{proof} We just have to check that cells compose horizontally and vertically. We simply give a sketch of the proof. Suppose we have two cells, $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[v`w`v'`w';\alpha`\beta`\gamma`\alpha'] \place(250,250)[{\scriptstyle !}] \square(500,0)/>``@{>}|{\usebox{\bbox}}`>/[w`x`w'`x';\delta``\xi`\delta'] \place(750,250)[{\scriptstyle !}] \efig $$ i.e. 
we have \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(130,20) \put(0,0){\framebox(20,20){}} \put(30,0){\framebox(20,20){}} \put(80,0){\framebox(20,20){}} \put(110,0){\framebox(20,20){}} \put(10,0){\line(0,1){20}} \put(40,0){\line(0,1){20}} \put(90,0){\line(0,1){20}} \put(120,0){\line(0,1){20}} \put(10,10){\line(1,0){10}} \put(30,10){\line(1,0){10}} \put(90,10){\line(1,0){10}} \put(110,10){\line(1,0){10}} \put(25,10){\makebox(0,0){$=$}} \put(65,10){\makebox(0,0){and}} \put(105,10){\makebox(0,0){$=$}} \put(5,10){\makebox(0,0){$\scriptstyle \beta$}} \put(15,5){\makebox(0,0){$\scriptstyle *$}} \put(15,15){\makebox(0,0){$\scriptstyle \alpha$}} \put(15,15){\makebox(0,0){$\scriptstyle \alpha$}} \put(35,5){\makebox(0,0){$\scriptstyle \alpha'$}} \put(35,15){\makebox(0,0){$\scriptstyle *$}} \put(45,10){\makebox(0,0){$\scriptstyle \gamma$}} \put(85,10){\makebox(0,0){$\scriptstyle \gamma$}} \put(95,5){\makebox(0,0){$\scriptstyle *$}} \put(95,15){\makebox(0,0){$\scriptstyle \delta$}} \put(115,5){\makebox(0,0){$\scriptstyle \delta'$}} \put(115,15){\makebox(0,0){$\scriptstyle *$}} \put(125,10){\makebox(0,0){$\scriptstyle \xi$}} \put(135,0){.} \end{picture} \end{center} \noindent Thus \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(170,20) \put(0,0){\framebox(20,20){}} \put(30,0){\framebox(30,20){}} \put(70,0){\framebox(30,20){}} \put(110,0){\framebox(30,20){}} \put(150,0){\framebox(20,20){}} \put(10,0){\line(0,1){20}} \put(40,0){\line(0,1){20}} \put(50,0){\line(0,1){20}} \put(80,0){\line(0,1){20}} \put(90,0){\line(0,1){20}} \put(120,0){\line(0,1){20}} \put(130,0){\line(0,1){20}} \put(160,0){\line(0,1){20}} \put(10,10){\line(1,0){10}} \put(40,10){\line(1,0){20}} \put(70,10){\line(1,0){10}} \put(90,10){\line(1,0){10}} \put(110,10){\line(1,0){20}} \put(150,10){\line(1,0){10}} \put(25,10){\makebox(0,0){$=$}} \put(65,10){\makebox(0,0){$=$}} \put(105,10){\makebox(0,0){$=$}} \put(145,10){\makebox(0,0){$=$}} \put(5,10){\makebox(0,0){$\scriptstyle \beta$}} 
\put(15,5){\makebox(0,0){$\scriptstyle *$}} \put(15,15){\makebox(0,0){$\scriptstyle \delta \alpha$}} \put(35,10){\makebox(0,0){$\scriptstyle \beta$}} \put(45,5){\makebox(0,0){$\scriptstyle *$}} \put(45,15){\makebox(0,0){$\scriptstyle \alpha$}} \put(55,5){\makebox(0,0){$\scriptstyle *$}} \put(55,15){\makebox(0,0){$\scriptstyle \delta$}} \put(75,5){\makebox(0,0){$\scriptstyle \alpha'$}} \put(75,15){\makebox(0,0){$\scriptstyle *$}} \put(85,10){\makebox(0,0){$\scriptstyle \gamma$}} \put(95,5){\makebox(0,0){$\scriptstyle *$}} \put(95,15){\makebox(0,0){$\scriptstyle \delta$}} \put(115,5){\makebox(0,0){$\scriptstyle \alpha'$}} \put(115,15){\makebox(0,0){$\scriptstyle *$}} \put(125,5){\makebox(0,0){$\scriptstyle \delta'$}} \put(125,15){\makebox(0,0){$\scriptstyle *$}} \put(135,10){\makebox(0,0){$\scriptstyle \xi$}} \put(155,5){\makebox(0,0){$\scriptstyle \alpha' \delta'$}} \put(155,15){\makebox(0,0){$\scriptstyle *$}} \put(165,10){\makebox(0,0){$\scriptstyle \xi$}} \put(175,0){.} \end{picture} \end{center} \noindent Consider cells $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[v'`w'`v''`w''\rlap{\ .};\alpha'`\beta'`\gamma'`\alpha''] \place(250,250)[{\scriptstyle !}] \square(0,500)/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[v`w`v'`w';\alpha`\beta`\gamma`] \place(250,750)[{\scriptstyle !}] \efig $$ We did not say, but vertical composition of arrows in $ {\mathbb V}{\rm ar} ({\mathbb A}) $ is given by horizontal composition of retrocells. It could not be otherwise given their boundaries. 
Then we have the following \begin{center} \setlength{\unitlength}{.9mm} \begin{picture}(170,30) \put(0,5){\framebox(20,20){}} \put(30,0){\framebox(30,30){}} \put(70,0){\framebox(30,30){}} \put(110,0){\framebox(30,30){}} \put(150,5){\framebox(20,20){}} \put(10,5){\line(0,1){20}} \put(40,0){\line(0,1){30}} \put(50,0){\line(0,1){30}} \put(80,0){\line(0,1){30}} \put(90,0){\line(0,1){30}} \put(120,0){\line(0,1){30}} \put(130,0){\line(0,1){30}} \put(160,5){\line(0,1){20}} \put(10,15){\line(1,0){10}} \put(30,20){\line(1,0){10}} \put(40,10){\line(1,0){20}} \put(50,20){\line(1,0){10}} \put(70,20){\line(1,0){20}} \put(80,10){\line(1,0){20}} \put(110,10){\line(1,0){10}} \put(110,20){\line(1,0){20}} \put(130,10){\line(1,0){10}} \put(150,15){\line(1,0){10}} \put(25,15){\makebox(0,0){$=$}} \put(65,15){\makebox(0,0){$=$}} \put(105,15){\makebox(0,0){$=$}} \put(145,15){\makebox(0,0){$=$}} \put(5,15){\makebox(0,0){$\scriptstyle \beta' \bdot \beta$}} \put(15,10){\makebox(0,0){$\scriptstyle *$}} \put(15,20){\makebox(0,0){$\scriptstyle \alpha$}} \put(35,10){\makebox(0,0){$\scriptstyle \beta'$}} \put(35,25){\makebox(0,0){$\scriptstyle =$}} \put(45,5){\makebox(0,0){$\scriptstyle =$}} \put(45,20){\makebox(0,0){$\scriptstyle \beta$}} \put(55,5){\makebox(0,0){$\scriptstyle *$}} \put(55,25){\makebox(0,0){$\scriptstyle \alpha$}} \put(75,10){\makebox(0,0){$\scriptstyle \beta'$}} \put(75,25){\makebox(0,0){$\scriptstyle =$}} \put(85,5){\makebox(0,0){$\scriptstyle *$}} \put(85,15){\makebox(0,0){$\scriptstyle \alpha'$}} \put(85,25){\makebox(0,0){$\scriptstyle *$}} \put(95,5){\makebox(0,0){$\scriptstyle =$}} \put(95,20){\makebox(0,0){$\scriptstyle \gamma$}} \put(115,5){\makebox(0,0){$\scriptstyle \alpha''$}} \put(115,15){\makebox(0,0){$\scriptstyle *$}} \put(115,25){\makebox(0,0){$\scriptstyle *$}} \put(125,10){\makebox(0,0){$\scriptstyle \gamma'$}} \put(125,25){\makebox(0,0){$\scriptstyle =$}} \put(135,5){\makebox(0,0){$\scriptstyle =$}} \put(135,20){\makebox(0,0){$\scriptstyle \gamma$}} 
\put(155,10){\makebox(0,0){$\scriptstyle \alpha''$}} \put(155,20){\makebox(0,0){$\scriptstyle *$}} \put(165,15){\makebox(0,0){$\scriptstyle \gamma' \bdot \gamma$}} \put(175,5){.} \end{picture} \end{center} \noindent So horizontal and vertical composition of cells are again cells. Identities pose no problem. \end{proof} \begin{proposition} A cell $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f`v`w`g] \place(250,250)[{\scriptstyle \alpha}] \efig $$ in $ {\mathbb A} $ is a commuter cell iff $ \alpha \colon v \to w $ has a companion in $ {\mathbb V}{\rm ar} {\mathbb A} $. \end{proposition} \begin{proof} A companion $ \beta \colon v \xy\morphism(0,0)|m|<225,0>[`;\hspace{-1mm}\bullet\hspace{-1mm}]\endxy w $ for $ \alpha $ will have cells $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[v`w`w`w;\alpha`\beta`\id_w`] \place(250,250)[{\scriptstyle !}] \place(850,250)[\mbox{and}] \square(1200,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[v`v`v`w;`\id_v`\beta`\alpha] \place(1450,250)[{\scriptstyle !}] \efig $$ i.e. 
it is a retrocell $$ \bfig\scalefactor{.8} \square/>`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`>/[A`B`C`D;f'`v`w`g'] \morphism(340,250)/=>/<-200,0>[`;\beta] \efig $$ making the following cubes ``commute'' $$ \bfig\scalefactor{.9} \node a(300,0)[D] \node b(800,0)[D] \node c(0,250)[B] \node d(300,500)[C] \node e(800,500)[D] \node f(0,800)[A] \node g(500,800)[B] \arrow|b|/=/[a`b;] \arrow|l|/@{>}|{\usebox{\bbox}}/[c`a;w] \arrow|r|/>/[d`a;g'] \arrow|r|/=/[e`b;] \arrow|b|/>/[d`e;g] \arrow|l|/>/[f`c;f'] \arrow|l|/@{>}|{\usebox{\bbox}}/[f`d;v] \arrow|r|/@{>}|{\usebox{\bbox}}/[g`e;w] \arrow|a|/>/[f`g;f] \place(430,650)[{\scriptstyle \alpha}] \morphism(150,300)|l|/=>/<0,200>[`;\beta] \node f'(1500,800)[A] \node g'(2000,800)[B] \node c'(1500,300)[B] \node d'(2000,300)[B] \node a'(1800,0)[D] \node b'(2300,0)[D] \node e'(2300,400)[D] \arrow|a|/>/[f'`g';f] \arrow|l|/>/[f'`c';f'] \arrow|l|/=/[g'`d';] \arrow|r|/@{>}|{\usebox{\bbox}}/[g'`e';w] \arrow|a|/=/[c'`d';] \arrow|l|/@{>}|{\usebox{\bbox}}/[c'`a';w] \arrow|b|/=/[a'`b';] \arrow/@{>}|{\usebox{\bbox}}/[d'`b';w] \arrow|r|/=/[e'`b';] \place(1950,150)[{\scriptstyle 1_w}] \morphism(2130,320)|r|/=>/<0,200>[`;] \place(2215,335)[{\scriptstyle 1_w}] \place(1150,500)[\mbox{``=''}] \efig $$ and $$ \bfig\scalefactor{.9} \node a(300,0)[C] \node b(800,0)[D] \node c(0,250)[A] \node d(300,500)[C] \node e(800,500)[C] \node f(0,800)[A] \node g(500,800)[A] \arrow|b|/>/[a`b;g] \arrow|l|/@{>}|{\usebox{\bbox}}/[c`a;v] \arrow|r|/=/[d`a;] \arrow|r|/>/[e`b;g'] \arrow|b|/=/[d`e;] \arrow|l|/=/[f`c;] \arrow|l|/@{>}|{\usebox{\bbox}}/[f`d;v] \arrow|r|/@{>}|{\usebox{\bbox}}/[g`e;v] \arrow|a|/=/[f`g;] \place(430,650)[{\scriptstyle 1_v}] \morphism(160,320)|l|/=>/<0,200>[`;1_v] \node f'(1500,800)[A] \node g'(2000,800)[A] \node c'(1500,300)[A] \node d'(2000,300)[B] \node a'(1800,0)[C] \node b'(2300,0)[D] \node e'(2300,400)[C] \arrow|a|/=/[f'`g';] \arrow|l|/=/[f'`c';] \arrow|l|/>/[g'`d';f'] \arrow|r|/@{>}|{\usebox{\bbox}}/[g'`e';v] \arrow|a|/>/[c'`d';f] 
\arrow|l|/@{>}|{\usebox{\bbox}}/[c'`a';v] \arrow|b|/>/[a'`b';g] \arrow/@{>}|{\usebox{\bbox}}/[d'`b';w] \arrow|r|/>/[e'`b';g'] \place(1950,150)[{\scriptstyle \alpha}] \morphism(2150,320)|l|/=>/<0,200>[`;\beta] \place(1150,500)[\mbox{``=''}] \efig $$ \noindent So, first of all $ f = f' $ and $ g = g' $. The first ``equation'' says $$ \bfig\scalefactor{.8} \square/`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`C`D`D;`w`g_*`] \square(0,500)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`/[A`A`B`C;`f_*`v`] \place(250,500)[{\scriptstyle \beta}] \square(500,0)/>``=`=/[C`D`D`D;g```] \place(750,250)[{ \lrcorner}] \square(500,500)/>``@{>}|{\usebox{\bbox}}`/[A`B`C`D;f``w`] \place(750,750)[{\scriptstyle \alpha}] \place(1350,500)[=] \square(1700,0)/=`@{>}|{\usebox{\bbox}}`@{>}|{\usebox{\bbox}}`=/[B`B`D`D;`w`w`] \place(1950,250)[{\scriptstyle 1_w}] \square(1700,500)/>`@{>}|{\usebox{\bbox}}`=`/[A`B`B`B;f`f_*``] \place(1950,750)[{\scriptstyle \lrcorner}] \square(2200,0)/=```=/<500,1000>[B`B`D`D;```] \morphism(2700,1000)|r|/@{>}|{\usebox{\bbox}}/<0,-500>[B`D;w] \morphism(2700,500)/=/<0,-500>[D`D;] \place(2450,500)[{\scriptstyle \cong}] \efig $$ which by sliding is equivalent to $ \widehat{\alpha} \beta = 1_{w \bdot f_*} $. Similarly the second equation says $ \beta \widehat{\alpha} = 1_{g_* \bdot v} $. \end{proof} We end by acknowledging the ``triple category in the room''. The cubes we have been discussing are clearly the triple cells of a triple category $ {\mathfrak{Ret}} {\mathbb A} $. We orient the cubes to be in line with our intercategories conventions of \cite{GraPar15} where the faces of the cubes are horizontal, vertical (left and right), and basic (front and back) in decreasing order of strictness (or fancyness). The order here will be commutative, cell, and retrocell. \begin{itemize} \item[1.] Objects are the objects of $ {\mathbb A} $, ($ A, A', B, ..$) \item[2.] Transversal arrows are the horizontal arrows of $ {\mathbb A} $, ($ f, f', g, g' $) \item[3.] 
Horizontal arrows are the horizontal arrows of $ {\mathbb A} $, ($ a, b, c, d $) \item[4.] Vertical arrows are the vertical arrows of $ {\mathbb A} $, ($ v, v', w, w' $) \item[5.] Horizontal cells are commutative squares of horizontal arrows \item[6.] Vertical cells are double cells in $ {\mathbb A} $, ($ \alpha, \alpha' $) \item[7.] Basic cells are retrocells in $ {\mathbb A} $, ($ \beta, \gamma $) \item[8.] Triple cells are ``commutative'' cubes as discussed above \end{itemize} $$ \bfig\scalefactor{.9} \node a(300,0)[D] \node b(800,0)[D'] \node c(0,250)[C] \node d(300,500)[B] \node e(800,500)[B'] \node f(0,800)[A] \node g(500,800)[A'] \arrow|b|/>/[a`b;d] \arrow|l|/>/[c`a;g] \arrow|r|/@{>}|{\usebox{\bbox}}/[d`a;w] \arrow|r|/@{>}|{\usebox{\bbox}}/[e`b;w'] \arrow|b|/>/[d`e;b] \arrow|l|/@{>}|{\usebox{\bbox}}/[f`c;v] \arrow|l|/>/[f`d;f] \arrow|r|/>/[g`e;f'] \arrow|a|/>/[f`g;a] \place(150,400)[{\scriptstyle \alpha}] \morphism(650,250)|l|/=>/<-200,0>[`;\gamma] \node f'(1500,800)[A] \node g'(2000,800)[A'] \node c'(1500,300)[C] \node d'(2000,300)[C'] \node a'(1800,0)[D] \node b'(2300,0)[D'\rlap{\ .}] \node e'(2300,400)[B'] \arrow|a|/>/[f'`g';a] \arrow|l|/@{>}|{\usebox{\bbox}}/[f'`c';v] \arrow|l|/@{>}|{\usebox{\bbox}}/[g'`d';v'] \arrow|r|/>/[g'`e';f'] \arrow|a|/>/[c'`d';c] \arrow|l|/>/[c'`a';g] \arrow|b|/>/[a'`b';d] \arrow/>/[d'`b';g'] \arrow|r|/@{>}|{\usebox{\bbox}}/[e'`b';w'] \morphism(1850,550)/=>/<-200,0>[`;\beta] \place(2150,400)[{\scriptstyle \alpha'}] \efig $$ We leave the details to the interested reader. \end{document}
\begin{document} \title{On Type I Blowups of Suitable Weak Solutions to Navier-Stokes Equations near Boundary. } \begin{abstract} In this note, boundary Type I blowups of suitable weak solutions to the Navier-Stokes equations are discussed. In particular, it has been shown that, under certain assumptions, the existence of non-trivial mild bounded ancient solutions in half space leads to the existence of suitable weak solutions with Type I blowup on the boundary. \end{abstract} \section{Introduction} \setcounter{equation}{0} The aim of the note is to study conditions under which solutions to the Navier-Stokes equations undergo Type I blowups on the boundary. Consider the classical Navier-Stokes equations \begin{equation} \label{NSE} \partial_tv+v\cdot\nabla v-\Delta v=-\nabla q,\qquad {\rm div}\,v=0 \end{equation} in the space time domain $Q^+=B^+\times ]-1,0[$, where $B^+=B^+(1)$ and $B^+(r)=\{x\in \mathbb R^3: |x|<r,\,x_3>0\}$ is a half ball of radius $r$ centred at the origin $x=0$. It is supposed that $v$ satisfies the homogeneous Dirichlet boundary condition \begin{equation}\label{bc} v(x',0,t)=0 \end{equation} for all $|x'|<1$ and $-1<t<0$. Here, $x'=(x_1,x_2)$ so that $x=(x',x_3)$ and $z=(x,t)$. Our goal is to understand how to determine whether or not the origin $z=0$ is a singular point of the velocity field $v$. We say that $z=0$ is a regular point of $v$ if there exists $r>0$ such that $v\in L_\infty(Q^+(r))$ where $Q^+(r)=B^+(r)\times ]-r^2,0[$. It is known, see \cite{S3} and \cite{S2009}, that the velocity $v$ is H\"older continuous in a parabolic vicinity of $z=0$ if $z=0$ is a regular point. However, further smoothness even in spatial variables does not follow in the regular case, see \cite{Kang2005} and \cite{SerSve10} for counter-examples. The class of solutions to be studied is as follows. 
\begin{definition}\label{sws} A pair of functions $v$ and $q$ is called a suitable weak solution to the Navier-Stokes equations in $Q^+$ near the boundary if and only if the following conditions hold: \begin{equation}\label{class} v\in L_{2, \infty}(Q^+)\cap L_2(-1,0;W^1_2(Q^+)),\qquad q\in L_\frac 32(Q^+); \end{equation} $v$ and $q$ satisfy equations (\ref{NSE}) and boundary condition (\ref{bc}); $$\int\limits_{B^+}\varphi(x,t)|v(x,t)|^2dx+2\int\limits_{-1}^t\int\limits_{B^+}\varphi|\nabla v|^2dxdt\leq $$ \begin{equation}\label{energy_inequality} \leq \int\limits_{-1}^t\int\limits_{B^+}(|v|^2(\partial_t\varphi+\Delta\varphi)+v\cdot\nabla v(|v|^2+2q))dxdt \end{equation} for all non-negative functions $\varphi\in C^\infty_0(B\times]-1,1[)$ such that $\varphi|_{x_3=0}=0$. \end{definition} In what follows, some statements will be expressed in terms of scale-invariant quantities (invariant with respect to the Navier-Stokes scaling: $\lambda v(\lambda x,\lambda^2 t)$ and $\lambda ^2q(\lambda x,\lambda^2 t)$). Here, they are: $$A(v,r)=\sup\limits_{-r^2<t<0}\frac 1r\int\limits_{B^+(r)}|v(x,t)|^2dx, \qquad E(v,r)=\frac 1r\int\limits_{Q^+(r)}|\nabla v|^2dz,$$$$ C(v,r)=\frac 1{r^2}\int\limits_{Q^+(r)}|v|^3dz,\qquad D_0(q,r)=\frac 1{r^2}\int\limits_{Q^+(r)}|q-[q]_{B^+(r)}|^\frac 32dz, $$ $$D_2(q,r)=\frac 1{r^\frac {13}8}\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q|^\frac {12}{11}dx\Big)^\frac {11}8dt,$$ where $$[f]_\Omega=\frac 1{|\Omega|}\int\limits_\Omega fdx.$$ We also introduce the following values: $$g(v):=\min\{\sup\limits_{0<R<1}A(v,R), \sup\limits_{0<R<1}C(v,R),\sup\limits_{0<R<1}E(v,R)\} $$ and, given $r>0$, $$G_r(v,q):=$$$$\max\{\sup\limits_{0<R<r}A(v,R), \sup\limits_{0<R<r}C(v,R),\sup\limits_{0<R<r}E(v,R),\sup\limits_{0<R<r}D_0(q,R)\}. $$ The relationship between $g(v)$ and $G_1(v,q)$ is described in the following proposition.
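Let us verify, for instance, that $A$ and $E$ are indeed scale-invariant; the computation is elementary and the other quantities are treated in the same way. For $v_\lambda(x,t)=\lambda v(\lambda x,\lambda^2t)$, the substitution $y=\lambda x$, $s=\lambda^2t$ (so that $dx=\lambda^{-3}dy$) gives $$A(v_\lambda,r)=\sup\limits_{-r^2<t<0}\frac 1r\int\limits_{B^+(r)}\lambda^2|v(\lambda x,\lambda^2t)|^2dx=\sup\limits_{-(\lambda r)^2<s<0}\frac 1{\lambda r}\int\limits_{B^+(\lambda r)}|v(y,s)|^2dy=A(v,\lambda r),$$ and, since $\nabla v_\lambda(x,t)=\lambda^2(\nabla v)(\lambda x,\lambda^2t)$ and $dz=\lambda^{-5}dy\,ds$, $$E(v_\lambda,r)=\frac 1r\int\limits_{Q^+(r)}\lambda^4|(\nabla v)(\lambda x,\lambda^2t)|^2dz=E(v,\lambda r).$$ In particular, suprema of these quantities over all radii are unchanged under the scaling.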
\begin{pro}\label{boundednesstheorem} Let $v$ and $q$ be a suitable weak solution to the Navier-Stokes equations in $Q^+$ near the boundary. Then, $G_1$ is bounded if and only if $g$ is bounded. \end{pro} If $z=0$ is a singular point of $v$ and $g(v)<\infty$, then $z=0$ is called a Type I singular point or a Type I blowup point. Now, we are ready to state the main results of the paper. \begin{definition} \label{leas} A function $u:Q^+_-:=\mathbb R^3_+\times]-\infty,0[\,\to\mathbb R^3$ is called a local energy ancient solution if there exists a function $p:Q_-^+\to\mathbb R$ such that the pair $u$ and $p$ is a suitable weak solution in $Q^+(R)$ for any $R>0$. Here, $\mathbb R^3_+:=\{x\in \mathbb R^3:\,x_3>0\}$. \end{definition} \begin{theorem}\label{local energy ancient solution} There exists a suitable weak solution $v$ and $q$ with Type I blowup at the origin $z=0$ if and only if there exists a non-trivial local energy ancient solution $u$ such that $u$ and the corresponding pressure $p$ have the following properties: \begin{equation}\label{scalequatities} G_\infty(u,p):=\max\{\sup\limits_{0<R<\infty} A(u,R), \sup\limits_{0<R<\infty}E(u,R),\sup\limits_{0<R<\infty}C(u,R),\sup\limits_{0<R<\infty}D_0(p,R)\}<\infty \end{equation} and \begin{equation}\label{singularity} \inf\limits_{0<a<1}C(u,a)\geq \varepsilon_1>0. \end{equation} \end{theorem} \begin{remark}\label{singType1} According to (\ref{scalequatities}) and (\ref{singularity}), the origin $z=0$ is a Type I blowup of the velocity $u$. \end{remark} There is another way to construct a suitable weak solution with Type I blowup. It is motivated by the recent result in \cite{AlBa18} for the interior case. Now, the main object is related to the so-called mild bounded ancient solutions in a half space; for details see \cite{SerSve15} and \cite{BaSe15}.
\begin{definition}\label{mbas} A bounded function $u$ is a mild bounded ancient solution if and only if there exists a pressure $p=p^1+p^2$, where the even extension of $p^1$ in $x_3$ to the whole space $\mathbb R^3$ is an $L_\infty(-\infty,0;BMO(\mathbb R^3))$-function, $$\Delta p^1={\rm divdiv}\,u\otimes u$$ in $Q^+_-$ with $p^1_{,3}(x',0,t)=0$, and $p^2(\cdot,t)$ is a harmonic function in $\mathbb R^3_+$, whose gradient satisfies the estimate $$|\nabla p^2(x,t)|\leq \ln (2+1/x_3)$$ for all $(x,t)\in Q^+_-$ and has the property $$\sup\limits_{x'\in \mathbb R^2}|\nabla p^2(x,t)|\to 0 $$ as $x_3\to \infty$; the functions $u$ and $p$ satisfy: $$\int\limits_{Q^+_-}u\cdot \nabla qdz=0$$ for all $q\in C^\infty_0(Q_-:=\mathbb R^3\times ]-\infty,0[)$ and, for any $t<0$, $$\int\limits_{Q^+_-}\Big(u\cdot(\partial_t\varphi+\Delta\varphi)+u\otimes u:\nabla \varphi+p{\rm div}\,\varphi\Big)dz=0 $$ for all $\varphi\in C^\infty_0(Q_-)$ with $\varphi(x',0,t)=0$ for all $x'\in \mathbb R^2$. \end{definition} As has been shown in \cite{BaSe15}, any mild bounded ancient solution $u$ in a half space is infinitely smooth up to the boundary and $u|_{x_3=0}=0$. \begin{theorem}\label{mbas_type1} Let $u$ be a mild bounded ancient solution such that $|u|\leq 1$ and $|u(0,a,0)|=1$ for a positive number $a$ and such that (\ref{scalequatities}) holds. Then there exists a suitable weak solution in $Q^+$ having Type I blowup at the origin $z=0$. \end{theorem} {\bf Acknowledgement} The work is supported by the grant RFBR 17-01-00099-a. \section{Basic Estimates} \setcounter{equation}{0} In this section, we are going to state and prove certain basic estimates for arbitrary suitable weak solutions near the boundary. For our purposes, the main estimate of the convective term can be derived as follows.
First, we apply H\"older's inequality in the spatial variables: $$\||v||\nabla v|\|_{\frac {12}{11},\frac 32,Q^+(r)}^\frac 32=\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)} |v|^\frac {12}{11}|\nabla v|^\frac {12}{11}dx\Big)^\frac{11}8dt\leq$$ $$\leq\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla v|^2dx\Big)^\frac 34\Big(\int\limits_{B^+(r)}|v|^\frac {12}5dx\Big)^\frac 58dt. $$ Then, by interpolation, since $\frac {12}5=2\cdot\frac 35+3\cdot\frac 25$, we find $$\Big(\int\limits_{B^+(r)}|v|^\frac {12}5dx\Big)^\frac 58\leq \Big(\int\limits_{B^+(r)}|v|^2dx\Big)^\frac 38\Big(\int\limits_{B^+(r)}|v|^3dx\Big)^\frac 14. $$ So, $$\||v||\nabla v|\|_{\frac {12}{11},\frac 32,Q^+(r)}^\frac 32\leq $$$$\leq \int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla v|^2dx\Big)^\frac 34\Big(\int\limits_{B^+(r)}|v|^2dx\Big)^\frac 38\Big(\int\limits_{B^+(r)}|v|^3dx\Big)^\frac 14dt\leq $$ \begin{equation}\label{mainest}\leq \sup\limits_{-r^2<t<0}\Big(\int\limits_{B^+(r)}|v|^2dx\Big)^\frac 38\Big(\int\limits_{Q^+(r)}|\nabla v|^2dxdt\Big)^\frac 34\Big(\int\limits_{Q^+(r)}|v|^3dxdt\Big)^\frac 14\leq \end{equation} $$\leq r^\frac 38r^\frac 34r^\frac 12A^\frac 38(v,r)E^\frac 34(v,r)C^\frac 14(v,r) $$$$=r^\frac {13}8A^\frac 38(v,r)E^\frac 34(v,r)C^\frac 14(v,r). $$ Two other estimates are well known and valid for any $0<r\leq 1$: \begin{equation}\label{multiple} C(v,r)\leq cA^\frac 34(v,r)E^\frac 34(v,r) \end{equation} and \begin{equation}\label{embedding} D_0(q,r)\leq cD_2(q,r). \end{equation} Next, one more estimate immediately follows from the energy inequality (\ref{energy_inequality}) for a suitable choice of the cut-off function $\varphi$: \begin{equation}\label{energy} A(v,\tau R)+E(v,\tau R)\leq c(\tau)\Big[C^\frac 23(v,R)+C^\frac 13(v,R)D_0^\frac 23(q,R)+C(v,R)\Big] \end{equation} for any $0<\tau<1$ and for all $0<R\leq 1$. The last two estimates come from the linear theory.
Here, they are: \begin{equation}\label{pressure} D_2(q,r)\leq c \Big(\frac r\varrho\Big)^2\Big[D_2(q,\varrho)+E^\frac 34(v,\varrho)\Big]+$$$$+c\Big(\frac \varrho r\Big)^\frac {13}8A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho) \end{equation} for any $0<r<\varrho\leq 1$ and \begin{equation}\label{highder} \|\partial_tv\|_{\frac {12}{11},\frac 32,Q^+(\tau R)}+\|\nabla^2v\|_{\frac {12}{11},\frac 32,Q^+(\tau R)}+\|\nabla q\|_{\frac {12}{11},\frac 32,Q^+(\tau R)} \leq \end{equation} $$\leq c(\tau)R^\frac {13}{12}\Big[D_0^\frac 23(q,R)+C^\frac 13(v,R)+E^\frac 12(v,R)+$$$$+(A^\frac 38(v,R)E^\frac 34(v,R)C^\frac 14(v,R))^\frac 23\Big] $$ for any $0<\tau<1$ and for all $0<R\leq 1$. Estimate (\ref{highder}) follows from bound (\ref{mainest}), from the local regularity theory for the Stokes equations (the linear theory), see the paper \cite{S2009}, and from scaling. Estimate (\ref{pressure}) will be proven in the next section. \section{Proof of (\ref{pressure})} \setcounter{equation}{0} Here, we follow the paper \cite{S3}. We let $\tilde f=-v\cdot\nabla v$ and observe that \begin{equation}\label{weakerterm} \frac 1r\|\nabla v\|_{\frac {12}{11},\frac 32,Q^+(r)}\leq r^\frac {13}{12}E^\frac 12(v,r) \end{equation} and, see (\ref{mainest}), \begin{equation}\label{righthand} \|\tilde f\|_{\frac {12}{11},\frac 32,Q^+(r)} \leq cr^\frac {13}{12}(A^\frac 38(v,r)E^\frac 34(v,r)C^\frac 14(v,r))^\frac 23. \end{equation} Next, we select a convex domain with smooth boundary so that $$B^+(1/2)\subset \tilde B\subset B^+$$ and, for $0<\varrho<1$, we let $$\tilde B(\varrho)=\{x\in \mathbb R^3: x/\varrho\in \tilde B\},\qquad \tilde Q(\varrho)=\tilde B(\varrho)\times ]-\varrho^2,0[.
$$ Now, consider the following initial boundary value problem: \begin{equation}\label{v1eq} \partial_tv^1-\Delta v^1+\nabla q^1=\tilde f, \qquad {\rm div}\,v^1=0 \end{equation} in $\tilde Q(\varrho)$ and \begin{equation}\label{v1ibc} v^1=0 \end{equation} on the parabolic boundary $\partial'\tilde Q(\varrho)$ of $\tilde Q(\varrho)$. It is also supposed that $[q^1]_{\tilde B(\varrho)}(t)=0$ for all $-\varrho^2<t<0$. Due to estimate (\ref{righthand}) and due to the Navier-Stokes scaling, a unique solution to problem (\ref{v1eq}) and (\ref{v1ibc}) satisfies the estimate $$\frac 1{\varrho^2}\|v^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}+\frac 1{\varrho}\|\nabla v^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)} +\|\nabla^2 v^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}+$$ \begin{equation}\label{v1est} + \frac 1{\varrho}\| q^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)} +\|\nabla q^1\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}\leq \end{equation} $$\leq c\|\tilde f\|_{\frac {12}{11},\frac 32,\tilde Q(\varrho)}\leq c\varrho^\frac {13}{12}(A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho))^\frac 23,$$ where the generic constant $c$ is independent of $\varrho$. Regarding $v^2=v-v^1$ and $q^2=q-[q]_{B^+(\varrho/2)}-q^1$, one can notice the following:\begin{equation}\label{v2eq} \partial_tv^2-\Delta v^2+\nabla q^2=0, \qquad {\rm div}\,v^2=0 \end{equation} in $\tilde Q(\varrho)$ and \begin{equation}\label{v2ibc} v^2|_{x_3=0}=0.
\end{equation} As was indicated in \cite{S2009}, the functions $v^2$ and $q^2$ obey the estimate \begin{equation}\label{v2q2est}\|\nabla^2 v^2\|_{9,\frac 32, Q^+(\varrho/4)}+\|\nabla q^2\|_{9,\frac 32, Q^+(\varrho/4)}\leq \frac c{\varrho^\frac {29}{12}}L,\end{equation} where $$L:=\frac 1{\varrho^2}\| v^2\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| \nabla v^2\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q^2\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}.$$ As to an evaluation of $L$, we have $$L\leq \Big[\frac 1{\varrho^2}\| v\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| \nabla v\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q-[q]_{B^+(\varrho/2)}\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+$$ $$+\frac 1{\varrho^2}\| v^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| \nabla v^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}\Big]\leq$$ $$\leq c\Big[\frac 1{\varrho}\| \nabla v\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\|\nabla q\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+$$$$+\frac 1{\varrho}\| \nabla v^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}+\frac 1{\varrho}\| q^1\|_{\frac {12}{11},\frac 32, Q^+(\varrho/2)}\Big].$$ So, by (\ref{weakerterm}), by (\ref{highder}) with $R=\varrho$ and $\tau=\frac 12$, and by (\ref{v1est}), one can find the following bound: \begin{equation}\label{q2est} \|\nabla q^2\|_{9,\frac 32, Q^+(\varrho/4)} \leq \frac c{\varrho^\frac 43}\Big[E^\frac 12(v,\varrho)+D_2^\frac 23(q,\varrho)+$$$$+(A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho))^\frac 23\Big].
\end{equation} Now, assuming $0<r<\varrho/4$, we can derive from (\ref{v1est}) and from (\ref{q2est}) the estimate $$D_2(q,r)\leq \frac c{r^\frac {13}8}\Big[\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^1|^\frac {12}{11}dx\Big)^\frac {11}8dt+ \int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^2|^\frac {12}{11}dx\Big)^\frac {11}8dt\Big]\leq $$$$ \leq \frac c{r^\frac {13}8}\int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^1|^\frac {12}{11}dx\Big)^\frac {11}8dt+cr^2 \int\limits^0_{-r^2}\Big(\int\limits_{B^+(r)}|\nabla q^2|^9dx\Big)^\frac 16dt\leq$$ $$\leq c\Big(\frac \varrho r\Big)^\frac {13}8A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho)+c\Big(\frac r\varrho\Big)^2\Big[E^\frac 34(v,\varrho)+D_2(q,\varrho)+$$$$+A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho)\Big] $$ and thus $$D_2(q,r)\leq c\Big(\frac r\varrho\Big)^2\Big[E^\frac 34(v,\varrho)+D_2(q,\varrho)\Big]+$$$$+c\Big(\frac \varrho r\Big)^\frac {13}8A^\frac 38(v,\varrho)E^\frac 34(v,\varrho)C^\frac 14(v,\varrho) $$ for $0<r<\varrho/4$. The latter implies estimate (\ref{pressure}). \section{Proof of Proposition \ref{boundednesstheorem}} \setcounter{equation}{0} \begin{proof} We let $g=g(v)$ and $G=G_1(v,q)$. Let us assume that $g<\infty$. Our aim is to show that $G<\infty$. There are three cases: \textsc{Case 1}. Suppose that \begin{equation}\label{case1} C_0:=\sup\limits_{0<R<1}C(v,R)<\infty. \end{equation} Then, from (\ref{energy}), one can deduce that $$A(v, R/2)+E(v, R/2)\le c_1(1+D_0^\frac 23(q,R)).$$ Here and in what follows in this case, $c_1$ is a generic constant depending on $C_0$ only. Now, let us use (\ref{embedding}), (\ref{pressure}) with $\varrho= R/2$, and the above estimate. As a result, we find $$D_2(q,r)\leq c\Big(\frac rR\Big)^2D_2(q, R/2) +c_1\Big(\frac Rr\Big)^\frac {13}8[E^\frac 34(v, R/2)+1 +D_2^\frac 34(q,R)]\leq $$$$\leq c\Big(\frac rR\Big)^2D_2(q,R)+c_1\Big(\frac Rr\Big)^\frac {13}8[1+D_2^\frac 23(q,R)]$$ for all $0<r< R/2$.
So, by Young's inequality, \begin{equation}\label{pressure1} D_2(q,r)\leq c\Big(\frac rR\Big)^2D_2(q,R)+c_1\Big(\frac Rr\Big)^\frac {71}8\end{equation} for all $0<r< R/2$. If $ R/2\leq r\leq R$, then $$D_2(q,r)\leq \frac 1{( R/2)^\frac {13}8}\int\limits^0_{-R^2}\Big(\int\limits_{B^+(R)}|\nabla q|^\frac {12}{11}dx\Big)^\frac {11}8dt\leq 2^\frac {13}8D_2(q,R)\Big(\frac {2r}{R}\Big)^2.$$ So, estimate (\ref{pressure1}) holds for all $0<r<R<1$. Now, for $\mu$ and $R$ in $]0,1[$, we let $r=\mu R$ in (\ref{pressure1}) and find $$D_2(q,\mu R)\leq c\mu^2D_2(q,R)+c_1\mu^{-\frac{71}8}. $$ Picking $\mu$ so small that $2c\mu\leq 1$, we show that $$D_2(q,\mu R)\leq \mu D_2(q,R)+c_1 $$ for any $0<R<1$. One can iterate the last inequality and get the following: $$D_2(q,\mu^{k+1}R)\leq \mu^{k+1}D_2(q,R)+c_1(1+\mu+...+\mu^k)$$ for all natural numbers $k$. The latter implies that \begin{equation}\label{1case1est} D_2(q,r)\leq c_1\frac rRD_2(q,R)+c_1 \end{equation} for all $0<r<R<1$. And we can deduce from (\ref{embedding}) and from the above estimate that $$\max\{\sup\limits_{0<R<1}D_0(q,R), \sup\limits_{0<R<1}D_2(q,\tau R)\}<\infty$$ for any $0<\tau<1$. Uniform boundedness of $A(v,R)$ and $E(v,R)$ follows from the energy estimate (\ref{energy}), from the assumption (\ref{case1}), and from the above bound on $D_0$. \textsc{Case 2}. Assume now that \begin{equation}\label{case2} A_0:=\sup\limits_{0<R<1}A(v,R)<\infty. \end{equation} Then, from (\ref{multiple}), it follows that $$C(v,r)\leq cA_0^\frac 34E^\frac 34(v,r)$$ for any $0<r<1$ and thus $$A(v,\tau \varrho)+E(v,\tau \varrho)\leq c_3(A_0,\tau)\Big[E^\frac 12(v,\varrho)+E^\frac 14(v,\varrho)D_0^\frac 23(q,\varrho)+E^\frac 34(v,\varrho)\Big] $$ for any $0<\tau<1$ and $0<\varrho<1$.
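For the reader's convenience, we indicate the elementary step behind the last bound: one simply plugs the inequality $C(v,\varrho)\leq cA_0^\frac 34E^\frac 34(v,\varrho)$ into (\ref{energy}) term by term, which gives $$C^\frac 23(v,\varrho)\leq cA_0^\frac 12E^\frac 12(v,\varrho),\qquad C^\frac 13(v,\varrho)D_0^\frac 23(q,\varrho)\leq cA_0^\frac 14E^\frac 14(v,\varrho)D_0^\frac 23(q,\varrho),$$$$ C(v,\varrho)\leq cA_0^\frac 34E^\frac 34(v,\varrho),$$ and then absorbs the powers of $A_0$ into the constant $c_3(A_0,\tau)$.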
Our next step is an estimate for the pressure quantity: $$D_2(q,r)\leq c\Big(\frac r\varrho\Big)^2\Big[D_2(q,\varrho)+E^\frac34(v,\varrho)\Big]+c_2\Big(\frac \varrho r\Big)^\frac {13}8E^\frac {15}{16}(v,\varrho)\leq$$$$\leq c\Big(\frac r\varrho\Big)^2D_2(q,\varrho)+c_2\Big(\frac \varrho r\Big)^\frac {13}8(E^\frac {15}{16}(v,\varrho)+1)$$ for any $0<r<\varrho<1$. Here, a generic constant, depending on $A_0$ only, is denoted by $c_2$. Letting $r=\tau \varrho$ and $\mathcal E(r):=A(v,r)+E(v,r)+D_2(q,r)$, one can deduce from the latter inequalities, see also (\ref{embedding}), the following estimates: $$\mathcal E(\tau \varrho)\leq c\tau^2D_2(q,\varrho)+c_2\Big(\frac 1 \tau\Big)^\frac {13}8(E^\frac {15}{16}(v,\varrho)+1)+$$$$+c_3(A_0,\tau)\Big[E^\frac 12(v,\varrho)+E^\frac 14(v,\varrho)D_2^\frac 23(q,\varrho)+E^\frac 34(v,\varrho)\Big]\leq $$ $$\leq c\tau^2D_2(q,\varrho)+c_2\Big(\frac 1 \tau\Big)^\frac {13}8(E^\frac {15}{16}(v,\varrho)+1)+$$$$+c_3(A_0,\tau)\Big(\frac1{\tau}\Big)^4E^\frac 34(v,\varrho)+c_3(A_0,\tau)\Big[E^\frac 12(v,\varrho)+E^\frac 34(v,\varrho)\Big]\leq $$ $$\leq c\tau^2\mathcal E(\varrho) +c_3(A_0,\tau). $$ The rest of the proof is similar to what has been done in Case 1, see the derivation of (\ref{1case1est}). \textsc{Case 3}. Assume now that \begin{equation}\label{case3} E_0:=\sup\limits_{0<R<1}E(v,R)<\infty. \end{equation} Then, from (\ref{multiple}), it follows that $$C(v,r)\leq cE_0^\frac 34A^\frac 34(v,r)$$ for all $0<r\leq 1$. As to the pressure, we can find $$D_2(q,\tau\varrho)\leq c\tau^2D_2(q,\varrho)+c_4(E_0,\tau)A^\frac {9}{16}(v,\varrho) $$ for any $0<\tau<1$ and for any $0<\varrho<1$. In turn, the energy inequality gives: $$A(v,\tau \varrho)\leq c_5(E_0,\tau)\Big[A^\frac 12(v,\varrho)+A^\frac 14(v,\varrho)D_0^\frac 23(q,\varrho)+A^\frac 34(v,\varrho)\Big]\leq $$ $$\leq c_5(E_0,\tau)\Big[A^\frac 12(v,\varrho)+A^\frac 14(v,\varrho)D_2^\frac 23(q,\varrho)+A^\frac 34(v,\varrho)\Big] $$ for any $0<\tau<1$ and for any $0<\varrho<1$.
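In these cases, the mixed terms are absorbed with the help of Young's inequality; we sketch the elementary bound behind this step. For all $a,b\geq 0$ and $\tau\in]0,1[$, $$a^\frac 14b^\frac 23=\Big(\frac {a^\frac 14}\delta\Big)\big(\delta b^\frac 23\big)\leq \frac 1{3\delta^3}a^\frac 34+\frac 23\delta^\frac 32 b\leq \tau^2 b+\frac c{\tau^4}a^\frac 34,$$ where $\delta$ is chosen so that $\frac 23\delta^\frac 32=\tau^2$. Applied with $a=E(v,\varrho)$ (or $a=A(v,\varrho)$) and $b=D_2(q,\varrho)$, this explains the coefficient $(1/\tau)^4$ appearing above.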
Similar to Case 2, one can introduce the quantity $\mathcal E(r)=A(v,r)+D_2(q,r)$ and find the following inequality for it: $$\mathcal E(\tau\varrho)\leq c\tau^2D_2(q,\varrho)+c_4(E_0,\tau)A^\frac {9}{16}(v,\varrho)+$$ $$+c_5(E_0,\tau)\Big[A^\frac 12(v,\varrho)+A^\frac 14(v,\varrho)D_2^\frac 23(q,\varrho)+A^\frac 34(v,\varrho)\Big]\leq $$ $$\leq c\tau^2\mathcal E(\varrho)+c_5(E_0,\tau)$$ for any $0<\tau<1$ and for any $0<\varrho<1$. The rest of the proof is the same as in Case 2. \end{proof} \section{Proof of Theorem \ref{local energy ancient solution}} \setcounter{equation}{0} Assume that the pair $v$ and $q$ is a suitable weak solution in $Q^+$ with Type I blowup at the origin so that \begin{equation}\label{type1} g=g(v)=\min\{ \sup\limits_{0<R<1}A(v,R),\sup\limits_{0<R<1}E(v,R),\sup\limits_{0<R<1}C(v,R)\}<\infty. \end{equation} By Proposition \ref{boundednesstheorem}, \begin{equation} \label{bound1} G_1=G_1(v,q):=\max\{\sup\limits_{0<R<1}A(v,R),\sup\limits_{0<R<1}E(v,R),$$$$\sup\limits_{0<R<1}C(v,R), \sup\limits_{0<R<1}D_0(q,R)\}<\infty. \end{equation} We know, see Theorem 2.2 in \cite{S2016}, that there exists a positive number $\varepsilon_1=\varepsilon_1(G_1)$ such that \begin{equation}\label{sing1} \inf\limits_{0<R<1}C(v,R)\geq \varepsilon_1>0. \end{equation} Indeed, otherwise the origin $z=0$ would be a regular point of $v$. Let $R_k\to0$ and $a>0$ and let $$u^{(k)}(y,s)=R_kv(x,t),\qquad p^{(k)}(y,s)=R_k^2q(x,t), $$ where $x=R_ky$, $t=R^2_ks$.
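The scale invariance behind the identities used below can be checked directly. For instance, with $x=R_ky$ and $t=R_k^2s$, $$A(u^{(k)},a)=\sup\limits_{-a^2<s<0}\frac 1a\int\limits_{B^+(a)}R_k^2|v(R_ky,R_k^2s)|^2dy=\sup\limits_{-(aR_k)^2<t<0}\frac 1{aR_k}\int\limits_{B^+(aR_k)}|v(x,t)|^2dx=A(v,aR_k);$$ the quantities $E$, $C$, and $D_0$ transform in the same way.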
Then, we have $$A(v,aR_k)=A(u^{(k)},a)\leq G_1,\qquad E(v,aR_k)=E(u^{(k)},a)\leq G_1,$$$$ C(v,aR_k)=C(u^{(k)},a)\leq G_1, \qquad D_0(q,aR_k)=D_0(p^{(k)},a)\leq G_1.$$ Thus, by (\ref{highder}), $$\|\partial_tu^{(k)}\|_{\frac {12}{11},\frac 32,Q^+(a)}+\|\nabla^2u^{(k)}\|_{\frac {12}{11},\frac 32,Q^+(a)}+\|\nabla p^{(k)}\|_{\frac {12}{11},\frac 32,Q^+(a)}\leq c(a,G_1).$$ Moreover, the well-known multiplicative inequality implies the following bound: $$\sup\limits_k\int\limits_{Q^+(a)}|u^{(k)}|^\frac {10}3dz\leq c(a,G_1).$$ Using known arguments, one can select a subsequence (still denoted in the same way as the whole sequence) such that, for any $a>0$, $$u^{(k)}\to u$$ in $L_3(Q^+(a))$, $$\nabla u^{(k)}\rightharpoonup \nabla u$$ in $L_2(Q^+(a))$, $$p^{(k)}\rightharpoonup p$$ in $L_\frac 32(Q^+(a))$. The first two statements are well known and we shall comment on the last one only. Without loss of generality, we may assume that $$\nabla p^{(k)}\rightharpoonup w $$ in $L_{\frac {12}{11}}(Q^+(a))$ for all positive $a$. We let $p^{(k)}_1(x,t)=p^{(k)}(x,t)-[p^{(k)}]_{B^+(1)}(t)$. Then, there exists a subsequence $\{k^1_j\}_{j=1}^\infty$ such that $$p^{(k^1_j)}_1\rightharpoonup p_1$$ in $L_\frac 32(Q^+(1))$ as $j\to\infty$. Indeed, this follows from the Poincar\'e--Sobolev inequality $$\|p^{(k^1_j)}_1\|_{\frac 32, Q^+(1)}\leq c \|\nabla p^{(k^1_j)}\|_{\frac {12}{11},\frac 32,Q^+(1)}\leq c(1,G_1).$$ Moreover, one has $\nabla p_1=w$ in $Q^+(1)$. Our next step is to define $p^{(k^1_j)}_2(x,t)=p^{(k^1_j)}(x,t)-[p^{(k^1_j)}]_{B^+(2)}(t)$. For the same reason as above, there is a subsequence $\{k^2_j\}_{j=1}^\infty$ of the sequence $\{k^1_j\}_{j=1}^\infty$ such that $$p^{(k^2_j)}_2\rightharpoonup p_2$$ in $L_\frac 32(Q^+(2))$ as $j\to\infty$. Moreover, we claim that $\nabla p_2=w$ in $Q^+(2)$ and $$p_2(x,t)-p_1(x,t)=[p_2]_{B^+(1)}(t)-[p_1]_{B^+(1)}(t)=[p_2]_{B^+(1)}(t)$$ for $x\in B^+(1)$ and $-1<t<0$, i.e., in $Q^+(1)$.
After $s$ steps, we arrive at the following: there exists a subsequence $\{k^s_j\}_{j=1}^\infty$ of the sequence $\{k^{s-1}_j\}_{j=1}^\infty$ such that $p^{(k^s_j)}_s(x,t) =p^{(k^s_j)}(x,t)-[p^{(k^s_j)}]_{B^+(s)}(t)$ in $Q^+(s)$ and $$p^{(k^s_j)}_s\rightharpoonup p_s$$ in $L_\frac 32(Q^+(s))$ as $j\to\infty$. Moreover, $\nabla p_s=w$ in $Q^+(s)$ and $$p_s(x,t)=p_{s-1}(x,t)+[p_s]_{B^+(s-1)}(t)$$ in $Q^+(s-1)$. And so on. The following function $p$ is then well defined: $p=p_1$ in $Q^+(1)$ and $$p(x,t)=p_{s+1}(x,t)-\sum\limits_{m=1}^s[p_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)$$ in $Q^+(s+1)$, where $\chi_\omega(t)$ is the indicator function of the set $\omega \subset \mathbb R$. Indeed, to this end, we need to verify that $$p_{s+1}(x,t)-\sum\limits_{m=1}^{s}[p_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)=$$ $$=p_{s}(x,t)-\sum\limits_{m=1}^{s-1}[p_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)$$ in $Q^+(s)$. The latter is an easy exercise. Now, let us fix $s$ and consider the sequence $$p^{(k^s_j)}(x,t)=p^{(k^s_j)}_{s}(x,t)-\sum\limits_{m=1}^{s-1}[p^{(k^s_j)}_{m+1}]_{B^+(m)}(t)\chi_{]-m^2,0[}(t)$$ in $Q^+(s)$. Then, since the sequence $\{k^s_j\}_{j=1}^\infty$ is a subsequence of all sequences $\{k^{m+1}_j\}_{j=1}^\infty$ with $m\leq s-1$, one can easily check that $$p^{(k^s_j)}\rightharpoonup p$$ in $L_\frac 32(Q^+(s))$. It remains to apply Cantor's diagonal procedure. With the above convergences in hand, we can conclude that the pair $u$ and $p$ is a local energy ancient solution in $Q^+_-$ and that (\ref{scalequatities}) and (\ref{singularity}) hold. The converse statement is obvious. \section{Proof of Theorem \ref{mbas_type1}} \setcounter{equation}{0} The proof is similar to the proof of Theorem \ref{local energy ancient solution}. We start with the scaling $u^\lambda(y,s)=\lambda u(x,t)$ and $p^\lambda(y,s)=\lambda^2p(x,t)$, where $x=\lambda y$ and $t=\lambda^2s$, and let $\lambda\to\infty$.
We know that $$|u^\lambda(0,y_{3\lambda},0)|=\lambda|u(0,a,0)|=\lambda,$$ where $y_{3\lambda}=a/\lambda\to0$ as $\lambda\to\infty$. For any $R>0$, by the invariance with respect to the scaling, we have $$A(u^\lambda,R)=A(u,\lambda R)\leq G_\infty(u,p)=:G_0,\qquad E(u^\lambda,R)=E(u,\lambda R)\leq G_0, $$ $$C(u^\lambda,R)=C(u,\lambda R)\leq G_0,\qquad D_0(p^\lambda,R)=D_0(p,\lambda R)\leq G_0.$$ Now, one can apply estimate (\ref{highder}) and get the following: $$\|\partial_tu^\lambda\|_{\frac {12}{11},\frac 32,Q^+(R)}+\|\nabla^2u^\lambda\|_{\frac {12}{11},\frac 32,Q^+(R)}+\|\nabla p^\lambda\|_{\frac {12}{11},\frac 32,Q^+(R)}\leq c(R,G_0).$$ Without loss of generality, we can deduce from the above estimates that, for any $R>0$, $$u^{\lambda}\to v$$ in $L_3(Q^+(R))$, $$\nabla u^{\lambda}\rightharpoonup \nabla v$$ in $L_2(Q^+(R))$, $$p^{\lambda}\rightharpoonup q$$ in $L_\frac 32(Q^+(R))$. Passing to the limit as $\lambda\to\infty$, we conclude that the pair $v$ and $q$ is a local energy ancient solution in $Q^+_-$ for which $G_\infty(v,q)<\infty$. Now, our goal is to prove that $z=0$ is a singular point of $v$. We argue ad absurdum. Assume that the origin is a regular point, i.e., there exist numbers $R_0>0$ and $A_0>0$ such that $$|v(z)|\leq A_0$$ for all $z\in Q^+(R_0)$. Hence, \begin{equation}\label{estim1} C(v,R)=\frac 1{R^2}\int\limits_{Q^+(R)}|v|^3dz\leq cA_0^3R^3 \end{equation} for all $0<R\leq R_0$. Moreover, \begin{equation}\label{pass} C(u^\lambda,R)\to C(v,R) \end{equation} as $\lambda\to\infty$. By weak convergence, $$D_0(q,R)\leq G_0$$ for all $R>0$. Now, we can calculate the positive numbers $\varepsilon(G_0)$ and $c(G_0)$ of Theorem 2.2 in \cite{S2016}. Then, let us fix $0<R_1<R_0$, see (\ref{estim1}), so that $C(v,R_1)<\varepsilon(G_0)/2$. According to (\ref{pass}), one can find a number $\lambda_0>0$ such that $$C(u^\lambda,R_1)<\varepsilon(G_0)$$ for all $\lambda>\lambda_0$.
By Theorem 2.2 of \cite{S2016}, $$\sup\limits_{z\in Q^+(R_1/2)}|u^\lambda(z)|<\frac {c(G_0)}{R_1} $$ for all $\lambda>\lambda_0$. It remains to select $\lambda_1>\lambda_0$ such that $y_{3\lambda_1}=a/\lambda_1 <R_1/2$ and $\lambda_1>c(G_0)/R_1$. Then $$|u^{\lambda_1}(0,y_{3\lambda_1},0)|=\lambda_1\leq \sup\limits_{z\in Q^+(R_1/2)}|u^{\lambda_1}(z)|<\frac {c(G_0)}{R_1}.$$ This is a contradiction. \end{document}
\begin{document} \title{Noise sensitivity of the top eigenvector of a Wigner matrix \thanks{ G\'abor Lugosi was supported by the Spanish Ministry of Economy and Competitiveness, Grant PGC2018-101643-B-I00; ``High-dimensional problems in structured probabilistic models - Ayudas Fundaci\'on BBVA a Equipos de Investigaci\'on Cientifica 2017''; and Google Focused Award ``Algorithms and Learning for AI''. Charles Bordenave was supported by by the research grants ANR-14-CE25-0014 and ANR-16-CE40-0024-01. Nikita Zhivotovskiy was supported by RSF grant No. 18-11-00132. }} \author{Charles Bordenave\thanks{Institut de Math\'ematiques de Marseille, CNRS \& Aix-Marseille University, Marseille, France.} \and G\'abor Lugosi \thanks{Department of Economics and Business, Pompeu Fabra University, Barcelona, Spain, [email protected]} \thanks{ICREA, Pg. Lluís Companys 23, 08010 Barcelona, Spain} \thanks{Barcelona Graduate School of Economics} \and Nikita Zhivotovskiy\thanks{This work was prepared while Nikita Zhivotovskiy was a postdoctoral fellow at the department of Mathematics, Technion I.I.T. and researcher at National University Higher School of Economics. Now at Google Research, Brain Team.} } \maketitle \begin{abstract} We investigate the noise sensitivity of the top eigenvector of a Wigner matrix in the following sense. Let $v$ be the top eigenvector of an $N\times N$ Wigner matrix. Suppose that $k$ randomly chosen entries of the matrix are resampled, resulting in another realization of the Wigner matrix with top eigenvector $v^{[k]}$. We prove that, with high probability, when $k \ll N^{5/3-o(1)}$, then $v$ and $v^{[k]}$ are almost collinear and when $k\gg N^{5/3}$, then $v^{[k]}$ is almost orthogonal to $v$. \end{abstract} \section{Introduction} In this paper we study the \emph{noise sensitivity} of top eigenvectors of Wigner matrices. 
For a positive integer $N$, let $X=(X_{i,j})$ be a symmetric $N\times N$ matrix such that, for $i\leq j$, the $X_{i,j}$ are independent real random variables, such that for some constant $\delta>0$ and for all $ i \leq j$, $\mathbb{E} X_{i,j} = 0$ and $\mathbb{E} \exp ( |X_{i,j}|^\delta ) \leq 1/\delta$. Note that this assumption is satisfied for a wide class of distributions with a sufficiently light tail. Uniformly bounded, sub-gaussian, and sub-exponential distributions fall in this class. To guarantee that $X$ is a symmetric matrix, we set $X_{i,j} = X_{j,i}$. Finally, we assume that the off-diagonal entries have the unit variance: for all $i \ne j$, $\mathbb{E} X_{ij}^2 = 1$ and for all $i$, $\mathbb{E} X_{ii}^2 = \sigma_0^2$, for some $\sigma_0 \geq 0$. Throughout this text, we call such matrix $X$ a Wigner matrix. In this paper we are concerned with large matrices and the main results are asymptotic, concerning $N\to \infty$. The distribution of the entries $X_{i,j}$ may change with $N$ though we suppress this dependence in the notation. However, the values of $\sigma_0$ and $\delta$ are assumed to be the same for all $N$. Let $\lambda=\sup_{w\in S^{N-1}} \inr{w,Xw}$ be the top eigenvalue of $X$ and let $v$ denote the corresponding unit eigenvector. In this paper we study the noise sensitivity of $v$. In particular, we are interested in the behavior of the top eigenvector $v^{[k]}$ of the symmetric matrix $X^{[k]}$ obtained by resampling $k$ random entries of $X$. The main finding of the paper is that, with high probability, when $k \leq N^{5/3-o(1)}$, then $v$ and $v^{[k]}$ are almost collinear and when $k\gg N^{5/3}$, then $v^{[k]}$ is almost orthogonal to $v$. \subsection*{Related work and proof technique} Noise sensitivity is an important notion in probability that has been extensively studied since the pioneering work of Benjamini, Kalai, and Schramm \cite{BeKaSc99}. 
Noise sensitivity has mostly been studied in the context of Boolean functions and it has been shown to have deep connections with threshold phenomena, measure concentration, and isoperimetric inequalities, see Talagrand \cite{Tal94a}, Friedgut and Kalai \cite{FrKa96}, Kahn, Kalai, and Linial \cite{KaKaLi88}, Bourgain, Kahn, Kalai, Katznelson, and Linial \cite{BoKaKaKaLi92} for some of the key early work and Garban \cite{Gar11}, Garban and Steif \cite{GaSt14}, Kalai and Safra \cite{KaSa06}, O'Donnell \cite{ODo14} for surveys. The key techniques for studying noise sensitivity typically use elements of harmonic analysis, in particular, hypercontractivity (\cite{Tal94a}, \cite{KaKaLi88}) but also the ``randomized algorithm'' approach of Schramm and Steif \cite{ScSt10} and other techniques, see Garban, Pete, and Schramm \cite{GaPeSc10}. Our approach is inspired by Chatterjee's work \cite{Cha16} who shows that, for functions of independent standard Gaussian random variables, the notion of noise sensitivity (or ``chaos'' as Chatterjee calls it) is deeply related to the notion of ``superconcentration''. In fact, a result in a similar spirit to ours for the \emph{Gaussian Unitary Ensemble} was proved by Chatterjee \cite[Section 3.6]{Cha16}. However, instead of resampling random entries of the matrix, the perturbations considered in \cite{Cha16} are different. In Chatterjee's model, \emph{every} entry of the matrix $X$ is perturbed by replacing $X$ by $Y=e^{-t}X+\sqrt{1-e^{-2t}}X'$ where $X'$ is an independent copy of $X$ and $t>0$. It is proved in \cite{Cha16} that the top eigenvectors of $X$ and $Y$ are approximately orthogonal (in the sense that the expectation of their inner product goes to zero as $N\to\infty$) as soon as $t \gg N^{-1/3}$. Chatterjee uses this example to illustrate how ``superconcentration'' implies ``chaos''. His techniques crucially depend on the Gaussian assumption as in that case explicit formulas may be exploited. 
Our techniques are similar in the sense that our starting point is also ``superconcentration'' (i.e., the fact that the variance of the largest eigenvalue of a Wigner matrix is small). However, outside of the Gaussian realm, the notions of superconcentration and chaos are murkier. Starting from a general formula for the variance of a function of independent random variables, due to Chatterjee \cite{Cha05}, we establish a monotonicity lemma that allows us to make the connection between the variance of the top eigenvalue and the inner product of interest. Then we use the fact that the top eigenvector has a small variance (i.e., in a sense, it is ``superconcentrated''). The monotonicity lemma may be of independent interest and it may have further uses when one tries to prove that ``superconcentration implies chaos'' for functions of independent--not necessarily Gaussian--random variables. \subsection*{Result} To formally describe the setup, let $X$ be a symmetric $N\times N$ Wigner matrix as defined above. For a positive integer $k \le \binom{N}{2} + N = N(N+1)/2$, let the random matrix $X^{[k]}$ be defined as follows. Let $S_k=\{(i_1,j_1),\ldots,(i_k,j_k)\}$ be a set of $k$ pairs chosen uniformly at random (without replacement) from the set of all ordered pairs $(i,j)$ of indices with $1\le i\le j\le N$. We also assume that $S_k$ is independent of the entries of $X$. The entries of $X^{[k]}$ above the diagonal are \[ X^{[k]}_{i,j} = \left\{ \begin{array}{ll} X'_{i,j} & \text{if $(i,j)\in S_k$} \\ X_{i,j} & \text{otherwise}, \end{array} \right. \] where $(X'_{i,j})_{1\le i\le j\le N}$ are independent random variables, independent of $X$ and $X'_{i,j}$ has the same distribution as $X_{i,j}$, for all $i\le j$. In words, $X^{[k]}$ is obtained from $X$ by resampling $k$ random entries of the matrix above and including the diagonal and also the corresponding terms below the diagonal. Clearly, $X^{[k]}$ has the same distribution as $X$. 
Denote unit eigenvectors corresponding to the largest eigenvalues of $X$ and $X^{[k]}$ by $v$ and $v^{[k]}$, respectively. Note that with overwhelming probability, the spectrum of a Wigner matrix is simple and, in particular, the top unit eigenvector is unique (up to changing the sign), see \cite{MR2760897}. Our main results are the following. \begin{theorem} \label{thm:main} Assume that $X$ is a Wigner matrix as above. If $k/N^{5/3}\to \infty$, then \[ \mathbb{E} \left|\inr{v,v^{[k]}}\right| = o(1)~. \] \end{theorem} Conversely, our second result asserts that when $k \leq N^{5/3 -o(1)}$ then $v$ and $v^{[k]}$ are almost aligned. \begin{theorem} \label{thm:main2} Assume that $X$ is a Wigner matrix as above. There exists a constant $c >0$ such that, with $\varepsilon_N =( \log N)^{-c \log \log N}$, $$ \mathbb{E} \max_{1 \leq k \leq \varepsilon_N N^{5/3} } \min_{s \in \{-1,1\}} \| v - s v^{[k]} \|_{2} = o(1)~. $$ \end{theorem} The proof of Theorem \ref{thm:main2} actually establishes that $\max_k \min _s \sqrt N \| v - s v^{[k]} \|_{\infty}$ goes to $0$ in probability. The following heuristic argument may provide an intuition of why the threshold in the lower bound of Theorem \ref{thm:main2} is at $k = N^{5/3 - o(1)}$. Since the seminal work of Erd\H{o}s, Schlein, and Yau \cite{ESY09b}, it is well known that unit eigenvectors of random matrices are delocalized in the sense that $\| v \|_\infty = N^{-1/2 + o(1)}$ with high probability. 
Denoting the top eigenvalue of $X^{[k]}$ by $\lambda^{[k]}$, we might infer from the derivative of a simple eigenvalue as the function of the matrix entries that $$ \lambda^{[1]} - \lambda \simeq ( 1+ \mathbbm{1} (i_1 \ne j_1 )) v_{i_1}( X'_{i_1,j_1} - X_{i_1,j_1} )v_{j_1} \simeq \frac{ X'_{i_1,j_1} - X_{i_1,j_1} }{N^{1+o(1)}}~, $$ where $v_i$ is the $i$-th component of $v$ Assuming that $v_i$ is nearly independent of any matrix entry $X_{ij}$, since $X_{ij}$ is centered with unit variance, we would get from the central limit theorem that $$ \lambda^{[k]} - \lambda = \sum_{t =0}^{k-1}( \lambda^{[t+1]} - \lambda ^{[t]} ) \simeq \frac{\sqrt k}{N^{1+o(1)}}~ . $$ On the other hand, the known behavior of random matrices at the edge of the spectrum implies that the second largest eigenvalue of $X$ is at distance of order $N^{-1/6}$ from $\lambda$. The above heuristic should thus break down when $\sqrt k / N^{1 + o(1)}$ is of order $N^{-1/6}$. It gives the threshold at $k = N^{5/3+o(1)}$. To get an idea of how Theorem \ref{thm:main} is proved, consider the variance of the largest eigenvalue $\lambda$ of $X$. The key inequality we prove is that \[ \left(\mathbb{E} \left|\inr{v,v^{[k]}}\right|\right)^2 \lesssim \frac{N^2\mathrm{Var}(\lambda)}{k}. \] By the Tracy-Widom law \cite{tracy1994level, tracy1996orthogonal} for the largest eigenvalue, we expect that $\mathrm{Var}(\lambda)$ is of order $N^{-1/3}$, which implies the desired asymptotic orthogonality whenever $k/N^{5/3}\to \infty$. The proof of the inequality above is based on a variance formula for general functions of independent random variables due to Chatterjee \cite{Cha05}, see Lemma \ref{lem:chatterjee} below. The variance formula suggests that small variance implies noise sensitivity of the top eigenvalue in a certain sense. This is made precise by Lemmas \ref{lem:monotonicity} and \ref{lem:monotonicity_second}. Finally, noise sensitivity of the top eigenvalue translates to the inequality above. 
\begin{remark} We expect that the arguments of Theorem \ref{thm:main} for the noise sensitivity of the top eigenvalue may be modified to prove analogous results for the eigenvector corresponding to the $j$-th largest eigenvalue, $1 \leq j \leq N$. However, the threshold is expected to occur at values different from $N^{5/3}$. In particular, a simple heuristic argument suggests that for the $j$-th eigenvector the threshold occurs around $N^{5/3+o(1)} \min ( j , N -j +1)^{-2/3}$. However, to keep the presentation transparent, in this paper we focus on the top eigenvalue. \end{remark} Interestingly, the proof that the top eigenvalue is very sensitive to resampling more than $\Theta(N^{5/3})$ entries involves proving that it is insensitive to resampling just a single entry. As a consequence, the proofs of Theorems \ref{thm:main} and \ref{thm:main2} share common techniques. The rest of the paper is dedicated to proving Theorems \ref{thm:main} and \ref{thm:main2}. In Section \ref{sec:var} we introduce a general tool for proving noise sensitivity that generalizes Chatterjee's ideas based on ``superconcentration'' to functions of independent, not necessarily standard normal random variables. In Section \ref{sec:rm} we summarize some of the tools from random matrix theory that are crucial for our arguments. In Sections \ref{sec:thm1} and \ref{sec:thm2} we give the proofs of Theorems \ref{thm:main} and \ref{thm:main2}. \section{Variance and noise sensitivity} \label{sec:var} The first building block in the proof of Theorem \ref{thm:main} is a formula for the variance of an arbitrary function of independent random variables, due to Chatterjee \cite{Cha05}. For any positive integer $i$, denote $[i]=\{1, \ldots, i\}$. \begin{lemma} \label{lem:chatterjee} \emph{\cite{Cha05}} Let $X_1,\ldots,X_n$ be independent random variables taking values in some set $\mathcal{X}$ and let $f:\mathcal{X}^n \to \mathbb{R}$ be a measurable function. Denote $X=(X_1,\ldots,X_n)$.
Let $X'=(X_1',\ldots,X_n')$ be an independent copy of $X$. Under the notation \[ X^{(i)}=(X_1,\ldots,X_{i-1},X_i',X_{i+1},\ldots,X_n) \quad \text{and} \quad X^{[i]}=(X_1',\ldots,X_i',X_{i+1},\ldots,X_n) \] and, in particular, $X^{[0]}=X$ and $X^{[n]}=X'$, we have \[ \mathrm{Var}(f(X)) = \frac{1}{2} \sum_{i=1}^n \mathbb{E} \left[\left(f(X)-f(X^{(i)})\right) \left(f(X^{[i-1]})-f(X^{[i]})\right)\right]~. \] \end{lemma} In general, for $A \subseteq [n]$ let $X^A$ denote the random vector obtained from $X$ by replacing the components indexed by $A$ by the corresponding components of $X^{\prime}$. In the variance formula above, the order of the variables does not matter and the formula remains valid after permuting the indices $1,\ldots,n$ arbitrarily. In particular, one may take the variables in random order. Thus, if $\sigma=(\sigma(1),\ldots,\sigma(n))$ is a random permutation sampled uniformly from the symmetric group $S_n$ and $\sigma([i])$ denotes $\{\sigma(1), \ldots, \sigma(i)\}$, then \begin{equation} \label{eq:var} \mathrm{Var}(f(X)) = \frac{1}{2} \sum_{i=1}^n \mathbb{E} \left[\left(f(X)-f(X^{{(\sigma(i))}})\right) \left(f(X^{\sigma([i-1])})-f(X^{\sigma([i])})\right)\right]~. \end{equation} Note that on the right-hand side of \eqref{eq:var} the expectation is taken with respect to both $X,X'$, and the random permutation $\sigma$. One would intuitively expect that the terms on the right-hand side of \eqref{eq:var} decrease with $i$, as the differences $f(X)-f(X^{ (\sigma(i))} )$ and $f(X^{\sigma([i-1])})-f(X^{\sigma([i])})$ become less correlated as more randomly chosen components get resampled. This is indeed the case and this fact is one of our main tools in proving noise sensitivity. We believe that the following lemma can be useful in diverse situations. The proof is given in Section \ref{sec:prooflemmon} below. \begin{lemma} \label{lem:monotonicity} Consider the setup of Lemma \ref{lem:chatterjee} and the notation above.
For $i\in [n]$, denote \[ B_i= \mathbb{E} \left[\left(f(X)-f(X^{{(\sigma(i))}})\right) \left(f(X^{\sigma([i-1])})-f(X^{\sigma([i])})\right)\right]~, \] where the expectation is taken with respect to the components of the vectors and the random permutation. Then $B_i \ge B_{i+1}$ for all $i=1,\ldots,n-1$ and $B_n \ge 0$. In particular, for any $k\in [n]$, \[ B_k \le \frac{2\mathrm{Var}(f(X))}{k}. \] \end{lemma} We also introduce a modification of Lemma \ref{lem:monotonicity} that will be more convenient for our purposes. To do so, we introduce the following notation. Let $j$ have uniform distribution on $[n]$. Let $X^{(j) \circ \sigma([i-1])}$ denote the vector obtained from $X^{\sigma([i-1])}$ by replacing its $j$-th component by an independent copy of the random variable $X_j$, denoted by $X_j^{\prime\prime}$. Observe that $j$ may belong to $\sigma([i - 1])$ and in this case $X_j^{\prime\prime}$ is independent of $X_j^{\prime}$ appearing in $X^{\sigma([i-1])}$. With this notation in mind, we may prove the following version of Lemma \ref{lem:monotonicity}. \begin{lemma} \label{lem:monotonicity_second} Using the notation of Lemma \ref{lem:monotonicity}, assuming that $j$ is chosen uniformly at random from the set $[n]$ and independently of the other random variables involved, we have for any $k\in [n]$, \[ B_k^{\prime} \le \frac{2\mathrm{Var}(f(X))}{k}\left(\frac{n + 1}{n}\right)~, \] where for any $i \in [n]$, \[ B_i^{\prime} = \mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{(j) \circ \sigma([i-1])})\right)\right]~. \] \end{lemma} \section{Random matrix results} \label{sec:rm} In the proof of Theorem \ref{thm:main} we apply Lemma \ref{lem:monotonicity_second} with $f$ being the top eigenvalue of a Wigner matrix. The usefulness of this bound crucially hinges on the fact that the variance of the top eigenvalue is small, that is, in a sense, the top eigenvalue is ``superconcentrated''. This fact is quantified in this section.
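As a rough numerical illustration of this superconcentration (our own sketch with arbitrary sizes and seed, not used in any proof), the sample variance of the top eigenvalue of a matrix scaled as above shrinks visibly as $N$ grows, consistently with the $N^{-1/3}$ rate:

```python
import numpy as np

# The entries are scaled as in the paper (unit variance off-diagonal), so the
# top eigenvalue sits near 2*sqrt(N) with fluctuations of order N^{-1/6},
# i.e. Var(lambda) of order N^{-1/3}.
rng = np.random.default_rng(2)

def top_eig(n):
    A = rng.standard_normal((n, n))
    X = (A + A.T) / np.sqrt(2)
    return np.linalg.eigvalsh(X)[-1]   # largest eigenvalue

def var_top(n, trials=250):
    return np.var([top_eig(n) for _ in range(trials)])

v_small, v_large = var_top(100), var_top(400)
print(v_small, v_large, v_large / v_small)  # ratio roughly (400/100)^(-1/3) ~ 0.63
```

The observed ratio is only a crude check: at these sizes finite-$N$ corrections and Monte Carlo error are both non-negligible.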
Our first lemma on the variance of $\lambda$ is obtained as a combination of a result of Ledoux and Rider \cite{MR2678393} on Gaussian ensembles and the universality of fluctuations for Wigner matrices as stated in Erd\H{o}s, Yau and Yin \cite{MR2871147}. \begin{lemma} \label{lem:variance} Assume that $X$ is a Wigner matrix as in Theorem \ref{thm:main}. Let $\lambda$ denote the largest eigenvalue of $X$. Then, \[ \mathrm{Var}(\lambda) \le (c +o(1)) N^{-1/3}~, \] where $c>0$ is an absolute constant. \end{lemma} \begin{remark} The result of Lemma \ref{lem:variance} improves on the variance bound \[ \mathrm{Var}(\lambda) \lesssim (\log N)^{C\log \log N}N^{-1/3}, \] which follows from \cite[Theorem 2.2]{MR2871147}. \end{remark} We also need the following delocalization result for the top eigenvector of a Wigner matrix, which can be found in Tao and Vu \cite[Proposition 1.12]{MR2669449}. \begin{lemma} \label{delocalization}{\em \cite{MR2669449}.} Assume that $X$ is a Wigner matrix as in Theorem \ref{thm:main}. For any real $c_0>0$, there exists a constant $C>0$, such that, with probability at least $1-C N^{-c_0}$, any eigenvector $w$ of $X$ with $\| w \|_2 =1 $ satisfies \[ \|w\|_{\infty} \le \frac{(\log N)^C}{\sqrt{N}}~. \] \end{lemma} Our final lemma is a perturbation inequality in $\ell^\infty$-norm for the top eigenvector of a Wigner matrix when a single entry is resampled. The proof uses precise estimates on the eigenvalue spacings in Wigner matrices proved in Tao and Vu \cite{MR2669449} and Erd\H{o}s, Yau, and Yin \cite{MR2871147}. \begin{lemma} \label{fliponeelem} Let $X$ be a Wigner matrix as in Theorem \ref{thm:main} and $X'$ be an independent copy of $X$. For any $(i,j)$ with $1\le i,j \le N$, denote by $X^{(ij)}$ the symmetric matrix obtained from $X$ by replacing the entry $X_{ij}$ by $X'_{ij}$ and $X_{ji}$ by $X'_{ji}$.
For any $0 < \alpha < 1/10$, there exists $\kappa >0$ such that, for all $N$ large enough, with probability at least $1 - N^{-\kappa}$, \[ \max_{1 \leq i,j \leq N} \inf_{s \in \{-1,1\}} \|s v - u^{(ij)} \|_\infty \le N^{ - \frac 1 2 - \alpha}~, \] where $v$ and $u^{(ij)}$ are any unit eigenvectors corresponding to the largest eigenvalues of $X$ and $X^{(ij)}$. \end{lemma} \section{Proof of Theorem \ref{thm:main}} \label{sec:thm1} Now we are ready for the proof of the main results of the paper. We start by fixing some notation. Let $\lambda$ denote the largest eigenvalue of the Wigner matrix $X$ of Theorem \ref{thm:main} and let $v\in S^{N-1}$ be a corresponding normalized eigenvector. Let $k\in \left[\binom{N}{2} + N\right]$ be an integer to be specified later and let $X^{[k]}$ be the random symmetric matrix obtained by resampling $k$ random entries on or above the diagonal, as defined in the introduction. We denote by $S_k \subset \left[\binom{N}{2} + N\right]$ the set of random positions of the $k$ resampled entries. Let $\lambda^{[k]}$ denote the top eigenvalue of $X^{[k]}$ and $v^{[k]}$ a corresponding normalized eigenvector. For $1 \leq i \leq j \leq N$, we denote by $Y_{(ij)}$ the symmetric matrix obtained from $X$ by replacing the entry $X_{ij}$ by $X^{\prime\prime}_{ij}$, where $X^{\prime\prime}$ is an independent copy of $X$. We obtain $Y^{[k]}_{(ij)}$ from $X^{[k]}$ by the same operation. We denote by $(\mu_{(ij)},u_{(ij)})$ and $(\mu^{[k]}_{(ij)},u^{[k]}_{(ij)})$ the top eigenvalue/eigenvector pairs of $Y_{(ij)}$ and $Y^{[k]}_{(ij)}$, respectively. Let $(s, t)$, with $1\leq s \leq t\leq N$, be a pair of indices chosen uniformly at random among the $\binom{N}{2} + N$ such pairs. For ease of notation, we set $Y = Y_{(st)}$, $\mu = \mu_{(st)} $ and $u = u_{(st)}$. We define similarly $Y^{[k]} = Y^{[k]}_{(st)}$, $\mu^{[k]} = \mu^{[k]}_{(st)}$ and $u^{[k]} = u^{[k]}_{(st)}$.
By applying Lemma \ref{lem:monotonicity_second} to the function of $n=\binom{N}{2} + N$ independent random variables $f\left((X_{i,j})_{1\le i\le j\le N}\right)=\lambda$, we obtain that, for any $k\in \left[\binom{N}{2} + N\right]$, \begin{equation} \label{eq:firststep} \frac{2\mathrm{Var}(\lambda)}{k} \cdot \frac{\binom{N}{2} + N + 1}{\binom{N}{2} + N } \ge \mathbb{E}\left[ (\lambda - \mu)\left(\lambda^{[k]}-\mu^{[k]}\right) \right]~. \end{equation} {In what follows, we show that the right-hand side of \eqref{eq:firststep} satisfies \[ \mathbb{E}\left[(\lambda - \mu)\left(\lambda^{[k]}-\mu^{[k]}\right) \right] \simeq \frac{1}{N^2}\mathbb{E}\left[\langle v, v^{[k]}\rangle^2\right]. \] This relation, combined with Lemma \ref{lem:variance} and \eqref{eq:firststep}, implies \[ \mathbb{E}\left[\langle v, v^{[k]}\rangle^2\right] \lesssim \frac{N^{\frac{5}{3}}}{k}~, \] which is sufficient for Theorem \ref{thm:main}. We proceed with the formal argument. } Using the notation of the previous section we have \[ \mathbb{E}\left[(\lambda - \mu)\left(\lambda^{[k]}-\mu^{[k]}\right) \right] = \mathbb{E}\left[(\inr{v,Xv} - \inr{u,Yu})\left(\inr{v^{[k]},X^{[k]}v^{[k]}} - \inr{u^{[k]},Y^{[k]}u^{[k]}}\right)\right]~. \] { Using the fact that $v$ maximizes $\inr{v,Xv}$ and $u$ maximizes $\inr{u,Yu}$ we have \[ \inr{u,(X - Y)u} \le \inr{v,Xv} - \inr{u,Yu} \le \inr{v,(X - Y)v}. \] Observe that the elements of $X - Y$ are all zeros except at most two that correspond to resampled values. If the element $X_{t, s}$ of $X$ was resampled to get $Y$, we have, for any vector $x$, \[ \inr{x,(X - Y)x} = U_{t, s} x_{t}x_{s} \] with $U_{t,s} = (X_{t,s} - X''_{t,s}) ( 1 + \mathbbm{1}(t \ne s))$. Similarly, if we set $U'_{t,s} = (X'_{t,s} - X''_{t,s}) ( 1 + \mathbbm{1}(t \ne s))$, we have $\inr{x,(X^{[k]} - Y^{[k]})x} = U'_{t, s} x_{t}x_{s}$. 
Therefore, it is straightforward to see that \begin{align*} &(\inr{v,Xv} - \inr{u,Yu})\left(\inr{v^{[k]},X^{[k]}v^{[k]}} - \inr{u^{[k]},Y^{[k]}u^{[k]}}\right)\ge I , \end{align*} where we have set, $$ I = V_{t,s} \min\left\{v_t v_s v^{[k]}_t v^{[k]}_s, u_t u_s v^{[k]}_t v^{[k]}_s, v_t v_s u^{[k]}_t u^{[k]}_s, u_t u_s u^{[k]}_t u^{[k]}_s\right\}, $$ and for $1 \leq i \leq j \leq N$, $$ V_{i,j} = U_{i,j} U'_{i,j} = ( 1 + \mathbbm{1}(i \ne j))^2 (X_{i,j} - X''_{i,j}) (X'_{i,j} - X''_{i,j}). $$ In order to have some extra independence, we introduce yet another independent copy of our random variables. For $1 \leq i \leq j \leq N$, let $Z_{(ij)}$ be the symmetric matrix obtained from $X$ by replacing the entry $X_{ij}$ by $X'''_{ij}$ where $X'''$ is an independent copy of $X$, independent of $X'$ and $X''$. We obtain $Z^{[k]}_{(ij)}$ from $X^{[k]}$ by the same operation. As above, we denote by $w_{(ij)}$, and $w^{[k]}_{(ij)}$ the top unit eigenvector of $Z_{(ij)}$ and $Z^{[k]}_{(ij)}$, respectively. For ease of notation, with $(s,t)$ as above, we define $w = w_{(s,t)}$ and $w^{[k]} = w^{[k]}_{(st)}$. The key observation is that $V_{i,j}$ is independent of $Z_{(ij)}$ and $Z^{[k]}_{(ij)}$.} Fix $0 < \alpha< 1/10$ and let $C$ be as in Lemma \ref{delocalization} for $c_0 = 10$. We define {$\mathcal{E}= \mathcal{E}_1 \cap \mathcal{E}_2$} to be the {intersection} of the following two events: \begin{itemize} {\item $\mathcal{E}_1$: for all $1 \leq i \leq j \leq N$: $\max(\|v - w_{(ij)} \|_{\infty} , \|u_{(ij)} - w_{(ij)} \|_{\infty} , \|v^{[k]} - w_{(ij)}^{[k]}\|_{\infty} , \|u_{(ij)}^{[k]} - w_{(ij)}^{[k]}\|_{\infty} ) \le N^{-\frac 1 2 - \alpha}$. 
\item $\mathcal{E}_2$: $\|x\|_{\infty} \le \frac{(\log N)^C}{\sqrt{N}}$ for all $x \in \left\{v, u_{(ij)}, w_{(ij)}, v^{[k]}, u_{(ij)}^{[k]}, w_{(ij)}^{[k]} : 1 \leq i, j \leq N\right\}$.} \end{itemize} By Lemmas \ref{delocalization}, \ref{fliponeelem}, and the union bound, we have, for all $N$ large enough, $\mathbb{P}(\mathcal{E}_2^c) \leq N^{-6}$ and for some $\kappa >0$, $\mathbb{P}(\mathcal{E}^c) \le N^{-\kappa}$ (provided that we choose properly the $\pm$-phase for the eigenvectors $u$, $w$, $u^{[k]}$ and $w^{[k]}$). Observe that when $\mathcal{E}$ holds, for all \begin{equation}\label{eq:xyinf} x \in \{v_t v_s v^{[k]}_t v^{[k]}_s, u_t u_s v^{[k]}_t v^{[k]}_s, v_t v_s u^{[k]}_t u^{[k]}_s, u_t u_s u^{[k]}_t u^{[k]}_s \}~, \end{equation} we have, for all $N$ large enough, $$|x - w_t w_s w^{[k]}_t w^{[k]}_s| \le \frac{4(\log N)^{3C}}{N^{2 + \alpha}}~.$$ We show this, for brevity, only for $v_t v_s v^{[k]}_t v^{[k]}_s$. Denoting $\delta_t = w_t-v_t$ and $\delta_t^{[k]}=v_t^{[k]}-w_t^{[k]}$, we write \[ v_t v_s v^{[k]}_t v^{[k]}_s = (w_t -\delta_t)(w_s - \delta_s)(w^{[k]}_t - \delta_t^{[k]})(w^{[k]}_s - \delta_s^{[k]})~. \] Then open the brackets and use that, on $\mathcal{E}$, \[ \max\{|\delta_t|, |\delta_s|, |\delta_t^{[k]}|, |\delta_s^{[k]}|\} \le N^{-\frac{1}{2} - \alpha}\quad \text{and}\quad \max\{|w_t|, |w_s|, |w^{[k]}_t|, |w^{[k]}_s|\} \le (\log N)^{{ 3}C} / \sqrt{N}~. \] { If $\mathcal{E}$ holds, we thus have \begin{align*} & I \ge V_{t,s} w_t w_s w^{[k]}_t w^{[k]}_s - \frac{4(\log N)^{3C}}{N^{2 + \alpha}} |V_{t,s}| . \end{align*} On the other hand, if $\mathcal{E}_2 \backslash \mathcal{E}$ holds, we get $$ I \geq - \frac{(\log N)^{4C}}{N^2}\mathbbm{1} (\mathcal{E}^c) |V_{t,s}|. $$ Finally, if $\mathcal{E}_2$ does not hold, using that all the vectors are of unit norm (and therefore, $\max\{|v_t|, |v_s|, |v^{[k]}_t|, |v^{[k]}_s|\} \le 1$), we have \begin{align*} & I \geq - \mathbbm{1} (\mathcal{E}_2^c) |V_{t,s}|~. 
\end{align*} The same bounds hold for $ V_{t,s} w_t w_s w^{[k]}_t w^{[k]}_s $ on $\mathcal{E}_2 \backslash \mathcal{E}$ and $\mathcal{E}_2^c$. Note also that $\mathbb{E} V_{t,s}^2 \leq c^2_1$ for some constant $c_1 \geq 1$ depending on $\delta$. Combining the last three bounds and applying the Cauchy-Schwarz inequality, we arrive at $$ \mathbb{E} [ I ] \geq \mathbb{E} [ V_{t,s} w_t w_s w^{[k]}_t w^{[k]}_s ] - 4c_1\frac{(\log N)^{3C}}{N^{2 + \alpha}} - 2c_1 \frac{(\log N)^{4C}}{N^{2}} \sqrt{\mathbb{P}(\mathcal{E}^c)} - 2c_1 \sqrt{\mathbb{P}(\mathcal{E}_2^c)}. $$ Recalling \eqref{eq:firststep}, we find \begin{align*} \mathbb{E} [ V_{t,s} w_t w_s w^{[k]}_t w^{[k]}_s ] &\le \frac{4\mathrm{Var}(\lambda)}{k}+ 4c_1\frac{(\log N)^{3C}}{N^{2 + \alpha}} + 2c_1 \frac{(\log N)^{4C}}{N^{2}} \sqrt{\mathbb{P}(\mathcal{E}^c)} + 2c_1 \sqrt{\mathbb{P}(\mathcal{E}_2^c)}~. \end{align*} Integrating over the random choice of $(s,t)$, we have \begin{equation} \label{decomp} \mathbb{E} [ V_{t,s} w_t w_s w^{[k]}_t w^{[k]}_s ]= \frac{1}{\binom{N}{2} + N}\mathbb{E} \left( \sum\limits_{1 \le i \le j \le N}V_{i,j} (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j \right). \end{equation} Now, using \eqref{decomp} and the bound $\frac{\binom{N}{2} + N + 1}{\binom{N}{2} + N } \le 2$, we get \begin{equation} \label{eq:Z1} \mathbb{E} \left(\sum\limits_{1 \leq i , j \leq N} \tilde V_{i,j}(w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j \right) \leq 4 N^2 \frac{\mathrm{Var}(\lambda)}{k} + \varepsilon_N , \end{equation} where $\tilde V_{i,j} = V_{i,j} / 2$ if $i \ne j$, $\tilde V_{i,i} = V_{i,i}$ and $$ \varepsilon_N = 4c_1\frac{(\log N)^{3C}}{N^{ \alpha}} + 2c_1 (\log N)^{4C} \sqrt{\mathbb{P}(\mathcal{E}^c) } + 2c_1 N^2\sqrt{\mathbb{P}(\mathcal{E}_2^c)}. $$ Note that for $i \ne j$, $\mathbb{E} \tilde V_{i,j} = 2$ and $\mathbb{E} \tilde V_{i,i} = \sigma_0^2$.
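For completeness, these expectations follow from a direct expansion, using that the three copies of the entry are independent and centered:

```latex
\mathbb{E}\big[(X_{i,j}-X''_{i,j})(X'_{i,j}-X''_{i,j})\big]
  = \mathbb{E}\big[X_{i,j}X'_{i,j}\big]-\mathbb{E}\big[X_{i,j}X''_{i,j}\big]
    -\mathbb{E}\big[X'_{i,j}X''_{i,j}\big]+\mathbb{E}\big[(X''_{i,j})^2\big]
  = \mathbb{E}\big[(X''_{i,j})^2\big]~,
```

so that for $i \ne j$ the factor $(1+\mathbbm{1}(i \ne j))^2 = 4$ and the unit variance of the off-diagonal entries give $\mathbb{E} V_{i,j} = 4$, hence $\mathbb{E} \tilde V_{i,j} = 2$, while on the diagonal $\mathbb{E} \tilde V_{i,i} = \mathbb{E} X_{ii}^2 = \sigma_0^2$.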
For the diagonal terms, we have \begin{equation*} \mathbb{E}\left(\sum\limits_{i=1}^N (w_{(ii)})_i^2 (w^{[k]}_{(ii)})_i^2 \right) \le \frac{ (\log N) ^{4C} }{N} + N \mathbb{P}(\mathcal{E}_2^c). \end{equation*} Hence, using that the variable $V_{i,j}$ is independent of the vectors $w_{(ij)}$ and $w_{(ij)}^{[k]}$, we deduce that \begin{eqnarray} 2 \mathbb{E} \left(\sum\limits_{1 \leq i , j \leq N} (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j \right) &\leq & 4 N^2 \frac{\mathrm{Var}(\lambda)}{k} + \varepsilon'_N ,\label{eq:Z2} \end{eqnarray} where $$ \varepsilon'_N = \varepsilon_N + | 2 - \sigma_0^2|\frac{ (\log N) ^{4C} }{N} + N | 2 - \sigma_0^2| \mathbb{P}(\mathcal{E}_2^c). $$ We now argue that in \eqref{eq:Z2}, we may replace the vectors $w_{(ij)}$ and $w_{(ij)}^{[k]}$ by $v$ and $v^{[k]}$, respectively. We repeat the above argument. Recall the event $\mathcal{E} = \mathcal{E}_1 \cap \mathcal{E}_2$ defined above. As already pointed out, on the event $\mathcal{E}$, we have $$ |v_i v_j v_i^{[k]} v_j ^{[k]} - (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j| \le \frac{4(\log N)^{3C}}{N^{2 + \alpha}}~. $$ If $\mathcal{E}_2$ holds, we have $$ |v_i v_j v_i^{[k]} v_j ^{[k]} - (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j| \le \frac{2(\log N)^{4C}}{N^{2}}~. $$ Finally, there is the deterministic bound $$ |v_i v_j v_i^{[k]} v_j ^{[k]} - (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j| \le 2. $$ Combining the last three bounds we obtain that \begin{align*} &\mathbb{E} \sum\limits_{1 \leq i , j \leq N} | (w_{(ij)})_i (w_{(ij)})_j (w^{[k]}_{(ij)})_i (w^{[k]}_{(ij)})_j - v_i v_j v^{[k]}_i v^{[k]}_j| \\ &\quad\leq \frac{4(\log N)^{3C}}{N^{\alpha}} + 2 (\log N)^{4C} \mathbb{P}(\mathcal{E}^c) + 2N^2 \mathbb{P}(\mathcal{E}_2^c). \end{align*} The right-hand side is upper bounded by $2 \varepsilon_N$.
We thus have proved that \begin{eqnarray} 2 \mathbb{E} \left(\sum\limits_{1 \leq i , j \leq N} v_i v_j v^{[k]}_i v^{[k]}_j \right) &\leq & 4 N^2 \frac{\mathrm{Var}(\lambda)}{k} + \varepsilon''_N ,\label{eq:Z3} \end{eqnarray} with $\varepsilon''_N = \varepsilon'_N +2 \varepsilon_N$. As already pointed out, by Lemmas \ref{delocalization}, \ref{fliponeelem}, and the union bound, we have, for all $N$ large enough, $\mathbb{P}(\mathcal{E}_2^c) \leq N^{-6}$ and $\mathbb{P}(\mathcal{E}^c) \le N^{-\kappa}$. It follows that $\varepsilon''_N\to 0$ as $N \to \infty$. Now, combining Jensen's inequality and \eqref{eq:Z3}, \begin{align*} \left(\mathbb{E}\left|\inr{v, v^{[k]}}\right|\right)^2 &\le \mathbb{E}\left(\sum\limits_{i = 1}^Nv_iv_i^{[k]}\right)^2 \le \mathbb{E}\left(\sum\limits_{1 \le i , j \le N}v_i v_j v^{[k]}_i v^{[k]}_j\right) \le 2N^2 \frac{\mathrm{Var}(\lambda)}{k} + \frac{\varepsilon''_N}{2}. \end{align*} From Lemma \ref{lem:variance}, the claim follows.} \subsection{Proof of Lemma \ref{lem:monotonicity} and Lemma \ref{lem:monotonicity_second}} \label{sec:prooflemmon} We start with the following technical lemma. \begin{lemma} \label{varianceformula} Let $f: \mathcal{X}^n \to \mathbb{R}$ be a measurable function and let $\sigma \in S_n$ be any fixed permutation. Fix $i \in [n - 1]$ and $j \in [n]$ such that $j \notin \sigma([i])$. Let $X_1,\ldots,X_n$ be independent random variables taking values in $\mathcal{X}$. Then \begin{align*} A_i &= \mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])})-f(X^{\sigma([i])})\right)\right] \\ &\ge \mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])\cup j})-f(X^{\sigma([i])\cup j})\right)\right] \\ &\ge 0~. \end{align*} \end{lemma} \begin{proof} Without loss of generality, we may consider one particular permutation $\sigma$, defined as follows: set $\sigma(k) = k$ for $k \notin \{1, i\}$, $\sigma(i) = 1$, $\sigma(1) = i$, and we may also assume that $j = i + 1$.
The proof is identical for any other $\sigma$ and $j$. In our case, \[ A_i = \mathbb{E} \left[\left(f(X)-f(X^{(1)})\right) \left(f(X^{[i]\setminus\{1\}})-f(X^{[i]})\right)\right]~. \] Moreover, we have \[ \mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])\cup j})-f(X^{\sigma([i])\cup j})\right)\right] = A _{i + 1}. \] We introduce a simplifying notation. Denote $B = (X_2, \ldots,X_i)$, $B' = (X'_2, \ldots, X'_i)$ and $C = (X_{i + 2}, \ldots, X_n)$. Therefore, we may rewrite \[ A_i = \mathbb{E} \left[\left(f(X_1, B, X_{i + 1}, C)-f(X_1', B, X_{i + 1}, C)\right) \left(f(X_1, B', X_{i + 1}, C)-f(X_1', B', X_{i + 1}, C)\right)\right] \] and \[ A_{i + 1} = \mathbb{E} \left[\left(f(X_1, B, X_{i + 1}, C)-f(X_1', B, X_{i + 1}, C)\right) \left(f(X_1, B', X_{i + 1}', C)-f(X_1', B', X_{i + 1}', C)\right)\right]~. \] Denote $h(X_1, X_1', X_{i + 1}, C) = \mathbb{E}[\left(f(X_1, B, X_{i + 1}, C)-f(X_1', B, X_{i + 1}, C)\right)\big| X_1, X_1', X_{i + 1}, C]$. Using the independence of $B, B'$ and their independence from the remaining random variables, we have \[ A_i = \mathbb{E} h(X_1, X_1', X_{i + 1}, C)^2~. \] At the same time, using the same notation for $h$ we have, by the Cauchy-Schwarz inequality and the fact that $X_{i + 1}$ and $X_{i + 1}'$ have the same distribution, \begin{align*} A_{i + 1} &= \mathbb{E} h(X_1, X_1', X_{i + 1}, C)h(X_1, X_1', X'_{i + 1},C) \\ &= \mathbb{E}[\mathbb{E}[ h(X_1, X_1', X_{i + 1}, C)h(X_1, X_1', X'_{i + 1},C)|X_1, X_1', C ]] \\ &\le \mathbb{E} h(X_1, X_1', X_{i + 1}, C)^2 \\ &= A_{i}~. \end{align*} Now to prove that $A_{i} \ge 0$, it is sufficient to show that $A_n \ge 0$.
Denoting $g(X_1) = \mathbb{E} [f(X)|\ X_1]$, we have \begin{align*} A_{n} &= \mathbb{E} \left[\left(f(X)-f(X^{(1)})\right) \left(f(X^{[n]\setminus\{1\}})-f(X^{[n]})\right)\right] \\ &= \mathbb{E} (f(X)f(X^{[n]\setminus\{1\}}) - f(X)f(X^{[n]}) - f(X^{(1)})f(X^{[n]\setminus\{1\}}) + f(X^{(1)})f(X^{[n]})) \\ &= 2\mathbb{E} f(X)f(X^{[n]\setminus\{1\}}) - 2(\mathbb{E} f(X))^2 \\ &= 2\mathbb{E}[\mathbb{E} [f(X)f(X^{[n]\setminus\{1\}})| X_1]] - 2(\mathbb{E} f(X))^2 \\ &= 2\mathbb{E}[g(X_1)^2] - 2(\mathbb{E} f(X))^2 \\ &\ge 0~, \end{align*} where we used Jensen's inequality and that $\mathbb{E} g(X_1) = \mathbb{E} f(X)$. \end{proof} We proceed with the proof of Lemma \ref{lem:monotonicity}. \begin{proof} In this proof by writing $i + 1$ we mean $i + 1\ (\text{mod}\ n)$. For each permutation $\sigma \in S_n$ and fixed $i \in [n]$ we construct a corresponding permutation $\sigma'$ by defining $\sigma'(i) = \sigma(i + 1),\ \sigma'(i + 1) = \sigma(i)$ and $\sigma'(k) = \sigma(k)$ for $k \neq \{i, i + 1\}$. It is straightforward to see that for any fixed $i$ there is a one-to-one correspondence between $\sigma \in S_n$ and $\sigma'$. By observing that $\sigma'([i]) = \sigma([i - 1])\cup \sigma(i + 1)$ and $\sigma'([i +1]) = \sigma([i + 1])$ we have, conditionally on $\sigma$, \begin{align*} &\mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])})-f(X^{\sigma([i])})\right)\right] \\ &=\mathbb{E} \left[\left(f(X)-f(X^{(\sigma'(i + 1))})\right) \left(f(X^{\sigma([i - 1])})-f(X^{\sigma([i])})\right)\right] \\ &\ge \mathbb{E} \left[\left(f(X)-f(X^{(\sigma'(i + 1))})\right) \left(f(X^{\sigma'([i])})-f(X^{\sigma'([i+ 1])})\right)\right]~, \end{align*} where in the last step we used Lemma \ref{varianceformula}. 
Using the one-to-one correspondence between $\sigma$ and $\sigma'$, we have \begin{align*} B_i &= \mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])})-f(X^{\sigma([i])})\right)\right] \\ &= \frac{1}{n!}\sum\limits_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i))})\right) \left(f(X^{\sigma([i - 1])})-f(X^{\sigma([i])})\right)\right] \\ &\ge\frac{1}{n!}\sum\limits_{\sigma'}\mathbb{E} \left[\left(f(X)-f(X^{(\sigma'(i + 1))})\right) \left(f(X^{\sigma'([i])})-f(X^{\sigma'([i+ 1])})\right)\right] \\ &=\mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(\sigma(i + 1))})\right) \left(f(X^{\sigma([i])})-f(X^{\sigma([i+ 1])})\right)\right] \\ &= B_{i + 1}. \end{align*} The proof that $B_n \ge 0$ follows from Lemma \ref{varianceformula} as well. \end{proof} Finally, we prove Lemma \ref{lem:monotonicity_second}. \begin{proof} To prove this lemma, we show an upper bound for $B_i^{\prime}$. We have, \begin{align*} B_i^{\prime} &= \mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{(j) \circ \sigma([i-1])})\right)\right] \\ &=\mathbb{E}_{\sigma}\left(\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{(j) \circ \sigma([i-1])})\right)\big| j \in \sigma([i - 1])\right]\mathbb{P}(j \in \sigma([i - 1]))\right) \\ &\quad+\mathbb{E}_{\sigma}\left(\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{(j) \circ \sigma([i-1])})\right)\big| j \notin \sigma([i - 1])\right]\mathbb{P}(j \notin \sigma([i - 1]))\right)~. \end{align*} Observe that $\mathbb{P}(j \in \sigma([i - 1])) = \frac{i - 1}{n}$ and the second summand is equal to $B_i\frac{n - i + 1}{n}$. We proceed with the first summand.
For $i \ge 1$, we have \begin{align*} &\mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{(j) \circ \sigma([i-1])})\right)\big| j \in { \sigma[i - 1]}\right] \\ &= \mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])\setminus\{j\}})-f(X^{(j) \circ \sigma([i-1])})\right)\big| j \in { \sigma[i - 1]}\right] \\ &\quad+\mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])})-f(X^{\sigma([i-1])\setminus\{j\}})\right)\big| j \in { \sigma[i - 1]}\right] \\ & = B_{i - 1} - \mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])\setminus\{j\}}) - f(X^{\sigma([i-1])})\right)\big| j \in { \sigma[i - 1]}\right]~. \end{align*} { Finally, we prove that \begin{equation} \label{diffeq} \mathbb{E}_{\sigma}\mathbb{E} \left[\left(f(X)-f(X^{(j)})\right) \left(f(X^{\sigma([i-1])\setminus\{j\}}) - f(X^{\sigma([i-1])})\right)\big| j \in \sigma[i - 1]\right] \ge 0~. \end{equation} Without loss of generality, we consider a particular choice of $\sigma$ and $j$ such that $\sigma(k) = k$, for $k \in [n]$ and $j = 1$. Therefore, \eqref{diffeq} will follow from \[ \mathbb{E} f(X)(f(X^{[i-1]\setminus\{1\}}) - f(X^{[i-1]})) \ge \mathbb{E} f(X^{(1)})(f(X^{[i-1]\setminus\{1\}}) - f(X^{[i-1]}))~. \] Since $X^{(1)} = (X_1^{\prime\prime}, X_2, \ldots, X_n)$, we have $\mathbb{E} f(X)f(X^{[i-1]}) = \mathbb{E} f(X^{(1)})f(X^{[i-1]})$. This implies that \eqref{diffeq} is valid whenever \[ \mathbb{E} f(X)f(X^{[i-1]\setminus\{1\}}) \ge \mathbb{E} f(X^{(1)})f(X^{[i-1]\setminus\{1\}})~. \] As in the proof of Lemma \ref{varianceformula}, this relation holds due to Jensen's inequality. These lines together imply that \[ B_i^{\prime} \le \frac{i - 1}{n}B_{i - 1} + \frac{n - i + 1}{n}B_i~, \] which, using Lemma \ref{lem:monotonicity}, proves the claim.} \end{proof} \subsection{Proof of Lemma \ref{lem:variance}} We start with a special case. 
Let us say that a Wigner matrix as in Theorem \ref{thm:main} is {\em standard} if for all $i$, $ \mathbb{E} X_{ii}^2= 2$. In this case, the variance of the entries of $X$ is equal to the variance of the entries of a random matrix $Y$ sampled from the Gaussian Orthogonal Ensemble (GOE). If $\mu$ is the largest eigenvalue of $Y$, it follows from \cite[Corollary 3]{MR2678393} that for some absolute constant $c >0$, $$ \mathrm{Var} ( \mu ) \leq c N^{-1/3}. $$ On the other hand, it follows from \cite[Theorem 2.4]{MR2871147} (see also \cite[Theorem 1.6]{MR3034787} for a statement which can be used directly) that, $$ N^{1/3} \left| \mathrm{Var} ( \mu ) - \mathrm{Var} (\lambda) \right| = o(1). $$ We obtain the first claim of the lemma for standard Wigner matrices. To conclude the proof of the lemma for Wigner matrices, it suffices to prove that for any Wigner matrix $X$, for some $\kappa \geq 1/3$, we have for all $N$ large enough, \begin{equation} \label{eq:ll0} \mathbb{E} |\lambda - \lambda_0 |^2 \leq N^{-\kappa}. \end{equation} where $\lambda_0$ is the largest eigenvalue of a matrix $X_0$ obtained from $X$ by setting to $0$ all diagonal entries. We will prove it for any $\kappa < 1/2$ (an improvement of the forthcoming Lemma \ref{lem:resres0} would give \eqref{eq:ll0} for any $\kappa <1$). The proof requires some care since the operator norm of $X - X_0$ may be much larger than $1$ and the rank of $X-X_0$ could be $N$. There is an easy inequality which is half of \eqref{eq:ll0}. Let $v_0$ be a unit eigenvector of $X_0$ with eigenvalue $\lambda_0$. We have $$ \lambda \geq \langle v_0 , X v_0 \rangle = \langle v_0 , X_0 v_0 \rangle+ \langle v_0 , (X -X_0)v_0 \rangle = \lambda_0 + \sum_{i=1}^N (v_0)_i^2 X_{ii}~, $$ where $(v_0)_i$ is the $i$-th coordinate of $v_0$. We observe that $v_0$ is independent of { $X_{ii}$ for all $i$ and $\mathbb{E} X_{ii} X_{jj} = 0$ for $i \neq j$. 
Denoting $(x)^2_+ = \max (x,0)^2$, by the Cauchy-Schwarz inequality,} we deduce that $$ \mathbb{E} (\lambda_0 - \lambda)_+ ^2 \leq \mathbb{E} \sum_{i=1}^N (v_0)_i^4 \mathbb{E} X_{ii}^2 \leq \mathbb{E} \| v_0 \|^2_{\infty} \sigma_0^2~. $$ We write $\mathbb{E} \| v_0 \|^2_{\infty} \leq (\log N)^ {2C}/ N + \mathbb{P}( \|v_0 \|_{\infty} \geq (\log N)^ {C}/ \sqrt N) $. From Lemma \ref{delocalization} applied with $c_0 = 2$, we deduce that for some constant $C>0$, $$ \mathbb{E} (\lambda_0 - \lambda)_+ ^2 \leq \frac{(\log N) ^C}{N}~. $$ This implies the easy half of \eqref{eq:ll0} for any $\kappa < 1$. The proof of the converse inequality is more involved. For ease of notation, we introduce, for $N \geq 3$, the quantity \begin{equation}\label{eq:defL} L = L_N = (\log N)^{\log \log N}~. \end{equation} We say that a sequence of events $(A_N)$ holds {\em with overwhelming probability} if for any $C>0$, there exists a constant $c>0$ such that $\mathbb{P}( A_N) \geq 1 - cN^{-C}$. We repeatedly use the fact that the intersection of polynomially many events of overwhelming probability is an event of overwhelming probability. We start with a small deviation lemma which can be found, for example, in \cite[Appendix B]{MR2981427}. \begin{lemma}\label{lem:dev} Assume that $(Z_i)$, $1 \leq i \leq N$, are independent centered complex variables such that for some $\delta >0$, for all $i$, $\mathbb{E} \exp \left( |Z_i|^{\delta}\right) \leq 1/ \delta$. Then, for any $(x_i) \in \mathbb{C}^N$, with overwhelming probability, $$ \left| \sum_{i=1}^N x_i Z_i \right| \leq L \| x\|_2~. $$ \end{lemma} For $z = E + {\mathbf{i}}\eta$ with $\eta >0$ and $E \in \mathbb{R}$, we introduce the resolvent matrices $$ R (z ) = ( X - z I) ^{-1} \hbox{ and } R_0 (z ) = ( X_0 - z I) ^{-1}~, $$ where $I$ denotes the identity matrix. The following lemma asserts that the resolvent can be used to estimate the largest eigenvalue of $X$ and $X_0$.
\begin{lemma}\label{lem:resl} Let $X$ be a Wigner matrix as in Theorem \ref{thm:main} and let $\lambda_1 \geq \ldots \geq \lambda_N$ be its eigenvalues. For any $1 \leq k \leq N$, there exists an integer $1 \leq i \leq N$ such that for all $E$ and $\eta >0$ $$ \frac 1 2 \max( \eta, |\lambda_k - E |) ^{-2} \leq N\eta ^{-1} \Im R( E + {\mathbf{i}}\eta)_{ii}~. $$ Moreover, let $1 \leq k \leq L$. There exists $c_0 >0$ such that with overwhelming probability, we have $|\lambda_k - 2 \sqrt N| < L^{c_0} N^{-1/6}$ and for all integers $ 1 \leq i \leq N$, and all $E$ such that $|E - 2 \sqrt N| < L^{c_0} N^{-1/6}$, $$ N\eta ^{-1} \Im R( E + {\mathbf{i}}\eta)_{ii} \leq L^{c_0} \min_{ 1 \leq j \leq N} (\lambda_j -E )^{-2}~. $$ \end{lemma} \begin{proof} From the spectral theorem, we have $$ \Im R_{ii} = \sum_{p = 1}^N \frac{ \eta (v_p)_i ^2}{ (\lambda_p - E )^2 + \eta ^2}~, $$ where $(v_1, \ldots, v_N)$ is an orthonormal basis of eigenvectors of $X$ and $(v_p)_i$ is the $i$-th coordinate of $v_p$. In particular, $$ N \eta^{-1} \Im R_{ii} \geq \frac{ N ({v_k})_i ^2}{ (\lambda_k- E )^2 + \eta ^2} \geq \frac{ N ({v_k})_i ^2}{ 2\max( \eta, |\lambda_k - E |) ^{2} }~. $$ From the pigeonhole principle, for some $i$, $({v_k})_i^2 \geq 1/N$ and the first statement of the lemma follows. Fix an integer $1 \leq k \leq L$. From \cite[Theorem 2.2]{MR2871147} and Lemma \ref{delocalization}, for some constants $c_0,C_0>0$, we have, with overwhelming probability, that the following event $\mathcal{E}$ holds: $|\lambda_k- 2 \sqrt N| \leq L^{c_0} N ^ {-1/6}$, for all integers $1 \leq p \leq N$, $$ \lambda_p \leq 2 \sqrt N - 2 C_0 p^{2/3} N^{-1/6} + L^{c_0} p^{-1/3}N^{-1/6}~, $$ and $ \| v_p \|^2_{\infty} \leq L / N$. We set $q = \lfloor C L^{3c_0} \rfloor$ for some $C$. Let $E $ be such that $|E - 2 \sqrt N| \leq L^{c_0} N ^ {-1/6}$.
On the event $\mathcal{E}$, if $C$ is large enough, we have, for all $p > q$, $E - \lambda_p \geq C_0 p^{2/3} N^{-1/6}$ and $$ \sum_{p = q+1}^{N} \frac{ N (v_p)_i^2}{ (\lambda_p - E )^2 + \eta ^2} \leq \sum_{p = q+1}^{N} \frac{L}{ (\lambda_p - E )^2} \leq \frac 1 {C^2_0} \sum_{p = q+1}^{N} \frac{L N^{1/3}} {p ^{4/3}} \leq c_1 L N^{1/3} q^ {-1/3}. $$ On the other hand, on the same event $\mathcal{E}$, we have $$ \sum_{p = 1}^{q} \frac{ N (v_p)_i ^2}{ (\lambda_p - E )^2 + \eta ^2} \leq \sum_{p = 1}^{q} \frac{ N (v_p)_i ^2}{ \min_{1 \leq j\leq N} (\lambda_j - E )^2} \leq \frac{L q }{\min_{1 \leq j\leq N} (\lambda_j - E)^{2}}~. $$ It remains to adjust the value of the constant $c_0$ to conclude the proof. \end{proof} The next step in the proof of \eqref{eq:ll0} is a comparison between the resolvent of $X$ and $X_0$ for $z$ close to $2\sqrt N$. The following result is a corollary of \cite[Theorem 2.1 (ii)]{MR2871147}. \begin{lemma}\label{loclaw} Let $X$ be a Wigner matrix as in Theorem \ref{thm:main}. There exists $c >0$ such that, with overwhelming probability, the following event holds: for all $z = E + {\mathbf{i}}\eta$ such that $|2 \sqrt N - E| \leq \sqrt N$ and $N^{-1/2} L ^c \leq \eta \leq N^{1/2}$, all $i \ne j$, we have $$ | R(z)_{ij} | \leq \Delta \quad \hbox{ and } \quad | R(z)_{ii} | \leq c N^{-1/2}, $$ where $\Delta = L^c (|E-2\sqrt N| + \eta)^{1/4} N^{-7/8} \eta^{-1/2} + L^c N^{-2} \eta^{-1} $. \end{lemma} \begin{proof} Let $Y = X/ \sqrt N $ and for $z \in \mathbb{C}$, $\Im (z) >0$, $G (z) = ( Y - z I)^{-1}$. We have $R(z) = N^{-1/2} G(z N^{-1/2})$. 
Theorem 2.1 (ii) in \cite{MR2871147} asserts that, with overwhelming probability, for all $w= a + {\mathbf{i}} b$ such that $|a| \leq 5$ and $N^{-1} L ^c \leq b \leq 1$, all $i \ne j$, we have $$ | G(w)_{ij} | \leq \delta \quad \hbox{ and } \quad | G(w)_{ii} - m (w) | \leq \delta~, $$ where $\delta = L^c \sqrt{\Im (m (w)) / (N b)} + L^c (Nb)^{-1} $ and $m(w)$ is the Cauchy-Stieltjes transform of the semi-circular law (for its precise definition see \cite{MR2871147}). Then \cite[Lemma 3.4]{MR2871147} implies that, for some $C >0$, for all $w = a + {\mathbf{i}}b$, $|a| \leq 5$ and $0 \leq b \leq 1$, we have $|m(w)| \leq C$ and $|\Im( m(w)) | \leq C \sqrt{ |a - 2| + b}$. We apply the above result for $a = E / \sqrt N$ and $b = \eta / \sqrt N$. We obtain the claimed statement for $R(z) = N^{-1/2} G(z N^{-1/2})$. \end{proof} We use Lemma \ref{loclaw} to estimate the difference between $R(z)$ and $R_0(z)$. \begin{lemma}\label{lem:resres0} Let $X$ be a Wigner matrix as in Theorem \ref{thm:main}, let $X_0$ be obtained from $X$ by setting to $0$ all diagonal entries, and let $c_0$ be as in Lemma \ref{lem:resl}. With overwhelming probability, the following event holds: for all $z = E + {\mathbf{i}}\eta$ such that $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$ and $ \eta = N^{-1/4}$, all $i$, $$ | R_0 (z)_{ii} - R(z)_{ii} | \leq \frac{1}{4 N \eta}~. $$ \end{lemma} \begin{proof} The resolvent identity states that if $A-z I$ and $B-z I $ are invertible matrices then \begin{equation}\label{eq:resid} (A - zI)^{-1} = (B-zI)^{-1} + (B - zI)^{-1}(B-A)(A - zI)^{-1}~. \end{equation} Applying this identity twice, we obtain $$ R = R_0 + R_0 (X_0-X)R_0 + R_0 (X_0-X)R_0 (X_0-X)R $$ (where we omit the parameter $z$ for ease of notation). For any integer $1 \leq i \leq N$, we thus have $$ R_{ii} - (R_0)_{ii} = -\sum_{j} (R_0)_{ij} X_{jj} (R_0)_{ji} + \sum_{j,k} (R_0)_{ij} X_{jj} (R_0)_{jk} X_{kk} R_{ki} = - I(z) + J(z)~. $$ Note that $X_{jj}$ is independent of $R_0$.
By Lemma \ref{lem:dev} and Lemma \ref{loclaw} we find that, with overwhelming probability, $$ |I(z)| \leq L \Delta^2 \sqrt N + c L N^{-1}~. $$ For a given $z = E + {\mathbf{i}} \eta$ such that $|E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}$ and $\eta = N^{-1/4}$, it is straightforward to check that, for some $c>0$, {$\Delta \le L^c N^{-19/24}$} and $|I(z)|\leq L^c N^{-13/12} = o(1/(N\eta))$. Similarly, we have $$ |J(z)| \leq \sum_{k} |X_{kk}| |R_{ki}| |G_k| \quad \hbox{ with } \; G_k = \sum_{j} (R_0)_{ij} X_{jj} (R_0)_{jk}~. $$ For a given $z$, by Lemma \ref{lem:dev} and Lemma \ref{loclaw}, we have with overwhelming probability, for all $k$, $|G_k| \leq L^c N^{-13/12}$ and $| J(z) | \leq L (\Delta N + c N^{-1/2} ) L^c N^{-13/12} =o(1/(N\eta))$. For a given $z$, let $\mathcal E_z$ be the event that $\max_{1 \leq i \leq N} |R(z)_{ii} - R_0(z)_{ii}| \leq (8 N \eta)^ {-1}$ and $\mathcal E'_z$ the event that $\max_{1 \leq i \leq N}|R(z)_{ii} - R_0(z)_{ii}| \leq (4 N \eta)^ {-1}$. We have proved so far that for a given $z = E + {\mathbf{i}}\eta$ such that $|E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}$ and $\eta = N^{-1/4}$, with overwhelming probability, $\mathcal E_z$ holds. By a net argument, this implies that, with overwhelming probability, the events $\mathcal E_z'$ hold jointly for all $z = E + {\mathbf{i}} \eta$ with $|E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}$ and $\eta = N^{-1/4}$. Indeed, from the resolvent identity \eqref{eq:resid}, we have $|R_{ij} (E + {\mathbf{i}} \eta) - R_{ij} ( E' + {\mathbf{i}} \eta) | \leq \eta^{-2} |E - E'|$, and the same bound holds for $R_0$. It follows that if $|E - E'| \leq \eta^2 (16 N \eta)^ {-1}$ and $\mathcal E_{E' + {\mathbf{i}}\eta}$ holds, then $\max_{1 \leq i \leq N} |R(z)_{ii} - R_0(z)_{ii}| \leq (8 N \eta)^ {-1} + 2 \eta^{-2} |E - E'| \leq (4 N \eta)^ {-1}$, that is, $\mathcal E'_z$ holds. Let $\mathcal N$ be a finite subset of the interval $K = \{ E : |E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}\}$ such that for all $E \in K$, $\min_{E' \in \mathcal N} |E - E'| \leq \eta^2 (16 N \eta)^ {-1}$. Since $\eta^2 (16 N \eta)^ {-1} = \eta / (16 N) \geq N^{-2}$ for all $N$ large enough, we may assume that $\mathcal N$ has at most $N^2$ elements.
From what precedes we have the inclusion, with $\eta = N^{-1/4}$, $$ \bigcap_{z = E + {\mathbf{i}} \eta : E \in \mathcal N} \mathcal E_z \; \subseteq \bigcap_{z = E + {\mathbf{i}}\eta : E \in K} \mathcal E'_z~. $$ From the union bound, the left-hand side, and hence the right-hand side, holds with overwhelming probability. It concludes the proof of the lemma. \end{proof} Now we have all ingredients necessary to conclude the proof of \eqref{eq:ll0}. Let $\eta = N^{-1/4}$. We prove that for some $c >0$, with overwhelming probability, $$ \lambda \leq \lambda_0 + L^c \eta~. $$ By Lemma \ref{lem:resl}, with overwhelming probability, $|\lambda - 2 \sqrt N | \leq L^{c_0} N^{-1/6}$ and for some $j$, $$ N \eta^{-1} \Im R (\lambda + {\mathbf{i}} \eta)_{jj} \geq \frac 1 2 \eta^{-2}~, $$ and, if $\lambda > \lambda_0$, $$ N \eta^{-1} \Im R_0 (\lambda + {\mathbf{i}} \eta)_{jj} \leq L^{c_0} (\lambda - \lambda_0)^{-2}~. $$ By Lemma \ref{lem:resres0}, we deduce that with overwhelming probability, if $\lambda > \lambda_0$, $$ \frac 1 4 \eta^{-2} \leq L^{c_0} (\lambda - \lambda_0)^{-2}~. $$ Hence, $\lambda \leq \lambda_0 + 2 L^{c_0/2} \eta$, concluding the proof of \eqref{eq:ll0}. \subsection{Proof of Lemma \ref{fliponeelem}} Let $\lambda = \lambda_1 \geq \cdots \geq \lambda_N$ be the eigenvalues of $X$. For any $(i,j)$, let $\lambda^{(ij)}$ be the largest eigenvalue of ${X^{(ij)}}$. We start by proving that $\lambda$ and $\lambda^{(ij)}$ are close compared to their fluctuations.
We have $$ \lambda \geq \langle u^{(ij)} , Xu^{(ij)} \rangle = \lambda^{(ij)} + \langle u^{(ij)} , (X - {X^{(ij)}} ) u^{(ij)} \rangle \geq \lambda^{(ij)} - 2 (|X_{ij}| + |X'_{ij}|) \| u^{(ij)} \|^2_{\infty}~, $$ { where $u^{(ij)}$ is as in Lemma \ref{fliponeelem}.} Since $X$ and ${X^{(ij)}}$ have the same distribution, we deduce from Lemma \ref{delocalization} that, { for any $c_0 >0$, there exists $C > 0$ such that with probability at least $1 - CN^{2-c_0}$, $\| v \|_{\infty} \leq (\log N)^C / \sqrt N$ and $\max_{ij} \| u^{(ij)} \|_{\infty} \leq (\log N)^C / \sqrt N$. For all $N$ large enough, we have $(\log N)^C \leq L$, where $L$ is defined in (\ref{eq:defL}). Hence for any $c_0$, for some new constant $C>0$, with probability at least $1 - CN^{2-c_0}$, $\| v \|_{\infty} \leq L/ \sqrt N$ and $\max_{ij} \| u^{(ij)} \|_{\infty} \leq L/ \sqrt N$. Since $c_0$ can be taken arbitrarily large, we deduce that} with overwhelming probability, $\| v \|_{\infty} \leq L / \sqrt N$, $\max_{ij} \| u^{(ij)} \|_{\infty} \leq L/ \sqrt N$ and $\max_{ij}( |X_{ij}| + |X'_{ij}|) \leq L/2$. On this event, we get $$ \lambda \geq \lambda^{(ij)} - \frac{L^{3}} {N}~. $$ Reversing the roles of $X$ and ${X^{(ij)}}$ and using the union bound, we deduce that, with overwhelming probability, $$ \max_{ij} |\lambda - \lambda^{(ij)} | \leq \frac{L^{3}} {N}~. $$ It follows from \cite[Theorem 1.14]{MR2669449} that, for any $\rho >0$, there exists $\kappa >0$ such that, for all $N$ large enough, $$ \mathbb{P} ( \lambda_2 < \lambda - N^{-1/2-\rho} ) \geq 1 - N^{-\kappa}~. $$ Let $(v_1, \ldots, v_N)$ be an orthonormal basis of eigenvectors of $X$ associated with the eigenvalues $(\lambda_1, \ldots, \lambda_N)$ with $v_1 = v$. We set $\theta = 2/5 - 3 \rho / 5$ and $q = \lfloor N^{\theta} \rfloor$.
For some constant $c>0$ to be chosen and $\rho \in (0,1/16)$, we introduce the event $\mathcal{E}_\rho$ on which the following hold: \begin{itemize} \item $\lambda_2 < \lambda - N^{-1/2 - \rho}$ and $ \lambda_q \leq \lambda - c q^{2/3} N^{-1/6}$~; \item $\max_{1 \leq p \leq q} \| v_p \|_{\infty} \leq L / \sqrt N$ and $\max_{ij} \| u^{(ij)} \|_{\infty} \leq L / \sqrt N$~; \item $\max_{ij}( |X_{ij}| + |X'_{ij}|) \leq L/2$~. \end{itemize} From what precedes, Lemma \ref{delocalization} and \cite[Theorem 2.2]{MR2871147}, for some $c$ small enough, for any $\rho >0$ there exists $\kappa >0$ such that for all $N$ large enough, $\mathbb{P}(\mathcal{E}_\rho) \geq 1 - N^{-\kappa}$. Note also that we have checked that if $\mathcal{E}_\rho$ holds then $\max_{ij} |\lambda - \lambda^{(ij)} | \leq L^{3} / N$. On the event $\mathcal{E}_\rho$, we now prove that $v$ and $u^{(ij)}$ are close in $\ell^{\infty}$-norm. For a fixed $(i,j)$, we write, $u^{(ij)} = \alpha v + \beta x + \gamma y$, where $\alpha^2 + \beta ^2 + \gamma^2 = 1$ with $\beta,\gamma$ non-negative real numbers, $x$ is a unit vector in the vector space spanned by $(v_2, \ldots, v_q)$, and $y$ is a unit vector in the vector space spanned by $(v_{q+1}, \ldots, v_N)$. Set \[ w = {(X^{(ij)} - X)} u^{(ij)} + ( \lambda - \lambda^{(ij)} ) u^{(ij)}. \] We have $$ \lambda u^{(ij)} = \alpha \lambda v + \beta X x + \gamma X y + w~. $$ Taking the scalar product with $y$, we find $$ \lambda \gamma = \lambda \langle y , u^{(ij)} \rangle = \gamma \langle y , X y \rangle + \langle y, w\rangle \leq (\lambda - c q^{2/3} N^{-1/6})\gamma + \langle y, w\rangle~. $$ Hence, $$ \gamma\leq c^{-1} q^{-2/3} N^{1/6} \| w\|_2 \leq c^{-1} q^{-2/3} N^{1/6} \left( \frac{L^{2} }{\sqrt N} + \frac{L^{4}}{N^{3/2}} \right) \leq 2 c^{-1} L^{4} N^{-2\theta / 3 - 1/3}~.
$$ Similarly, taking the scalar product with $x$, we find $$ \beta \leq N^{1/2 + \rho} |\langle x, w \rangle| \leq N^{1/2 + \rho} \left( \left| \langle x, (X - {X^{(ij)}}) u^{(ij)} \rangle\right| + \frac{L^{ 3}}{N} \right)~. $$ Since $|\langle a , b \rangle| \leq \| a\|_{\infty} \|b \|_1 \leq m \|a \|_{\infty} \|b\|_{\infty} $ where $m$ is the number of non-zero entries of $b$, we have $\left| \langle x, (X - {X^{(ij)}}) u^{(ij)} \rangle\right| \leq \| x \|_{\infty} L^{2} / \sqrt N$. By construction, $x = \sum_{p=2}^q \gamma_p v_p$ where $\sum_p |\gamma_p|^2 = 1$. If $\mathcal{E}_\rho$ holds, { using the Cauchy-Schwarz inequality and $ \| v_p \|_{2} = 1$,} we deduce that $$ \| x \|_{\infty} \leq \sum_{p=2}^q |\gamma_p| \| v_p \|_{\infty} \leq \frac{L}{\sqrt N} \sum_{p=2}^q |\gamma_p| \leq \frac{L \sqrt q}{\sqrt N} \leq L N^{\theta / 2 - 1/2}~. $$ So finally, $$ \beta \leq 2 L^{3} N^{-1/2 + \theta /2 + \rho}~. $$ We deduce that $|\alpha| = \sqrt{1 - \beta^2 - \gamma^2} \geq 1 - \beta - \gamma$ is positive for all $N$ large enough. We set $s = \alpha / |\alpha|$. We find, since $\|y\|_{\infty} \leq \| y\|_2 \leq 1$, $$ \| s v - u^{(ij)}\|_{\infty} \leq ( 1 - |\alpha| ) \| v \|_{\infty} + \beta \|x \|_{\infty} + \gamma \leq 2\beta \|x \|_{\infty} +2\gamma~. $$ For our choice of $\theta= 2/5 - 3 \rho / 5$, this last expression is $O( L^{4} N^{-3/5 + 8 \rho /5})$. {Indeed, we have \begin{align*} &\gamma \leq 2c^{-1} L^4 N^{-4/15+2\rho /5-1/3}= 2c^{-1} L^4 N^{-3/5+2\rho /5}, \\ &\|x \|_{\infty} \leq L N^{1/5-3\rho /10-1/2} = LN^{-3/10-3\rho /10},\\ &\beta \leq 2 L^3 N^{-1/2+1/5-3\rho/10+\rho} = 2L^3 N^{-3/10+7\rho/10}. \end{align*}} Since $\rho<1/16$, we have $3/5 - 8 \rho /5 > 1/2$. Hence, finally, if we set ${\kappa^{\prime}} = 1/10 - 8 \rho/ 5 >0$, we get that $\| s v - u^{(ij)}\|_{\infty} = O ( L^4 N^{-1/2{ - \kappa^{\prime}}})$. This concludes the proof of the lemma.
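{Let us record, as a sanity check which is not needed in the sequel, how the exponent $\theta = 2/5 - 3\rho/5$ arises. Up to subpolynomial factors $L^{O(1)}$, the two error contributions above behave as $$ \gamma \lesssim N^{-2\theta/3 - 1/3} \quad \hbox{ and } \quad \beta \| x \|_{\infty} \lesssim N^{-1/2 + \theta/2 + \rho} \cdot N^{\theta/2 - 1/2} = N^{\theta - 1 + \rho}~, $$ and equating the two exponents, $$ -\frac{2\theta}{3} - \frac{1}{3} = \theta - 1 + \rho \quad \Longleftrightarrow \quad \theta = \frac{2}{5} - \frac{3\rho}{5}~, $$ with common value $\theta - 1 + \rho = -3/5 + 2\rho/5$, consistent with the (weaker) bound $O(L^4 N^{-3/5+8\rho/5})$ used above.}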
\section{Proof of Theorem \ref{thm:main2}} \label{sec:thm2} { The proof of Theorem \ref{thm:main2} relies on the rigorous justification of the heuristic argument sketched below the statement of Theorem \ref{thm:main2}; see the forthcoming Lemma \ref{lem:llk}. This is performed by a careful perturbation argument on the resolvent in Lemma \ref{lem:resresk}. Indeed, the resolvent has nice analytical properties and it is intimately connected to the spectrum, as illustrated in Lemma \ref{lem:reslk}}. Recall that $S_k=\{(i_1,j_1),\ldots,(i_k,j_k)\}$ is the set of $k$ pairs chosen uniformly at random (without replacement) from the set of all ordered pairs $(i,j)$ of indices with $1\le i\le j\le N$, which is used in the definition of $X^{[k]}$. We denote by $\lambda$ and $\lambda^{[k]}$ the largest eigenvalues of $X$ and $X^{[k]}$. Recall the definition of $L = L_N$ in \eqref{eq:defL} and the notion of overwhelming probability immediately below \eqref{eq:defL}. The main technical lemma is the following: \begin{lemma}\label{lem:llk} Let $X$ be a Wigner matrix as in Theorem \ref{thm:main2} and let $\lambda = \lambda_1 \geq \cdots \geq \lambda_N$ be its eigenvalues. For any $c>0$ there exists a constant $c_2 >0$ such that for all $\varepsilon >0$, for all $N$ large enough, with probability at least $1- \varepsilon$, $$\max_{k \leq N^{5/3} L^{-c_2}}\max_{p \in \{1,2\}} | \lambda_p - \lambda_p^{[k]}| \leq N^{-1/6} L^{-c}~.$$ \end{lemma} We postpone the proof of Lemma \ref{lem:llk} to the next subsection. We denote by $R (z) = (X - z I)^{-1}$ and $R^{[k]} (z) = (X^{[k]}- z I)^{-1}$ the resolvents of $X$ and $X^{[k]}$. The proof of Lemma \ref{lem:llk} is based on the following comparison lemma for the resolvents. \begin{lemma}\label{lem:resresk} Let $X$ be a Wigner matrix as in Theorem \ref{thm:main}. Let $c_0 >0$ be as in Lemma \ref{lem:resl} and let $c_1 >0$.
There exists $c_2>0$ such that, with overwhelming probability, the following event holds: for all $k \leq N^{5/3} L^{-c_2}$, for all $z = E + {\mathbf{i}} \eta$ such that $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$ and $ \eta = N^{-1/6} L^{-c_1}$, $$ \max_{1 \leq i,j \leq N} N \eta | R^{[k]} (z)_{ij} - R(z)_{ij} | \leq \frac{1}{L^2}~. $$ \end{lemma} We postpone the proof of Lemma {\ref{lem:resresk}} to the next subsection. Our next lemma connects the resolvent with eigenvectors. \begin{lemma}\label{lem:reslk} Let $X$ be a Wigner matrix as in Theorem \ref{thm:main} and let $\varepsilon >0$. There exist $c_1,c_2$ such that the following event holds for all $N$ large enough with probability at least $1 - \varepsilon$: for all $k \leq N^{5/3} L^{-c_2}$, we have, with $z = \lambda + {\mathbf{i}}\eta$, $ \eta = N^{-1/6} L^{-c_1}$, $$ \max_{1 \leq i,j \leq N} | N \eta \Im R(z)_{ij} - N v_i v_{j} | \leq \frac 1 {L^2} \quad \hbox{ and } \quad \max_{1 \leq i,j \leq N} | N\eta \Im R^{[k]} (z )_{ij} - N v^{[k]}_i v^{[k]}_{j} | \leq \frac 1 {L^2}~. $$ \end{lemma} \begin{proof} Let $\lambda = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_N$ be the eigenvalues of $X$. Let $(v_1, \ldots, v_N)$ be an orthonormal basis of eigenvectors of $X$. Recall that $$ N \eta \Im R(E + {\mathbf{i}} \eta )_{ij} = \sum_{p=1}^N \eta^2 \frac{ N (v_p)_i (v_p)_j }{ (\lambda_p - E )^2 + \eta ^2}~. $$ As in the proof of Lemma \ref{lem:resl}, from \cite[Theorem 2.2]{MR2871147} and Lemma \ref{delocalization}, for some constants $c_0,C>0$, we have with overwhelming probability that the following event $\mathcal{E}$ holds: $|\lambda- 2 \sqrt N| \leq L^{c_0} N ^ {-1/6}$; for all integers $1 \leq p \leq N$, $ \| v_p \|^2_{\infty} \leq L / N$; and, with $q = \lfloor L^{c_0} \rfloor$, for all $E$ such that $|E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}$ and all $1 \leq i, j \leq N$, we have $$ \sum_{p = q+1}^{N} \frac{ N |(v_p)_i (v_p)_j| }{ (\lambda_p - E )^2 + \eta ^2} \leq C L N^{1/3} q^ {-1/3}~.
$$ On the other hand, let $\mathcal{E}_\delta$ be the event that $\lambda_2 \leq \lambda - \delta N^{-1/6}$. Fix $\varepsilon >0$. From \cite[Theorem 2.7]{MR3253704} and, e.g., \cite[Chapter 3]{MR2760897}, there exists $\delta >0$ such that $$ \mathbb{P} (\mathcal{E}_\delta ) \geq 1 - \varepsilon~. $$ On the event $\mathcal{E}\cap \mathcal{E}_\delta$, if $| \lambda - E| \leq (\delta/2) N^{-1/6}$, we have $$ \sum_{p = 2}^{q} \frac{ N (v_p)_i (v_p)_j }{ (\lambda_p - E )^2 + \eta ^2} \leq \frac 4 {\delta^2}L q N^{1/3}~. $$ Finally, if $|\lambda - E | \leq \eta/L^2$, on the event $\mathcal{E}$, we find easily, where $v_i$ is the $i$-th coordinate of $v$, $$ \left| \eta^2 \frac{ N v_i v_j }{ (\lambda - E )^2 + \eta ^2} - N v_i v_j \right| \leq \frac 1 {L^3}~. $$ For some $c_1 >0$, we thus find that, if $\eta = N^ {-1/6} L^{-c_1}$, then on the event $\mathcal{E}\cap \mathcal{E}_\delta$, for all $E$ such that $|\lambda - E | \leq \eta/L^2$ we have $$ \max_{i ,j}| N \eta \Im R(E + {\mathbf{i}} \eta )_{ij} - N v_i v_{j} | \leq \frac 1 {L^2}~. $$ We apply this last estimate to $R$ with $E = \lambda$. For each $k$, let $\mathcal{E}^ {[k]}$ be the event corresponding to $\mathcal{E}$ for $X^{[k]}$ instead of $X$. We apply the above estimate on the event $\mathcal{E}'_k = \mathcal{E}^{[k]} \cap \mathcal{E}_\delta \cap \{ \max_{p = 1 ,2} |\lambda_p - \lambda_p^{[k]}| \leq \eta / L^2\}$ to $R^{[k]}$ with $E = \lambda$. By Lemma \ref{lem:llk} and the union bound, $\cap_{k \leq N^{5/3} L^{-c_2}} \mathcal{E}'_k$ has probability at least $1 - 2 \varepsilon$. It concludes the proof. \end{proof} We may now conclude the proof of Theorem \ref{thm:main2}. Let $c_1,c_2$ be as in Lemma \ref{lem:reslk}, $k \leq N^{5/3} L^{- c_2}$ and $\eta = N^{-1/6} L^{-c_1}$. Up to increasing the value of $c_2$, we may also assume that the conclusion of Lemma \ref{lem:resresk} holds.
By Lemma \ref{delocalization}, Lemma \ref{lem:resresk} and Lemma \ref{lem:reslk}, for any $\varepsilon >0$, for all $N$ large enough, with probability at least $1 - \varepsilon$, it holds that for some $c >0$: $ \sqrt N \| v\|_{\infty} \leq (\log N) ^c$, $\sqrt N \| v ^{[k]} \|_\infty \leq (\log N)^c$ and $$ \max_{i ,j}| N v_i v_{j} - N v^{[k]}_i v^{[k]}_{j} | \leq \frac 3 {L^2}. $$ Applied to $i = j$, we get that for some $s_i \in \{-1,1\}$, $$ \sqrt N | s_i v_i - v_i^{[k]} | \leq \frac{\sqrt 3}{ L}. $$ In particular, we find $$ (1 - s_i s_j) N | v_i v_j | \leq | N v_i v_{j} - N v^{[k]}_i v^{[k]}_{j} | + \frac {2 \sqrt 3} {L} (\log N) ^{c} \leq \frac 4 {L} (\log N) ^{c}. $$ Let $J = \{ 1 \leq i \leq N : \sqrt N |v_i| \geq L^{-1/3} \}$. It follows from the above inequality that for $i,j \in J$, $s_i = s_j$. Let $s$ be this common value. We have for all $i \in J$, $$ \sqrt N | s v_i - v_i^{[k]} | \leq \frac{\sqrt 3}{ L}. $$ Moreover, for all $i \notin J$, by definition, $$ \sqrt N | s v_i - v_i^{[k]} | \leq \sqrt N |v_i| + \sqrt N |v^{[k]}_i| \leq L^{-1/3} + L^{-1/3} + \sqrt 3 L ^{-1}. $$ It concludes the proof of Theorem \ref{thm:main2}. \subsection{Proof of Lemma \ref{lem:resresk}} { The proof of Lemma \ref{lem:resresk} is based on a technical martingale argument. Thanks to the resolvent identity \eqref{eq:resid}, we will write $R^{[k]}_{ij}(z) - R_{ij}(z)$ as a sum of martingale differences up to small error terms; this is performed in Equation \eqref{eq:RkRij}. These martingales will allow us to use concentration inequalities. Each term of the martingale differences will be estimated thanks to the upper bound on resolvent entries given in Lemma \ref{loclaw}.} { We apply the resolvent identity many times and, for technical convenience, it will be easier to have a uniform bound on our random variables. }We thus start by truncating our random variables $(X_{ij})$.
Set $\tilde X_{ij} = X_{ij} \mathbbm{1} ( |X_{ij} | \leq (\log N)^{c})$ and $\tilde X'_{ij} = X'_{ij} \mathbbm{1} ( |X'_{ij} | \leq (\log N)^{c})$ with $c= 2/ \delta$. The matrix $\tilde X$ has independent entries above the diagonal. Moreover, since $\mathbb{E} \exp ( |X_{ij}|^\delta) \leq 1/ \delta$, with overwhelming probability, $X = \tilde X$ and $X' = \tilde X'$. It is also straightforward to check that $\mathbb{E} |X_{ij}|^2 \mathbbm{1} ( |X_{ij} | \geq (\log N) ^{c} ) = O( \exp ( - (\log N)^{2}/2))$. This implies that $|\mathbb{E} \tilde X_{ij}| = O ( \exp ( - ( \log N)^2 /2 ))$ and $\mathrm{Var} (\tilde X_{ij}) = 1 + O ( \exp ( - (\log N)^2 /2))$ for $i \ne j$. We define the matrix $\bar X$ by, for $i \ne j$, $$\bar X_{ij} =( \tilde X_{ij} - \mathbb{E} \tilde X_{ij} ) / \sqrt{\mathrm{Var} (\tilde X_{ij} )}\quad \hbox{ and } \quad \bar X_{ii} = \tilde X_{ii} - \mathbb{E} \tilde X_{ii}.$$ The matrix $\bar X$ is a Wigner matrix as in Theorem \ref{thm:main2} with entries in $[-L/4,L/4]$. Moreover, from Gershgorin's circle theorem \cite[Theorem 6.6.1]{HJ85}, with overwhelming probability, the operator norm of $X - \bar X$ satisfies $\| X - \bar X \| = O ( N \exp ( - ( \log N)^2/2 ))$. Observe that from the spectral theorem, for any Hermitian matrix $A$, $\| (A-z)^{-1}\| \leq |\Im(z)|^ {-1}$. In particular, from the resolvent identity \eqref{eq:resid}, we get $\| (X-z)^{-1} - (\bar X - z)^{-1}\| = \| (X-z)^{-1} (\bar X - X) (\bar X - z)^{-1}\|\leq \Im (z)^{-2} \| X - \bar X \| = O ( N^3 \exp ( - ( \log N)^2/2 ))$ if $\Im (z) \geq N^{-1}$. The same truncation procedure applies for $X^{[k]}$. In the proof of Lemma \ref{lem:resresk}, we may thus assume without loss of generality that the random variables $X_{ij}$ have support in $[-L/4,L/4]$. { It will also be convenient to assume that the random subset $S_k$ does not contain too many points on a given row or column.
To that end,} for $0 \leq t \leq k$, let ${\cal F}_t$ be the $\sigma$-algebra generated by the random variable $X$, $S_{k}$ and $(X'_{i_s,j_s})_{1 \leq s \leq t}$. For $1 \leq i,j \leq N$, we set $$T_{ij} = \{ t : \{ i_t ,j_t \} \cap \{i, j \} \ne \emptyset \}~.$$ Note that $T_{ij}$ is ${\cal F}_0$-measurable. We have $$ \mathbb{E} |T_{ij}| = \frac{2 k}{N+1}~. $$ Besides, from \cite[Proposition 1.1]{MR2288072}, for any $u >0$, $$\mathbb{P} \left( |T_{ij}| \geq \mathbb{E} |T_{ij}| + u \right) \leq \exp \left( - \frac{u^2 }{ 4\mathbb{E} |T_{ij}| +2 u } \right)~. $$ If $k \leq N^{5/3} L^{-c_2}$, it follows that with overwhelming probability, the following event, say $\mathcal T$, holds: $\max_{ij} |T_{ij}| \leq 4k' /N$ where for ease of notation we have set $$ k' = \min ( k , N (\log N)^2)~. $$ Now, let $c$ be as in Lemma \ref{loclaw} and, for $0 \leq t \leq k$, we denote by $\mathcal{E}_t \in {\cal F}_t$ the event that $\mathcal T$ holds and that the conclusion of Lemma \ref{loclaw} holds for $X^{[t]}$ and $R^{[t]}$ (with the convention $X^{[0]} = X$). If $\mathcal{E}_t$ holds, then for all $z = E + {\mathbf{i}} \eta$ with $|2 \sqrt N - E| \leq L^{c _0} N^{-1/6}$ and $ \eta = N^{-1/6} L^{-c_1}$, we have, $$ \max_{i \ne j} |R^{[t]}_{ij} (z) | \leq \delta = L ^ {c'} N^{-5/6} \quad \hbox{ and } \quad ~ \max_{i} |R^{[t]}_{ii} (z) | \leq \delta_0 = c N^{-1/2}~, $$ where $c' =1 + c + \max ( c_0 / 2, c_0/4 + c_1/2) $. { After these preliminaries, we may now write the resolvent expansion. Our goal is to write $R^{[k]}_{ij}(z) - R_{ij}(z)$ as a sum of martingale differences up to error terms. The outcome will be Equation \eqref{eq:RkRij} below.} We define $X_0^{[t]}$ as the symmetric matrix obtained from $X^{[t]}$ by setting to $0$ the entries $(i_t,j_t)$ and $(j_t,i_t)$. By construction $X^{[t+1]}_0$ is ${\cal F}_t$-measurable. We denote by $R_0^{[t]}$ the resolvent of $X_0^{[t]}$. 
The resolvent identity \eqref{eq:resid} implies that $$ R_0^{[t+1]} = R^{[t]} + R^{[t]} ( X^{[t]}-X_0^{[t+1]})R^{[t]} + R^{[t]} (X^{[t]}-X_0^{[t+1]})R^{[t]} (X^{[t]}-X_0^{[t+1]}) R^{[t+1]}_0 $$ (we omit the parameter $z$ for ease of notation). Now, we set for $i \ne j$, $E^s_{ij} = e_i e_j ^* + e_j e_i^*$ and {$E_{ii}^s = e_i e_i^*$}, where $e_i$ denotes the canonical vector of $\mathbb R^N$ with all entries equal to $0$ except the $i$-th entry equal to $1$. We have \begin{equation}\label{eq:XtXt0} X^{[t]}-X_0^{[t+1]} = X_{i_{t+1} j_{t+1}} E^s_{i_{t+1} j_{t+1}} \quad \hbox{ and } \quad X^{[t+1]}-X_0^{[t+1]} = X'_{i_{t+1} j_{t+1}} E^s_{i_{t+1} j_{t+1}}~. \end{equation} We use that $|X_{ij}| \leq L/4$ and $|(R^{[t+1]}_0)_{ij}| \leq \eta^{-1}$. If $\mathcal{E}_t$ holds, we deduce that for all $z = E + {\mathbf{i}} \eta$ with $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$ and $ \eta = N^{-1/6} L^{-c_1}$, we have \begin{equation}\label{eq:R0Rt} \max_{i\ne j} |(R^{[t+1]}_0)_{ij} | \leq \sqrt 2 \delta \quad \hbox{ and } \quad \max_{i} |(R^{[t+1]}_0)_{ii} | \leq \sqrt 2\delta_0~. \end{equation} Similarly, the resolvent identity \eqref{eq:resid} with $R^{[t+1]}$ and $R^{[t]}$ implies that, if $\mathcal{E}_t$ holds, for all $z = E + {\mathbf{i}} \eta$ with $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$ and $ \eta = N^{-1/6} L^{-c_1}$, we have \begin{equation}\label{eq:R1Rt} \max_{i \ne j} |R^{[t+1]}_{ij} | \leq \sqrt 2 \delta \quad \hbox{ and } \quad \max_{i} |R^{[t+1]}_{ii} | \leq \sqrt 2 \delta_0~. \end{equation} Finally, the resolvent identity with $R^{[t+1]}$ and $R_0^{[t+1]}$ gives \begin{align*} & R^{[t+1]} \; = \; R_0^{[t+1]} + R_0^{[t+1]} ( X_0^{[t+1]}-X^{[t+1]})R^{[t+1]} \\ &\quad = \; \sum_{\ell = 0}^2 \left(R_0^{[t+1]} ( X_0^{[t+1]}-X^{[t+1]}) \right)^{\ell} R_0^{[t+1]} + \left(R_0^{[t+1]} ( X_0^{[t+1]}-X^{[t+1]}) \right)^{3} R^{[t+1]} ~. \end{align*} Note that $\mathbb{E} [X'_{i_{t+1} j_{t+1}} | {\cal F}_t] = 0$.
Using $|X_{i_{t+1} j_{t+1}} |\leq L/4$ and \eqref{eq:XtXt0}-\eqref{eq:R0Rt}-\eqref{eq:R1Rt}, we deduce that \begin{equation}\label{eq:ERtR0} \left| \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] - (R^{[t+1]}_0)_{ij} -s^{[t+1]}_{ij} {X'}^2_{i_{t+1}j_{t+1}} \right| \leq a_t \quad \hbox{ and } \quad |R^{[t+1]}_{ij} - (R^{[t+1]}_0)_{ij} | \leq b_t~, \end{equation} where $s^{[t]}_{ij} = ((R_0^{[t]} E^s_{i_{t} j_{t}})^2 R_0^{[t]})_{ij}$ and, if $\mathcal{E}_t$ holds, \begin{align*} a_t = L^3 \delta^2 \delta_0 ^2+ L^3 \delta_0^4 \mathbbm{1}_{(t \in T_{ij})} \quad \hbox{ and } \quad b_t = L \delta^2 + L \delta \delta_0 \mathbbm{1}_{(t \in T_{ij})} + L \delta_0^2 \mathbbm{1}_{ ( \{i_t,j_t\} = \{ i,j \})}~. \end{align*} We apply the resolvent identity one last time, with $R_0^{[t+1]}$ and $R^{[t]}$: $$ R^{[t]} = \sum_{\ell = 0}^2 \left(R_0^{[t+1]} ( X_0^{[t+1]}-X^{[t]}) \right)^{\ell} R_0^{[t+1]} + \left(R_0^{[t+1]} ( X_0^{[t+1]}-X^{[t]}) \right)^{3}R^{[t]} ~. $$ If $\mathcal{E}_t$ holds, we arrive at $$ \left| \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] - (R^{[t]})_{ij} - r^{[t+1]}_{ij} X_{i_{t+1} j_{t+1}} + s^{[t+1]}_{ij} (X^2_{i_{t+1} j_{t+1}} - {X'}^2_{i_{t+1}j_{t+1} }) \right| \leq 2a_t~, $$ where $r^{[t]}_{ij} = (R_0^{[t]} E^s_{i_{t} j_{t}} R_0^{[t]})_{ij}$. We have thus found that \begin{equation}\label{eq:RkRij} R^{[k]}_{ij} - R_{ij} = \sum_{t=0}^ {k-1} \left(R^{[t+1]}_{ij} - (R^{[t]})_{ij} \right) = \sum_{t=0}^ {k-1} \left(R^{[t+1]}_{ij} - \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] \right) + r_{ij} + s_{ij} - s'_{ij} +a_{ij} ~, \end{equation} where we have set, with $Y_{ij} = X_{ij}^2 - \mathbb{E} X_{ij}^2$, $Y'_{ij} = {X'_{ij}}^2 - \mathbb{E} X_{ij}^2$, $$ r_{ij} = \sum_{t=1}^{k} r^{[t]}_{ij} X_{i_{t} j_{t}} ~,\quad s_{ij} = \sum_{t=1}^{k} s^{[t]}_{ij} Y_{i_{t} j_{t}} ~, \quad s'_{ij} = \sum_{t=1}^{k} s^{[t]}_{ij} Y'_{i_{t} j_{t}} ~, \quad |a_{ij}| \leq 2 \sum_{t=1}^k a_t~.
$$ { In this final step of the proof, we use concentration inequalities to estimate the terms in \eqref{eq:RkRij}}. We set $Z_{t+1} = (R^{[t+1]}_{ij} - \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] ) \mathbbm{1}_{\mathcal{E}_t}$. We write, for any $u \geq 0$, $$ \mathbb{P} \left( \left| \sum_{t=0}^ {k-1} \left(R^{[t+1]}_{ij} - \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] \right)\right| \geq u \right) \leq \mathbb{P} \left( \left| \sum_{t=1}^ {k} Z_t \right| \geq u \right) + \sum_{t=0}^{k-1} \mathbb{P} ( \mathcal{E}_t^c)~. $$ By Lemma \ref{loclaw}, we have for any $c>0$, $\sum_{t=0}^{k-1} \mathbb{P} ( \mathcal{E}_t^c) = O ( N ^{-c})$. Since $ \mathcal{E}_t \in {\cal F}_t$, we have { that} $\mathbb{E}[ Z_{t+1} | {\cal F}_t ] = 0$. Also, from \eqref{eq:R0Rt}-\eqref{eq:ERtR0}, $|Z_{t}| \leq 2b_t$. On the event $\mathcal T$, we have $$ \sqrt{\sum_{t=1}^k b_t^2 } \leq L \delta^2 \sqrt k + L \delta \delta_0 \sqrt{\frac{4k'}{N} } + L \delta^2_0 \leq 2 L \delta^2 \sqrt{k'}~. $$ The Azuma-Hoeffding martingale inequality implies that, for $u \geq 0$, $$ \mathbb{P} \left( \left| \sum_{t=1}^ {k} Z_t \right| \geq 2 u L \delta^2 \sqrt{k'} \right) \leq 2 \exp \left( - \frac{u^2}{2}\right)~. $$ We apply the latter inequality with $u = \log N$. We deduce that, with overwhelming probability, \begin{equation}\label{eq:jook} \left| \sum_{t=0}^ {k-1} \left(R^{[t+1]}_{ij} - \mathbb{E} [ R^{[t+1]}_{ij} |{\cal F}_t ] \right) \right| \leq L^2 \sqrt{k'} \delta^2~. \end{equation} We may treat the random variable $s'_{ij}$ in \eqref{eq:RkRij} similarly. We set $Z'_{t+1} = s^{[t+1]}_{ij} Y'_{i_{t+1} j_{t+1}}\mathbbm{1}_{\mathcal{E}_t}$. Note that $s^{[t+1]}_{ij}$ is ${\cal F}_t$-measurable and $\mathbb{E} [ Y'_{i_{t+1} j_{t+1}} | {\cal F}_t ] = 0$. Thus $\mathbb{E} [Z'_{t+1} | {\cal F}_t] = 0$. Moreover, since $|Y'_{ij}| \leq L^2/ 16$, from \eqref{eq:R0Rt}, we find $|Z'_{t+1}| \leq b'_t = L^2 (\delta^2 \delta_0 + \delta_0^3 \mathbbm{1}_{( t \in T_{ij})})$.
If $\mathcal T$ holds, we get $$ \sqrt{\sum_{t=0}^{k-1} {b'_t}^2} \leq L^2 \delta^2 \delta_0 \sqrt{k} + L^2 \delta_0^3 \sqrt{ \frac{4k'}{N}} \leq 2 L^2 \delta^2 \delta_0 \sqrt{k'}~. $$ We write, for $u \geq 0$, $$ \mathbb{P} \left( \left| s'_{ij} \right| \geq u \right) \leq \mathbb{P} \left( \left| \sum_{t=1}^{k} Z'_t \right| \geq u \right) + \sum_{t=0}^{k-1} \mathbb{P} ( \mathcal{E}_{t}^c) . $$ From the Azuma-Hoeffding martingale inequality, we deduce that, with overwhelming probability, \begin{equation}\label{eq:jook3} |s'_{ij}| \leq \sqrt{k'} \delta^2. \end{equation} We now estimate the random variable $r_{ij}$ in \eqref{eq:RkRij}. We will also use the Azuma-Hoeffding inequality, but we need to introduce a backward filtration {(because we have to deal with the random variables $X_{i_t,j_t}$ instead of $X'_{i_t,j_t}$ as in $s'_{ij}$)}. We define ${\cal F}'_t$ as the $\sigma$-algebra generated by the random variables $X'$, $S_k$ and $\{ X_{ij} : \{i,j\} \ne \{ i_s, j_s \} \hbox{ for all } s \leq t \}$. By construction, $X^{[t]}$ and $X_0^{[t]}$ are ${\cal F}'_{t}$-measurable random variables. Let $\mathcal{E}'_{t} \in {\cal F}'_{t} $ be the event that $\mathcal T$ holds and that the conclusion of Lemma \ref{loclaw} holds for $X^{[t]}$. If $\mathcal{E}'_{t}$ holds, then for all $z = E + {\mathbf{i}} \eta$ with $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$ and $ \eta = N^{-1/6} L^{-c_1}$, we have, $$ \max_{i \ne j} |R^{[t]}_{ij} (z) | \leq \delta \quad \hbox{ and } \quad ~ \max_{i} |R^{[t]}_{ii} (z) | \leq \delta_0~. $$ Arguing as in \eqref{eq:R0Rt}, if $\mathcal{E}'_t$ holds then $$ \max_{i \ne j} |(R_0^{[t]})_{ij} | \leq \sqrt 2\delta \quad \hbox{ and } \quad ~ \max_{i} |(R^{[t]}_0)_{ii} | \leq \sqrt 2\delta_0~. $$ The variable $r^{[t]}_{ij}$ is ${\cal F}'_{t}$-measurable and $\mathbb{E} (X_{i_{t} j_{t}} | {\cal F}'_{t} ) = 0$.
We write, for $u \geq 0$, $$ \mathbb{P} \left( \left| r_{ij} \right| \geq u \right) \leq \mathbb{P} \left( \left| \sum_{t=1}^{k} \tilde Z_t \right| \geq u \right) + \sum_{t=1}^{k} \mathbb{P} ( {\mathcal{E}'_{t}}^c)~ , $$ where $\tilde Z_{t} = r^{[t]}_{ij} X_{i_{t} j_{t}} \mathbbm{1}_{\mathcal{E}'_{t}}$. We have $\mathbb{E} ( \tilde Z_{t} | {\cal F}'_t ) = 0$ and $$|\tilde Z_t| \leq \tilde b_t = L \delta^2 + L \delta \delta_0 \mathbbm{1}_{(t \in T_{ij})} + L \delta_0^2 \mathbbm{1}_{ (\{ i_t,j_t\} = \{ i,j \})}~.$$ Arguing as above, from the Azuma-Hoeffding martingale inequality, we deduce that, with overwhelming probability, \begin{equation}\label{eq:jook2} |r_{ij}| \leq L^2\sqrt {k'} \delta^2. \end{equation} Similarly, repeating the argument leading to \eqref{eq:jook3} with $s_{ij}$ and the filtration $({\cal F}'_t)$ gives, with overwhelming probability, \begin{equation}\label{eq:jook4} |s_{ij}| \leq \sqrt{k'} \delta^2. \end{equation} We note also that if $\mathcal T$ holds then $$ |a_{ij}| \leq 2 \sum_{t=1}^k a_t \leq 2 L^3 \delta^2 \delta^2_0 k + 2L^3 \delta_0^4 \frac{4k'}{N} \leq \sqrt{k'} \delta^2, $$ where the last inequality holds provided that $k \leq N^{5/3}$. So finally, from \eqref{eq:RkRij}-\eqref{eq:jook}-\eqref{eq:jook3}-\eqref{eq:jook2}-\eqref{eq:jook4}, we have proved that for a given $z = E + {\mathbf{i}} \eta$ such that $|E - 2 \sqrt N| \leq L^{c_0} N^{-1/6}$ and $\eta = N^{-1/6} L ^{-c_1}$, with overwhelming probability, $$ \left| R^{[k]}_{ij} (z)- R_{ij} (z) \right| \leq 3 L ^2 \sqrt {k'} \delta^2~, $$ where the inequality holds provided that $k \leq N^{5/3}$. Recall that $|R_{ij} (E + {\mathbf{i}} \eta) - R_{ij} ( E' + {\mathbf{i}} \eta) | \leq \eta^{-2} |E - E'|$. By a net argument (as in the proof of Lemma \ref{lem:resres0}), we deduce that, with overwhelming probability, for all $z = E + {\mathbf{i}} \eta$ such that $|2 \sqrt N - E| \leq L^{c_0} N^{-1/6}$, $\left| R^{[k]}_{ij} (z)- R_{ij} (z) \right| \leq 4 L^2 \sqrt {k'} \delta^2$.
This concludes the proof of Lemma \ref{lem:resresk}.

\subsection{Proof of Lemma \ref{lem:llk}}

Let $c_0$ be as in Lemma \ref{lem:resl} and $c >0$. We set $c_1 = c_0/2 +2c$ and let $\eta = N^{-1/6} L ^{-c_1}$. Let $p \in \{1,2\}$. We start by bounding $\min_{ j}|\lambda_p - \lambda_j^{[k]}|$ and $\min_{j}|\lambda^{[k]}_p - \lambda_j|$. Since $X$ and $X^{[k]}$ have the same distribution, we only prove that, with overwhelming probability,
\begin{equation}\label{eq:pton} \min_{ 1 \leq j \leq N}|\lambda_p - \lambda_j^{[k]}| \leq 2 L^{c_0/2} \eta. \end{equation}
By Lemma \ref{lem:resl}, with overwhelming probability, $|\lambda_p - 2 \sqrt N | \leq L^{c_0} N^{-1/6}$ and, for some integer $1 \leq i\leq N$,
$$ N \eta^{-1} \Im R (\lambda_p + {\mathbf{i}} \eta)_{ii} \geq \frac 1 2 \eta^{-2}~, $$
and
$$ N \eta^{-1} \Im R^{[k]} (\lambda_p + {\mathbf{i}} \eta)_{ii} \leq L^{c_0} \min_{1 \leq j \leq N} (\lambda_p - \lambda_j^{[k]})^{-2}~. $$
By Lemma \ref{lem:resresk}, we deduce that if $k \leq N^{5/3} L^{-c_2}$, then with overwhelming probability,
$$ \frac 1 4 \eta^{-2} \leq L^{c_0}\min_{1 \leq j \leq N} (\lambda_p - \lambda_j^{[k]})^{-2}~. $$
Rearranging gives $\min_{1 \leq j \leq N}|\lambda_p - \lambda_j^{[k]}|^2 \leq 4 L^{c_0} \eta^2$, which proves \eqref{eq:pton}.

We may now conclude the proof of Lemma \ref{lem:llk}. Fix $\varepsilon >0$. As already noticed, from \cite[Theorem 2.7]{MR3253704}, there exists $\delta >0$ such that, with probability at least $1 - \varepsilon$, $\lambda_2 < \lambda - \delta N^{-1/6}$. From what precedes, with probability at least $1 - 2 \varepsilon$, $\mathcal{E}_\delta$ holds and, for all $k \leq N^{5/3} L^{-c_2}$, we have
$$ \max \left( \min_{ 1 \leq j \leq N}|\lambda^{[k]}_p - \lambda_j| , \min_{ 1 \leq j \leq N}|\lambda_p - \lambda_j^{[k]}| \right) \leq \alpha~, $$
with $\alpha = 2 L^{c_0/2} \eta$. On this event, we readily find $|\lambda - \lambda^{[k]} | \leq \alpha$ and, for some $p$, $|\lambda_p - \lambda^{[k]}_2| \leq \alpha$. Assume, by contradiction, that this last inequality only holds for some $p \ne 2$.
Since $2\alpha < \delta N^{-1/6}$, if $p \ne 2$, then $p \leq 3$ and we deduce that $\lambda_2 > \lambda_2^{[k]} + \alpha$. We note that, on our event, for some $q$, we have $|\lambda_2 - \lambda_q^{[k]}| \leq \alpha$. In particular, $\lambda_q^{[k]} > \lambda_2^{[k]}$. So necessarily $q = 1$ and, from the triangle inequality, $| \lambda_2 - \lambda_1| \leq 2 \alpha$. This is a contradiction since $2\alpha < \delta N^{-1/6}$. This concludes the proof of Lemma \ref{lem:llk}.

\textbf{Acknowledgments.} We would like to thank Jaehun Lee for pointing out a mistake in the proof of Lemma \ref{lem:monotonicity_second} in an earlier version of this paper. We would also like to thank the referees for their valuable reports.

\end{document}