IMOSL-2016-N7
Let \(n\) be an odd positive integer. In the Cartesian plane, a cyclic polygon \(P\) with area \(S\) is chosen. All its vertices have integral coordinates, and the squares of its side lengths are all divisible by \(n\) . Prove that \(2S\) is an integer divisible by \(n\) .
Solution. Let \(P = A_{1}A_{2}\ldots A_{k}\) and let \(A_{k + i} = A_{i}\) for \(i\geqslant 1\) . By the Shoelace Formula, the area of any convex polygon with integral coordinates is half an integer. Therefore, \(2S\) is an integer. We shall prove by induction on \(k\geqslant 3\) that \(2S\) is divisible by \(n\) . Clearly, it suffices to consider \(n = p^{t}\) where \(p\) is an odd prime and \(t\geqslant 1\) .
For the base case \(k = 3\) , let the side lengths of \(P\) be \(\sqrt{n a},\sqrt{n b},\sqrt{n c}\) where \(a,b,c\) are positive integers. By Heron's Formula,
\[16S^{2} = n^{2}(2a b + 2b c + 2c a - a^{2} - b^{2} - c^{2}).\]
This shows \(16S^{2}\) is divisible by \(n^{2}\) . Since \(n\) is odd, \(2S\) is divisible by \(n\) .
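The base case can be spot-checked numerically. The sketch below uses an illustrative choice of ours (the lattice triangle \((0,0),(3,4),(-4,3)\) with \(n = 5\), not from the text) to verify both the Shoelace computation and Heron's Formula in the stated form.

```python
def doubled_area(pts):
    # Shoelace formula: returns 2S, always an integer for lattice vertices
    m = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % m][1] - pts[(i + 1) % m][0] * pts[i][1]
                   for i in range(m)))

# Illustrative lattice triangle whose squared sides 25, 50, 25 are all divisible by n = 5
tri = [(0, 0), (3, 4), (-4, 3)]
n = 5
sq = [(tri[i][0] - tri[(i + 1) % 3][0]) ** 2 + (tri[i][1] - tri[(i + 1) % 3][1]) ** 2
      for i in range(3)]
assert all(s % n == 0 for s in sq)
a, b, c = (s // n for s in sq)          # squared sides are na, nb, nc
S2 = doubled_area(tri)                  # S2 = 2S
# Heron's Formula as stated: 16 S^2 = n^2 (2ab + 2bc + 2ca - a^2 - b^2 - c^2)
assert 4 * S2 ** 2 == n ** 2 * (2*a*b + 2*b*c + 2*c*a - a*a - b*b - c*c)
assert S2 % n == 0                      # 2S is divisible by n, as the base case asserts
```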
Assume \(k\geqslant 4\) . If the square of length of one of the diagonals is divisible by \(n\) , then that diagonal divides \(P\) into two smaller polygons, to which the induction hypothesis applies. Hence we may assume that none of the squares of diagonal lengths is divisible by \(n\) . As usual, we denote by \(\nu_{p}(r)\) the exponent of \(p\) in the prime decomposition of \(r\) . We claim the following.
Claim. \(\nu_{p}(A_{1}A_{m}^{2}) > \nu_{p}(A_{1}A_{m + 1}^{2})\) for \(2\leqslant m\leqslant k - 1\).
Proof. The case \(m = 2\) is immediate, since \(\nu_{p}(A_{1}A_{2}^{2})\geqslant t > \nu_{p}(A_{1}A_{3}^{2})\) by the divisibility condition and the above assumption.
Suppose \(\nu_{p}(A_{1}A_{2}^{2}) > \nu_{p}(A_{1}A_{3}^{2}) > \dots >\nu_{p}(A_{1}A_{m}^{2})\) where \(3\leqslant m\leqslant k - 1\) . For the induction step, we apply Ptolemy's Theorem to the cyclic quadrilateral \(A_{1}A_{m - 1}A_{m}A_{m + 1}\) to get
\[A_{1}A_{m + 1}\times A_{m - 1}A_{m} + A_{1}A_{m - 1}\times A_{m}A_{m + 1} = A_{1}A_{m}\times A_{m - 1}A_{m + 1},\]
which can be rewritten as
\[\begin{array}{r}{A_{1}A_{m + 1}^{2}\times A_{m - 1}A_{m}^{2} = A_{1}A_{m - 1}^{2}\times A_{m}A_{m + 1}^{2} + A_{1}A_{m}^{2}\times A_{m - 1}A_{m + 1}^{2}}\\ {-2A_{1}A_{m - 1}\times A_{m}A_{m + 1}\times A_{1}A_{m}\times A_{m - 1}A_{m + 1}.} \end{array} \quad (1)\]
From this, \(2A_{1}A_{m - 1}\times A_{m}A_{m + 1}\times A_{1}A_{m}\times A_{m - 1}A_{m + 1}\) is an integer. We compare the exponent of \(p\) in each term of (1). By the inductive hypothesis, we have \(\nu_{p}(A_{1}A_{m - 1}^{2}) > \nu_{p}(A_{1}A_{m}^{2})\) . Also, we have \(\nu_{p}(A_{m}A_{m + 1}^{2})\geqslant t > \nu_{p}(A_{m - 1}A_{m + 1}^{2})\) . These give
\[\nu_{p}(A_{1}A_{m - 1}^{2}\times A_{m}A_{m + 1}^{2}) > \nu_{p}(A_{1}A_{m}^{2}\times A_{m - 1}A_{m + 1}^{2}). \quad (2)\]
Next, we have \(\nu_{p}(A_{1}A_{m - 1}^{2}\times A_{m}A_{m + 1}^{2}\times A_{1}A_{m}^{2}\times A_{m - 1}A_{m + 1}^{2}) = \nu_{p}(A_{1}A_{m - 1}^{2}\times A_{m}A_{m + 1}^{2}) +\) \(\nu_{p}(A_{1}A_{m}^{2}\times A_{m - 1}A_{m + 1}^{2}) > 2\nu_{p}(A_{1}A_{m}^{2}\times A_{m - 1}A_{m + 1}^{2})\) from (2). This implies
\[\nu_{p}(2A_{1}A_{m - 1}\times A_{m}A_{m + 1}\times A_{1}A_{m}\times A_{m - 1}A_{m + 1}) > \nu_{p}(A_{1}A_{m}^{2}\times A_{m - 1}A_{m + 1}^{2}). \quad (3)\]
Combining (1), (2) and (3), we conclude that
\[\nu_{p}(A_{1}A_{m + 1}^{2}\times A_{m - 1}A_{m}^{2}) = \nu_{p}(A_{1}A_{m}^{2}\times A_{m - 1}A_{m + 1}^{2}).\]
By \(\nu_{p}(A_{m - 1}A_{m}^{2})\geqslant t > \nu_{p}(A_{m - 1}A_{m + 1}^{2})\) , we get \(\nu_{p}(A_{1}A_{m + 1}^{2})< \nu_{p}(A_{1}A_{m}^{2})\) . The Claim follows by induction. \(\square\)
From the Claim, we get a chain of inequalities
\[t > \nu_{p}(A_{1}A_{3}^{2}) > \nu_{p}(A_{1}A_{4}^{2}) > \dots >\nu_{p}(A_{1}A_{k}^{2})\geqslant t,\]
which is a contradiction. This completes the induction step, and hence \(2S\) is divisible by \(n\) .
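Ptolemy's Theorem, the key identity behind the Claim, can be spot-checked on a concrete cyclic quadrilateral; the four concyclic lattice points on \(x^{2} + y^{2} = 25\) below are an illustrative choice of ours.

```python
from math import dist, isclose

# Four lattice points, listed in cyclic order on the circle x^2 + y^2 = 25
A1, A2, A3, A4 = (5, 0), (3, 4), (0, 5), (-4, 3)
diagonals = dist(A1, A3) * dist(A2, A4)
sides = dist(A1, A2) * dist(A3, A4) + dist(A2, A3) * dist(A4, A1)
# Ptolemy: product of the diagonals equals the sum of products of opposite sides
assert isclose(diagonals, sides)
```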
Comment. The condition that \(P\) is cyclic is crucial. As a counterexample, consider the rhombus with vertices \((0,3),(4,0),(0, - 3),(- 4,0)\) . The square of each of its side lengths is divisible by 5, while \(2S = 48\) is not.
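A quick computational check of this counterexample:

```python
# Non-cyclic rhombus from the comment: squared sides divisible by 5, but 2S is not
verts = [(0, 3), (4, 0), (0, -3), (-4, 0)]
sq_sides = [(verts[i][0] - verts[(i + 1) % 4][0]) ** 2 +
            (verts[i][1] - verts[(i + 1) % 4][1]) ** 2 for i in range(4)]
S2 = abs(sum(verts[i][0] * verts[(i + 1) % 4][1] - verts[(i + 1) % 4][0] * verts[i][1]
             for i in range(4)))               # Shoelace: 2S
assert sq_sides == [25, 25, 25, 25]            # every squared side is divisible by 5
assert S2 == 48 and S2 % 5 != 0                # yet 2S = 48 is not
```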
The proposer also gives a proof for the case where \(n\) is even. One just needs an extra technical step to handle \(p = 2\) .
IMOSL-2016-N8
Find all polynomials \(P(x)\) of odd degree \(d\) and with integer coefficients satisfying the following property: for each positive integer \(n\) , there exist \(n\) positive integers \(x_{1}, x_{2}, \ldots , x_{n}\) such that \(\frac{1}{2} < \frac{P(x_{i})}{P(x_{j})} < 2\) and \(\frac{P(x_{i})}{P(x_{j})}\) is the \(d\) - th power of a rational number for every pair of indices \(i\) and \(j\) with \(1 \leqslant i, j \leqslant n\) .
Answer. \(P(x) = a(r x + s)^{d}\) where \(a, r, s\) are integers with \(a \neq 0, r \geqslant 1\) and \((r, s) = 1\) .
Solution. Let \(P(x) = a_{d}x^{d} + a_{d - 1}x^{d - 1} + \dots + a_{0}\) . Consider the substitution \(y = d a_{d}x + a_{d - 1}\) . By defining \(Q(y) = P(x)\) , we find that \(Q\) is a polynomial with rational coefficients without the term \(y^{d - 1}\) . Let \(Q(y) = b_{d}y^{d} + b_{d - 2}y^{d - 2} + b_{d - 3}y^{d - 3} + \dots + b_{0}\) and \(B = \max_{0 \leqslant i \leqslant d} \{|b_{i}|\}\) (where \(b_{d - 1} = 0\) ).
The condition shows that for each \(n \geqslant 1\) , there exist integers \(y_{1}, y_{2}, \ldots , y_{n}\) such that \(\frac{1}{2} < \frac{Q(y_{i})}{Q(y_{j})} < 2\) and \(\frac{Q(y_{i})}{Q(y_{j})}\) is the \(d\) - th power of a rational number for \(1 \leqslant i, j \leqslant n\) . Since \(n\) can be arbitrarily large, we may assume all \(x_{i}\) 's and hence \(y_{i}\) 's are integers larger than some absolute constant in the following.
By Dirichlet's Theorem, since \(d\) is odd, we can find a sufficiently large prime \(p\) such that \(p \equiv 2\) (mod \(d\) ). In particular, we have \((p - 1, d) = 1\) . For this fixed \(p\) , we choose \(n\) to be sufficiently large. Then by the Pigeonhole Principle, there must be \(d + 1\) of \(y_{1}, y_{2}, \ldots , y_{n}\) which are congruent mod \(p\) . Without loss of generality, assume \(y_{i} \equiv y_{j}\) (mod \(p\) ) for \(1 \leqslant i, j \leqslant d + 1\) . We shall establish the following.
Claim. \(\frac{Q(y_{i})}{Q(y_{1})} = \frac{y_{i}^{d}}{y_{1}^{d}}\) for \(2 \leqslant i \leqslant d + 1\) .
Proof. Let \(\frac{Q(y_{i})}{Q(y_{1})} = \frac{l^{d}}{m^{d}}\) where \((l, m) = 1\) and \(l, m > 0\) . This can be rewritten in the expanded form
\[b_{d}(m^{d}y_{i}^{d} - l^{d}y_{1}^{d}) = -\sum_{j = 0}^{d - 2}b_{j}(m^{d}y_{i}^{j} - l^{d}y_{1}^{j}). \quad (1)\]
Let \(c\) be the common denominator of \(Q\) , so that \(cQ(k)\) is an integer for any integer \(k\) . Note that \(c\) depends only on \(P\) and so we may assume \((p, c) = 1\) . Then \(y_{1} \equiv y_{i}\) (mod \(p\) ) implies \(cQ(y_{1}) \equiv cQ(y_{i})\) (mod \(p\) ).
Case 1. \(p \mid cQ(y_{1})\) .
In this case, there is a cancellation of \(p\) in the numerator and denominator of \(\frac{cQ(y_{i})}{cQ(y_{1})}\) , so that \(m^{d} \leqslant p^{- 1}|cQ(y_{1})|\) . Noting \(|Q(y_{1})| < 2By_{1}^{d}\) as \(y_{1}\) is large, we get
\[m \leqslant p^{-\frac{1}{d}} (2cB)^{\frac{1}{d}} y_{1}. \quad (2)\]
For large \(y_{1}\) and \(y_{i}\) , the relation \(\frac{1}{2} < \frac{Q(y_{i})}{Q(y_{1})} < 2\) implies
\[\frac{1}{3} < \frac{y_{i}^{d}}{y_{1}^{d}} < 3. \quad (3)\]
We also have
\[\frac{1}{2} < \frac{l^{d}}{m^{d}} < 2. \quad (4)\]
Now, the left- hand side of (1) is
\[b_{d}(m y_{i} - l y_{1})(m^{d - 1}y_{i}^{d - 1} + m^{d - 2}y_{i}^{d - 2}l y_{1} + \cdots +l^{d - 1}y_{1}^{d - 1}).\]
Suppose on the contrary that \(m y_{i} - l y_{1}\neq 0\) . Then the absolute value of the above expression is at least \(|b_{d}|m^{d - 1}y_{i}^{d - 1}\) . On the other hand, the absolute value of the right- hand side of (1) is at most
\[\begin{aligned}\sum_{j = 0}^{d - 2}B(m^{d}y_{i}^{j} + l^{d}y_{1}^{j}) &\leqslant (d - 1)B(m^{d}y_{i}^{d - 2} + l^{d}y_{1}^{d - 2})\\ &\leqslant 7(d - 1)Bm^{d}y_{i}^{d - 2}\\ &\leqslant 7(d - 1)B\left(p^{-\frac{1}{d}}(2c B)^{\frac{1}{d}}y_{1}\right)m^{d - 1}y_{i}^{d - 2}\\ &\leqslant 21(d - 1)Bp^{-\frac{1}{d}}(2c B)^{\frac{1}{d}}m^{d - 1}y_{i}^{d - 1}\end{aligned}\]
by using successively (3), (4), (2) and again (3). This shows
\[|b_{d}|m^{d - 1}y_{i}^{d - 1}\leqslant 21(d - 1)Bp^{-\frac{1}{d}}(2c B)^{\frac{1}{d}}m^{d - 1}y_{i}^{d - 1},\]
which is a contradiction for large \(p\) as \(b_{d},B,c,d\) depend only on the polynomial \(P\) . Therefore, we have \(m y_{i} - l y_{1} = 0\) in this case.
Case 2. \((p,c Q(y_{1})) = 1\) .
From \(c Q(y_{1})\equiv c Q(y_{i})\) (mod \(p\) ), we have \(l^{d}\equiv m^{d}\) (mod \(p\) ). Since \((p - 1,d) = 1\) , Fermat's Little Theorem gives \(l\equiv m\) (mod \(p\) ). Then \(p \mid m y_{i} - l y_{1}\) . Suppose on the contrary that \(m y_{i} - l y_{1}\neq 0\) . Then the left-hand side of (1) has absolute value at least \(|b_{d}|p m^{d - 1}y_{i}^{d - 1}\) . Similar to Case 1, the right-hand side of (1) has absolute value at most
\[21(d - 1)B(2c B)^{\frac{1}{d}}m^{d - 1}y_{i}^{d - 1},\]
which must be smaller than \(|b_{d}|p m^{d - 1}y_{i}^{d - 1}\) for large \(p\) . Again this yields a contradiction and hence \(m y_{i} - l y_{1} = 0\) .
In both cases, we find that \(\frac{Q(y_{i})}{Q(y_{1})} = \frac{l^{d}}{m^{d}} = \frac{y_{i}^{d}}{y_{1}^{d}}\) .
From the Claim, the polynomial \(Q(y_{1})y^{d} - y_{1}^{d}Q(y)\) has roots \(y = y_{1},y_{2},\ldots ,y_{d + 1}\) . Since its degree is at most \(d\) , this must be the zero polynomial. Hence, \(Q(y) = b_{d}y^{d}\) . This implies \(P(x) = a_{d}(x + \frac{a_{d - 1}}{d a_{d}})^{d}\) . Let \(\frac{a_{d - 1}}{d a_{d}} = \frac{s}{r}\) with integers \(r,s\) where \(r\geqslant 1\) and \((r,s) = 1\) . Since \(P\) has integer coefficients, we need \(r^{d}|a_{d}\) . Let \(a_{d} = r^{d}a\) . Then \(P(x) = a(r x + s)^{d}\) . It is obvious that such a polynomial satisfies the conditions.
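As a sanity check, one can verify numerically that a polynomial of the answer's shape has the required property. The instance \(P(x) = 3(2x + 5)^{3}\) (i.e., \(a = 3\), \(r = 2\), \(s = 5\), \(d = 3\)) and the consecutive inputs below are illustrative choices of ours.

```python
from fractions import Fraction

def is_dth_power(q, d):
    # test whether a positive Fraction q is the d-th power of a rational
    def iroot(m):
        r = round(m ** (1.0 / d))
        return any(c >= 0 and c ** d == m for c in (r - 1, r, r + 1))
    return iroot(q.numerator) and iroot(q.denominator)

d = 3
P = lambda x: 3 * (2 * x + 5) ** d      # hypothetical instance of the answer's form
xs = [1000 + i for i in range(5)]       # n = 5 large consecutive inputs
for xi in xs:
    for xj in xs:
        q = Fraction(P(xi), P(xj))
        # both conditions of the problem hold for every pair
        assert Fraction(1, 2) < q < 2 and is_dth_power(q, d)
```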
Comment. In the proof, the use of prime and Dirichlet's Theorem can be avoided. One can easily show that each \(P(x_{i})\) can be expressed in the form \(u v_{i}^{d}\) where \(u,v_{i}\) are integers and \(u\) cannot be divisible by the \(d\) - th power of a prime (note that \(u\) depends only on \(P\) ). By fixing a large integer \(q\) and by choosing a large \(n\) , we can apply the Pigeonhole Principle and assume
\(x_{1} \equiv x_{2} \equiv \dots \equiv x_{d + 1} \pmod {q}\) and \(v_{1} \equiv v_{2} \equiv \dots \equiv v_{d + 1} \pmod {q}\) . Then the remaining proof is similar to Case 2 of the Solution.
Alternatively, we give another modification of the proof as follows.
We take a sufficiently large \(n\) and consider the corresponding positive integers \(y_{1}, y_{2}, \ldots , y_{n}\) . For each \(2 \leq i \leq n\) , let \(\frac{Q(y_{i})}{Q(y_{1})} = \frac{l_{i}^{d}}{m_{i}^{d}}\) .
As in Case 1, if there are \(d\) indices \(i\) such that the integers \(\frac{c|Q(y_{1})|}{m_{i}^{d}}\) are bounded below by a constant depending only on \(P\) , we can establish the Claim using those \(y_{i}\) 's and complete the proof. Similarly, as in Case 2, if there are \(d\) indices \(i\) such that the integers \(|m_{i}y_{i} - l_{i}y_{1}|\) are bounded below, then the proof goes the same. So it suffices to consider the case where \(\frac{c|Q(y_{1})|}{m_{i}^{d}} \leqslant M\) and \(|m_{i}y_{i} - l_{i}y_{1}| \leqslant N\) for all \(2 \leq i \leq n'\) where \(M, N\) are fixed constants and \(n'\) is large. Since there are only finitely many choices for \(m_{i}\) and \(m_{i}y_{i} - l_{i}y_{1}\) , by the Pigeonhole Principle, we can assume without loss of generality \(m_{i} = m\) and \(m_{i}y_{i} - l_{i}y_{1} = t\) for \(2 \leq i \leq d + 2\) . Then
\[\frac{Q(y_{i})}{Q(y_{1})} = \frac{l_{i}^{d}}{m^{d}} = \frac{(m y_{i} - t)^{d}}{m^{d} y_{1}^{d}}\]
so that \(Q(y_{1})(m y - t)^{d} - m^{d} y_{1}^{d} Q(y)\) has roots \(y = y_{2}, y_{3}, \ldots , y_{d + 2}\) . Its degree is at most \(d\) and hence it is the zero polynomial. Therefore, \(Q(y) = \frac{b_{d}}{m^{d}} (m y - t)^{d}\) . Indeed, \(Q\) does not have the term \(y^{d - 1}\) , which means \(t\) should be 0. This gives the corresponding \(P(x)\) of the desired form.
The two modifications of the Solution work equally well when the degree \(d\) is even.
IMOSL-2017-A1
Let \(a_{1},a_{2},\ldots ,a_{n},k\) , and \(M\) be positive integers such that
\[\frac{1}{a_{1}} +\frac{1}{a_{2}} +\dots +\frac{1}{a_{n}} = k\quad \mathrm{and}\quad a_{1}a_{2}\dots a_{n} = M.\]
If \(M > 1\) , prove that the polynomial
\[P(x) = M(x + 1)^{k} - (x + a_{1})(x + a_{2})\cdots (x + a_{n})\]
has no positive roots.
Solution 1. We first prove that, for \(x > 0\) ,
\[a_{i}(x + 1)^{1 / a_{i}}\leqslant x + a_{i}, \quad (1)\]
with equality if and only if \(a_{i} = 1\) . It is clear that equality occurs if \(a_{i} = 1\) .
If \(a_{i} > 1\) , the AM- GM inequality applied to a single copy of \(x + 1\) and \(a_{i} - 1\) copies of 1 yields
\[\frac{(x + 1) + \overbrace{1 + 1 + \cdots + 1}^{a_{i} - 1\ \text{ones}}}{a_{i}}\geqslant \sqrt[a_{i}]{(x + 1)\cdot 1^{a_{i} - 1}}\Longrightarrow a_{i}(x + 1)^{1 / a_{i}}\leqslant x + a_{i}.\]
Since \(x + 1 > 1\) , the inequality is strict for \(a_{i} > 1\) .
Multiplying the inequalities (1) for \(i = 1,2,\ldots ,n\) yields
\[\prod_{i = 1}^{n}a_{i}(x + 1)^{1 / a_{i}}\leqslant \prod_{i = 1}^{n}(x + a_{i})\iff M(x + 1)^{\sum_{i = 1}^{n}1 / a_{i}} - \prod_{i = 1}^{n}(x + a_{i})\leqslant 0\iff P(x)\leqslant 0\]
with equality iff \(a_{i} = 1\) for all \(i\in \{1,2,\ldots ,n\}\) . But this implies \(M = 1\) , which is not possible. Hence \(P(x)< 0\) for all \(x\in \mathbb{R}^{+}\) , and \(P\) has no positive roots.
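Solution 1's conclusion can be spot-checked exactly on a concrete instance; the choice \(a = (1, 2, 3, 6)\), which gives \(k = 2\) and \(M = 36 > 1\), is our own illustration.

```python
from fractions import Fraction

# Hypothetical instance: a = (1, 2, 3, 6), so k = 1 + 1/2 + 1/3 + 1/6 = 2 and M = 36
a = (1, 2, 3, 6)
k = int(sum(Fraction(1, ai) for ai in a))   # exact sum is the integer 2
M = 1
for ai in a:
    M *= ai
assert (k, M) == (2, 36)

def P(x):
    prod = 1
    for ai in a:
        prod *= x + ai
    return M * (x + 1) ** k - prod

assert P(0) == 0                                          # x = 0 is a root, but not positive
assert all(P(Fraction(j, 7)) < 0 for j in range(1, 70))   # P < 0 on sampled x > 0
```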
Comment 1. Inequality (1) can be obtained in several ways. For instance, we may also use the binomial theorem: since \(a_{i}\geqslant 1\) ,
\[\left(1 + \frac{x}{a_{i}}\right)^{a_{i}} = \sum_{j = 0}^{a_{i}}\binom{a_{i}}{j}\left(\frac{x}{a_{i}}\right)^{j}\geqslant \binom{a_{i}}{0} + \binom{a_{i}}{1}\cdot \frac{x}{a_{i}} = 1 + x.\]
Both proofs of (1) mimic proofs to Bernoulli's inequality for a positive integer exponent \(a_{i}\) ; we can use this inequality directly:
\[\left(1 + \frac{x}{a_{i}}\right)^{a_{i}}\geqslant 1 + a_{i}\cdot \frac{x}{a_{i}} = 1 + x,\]
and so
\[x + a_{i} = a_{i}\left(1 + \frac{x}{a_{i}}\right)\geqslant a_{i}(1 + x)^{1 / a_{i}},\]
or its (reversed) formulation, with exponent \(1 / a_{i}\leqslant 1\) :
\[(1 + x)^{1 / a_{i}}\leqslant 1 + \frac{1}{a_{i}}\cdot x = \frac{x + a_{i}}{a_{i}}\Longrightarrow a_{i}(1 + x)^{1 / a_{i}}\leqslant x + a_{i}.\]
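Inequality (1) is also easy to spot-check with floating point over a grid; the ranges below are arbitrary choices of ours.

```python
# spot-check (1): a_i (x+1)^(1/a_i) <= x + a_i for x > 0, with equality only when a_i = 1
for ai in range(1, 25):
    for j in range(1, 300):
        x = j / 10
        lhs = ai * (x + 1) ** (1 / ai)
        if ai == 1:
            assert abs(lhs - (x + ai)) < 1e-9   # equality case
        else:
            assert lhs < x + ai                 # strict for a_i > 1
```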
Solution 2. We will prove that, in fact, all coefficients of the polynomial \(P(x)\) are non- positive, and at least one of them is negative, which implies that \(P(x) < 0\) for \(x > 0\) .
Indeed, since \(a_{j} \geqslant 1\) for all \(j\) and \(a_{j} > 1\) for some \(j\) (since \(a_{1}a_{2}\ldots a_{n} = M > 1\) ), we have \(k = \frac{1}{a_{1}} +\frac{1}{a_{2}} +\dots +\frac{1}{a_{n}} < n\) , so the coefficient of \(x^{n}\) in \(P(x)\) is \(- 1 < 0\) . Moreover, the coefficient of \(x^{r}\) in \(P(x)\) is negative for \(k < r \leqslant n = \deg (P)\) .
For \(0 \leqslant r \leqslant k\) , the coefficient of \(x^{r}\) in \(P(x)\) is
\[M \cdot \binom{k}{r} - \sum_{1 \leqslant i_{1} < i_{2} < \dots < i_{n - r} \leqslant n} a_{i_{1}} a_{i_{2}} \dots a_{i_{n - r}} = a_{1} a_{2} \dots a_{n} \cdot \binom{k}{r} - \sum_{1 \leqslant i_{1} < i_{2} < \dots < i_{n - r} \leqslant n} a_{i_{1}} a_{i_{2}} \dots a_{i_{n - r}},\]
which is non- positive iff
\[\binom{k}{r} \leqslant \sum_{1 \leqslant j_{1} < j_{2} < \dots < j_{r} \leqslant n} \frac{1}{a_{j_{1}} a_{j_{2}} \dots a_{j_{r}}}. \quad (2)\]
We will prove (2) by induction on \(r\) . For \(r = 0\) it holds with equality, since the constant term of \(P(x)\) is \(P(0) = 0\) ; for \(r = 1\) , (2) becomes the equality \(k = \sum_{i = 1}^{n} \frac{1}{a_{i}}\) . Now assume (2) is true for a given \(r\) with \(1 \leqslant r < k\) . Then we have
\[\binom{k}{r+1}=\frac{k-r}{r+1}\cdot\binom{k}{r}\leqslant\frac{k-r}{r+1}\cdot\sum_{1\leqslant j_{1}< j_{2}<\dots<j_{r}\leqslant n}\frac{1}{a_{j_{1}}a_{j_{2}}\cdots a_{j_{r}}},\]
and it suffices to prove that
\[\frac{k - r}{r + 1} \cdot \sum_{1 \leqslant j_{1} < j_{2} < \dots < j_{r} \leqslant n} \frac{1}{a_{j_{1}} a_{j_{2}} \dots a_{j_{r}}} \leqslant \sum_{1 \leqslant j_{1} < \dots < j_{r} < j_{r + 1} \leqslant n} \frac{1}{a_{j_{1}} a_{j_{2}} \dots a_{j_{r}} a_{j_{r + 1}}},\]
which is equivalent to
\[\frac{1}{a_{1}} +\frac{1}{a_{2}} +\dots +\frac{1}{a_{n}} -\sum_{1 \leqslant j_{1} < j_{2} < \dots < j_{r} \leqslant n} \frac{1}{a_{j_{1}} a_{j_{2}} \dots a_{j_{r}}} \leqslant (r + 1) \sum_{1 \leqslant j_{1} < \dots < j_{r} < j_{r + 1} \leqslant n} \frac{1}{a_{j_{1}} a_{j_{2}} \dots a_{j_{r}} a_{j_{r + 1}}}.\]
Since there are \(r + 1\) ways to choose a factor \(\frac{1}{a_{j_{i}}}\) from \(\frac{1}{a_{j_{1}} a_{j_{2}} \dots a_{j_{r}} a_{j_{r + 1}}}\) to factor out, every term \(\frac{1}{a_{j_{1}} a_{j_{2}} \dots a_{j_{r}} a_{j_{r + 1}}}\) in the right hand side appears exactly \(r + 1\) times in the product
\[\left(\frac{1}{a_{1}} +\frac{1}{a_{2}} +\dots +\frac{1}{a_{n}}\right) \sum_{1 \leqslant j_{1} < j_{2} < \dots < j_{r} \leqslant n} \frac{1}{a_{j_{1}} a_{j_{2}} \dots a_{j_{r}}}.\]
Hence all terms in the right hand side cancel out.
The remaining terms in the left hand side can be grouped in sums of the type
\[\frac{1}{a_{j_{1}}^{2}a_{j_{2}}\cdots a_{j_{r}}} +\frac{1}{a_{j_{1}}a_{j_{2}}^{2}\cdots a_{j_{r}}} +\dots +\frac{1}{a_{j_{1}}a_{j_{2}}\cdots a_{j_{r}}^{2}} -\frac{r}{a_{j_{1}}a_{j_{2}}\cdots a_{j_{r}}}\] \[\qquad = \frac{1}{a_{j_{1}}a_{j_{2}}\cdots a_{j_{r}}}\left(\frac{1}{a_{j_{1}}} +\frac{1}{a_{j_{2}}} +\dots +\frac{1}{a_{j_{r}}} -r\right),\]
which are all non- positive because \(a_{i} \geqslant 1 \implies \frac{1}{a_{i}} \leqslant 1\) , \(i = 1, 2, \ldots , n\) .
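Solution 2's claim that every coefficient of \(P\) is non-positive (with at least one negative) can be verified by exact expansion; the instance \(a = (1, 2, 3, 6)\), giving \(k = 2\) and \(M = 36\), is an illustrative choice of ours.

```python
def polymul(p, q):
    # multiply polynomials given as coefficient lists in ascending degree
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

a, k, M = (1, 2, 3, 6), 2, 36            # 1/1 + 1/2 + 1/3 + 1/6 = 2, product 36
lhs = [1]
for _ in range(k):
    lhs = polymul(lhs, [1, 1])           # (x + 1)^k
lhs = [M * c for c in lhs]               # M (x + 1)^k
rhs = [1]
for ai in a:
    rhs = polymul(rhs, [ai, 1])          # prod (x + a_i), degree n > k
coeffs = [u - v for u, v in zip(lhs + [0] * (len(rhs) - len(lhs)), rhs)]
assert all(c <= 0 for c in coeffs) and any(c < 0 for c in coeffs)
```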
Comment 2. The result is valid for any real numbers \(a_{i}\) , \(i = 1, 2, \ldots , n\) with \(a_{i} \geqslant 1\) and product \(M\) greater than 1. A variation of Solution 1, namely using weighted AM- GM (or the Bernoulli inequality for real exponents), actually proves that \(P(x) < 0\) for \(x > - 1\) and \(x \neq 0\) .
IMOSL-2017-A3
Let \(S\) be a finite set, and let \(\mathcal{A}\) be the set of all functions from \(S\) to \(S\) . Let \(f\) be an element of \(\mathcal{A}\) , and let \(T = f(S)\) be the image of \(S\) under \(f\) . Suppose that \(f \circ g \circ f \neq g \circ f \circ g\) for every \(g\) in \(\mathcal{A}\) with \(g \neq f\) . Show that \(f(T) = T\) .
Solution. For \(n \geqslant 1\) , denote the \(n\) - th composition of \(f\) with itself by
\[f^{n} \stackrel{\mathrm{def}}{=} \underbrace{f \circ f \circ \cdots \circ f}_{n \text{ times}}.\]
By hypothesis, if \(g \in \mathcal{A}\) satisfies \(f \circ g \circ f = g \circ f \circ g\) , then \(g = f\) . A natural idea is to try to plug in \(g = f^{n}\) for some \(n\) in the expression \(f \circ g \circ f = g \circ f \circ g\) in order to get \(f^{n} = f\) , which solves the problem:
Claim. If there exists \(n \geqslant 3\) such that \(f^{n + 2} = f^{2n + 1}\) , then the restriction \(f: T \to T\) of \(f\) to \(T\) is a bijection.
Proof. Indeed, by hypothesis, \(f^{n + 2} = f^{2n + 1} \iff f \circ f^{n} \circ f = f^{n} \circ f \circ f^{n} \implies f^{n} = f\) . Since \(n - 2 \geqslant 1\) , the image of \(f^{n - 2}\) is contained in \(T = f(S)\) , hence \(f^{n - 2}\) restricts to a function \(f^{n - 2}: T \to T\) . This is the inverse of \(f: T \to T\) . In fact, given \(t \in T\) , say \(t = f(s)\) with \(s \in S\) , we have
\[t = f(s) = f^{n}(s) = f^{n - 2}(f(t)) = f(f^{n - 2}(t)),\qquad \mathrm{i.e.,}\qquad f^{n - 2}\circ f = f\circ f^{n - 2} = \mathrm{id}\mathrm{~on~}T\]
(here id stands for the identity function). Hence, the restriction \(f: T \to T\) of \(f\) to \(T\) is bijective with inverse given by \(f^{n - 2}: T \to T\) . \(\square\)
It remains to show that \(n\) as in the claim exists. For that, define
\[S_{m} \stackrel{\mathrm{def}}{=} f^{m}(S) \qquad (S_{m} \text{ is the image of } f^{m}).\]
Clearly the image of \(f^{m + 1}\) is contained in the image of \(f^{m}\) , i.e., there is a descending chain of subsets of \(S\)
\[S \supseteq S_{1} \supseteq S_{2} \supseteq S_{3} \supseteq S_{4} \supseteq \dots ,\]
which must eventually stabilise since \(S\) is finite, i.e., there is a \(k \geqslant 1\) such that
\[S_{k} = S_{k + 1} = S_{k + 2} = S_{k + 3} = \dots \stackrel {\mathrm{def}}{=} S_{\infty}.\]
Hence \(f\) restricts to a surjective function \(f: S_{\infty} \to S_{\infty}\) , which is also bijective since \(S_{\infty} \subseteq S\) is finite. To sum up, \(f: S_{\infty} \to S_{\infty}\) is a permutation of the elements of the finite set \(S_{\infty}\) , hence there exists an integer \(r \geqslant 1\) such that \(f^{r} = \mathrm{id}\) on \(S_{\infty}\) (for example, we may choose \(r = |S_{\infty}|!\) ). In other words,
\[f^{m + r} = f^{m} \text{ on } S \text{ for all } m \geqslant k. \quad (*)\]
Clearly, \((*)\) also implies that \(f^{m + tr} = f^{m}\) for all integers \(t \geqslant 1\) and \(m \geqslant k\) . So, to find \(n\) as in the claim and finish the problem, it is enough to choose \(m\) and \(t\) in order to ensure that there exists \(n \geqslant 3\) satisfying
\[\left\{ \begin{array}{l}2n + 1 = m + tr \\ n + 2 = m \end{array} \right. \quad \iff \quad \left\{ \begin{array}{l}m = 3 + tr \\ n = m - 2. \end{array} \right.\]
This can be clearly done by choosing \(m\) large enough with \(m \equiv 3\) (mod \(r\) ). For instance, we may take \(n = 2kr + 1\) , so that
\[f^{n + 2} = f^{2kr + 3} = f^{4kr + 3} = f^{2n + 1}\]
where the middle equality follows by \((*)\) since \(2kr + 3 \geqslant k\) .
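The chain of reductions above (images stabilise to \(S_\infty\), \(f\) permutes \(S_\infty\), and relation \((*)\) holds with \(r = |S_\infty|!\)) can be verified exhaustively for every function on a small ground set; the 4-element set below is an arbitrary choice of ours.

```python
from itertools import product
from math import factorial

def compose(f, g):
    # pointwise composition (f o g) of maps given as tuples on {0, ..., n-1}
    return tuple(f[g[x]] for x in range(len(g)))

n = 4
for f in product(range(n), repeat=n):            # every f: S -> S with |S| = 4
    powers = [tuple(range(n))]                   # powers[m] = f^m (f^0 = id)
    for _ in range(41):
        powers.append(compose(f, powers[-1]))
    images = [set(p) for p in powers]
    S_inf = images[-1]                           # the stabilised image
    assert {f[x] for x in S_inf} == S_inf        # f restricts to a permutation of S_inf
    k = next(m for m in range(1, 42) if images[m] == S_inf)
    r = factorial(len(S_inf))                    # f^r = id on S_inf
    for m in range(k, 10):
        assert powers[m + r] == powers[m]        # relation (*) of the solution
```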
IMOSL-2017-A4
A sequence of real numbers \(a_{1}, a_{2}, \ldots\) satisfies the relation
\[a_{n} = -\max_{i + j = n}(a_{i} + a_{j})\qquad \mathrm{for~all~}n > 2017.\]
Prove that this sequence is bounded, i.e., there is a constant \(M\) such that \(|a_{n}| \leq M\) for all positive integers \(n\) .
Solution 1. Set \(D = 2017\) . Denote
\[M_{n} = \max_{k< n}a_{k}\qquad \mathrm{and}\qquad m_{n} = -\min_{k< n}a_{k} = \max_{k< n}(-a_{k}).\]
Clearly, the sequences \((m_{n})\) and \((M_{n})\) are nondecreasing. We need to prove that both are bounded.
Consider an arbitrary \(n > D\) ; our first aim is to bound \(a_{n}\) in terms of \(m_{n}\) and \(M_{n}\) .
(i) There exist indices \(p\) and \(q\) such that \(a_{n} = -(a_{p} + a_{q})\) and \(p + q = n\) . Since \(a_{p}, a_{q} \leq M_{n}\) , we have \(a_{n} \geq -2M_{n}\) .
(ii) On the other hand, choose an index \(k < n\) such that \(a_{k} = M_{n}\) . Then, we have
\[a_{n} = -\max_{\ell < n}(a_{n - \ell} + a_{\ell}) \leq -(a_{n - k} + a_{k}) = -a_{n - k} - M_{n} \leq m_{n} - M_{n}.\]
Summarizing (i) and (ii), we get
\[-2M_{n} \leq a_{n} \leq m_{n} - M_{n},\]
whence
\[m_{n} \leq m_{n + 1} \leq \max \{m_{n}, 2M_{n}\} \quad \text{and} \quad M_{n} \leq M_{n + 1} \leq \max \{M_{n}, m_{n} - M_{n}\} . \quad (1)\]
Now, say that an index \(n > D\) is lucky if \(m_{n} \leq 2M_{n}\) . Two cases are possible.
Case 1. Assume that there exists a lucky index \(n\) . In this case, (1) yields \(m_{n + 1} \leq 2M_{n}\) and \(M_{n} \leq M_{n + 1} \leq \max \{M_{n}, m_{n} - M_{n}\} = M_{n}\) , since luckiness gives \(m_{n} - M_{n} \leq M_{n}\) . Therefore, \(M_{n + 1} = M_{n}\) and \(m_{n + 1} \leq 2M_{n} = 2M_{n + 1}\) , so the index \(n + 1\) is also lucky. Applying the same argument repeatedly, we obtain that all indices \(k > n\) are lucky (i.e., \(m_{k} \leq 2M_{k}\) for all these indices), and \(M_{k} = M_{n}\) for all such indices. Thus, all of the \(m_{k}\) and \(M_{k}\) are bounded by \(2M_{n}\) .
Case 2. Assume now that there is no lucky index, i.e., \(2M_{n} < m_{n}\) for all \(n > D\) . Then (1) shows that for all \(n > D\) we have \(m_{n} \leq m_{n + 1} \leq m_{n}\) , so \(m_{n} = m_{D + 1}\) for all \(n > D\) . Since \(M_{n} < m_{n} / 2\) for all such indices, all of the \(m_{n}\) and \(M_{n}\) are bounded by \(m_{D + 1}\) .
Thus, in both cases the sequences \((m_{n})\) and \((M_{n})\) are bounded, as desired.
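The two-sided bound \(-2M_{n} \leq a_{n} \leq m_{n} - M_{n}\) at the heart of Solution 1 can be checked numerically; the small cutoff \(D = 5\) and the starting values below are illustrative stand-ins of ours for the 2017 initial terms.

```python
def extend(a, D, N):
    # extend the first D terms by a_n = -max_{i+j=n} (a_i + a_j) for n > D
    a = list(a)
    for n in range(D + 1, N + 1):
        a.append(-max(a[i - 1] + a[n - i - 1] for i in range(1, n)))
    return a

D, N = 5, 200
for start in [(3, -1, 4, -1, 5), (-9, 2, -6, 5, -3), (0, 0, 0, 0, 7)]:
    seq = extend(start, D, N)
    for n in range(D + 1, N + 1):
        Mn, mn = max(seq[:n - 1]), -min(seq[:n - 1])   # M_n and m_n
        assert -2 * Mn <= seq[n - 1] <= mn - Mn        # the key bound
```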
Solution 2. As in the previous solution, let \(D = 2017\) . If the sequence is bounded above, say, by \(Q\) , then we have that \(a_{n} \geq \min \{a_{1}, \ldots , a_{D}, - 2Q\}\) for all \(n\) , so the sequence is bounded. Assume for sake of contradiction that the sequence is not bounded above. Let \(\ell = \min \{a_{1}, \ldots , a_{D}\}\) , and \(L = \max \{a_{1}, \ldots , a_{D}\}\) . Call an index \(n\) good if the following criteria hold:
\[a_{n} > a_{i} \text{ for each } i < n, \quad a_{n} > -2\ell, \quad \text{and} \quad n > D \quad (2)\]
We first show that there must be some good index \(n\) . By assumption, we may take an index \(N\) such that \(a_{N} > \max \{L, - 2\ell \}\) . Choose \(n\) minimally such that \(a_{n} = \max \{a_{1}, a_{2}, \ldots , a_{N}\}\) . Now, the first condition in (2) is satisfied because of the minimality of \(n\) , and the second and third conditions are satisfied because \(a_{n} \geq a_{N} > L, - 2\ell\) , and \(L \geq a_{i}\) for every \(i\) such that \(1 \leq i \leq D\) .
Let \(n\) be a good index. We derive a contradiction. We have that
\[a_{n} + a_{u} + a_{v}\leqslant 0, \quad (3)\]
whenever \(u + v = n\) .
We define the index \(u\) to maximize \(a_{u}\) over \(1\leqslant u\leqslant n - 1\) , and let \(v = n - u\) . Then, we note that \(a_{u}\geqslant a_{v}\) by the maximality of \(a_{u}\) .
Assume first that \(v\leqslant D\) . Then, we have that
\[a_{n} + 2\ell \leqslant 0,\]
because \(a_{u}\geqslant a_{v}\geqslant \ell\) . But this contradicts the condition \(a_{n} > - 2\ell\) in the second criterion of (2).
Now assume that \(v > D\) . Then, there exist some indices \(w_{1},w_{2}\) summing up to \(v\) such that
\[a_{v} + a_{w_{1}} + a_{w_{2}} = 0.\]
But combining this with (3), we have
\[a_{n} + a_{u}\leqslant a_{w_{1}} + a_{w_{2}}.\]
Because \(a_{n} > a_{u}\) , we have that \(\max \{a_{w_{1}},a_{w_{2}}\} >a_{u}\) . But since each of the \(w_{i}\) is less than \(v\) , this contradicts the maximality of \(a_{u}\) .
Comment 1. We present two harder versions of this problem below.
Version 1. Let \(a_{1},a_{2},\ldots\) be a sequence of numbers that satisfies the relation
\[a_{n} = -\max_{i + j + k = n}(a_{i} + a_{j} + a_{k})\qquad \mathrm{for~all~}n > 2017.\]
Then, this sequence is bounded.
Proof. Set \(D = 2017\) . Denote
\[M_{n} = \max_{k< n}a_{k}\qquad \mathrm{and}\qquad m_{n} = -\min_{k< n}a_{k} = \max_{k< n}(-a_{k}).\]
Clearly, the sequences \((m_{n})\) and \((M_{n})\) are nondecreasing. We need to prove that both are bounded.
Consider an arbitrary \(n > 2D\) ; our first aim is to bound \(a_{n}\) in terms of \(m_{i}\) and \(M_{i}\) . Set \(k = \lfloor n / 2\rfloor\) .
(i) Choose indices \(p\) , \(q\) , and \(r\) such that \(a_{n} = -(a_{p} + a_{q} + a_{r})\) and \(p + q + r = n\) . Without loss of generality, \(p\geqslant q\geqslant r\) .
Assume that \(p\geqslant k + 1\) (so that \(p > D\) ); then \(p > q + r\) . Hence
\[-a_{p} = \max_{i_{1} + i_{2} + i_{3} = p}(a_{i_{1}} + a_{i_{2}} + a_{i_{3}})\geqslant a_{q} + a_{r} + a_{p - q - r},\]
and therefore \(a_{n} = -(a_{p} + a_{q} + a_{r})\geqslant (a_{q} + a_{r} + a_{p - q - r}) - a_{q} - a_{r} = a_{p - q - r}\geqslant - m_{n}\) .
Otherwise, we have \(k\geqslant p\geqslant q\geqslant r\) . Since \(n< 3k\) , we have \(r< k\) . Then \(a_{p},a_{q}\leqslant M_{k + 1}\) and \(a_{r}\leqslant M_{k}\) , whence \(a_{n}\geqslant - 2M_{k + 1} - M_{k}\) .
Thus, in any case \(a_{n}\geqslant -\max \{m_{n},2M_{k + 1} + M_{k}\}\) .
(ii) On the other hand, choose \(p\leqslant k\) and \(q\leqslant k - 1\) such that \(a_{p} = M_{k + 1}\) and \(a_{q} = M_{k}\) . Then \(p + q< n\) , so \(a_{n}\leqslant -(a_{p} + a_{q} + a_{n - p - q}) = - a_{n - p - q} - M_{k + 1} - M_{k}\leqslant m_{n} - M_{k + 1} - M_{k}\) .
To summarize,
\[-\max \{m_{n},2M_{k + 1} + M_{k}\} \leqslant a_{n}\leqslant m_{n} - M_{k + 1} - M_{k},\]
whence
\[m_{n}\leqslant m_{n + 1}\leqslant \max \{m_{n},2M_{k + 1} + M_{k}\} \quad \mathrm{and}\quad M_{n}\leqslant M_{n + 1}\leqslant \max \{M_{n},m_{n} - M_{k + 1} - M_{k}\} . \quad (4)\]
Now, say that an index \(n > 2D\) is lucky if \(m_{n} \leqslant 2M_{\lfloor n / 2\rfloor + 1} + M_{\lfloor n / 2\rfloor}\) . Two cases are possible.
Case 1. Assume that there exists a lucky index \(n\) ; set \(k = \lfloor n / 2\rfloor\) . In this case, (4) yields \(m_{n + 1} \leqslant 2M_{k + 1} + M_{k}\) and \(M_{n} \leqslant M_{n + 1} \leqslant M_{n}\) (the last relation holds, since \(m_{n} - M_{k + 1} - M_{k} \leqslant (2M_{k + 1} + M_{k}) - M_{k + 1} - M_{k} = M_{k + 1} \leqslant M_{n}\) ). Therefore, \(M_{n + 1} = M_{n}\) and \(m_{n + 1} \leqslant 2M_{k + 1} + M_{k}\) ; the last relation shows that the index \(n + 1\) is also lucky.
Thus, all indices \(N > n\) are lucky, and \(M_{N} = M_{n} \geqslant m_{N} / 3\) , whence all the \(m_{N}\) and \(M_{N}\) are bounded by \(3M_{n}\) .
Case 2. Conversely, assume that there is no lucky index, i.e., \(2M_{\lfloor n / 2\rfloor + 1} + M_{\lfloor n / 2\rfloor} < m_{n}\) for all \(n > 2D\) . Then (4) shows that for all \(n > 2D\) we have \(m_{n} \leqslant m_{n + 1} \leqslant m_{n}\) , i.e., \(m_{N} = m_{2D + 1}\) for all \(N > 2D\) . Since \(M_{N} < m_{2N + 1} / 3 = m_{2D + 1} / 3\) for all such indices, all the \(m_{N}\) and \(M_{N}\) are bounded by \(m_{2D + 1}\) .
Thus, in both cases the sequences \((m_{n})\) and \((M_{n})\) are bounded, as desired.
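The analogous two-sided bound for the triple recurrence (the display before (4)) can also be checked numerically; the cutoff \(D = 3\) and the starting values below are illustrative choices of ours.

```python
def extend3(a, D, N):
    # extend by a_n = -max over i+j+l = n (i, j, l >= 1) of (a_i + a_j + a_l), for n > D
    a = list(a)
    for n in range(D + 1, N + 1):
        best = max(a[i - 1] + a[j - 1] + a[n - i - j - 1]
                   for i in range(1, n - 1) for j in range(1, n - i))
        a.append(-best)
    return a

D, N = 3, 60
for start in [(-4, 7, 2), (5, -5, 1), (0, 0, 3)]:
    seq = extend3(start, D, N)
    for n in range(2 * D + 1, N + 1):
        k = n // 2                                    # k = floor(n/2)
        Mk1, Mk = max(seq[:k]), max(seq[:k - 1])      # M_{k+1}, M_k
        mn = -min(seq[:n - 1])                        # m_n
        assert -max(mn, 2 * Mk1 + Mk) <= seq[n - 1] <= mn - Mk1 - Mk
```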
Version 2. Fix an integer \(k \geqslant 2\) , and let \(a_{1}, a_{2}, \ldots\) be a sequence of numbers that satisfies the relation
\[a_{n} = -\max_{i_{1} + \dots +i_{k} = n}(a_{i_{1}} + \dots +a_{i_{k}}) \qquad \text{for all } n > 2017.\]
Then, this sequence is bounded.
Proof. As in the solutions above, let \(D = 2017\) . If the sequence is bounded above, say, by \(Q\) , then we have that \(a_{n} \geqslant \min \{a_{1}, \ldots , a_{D}, - kQ\}\) for all \(n\) , so the sequence is bounded. Assume for sake of contradiction that the sequence is not bounded above. Let \(\ell = \min \{a_{1}, \ldots , a_{D}\}\) , and \(L = \max \{a_{1}, \ldots , a_{D}\}\) . Call an index \(n\) good if the following criteria hold:
\[a_{n} > a_{i} \text{ for each } i < n, \quad a_{n} > -k\ell , \quad \text{and} \quad n > D \quad (5)\]
We first show that there must be some good index \(n\) . By assumption, we may take an index \(N\) such that \(a_{N} > \max \{L, - k\ell \}\) . Choose \(n\) minimally such that \(a_{n} = \max \{a_{1}, a_{2}, \ldots , a_{N}\}\) . Now, the first condition is satisfied because of the minimality of \(n\) , and the second and third conditions are satisfied because \(a_{n} \geqslant a_{N} > L, - k\ell\) , and \(L \geqslant a_{i}\) for every \(i\) such that \(1 \leqslant i \leqslant D\) .
Let \(n\) be a good index. We derive a contradiction. We have that
\[a_{n} + a_{v_{1}} + \dots +a_{v_{k}} \leqslant 0, \quad (6)\]
whenever \(v_{1} + \dots +v_{k} = n\) .
We define the sequence of indices \(v_{1}, \ldots , v_{k - 1}\) to greedily maximize \(a_{v_{1}}\) , then \(a_{v_{2}}\) , and so forth, selecting only from indices such that the equation \(v_{1} + \dots +v_{k} = n\) can be satisfied by positive integers \(v_{1}, \ldots , v_{k}\) . More formally, we define them inductively so that the following criteria are satisfied by the \(v_{i}\) :
1. \(1 \leqslant v_{i} \leqslant n - (k - i) - (v_{1} + \dots +v_{i - 1})\) .
2. \(a_{v_{i}}\) is maximal among all choices of \(v_{i}\) from the first criteria.
First of all, we note that for each \(i\) , the first criterion can always be satisfied by some \(v_{i}\) , because we are guaranteed that
\[v_{i - 1} \leqslant n - (k - (i - 1)) - (v_{1} + \dots +v_{i - 2}),\]
which implies
\[1 \leqslant n - (k - i) - (v_{1} + \dots +v_{i - 1}).\]
Secondly, the sum \(v_{1} + \dots +v_{k - 1}\) is at most \(n - 1\) . Define \(v_{k} = n - (v_{1} + \dots +v_{k - 1})\) . Then, (6) is satisfied by the \(v_{i}\) . We also note that \(a_{v_{i}} \geqslant a_{v_{j}}\) for all \(i < j\) ; otherwise, in the definition of \(v_{i}\) , we could have selected \(v_{j}\) instead.
Assume first that \(v_{k} \leqslant D\) . Then, from (6), we have that
\[a_{n} + k\ell \leqslant 0,\]
by using that \(a_{v_{1}} \geqslant \dots \geqslant a_{v_{k}} \geqslant \ell\) . But this contradicts our assumption that \(a_{n} > - k\ell\) in the second criterion of (5).
Now assume that \(v_{k} > D\) , and then we must have some indices \(w_{1},\ldots ,w_{k}\) summing up to \(v_{k}\) such that
\[a_{v_{k}} + a_{w_{1}} + \dots +a_{w_{k}} = 0.\]
But combining this with (6), we have
\[a_{n} + a_{v_{1}} + \dots +a_{v_{k - 1}}\leqslant a_{w_{1}} + \dots +a_{w_{k}}.\]
Because \(a_{n} > a_{v_{1}}\geqslant \dots \geqslant a_{v_{k - 1}}\) , we have that \(\max \{a_{w_{1}},\ldots ,a_{w_{k}}\} >a_{v_{k - 1}}\) . But since each of the \(w_{i}\) is less than \(v_{k}\) , in the definition of \(v_{k - 1}\) we could have chosen one of the \(w_{i}\) instead, which is a contradiction. \(\square\)
Comment 2. It seems that each sequence satisfying the condition in Version 2 is eventually periodic, at least when its terms are integers.
However, up to this moment, the Problem Selection Committee is not aware of a proof for this fact (even in the case \(k = 2\) ).
|
IMOSL-2017-A6
|
Find all functions \(f\colon \mathbb{R}\to \mathbb{R}\) such that
\[f(f(x)f(y)) + f(x + y) = f(xy) \quad (*)\]
for all \(x,y\in \mathbb{R}\) .
|
Answer: There are 3 solutions:
\[x\mapsto 0\qquad \mathrm{or}\qquad x\mapsto x - 1\qquad \mathrm{or}\qquad x\mapsto 1 - x \qquad (x\in \mathbb{R}).\]
Solution. An easy check shows that all three functions above indeed satisfy the original equation \((\ast)\) .
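The verification that these three maps satisfy \((\ast)\) can also be probed numerically; the following sketch (illustrative only, with arbitrary sampling bounds) checks the identity on random inputs:

```python
import random

# Numerical sanity check (illustrative, not a proof): each of the three
# claimed solutions satisfies f(f(x)f(y)) + f(x+y) = f(xy).
solutions = [lambda x: 0.0, lambda x: x - 1.0, lambda x: 1.0 - x]

random.seed(0)
for f in solutions:
    for _ in range(1000):
        x, y = random.uniform(-10, 10), random.uniform(-10, 10)
        assert abs(f(f(x) * f(y)) + f(x + y) - f(x * y)) < 1e-9
print("all three candidate solutions satisfy (*) on sampled inputs")
```
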
In order to show that these are the only solutions, first observe that if \(f(x)\) is a solution then \(- f(x)\) is also a solution. Hence, without loss of generality we may (and will) assume that \(f(0)\leqslant 0\) from now on. We have to show that either \(f\) is identically zero or \(f(x) = x - 1\) \((\forall x\in \mathbb{R})\) .
Observe that, for a fixed \(x\neq 1\) , we may choose \(y\in \mathbb{R}\) so that \(x + y = xy\iff y = \frac{x}{x - 1}\) and therefore from the original equation \((\ast)\) we have
\[f\bigg(f(x)\cdot f\Big(\frac{x}{x - 1}\Big)\bigg) = 0\qquad (x\neq 1). \quad (1)\]
In particular, plugging in \(x = 0\) in (1), we conclude that \(f\) has at least one zero, namely \((f(0))^{2}\) :
\[f\big((f(0))^{2}\big) = 0. \quad (2)\]
We analyze two cases (recall that \(f(0)\leqslant 0\) ):
Case 1: \(f(0) = 0\)
Setting \(y = 0\) in the original equation we get the identically zero solution:
\[f(f(x)f(0)) + f(x) = f(0)\Rightarrow f(x) = 0\mathrm{~for~all~}x\in \mathbb{R}.\]
From now on, we work on the main
Case 2: \(f(0)< 0\)
We begin with the following
Claim 1.
\[f(1) = 0,\qquad f(a) = 0\Rightarrow a = 1,\qquad \mathrm{and}\qquad f(0) = -1. \quad (3)\]
Proof. We need to show that 1 is the unique zero of \(f\) . First, observe that \(f\) has at least one zero \(a\) by (2); if \(a\neq 1\) then setting \(x = a\) in (1) we get \(f(0) = 0\) , a contradiction. Hence from (2) we get \((f(0))^{2} = 1\) . Since we are assuming \(f(0)< 0\) , we conclude that \(f(0) = - 1\) . \(\square\)
Setting \(y = 1\) in the original equation \((\ast)\) we get
\[f(f(x)f(1)) + f(x + 1) = f(x)\iff f(0) + f(x + 1) = f(x)\iff f(x + 1) = f(x) + 1\qquad (x\in \mathbb{R}). \quad (x\in \mathbb{R}).\]
An easy induction shows that
\[f(x + n) = f(x) + n\qquad (x\in \mathbb{R}, n\in \mathbb{Z}). \quad (4)\]
Now we make the following
Claim 2. \(f\) is injective.
Proof. Suppose that \(f(a) = f(b)\) with \(a\neq b\) . Then by (4), for all \(N\in \mathbb{Z}\)
\[f(a + N + 1) = f(b + N) + 1.\]
Choose any integer \(N< - b\) ; then \(b + N< 0\) , so there exist \(x_{0},y_{0}\in \mathbb{R}\) with \(x_{0} + y_{0} = a + N + 1\) and \(x_{0}y_{0} = b + N\) (they are the real roots of \(t^{2} - (a + N + 1)t + (b + N) = 0\) , whose discriminant is positive since \(b + N< 0\) ). Since \(a\neq b\) , we have \(x_{0}\neq 1\) and \(y_{0}\neq 1\) . Plugging in \(x_{0}\) and \(y_{0}\) in the original equation \((\ast)\) we get
\[f(f(x_{0})f(y_{0})) + f(a + N + 1) = f(b + N)\iff f(f(x_{0})f(y_{0})) + 1 = 0\] \[\iff f(f(x_{0})f(y_{0}) + 1) = 0\qquad \mathrm{by~}(4)\] \[\iff f(x_{0})f(y_{0}) = 0\qquad \mathrm{by~}(3).\]
However, by Claim 1 we have \(f(x_{0})\neq 0\) and \(f(y_{0})\neq 0\) since \(x_{0}\neq 1\) and \(y_{0}\neq 1\) , a contradiction. \(\square\)
Now the end is near. For any \(t\in \mathbb{R}\) , plug in \((x,y) = (t, - t)\) in the original equation \((\ast)\) to get
\[f(f(t)f(-t)) + f(0) = f(-t^{2})\iff f(f(t)f(-t)) = f(-t^{2}) + 1\qquad \mathrm{by~}(3)\] \[\iff f(f(t)f(-t)) = f(-t^{2} + 1)\qquad \mathrm{by~}(4)\] \[\iff f(t)f(-t) = -t^{2} + 1\qquad \mathrm{by~injectivity~of~}f.\]
Similarly, plugging in \((x,y) = (t,1 - t)\) in \((\ast)\) we get
\[f(f(t)f(1 - t)) + f(1) = f(t(1 - t))\iff f(f(t)f(1 - t)) = f(t(1 - t))\quad \mathrm{by~}(3)\] \[\iff f(t)f(1 - t) = t(1 - t)\qquad \mathrm{by~injectivity~of~}f.\]
But since \(f(1 - t) = 1 + f(- t)\) by (4), we get
\[f(t)f(1 - t) = t(1 - t)\iff f(t)(1 + f(-t)) = t(1 - t)\iff f(t) + (-t^{2} + 1) = t(1 - t)\] \[\iff f(t) = t - 1,\]
as desired.
Comment. Other approaches are possible. For instance, after Claim 1, we may define
\[g(x)\stackrel {\mathrm{def}}{=}f(x) + 1.\]
Replacing \(x + 1\) and \(y + 1\) in place of \(x\) and \(y\) in the original equation \((\ast)\) , we get
\[f(f(x + 1)f(y + 1)) + f(x + y + 2) = f(xy + x + y + 1)\qquad (x,y\in \mathbb{R}),\]
and therefore, using (4) (so that in particular \(g(x) = f(x + 1)\) ), we may rewrite \((\ast)\) as
\[g(g(x)g(y)) + g(x + y) = g(xy + x + y)\qquad (x,y\in \mathbb{R}). \quad (**)\]
We are now to show that \(g(x) = x\) for all \(x\in \mathbb{R}\) under the assumption (Claim 1) that 0 is the unique zero of \(g\) .
Claim 3. Let \(n\in \mathbb{Z}\) and \(x\in \mathbb{R}\) . Then
(a) \(g(x + n) = g(x) + n\) , and the conditions \(g(x) = n\) and \(x = n\) are equivalent.
(b) \(g(nx) = ng(x)\) .
Proof. For part (a), note that \(g(x + n) = g(x) + n\) is just a reformulation of (4). Then \(g(x) = n \iff g(x - n) = 0 \iff x - n = 0\) since 0 is the unique zero of \(g\) . For part (b), we may assume that \(x \neq 0\) since the result is obvious when \(x = 0\) . Plug in \(y = n / x\) in \((\ast \ast)\) and use part (a) to get
\[g\left(g(x)g\left(\frac{n}{x}\right)\right) + g\left(x + \frac{n}{x}\right) = g\left(n + x + \frac{n}{x}\right) \iff g\left(g(x)g\left(\frac{n}{x}\right)\right) = n \iff g(x)g\left(\frac{n}{x}\right) = n.\]
In other words, for \(x \neq 0\) we have
\[g(x) = \frac{n}{g(n / x)}.\]
In particular, for \(n = 1\) , we get \(g(1 / x) = 1 / g(x)\) , and therefore replacing \(x \leftarrow nx\) in the last equation we finally get
\[g(nx) = \frac{n}{g(1 / x)} = ng(x),\]
as required.
Claim 4. The function \(g\) is additive, i.e., \(g(a + b) = g(a) + g(b)\) for all \(a, b \in \mathbb{R}\) .
Proof. Set \(x \leftarrow - x\) and \(y \leftarrow - y\) in \((\ast \ast)\) ; since \(g\) is an odd function (by Claim 3(b) with \(n = - 1\) ), we get
\[g(g(x)g(y)) - g(x + y) = -g(-xy + x + y).\]
Subtracting the last relation from \((\ast \ast)\) we have
\[2g(x + y) = g(xy + x + y) + g(-xy + x + y)\]
and since by Claim 3(b) we have \(2g(x + y) = g(2(x + y))\) , we may rewrite the last equation as
\[
g(\alpha + \beta) = g(\alpha) + g(\beta)
\quad \text{where} \quad
\begin{cases}
\alpha = xy + x + y, \\
\beta = -xy + x + y.
\end{cases}
\]
In other words, we have additivity for all \(\alpha , \beta \in \mathbb{R}\) for which there are real numbers \(x\) and \(y\) satisfying
\[x + y = \frac{\alpha + \beta}{2}\qquad \mathrm{and}\qquad xy = \frac{\alpha - \beta}{2},\]
i.e., for all \(\alpha , \beta \in \mathbb{R}\) such that \((\frac{\alpha + \beta}{2})^{2} - 4 \cdot \frac{\alpha - \beta}{2} \geqslant 0\) . Therefore, given any \(a, b \in \mathbb{R}\) , we may choose \(n \in \mathbb{Z}\) large enough so that we have additivity for \(\alpha = na\) and \(\beta = nb\) , i.e.,
\[g(na) + g(nb) = g(na + nb) \iff ng(a) + ng(b) = ng(a + b)\]
by Claim 3(b). Cancelling \(n\) , we get the desired result. (Alternatively, setting either \((\alpha , \beta) = (a, b)\) or \((\alpha , \beta) = (- a, - b)\) will ensure that \((\frac{\alpha + \beta}{2})^{2} - 4 \cdot \frac{\alpha - \beta}{2} \geqslant 0\) ). \(\square\)
Now we may finish the solution. Set \(y = 1\) in \((\ast \ast)\) , and use Claim 3 to get
\[g(g(x)g(1)) + g(x + 1) = g(2x + 1) \iff g(g(x)) + g(x) + 1 = 2g(x) + 1 \iff g(g(x)) = g(x).\]
By additivity, this is equivalent to \(g(g(x) - x) = 0\) . Since 0 is the unique zero of \(g\) by assumption, we finally get \(g(x) - x = 0 \iff g(x) = x\) for all \(x \in \mathbb{R}\) .
|
IMOSL-2017-A7
|
Let \(a_{0},a_{1},a_{2},\ldots\) be a sequence of integers and \(b_{0},b_{1},b_{2},\ldots\) be a sequence of positive integers such that \(a_{0} = 0,a_{1} = 1\) , and
\[a_{n + 1} = \left\{ \begin{array}{ll}a_{n}b_{n} + a_{n - 1}, & \mathrm{if~}b_{n - 1} = 1\\ a_{n}b_{n} - a_{n - 1}, & \mathrm{if~}b_{n - 1} > 1 \end{array} \right. \quad \mathrm{for~}n = 1,2,\ldots\]
Prove that at least one of the two numbers \(a_{2017}\) and \(a_{2018}\) must be greater than or equal to 2017.
|
Solution 1. The value of \(b_{0}\) is irrelevant since \(a_{0} = 0\) , so we may assume that \(b_{0} = 1\) .
Lemma. We have \(a_{n}\geqslant 1\) for all \(n\geqslant 1\) .
Proof. Let us suppose otherwise in order to obtain a contradiction. Let
\[n\geqslant 1\mathrm{~be~the~smallest~integer~with~}a_{n}\leqslant 0. \quad (1)\]
Note that \(n\geqslant 2\) . It follows that \(a_{n - 1}\geqslant 1\) and \(a_{n - 2}\geqslant 0\) . Thus we cannot have \(a_{n} = a_{n - 1}b_{n - 1} + a_{n - 2}\) , so we must have \(a_{n} = a_{n - 1}b_{n - 1} - a_{n - 2}\) . Since \(a_{n}\leqslant 0\) , we have \(a_{n - 1}\leqslant a_{n - 2}\) . Thus we have \(a_{n - 2}\geqslant a_{n - 1}\geqslant a_{n}\) .
Let
\[r\mathrm{~be~the~smallest~index~with~}a_{r}\geqslant a_{r + 1}\geqslant a_{r + 2}. \quad (2)\]
Then \(r\leqslant n - 2\) by the above, but also \(r\geqslant 2\) : if \(b_{1} = 1\) , then \(a_{2} = a_{1} = 1\) and \(a_{3} = a_{2}b_{2} + a_{1} > a_{2}\) ; if \(b_{1} > 1\) , then \(a_{2} = b_{1} > 1 = a_{1}\) .
By the minimal choice (2) of \(r\) , it follows that \(a_{r - 1}< a_{r}\) . And since \(2\leqslant r\leqslant n - 2\) , by the minimal choice (1) of \(n\) we have \(a_{r - 1},a_{r},a_{r + 1} > 0\) . In order to have \(a_{r + 1}\geqslant a_{r + 2}\) , we must have \(a_{r + 2} = a_{r + 1}b_{r + 1} - a_{r}\) so that \(b_{r}\geqslant 2\) . Putting everything together, we conclude that
\[a_{r + 1} = a_{r}b_{r}\pm a_{r - 1}\geqslant 2a_{r} - a_{r - 1} = a_{r} + (a_{r} - a_{r - 1}) > a_{r},\]
which contradicts (2). \(\square\)
To complete the problem, we prove that \(\max \{a_{n},a_{n + 1}\} \geqslant n\) by induction. The cases \(n = 0,1\) are given. Assume it is true for all non- negative integers strictly less than \(n\) , where \(n\geqslant 2\) . There are two cases:
Case 1: \(b_{n - 1} = 1\) .
Then \(a_{n + 1} = a_{n}b_{n} + a_{n - 1}\) . By the inductive assumption one of \(a_{n - 1}\) , \(a_{n}\) is at least \(n - 1\) and the other, by the lemma, is at least 1. Hence
\[a_{n + 1} = a_{n}b_{n} + a_{n - 1}\geqslant a_{n} + a_{n - 1}\geqslant (n - 1) + 1 = n.\]
Thus \(\max \{a_{n},a_{n + 1}\} \geqslant n\) , as desired.
Case 2: \(b_{n - 1} > 1\) .
Since we defined \(b_{0} = 1\) there is an index \(r\) with \(1\leqslant r\leqslant n - 1\) such that
\[b_{n - 1},b_{n - 2},\ldots ,b_{r}\geqslant 2\qquad \mathrm{and}\qquad b_{r - 1} = 1.\]
We have \(a_{r + 1} = a_{r}b_{r} + a_{r - 1}\geqslant 2a_{r} + a_{r - 1}\) . Thus \(a_{r + 1} - a_{r}\geqslant a_{r} + a_{r - 1}\) .
Now we claim that \(a_{r} + a_{r - 1}\geqslant r\) . Indeed, this holds by inspection for \(r = 1\) ; for \(r\geqslant 2\) , one of \(a_{r},a_{r - 1}\) is at least \(r - 1\) by the inductive assumption, while the other, by the lemma, is at least 1. Hence \(a_{r} + a_{r - 1}\geqslant r\) , as claimed, and therefore \(a_{r + 1} - a_{r}\geqslant r\) by the last inequality in the previous paragraph.
Since \(r\geqslant 1\) and, by the lemma, \(a_{r}\geqslant 1\) , from \(a_{r + 1} - a_{r}\geqslant r\) we get the following two inequalities:
\[a_{r + 1}\geqslant r + 1\qquad \mathrm{and}\qquad a_{r + 1} > a_{r}.\]
Now observe that
\[a_{m} > a_{m - 1}\Longrightarrow a_{m + 1} > a_{m}\mathrm{~for~}m = r + 1,r + 2,\ldots ,n - 1,\]
since \(a_{m + 1} = a_{m}b_{m} - a_{m - 1}\geqslant 2a_{m} - a_{m - 1} = a_{m} + (a_{m} - a_{m - 1}) > a_{m}\) . Thus
\[a_{n} > a_{n - 1} > \dots >a_{r + 1}\geqslant r + 1\Longrightarrow a_{n}\geqslant n.\]
So \(\max \{a_{n},a_{n + 1}\} \geqslant n\) , as desired.
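The recurrence and the bound just proved are easy to probe experimentally. A short simulation sketch (the particular random distribution of the \(b_{n}\) is an arbitrary choice, not part of the problem):

```python
import random

def build_sequence(b, n_terms):
    """Build a_0..a_{n_terms} from a list b[0..] of positive integers
    via the recurrence of the problem (b[0] is irrelevant since a_0 = 0)."""
    a = [0, 1]
    for n in range(1, n_terms):
        if b[n - 1] == 1:
            a.append(a[n] * b[n] + a[n - 1])
        else:
            a.append(a[n] * b[n] - a[n - 1])
    return a

random.seed(1)
N = 60
for trial in range(200):
    b = [random.choice([1, 1, 2, 3]) for _ in range(N + 1)]
    a = build_sequence(b, N)
    assert all(a[n] >= 1 for n in range(1, len(a)))          # the Lemma
    assert all(max(a[n], a[n + 1]) >= n for n in range(N))   # the main bound
print("lemma and bound hold for all random trials")
```

With all \(b_{n} = 1\) the sequence is the Fibonacci sequence, for which the bound \(\max \{a_{n},a_{n + 1}\} \geqslant n\) is tight at the start.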
Solution 2. We say that an index \(n > 1\) is bad if \(b_{n - 1} = 1\) and \(b_{n - 2} > 1\) ; otherwise \(n\) is good. The value of \(b_{0}\) is irrelevant to the definition of \((a_{n})\) since \(a_{0} = 0\) ; so we assume that \(b_{0} > 1\) .
Lemma 1. (a) \(a_{n}\geqslant 1\) for all \(n > 0\) .
(b) If \(n > 1\) is good, then \(a_{n} > a_{n - 1}\) .
Proof. Induction on \(n\) . In the base cases \(n = 1,2\) we have \(a_{1} = 1\geqslant 1\) , \(a_{2} = b_{1}a_{1}\geqslant 1\) , and finally \(a_{2} > a_{1}\) if 2 is good, since in this case \(b_{1} > 1\) .
Now we assume that the lemma statement is proved for \(n = 1,2,\ldots ,k\) with \(k\geqslant 2\) , and prove it for \(n = k + 1\) . Recall that \(a_{k}\) and \(a_{k - 1}\) are positive by the induction hypothesis.
Case 1: \(k\) is bad.
We have \(b_{k - 1} = 1\) , so \(a_{k + 1} = b_{k}a_{k} + a_{k - 1}\geqslant a_{k} + a_{k - 1} > a_{k}\geqslant 1\) , as required.
Case 2: \(k\) is good.
We already have \(a_{k} > a_{k - 1}\geqslant 1\) by the induction hypothesis. We consider three easy subcases.
Subcase 2.1: \(b_{k} > 1\) .
Then \(a_{k + 1}\geqslant b_{k}a_{k} - a_{k - 1}\geqslant a_{k} + (a_{k} - a_{k - 1}) > a_{k}\geqslant 1\) .
Subcase 2.2: \(b_{k} = b_{k - 1} = 1\) .
Then \(a_{k + 1} = a_{k} + a_{k - 1} > a_{k}\geqslant 1\) .
Subcase 2.3: \(b_{k} = 1\) but \(b_{k - 1} > 1\) .
Then \(k + 1\) is bad, and we need to prove only (a), which is trivial: \(a_{k + 1} = a_{k} - a_{k - 1}\geqslant 1\) since \(a_{k} > a_{k - 1}\) and the \(a_{i}\) are integers.
So, in all three subcases we have verified the required relations. \(\square\)
Lemma 2. Assume that \(n > 1\) is bad. Then there exists a \(j\in \{1,2,3\}\) such that \(a_{n + j}\geqslant a_{n - 1} + j + 1\) , and \(a_{n + i}\geqslant a_{n - 1} + i\) for all \(1\leqslant i< j\) .
Proof. Recall that \(b_{n - 1} = 1\) . Set
\[m = \inf \{i > 0\colon b_{n + i - 1} > 1\}\]
(possibly \(m = +\infty\) ). We claim that \(j = \min \{m,3\}\) works. Again, we distinguish several cases, according to the value of \(m\) ; in each of them we use Lemma 1 without reference.
Case 1: \(m = 1\) , so \(b_{n} > 1\)
Then \(a_{n + 1}\geqslant 2a_{n} + a_{n - 1}\geqslant a_{n - 1} + 2\) , as required.
Case 2: \(m = 2\) , so \(b_{n} = 1\) and \(b_{n + 1} > 1\)
Then we successively get
\[a_{n + 1} = a_{n} + a_{n - 1}\geqslant a_{n - 1} + 1,\] \[a_{n + 2}\geqslant 2a_{n + 1} + a_{n}\geqslant 2(a_{n - 1} + 1) + a_{n} = a_{n - 1} + (a_{n - 1} + a_{n} + 2)\geqslant a_{n - 1} + 4,\]
which is even better than we need.
Case 3: \(m > 2\) , so \(b_{n} = b_{n + 1} = 1\) .
Then we successively get
\[a_{n + 1} = a_{n} + a_{n - 1}\geqslant a_{n - 1} + 1,\quad a_{n + 2} = a_{n + 1} + a_{n}\geqslant a_{n - 1} + 1 + a_{n}\geqslant a_{n - 1} + 2,\] \[a_{n + 3}\geqslant a_{n + 2} + a_{n + 1}\geqslant (a_{n - 1} + 1) + (a_{n - 1} + 2)\geqslant a_{n - 1} + 4,\]
as required. \(\square\)
Lemmas 1(b) and 2 provide enough information to prove that \(\max \{a_{n}, a_{n + 1}\} \geqslant n\) for all \(n\) and, moreover, that \(a_{n} \geqslant n\) often enough. Indeed, assume that we have found some \(n\) with \(a_{n - 1} \geqslant n - 1\) . If \(n\) is good, then by Lemma 1(b) we have \(a_{n} \geqslant n\) as well. If \(n\) is bad, then Lemma 2 yields \(\max \{a_{n + i}, a_{n + i + 1}\} \geqslant a_{n - 1} + i + 1 \geqslant n + i\) for all \(0 \leqslant i < j\) and \(a_{n + j} \geqslant a_{n - 1} + j + 1 \geqslant n + j\) ; so \(n + j\) is the next index to start with.
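Lemma 2 can likewise be spot-checked by simulation. In the sketch below the distribution of the \(b_{n}\) is again an arbitrary choice, and \(b_{0} > 1\) matches the convention of this solution:

```python
import random

# Illustrative check of Lemma 2 on random data: at every bad index n
# (b_{n-1} = 1 and b_{n-2} > 1) there is some j in {1,2,3} with
# a_{n+j} >= a_{n-1} + j + 1 and a_{n+i} >= a_{n-1} + i for 1 <= i < j.
def build(b):
    a = [0, 1]
    for n in range(1, len(b) - 1):
        a.append(a[n] * b[n] + a[n - 1] if b[n - 1] == 1
                 else a[n] * b[n] - a[n - 1])
    return a

rng = random.Random(4)
for _ in range(300):
    b = [rng.choice([2, 3])] + [rng.choice([1, 1, 2, 3]) for _ in range(50)]
    a = build(b)
    for n in range(2, len(a) - 4):
        if b[n - 1] == 1 and b[n - 2] > 1:          # n is bad
            ok = any(a[n + j] >= a[n - 1] + j + 1 and
                     all(a[n + i] >= a[n - 1] + i for i in range(1, j))
                     for j in (1, 2, 3))
            assert ok
print("Lemma 2 verified on all sampled bad indices")
```
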
|
IMOSL-2017-A8
|
Assume that a function \(f\colon \mathbb{R}\to \mathbb{R}\) satisfies the following condition:
For every \(x,y\in \mathbb{R}\) such that \(\big(f(x) + y\big)\big(f(y) + x\big) > 0\) , we have \(f(x) + y = f(y) + x\) .
Prove that \(f(x) + y\leqslant f(y) + x\) whenever \(x > y\) .
|
Solution 1. Define \(g(x) = x - f(x)\) . The condition on \(f\) then rewrites as follows:
For every \(x,y\in \mathbb{R}\) such that \(\big((x + y) - g(x)\big)\big((x + y) - g(y)\big) > 0\) , we have \(g(x) = g(y)\) .
This condition may in turn be rewritten in the following form:
If \(g(x)\neq g(y)\) , then the number \(x + y\) lies (non- strictly) between \(g(x)\) and \(g(y)\) . \((\ast)\)
Notice here that the function \(g_{1}(x) = - g(- x)\) also satisfies \((\ast)\) , since
\[g_{1}(x)\neq g_{1}(y)\quad \Longrightarrow \quad g(-x)\neq g(-y)\quad \Longrightarrow \quad -(x + y)\mathrm{~lies~between~}g(-x)\mathrm{~and~}g(-y)\] \[\qquad \Longrightarrow \quad x + y\mathrm{~lies~between~}g_{1}(x)\mathrm{~and~}g_{1}(y).\]
On the other hand, the relation we need to prove reads now as
\[g(x)\leqslant g(y)\qquad \mathrm{whenever}~x< y. \quad (1)\]
Again, this condition is equivalent to the same one with \(g\) replaced by \(g_{1}\)
If \(g(x) = 2x\) for all \(x\in \mathbb{R}\) , then (1) is obvious; so in what follows we consider the other case. We split the solution into a sequence of lemmas, strengthening one another. We always consider some value of \(x\) with \(g(x)\neq 2x\) and denote \(X = g(x)\) .
Lemma 1. Assume that \(X< 2x\) . Then on the interval \((X - x;x]\) the function \(g\) attains at most two values — namely, \(X\) and, possibly, some \(Y > X\) . Similarly, if \(X > 2x\) , then \(g\) attains at most two values on \([x;X - x)\) — namely, \(X\) and, possibly, some \(Y< X\) .
Proof. We start with the first claim of the lemma. Notice that \(X - x< x\) , so the considered interval is nonempty.
Take any \(a\in (X - x;x)\) with \(g(a)\neq X\) (if it exists). If \(g(a)< X\) , then \((\ast)\) yields \(g(a)\leqslant\) \(a + x\leqslant g(x) = X\) , so \(a\leqslant X - x\) which is impossible. Thus, \(g(a) > X\) and hence by \((\ast)\) we get \(X\leqslant a + x\leqslant g(a)\)
Now, for any \(b\in (X - x;x)\) with \(g(b)\neq X\) we similarly get \(b + x\leqslant g(b)\) . Therefore, the number \(a + b\) (which is smaller than each of \(a + x\) and \(b + x\) ) cannot lie between \(g(a)\) and \(g(b)\) , which by \((\ast)\) implies that \(g(a) = g(b)\) . Hence \(g\) may attain only two values on \((X - x;x]\) , namely \(X\) and \(g(a) > X\) .
To prove the second claim, notice that \(g_{1}(- x) = - X< 2\cdot (- x)\) , so \(g_{1}\) attains at most two values on \((- X + x, - x]\) , i.e., \(- X\) and, possibly, some \(- Y > - X\) . Passing back to \(g\) , we get what we need.
Lemma 2. If \(X< 2x\) , then \(g\) is constant on \((X - x;x)\) . Similarly, if \(X > 2x\) , then \(g\) is constant on \((x;X - x)\) .
Proof. Again, it suffices to prove the first claim only. Assume, for the sake of contradiction, that there exist \(a,b\in (X - x;x)\) with \(g(a)\neq g(b)\) ; by Lemma 1, we may assume that \(g(a) = X\) and \(Y = g(b) > X\) .
Notice that \(\min \{X - a,X - b\} >X - x\) , so there exists a \(u\in (X - x;x)\) such that \(u< \min \{X - a,X - b\}\) . By Lemma 1, we have either \(g(u) = X\) or \(g(u) = Y\) . In the former case, by \((\ast)\) we have \(X\leqslant u + b\leqslant Y\) which contradicts \(u< X - b\) . In the second case, by \((\ast)\) we have \(X\leqslant u + a\leqslant Y\) which contradicts \(u< X - a\) . Thus the lemma is proved. \(\square\)
Lemma 3. If \(X< 2x\) , then \(g(a) = X\) for all \(a\in (X - x;x)\) . Similarly, if \(X > 2x\) , then \(g(a) = X\) for all \(a\in (x;X - x)\) .
Proof. Again, we only prove the first claim.
By Lemmas 1 and 2, this claim may be violated only if \(g\) takes on a constant value \(Y > X\) on \((X - x,x)\) . Choose any \(a,b\in (X - x;x)\) with \(a< b\) . By \((\ast)\) , we have
\[Y\geqslant b + x\geqslant X. \quad (2)\]
In particular, we have \(Y\geqslant b + x > 2a\) . Applying Lemma 2 to \(a\) in place of \(x\) , we obtain that \(g\) is constant on \((a,Y - a)\) . By (2) again, we have \(x\leqslant Y - b< Y - a\) ; so \(x,b\in (a;Y - a)\) . But \(X = g(x)\neq g(b) = Y\) , which is a contradiction. \(\square\)
Now we are able to finish the solution. Assume that \(g(x) > g(y)\) for some \(x< y\) . Denote \(X = g(x)\) and \(Y = g(y)\) ; by \((\ast)\) , we have \(X\geqslant x + y\geqslant Y\) , so \(Y - y\leqslant x< y\leqslant X - x\) , and hence \((Y - y;y)\cap (x;X - x) = (x,y)\neq \emptyset\) . On the other hand, since \(Y - y< y\) and \(x< X - x\) , Lemma 3 shows that \(g\) should attain a constant value \(X\) on \((x;X - x)\) and a constant value \(Y\neq X\) on \((Y - y;y)\) . Since these intervals overlap, we get the final contradiction.
Solution 2. As in the previous solution, we pass to the function \(g\) satisfying \((\ast)\) and notice that we need to prove the condition (1). We will also make use of the function \(g_{1}\) .
If \(g\) is constant, then (1) is clearly satisfied. So, in the sequel we assume that \(g\) takes on at least two different values. Now we collect some information about the function \(g\) .
Claim 1. For any \(c\in \mathbb{R}\) , all the solutions of \(g(x) = c\) are bounded.
Proof. Fix any \(y\in \mathbb{R}\) with \(g(y)\neq c\) . Assume first that \(g(y) > c\) . Now, for any \(x\) with \(g(x) = c\) , by \((\ast)\) we have \(c\leqslant x + y\leqslant g(y)\) , or \(c - y\leqslant x\leqslant g(y) - y\) . Since \(c\) and \(y\) are constant, we get what we need.
If \(g(y)< c\) , we may switch to the function \(g_{1}\) for which we have \(g_{1}(- y) > - c\) . By the above arguments, we obtain that all the solutions of \(g_{1}(- x) = - c\) are bounded, which is equivalent to what we need. \(\square\)
As an immediate consequence, the function \(g\) takes on infinitely many values, which shows that the next claim is indeed widely applicable.
Claim 2. If \(g(x)< g(y)< g(z)\) , then \(x< z\) .
Proof. By \((\ast)\) , we have \(g(x)\leqslant x + y\leqslant g(y)\leqslant z + y\leqslant g(z)\) , so \(x + y\leqslant z + y\) , as required. \(\square\)
Claim 3. Assume that \(g(x) > g(y)\) for some \(x< y\) . Then \(g(a)\in \{g(x),g(y)\}\) for all \(a\in [x;y]\) .
Proof. If \(g(y)< g(a)< g(x)\) , then the triple \((y,a,x)\) violates Claim 2. If \(g(a)< g(y)< g(x)\) , then the triple \((a,y,x)\) violates Claim 2. If \(g(y)< g(x)< g(a)\) , then the triple \((y,x,a)\) violates Claim 2. The only possible cases left are \(g(a)\in \{g(x),g(y)\}\) . \(\square\)
In view of Claim 3, we say that an interval \(I\) (which may be open, closed, or semi- open) is a Dirichlet interval if the function \(g\) takes on just two values on \(I\) .
Assume now, for the sake of contradiction, that (1) is violated by some \(x< y\) . By Claim 3, \([x;y]\) is a Dirichlet interval. Set
\[r = \inf \{a\colon (a;y]{\mathrm{~is~a~Dirichlet~interval}}\} \qquad \mathrm{and}\qquad s = \sup \{b\colon [x;b){\mathrm{~is~a~Dirichlet~interval}}\} .\]
Clearly, \(r\leqslant x< y\leqslant s\) . By Claim 1, \(r\) and \(s\) are finite. Denote \(X = g(x)\) , \(Y = g(y)\) , and \(\Delta = (y - x) / 2\) .
Suppose first that there exists a \(t\in (r;r + \Delta)\) with \(g(t) = Y\) . By the definition of \(r\) , the interval \((r - \Delta ;y]\) is not Dirichlet, so there exists an \(r^{\prime}\in (r - \Delta ;r]\) such that \(g(r^{\prime})\notin \{X,Y\}\) .
The function \(g\) attains at least three distinct values on \([r^{\prime};y]\) , namely \(g(r^{\prime})\) , \(g(x)\) , and \(g(y)\) . Claim 3 now yields \(g(r^{\prime}) \leqslant g(y)\) ; the equality is impossible by the choice of \(r^{\prime}\) , so in fact \(g(r^{\prime}) < Y\) . Applying \((*)\) to the pairs \((r^{\prime},y)\) and \((t,x)\) we obtain \(r^{\prime} + y \leqslant Y \leqslant t + x\) , whence \(r - \Delta + y < r^{\prime} + y \leqslant t + x < r + \Delta + x\) , or \(y - x < 2\Delta\) . This is a contradiction.
Thus, \(g(t) = X\) for all \(t \in (r;r + \Delta)\) . Applying the same argument to \(g_{1}\) , we get \(g(t) = Y\) for all \(t \in (s - \Delta ;s)\) .
Finally, choose some \(s_{1},s_{2}\in (s - \Delta ;s)\) with \(s_{1}< s_{2}\) and denote \(\delta = (s_{2} - s_{1}) / 2\) . As before, we choose \(r^{\prime}\in (r - \delta ;r)\) with \(g(r^{\prime})\notin \{X,Y\}\) and obtain \(g(r^{\prime})< Y\) . Choose any \(t\in (r;r + \delta)\) ; by the above arguments, we have \(g(t) = X\) and \(g(s_{1}) = g(s_{2}) = Y\) . As before, we apply \((*)\) to the pairs \((r^{\prime},s_{2})\) and \((t,s_{1})\) obtaining \(r - \delta +s_{2}< r^{\prime} + s_{2}\leqslant Y\leqslant t + s_{1}< r + \delta +s_{1}\) , or \(s_{2} - s_{1}< 2\delta\) . This is a final contradiction.
Comment 1. The original submission discussed the same functions \(f\) , but the question was different — namely, the following one:
Prove that the equation \(f(x) = 2017x\) has at most one solution, and the equation \(f(x) = - 2017x\) has at least one solution.
The Problem Selection Committee decided that the question we are proposing is more natural, since it provides more natural information about the function \(g\) (which is indeed the main character in this story). On the other hand, the new problem statement is strong enough in order to imply the original one easily.
Namely, we will deduce from the new problem statement (along with the facts used in the solutions) that \((i)\) for every \(N > 0\) the equation \(g(x) = - Nx\) has at most one solution, and \((ii)\) for every \(N > 1\) the equation \(g(x) = Nx\) has at least one solution.
Claim \((i)\) is now trivial. Indeed, \(g\) is proven to be non- decreasing, so \(g(x) + Nx\) is strictly increasing and thus has at most one zero.
We proceed to claim \((ii)\) . If \(g(0) = 0\) , then the required root has been already found. Otherwise, we may assume that \(g(0) > 0\) and denote \(c = g(0)\) . We intend to prove that \(x = c / N\) is the required root. Indeed, by monotonicity we have \(g(c / N) \geqslant g(0) = c\) ; if we had \(g(c / N) > c\) , then \((*)\) would yield \(c \leqslant 0 + c / N \leqslant g(c / N)\) which is false. Thus, \(g(x) = c = Nx\) .
Comment 2. There are plenty of functions \(g\) satisfying \((*)\) (and hence of functions \(f\) satisfying the problem conditions). One simple example is \(g_{0}(x) = 2x\) . Next, for any increasing sequence \(A = (\ldots ,a_{- 1},a_{0},a_{1},\ldots)\) which is unbounded in both directions (i.e., for every \(N\) this sequence contains terms greater than \(N\) , as well as terms smaller than \(- N\) ), the function \(g_{A}\) defined by
\[g_{A}(x) = a_{i} + a_{i + 1}\qquad \mathrm{whenever}~x\in [a_{i};a_{i + 1})\]
satisfies \((*)\) . Indeed, pick any \(x< y\) with \(g(x)\neq g(y)\) ; this means that \(x\in [a_{i};a_{i + 1})\) and \(y\in [a_{j};a_{j + 1})\) for some \(i< j\) . Then we have \(g(x) = a_{i} + a_{i + 1}\leqslant x + y< a_{j} + a_{j + 1} = g(y)\) , as required.
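This construction is easy to test numerically. The sketch below (the sample sequence \(A\) and the sampling ranges are arbitrary choices) implements \(g_{A}\) and checks \((\ast)\) on random pairs:

```python
import bisect, random

# Illustration, not a proof: for a sample increasing sequence A, verify
# that g_A satisfies (*): whenever g(x) != g(y), the number x + y lies
# non-strictly between g(x) and g(y).
A = sorted(random.Random(2).sample(range(-100, 100), 40))

def g_A(x):
    # find i with A[i] <= x < A[i+1]; x is assumed to lie in [A[0], A[-1])
    i = bisect.bisect_right(A, x) - 1
    return A[i] + A[i + 1]

rng = random.Random(3)
for _ in range(2000):
    x = rng.uniform(A[0], A[-1] - 1e-9)
    y = rng.uniform(A[0], A[-1] - 1e-9)
    gx, gy = g_A(x), g_A(y)
    if gx != gy:
        assert min(gx, gy) <= x + y <= max(gx, gy)
print("g_A satisfies (*) on all sampled pairs")
```
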
There also exist examples of the mixed behavior; e.g., for an arbitrary sequence \(A\) as above and an arbitrary subset \(I\subseteq \mathbb{Z}\) the function
\[
g_{A,I}(x) =
\begin{cases}
g_0(x), & x \in [a_i, a_{i+1}) \ \text{with } i \in I, \\
g_A(x), & x \in [a_i, a_{i+1}) \ \text{with } i \notin I.
\end{cases}
\]
also satisfies \((*)\) .
Finally, it is even possible to provide a complete description of all functions \(g\) satisfying \((*)\) (and hence of all functions \(f\) satisfying the problem conditions); however, it seems to be far out of scope for the IMO. This description looks as follows.
Let \(A\) be any closed subset of \(\mathbb{R}\) which is unbounded in both directions. Define the functions \(i_{A}\) , \(s_{A}\) , and \(g_{A}\) as follows:
\[i_{A}(x) = \inf \{a\in A\colon a\geqslant x\} ,\quad s_{A}(x) = \sup \{a\in A\colon a\leqslant x\} ,\quad g_{A}(x) = i_{A}(x) + s_{A}(x).\]
It is easy to see that for different sets \(A\) and \(B\) the functions \(g_{A}\) and \(g_{B}\) are also different (since, e.g., for any \(a\in A\backslash B\) the function \(g_{B}\) is constant in a small neighborhood of \(a\) , but the function \(g_{A}\) is not). One may check, similarly to the arguments above, that each such function satisfies \((\ast)\) .
Finally, one more modification is possible. Namely, for any \(x\in A\) one may redefine \(g_{A}(x)\) (which is \(2x\) ) to be any of the numbers
\[g_{A + }(x) = i_{A + }(x) + x\quad \mathrm{or}\quad g_{A - }(x) = x + s_{A - }(x),\] \[\mathrm{where}\qquad i_{A + }(x) = \inf \{a\in A\colon a > x\} \quad \mathrm{and}\quad s_{A - }(x) = \sup \{a\in A\colon a< x\} .\]
This really changes the value if \(x\) has some right (respectively, left) semi- neighborhood disjoint from \(A\) , so there are at most countably many possible changes; all of them can be performed independently.
With some effort, one may show that the construction above provides all functions \(g\) satisfying \((\ast)\) .
|
IMOSL-2017-C1
|
A rectangle \(\mathcal{R}\) with odd integer side lengths is divided into small rectangles with integer side lengths. Prove that there is at least one among the small rectangles whose distances from the four sides of \(\mathcal{R}\) are either all odd or all even.
|
Solution. Let the width and height of \(\mathcal{R}\) be odd numbers \(a\) and \(b\) . Divide \(\mathcal{R}\) into \(ab\) unit squares and color them green and yellow in a checkered pattern. Since \(a\) and \(b\) are odd, the corner squares of \(\mathcal{R}\) will all have the same color, say green.
Call a rectangle (either \(\mathcal{R}\) or a small rectangle) green if its corners are all green; call it yellow if the corners are all yellow, and call it mixed if it has both green and yellow corners. In particular, \(\mathcal{R}\) is a green rectangle.
We will use the following trivial observations.
- Every mixed rectangle contains the same number of green and yellow squares;
- Every green rectangle contains one more green square than yellow square;
- Every yellow rectangle contains one more yellow square than green square.
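These three observations amount to a simple counting fact about the checkerboard coloring, which the following brute-force sketch confirms (the convention that the unit square at \((i,j)\) is green when \(i + j\) is even is an assumption of the snippet):

```python
# Empirical check of the three counting observations above.
def green_minus_yellow(x0, y0, w, h):
    """Excess of green over yellow unit squares in the w-by-h rectangle
    whose bottom-left unit square is at (x0, y0)."""
    return sum(1 if (i + j) % 2 == 0 else -1
               for i in range(x0, x0 + w) for j in range(y0, y0 + h))

for x0 in range(4):
    for y0 in range(4):
        for w in range(1, 6):
            for h in range(1, 6):
                corners = {(x0 + y0) % 2, (x0 + w - 1 + y0) % 2,
                           (x0 + y0 + h - 1) % 2,
                           (x0 + w - 1 + y0 + h - 1) % 2}
                excess = green_minus_yellow(x0, y0, w, h)
                if corners == {0}:      # green rectangle
                    assert excess == 1
                elif corners == {1}:    # yellow rectangle
                    assert excess == -1
                else:                   # mixed rectangle
                    assert excess == 0
print("observations verified")
```
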
The rectangle \(\mathcal{R}\) is green, so it contains more green unit squares than yellow unit squares. Therefore, among the small rectangles, at least one is green. Let \(\mathcal{S}\) be such a small green rectangle, and let its distances from the sides of \(\mathcal{R}\) be \(x\) , \(y\) , \(u\) and \(v\) , as shown in the picture. The top- left corner of \(\mathcal{R}\) and the top- left corner of \(\mathcal{S}\) have the same color, which happens if and only if \(x\) and \(u\) have the same parity. Similarly, the other three green corners of \(\mathcal{S}\) indicate that \(x\) and \(v\) , \(y\) and \(u\) , and \(y\) and \(v\) have the same parities; hence \(x\) , \(y\) , \(u\) and \(v\) are all odd or all even.

|
IMOSL-2017-C2
|
Let \(n\) be a positive integer. Define a chameleon to be any sequence of \(3n\) letters, with exactly \(n\) occurrences of each of the letters \(a\) , \(b\) , and \(c\) . Define a swap to be the transposition of two adjacent letters in a chameleon. Prove that for any chameleon \(X\) , there exists a chameleon \(Y\) such that \(X\) cannot be changed to \(Y\) using fewer than \(3n^{2} / 2\) swaps.
|
Solution 1. To start, notice that the swap of two identical letters does not change a chameleon, so we may assume there are no such swaps.
For any two chameleons \(X\) and \(Y\) , define their distance \(d(X,Y)\) to be the minimal number of swaps needed to transform \(X\) into \(Y\) (or vice versa). Clearly, \(d(X,Y) + d(Y,Z)\geqslant d(X,Z)\) for any three chameleons \(X\) , \(Y\) , and \(Z\) .
Lemma. Consider two chameleons
\[P = \underbrace{aa\ldots a}_{n}\underbrace{bb\ldots b}_{n}\underbrace{cc\ldots c}_{n}\quad \mathrm{and}\quad Q = \underbrace{cc\ldots c}_{n}\underbrace{bb\ldots b}_{n}\underbrace{aa\ldots a}_{n}.\]
Then \(d(P,Q)\geqslant 3n^{2}\) .
Proof. For any chameleon \(X\) and any pair of distinct letters \(u,v\in \{a,b,c\}\) , we define \(f_{u,v}(X)\) to be the number of pairs of positions in \(X\) such that the left one is occupied by \(u\) , and the right one is occupied by \(v\) . Define \(f(X) = f_{a,b}(X) + f_{a,c}(X) + f_{b,c}(X)\) . Notice that \(f_{a,b}(P) = f_{a,c}(P) = f_{b,c}(P) = n^{2}\) and \(f_{a,b}(Q) = f_{a,c}(Q) = f_{b,c}(Q) = 0\) , so \(f(P) = 3n^{2}\) and \(f(Q) = 0\) .
Now consider some swap changing a chameleon \(X\) to \(X^{\prime}\) ; say, the letters \(a\) and \(b\) are swapped. Then \(f_{a,b}(X)\) and \(f_{a,b}(X^{\prime})\) differ by exactly 1, while \(f_{a,c}(X) = f_{a,c}(X^{\prime})\) and \(f_{b,c}(X) = f_{b,c}(X^{\prime})\) . This yields \(|f(X) - f(X^{\prime})| = 1\) , i.e., on any swap the value of \(f\) changes by 1. Hence \(d(X,Y)\geqslant |f(X) - f(Y)|\) for any two chameleons \(X\) and \(Y\) . In particular, \(d(P,Q)\geqslant |f(P) - f(Q)| = 3n^{2}\) , as desired.
Back to the problem, take any chameleon \(X\) and notice that \(d(X,P) + d(X,Q)\geqslant d(P,Q)\geqslant 3n^{2}\) by the lemma. Consequently, \(\max \{d(X,P),d(X,Q)\} \geqslant \frac{3n^{2}}{2}\) , which establishes the problem statement.
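As a sanity check, both the invariant \(f\) and the exact value \(d(P,Q) = 3n^{2}\) can be verified by brute force for \(n = 2\). The following Python sketch (an illustration, not part of the solution) encodes chameleons as strings and computes distances by breadth-first search over the swap graph.

```python
from collections import deque

def f(x):
    # number of pairs of positions (i, j), i < j, whose letters form one of
    # the patterns ab, ac, bc -- the invariant from the lemma
    good = {("a", "b"), ("a", "c"), ("b", "c")}
    return sum(1 for i in range(len(x)) for j in range(i + 1, len(x))
               if (x[i], x[j]) in good)

def swap_distance(start, goal):
    # breadth-first search; one move = transposition of two adjacent
    # (distinct) letters
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return dist[cur]
        for i in range(len(cur) - 1):
            if cur[i] != cur[i + 1]:
                nxt = cur[:i] + cur[i + 1] + cur[i] + cur[i + 2:]
                if nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    queue.append(nxt)

P, Q = "aabbcc", "ccbbaa"           # the chameleons of the lemma for n = 2
assert f(P) == 12 and f(Q) == 0     # f(P) = 3n^2, f(Q) = 0
assert swap_distance(P, Q) == 12    # d(P, Q) = 3n^2 exactly
```

The last assertion also shows that the lemma's lower bound is attained for \(n = 2\).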
Comment 1. The problem may be reformulated in a graph language. Construct a graph \(G\) with the chameleons as vertices, two vertices being connected with an edge if and only if these chameleons differ by a single swap. Then \(d(X,Y)\) is the usual distance between the vertices \(X\) and \(Y\) in this graph. Recall that the radius of a connected graph \(G\) is defined as
\[r(G) = \min_{v\in V}\max_{u\in V}d(u,v).\]
So we need to prove that the radius of the constructed graph is at least \(3n^{2} / 2\) .
It is well-known that the radius of any connected graph is at least half of its diameter (which is simply \(\max_{u,v\in V}d(u,v)\)). Exactly this fact has been used above in order to finish the solution.
Solution 2. We use the notion of distance from Solution 1, but provide a different lower bound for it.
In any chameleon \(X\) , we enumerate the positions in it from left to right by \(1,2,\ldots ,3n\) . Define \(s_{c}(X)\) as the sum of positions occupied by \(c\) . The value of \(s_{c}\) changes by at most 1 on each swap, but this fact alone does not suffice to solve the problem; so we need an improvement.
For every chameleon \(X\) , denote by \(X_{\overline{c}}\) the sequence obtained from \(X\) by removing all \(n\) letters \(c\) . Enumerate the positions in \(X_{\overline{c}}\) from left to right by \(1,2,\ldots ,2n\) , and define \(s_{\overline{c},b}(X)\) as the sum of positions in \(X_{\overline{c}}\) occupied by \(b\) . (In other words, here we consider the positions of the \(b\) 's relatively to the \(a\) 's only.) Finally, denote
\[d^{\prime}(X,Y):= |s_{c}(X) - s_{c}(Y)| + |s_{\overline{c},b}(X) - s_{\overline{c},b}(Y)|.\]
We start with the case when \(n = 2k\) is even; denote \(X = X_{2k}\). We show that \(d_{a,b}^{*}(X,Y)\leqslant 2k^{2}\) for any chameleon \(Y\); this yields the required estimate. Proceed by induction on \(k\), with the trivial base case \(k = 0\). To perform the induction step, notice that \(d_{a,b}^{*}(X,Y)\) is exactly the minimal number of swaps needed to change \(Y_{\overline{c}}\) into \(X_{\overline{c}}\). One may show that moving \(a_{1}\) and \(a_{2k}\) in \(Y\) onto the first and the last positions in \(Y\), respectively, takes at most \(2k\) swaps, and that subsequently moving \(b_{1}\) and \(b_{2k}\) onto the second and the second-to-last positions takes at most \(2k - 2\) swaps. After performing that, one may delete these letters from both \(X_{\overline{c}}\) and \(Y_{\overline{c}}\) and apply the induction hypothesis; so \(X_{\overline{c}}\) can be obtained from \(Y_{\overline{c}}\) using at most \(2(k - 1)^{2} + 2k + (2k - 2) = 2k^{2}\) swaps, as required.
If \(n = 2k + 3\) is odd, the proof is similar but more technically involved. Namely, we claim that \(d_{a,b}^{*}(X_{2k + 3},Y)\leqslant 2k^{2} + 6k + 5\) for any chameleon \(Y\), and that equality is achieved only if \(Y_{\overline{c}} = bb\ldots baa\ldots a\). The proof proceeds by a similar induction, with some care taken over the base case, as well as over extracting the equality case. Similar estimates hold for \(d_{b,c}^{*}\) and \(d_{c,a}^{*}\). Summing three such estimates, we obtain
\[d^{*}(X_{2k + 3},Y)\leqslant 3(2k^{2} + 6k + 5) = \left\lceil \frac{3n^{2}}{2}\right\rceil +1,\]
which is by 1 more than we need. But equality could be achieved only if \(Y_{\overline{c}} = bb\ldots baa\ldots a\) and, similarly, \(Y_{\overline{b}} = aa\ldots acc\ldots c\) and \(Y_{\overline{a}} = cc\ldots cbb\ldots b\). Since these three equalities cannot hold simultaneously, the proof is finished.
|
IMOSL-2017-C4
|
Let \(N \geq 2\) be an integer. \(N(N + 1)\) soccer players, no two of the same height, stand in a row in some order. Coach Ralph wants to remove \(N(N - 1)\) people from this row so that in the remaining row of \(2N\) players, no one stands between the two tallest ones, no one stands between the third and the fourth tallest ones, ..., and finally no one stands between the two shortest ones. Show that this is always possible.
|
Solution 1. Split the row into \(N\) blocks with \(N + 1\) consecutive people each. We will show how to remove \(N - 1\) people from each block in order to satisfy the coach's wish.
First, construct an \((N + 1) \times N\) matrix where \(x_{i,j}\) is the height of the \(i^{\text{th}}\) tallest person of the \(j^{\text{th}}\) block—in other words, each column lists the heights within a single block, sorted in decreasing order from top to bottom.
We will reorder this matrix by repeatedly swapping whole columns. First, by a column permutation, make sure that \(x_{2,1} = \max \{x_{2,i}\colon i = 1,2,\ldots ,N\}\) (the first column contains the largest height of the second row). With the first column fixed, permute the other ones so that \(x_{3,2} = \max \{x_{3,i}\colon i = 2,\ldots ,N\}\) (the second column contains the tallest person of the third row, first column excluded). In short, at step \(k\) (\(k = 1,2,\ldots ,N - 1\)), we permute the columns from \(k\) to \(N\) so that \(x_{k + 1,k} = \max \{x_{k + 1,i}\colon i = k,k + 1,\ldots ,N\}\), and end up with an array like this:
\[\begin{array}{ccccccccc}
x_{1,1} & & x_{1,2} & & x_{1,3} & \cdots & x_{1,N-1} & & x_{1,N}\\
\vee & & \vee & & \vee & & \vee & & \vee\\
x_{2,1} & > & x_{2,2} & & x_{2,3} & \cdots & x_{2,N-1} & & x_{2,N}\\
\vee & & \vee & & \vee & & \vee & & \vee\\
x_{3,1} & & x_{3,2} & > & x_{3,3} & \cdots & x_{3,N-1} & & x_{3,N}\\
\vee & & \vee & & \vee & & \vee & & \vee\\
\vdots & & \vdots & & \vdots & \ddots & \vdots & & \vdots\\
\vee & & \vee & & \vee & & \vee & & \vee\\
x_{N,1} & & x_{N,2} & & x_{N,3} & \cdots & x_{N,N-1} & > & x_{N,N}\\
\vee & & \vee & & \vee & & \vee & & \vee\\
x_{N+1,1} & & x_{N+1,2} & & x_{N+1,3} & \cdots & x_{N+1,N-1} & & x_{N+1,N}
\end{array}\]
Now we make the bold choice: from the original row of people, remove everyone but those with heights
\[x_{1,1} > x_{2,1} > x_{2,2} > x_{3,2} > \dots > x_{N,N - 1} > x_{N,N} > x_{N + 1,N} \quad (*)\]
Of course this height order \((*)\) is not necessarily their spatial order in the new row. We now need to convince ourselves that each pair \((x_{k,k}; x_{k + 1,k})\) remains spatially together in this new row. But \(x_{k,k}\) and \(x_{k + 1,k}\) belong to the same column/block of \(N + 1\) consecutive people; the only people that could possibly stand between them were also in this block, and they are all gone.
Solution 2. Split the people into \(N\) groups by height: group \(G_{1}\) has the \(N + 1\) tallest ones, group \(G_{2}\) has the next \(N + 1\) tallest, and so on, up to group \(G_{N}\) with the \(N + 1\) shortest people.
Now scan the original row from left to right, stopping as soon as you have scanned two people (consecutively or not) from the same group, say, \(G_{i}\) . Since we have \(N\) groups, this must happen before or at the \((N + 1)^{\text{th}}\) person of the row. Choose this pair of people, removing all the other people from the same group \(G_{i}\) and also all people that have been scanned so far. The only people that could separate this pair's heights were in group \(G_{i}\) (and they are gone); the only people that could separate this pair's positions were already scanned (and they are gone too).
We are now left with \(N - 1\) groups (all except \(G_{i}\) ). Since each of them lost at most one person, each one has at least \(N\) unscanned people left in the row. Repeat the scanning process from left to right, choosing the next two people from the same group, removing this group and
everyone scanned up to that point. Once again we end up with two people who are next to each other in the remaining row and whose heights cannot be separated by anyone else who remains (since the rest of their group is gone). After picking these 2 pairs, we still have \(N - 2\) groups with at least \(N - 1\) people each.
If we repeat the scanning process a total of \(N\) times, it is easy to check that we will end up with 2 people from each group, for a total of \(2N\) people remaining. The height order is guaranteed by the grouping, and the left-to-right scanning construction guarantees that each pair from a group stands next to each other in the final row. We are done.
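The scanning procedure just described translates directly into code. The sketch below (function and variable names are ours, not part of the solution) runs the \(N\) rounds on a row of distinct heights and then checks the coach's condition on the remaining \(2N\) people.

```python
import random

def coach_selection(row, N):
    # row: heights of the N*(N+1) players in standing order (all distinct)
    order = sorted(row, reverse=True)
    group = {h: i // (N + 1) for i, h in enumerate(order)}  # G_1..G_N as 0..N-1
    working, kept = list(row), []
    for _ in range(N):
        first_seen = {}
        for idx, h in enumerate(working):
            g = group[h]
            if g in first_seen:
                kept += [working[first_seen[g]], h]        # the chosen pair
                # remove everyone scanned so far and the rest of group g
                working = [x for x in working[idx + 1:] if group[x] != g]
                break
            first_seen[g] = idx
    return kept

random.seed(0)
N = 4
row = random.sample(range(1000), N * (N + 1))
kept = coach_selection(row, N)
assert len(kept) == 2 * N
by_height = sorted(kept, reverse=True)
for k in range(N):
    i, j = kept.index(by_height[2 * k]), kept.index(by_height[2 * k + 1])
    assert abs(i - j) == 1   # each height-consecutive pair stands together
```

Since every round discards everything scanned before the chosen pair, the list `kept` automatically preserves the original left-to-right order.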
Solution 3. This is essentially the same as solution 1, but presented inductively. The essence of the argument is the following lemma.
Lemma. Assume that we have \(N\) disjoint groups of at least \(N + 1\) people in each, all people have distinct heights. Then one can choose two people from each group so that among the chosen people, the two tallest ones are in one group, the third and the fourth tallest ones are in one group, ..., and the two shortest ones are in one group.
Proof. Induction on \(N \geqslant 1\) ; for \(N = 1\) , the statement is trivial.
Consider now \(N\) groups \(G_{1},\ldots ,G_{N}\) with at least \(N + 1\) people in each for \(N\geq 2\). Enumerate the people by \(1,2,\ldots ,N(N + 1)\) according to their height, say, from tallest to shortest. Find the least \(s\) such that two people among \(1,2,\ldots ,s\) are in one group (without loss of generality, say this group is \(G_{N}\)). By the minimality of \(s\), the two mentioned people in \(G_{N}\) are \(s\) and some \(i< s\).
Now we choose people \(i\) and \(s\) in \(G_{N}\), forget about this group, and remove the people \(1,2,\ldots ,s\) from \(G_{1},\ldots ,G_{N - 1}\). Due to the minimality of \(s\) again, each of the obtained groups \(G_{1}^{\prime},\ldots ,G_{N - 1}^{\prime}\) contains at least \(N\) people. By the induction hypothesis, one can choose a pair of people from each of \(G_{1}^{\prime},\ldots ,G_{N - 1}^{\prime}\) so as to satisfy the required conditions. Since all these people have numbers greater than \(s\), addition of the pair \((s,i)\) from \(G_{N}\) does not violate these requirements. \(\square\)
To solve the problem, it suffices now to split the row into \(N\) contiguous groups with \(N + 1\) people in each and apply the Lemma to those groups.
Comment 1. One can identify each person with a pair of indices \((p,h)\) \((p,h\in \{1,2,\ldots ,N(N + 1)\})\) so that the \(p^{\mathrm{th}}\) person in the row (say, from left to right) is the \(h^{\mathrm{th}}\) tallest person. Say that \((a,b)\) separates \((x_{1},y_{1})\) and \((x_{2},y_{2})\) whenever \(a\) is strictly between \(x_{1}\) and \(x_{2}\), or \(b\) is strictly between \(y_{1}\) and \(y_{2}\). So the coach wants to pick \(2N\) people \((p_{i},h_{i})\) \((i = 1,2,\ldots ,2N)\) such that no chosen person separates \((p_{1},h_{1})\) from \((p_{2},h_{2})\), no chosen person separates \((p_{3},h_{3})\) from \((p_{4},h_{4})\), and so on. This formulation reveals a duality between positions and heights. In that sense, Solutions 1 and 2 are dual to each other.
Comment 2. The number \(N(N + 1)\) is sharp for \(N = 2\) and \(N = 3\) , due to arrangements \(1,5,3,4,2\) and \(1,10,6,4,3,9,5,8,7,2,11\) .
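Both sharpness claims can be confirmed by exhaustive search: with only \(N(N+1)-1\) people arranged as above, no valid choice of \(2N\) people exists. A short brute-force sketch (ours, for illustration):

```python
from itertools import combinations

def valid_selection_exists(row, N):
    # brute force: can the coach keep 2N people satisfying his wish?
    for kept in combinations(range(len(row)), 2 * N):
        heights = sorted((row[i] for i in kept), reverse=True)
        pos = {row[i]: r for r, i in enumerate(kept)}  # position in remaining row
        if all(abs(pos[heights[2 * k]] - pos[heights[2 * k + 1]]) == 1
               for k in range(N)):
            return True
    return False

assert valid_selection_exists(list(range(6, 0, -1)), 2)   # N(N+1) people: possible
assert not valid_selection_exists([1, 5, 3, 4, 2], 2)     # one person fewer: stuck
assert not valid_selection_exists([1, 10, 6, 4, 3, 9, 5, 8, 7, 2, 11], 3)
```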
|
IMOSL-2017-C5
|
A hunter and an invisible rabbit play a game in the Euclidean plane. The hunter's starting point \(H_{0}\) coincides with the rabbit's starting point \(R_{0}\) . In the \(n^{\mathrm{th}}\) round of the game \((n \geqslant 1)\) , the following happens.
(1) First the invisible rabbit moves secretly and unobserved from its current point \(R_{n - 1}\) to some new point \(R_{n}\) with \(R_{n - 1}R_{n} = 1\) .
(2) The hunter has a tracking device (e.g. dog) that returns an approximate position \(R_{n}^{\prime}\) of the rabbit, so that \(R_{n}R_{n}^{\prime} \leqslant 1\) .
(3) The hunter then visibly moves from point \(H_{n - 1}\) to a new point \(H_{n}\) with \(H_{n - 1}H_{n} = 1\) .
Is there a strategy for the hunter that guarantees that after \(10^{9}\) such rounds the distance between the hunter and the rabbit is below 100?
|
Answer: There is no such strategy for the hunter. The rabbit "wins".
Solution. If the answer were "yes", the hunter would have a strategy that would "work", no matter how the rabbit moved or where the radar pings \(R_{n}^{\prime}\) appeared. We will show the opposite: with bad luck from the radar pings, there is no strategy for the hunter that guarantees that the distance stays below 100 in \(10^{9}\) rounds.
So, let \(d_{n}\) be the distance between the hunter and the rabbit after \(n\) rounds. Of course, if \(d_{n} \geqslant 100\) for some \(n < 10^{9}\), the rabbit has won — it just needs to move straight away from the hunter, and the distance will be kept at or above 100 from then on.
We will now show that, while \(d_{n} < 100\) , whatever given strategy the hunter follows, the rabbit has a way of increasing \(d_{n}^{2}\) by at least \(\frac{1}{2}\) every 200 rounds (as long as the radar pings are lucky enough for the rabbit). This way, \(d_{n}^{2}\) will reach \(10^{4}\) in less than \(2 \cdot 10^{4} \cdot 200 = 4 \cdot 10^{6} < 10^{9}\) rounds, and the rabbit wins.
Suppose the hunter is at \(H_{n}\) and the rabbit is at \(R_{n}\) . Suppose even that the rabbit reveals its position at this moment to the hunter (this allows us to ignore all information from previous radar pings). Let \(r\) be the line \(H_{n}R_{n}\) , and \(Y_{1}\) and \(Y_{2}\) be points which are 1 unit away from \(r\) and 200 units away from \(R_{n}\) , as in the figure below.

The rabbit's plan is simply to choose one of the points \(Y_{1}\) or \(Y_{2}\) and hop 200 rounds straight towards it. Since all hops stay within 1 distance unit from \(r\) , it is possible that all radar pings stay on \(r\) . In particular, in this case, the hunter has no way of knowing whether the rabbit chose \(Y_{1}\) or \(Y_{2}\) .
Looking at such pings, what is the hunter going to do? If the hunter's strategy tells him to go 200 rounds straight to the right, he ends up at point \(H^{\prime}\) in the figure. Note that the hunter does not have a better alternative! Indeed, after these 200 rounds he will always end up at a point to the left of \(H^{\prime}\) . If his strategy took him to a point above \(r\) , he would end up even further from \(Y_{2}\) ; and if his strategy took him below \(r\) , he would end up even further from \(Y_{1}\) . In other words, no matter what strategy the hunter follows, he can never be sure his distance to the rabbit will be less than \(y \stackrel{\mathrm{def}}{=} H^{\prime}Y_{1} = H^{\prime}Y_{2}\) after these 200 rounds.
To estimate \(y^{2}\) , we take \(Z\) as the midpoint of segment \(Y_{1}Y_{2}\) , we take \(R^{\prime}\) as a point 200 units to the right of \(R_{n}\) and we define \(\epsilon = ZR^{\prime}\) (note that \(H^{\prime}R^{\prime} = d_{n}\) ). Then
\[y^{2} = 1 + (H^{\prime}Z)^{2} = 1 + (d_{n} - \epsilon)^{2}\]
where
\[\epsilon = 200 - R_{n}Z = 200 - \sqrt{200^{2} - 1} = \frac{1}{200 + \sqrt{200^{2} - 1}} >\frac{1}{400}.\]
In particular, \(\epsilon^{2} + 1 = 400\epsilon\) , so
\[y^{2} = d_{n}^{2} - 2\epsilon d_{n} + \epsilon^{2} + 1 = d_{n}^{2} + \epsilon (400 - 2d_{n}).\]
Since \(\epsilon > \frac{1}{400}\) and we assumed \(d_{n}< 100\) , this shows that \(y^{2} > d_{n}^{2} + \frac{1}{2}\) . So, as we claimed, with this list of radar pings, no matter what the hunter does, the rabbit might achieve \(d_{n + 200}^{2} > d_{n}^{2} + \frac{1}{2}\) . The wabbit wins.
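The two algebraic facts used above, and the resulting gain of at least \(\frac{1}{2}\) in \(d_{n}^{2}\) while \(d_{n} < 100\), can be confirmed numerically (an illustrative check, not part of the proof):

```python
import math

N = 200
eps = N - math.sqrt(N * N - 1)
assert eps > 1 / 400                                # eps > 1/400
assert abs((eps * eps + 1) - 400 * eps) < 1e-9      # eps^2 + 1 = 400*eps
# gain in squared distance after 200 rounds: y^2 - d^2 = eps * (400 - 2d)
for d in (0, 50, 99, 99.99):
    y_sq = d * d + eps * (400 - 2 * d)
    assert y_sq > d * d + 0.5                       # exceeds 1/2 whenever d < 100
```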
Comment 1. Many different versions of the solution above can be found by replacing 200 with some other number \(N\) for the number of hops the rabbit takes between reveals. If this is done, we have:
\[\epsilon = N - \sqrt{N^{2} - 1} = \frac{1}{N + \sqrt{N^{2} - 1}} >\frac{1}{2N}\]
and
\[\epsilon^{2} + 1 = 2N\epsilon ,\]
so, as long as \(N > d_{n}\) , we would find
\[y^{2} = d_{n}^{2} + \epsilon (2N - 2d_{n}) > d_{n}^{2} + \frac{N - d_{n}}{N}.\]
For example, taking \(N = 101\) is already enough—the squared distance increases by at least \(\frac{1}{101}\) every 101 rounds, and \(101^{2}\cdot 10^{4} = 1.0201\cdot 10^{8}< 10^{9}\) rounds are enough for the rabbit. If the statement is made sharper, some such versions might not work any longer.
Comment 2. The original statement asked whether the distance could be kept under \(10^{10}\) in \(10^{100}\) rounds.
|
IMOSL-2017-C6
|
Let \(n > 1\) be an integer. An \(n\times n\times n\) cube is composed of \(n^{3}\) unit cubes. Each unit cube is painted with one color. For each \(n\times n\times 1\) box consisting of \(n^{2}\) unit cubes (of any of the three possible orientations), we consider the set of the colors present in that box (each color is listed only once). This way, we get \(3n\) sets of colors, split into three groups according to the orientation. It happens that for every set in any group, the same set appears in both of the other groups. Determine, in terms of \(n\) , the maximal possible number of colors that are present.
|
Answer: The maximal number is \(\frac{n(n + 1)(2n + 1)}{6}\) .
Solution 1. Call an \(n\times n\times 1\) box an \(x\)-box, a \(y\)-box, or a \(z\)-box, according to the direction of its short side. Let \(C\) be the number of colors in a valid configuration. We start with the upper bound for \(C\).
Let \(C_{1}\) , \(C_{2}\) , and \(C_{3}\) be the sets of colors which appear in the big cube exactly once, exactly twice, and at least thrice, respectively. Let \(M_{i}\) be the set of unit cubes whose colors are in \(C_{i}\) , and denote \(n_{i} = |M_{i}|\) .
Consider any \(x\) - box \(X\) , and let \(Y\) and \(Z\) be a \(y\) - and a \(z\) - box containing the same set of colors as \(X\) does.
Claim. \(4|X\cap M_{1}| + |X\cap M_{2}|\leqslant 3n + 1\).
Proof. We distinguish two cases.
Case 1: \(X\cap M_{1}\neq \emptyset\)
A cube from \(X\cap M_{1}\) should appear in all three boxes \(X\) , \(Y\) , and \(Z\) , so it should lie in \(X\cap Y\cap Z\) . Thus \(X\cap M_{1} = X\cap Y\cap Z\) and \(|X\cap M_{1}| = 1\) .
Consider now the cubes in \(X\cap M_{2}\). There are at most \(2(n - 1)\) of them lying in \(X\cap Y\) or \(X\cap Z\) (because the cube from \(X\cap Y\cap Z\) is in \(M_{1}\)). Let \(a\) be some other cube from \(X\cap M_{2}\). Recall that there is just one other cube \(a^{\prime}\) sharing a color with \(a\). But both \(Y\) and \(Z\) should contain such a cube, so \(a^{\prime}\in Y\cap Z\) (but \(a^{\prime}\notin X\cap Y\cap Z\)). The map \(a\mapsto a^{\prime}\) is clearly injective, so the number of cubes \(a\) we are interested in does not exceed \(|(Y\cap Z)\setminus X| = n - 1\). Thus \(|X\cap M_{2}|\leqslant 2(n - 1) + (n - 1) = 3(n - 1)\), and hence \(4|X\cap M_{1}| + |X\cap M_{2}|\leqslant 4 + 3(n - 1) = 3n + 1\).
Case 2: \(X\cap M_{1} = \emptyset\)
In this case, the same argument applies with several changes. Indeed, \(X\cap M_{2}\) contains at most \(2n - 1\) cubes from \(X\cap Y\) or \(X\cap Z\) . Any other cube \(a\) in \(X\cap M_{2}\) corresponds to some \(a^{\prime}\in Y\cap Z\) (possibly with \(a^{\prime}\in X\) ), so there are at most \(n\) of them. All this results in \(|X\cap M_{2}|\leqslant (2n - 1) + n = 3n - 1\) , which is even better than we need (by the assumptions of our case).
Summing up the inequalities from the Claim over all \(x\) - boxes \(X\) , we obtain
\[4n_{1} + n_{2}\leqslant n(3n + 1).\]
Obviously, we also have \(n_{1} + n_{2} + n_{3} = n^{3}\) .
Now we are prepared to estimate \(C\) . Due to the definition of the \(M_{i}\) , we have \(n_{i}\geqslant i|C_{i}|\) , so
\[C\leqslant n_{1} + \frac{n_{2}}{2} +\frac{n_{3}}{3} = \frac{n_{1} + n_{2} + n_{3}}{3} +\frac{4n_{1} + n_{2}}{6}\leqslant \frac{n^{3}}{3} +\frac{3n^{2} + n}{6} = \frac{n(n + 1)(2n + 1)}{6}.\]
It remains to present an example of an appropriate coloring in the above-mentioned number of colors. For each color, we present the set of all cubes of this color. These sets are:
1. \(n\) singletons of the form \(S_{i} = \{(i,i,i)\}\) (with \(1\leqslant i\leqslant n)\) ;
2. \(3\binom{n}{2}\) doubletons of the forms \(D_{i,j}^{1} = \{(i,j,j),(j,i,i)\}\) , \(D_{i,j}^{2} = \{(j,i,j),(i,j,i)\}\) , and \(D_{i,j}^{3} = \{(j,j,i),(i,i,j)\}\) (with \(1\leqslant i< j\leqslant n)\) ;
3. \(2\binom{n}{3}\) triplets of the form \(T_{i,j,k} = \{(i,j,k),(j,k,i),(k,i,j)\}\) (with \(1\leqslant i< j< k\leqslant n\) or \(1\leqslant i< k< j\leqslant n\) ).
One may easily see that the \(i^{\mathrm{th}}\) boxes of each orientation contain the same set of colors, and that
\[n + \frac{3n(n - 1)}{2} +\frac{n(n - 1)(n - 2)}{3} = \frac{n(n + 1)(2n + 1)}{6}\]
colors are used, as required.
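The construction can also be verified mechanically for small \(n\): assign each unit cube its color class and check both the color count and the box conditions. A sketch with our own encoding of the classes \(S_{i}\), \(D_{i,j}^{m}\), \(T_{i,j,k}\):

```python
from itertools import product

def color(i, j, k):
    # color class of the unit cube (i, j, k) in the construction above
    if i == j == k:
        return ("S", i)
    if j == k:
        return ("D1", min(i, j), max(i, j))
    if i == k:
        return ("D2", min(i, j), max(i, j))
    if i == j:
        return ("D3", min(i, k), max(i, k))
    # triplet class, closed under cyclic shifts; take the least shift as key
    return ("T", min([(i, j, k), (j, k, i), (k, i, j)]))

n = 4
cubes = list(product(range(1, n + 1), repeat=3))
assert len({color(*c) for c in cubes}) == n * (n + 1) * (2 * n + 1) // 6

for layer in range(1, n + 1):
    # the layer-th box of each orientation carries the same set of colors
    sets = [frozenset(color(*c) for c in cubes if c[axis] == layer)
            for axis in range(3)]
    assert sets[0] == sets[1] == sets[2]
```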
Solution 2. We will approach a new version of the original problem. In this new version, each cube may have a color, or be invisible (not both). Now we make sets of colors for each \(n\times n\times 1\) box as before (where "invisible" is not considered a color) and group them by orientation, also as before. Finally, we require that, for every non- empty set in any group, the same set must appear in the other 2 groups. What is the maximum number of colors present with these new requirements?
Let us call strange a big \(n\times n\times n\) cube whose painting scheme satisfies the new requirements, and let \(D\) be the number of colors in a strange cube. Note that any cube that satisfies the original requirements is also strange, so \(\max (D)\) is an upper bound for the original answer.
Claim. \(D\leqslant \frac{n(n + 1)(2n + 1)}{6}\)
Proof. The proof is by induction on \(n\) . If \(n = 1\) , we must paint the cube with at most 1 color.
Now, pick a \(n\times n\times n\) strange cube \(A\) , where \(n\geqslant 2\) . If \(A\) is completely invisible, \(D = 0\) and we are done. Otherwise, pick a non- empty set of colors \(S\) which corresponds to, say, the boxes \(X\) , \(Y\) and \(Z\) of different orientations.
Now find all cubes in \(A\) whose colors are in \(S\) and make them invisible. Since \(X\) , \(Y\) and \(Z\) are now completely invisible, we can throw them away and focus on the remaining \((n - 1)\times (n - 1)\times (n - 1)\) cube \(B\) . The sets of colors in all the groups for \(B\) are the same as the sets for \(A\) , removing exactly the colors in \(S\) , and no others! Therefore, every nonempty set that appears in one group for \(B\) still shows up in all possible orientations (it is possible that an empty set of colors in \(B\) only matched \(X\) , \(Y\) or \(Z\) before these were thrown away, but remember we do not require empty sets to match anyway). In summary, \(B\) is also strange.
By the induction hypothesis, we may assume that \(B\) has at most \(\frac{(n - 1)n(2n - 1)}{6}\) colors. Since there were at most \(n^{2}\) different colors in \(S\) , we have that \(A\) has at most \(\frac{(n - 1)n(2n - 1)}{6} +n^{2} = \frac{n(n + 1)(2n + 1)}{6}\) colors.
Finally, the construction in the previous solution shows a painting scheme (with no invisible cubes) that reaches this maximum, so we are done.
|
IMOSL-2017-C7
|
For any finite sets \(X\) and \(Y\) of positive integers, denote by \(f_{X}(k)\) the \(k^{\mathrm{th}}\) smallest positive integer not in \(X\) , and let
\[X*Y = X\cup \{f_{X}(y)\colon y\in Y\} .\]
Let \(A\) be a set of \(a > 0\) positive integers, and let \(B\) be a set of \(b > 0\) positive integers. Prove that if \(A*B = B*A\) , then
\[\underbrace{A*(A*\cdots*(A*(A*A))\cdots)}_{A\mathrm{~appears~}b\mathrm{~times}} = \underbrace{B*(B*\cdots*(B*(B*B))\cdots)}_{B\mathrm{~appears~}a\mathrm{~times}}.\]
|
Solution 1. For any function \(g\colon \mathbb{Z}_{>0}\to \mathbb{Z}_{>0}\) and any subset \(X\subset \mathbb{Z}_{>0}\) , we define \(g(X) =\) \(\{g(x)\colon x\in X\}\) . We have that the image of \(f_{X}\) is \(f_{X}(\mathbb{Z}_{>0}) = \mathbb{Z}_{>0}\setminus X\) . We now show a general lemma about the operation \(*\) , with the goal of showing that \(*\) is associative.
Lemma 1. Let \(X\) and \(Y\) be finite sets of positive integers. The functions \(f_{X*Y}\) and \(f_{X}\circ f_{Y}\) are equal.
Proof. We have
\[f_{X*Y}(\mathbb{Z}_{>0}) = \mathbb{Z}_{>0}\backslash (X*Y) = (\mathbb{Z}_{>0}\backslash X)\backslash f_{X}(Y) = f_{X}(\mathbb{Z}_{>0})\backslash f_{X}(Y) = f_{X}(\mathbb{Z}_{>0}\backslash Y) = f_{X}(f_{Y}(\mathbb{Z}_{>0})).\]
Thus, the functions \(f_{X*Y}\) and \(f_{X}\circ f_{Y}\) are strictly increasing functions with the same range. Because a strictly increasing function is uniquely determined by its range, we have \(f_{X*Y} = f_{X}\circ f_{Y}\). \(\square\)
Lemma 1 implies that \(*\) is associative, in the sense that \((A*B)*C = A*(B*C)\) for any finite sets \(A,B\) , and \(C\) of positive integers. We prove the associativity by noting
\[\mathbb{Z}_{>0}\setminus ((A*B)*C) = f_{(A*B)*C}(\mathbb{Z}_{>0}) = f_{A*B}(f_{C}(\mathbb{Z}_{>0})) = f_{A}(f_{B}(f_{C}(\mathbb{Z}_{>0})))\] \[\qquad = f_{A}(f_{B*C}(\mathbb{Z}_{>0})) = f_{A*(B*C)}(\mathbb{Z}_{>0}) = \mathbb{Z}_{>0}\setminus (A*(B*C)).\]
In light of the associativity of \(*\) , we may drop the parentheses when we write expressions like \(A*(B*C)\) . We also introduce the notation
\[X^{*k} = \underbrace{X*(X*\cdots*(X*(X*X))\cdots)}_{X\mathrm{~appears~}k\mathrm{~times}}.\]
Our goal is then to show that \(A*B = B*A\) implies \(A^{*b} = B^{*a}\) . We will do so via the following general lemma.
Lemma 2. Suppose that \(X\) and \(Y\) are finite sets of positive integers satisfying \(X*Y = Y*X\) and \(|X| = |Y|\) . Then, we must have \(X = Y\) .
Proof. Assume that \(X\) and \(Y\) are not equal. Let \(s\) be the largest number lying in exactly one of \(X\) and \(Y\). Without loss of generality, say that \(s\in X\setminus Y\). The number \(f_{X}(s)\) is the \(s^{\mathrm{th}}\) smallest number not in \(X\), which implies that
\[f_{X}(s) = s + |X\cap \{1,2,\ldots ,f_{X}(s)\} |. \quad (1)\]
Since \(f_{X}(s)\geqslant s\) , we have that
\[\{f_{X}(s) + 1,f_{X}(s) + 2,\ldots \} \cap X = \{f_{X}(s) + 1,f_{X}(s) + 2,\dots \} \cap Y,\]
which, together with the assumption that \(|X| = |Y|\) , gives
\[|X\cap \{1,2,\ldots ,f_{X}(s)\} | = |Y\cap \{1,2,\ldots ,f_{X}(s)\} |. \quad (2)\]
Now consider the equation
\[t - \left|Y\cap \{1,2,\ldots ,t\} \right| = s.\]
This equation is satisfied only when \(t\in \left[f_{Y}(s),f_{Y}(s + 1)\right)\) , because the left hand side counts the number of elements up to \(t\) that are not in \(Y\) . We have that the value \(t = f_{X}(s)\) satisfies the above equation because of (1) and (2). Furthermore, since \(f_{X}(s)\notin X\) and \(f_{X}(s)\geqslant s\) , we have that \(f_{X}(s)\notin Y\) due to the maximality of \(s\) . Thus, by the above discussion, we must have \(f_{X}(s) = f_{Y}(s)\) .
Finally, we arrive at a contradiction. The value \(f_{X}(s)\) is neither in \(X\) nor in \(f_{X}(Y)\), because \(s\) is not in \(Y\) and \(f_{X}\) is injective. Thus, \(f_{X}(s)\notin X\ast Y\). However, since \(s\in X\), we have \(f_{Y}(s)\in Y\ast X\); as \(f_{Y}(s) = f_{X}(s)\) and \(X\ast Y = Y\ast X\), this is a contradiction. \(\square\)
We are now ready to finish the proof. Note first of all that \(|A^{*b}| = ab = |B^{*a}|\) . Moreover, since \(A*B = B*A\) , and \(*\) is associative, it follows that \(A^{*b}*B^{*a} = B^{*a}*A^{*b}\) . Thus, by Lemma 2, we have \(A^{*b} = B^{*a}\) , as desired.
Comment 1. Taking \(A = X^{*k}\) and \(B = X^{*l}\) generates many non- trivial examples where \(A*B = B*A\) . There are also other examples not of this form. For example, if \(A = \{1,2,4\}\) and \(B = \{1,3\}\) , then \(A*B = \{1,2,3,4,6\} = B*A\) .
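Both examples, and the conclusion of the problem for the second one (here \(a = 3\), \(b = 2\)), are quick to check by direct computation. The sketch below implements \(f_{X}\) and \(*\) naively (an illustration, not part of the solution):

```python
def f(X, k):
    # the k-th smallest positive integer not in X
    m, missing = 0, 0
    while missing < k:
        m += 1
        if m not in X:
            missing += 1
    return m

def star(X, Y):
    # X * Y = X united with {f_X(y) : y in Y}
    return X | {f(X, y) for y in Y}

def star_power(X, k):
    # X * (X * ... * (X * X) ...), with X appearing k times
    R = X
    for _ in range(k - 1):
        R = star(X, R)
    return R

A, B = {1, 2, 4}, {1, 3}
assert star(A, B) == star(B, A) == {1, 2, 3, 4, 6}
assert star_power(A, len(B)) == star_power(B, len(A)) == {1, 2, 3, 4, 5, 7}
```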
Solution 2. We will use Lemma 1 from Solution 1. Additionally, let \(X^{*k}\) be defined as in Solution 1. If \(X\) and \(Y\) are finite sets, then
\[f_{X} = f_{Y}\iff f_{X}(\mathbb{Z}_{>0}) = f_{Y}(\mathbb{Z}_{>0})\iff (\mathbb{Z}_{>0}\setminus X) = (\mathbb{Z}_{>0}\setminus Y)\iff X = Y, \quad (3)\]
where the first equivalence is because \(f_{X}\) and \(f_{Y}\) are strictly increasing functions, and the second equivalence is because \(f_{X}(\mathbb{Z}_{>0}) = \mathbb{Z}_{>0}\setminus X\) and \(f_{Y}(\mathbb{Z}_{>0}) = \mathbb{Z}_{>0}\setminus Y\) .
Denote \(g = f_{A}\) and \(h = f_{B}\) . The given relation \(A*B = B*A\) is equivalent to \(f_{A*B} = f_{B*A}\) because of (3), and by Lemma 1 of the first solution, this is equivalent to \(g\circ h = h\circ g\) . Similarly, the required relation \(A^{*b} = B^{*a}\) is equivalent to \(g^{b} = h^{a}\) . We will show that
\[g^{b}(n) = h^{a}(n) \quad (4)\]
for all \(n\in \mathbb{Z}_{>0}\) , which suffices to solve the problem.
To start, we claim that (4) holds for all sufficiently large \(n\) . Indeed, let \(p\) and \(q\) be the maximal elements of \(A\) and \(B\) , respectively; we may assume that \(p\geqslant q\) . Then, for every \(n\geqslant p\) we have \(g(n) = n + a\) and \(h(n) = n + b\) , whence \(g^{b}(n) = n + ab = h^{a}(n)\) , as was claimed.
In view of this claim, if (4) is not identically true, then there exists a maximal \(s\) with \(g^{b}(s)\neq\) \(h^{a}(s)\) . Without loss of generality, we may assume that \(g(s)\neq s\) , for if we had \(g(s) = h(s) = s\) then \(s\) would satisfy (4). As \(g\) is increasing, we then have \(g(s) > s\) , so (4) holds for \(n = g(s)\) . But then we have
\[g(g^{b}(s)) = g^{b + 1}(s) = g^{b}(n) = h^{a}(n) = h^{a}(g(s)) = g(h^{a}(s)),\]
where the last equality holds in view of \(g\circ h = h\circ g\) . By the injectivity of \(g\) , the above equality yields \(g^{b}(s) = h^{a}(s)\) , which contradicts the choice of \(s\) . Thus, we have proved that (4) is identically true on \(\mathbb{Z}_{>0}\) , as desired.
Comment 2. We present another proof of Lemma 2 of the first solution.
Let \(n = |X| = |Y|\). Say that \(u\) is the smallest number in \(X\) and \(v\) is the smallest number in \(Y\); assume without loss of generality that \(u\leqslant v\).
Let \(T\) be any finite set of positive integers, and define \(t = |T|\). Enumerate the elements of \(X\) as \(x_{1}< x_{2}< \dots < x_{n}\). Define \(S_{m} = f_{T*X^{*(m - 1)}}(X)\), and enumerate its elements as \(s_{m,1}< s_{m,2}< \dots < s_{m,n}\). Note that the \(S_{m}\) are pairwise disjoint; indeed, if \(m< m'\), then
\[S_{m}\subset T*X^{*m}\subset T*X^{*(m^{\prime} - 1)}\quad \mathrm{and}\quad S_{m^{\prime}} = (T*X^{*m^{\prime}})\setminus (T*X^{*(m^{\prime} - 1)}).\]
We claim the following statement, which essentially says that the \(S_{m}\) are eventually linear translates of each other:
Claim. For every \(i\) , there exists some \(m_{i}\) and \(c_{i}\) such that for all \(m > m_{i}\) , we have that \(s_{m,i} = t + mn - c_{i}\) . Furthermore, the \(c_{i}\) do not depend on the choice of \(T\) .
First, we show that this claim implies Lemma 2. We may choose \(T = X\) and \(T = Y\) . Then, there is some \(m'\) such that for all \(m \geq m'\) , we have
\[f_{X^{*m}}(X) = f_{(Y*X^{*(m - 1)})}(X). \quad (5)\]
Because \(u\) is the minimum element of \(X\) , \(v\) is the minimum element of \(Y\) , and \(u \leqslant v\) , we have that
\[\left(\bigcup_{m = m'}^{\infty}f_{X^{*m}}(X)\right)\cup X^{*m'} = \left(\bigcup_{m = m'}^{\infty}f_{(Y*X^{*(m - 1)})}(X)\right)\cup \left(Y*X^{*(m' - 1)}\right) = \{u,u + 1,\ldots \} ,\]
and in both the first and second expressions, the unions are of pairwise distinct sets. By (5), we obtain \(X^{*m'} = Y*X^{*(m' - 1)}\) . Now, because \(X\) and \(Y\) commute, we get \(X^{*m'} = X^{*(m' - 1)}*Y\) , and so \(X = Y\) .
We now prove the claim.
Proof of the claim. We induct downwards on \(i\) , first proving the statement for \(i = n\) , and so on.
Assume that \(m\) is chosen so that all elements of \(S_{m}\) are greater than all elements of \(T\) (which is possible because \(T\) is finite). For \(i = n\), we have \(s_{m,n} > s_{k,n}\) for every \(k < m\). Thus, all \((m - 1)n\) numbers of the form \(s_{k,u}\) for \(k < m\) and \(1 \leqslant u \leqslant n\) are less than \(s_{m,n}\). We then have that \(s_{m,n}\) is the \(((m - 1)n + x_{n})^{\mathrm{th}}\) number not in \(T\), which is equal to \(t + (m - 1)n + x_{n}\). So we may choose \(c_{n} = n - x_{n}\), which does not depend on \(T\); this proves the base case of the induction.
For \(i < n\) , we have again that all elements \(s_{m,j}\) for \(j < i\) and \(s_{p,i}\) for \(p < m\) are less than \(s_{m,i}\) , so \(s_{m,i}\) is the \(((m - 1)i + x_{i})^{th}\) element not in \(T\) or of the form \(s_{p,j}\) for \(j > i\) and \(p < m\) . But by the inductive hypothesis, each of the sequences \(s_{p,j}\) is eventually periodic with period \(n\) , and thus the sequence \(s_{m,i}\) must eventually be so as well. Since each of the sequences \(s_{p,j} - t\) with \(j > i\) eventually does not depend on \(T\) , the sequence \(s_{m,i} - t\) eventually does not depend on \(T\) either, so the inductive step is complete. This proves the claim and thus Lemma 2. \(\square\)
|
IMOSL-2017-C8
|
Let \(n\) be a given positive integer. In the Cartesian plane, each lattice point with nonnegative coordinates initially contains a butterfly, and there are no other butterflies. The neighborhood of a lattice point \(c\) consists of all lattice points within the axis- aligned \((2n + 1) \times (2n + 1)\) square centered at \(c\) , apart from \(c\) itself. We call a butterfly lonely, crowded, or comfortable, depending on whether the number of butterflies in its neighborhood \(N\) is respectively less than, greater than, or equal to half of the number of lattice points in \(N\) .
Every minute, all lonely butterflies fly away simultaneously. This process goes on for as long as there are any lonely butterflies. Assuming that the process eventually stops, determine the number of comfortable butterflies at the final state.
|
Answer: \(n^{2} + 1\) .
Solution. We always identify a butterfly with the lattice point it is situated at. For two points \(p\) and \(q\) , we write \(p \geq q\) if each coordinate of \(p\) is at least the corresponding coordinate of \(q\) . Let \(O\) be the origin, and let \(\mathcal{Q}\) be the set of initially occupied points, i.e., of all lattice points with nonnegative coordinates. Let \(\mathcal{R}_{\mathrm{H}} = \{(x,0): x \geq 0\}\) and \(\mathcal{R}_{\mathrm{V}} = \{(0, y): y \geq 0\}\) be the sets of the lattice points lying on the horizontal and vertical boundary rays of \(\mathcal{Q}\) . Denote by \(N(a)\) the neighborhood of a lattice point \(a\) .
1. Initial observations. We call a set of lattice points up-right closed if its points stay in the set after being shifted by any lattice vector \((i,j)\) with \(i,j\geqslant 0\) . Whenever the butterflies form an up-right closed set \(S\) , we have \(|N(p)\cap S|\geqslant |N(q)\cap S|\) for any two points \(p,q\in S\) with \(p\geqslant q\) . So, since \(\mathcal{Q}\) is up-right closed, the set of butterflies at any moment also preserves this property. We assume all forthcoming sets of lattice points to be up-right closed.
When speaking of some set \(S\) of lattice points, we call its points lonely, comfortable, or crowded with respect to this set (i.e., as if the butterflies were exactly at all points of \(S\) ). We call a set \(S \subset \mathcal{Q}\) stable if it contains no lonely points. In what follows, we are interested only in those stable sets whose complements in \(\mathcal{Q}\) are finite, because one can easily see that only a finite number of butterflies can fly away in each minute.
If the initial set \(\mathcal{Q}\) of butterflies contains some stable set \(S\) , then clearly no butterfly of this set will fly away. On the other hand, the set \(\mathcal{F}\) of all butterflies at the end of the process is stable. This means that \(\mathcal{F}\) is the largest (with respect to inclusion) stable set within \(\mathcal{Q}\) , and we are about to describe this set.
2. A description of a final set. The following notion will be useful. Let \(\mathcal{U} = \{\vec{u}_{1}, \vec{u}_{2}, \ldots , \vec{u}_{d}\}\) be a set of \(d\) pairwise non-parallel lattice vectors, each having a positive \(x\) - and a negative \(y\) -coordinate. Assume that they are numbered in increasing order according to slope. We now define a \(\mathcal{U}\) -curve to be the broken line \(p_{0}p_{1} \ldots p_{d}\) such that \(p_{0} \in \mathcal{R}_{\mathrm{V}}\) , \(p_{d} \in \mathcal{R}_{\mathrm{H}}\) , and \(\overrightarrow{p_{i - 1}p_{i}} = \vec{u}_{i}\) for all \(i = 1, 2, \ldots , d\) (see the Figure below to the left).

<center>Construction of \(\mathcal{U}\) -curve </center>

<center>Construction of \(\mathcal{D}\) </center>
Now, let \(\mathcal{K}_{n} = \{(i,j):1\leqslant i\leqslant n, - n\leqslant j\leqslant - 1\}\) . Consider all the rays emerging at \(O\) and passing through a point from \(\mathcal{K}_{n}\) ; number them as \(r_{1},\ldots ,r_{m}\) in increasing order according to slope. Let \(A_{i}\) be the lattice point in \(r_{i}\cap \mathcal{K}_{n}\) farthest from \(O\) , set \(k_{i} = |r_{i}\cap \mathcal{K}_{n}|\) , let \(\vec{v}_{i} = \overrightarrow{OA_{i}}\) , and finally denote \(\mathcal{V} = \{\vec{v}_{i}:1\leqslant i\leqslant m\}\) ; see the Figure above to the right. We will concentrate on the \(\mathcal{V}\) - curve \(d_{0}d_{1}\ldots d_{m}\) ; let \(\mathcal{D}\) be the set of all lattice points \(p\) such that \(p\geqslant p^{\prime}\) for some (not necessarily lattice) point \(p^{\prime}\) on the \(\mathcal{V}\) - curve. In fact, we will show that \(\mathcal{D} = \mathcal{F}\) .
Clearly, the \(\mathcal{V}\) - curve is symmetric in the line \(y = x\) . Denote by \(D\) the convex hull of \(\mathcal{D}\) .
3. We prove that the set \(\mathcal{D}\) contains all stable sets. Let \(\mathcal{S}\subset \mathcal{Q}\) be a stable set (recall that it is assumed to be up-right closed and to have a finite complement in \(\mathcal{Q}\) ). Denote by \(S\) its convex hull; clearly, the vertices of \(S\) are lattice points. The boundary of \(S\) consists of two rays (horizontal and vertical ones) along with some \(\mathcal{V}_{*}\) -curve for some set of lattice vectors \(\mathcal{V}_{*}\) .
Claim 1. For every \(\vec{v}_{i}\in \mathcal{V}\) , there is a \(\vec{v}_{i}^{*}\in \mathcal{V}_{*}\) co-directed with \(\vec{v}_{i}\) with \(|\vec{v}_{i}^{*}|\geqslant |\vec{v}_{i}|\) .
Proof. Let \(\ell\) be the supporting line of \(S\) parallel to \(\vec{v}_{i}\) (i.e., \(\ell\) contains some point of \(S\) , and the set \(S\) lies on one side of \(\ell\) ). Take any point \(b\in \ell \cap \mathcal{S}\) and consider \(N(b)\) . The line \(\ell\) splits the set \(N(b)\setminus \ell\) into two congruent parts, one having an empty intersection with \(\mathcal{S}\) . Hence, in order for \(b\) not to be lonely, at least half of the set \(\ell \cap N(b)\) (which contains \(2k_{i}\) points) should lie in \(\mathcal{S}\) . Thus, the boundary of \(S\) contains a segment \(\ell \cap S\) with at least \(k_{i} + 1\) lattice points (including \(b\) ) on it; this segment corresponds to the required vector \(\vec{v}_{i}^{*}\in \mathcal{V}_{*}\) . \(\square\)

<center>Proof of Claim 1 </center>

<center>Proof of Claim 2 </center>
Claim 2. Each stable set \(\mathcal{S}\subseteq \mathcal{Q}\) lies in \(\mathcal{D}\) .
Proof. To show this, it suffices to prove that the \(\mathcal{V}_{*}\) - curve lies in \(D\) , i.e., that all its vertices do so. Let \(p^{\prime}\) be an arbitrary vertex of the \(\mathcal{V}_{*}\) - curve; \(p^{\prime}\) partitions this curve into two parts, \(\mathcal{X}\) (being down-right of \(p^{\prime}\) ) and \(\mathcal{Y}\) (being up-left of \(p^{\prime}\) ). The set \(\mathcal{V}\) is split now into two parts: \(\mathcal{V}_{\mathcal{X}}\) consisting of those \(\vec{v}_{i}\in \mathcal{V}\) for which \(\vec{v}_{i}^{*}\) corresponds to a segment in \(\mathcal{X}\) , and a similar part \(\mathcal{V}_{\mathcal{Y}}\) . Notice that the \(\mathcal{V}\) - curve consists of several segments corresponding to \(\mathcal{V}_{\mathcal{X}}\) , followed by those corresponding to \(\mathcal{V}_{\mathcal{Y}}\) . Hence there is a vertex \(p\) of the \(\mathcal{V}\) - curve separating \(\mathcal{V}_{\mathcal{X}}\) from \(\mathcal{V}_{\mathcal{Y}}\) . Claim 1 now yields that \(p^{\prime}\geqslant p\) , so \(p^{\prime}\in \mathcal{D}\) , as required. \(\square\)
Claim 2 implies that the final set \(\mathcal{F}\) is contained in \(\mathcal{D}\) .
4. \(\mathcal{D}\) is stable, and its comfortable points are known. Recall the definitions of \(r_{i}\) ; let \(r_{i}^{\prime}\) be the ray complementary to \(r_{i}\) . By our definitions, the set \(N(O)\) contains no points between the rays \(r_{i}\) and \(r_{i + 1}\) , as well as between \(r_{i}^{\prime}\) and \(r_{i + 1}^{\prime}\) .
Claim 3. In the set \(\mathcal{D}\) , all lattice points of the \(\mathcal{V}\) - curve are comfortable.
Proof. Let \(p\) be any lattice point of the \(\mathcal{V}\) - curve, belonging to some segment \(d_{i}d_{i + 1}\) . Draw the line \(\ell\) containing this segment. Then \(\ell \cap \mathcal{D}\) contains exactly \(k_{i} + 1\) lattice points, all of which lie in \(N(p)\) except for \(p\) . Thus, exactly half of the points in \(N(p)\cap \ell\) lie in \(\mathcal{D}\) . It remains to show that all points of \(N(p)\) above \(\ell\) lie in \(\mathcal{D}\) (recall that all the points below \(\ell\) lack this property).
Notice that each vector in \(\mathcal{V}\) has one coordinate greater than \(n / 2\) ; thus the neighborhood of \(p\) contains parts of at most two segments of the \(\mathcal{V}\) - curve succeeding \(d_{i}d_{i + 1}\) , as well as at most two of those preceding it.
The angles formed by these consecutive segments are obtained from those formed by \(r_{j}\) and \(r_{j - 1}^{\prime}\) (with \(i - 1\leqslant j\leqslant i + 2\) ) by shifts; see the Figure below. All the points in \(N(p)\) above \(\ell\) which could lie outside \(\mathcal{D}\) lie in shifted angles between \(r_{j}\) , \(r_{j + 1}\) or \(r_{j}^{\prime}\) , \(r_{j - 1}^{\prime}\) . But those angles, restricted to \(N(p)\) , have no lattice points due to the above remark. The claim is proved. \(\square\)

<center>Proof of Claim 3 </center>
Claim 4. All the points of \(\mathcal{D}\) which are not on the boundary of \(D\) are crowded.
Proof. Let \(p\in \mathcal{D}\) be such a point. If it is to the up-right of some point \(p^{\prime}\) on the curve, then the claim is easy: the shift of \(N(p^{\prime})\cap \mathcal{D}\) by \(\overrightarrow{p^{\prime}p}\) is still in \(\mathcal{D}\) , and \(N(p)\) contains at least one more point of \(\mathcal{D}\) , either below or to the left of \(p\) . So, we may assume that \(p\) lies in a right triangle constructed on some hypotenuse \(d_{i}d_{i + 1}\) . Notice here that \(d_{i},d_{i + 1}\in N(p)\) .
Draw a line \(\ell \parallel d_{i}d_{i + 1}\) through \(p\) , and draw a vertical line \(h\) through \(d_{i}\) ; see Figure below. Let \(\mathcal{D}_{\mathrm{L}}\) and \(\mathcal{D}_{\mathrm{R}}\) be the parts of \(\mathcal{D}\) lying to the left and to the right of \(h\) , respectively (points of \(\mathcal{D}\cap h\) lie in both parts).

<center>Proof of Claim 4 </center>
Notice that the vectors \(\overrightarrow{d_{i}p}\) , \(\overrightarrow{d_{i + 1}d_{i + 2}}\) , \(\overrightarrow{d_{i}d_{i + 1}}\) , \(\overrightarrow{d_{i - 1}d_{i}}\) , and \(\overrightarrow{pd_{i + 1}}\) are arranged in non- increasing order by slope. This means that \(\mathcal{D}_{\mathrm{L}}\) shifted by \(\overrightarrow{d_{i}p}\) still lies in \(\mathcal{D}\) , as well as \(\mathcal{D}_{\mathrm{R}}\) shifted by \(\overrightarrow{d_{i + 1}p}\) . As we have seen in the proof of Claim 3, these two shifts cover all points of \(N(p)\) above \(\ell\) , along with those on \(\ell\) to the left of \(p\) . Since \(N(p)\) contains also \(d_{i}\) and \(d_{i + 1}\) , the point \(p\) is crowded. \(\square\)
Thus, we have proved that \(\mathcal{D} = \mathcal{F}\) , and have shown that the lattice points on the \(\mathcal{V}\) - curve are exactly the comfortable points of \(\mathcal{D}\) . It remains to find their number.
Recall the definition of \(\mathcal{K}_{n}\) (see the Figure on the first page of the solution). Each segment \(d_{i}d_{i + 1}\) contains \(k_{i}\) lattice points different from \(d_{i}\) . Taken over all \(i\) , these points exhaust all the lattice points on the \(\mathcal{V}\) - curve, except for \(d_{0}\) , and thus the number of lattice points on the \(\mathcal{V}\) - curve is \(1 + \sum_{i = 1}^{m}k_{i}\) . On the other hand, \(\sum_{i = 1}^{m}k_{i}\) is just the number of points in \(\mathcal{K}_{n}\) , so it equals \(n^{2}\) . Hence the answer to the problem is \(n^{2} + 1\) .
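This final count is easy to sanity-check by computer. The sketch below (illustrative only, not part of the proof) regroups the points of \(\mathcal{K}_{n}\) by the rays \(r_{i}\) through the origin and confirms that \(1 + \sum_{i = 1}^{m}k_{i} = n^{2} + 1\) for small \(n\):

```python
from math import gcd

def comfortable_count(n):
    # K_n: lattice points (i, j) with 1 <= i <= n and -n <= j <= -1
    K = [(i, j) for i in range(1, n + 1) for j in range(-n, 0)]
    # Group the points of K_n by the ray from the origin through them:
    # two points lie on the same ray iff they share a primitive direction.
    rays = {}
    for (i, j) in K:
        g = gcd(i, -j)
        rays.setdefault((i // g, j // g), []).append((i, j))
    # k_i = number of points of K_n on the i-th ray; each segment of the
    # V-curve contributes k_i lattice points besides its left endpoint.
    ks = [len(pts) for pts in rays.values()]
    return 1 + sum(ks)

for n in range(1, 8):
    assert comfortable_count(n) == n * n + 1
print([comfortable_count(n) for n in range(1, 6)])  # [2, 5, 10, 17, 26]
```

Grouping by primitive direction is exactly the partition of \(\mathcal{K}_{n}\) into the rays \(r_{1},\ldots ,r_{m}\), so the identity \(\sum_i k_i = n^2\) holds by construction; the script merely makes the bookkeeping explicit.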
Comment 1. The assumption that the process eventually stops is unnecessary for the problem, as one can see that, in fact, the process stops for every \(n \geqslant 1\) . Indeed, the proofs of Claims 3 and 4 do not rely essentially on this assumption, and together they yield that the set \(\mathcal{D}\) is stable. So, only butterflies that are not in \(\mathcal{D}\) may fly away, and this takes only a finite time.
This assumption has been inserted into the problem statement in order to avoid several technical details regarding finiteness issues. It may also simplify several other arguments.
Comment 2. The description of the final set \(\mathcal{F}(= \mathcal{D})\) seems to be crucial for the solution; the Problem Selection Committee is not aware of any solution that completely avoids such a description.
On the other hand, after the set \(\mathcal{D}\) has been defined, the further steps may be performed in several ways. For example, in order to prove that all butterflies outside \(\mathcal{D}\) will fly away, one may argue as follows. (Here we will also make use of the assumption that the process eventually stops.)
First of all, notice that the process can be modified in the following manner: Each minute, exactly one of the lonely butterflies flies away, until there are no more lonely butterflies. The modified process necessarily stops at the same state as the initial one. Indeed, one may observe, as in the solution above, that the (unique) largest stable set is still the final set for the modified process.
Thus, in order to prove our claim, it suffices to indicate an order in which the butterflies should fly away in the new process; if we are able to exhaust the whole set \(\mathcal{Q} \setminus \mathcal{D}\) , we are done.
Let \(\mathcal{C}_0 = d_0d_1\ldots d_m\) be the \(\mathcal{V}\) - curve. Take its copy \(\mathcal{C}\) and shift it downwards so that \(d_0\) comes to some point below the origin \(O\) . Now we start moving \(\mathcal{C}\) upwards continuously, until it comes back to its initial position \(\mathcal{C}_0\) . At each moment when \(\mathcal{C}\) meets some lattice points, we convince all the butterflies at those points to fly away in a certain order. We will now show that this is always possible, which will finish our argument for the claim.
Let \(\mathcal{C}' = d_0'd_1'\ldots d_m'\) be a position of \(\mathcal{C}\) when it meets some butterflies. We may assume that all butterflies under this current position of \(\mathcal{C}\) have already been convinced and have flown away. Consider the lowest butterfly \(b\) on \(\mathcal{C}'\) . Let \(d_i'd_{i + 1}'\) be the segment it lies on; we choose \(i\) so that \(b \neq d_{i + 1}'\) (this is possible because \(\mathcal{C}\) has not yet reached \(\mathcal{C}_0\) ).
Draw a line \(\ell\) containing the segment \(d_i'd_{i + 1}'\) . Then all the butterflies in \(N(b)\) are situated on or above \(\ell\) ; moreover, those on \(\ell\) all lie on the segment \(d_i'd_{i + 1}'\) . But this segment now contains at most \(k_i\) butterflies (including \(b\) ), since otherwise some butterfly would have to occupy \(d_{i + 1}'\) , which is impossible by the choice of \(b\) . Thus, \(b\) is lonely and hence may be convinced to fly away.
After \(b\) has flown away, we switch to the lowest of the remaining butterflies on \(\mathcal{C}'\) , and so on.
Claims 3 and 4 also allow some different proofs which are not presented here.
|
IMOSL-2017-G1
|
Let \(ABCDE\) be a convex pentagon such that \(AB = BC = CD\) , \(\angle EAB = \angle BCD\) , and \(\angle EDC = \angle CBA\) . Prove that the perpendicular line from \(E\) to \(BC\) and the line segments \(AC\) and \(BD\) are concurrent.
|
Solution 1. Throughout the solution, we refer to \(\angle A\) , \(\angle B\) , \(\angle C\) , \(\angle D\) , and \(\angle E\) as internal angles of the pentagon \(A B C D E\) . Let the perpendicular bisectors of \(A C\) and \(B D\) , which pass respectively through \(B\) and \(C\) , meet at point \(I\) . Then \(B D \perp C I\) and, similarly, \(A C \perp B I\) . Hence \(A C\) and \(B D\) meet at the orthocenter \(H\) of the triangle \(B I C\) , and \(I H \perp B C\) . It remains to prove that \(E\) lies on the line \(I H\) or, equivalently, \(E I \perp B C\) .
Lines \(I B\) and \(I C\) bisect \(\angle B\) and \(\angle C\) , respectively. Since \(I A = I C\) , \(I B = I D\) , and \(A B = B C = C D\) , the triangles \(I A B\) , \(I C B\) and \(I C D\) are congruent. Hence \(\angle I A B = \angle I C B = \angle C / 2 = \angle A / 2\) , so the line \(I A\) bisects \(\angle A\) . Similarly, the line \(I D\) bisects \(\angle D\) . Finally, the line \(I E\) bisects \(\angle E\) because \(I\) lies on all the other four internal bisectors of the angles of the pentagon.
The sum of the internal angles in a pentagon is \(540^{\circ}\) , so
\[\angle E = 540^{\circ} - 2\angle A - 2\angle B.\]
In quadrilateral \(A B I E\) ,
\[\angle B I E = 360^{\circ} - \angle E A B - \angle A B I - \angle A E I = 360^{\circ} - \angle A - \frac{1}{2}\angle B - \frac{1}{2}\angle E\] \[\qquad = 360^{\circ} - \angle A - \frac{1}{2}\angle B - (270^{\circ} - \angle A - \angle B)\] \[\qquad = 90^{\circ} + \frac{1}{2}\angle B = 90^{\circ} + \angle I B C,\]
which means that \(E I \perp B C\) , completing the proof.

Solution 2. We present another proof of the fact that \(E\) lies on line \(I H\) . Since all five internal bisectors of \(A B C D E\) meet at \(I\) , this pentagon has an inscribed circle with center \(I\) . Let this circle touch side \(B C\) at \(T\) .
Applying Brianchon's theorem to the (degenerate) hexagon \(A B T C D E\) we conclude that \(A C\) , \(B D\) and \(E T\) are concurrent, so point \(E\) also lies on line \(I H T\) , completing the proof.
Solution 3. We present yet another proof that \(EI\perp BC\) . In pentagon \(ABCDE\) we have \(\angle E< 180^{\circ}\) , and hence \(\angle A + \angle B + \angle C + \angle D > 360^{\circ}\) . Then \(\angle A + \angle B = \angle C + \angle D > 180^{\circ}\) , so rays \(EA\) and \(CB\) meet at a point \(P\) , and rays \(BC\) and \(ED\) meet at a point \(Q\) . Now,
\[\angle P B A = 180^{\circ} - \angle B = 180^{\circ} - \angle D = \angle Q D C\]
and, similarly, \(\angle P A B = \angle Q C D\) . Since \(A B = C D\) , the triangles \(P A B\) and \(Q C D\) are congruent with the same orientation. Moreover, \(P Q E\) is isosceles with \(E P = E Q\) .

In Solution 1 we have proved that triangles \(IAB\) and \(ICD\) are also congruent with the same orientation. Then we conclude that quadrilaterals \(PBIA\) and \(QDIC\) are congruent, which implies \(IP = IQ\) . Then \(EI\) is the perpendicular bisector of \(PQ\) ; since \(P\) and \(Q\) lie on line \(BC\) , this gives \(EI\perp BC\) .
Comment. Even though all three solutions used the point \(I\) , there are solutions that do not need it. We present an outline of such a solution: if \(J\) is the incenter of \(\triangle Q C D\) (with \(P\) and \(Q\) as defined in Solution 3), then a simple angle chasing shows that triangles \(C J D\) and \(B H C\) are congruent. Then if \(S\) is the projection of \(J\) onto side \(C D\) and \(T\) is the orthogonal projection of \(H\) onto side \(B C\) , one can verify that
\[Q T = Q C + C T = Q C + D S = Q C + \frac{C D + D Q - Q C}{2} = \frac{P B + B C + Q C}{2} = \frac{P Q}{2},\]
so \(T\) is the midpoint of \(P Q\) , and \(E\) , \(H\) and \(T\) all lie on the perpendicular bisector of \(P Q\) .
|
IMOSL-2017-G2
|
Let \(R\) and \(S\) be distinct points on circle \(\Omega\) , and let \(t\) denote the tangent line to \(\Omega\) at \(R\) . Point \(R'\) is the reflection of \(R\) with respect to \(S\) . A point \(I\) is chosen on the smaller arc \(RS\) of \(\Omega\) so that the circumcircle \(\Gamma\) of triangle \(ISR'\) intersects \(t\) at two different points. Denote by \(A\) the common point of \(\Gamma\) and \(t\) that is closest to \(R\) . Line \(AI\) meets \(\Omega\) again at \(J\) . Show that \(JR'\) is tangent to \(\Gamma\) .
|
Solution 1. In the circles \(\Omega\) and \(\Gamma\) we have \(\angle JRS = \angle JIS = \angle AR'S\) . On the other hand, since \(RA\) is tangent to \(\Omega\) , we get \(\angle SJR = \angle SRA\) . So the triangles \(ARR'\) and \(SJR\) are similar, and
\[\frac{R'R}{RJ} = \frac{AR'}{SR} = \frac{AR'}{SR'}\]
The last relation, together with \(\angle AR'S = \angle JRR'\) , yields \(\triangle ASR' \sim \triangle R'JR\) , hence \(\angle SAR' = \angle RR'J\) . It follows that \(JR'\) is tangent to \(\Gamma\) at \(R'\) .

<center>Solution 1 </center>

<center>Solution 2 </center>
Solution 2. As in Solution 1, we notice that \(\angle JRS = \angle JIS = \angle AR'S\) , so we have \(RJ \parallel AR'\) .
Let \(A'\) be the reflection of \(A\) about \(S\) ; then \(ARA'R'\) is a parallelogram with center \(S\) , and hence the point \(J\) lies on the line \(RA'\) .
From \(\angle SR'A' = \angle SRA = \angle SJR\) we get that the points \(S, J, A', R'\) are concyclic. This proves that \(\angle SR'J = \angle SA'J = \angle SA'R = \angle SAR'\) , so \(JR'\) is tangent to \(\Gamma\) at \(R'\) .
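The configuration is easy to replay numerically; the sketch below (purely illustrative, with an assumed choice of \(\Omega\) as the unit circle and assumed positions of \(S\) and \(I\)) checks that the distance from the center of \(\Gamma\) to line \(JR'\) equals the radius of \(\Gamma\):

```python
import math

def circumcenter(p, q, r):
    # Circumcenter via the standard determinant formula.
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Omega is the unit circle, R = (1, 0), and the tangent t at R is x = 1.
R = (1.0, 0.0)
S = (math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3))  # assumed S
I = (math.cos(math.pi / 3), math.sin(math.pi / 3))          # I on smaller arc RS
Rp = (2 * S[0] - R[0], 2 * S[1] - R[1])                     # R' = reflection of R in S

G = circumcenter(I, S, Rp)     # center of Gamma
rad = math.dist(G, S)          # radius of Gamma

# A = the intersection of Gamma with the line x = 1 that is closest to R.
h = math.sqrt(rad**2 - (1.0 - G[0])**2)
A = min([(1.0, G[1] - h), (1.0, G[1] + h)], key=lambda p: math.dist(p, R))

# J = second intersection of line AI with the unit circle Omega.
dx, dy = A[0] - I[0], A[1] - I[1]
s = -2 * (I[0] * dx + I[1] * dy) / (dx * dx + dy * dy)
J = (I[0] + s * dx, I[1] + s * dy)

# JR' is tangent to Gamma iff dist(G, line JR') equals the radius of Gamma.
nx, ny = Rp[1] - J[1], J[0] - Rp[0]   # a normal vector of line JR'
dist = abs(nx * (G[0] - J[0]) + ny * (G[1] - J[1])) / math.hypot(nx, ny)
print(abs(dist - rad) < 1e-9)         # True
```

The second intersection \(J\) is found by substituting the parametrized line \(I + s(A - I)\) into \(x^2 + y^2 = 1\) and using that \(s = 0\) is already a root.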
|
IMOSL-2017-G3
|
Let \(O\) be the circumcenter of an acute scalene triangle \(ABC\) . Line \(OA\) intersects the altitudes of \(ABC\) through \(B\) and \(C\) at \(P\) and \(Q\) , respectively. The altitudes meet at \(H\) . Prove that the circumcenter of triangle \(PQH\) lies on a median of triangle \(ABC\) .
|
Solution. Suppose, without loss of generality, that \(AB < AC\) . We have \(\angle PQH = 90^\circ - \angle QAB = 90^\circ - \angle OAB = \frac{1}{2}\angle AOB = \angle ACB\) , and similarly \(\angle QPH = \angle ABC\) . Thus triangles \(ABC\) and \(HPQ\) are similar. Let \(\Omega\) and \(\omega\) be the circumcircles of \(ABC\) and \(HPQ\) , respectively. Since \(\angle AHP = 90^\circ - \angle HAC = \angle ACB = \angle HQP\) , line \(AH\) is tangent to \(\omega\) .

Let \(T\) be the center of \(\omega\) and let lines \(AT\) and \(BC\) meet at \(M\) . We will take advantage of the similarity between \(ABC\) and \(HPQ\) and the fact that \(AH\) is tangent to \(\omega\) at \(H\) , with \(A\) on line \(PQ\) . Consider the corresponding tangent \(AS\) to \(\Omega\) , with \(S \in BC\) . Then \(S\) and \(A\) correspond to each other in \(\triangle ABC \sim \triangle HPQ\) , and therefore \(\angle OSM = \angle OAT = \angle OAM\) . Hence quadrilateral \(SAOM\) is cyclic, and since the tangent line \(AS\) is perpendicular to \(AO\) , \(\angle OMS = 180^\circ - \angle OAS = 90^\circ\) . This means that \(M\) is the orthogonal projection of \(O\) onto \(BC\) , which is its midpoint. So \(T\) lies on median \(AM\) of triangle \(ABC\) .
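The construction can be replayed numerically; the coordinates below are an assumed example of an acute scalene triangle, and the script (an illustrative sketch, not part of the proof) checks that the circumcenter \(T\) of \(PQH\) is collinear with \(A\) and the midpoint \(M\) of \(BC\):

```python
def circumcenter(p, q, r):
    # Circumcenter via the standard determinant formula.
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def line_intersect(p1, p2, p3, p4):
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a, b = x1 * y2 - y1 * x2, x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def foot(p, q, r):  # foot of perpendicular from p onto line qr
    qr = (r[0] - q[0], r[1] - q[1])
    t = ((p[0] - q[0]) * qr[0] + (p[1] - q[1]) * qr[1]) / (qr[0]**2 + qr[1]**2)
    return (q[0] + t * qr[0], q[1] + t * qr[1])

# An acute scalene triangle (assumed example coordinates).
A, B, C = (1.0, 3.0), (0.0, 0.0), (4.0, 0.0)
O = circumcenter(A, B, C)

hB, hC = foot(B, A, C), foot(C, A, B)   # feet of the altitudes from B and C
P = line_intersect(O, A, B, hB)         # line OA meets the altitude from B
Q = line_intersect(O, A, C, hC)         # line OA meets the altitude from C
H = line_intersect(B, hB, C, hC)        # the altitudes meet at H

T = circumcenter(P, Q, H)               # circumcenter of triangle PQH
M = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)
cross = (M[0] - A[0]) * (T[1] - A[1]) - (M[1] - A[1]) * (T[0] - A[0])
print(abs(cross) < 1e-9)                # True: T lies on the median AM
```

Collinearity is tested via the cross product of \(\overrightarrow{AM}\) and \(\overrightarrow{AT}\), which vanishes exactly when \(T\) lies on line \(AM\).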
|
IMOSL-2017-G4
|
In triangle \(A B C\) , let \(\omega\) be the excircle opposite \(A\) . Let \(D\) , \(E\) , and \(F\) be the points where \(\omega\) is tangent to lines \(B C\) , \(C A\) , and \(A B\) , respectively. The circle \(A E F\) intersects line \(B C\) at \(P\) and \(Q\) . Let \(M\) be the midpoint of \(A D\) . Prove that the circle \(M P Q\) is tangent to \(\omega\) .
|
Solution 1. Denote by \(\Omega\) the circle \(A E F P Q\) , and denote by \(\gamma\) the circle \(P Q M\) . Let the line \(A D\) meet \(\omega\) again at \(T \neq D\) . We will show that \(\gamma\) is tangent to \(\omega\) at \(T\) .
We first prove that points \(P,Q,M,T\) are concyclic. Let \(A^{\prime}\) be the center of \(\omega\) . Since \(A^{\prime}E\perp AE\) and \(A^{\prime}F\perp AF\) , \(A A^{\prime}\) is a diameter in \(\Omega\) . Let \(N\) be the midpoint of \(D T\) ; from \(A^{\prime}D = A^{\prime}T\) we can see that \(\angle A^{\prime}N A = 90^{\circ}\) and therefore \(N\) also lies on the circle \(\Omega\) . Now, from the power of \(D\) with respect to the circles \(\gamma\) and \(\Omega\) we get
\[D P\cdot D Q = D A\cdot D N = 2D M\cdot {\frac{D T}{2}} = D M\cdot D T,\]
so \(P,Q,M,T\) are concyclic.
If \(E F\parallel B C\) , then \(A B C\) is isosceles and the problem is now immediate by symmetry. Otherwise, let the tangent line to \(\omega\) at \(T\) meet line \(B C\) at point \(R\) . The tangent line segments \(R D\) and \(R T\) have the same length, so \(A^{\prime}R\) is the perpendicular bisector of \(D T\) ; since \(N D = N T\) , \(N\) lies on this perpendicular bisector.
In right triangle \(A^{\prime}R D\) , \(R D^{2} = R N\cdot R A^{\prime} = R P\cdot R Q\) , in which the last equality was obtained from the power of \(R\) with respect to \(\Omega\) . Hence \(R T^{2} = R P\cdot R Q\) , which implies that \(R T\) is also tangent to \(\gamma\) . Because \(R T\) is a common tangent to \(\omega\) and \(\gamma\) , these two circles are tangent at \(T\) .

Solution 2. After proving that \(P,Q,M,T\) are concyclic, we finish the problem in a different fashion. We only consider the case in which \(E F\) and \(B C\) are not parallel. Let lines \(P Q\) and \(E F\) meet at point \(R\) . Since \(P Q\) and \(E F\) are radical axes of \(\Omega ,\gamma\) and \(\omega ,\gamma\) , respectively, \(R\) is the radical center of these three circles.
With respect to the circle \(\omega\) , the line \(D R\) is the polar of \(D\) , and the line \(E F\) is the polar of \(A\) . So the pole of line \(A D T\) is \(D R\cap E F = R\) , and therefore \(R T\) is tangent to \(\omega\) .
Finally, since \(T\) belongs to \(\gamma\) and \(\omega\) and \(R\) is the radical center of \(\gamma\) , \(\omega\) and \(\Omega\) , line \(R T\) is the radical axis of \(\gamma\) and \(\omega\) , and since it is tangent to \(\omega\) , it is also tangent to \(\gamma\) . Because \(R T\) is a common tangent to \(\omega\) and \(\gamma\) , these two circles are tangent at \(T\) .
Comment. In Solution 2 we defined the point \(R\) from Solution 1 in a different way.
Solution 3. We give an alternative proof that the circles are tangent at the common point \(T\) . Again, we start from the fact that \(P, Q, M, T\) are concyclic. Let point \(O\) be the midpoint of diameter \(AA'\) . Then \(MO\) is the midline of triangle \(ADA'\) , so \(MO \parallel A'D\) . Since \(A'D \perp PQ\) , \(MO\) is perpendicular to \(PQ\) as well.
Since circle \(\Omega\) has center \(O\) and \(MO \perp PQ\) , the line \(MO\) is the perpendicular bisector of the chord \(PQ\) of \(\Omega\) . Thus \(M\) is the midpoint of the arc \(PQ\) of \(\gamma\) , and the tangent line \(m\) to \(\gamma\) at \(M\) is parallel to \(PQ\) .

Consider the homothety with center \(T\) and ratio \(\frac{TM}{TD}\) . It takes \(D\) to \(M\) , and the line \(PQ\) to the line \(m\) . Since the circle that is tangent to a line at a given point and that goes through another given point is unique, this homothety also takes \(\omega\) (tangent to \(PQ\) at \(D\) and going through \(T\) ) to \(\gamma\) (tangent to \(m\) at \(M\) and going through \(T\) ). We conclude that \(\omega\) and \(\gamma\) are tangent at \(T\) .
|
IMOSL-2017-G5
|
Let \(ABCC_{1}B_{1}A_{1}\) be a convex hexagon such that \(AB = BC\) , and suppose that the line segments \(AA_{1}\) , \(BB_{1}\) , and \(CC_{1}\) have the same perpendicular bisector. Let the diagonals \(AC_{1}\) and \(A_{1}C\) meet at \(D\) , and denote by \(\omega\) the circle \(ABC\) . Let \(\omega\) intersect the circle \(A_{1}BC_{1}\) again at \(E \neq B\) . Prove that the lines \(BB_{1}\) and \(DE\) intersect on \(\omega\) .
|
Solution 1. If \(AA_{1} = CC_{1}\) , then the hexagon is symmetric about the line \(BB_{1}\) ; in particular the circles \(ABC\) and \(A_{1}BC_{1}\) are tangent to each other. So \(AA_{1}\) and \(CC_{1}\) must be different. Since the points \(A\) and \(A_{1}\) can be interchanged with \(C\) and \(C_{1}\) , respectively, we may assume \(AA_{1} < CC_{1}\) .
Let \(R\) be the radical center of the circles \(AEBC\) and \(A_{1}EBC_{1}\) and the circumcircle of the symmetric trapezoid \(ACC_{1}A_{1}\) ; that is, \(R\) is the common point of the pairwise radical axes \(AC\) , \(A_{1}C_{1}\) , and \(BE\) . By the symmetry of \(AC\) and \(A_{1}C_{1}\) , the point \(R\) lies on the common perpendicular bisector of \(AA_{1}\) and \(CC_{1}\) , which is the external bisector of \(\angle ADC\) .
Let \(F\) be the second intersection of the line \(DR\) and the circle \(ACD\) . From the power of \(R\) with respect to the circles \(\omega\) and \(ACFD\) we have \(RB \cdot RE = RA \cdot RC = RD \cdot RF\) , so the points \(B, E, D\) and \(F\) are concyclic.
The line \(RDF\) is the external bisector of \(\angle ADC\) , so the point \(F\) bisects the arc \(\overline{CDA}\) of circle \(ACD\) . Since \(AB = BC\) , on circle \(\omega\) the point \(B\) is the midpoint of arc \(\overline{AEC}\) ; let \(M\) be the point diametrically opposite to \(B\) , that is, the midpoint of the opposite arc \(\overline{CA}\) of \(\omega\) . Notice that the points \(B\) , \(F\) and \(M\) lie on the perpendicular bisector of \(AC\) , so they are collinear.

Finally, let \(X\) be the second intersection point of \(\omega\) and the line \(D E\) . Since \(B M\) is a diameter in \(\omega\) , we have \(\angle B X M = 90^{\circ}\) . Moreover,
\[\angle E X M = 180^{\circ} - \angle M B E = 180^{\circ} - \angle F B E = \angle E D F,\]
so \(M X\) and \(F D\) are parallel. Since \(B X\) is perpendicular to \(M X\) and \(B B_{1}\) is perpendicular to \(F D\) , this shows that \(X\) lies on line \(B B_{1}\) .
Solution 2. Define point \(M\) as the point opposite to \(B\) on circle \(\omega\) , and point \(R\) as the intersection of lines \(AC\) , \(A_{1}C_{1}\) and \(BE\) , and show that \(R\) lies on the external bisector of \(\angle ADC\) , like in the first solution.
Since \(B\) is the midpoint of the arc \(\overline{AEC}\) , the line \(BER\) is the external bisector of \(\angle CEA\) . Now we show that the internal angle bisectors of \(\angle ADC\) and \(\angle CEA\) meet on the segment \(AC\) . Let the angle bisector of \(\angle ADC\) meet \(AC\) at \(S\) , and let the angle bisector of \(\angle CEA\) , which is line \(EM\) , meet \(AC\) at \(S'\) . By applying the angle bisector theorem to both internal and external bisectors of \(\angle ADC\) and \(\angle CEA\) ,
\[AS:CS = AD:CD = AR:CR = AE:CE = AS':CS'\]
so indeed \(S = S'\) .
By \(\angle RDS = \angle SER = 90^{\circ}\) the points \(R\) , \(S\) , \(D\) and \(E\) are concyclic.

Now let the lines \(BB_{1}\) and \(DE\) meet at point \(X\) . Notice that \(\angle EXB = \angle EDS\) because both \(BB_{1}\) and \(DS\) are perpendicular to the line \(DR\) . Moreover, \(\angle EDS = \angle ERS\) in circle \(SRDE\) , and \(\angle ERS = \angle EMB\) because \(SR \perp BM\) and \(ER \perp ME\) . Therefore, \(\angle EXB = \angle EMB\) , so indeed the point \(X\) lies on \(\omega\) .
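The whole configuration can be checked numerically. Since a common perpendicular bisector of \(AA_{1}\), \(BB_{1}\), \(CC_{1}\) means exactly that the hexagon is symmetric in that line, the sketch below (assumed coordinates, illustrative only) reflects \(A, B, C\) in the \(y\)-axis, chooses \(A\) and \(C\) equidistant from \(B\) so that \(AB = BC\), and verifies that \(DE \cap BB_{1}\) lies on \(\omega\):

```python
import math

def circumcenter(p, q, r):
    # Circumcenter via the standard determinant formula.
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def line_intersect(p1, p2, p3, p4):
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a, b = x1 * y2 - y1 * x2, x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def foot(p, q, r):  # foot of perpendicular from p onto line qr
    qr = (r[0] - q[0], r[1] - q[1])
    t = ((p[0] - q[0]) * qr[0] + (p[1] - q[1]) * qr[1]) / (qr[0]**2 + qr[1]**2)
    return (q[0] + t * qr[0], q[1] + t * qr[1])

# Convex hexagon ABCC1B1A1 symmetric in the y-axis (assumed coordinates);
# A and C lie at the same distance 2.5 from B, so AB = BC.
B = (3.0, 0.0)
A = (B[0] + 2.5 * math.cos(math.radians(130)), 2.5 * math.sin(math.radians(130)))
C = (B[0] + 2.5 * math.cos(math.radians(-110)), 2.5 * math.sin(math.radians(-110)))
A1, B1, C1 = (-A[0], A[1]), (-B[0], B[1]), (-C[0], C[1])

D = line_intersect(A, C1, A1, C)        # diagonals AC1 and A1C meet at D
Ow = circumcenter(A, B, C)              # center of omega = circle ABC
O2 = circumcenter(A1, B, C1)            # center of circle A1BC1
f = foot(B, Ow, O2)                     # the two intersection points of the
E = (2 * f[0] - B[0], 2 * f[1] - B[1])  # circles are symmetric in the line of centers

X = line_intersect(D, E, B, B1)         # line DE meets line BB1 at X
print(abs(math.dist(X, Ow) - math.dist(B, Ow)) < 1e-9)  # True: X lies on omega
```

The second intersection \(E\) of the two circles is obtained as the reflection of their known common point \(B\) in the line joining the two centers.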
|
IMOSL-2017-G6
|
Let \(n \geq 3\) be an integer. Two regular \(n\) - gons \(\mathcal{A}\) and \(\mathcal{B}\) are given in the plane. Prove that the vertices of \(\mathcal{A}\) that lie inside \(\mathcal{B}\) or on its boundary are consecutive.
(That is, prove that there exists a line separating those vertices of \(\mathcal{A}\) that lie inside \(\mathcal{B}\) or on its boundary from the other vertices of \(\mathcal{A}\) .)
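Before turning to the proofs, the statement is easy to stress-test numerically: for random pairs of regular \(n\)-gons, mark which vertices of \(\mathcal{A}\) lie in \(\mathcal{B}\) and check that the marked set is a single cyclic block. The parameter ranges and the boundary tolerance below are our choices; the script is a brute-force sketch, not a proof.

```python
import math, random

def regular_ngon(n, cx, cy, radius, phase):
    # Vertices of a regular n-gon, listed counterclockwise.
    return [(cx + radius * math.cos(phase + 2 * math.pi * k / n),
             cy + radius * math.sin(phase + 2 * math.pi * k / n))
            for k in range(n)]

def inside(p, poly):
    # p inside or on the boundary of a convex CCW polygon (float tolerance).
    for i in range(len(poly)):
        q, r = poly[i], poly[(i + 1) % len(poly)]
        if (r[0] - q[0]) * (p[1] - q[1]) - (r[1] - q[1]) * (p[0] - q[0]) < -1e-12:
            return False
    return True

def consecutive_cyclically(flags):
    # A cyclic 0/1 sequence is a single block of 1's iff there is at most
    # one "1 followed by 0" transition going around.
    n = len(flags)
    return sum(1 for i in range(n) if flags[i] and not flags[(i + 1) % n]) <= 1

random.seed(0)
for _ in range(1000):
    n = random.randint(3, 12)
    polyA = regular_ngon(n, random.uniform(-2, 2), random.uniform(-2, 2),
                         random.uniform(0.5, 2), random.uniform(0, 2 * math.pi))
    polyB = regular_ngon(n, random.uniform(-2, 2), random.uniform(-2, 2),
                         random.uniform(0.5, 2), random.uniform(0, 2 * math.pi))
    assert consecutive_cyclically([inside(v, polyB) for v in polyA])
print("all trials passed")
```

The point-in-polygon test uses the sign of the cross product against every edge, which is valid precisely because both polygons are convex.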
|
Solution 1. In both solutions, by a polygon we always mean its interior together with its boundary.
We start with finding a regular \(n\) - gon \(\mathcal{C}\) which \((i)\) is inscribed into \(\mathcal{B}\) (that is, all vertices of \(\mathcal{C}\) lie on the perimeter of \(\mathcal{B}\) ); and \((ii)\) is either a translation of \(\mathcal{A}\) , or a homothetic image of \(\mathcal{A}\) with a positive factor.
Such a polygon may be constructed as follows. Let \(O_{A}\) and \(O_{B}\) be the centers of \(\mathcal{A}\) and \(\mathcal{B}\) , respectively, and let \(A\) be an arbitrary vertex of \(\mathcal{A}\) . Let \(\overrightarrow{O_{B}C}\) be the vector co-directional to \(\overrightarrow{O_{A}A}\) , with \(C\) lying on the perimeter of \(\mathcal{B}\) . The rotations of \(C\) around \(O_{B}\) by multiples of \(2\pi /n\) form the required polygon. Indeed, it is regular, inscribed into \(\mathcal{B}\) (due to the rotational symmetry of \(\mathcal{B}\) ), and finally the translation/homothety mapping \(\overrightarrow{O_{A}A}\) to \(\overrightarrow{O_{B}C}\) maps \(\mathcal{A}\) to \(\mathcal{C}\) .
Now we separate two cases.

<center>Construction of \(\mathcal{C}\) </center>

<center>Case 1: Translation </center>
Case 1: \(\mathcal{C}\) is a translation of \(\mathcal{A}\) by a vector \(\vec{v}\) .
Denote by \(t\) the translation transform by vector \(\vec{v}\) . We need to prove that the vertices of \(\mathcal{C}\) which stay in \(\mathcal{B}\) under \(t\) are consecutive. To visualize the argument, we refer the plane to Cartesian coordinates so that the \(x\) - axis is co- directional with \(\vec{v}\) . This way, the notions of right/left and top/bottom are also introduced, according to the \(x\) - and \(y\) - coordinates, respectively.
Let \(B_{\mathrm{T}}\) and \(B_{\mathrm{B}}\) be the top and the bottom vertices of \(\mathcal{B}\) (if several vertices are extremal, we take the rightmost of them). They split the perimeter of \(\mathcal{B}\) into the right part \(B_{\mathrm{R}}\) and the left part \(B_{\mathrm{L}}\) (the vertices \(B_{\mathrm{T}}\) and \(B_{\mathrm{B}}\) are assumed to lie in both parts); each part forms a connected subset of the perimeter of \(\mathcal{B}\) . So the vertices of \(\mathcal{C}\) are also split into two parts \(\mathcal{C}_{\mathrm{L}} \subset B_{\mathrm{L}}\) and \(\mathcal{C}_{\mathrm{R}} \subset B_{\mathrm{R}}\) , each of which consists of consecutive vertices.
Now, all the points in \(B_{\mathrm{R}}\) (and hence in \(\mathcal{C}_{\mathrm{R}}\) ) move out from \(\mathcal{B}\) under \(t\) , since they are the rightmost points of \(\mathcal{B}\) on the corresponding horizontal lines. It remains to prove that the vertices of \(\mathcal{C}_{\mathrm{L}}\) which stay in \(\mathcal{B}\) under \(t\) are consecutive.
For this purpose, let \(C_{1}\) , \(C_{2}\) , and \(C_{3}\) be three vertices in \(\mathcal{C}_{\mathrm{L}}\) such that \(C_{2}\) is between \(C_{1}\) and \(C_{3}\) , and \(t(C_{1})\) and \(t(C_{3})\) lie in \(\mathcal{B}\) ; we need to prove that \(t(C_{2}) \in \mathcal{B}\) as well. Let \(A_{i} = t(C_{i})\) . The line through \(C_{2}\) parallel to \(\vec{v}\) crosses the segment \(C_{1}C_{3}\) to the right of \(C_{2}\) ; this means that this line crosses \(A_{1}A_{3}\) to the right of \(A_{2}\) , so \(A_{2}\) lies inside the triangle \(A_{1}C_{2}A_{3}\) which is contained in \(\mathcal{B}\) . This yields the desired result.
Case 2: \(\mathcal{C}\) is a homothetic image of \(\mathcal{A}\) centered at \(X\) with factor \(k > 0\) .
Denote by \(h\) the homothety mapping \(\mathcal{C}\) to \(\mathcal{A}\) . We now need to prove that the vertices of \(\mathcal{C}\) which stay in \(\mathcal{B}\) after applying \(h\) are consecutive. If \(X \in \mathcal{B}\) , the claim is easy. Indeed, if \(k < 1\) then the vertices of \(\mathcal{A}\) lie on the segments of the form \(XC\) ( \(C\) being a vertex of \(\mathcal{C}\) ) which lie in \(\mathcal{B}\) . If \(k > 1\) , then the vertices of \(\mathcal{A}\) lie on the extensions of such segments \(XC\) beyond \(C\) , and almost all these extensions lie outside \(\mathcal{B}\) . The exceptions may occur only when \(X\) lies on the boundary of \(\mathcal{B}\) , and they may cause one or two vertices of \(\mathcal{A}\) to stay on the boundary of \(\mathcal{B}\) . But even in this case those vertices are still consecutive.
So, from now on we assume that \(X \notin \mathcal{B}\) .
Now, there are two vertices \(B_{\mathrm{T}}\) and \(B_{\mathrm{B}}\) of \(\mathcal{B}\) such that \(\mathcal{B}\) is contained in the angle \(\angle B_{\mathrm{T}}X B_{\mathrm{B}}\) ; if there are several options, say, for \(B_{\mathrm{T}}\) , then we choose the farthest one from \(X\) if \(k > 1\) , and the nearest one if \(k< 1\) . For visualization purposes, we refer the plane to Cartesian coordinates so that the \(y\) - axis is co- directional with \(\overrightarrow{B_{\mathrm{B}}B_{\mathrm{T}}}\) , and \(X\) lies to the left of the line \(B_{\mathrm{T}}B_{\mathrm{B}}\) . Again, the perimeter of \(\mathcal{B}\) is split by \(B_{\mathrm{T}}\) and \(B_{\mathrm{B}}\) into the right part \(B_{\mathrm{R}}\) and the left part \(B_{\mathrm{L}}\) , and the set of vertices of \(\mathcal{C}\) is split into two subsets \(\mathcal{C}_{\mathrm{R}}\subset B_{\mathrm{R}}\) and \(\mathcal{C}_{\mathrm{L}}\subset B_{\mathrm{L}}\) .

<center>Case 2, \(X\) inside \(\mathcal{B}\) </center>

<center>Subcase 2.1: \(k > 1\) </center>
Subcase 2.1: \(k > 1\) .
In this subcase, all points from \(B_{\mathrm{R}}\) (and hence from \(\mathcal{C}_{\mathrm{R}}\) ) move out from \(\mathcal{B}\) under \(h\) , because they are the farthest points of \(\mathcal{B}\) on the corresponding rays emanated from \(X\) . It remains to prove that the vertices of \(\mathcal{C}_{\mathrm{L}}\) which stay in \(\mathcal{B}\) under \(h\) are consecutive.
Again, let \(C_{1}\) , \(C_{2}\) , \(C_{3}\) be three vertices in \(\mathcal{C}_{\mathrm{L}}\) such that \(C_{2}\) is between \(C_{1}\) and \(C_{3}\) , and \(h(C_{1})\) and \(h(C_{3})\) lie in \(\mathcal{B}\) . Let \(A_{i} = h(C_{i})\) . Then the ray \(X C_{2}\) crosses the segment \(C_{1}C_{3}\) beyond \(C_{2}\) , so this ray crosses \(A_{1}A_{3}\) beyond \(A_{2}\) ; this implies that \(A_{2}\) lies in the triangle \(A_{1}C_{2}A_{3}\) , which is contained in \(\mathcal{B}\) .

<center>Subcase 2.2: \(k< 1\) </center>
Subcase 2.2: \(k< 1\) .
This case is completely similar to the previous one. All points from \(B_{\mathrm{L}}\) (and hence from \(\mathcal{C}_{\mathrm{L}}\) ) move out from \(\mathcal{B}\) under \(h\) , because they are the nearest points of \(\mathcal{B}\) on the corresponding
rays emanated from \(X\) . Assume that \(C_{1}\) , \(C_{2}\) , and \(C_{3}\) are three vertices in \(\mathcal{C}_{\mathrm{R}}\) such that \(C_{2}\) lies between \(C_{1}\) and \(C_{3}\) , and \(h(C_{1})\) and \(h(C_{3})\) lie in \(\mathcal{B}\) ; let \(A_{i} = h(C_{i})\) . Then \(A_{2}\) lies on the segment \(X C_{2}\) , and the segments \(X A_{2}\) and \(A_{1}A_{3}\) cross each other. Thus \(A_{2}\) lies in the triangle \(A_{1}C_{2}A_{3}\) , which is contained in \(\mathcal{B}\) .
Comment 1. In fact, Case 1 can be reduced to Case 2 via the following argument.
Assume that \(\mathcal{A}\) and \(\mathcal{C}\) are congruent. Apply to \(\mathcal{A}\) a homothety centered at \(O_{B}\) with a factor slightly smaller than 1 to obtain a polygon \(\mathcal{A}^{\prime}\) . With an appropriately chosen factor, the vertices of \(\mathcal{A}\) which were outside/inside \(\mathcal{B}\) stay outside/inside it, so it suffices to prove our claim for \(\mathcal{A}^{\prime}\) instead of \(\mathcal{A}\) . And now, the polygon \(\mathcal{A}^{\prime}\) is a homothetic image of \(\mathcal{C}\) , so the arguments from Case 2 apply.
Comment 2. After the polygon \(\mathcal{C}\) has been found, the rest of the solution uses only the convexity of the polygons, instead of regularity. Thus, it proves a more general statement:
Assume that \(\mathcal{A}\) , \(\mathcal{B}\) , and \(\mathcal{C}\) are three convex polygons in the plane such that \(\mathcal{C}\) is inscribed into \(\mathcal{B}\) , and \(\mathcal{A}\) can be obtained from it via either translation or positive homothety. Then the vertices of \(\mathcal{A}\) that lie inside \(\mathcal{B}\) or on its boundary are consecutive.
Solution 2. Let \(O_{A}\) and \(O_{B}\) be the centers of \(\mathcal{A}\) and \(\mathcal{B}\) , respectively. Denote \([n] = \{1,2,\ldots ,n\}\) .
We start with introducing appropriate enumerations and notations. Enumerate the sidelines of \(\mathcal{B}\) clockwise as \(\ell_{1},\ell_{2},\ldots ,\ell_{n}\) . Denote by \(\mathcal{H}_{i}\) the half- plane of \(\ell_{i}\) that contains \(\mathcal{B}\) ( \(\mathcal{H}_{i}\) is assumed to contain \(\ell_{i}\) ); by \(B_{i}\) the midpoint of the side belonging to \(\ell_{i}\) ; and finally denote \(\overrightarrow{b_{i}} = \overrightarrow{B_{i}O_{B}}\) . (As usual, the numbering is cyclic modulo \(n\) , so \(\ell_{n + i} = \ell_{i}\) etc.)
Now, choose a vertex \(A_{1}\) of \(\mathcal{A}\) such that the vector \(\overrightarrow{O_{A}A_{1}}\) points "mostly outside \(\mathcal{H}_{1}\) "; strictly speaking, this means that the scalar product \(\langle \overrightarrow{O_{A}A_{1}},\overrightarrow{b_{1}}\rangle\) is minimal. Starting from \(A_{1}\) , enumerate the vertices of \(\mathcal{A}\) clockwise as \(A_{1},A_{2},\ldots ,A_{n}\) ; by the rotational symmetry, the choice of \(A_{1}\) yields that the vector \(\overrightarrow{O_{A}A_{i}}\) points "mostly outside \(\mathcal{H}_{i}\) ", i.e.,
\[\langle \overrightarrow{O_{A}A_{i}},\overrightarrow{b_{i}}\rangle = \min_{j\in [n]}\langle \overrightarrow{O_{A}A_{j}},\overrightarrow{b_{i}}\rangle . \quad (1)\]

<center>Enumerations and notations </center>
We intend to reformulate the problem in more combinatorial terms, for which purpose we introduce the following notion. Say that a subset \(I\subseteq [n]\) is connected if the elements of this set are consecutive in the cyclic order (in other words, if we join each \(i\) with \(i + 1\) mod \(n\) by an edge, this subset is connected in the usual graph sense). Clearly, the union of two connected subsets sharing at least one element is connected too. Next, for any half- plane \(\mathcal{H}\) the indices of vertices of, say, \(\mathcal{A}\) that lie in \(\mathcal{H}\) form a connected set.
To access the problem, we denote
\[M = \{j\in [n]\colon A_{j}\notin \mathcal{B}\} ,\qquad M_{i} = \{j\in [n]\colon A_{j}\notin \mathcal{H}_{i}\} \quad \mathrm{for~}i\in [n].\]
We need to prove that \([n]\setminus M\) is connected, which is equivalent to \(M\) being connected. On the other hand, since \(\mathcal{B} = \bigcap_{i\in [n]}\mathcal{H}_{i}\) , we have \(M = \bigcup_{i\in [n]}M_{i}\) , where the sets \(M_{i}\) are easier to investigate. We will utilize the following properties of these sets; the first one holds by the definition of \(M_{i}\) , along with the above remark.

<center>The sets \(M_{i}\) </center>
Property 1: Each set \(M_{i}\) is connected.
Property 2: If \(M_{i}\) is nonempty, then \(i \in M_{i}\) .
Proof. Indeed, we have
\[j\in M_{i}\iff A_{j}\notin \mathcal{H}_{i}\iff \langle \overrightarrow{B_{i}A_{j}},\overrightarrow{b_{i}}\rangle < 0\iff \langle \overrightarrow{O_{A}A_{j}},\overrightarrow{b_{i}}\rangle < \langle \overrightarrow{O_{A}B_{i}},\overrightarrow{b_{i}}\rangle . \quad (2)\]
The right- hand side of the last inequality does not depend on \(j\) . Therefore, if some \(j\) lies in \(M_{i}\) , then by (1) so does \(i\) . \(\square\)
In view of Property 2, it is useful to define the set
\[M^{\prime} = \{i\in [n]\colon i\in M_{i}\} = \{i\in [n]\colon M_{i}\neq \emptyset \} .\]
Property 3: The set \(M^{\prime}\) is connected.
Proof. To prove this property, we proceed on with the investigation started in (2) to write
\[i\in M^{\prime}\iff i\in M_{i}\iff \langle \overrightarrow{B_{i}A_{i}},\overrightarrow{b_{i}}\rangle < 0\iff \langle \overrightarrow{O_{B}O_{A}},\overrightarrow{b_{i}}\rangle < \langle \overrightarrow{O_{B}B_{i}},\overrightarrow{b_{i}}\rangle +\langle \overrightarrow{A_{i}O_{A}},\overrightarrow{b_{i}}\rangle .\]
The right- hand side of the obtained inequality does not depend on \(i\) , due to the rotational symmetry; denote its constant value by \(\mu\) . Thus, \(i \in M^{\prime}\) if and only if \(\langle \overrightarrow{O_{B}O_{A}},\overrightarrow{b_{i}}\rangle < \mu\) . This condition is in turn equivalent to the fact that \(B_{i}\) lies in a certain (open) half- plane whose boundary line is orthogonal to \(O_{B}O_{A}\) ; thus, it defines a connected set. \(\square\)
Now we can finish the solution. Since \(M^{\prime} \subseteq M\) , we have
\[M = \bigcup_{i\in [n]}M_{i} = M^{\prime}\cup \bigcup_{i\in [n]}M_{i},\]
so \(M\) can be obtained from \(M^{\prime}\) by adding all the sets \(M_{i}\) one by one. All these sets are connected, and each nonempty \(M_{i}\) contains an element of \(M^{\prime}\) (namely, \(i\) ). Thus their union is also connected.
Comment 3. Here we present a way in which one can come up with a solution like the one above.
Assume, for the sake of simplicity, that \(O_{A}\) lies inside \(\mathcal{B}\) . Let us first put onto the plane a very small regular \(n\) - gon \(\mathcal{A}^{\prime}\) centered at \(O_{A}\) and aligned with \(\mathcal{A}\) ; all its vertices lie inside \(\mathcal{B}\) . Now we start blowing it up, looking at the order in which the vertices leave \(\mathcal{B}\) . To go out of \(\mathcal{B}\) , a vertex should cross a certain side of \(\mathcal{B}\) (which is hard to describe), or, equivalently, cross at least one sideline of \(\mathcal{B}\) , and this event is easier to describe. Indeed, the first vertex of \(\mathcal{A}^{\prime}\) to cross \(\ell_{i}\) is the vertex \(A_{i}^{\prime}\) (corresponding to \(A_{i}\) in \(\mathcal{A}\) ); more generally, the vertices \(A_{j}^{\prime}\) cross \(\ell_{i}\) in such an order that the scalar product \(\langle \overrightarrow{O_{A}A_{j}},\overrightarrow{b_{i}}\rangle\) does not decrease. For different indices \(i\) , these orders are just cyclic shifts of each other; and this provides some intuition for the notions and claims from Solution 2.
|
IMOSL-2017-G7
|
A convex quadrilateral \(ABCD\) has an inscribed circle with center \(I\) . Let \(I_{a}\) , \(I_{b}\) , \(I_{c}\) , and \(I_{d}\) be the incenters of the triangles \(DAB\) , \(ABC\) , \(BCD\) , and \(CDA\) , respectively. Suppose that the common external tangents of the circles \(AI_{b}I_{d}\) and \(CI_{b}I_{d}\) meet at \(X\) , and the common external tangents of the circles \(BI_{a}I_{c}\) and \(DI_{a}I_{c}\) meet at \(Y\) . Prove that \(\angle X I Y = 90^{\circ}\) .
|
Solution. Denote by \(\omega_{a}\) , \(\omega_{b}\) , \(\omega_{c}\) and \(\omega_{d}\) the circles \(AI_{b}I_{d}\) , \(BI_{a}I_{c}\) , \(CI_{b}I_{d}\) , and \(DI_{a}I_{c}\) , let their centers be \(O_{a}\) , \(O_{b}\) , \(O_{c}\) and \(O_{d}\) , and let their radii be \(r_{a}\) , \(r_{b}\) , \(r_{c}\) and \(r_{d}\) , respectively.
Claim 1. \(I_{b}I_{d} \perp AC\) and \(I_{a}I_{c} \perp BD\) .
Proof. Let the incircles of triangles \(ABC\) and \(ACD\) be tangent to the line \(AC\) at \(T\) and \(T'\) , respectively. (See the figure to the left.) We have \(AT = \frac{AB + AC - BC}{2}\) in triangle \(ABC\) , \(AT' = \frac{AD + AC - CD}{2}\) in triangle \(ACD\) , and \(AB - BC = AD - CD\) in the quadrilateral \(ABCD\) (since \(ABCD\) has an incircle, the Pitot theorem gives \(AB + CD = BC + AD\) ), so
\[AT = \frac{AC + AB - BC}{2} = \frac{AC + AD - CD}{2} = AT'.\]
This shows \(T = T'\) . As an immediate consequence, \(I_{b}I_{d} \perp AC\) .
The second statement can be shown analogously.
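The tangent-length formula used above can be sanity-checked numerically (this is not part of the proof). The sketch below places a hypothetical triangle in coordinates, computes its incircle, and confirms that the touch point \(T\) on side \(AC\) satisfies \(AT = (AB + AC - BC)/2\).

```python
import math

# Hypothetical triangle with vertices A, B, C (not taken from the problem).
A, B, C = (0.0, 0.0), (7.0, 0.0), (2.0, 5.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

a, b, c = dist(B, C), dist(C, A), dist(A, B)  # sides opposite A, B, C
s = (a + b + c) / 2

# Incenter as the side-length-weighted average of the vertices; inradius by Heron.
I = tuple((a * A[i] + b * B[i] + c * C[i]) / (a + b + c) for i in range(2))
r = math.sqrt((s - a) * (s - b) * (s - c) / s)

# Foot of the perpendicular from I to line AC is the touch point T of the incircle.
ux, uy = (C[0] - A[0]) / b, (C[1] - A[1]) / b  # unit vector along AC
t = (I[0] - A[0]) * ux + (I[1] - A[1]) * uy    # signed length AT
T = (A[0] + t * ux, A[1] + t * uy)

assert abs(dist(I, T) - r) < 1e-9          # T really is the touch point
assert abs(t - (c + b - a) / 2) < 1e-9     # AT = (AB + AC - BC)/2
```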

Claim 2. The points \(O_{a}\) , \(O_{b}\) , \(O_{c}\) and \(O_{d}\) lie on the lines \(AI\) , \(BI\) , \(CI\) and \(DI\) , respectively.
Proof. By symmetry it suffices to prove the claim for \(O_{a}\) . (See the figure to the right above.)
Notice first that the incircles of triangles \(ABC\) and \(ACD\) can be obtained from the incircle of the quadrilateral \(ABCD\) with homothety centers \(B\) and \(D\) , respectively, and homothety factors less than 1, therefore the points \(I_{b}\) and \(I_{d}\) lie on the line segments \(BI\) and \(DI\) , respectively.
As is well- known, in every triangle the altitude and the diameter of the circumcircle starting from the same vertex are symmetric about the angle bisector. By Claim 1, in triangle \(AI_{d}I_{b}\) , the segment \(AT\) is the altitude starting from \(A\) . Since the foot \(T\) lies inside the segment \(I_{b}I_{d}\) , the circumcenter \(O_{a}\) of triangle \(AI_{d}I_{b}\) lies in the angle domain \(I_{b}AI_{d}\) in such a way that \(\angle I_{b}AT = \angle O_{a}AI_{d}\) . The points \(I_{b}\) and \(I_{d}\) are the incenters of triangles \(ABC\) and \(ACD\) , so the lines \(AI_{b}\) and \(AI_{d}\) bisect the angles \(\angle BAC\) and \(\angle CAD\) , respectively. Then
\[\angle O_{a}AD = \angle O_{a}AI_{d} + \angle I_{d}AD = \angle I_{b}AT + \angle I_{d}AD = \frac{1}{2}\angle BAC + \frac{1}{2}\angle CAD = \frac{1}{2}\angle BAD,\]
so \(O_{a}\) lies on the angle bisector of \(\angle BAD\) , that is, on line \(AI\) .
The point \(X\) is the external similitude center of \(\omega_{a}\) and \(\omega_{c}\) ; let \(U\) be their internal similitude center. The points \(O_{a}\) and \(O_{c}\) lie on the perpendicular bisector of the common chord \(I_{b}I_{d}\) of \(\omega_{a}\) and \(\omega_{c}\) , and the two similitude centers \(X\) and \(U\) lie on the same line; by Claim 2, that line is parallel to \(AC\) .

From the similarity of the circles \(\omega_{a}\) and \(\omega_{c}\) , from \(O_{a}I_{b} = O_{a}I_{d} = O_{a}A = r_{a}\) and \(O_{c}I_{b} = O_{c}I_{d} = O_{c}C = r_{c}\) , and from \(A C\parallel O_{a}O_{c}\) we can see that
\[\frac{O_{a}X}{O_{c}X} = \frac{O_{a}U}{O_{c}U} = \frac{r_{a}}{r_{c}} = \frac{O_{a}I_{b}}{O_{c}I_{b}} = \frac{O_{a}I_{d}}{O_{c}I_{d}} = \frac{O_{a}A}{O_{c}C} = \frac{O_{a}I}{O_{c}I}.\]
So the points \(X,U,I_{b},I_{d},I\) lie on the Apollonius circle of the points \(O_{a},O_{c}\) with ratio \(r_{a}:r_{c}\) . In this Apollonius circle \(X U\) is a diameter, and the lines \(I U\) and \(I X\) are respectively the internal and external bisectors of \(\angle O_{a}I O_{c} = \angle A I C\) , according to the angle bisector theorem. Moreover, in the Apollonius circle the diameter \(U X\) is the perpendicular bisector of \(I_{b}I_{d}\) , so the lines \(I X\) and \(I U\) are the internal and external bisectors of \(\angle I_{b}I I_{d} = \angle B I D\) , respectively.
Repeating the same argument for the points \(B,D\) instead of \(A,C\) , we get that the line \(I Y\) is the internal bisector of \(\angle A I C\) and the external bisector of \(\angle B I D\) . Therefore, the lines \(I X\) and \(I Y\) respectively are the internal and external bisectors of \(\angle B I D\) , so they are perpendicular.
Comment. In fact the points \(O_{a}\) , \(O_{b}\) , \(O_{c}\) and \(O_{d}\) lie on the line segments \(A I\) , \(B I\) , \(C I\) and \(D I\) , respectively. For the point \(O_{a}\) this can be shown, for example, by
\[\angle I_{d}O_{a}A + \angle A O_{a}I_{b} = (180^{\circ} - 2\angle O_{a}A I_{d}) + (180^{\circ} - 2\angle I_{b}A O_{a}) = 360^{\circ} - \angle B A D = \angle A D I + \angle D I A + \angle A I B + \angle I B A > \angle I_{d}I A + \angle A I I_{b}.\]
The solution also shows that the line \(I Y\) passes through the point \(U\) , and analogously, \(I X\) passes through the internal similitude center of \(\omega_{b}\) and \(\omega_{d}\) .
|
IMOSL-2017-G8
|
There are 2017 mutually external circles drawn on a blackboard, such that no two are tangent and no three share a common tangent. A tangent segment is a line segment that is a common tangent to two circles, starting at one tangent point and ending at the other one. Luciano is drawing tangent segments on the blackboard, one at a time, so that no tangent segment intersects any other circles or previously drawn tangent segments. Luciano keeps drawing tangent segments until no more can be drawn. Find all possible numbers of tangent segments when he stops drawing.
|
Answer: If there were \(n\) circles, there would always be exactly \(3(n - 1)\) segments; so the only possible answer is \(3 \cdot 2017 - 3 = 6048\) .
Solution 1. First, consider a particular arrangement of circles \(C_1, C_2, \ldots , C_n\) where all the centers are aligned and each \(C_i\) is eclipsed from the other circles by its neighbors – for example, taking \(C_i\) with center \((i^2, 0)\) and radius \(i / 2\) works. Then the only tangent segments that can be drawn are between adjacent circles \(C_i\) and \(C_{i + 1}\) , and exactly three segments can be drawn for each pair. So Luciano will draw exactly \(3(n - 1)\) segments in this case.
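As a quick numeric sanity check (not part of the proof) that this arrangement is admissible: the circles \(C_i\) with center \((i^2, 0)\) and radius \(i/2\) are indeed mutually external and non-tangent, since the distance between the centers of \(C_i\) and \(C_j\) is \(j^2 - i^2 = (j - i)(j + i) \geq i + j > (i + j)/2\) , the sum of the radii. The hypothetical bound \(n = 200\) below just keeps the check fast.

```python
# Check that the circles C_i (center (i^2, 0), radius i/2) are mutually
# external and non-tangent: the distance between centers strictly exceeds
# the sum of the radii for every pair i < j.
n = 200  # hypothetical bound for the check; the inequality holds for all i < j
for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        center_distance = j * j - i * i   # both centers lie on the x-axis
        radius_sum = (i + j) / 2
        assert center_distance > radius_sum
print("all", n * (n - 1) // 2, "pairs are mutually external")
```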

For the general case, start from a final configuration (that is, an arrangement of circles and segments in which no further segments can be drawn). The idea of the solution is to continuously resize and move the circles around the plane, one by one (in particular, making sure we never have 4 circles with a common tangent line), and show that the number of segments drawn remains constant as the picture changes. This way, we can reduce any circle/segment configuration to the particular one mentioned above, and the final number of segments must remain at \(3n - 3\) .
Some preliminary considerations: look at all possible tangent segments joining any two circles. A segment that is tangent to a circle \(A\) can do so in two possible orientations — it may come out of \(A\) in clockwise or counterclockwise orientation. Two segments touching the same circle with the same orientation will never intersect each other. Each pair \((A, B)\) of circles has 4 choices of tangent segments, which can be identified by their orientations — for example, \((A + , B - )\) would be the segment which comes out of \(A\) in clockwise orientation and comes out of \(B\) in counterclockwise orientation. In total, we have \(2n(n - 1)\) possible segments, disregarding intersections.
Now we pick a circle \(C\) and start to continuously move and resize it, maintaining all existing tangent segments according to their identifications, including those involving \(C\) . We can keep our choice of tangent segments until the configuration reaches a transition. We lose nothing if we assume that \(C\) is kept at least \(\epsilon\) units away from any other circle, where \(\epsilon\) is a positive, fixed constant; therefore at a transition either: (1) a currently drawn tangent segment \(t\) suddenly becomes obstructed; or (2) a currently absent tangent segment \(t\) suddenly becomes unobstructed and available.
Claim. A transition can only occur when three circles \(C_1, C_2, C_3\) are tangent to a common line \(\ell\) containing \(t\) , in a way such that the three tangent segments lying on \(\ell\) (joining the three circles pairwise) are not obstructed by any other circles or tangent segments (other than \(C_1, C_2, C_3\) ).
Proof. Since (2) is effectively the reverse of (1), it suffices to prove the claim for (1). Suppose \(t\) has suddenly become obstructed, and let us consider two cases.
Case 1: \(t\) becomes obstructed by a circle

Then the new circle becomes the third circle tangent to \(\ell\) , and no other circles or tangent segments are obstructing \(t\) .
Case 2: \(t\) becomes obstructed by another tangent segment \(t'\)
When two segments \(t\) and \(t'\) first intersect each other, they must do so at a vertex of one of them. But if a vertex of \(t'\) first crossed an interior point of \(t\) , the circle associated to this vertex was already blocking \(t\) (absurd), or is about to (we already took care of this in case 1). So we only have to analyze the possibility of \(t\) and \(t'\) suddenly having a common vertex. However, if that happens, this vertex must belong to a single circle (remember we are keeping different circles at least \(\epsilon\) units apart from each other throughout the moving/resizing process), and therefore they must have different orientations with respect to that circle.

Thus, at the transition moment, both \(t\) and \(t'\) are tangent to the same circle at a common point, that is, they must be on the same line \(\ell\) and hence we again have three circles simultaneously tangent to \(\ell\) . Also no other circles or tangent segments are obstructing \(t\) or \(t'\) (otherwise, they would have disappeared before this transition). \(\square\)
Next, we focus on the maximality of a configuration immediately before and after a transition, where three circles share a common tangent line \(\ell\) . Let the three circles be \(C_1, C_2, C_3\) , ordered by their tangent points. The only possibly affected segments are the ones lying on \(\ell\) , namely \(t_{12}\) , \(t_{23}\) and \(t_{13}\) . Since \(C_2\) is in the middle, \(t_{12}\) and \(t_{23}\) must have different orientations with respect to \(C_2\) . For \(C_1\) , \(t_{12}\) and \(t_{13}\) must have the same orientation, while for \(C_3\) , \(t_{13}\) and \(t_{23}\) must have the same orientation. The figure below summarizes the situation, showing alternative positions for \(C_1\) (namely, \(C_1\) and \(C_1'\) ) and for \(C_3\) ( \(C_3\) and \(C_3'\) ).

Now perturb the diagram slightly so the three circles no longer have a common tangent, while preserving the definition of \(t_{12}\) , \(t_{23}\) and \(t_{13}\) according to their identifications. First note that no other circles or tangent segments can obstruct any of these segments. Also recall that tangent segments joining the same circle at the same orientation will never obstruct each other.
The availability of the tangent segments can now be checked using simple diagrams.
Case 1: \(t_{13}\) passes through \(C_{2}\)

In this case, \(t_{13}\) is not available, but both \(t_{12}\) and \(t_{23}\) are.
Case 2: \(t_{13}\) does not pass through \(C_{2}\)

Now \(t_{13}\) is available, but \(t_{12}\) and \(t_{23}\) obstruct each other, so only one can be drawn.
In any case, exactly 2 out of these 3 segments can be drawn. Thus the maximal number of segments remains constant as we move or resize the circles, and we are done.
Solution 2. First note that all tangent segments lying on the boundary of the convex hull of the circles are always drawn since they do not intersect anything else. Now in the final picture, aside from the \(n\) circles, the blackboard is divided into regions. We can consider the picture as a plane (multi- )graph \(G\) in which the circles are the vertices and the tangent segments are the edges. The idea of this solution is to find a relation between the number of edges and the number of regions in \(G\) ; then, once we prove that \(G\) is connected, we can use Euler's formula to finish the problem.
The boundary of each region consists of 1 or more (for now) simple closed curves, each made of arcs and tangent segments. The segment and the arc might meet smoothly (as in \(S_{i}\) , \(i = 1,2,\ldots ,6\) in the figure below) or not (as in \(P_{1},P_{2},P_{3},P_{4}\) ; call such points sharp corners of the boundary). In other words, if a person walks along the border, her direction would suddenly turn an angle of \(\pi\) at a sharp corner.

Claim 1. The outer boundary \(B_{1}\) of any internal region has at least 3 sharp corners.
Proof. Let a person walk one lap along \(B_{1}\) in the counterclockwise orientation. As she does so, she will turn clockwise as she moves along the circle arcs, and not turn at all when moving along the lines. On the other hand, her total rotation after one lap is \(2\pi\) in the counterclockwise direction! Where could she be turning counterclockwise? She can only do so at sharp corners, and, even then, she turns only an angle of \(\pi\) there. But two sharp corners are not enough, since at least one arc must be present—so she must have gone through at least 3 sharp corners. \(\square\)
Claim 2. Each internal region is simply connected, that is, has only one boundary curve.
Proof. Suppose, by contradiction, that some region has an outer boundary \(B_{1}\) and inner boundaries \(B_{2}, B_{3}, \ldots , B_{m} (m \geq 2)\) . Let \(P_{1}\) be one of the sharp corners of \(B_{1}\) .
Now consider a car starting at \(P_{1}\) and traveling counterclockwise along \(B_{1}\) . It starts in reverse, i.e., it is initially facing the corner \(P_{1}\) . Due to the tangent conditions, the car may travel in a way so that its orientation only changes when it is moving along an arc. In particular, this means the car will sometimes travel forward. For example, if the car approaches a sharp corner when driving in reverse, it would continue to travel forward after the corner, instead of making an immediate half- turn. This way, the orientation of the car only changes in a clockwise direction, since the car always travels clockwise around each arc.
Now imagine there is a laser pointer at the front of the car, pointing directly ahead. Initially, the laser endpoint hits \(P_{1}\) , but, as soon as the car hits an arc, the endpoint moves clockwise around \(B_{1}\) . In fact, the laser endpoint must move continuously along \(B_{1}\) ! Indeed, if the endpoint ever jumped (within \(B_{1}\) , or from \(B_{1}\) to one of the inner boundaries), at the moment of the jump the interrupted laser would be a drawable tangent segment that Luciano missed (see figure below for an example).

Now, let \(P_{2}\) and \(P_{3}\) be the next two sharp corners the car goes through, after \(P_{1}\) (Claim 1 assures their existence). At \(P_{2}\) the car starts moving forward, and at \(P_{3}\) it will start to move in reverse again. So, at \(P_{3}\) , the laser endpoint is at \(P_{3}\) itself. So while the car moved counterclockwise between \(P_{1}\) and \(P_{3}\) , the laser endpoint moved clockwise between \(P_{1}\) and \(P_{3}\) . That means the laser beam itself scanned the whole region within \(B_{1}\) , so it must have crossed some of the inner boundaries, contradicting the fact that its endpoint always stays on \(B_{1}\) . \(\square\)
Claim 3. Each region has exactly 3 sharp corners.
Proof. Consider again the car of the previous claim, with its laser still firmly attached to its front, traveling the same way as before and going through the same consecutive sharp corners \(P_{1}\) , \(P_{2}\) and \(P_{3}\) . As we have seen, as the car goes counterclockwise from \(P_{1}\) to \(P_{3}\) , the laser endpoint goes clockwise from \(P_{1}\) to \(P_{3}\) , so together they cover the whole boundary. If there were a fourth sharp corner \(P_{4}\) , at some moment the laser endpoint would pass through it. But, since \(P_{4}\) is a sharp corner, this means the car must be on the extension of a tangent segment going through \(P_{4}\) . Since the car is not on that segment itself (the car never goes through \(P_{4}\) ), we would have 3 circles with a common tangent line, which is not allowed.

We are now ready to finish the solution. Let \(r\) be the number of internal regions, and \(s\) be the number of tangent segments. Since each tangent segment contributes exactly 2 sharp corners to the diagram, and each region has exactly 3 sharp corners, we must have \(2s = 3r\) . Since the graph corresponding to the diagram is connected, we can use Euler's formula \(n - s + r = 1\) (this is \(V - E + F = 2\) with the unbounded region not counted in \(r\) ) and find \(s = 3n - 3\) and \(r = 2n - 2\) .
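The final count can be double-checked arithmetically: substituting \(r = 2s/3\) from \(2s = 3r\) into \(n - s + r = 1\) gives \(s = 3(n - 1)\) . A minimal sketch of that check:

```python
# Solve the two linear relations 2s = 3r and n - s + r = 1 from the solution:
# substituting r = 2s/3 into Euler's formula gives n - s/3 = 1, i.e. s = 3(n - 1).
def segments_and_regions(n):
    s = 3 * (n - 1)
    r = 2 * (n - 1)
    assert 2 * s == 3 * r and n - s + r == 1
    return s, r

# For the 2017 circles of the problem this recovers the answer 6048.
assert segments_and_regions(2017) == (6048, 4032)
```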
|
IMOSL-2017-N1
|
The sequence \(a_{0},a_{1},a_{2},\ldots\) of positive integers satisfies
\[a_{n + 1} = \left\{ \begin{array}{ll}\sqrt{a_{n}}, & \mathrm{if~}\sqrt{a_{n}}\mathrm{~is~an~integer}\\ a_{n} + 3, & \mathrm{otherwise} \end{array} \right. \quad \mathrm{for~every~}n\geqslant 0.\]
Determine all values of \(a_{0} > 1\) for which there is at least one number \(a\) such that \(a_{n} = a\) for infinitely many values of \(n\) .
|
Answer: All positive multiples of 3.
Solution. Since the value of \(a_{n + 1}\) only depends on the value of \(a_{n}\) , if \(a_{n} = a_{m}\) for two different indices \(n\) and \(m\) , then the sequence is eventually periodic. So we look for the values of \(a_{0}\) for which the sequence is eventually periodic.
Claim 1. If \(a_{n}\equiv - 1\) (mod 3), then, for all \(m > n\) , \(a_{m}\) is not a perfect square. It follows that the sequence is eventually strictly increasing, so it is not eventually periodic.
Proof. A square cannot be congruent to \(- 1\) modulo 3, so \(a_{n}\equiv - 1\) (mod 3) implies that \(a_{n}\) is not a square, therefore \(a_{n + 1} = a_{n} + 3 > a_{n}\) . As a consequence, \(a_{n + 1}\equiv a_{n}\equiv - 1\) (mod 3), so \(a_{n + 1}\) is not a square either. By repeating the argument, we prove that, from \(a_{n}\) on, all terms of the sequence are not perfect squares and are greater than their predecessors, which completes the proof.
Claim 2. If \(a_{n}\not\equiv - 1\) (mod 3) and \(a_{n} > 9\) then there is an index \(m > n\) such that \(a_{m}< a_{n}\) .
Proof. Let \(t^{2}\) be the largest perfect square which is less than \(a_{n}\) . Since \(a_{n} > 9\) , \(t\) is at least 3. The first square in the sequence \(a_{n},a_{n} + 3,a_{n} + 6,\ldots\) will be \((t + 1)^{2}\) , \((t + 2)^{2}\) or \((t + 3)^{2}\) , therefore there is an index \(m > n\) such that \(a_{m}\leqslant t + 3< t^{2}< a_{n}\) , as claimed.
Claim 3. If \(a_{n}\equiv 0\) (mod 3), then there is an index \(m > n\) such that \(a_{m} = 3\) .
Proof. First we notice that, by the definition of the sequence, a multiple of 3 is always followed by another multiple of 3. If \(a_{n}\in \{3,6,9\}\) the sequence will eventually follow the periodic pattern 3, 6, 9, 3, 6, 9, ... If \(a_{n} > 9\) , let \(j\) be an index such that \(a_{j}\) is equal to the minimum value of the set \(\{a_{n + 1},a_{n + 2},\ldots \}\) . We must have \(a_{j}\leqslant 9\) , otherwise we could apply Claim 2 to \(a_{j}\) and get a contradiction on the minimality hypothesis. It follows that \(a_{j}\in \{3,6,9\}\) , and the proof is complete.
Claim 4. If \(a_{n}\equiv 1\) (mod 3), then there is an index \(m > n\) such that \(a_{m}\equiv - 1\) (mod 3).
Proof. In the sequence, 4 is always followed by \(2\equiv - 1\) (mod 3), so the claim is true for \(a_{n} = 4\) . If \(a_{n} = 7\) , the next terms will be 10, 13, 16, 4, 2, ... and the claim is also true. For \(a_{n}\geqslant 10\) , we again take an index \(j > n\) such that \(a_{j}\) is equal to the minimum value of the set \(\{a_{n + 1},a_{n + 2},\ldots \}\) , which by the definition of the sequence consists of non- multiples of 3. Suppose \(a_{j}\equiv 1\) (mod 3). Then we must have \(a_{j}\leqslant 9\) by Claim 2 and the minimality of \(a_{j}\) . It follows that \(a_{j}\in \{4,7\}\) , so \(a_{m} = 2< a_{j}\) for some \(m > j\) , contradicting the minimality of \(a_{j}\) . Therefore, we must have \(a_{j}\equiv - 1\) (mod 3).
It follows from the previous claims that if \(a_{0}\) is a multiple of 3 the sequence will eventually reach the periodic pattern 3, 6, 9, 3, 6, 9, ...; if \(a_{0}\equiv - 1\) (mod 3) the sequence will be strictly increasing; and if \(a_{0}\equiv 1\) (mod 3) the sequence will be eventually strictly increasing.
So the sequence will be eventually periodic if, and only if, \(a_{0}\) is a multiple of 3.
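These claims lend themselves to a quick numerical sanity check. The sketch below is illustrative only; it assumes the problem's recurrence (\(a_{n+1} = \sqrt{a_n}\) when \(a_n\) is a perfect square, and \(a_{n+1} = a_n + 3\) otherwise), and it treats the absence of a repeated value within a fixed step budget as evidence of non-periodicity.

```python
from math import isqrt

def step(a):
    # one step of the recurrence: take the square root of a perfect
    # square, otherwise add 3
    r = isqrt(a)
    return r if r * r == a else a + 3

def eventually_periodic(a0, steps=10_000):
    # a strictly increasing tail never repeats, so a repeated value
    # within the budget certifies eventual periodicity
    seen, a = set(), a0
    for _ in range(steps):
        if a in seen:
            return True
        seen.add(a)
        a = step(a)
    return False
```

Running this over \(2 \leqslant a_0 < 200\) agrees with the conclusion: exactly the multiples of 3 give an eventually periodic sequence.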
|
IMOSL-2017-N2
|
Let \(p \geq 2\) be a prime number. Eduardo and Fernando play the following game making moves alternately: in each move, the current player chooses an index \(i\) in the set \(\{0, 1, \ldots , p - 1\}\) that was not chosen before by either of the two players and then chooses an element \(a_{i}\) of the set \(\{0, 1, 2, 3, 4, 5, 6, 7, 8, 9\}\) . Eduardo has the first move. The game ends after all the indices \(i \in \{0, 1, \ldots , p - 1\}\) have been chosen. Then the following number is computed:
\[M = a_{0} + 10\cdot a_{1} + \cdot \cdot \cdot +10^{p - 1}\cdot a_{p - 1} = \sum_{j = 0}^{p - 1}a_{j}\cdot 10^{j}.\]
The goal of Eduardo is to make the number \(M\) divisible by \(p\) , and the goal of Fernando is to prevent this.
Prove that Eduardo has a winning strategy.
|
Solution. We say that a player makes the move \((i, a_{i})\) if he chooses the index \(i\) and then the element \(a_{i}\) of the set \(\{0, 1, 2, 3, 4, 5, 6, 7, 8, 9\}\) in this move.
If \(p = 2\) or \(p = 5\) then Eduardo chooses \(i = 0\) and \(a_{0} = 0\) in the first move, and wins, since, independently of the next moves, \(M\) will be a multiple of 10.
Now assume that the prime number \(p\) does not belong to \(\{2, 5\}\) . Eduardo chooses \(i = p - 1\) and \(a_{p - 1} = 0\) in the first move. By Fermat's Little Theorem, \((10^{(p - 1) / 2})^{2} = 10^{p - 1} \equiv 1 \pmod{p}\) , so \(p \mid (10^{(p - 1) / 2})^{2} - 1 = (10^{(p - 1) / 2} + 1)(10^{(p - 1) / 2} - 1)\) . Since \(p\) is prime, either \(p \mid 10^{(p - 1) / 2} + 1\) or \(p \mid 10^{(p - 1) / 2} - 1\) . Thus we have two cases:
Case \(a\) : \(10^{(p - 1) / 2} \equiv - 1 \pmod{p}\)
In this case, for each move \((i, a_{i})\) of Fernando, Eduardo immediately makes the move \((j, a_{j}) = (i + \frac{p - 1}{2}, a_{i})\) , if \(0 \leq i \leq \frac{p - 3}{2}\) , or \((j, a_{j}) = (i - \frac{p - 1}{2}, a_{i})\) , if \(\frac{p - 1}{2} \leq i \leq p - 2\) . We will have \(10^{j} \equiv - 10^{i} \pmod{p}\) , and so \(a_{j} \cdot 10^{j} = a_{i} \cdot 10^{j} \equiv - a_{i} \cdot 10^{i} \pmod{p}\) . Notice that this move by Eduardo is always possible. Indeed, immediately before a move by Fernando, for any set of the type \(\{r, r + (p - 1) / 2\}\) with \(0 \leq r \leq (p - 3) / 2\) , either no element of this set was chosen as an index by the players in the previous moves or else both elements of this set were chosen as indices by the players in the previous moves. Therefore, after each of his moves, Eduardo always makes the sum of the numbers \(a_{k} \cdot 10^{k}\) corresponding to the already chosen pairs \((k, a_{k})\) divisible by \(p\) , and thus wins the game.
Case \(b\) : \(10^{(p - 1) / 2} \equiv 1 \pmod{p}\)
In this case, for each move \((i, a_{i})\) of Fernando, Eduardo immediately makes the move \((j, a_{j}) = (i + \frac{p - 1}{2}, 9 - a_{i})\) , if \(0 \leq i \leq \frac{p - 3}{2}\) , or \((j, a_{j}) = (i - \frac{p - 1}{2}, 9 - a_{i})\) , if \(\frac{p - 1}{2} \leq i \leq p - 2\) . The same argument as above shows that Eduardo can always make such move. We will have \(10^{j} \equiv 10^{i} \pmod{p}\) , and so \(a_{j} \cdot 10^{j} + a_{i} \cdot 10^{i} \equiv (a_{i} + a_{j}) \cdot 10^{i} = 9 \cdot 10^{i} \pmod{p}\) . Therefore, at the end of the game, the sum of all terms \(a_{k} \cdot 10^{k}\) will be congruent to
\[\sum_{i = 0}^{\frac{p - 3}{2}} 9 \cdot 10^{i} = 10^{(p - 1) / 2} - 1 \equiv 0 \pmod{p},\]
and Eduardo wins the game.
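Both cases of the pairing strategy can be simulated. The sketch below is an illustrative implementation: Fernando plays uniformly at random (a testing assumption, not part of the proof), the prime is assumed to lie outside \(\{2,5\}\), and at the end we check that \(M \equiv 0 \pmod{p}\).

```python
import random

def eduardo_wins(p, rng):
    # Eduardo's strategy for a prime p not in {2, 5}
    half = (p - 1) // 2
    case_a = pow(10, half, p) == p - 1     # does 10^((p-1)/2) = -1 (mod p)?
    digits = {p - 1: 0}                    # Eduardo's first move: (p-1, 0)
    free = set(range(p - 1))
    while free:
        i = rng.choice(sorted(free))       # Fernando: arbitrary index...
        digits[i] = rng.randrange(10)      # ...and arbitrary digit
        free.remove(i)
        j = i + half if i < half else i - half
        digits[j] = digits[i] if case_a else 9 - digits[i]   # Eduardo replies
        free.remove(j)
    M = sum(a * pow(10, k, p) for k, a in digits.items())
    return M % p == 0
```

Over many random games for several small primes, Eduardo wins every time, matching the proof.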
|
IMOSL-2017-N3
|
Determine all integers \(n \geqslant 2\) with the following property: for any integers \(a_{1}, a_{2}, \ldots , a_{n}\) whose sum is not divisible by \(n\) , there exists an index \(1 \leqslant i \leqslant n\) such that none of the numbers
\[a_{i}, a_{i} + a_{i + 1}, \ldots , a_{i} + a_{i + 1} + \dots + a_{i + n - 1}\]
is divisible by \(n\) . (We let \(a_{i} = a_{i - n}\) when \(i > n\) .)
|
Answer: These integers are exactly the prime numbers.
Solution. Let us first show that, if \(n = a b\) , with \(a,b\geqslant 2\) integers, then the property in the statement of the problem does not hold. Indeed, in this case, let \(a_{k} = a\) for \(1\leqslant k\leqslant n - 1\) and \(a_{n} = 0\) . The sum \(a_{1} + a_{2} + \cdot \cdot \cdot +a_{n} = a\cdot (n - 1)\) is not divisible by \(n\) . Let \(i\) with \(1\leqslant i\leqslant n\) be an arbitrary index. Taking \(j = b\) if \(1\leqslant i\leqslant n - b\) , and \(j = b + 1\) if \(n - b< i\leqslant n\) , we have
\[a_{i} + a_{i + 1} + \cdot \cdot \cdot +a_{i + j - 1} = a\cdot b = n\equiv 0\pmod {n}.\]
It follows that the given example is indeed a counterexample to the property of the statement.
Now let \(n\) be a prime number. Suppose for contradiction that the property in the statement of the problem does not hold. Then there are integers \(a_{1},a_{2},\ldots ,a_{n}\) whose sum is not divisible by \(n\) such that, for each \(i\) with \(1\leqslant i\leqslant n\) , there is \(j\) with \(1\leqslant j\leqslant n\) for which the number \(a_{i} + a_{i + 1} + \cdots + a_{i + j - 1}\) is divisible by \(n\) . Notice that, in any such case, we must have \(1\leqslant j\leqslant n - 1\) , since \(a_{1} + a_{2} + \cdots + a_{n}\) is not divisible by \(n\) . So we may construct recursively a finite sequence of integers \(0 = i_{0}< i_{1}< i_{2}< \cdots < i_{n}\) with \(i_{s + 1} - i_{s}\leqslant n - 1\) for \(0\leqslant s\leqslant n - 1\) such that, for \(0\leqslant s\leqslant n - 1\) ,
\[a_{i_{s} + 1} + a_{i_{s} + 2} + \cdot \cdot \cdot +a_{i_{s + 1}}\equiv 0\pmod {n}\]
(where we take indices modulo \(n\) ). Indeed, for \(0\leqslant s< n\) , we apply the previous observation to \(i = i_{s} + 1\) in order to define \(i_{s + 1} = i_{s} + j\) .
In the sequence of \(n + 1\) indices \(i_{0},i_{1},i_{2},\ldots ,i_{n}\) , by the pigeonhole principle, we have two distinct elements which are congruent modulo \(n\) . So there are indices \(r,s\) with \(0\leqslant r< s\leqslant n\) such that \(i_{s}\equiv i_{r}\) (mod \(n\) ) and
\[a_{i_{r} + 1} + a_{i_{r} + 2} + \cdot \cdot \cdot +a_{i_{s}} = \sum_{j = r}^{s - 1}(a_{i_{j} + 1} + a_{i_{j} + 2} + \cdot \cdot \cdot +a_{i_{j + 1}})\equiv 0\pmod {n}.\]
Since \(i_{s}\equiv i_{r}\) (mod \(n\) ), we have \(i_{s} - i_{r} = k\cdot n\) for some positive integer \(k\) , and, since \(i_{j + 1} - i_{j}\leqslant\) \(n - 1\) for \(0\leqslant j\leqslant n - 1\) , we have \(i_{s} - i_{r}\leqslant (n - 1)\cdot n\) , so \(k\leqslant n - 1\) . But in this case
\[a_{i_{r} + 1} + a_{i_{r} + 2} + \cdot \cdot \cdot +a_{i_{s}} = k\cdot (a_{1} + a_{2} + \cdot \cdot \cdot +a_{n})\]
cannot be a multiple of \(n\) , since \(n\) is prime and neither \(k\) nor \(a_{1} + a_{2} + \cdot \cdot \cdot +a_{n}\) is a multiple of \(n\) . A contradiction.
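Since divisibility by \(n\) depends only on the residues of the \(a_i\) modulo \(n\), the property can be verified exhaustively over all residue tuples for small \(n\). The brute-force sketch below is illustrative and only feasible for small \(n\):

```python
from itertools import product

def has_property(n):
    # check every residue tuple (a_1, ..., a_n) mod n with nonzero sum mod n
    for a in product(range(n), repeat=n):
        if sum(a) % n == 0:
            continue
        found = False
        for i in range(n):
            partial, good = 0, True
            for j in range(n):
                partial = (partial + a[(i + j) % n]) % n
                if partial == 0:      # some prefix sum is divisible by n
                    good = False
                    break
            if good:                  # the index i works for this tuple
                found = True
                break
        if not found:
            return False              # a counterexample tuple exists
    return True
```

For \(2 \leqslant n \leqslant 6\) the check singles out exactly the primes 2, 3, 5, in line with the answer.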
|
IMOSL-2017-N4
|
Call a rational number short if it has finitely many digits in its decimal expansion. For a positive integer \(m\) , we say that a positive integer \(t\) is \(m\) - static if there exists a number \(c \in \{1, 2, 3, \ldots , 2017\}\) such that \(\frac{10^{t} - 1}{c \cdot m}\) is short, and such that \(\frac{10^{k} - 1}{c \cdot m}\) is not short for any \(1 \leqslant k < t\) . Let \(S(m)\) be the set of \(m\) - static numbers. Consider \(S(m)\) for \(m = 1, 2, \ldots\) . What is the maximum number of elements in \(S(m)\) ?
|
Answer: 807.
Solution. First notice that \(x \in \mathbb{Q}\) is short if and only if there are exponents \(a, b \geqslant 0\) such that \(2^{a} \cdot 5^{b} \cdot x \in \mathbb{Z}\) . In fact, if \(x\) is short, then \(x = \frac{n}{10^{k}}\) for some integers \(n\) and \(k\) , and we can take \(a = b = k\) ; on the other hand, if \(2^{a} \cdot 5^{b} \cdot x = q \in \mathbb{Z}\) then \(x = \frac{2^{b} \cdot 5^{a} q}{10^{a + b}}\) , so \(x\) is short.
If \(m = 2^{a} \cdot 5^{b} \cdot s\) , with \(\gcd (s, 10) = 1\) , then \(\frac{10^{t} - 1}{m}\) is short if and only if \(s\) divides \(10^{t} - 1\) . So we may (and will) suppose without loss of generality that \(\gcd (m, 10) = 1\) . Define
\[C = \{1 \leqslant c \leqslant 2017: \gcd (c, 10) = 1\}.\]
The \(m\) - static numbers are then precisely the smallest exponents \(t > 0\) such that \(10^{t} \equiv 1\) (mod \(cm\) ) for some integer \(c \in C\) , that is, the set of orders of 10 modulo \(cm\) . In other words,
\[S(m) = \{\mathrm{ord}_{cm}(10): c \in C\} .\]
Since there are \(4 \cdot 201 + 3 = 807\) numbers \(c\) with \(1 \leqslant c \leqslant 2017\) and \(\gcd (c, 10) = 1\) , namely those such that \(c \equiv 1, 3, 7, 9\) (mod 10),
\[|S(m)| \leqslant |C| = 807.\]
Now we find \(m\) such that \(|S(m)| = 807\) . Let
\[P = \{1 < p \leqslant 2017: p \text{ is prime}, p \neq 2, 5\}\]
and choose a positive integer \(\alpha\) such that every \(p \in P\) divides \(10^{\alpha} - 1\) (e.g. \(\alpha = \phi (T)\) , \(T\) being the product of all primes in \(P\) ), and let \(m = 10^{\alpha} - 1\) .
Claim. For every \(c \in C\) , we have
\[\mathrm{ord}_{cm}(10) = c\alpha .\]
As an immediate consequence, this implies \(|S(m)| = |C| = 807\) , finishing the problem.

Proof. Obviously \(\mathrm{ord}_{m}(10) = \alpha\) . Let \(t = \mathrm{ord}_{cm}(10)\) . Then
\[c m \mid 10^{t} - 1 \Rightarrow m \mid 10^{t} - 1 \Rightarrow \alpha \mid t.\]
Hence \(t = k\alpha\) for some \(k \in \mathbb{Z}_{>0}\) . We will show that \(k = c\) .
Denote by \(\nu_{p}(n)\) the number of prime factors \(p\) in \(n\) , that is, the maximum exponent \(\beta\) for which \(p^{\beta} \mid n\) . For every \(\ell \geqslant 1\) and \(p \in P\) , the Lifting the Exponent Lemma provides
\[\nu_{p}(10^{\ell \alpha} - 1) = \nu_{p}((10^{\alpha})^{\ell} - 1) = \nu_{p}(10^{\alpha} - 1) + \nu_{p}(\ell) = \nu_{p}(m) + \nu_{p}(\ell),\]
so
\[c m \mid 10^{k\alpha} - 1 \iff \forall p \in P; \nu_{p}(cm) \leqslant \nu_{p}(10^{k\alpha} - 1)\] \[\iff \forall p \in P; \nu_{p}(m) + \nu_{p}(c) \leqslant \nu_{p}(m) + \nu_{p}(k)\] \[\iff \forall p \in P; \nu_{p}(c) \leqslant \nu_{p}(k)\] \[\iff c \mid k.\]
The first such \(k\) is \(k = c\) , so \(\mathrm{ord}_{cm}(10) = c\alpha\) .
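The claim can be checked on a scaled-down analogue of the construction. In the sketch below, the bound 2017 is replaced by 13 (an assumption made purely for illustration), so one may take \(\alpha = 6\) and \(m = 10^{6} - 1 = 999999\), whose order of 10 is 6:

```python
def mult_order(a, n):
    # multiplicative order of a modulo n (requires gcd(a, n) == 1)
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

m = 10**6 - 1              # divisible by every prime p <= 13 with p != 2, 5
C = [1, 3, 7, 9, 11, 13]   # the c <= 13 with gcd(c, 10) == 1
```

For every such \(c\) one finds \(\mathrm{ord}_{cm}(10) = 6c = c\alpha\), exactly as the claim predicts.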
Comment. The Lifting the Exponent Lemma states that, for any odd prime \(p\) , any integers \(a, b\) coprime with \(p\) such that \(p \mid a - b\) , and any positive integer exponent \(n\) ,
\[\nu_{p}(a^{n} - b^{n}) = \nu_{p}(a - b) + \nu_{p}(n),\]
and, for \(p = 2\) and even \(n\) ,

\[\nu_{2}(a^{n} - b^{n}) = \nu_{2}(a^{2} - b^{2}) + \nu_{2}(n) - 1.\]
Both claims can be proved by induction on \(n\) .
|
IMOSL-2017-N5
|
Find all pairs \((p,q)\) of prime numbers with \(p > q\) for which the number
\[{\frac{(p+q)^{p+q}(p-q)^{p-q}-1}{(p+q)^{p-q}(p-q)^{p+q}-1}}\]
is an integer.
|
Answer: The only such pair is \((3,2)\) .
Solution. Let \(M = (p + q)^{p - q}(p - q)^{p + q} - 1\) , which is relatively prime with both \(p + q\) and \(p - q\) . Denote by \((p - q)^{- 1}\) the multiplicative inverse of \((p - q)\) modulo \(M\) .
By eliminating the term \(- 1\) in the numerator,
\[(p+q)^{p+q}(p-q)^{p-q}-1\equiv(p+q)^{p-q}(p-q)^{p+q}-1\pmod{M},\]
\[(p+q)^{2q}\equiv(p-q)^{2q}\pmod{M}, \quad (1)\]
\[\left((p+q)\cdot(p-q)^{-1}\right)^{2q}\equiv1\pmod{M}. \quad (2)\]
Case 1: \(q \geq 5\) .
Consider an arbitrary prime divisor \(r\) of \(M\) . Notice that \(M\) is odd, so \(r \geq 3\) . By (2), the multiplicative order of \(\left((p + q) \cdot (p - q)^{- 1}\right)\) modulo \(r\) is a divisor of the exponent \(2q\) in (2), so it can be 1, 2, \(q\) or \(2q\) .
By Fermat's theorem, the order divides \(r - 1\) . So, if the order is \(q\) or \(2q\) then \(r \equiv 1\) (mod \(q\) ). If the order is 1 or 2 then \(r \mid (p + q)^{2} - (p - q)^{2} = 4pq\) , so \(r = p\) or \(r = q\) . The case \(r = p\) is not possible, because, by applying Fermat's theorem,
\[M = (p + q)^{p - q}(p - q)^{p + q} - 1\equiv q^{p - q}(-q)^{p + q} - 1 = \left(q^{2}\right)^{p} - 1\equiv q^{2} - 1 = (q + 1)(q - 1)\pmod{p}\]
and the last factors \(q - 1\) and \(q + 1\) are less than \(p\) and thus \(p \nmid M\) . Hence, all prime divisors of \(M\) are either \(q\) or of the form \(kq + 1\) ; it follows that all positive divisors of \(M\) are congruent to 0 or 1 modulo \(q\) .
Now notice that
\[M = \left((p + q)^{\frac{p - q}{2}}(p - q)^{\frac{p + q}{2}} - 1\right)\left((p + q)^{\frac{p - q}{2}}(p - q)^{\frac{p + q}{2}} + 1\right)\]
is the product of two consecutive positive odd numbers; both should be congruent to 0 or 1 modulo \(q\) . But this is impossible by the assumption \(q \geq 5\) . So, there is no solution in Case 1.
Case 2: \(q = 2\) .
By (1), we have \(M \mid (p + q)^{2q} - (p - q)^{2q} = (p + 2)^{4} - (p - 2)^{4}\) , so
\[(p + 2)^{p - 2}(p - 2)^{p + 2} - 1 = M\leqslant (p + 2)^{4} - (p - 2)^{4}\leqslant (p + 2)^{4} - 1,\] \[(p + 2)^{p - 6}(p - 2)^{p + 2}\leqslant 1.\]
If \(p \geq 7\) then the left- hand side is obviously greater than 1. For \(p = 5\) we have \((p + 2)^{p - 6}(p - 2)^{p + 2} = 7^{- 1} \cdot 3^{7}\) which is also too large.
There remains only one candidate, \(p = 3\) , which provides a solution:
\[\frac{(p + q)^{p + q}(p - q)^{p - q} - 1}{(p + q)^{p - q}(p - q)^{p + q} - 1} = \frac{5^{5} \cdot 1^{1} - 1}{5^{1} \cdot 1^{5} - 1} = \frac{3124}{4} = 781.\]
So in Case 2 the only solution is \((p, q) = (3, 2)\) .
Case 3: \(q = 3\) .
Similarly to Case 2, we have
\[M\mid (p + q)^{2q} - (p - q)^{2q} = 64\cdot \left(\left(\frac{p + 3}{2}\right)^{6} - \left(\frac{p - 3}{2}\right)^{6}\right).\]
Since \(M\) is odd, we conclude that
\[M\mid \left(\frac{p + 3}{2}\right)^{6} - \left(\frac{p - 3}{2}\right)^{6}\]
and
\[(p + 3)^{p - 3}(p - 3)^{p + 3} - 1 = M\leqslant \left(\frac{p + 3}{2}\right)^{6} - \left(\frac{p - 3}{2}\right)^{6}\leqslant \left(\frac{p + 3}{2}\right)^{6} - 1,\] \[64(p + 3)^{p - 9}(p - 3)^{p + 3}\leqslant 1.\]
If \(p\geqslant 11\) then the left- hand side is obviously greater than 1. If \(p = 7\) then the left- hand side is \(64\cdot 10^{- 2}\cdot 4^{10} > 1\) . If \(p = 5\) then the left- hand side is \(64\cdot 8^{- 4}\cdot 2^{8} = 2^{2} > 1\) . Therefore, there is no solution in Case 3.
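The three cases together leave \((3,2)\) as the only solution, which a direct search over small primes corroborates. The sketch below is an illustrative brute force; it is exact because Python integers are unbounded:

```python
def is_solution(p, q):
    # does the denominator divide the numerator?
    num = (p + q)**(p + q) * (p - q)**(p - q) - 1
    den = (p + q)**(p - q) * (p - q)**(p + q) - 1
    return num % den == 0

primes = [2, 3, 5, 7, 11, 13]
solutions = [(p, q) for p in primes for q in primes if p > q and is_solution(p, q)]
```

The search returns only \((3,2)\), with quotient \(3124/4 = 781\) as computed in Case 2.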
|
IMOSL-2017-N7
|
Say that an ordered pair \((x,y)\) of integers is an irreducible lattice point if \(x\) and \(y\) are relatively prime. For any finite set \(S\) of irreducible lattice points, show that there is a homogenous polynomial in two variables, \(f(x,y)\) , with integer coefficients, of degree at least 1, such that \(f(x,y) = 1\) for each \((x,y)\) in the set \(S\) .
Note: A homogenous polynomial of degree \(n\) is any nonzero polynomial of the form
\[f(x,y) = a_{0}x^{n} + a_{1}x^{n - 1}y + a_{2}x^{n - 2}y^{2} + \dots +a_{n - 1}xy^{n - 1} + a_{n}y^{n}.\]
|
Solution 1. First of all, we note that finding a homogenous polynomial \(f(x,y)\) such that \(f(x,y) = \pm 1\) is enough, because we then have \(f^{2}(x,y) = 1\) . Label the irreducible lattice points \((x_{1},y_{1})\) through \((x_{n},y_{n})\) . If any two of these lattice points \((x_{i},y_{i})\) and \((x_{j},y_{j})\) lie on the same line through the origin, then \((x_{j},y_{j}) = (- x_{i}, - y_{i})\) because both of the points are irreducible. We then have \(f(x_{j},y_{j}) = \pm f(x_{i},y_{i})\) whenever \(f\) is homogenous, so we can assume that no two of the lattice points are collinear with the origin by ignoring the extra lattice points.
Consider the homogenous polynomials \(\ell_{i}(x,y) = y_{i}x - x_{i}y\) and define
\[g_{i}(x,y) = \prod_{j\neq i}\ell_{j}(x,y).\]
Then \(\ell_{i}(x_{j},y_{j}) = 0\) if and only if \(j = i\) , because there is only one lattice point on each line through the origin. Thus, \(g_{i}(x_{j},y_{j}) = 0\) for all \(j\neq i\) . Define \(a_{i} = g_{i}(x_{i},y_{i})\) , and note that \(a_{i}\neq 0\) .
Note that \(g_{i}(x,y)\) is a degree \(n - 1\) polynomial with the following two properties:
1. \(g_{i}(x_{j},y_{j}) = 0\) if \(j\neq i\)
2. \(g_{i}(x_{i},y_{i}) = a_{i}\)
For any \(N\geq n - 1\) , there also exists a polynomial of degree \(N\) with the same two properties. Specifically, let \(I_{i}(x,y)\) be a degree 1 homogenous polynomial such that \(I_{i}(x_{i},y_{i}) = 1\) , which exists since \((x_{i},y_{i})\) is irreducible. Then \(I_{i}(x,y)^{N - (n - 1)}g_{i}(x,y)\) satisfies both of the above properties and has degree \(N\) .
We may now reduce the problem to the following claim:
Claim: For each positive integer \(a\) , there is a homogenous polynomial \(f_{a}(x,y)\) , with integer coefficients, of degree at least 1, such that \(f_{a}(x,y)\equiv 1\) (mod \(a\) ) for all relatively prime \((x,y)\) .
To see that this claim solves the problem, take \(a\) to be the least common multiple of the numbers \(a_{i}\) ( \(1\leqslant i\leqslant n\) ). Take \(f_{a}\) given by the claim, choose some power \(f_{a}(x,y)^{k}\) that has degree at least \(n - 1\) , and subtract appropriate multiples of the \(g_{i}\) constructed above to obtain the desired polynomial.
We prove the claim by factoring \(a\) . First, if \(a\) is a power of a prime ( \(a = p^{k}\) ), then we may choose either:
- \(f_{a}(x,y) = (x^{p - 1} + y^{p - 1})^{\phi (a)}\) if \(p\) is odd;
- \(f_{a}(x,y) = (x^{2} + xy + y^{2})^{\phi (a)}\) if \(p = 2\) .
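Both prime-power constructions are easy to verify numerically. The sketch below is illustrative (it tests \(a = 3^{3}\) and \(a = 2^{4}\) on coprime pairs in a small box); by Euler's theorem the value should always be 1 modulo \(a\):

```python
from math import gcd

def f_prime_power(p, k, x, y):
    # f_a(x, y) mod a for a = p^k, following the two bullet constructions
    a = p**k
    phi_a = p**(k - 1) * (p - 1)             # Euler's totient of p^k
    base = x**(p - 1) + y**(p - 1) if p != 2 else x*x + x*y + y*y
    return pow(base, phi_a, a)               # base is coprime to p, so this is 1 mod a

coprime = [(x, y) for x in range(1, 30) for y in range(1, 30) if gcd(x, y) == 1]
```

The key point the code exercises: for coprime \((x,y)\) the base is never divisible by \(p\), so Euler's theorem applies.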
Now suppose \(a\) is any positive integer, and let \(a = q_{1}q_{2}\dots q_{k}\) , where the \(q_{i}\) are prime powers, pairwise relatively prime. Let \(f_{q_{i}}\) be the polynomials just constructed, and let \(F_{q_{i}}\) be powers of these that all have the same degree. Note that
\[\frac{a}{q_{i}} F_{q_{i}}(x,y) \equiv \frac{a}{q_{i}} \pmod {a}\]
for any relatively prime \(x,y\) . By Bezout's lemma, there is an integer linear combination of the numbers \(\frac{a}{q_{i}}\) that equals 1. The corresponding linear combination \(F\) of the polynomials \(F_{q_{i}}\) then satisfies \(F(x,y)\equiv 1\) (mod \(a\) ) for any relatively prime \((x,y)\) ; and \(F\) is homogenous because all the \(F_{q_{i}}\) have the same degree.
Solution 2. As in the previous solution, label the irreducible lattice points \((x_{1}, y_{1}), \ldots , (x_{n}, y_{n})\) and assume without loss of generality that no two of the points are collinear with the origin. We induct on \(n\) to construct a homogenous polynomial \(f(x, y)\) such that \(f(x_{i}, y_{i}) = 1\) for all \(1 \leq i \leq n\) .
If \(n = 1\) : Since \(x_{1}\) and \(y_{1}\) are relatively prime, there exist some integers \(c, d\) such that \(cx_{1} + dy_{1} = 1\) . Then \(f(x, y) = cx + dy\) is suitable.
If \(n \geq 2\) : By the induction hypothesis we already have a homogeneous polynomial \(g(x, y)\) with \(g(x_{1}, y_{1}) = \ldots = g(x_{n - 1}, y_{n - 1}) = 1\) . Let \(j = \deg g\) ,
\[g_{n}(x, y) = \prod_{k = 1}^{n - 1}(y_{k}x - x_{k}y),\]
and \(a_{n} = g_{n}(x_{n}, y_{n})\) . By assumption, \(a_{n} \neq 0\) . Take some integers \(c, d\) such that \(cx_{n} + dy_{n} = 1\) . We will construct \(f(x, y)\) in the form
\[f(x, y) = g(x, y)^{K} - C \cdot g_{n}(x, y) \cdot (cx + dy)^{L},\]
where \(K\) and \(L\) are some positive integers and \(C\) is some integer. We assume that \(L = Kj - n + 1\) so that \(f\) is homogenous.
Due to \(g(x_{1}, y_{1}) = \ldots = g(x_{n - 1}, y_{n - 1}) = 1\) and \(g_{n}(x_{1}, y_{1}) = \ldots = g_{n}(x_{n - 1}, y_{n - 1}) = 0\) , the property \(f(x_{1}, y_{1}) = \ldots = f(x_{n - 1}, y_{n - 1}) = 1\) is automatically satisfied with any choice of \(K, L\) , and \(C\) .
Furthermore,
\[f(x_{n}, y_{n}) = g(x_{n}, y_{n})^{K} - C \cdot g_{n}(x_{n}, y_{n}) \cdot (cx_{n} + dy_{n})^{L} = g(x_{n}, y_{n})^{K} - C a_{n}.\]
If we have an exponent \(K\) such that \(g(x_{n}, y_{n})^{K} \equiv 1 \pmod {a_{n}}\) , then we may choose \(C\) such that \(f(x_{n}, y_{n}) = 1\) . We now choose such a \(K\) .
Consider an arbitrary prime divisor \(p\) of \(a_{n}\) . By
\[p \mid a_{n} = g_{n}(x_{n}, y_{n}) = \prod_{k = 1}^{n - 1}(y_{k}x_{n} - x_{k}y_{n}),\]
there is some \(1 \leq k < n\) such that \(x_{k}y_{n} \equiv x_{n}y_{k} \pmod {p}\) . We first show that \(x_{k}x_{n}\) or \(y_{k}y_{n}\) is relatively prime with \(p\) . This is trivial in the case \(x_{k}y_{n} \equiv x_{n}y_{k} \not\equiv 0 \pmod {p}\) . In the other case, we have \(x_{k}y_{n} \equiv x_{n}y_{k} \equiv 0 \pmod {p}\) . If, say, \(p \mid x_{k}\) , then \(p \nmid y_{k}\) because \((x_{k}, y_{k})\) is irreducible, so \(p \mid x_{n}\) ; then \(p \nmid y_{n}\) because \((x_{n}, y_{n})\) is irreducible. In summary, \(p \mid x_{k}\) implies \(p \nmid y_{k}y_{n}\) . Similarly, \(p \mid y_{n}\) implies \(p \nmid x_{k}x_{n}\) .
By the homogeneity of \(g\) we have the congruences
\[x_{k}^{j} \cdot g(x_{n}, y_{n}) = g(x_{k}x_{n}, x_{k}y_{n}) \equiv g(x_{k}x_{n}, y_{k}x_{n}) = x_{n}^{j} \cdot g(x_{k}, y_{k}) = x_{n}^{j} \pmod {p} \quad (1.1)\]
and
\[y_{k}^{j} \cdot g(x_{n}, y_{n}) = g(y_{k}x_{n}, y_{k}y_{n}) \equiv g(x_{k}y_{n}, y_{k}y_{n}) = y_{n}^{j} \cdot g(x_{k}, y_{k}) = y_{n}^{j} \pmod {p}. \quad (1.2)\]
If \(p \nmid x_{k}x_{n}\) , then take the \((p - 1)^{st}\) power of (1.1); otherwise take the \((p - 1)^{st}\) power of (1.2); by Fermat's theorem, in both cases we get
\[g(x_{n}, y_{n})^{p - 1} \equiv 1 \pmod {p}.\]
If \(p^{\alpha} \mid a_{n}\) , then we have
\[g(x_{n}, y_{n})^{p^{\alpha - 1}(p - 1)} \equiv 1 \pmod {p^{\alpha}},\]
which implies that the exponent \(K = n \cdot \phi (a_{n})\) , which is a multiple of all \(p^{\alpha - 1}(p - 1)\) , is a suitable choice. (The factor \(n\) is added only so that \(K \geq n\) and so \(L > 0\) .)
Comment. It is possible to show that there is no constant \(C\) for which, given any two irreducible lattice points, there is some homogenous polynomial \(f\) of degree at most \(C\) with integer coefficients that takes the value 1 on the two points. Indeed, if one of the points is \((1,0)\) and the other is \((a,b)\) , the polynomial \(f(x,y) = a_0x^n + a_1x^{n - 1}y + \dots + a_n y^n\) should satisfy \(a_0 = 1\) , and so \(a^n \equiv 1\) (mod \(b\) ). If \(a = 3\) and \(b = 2^k\) with \(k \geq 3\) , then \(n \geq 2^{k - 2}\) . If we choose \(2^{k - 2} > C\) , this gives a contradiction.
|
IMOSL-2017-N8
|
Let \(p\) be an odd prime number and \(\mathbb{Z}_{>0}\) be the set of positive integers. Suppose that a function \(f\colon \mathbb{Z}_{>0}\times \mathbb{Z}_{>0}\to \{0,1\}\) satisfies the following properties:
- \(f(1,1) = 0\) ;- \(f(a,b) + f(b,a) = 1\) for any pair of relatively prime positive integers \((a,b)\) not both equal to 1;- \(f(a + b,b) = f(a,b)\) for any pair of relatively prime positive integers \((a,b)\) .
Prove that
\[\sum_{n = 1}^{p - 1}f(n^{2},p)\geq \sqrt{2p} -2.\]
|
Solution 1. Denote by \(\mathbb{A}\) the set of all pairs of coprime positive integers. Notice that for every \((a,b)\in \mathbb{A}\) there exists a pair \((u,v)\in \mathbb{Z}^{2}\) with \(u a + v b = 1\) . Moreover, if \((u_{0},v_{0})\) is one such pair, then all such pairs are of the form \((u,v) = (u_{0} + k b,v_{0} - k a)\) , where \(k\in \mathbb{Z}\) . So there exists a unique such pair \((u,v)\) with \(- b / 2< u\leq b / 2\) ; we denote this pair by \((u,v) = g(a,b)\) .
Lemma. Let \((a,b)\in \mathbb{A}\) and \((u,v) = g(a,b)\) . Then \(f(a,b) = 1\iff u > 0\) .
Proof. We induct on \(a + b\) . The base case is \(a + b = 2\) . In this case, we have that \(a = b = 1\) , \(g(a,b) = g(1,1) = (0,1)\) and \(f(1,1) = 0\) , so the claim holds.
Assume now that \(a + b > 2\) , and so \(a\neq b\) , since \(a\) and \(b\) are coprime. Two cases are possible.
Case 1: \(a > b\)
Notice that \(g(a - b,b) = (u,v + u)\) , since \(u(a - b) + (v + u)b = 1\) and \(u\in (- b / 2,b / 2]\) . Thus \(f(a,b) = 1\iff f(a - b,b) = 1\iff u > 0\) by the induction hypothesis.
Case 2: \(a< b\) . (Then, clearly, \(b\geq 2\) .)
Now we estimate \(v\) . Since \(v b = 1 - u a\) , we have
\[1 + \frac{ab}{2} > vb \geqslant 1 - \frac{ab}{2}, \quad \text{so} \quad \frac{1 + a}{2} \geqslant \frac{1}{b} + \frac{a}{2} > v \geqslant \frac{1}{b} - \frac{a}{2} > -\frac{a}{2}.\]
Thus \(1 + a > 2v > - a\) , so \(a\geq 2v > - a\) , hence \(a / 2\geq v > - a / 2\) , and thus \(g(b,a) = (v,u)\) .
Observe that \(f(a,b) = 1\iff f(b,a) = 0\iff f(b - a,a) = 0\) . We know from Case 1 that \(g(b - a,a) = (v,u + v)\) . We have \(f(b - a,a) = 0\iff v\leq 0\) by the inductive hypothesis. Then, since \(b > a\geq 1\) and \(u a + v b = 1\) , we have \(v\leq 0\iff u > 0\) , and we are done.
The Lemma proves that, for all \((a,b)\in \mathbb{A}\) , \(f(a,b) = 1\) if and only if the inverse of \(a\) modulo \(b\) , taken in \(\{1,2,\ldots ,b - 1\}\) , is at most \(b / 2\) . Then, for any odd prime \(p\) and integer \(n\) such that \(n\neq 0\) (mod \(p\) ), \(f(n^{2},p) = 1\) iff the inverse of \(n^{2}\bmod p\) is less than \(p / 2\) . Since \(\{n^{2}\bmod p\colon 1\leqslant n\leqslant p - 1\} = \{n^{- 2}\bmod p\colon 1\leqslant n\leqslant p - 1\}\) , including multiplicities (two for each quadratic residue in each set), we conclude that the desired sum is twice the number of quadratic residues that are less than \(p / 2\) , i.e.,
\[\sum_{n = 1}^{p - 1}f(n^{2},p) = 2\left|\left\{k\colon 1\leqslant k\leqslant \frac{p - 1}{2}\mathrm{~and~}k^{2}\bmod p< \frac{p}{2}\right\}\right|. \quad (1)\]
Since the number of perfect squares in the interval \([1,p / 2)\) is \(\left\lfloor \sqrt{p / 2} \right\rfloor > \sqrt{p / 2} - 1\) , we conclude that
\[\sum_{n = 1}^{p - 1}f(n^{2},p) > 2\left(\sqrt{\frac{p}{2}} -1\right) = \sqrt{2p} -2.\]
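The Lemma and formula (1) admit a direct computational check. The sketch below computes \(f\) from its defining relations and compares it with the criterion \(u > 0\); it relies on the modular inverse via `pow(a, -1, b)`, available in Python 3.8+:

```python
from math import gcd, sqrt

def f(a, b):
    # f is determined by: f(1,1) = 0, f(a+b, b) = f(a, b),
    # and f(a, b) + f(b, a) = 1 for coprime (a, b) != (1, 1)
    if (a, b) == (1, 1):
        return 0
    if a > b:
        r = a % b
        return f(r if r else b, b)    # r == 0 only when b == 1
    return 1 - f(b, a)

def u_positive(a, b):
    # g(a, b) = (u, v): u*a + v*b = 1 with -b/2 < u <= b/2; Lemma: f = 1 iff u > 0
    if b == 1:
        return 0                      # g(a, 1) = (0, 1), so u = 0
    u = pow(a, -1, b)                 # inverse of a mod b, in {1, ..., b-1}
    if 2 * u > b:
        u -= b                        # shift into (-b/2, b/2]
    return 1 if u > 0 else 0
```

On coprime pairs in a small box the two agree, and the sum \(\sum_{n=1}^{p-1} f(n^2,p)\) matches formula (1) and satisfies the bound \(\sqrt{2p}-2\).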
Solution 2. We provide a different proof for the Lemma. For this purpose, we use continued fractions to find \(g(a,b) = (u,v)\) explicitly.
The function \(f\) is completely determined on \(\mathbb{A}\) by the following
Claim. Represent \(a / b\) as a continued fraction; that is, let \(a_{0}\) be an integer and \(a_{1},\ldots ,a_{k}\) be positive integers such that \(a_{k}\geq 2\) and
\[\frac{a}{b} = a_{0} + \frac{1}{a_{1} + \frac{1}{a_{2} + \frac{1}{\ddots + \frac{1}{a_{k}}}}} = [a_{0};a_{1},a_{2},\ldots ,a_{k}].\]
Then \(f(a,b) = 0\iff k\) is even.
Proof. We induct on \(b\) . If \(b = 1\) , then \(a / b = [a]\) and \(k = 0\) . Then, for \(a\geq 1\) , an easy induction shows that \(f(a,1) = f(1,1) = 0\) .
Now consider the case \(b > 1\) . Perform the Euclidean division \(a = qb + r\) , with \(0\leq r< b\) . We have \(r\neq 0\) because \(\gcd (a,b) = 1\) . Hence
\[f(a,b) = f(r,b) = 1 - f(b,r),\quad \frac{a}{b} = [q;a_{1},\ldots ,a_{k}],\quad \mathrm{and}\quad \frac{b}{r} = [a_{1};a_{2},\ldots ,a_{k}].\]
Then the number of terms in the continued fraction representations of \(a / b\) and \(b / r\) differ by one. Since \(r< b\) , the inductive hypothesis yields
\[f(b,r) = 0\iff k - 1\mathrm{~is~even},\]
and thus
\[f(a,b) = 0\iff f(b,r) = 1\iff k - 1\mathrm{~is~odd}\iff k\mathrm{~is~even}.\]
Now we use the following well- known properties of continued fractions to prove the Lemma:
Let \(p_{i}\) and \(q_{i}\) be coprime positive integers with \([a_{0};a_{1},a_{2},\ldots ,a_{i}] = p_{i} / q_{i}\) , with the notation borrowed from the Claim. In particular, \(a / b = [a_{0};a_{1},a_{2},\ldots ,a_{k}] = p_{k} / q_{k}\) . Assume that \(k > 0\) and define \(q_{- 1} = 0\) if necessary. Then
\(q_{k} = a_{k}q_{k - 1} + q_{k - 2}\) , and
\[a q_{k - 1} - b p_{k - 1} = p_{k}q_{k - 1} - q_{k}p_{k - 1} = (-1)^{k - 1}.\]
Assume that \(k > 0\) . Then \(a_{k}\geq 2\) , and
\[b = q_{k} = a_{k}q_{k - 1} + q_{k - 2}\geq a_{k}q_{k - 1}\geq 2q_{k - 1}\Rightarrow q_{k - 1}\leq \frac{b}{2},\]
with strict inequality for \(k > 1\) , and
\[(-1)^{k - 1}q_{k - 1}a + (-1)^{k}p_{k - 1}b = 1.\]
Now we finish the proof of the Lemma. It is immediate for \(k = 0\) . If \(k = 1\) , then \((- 1)^{k - 1} = 1\) , so
\[-b / 2< 0\leqslant (-1)^{k - 1}q_{k - 1}\leqslant b / 2.\]
If \(k > 1\) , we have \(q_{k - 1}< b / 2\) , so
\[-b / 2< (-1)^{k - 1}q_{k - 1}< b / 2.\]
Thus, for any \(k > 0\) , we find that \(g(a,b) = ((- 1)^{k - 1}q_{k - 1},(- 1)^{k}p_{k - 1})\) , and so
\[f(a,b) = 1\iff k\mathrm{~is~odd}\iff u = (-1)^{k - 1}q_{k - 1} > 0.\]
Comment 1. The Lemma can also be established by observing that \(f\) is uniquely defined on \(\mathbb{A}\) , defining \(f_{1}(a,b) = 1\) if \(u > 0\) in \(g(a,b) = (u,v)\) and \(f_{1}(a,b) = 0\) otherwise, and verifying that \(f_{1}\) satisfies all the conditions from the statement.
It seems that the main difficulty of the problem is in conjecturing the Lemma.
Comment 2. The case \(p \equiv 1\) (mod 4) is, in fact, easier than the original problem. We have, in general, for \(1 \leq a \leq p - 1\) ,
\[f(a,p) = 1 - f(p,a) = 1 - f(p - a,a) = f(a,p - a) = f(a + (p - a),p - a) = f(p,p - a) = 1 - f(p - a,p).\]
If \(p \equiv 1\) (mod 4), then \(a\) is a quadratic residue modulo \(p\) if and only if \(p - a\) is a quadratic residue modulo \(p\) . Therefore, denoting by \(r_{k}\) (with \(1 \leq r_{k} \leq p - 1\) ) the remainder of the division of \(k^{2}\) by \(p\) , we get
\[\sum_{n = 1}^{p - 1}f(n^{2},p) = \sum_{n = 1}^{p - 1}f(r_{n},p) = \frac{1}{2}\sum_{n = 1}^{p - 1}(f(r_{n},p) + f(p - r_{n},p)) = \frac{p - 1}{2}.\]
Comment 3. The estimate for the sum \(\sum_{n = 1}^{p - 1}f(n^{2},p)\) can be improved by refining the final argument in Solution 1. In fact, one can prove that
\[\sum_{n = 1}^{p - 1}f(n^{2},p)\geqslant \frac{p - 1}{16}.\]
By counting the number of perfect squares in the intervals \([kp, (k + 1 / 2)p)\) , we find that
\[\sum_{n = 1}^{p - 1}f(n^{2},p) = \sum_{k = 0}^{p - 1}\left(\left\lfloor \sqrt{\left(k + \frac{1}{2}\right)p}\right\rfloor -\left\lfloor \sqrt{kp}\right\rfloor\right). \quad (2)\]
Each summand of (2) is non- negative. We now estimate the number of positive summands. Suppose that a summand is zero, i.e.,
\[\left\lfloor \sqrt{\left(k + \frac{1}{2}\right)p}\right\rfloor = \left\lfloor \sqrt{kp}\right\rfloor = :q.\]
Then both of the numbers \(kp\) and \(kp + p / 2\) lie within the interval \([q^{2}, (q + 1)^{2})\) . Hence
\[\frac{p}{2} < (q + 1)^{2} - q^{2},\]
which implies
\[q\geqslant \frac{p - 1}{4}.\]
Since \(q \leq \sqrt{kp}\) , if the \(k^{\mathrm{th}}\) summand of (2) is zero, then
\[k\geqslant \frac{q^{2}}{p}\geqslant \frac{(p - 1)^{2}}{16p} >\frac{p - 2}{16}\Longrightarrow k\geqslant \frac{p - 1}{16}.\]
So at least the first \(\lceil \frac{p - 1}{16}\rceil\) summands (from \(k = 0\) to \(k = \lceil \frac{p - 1}{16}\rceil - 1\) ) are positive, and the result follows.
Comment 4. The bound can be further improved by using different methods. In fact, we prove that
\[\sum_{n = 1}^{p - 1}f(n^{2},p)\geqslant \frac{p - 3}{4}.\]
To that end, we use the Legendre symbol
\[\left(\frac{a}{p}\right) = \left\{ \begin{array}{ll}0 & \mathrm{if~}p\mid a\\ 1 & \mathrm{if~}a\mathrm{~is~a~nonzero~quadratic~residue~mod~}p\\ -1 & \mathrm{otherwise.} \end{array} \right.\]
We start with the following Claim, which tells us that there are not too many consecutive quadratic residues or consecutive quadratic non- residues.
Claim. \(\sum_{n = 1}^{p - 1}\left(\frac{n}{p}\right)\left(\frac{n + 1}{p}\right) = - 1.\)
Proof. We have \(\left(\frac{n}{p}\right)\left(\frac{n + 1}{p}\right) = \left(\frac{n(n + 1)}{p}\right)\) . For \(1\leqslant n\leqslant p - 1\) , we get that \(n(n + 1)\equiv n^{2}(1 + n^{- 1})\) (mod \(p\) ), hence \(\left(\frac{n(n + 1)}{p}\right) = \left(\frac{1 + n^{- 1}}{p}\right)\) . Since \(\{1 + n^{- 1}\bmod p\colon 1\leqslant n\leqslant p - 1\} = \{0,2,3,\ldots ,p - 1\}\) , we find
\[\sum_{n = 1}^{p - 1}\left(\frac{n}{p}\right)\left(\frac{n + 1}{p}\right) = \sum_{n = 1}^{p - 1}\left(\frac{1 + n^{-1}}{p}\right) = \sum_{n = 1}^{p - 1}\left({\frac{n}{p}}\right) - 1 = -1,\]
because \(\sum_{n = 1}^{p - 1}\left(\frac{n}{p}\right) = 0\). \(\square\)
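The Claim is easy to spot-check numerically via Euler's criterion \(\left(\frac{a}{p}\right)\equiv a^{(p-1)/2}\pmod{p}\); the function names below are ours.

```python
def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion: a^((p-1)/2) mod p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def claim_sum(p):
    # the n = p - 1 term vanishes, since (p/p) = 0
    return sum(legendre(n, p) * legendre(n + 1, p) for n in range(1, p))

for p in (3, 5, 7, 11, 13, 101):
    assert claim_sum(p) == -1
```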
Observe that (1) becomes
\[\sum_{n = 1}^{p - 1}f(n^{2},p) = 2|S|,\quad S = \left\{r\colon 1\leqslant r\leqslant \frac{p - 1}{2}\mathrm{~and~}\left(\frac{r}{p}\right) = 1\right\} .\]
We connect \(S\) with the sum from the claim by pairing quadratic residues and quadratic non-residues. To that end, define
\[S^{\prime} = \left\{r\colon 1\leqslant r\leqslant \frac{p - 1}{2}\mathrm{~and~}\left(\frac{r}{p}\right) = -1\right\}\] \[T = \left\{r\colon \frac{p + 1}{2}\leqslant r\leqslant p - 1\mathrm{~and~}\left(\frac{r}{p}\right) = 1\right\}\] \[T^{\prime} = \left\{r\colon \frac{p + 1}{2}\leqslant r\leqslant p - 1\mathrm{~and~}\left(\frac{r}{p}\right) = -1\right\}\]
Since there are exactly \((p - 1) / 2\) nonzero quadratic residues modulo \(p\) , \(|S| + |T| = (p - 1) / 2\) . Also we obviously have \(|T| + |T^{\prime}| = (p - 1) / 2\) . Then \(|S| = |T^{\prime}|\) .
For the sake of brevity, define \(t = |S| = |T^{\prime}|\). If \(\left(\frac{n}{p}\right)\left(\frac{n + 1}{p}\right) = - 1\), then exactly one of the numbers \(\left(\frac{n}{p}\right)\) and \(\left(\frac{n + 1}{p}\right)\) is equal to 1, so
\[\left|\left\{n\colon 1\leqslant n\leqslant \frac{p - 3}{2}\mathrm{~and~}\left(\frac{n}{p}\right)\left(\frac{n + 1}{p}\right) = -1\right\} \right|\leqslant |S| + |S - 1| = 2t.\]
On the other hand, if \(\left(\frac{n}{p}\right)\left(\frac{n + 1}{p}\right) = - 1\) , then exactly one of \(\left(\frac{n}{p}\right)\) and \(\left(\frac{n + 1}{p}\right)\) is equal to \(- 1\) , and
\[\left|\left\{n\colon \frac{p + 1}{2}\leqslant n\leqslant p - 2\mathrm{~and~}\left(\frac{n}{p}\right)\left(\frac{n + 1}{p}\right) = -1\right\} \right|\leqslant |T^{\prime}| + |T^{\prime} - 1| = 2t.\]
Thus, taking into account that the middle term \(\left(\frac{(p - 1) / 2}{p}\right)\left(\frac{(p + 1) / 2}{p}\right)\) may also happen to be \(- 1\), we obtain
\[\left|\left\{n\colon 1\leqslant n\leqslant p - 2\mathrm{~and~}\left(\frac{n}{p}\right)\left(\frac{n + 1}{p}\right) = -1\right\} \right|\leqslant 4t + 1.\]
This implies that
\[\left|\left\{n\colon 1\leqslant n\leqslant p - 2\mathrm{~and~}\left(\frac{n}{p}\right)\left(\frac{n + 1}{p}\right) = 1\right\} \right|\geqslant (p - 2) - (4t + 1) = p - 4t - 3,\]
and so
\[-1 = \sum_{n = 1}^{p - 1}\left(\frac{n}{p}\right)\left(\frac{n + 1}{p}\right)\geqslant p - 4t - 3 - (4t + 1) = p - 8t - 4,\]
which implies \(8t\geqslant p - 3\) , and thus
\[\sum_{n = 1}^{p - 1}f(n^{2},p) = 2t\geqslant \frac{p - 3}{4}.\]
Comment 5. It is possible to prove that
\[\sum_{n = 1}^{p - 1}f(n^{2},p)\geqslant \frac{p - 1}{2}.\]
The case \(p\equiv 1\) (mod 4) was already mentioned, and it is the equality case. If \(p\equiv 3\) (mod 4), then, by a theorem of Dirichlet, we have
\[\left|\left\{r:1\leqslant r\leqslant \frac{p - 1}{2}\mathrm{and}\left(\frac{r}{p}\right) = 1\right\} \right| > \frac{p - 1}{4},\]
which implies the result.
See https://en.wikipedia.org/wiki/Quadratic_residue#Dirichlet.27s_formulas for the full statement of the theorem. It seems that no elementary proof of it is known; a proof using complex analysis is available, for instance, in Chapter 7 of the book Quadratic Residues and Non-Residues: Selected Topics, by Steve Wright, available at https://arxiv.org/abs/1408.0235.
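Both estimates, the bound \(8t \geqslant p - 3\) proved in Comment 4 and Dirichlet's strict inequality for \(p\equiv 3\pmod 4\), can be spot-checked for small primes; a sketch using Euler's criterion (the helper name `t_of` is ours):

```python
def legendre(a, p):
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def t_of(p):
    """t = |S| = number of quadratic residues among 1, ..., (p-1)/2."""
    return sum(1 for r in range(1, (p - 1) // 2 + 1) if legendre(r, p) == 1)

for p in (5, 7, 11, 13, 19, 23, 101, 103):
    t = t_of(p)
    assert 8 * t >= p - 3          # the bound of Comment 4
    if p % 4 == 3:
        assert 4 * t > p - 1       # Dirichlet: |S| > (p - 1)/4
```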
|
IMOSL-2018-A1
|
Let \(\mathbb{Q}_{>0}\) denote the set of all positive rational numbers. Determine all functions \(f\colon \mathbb{Q}_{>0}\to \mathbb{Q}_{>0}\) satisfying
\[f\left(x^{2}f(y)^{2}\right) = f(x)^{2}f(y) \quad (*)\]
for all \(x,y\in \mathbb{Q}_{>0}\).
|
Answer: \(f(x) = 1\) for all \(x\in \mathbb{Q}_{>0}\).
Solution. Take any \(a,b\in \mathbb{Q}_{>0}\) . By substituting \(x = f(a)\) , \(y = b\) and \(x = f(b)\) , \(y = a\) into \((*)\) we get
\[f\big(f(a)\big)^{2}f(b) = f\big(f(a)^{2}f(b)^{2}\big) = f\big(f(b)\big)^{2}f(a),\]
which yields
\[\frac{f\big(f(a)\big)^{2}}{f(a)} = \frac{f\big(f(b)\big)^{2}}{f(b)}\qquad \mathrm{for~all~}a,b\in \mathbb{Q}_{>0}.\]
In other words, this shows that there exists a constant \(C\in \mathbb{Q}_{>0}\) such that \(f\big(f(a)\big)^{2} = Cf(a)\) or
\[\left(\frac{f\big(f(a)\big)}{C}\right)^{2} = \frac{f(a)}{C}\qquad \mathrm{for~all~}a\in \mathbb{Q}_{>0}. \quad (1)\]
Denote by \(f^{n}(x) = \underbrace{f(f(\ldots(f(x))\ldots))}_{n}\) the \(n^{\mathrm{th}}\) iteration of \(f\) . Equality (1) yields
\[\frac{f(a)}{C} = \left(\frac{f^{2}(a)}{C}\right)^{2} = \left(\frac{f^{3}(a)}{C}\right)^{4} = \dots = \left(\frac{f^{n + 1}(a)}{C}\right)^{2^{n}}\]
for every positive integer \(n\). So, \(f(a) / C\) is the \(2^{n}\)-th power of a rational number for every positive integer \(n\). This is impossible unless \(f(a) / C = 1\), since otherwise the exponent of some prime in the prime decomposition of \(f(a) / C\) is not divisible by sufficiently large powers of 2. Therefore, \(f(a) = C\) for all \(a\in \mathbb{Q}_{>0}\).
Finally, after substituting \(f\equiv C\) into \((*)\) we get \(C = C^{3}\) , whence \(C = 1\) . So \(f(x)\equiv 1\) is the unique function satisfying \((*)\) .
Comment 1. There are several variations of the solution above. For instance, one may start with finding \(f(1) = 1\) . To do this, let \(d = f(1)\) . By substituting \(x = y = 1\) and \(x = d^{2}\) , \(y = 1\) into \((*)\) we get \(f(d^{2}) = d^{3}\) and \(f(d^{6}) = f(d^{2})^{2}\cdot d = d^{7}\) . By substituting now \(x = 1\) , \(y = d^{2}\) we obtain \(f(d^{6}) = d^{2}\cdot d^{3} = d^{5}\) . Therefore, \(d^{7} = f(d^{6}) = d^{5}\) , whence \(d = 1\) .
After that, the rest of the solution simplifies a bit, since we already know that \(C = \frac{f(f(1))^{2}}{f(1)} = 1\) . Hence equation (1) becomes merely \(f(f(a))^{2} = f(a)\) , which yields \(f(a) = 1\) in a similar manner.
Comment 2. There exist nonconstant functions \(f\colon \mathbb{R}^{+}\to \mathbb{R}^{+}\) satisfying \((*)\) for all real \(x,y > 0\) — e.g., \(f(x) = \sqrt{x}\) .
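Both the rational answer and the real counterexample of Comment 2 are easy to sanity-check numerically; a small sketch (the helper `check_rational` and the sample points are ours):

```python
from fractions import Fraction
from math import sqrt, isclose

def check_rational(f, samples):
    """Verify (*): f(x^2 f(y)^2) == f(x)^2 f(y) on the given sample pairs."""
    return all(f(x**2 * f(y)**2) == f(x)**2 * f(y)
               for x in samples for y in samples)

samples = [Fraction(a, b) for a in range(1, 5) for b in range(1, 5)]
# the unique solution on Q_{>0} is the constant function 1
assert check_rational(lambda x: Fraction(1), samples)

# Comment 2: over the positive reals, f(x) = sqrt(x) also satisfies (*)
for x in (0.5, 1.7, 3.0):
    for y in (0.25, 2.0, 9.0):
        assert isclose(sqrt(x**2 * sqrt(y)**2), sqrt(x)**2 * sqrt(y))
```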
|
IMOSL-2018-A2
|
Find all positive integers \(n \geq 3\) for which there exist real numbers \(a_1, a_2, \ldots , a_n\) , \(a_{n+1} = a_1\) , \(a_{n+2} = a_2\) such that
\[a_{i}a_{i + 1} + 1 = a_{i + 2}\]
for all \(i = 1,2,\ldots ,n\).
|
Answer: \(n\) can be any multiple of 3.
Solution 1. For the sake of convenience, extend the sequence \(a_1, \ldots , a_{n+2}\) to an infinite periodic sequence with period \(n\) . ( \(n\) is not necessarily the shortest period.)
If \(n\) is divisible by 3, then \((a_1, a_2, \ldots) = (- 1, - 1, 2, - 1, - 1, 2, \ldots)\) is an obvious solution.
We will show that in every periodic sequence satisfying the recurrence, each positive term is followed by two negative values, and after them the next number is positive again. From this, it follows that \(n\) is divisible by 3.
If the sequence contains two consecutive positive numbers \(a_{i}\) , \(a_{i + 1}\) , then \(a_{i + 2} = a_{i}a_{i + 1} + 1 > 1\) , so the next value is positive as well; by induction, all numbers are positive and greater than 1. But then \(a_{i + 2} = a_{i}a_{i + 1} + 1 \geq 1 \cdot a_{i + 1} + 1 > a_{i + 1}\) for every index \(i\) , which is impossible: our sequence is periodic, so it cannot increase everywhere.
If the number 0 occurs in the sequence, \(a_{i} = 0\) for some index \(i\), then it follows that \(a_{i + 1} = a_{i - 1}a_{i} + 1\) and \(a_{i + 2} = a_{i}a_{i + 1} + 1\) are two consecutive positive elements in the sequence, and we get the same contradiction again.
Notice that after any two consecutive negative numbers the next one must be positive: if \(a_{i}< 0\) and \(a_{i + 1}< 0\), then \(a_{i + 2} = a_{i}a_{i + 1} + 1 > 1 > 0\). Hence, the positive and negative numbers follow each other in such a way that each positive term is followed by one or two negative values and then comes the next positive term.
Consider the case when the positive and negative values alternate. So, if \(a_{i}\) is a negative value then \(a_{i + 1}\) is positive, \(a_{i + 2}\) is negative and \(a_{i + 3}\) is positive again.
Notice that \(a_{i}a_{i + 1} + 1 = a_{i + 2}< 0< a_{i + 3} = a_{i + 1}a_{i + 2} + 1\); since \(a_{i + 1} > 0\), we conclude \(a_{i}< a_{i + 2}\). Hence, the negative values form an infinite increasing subsequence, \(a_{i}< a_{i + 2}< a_{i + 4}< \ldots\), which is not possible, because the sequence is periodic.
The only case left is when there are consecutive negative numbers in the sequence. Suppose that \(a_{i}\) and \(a_{i + 1}\) are negative; then \(a_{i + 2} = a_{i}a_{i + 1} + 1 > 1\) . The number \(a_{i + 3}\) must be negative. We show that \(a_{i + 4}\) also must be negative.
Notice that \(a_{i + 3}\) is negative and \(a_{i + 4} = a_{i + 2}a_{i + 3} + 1< 1< a_{i}a_{i + 1} + 1 = a_{i + 2}\) , so
\[a_{i + 5} - a_{i + 4} = (a_{i + 3}a_{i + 4} + 1) - (a_{i + 2}a_{i + 3} + 1) = a_{i + 3}(a_{i + 4} - a_{i + 2}) > 0,\]
therefore \(a_{i + 5} > a_{i + 4}\) . Since at most one of \(a_{i + 4}\) and \(a_{i + 5}\) can be positive, that means that \(a_{i + 4}\) must be negative.
Now \(a_{i + 3}\) and \(a_{i + 4}\) are negative and \(a_{i + 5}\) is positive; so after two negative and a positive terms, the next three terms repeat the same pattern. That completes the solution.
Solution 2. We prove that the shortest period of the sequence must be 3. Then it follows that \(n\) must be divisible by 3.
Notice that the equation \(x^{2} + 1 = x\) has no real root, so the numbers \(a_{1}, \ldots , a_{n}\) cannot be all equal, hence the shortest period of the sequence cannot be 1.
By applying the recurrence relation for \(i\) and \(i + 1\) ,
\[(a_{i + 2} - 1)a_{i + 2} = a_{i}a_{i + 1}a_{i + 2} = a_{i}(a_{i + 3} - 1),\quad \mathrm{so}\] \[a_{i + 2}^{2} - a_{i}a_{i + 3} = a_{i + 2} - a_{i}.\]
By summing over \(i = 1,2,\ldots ,n\), the right-hand sides telescope to 0, so \(\sum_{i = 1}^{n}a_{i}^{2} = \sum_{i = 1}^{n}a_{i}a_{i + 3}\), and therefore
\[\sum_{i = 1}^{n}(a_{i} - a_{i + 3})^{2} = 2\sum_{i = 1}^{n}a_{i}^{2} - 2\sum_{i = 1}^{n}a_{i}a_{i + 3} = 0.\]
That proves that \(a_{i} = a_{i + 3}\) for every index \(i\) , so the sequence \(a_{1},a_{2},\ldots\) is indeed periodic with period 3. The shortest period cannot be 1, so it must be 3; therefore, \(n\) is divisible by 3.
Comment. By solving the system of equations \(ab + 1 = c\) , \(bc + 1 = a\) , \(ca + 1 = b\) , it can be seen that the pattern \((- 1, - 1,2)\) is repeated in all sequences satisfying the problem conditions.
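A minimal numerical check of the answer, confirming both the cyclic recurrence for multiples of 3 and the system solved in the comment (the helper name `satisfies` is ours):

```python
def satisfies(seq):
    """Check a_i a_{i+1} + 1 = a_{i+2} cyclically for the whole sequence."""
    n = len(seq)
    return all(seq[i] * seq[(i + 1) % n] + 1 == seq[(i + 2) % n]
               for i in range(n))

# the repeated pattern (-1, -1, 2) works for every n divisible by 3
for n in (3, 6, 9, 12):
    assert satisfies([-1, -1, 2] * (n // 3))

# and it solves the system ab + 1 = c, bc + 1 = a, ca + 1 = b
a, b, c = -1, -1, 2
assert a * b + 1 == c and b * c + 1 == a and c * a + 1 == b
```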
|
IMOSL-2018-A3
|
Given any set \(S\) of positive integers, show that at least one of the following two assertions holds:
(1) There exist distinct finite subsets \(F\) and \(G\) of \(S\) such that \(\textstyle \sum_{x\in F}1 / x = \sum_{x\in G}1 / x\) .
(2) There exists a positive rational number \(r< 1\) such that \(\textstyle \sum_{x\in F}1 / x\neq r\) for all finite subsets \(F\) of \(S\).
|
Solution 1. Argue indirectly. Agree, as usual, that the empty sum is 0, so that we may consider rationals in \([0,1)\); adjoining 0 causes no harm, since \(\textstyle \sum_{x\in F}1 / x = 0\) for no nonempty finite subset \(F\) of \(S\). For every rational \(r\) in \([0,1)\), let \(F_{r}\) be the unique finite subset of \(S\) such that \(\textstyle \sum_{x\in F_{r}}1 / x = r\). The argument hinges on the lemma below.
Lemma. If \(x\) is a member of \(S\) and \(q\) and \(r\) are rationals in \([0,1)\) such that \(q - r = 1 / x\), then \(x\) is a member of \(F_{q}\) if and only if it is not a member of \(F_{r}\).
Proof. If \(x\) is a member of \(F_{q}\) , then
\[\sum_{y\in F_{q}\setminus \{x\}}\frac{1}{y} = \sum_{y\in F_{q}}\frac{1}{y} -\frac{1}{x} = q - \frac{1}{x} = r = \sum_{y\in F_{r}}\frac{1}{y},\]
so \(F_{r} = F_{q}\setminus \{x\}\) , and \(x\) is not a member of \(F_{r}\) . Conversely, if \(x\) is not a member of \(F_{r}\) , then
\[\sum_{y\in F_{r}\cup \{x\}}\frac{1}{y} = \sum_{y\in F_{r}}\frac{1}{y} +\frac{1}{x} = r + \frac{1}{x} = q = \sum_{y\in F_{q}}\frac{1}{y},\]
so \(F_{q} = F_{r}\cup \{x\}\), and \(x\) is a member of \(F_{q}\). \(\square\)
Consider now an element \(x\) of \(S\) and a positive rational \(r< 1\). Let \(n = \lfloor rx\rfloor\) and consider the sets \(F_{r - k / x}\), \(k = 0,\ldots ,n\). Since \(0\leqslant r - n / x< 1 / x\), the set \(F_{r - n / x}\) does not contain \(x\), and a repeated application of the lemma shows that the \(F_{r - (n - 2k) / x}\) do not contain \(x\), whereas the \(F_{r - (n - 2k - 1) / x}\) do. Consequently, \(x\) is a member of \(F_{r}\) if and only if \(n\) is odd.
Finally, consider \(F_{2 / 3}\). By the preceding, \(\lfloor 2x / 3\rfloor\) is odd for each \(x\) in \(F_{2 / 3}\), so \(2x / 3\) is not integral. Since \(F_{2 / 3}\) is finite, there exists a positive rational \(\epsilon\) such that \(\lfloor (2 / 3 - \epsilon)x\rfloor = \lfloor 2x / 3\rfloor\) for all \(x\) in \(F_{2 / 3}\). This implies that \(F_{2 / 3}\) is a subset of \(F_{2 / 3 - \epsilon}\), which is impossible, since then \(2 / 3 = \sum_{x\in F_{2 / 3}}1 / x\leqslant \sum_{x\in F_{2 / 3 - \epsilon}}1 / x = 2 / 3 - \epsilon\).
Comment. The solution above can be adapted to show that the problem statement still holds if the condition \(r< 1\) in (2) is replaced with \(r< \delta\), for an arbitrary positive \(\delta\). This yields that, if \(S\) does not satisfy (1), then there exist infinitely many positive rational numbers \(r< 1\) such that \(\textstyle \sum_{x\in F}1 / x\neq r\) for all finite subsets \(F\) of \(S\).
Solution 2. A finite \(S\) clearly satisfies (2), so let \(S\) be infinite. If \(S\) fails both conditions, so does \(S\setminus \{1\}\). We may and will therefore assume that \(S\) consists of integers greater than 1. Label the elements of \(S\) increasingly \(x_{1}< x_{2}< \dots\), where \(x_{1}\geqslant 2\).
We first show that \(S\) satisfies (2) if \(x_{n + 1}\geq 2x_{n}\) for all \(n\) . In this case, \(x_{n}\geq 2^{n - 1}x_{1}\) for all \(n\) , so
\[s = \sum_{n\geq 1}\frac{1}{x_{n}}\leqslant \sum_{n\geq 1}\frac{1}{2^{n - 1}x_{1}} = \frac{2}{x_{1}}.\]
If \(x_{1}\geq 3\) , or \(x_{1} = 2\) and \(x_{n + 1} > 2x_{n}\) for some \(n\) , then \(\textstyle \sum_{x\in F}1 / x< s< 1\) for every finite subset \(F\) of \(S\) , so \(S\) satisfies (2); and if \(x_{1} = 2\) and \(x_{n + 1} = 2x_{n}\) for all \(n\) , that is, \(x_{n} = 2^{n}\) for all \(n\) , then every finite subset \(F\) of \(S\) consists of powers of 2, so \(\textstyle \sum_{x\in F}1 / x\neq 1 / 3\) and again \(S\) satisfies (2).
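The fact used in the last case, that no finite set of reciprocals of distinct powers of 2 sums to \(1/3\), can also be confirmed by brute force over a small range of exponents (the range below is chosen arbitrarily):

```python
from fractions import Fraction
from itertools import combinations

# every finite subset sum of reciprocals of {2, 4, 8, ...} is a dyadic
# rational, so it can never equal 1/3; exhaustive check on {2, ..., 2^10}
powers = [2**k for k in range(1, 11)]
third = Fraction(1, 3)
for size in range(1, len(powers) + 1):
    for sub in combinations(powers, size):
        assert sum(Fraction(1, x) for x in sub) != third
```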
Finally, we deal with the case where \(x_{n + 1}< 2x_{n}\) for some \(n\) . Consider the positive rational \(r = 1 / x_{n} - 1 / x_{n + 1}< 1 / x_{n + 1}\) . If \(r = \textstyle \sum_{x\in F}1 / x\) for no finite subset \(F\) of \(S\) , then \(S\) satisfies (2).
We now assume that \(r = \sum_{x\in F_{0}}1 / x\) for some finite subset \(F_{0}\) of \(S\) , and show that \(S\) satisfies (1). Since \(\sum_{x\in F_{0}}1 / x = r< 1 / x_{n + 1}\) , it follows that \(x_{n + 1}\) is not a member of \(F_{0}\) , so
\[\sum_{x\in F_{0}\cup \{x_{n + 1}\}}\frac{1}{x} = \sum_{x\in F_{0}}\frac{1}{x} +\frac{1}{x_{n + 1}} = r + \frac{1}{x_{n + 1}} = \frac{1}{x_{n}}.\]
Consequently, \(F = F_{0}\cup \{x_{n + 1}\}\) and \(G = \{x_{n}\}\) are distinct finite subsets of \(S\) such that \(\sum_{x\in F}1 / x = \sum_{x\in G}1 / x\) , and \(S\) satisfies (1).
|
IMOSL-2018-A5
|
Determine all functions \(f:(0,\infty)\to \mathbb{R}\) satisfying
\[\left(x + \frac{1}{x}\right)f(y) = f(xy) + f\left(\frac{y}{x}\right) \quad (1)\]
for all \(x,y > 0\).
|
Answer: \(f(x) = C_{1}x + \frac{C_{2}}{x}\) with arbitrary constants \(C_{1}\) and \(C_{2}\) .
Solution 1. Fix a real number \(a > 1\) , and take a new variable \(t\) . For the values \(f(t)\) , \(f(t^{2})\) , \(f(at)\) and \(f(a^{2}t^{2})\) , the relation (1) provides a system of linear equations:
\[x = y = t:\qquad \left(t + \frac{1}{t}\right)f(t) = f(t^{2}) + f(1)\] \[x = \frac{t}{a},y = at:\qquad \left(\frac{t}{a} +\frac{a}{t}\right)f(at) = f(t^{2}) + f(a^{2})\] \[x = a^{2}t,y = t:\qquad \left(a^{2}t + \frac{1}{a^{2}t}\right)f(t) = f(a^{2}t^{2}) + f\left(\frac{1}{a^{2}}\right)\] \[x = y = at:\qquad \left(at + \frac{1}{at}\right)f(at) = f(a^{2}t^{2}) + f(1)\]
In order to eliminate \(f(t^{2})\) , take the difference of (2a) and (2b); from (2c) and (2d) eliminate \(f(a^{2}t^{2})\) ; then by taking a linear combination, eliminate \(f(at)\) as well:
\[\left(t + \frac{1}{t}\right)f(t) - \left(\frac{t}{a} +\frac{a}{t}\right)f(at) = f(1) - f(a^{2})\quad \mathrm{and}\] \[\left(a^{2}t + \frac{1}{a^{2}t}\right)f(t) - \left(at + \frac{1}{at}\right)f(at) = f(1 / a^{2}) - f(1),\quad \mathrm{so}\] \[\left(\left(at + \frac{1}{at}\right)\left(t + \frac{1}{t}\right) - \left(\frac{t}{a} +\frac{a}{t}\right)\left(a^{2}t + \frac{1}{a^{2}t}\right)\right)f(t)\] \[\qquad = \left(at + \frac{1}{at}\right)\left(f(1) - f(a^{2})\right) - \left(\frac{t}{a} +\frac{a}{t}\right)\left(f(1 / a^{2}) - f(1)\right).\]
Notice that on the left- hand side, the coefficient of \(f(t)\) is nonzero and does not depend on \(t\) :
\[\left(at + \frac{1}{at}\right)\left(t + \frac{1}{t}\right) - \left(\frac{t}{a} +\frac{a}{t}\right)\left(a^{2}t + \frac{1}{a^{2}t}\right) = a + \frac{1}{a} -\left(a^{3} + \frac{1}{a^{3}}\right)< 0.\]
After dividing by this fixed number, we get
\[f(t) = C_{1}t + \frac{C_{2}}{t} \quad (3)\]
where the numbers \(C_{1}\) and \(C_{2}\) are expressed in terms of \(a\) , \(f(1)\) , \(f(a^{2})\) and \(f(1 / a^{2})\) , and they do not depend on \(t\) .
The functions of the form (3) satisfy the equation:
\[\left(x + \frac{1}{x}\right)f(y) = \left(x + \frac{1}{x}\right)\left(C_{1}y + \frac{C_{2}}{y}\right) = \left(C_{1}xy + \frac{C_{2}}{xy}\right) + \left(C_{1}\frac{y}{x} +C_{2}\frac{x}{y}\right) = f(xy) + f\left(\frac{y}{x}\right).\]
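This verification can also be replayed in exact rational arithmetic; a small sketch (the constants and sample points below are chosen arbitrarily):

```python
from fractions import Fraction

def f(x, c1, c2):
    # the claimed general solution f(x) = C1 x + C2 / x
    return c1 * x + c2 / x

# spot-check (1): (x + 1/x) f(y) = f(xy) + f(y/x) for several rational
# x, y and constants C1, C2, using exact arithmetic via Fraction
vals = [Fraction(a, b) for a in (1, 2, 3, 5) for b in (1, 2, 7)]
for c1, c2 in [(0, 1), (1, 0), (2, -3), (Fraction(1, 2), Fraction(5, 3))]:
    for x in vals:
        for y in vals:
            assert (x + 1 / x) * f(y, c1, c2) == f(x * y, c1, c2) + f(y / x, c1, c2)
```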
Solution 2. We start with an observation. If we substitute \(x = a \neq 1\) and \(y = a^n\) in (1), we obtain
\[f(a^{n + 1}) - \left(a + \frac{1}{a}\right)f(a^{n}) + f(a^{n - 1}) = 0.\]
For the sequence \(z_{n} = f(a^{n})\), this is a homogeneous linear recurrence of the second order, and its characteristic polynomial is \(t^{2} - \left(a + \frac{1}{a}\right)t + 1 = (t - a)\left(t - \frac{1}{a}\right)\) with two distinct nonzero roots, namely \(a\) and \(1 / a\). As is well known, the general solution is \(z_{n} = C_{1}a^{n} + C_{2}(1 / a)^{n}\), where the index \(n\) may be positive as well as negative. Of course, the numbers \(C_{1}\) and \(C_{2}\) may depend on the choice of \(a\), so in fact we have two functions, \(C_{1}\) and \(C_{2}\), such that
\[f(a^{n}) = C_{1}(a)\cdot a^{n} + \frac{C_{2}(a)}{a^{n}}\quad \mathrm{for~every~}a\neq 1\mathrm{~and~every~integer~}n. \quad (4)\]
The relation (4) can be easily extended to rational values of \(n\) , so we may conjecture that \(C_{1}\) and \(C_{2}\) are constants, and whence \(f(t) = C_{1}t + \frac{C_{2}}{t}\) . As it was seen in the previous solution, such functions indeed satisfy (1).
The equation (1) is linear in \(f\) ; so if some functions \(f_{1}\) and \(f_{2}\) satisfy (1) and \(c_{1},c_{2}\) are real numbers, then \(c_{1}f_{1}(x) + c_{2}f_{2}(x)\) is also a solution of (1). In order to make our formulas simpler, define
\[f_{0}(x) = f(x) - f(1)\cdot x.\]
This function is another one satisfying (1) and the extra constraint \(f_{0}(1) = 0\). Repeating the same argument on linear recurrences, we can write \(f_{0}(a^{n}) = K(a)a^{n} + \frac{L(a)}{a^{n}}\) with some functions \(K\) and \(L\). By substituting \(n = 0\), we can see that \(K(a) + L(a) = f_{0}(1) = 0\) for every \(a\). Hence,
\[f_{0}(a^{n}) = K(a)\left(a^{n} - \frac{1}{a^{n}}\right).\]
Now take two numbers \(a > b > 1\) arbitrarily and substitute \(x = (a / b)^{n}\) and \(y = (ab)^{n}\) in (1):
\[\begin{array}{l}{\left(\frac{a^{n}}{b^{n}} +\frac{b^{n}}{a^{n}}\right)f_{0}\big((ab)^{n}\big) = f_{0}\big(a^{2n}\big) + f_{0}\big(b^{2n}\big),\quad \mathrm{so}}\\ {\left(\frac{a^{n}}{b^{n}} +\frac{b^{n}}{a^{n}}\right)K(ab)\bigg((ab)^{n} - \frac{1}{(ab)^{n}}\bigg) = K(a)\bigg(a^{2n} - \frac{1}{a^{2n}}\bigg) + K(b)\bigg(b^{2n} - \frac{1}{b^{2n}}\bigg),\quad \mathrm{or~equivalently}}\\ {K(ab)\bigg(a^{2n} - \frac{1}{a^{2n}} +b^{2n} - \frac{1}{b^{2n}}\bigg) = K(a)\bigg(a^{2n} - \frac{1}{a^{2n}}\bigg) + K(b)\bigg(b^{2n} - \frac{1}{b^{2n}}\bigg).} \end{array} \quad (5)\]
By dividing (5) by \(a^{2n}\) and then taking limit with \(n \to +\infty\) we get \(K(ab) = K(a)\) . Then (5) reduces to \(K(a) = K(b)\) . Hence, \(K(a) = K(b)\) for all \(a > b > 1\) .
Fix \(a > 1\) . For every \(x > 0\) there is some \(b\) and an integer \(n\) such that \(1 < b < a\) and \(x = b^{n}\) . Then
\[f_{0}(x) = f_{0}(b^{n}) = K(b)\left(b^{n} - \frac{1}{b^{n}}\right) = K(a)\left(x - \frac{1}{x}\right).\]
Hence, we have \(f(x) = f_{0}(x) + f(1)x = C_{1}x + \frac{C_{2}}{x}\) with \(C_{1} = K(a) + f(1)\) and \(C_{2} = - K(a)\) .
Comment. After establishing (5), there are several variants of finishing the solution. For example, instead of taking a limit, we can obtain a system of linear equations for \(K(a)\) , \(K(b)\) and \(K(ab)\) by substituting two positive integers \(n\) in (5), say \(n = 1\) and \(n = 2\) . This approach leads to a similar ending as in the first solution.
Optionally, we define another function \(f_{1}(x) = f_{0}(x) - C\left(x - \frac{1}{x}\right)\) and prescribe \(K(c) = 0\) for another fixed \(c\) . Then we can choose \(ab = c\) and decrease the number of terms in (5).
|
IMOSL-2018-A6
|
Let \(m, n \geqslant 2\) be integers. Let \(f(x_{1}, \ldots , x_{n})\) be a polynomial with real coefficients such that
\[f(x_{1},\ldots ,x_{n}) = \left\lfloor \frac{x_{1} + \ldots + x_{n}}{m}\right\rfloor \quad \mathrm{for~every~}x_{1},\ldots ,x_{n}\in \{0,1,\ldots ,m - 1\} .\]
Prove that the total degree of \(f\) is at least \(n\) .
|
Solution. We transform the problem to a single variable question by the following
Lemma. Let \(a_{1},\ldots ,a_{n}\) be nonnegative integers and let \(G(x)\) be a nonzero polynomial with \(\deg G\leqslant a_{1} + \ldots +a_{n}\) . Suppose that some polynomial \(F(x_{1},\ldots ,x_{n})\) satisfies
\[F(x_{1},\ldots ,x_{n}) = G(x_{1} + \ldots +x_{n})\quad \mathrm{for~}(x_{1},\ldots ,x_{n})\in \{0,1,\ldots ,a_{1}\} \times \ldots \times \{0,1,\ldots ,a_{n}\} .\]
Then \(F\) cannot be the zero polynomial, and \(\deg F\geqslant \deg G\).
For proving the lemma, we will use forward differences of polynomials. If \(p(x)\) is a polynomial with a single variable, then define \((\Delta p)(x) = p(x + 1) - p(x)\) . It is well- known that if \(p\) is a nonconstant polynomial then \(\deg \Delta p = \deg p - 1\) .
If \(p(x_{1},\ldots ,x_{n})\) is a polynomial with \(n\) variables and \(1\leqslant k\leqslant n\) then let
\[(\Delta_{k}p)(x_{1},\ldots ,x_{n}) = p(x_{1},\ldots ,x_{k - 1},x_{k} + 1,x_{k + 1},\ldots ,x_{n}) - p(x_{1},\ldots ,x_{n}).\]
It is also well- known that either \(\Delta_{k}p\) is the zero polynomial or \(\deg (\Delta_{k}p)\leqslant \deg p - 1\) .
Proof of the lemma. We apply induction on the degree of \(G\) . If \(G\) is a constant polynomial then we have \(F(0,\ldots ,0) = G(0)\neq 0\) , so \(F\) cannot be the zero polynomial.
Suppose that \(\deg G\geqslant 1\) and that the lemma holds for lower degrees. Since \(a_{1} + \ldots +a_{n}\geqslant \deg G > 0\), at least one of \(a_{1},\ldots ,a_{n}\) is positive; without loss of generality suppose \(a_{1}\geqslant 1\).
Consider the polynomials \(F_{1} = \Delta_{1}F\) and \(G_{1} = \Delta G\) . On the grid \(\{0,\ldots ,a_{1} - 1\} \times \{0,\ldots ,a_{2}\} \times\) \(\ldots \times \{0,\ldots ,a_{n}\}\) we have
\[F_{1}(x_{1},\ldots ,x_{n}) = F(x_{1} + 1,x_{2},\ldots ,x_{n}) - F(x_{1},x_{2},\ldots ,x_{n}) =\] \[\qquad = G(x_{1} + \ldots +x_{n} + 1) - G(x_{1} + \ldots +x_{n}) = G_{1}(x_{1} + \ldots +x_{n}).\]
Since \(G\) is nonconstant, we have \(\deg G_{1} = \deg G - 1\leqslant (a_{1} - 1) + a_{2} + \ldots +a_{n}\) . Therefore we can apply the induction hypothesis to \(F_{1}\) and \(G_{1}\) and conclude that \(F_{1}\) is not the zero polynomial and \(\deg F_{1}\geqslant \deg G_{1}\) . Hence, \(\deg F\geqslant \deg F_{1} + 1\geqslant \deg G_{1} + 1 = \deg G\) . That finishes the proof.
To prove the problem statement, take the unique polynomial \(g(x)\) so that \(g(x) = \left\lfloor \frac{x}{m}\right\rfloor\) for \(x\in \{0,1,\ldots ,n(m - 1)\}\) and \(\deg g\leqslant n(m - 1)\). Notice that precisely \(n(m - 1) + 1\) values of \(g\) are prescribed, so \(g(x)\) indeed exists and is unique. Notice further that the constraints \(g(0) = g(1) = 0\) and \(g(m) = 1\) together enforce \(\deg g\geqslant 2\).
By applying the lemma to \(a_{1} = \ldots = a_{n} = m - 1\) and the polynomials \(f\) and \(g\) , we achieve \(\deg f\geqslant \deg g\) . Hence we just need a suitable lower bound on \(\deg g\) .
Consider the polynomial \(h(x) = g(x + m) - g(x) - 1\) . The degree of \(g(x + m) - g(x)\) is \(\deg g - 1\geqslant 1\) , so \(\deg h = \deg g - 1\geqslant 1\) , and therefore \(h\) cannot be the zero polynomial. On the other hand, \(h\) vanishes at the points \(0,1,\ldots ,n(m - 1) - m\) , so \(h\) has at least \((n - 1)(m - 1)\) roots. Hence,
\[\deg f\geqslant \deg g = \deg h + 1\geqslant (n - 1)(m - 1) + 1\geqslant n.\]
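The degree bound can be confirmed for small \(m,n\) by recovering \(\deg g\) from the prescribed values of \(g\) via forward differences (the helper names below are ours):

```python
def interp_degree(values):
    """Degree of the unique interpolating polynomial through
    (0, values[0]), (1, values[1]), ..., computed via forward differences:
    a polynomial of degree d has a nonzero constant d-th difference."""
    deg, k, cur = -1, 0, list(values)
    while cur:
        if any(c != 0 for c in cur):
            deg = k
        cur = [b - a for a, b in zip(cur, cur[1:])]
        k += 1
    return deg

def deg_g(m, n):
    # g interpolates floor(x/m) at x = 0, 1, ..., n(m-1)
    return interp_degree([x // m for x in range(n * (m - 1) + 1)])

for m in range(2, 6):
    for n in range(2, 6):
        # hence deg f >= deg g >= (n-1)(m-1) + 1 >= n
        assert deg_g(m, n) >= (n - 1) * (m - 1) + 1 >= n
```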
Comment 1. In the lemma we have equality for the choice \(F(x_{1},\ldots ,x_{n}) = G(x_{1} + \ldots +x_{n})\) , so it indeed transforms the problem to an equivalent single- variable question.
Comment 2. If \(m\geq 3\) , the polynomial \(h(x)\) can be replaced by \(\Delta g\) . Notice that
\[(\Delta g)(x) = \left\{ \begin{array}{ll}1 & \mathrm{if~}x\equiv -1\pmod {m}\\ 0 & \mathrm{otherwise} \end{array} \right. \quad \mathrm{for~}x = 0,1,\ldots ,n(m - 1) - 1.\]
Hence, \(\Delta g\) vanishes at all integers \(x\) with \(0\leqslant x< n(m - 1)\) and \(x\not\equiv - 1\pmod{m}\). This leads to \(\deg g\geqslant \frac{(m - 1)^{2}n}{m} +1\).
If \(m\) is even then this lower bound can be improved to \(n(m - 1)\) . For \(0\leqslant N< n(m - 1)\) , the \((N + 1)^{\mathrm{st}}\) forward difference at \(x = 0\) is
\[(\Delta^{N + 1}g)(0) = \sum_{k = 0}^{N}(-1)^{N - k}\binom{N}{k}(\Delta g)(k) = \sum_{\substack{0\leqslant k\leqslant N \\ k\equiv -1\pmod{m}}}(-1)^{N - k}\binom{N}{k}. \quad (*)\]
Since \(m\) is even, all signs in the last sum are equal; with \(N = n(m - 1) - 1\) this proves \(\Delta^{n(m - 1)}g(0)\neq 0\) , indicating that \(\deg g\geqslant n(m - 1)\) .
However, there are infinitely many cases when all terms in \((\ast)\) cancel out, for example if \(m\) is an odd divisor of \(n + 1\) . In such cases, \(\deg f\) can be less than \(n(m - 1)\) .
Comment 3. The lemma is closely related to the so-called
Alon–Füredi bound. Let \(S_{1},\ldots ,S_{n}\) be nonempty finite sets in a field and suppose that the polynomial \(P(x_{1},\ldots ,x_{n})\) vanishes at the points of the grid \(S_{1}\times \ldots \times S_{n}\), except for a single point. Then \(\deg P\geqslant \sum_{i = 1}^{n}(|S_{i}| - 1)\).
(A well-known application of the Alon–Füredi bound was the former IMO problem 2007/6. Since then, this result has become popular among students and is part of the IMO training for many IMO teams.)
The proof of the lemma can be replaced by an application of the Alon–Füredi bound as follows. Let \(d = \deg G\), and let \(G_{0}\) be the unique polynomial such that \(G_{0}(x) = G(x)\) for \(x\in \{0,1,\ldots ,d - 1\}\) but \(\deg G_{0}< d\). The polynomials \(G_{0}\) and \(G\) are different because they have different degrees, and they attain the same values at \(0,1,\ldots ,d - 1\); that enforces \(G_{0}(d)\neq G(d)\).
Choose some nonnegative integers \(b_{1},\ldots ,b_{n}\) so that \(b_{1}\leqslant a_{1}\) , ..., \(b_{n}\leqslant a_{n}\) , and \(b_{1} + \ldots +b_{n} = d\) , and consider the polynomial
\[H(x_{1},\ldots ,x_{n}) = F(x_{1},\ldots ,x_{n}) - G_{0}(x_{1} + \ldots +x_{n})\]
on the grid \(\{0,1,\ldots ,b_{1}\} \times \ldots \times \{0,1,\ldots ,b_{n}\}\) .
At the point \((b_{1},\ldots ,b_{n})\) we have \(H(b_{1},\ldots ,b_{n}) = G(d) - G_{0}(d)\neq 0\) . At all other points of the grid we have \(F = G\) and therefore \(H = G - G_{0} = 0\) . So, by the Alon- Furedi bound, \(\deg H\geqslant b_{1} + \ldots +b_{n} = d\) . Since \(\deg G_{0}< d\) , this implies \(\deg F = \deg (H + G_{0}) = \deg H\geqslant d = \deg G\) . \(\square\)
|
IMOSL-2018-C1
|
Let \(n \geq 3\) be an integer. Prove that there exists a set \(S\) of \(2n\) positive integers satisfying the following property: For every \(m = 2,3,\ldots ,n\) the set \(S\) can be partitioned into two subsets with equal sums of elements, one of the subsets having cardinality \(m\).
|
Solution. We show that one possible example is the set
\[S = \{1\cdot 3^{k},2\cdot 3^{k}\colon k = 1,2,\ldots ,n - 1\} \cup \left\{1,\frac{3^{n} + 9}{2} -1\right\} .\]
It is readily verified that all the numbers listed above are distinct (notice that the last two are not divisible by 3).
The sum of elements in \(S\) is
\[\Sigma = 1 + \left(\frac{3^{n} + 9}{2} -1\right) + \sum_{k = 1}^{n - 1}(1\cdot 3^{k} + 2\cdot 3^{k}) = \frac{3^{n} + 9}{2} +\sum_{k = 1}^{n - 1}3^{k + 1} = \frac{3^{n} + 9}{2} +\frac{3^{n + 1} - 9}{2} = 2\cdot 3^{n}.\]
Hence, in order to show that this set satisfies the problem requirements, it suffices to present, for every \(m = 2,3,\ldots ,n\) , an \(m\) - element subset \(A_{m}\subset S\) whose sum of elements equals \(3^{n}\) .
Such a subset is
\[A_{m} = \{2\cdot 3^{k}\colon k = n - m + 1,n - m + 2,\ldots ,n - 1\} \cup \{1\cdot 3^{n - m + 1}\} .\]
Clearly, \(|A_{m}| = m\) . The sum of elements in \(A_{m}\) is
\[3^{n - m + 1} + \sum_{k = n - m + 1}^{n - 1}2\cdot 3^{k} = 3^{n - m + 1} + \frac{2\cdot 3^{n} - 2\cdot 3^{n - m + 1}}{2} = 3^{n},\]
as required.
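A quick computational check of the construction for small \(n\) (the function names `build_S` and `A` are ours):

```python
def build_S(n):
    """The set S from the solution, for a given n >= 3."""
    big = (3**n + 9) // 2 - 1
    return sorted({3**k for k in range(1, n)} |
                  {2 * 3**k for k in range(1, n)} | {1, big})

def A(m, n):
    """The m-element subset A_m whose elements sum to 3^n."""
    return {2 * 3**k for k in range(n - m + 1, n)} | {3**(n - m + 1)}

for n in range(3, 8):
    S = build_S(n)
    assert len(S) == 2 * n              # all 2n elements are distinct
    assert sum(S) == 2 * 3**n           # total sum Sigma
    for m in range(2, n + 1):
        Am = A(m, n)
        assert Am <= set(S) and len(Am) == m
        assert sum(Am) == 3**n          # each part sums to Sigma / 2
```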
Comment. Let us present a more general construction. Let \(s_{1},s_{2},\ldots ,s_{2n - 1}\) be a sequence of pairwise distinct positive integers satisfying \(s_{2i + 1} = s_{2i} + s_{2i - 1}\) for all \(i = 2,3,\ldots ,n - 1\) . Set \(s_{2n} = s_{1} + s_{2} + \ldots +s_{2n - 4}\) .
Assume that \(s_{2n}\) is distinct from the other terms of the sequence. Then the set \(S = \{s_{1},s_{2},\ldots ,s_{2n}\}\) satisfies the problem requirements. Indeed, the sum of its elements is
\[\Sigma = \sum_{i = 1}^{2n - 4}s_{i} + (s_{2n - 3} + s_{2n - 2}) + s_{2n - 1} + s_{2n} = s_{2n} + s_{2n - 1} + s_{2n - 1} + s_{2n} = 2s_{2n} + 2s_{2n - 1}.\]
Therefore, we have
\[\frac{\Sigma}{2} = s_{2n} + s_{2n - 1} = s_{2n} + s_{2n - 2} + s_{2n - 3} = s_{2n} + s_{2n - 2} + s_{2n - 4} + s_{2n - 5} = \dots ,\]
which shows that the required sets \(A_{m}\) can be chosen as
\[A_{m} = \{s_{2n},s_{2n - 2},\ldots ,s_{2n - 2m + 4},s_{2n - 2m + 3}\} .\]
So, the only condition to be satisfied is \(s_{2n}\notin \{s_{1},s_{2},\ldots ,s_{2n - 1}\}\) , which can be achieved in many different ways (e.g., by choosing properly the number \(s_{1}\) after specifying \(s_{2},s_{3},\ldots ,s_{2n - 1})\) .
The solution above is an instance of this general construction. Another instance, for \(n > 3\) , is the set
\[\{F_{1},F_{2},\ldots ,F_{2n - 1},F_{1} + \dots +F_{2n - 4}\} ,\]
where \(F_{1} = 1\) , \(F_{2} = 2\) , \(F_{n + 1} = F_{n} + F_{n - 1}\) is the usual Fibonacci sequence.
|
IMOSL-2018-C3
|
Let \(n\) be a given positive integer. Sisyphus performs a sequence of turns on a board consisting of \(n + 1\) squares in a row, numbered 0 to \(n\) from left to right. Initially, \(n\) stones are put into square 0, and the other squares are empty. At every turn, Sisyphus chooses any nonempty square, say with \(k\) stones, takes one of those stones and moves it to the right by at most \(k\) squares (the stone should stay within the board). Sisyphus' aim is to move all \(n\) stones to square \(n\) .
Prove that Sisyphus cannot reach the aim in less than
\[\left\lceil \frac{n}{1}\right\rceil +\left\lceil \frac{n}{2}\right\rceil +\left\lceil \frac{n}{3}\right\rceil +\cdots +\left\lceil \frac{n}{n}\right\rceil\]
turns. (As usual, \(\lceil x\rceil\) stands for the least integer not smaller than \(x\) .)
|
Solution. The stones are indistinguishable, and all have the same origin and the same final position. So, at any turn we can prescribe which stone from the chosen square to move. We do it in the following manner. Number the stones from 1 to \(n\) . At any turn, after choosing a square, Sisyphus moves the stone with the largest number from this square.
This way, when stone \(k\) is moved from some square, that square contains not more than \(k\) stones (since all their numbers are at most \(k\) ). Therefore, stone \(k\) is moved by at most \(k\) squares at each turn. Since the total shift of the stone is exactly \(n\) , at least \(\lceil n / k\rceil\) moves of stone \(k\) should have been made, for every \(k = 1,2,\ldots ,n\) .
By summing up over all \(k = 1,2,\ldots ,n\) , we get the required estimate.
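For small \(n\) the exact optimum can be computed by a breadth-first search over stone distributions, confirming the estimate (and, per the Comment below, equality for \(n \leqslant 5\)); a brute-force sketch with a straightforward state encoding of our own choice:

```python
from collections import deque
from math import ceil

def min_turns(n):
    """Exact minimum number of turns, by BFS over stone distributions."""
    start = (n,) + (0,) * n            # n stones on square 0
    goal = (0,) * n + (n,)             # all stones on square n
    dist = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return dist[state]
        for i in range(n):             # choose a nonempty square i < n
            k = state[i]
            if k == 0:
                continue
            # move one stone right by 1..k squares, staying on the board
            for j in range(i + 1, min(i + k, n) + 1):
                nxt = list(state)
                nxt[i] -= 1
                nxt[j] += 1
                nxt = tuple(nxt)
                if nxt not in dist:
                    dist[nxt] = dist[state] + 1
                    queue.append(nxt)

def lower_bound(n):
    return sum(ceil(n / k) for k in range(1, n + 1))
```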
Comment. The original submission contained the second part, asking for which values of \(n\) the equality can be achieved. The answer is \(n = 1,2,3,4,5,7\) . The Problem Selection Committee considered this part to be less suitable for the competition, due to technicalities.
|
IMOSL-2018-C4
|
An anti-Pascal pyramid is a finite set of numbers, placed in a triangle-shaped array so that the first row of the array contains one number, the second row contains two numbers, the third row contains three numbers and so on; and, except for the numbers in the bottom row, each number equals the absolute value of the difference of the two numbers below it. For instance, the triangle below is an anti-Pascal pyramid with four rows, in which every integer from 1 to \(1 + 2 + 3 + 4 = 10\) occurs exactly once:
\[\begin{array}{ccccccc} & & & 4 & & & \\ & & 2 & & 6 & & \\ & 5 & & 7 & & 1 & \\ 8 & & 3 & & 10 & & 9. \end{array}\]
Is it possible to form an anti-Pascal pyramid with 2018 rows, using every integer from 1 to \(1 + 2 + \dots +2018\) exactly once?
|
Answer: No, it is not possible.
Solution. Let \(T\) be an anti-Pascal pyramid with \(n\) rows, containing every integer from 1 to \(1 + 2 + \dots +n\) , and let \(a_{1}\) be the topmost number in \(T\) (Figure 1). The two numbers below \(a_{1}\) are some \(a_{2}\) and \(b_{2} = a_{1} + a_{2}\) , the two numbers below \(b_{2}\) are some \(a_{3}\) and \(b_{3} = a_{1} + a_{2} + a_{3}\) , and so on and so forth all the way down to the bottom row, where some \(a_{n}\) and \(b_{n} = a_{1} + a_{2} + \dots +a_{n}\) are the two neighbours below \(b_{n - 1} = a_{1} + a_{2} + \dots +a_{n - 1}\) . Since the \(a_{k}\) are \(n\) pairwise distinct positive integers whose sum does not exceed the largest number in \(T\) , which is \(1 + 2 + \dots +n\) , it follows that they form a permutation of \(1,2,\ldots ,n\) .

<center>Figure 1 </center>

<center>Figure 2 </center>
Consider now (Figure 2) the two 'equilateral' subtriangles of \(T\) whose bottom rows contain the numbers to the left, respectively right, of the pair \(a_{n}\) , \(b_{n}\) . (One of these subtriangles may very well be empty.) At least one of these subtriangles, say \(T^{\prime}\) , has side length \(\ell \geqslant \lceil (n - 2) / 2\rceil\) . Since \(T^{\prime}\) obeys the anti-Pascal rule, it contains \(\ell\) pairwise distinct positive integers \(a_{1}^{\prime},a_{2}^{\prime},\ldots ,a_{\ell}^{\prime}\) , where \(a_{1}^{\prime}\) is at the apex, and \(a_{k}^{\prime}\) and \(b_{k}^{\prime} = a_{1}^{\prime} + a_{2}^{\prime} + \dots +a_{k}^{\prime}\) are the two neighbours below \(b_{k - 1}^{\prime}\) for each \(k = 2,3,\ldots ,\ell\) . Since the \(a_{k}\) all lie outside \(T^{\prime}\) , and they form a permutation of \(1,2,\ldots ,n\) , the \(a_{k}^{\prime}\) are all greater than \(n\) . Consequently,
\[b_{\ell}^{\prime}\geqslant (n + 1) + (n + 2) + \dots +(n + \ell) = \frac{\ell(2n + \ell + 1)}{2}\] \[\qquad \geqslant \frac{1}{2}\cdot \frac{n - 2}{2}\left(2n + \frac{n - 2}{2} +1\right) = \frac{5n(n - 2)}{8},\]
which is greater than \(1 + 2 + \dots +n = n(n + 1) / 2\) for \(n = 2018\) . A contradiction.
Comment. The above estimate may be slightly improved by noticing that \(b_{\ell}^{\prime}\neq b_{n}\) . This implies \(n(n + 1) / 2 = b_{n} > b_{\ell}^{\prime}\geqslant [(n - 2) / 2](2n + [(n - 2) / 2] + 1) / 2\) , so \(n\leqslant 7\) if \(n\) is odd, and \(n\leqslant 12\) if \(n\) is even. It seems that the largest anti-Pascal pyramid whose entries are a permutation of the integers from 1 to \(1 + 2 + \dots +n\) has 5 rows.
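The small cases can be settled by machine; a sketch (brute force over bottom rows, which determine the whole pyramid, is our own framing):

```python
from itertools import permutations

def is_anti_pascal(rows):
    """rows[i] is the (i+1)-th row, top to bottom."""
    return all(rows[i][j] == abs(rows[i + 1][j] - rows[i + 1][j + 1])
               for i in range(len(rows) - 1)
               for j in range(len(rows[i])))

def from_bottom(bottom):
    """Rebuild the whole pyramid from its bottom row."""
    rows = [list(bottom)]
    while len(rows[0]) > 1:
        rows.insert(0, [abs(a - b) for a, b in zip(rows[0], rows[0][1:])])
    return rows

def exists(n):
    """Is there an n-row anti-Pascal pyramid using 1..n(n+1)/2 once each?"""
    total = n * (n + 1) // 2
    for bottom in permutations(range(1, total + 1), n):
        rows = from_bottom(bottom)
        flat = [x for row in rows for x in row]
        if sorted(flat) == list(range(1, total + 1)):
            return True
    return False

example = [[4], [2, 6], [5, 7, 1], [8, 3, 10, 9]]
```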
|
IMOSL-2018-C6
|
Let \(a\) and \(b\) be distinct positive integers. The following infinite process takes place on an initially empty board.
\((i)\) If there is at least one pair of equal numbers on the board, we choose such a pair and increase one of its components by \(a\) and the other by \(b\) .
\((ii)\) If no such pair exists, we write down the number 0 twice.
Prove that, no matter how we make the choices in \((i)\) , operation \((ii)\) will be performed only finitely many times.
|
Solution 1. We may assume \(\gcd (a,b) = 1\) ; otherwise we work in the same way with multiples of \(d = \gcd (a,b)\) .
Suppose that after \(N\) moves of type \((ii)\) and some moves of type \((i)\) we have to add two new zeros. For each integer \(k\) , denote by \(f(k)\) the number of times that the number \(k\) appeared on the board up to this moment. Then \(f(0) = 2N\) and \(f(k) = 0\) for \(k< 0\) . Since at this moment the board contains at most one copy of \(k - a\) , every second occurrence of \(k - a\) on the board produced, at some moment, an occurrence of \(k\) ; the same holds for \(k - b\) . Therefore,
\[f(k) = \left\lfloor \frac{f(k - a)}{2}\right\rfloor +\left\lfloor \frac{f(k - b)}{2}\right\rfloor , \quad (1)\]
yielding
\[f(k)\geqslant \frac{f(k - a) + f(k - b)}{2} -1. \quad (2)\]
Since \(\gcd (a,b) = 1\) , every integer \(x > ab - a - b\) is expressible in the form \(x = sa + tb\) , with integer \(s,t\geqslant 0\) .
We will prove by induction on \(s + t\) that if \(x = sa + tb\) , with \(s,t\) nonnegative integers, then
\[f(x) > \frac{f(0)}{2^{s + t}} -2. \quad (3)\]
The base case \(s + t = 0\) is trivial. Assume now that (3) is true for \(s + t = v\) . Then, if \(s + t = v + 1\) and \(x = sa + tb\) , at least one of the numbers \(s\) and \(t\) , say \(s\) , is positive, hence by (2),
\[f(x) = f(sa + tb)\geqslant \frac{f((s - 1)a + tb)}{2} -1 > \frac{1}{2}\left(\frac{f(0)}{2^{s + t - 1}} -2\right) - 1 = \frac{f(0)}{2^{s + t}} -2.\]
Assume now that we must perform moves of type \((ii)\) ad infinitum. Take \(n = ab - a - b\) and suppose \(b > a\) . Since each of the numbers \(n + 1,n + 2,\ldots ,n + b\) can be expressed in the form \(sa + tb\) , with \(0\leqslant s\leqslant b\) and \(0\leqslant t\leqslant a\) , after moves of type \((ii)\) have been performed \(2^{a + b + 1}\) times and we have to add a new pair of zeros, each \(f(n + k)\) , \(k = 1,2,\ldots ,b\) , is at least 2. In this case (1) yields inductively \(f(n + k)\geqslant 2\) for all \(k\geqslant 1\) . But this is absurd: after a finite number of moves, \(f\) cannot attain nonzero values at infinitely many points.
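The argument yields an explicit bound: once type \((ii)\) has been performed \(2^{a+b+1}\) times, it is never needed again. This is easy to sanity-check by simulation; a sketch (the smallest-repeated-value pairing rule is our arbitrary choice, not forced by the problem):

```python
from collections import Counter

def simulate(a, b, turns):
    """Run the process; prefer type (i), count type (ii) operations."""
    board, type_ii = Counter(), 0
    for _ in range(turns):
        pairs = [v for v, c in board.items() if c >= 2]
        if pairs:
            v = min(pairs)             # one arbitrary way to choose the pair
            board[v] -= 2
            board[v + a] += 1
            board[v + b] += 1
        else:
            board[0] += 2              # type (ii): write down two zeros
            type_ii += 1
    return type_ii
```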
Solution 2. We start by showing that the result of the process in the problem does not depend on the way the operations are performed. For that purpose, it is convenient to modify the process a bit.
Claim 1. Suppose that the board initially contains a finite number of nonnegative integers, and one starts performing type \((i)\) moves only. Assume that one had applied \(k\) moves which led to a final arrangement where no more type \((i)\) moves are possible. Then, if one starts from the same initial arrangement, performing type \((i)\) moves in an arbitrary fashion, then the process will necessarily stop at the same final arrangement.
Proof. Throughout this proof, all moves are supposed to be of type \((i)\) .
Induct on \(k\) ; the base case \(k = 0\) is trivial, since no moves are possible. Assume now that \(k \geqslant 1\) . Fix some canonical process, consisting of \(k\) moves \(M_{1}, M_{2}, \ldots , M_{k}\) , and reaching the final arrangement \(A\) . Consider any sample process \(m_{1}, m_{2}, \ldots\) starting with the same initial arrangement and proceeding as long as possible; clearly, it contains at least one move. We need to show that this process stops at \(A\) .
Let move \(m_{1}\) consist in replacing two copies of \(x\) with \(x + a\) and \(x + b\) . If move \(M_{1}\) does the same, we may apply the induction hypothesis to the arrangement appearing after \(m_{1}\) . Otherwise, the canonical process should still contain at least one move consisting in replacing \((x, x) \mapsto (x + a, x + b)\) , because the initial arrangement contains at least two copies of \(x\) , while the final one contains at most one such.
Let \(M_{i}\) be the first such move. Since the copies of \(x\) are indistinguishable and no other copy of \(x\) disappeared before \(M_{i}\) in the canonical process, the moves in this process can be permuted as \(M_{i}, M_{1}, \ldots , M_{i - 1}, M_{i + 1}, \ldots , M_{k}\) , without affecting the final arrangement. Now it suffices to perform the move \(m_{1} = M_{i}\) and apply the induction hypothesis as above. \(\square\)
Claim 2. Consider any process starting from the empty board, which involved exactly \(n\) moves of type \((ii)\) and led to a final arrangement where all the numbers are distinct. Assume that one starts with the board containing \(2n\) zeroes (as if \(n\) moves of type \((ii)\) were made in the beginning), applying type \((i)\) moves in an arbitrary way. Then this process will reach the same final arrangement.
Proof. Starting with the board with \(2n\) zeros, one may indeed model the first process mentioned in the statement of the claim, omitting the type \((ii)\) moves. This way, one reaches the same final arrangement. Now, Claim 1 yields that this final arrangement will be obtained when type \((i)\) moves are applied arbitrarily. \(\square\)
Claim 2 now allows us to reformulate the problem statement as follows: There exists an integer \(n\) such that, starting from \(2n\) zeroes, one may apply type \((i)\) moves indefinitely.
In order to prove this, we start with an obvious induction on \(s + t = k \geqslant 1\) to show that, starting with \(2^{s + t}\) zeros, we can get, at some point, all the numbers \(sa + tb\) with \(s + t = k\) simultaneously on the board.
Suppose now that \(a < b\) . Then, an appropriate use of separate groups of zeros allows us to get two copies of each of the numbers \(sa + tb\) , with \(1 \leqslant s, t \leqslant b\) .
Define \(N = ab - a - b\) , and notice that after representing each of the numbers \(N + k\) , \(1 \leqslant k \leqslant b\) , in the form \(sa + tb\) , \(1 \leqslant s, t \leqslant b\) , we can get, using enough zeros, the numbers \(N + 1\) , \(N + 2\) , ..., \(N + a\) and the numbers \(N + 1\) , \(N + 2\) , ..., \(N + b\) .
From now on we can perform only moves of type \((i)\) . Indeed, if \(n \geqslant N\) , the occurrence of the numbers \(n + 1\) , \(n + 2\) , ..., \(n + a\) and \(n + 1\) , \(n + 2\) , ..., \(n + b\) and the replacement \((n + 1, n + 1) \mapsto (n + b + 1, n + a + 1)\) leads to the occurrence of the numbers \(n + 2\) , \(n + 3\) , ..., \(n + a + 1\) and \(n + 2\) , \(n + 3\) , ..., \(n + b + 1\) .
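The first induction of this solution can be made concrete: from \(2^k\) zeros one reaches a board containing every value \(sa + tb\) with \(s + t = k\). A sketch (sorting so that equal values sit in adjacent pairs is our implementation detail; every value occurs an even number of times before each round, so the pairing is legitimate):

```python
def layer(k, a, b):
    """Values present after processing 2^k zeros for k rounds of type (i).

    Each round replaces every equal pair (v, v) by v + a and v + b, so
    after k rounds every value s*a + t*b with s + t = k appears."""
    values = [0] * (2 ** k)
    for _ in range(k):
        values.sort()
        # equal values now occupy adjacent pairs at even positions
        values = [w for v in values[::2] for w in (v + a, v + b)]
    return values
```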
Comment. The proofs of Claims 1 and 2 may be extended in order to show that in fact the number of moves in the canonical process is the same as in an arbitrary sample one.
|
IMOSL-2018-C7
|
Consider 2018 pairwise crossing circles, no three of which are concurrent. These circles subdivide the plane into regions bounded by circular edges that meet at vertices. Notice that there are an even number of vertices on each circle. For each circle, alternately colour the vertices on that circle red and blue. In doing so for each circle, every vertex is coloured twice — once for each of the two circles that cross at that point. If the two colourings agree at a vertex, then it is assigned that colour; otherwise, it becomes yellow. Show that, if some circle contains at least 2061 yellow points, then the vertices of some region are all yellow.
|
Solution 1. Letting \(n = 2018\) , we will show that, if every region has at least one non-yellow vertex, then every circle contains at most \(n + \lfloor \sqrt{n - 2}\rfloor - 2\) yellow points. In the case at hand, the latter equals \(2018 + 44 - 2 = 2060\) , contradicting the hypothesis.
Consider the natural geometric graph \(G\) associated with the configuration of \(n\) circles. Fix any circle \(C\) in the configuration, let \(k\) be the number of yellow points on \(C\) , and find a suitable lower bound for the total number of yellow vertices of \(G\) in terms of \(k\) and \(n\) . It turns out that \(k\) is even, and \(G\) has at least
\[k + 2\binom{k / 2}{2} +2\binom{n - k / 2 - 1}{2} = \frac{k^{2}}{2} -(n - 2)k + (n - 2)(n - 1) \quad (*)\]
yellow vertices. The proof hinges on the two lemmata below.
Lemma 1. Let two circles in the configuration cross at \(x\) and \(y\) . Then \(x\) and \(y\) are either both yellow or both non-yellow.
Proof. This is because the numbers of interior vertices on the four arcs that \(x\) and \(y\) determine on the two circles have like parities.
In particular, each circle in the configuration contains an even number of yellow vertices.
Lemma 2. If \(\overline{xy}\) , \(\overline{yz}\) , and \(\overline{zx}\) are circular arcs of three pairwise distinct circles in the configuration, then the number of yellow vertices in the set \(\{x, y, z\}\) is odd.
Proof. Let \(C_1\) , \(C_2\) , \(C_3\) be the three circles under consideration. Assume, without loss of generality, that \(C_2\) and \(C_3\) cross at \(x\) , \(C_3\) and \(C_1\) cross at \(y\) , and \(C_1\) and \(C_2\) cross at \(z\) . Let \(k_1\) , \(k_2\) , \(k_3\) be the numbers of interior vertices on the three circular arcs under consideration. Since each circle in the configuration, different from the \(C_i\) , crosses the cycle \(\overline{xy} \cup \overline{yz} \cup \overline{zx}\) at an even number of points (recall that no three circles are concurrent), and self-crossings are counted twice, the sum \(k_1 + k_2 + k_3\) is even.
Let \(Z_1\) be the colour \(z\) gets from \(C_1\) and define the other colours similarly. By the preceding, the number of bichromatic pairs in the list \((Z_1, Y_1)\) , \((X_2, Z_2)\) , \((Y_3, X_3)\) is odd. Since the total number of colour changes in the cycle \(Z_1 - Y_1 - Y_3 - X_3 - X_2 - Z_2 - Z_1\) is even, the number of bichromatic pairs in the list \((X_2, X_3)\) , \((Y_1, Y_3)\) , \((Z_1, Z_2)\) is odd, and the lemma follows.
We are now in a position to prove that \((*)\) bounds the total number of yellow vertices from below. Refer to Lemma 1 to infer that the \(k\) yellow vertices on \(C\) pair off to form the pairs of points where \(C\) is crossed by \(k / 2\) circles in the configuration. By Lemma 2, these circles cross pairwise to account for another \(2\binom{k / 2}{2}\) yellow vertices. Finally, the remaining \(n - k / 2 - 1\) circles in the configuration cross \(C\) at non-yellow vertices, by Lemma 1, and Lemma 2 applies again to show that these circles cross pairwise to account for yet another \(2\binom{n - k / 2 - 1}{2}\) yellow vertices. Consequently, there are at least \((*)\) yellow vertices.
Next, notice that \(G\) is a plane graph on \(n(n - 1)\) degree 4 vertices, having exactly \(2n(n - 1)\) edges and exactly \(n(n - 1) + 2\) faces (regions), the outer face inclusive (by Euler's formula for planar graphs).
Lemma 3. Each face of \(G\) has equally many red and blue vertices. In particular, each face has an even number of non-yellow vertices.
Proof. Trace the boundary of a face once in circular order, and consider the colours each vertex is assigned in the colouring of the two circles that cross at that vertex, to infer that colours of non-yellow vertices alternate. \(\square\)
Consequently, if each region has at least one non-yellow vertex, then it has at least two such. Since each vertex of \(G\) has degree 4, consideration of vertex-face incidences shows that \(G\) has at least \(n(n - 1) / 2 + 1\) non-yellow vertices, and hence at most \(n(n - 1) / 2 - 1\) yellow vertices. (In fact, Lemma 3 shows that there are at least \(n(n - 1) / 4 + 1 / 2\) red, respectively blue, vertices.)
Finally, recall the lower bound \((\ast)\) for the total number of yellow vertices in \(G\) , to write \(n(n - 1) / 2 - 1 \geq k^{2} / 2 - (n - 2)k + (n - 2)(n - 1)\) , and conclude that \(k \leq n + \lfloor \sqrt{n - 2} \rfloor - 2\) , as claimed in the first paragraph.
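Both the counting expression \((*)\) and the closing estimate can be confirmed numerically; a sketch:

```python
from math import comb, isqrt

def lhs(n, k):
    """Left-hand side of (*), for even k."""
    return k + 2 * comb(k // 2, 2) + 2 * comb(n - k // 2 - 1, 2)

def rhs(n, k):
    """Right-hand side of (*): k^2/2 - (n-2)k + (n-2)(n-1)."""
    return k * k // 2 - (n - 2) * k + (n - 2) * (n - 1)

# (*) as an identity for even k
identity_ok = all(lhs(n, k) == rhs(n, k)
                  for n in range(3, 60) for k in range(0, 2 * n, 2))

def max_k(n):
    """Largest even k with rhs(n, k) <= n(n-1)/2 - 1."""
    return max(k for k in range(0, 2 * n, 2)
               if rhs(n, k) <= n * (n - 1) // 2 - 1)

bound_ok = all(max_k(n) <= n + isqrt(n - 2) - 2 for n in range(3, 200))
```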
Solution 2. The first two lemmata in Solution 1 show that the circles in the configuration split into two classes: Consider any circle \(C\) along with all circles that cross \(C\) at yellow points to form one class; the remaining circles then form the other class. Lemma 2 shows that any pair of circles in the same class cross at yellow points; otherwise, they cross at non- yellow points.
Call the circles from the two classes white and black, respectively. Call a region yellow if its vertices are all yellow. Let \(w\) and \(b\) be the numbers of white and black circles, respectively; clearly, \(w + b = n\) . Assume that \(w \geq b\) , and that there is no yellow region. Clearly, \(b \geq 1\) , otherwise each region is yellow. The white circles subdivide the plane into \(w(w - 1) + 2\) larger regions — call them white. The white regions (or rather their boundaries) subdivide each black circle into black arcs. Since there are no yellow regions, each white region contains at least one black arc.
Consider any white region; let it contain \(t \geq 1\) black arcs. We claim that the number of points at which these \(t\) arcs cross does not exceed \(t - 1\) . To prove this, consider a multigraph whose vertices are these black arcs, two vertices being joined by an edge for each point at which the corresponding arcs cross. If this graph had more than \(t - 1\) edges, it would contain a cycle, since it has \(t\) vertices; this cycle would correspond to a closed contour formed by black sub- arcs, lying inside the region under consideration. This contour would, in turn, define at least one yellow region, which is impossible.
Let \(t_{i}\) be the number of black arcs inside the \(i^{\mathrm{th}}\) white region. The total number of black arcs is \(\textstyle \sum_{i}t_{i} = 2w b\) , and they cross at \(2\binom{b}{2} = b(b - 1)\) points. By the preceding,
\[b(b - 1) \leq \sum_{i = 1}^{w^{2} - w + 2}(t_{i} - 1) = \sum_{i = 1}^{w^{2} - w + 2}t_{i} - (w^{2} - w + 2) = 2w b - (w^{2} - w + 2),\]
or, equivalently, \((w - b)^{2} \leq w + b - 2 = n - 2\) , which is the case if and only if \(w - b \leq \lfloor \sqrt{n - 2} \rfloor\) . Consequently, \(b \leq w \leq \left(n + \lfloor \sqrt{n - 2} \rfloor\right) / 2\) , so there are at most \(2(w - 1) \leq n + \lfloor \sqrt{n - 2} \rfloor - 2\) yellow vertices on each circle — a contradiction.
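The final rearrangement used here, \(b(b-1) \leqslant 2wb - (w^2 - w + 2)\) if and only if \((w-b)^2 \leqslant w + b - 2\), can be double-checked over a small range:

```python
# Verify the equivalence of the two inequalities for small w, b.
ok = all((b * (b - 1) <= 2 * w * b - (w * w - w + 2))
         == ((w - b) ** 2 <= w + b - 2)
         for w in range(1, 80) for b in range(1, 80))
```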
|
IMOSL-2018-G1
|
Let \(A B C\) be an acute-angled triangle with circumcircle \(\Gamma\) . Let \(D\) and \(E\) be points on the segments \(A B\) and \(A C\) , respectively, such that \(A D = A E\) . The perpendicular bisectors of the segments \(B D\) and \(C E\) intersect the small arcs \(\overline{{A B}}\) and \(\overline{{A C}}\) at points \(F\) and \(G\) , respectively. Prove that \(D E\parallel F G\) .
|
Solution 1. In the sequel, all the considered arcs are small arcs.
Let \(P\) be the midpoint of the arc \(\overline{{B C}}\) . Then \(A P\) is the bisector of \(\angle B A C\) , hence, in the isosceles triangle \(A D E\) , \(A P\perp D E\) . So, the statement of the problem is equivalent to \(A P\perp F G\) .
In order to prove this, let \(K\) be the second intersection of \(\Gamma\) with \(F D\) . Then the triangle \(F B D\) is isosceles, therefore
\[\angle A K F = \angle A B F = \angle F D B = \angle A D K,\]
yielding \(A K = A D\) . In the same way, denoting by \(L\) the second intersection of \(\Gamma\) with \(G E\) , we get \(A L = A E\) . This shows that \(A K = A L\) .

Now \(\angle F B D = \angle F D B\) gives \(\overline{{A F}} = \overline{{B F}} +\overline{{A K}} = \overline{{B F}} +\overline{{A L}}\) , hence \(\overline{{B F}} = \overline{{L F}}\) . In a similar way, we get \(\overline{{C G}} = \overline{{G K}}\) . This yields
\[\angle (A P,F G) = \frac{\overline{{A F}} + \overline{{P G}}}{2} = \frac{\overline{{A L}} + \overline{{L F}} + \overline{{P C}} + \overline{{C G}}}{2} = \frac{\overline{{K L}} + \overline{{L B}} + \overline{{B C}} + \overline{{C K}}}{4} = 90^{\circ}.\]
Solution 2. Let \(Z = A B\cap F G\) , \(T = A C\cap F G\) . It suffices to prove that \(\angle A T Z = \angle A Z T\) . Let \(X\) be the point for which \(F X A D\) is a parallelogram. Then
\[\angle F X A = \angle F D A = 180^{\circ} - \angle F D B = 180^{\circ} - \angle F B D,\]
where in the last equality we used that \(F D = F B\) . It follows that the quadrilateral \(B F X A\) is cyclic, so \(X\) lies on \(\Gamma\) .

Analogously, if \(Y\) is the point for which \(GYAE\) is a parallelogram, then \(Y\) lies on \(\Gamma\) . So the quadrilateral \(X F G Y\) is cyclic and \(F X = A D = A E = G Y\) , hence \(X F G Y\) is an isosceles trapezoid.
Now, by \(X F\parallel A Z\) and \(Y G\parallel A T\) , it follows that \(\angle A T Z = \angle Y G F = \angle X F G = \angle A Z T\) .
Solution 3. As in the first solution, we prove that \(F G\perp A P\) , where \(P\) is the midpoint of the small arc \(\widehat{B C}\) .
Let \(O\) be the circumcentre of the triangle \(A B C\) , and let \(M\) and \(N\) be the midpoints of the small arcs \(\overline{{A B}}\) and \(\overline{{A C}}\) , respectively. Then \(O M\) and \(O N\) are the perpendicular bisectors of \(A B\) and \(A C\) , respectively.

The distance \(d\) between \(O M\) and the perpendicular bisector of \(B D\) is \(\frac{1}{2} A B - \frac{1}{2} B D = \frac{1}{2} A D\) , hence it is equal to the distance between \(O N\) and the perpendicular bisector of \(C E\) .
This shows that the isosceles trapezoid determined by the diameter \(\delta\) of \(\Gamma\) through \(M\) and the chord parallel to \(\delta\) through \(F\) is congruent to the isosceles trapezoid determined by the diameter \(\delta^{\prime}\) of \(\Gamma\) through \(N\) and the chord parallel to \(\delta^{\prime}\) through \(G\) . Therefore \(M F = N G\) , yielding \(M N\parallel F G\) .
Now
\[\angle (M N,A P) = \frac{1}{2}\left(\overline{{A M}} +\overline{{P C}} +\overline{{C N}}\right) = \frac{1}{4}\left(\overline{{A B}} +\overline{{B C}} +\overline{{C A}}\right) = 90^{\circ},\]
hence \(M N\perp A P\) , and the conclusion follows.
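The conclusion \(DE \parallel FG\) is also easy to confirm numerically for a sample acute triangle; a sketch (the specific vertex angles, the value of \(AD = AE\), and the arc-midpoint device for picking the correct circle intersection are our own choices):

```python
import math, cmath

def on_circle(deg):
    """Point of the unit circle at the given polar angle in degrees."""
    return cmath.exp(1j * math.radians(deg))

def bisector_meets_circle(P, Q, near):
    """Of the two intersections of the perpendicular bisector of PQ
    with the unit circle, return the one closer to `near`."""
    u = Q - P                                 # normal of the bisector
    c = (abs(Q) ** 2 - abs(P) ** 2) / 2      # line: Re(conj(u) * x) = c
    base = c * u / abs(u) ** 2               # foot of the origin on the line
    w = 1j * u / abs(u)                      # direction along the line
    t = math.sqrt(1 - abs(base) ** 2)
    return min(base + t * w, base - t * w, key=lambda z: abs(z - near))

A, B, C = on_circle(110), on_circle(210), on_circle(330)   # acute triangle
d = 0.5                                       # AD = AE = d
D = A + d * (B - A) / abs(B - A)
E = A + d * (C - A) / abs(C - A)
F = bisector_meets_circle(B, D, on_circle(160))   # on the small arc AB
G = bisector_meets_circle(C, E, on_circle(40))    # on the small arc AC
# DE is parallel to FG iff the cross product of the directions vanishes
parallel_defect = ((E - D) * (G - F).conjugate()).imag
```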
|
IMOSL-2018-G2
|
Let \(ABC\) be a triangle with \(AB = AC\) , and let \(M\) be the midpoint of \(BC\) . Let \(P\) be a point such that \(PB < PC\) and \(PA\) is parallel to \(BC\) . Let \(X\) and \(Y\) be points on the lines \(PB\) and \(PC\) , respectively, so that \(B\) lies on the segment \(PX\) , \(C\) lies on the segment \(PY\) , and \(\angle PXM = \angle PYM\) . Prove that the quadrilateral \(APXY\) is cyclic.
|
Solution. Since \(AB = AC\) , \(AM\) is the perpendicular bisector of \(BC\) , hence \(\angle PAM = \angle AMC = 90^\circ\) .

Now let \(Z\) be the common point of \(AM\) and the perpendicular through \(Y\) to \(PC\) (notice that \(Z\) lies on the ray \(AM\) beyond \(M\) ). We have \(\angle PAZ = \angle PYZ = 90^\circ\) . Thus the points \(P\) , \(A\) , \(Y\) , and \(Z\) are concyclic.
Since \(\angle CMZ = \angle CYZ = 90^\circ\) , the quadrilateral \(CYZM\) is cyclic, hence \(\angle CZM = \angle CYM\) . By the condition in the statement, \(\angle CYM = \angle BXM\) , and, by symmetry in \(ZM\) , \(\angle CZM = \angle BZM\) . Therefore, \(\angle BXM = \angle BZM\) . It follows that the points \(B\) , \(X\) , \(Z\) , and \(M\) are concyclic, hence \(\angle BXZ = 180^\circ - \angle BMZ = 90^\circ\) .
Finally, we have \(\angle PXZ = \angle PYZ = \angle PAZ = 90^\circ\) , hence the five points \(P\) , \(A\) , \(X\) , \(Y\) , \(Z\) are concyclic. In particular, the quadrilateral \(APXY\) is cyclic, as required.
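A numerical spot check of the statement: fix \(X\), solve for \(Y\) with \(\angle PYM = \angle PXM\) by bisection, and test that \(Y\) lies on the circle \(APX\). The concrete coordinates and the root-finding scheme are our own choices:

```python
import math

def angle(at, p, q):
    """Unsigned angle p-at-q."""
    vx, vy = p[0] - at[0], p[1] - at[1]
    wx, wy = q[0] - at[0], q[1] - at[1]
    dot = vx * wx + vy * wy
    norm = math.hypot(vx, vy) * math.hypot(wx, wy)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def beyond(base, through, s):
    """base + s*(base - through): on the ray from `through` past `base`."""
    return (base[0] + s * (base[0] - through[0]),
            base[1] + s * (base[1] - through[1]))

B, C, M, A = (-1.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 2.0)
P = (-3.0, 2.0)                  # PA parallel to BC, and PB < PC

X = beyond(B, P, 0.8)            # B lies on segment PX
target = angle(X, P, M)          # the angle PXM

def g(t):                        # angle PYM minus angle PXM, Y = C + t(C-P)
    return angle(beyond(C, P, t), P, M) - target

t = 1e-4                         # bracket a sign change of g, then bisect
while not (g(t) > 0 >= g(t * 1.1)):
    t *= 1.1
lo, hi = t, t * 1.1
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
Y = beyond(C, P, lo)

def circumcircle(p, q, r):
    ax, ay = p; bx, by = q; cx, cy = r
    den = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / den
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / den
    return (ux, uy), math.hypot(ax - ux, ay - uy)

O, R = circumcircle(A, P, X)
err = abs(math.hypot(Y[0] - O[0], Y[1] - O[1]) - R)   # Y on circle APX?
```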
Comment 1. Clearly, the key point \(Z\) from the solution above can be introduced in several different ways, e.g., as the second meeting point of the circle \(CMY\) and the line \(AM\) , or as the second meeting point of the circles \(CMY\) and \(BMX\) , etc.
For some of these definitions of \(Z\) , its location is not obvious. For instance, if \(Z\) is defined as a common point of \(AM\) and the perpendicular through \(X\) to \(PX\) , it is not clear that \(Z\) lies on the ray \(AM\) beyond \(M\) . To avoid such slippery details some more restrictions on the construction may be required.
Comment 2. Let us discuss a connection to the Miquel point of a cyclic quadrilateral. Set \(X' = MX \cap PC\) , \(Y' = MY \cap PB\) , and \(Q = XY \cap X'Y'\) (see the figure below).
We claim that \(BC \parallel PQ\) . (One way of proving this is the following. Notice that the quadruple of lines \(PX, PM, PY, PQ\) is harmonic, hence the quadruple \(B\) , \(M\) , \(C\) , \(PQ \cap BC\) of their intersection points with \(BC\) is harmonic. Since \(M\) is the midpoint of \(BC\) , \(PQ \cap BC\) is an ideal point, i.e., \(PQ \parallel BC\) .)
It follows from the given equality \(\angle PXM = \angle PYM\) that the quadrilateral \(XYX'Y'\) is cyclic. Note that \(A\) is the projection of \(M\) onto \(PQ\) . By a known description, \(A\) is the Miquel point for the sidelines \(XY, XY', X'Y, X'Y'\) . In particular, the circle \(PXY\) passes through \(A\) .

Comment 3. An alternative approach is the following. One can note that the (oriented) lengths of the segments \(CY\) and \(BX\) are both linear functions of a parameter \(t = \cot \angle PXM\) . As \(t\) varies, the intersection point \(S\) of the perpendicular bisectors of \(PX\) and \(PY\) traces a fixed line, thus the family of circles \(PXY\) has a fixed common point (other than \(P\) ). By checking particular cases, one can show that this fixed point is \(A\) .
Comment 4. The problem states that \(\angle PXM = \angle PYM\) implies that \(APXY\) is cyclic. The original submission claims that these two conditions are in fact equivalent. The Problem Selection Committee omitted the converse part, since it follows easily from the direct one, by reversing arguments.
|
IMOSL-2018-G3
|
A circle \(\omega\) of radius 1 is given. A collection \(T\) of triangles is called good, if the following conditions hold:
(i) each triangle from \(T\) is inscribed in \(\omega\) ;
(ii) no two triangles from \(T\) have a common interior point.
Determine all positive real numbers \(t\) such that, for each positive integer \(n\) , there exists a good collection of \(n\) triangles, each of perimeter greater than \(t\) .
|
Answer: \(t \in (0, 4]\) .
Solution. First, we show how to construct a good collection of \(n\) triangles, each of perimeter greater than 4. This will show that all \(t \leqslant 4\) satisfy the required conditions.
Construct inductively an \((n + 2)\) -gon \(B A_{1}A_{2}\ldots A_{n}C\) inscribed in \(\omega\) such that \(B C\) is a diameter, and \(B A_{1}A_{2}\) , \(B A_{2}A_{3}\) ,..., \(B A_{n - 1}A_{n}\) , \(B A_{n}C\) is a good collection of \(n\) triangles. For \(n = 1\) , take any triangle \(B A_{1}C\) inscribed in \(\omega\) such that \(B C\) is a diameter; its perimeter is greater than \(2B C = 4\) . To perform the inductive step, assume that the \((n + 2)\) -gon \(B A_{1}A_{2}\ldots A_{n}C\) is already constructed. Since \(A_{n}B + A_{n}C + BC > 4\) , one can choose a point \(A_{n + 1}\) on the small arc \(\overline{{C A_{n}}}\) , close enough to \(C\) , so that \(A_{n}B + A_{n}A_{n + 1} + B A_{n + 1}\) is still greater than 4. Thus each of these new triangles \(B A_{n}A_{n + 1}\) and \(B A_{n + 1}C\) has perimeter greater than 4, which completes the induction step.
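The inductive construction can be mimicked numerically, halving the arc toward \(C\) until the new triangle's perimeter exceeds 4; a sketch with our own choice of starting point \(A_1\):

```python
import math

def pt(theta):
    return (math.cos(theta), math.sin(theta))

def per(p, q, r):
    """Perimeter of the triangle pqr."""
    d = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
    return d(p, q) + d(q, r) + d(r, p)

def good_collection(n):
    """Triangles B A_1 A_2, ..., B A_{n-1} A_n, B A_n C on the unit circle."""
    thB, thC = math.pi, 0.0            # B = (-1, 0), C = (1, 0): a diameter
    angles = [thB, math.pi / 2]        # A_1 anywhere on the upper arc
    for _ in range(n - 1):
        new = (angles[-1] + thC) / 2   # push A_{n+1} toward C until it works
        while per(pt(thB), pt(angles[-1]), pt(new)) <= 4:
            new = (new + thC) / 2
        angles.append(new)
    angles.append(thC)
    return [(pt(thB), pt(angles[i]), pt(angles[i + 1]))
            for i in range(1, n + 1)]

tris = good_collection(6)
perims_ok = all(per(*t) > 4 for t in tris)
```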

We proceed by showing that no \(t > 4\) satisfies the conditions of the problem. To this end, we assume that there exists a good collection \(T\) of \(n\) triangles, each of perimeter greater than \(t\) , and then bound \(n\) from above.
Take \(\epsilon > 0\) such that \(t = 4 + 2\epsilon\) .
Claim. There exists a positive constant \(\sigma = \sigma (\epsilon)\) such that any triangle \(\Delta\) with perimeter \(2s \geqslant 4 + 2\epsilon\) , inscribed in \(\omega\) , has area \(S(\Delta)\) at least \(\sigma\) .
Proof. Let \(a\) , \(b\) , \(c\) be the side lengths of \(\Delta\) . Since \(\Delta\) is inscribed in \(\omega\) , each side has length at most 2. Therefore, \(s - a \geqslant (2 + \epsilon) - 2 = \epsilon\) . Similarly, \(s - b \geqslant \epsilon\) and \(s - c \geqslant \epsilon\) . By Heron's formula, \(S(\Delta) = \sqrt{s(s - a)(s - b)(s - c)} \geqslant \sqrt{(2 + \epsilon)\epsilon^3}\) . Thus we can set \(\sigma (\epsilon) = \sqrt{(2 + \epsilon)\epsilon^3}\) .
Now we see that the total area \(S\) of all triangles from \(T\) is at least \(n\sigma (\epsilon)\) . On the other hand, \(S\) does not exceed the area of the disk bounded by \(\omega\) . Thus \(n\sigma (\epsilon) \leqslant \pi\) , which means that \(n\) is bounded from above.
Comment 1. One may prove the Claim using the formula \(S = \frac{abc}{4R}\) instead of Heron's formula.
Comment 2. In the statement of the problem condition \((i)\) could be replaced by a weaker one: each triangle from \(T\) lies within \(\omega\) . This does not affect the solution above, but reduces the number of ways to prove the Claim.
|
IMOSL-2018-G4
|
A point \(T\) is chosen inside a triangle \(A B C\) . Let \(A_{1}\) , \(B_{1}\) , and \(C_{1}\) be the reflections of \(T\) in \(B C\) , \(C A\) , and \(A B\) , respectively. Let \(\Omega\) be the circumcircle of the triangle \(A_{1}B_{1}C_{1}\) . The lines \(A_{1}T\) , \(B_{1}T\) , and \(C_{1}T\) meet \(\Omega\) again at \(A_{2}\) , \(B_{2}\) , and \(C_{2}\) , respectively. Prove that the lines \(A A_{2}\) , \(B B_{2}\) , and \(C C_{2}\) are concurrent on \(\Omega\) .
|
Solution. By \(\measuredangle (\ell ,n)\) we always mean the directed angle between the lines \(\ell\) and \(n\) , taken modulo \(180^{\circ}\) .
Let \(C C_{2}\) meet \(\Omega\) again at \(K\) (as usual, if \(C C_{2}\) is tangent to \(\Omega\) , we set \(K = C_{2}\) ). We show that the line \(B B_{2}\) contains \(K\) ; similarly, \(A A_{2}\) will also pass through \(K\) . For this purpose, it suffices to prove that
\[\measuredangle (C_{2}C,C_{2}A_{1}) = \measuredangle (B_{2}B,B_{2}A_{1}). \quad (1)\]
By the problem condition, \(C B\) and \(C A\) are the perpendicular bisectors of \(T A_{1}\) and \(T B_{1}\) , respectively. Hence, \(C\) is the circumcentre of the triangle \(A_{1}T B_{1}\) . Therefore,
\[\measuredangle (C A_{1},C B) = \measuredangle (C B,C T) = \measuredangle (B_{1}A_{1},B_{1}T) = \measuredangle (B_{1}A_{1},B_{1}B_{2}).\]
In circle \(\Omega\) we have \(\measuredangle (B_{1}A_{1},B_{1}B_{2}) = \measuredangle (C_{2}A_{1},C_{2}B_{2})\) . Thus,
\[\measuredangle (C A_{1},C B) = \measuredangle (B_{1}A_{1},B_{1}B_{2}) = \measuredangle (C_{2}A_{1},C_{2}B_{2}). \quad (2)\]
Similarly, we get
\[\measuredangle (B A_{1},B C) = \measuredangle (C_{1}A_{1},C_{1}C_{2}) = \measuredangle (B_{2}A_{1},B_{2}C_{2}). \quad (3)\]
The two obtained relations yield that the triangles \(A_{1}B C\) and \(A_{1}B_{2}C_{2}\) are similar and equioriented, hence
\[\frac{A_{1}B_{2}}{A_{1}B} = \frac{A_{1}C_{2}}{A_{1}C}\quad \mathrm{and}\quad \measuredangle (A_{1}B,A_{1}C) = \measuredangle (A_{1}B_{2},A_{1}C_{2}).\]
The second equality may be rewritten as \(\measuredangle (A_{1}B,A_{1}B_{2}) = \measuredangle (A_{1}C,A_{1}C_{2})\) , so the triangles \(A_{1}B B_{2}\) and \(A_{1}C C_{2}\) are also similar and equioriented. This establishes (1).
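The concurrency on \(\Omega\) can be spot-checked numerically for one configuration (the coordinates are our own choice):

```python
def cross(a, b):
    return a.real * b.imag - a.imag * b.real

def line_meet(p1, p2, q1, q2):
    """Intersection point of lines p1p2 and q1q2 (complex points)."""
    s = cross(q1 - p1, q2 - q1) / cross(p2 - p1, q2 - q1)
    return p1 + s * (p2 - p1)

def reflect(p, b, c):
    """Reflection of p in the line bc."""
    d = (c - b) / abs(c - b)
    return b + d * d * (p - b).conjugate()

def circumcenter(p, q, r):
    # intersection of the perpendicular bisectors of pq and pr
    return line_meet((p + q) / 2, (p + q) / 2 + (q - p) * 1j,
                     (p + r) / 2, (p + r) / 2 + (r - p) * 1j)

def second_intersection(p, q, o, r):
    """Second point of line pq on the circle (o, r), given p on it."""
    v = q - p
    t = -2 * (v.conjugate() * (p - o)).real / abs(v) ** 2
    return p + t * v

A, B, C = complex(0, 3), complex(-2, 0), complex(4, 0)
T = complex(0.5, 1.0)                          # a point inside ABC
A1, B1, C1 = reflect(T, B, C), reflect(T, C, A), reflect(T, A, B)
O = circumcenter(A1, B1, C1)                   # centre of Omega
R = abs(A1 - O)
A2 = second_intersection(A1, T, O, R)
B2 = second_intersection(B1, T, O, R)
C2 = second_intersection(C1, T, O, R)
K = line_meet(A, A2, B, B2)                    # AA2 meets BB2 at K
err_line = abs(cross(C2 - C, K - C))           # K on line CC2?
err_circle = abs(abs(K - O) - R)               # K on Omega?
```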

Comment 1. In fact, the triangle \(A_{1}B C\) is an image of \(A_{1}B_{2}C_{2}\) under a spiral similarity centred at \(A_{1}\) ; in this case, the triangles \(A_{1}B B_{2}\) and \(A_{1}C C_{2}\) are also spirally similar with the same centre.
Comment 2. After obtaining (2) and (3), one can finish the solution in different ways.
For instance, introducing the point \(X = BC \cap B_2C_2\) , one gets from these relations that the 4- tuples \((A_1, B, B_2, X)\) and \((A_1, C, C_2, X)\) are both cyclic. Therefore, \(K\) is the Miquel point of the lines \(BB_2\) , \(CC_2\) , \(BC\) , and \(B_2C_2\) ; this yields that the meeting point of \(BB_2\) and \(CC_2\) lies on \(\Omega\) .
Yet another way is to show that the points \(A_1\) , \(B\) , \(C\) , and \(K\) are concyclic, as
\[\measuredangle (K C,K A_{1}) = \measuredangle (B_{2}C_{2},B_{2}A_{1}) = \measuredangle (B C,B A_{1}).\]
By symmetry, the second point \(K'\) of intersection of \(BB_2\) with \(\Omega\) is also concyclic to \(A_1\) , \(B\) , and \(C\) , hence \(K' = K\) .

Comment 3. The requirement that the common point of the lines \(AA_2\) , \(BB_2\) , and \(CC_2\) should lie on \(\Omega\) may seem to make the problem easier, since it suggests some approaches. On the other hand, there are also different ways of showing that the lines \(AA_2\) , \(BB_2\) , and \(CC_2\) are just concurrent.
In particular, the problem conditions yield that the lines \(A_2T\) , \(B_2T\) , and \(C_2T\) are perpendicular to the corresponding sides of the triangle \(ABC\) . One may show that the lines \(AT\) , \(BT\) , and \(CT\) are also perpendicular to the corresponding sides of the triangle \(A_2B_2C_2\) , i.e., the triangles \(ABC\) and \(A_2B_2C_2\) are orthologic, and their orthology centres coincide. It is known that such triangles are also perspective, i.e. the lines \(AA_2\) , \(BB_2\) , and \(CC_2\) are concurrent (in projective sense).
To show this mutual orthology, one may again apply angle chasing, but there are also other methods. Let \(A'\) , \(B'\) , and \(C'\) be the projections of \(T\) onto the sides of the triangle \(ABC\) . Then \(A_2T \cdot TA' = B_2T \cdot TB' = C_2T \cdot TC'\) , since all three products equal (minus) half the power of \(T\) with respect to \(\Omega\) . This means that \(A_2\) , \(B_2\) , and \(C_2\) are the poles of the sidelines of the triangle \(ABC\) with respect to some circle centred at \(T\) and having pure imaginary radius (in other words, the reflections of \(A_2\) , \(B_2\) , and \(C_2\) in \(T\) are the poles of those sidelines with respect to some regular circle centred at \(T\) ). Hence, dually, the vertices of the triangle \(ABC\) are also the poles of the sidelines of the triangle \(A_2B_2C_2\) .
|
IMOSL-2018-G5
|
Let \(ABC\) be a triangle with circumcircle \(\omega\) and incentre \(I\). A line \(\ell\) intersects the lines \(AI\), \(BI\), and \(CI\) at points \(D\), \(E\), and \(F\), respectively, distinct from the points \(A\), \(B\), \(C\), and \(I\). The perpendicular bisectors \(x\), \(y\), and \(z\) of the segments \(AD\), \(BE\), and \(CF\), respectively, determine a triangle \(\Theta\). Show that the circumcircle of the triangle \(\Theta\) is tangent to \(\omega\).
|
Preamble. Let \(X = y \cap z\) , \(Y = x \cap z\) , \(Z = x \cap y\) and let \(\Omega\) denote the circumcircle of the triangle \(X Y Z\) . Denote by \(X_{0}\) , \(Y_{0}\) , and \(Z_{0}\) the second intersection points of \(A I\) , \(B I\) and \(C I\) , respectively, with \(\omega\) . It is known that \(Y_{0}Z_{0}\) is the perpendicular bisector of \(A I\) , \(Z_{0}X_{0}\) is the perpendicular bisector of \(B I\) , and \(X_{0}Y_{0}\) is the perpendicular bisector of \(C I\) . In particular, the triangles \(X Y Z\) and \(X_{0}Y_{0}Z_{0}\) are homothetic, because their corresponding sides are parallel.
The solutions below mostly exploit the following approach. Consider the triangles \(X Y Z\) and \(X_{0}Y_{0}Z_{0}\) , or some other pair of homothetic triangles \(\Delta\) and \(\delta\) inscribed into \(\Omega\) and \(\omega\) , respectively. In order to prove that \(\Omega\) and \(\omega\) are tangent, it suffices to show that the centre \(T\) of the homothety taking \(\Delta\) to \(\delta\) lies on \(\omega\) (or \(\Omega\) ), or, in other words, to show that \(\Delta\) and \(\delta\) are perspective (i.e., the lines joining corresponding vertices are concurrent), with their perspector lying on \(\omega\) (or \(\Omega\) ).
We use directed angles throughout all the solutions.
Solution 1.
Claim 1. The reflections \(\ell_{a}\) , \(\ell_{b}\) and \(\ell_{c}\) of the line \(\ell\) in the lines \(x\) , \(y\) , and \(z\) , respectively, are concurrent at a point \(T\) which belongs to \(\omega\) .

Proof. Notice that \(\measuredangle(\ell_{b},\ell_{c}) = \measuredangle(\ell_{b},\ell) + \measuredangle(\ell,\ell_{c}) = 2\measuredangle(y,\ell) + 2\measuredangle(\ell,z) = 2\measuredangle(y,z)\). But \(y\perp BI\) and \(z\perp CI\) imply \(\measuredangle(y,z) = \measuredangle(BI,IC)\), so, since \(2\measuredangle(BI,IC) = \measuredangle(BA,AC)\), we obtain
\[\measuredangle(\ell_{b},\ell_{c}) = \measuredangle(BA,AC). \quad (1)\]
Since \(A\) is the reflection of \(D\) in \(x\) , \(A\) belongs to \(\ell_{a}\) ; similarly, \(B\) belongs to \(\ell_{b}\) . Then (1) shows that the common point \(T^{\prime}\) of \(\ell_{a}\) and \(\ell_{b}\) lies on \(\omega\) ; similarly, the common point \(T^{\prime \prime}\) of \(\ell_{c}\) and \(\ell_{b}\) lies on \(\omega\) .
If \(B\notin \ell_{a}\) and \(B\notin \ell_{c}\), then \(T^{\prime}\) and \(T^{\prime \prime}\) are both the second point of intersection of \(\ell_{b}\) and \(\omega\), hence they coincide. Otherwise, if, say, \(B\in \ell_{c}\), then \(\ell_{c} = BC\), so \(\measuredangle(BA,AC) = \measuredangle(\ell_{b},\ell_{c}) = \measuredangle(\ell_{b},BC)\), which shows that \(\ell_{b}\) is tangent to \(\omega\) at \(B\) and \(T^{\prime} = T^{\prime \prime} = B\). So \(T^{\prime}\) and \(T^{\prime \prime}\) coincide in all the cases, and the conclusion of the claim follows. \(\square\)
Now we prove that \(X\) , \(X_{0}\) , \(T\) are collinear. Denote by \(D_{b}\) and \(D_{c}\) the reflections of the point \(D\) in the lines \(y\) and \(z\) , respectively. Then \(D_{b}\) lies on \(\ell_{b}\) , \(D_{c}\) lies on \(\ell_{c}\) , and
\[\begin{array}{rl} \measuredangle(D_{b}X,XD_{c}) &= \measuredangle(D_{b}X,DX) + \measuredangle(DX,XD_{c}) = 2\measuredangle(y,DX) + 2\measuredangle(DX,z) = 2\measuredangle(y,z)\\ &= \measuredangle(BA,AC) = \measuredangle(BT,TC), \end{array}\]
hence the quadrilateral \(XD_{b}TD_{c}\) is cyclic. Notice also that since \(XD_{b} = XD = XD_{c}\), the points \(D, D_{b}, D_{c}\) lie on a circle with centre \(X\). Using the diameter \(D_{c}D_{c}^{\prime}\) of this circle yields \(\measuredangle(D_{b}D_{c},D_{c}X) = 90^{\circ} + \measuredangle(D_{b}D_{c}^{\prime},D_{c}^{\prime}X) = 90^{\circ} + \measuredangle(D_{b}D,DD_{c})\). Therefore,
\[\begin{array}{rl} \measuredangle(\ell_{b},XT) &= \measuredangle(D_{b}T,XT) = \measuredangle(D_{b}D_{c},D_{c}X) = 90^{\circ} + \measuredangle(D_{b}D,DD_{c})\\ &= 90^{\circ} + \measuredangle(BI,IC) = \measuredangle(BA,AI) = \measuredangle(BA,AX_{0}) = \measuredangle(BT,TX_{0}) = \measuredangle(\ell_{b},X_{0}T), \end{array}\]
so the points \(X\) , \(X_{0}\) , \(T\) are collinear. By a similar argument, \(Y\) , \(Y_{0}\) , \(T\) and \(Z\) , \(Z_{0}\) , \(T\) are collinear. As mentioned in the preamble, the statement of the problem follows.
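The tangency established above is also easy to probe numerically. The following sketch is our own illustration, not part of the solution: the triangle, the line \(\ell\), and the tolerance are arbitrary choices. It constructs one concrete configuration and checks that the circumcircle \(\Omega\) of \(XYZ\) is tangent to \(\omega\).

```python
import math

def inter(p, v, q, w):
    # intersection of the lines p + t*v and q + s*w
    den = v[0] * w[1] - v[1] * w[0]
    t = ((q[0] - p[0]) * w[1] - (q[1] - p[1]) * w[0]) / den
    return (p[0] + t * v[0], p[1] + t * v[1])

def circumcircle(a, b, c):
    # centre and radius of the circle through three points
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
          + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
          + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    return (ux, uy), math.dist((ux, uy), a)

def perp_bisector(p, q):
    # (point, direction) form of the perpendicular bisector of pq
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return mid, (-(q[1] - p[1]), q[0] - p[0])

A, B, C = (0.0, 0.0), (5.0, 0.0), (1.0, 4.0)
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
I = ((a * A[0] + b * B[0] + c * C[0]) / (a + b + c),
     (a * A[1] + b * B[1] + c * C[1]) / (a + b + c))    # incentre
O_w, R_w = circumcircle(A, B, C)                        # omega

lp, lv = (0.0, 1.0), (4.0, 1.0)                         # an arbitrary line l
D = inter(lp, lv, A, (I[0] - A[0], I[1] - A[1]))        # l ∩ AI
E = inter(lp, lv, B, (I[0] - B[0], I[1] - B[1]))        # l ∩ BI
F = inter(lp, lv, C, (I[0] - C[0], I[1] - C[1]))        # l ∩ CI

x = perp_bisector(A, D); y = perp_bisector(B, E); z = perp_bisector(C, F)
X = inter(*y, *z); Y = inter(*x, *z); Z = inter(*x, *y)
O_O, R_O = circumcircle(X, Y, Z)                        # Omega

d = math.dist(O_w, O_O)
# tangency: the distance of the centres equals |R - r| or R + r
assert min(abs(d - abs(R_O - R_w)), abs(d - (R_O + R_w))) < 1e-6
```

Such a check of course proves nothing for the general statement, but it is a quick guard against misreading the configuration.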
Comment 1. After proving Claim 1 one may proceed in another way. As it was shown, the reflections of \(\ell\) in the sidelines of \(XYZ\) are concurrent at \(T\). Thus \(\ell\) is the Steiner line of \(T\) with respect to \(\triangle XYZ\) (that is, the line containing the reflections \(T_{a},T_{b},T_{c}\) of \(T\) in the sidelines of \(XYZ\)). The properties of the Steiner line imply that \(T\) lies on \(\Omega\), and \(\ell\) passes through the orthocentre \(H\) of the triangle \(XYZ\).

Let \(H_{a}\) , \(H_{b}\) , and \(H_{c}\) be the reflections of the point \(H\) in the lines \(x\) , \(y\) , and \(z\) , respectively. Then the triangle \(H_{a}H_{b}H_{c}\) is inscribed in \(\Omega\) and homothetic to \(ABC\) (by an easy angle chasing). Since \(H_{a} \in \ell_{a}\) , \(H_{b} \in \ell_{b}\) , and \(H_{c} \in \ell_{c}\) , the triangles \(H_{a}H_{b}H_{c}\) and \(ABC\) form a required pair of triangles \(\Delta\) and \(\delta\) mentioned in the preamble.
Comment 2. The following observation shows how one may guess the description of the tangency point \(T\) from Solution 1.
Let us fix a direction and move the line \(\ell\) parallel to this direction with constant speed.
Then the points \(D\) , \(E\) , and \(F\) are moving with constant speeds along the lines \(AI\) , \(BI\) , and \(CI\) , respectively. In this case \(x\) , \(y\) , and \(z\) are moving with constant speeds, defining a family of homothetic triangles \(XYZ\) with a common centre of homothety \(T\) . Notice that the triangle \(X_{0}Y_{0}Z_{0}\) belongs to this family (for \(\ell\) passing through \(I\) ). We may specify the location of \(T\) considering the degenerate case when \(x\) , \(y\) , and \(z\) are concurrent. In this degenerate case all the lines \(x\) , \(y\) , \(z\) , \(\ell\) , \(\ell_{a}\) , \(\ell_{b}\) , \(\ell_{c}\) have a common point. Note that the lines \(\ell_{a}\) , \(\ell_{b}\) , \(\ell_{c}\) remain constant as \(\ell\) is moving (keeping its direction). Thus \(T\) should be the common point of \(\ell_{a}\) , \(\ell_{b}\) , and \(\ell_{c}\) , lying on \(\omega\) .
Solution 2. As mentioned in the preamble, it is sufficient to prove that the centre \(T\) of the homothety taking \(XYZ\) to \(X_{0}Y_{0}Z_{0}\) belongs to \(\omega\). Thus, it suffices to prove that \(\measuredangle(TX_{0},TY_{0}) = \measuredangle(Z_{0}X_{0},Z_{0}Y_{0})\), or, equivalently, \(\measuredangle(XX_{0},YY_{0}) = \measuredangle(Z_{0}X_{0},Z_{0}Y_{0})\).
Recall that \(YZ\) and \(Y_{0}Z_{0}\) are the perpendicular bisectors of \(AD\) and \(AI\), respectively. Then, the vector \(\overrightarrow{x}\) perpendicular to \(YZ\) and shifting the line \(Y_{0}Z_{0}\) to \(YZ\) is equal to \(\frac{1}{2}\overrightarrow{ID}\). Define the shifting vectors \(\overrightarrow{y} = \frac{1}{2}\overrightarrow{IE}\) and \(\overrightarrow{z} = \frac{1}{2}\overrightarrow{IF}\) similarly. Consider now the triangle \(UVW\) formed by the perpendiculars to \(AI\), \(BI\), and \(CI\) through \(D\), \(E\), and \(F\), respectively (see figure below). This is another triangle whose sides are parallel to the corresponding sides of \(XYZ\).
Claim 2. \(\overrightarrow{IU} = 2\overrightarrow{X_0X}\), \(\overrightarrow{IV} = 2\overrightarrow{Y_0Y}\), \(\overrightarrow{IW} = 2\overrightarrow{Z_0Z}\).
Proof. We prove one of the relations, the other proofs being similar. To prove the equality of two vectors it suffices to project them onto two non- parallel axes and check that their projections are equal.
The projection of \(\overrightarrow{X_0X}\) onto \(IB\) equals \(\vec{y}\), while the projection of \(\overrightarrow{IU}\) onto \(IB\) is \(\overrightarrow{IE} = 2\vec{y}\). The projections onto the other axis \(IC\) are \(\vec{z}\) and \(\overrightarrow{IF} = 2\vec{z}\), respectively. Then \(\overrightarrow{IU} = 2\overrightarrow{X_0X}\) follows. \(\square\)
Notice that the line \(\ell\) is the Simson line of the point \(I\) with respect to the triangle \(UVW\); thus \(U\), \(V\), \(W\), and \(I\) are concyclic. It follows from Claim 2 that \(\measuredangle(XX_{0},YY_{0}) = \measuredangle(IU,IV) = \measuredangle(WU,WV) = \measuredangle(Z_{0}X_{0},Z_{0}Y_{0})\), and we are done.

Solution 3. Let \(I_{a}\) , \(I_{b}\) , and \(I_{c}\) be the excentres of triangle \(ABC\) corresponding to \(A\) , \(B\) , and \(C\) , respectively. Also, let \(u\) , \(v\) , and \(w\) be the lines through \(D\) , \(E\) , and \(F\) which are perpendicular to \(AI\) , \(BI\) , and \(CI\) , respectively, and let \(UVW\) be the triangle determined by these lines, where \(u = VW\) , \(v = UW\) and \(w = UV\) (see figure above).
Notice that the line \(u\) is the reflection of \(I_{b}I_{c}\) in the line \(x\) , because \(u\) , \(x\) , and \(I_{b}I_{c}\) are perpendicular to \(AD\) and \(x\) is the perpendicular bisector of \(AD\) . Likewise, \(v\) and \(I_{a}I_{c}\) are reflections of each other in \(y\) , while \(w\) and \(I_{a}I_{b}\) are reflections of each other in \(z\) . It follows that \(X\) , \(Y\) , and \(Z\) are the midpoints of \(UI_{a}\) , \(VI_{b}\) and \(WI_{c}\) , respectively, and that the triangles \(UVW\) , \(XYZ\) and \(I_{a}I_{b}I_{c}\) are either translates of each other or homothetic with a common homothety centre.
Construct the points \(T\) and \(S\) such that the quadrilaterals \(UVIW\) , \(XYTZ\) and \(I_{a}I_{b}SI_{c}\) are homothetic. Then \(T\) is the midpoint of \(IS\) . Moreover, note that \(\ell\) is the Simson line of the point \(I\) with respect to the triangle \(UVW\) , hence \(I\) belongs to the circumcircle of the triangle \(UVW\) , therefore \(T\) belongs to \(\Omega\) .
Consider now the homothety or translation \(h_{1}\) that maps \(XYZT\) to \(I_{a}I_{b}I_{c}S\) and the homothety \(h_{2}\) with centre \(I\) and factor \(\frac{1}{2}\). Furthermore, let \(h = h_{2}\circ h_{1}\). The transformation \(h\) is either a homothety or a translation, and
\[h\left(T\right) = h_{2}\left(h_{1}\left(T\right)\right) = h_{2}\left(S\right) = T,\]
hence \(T\) is a fixed point of \(h\) . So, \(h\) is a homothety with centre \(T\) . Note that \(h_{2}\) maps the excentres \(I_{a}\) , \(I_{b}\) , \(I_{c}\) to \(X_{0}\) , \(Y_{0}\) , \(Z_{0}\) defined in the preamble. Thus the centre \(T\) of the homothety taking \(X Y Z\) to \(X_{0}Y_{0}Z_{0}\) belongs to \(\Omega\) , and this completes the proof.
|
IMOSL-2018-G6
|
A convex quadrilateral \(ABCD\) satisfies \(AB \cdot CD = BC \cdot DA\) . A point \(X\) is chosen inside the quadrilateral so that \(\angle XAB = \angle XCD\) and \(\angle XBC = \angle XDA\) . Prove that \(\angle AXB + \angle CXD = 180^\circ\) .
|
Solution 1. Let \(B'\) be the reflection of \(B\) in the internal angle bisector of \(\angle AXC\) , so that \(\angle AXB' = \angle CXB\) and \(\angle CXB' = \angle AXB\) . If \(X\) , \(D\) , and \(B'\) are collinear, then we are done. Now assume the contrary.
On the ray \(XB'\) take a point \(E\) such that \(XE \cdot XB = XA \cdot XC\) , so that \(\triangle AXE \sim \triangle BXC\) and \(\triangle CXE \sim \triangle BXA\) . We have \(\angle XCE + \angle XCD = \angle XBA + \angle XAB < 180^\circ\) and \(\angle XAE + \angle XAD = \angle XDA + \angle XAD < 180^\circ\) , which proves that \(X\) lies inside the angles \(\angle ECD\) and \(\angle EAD\) of the quadrilateral \(EADC\) . Moreover, \(X\) lies in the interior of exactly one of the two triangles \(EAD\) , \(ECD\) (and in the exterior of the other).

The similarities mentioned above imply \(XA \cdot BC = XB \cdot AE\) and \(XB \cdot CE = XC \cdot AB\) . Multiplying these equalities with the given equality \(AB \cdot CD = BC \cdot DA\) , we obtain \(XA \cdot CD \cdot CE = XC \cdot AD \cdot AE\) , or, equivalently,
\[\frac{XA \cdot DE}{AD \cdot AE} = \frac{XC \cdot DE}{CD \cdot CE}. \quad (*)\]
Lemma. Let \(PQR\) be a triangle, and let \(X\) be a point in the interior of the angle \(QPR\) such that \(\angle QPX = \angle PRX\) . Then \(\frac{PX \cdot QR}{PQ \cdot PR} < 1\) if and only if \(X\) lies in the interior of the triangle \(PQR\) .
Proof. The locus of points \(X\) with \(\angle QPX = \angle PRX\) lying inside the angle \(QPR\) is an arc \(\alpha\) of the circle \(\gamma\) through \(R\) tangent to \(PQ\) at \(P\) . Let \(\gamma\) intersect the line \(QR\) again at \(Y\) (if \(\gamma\) is tangent to \(QR\) , then set \(Y = R\) ). The similarity \(\triangle QPY \sim \triangle QRP\) yields \(PY = \frac{PQ \cdot PR}{QR}\) . Now it suffices to show that \(PX < PY\) if and only if \(X\) lies in the interior of the triangle \(PQR\) . Let \(m\) be a line through \(Y\) parallel to \(PQ\) . Notice that the points \(Z\) of \(\gamma\) satisfying \(PZ < PY\) are exactly those between the lines \(m\) and \(PQ\) .
Case 1: \(Y\) lies in the segment \(QR\) (see the left figure below).
In this case \(Y\) splits \(\alpha\) into two arcs \(\overline{PY}\) and \(\overline{YR}\) . The arc \(\overline{PY}\) lies inside the triangle \(PQR\) , and \(\overline{PY}\) lies between \(m\) and \(PQ\) , hence \(PX < PY\) for points \(X \in \overline{PY}\) . The other arc \(\overline{YR}\) lies outside triangle \(PQR\) , and \(\overline{YR}\) is on the opposite side of \(m\) than \(P\) , hence \(PX > PY\) for \(X \in \overline{YR}\) .
Case 2: \(Y\) lies on the ray \(QR\) beyond \(R\) (see the right figure below).
In this case the whole arc \(\alpha\) lies inside triangle \(PQR\) , and between \(m\) and \(PQ\) , thus \(PX < PY\) for all \(X \in \alpha\) . \(\square\)

Applying the Lemma (to \(\triangle EAD\) with the point \(X\), and to \(\triangle ECD\) with the point \(X\)), we obtain that exactly one of the two expressions \(\frac{XA \cdot DE}{AD \cdot AE}\) and \(\frac{XC \cdot DE}{CD \cdot CE}\) is less than 1, which contradicts \((*)\).
Comment 1. One may show that \(AB \cdot CD = XA \cdot XC + XB \cdot XD\) . We know that \(D, X, E\) are collinear and \(\angle DCE = \angle CXD = 180^\circ - \angle AXB\) . Therefore,
\[AB \cdot CD = XB \cdot \frac{\sin \angle AXB}{\sin \angle BAX} \cdot DE \cdot \frac{\sin \angle CED}{\sin \angle DCE} = XB \cdot DE.\]
Furthermore, \(XB \cdot DE = XB \cdot (XD + XE) = XB \cdot XD + XB \cdot XE = XB \cdot XD + XA \cdot XC\) .
Comment 2. For a convex quadrilateral \(ABCD\) with \(AB \cdot CD = BC \cdot DA\), it is known that \(\angle DAC + \angle ABD + \angle BCA + \angle CDB = 180^\circ\) (among other uses, it appeared as a problem at the Regional round of the All-Russian Olympiad in 2012), but it seems that there is no essential connection between this fact and the original problem.
Solution 2. The solution consists of two parts. In Part 1 we show that it suffices to prove that
\[\frac{XB}{XD} = \frac{AB}{CD} \quad (1)\]
and
\[\frac{XA}{XC} = \frac{DA}{BC}. \quad (2)\]
In Part 2 we establish these equalities.
Part 1. Using the sine law and applying (1) we obtain
\[\frac{\sin \angle AXB}{\sin \angle XAB} = \frac{AB}{XB} = \frac{CD}{XD} = \frac{\sin \angle CXD}{\sin \angle XCD},\]
so \(\sin \angle AXB = \sin \angle CXD\) by the problem conditions. Similarly, (2) yields \(\sin \angle DXA = \sin \angle BXC\) . If at least one of the pairs \((\angle AXB, \angle CXD)\) and \((\angle BXC, \angle DXA)\) consists of supplementary angles, then we are done. Otherwise, \(\angle AXB = \angle CXD\) and \(\angle DXA = \angle BXC\) . In this case \(X = AC \cap BD\) , and the problem conditions yield that \(ABCD\) is a parallelogram and hence a rhombus. In this last case the claim also holds.
Part 2. To prove the desired equality (1), invert \(ABCD\) at centre \(X\) with unit radius; the images of points are denoted by primes.
We have
\[\angle A'B'C' = \angle XB'A' + \angle XB'C' = \angle XAB + \angle XCB = \angle XCD + \angle XCB = \angle BCD.\]
Similarly, the corresponding angles of quadrilaterals \(ABCD\) and \(D'A'B'C'\) are equal. Moreover, we have
\[A^{\prime}B^{\prime}\cdot C^{\prime}D^{\prime} = \frac{A B}{X A\cdot X B}\cdot \frac{C D}{X C\cdot X D} = \frac{B C}{X B\cdot X C}\cdot \frac{D A}{X D\cdot X A} = B^{\prime}C^{\prime}\cdot D^{\prime}A^{\prime}.\]

Now we need the following Lemma.
Lemma. Assume that the corresponding angles of convex quadrilaterals \(XYZT\) and \(X^{\prime}Y^{\prime}Z^{\prime}T^{\prime}\) are equal, and that \(XY\cdot ZT = YZ\cdot TX\) and \(X^{\prime}Y^{\prime}\cdot Z^{\prime}T^{\prime} = Y^{\prime}Z^{\prime}\cdot T^{\prime}X^{\prime}\) . Then the two quadrilaterals are similar.
Proof. Take the quadrilateral \(XYZ_{1}T_{1}\) similar to \(X^{\prime}Y^{\prime}Z^{\prime}T^{\prime}\) and sharing the side \(XY\) with \(XYZT\) , such that \(Z_{1}\) and \(T_{1}\) lie on the rays \(YZ\) and \(XT\) , respectively, and \(Z_{1}T_{1}\parallel ZT\) . We need to prove that \(Z_{1} = Z\) and \(T_{1} = T\) . Assume the contrary. Without loss of generality, \(TX > XT_{1}\) . Let segments \(XZ\) and \(Z_{1}T_{1}\) intersect at \(U\) . We have
\[\frac{T_{1}X}{T_{1}Z_{1}} < \frac{T_{1}X}{T_{1}U} = \frac{TX}{ZT} = \frac{XY}{YZ} < \frac{XY}{YZ_{1}},\]
thus \(T_{1}X\cdot YZ_{1}< T_{1}Z_{1}\cdot XY\), which contradicts \(XY\cdot Z_{1}T_{1} = YZ_{1}\cdot T_{1}X\) (inherited from the similarity with \(X^{\prime}Y^{\prime}Z^{\prime}T^{\prime}\)). \(\square\)

It follows from the Lemma that the quadrilaterals \(ABCD\) and \(D'A'B'C'\) are similar, hence
\[\frac{BC}{AB} = \frac{A'B'}{D'A'} = \frac{AB}{XA\cdot XB}\cdot \frac{XD\cdot XA}{DA} = \frac{AB}{AD}\cdot \frac{XD}{XB},\]
and therefore
\[\frac{XB}{XD} = \frac{AB^{2}}{BC\cdot AD} = \frac{AB^{2}}{AB\cdot CD} = \frac{AB}{CD}.\]
We obtain (1), as desired; (2) is proved similarly.
Comment. Part 1 is an easy one, while Part 2 seems to be crucial. On the other hand, after the proof of the similarity \(D'A'B'C' \sim ABCD\) one may finish the solution in different ways, e.g., as follows. The similarity taking \(D'A'B'C'\) to \(ABCD\) maps \(X\) to the point \(X'\) isogonally conjugate to \(X\) with respect to \(ABCD\) (i.e. to the point \(X'\) inside \(ABCD\) such that \(\angle BAX = \angle DAX'\), \(\angle CBX = \angle ABX'\), \(\angle DCX = \angle BCX'\), \(\angle ADX = \angle CDX'\)). It is known that the required equality \(\angle AXB + \angle CXD = 180^{\circ}\) is among the conditions on a point \(X\) inside \(ABCD\) that are equivalent to the existence of its isogonal conjugate.
|
IMOSL-2018-G7
|
Let \(O\) be the circumcentre, and \(\Omega\) be the circumcircle of an acute- angled triangle \(ABC\) . Let \(P\) be an arbitrary point on \(\Omega\) , distinct from \(A\) , \(B\) , \(C\) , and their antipodes in \(\Omega\) . Denote the circumcentres of the triangles \(AOP\) , \(BOP\) , and \(COP\) by \(O_A\) , \(O_B\) , and \(O_C\) , respectively. The lines \(\ell_A\) , \(\ell_B\) , and \(\ell_C\) perpendicular to \(BC\) , \(CA\) , and \(AB\) pass through \(O_A\) , \(O_B\) , and \(O_C\) , respectively. Prove that the circumcircle of the triangle formed by \(\ell_A\) , \(\ell_B\) , and \(\ell_C\) is tangent to the line \(OP\) .
|
Solution. As usual, we denote the directed angle between the lines \(a\) and \(b\) by \(\measuredangle(a,b)\). We frequently use the fact that \(a_1 \perp a_2\) and \(b_1 \perp b_2\) yield \(\measuredangle(a_1, b_1) = \measuredangle(a_2, b_2)\).
Let the lines \(\ell_B\) and \(\ell_C\) meet at \(L_A\) ; define the points \(L_B\) and \(L_C\) similarly. Note that the sidelines of the triangle \(L_A L_B L_C\) are perpendicular to the corresponding sidelines of \(ABC\) . Points \(O_A\) , \(O_B\) , \(O_C\) are located on the corresponding sidelines of \(L_A L_B L_C\) ; moreover, \(O_A\) , \(O_B\) , \(O_C\) all lie on the perpendicular bisector of \(OP\) .

Claim 1. The points \(L_B\) , \(P\) , \(O_A\) , and \(O_C\) are concyclic.
Proof. Since \(O\) is symmetric to \(P\) in \(O_A O_C\) , we have
\[\measuredangle(O_{A}P,O_{C}P) = \measuredangle(O_{C}O,O_{A}O) = \measuredangle(CP,AP) = \measuredangle(CB,AB) = \measuredangle(O_{A}L_{B},O_{C}L_{B}).\]
Denote the circle through \(L_B\) , \(P\) , \(O_A\) , and \(O_C\) by \(\omega_B\) . Define the circles \(\omega_A\) and \(\omega_C\) similarly.
Claim 2. The circumcircle of the triangle \(L_A L_B L_C\) passes through \(P\) .
Proof. From cyclic quadruples of points in the circles \(\omega_B\) and \(\omega_C\) , we have
\[\begin{array}{rl} \measuredangle(L_{C}L_{A},L_{C}P) &= \measuredangle(L_{C}O_{B},L_{C}P) = \measuredangle(O_{A}O_{B},O_{A}P)\\ &= \measuredangle(O_{A}O_{C},O_{A}P) = \measuredangle(L_{B}O_{C},L_{B}P) = \measuredangle(L_{B}L_{A},L_{B}P). \end{array}\]
Claim 3. The points \(P\) , \(L_C\) , and \(C\) are collinear.
Proof. We have \(\measuredangle(PL_{C},L_{C}L_{A}) = \measuredangle(PL_{C},L_{C}O_{B}) = \measuredangle(PO_{A},O_{A}O_{B})\). Further, since \(O_{A}\) is the centre of the circle \(AOP\), \(\measuredangle(PO_{A},O_{A}O_{B}) = \measuredangle(PA,AO)\). As \(O\) is the circumcentre of the triangle \(PCA\), \(\measuredangle(PA,AO) = 90^{\circ} - \measuredangle(CA,CP) = \measuredangle(CP,L_{C}L_{A})\). We obtain \(\measuredangle(PL_{C},L_{C}L_{A}) = \measuredangle(CP,L_{C}L_{A})\), which shows that \(P \in CL_{C}\). \(\square\)
Similarly, the points \(P\) , \(L_{A}\) , \(A\) are collinear, and the points \(P\) , \(L_{B}\) , \(B\) are also collinear. Finally, the computation above also shows that
\[\measuredangle(OP,PL_{A}) = \measuredangle(PA,AO) = \measuredangle(PL_{C},L_{C}L_{A}),\]
which means that \(O P\) is tangent to the circle \(P L_{A}L_{B}L_{C}\) .
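The whole chain of claims can be sanity-checked numerically; the sketch below is our own illustration (the triangle coordinates, the parameter of \(P\), and the tolerance are arbitrary choices). It builds the configuration and verifies that the line \(OP\) is tangent to the circumcircle of the triangle formed by \(\ell_A\), \(\ell_B\), \(\ell_C\).

```python
import math

def inter(p, v, q, w):
    # intersection of the lines p + t*v and q + s*w
    den = v[0] * w[1] - v[1] * w[0]
    t = ((q[0] - p[0]) * w[1] - (q[1] - p[1]) * w[0]) / den
    return (p[0] + t * v[0], p[1] + t * v[1])

def circumcircle(a, b, c):
    # centre and radius of the circle through three points
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
          + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
          + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    return (ux, uy), math.dist((ux, uy), a)

def perp_dir(p, q):
    # a direction perpendicular to the line pq
    return (-(q[1] - p[1]), q[0] - p[0])

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)              # an acute triangle
O, R = circumcircle(A, B, C)
P = (O[0] + R * math.cos(0.7), O[1] + R * math.sin(0.7))  # a generic point of Omega

OA, _ = circumcircle(A, O, P)
OB, _ = circumcircle(B, O, P)
OC, _ = circumcircle(C, O, P)
lA = (OA, perp_dir(B, C))     # l_A through O_A, perpendicular to BC
lB = (OB, perp_dir(C, A))
lC = (OC, perp_dir(A, B))

LA = inter(*lB, *lC); LB = inter(*lA, *lC); LC = inter(*lA, *lB)
Q, rho = circumcircle(LA, LB, LC)

v = (P[0] - O[0], P[1] - O[1])
dist = abs((Q[0] - O[0]) * v[1] - (Q[1] - O[1]) * v[0]) / math.hypot(*v)
assert abs(dist - rho) < 1e-6   # line OP is tangent to circle (L_A L_B L_C)
```

Tangency is tested as "distance from the centre \(Q\) to the line \(OP\) equals the radius", which avoids locating the point of contact explicitly.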
Comment 1. The proof of Claim 2 may be replaced by the following remark: since \(P\) belongs to the circles \(\omega_{A}\) and \(\omega_{C}\) , \(P\) is the Miquel point of the four lines \(\ell_{A}\) , \(\ell_{B}\) , \(\ell_{C}\) , and \(O_{A}O_{B}O_{C}\) .
Comment 2. Claims 2 and 3 can be proved in several different ways and, in particular, in the reverse order.
Claim 3 implies that the triangles \(A B C\) and \(L_{A}L_{B}L_{C}\) are perspective with perspector \(P\) . Claim 2 can be derived from this observation using spiral similarity. Consider the centre \(Q\) of the spiral similarity that maps \(A B C\) to \(L_{A}L_{B}L_{C}\) . From known spiral similarity properties, the points \(L_{A},L_{B},P,Q\) are concyclic, and so are \(L_{A},L_{C},P,Q\) .
Comment 3. The final conclusion can also be proved in terms of spiral similarity: the spiral similarity with centre \(Q\) located on the circle \(A B C\) maps the circle \(A B C\) to the circle \(P L_{A}L_{B}L_{C}\). Thus these circles are orthogonal.
Comment 4. Notice that the homothety with centre \(O\) and ratio 2 takes \(O_{A}\) to \(A^{\prime}\) that is the common point of tangents to \(\Omega\) at \(A\) and \(P\) . Similarly, let this homothety take \(O_{B}\) to \(B^{\prime}\) and \(O_{C}\) to \(C^{\prime}\) . Let the tangents to \(\Omega\) at \(B\) and \(C\) meet at \(A^{\prime \prime}\) , and define the points \(B^{\prime \prime}\) and \(C^{\prime \prime}\) similarly. Now, replacing labels \(O\) with \(I\) , \(\Omega\) with \(\omega\) , and swapping labels \(A \leftrightarrow A^{\prime \prime}\) , \(B \leftrightarrow B^{\prime \prime}\) , \(C \leftrightarrow C^{\prime \prime}\) we obtain the following
Reformulation. Let \(\omega\) be the incircle, and let \(I\) be the incentre of a triangle \(A B C\) . Let \(P\) be a point of \(\omega\) (other than the points of contact of \(\omega\) with the sides of \(A B C\) ). The tangent to \(\omega\) at \(P\) meets the lines \(A B\) , \(B C\) , and \(C A\) at \(A^{\prime}\) , \(B^{\prime}\) , and \(C^{\prime}\) , respectively. Line \(\ell_{A}\) parallel to the internal angle bisector of \(\angle B A C\) passes through \(A^{\prime}\) ; define lines \(\ell_{B}\) and \(\ell_{C}\) similarly. Prove that the line \(I P\) is tangent to the circumcircle of the triangle formed by \(\ell_{A}\) , \(\ell_{B}\) , and \(\ell_{C}\) .
Though this formulation is equivalent to the original one, it seems more challenging, since the point of contact is now "hidden".
|
IMOSL-2018-N1
|
Determine all pairs \((n,k)\) of distinct positive integers such that there exists a positive integer \(s\) for which the numbers of divisors of \(s n\) and of \(s k\) are equal.
|
Answer: All pairs \((n,k)\) such that \(n\nmid k\) and \(k\nmid n\).
Solution. As usual, the number of divisors of a positive integer \(n\) is denoted by \(d(n)\) . If \(n = \prod_{i}p_{i}^{\alpha_{i}}\) is the prime factorisation of \(n\) , then \(d(n) = \prod_{i}(\alpha_{i} + 1)\) .
We start by showing that one cannot find any suitable number \(s\) if \(k\mid n\) or \(n\mid k\) (and \(k\neq n\) ). Suppose that \(n\mid k\) , and choose any positive integer \(s\) . Then the set of divisors of \(s n\) is a proper subset of that of \(s k\) , hence \(d(s n)< d(s k)\) . Therefore, the pair \((n,k)\) does not satisfy the problem requirements. The case \(k\mid n\) is similar.
Now assume that \(n\nmid k\) and \(k\nmid n\) . Let \(p_{1},\ldots ,p_{t}\) be all primes dividing \(n k\) , and consider the prime factorisations
\[n = \prod_{i = 1}^{t}p_{i}^{\alpha_{i}}\quad \mathrm{and}\quad k = \prod_{i = 1}^{t}p_{i}^{\beta_{i}}.\]
It is reasonable to search for the number \(s\) having the form
\[s = \prod_{i = 1}^{t}p_{i}^{\gamma_{i}}.\]
The (nonnegative integer) exponents \(\gamma_{i}\) should be chosen so as to satisfy
\[\frac{d(s n)}{d(s k)} = \prod_{i = 1}^{t}\frac{\alpha_{i} + \gamma_{i} + 1}{\beta_{i} + \gamma_{i} + 1} = 1. \quad (1)\]
First of all, if \(\alpha_{i} = \beta_{i}\) for some \(i\) , then, regardless of the value of \(\gamma_{i}\) , the corresponding factor in (1) equals 1 and does not affect the product. So we may assume that there is no such index \(i\) . For the other factors in (1), the following lemma is useful.
Lemma. Let \(\alpha >\beta\) be nonnegative integers. Then, for every integer \(M\geq \beta +1\) , there exists a nonnegative integer \(\gamma\) such that
\[\frac{\alpha + \gamma + 1}{\beta + \gamma + 1} = 1 + \frac{1}{M} = \frac{M + 1}{M}.\]
Proof.
\[\frac{\alpha + \gamma + 1}{\beta + \gamma + 1} = 1 + \frac{1}{M}\iff \frac{\alpha - \beta}{\beta + \gamma + 1} = \frac{1}{M}\iff \gamma = M(\alpha -\beta) - (\beta +1)\geq 0.\]
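The formula for \(\gamma\) in the lemma is easy to sanity-check with exact rational arithmetic; the triples \((\alpha, \beta, M)\) below are arbitrary test cases of ours.

```python
from fractions import Fraction

def gamma(alpha, beta, M):
    # the exponent produced in the lemma's proof
    return M * (alpha - beta) - (beta + 1)

for alpha, beta in [(3, 1), (5, 0), (7, 6)]:
    for M in range(beta + 1, beta + 6):       # the lemma needs M >= beta + 1
        g = gamma(alpha, beta, M)
        assert g >= 0                          # gamma is a valid exponent
        assert Fraction(alpha + g + 1, beta + g + 1) == Fraction(M + 1, M)
```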
Now we can finish the solution. Without loss of generality, there exists an index \(u\) such that \(\alpha_{i} > \beta_{i}\) for \(i = 1,2,\ldots ,u\) , and \(\alpha_{i}< \beta_{i}\) for \(i = u + 1,\ldots ,t\) . The conditions \(n\nmid k\) and \(k\nmid n\) mean that \(1\leqslant u\leqslant t - 1\) .
Choose an integer \(X\) greater than all the \(\alpha_{i}\) and \(\beta_{i}\) . By the lemma, we can define the numbers \(\gamma_{i}\) so as to satisfy
\[\frac{\alpha_{i} + \gamma_{i} + 1}{\beta_{i} + \gamma_{i} + 1} = \frac{uX + i}{uX + i - 1}\qquad \text{for } i = 1,\ldots,u,\]
\[\frac{\beta_{u + i} + \gamma_{u + i} + 1}{\alpha_{u + i} + \gamma_{u + i} + 1} = \frac{(t - u)X + i}{(t - u)X + i - 1}\qquad \text{for } i = 1,\ldots,t - u.\]
Then we will have
\[\frac{d(s n)}{d(s k)} = \prod_{i = 1}^{u}\frac{u X + i}{u X + i - 1}\cdot \prod_{i = 1}^{t - u}\frac{(t - u)X + i - 1}{(t - u)X + i} = \frac{u(X + 1)}{u X}\cdot \frac{(t - u)X}{(t - u)(X + 1)} = 1,\]
as required.
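A brute-force search (our own illustration; the test pairs and the search bound are arbitrary) confirms the answer on small cases: a suitable \(s\) exists exactly when neither of \(n\), \(k\) divides the other.

```python
def d(m):
    # number of divisors, by trial division
    return sum(1 for t in range(1, m + 1) if m % t == 0)

def works(n, k, bound=200):
    # is there some s < bound with d(sn) = d(sk)?
    return any(d(s * n) == d(s * k) for s in range(1, bound))

assert works(4, 6)       # e.g. s = 3 gives d(12) = d(18) = 6
assert works(18, 75)     # another incomparable pair (s = 20 works)
assert not works(2, 4)   # 2 | 4, so d(2s) < d(4s) for every s
assert not works(3, 9)
```

Of course the search can only refute candidates up to the chosen bound; the "only if" direction is exactly the divisibility argument above.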
Comment. The lemma can be used in various ways, in order to provide a suitable value of \(s\) . In particular, one may apply induction on the number \(t\) of prime factors, using identities like
\[\frac{n}{n - 1} = \frac{n^2}{n^2 - 1}\cdot \frac{n + 1}{n}.\]
|
IMOSL-2018-N2
|
Let \(n > 1\) be a positive integer. Each cell of an \(n \times n\) table contains an integer. Suppose that the following conditions are satisfied:
\((i)\) Each number in the table is congruent to 1 modulo \(n\) ;
\((ii)\) The sum of numbers in any row, as well as the sum of numbers in any column, is congruent to \(n\) modulo \(n^2\) .
Let \(R_{i}\) be the product of the numbers in the \(i^{\mathrm{th}}\) row, and \(C_{j}\) be the product of the numbers in the \(j^{\mathrm{th}}\) column. Prove that the sums \(R_{1} + \dots +R_{n}\) and \(C_{1} + \dots +C_{n}\) are congruent modulo \(n^{4}\) .
|
Solution 1. Let \(A_{i,j}\) be the entry in the \(i^{\mathrm{th}}\) row and the \(j^{\mathrm{th}}\) column; let \(P\) be the product of all \(n^2\) entries. For convenience, denote \(a_{i,j} = A_{i,j} - 1\) and \(r_i = R_i - 1\) . We show that
\[\sum_{i = 1}^{n}R_{i} \equiv (n - 1) + P\pmod{n^{4}}. \quad (1)\]
Due to symmetry of the problem conditions, the sum of all the \(C_{j}\) is also congruent to \((n - 1) + P\) modulo \(n^4\) , whence the conclusion.
By condition \((i)\) , the number \(n\) divides \(a_{i,j}\) for all \(i\) and \(j\) . So, every product of at least two of the \(a_{i,j}\) is divisible by \(n^2\) , hence
\[R_{i} = \prod_{j = 1}^{n}(1 + a_{i,j}) = 1 + \sum_{j = 1}^{n}a_{i,j} + \sum_{1\leqslant j_{1}< j_{2}\leqslant n}a_{i,j_{1}}a_{i,j_{2}} + \dots \equiv 1 + \sum_{j = 1}^{n}a_{i,j}\equiv 1 - n + \sum_{j = 1}^{n}A_{i,j}\pmod{n^{2}}\]
for every index \(i\) . Using condition \((ii)\) , we obtain \(R_{i} \equiv 1\) (mod \(n^2\) ), and so \(n^2 \mid r_i\) .
Therefore, every product of at least two of the \(r_i\) is divisible by \(n^4\) . Repeating the same argument, we obtain
\[P = \prod_{i = 1}^{n}R_{i} = \prod_{i = 1}^{n}(1 + r_{i})\equiv 1 + \sum_{i = 1}^{n}r_{i}\pmod{n^{4}},\]
whence
\[\sum_{i = 1}^{n}R_{i} = n + \sum_{i = 1}^{n}r_{i}\equiv n + (P - 1)\pmod{n^{4}},\]
as desired.
Comment. The original version of the problem statement contained also the condition
\((iii)\) The product of all the numbers in the table is congruent to 1 modulo \(n^4\) .
This condition appears to be superfluous, so it was omitted.
Solution 2. We present a more straightforward (though lengthier) way to establish (1). We also use the notation of \(a_{i,j}\) .
By condition \((i)\) , all the \(a_{i,j}\) are divisible by \(n\) . Therefore, we have
\[P = \prod_{i = 1}^{n}\prod_{j = 1}^{n}(1 + a_{i,j})\equiv 1 + \sum_{(i,j)}a_{i,j} + \sum_{(i_{1},j_{1}),(i_{2},j_{2})}a_{i_{1},j_{1}}a_{i_{2},j_{2}}\] \[\qquad +\sum_{(i_{1},j_{1}),(i_{2},j_{2}),(i_{3},j_{3})}a_{i_{1},j_{1}}a_{i_{2},j_{2}}a_{i_{3},j_{3}}\pmod{n^{4}},\]
where the last two sums are taken over all unordered pairs/triples of pairwise different pairs \((i,j)\) ; such conventions are applied throughout the solution.
Similarly,
\[\sum_{i = 1}^{n}R_{i} = \sum_{i = 1}^{n}\prod_{j = 1}^{n}(1 + a_{i,j})\equiv n + \sum_{i}\sum_{j}a_{i,j} + \sum_{i}\sum_{j_{1},j_{2}}a_{i,j_{1}}a_{i,j_{2}} + \sum_{i}\sum_{j_{1},j_{2},j_{3}}a_{i,j_{1}}a_{i,j_{2}}a_{i,j_{3}}\pmod {n^{4}}.\]
Therefore,
\[P + (n - 1) - \sum_{i}R_{i}\equiv \sum_{\substack{(i_{1},j_{1}),(i_{2},j_{2})\\ i_{1}\neq i_{2}}}a_{i_{1},j_{1}}a_{i_{2},j_{2}} + \sum_{\substack{(i_{1},j_{1}),(i_{2},j_{2}),(i_{3},j_{3})\\ i_{1},i_{2},i_{3}\ \text{pairwise distinct}}}a_{i_{1},j_{1}}a_{i_{2},j_{2}}a_{i_{3},j_{3}} + \sum_{\substack{(i_{1},j_{1}),(i_{2},j_{2}),(i_{3},j_{3})\\ i_{1}\neq i_{2} = i_{3}}}a_{i_{1},j_{1}}a_{i_{2},j_{2}}a_{i_{3},j_{3}}\pmod {n^{4}}.\]
We show that in fact each of the three sums appearing in the right- hand part of this congruence is divisible by \(n^{4}\) ; this yields (1). Denote those three sums by \(\Sigma_{1}\) , \(\Sigma_{2}\) , and \(\Sigma_{3}\) in order of appearance. Recall that by condition \((ii)\) we have
\[\sum_{j}a_{i,j}\equiv 0\pmod{n^{2}}\qquad \mathrm{for~all~indices~}i.\]
For every two indices \(i_{1}< i_{2}\) we have
\[\sum_{j_{1}}\sum_{j_{2}}a_{i_{1},j_{1}}a_{i_{2},j_{2}} = \left(\sum_{j_{1}}a_{i_{1},j_{1}}\right)\cdot \left(\sum_{j_{2}}a_{i_{2},j_{2}}\right)\equiv 0\pmod{n^{4}},\]
since each of the two factors is divisible by \(n^{2}\) . Summing over all pairs \((i_{1},i_{2})\) we obtain \(n^{4}\mid \Sigma_{1}\) . Similarly, for every three indices \(i_{1}< i_{2}< i_{3}\) we have
\[\sum_{j_{1}}\sum_{j_{2}}\sum_{j_{3}}a_{i_{1},j_{1}}a_{i_{2},j_{2}}a_{i_{3},j_{3}} = \left(\sum_{j_{1}}a_{i_{1},j_{1}}\right)\cdot \left(\sum_{j_{2}}a_{i_{2},j_{2}}\right)\cdot \left(\sum_{j_{3}}a_{i_{3},j_{3}}\right)\]
which is divisible even by \(n^{6}\) . Hence \(n^{4}\mid \Sigma_{2}\) .
Finally, for all indices \(i_{1}\neq i_{2} = i_{3}\) and \(j_{2}< j_{3}\) we have
\[a_{i_{2},j_{2}}\cdot a_{i_{2},j_{3}}\cdot \sum_{j_{1}}a_{i_{1},j_{1}}\equiv 0\pmod{n^{4}},\]
since the three factors are divisible by \(n\) , \(n\) , and \(n^{2}\) , respectively. Summing over all 4- tuples of indices \((i_{1},i_{2},j_{2},j_{3})\) we get \(n^{4}\mid \Sigma_{3}\) .
|
IMOSL-2018-N3
|
Define the sequence \(a_{0},a_{1},a_{2},\ldots\) by \(a_{n} = 2^{n} + 2^{\lfloor n / 2\rfloor}\) . Prove that there are infinitely many terms of the sequence which can be expressed as a sum of (two or more) distinct terms of the sequence, as well as infinitely many of those which cannot be expressed in such a way.
|
Solution 1. Call a nonnegative integer representable if it equals the sum of several (possibly 0 or 1) distinct terms of the sequence. We say that two nonnegative integers \(b\) and \(c\) are equivalent (written as \(b \sim c\) ) if they are either both representable or both non- representable.
One can easily compute
\[S_{n - 1}:= a_{0} + \dots +a_{n - 1} = 2^{n} + 2^{\lceil n / 2\rceil} + 2^{\lfloor n / 2\rfloor} - 3.\]
Indeed, we have \(S_{n} - S_{n - 1} = 2^{n} + 2^{\lfloor n / 2\rfloor} = a_{n}\), so the formula follows by induction on \(n\). In particular, \(S_{2k - 1} = 2^{2k} + 2^{k + 1} - 3\).
Note that, if \(n \geq 3\), then \(2^{\lceil n / 2\rceil} \geq 2^{2} > 3\), so
\[S_{n - 1} = 2^{n} + 2^{\lceil n / 2\rceil} + 2^{\lfloor n / 2\rfloor} - 3 > 2^{n} + 2^{\lfloor n / 2\rfloor} = a_{n}.\]
Also notice that \(S_{n - 1} - a_{n} = 2^{\lceil n / 2\rceil} - 3 < a_{n}\).
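The closed form for \(S_{n-1}\) is easy to confirm by direct summation; the following Python snippet (a sanity check only, not part of the argument) verifies it for small \(n\):

```python
def a(n):
    return 2**n + 2**(n // 2)

def S(n):  # S_n = a_0 + ... + a_n
    return sum(a(i) for i in range(n + 1))

# closed form: S_{n-1} = 2^n + 2^{ceil(n/2)} + 2^{floor(n/2)} - 3
for n in range(1, 40):
    assert S(n - 1) == 2**n + 2**((n + 1) // 2) + 2**(n // 2) - 3
print("closed form verified for 1 <= n < 40")  # e.g. S_1 = 5, S_11 = 4221
```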
The main tool of the solution is the following claim.
Claim 1. Assume that \(b\) is a positive integer such that \(S_{n - 1} - a_{n} < b < a_{n}\) for some \(n \geq 3\) . Then \(b \sim S_{n - 1} - b\) .
Proof. As seen above, we have \(S_{n - 1} > a_{n}\) . Denote \(c = S_{n - 1} - b\) ; then \(S_{n - 1} - a_{n} < c < a_{n}\) , so the roles of \(b\) and \(c\) are symmetrical.
Assume that \(b\) is representable. The representation cannot contain \(a_{i}\) with \(i \geq n\) , since \(b < a_{n}\) . So \(b\) is the sum of some subset of \(\{a_{0}, a_{1}, \ldots , a_{n - 1}\}\) ; then \(c\) is the sum of the complement. The converse is obtained by swapping \(b\) and \(c\) . \(\square\)
We also need the following version of this claim.
Claim 2. For any \(n \geq 3\) , the number \(a_{n}\) can be represented as a sum of two or more distinct terms of the sequence if and only if \(S_{n - 1} - a_{n} = 2^{\lceil n / 2\rceil} - 3\) is representable.
Proof. Denote \(c = S_{n - 1} - a_{n} < a_{n}\) . If \(a_{n}\) satisfies the required condition, then it is the sum of some subset of \(\{a_{0}, a_{1}, \ldots , a_{n - 1}\}\) ; then \(c\) is the sum of the complement. Conversely, if \(c\) is representable, then its representation consists only of the numbers from \(\{a_{0}, \ldots , a_{n - 1}\}\) , so \(a_{n}\) is the sum of the complement. \(\square\)
By Claim 2, in order to prove the problem statement, it suffices to find infinitely many representable numbers of the form \(2^{t} - 3\) , as well as infinitely many non- representable ones.
Claim 3. For every \(t \geq 3\) , we have \(2^{t} - 3 \sim 2^{4t - 6} - 3\) , and \(2^{4t - 6} - 3 > 2^{t} - 3\) .
Proof. The inequality follows from \(t \geq 3\) . In order to prove the equivalence, we apply Claim 1 twice in the following manner.
First, since \(S_{2t - 3} - a_{2t - 2} = 2^{t - 1} - 3 < 2^{t} - 3 < 2^{2t - 2} + 2^{t - 1} = a_{2t - 2}\) , by Claim 1 we have \(2^{t} - 3 \sim S_{2t - 3} - (2^{t} - 3) = 2^{2t - 2}\) .
Second, since \(S_{4t - 7} - a_{4t - 6} = 2^{2t - 3} - 3 < 2^{2t - 2} < 2^{4t - 6} + 2^{2t - 3} = a_{4t - 6}\) , by Claim 1 we have \(2^{2t - 2} \sim S_{4t - 7} - 2^{2t - 2} = 2^{4t - 6} - 3\) .
Therefore, \(2^{t} - 3 \sim 2^{2t - 2} \sim 2^{4t - 6} - 3\) , as required. \(\square\)
Now it is easy to find the required numbers. Indeed, the number \(2^{3} - 3 = 5 = a_{0} + a_{1}\) is representable, so Claim 3 provides an infinite sequence of representable numbers
\[2^{3} - 3 \sim 2^{6} - 3 \sim 2^{18} - 3 \sim \dots \sim 2^{t} - 3 \sim 2^{4t - 6} - 3 \sim \dots .\]
On the other hand, the number \(2^{7} - 3 = 125\) is non- representable (since by Claim 1 we have \(125 \sim S_{6} - 125 = 24 \sim S_{4} - 24 = 17 \sim S_{3} - 17 = 4\) which is clearly non- representable). So Claim 3 provides an infinite sequence of non- representable numbers
\[2^{7} - 3 \sim 2^{22} - 3 \sim 2^{82} - 3 \sim \dots \sim 2^{t} - 3 \sim 2^{4t - 6} - 3 \sim \dots .\]
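The two concrete facts used here (\(2^{3} - 3 = 5\) is representable, while \(2^{7} - 3 = 125\) and the number 4 from the chain are not) can be double-checked by an exhaustive subset-sum search over the terms not exceeding the target; a short Python check, for illustration only:

```python
def representable(target):
    # only terms a_n <= target can occur in a representation of target
    terms, n = [], 0
    while 2**n + 2**(n // 2) <= target:
        terms.append(2**n + 2**(n // 2))
        n += 1
    sums = {0}
    for t in terms:                 # classic subset-sum accumulation
        sums |= {s + t for s in sums}
    return target in sums

print(representable(2**3 - 3))  # True:  5 = a_0 + a_1 = 2 + 3
print(representable(2**7 - 3))  # False: 125 is non-representable
print(representable(4))         # False, as used in the chain above
```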
Solution 2. We keep the notion of representability and the notation \(S_{n}\) from the previous solution. We say that an index \(n\) is good if \(a_{n}\) writes as a sum of smaller terms from the sequence \(a_{0},a_{1},\ldots\) . Otherwise we say it is bad. We must prove that there are infinitely many good indices, as well as infinitely many bad ones.
Lemma 1. If \(m\geq 0\) is an integer, then \(4^{m}\) is representable if and only if \(2m + 1\) is good; equivalently, if and only if \(2m + 2\) is good.
Proof. The case \(m = 0\) is obvious, so we may assume that \(m\geq 1\) . Let \(n = 2m + 1\) or \(2m + 2\) . Then \(n\geq 3\) . We notice that
\[S_{n - 1}< a_{n - 2} + a_{n}.\]
The inequality rewrites as \(2^{n} + 2^{\lceil n / 2\rceil} + 2^{\lfloor n / 2\rfloor} - 3< 2^{n} + 2^{\lfloor n / 2\rfloor} + 2^{n - 2} + 2^{\lfloor n / 2\rfloor - 1}\) , i.e. as \(2^{\lceil n / 2\rceil}< 2^{n - 2} + 2^{\lfloor n / 2\rfloor - 1} + 3\) . If \(n\geq 4\) , then \(n / 2\leq n - 2\) , so \(\lceil n / 2\rceil \leqslant n - 2\) and \(2^{\lceil n / 2\rceil}\leqslant 2^{n - 2}\) . For \(n = 3\) the inequality can be checked directly.
If \(n\) is good, then \(a_{n}\) writes as \(a_{n} = a_{i_{1}} + \dots +a_{i_{r}}\) , where \(r\geq 2\) and \(i_{1}< \dots < i_{r}< n\) . Then \(i_{r} = n - 1\) and \(i_{r - 1} = n - 2\) , for if \(n - 1\) or \(n - 2\) is missing from the sequence \(i_{1},\ldots ,i_{r}\) , then \(a_{i_{1}} + \dots +a_{i_{r}}\leqslant a_{0} + \dots +a_{n - 3} + a_{n - 1} = S_{n - 1} - a_{n - 2}< a_{n}\) . Thus, if \(n\) is good, then both \(a_{n} - a_{n - 1}\) and \(a_{n} - a_{n - 1} - a_{n - 2}\) are representable.
We now consider the cases \(n = 2m + 1\) and \(n = 2m + 2\) separately.
If \(n = 2m + 1\) , then \(a_{n} - a_{n - 1} = a_{2m + 1} - a_{2m} = (2^{2m + 1} + 2^{m}) - (2^{2m} + 2^{m}) = 2^{2m}\) . So we proved that, if \(2m + 1\) is good, then \(2^{2m}\) is representable. Conversely, if \(2^{2m}\) is representable, then \(2^{2m}< a_{2m}\) , so \(2^{2m}\) is a sum of some distinct terms \(a_{i}\) with \(i< 2m\) . It follows that \(a_{2m + 1} = a_{2m} + 2^{2m}\) writes as \(a_{2m}\) plus a sum of some distinct terms \(a_{i}\) with \(i< 2m\) . Hence \(2m + 1\) is good.
If \(n = 2m + 2\) , then \(a_{n} - a_{n - 1} - a_{n - 2} = a_{2m + 2} - a_{2m + 1} - a_{2m} = (2^{2m + 2} + 2^{m + 1}) - (2^{2m + 1} + 2^{m}) - (2^{2m} + 2^{m}) = 2^{2m}\) . So we proved that, if \(2m + 2\) is good, then \(2^{2m}\) is representable. Conversely, if \(2^{2m}\) is representable, then, as seen in the previous case, it writes as a sum of some distinct terms \(a_{i}\) with \(i< 2m\) . Hence \(a_{2m + 2} = a_{2m + 1} + a_{2m} + 2^{2m}\) writes as \(a_{2m + 1} + a_{2m}\) plus a sum of some distinct terms \(a_{i}\) with \(i< 2m\) . Thus \(2m + 2\) is good.
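Lemma 1 can be sanity-checked by brute force for small indices, deciding "good" and "representable" via subset sums; a Python sketch (illustration only, not part of the proof):

```python
a = [2**n + 2**(n // 2) for n in range(16)]   # a_0, ..., a_15

def subset_sums(terms):
    sums = {0}
    for t in terms:
        sums |= {s + t for s in sums}
    return sums

def good(n):
    # the terms are strictly increasing, so membership forces >= 2 summands
    return a[n] in subset_sums(a[:n])

def representable(v):
    return v in subset_sums([t for t in a if t <= v])

for m in range(7):
    assert representable(4**m) == good(2 * m + 1) == good(2 * m + 2)
print("Lemma 1 verified for 0 <= m <= 6")
```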
Lemma 2. If \(k\geq 2\) , then \(2^{4k - 2}\) is representable if and only if \(2^{k + 1}\) is representable.
In particular, if \(s\geq 2\) , then \(4^{s}\) is representable if and only if \(4^{4s - 3}\) is representable. Also, \(4^{4s - 3} > 4^{s}\) .
Proof. We have \(2^{4k - 2}< a_{4k - 2}\) , so in a representation of \(2^{4k - 2}\) we can have only terms \(a_{i}\) with \(i\leq 4k - 3\) . Notice that
\[a_{0} + \dots +a_{4k - 3} = 2^{4k - 2} + 2^{2k} - 3< 2^{4k - 2} + 2^{2k} + 2^{k} = 2^{4k - 2} + a_{2k}.\]
Hence, any representation of \(2^{4k - 2}\) must contain all terms from \(a_{2k}\) to \(a_{4k - 3}\) . (If any of these terms is missing, then the sum of the remaining ones is \(\leqslant (a_{0} + \dots +a_{4k - 3}) - a_{2k}< 2^{4k - 2}\) .) Hence, if \(2^{4k - 2}\) is representable, then \(2^{4k - 2} - \sum_{i = 2k}^{4k - 3}a_{i}\) is representable. But
\[2^{4k - 2} - \sum_{i = 2k}^{4k - 3}a_{i} = 2^{4k - 2} - (S_{4k - 3} - S_{2k - 1}) = 2^{4k - 2} - (2^{4k - 2} + 2^{2k} - 3) + (2^{2k} + 2^{k + 1} - 3) = 2^{k + 1}.\]
So, if \(2^{4k - 2}\) is representable, then \(2^{k + 1}\) is representable. Conversely, if \(2^{k + 1}\) is representable, then \(2^{k + 1}< 2^{2k} + 2^{k} = a_{2k}\) , so \(2^{k + 1}\) writes as a sum of some distinct terms \(a_{i}\) with \(i< 2k\) . It follows that \(2^{4k - 2} = \sum_{i = 2k}^{4k - 3}a_{i} + 2^{k + 1}\) writes as \(a_{4k - 3} + a_{4k - 4} + \dots +a_{2k}\) plus the sum of some distinct terms \(a_{i}\) with \(i< 2k\) . Hence \(2^{4k - 2}\) is representable.
For the second statement, if \(s\geq 2\) , then we just take \(k = 2s - 1\) and we notice that \(2^{k + 1} = 4^{s}\) and \(2^{4k - 2} = 4^{4s - 3}\) . Also, \(s\geq 2\) implies that \(4s - 3 > s\) .
Now \(4^{2} = a_{2} + a_{3}\) is representable, whereas \(4^{6} = 4096\) is not. Indeed, note that \(4^{6} = 2^{12} < a_{12}\) , so the only available terms for a representation are \(a_{0}, \ldots , a_{11}\) , i.e., 2, 3, 6, 10, 20, 36, 72, 136, 272, 528, 1056, 2080. Their sum is \(S_{11} = 4221\) , which exceeds 4096 by 125. Then any representation of 4096 must contain all the terms from \(a_{0}, \ldots , a_{11}\) that are greater than 125, i.e., 136, 272, 528, 1056, 2080. Their sum is 4072. Since \(4096 - 4072 = 24\) and 24 is clearly not representable, 4096 is non- representable as well.
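The claim that \(4^{6} = 4096\) is non-representable, together with the intermediate values \(S_{11} = 4221\) and \(4221 - 4096 = 125\), can be confirmed by exhausting all subset sums of \(a_{0}, \ldots, a_{11}\); a brute-force Python check, for reassurance only:

```python
terms = [2**n + 2**(n // 2) for n in range(12)]   # a_0, ..., a_11
assert sum(terms) == 4221                         # S_11
assert sum(terms) - 4096 == 125                   # the deficit used above

sums = {0}
for t in terms:
    sums |= {s + t for s in sums}

print(4096 in sums)  # False: 4^6 is non-representable
print(16 in sums)    # True:  4^2 = a_2 + a_3 = 6 + 10
```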
Starting from the representable number \(4^{2}\) and the non-representable number \(4^{6}\), and iterating Lemma 2, we obtain infinitely many representable powers of 4, as well as infinitely many non-representable ones. By Lemma 1, this solves our problem.
|
IMOSL-2018-N4
|
Let \(a_{1}\) , \(a_{2}\) , ..., \(a_{n}\) , ... be a sequence of positive integers such that
\[\frac{a_{1}}{a_{2}} +\frac{a_{2}}{a_{3}} +\dots +\frac{a_{n - 1}}{a_{n}} +\frac{a_{n}}{a_{1}}\]
is an integer for all \(n \geq k\) , where \(k\) is some positive integer. Prove that there exists a positive integer \(m\) such that \(a_{n} = a_{n + 1}\) for all \(n \geq m\) .
|
Solution 1. The argument hinges on the following two facts: Let \(a\) , \(b\) , \(c\) be positive integers such that \(N = b / c + (c - b) / a\) is an integer.
(1) If \(\gcd (a, c) = 1\) , then \(c\) divides \(b\) ; and
(2) If \(\gcd (a, b, c) = 1\) , then \(\gcd (a, b) = 1\) .
To prove (1), write \(ab = c(aN + b - c)\) . Since \(\gcd (a, c) = 1\) , it follows that \(c\) divides \(b\) . To prove (2), write \(c^{2} - bc = a(cN - b)\) to infer that \(a\) divides \(c^{2} - bc\) . Letting \(d = \gcd (a, b)\) , it follows that \(d\) divides \(c^{2}\) , and since the two are relatively prime by hypothesis, \(d = 1\) .
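Both facts lend themselves to an exhaustive check over small triples, using exact rationals to test the integrality of \(N\); a brute-force Python sketch, for reassurance only:

```python
from fractions import Fraction
from math import gcd

def verify(limit):
    for a in range(1, limit):
        for b in range(1, limit):
            for c in range(1, limit):
                N = Fraction(b, c) + Fraction(c - b, a)
                if N.denominator != 1:
                    continue                      # N not an integer: no claim made
                if gcd(a, c) == 1 and b % c != 0:
                    return False                  # fact (1) would fail
                if gcd(gcd(a, b), c) == 1 and gcd(a, b) != 1:
                    return False                  # fact (2) would fail
    return True

print(verify(40))  # True
```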
Now, let \(s_{n} = a_{1} / a_{2} + a_{2} / a_{3} + \dots + a_{n - 1} / a_{n} + a_{n} / a_{1}\) , let \(\delta_{n} = \gcd (a_{1}, a_{n}, a_{n + 1})\) and write
\[s_{n + 1} - s_{n} = \frac{a_{n}}{a_{n + 1}} +\frac{a_{n + 1} - a_{n}}{a_{1}} = \frac{a_{n} / \delta_{n}}{a_{n + 1} / \delta_{n}} +\frac{a_{n + 1} / \delta_{n} - a_{n} / \delta_{n}}{a_{1} / \delta_{n}}.\]
Let \(n \geq k\) . Since \(\gcd (a_{1} / \delta_{n}, a_{n} / \delta_{n}, a_{n + 1} / \delta_{n}) = 1\) , it follows by (2) that \(\gcd (a_{1} / \delta_{n}, a_{n} / \delta_{n}) = 1\) . Let \(d_{n} = \gcd (a_{1}, a_{n})\) . Then \(d_{n} = \delta_{n} \cdot \gcd (a_{1} / \delta_{n}, a_{n} / \delta_{n}) = \delta_{n}\) , so \(d_{n}\) divides \(a_{n + 1}\) , and therefore \(d_{n}\) divides \(d_{n + 1}\) .
Consequently, from some rank on, the \(d_{n}\) form a nondecreasing sequence of integers not exceeding \(a_{1}\) , so \(d_{n} = d\) for all \(n \geq \ell\) , where \(\ell\) is some positive integer.
Finally, since \(\gcd (a_{1} / d, a_{n + 1} / d) = 1\) , it follows by (1) that \(a_{n + 1} / d\) divides \(a_{n} / d\) , so \(a_{n} \geq a_{n + 1}\) for all \(n \geq \ell\) . The conclusion follows.
Solution 2. We use the same notation \(s_{n}\) . This time, we explore the exponents of primes in the prime factorizations of the \(a_{n}\) for \(n \geq k\) .
To start, for every \(n \geq k\) , we know that the number
\[s_{n + 1} - s_{n} = \frac{a_{n}}{a_{n + 1}} +\frac{a_{n + 1}}{a_{1}} -\frac{a_{n}}{a_{1}} \quad (*)\]
is integer. Multiplying it by \(a_{1}\) we obtain that \(a_{1}a_{n} / a_{n + 1}\) is integer as well, so that \(a_{n + 1} \mid a_{1}a_{n}\) . This means that \(a_{n} \mid a_{1}^{n - k}a_{k}\) , so all prime divisors of \(a_{n}\) are among those of \(a_{1}a_{k}\) . There are finitely many such primes; therefore, it suffices to prove that the exponent of each of them in the prime factorization of \(a_{n}\) is eventually constant.
Choose any prime \(p \mid a_{1}a_{k}\) . Recall that \(v_{p}(q)\) is the standard notation for the exponent of \(p\) in the prime factorization of a nonzero rational number \(q\) . Say that an index \(n \geq k\) is large if \(v_{p}(a_{n}) \geq v_{p}(a_{1})\) . We separate two cases.
Case 1: There exists a large index \(n\) .
If \(v_{p}(a_{n + 1}) < v_{p}(a_{1})\) , then \(v_{p}(a_{n} / a_{n + 1})\) and \(v_{p}(a_{n} / a_{1})\) are nonnegative, while \(v_{p}(a_{n + 1} / a_{1}) < 0\) ; hence \((*)\) cannot be an integer. This contradiction shows that index \(n + 1\) is also large.
On the other hand, if \(v_{p}(a_{n + 1}) > v_{p}(a_{n})\) , then \(v_{p}(a_{n} / a_{n + 1}) < 0\) , while \(v_{p}\left((a_{n + 1} - a_{n}) / a_{1}\right) \geqslant 0\) , so \((*)\) is not integer again. Thus, \(v_{p}(a_{1}) \leqslant v_{p}(a_{n + 1}) \leqslant v_{p}(a_{n})\) .
The above arguments can now be applied successively to indices \(n + 1\) , \(n + 2\) , ..., showing that all the indices greater than \(n\) are large, and the sequence \(v_{p}(a_{n})\) , \(v_{p}(a_{n + 1})\) , \(v_{p}(a_{n + 2})\) , ... is nonincreasing — hence eventually constant.
Case 2: There is no large index.
We have \(v_{p}(a_{1}) > v_{p}(a_{n})\) for all \(n\geq k\) . If we had \(v_{p}(a_{n + 1})< v_{p}(a_{n})\) for some \(n\geq k\) then \(v_{p}(a_{n + 1} / a_{1})< v_{p}(a_{n} / a_{1})< 0< v_{p}(a_{n} / a_{n + 1})\) which would also yield that \((\ast)\) is not integer. Therefore, in this case the sequence \(v_{p}(a_{k}),v_{p}(a_{k + 1}),v_{p}(a_{k + 2}),\ldots\) is nondecreasing and bounded by \(v_{p}(a_{1})\) from above; hence it is also eventually constant.
Comment. Given any positive odd integer \(m\) , consider the \(m\) - tuple \((2,2^{2},\ldots ,2^{m - 1},2^{m})\) . Appending an infinite string of 1's to this \(m\) - tuple yields an eventually constant sequence of integers satisfying the condition in the statement, and shows that the rank from which the sequence stabilises may be arbitrarily large.
There are more sophisticated examples. The solution to part (b) of 10532, Amer. Math. Monthly, Vol. 105 No. 8 (Oct. 1998), 775- 777 (available at https://www.jstor.org/stable/2589009), shows that, for every integer \(m\geq 5\) , there exists an \(m\) - tuple \((a_{1},a_{2},\ldots ,a_{m})\) of pairwise distinct positive integers such that \(\gcd (a_{1},a_{2}) = \gcd (a_{2},a_{3}) = \dots = \gcd (a_{m - 1},a_{m}) = \gcd (a_{m},a_{1}) = 1\) , and the sum \(a_{1} / a_{2} + a_{2} / a_{3} + \dots +a_{m - 1} / a_{m} + a_{m} / a_{1}\) is an integer. Letting \(a_{m + k} = a_{1}\) , \(k = 1,2,\ldots\) , extends such an \(m\) - tuple to an eventually constant sequence of positive integers satisfying the condition in the statement of the problem at hand.
Here is the example given by the proposers of 10532. Let \(b_{1} = 2\) , let \(b_{k + 1} = 1 + b_{1}\dots b_{k} = 1 + b_{k}(b_{k} - 1)\) , \(k\geq 1\) , and set \(B_{m} = b_{1}\dots b_{m - 4} = b_{m - 3} - 1\) . The \(m\) - tuple \((a_{1},a_{2},\ldots ,a_{m})\) defined below satisfies the required conditions:
\[a_{1} = 1,\quad a_{2} = (8B_{m} + 1)B_{m} + 8,\quad a_{3} = 8B_{m} + 1,\quad a_{k} = b_{m - k}\ \text{for}\ 4\leqslant k\leqslant m - 1,\] \[a_{m} = \frac{a_{2}}{2}\cdot a_{3}\cdot \frac{B_{m}}{2} = \left(\frac{1}{2} (8B_{m} + 1)B_{m} + 4\right)\cdot (8B_{m} + 1)\cdot \frac{B_{m}}{2}.\]
It is readily checked that \(a_{1}< a_{m - 1}< a_{m - 2}< \dots < a_{3}< a_{2}< a_{m}\) . For further details we refer to the solution mentioned above. Acquaintance with this example (or more elaborated examples derived from) offers no advantage in tackling the problem.
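For \(m = 5\) the construction is small enough to verify directly: here \(B_{5} = b_{1} = 2\), and the formulas above give the 5-tuple \((1, 42, 17, 2, 357)\). A check with exact rational arithmetic (the tuple was computed by hand from the displayed formulas):

```python
from fractions import Fraction
from math import gcd

t = [1, 42, 17, 2, 357]   # the m = 5 instance (B_5 = 2)
assert len(set(t)) == 5   # pairwise distinct
assert all(gcd(t[i], t[(i + 1) % 5]) == 1 for i in range(5))  # cyclic gcd condition

total = sum(Fraction(t[i], t[(i + 1) % 5]) for i in range(5))
print(total)  # 368, an integer as claimed
```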
|
IMOSL-2018-N5
|
Four positive integers \(x\) , \(y\) , \(z\) , and \(t\) satisfy the relations
\[x y - z t = x + y = z + t. \quad (*)\]
Is it possible that both \(x y\) and \(z t\) are perfect squares?
|
Answer: No.
Solution 1. Arguing indirectly, assume that \(x y = a^{2}\) and \(z t = c^{2}\) with \(a,c > 0\).
Suppose that the number \(x + y = z + t\) is odd. Then \(x\) and \(y\) have opposite parity, as do \(z\) and \(t\) . This means that both \(x y\) and \(z t\) are even, hence \(x y - z t\) is even as well, contradicting the fact that \(x y - z t = x + y\) is odd. Thus, \(x + y\) is even, so the number \(s = \frac{x + y}{2} = \frac{z + t}{2}\) is a positive integer.
Next, we set \(b = \frac{|x - y|}{2}\) , \(d = \frac{|z - t|}{2}\) . Now the problem conditions yield
\[s^{2} = a^{2} + b^{2} = c^{2} + d^{2} \quad (1)\]
and
\[2s = a^{2} - c^{2} = d^{2} - b^{2} \quad (2)\]
(the last equality in (2) follows from (1)). We readily get from (2) that \(a,d > 0\).
In the sequel we will use only the relations (1) and (2), along with the fact that \(a\) , \(d\) , \(s\) are positive integers, while \(b\) and \(c\) are nonnegative integers, at most one of which may be zero. Since both relations are symmetric with respect to the simultaneous swappings \(a \leftrightarrow d\) and \(b \leftrightarrow c\) , we assume, without loss of generality, that \(b \geqslant c\) (and hence \(b > 0\) ). Therefore, \(d^{2} = 2s + b^{2} > c^{2}\) , whence
\[d^{2} > \frac{c^{2} + d^{2}}{2} = \frac{s^{2}}{2}. \quad (3)\]
On the other hand, since \(d^{2} - b^{2}\) is even by (2), the numbers \(b\) and \(d\) have the same parity, so \(0 < b \leqslant d - 2\) . Therefore,
\[2s = d^{2} - b^{2} \geqslant d^{2} - (d - 2)^{2} = 4(d - 1), \qquad \text{i.e.,} \qquad d \leqslant \frac{s}{2} + 1. \quad (4)\]
Combining (3) and (4) we obtain
\[2s^{2}< 4d^{2}\leqslant 4\left(\frac{s}{2} +1\right)^{2},\qquad \mathrm{or}\qquad (s - 2)^{2}< 8,\]
which yields \(s \leqslant 4\) .
Finally, an easy check shows that each number of the form \(s^{2}\) with \(1 \leqslant s \leqslant 4\) has a unique representation as a sum of two squares, namely \(s^{2} = s^{2} + 0^{2}\) . Thus, (1) along with \(a,d > 0\) imply \(b = c = 0\) , which is impossible.
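The uniqueness fact invoked in the last step (for \(1 \leqslant s \leqslant 4\), the only way to write \(s^{2}\) as \(x^{2} + y^{2}\) with \(x \geqslant y \geqslant 0\) is \(s^{2} + 0^{2}\)) takes one line to check:

```python
for s in range(1, 5):
    reps = [(x, y) for x in range(s + 1) for y in range(x + 1)
            if x * x + y * y == s * s]
    assert reps == [(s, 0)]   # the only decomposition is s^2 + 0^2
print("uniqueness holds for 1 <= s <= 4")
```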
Solution 2. We start with a complete description of all 4- tuples \((x,y,z,t)\) of positive integers satisfying \((*)\) . As in the solution above, we notice that the numbers
\[s = \frac{x + y}{2} = \frac{z + t}{2},\quad p = \frac{x - y}{2},\quad \mathrm{and}\quad q = \frac{z - t}{2}\]
are integers (we may, and will, assume that \(p,q \geqslant 0\) ). We have
\[2s = xy - zt = (s + p)(s - p) - (s + q)(s - q) = q^{2} - p^{2},\]
so \(p\) and \(q\) have the same parity, and \(q > p\) .
Set now \(k = \frac{q - p}{2}\) , \(\ell = \frac{q + p}{2}\) . Then we have \(s = \frac{q^{2} - p^{2}}{2} = 2k\ell\) and hence
\[\begin{array}{r l} & {x = s + p = 2k\ell -k + \ell ,\qquad y = s - p = 2k\ell +k - \ell ,}\\ & {z = s + q = 2k\ell +k + \ell ,\qquad t = s - q = 2k\ell -k - \ell .} \end{array} \quad (5)\]
Recall here that \(\ell \geqslant k > 0\) and, moreover, \((k,\ell)\neq (1,1)\) , since otherwise \(t = 0\).
Assume now that both \(x y\) and \(z t\) are squares. Then \(x y z t\) is also a square. On the other hand, we have
\[\begin{array}{r l} & {x y z t = (2k\ell -k + \ell)(2k\ell +k - \ell)(2k\ell +k + \ell)(2k\ell -k - \ell)}\\ & {\qquad = \left(4k^{2}\ell^{2} - (k - \ell)^{2}\right)\left(4k^{2}\ell^{2} - (k + \ell)^{2}\right) = \left(4k^{2}\ell^{2} - k^{2} - \ell^{2}\right)^{2} - 4k^{2}\ell^{2}.} \end{array} \quad (6)\]
Denote \(D = 4k^{2}\ell^{2} - k^{2} - \ell^{2} > 0\) . From (6) we get \(D^{2} > x y z t\) . On the other hand,
\[(D - 1)^{2} = D^{2} - 2(4k^{2}\ell^{2} - k^{2} - \ell^{2}) + 1 = (D^{2} - 4k^{2}\ell^{2}) - (2k^{2} - 1)(2\ell^{2} - 1) + 2\] \[\qquad = x y z t - (2k^{2} - 1)(2\ell^{2} - 1) + 2< x y z t,\]
since \(\ell \geqslant 2\) and \(k\geqslant 1\) . Thus \((D - 1)^{2}< x y z t< D^{2}\) , and \(x y z t\) cannot be a perfect square; a contradiction.
Comment. The first part of Solution 2 shows that all 4- tuples of positive integers \(x\geqslant y\) , \(z\geqslant t\) satisfying \((\ast)\) have the form (5), where \(\ell \geqslant k > 0\) and \(\ell \geqslant 2\) . The converse is also true: every pair of positive integers \(\ell \geqslant k > 0\) , except for the pair \(k = \ell = 1\) , generates via (5) a 4- tuple of positive integers satisfying \((\ast)\) .
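The parametrisation (5) and the conclusion of Solution 2 can be probed numerically; the following Python snippet (a sanity check, not a proof) verifies for small \(k \leqslant \ell\) that every generated tuple satisfies \((*)\) and that \(xy\) and \(zt\) are never both perfect squares:

```python
from math import isqrt

def quadruple(k, l):
    # the parametrisation (5): s = 2kl, p = l - k, q = l + k
    s = 2 * k * l
    return (s + l - k, s - l + k, s + k + l, s - k - l)   # (x, y, z, t)

def is_square(n):
    return isqrt(n) ** 2 == n

for l in range(1, 40):
    for k in range(1, l + 1):
        if (k, l) == (1, 1):      # excluded: it gives t = 0
            continue
        x, y, z, t = quadruple(k, l)
        assert x * y - z * t == x + y == z + t            # relation (*)
        assert not (is_square(x * y) and is_square(z * t))
print("checked all 0 < k <= l < 40")
```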
|
IMOSL-2018-N6
|
Let \(f\colon \{1,2,3,\ldots \} \to \{2,3,\ldots \}\) be a function such that \(f(m + n)\mid f(m) + f(n)\) for all pairs \(m,n\) of positive integers. Prove that there exists a positive integer \(c > 1\) which divides all values of \(f\) .
|
Solution 1. For every positive integer \(m\) , define \(S_{m} = \{n\colon m\mid f(n)\}\) .
Lemma. If the set \(S_{m}\) is infinite, then \(S_{m} = \{d,2d,3d,\ldots \} = d\cdot \mathbb{Z}_{>0}\) for some positive integer \(d\) .
Proof. Let \(d = \min S_{m}\) ; the definition of \(S_{m}\) yields \(m\mid f(d)\) .
Whenever \(n\in S_{m}\) and \(n > d\) , we have \(m\mid f(n)\mid f(n - d) + f(d)\) , so \(m\mid f(n - d)\) and therefore \(n - d\in S_{m}\) . Let \(r\leqslant d\) be the least positive integer with \(n\equiv r\) (mod \(d\) ); repeating the same step, we can see that \(n - d,n - 2d,\ldots ,r\in S_{m}\) . By the minimality of \(d\) , this shows \(r = d\) and therefore \(d\mid n\) .
Starting from an arbitrarily large element of \(S_{m}\) , the process above reaches all multiples of \(d\) ; so they all are elements of \(S_{m}\) . \(\square\)
The solution for the problem will be split into two cases.
Case 1: The function \(f\) is bounded.
Call a prime \(p\) frequent if the set \(S_{p}\) is infinite, i.e., if \(p\) divides \(f(n)\) for infinitely many positive integers \(n\) ; otherwise call \(p\) sporadic. Since the function \(f\) is bounded, there are only a finite number of primes that divide at least one \(f(n)\) ; so altogether there are finitely many numbers \(n\) such that \(f(n)\) has a sporadic prime divisor. Let \(N\) be a positive integer, greater than all those numbers \(n\) .
Let \(p_{1},\ldots ,p_{k}\) be the frequent primes. By the lemma we have \(S_{p_{i}} = d_{i}\cdot \mathbb{Z}_{>0}\) for some \(d_{i}\) . Consider the number
\[n = Nd_{1}d_{2}\cdots d_{k} + 1.\]
Due to \(n > N\) , all prime divisors of \(f(n)\) are frequent primes. Let \(p_{i}\) be any frequent prime divisor of \(f(n)\) . Then \(n\in S_{p_{i}}\) , and therefore \(d_{i}\mid n\) . But \(n\equiv 1\) (mod \(d_{i}\) ), which means \(d_{i} = 1\) . Hence \(S_{p_{i}} = 1\cdot \mathbb{Z}_{>0} = \mathbb{Z}_{>0}\) and therefore \(p_{i}\) is a common divisor of all values \(f(n)\) .
Case 2: \(f\) is unbounded.
We prove that \(f(1)\) divides all \(f(n)\) .
Let \(a = f(1)\) . Since \(1\in S_{a}\) , by the lemma it suffices to prove that \(S_{a}\) is an infinite set.
Call a positive integer \(p\) a peak if \(f(p) > \max (f(1),\ldots ,f(p - 1))\) . Since \(f\) is not bounded, there are infinitely many peaks. Let \(1 = p_{1}< p_{2}< \ldots\) be the sequence of all peaks, and let \(h_{k} = f(p_{k})\) . Notice that for any peak \(p_{i}\) and for any \(k< p_{i}\) , we have \(f(p_{i})\mid f(k) + f(p_{i} - k)<\) \(2f(p_{i})\) , hence
\[f(k) + f(p_{i} - k) = f(p_{i}) = h_{i}. \quad (1)\]
By the pigeonhole principle, among the numbers \(h_{1},h_{2},\ldots\) there are infinitely many that are congruent modulo \(a\) . Let \(k_{0}< k_{1}< k_{2}< \ldots\) be an infinite sequence of positive integers such that \(h_{k_{0}}\equiv h_{k_{1}}\equiv \ldots\) (mod \(a\) ). Notice that
\[f(p_{k_{i}} - p_{k_{0}}) = f(p_{k_{i}}) - f(p_{k_{0}}) = h_{k_{i}} - h_{k_{0}}\equiv 0\pmod {a},\]
so \(p_{k_{i}} - p_{k_{0}}\in S_{a}\) for all \(i = 1,2,\ldots\) . This provides infinitely many elements in \(S_{a}\) .
Hence, \(S_{a}\) is an infinite set, and therefore \(f(1) = a\) divides \(f(n)\) for every \(n\) .
Comment. As an extension of the solution above, it can be proven that if \(f\) is not bounded then \(f(n) = an\) with \(a = f(1)\) .
Take an arbitrary positive integer \(n\) ; we will show that \(f(n + 1) = f(n) + a\) . Then it follows by induction that \(f(n) = an\) .
Take a peak \(p\) such that \(p > n + 2\) and \(h = f(p) > f(n) + 2a\) . By (1) we have \(f(p - 1) = f(p) - f(1) = h - a\) and \(f(n + 1) = f(p) - f(p - n - 1) = h - f(p - n - 1)\) . From \(h - a = f(p - 1) \mid f(n) + f(p - n - 1) < f(n) + h < 2(h - a)\) we get \(f(n) + f(p - n - 1) = h - a\) . Then
\[f(n + 1) - f(n) = \left(h - f(p - n - 1)\right) - \left(h - a - f(p - n - 1)\right) = a.\]
On the other hand, there exists a wide family of bounded functions satisfying the required properties. Here we present a few examples:
\[
f(n) = c; \qquad
f(n) =
\begin{cases}
2c, & \text{if } n \text{ is even}, \\
c, & \text{if } n \text{ is odd};
\end{cases}
\qquad
f(n) =
\begin{cases}
2018c, & \text{if } n \le 2018, \\
c, & \text{if } n > 2018.
\end{cases}
\]
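Each of these families is easy to test against the divisibility condition \(f(m+n) \mid f(m) + f(n)\); a short Python check (with \(c = 5\) as an arbitrary choice of the constant):

```python
c = 5          # arbitrary positive constant
T = 2018       # threshold of the third family

examples = [
    lambda n: c,
    lambda n: 2 * c if n % 2 == 0 else c,
    lambda n: T * c if n <= T else c,
]

# test the condition on a grid of pairs straddling the threshold
for f in examples:
    for m in range(1, 2100, 7):
        for n in range(1, 2100, 11):
            assert (f(m) + f(n)) % f(m + n) == 0
print("all three families satisfy f(m+n) | f(m) + f(n)")
```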
Solution 2. Let \(d_{n} = \gcd (f(n), f(1))\) . From \(d_{n + 1} \mid f(1)\) and \(d_{n + 1} \mid f(n + 1) \mid f(n) + f(1)\) , we can see that \(d_{n + 1} \mid f(n)\) ; then \(d_{n + 1} \mid \gcd (f(n), f(1)) = d_{n}\) . So the sequence \(d_{1}, d_{2}, \ldots\) is nonincreasing in the sense that every element is a divisor of the previous elements. Let \(d = \min (d_{1}, d_{2}, \ldots) = \gcd (d_{1}, d_{2}, \ldots) = \gcd (f(1), f(2), \ldots)\) ; we have to prove \(d \geq 2\) .
For the sake of contradiction, suppose that the statement is wrong, so \(d = 1\) ; that means there is some index \(n_{0}\) such that \(d_{n} = 1\) for every \(n \geq n_{0}\) , i.e., \(f(n)\) is coprime with \(f(1)\) .
Claim 1. If \(2^{k} \geq n_{0}\) then \(f(2^{k}) \leq 2^{k}\) .
Proof. By the condition, \(f(2n) \mid 2f(n)\) ; a trivial induction yields \(f(2^{k}) \mid 2^{k}f(1)\) . If \(2^{k} \geq n_{0}\) then \(f(2^{k})\) is coprime with \(f(1)\) , so \(f(2^{k})\) is a divisor of \(2^{k}\) . \(\square\)
Claim 2. There is a constant \(C\) such that \(f(n) < n + C\) for every \(n\) .
Proof. Take the first power of 2 which is greater than or equal to \(n_{0}\) : let \(K = 2^{k} \geq n_{0}\) . By Claim 1, we have \(f(K) \leq K\) . Notice that \(f(n + K) \mid f(n) + f(K)\) implies \(f(n + K) \leq f(n) + f(K) \leq f(n) + K\) . If \(n = tK + r\) for some \(t \geq 0\) and \(1 \leq r \leq K\) , then we conclude
\[f(n) \leq K + f(n - K) \leq 2K + f(n - 2K) \leq \ldots \leq tK + f(r) < n + \max \left(f(1), f(2), \ldots , f(K)\right),\]
so the claim is true with \(C = \max \left(f(1), \ldots , f(K)\right)\) . \(\square\)
Claim 3. If \(a, b \in \mathbb{Z}_{>0}\) are coprime then \(\gcd \left(f(a), f(b)\right) \mid f(1)\) . In particular, if \(a, b \geq n_{0}\) are coprime then \(f(a)\) and \(f(b)\) are coprime.
Proof. Let \(d = \gcd \left(f(a), f(b)\right)\) . We can replicate Euclid's algorithm. Formally, apply induction on \(a + b\) . If \(a = 1\) or \(b = 1\) then we already have \(d \mid f(1)\) .
Without loss of generality, suppose \(1 < a < b\) . Then \(d \mid f(a)\) and \(d \mid f(b) \mid f(a) + f(b - a)\) , so \(d \mid f(b - a)\) . Therefore \(d\) divides \(\gcd \left(f(a), f(b - a)\right)\) which is a divisor of \(f(1)\) by the induction hypothesis. \(\square\)
Let \(p_{1} < p_{2} < \ldots\) be the sequence of all prime numbers; for every \(k\) , let \(q_{k}\) be the lowest power of \(p_{k}\) with \(q_{k} \geq n_{0}\) . (Notice that \(q_{k} \neq p_{k}\) for only finitely many indices \(k\) .)
Take a positive integer \(N\) , and consider the numbers
\[f(1), f(q_{1}), f(q_{2}), \ldots , f(q_{N}).\]
Here we have \(N + 1\) numbers, each being greater than 1, and they are pairwise coprime by Claim 3. Therefore, they have at least \(N + 1\) different prime divisors in total, and their greatest prime divisor is at least \(p_{N + 1}\) . Hence, \(\max (f(1), f(q_{1}), \ldots , f(q_{N})) \geq p_{N + 1}\) .
Choose \(N\) such that \(\max (q_{1}, \ldots , q_{N}) = p_{N}\) (this is achieved if \(N\) is sufficiently large), and \(p_{N + 1} - p_{N} > C\) (that is possible, because there are arbitrarily long gaps between the primes). Then we establish a contradiction
\[p_{N + 1} \leq \max (f(1), f(q_{1}), \ldots , f(q_{N})) < \max (1 + C, q_{1} + C, \ldots , q_{N} + C) = p_{N} + C < p_{N + 1}\]
which proves the statement.
|
IMOSL-2018-N7
|
Let \(n \geqslant 2018\) be an integer, and let \(a_{1}, a_{2}, \ldots , a_{n}, b_{1}, b_{2}, \ldots , b_{n}\) be pairwise distinct positive integers not exceeding \(5n\) . Suppose that the sequence
\[\frac{a_{1}}{b_{1}}, \frac{a_{2}}{b_{2}}, \ldots , \frac{a_{n}}{b_{n}} \quad (1)\]
forms an arithmetic progression. Prove that the terms of the sequence are equal.
|
Solution. Suppose that (1) is an arithmetic progression with nonzero difference. Let the difference be \(\Delta = \frac{c}{d}\) , where \(d > 0\) and \(c, d\) are coprime.
We will show that too many of the denominators \(b_{i}\) would have to be divisible by \(d\) . To this end, for any \(1 \leqslant i \leqslant n\) and any prime divisor \(p\) of \(d\) , say that the index \(i\) is \(p\) - wrong if \(v_{p}(b_{i}) < v_{p}(d)\) . (\(v_{p}(x)\) stands for the exponent of \(p\) in the prime factorisation of \(x\) .)
Claim 1. For any prime \(p\) , all \(p\) - wrong indices are congruent modulo \(p\) . In other words, the \(p\) - wrong indices (if they exist) are included in an arithmetic progression with difference \(p\) .
Proof. Let \(\alpha = v_{p}(d)\) . For the sake of contradiction, suppose that \(i\) and \(j\) are \(p\) - wrong indices (i.e., neither \(b_{i}\) nor \(b_{j}\) is divisible by \(p^{\alpha}\) ) such that \(i \not\equiv j \pmod{p}\) . Then the least common denominator of \(\frac{a_{i}}{b_{i}}\) and \(\frac{a_{j}}{b_{j}}\) is not divisible by \(p^{\alpha}\) . But this is impossible because in their difference, \((i - j)\Delta = \frac{(i - j)c}{d}\) , the numerator is coprime to \(p\) , but \(p^{\alpha}\) divides the denominator \(d\) . \(\square\)
Claim 2. \(d\) has no prime divisors greater than 5.
Proof. Suppose that \(p \geqslant 7\) is a prime divisor of \(d\) . Among the indices \(1, 2, \ldots , n\) , at most \(\left\lceil \frac{n}{p} \right\rceil < \frac{n}{p} + 1\) are \(p\) - wrong, so \(p\) divides at least \(\frac{p - 1}{p} n - 1\) of \(b_{1}, \ldots , b_{n}\) . Since these denominators are distinct,
\[5n \geqslant \max \{b_{i}: p \mid b_{i} \} \geqslant \left(\frac{p - 1}{p} n - 1\right) p = (p - 1)(n - 1) - 1 \geqslant 6(n - 1) - 1 > 5n,\]
a contradiction. \(\square\)
Claim 3. For every \(0 \leqslant k \leqslant n - 30\) , among the denominators \(b_{k + 1}, b_{k + 2}, \ldots , b_{k + 30}\) , at least \(\phi (30) = 8\) are divisible by \(d\) .
Proof. By Claim 1, the 2- wrong, 3- wrong and 5- wrong indices can be covered by three arithmetic progressions with differences 2, 3 and 5. By a simple inclusion- exclusion, \((2 - 1) \cdot (3 - 1) \cdot (5 - 1) = 8\) indices are not covered; by Claim 2, we have \(d \mid b_{i}\) for every uncovered index \(i\) . \(\square\)
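The count in Claim 3 can be sanity-checked by brute force: whatever residue classes the three progressions from Claim 1 occupy, a window of 30 consecutive indices always leaves exactly \((2-1)(3-1)(5-1) = 8\) indices uncovered. A minimal sketch (illustration, not part of the proof):

```python
from itertools import product

# Claim 1 covers the 2-wrong, 3-wrong and 5-wrong indices by arithmetic
# progressions with differences 2, 3, 5.  Whatever residues r2, r3, r5 these
# progressions occupy, a window of 30 consecutive indices leaves exactly
# (2-1)*(3-1)*(5-1) = 8 indices uncovered, by the Chinese Remainder Theorem.
def uncovered(start, r2, r3, r5):
    return sum(1 for i in range(start, start + 30)
               if i % 2 != r2 and i % 3 != r3 and i % 5 != r5)

counts = {uncovered(s, r2, r3, r5)
          for s in range(30)
          for r2, r3, r5 in product(range(2), range(3), range(5))}
assert counts == {8}
```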
Claim 4. \(|\Delta | < \frac{20}{n - 2}\) and \(d > \frac{n - 2}{20}\) .
Proof. From the sequence (1), remove all fractions with \(b_{i} < \frac{n}{2}\) . Since the \(b_{i}\) are distinct positive integers, at least \(\frac{n}{2}\) fractions remain, and their values cannot exceed \(\frac{5n}{n / 2} = 10\) . So we have at least \(\frac{n}{2}\) elements of the arithmetic progression (1) in the interval \((0, 10]\) , hence the difference must be below \(\frac{10}{n / 2 - 1} = \frac{20}{n - 2}\) .
The second inequality follows from \(\frac{1}{d} \leqslant \frac{|c|}{d} = |\Delta |\) . \(\square\)
Now we have everything needed for the final contradiction. By Claim 3, we have \(d \mid b_{i}\) for at least \(\left\lfloor \frac{n}{30} \right\rfloor \cdot 8\) indices \(i\) . By Claim 4, we have \(d > \frac{n - 2}{20}\) . Therefore,
\[5n \geqslant \max \{b_{i}: d \mid b_{i} \} \geqslant \left(\left\lfloor \frac{n}{30} \right\rfloor \cdot 8\right) \cdot d > \left(\frac{n}{30} -1\right) \cdot 8 \cdot \frac{n - 2}{20} > 5n,\]
a contradiction (the last inequality holds for all \(n \geqslant 2018\) ). Hence \(\Delta = 0\) and all terms of (1) are equal. \(\square\)
Comment 1. It is possible that all terms in (1) are equal, for example with \(a_{i} = 2i - 1\) and \(b_{i} = 4i - 2\) we have \(\frac{a_{i}}{b_{i}} = \frac{1}{2}\) .
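The example in Comment 1 is easy to verify exactly with rational arithmetic; a quick check (illustration only):

```python
from fractions import Fraction

n = 2018
a = [2*i - 1 for i in range(1, n + 1)]   # a_i = 2i - 1 (odd)
b = [4*i - 2 for i in range(1, n + 1)]   # b_i = 4i - 2 (even)

# the 2n values are pairwise distinct and do not exceed 5n ...
assert len(set(a) | set(b)) == 2 * n
assert max(a + b) <= 5 * n
# ... and every fraction a_i / b_i equals 1/2, a constant progression
assert all(Fraction(x, y) == Fraction(1, 2) for x, y in zip(a, b))
```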
Comment 2. The bound \(5n\) in the statement is far from sharp; the solution above can be modified to work for \(9n\) . For large \(n\) , the bound \(5n\) can be replaced by \(n^{\frac{3}{2} - \epsilon}\) .
|
IMOSL-2019-A1
|
Let \(\mathbb{Z}\) be the set of integers. Determine all functions \(f:\mathbb{Z}\to \mathbb{Z}\) such that, for all integers \(a\) and \(b\) ,
\[f(2a) + 2f(b) = f(f(a + b)). \quad (1)\]
|
Answer: The solutions are \(f(n) = 0\) and \(f(n) = 2n + K\) for any constant \(K\in \mathbb{Z}\) .
Common remarks. Most solutions to this problem first prove that \(f\) must be linear, before determining all linear functions satisfying (1).
Solution 1. Substituting \(a = 0,b = n + 1\) gives \(f(f(n + 1)) = f(0) + 2f(n + 1)\) . Substituting \(a = 1,b = n\) gives \(f(f(n + 1)) = f(2) + 2f(n)\) .
In particular, \(f(0) + 2f(n + 1) = f(2) + 2f(n)\) , and so \(f(n + 1) - f(n) = \textstyle {\frac{1}{2}}\left(f(2) - f(0)\right)\) . Thus \(f(n + 1) - f(n)\) is constant, and since \(f\) is defined on \(\mathbb{Z}\) , a constant difference forces \(f\) to be a linear function; write \(f(n) = Mn + K\) for some constants \(M\) and \(K\) , and we need only determine which choices of \(M\) and \(K\) work.
Now, (1) becomes
\[2Ma + K + 2(Mb + K) = M(M(a + b) + K) + K\]
which we may rearrange to form
\[(M - 2)\big(M(a + b) + K\big) = 0.\]
Thus, either \(M = 2\) , or \(M(a + b) + K = 0\) for all values of \(a + b\) . In particular, the only possible solutions are \(f(n) = 0\) and \(f(n) = 2n + K\) for any constant \(K\in \mathbb{Z}\) , and these are easily seen to work.
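Both claimed families can be checked against (1) on a finite grid of integers; the following sketch is only a sanity check, not a proof, and also shows a non-solution failing:

```python
# Check f(2a) + 2 f(b) == f(f(a + b)) over a small grid of integers; this
# proves nothing, but catches sign errors quickly.
def check(f, rng=range(-20, 21)):
    return all(f(2*a) + 2*f(b) == f(f(a + b)) for a in rng for b in rng)

assert check(lambda n: 0)                                        # f(n) = 0
assert all(check(lambda n, K=K: 2*n + K) for K in range(-5, 6))  # f(n) = 2n + K
assert not check(lambda n: n)   # a non-solution such as f(n) = n fails
```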
Solution 2. Let \(K = f(0)\) .
First, put \(a = 0\) in (1); this gives
\[f(f(b)) = 2f(b) + K \quad (2)\]
for all \(b\in \mathbb{Z}\) .
Now put \(b = 0\) in (1); this gives
\[f(2a) + 2K = f(f(a)) = 2f(a) + K,\]
where the second equality follows from (2). Consequently,
\[f(2a) = 2f(a) - K \quad (3)\]
for all \(a\in \mathbb{Z}\) .
Substituting (2) and (3) into (1), we obtain
\[f(2a) + 2f(b) = f(f(a + b))\] \[2f(a) - K + 2f(b) = 2f(a + b) + K\] \[f(a) + f(b) = f(a + b) + K.\]
Thus, if we set \(g(n) = f(n) - K\) we see that \(g\) satisfies the Cauchy equation \(g(a + b) = g(a) + g(b)\) . The solution to the Cauchy equation over \(\mathbb{Z}\) is well- known; indeed, it may be proven by an easy induction that \(g(n) = Mn\) for each \(n \in \mathbb{Z}\) , where \(M = g(1)\) is a constant.
Therefore, \(f(n) = Mn + K\) , and we may proceed as in Solution 1.
Comment 1. Instead of deriving (3) by substituting \(b = 0\) into (1), we could instead have observed that the right hand side of (1) is symmetric in \(a\) and \(b\) , and thus
\[f(2a) + 2f(b) = f(2b) + 2f(a).\]
Thus, \(f(2a) - 2f(a) = f(2b) - 2f(b)\) for any \(a, b \in \mathbb{Z}\) , and in particular \(f(2a) - 2f(a)\) is constant. Setting \(a = 0\) shows that this constant is equal to \(- K\) , and so we obtain (3).
Comment 2. Some solutions initially prove that \(f(f(n))\) is linear (sometimes via proving that \(f(f(n)) - 3K\) satisfies the Cauchy equation). However, one can immediately prove that \(f\) is linear by substituting something of the form \(f(f(n)) = M'n + K'\) into (2).
|
IMOSL-2019-A2
|
Let \(u_{1}, u_{2}, \ldots , u_{2019}\) be real numbers satisfying
\[u_{1} + u_{2} + \dots +u_{2019} = 0\quad \mathrm{and}\quad u_{1}^{2} + u_{2}^{2} + \dots +u_{2019}^{2} = 1.\]
Let \(a = \min (u_{1}, u_{2}, \ldots , u_{2019})\) and \(b = \max (u_{1}, u_{2}, \ldots , u_{2019})\) . Prove that
\[a b\leqslant -\frac{1}{2019}.\]
|
Solution 1. Notice first that \(b > 0\) and \(a< 0\) . Indeed, since \(\sum_{i = 1}^{2019} u_{i}^{2} = 1\) , the variables \(u_{i}\) cannot be all zero, and, since \(\sum_{i = 1}^{2019} u_{i} = 0\) , the nonzero elements cannot be all positive or all negative.
Let \(P = \{i: u_{i} > 0\}\) and \(N = \{i: u_{i} \leqslant 0\}\) be the indices of positive and nonpositive elements in the sequence, and let \(p = |P|\) and \(n = |N|\) be the sizes of these sets; then \(p + n = 2019\) . By the condition \(\sum_{i = 1}^{2019} u_{i} = 0\) we have \(0 = \sum_{i = 1}^{2019} u_{i} = \sum_{i \in P} u_{i} - \sum_{i \in N} |u_{i}|\) , so
\[\sum_{i \in P} u_{i} = \sum_{i \in N} |u_{i}|. \quad (1)\]
After this preparation, estimate the sum of squares of the positive and nonpositive elements as follows:
\[\sum_{i\in P}u_{i}^{2}\leqslant \sum_{i\in P}b u_{i} = b\sum_{i\in P}u_{i} = b\sum_{i\in N}|u_{i}|\leqslant b\sum_{i\in N}|a| = -n a b; \quad (2)\]
\[\sum_{i\in N}u_{i}^{2}\leqslant \sum_{i\in N}|a|\cdot |u_{i}| = |a|\sum_{i\in N}|u_{i}| = |a|\sum_{i\in P}u_{i}\leqslant |a|\sum_{i\in P}b = -p a b. \quad (3)\]
The sum of these estimates is
\[1 = \sum_{i = 1}^{2019} u_{i}^{2} = \sum_{i\in P} u_{i}^{2} + \sum_{i\in N} u_{i}^{2} \leqslant -(p + n) a b = -2019 a b;\]
that proves \(a b \leqslant \frac{- 1}{2019}\) .
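The bound is sharp: taking one entry equal to \(b\) and the other 2018 entries equal to \(a = -b/2018\) , with \(b\) chosen so that the squares sum to 1, attains \(ab = -\frac{1}{2019}\) exactly. A floating-point check of this extremal configuration (illustration only):

```python
from math import isclose, sqrt

n = 2019
b = sqrt((n - 1) / n)   # the single positive entry: b^2 * n/(n-1) = 1
a = -b / (n - 1)        # the n-1 equal negative entries
u = [b] + [a] * (n - 1)

assert isclose(sum(u), 0, abs_tol=1e-9)     # sum is 0
assert isclose(sum(x * x for x in u), 1)    # sum of squares is 1
assert isclose(a * b, -1 / n)               # ab attains -1/2019
```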
Comment 1. After observing \(\sum_{i \in P} u_{i}^{2} \leqslant b \sum_{i \in P} u_{i}\) and \(\sum_{i \in N} u_{i}^{2} \leqslant |a| \sum_{i \in N} |u_{i}|\) , instead of (2, 3) an alternative continuation is
\[|a b| \geqslant \frac{\sum_{i \in P} u_{i}^{2}}{\sum_{i \in P} u_{i}} \cdot \frac{\sum_{i \in N} u_{i}^{2}}{\sum_{i \in N} |u_{i}|} = \frac{\sum_{i \in P} u_{i}^{2}}{\left(\sum_{i \in P} u_{i}\right)^{2}} \sum_{i \in N} u_{i}^{2} \geqslant \frac{1}{p} \sum_{i \in N} u_{i}^{2}\]
(by the AM- QM or the Cauchy- Schwarz inequality) and similarly \(|a b| \geqslant \frac{1}{n} \sum_{i \in P} u_{i}^{2}\) .
Solution 2. As in the previous solution we conclude that \(a < 0\) and \(b > 0\) .
For every index \(i\) , the number \(u_{i}\) is a convex combination of \(a\) and \(b\) , so
\[u_{i} = x_{i} a + y_{i} b \quad \text{with weights } 0 \leqslant x_{i}, y_{i} \leqslant 1 \text{ satisfying } x_{i} + y_{i} = 1.\]
Let \(X = \sum_{i = 1}^{2019} x_{i}\) and \(Y = \sum_{i = 1}^{2019} y_{i}\) . From \(0 = \sum_{i = 1}^{2019} u_{i} = \sum_{i = 1}^{2019} (x_{i} a + y_{i} b) = - |a| X + b Y\) , we get
\[|a| X = b Y. \quad (4)\]
From \(\sum_{i = 1}^{2019}(x_{i} + y_{i}) = 2019\) we have
\[X + Y = 2019. \quad (5)\]
The system of linear equations (4, 5) has a unique solution:
\[X = \frac{2019b}{|a| + b},\quad Y = \frac{2019|a|}{|a| + b}.\]
Now apply the following estimate to every \(u_{i}^{2}\) in their sum:
\[u_{i}^{2} = x_{i}^{2}a^{2} + 2x_{i}y_{i}ab + y_{i}^{2}b^{2}\leqslant x_{i}a^{2} + y_{i}b^{2};\]
we obtain that
\[1 = \sum_{i = 1}^{2019}u_{i}^{2}\leqslant \sum_{i = 1}^{2019}(x_{i}a^{2} + y_{i}b^{2}) = Xa^{2} + Yb^{2} = \frac{2019b}{|a| + b} |a|^{2} + \frac{2019|a|}{|a| + b} b^{2} = 2019|a|b = -2019ab.\]
Hence, \(a b\leqslant \frac{- 1}{2019}\)
Comment 2. The idea behind Solution 2 is the following thought. Suppose we fix \(a< 0\) and \(b > 0\) , fix \(\sum u_{i} = 0\) , and vary the \(u_{i}\) to achieve the maximum value of \(\sum u_{i}^{2}\) . If we vary any two of the \(u_{i}\) while preserving their sum, then \(\sum u_{i}^{2}\) is maximised when those two are as far apart as possible; hence at the maximum, all but at most one of the \(u_{i}\) are equal to \(a\) or \(b\) . Considering a weighted version of the problem, we see that the maximum (with fractional numbers of \(u_{i}\) having each value) is achieved when \(\frac{2019b}{|a| + b}\) of them are \(a\) and \(\frac{2019|a|}{|a| + b}\) are \(b\) .
In fact, this happens in the solution: the number \(u_{i}\) is replaced by \(x_{i}\) copies of \(a\) and \(y_{i}\) copies of \(b\) .
|
IMOSL-2019-A3
|
Let \(n \geqslant 3\) be a positive integer and let \((a_{1},a_{2},\ldots ,a_{n})\) be a strictly increasing sequence of \(n\) positive real numbers with sum equal to 2. Let \(X\) be a subset of \(\{1,2,\ldots ,n\}\) such that the value of
\[\left|1 - \sum_{i\in X}a_{i}\right|\]
is minimised. Prove that there exists a strictly increasing sequence of \(n\) positive real numbers \((b_{1},b_{2},\ldots ,b_{n})\) with sum equal to 2 such that
\[\sum_{i\in X}b_{i} = 1.\]
|
Common remarks. In all solutions, we say an index set \(X\) is \((a_{i})\) - minimising if it has the property in the problem for the given sequence \((a_{i})\) . Write \(X^{c}\) for the complement of \(X\) , and \([a,b]\) for the interval of integers \(k\) such that \(a\leqslant k\leqslant b\) . Note that
\[\left|1 - \sum_{i\in X}a_{i}\right| = \left|1 - \sum_{i\in X^{c}}a_{i}\right|,\]
so we may exchange \(X\) and \(X^{c}\) where convenient. Let
\[\Delta = \sum_{i\in X^{c}}a_{i} - \sum_{i\in X}a_{i}\]
and note that \(X\) is \((a_{i})\) - minimising if and only if it minimises \(|\Delta |\) , and that \(\sum_{i\in X}a_{i} = 1\) if and only if \(\Delta = 0\) .
In some solutions, a scaling process is used. If we have a strictly increasing sequence of positive real numbers \(c_{i}\) (typically obtained by perturbing the \(a_{i}\) in some way) such that
\[\sum_{i\in X}c_{i} = \sum_{i\in X^{c}}c_{i},\]
then we may put \(b_{i} = 2c_{i} / \sum_{j = 1}^{n}c_{j}\) . So it suffices to construct such a sequence without needing its sum to be 2.
The solutions below show various possible approaches to the problem. Solutions 1 and 2 perturb a few of the \(a_{i}\) to form the \(b_{i}\) (with scaling in the case of Solution 1, without scaling in the case of Solution 2). Solutions 3 and 4 look at properties of the index set \(X\) . Solution 3 then perturbs many of the \(a_{i}\) to form the \(b_{i}\) , together with scaling. Rather than using such perturbations, Solution 4 constructs a sequence \((b_{i})\) directly from the set \(X\) with the required properties. Solution 4 can be used to give a complete description of sets \(X\) that are \((a_{i})\) - minimising for some \((a_{i})\) .
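The scaling step can be seen on a toy example (the numbers below are illustrative, not from the problem): given \((c_{i})\) with equal sums over \(X\) and \(X^{c}\) , setting \(b_{i} = 2c_{i}/\sum_{j}c_{j}\) preserves strict monotonicity, rescales the total to 2, and makes the \(X\) -sum equal to 1.

```python
from fractions import Fraction

# a toy (c_i): strictly increasing, with equal sums over X and its complement
c = [Fraction(1), Fraction(2), Fraction(3), Fraction(4)]
X = {0, 3}  # 0-based indices; c-sum over X is 1 + 4 = 5 = 2 + 3
assert sum(c[i] for i in X) == sum(c[i] for i in range(len(c)) if i not in X)

total = sum(c)
b = [2 * ci / total for ci in c]               # b_i = 2 c_i / sum_j c_j
assert sum(b) == 2                             # total rescaled to 2
assert sum(b[i] for i in X) == 1               # the required property
assert all(u < v for u, v in zip(b, b[1:]))    # still strictly increasing
```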
Solution 1. Without loss of generality, assume \(\textstyle \sum_{i\in X}a_{i}\leqslant 1\) , and we may assume strict inequality as otherwise \(b_{i} = a_{i}\) works. Also, \(X\) clearly cannot be empty.
If \(n\in X\) , add \(\Delta\) to \(a_{n}\) , producing a sequence \((c_{i})\) with \(\textstyle \sum_{i\in X}c_{i} = \sum_{i\in X^{c}}c_{i}\) , and then scale as described above to make the sum equal to 2. Otherwise, there is some \(k\) with \(k\in X\) and \(k + 1\in X^{c}\) . Let \(\delta = a_{k + 1} - a_{k}\) .
- If \(\delta >\Delta\) , add \(\Delta\) to \(a_{k}\) and then scale.
- If \(\delta < \Delta\) , then considering \(X\cup \{k + 1\} \setminus \{k\}\) contradicts \(X\) being \((a_{i})\) -minimising.
- If \(\delta = \Delta\) , choose any \(j\neq k,k + 1\) (possible since \(n\geqslant 3\) ), and any \(\epsilon\) less than the least of \(a_{1}\) and all the differences \(a_{i + 1} - a_{i}\) . If \(j\in X\) then add \(\Delta - \epsilon\) to \(a_{k}\) and \(\epsilon\) to \(a_{j}\) , then scale; otherwise, add \(\Delta\) to \(a_{k}\) and \(\epsilon /2\) to \(a_{k + 1}\) , and subtract \(\epsilon /2\) from \(a_{j}\) , then scale.
Solution 2. This is similar to Solution 1, but without scaling. As in that solution, without loss of generality, assume \(\sum_{i\in X}a_{i}< 1\) .
Suppose there exists \(1\leqslant j\leqslant n - 1\) such that \(j\in X\) but \(j + 1\in X^{c}\) . Then \(a_{j + 1} - a_{j}\geqslant \Delta\) because otherwise considering \(X\cup \{j + 1\} \setminus \{j\}\) contradicts \(X\) being \((a_{i})\) - minimising.
If \(a_{j + 1} - a_{j} > \Delta\) , put
\[
b_i =
\begin{cases}
a_j + \dfrac{\Delta}{2}, & \text{if } i = j, \\
a_{j+1} - \dfrac{\Delta}{2}, & \text{if } i = j + 1, \\
a_i, & \text{otherwise.}
\end{cases}
\]
If \(a_{j + 1} - a_{j} = \Delta\) , choose any \(\epsilon\) less than the least of \(\Delta /2\) , \(a_{1}\) and all the differences \(a_{i + 1} - a_{i}\) . If \(|X|\geqslant 2\) , choose \(k\in X\) with \(k\neq j\) , and put
\[b_{i}=\left\{\begin{array}{l l}{a_{j}+\Delta/2-\epsilon,}&{\mathrm{if~}i=j;}\\ {a_{j+1}-\Delta/2,}&{\mathrm{if~}i=j+1;}\\ {a_{k}+\epsilon,}&{\mathrm{if~}i=k;}\\ {a_{i},}&{\mathrm{otherwise}.}\end{array}\right.\]
Otherwise, \(|X^{c}|\geqslant 2\) , so choose \(k\in X^{c}\) with \(k\neq j + 1\) , and put
\[b_{i}=\left\{\begin{array}{l l}{a_{j}+\Delta/2,}&{\mathrm{if~}i=j;}\\ {a_{j+1}-\Delta/2+\epsilon,}&{\mathrm{if~}i=j+1;}\\ {a_{k}-\epsilon,}&{\mathrm{if~}i=k;}\\ {a_{i},}&{\mathrm{otherwise}.}\end{array}\right.\]
If there is no \(1\leqslant j\leqslant n - 1\) such that \(j\in X\) but \(j + 1\in X^{c}\) , there must be some \(1< k\leqslant n\) such that \(X = [k,n]\) (certainly \(X\) cannot be empty). We must have \(a_{1} > \Delta\) , as otherwise considering \(X\cup \{1\}\) contradicts \(X\) being \((a_{i})\) - minimising. Now put
\[
b_i =
\begin{cases}
a_1 - \dfrac{\Delta}{2}, & \text{if } i = 1, \\
a_n + \dfrac{\Delta}{2}, & \text{if } i = n, \\
a_i, & \text{otherwise.}
\end{cases}
\]
Solution 3. Without loss of generality, assume \(\textstyle \sum_{i\in X}a_{i}\leqslant 1\) , so \(\Delta \geqslant 0\) . If \(\Delta = 0\) we can take \(b_{i} = a_{i}\) , so now assume that \(\Delta >0\) .
Suppose that there is some \(k\leqslant n\) such that \(|X\cap [k,n]| > |X^{c}\cap [k,n]|\) . If we choose the largest such \(k\) then \(|X\cap [k,n]| - |X^{c}\cap [k,n]| = 1\) . We can now find the required sequence \((b_{i})\) by starting with \(c_{i} = a_{i}\) for \(i< k\) and \(c_{i} = a_{i} + \Delta\) for \(i\geqslant k\) , and then scaling as described above.
If no such \(k\) exists, we will derive a contradiction. For each \(i\in X\) we can choose \(i< j_{i}\leqslant n\) in such a way that \(j_{i}\in X^{c}\) and all the \(j_{i}\) are different. (For instance, note that necessarily \(n\in X^{c}\) and now just work downwards; each time an \(i\in X\) is considered, let \(j_{i}\) be the least element of \(X^{c}\) greater than \(i\) and not yet used.) Let \(Y\) be the (possibly empty) subset of \([1,n]\) consisting of those elements in \(X^{c}\) that are also not one of the \(j_{i}\) . In any case
\[\Delta = \sum_{i\in X}(a_{j_{i}} - a_{i}) + \sum_{j\in Y}a_{j}\]
where each term in the sums is positive. Since \(n\geqslant 3\) the total number of terms above is at least two. Take a least such term and its corresponding index \(i\) and consider the set \(Z\) which we form from \(X\) by removing \(i\) and adding \(j_{i}\) (if it is a term of the first type) or just by adding \(j\) if it is a term of the second type. The corresponding expression of \(\Delta\) for \(Z\) has the sign of its least term changed, meaning that the sum is still nonnegative but strictly less than \(\Delta\) , which contradicts \(X\) being \((a_{i})\) - minimising.
Solution 4. This uses some similar ideas to Solution 3, but describes properties of the index sets \(X\) that are sufficient to describe a corresponding sequence \((b_{i})\) that is not derived from \((a_{i})\) .
Note that, for two subsets \(X\) , \(Y\) of \([1, n]\) , the following are equivalent:
- \(|X \cap [i, n]| \leqslant |Y \cap [i, n]|\) for all \(1 \leqslant i \leqslant n\) ;
- \(Y\) is at least as large as \(X\) , and for all \(1 \leqslant j \leqslant |X|\) , the \(j^{\text{th}}\) largest element of \(Y\) is at least as big as the \(j^{\text{th}}\) largest element of \(X\) ;
- there is an injective function \(f: X \to Y\) such that \(f(i) \geqslant i\) for all \(i \in X\) .
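For small \(n\) the equivalence of the three conditions above can be confirmed by brute force; the sketch below is an illustration, with a greedy matching standing in for the injection of the third condition, checking all pairs of subsets of \(\{1,\ldots ,6\}\) :

```python
from itertools import combinations

n = 6  # brute-force range; any small n works

def cond1(X, Y):  # tail counts: |X ∩ [i, n]| <= |Y ∩ [i, n]| for all i
    return all(sum(x >= i for x in X) <= sum(y >= i for y in Y)
               for i in range(1, n + 1))

def cond2(X, Y):  # the j-th largest of Y dominates the j-th largest of X
    xs, ys = sorted(X, reverse=True), sorted(Y, reverse=True)
    return len(xs) <= len(ys) and all(x <= y for x, y in zip(xs, ys))

def cond3(X, Y):  # greedy construction of an injection f with f(i) >= i
    free = sorted(Y, reverse=True)
    for x in sorted(X, reverse=True):
        match = next((y for y in free if y >= x), None)
        if match is None:
            return False
        free.remove(match)
    return True

subsets = [set(s) for r in range(n + 1)
           for s in combinations(range(1, n + 1), r)]
assert all(cond1(X, Y) == cond2(X, Y) == cond3(X, Y)
           for X in subsets for Y in subsets)
```

The greedy choice works here because the candidate sets \(Y \cap [x, n]\) are nested, so matching the largest elements first never blocks a smaller one unnecessarily.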
If these equivalent conditions are satisfied, we write \(X \preceq Y\) . We write \(X \prec Y\) if \(X \preceq Y\) and \(X \neq Y\) .
Note that if \(X \prec Y\) , then \(\sum_{i \in X} a_{i} < \sum_{i \in Y} a_{i}\) (the second description above makes this clear).
We claim first that, if \(n \geqslant 3\) and \(X \prec X^{c}\) , then there exists \(Y\) with \(X \prec Y \prec X^{c}\) . Indeed, as \(|X| \leqslant |X^{c}|\) , we have \(|X^{c}| \geqslant 2\) . Define \(Y\) to consist of the largest element of \(X^{c}\) , together with all but the largest element of \(X\) ; it is clear both that \(Y\) is distinct from \(X\) and \(X^{c}\) , and that \(X \preceq Y \preceq X^{c}\) , which is what we need.
But, in this situation, we have
\[\sum_{i \in X} a_{i} < \sum_{i \in Y} a_{i} < \sum_{i \in X^{c}} a_{i} \qquad \text{and} \qquad 1 - \sum_{i \in X} a_{i} = - \left(1 - \sum_{i \in X^{c}} a_{i}\right),\]
so \(|1 - \sum_{i \in Y} a_{i}| < |1 - \sum_{i \in X} a_{i}|\) .
Hence if \(X\) is \((a_{i})\) - minimising, we do not have \(X \prec X^{c}\) , and similarly we do not have \(X^{c} \prec X\) .
Considering the first description above, this immediately implies the following Claim.
Claim. There exist \(1 \leqslant k, \ell \leqslant n\) such that \(|X \cap [k, n]| > \frac{n - k + 1}{2}\) and \(|X \cap [\ell , n]| < \frac{n - \ell + 1}{2}\) .
We now construct our sequence \((b_{i})\) using this claim. Let \(k\) and \(\ell\) be the greatest values satisfying the claim, and without loss of generality suppose \(k = n\) and \(\ell < n\) (otherwise replace \(X\) by its complement). As \(\ell\) is maximal, \(n - \ell\) is even and \(|X \cap [\ell , n]| = \frac{n - \ell}{2}\) . For sufficiently small positive \(\epsilon\) , we take
\[
b_i = i\varepsilon +
\begin{cases}
0, & \text{if } i < \ell, \\
\delta, & \text{if } \ell \le i \le n - 1, \\
\gamma, & \text{if } i = n.
\end{cases}
\]
Let \(M = \sum_{i \in X} i\) . So we require
\[M \epsilon + \left(\frac{n - \ell}{2} -1\right) \delta + \gamma = 1\]
and
\[\frac{n(n + 1)}{2} \epsilon + (n - \ell) \delta + \gamma = 2.\]
These give
\[\gamma = 2 \delta + \left(\frac{n(n + 1)}{2} -2 M\right) \epsilon\]
and for sufficiently small positive \(\epsilon\) , solving for \(\gamma\) and \(\delta\) gives \(0 < \delta < \gamma\) (since \(\epsilon = 0\) gives \(\delta = 1 / (\frac{n - \ell}{2} + 1)\) and \(\gamma = 2\delta\) ), so the sequence is strictly increasing and has positive values.
Comment. This solution also shows that the claim gives a complete description of sets \(X\) that are \((a_{i})\) - minimising for some \((a_{i})\) .
Another approach to proving the claim is as follows. We prove the existence of \(\ell\) with the claimed property; the existence of \(k\) follows by considering the complement of \(X\) .
Suppose, for a contradiction, that for all \(1\leqslant \ell \leqslant n\) we have \(|X\cap [\ell ,n]|\geqslant \left\lceil \frac{n - \ell + 1}{2}\right\rceil\) . If we ever have strict inequality, consider the set \(Y = \{n,n - 2,n - 4,\ldots \}\) . This set may be obtained from \(X\) by possibly removing some elements and reducing the values of others. (To see this, consider the largest \(k\in X\backslash Y\) , if any; remove it, and replace it by the greatest \(j\in X^{c}\) with \(j< k\) , if any. Such steps preserve the given inequality, and are possible until we reach the set \(Y\) .) So if we had strict inequality, and so \(X\neq Y\) , we have
\[\sum_{i\in X}a_{i} > \sum_{i\in Y}a_{i} > 1,\]
contradicting \(X\) being \((a_{i})\) - minimising. Otherwise, we always have equality, meaning that \(X = Y\) . But now consider \(Z = Y\cup \{n - 1\} \backslash \{n\}\) . Since \(n\geqslant 3\) , we have
\[\sum_{i\in Y}a_{i} > \sum_{i\in Z}a_{i} > \sum_{i\in Y^{c}}a_{i} = 2 - \sum_{i\in Y}a_{i},\]
and so \(Z\) contradicts \(X\) being \((a_{i})\) - minimising.
|
IMOSL-2019-A4
|
Let \(n \geqslant 2\) be a positive integer and \(a_{1}, a_{2}, \ldots , a_{n}\) be real numbers such that
\[a_{1} + a_{2} + \dots +a_{n} = 0.\]
Define the set \(A\) by
\[A = \{(i,j) \mid 1 \leqslant i < j \leqslant n, |a_{i} - a_{j}| \geqslant 1\} .\]
Prove that, if \(A\) is not empty, then
\[\sum_{(i,j)\in A}a_{i}a_{j}< 0.\]
|
Solution 1. Define sets \(B\) and \(C\) by
\[B = \{(i,j) \mid 1 \leqslant i,j \leqslant n, |a_{i} - a_{j}| \geqslant 1\},\] \[C = \{(i,j) \mid 1 \leqslant i,j \leqslant n, |a_{i} - a_{j}| < 1\} .\]
We have
\[\sum_{(i,j)\in A}a_{i}a_{j} = \frac{1}{2}\sum_{(i,j)\in B}a_{i}a_{j}\] \[\sum_{(i,j)\in B}a_{i}a_{j} = \sum_{1\leqslant i,j\leqslant n}a_{i}a_{j} - \sum_{(i,j)\notin B}a_{i}a_{j} = 0 - \sum_{(i,j)\in C}a_{i}a_{j}.\]
So it suffices to show that if \(A\) (and hence \(B\) ) is nonempty, then
\[\sum_{(i,j)\in C}a_{i}a_{j} > 0.\]
Partition the indices into sets \(P\) , \(Q\) , \(R\) , and \(S\) such that
\[P = \{i \mid a_{i} \leqslant -1\} \qquad R = \{i \mid 0 < a_{i} < 1\} \] \[Q = \{i \mid -1 < a_{i} \leqslant 0\} \qquad S = \{i \mid 1 \leqslant a_{i}\} .\]
Then
\[\sum_{(i,j)\in C}a_{i}a_{j}\geqslant \sum_{i\in P\cup S}a_{i}^{2} + \sum_{i,j\in Q\cup R}a_{i}a_{j} = \sum_{i\in P\cup S}a_{i}^{2} + \left(\sum_{i\in Q\cup R}a_{i}\right)^{2}\geqslant 0.\]
The first inequality holds because all of the positive terms in the RHS are also in the LHS, and all of the negative terms in the LHS are also in the RHS. The first inequality attains equality only if both sides have the same negative terms, which implies \(|a_{i} - a_{j}| < 1\) whenever \(i,j \in Q \cup R\) ; the second inequality attains equality only if \(P = S = \emptyset\) . But then we would have \(A = \emptyset\) . So \(A\) nonempty implies that the inequality holds strictly, as required.
Solution 2. Consider \(P, Q, R, S\) as in Solution 1, set
\[p = \sum_{i\in P}a_{i},\quad q = \sum_{i\in Q}a_{i},\quad r = \sum_{i\in R}a_{i},\quad s = \sum_{i\in S}a_{i},\]
and let
\[t_{+} = \sum_{(i,j)\in A, a_{i}a_{j}\geqslant 0}a_{i}a_{j},\quad t_{-} = \sum_{(i,j)\in A, a_{i}a_{j}\leqslant 0}a_{i}a_{j}.\]
We know that \(p + q + r + s = 0\) , and we need to prove that \(t_{+} + t_{- } < 0\) .
Notice that \(t_{+} \leqslant p^{2} / 2 + pq + rs + s^{2} / 2\) (with equality only if \(p = s = 0\) ), and \(t_{- } \leqslant pr + ps + qs\) (with equality only if there do not exist \(i \in Q\) and \(j \in R\) with \(a_{j} - a_{i} > 1\) ). Therefore,
\[t_{+} + t_{-} \leqslant \frac{p^{2} + s^{2}}{2} +pq + rs + pr + ps + qs = \frac{(p + q + r + s)^{2}}{2} - \frac{(q + r)^{2}}{2} = -\frac{(q + r)^{2}}{2} \leqslant 0.\]
If \(A\) is not empty and \(p = s = 0\) , then there must exist \(i \in Q, j \in R\) with \(|a_{i} - a_{j}| > 1\) , and hence the earlier equality conditions cannot both occur.
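The algebraic identity in the last display can be double-checked by evaluating both sides at random integer points (a sanity check, not part of the proof):

```python
from random import randint, seed

# Both sides of the identity are quadratic forms in (p, q, r, s); agreement
# at many random integer points is a strong consistency check.  Everything is
# multiplied by 2 to stay in exact integer arithmetic.
seed(0)
for _ in range(1000):
    p, q, r, s = (randint(-50, 50) for _ in range(4))
    lhs2 = p*p + s*s + 2 * (p*q + r*s + p*r + p*s + q*s)
    rhs2 = (p + q + r + s)**2 - (q + r)**2
    assert lhs2 == rhs2
```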
Comment. The RHS of the original inequality cannot be replaced with any constant \(c < 0\) (independent of \(n\) ). Indeed, take
\[a_{1} = -\frac{n}{n + 2},a_{2} = \dots = a_{n - 1} = \frac{1}{n + 2},a_{n} = \frac{2}{n + 2}.\]
Then \(\sum_{(i,j)\in A}a_{i}a_{j} = -\frac{2n}{(n + 2)^{2}}\) , which converges to zero as \(n \to \infty\) .
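In this example only the pair \((1, n)\) lies in \(A\) : \(|a_{1} - a_{n}| = 1\) , while every other gap is at most \(\frac{n + 1}{n + 2} < 1\) , so the sum is exactly \(a_{1}a_{n} = -\frac{2n}{(n + 2)^{2}}\) . An exact check with rationals (illustration only):

```python
from fractions import Fraction

def A_sum(n):
    # the sequence from the Comment: a_1 = -n/(n+2), middle terms 1/(n+2),
    # and a_n = 2/(n+2)
    a = ([Fraction(-n, n + 2)]
         + [Fraction(1, n + 2)] * (n - 2)
         + [Fraction(2, n + 2)])
    assert sum(a) == 0
    A = [(i, j) for i in range(n) for j in range(i + 1, n)
         if abs(a[i] - a[j]) >= 1]
    return A, sum(a[i] * a[j] for i, j in A)

for n in (5, 10, 50):
    A, total = A_sum(n)
    assert A == [(0, n - 1)]                       # only the extreme pair is in A
    assert total == Fraction(-2 * n, (n + 2)**2)   # tends to 0 as n grows
```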
|
IMOSL-2019-A5
|
Let \(x_{1}\) , \(x_{2}\) , ..., \(x_{n}\) be different real numbers. Prove that
\[\sum_{1\leqslant i\leqslant n}\prod_{j\neq i}\frac{1 - x_{i}x_{j}}{x_{i} - x_{j}} = \left\{ \begin{array}{ll}0, & \mathrm{if~}n\mathrm{~is~even};\\ 1, & \mathrm{if~}n\mathrm{~is~odd}. \end{array} \right.\]
|
Common remarks. Let \(G(x_{1}, x_{2}, \ldots , x_{n})\) be the function of the \(n\) variables \(x_{1}, x_{2}, \ldots , x_{n}\) on the LHS of the required identity.
Solution 1 (Lagrange interpolation). Since both sides of the identity are rational functions, it suffices to prove it when all \(x_{i} \notin \{\pm 1\}\) . Define
\[f(t) = \prod_{i = 1}^{n}(1 - x_{i}t),\]
and note that
\[f(x_{i}) = (1 - x_{i}^{2})\prod_{j\neq i}(1 - x_{i}x_{j}).\]
Using the nodes \(+1, - 1, x_{1}, \ldots , x_{n}\) , the Lagrange interpolation formula gives us the following expression for \(f\) :
\[\sum_{i = 1}^{n}f(x_{i})\frac{(x - 1)(x + 1)}{(x_{i} - 1)(x_{i} + 1)}\prod_{j\neq i}\frac{x - x_{j}}{x_{i} - x_{j}} +f(1)\frac{x + 1}{1 + 1}\prod_{1\leqslant i\leqslant n}\frac{x - x_{i}}{1 - x_{i}} +f(-1)\frac{x - 1}{-1 - 1}\prod_{1\leqslant i\leqslant n}\frac{x - x_{i}}{-1 - x_{i}}.\]
The interpolation expression above is a polynomial in \(x\) of degree at most \(n + 1\) , and it equals \(f(x)\) , which has degree \(n\) ; hence the coefficient of \(x^{n + 1}\) in it is 0. Extracting that coefficient gives
\[0 = \sum_{1\leqslant i\leqslant n}\frac{f(x_{i})}{\prod_{j\neq i}(x_{i} - x_{j})\cdot(x_{i} - 1)(x_{i} + 1)} +\frac{f(1)}{\prod_{1\leqslant j\leqslant n}(1 - x_{j})\cdot(1 + 1)} +\frac{f(-1)}{\prod_{1\leqslant j\leqslant n}(-1 - x_{j})\cdot(-1 - 1)}\] \[= -G(x_{1},\ldots ,x_{n}) + \frac{1}{2} +\frac{(-1)^{n + 1}}{2},\]
so \(G(x_{1},\ldots ,x_{n}) = \frac{1}{2} + \frac{(-1)^{n + 1}}{2}\) , which equals 1 if \(n\) is odd and 0 if \(n\) is even, as required.
Comment. The main difficulty is to think of including the two extra nodes \(\pm 1\) and of evaluating the coefficient of \(x^{n + 1}\) in the interpolation expression for \(f\) , even though \(n + 1\) exceeds the degree of \(f\) .
It is possible to solve the problem using Lagrange interpolation on the nodes \(x_{1}, \ldots , x_{n}\) , but the definition of the polynomial being interpolated should depend on the parity of \(n\) . For \(n\) even, consider the polynomial
\[P(x) = \prod_{i}(1 - x x_{i}) - \prod_{i}(x - x_{i}).\]
Lagrange interpolation shows that \(G\) is the coefficient of \(x^{n - 1}\) in the polynomial \(P(x) / (1 - x^{2})\) , i.e. 0. For \(n\) odd, consider the polynomial
\[P(x) = \prod_{i}(1 - x x_{i}) - x\prod_{i}(x - x_{i}).\]
Now \(G\) is the coefficient of \(x^{n - 1}\) in \(P(x) / (1 - x^{2})\) , which is 1.
Solution 2 (using symmetries). Observe that \(G\) is symmetric in the variables \(x_{1},\ldots ,x_{n}\) . Define \(V = \prod_{i< j}(x_{j} - x_{i})\) and let \(F = G\cdot V\) , which is a polynomial in \(x_{1},\ldots ,x_{n}\) . Since \(V\) is alternating, \(F\) is also alternating (meaning that, if we exchange any two variables, then \(F\) changes sign). Every alternating polynomial in \(n\) variables \(x_{1},\ldots ,x_{n}\) vanishes when any two variables \(x_{i}\) , \(x_{j}\) \((i\neq j)\) are equal, and is therefore divisible by \(x_{i} - x_{j}\) for each pair \(i\neq j\) . Since these linear factors are pairwise coprime, \(V\) divides \(F\) exactly as a polynomial. Thus \(G\) is in fact a symmetric polynomial in \(x_{1},\ldots ,x_{n}\) .
Now observe that if all \(x_{i}\) are nonzero and we set \(y_{i} = 1 / x_{i}\) for \(i = 1,\ldots ,n\) , then we have
\[\frac{1 - y_{i}y_{j}}{y_{i} - y_{j}} = \frac{1 - x_{i}x_{j}}{x_{i} - x_{j}},\]
so that
\[G\left(\frac{1}{x_{1}},\ldots ,\frac{1}{x_{n}}\right) = G(x_{1},\ldots ,x_{n}).\]
By continuity this is an identity of rational functions. Since \(G\) is a polynomial, it implies that \(G\) is constant. (If \(G\) were not constant, we could choose a point \((c_{1},\ldots ,c_{n})\) with all \(c_{i}\neq 0\) such that \(G(c_{1},\ldots ,c_{n})\neq G(0,\ldots ,0)\) ; then \(g(x):= G(c_{1}x,\ldots ,c_{n}x)\) would be a nonconstant polynomial in the variable \(x\) , so \(|g(x)|\to \infty\) as \(x\to \infty\) , hence \(\left|G\left(\frac{y}{c_{1}},\ldots ,\frac{y}{c_{n}}\right)\right|\to \infty\) as \(y\to 0\) which is impossible since \(G\) is a polynomial.)
We may identify the constant by substituting \(x_{i} = \zeta^{i}\) , where \(\zeta\) is a primitive \(n^{\mathrm{th}}\) root of unity in \(\mathbb{C}\) . In the \(i^{\mathrm{th}}\) term of the sum in the original expression we have a factor \(1 - \zeta^{i}\zeta^{n - i} = 0\) , unless \(i = n\) or \(2i = n\) . In the case where \(n\) is odd, the only exceptional term is \(i = n\) , which gives the value \(\prod_{j\neq n}\frac{1 - \zeta^{j}}{1 - \zeta^{j}} = 1\) . When \(n\) is even, we also have the term \(i = \frac{n}{2}\) , which gives \(\prod_{j\neq n/2}\frac{1 + \zeta^{j}}{-1 - \zeta^{j}} = (- 1)^{n - 1} = - 1\) , so the sum is 0.
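The identity itself is easy to test with exact rational arithmetic, which is a convenient check on the sign conventions used above (illustration only):

```python
from fractions import Fraction
from math import prod

# evaluate the left-hand side of the identity exactly at distinct rationals
def G(xs):
    xs = [Fraction(x) for x in xs]
    return sum(prod((1 - x_i * x_j) / (x_i - x_j)
                    for j, x_j in enumerate(xs) if j != i)
               for i, x_i in enumerate(xs))

assert G([2, 3, 5, 7]) == 0                          # n = 4, even
assert G([2, 3, 5, 7, 11]) == 1                      # n = 5, odd
assert G([Fraction(1, 2), -3, Fraction(7, 5)]) == 1  # rationals work too
```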
Comment. If we write out an explicit expression for \(F\) ,
\[F = \sum_{1\leqslant i\leqslant n}(-1)^{n - i}\prod_{\stackrel{j< k}{j,k\neq i}}(x_{k} - x_{j})\prod_{j\neq i}(1 - x_{i}x_{j})\]
then to prove directly that \(F\) vanishes when \(x_{i} = x_{j}\) for some \(i\neq j\) , but no other pair of variables coincide, we have to check carefully that the two nonzero terms in this sum cancel.
A different and slightly less convenient way to identify the constant is to substitute \(x_{i} = 1 + \epsilon \zeta^{i}\) , and throw away terms that are \(O(\epsilon)\) as \(\epsilon \to 0\) .
Solution 3 (breaking symmetry). Consider \(G\) as a rational function in \(x_{n}\) with coefficients that are rational functions in the other variables. We can write
\[G(x_{1},\ldots ,x_{n}) = \frac{P(x_{n})}{\prod_{j\neq n}(x_{n} - x_{j})}\]
where \(P(x_{n})\) is a polynomial in \(x_{n}\) whose coefficients are rational functions in the other variables. We then have
\[P(x_{n}) = \left(\prod_{j\neq n}(1 - x_{n}x_{j})\right) + \sum_{1\leqslant i\leqslant n - 1}(x_{i}x_{n} - 1)\left(\prod_{j\neq i,n}(x_{n} - x_{j})\right)\left(\prod_{j\neq i,n}\frac{1 - x_{i}x_{j}}{x_{i} - x_{j}}\right).\]
For any \(k\neq n\) , substituting \(x_{n} = x_{k}\) (which is valid when manipulating the numerator \(P(x_{n})\)
on its own), we have (noting that \(x_{n} - x_{j}\) vanishes when \(j = k\) )
\[P(x_{k}) = \left(\prod_{j\neq n}(1 - x_{k}x_{j})\right) + \sum_{1\leqslant i\leqslant n - 1}(x_{i}x_{k} - 1)\left(\prod_{j\neq i,n}(x_{k} - x_{j})\right)\left(\prod_{j\neq i,n}\frac{1 - x_{i}x_{j}}{x_{i} - x_{j}}\right)\] \[\qquad = \left(\prod_{j\neq n}(1 - x_{k}x_{j})\right) + \left(x_{k}^{2} - 1\right)\left(\prod_{j\neq k,n}(x_{k} - x_{j})\right)\left(\prod_{j\neq k,n}\frac{1 - x_{k}x_{j}}{x_{k} - x_{j}}\right)\] \[\qquad = \left(\prod_{j\neq n}(1 - x_{k}x_{j})\right) + \left(x_{k}^{2} - 1\right)\left(\prod_{j\neq k,n}(1 - x_{k}x_{j})\right)\] \[\qquad = 0.\]
Note that \(P\) is a polynomial in \(x_{n}\) of degree \(n - 1\) . For any choice of distinct real numbers \(x_{1}, \ldots , x_{n - 1}\) , \(P\) has those real numbers as its roots, and the denominator has the same degree and the same roots. This shows that \(G\) is constant in \(x_{n}\) , for any fixed choice of distinct \(x_{1}, \ldots , x_{n - 1}\) . Now, \(G\) is symmetric in all \(n\) variables, so it must also be constant in each of the other variables. \(G\) is therefore a constant that depends only on \(n\) . The constant may be identified as in the previous solution.
Comment. There is also a solution in which we recognise the expression for \(F\) in the comment after Solution 2 as the final column expansion of a certain matrix obtained by modifying the final column of the Vandermonde matrix. The task is then to show that the matrix can be modified by column operations either to make the final column identically zero (in the case where \(n\) even) or to recover the Vandermonde matrix (in the case where \(n\) odd). The polynomial \(P / (1 - x^{2})\) is helpful for this task, where \(P\) is the parity- dependent polynomial defined in the comment after Solution 1.
|
IMOSL-2019-A6
|
A polynomial \(P(x,y,z)\) in three variables with real coefficients satisfies the identities
\[P(x,y,z) = P(x,y,xy - z) = P(x,zx - y,z) = P(yz - x,y,z). \quad (*)\]
Prove that there exists a polynomial \(F(t)\) in one variable such that
\[P(x,y,z) = F(x^{2} + y^{2} + z^{2} - xyz).\]
|
Common remarks. The polynomial \(x^{2} + y^{2} + z^{2} - xyz\) satisfies the condition \((*)\) , so every polynomial of the form \(F(x^{2} + y^{2} + z^{2} - xyz)\) does satisfy \((*)\) . We will use without comment the fact that two polynomials have the same coefficients if and only if they are equal as functions.
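That \(x^{2} + y^{2} + z^{2} - xyz\) satisfies \((*)\) is a one-line computation, e.g. \(x^{2} + y^{2} + (xy - z)^{2} - xy(xy - z) = x^{2} + y^{2} + z^{2} - xyz\) ; it can also be checked exactly at random rational points (a sketch, not part of the solution):

```python
from fractions import Fraction
from random import randint, seed

def Q(x, y, z):
    return x*x + y*y + z*z - x*y*z

seed(1)
for _ in range(500):
    x, y, z = (Fraction(randint(-9, 9), randint(1, 9)) for _ in range(3))
    # all three substitutions from (*) preserve Q
    assert Q(x, y, z) == Q(x, y, x*y - z) == Q(x, z*x - y, z) == Q(y*z - x, y, z)
```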
Solution 1. In the first two steps, we deal with any polynomial \(P(x,y,z)\) satisfying \(P(x,y,z) =\) \(P(x,y,xy - z)\) . Call such a polynomial weakly symmetric, and call a polynomial satisfying the full conditions in the problem symmetric.
Step 1. We start with the description of weakly symmetric polynomials. We claim that they are exactly the polynomials in \(x\) , \(y\) , and \(z(xy - z)\) . Clearly, all such polynomials are weakly symmetric. For the converse statement, consider \(P_{1}(x,y,z):= P(x,y,z + \textstyle {\frac{1}{2}}xy)\) , which satisfies \(P_{1}(x,y,z) = P_{1}(x,y, - z)\) and is therefore a polynomial in \(x,y\) , and \(z^{2}\) . This means that \(P\) is a polynomial in \(x\) , \(y\) , and \((z - \textstyle {\frac{1}{2}}xy)^{2} = - z(xy - z) + \textstyle {\frac{1}{4}}x^{2}y^{2}\) , and therefore a polynomial in \(x\) , \(y\) , and \(z(xy - z)\) .
Step 2. Suppose that \(P\) is weakly symmetric. Consider the monomials in \(P(x,y,z)\) of highest total degree. Our aim is to show that in each such monomial \(\mu x^{a}y^{b}z^{c}\) we have \(a,b\geqslant c\) . Consider the expansion
\[P(x,y,z) = \sum_{i,j,k}\mu_{i,j,k}x^{i}y^{j}\big(z(xy - z)\big)^{k}. \quad (1.1)\]
The maximal total degree of a summand in (1.1) is \(m = \max \{i + j + 3k: \mu_{i,j,k}\neq 0\}\) . Now, for any \(i,j,k\) satisfying \(i + j + 3k = m\) the summand \(\mu_{i,j,k}x^{i}y^{j}\big(z(xy - z)\big)^{k}\) has leading term of the form \(\mu x^{i + k}y^{j + k}z^{k}\) . No other nonzero summand in (1.1) may have a term of this form in its expansion, hence this term does not cancel in the whole sum. Therefore, \(\deg P = m\) , and the leading component of \(P\) is exactly
\[\sum_{i + j + 3k = m}\mu_{i,j,k}x^{i + k}y^{j + k}z^{k},\]
and each summand in this sum satisfies the condition claimed above.
Step 3. We now prove the problem statement by induction on \(m = \deg P\) . For \(m = 0\) the claim is trivial. Consider now a symmetric polynomial \(P\) with \(\deg P > 0\) . By Step 2, each of its monomials \(\mu x^{a}y^{b}z^{c}\) of the highest total degree satisfies \(a,b\geqslant c\) . Applying other weak symmetries, we obtain \(a,c\geqslant b\) and \(b,c\geqslant a\) ; therefore, \(P\) has a unique leading monomial of the form \(\mu (xyz)^{c}\) . The polynomial \(P_{0}(x,y,z) = P(x,y,z) - \mu (xyz - x^{2} - y^{2} - z^{2})^{c}\) has smaller total degree. Since \(P_{0}\) is symmetric, it is representable as a polynomial function of \(xyz - x^{2} - y^{2} - z^{2}\) . Then \(P\) is also of this form, completing the inductive step.
Comment. We could alternatively carry out Step 1 by an induction on \(n = \deg_{z}P\) , in a manner similar to Step 3. If \(n = 0\) , the statement holds. Assume that \(n > 0\) and check the leading component of \(P\) with respect to \(z\) :
\[P(x,y,z) = Q_{n}(x,y)z^{n} + R(x,y,z),\]
where \(\deg_{z}R< n\) . After the change \(z\mapsto xy - z\) , the leading component becomes \(Q_{n}(x,y)(- z)^{n}\) ; on the other hand, it should remain the same. Hence \(n\) is even. Now consider the polynomial
\[P_{0}(x,y,z) = P(x,y,z) - Q_{n}(x,y)\cdot \left(z(z - xy)\right)^{n / 2}.\]
It is also weakly symmetric, and \(\deg_{z}P_{0}< n\) . By the inductive hypothesis, it has the form \(P_{0}(x,y,z) =\) \(S(x,y,z(z - xy))\) . Hence the polynomial
\[P(x,y,z) = S(x,y,z(xy - z)) + Q_{n}(x,y)\big(z(z - xy)\big)^{n / 2}\]
also has this form. This completes the inductive step.
Solution 2. We will rely on the well- known identity
\[\cos^{2}u + \cos^{2}v + \cos^{2}w - 2\cos u\cos v\cos w - 1 = 0\quad \text{whenever } u + v + w = 0. \quad (2.1)\]
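This identity can be spot-checked numerically; a floating-point check is of course not a proof, and the helper name below is ad hoc:

```python
import math, random

# Floating-point spot-check of identity (2.1):
# cos^2 u + cos^2 v + cos^2 w - 2 cos u cos v cos w - 1 = 0 whenever u + v + w = 0.
def identity_lhs(u, v):
    w = -u - v                      # enforce u + v + w = 0
    return (math.cos(u) ** 2 + math.cos(v) ** 2 + math.cos(w) ** 2
            - 2 * math.cos(u) * math.cos(v) * math.cos(w) - 1)

random.seed(0)
assert all(abs(identity_lhs(random.uniform(-10, 10), random.uniform(-10, 10))) < 1e-9
           for _ in range(1000))
print("identity (2.1) holds at 1000 random angle pairs")
```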
Claim 1. The polynomial \(P(x,y,z)\) is constant on the surface
\[\mathfrak{S} = \{(2\cos u,2\cos v,2\cos w):u + v + w = 0\} .\]
Proof. Notice that for \(x = 2\cos u\) , \(y = 2\cos v\) , \(z = 2\cos w\) , the Vieta jumps \(x\mapsto yz - x\) , \(y\mapsto zx - y\) , \(z\mapsto xy - z\) in \((\ast)\) replace \((u,v,w)\) by \((v - w, - v,w)\) , \((u,w - u, - w)\) and \((- u,v,u - v)\) , respectively. For example, for the first type of jump we have
\[y z - x = 4\cos v\cos w - 2\cos u = 2\cos (v + w) + 2\cos (v - w) - 2\cos u = 2\cos (v - w).\]
Define \(G(u,v,w) = P(2\cos u,2\cos v,2\cos w)\) . For \(u + v + w = 0\) , the jumps give
\[G(u,v,w) = G(v - w, - v,w) = G(w - v, - v,(v - w) - (- v)) = G(-u - 2v, - v,2v - w)\] \[\qquad = G(u + 2v,v,w - 2v),\] where the last equality holds because cosine is even, so \(G(-u, -v, -w) = G(u,v,w)\) .
By induction,
\[G(u,v,w) = G\big(u + 2k v,v,w - 2k v\big)\quad (k\in \mathbb{Z}). \quad (2.2)\]
Similarly,
\[G(u,v,w) = G\big(u,v - 2\ell u,w + 2\ell u\big)\quad (\ell \in \mathbb{Z}). \quad (2.3)\]
And, of course, we have
\[G(u,v,w) = G\big(u + 2p\pi ,v + 2q\pi ,w - 2(p + q)\pi \big)\quad (p,q\in \mathbb{Z}). \quad (2.4)\]
Take two nonzero real numbers \(u,v\) such that \(u,v\) and \(\pi\) are linearly independent over \(\mathbb{Q}\) . By combining (2.2)–(2.4), we can see that \(G\) is constant on a dense subset of the plane \(u + v + w = 0\) . By continuity, \(G\) is constant on the entire plane and therefore \(P\) is constant on \(\mathfrak{S}\) . \(\square\)
Claim 2. The polynomial \(T(x,y,z) = x^{2} + y^{2} + z^{2} - xyz - 4\) divides \(P(x,y,z) - P(2,2,2)\) .
Proof. By dividing \(P\) by \(T\) with remainders, there exist some polynomials \(R(x,y,z)\) , \(A(y,z)\) and \(B(y,z)\) such that
\[P(x,y,z) - P(2,2,2) = T(x,y,z)\cdot R(x,y,z) + A(y,z)x + B(y,z). \quad (2.5)\]
On the surface \(\mathfrak{S}\) the LHS of (2.5) is zero by Claim 1 (since \((2,2,2)\in \mathfrak{S})\) and \(T = 0\) by (2.1). Hence, \(A(y,z)x + B(y,z)\) vanishes on \(\mathfrak{S}\) .
Notice that for every \(y = 2\cos v\) and \(z = 2\cos w\) with \(\frac{\pi}{3} < v,w< \frac{2\pi}{3}\) , there are two distinct values of \(x\) such that \((x,y,z)\in \mathfrak{S}\) , namely \(x_{1} = 2\cos (v + w)\) (which is negative), and \(x_{2} = 2\cos (v - w)\) (which is positive). This can happen only if \(A(y,z) = B(y,z) = 0\) . Hence, \(A(y,z) = B(y,z) = 0\) for \(|y|< 1\) , \(|z|< 1\) . The polynomials \(A\) and \(B\) vanish on an open set, so \(A\) and \(B\) are both the zero polynomial. \(\square\)
The quotient \((P(x,y,z) - P(2,2,2)) / T(x,y,z)\) is a polynomial of lower degree than \(P\) and it also satisfies \((\ast)\) . The problem statement can now be proven by induction on the degree of \(P\) .
Comment. In the proof of (2.2) and (2.3) we used two consecutive Vieta jumps; in fact from \((\ast)\) we used only \(P(x,y,xy - z) = P(x,zx - y,z) = P(yz - x,y,z)\) .
Solution 3 (using algebraic geometry, just for interest). Let \(Q = x^{2} + y^{2} + z^{2} - xyz\) and let \(t\in \mathbb{C}\) . Checking where \(Q - t,\frac{\partial Q}{\partial x},\frac{\partial Q}{\partial y}\) and \(\frac{\partial Q}{\partial z}\) vanish simultaneously, we find that the surface \(Q = t\) is smooth except for the cases \(t = 0\) , when the only singular point is \((0,0,0)\) , and \(t = 4\) , when the four points \((\pm 2,\pm 2,\pm 2)\) that satisfy \(xyz = 8\) are the only singular points. The singular points are the fixed points of the group \(\Gamma\) of polynomial automorphisms of \(\mathbb{C}^{3}\) generated by the three Vieta involutions
\[\iota_{1}:(x,y,z)\mapsto (x,y,xy - z),\quad \iota_{2}:(x,y,z)\mapsto (x,xz - y,z),\quad \iota_{3}:(x,y,z)\mapsto (yz - x,y,z).\]
\(\Gamma\) acts on each surface \(\mathcal{V}_{t}:Q - t = 0\) . If \(Q - t\) were reducible then the surface \(Q = t\) would contain a curve of singular points. Therefore \(Q - t\) is irreducible in \(\mathbb{C}[x,y,z]\) . (One can also prove algebraically that \(Q - t\) is irreducible, for example by checking that its discriminant as a quadratic polynomial in \(x\) is not a square in \(\mathbb{C}[y,z]\) , and likewise for the other two variables.) In the following solution we will only use the algebraic surface \(\mathcal{V}_{0}\) .
Let \(U\) be the \(\Gamma\) - orbit of \((3,3,3)\) . Consider \(\iota_{3}\circ \iota_{2}\) , which leaves \(z\) invariant. For each fixed value of \(z\) , \(\iota_{3}\circ \iota_{2}\) acts linearly on \((x,y)\) by the matrix
\[M_{z}:= \left(\begin{array}{cc} z^{2} - 1 & -z \\ z & -1 \end{array}\right).\]
The reverse composition \(\iota_{2}\circ \iota_{3}\) acts by \(M_{z}^{- 1} = M_{z}^{adj}\) . Note \(\operatorname *{det}M_{z} = 1\) and \(\operatorname {tr}M_{z} = z^{2} - 2\) . When \(z\) does not lie in the real interval \([- 2,2]\) , the eigenvalues of \(M_{z}\) do not have absolute value 1, so every orbit of the group generated by \(M_{z}\) on \(\mathbb{C}^{2}\setminus \{(0,0)\}\) is unbounded. For example, fixing \(z = 3\) we find \((3F_{2k + 1},3F_{2k - 1},3)\in U\) for every \(k\in \mathbb{Z}\) , where \((F_{n})_{n\in \mathbb{Z}}\) is the Fibonacci sequence with \(F_{0} = 0\) , \(F_{1} = 1\) .
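The orbit described here is easy to check by direct computation; the Python sketch below (ad hoc helper names) verifies that the iterates of \(\iota_{3}\circ \iota_{2}\) starting at \((3,3,3)\) stay on \(Q = 0\) and that their \(x\)-coordinates run through \(3F_{1}, 3F_{5}, 3F_{9}, \ldots\), since each application of \(M_{3}\) advances the pair \((F_{2k+1}, F_{2k-1})\) by two steps of \(k\):

```python
# Numerical check (not part of the proof) of the Fibonacci orbit at z = 3.
def Q(x, y, z):
    return x * x + y * y + z * z - x * y * z

def iota2(x, y, z):   # (x, y, z) -> (x, xz - y, z)
    return (x, x * z - y, z)

def iota3(x, y, z):   # (x, y, z) -> (yz - x, y, z)
    return (y * z - x, y, z)

pt, xs = (3, 3, 3), []
for _ in range(10):
    assert Q(*pt) == 0          # every iterate lies on V_0
    xs.append(pt[0])
    pt = iota3(*iota2(*pt))     # acts on (x, y) by the matrix M_3

fib = [0, 1]
while len(fib) < 40:
    fib.append(fib[-1] + fib[-2])
assert xs == [3 * fib[4 * k + 1] for k in range(10)]
print(xs[:5])                   # [3, 15, 102, 699, 4791]
```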
Now we may start at any point \((3F_{2k + 1},3F_{2k - 1},3)\) and iteratively apply \(\iota_{1}\circ \iota_{2}\) to generate another infinite sequence of distinct points of \(U\) , Zariski dense in the hyperbola cut out of \(\mathcal{V}_{0}\) by the plane \(x - 3F_{2k + 1} = 0\) . (The plane \(x = a\) cuts out an irreducible conic when \(a\notin \{- 2,0,2\}\) .) Thus the Zariski closure \(\overline{U}\) of \(U\) contains infinitely many distinct algebraic curves in \(\mathcal{V}_{0}\) . Since \(\mathcal{V}_{0}\) is an irreducible surface this implies that \(\overline{U} = \mathcal{V}_{0}\) .
For any polynomial \(P\) satisfying \((\ast)\) , we have \(P - P(3,3,3) = 0\) at each point of \(U\) . Since \(\overline{U} = \mathcal{V}_{0}\) , \(P - P(3,3,3)\) vanishes on \(\mathcal{V}_{0}\) . Then Hilbert's Nullstellensatz and the irreducibility of \(Q\) imply that \(P - P(3,3,3)\) is divisible by \(Q\) . Now \((P - P(3,3,3)) / Q\) is a polynomial also satisfying \((\ast)\) , so we may complete the proof by an induction on the total degree, as in the other solutions.
Comment. We remark that Solution 2 used a trigonometric parametrisation of a real component of \(\mathcal{V}_{4}\) ; in contrast \(\mathcal{V}_{0}\) is birationally equivalent to the projective space \(\mathbb{P}^{2}\) under the maps
\[(x,y,z)\to (x:y:z),\quad (a:b:c)\to \left(\frac{a^{2} + b^{2} + c^{2}}{bc},\frac{a^{2} + b^{2} + c^{2}}{ac},\frac{a^{2} + b^{2} + c^{2}}{ab}\right).\]
The set \(U\) in Solution 3 is contained in \(\mathbb{Z}^{3}\) so it is nowhere dense in \(\mathcal{V}_{0}\) in the classical topology.
Comment (background to the problem). A triple \((a,b,c)\in \mathbb{Z}^{3}\) is called a Markov triple if \(a^{2} + b^{2} + c^{2} = 3abc\) , and an integer that occurs as a coordinate of some Markov triple is called a Markov number. (The spelling Markoff is also frequent.) Markov triples arose in A. Markov's work in the 1870s on the reduction theory of indefinite binary quadratic forms. For every Markov triple, \((3a,3b,3c)\) lies on \(Q = 0\) . It is well known that all nonzero Markov triples can be generated from \((1,1,1)\) by sequences of Vieta involutions, which are the substitutions described in equation \((\ast)\) in the problem statement. There has been recent work by number theorists about the properties of Markov numbers (see for example Jean Bourgain, Alex Gamburd and Peter Sarnak, Markoff triples and strong approximation, Comptes Rendus Math. 354, no. 2, 131–135 (2016), arXiv:1505.06411). Each Markov number occurs in infinitely many triples, but a famous old open problem is the unicity conjecture, which asserts that each Markov number occurs in only one Markov triple (up to permutations and sign changes) as the largest coordinate in absolute value in that triple. It is a standard fact in the modern literature on Markov numbers that the Markov triples are Zariski dense in the Markov surface. Proving this is the main work of Solution 3. Algebraic geometry is definitely off-syllabus for the IMO, and one still has to work a bit to prove the Zariski density. On the other hand the approaches of Solutions 1 and 2 are elementary and only use tools expected to be known by IMO contestants. Therefore we do not think that the existence of a solution using algebraic geometry necessarily makes this problem unsuitable for the IMO.
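The Vieta-jump generation of Markov triples mentioned above is easy to demonstrate computationally; a small Python breadth-first search (an illustration, not part of the solution):

```python
# Generate Markov triples a^2 + b^2 + c^2 = 3abc from (1, 1, 1) by Vieta jumps,
# e.g. a -> 3bc - a (the other root of the quadratic in a).
def jumps(t):
    a, b, c = t
    return {tuple(sorted((3 * b * c - a, b, c))),
            tuple(sorted((a, 3 * a * c - b, c))),
            tuple(sorted((a, b, 3 * a * b - c)))}

seen = {(1, 1, 1)}
frontier = {(1, 1, 1)}
for _ in range(4):                      # four rounds of jumps
    frontier = {t for s in frontier for t in jumps(s)} - seen
    seen |= frontier

assert all(a * a + b * b + c * c == 3 * a * b * c for a, b, c in seen)
markov_numbers = sorted({x for t in seen for x in t})
print(markov_numbers[:8])               # [1, 2, 5, 13, 29, 34, 169, 194]
```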
|
IMOSL-2019-A7
|
Let \(\mathbb{Z}\) be the set of integers. We consider functions \(f\colon \mathbb{Z}\to \mathbb{Z}\) satisfying
\[f\big(f(x + y) + y\big) = f\big(f(x) + y\big)\]
for all integers \(x\) and \(y\) . For such a function, we say that an integer \(v\) is \(f\) - rare if the set
\[X_{v} = \{x\in \mathbb{Z}\colon f(x) = v\}\]
is finite and nonempty.
(a) Prove that there exists such a function \(f\) for which there is an \(f\) - rare integer.
(b) Prove that no such function \(f\) can have more than one \(f\) - rare integer.
|
Solution 1. a) Let \(f\) be the function where \(f(0) = 0\) and \(f(x)\) is the largest power of 2 dividing \(2x\) for \(x\neq 0\) . The integer 0 is evidently \(f\) - rare, so it remains to verify the functional equation.
Since \(f(2x) = 2f(x)\) for all \(x\) , it suffices to verify the functional equation when at least one of \(x\) and \(y\) is odd (the case \(x = y = 0\) being trivial). If \(y\) is odd, then we have
\[f(f(x + y) + y) = 2 = f(f(x) + y)\]
since all the values attained by \(f\) are even. If, on the other hand, \(x\) is odd and \(y\) is even, then we already have
\[f(x + y) = 2 = f(x)\]
from which the functional equation follows immediately.
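The construction in part a) is also easy to confirm by brute force; a Python sketch (the bounds 50 and 1000 are arbitrary test ranges):

```python
# The function from part a): f(0) = 0 and f(x) = the largest power of 2 dividing 2x.
# Brute-force confirmation of f(f(x+y)+y) = f(f(x)+y) on a test range.
def f(x):
    if x == 0:
        return 0
    return 2 * (x & -x)   # x & -x isolates the largest power of 2 dividing x

for x in range(-50, 51):
    for y in range(-50, 51):
        assert f(f(x + y) + y) == f(f(x) + y)

# 0 is f-rare: on a large window, X_0 = {0}
assert [x for x in range(-1000, 1001) if f(x) == 0] == [0]
print("functional equation verified on the test range")
```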
b) An easy inductive argument (substituting \(x + ky\) for \(x\) ) shows that
\[f(f(x + ky) + y) = f(f(x) + y) \quad (*)\]
for all integers \(x\) , \(y\) and \(k\) . If \(v\) is an \(f\) - rare integer and \(a\) is the least element of \(X_{v}\) , then by substituting \(y = a - f(x)\) in the above, we see that
\[f(x + k\cdot (a - f(x))) - f(x) + a\in X_{v}\]
for all integers \(x\) and \(k\) , so that in particular
\[f(x + k\cdot (a - f(x)))\geqslant f(x)\]
for all integers \(x\) and \(k\) , by assumption on \(a\) . This says that on the (possibly degenerate) arithmetic progression through \(x\) with common difference \(a - f(x)\) , the function \(f\) attains its minimal value at \(x\) .
Repeating the same argument with \(a\) replaced by the greatest element \(b\) of \(X_{v}\) shows that
\[f\big(x + k\cdot (b - f(x))\big)\leqslant f(x)\]
for all integers \(x\) and \(k\) . Combined with the above inequality, we therefore have
\[f(x + k\cdot (a - f(x))\cdot (b - f(x))) = f(x) \quad (†)\]
for all integers \(x\) and \(k\) .
Thus if \(f(x)\neq a,b\) , then the set \(X_{f(x)}\) contains a nondegenerate arithmetic progression, so is infinite. So the only possible \(f\) - rare integers are \(a\) and \(b\) .
In particular, the \(f\) - rare integer \(v\) we started with must be one of \(a\) or \(b\) , so that \(f(v) = f(a) = f(b) = v\) . This means that there cannot be any other \(f\) - rare integers \(v'\) , as they would on the one hand have to be either \(a\) or \(b\) , and on the other would have to satisfy \(f(v') = v'\) . Thus \(v\) is the unique \(f\) - rare integer.
Comment 1. If \(f\) is a solution to the functional equation, then so too is any conjugate of \(f\) by a translation, i.e. any function \(x \mapsto f(x + n) - n\) for an integer \(n\) . Thus in proving part (b), one is free to consider only functions \(f\) for which 0 is \(f\) - rare, as in the following solution.
Solution 2, part (b) only. Suppose \(v\) is \(f\) - rare, and let \(a\) and \(b\) be the least and greatest elements of \(X_{v}\) , respectively. Substituting \(x = v\) and \(y = a - v\) into the equation shows that
\[f(v) - v + a\in X_{v}\]
and in particular \(f(v)\geqslant v\) . Repeating the same argument with \(x = v\) and \(y = b - v\) shows that \(f(v)\leqslant v\) , and hence \(f(v) = v\) .
Suppose now that \(v^{\prime}\) is a second \(f\) - rare integer. We may assume that \(v = 0\) (see Comment 1). We've seen that \(f(v^{\prime}) = v^{\prime}\) ; we claim that in fact \(f(k v^{\prime}) = v^{\prime}\) for all positive integers \(k\) . This gives a contradiction unless \(v^{\prime} = v = 0\) .
This claim is proved by induction on \(k\) . Supposing it to be true for \(k\) , we substitute \(y = k v^{\prime}\) and \(x = 0\) into the functional equation to yield
\[f((k + 1)v^{\prime}) = f(f(0) + k v^{\prime}) = f(k v^{\prime}) = v^{\prime}\]
using that \(f(0) = 0\) . This completes the induction, and hence the proof.
Comment 2. There are many functions \(f\) satisfying the functional equation for which there is an \(f\) - rare integer. For instance, one may generalise the construction in part (a) of Solution 1 by taking a sequence \(1 = a_{0},a_{1},a_{2},\ldots\) of positive integers with each \(a_{i}\) a proper divisor of \(a_{i + 1}\) and choosing arbitrary functions \(f_{i}\colon (\mathbb{Z} / a_{i}\mathbb{Z})\setminus \{0\} \to a_{i}\mathbb{Z}\setminus \{0\}\) from the nonzero residue classes modulo \(a_{i}\) to the nonzero multiples of \(a_{i}\) . One then defines a function \(f\colon \mathbb{Z}\to \mathbb{Z}\) by
\[
f(x) :=
\begin{cases}
f_{i+1}(x \bmod a_{i+1}), & \text{if } a_i \mid x \text{ but } a_{i+1} \nmid x, \\
0, & \text{if } x = 0.
\end{cases}
\]
If one writes \(v(x)\) for the largest \(i\) such that \(a_{i}\mid x\) (with \(v(0) = \infty\) ), then it is easy to verify the functional equation for \(f\) separately in the two cases \(v(y) > v(x)\) and \(v(x)\geqslant v(y)\) . Hence this \(f\) satisfies the functional equation and 0 is an \(f\) - rare integer.
Comment 3. In fact, if \(v\) is an \(f\) - rare integer for an \(f\) satisfying the functional equation, then its fibre \(X_{v} = \{v\}\) must be a singleton. We may assume without loss of generality that \(v = 0\) . We've already seen in Solution 1 that 0 is either the greatest or least element of \(X_{0}\) ; replacing \(f\) with the function \(x\mapsto - f(- x)\) if necessary, we may assume that 0 is the least element of \(X_{0}\) . We write \(b\) for the largest element of \(X_{0}\) , supposing for contradiction that \(b > 0\) , and write \(N = (2b)!\) .
It now follows from \((\ast)\) that we have
\[f(f(N b) + b) = f(f(0) + b) = f(b) = 0,\]
from which we see that \(f(N b) + b\in X_{0}\subseteq [0,b]\) . It follows that \(f(N b)\in [- b,0)\) , since by construction \(N b\notin X_{0}\) . Now it follows that \((f(N b) - 0)\cdot (f(N b) - b)\) is a divisor of \(N\) , so from \((\dagger)\) we see that \(f(N b) = f(0) = 0\) . This yields the desired contradiction.
|
IMOSL-2019-C1
|
The infinite sequence \(a_{0}\) , \(a_{1}\) , \(a_{2}\) , ... of (not necessarily different) integers has the following properties: \(0 \leqslant a_{i} \leqslant i\) for all integers \(i \geqslant 0\) , and
\[\binom{k}{a_{0}}+\binom{k}{a_{1}}+\cdots+\binom{k}{a_{k}}=2^{k}\]
for all integers \(k \geqslant 0\) .
Prove that all integers \(N \geqslant 0\) occur in the sequence (that is, for all \(N \geqslant 0\) , there exists \(i \geqslant 0\) with \(a_{i} = N\) ).
|
Solution. We prove by induction on \(k\) that every initial segment of the sequence, \(a_{0}, a_{1}, \ldots , a_{k}\) , consists of the following elements (counted with multiplicity, and not necessarily in order), for some \(\ell \geqslant 0\) with \(2\ell \leqslant k + 1\) :
\[0,1,\ldots ,\ell -1,\quad 0,1,\ldots ,k - \ell .\]
For \(k = 0\) we have \(a_{0} = 0\) , which is of this form. Now suppose that for \(k = m\) the elements \(a_{0}, a_{1}, \ldots , a_{m}\) are \(0, 0, 1, 1, 2, 2, \ldots , \ell - 1, \ell - 1, \ell , \ell + 1, \ldots , m - \ell - 1, m - \ell\) for some \(\ell\) with \(0 \leqslant 2\ell \leqslant m + 1\) . It is given that
\[\binom{m+1}{a_{0}}+\binom{m+1}{a_{1}}+\cdots+\binom{m+1}{a_{m}}+\binom{m+1}{a_{m+1}}=2^{m+1},\]
which becomes
\[\binom{m+1}{0}+\binom{m+1}{1}+\cdots+\binom{m+1}{\ell-1}\] \[\qquad +\left(\binom{m+1}{0}+\binom{m+1}{1}+\cdots+\binom{m+1}{m-\ell}\right)+\binom{m+1}{a_{m+1}}=2^{m+1},\]
or, using \(\binom{m+1}{i}=\binom{m+1}{m+1-i}\) , that
\[\binom{m+1}{0}+\binom{m+1}{1}+\cdots+\binom{m+1}{\ell-1}\] \[\qquad+\left(\binom{m+1}{m+1}+\binom{m+1}{m}+\cdots+\binom{m+1}{\ell+1}\right)+\binom{m+1}{a_{m+1}}=2^{m+1}.\]
On the other hand, it is well known that
\[\binom{m+1}{0}+\binom{m+1}{1}+\cdots+\binom{m+1}{m+1}=2^{m+1},\]
and so, by subtracting, we get
\[\binom{m+1}{a_{m+1}}=\binom{m+1}{\ell}.\]
From this, using the fact that the binomial coefficients \(\binom{m+1}{i}\) are increasing for \(i \leqslant \frac{m+1}{2}\) and decreasing for \(i \geqslant \frac{m+1}{2}\) , we conclude that either \(a_{m+1} = \ell\) or \(a_{m+1} = m + 1 - \ell\) . In either case, \(a_{0}, a_{1}, \ldots , a_{m+1}\) is again of the claimed form, which concludes the induction.
As a result of this description, any integer \(N \geqslant 0\) appears as a term of the sequence \(a_{i}\) for some \(0 \leqslant i \leqslant 2N\) .
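The inductive description can be confirmed by brute force for small lengths; the following Python sketch (helper names are ad hoc) grows all valid prefixes and checks that each has the claimed form:

```python
from math import comb

# Grow all sequences (a_0, ..., a_k) term by term subject to 0 <= a_i <= i and
# the binomial-sum condition, then verify each is, as a multiset,
# {0, ..., l-1} together with {0, ..., k-l} for some l with 2l <= k+1.
def extensions(seq):
    k = len(seq)                                   # index of the next term
    total = sum(comb(k, a) for a in seq)
    return [seq + (a,) for a in range(k + 1) if total + comb(k, a) == 2 ** k]

seqs = [(0,)]
for _ in range(8):
    seqs = [t for s in seqs for t in extensions(s)]

def of_claimed_form(seq):
    k = len(seq) - 1
    return any(sorted(seq) == sorted(list(range(l)) + list(range(k - l + 1)))
               for l in range((k + 1) // 2 + 1))

assert seqs and all(of_claimed_form(s) for s in seqs)
# consistent with the final remark: 0, 1, ..., 4 all occur within the first 9 terms
assert all(set(range(5)) <= set(s) for s in seqs)
print(len(seqs), "valid prefixes of length 9, all of the claimed form")
```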
|
IMOSL-2019-C2
|
You are given a set of \(n\) blocks, each weighing at least 1; their total weight is \(2n\) . Prove that for every real number \(r\) with \(0 \leqslant r \leqslant 2n - 2\) you can choose a subset of the blocks whose total weight is at least \(r\) but at most \(r + 2\) .
|
Solution 1. We prove the following more general statement by induction on \(n\) .
Claim. Suppose that you have \(n\) blocks, each of weight at least 1, and of total weight \(s \leqslant 2n\) . Then for every \(r\) with \(- 2 \leqslant r \leqslant s\) , you can choose some of the blocks whose total weight is at least \(r\) but at most \(r + 2\) .
Proof. The base case \(n = 1\) is trivial. To prove the inductive step, let \(x\) be the largest block weight. Clearly, \(x \geqslant s / n\) , so \(s - x \leqslant \frac{n - 1}{n} s \leqslant 2(n - 1)\) . Hence, if we exclude a block of weight \(x\) , we can apply the inductive hypothesis to show the claim holds (for this smaller set) for any \(- 2 \leqslant r \leqslant s - x\) . Adding the excluded block to each of those combinations, we see that the claim also holds when \(x - 2 \leqslant r \leqslant s\) . So if \(x - 2 \leqslant s - x\) , then we have covered the whole interval \([- 2, s]\) . But each block weight is at least 1, so we have \(x - 2 \leqslant (s - (n - 1)) - 2 = s - (2n - (n - 1)) \leqslant s - (s - (n - 1)) \leqslant s - x\) , as desired. \(\square\)
Comment. Instead of inducting on sets of blocks with total weight \(s \leqslant 2n\) , we could instead prove the result only for \(s = 2n\) . We would then need to modify the inductive step to scale up the block weights before applying the induction hypothesis.
Solution 2. Let \(x_{1}, \ldots , x_{n}\) be the weights of the blocks in weakly increasing order. Consider the set \(S\) of sums of the form \(\sum_{j \in J} x_{j}\) for a subset \(J \subseteq \{1, 2, \ldots , n\}\) . We want to prove that the mesh of \(S\) – i.e. the largest distance between two adjacent elements – is at most 2.
For \(0 \leqslant k \leqslant n\) , let \(S_{k}\) denote the set of sums of the form \(\sum_{i \in J} x_{i}\) for a subset \(J \subseteq \{1, 2, \ldots , k\}\) . We will show by induction on \(k\) that the mesh of \(S_{k}\) is at most 2.
The base case \(k = 0\) is trivial (as \(S_{0} = \{0\}\) ). For \(k > 0\) we have
\[S_{k} = S_{k - 1} \cup (x_{k} + S_{k - 1})\]
(where \((x_{k} + S_{k - 1})\) denotes \(\{x_{k} + s: s \in S_{k - 1}\}\) ), so it suffices to prove that \(x_{k} \leqslant \sum_{j < k} x_{j} + 2\) . But if this were not the case, we would have \(x_{l} > \sum_{j < k} x_{j} + 2 \geqslant k + 1\) for all \(l \geqslant k\) , and hence
\[2n = \sum_{j = 1}^{n} x_{j} > (n + 1 - k)(k + 1) + k - 1.\]
This rearranges to \(n > k(n + 1 - k)\) , which is false for \(1 \leqslant k \leqslant n\) , giving the desired contradiction.
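The induction in Solution 1 is in fact constructive; the following Python sketch (an illustration, not part of either proof, with ad hoc helper names) implements it and tests random instances:

```python
import random

# Constructive version of Solution 1: peel off a largest block; recurse on the
# rest with r, or take the block and recurse with r - x.
def choose_subset(blocks, r):
    # assumes each weight >= 1, sum(blocks) <= 2*len(blocks), -2 <= r <= sum(blocks)
    if not blocks:
        return []
    rest = sorted(blocks)
    x = rest.pop()                            # a largest block
    if r <= sum(rest):
        return choose_subset(rest, r)         # induction on the smaller set
    return [x] + choose_subset(rest, r - x)   # here x - 2 <= r, so r - x >= -2

random.seed(1)
for _ in range(500):
    n = random.randint(1, 8)
    w = [1 + random.random() for _ in range(n)]
    w = [v * 2 * n / sum(w) for v in w]       # rescale so the total is exactly 2n
    r = random.uniform(0, 2 * n - 2)
    s = sum(choose_subset(w, r))
    assert r - 1e-9 <= s <= r + 2 + 1e-9
print("500 random instances: a subset with weight in [r, r+2] was always found")
```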
|
IMOSL-2019-C3
|
Let \(n\) be a positive integer. Harry has \(n\) coins lined up on his desk, each showing heads or tails. He repeatedly does the following operation: if there are \(k\) coins showing heads and \(k > 0\) , then he flips the \(k^{\mathrm{th}}\) coin over; otherwise he stops the process. (For example, the process starting with \(T H T\) would be \(T H T \rightarrow H H T \rightarrow H T T \rightarrow T T T\) , which takes three steps.)
Letting \(C\) denote the initial configuration (a sequence of \(n\) \(H\) 's and \(T\) 's), write \(\ell (C)\) for the number of steps needed before all coins show \(T\) . Show that this number \(\ell (C)\) is finite, and determine its average value over all \(2^{n}\) possible initial configurations \(C\) .
|
Answer: The average is \(\frac{1}{4} n(n + 1)\) .
Common remarks. Throughout all these solutions, we let \(E(n)\) denote the desired average value.
Solution 1. We represent the problem using a directed graph \(G_{n}\) whose vertices are the length- \(n\) strings of \(H\) 's and \(T\) 's. The graph features an edge from each string to its successor (except for \(T T\dots T T\) , which has no successor). We will also write \(\bar{H} = T\) and \(\bar{T} = H\) .
The graph \(G_{0}\) consists of a single vertex: the empty string. The main claim is that \(G_{n}\) can be described explicitly in terms of \(G_{n - 1}\) :
- We take two copies, \(X\) and \(Y\) , of \(G_{n - 1}\) .
- In \(X\) , we take each string of \(n - 1\) coins and just append a \(T\) to it. In symbols, we replace \(s_{1} \dots s_{n - 1}\) with \(s_{1} \dots s_{n - 1}T\) .
- In \(Y\) , we take each string of \(n - 1\) coins, flip every coin, reverse the order, and append an \(H\) to it. In symbols, we replace \(s_{1} \dots s_{n - 1}\) with \(\bar{s}_{n - 1}\bar{s}_{n - 2}\dots \bar{s}_{1}H\) .
- Finally, we add one new edge from \(Y\) to \(X\) , namely \(H H \dots H H H \rightarrow H H \dots H H T\) .
We depict \(G_{4}\) below, in a way which indicates this recursive construction:
(Figure: the graph \(G_{4}\) , drawn so as to indicate this recursive construction.)
We prove the claim inductively. Firstly, \(X\) is correct as a subgraph of \(G_{n}\) , as the operation on coins is unchanged by an extra \(T\) at the end: if \(s_{1} \dots s_{n - 1}\) is sent to \(t_{1} \dots t_{n - 1}\) , then \(s_{1} \dots s_{n - 1}T\) is sent to \(t_{1} \dots t_{n - 1}T\) .
Next, \(Y\) is also correct as a subgraph of \(G_{n}\) , as if \(s_{1} \dots s_{n - 1}\) has \(k\) occurrences of \(H\) , then \(\bar{s}_{n - 1} \dots \bar{s}_{1}H\) has \((n - 1 - k) + 1 = n - k\) occurrences of \(H\) , and thus (provided that \(k > 0\) ), if \(s_{1} \dots s_{n - 1}\) is sent to \(t_{1} \dots t_{n - 1}\) , then \(\bar{s}_{n - 1} \dots \bar{s}_{1}H\) is sent to \(\bar{t}_{n - 1} \dots \bar{t}_{1}H\) .
Finally, the one edge from \(Y\) to \(X\) is correct, as the operation does send \(H H \dots H H H\) to \(H H \dots H H T\) .
To finish, note that the sequences in \(X\) take an average of \(E(n - 1)\) steps to terminate, whereas the sequences in \(Y\) take an average of \(E(n - 1)\) steps to reach \(HH\dots H\) and then an additional \(n\) steps to terminate. Therefore, we have
\[E(n) = \frac{1}{2} (E(n - 1) + (E(n - 1) + n)) = E(n - 1) + \frac{n}{2}.\]
We have \(E(0) = 0\) from our description of \(G_{0}\) . Thus, by induction, we have \(E(n) = \frac{1}{2} (1 + \dots + n) = \frac{1}{4} n(n + 1)\) , which in particular is finite.
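The answer can be confirmed by brute force for small \(n\); a Python sketch (not part of the solution):

```python
from itertools import product

# Simulate Harry's process for every initial configuration and compare the
# average of l(C) with n(n+1)/4.
def steps(config):
    coins = list(config)                  # True = heads
    count = 0
    while (k := sum(coins)) > 0:
        coins[k - 1] = not coins[k - 1]   # flip the k-th coin (1-indexed)
        count += 1
    return count

assert steps((False, True, False)) == 3   # the THT example takes three steps

for n in range(1, 9):
    total = sum(steps(c) for c in product((False, True), repeat=n))
    assert 4 * total == n * (n + 1) * 2 ** n
print("average l(C) equals n(n+1)/4 for n = 1..8")
```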
Solution 2. We consider what happens with configurations depending on the coins they start and end with.
- If a configuration starts with \(H\) , the last \(n - 1\) coins follow the given rules, as if they were all the coins, until they are all \(T\) , then the first coin is turned over.
- If a configuration ends with \(T\) , the last coin will never be turned over, and the first \(n - 1\) coins follow the given rules, as if they were all the coins.
- If a configuration starts with \(T\) and ends with \(H\) , the middle \(n - 2\) coins follow the given rules, as if they were all the coins, until they are all \(T\) . After that, there are \(2n - 1\) more steps: first coins 1, 2, ..., \(n - 1\) are turned over in that order, then coins \(n\) , \(n - 1\) , ..., 1 are turned over in that order.
As this covers all configurations, and the number of steps is clearly finite for 0 or 1 coins, it follows by induction on \(n\) that the number of steps is always finite.
We define \(E_{A B}(n)\) , where \(A\) and \(B\) are each one of \(H\) , \(T\) or \(*\) , to be the average number of steps over configurations of length \(n\) restricted to those that start with \(A\) , if \(A\) is not \(*\) , and that end with \(B\) , if \(B\) is not \(*\) (so \(*\) represents "either \(H\) or \(T\) "). The above observations tell us that, for \(n \geq 2\) :
\[E_{H*}(n) = E(n - 1) + 1,\] \[E_{*T}(n) = E(n - 1),\] \[E_{HT}(n) = E(n - 2) + 1\quad (\text{by using both the observations for } H* \text{ and for } {*}T),\] \[E_{TH}(n) = E(n - 2) + 2n - 1.\]
Now \(E_{H*}(n) = \frac{1}{2} (E_{HH}(n) + E_{HT}(n))\) , so \(E_{HH}(n) = 2E(n - 1) - E(n - 2) + 1\) . Similarly, \(E_{TT}(n) = 2E(n - 1) - E(n - 2) - 1\) . So
\[E(n) = \frac{1}{4} (E_{HT}(n) + E_{HH}(n) + E_{TT}(n) + E_{TH}(n)) = E(n - 1) + \frac{n}{2}.\]
We have \(E(0) = 0\) and \(E(1) = \frac{1}{2}\) , so by induction on \(n\) we have \(E(n) = \frac{1}{4} n(n + 1)\) .
Solution 3. Let \(H_{i}\) be the number of heads in positions 1 to \(i\) inclusive (so \(H_{n}\) is the total number of heads), and let \(I_{i}\) be 1 if the \(i^{\mathrm{th}}\) coin is a head, 0 otherwise. Consider the function
\[t(i) = I_{i} + 2(\min \{i,H_{n}\} -H_{i}).\]
We claim that \(t(i)\) is the total number of times coin \(i\) is turned over (which implies that the process terminates). Certainly \(t(i) = 0\) when all coins are tails, and \(t(i)\) is always a nonnegative integer, so it suffices to show that when the \(k^{\mathrm{th}}\) coin is turned over (where \(k = H_{n}\) ), \(t(k)\) goes down by 1 and all the other \(t(i)\) are unchanged. We show this by splitting into cases:
- If \(i< k\) , \(I_{i}\) and \(H_{i}\) are unchanged, and \(\min \{i,H_{n}\} = i\) both before and after the coin flip, so \(t(i)\) is unchanged.
- If \(i > k\) , \(\min \{i,H_{n}\} = H_{n}\) both before and after the coin flip, and both \(H_{n}\) and \(H_{i}\) change by the same amount, so \(t(i)\) is unchanged.
- If \(i = k\) and the coin is heads, \(I_{i}\) goes down by 1, as do both \(\min \{i,H_{n}\} = H_{n}\) and \(H_{i}\) ; so \(t(i)\) goes down by 1.
- If \(i = k\) and the coin is tails, \(I_{i}\) goes up by 1, \(\min \{i,H_{n}\} = i\) is unchanged and \(H_{i}\) goes up by 1; so \(t(i)\) goes down by 1.
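The invariant \(t(i)\) can be checked against a direct simulation for small \(n\); a Python sketch (helper names are ad hoc):

```python
from itertools import product

# Check that t(i) = I_i + 2(min(i, H_n) - H_i) equals the number of times
# coin i is flipped during the whole process.
def t_values(coins):
    n = len(coins)
    Hn = sum(coins)
    vals, H = [], 0
    for i, c in enumerate(coins, start=1):
        H += c
        vals.append(int(c) + 2 * (min(i, Hn) - H))
    return vals

def flip_counts(coins):
    coins = list(coins)
    counts = [0] * len(coins)
    while (k := sum(coins)) > 0:
        coins[k - 1] = not coins[k - 1]
        counts[k - 1] += 1
    return counts

for n in range(1, 8):
    for c in product((False, True), repeat=n):
        assert t_values(c) == flip_counts(c)
print("t(i) matches the flip counts for n = 1..7")
```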
We now need to compute the average value of
\[\sum_{i = 1}^{n}t(i) = \sum_{i = 1}^{n}I_{i} + 2\sum_{i = 1}^{n}\min \{i,H_{n}\} -2\sum_{i = 1}^{n}H_{i}.\]
The average value of the first term is \(\frac{1}{2} n\) , and that of the third term is \(- \frac{1}{2} n(n + 1)\) . To compute the second term, we sum over choices for the total number of heads, and then over the possible values of \(i\) , getting
\[2^{1 - n}\sum_{j = 0}^{n}\binom{n}{j}\sum_{i = 1}^{n}\min \{i,j\} = 2^{1 - n}\sum_{j = 0}^{n}\binom{n}{j}\left(nj - \binom{j}{2}\right).\]
Now, in terms of trinomial coefficients,
\[\sum_{j = 0}^{n}j\binom{n}{j} = \sum_{j = 1}^{n}\binom{n}{n - j,j - 1,1} = n\sum_{j = 0}^{n - 1}\binom{n - 1}{j} = 2^{n - 1}n\]
and
\[\sum_{j = 0}^{n}\binom{j}{2}\binom{n}{j} = \sum_{j = 2}^{n}\binom{n}{n - j,j - 2,2} = \binom{n}{2}\sum_{j = 0}^{n - 2}\binom{n - 2}{j} = 2^{n - 2}\binom{n}{2}.\]
So the second term above is
\[2^{1 - n}\left(2^{n - 1}n^{2} - 2^{n - 2}\binom{n}{2}\right) = n^{2} - \frac{n(n - 1)}{4},\]
and the required average is
\[E(n) = \frac{1}{2} n + n^{2} - \frac{n(n - 1)}{4} -\frac{1}{2} n(n + 1) = \frac{n(n + 1)}{4}.\]
Solution 4. Harry has built a Turing machine to flip the coins for him. The machine is initially positioned at the \(k^{\mathrm{th}}\) coin, where there are \(k\) heads (and the position before the first coin is considered to be the \(0^{\mathrm{th}}\) coin). The machine then moves according to the following rules, stopping when it reaches the position before the first coin: if the coin at its current position is \(H\) , it flips the coin and moves to the previous coin, while if the coin at its current position is \(T\) , it flips the coin and moves to the next position.
Consider the maximal sequences of consecutive moves in the same direction. Suppose the machine has \(a\) consecutive moves to the next coin, before a move to the previous coin. After those \(a\) moves, the \(a\) coins flipped in those moves are all heads, as is the coin the machine is now at, so at least the next \(a + 1\) moves will all be moves to the previous coin. Similarly, \(a\) consecutive moves to the previous coin are followed by at least \(a + 1\) consecutive moves to the next coin. There cannot be more than \(n\) consecutive moves in the same direction, so this proves that the process terminates (with a move from the first coin to the position before the first coin).
Thus we have a (possibly empty) sequence \(a_{1}< \dots < a_{t}\leqslant n\) giving the lengths of maximal sequences of consecutive moves in the same direction, where the final \(a_{t}\) moves must be moves to the previous coin, ending before the first coin. We claim there is a bijection between initial configurations of the coins and such sequences. This gives
\[E(n) = \frac{1}{2} (1 + 2 + \dots +n) = \frac{n(n + 1)}{4}\]
as required, since each \(i\) with \(1\leqslant i\leqslant n\) will appear in half of the sequences, and will contribute \(i\) to the number of moves when it does.
To see the bijection, consider following the sequence of moves backwards, starting with the machine just before the first coin and all coins showing tails. This certainly determines a unique configuration of coins that could possibly correspond to the given sequence. Furthermore, every coin flipped as part of the \(a_{j}\) consecutive moves is also flipped as part of all subsequent sequences of \(a_{k}\) consecutive moves, for all \(k > j\) , meaning that, as we follow the moves backwards, each coin is always in the correct state when flipped to result in a move in the required direction. (Alternatively, since there are \(2^{n}\) possible configurations of coins and \(2^{n}\) possible such ascending sequences, the fact that the sequence of moves determines at most one configuration of coins, and thus that there is an injection from configurations of coins to such ascending sequences, is sufficient for it to be a bijection, without needing to show that coins are in the right state as we move backwards.)
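Comment. The bijection can be confirmed computationally for small \(n\) : simulating the machine described above on every configuration and recording the maximal run lengths yields \(2^{n}\) distinct strictly increasing sequences. A Python sketch (not part of the official solution):

```python
from itertools import product

def run_lengths(config):
    # Run the machine and record the maximal runs of same-direction moves.
    coins = list(config)      # 1 = heads
    pos = sum(coins)          # the machine starts at the k-th coin
    runs, cur_dir, cur_len = [], 0, 0
    while pos > 0:
        if coins[pos - 1]:            # H: flip and move to the previous coin
            coins[pos - 1], d = 0, -1
        else:                         # T: flip and move to the next coin
            coins[pos - 1], d = 1, +1
        pos += d
        if d == cur_dir:
            cur_len += 1
        else:
            if cur_dir:
                runs.append(cur_len)
            cur_dir, cur_len = d, 1
    if cur_dir:
        runs.append(cur_len)
    return tuple(runs)

n = 6
seen = {run_lengths(c) for c in product((0, 1), repeat=n)}
assert len(seen) == 2 ** n                            # the map is injective
assert all(list(r) == sorted(set(r)) for r in seen)   # runs strictly increase
```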
Solution 5. We explicitly describe what happens with an arbitrary sequence \(C\) of \(n\) coins. Suppose that \(C\) contains \(k\) heads at positions \(1\leqslant c_{1}< c_{2}< \dots < c_{k}\leqslant n\) .
Let \(i\) be the minimal index such that \(c_{i}\geqslant k\) . Then the first few steps will consist of turning over the \(k^{\mathrm{th}}\) , \((k + 1)^{\mathrm{th}}\) , ..., \(c_{i}^{\mathrm{th}}\) , \((c_{i} - 1)^{\mathrm{th}}\) , \((c_{i} - 2)^{\mathrm{th}}\) , ..., \(k^{\mathrm{th}}\) coins in this order. After that we get a configuration with \(k - 1\) heads at the same positions as in the initial one, except for \(c_{i}\) . This part of the process takes \(2(c_{i} - k) + 1\) steps.
After that, the process acts similarly; by induction on the number of heads we deduce that the process ends. Moreover, if the \(c_{i}\) disappear in order \(c_{i_{1}},\ldots ,c_{i_{k}}\) , the whole process takes
\[\ell (C) = \sum_{j = 1}^{k}\bigl (2(c_{i_{j}} - (k + 1 - j)) + 1\bigr) = 2\sum_{j = 1}^{k}c_{j} - 2\sum_{j = 1}^{k}(k + 1 - j) + k = 2\sum_{j = 1}^{k}c_{j} - k^{2}\]
steps.
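Comment. The formula for \(\ell (C)\) is easily confirmed by simulating the process for small \(n\) ; a Python sketch (not part of the official solution):

```python
from itertools import combinations

def process_length(n, heads):
    # Simulate the process: while k coins show heads, flip the k-th coin.
    coins = [0] * n
    for c in heads:
        coins[c - 1] = 1
    count = 0
    while (k := sum(coins)) > 0:
        coins[k - 1] ^= 1
        count += 1
    return count

n = 7
for k in range(n + 1):
    for heads in combinations(range(1, n + 1), k):
        # l(C) = 2 * (c_1 + ... + c_k) - k^2
        assert process_length(n, heads) == 2 * sum(heads) - k * k
```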
Now let us find the total value \(S_{k}\) of \(\ell (C)\) over all \(\binom{n}{k}\) configurations with exactly \(k\) heads. To sum up the above expression over those, notice that each number \(1\leqslant i\leqslant n\) appears as \(c_{j}\) exactly \(\binom{n- 1}{k- 1}\) times. Thus
\[S_{k} = 2\binom{n- 1}{k- 1}\sum_{i=1}^{n}i-\binom{n}{k}k^{2}=2\frac{(n- 1)\cdots(n-k+1)}{(k- 1)!}\cdot\frac{n(n+1)}{2}-\frac{n\cdots(n-k+1)}{k!}k^{2}\] \[\qquad=\frac{n(n- 1)\cdots(n-k+1)}{(k- 1)!}\big((n+1)- k\big)=n(n- 1)\binom{n- 2}{k- 1}+n\binom{n- 1}{k- 1}.\]
Therefore, the total value of \(\ell (C)\) over all configurations is
\[\sum_{k = 1}^{n}S_{k} = n(n - 1)\sum_{k = 1}^{n}\binom{n - 2}{k - 1} + n\sum_{k = 1}^{n}\binom{n - 1}{k - 1} = n(n - 1)2^{n - 2} + n2^{n - 1} = 2^{n}\frac{n(n + 1)}{4}.\]
Hence the required average is \(E(n) = \frac{n(n + 1)}{4}\) .
|
IMOSL-2019-C5
|
On a certain social network, there are 2019 users, some pairs of which are friends, where friendship is a symmetric relation. Initially, there are 1010 people with 1009 friends each and 1009 people with 1010 friends each. However, the friendships are rather unstable, so events of the following kind may happen repeatedly, one at a time:
Let \(A\) , \(B\) , and \(C\) be people such that \(A\) is friends with both \(B\) and \(C\) , but \(B\) and \(C\) are not friends; then \(B\) and \(C\) become friends, but \(A\) is no longer friends with them.
Prove that, regardless of the initial friendships, there exists a sequence of such events after which each user is friends with at most one other user.
|
Common remarks. The problem has an obvious rephrasing in terms of graph theory. One is given a graph \(G\) with 2019 vertices, 1010 of which have degree 1009 and 1009 of which have degree 1010. One is allowed to perform operations on \(G\) of the following kind:
Suppose that vertex \(A\) is adjacent to two distinct vertices \(B\) and \(C\) which are not adjacent to each other. Then one may remove the edges \(AB\) and \(AC\) from \(G\) and add the edge \(BC\) into \(G\) .
Call such an operation a refriending. One wants to prove that, via a sequence of such refriendings, one can reach a graph which is a disjoint union of single edges and vertices.
All of the solutions presented below will use this reformulation.
Solution 1. Note that the given graph is connected, since the total degree of any two vertices is at least 2018 and hence they are either adjacent or have at least one neighbour in common. Hence the given graph satisfies the following condition:
Every connected component of \(G\) with at least three vertices is not complete and has a vertex of odd degree. (1)
We will show that if a graph \(G\) satisfies condition (1) and has a vertex of degree at least 2, then there is a refriending on \(G\) that preserves condition (1). Since refriendings decrease the total number of edges of \(G\) , by using a sequence of such refriendings, we must reach a graph with maximal degree at most 1, so we are done.

Pick a vertex \(A\) of degree at least 2 in a connected component \(G'\) of \(G\) . Since no component of \(G\) with at least three vertices is complete we may assume that not all of the neighbours of \(A\) are adjacent to one another. (For example, pick a maximal complete subgraph \(K\) of \(G'\) . Some vertex \(A\) of \(K\) has a neighbour outside \(K\) , and this neighbour is not adjacent to every vertex of \(K\) by maximality.) Removing \(A\) from \(G\) splits \(G'\) into smaller connected components \(G_{1}, \ldots , G_{k}\) (possibly with \(k = 1\) ), to each of which \(A\) is connected by at least one edge. We divide into several cases.
Case 1: \(k \geq 2\) and \(A\) is connected to some \(G_{i}\) by at least two edges.
Choose a vertex \(B\) of \(G_{i}\) adjacent to \(A\) , and a vertex \(C\) in another component \(G_{j}\) adjacent to \(A\) . The vertices \(B\) and \(C\) are not adjacent, and hence removing edges \(AB\) and \(AC\) and adding in edge \(BC\) does not disconnect \(G'\) . It is easy to see that this preserves the condition, since the refriending does not change the parity of the degrees of vertices.
Case 2: \(k \geq 2\) and \(A\) is connected to each \(G_{i}\) by exactly one edge.
Consider the induced subgraph on any \(G_{i}\) and the vertex \(A\) . The vertex \(A\) has degree 1 in this subgraph; since the number of odd- degree vertices of a graph is always even, we see that \(G_{i}\) has a vertex of odd degree (in \(G\) ). Thus if we let \(B\) and \(C\) be any distinct neighbours of \(A\) , then removing edges \(AB\) and \(AC\) and adding in edge \(BC\) preserves the above condition: the refriending creates two new components, and if either of these components has at least three vertices, then it cannot be complete and must contain a vertex of odd degree (since each \(G_{i}\) does).
Case 3: \(k = 1\) and \(A\) is connected to \(G_{1}\) by at least three edges.
By assumption, \(A\) has two neighbours \(B\) and \(C\) which are not adjacent to one another. Removing edges \(AB\) and \(AC\) and adding in edge \(BC\) does not disconnect \(G'\) . We are then done as in Case 1.
Case 4: \(k = 1\) and \(A\) is connected to \(G_{1}\) by exactly two edges.
Let \(B\) and \(C\) be the two neighbours of \(A\) , which are not adjacent. Removing edges \(AB\) and \(AC\) and adding in edge \(BC\) results in two new components: one consisting of a single vertex; and the other containing a vertex of odd degree. We are done unless this second component would be a complete graph on at least 3 vertices. But in this case, \(G_{1}\) would be a complete graph minus the single edge \(BC\) , and hence has at least 4 vertices since \(G'\) is not a 4- cycle. If we let \(D\) be a third vertex of \(G_{1}\) , then removing edges \(BA\) and \(BD\) and adding in edge \(AD\) does not disconnect \(G'\) . We are then done as in Case 1.

Comment. In fact, condition (1) above precisely characterises those graphs which can be reduced to a graph of maximal degree \(\leq 1\) by a sequence of refriendings.
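This characterisation can be verified exhaustively for small graphs. The Python sketch below (an illustration, not part of the official solution) enumerates all graphs on 4 labelled vertices and checks that condition (1) holds exactly for those graphs from which a graph of maximal degree at most 1 is reachable by refriendings:

```python
from itertools import combinations

def moves(edges):
    # One refriending: A adjacent to both B and C, with B and C not
    # adjacent; delete AB and AC, add BC.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for a in adj:
        for b, c in combinations(sorted(adj[a]), 2):
            if c not in adj[b]:
                yield (edges - {frozenset({a, b}), frozenset({a, c})}) \
                      | {frozenset({b, c})}

def reducible(edges):
    # Search over graph states: can we reach maximal degree <= 1?
    seen, stack = {edges}, [edges]
    while stack:
        e = stack.pop()
        deg = {}
        for u, v in e:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        if all(d <= 1 for d in deg.values()):
            return True
        for nxt in moves(e):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def condition1(n, edges):
    # Condition (1): every component with at least three vertices is
    # not complete and has a vertex of odd degree.
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    left = set(range(n))
    while left:
        comp, stack = set(), [left.pop()]
        while stack:
            x = stack.pop()
            comp.add(x)
            for y in adj[x]:
                if y in left:
                    left.discard(y)
                    stack.append(y)
        if len(comp) >= 3:
            if all(v in adj[u] for u, v in combinations(comp, 2)):
                return False                  # complete component
            if all(len(adj[v]) % 2 == 0 for v in comp):
                return False                  # no odd-degree vertex
    return True

N = 4   # exhaustive check over all graphs on N labelled vertices
pairs = [frozenset(p) for p in combinations(range(N), 2)]
for mask in range(2 ** len(pairs)):
    edges = frozenset(p for i, p in enumerate(pairs) if mask >> i & 1)
    assert reducible(edges) == condition1(N, edges)
```

For instance, the 4-cycle fails condition (1) (all degrees even) and is indeed irreducible, while \(K_4\) minus an edge satisfies it and can be reduced.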
Solution 2. As in the previous solution, note that a refriending preserves the property that a graph has a vertex of odd degree and (trivially) the property that it is not complete; note also that our initial graph is connected. We describe an algorithm to reduce our initial graph to a graph of maximal degree at most 1, proceeding in two steps.
Step 1: There exists a sequence of refriendings reducing the graph to a tree.
Proof. Since the number of edges decreases with each refriending, it suffices to prove the following: as long as the graph contains a cycle, there exists a refriending such that the resulting graph is still connected. We will show that the graph in fact contains a cycle \(Z\) and vertices \(A, B, C\) such that \(A\) and \(B\) are adjacent in the cycle \(Z\) , \(C\) is not in \(Z\) , and is adjacent to \(A\) but not \(B\) . Removing edges \(AB\) and \(AC\) and adding in edge \(BC\) keeps the graph connected, so we are done.

To find this cycle \(Z\) and vertices \(A, B, C\) , we pursue one of two strategies. If the graph contains a triangle, we consider a largest complete subgraph \(K\) , which thus contains at least three vertices. Since the graph itself is not complete, there is a vertex \(C\) not in \(K\) connected to a vertex \(A\) of \(K\) . By maximality of \(K\) , there is a vertex \(B\) of \(K\) not connected to \(C\) , and hence we are done by choosing a cycle \(Z\) in \(K\) through the edge \(AB\) .

If the graph is triangle- free, we consider instead a smallest cycle \(Z\) . This cycle cannot be Hamiltonian (i.e. it cannot pass through every vertex of the graph), since otherwise by minimality the graph would then have no other edges, and hence would have even degree at every vertex. We may thus choose a vertex \(C\) not in \(Z\) adjacent to a vertex \(A\) of \(Z\) . Since the graph is triangle- free, \(C\) is not adjacent to any neighbour \(B\) of \(A\) in \(Z\) , and we are done. \(\square\)
Step 2: Any tree may be reduced to a disjoint union of single edges and vertices by a sequence of refriendings.
Proof. The refriending preserves the property of being acyclic. Hence, after applying a sequence of refriendings, we arrive at an acyclic graph in which it is impossible to perform any further refriendings. The maximal degree of any such graph is 1: if it had a vertex \(A\) with two neighbours \(B, C\) , then \(B\) and \(C\) would necessarily be nonadjacent since the graph is cycle- free, and so a refriending would be possible. Thus we reach a graph with maximal degree at most 1 as desired. \(\square\)
|
IMOSL-2019-C6
|
Let \(n > 1\) be an integer. Suppose we are given \(2n\) points in a plane such that no three of them are collinear. The points are to be labelled \(A_{1}\) , \(A_{2}\) , ..., \(A_{2n}\) in some order. We then consider the \(2n\) angles \(\angle A_{1}A_{2}A_{3}\) , \(\angle A_{2}A_{3}A_{4}\) , ..., \(\angle A_{2n - 2}A_{2n - 1}A_{2n}\) , \(\angle A_{2n - 1}A_{2n}A_{1}\) , \(\angle A_{2n}A_{1}A_{2}\) . We measure each angle in the way that gives the smallest positive value (i.e. between \(0^{\circ}\) and \(180^{\circ}\) ). Prove that there exists an ordering of the given points such that the resulting \(2n\) angles can be separated into two groups with the sum of one group of angles equal to the sum of the other group.
|
Comment. The first three solutions all use the same construction involving a line separating the points into groups of \(n\) points each, but give different proofs that this construction works. Although Solution 1 is very short, the Problem Selection Committee does not believe any of the solutions is easy to find and thus rates this as a problem of medium difficulty.
Solution 1. Let \(\ell\) be a line separating the points into two groups ( \(L\) and \(R\) ) with \(n\) points in each. Label the points \(A_{1}\) , \(A_{2}\) , ..., \(A_{2n}\) so that \(L = \{A_{1}, A_{3}, \ldots , A_{2n - 1}\}\) . We claim that this labelling works.
Take a line \(s = A_{2n}A_{1}\) .
(a) Rotate \(s\) around \(A_{1}\) until it passes through \(A_{2}\) ; the rotation is performed in a direction such that \(s\) is never parallel to \(\ell\) .
(b) Then rotate the new \(s\) around \(A_{2}\) until it passes through \(A_{3}\) in a similar manner.
(c) Perform \(2n - 2\) more such steps, after which \(s\) returns to its initial position.
The total (directed) rotation angle \(\Theta\) of \(s\) is clearly a multiple of \(180^{\circ}\) , since \(s\) returns to its initial position. On the other hand, \(s\) was never parallel to \(\ell\) , which is possible only if \(\Theta = 0\) . Moreover, in step \(i\) the line \(s\) is rotated by exactly the angle at the pivot vertex: the rays from that vertex to its two neighbours both cross \(\ell\) , so rotating through the angle between them never makes \(s\) parallel to \(\ell\) . Now it remains to partition all the \(2n\) angles into those where \(s\) is rotated anticlockwise, and the others.
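Comment. The construction is easy to test numerically: for random points with distinct \(x\) -coordinates, take \(\ell\) vertical, label the points alternately from the two sides, and sign each angle by the turn direction at its vertex; the signed sum is then \(0\) , so the anticlockwise and clockwise groups have equal sums. A Python sketch (an illustration, not part of the official solution):

```python
import math
import random

def signed_angle_sum(pts):
    # Sum of the angles of the closed path pts[0], ..., pts[-1], each taken
    # with sign +1 or -1 according to the turn direction at that vertex.
    total, m = 0.0, len(pts)
    for i in range(m):
        a, b, c = pts[i - 1], pts[i], pts[(i + 1) % m]
        ux, uy = a[0] - b[0], a[1] - b[1]
        vx, vy = c[0] - b[0], c[1] - b[1]
        total += math.atan2(ux * vy - uy * vx, ux * vx + uy * vy)
    return total

random.seed(0)
for _ in range(200):
    n = random.randint(2, 6)
    pts = [(random.random(), random.random()) for _ in range(2 * n)]
    pts.sort()                    # x-coordinates are distinct almost surely,
    L, R = pts[:n], pts[n:]       # so a vertical line separates L from R
    order = []
    for i in range(n):            # odd positions from L, even from R
        order.append(L[i])
        order.append(R[i])
    assert abs(signed_angle_sum(order)) < 1e-6
```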
Solution 2. When tracing a cyclic path through the \(A_{i}\) in order, with straight line segments between consecutive points, let \(\theta_{i}\) be the exterior angle at \(A_{i}\) , with a sign convention that it is positive if the path turns left and negative if the path turns right. Then \(\sum_{i = 1}^{2n} \theta_{i} = 360k^{\circ}\) for some integer \(k\) . Let \(\phi_{i} = \angle A_{i - 1}A_{i}A_{i + 1}\) (indices mod \(2n\) ), defined as in the problem; thus \(\phi_{i} = 180^{\circ} - |\theta_{i}|\) .
Let \(L\) be the set of \(i\) for which the path turns left at \(A_{i}\) and let \(R\) be the set for which it turns right. Then \(S = \sum_{i \in L} \phi_{i} - \sum_{i \in R} \phi_{i} = (180(|L| - |R|) - 360k)^{\circ}\) , which is a multiple of \(360^{\circ}\) since the number of points is even. We will show that the points can be labelled such that \(S = 0\) , in which case \(L\) and \(R\) satisfy the required condition of the problem.
Note that the value of \(S\) is defined for a slightly larger class of configurations: it is OK for two points to coincide, as long as they are not consecutive, and OK for three points to be collinear, as long as \(A_{i}\) , \(A_{i + 1}\) and \(A_{i + 2}\) do not appear on a line in that order. In what follows it will be convenient, although not strictly necessary, to consider such configurations.
Consider how \(S\) changes if a single one of the \(A_{i}\) is moved along some straight- line path (not passing through any \(A_{j}\) and not lying on any line \(A_{j}A_{k}\) , but possibly crossing such lines). Because \(S\) is a multiple of \(360^{\circ}\) , and the angles change continuously, \(S\) can only change when a point moves between \(R\) and \(L\) . Furthermore, if \(\phi_{j} = 0\) when \(A_{j}\) moves between \(R\) and \(L\) , \(S\) is unchanged; it only changes if \(\phi_{j} = 180^{\circ}\) when \(A_{j}\) moves between those sets.
For any starting choice of points, we will now construct a new configuration, with labels such that \(S = 0\) , that can be perturbed into the original one without any \(\phi_{i}\) passing through \(180^{\circ}\) , so that \(S = 0\) for the original configuration with those labels as well.
Take some line such that there are \(n\) points on each side of that line. The new configuration has \(n\) copies of a single point on each side of the line, and a path that alternates between
sides of the line; all angles are 0, so this configuration has \(S = 0\) . Perturbing the points into their original positions, while keeping each point on its side of the line, no angle \(\phi_{i}\) can pass through \(180^{\circ}\) , because no straight line can go from one side of the line to the other and back. So the perturbation process leaves \(S = 0\) .
Comment. More complicated variants of this solution are also possible; for example, a path defined using four quadrants of the plane rather than just two half- planes.
Solution 3. First, let \(\ell\) be a line in the plane such that \(n\) of the given points lie on one side of it and the other \(n\) points on the other side. For convenience, assume \(\ell\) is horizontal (otherwise, we can rotate the plane). Then we can use the terms "above", "below", "left" and "right" in the usual way. We denote the \(n\) points above the line in an arbitrary order as \(P_{1}\) , \(P_{2}\) , ..., \(P_{n}\) , and the \(n\) points below the line as \(Q_{1}\) , \(Q_{2}\) , ..., \(Q_{n}\) .
If we connect \(P_{i}\) and \(Q_{j}\) with a line segment, the line segment will intersect with the line \(\ell\) . Denote the intersection as \(I_{ij}\) . If \(P_{i}\) is connected to \(Q_{j}\) and \(Q_{k}\) , where \(j < k\) , then \(I_{ij}\) and \(I_{ik}\) are two different points, because \(P_{i}\) , \(Q_{j}\) and \(Q_{k}\) are not collinear.
Now we define a "sign" for each angle \(\angle Q_{j}P_{i}Q_{k}\) . Assume \(j < k\) . We specify that the sign is positive for the following two cases:
- if \(i\) is odd and \(I_{ij}\) is to the left of \(I_{ik}\) ,
- if \(i\) is even and \(I_{ij}\) is to the right of \(I_{ik}\) .
Otherwise the sign of the angle is negative. If \(j > k\) , then the sign of \(\angle Q_{j}P_{i}Q_{k}\) is taken to be the same as for \(\angle Q_{k}P_{i}Q_{j}\) .
Similarly, we can define the sign of \(\angle P_{j}Q_{i}P_{k}\) with \(j < k\) (or equivalently \(\angle P_{k}Q_{i}P_{j}\) ). For example, it is positive when \(i\) is odd and \(I_{ji}\) is to the left of \(I_{ki}\) .
Henceforth, whenever we use the notation \(\angle Q_{j}P_{i}Q_{k}\) or \(\angle P_{j}Q_{i}P_{k}\) for a numerical quantity, it is understood to denote either the (geometric) measure of the angle or the negative of this measure, depending on the sign as specified above.
We now have the following important fact for signed angle measures:
\[\angle Q_{i_{1}}P_{k}Q_{i_{3}} = \angle Q_{i_{1}}P_{k}Q_{i_{2}} + \angle Q_{i_{2}}P_{k}Q_{i_{3}} \quad (1)\]
for all points \(P_{k}\) , \(Q_{i_{1}}\) , \(Q_{i_{2}}\) and \(Q_{i_{3}}\) with \(i_{1} < i_{2} < i_{3}\) . The following figure shows a "natural" arrangement of the points. Equation (1) still holds for any other arrangement, as can be easily verified.

Similarly, we have
\[\angle P_{i_{1}}Q_{k}P_{i_{3}} = \angle P_{i_{1}}Q_{k}P_{i_{2}} + \angle P_{i_{2}}Q_{k}P_{i_{3}}, \quad (2)\]
for all points \(Q_{k}\) , \(P_{i_{1}}\) , \(P_{i_{2}}\) and \(P_{i_{3}}\) , with \(i_{1} < i_{2} < i_{3}\) .
We are now ready to specify the desired ordering \(A_{1}, \ldots , A_{2n}\) of the points:
- if \(i \leqslant n\) is odd, put \(A_{i} = P_{i}\) and \(A_{2n + 1 - i} = Q_{i}\) ;
- if \(i \leqslant n\) is even, put \(A_{i} = Q_{i}\) and \(A_{2n + 1 - i} = P_{i}\) .
For example, for \(n = 3\) this ordering is \(P_{1}, Q_{2}, P_{3}, Q_{3}, P_{2}, Q_{1}\) . This sequence alternates between \(P\) 's and \(Q\) 's, so the above conventions specify a sign for each of the angles \(A_{i - 1}A_{i}A_{i + 1}\) . We claim that the sum of these \(2n\) signed angles equals 0. If we can show this, it would complete the proof.
We prove the claim by induction. For brevity, we use the notation \(\angle P_{i}\) to denote whichever of the \(2n\) angles has its vertex at \(P_{i}\) , and \(\angle Q_{i}\) similarly.
First let \(n = 2\) . If the four points can be arranged to form a convex quadrilateral, then the four line segments \(P_{1}Q_{1}\) , \(P_{1}Q_{2}\) , \(P_{2}Q_{1}\) and \(P_{2}Q_{2}\) constitute a self- intersecting quadrilateral. We use several figures to illustrate the possible cases.
The following figure is one possible arrangement of the points.

Then \(\angle P_{1}\) and \(\angle Q_{1}\) are positive, \(\angle P_{2}\) and \(\angle Q_{2}\) are negative, and we have
\[|\angle P_{1}| + |\angle Q_{1}| = |\angle P_{2}| + |\angle Q_{2}|.\]
With signed measures, we have
\[\angle P_{1} + \angle Q_{1} + \angle P_{2} + \angle Q_{2} = 0. \quad (3)\]
If we switch the labels of \(P_{1}\) and \(P_{2}\) , we have the following picture:

Switching labels \(P_{1}\) and \(P_{2}\) has the effect of flipping the sign of all four angles (as well as swapping the magnitudes on the relabelled points); that is, the new values of \((\angle P_{1}, \angle P_{2}, \angle Q_{1}, \angle Q_{2})\) equal the old values of \((- \angle P_{2}, - \angle P_{1}, - \angle Q_{1}, - \angle Q_{2})\) . Consequently, equation (3) still holds. Similarly, when switching the labels of \(Q_{1}\) and \(Q_{2}\) , or both the \(P\) 's and the \(Q\) 's, equation (3) still holds.
The remaining subcase of \(n = 2\) is that one point lies inside the triangle formed by the other three. We have the following picture.

We have
\[|\angle P_{1}| + |\angle Q_{1}| + |\angle Q_{2}| = |\angle P_{2}|,\]
and equation (3) holds.
Again, switching the labels for \(P\) 's or the \(Q\) 's will not affect the validity of equation (3). Also, if the point lying inside the triangle of the other three is one of the \(Q\) 's rather than the \(P\) 's, the result still holds, since our sign convention is preserved when we relabel \(Q\) 's as \(P\) 's and vice- versa and reflect across \(\ell\) .
We have completed the proof of the claim for \(n = 2\) .
Assume the claim holds for \(n = k\) , and we wish to prove it for \(n = k + 1\) . Suppose we are given our \(2(k + 1)\) points. First ignore \(P_{k + 1}\) and \(Q_{k + 1}\) , and form \(2k\) angles from \(P_{1}\) , ..., \(P_{k}\) , \(Q_{1}\) , ..., \(Q_{k}\) as in the \(n = k\) case. By the induction hypothesis we have
\[\sum_{i = 1}^{k}(\angle P_{i} + \angle Q_{i}) = 0.\]
When we add in the two points \(P_{k + 1}\) and \(Q_{k + 1}\) , this changes our angles as follows:
- the angle at \(P_{k}\) changes from \(\angle Q_{k - 1}P_{k}Q_{k}\) to \(\angle Q_{k - 1}P_{k}Q_{k + 1}\) ;
- the angle at \(Q_{k}\) changes from \(\angle P_{k - 1}Q_{k}P_{k}\) to \(\angle P_{k - 1}Q_{k}P_{k + 1}\) ;
- two new angles \(\angle Q_{k}P_{k + 1}Q_{k + 1}\) and \(\angle P_{k}Q_{k + 1}P_{k + 1}\) are added.
We need to prove the changes have no impact on the total sum. In other words, we need to prove
\[(\angle Q_{k - 1}P_{k}Q_{k + 1} - \angle Q_{k - 1}P_{k}Q_{k}) + (\angle P_{k - 1}Q_{k}P_{k + 1} - \angle P_{k - 1}Q_{k}P_{k}) + (\angle P_{k + 1} + \angle Q_{k + 1}) = 0. \quad (4)\]
In fact, from equations (1) and (2), we have
\[\angle Q_{k - 1}P_{k}Q_{k + 1} - \angle Q_{k - 1}P_{k}Q_{k} = \angle Q_{k}P_{k}Q_{k + 1},\]
and
\[\angle P_{k - 1}Q_{k}P_{k + 1} - \angle P_{k - 1}Q_{k}P_{k} = \angle P_{k}Q_{k}P_{k + 1}.\]
Therefore, the left hand side of equation (4) becomes \(\angle Q_{k}P_{k}Q_{k + 1} + \angle P_{k}Q_{k}P_{k + 1} + \angle Q_{k}P_{k + 1}Q_{k + 1} + \angle P_{k}Q_{k + 1}P_{k + 1}\) , which equals 0, simply by applying the \(n = 2\) case of the claim. This completes the induction.
Solution 4. We shall think instead of the problem as asking us to assign a weight \(\pm 1\) to each angle, such that the weighted sum of all the angles is zero.
Given an ordering \(A_{1},\ldots ,A_{2n}\) of the points, we shall assign weights according to the following recipe: walk in order from point to point, and assign the left turns \(+1\) and the right turns \(- 1\) . This is the same weighting as in Solution 3, and as in that solution, the weighted sum is a multiple of \(360^{\circ}\) .
We now aim to show the following:
Lemma. Transposing any two consecutive points in the ordering changes the weighted sum by \(\pm 360^{\circ}\) or 0.
Knowing that, we can conclude quickly: if the ordering \(A_{1},\ldots ,A_{2n}\) has weighted angle sum \(360k^{\circ}\) , then the ordering \(A_{2n},\ldots ,A_{1}\) has weighted angle sum \(- 360k^{\circ}\) (since the angles are the same, but left turns and right turns are exchanged). We can reverse the ordering of \(A_{1}\) , ..., \(A_{2n}\) by a sequence of transpositions of consecutive points, and in doing so the weighted angle sum must become zero somewhere along the way.
We now prove that lemma:
Proof. Transposing two points amounts to taking a section \(A_{k}A_{k + 1}A_{k + 2}A_{k + 3}\) as depicted, reversing the central line segment \(A_{k + 1}A_{k + 2}\) , and replacing its two neighbours with the dotted lines.

<center>Figure 1: Transposing two consecutive vertices: before (left) and afterwards (right) </center>
In each triangle, we alter the sum by \(\pm 180^{\circ}\) . Indeed, using (anticlockwise) directed angles modulo \(360^{\circ}\) , we either add or subtract all three angles of each triangle.
Hence both triangles together alter the sum by \(\pm 180^{\circ} \pm 180^{\circ}\) , which is \(\pm 360^{\circ}\) or \(0\) .
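Comment. Both the lemma and the fact that the weighted sum is a multiple of \(360^{\circ}\) can be checked numerically. In the Python sketch below (an illustration, not part of the official solution), angles are in radians, so the weighted sum is a multiple of \(2\pi\) and changes by \(\pm 2\pi\) or \(0\) under a transposition of consecutive points:

```python
import math
import random

def weighted_sum(pts):
    # Each angle of the closed path, weighted +1 or -1 by the turn
    # direction at that vertex (the overall sign convention is immaterial).
    total, m = 0.0, len(pts)
    for i in range(m):
        a, b, c = pts[i - 1], pts[i], pts[(i + 1) % m]
        ux, uy = a[0] - b[0], a[1] - b[1]
        vx, vy = c[0] - b[0], c[1] - b[1]
        total += math.atan2(ux * vy - uy * vx, ux * vx + uy * vy)
    return total

TWO_PI = 2 * math.pi
random.seed(1)
for _ in range(300):
    m = 2 * random.randint(2, 5)
    pts = [(random.random(), random.random()) for _ in range(m)]
    w = weighted_sum(pts)
    assert abs(w - TWO_PI * round(w / TWO_PI)) < 1e-6  # multiple of 360 deg
    k = random.randrange(m - 1)
    swapped = pts[:]
    swapped[k], swapped[k + 1] = swapped[k + 1], swapped[k]
    diff = weighted_sum(swapped) - w
    assert min(abs(diff - d) for d in (-TWO_PI, 0.0, TWO_PI)) < 1e-6
```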
|
IMOSL-2019-C8
|
Alice has a map of Wonderland, a country consisting of \(n \geq 2\) towns. For every pair of towns, there is a narrow road going from one town to the other. One day, all the roads are declared to be "one way" only. Alice has no information on the direction of the roads, but the King of Hearts has offered to help her. She is allowed to ask him a number of questions. For each question in turn, Alice chooses a pair of towns and the King of Hearts tells her the direction of the road connecting those two towns.
Alice wants to know whether there is at least one town in Wonderland with at most one outgoing road. Prove that she can always find out by asking at most \(4n\) questions.
|
Comment. This problem could be posed with an explicit statement about points being awarded for weaker bounds \(cn\) for some \(c > 4\) , in the style of IMO 2014 Problem 6.
(Thailand)
Solution. We will show that Alice needs to ask at most \(4n - 7\) questions. Her strategy has the following phases. In what follows, \(S\) is the set of towns that Alice, so far, does not know to have more than one outgoing road (so initially \(|S| = n\) ).
Phase 1. Alice chooses any two towns, say \(A\) and \(B\) . Without loss of generality, suppose that the King of Hearts' answer is that the road goes from \(A\) to \(B\) .
At the end of this phase, Alice has asked 1 question.
Phase 2. During this phase there is a single (variable) town \(T\) that is known to have at least one incoming road but not yet known to have any outgoing roads. Initially, \(T\) is \(B\) . Alice does the following \(n - 2\) times: she picks a town \(X\) she has not asked about before, and asks the direction of the road between \(T\) and \(X\) . If it is from \(X\) to \(T\) , \(T\) is unchanged; if it is from \(T\) to \(X\) , \(X\) becomes the new choice of town \(T\) , as the previous \(T\) is now known to have an outgoing road.
At the end of this phase, Alice has asked a total of \(n - 1\) questions. The final town \(T\) is not yet known to have any outgoing roads, while every other town has exactly one outgoing road known. The undirected graph of roads whose directions are known is a tree.
Phase 3. During this phase, Alice asks about the directions of all roads between \(T\) and another town she has not previously asked about, stopping if she finds two outgoing roads from \(T\) . This phase involves at most \(n - 2\) questions. If she does not find two outgoing roads from \(T\) , she has answered her original question with at most \(2n - 3 \leq 4n - 7\) questions, so in what follows we suppose that she does find two outgoing roads, asking a total of \(k\) questions in this phase, where \(2 \leq k \leq n - 2\) (and thus \(n \geq 4\) for what follows).
For every question where the road goes towards \(T\) , the town at the other end is removed from \(S\) (as it already had one outgoing road known), while the last question resulted in \(T\) being removed from \(S\) . So at the end of this phase, \(|S| = n - k + 1\) , while a total of \(n + k - 1\) questions have been asked. Furthermore, the undirected graph of roads within \(S\) whose directions are known contains no cycles (as \(T\) is no longer a member of \(S\) , all questions asked in this phase involved \(T\) and the graph was a tree before this phase started). Every town in \(S\) has exactly one outgoing road known (not necessarily to another town in \(S\) ).
Phase 4. During this phase, Alice repeatedly picks any pair of towns in \(S\) for which she does not know the direction of the road between them. Because every town in \(S\) has exactly one outgoing road known, this always results in the removal of one of those two towns from \(S\) . Because there are no cycles in the graph of roads of known direction within \(S\) , this can continue until there are at most 2 towns left in \(S\) .
If it ends with \(t\) towns left, \(n - k + 1 - t\) questions were asked in this phase, so a total of \(2n - t\) questions have been asked.
Phase 5. During this phase, Alice asks about all the roads from the remaining towns in \(S\) that she has not previously asked about. She has definitely already asked about any road between those towns (if \(t = 2\) ). She must also have asked in one of the first two phases about
at least one other road involving one of those towns (as those phases resulted in a tree with \(n > 2\) vertices). So she asks at most \(t(n - t) - 1\) questions in this phase.
At the end of this phase, Alice knows whether any town has at most one outgoing road. If \(t = 1\) , at most \(3n - 3 \leqslant 4n - 7\) questions were needed in total, while if \(t = 2\) , at most \(4n - 7\) questions were needed in total.
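Comment. Alice's five-phase strategy is straightforward to implement and test against random tournaments. The Python sketch below (an illustration, not part of the official solution; repeated questions are cached, so only distinct questions are counted) checks both the bound \(4n - 7\) and the correctness of her answer:

```python
import random
from itertools import combinations

def alice(n, direction):
    # direction[(u, v)] (for u < v) is True iff the road goes from u to v.
    # Returns (number of distinct questions asked, Alice's answer).
    known = {}                      # frozenset({u, v}) -> (tail, head)
    outdeg = [0] * n                # known outgoing roads per town

    def ask(u, v):
        key = frozenset((u, v))
        if key not in known:
            a, b = min(u, v), max(u, v)
            known[key] = (a, b) if direction[(a, b)] else (b, a)
            outdeg[known[key][0]] += 1
        return known[key]

    # Phase 1: one question gives a town T with a known incoming road.
    _, T = ask(0, 1)
    # Phase 2: n - 2 questions; T never has a known outgoing road.
    for X in range(2, n):
        if ask(T, X)[0] == T:
            T = X
    # Phase 3: ask the remaining roads at T, stopping at two outgoing.
    for Y in range(n):
        if Y != T and frozenset((T, Y)) not in known:
            ask(T, Y)
            if outdeg[T] == 2:
                break
    if outdeg[T] < 2:
        return len(known), True     # T has at most one outgoing road
    # Phase 4: shrink S = towns not yet known to have two outgoing roads.
    while True:
        S = [u for u in range(n) if outdeg[u] <= 1]
        pair = next(((u, v) for u, v in combinations(S, 2)
                     if frozenset((u, v)) not in known), None)
        if pair is None:
            break
        ask(*pair)                  # removes exactly one town from S
    # Phase 5: ask all remaining roads at the (at most two) towns of S.
    for u in S:
        for v in range(n):
            if v != u:
                ask(u, v)
    return len(known), any(outdeg[u] <= 1 for u in S)

random.seed(0)
for _ in range(300):
    n = random.randint(2, 12)
    direction = {(u, v): random.random() < 0.5
                 for u, v in combinations(range(n), 2)}
    true_out = [sum(1 for v in range(n) if v != u and
                    (direction[(u, v)] if u < v else not direction[(v, u)]))
                for u in range(n)]
    questions, answer = alice(n, direction)
    assert questions <= 4 * n - 7
    assert answer == any(d <= 1 for d in true_out)
```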
Comment 1. The version of this problem originally submitted asked only for an upper bound of \(5n\) , which is much simpler to prove. The Problem Selection Committee preferred a version with an asymptotically optimal constant. In the following comment, we will show that the constant is optimal.
Comment 2. We will show that Alice cannot always find out by asking at most \(4n - 3(\log_2 n) - 15\) questions, if \(n \geqslant 8\) .
To show this, we suppose the King of Hearts is choosing the directions as he goes along, only picking the direction of a road when Alice asks about it for the first time. We provide a strategy for the King of Hearts that ensures that, after the given number of questions, the map is still consistent both with the existence of a town with at most one outgoing road, and with the nonexistence of such a town. His strategy has the following phases. When describing how the King of Hearts' answer to a question is determined below, we always assume he is being asked about a road for the first time (otherwise, he just repeats his previous answer for that road). This strategy is described throughout in graph- theoretic terms (vertices and edges rather than towns and roads).
Phase 1. In this phase, we consider the undirected graph formed by edges whose directions are known. The phase terminates when there are exactly 8 connected components whose undirected graphs are trees. The following invariant is maintained: in a component with \(k\) vertices whose undirected graph is a tree, every vertex has at most \(\lfloor \log_2 k \rfloor\) edges into it.
- If the King of Hearts is asked about an edge between two vertices in the same component, or about an edge between two components at least one of which is not a tree, he chooses any direction for that edge arbitrarily.- If he is asked about an edge between a vertex in component \(A\) that has \(a\) vertices and is a tree and a vertex in component \(B\) that has \(b\) vertices and is a tree, suppose without loss of generality that \(a \geqslant b\) . He then chooses the edge to go from \(A\) to \(B\) . In this case, the new number of edges into any vertex is at most \(\max \{\lfloor \log_2 a \rfloor , \lfloor \log_2 b \rfloor + 1\} \leqslant \lfloor \log_2 (a + b) \rfloor\) .
In all cases, the invariant is preserved, and the number of tree components either remains unchanged or goes down by 1. Assuming Alice does not repeat questions, the process must eventually terminate with 8 tree components, and at least \(n - 8\) questions having been asked.
Note that each tree component contains at least one vertex with no outgoing edges. Colour one such vertex in each tree component red.
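The merging rule of Phase 1 and its indegree invariant can be replayed in a small simulation. This is only an illustrative sketch: the component representation, the random order of Alice's questions, and the choice of endpoint in the smaller tree are our own modelling choices, not part of the solution.

```python
import math
import random

random.seed(1)

def simulate(n):
    """Merge singleton tree components per the King's rule until 8 remain."""
    # Each tree component: (vertex set, indegree per vertex).
    comps = [({v}, {v: 0}) for v in range(n)]
    while len(comps) > 8:
        # Alice asks about an edge joining two (randomly chosen) tree components.
        i, j = random.sample(range(len(comps)), 2)
        A, B = comps[i], comps[j]
        if len(A[0]) < len(B[0]):
            A, B = B, A                    # w.l.o.g. |A| >= |B|
        v = min(B[0])                      # the endpoint lying in the smaller tree B
        B[1][v] += 1                       # the King orients the edge from A into B
        merged = (A[0] | B[0], {**A[1], **B[1]})
        comps = [c for k, c in enumerate(comps) if k not in (i, j)]
        comps.append(merged)
        # Invariant: in a tree component with k vertices,
        # every vertex has at most floor(log2 k) incoming edges.
        k = len(merged[0])
        assert max(merged[1].values()) <= math.floor(math.log2(k))
    return comps

comps = simulate(64)
questions = 64 - len(comps)   # one first-time question per merge, n - 8 in total
```

Each merge consumes one first-time question, so the run ends with exactly \(n - 8\) questions asked between tree components, matching the count in the text.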
Phase 2. Let \(V_1\), \(V_2\) and \(V_3\) be the three red vertices whose components are smallest (so their components together have at most \(\left\lfloor \frac{3}{8} n \right\rfloor\) vertices, with each component having at most \(\left\lfloor \frac{3}{8} n - 2 \right\rfloor\) vertices). Let sets \(C_1\), \(C_2\), ... be the connected components after removing the \(V_j\). By construction, there are no edges with known direction between \(C_i\) and \(C_j\) for \(i \neq j\), and there are at least five such components.
If at any point during this phase, the King of Hearts is asked about an edge within one of the \(C_i\) , he chooses an arbitrary direction. If he is asked about an edge between \(C_i\) and \(C_j\) for \(i \neq j\) , he answers so that all edges go from \(C_i\) to \(C_{i + 1}\) and \(C_{i + 2}\) , with indices taken modulo the number of components, and chooses arbitrarily for other pairs. This ensures that all vertices other than the \(V_j\) will have more than one outgoing edge.
For edges involving one of the \(V_j\) he answers as follows, so as to remain consistent for as long as possible with both possibilities for whether one of those vertices has at most one outgoing edge. Note that as they were red vertices, they have no outgoing edges at the start of this phase. For edges between two of the \(V_j\) , he answers that the edges go from \(V_1\) to \(V_2\) , from \(V_2\) to \(V_3\) and from \(V_3\) to \(V_1\) . For edges between \(V_j\) and some other vertex, he always answers that the edge goes into \(V_j\) , except for the last such edge for which he is asked the question for any given \(V_j\) , for which he answers that the
edge goes out of \(V_{j}\) . Thus, as long as at least one of the \(V_{j}\) has not had the question answered for all the vertices that are not among the \(V_{j}\) , his answers are still compatible both with all vertices having more than one outgoing edge, and with that \(V_{j}\) having only one outgoing edge.
At the start of this phase, each of the \(V_{j}\) has at most \(\left\lfloor \log_{2}\left\lfloor \frac{3}{8} n - 2\right\rfloor \right\rfloor < (\log_{2}n) - 1\) incoming edges. Thus, Alice cannot determine whether some vertex has only one outgoing edge within \(3(n - 3 - ((\log_{2}n) - 1)) - 1\) questions in this phase; that is, \(4n - 3(\log_{2}n) - 15\) questions total.
Comment 3. We can also improve the upper bound slightly, to \(4n - 2(\log_{2}n) + 1\) . (We do not know where the precise minimum number of questions lies between \(4n - 3(\log_{2}n) + O(1)\) and \(4n - 2(\log_{2}n) + O(1)\) .) Suppose \(n \geq 5\) (otherwise no questions are required at all).
To do this, we replace Phases 1 and 2 of the given solution with a different strategy that also results in a spanning tree where one vertex \(V\) is not known to have any outgoing edges, and all other vertices have exactly one outgoing edge known, but where there is more control over the numbers of incoming edges. In Phases 3 and 4 we then take more care about the order in which pairs of towns are chosen, to ensure that each of the remaining towns has already had a question asked about at least \(\log_{2}n + O(1)\) edges.
Define trees \(T_{m}\) with \(2^{m}\) vertices, exactly one of which (the root) has no outgoing edges and the rest of which have exactly one outgoing edge, as follows: \(T_{0}\) is a single vertex, while \(T_{m}\) is constructed by joining the roots of two copies of \(T_{m - 1}\) with an edge in either direction. If \(n = 2^{m}\) we can readily ask \(n - 1\) questions, resulting in a tree \(T_{m}\) for the edges with known direction: first ask about \(2^{m - 1}\) disjoint pairs of vertices, then about \(2^{m - 2}\) disjoint pairs of the roots of the resulting \(T_{1}\) trees, and so on. For the general case, where \(n\) is not a power of 2, after \(k\) stages of this process we have \(\left\lfloor n / 2^{k}\right\rfloor\) trees, each of which is like \(T_{k}\) but may have some extra vertices (but still a unique root). If there are an even number of trees, ask about pairs of their roots. If there are an odd number of trees (greater than 1), pair up the roots so that a single \(T_{k}\) is left over, and ask about its root together with the root of one of the \(T_{k + 1}\) trees.
Say \(m = \lfloor \log_{2}n\rfloor\) . The result of that process is a single \(T_{m}\) tree, possibly with some extra vertices but still a unique root \(V\) . That root has at least \(m\) incoming edges, and we may list vertices \(V_{0}\) , ..., \(V_{m - 1}\) with edges to \(V\) , such that, for all \(0 \leq i < m\) , vertex \(V_{i}\) itself has at least \(i\) incoming edges.
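For \(n\) a power of \(2\), the pairing process above can be simulated directly. The bookkeeping below (root lists, indegree counters) is our own, but the question count \(n - 1\) and the incoming-edge profile of the final root match the description in the text:

```python
def build(n):
    """Repeatedly pair up roots, orienting each asked edge into one of the
    two roots; return the final root, indegrees, edges, and question count."""
    roots = list(range(n))
    indeg = [0] * n
    edges = []                       # (u, w): edge oriented from u into w
    questions = 0
    while len(roots) > 1:
        nxt = []
        for i in range(0, len(roots) - 1, 2):
            a, b = roots[i], roots[i + 1]
            edges.append((b, a))     # the asked edge goes from b into a's root
            indeg[a] += 1
            questions += 1
            nxt.append(a)
        if len(roots) % 2 == 1:      # leftover tree: its root joins the next round
            nxt.append(roots[-1])
        roots = nxt
    return roots[0], indeg, edges, questions

root, indeg, edges, questions = build(16)   # n = 2^4, so m = 4
into_root = sorted(indeg[u] for u, w in edges if w == root)
```

For \(n = 16\) this asks \(15\) questions, the root has \(4\) incoming edges, and the vertices with edges into the root have \(0, 1, 2, 3\) incoming edges of their own, exactly the profile of the \(V_0, \ldots, V_{m-1}\) described below.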
Now divide the vertices other than \(V\) into two parts: \(A\) has all vertices at an odd distance from \(V\) and \(B\) has all the vertices at an even distance from \(V\). Both \(A\) and \(B\) are nonempty; \(A\) contains the \(V_{i}\), while \(B\) contains a sequence of vertices with at least 0, 1, ..., \(m - 2\) incoming edges respectively, similar to the \(V_{i}\). There are no edges with known direction within \(A\) or within \(B\).
In Phase 3, ask about edges between \(V\) and other vertices: first those in \(B\), in order of increasing number of incoming edges to the other vertex, then those in \(A\), again in order of increasing number of incoming edges; this involves asking at most \(n - 1 - m\) questions in this phase. If two outgoing edges from \(V\) are not found, at most \(2n - 2 - m \leq 4n - 2(\log_{2}n) + 1\) questions were needed in total, so we suppose that two outgoing edges were found, with \(k\) questions asked in this phase, where \(2 \leq k \leq n - 1 - m\). The state of \(S\) is as described in the solution above, with the additional property that, since \(S\) must still contain all vertices with edges to \(V\), it contains the vertices \(V_{i}\) described above.
In Phase 4, consider the vertices left in \(B\) , in increasing order of number of edges incoming to a vertex. If \(s\) is the least number of incoming edges to such a vertex, then, for any \(s \leq t \leq m - 2\) , there are at least \(m - t - 2\) vertices with more than \(t\) incoming edges. Repeatedly asking about the pair of vertices left in \(B\) with the least numbers of incoming edges results in a single vertex left over (if any were in \(B\) at all at the start of this phase) with at least \(m - 2\) incoming edges. Doing the same with \(A\) (which must be nonempty) leaves a vertex with at least \(m - 1\) incoming edges.
Thus if only \(A\) is nonempty we ask at most \(n - m\) questions in Phase 5, so in total at most \(3n - m - 1\) questions, while if both are nonempty we ask at most \(2n - 2m + 1\) questions in Phase 5, so in total at most \(4n - 2m - 1 < 4n - 2(\log_{2}n) + 1\) questions.
|
IMOSL-2019-G1
|
Let \(A B C\) be a triangle. Circle \(\Gamma\) passes through \(A\) , meets segments \(A B\) and \(A C\) again at points \(D\) and \(E\) respectively, and intersects segment \(B C\) at \(F\) and \(G\) such that \(F\) lies between \(B\) and \(G\) . The tangent to circle \(B D F\) at \(F\) and the tangent to circle \(C E G\) at \(G\) meet at point \(T\) . Suppose that points \(A\) and \(T\) are distinct. Prove that line \(A T\) is parallel to \(B C\) .
|
Solution. Notice that \(\angle T F B = \angle F D A\) because \(F T\) is tangent to circle \(B D F\), and moreover \(\angle F D A = \angle C G A\) because quadrilateral \(A D F G\) is cyclic. Similarly, \(\angle T G B = \angle G E C\) because \(G T\) is tangent to circle \(C E G\), and \(\angle G E C = \angle C F A\) because quadrilateral \(A F G E\) is cyclic. Hence,
\[\angle T F B = \angle C G A\quad \mathrm{and}\quad \angle T G B = \angle C F A. \quad (1)\]

Triangles \(F G A\) and \(G F T\) have a common side \(F G\), and by (1) their angles at \(F\) and \(G\) are the same. Hence these triangles are congruent, so their altitudes from \(A\) and \(T\), respectively, are equal, and therefore \(A T\) is parallel to line \(B F G C\).
Comment. Alternatively, we can prove first that \(T\) lies on \(\Gamma\) . For example, this can be done by showing that \(\angle A F T = \angle A G T\) using (1). Then the statement follows as \(\angle T A F = \angle T G F = \angle G F A\) .
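The statement admits a quick coordinate sanity check. The specific points chosen on \(\Gamma\) below are an arbitrary valid configuration (our choice, not part of the proof); the check computes \(T\) from the two tangents and verifies that \(AT \parallel BC\):

```python
import math

def pt(deg):
    """A point of the unit circle Γ at the given angle in degrees."""
    t = math.radians(deg)
    return (math.cos(t), math.sin(t))

def sub(u, v): return (u[0] - v[0], u[1] - v[1])
def cross(u, v): return u[0] * v[1] - u[1] * v[0]

def line_inter(P, u, Q, v):
    """Intersection of lines P + t*u and Q + s*v."""
    t = cross(sub(Q, P), v) / cross(u, v)
    return (P[0] + t * u[0], P[1] + t * u[1])

def circumcenter(P, Q, R):
    ax, ay = P; bx, by = Q; cx, cy = R
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy)

# Points of Γ chosen so that D is on segment AB, E on AC, F between B and G.
A, D, E, F, G = pt(90), pt(150), pt(20), pt(205), pt(325)
B = line_inter(A, sub(D, A), F, sub(G, F))        # AD extended meets line FG
C = line_inter(A, sub(E, A), F, sub(G, F))        # AE extended meets line FG

def tangent_dir(P, O):
    """Direction of the tangent at P to a circle centred at O."""
    return (-(P[1] - O[1]), P[0] - O[0])

T = line_inter(F, tangent_dir(F, circumcenter(B, D, F)),
               G, tangent_dir(G, circumcenter(C, E, G)))
defect = cross(sub(T, A), sub(C, B))              # zero iff AT is parallel to BC
```

The cross product `defect` vanishes (up to floating-point error), confirming the parallelism for this configuration.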
|
IMOSL-2019-G2
|
Let \(ABC\) be an acute-angled triangle and let \(D\) , \(E\) , and \(F\) be the feet of altitudes from \(A\) , \(B\) , and \(C\) to sides \(BC\) , \(CA\) , and \(AB\) , respectively. Denote by \(\omega_{B}\) and \(\omega_{C}\) the incircles of triangles \(BDF\) and \(CDE\) , and let these circles be tangent to segments \(DF\) and \(DE\) at \(M\) and \(N\) , respectively. Let line \(MN\) meet circles \(\omega_{B}\) and \(\omega_{C}\) again at \(P \neq M\) and \(Q \neq N\) , respectively. Prove that \(MP = NQ\) .
|
Solution. Denote the centres of \(\omega_{B}\) and \(\omega_{C}\) by \(O_{B}\) and \(O_{C}\) , let their radii be \(r_{B}\) and \(r_{C}\) , and let \(BC\) be tangent to the two circles at \(T\) and \(U\) , respectively.

From the cyclic quadrilaterals \(AFDC\) and \(ABDE\) we have
\[\angle MDO_{B} = \frac{1}{2}\angle FDB = \frac{1}{2}\angle BAC = \frac{1}{2}\angle CDE = \angle O_{C}DN,\]
so the right-angled triangles \(DMO_{B}\) and \(DNO_{C}\) are similar. The ratio of similarity between the two triangles is
\[\frac{DN}{DM} = \frac{O_{C}N}{O_{B}M} = \frac{r_{C}}{r_{B}}.\]
Let \(\phi = \angle DMN\) and \(\psi = \angle MND\) . The lines \(FM\) and \(EN\) are tangent to \(\omega_{B}\) and \(\omega_{C}\) , respectively, so
\[\angle MTP = \angle FMP = \angle DMN = \phi \quad \text{and} \quad \angle QUN = \angle QNE = \angle MND = \psi .\]
(It is possible that \(P\) or \(Q\) coincides with \(T\) or \(U\), or lies inside triangles \(DMT\) or \(DUN\), respectively. To reduce case-sensitivity, we may use directed angles or simply ignore angles \(MTP\) and \(QUN\).)
In the circles \(\omega_{B}\) and \(\omega_{C}\) the lengths of chords \(MP\) and \(NQ\) are
\[MP = 2r_{B}\cdot \sin \angle MTP = 2r_{B}\cdot \sin \phi \quad \mathrm{and} \quad NQ = 2r_{C}\cdot \sin \angle QUN = 2r_{C}\cdot \sin \psi .\]
By applying the sine rule to triangle \(DNM\) we get
\[\frac{DN}{DM} = \frac{\sin\angle DMN}{\sin\angle MND} = \frac{\sin\phi}{\sin\psi}.\]
Finally, putting the above observations together, we get
\[\frac{MP}{NQ} = \frac{2r_{B}\sin\phi}{2r_{C}\sin\psi} = \frac{r_{B}}{r_{C}}\cdot \frac{\sin\phi}{\sin\psi} = \frac{DM}{DN}\cdot \frac{\sin\phi}{\sin\psi} = \frac{\sin\psi}{\sin\phi}\cdot \frac{\sin\phi}{\sin\psi} = 1,\]
so \(MP = NQ\) as required.
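The equality \(MP = NQ\) can be sanity-checked numerically. The triangle below is an arbitrary acute one of our choosing; tangency points are computed as feet of perpendiculars from the incentres, and the second intersections use the standard signed chord length \(-2\,(X-O)\cdot \hat{u}\) for a point \(X\) on a circle centred at \(O\):

```python
import math

def sub(u, v): return (u[0] - v[0], u[1] - v[1])
def dot(u, v): return u[0] * v[0] + u[1] * v[1]
def dist(u, v): return math.hypot(u[0] - v[0], u[1] - v[1])

def foot(P, Q, R):
    """Foot of the perpendicular from P to line QR."""
    d = sub(R, Q)
    t = dot(sub(P, Q), d) / dot(d, d)
    return (Q[0] + t * d[0], Q[1] + t * d[1])

def incenter(P, Q, R):
    a, b, c = dist(Q, R), dist(R, P), dist(P, Q)   # sides opposite P, Q, R
    s = a + b + c
    return ((a * P[0] + b * Q[0] + c * R[0]) / s,
            (a * P[1] + b * Q[1] + c * R[1]) / s)

A, B, C = (0.0, 3.0), (-2.0, 0.0), (4.0, 0.0)      # an acute triangle
D, E, F = foot(A, B, C), foot(B, C, A), foot(C, A, B)   # feet of the altitudes

OB, OC = incenter(B, D, F), incenter(C, D, E)      # centres of ω_B, ω_C
M, N = foot(OB, D, F), foot(OC, D, E)              # tangency points on DF, DE

u = sub(N, M)
L = math.hypot(*u)
u = (u[0] / L, u[1] / L)                           # unit direction of line MN
# For a point X on a circle centred O, line MN cuts a chord of length
# |2*(X - O)·u| starting at X; these are exactly MP and NQ.
MP = abs(2 * dot(sub(M, OB), u))
NQ = abs(2 * dot(sub(N, OC), u))
```

Both chord lengths agree to floating-point precision, as the solution predicts.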
|
IMOSL-2019-G3
|
In triangle \(A B C\) , let \(A_{1}\) and \(B_{1}\) be two points on sides \(B C\) and \(A C\) , and let \(P\) and \(Q\) be two points on segments \(A A_{1}\) and \(B B_{1}\) , respectively, so that line \(P Q\) is parallel to \(A B\) . On ray \(P B_{1}\) , beyond \(B_{1}\) , let \(P_{1}\) be a point so that \(\angle P P_{1}C = \angle B A C\) . Similarly, on ray \(Q A_{1}\) , beyond \(A_{1}\) , let \(Q_{1}\) be a point so that \(\angle C Q_{1}Q = \angle C B A\) . Show that points \(P\) , \(Q\) , \(P_{1}\) , and \(Q_{1}\) are concyclic.
|
Solution 1. Throughout the solution we use oriented angles.
Let rays \(A A_{1}\) and \(B B_{1}\) intersect the circumcircle of \(\triangle A B C\) at \(A_{2}\) and \(B_{2}\) , respectively. By
\[\angle Q P A_{2} = \angle B A A_{2} = \angle B B_{2}A_{2} = \angle Q B_{2}A_{2},\]
points \(P,Q,A_{2},B_{2}\) are concyclic; denote the circle passing through these points by \(\omega\) . We shall prove that \(P_{1}\) and \(Q_{1}\) also lie on \(\omega\) .

By
\[\angle C A_{2}A_{1} = \angle C A_{2}A = \angle C B A = \angle C Q_{1}Q = \angle C Q_{1}A_{1},\]
points \(C,Q_{1},A_{2},A_{1}\) are also concyclic. From that we get
\[\angle Q Q_{1}A_{2} = \angle A_{1}Q_{1}A_{2} = \angle A_{1}C A_{2} = \angle B C A_{2} = \angle B A A_{2} = \angle Q P A_{2},\]
so \(Q_{1}\) lies on \(\omega\) .
It follows similarly that \(P_{1}\) lies on \(\omega\) .
Solution 2. First consider the case when lines \(P P_{1}\) and \(Q Q_{1}\) intersect each other at some point \(R\) .
Let line \(P Q\) meet the sides \(A C\) and \(B C\) at \(E\) and \(F\) , respectively. Then
\[\angle P P_{1}C = \angle B A C = \angle P E C,\]
so points \(C,E,P,P_{1}\) lie on a circle; denote that circle by \(\omega_{P}\) . It follows analogously that points \(C,F,Q,Q_{1}\) lie on another circle; denote it by \(\omega_{Q}\) .
Let \(A Q\) and \(B P\) intersect at \(T\) . Applying Pappus' theorem to the lines \(A A_{1}P\) and \(B B_{1}Q\) provides that points \(C = A B_{1}\cap B A_{1}\) , \(R = A_{1}Q\cap B_{1}P\) and \(T = A Q\cap B P\) are collinear.
Let line \(R C T\) meet \(P Q\) and \(A B\) at \(S\) and \(U\) , respectively. From \(A B\parallel P Q\) we obtain
\[\frac{S P}{S Q} = \frac{U B}{U A} = \frac{S F}{S E},\]
so
\[S P\cdot S E = S Q\cdot S F.\]

So, point \(S\) has equal powers with respect to \(\omega_{P}\) and \(\omega_{Q}\) , hence line \(RCS\) is their radical axis; then \(R\) also has equal powers to the circles, so \(RP \cdot RP_{1} = RQ \cdot RQ_{1}\) , proving that points \(P, P_{1}, Q, Q_{1}\) are indeed concyclic.
Now consider the case when \(PP_{1}\) and \(QQ_{1}\) are parallel. Like in the previous case, let \(AQ\) and \(BP\) intersect at \(T\) . Applying Pappus' theorem again to the lines \(AA_{1}P\) and \(BB_{1}Q\) , in this limit case it shows that line \(CT\) is parallel to \(PP_{1}\) and \(QQ_{1}\) .
Let line \(CT\) meet \(PQ\) and \(AB\) at \(S\) and \(U\) , as before. The same calculation as in the previous case shows that \(SP \cdot SE = SQ \cdot SF\) , so \(S\) lies on the radical axis between \(\omega_{P}\) and \(\omega_{Q}\) .

Line \(CST\) , that is the radical axis between \(\omega_{P}\) and \(\omega_{Q}\) , is perpendicular to the line \(\ell\) of centres of \(\omega_{P}\) and \(\omega_{Q}\) . Hence, the chords \(PP_{1}\) and \(QQ_{1}\) are perpendicular to \(\ell\) . So the quadrilateral \(PP_{1}Q_{1}Q\) is an isosceles trapezium with symmetry axis \(\ell\) , and hence is cyclic.
Comment. There are several ways of solving the problem involving Pappus' theorem. For example, one may consider the points \(K = PB_{1} \cap BC\) and \(L = QA_{1} \cap AC\) . Applying Pappus' theorem to the lines \(AA_{1}P\) and \(QB_{1}B\) we get that \(K, L\) , and \(PQ \cap AB\) are collinear, i.e. that \(KL \parallel AB\) . Therefore, cyclicity of \(P, Q, P_{1}\) , and \(Q_{1}\) is equivalent to that of \(K, L, P_{1}\) , and \(Q_{1}\) . The latter is easy after noticing that \(C\) also lies on that circle. Indeed, e.g. \(\angle (LK, LC) = \angle (AB, AC) = \angle (P_{1}K, P_{1}C)\) shows that \(K\) lies on circle \(KLC\) .
This approach also has some possible degeneracy, as the points \(K\) and \(L\) may happen to be ideal.
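As a numeric cross-check of the statement, one can build \(P_1\) as the second intersection of line \(PB_1\) with \(\omega_P\) (the circle \(CEP\), which realises \(\angle PP_1C = \angle PEC = \angle BAC\)) and \(Q_1\) as the second intersection of line \(QA_1\) with \(\omega_Q\) (the circle \(CFQ\)), then test concyclicity of \(P, Q, P_1, Q_1\). The coordinates are an arbitrary configuration of ours, and the check ignores the "beyond \(B_1\)" betweenness condition, which does not affect concyclicity:

```python
import math

def sub(u, v): return (u[0] - v[0], u[1] - v[1])
def dot(u, v): return u[0] * v[0] + u[1] * v[1]
def cross(u, v): return u[0] * v[1] - u[1] * v[0]

def line_inter(P, u, Q, v):
    t = cross(sub(Q, P), v) / cross(u, v)
    return (P[0] + t * u[0], P[1] + t * u[1])

def circumcenter(P, Q, R):
    ax, ay = P; bx, by = Q; cx, cy = R
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy)

def second_inter(X, d, O):
    """Second intersection of the line through X (on the circle centred O)
    with direction d."""
    L = math.hypot(*d)
    d = (d[0] / L, d[1] / L)
    t = -2 * dot(sub(X, O), d)
    return (X[0] + t * d[0], X[1] + t * d[1])

def lerp(U, V, t):
    return (U[0] + t * (V[0] - U[0]), U[1] + t * (V[1] - U[1]))

A, B, C = (0.0, 0.0), (6.0, 0.0), (1.5, 4.0)
A1, B1 = lerp(B, C, 0.45), lerp(A, C, 0.40)       # A1 on BC, B1 on AC
P = lerp(A, A1, 0.30)                             # P on segment AA1
Q = line_inter(P, sub(B, A), B, sub(B1, B))       # PQ ∥ AB meets BB1 at Q
E = line_inter(P, sub(B, A), A, sub(C, A))        # line PQ meets AC at E
F = line_inter(P, sub(B, A), B, sub(C, B))        # line PQ meets BC at F

P1 = second_inter(P, sub(B1, P), circumcenter(C, E, P))   # ∠PP1C = ∠BAC
Q1 = second_inter(Q, sub(A1, Q), circumcenter(C, F, Q))   # ∠CQ1Q = ∠CBA

O = circumcenter(P, Q, P1)
r = math.hypot(P[0] - O[0], P[1] - O[1])
defect = math.hypot(Q1[0] - O[0], Q1[1] - O[1]) - r   # zero iff concyclic
```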
|
IMOSL-2019-G4
|
Let \(P\) be a point inside triangle \(ABC\) . Let \(AP\) meet \(BC\) at \(A_{1}\) , let \(BP\) meet \(CA\) at \(B_{1}\) , and let \(CP\) meet \(AB\) at \(C_{1}\) . Let \(A_{2}\) be the point such that \(A_{1}\) is the midpoint of \(PA_{2}\) , let \(B_{2}\) be the point such that \(B_{1}\) is the midpoint of \(PB_{2}\) , and let \(C_{2}\) be the point such that \(C_{1}\) is the midpoint of \(PC_{2}\) . Prove that points \(A_{2}\) , \(B_{2}\) , and \(C_{2}\) cannot all lie strictly inside the circumcircle of triangle \(ABC\) .
(Australia)
|
Solution 1. Since
\[\angle APB + \angle BPC + \angle CPA = 2\pi = (\pi -\angle ACB) + (\pi -\angle BAC) + (\pi -\angle CBA),\]
at least one of the following inequalities holds:
\[\angle APB\geqslant \pi -\angle ACB,\quad \angle BPC\geqslant \pi -\angle BAC,\quad \angle CPA\geqslant \pi -\angle CBA.\]
Without loss of generality, we assume that \(\angle BPC\geqslant \pi -\angle BAC\) . We have \(\angle BPC > \angle BAC\) because \(P\) is inside \(\triangle ABC\) . So \(\angle BPC\geqslant \max (\angle BAC,\pi -\angle BAC)\) and hence
\[\sin \angle BPC\leqslant \sin \angle BAC. \quad (*)\]
Let the rays \(AP\) , \(BP\) , and \(CP\) cross the circumcircle \(\Omega\) again at \(A_{3}\) , \(B_{3}\) , and \(C_{3}\) , respectively. We will prove that at least one of the ratios \(\frac{PB_{1}}{B_{1}B_{3}}\) and \(\frac{PC_{1}}{C_{1}C_{3}}\) is at least 1, which yields that one of the points \(B_{2}\) and \(C_{2}\) does not lie strictly inside \(\Omega\) .
Because \(A,B,C,B_{3}\) lie on a circle, the triangles \(CB_{1}B_{3}\) and \(BB_{1}A\) are similar, so
\[\frac{CB_{1}}{B_{1}B_{3}} = \frac{BB_{1}}{B_{1}A}.\]
Applying the sine rule we obtain
\[\frac{PB_{1}}{B_{1}B_{3}} = \frac{PB_{1}}{CB_{1}} \cdot \frac{CB_{1}}{B_{1}B_{3}} = \frac{PB_{1}}{C B_{1}} \cdot \frac{BB_{1}}{B_{1}A} = \frac{\sin \angle ACP}{\sin \angle BPC} \cdot \frac{\sin \angle BAC}{\sin \angle PBA}.\]
Similarly,
\[\frac{PC_{1}}{C_{1}C_{3}} = \frac{\sin\angle PBA}{\sin\angle BPC} \cdot \frac{\sin\angle BAC}{\sin\angle ACP}.\]
Multiplying these two equations we get
\[\frac{PB_{1}}{B_{1}B_{3}} \cdot \frac{PC_{1}}{C_{1}C_{3}} = \frac{\sin^{2}\angle BAC}{\sin^{2}\angle BPC} \geqslant 1\]
using \((*)\) , which yields the desired conclusion.
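The product identity in the last display, before \((*)\) is applied, holds for any interior point \(P\) and can be checked numerically. The sample triangle and interior point are ours:

```python
import math

def sub(u, v): return (u[0] - v[0], u[1] - v[1])
def dot(u, v): return u[0] * v[0] + u[1] * v[1]
def cross(u, v): return u[0] * v[1] - u[1] * v[0]
def dist(u, v): return math.hypot(u[0] - v[0], u[1] - v[1])

def line_inter(P, u, Q, v):
    t = cross(sub(Q, P), v) / cross(u, v)
    return (P[0] + t * u[0], P[1] + t * u[1])

def circumcenter(P, Q, R):
    ax, ay = P; bx, by = Q; cx, cy = R
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy)

def second_inter(X, d, O):
    """Second intersection of the line through X (on the circle centred O)."""
    L = math.hypot(*d)
    d = (d[0] / L, d[1] / L)
    t = -2 * dot(sub(X, O), d)
    return (X[0] + t * d[0], X[1] + t * d[1])

def ang(P, X, Q):
    """Unsigned angle ∠PXQ."""
    u, v = sub(P, X), sub(Q, X)
    return math.atan2(abs(cross(u, v)), dot(u, v))

A, B, C = (0.0, 3.0), (-2.0, 0.0), (4.0, 0.0)
P = (0.8, 0.9)                                    # an interior point
O = circumcenter(A, B, C)

B1 = line_inter(B, sub(P, B), C, sub(A, C))       # BP meets CA at B1
C1 = line_inter(C, sub(P, C), A, sub(B, A))       # CP meets AB at C1
B3 = second_inter(B, sub(P, B), O)                # ray BP meets circumcircle again
C3 = second_inter(C, sub(P, C), O)

lhs = (dist(P, B1) / dist(B1, B3)) * (dist(P, C1) / dist(C1, C3))
rhs = (math.sin(ang(B, A, C)) / math.sin(ang(B, P, C))) ** 2
```

`lhs` and `rhs` agree to floating-point precision, matching \(\frac{PB_{1}}{B_{1}B_{3}} \cdot \frac{PC_{1}}{C_{1}C_{3}} = \frac{\sin^{2}\angle BAC}{\sin^{2}\angle BPC}\).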
Comment. It also cannot happen that all three points \(A_{2}\), \(B_{2}\), and \(C_{2}\) lie strictly outside \(\Omega\). The same proof works almost literally, starting by assuming without loss of generality that \(\angle BPC \leqslant \pi - \angle BAC\) and using \(\angle BPC > \angle BAC\) to deduce that \(\sin \angle BPC \geqslant \sin \angle BAC\). It is possible for \(A_{2}\), \(B_{2}\), and \(C_{2}\) all to lie on the circumcircle; from the above solution we may derive that this happens if and only if \(P\) is the orthocentre of the triangle \(ABC\) (which lies strictly inside \(ABC\) if and only if \(ABC\) is acute).
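The orthocentre case of the comment is easy to verify in coordinates, using the standard identity \(H = A + B + C - 2O\) for the orthocentre with \(O\) the circumcentre; the sample acute triangle is ours:

```python
import math

def sub(u, v): return (u[0] - v[0], u[1] - v[1])
def cross(u, v): return u[0] * v[1] - u[1] * v[0]
def dist(u, v): return math.hypot(u[0] - v[0], u[1] - v[1])

def line_inter(P, u, Q, v):
    t = cross(sub(Q, P), v) / cross(u, v)
    return (P[0] + t * u[0], P[1] + t * u[1])

def circumcenter(P, Q, R):
    ax, ay = P; bx, by = Q; cx, cy = R
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy)

A, B, C = (0.0, 3.0), (-2.0, 0.0), (4.0, 0.0)     # an acute triangle
O = circumcenter(A, B, C)
R = dist(O, A)
H = (A[0] + B[0] + C[0] - 2 * O[0], A[1] + B[1] + C[1] - 2 * O[1])  # orthocentre

def doubled(V, X, Y):
    """V1 = line VH ∩ XY, then V2 = 2*V1 - H, as in the problem with P = H."""
    V1 = line_inter(V, sub(H, V), X, sub(Y, X))
    return (2 * V1[0] - H[0], 2 * V1[1] - H[1])

A2, B2, C2 = doubled(A, B, C), doubled(B, C, A), doubled(C, A, B)
maxdev = max(abs(dist(X2, O) - R) for X2 in (A2, B2, C2))
```

All three points lie on the circumcircle to floating-point precision (indeed, with \(P = H\), each \(X_2\) is the reflection of \(H\) in the corresponding side).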
Solution 2. Define points \(A_{3}\) , \(B_{3}\) , and \(C_{3}\) as in Solution 1. Assume for the sake of contradiction that \(A_{2}\) , \(B_{2}\) , and \(C_{2}\) all lie strictly inside circle \(ABC\) . It follows that \(PA_{1} < A_{1}A_{3}\) , \(PB_{1} < B_{1}B_{3}\) , and \(PC_{1} < C_{1}C_{3}\) .
Observe that \(\triangle PBC_{3} \sim \triangle PCB_{3}\) . Let \(X\) be the point on side \(PB_{3}\) that corresponds to point \(C_{1}\) on side \(PC_{3}\) under this similarity. In other words, \(X\) lies on segment \(PB_{3}\) and satisfies \(PX : XB_{3} = PC_{1} : C_{1}C_{3}\) . It follows that
\[\angle XCP = \angle PBC_{1} = \angle B_{3}BA = \angle B_{3}CB_{1}.\]
Hence lines \(CX\) and \(CB_{1}\) are isogonal conjugates in \(\triangle PCB_{3}\) .

Let \(Y\) be the foot of the bisector of \(\angle B_{3}CP\) in \(\triangle PCB_{3}\) . Since \(PC_{1} < C_{1}C_{3}\) , we have \(PX < XB_{3}\) . Also, we have \(PY < YB_{3}\) because \(PB_{1} < B_{1}B_{3}\) and \(Y\) lies between \(X\) and \(B_{1}\) . By the angle bisector theorem in \(\triangle PCB_{3}\) , we have \(PY : YB_{3} = PC : CB_{3}\) . So \(PC < CB_{3}\) and it follows that \(\angle PB_{3}C < \angle CPB_{3}\) . Now since \(\angle PB_{3}C = \angle BB_{3}C = \angle BAC\) , we have
\[\angle BAC < \angle CPB_{3}.\]
Similarly, we have
\[\angle CBA < \angle APC_{3} \quad \text{and} \quad \angle ACB < \angle BPA_{3} = \angle B_{3}PA.\]
Adding these three inequalities yields \(\pi < \pi\) , and this contradiction concludes the proof.
Solution 3. Choose coordinates such that the circumcentre of \(\triangle ABC\) is at the origin and the circumradius is 1. Then we may think of \(A\) , \(B\) , and \(C\) as vectors in \(\mathbb{R}^{2}\) such that
\[|A|^{2} = |B|^{2} = |C|^{2} = 1.\]
\(P\) may be represented as a convex combination \(\alpha A + \beta B + \gamma C\) where \(\alpha, \beta, \gamma > 0\) and \(\alpha + \beta + \gamma = 1\). Then
\[A_{1} = \frac{\beta B + \gamma C}{\beta + \gamma} = \frac{1}{1 - \alpha} P - \frac{\alpha}{1 - \alpha} A,\]
so
\[A_{2} = 2A_{1} - P = \frac{1 + \alpha}{1 - \alpha} P - \frac{2\alpha}{1 - \alpha} A.\]
Hence
\[|A_{2}|^{2} = \left(\frac{1 + \alpha}{1 - \alpha}\right)^{2}|P|^{2} + \left(\frac{2\alpha}{1 - \alpha}\right)^{2}|A|^{2} - \frac{4\alpha(1 + \alpha)}{(1 - \alpha)^{2}} A\cdot P.\]
Using \(|A|^{2} = 1\) we obtain
\[\frac{(1 - \alpha)^{2}}{2(1 + \alpha)} |A_{2}|^{2} = \frac{1 + \alpha}{2} |P|^{2} + \frac{2\alpha^{2}}{1 + \alpha} -2\alpha A\cdot P. \quad (1)\]
Likewise
\[\frac{(1 - \beta)^{2}}{2(1 + \beta)} |B_{2}|^{2} = \frac{1 + \beta}{2} |P|^{2} + \frac{2\beta^{2}}{1 + \beta} -2\beta B\cdot P \quad (2)\]
and
\[\frac{(1 - \gamma)^{2}}{2(1 + \gamma)} |C_{2}|^{2} = \frac{1 + \gamma}{2} |P|^{2} + \frac{2\gamma^{2}}{1 + \gamma} -2\gamma C\cdot P. \quad (3)\]
Summing (1), (2) and (3) we obtain on the LHS the positive linear combination
\[\mathrm{LHS} = \frac{(1 - \alpha)^{2}}{2(1 + \alpha)} |A_{2}|^{2} + \frac{(1 - \beta)^{2}}{2(1 + \beta)} |B_{2}|^{2} + \frac{(1 - \gamma)^{2}}{2(1 + \gamma)} |C_{2}|^{2}\]
and on the RHS the quantity
\[\left(\frac{1 + \alpha}{2} + \frac{1 + \beta}{2} + \frac{1 + \gamma}{2}\right)|P|^{2} + \left(\frac{2\alpha^{2}}{1 + \alpha} + \frac{2\beta^{2}}{1 + \beta} + \frac{2\gamma^{2}}{1 + \gamma}\right) - 2(\alpha A\cdot P + \beta B\cdot P + \gamma C\cdot P).\]
The first term is \(2|P|^{2}\) and the last term is \(- 2P\cdot P\) , so
\[\mathrm{RHS} = \left(\frac{2\alpha^{2}}{1 + \alpha} +\frac{2\beta^{2}}{1 + \beta} +\frac{2\gamma^{2}}{1 + \gamma}\right)\] \[= \frac{3\alpha - 1}{2} +\frac{(1 - \alpha)^{2}}{2(1 + \alpha)} +\frac{3\beta - 1}{2} +\frac{(1 - \beta)^{2}}{2(1 + \beta)} +\frac{3\gamma - 1}{2} +\frac{(1 - \gamma)^{2}}{2(1 + \gamma)}\] \[= \frac{(1 - \alpha)^{2}}{2(1 + \alpha)} +\frac{(1 - \beta)^{2}}{2(1 + \beta)} +\frac{(1 - \gamma)^{2}}{2(1 + \gamma)}.\]
Here we used the fact that
\[\frac{3\alpha - 1}{2} +\frac{3\beta - 1}{2} +\frac{3\gamma - 1}{2} = 0.\]
We have shown that a linear combination of \(|A_{2}|^{2}\), \(|B_{2}|^{2}\), and \(|C_{2}|^{2}\) with positive coefficients is equal to the sum of the coefficients. Therefore at least one of \(|A_{2}|^{2}\), \(|B_{2}|^{2}\), and \(|C_{2}|^{2}\) must be at least 1, as required.
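The identity just proved — the weighted sum of \(|A_2|^2\), \(|B_2|^2\), \(|C_2|^2\) equals the sum of the weights — can be spot-checked numerically; the random configuration and sampling below are ours:

```python
import math
import random

random.seed(0)

# Random vertices on the unit circumcircle and random barycentric weights.
A, B, C = [(math.cos(t), math.sin(t))
           for t in (random.uniform(0, 2 * math.pi) for _ in range(3))]
w = [random.random() for _ in range(3)]
al, be, ga = (x / sum(w) for x in w)              # α, β, γ > 0, α + β + γ = 1

P = (al * A[0] + be * B[0] + ga * C[0], al * A[1] + be * B[1] + ga * C[1])

def x2(V, v):
    """X_2 = ((1 + v)P - 2vV) / (1 - v), as derived in the solution."""
    return (((1 + v) * P[0] - 2 * v * V[0]) / (1 - v),
            ((1 + v) * P[1] - 2 * v * V[1]) / (1 - v))

pts = [x2(A, al), x2(B, be), x2(C, ga)]
coef = [(1 - v) ** 2 / (2 * (1 + v)) for v in (al, be, ga)]
lhs = sum(c * (p[0] ** 2 + p[1] ** 2) for c, p in zip(coef, pts))
rhs = sum(coef)
biggest = max(math.hypot(*p) for p in pts)        # at least one must be >= 1
```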
Comment. This proof also works when \(P\) is any point for which \(\alpha, \beta, \gamma > -1\), \(\alpha + \beta + \gamma = 1\), and \(\alpha, \beta, \gamma \neq 1\). (In any case where \(\alpha = 1\) or \(\beta = 1\) or \(\gamma = 1\), some points in the construction are not defined.)
|
IMOSL-2019-G5
|
Let \(A B C D E\) be a convex pentagon with \(C D = D E\) and \(\angle E D C \neq 2 \cdot \angle A D B\) . Suppose that a point \(P\) is located in the interior of the pentagon such that \(A P = A E\) and \(B P = B C\) . Prove that \(P\) lies on the diagonal \(C E\) if and only if \(\mathrm{area}(B C D) + \mathrm{area}(A D E) =\) \(\mathrm{area}(A B D) + \mathrm{area}(A B P)\) .
|
Solution 1. Let \(P^{\prime}\) be the reflection of \(P\) across line \(A B\) , and let \(M\) and \(N\) be the midpoints of \(P^{\prime}E\) and \(P^{\prime}C\) respectively. Convexity ensures that \(P^{\prime}\) is distinct from both \(E\) and \(C\) , and hence from both \(M\) and \(N\) . We claim that both the area condition and the collinearity condition in the problem are equivalent to the condition that the (possibly degenerate) right-angled triangles \(A P^{\prime}M\) and \(B P^{\prime}N\) are directly similar (equivalently, \(A P^{\prime}E\) and \(B P^{\prime}C\) are directly similar).

For the equivalence with the collinearity condition, let \(F\) denote the foot of the perpendicular from \(P^{\prime}\) to \(A B\) , so that \(F\) is the midpoint of \(P P^{\prime}\) . We have that \(P\) lies on \(C E\) if and only if \(F\) lies on \(M N\) , which occurs if and only if we have the equality \(\angle A F M = \angle B F N\) of signed angles modulo \(\pi\) . By concyclicity of \(A P^{\prime}F M\) and \(B F P^{\prime}N\) , this is equivalent to \(\angle A P^{\prime}M = \angle B P^{\prime}N\) , which occurs if and only if \(A P^{\prime}M\) and \(B P^{\prime}N\) are directly similar.

For the other equivalence with the area condition, we have the equality of signed areas \(\mathrm{area}(A B D) + \mathrm{area}(A B P) = \mathrm{area}(A P^{\prime}B D) = \mathrm{area}(A P^{\prime}D) + \mathrm{area}(B D P^{\prime})\) . Using the identity \(\mathrm{area}(A D E) - \mathrm{area}(A P^{\prime}D) = \mathrm{area}(A D E) + \mathrm{area}(A D P^{\prime}) = 2\mathrm{area}(A D M)\) , and similarly for \(B\) , we find that the area condition is equivalent to the equality
\[\mathrm{area}(D A M) = \mathrm{area}(D B N).\]
Now note that \(A\) and \(B\) lie on the perpendicular bisectors of \(P^{\prime}E\) and \(P^{\prime}C\) , respectively. If we write \(G\) and \(H\) for the feet of the perpendiculars from \(D\) to these perpendicular bisectors respectively, then this area condition can be rewritten as
\[M A\cdot G D = N B\cdot H D.\]
(In this condition, we interpret all lengths as signed lengths according to suitable conventions: for instance, we orient \(P^{\prime}E\) from \(P^{\prime}\) to \(E\) , orient the parallel line \(D H\) in the same direction, and orient the perpendicular bisector of \(P^{\prime}E\) at an angle \(\pi /2\) clockwise from the oriented segment \(P^{\prime}E\) - we adopt the analogous conventions at \(B\) .)

To relate the signed lengths \(GD\) and \(HD\) to the triangles \(AP'M\) and \(BP'N\) , we use the following calculation.
Claim. Let \(\Gamma\) denote the circle centred on \(D\) with both \(E\) and \(C\) on the circumference, and \(h\) the power of \(P'\) with respect to \(\Gamma\) . Then we have the equality
\[GD\cdot P'M = HD\cdot P'N = \frac{1}{4} h\neq 0.\]
Proof. Firstly, we have \(h\neq 0\) , since otherwise \(P'\) would lie on \(\Gamma\) , and hence the internal angle bisectors of \(\angle EDP'\) and \(\angle P'DC\) would pass through \(A\) and \(B\) respectively. This would violate the angle inequality \(\angle EDC\neq 2\cdot \angle ADB\) given in the question.
Next, let \(E'\) denote the second point of intersection of \(P'E\) with \(\Gamma\) , and let \(E''\) denote the point on \(\Gamma\) diametrically opposite \(E'\) , so that \(E''E\) is perpendicular to \(P'E\) . The point \(G\) lies on the perpendicular bisectors of the sides \(P'E\) and \(EE''\) of the right-angled triangle \(P'EE''\) ; it follows that \(G\) is the midpoint of \(P'E''\) . Since \(D\) is the midpoint of \(E'E''\) , we have that \(GD = \frac{1}{2} P'E'\) . Since \(P'M = \frac{1}{2} P'E\) , we have \(GD\cdot P'M = \frac{1}{4} P'E'\cdot P'E = \frac{1}{4} h\) . The other equality \(HD\cdot P'N = \frac{1}{4} h\) follows by exactly the same argument.
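In coordinates, the claim reduces to \((D - \mathrm{mid})\cdot(E - P') = \tfrac12\left(|DP'|^2 - R^2\right)\), where \(\mathrm{mid}\) is the midpoint of \(P'E\). A quick unsigned check (the sample numbers are ours, and the solution's sign conventions are dropped in favour of absolute values):

```python
import math

D = (0.3, -0.4)                                    # centre of Γ
R = 2.0
E = (D[0] + R * math.cos(1.1), D[1] + R * math.sin(1.1))   # a point of Γ
Pp = (1.4, 0.9)                                    # P', a point not on Γ

mid = ((Pp[0] + E[0]) / 2, (Pp[1] + E[1]) / 2)     # midpoint M of P'E
d = (E[0] - Pp[0], E[1] - Pp[1])
L = math.hypot(*d)

# GD is the distance from D to the perpendicular bisector of P'E,
# i.e. the length of the projection of D - mid onto the direction of P'E.
GD = abs((D[0] - mid[0]) * d[0] + (D[1] - mid[1]) * d[1]) / L
PM = L / 2                                         # P'M, with M the midpoint of P'E
h = (Pp[0] - D[0]) ** 2 + (Pp[1] - D[1]) ** 2 - R ** 2   # power of P' w.r.t. Γ
```

Here `GD * PM` equals `abs(h) / 4` to floating-point precision, matching the claim.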

From this claim, we see that the area condition is equivalent to the equality
\[(MA:P'M) = (NB:P'N)\]
of ratios of signed lengths, which is equivalent to direct similarity of \(AP'M\) and \(BP'N\) , as desired.
Solution 2. Along the perpendicular bisector of \(C E\) , define the linear function
\[f(X) = \mathrm{area}(B C X) + \mathrm{area}(A X E) - \mathrm{area}(A B X) - \mathrm{area}(A B P),\]
where, from now on, we always use signed areas. Thus, we want to show that \(C, P, E\) are collinear if and only if \(f(D) = 0\) .

Let \(P^{\prime}\) be the reflection of \(P\) across line \(A B\) . The point \(P^{\prime}\) does not lie on the line \(C E\) . To see this, we let \(A^{\prime \prime}\) and \(B^{\prime \prime}\) be the points obtained from \(A\) and \(B\) by dilating with scale factor 2 about \(P^{\prime}\) , so that \(P\) is the orthogonal projection of \(P^{\prime}\) onto \(A^{\prime \prime}B^{\prime \prime}\) . Since \(A\) lies on the perpendicular bisector of \(P^{\prime}E\) , the triangle \(A^{\prime \prime}E P^{\prime}\) is right-angled at \(E\) (and \(B^{\prime \prime}C P^{\prime}\) similarly). If \(P^{\prime}\) were to lie on \(C E\) , then the lines \(A^{\prime \prime}E\) and \(B^{\prime \prime}C\) would be perpendicular to \(C E\) and \(A^{\prime \prime}\) and \(B^{\prime \prime}\) would lie on the opposite side of \(C E\) to \(D\) . It follows that the line \(A^{\prime \prime}B^{\prime \prime}\) does not meet triangle \(C D E\) , and hence point \(P\) does not lie inside \(C D E\) . But then \(P\) must lie inside \(A B C E\) and it is clear that such a point cannot reflect to a point \(P^{\prime}\) on \(C E\) .
We thus let \(O\) be the centre of the circle \(C E P^{\prime}\) . The lines \(A O\) and \(B O\) are the perpendicular bisectors of \(E P^{\prime}\) and \(C P^{\prime}\) , respectively, so
\[\mathrm{area}(B C O) + \mathrm{area}(A O E) = \mathrm{area}(O P^{\prime}B) + \mathrm{area}(P^{\prime}O A) = \mathrm{area}(P^{\prime}B O A)\] \[\qquad = \mathrm{area}(A B O) + \mathrm{area}(B A P^{\prime}) = \mathrm{area}(A B O) + \mathrm{area}(A B P),\]
and hence \(f(O) = 0\)
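The computation \(f(O) = 0\) can be replayed numerically with signed areas. The particular points below are arbitrary choices of ours; \(A\) and \(B\) are placed on the relevant perpendicular bisectors by construction, so that \(AP' = AE\) and \(BP' = BC\):

```python
import math

def sarea(P, Q, R):
    """Signed area of triangle PQR."""
    return ((Q[0] - P[0]) * (R[1] - P[1]) - (Q[1] - P[1]) * (R[0] - P[0])) / 2

def circumcenter(P, Q, R):
    ax, ay = P; bx, by = Q; cx, cy = R
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy)

def reflect(P, A, B):
    """Reflection of P across line AB."""
    dx, dy = B[0] - A[0], B[1] - A[1]
    t = ((P[0] - A[0]) * dx + (P[1] - A[1]) * dy) / (dx * dx + dy * dy)
    F = (A[0] + t * dx, A[1] + t * dy)             # foot of the perpendicular
    return (2 * F[0] - P[0], 2 * F[1] - P[1])

def on_bisector(U, V, s):
    """A point on the perpendicular bisector of UV (parameter s)."""
    return ((U[0] + V[0]) / 2 - s * (V[1] - U[1]),
            (U[1] + V[1]) / 2 + s * (V[0] - U[0]))

Pp = (0.0, 0.0)                                   # P'
E, C = (2.0, 0.5), (1.1, -1.7)
A = on_bisector(Pp, E, 0.8)                       # AP' = AE
B = on_bisector(Pp, C, -0.6)                      # BP' = BC
P = reflect(Pp, A, B)                             # P is the reflection of P' in AB
O = circumcenter(C, E, Pp)                        # centre of circle CEP'

f_O = sarea(B, C, O) + sarea(A, O, E) - sarea(A, B, O) - sarea(A, B, P)
```

`f_O` vanishes to floating-point precision, exactly as the chain of signed-area equalities predicts.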
Notice that if point \(O\) coincides with \(D\) then points \(A,B\) lie in angle domain \(C D E\) and \(\angle E O C = 2\cdot \angle A O B\) , which is not allowed. So, \(O\) and \(D\) must be distinct. Since \(f\) is linear and vanishes at \(O\) , it follows that \(f(D) = 0\) if and only if \(f\) is constant zero - we want to show this occurs if and only if \(C, P, E\) are collinear.

In the one direction, suppose firstly that \(C, P, E\) are not collinear, and let \(T\) be the centre of the circle \(C E P\) . The same calculation as above provides
\[\mathrm{area}(B C T) + \mathrm{area}(A T E) = \mathrm{area}(P B T A) = \mathrm{area}(A B T) - \mathrm{area}(A B P)\]
so
\[f(T) = -2\mathrm{area}(A B P)\neq 0.\]
Hence, the linear function \(f\) is nonconstant with its unique zero at \(O\), so that \(f(D) \neq 0\).
In the other direction, suppose that the points \(C, P, E\) are collinear. We will show that \(f\) is constant zero by finding a second point (other than \(O\) ) at which it vanishes.

Let \(Q\) be the reflection of \(P\) across the midpoint of \(AB\) , so \(PAQB\) is a parallelogram. It is easy to see that \(Q\) is on the perpendicular bisector of \(CE\) ; for instance if \(A'\) and \(B'\) are the points produced from \(A\) and \(B\) by dilating about \(P\) with scale factor 2, then the projection of \(Q\) to \(CE\) is the midpoint of the projections of \(A'\) and \(B'\) , which are \(E\) and \(C\) respectively. The triangles \(BCQ\) and \(AQE\) are indirectly congruent, so
\[f(Q) = \left(\mathrm{area}(BCQ) + \mathrm{area}(AQE)\right) - \left(\mathrm{area}(ABQ) - \mathrm{area}(BAP)\right) = 0 - 0 = 0.\]
The points \(O\) and \(Q\) are distinct. To see this, consider the circle \(\omega\) centred on \(Q\) with \(P'\) on the circumference; since triangle \(PP'Q\) is right-angled at \(P'\) , it follows that \(P\) lies outside \(\omega\) . On the other hand, \(P\) lies between \(C\) and \(E\) on the line \(CPE\) . It follows that \(C\) and \(E\) cannot both lie on \(\omega\) , so that \(\omega\) is not the circle \(CEP'\) and \(Q \neq O\) .
Since \(O\) and \(Q\) are distinct zeroes of the linear function \(f\) , we have \(f(D) = 0\) as desired.
Comment 1. The condition \(\angle EDC \neq 2 \cdot \angle ADB\) cannot be omitted. If \(D\) is the centre of circle \(CEP'\) , then the condition on triangle areas is satisfied automatically, without having \(P\) on line \(CE\) .
Comment 2. The "only if" part of this problem is easier than the "if" part. For example, in the second part of Solution 2, the triangles \(EAQ\) and \(QBC\) are indirectly congruent, so the sum of their areas is 0, and \(DCQE\) is a kite. Now one can easily see that \(\angle (AQ, DE) = \angle (CD, CB)\) and \(\angle (BQ, DC) = \angle (ED, EA)\) , whence \(\mathrm{area}(BCD) = \mathrm{area}(AQD) + \mathrm{area}(EQA)\) and \(\mathrm{area}(ADE) = \mathrm{area}(BDQ) + \mathrm{area}(BQC)\) , which yields the result.
Comment 3. The origin of the problem is the following observation. Let \(ABDH\) be a tetrahedron and consider the sphere \(S\) that is tangent to the four face planes, internally to planes \(ADH\) and \(BDH\) and externally to \(ABD\) and \(ABH\) (or vice versa). It is known that the sphere \(S\) exists if and only if \(\mathrm{area}(ADH) + \mathrm{area}(BDH) \neq \mathrm{area}(ABH) + \mathrm{area}(ABD)\) ; this relation comes from the usual formula for the volume of the tetrahedron.
Let \(T, T_{a}, T_{b}, T_{d}\) be the points of tangency between the sphere and the four planes, as shown in the picture. Rotate the triangle \(ABH\) inward, the triangles \(BDH\) and \(ADH\) outward, into the triangles \(ABP\) , \(BDC\) and \(ADE\) , respectively, in the plane \(ABD\) . Notice that the points \(T_{d}, T_{a}, T_{b}\) are rotated to \(T\) , so we have \(HT_{a} = HT_{b} = HT_{d} = PT = CT = ET\) . Therefore, the point \(T\) is the centre of the circle \(CEP\) . Hence, if the sphere exists then \(C, E, P\) cannot be collinear.
If the condition \(\angle EDC \neq 2 \cdot \angle ADB\) is replaced by the constraint that the angles \(\angle EDA\) , \(\angle ADB\) and \(\angle BDC\) satisfy the triangle inequality, then the argument with the tetrahedron and the tangent sphere can be reconstructed.

|
IMOSL-2019-G6
|
Let \(I\) be the incentre of acute- angled triangle \(ABC\) . Let the incircle meet \(BC\) , \(CA\) , and \(AB\) at \(D\) , \(E\) , and \(F\) , respectively. Let line \(EF\) intersect the circumcircle of the triangle at \(P\) and \(Q\) , such that \(F\) lies between \(E\) and \(P\) . Prove that \(\angle DPA + \angle AQD = \angle QIP\) .
|
Solution 1. Let \(N\) and \(M\) be the midpoints of the arcs \(\widehat{BC}\) of the circumcircle, containing and opposite vertex \(A\) , respectively. By \(\angle FAE = \angle BAC = \angle BNC\) , the right- angled kites \(AFIE\) and \(NBMC\) are similar. Consider the spiral similarity \(\phi\) (dilation in case of \(AB = AC\) ) that moves \(AFIE\) to \(NBMC\) . The directed angle in which \(\phi\) changes directions is \(\angle (AF, NB)\) , same as \(\angle (AP, NP)\) and \(\angle (AQ, NQ)\) ; so lines \(AP\) and \(AQ\) are mapped to lines \(NP\) and \(NQ\) , respectively. Line \(EF\) is mapped to \(BC\) ; we can see that the intersection points \(P = EF \cap AP\) and \(Q = EF \cap AQ\) are mapped to points \(BC \cap NP\) and \(BC \cap NQ\) , respectively. Denote these points by \(P'\) and \(Q'\) , respectively.

Let \(L\) be the midpoint of \(BC\) . We claim that points \(P, Q, D, L\) are concyclic (if \(D = L\) then line \(BC\) is tangent to circle \(PQD\) ). Let \(PQ\) and \(BC\) meet at \(Z\) . By applying Menelaus' theorem to triangle \(ABC\) and line \(EFZ\) , we have
\[\frac{BD}{DC} = \frac{BF}{FA} \cdot \frac{AE}{EC} = -\frac{BZ}{ZC},\]
so the pairs \(B, C\) and \(D, Z\) are harmonic. It is well- known that this implies \(ZB \cdot ZC = ZD \cdot ZL\) . (The inversion with pole \(Z\) that swaps \(B\) and \(C\) sends \(Z\) to infinity and \(D\) to the midpoint of \(BC\) , because the cross- ratio is preserved.) Hence, \(ZD \cdot ZL = ZB \cdot ZC = ZP \cdot ZQ\) by the power of \(Z\) with respect to the circumcircle; this proves our claim.
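The step from the harmonic relation to \(ZB \cdot ZC = ZD \cdot ZL\) (Newton's relation) can also be checked numerically. The sketch below is illustrative only and not part of the solution: it places the four points on a number line, solves the harmonic condition \((B, C; D, Z) = -1\) for \(z\), and verifies the product identity with \(l\) the midpoint of \(b\) and \(c\).

```python
import random

def harmonic_conjugate(b, c, d):
    # the z with cross-ratio (b, c; d, z) = -1, i.e. (d-b)(z-c) = -(d-c)(z-b)
    return ((d - b) * c + (d - c) * b) / (2 * d - b - c)

random.seed(1)
for _ in range(1000):
    b, c, d = random.sample(range(-50, 50), 3)
    if 2 * d == b + c:            # d is the midpoint of b, c: z is at infinity
        continue
    z = harmonic_conjugate(b, c, d)
    l = (b + c) / 2               # the midpoint L of BC
    # Newton's relation ZB * ZC = ZD * ZL, in signed lengths along the line
    assert abs((z - b) * (z - c) - (z - d) * (z - l)) < 1e-6
print("Newton's relation holds on random harmonic quadruples")
```

The identity is equivalent to the symmetric form \((b + c)(d + z) = 2(dz + bc)\) of harmonicity, which the expansion of both sides confirms.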
By \(\angle MPP' = \angle MQQ' = \angle MLP' = \angle MLQ' = 90^\circ\) , the quadrilaterals \(MLPP'\) and \(MLQQ'\) are cyclic. Then the problem statement follows by
\[\angle DPA + \angle AQD = 360^\circ -\angle PAQ - \angle QDP = 360^\circ -\angle PNQ - \angle QLP\] \[\qquad = \angle LPN + \angle NQL = \angle P'ML + \angle LMQ' = \angle P'MQ' = \angle PIQ.\]
Solution 2. Define the point \(M\) and the same spiral similarity \(\phi\) as in the previous solution. (The point \(N\) is not necessary.) It is well- known that the centre of the spiral similarity that maps \(F, E\) to \(B, C\) is the Miquel point of the lines \(FE, BC, BF\) and \(CE\) ; that is, the second intersection of circles \(ABC\) and \(AEF\) . Denote that point by \(S\) .
By \(\phi (F) = B\) and \(\phi (E) = C\) the triangles \(SBF\) and \(SCE\) are similar, so we have
\[\frac{SB}{SC} = \frac{BF}{CE} = \frac{BD}{CD}.\]
By the converse of the angle bisector theorem, this implies that line \(SD\) bisects \(\angle BSC\) and hence passes through \(M\) .
Let \(K\) be the intersection point of lines \(EF\) and \(SI\) . Notice that \(\phi\) sends points \(S, F, E, I\) to \(S, B, C, M\) , so \(\phi (K) = \phi (FE \cap SI) = BC \cap SM = D\) . By \(\phi (I) = M\) , we have \(KD \parallel IM\) .

We claim that triangles \(SPI\) and \(SDQ\) are similar, and so are triangles \(SPD\) and \(SIQ\) . Let ray \(SI\) meet the circumcircle again at \(L\) . Note that the segment \(EF\) is perpendicular to the angle bisector \(AM\) . Then by \(\angle AML = \angle ASL = \angle ASI = 90^{\circ}\) , we have \(ML \parallel PQ\) . Hence the arcs \(\widehat{PL}\) and \(\widehat{MQ}\) between these parallel chords are equal, and therefore \(\angle PSL = \angle MSQ = \angle DSQ\) . By \(\angle QPS = \angle QMS\) , the triangles \(SPK\) and \(SMQ\) are similar. Finally,
\[\frac{SP}{SI} = \frac{SP}{SK} \cdot \frac{SK}{SI} = \frac{SM}{SQ} \cdot \frac{SD}{SM} = \frac{SD}{SQ}\]
shows that triangles \(SPI\) and \(SDQ\) are similar. The second part of the claim can be proved analogously.
Now the problem statement can be proved by
\[\angle DPA + \angle AQD = \angle DPS + \angle SQD = \angle QIS + \angle SIP = \angle QIP.\]
Solution 3. Denote the circumcircle of triangle \(A B C\) by \(\Gamma\) , and let rays \(P D\) and \(Q D\) meet \(\Gamma\) again at \(V\) and \(U\) , respectively. We will show that \(A U\perp I P\) and \(A V\perp I Q\) . Then the problem statement will follow as
\[\angle D P A + \angle A Q D = \angle V U A + \angle A V U = 180^{\circ} - \angle U A V = \angle Q I P.\]
Let \(M\) be the midpoint of arc \(\widehat{BUVC}\) and let \(N\) be the midpoint of arc \(\widehat{CAB}\) ; the lines \(AIM\) and \(AN\) , being the internal and external bisectors of angle \(BAC\) respectively, are perpendicular. Let the tangents drawn to \(\Gamma\) at \(B\) and \(C\) meet at \(R\) ; let line \(PQ\) meet \(AU\) , \(AI\) , \(AV\) and \(BC\) at \(X\) , \(T\) , \(Y\) and \(Z\) , respectively.
As in Solution 1, we observe that the pairs \(B,C\) and \(D,Z\) are harmonic. Projecting these points from \(Q\) onto the circumcircle, we can see that \(B,C\) and \(U,P\) are also harmonic. Analogously, the pair \(V,Q\) is harmonic with \(B,C\) . Consider the inversion about the circle with centre \(R\) , passing through \(B\) and \(C\) . Points \(B\) and \(C\) are fixed points, so this inversion exchanges every point of \(\Gamma\) with its harmonic conjugate with respect to \(B,C\) . In particular, the inversion maps points \(B,C,N,U,V\) to points \(B,C,M,P,Q\) , respectively.
Combine the inversion with projecting \(\Gamma\) from \(A\) to line \(P Q\) ; the points \(B,C,M,P,Q\) are projected to \(F,E,T,P,Q\) , respectively.

The combination of these two transformations is a projective map from the lines \(A B\) , \(A C\) , \(A N\) , \(A U\) , \(A V\) through \(A\) to the lines \(I F\) , \(I E\) , \(I T\) , \(I P\) , \(I Q\) through \(I\) , respectively. On the other hand, we have \(A B\perp I F\) , \(A C\perp I E\) and \(A N\perp I T\) (the points \(A\) , \(I\) and \(T\) are collinear, so the lines \(A T\) and \(I T\) coincide). Since a projective map between two pencils that makes three pairs of corresponding lines perpendicular does so for every pair, the corresponding lines in these two pencils are all perpendicular. This proves \(A U\perp I P\) and \(A V\perp I Q\) , and hence completes the solution.
|
IMOSL-2019-G7
|
The incircle \(\omega\) of acute- angled scalene triangle \(ABC\) has centre \(I\) and meets sides \(BC\) , \(CA\) , and \(AB\) at \(D\) , \(E\) , and \(F\) , respectively. The line through \(D\) perpendicular to \(EF\) meets \(\omega\) again at \(R\) . Line \(AR\) meets \(\omega\) again at \(P\) . The circumcircles of triangles \(PCE\) and \(PBF\) meet again at \(Q \neq P\) . Prove that lines \(DI\) and \(PQ\) meet on the external bisector of angle \(BAC\) .
|
Common remarks. Throughout the solution, \(\angle (a, b)\) denotes the directed angle between lines \(a\) and \(b\) , measured modulo \(\pi\) .
Solution 1.
Step 1. The external bisector of \(\angle BAC\) is the line through \(A\) perpendicular to \(IA\) . Let \(DI\) meet this line at \(L\) and let \(DI\) meet \(\omega\) at \(K\) . Let \(N\) be the midpoint of \(EF\) , which lies on \(IA\) and is the pole of line \(AL\) with respect to \(\omega\) . Since \(AN \cdot AI = AE^2 = AR \cdot AP\) , the points \(R\) , \(N\) , \(I\) , and \(P\) are concyclic. As \(IR = IP\) , the line \(NI\) is the external bisector of \(\angle PNR\) , so \(PN\) meets \(\omega\) again at the point symmetric to \(R\) with respect to \(AN\) – i.e. at \(K\) .
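The chain \(AN \cdot AI = AE^2 = AR \cdot AP\) says that all three quantities equal the power of \(A\) with respect to \(\omega\). As a numeric sanity check (a sketch under the definitions in the text, not part of the proof, with coordinates chosen arbitrarily), one can build the configuration for a concrete acute scalene triangle and confirm the three equalities:

```python
import math

# Concrete acute scalene triangle (coordinates are an arbitrary choice).
A, B, C = (0.0, 0.0), (7.0, 0.0), (2.5, 5.5)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

a, b, c = dist(B, C), dist(C, A), dist(A, B)
s = (a + b + c) / 2
I = tuple((a * A[i] + b * B[i] + c * C[i]) / (a + b + c) for i in range(2))
r = math.sqrt((s - a) * (s - b) * (s - c) / s)       # inradius

def along(P, Q, t):
    # point at distance t from P in the direction of Q
    d = dist(P, Q)
    return (P[0] + t * (Q[0] - P[0]) / d, P[1] + t * (Q[1] - P[1]) / d)

# Touch points: AE = AF = s - a, BD = s - b.
E, F, D = along(A, C, s - a), along(A, B, s - a), along(B, C, s - b)
N = ((E[0] + F[0]) / 2, (E[1] + F[1]) / 2)           # midpoint of EF

def hits(P, u):
    # parameters t with P + t*u on the incircle omega
    w = (P[0] - I[0], P[1] - I[1])
    aa = u[0] ** 2 + u[1] ** 2
    bb = 2 * (u[0] * w[0] + u[1] * w[1])
    cc = w[0] ** 2 + w[1] ** 2 - r * r
    disc = math.sqrt(bb * bb - 4 * aa * cc)
    return ((-bb - disc) / (2 * aa), (-bb + disc) / (2 * aa))

u = (-(F[1] - E[1]), F[0] - E[0])                    # perpendicular to EF
tR = max(hits(D, u), key=abs)                        # D itself is t = 0
R = (D[0] + tR * u[0], D[1] + tR * u[1])
v = (R[0] - A[0], R[1] - A[1])
tP = max(hits(A, v), key=lambda t: abs(t - 1))       # t = 1 gives R back
P_pt = (A[0] + tP * v[0], A[1] + tP * v[1])

power = dist(A, I) ** 2 - r * r                      # power of A w.r.t. omega
assert abs(dist(A, E) ** 2 - power) < 1e-8
assert abs(dist(A, N) * dist(A, I) - power) < 1e-8
assert abs(dist(A, R) * dist(A, P_pt) - power) < 1e-8
print("AN*AI = AE^2 = AR*AP verified")
```

By the converse of the power of a point, the equality \(AN \cdot AI = AR \cdot AP\) along the two lines through \(A\) is exactly the concyclicity of \(R\), \(N\), \(I\), \(P\) used above.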
Let \(DN\) cross \(\omega\) again at \(S\) . Opposite sides of any quadrilateral inscribed in the circle \(\omega\) meet on the polar line of the intersection of the diagonals with respect to \(\omega\) . Since \(L\) lies on the polar line \(AL\) of \(N\) with respect to \(\omega\) , the line \(PS\) must pass through \(L\) . Thus it suffices to prove that the points \(S\) , \(Q\) , and \(P\) are collinear.

Step 2. Let \(\Gamma\) be the circumcircle of \(\triangle BIC\) . Notice that
\[\angle (B Q,Q C) = \angle (B Q,Q P) + \angle (P Q,Q C) = \angle (B F,F P) + \angle (P E,E C)\] \[\qquad = \angle (E F,E P) + \angle (F P,F E) = \angle (F P,E P) = \angle (D F,D E) = \angle (B I,I C),\]
so \(Q\) lies on \(\Gamma\) . Let \(QP\) meet \(\Gamma\) again at \(T\) . It will now suffice to prove that \(S, P\) , and \(T\) are collinear. Notice that \(\angle (BI, IT) = \angle (BQ, QT) = \angle (BF, FP) = \angle (FK, KP)\) . Note that \(FD \perp FK\) (as \(DK\) is a diameter of \(\omega\) ) and \(FD \perp BI\) (as \(BI\) is the perpendicular bisector of the chord \(DF\) ), so \(FK \parallel BI\) and hence \(IT\) is parallel to the line \(KNP\) . Since \(DI = IK\) , the line \(IT\) crosses \(DN\) at its midpoint \(M\) .
Step 3. Let \(F'\) and \(E'\) be the midpoints of \(DE\) and \(DF\) , respectively. Since \(DE' \cdot E'F = DE'^2 = BE' \cdot E'I\) (the triangle \(BDI\) is right-angled at \(D\) , and \(DE'\) is its altitude to the hypotenuse \(BI\) ), the point \(E'\) lies on the radical axis of \(\omega\) and \(\Gamma\) ; the same holds for \(F'\) . Therefore, this radical axis is \(E'F'\) , and it passes through \(M\) . Thus \(IM \cdot MT = DM \cdot MS\) , so \(S, I, D\) , and \(T\) are concyclic. This shows \(\angle (DS, ST) = \angle (DI, IT) = \angle (DK, KP) = \angle (DS, SP)\) , whence the points \(S, P\) , and \(T\) are collinear, as desired.

Comment. Here is a longer alternative proof in step 1 that \(P\) , \(S\) , and \(L\) are collinear, using a circular inversion instead of the fact that opposite sides of a quadrilateral inscribed in a circle \(\omega\) meet on the polar line with respect to \(\omega\) of the intersection of the diagonals. Let \(G\) be the foot of the altitude from \(N\) to the line \(DIKL\) . Observe that \(N, G, K, S\) are concyclic (opposite right angles) so
\[\angle D I P = 2\angle D K P = \angle G K N + \angle D S P = \angle G S N + \angle N S P = \angle G S P,\]
hence \(I, G, S, P\) are concyclic. We have \(I G \cdot I L = I N \cdot I A = r^{2}\) since \(\triangle I G N \sim \triangle I A L\) . Inverting the circle \(I G S P\) in circle \(\omega\) , points \(P\) and \(S\) are fixed and \(G\) is taken to \(L\) so we find that \(P, S\) , and \(L\) are collinear.
Solution 2. We start as in Solution 1. Namely, we introduce the same points \(K\) , \(L\) , \(N\) , and \(S\) , and show that the triples \((P, N, K)\) and \((P, S, L)\) are collinear. We conclude that \(K\) and \(R\) are symmetric in \(AI\) , and reduce the problem statement to showing that \(P\) , \(Q\) , and \(S\) are collinear.
Step 1. Let \(AR\) meet the circumcircle \(\Omega\) of \(ABC\) again at \(X\) . The lines \(AR\) and \(AK\) are isogonal in the angle \(BAC\) ; it is well known that in this case \(X\) is the tangency point of \(\Omega\) with the \(A\) - mixtilinear circle. It is also well known that for this point \(X\) , the line \(XI\) crosses \(\Omega\) again at the midpoint \(M'\) of arc \(BAC\) .
Step 2. Denote the circles \(BFP\) and \(CEP\) by \(\Omega_{B}\) and \(\Omega_{C}\) , respectively. Let \(\Omega_{B}\) cross \(AR\) and \(EF\) again at \(U\) and \(Y\) , respectively. We have
\[\angle (UB,BF) = \angle (UP,PF) = \angle (RP,PF) = \angle (RF,FA),\]
so \(UB \parallel RF\) .

Next, we show that the points \(B\) , \(I\) , \(U\) , and \(X\) are concyclic. Since
\[\angle (UB,UX) = \angle (RF, RX) = \angle (AF, AR) + \angle (FR, FA) = \angle (M'B, M'X) + \angle (DR, DF),\]
it suffices to prove \(\angle (IB, IX) = \angle (M'B, M'X) + \angle (DR, DF)\) , or \(\angle (IB, M'B) = \angle (DR, DF)\) . But both angles equal \(\angle (CI, CB)\) , as desired. (This is where we used the fact that \(M'\) is the midpoint of arc \(BAC\) of \(\Omega\) .)
It follows now from circles \(BUIX\) and \(BPUY\) that
\[\angle (IU,UB) = \angle (IX,BX) = \angle (M'X,BX) = \frac{\pi - \angle A}{2} \\ = \angle (EF,AF) = \angle (YF,BF) = \angle (YU,BU),\]
so the points \(Y\) , \(U\) , and \(I\) are collinear.
Let \(EF\) meet \(BC\) at \(W\) . We have
\[\angle (IY,YW) = \angle (UY,FY) = \angle (UB,FB) = \angle (RF,AF) = \angle (CI,CW),\]
so the points \(W\) , \(Y\) , \(I\) , and \(C\) are concyclic.
Similarly, if \(V\) and \(Z\) are the second meeting points of \(\Omega_{C}\) with \(A R\) and \(E F\) , we get that the 4- tuples \((C,V,I,X)\) and \((B,I,Z,W)\) are both concyclic.
Step 3. Let \(Q^{\prime} = C Y\cap B Z\) . We will show that \(Q^{\prime} = Q\) .
First of all, we have
\[\angle (Q^{\prime}Y,Q^{\prime}B) = \angle (C Y,Z B) = \angle (C Y,Z Y) + \angle (Z Y,B Z)\] \[\qquad = \angle (C I,I W) + \angle (I W,I B) = \angle (C I,I B) = \frac{\pi - \angle A}{2} = \angle (F Y,F B),\]
so \(Q^{\prime}\in \Omega_{B}\) . Similarly, \(Q^{\prime}\in \Omega_{C}\) . Thus \(Q^{\prime}\in \Omega_{B}\cap \Omega_{C} = \{P,Q\}\) and it remains to prove that \(Q^{\prime}\neq P\) . If we had \(Q^{\prime} = P\) , we would have \(\angle (P Y,P Z) = \angle (Q^{\prime}Y,Q^{\prime}Z) = \angle (I C,I B)\) . This would imply
\[\angle (P Y,Y F) + \angle (E Z,Z P) = \angle (P Y,P Z) = \angle (I C,I B) = \angle (P E,P F),\]
so circles \(\Omega_{B}\) and \(\Omega_{C}\) would be tangent at \(P\) . That is excluded in the problem conditions, so \(Q^{\prime} = Q\) .

Step 4. Now we are ready to show that \(P\) , \(Q\) , and \(S\) are collinear.
Notice that \(A\) and \(D\) are the poles of \(E W\) and \(D W\) with respect to \(\omega\) , so \(W\) is the pole of \(A D\) . Hence, \(W I\perp A D\) . Since \(C I\perp D E\) , this yields \(\angle (I C,W I) = \angle (D E,D A)\) . On the other hand, \(D A\) is a symmedian in \(\triangle D E F\) , so \(\angle (D E,D A) = \angle (D N,D F) = \angle (D S,D F)\) . Therefore,
\[\angle (P S,P F) = \angle (D S,D F) = \angle (D E,D A) = \angle (I C,I W)\] \[\qquad = \angle (Y C,Y W) = \angle (Y Q,Y F) = \angle (P Q,P F),\]
which yields the desired collinearity.
|
IMOSL-2019-G8
|
Let \(\mathcal{L}\) be the set of all lines in the plane and let \(f\) be a function that assigns to each line \(\ell \in \mathcal{L}\) a point \(f(\ell)\) on \(\ell\) . Suppose that for any point \(X\) , and for any three lines \(\ell_{1}\) , \(\ell_{2}\) , \(\ell_{3}\) passing through \(X\) , the points \(f(\ell_{1})\) , \(f(\ell_{2})\) , \(f(\ell_{3})\) and \(X\) lie on a circle.
Prove that there is a unique point \(P\) such that \(f(\ell) = P\) for any line \(\ell\) passing through \(P\) .
|
Common remarks. The condition on \(f\) is equivalent to the following: There is some function \(g\) that assigns to each point \(X\) a circle \(g(X)\) passing through \(X\) such that for any line \(\ell\) passing through \(X\) , the point \(f(\ell)\) lies on \(g(X)\) . (The function \(g\) may not be uniquely defined for all points, if some points \(X\) have at most one value of \(f(\ell)\) other than \(X\) ; for such points, an arbitrary choice is made.)
If there were two points \(P\) and \(Q\) with the given property, \(f(PQ)\) would have to be both \(P\) and \(Q\) , so there is at most one such point, and it will suffice to show that such a point exists.
Solution 1. We provide a complete characterisation of the functions satisfying the given condition.
Write \(\angle (\ell_{1},\ell_{2})\) for the directed angle modulo \(180^{\circ}\) between the lines \(\ell_{1}\) and \(\ell_{2}\) . Given a point \(P\) and an angle \(\alpha \in (0,180^{\circ})\) , for each line \(\ell\) , let \(\ell^{\prime}\) be the line through \(P\) satisfying \(\angle (\ell^{\prime},\ell) = \alpha\) , and let \(h_{P,\alpha}(\ell)\) be the intersection point of \(\ell\) and \(\ell^{\prime}\) . We will prove that there is some pair \((P,\alpha)\) such that \(f\) and \(h_{P,\alpha}\) are the same function. Then \(P\) is the unique point in the problem statement.
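That every \(h_{P,\alpha}\) does satisfy the hypothesis of the problem can be verified numerically. The sketch below is illustrative only, with \(P\) and \(\alpha\) chosen arbitrarily: it computes \(h_{P,\alpha}(\ell)\) for three lines through a random point \(X\) and checks that the three values, \(X\), and \(P\) are concyclic.

```python
import math
import random

def cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def h(P, alpha, X, theta):
    # h_{P,alpha}(l) for the line l through X with direction angle theta:
    # l' passes through P with direction angle theta - alpha, so the directed
    # angle from l' to l is alpha (mod 180 degrees); return l meet l'.
    u = (math.cos(theta), math.sin(theta))
    v = (math.cos(theta - alpha), math.sin(theta - alpha))
    w = (P[0] - X[0], P[1] - X[1])
    t = cross(w, v) / cross(u, v)        # cross(u, v) = -sin(alpha) != 0
    return (X[0] + t * u[0], X[1] + t * u[1])

def circumcentre(A, B, C):
    d = 2 * (A[0] * (B[1] - C[1]) + B[0] * (C[1] - A[1]) + C[0] * (A[1] - B[1]))
    sa, sb, sc = (A[0]**2 + A[1]**2, B[0]**2 + B[1]**2, C[0]**2 + C[1]**2)
    ox = (sa * (B[1] - C[1]) + sb * (C[1] - A[1]) + sc * (A[1] - B[1])) / d
    oy = (sa * (C[0] - B[0]) + sb * (A[0] - C[0]) + sc * (B[0] - A[0])) / d
    return (ox, oy)

random.seed(0)
P, alpha = (0.0, 0.0), 0.8               # arbitrary choices for the check
for _ in range(100):
    X = (random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0))
    t0 = random.uniform(0.0, math.pi)
    Ys = [h(P, alpha, X, t0 + k) for k in (0.0, 1.0, 2.0)]  # 3 lines through X
    O = circumcentre(*Ys)
    rad = math.hypot(O[0] - Ys[0][0], O[1] - Ys[0][1])
    assert abs(math.hypot(O[0] - X[0], O[1] - X[1]) - rad) < 1e-6  # X on g(X)
    assert abs(math.hypot(O[0] - P[0], O[1] - P[1]) - rad) < 1e-6  # P on g(X)
print("circle condition verified for h_{P,alpha}")
```

The check succeeds because the directed angle \(\angle(P\,h(\ell),\, h(\ell)X) = \alpha\) is the same for every line \(\ell\) through \(X\), so all values \(h(\ell)\) lie on one circular arc through \(P\) and \(X\); the special case \(\alpha = 90^{\circ}\) gives the classical circle with diameter \(PX\).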
Given an angle \(\alpha\) and a point \(P\) , let a line \(\ell\) be called \((P,\alpha)\) - good if \(f(\ell) = h_{P,\alpha}(\ell)\) . Let a point \(X\neq P\) be called \((P,\alpha)\) - good if the circle \(g(X)\) passes through \(P\) and some point \(Y\neq P,X\) on \(g(X)\) satisfies \(\angle (PY,YX) = \alpha\) . It follows from this definition that if \(X\) is \((P,\alpha)\) - good then every point \(Y\neq P,X\) of \(g(X)\) satisfies this angle condition, so \(h_{P,\alpha}(XY) = Y\) for every \(Y\in g(X)\) . Equivalently, \(f(\ell)\in \{X,h_{P,\alpha}(\ell)\}\) for each line \(\ell\) passing through \(X\) . This shows the following lemma.
Lemma 1. If \(X\) is \((P,\alpha)\) - good and \(\ell\) is a line passing through \(X\) then either \(f(\ell) = X\) or \(\ell\) is \((P,\alpha)\) - good.
Lemma 2. If \(X\) and \(Y\) are different \((P,\alpha)\) - good points, then line \(XY\) is \((P,\alpha)\) - good.
Proof. If \(XY\) is not \((P,\alpha)\) - good then by the previous Lemma, \(f(XY) = X\) and similarly \(f(XY) = Y\) , but clearly this is impossible as \(X\neq Y\) . \(\square\)
Lemma 3. If \(\ell_{1}\) and \(\ell_{2}\) are different \((P,\alpha)\) - good lines which intersect at \(X\neq P\) , then either \(f(\ell_{1}) = X\) or \(f(\ell_{2}) = X\) or \(X\) is \((P,\alpha)\) - good.
Proof. If \(f(\ell_{1}),f(\ell_{2})\neq X\) , then \(g(X)\) is the circumcircle of \(X\) , \(f(\ell_{1})\) and \(f(\ell_{2})\) . Since \(\ell_{1}\) and \(\ell_{2}\) are \((P,\alpha)\) - good lines, the angles
\[\angle (Pf(\ell_{1}),f(\ell_{1})X) = \angle (Pf(\ell_{2}),f(\ell_{2})X) = \alpha ,\]
so \(P\) lies on \(g(X)\) . Hence, \(X\) is \((P,\alpha)\) - good.
Lemma 4. If \(\ell_{1}\) , \(\ell_{2}\) and \(\ell_{3}\) are different \((P,\alpha)\) - good lines which intersect at \(X\neq P\) , then \(X\) is \((P,\alpha)\) - good.
Proof. This follows from the previous Lemma since at most one of the three lines \(\ell_{i}\) can satisfy \(f(\ell_{i}) = X\) as the three lines are all \((P,\alpha)\) - good. \(\square\)
Lemma 5. If \(ABC\) is a triangle such that \(A\) , \(B\) , \(C\) , \(f(AB)\) , \(f(AC)\) and \(f(BC)\) are all different points, then there is some point \(P\) and some angle \(\alpha\) such that \(A\) , \(B\) and \(C\) are \((P,\alpha)\) - good points and \(AB\) , \(BC\) and \(CA\) are \((P,\alpha)\) - good lines.

Proof. Let \(D\) , \(E\) , \(F\) denote the points \(f(BC)\) , \(f(AC)\) , \(f(AB)\) , respectively. Then \(g(A)\) , \(g(B)\) and \(g(C)\) are the circumcircles of \(AEF\) , \(BDF\) and \(CDE\) , respectively. Let \(P \neq F\) be the second intersection of circles \(g(A)\) and \(g(B)\) (or, if these circles are tangent at \(F\) , then \(P = F\) ). By Miquel's theorem (or an easy angle chase), \(g(C)\) also passes through \(P\) . Then by the cyclic quadrilaterals, the directed angles
\[\angle (PD,DC) = \angle (PF,FB) = \angle (PE,EA) = \alpha ,\]
for some angle \(\alpha\) . Hence, lines \(AB\) , \(BC\) and \(CA\) are all \((P, \alpha)\) - good, so by Lemma 3, \(A\) , \(B\) and \(C\) are \((P, \alpha)\) - good. (In the case where \(P = D\) , the line \(PD\) in the equation above denotes the line which is tangent to \(g(B)\) at \(P = D\) . Similar definitions are used for \(PE\) and \(PF\) in the cases where \(P = E\) or \(P = F\) .)
Consider the set \(\Omega\) of all points \((x, y)\) with integer coordinates \(1 \leqslant x, y \leqslant 1000\) , and consider the set \(L_{\Omega}\) of all horizontal, vertical and diagonal lines passing through at least one point in \(\Omega\) . A simple counting argument shows that there are 5998 lines in \(L_{\Omega}\) . For each line \(\ell\) in \(L_{\Omega}\) we colour the point \(f(\ell)\) red. Then there are at most 5998 red points. Now we partition the points in \(\Omega\) into 10000 ten by ten squares. Since there are at most 5998 red points, at least one of these squares \(\Omega_{10}\) contains no red points.

Let \((m, n)\) be the bottom left point in \(\Omega_{10}\) . Then the triangle with vertices \((m, n)\) , \((m + 1, n)\) and \((m, n + 1)\) satisfies the condition of Lemma 5, so these three vertices are all \((P, \alpha)\) - good for some point \(P\) and angle \(\alpha\) , as are the lines joining them. From this point on, we will simply call a point or line good if it is \((P, \alpha)\) - good for this particular pair \((P, \alpha)\) .

Now by Lemma 1, the line \(x = m + 1\) is good, as is the line \(y = n + 1\) . Then Lemma 3 implies that \((m + 1, n + 1)\) is good. By applying these two lemmas repeatedly, we can prove that the line \(x + y = m + n + 2\) is good, then the points \((m, n + 2)\) and \((m + 2, n)\) , then the lines \(x = m + 2\) and \(y = n + 2\) , then the points \((m + 2, n + 1)\) , \((m + 1, n + 2)\) and \((m + 2, n + 2)\) , and so on until we have proved that all points in \(\Omega_{10}\) are good.
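The count of 5998 lines can be confirmed by brute force. The sketch below (not part of the proof) identifies each line by an invariant of its direction class and checks the general count \(6n - 2\) for an \(n \times n\) grid, which gives 5998 for \(n = 1000\): there are \(n\) horizontal, \(n\) vertical, and \(2n - 1\) lines in each of the two diagonal directions.

```python
def count_lines(n):
    # lines through the grid {1,...,n}^2 in the four directions, each line
    # identified by a canonical invariant of its direction class
    pts = [(x, y) for x in range(1, n + 1) for y in range(1, n + 1)]
    horiz = {y for (x, y) in pts}        # a horizontal line is its y
    vert = {x for (x, y) in pts}         # a vertical line is its x
    diag1 = {x - y for (x, y) in pts}    # slope +1: constant x - y
    diag2 = {x + y for (x, y) in pts}    # slope -1: constant x + y
    return len(horiz) + len(vert) + len(diag1) + len(diag2)

for n in range(2, 30):
    assert count_lines(n) == 6 * n - 2
assert count_lines(1000) == 5998
print(count_lines(1000))   # -> 5998
```

Since 5998 < 10000, the pigeonhole step in the text goes through: at most 5998 red points cannot meet all 10000 of the ten by ten squares.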
Now we will use this to prove that every point \(S \neq P\) is good. Since \(g(S)\) is a circle, it passes through at most two points of \(\Omega_{10}\) on any vertical line, so at most 20 points in total. Moreover, any line \(\ell\) through \(S\) intersects at most 10 points in \(\Omega_{10}\) . Hence, there are at least eight lines \(\ell\) through \(S\) which contain a point \(Q\) in \(\Omega_{10}\) which is not on \(g(S)\) . Since \(Q\) is not on \(g(S)\) , the point \(f(\ell) \neq Q\) . Hence, by Lemma 1, the line \(\ell\) is good. Hence, at least eight good lines pass through \(S\) , so by Lemma 4, the point \(S\) is good. Hence, every point \(S \neq P\) is good, so by Lemma 2, every line is good. In particular, every line \(\ell\) passing through \(P\) is good, and therefore satisfies \(f(\ell) = P\) , as required.
Solution 2. Note that for any distinct points \(X, Y\) , the circles \(g(X)\) and \(g(Y)\) meet on \(XY\) at the point \(f(XY) \in g(X) \cap g(Y) \cap (XY)\) . We write \(s(X, Y)\) for the second intersection point of circles \(g(X)\) and \(g(Y)\) .
Lemma 1. Suppose that \(X, Y\) and \(Z\) are not collinear, and that \(f(XY) \notin \{X, Y\}\) and similarly for \(YZ\) and \(ZX\) . Then \(s(X, Y) = s(Y, Z) = s(Z, X)\) .
Proof. The circles \(g(X)\) , \(g(Y)\) and \(g(Z)\) through the vertices of triangle \(XYZ\) meet pairwise on the corresponding edges (produced). By Miquel's theorem, the second points of intersection of any two of the circles coincide. (See the diagram for Lemma 5 of Solution 1. )
Now pick any line \(\ell\) and any six different points \(Y_{1},\ldots ,Y_{6}\) on \(\ell \setminus \{f(\ell)\}\) . Pick a point \(X\) not on \(\ell\) or any of the circles \(g(Y_{i})\) . Reordering the indices if necessary, we may suppose that \(Y_{1},\ldots ,Y_{4}\) do not lie on \(g(X)\) , so that \(f(X Y_{i})\notin \{X,Y_{i}\}\) for \(1\leqslant i\leqslant 4\) . By applying the above lemma to triangles \(X Y_{i}Y_{j}\) for \(1\leqslant i< j\leqslant 4\) , we find that the points \(s(Y_{i},Y_{j})\) and \(s(X,Y_{i})\) are all equal, to point \(O\) say. Note that either \(O\) does not lie on \(\ell\) , or \(O = f(\ell)\) , since \(O\in g(Y_{i})\) .
Now consider an arbitrary point \(X^{\prime}\) not on \(\ell\) or any of the circles \(g(Y_{i})\) for \(1\leqslant i\leqslant 4\) . As above, we see that there are two indices \(1\leqslant i< j\leqslant 4\) such that \(Y_{i}\) and \(Y_{j}\) do not lie on \(g(X^{\prime})\) . By applying the above lemma to triangle \(X^{\prime}Y_{i}Y_{j}\) we see that \(s(X^{\prime},Y_{i}) = O\) , and in particular \(g(X^{\prime})\) passes through \(O\) .
We will now show that \(f(\ell^{\prime}) = O\) for all lines \(\ell^{\prime}\) through \(O\) . By the above note, we may assume that \(\ell^{\prime}\neq \ell\) . Consider a variable point \(X^{\prime}\in \ell^{\prime}\setminus \{O\}\) not on \(\ell\) or any of the circles \(g(Y_{i})\) for \(1\leqslant i\leqslant 4\) . We know that \(f(\ell^{\prime})\in g(X^{\prime})\cap \ell^{\prime} = \{X^{\prime},O\}\) . Since \(X^{\prime}\) was suitably arbitrary, we have \(f(\ell^{\prime}) = O\) as desired.
Solution 3. Notice that, for any two different points \(X\) and \(Y\) , the point \(f(X Y)\) lies on both \(g(X)\) and \(g(Y)\) , so any two such circles meet in at least one point. We refer to two circles as cutting only in the case where they cross, and so meet at exactly two points, thus excluding the cases where they are tangent or are the same circle.
Lemma 1. Suppose there is a point \(P\) such that all circles \(g(X)\) pass through \(P\) . Then \(P\) has the given property.
Proof. Consider some line \(\ell\) passing through \(P\) , and suppose that \(f(\ell)\neq P\) . Consider some \(X\in \ell\) with \(X\neq P\) and \(X\neq f(\ell)\) . Then \(g(X)\) passes through all of \(P\) , \(f(\ell)\) and \(X\) , but those three points are collinear, a contradiction.
Lemma 2. Suppose that, for all \(\epsilon >0\) , there is a point \(P_{\epsilon}\) with \(g(P_{\epsilon})\) of radius at most \(\epsilon\) . Then there is a point \(P\) with the given property.
Proof. Consider a sequence \(\epsilon_{i} = 2^{- i}\) and corresponding points \(P_{\epsilon_{i}}\) . Because the two circles \(g(P_{\epsilon_{i}})\) and \(g(P_{\epsilon_{j}})\) meet, the distance between \(P_{\epsilon_{i}}\) and \(P_{\epsilon_{j}}\) is at most \(2^{1 - i} + 2^{1 - j}\) . As \(\sum_{i}\epsilon_{i}\) converges, these points converge to some point \(P\) . For all \(\epsilon >0\) , the point \(P\) has distance at most \(2\epsilon\) from \(P_{\epsilon}\) , and all circles \(g(X)\) pass through a point with distance at most \(2\epsilon\) from \(P_{\epsilon}\) , so distance at most \(4\epsilon\) from \(P\) . A circle that passes distance at most \(4\epsilon\) from \(P\) for all \(\epsilon >0\) must pass through \(P\) , so by Lemma 1 the point \(P\) has the given property.
Lemma 3. Suppose no two of the circles \(g(X)\) cut. Then there is a point \(P\) with the given property.
Proof. Consider a circle \(g(X)\) with centre \(Y\) . The circle \(g(Y)\) must meet \(g(X)\) without cutting it, so has half the radius of \(g(X)\) . Repeating this argument, there are circles with arbitrarily small radius and the result follows by Lemma 2.
Lemma 4. Suppose there are six different points \(A\) , \(B_{1}\) , \(B_{2}\) , \(B_{3}\) , \(B_{4}\) , \(B_{5}\) such that no three are collinear, no four are concyclic, and all the circles \(g(B_{i})\) cut pairwise at \(A\) . Then there is a point \(P\) with the given property.
Proof. Consider some line \(\ell\) through \(A\) that does not pass through any of the \(B_{i}\) and is not tangent to any of the \(g(B_{i})\) . Fix some direction along that line, and let \(X_{\epsilon}\) be the point on \(\ell\) that has distance \(\epsilon\) from \(A\) in that direction. In what follows we consider only those \(\epsilon\) for which \(X_{\epsilon}\) does not lie on any \(g(B_{i})\) (this restriction excludes only finitely many possible values of \(\epsilon\) ).
Consider the circle \(g(X_{\epsilon})\) . Because no four of the \(B_{i}\) are concyclic, at most three of them lie on this circle, so at least two of them do not. There must be some sequence of \(\epsilon \rightarrow 0\) such that it is the same two of the \(B_{i}\) for all \(\epsilon\) in that sequence, so now restrict attention to that sequence, and suppose without loss of generality that \(B_{1}\) and \(B_{2}\) do not lie on \(g(X_{\epsilon})\) for any \(\epsilon\) in that sequence.
Then \(f(X_{\epsilon}B_{1})\) is not \(B_{1}\) , so must be the other point of intersection of \(X_{\epsilon}B_{1}\) with \(g(B_{1})\) , and the same applies with \(B_{2}\) . Now consider the three points \(X_{\epsilon}\) , \(f(X_{\epsilon}B_{1})\) and \(f(X_{\epsilon}B_{2})\) . As \(\epsilon \to 0\) , the angle at \(X_{\epsilon}\) tends to \(\angle B_{1}AB_{2}\) or \(180^{\circ} - \angle B_{1}AB_{2}\) , which is not 0 or \(180^{\circ}\) because no three of the points were collinear. All three distances between those points are bounded above by constant multiples of \(\epsilon\) (in fact, if the triangle is scaled by a factor of \(1 / \epsilon\) , it tends to a fixed triangle). Thus the circumradius of those three points, which is the radius of \(g(X_{\epsilon})\) , is also bounded above by a constant multiple of \(\epsilon\) , and so the result follows by Lemma 2. \(\square\)
Lemma 5. Suppose there are two points \(A\) and \(B\) such that \(g(A)\) and \(g(B)\) cut. Then there is a point \(P\) with the given property.
Proof. Suppose that \(g(A)\) and \(g(B)\) cut at \(C\) and \(D\) . One of those points, without loss of generality \(C\) , must be \(f(AB)\) , and so lie on the line \(AB\) . We now consider two cases, according to whether \(D\) also lies on that line.
Case 1: \(D\) does not lie on that line.
In this case, consider a sequence of \(X_{\epsilon}\) at distance \(\epsilon\) from \(D\) , tending to \(D\) along some line that is not a tangent to either circle, but perturbed slightly (by at most \(\epsilon^{2}\) ) to ensure that no three of the points \(A\) , \(B\) and \(X_{\epsilon}\) are collinear and no four are concyclic.
Consider the points \(f(X_{\epsilon}A)\) and \(f(X_{\epsilon}B)\) , and the circles \(g(X_{\epsilon})\) on which they lie. The point \(f(X_{\epsilon}A)\) might be either \(A\) or the other intersection of \(X_{\epsilon}A\) with the circle \(g(A)\) , and the same applies for \(B\) . If, for some sequence of \(\epsilon \to 0\) , both those points are the other point of intersection, the same argument as in the proof of Lemma 4 applies to find arbitrarily small circles. Otherwise, we have either infinitely many of those circles passing through \(A\) , or infinitely many passing through \(B\) ; without loss of generality, suppose infinitely many through \(A\) .
We now show that we can find five points \(B_i\) satisfying the conditions of Lemma 4 (together with \(A\) ). Let \(B_1\) be any of the \(X_{\epsilon}\) for which \(g(X_{\epsilon})\) passes through \(A\) . Then repeat the following four times, for \(2 \leqslant i \leqslant 5\) .
Consider some line \(\ell = X_{\epsilon}A\) (different from those considered for previous \(i\) ) that is not tangent to any of the \(g(B_{j})\) for \(j < i\) , and is such that \(f(\ell) = A\) , so \(g(Y)\) passes through \(A\) for all \(Y\) on that line. If there are arbitrarily small circles \(g(Y)\) we are done by Lemma 2, so the radii of such circles must be bounded below. But as \(Y \to A\) , along any line not tangent to \(g(B_{j})\) , the radius of a circle through \(Y\) and tangent to \(g(B_{j})\) at \(A\) tends to 0. So there must be some \(Y\) such that \(g(Y)\) cuts \(g(B_{j})\) at \(A\) rather than being tangent to it there, for all of the previous \(B_{j}\) , and we may also pick it such that no three of the \(B_{i}\) and \(A\) are collinear and no four are concyclic. Let \(B_{i}\) be this \(Y\) . Now the result follows by Lemma 4.
Case 2: \(D\) does lie on that line.
In this case, we follow a similar argument, but the sequence of \(X_{\epsilon}\) needs to be slightly different. \(C\) and \(D\) both lie on the line \(AB\) , so one must be \(A\) and the other must be \(B\) . Consider a sequence of \(X_{\epsilon}\) tending to \(B\) . Rather than tending to \(B\) along a straight line (with small perturbations), let the sequence be such that all the points are inside the two circles, with the angle between \(X_{\epsilon}B\) and the tangent to \(g(B)\) at \(B\) tending to 0.
Again consider the points \(f(X_{\epsilon}A)\) and \(f(X_{\epsilon}B)\) . If, for some sequence of \(\epsilon \to 0\) , both those points are the other point of intersection with the respective circles, we see that the angle at \(X_{\epsilon}\) tends to the angle between \(AB\) and the tangent to \(g(B)\) at \(B\) , which is not 0 or \(180^{\circ}\) , while the distances tend to 0 (although possibly slower than any multiple of \(\epsilon\) ), so we have arbitrarily small circumradii and the result follows by Lemma 2. Otherwise, we have either infinitely many of the circles \(g(X_{\epsilon})\) passing through \(A\) , or infinitely many passing through \(B\) , and the same argument as in the previous case enables us to reduce to Lemma 4. \(\square\)
Lemmas 3 and 5 together cover all cases, and so the required result is proved.
Comment. From the property that all circles \(g(X)\) pass through the same point \(P\) , it is possible to deduce that the function \(f\) has the form given in Solution 1. For any line \(\ell\) not passing through \(P\) we may define a corresponding angle \(\alpha (\ell)\) , which we must show is the same for all such lines. For any point \(X \neq P\) , with at least one line \(\ell\) through \(X\) and not through \(P\) , such that \(f(\ell) \neq X\) , this angle must be equal for all such lines through \(X\) (by (directed) angles in the same segment of \(g(X)\) ).
Now consider all horizontal and all vertical lines not through \(P\) . For any pair consisting of a horizontal line \(\ell_{1}\) and a vertical line \(\ell_{2}\) , we have \(\alpha (\ell_{1}) = \alpha (\ell_{2})\) unless \(f(\ell_{1})\) or \(f(\ell_{2})\) is the point of intersection of those lines. Consider the bipartite graph whose vertices are those lines and where an edge joins a horizontal and a vertical line with the same value of \(\alpha\) . Considering a subgraph induced by \(n\) horizontal and \(n\) vertical lines, it must have at least \(n^{2} - 2n\) edges, so some horizontal line has edges to at least \(n - 2\) of the vertical lines. Thus, in the original graph, all but at most two of the vertical lines have the same value of \(\alpha\) , and likewise all but at most two of the horizontal lines have the same value of \(\alpha\) , and, restricting attention to suitable subsets of those lines, we see that this value must be the same for the vertical lines and for the horizontal lines.
But now we can extend this to all vertical and horizontal lines not through \(P\) (and thus to lines in other directions as well, since the only requirement for 'vertical' and 'horizontal' above is that they are any two nonparallel directions). Consider any horizontal line \(\ell_{1}\) not passing through \(P\) , and we wish to show that \(\alpha (\ell_{1})\) has the same value \(\alpha\) it has for all but at most two lines not through \(P\) in any direction. Indeed, we can deduce this by considering the intersection with any but at most five of the vertical lines: the only ones to exclude are the one passing through \(P\) , the one passing through \(f(\ell_{1})\) , at most two such that \(\alpha (\ell) \neq \alpha\) , and the one passing through \(h_{P,\alpha}(\ell_{1})\) (defined as in Solution 1). So all lines \(\ell\) not passing through \(P\) have the same value of \(\alpha (\ell)\) .
Solution 4. For any point \(X\) , denote by \(t(X)\) the line tangent to \(g(X)\) at \(X\) ; notice that \(f(t(X)) = X\) , so \(f\) is surjective.
Step 1: We find a point \(P\) for which there are at least two different lines \(p_{1}\) and \(p_{2}\) such that \(f(p_{i}) = P\) .
Choose any point \(X\) . If \(X\) does not have this property, take any \(Y \in g(X) \setminus \{X\}\) ; then \(f(XY) = Y\) . If \(Y\) does not have the property, \(t(Y) = XY\) , and the circles \(g(X)\) and \(g(Y)\) meet again at some point \(Z\) . Then \(f(XZ) = Z = f(YZ)\) , so \(Z\) has the required property.
We will show that \(P\) is the desired point. From now on, we fix two different lines \(p_{1}\) and \(p_{2}\) with \(f(p_{1}) = f(p_{2}) = P\) . Assume for contradiction that \(f(\ell) = Q \neq P\) for some line \(\ell\) through \(P\) . We fix \(\ell\) , and note that \(Q \in g(P)\) .
Step 2: We prove that \(P \in g(Q)\) .
Take an arbitrary point \(X \in \ell \setminus \{P, Q\}\) . Two cases are possible for the position of \(t(X)\) in relation to the \(p_{i}\) ; we will show that each case (and subcase) occurs for only finitely many positions of \(X\) , yielding a contradiction.
Case 2.1: \(t(X)\) is parallel to one of the \(p_{i}\) ; say, to \(p_{1}\) .
Let \(t(X)\) cross \(p_{2}\) at \(R\) . Then \(g(R)\) is the circle \((PRX)\) , as \(f(RP) = P\) and \(f(RX) = X\) . Let \(RQ\) cross \(g(R)\) again at \(S\) . Then \(f(RQ) \in \{R, S\} \cap g(Q)\) , so \(g(Q)\) contains one of the points \(R\) and \(S\) .
If \(R \in g(Q)\) , then \(R\) is one of finitely many points in the intersection \(g(Q) \cap p_{2}\) , and each of them corresponds to a unique position of \(X\) , since \(RX\) is parallel to \(p_{1}\) .
If \(S \in g(Q)\) , then \(\angle (QS, SP) = \angle (RS, SP) = \angle (RX, XP) = \angle (p_{1}, \ell)\) , so \(\angle (QS, SP)\) is constant for all such points \(X\) , and all points \(S\) obtained in such a way lie on one circle \(\gamma\) passing through \(P\) and \(Q\) . Since \(g(Q)\) does not contain \(P\) , it is different from \(\gamma\) , so there are only finitely many points \(S\) . Each of them uniquely determines \(R\) and thus \(X\) .

So, Case 2.1 can occur for only finitely many points \(X\) .
Case 2.2: \(t(X)\) crosses \(p_{1}\) and \(p_{2}\) at \(R_{1}\) and \(R_{2}\) , respectively.
Clearly, \(R_{1} \neq R_{2}\) , as \(t(X)\) is the tangent to \(g(X)\) at \(X\) , and \(g(X)\) meets \(\ell\) only at \(X\) and \(Q\) . Notice that \(g(R_{i})\) is the circle \((P X R_{i})\) . Let \(R_{i}Q\) meet \(g(R_{i})\) again at \(S_{i}\) ; then \(S_{i} \neq Q\) , as \(g(R_{i})\) meets \(\ell\) only at \(P\) and \(X\) . Then \(f(R_{i}Q) \in \{R_{i}, S_{i}\}\) , and we distinguish several subcases.

Subcase 2.2.1: \(f(R_{1}Q) = S_{1}\) , \(f(R_{2}Q) = S_{2}\) ; so \(S_{1}, S_{2} \in g(Q)\) .
In this case we have \(0 = \angle (R_{1}X, X P) + \angle (X P, R_{2}X) = \angle (R_{1}S_{1}, S_{1}P) + \angle (S_{2}P, S_{2}R_{2}) = \angle (Q S_{1}, S_{1}P) + \angle (S_{2}P, S_{2}Q)\) , which shows \(P \in g(Q)\) .
Subcase 2.2.2: \(f(R_{1}Q) = R_{1}\) , \(f(R_{2}Q) = R_{2}\) ; so \(R_{1}, R_{2} \in g(Q)\) .
This can happen for at most four positions of \(X\) – namely, at the intersections of \(\ell\) with a line of the form \(K_{1}K_{2}\) , where \(K_{i} \in g(Q) \cap p_{i}\) .
Subcase 2.2.3: \(f(R_{1}Q) = S_{1}\) , \(f(R_{2}Q) = R_{2}\) (the case \(f(R_{1}Q) = R_{1}\) , \(f(R_{2}Q) = S_{2}\) is similar).
In this case, there are at most two possible positions for \(R_{2}\) - namely, the meeting points of \(g(Q)\) with \(p_{2}\) . Consider one of them. Let \(X\) vary on \(\ell\) . Then \(R_{1}\) is the projection of \(X\) to \(p_{1}\) via \(R_{2}\) , \(S_{1}\) is the projection of \(R_{1}\) to \(g(Q)\) via \(Q\) . Finally, \(\angle (QS_{1},S_{1}X) = \angle (R_{1}S_{1},S_{1}X) = \angle (R_{1}P,PX) = \angle (p_{1},\ell) \neq 0\) , so \(X\) is obtained by a fixed projective transform \(g(Q) \to \ell\) from \(S_{1}\) . So, if there were three points \(X\) satisfying the conditions of this subcase, the composition of the three projective transforms would be the identity. But, if we apply it to \(X = Q\) , we successively get some point \(R_{1}^{\prime}\) , then \(R_{2}\) , and then some point different from \(Q\) , a contradiction.
Thus Case 2.2 also occurs for only finitely many points \(X\) , as desired.
Step 3: We show that \(f(PQ) = P\) , as desired.
The argument is similar to that in Step 2, with the roles of \(Q\) and \(X\) swapped. Again, we show that there are only finitely many possible positions for a point \(X \in \ell \setminus \{P, Q\}\) , which is absurd.
Case 3.1: \(t(Q)\) is parallel to one of the \(p_{i}\) ; say, to \(p_{1}\) .
Let \(t(Q)\) cross \(p_{2}\) at \(R\) ; then \(g(R)\) is the circle \((PRQ)\) . Let \(RX\) cross \(g(R)\) again at \(S\) . Then \(f(RX) \in \{R, S\} \cap g(X)\) , so \(g(X)\) contains one of the points \(R\) and \(S\) .

Subcase 3.1.1: \(S = f(RX) \in g(X)\) .
We have \(\angle (t(X), QX) = \angle (SX, SQ) = \angle (SR, SQ) = \angle (PR, PQ) = \angle (p_{2}, \ell)\) . Hence \(t(X) \parallel p_{2}\) . Now we recall Case 2.1: we let \(t(X)\) cross \(p_{1}\) at \(R^{\prime}\) , so \(g(R^{\prime}) = (PR^{\prime}X)\) , and let \(R^{\prime}Q\) meet \(g(R^{\prime})\) again at \(S^{\prime}\) ; notice that \(S^{\prime} \neq Q\) . Excluding one position of \(X\) , we may assume that \(R^{\prime} \notin g(Q)\) , so \(R^{\prime} \neq f(R^{\prime}Q)\) . Therefore, \(S^{\prime} = f(R^{\prime}Q) \in g(Q)\) . But then, as in Case 2.1, we get \(\angle (t(Q), PQ) = \angle (QS^{\prime}, S^{\prime}P) = \angle (R^{\prime}X, XP) = \angle (p_{2}, \ell)\) . This means that \(t(Q)\) is parallel to \(p_{2}\) , which is impossible.
Subcase 3.1.2: \(R = f(RX) \in g(X)\) .
In this case, we have \(\angle (t(X), \ell) = \angle (RX, RQ) = \angle (RX, p_{1})\) . Again, let \(R^{\prime} = t(X) \cap p_{1}\) ; this point exists for all but at most one position of \(X\) . Then \(g(R^{\prime}) = (R^{\prime}XP)\) ; let \(R^{\prime}Q\) meet \(g(R^{\prime})\) again at \(S^{\prime}\) . Due to \(\angle (R^{\prime}X, XR) = \angle (QX, QR) = \angle (\ell, p_{1})\) , \(R^{\prime}\) determines \(X\) in at most two ways, so for all but finitely many positions of \(X\) we have \(R^{\prime} \notin g(Q)\) . Therefore, for those positions we have \(S^{\prime} = f(R^{\prime}Q) \in g(Q)\) . But then \(\angle (RX, p_{1}) = \angle (R^{\prime}X, XP) = \angle (R^{\prime}S^{\prime}, S^{\prime}P) = \angle (QS^{\prime}, S^{\prime}P) = \angle (t(Q), QP)\) is fixed, so this case can hold only for one specific position of \(X\) as well.
Thus, in Case 3.1, there are only finitely many possible positions of \(X\) , yielding a contradiction.
Case 3.2: \(t(Q)\) crosses \(p_{1}\) and \(p_{2}\) at \(R_{1}\) and \(R_{2}\) , respectively.
By Step 2, \(R_{1} \neq R_{2}\) . Notice that \(g(R_{i})\) is the circle \((PQR_{i})\) . Let \(R_{i}X\) meet \(g(R_{i})\) at \(S_{i}\) ; then \(S_{i} \neq X\) . Then \(f(R_{i}X) \in \{R_{i}, S_{i}\}\) , and we distinguish several subcases.

Subcase 3.2.1: \(f(R_{1}X) = S_{1}\) and \(f(R_{2}X) = S_{2}\) , so \(S_{1}, S_{2} \in g(X)\) .
As in Subcase 2.2.1, we have \(0 = \angle (R_{1}Q, QP) + \angle (QP, R_{2}Q) = \angle (XS_{1}, S_{1}P) + \angle (S_{2}P, S_{2}X)\) , which shows \(P \in g(X)\) . But \(X, Q \in g(X)\) as well, so \(g(X)\) meets \(\ell\) at three distinct points, which is absurd.
Subcase 3.2.2: \(f(R_{1}X) = R_{1}\) , \(f(R_{2}X) = R_{2}\) , so \(R_{1}, R_{2} \in g(X)\) .
Now three distinct collinear points \(R_{1}, R_{2}\) , and \(Q\) belong to \(g(X)\) , which is impossible.
Subcase 3.2.3: \(f(R_{1}X) = S_{1}\) , \(f(R_{2}X) = R_{2}\) (the case \(f(R_{1}X) = R_{1}\) , \(f(R_{2}X) = S_{2}\) is similar).
We have \(\angle (XR_{2}, R_{2}Q) = \angle (XS_{1}, S_{1}Q) = \angle (R_{1}S_{1}, S_{1}Q) = \angle (R_{1}P, PQ) = \angle (p_{1}, \ell)\) , so this case can occur for a unique position of \(X\) .
Thus, in Case 3.2, there is only a unique position of \(X\) , again yielding the required contradiction.
|
IMOSL-2019-N1
|
Find all pairs \((m,n)\) of positive integers satisfying the equation
\[(2^{n} - 1)(2^{n} - 2)(2^{n} - 4)\cdot \cdot \cdot (2^{n} - 2^{n - 1}) = m! \quad (1)\]
|
Answer: The only such pairs are \((1,1)\) and \((3,2)\) .
Common remarks. In all solutions, for any prime \(p\) and positive integer \(N\) , we will denote by \(v_{p}(N)\) the exponent of the largest power of \(p\) that divides \(N\) . The left- hand side of (1) will be denoted by \(L_{n}\) ; that is, \(L_{n} = (2^{n} - 1)(2^{n} - 2)(2^{n} - 4)\cdot \cdot \cdot (2^{n} - 2^{n - 1})\) .
Solution 1. We will get an upper bound on \(n\) from the speed at which \(v_{2}(L_{n})\) grows.
From
\[L_{n} = (2^{n} - 1)(2^{n} - 2)\cdot \cdot \cdot (2^{n} - 2^{n - 1}) = 2^{1 + 2 + \dots +(n - 1)}(2^{n} - 1)(2^{n - 1} - 1)\cdot \cdot \cdot (2^{1} - 1)\]
we read
\[v_{2}(L_{n}) = 1 + 2 + \dots +(n - 1) = \frac{n(n - 1)}{2}.\]
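This valuation identity is easy to confirm numerically. The sketch below (plain Python; the helper names `v2` and `L` are ours, not part of the solution) checks it for small \(n\):

```python
def v2(N):
    """2-adic valuation of N: largest t with 2**t dividing N."""
    t = 0
    while N % 2 == 0:
        N //= 2
        t += 1
    return t

def L(n):
    """L_n = (2^n - 1)(2^n - 2)...(2^n - 2^(n-1))."""
    prod = 1
    for i in range(n):
        prod *= 2**n - 2**i
    return prod

for n in range(1, 12):
    assert v2(L(n)) == n * (n - 1) // 2
```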
On the other hand, \(v_{2}(m!)\) is expressed by the Legendre formula as
\[v_{2}(m!) = \sum_{i = 1}^{\infty}\left\lfloor \frac{m}{2^{i}}\right\rfloor .\]
As usual, by omitting the floor functions,
\[v_{2}(m!) < \sum_{i = 1}^{\infty}\frac{m}{2^{i}} = m.\]
Thus, \(L_{n} = m!\) implies the inequality
\[\frac{n(n - 1)}{2} < m. \quad (2)\]
In order to obtain an opposite estimate, observe that
\[L_{n} = (2^{n} - 1)(2^{n} - 2)\cdot \cdot \cdot (2^{n} - 2^{n - 1})< (2^{n})^{n} = 2^{n^{2}}.\]
We claim that
\[2^{n^{2}}< \left(\frac{n(n - 1)}{2}\right)!\quad \mathrm{for~}n\geqslant 6. \quad (3)\]
For \(n = 6\) the estimate (3) is true because \(2^{6^{2}}< 6.9\cdot 10^{10}\) and \(\left(\frac{6\cdot 5}{2}\right)! = 15! > 1.3\cdot 10^{12}\) .
For \(n\geqslant 7\) we prove (3) by the following inequalities:
\[\left(\frac{n(n - 1)}{2}\right)! = 15!\cdot 16\cdot 17\cdots \frac{n(n - 1)}{2} >2^{36}\cdot 16^{\frac{n(n - 1)}{2} - 15}\] \[\qquad = 2^{2n(n - 1) - 24} = 2^{n^{2}}\cdot 2^{n(n - 2) - 24} > 2^{n^{2}}.\]
Putting together (2) and (3), for \(n \geq 6\) we get a contradiction, since
\[L_{n}< 2^{n^{2}}< \left(\frac{n(n - 1)}{2}\right)!< m! = L_{n}.\]
Hence \(n \geq 6\) is not possible.
Checking manually the cases \(n \leq 5\) we find
\[L_{1} = 1 = 1!,\qquad L_{2} = 6 = 3!,\qquad 5!< L_{3} = 168< 6!,\] \[7!< L_{4} = 20 160< 8! \qquad \mathrm{and}\qquad 10!< L_{5} = 9 999 360< 11!.\]
So, there are two solutions:
\[(m,n)\in \{(1,1),(3,2)\} .\]
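The manual checks above can be reproduced by a short brute-force search. The following sketch (ours, not part of the original solution) compares \(L_n\) with factorials for \(n \leqslant 6\):

```python
def L(n):
    """L_n = (2^n - 1)(2^n - 2)...(2^n - 2^(n-1))."""
    prod = 1
    for i in range(n):
        prod *= 2**n - 2**i
    return prod

solutions = []
for n in range(1, 7):
    # find the smallest m with m! >= L_n and test for equality
    m, f = 1, 1
    while f < L(n):
        m += 1
        f *= m
    if f == L(n):
        solutions.append((m, n))

assert solutions == [(1, 1), (3, 2)]
```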
Solution 2. As in the previous solution, the cases \(n = 1,2,3,4\) are checked manually. We will exclude \(n \geqslant 5\) by considering the exponents of 3 and 31 in (1).
For odd primes \(p\) and distinct integers \(a,b\) , coprime to \(p\) , with \(p\mid a - b\) , the Lifting The Exponent lemma asserts that
\[\upsilon_{p}(a^{k} - b^{k}) = \upsilon_{p}(a - b) + \upsilon_{p}(k).\]
Notice that 3 divides \(2^{k} - 1\) if and only if \(k\) is even; moreover, by the Lifting The Exponent lemma we have
\[\upsilon_{3}(2^{2k} - 1) = \upsilon_{3}(4^{k} - 1) = 1 + \upsilon_{3}(k) = \upsilon_{3}(3k).\]
Hence,
\[\upsilon_{3}(L_{n}) = \sum_{2k\leq n}\upsilon_{3}(4^{k} - 1) = \sum_{k\leq \lfloor \frac{n}{2}\rfloor}\upsilon_{3}(3k).\]
Notice that the last expression is precisely the exponent of 3 in the prime factorisation of \(\left(3\left\lfloor \frac{n}{2}\right\rfloor\right)!\) . Therefore
\[\upsilon_{3}(m!) = \upsilon_{3}(L_{n}) = \upsilon_{3}\left(\left(3\left\lfloor\frac{n}{2}\right\rfloor\right)!\right),\]
and, since the exponent of 3 in \(N!\) increases exactly when \(N\) passes a multiple of 3, this forces
\[3\left\lfloor \frac{n}{2}\right\rfloor \leqslant m\leqslant 3\left\lfloor \frac{n}{2}\right\rfloor +2. \quad (4)\]
Suppose that \(n \geqslant 5\) . Note that every fifth factor in \(L_{n}\) is divisible by \(31 = 2^{5} - 1\) , and hence we have \(\upsilon_{31}(L_{n}) \geqslant \lfloor \frac{n}{5} \rfloor\) . Then
\[\frac{n}{10}\leqslant \left\lfloor \frac{n}{5}\right\rfloor \leqslant \upsilon_{31}(L_{n}) = \upsilon_{31}(m!) = \sum_{k = 1}^{\infty}\left\lfloor \frac{m}{31^{k}}\right\rfloor < \sum_{k = 1}^{\infty}\frac{m}{31^{k}} = \frac{m}{30}. \quad (5)\]
By combining (4) and (5),
\[3n< m\leqslant \frac{3n}{2} +2\]
so \(n < \frac{4}{3}\) , which is inconsistent with the inequality \(n \geqslant 5\) .
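Both valuation facts used in this solution, \(\upsilon_3(L_n) = \upsilon_3\big(\big(3\lfloor n/2\rfloor\big)!\big)\) and \(\upsilon_{31}(L_n) \geqslant \lfloor n/5\rfloor\), can be spot-checked for small \(n\). A minimal sketch (helper names are ours):

```python
from math import factorial

def vp(N, p):
    """p-adic valuation of N."""
    t = 0
    while N % p == 0:
        N //= p
        t += 1
    return t

def L(n):
    """L_n = (2^n - 1)(2^n - 2)...(2^n - 2^(n-1))."""
    prod = 1
    for i in range(n):
        prod *= 2**n - 2**i
    return prod

for n in range(1, 12):
    assert vp(L(n), 3) == vp(factorial(3 * (n // 2)), 3)
    assert vp(L(n), 31) >= n // 5
```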
Comment 1. There are many combinations of the ideas above; for example, combining (2) and (4) also provides \(n < 5\) . Obviously, considering the exponents of any two primes in (1), or considering one prime together with orders of magnitude, leads to an upper bound on \(n\) and \(m\) .
Comment 2. This problem has a connection to group theory. Indeed, the left- hand side is the order of the group \(GL_{n}(\mathbb{F}_{2})\) of invertible \(n\) - by- \(n\) matrices with entries modulo 2, while the right- hand side is the order of the symmetric group \(S_{m}\) on \(m\) elements. The result thus shows that the only possible isomorphisms between these groups are \(GL_{1}(\mathbb{F}_{2})\cong S_{1}\) and \(GL_{2}(\mathbb{F}_{2})\cong S_{3}\) , and there are in fact isomorphisms in both cases. In general, \(GL_{n}(\mathbb{F}_{2})\) is a simple group for \(n\geq 3\) , as it is isomorphic to \(PSL_{n}(\mathbb{F}_{2})\) .
There is also a near- solution of interest: the left- hand side for \(n = 4\) is half of the right- hand side when \(m = 8\) ; this turns out to correspond to an isomorphism \(GL_{4}(\mathbb{F}_{2})\cong A_{8}\) with the alternating group on eight elements.
However, while this indicates that the problem is a useful one, knowing group theory is of no use in solving it!
|
IMOSL-2019-N3
|
We say that a set \(S\) of integers is rootiful if, for any positive integer \(n\) and any \(a_{0},a_{1},\ldots ,a_{n}\in S\) , all integer roots of the polynomial \(a_{0} + a_{1}x + \cdot \cdot \cdot +a_{n}x^{n}\) are also in \(S\) . Find all rootiful sets of integers that contain all numbers of the form \(2^{a} - 2^{b}\) for positive integers \(a\) and \(b\) .
|
Answer: The set \(\mathbb{Z}\) of all integers is the only such rootiful set.
Solution 1. The set \(\mathbb{Z}\) of all integers is clearly rootiful. We shall prove that any rootiful set \(S\) containing all the numbers of the form \(2^{a} - 2^{b}\) for \(a,b\in \mathbb{Z}_{>0}\) must be all of \(\mathbb{Z}\) .
First, note that \(0 = 2^{1} - 2^{1}\in S\) and \(2 = 2^{2} - 2^{1}\in S\) . Now, \(- 1\in S\) , since it is a root of \(2x + 2\) , and \(1\in S\) , since it is a root of \(2x^{2} - x - 1\) . Also, if \(n\in S\) then \(- n\) is a root of \(x + n\) , so it suffices to prove that all positive integers must be in \(S\) .
Now, we claim that any positive integer \(n\) has a multiple in \(S\) . Indeed, suppose that \(n = 2^{\alpha}\cdot t\) for \(\alpha \in \mathbb{Z}_{\geq 0}\) and \(t\) odd. Then \(t\mid 2^{\phi (t)} - 1\) , so \(n\mid 2^{\alpha +\phi (t) + 1} - 2^{\alpha +1}\) . Moreover, \(2^{\alpha +\phi (t) + 1} - 2^{\alpha +1}\in S\) , and so \(S\) contains a multiple of every positive integer \(n\) .
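The multiple constructed in this paragraph can be verified directly. A sketch (ours; the naive totient is adequate for small inputs):

```python
from math import gcd

def phi(t):
    """Euler's totient by direct count -- illustrative only."""
    return sum(1 for j in range(1, t + 1) if gcd(j, t) == 1)

for n in range(1, 101):
    # write n = 2^alpha * t with t odd
    alpha, t = 0, n
    while t % 2 == 0:
        alpha += 1
        t //= 2
    # the element 2^(alpha + phi(t) + 1) - 2^(alpha + 1) of S is a multiple of n
    a, b = alpha + phi(t) + 1, alpha + 1
    assert (2**a - 2**b) % n == 0
```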
We will now prove by induction that all positive integers are in \(S\) . Suppose that \(0,1,\ldots ,n - 1\in S\) ; furthermore, let \(N\) be a multiple of \(n\) in \(S\) . Consider the base- \(n\) expansion of \(N\) , say \(N = a_{k}n^{k} + a_{k - 1}n^{k - 1} + \cdot \cdot \cdot +a_{1}n + a_{0}\) . Since \(0\leqslant a_{i}< n\) for each \(i\) , all the \(a_{i}\) are in \(S\) . Furthermore, \(a_{0} = 0\) since \(N\) is a multiple of \(n\) . Therefore, \(a_{k}n^{k} + a_{k - 1}n^{k - 1} + \cdot \cdot \cdot +a_{1}n - N = 0\) , so \(n\) is a root of a polynomial with coefficients in \(S\) . This tells us that \(n\in S\) , completing the induction.
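A concrete instance of the induction step: the base-\(n\) digits of a multiple \(N\) yield a polynomial with \(n\) as a root. A minimal sketch (the specific \(n\) and multiple are arbitrary illustrative choices, not values from the proof):

```python
def digits_base(N, n):
    """Base-n digits of N, least significant first."""
    ds = []
    while N > 0:
        ds.append(N % n)
        N //= n
    return ds

n = 7
N = 7 * 123            # any multiple of n in S would do
ds = digits_base(N, n)
assert ds[0] == 0      # a_0 = 0 because n divides N
# n is a root of a_k x^k + ... + a_1 x + (-N), whose coefficients lie in S
assert sum(a * n**i for i, a in enumerate(ds)) - N == 0
```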
Solution 2. As in the previous solution, we can prove that \(0,1\) and \(- 1\) must all be in any rootiful set \(S\) containing all numbers of the form \(2^{a} - 2^{b}\) for \(a,b\in \mathbb{Z}_{>0}\) .
We show that, in fact, every integer \(k\) with \(|k| > 2\) can be expressed as a root of a polynomial whose coefficients are of the form \(2^{a} - 2^{b}\) . Observe that it suffices to consider the case where \(k\) is positive, as if \(k\) is a root of \(a_{n}x^{n} + \cdot \cdot \cdot +a_{1}x + a_{0} = 0\) , then \(- k\) is a root of \((- 1)^{n}a_{n}x^{n} + \cdot \cdot \cdot - a_{1}x + a_{0} = 0\) .
Note that
\[(2^{a_{n}} - 2^{b_{n}})k^{n} + \cdot \cdot \cdot +(2^{a_{0}} - 2^{b_{0}}) = 0\]
is equivalent to
\[2^{a_{n}}k^{n} + \cdot \cdot \cdot +2^{a_{0}} = 2^{b_{n}}k^{n} + \cdot \cdot \cdot +2^{b_{0}}.\]
Hence our aim is to show that two numbers of the form \(2^{a_{n}}k^{n} + \cdot \cdot \cdot +2^{a_{0}}\) are equal, for a fixed value of \(n\) . We consider such polynomials where every term \(2^{a_{i}}k^{i}\) is at most \(2k^{n}\) ; in other words, where \(2\leqslant 2^{a_{i}}\leqslant 2k^{n - i}\) , or, equivalently, \(1\leqslant a_{i}\leqslant 1 + (n - i)\log_{2}k\) . Therefore, there are exactly \(1 + \lfloor (n - i)\log_{2}k\rfloor\) possible choices for \(a_{i}\) satisfying these constraints.
The number of possible polynomials is then
\[\prod_{i = 0}^{n}(1 + \lfloor (n - i)\log_{2}k\rfloor)\geqslant \prod_{i = 0}^{n - 1}(n - i)\log_{2}k = n!(\log_{2}k)^{n}\]
where the inequality holds as \(1 + \lfloor x\rfloor \geqslant x\) .
As there are \((n + 1)\) such terms in the polynomial, each at most \(2k^{n}\) , such a polynomial must have value at most \(2k^{n}(n + 1)\) . However, for large \(n\) , we have \(n!(\log_{2}k)^{n} > 2k^{n}(n + 1)\) . Therefore there are more polynomials than possible values, so some two must be equal, as required.
|
IMOSL-2019-N4
|
Let \(\mathbb{Z}_{>0}\) be the set of positive integers. A positive integer constant \(C\) is given. Find all functions \(f:\mathbb{Z}_{>0}\to \mathbb{Z}_{>0}\) such that, for all positive integers \(a\) and \(b\) satisfying \(a + b > C\) ,
\[a + f(b)\mid a^{2} + b f(a). \quad (*)\]
|
Answer: The functions satisfying \((*)\) are exactly the functions \(f(a) = ka\) for some constant \(k\in \mathbb{Z}_{>0}\) (irrespective of the value of \(C\) ).
Common remarks. It is easy to verify that the functions \(f(a) = ka\) satisfy \((*)\) . Thus, in the proofs below, we will only focus on the converse implication: that condition \((*)\) implies that \(f(a) = ka\) for some constant \(k\) .
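The easy direction is a one-line identity: for \(f(a) = ka\) we have \(a^2 + b f(a) = a(a + kb)\), which is visibly divisible by \(a + f(b) = a + kb\). A quick numeric confirmation (ours):

```python
# For f(a) = k*a:  a^2 + b*f(a) = a^2 + k*a*b = a*(a + k*b),
# and a + f(b) = a + k*b, so the divisibility in (*) holds for every a, b.
for k in range(1, 6):
    for a in range(1, 40):
        for b in range(1, 40):
            assert (a * a + b * (k * a)) % (a + k * b) == 0
```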
A common minor part of these solutions is the derivation of some relatively easy bounds on the function \(f\) . An upper bound is easily obtained by setting \(a = 1\) in \((*)\) , giving the inequality
\[f(b)\leqslant b\cdot f(1)\]
for all sufficiently large \(b\) . The corresponding lower bound is only marginally more difficult to obtain: substituting \(b = 1\) in the original equation shows that
\[a + f(1)\mid (a^{2} + f(a)) - (a - f(1))\cdot (a + f(1)) = f(1)^{2} + f(a)\]
for all sufficiently large \(a\) . It follows from this that one has the lower bound
\[f(a)\geqslant a + f(1)\cdot (1 - f(1)),\]
again for all sufficiently large \(a\) .
Each of the following proofs makes use of at least one of these bounds.
Solution 1. First, we show that \(b\mid f(b)^{2}\) for all \(b\) . To do this, we choose a large positive integer \(n\) so that \(n b - f(b)\geqslant C\) . Setting \(a = n b - f(b)\) in \((*)\) then shows that
\[n b\mid (n b - f(b))^{2} + b f(n b - f(b))\]
so that \(b\mid f(b)^{2}\) as claimed.
Now in particular we have that \(p\mid f(p)\) for every prime \(p\) . If we write \(f(p) = k(p)\cdot p\) , then the bound \(f(p)\leqslant f(1)\cdot p\) (valid for \(p\) sufficiently large) shows that some value \(k\) of \(k(p)\) must be attained for infinitely many \(p\) . We will show that \(f(a) = ka\) for all positive integers \(a\) . To do this, we substitute \(b = p\) in \((*)\) , where \(p\) is any sufficiently large prime for which \(k(p) = k\) , obtaining
\[a + k p\mid (a^{2} + p f(a)) - a(a + k p) = p f(a) - p k a.\]
For suitably large \(p\) we have \(\gcd (a + k p,p) = 1\) , and hence we have
\[a + k p\mid f(a) - ka.\]
But the only way this can hold for arbitrarily large \(p\) is if \(f(a) - ka = 0\) . This concludes the proof.
Comment. There are other ways to obtain the divisibility \(p\mid f(p)\) for primes \(p\) , which is all that is needed in this proof. For instance, if \(f(p)\) were not divisible by \(p\) then the arithmetic progression \(p^{2} + b f(p)\) would attain prime values for infinitely many \(b\) by Dirichlet's Theorem: hence, for these pairs \(p, b\) , the divisibility \((*)\) with \(a = p\) would force \(p + f(b) = p^{2} + b f(p)\) . Substituting \(a \to b\) and \(b \to p\) in \((*)\) then shows that \((f(p)^{2} - p^{2})(p - 1)\) is divisible by \(b + f(p)\) and hence vanishes, which is impossible since \(p\nmid f(p)\) by assumption.
Solution 2. First, we substitute \(b = 1\) in \((\ast)\) and rearrange to find that
\[\frac{f(a) + f(1)^{2}}{a + f(1)} = f(1) - a + \frac{a^{2} + f(a)}{a + f(1)}\]
is a positive integer for sufficiently large \(a\) . Since \(f(a) \leqslant af(1)\) , for all sufficiently large \(a\) , it follows that \(\frac{f(a) + f(1)^{2}}{a + f(1)} \leqslant f(1)\) also and hence there is a positive integer \(k\) such that \(\frac{f(a) + f(1)^{2}}{a + f(1)} = k\) for infinitely many values of \(a\) . In other words,
\[f(a) = ka + f(1)\cdot (k - f(1))\]
for infinitely many \(a\) .
Fixing an arbitrary choice of \(a\) in \((\ast)\) , we have that
\[\frac{a^{2} + bf(a)}{a + kb + f(1)\cdot (k - f(1))}\]
is an integer for infinitely many \(b\) (namely, those infinitely many values for which \(f(b) = kb + f(1)\cdot (k - f(1))\) , possibly with finitely many exceptions). On the other hand, for \(b\) taken sufficiently large, this quantity becomes arbitrarily close to \(\frac{f(a)}{k}\) ; this is only possible if \(\frac{f(a)}{k}\) is an integer and
\[\frac{a^{2} + bf(a)}{a + kb + f(1)\cdot (k - f(1))} = \frac{f(a)}{k}\]
for infinitely many \(b\) . This rearranges to
\[\frac{f(a)}{k}\cdot \big(a + f(1)\cdot (k - f(1))\big) = a^{2}. \quad (**)\]
Hence \(a^{2}\) is divisible by \(a + f(1)\cdot (k - f(1))\) , and hence so is \(f(1)^{2}(k - f(1))^{2}\) . The only way this can occur for all \(a\) is if \(k = f(1)\) , in which case \((\ast \ast)\) provides that \(f(a) = ka\) for all \(a\) , as desired.
Solution 3. Fix any two distinct positive integers \(a\) and \(b\) . From \((\ast)\) it follows that the two integers
\[(a^{2} + cf(a))\cdot (b + f(c))\mathrm{~and~}(b^{2} + cf(b))\cdot (a + f(c))\]
are both multiples of \((a + f(c))\cdot (b + f(c))\) for all sufficiently large \(c\) . Taking an appropriate linear combination to eliminate the \(c f(c)\) term, we find after expanding out that the integer
\[\big[a^{2}f(b) - b^{2}f(a)\big]\cdot f(c) + \big[(b - a)f(a)f(b)\big]\cdot c + \big[ab(af(b) - bf(a))\big] \quad (†)\]
is also a multiple of \((a + f(c))\cdot (b + f(c))\)
But as \(c\) varies, \((\dagger)\) is bounded above by a positive multiple of \(c\) while \((a + f(c))\cdot (b + f(c))\) is bounded below by a positive multiple of \(c^{2}\) . The only way that such a divisibility can hold is if in fact
\[\big[a^{2}f(b) - b^{2}f(a)\big]\cdot f(c) + \big[(b - a)f(a)f(b)\big]\cdot c + \big[ab(af(b) - bf(a))\big] = 0 \quad (††)\]
for sufficiently large \(c\) . Since the coefficient of \(c\) in this linear relation is nonzero, it follows that there are constants \(k, \ell\) such that \(f(c) = kc + \ell\) for all sufficiently large \(c\) ; the constants \(k\) and \(\ell\) are necessarily integers.
The value of \(\ell\) satisfies
\[\big[a^{2}f(b) - b^{2}f(a)\big]\cdot \ell +\big[ab(af(b) - bf(a))\big] = 0 \quad (†††)\]
and hence \(b\mid \ell a^{2}f(b)\) for all \(a\) and \(b\) . Taking \(b\) sufficiently large so that \(f(b) = kb + \ell\) , we thus have that \(b\mid \ell^{2}a^{2}\) for all sufficiently large \(b\) ; this implies that \(\ell = 0\) . From \((\dagger \dagger \dagger)\) it then follows that \(\frac{f(a)}{a} = \frac{f(b)}{b}\) for all \(a\neq b\) , so that there is a constant \(k\) such that \(f(a) = ka\) for all \(a\) ( \(k\) is equal to the constant defined earlier).
Solution 4. Let \(\Gamma\) denote the set of all points \((a,f(a))\) , so that \(\Gamma\) is an infinite subset of the upper- right quadrant of the plane. For a point \(A = (a,f(a))\) in \(\Gamma\) , we define a point \(A' = (- f(a), - f(a)^2 /a)\) in the lower- left quadrant of the plane, and let \(\Gamma '\) denote the set of all such points \(A'\) .

Claim. For any point \(A\in \Gamma\) , the set \(\Gamma\) is contained in finitely many lines through the point \(A^{\prime}\) .

Proof. Let \(A = (a,f(a))\) . The functional equation (with \(a\) and \(b\) interchanged) can be rewritten as \(b + f(a)\mid a f(b) - b f(a)\) , so that all but finitely many points in \(\Gamma\) are contained in one of the lines with equation
\[a y - f(a)x = m(x + f(a))\]
for \(m\) an integer. Geometrically, these are the lines through \(A^{\prime} = (- f(a), - f(a)^{2} / a)\) with gradient \(\frac{f(a) + m}{a}\) . Since \(\Gamma\) is contained, with finitely many exceptions, in the region \(0\leqslant y\leqslant\) \(f(1)\cdot x\) and the point \(A^{\prime}\) lies strictly in the lower- left quadrant of the plane, there are only finitely many values of \(m\) for which this line meets \(\Gamma\) . This concludes the proof of the claim.
Now consider any distinct points \(A,B\in \Gamma\) . It is clear that \(A^{\prime}\) and \(B^{\prime}\) are distinct. A line through \(A^{\prime}\) and a line through \(B^{\prime}\) only meet in more than one point if these two lines are equal to the line \(A^{\prime}B^{\prime}\) . It then follows from the above claim that the line \(A^{\prime}B^{\prime}\) must contain all but finitely many points of \(\Gamma\) . If \(C\) is another point of \(\Gamma\) , then the line \(A^{\prime}C^{\prime}\) also passes through all but finitely many points of \(\Gamma\) , which is only possible if \(A^{\prime}C^{\prime} = A^{\prime}B^{\prime}\) .
We have thus seen that there is a line \(\ell\) passing through all points of \(\Gamma^{\prime}\) and through all but finitely many points of \(\Gamma\) . We claim that this line passes through the origin \(O\) and passes through every point of \(\Gamma\) . To see this, note that by construction \(A,O,A^{\prime}\) are collinear for every point \(A\in \Gamma\) . Since \(\ell = AA^{\prime}\) for all but finitely many points \(A\in \Gamma\) , it thus follows that \(O\in \ell\) . Thus any \(A\in \Gamma\) lies on the line \(\ell = A^{\prime}O\) .
Since \(\Gamma\) is contained in a line through \(O\) , it follows that there is a real constant \(k\) (the gradient of \(\ell\) ) such that \(f(a) = ka\) for all \(a\) . The number \(k\) is, of course, a positive integer.
Comment. Without the \(a + b > C\) condition, this problem is approachable by much more naive methods. For instance, using the given divisibility for \(a,b\in \{1,2,3\}\) one can prove by a somewhat tedious case- check that \(f(2) = 2f(1)\) and \(f(3) = 3f(1)\) ; this then forms the basis of an induction establishing that \(f(n) = nf(1)\) for all \(n\) .
|
IMOSL-2019-N5
|
Let \(a\) be a positive integer. We say that a positive integer \(b\) is \(a\) - good if \(\binom{a n}{b} - 1\) is divisible by \(a n + 1\) for all positive integers \(n\) with \(a n\geqslant b\) . Suppose \(b\) is a positive integer such that \(b\) is \(a\) - good, but \(b + 2\) is not \(a\) - good. Prove that \(b + 1\) is prime.
|
Solution 1. For \(p\) a prime and \(n\) a nonzero integer, we write \(v_{p}(n)\) for the \(p\) - adic valuation of \(n\) : the largest integer \(t\) such that \(p^{t}\mid n\) .
We first show that \(b\) is \(a\) - good if and only if \(b\) is even, and \(p\mid a\) for all primes \(p\leqslant b\)
To start with, the condition that \(a n + 1\mid \binom{a n}{b} - 1\) can be rewritten as saying that
\[\frac{a n(a n - 1)\cdot\cdot\cdot(a n - b + 1)}{b!}\equiv 1\pmod {a n + 1}. \quad (1)\]
Suppose, on the one hand, there is a prime \(p\leqslant b\) with \(p\nmid a\) . Take \(t = v_{p}(b!)\) . Then there exist positive integers \(c\) such that \(a c\equiv 1\) (mod \(p^{t + 1}\) ). If we take \(c\) big enough, and then take \(n = (p - 1)c\) , then \(a n = a(p - 1)c\equiv p - 1\) (mod \(p^{t + 1}\) ) and \(a n\geqslant b\) . Since \(p\leqslant b\) , one of the terms of the numerator \(a n(a n - 1)\cdot \cdot \cdot (a n - b + 1)\) is \(a n - p + 1\) , which is divisible by \(p^{t + 1}\) . Hence the \(p\) - adic valuation of the numerator is at least \(t + 1\) , but that of the denominator is exactly \(t\) . This means that \(p\mid \binom{a n}{b}\) , so \(p\nmid \binom{a n}{b} - 1\) . As \(p\mid a n + 1\) , we get that \(a n + 1\nmid \binom{a n}{b} - 1\) , so \(b\) is not \(a\) - good.
On the other hand, if for all primes \(p\leqslant b\) we have \(p\mid a\) , then every factor of \(b!\) is coprime to \(a n + 1\) , and hence invertible modulo \(a n + 1\) : hence \(b!\) is also invertible modulo \(a n + 1\) . Then equation (1) reduces to:
\[a n(a n - 1)\cdot \cdot \cdot (a n - b + 1)\equiv b!\pmod {a n + 1}.\]
However, we can rewrite the left- hand side as follows:
\[a n(a n - 1)\cdot \cdot \cdot (a n - b + 1)\equiv (-1)(-2)\cdot \cdot \cdot (-b)\equiv (-1)^{b}b!\pmod {a n + 1}.\]
Provided that \(a n > 1\) , if \(b\) is even we deduce \((- 1)^{b}b! \equiv b!\) as needed. On the other hand, if \(b\) is odd, and we take \(a n + 1 > 2(b!)\) , then we will not have \((- 1)^{b}b! \equiv b!\) , so \(b\) is not \(a\) - good. This completes the claim.
To conclude from here, suppose that \(b\) is \(a\) - good, but \(b + 2\) is not. Then \(b\) is even, and \(p\mid a\) for all primes \(p\leqslant b\) , but there is a prime \(q\leqslant b + 2\) for which \(q\nmid a\) : so \(q = b + 1\) or \(q = b + 2\) . We cannot have \(q = b + 2\) , as that is even too, so we have \(q = b + 1\) : in other words, \(b + 1\) is prime.
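The characterisation just proved ( \(b\) is \(a\)-good if and only if \(b\) is even and every prime \(p \leqslant b\) divides \(a\) ) can be partially checked by computer. Since \(a\)-goodness quantifies over all \(n\), the sketch below (all names ours) only gathers finite evidence, with an arbitrary cutoff `n_max`: the forward implication is tested exhaustively on a small grid, plus a few non-examples that happen to have small witnesses.

```python
def is_prime(q):
    return q >= 2 and all(q % d for d in range(2, int(q**0.5) + 1))

def binom(n, k):
    """Exact binomial coefficient via the standard running product."""
    out = 1
    for i in range(k):
        out = out * (n - i) // (i + 1)
    return out

def looks_a_good(a, b, n_max=60):
    """Finite evidence only: test the divisibility for 1 <= n <= n_max."""
    return all(binom(a * n, b) % (a * n + 1) == 1
               for n in range(1, n_max + 1) if a * n >= b)

def predicted(a, b):
    """The characterisation: b even and every prime p <= b divides a."""
    return b % 2 == 0 and all(a % p == 0 for p in range(2, b + 1) if is_prime(p))

# the 'if' direction of the characterisation must pass every finite check
for a in range(1, 13):
    for b in range(1, 9):
        if predicted(a, b):
            assert looks_a_good(a, b)

# a few pairs the characterisation rejects, each with a small witness n
assert not looks_a_good(1, 2)   # 2 does not divide a = 1
assert not looks_a_good(2, 3)   # b is odd
assert not looks_a_good(6, 6)   # 5 does not divide a = 6
```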
Solution 2. We show only half of the claim of the previous solution: we show that if \(b\) is \(a\) - good, then \(p\mid a\) for all primes \(p\leqslant b\) . We do this with Lucas' theorem.
Suppose that we have \(p\leqslant b\) with \(p\nmid a\) . Then consider the expansion of \(b\) in base \(p\) ; there will be some digit (not the final digit) which is nonzero, because \(p\leqslant b\) . Suppose it is the \(p^{t}\) digit for \(t\geqslant 1\) .
Now, as \(n\) varies over the integers, \(a n + 1\) runs over all residue classes modulo \(p^{t + 1}\) ; in particular, there is a choice of \(n\) (with \(a n > b\) ) such that the \(p^{0}\) digit of \(a n\) is \(p - 1\) (so \(p\mid a n + 1\) ) and the \(p^{t}\) digit of \(a n\) is 0. Consequently, \(p\mid a n + 1\) but \(p\mid \binom{a n}{b}\) (by Lucas' theorem) so \(p\nmid \binom{a n}{b} - 1\) . Thus \(b\) is not \(a\) - good.
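A minimal implementation of Lucas' theorem, checked against direct computation, illustrates the step above (the helper name is ours):

```python
from math import comb

def lucas_binom_mod_p(m, r, p):
    """C(m, r) mod p for prime p: multiply C(m_i, r_i) over the base-p digits."""
    result = 1
    while m or r:
        m, md = divmod(m, p)
        r, rd = divmod(r, p)
        result = result * comb(md, rd) % p   # comb(md, rd) = 0 whenever rd > md
    return result

# Sanity check against direct computation on small values.
for p in (3, 5, 7):
    for m in range(40):
        for r in range(m + 1):
            assert lucas_binom_mod_p(m, r, p) == comb(m, r) % p
```

In particular, whenever some base-\(p\) digit of \(r\) exceeds the corresponding digit of \(m\), one factor \(\binom{m_i}{r_i}\) vanishes and \(p \mid \binom{m}{r}\), which is exactly the divisibility used above.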
Now we show directly that if \(b\) is \(a\) - good but \(b + 2\) fails to be so, then there must be a prime dividing \(a n + 1\) for some \(n\) , which also divides \((b + 1)(b + 2)\) . Indeed, the ratio between \(\binom{a n}{b + 2}\) and \(\binom{a n}{b}\) is \((b + 1)(b + 2) / (a n - b)(a n - b - 1)\) . We know that there must be a choice of \(a n + 1\) such that the former binomial coefficient is 1 modulo \(a n + 1\) but the latter is not, which means that the given ratio must not be 1 mod \(a n + 1\) . If \(b + 1\) and \(b + 2\) are both coprime to \(a n + 1\) then
the ratio is 1, so that must not be the case. In particular, as any prime less than \(b\) divides \(a\) , it must be the case that either \(b + 1\) or \(b + 2\) is prime.
However, we can observe that \(b\) must be even: insisting that \(an + 1\) is prime (which is possible by Dirichlet's theorem) gives \(\binom{an}{b} \equiv (-1)^{b} \pmod{an+1}\), and for \(b\) to be \(a\)-good this must be \(1\), so \((-1)^{b} = 1\). Thus \(b + 2\) cannot be prime, so \(b + 1\) must be prime.
|
IMOSL-2019-N6
|
Let \(H = \left\{\lfloor i\sqrt{2}\rfloor : i\in \mathbb{Z}_{>0}\right\} = \{1,2,4,5,7,\ldots \}\), and let \(n\) be a positive integer. Prove that there exists a constant \(C\) such that, if \(A\subset \{1,2,\ldots ,n\}\) satisfies \(|A|\geqslant C\sqrt{n}\), then there exist \(a,b\in A\) such that \(a - b\in H\). (Here \(\mathbb{Z}_{>0}\) is the set of positive integers, and \(\lfloor z\rfloor\) denotes the greatest integer less than or equal to \(z\).)
|
Common remarks. In all solutions, we will assume that \(A\) is a set such that \(\{a - b:a,b\in A\}\) is disjoint from \(H\) , and prove that \(|A|< C\sqrt{n}\) .
Solution 1. First, observe that if \(n\) is a positive integer, then \(n\in H\) exactly when
\[\left\{\frac{n}{\sqrt{2}}\right\} >1 - \frac{1}{\sqrt{2}}. \quad (1)\]
To see why, observe that \(n\in H\) if and only if \(0< i\sqrt{2} - n< 1\) for some \(i\in \mathbb{Z}_{>0}\) . In other words, \(0< i - n / \sqrt{2} < 1 / \sqrt{2}\) , which is equivalent to (1).
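The equivalence (1) can be verified exactly in integer arithmetic: \(\lfloor i\sqrt{2}\rfloor = \lfloor\sqrt{2i^{2}}\rfloor\), and \(\{n/\sqrt{2}\} > 1 - 1/\sqrt{2}\) is equivalent to \((n+1)^{2} > 2(m+1)^{2}\) with \(m = \lfloor n/\sqrt{2}\rfloor\). A sketch (helper names are ours):

```python
from math import isqrt

def in_H(n):
    # n ∈ H iff n = ⌊i√2⌋ = isqrt(2i²) for some i ≥ 1; only indices near n/√2 can work.
    i = isqrt(n * n // 2) + 1
    return any(isqrt(2 * j * j) == n for j in (i - 1, i, i + 1) if j >= 1)

def frac_condition(n):
    # Exact form of {n/√2} > 1 - 1/√2, i.e. (n+1)² > 2(m+1)² with m = ⌊n/√2⌋.
    m = isqrt(2 * n * n) // 2
    return (n + 1) ** 2 > 2 * (m + 1) ** 2

assert all(in_H(n) == frac_condition(n) for n in range(1, 2000))
```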
Now, write \(A = \{a_{1}< a_{2}< \dots < a_{k}\}\) , where \(k = |A|\) . Observe that the set of differences is not altered by shifting \(A\) , so we may assume that \(A\subseteq \{0,1,\ldots ,n - 1\}\) with \(a_{1} = 0\) .
From (1), we learn that \(\{a_{i} / \sqrt{2}\} < 1 - 1 / \sqrt{2}\) for each \(i > 1\) since \(a_{i} - a_{1}\notin H\) . Furthermore, we must have \(\{a_{i} / \sqrt{2}\} < \{a_{j} / \sqrt{2}\}\) whenever \(i< j\) ; otherwise, we would have
\[-\left(1 - \frac{1}{\sqrt{2}}\right)< \left\{\frac{a_{j}}{\sqrt{2}}\right\} -\left\{\frac{a_{i}}{\sqrt{2}}\right\} < 0.\]
Since \(\{(a_{j} - a_{i}) / \sqrt{2}\} = \{a_{j} / \sqrt{2}\} - \{a_{i} / \sqrt{2}\} +1\) , this implies that \(\{(a_{j} - a_{i}) / \sqrt{2}\} >1 / \sqrt{2} >\) \(1 - 1 / \sqrt{2}\) , contradicting (1).
Now, we have a sequence \(0 = a_{1}< a_{2}< \dots < a_{k}< n\) , with
\[0 = \left\{\frac{a_{1}}{\sqrt{2}}\right\} < \left\{\frac{a_{2}}{\sqrt{2}}\right\} < \dots < \left\{\frac{a_{k}}{\sqrt{2}}\right\} < 1 - \frac{1}{\sqrt{2}}.\]
We use the following fact: for any \(d\in \mathbb{Z}\) , we have
\[\left\{\frac{d}{\sqrt{2}}\right\} >\frac{1}{2d\sqrt{2}}. \quad (2)\]
To see why this is the case, let \(h = \lfloor d/\sqrt{2}\rfloor\), so \(\{d/\sqrt{2}\} = d/\sqrt{2} - h\). Then
\[\left\{\frac{d}{\sqrt{2}}\right\} \left(\frac{d}{\sqrt{2}} +h\right) = \frac{d^{2} - 2h^{2}}{2}\geqslant \frac{1}{2},\]
since the numerator is a positive integer. Because \(d / \sqrt{2} +h< 2d / \sqrt{2}\) , inequality (2) follows.
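Inequality (2) can also be checked exactly: multiplying \(\{d/\sqrt{2}\} > 1/(2d\sqrt{2})\) through by \(2d\sqrt{2}\) gives \(2d^{2} - 1 > 2dh\sqrt{2}\), i.e. \((2d^{2}-1)^{2} > 8d^{2}h^{2}\), a purely integer statement:

```python
from math import isqrt

for d in range(1, 20000):
    h = isqrt(2 * d * d) // 2                        # h = ⌊d/√2⌋
    # (2d² - 1)² > 8 d² h²  is the exact form of {d/√2} > 1/(2d√2)
    assert (2 * d * d - 1) ** 2 > 8 * d * d * h * h
```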
Let \(d_{i} = a_{i + 1} - a_{i}\) , for \(1\leqslant i< k\) . Then \(\{a_{i + 1} / \sqrt{2}\} - \{a_{i} / \sqrt{2}\} = \{d_{i} / \sqrt{2}\}\) , and we have
\[1 - \frac{1}{\sqrt{2}} >\sum_{i}\left\{\frac{d_{i}}{\sqrt{2}}\right\} >\frac{1}{2\sqrt{2}}\sum_{i}\frac{1}{d_{i}}\geqslant \frac{(k - 1)^{2}}{2\sqrt{2}}\frac{1}{\sum_{i}d_{i}} >\frac{(k - 1)^{2}}{2\sqrt{2}}\cdot \frac{1}{n}. \quad (3)\]
Here, the first inequality holds because \(\{a_{k} / \sqrt{2}\} < 1 - 1 / \sqrt{2}\) , the second follows from (2), the third follows from an easy application of the AM- HM inequality (or Cauchy- Schwarz), and the fourth follows from the fact that \(\sum_{i}d_{i} = a_{k}< n\) .
Rearranging this, we obtain
\[\sqrt{2\sqrt{2} - 2}\cdot \sqrt{n} >k - 1,\]
which provides the required bound on \(k\) .
Solution 2. Let \(\alpha = 2 + \sqrt{2}\), so \((1/\alpha) + (1/\sqrt{2}) = 1\). Thus, \(J = \left\{\lfloor i\alpha\rfloor : i\in \mathbb{Z}_{>0}\right\}\) is the complementary Beatty sequence to \(H\) (in other words, \(H\) and \(J\) are disjoint with \(H\cup J = \mathbb{Z}_{>0}\)). Write \(A = \{a_{1}< a_{2}< \dots < a_{k}\}\). Suppose that \(A\) has no differences in \(H\), so all its differences are in \(J\) and we can set \(a_{i} - a_{1} = \lfloor \alpha b_{i}\rfloor\) for some \(b_{i}\in \mathbb{Z}_{>0}\) when \(i\geqslant 2\) (and we put \(b_{1} = 0\)).
For any \(j > i\) , we have \(a_{j} - a_{i} = \lfloor \alpha b_{j}\rfloor - \lfloor \alpha b_{i}\rfloor\) . Because \(a_{j} - a_{i}\in J\) , we also have \(a_{j} - a_{i} = \lfloor \alpha t\rfloor\) for some positive integer \(t\) . Thus, \(\lfloor \alpha t\rfloor = \lfloor \alpha b_{j}\rfloor - \lfloor \alpha b_{i}\rfloor\) . The right hand side must equal either \(\lfloor \alpha (b_{j} - b_{i})\rfloor\) or \(\lfloor \alpha (b_{j} - b_{i})\rfloor - 1\) , the latter of which is not a member of \(J\) as \(\alpha >2\) . Therefore, \(t = b_{j} - b_{i}\) and so we have \(\lfloor \alpha b_{j}\rfloor - \lfloor \alpha b_{i}\rfloor = \lfloor \alpha (b_{j} - b_{i})\rfloor\) .
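The complementarity of \(H\) and \(J\) can be spot-checked exactly, using \(\lfloor i\alpha\rfloor = 2i + \lfloor i\sqrt{2}\rfloor\):

```python
from math import isqrt

limit = 5000
H = {isqrt(2 * i * i) for i in range(1, limit)}          # ⌊i√2⌋
J = {2 * i + isqrt(2 * i * i) for i in range(1, limit)}  # ⌊i(2+√2)⌋
# Every positive integer in a safe range lies in exactly one of H and J.
assert all((n in H) != (n in J) for n in range(1, 2000))
```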
For \(1\leqslant i< k\) we now put \(d_{i} = b_{i + 1} - b_{i}\) , and we have
\[\left\lfloor \alpha \sum_{i}d_{i}\right\rfloor = \left\lfloor \alpha b_{k}\right\rfloor = \sum_{i}\left\lfloor \alpha d_{i}\right\rfloor;\]
that is, \(\textstyle \sum_{i}\{\alpha d_{i}\} < 1\) . We also have
\[1 + \left\lfloor \alpha \sum_{i}d_{i}\right\rfloor = 1 + a_{k} - a_{1}\leqslant a_{k}\leqslant n,\]
so \(\textstyle \sum_{i}d_{i}\leqslant n / \alpha\).
With the above inequalities, an argument similar to (3) (which uses the fact that \(\{\alpha d\} =\) \(\{d\sqrt{2}\} >1 / (2d\sqrt{2})\) for positive integers \(d\) ) proves that \(1 > \left((k - 1)^{2} / (2\sqrt{2})\right)(\alpha /n)\) , which again rearranges to give
\[\sqrt{2\sqrt{2} - 2}\cdot \sqrt{n} >k - 1.\]
Comment. The use of Beatty sequences in Solution 2 is essentially a way to bypass (1). Both Solutions 1 and 2 use the fact that \(\sqrt{2} < 2\) ; the statement in the question would still be true if \(\sqrt{2}\) did not have this property (for instance, if it were replaced with \(\alpha\) ), but any argument along the lines of Solutions 1 or 2 would be more complicated.
Solution 3. Again, define \(J = \mathbb{Z}_{>0}\backslash H\) , so all differences between elements of \(A\) are in \(J\) . We start by making the following observation. Suppose we have a set \(B\subseteq \{1,2,\ldots ,n\}\) such that all of the differences between elements of \(B\) are in \(H\) . Then \(|A|\cdot |B|\leqslant 2n\) .
To see why, observe that any two sums of the form \(a + b\) with \(a\in A,b\in B\) are different; otherwise, we would have \(a_{1} + b_{1} = a_{2} + b_{2}\) , and so \(|a_{1} - a_{2}| = |b_{2} - b_{1}|\) . However, then the left hand side is in \(J\) whereas the right hand side is in \(H\) . Thus, \(\{a + b:a\in A,b\in B\}\) is a set of size \(|A|\cdot |B|\) all of whose elements are no greater than \(2n\) , yielding the claimed inequality.
With this in mind, it suffices to construct a set \(B\) , all of whose differences are in \(H\) and whose size is at least \(C^{\prime}\sqrt{n}\) for some constant \(C^{\prime} > 0\) .
To do so, we will use well- known facts about the negative Pell equation \(X^{2} - 2Y^{2} = - 1\) ; in particular, that there are infinitely many solutions and the values of \(X\) are given by the recurrence \(X_{1} = 1,X_{2} = 7\) and \(X_{m} = 6X_{m - 1} - X_{m - 2}\) . Therefore, we may choose \(X\) to be a solution with \(\sqrt{n} /6< X\leqslant \sqrt{n}\) .
Now, we claim that we may choose \(B = \{X,2X,\ldots ,\lfloor \sqrt{n}/3\rfloor X\}\). Indeed, we have
\[\left(\frac{X}{\sqrt{2}} -Y\right)\left(\frac{X}{\sqrt{2}} +Y\right) = \frac{-1}{2}\]
and so
\[0 > \left(\frac{X}{\sqrt{2}} -Y\right)\geqslant \frac{-3}{\sqrt{2n}},\]
from which it follows that \(\{X / \sqrt{2}\} >1 - (3 / \sqrt{2n})\) . Combined with (1), this shows that all differences between elements of \(B\) are in \(H\) .
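A concrete check of this construction, on the hypothetical instance \(n = 1600\), \(X = 7\), \(Y = 5\) (so that \(\sqrt{n}/6 < X \leqslant \sqrt{n}\)); membership in \(H\) is tested exactly via \(n = \lfloor i\sqrt{2}\rfloor\):

```python
from math import isqrt

def in_H(n):
    # Exact test: n ∈ H iff n = ⌊i√2⌋ = isqrt(2i²) for some i ≥ 1.
    i = isqrt(n * n // 2) + 1
    return any(isqrt(2 * j * j) == n for j in (i - 1, i, i + 1) if j >= 1)

n, X = 1600, 7                            # 7² - 2·5² = -1 and √n/6 < 7 ≤ √n
B = [m * X for m in range(1, isqrt(n) // 3 + 1)]
diffs = {b2 - b1 for b1 in B for b2 in B if b2 > b1}
assert B and all(in_H(d) for d in diffs)  # all differences of B lie in H
```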
Comment. Some of the ideas behind Solution 3 may be used to prove that the constant \(C = \sqrt{2\sqrt{2} - 2}\) (from Solutions 1 and 2) is optimal, in the sense that there are arbitrarily large values of \(n\) and sets \(A_{n} \subseteq \{1, 2, \ldots , n\}\) of size roughly \(C\sqrt{n}\) , all of whose differences are contained in \(J\) .
To see why, choose \(X\) to come from a sufficiently large solution to the Pell equation \(X^{2} - 2Y^{2} = 1\), so \(\{X / \sqrt{2}\} \approx 1 / (2X\sqrt{2})\). In particular, \(\{X/\sqrt{2}\}, \{2X/\sqrt{2}\}, \ldots, \{\lfloor 2X\sqrt{2}(1 - 1/\sqrt{2})\rfloor X/\sqrt{2}\}\) are all less than \(1 - 1/\sqrt{2}\). Thus, by (1) any positive integer of the form \(iX\) for \(1\leqslant i\leqslant \lfloor 2X\sqrt{2}(1 - 1/\sqrt{2})\rfloor\) lies in \(J\).
Set \(n\approx 2X^{2}\sqrt{2}(1 - 1/\sqrt{2})\). We now have a set \(A = \{iX : i\leqslant \lfloor 2X\sqrt{2}(1 - 1/\sqrt{2})\rfloor\}\) containing roughly \(2X\sqrt{2}(1 - 1/\sqrt{2})\) elements less than or equal to \(n\) such that all of the differences lie in \(J\), and we can see that \(|A|\approx C\sqrt{n}\) with \(C = \sqrt{2\sqrt{2} - 2}\).
Solution 4. As in Solution 3, we will provide a construction of a large set \(B\subseteq \{1,2,\ldots ,n\}\) all of whose differences are in \(H\) .
Choose \(Y\) to be a solution to the Pell- like equation \(X^{2} - 2Y^{2} = \pm 1\) ; such solutions are given by the recurrence \(Y_{1} = 1,Y_{2} = 2\) and \(Y_{m} = 2Y_{m - 1} + Y_{m - 2}\) , and so we can choose \(Y\) such that \(n / (3\sqrt{2})< Y\leqslant n / \sqrt{2}\) . Furthermore, it is known that for such a \(Y\) and for \(1\leqslant x< Y\) ,
\[\{x\sqrt{2}\} +\{(Y - x)\sqrt{2}\} = \{Y\sqrt{2}\} \quad (4)\]
if \(X^{2} - 2Y^{2} = 1\) , and
\[\{x\sqrt{2}\} +\{(Y - x)\sqrt{2}\} = 1 + \{Y\sqrt{2}\} \quad (5)\]
if \(X^{2} - 2Y^{2} = - 1\). Indeed, this is a statement of the fact that \(X/Y\) is a best rational approximation to \(\sqrt{2}\), from above in the first case and from below in the second.
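Since each fractional part is \(y\sqrt{2} - \lfloor y\sqrt{2}\rfloor\), the identities (4) and (5) amount to \(\lfloor x\sqrt{2}\rfloor + \lfloor (Y-x)\sqrt{2}\rfloor\) equalling \(\lfloor Y\sqrt{2}\rfloor\) or \(\lfloor Y\sqrt{2}\rfloor - 1\) respectively, which can be checked exactly:

```python
from math import isqrt

def fl(k):
    return isqrt(2 * k * k)              # ⌊k√2⌋

# (X, Y) for a few solutions of X² - 2Y² = ±1.
for X, Y, sign in ((3, 2, 1), (17, 12, 1), (7, 5, -1), (41, 29, -1)):
    assert X * X - 2 * Y * Y == sign
    target = fl(Y) if sign == 1 else fl(Y) - 1
    for x in range(1, Y):
        assert fl(x) + fl(Y - x) == target
```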
Now, consider the sequence \(\{\sqrt{2}\} ,\{2\sqrt{2}\} ,\ldots ,\{(Y - 1)\sqrt{2}\}\) . The Erdős- Szekeres theorem tells us that this sequence has a monotone subsequence with at least \(\sqrt{Y - 2} +1 > \sqrt{Y}\) elements; if that subsequence is decreasing, we may reflect (using (4) or (5)) to ensure that it is increasing. Call the subsequence \(\{y_{1}\sqrt{2}\}\) , \(\{y_{2}\sqrt{2}\}\) , ..., \(\{y_{t}\sqrt{2}\}\) for \(t > \sqrt{Y}\) .
Now, set \(B = \{\lfloor y_{i}\sqrt{2}\rfloor : 1\leqslant i\leqslant t\}\). We have \(\lfloor y_{j}\sqrt{2}\rfloor - \lfloor y_{i}\sqrt{2}\rfloor = \lfloor (y_{j} - y_{i})\sqrt{2}\rfloor\) for \(i< j\) (because the corresponding inequality for the fractional parts holds by the ordering assumption on the \(\{y_{i}\sqrt{2}\}\)), which means that all differences between elements of \(B\) are indeed in \(H\). Since \(|B| > \sqrt{Y} > \sqrt{n} /\sqrt{3\sqrt{2}}\), this is the required set.
Comment. Any solution to this problem will need to use the fact that \(\sqrt{2}\) cannot be approximated well by rationals, either directly or implicitly (for example, by using facts about solutions to Pell-like equations). If \(\sqrt{2}\) were replaced by a value of \(\theta\) with very good rational approximations (from below), then an argument along the lines of Solution 3 would give long arithmetic progressions in \(\{\lfloor i\theta\rfloor : 0\leqslant i< n\}\) (with initial term 0) for certain values of \(n\).
|
IMOSL-2019-N7
|
Prove that there is a constant \(c > 0\) and infinitely many positive integers \(n\) with the following property: there are infinitely many positive integers that cannot be expressed as the sum of fewer than \(c n \log (n)\) pairwise coprime \(n^{\mathrm{th}}\) powers.
|
Solution 1. Suppose, for an integer \(n\) , that we can find another integer \(N\) satisfying the following property:
\(n\) is divisible by \(\phi (p^{e})\) for every prime power \(p^{e}\) exactly dividing \(N\). \((\dagger)\)
This property ensures that all \(n^{\mathrm{th}}\) powers are congruent to 0 or 1 modulo each such prime power \(p^{e}\) , and hence that any sum of \(m\) pairwise coprime \(n^{\mathrm{th}}\) powers is congruent to \(m\) or \(m - 1\) modulo \(p^{e}\) , since at most one of the \(n^{\mathrm{th}}\) powers is divisible by \(p\) . Thus, if \(k\) denotes the number of distinct prime factors of \(N\) , we find by the Chinese Remainder Theorem at most \(2^{k} m\) residue classes modulo \(N\) which are sums of at most \(m\) pairwise coprime \(n^{\mathrm{th}}\) powers. In particular, if \(N > 2^{k} m\) then there are infinitely many positive integers not expressible as a sum of at most \(m\) pairwise coprime \(n^{\mathrm{th}}\) powers.
It thus suffices to prove that there are arbitrarily large pairs \((n, N)\) of integers satisfying \((\dagger)\) such that
\[N > c\cdot 2^{k}n\log (n)\]
for some positive constant \(c\) .
We construct such pairs as follows. Fix a positive integer \(t\) and choose (distinct) prime numbers \(p \mid 2^{2^{t - 1}} + 1\) and \(q \mid 2^{2^{t}} + 1\) ; we set \(N = pq\) . It is well- known that \(2^{t} \mid p - 1\) and \(2^{t + 1} \mid q - 1\) , hence
\[n = \frac{(p - 1)(q - 1)}{2^{t}}\]
is an integer and the pair \((n, N)\) satisfies \((\dagger)\) .
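For \(t = 1\) the construction gives a tiny concrete pair: \(p = 3\) divides \(2^{2^{0}} + 1\), \(q = 5\) divides \(2^{2^{1}} + 1\), so \(N = 15\) and \(n = (p-1)(q-1)/2 = 4\), and \((\dagger)\) together with its consequence for \(n^{\mathrm{th}}\) powers can be verified directly:

```python
p, q, t = 3, 5, 1
N, n = p * q, (p - 1) * (q - 1) // 2**t
assert (N, n) == (15, 4)
assert n % (p - 1) == 0 and n % (q - 1) == 0       # condition (†) for N = pq
# Hence every n-th power is 0 or 1 modulo each prime factor of N.
assert all(pow(x, n, p) in (0, 1) for x in range(p))
assert all(pow(x, n, q) in (0, 1) for x in range(q))
```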
Estimating the size of \(N\) and \(n\) is now straightforward. We have
\[\log_{2}(n) \leqslant 2^{t - 1} + 2^{t} - t < 2^{t + 1} < 2\cdot \frac{N}{n},\]
which rearranges to
\[N > \frac{1}{8}\cdot 2^{2}n\log_{2}(n)\]
and so we are done if we choose \(c < \frac{1}{8\log(2)} \approx 0.18\) .
Comment 1. The trick in the above solution was to find prime numbers \(p\) and \(q\) congruent to 1 modulo some \(d = 2^{t}\) and which are not too large. An alternative way to do this is via Linnik's Theorem, which says that there are absolute constants \(b\) and \(L > 1\) such that for any coprime integers \(a\) and \(d\) , there is a prime congruent to \(a\) modulo \(d\) and of size \(\leqslant b d^{L}\) . If we choose some \(d\) not divisible by 3 and choose two distinct primes \(p, q \leqslant b \cdot (3d)^{L}\) congruent to 1 modulo \(d\) (and, say, distinct modulo 3), then we obtain a pair \((n, N)\) satisfying \((\dagger)\) with \(N = pq\) and \(n = \frac{(p - 1)(q - 1)}{d}\) . A straightforward computation shows that
\[N > C n^{1 + \frac{1}{2L - 1}}\]
for some constant \(C\), which is in particular larger than \(c \cdot 2^{2} n \log (n)\) for \(n\) large. Thus, the statement of the problem is true for any constant \(c\). More strongly, the statement of the problem is still true when \(c n \log (n)\) is replaced by \(n^{1 + \delta}\) for a sufficiently small \(\delta > 0\).
Solution 2, obtaining better bounds. As in the preceding solution, we seek arbitrarily large pairs of integers \(n\) and \(N\) satisfying \((\dagger)\) such that \(N > c2^{k}n\log (n)\) .
This time, to construct such pairs, we fix an integer \(x\geqslant 4\), set \(N\) to be the lowest common multiple of \(1,2,\ldots ,2x\), and set \(n\) to be twice the lowest common multiple of \(1,2,\ldots ,x\). The pair \((n,N)\) does indeed satisfy \((\dagger)\), since if \(p^{e}\) is a prime power divisor of \(N\) then \(\frac{\phi(p^{e})}{2}\leqslant x\) is a factor of \(\frac{n}{2} = \operatorname{lcm}_{r\leqslant x}(r)\).
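For example, with \(x = 4\) this gives \(N = \operatorname{lcm}(1,\ldots,8) = 840\) and \(n = 2\operatorname{lcm}(1,\ldots,4) = 24\), and \((\dagger)\) can be checked directly (helper name is ours):

```python
from math import gcd

def lcm_range(m):
    L = 1
    for r in range(1, m + 1):
        L = L * r // gcd(L, r)
    return L

x = 4
N, n = lcm_range(2 * x), 2 * lcm_range(x)
assert (N, n) == (840, 24)

m = N
for p in (2, 3, 5, 7):                    # the primes dividing 840 = 2³·3·5·7
    e = 0
    while m % p == 0:
        m, e = m // p, e + 1
    phi = p ** (e - 1) * (p - 1)          # φ(p^e)
    assert e > 0 and n % phi == 0         # (†): φ(p^e) divides n
assert m == 1
```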
Now \(2N / n\) is the product of all primes having a power lying in the interval \((x,2x]\) , and hence \(2N / n > x^{\pi (2x) - \pi (x)}\) . Thus for sufficiently large \(x\) we have
\[\log \left(\frac{2N}{2^{\pi (2x)}n}\right) > (\pi (2x) - \pi (x))\log (x) - \log (2)\pi (2x)\sim x,\]
using the Prime Number Theorem \(\pi (t)\sim t / \log (t)\).
On the other hand, \(n\) is a product of at most \(\pi (x)\) prime powers less than or equal to \(x\) , and so we have the upper bound
\[\log (n)\leqslant \pi (x)\log (x)\sim x,\]
again by the Prime Number Theorem. Combined with the above inequality, we find that for any \(\epsilon >0\) , the inequality
\[\log \left(\frac{N}{2^{\pi (2x)}n}\right) > (1 - \epsilon)\log (n)\]
holds for sufficiently large \(x\) . Rearranging this shows that
\[N > 2^{\pi (2x)}n^{2 - \epsilon} > 2^{\pi (2x)}n\log (n)\]
for all sufficiently large \(x\) and we are done.
Comment 2. The stronger bound \(N > 2^{\pi (2x)}n^{2 - \epsilon}\) obtained in the above proof of course shows that infinitely many positive integers cannot be written as a sum of at most \(n^{2 - \epsilon}\) pairwise coprime \(n^{\mathrm{th}}\) powers.
By refining the method in Solution 2, these bounds can be improved further to show that infinitely many positive integers cannot be written as a sum of at most \(n^{\alpha}\) pairwise coprime \(n^{\mathrm{th}}\) powers for any positive \(\alpha >0\) . To do this, one fixes a positive integer \(d\) , sets \(N\) equal to the product of the primes at most \(dx\) which are congruent to 1 modulo \(d\) , and \(n = d\mathsf{lcm}_{r\leqslant x}(r)\) . It follows as in Solution 2 that \((n,N)\) satisfies \((\dagger)\)
Now the Prime Number Theorem in arithmetic progressions provides the estimates \(\log (N)\sim \frac{d}{\phi(d)} x\), \(\log (n)\sim x\), and \(\pi (dx)\sim \frac{dx}{\log (x)}\) for any fixed \(d\). Combining these provides a bound
\[N > 2^{\pi (dx)}n^{d / \phi (d) - \epsilon}\]
for any positive \(\epsilon\) , valid for \(x\) sufficiently large. Since the ratio \(\frac{d}{\phi(d)}\) can be made arbitrarily large by a judicious choice of \(d\) , we obtain the \(n^{\alpha}\) bound claimed.
Comment 3. While big results from analytic number theory such as the Prime Number Theorem or Linnik's Theorem certainly can be used in this problem, they do not seem to substantially simplify matters: all known solutions involve first reducing to condition \((\dagger)\) , and even then analytic results do not make it clear how to proceed. For this reason, we regard this problem as suitable for the IMO.
Rather than simplifying the problem, what nonelementary results from analytic number theory allow one to achieve is a strengthening of the main bound, typically replacing the \(n\log (n)\) growth with a power \(n^{1 + \delta}\) . However, we believe that such stronger bounds are unlikely to be found by students in the exam.
The strongest bound we know how to achieve using purely elementary methods is a bound of the form \(N > 2^{k}n\log (n)^{M}\) for any positive integer \(M\). This is achieved by a variant of the argument in Solution 1, choosing primes \(p_{0},\ldots ,p_{M}\) with \(p_{i}\mid 2^{2^{t + i - 1}} + 1\) and setting \(N = \prod_{i}p_{i}\) and \(n = 2^{-tM}\prod_{i}(p_{i} - 1)\).
|
IMOSL-2019-N8
|
Let \(a\) and \(b\) be two positive integers. Prove that the integer
\[a^{2} + \left\lceil \frac{4a^{2}}{b}\right\rceil\]
is not a square. (Here \(\lceil z\rceil\) denotes the least integer greater than or equal to \(z\) .)
|
Solution 1. Arguing indirectly, assume that
\[a^{2} + \left\lceil \frac{4a^{2}}{b}\right\rceil = (a + k)^{2},\quad \mathrm{or}\quad \left\lceil \frac{(2a)^{2}}{b}\right\rceil = (2a + k)k.\]
Clearly, \(k \geqslant 1\) . In other words, the equation
\[\left\lceil \frac{c^{2}}{b}\right\rceil = (c + k)k \quad (1)\]
has a positive integer solution \((c, k)\) , with an even value of \(c\) .
Choose a positive integer solution of (1) with minimal possible value of \(k\) , without regard to the parity of \(c\) . From
\[\frac{c^{2}}{b} >\left\lceil \frac{c^{2}}{b}\right\rceil -1 = c k + k^{2} - 1\geqslant c k\]
and
\[\frac{(c - k)(c + k)}{b} < \frac{c^{2}}{b} \leqslant \left\lceil \frac{c^{2}}{b}\right\rceil = (c + k)k\]
it can be seen that \(c > bk > c - k\) , so
\[c = k b + r\quad \mathrm{with~some~}0< r< k.\]
By substituting this in (1) we get
\[\left\lceil \frac{c^{2}}{b}\right\rceil = \left\lceil \frac{(b k + r)^{2}}{b}\right\rceil = k^{2}b + 2k r + \left\lceil \frac{r^{2}}{b}\right\rceil\]
and
\[(c + k)k = (k b + r + k)k = k^{2}b + 2k r + k(k - r),\]
so
\[\left\lceil \frac{r^{2}}{b}\right\rceil = k(k - r). \quad (2)\]
Notice that relation (2) provides another positive integer solution of (1), namely \(c' = r\) and \(k' = k - r\) , with \(c' > 0\) and \(0 < k' < k\) . That contradicts the minimality of \(k\) , and hence finishes the solution.
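A brute-force confirmation of the statement for small parameters (not a proof, just a sanity check of what the descent shows):

```python
from math import isqrt

for a in range(1, 60):
    for b in range(1, 200):
        v = a * a + -(-4 * a * a // b)    # ⌈4a²/b⌉ via negated floor division
        assert isqrt(v) ** 2 != v         # v is never a perfect square
```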
Solution 2. Suppose that
\[a^{2} + \left\lceil \frac{4a^{2}}{b}\right\rceil = c^{2}\]
with some positive integer \(c > a\) , so
\[\begin{array}{l}{c^{2} - 1< a^{2} + \frac{4a^{2}}{b}\leqslant c^{2},}\\ {0\leqslant c^{2}b - a^{2}(b + 4)< b.} \end{array} \quad (3)\]
Let \(d = c^{2}b - a^{2}(b + 4)\) , \(x = c + a\) and \(y = c - a\) ; then we have \(c = \frac{x + y}{2}\) and \(a = \frac{x - y}{2}\) , and (3) can be re- written as follows:
\[\begin{array}{c}{\left(\frac{x + y}{2}\right)^{2}b - \left(\frac{x - y}{2}\right)^{2}(b + 4) = d,}\\ {x^{2} - (b + 2)xy + y^{2} + d = 0;\qquad 0\leqslant d< b.} \end{array} \quad (4)\]
So, by the indirect assumption, the equation (4) has some positive integer solution \((x,y)\)
Fix \(b\) and \(d\) , and take a pair \((x,y)\) of positive integers, satisfying (4), such that \(x + y\) is minimal. By the symmetry in (4) we may assume that \(x\geqslant y\geqslant 1\)
Now we perform a usual "Vieta jump". Consider (4) as a quadratic equation in variable \(x\) , and let \(z\) be its second root. By the Vieta formulas,
\[x + z = (b + 2)y,\quad \mathrm{and}\quad zx = y^{2} + d,\]
so
\[z = (b + 2)y - x = \frac{y^{2} + d}{x}.\]
The first formula shows that \(z\) is an integer, and by the second formula \(z\) is positive. Hence \((z,y)\) is another positive integer solution of (4). From
\[(x - 1)(z - 1) = xz - (x + z) + 1 = (y^{2} + d) - (b + 2)y + 1\] \[\qquad < (y^{2} + b) - (b + 2)y + 1 = (y - 1)^{2} - b(y - 1)\leqslant (y - 1)^{2}\leqslant (x - 1)^{2}\]
we can see that \(z< x\) and therefore \(z + y< x + y\) . But this contradicts the minimality of \(x + y\) among the positive integer solutions of (4).
|
IMOSL-2020-A1.5
|
Version 2: For every positive integer \(N\) , determine the smallest real number \(b_{N}\) such that, for all real \(x\) ,
\[\sqrt[N]{\frac{x^{2N} + 1}{2}}\leqslant b_{N}(x - 1)^{2} + x.\]
|
Answer for both versions: \(a_{n} = b_{N} = N / 2\) .
Solution 1 (for Version 1). First of all, assume that \(a_{n}< N / 2\) satisfies the condition. Taking \(x = 1 + t\) for \(t > 0\), we should have
\[\frac{(1 + t)^{2N} + 1}{2}\leqslant (1 + t + a_{n}t^{2})^{N}.\]
Expanding the brackets we get
\[(1 + t + a_{n}t^{2})^{N} - \frac{(1 + t)^{2N} + 1}{2} = \left(Na_{n} - \frac{N^{2}}{2}\right)t^{2} + c_{3}t^{3} + \ldots +c_{2N}t^{2N} \quad (1)\]
with some coefficients \(c_{3},\ldots ,c_{2N}\) . Since \(a_{n}< N / 2\) , the right hand side of (1) is negative for sufficiently small \(t\) . A contradiction.
It remains to prove the following inequality, which we denote by \(\mathcal{I}(N,x)\):
\[\sqrt[N]{\frac{1 + x^{2N}}{2}}\leqslant x + \frac{N}{2} (x - 1)^{2},\]
where \(N = 2^{n}\) .
We use induction on \(n\). The base case \(n = 0\) is trivial: \(N = 1\) and both sides of \(\mathcal{I}(N,x)\) are equal to \((1 + x^{2}) / 2\). To complete the induction, we prove \(\mathcal{I}(2N,x)\) assuming that \(\mathcal{I}(N,y)\) is established for all real \(y\). We have
\[\begin{array}{l}{(x + N(x - 1)^{2})^{2} = x^{2} + N^{2}(x - 1)^{4} + N(x - 1)^{2}\frac{(x + 1)^{2} - (x - 1)^{2}}{2}}\\ {\qquad = x^{2} + \frac{N}{2} (x^{2} - 1)^{2} + \left(N^{2} - \frac{N}{2}\right)(x - 1)^{4}\geqslant x^{2} + \frac{N}{2} (x^{2} - 1)^{2}\geqslant \sqrt[N]{\frac{1 + x^{4N}}{2}},} \end{array}\]
where the last inequality is \(\mathcal{I}(N,x^{2})\) . Since
\[x + N(x - 1)^{2}\geqslant x + \frac{(x - 1)^{2}}{2} = \frac{x^{2} + 1}{2}\geqslant 0,\]
taking square root we get \(\mathcal{I}(2N,x)\) . The inductive step is complete.
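A floating-point spot-check of \(\mathcal{I}(N,x)\) for the first few powers \(N = 2^{n}\), with a small tolerance for rounding:

```python
for n in range(5):
    N = 2 ** n
    for i in range(-300, 301):
        x = i / 100
        lhs = ((x ** (2 * N) + 1) / 2) ** (1 / N)
        rhs = x + (N / 2) * (x - 1) ** 2
        assert lhs <= rhs + 1e-9
```

Note that equality occurs, e.g., at \(x = 1\) for every \(N\), and at \(x = -3\) when \(N = 1\), so the tolerance matters at those boundary points.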
Solution 2.1 (for Version 2). Like in Solution 1 of Version 1, we conclude that \(b_{N} \geqslant N / 2\) . It remains to prove the inequality \(\mathcal{I}(N, x)\) for an arbitrary positive integer \(N\) .
First of all, \(\mathcal{I}(N, 0)\) is obvious. Further, if \(x > 0\), then the left hand sides of \(\mathcal{I}(N, - x)\) and \(\mathcal{I}(N, x)\) coincide, while the right hand side of \(\mathcal{I}(N, - x)\) is larger than that of \(\mathcal{I}(N, x)\) (their difference equals \(2(N - 1)x \geqslant 0\)). Therefore, \(\mathcal{I}(N, - x)\) follows from \(\mathcal{I}(N, x)\). So, hereafter we suppose that \(x > 0\).
Divide \(\mathcal{I}(N, x)\) by \(x\) and let \(t = (x - 1)^{2} / x = x - 2 + 1 / x\); then \(\mathcal{I}(N, x)\) reads as
\[f_{N} := \frac{x^{N} + x^{-N}}{2} \leqslant \left(1 + \frac{N}{2} t\right)^{N}. \quad (2)\]
The key identity is the expansion of \(f_{N}\) as a polynomial in \(t\) :
Lemma.
\[f_{N} = N \sum_{k = 0}^{N} \frac{1}{N + k} \binom{N + k}{2k} t^{k}. \quad (3)\]
Proof. Apply induction on \(N\) . We will make use of the straightforward recurrence relation
\[f_{N + 1} + f_{N - 1} = (x + 1 / x) f_{N} = (2 + t) f_{N}. \quad (4)\]
The base cases \(N = 1, 2\) are straightforward:
\[f_{1} = 1 + \frac{t}{2}, \quad f_{2} = \frac{1}{2} t^{2} + 2t + 1.\]
For the induction step from \(N - 1\) and \(N\) to \(N + 1\) , we compute the coefficient of \(t^{k}\) in \(f_{N + 1}\) using the formula \(f_{N + 1} = (2 + t) f_{N} - f_{N - 1}\) . For \(k = 0\) that coefficient equals 1, for \(k > 0\) it equals
\[2\frac{N}{N + k}\binom{N + k}{2k} +\frac{N}{N + k - 1}\binom{N + k - 1}{2k - 2} -\frac{N - 1}{N + k - 1}\binom{N + k - 1}{2k}\] \[\quad = \frac{(N + k - 1)!}{(2k)!(N - k)!}\left(2N + \frac{2k(2k - 1)N}{(N + k - 1)(N - k + 1)} -\frac{(N - 1)(N - k)}{N + k - 1}\right)\] \[\quad = \frac{(N + k - 1)!}{(2k)!(N - k + 1)!}\left(2N(N - k + 1) + 3kN + k - N^{2} - N\right) = \frac{\binom{N + k + 1}{2k}}{(N + k + 1)}(N + 1),\]
that completes the induction.
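The Lemma can be confirmed symbolically by generating the coefficient lists of \(f_N\) from the recurrence (4) in exact rational arithmetic and comparing with the closed form:

```python
from fractions import Fraction
from math import comb

f = [[Fraction(1)], [Fraction(1), Fraction(1, 2)]]   # f_0 = 1, f_1 = 1 + t/2
for N in range(1, 12):
    nxt = [Fraction(0)] * (N + 2)
    for k, c in enumerate(f[N]):          # (2 + t)·f_N
        nxt[k] += 2 * c
        nxt[k + 1] += c
    for k, c in enumerate(f[N - 1]):      # minus f_{N-1}
        nxt[k] -= c
    f.append(nxt)

for N in range(1, 13):
    for k in range(N + 1):
        assert f[N][k] == Fraction(N, N + k) * comb(N + k, 2 * k)
```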
Turning back to the problem, in order to prove (2) we write
\[\left(1 + \frac{N}{2} t\right)^{N} - f_{N} = \left(1 + \frac{N}{2} t\right)^{N} - N \sum_{k = 0}^{N} \frac{1}{N + k} \binom{N + k}{2k} t^{k} = \sum_{k = 0}^{N} \alpha_{k} t^{k},\]
where
\[\alpha_{k} = \left(\frac{N}{2}\right)^{k}\binom{N}{k} -\frac{N}{N + k}\binom{N + k}{2k}\] \[\qquad = \left(\frac{N}{2}\right)^{k}\binom{N}{k}\left(1 - 2^{k}\frac{(1 + 1 / N)(1 + 2 / N)\cdot\ldots\cdot(1 + (k - 1) / N)}{(k + 1)\cdot\ldots\cdot(2k)}\right)\] \[\qquad \geqslant \left(\frac{N}{2}\right)^{k}\binom{N}{k}\left(1 - 2^{k}\frac{2\cdot3\cdot\ldots\cdot k}{(k + 1)\cdot\ldots\cdot(2k)}\right) = \left(\frac{N}{2}\right)^{k}\binom{N}{k}\left(1 - \prod_{j = 1}^{k}\frac{2j}{k + j}\right)\geqslant 0,\]
and (2) follows.
Solution 2.2 (for Version 2). Here we present another proof of the inequality (2) for \(x > 0\) , or, equivalently, for \(t = (x - 1)^{2} / x \geqslant 0\) . Instead of finding the coefficients of the polynomial \(f_{N} = f_{N}(t)\) we may find its roots, which is in a sense more straightforward. Note that the recurrence (4) and the initial conditions \(f_{0} = 1\) , \(f_{1} = 1 + t / 2\) imply that \(f_{N}\) is a polynomial in \(t\) of degree \(N\) . It also follows by induction that \(f_{N}(0) = 1\) , \(f_{N}^{\prime}(0) = N^{2} / 2\) : the recurrence relations read as \(f_{N + 1}(0) + f_{N - 1}(0) = 2f_{N}(0)\) and \(f_{N + 1}^{\prime}(0) + f_{N - 1}^{\prime}(0) = 2f_{N}^{\prime}(0) + f_{N}(0)\) , respectively.
Next, if \(x_{k} = \exp (\frac{i\pi(2k - 1)}{2N})\) for \(k \in \{1, 2, \ldots , N\}\) , then
\[-t_{k} := 2 - x_{k} - \frac{1}{x_{k}} = 2 - 2\cos \frac{\pi(2k - 1)}{2N} = 4\sin^{2} \frac{\pi(2k - 1)}{4N} > 0\]
and
\[f_{N}(t_{k}) = \frac{x_{k}^{N} + x_{k}^{-N}}{2} = \frac{\exp\left(\frac{i\pi(2k - 1)}{2}\right) + \exp\left(-\frac{i\pi(2k - 1)}{2}\right)}{2} = 0.\]
So the roots of \(f_{N}\) are \(t_{1}, \ldots , t_{N}\) and by the AM- GM inequality we have
\[f_{N}(t) = \left(1 - \frac{t}{t_{1}}\right)\left(1 - \frac{t}{t_{2}}\right)\cdots \left(1 - \frac{t}{t_{N}}\right)\leqslant \left(1 - \frac{t}{N}\left(\frac{1}{t_{1}} +\cdots +\frac{1}{t_{N}}\right)\right)^{N} = \left(1 + \frac{t f_{N}^{\prime}(0)}{N}\right)^{N} = \left(1 + \frac{N}{2} t\right)^{N}.\]
Comment. The polynomial \(f_{N}(t)\) equals to \(\frac{1}{2} T_{N}(t + 2)\) , where \(T_{n}\) is the \(n^{\mathrm{th}}\) Chebyshev polynomial of the first kind: \(T_{n}(2\cos s) = 2\cos ns\) , \(T_{n}(x + 1 / x) = x^{n} + 1 / x^{n}\) .
Solution 2.3 (for Version 2). Here we solve the problem when \(N \geqslant 1\) is an arbitrary real number. For a real number \(a\) let
\[f(x) = \left(\frac{x^{2N} + 1}{2}\right)^{\frac{1}{N}} - a(x - 1)^{2} - x.\]
Then \(f(1) = 0\) ,
\[f^{\prime}(x) = \left(\frac{x^{2N} + 1}{2}\right)^{\frac{1}{N} -1}x^{2N - 1} - 2a(x - 1) - 1\quad \mathrm{and}\quad f^{\prime}(1) = 0;\]
\[f^{\prime \prime}(x) = (1 - N)\left(\frac{x^{2N} + 1}{2}\right)^{\frac{1}{N} -2}x^{4N - 2} + (2N - 1)\left(\frac{x^{2N} + 1}{2}\right)^{\frac{1}{N} -1}x^{2N - 2} - 2a\quad \mathrm{and}\quad f^{\prime \prime}(1) = N - 2a.\]
So if \(a < \frac{N}{2}\) , the function \(f\) has a strict local minimum at point 1, and the inequality \(f(x) \leqslant 0 = f(1)\) does not hold. This proves \(b_{N} \geqslant N / 2\) .
For \(a = \frac{N}{2}\) we have \(f^{\prime \prime}(1) = 0\) and
\[f^{\prime \prime \prime}(x) = \frac{1}{2} (1 - N)(1 - 2N)\left(\frac{x^{2N} + 1}{2}\right)^{\frac{1}{N} -3}x^{2N - 3}(1 - x^{2N})\quad \left\{ \begin{array}{ll} > 0 & \mathrm{if~}0< x< 1 \mathrm{and}\\ < 0 & \mathrm{if~}x > 1. \end{array} \right.\]
Hence, \(f^{\prime \prime}(x) < 0\) for \(x \neq 1\) ; \(f^{\prime}(x) > 0\) for \(x < 1\) and \(f^{\prime}(x) < 0\) for \(x > 1\) , finally \(f(x) < 0\) for \(x \neq 1\) .
Comment. Version 2 is much more difficult, closer to A5 or A6 in difficulty. The induction in Version 1 is rather straightforward, while all three above solutions of Version 2 require some creativity.
|
IMOSL-2020-A4
|
Let \(a,b,c,d\) be four real numbers such that \(a\geqslant b\geqslant c\geqslant d > 0\) and \(a + b + c + d = 1\) . Prove that
\[(a + 2b + 3c + 4d)a^{a}b^{b}c^{c}d^{d}< 1.\]
|
Solution 1. The weighted AM- GM inequality with weights \(a,b,c,d\) gives
\[a^{a}b^{b}c^{c}d^{d}\leqslant a\cdot a + b\cdot b + c\cdot c + d\cdot d = a^{2} + b^{2} + c^{2} + d^{2},\]
so it suffices to prove that \((a + 2b + 3c + 4d)(a^{2} + b^{2} + c^{2} + d^{2})< 1 = (a + b + c + d)^{3}\) . This can be done in various ways, for example:
\[(a + b + c + d)^{3} > a^{2}(a + 3b + 3c + 3d) + b^{2}(3a + b + 3c + 3d)\] \[\qquad +c^{2}(3a + 3b + c + 3d) + d^{2}(3a + 3b + 3c + d)\] \[\qquad \geqslant (a^{2} + b^{2} + c^{2} + d^{2})\cdot (a + 2b + 3c + 4d).\]
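A quick randomized check of the full inequality, sampling normalized quadruples with \(a \geqslant b \geqslant c \geqslant d > 0\) and \(a+b+c+d = 1\):

```python
import random

random.seed(1)
for _ in range(2000):
    vals = sorted(random.random() for _ in range(4))
    s = sum(vals)
    d, c, b, a = (v / s for v in vals)    # a ≥ b ≥ c ≥ d > 0, summing to 1
    lhs = (a + 2 * b + 3 * c + 4 * d) * a**a * b**b * c**c * d**d
    assert lhs < 1
```

At the symmetric point \(a=b=c=d=\tfrac14\) the left-hand side is \(\tfrac{10}{4}\cdot\tfrac14 = 0.625\), comfortably below 1; the supremum 1 is approached only as \(a \to 1\).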
Solution 2. From \(b\geqslant d\) we get
\[a + 2b + 3c + 4d\leqslant a + 3b + 3c + 3d = 3 - 2a.\]
If \(a< \frac{1}{2}\) , then the statement can be proved by
\[(a + 2b + 3c + 4d)a^{a}b^{b}c^{c}d^{d}\leqslant (3 - 2a)a^{a}a^{b}a^{c}a^{d} = (3 - 2a)a = 1 - (1 - a)(1 - 2a)< 1.\]
From now on we assume \(\frac{1}{2}\leqslant a< 1\)
By \(b,c,d< 1 - a\) we have
\[b^{b}c^{c}d^{d}< (1 - a)^{b}\cdot (1 - a)^{c}\cdot (1 - a)^{d} = (1 - a)^{1 - a}.\]
Therefore,
\[(a + 2b + 3c + 4d)a^{a}b^{b}c^{c}d^{d}< (3 - 2a)a^{a}(1 - a)^{1 - a}.\]
For \(0< x< 1\) , consider the functions
\(f(x) = (3 - 2x)x^{x}(1 - x)^{1 - x}\) and \(g(x) = \log f(x) = \log (3 - 2x) + x\log x + (1 - x)\log (1 - x);\) hereafter, log denotes the natural logarithm. It is easy to verify that
\[g^{\prime \prime}(x) = -\frac{4}{(3 - 2x)^{2}} +\frac{1}{x} +\frac{1}{1 - x} = \frac{1 + 8(1 - x)^{2}}{x(1 - x)(3 - 2x)^{2}} >0,\]
so \(g\) is strictly convex on \((0,1)\) .
By \(g\left(\frac{1}{2}\right) = \log 2 + 2\cdot \frac{1}{2}\log \frac{1}{2} = 0\) and \(\lim_{x\to 1 - }g(x) = 0\) , we have \(g(x)\leqslant 0\) (and hence \(f(x)\leqslant 1\) ) for all \(x\in \left[\frac{1}{2},1\right)\) , and therefore
\[(a + 2b + 3c + 4d)a^{a}b^{b}c^{c}d^{d}< f(a)\leqslant 1.\]
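The auxiliary inequality \(f(x)\leqslant 1\) on \(\left[\frac{1}{2},1\right)\) can also be checked numerically on a grid (a sanity check only; the grid size is arbitrary):

```python
def f(x):
    # f(x) = (3 - 2x) x^x (1 - x)^(1 - x), as in Solution 2
    return (3 - 2*x) * x**x * (1 - x)**(1 - x)

grid = [0.5 + 0.4999 * i / 2000 for i in range(2001)]
vals = [f(x) for x in grid]
print(max(vals))  # about 1, attained at x = 1/2
```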
Comment. For a large number of variables \(a_{1}\geqslant a_{2}\geqslant \ldots \geqslant a_{n} > 0\) with \(\textstyle \sum_{i}a_{i} = 1\) , the inequality
\[\left(\sum_{i}i a_{i}\right)\prod_{i}a_{i}^{a_{i}}\leqslant 1\]
does not necessarily hold. Indeed, let \(a_{2} = a_{3} = \ldots = a_{n} = \epsilon\) and \(a_{1} = 1 - (n - 1)\epsilon\) , where \(n\) and \(\epsilon \in (0,1 / n)\) will be chosen later. Then
\[\left(\sum_{i}a_{i}\right)\prod_{i}a_{i}^{a_{i}} = \left(1 + \frac{n(n - 1)}{2}\epsilon\right)\epsilon^{(n - 1)\epsilon}(1 - (n - 1)\epsilon)^{1 - (n - 1)\epsilon}. \quad (1)\]
If \(\epsilon = C / n^{2}\) with an arbitrary fixed \(C > 0\) and \(n\rightarrow \infty\) , then the factors \(\epsilon^{(n - 1)\epsilon} = \exp ((n - 1)\epsilon \log \epsilon)\) and \((1 - (n - 1)\epsilon)^{1 - (n - 1)\epsilon}\) tend to 1, so the limit of (1) in this set- up equals \(1 + C / 2\) . This is not simply greater than 1, but it can be arbitrarily large.
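One can evaluate expression (1) numerically to see this effect; the concrete parameters \(n\) and \(C\) below are arbitrary.

```python
def value(n, C):
    # expression (1) with eps = C / n^2
    eps = C / n**2
    t = (n - 1) * eps          # total weight of the small variables
    return (1 + n * (n - 1) / 2 * eps) * eps**t * (1 - t)**(1 - t)

print(value(1000, 8))  # already well above 1; tends to 1 + C/2 = 5 as n grows
```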
|
IMOSL-2020-A5
|
A magician intends to perform the following trick. She announces a positive integer \(n\) , along with \(2n\) real numbers \(x_{1}< \ldots < x_{2n}\) , to the audience. A member of the audience then secretly chooses a polynomial \(P(x)\) of degree \(n\) with real coefficients, computes the \(2n\) values \(P(x_{1}),\ldots ,P(x_{2n})\) , and writes down these \(2n\) values on the blackboard in non- decreasing order. After that the magician announces the secret polynomial to the audience.
Can the magician find a strategy to perform such a trick?
|
Answer: No, she cannot.
Solution. Let \(x_{1}< x_{2}< \ldots < x_{2n}\) be real numbers chosen by the magician. We will construct two distinct polynomials \(P(x)\) and \(Q(x)\) , each of degree \(n\) , such that the member of audience will write down the same sequence for both polynomials. This will mean that the magician cannot distinguish \(P\) from \(Q\) .
Claim. There exists a polynomial \(P(x)\) of degree \(n\) such that \(P(x_{2i - 1}) + P(x_{2i}) = 0\) for \(i = 1,2,\ldots ,n\) .
Proof. We want to find a polynomial \(a_{n}x^{n} + \ldots +a_{1}x + a_{0}\) satisfying the following system of equations:
\[\left\{ \begin{array}{l l}{(x_{1}^{n} + x_{2}^{n})a_{n} + (x_{1}^{n - 1} + x_{2}^{n - 1})a_{n - 1} + \ldots +2a_{0} = 0}\\ {(x_{3}^{n} + x_{4}^{n})a_{n} + (x_{3}^{n - 1} + x_{4}^{n - 1})a_{n - 1} + \ldots +2a_{0} = 0}\\ {\ldots}\\ {(x_{2n - 1}^{n} + x_{2n}^{n})a_{n} + (x_{2n - 1}^{n - 1} + x_{2n}^{n - 1})a_{n - 1} + \ldots +2a_{0} = 0} \end{array} \right.\]
We use the well known fact that a homogeneous system of \(n\) linear equations in \(n + 1\) variables has a nonzero solution. (This fact can be proved using induction on \(n\) , via elimination of variables.) Applying this fact to the above system, we find a nonzero polynomial \(P(x)\) of degree not exceeding \(n\) such that its coefficients \(a_{0},\ldots ,a_{n}\) satisfy this system. Therefore \(P(x_{2i - 1}) + P(x_{2i}) = 0\) for all \(i = 1,2,\ldots ,n\) . Notice that \(P\) has a root on each segment \([x_{2i - 1},x_{2i}]\) by the Intermediate Value theorem, so \(n\) roots in total. Since \(P\) is nonzero, we get \(\deg P = n\) . \(\square\)
Now consider a polynomial \(P(x)\) provided by the Claim, and take \(Q(x) = - P(x)\) . The properties of \(P(x)\) yield that \(P(x_{2i - 1}) = Q(x_{2i})\) and \(Q(x_{2i - 1}) = P(x_{2i})\) for all \(i = 1,2,\ldots ,n\) . It is also clear that \(P\neq - P = Q\) and \(\deg Q = \deg P = n\) .
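For \(n = 1\) the construction is completely explicit: the system reduces to the single equation \((x_{1} + x_{2})a_{1} + 2a_{0} = 0\) , solved e.g. by \(a_{1} = 2\) , \(a_{0} = -(x_{1} + x_{2})\) . A small sketch with arbitrarily chosen sample points:

```python
x1, x2 = 0.3, 1.7   # sample points announced by the magician (any x1 < x2 work)

def P(x):
    # degree-1 solution of (x1 + x2) a1 + 2 a0 = 0 with a1 = 2, a0 = -(x1 + x2)
    return 2 * x - (x1 + x2)

def Q(x):
    return -P(x)

# both polynomials produce the same non-decreasing board
board_P = sorted([P(x1), P(x2)])
board_Q = sorted([Q(x1), Q(x2)])
print(board_P == board_Q)  # True: the magician cannot tell P and Q apart
```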
Comment. It can be shown that for any positive integer \(n\) the magician can choose \(2n + 1\) distinct real numbers so as to perform such a trick. Moreover, she can perform such a trick with almost all (in a proper sense) \((2n + 1)\) - tuples of numbers.
|
IMOSL-2020-A6
|
Determine all functions \(f: \mathbb{Z} \to \mathbb{Z}\) such that
\[f^{a^{2} + b^{2}}(a + b) = af(a) + bf(b)\qquad \mathrm{for~every~}a,b\in \mathbb{Z}.\]
Here, \(f^{n}\) denotes the \(n^{\mathrm{th}}\) iteration of \(f\) , i.e., \(f^{0}(x) = x\) and \(f^{n + 1}(x) = f(f^{n}(x))\) for all \(n \geqslant 0\) .
|
Answer: Either \(f(x) = 0\) for all \(x \in \mathbb{Z}\) , or \(f(x) = x + 1\) for all \(x \in \mathbb{Z}\) .
Solution. Refer to the main equation as \(E(a,b)\) .
\(E(0,b)\) reads as \(f^{b^{2}}(b) = bf(b)\) . For \(b = - 1\) this gives \(f(- 1) = 0\) .
Now \(E(a, - 1)\) reads as
\[f^{a^{2} + 1}(a - 1) = af(a) = f^{a^{2}}(a). \quad (1)\]
For \(x \in \mathbb{Z}\) define the orbit of \(x\) by \(\mathcal{O}(x) = \{x, f(x), f(f(x)), \ldots \} \subseteq \mathbb{Z}\) . We see that the orbits \(\mathcal{O}(a - 1)\) and \(\mathcal{O}(a)\) differ by finitely many terms. Hence, any two orbits differ by finitely many terms. In particular, this implies that either all orbits are finite or all orbits are infinite.
Case 1: All orbits are finite.
Then \(\mathcal{O}(0)\) is finite. Using \(E(a, - a)\) we get
\[a\big(f(a) - f(-a)\big) = af(a) - af(-a) = f^{2a^{2}}(0)\in \mathcal{O}(0).\]
For \(|a| > \max_{z\in \mathcal{O}(0)}|z|\) , this yields \(f(a) = f(- a)\) and \(f^{2a^{2}}(0) = 0\) . Therefore, the sequence \(\left(f^{k}(0):k = 0,1,\ldots\right)\) is purely periodic with a minimal period \(T\) which divides \(2a^{2}\) . Analogously, \(T\) divides \(2(a + 1)^{2}\) , therefore, \(T|\gcd \bigl (2a^{2},2(a + 1)^{2}\bigr) = 2\) , i.e., \(f(f(0)) = 0\) and \(a\big(f(a) - f(- a)\big) = f^{2a^{2}}(0) = 0\) for all \(a\) . Thus,
\[f(a) = f(-a)\qquad \mathrm{for~all~}a\neq 0; \quad (\clubsuit)\]
in particular,
\[f(1) = f(-1) = 0. \quad (\spadesuit)\]
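The divisibility step \(T\mid \gcd\bigl(2a^{2},2(a + 1)^{2}\bigr) = 2\) rests on consecutive integers being coprime; a quick check over a small range:

```python
from math import gcd

# consecutive integers are coprime, hence gcd(2a^2, 2(a+1)^2) = 2 for all a >= 1
ok = all(gcd(2 * a * a, 2 * (a + 1) * (a + 1)) == 2 for a in range(1, 10000))
print(ok)  # True
```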
Next, for each \(n \in \mathbb{Z}\) , by \(E(n, 1 - n)\) we get
\[n f(n) + (1 - n)f(1 - n) = f^{n^{2} + (1 - n)^{2}}(1) = f^{2n^{2} - 2n}(0) = 0. \quad (\heartsuit)\]
Assume that there exists some \(m \neq 0\) such that \(f(m) \neq 0\) . Choose such an \(m\) for which \(|m|\) is minimal possible. Then \(|m| > 1\) due to \((\spadesuit)\) ; \(f(|m|) \neq 0\) due to \((\clubsuit)\) ; and \(f(1 - |m|) \neq 0\) due to \((\heartsuit)\) for \(n = |m|\) . Since \(0 < \big|1 - |m|\big| = |m| - 1 < |m|\) , this contradicts the minimality of \(|m|\) .
So, \(f(n) = 0\) for \(n \neq 0\) . Finally, \(f(0) = f^{3}(0) = f^{4}(2) = 2f(2) = 0\) . Clearly, the function \(f(x) \equiv 0\) satisfies the problem condition, which provides the first of the two answers.
Case 2: All orbits are infinite.
Since the orbits \(\mathcal{O}(a)\) and \(\mathcal{O}(a - 1)\) differ by finitely many terms for all \(a \in \mathbb{Z}\) , each two orbits \(\mathcal{O}(a)\) and \(\mathcal{O}(b)\) have infinitely many common terms for arbitrary \(a, b \in \mathbb{Z}\) .
For a minute, fix any \(a, b \in \mathbb{Z}\) . We claim that all pairs \((n, m)\) of nonnegative integers such that \(f^{n}(a) = f^{m}(b)\) have the same difference \(n - m\) . Arguing indirectly, suppose that \(f^{n}(a) = f^{m}(b)\) and \(f^{p}(a) = f^{q}(b)\) with, say, \(n - m > p - q\) ; then \(f^{p + m + k}(b) = f^{p + n + k}(a) = f^{q + n + k}(b)\) for all nonnegative integers \(k\) . This means that \(f^{\ell + (n - m) - (p - q)}(b) = f^{\ell}(b)\) for all sufficiently large \(\ell\) , i.e., the sequence \(\left(f^{n}(b)\right)\) is eventually periodic, so \(\mathcal{O}(b)\) is finite, which is impossible.
Now, for every \(a, b \in \mathbb{Z}\) , denote the common difference \(n - m\) defined above by \(X(a, b)\) . We have \(X(a - 1, a) = 1\) by (1). Trivially, \(X(a, b) + X(b, c) = X(a, c)\) , as if \(f^{n}(a) = f^{m}(b)\) and \(f^{p}(b) = f^{q}(c)\) , then \(f^{p + n}(a) = f^{p + m}(b) = f^{q + m}(c)\) . These two properties imply that \(X(a, b) = b - a\) for all \(a, b \in \mathbb{Z}\) .
But (1) yields \(f^{a^{2} + 1}(f(a - 1)) = f^{a^{2}}(f(a))\) , so
\[1 = X\big(f(a - 1),f(a)\big) = f(a) - f(a - 1)\quad \mathrm{for~all~}a\in \mathbb{Z}.\]
Recalling that \(f(- 1) = 0\) , we conclude by (two- sided) induction on \(x\) that \(f(x) = x + 1\) for all \(x\in \mathbb{Z}\) .
Finally, the obtained function also satisfies the assumption. Indeed, \(f^{n}(x) = x + n\) for all \(n\geq 0\) , so
\[f^{a^{2} + b^{2}}(a + b) = a + b + a^{2} + b^{2} = af(a) + bf(b).\]
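Both answer functions can be verified against \(E(a,b)\) by direct iteration over a small range of arguments (a sanity check; the range bound is arbitrary):

```python
def iterate(f, n, x):
    # compute the n-th iterate f^n(x)
    for _ in range(n):
        x = f(x)
    return x

def satisfies_E(f, bound=6):
    # check f^{a^2 + b^2}(a + b) = a f(a) + b f(b) for |a|, |b| <= bound
    return all(iterate(f, a * a + b * b, a + b) == a * f(a) + b * f(b)
               for a in range(-bound, bound + 1)
               for b in range(-bound, bound + 1))

print(satisfies_E(lambda x: 0), satisfies_E(lambda x: x + 1))  # True True
```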
Comment. There are many possible variations of the solution above, but finiteness of orbits seems to be a crucial distinction in all solutions. However, the case distinction could be made in different ways; in particular, there exist some versions of Case 1 which work whenever there is at least one finite orbit.
We believe that Case 2 is conceptually harder than Case 1.
|
IMOSL-2020-A7
|
Let \(n\) and \(k\) be positive integers. Prove that for \(a_{1},\ldots ,a_{n}\in [1,2^{k}]\) one has
\[\sum_{i = 1}^{n}\frac{a_{i}}{\sqrt{a_{1}^{2} + \ldots + a_{i}^{2}}}\leqslant 4\sqrt{kn}.\]
|
Solution 1. Partition the set of indices \(\{1,2,\ldots ,n\}\) into disjoint subsets \(M_{1},M_{2},\ldots ,M_{k}\) so that \(a_{\ell}\in [2^{j - 1},2^{j}]\) for \(\ell \in M_{j}\) . Then, if \(|M_{j}| = :p_{j}\) , we have
\[\sum_{\ell \in M_{j}}\frac{a_{\ell}}{\sqrt{a_{1}^{2} + \ldots + a_{\ell}^{2}}}\leqslant \sum_{i = 1}^{p_{j}}\frac{2^{j}}{2^{j - 1}\sqrt{i}} = 2\sum_{i = 1}^{p_{j}}\frac{1}{\sqrt{i}},\]
where we used that \(a_{\ell}\leqslant 2^{j}\) and in the denominator every index from \(M_{j}\) contributes at least \((2^{j - 1})^{2}\) . Now, using \(\sqrt{i} -\sqrt{i - 1} = \frac{1}{\sqrt{i} + \sqrt{i - 1}}\geqslant \frac{1}{2\sqrt{i}}\) , we deduce that
\[\sum_{\ell \in M_{j}}\frac{a_{\ell}}{\sqrt{a_{1}^{2} + \ldots + a_{\ell}^{2}}}\leqslant 2\sum_{i = 1}^{p_{j}}\frac{1}{\sqrt{i}}\leqslant 2\sum_{i = 1}^{p_{j}}2(\sqrt{i} -\sqrt{i - 1}) = 4\sqrt{p_{j}}.\]
Therefore, summing over \(j = 1,\ldots ,k\) and using the QM- AM inequality, we obtain
\[\sum_{\ell = 1}^{n}\frac{a_{\ell}}{\sqrt{a_{1}^{2} + \ldots + a_{\ell}^{2}}}\leqslant 4\sum_{j = 1}^{k}\sqrt{|M_{j}|}\leqslant 4\sqrt{k\sum_{j = 1}^{k}|M_{j}|} = 4\sqrt{kn}.\]
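A brute-force check of the bound on random inputs (illustrative only; the sample sizes are arbitrary):

```python
import math, random

def lhs_sum(a):
    # sum of a_i / sqrt(a_1^2 + ... + a_i^2)
    total, s = 0.0, 0.0
    for x in a:
        s += x * x
        total += x / math.sqrt(s)
    return total

rng = random.Random(1)
checked = []
for n, k in [(50, 3), (300, 5), (1000, 1)]:
    a = [rng.uniform(1, 2**k) for _ in range(n)]
    checked.append(lhs_sum(a) <= 4 * math.sqrt(k * n))
print(all(checked))  # True
```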
Comment. Consider the function \(f(a_{1},\ldots ,a_{n}) = \sum_{i = 1}^{n}\frac{a_{i}}{\sqrt{a_{1}^{2} + \ldots + a_{i}^{2}}}\) . One can see that rearranging the variables in increasing order can only increase the value of \(f(a_{1},\ldots ,a_{n})\) . Indeed, if \(a_{j} > a_{j + 1}\) for some index \(j\) then we have
\[f(a_{1},\ldots ,a_{j - 1},a_{j + 1},a_{j},a_{j + 2},\ldots ,a_{n}) - f(a_{1},\ldots ,a_{n}) = \frac{a}{S} +\frac{b}{\sqrt{S^{2} - a^{2}}} -\frac{b}{S} -\frac{a}{\sqrt{S^{2} - b^{2}}}\]
where \(a = a_{j},b = a_{j + 1}\) , and \(S = \sqrt{a_{1}^{2} + \ldots + a_{j + 1}^{2}}\) . The positivity of the last expression above follows from
\[\frac{b}{\sqrt{S^{2} - a^{2}}} -\frac{b}{S} = \frac{a^{2}b}{S\sqrt{S^{2} - a^{2}}\cdot(S + \sqrt{S^{2} - a^{2}})} >\frac{ab^{2}}{S\sqrt{S^{2} - b^{2}}\cdot(S + \sqrt{S^{2} - b^{2}})} = \frac{a}{\sqrt{S^{2} - b^{2}}} -\frac{a}{S}.\]
Comment. If \(k< n\) , the example \(a_{m}:= 2^{k(m - 1) / n}\) shows that the problem statement is sharp up to a multiplicative constant. For \(k\geqslant n\) the trivial upper bound \(n\) becomes sharp up to a multiplicative constant.
Solution 2. Apply induction on \(n\) . The base \(n\leqslant 16\) is clear: each summand is at most 1, so our sum does not exceed \(n\leqslant 4\sqrt{nk}\) . For the inductive step from \(1,\ldots ,n - 1\) to \(n\geqslant 17\) consider two similar cases.
Case 1: \(n = 2t\)
Let \(x_{\ell} = \frac{a_{\ell}}{\sqrt{a_{1}^{2} + \ldots + a_{\ell}^{2}}}\) . We have
\[\exp (-x_{t + 1}^{2} - \ldots -x_{2t}^{2})\geqslant \left(1 - x_{t + 1}^{2}\right)\cdots \left(1 - x_{2t}^{2}\right) = \frac{a_{1}^{2} + \ldots + a_{t}^{2}}{a_{1}^{2} + \ldots + a_{2t}^{2}}\geqslant \frac{1}{1 + 4^{k}},\]
where we used that the product is telescoping and the estimate \(a_{t + i}\leqslant 2^{k}a_{i}\) for \(i = 1,\ldots ,t\) . Therefore, \(x_{t + 1}^{2} + \ldots +x_{2t}^{2}\leqslant \log (4^{k} + 1)\leqslant 2k\) , where \(\log\) denotes the natural logarithm. By the QM-AM inequality, this implies \(x_{t + 1} + \ldots +x_{2t}\leqslant \sqrt{2kt}\) . Hence, using the inductive hypothesis for \(n = t\) we get
\[\sum_{\ell = 1}^{2t}x_{\ell}\leqslant 4\sqrt{kt} +\sqrt{2kt}\leqslant 4\sqrt{2kt}.\]
Case 2: \(n = 2t + 1\) .
Analogously, we get \(x_{t + 2}^{2} + \ldots +x_{2t + 1}^{2}\leqslant \log (4^{k} + 1)\leqslant 2k\) and
\[\sum_{\ell = 1}^{2t + 1}x_{\ell}\leqslant 4\sqrt{k(t + 1)} +\sqrt{2kt}\leqslant 4\sqrt{k(2t + 1)}.\]
The last inequality is true for all \(t\geq 8\) since
\[4\sqrt{2t + 1} -\sqrt{2t}\geqslant 3\sqrt{2t} = \sqrt{18t}\geqslant \sqrt{16t + 16} = 4\sqrt{t + 1}.\]
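The closing estimate can be confirmed numerically over a large range of \(t\) (of course, the chain above already proves it for all \(t \geqslant 8\) ):

```python
import math

# the estimate 4*sqrt(2t+1) - sqrt(2t) >= 4*sqrt(t+1), claimed for t >= 8
ok = all(4 * math.sqrt(2*t + 1) - math.sqrt(2*t) >= 4 * math.sqrt(t + 1)
         for t in range(8, 100000))
print(ok)  # True
```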
|
IMOSL-2020-A8
|
Let \(\mathbb{R}^{+}\) be the set of positive real numbers. Determine all functions \(f\colon \mathbb{R}^{+}\to \mathbb{R}^{+}\) such that, for all positive real numbers \(x\) and \(y\) ,
\[f\big(x + f(x y)\big) + y = f(x)f(y) + 1. \quad (*)\]
|
Answer: \(f(x) = x + 1\) .
Solution 1. A straightforward check shows that \(f(x) = x + 1\) satisfies \((*)\) . We divide the proof of the converse statement into a sequence of steps.
Step 1: \(f\) is injective.
Put \(x = 1\) in \((*)\) and rearrange the terms to get
\[y = f(1)f(y) + 1 - f\big(1 + f(y)\big).\]
Therefore, if \(f(y_{1}) = f(y_{2})\) , then \(y_{1} = y_{2}\) .
Step 2: \(f\) is (strictly) monotone increasing.
For any fixed \(y\in \mathbb{R}^{+}\) , the function
\[g(x):= f\big(x + f(x y)\big) = f(x)f(y) + 1 - y\]
is injective by Step 1. Therefore, \(x_{1} + f(x_{1}y)\neq x_{2} + f(x_{2}y)\) for all \(y,x_{1},x_{2}\in \mathbb{R}^{+}\) with \(x_{1}\neq x_{2}\) . Plugging in \(z_{i} = x_{i}y\) , we arrive at
\[\frac{z_{1} - z_{2}}{y}\neq f(z_{2}) - f(z_{1}),\quad \mathrm{or}\quad \frac{1}{y}\neq \frac{f(z_{2}) - f(z_{1})}{z_{1} - z_{2}}\]
for all \(y,z_{1},z_{2}\in \mathbb{R}^{+}\) with \(z_{1}\neq z_{2}\) . Since \(1/y\) attains every positive value, the right-hand side of the rightmost relation cannot be positive, i.e., \(f\) is monotone non-decreasing. Since \(f\) is injective, it is strictly monotone.
Step 3: There exist constants \(a\) and \(b\) such that \(f(y) = ay + b\) for all \(y\in \mathbb{R}^{+}\) .
Since \(f\) is monotone and bounded from below by 0, for each \(x_{0}\geq 0\) , there exists a right limit \(\lim_{x\searrow x_{0}}f(x)\geq 0\) . Put \(p = \lim_{x\searrow 0}f(x)\) and \(q = \lim_{x\searrow p}f(x)\) .
Fix an arbitrary \(y\) and take the limit of \((*)\) as \(x\searrow 0\) . We have \(f(xy)\searrow p\) and hence \(f(x + f(xy))\searrow q\) ; therefore, we obtain
\[q + y = pf(y) + 1,\quad \mathrm{or}\quad f(y) = \frac{q + y - 1}{p}.\]
(Notice that \(p\neq 0\) , otherwise \(q + y = 1\) for all \(y\) , which is absurd.) The claim is proved.
Step 4: \(f(x) = x + 1\) for all \(x\in \mathbb{R}^{+}\) .
Based on the previous step, write \(f(x) = ax + b\) . Putting this relation into \((*)\) we get
\[a(x + ax y + b) + b + y = (ax + b)(ay + b) + 1,\]
which can be rewritten as
\[(a - ab)x + (1 - ab)y + ab + b - b^{2} - 1 = 0\qquad \mathrm{for~all~}x,y\in \mathbb{R}^{+}.\]
This identity may hold only if all the coefficients are 0, i.e.,
\[a - ab = 1 - ab = ab + b - b^{2} - 1 = 0.\]
Hence, \(a = b = 1\) .
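A direct numerical verification that \(f(x) = x + 1\) satisfies \((*)\) on random positive inputs (the sampling range is arbitrary):

```python
import random

def f(x):
    return x + 1

rng = random.Random(2)
ok = True
for _ in range(1000):
    x, y = rng.uniform(0.01, 10.0), rng.uniform(0.01, 10.0)
    # (*): f(x + f(xy)) + y = f(x) f(y) + 1
    ok = ok and abs(f(x + f(x * y)) + y - (f(x) * f(y) + 1)) < 1e-9
print(ok)  # True
```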
Solution 2. We provide another proof that \(f(x) = x + 1\) is the only function satisfying \((\ast)\) .
Put \(a = f(1)\) . Define the function \(\phi \colon \mathbb{R}^{+}\to \mathbb{R}\) by
\[\phi (x) = f(x) - x - 1.\]
Then equation \((\ast)\) reads as
\[\phi (x + f(xy)) = f(x)f(y) - f(xy) - x - y. \quad (1)\]
Since the right- hand side of (1) is symmetric under swapping \(x\) and \(y\) , we obtain
\[\phi \big(x + f(xy)\big) = \phi \big(y + f(xy)\big).\]
In particular, substituting \((x,y) = (t,1 / t)\) we get
\[\phi (a + t) = \phi \left(a + \frac{1}{t}\right), \qquad t \in \mathbb{R}^{+}. \quad (2)\]
Notice that the function \(f\) is bounded from below by a positive constant. Indeed, for each \(y \in \mathbb{R}^{+}\) , the relation \((\ast)\) yields \(f(x)f(y) > y - 1\) , hence
\[f(x) > \frac{y - 1}{f(y)} \qquad \text{for all} x \in \mathbb{R}^{+}.\]
If \(y > 1\) , this provides a desired positive lower bound for \(f(x)\) .
Now, let \(M = \inf_{x \in \mathbb{R}^{+}} f(x) > 0\) . Then, for all \(y \in \mathbb{R}^{+}\) ,
\[M \geqslant \frac{y - 1}{f(y)}, \quad \text{or} \quad f(y) \geqslant \frac{y - 1}{M}. \quad (3)\]
Lemma 1. The function \(f(x)\) (and hence \(\phi (x)\) ) is bounded on any segment \([p,q]\) , where \(0 < p < q < +\infty\) .
Proof. \(f\) is bounded from below by \(M\) . It remains to show that \(f\) is bounded from above on \([p,q]\) . Substituting \(y = 1\) into \((\ast)\) , we get
\[f\big(x + f(x)\big) = af(x). \quad (4)\]
Take \(z \in [p,q]\) and put \(s = f(z)\) . By (4), we have
\[f(z + s) = as \quad \text{and} \quad f(z + s + as) = f\big(z + s + f(z + s)\big) = a^{2}s.\]
Plugging in \((x,y) = \left(z,1 + \frac{s}{z}\right)\) to \((\ast)\) and using (3), we obtain
\[f(z + as) = f\big(z + f(z + s)\big) = sf\left(1 + \frac{s}{z}\right) - \frac{s}{z} \geqslant \frac{s^{2}}{Mz} - \frac{s}{z}.\]
Now, substituting \((x,y) = \left(z + as,\frac{z}{z + as}\right)\) to \((\ast)\) and applying the above estimate and the estimate \(f(y) \geqslant M\) , we obtain
\[a^{2}s = f(z + s + as) = f\big(z + as + f(z)\big) = f(z + as)f\left(\frac{z}{z + as}\right) + 1 - \frac{z}{z + as}\geqslant Mf(z + as)\geqslant \frac{s^{2}}{z} -\frac{Ms}{z}\geqslant \frac{s^{2}}{q} -\frac{Ms}{p}.\]
This yields \(s \leqslant q\left(\frac{M}{p} + a^{2}\right) = : L\) , and \(f\) is bounded from above by \(L\) on \([p,q]\) . \(\square\)
Applying Lemma 1 to the segment \([a, a + 1]\) , we see that \(\phi\) is bounded on it. By (2) we get that \(\phi\) is also bounded on \([a + 1, +\infty)\) , and hence on \([a, +\infty)\) . Put \(C = \max \{a, 3\}\) .
Lemma 2. For all \(x \geq C\) , we have \(\phi (x) = 0\) (and hence \(f(x) = x + 1\) ).
Proof. Substituting \(y = x\) to (1), we obtain
\[\phi \big(x + f(x^{2})\big) = f(x)^{2} - f(x^{2}) - 2x,\]
hence,
\[\phi \big(x + f(x^{2})\big) + \phi (x^{2}) = f(x)^{2} - (x + 1)^{2} = \phi (x)\big(f(x) + x + 1\big). \quad (5)\]
Since \(f(x) + x + 1 \geq C + 1 \geq 4\) , we obtain that
\[|\phi (x)| \leq \frac{1}{4} \left(|\phi \big(x + f(x^{2})\big)| + |\phi (x^{2})|\right). \quad (6)\]
Since \(C \geq a\) , there exists a finite supremum \(S = \sup_{x \geq C} |\phi (x)|\) . For each \(x \in [C, +\infty)\) , both \(x + f(x^{2})\) and \(x^{2}\) are greater than \(x\) ; hence they also lie in \([C, +\infty)\) . Therefore, taking the supremum of the left- hand side of (6) over \(x \in [C, +\infty)\) , we obtain \(S \leq S / 2\) and hence \(S = 0\) . Thus, \(\phi (x) = 0\) for all \(x \geq C\) . \(\square\)
It remains to show that \(f(y) = y + 1\) when \(0 < y < C\) . For each \(y\) , choose \(x > \max \{C, \frac{C}{y}\}\) . Then all three numbers \(x\) , \(xy\) , and \(x + f(xy)\) are greater than \(C\) , so \((\ast)\) reads as
\[(x + xy + 1) + 1 + y = (x + 1)f(y) + 1, \quad \text{hence} \quad f(y) = y + 1.\]
Comment 1. It may be useful to rewrite \((\ast)\) in the form
\[\phi \big(x + f(xy)\big) + \phi (xy) = \phi (x)\phi (y) + x\phi (y) + y\phi (x) + \phi (x) + \phi (y).\]
This general identity easily implies both (1) and (5).
Comment 2. There are other ways to prove that \(f(x) \geq x + 1\) . Once one has proved this, they can use this stronger estimate instead of (3) in the proof of Lemma 1. Nevertheless, this does not make this proof simpler. So proving that \(f(x) \geq x + 1\) does not seem to be a serious progress towards the solution of the problem. In what follows, we outline one possible proof of this inequality.
First of all, we improve inequality (3) by noticing that, in fact, \(f(x)f(y) \geq y - 1 + M\) , and hence
\[f(y) \geq \frac{y - 1}{M} + 1. \quad (7)\]
Now we divide the argument into two steps.
Step 1: We show that \(M \leq 1\) .
Suppose that \(M > 1\) ; recall the notation \(a = f(1)\) . Substituting \(y = 1 / x\) in \((\ast)\) , we get
\[f(x + a) = f(x)f\left(\frac{1}{x}\right) + 1 - \frac{1}{x}\geq Mf(x),\]
provided that \(x \geq 1\) . By a straightforward induction on \([(x - 1) / a]\) , this yields
\[f(x) \geq M^{(x - 1) / a}. \quad (8)\]
Now choose an arbitrary \(x_{0} \in \mathbb{R}^{+}\) and define a sequence \(x_{0}, x_{1}, \ldots\) by \(x_{n + 1} = x_{n} + f(x_{n}) \geq x_{n} + M\) for all \(n \geq 0\) ; notice that the sequence is unbounded. On the other hand, by (4) we get
\[a x_{n + 1} > a f(x_{n}) = f(x_{n + 1}) \geq M^{(x_{n + 1} - 1) / a},\]
which cannot hold when \(x_{n + 1}\) is large enough.
Step 2: We prove that \(f(y) \geq y + 1\) for all \(y \in \mathbb{R}^{+}\) .
Arguing indirectly, choose \(y \in \mathbb{R}^{+}\) such that \(f(y) < y + 1\) , and choose \(\mu\) with \(f(y) < \mu < y + 1\) . Define a sequence \(x_{0}, x_{1}, \ldots\) by choosing a large \(x_{0} \geq 1\) and setting \(x_{n + 1} = x_{n} + f(x_{n}y) \geq x_{n} + M\) for all \(n \geq 0\) (this sequence is also unbounded). If \(x_{0}\) is large enough, then (7) implies that \(\left(\mu - f(y)\right)f(x_{n}) \geq 1 - y\) for all \(n\) . Therefore,
\[f(x_{n + 1}) = f(y)f(x_{n}) + 1 - y\leq \mu f(x_{n}).\]
On the other hand, since \(M \leq 1\) , inequality (7) implies that \(f(z) \geq z\) , provided that \(z \geq 1\) . Hence, if \(x_{0}\) is large enough, we have \(x_{n + 1} \geq x_{n}(1 + y)\) for all \(n\) . Therefore,
\[x_{0}(1 + y)^{n} \leq x_{n} \leq f(x_{n}) \leq \mu^{n} f(x_{0}),\]
which cannot hold when \(n\) is large enough.
|
IMOSL-2020-C1
|
Let \(n\) be a positive integer. Find the number of permutations \(a_{1},a_{2},\ldots ,a_{n}\) of the sequence \(1,2,\ldots ,n\) satisfying
\[a_{1}\leqslant 2a_{2}\leqslant 3a_{3}\leqslant \ldots \leqslant na_{n}. \quad (*)\]
|
Answer: The number of such permutations is \(F_{n + 1}\) , where \(F_{k}\) is the \(k^{\mathrm{th}}\) Fibonacci number: \(F_{1} = F_{2} = 1\) , \(F_{n + 1} = F_{n} + F_{n - 1}\) .
Solution 1. Denote by \(P_{n}\) the number of permutations that satisfy \((\ast)\) . It is easy to see that \(P_{1} = 1\) and \(P_{2} = 2\) .
Lemma 1. Let \(n\geqslant 3\) . If a permutation \(a_{1},\ldots ,a_{n}\) satisfies \((\ast)\) then either \(a_{n} = n\) , or \(a_{n - 1} = n\) and \(a_{n} = n - 1\) .
Proof. Let \(k\) be the index for which \(a_{k} = n\) . If \(k = n\) then we are done.
If \(k = n - 1\) then, by \((\ast)\) , we have \(n(n - 1) = (n - 1)a_{n - 1}\leqslant na_{n}\) , so \(a_{n}\geqslant n - 1\) . Since \(a_{n}\neq a_{n - 1} = n\) , the only choice for \(a_{n}\) is \(a_{n} = n - 1\) .
Now suppose that \(k\leqslant n - 2\) . For every \(k< i< n\) we have \(k n = k a_{k}\leqslant i a_{i}< n a_{i}\) , so \(a_{i}\geqslant k + 1\) . Moreover, \(n a_{n}\geqslant (n - 1)a_{n - 1}\geqslant (n - 1)(k + 1) = n k + (n - 1 - k) > n k\) , so \(a_{n}\geqslant k + 1\) . Now the \(n - k + 1\) numbers \(a_{k},a_{k + 1},\ldots ,a_{n}\) are all greater than \(k\) ; but there are only \(n - k\) such values; this is not possible. \(\square\)
If \(a_{n} = n\) then \(a_{1},a_{2},\ldots ,a_{n - 1}\) must be a permutation of the numbers \(1,\ldots ,n - 1\) satisfying \(a_{1}\leqslant 2a_{2}\leqslant \ldots \leqslant (n - 1)a_{n - 1}\) ; there are \(P_{n - 1}\) such permutations. The last inequality in \((\ast)\) , \((n - 1)a_{n - 1}\leqslant n a_{n} = n^{2}\) , holds true automatically.
If \((a_{n - 1},a_{n}) = (n,n - 1)\) , then \(a_{1},\ldots ,a_{n - 2}\) must be a permutation of \(1,\ldots ,n - 2\) satisfying \(a_{1}\leqslant \ldots \leqslant (n - 2)a_{n - 2}\) ; there are \(P_{n - 2}\) such permutations. The last two inequalities in \((\ast)\) hold true automatically by \((n - 2)a_{n - 2}\leqslant (n - 2)^{2}< n(n - 1) = (n - 1)a_{n - 1} = n a_{n}\) .
Hence, the sequence \((P_{1},P_{2},\ldots)\) satisfies the recurrence relation \(P_{n} = P_{n - 1} + P_{n - 2}\) for \(n\geqslant 3\) . The first two elements are \(P_{1} = F_{2}\) and \(P_{2} = F_{3}\) , so by a trivial induction we have \(P_{n} = F_{n + 1}\) .
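The recurrence can be confirmed by brute force for small \(n\) , enumerating all permutations and testing \((\ast)\) directly:

```python
from itertools import permutations

def count_valid(n):
    # permutations a_1, ..., a_n of 1..n with a_1 <= 2 a_2 <= ... <= n a_n
    return sum(all(i * p[i - 1] <= (i + 1) * p[i] for i in range(1, n))
               for p in permutations(range(1, n + 1)))

fib = [1, 1]                       # fib[i] = F_{i+1}
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])

counts = [count_valid(n) for n in range(1, 9)]
print(counts)  # F_2, ..., F_9 = 1, 2, 3, 5, 8, 13, 21, 34
```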
Solution 2. We claim that all sought permutations are of the following kind. Split \(\{1,2,\ldots ,n\}\) into singletons and pairs of adjacent numbers. In each pair, swap the two numbers and keep the singletons unchanged.
Such permutations correspond to tilings of a \(1\times n\) chessboard using dominoes and unit squares; it is well- known that the number of such tilings is the Fibonacci number \(F_{n + 1}\) .
The claim follows by induction from
Lemma 2. Assume that \(a_{1},\ldots ,a_{n}\) is a permutation satisfying \((\ast)\) , and \(k\) is an integer such that \(1\leqslant k\leqslant n\) and \(\{a_{1},a_{2},\ldots ,a_{k - 1}\} = \{1,2,\ldots ,k - 1\}\) . (If \(k = 1\) , the condition is empty.) Then either \(a_{k} = k\) , or \(a_{k} = k + 1\) and \(a_{k + 1} = k\) .
Proof. Choose \(t\) with \(a_{t} = k\) . Since \(k\notin \{a_{1},\ldots ,a_{k - 1}\}\) , we have either \(t = k\) or \(t > k\) . If \(t = k\) then we are done, so assume \(t > k\) .
Notice that one of the numbers among the \(t - k\) numbers \(a_{k},a_{k + 1},\ldots ,a_{t - 1}\) is at least \(t\) , because there are only \(t - k - 1\) values between \(k\) and \(t\) . Let \(i\) be an index with \(k\leqslant i< t\) and \(a_{i}\geqslant t\) ; then \(k t = t a_{t}\geqslant i a_{i}\geqslant i t\geqslant k t\) , so that all the inequalities turn into equalities, hence \(i = k\) and \(a_{k} = t\) . If \(t = k + 1\) , we are done.
Suppose that \(t > k + 1\) . Then the chain of inequalities \(k t = k a_{k}\leqslant \ldots \leqslant t a_{t} = k t\) should also turn into a chain of equalities. From this point we can find contradictions in several ways; for example by pointing to \(a_{t - 1} = \frac{k t}{t - 1} = k + \frac{k}{t - 1}\) which cannot be an integer, or considering
the product of the numbers \((k + 1)a_{k + 1},\ldots ,(t - 1)a_{t - 1}\) ; the numbers \(a_{k + 1},\ldots ,a_{t - 1}\) are distinct and greater than \(k\) , so
\[(k t)^{t - k - 1} = (k + 1)a_{k + 1}\cdot (k + 2)a_{k + 2}\cdot \ldots \cdot (t - 1)a_{t - 1}\geqslant \left((k + 1)(k + 2)\cdot \ldots \cdot (t - 1)\right)^{2}.\]
Notice that \((k + i)(t - i) = kt + i(t - k - i) > kt\) for \(1\leqslant i< t - k\) . This leads to the contradiction
\[(k t)^{t - k - 1}\geqslant \left((k + 1)(k + 2)\cdot \ldots \cdot (t - 1)\right)^{2} = \prod_{i = 1}^{t - k - 1}(k + i)(t - i) > (k t)^{t - k - 1}.\]
Therefore, the case \(t > k + 1\) is not possible.
|
IMOSL-2020-C2
|
In a regular 100- gon, 41 vertices are colored black and the remaining 59 vertices are colored white. Prove that there exist 24 convex quadrilaterals \(Q_{1}\) , ..., \(Q_{24}\) whose corners are vertices of the 100- gon, so that
- the quadrilaterals \(Q_{1}\) , ..., \(Q_{24}\) are pairwise disjoint, and
- every quadrilateral \(Q_{i}\) has three corners of one color and one corner of the other color.
|
Solution. Call a quadrilateral skew- colored, if it has three corners of one color and one corner of the other color. We will prove the following
Claim. If the vertices of a convex \((4k + 1)\) - gon \(P\) are colored black and white such that each color is used at least \(k\) times, then there exist \(k\) pairwise disjoint skew- colored quadrilaterals whose vertices are vertices of \(P\) . (One vertex of \(P\) remains unused.)
The problem statement follows by removing 3 arbitrary vertices of the 100- gon and applying the Claim to the remaining 97 vertices with \(k = 24\) .
Proof of the Claim. We proceed by induction on \(k\) . For \(k = 1\) we have a pentagon with at least one black and at least one white vertex. If the number of black vertices is even then remove a black vertex; otherwise remove a white vertex. In the remaining quadrilateral, there are an odd number of black and an odd number of white vertices, so the quadrilateral is skew-colored.
For the induction step, assume \(k \geqslant 2\) . Let \(b\) and \(w\) be the numbers of black and white vertices, respectively; then \(b, w \geqslant k\) and \(b + w = 4k + 1\) . Without loss of generality we may assume \(w \geqslant b\) , so \(k \leqslant b \leqslant 2k\) and \(2k + 1 \leqslant w \leqslant 3k + 1\) .
We want to find four consecutive vertices such that three of them are white, the fourth one is black. Denote the vertices by \(V_{1}, V_{2}, \ldots , V_{4k + 1}\) in counterclockwise order, such that \(V_{4k + 1}\) is black, and consider the following \(k\) groups of vertices:
\[(V_{1}, V_{2}, V_{3}, V_{4}), (V_{5}, V_{6}, V_{7}, V_{8}), \ldots , (V_{4k - 3}, V_{4k - 2}, V_{4k - 1}, V_{4k})\]
In these groups there are \(w\) white and \(b - 1\) black vertices. Since \(w > b - 1\) , there is a group, \((V_{i}, V_{i + 1}, V_{i + 2}, V_{i + 3})\) that contains more white than black vertices. If three are white and one is black in that group, we are done. Otherwise, if \(V_{i}, V_{i + 1}, V_{i + 2}, V_{i + 3}\) are all white then let \(V_{j}\) be the first black vertex among \(V_{i + 4}, \ldots , V_{4k + 1}\) (recall that \(V_{4k + 1}\) is black); then \(V_{j - 3}, V_{j - 2}\) and \(V_{j - 1}\) are white and \(V_{j}\) is black.
Now we have four consecutive vertices \(V_{i}, V_{i + 1}, V_{i + 2}, V_{i + 3}\) that form a skew- colored quadrilateral. The remaining vertices form a convex \((4k - 3)\) - gon; \(w - 3\) of them are white and \(b - 1\) are black. Since \(b - 1 \geqslant k - 1\) and \(w - 3 \geqslant (2k + 1) - 3 > k - 1\) , we can apply the Claim with \(k - 1\) . \(\square\)
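The inductive procedure in the proof translates into a simple greedy algorithm: repeatedly cut off four cyclically consecutive vertices carrying three of the majority color and one of the other, then recurse on the remaining polygon (cutting off consecutive vertices keeps the quadrilaterals geometrically disjoint). A sketch, with colors encoded as 'B'/'W' and a random coloring using the problem's counts 41 and 56, i.e., 97 vertices after discarding three white ones:

```python
import random

def extract_quads(colors):
    # follow the Claim's induction on a (4k+1)-gon given as a cyclic color list
    colors = list(colors)
    quads = []
    while len(colors) > 1:
        m = len(colors)
        maj = 'W' if colors.count('W') >= colors.count('B') else 'B'
        for i in range(m):
            window = [colors[(i + t) % m] for t in range(4)]
            if window.count(maj) == 3:          # skew-colored window found
                quads.append(window)
                for idx in sorted(((i + t) % m for t in range(4)), reverse=True):
                    colors.pop(idx)
                break
        else:
            return None   # cannot happen under the Claim's hypotheses
    return quads

rng = random.Random(3)
colors = ['B'] * 41 + ['W'] * 56
rng.shuffle(colors)
quads = extract_quads(colors)
print(len(quads))  # 24 pairwise disjoint skew-colored quadrilaterals
```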
Comment. It is not true that the vertices of the 100- gon can be split into 25 skew- colored quadrilaterals. A possible counter- example is when the vertices \(V_{1}, V_{3}, V_{5}, \ldots , V_{81}\) are black and the other vertices, \(V_{2}, V_{4}, \ldots , V_{80}\) and \(V_{82}, V_{83}, \ldots , V_{100}\) are white. For having 25 skew- colored quadrilaterals, there should be 8 containing three black vertices. But such a quadrilateral splits the other 96 vertices into four sets in such a way that at least two sets contain odd numbers of vertices and therefore they cannot be grouped into disjoint quadrilaterals.
|
IMOSL-2020-C5
|
Let \(p\) be an odd prime, and put \(N = \textstyle {\frac{1}{4}}(p^{3} - p) - 1\) . The numbers \(1,2,\ldots ,N\) are painted arbitrarily in two colors, red and blue. For any positive integer \(n\leqslant N\) , denote by \(r(n)\) the fraction of integers in \(\{1,2,\ldots ,n\}\) that are red.
Prove that there exists a positive integer \(a\in \{1,2,\ldots ,p - 1\}\) such that \(r(n)\neq a / p\) for all \(n = 1,2,\ldots ,N\)
|
Solution. Denote by \(R(n)\) the number of red numbers in \(\{1,2,\ldots ,n\}\) , i.e., \(R(n) = n r(n)\) . Similarly, denote by \(B(n)\) and \(b(n) = B(n) / n\) the number and proportion of blue numbers in \(\{1,2,\ldots ,n\}\) , respectively. Notice that \(B(n) + R(n) = n\) and \(b(n) + r(n) = 1\) . Therefore, the statement of the problem does not change after swapping the colors.
Arguing indirectly, for every \(a\in \{1,2,\ldots ,p - 1\}\) choose some positive integer \(n_{a}\) such that \(r(n_{a}) = a / p\) and, hence, \(R(n_{a}) = a n_{a} / p\) . Clearly, \(p\mid n_{a}\) , so that \(n_{a} = p m_{a}\) for some positive integer \(m_{a}\) , and \(R(n_{a}) = a m_{a}\) . Without loss of generality, we assume that \(m_{1}< m_{p - 1}\) , as otherwise one may swap the colors. Notice that
\[m_{a}\leqslant \frac{N}{p} < \frac{p^{2} - 1}{4}\qquad \mathrm{for~all~}a = 1,2,\ldots ,p - 1. \quad (1)\]
The solution is based on a repeated application of the following simple observation.
Claim. Assume that \(m_{a}< m_{b}\) for some \(a,b\in \{1,2,\ldots ,p - 1\}\) . Then
\[m_{b}\geqslant \frac{a}{b} m_{a}\qquad \mathrm{and}\qquad m_{b}\geqslant \frac{p - a}{p - b} m_{a}.\]
Proof. The first inequality follows from \(b m_{b} = R(n_{b})\geqslant R(n_{a}) = a m_{a}\) . The second inequality is obtained by swapping the colors. \(\square\)
Let \(q = (p - 1) / 2\) . We distinguish two cases.
Case 1: All \(q\) numbers \(m_{1},m_{2},\ldots ,m_{q}\) are smaller than \(m_{p - 1}\) .
Let \(m_{a}\) be the maximal number among \(m_{1},m_{2},\ldots ,m_{q}\); then \(m_{a}\geqslant q\geqslant a\) (the numbers \(m_{1},m_{2},\ldots ,m_{q}\) are pairwise distinct, since \(m_{a} = m_{b}\) would give \(a/p = r(pm_{a}) = r(pm_{b}) = b/p\)). Applying the Claim, we get
\[m_{p - 1}\geqslant \frac{p - a}{p - (p - 1)} m_{a}\geqslant (p - q)q = \frac{p^{2} - 1}{4},\]
which contradicts (1).
Case 2: There exists \(k\leqslant q\) such that \(m_{k} > m_{p - 1}\) .
Choose \(k\) to be the smallest index satisfying \(m_{k} > m_{p - 1}\) ; by our assumptions, we have \(1< k\leqslant q< p - 1\) .
Let \(m_{a}\) be the maximal number among \(m_{1},m_{2},\ldots ,m_{k - 1}\) ; then \(a\leqslant k - 1\leqslant m_{a}< m_{p - 1}\) . Applying the Claim, we get
\[m_{k}\geqslant \frac{p - 1}{k} m_{p - 1}\geqslant \frac{p - 1}{k}\cdot \frac{p - a}{p - (p - 1)} m_{a}\geqslant \frac{p - 1}{k}\cdot (p - k + 1)(k - 1)\geqslant \frac{k - 1}{k}\cdot (p - 1)(p - q)\geqslant \frac{1}{2}\cdot \frac{p^{2} - 1}{2},\]
which contradicts (1) again.
Comment 1. The argument in Case 2, after a slight modification of estimates at the end, applies as soon as there exists \(k< \frac{3(p + 1)}{4}\) with \(m_{k} > m_{p - 1}\). However, this argument does not seem to work if there is no such \(k\).
Comment 2. If \(p\) is small enough, then one can color \(\{1,2,\ldots ,N + 1\}\) so that there exist numbers \(m_{1}\) , \(m_{2}\) , ..., \(m_{p - 1}\) satisfying \(r(p m_{a}) = a / p\) . For \(p = 3,5,7\) , one can find colorings providing the following sequences:
\[(m_{1},m_{2}) = (1,2),\qquad (m_{1},m_{2},m_{3},m_{4}) = (1,2,3,6),\quad \mathrm{and}\quad (m_{1},\ldots ,m_{6}) = (1,2,3,4,6,12),\]
respectively.
Thus, for small values of \(p\) , the number \(N\) in the problem statement cannot be increased. However, a careful analysis of the estimates shows that this number can be slightly increased for \(p \geqslant 11\) .
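Since \(r(n) = a/p\) forces \(p\mid n\), only the red counts within the blocks \(\{pm+1,\ldots ,p(m+1)\}\) matter, and every sequence of per-block counts in \(\{0,1,\ldots ,p\}\) is realized by some coloring. The statement, and the sharpness claim of Comment 2, can therefore be checked by brute force for \(p = 5\); a sketch with hypothetical helper names:

```python
from itertools import product

def hit_sets(p, M):
    """For every realizable sequence of red counts over M blocks of p
    consecutive integers, return the set of a with r(p*m) = a/p for
    some m <= M.  (r(p*m) = a/p is equivalent to R(p*m) = a*m.)"""
    out = []
    for incs in product(range(p + 1), repeat=M):   # reds per block of p
        R, hits = 0, set()
        for m, d in enumerate(incs, start=1):
            R += d
            if R % m == 0 and 1 <= R // m <= p - 1:
                hits.add(R // m)
        out.append(hits)
    return out

p = 5
full = set(range(1, p))
M = (p * p - 1) // 4 - 1        # multiples of p up to N = (p^3 - p)/4 - 1
# The statement for p = 5: some a in {1,...,4} is never attained.
assert all(full - h for h in hit_sets(p, M))
# Sharpness (Comment 2): with one extra block, i.e. numbers up to N + 1,
# all four values can be attained, e.g. via (m_1,...,m_4) = (1,2,3,6).
assert any(h == full for h in hit_sets(p, M + 1))
print("verified for p =", p)    # -> verified for p = 5
```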
|
IMOSL-2020-C6
|
\(4n\) coins of weights \(1,2,3,\ldots ,4n\) are given. Each coin is colored in one of \(n\) colors and there are four coins of each color. Show that all these coins can be partitioned into two sets with the same total weight, such that each set contains two coins of each color.
|
Solution 1. Let us pair the coins with weights summing up to \(4n + 1\) , resulting in the set \(S\) of \(2n\) pairs: \(\{1,4n\}\) , \(\{2,4n - 1\}\) , ..., \(\{2n,2n + 1\}\) . It suffices to partition \(S\) into two sets, each consisting of \(n\) pairs, such that each set contains two coins of each color.
Introduce a multi- graph \(G\) (i.e., a graph with loops and multiple edges allowed) on \(n\) vertices, so that each vertex corresponds to a color. For each pair of coins from \(S\) , we add an edge between the vertices corresponding to the colors of those coins. Note that each vertex has degree 4. Also, a desired partition of the coins corresponds to a coloring of the edges of \(G\) in two colors, say red and blue, so that each vertex has degree 2 with respect to each color (i.e., each vertex has equal red and blue degrees).
To complete the solution, it suffices to provide such a coloring for each component \(G'\) of \(G\) . Since all degrees of the vertices are even, in \(G'\) there exists an Euler circuit \(C\) (i.e., a circuit passing through each edge of \(G'\) exactly once). Note that the number of edges in \(C\) is even (it equals twice the number of vertices in \(G'\) ). Hence all the edges can be colored red and blue so that any two edges adjacent in \(C\) have different colors (one may move along \(C\) and color the edges one by one alternating red and blue colors). Thus in \(G'\) each vertex has equal red and blue degrees, as desired.
Comment 1. To complete Solution 1, any partition of the edges of \(G\) into circuits of even lengths could be used. In the solution above it was done by the reference to the well- known Euler Circuit Lemma: Let \(G\) be a connected graph with all its vertices of even degrees. Then there exists a circuit passing through each edge of \(G\) exactly once.
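The construction in Solution 1 is effectively algorithmic. Below is a sketch for concrete colorings (helper names are hypothetical): build the color multigraph, extract an Euler circuit of each component with Hierholzer's algorithm, and alternate the two colors along it.

```python
import random
from collections import Counter

def partition_coins(color_of):
    """color_of: dict mapping coins 1..4n to n colors, four coins each.
    Returns two coin sets of equal weight with two coins per color each."""
    n4 = len(color_of)
    pairs = [(i, n4 + 1 - i) for i in range(1, n4 // 2 + 1)]  # sums 4n+1
    adj, ends = {}, []                     # color -> incident edge ids
    for e, (x, y) in enumerate(pairs):
        u, v = color_of[x], color_of[y]
        ends.append((u, v))
        adj.setdefault(u, []).append(e)
        adj.setdefault(v, []).append(e)
    used, red = [False] * len(pairs), set()
    for start in adj:                      # one Euler circuit per component
        stack, trail = [(start, None)], []
        while stack:                       # Hierholzer's algorithm
            v, e_in = stack[-1]
            while adj[v] and used[adj[v][-1]]:
                adj[v].pop()               # drop spent edge copies
            if adj[v]:
                e = adj[v].pop()
                used[e] = True
                u, w = ends[e]
                stack.append((w if v == u else u, e))
            else:
                stack.pop()
                if e_in is not None:
                    trail.append(e_in)
        red.update(trail[::2])             # alternate along the even circuit
    A = [c for e in red for c in pairs[e]]
    B = [c for e in range(len(pairs)) if e not in red for c in pairs[e]]
    return A, B

# demo: a random coloring with n = 6
random.seed(0)
coins = list(range(1, 25))
random.shuffle(coins)
color_of = {coin: i // 4 for i, coin in enumerate(coins)}
A, B = partition_coins(color_of)
assert sum(A) == sum(B)
assert Counter(color_of[c] for c in A) == {k: 2 for k in range(6)}
```

Each circuit has even length, so exactly half of the pairs end up red; both sets then automatically have total weight \(n(4n+1)\).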
Solution 2. As in Solution 1, we will show that it is possible to partition \(2n\) pairs \(\{1,4n\}\) , \(\{2,4n - 1\}\) , ..., \(\{2n,2n + 1\}\) into two sets, each consisting of \(n\) pairs, such that each set contains two coins of each color.
Introduce a multi- graph (i.e., a graph with multiple edges allowed) \(\Gamma\) whose vertices correspond to coins; thus we have \(4n\) vertices of \(n\) colors so that there are four vertices of each color. Connect pairs of vertices \(\{1,4n\}\) , \(\{2,4n - 1\}\) , ..., \(\{2n,2n + 1\}\) by \(2n\) black edges.
Further, for each monochromatic quadruple of vertices \(i,j,k,\ell\) we add a pair of grey edges forming a matching, e.g., \((i,j)\) and \((k,\ell)\) . In each of \(n\) colors of coins we can choose one of three possible matchings; this results in \(3^{n}\) ways of constructing grey edges. Let us call each of \(3^{n}\) possible graphs \(\Gamma\) a cyclic graph. Note that in a cyclic graph \(\Gamma\) each vertex has both black and grey degrees equal to 1. Hence \(\Gamma\) is a union of disjoint cycles, and in each cycle black and grey edges alternate (in particular, all cycles have even lengths).
It suffices to find a cyclic graph with all its cycle lengths divisible by 4. Indeed, in this case, for each cycle we start from some vertex, move along the cycle and recolor the black edges either to red or to blue, alternating red and blue colors. Now blue and red edges define the required partition, since for each monochromatic quadruple of vertices the grey edges provide a bijection between the endpoints of red and blue edges.
Among all possible cyclic graphs, let us choose graph \(\Gamma_{0}\) having the minimal number of components (i.e., cycles). The following claim completes the solution.
Claim. In \(\Gamma_{0}\) , all cycle lengths are divisible by 4.
Proof. Assuming the contrary, choose a cycle \(C_{1}\) whose length is not divisible by 4; since black and grey edges alternate along it, \(C_{1}\) contains an odd number of grey edges. As each color contributes exactly two grey edges, for some color \(c\) the cycle \(C_{1}\) contains exactly one grey edge, joining two vertices \(i,j\) of color \(c\) , while the other grey edge of color \(c\) , joining two vertices \(k,\ell\) , lies in another cycle \(C_{2}\) . Now delete edges \((i,j)\) and \((k,\ell)\) and add edges \((i,k)\) and \((j,\ell)\) . By this switch we again obtain a cyclic graph \(\Gamma_{0}^{\prime}\) and decrease the number of cycles by 1. This contradicts the choice of \(\Gamma_{0}\) . \(\square\)
Comment 2. Use of an auxiliary graph and reduction to a new problem in terms of this graph is one of the crucial steps in both solutions presented. In fact, graph \(G\) from Solution 1 could be obtained from any graph \(\Gamma\) from Solution 2 by merging the vertices of the same color.
|
IMOSL-2020-C7
|
Consider any rectangular table having finitely many rows and columns, with a real number \(a(r,c)\) in the cell in row \(r\) and column \(c\) . A pair \((R,C)\) , where \(R\) is a set of rows and \(C\) a set of columns, is called a saddle pair if the following two conditions are satisfied:
(i) For each row \(r^{\prime}\) , there is \(r\in R\) such that \(a(r,c)\geqslant a(r^{\prime},c)\) for all \(c\in C\) ;
(ii) For each column \(c^{\prime}\) , there is \(c\in C\) such that \(a(r,c)\leqslant a(r,c^{\prime})\) for all \(r\in R\) .
A saddle pair \((R,C)\) is called a minimal pair if for each saddle pair \((R^{\prime},C^{\prime})\) with \(R^{\prime}\subseteq R\) and \(C^{\prime}\subseteq C\) , we have \(R^{\prime} = R\) and \(C^{\prime} = C\) .
Prove that any two minimal pairs contain the same number of rows.
|
Solution 1. We say that a pair \((R^{\prime},C^{\prime})\) of nonempty sets is a subpair of a pair \((R,C)\) if \(R^{\prime}\subseteq R\) and \(C^{\prime}\subseteq C\) . The subpair is proper if at least one of the inclusions is strict.
Let \((R_{1},C_{1})\) and \((R_{2},C_{2})\) be two saddle pairs with \(|R_{1}| > |R_{2}|\) . We will find a saddle subpair \((R^{\prime},C^{\prime})\) of \((R_{1},C_{1})\) with \(|R^{\prime}|\leqslant |R_{2}|\) ; clearly, this implies the desired statement.
Step 1: We construct maps \(\rho \colon R_{1}\to R_{1}\) and \(\sigma \colon C_{1}\to C_{1}\) such that \(|\rho (R_{1})|\leqslant |R_{2}|\) , and \(a\big(\rho (r_{1}),c_{1}\big)\geqslant a\big(r_{1},\sigma (c_{1})\big)\) for all \(r_{1}\in R_{1}\) and \(c_{1}\in C_{1}\) .
Since \((R_{1},C_{1})\) is a saddle pair, for each \(r_{2}\in R_{2}\) there is \(r_{1}\in R_{1}\) such that \(a(r_{1},c_{1})\geqslant a(r_{2},c_{1})\) for all \(c_{1}\in C_{1}\); denote one such \(r_{1}\) by \(\rho_{1}(r_{2})\). Similarly, we define four functions
\[\begin{array}{r l} & {\rho_{1}\colon R_{2}\to R_{1}\quad \mathrm{such~that}\quad a\big(\rho_{1}(r_{2}),c_{1}\big)\geqslant a(r_{2},c_{1})\quad \mathrm{for~all}\quad r_{2}\in R_{2},\quad c_{1}\in C_{1};}\\ & {\rho_{2}\colon R_{1}\to R_{2}\quad \mathrm{such~that}\quad a\big(\rho_{2}(r_{1}),c_{2}\big)\geqslant a(r_{1},c_{2})\quad \mathrm{for~all}\quad r_{1}\in R_{1},\quad c_{2}\in C_{2};}\\ & {\sigma_{1}\colon C_{2}\to C_{1}\quad \mathrm{such~that}\quad a\big(r_{1},\sigma_{1}(c_{2})\big)\leqslant a(r_{1},c_{2})\quad \mathrm{for~all}\quad r_{1}\in R_{1},\quad c_{2}\in C_{2};}\\ & {\sigma_{2}\colon C_{1}\to C_{2}\quad \mathrm{such~that}\quad a\big(r_{2},\sigma_{2}(c_{1})\big)\leqslant a(r_{2},c_{1})\quad \mathrm{for~all}\quad r_{2}\in R_{2},\quad c_{1}\in C_{1}.} \end{array} \quad (1)\]
Set now \(\rho = \rho_{1}\circ \rho_{2}\colon R_{1}\to R_{1}\) and \(\sigma = \sigma_{1}\circ \sigma_{2}\colon C_{1}\to C_{1}\) . We have
\[|\rho (R_{1})| = |\rho_{1}(\rho_{2}(R_{1}))|\leqslant |\rho_{1}(R_{2})|\leqslant |R_{2}|.\]
Moreover, for all \(r_{1}\in R_{1}\) and \(c_{1}\in C_{1}\) , we get
\[\begin{array}{r l} & {a\big(\rho (r_{1}),c_{1}\big) = a\big(\rho_{1}(\rho_{2}(r_{1})),c_{1}\big)\geqslant a\big(\rho_{2}(r_{1}),c_{1}\big)\geqslant a\big(\rho_{2}(r_{1}),\sigma_{2}(c_{1})\big)}\\ & {\qquad \geqslant a\big(r_{1},\sigma_{2}(c_{1})\big)\geqslant a\big(r_{1},\sigma_{1}(\sigma_{2}(c_{1}))\big) = a\big(r_{1},\sigma (c_{1})\big),} \end{array} \quad (2)\]
as desired.
Step 2: Given maps \(\rho\) and \(\sigma\) , we construct a proper saddle subpair \((R^{\prime},C^{\prime})\) of \((R_{1},C_{1})\) .
The properties of \(\rho\) and \(\sigma\) yield that
\[a\big(\rho^{i}(r_{1}),c_{1}\big)\geqslant a\big(\rho^{i - 1}(r_{1}),\sigma (c_{1})\big)\geqslant \ldots \geqslant a\big(r_{1},\sigma^{i}(c_{1})\big),\]
for each positive integer \(i\) and all \(r_{1}\in R_{1}\) , \(c_{1}\in C_{1}\) .
Consider the images \(R^{i} = \rho^{i}(R_{1})\) and \(C^{i} = \sigma^{i}(C_{1})\) . Clearly, \(R_{1} = R^{0}\supseteq R^{1}\supseteq R^{2}\supseteq \ldots\) and \(C_{1} = C^{0}\supseteq C^{1}\supseteq C^{2}\supseteq \ldots\) . Since both chains consist of finite sets, there is an index \(n\) such that \(R^{n} = R^{n + 1} = \ldots\) and \(C^{n} = C^{n + 1} = \ldots\) . Then \(\rho^{n}(R^{n}) = R^{2n} = R^{n}\) , so \(\rho^{n}\) restricted to \(R^{n}\) is a bijection. Similarly, \(\sigma^{n}\) restricted to \(C^{n}\) is a bijection from \(C^{n}\) to itself. Therefore, there exists a positive integer \(k\) such that \(\rho^{nk}\) acts identically on \(R^{n}\) , and \(\sigma^{nk}\) acts identically on \(C^{n}\) .
We claim now that \((R^{n},C^{n})\) is a saddle subpair of \((R_{1},C_{1})\) , with \(|R^{n}|\leqslant |R^{1}| = |\rho (R_{1})|\leqslant\) \(|R_{2}|\) , which is what we needed. To check that this is a saddle pair, take any row \(r^{\prime}\) ; since \((R_{1},C_{1})\) is a saddle pair, there exists \(r_{1}\in R_{1}\) such that \(a(r_{1},c_{1})\geqslant a(r^{\prime},c_{1})\) for all \(c_{1}\in C_{1}\) . Set now \(r_{*}=\rho^{nk}(r_{1})\in R^{n}\) . Then, for each \(c\in C^{n}\) we have \(c=\sigma^{nk}(c)\) and hence
\[a(r_{*},c) = a\big(\rho^{nk}(r_{1}),c\big)\geqslant a\big(r_{1},\sigma^{nk}(c)\big) = a(r_{1},c)\geqslant a(r^{\prime},c),\]
which establishes condition \((i)\) . Condition \((ii)\) is checked similarly.
Solution 2. Denote by \(\mathcal{R}\) and \(\mathcal{C}\) the set of all rows and the set of all columns of the table, respectively. Let \(\mathcal{T}\) denote the given table; for a set \(R\) of rows and a set \(C\) of columns, let \(\mathcal{T}[R,C]\) denote the subtable obtained by intersecting rows from \(R\) and columns from \(C\) .
We say that row \(r_{1}\) exceeds row \(r_{2}\) in range of columns \(C\) (where \(C\subseteq \mathcal{C}\)) and write \(r_{1}\geq_{C}r_{2}\) or \(r_{2}\leq_{C}r_{1}\), if \(a(r_{1},c)\geq a(r_{2},c)\) for all \(c\in C\) . We say that a row \(r_{1}\) is equal to a row \(r_{2}\) in range of columns \(C\) and write \(r_{1}\equiv_{C}r_{2}\), if \(a(r_{1},c) = a(r_{2},c)\) for all \(c\in C\) . We introduce similar notions, and use the same notation, for columns. Then conditions \((i)\) and \((ii)\) in the definition of a saddle pair can be written as \((i)\) for each \(r^{\prime}\in \mathcal{R}\) there exists \(r\in R\) such that \(r\geq_{C}r^{\prime}\) ; and \((ii)\) for each \(c^{\prime}\in \mathcal{C}\) there exists \(c\in C\) such that \(c\leq_{R}c^{\prime}\) .
Lemma. Suppose that \((R,C)\) is a minimal pair. Remove from the table several rows outside of \(R\) and/or several columns outside of \(C\) . Then \((R,C)\) remains a minimal pair in the new table.
Proof. Obviously, \((R,C)\) remains a saddle pair. Suppose \((R^{\prime},C^{\prime})\) is a proper subpair of \((R,C)\) . Since \((R,C)\) is a saddle pair, for each row \(r^{*}\) of the initial table, there is a row \(r\in R\) such that \(r\geq_{C}r^{*}\) . If \((R^{\prime},C^{\prime})\) became saddle after deleting rows not in \(R\) and/or columns not in \(C\) , there would be a row \(r^{\prime}\in R^{\prime}\) satisfying \(r^{\prime}\geq_{C^{\prime}}r\) . Therefore, we would obtain that \(r^{\prime}\geq_{C^{\prime}}r^{*}\) , which is exactly condition \((i)\) for the pair \((R^{\prime},C^{\prime})\) in the initial table; condition \((ii)\) is checked similarly. Thus, \((R^{\prime},C^{\prime})\) was saddle in the initial table, which contradicts the hypothesis that \((R,C)\) was minimal. Hence, \((R,C)\) remains minimal after deleting rows and/or columns. \(\square\)
By the Lemma, it suffices to prove the statement of the problem in the case \(\mathcal{R} = R_{1}\cup R_{2}\) and \(\mathcal{C} = C_{1}\cup C_{2}\) . Further, suppose that there exist rows that belong both to \(R_{1}\) and \(R_{2}\) . Duplicate every such row, and refer one copy of it to the set \(R_{1}\) , and the other copy to the set \(R_{2}\) . Then \((R_{1},C_{1})\) and \((R_{2},C_{2})\) will remain minimal pairs in the new table, with the same numbers of rows and columns, but the sets \(R_{1}\) and \(R_{2}\) will become disjoint. Similarly duplicating columns in \(C_{1}\cap C_{2}\) , we make \(C_{1}\) and \(C_{2}\) disjoint. Thus it is sufficient to prove the required statement in the case \(R_{1}\cap R_{2} = \emptyset\) and \(C_{1}\cap C_{2} = \emptyset\) .
The rest of the solution is devoted to the proof of the following claim including the statement of the problem.
Claim. Suppose that \((R_{1},C_{1})\) and \((R_{2},C_{2})\) are minimal pairs in table \(\mathcal{T}\) such that \(R_{2} = \mathcal{R}\setminus R_{1}\) and \(C_{2} = \mathcal{C}\setminus C_{1}\) . Then \(|R_{1}| = |R_{2}|\) , \(|C_{1}| = |C_{2}|\) ; moreover, there are four bijections
\[\begin{array}{r l} & {\rho_{1}\colon R_{2}\to R_{1}\quad \mathrm{such~that}\quad \rho_{1}(r_{2})\equiv_{C_{1}}r_{2}\quad \mathrm{for~all}\quad r_{2}\in R_{2};}\\ & {\rho_{2}\colon R_{1}\to R_{2}\quad \mathrm{such~that}\quad \rho_{2}(r_{1})\equiv_{C_{2}}r_{1}\quad \mathrm{for~all}\quad r_{1}\in R_{1};}\\ & {\sigma_{1}\colon C_{2}\to C_{1}\quad \mathrm{such~that}\quad \sigma_{1}(c_{2})\equiv_{R_{1}}c_{2}\quad \mathrm{for~all}\quad c_{2}\in C_{2};}\\ & {\sigma_{2}\colon C_{1}\to C_{2}\quad \mathrm{such~that}\quad \sigma_{2}(c_{1})\equiv_{R_{2}}c_{1}\quad \mathrm{for~all}\quad c_{1}\in C_{1}.} \end{array} \quad (3)\]
We prove the Claim by induction on \(|\mathcal{R}| + |\mathcal{C}|\) . In the base case we have \(|R_{1}| = |R_{2}| = |C_{1}| = |C_{2}| = 1\) ; let \(R_{i} = \{r_{i}\}\) and \(C_{i} = \{c_{i}\}\) . Since \((R_{1},C_{1})\) and \((R_{2},C_{2})\) are saddle pairs, we have \(a(r_{1},c_{1})\geq a(r_{2},c_{1})\geq a(r_{2},c_{2})\geq a(r_{1},c_{2})\geq a(r_{1},c_{1})\) , hence, the table consists of four equal numbers, and the statement follows.
To prove the inductive step, introduce the maps \(\rho_{1}\) , \(\rho_{2}\) , \(\sigma_{1}\) , and \(\sigma_{2}\) as in Solution 1, see (1). Suppose first that all four maps are surjective. Then, in fact, we have \(|R_{1}| = |R_{2}|\) , \(|C_{1}| = |C_{2}|\) , and all maps are bijective. Moreover, for all \(r_{2}\in R_{2}\) and \(c_{2}\in C_{2}\) we have
\[\begin{array}{r l} & {a(r_{2},c_{2})\leqslant a\big(r_{2},\sigma_{2}^{-1}(c_{2})\big)\leqslant a\big(\rho_{1}(r_{2}),\sigma_{2}^{-1}(c_{2})\big)\leqslant a\big(\rho_{1}(r_{2}),\sigma_{1}^{-1}\circ \sigma_{2}^{-1}(c_{2})\big)}\\ & {\qquad \leqslant a\big(\rho_{2}\circ \rho_{1}(r_{2}),\sigma_{1}^{-1}\circ \sigma_{2}^{-1}(c_{2})\big).} \end{array} \quad (4)\]
Summing up, we get
\[\sum_{\substack{r_{2}\in R_{2}\\ c_{2}\in C_{2}}}a(r_{2},c_{2})\leqslant \sum_{\substack{r_{2}\in R_{2}\\ c_{2}\in C_{2}}}a\big(\rho_{2}\circ \rho_{1}(r_{2}),\sigma_{1}^{-1}\circ \sigma_{2}^{-1}(c_{2})\big).\]
Since \(\rho_{2} \circ \rho_{1}\) and \(\sigma_{1}^{- 1} \circ \sigma_{2}^{- 1}\) are permutations of \(R_{2}\) and \(C_{2}\), respectively, this inequality is in fact an equality. Therefore, all inequalities in (4) turn into equalities, which establishes the inductive step in this case.
It remains to show that all four maps are surjective. For the sake of contradiction, we assume that \(\rho_{1}\) is not surjective. Now let \(R_{1}^{\prime} = \rho_{1}(R_{2})\) and \(C_{1}^{\prime} = \sigma_{1}(C_{2})\) , and set \(R^{*} = R_{1} \setminus R_{1}^{\prime}\) and \(C^{*} = C_{1} \setminus C_{1}^{\prime}\) . By our assumption, \(R^{*} \neq \emptyset\) .
Let \(\mathcal{Q}\) be the table obtained from \(\mathcal{T}\) by removing the rows in \(R^{*}\) and the columns in \(C^{*}\) ; in other words, \(\mathcal{Q} = \mathcal{T}[R_{1}^{\prime} \cup R_{2}, C_{1}^{\prime} \cup C_{2}]\) . By the definition of \(\rho_{1}\) , for each \(r_{2} \in R_{2}\) we have \(\rho_{1}(r_{2}) \geq_{C_{1}} r_{2}\) , so a fortiori \(\rho_{1}(r_{2}) \geq_{C_{1}^{\prime}} r_{2}\) ; moreover, \(\rho_{1}(r_{2}) \in R_{1}^{\prime}\) . Similarly, \(C_{1}^{\prime} \ni \sigma_{1}(c_{2}) \leq_{R_{1}^{\prime}} c_{2}\) for each \(c_{2} \in C_{2}\) . This means that \((R_{1}^{\prime}, C_{1}^{\prime})\) is a saddle pair in \(\mathcal{Q}\) . Recall that \((R_{2}, C_{2})\) remains a minimal pair in \(\mathcal{Q}\) , due to the Lemma.
Therefore, \(\mathcal{Q}\) admits a minimal pair \((\overline{R_{1}}, \overline{C_{1}})\) such that \(\overline{R_{1}} \subseteq R_{1}^{\prime}\) and \(\overline{C_{1}} \subseteq C_{1}^{\prime}\) . For a minute, confine ourselves to the subtable \(\overline{\mathcal{Q}} = \mathcal{Q}[\overline{R_{1}} \cup R_{2}, \overline{C_{1}} \cup C_{2}]\) . By the Lemma, the pairs \((\overline{R_{1}}, \overline{C_{1}})\) and \((R_{2}, C_{2})\) are also minimal in \(\overline{\mathcal{Q}}\) . By the inductive hypothesis, we have \(|R_{2}| = |\overline{R_{1}} | \leq |R_{1}^{\prime}| = |\rho_{1}(R_{2})| \leq |R_{2}|\) , so all these inequalities are in fact equalities. This implies that \(\overline{R_{1}} = R_{1}^{\prime}\) and that \(\rho_{1}\) is a bijection \(R_{2} \to R_{1}^{\prime}\) . Similarly, \(\overline{C_{1}} = C_{1}^{\prime}\) , and \(\sigma_{1}\) is a bijection \(C_{2} \to C_{1}^{\prime}\) . In particular, \((R_{1}^{\prime}, C_{1}^{\prime})\) is a minimal pair in \(\mathcal{Q}\) .
Now, by inductive hypothesis again, we have \(|R_{1}^{\prime}| = |R_{2}|\) , \(|C_{1}^{\prime}| = |C_{2}|\) , and there exist four bijections
\[\rho_{1}^{\prime}:R_{2}\to R_{1}^{\prime}\quad \mathrm{such~that}\quad \rho_{1}^{\prime}(r_{2})\equiv_{C_{1}^{\prime}}r_{2}\quad \mathrm{for~all}\quad r_{2}\in R_{2};\] \[\rho_{2}^{\prime}:R_{1}^{\prime}\to R_{2}\quad \mathrm{such~that}\quad \rho_{2}^{\prime}(r_{1})\equiv_{C_{2}}r_{1}\quad \mathrm{for~all}\quad r_{1}\in R_{1}^{\prime};\] \[\sigma_{1}^{\prime}:C_{2}\to C_{1}^{\prime}\quad \mathrm{such~that}\quad \sigma_{1}^{\prime}(c_{2})\equiv_{R_{1}^{\prime}}c_{2}\quad \mathrm{for~all}\quad c_{2}\in C_{2};\] \[\sigma_{2}^{\prime}:C_{1}^{\prime}\to C_{2}\quad \mathrm{such~that}\quad \sigma_{2}^{\prime}(c_{1})\equiv_{R_{2}}c_{1}\quad \mathrm{for~all}\quad c_{1}\in C_{1}^{\prime}.\]
Notice here that \(\sigma_{1}\) and \(\sigma_{1}^{\prime}\) are two bijections \(C_{2} \to C_{1}^{\prime}\) satisfying \(\sigma_{1}^{\prime}(c_{2}) \equiv_{R_{1}^{\prime}} c_{2} \geq_{R_{1}} \sigma_{1}(c_{2})\) for all \(c_{2} \in C_{2}\) . Now, if \(\sigma_{1}^{\prime}(c_{2}) \neq \sigma_{1}(c_{2})\) for some \(c_{2} \in C_{2}\) , then we could remove column \(\sigma_{1}^{\prime}(c_{2})\) from \(C_{1}^{\prime}\) obtaining another saddle pair \(\left(R_{1}^{\prime}, C_{1}^{\prime} \setminus \{\sigma_{1}^{\prime}(c_{2})\} \right)\) in \(\mathcal{Q}\) . This is impossible for a minimal pair \((R_{1}^{\prime}, C_{1}^{\prime})\) ; hence the maps \(\sigma_{1}\) and \(\sigma_{1}^{\prime}\) coincide.
Now we are prepared to show that \((R_{1}^{\prime}, C_{1}^{\prime})\) is a saddle pair in \(\mathcal{T}\) , which yields a desired contradiction (since \((R_{1}, C_{1})\) is not minimal). By symmetry, it suffices to find, for each \(r^{\prime} \in \mathcal{R}\) , a row \(r_{1} \in R_{1}^{\prime}\) such that \(r_{1} \geq_{C_{1}^{\prime}} r^{\prime}\) . If \(r^{\prime} \in R_{2}\) , then we may put \(r_{1} = \rho_{1}(r^{\prime})\) ; so, in the sequel we assume \(r^{\prime} \in R_{1}\) .
There exists \(r_{2} \in R_{2}\) such that \(r^{\prime} \leq_{C_{2}} r_{2}\) ; set \(r_{1} = (\rho_{2}^{\prime})^{- 1}(r_{2}) \in R_{1}^{\prime}\) and recall that \(r_{1} \equiv_{C_{2}} r_{2} \geq_{C_{2}} r^{\prime}\) . Therefore, implementing the bijection \(\sigma_{1} = \sigma_{1}^{\prime}\) , for each \(c_{1} \in C_{1}^{\prime}\) we get
\[a(r^{\prime}, c_{1}) \leq a(r^{\prime}, \sigma_{1}^{-1}(c_{1})) \leq a(r_{1}, \sigma_{1}^{-1}(c_{1})) = a(r_{1}, \sigma_{1}^{\prime} \circ \sigma_{1}^{-1}(c_{1})) = a(r_{1}, c_{1}),\]
which shows \(r^{\prime} \leq_{C_{1}^{\prime}} r_{1}\) , as desired. The inductive step is completed.
Comment 1. For two minimal pairs \((R_{1}, C_{1})\) and \((R_{2}, C_{2})\) , Solution 2 not only proves the required equalities \(|R_{1}| = |R_{2}|\) and \(|C_{1}| = |C_{2}|\) , but also shows the existence of bijections (3). In simple words, this means that the four subtables \(\mathcal{T}[R_{1}, C_{1}]\) , \(\mathcal{T}[R_{1}, C_{2}]\) , \(\mathcal{T}[R_{2}, C_{1}]\) , and \(\mathcal{T}[R_{2}, C_{2}]\) differ only by permuting rows/columns. Notice that the existence of such bijections immediately implies that \((R_{1}, C_{2})\) and \((R_{2}, C_{1})\) are also minimal pairs.
This stronger claim may also be derived directly from the arguments in Solution 1, even without the assumptions \(R_{1} \cap R_{2} = \emptyset\) and \(C_{1} \cap C_{2} = \emptyset\) . Indeed, if \(|R_{1}| = |R_{2}|\) and \(|C_{1}| = |C_{2}|\) , then similar arguments show that \(R^{n} = R_{1}\) , \(C^{n} = C_{1}\) , and for any \(r \in R^{n}\) and \(c \in C^{n}\) we have
\[a(r, c) = a(\rho^{nk}(r), c) \geq a(\rho^{nk - 1}(r), \sigma(c)) \geq \ldots \geq a(r, \sigma^{nk}(c)) = a(r, c).\]
This yields that all above inequalities turn into equalities. Moreover, this yields that all inequalities in (2) turn into equalities. Hence \(\rho_{1}, \rho_{2}, \sigma_{1}\) , and \(\sigma_{2}\) satisfy (3).
It is perhaps worth mentioning that one cannot necessarily find the maps in (3) so as to satisfy \(\rho_{1} = \rho_{2}^{- 1}\) and \(\sigma_{1} = \sigma_{2}^{- 1}\) , as shown by the table below.
<table><tr><td>1</td><td>0</td><td>0</td><td>1</td></tr><tr><td>0</td><td>1</td><td>1</td><td>0</td></tr><tr><td>1</td><td>0</td><td>1</td><td>0</td></tr><tr><td>0</td><td>1</td><td>0</td><td>1</td></tr></table>
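As a sanity check, the saddle and minimal pairs of this \(4\times 4\) table can be enumerated exhaustively; a sketch (hypothetical helper names) confirming that all minimal pairs share the same numbers of rows and columns:

```python
from itertools import combinations

T = [[1, 0, 0, 1],
     [0, 1, 1, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1]]
nR, nC = len(T), len(T[0])

def is_saddle(R, C):
    # (i): every row r' is dominated on C by some row of R
    cond_i = all(any(all(T[r][c] >= T[rp][c] for c in C) for r in R)
                 for rp in range(nR))
    # (ii): every column c' dominates some column of C on R
    cond_ii = all(any(all(T[r][c] <= T[r][cp] for r in R) for c in C)
                  for cp in range(nC))
    return cond_i and cond_ii

def subsets(n):
    for k in range(1, n + 1):
        yield from combinations(range(n), k)

saddle = [(set(R), set(C)) for R in subsets(nR) for C in subsets(nC)
          if is_saddle(R, C)]
minimal = [(R, C) for (R, C) in saddle
           if not any(R2 <= R and C2 <= C and (R2, C2) != (R, C)
                      for (R2, C2) in saddle)]
rows = {len(R) for (R, C) in minimal}
cols = {len(C) for (R, C) in minimal}
print(rows, cols)   # all minimal pairs share both counts
```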
Comment 2. One may use the following, a bit more entertaining formulation of the same problem.
On a specialized market, a finite number of products are being sold, and there are finitely many retailers, each selling all the products at some prices. Say that retailer \(r_{1}\) dominates retailer \(r_{2}\) with respect to a set of products \(P\) if \(r_{1}\) 's price of each \(p\in P\) does not exceed \(r_{2}\) 's price of \(p\) . Similarly, product \(p_{1}\) exceeds product \(p_{2}\) with respect to a set of retailers \(R\) , if \(r\) 's price of \(p_{1}\) is not less than \(r\) 's price of \(p_{2}\) , for each \(r\in R\) .
Say that a set \(R\) of retailers and a set \(P\) of products form a saddle pair if for each retailer \(r^{\prime}\) there is \(r\in R\) dominating \(r^{\prime}\) with respect to \(P\) , and for each product \(p^{\prime}\) there is \(p\in P\) exceeding \(p^{\prime}\) with respect to \(R\) . A saddle pair \((R,P)\) is called a minimal pair if for each saddle pair \((R^{\prime},P^{\prime})\) with \(R^{\prime}\subseteq R\) and \(P^{\prime}\subseteq P\) , we have \(R^{\prime} = R\) and \(P^{\prime} = P\) .
Prove that any two minimal pairs contain the same number of retailers.
|
IMOSL-2020-G1
|
Let \(A B C\) be an isosceles triangle with \(B C = C A\) , and let \(D\) be a point inside side \(A B\) such that \(A D< D B\) . Let \(P\) and \(Q\) be two points inside sides \(B C\) and \(C A\) , respectively, such that \(\angle D P B = \angle D Q A = 90^{\circ}\) . Let the perpendicular bisector of \(P Q\) meet line segment \(C Q\) at \(E\) , and let the circumcircles of triangles \(A B C\) and \(C P Q\) meet again at point \(F\) , different from \(C\) .
Suppose that \(P, E, F\) are collinear. Prove that \(\angle A C B = 90^{\circ}\) .
|
Solution 1. Let \(\ell\) be the perpendicular bisector of \(PQ\), and denote by \(\omega\) the circle \(CFPQ\). By \(DP\perp BC\) and \(DQ\perp AC\), the circle \(\omega\) passes through \(D\); moreover, \(CD\) is a diameter of \(\omega\).
The lines \(Q E\) and \(P E\) are symmetric about \(\ell\) , and \(\ell\) is a symmetry axis of \(\omega\) as well; it follows that the chords \(C Q\) and \(F P\) are symmetric about \(\ell\) , hence \(C\) and \(F\) are symmetric about \(\ell\) . Therefore, the perpendicular bisector of \(C F\) coincides with \(\ell\) . Thus \(\ell\) passes through the circumcenter \(O\) of \(A B C\) .
Let \(M\) be the midpoint of \(A B\) . Since \(C M\perp D M\) , \(M\) also lies on \(\omega\) . By \(\angle A C M = \angle B C M\) , the chords \(M P\) and \(M Q\) of \(\omega\) are equal. Then, from \(M P = M Q\) it follows that \(\ell\) passes through \(M\) .

Finally, both \(O\) and \(M\) lie on lines \(\ell\) and \(C M\) , therefore \(O = M\) , and \(\angle A C B = 90^{\circ}\) follows.
Solution 2. Like in the first solution, we conclude that points \(C\) , \(P\) , \(Q\) , \(D\) , \(F\) and the midpoint \(M\) of \(A B\) lie on one circle \(\omega\) with diameter \(C D\) , and \(M\) lies on \(\ell\) , the perpendicular bisector of \(P Q\) .
Let \(B F\) and \(C M\) meet at \(G\) and let \(\alpha = \angle A B F\) . Then, since \(E\) lies on \(\ell\) , and the quadrilaterals \(F C B A\) and \(F C P Q\) are cyclic, we have
\[\angle C Q P = \angle F P Q = \angle F C Q = \angle F C A = \angle F B A = \alpha .\]
Since points \(P\) , \(E\) , \(F\) are collinear, we have
\[\angle F E M = \angle F E Q + \angle Q E M = 2\alpha +(90^{\circ} - \alpha) = 90^{\circ} + \alpha .\]
But \(\angle F G M = 90^{\circ} + \alpha\) , so \(F E G M\) is cyclic. Hence
\[\angle E G C = \angle E F M = \angle P F M = \angle P C M.\]
Thus \(G E\parallel B C\) . It follows that \(\angle F A C = \angle C B F = \angle E G F\) , so \(F E G A\) is cyclic, too. Hence \(\angle A C B = \angle A F B = \angle A F G = 180^{\circ} - \angle A M G = 90^{\circ}\) , which completes the proof.

Comment 1. The converse statement is true: if \(\angle ACB = 90^{\circ}\) then points \(P\) , \(E\) and \(F\) are collinear. This direction is easier to prove.
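The converse mentioned above is easy to confirm in coordinates. A numeric sketch (the coordinates are ad hoc choices, not part of the proof): with \(C = (0,0)\), \(A = (2,0)\), \(B = (0,2)\) and \(D\) on \(AB\) with \(AD < DB\), the points \(P\), \(E\), \(F\) come out collinear.

```python
# Numeric sketch of the converse: with angle ACB = 90 degrees, the
# points P, E, F turn out to be collinear.  Coordinates are ad hoc.
C, A, B = (0.0, 0.0), (2.0, 0.0), (0.0, 2.0)   # CA = CB, right angle at C
t = 0.3                                          # D on AB with AD < DB
D = (A[0] + t * (B[0] - A[0]), A[1] + t * (B[1] - A[1]))   # (1.4, 0.6)
P = (0.0, D[1])        # foot of perpendicular from D to BC (the y-axis)
Q = (D[0], 0.0)        # foot of perpendicular from D to CA (the x-axis)
# E: on line CQ (the x-axis), equidistant from P and Q
x = (Q[0] ** 2 - P[1] ** 2) / (2 * Q[0])
E = (x, 0.0)
# F: second intersection of circles (ABC) and (CPQ).
# (ABC): x^2 + y^2 = 2x + 2y;  (CPQ) has diameter PQ: x^2 + y^2 = qx*x + py*y.
qx, py = Q[0], P[1]
# subtracting the equations gives the radical line through C and F:
k = -(2 - qx) / (2 - py)            # its slope: y = k*x
fx = (qx + py * k) / (1 + k * k)    # plug into the (CPQ) equation
F = (fx, k * fx)
cross = (E[0] - P[0]) * (F[1] - P[1]) - (E[1] - P[1]) * (F[0] - P[0])
assert abs(cross) < 1e-12           # P, E, F collinear
```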
Comment 2. The statement of the problem remains true if the projection \(P\) of \(D\) onto \(BC\) lies outside line segment \(BC\) . The restriction that \(P\) lies inside line segment \(BC\) is given to reduce case sensitivity.
|
IMOSL-2020-G2
|
Let \(ABCD\) be a convex quadrilateral. Suppose that \(P\) is a point in the interior of \(ABCD\) such that
\[\angle PAD:\angle PBA:\angle DPA = 1:2:3 = \angle CBP:\angle BAP:\angle BPC.\]
The internal bisectors of angles \(ADP\) and \(PCB\) meet at a point \(Q\) inside the triangle \(ABP\) . Prove that \(AQ = BQ\) .
|
Solution 1. Let \(\phi = \angle PAD\) and \(\psi = \angle CBP\) ; then we have \(\angle PBA = 2\phi\) , \(\angle DPA = 3\phi\) , \(\angle BAP = 2\psi\) and \(\angle BPC = 3\psi\) . Let \(X\) be the point on segment \(AD\) with \(\angle XPA = \phi\) . Then
\[\angle PXD = \angle PAX + \angle XPA = 2\phi = \angle DPA - \angle XPA = \angle DPX.\]
It follows that triangle \(DPX\) is isosceles with \(DX = DP\) and therefore the internal angle bisector of \(\angle ADP\) coincides with the perpendicular bisector of \(XP\) . Similarly, if \(Y\) is a point on \(BC\) such that \(\angle BPY = \psi\) , then the internal angle bisector of \(\angle PCB\) coincides with the perpendicular bisector of \(PY\) . Hence, we have to prove that the perpendicular bisectors of \(XP\) , \(PY\) , and \(AB\) are concurrent.

Notice that
\[\angle AXP = 180^{\circ} - \angle PXD = 180^{\circ} - 2\phi = 180^{\circ} - \angle PBA.\]
Hence the quadrilateral \(AXPB\) is cyclic; in other words, \(X\) lies on the circumcircle of triangle \(APB\) . Similarly, \(Y\) lies on the circumcircle of triangle \(APB\) . It follows that the perpendicular bisectors of \(XP\) , \(PY\) , and \(AB\) all pass through the center of circle \((ABYPX)\) . This finishes the proof.
Comment. Introduction of points \(X\) and \(Y\) seems to be the key step in the solution above. Note that the same point \(X\) could be introduced in different ways, e.g., as the point on the ray \(CP\) beyond \(P\) such that \(\angle PBX = \phi\) , or as the second point where the circle \((APB)\) meets line \(AD\) . Different definitions of \(X\) could lead to different versions of the further solution.
Solution 2. We define the angles \(\phi = \angle PAD\) , \(\psi = \angle CBP\) and use \(\angle PBA = 2\phi\) , \(\angle DPA = 3\phi\) , \(\angle BAP = 2\psi\) and \(\angle BPC = 3\psi\) again. Let \(O\) be the circumcenter of \(\triangle APB\) .
Notice that \(\angle ADP = 180^{\circ} - \angle PAD - \angle DPA = 180^{\circ} - 4\phi\) , which, in particular, means that \(4\phi < 180^{\circ}\) . Further, \(\angle POA = 2\angle PBA = 4\phi = 180^{\circ} - \angle ADP\) , therefore the quadrilateral \(ADPO\) is cyclic. By \(AO = OP\) , it follows that \(\angle ADO = \angle ODP\) . Thus \(DO\) is the internal bisector of \(\angle ADP\) . Similarly, \(CO\) is the internal bisector of \(\angle PCB\) .

Finally, \(O\) lies on the perpendicular bisector of \(AB\) as it is the circumcenter of \(\triangle APB\) . Therefore the three given lines in the problem statement concur at point \(O\) .
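Solution 2 in fact shows that the two bisectors and the perpendicular bisector of \(AB\) all pass through the circumcenter \(O\) of \(\triangle APB\). This can be checked numerically; in the sketch below (angle values and helper names are ad hoc), a configuration is rebuilt from \(\phi\) and \(\psi\) via the law of sines, and the meeting point \(Q\) of the two bisectors is compared with \(O\).

```python
import cmath, math

def inter(p, u, q, v):
    """Intersection of lines p + t*u and q + s*v (complex numbers)."""
    cross = lambda a, b: (a.conjugate() * b).imag
    return p + cross(q - p, v) / cross(u, v) * u

cis = lambda a: cmath.exp(1j * a)
phi, psi = 0.30, 0.25                 # angles PAD and CBP (radians, ad hoc)
A, B = 0 + 0j, 1 + 0j
P = inter(A, cis(2 * psi), B, cis(math.pi - 2 * phi))   # BAP=2psi, PBA=2phi
# Triangle APD has angles phi at A and 3*phi at P, so pi - 4*phi at D:
D = A + abs(P - A) * math.sin(3 * phi) / math.sin(4 * phi) * cis(2 * psi + phi)
# Triangle BPC has angles psi at B and 3*psi at P, so pi - 4*psi at C:
C = B + (abs(P - B) * math.sin(3 * psi) / math.sin(4 * psi)
         * cis(math.pi - 2 * phi - psi))
bis = lambda X, Y, Z: (Y - X) / abs(Y - X) + (Z - X) / abs(Z - X)
Q = inter(D, bis(D, A, P), C, bis(C, P, B))   # meet of the two bisectors
O = inter((A + B) / 2, 1j * (B - A), (A + P) / 2, 1j * (P - A))
assert abs(abs(Q - A) - abs(Q - B)) < 1e-12   # AQ = BQ
assert abs(Q - O) < 1e-12                     # Q is the circumcenter of APB
```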
|
IMOSL-2020-G3
|
Let \(ABCD\) be a convex quadrilateral with \(\angle ABC > 90^{\circ}\) , \(\angle CDA > 90^{\circ}\) , and \(\angle DAB = \angle BCD\) . Denote by \(E\) and \(F\) the reflections of \(A\) in lines \(BC\) and \(CD\) , respectively. Suppose that the segments \(AE\) and \(AF\) meet the line \(BD\) at \(K\) and \(L\) , respectively. Prove that the circumcircles of triangles \(BEK\) and \(DFL\) are tangent to each other.
|
Solution 1. Denote by \(A'\) the reflection of \(A\) in \(BD\) . We will show that the quadrilaterals \(A'BKE\) and \(A'DLF\) are cyclic, and their circumcircles are tangent to each other at point \(A'\) .
From the symmetry about line \(BC\) we have \(\angle BEK = \angle BAK\) , while from the symmetry in \(BD\) we have \(\angle BAK = \angle BA'K\) . Hence \(\angle BEK = \angle BA'K\) , which implies that the quadrilateral \(A'BKE\) is cyclic. Similarly, the quadrilateral \(A'DLF\) is also cyclic.

For showing that circles \(A'BKE\) and \(A'DLF\) are tangent it suffices to prove that
\[\angle A'KB + \angle A'LD = \angle BA'D.\]
Indeed, by \(AK \perp BC\) , \(AL \perp CD\) , and again the symmetry in \(BD\) we have
\[\angle A'KB + \angle A'LD = 180^{\circ} - \angle KA'L = 180^{\circ} - \angle KAL = \angle BCD = \angle BAD = \angle BA'D,\]
as required.
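The tangency claim lends itself to a quick numerical sanity check (an illustration only, not part of the proof). The sketch below builds a hypothetical sample configuration: the condition \(\angle DAB = \angle BCD\) is enforced by placing \(C\) on the arc over \(BD\) from which the chord \(BD\) subtends the same angle as it does from \(A\); all coordinates are arbitrary choices for the experiment.

```python
import math

def reflect(P, Q1, Q2):
    # Reflect point P in the line through Q1 and Q2.
    dx, dy = Q2[0]-Q1[0], Q2[1]-Q1[1]
    t = ((P[0]-Q1[0])*dx + (P[1]-Q1[1])*dy) / (dx*dx + dy*dy)
    return (2*(Q1[0]+t*dx) - P[0], 2*(Q1[1]+t*dy) - P[1])

def circumcircle(P, Q, S):
    # Center and radius of the circle through three points.
    ax, ay = P; bx, by = Q; cx, cy = S
    d = 2*(ax*(by-cy) + bx*(cy-ay) + cx*(ay-by))
    ux = ((ax*ax+ay*ay)*(by-cy) + (bx*bx+by*by)*(cy-ay) + (cx*cx+cy*cy)*(ay-by)) / d
    uy = ((ax*ax+ay*ay)*(cx-bx) + (bx*bx+by*by)*(ax-cx) + (cx*cx+cy*cy)*(bx-ax)) / d
    return (ux, uy), math.hypot(ax-ux, ay-uy)

def ang(P, Q, R):
    # Angle QPR at vertex P.
    v1 = (Q[0]-P[0], Q[1]-P[1]); v2 = (R[0]-P[0], R[1]-P[1])
    return math.acos((v1[0]*v2[0] + v1[1]*v2[1]) / (math.hypot(*v1)*math.hypot(*v2)))

# Sample quadrilateral: angle DAB = angle BCD is forced by putting C on the
# arc over BD seeing the chord BD at the same angle as A does.
B, D, A = (1.0, 0.0), (-1.0, 0.0), (0.0, -1.5)
theta = ang(A, D, B)                        # angle DAB
rad = 1.0 / math.sin(theta)                 # = BD/(2 sin theta), since BD = 2
ctr = (0.0, math.sqrt(rad*rad - 1.0))
C = (ctr[0] + rad*math.sin(0.1), ctr[1] + rad*math.cos(0.1))

E = reflect(A, B, C)                        # reflection of A in line BC
F = reflect(A, C, D)                        # reflection of A in line CD
sK = -A[1] / (E[1]-A[1])                    # AE crosses line BD (the x-axis)
K = (A[0] + sK*(E[0]-A[0]), 0.0)
sL = -A[1] / (F[1]-A[1])
L = (A[0] + sL*(F[0]-A[0]), 0.0)

O1, r1 = circumcircle(B, E, K)
O2, r2 = circumcircle(D, F, L)
dist = math.dist(O1, O2)
gap = min(abs(dist - (r1+r2)), abs(dist - abs(r1-r2)))
print(gap)  # ~0: the two circles are tangent
```

With these coordinates the circles come out externally tangent, and the tangency point is \(A' = (0, 1.5)\), the reflection of \(A\) in line \(BD\), in accordance with the solution.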
Comment 1. The key to the solution above is introducing the point \(A'\) ; then the angle calculations can be done in many different ways.
Solution 2. Note that \(\angle KAL = 180^{\circ} - \angle BCD\) , since \(AK\) and \(AL\) are perpendicular to \(BC\) and \(CD\) , respectively. Reflect both circles \((BEK)\) and \((DFL)\) in \(BD\) . Since \(\angle KEB = \angle KAB\) and \(\angle DFL = \angle DAL\) , the images are the circles \((KAB)\) and \((LAD)\) , respectively; so they meet at \(A\) . We need to prove that those two reflections are tangent at \(A\) .
For this purpose, we observe that
\[\angle AKB + \angle ALD = 180^{\circ} - \angle KAL = \angle BCD = \angle BAD.\]
Thus, there exists a ray \(AP\) inside angle \(\angle BAD\) such that \(\angle BAP = \angle AKB\) and \(\angle DAP = \angle DLA\) . Hence the line \(AP\) is a common tangent to the circles \((KAB)\) and \((LAD)\) , as desired.
Comment 2. The statement of the problem remains true for a more general configuration, e.g., if line \(BD\) intersect the extension of segment \(AE\) instead of the segment itself, etc. The corresponding restrictions in the statement are given to reduce case sensitivity.
|
IMOSL-2020-G4
|
In the plane, there are \(n \geq 6\) pairwise disjoint disks \(D_{1}, D_{2}, \ldots , D_{n}\) with radii \(R_{1} \geq R_{2} \geq \ldots \geq R_{n}\) . For every \(i = 1, 2, \ldots , n\) , a point \(P_{i}\) is chosen in disk \(D_{i}\) . Let \(O\) be an arbitrary point in the plane. Prove that
\[O P_{1} + O P_{2} + \ldots +O P_{n}\geq R_{6} + R_{7} + \ldots +R_{n}.\]
(A disk is assumed to contain its boundary.)
|
Solution. We will make use of the following lemma.
Lemma. Let \(D_{1},\ldots ,D_{6}\) be disjoint disks in the plane with radii \(R_{1},\ldots ,R_{6}\) . Let \(P_{i}\) be a point in \(D_{i}\) , and let \(O\) be an arbitrary point. Then there exist indices \(i\) and \(j\) such that \(O P_{i}\geqslant R_{j}\) .
Proof. Let \(O_{i}\) be the center of \(D_{i}\) . Consider six rays \(O O_{1}\) , ..., \(O O_{6}\) (if \(O = O_{i}\) , then the ray \(O O_{i}\) may be assumed to have an arbitrary direction). These rays partition the plane into six angles (one of which may be non- convex) whose measures sum up to \(360^{\circ}\) ; hence one of the angles, say \(\angle O_{i}O O_{j}\) , has measure at most \(60^{\circ}\) . Then \(O_{i}O_{j}\) cannot be the unique largest side in (possibly degenerate) triangle \(O O_{i}O_{j}\) , so, without loss of generality, \(O O_{i}\geqslant O_{i}O_{j}\geqslant R_{i} + R_{j}\) . Therefore, \(O P_{i}\geqslant O O_{i} - R_{i}\geqslant (R_{i} + R_{j}) - R_{i} = R_{j}\) , as desired. \(\square\)
Now we prove the required inequality by induction on \(n \geq 5\) . The base case \(n = 5\) is trivial. For the inductive step, apply the Lemma to the six largest disks, in order to find indices \(i\) and \(j\) such that \(1 \leq i, j \leq 6\) and \(O P_{i} \geqslant R_{j} \geqslant R_{6}\) . Removing \(D_{i}\) from the configuration and applying the inductive hypothesis, we get
\[\sum_{k \neq i} O P_{k} \geqslant \sum_{\ell \geqslant 7} R_{\ell}.\]
Adding up this inequality with \(O P_{i} \geqslant R_{6}\) we establish the inductive step.
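The inequality is easy to sanity-check on random data (again an illustration, not a proof). In the sketch below, all concrete choices — the grid layout, the radius range, and the point \(O\) — are arbitrary; the grid spacing simply guarantees that the disks are pairwise disjoint.

```python
import math, random

random.seed(1)
n = 20
# Centers on a coarse grid (pairwise distance >= 10) and radii < 5 keep the
# disks pairwise disjoint; radii are listed in non-increasing order.
centers = [(10.0*i, 10.0*((i*i) % 7)) for i in range(n)]
radii = sorted((random.uniform(1.0, 4.0) for _ in range(n)), reverse=True)

points = []                                  # P_i: a random point inside disk i
for (cx, cy), r in zip(centers, radii):
    a, d = random.uniform(0, 2*math.pi), random.uniform(0, r)
    points.append((cx + d*math.cos(a), cy + d*math.sin(a)))

O = (37.0, 11.0)                             # an arbitrary point
lhs = sum(math.dist(O, P) for P in points)
rhs = sum(radii[5:])                         # R_6 + ... + R_n (0-indexed list)
print(lhs >= rhs)  # True
```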
Comment 1. It is irrelevant to the problem whether the disks contain their boundaries or not. This condition is included for clarity reasons only. The problem statement remains true, and the solution works verbatim, if the disks are assumed to have disjoint interiors.
Comment 2. There are several variations of the above solution. In particular, while performing the inductive step, one may remove the disk with the largest value of \(O P_{i}\) and apply the inductive hypothesis to the remaining disks (the Lemma should still be applied to the six largest disks).
Comment 3. While proving the Lemma, one may reduce it to a particular case when the disks are congruent, as follows: Choose the smallest radius \(r\) of the disks in the Lemma statement, and then replace, for each \(i\) , the \(i^{\mathrm{th}}\) disk with its homothetic copy, using the homothety centered at \(P_{i}\) with ratio \(r / R_{i}\) .
This argument shows that the Lemma is tightly connected to a circle packing problem, see, e.g., https://en.wikipedia.org/wiki/Circle_packing_in_a_circle. The known results on that problem provide versions of the Lemma for different numbers of disks, which lead to different inequalities of the same kind. E.g., for 4 disks the best possible estimate in the Lemma is \(O P_{i} \geqslant (\sqrt{2} - 1) R_{j}\) , while for 13 disks it has the form \(O P_{i} \geqslant \sqrt{5} R_{j}\) . Arguing as in the above solution, one obtains the inequalities
\[\sum_{i = 1}^{n} O P_{i} \geqslant (\sqrt{2} - 1) \sum_{j = 4}^{n} R_{j} \quad \text{and} \quad \sum_{i = 1}^{n} O P_{i} \geqslant \sqrt{5} \sum_{j = 13}^{n} R_{j}.\]
However, there are some harder arguments which allow one to improve these inequalities, meaning that the \(R_{j}\) with large indices may be taken with much greater factors.
|
IMOSL-2020-G5
|
Let \(A B C D\) be a cyclic quadrilateral with no two sides parallel. Let \(K\) , \(L\) , \(M\) , and \(N\) be points lying on sides \(A B\) , \(B C\) , \(C D\) , and \(D A\) , respectively, such that \(K L M N\) is a rhombus with \(K L \parallel A C\) and \(L M \parallel B D\) . Let \(\omega_{1}\) , \(\omega_{2}\) , \(\omega_{3}\) , and \(\omega_{4}\) be the incircles of triangles \(A N K\) , \(B K L\) , \(C L M\) , and \(D M N\) , respectively. Prove that the internal common tangents to \(\omega_{1}\) and \(\omega_{3}\) and the internal common tangents to \(\omega_{2}\) and \(\omega_{4}\) are concurrent.
|
Solution 1. Let \(I_{i}\) be the center of \(\omega_{i}\), and let \(r_{i}\) be its radius for \(i = 1,2,3,4\). Denote by \(T_{1}\) and \(T_{3}\) the points of tangency of \(\omega_{1}\) and \(\omega_{3}\) with \(NK\) and \(LM\), respectively. Suppose that the internal common tangents to \(\omega_{1}\) and \(\omega_{3}\) meet at point \(S\), which is the center of the homothety \(h\) with negative ratio (namely, with ratio \(- \frac{r_{3}}{r_{1}}\)) mapping \(\omega_{1}\) to \(\omega_{3}\). This homothety takes \(T_{1}\) to \(T_{3}\) (since the tangents to \(\omega_{1}\) and \(\omega_{3}\) at \(T_{1}\) and \(T_{3}\) are parallel), hence \(S\) is a point on the segment \(T_{1}T_{3}\) with \(T_{1}S: S T_{3} = r_{1}: r_{3}\).
Construct segments \(S_{1}S_{3} \parallel K L\) and \(S_{2}S_{4} \parallel L M\) through \(S\) with \(S_{1} \in N K\) , \(S_{2} \in K L\) , \(S_{3} \in L M\) , and \(S_{4} \in M N\) . Note that \(h\) takes \(S_{1}\) to \(S_{3}\) , hence \(I_{1}S_{1} \parallel I_{3}S_{3}\) , and \(S_{1}S: S S_{3} = r_{1}: r_{3}\) . We will prove that \(S_{2}S: S S_{4} = r_{2}: r_{4}\) or, equivalently, \(K S_{1}: S_{1}N = r_{2}: r_{4}\) . This will yield the problem statement; indeed, applying similar arguments to the intersection point \(S^{\prime}\) of the internal common tangents to \(\omega_{2}\) and \(\omega_{4}\) , we see that \(S^{\prime}\) satisfies similar relations, and there is a unique point inside \(K L M N\) satisfying them. Therefore, \(S^{\prime} = S\) .

Further, denote by \(I_{A}\) , \(I_{B}\) , \(I_{C}\) , \(I_{D}\) and \(r_{A}\) , \(r_{B}\) , \(r_{C}\) , \(r_{D}\) the incenters and inradii of triangles \(D A B\) , \(A B C\) , \(B C D\) , and \(C D A\) , respectively. One can shift triangle \(C L M\) by the vector \(\overrightarrow{L K}\) to glue it with triangle \(A K N\) into a quadrilateral \(A K C^{\prime}N\) similar to \(A B C D\) . In particular, this shows that \(r_{1}: r_{3} = r_{A}: r_{C}\) ; similarly, \(r_{2}: r_{4} = r_{B}: r_{D}\) . Moreover, the same shift takes \(S_{3}\) to \(S_{1}\) , and it also takes \(I_{3}\) to the incenter \(I_{3}^{\prime}\) of triangle \(K C^{\prime}N\) . Since \(I_{1}S_{1} \parallel I_{3}S_{3}\) , the points \(I_{1}\) , \(S_{1}\) , \(I_{3}^{\prime}\) are collinear. Thus, in order to complete the solution, it suffices to apply the following Lemma to quadrilateral \(A K C^{\prime}N\) .
Lemma 1. Let \(A B C D\) be a cyclic quadrilateral, and define \(I_{A}\) , \(I_{C}\) , \(r_{B}\) , and \(r_{D}\) as above. Let \(I_{A}I_{C}\) meet \(B D\) at \(X\) ; then \(B X: X D = r_{B}: r_{D}\) .
Proof. Consider an inversion centered at \(X\) ; the images under that inversion will be denoted by primes, e.g., \(A^{\prime}\) is the image of \(A\) .
By properties of inversion, we have
\[\angle I_{C}^{\prime}I_{A}^{\prime}D^{\prime} = \angle X I_{A}^{\prime}D^{\prime} = \angle X D I_{A} = \angle B D A / 2 = \angle B C A / 2 = \angle A C I_{B}.\]
We obtain \(\angle I_{A}^{\prime}I_{C}^{\prime}D^{\prime} = \angle C A I_{B}\) likewise; therefore, \(\triangle I_{C}^{\prime}I_{A}^{\prime}D^{\prime} \sim \triangle A C I_{B}\). In the same manner, we get \(\triangle I_{C}^{\prime}I_{A}^{\prime}B^{\prime} \sim \triangle A C I_{D}\), hence the quadrilaterals \(I_{C}^{\prime}B^{\prime}I_{A}^{\prime}D^{\prime}\) and \(A I_{D}C I_{B}\) are also similar. But the diagonals \(A C\) and \(I_{B}I_{D}\) of quadrilateral \(A I_{D}C I_{B}\) meet at a point \(Y\) such that \(I_{B}Y : Y I_{D} = r_{B}:r_{D}\). By similarity, we get \(D^{\prime}X:B^{\prime}X = r_{B}:r_{D}\) and hence \(B X:X D = D^{\prime}X : B^{\prime}X = r_{B}:r_{D}\). \(\square\)
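Lemma 1 is also easy to test numerically. The sketch below (an illustration only; the four angular positions on the unit circle are hypothetical sample values) computes the two incenters, the two inradii, the point \(X\), and compares the two ratios.

```python
import math

def incenter_inradius(P, Q, R):
    # Incenter as the side-length-weighted vertex average; r = area / s.
    a, b, c = math.dist(Q, R), math.dist(P, R), math.dist(P, Q)
    p = a + b + c
    I = ((a*P[0] + b*Q[0] + c*R[0]) / p, (a*P[1] + b*Q[1] + c*R[1]) / p)
    area = abs((Q[0]-P[0])*(R[1]-P[1]) - (R[0]-P[0])*(Q[1]-P[1])) / 2
    return I, area / (p / 2)

def intersect(P1, d1, P2, d2):
    # Point of the line P1 + s*d1 that also lies on the line P2 + t*d2.
    det = d1[0]*(-d2[1]) + d2[0]*d1[1]
    s = ((P2[0]-P1[0])*(-d2[1]) + d2[0]*(P2[1]-P1[1])) / det
    return (P1[0] + s*d1[0], P1[1] + s*d1[1])

# A sample cyclic quadrilateral ABCD, vertices in order on the unit circle.
A, B, C, D = [(math.cos(t), math.sin(t)) for t in (0.3, 1.4, 2.9, 4.9)]

IA, _ = incenter_inradius(D, A, B)     # incenter of triangle DAB
IC, _ = incenter_inradius(B, C, D)     # incenter of triangle BCD
_, rB = incenter_inradius(A, B, C)     # inradius of triangle ABC
_, rD = incenter_inradius(C, D, A)     # inradius of triangle CDA

# X = line IA-IC meet line BD.
X = intersect(IA, (IC[0]-IA[0], IC[1]-IA[1]), B, (D[0]-B[0], D[1]-B[1]))
print(math.dist(B, X) / math.dist(X, D), rB / rD)  # the two ratios agree
```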
Comment 1. The solution above shows that the problem statement holds also for any parallelogram \(K L M N\) whose sides are parallel to the diagonals of \(A B C D\) , as no property specific for a rhombus has been used. This solution works equally well when two sides of quadrilateral \(A B C D\) are parallel.
Comment 2. The problem may be reduced to Lemma 1 by using different tools, e.g., by using mass point geometry, linear motion of \(K\) , \(L\) , \(M\) , and \(N\) , etc.
Lemma 1 itself also can be proved in different ways. We present below one alternative proof.
Proof. In the circumcircle of \(A B C D\), let \(K^{\prime}\), \(L^{\prime}\), \(M^{\prime}\), and \(N^{\prime}\) be the midpoints of arcs \(A B\), \(B C\), \(C D\), and \(D A\) containing no other vertices of \(A B C D\), respectively. Thus, \(K^{\prime} = C I_{B}\cap D I_{A}\), etc. In the computations below, we denote by \([P]\) the area of a polygon \(P\). We use the similarities \(\triangle I_{A}B K^{\prime}\sim \triangle I_{A}D N^{\prime}\), \(\triangle I_{B}K^{\prime}L^{\prime}\sim \triangle I_{B}A C\), etc., as well as the congruences \(\triangle I_{B}K^{\prime}L^{\prime} = \triangle B K^{\prime}L^{\prime}\) and \(\triangle I_{D}M^{\prime}N^{\prime} = \triangle D M^{\prime}N^{\prime}\) (e.g., the first congruence holds because \(K^{\prime}L^{\prime}\) is a common internal bisector of angles \(B K^{\prime}I_{B}\) and \(B L^{\prime}I_{B}\)).
We have
\[\begin{aligned}
\frac{BX}{DX} &= \frac{[I_{A}BI_{C}]}{[I_{A}DI_{C}]} = \frac{BI_{A}\cdot BI_{C}\cdot \sin \angle I_{A}BI_{C}}{DI_{A}\cdot DI_{C}\cdot \sin \angle I_{A}DI_{C}} = \frac{BI_{A}}{DI_{A}}\cdot \frac{BI_{C}}{DI_{C}}\cdot \frac{\sin \angle N'BM'}{\sin \angle K'DL'}\\
&= \frac{BK'}{DN'}\cdot \frac{BL'}{DM'}\cdot \frac{\sin \angle N'DM'}{\sin \angle K'BL'} = \frac{BK'\cdot BL'\cdot \sin \angle K'BL'}{DN'\cdot DM'\cdot \sin \angle N'DM'}\cdot \frac{\sin^{2}\angle N'DM'}{\sin^{2}\angle K'BL'}\\
&= \frac{[BK'L']}{[DN'M']}\cdot \frac{N'M'^{2}}{K'L'^{2}} = \frac{[I_{B}K'L']}{[I_{D}N'M']}\cdot \frac{N'M'^{2}}{K'L'^{2}} = \frac{K'L'^{2}\,[I_{B}AC]}{N'M'^{2}\,[I_{D}AC]}\cdot \frac{N'M'^{2}}{K'L'^{2}} = \frac{[I_{B}AC]}{[I_{D}AC]} = \frac{r_{B}}{r_{D}},
\end{aligned}\]
where the last line uses the congruences and similarities noted above together with \(K'L' = 2R\sin \angle K'BL'\) and \(N'M' = 2R\sin \angle N'DM'\) (with \(R\) the circumradius), and finally the fact that triangles \(I_{B}AC\) and \(I_{D}AC\) share the base \(AC\) and have heights \(r_{B}\) and \(r_{D}\), as required.
Solution 2. This solution is based on the following general Lemma.
Lemma 2. Let \(E\) and \(F\) be distinct points, and let \(\omega_{i}\) , \(i = 1,2,3,4\) , be circles lying in the same halfplane with respect to \(E F\) . For distinct indices \(i,j\in \{1,2,3,4\}\) , denote by \(O_{i j}^{+}\) (respectively, \(O_{i j}^{- }\) ) the center of homothety with positive (respectively, negative) ratio taking \(\omega_{i}\) to \(\omega_{j}\) . Suppose that \(E = O_{12}^{+} = O_{34}^{+}\) and \(F = O_{23}^{+} = O_{41}^{+}\) . Then \(O_{13}^{- } = O_{24}^{- }\) .
Proof. Applying Monge's theorem to triples of circles \(\omega_{1},\omega_{2},\omega_{4}\) and \(\omega_{1},\omega_{3},\omega_{4}\) , we get that both points \(O_{24}^{- }\) and \(O_{13}^{- }\) lie on line \(E O_{14}^{- }\) . Notice that this line is distinct from \(E F\) . Similarly we obtain that both points \(O_{24}^{- }\) and \(O_{13}^{- }\) lie on \(F O_{34}^{- }\) . Since the lines \(E O_{14}^{- }\) and \(F O_{34}^{- }\) are distinct, both points coincide with the meeting point of those lines. \(\square\)

Turning back to the problem, let \(AB\) intersect \(CD\) at \(E\) and let \(BC\) intersect \(DA\) at \(F\) . Assume, without loss of generality, that \(B\) lies on segments \(AE\) and \(CF\) . We will show that the points \(E\) and \(F\) , and the circles \(\omega_{i}\) satisfy the conditions of Lemma 2, so the problem statement follows. In the sequel, we use the notation of \(O_{ij}^{\pm}\) from the statement of Lemma 2, applied to circles \(\omega_{1}\) , \(\omega_{2}\) , \(\omega_{3}\) , and \(\omega_{4}\) .
Using the relations \(\triangle ECA \sim \triangle EBD\) , \(KN \parallel BD\) , and \(MN \parallel AC\) , we get
\[\frac{AN}{ND} = \frac{AN}{AD} \cdot \frac{AD}{ND} = \frac{KN}{BD} \cdot \frac{AC}{NM} = \frac{AC}{BD} = \frac{AE}{ED}.\]
Therefore, by the angle bisector theorem, point \(N\) lies on the internal angle bisector of \(\angle AED\) . We prove similarly that \(L\) also lies on that bisector, and that the points \(K\) and \(M\) lie on the internal angle bisector of \(\angle AFB\) .
Since \(KLMN\) is a rhombus, points \(K\) and \(M\) are symmetric in line \(ELN\) . Hence, the convex quadrilateral determined by the lines \(EK\) , \(EM\) , \(KL\) , and \(ML\) is a kite, and therefore it has an incircle \(\omega_{0}\) . Applying Monge's theorem to \(\omega_{0}\) , \(\omega_{2}\) , and \(\omega_{3}\) , we get that \(O_{23}^{+}\) lies on \(KM\) . On the other hand, \(O_{23}^{+}\) lies on \(BC\) , as \(BC\) is an external common tangent to \(\omega_{2}\) and \(\omega_{3}\) . It follows that \(F = O_{23}^{+}\) . Similarly, \(E = O_{12}^{+} = O_{34}^{+}\) , and \(F = O_{41}^{+}\) .
Comment 3. The reduction to Lemma 2 and the proof of Lemma 2 can be performed with the use of different tools, e.g., by means of Menelaus theorem, by projecting harmonic quadruples, by applying Monge's theorem in some other ways, etc.
|
IMOSL-2020-G6
|
Let \(I\) and \(I_{A}\) be the incenter and the \(A\) - excenter of an acute- angled triangle \(A B C\) with \(A B< A C\) . Let the incircle meet \(B C\) at \(D\) . The line \(A D\) meets \(B I_{A}\) and \(C I_{A}\) at \(E\) and \(F\) , respectively. Prove that the circumcircles of triangles \(A I D\) and \(I_{A}E F\) are tangent to each other.
|
Solution 1. Let \(\measuredangle(p,q)\) denote the directed angle between lines \(p\) and \(q\).
The points \(B\) , \(C\) , \(I\) , and \(I_{A}\) lie on the circle \(\Gamma\) with diameter \(I I_{A}\) . Let \(\omega\) and \(\Omega\) denote the circles \((I_{A}E F)\) and \((A I D)\) , respectively. Let \(T\) be the second intersection point of \(\omega\) and \(\Gamma\) . Then \(T\) is the Miquel point of the complete quadrilateral formed by the lines \(B C\) , \(B I_{A}\) , \(C I_{A}\) , and \(D E F\) , so \(T\) also lies on circle \((B D E)\) (as well as on circle \((C D F)\) ). We claim that \(T\) is a desired tangency point of \(\omega\) and \(\Omega\) .
In order to show that \(T\) lies on \(\Omega\) , use cyclic quadrilaterals \(B D E T\) and \(B I I_{A}T\) to write
\[\measuredangle(DT,DA) = \measuredangle(DT,DE) = \measuredangle(BT,BE) = \measuredangle(BT,BI_{A}) = \measuredangle(IT,II_{A}) = \measuredangle(IT,IA).\]

To show that \(\omega\) and \(\Omega\) are tangent at \(T\), let \(\ell\) be the tangent to \(\omega\) at \(T\), so that \(\measuredangle(TI_{A},\ell) = \measuredangle(EI_{A},ET)\). Using circles \((BDET)\) and \((BICI_{A})\), we get
\[\measuredangle(EI_{A},ET) = \measuredangle(EB,ET) = \measuredangle(DB,DT).\]
Therefore,
\[\measuredangle(TI,\ell) = 90^{\circ} + \measuredangle(TI_{A},\ell) = 90^{\circ} + \measuredangle(DB,DT) = \measuredangle(DI,DT),\]
which shows that \(\ell\) is tangent to \(\Omega\) at \(T\) .
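As a numerical cross-check of the statement (not a substitute for either proof), one can compute all the objects explicitly for a sample acute triangle with \(AB < AC\); the coordinates below are an arbitrary choice satisfying the hypotheses.

```python
import math

def circumcircle(P, Q, S):
    # Center and radius of the circle through three points.
    ax, ay = P; bx, by = Q; cx, cy = S
    d = 2*(ax*(by-cy) + bx*(cy-ay) + cx*(ay-by))
    ux = ((ax*ax+ay*ay)*(by-cy) + (bx*bx+by*by)*(cy-ay) + (cx*cx+cy*cy)*(ay-by)) / d
    uy = ((ax*ax+ay*ay)*(cx-bx) + (bx*bx+by*by)*(ax-cx) + (cx*cx+cy*cy)*(bx-ax)) / d
    return (ux, uy), math.hypot(ax-ux, ay-uy)

def intersect(P1, d1, P2, d2):
    # Point of the line P1 + s*d1 that also lies on the line P2 + t*d2.
    det = d1[0]*(-d2[1]) + d2[0]*d1[1]
    s = ((P2[0]-P1[0])*(-d2[1]) + d2[0]*(P2[1]-P1[1])) / det
    return (P1[0] + s*d1[0], P1[1] + s*d1[1])

A, B, C = (0.0, 3.0), (-2.0, 0.0), (3.0, 0.0)   # acute triangle with AB < AC
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
s = (a + b + c) / 2
I  = ((a*A[0] + b*B[0] + c*C[0]) / (a+b+c), (a*A[1] + b*B[1] + c*C[1]) / (a+b+c))
IA = ((-a*A[0] + b*B[0] + c*C[0]) / (b+c-a), (-a*A[1] + b*B[1] + c*C[1]) / (b+c-a))
D  = (B[0] + (s-b)*(C[0]-B[0])/a, B[1] + (s-b)*(C[1]-B[1])/a)   # BD = s - b

dAD = (D[0]-A[0], D[1]-A[1])
E = intersect(A, dAD, B, (IA[0]-B[0], IA[1]-B[1]))   # line AD meets B-I_A
F = intersect(A, dAD, C, (IA[0]-C[0], IA[1]-C[1]))   # line AD meets C-I_A

O1, r1 = circumcircle(A, I, D)     # circle (AID)
O2, r2 = circumcircle(IA, E, F)    # circle (I_A E F)
dist = math.dist(O1, O2)
gap = min(abs(dist - (r1+r2)), abs(dist - abs(r1-r2)))
print(gap)  # ~0: the circles are tangent
```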
Solution 2. We use the notation of circles \(\Gamma\) , \(\omega\) , and \(\Omega\) as in the previous solution.
Let \(L\) be the point opposite to \(I\) in circle \(\Omega\) . Then \(\angle I A L = \angle I D L = 90^{\circ}\) , which means that \(L\) is the foot of the external bisector of \(\angle A\) in triangle \(A B C\) . Let \(L I\) cross \(\Gamma\) again at \(M\) .
Let \(T\) be the foot of the perpendicular from \(I\) onto \(I_{A}L\) . Then \(T\) is the second intersection point of \(\Gamma\) and \(\Omega\) . We will show that \(T\) is the desired tangency point.
First, we show that \(T\) lies on circle \(\omega\) . Notice that
\[\measuredangle(LT,LM) = \measuredangle(AT,AI)\quad \text{and}\quad \measuredangle(MT,ML) = \measuredangle(MT,MI) = \measuredangle(I_{A}T,I_{A}I),\]
which shows that triangles \(TML\) and \(TI_{A}A\) are similar and equioriented. So there exists a rotational homothety \(\tau\) mapping \(TML\) to \(TI_{A}A\) .
Since \(\measuredangle(ML,LD) = \measuredangle(AI,AD)\), we get \(\tau(BC) = AD\). Next, since
\[\measuredangle(MB,ML) = \measuredangle(MB,MI) = \measuredangle(I_{A}B,I_{A}I) = \measuredangle(I_{A}E,I_{A}A),\]
we get \(\tau (B) = E\) . Similarly, \(\tau (C) = F\) . Since the points \(M\) , \(B\) , \(C\) , and \(T\) are concyclic, so are their \(\tau\) - images, which means that \(T\) lies on \(\omega = \tau (\Gamma)\) .

Finally, since \(\tau (L) = A\) and \(\tau (B) = E\) , triangles \(ATL\) and \(ETB\) are similar so that
\[\measuredangle(AT,AL) = \measuredangle(ET,EB) = \measuredangle(EI_{A},ET).\]
This means that the tangents to \(\Omega\) and \(\omega\) at \(T\) make the same angle with the line \(I_{A}TL\) , so the circles are indeed tangent at \(T\) .
Comment 1. In both solutions above, a crucial step is a guess that the desired tangency point lies on \(\Gamma\) . There are several ways to recognize this helpful property.
E.g. one may perform some angle chasing to see that the tangents to \(\Omega\) at \(L\) and to \(\omega\) at \(I_{A}\) are parallel (and the circles lie on different sides of the tangents). This yields that, under the assumption that the circles are tangent externally, the tangency point must lie on \(I_{A}L\) . Since \(IL\) is a diameter in \(\Omega\) , this, in turn, implies that \(T\) is the projection of \(I\) onto \(I_{A}L\) .
Another way to see the same fact is to perform a homothety centered at \(A\) and mapping \(I\) to \(I_{A}\) (and \(D\) to some point \(D^{\prime}\)). The image \(\Omega^{\prime}\) of \(\Omega\) is tangent to \(\omega\) at \(I_{A}\), because \(\angle B I_{A} A + \angle C I_{A} D^{\prime} = 180^{\circ}\). This yields that the tangents to \(\Omega\) at \(I\) and to \(\omega\) at \(I_{A}\) are parallel.
There are other ways to describe the tangency point. The next solution presents one of them.
Solution 3. We also use the notation of circles \(\omega\) , and \(\Omega\) from the previous solutions.
Perform an inversion centered at \(D\) . The images of the points will be denoted by primes, e.g., \(A'\) is the image of \(A\) .
For convenience, we use the notation \(\angle B I D = \beta\) , \(\angle C I D = \gamma\) , and \(\alpha = 180^{\circ} - \beta - \gamma = 90^{\circ} - \angle B A I\) . We start by computing the angles that appear after the inversion. We get
\[\angle D B^{\prime}I^{\prime} = \beta , \quad \angle D C^{\prime}I^{\prime} = \gamma , \quad \text{and hence} \quad \angle B^{\prime}I^{\prime}C^{\prime} = \alpha ;\] \[\angle E^{\prime}I_{A}^{\prime}F^{\prime} = \angle E^{\prime}I_{A}^{\prime}D - \angle F^{\prime}I_{A}^{\prime}D = \angle I_{A}E D - \angle I_{A}F D = \angle E I_{A}F = 180^{\circ} - \alpha .\]
Next, we have
\[\angle A^{\prime}E^{\prime}B^{\prime} = \angle D E^{\prime}B^{\prime} = \angle D B E = \beta = 90^{\circ} - \frac{\angle D B A}{2} = 90^{\circ} - \frac{\angle E^{\prime}A^{\prime}B^{\prime}}{2},\]
which yields that triangle \(A^{\prime}B^{\prime}E^{\prime}\) is isosceles with \(A^{\prime}B^{\prime} = A^{\prime}E^{\prime}\) . Similarly, \(A^{\prime}F^{\prime} = A^{\prime}C^{\prime}\) .
Finally, we get
\[\angle A^{\prime}B^{\prime}I^{\prime} = \angle I^{\prime}B^{\prime}D - \angle A^{\prime}B^{\prime}D = \beta - \angle B A D = \beta -(90^{\circ} - \alpha) + \angle I A D\] \[\qquad = \angle I C D + \angle I A D = \angle C^{\prime}I^{\prime}D + \angle A^{\prime}I^{\prime}D = \angle C^{\prime}I^{\prime}A^{\prime};\]
similarly, \(\angle A^{\prime}C^{\prime}I^{\prime} = \angle A^{\prime}I^{\prime}B^{\prime}\) , so that triangles \(A^{\prime}B^{\prime}I^{\prime}\) and \(A^{\prime}I^{\prime}C^{\prime}\) are similar. Therefore, \(A^{\prime}I^{\prime 2} = A^{\prime}B^{\prime} \cdot A^{\prime}C^{\prime}\) .
Recall that we need to prove the tangency of line \(A^{\prime}I^{\prime} = \Omega^{\prime}\) with circle \((E^{\prime}F^{\prime}I_{A}^{\prime}) = \omega^{\prime}\) . A desired tangency point \(T^{\prime}\) must satisfy \(A^{\prime}T^{\prime 2} = A^{\prime}E^{\prime} \cdot A^{\prime}F^{\prime}\) ; the relations obtained above yield
\[A^{\prime}E^{\prime} \cdot A^{\prime}F^{\prime} = A^{\prime}B^{\prime} \cdot A^{\prime}C^{\prime} = A^{\prime}I^{\prime 2},\]
so that \(T^{\prime}\) should be symmetric to \(I^{\prime}\) with respect to \(A^{\prime}\) .
Thus, let us define a point \(T^{\prime}\) as the reflection of \(I^{\prime}\) in \(A^{\prime}\), and show that \(T^{\prime}\) lies on circle \(\omega^{\prime}\); the equalities above will then imply that the line \(A^{\prime}I^{\prime}\) is tangent to \(\omega^{\prime}\) at \(T^{\prime}\), as desired.

The property that triangles \(B^{\prime}A^{\prime}I^{\prime}\) and \(I^{\prime}A^{\prime}C^{\prime}\) are similar means that quadrilateral \(B^{\prime}I^{\prime}C^{\prime}T^{\prime}\) is harmonic. Indeed, let \(C^{*}\) be the reflection of \(C^{\prime}\) in the perpendicular bisector of \(I^{\prime}T^{\prime}\) ; then \(C^{*}\) lies on \(B^{\prime}A^{\prime}\) by \(\angle B^{\prime}A^{\prime}I^{\prime} = \angle A^{\prime}I^{\prime}C^{\prime} = \angle T^{\prime}I^{\prime}C^{*}\) , and then \(C^{*}\) lies on circle \((I^{\prime}B^{\prime}T^{\prime})\) since \(A^{\prime}B^{\prime} \cdot A^{\prime}C^{*} = A^{\prime}B^{\prime} \cdot A^{\prime}C^{\prime} = A^{\prime}I^{\prime 2} = A^{\prime}I^{\prime} \cdot A^{\prime}T^{\prime}\) . Therefore, \(C^{\prime}\) also lies on that circle (and the circle is \((B^{\prime}I^{\prime}C^{\prime}) = \Gamma^{\prime}\) ). Moreover, \(B^{\prime}C^{*}\) is a median in triangle \(B^{\prime}I^{\prime}T^{\prime}\) , so \(B^{\prime}C^{\prime}\) is its symmedian, which establishes harmonicity.
Now we have \(\angle A^{\prime}B^{\prime}T^{\prime} = \angle I^{\prime}B^{\prime}C^{\prime} = \beta = \angle A^{\prime}B^{\prime}E^{\prime}\) ; which shows that \(E^{\prime}\) lies on \(B^{\prime}T^{\prime}\) . Similarly, \(F^{\prime}\) lies on \(C^{\prime}T^{\prime}\) . Hence, \(\angle E^{\prime}T^{\prime}F^{\prime} = \angle B^{\prime}I^{\prime}C^{\prime} = 180^{\circ} - \angle E^{\prime}I_{A}^{\prime}F^{\prime}\) , which establishes \(T^{\prime} \in \omega^{\prime}\) .
Comment 2. The solution above could be finished without the use of harmonicity. E.g., one may notice that both triangles \(A^{\prime}T^{\prime}F^{\prime}\) and \(A^{\prime}E^{\prime}T^{\prime}\) are similar to triangle \(B^{\prime}I^{\prime}J\), where \(J\) is the point symmetric to \(I^{\prime}\) in the perpendicular bisector of \(B^{\prime}C^{\prime}\); indeed, we have \(\angle T^{\prime}A^{\prime}E^{\prime} = \gamma - \beta = \angle I^{\prime}B^{\prime}J\) and \(\frac{B^{\prime}I^{\prime}}{B^{\prime}J} = \frac{B^{\prime}I^{\prime}}{C^{\prime}I^{\prime}} = \frac{B^{\prime}A^{\prime}}{A^{\prime}I^{\prime}} = \frac{A^{\prime}E^{\prime}}{A^{\prime}T^{\prime}}\). This also allows one to compute \(\angle E^{\prime}T^{\prime}F^{\prime} = \angle E^{\prime}T^{\prime}A^{\prime} - \angle F^{\prime}T^{\prime}A^{\prime} = \angle I^{\prime}J B^{\prime} - \angle J I^{\prime}B^{\prime} = \alpha\).
Comment 3. Here we list several properties of the configuration in the problem, which can be derived from the solutions above.
The quadrilateral \(IBTC\) (as well as \(I^{\prime}B^{\prime}T^{\prime}C^{\prime}\) ) is harmonic. Hence, line \(IT\) contains the meeting point of tangents to \(\Gamma\) at \(B\) and \(C\) , i.e., the midpoint \(N\) of arc \(BAC\) in the circumcircle of \(\triangle ABC\) .
|
IMOSL-2020-G7
|
Let \(P\) be a point on the circumcircle of an acute-angled triangle \(ABC\) . Let \(D\) , \(E\) , and \(F\) be the reflections of \(P\) in the midlines of triangle \(ABC\) parallel to \(BC\) , \(CA\) , and \(AB\) , respectively. Denote by \(\omega_{A}\) , \(\omega_{B}\) , and \(\omega_{C}\) the circumcircles of triangles \(ADP\) , \(BEP\) , and \(CFP\) , respectively. Denote by \(\omega\) the circumcircle of the triangle formed by the perpendicular bisectors of segments \(AD\) , \(BE\) , and \(CF\) .
Show that \(\omega_{A}\) , \(\omega_{B}\) , \(\omega_{C}\) , and \(\omega\) have a common point.
|
Solution. Let \(AA_{1}\) , \(BB_{1}\) , and \(CC_{1}\) be the altitudes in triangle \(ABC\) , and let \(m_{A}\) , \(m_{B}\) , and \(m_{C}\) be the midlines parallel to \(BC\) , \(CA\) , and \(AB\) , respectively. We always denote by \(\measuredangle(p, q)\) the directed angle from a line \(p\) to a line \(q\) , taken modulo \(180^{\circ}\) .
Step 1: Circles \(\omega_{A}\) , \(\omega_{B}\) , and \(\omega_{C}\) share a common point \(Q\) different from \(P\) .
Notice that \(m_{A}\) is the perpendicular bisector of \(PD\) , so \(\omega_{A}\) is symmetric with respect to \(m_{A}\) . Since \(A\) and \(A_{1}\) are also symmetric to each other in \(m_{A}\) , the point \(A_{1}\) lies on \(\omega_{A}\) . Similarly, \(B_{1}\) and \(C_{1}\) lie on \(\omega_{B}\) and \(\omega_{C}\) , respectively.
Let \(H\) be the orthocenter of \(\triangle ABC\) . Quadrilaterals \(ABA_{1}B_{1}\) and \(BCB_{1}C_{1}\) are cyclic, so \(AH \cdot HA_{1} = BH \cdot HB_{1} = CH \cdot HC_{1}\) . This means that \(H\) lies on pairwise radical axes of \(\omega_{A}\) , \(\omega_{B}\) , and \(\omega_{C}\) . Point \(P\) also lies on those radical axes; hence the three circles have a common radical axis \(\ell = PH\) , and the second meeting point \(Q\) of \(\ell\) with \(\omega_{A}\) is the second common point of the three circles. Notice here that \(H\) lies inside all three circles, hence \(Q \neq P\) .

Step 2: Point \(Q\) lies on \(\omega\) .
Let \(p_{A}\) , \(p_{B}\) , and \(p_{C}\) denote the perpendicular bisectors of \(AD\) , \(BE\) , and \(CF\) , respectively; denote by \(\Delta\) the triangle formed by those perpendicular bisectors. By Simson's theorem, in order to show that \(Q\) lies on the circumcircle \(\omega\) of \(\Delta\) , it suffices to prove that the projections of \(Q\) onto the sidelines \(p_{A}\) , \(p_{B}\) , and \(p_{C}\) are collinear. Alternatively, but equivalently, it suffices to prove that the reflections \(Q_{A}\) , \(Q_{B}\) , and \(Q_{C}\) of \(Q\) in those lines, respectively, are collinear. In fact, we will show that four points \(P\) , \(Q_{A}\) , \(Q_{B}\) , and \(Q_{C}\) are collinear.
Since \(p_{A}\) is the common perpendicular bisector of \(AD\) and \(QQ_{A}\) , the point \(Q_{A}\) lies on \(\omega_{A}\) , and, moreover, \(\hat{x} (DA, DQ_{A}) = \hat{x} (AQ, AD)\) . Therefore,
\[\measuredangle(PA, PQ_{A}) = \measuredangle(DA, DQ_{A}) = \measuredangle(AQ, AD) = \measuredangle(PQ, PD) = \measuredangle(PQ, BC) + 90^{\circ}.\]
Similarly, we get \(\measuredangle(PB,PQ_{B}) = \measuredangle(PQ,CA) + 90^{\circ}\). Therefore,
\[\begin{aligned} \measuredangle(PQ_{A},PQ_{B}) &= \measuredangle(PQ_{A},PA) + \measuredangle(PA,PB) + \measuredangle(PB,PQ_{B})\\ &= \measuredangle(BC,PQ) + 90^{\circ} + \measuredangle(CA,CB) + \measuredangle(PQ,CA) + 90^{\circ} = 0, \end{aligned}\]
which shows that \(P\) , \(Q_{A}\) , and \(Q_{B}\) are collinear. Similarly, \(Q_{C}\) also lies on \(P Q_{A}\) .
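Both steps can be verified numerically for a sample configuration (an illustration only; the triangle and the position of \(P\) on the circumcircle are arbitrary choices): the second common point \(Q\) of \(\omega_{A}\) and \(\omega_{B}\) is checked to lie on \(\omega_{C}\) and on the circumcircle \(\omega\) of the triangle cut out by \(p_{A}\), \(p_{B}\), \(p_{C}\).

```python
import math

def reflect(P, Q1, Q2):
    # Reflect point P in the line through Q1 and Q2.
    dx, dy = Q2[0]-Q1[0], Q2[1]-Q1[1]
    t = ((P[0]-Q1[0])*dx + (P[1]-Q1[1])*dy) / (dx*dx + dy*dy)
    return (2*(Q1[0]+t*dx) - P[0], 2*(Q1[1]+t*dy) - P[1])

def circumcircle(P, Q, S):
    ax, ay = P; bx, by = Q; cx, cy = S
    d = 2*(ax*(by-cy) + bx*(cy-ay) + cx*(ay-by))
    ux = ((ax*ax+ay*ay)*(by-cy) + (bx*bx+by*by)*(cy-ay) + (cx*cx+cy*cy)*(ay-by)) / d
    uy = ((ax*ax+ay*ay)*(cx-bx) + (bx*bx+by*by)*(ax-cx) + (cx*cx+cy*cy)*(bx-ax)) / d
    return (ux, uy), math.hypot(ax-ux, ay-uy)

def intersect(P1, d1, P2, d2):
    # Point of the line P1 + s*d1 that also lies on the line P2 + t*d2.
    det = d1[0]*(-d2[1]) + d2[0]*d1[1]
    s = ((P2[0]-P1[0])*(-d2[1]) + d2[0]*(P2[1]-P1[1])) / det
    return (P1[0] + s*d1[0], P1[1] + s*d1[1])

def mid(U, V):
    return ((U[0]+V[0])/2, (U[1]+V[1])/2)

A, B, C = (0.0, 3.0), (-2.0, 0.0), (3.0, 0.0)           # an acute triangle
O, R = circumcircle(A, B, C)
P = (O[0] + R*math.cos(-1.0), O[1] + R*math.sin(-1.0))  # P on the circumcircle

D = reflect(P, mid(A, B), mid(A, C))   # midline parallel to BC
E = reflect(P, mid(B, A), mid(B, C))   # midline parallel to CA
F = reflect(P, mid(C, A), mid(C, B))   # midline parallel to AB

OA, rA = circumcircle(A, D, P)
OB, rB = circumcircle(B, E, P)
OC, rC = circumcircle(C, F, P)

# Second common point Q of omega_A and omega_B (the intersection point != P).
dx, dy = OB[0]-OA[0], OB[1]-OA[1]
dd = math.hypot(dx, dy)
a = (rA*rA - rB*rB + dd*dd) / (2*dd)
h = math.sqrt(rA*rA - a*a)
mx, my = OA[0] + a*dx/dd, OA[1] + a*dy/dd
cand = [(mx + h*dy/dd, my - h*dx/dd), (mx - h*dy/dd, my + h*dx/dd)]
Q = max(cand, key=lambda Z: math.dist(Z, P))

# Triangle cut out by the perpendicular bisectors of AD, BE, CF, and its circle.
def perp_bisector(U, V):
    return mid(U, V), (-(V[1]-U[1]), V[0]-U[0])

pA, pB, pC = perp_bisector(A, D), perp_bisector(B, E), perp_bisector(C, F)
V1 = intersect(*pA, *pB)
V2 = intersect(*pB, *pC)
V3 = intersect(*pC, *pA)
Ow, rw = circumcircle(V1, V2, V3)

print(math.dist(Q, OC) - rC, math.dist(Q, Ow) - rw)  # both ~0
```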
Comment 1. There are several variations of Step 2. In particular, let \(O_{A}\) , \(O_{B}\) , and \(O_{C}\) denote the centers of \(\omega_{A}\) , \(\omega_{B}\) , and \(\omega_{C}\) , respectively; they lie on \(p_{A}\) , \(p_{B}\) , and \(p_{C}\) , respectively. Moreover, all those centers lie on the perpendicular bisector \(p\) of \(P Q\) . Now one can show that \(\measuredangle(Q O_{A},p_{A}) = \measuredangle(Q O_{B},p_{B}) = \measuredangle(Q O_{C},p_{C})\) , and then finish by applying the generalized Simson's theorem. Alternatively, but equivalently, those relations show that \(Q\) is the Miquel point of the lines \(p_{A}\) , \(p_{B}\) , \(p_{C}\) , and \(p\) .
To establish \(\measuredangle(Q O_{A},p_{A}) = \measuredangle(Q O_{C},p_{C})\) , notice that it is equivalent to \(\measuredangle(Q O_{A},Q O_{C}) = \measuredangle(p_{A},p_{C})\) , which may be obtained, e.g., as follows:
\[\begin{aligned} \measuredangle(Q O_{A},Q O_{C}) &= \measuredangle(Q O_{A},p) + \measuredangle(p,Q O_{C}) = \measuredangle(A Q,A P) + \measuredangle(C P,C Q)\\ &= \measuredangle(A Q,C Q) + \measuredangle(C P,A P) = \measuredangle(A Q,P Q) + \measuredangle(P Q,C Q) + \measuredangle(C B,A B)\\ &= \measuredangle(A D,A A_{1}) + \measuredangle(C C_{1},C F) + \measuredangle(A A_{1},C C_{1}) = \measuredangle(A D,C F) = \measuredangle(p_{A},p_{C}). \end{aligned}\]

Comment 2. The inversion at \(H\) with (negative) power \(- A H\cdot H A_{1}\) maps \(P\) to \(Q\) , and the circumcircle of \(\triangle A B C\) to its Euler circle. Therefore, \(Q\) lies on that Euler circle.
|