c131ae7b1c323a1bf06dc0e05598d06573370584
|
Q: Why are there no $16$ by $32$ Hadamard circulant matrices? Two rows of a matrix are orthogonal if their inner product equals zero. Call a matrix with all rows pairwise orthogonal an orthogonal matrix. A circulant matrix is one where each row vector is rotated one element to the right relative to the preceding row vector. We will only consider matrices whose entries are either $-1$ or $1$.
For number of columns $n= 4,8,12,16,20,24,28, 36$ there exist $n/2$ by $n$ orthogonal circulant matrices.
Why are there no circulant matrices with $16$ rows and $32$ columns which are orthogonal?
Or to phrase it differently, is it possible to prove they don't exist without enumerating them all?
Example 6 by 12 matrix
\begin{pmatrix}
-1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1 &\phantom{-}1 & -1 & -1 & -1 & -1\\
-1 & -1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1 &\phantom{-}1 & -1 & -1 & -1\\
-1 & -1 & -1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1 &\phantom{-}1 & -1 & -1\\
-1 & -1 & -1 & -1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1 &\phantom{-}1 & -1\\
-1 & -1 & -1 & -1 & -1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1 &\phantom{-}1\\
\phantom{-}1 & -1 & -1 & -1 & -1 & -1 &\phantom{-}1 &\phantom{-}1 & -1 & -1 &\phantom{-}1 & -1\\\end{pmatrix}
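For readers who want to experiment, here is a minimal sketch (NumPy; the helper name is my own) that rebuilds a circulant matrix from its first row and tests pairwise row orthogonality:
```python
import numpy as np

def is_orthogonal_circulant(first_row, k):
    """Build the k x n circulant matrix with the given +/-1 first row
    (each row rotated one step right) and check pairwise row orthogonality."""
    rows = np.array([np.roll(first_row, i) for i in range(k)])
    gram = rows @ rows.T                        # all pairwise row inner products
    return np.all(gram - np.diag(np.diag(gram)) == 0)

# first row of the 6 x 12 example above
r = np.array([-1, 1, 1, -1, -1, 1, -1, 1, -1, -1, -1, -1])
print(is_orthogonal_circulant(r, 6))            # True
```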
A: These matrices are known as circulant partial Hadamard matrices and a good reference for these, along with recent results, is $\textit{Circulant partial Hadamard matrices}$ by Craigen, Faucher, Low, and Wares, Lin. Alg. Appl. 439.
Denote by $r\mbox{-}H(k\times n)$ a $k\times n$ circulant partial Hadamard matrix in which one row (and hence every row) has sum $r$. The authors compile a table of the maximum values of $k$ for $n\le 64$ and all values of $r$. You can see there that neither the $16\times 32$ matrix nor the $22\times 44$ matrix exists.
One of the first results in the paper is that if $r\mbox{-}H(k\times n)$ exists then $n$ is divisible by 4. This is why your column numbers are all multiples of 4. Another result is that if Ryser's conjecture is true then $k\le \frac{n}{2}$. The authors show also that there is empirical evidence that the maximum value of $k=\frac{n}{2}$ is attained almost always for $r=2$. A conjecture of Delsarte, Goethals, and Seidel is that a $2\mbox{-}H(k\times 2k)$ exists if and only if $k-1$ is an odd prime power. These two results combined would explain why the $16\times 32$ and $22\times 44$ cases don't exist. It also indicates that the next non-existent case could be $34\times 68$.
|
fedc7d34aa87c90dde2765cba93f6dc17e5bfdf0
|
Q: Spectral theory of compact, self-adjoint operators. Let $T$ be a compact, self-adjoint operator on a separable Hilbert space $H$. Suppose that $f\in H$, $||f|| =1$ and $||(T-3)f||\leq 1/2$. Let $P$ be the orthogonal projection onto the direct sum of all the eigenspaces of $T$ with eigenvalues $\lambda \in [2,4]$. Show that
$||Pf||\geq \frac{\sqrt{3}}{4}$.
I think what I need to do is use spectral theory to show that there exists an orthonormal basis consisting of the eigenvectors. Then I need to evaluate $P$ with regards to this basis. Am I on the right track?
|
f82a7ec6fc818166cbc52130cc70f1fea1919989
|
Q: What is the relationship between a quotient space and an annihilator? If we have a vector space $V$ and subspace $W$, we have that
$$\dim(V/W) = \dim V - \dim W.$$
Similarly for the annihilator $W^{\circ}$ we have that
$$\dim W^{\circ} = \dim V - \dim W.$$
What is the isomorphism between these two spaces? Is there an intuitive way to relate the two ideas?
A: The natural isomorphism is not between $W^{\circ}$ and $V/W$, but between $W^{\circ}$ and $(V/W)^*$. Consider the linear map
\begin{align*}
L : W^{\circ} &\to (V/W)^*\\
\varphi &\mapsto \hat{\varphi}
\end{align*}
where $\hat{\varphi}(v + W) := \varphi(v)$. Note, the map $\hat{\varphi}$ is well-defined because if $v' + W = v + W$, then $v' = v + w$ for some $w \in W$ so
$$\hat{\varphi}(v' + W) = \varphi(v') = \varphi(v + w) = \varphi(v) + \varphi(w) = \varphi(v) = \hat{\varphi}(v + W).$$
Note that if $\hat{\varphi} = 0$, then for every $v \in V$, $0 = \hat{\varphi}(v + W) = \varphi(v)$, so $\varphi = 0$. Therefore $L$ is injective.
Let $\psi \in (V/W)^*$. Note that $\psi\circ\pi \in V^*$ where $\pi : V \to V/W$ is the natural projection, and $(\psi\circ\pi)(W) = \psi(\pi(W)) = \psi(0) = 0$, so $\psi\circ\pi \in W^{\circ}$. Furthermore
$$\widehat{\psi\circ\pi}(v + W) = (\psi\circ\pi)(v) = \psi(\pi(v)) = \psi(v + W),$$
so $L(\psi\circ\pi) = \psi$. Therefore, $L$ is surjective.
So the map $L : W^{\circ} \to (V/W)^*$, $\varphi \mapsto \hat{\varphi}$ is an isomorphism.
If $V$ (and hence $W$) is finite-dimensional, then we have
$$\dim W^{\circ} = \dim (V/W)^* = \dim V/W = \dim V - \dim W.$$
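As a concrete sanity check of the dimension formula (the coordinate set-up is mine, not part of the answer): in coordinates, a functional annihilates $W$ exactly when it lies in the null space of a matrix whose rows span $W$, so $\dim W^{\circ}$ falls out of rank-nullity:
```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2                          # dim V = 5, dim W = 2
W = rng.integers(-3, 4, (k, n))      # rows spanning W (generically independent)
dim_W = np.linalg.matrix_rank(W)
dim_annihilator = n - dim_W          # functionals vanishing on W = null space of W
print(dim_W, dim_annihilator)        # 2 3, i.e. dim W° = dim V - dim W
```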
|
7f3194a2858f55c9fb7bcce99e9012f31698049c
|
Q: How do I show that $-\frac{1}{e^x + 1} + 1 = \frac{e^x}{e^x + 1}$? The expression is
$$-\frac{1}{e^x + 1} + 1 = \frac{e^x}{e^x + 1}$$
I would like help to get from the left side to the right side.
A: One can write $1=\frac{e^x+1}{e^x+1}$ in $\frac{-1}{e^x+1}+1$ to obtain $$\frac{-1}{e^x+1}+\frac{e^x+1}{e^x+1}=\frac{-1+e^x+1}{e^x+1}=\frac{e^x}{e^x+1}.$$
Hope this helped!
A: $$-\frac{1}{e^x + 1} + 1 = -\frac{1}{e^x + 1} + \frac{e^x+1}{e^x+1}= \frac{-1+e^x+1}{e^x+1} = \frac{e^x}{e^x+1}$$
A: Simply compute the LHS:
$$-\frac{1}{e^x + 1} + 1 = \frac{-1 + e^x + 1}{e^x + 1} = \frac{e^x}{e^x + 1}$$
A: You should take the LCM; see my answer:
$$-\frac{1}{e^x+1}+1=-\frac{1}{e^x+1}+\frac{1}{1}=\frac{-1+1\cdot(e^x+1)}{1\cdot(e^x+1)}=\frac{-1+e^x+1}{e^x+1}=\frac{e^x}{e^x+1}$$
A: \begin{align} \forall x\in\mathbb{R},\ \dfrac{-1}{e^x+1}+1-\left(\dfrac{e^x}{e^x+1}\right)&=\dfrac{-1}{e^x+1}+\dfrac{e^x+1}{e^x+1}-\dfrac{e^x}{e^x+1}\\
&=\dfrac{-1+e^x+1}{e^x+1}-\dfrac{e^x}{e^x+1}\\
&=\dfrac{e^x}{e^x+1}-\dfrac{e^x}{e^x+1}\\
&=0\\
&\square
\end{align}
A: Why don't you do the simple addition? $$\frac{-1}{e^x+1}+1$$ $$=\frac{-1+e^x+1}{e^x+1}$$ $$=\frac{e^x}{e^x+1}$$
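A one-line check with SymPy (my own verification, not part of any answer):
```python
import sympy as sp

x = sp.symbols('x')
lhs = -1/(sp.exp(x) + 1) + 1
rhs = sp.exp(x)/(sp.exp(x) + 1)
print(sp.simplify(lhs - rhs))   # 0, so the two sides agree for all x
```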
|
31a17046ea8e2e3d8524194cf43635e21f2a776e
|
Q: Initial value problem for a system of ODEs with two parameters Consider the following initial value problem for the autonomous system of ODEs
\begin{equation}%%\label{eqn: }
\begin{cases}
%\vspace{3mm}
x'(t)=y(t),\; t>0,\\
%\vspace{3mm}
y'(t)=-\frac{\displaystyle x(t)}{\displaystyle\sqrt{1+c_1+x^2(t)}}\,y^2(t)-\big(1-\sqrt{1+c_1+x^2(t)}\big)\,x(t),\; t>0,\\
x(0)=0,\; y(0)=k,
\end{cases}
\end{equation}
where $c_1$, $k$ are parameters. I'd like to find $c_1>-1$ and $k\in\mathbb{R}$ such that a nontrivial solution $(x(t),y(t))\not\equiv (0,0)$, $t>0$ exists and satisfies the additional condition $x(1)=0$. My idea is essentially trial and error, i.e. choose some $c_1$ and $k$ and use numerics to find $(x(t),y(t))$. However, I have failed to find such $c_1$ and $k$; this is not a systematic approach and is not very efficient. Does such a problem belong to the so-called inverse problems, controllability problems, or some other class of problems? Could you point me to some references or papers related to this kind of problem? Thanks!
A: The best thing to do is to analyse the system using a dynamical systems perspective. That is, to (1) find all equilibria of the system, and (2) to classify the behaviour of these equilibria.
As you easily infer from the ODEs, the equilibria of the system are the solutions of the equations
\begin{align}
y &= 0,\\
x\left(1 - \sqrt{1+c_1+x^2}\right) &= 0.
\end{align}
The last equation is satisfied if either $x=0$ or
\begin{equation}
1+c_1 + x^2 = 1 \;\Rightarrow x^2 = -c_1.
\end{equation}
So, for $c_1>0$, only the trivial equilibrium $(x,y)=(0,0)$ exists, whereas for $-1<c_1<0$, three equilibria exist, and the two nontrivial equilibria are found at $(x,y) = (\pm \sqrt{-c_1},0)$.
If we linearise the system around the trivial equilibrium, we see that the eigenvalues of the linearisation are given by $\lambda = \pm \sqrt{\sqrt{1+c_1}-1}$. Therefore, for $c_1>0$, the trivial equilibrium is a saddle; for $-1<c_1<0$, the trivial equilibrium is a centre. This is relevant for the following reason: you're looking for an orbit which starts somewhere on the $y$-axis, and which crosses the $y$-axis again at some specified later time ($t=1$). From the phase plane for $c_1>0$, it is immediately clear that no such orbit exists.
However, for $-1<c_1<0$, any orbit found in between the two nontrivial equilibria is closed, i.e. periodic, and therefore crosses the $y$-axis infinitely many times.
This will definitely give you more insight in the possible values of $c_1$ and $k$.
Note 1: The phase plane figures are made with the java-applet PPlane.
Note 2: You can do a lot more analysis on this system, to get more explicit expressions, etc. However, as you're interested in numerics, I'll leave that for now.
A: $$\begin{cases}
%\vspace{3mm}
\frac{dx}{dt}=y(t),\; t>0,\\
%\vspace{3mm}
\frac{dy}{dt}=-\frac{\displaystyle x(t)}{\displaystyle\sqrt{1+c_1+x^2(t)}}\,y^2(t)-\big(1-\sqrt{1+c_1+x^2(t)}\big)\,x(t),\; t>0,\\
x(0)=0,\; y(0)=k,
\end{cases}$$
$$y\frac{dy}{dx}=-\frac{x}{\displaystyle\sqrt{1+c_1+x^2}}\,y^2-\big(1-\sqrt{1+c_1+x^2}\big)\,x,\;$$
With $\:Y=y^2\:$ :
$$\frac{dY}{dx}=-2\frac{x}{\sqrt{1+c_1+x^2}}\,Y-2\big(1-\sqrt{1+c_1+x^2}\big)\,x$$
With $\:X=x^2\:$ :
$$\frac{dY}{dX}=-\frac{1}{\displaystyle\sqrt{1+c_1+X}}\,Y-\big(1-\sqrt{1+c_1+X}\big)\;$$
This linear ODE is easy to solve. The solution is :
$$Y(X)=2+c_1+X-2\sqrt{1+c_1+X}+Ce^{-2\sqrt{1+c_1+X}}$$
The equation of the trajectory is :
$$y(x)=\pm\sqrt{2+c_1+x^2-2\sqrt{1+c_1+x^2}+Ce^{-2\sqrt{1+c_1+x^2}}}$$
In order to know the time at each position on the trajectory, one has to integrate
$\frac{dx}{dt}=y(x)$, i.e. $\:dt=\frac{dx}{y(x)}$ :
$$t=\int \frac{dx}{ \sqrt{2+c_1+x^2-2\sqrt{1+c_1+x^2}+Ce^{-2\sqrt{1+c_1+x^2}}} }$$
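If one wants to pursue the numerical route suggested in the question, a standard approach is shooting: integrate the system to $t=1$ and scan the parameter for a sign change of $x(1)$. A sketch (SciPy; the parameter values are my own choices, guided by the first answer's requirement $-1<c_1<0$ for closed orbits):
```python
import numpy as np
from scipy.integrate import solve_ivp

def x_at_1(k, c1=-0.5):
    """Integrate the system from (x, y)(0) = (0, k) up to t = 1 and return x(1)."""
    def rhs(t, u):
        x, y = u
        s = np.sqrt(1.0 + c1 + x * x)
        return [y, -x * y**2 / s - (1.0 - s) * x]
    sol = solve_ivp(rhs, [0.0, 1.0], [0.0, k], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

for k in np.linspace(0.05, 1.0, 8):
    print(f"k = {k:.3f}   x(1) = {x_at_1(k):+.6f}")
# a sign change between consecutive k values brackets a root of x(1) = 0,
# which can then be refined with scipy.optimize.brentq
```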
|
13b62b544f72e092cf17d9e8591a7bad63c10671
|
Q: What is the colon operator between matrices? While reading Robust Quasistatic Finite Elements and Flesh Simulation by Teran et al., I have seen in several equations a colon operator used between matrices.
Here are some examples, using matrices $F$, $P$ and $C$:
$\delta F : \delta P > 0$
$\delta F : (\partial P / \partial F) : \delta F > 0$
$i2 = C : C$
The only hint I have is that I believe that $C$ is a diagonal matrix with diagonal elements $[\sigma_1^2, \sigma_2^2, \sigma_3^2]$, and the result of $C : C$ is $\sigma_1^4 + \sigma_2^4 + \sigma_3^4$.
Does anybody know what this operator represents?
A: Since the paper deals with tensors etc, I think it's the "double dot product" as described here:
https://en.wikipedia.org/wiki/Dyadics
Double dot product
$$ A:B = \sum_{j} \sum_{i} (a_i\cdot d_j)(b_i\cdot c_j), $$
or
$$ A:B = \sum_{j} \sum_{i} (a_i\cdot c_j)(b_i\cdot d_j) $$
where $A = \sum_{i} a_i b_i$ and $B = \sum_j c_j d_j$.
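For plain matrices, the second of the two conventions is the Frobenius inner product $A:B=\sum_{ij}A_{ij}B_{ij}=\operatorname{tr}(A^TB)$ (and for symmetric matrices such as the diagonal $C$ both conventions agree), which reproduces the $C:C$ value guessed in the question. A quick check (NumPy; the sample $\sigma_i$ are mine):
```python
import numpy as np

sigma = np.array([2.0, 3.0, 4.0])
C = np.diag(sigma**2)                 # diagonal matrix of sigma_i^2
double_dot = np.tensordot(C, C)       # default axes=2: sum_ij C_ij * C_ij
print(double_dot)                     # 353.0
print(np.trace(C.T @ C))              # same value
print(np.sum(sigma**4))               # sigma1^4 + sigma2^4 + sigma3^4
```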
|
8028249b218779a1ad9b6fb476cddfbb4e554ee8
|
Q: Mathematical game - finding values I have a small mathematical game. A random number with uniform distribution between 1 and 10 is drawn, and the player must guess any number except the one drawn. The method I have found is to select the same number for every draw; then the player will win 9 times out of 10. My question is: is there a better method of playing, say one where the player wins 29 times out of 30, that is, a ratio of 9.66 out of 10?
A: Your strategy might depend on the lack of 'randomness' of your random number source, if any.
For a truly random source, if there is no history-dependence which would cause a correlation between consecutive outputs, you have no better and no worse strategy than guessing $1$ every time. However if the source isn't ideal (say, it returns numbers in a cycle: $1,\,2,\,3,\dots,\,10,\,1,\,2,\dots$) you might discover it and adjust the strategy accordingly.
A: Assuming the chosen number has to also be on the same range, there's no strategy better or worse than picking the same number every time. Whichever number your strategy tells you to pick is selected by the random process exactly $\frac{1}{10}$ of the time because the distribution is uniform.
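A quick Monte Carlo illustration of the second answer (the simulation is mine): with a uniform source, a fixed guess wins exactly when the draw differs from it, so no strategy can beat $9/10$:
```python
import random

trials = 10**6
wins = sum(random.randint(1, 10) != 1 for _ in range(trials))  # always guess 1
print(wins / trials)   # ~0.9, independent of which fixed number is guessed
```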
|
0604b15886368f80d9832a3bc913cdd5abf5f9d2
|
Q: 2-edge colorable graph approximation A 2 edge-colorable graph is a graph in which we can color the edges with two colors, in a way such that no edges of the same color share a vertex.
Given a graph G = (V,E) I want to find a 2 edge-colorable subgraph of G that has the maximum number of edges.
Currently I am using this algorithm: I find a maximum size matching M1 in G and then I find a maximum size matching M2 in G - M1. I would like to show that this algorithm achieves a 3/4-approximation for this problem. Can anyone help me prove that?
A: Each $2$ edge-colorable subgraph $H$ of $G$ is a union of vertex disjoint paths or cycles of even length. Let $H$ be a maximum size $2$ edge-colorable subgraph $H$ of $G$, $m=|E(H)|$, and $M_1$ be a maximum size matching in $G$. Since each monochromatic set of edges constitutes a matching, $m_1=|M_1|\ge |E(H)|/2=m/2$. Next, since a subgraph of a $2$ edge-colorable graph is $2$ edge-colorable, in the set $E(H)\setminus M_1$ we can find a matching of size at least $|E(H)\setminus M_1|/2$. Then $$m_1+m_2=|M_1\cup M_2|\ge m_1+|E(H)\setminus M_1|/2\ge $$ $$m_1+(m-m_1)/2=(m+m_1)/2\ge (m+m/2)/2=3m/4.$$
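For what it's worth, the two-pass algorithm from the question is easy to prototype (a sketch using networkx; with unit weights and `maxcardinality=True`, `max_weight_matching` returns a maximum-cardinality matching):
```python
import networkx as nx

def two_matchings(G):
    """First color class: a maximum matching M1; second: a maximum matching in G - M1."""
    M1 = nx.max_weight_matching(G, maxcardinality=True)
    H = G.copy()
    H.remove_edges_from(M1)
    M2 = nx.max_weight_matching(H, maxcardinality=True)
    return M1, M2

G = nx.petersen_graph()
M1, M2 = two_matchings(G)
print(len(M1), len(M2))   # sizes of the two color classes found
```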
|
b127020d17234bee1ef01e0164e9b0b888419c62
|
Q: price and quantity after taxation Given that demand for a good X is equal to $q_D=393-2p$ and market supply is $q_S=p/4-12$. Find equilibrium price and quantity, consumer and producer surplus and draw a diagram illustrating the situation. Given that:
a) $T=20\% \pi$, total profit is taxed
b) $T=200$ tax does not depend on volume and value of goods sold
will it be simply $$393-2p=p/4-12-200$$
? Obviously I need to find $p$.
I have already calculated the equilibrium price and quantity before taxation: $p=180$, $q=33$. How do I find the situation after taxation in those two cases?
A: $$393 − 2p = \frac p 4 − 12 − 200 - 0.2 \cdot \text{profit} .$$
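A quick SymPy check of the pre-tax equilibrium quoted in the question (the computation is mine):
```python
import sympy as sp

p = sp.symbols('p', positive=True)
eq_price = sp.solve(sp.Eq(393 - 2*p, p/4 - 12), p)[0]   # demand = supply
print(eq_price, 393 - 2*eq_price)                       # 180 and q = 33
```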
|
89ee31d652e7481df902e093930662464cb4dd28
|
Q: Find a matrix-valued product, given the eigenvalues and eigenspaces of the matrix
$M$ is a matrix, $3\times 3$. The space of the eigenvalue $-2$ is $\{(0; t; s) : t; s \in \mathbb{R}\}$ and the space of the eigenvalue $5$ is $\{(0; t; s) : t; s \in \mathbb{R}\}$.
Then they ask the value of
$$M\begin{pmatrix} 1\\1\\-2\end{pmatrix}$$
I've tried using the formula: $A = P D P^{-1}$
P being
$$
\begin{pmatrix}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
\end{pmatrix}
$$
And $D$ being
$$
\begin{pmatrix}
-2 & 0 & 0 \\
0 & -2 & 0 \\
0 & 0 & 5 \\
\end{pmatrix}
$$
But it gets me nowhere near the solution which is the vector $(5, -2, 4)$
A: The eigenspaces of $-2$ and $5$, as written, are nontrivial and coincide, which is not possible. Judging from your matrix $P$ I guess that the eigenspace of $5$ is actually $\{(r,0,0) \mid r \in \mathbb{R}\}$.
Then we already have all we need: We have $M = P D P^{-1}$ for the matrices
$$
P =
\begin{pmatrix}
0 & 0 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
\end{pmatrix}
\quad\text{and}\quad
D =
\begin{pmatrix}
-2 & 0 & 0 \\
0 & -2 & 0 \\
0 & 0 & 5 \\
\end{pmatrix}.
$$
Because $P^{-1}$ is given by
$$
P^{-1} =
\begin{pmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0 \\
\end{pmatrix}
$$
we get that
$$
M \begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix}
= PDP^{-1} \begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix}
= PD \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}
= P \begin{pmatrix} -2 \\ 4 \\ 5 \end{pmatrix}
= \begin{pmatrix} 5 \\ -2 \\ 4 \end{pmatrix}.
$$
A: If Jendrik's assumption is correct, observe that
$$\begin{pmatrix}1\\ 1\\ -2\end{pmatrix}=\begin{pmatrix}1\\ 0\\ 0\end{pmatrix}+\begin{pmatrix}0\\ 1\\ -2\end{pmatrix},
$$
hence its image under $M$ is
$$5\begin{pmatrix}1\\ 0\\ 0\end{pmatrix}-2\begin{pmatrix}0\\ 1\\ -2\end{pmatrix}.$$
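A numeric confirmation of the computation in the first answer (NumPy, assuming the corrected eigenspaces):
```python
import numpy as np

P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
D = np.diag([-2, -2, 5])
M = P @ D @ np.linalg.inv(P)
print(M @ np.array([1, 1, -2]))   # [ 5. -2.  4.]
```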
|
005c3fac9107b902c6aa46f28040e92acc83c822
|
Q: Proving that $x^{\alpha}(1+\Vert x\Vert^{2})^{-k}$ belongs to $L^{2}(\mathbb{R}^{n})$ Let $\alpha\in\mathbb{N}^{n}$ be a multi-index, i.e. $\alpha=(\alpha_{1},\dots,\alpha_{n})$ such that $x^{\alpha}:=\prod_{i=1}^{n}x^{\alpha_{i}}_{i}$. The modulus of a multi-index is defined as the quantity $\vert\alpha\vert:=\sum_{i=1}^{n}\alpha_{i}$. Let $\Vert x\Vert$ denote the euclidean norm of $x$, i.e. $\Vert x\Vert:=\sqrt{\sum_{i=1}^{n}x^{2}_{i}}$. Define the following function, for a fixed $\alpha\in\mathbb{N}^{n}$ and a fixed $k\in\mathbb{N}_{0}$
$$f_{\alpha,k}:\mathbb{R}^{n}\to\mathbb{C}:x\mapsto\frac{x^{\alpha}}{(1+\Vert x\Vert^{2})^{k}}$$
I want to prove that $f_{\alpha,k}\in L^{2}(\mathbb{R}^{n})$ (the space of square integrable functions over $\mathbb{R}^{n}$) if and only if $2k>\vert\alpha\vert+\tfrac{n}{2}$. It is possible that this bound is false. In that case, do not hesitate to correct it. Anyway, here is my attempt:
\begin{align*}
\int_{\mathbb{R}^{n}}\vert f_{\alpha,k}(x)\vert^{2}\text{d}x &= \int_{\mathbb{R}^{n}}\left\vert \frac{x^{\alpha}}{(1+\Vert x\Vert^{2})^{k}}\right\vert^{2}\text{d}x
\\ &= \int_{\mathbb{R}^{n}}\left\vert \frac{x^{2\alpha}}{(1+\Vert x\Vert^{2})^{2k}}\right\vert\text{d}x\\
\end{align*}
Since $\vert x^{2\alpha}\vert\le C(1+\Vert x\Vert^{2})^{\vert\alpha\vert}$, for a constant $C>0$, we have
\begin{align*}
\int_{\mathbb{R}^{n}}\vert f_{\alpha,k}(x)\vert^{2}\text{d}x &= \int_{\mathbb{R}^{n}}\left\vert \frac{x^{2\alpha}}{(1+\Vert x\Vert^{2})^{2k}}\right\vert\text{d}x\\
&\le C\int_{\mathbb{R}^{n}}\left\vert \frac{1}{(1+\Vert x\Vert^{2})^{2k-\vert\alpha\vert}}\right\vert\text{d}x\\
&=C\int_{\mathbb{R}^{n}}\frac{1}{(1+\Vert x\Vert^{2})^{2k-\vert\alpha\vert}}\text{d}x
\end{align*}
But I'm stuck here. I don't know how to get rid of this integral. Any hint is appreciated. Thank you.
A: You can use spherical coordinates. In this way, the terms in $\cos \theta_i$ and $\sin\theta_i$ only appear in the numerator (the denominator depends only on $r$). Consequently, the considered integral is convergent if and only if $I:=\int_0^{+\infty}r^{n-1+2|\alpha|}/(1+r^2)^{2k}\mathrm dr$ converges. Since the only problem is at infinity, $I$ is convergent if and only if $\int_1^{+\infty} r^{-(4k-n-2|\alpha|+1)}\mathrm dr$ converges. This is equivalent to
$$4k-n-2|\alpha| \gt 0.$$
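A numeric illustration of the radial criterion (the sample values of $n$, $|\alpha|$, $k$ are mine):
```python
import numpy as np
from scipy.integrate import quad

def radial(n, a, k, upper):
    """Truncated radial integral of r^(n-1+2a) / (1+r^2)^(2k) on [0, upper]."""
    val, _ = quad(lambda r: r**(n - 1 + 2*a) / (1 + r*r)**(2*k), 0, upper)
    return val

# n = 3, |alpha| = 1, k = 2: 4k - n - 2|alpha| = 3 > 0, so the integral is finite
print(quad(lambda r: r**4 / (1 + r*r)**4, 0, np.inf)[0])
# borderline case 4k - n - 2|alpha| = 0: the truncated integral grows like log R
for R in (1e2, 1e3, 1e4):
    print(radial(3, 1, 1.25, R))
```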
|
6ae471476c9337721980b8ec7249e10d552e4b86
|
Q: Unclear passage in integration involving Gamma functions I find myself in need of some advice on an integration problem.
Let $F(x,\lambda)=\Gamma(x,\lambda)/\Gamma(x)$, $x,\lambda>0$ be the Regularized Upper Incomplete Gamma Function, where $\Gamma(x,\lambda) = \int_\lambda^\infty e^{-s}s^{x-1}ds,$ and $\Gamma(x) = \int_0^\infty e^{-t}t^{x-1}dt$.
Let
(1) $m_k(\lambda)=\int_0^\infty x^k d F(x,\lambda)$
In some notes I'm reading I found the following identity
(2) $m_k(\lambda)=k \int_0^\infty x^{k-1} [1-F(x,\lambda)]dx$.
Any idea on how to go from (1) to (2)? Maybe I'm missing something or there's a typo somewhere I can't spot. Note: based on this result, another one is derived, that actually seems correct (although based on empirical evidence only).
|
7411f13b117879788d356f522c28e0ddb659a337
|
Q: Set partition: partitioning into X sets with Y elements each. How do you partition a set into X new sets that each have Y elements?
Example: 18 unique cards are divided among six people, and each person gets 3 cards. How many combinations are there?
A: This is the number of permutations of 18 elements, divided six times by the number of permutations of 3 elements, i.e. $\frac{18!}{(3!)^6}$.
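That count is easy to evaluate (the computation is mine):
```python
from math import factorial

# 18!/(3!)^6: deal 18 distinct cards into six labeled hands of three
print(factorial(18) // factorial(3)**6)   # 137225088000
```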
|
5a9ec4c78108d0d7bdd39058d2d32e5d87c7a9d6
|
Q: Is there a solution to this unidirectional wave equation, with initial value $u=f(x)$ on $x=t^2$? Unidirectional wave equation:
$$\frac{du}{dt}+c\frac{du}{dx}=0$$
The initial value $u=f(x)$ is given on the parabola $x=t^2$.
Is there a solution to this problem, discuss why the solution is unique and differentiable or discuss why there is no solution.
And is there a solution to the problem if $t\leq c/2$?
Can someone help me how to start with this? Do I need Fredholm alternative to see how many solutions. Or what should be the first step. Characteristics maybe?
A: You have your initial condition on the curve defined by $x=t^2$. At the same time you should know that the characteristics of the transport equation are the straight lines
$$
x=ct+\xi,
$$
where $\xi$ is some constant, and your solution is constant along the characteristics.
Clearly, no matter what the value of $c$, the straight line with $\xi=0$ will cross the curve of the initial conditions twice. Since we know that $f$ is arbitrary, this implies that in general the same characteristic carries two different initial conditions, hence no solution exists in general.
The case $t\leq c/2$ I will leave to you. You need to analyze your initial conditions and characteristics and conclude that...
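The double crossing is easy to make explicit (a SymPy check of the intersection count; the computation is mine):
```python
import sympy as sp

t, c, xi = sp.symbols('t c xi', real=True)
# intersections of a characteristic x = c*t + xi with the data curve x = t^2
roots = sp.solve(sp.Eq(t**2, c*t + xi), t)
print(roots)   # two roots whenever c**2 + 4*xi > 0; for xi = 0 they are t = 0 and t = c
```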
|
90e7c74cc15da1efa85dfa2970ed6b83c5743d7d
|
Q: P is a natural number. 2P has 28 divisors and 3P has 30 divisors. How many divisors will 6P have? While answering aptitude questions in a book I faced this question but was not able to find the solution, so I searched Google and got two answers but didn't get an idea of how the answer came about.
Question:
P is a natural number. 2P has 28 divisors and 3P has 30 divisors. How many divisors of 6P will be there?
Solution 1:
2P has 28 (= 4·7) divisors but 3P does not have a divisor count divisible by 7, so the first part of the number P must be $2^5$.
Similarly, 3P has 30 (= 3·10) divisors but 2P does not have a divisor count divisible by 3, so the second part of the number P must be $3^3$. So $P = 2^5\cdot 3^3$ and the solution is 35.
Solution 2:
2P has 28 divisors $= 4\times 7$,
3P has 30 divisors,
hence $P=2^5\cdot 3^3$,
$6P =2^6\cdot 3^4$,
hence 35 divisors.
I have been trying to understand the steps but am not able to.
A: To understand the answer you should know how to find the number of divisors of a number from its prime factorization: The number $n=2^a3^b5^c7^d\cdots p^k$ has exactly $\tau(n)=(a+1)(b+1)(c+1)\cdots(k+1)$ divisors.
So if $P=2^a3^b5^c\cdots$ then $2P$ has $\tau(2P)=(a+2)(b+1)(c+1)\cdots$ divisors and $3P$ has $\tau(3P)=(a+1)(b+2)(c+1)\cdots$ divisors. The first solution observes that $7\mid \tau(2P)$ and $7\nmid \tau(3P)$, hence we must have $7\mid a+2$. (Note: In the form written in the OP, the conclusion that $a+2=7$ is a bit too hasty). Similarly, $3\mid \tau(3P)$, $3\nmid \tau(2P)$ tells us that $3\mid b+2$ (and not immediately $3=b+2$). We learn little about $c$ etc., i.e., about primes $\ge 5$ dividing $P$.
So let us justify the conclusions made: While $7\mid a+2$ allows $a+2$ to be any of $7,14,21,28$, we know that $a+1\mid \tau(3P)$, but of the numbers $6,13,20,27$ only $6$ is in fact a divisor of $\tau(3P)=30$. Thus we can indeed conclude that $a=5$. With this in mind, we know that $(b+1)(c+1)\cdots = \frac{\tau(2P)}{a+2}=4$ and $(b+2)(c+1)\cdots =\frac{\tau(3P)}{a+1}=5$. We conclude as above that $5\mid b+2$ and - this time in fact directly - $5=b+2$.
A: First, we want to know how to easily calculate the number of divisors of any number. If we have $n=p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}$ where all $p_i$ are distinct, then to construct a divisor we have $e_1+1$ choices for the number of factors $p_1$ in our divisor, $e_2+1$ choices for the number of factors $p_2$, etc., making the number of divisors equal to $(e_1+1)(e_2+1)\cdots (e_k+1)$. So for example, $12=2^2\cdot 3$ so $12$ has $(2+1)(1+1)=6$ divisors.
Let's look at the first hint now, $2P$ having 28 divisors. Let's write $2P=p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}$. The number of divisors is now $(e_1+1)(e_2+1)\cdots (e_k+1)=28$. So, we can say that one power, say $e_1$, must be $6$, $13$ or $27$ (since 28 has a factor 7, and we assumed it is contained in $e_1+1$). Our options are now:
$$2P=p_1^6\cdot p_2\cdot p_3$$
$$2P=p_1^6\cdot p_2^3$$
$$2P=p_1^{13}\cdot p_2$$
$$2P=p_1^{27}$$
The second hint says that 3P has 30 divisors. Since this does not contain a factor 7, we know that the $2$ in $2P$ must be responsible for this (notice that it follows that $e_1$ is the exponent of 2 in $2P$). Thus we know that $p_1=2$. Now we can, by our previously stated options, calculate P.
$$P=2^5\cdot p_2\cdot p_3$$
$$P=2^5\cdot p_2^3$$
$$P=2^{12}\cdot p_2$$
$$P=2^{26}$$
In the third option, the number of divisors of $3P$ must be divisible by $12+1=13$, and in the last case, the number of divisors of $3P$ must be divisible by $27$. We conclude the last two are not possible, since 30 is not divisible by either 13 or 27. In the first case, the number of divisors of $3P$ is divisible by $6$ because of the amount of factors 2, and we also know that $3P$ there has (at least) one prime factor that it only contains one time, so we have another factor 2 in the number of divisors of $3P$. Now the number of divisors of $3P$ is divisible by $12$, which is also impossible ($12$ does not divide $30$). We conclude that $P$ must be of the form $p_1^5\cdot p_2^3$. We also know that if $p_2\neq 3$, then the number of divisors of $3P$ is divisible by $6$ because of the factors 2 and by $4$ because of the factors $p_2$. This is again impossible.
Finally, we must have $p_2=3$ so $P=2^5\cdot 3^3$. Now we can easily calculate the number of divisors of $6P$; it must be $(6+1)(4+1)=35$.
Hope this helped!
A: hint
Let the prime factorization of $P=2^{a} \cdot 3^{b} \cdot 5^{c} \cdot 7^{d} \ldots$. Then the number of divisors of $P$ is given by
$$(a+1)(b+1)(c+1)\ldots.$$
If $2P$ has $28$ Divisors then
$$(a+2)(b+1)(c+1)\ldots=28.$$
Likewise
If $3P$ has $30$ Divisors then
$$(a+1)(b+2)(c+1)\ldots=30.$$
I hope now you can understand the solutions you found. If not let me know I will elaborate.
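A brute-force confirmation of all three solutions (SymPy; the search bound is mine):
```python
from sympy import divisor_count

hits = [P for P in range(1, 100000)
        if divisor_count(2*P) == 28 and divisor_count(3*P) == 30]
print(hits)                                  # [864], i.e. P = 2**5 * 3**3
print([divisor_count(6*P) for P in hits])    # [35]
```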
|
37335b8ac2b32c2ed6fd44e767b608859f3279f0
|
Q: What kind of differential equation is this one and how to solve it? On working on a physical problem that is of interest to me, I arrived at a differential equation that I desperately need to solve: $$\frac{dy}{dt}+c y=\frac{\partial y}{\partial t}$$ where $c$ is a constant and $y$ is the unknown function that I need to solve for.
I don't know if such an equation falls under a known class of differential equations, since it contains both ordinary and partial derivative operators. Note: I arrived at this equation from this one: $\frac{dy}{dt}+c y=\frac{\partial y}{\partial \theta}\frac{d\theta}{dt}$ where $\theta$ is an unknown function that I know can only be a function of time, hence $\frac{d\theta}{dt}=\frac{\partial \theta}{\partial t}$, so $\frac{\partial y}{\partial \theta}\frac{d\theta}{dt}=\frac{\partial y}{\partial \theta}\frac{\partial \theta}{\partial t}=\frac{\partial y}{\partial t}$, and hence the equation in the question above. Is there any problem with this reasoning in the first place?
A: Your observation ''$\frac{\partial y}{\partial \theta} \frac{\partial \theta}{\partial t} = \frac{\partial y}{\partial t}$'' is not quite correct. To see this, we consider the different number of variables the function $y$ can have.
Suppose $y$ is a function of $\theta$ only. If $\theta$ itself is a function of $t$, $y$ is a function of $t$ only (through $\theta$). For clarity, we introduce $Y(t) = y(\theta(t))$. Then, we have
\begin{equation}
\frac{\text{d} Y}{\text{d} t} = \frac{\text{d}}{\text{d} t} y(\theta(t)) = \frac{\text{d} y}{\text{d} \theta}\,\frac{\text{d} \theta}{\text{d} t}.
\end{equation}
As you can see, no partial derivatives are necessary, since all the functions involved each depend on one variable only.
Now, suppose $y$ is a function of both $\theta$ and $t$, so we write $y(\theta,t)$. Now, you have reason to assume that $\theta$ itself is a function of $t$. If we plug that into $y$, then $y$ depends on $t$ only -- albeit in a more complex fashion, both explicitly through its second variable, and implicitly through the $t$-dependence of $\theta$. For clarity, we introduce $\eta(t) = y(\theta(t),t)$. Then, we have
\begin{equation}
\frac{\text{d} \eta}{\text{d} t} = \frac{\text{d}}{\text{d} t} \Big[y(\theta(t),t)\Big] = \frac{\text{d} \theta}{\text{d} t}\,\frac{\partial y}{\partial \theta} + \frac{\partial y}{\partial t}.
\end{equation}
So, taking your equation
\begin{equation}
\frac{\text{d}}{\text{d} t} y + c y = \frac{\text{d} \theta}{\text{d} t}\frac{\partial y}{\partial \theta},
\end{equation}
we see that the right hand side can be written as
\begin{equation}
\frac{\text{d} \theta}{\text{d} t}\frac{\partial y}{\partial \theta} = \frac{\text{d}}{\text{d} t} y - \frac{\partial y}{\partial t},
\end{equation}
leading to the equation
\begin{equation}
c y = - \frac{\partial y}{\partial t}.
\end{equation}
Therefore, we know that $y$ can be written as
\begin{equation}
y(\theta,t) = f(\theta) e^{-c t}.
\end{equation}
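One can confirm the final formula symbolically (a SymPy sketch; the set-up is mine): substituting $y=f(\theta)e^{-ct}$ into the original equation makes both sides agree, since the total $t$-derivative goes through $\theta(t)$ by the chain rule:
```python
import sympy as sp

t, c = sp.symbols('t c', real=True)
theta = sp.Function('theta')(t)
f = sp.Function('f')
y = f(theta) * sp.exp(-c*t)          # candidate solution y(theta, t)

lhs = sp.diff(y, t) + c*y            # total d/dt, chain rule through theta(t)
rhs = sp.diff(y, theta) * sp.diff(theta, t)
print(sp.simplify(lhs - rhs))        # 0
```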
|
872f5e5d13d1544e6f41c61c7b87d8bf6e3da449
|
Q: A formal proof of how the following recurrence computes the $n$th Catalan number. $t(0)=1$,
$t(n+1)=\sum_{i=0}^n t(i)\, t(n-i)$, $n\ge 0$.
The $n$th Catalan number is given by:
$t(n)=\binom{2n}{n}/(n+1)$.
I tried breaking the latter formula into a summation of pairs, but it did not work.
My question is how the recurrence relation can be obtained from the $n$th Catalan number formula, or whether someone can see the solution intuitively.
A: Use generating functions.
Define $C(z) = \sum_{n \ge 0} t(n) z^n$, multiply your recurrence by $z^n$ and add over $n \ge 0$. Recognize the resulting sums:
$\begin{align}
\sum_{n \ge 0} t(n + 1) z^n
&= \sum_{n \ge 0} \sum_{0 \le i \le n} t(i) t(n - i) z^n \\
\frac{C(z) - t(0)}{z}
&= C^2(z)
\end{align}$
This gives the quadratic:
$\begin{align}
z C^2(z) - C(z) + 1
= 0
\end{align}$
Solutions to this are:
$\begin{align}
C(z)
= \frac{1 \pm \sqrt{1 - 4 z}}{2 z}
\end{align}$
We know $t(0) = 1$, so it should be that $\lim_{z \to 0} C(z) = 1$, and the right sign is negative.
Expanding the root as a series by the generalized binomial theorem we get:
$\begin{align}
\sqrt{1 - 4 z}
&= \sum_{n \ge 0} (-1)^n \binom{1/2}{n} \cdot 4^n z^n \\
&= 1 + \sum_{n \ge 1}
(-1)^n
\cdot \frac{(-1)^{n - 1}}{2^{2 n - 1} n}
\binom{2 n - 2}{n - 1} \cdot 4^n z^n \\
&= 1 - \sum_{n \ge 1} \frac{2}{n} \binom{2 n - 2}{n - 1} z^n
\end{align}$
Replacing in the expression for $C(z)$:
$\begin{align}
C(z)
&= \frac{1 - \left(
1 - \sum_{n \ge 1} \frac{2}{n} \binom{2 n - 2}{n - 1} z^n
\right)}{2 z} \\
&= \sum_{n \ge 1} \frac{1}{n} \binom{2 n - 2}{n - 1} z^{n - 1} \\
&= \sum_{n \ge 0} \frac{1}{n + 1} \binom{2 n}{n} z^n
\end{align}$
Thus $t(n) = \frac{1}{n + 1} \binom{2 n}{n}$, as claimed.
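A quick numeric cross-check of the conclusion (mine):
```python
from math import comb

t = [1]                                   # t(0) = 1
for n in range(10):                       # recurrence t(n+1) = sum t(i) t(n-i)
    t.append(sum(t[i] * t[n - i] for i in range(n + 1)))
print(t)
print([comb(2*n, n) // (n + 1) for n in range(11)])   # identical lists
```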
|
a8a8ae739864f8d85ebe3a1348725a9292918d1c
|
Q: When do matrices have positive eigenvectors? We have a $3\times 3$ matrix $A$ and we know $\det(A) > 0$; how can we prove that there is always a positive eigenvector?
|
0fd7943eb1186c16b2820c28adb0cbea2646f7ff
|
Q: $\{x \in \Bbb Z\,|\, x-j=km \text{ for some } k \in \Bbb Z\} = \bigcup_{k \in \Bbb Z}\{km+j\}$? Let $m$ be any fixed positive integer. For each integer $j$, $0\le j \lt m$, let $\Bbb Z_j=\{x \in \Bbb Z\,|\, x-j=km \text{ for some } k \in \Bbb Z\}$.
Then, when we consider the definition below, can we denote $\{x \in \Bbb Z\,|\, x-j=km \text{ for some } k \in \Bbb Z\}$ as $\bigcup_{k \in \Bbb Z}\{km+j\}$ or $\bigcup_{j=0}^{m-1}\{km+j\}$?
FYI
"Definition 6 Let F be an arbitrary family of sets. The union of the sets in F, denoted by $\bigcup\mathscr F$, is the set of all elements that are in A for some $A\in\mathscr F$.
$\bigcup\limits_{A \in \mathscr F}A$={$x\in U$|x∈A for some $A\in F$}"
If the family $\mathscr F$ is indexed by the set $\Gamma$, the following alternate notation may be used:
$\bigcup\limits_{r\in \Gamma}A_r$={$x\in U$ | $x \in A_r$ for some $r\in \Gamma$}
If the idex set $\Gamma$ is finite, $\Gamma$={1, 2, 3,..., n} for some natural number n, more intuitive notations such as
$\overset {n}\bigcup\limits_{i=1}A_i$ or $A_1 \bigcup A_2 \bigcup \cdots \bigcup A_n$
are often used for $\bigcup_{r \in \Gamma}A_r$.
Source: Set Theory by You-Feng Lin, Shwu-Yeng T. Lin.
A: That is correct, although it is customary to denote such a union of singletons as
$$\{km+j|k\in\mathbb Z\}$$
A: You also can denote the partition associated with the congruence in a compact form as $$\mathbf Z=\bigcup_{j=0}^{m-1}(j+\mathbf Zm).$$
|
417ed7067b708fd3136e421983e15de4b6833f87
|
Q: Evaluating the integral of a $\cos(\theta)$ within the exponential wrt $\theta$ I want to evaluate the following integral
$\int^{2\pi}_0 \, d\theta e^{- i k (x - x')\cos{\theta}}$,
where all of the variables are real and $i$ is the imaginary unit. The difficulty is the cosine term within the exponent. Are there any techniques one could suggest to reduce this to an integral which is easier to evaluate? I have seen similar posts but have not been able to relate the problem to them.
Thanks for your help!
A: Well, if you let $u=k(x-x')$, we're looking at $\int_0^{2\pi}e^{iu\cos\theta}d\theta$. Using the fact that our integrand is $2\pi$-periodic, we know that this is also equal to $\int_0^{2\pi}e^{iu\cos(\theta+\pi)}d\theta=\int_0^{2\pi}e^{-iu\cos\theta}d\theta$. Taking the mean of these two, we're now calculating:
$$\int_0^{2\pi}\cos(u\cos\theta)d\theta=2\pi J_0(u)$$
where $J_0$ is the Bessel function of order $0$.
One way we can justify this is by expanding our integrand as $\sum_n\frac{(-1)^nu^{2n}}{(2n)!}\cos ^{2n}\theta$ and using the result that $\int_0^{2\pi}\cos^{2n}\theta d\theta=2\pi\frac{2n\choose n}{2^{2n}}$.
The interchange of summation and integration is well-justified as the series converges extremely quickly. For an explicit comparison, the triangle inequality lets us dominate the series by the series for $\cosh\vert u\vert$, which converges uniformly on compact subsets of $\mathbb{R}$ (indeed, of $\mathbb{C}$).
Putting these together, we get:
$$\sum_n[\frac{(-1)^nu^{2n}}{(2n)!}][2\pi\frac{2n\choose n}{2^{2n}}]=2\pi\sum_n\frac{(-1)^n(u/2)^{2n}}{(n)!^2}=2\pi J_0(u)$$
by comparing to the series for the Bessel function.
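A numeric check of the identity (SciPy; the sample value of $u$ is mine):
```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

u = 1.7
re, _ = quad(lambda th: np.cos(u * np.cos(th)), 0, 2*np.pi)
im, _ = quad(lambda th: np.sin(u * np.cos(th)), 0, 2*np.pi)
print(re / (2*np.pi), j0(u))   # the two values agree
print(im)                      # ~ 0: the imaginary part integrates away
```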
|
987f23abd39d61cb536b28da35cd39d63d50299a
|
Q: Transforming ODE into polar form Let $z=\rho e^{i\phi}$ be a complex number and $\alpha$ some parameter. I determined the following ODE
$$
\dot{\rho}e^{i\phi}+i\rho\dot{\phi}e^{i\phi}=\rho e^{i\phi}(\alpha+i-\rho^2).
$$
How to get the polar form
$$
\dot{\rho}=\rho(\alpha-\rho^2),\qquad\dot{\phi}=1
$$
from this? I do not see it...
A: If you assume $\rho$ and $\phi$ are real (and you do), then you're given a complex equation for two real variables. First, you divide both sides by $e^{i \phi}$. Then, you write both sides in the form $a+b i$, yielding
\begin{equation}
\big(\dot{\rho}\big) + i \big(\rho \dot{\phi}\big) = \big(\alpha \rho- \rho^3\big) + i \big(\rho\big).
\end{equation}
The complex number on the left hand side must equal the complex number on the right hand side, so both the real and imaginary parts must match. Therefore, you get two equations:
\begin{align}
\dot{\rho} &= \alpha \rho - \rho^3, \\
\rho \dot{\phi} &= \rho.
\end{align}
I'm sure you can take it from here.
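The real/imaginary split can also be checked mechanically (SymPy; treating $\dot\rho$, $\dot\phi$ as auxiliary real symbols is my device):
```python
import sympy as sp

rho, drho, dphi, alpha = sp.symbols('rho drho dphi alpha', real=True)
# after dividing by e^{i phi}: (rho' + i rho phi') - rho (alpha + i - rho^2) = 0
expr = sp.expand((drho + sp.I * rho * dphi) - rho * (alpha + sp.I - rho**2))
print(sp.re(expr))   # drho - alpha*rho + rho**3  ->  rho' = rho (alpha - rho^2)
print(sp.im(expr))   # dphi*rho - rho             ->  phi' = 1
```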
|
119a75e4906e93b6e92f3ea16cce0e6bbbe5ad58
|
Q: Diagonalization problem in terms of direct sum In section 8 of "Linear Algebra and Geometry", Kostrikin says that we are going to study the problem of taking a matrix associated with a linear map $f\colon L\to M$ and putting it in its "simplest form" using appropriate bases. Then he says that in "direct sum language", the problem is formulated in the following way:
"Let us construct the exterior direct sum of spaces $L\bigoplus M$ and associate with the mapping f its graph $\Gamma_{f}$: the set of all vectors $(l, f(l)$ in the sum. Its easy to verify that it is a subspace of the sum. We are interested in the invariants of the arrangement of $\Gamma_{f}$ in $L\bigoplus M$."
Can someone explain this analogy more clearly? I don't get what is meant with "invariants" and "arrangement" in this context. Thanks.
|
2c08705151f07fd74a25ff7eb82422312f4687bb
|
Q: Negation of injectivity I'm having some problems understanding the negation of injectivity.
Take the function $f: \mathbb{R} \rightarrow \mathbb{R}$ given by $f(x) = x^2$. The formal definition of injectivity is $f(a)=f(b) \implies a = b$. Therefore the function $f(x)$ is not injective because $-1 \neq 1$ while $f(-1)=f(1)=1$.
But when I try to specify the negation of the statement "f is injective", I run into problems. I know that the negation of "P implies Q" is "P but not Q" so the formal definition of non-injectivity should be $f(a)=f(b) \implies a\neq b$, right? The problem is this statement doesn't hold for the function $f(x)=x^2$, because $f(1) = f(1)$ while it's not true that $1 \neq 1$.
What am I doing wrong?
A: There exist $a,b \in \mathbb{R}$ such that $f(a) = f(b)$ but $a\neq b$; this is the negation of injectivity.
A:
"P but not Q" so the formal definition of non-injectivity should be
$f(a)=f(b) \implies a\neq b$, right?
Wrong, but close. You said "P but not Q" (which really means "P and not Q") and then you wrote the equivalent of "P implies not Q". These are different.
You also have to be careful how to take negation inside a quantifier. The definition of injectivity really is "for all $x,y$ something", which is negated as "there exists $x,y$ for which NOT something". Substituting "P implies Q" for "something", and using the rule for negating an implication, we get that the negation of "for $x,y$ P implies Q" is "there exists $x,y$ such that P and not Q".
A: By definition, $f$ is injective if and only if
$$ \forall(a,b) \in \mathbb{R}^2: f(a)=f(b) \implies a=b. $$
The negation of this statement is
$$ \exists (a,b) \in \mathbb{R}^2: f(a)=f(b) \quad \text{and} \quad a \neq b. $$
$f(x)=x^2$ is not injective because there exists the pair $(-1,1)$ such that $(-1)^2 = 1^2$ but $-1 \neq 1.$
A: As you noticed, the negation of "$P$ implies $Q$" is "$P$ but not $Q$". However, made formal for injectivity, this should be stated as $f(a)=f(b) \wedge a\neq b$, i.e. both statements $f(a)=f(b)$ and $a\neq b$ hold, for some numbers $a$ and $b$.
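The existential form of the negation is easy to illustrate on a finite window of the domain (my example):
```python
# f(x) = x^2 restricted to {-2,...,2}: exhibit witnesses a != b with f(a) = f(b)
domain = range(-2, 3)
f = lambda x: x * x
witnesses = [(a, b) for a in domain for b in domain if a < b and f(a) == f(b)]
print(witnesses)   # [(-2, 2), (-1, 1)] -- so f is not injective
```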
|
d5aa94d52b936e846d3700fc4430a61f81d18d48
|
Q: Existence finite limit implies bounded sequence Consider a sequence of real numbers $\{a_n\}_n$. I have read in several sources that
(*) If $\exists$ $\lim_{n\rightarrow \infty}a_n=L$ with $-\infty<L<\infty$ then $\{a_n\}_n$ is bounded i.e. $\exists$ $0<M<\infty$ s.t. $|a_n|\leq M$ $\forall n$
Isn't the correct statement
(**) If $\exists$ $\lim_{n\rightarrow \infty}a_n=L$ with $-\infty<L<\infty$ then $\{a_n\}_n$ is bounded above and/or below?
?
Why?
Do similar conclusions hold for a function $f:\mathbb{R}\rightarrow\mathbb{R}$ with $\lim_{x\rightarrow \infty}f(x)=L$ and $-\infty<L<\infty$?
A: The first one is the correct one. By definition, $(a_n)_n$ converges to $L\in\mathbb{R}$ means
$$
\forall \varepsilon > 0\ \exists N_\varepsilon \geq 0 \text{ s.t. } \forall n \geq N_\varepsilon, \ \lvert a_n - L \rvert \leq \varepsilon.
$$
Take $\varepsilon = 1$, for instance:
$$
\exists N_1 \geq 0 \text{ s.t. } \forall n \geq N_1, \ \lvert a_n - L \rvert \leq 1.
$$
Now, by definition of the absolute value, this means that for all $n \geq N_1$
$$
L-1 \leq a_n \leq L+1.
$$
This implies that $(a_n)_n$ is bounded by $\ell=\max(\lvert L-1\rvert,\lvert L+1\rvert) $, except for maybe the first few (at most $N_1-1$) terms. But there are a finite number of them, so you are done: letting $M=\max(\ell, \lvert a_1\rvert,\dots, \lvert a_{N_1-1}\rvert)$, you then get that $\lvert a_n\rvert \leq M$ for all $n\geq 1$.
Edit: you can mimic this proof for continuous functions $f\colon\mathbb{R}\to\mathbb{R}$ having finite limits at $\pm\infty$. (the key is that at some point, you'll need to consider $\sup_{[-A,A]} \lvert f\rvert$ to conclude, and continuity ensures that this is finite.)
A: If $\{a_n\}_n\subset \Bbb R$ and $\lim_{n\rightarrow \infty}a_n=L\in \Bbb R=(-\infty,\infty)$, then
$$\exists N>0 \qquad\text{such that}\qquad |a_n-L|<1 \quad \forall n\geq N$$
this implies that
$$ -1<a_n-L<1 \implies |a_n|<\max\{|L+1|,|L-1|\} \qquad \forall n\geq N$$
So you can choose
$$M= \max\{|a_0|,\ldots,|a_N|,|L-1|,|L+1|\}$$
|
d16c13524dc407239f6aca929869a57623dbb894
|
Q: Is it true that a cycle with a period of 29 hours over 24 hours leads to a non-recurring pattern and how to prove it? The default 'reset time' for Internet Information Services is 29 hours. The reason for this is that
'Wade [person on the team who developed the setting] suggested 29 hours for the simple reason that it’s the smallest prime number over 24. He wanted a staggered and non-repeating pattern that doesn’t occur more frequently than once per day. In Wade’s words: “you don’t get a resonate pattern”'.
Source: http://blogs.iis.net/owscott/why-is-the-iis-default-app-pool-recycle-set-to-1740-minutes
Is it true that if you have a cycle (of say 24 hours) and within this cycle period you want a non-resonant, staggered, non-repeating pattern with period bigger than the cycle period, you have to use a prime number larger than this cycle period? How can this be proved?
Edit:
A simple calculation shows that this cycle is recurring as follows: 5, 10, 15, 20, 1, 6, 11, 16, 21, 2, 7, 12, 17, 22, 3, 8, 13, 18, 23, 4, 9, 14, 19, 0, 5, 10, 15, 20, (...)
So after 24 draws every value has been hit and the cycle restarts. This cycle is repeating after all. I don't know how to formalize this, but I think the question can be answered with "False": it does not lead to a non-recurring pattern.
A: If integer $r$ is the length of the reset interval, and resets begin at $t=0$, then the sequence of reset times (as measured on a $24$-hour clock) is
$$R_{24,r} = (t\ \mathtt{mod}\ 24:\ \ t\ \mathtt{mod}\ r=0,\ t\in \{0,1,2,...\} )$$
where $\ a\ \mathtt{mod}\ b\ $ denotes the remainder of the division of $a$ by $b$.
NB:
If $q,r$ are positive integers, then the infinite sequence $$R_{q,r} = (t\ \mathtt{mod}\ q:\ \ t\ \mathtt{mod}\ r=0,\ t\in \{0,1,2,...\} )$$ is periodic with fundamental period $\frac{q}{\gcd(q,r)}$, where $\gcd$ denotes the greatest common divisor. Thus, the largest possible fundamental period is $q$, which is attained whenever $q,r$ are coprime (i.e., whenever $\gcd(q,r)=1$).
That is, $R_{q,r}$ contains a minimal repeating cycle of length $\frac{q}{\gcd(q,r)}$. If $q$ is fixed and $r=1,2,3,...$, then the corresponding fundamental periods range over all the divisors of $q$ (in haphazard order). What varies as $r$ varies is the length and internal structure of the minimal repeating cycle.
It seems evident that Wade's actual criteria are just these:
*There must be at most one reset every $24$ hours.
*The repeating cycle of reset times ($\mathtt{mod}\ 24$) should not be simply $(0,1,2,...,23)$.
Now (1) requires that $r\ge 24$, so consider the first few possibilities:
r minimal repeating cycle in the sequence of reset times (mod 24)
-- ------------------------------------------------------------------------------------
24 0
25 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23
26 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22
27 0, 3, 6, 9, 12, 15, 18, 21
28 0, 4, 8, 12, 16, 20
29 0, 5, 10, 15, 20, 1, 6, 11, 16, 21, 2, 7, 12, 17, 22, 3, 8, 13, 18, 23, 4, 9, 14, 19
The smallest integer $r$ satisfying both (1) and (2) is seen to be $r=29$.
NB: In our case of $R_{24,r}$, the maximum period ($24$) is attained when $\gcd(24,r) = 1$, i.e., when $r \in \{1,5,7,11,13,17,19,23,25,29,31,35,... \}$. It just happens that $r=29$ is the least of these with $r\ge 24$ and such that the minimal repeating cycle is not simply $(0,1,2,...,23)$.
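The table is easy to regenerate (a sketch of my own, using the fundamental-period formula from the NB):
```python
from math import gcd

def cycle(r, q=24):
    """Minimal repeating cycle of reset times mod q for reset interval r."""
    period = q // gcd(q, r)
    return [(k * r) % q for k in range(period)]

for r in range(24, 30):
    print(r, cycle(r))   # reproduces the rows of the table above
```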
A: No, it isn't true. If you must have a period which is an integer number of hours, any period which is relatively prime to 24 (that is, doesn't share a prime factor with 24) will only hit the same time of day after 24 repetitions. For example: 25 hours would be an hour later each day until it works through all possible times and back to the starting time.
As other answers have noted, the language of the source is incoherent and betrays ignorance of the numerical issues in question.
First of all, if you really wanted to have a different time each day, you would count in minutes (and use any number relatively prime to 1440) or seconds (and use any number relatively prime to 86400).
Secondly, if you wanted to be clever about it and choose an interval which was as far as possible from a "simple" fraction of 24 hours or 1440 minutes or 86400 seconds, you would choose $\frac12(\sqrt 5+1)$ of a day. This is the most difficult ratio of all to approximate as a simple fraction.
But basically all these numerical acrobatics are pointless given the relatively mild requirement "not to reset at the same time each day".
A: The design somewhat contradicts the stated reasons for choosing $29$.
The reset time is specified not in hours but in minutes, $1740 = 29 \times 60$. By taking it to be a multiple of $1$ hour, there is a synchronization of the resets with the hours of a $24$ hour clock, which makes the possible period lengths $60$ times shorter. The designers created resonance while trying to avoid it.
If the resolution of users' time observation is approximately an hour it is still possible to increase the perceived periodicity at a $1$-hour scale beyond $24$, by using the freedom to select the reset time in minutes.
|
7266ff559a4292196a2b806a7c0ac931ebd8135f
|
Q: The sequential compactness argument from an abstract point of view In the following, I am trying to consider the (sequential) compactness methods in applications from a rather abstract point of view. I am not sure, whether such an abstraction is really meaningful.
Preliminary remark: Let $X$ be a set, $\tau_1$ and $\tau_2$ two topologies on $X$ and consider the join topology $\tau_1 \vee \tau_2$,
i.e. the topology generated by $\tau_1 \cup \tau_2$.
If $x \in X$ and $x_\alpha$ is a net in $X$ that $\tau_1$-converges to $x$ and $\tau_2$-converges to $x$
then $x_\alpha$ $(\tau_1 \vee \tau_2)$-converges to $x$.
I often meet compactness theorems in applications that establish $\tau$-convergence of some sequence $x_n$ (for some topology $\tau$) by convergence in some weaker $\tau_1$-topology together with some additional properties that need to be satisfied by the sequence $x_n$. Sometimes, these additional properties can be rephrased by relative (sequential) compactness in another weaker topology $\tau_2$.
As an example, consider $X := L^1(P)$ for some probability measure $P$, $\tau$ the topology of convergence in $L^1$-norm,
$\tau_1$ the topology of convergence in measure and $\tau_2 := \sigma(L^1, L^\infty)$ the weak topology.
The Vitali convergence theorem states that on $L^1(P)$
a sequence $x_n$ $\tau$-converges if and only if $x_n$ $\tau_1$-converges and $x_n$ is uniformly integrable.
By Dunford-Pettis and Eberlein-Smulian, the uniform integrability of $x_n$ is nothing else but relative sequential compactness in the $\tau_2$-topology.
In particular, the Vitali convergence theorem implies that $\tau = \tau_1 \vee \tau_2$.
Question 1: Does it also hold the other way round?
I.e. if $x_n$ is $\tau_2$-convergent and $\tau_1$-relatively sequentially compact does it follow that $x_n$ is $\tau$-convergent?
Question 2: If Question 1 has a positive answer, does it always hold that if on some (possibly nice enough) topological space $(X, \tau)$
it holds that $\tau = \tau_1 \vee \tau_2$ then $\tau_1$-convergence of a sequence or a net and $\tau_2$-relative (sequential) compactness implies its $\tau$-convergence? (Note that in contrast to the above preliminary remark, we do not impose the $\tau_2$-convergence (it will then follow automatically)).
Question 3: It would be also interesting to know of such "additional properties" that can not be rephrased by a relative (sequential) compactness property.
A: Here is a counterexample to Question 2. Consider $X=[0,1)$ with $\tau$ its usual topology. Let $\tau_1$ be the coarser topology that glues $1$ to $0$ (i.e., neighborhoods of $0$ must contain intervals $(1-\epsilon,1)$, so $(X,\tau_1)$ is a circle) and let $\tau_2$ be the coarser topology that glues $1$ to $1/2$ (i.e., neighborhoods of $1/2$ must contain intervals $(1-\epsilon,1)$, so $(X,\tau_2)$ is like the letter P). It is easy to verify that $\tau=\tau_1\vee\tau_2$. But the sequence $(1-1/n)$ converges with respect to $\tau_1$ and $\tau_2$ (to $0$ and $1/2$, respectively), but does not converge with respect to $\tau$.
The additional hypothesis you need for this to work is that if a sequence $(x_n)$ converges to $x$ with respect to $\tau_1$ and converges to $y$ with respect to $\tau_2$, then we must have $x=y$ (note that this hypothesis is certainly necessary if the topologies are Hausdorff, since such a sequence with $x\neq y$ will then give a counterexample as above). With that assumption, suppose $(x_n)$ $\tau_1$-converges to $x$ and is $\tau_2$-relatively sequentially compact. Then every $\tau_2$-convergent subsequence of $(x_n)$ must converge to $x$. So by relative sequential compactness, every subsequence contains a subsubsequence which converges to $x$ with respect to both $\tau_1$ and $\tau_2$ (and hence, with respect to $\tau$). It follows that the sequence $(x_n)$ converges to $x$ with respect to $\tau$.
In particular, this gives a positive answer to Question 1, and in fact shows that if $\tau_1$ and $\tau_2$ are Hausdorff, Question 2 has a positive answer for $(\tau_1,\tau_2)$ iff it has a positive answer for $(\tau_2,\tau_1)$.
(The result of the second paragraph also works with "sequence" replaced by "net" and "relatively sequentially compact" replaced by "relatively compact" everywhere.)
|
b0c3d3814a8425e4f86b7e9e3c2fe4d745445f4a
|
Q: System of equations for operations Given a system with multiple equations, where we know the values and the result, but not the operations between the values:
\begin{cases} 3 ⊕ 5 ⊙ 2 = 13 \\ 7 ⊕ 2 ⊙ 4 = 10 \\ 4 ⊕ 3 ⊙ 3 = 9 \end{cases}
Is there an algorithmic way to deduce $⊕$ and $⊙$ (which in this case would be multiplication and subtraction, respectively)? Like a system of equations where the unknowns are the operations themselves? Does this have a name? Are there ways of calculating it?
A: There is, but it is complicated, and you would need to know more. You would have to know what type of operations $⊕$ and $⊙$ are. Let's start simple. If we were told that $x⊙y$ was of the form $ax+by$ (which it is; ⊙ represents subtraction and we would just plug in 1 for $a$ and -1 for $b$), we could solve a system of equations for $a$ and $b$. Let's say we were given that
$$7⊙2=5$$
$$6⊙4=2$$
$$9⊙1=8$$
Since we are only solving for two variables, we only need two equations. Let's pick $7⊙2=5$ and $6⊙4=2$.
We have that
$$7a+2b=5$$
$$6a+4b=2$$
I am going to solve this system without substitution, because there is an easier method. We start by multiplying the second equation by $-\frac{1}{2}$, and we get $-3a-2b=-1$.
Now let's add the equations:
$$7a+2b=5$$
$$\underline{-3a-2b=-1}$$
$$4a=4$$
From here, we can see that $a=1$. Plugging this in to any equation, let's take the first, gives us $7+2b=5$, and from here we can solve for $b$ to get that $b=-1$.
Thus $x⊙y=(1)x+(-1)y=x+(-y)=x-y$, so we have that $x⊙y=x-y$. Now we know that $⊙$ represents subtraction.
Now let's say we wanted to solve for $⊕$. Since this is multiplication, we would be told that $x⊕y=pxy$, from which we could solve for $p$ (which would be $1$). Or, we could have a definition like $x⊕y=pxy+qx+ry$, where $p$ would be $1$, and $q$ and $r$ would each be $0$. We could even have $x⊕y=pxy+qx+ry+s$, and we would have to solve for all of these (we would need more equations).
Now, bringing in the two operations, we have
$$3⊕5⊙2=13$$
$$7⊕2⊙4=10$$
$$4⊕3⊙3=9$$
And thus
$$(p(3*5))⊙2=13$$
$$(p(2*7))⊙4=10$$
$$(p(4*3))⊙3=9$$
Going further,
$$15pa+2b=13$$
$$14pa+4b=10$$
$$12pa+3b=9$$
And we would have to solve for $a$, $b$, and $p$. We would get that they are $1, -1,$ and $1$, respectively, and then we could determine the operations.
Adding in exponents, things would get crazy. You'd probably be better off with trial and error.
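Note that the substitution $u=pa$ makes the last system linear, so a least-squares solve recovers it at once (a NumPy sketch; the set-up is mine):
```python
import numpy as np

A = np.array([[15, 2], [14, 4], [12, 3]], dtype=float)   # coefficients of u = p*a and b
rhs = np.array([13, 10, 9], dtype=float)
(u, b), *_ = np.linalg.lstsq(A, rhs, rcond=None)
print(u, b)   # 1.0 -1.0: p*a = 1 and b = -1, matching multiplication and subtraction
```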
|
3bb09357cf31866ced2b70d31f2f81bab1a3fe22
|
Q: Vector multiplication with transpose I have the two vectors $X$ and $Q$.
I want to calculate the following:
$(X-Q)^T(X-Q)$.
This is what I get:
$X^TX - X^TQ -Q^TX + Q^TQ $
The solutions state that this can be rewritten as:
$X^TX - 2Q^TX + Q^TQ$
I can't see how that could be done; is it possible somehow?
A: For two vectors $X,Q \in \mathbb R^n$, we have that $X^TQ$ is just another way to write down the scalar product of $X$ and $Q$: $$X^TQ=\langle X,Q \rangle = \sum_{i=1}^n x_i q_i.$$
From this sum, we see that the scalar product of two real vectors is symmetric: $X^TQ = \langle X,Q \rangle = \langle Q,X \rangle = Q^TX.$
If $X$ and $Q$ are complex vectors then we still have the algebraic identity $X^TQ = Q^TX,$ but this expression is not the scalar product of $X$ and $Q$ anymore (and we cannot, e.g. expect $(X-Q)^T(X-Q)$ to be real, or nonnegative).
A: Notice that $X^T Q$ is an $1 \times 1$ matrix, and therefore $X^T Q = (X^T Q)^T = Q^T X$. Inserting this in your solution gives the desired result.
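A numeric spot check (NumPy, column vectors; the sample data are mine):
```python
import numpy as np

x = np.array([[1.0], [2.0], [3.0]])
q = np.array([[0.5], [-1.0], [2.0]])
lhs = (x - q).T @ (x - q)
rhs = x.T @ x - 2 * q.T @ x + q.T @ q
print(lhs, rhs)   # both [[10.25]]: the 1x1 matrices agree
```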
|
f9d9c82e99bcc864bc8288f9840ed35e4b1ef323
|
Q: onto but not one-to-one on set of Natural Numbers Let $\mathbb{N} = \{0, 1, 2, 3, ...\}$.
Is there a function from $\mathbb{N}$ to $\mathbb{N}$ which is an onto function but not one-to-one function?
I have tried it but could not find any such function.
A: Yes, many; for example $$f(2n) = f(2n+1) = n$$
A: Just do a left-shift
$$
f(n) = \begin{cases}
0 & \text{ if } n = 0\\
n-1 & \text{ if } n > 0
\end{cases}
$$
A: Just take $f$ defined by $f(0) = 0$, and $f(n+1) = n$.
A: Yet another assignment:
$$n \mapsto \left\lfloor \frac n5\right\rfloor$$
For each number $n$ it returns one fifth of $n$ rounded down to the nearest integer, so it is:
$0$ for $n=0\dots 4$,
$1$ for $n=5\dots 9$,
$2$ for $n=10\dots 14$,
$3$ for $n=15\dots 19$,
and so on.
|
f2035c8b2b14c6f60d0503e385f4ea0d6d3fd5da
|
Q: Show that $\Vert T\Vert = \sup\{\vert \langle Tx, y\rangle\vert : x,y\in E, \,\Vert x\Vert \leq 1, \,\Vert y\Vert \leq 1\}$ I have the following task where I'm blocked:
Show that if $E$ is an inner product space and $T: E\rightarrow E$ is
a bounded linear operator, that is
$$\underset{\Vert x\Vert \leq 1}{\sup} \Vert Tx\Vert < \infty,\quad x\in E$$
then $$\Vert T\Vert = \sup\{\vert \langle Tx, y\rangle\vert : x, y\in E,\,
\Vert x\Vert \leq 1,\, \Vert y\Vert \leq 1\}.$$
In the problem $\langle\cdot, \cdot\rangle$ denotes the inner product. I'm basically stuck here and I don't know how to proceed. I know that generally the norm of a linear operator is defined as:
$$\Vert T\Vert =\underset{\Vert x\Vert \leq 1}{\sup} \Vert Tx\Vert,\quad x\in E.$$
So I guess I need to show that:
$$\Vert T\Vert =\sup \{\Vert Tx\Vert : x\in E,\, \Vert x\Vert \leq 1\} = \sup\{\vert \langle Tx, y\rangle\vert : x, y\in E,\,
\Vert x\Vert \leq 1,\, \Vert y\Vert \leq 1\}.$$
Am I on the right track? Any hints on what I should notice here?
|
aaa0da8c4a29c26f15003c7a35d71c23cdf98be9
|
Q: Finding the number of combinations Some boxes are labeled $4,3,5,9,6$. A box with label $x$ gives a random integer $n$ such that $1\leq n\leq x$. Thus we can get a collection of five numbers, one from each box. One such collection is $4,2,1,2,6$. Two collections are the same if one is a permutation of the other, so $1,2,1,1,6$ is the same as $2,1,1,1,6$. Now we have to calculate the total number of different collections we can get by repeatedly performing this act.
Can someone please help me out with this question??
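Since the state space is tiny ($4\cdot3\cdot5\cdot9\cdot6=3240$ raw outcomes), a brute-force count of the distinct multisets is feasible (the sketch is mine):
```python
from itertools import product

boxes = [4, 3, 5, 9, 6]
outcomes = product(*(range(1, x + 1) for x in boxes))   # 3240 ordered draws
collections = {tuple(sorted(t)) for t in outcomes}      # identify permutations
print(len(collections))
```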
|
439bb49c6cf73500fb0215062fa66b39c8e93850
|
Q: Convergence of Schwartz functions I am proving or disproving the following statement:
Let $f_n$ be a sequence of Schwartz functions in $\mathbb R^d$, such that $f_n$ converges to 0 uniformly. Is it true that $f_n$ converges to 0 in $L^p(\mathbb R^d)$ for all $1 \leq p\leq \infty$?
My idea:
The case $p=\infty$ is automatically true, and for simplicity we consider the case $p=1$ only.
Let $\epsilon>0$, and by uniform convergence, there is an $N$ such that for all $n\geq N$, for all $x\in \mathbb R^d$,
$$
|f_n(x)|<\epsilon
$$
Notice that $f_n$ is Schwartz, and hence integrable. There is an $M_n>0$ such that we have:
$$
\int_{|x|>M_n}|f_n(x)|<\epsilon
$$
But the problem lies in whether $\sup_n M_n<\infty$. If this is true, then we are done.
If on the contrary this is false, please give a counterexample.
Thank you!
Edited: If the above is hard to prove, please try to prove my original question, assuming $f_n$ converges to 0 in the metric of Schwartz space.
A: Given $f\in\mathcal{S}(\mathbb{R}^d)$ and $a>0$ let $f_n(x)=n^{-a}\,f(x/n)$. Then $f_n$ converges uniformly to $0$ but $\|f_n\|_p=n^{d/p-a}\,\|f\|_p$ does not converge to $0$ if $1\le p\le d/a$.
If $f_n$ converges to $0$ in the metric of $\mathcal{S}(\mathbb{R}^d)$, then
$$
\lim_{n\to\infty}\sup_{x\in\mathbb{R}^d}(1+|x|)^{d+1}\,|f_n(x)|=0.
$$
From this it is easy to see that $\lim_{n\to\infty}\|f_n\|_p=0$ for all $p\ge1$.
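The counterexample is easy to see numerically in $d=1$ (the discretization is mine): with $f_n(x)=n^{-a}f(x/n)$ the sup-norm decays like $n^{-a}$ while the $L^1$ norm grows like $n^{1-a}$:
```python
import numpy as np

a = 0.5
f = lambda x: np.exp(-x**2)                # a Schwartz function
x = np.linspace(-2000, 2000, 400001)
dx = x[1] - x[0]
for n in (1, 10, 100):
    fn = n**(-a) * f(x / n)
    print(n, fn.max(), fn.sum() * dx)      # sup -> 0, L1 norm ~ sqrt(pi) * n**0.5
```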
|
c8d7e883f91fd0d7c746e2c637097208d0c8e8f0
|
Q: An inequality on three constrained positive numbers Assume $a,b,c$ are all positive numbers, and $2a^3b+2b^3c+2c^3a=a^2b^2+b^2c^2+c^2a^2$.
Prove that:
$$2ab(a-b)^2+2bc(b-c)^2+2ca(c-a)^2\ge(ab+bc+ca)^2$$
A: Let $a+b+c=3u$, $ab+ac+bc=3v^2$ and $abc=w^3$.
Hence, the condition gives $\sum\limits_{cyc}(a^3b+a^3c-a^2b^2)=\sum\limits_{cyc}(a^3c-a^3b)$
or $9u^2v^2-9v^4+uw^3=u\prod\limits_{cyc}(a-b)$
or $(9u^2v^2-9v^4+uw^3)^2=27u^2(3u^2v^4-4v^6-4u^3w^3+6uv^2w^3-w^6)$
or $28u^2w^6+18u(6u^4-8u^2v^2-v^4)w^3-54u^2v^6+81v^8=0$, which gives
$3(6u^4-8u^2v^2-v^4)^2-28(-2u^2v^6+3v^8)\geq0$
or $108u^8-288u^6v^2+156u^4v^4+104u^2v^6-81v^8\geq0$, which gives
$u^2\geq1.49098...v^2$.
By the way, we get $28u^2w^6+18u(6u^4-8u^2v^2-v^4)w^3=9v^4(6u^2v^2-9v^4)$.
On the other hand, we need to prove that $2\sum\limits_{cyc}(a^3b+a^3c-2a^2b^2)\geq9v^4$ or
$6u^2v^2-9v^4+2uw^3\geq0$ or $9v^4(6u^2v^2-9v^4)+18uv^4w^3\geq0$
or $28u^2w^6+18u(6u^4-8u^2v^2-v^4)w^3+18uv^4w^3\geq0$
or $27u^3-36uv^2+7w^3\geq0$.
By Schur $w^3\geq4uv^2-3u^3$.
Hence, it remains to prove that $u^2\geq\frac{4}{3}v^2$, which is obvious.
|
e8b3c6abfd930c34e42125eee9b30d8b72627a76
|
Q: Limit of real logarithm During my homework, I've got to the following exercise:
Calculate the following limit (if exists):
$$\lim \limits_{x \to 0}\left(\frac{\ln\big(\cos(x)\big)}{x^2}\right).$$
I looked in the Calculus book to find an identity that will help me resolve the expression, and what I found was:
$$\frac{1}{n}\ln(n) = \ln(\sqrt[n]n)$$
I couldn't manage to prove that identity and thus couldn't solve the question.
So what I need is:
*
*Proof of the identity,
*Help in solving the question.
Context:
Didn't learn either L'Hopital, McLauren or Taylor Series yet.
This question came just after the "Continuity" chapter.
A: If you can use:
$$\lim_{u\to 0}\frac{\ln (1+u)}{u} = 1$$
(which can be easily proven by considering the function $u\mapsto \ln(1+u)$, and showing that it is differentiable at $0$ with derivative equal to $1$)
and
$$\lim_{u\to 0}\frac{\sin u}{u} = 1$$
(which also can be shown the same way, since $\sin^\prime 0 = \cos 0 = 1$),
then you have that, for $x\in (-\frac{\pi}{2}, \frac{\pi}{2})\setminus\{0\}$:
$$
\frac{\ln\cos x}{x^2} = \frac{\ln \sqrt{1-\sin^2 x}}{x^2} = \frac{1}{2}\frac{\ln(1-\sin^2 x)}{x^2} = -\frac{1}{2}\frac{\ln(1-\sin^2 x)}{-\sin^2 x}\cdot\frac{\sin^2 x}{x^2}.
$$
Using the above, since $\sin^2 x\xrightarrow[x\to0]{} 0$ , you get, as all limits exist:
$$
\frac{\ln\cos x}{x^2} \xrightarrow[x\to0]{} -\frac{1}{2}\left(\lim_{u\to 0}\frac{\ln (1+u)}{u}\right)\cdot \left(\lim_{x\to 0}\frac{\sin x}{x}\right)^2 = -\frac{1}{2}
$$
A: Use Taylor's formula at order $2$ and equivalents:
*
*$\cos x=1-\dfrac{x^2}2+o(x^2)$
*$\ln(1-u)\sim_0 -u$, hence $\ln(\cos x)\sim_0 -\dfrac{x^2}2+o(x^2)\sim_0 -\dfrac{x^2}2$.
From this, we deduce
$$\frac{\ln\cos x}{x^2}\sim_0 -\frac{x^2}{2x^2}=-\frac12.$$
Without Taylor's formula:
You can prove directly that $\;\displaystyle\lim_{x\to 0}\frac{1-\cos x}{x^2}=\lim_{x\to 0}\frac{\sin^2 x }{x^2(1+ \cos x)}=\frac12$, so that
$$\frac{1-\cos x}{x^2}\sim_0\frac12,\quad\text{whence}\quad \cos x\sim_0 1 - \dfrac{x^2}2.$$
A: There are many ways to solve this question:
*
*have you heard about Taylor series? By applying McLaurin rules you get
$$\lim \limits_{x \to 0}\frac{\ln(\cos(x))}{x^2} = \lim \limits_{x \to 0}\frac{\ln \left( 1 - \frac{x^2}{2!} + \frac{x^4}{4!} + O(x^6)\right)}{x^2}\\=\lim \limits_{x \to 0}\frac{-\frac{x^2}{2!} + O(x^4)}{x^2} = \lim \limits_{x \to 0}\frac{x^2 \cdot \left( -\frac{1}{2} + \frac{O(x^4)}{x^2}\right)}{x^2} = \lim \limits_{x \to 0}-\frac{1}{2} + \frac{O(x^4)}{x^2} = -\frac{1}{2}$$
*another approach is using L'Hospital's rule:
differentiate both numerator and denominator (given you have checked the hypotheses) and you get
$$\lim \limits_{x \to 0}\frac{\ln(\cos(x))}{x^2} = \lim \limits_{x \to 0}\frac{\frac{1}{\cos x} \cdot -\sin x}{2x}$$ and since $ \lim \limits_{x \to 0}\frac{\sin x}{x} = 1$ you can simplify to get
$$\lim \limits_{x \to 0}\frac{\ln(\cos(x))}{x^2} = \lim \limits_{x \to 0}\frac{\frac{1}{\cos x} \cdot -\sin x}{2x} = \lim \limits_{x \to 0}-\frac{1}{2 \cos x} = -\frac{1}{2}$$
*classic approach is to check the limit by its definition: let $f(x) = \frac{\ln(\cos(x))}{x^2}$, what you want to check is that for every $\epsilon > 0$ there is some $\delta > 0$ such that
$$|f(x) - \left( -\frac{1}{2} \right) | < \epsilon$$
whenever $0 < | x - 0| < \delta$
A: Use the well-known Taylor series
$$\cos(x)= 1-\frac{1}{2}x^2+\frac{1}{24}x^4+O(x^6)$$
$$\ln(1+x)= x-\frac{1}{2}x^2+\frac{1}{3}x^3-\frac{1}{4}x^4+\frac{1}{5}x^5+O(x^6)$$
to get
$$\ln(\cos(x))=-\frac{1}{2}x^2-\frac{1}{12}x^4+O(x^6)$$
Can you continue?
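(For a quick numerical sanity check of the limit, independent of the Taylor argument:)

    import math
    for x in (0.1, 0.01, 0.001):
        print(x, math.log(math.cos(x)) / x**2)   # tends to -0.5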
|
c60d1b90bba8c37cce0f5cf152b7215c093270f2
|
Q: Solving a third degree polynomial without calculator I was just wondering if it is possible to find a solution to this without using a CAS/Calculator (with wolframalpha I get $x \approx 3.865$)
$\dfrac{1}{x^3} = \dfrac{4}{(10-x)^{3}}, x\in\mathbb{R}$
Thank you!
A: Hint:
$$
\dfrac{1}{x^3} = \dfrac{4}{(10-x)^{3}}
\iff
\left(\dfrac{10-x}{x}\right)^3 = 4
\iff
\left(\dfrac{10}{x}-1\right)^3 = 4
$$
Solution:
$x= \dfrac{10}{\sqrt[3]{4}+1} \approx 3.864882$
A: $$ \frac{1}{x^3} = \frac{4}{(10-x)^3} \\
\left( \frac{10-x}{x} \right) ^3 = 4 \\
\frac{10-x}{x} = \sqrt[3]{4} \\
\frac{10}{x} = 1 + \sqrt[3]{4} \\
x = \frac{10}{1 + \sqrt[3]{4}}
$$
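(A quick numerical check of this closed form, nothing more:)

    x = 10 / (1 + 4 ** (1 / 3))
    print(x)                           # 3.8648821...
    print(1 / x**3 - 4 / (10 - x)**3)  # ~0, so x solves the original equation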
|
4d22a06841276a1bd10f07c31dc43c78243a58b4
|
Q: Distance to a convex set and the inner product so I have a fairly simple question on real analysis that I can totally geometrically agree, but my attempts to prove failed.
Let $B \subseteq \mathbb{R}^N$ closed and convex and let $a \in \mathbb{R}^N, y_0 \in B$ such that $∥a − y_0∥ = d(a, B)$.
Show that
$$⟨x − y_0, a − y_0⟩ ≤ 0, ∀x ∈ B.$$
The idea I pursued the most was to suppose by contradiction that there is an $x\in B$ s.t $⟨x − y_0, a − y_0⟩ >0 $ and try to prove the distance between $x$ and $a$ is less than $\|y_0-a\|$, but I did not manage to do it, and it does not seem to use the convexity hypothesis very well. Could anyone help?
A: Well, if $x\in B$, then by convexity $x_t:=y_0+t(x-y_0)\in B$ for every $t\in[0,1]$, so $d(x_t,a)\ge d(y_0,a)$, which leads to $\vert\vert a-x_t\vert\vert^2\ge\vert\vert a-y_0\vert\vert^2$. Writing $a-x_t=(a-y_0)-t(x-y_0)$, we can see that:
$$\vert\vert a-x_t\vert\vert^2=\vert\vert a-y_0\vert\vert^2+t^2\vert\vert x-y_0\vert\vert^2-2t\langle a-y_0,x-y_0\rangle\ge\vert\vert a-y_0\vert\vert^2$$
Rearranging this gives $2t\langle a-y_0,x-y_0\rangle\le t^2\vert\vert x-y_0\vert\vert^2$; dividing by $t>0$ and letting $t\to 0^+$ yields the desired inequality.
Geometrically, note that $\langle x-y_0,a-y_0\rangle=0$ is a plane, so this result shows that given a point $a\notin B$, there is a plane which $a$ and $B$ lie on opposite sides of.
|
f4f950a3154fb88856c1cdf61ca53247f61f7d50
|
Q: Extreme of $\cos A\cos B\cos C$ in a triangle without calculus.
If $A,B,C$ are angles of a triangle, find the extreme value of $\cos A\cos B\cos C$.
I have tried using $A+B+C=\pi$, and applying all and any trig formulas, also AM-GM, but nothing helps.
On this topic we learned also about Cauchy inequality, but I have no experience with it.
The answer according to Mathematica is when $A=B=C=60$.
Any ideas?
A: If $y=\cos A\cos B\cos C,$
$2y=\cos C[2\cos A\cos B]=\cos C\{\cos(A-B)+\cos(A+B)\}$
As $A+B=\pi-C,\cos(A+B)=-\cos C$
On rearrangement we have $$\cos^2C-\cos C\cos(A-B)+2y=0$$
As $C$ is real, so is $\cos C$
$\implies$ the discriminant must be non-negative: $$\cos^2(A-B)-8y\ge0\iff y\le\dfrac{\cos^2(A-B)}8\le\dfrac18$$
The equality occurs if $\cos^2(A-B)=1\iff\sin^2(A-B)=0$
$\implies A-B=n\pi$ where $n$ is any integer
As $0<A,B<\pi, n=0\iff A=B$ and consequently $$\cos^2C-\cos C+2\cdot\dfrac18=0\implies \cos C=\dfrac12\implies C=\dfrac\pi3$$
$\implies A=B=\dfrac{A+B}2=\dfrac{\pi-C}2=\dfrac\pi3=C$
A: I use the Lagrange multipliers theorem.
Let us define the functions
$$g(A,B,C)=A+B+C-\pi,$$
$$f(A,B,C)=\cos A\cos B\cos C.$$
Then we have
$$\mathrm{d}g(A,B,C)=\mathrm{d}A+\mathrm{d}B+\mathrm{d}C,$$
$$\mathrm{d}f(A,B,C)=-\sin A\cos B\cos C\mathrm{d}A-\cos A\sin B\cos C\mathrm{d}B-\cos A\cos B\sin C\mathrm{d}C$$
where $\left(\mathrm{d}A,\mathrm{d}B,\mathrm{d}C\right)$ are the coordinate forms on $\mathbb{R}^3$ (for example, $\mathrm{d}A[(1,2,3)]=1$).
Then we apply the Lagrange multipliers theorem: we must make every $2\times 2$ minor of the matrix vanish
$$\left(\begin{array}{cccccccc}
1 & 1 & 1 \\
-\sin A\cos B\cos C & -\cos A\sin B\cos C & -\cos A\cos B\sin C
\end{array}\right).$$
This yields
\begin{cases} -\cos A\sin B\cos C + \sin A\cos B\cos C = 0 \\
-\cos A\cos B\sin C + \sin A\cos B\cos C = 0 \\
-\cos A\cos B\sin C + \cos A\sin B\cos C=0
\end{cases}
The first line gives us $\cos C = 0$ (and then $C=\pi/2$) or
$$-\cos A\sin B + \sin A\cos B =0$$
that is
$$\sin(A-B)=0$$
and then $A=B$.
Do the same for the two other lines to get the condition $A=B=C$ (the other conditions are impossible, check that). Because in a triangle, we have $g(A,B,C)=0$, we finally find that $A=B=C=\pi/3$.
A: Assume without loss of generality that $A, B, C$ belong to the first quadrant. By the identity $\sin^2x+\cos^2x=1$, we can see that:
\begin{equation*}
\tan x+\cot x=\frac{1}{\sin x\cos x}
\end{equation*}
So:
\begin{equation}
\frac{\sin\frac{\beta}{2}\sin\frac{\gamma}{2}}{\sin\frac{\alpha}{2}\cos\frac{\alpha}{2}}=\left(\tan\frac{\alpha}{2}+\cot\frac{\alpha}{2}\right)\sin\frac{\beta}{2}\sin\frac{\gamma}{2}
\end{equation}
\begin{equation}
\frac{\cos\frac{\beta}{2}\cos\frac{\gamma}{2}}{\sin\frac{\alpha}{2}\cos\frac{\alpha}{2}}=\left(\tan\frac{\alpha}{2}+\cot\frac{\alpha}{2}\right)\cos\frac{\beta}{2}\cos\frac{\gamma}{2}
\end{equation}
Adding these, we get:
$$\left(\tan\frac{\alpha}{2}+\cot\frac{\alpha}{2}\right)\sin\frac{\beta}{2}\sin\frac{\gamma}{2}+\left(\tan\frac{\alpha}{2}+\cot\frac{\alpha}{2}\right)\cos\frac{\beta}{2}\cos\frac{\gamma}{2}=\frac{\cos\left(\frac{\beta}{2}-\frac{\gamma}{2}\right)}{\sin\frac{\alpha}{2}\cos\frac{\alpha}{2}}\leq \frac{1}{\sin\frac{\alpha}{2}\cos\frac{\alpha}{2}}$$
We conclude that:
\begin{equation}
\left(\tan\frac{\alpha}{2}+\cot\frac{\alpha}{2}\right)\left(\cos\frac{\beta}{2}\cos\frac{\gamma}{2}+\sin\frac{\beta}{2}\sin\frac{\gamma}{2}\right)\leq \frac{1}{\sin\frac{\alpha}{2}\cos\frac{\alpha}{2}}
\end{equation}
Applying the Ravi transformation, we get:
\begin{equation*}
\left(\sqrt{\frac{xy}{z(x+y+z)}}+\sqrt{\frac{z(x+y+z)}{xy}}\right)\times
\end{equation*}
\begin{equation*}
\left( \sqrt{\frac{x(x+y+z)}{(x+z)(x+y)}}\sqrt{\frac{y(x+y+z)}{(x+y)(y+z)}}+ \sqrt{\frac{yz}{(x+z)(x+y)}} \sqrt{\frac{xz}{(x+y)(y+z)}}\right)
\end{equation*}
\begin{equation}
\leq \frac{(x+y)(y+z)}{\sqrt{xyz(x+y+z)}}
\end{equation}
It is easy to see that:
\begin{equation}
(xy+z(x+y+z))\left((x+y+z)\sqrt{xy}+z\sqrt{xy}\right)\leq (x+y)^2(y+z)\sqrt{(x+z)(y+z)}
\end{equation}
The above inequality is equivalent to:
\begin{equation}
(x+z)(y+z)\sqrt{xy}\left(x+y+2z\right)\leq (x+y)^2(y+z)\sqrt{(x+z)(y+z)}
\end{equation}
Which with due cancellation reduces to the inequality below:
\begin{equation}
(x+z)\sqrt{xy}\left(x+y+2z\right)\leq (x+y)^2\sqrt{(x+z)(y+z)}
\end{equation}
By symmetry we conclude the inequalities:
\begin{equation}
(x+y)\sqrt{yz}\left(2x+y+z\right)\leq (y+z)^2\sqrt{(x+z)(x+y)}
\end{equation}
\begin{equation}
(y+z)\sqrt{xz}\left(x+2y+z\right)\leq (x+z)^2\sqrt{(y+z)(x+y)}
\end{equation}
Multiplying, we get:
\begin{equation}
xyz(2x+y+z)(x+2y+z)(x+y+2z)\leq [(x+z)(x+y)][(y+z)(x+y)][(x+z)(y+z)]
\end{equation}
Suppose $xy+xz+yz=1$; note that $x$, $y$ and $z$ will be the cotangents of the angles of an acute triangle, hence it follows that
$\\ \displaystyle [(x+z)(x+y)][(y+z)(x+y)][(x+z)(y+z)]=$
$\\ \displaystyle [x^2+xy+xz+yz][y^2+xy+xz+yz][z^2+xy+xz+yz]=$
$\\ \displaystyle [x^2+1][y^2+1][z^2+1]=[\cot^2\alpha'+1][\cot^2\beta'+1][\cot^2\gamma'+1]=\csc^2\alpha' \csc^2\beta' \csc^2\gamma' \\ \\$
From this we can see that:
\begin{equation*}
\cot\alpha \cot\beta \cot\gamma \left(2\cot\alpha+\cot\beta+\cot\gamma\right)\left(\cot\alpha+2\cot\beta+\cot\gamma\right)\left(\cot\alpha+\cot\beta+2\cot\gamma\right) \leq
\end{equation*}
\begin{equation}
\csc^2\alpha' \csc^2\beta' \csc^2\gamma'
\end{equation}
Extracting the cube root on both sides of the inequality, we have:
\begin{equation*}
(\cot\alpha \cot\beta \cot\gamma)^{\frac{1}{3}} \left(2\cot\alpha+\cot\beta+\cot\gamma\right)^{\frac{1}{3}}\left(\cot\alpha+2\cot\beta+\cot\gamma\right)^{\frac{1}{3}}\left(\cot\alpha+\cot\beta+2\cot\gamma\right)^{\frac{1}{3}}
\end{equation*}
\begin{equation}
\leq \csc^{\frac{2}{3}}\alpha' \csc^{\frac{2}{3}}\beta' \csc^{\frac{2}{3}}\gamma'
\end{equation}
Look at the expression:
$$\sqrt[3]{(2a+b+c)(a+2b+c)(a+b+2c)}$$
So:
$$ \displaystyle a+a+b+c\geq 4\sqrt[4]{a^2bc}$$
$$ \displaystyle a+b+b+c\geq 4\sqrt[4]{ab^2c}$$
$$ \displaystyle a+b+c+c\geq 4\sqrt[4]{abc^2}$$
And see, taking the product:
$$(2a+b+c)(a+2b+c)(a+b+2c)\geq 64\sqrt[4]{a^4b^4c^4}=64abc$$
And so:
$$(2a+b+c)(a+2b+c)(a+b+2c)\geq 64abc$$
Extracting the cube root, we have:
$$\sqrt[3]{(2a+b+c)(a+2b+c)(a+b+2c)}\geq4\sqrt[3]{abc}$$
Make the replacement $a=\cot(\alpha)$, $b=\cot(\beta)$, $c=\cot(\gamma)$, and so
$$\sqrt[3]{(2\cot(\alpha)+\cot(\beta)+\cot(\gamma))(\cot(\alpha)+2\cot(\beta)+\cot(\gamma))(\cot(\alpha)+\cot(\beta)+2\cot(\gamma))}\geq$$
$$4\sqrt[3]{\cot(\alpha)\cot(\beta)\cot(\gamma)}$$
Multiplying by
$\displaystyle (\cot\alpha \cot\beta \cot\gamma)^{\frac{1}{3}}$, we get:
\begin{equation*}
4(\cot\alpha \cot\beta \cot\gamma)^{\frac{2}{3}} \leq
\end{equation*}
\begin{equation}
(\cot\alpha \cot\beta \cot\gamma)^{\frac{1}{3}}\left(2\cot\alpha+\cot\beta+\cot\gamma\right)^{\frac{1}{3}}\left(\cot\alpha+2\cot\beta+\cot\gamma\right)^{\frac{1}{3}}\left(\cot\alpha+\cot\beta+2\cot\gamma\right)^{\frac{1}{3}}
\end{equation}
We arrive by transitivity at the inequality below:
\begin{equation}
4(\cot\alpha \cot\beta \cot\gamma)^{\frac{2}{3}} \leq \csc^{\frac{2}{3}}\alpha \csc^{\frac{2}{3}}\beta \csc^{\frac{2}{3}}\gamma
\end{equation}
We can assume without loss of generality that the angles are in the first quadrant, so we can extract the root and preserve the sign of the inequality, because under this hypothesis all terms are positive. Our inequality is equivalent to the inequality required in the problem. Done. For a longer write-up see the link https://www.overleaf.com/read/qyrxbsjhhjst
A: We know from geometry that for any triangle ABC the distance between its circumcenter $O$ and its orthocenter $H$ can be given by the following formula:
$$OH^2=R^2(1-8\cos A\cos B\cos C)$$
R being the circumradius.
Besides that, we know that orthocenter and circumcenter coincide only if the triangle is an equilateral one.
Therefore, $\cos A\cos B\cos C$ attains a maximum value of $\frac 18$ when $A=B=C=\pi/3$.
No Calculus needed.
A: By the $AM-GM$ inequality, $$\frac{\cos A+\cos B+\cos C}{3}\geq \sqrt[3]{\cos A\cos B\cos C}$$ with equality only when $\cos A=\cos B=\cos C$
A: We may assume $\alpha\leq\beta\leq\gamma$. The product $p:=\cos\alpha\cos\beta\cos\gamma$ is negative iff $\gamma>{\pi\over2}$, so that the minimum value $p_{\min}=-1$ is attained when $\alpha=\beta=0$, $\gamma=\pi$, i.e., for a degenerate triangle.
In the search of $p_{\max}$ we may assume $\gamma\leq{\pi\over2}$, hence $0\leq\cos\gamma\leq1$. From
$$2\cos\alpha\cos\beta=\cos(\alpha-\beta)+\cos(\alpha+\beta)\leq 1+\cos(\alpha+\beta)$$
it follows that
$$p\leq{1\over2}(1-\cos\gamma)\cos\gamma={1\over2}\left({1\over4}-\left(\cos\gamma-{1\over2}\right)^2\right)\leq{1\over8}\ .$$
This proves $p_{\max}={1\over8}$, and this value is attained when $\gamma={\pi\over3}$ and $\alpha=\beta$, i.e., for an equilateral triangle.
A: If you don't like my first solution, here is a simpler one. Assume without loss of generality that $A,B,C$ belong to the first quadrant. Then it's easy to see that:
\begin{align*}
x+y\geq 2\sqrt{xy} \tag{1}
\end{align*}
\begin{align*}
x+z\geq 2\sqrt{xz} \tag{2}
\end{align*}
\begin{align*}
y+z\geq 2\sqrt{yz} \tag{3}
\end{align*}
Multiplying (1),(2),(3), we get:
$$ (x+y)(x+z)(y+z) \geq 8xyz$$
$$ \frac{1}{8} \geq \frac{xyz}{(x+y)(x+z)(y+z)}$$
$$ \displaystyle \frac{1}{8} \geq \sqrt{\frac{xy}{(x+z)(y+z)}}\sqrt{\frac{xz}{(x+y)(y+z)}}\sqrt{\frac{yz}{(x+y)(x+z)}} $$
By the Ravi transformation and the law of cosines, we get
$$ \frac{1}{8} \geq \sin\frac{\alpha}{2} \sin\frac{\beta}{2} \sin\frac{\gamma}{2}$$
And this is equivalent to:
$$ \frac{1}{8} \geq \cos \left(\frac{\pi}{2}-\frac{\alpha}{2}\right) \cos \left(\frac{\pi}{2}-\frac{\beta}{2}\right) \cos \left(\frac{\pi}{2}-\frac{\gamma}{2}\right)$$
Take the substitution $\displaystyle \frac{\pi}{2}-\frac{\alpha}{2}=A$, $\displaystyle \frac{\pi}{2}-\frac{\beta}{2}=B$, $\displaystyle \frac{\pi}{2}-\frac{\gamma}{2}=C$; then $\displaystyle A+B+C=\pi$ and:
$$ \frac{1}{8} \geq \cos A\cos B\cos C$$
A: Due to the inequality of means and the law of cosines, we know that:
\begin{equation*}
4a^4=((a^2+b^2-c^2)+(a^2+c^2-b^2))^2\geq 4(a^2+b^2-c^2)(a^2+c^2-b^2)=\\ 16a^2bc\cos\beta \cos\gamma \Rightarrow a^2\geq 4bc\cos\beta \cos\gamma
\end{equation*}
\begin{equation*}
4b^4=((a^2+b^2-c^2)+(b^2+c^2-a^2))^2\geq 4(a^2+b^2-c^2)(b^2+c^2-a^2)=\\ 16ab^2c\cos\alpha \cos\gamma \Rightarrow b^2\geq 4ac\cos\alpha \cos\gamma
\end{equation*}
\begin{equation*}
4c^4=((a^2+c^2-b^2)+(b^2+c^2-a^2))^2\geq 4(a^2+c^2-b^2)(b^2+c^2-a^2)=\\ 16abc^2\cos\alpha \cos\beta \Rightarrow c^2\geq 4ab\cos\alpha \cos\beta
\end{equation*}
Multiplying the last three inequalities and extracting the root (assuming, without loss of generality, that alpha, beta and gamma are in the first quadrant) we will have the required inequality.
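Independent of all the arguments above, a brute-force numerical check (a sketch; the grid resolution is arbitrary) that $\frac18$ is indeed the maximum, attained at the equilateral triangle:

    import numpy as np

    best_p, best_angles = -np.inf, None
    for A in np.linspace(0.01, np.pi - 0.02, 500):
        B = np.linspace(0.01, np.pi - A - 0.01, 500)
        C = np.pi - A - B                      # C determined by A + B + C = pi
        p = np.cos(A) * np.cos(B) * np.cos(C)
        i = p.argmax()
        if p[i] > best_p:
            best_p, best_angles = p[i], (A, B[i], C[i])
    print(best_p, best_angles)   # ~0.125 at A = B = C = pi/3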
|
d24fc7e0109004709b69e760eb8ddc34c5c0ca5c
|
Q: "the only odd dimensional spheres with a unique smooth structure are $S^1$, $S^3$, $S^5$, $S^{61}$" This (long) paper,
Guozhen Wang, Zhouli Xu.
"On the uniqueness of the smooth structure of the 61-sphere."
arXiv:1601.02184 [math.AT].
proves that
the only odd dimensional spheres with a unique smooth structure are $S^1$, $S^3$, $S^5$, $S^{61}$.
The new result is for $S^{61}$.
Is it possible to give some intuition on this remarkable result, for those
not steeped in algebraic and differential geometry, and so not intimately familiar with homotopy groups of spheres? Any attempt would be welcomed.
A: Results of this form, and my intuition from them, come from the Kervaire-Milnor paper on exotic spheres. (There was never a homotopy spheres II. The purported content of that unpublished paper appears to be summarized in these notes, though I haven't read them.) I'm going to need to jump into the algebra here; personally, I couldn't tell you the difference between $S^{57}$ and $S^{61}$ without it.
For $n \not\equiv 2 \bmod 4$, there is an exact sequence $$0 \to \Theta_n^{bp} \to \Theta_n \to \pi_n/J_n \to 0.$$ For $n=4k-2$, instead we have the exact sequence $$0 \to \Theta_n^{bp} \to \Theta_n \to \pi_n/J_n \xrightarrow{\Phi_k} \Bbb Z/2 \to \Theta_{n-1}^{bp} \to 0.$$
Let's start by introducing the cast of characters.
$\Theta_n$ is the group of homotopy $n$-spheres. It's smooth manifolds, up to diffeomorphism, which are homotopy equivalent (hence by Smale's h-cobordism theorem, and in low dimensions Perelman's and Freedman's work, homeomorphic) to the $n$-sphere $S^n$. (Actually, we identify $h$-cobordant manifolds. Because $h$-cobordism is now known in all dimensions at least 5, it changes nothing for high-dimensional manifolds; but it explains why $\Theta_4=1$ is possible even though it's an open problem, suspected to be false, that the 4-sphere admits a unique smooth structure. In any case, this is not an important aside.) The group operation is connected sum. The data we're really after is $|\Theta_n|$ - the number of smooth structures.
$\Theta_n^{bp}$ is the subgroup of those $n$-spheres which bound parallelizable manifolds. This subgroup is essential, because it's usually the fellow forcing us to have exotic spheres in the other dimensions.
This group is always cyclic (Kervaire and Milnor provide an explicit generator). As a rough justification for this group: the way this goes is by taking an arbitrary element, writing down a parallelizable manifold it bounds, and using the parallelizability condition to do some simplifying algebra until this bounding manifold is particularly simple - at which point you identify it as a connected sum of standard ones, hence that $\Theta_n^{bp}$ is cyclic, generated by the standard one. I (or rather, Milnor and Kervaire) can tell you its order: If $n$ is even, $\Theta_n^{bp} = 0$ (the trivial group); if $n=4k-1$, $$|\Theta_n^{bp}|=2^{2k-2}(2^{2k-1}-1) \cdot \text{the numerator of }\frac{4B_k}{k}$$ is sort of nasty, but in particular always nonzero when $k>1$; and for $n=4k-3$, it is either 0 or $\Bbb Z/2$, the first precisely if $\Phi_k \neq 0$ in the above exact sequence.
$\pi_n/J$, and the map $\Theta_n \to \pi_n/J$, is a bit harder to state; $\pi_n$ is the stable-homotopy group of spheres, $J$ is the image of a certain map, and the map from $\Theta_n$ sends a homotopy $n$-sphere, which is stably parallelizable, to its "framed cobordism class". The real point, though, is that this term $\pi_n/J$ is entirely the realm of stable homotopy theory. This is precisely why people now say that the exotic spheres problem is "a homotopy theory problem". (To give the slightest bit more detail: The Thom-Pontryagin construction gives that $\pi_n = \Omega_n^{fr}$, the framed cobordism group, whose elements are equivalence classes of manifolds with trivializations of the "stable tangent bundle". Every homotopy sphere is stably trivial, and the image of $J$ is precisely the difference between any two stable trivializations.) This map $\Theta_n \to \pi_n/J$ might motivate the introduction of $\Theta_n^{bp}$ - since that is, more or less obviously, the kernel. The fact that this map is not always surjective - the obstruction supplied by $\Phi_k$ - is the statement that not every framed manifold is framed cobordant to a sphere. I find it somewhat surprising that so many actually are!
The last thing you should know is about the map $\Phi_k$. It's known as the Kervaire invariant. It's known to be nonzero in dimensions $k=1,2,4,8,16$, and might be nonzero in dimension $32$, but that's open. The remarkable result of Mike Hill, Mike Hopkins, and Doug Ravenel is that $\Phi_k = 0$ for $k > 32$. I don't have much to say about this, other than that it's there. Summing up what we have so far:
For dimensions $n=4k-1>3$, there are always exotic spheres coming from $\Theta_n^{bp}$ - lots of them! For dimensions $n=4k-3$, $\Theta_n^{bp} = \Bbb Z/2$ unless $k=1,2,4,8,16,32$. So the only possible odd-dimensional spheres with a unique smooth structure are $S^1$, $S^3, S^5, S^{13}, S^{29}, S^{61}$, and $S^{125}$.
Now to deal with special cases. It is classical that $S^1$ and $S^3$ have a unique smooth structure ($S^3$ is due to Moise); $S^5$ is dealt with by 1) finding a 6-manifold of nonzero Kervaire invariant, showing that $\Phi_2 \neq 0$ and hence that $\Theta_5^{bp}=0$; and then 2) calculating that $\pi_5$, the fifth stable homotopy group of spheres, is zero. You can do this with Serre's spectral sequence calculations. (It was pointed out to me that this means that three different Fields medalists' work went into getting $\Theta_5 = 1$ - Milnor, Serre, Smale. It is worth noting that there is a differential topological proof, coming from the explicit classification of smooth, simply-connected 5-manifolds, but it isn't substantially easier or anything.)
For $S^{13}$ and $S^{29}$, these are disqualified by the homotopy theory calculation that $\pi_{13}/J$ and $\pi_{29}/J$ are not zero. I do not know how these calculations are done - probably the Adams spectral sequence and a lot of auxiliary spectral sequences, which seems to be how a lot of these things are done. Maybe someone else can shed some light on that.
For $S^{125}$, the paper itself sketches why: There's a spectrum known as $tmf$, and the authors are able to write down a homomorphism $\pi_n/J \to \pi_n(tmf)$ and find a class in $tmf$ that's hit when $n=125$.
So what we know now is that $\pi_{61}/J \cong \Theta_{61}$. The content of the paper you're talking about is precisely the calculation that $\pi_{61}/J = 0$. The authors access it through the Adams spectral sequence, as far as I can tell (I am a non-expert). Adams SS is notoriously hard to calculate anything with - mostly the entire content of the paper is identifying a single differential in the whole spectral sequence. Once this is done, they're able to finish the calculation, but it's hard work. If you want a sketch of how this is done, I found the introduction to their paper readable - see section 3 of the paper.
|
6295b65a9b766b32d31c9daf62a9af5a04ffd6fd
|
Q: 5 grades into 4 percentages. English teacher in need of help. I have five numbers; three of them each represent 25% of an average and the last two make up the remaining 25%. One of the last two numbers is worth 10% and the other 15%. How do I add them up into one number?
I have 35 students. One of them has the following grades : 7, 12, 17, 13 and 15. The first three are out of 25% so no problem, but the 13 is out of 15% and the 15 is out of 10%. How do I add them up to make a grade out of 25% ?
I know this is beyond easy for some, so thank you for your help.
A: If you want a percentage as the end result, $$(g_1+g_2+g_3)\times 0.25 + g_4\times 0.15 + g_5\times 0.1$$
where the $g_i$ are the grades as a percentage out of $100$ (so if the student got $20/25$ for the first grade, then $g_1 = 80$), $g_4$ is the task weighted $15\%$ and $g_5$ the one weighted $10\%$.
If you just change all the results to be out of $25$ then you lose information about the relative weighting of each grade, because a test/assignment marked out of $25$ should be (and often is) weighted more than one marked out of, say, $10$.
If this is not an issue, then you can just add the scores together from the $10\%$ and $15\%$ tasks as the fourth percentage.
A: In this case the formula for the weighted mean is
$$ \frac{25}{100} \cdot 7 + \frac{25}{100} \cdot 12 + \frac{25}{100} \cdot 17 + \frac{15}{100} \cdot 13 + \frac{10}{100} \cdot 15 = 12.45 $$
If you want to write one grade with weight 25% that combines the last two grades, the formula is
$$ \frac{15}{100} \cdot 13 + \frac{10}{100} \cdot 15 = \frac{25}{100} \cdot x \implies x = \frac{15 \cdot 13 + 10 \cdot 15}{25} = 13.8$$
If you double check now, you get that the average of $7,12,17$ and $13.8$ is $12.45$ (so that each grade is worth 25%).
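The same computations in a short script, in case you want to process all 35 students (the numbers below are the example student's):

    grades  = [7, 12, 17, 13, 15]
    weights = [0.25, 0.25, 0.25, 0.15, 0.10]
    print(sum(g * w for g, w in zip(grades, weights)))   # 12.45

    combined = (0.15 * 13 + 0.10 * 15) / 0.25            # merged fourth grade
    print(combined)                                      # 13.8
    print((7 + 12 + 17 + combined) / 4)                  # 12.45 again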
|
e0abb9bf6b1d1c5e636aa4f05bdab9f184acc583
|
Q: coefficients of Laurent series of rational function Let $F(z)$ be a rational function $\frac{P(z)}{Q(z)}$ such that the degree of $P(z)$ is less than the degree of $Q(z)$ and suppose that all the zeros of $Q(z)$ are contained in the open disk $|z| < r$.
I know that if $f(z)$ is analytic for $|z| > r$ and bounded by $M > 0$ there, that is, $|f(z)|\le M$ for all $z$ with $|z| > r$, then the coefficients of the Laurent series of $f(z)$ for $|z| > r$ satisfy $a_j = 0$ for $j = 1,2,3,\dots$.
I'm supposed to show that the coefficients of the Laurent series of $F(z)$ for $|z| > r$ satisfy $a_j = 0$ for $j = 1,2,3,\dots$ by using the corollary above. I know that $F(z)$ is analytic for $|z| > r$ but I'm missing the boundedness condition to finish my proof. Any suggestions?
A: As with any nonconstant polynomial, $|Q(z)|\to\infty$ as $|z|\to\infty.$ Then $\deg(P)<\deg(Q)$ implies $|F(z)|\to 0$ as $|z|\to\infty$ and in particular is bounded for $|z|>r$, say by $M$. The Laurent series of $F$ centered at some $z_0$ has coefficients
$$a_k = \frac{1}{2\pi i}\int_{|z-z_0|=R}\frac{F(z)}{(z-z_0)^{k+1}}dz,$$
so
$$|a_k|\leq \frac{M}{R^k}$$
for big enough $R$. If $k\geq 1$, you can take $R\to\infty$ to show $a_k = 0.$
|
b208730fc2f6f4099c22a007686ec028929fb132
|
Q: Finding a differential equation, given two solutions? I need to find a differential equation that is satisfied by the solutions $$y_1 (x) = x $$ and $$ y_2 (x) = \ln(x)$$ on the interval $( 0, + \infty)$. Since these two solutions are linearly independent, I know that the differential equation will have to be of second order. But I cannot come up with one. Is there some general method to come up with differential equations, given some solutions?
A: A general solution (if you want a linear differential equation) is
$$y=c_1x + c_2 \ln{x}$$
then
$$y'=c_1+\frac{c_2}{x}$$
and
$$y''=\frac{-c_2}{x^2}$$
One way to come up with a differential equation is to use these three equations and eliminate the constants $c_1$ and $c_2$.
From the last equation you know that $c_2=-x^2y''$. Putting this in the second equation yields
$$y'=c_1-xy''$$
or
$$c_1=y'+xy''$$
which you can insert into the first equation for $c_1$
to yield
$$y= (y'+xy'')x-x^2y''\ln{x}$$
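A quick symbolic check of this equation (a sketch assuming sympy is available):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    c1, c2 = sp.symbols('c1 c2')
    y = c1 * x + c2 * sp.log(x)
    yp, ypp = sp.diff(y, x), sp.diff(y, x, 2)
    # residual of y = (y' + x y'')x - x^2 y'' ln x
    print(sp.simplify((yp + x * ypp) * x - x**2 * ypp * sp.log(x) - y))  # 0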
|
5296ddb311573eda5e110999dc3cc3757734a4c5
|
Q: Write formulas in specific languages of group. So, for each of the following groups write a formula in the language of group theory, which holds in given group, but doesn't hold in others two.
$(i)$ The integers with addition \ I think it's $\{\Bbb{Z},+,-,0\}$
$(ii)$ Positive reals with multiplication \ I think it's $\{\Bbb{R}_+,*,\,^{-1},1\}$
$(iii)$ Permutations of $\{a,b,c\}$ with composition \ I don't know exactly what it is.
So, I found such formula for $(ii)$, it's:
$\forall x \exists y(x=y*y)$;
This formula holds for $(ii)$ but doesn't hold for $(i)$, because $\forall x\exists y(x=y*y)$ in $(i)$ reads $\forall x\exists y(x=y+y)$, which doesn't hold for $x=1$.
But I've gotten stuck on the other groups and I cannot find anything right for them.
Notice that we don't have '$>$' and '$<$' for the first two, and I'm not sure about last one.
A: If you know anything about group theory, $(iii)$ is just $S_3$ and has exactly $6$ elements (which you can write in the language of groups since you have equality). Clearly, "Having exactly 6 elements" is true in $(iii)$ and false in $(i)$ and $(ii)$. We call this sentence $\varphi$.
As you noted, $\mathbb{R}_+ \models \forall x \exists y (x = y*y)$. This is false in $(i)$ and it is also false in $(iii)$. Notice that a transposition, e.g. $(ab)$, cannot be written as the square of a permutation. We call this sentence $\psi$.
Finally, as Derek Holt suggested, we are already done. Notice that $\neg \varphi \wedge \neg \psi$ is a sentence in FOL which does not hold in $(ii)$ (since $\psi$ holds in $(ii)$) and does not hold in $(iii)$ (since $\varphi$ holds in $(iii)$), but does hold in $(i)$ because $\neg \psi$ and $\neg \varphi$ hold in $(i)$.
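A small brute-force check of both sentences in $S_3$ (a sketch; permutations are encoded as tuples just for illustration):

    from itertools import permutations

    S3 = list(permutations(range(3)))
    compose = lambda f, g: tuple(f[g[i]] for i in range(3))

    print(len(S3))  # 6: "has exactly 6 elements" holds in (iii) only
    # psi = "every x is a square y*y": true in (R_+, *), false in S3,
    # because a transposition is not the square of any permutation
    print(all(any(compose(y, y) == x for y in S3) for x in S3))  # False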
|
67c397b61080715c1e9537920a0f24e8b02d2234
|
Q: Thesis-subjects in number theory for an undergraduate. I am looking for some thesis subjects with regard to number theory. I am having a hard time looking for subjects, because I simply don't know all of number theory. Any suggestions or tips?
Kees Til
|
083f4ececdca55996b6ee68d8e435bea4fda4d55
|
Q: If $\phi:A\to B$ is a ring homomorphism, why does there exist $\psi:\text{spec}(A)\to \text{spec}(B)$? Let $\phi:A\to B$ be a ring homomorphism, where $A$ and $B$ are commutative rings. We know that if $q$ is a prime ideal in $B$, then $\phi^{-1}(q)$ is a prime ideal in $A$. Hence, there exists a mapping $\phi^{*}: \operatorname{Spec}(B)\to\operatorname{Spec}(A)$.
My book (Commutative Algebra by Atiyah and MacDonald, Problem 21 on page 13) then talks about $\phi^{*-1}:\operatorname{Spec}(A)\to\operatorname{Spec}(B)$. Why does this mapping exist? The image of a prime ideal under a ring homomorphism may not be a prime ideal, unless the mapping is surjective.
A: You read incorrectly. $\phi^{*-1}$ is not a map $\operatorname{Spec} A \to \operatorname{Spec} B$; it is the "preimage" operation from subsets of $\operatorname{Spec} A$ to subsets of $\operatorname{Spec} B$. It is induced by $\phi^* : \operatorname{Spec} B \to \operatorname{Spec} A$ just like any map of sets $f : X \to Y$ induces a map $f^{-1} : P(Y) \to P(X)$, defined by
$$f^{-1}(U) = \{ x \in X : f(x) \in U \}.$$
For some context, the equation written in the book is (among others) $\phi^{*-1}(V(\mathfrak{a})) = V(\mathfrak{a}^e)$, where $\mathfrak{a}$ is an ideal of $A$ and $\phi : A \to B$. Here $V(\mathfrak{a})$ is a subset of $\operatorname{Spec} A$, not an element of it.
A: Your skepticism is justified; such a map does not exist, in general.
Perhaps your book is working in a case where $\phi^*$ is known to be bijective, or perhaps your book is speaking of $\phi^{*{-1}}$ of some subset of $\text{Spec}(A)$ (if $f$ is a map of sets $X \to Y$, then $f^{-1}$ makes sense as a map from the power set of $Y$ to the power set of $X$).
|
72be35e9d5d65b928e12abf4460b357dddfaddec
|
Q: Exercise 8.12 Introduction to stochastic processes Gregory Lawler Let $X_t$ be a standard Brownian motion starting at 0 and let
$T=\min \{t:|X_t|=1\}$ and $\hat{T}=\min \{t:X_t=1\}$
(a) Show that there exist positive constants $c$, $\beta$ such that for all $t>0$,
$$P(T>t)\leq ce^{-\beta t}$$
conclude that $E(T)< \infty$.
(b) Use the reflection principle to find the density of $\hat{T}$, and show that $E(\hat{T})=\infty$.
Please help me to start to solve this problem.
A: For the second part, let
$ X_t^* = \max_{s\in [0,t]}X_s$
note that:
$P(\hat T \le t) = P(X_t^*\ge 1)$
and also, by the reflection principle, you can show that the pdf of $X_t^*$ is twice the pdf of $X_t$, but defined on $[0,\infty)$ only.
Put these together, you get the cdf of $\hat T$
$P(\hat T \le t) = 2\left(1- \Phi(1/{\sqrt{t}})\right)$
where $\Phi$ is the cdf of the standard normal r.v., and by integration by parts, you can prove that $E(\hat T)=\infty$.
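A quick Monte Carlo check of this cdf (a sketch; the step and path counts are arbitrary, and discrete monitoring biases the estimate slightly low):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    t, n_steps, n_paths = 2.0, 1000, 5000
    dt = t / n_steps
    paths = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)).cumsum(axis=1)
    hit = (paths.max(axis=1) >= 1).mean()     # Monte Carlo P(T_hat <= t)
    print(hit)                                # slightly low: discrete monitoring
    print(2 * (1 - norm.cdf(1 / np.sqrt(t)))) # exact cdf, ~0.4795 for t = 2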
|
72bb54c2ab782410993e45c5bc3e31e57dfeb126
|
Q: There exists a non-empty open set $U ⊆ \Bbb R^2$ such that $f(x, y) = 0$ for every $(x, y) ∈ U$. Show that $f = 0$, i.e. $f$ is identically zero. Let $f ∈ \Bbb R[x, y]$ be such that there exists a non-empty open set $U ⊆ \Bbb R^2$ such that $f(x, y) = 0$ for every $(x, y) ∈ U$. Show that $f = 0$, i.e. $f$ is identically zero.
My try: Since $f ∈ \Bbb R[x, y]$, where $\Bbb R[x,y]$ is the polynomial ring in two variables, $f$ can be expressed as $a_0x^n+a_1x^{n-1}y+...+a_ny^n$ for some $n$ and $a_0,a_1,...,a_n$.
If we can show that each of the coefficients $a_0,a_1,...,a_n$ are zero then we are done and for that we have to find out $n^{th}$ order partial derivatives at $(0,0)$.
Partial derivatives of the form $$\frac{\partial^n}{\partial x^i \partial y^{n-i}} f$$
I can't figure out what to do next. Help Needed!!
Also other methods of solution greatly accepted.
A: Let $(x_0,y_0)$ be an inner point of this set $U$.
Your function can be written as $$f(x,y) = \sum_{i=0}^n\sum_{j=0}^m f_{ij} (x-x_0)^i (y-y_0)^j$$ with constants $f_{ij}$.
Now, obviously, $$\frac{\partial^{s+p}f }{\partial x^s\partial y^p}(x_0,y_0)=0$$ by the definition of the set $U$. On the other hand, by a direct calculations you can show that $$\frac{\partial^{s+p}f }{\partial x^s\partial y^p}(x_0,y_0)=s!p!f_{sp}$$under the hypothesis that $s\le n$, $p\le m$.
This implies that all coefficients $f_{ij}$ are zero and hence $f$ is identically zero.
A: An algebraic approach. Because $U$ is open, there is a rectangle $[x_0,x_1]\times [y_0,y_1]\subseteq U$. Think of $f$ as an element of $(\mathbb{R}[x])[y]$, so we have $f_0,\dotsc,f_n\in \mathbb{R}[X]$ with
$$f(x,y)=f_x(y)=\sum_{i=0}^n f_i(x)y^i$$
for every $x\in[x_0,x_1]$. We know that $f_{x}=0\in \mathbb{R}[y]$ because it has infinitely many zeros, namely $[y_0,y_1]$. So $f_i(x)=0$ for each $x\in[x_0,x_1]$. So $f_i\in \mathbb{R}[x]$ has infinitely many zeros, namely $[x_0,x_1]$, and thus $f_i=0\in \mathbb{R}[x]$. So $f=0$.
A: My initial instinct is that if $f$ is zero on an open subset of the domain, $f$ is zero on a lot of points, infinitely many in fact. You can use this to your advantage with Lagrange Interpolation. It takes only a finite number of points to determine a polynomial function of a certain degree. Call this degree $d$. Call the number of zero points needed to determine this polynomial $n$. Then pick another zero point (so that we now have $n+1$ points) at which our interpolated polynomial is nonzero. Having to incorporate this point now make the function identically zero. The trick here is to ensure that you are able to pick the right points, but I believe something along these lines could work for even some non-open sets.
Utilizing the open set specifically though, you will need the fact that if a function of one variable is zero on an interval, then the function is zero. Since $U$ is an open set, there is an open disc within $U$, call it $D$. Take any $x$ coordinate that is in $D$ and restrict $f$ to just this coordinate. This restricted $f$ is zero on an interval, so it must also be zero. This works for any $x$ coordinate in $D$, so that now for every $y$, we have an interval at which we know $f$ restricted to that $y$ is zero. The rest of the proof then follows.
|
6099ba4c1b618d5fa33371bd8c397959605462c5
|
Q: approximations in lp spaces If $f$ is a bounded measurable function, then, on every ball, the functions $f*\rho_{\epsilon}$ converge to $f$ in the mean and in measure.
This corollary is from V.I. Bogachev, volume 1, page 253. I do not understand the proof, so I want someone to help me or to point me to another book that presents this proof in more detail. Thanks.
|
076ed7cc535c09b83af4924be508929b47030cc4
|
Q: Solving the Sturm-Liouville problem using Green's function and Spectral Theorem. I am reading a paper that deals with the solution of the Sturm-Liouville problem:
$u''(t) + \rho (t) u + \lambda ^{-1}u= -f $
$ u(0)=u(1)=0$
For $\rho(t) \leq 0 $.
First it is solved the problem:
$u''(t) + \rho (t) u= -f $
$u(0)=u(1)=0 \hspace{1cm}$ (1)
Picking two arbitrary linearly independent solutions $u_1,u_2$ with $u_1(0)=0$, $u_2(1)=0$, and using the variation of constants method, one obtains a particular solution $u_p$. Then, by imposing on the general solution $u=u_p +a u_1 +b u_2$ the boundary conditions of the original problem (1), it is found that the solution to (1) can be written in integral form as $ \int_{0}^{1} k(t,s)f(s)ds$. So the operator $K$ is defined:
$$K:L^2([0,1]) \hspace{1cm} \longrightarrow \hspace{1cm} L^2([0,1])$$
$$ \hspace{4cm} f \hspace{1cm} \longrightarrow \hspace{1cm} u(t)= \int_{0}^{1} k(t,s)f(s)ds$$
where $u$ is the solution to the ODE $u'' + \rho u=-f$ (with the boundary conditions $u(0)=u(1)=0$) and $k(t,s)$ is Green's function:
$$k(t,s):=\left\{\begin{matrix}
- \frac{u_2(t)u_1(s)}{W(0)}& s \leq t \\
-\frac{u_2(s)u_1(t)}{W(0)}& t\leq s
\end{matrix}\right.
$$
I understand the last part of the paper which uses spectral theorem for compact self-adjoint operators to solve the initial problem.
But I have a few questions:
*
*It is shown that the operator $K$ is injective, so that for a given $f \in L^2([0,1])$ there is a unique solution $u \in L^2([0,1])$. Is this necessary? Couldn't it be shown using the fact that two different linearly independent solutions $\hat{u}_1, \hat{u}_2$ yield the same Green's function as $u_1, u_2$?
*Also, it is shown that if $f \in C([0,1])$ then $u \in C^2([0,1])$. This is done by differentiating $u$ in its integral form two times. Again, is this necessary? As $u$ satisfies $u'' + \rho u=-f$, $u''$ is also continuous, so that $u \in C^2([0,1])$ as long as $f$ is continuous. Am I missing something?
A: In the first set of equations, I think you meant
$$
u''(t)+\rho(t)u(t) + \frac{1}{\lambda}u(t) = -f(t)
$$
instead of
$$
u''(t)+\rho(t)u(t) + \frac{1}{\lambda} = -f(t).
$$
That's the typical way such problems are defined.
For the second set of equations
$$
\begin{array}{c}
u''(t)+\rho(t)u(t) = -f(t),\\
u(0)=0,\;\;\; u(1)=0,
\end{array} \;\;\;\; (\dagger)
$$
the solution may not exist for all $f$ and may not be unique if it does exist. It depends on $\rho$. For example, suppose $\rho(t)=\pi^2$. Then $u(t)=\sin(\pi t)$ is a solution of the homogeneous equation
$$
u''(t)+\pi^2u(t) = 0, \\
u(0)=0,\;\;\; u(1)=0.
$$
Therefore if you have a solution $u_0$ of the inhomogeneous system $(\dagger)$ with $\rho(t)=\pi^2$, then $u_0+C\sin(\pi t)$ is also a solution of the inhomogenous equation for any constant $C$. So there have to be conditions on $\rho$.
|
9d278b86a07e050f6647ba24983fa02755bb83a7
|
Q: Help to solve this ODE with integration and exponential function So I have this system of ODEs and the unknowns are $\lambda_1(t)$ and $\lambda_2(t)$. Other parameters and functions are all known.
$\lambda_1(t)=u_{11}p_{11}e^{-u_{11}t}\int_{0}^{t}\lambda_1(x)e^{u_{11}x}dx+u_{21}p_{21}e^{-u_{21}t}\int_{0}^{t}\lambda_2(x)e^{u_{21}x}dx+g_1(t)$
$\lambda_2(t)=u_{12}p_{12}e^{-u_{12}t}\int_{0}^{t}\lambda_1(x)e^{u_{12}x}dx+u_{22}p_{22}e^{-u_{22}t}\int_{0}^{t}\lambda_2(x)e^{u_{22}x}dx+g_2(t)$
I have tried to define the integration as a function, i.e., $f_1(t)=\int_{0}^{t}\lambda_1(x)e^{u_{11}x}dx$, and then take a derivative, getting, $\lambda_1(t)=e^{-u_{11}t}f_1(t)'$. But notice that for each integration the rate of the exponential component is different. Therefore, I will have to define four different $f_i(t)$s and I have only two equations.
Any help and suggestions are much appreciated.
PS: $p$ and $u$ are all constant. The only thing unknown are the $\lambda(t) $s
A: $\lambda_1(t) =u_{11}p_{11}e^{-u_{11}t}\int_{0}^{t}\lambda_1(x)e^{u_{11}x}dx+u_{21}p_{21}e^{-u_{21}t}\int_{0}^{t}\lambda_2(x)e^{u_{21}x}dx+g_1(t)$
$\lambda_2(t)=u_{12}p_{12}e^{-u_{12}t}\int_{0}^{t}\lambda_1(x)e^{u_{12}x}dx+u_{22}p_{22}e^{-u_{22}t}\int_{0}^{t}\lambda_2(x)e^{u_{22}x}dx+g_2(t)$
$f_{11}(t)=\int_{0}^{t}\lambda_1(x)e^{u_{11}x}dx$
$f_{21}(t)=\int_{0}^{t}\lambda_2(x)e^{u_{21}x}dx$
$f_{12}(t)=\int_{0}^{t}\lambda_1(x)e^{u_{12}x}dx$
$f_{22}(t)=\int_{0}^{t}\lambda_2(x)e^{u_{22}x}dx$
Then you have a system of four linear ODEs with the four unknown functions $\:f_{11}(t)\:$, $\:f_{12}(t)\:$, $\:f_{21}(t)\:$ and $\:f_{22}(t)\:$
$\lambda_1(t)=e^{-u_{11}t}f_{11}(t)'=e^{-u_{12}t}f_{12}(t)'=u_{11}p_{11}e^{-u_{11}t}f_{11}(t)+u_{21}p_{21}e^{-u_{21}t}f_{21}(t)+g_1(t)$
$\lambda_2(t)=e^{-u_{21}t}f_{21}(t)'=e^{-u_{22}t}f_{22}(t)'=u_{12}p_{12}e^{-u_{12}t}f_{12}(t)+u_{22}p_{22}e^{-u_{22}t}f_{22}(t)+g_2(t)$
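If a numerical solution is enough, this four-ODE system can be integrated directly. A minimal sketch, with made-up constants and forcing terms (none of these values come from the question), assuming scipy:

    import numpy as np
    from scipy.integrate import solve_ivp

    # illustrative constants and forcing terms (not from the question)
    u11, u12, u21, u22 = 1.0, 0.5, 0.8, 1.2
    p11, p12, p21, p22 = 0.3, 0.2, 0.4, 0.1
    g1 = lambda t: 1.0
    g2 = lambda t: np.exp(-t)

    def rhs(t, f):
        f11, f12, f21, f22 = f
        lam1 = u11*p11*np.exp(-u11*t)*f11 + u21*p21*np.exp(-u21*t)*f21 + g1(t)
        lam2 = u12*p12*np.exp(-u12*t)*f12 + u22*p22*np.exp(-u22*t)*f22 + g2(t)
        # lambda_1 = e^{-u11 t} f11' = e^{-u12 t} f12'
        # lambda_2 = e^{-u21 t} f21' = e^{-u22 t} f22'
        return [np.exp(u11*t)*lam1, np.exp(u12*t)*lam1,
                np.exp(u21*t)*lam2, np.exp(u22*t)*lam2]

    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, 0.0, 0.0], dense_output=True)
    # lambda_1(t), lambda_2(t) are then recovered from the formulas inside rhs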
|
5ede06bb2518593e507a7977519eaf37302f106a
|
Q: if two sets of vectors are linearly independent; then a set containing all the vectors is linearly independent? Say we have a set of vectors {a,b} and another set of vectors {c,d}. Both these sets are linearly independent. a,b,c,d are all distinct vectors. How would I prove that the set {a,b,c,d} is linearly independent?
What if all the vectors are in $\mathbb{R}^4$?
A: It isn't, for example: $$\{(0,1), (1,0)\}, \{(0,4), (4,0)\}$$
Even if all the vectors are in $\mathbb{R}^4$,
$$\{(0,1,0,0), (1,0,0,0)\}, \{(0,4,0,0), (4,0,0,0)\}$$
A: It isn't linearly independent.
You lose linear independence if you introduce redundant elements (that is, elements which can be formed as linear combinations of existing elements).
A: We can take this further. Suppose $\{a,b\} , \{b,c\} , \{c,a\}$ are all linearly independent. That does not imply $\{a,b,c\}$ is a linearly independent set.
Take $a=(1,0) , b=(0,1) , c=(1,1)$
Edit: for $\mathbb R ^4 $ you can just view these as a projection onto $\mathbb R ^2$ (just add another independent vector if you may).
Intuitively, I don't see how you can say something "powerful" about a union of arbitrary sets of independent vectors. Meaning you will have to add more restrictions; perhaps the dimension of the space, orthogonality, independence in triplets, etc...
|
58880d6efd6eb1fb7ab1baf510563e56e7c0510b
|
Q: Number animation in geogebra?
Is there a way to animate this "triangle" in Geogebra so that the rows are expanding from the top to the bottom?
A: Slider n (from 0 to 10) then:
a = floor(123456789 / 10^(10 - n))
text1 = a + " × 9 + " + n + " = " + (9a + n)
|
c2adc27a93fffe5127da826073e3b2917e90bb16
|
Q: Compactness related property of topological spaces One of the standard definitions of a compact topological space $\langle X,\mathscr{O}\rangle$ says that $X$ is compact iff every open cover of $X$ has a finite subcover. I would like to ask you if any class of spaces has been distinguished with respect to the following (or equivalent, similar) condition ($\mathrm{Cl}$ is the standard topological closure operator):
For any $A\subseteq X$ and $\mathscr{S}\subseteq\mathscr{O}$, if $A\subseteq\mathrm{Cl\,}\bigcup\mathscr{S}$, then there are $S_1,\ldots,S_n\in\mathscr{S}$ such that $A\subseteq\mathrm{Cl\,}S_1\cup\ldots\cup\mathrm{Cl\,}S_n$.
The motivation stems from studying regular open and closed sets (i.e. those that are equal to the interiors of their closures and the closures of their interiors, respectively). In this setting the following condition is interesting for me (let $\mathrm{r}\mathscr{O}$ and $\mathrm{r}\mathscr{C}$ be the families of regular open and regular closed subsets of $X$):
For any $A\in\mathrm{r}\mathscr{O}$ and $\mathscr{S}\subseteq\mathrm{r}\mathscr{O}$, if $A\subseteq\mathrm{Cl\,}\bigcup\mathscr{S}$, then there are $S_1,\ldots,S_n\in\mathscr{S}$ such that $A\subseteq\mathrm{Cl\,}S_1\cup\ldots\cup\mathrm{Cl\,}S_n$.
In particular, w.r.t. the above, I would like to ask if there is a notion of $H$-closed set restricted to the class of regular open sets:
For any $A\in\mathrm{r}\mathscr{O}$ and $\mathscr{S}\subseteq\mathrm{r}\mathscr{O}$, if $A\subseteq\mathrm{Int\,Cl\,}\bigcup\mathscr{S}$, then there are $S_1,\ldots,S_n\in\mathscr{S}$ such that $A\subseteq\mathrm{Cl\,}S_1\cup\ldots\cup\mathrm{Cl\,}S_n$.
|
106248006b6911c233e208a386bd24f364116050
|
Q: What is a pure-jump process? I have been reading some notes and they keep referring (without definition) to a "pure jump process".
On Wikipedia I can only find a reference in the Lévy-Itô decomposition theorem, but still I can't find the definition.
Can you guys help?
A: As saz mentioned in the comments, the definition of pure jump processes differs between authors & papers. The way it was defined in my class (using Cinlar's Probability and Stochastics textbook) was
A process in $\mathbb{R}^d$, where $X_t$ is equal to the sum of the sizes of its jumps during the interval $[0,t]$, i.e. for almost every $\omega$, $$X_t(\omega) = \sum_{s \in D_\omega \cap [0,t]} \Delta X_s(\omega), t \in \mathbb{R}_+,$$
where $D_\omega = \{t > 0: \Delta X_t(\omega) \neq 0\}$ (the discontinuity set for the path $X(\omega)$) and $\Delta X_t(\omega)$ is the jump size at time $t$, i.e. $$\Delta X_t (\omega) = X_{t+}(\omega) - X_{t-}(\omega),$$ i.e. the process jumps from its left-limit to its right limit.
One example of a pure jump processes is the drift-less increasing Levy process.
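As a concrete toy instance (my own sketch, not from the textbook), a compound Poisson path satisfies the defining identity by construction:

    import numpy as np

    rng = np.random.default_rng(0)
    rate, T = 3.0, 1.0
    n_jumps = rng.poisson(rate * T)
    times = np.sort(rng.uniform(0.0, T, n_jumps))   # jump times in [0, T]
    sizes = rng.exponential(1.0, n_jumps)           # jump sizes Delta X

    X = lambda t: sizes[times <= t].sum()           # X_t = sum of jumps up to t
    print(X(0.5), X(1.0))
    # the path is piecewise constant, so X_t really is the sum of its
    # jump sizes on [0, t], which is the defining property quoted above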
|
6b56442feb5fbd423111700e491ae5980997a240
|
Q: What is the formula for a square wave? After writing this, I realised it's rather long. Please delete if too convoluted.
Many many years ago I studied electronics and in a class we used excel to plot a sine wave.
Simple.
Get a sample rate (24) and make a column of 0 to 360 in 15° increments.
Next column, the previous column divided by 360 (decimal form in 1/24 increments) and multiplied by 2 (for PI)
Next column, multiply previous column by PI.
Next column, apply SIN() to previous column.
Here, we can apply a line chart, smooth it out and it looks like a reasonable sine wave.
Then my teacher told me at the time that a square wave is just a sine wave with harmonics added to it, specifically every odd harmonic.
Now onto making the harmonic column. A 3rd of the amplitude and 3 times the frequency, 5th and so on...
And we get a graph like this (albeit, rough):
Then we add all of them up at each 15° interval:
Now, it's sort of looking like a square wave but I was expecting it to be more... Square?
I then decided to revisit it and did it all the way up to the 21st harmonic and in 1° increments:
21st harmonic
Fundamental + 3rd - 21st (every odd) added
Is this true? Wikipedia says that it is, but I can't really see it?
The ideal square wave contains only components of odd-integer harmonic frequencies
A: This is called the Wilbraham-Gibbs phenomenon: this "ringing" occurs for any discontinuous function (the artefacts you see in low-quality JPEGs are also related to this).
For functions that have a finite number of finite discontinuities, you can improve convergence considerably by using Fejér sums of the series: this causes the initial terms to dominate, so the oscillations from the terms with shorter wavelengths are relatively suppressed.
A: It seems you have two questions: first, whether Fourier analyzing a square wave gives only odd harmonics, and second, whether the approach you are following will converge to a square wave. The answer to both is yes. For the first, take the square wave to be odd in time, so its Fourier series contains only sine terms; the square wave also has half-wave symmetry, $f(t+T/2)=-f(t)$, and since $\sin(2\pi n(t+T/2)/T)=(-1)^n\sin(2\pi nt/T)$, only the harmonics with $n$ odd share this symmetry, so the even ones drop out. For the second, your last plot is very useful. It should look to you like it is converging nicely on a square wave in the middle. Away from the transitions it is quite constant. Those little ripples will disappear as more terms are added. The worrisome thing is the big bumps near the transition. As you add more terms they get narrower and move towards the transition points, while their height settles at a fixed overshoot of roughly $9\%$. For any given time, the peaks will eventually move between it and the transition and the value of the expansion will settle down to $\pm 1$ as you want.
What is going on here is that sine and cosine functions are infinitely differentiable, so they don't cope well with discontinuities. The coefficients in the Fourier expansion of a square wave fall off as $\frac 1n$, as they do for any discontinuous function. If you expand a continuous function they will eventually decrease as $\frac 1{n^2}$. If you expand a once differentiable function they will fall as $\frac 1{n^3}$. You can look at a table of Fourier expansions to see this.
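A short numerical illustration of both points - only odd harmonics with $1/k$ amplitudes, and an overshoot near the jump that does not shrink as terms are added (a sketch; grid and truncation sizes are arbitrary):

    import numpy as np

    x = np.linspace(0.0, np.pi, 200001)   # half a period of the square wave
    for N in (21, 201, 2001):
        # odd harmonics k = 1, 3, ..., N with amplitude 1/k
        s = (4 / np.pi) * sum(np.sin(k * x) / k for k in range(1, N + 1, 2))
        print(N, s.max())   # peak stays near 1.179: ~9% overshoot of the jump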
|
990e360037d6672d7906534ac54b846ef9dab63a
|
Q: Time-scale law of Bernoulli-stopped process This post is quite long, but the problem stated carries no computational burden.
Consider the equally spaced partition $t_{i}^n=\frac{i}{n}$ with $i=0,...,n$ of the interval $[0,1]$ into $n$ sub-interval of length $\delta=\frac{1}{n}$.
For each $n$ let $B_{i,n}$ with $i=1,...,n$ be a triangular array of Bernoulli variates
such that $\mathbb{P}\left[B_{i,n}=1\right]=p_n$ and $\mathbb{P}\left[B_{i,n}=0\right]=1-p_n$. The $B$'s could be independent or not, it is irrelevant.
Suppose that $Y_t$ and $X_t$ with $t\in[0,1]$ are two continuous-time stochastic processes over the time window $[0,1]$ such that, when sampled on the time grid $t_i^n$, it holds that
$$
\left\{
\begin{array}{lll}
X_{0}&=&Y_0\\
X_{t_i^n}&=&Y_{t_i^n}\,(1-B_{i,n})+B_{i,n}\,X_{t_{i-1}^n}\quad (1),
\end{array}
\right.
$$
and this is true irrespective of the choice of $n\in\mathbb{N}$.
The idea is very simple. At $t=0$ the process $X$ is equal to $Y$. Later, at each time instant $t_{i}^n$ there are two possibilities:
1) if the Bernoulli is "activated", that is if $B_{i,n}=1$, then the $X$ does not update and repeats itself, that is $X_{t_i^n}=X_{t_{i-1}^n}$.
2) if the Bernoulli is not "activated", that is if $B_{i,n}=0$, then the $X$ is updated to the corresponding value of $Y$, that is $X_{t_i^n}=Y_{t_{i}^n}$.
If we write down the first two iterations of the process at the frequency $n=\frac{1}{\delta}$ we get
$$
\left\{
\begin{array}{lll}
X_{0}&=&Y_0\\
X_{\delta}&=&Y_{\delta}\,(1-B_{1,n})+B_{1,n}\,Y_{0}\\
X_{2\,\delta}&=&Y_{2\,\delta}\,(1-B_{2,n})+Y_{\delta}\,B_{2,n}\,(1-B_{1,n})+Y_{0}\,B_{2,n}\,B_{1,n}.
\end{array}
\right.
$$
So clearly $X_{\delta}$ may take as its value either $Y_0$ or $Y_{\delta}$, while $X_{2\,\delta}$ may take as its value $Y_0$, $Y_{\delta}$ or $Y_{2\,\delta}$.
My concern is related to what happens if we change the sampling frequency, for example taking $\Delta=\frac{1}{m}=2\,\delta=\frac{2}{n}>\delta$. Now the dynamics of $X$ sampled on the new time grid $t_i^m=\frac{i}{m}$ cannot be
$$
\left\{
\begin{array}{lll}
X_{0}&=&Y_0\\
X_{t_i^m}&=&Y_{t_i^m}\,(1-B_{i,m})+B_{i,m}\,X_{t_{i-1}^m}.
\end{array}
\right.
$$
since this would imply, taking just the first iteration, that
$$
\left\{
\begin{array}{lll}
X_{0}&=&Y_0\\
X_{\Delta}&=&Y_{\Delta}\,(1-B_{1,m})+B_{1,m}\,Y_{0}\\
\end{array}
\right.
$$
so that $X_{\Delta}$ may have as value only $Y_{\Delta}$ or $Y_0$, but since $\Delta=2\,\delta$ this contradicts what we have found at the higher frequency $n>m$.
So my final question is: is the process in (1) ill-defined? Or, in other words, is it possible that the process $X$ in (1) is obtained by sampling a continuous-time process on a finite partition?
My guess is that, if my reasoning is correct, the process in (1) can exist only in discrete time and so cannot be a sampling of a continuous-time process.
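To make the point concrete, a small simulation (my own sketch; the values of $Y$ are arbitrary placeholders) showing that $X_{2\delta}$ built on the fine grid takes the value $Y_\delta$ with positive probability, which the coarse-grid recursion cannot reproduce:

    import numpy as np

    rng = np.random.default_rng(1)
    p = 0.5
    Y = np.array([0.0, 1.0, 2.0])          # stand-ins for Y_0, Y_delta, Y_{2 delta}
    counts = {0.0: 0, 1.0: 0, 2.0: 0}
    for _ in range(100000):
        B = rng.random(2) < p              # B_{1,n}, B_{2,n} on the fine grid
        X = Y.copy()
        for i in (1, 2):
            if B[i - 1]:
                X[i] = X[i - 1]            # repeat the previous value
        counts[X[2]] += 1
    print(counts)   # all three values occur: X_{2 delta} can equal Y_delta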
|
d6fe48dfe1354c1912a419cdddac330160254b10
|
Q: Greatest value of $\frac{1}{4\sin \theta-3 \cos \theta+6}$ State the greatest value of $$\frac{1}{4\sin \theta-3 \cos \theta+6}$$
Can anyone give me some hints on this?
A: Hints:
$4\sin\theta - 3\cos\theta = 5\sin(\theta - \arctan\frac 34)$
$-1 \leq \sin\alpha \leq 1$
A: $$4\sin\theta-3\cos\theta$$ is the dot product of the vector $(-3,4)$ with the unit vector in direction $\theta$. This dot product is minimized when the two vectors are antiparallel, and equals minus the product of the norms, i.e. $-5$.
The requested maximum is $$\frac1{-5+6}.$$
A: Hint: Use $$-\sqrt {a^2+b^2} \le a\sin \theta + b \cos \theta \le \sqrt {a^2+b^2}$$.
|
b31b14f60023eaf878c7fe4e1e6117cf08024d47
|
Q: Commutativity of $\operatorname{End}_{R}(M)$ when $M$ is a semi-simple module Let $M$ be a semi-simple module over a unital ring $R$. I want to see if $\operatorname{End}_{R}(M)$ is commutative only if $M$ is a direct sum of pair-wise non-isomorphic simple modules.
A: Suppose that $M \cong \bigoplus_{i \in I} M_i$ is a decomposition into simple submodules. Suppose that $M_{j_1} \cong M_{j_2}$ for some $j_1 \neq j_2$. Then $M_{j_1} \oplus M_{j_2} \subseteq M$ is a submodule, and we have an embedding
$$
\Phi \colon \mathrm{End}_R(M_{j_1} \oplus M_{j_2})
\hookrightarrow \mathrm{End}_R(M),
$$
where $\Phi(f)$ is given by
$$
\Phi(f)\left( \sum_{i \in I} m_i \right)
= f(m_{j_1} + m_{j_2})
= f(m_{j_1}) + f(m_{j_2}),
$$
where $m_i \in M_i$. By Schur’s Lemma we have
$$
\mathrm{End}_R(M_{j_1} \oplus M_{j_2}) \cong \mathrm{M}_2(D).
$$
for the skew field $D = \mathrm{End}_R(M_{j_1}) \cong \mathrm{End}_R(M_{j_2})$. Because $\mathrm{M}_2(D)$ is not commutative it follows that $\mathrm{End}_R(M)$ is not commutative.
So for $\mathrm{End}_R(M)$ to be commutative all simple summands of $M$ must be non-isomorphic.
PS: We can more generally look at a decomposition $M = \bigoplus_{i \in I} M_i^{(N_i)}$ with $M_i$ being simple and $M_i \ncong M_j$ for $i \neq j$ and $N_i$, $i \in I$ sets of the right cardinality. Using Schur’s Lemma we then have
$$
\mathrm{End}_R(M)
\cong \prod_{i \in I} \mathrm{End}_R\left( M_i^{(N_i)} \right)
\cong \prod_{i \in I} \mathrm{M}_{N_i}(D_i)
$$
where $D_i = \mathrm{End}_R(M_i)$ is a skew field and $\mathrm{M}_{N}(D)$ denotes the column finite $N \times N$ matrices with entries in $D$. So we can think of $\mathrm{End}_R(M)$ as the product of matrix rings over skew fields. For this to be commutative we need that all these matrix rings are actually of size $1 \times 1$, i.e $|N_i| = 1$ for all $i \in I$, and for all $i \in I$ the skew field $D_i$ must already be a field, i.e. be commutative.
(This idea of identifying $\mathrm{End}_R(M)$ with a product of matrix rings over skew fields is basically how the classification of semisimple rings (and semisimple algebras) works.)
|
ef25cfbd3663c464e6bbb649ced9dc7fde74e450
|
Q: Solve the IVP $xy'' + y' + 4xy = 0, y(0) = 3, y'(0) = 0$ It has to be solved with Laplace transform and then converted to Bessel equation.
$L(xy'') = -\frac{dL(y'')}{ds}$
$L(4xy) = -\frac{4dL(y)}{ds}$
$L(y'') = s²L(y) - sy(0) - y'(0) = s²L(y) -3s$
$L(y') = sL(y) - y(0) = sL(y) - 3$
$-\frac{d(s²L(y)-3s)}{ds} + sL(y) - 3 -\frac{4dL(y)}{ds} =0$
$-\frac{d(s²L(y))}{ds} + 3 + sL(y) - 3 -\frac{4dL(y)}{ds} =0$
$-\frac{s²L(y)}{ds} + sL(y) -\frac{4dL(y)}{ds} =0$
$-\frac{L(y)(s²+4)}{ds} + sL(y) =0$ (1)
$\frac{dL(y)}{L(y)} = \frac{sds}{s²+4}$
Integrating both sides
$ln(L(y)) = \frac{ln(s²+4)}{2} + c$
$L(y) = c\sqrt{s²+4}$
which won't lead me to the right answer.
I realized that if at (1) I use $\frac{L(y)(s² + 4)}{ds} + sL(y) =0$ instead I'll get the right answer according to wolfram, but I can't see what I'm doing wrong to end up with that negative sign.
http://www.wolframalpha.com/input/?i=xy%27%27+%2B+y%27+%2B+4xy+%3D+0%2C+y%280%29+%3D+3%2C+y%27%280%29+%3D+0
A: Managed to find my mistake, I solved as if s² wasn't part of the derivative at $-\frac{d(s²L(y))}{ds}$
$-\frac{d(s²L(y))}{ds} = -2sL(y) - \frac{s²d(L(y))}{ds}$
$-2sL(y) - \frac{s²d(L(y))}{ds} + sL(y) -\frac{4dL(y)}{ds} =0$
$ - \frac{s²d(L(y))}{ds} - sL(y) -\frac{4dL(y)}{ds} =0$
$ \frac{s²d(L(y))}{ds} + sL(y) +\frac{4dL(y)}{ds} =0$
$ \frac{d(L(y))(s²+4)}{ds} + sL(y) =0$
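For the record, the corrected equation gives $L(y)=c/\sqrt{s²+4}$, whose inverse transform is a Bessel function of the first kind; a quick numerical check (assuming the solution is $y=3J_0(2t)$, consistent with the initial conditions) using scipy:

    import numpy as np
    from scipy.special import j0, j1

    t = np.linspace(0.1, 5.0, 50)
    y = 3 * j0(2 * t)
    yp = -6 * j1(2 * t)                          # d/dt 3 J0(2t) = -6 J1(2t)
    ypp = -12 * j0(2 * t) + 6 * j1(2 * t) / t    # from J1'(z) = J0(z) - J1(z)/z
    print(np.abs(t * ypp + yp + 4 * t * y).max())  # ~1e-14: the ODE holds
    # and y(0) = 3 J0(0) = 3, y'(0) = -6 J1(0) = 0, matching the data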
|
3fbeb0610f8c7159a92946611159608ee463fc0d
|
Q: Asynchronous XOR cellular automata and complexity I've been experimenting a bit with what I think is the simplest possible CA-like rule that generates complex patterns and behaviors:
https://eloquence.github.io/elixor/
Essentially, I define an array of a given size, then walk it in either direction, XORing each value with its neighbour in either direction. This gives me a total of four possible universes per universe size. I render all four of these universes by displaying the contents of the array size times, then refreshing.
By choosing a different universe size, I get vastly different patterns. Stability, movement direction, frequency, even speed of movement of the triangles vary. Sometimes fragment structures emerge that add to the complexity.
Does anyone know if the characteristics of this system have already been researched, in terms of:
*
*classification of the emergent behaviors and structures
*use for universal computation
*symmetry and different forms of projecting the information
*limitations or increases in complex behaviors as the universe size increases
*other aspects of the system, such as oscillation behavior at different sizes
It struck me that many of the behaviors this system generates seem comparable to the so-called elementary CAs, with significantly less complexity in implementation.
Any pointers to relevant papers would be appreciated, as well. :-)
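For concreteness, here is a minimal Python sketch of one possible reading of this rule (the two flags give the four universes per size; the exact walk order and boundary handling in the linked demo may differ, and `step` is an illustrative name):

```python
def step(cells, walk_right=True, xor_right=True):
    """One asynchronous pass: update cells in place while walking the array."""
    n = len(cells)
    order = range(n) if walk_right else range(n - 1, -1, -1)
    for i in order:
        j = (i + 1) % n if xor_right else (i - 1) % n  # assumed periodic boundary
        cells[i] ^= cells[j]
    return cells

size = 31
u = [0] * size
u[size // 2] = 1                  # single seed cell
for _ in range(size):
    print(''.join('#' if c else '.' for c in u))
    step(u)
```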
|
6d42cb212f6e9ceec6fce856ae63687f2110a487
|
Q: Critical exponent Let $1\leq p\leq \infty$. I want to show that there is at most $1\leq q \leq \infty$ such that there is $C>0$ with
$$||u||_q \leq C ||\nabla u ||_p \hspace{1cm} \forall u \in C_c^1(\mathbb{R}^d)$$
This $q$ is somehow supposed to depend on $p$ and $d$. I don't know where to start - I've got the hint to look at $u \mapsto u_\lambda$ where $u_\lambda(x)=u(\lambda x)$ But that doesn't really help me.
Thank you
|
ae53c53a7e1001fa9eb2f6f1c18cf7dbfccc794c
|
Q: Mathematical explanation for geometric probability formulas Suppose we have a dartboard. Some of its portion is painted red. I throw a dart at it. Now, I want to calculate the probability that my dart hits the red area. I simply divide the area of the painted region with the total area of the dartboard.
My question is how can we be sure that this process of calculating probability using areas actually give the right answerer? Intuitively, this seems very correct. But is there any mathematical explanation to the formulas that we use in geometric probability?
A: The thing is, the answers from what we refer to as "geometric probability" are fundamentally limits of the ratio of various areas, which is why it is considered "continuous" - there are technically an infinite number of possible outcomes. But because areas are usually calculable, our answer is just as accurate as answers found using traditional discrete probability.
|
f67a64755724fdf752831ec13201367c00fce119
|
Q: Expected Value of Maximum of Two Lognormal Random Variables with One Source of Randomness We have two random variables $X$ and $Y$ which are log normally distributed, with suitable parameters, what is the expected value for $\max(X,Y)$?
Given,
$$
X=e^{\mu+\sigma Z};\quad Y=e^{\nu+\tau Z};\quad Z\sim N(0,1)
$$
We need to find an expression for $$E[\text{max}(X,Y)]$$
$X,Y$ are independent drawings.
Please note I have reached the step below, but am unsure how to proceed further.
Steps Tried
\begin{eqnarray*}
E\left[\max\left(X,Y\right)\right]=\int_{0}^{\infty}xf_{Y}\left(x\right)F_{X}\left(x\right)dx+\int_{0}^{\infty}yf_{X}\left(y\right)F_{Y}\left(y\right)dy
\end{eqnarray*}
\begin{eqnarray*}
\int_{0}^{\infty}xf_{Y}\left(x\right)F_{X}\left(x\right)dx{\displaystyle =\int_{0}^{\infty}\frac{1}{\tau}\phi\left(\frac{\ln x-\nu}{\tau}\right)}\Phi\left(\frac{\ln x-\mu}{\sigma}\right)dx
\end{eqnarray*}
\begin{eqnarray*}
{\displaystyle =\int_{0}^{\infty}\frac{1}{\tau}\phi\left(\frac{\ln x-\nu}{\tau}\right)}\Phi\left(\frac{\ln x-\mu}{\sigma}\right)dx\quad\text{, Substitution }u=\left(\frac{\ln x-\nu}{\tau}\right)\Rightarrow du=\frac{1}{x\tau}dx
\end{eqnarray*}
\begin{eqnarray*}
{\displaystyle =\int_{-\infty}^{\infty}e^{u\tau+\nu}\phi\left(u\right)}\Phi\left(\frac{u\tau+\nu-\mu}{\sigma}\right)du
\end{eqnarray*}
\begin{eqnarray*}
{\displaystyle =e^{\nu}\int_{-\infty}^{\infty}e^{u\tau}\phi\left(u\right)}\Phi\left(\frac{u\tau+\nu-\mu}{\sigma}\right)du
\end{eqnarray*}
Related Question
Please note, this present question was mis-phrased due to my limited knowledge, but it still provides an interesting and instructive solution. The more general case has been made into a new question:
Expected Value of Maximum of Two Lognormal Random Variables
A: If $\sigma = \tau$, then
\begin{align*}
E(\max(X, Y)) &= E\left(e^{\sigma Z} \max\big(e^{\mu}, e^{\nu}\big) \right)\\
&=e^{\frac{1}{2}\sigma^2}\max\big(e^{\mu}, e^{\nu}\big).
\end{align*}
WLOG, we assume that $\sigma < \tau$ below. Let $\lambda = \frac{\mu-\nu}{\tau-\sigma}$. Then,
\begin{align*}
E(\max(X, Y)) &= E\left(\mathbb{I}_{Z \le \lambda} X + \mathbb{I}_{Z > \lambda} Y\right)\\
&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\lambda} e^{\mu+\sigma x - \frac{1}{2}x^2} dx +\frac{1}{\sqrt{2\pi}}\int_{\lambda}^{\infty} e^{\nu+\tau x - \frac{1}{2}x^2} dx.
\end{align*}
The remaining is now routine calculus.
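For reference, finishing that calculus by completing the square in each exponent gives, assuming $\sigma<\tau$,
$$E[\max(X,Y)]=e^{\mu+\sigma^2/2}\,\Phi(\lambda-\sigma)+e^{\nu+\tau^2/2}\,\Phi(\tau-\lambda).$$
A Monte Carlo sanity check of this closed form (a sketch with arbitrary parameter values):

```python
import numpy as np
from scipy.stats import norm

mu, sigma, nu, tau = 0.2, 0.5, 0.1, 0.9        # arbitrary choices with sigma < tau
lam = (mu - nu) / (tau - sigma)
closed = np.exp(mu + sigma**2 / 2) * norm.cdf(lam - sigma) \
       + np.exp(nu + tau**2 / 2) * norm.cdf(tau - lam)

z = np.random.default_rng(0).standard_normal(10**6)   # one shared N(0,1) source
mc = np.maximum(np.exp(mu + sigma * z), np.exp(nu + tau * z)).mean()
print(closed, mc)                                     # agree to ~3 decimals
```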
|
bd339a8e33be7ad82b36b4d7d1fde98ee61f1a53
|
Q: Show that $[-1,1] \times [-1,1]$ is a closed set. Show that $A = [-1,1] \times [-1,1]$ is a closed set.
I know that I have to show $A^c$ is open. So I have to find $\epsilon > 0$
sufficiently small such that for all $x \in A^c$, $B(x,\epsilon) \subset A^c$. I am a bit blocked at this point. I think I have to use the triangle inequality and Cauchy-Schwarz.
Is anyone can give me a hint?
A: If $(x,y) \in A^c$ then $|x| > 1$ or $|y| > 1$. If the former, consider a ball around $(x,y)$ with radius $|x| - 1$... (draw a picture).
A: You have set yourself an impossible task. Instead you could aim to show that for any $x \in A^c$ you can find $\epsilon > 0$
sufficiently small such that $B(x,\epsilon) \subset A^c$ (note the change in order: $\epsilon$ depends on $x$).
Any point in $A^c$ has coordinates of the form $(1+c,d)$, $(-1-c,d)$, $(d, 1+c)$, or $(d, -1-c)$ for real $d$ and positive $c$. So given one of these, perhaps choose $\epsilon = \frac{c}{2}$ and show that every point of $B(x,\epsilon)$ is again of one of those forms, hence lies in $A^c$.
A: It is an easy exercise to show that the following are open subsets of $\mathbb{R}^2$ for any $r\in\mathbb{R}$:
\begin{align*}
(r,\infty)\times\mathbb{R} &&
(-\infty,r)\times\mathbb{R} &&
\mathbb{R}\times(r,\infty) &&
\mathbb{R}\times(-\infty,r) &&
\end{align*}
Now notice that $([-1,1]\times[-1,1])^C$ can be described as a union of sets of this type for certain appropriate choices of $r$. Thus the complement is open as desired.
|
c330c35799a3afd0d003a9045239bba713f8d585
|
Q: Question about the weak* closure of the embedded unit sphere in an infinite-dimensional normed vector space Let $\iota : X \to X^{**}$ be the standard isometric embedding of an infinite-dimensional normed vector space $X$ into its bidual space $X^{**}$. I know that our embedding is a continuous function with respect to the weak topology on $X$ and the weak* topology on $X^{**}$. Can I deduce the following equality:
$\iota(B) = \iota(cl_w(S)) = cl_w^*(\iota(S))$
where in the first step I use the fact that the weak closure of the unit sphere $S\subset X$ is the closed unit ball $B \subset X$ if $X$ is infinite-dimensional, and in the second step I use the continuity of $\iota$ as described above.
Is this reasoning right?
|
60c28d637b30da7a7342625baca2f115885e0394
|
Q: Listing elements from set builder notation $\Bbb Z_0'= \{x \in \Bbb Z\,|\, x=km \text{ for any } k \in \Bbb Z\}$? Let $m$ be any fixed positive integer and $\Bbb Z_0= \{x \in \Bbb Z\,|\, x=km \text{ for some } k \in \Bbb Z\}$.
In the above "$\text{for some k}\in \Bbb Z$" is an existential quantifier denoted by, $\exists k\in \Bbb Z$
Then we can list elements from the set builder notation as the following:
$\Bbb Z_0 = \{0, \pm m, \pm2m, \pm3m, \ldots\}$
If we change the existential quantifier to a universal quantifier such as 'for any $k \in \Bbb Z$', 'for every $k \in \Bbb Z$', 'for all $k \in \Bbb Z$', denoted by $\forall k \in \Bbb Z$, the set builder notation is the following:
$\Bbb Z_0'= \{x \in \Bbb Z\,|\, x=km \text{ for any } k \in \Bbb Z\}$
Then it brings up a question. How do we list elements of $\Bbb Z_0'= \{x \in \Bbb Z\,|\, x=km \text{ for any } k \in \Bbb Z\}$?
FYI
"Definition 6. Let $\mathscr F$ be an arbitrary family of sets. The union of the sets in $\mathscr F$, denoted by $\bigcup\mathscr F$, is the set of all elements that are in $A$ for some $A\in\mathscr F$:
$$\bigcup_{A \in \mathscr F}A=\{x\in U \mid x \in A \text{ for some } A\in \mathscr F\}$$"
"Definition 7. Let $\mathscr F$ be an arbitrary family of sets. The intersection of the sets in $\mathscr F$, denoted by $\bigcap_{A\in\mathscr F}A$ or $\bigcap\mathscr F$, is the set of all elements that are in $A$ for all $A \in\mathscr F$:
$$\bigcap_{A\in\mathscr F}A=\{x\in U \mid x\in A \text{ for all } A\in \mathscr F\}$$"
Source: Set Theory by You-Feng Lin, Shwu-Yeng T. Lin.
|
d1fdb3f4f21d93de4f5f74c4c8ba525d6c3da633
|
Q: Find $\lim_{x\to\infty} \left(x \ln(16x +14) - x \ln(16x +7)\right)$ using Maclaurin series. I am trying to find the limit $\lim_{x\to\infty} \left(x \ln(16x +14) - x \ln(16x +7)\right)$.
I know I have to use Maclaurin series, but something went wrong.
A: You don't need any series expansion or L'Hospital to find this limit. You need only the continuity of $\ln(x)$ and some arithmetics:
\begin{gather}\lim_{x \to \infty} x\ln (16x+14) - x\ln(16x+7) =\\= \lim_{x \to \infty} x\ln\left({{16x+14}\over{16x+7}}\right) = \lim_{x \to \infty} \ln\left(\left({{16x+14}\over{16x+7}}\right)^x\right) =\\
= \lim_{x \to \infty} \ln\left(\left({{16x+14}\over{16x+7}}\right)^x\right) = \ln\left(\lim_{x \to \infty}\left({{16x+14}\over{16x+7}}\right)^x\right) =\\= \ln\left(\lim_{x \to \infty}\left({{1+{14\over {16x}}}\over{1+{7\over {16x}}}}\right)^x\right) = \ln\left({{e^{14 \over 16}}\over{e^{7\over 16}}}\right)
= \ln\big({e^{7\over 16}}\big) = {7\over 16} \end{gather}
A: METHODOLOGY $1$: Non-Calculus Based
We can evaluate the limit
$$\lim_{x\to \infty}\left(x\log(16x+14)-x\log(16x+7)\right)=\lim_{x\to \infty}\left(x\log\left(1+\frac{7}{8x}\right)-x\log\left(1+\frac{7}{16x}\right)\right)\tag 1$$
using only a standard inequality and the Squeeze Theorem.
In THIS ANSWER, and THIS ONE, I showed that the logarithm function satisfies the inequalities
$$\frac{x}{1+x}\le \log (1+x)\le x \tag 2$$
using only the limit definition of the exponential function and Bernoulli's Inequality.
Applying $(2)$ to the right-hand side of $(1)$ reveals
$$x\left(\frac{\frac7{8x}}{1+\frac7{8x}}\right)-x\frac{7}{16x}\le x\log\left(1+\frac{7}{8x}\right)-x\log\left(1+\frac{7}{16x}\right)\le x\left(\frac{7}{8x}\right)-x\left(\frac{\frac7{16x}}{1+\frac7{16x}}\right)$$
which simplifies to
$$\frac7{16}\left(\frac{1-\frac{7}{8x}}{1+\frac{7}{16x}}\right)\le x\log\left(1+\frac{7}{8x}\right)-x\log\left(1+\frac{7}{16x}\right)\le \frac7{16}\left(\frac{1+\frac{7}{8x}}{1+\frac{7}{16x}}\right) \tag 3$$
Finally, applying the Squeeze Theorem to $(3)$ yields the result
$$\bbox[5px,border:2px solid #C0A000]{\lim_{x\to \infty}\left(x\log(16x+14)-x\log(16x+7)\right)=\frac{7}{16}}$$
METHODOLOGY $2$: Series Expansion
If one wishes to use a series expansion, we can write
$$\begin{align}
\left(x\log\left(1+\frac{7}{8x}\right)-x\log\left(1+\frac{7}{16x}\right)\right)&=\left(\frac78 +O\left(\frac1x\right)\right)-\left(\frac7{16}+O\left(\frac1x\right)\right)\\\\
&=\frac7{16}+O\left(\frac1x\right)\\\\
&\to \frac{7}{16}\,\,\text{as}\,\,x\to \infty
\end{align}$$
and we are done!
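Either method can be cross-checked symbolically (a SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = x * sp.log(16 * x + 14) - x * sp.log(16 * x + 7)
print(sp.limit(expr, x, sp.oo))   # -> 7/16
```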
|
01e532894c0f71fe80d7b6f9384db04055982cc2
|
Q: KKT conditions for a convex optimization (optimal crowdsourcing with budget constraint) I am having some troubles deriving the optimal solution of the following convex optimization problem, $w_j$, $c_{ij}$, and $B$ are fixed and non negative.
\begin{align}
& \underset{n_{ij}}{\text{minimize}}
& & -\sum_{i=1}^{N}\sqrt[]{\sum_{j=1}^{M}n_{ij}w_{j}^2} \\
& \text{subject to}
& & \sum_{ij} c_{ij}n_{ij} \leq B\\
&&& n_{ij} \geq 0, \text{ }i=1,...,N,j=1,...,M
\end{align}
I write out the KKT conditions, $\lambda$ and $\mu$ are the Lagrange multipliers.
\begin{align}
-\frac{w_{j'}^2}{2\sqrt{\sum_{j=1}^{M} n_{i'j}w_{j}^2}}+\lambda c_{i'j'}-\mu_{i'j'}&=0, \text{ }i'=1,...,N,j'=1,...,M \\
\lambda &\geq 0\\
\mu_{i'j'}&\geq 0, \text{ }i'=1,...,N,j'=1,...,M\\
\lambda(\sum_{ij}c_{ij}n_{ij}-B)&=0\\
\mu_{i'j'}n_{i'j'}&=0, \text{ }i'=1,...,N,j'=1,...,M\\
\end{align}
Then, I find that $n_{i'j'}\neq 0$, $\mu_{i'j'}=0$,
and $\lambda\neq0$, $B=\sum_{ij}c_{ij}n_{ij}$
From the first KKT conditions, I got
\begin{align}
\frac{w_{j'}^2}{2\sqrt{\sum_{j=1}^M n_{i'j}w_{j}^{2}}} = \lambda c_{i'j'}
\end{align}
Then, I dont know how to proceed.
The following is the optimal solution for the above problem.
\begin{align}
n_{ij}^* =
\begin{cases}
\frac{B}{\frac{c_{ij_{i}^{*}}^2}{w_{j_{i}^{*}}^2}\sum_{l=1}^{N} \frac{w_{j_{l}^{*}}^2}{c_{lj_{l}^{*}}}} & \quad \text{if } j=j_{i}^{*} \\
0 & \quad \text{otherwise}
\end{cases}\\
\text{where }j_{i}^{*}= \underset{j}{\operatorname{arg\,max}}\frac{w_{j}^2}{c_{ij}} \text{ and }i=1,2,...,N
\end{align}
Although I have the optimal solution, I don't know how it was derived. Could anyone show me the steps? Many thanks.
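Not a derivation, but here is a numerical sanity check of the stated optimum (a sketch with random data; all variable names are illustrative). It confirms the budget is spent exactly and that random feasible allocations do no better:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, B = 3, 4, 10.0
w = rng.uniform(0.5, 2.0, M)           # weights w_j
c = rng.uniform(0.5, 2.0, (N, M))      # costs c_ij

def objective(n):
    return -np.sum(np.sqrt(n @ (w**2)))   # the quantity being minimized

rows = np.arange(N)
jstar = np.argmax(w**2 / c, axis=1)                      # j_i* = argmax_j w_j^2 / c_ij
denom = np.sum(w[jstar]**2 / c[rows, jstar])
n_opt = np.zeros((N, M))
n_opt[rows, jstar] = B * w[jstar]**2 / (c[rows, jstar]**2 * denom)

print("budget used:", np.sum(c * n_opt))                 # equals B: constraint is tight
print("stated optimum:", objective(n_opt))
for _ in range(5):                                       # random feasible competitors
    n = rng.uniform(0.0, 1.0, (N, M))
    n *= B / np.sum(c * n)
    print("random feasible:", objective(n))
```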
|
f466f604c0467216cb640304febf89f68af7a14d
|
Q: What does "proportional" mean? I used to think of $\propto$ as indicating the one quantity is proportional to the other, with possibly an additive constant involved, i.e. $f(x) \propto g(x)$ if $f(x) = ag(x) + b$. Is that reasonable, or does $\propto$ mostly requires $b=0$? Wikipedia seems to refer only to a multiplicative constant.
A: Yes, proportional means related only by a multiplicative constant. When $g(x)$ doubles, so does $f(x)$. If $b \neq 0$ this will not be true.
|
c1521d211bc5aafbc4b425b2a2eac181aec4048a
|
Q: SAT question about integers and exponents If $a$ and $b$ are positive integers and $$(a^\frac{1}{2}\times b^\frac{1}{3})^6=432$$
What is the value of $ab$?
A: As a hint, I'd suggest:
*
*Write out $(a^{\frac{1}{2}}\times b^{\frac{1}{3}})^6$,
*Factorize $432$ into prime numbers.
A: Note that $$432 = 2^4\cdot 3^3 = 3^3\cdot4^2$$ and the LHS can be written as $$(a^\frac{1}{2} \times b^\frac{1}{3})^6 = a^3\cdot b^2$$ So $a = 3, b = 4$ $\Rightarrow ab = 12$.
A: The relation is equivalent to $a^3\cdot b^2=432$. Now, prime factorizing, we get $432=2^4\cdot 3^3=4^2\cdot 3^3$, thus $a=3$, $b=4$, and so $ab=12$.
|
4e0105ce0c243769763f2d77245ed6a147bf98d9
|
Q: computing a triangular region using 'line integral' of stoke's theorem Compute the line integral of the triangular region with vertices $\left ( 0,0 \right ),(2,0),\left ( 0,2 \right )$
with the function $\vec{v}=xy\hat{x}+\left ( 2yz \right )\hat{y}+\left ( 3xz \right )\hat{z}$
Forget the vertical and horizontal path. Those are trivial.
Diagonal path integral is annoying.
Along the diagonal line integral, $d\vec{l}=\left \langle 0,dy,dz \right \rangle$
evidently, x=0 and z=2-y
Then the line integral over this path is
$\int_{diagonal}\left \langle -2y,3z,-x \right \rangle.\left \langle 0,dy,dz \right \rangle=\int_{0}^{y=2-z}3zdy-\int_{0}^{z=2-y}xdz
$
This produces an answer with z variables which cannot be.
Where am I going wrong?
A: By the Stokes theorem:
$$
\oint_C \vec{F}\cdot d\vec{r} = \iint_S \nabla\times \vec{F}\cdot d\vec{S},
$$
where $C$ represents your triangle, $S$ the surface bounded by this triangle, and $\vec{F}$ is the field $(xy,2yz,3xz)$.
Assuming the triangle is orientated counterclockwise, a normal unit vector to $S$ is given by $\vec{n}=(0,0,1)$, and it follows that
$$
\iint_S \nabla\times \vec{F}\cdot d\vec{S} = \iint_S -x \;dS = \int_{0}^2\int_{0}^{2-x}-x \;dydx=-\frac{4}{3}.
$$
Otherwise, directly with the line integral. You need to parametrize the diagonal path $C_1$, which you can do as follows:
$$
(x,y,z)=(1-t)(2,0,0)+t(0,2,0)=(2-2t,2t,0), \quad 0\le t \le 1
$$
Then,
$$
\int_{C_1}\vec{F}\cdot d\vec{r} = \int_{0}^1 \vec{F}(t)\cdot (-2,2,0) \;dt =\\
\int_{0}^1 ((2-2t)2t,0,0)\cdot (-2,2,0) \;dt = -4 \int_{0}^1 (2-2t)t \;dt = -4/3.
$$
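Both computations can be confirmed symbolically (a SymPy sketch, taking the triangle in the plane $z=0$ as above):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# Surface side: the z-component of curl F for F = (x*y, 2*y*z, 3*x*z) is -x
surface = sp.integrate(-x, (y, 0, 2 - x), (x, 0, 2))
print(surface)                                    # -> -4/3

# Line side: only the diagonal leg contributes (F vanishes on the two axis legs)
r = sp.Matrix([2 - 2 * t, 2 * t, 0])              # from (2,0,0) to (0,2,0)
F = sp.Matrix([r[0] * r[1], 2 * r[1] * r[2], 3 * r[0] * r[2]])
print(sp.integrate(F.dot(r.diff(t)), (t, 0, 1)))  # -> -4/3
```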
|
2865a58195fb3e3fda326552e32b30aacf950cfc
|
Q: In the finite field $F$ of characteristic $p$, is $a^{p^n} = a$? If F is a finite field of characteristic $p$, $a$ is some element in $F$ and the number of elements in $F$ is $p^n$, is it true that $a^{p^n} = a$ for all $a$ in $F$? If it is, how could one prove or motivate that?
A: $F\setminus\{0\}$ is a group of order $p^n-1$, so $x^{p^n-1}=1$ if $x\neq 0$. This implies that $x^{p^n}=x\cdot x^{p^n-1}=x$ if $x\neq 0$. Obviously, $x^{p^n}=x$ if $x=0$, and the result follows.
A: Hint: look at the multiplicative group.
A: Every finite subgroup of the multiplicative group of a field is cyclic. As the multiplicative group of a field of order $p^n$ is finite of order $p^n-1$ the result follows.
|
dcf3943e00756771dee3d3bb646d0c940e22504b
|
Q: Asymptotic bound with square-roots Let $f(n)$ and $g(n)$ be two increasing functions of $n$ such that:
$$ f \leq g + O(\sqrt{g}) + O(\sqrt{f}) $$
Is it true that:
$$ f \leq g + O(\sqrt{g}) $$
? If not, then what would be a good asymptotic upper bound on $f$?
NOTE: the asymptotic behavior is when $n\to\infty$.
A: This does not hold; for instance, take the following, where the asymptotics are taken around $+\infty$:
$$
f(x) = \frac{1}{x}, \qquad g(x)=\frac{1}{x^3}
$$
Then $$f(x) = \frac{1}{x} \leq \frac{1}{x^3} + O(x^{-3/2}) + O(x^{-1/2}) = O(x^{-1/2}) = O(\sqrt{f(x)})$$
but we do not have $f(x) \leq x^{-3} + O(x^{-3/2}) = O(x^{-3/2})$. You can adapt this example to other situations: you cannot get a better bound only in terms of $g$, since if the "relevant" upper bound from the first line is $f=O(\sqrt{f})$ then $g$ and $f$ can be arbitrarily unrelated.
Following the clarification from the OP:
Now, assuming further that $f\nearrow \infty$ and $g$ is non-decreasing: $\sqrt{f} = o(f)$, so that we must have $f \leq g + O(\sqrt{g})$. Indeed, dividing by $\sqrt{f}$, we get
$$
\sqrt{f} \leq \frac{g}{\sqrt{f}} + O\left(\sqrt{\frac{g}{f}}\right) + O(1)
$$
and since the LHS goes to infinity, so does the RHS. This implies in turn that $\sqrt{g}=o(g)$: otherwise, we would not have $g\to\infty$, so by monotonicity $g$ would converge to a finite limit, and thus so would $\sqrt{g}$, and the RHS would not go to infinity.
It follows that $\sqrt{g}=o(g)$, and therefore $g\nearrow\infty$; and we also get that $g = \Omega(f)$. Indeed, otherwise we would have $g=o(f)$ (not quite, but for a subsequence at least -- ignoring this detail), so that we would get, from the beginning:
$$
f \leq o(f) + O(\sqrt{f}) = o(f)
$$
which is impossible. Finally, this means that $\sqrt{g} = \Omega(\sqrt{f})$, and therefore
$$
f \leq g + O(\sqrt{g}).
$$
|
b1e7b44ce478688fce91d83593f996e4dbb10c6d
|
Q: explanation of $ \frac{dy}{dx} = \frac{1}{\frac{dx}{dy}} $? I'm studying about derivative of inverse function. The teacher in the video (https://www.youtube.com/watch?v=3ReOtNCYuBw) (at 9:00 minute) said this
if a differentiable function, f has an inverse, then:
$$ \frac{d}{dx}[f^{-1}(x)] = \frac{1}{f'[f^{-1}(x)]} $$
provided $f'[f^{-1}(x)]\neq 0$
then he said if we make $y = f^{-1}(x)$ then:
$$ \frac{dy}{dx} = \frac{1}{\frac{dx}{dy}} $$
the last line is when I really get lost because it should be $ \frac{dy}{dx} = \frac{1}{f'[y]} $ not $ \frac{dy}{dx} = \frac{1}{\frac{dx}{dy}} $ isn't it? any please explain to me in very detail, I'm a newbie.
A: This short hand notation is best understood if you realize that you're identifying variables and functions with each other, namely $x,y$ and $f, f^{-1}$.
Let's digest the formula
$$ \frac{d}{dx}[f^{-1}(x)] = \frac{1}{f'[f^{-1}(x)]} $$ from the question. The left hand side is a function of $x$, namely the derivative of $g=f^{-1}$. It states that for any fixed value $t$ (read this as a specific value for $x$, we have that $g'(t) = \frac{d}{dx}[f^{-1}(x)](t)$ is given by the right hand side.
So the right hand side is a function of $x$, too. For $x=t$, its value is $\frac{1}{f'(f^{-1}(t))}$, or to put it in other terms: for $s =f^{-1}(t)$ or equivalently $t=f(s)$, we have that
$$\left(\frac{d}{dx}[f^{-1}(x)]\right)_{x=t} = \left(\frac{1}{f'(f^{-1}(x))}\right)_{x=t} = \left(\frac{1}{f'(y)}\right)_{y=s}.$$
Here, I used $y$ as the name of the variable of $f$ and $f'$. We can write $f'(y)=\frac{d}{dy}[f(y)]$, so that the equation above becomes:
$$\left(\frac{d}{dx}[f^{-1}(x)]\right)_{x=t} = \left(\frac{1}{\frac{d}{dy}[f(y)]}\right)_{y=s}.$$
In this equation, $x$ and $y$ still denote variables, not functions. If we keep in mind that for the fixed values $s,t$ above we have $f(s)=t$ and $f^{-1}(t)=s$, we can write for the variables $x,y$: $$y=f^{-1}(x) \mbox{ and } x=f(y).$$ Now, if we view these equations as definitions of functions named $x$ and $y$, we end up with:
$$\left(\frac{d}{dx}[y(x)]\right)_{x=t} = \left(\frac{1}{\frac{d}{dy}[x(y)]}\right)_{y=s}.$$
Lose the dependencies from the variables $x,y$ and their respective values and you end up with $$\frac{d}{dx}[y]= \frac{1}{\frac{d}{dy}[x]}.$$
A: To say that $y = f^{-1}(x)$ is also to say that $f(y) = x$. Differentiating both sides with respect to $y$ yields $f'(y) = dx/dy$.
EDIT: This is only a matter of notation, and, unfortunately, there is some abuse of notation going on here. The first line you wrote is absolutely correct, whereas the statement $$\frac{dy}{dx} = \frac{1}{\frac{dx}{dy}}$$ is ambiguous, albeit easy to remember. One problem with the notation is that it is not clear what $x$ and $y$ represent (are they both functions? if so, what does it mean to differentiate with respect to a function?). Also, if the left-hand side is to be evaluated at a point $t$, then implicit is the fact that, on the right-hand side, $dx/dy$ should be evaluated at $y(t)$.
With this in mind, let us suppress $f$ from the notation. If $x$ is an invertible function of $y$, we may write $y$ as a function of $x$. Thus $x$ and $y$ act both as functions and coordinates, depending on the context. In words, your theorem says that "the derivative of the inverse of $x$ (i.e., the derivative of $y$) at a point $t$ is the inverse of the derivative of $x$ at the point $y(t)$". We can write this as
$$\frac{dy}{dx}(t) = \frac{1}{\frac{dx}{dy}(y(t))}.$$
If we suppress the points from the notation, we arrive at the statement above. So perhaps it would be more adequate to write
$$\frac{dy}{dx} = \frac{1}{\frac{dx}{dy} \circ y}.$$
A: I repeat the comments as they are getting too long.
There are two notations for the derivative of a function:
$$F'(X) = \frac{d F}{dX}$$
On the right hand side, the $F$ in the numerator should be a function and in the denominator the $X$ should be the variable of the function. (To quibble more, there is a small mistake, if one really wants an equality of value, i.e. when a function is evaluated at a point, then one should write $F'(X_0) = \frac{d F}{dX}(X_0)$, otherwise it could be an equality of functions $F' = \frac{d F}{dX}$).
Assuming the first equality is proved, the second $\frac{dy}{dx} = \frac{1}{\frac{dx}{dy}}$ is just a matter of notation; it is a rewriting of the first.
*
*in the left hand side (l.h.s.), one needs to understand $y$ as a function of $x$, namely $y: x\rightarrow f^{-1}(x)$ or in short $y=f^{-1}(x) $
*in the r.h.s., $x$ has to be understood as a function $x: y \rightarrow f(y)$ of $y$ but to avoid all possible confusion one should actually write $X$ for this function and $Y$ its variable since $x$ was previously already used as a variable and $y$ as a function.
Now, where is everything evaluated, and to begin with, what is the variable (which should be the same on both sides)? Start by looking at the l.h.s.: $y$ is a function of $x$ and should be evaluated at some $x_0$. The r.h.s. is then evaluated at $Y_0 := y(x_0)$ in order to recover the first equality, so one could write
$$\frac{dy}{dx}(x_0) = \frac{1}{\frac{dX}{dY}(Y_0)}$$
A: A quick visualisation of this which I've always enjoyed is that if we draw the tangent to $y=f(x)$ at $(a,b)$, then its equation will be of the form $\frac{y-b}{x-a}=m$.
It thus follows naturally that if we draw the tangent to $x=f(y)$ at $(b,a)$, then its equation will be of the form $\frac{x-a}{y-b}=\frac{1}{m}$.
Drawing together the point-slope form of straight lines and the interpretation of $\frac{dy}{dx}$ as the slope gives quite neatly that $\frac{dy}{dx}\frac{dx}{dy}=1$
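To see the identity in action on a concrete pair, here is a small SymPy check with $f=\exp$ and $f^{-1}=\ln$ (a sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f_prime = sp.diff(sp.exp(x), x)          # f'(y), with the variable renamed to x
lhs = sp.diff(sp.log(x), x)              # d/dx f^{-1}(x)
rhs = 1 / f_prime.subs(x, sp.log(x))     # 1 / f'(f^{-1}(x))
print(sp.simplify(lhs - rhs))            # -> 0
```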
|
c7d8b3d92c257c6e76e3c686c880de443032d036
|
Q: Why integration constant is real? OK, we are all taught at school that the undefined integral of a function $f(x)$ is
$$\int f(x)\;\text{d}x = F(x) + k$$
where $F'(x) = f(x)$ and $k \in \mathbb R$.
But, why $k$ must be real?
I know the basis of including that $k$ is the fact that the difference between all primitives of a function is a constant, but, couldn't it be complex, and say the following?
$$\int f(x)\;\text{d}x = F(x) + k,\qquad \text{where $F'(x) = f(x)$ and $k \in \mathbb C$}$$
A: In general, it is not the case that $C$ must be real. Whenever you have a function, you always need to specify where it maps. Is it $f:\mathbb{C}\to\mathbb{R}$? $f:\mathbb{R^n}\to\mathbb{R}$? $f:\mathbb{R}\to\mathbb{R^n}$? Since we, in calculus, very often deal with functions $f:\mathbb{R}\to\mathbb{R}$, this is sometimes left off and assumed that the reader will know, however we do talk about function in other function spaces.
If $f:A\to B$, then the constant of integration must be an element of $B$. This is easily seen, as $F(0)=C$.
A: In my opinion, the answer is because in high-school you get prepared for "higher" mathematics, so they try not to complicate it too much. First of all, it is obvious that if you only deal with real-valued functions, you will only get real values of $k$. For instance, in my country, the topic of complex numbers is studied apart from everything, and I understood that once I studied complex analysis in the university. You cannot just go to a high-school class and talk about complex functions, because some of these are defined in special ways (logarithms, powers, trigonometric functions, etc.). That is why we study complex numbers and not complex analysis (from my point of view, of course).
Also, one more example of a similar fact: the first time we learn to solve quadratic equations (kids are around 14 y.o. when they learn that), whenever a solution of such equation is a complex number, they just tell students that there is no solution (some years later they learn there is actually a solution, which is not real). I am not saying this statement is right or wrong, I am just stating how they teach us in my country. The same with integrals, some years later you will be taught that $k$ can also be complex in some cases.
A: Sure, it can be.
It's just that in Calc I, II, III you strictly deal with functions $\mathbb{R} \rightarrow \mathbb{R}$, so if you want this class of functions to be closed under indefinite integration, you need the constant to be $\in \mathbb{R}$.
|
f10ceedf6f8d9170b3d7befe457a9cfffd7a0f76
|
Q: Converting to vertex form, where coefficient of $w^2$ cannot be factored out I need help on converting this to vertex form:
$$12w^2 + 13w + 3$$
I have tried finding examples online, but every time I find an example where $x^2$ has a coefficient, it is always able to be factored out. I can't do that here because the $13w$ would not factor out cleanly. I did even try that. I was able to get the factored form of $(12)\left(w+\frac{1}{3}\right)\left(w+\frac{3}{4}\right)$, but I cannot for the life of me figure out how to get this into vertex form. I assume that I am supposed to be completing the square, since that is what the chapter was primarily about, but I really just cannot figure out how to do that.
A: $12w^2+13w+3=12(w^2+\frac{13w}{12}+\frac{1}{4})=12((w+\frac{13}{24})^2-\frac{25}{576}).$
To get the $\frac{13}{24}$, halve the inner coefficient: $\frac{13}{24}=\frac{13}{12 \cdot 2}$; then add and subtract its square, $\left(\frac{13}{24}\right)^2=\frac{169}{576}$, inside the parentheses. In general, when completing the square on $x^2+\frac{b}{a}x$ (after factoring out $a$), add and subtract $\left(\frac{b}{2a}\right)^2$.
$$12(w^2+\frac{13w}{12}+\frac{169}{576}-\frac{169}{576}+\frac{1}{4})=12((w+\frac{13}{24})^2-\frac{25}{576})=12(w+\frac{13}{24})^2-\frac{25}{48}$$
A: Lets continue with your answer
$$\begin{array}{lll}
12w^2+13w+3&=&12(w+\frac{1}{3})(w+\frac{3}{4})\\
&=&12(w+\frac{8}{24})(w+\frac{18}{24})\\
&=&12(w+\frac{13}{24}-\frac{5}{24})(w+\frac{13}{24}+\frac{5}{24})\\
&=&12(\color{blue}{(w+\frac{13}{24})}-\frac{5}{24})(\color{blue}{(w+\frac{13}{24})}+\frac{5}{24})\\
\end{array}$$
Can you take it from here?
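Either route is easy to verify symbolically (a SymPy sketch):

```python
import sympy as sp

w = sp.symbols('w')
vertex = 12 * (w + sp.Rational(13, 24))**2 - sp.Rational(25, 48)
print(sp.expand(vertex))   # -> 12*w**2 + 13*w + 3
```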
|
e67f404d117e6bcc604ca3ae0b1e4c1241ea1ce9
|
Q: Evaluation of $\int_{1}^{3}\left[\sqrt{1+(x-1)^3}+(x^2-1)^{\frac{1}{3}}\right]dx$
Evaluation of $\displaystyle \int_{1}^{3}\left[\sqrt{1+(x-1)^3}+(x^2-1)^{\frac{1}{3}}\right]dx$
$\bf{My\; Try::}$ Put $x-1 = t\;,$ Then $dx = dt$ and changing limits, We get
$$I = \int_{0}^{2}\left[\sqrt{1+t^3}+\left(t^2+2t\right)^{\frac{1}{3}}\right]dt$$
Now Let $\sqrt{1+t^3}=u+1\;,$ Then $t^3=(u+1)^2-1\Rightarrow t = \sqrt[3]{(u+1)^2-1}$
Now How Can I solve after that, Help me, Thanks
A: (Too long for a comment, I just post it here) Try with the substitution $(x-1)^3=(\sinh(t))^2$. Your integral gets converted into
$$
\int_0^{\sinh ^{-1}\left(2 \sqrt{2}\right)} \frac{(2 \cosh (t)) \left(\sqrt[3]{2 \sinh ^{\frac{2}{3}}(t)+\sinh ^{\frac{4}{3}}(t)}+\cosh (t)\right)}{3 \sqrt[3]{\sinh (t)}} \, dt\ .
$$
Quite amazingly, Mathematica now knows a primitive of the integrand (too long to be reported here) in terms of $\sinh$, $\cosh$ and hypergeometric $_2 F_1$. So the result given by Mathematica is
$$
6+\left[\frac{3}{5} \left(\sqrt{3}+i\right) i \, _2F_1\left(\frac{1}{2},\frac{2}{3};\frac{3}{2};9\right)+\frac{6 \left(1+(-1)^{2/3}\right) \sqrt{\pi } \Gamma \left(\frac{1}{3}\right)}{5 \Gamma \left(-\frac{1}{6}\right)}\right].
$$
Quite remarkably, the term in square brackets seems to be numerically very close to zero. Therefore your initial integral seems to be equal to $6$.
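A numerical quadrature supports that value (a sketch using SciPy):

```python
from scipy.integrate import quad

# integrand is real and nonnegative on [1, 3]
val, err = quad(lambda x: (1 + (x - 1)**3)**0.5 + (x * x - 1)**(1 / 3), 1, 3)
print(val, err)   # -> 6.0000..., consistent with the integral being exactly 6
```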
|
f77b483b33c0c73a5d26b975d6f8d628a27a518c
|
Q: Range of any projection is closed. Let $X$ be a Banach space and $P$ a projection.
Show that the range of any projection is a closed subspace.
Can I use the fact that a Banach space is complete and thus closed and that $P = P^2$ to show that the range of $P$ is in fact closed?
A: If the projection is continuous, then the range of $P$ is closed, as it is the kernel of the continuous projection $I-P$.
However not all projections on a Banach space are continuous (as opposed to what happens in the case of a finite dimensional space).
So, in general, this is not true. Take for instance the subspace generated by $\{(1,0,\dots), (0,1,0, \dots), \dots\}$ in $\ell^1(\mathbb N)$. You can then extend the set $\{(1,0,\dots), (0,1,0, \dots), \dots\}$ to a (Hamel) basis of $\ell^1(\mathbb N)$ using the Axiom of Choice and then project onto the subspace generated by those elements. As this subspace (which is the range of such a projection) has a countable Hamel basis, it can't be Banach, hence it can't be closed.
|
c041d00d11e412fcc6078aae01c0d85f02af4f5d
|
Q: Maximum area of a rectangle whose vertices lie on ellipse $x^2+4y^2=1$ Maximum area of a rectangle whose vertices lie on ellipse $x^2+4y^2=1$.
I tried to do it by Lagrange multipliers, with
$F(x,y,t)= xy + t(x^2+4y^2-1)$. Differentiating w.r.t. $x$, $y$ and solving, I get $x=\frac{1}{\sqrt2}$ and $y=\frac{1}{2\sqrt2}$. So area $=0.25$. But the textbook states the answer is $1$. I would like to know where I am wrong.
Thanks
A: The area of the rectangle is not $xy$. If the rectangle has a vertex at some point $(x,y)$, then the area will be $4xy$. Hopefully the crude drawing below will help you understand why.
Note that, if you consider the rectangle whose bottom-left vertex is at the origin, then the sides have length $x$ and $y$, so the area would indeed be $xy$. This is the problem you solved, which gives $A=1/4$. Since this is a quarter of the total rectangle in question, we can multiply by $4$ to see that the answer is in fact $1$.
A: By symmetry, the rectangle with the largest area will be one with its sides parallel to the ellipse's axes. Consider any point $B(x_1,y_1)$ on the ellipse located in the first quadrant.
You can easily see that $A\equiv(-x_1,y_1)$, $D\equiv(x_1,-y_1)$, and $C\equiv(-x_1,-y_1)$.
So, $\text{Area}=4x_1y_1$.
We also have the relation:
$$x_1^2+4y_1^2=1 \;\Rightarrow\; x_1^2=1-4y_1^2 \;\Rightarrow\; x_1=\sqrt{1-4y_1^2}$$
We've taken the positive value since we chose this point to be in the first quadrant. Hence
$$\text{Area}=A(y_1)=4y_1\sqrt{1-4y_1^2}$$
The possible values of $y_1$ for which there lies a point on the ellipse in the first quadrant are $[0,\frac12]$. Let's differentiate the area to find its point of maxima:
$$\frac{dA}{dy_1}=4\sqrt{1-4y_1^2}+4y_1\cdot\frac{-4y_1}{\sqrt{1-4y_1^2}}=\frac{4\left(1-8y_1^2\right)}{\sqrt{1-4y_1^2}}$$
For a maximum,
$$\frac{dA}{dy_1}=0 \;\Rightarrow\; 1-8y_1^2=0 \;\Rightarrow\; y_1=\frac{1}{\sqrt8}$$
Corresponding to this,
$$x_1=\sqrt{1-4y_1^2}=\sqrt{1-\tfrac12}=\frac{1}{\sqrt2}$$
Thus,
$$\text{Area}_{\max}=4\cdot\frac{1}{\sqrt2}\cdot\frac{1}{\sqrt8}=1\ \text{sq. units}$$
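The critical point can be confirmed in one line (a SymPy sketch):

```python
import sympy as sp

y = sp.symbols('y', positive=True)
A = 4 * y * sp.sqrt(1 - 4 * y**2)          # area as a function of y
crit = sp.solve(sp.diff(A, y), y)
print(crit, A.subs(y, crit[0]))            # -> [sqrt(2)/4], maximum area 1
```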
|
5fd168520375977b7947206494d22563a2027bbd
|
Q: Every diagonalisable matrix has pairwise distinct eigenvalues I need to prove whether or not every diagonalisable matrix has pairwise distinct eigenvalues.
My instinct is to think that the statement is true as for a matrix to be diagonalisable there has to exist a basis consisting of the eigenvectors of the matrix. However, I am unsure what is meant by 'pairwise distinct'.
A: Repeating the obvious counterexample of sigmabe to get this off of the Unanswered queue: the identity matrix is clearly diagonalizable, yet its eigenvalues are all equal, so the statement is false. (Here "pairwise distinct" means that no two of the eigenvalues coincide.)
|
5fca249c29002282441012ec2281cabee68e9bc3
|
Q: Prove that the set $S \cup \{v\}$ is linearly independent.
Let $S$ be a linearly independent set of vectors in $\mathbb R^n$. Suppose that $v$ is a vector in $\mathbb R^n$ that is not in the span of $S$. Prove that the set $S \cup \{v\}$ is linearly independent.
Would a proof like the one below work?
Let $S = \{u_1, u_2, \ldots , u_k\}$. Given is if $a_1u_1 + a_2u_2 + a_3u_3 +\ldots +a_ku_k = 0$, then $a_i = 0$.
Consider $a_1u_1 + a_2u_2 + ...+a_ku_k - bv = 0$. Let $x_i$ be the elements of $u_1$, $y_i$ be the elements of $u_2$ and $z_i$ be the elements of $u_k$.
Then componentwise we have:
$a_1x_1 + a_2y_1 + \ldots + a_kz_1 - bv_1 = 0$
$a_1x_2 + a_2y_2 + \ldots + a_kz_2 - bv_2 = 0$
$\ldots$
$\ldots$
$a_1x_k + a_2y_k + \ldots + a_kz_k - bv_k = 0$
We can take a random row from the system above :
$a_1x_i + a_2y_i + \ldots + a_kz_i - bv_i = 0$ where $bv_i = a_1x_i + a_2y_i + \ldots + a_kz_i$ so that
$(a_1x_i + a_2y_i + \ldots + a_kz_i) + (a_1x_i + a_2y_i + \ldots + a_kz_i) = a_i(x_i + x_i) + a_2(y_i – y_i) + \ldots + a_k(z_i + z_i) = 0$.
Since no vector in $S$ is zero, none of $x_i + x_i, y_i + y_i, \ldots, z_i + z_i$ is $0$ meaning $a_i = 0.$
edit:
I think one of the problems of this proof is that if, say, $(0, 4, 3, \ldots, 8)$ and $(0, 7, 2, \ldots, 4) \in S$, then one of $x_i + x_i, y_i + y_i, \ldots, z_i + z_i$ is $0$, so the proof fails.
A: This question is tackled surprisingly quickly by using the contrapositive.
Given the premise $S\subset\mathbb{R}^n$ is linearly independent, you are trying to prove $p\implies q$, where:
*
*$p$ is "$v\notin \text{span}S$"
*$q$ is "$S\cup \{v\}$ is linearly independent".
The contrapositive works by proving $\text{not}\;q\implies \text{not}\;p$. Now:
*
*$\text{not}\;q$ is the statement "$S\cup \{v\}$ is linearly dependent".
*$\text{not}\;p$ is the statement "$v\in \text{span}S$".
Given that $S$ was previously linearly independent, any linear dependence relation on $S\cup \{v\}$ must involve a $v$ term, which means that we can easily make $v$ the subject of such an equality, which directly expresses $v$ as a member of $\text{span}S$. We are thus done.
A: As others have said, there is confusion in your notation, and much of it is unnecessary. You don't need to consider individual components of the vectors. While it is helpful to look over and correct the errors in your own proof, I will provide you with a proof sketch for help later.
Let's get things straight. $S$ is linearly independent, and $v$ is not in $Span (S)$. We are to show that $S\cup \{v\}$ is linearly independent. You should ask yourself what each of these previous statements means in terms of vectors: for instance, any nontrivial linear combination of vectors of $S$ is nonzero. If we assume for contradiction that $S\cup \{v\}$ is linearly dependent, then there is some linear combination of vectors in $S$ that equals a multiple of $v$. If this multiple is $0$, this is a contradiction since $S$ is linearly independent. If this multiple is nonzero, then $v$ can be written as some linear combination of vectors in $S$ which means that $v$ is in the span of $S$, also a contradiction. Thus $S\cup \{v\}$ is independent.
While this may look dense, it could be improved upon with the addition of symbolic representation of the vectors discussed. This is the idea, so try to sift through it.
|
6b504f73f34f151bb2d54681abfd7e888c7bc98b
|
Q: Nested radicals with logarithms The Wikipedia page about Nested radicals lists the following formula:
$$ \sqrt{n+\sqrt{n+\sqrt{n+\sqrt{n+\cdots}}}} = \tfrac12\left(1 +
\sqrt {1+4n}\right) = \Theta(\sqrt{n})$$
Suppose we replace the $\sqrt$ operator with the following function:
$$f(n) = \sqrt{n \ln{n}}$$
What is an asymptotic bound on:
$$ f{(n+f{(n+f{(n+f{(n+\cdots)))}}}} $$
?
A: Let $L$ be the limit, then
$L=f(n+L)=\sqrt{(n+L)\ln (n+L)}$
$L^{2}=(n+L)\ln(n+L)$
By $GM \leq AM$,
$L<\frac{n+L+\ln(n+L)}{2} \implies L<n+\ln(n+L)$
It seems that $L=o(n)$.
Trying to expand the RHS asymptotically, by Mathematica or else:
$L^{2} = n\ln n+L(1+\ln n)+O \left( \frac{L^{2}}{n} \right)$
By quadratic formula,
$L \sim \frac{(1+\ln n)+\sqrt{(1+\ln n)^{2}+4n\ln n}}{2}$
$L= \sqrt{n\ln n}+\frac{\ln n+1}{2}+
O\left( \ln n \, \sqrt{\frac{\ln n}{n}} \right)$.
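A quick numerical check of this expansion (a sketch; the nested radical's limit $L$ is computed by fixed-point iteration of $L\mapsto\sqrt{(n+L)\ln(n+L)}$):

```python
import numpy as np

def nested(n, iters=200):
    L = 0.0
    for _ in range(iters):                     # fixed-point iteration converges here
        L = np.sqrt((n + L) * np.log(n + L))
    return L

for n in [10**3, 10**6, 10**9]:
    approx = np.sqrt(n * np.log(n)) + (np.log(n) + 1) / 2
    print(n, nested(n), approx)                # the two columns agree to the stated order
```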
|
89923449e817fb0c2b4cd0988a17ff7833910f9a
|
Q: Got stuck while integrating $\int x^x dx$ What is the integral $$\int x^x dx$$ and how can I tell whether such an integral can be evaluated at all? Is there any rule for deciding whether a function has an elementary antiderivative?
|
7c2e89b6b7d501dc4eaf1906c55f75b65416404b
|
Q: Find maximum value of $a$ such that the matrix has three linearly independent real eigenvectors . The maximum value of $a$ such that the matrix
$\begin{pmatrix} -3 & 0 & -2 \\ 1 & -1 & 0 \\0 & a & -2\end{pmatrix}$
has three linearly independent real eigenvectors .
Please give me a hint
Thanks
A: EDIT: It was pointed out in the comments, that my first attempt was incorrect, since the question asks about real eigenvectors. (I only checked whether there are independent eigenvectors, admitting the possibility that they might be complex.)
I have tried to suggest an alternative solution. I hope that somebody will come up with something more simple. (Either I have made a mistake somewhere, or the computation which we have to do would be rather complicated if we want solve the question by hand.)
We can start by calculating the characteristic polynomial $\chi_A(x)$?
$\chi_A(x)=\begin{bmatrix} x+3 & 0 & 2 \\ -1 & x+1 & 0 \\ 0 & -a & x+2 \end{bmatrix} =x^3+6x^2+11x+6+2a=(x+1)(x+2)(x+3)+2a$
WolframAlpha
We can find situations for which this polynomial has real roots/distinct roots using the discriminant of the cubic polynomial.
WolframAlpha
Namely we get that for $a=\frac1{\sqrt {27}}$ there is a multiple real root. (This root can be found as a root of the derivative.) For $a>\frac1{\sqrt{27}}$ there are two complex roots and one real root. For $-\frac1{\sqrt{27}}<a<\frac1{\sqrt{27}}$ there are three distinct real roots; the same multiple-root phenomenon occurs again at $a=-\frac1{\sqrt{27}}$.
WolframAlpha
For three distinct real roots the situation is simple: How to prove that eigenvectors from different eigenvalues are linearly independent
For $a=\frac1{\sqrt {27}}$ we cannot say just from the eigenvalues whether the matrix is diagonalizable or not. So now we can try to diagonalize this matrix for this particular value of $a$. (Or at least to find the dimension of the eigenspace for the multiple root.)
WolframAlpha
WolframAlpha says that the matrix is not diagonalizable. This means that the given matrix has three linearly independent real eigenvectors only for $a<\frac1{\sqrt {27}}$.
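A quick numerical illustration of the three regimes around the critical value (a sketch):

```python
import numpy as np

def mat(a):
    return np.array([[-3, 0, -2], [1, -1, 0], [0, a, -2]], dtype=float)

crit = 1 / np.sqrt(27)
for a in [0.5 * crit, crit, 1.5 * crit]:
    print(a, np.round(np.linalg.eigvals(mat(a)), 4))
# below crit: three distinct real eigenvalues; at crit: a repeated real one;
# above crit: a complex-conjugate pair appears
```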
|
c2e1674c0fe08f2813c368a6144a978618171de3
|
Q: When stating the range of concavity for $x^3$, should we include the edge? So my textbook states that it is $(-\infty,0)$ that is concave down, but why not $(-\infty,0]$? Could someone help?
A: This depends on your definition of concavity. I prefer the following:
Definition: Let $I \subseteq \mathbb{R}$ be an interval and $f$ a real-valued function defined on $I$. Then $f$ is convex on $I$ if for all $x,y \in I$ and $\lambda \in [0,1]$
$$
f(\lambda x + (1-\lambda) y) \leq \lambda f(x) + (1-\lambda) f(y).
$$
and concave on $I$ if for all $x,y \in I$ and $\lambda \in [0,1]$
$$
f(\lambda x + (1-\lambda) y) \geq \lambda f(x) + (1-\lambda) f(y).
$$
We say $f$ is strictly convex on $I$ if for all $x,y \in I$ with $x \neq y$ and $\lambda \in (0,1)$
$$
f(\lambda x + (1-\lambda) y) < \lambda f(x) + (1-\lambda) f(y).
$$
and strictly concave on $I$ if for all $x,y \in I$ with $x \neq y$ and $\lambda \in (0,1)$
$$
f(\lambda x + (1-\lambda) y) > \lambda f(x) + (1-\lambda) f(y).
$$
When using this definition it is true that $f \colon \mathbb{R} \to \mathbb{R}$, $x \mapsto x^3$ is concave on $(-\infty,0]$ (even strictly concave), as well as (strictly) convex on $[0,\infty)$.
(This definition of convexity has the advantage that $f \colon I \to \mathbb{R}$ is convex on $I$ if and only if the epigraph $\{(x,y) \in I \times \mathbb{R} \mid y \geq f(x)\}$ is a convex subset of $I \times \mathbb{R}$. It also easily generalizes to functions defined on arbitrary convex subsets of vector spaces, e.g. convex subsets of $\mathbb{R}^n$.)
A: Most undergrad textbooks use the following definition:
A function $f$ is concave down on $(a,b)$ when $f''(x)<0$ for all $a<x<b$.
In your example $f''(x)=0$ at $0$, so the function $f(x)=x^3$ is not concave down at $x=0$ according to this definition.
A: Well, @User12345 if textbooks do that, they are certainly wrong.
$f:x\mapsto -x^4$ is certainly concave on $\mathbb{R}$ but does not satisfy your definition.
@CoolKid, I don't know the definition you have of a concave function, but for every definition I would agree with, I consider you are right and that $x\mapsto x^3$ is concave on $(-\infty,0]$.
At least, according to the initial definition of concave functions (without any regularity assumption), it is the case.
(I assumed concave = concave down; if I was wrong about this language subtlety and my comment is invalid because of it, I apologize.)
A: Let me give a more pedagogical explanation without referring to any specific definitions. This question is very similar to "Is $x^2$ increasing on $(0,\infty)$ or $[0,\infty)$?" Here is my answer to that question: if $x^2$ is increasing on $[0,\infty)$, then we would also have to agree by symmetry that $x^2$ is decreasing on $(-\infty,0]$. This means that at $0$ this function is both increasing and decreasing, which simply seems kind of weird. We normally think of increasing as strictly going up, and decreasing as strictly going down, so that there is a third option of going nowhere, neither increasing or decreasing, staying flat. We could modify our definitions of increasing and decreasing (which one usually calls monotone in one fashion or another), but for the sake of making sense of the English, most keep it this way.
Then to go back to your question, at $0$ your function would be both concave up and concave down, which just doesn't seem definitive to me. So we say that it is neither, and so $x^3$ is concave down on $(-\infty,0)$ only.
|
e18542f7327c546eca6be3b67d163f392316e8a0
|
Q: Roots of this cubic equation In my example I got 1,2,3 as roots. However the actual roots are 1,1,2. Where is my mistake here? And what method should I follow? Badly need to get this fundamental... Thank u
A: Notice that in the cubic equation $\lambda^3-4\lambda^2+5\lambda-2=0$ the sum of all the coefficients is zero ($1-4+5-2=0$), hence $\lambda=1$ is a root of the cubic equation, hence $(\lambda-1)$ is a factor of $\lambda^3-4\lambda^2+5\lambda-2$; hence, using division, one should have
$$\lambda^3-4\lambda^2+5\lambda-2=(\lambda-1)(\lambda^2-3\lambda+2)$$
Further factorizing $\lambda^2-3\lambda+2$, one should get,
$$\lambda^3-4\lambda^2+5\lambda-2=(\lambda-1)(\lambda-1)(\lambda-2)$$
hence, the cubic equation is $$(\lambda-1)(\lambda-1)(\lambda-2)=0$$
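A one-line numerical confirmation (a sketch; a double root may split slightly in floating point):

```python
import numpy as np

print(np.roots([1, -4, 5, -2]))   # -> approximately [2., 1., 1.]
```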
|
6a531465a864c2ff0f7093748a3056dd78512157
|
Q: What is the value of C in $O(x^n)$ definition? I read the definition that $f$ is in $O(x^n)$ if $|f(x)|<C|x^n|$ for some $C$.
I'm struggling to understand how to check this. For example, supposedly $f(x) = 5x+3x^2$ is in $O(x)$ but not $O(x^2)$?
If I plot $f(x) = 5x + 3x^2$ and $g(x)=x$ I see that the first goes to infinity much quicker.
If I let $g(x) = Cx$, and plot $C=1,C=10, C=20, C=100$, it looks like it overcomes $f$ for $C>10$:
But, if you zoom out further, you can see that's not true:
So, I know it doesn't matter what $C$ is, but how can I show that there exists a $C$ to make the definition hold so that I can tell if $f$ is in $O(x^n)$?
If you go out far enough, and $C$ is large enough can't I make either $f$ or $g$ as close to the y-axis as I want?
A: The definition is not complete since the inequality is supposed to hold for all $x>k$ where $k$ is a constant. The idea behind the $\mathcal O$-notation is to provide an estimate (comparison) for large $x$. This makes life much easier for your example since $x < x^2 \ \forall\, x>1$, so that
$0<f(x)= 5x+3x^2<5x^2+3x^2=8x^2 \ \forall \, x>1$
So you might choose $C:=8$ and $k:=2$ here in order to see that $f\in\mathcal O(x^2)$.
By the way, this statement can be generalized to hold for polynomials of degree $n$, i.e. if $f\in \mathcal P_n$ then $f\in\mathcal O(x^n)$.
A: The function $f(x) = 5x + 3x^{2} \not \in O(x)$, but it is in $O(x^{2})$. The way you show that $f(x) \in O(x^{2})$ is by picking a constant $C$ and appropriate constant $k$ such that $|f(x)| \leq Cx^{2}$ for all $x \geq k$.
Big-O can often be ascertained using limits. If the following condition holds:
$$L = \lim_{x \to \infty} \dfrac{f(x)}{g(x)} < \infty$$
Then we have that $f(x) \in O(g(x))$. The converse does not necessarily hold, as noted in the comments below ($x\sin(x)$).
So consider:
$$\lim_{x \to \infty} \dfrac{5x + 3x^{2}}{x} = 5 + \lim_{x \to \infty} 3x = \infty$$
And:
$$\lim_{x \to \infty} \dfrac{5x + 3x^{2}}{x^{2}} = 0 + 3 = 3$$
Thus, $f(x) \in O(x^{2})$ but not $O(x)$.
You can check out these links for more information on how to formulate Big-O proofs:
http://www.dreamincode.net/forums/topic/280815-introduction-to-proofs-induction-and-big-o/
http://www.dreamincode.net/forums/topic/321402-introduction-to-computational-complexity/
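A quick numerical illustration of those two limits (a sketch):

```python
f = lambda x: 5 * x + 3 * x**2
for x in [1e2, 1e4, 1e6]:
    print(x, f(x) / x, f(x) / x**2)   # f(x)/x grows without bound; f(x)/x^2 -> 3
```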
|
9a93cbad14f21e54a772b912e864334d237e8b91
|
Q: How are groupoids richer structures than sets of groups? This has been bugging me for quite some time: my intuition with categories is that I can simply identify isomorphic objects. It does not matter, for example, whether the entries in a sudoku are the numbers $1,2,\dots,9$ or letters $a,b,\dots,i$ (this shows that you can simply identify isomorphic sets).
I heard groupoids are important objects, possibly even more fundamental than categories (I have even dealt with them before). But this seems to contradict my intuition, for you could mentally identify isomorphic objects in a groupoid and end up with just a set of groups. There has to be something wrong with this view, and I suppose it has to do with the fact that there are usually many isomorphisms between isomorphic objects (it is well known from linear algebra that choices of bases "matter"). I realize that this is a very imprecise question, but:
How can I think about groupoids, such that there are more interesting
or richer in structure than just sets of groups?
A: The first short answer is that in order to identify a groupoid with a set of groups you need to pick a basepoint in each connected component (in more categorical terms, a representative of each isomorphism class), and there are various situations where you don't want to (analogous to why you often don't want to pick bases of vector spaces).
The second short answer is that there are many reasons to consider groupoids with extra structure, which can be considerably more interesting than sets of groups with extra structure.
Here is an example where both of these considerations apply. Suppose a group $G$ acts on a space $X$. Does this induce an action on the fundamental group? The answer is no: in order to get such an action, $G$ must fix a basepoint of $X$. But it can happen that $G$ fixes no basepoint (even in a homotopical sense). However, $G$ will always act on the fundamental groupoid of $X$.
For example, let $X$ be the configuration space of $n$ ordered points in $\mathbb{R}^2$. This space has fundamental group the pure braid group $P_n$, which fits into a short exact sequence
$$1 \to P_n \to B_n \to S_n \to 1$$
where $B_n$, the braid group, is the fundamental group of the configuration space of $n$ unordered points. Now, it's clear that $S_n$ acts on $X$ by permuting points. But this action cannot be upgraded to an action on $P_n$, because the above short exact sequence does not split.
This is not an isolated example. It's part of the reason why the $E_2$ operad can be described as an operad in groupoids, but not as an operad in groups, even though its underlying spaces (homotopy equivalent to the configuration spaces above) are all Eilenberg-MacLane spaces.
There are lots of other things to say here. For example, groupoids form a 2-category, groupoids are cartesian closed, topological groupoids are richer than topological groups... the list goes on and on. Here is a slightly cryptic slogan:
You cannot really identify isomorphic objects. The space of objects isomorphic to a fixed object $X$ is not a point, it is the classifying space $B \text{Aut}(X)$.
|
64b36158c4853c1db66bebc5d50a9a48cbcedb6e
|
Q: Find the transition function of the Markov chain (X_m) I haven't taken a probability/statistics course in years and I'm trying to make my way through an Introduction to Stochastic Processes book. The question reads as follows:
Suppose we have two urns (a left urn and a right urn). The left urn contains $n$ black balls and the right urn contains $n$ red balls. Every time step you take one ball (chosen randomly) from each urn, swap the balls, and place them back in the urns. Let $X_m$ be the number of black balls in the left urn after $m$ time steps. Find the transition function of the Markov chain ($X_m$).
A: *
*What are the possible values of $X_m$?
*For a certain value $x$ of $X_m$, what are the possible values of $X_{m+1}$? (Hint: you could pick two blacks, a red and a black, a black and a red, or two reds.)
*If $X_m$ is $x$, what is the probability that $X_{m+1}$ is $y$? (Hint: this depends on the probability of picking two blacks, red and black, black and red, or two reds, which is based on the number of blacks in the left urn.)
*Now you have computed the transition function, so write it down; a sketch follows below.
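Following these hints: with $x$ black balls in the left urn, the left urn holds $n-x$ red balls and the right urn holds $n-x$ black and $x$ red, which pins down the transition probabilities. A Python sketch (this is the classical Bernoulli-Laplace urn chain):

```python
from fractions import Fraction

def transition(x, y, n):
    """P(X_{m+1} = y | X_m = x) for the two-urn swap chain."""
    p_lb = Fraction(x, n)        # draw from the left urn is black
    p_rb = Fraction(n - x, n)    # draw from the right urn is black
    if y == x + 1:
        return (1 - p_lb) * p_rb                        # red out, black in
    if y == x - 1:
        return p_lb * (1 - p_rb)                        # black out, red in
    if y == x:
        return p_lb * p_rb + (1 - p_lb) * (1 - p_rb)    # same colours swapped
    return Fraction(0)

n = 3
for x in range(n + 1):
    row = [transition(x, y, n) for y in range(n + 1)]
    print(row, "sums to", sum(row))                     # each row sums to 1
```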
|
294fdd7c9a59dc62db7a4cc6ea78f2afa7bb8921
|
Q: Name of this property: if $x * x = y * y \implies x = y$ Algebraically speaking, what's the name of this property?:
$x * x = y * y \implies x = y$
$*$ being a binary operation
A: I would simply say that the operation "$*$" admits uniqueness of square roots.
Just to be clear, following up the comment of Marc van Leeuwen (thanks!), if an element admits a square root, this is unique.
A: Not a full answer...
Define the function $f(x)= x * x$
Then your property is equivalent to $f(x)=f(y) \Rightarrow x=y$
I think that this means that $f(x)$ must be a one-to-one function.
Anyone care to extend / argue?
A: The operation defines an injective squaring operation.
|
8544e2531461383956dbf0d45163308b84e2d962
|
Q: Given $a+b+c=3$, prove $ \sum \limits_{cyc} \frac {1}{a^2+b^2+2} \le \frac 34$ Yesterday I found this on the Internet:
Given 3 non-negative numbers $a,b,c$ with $a+b+c=3$, prove
$$ \sum _{cyc} \frac {1}{a^2+b^2+2} \le \frac 34 $$
I have tried to solve this using AM-GM:
From AM-GM we got:
$$a^2+b^2+2\ge2(a+b)$$
$$\Rightarrow \frac{1}{a^2+b^2+2} \le \frac12\frac1{a+b}$$
$$\Rightarrow \sum _{cyc} \frac {1}{a^2+b^2+2} \le \frac12\sum_{cyc}\frac{1}{a+b}$$
We have to prove $$\frac12\sum_{cyc}\frac{1}{a+b}\le\frac34$$
or $$\sum_{cyc}\frac{1}{a+b}\le\frac32$$
However the above statement seem to be false. If $a = 0 , b = 1, c = 2$:
$$\sum_{cyc}\frac{1}{a+b} = 1+\frac13+\frac12=\frac{11}6\gt \frac32$$
Anyone know the solutions?
A: It suffices to show the following inequality $$ \sum\limits_{cyc}{\frac{a^2+b^2}{a^2+b^2+2}} \ge \frac{3}{2} $$ By using Cauchy, we have $$ LHS \ge \frac{\left(\sqrt{a^2+b^2}+\sqrt{b^2+c^2}+\sqrt{c^2+a^2}\right)^2}{2\left(a^2+b^2+c^2\right)+6} \ge \frac{\sqrt{3\left(a^2b^2+b^2c^2+c^2a^2\right)}+2\left(a^2+b^2+c^2\right)}{\left(a^2+b^2+c^2\right)+3} $$ Notice that $ \sqrt{3\left(a^2b^2+b^2c^2+c^2a^2\right)} \ge ab+bc+ca $, thus, the last term is at least $$ \frac{2\left(a^2+b^2+c^2\right)+\left(ab+bc+ca\right)}{\left(a^2+b^2+c^2\right)+3} = \frac{3}{2} $$ The conclusion follows. Note that $ a+b+c=3 $ implies $ a^2+b^2+c^2+2ab+2bc+2ca=9 $.
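As a numerical sanity check of the original inequality (a sketch; equality holds at $a=b=c=1$):

```python
import numpy as np

rng = np.random.default_rng(1)
best = 0.0
for _ in range(100_000):
    w = rng.random(3)
    a, b, c = 3 * w / w.sum()          # random nonnegative triple with a+b+c = 3
    s = 1/(a*a + b*b + 2) + 1/(b*b + c*c + 2) + 1/(c*c + a*a + 2)
    best = max(best, s)
print(best)                            # stays below 0.75; a = b = c = 1 gives 3/4
```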
|
4886eec9bf19c8f2eb170f06bf1bce6ee3639285
|
Q: Make $f(x)=\sin x-\frac{x+ax^3}{1+bx^2}$ be the infinitesimal of the highest order Here is the question:
Find $a$ and $b$, letting $$f(x)=\sin x-\frac{x+ax^3}{1+bx^2}$$ be the
infinitesimal of the highest order when $x \to 0$, and find that
order.
According to the key, $a=-\frac{7}{60}$, $b=\frac{1}{20}$, and the highest order can be reached is $7$.
I have used the Maclaurin expansion of $\sin x$, but I cannot understand how terms like $-\frac{x^3}{3!}$ and $\frac{x^5}{5!}$ can be cancelled. It seems what we have is just $\frac{x+ax^3}{1+bx^2}$, and its order is only 1.
A: The Maclaurin expansion of $\sin x$ is, as you know, $x - x^3/6 + x^5/120 - x^7/5040 + \cdots$.
You want to choose $a$ and $b$ so that the Maclaurin expansion of $(x+ax^3)/(1+bx^2)$ matches this for as many terms as possible.
You wouldn't want to find that expansion by differentiating repeatedly, but you don't have to - rather you can just write out the denominator as
$$ {1 \over 1+bx^2} = 1 - bx^2 + b^2 x^4 - b^3 x^6 + \cdots $$
and multiply out:
$$ \left( x + ax^3 \right) \left( 1 - bx^2 + b^2 x^4 + \cdots \right) = x + (a-b) x^3 + (b^2 - ab) x^5 + \cdots $$
so now you want to find $a, b$ such that $a-b = -1/6, b^2-ab = 1/120$. These turn out to be the values of $a$ and $b$ that are given in the key.
But if you try to do this matching the $x^7$ terms as well, you'll find that the resulting system of three equations in two unknowns has no solution. So this is the best you can do.
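A symbolic check that this choice pushes the remainder to seventh order (a SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
a, b = sp.Rational(-7, 60), sp.Rational(1, 20)
f = sp.sin(x) - (x + a * x**3) / (1 + b * x**2)
print(sp.series(f, x, 0, 9))   # -> 11*x**7/50400 + O(x**9)
```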
|
9bbda18ee6f7c9b7477c947f2a3d6397105abb52
|
Q: Coupon collector's problem using inclusion-exclusion Coupon collector's problem asks:
Given n coupons, how many coupons do you expect you need to draw with replacement before having drawn each coupon at least once?
The well-known solution is $E(T)=n \cdot H_n$, where T is the time to collect all n coupons(proof).
I am trying to approach another way, by calculating possible arrangements of coupons using inclusion-exclusion(Stirling's numbers of the second kind) and that one coupon should only be collected at last and other coupons should be collected at least once:
$$P(T=k)=\frac{n!\cdot{k-1\brace n-1}}{n^k}\\
=\frac{\sum\limits_{i=1}^{n-1}(-1)^{n-i-1}\cdot{n-1\choose i}\cdot i^{k-1}}{n^{k-1}}\\
E(T)=\sum\limits_{k=n}^{\infty}k\cdot P(T=k)\\
=\sum\limits_{k=n}^{\infty}k\cdot\frac{\sum\limits_{i=1}^{n-1}(-1)^{n-i-1}\cdot{n-1\choose i}\cdot i^{k-1}}{n^{k-1}}\\
=\sum\limits_{i=1}^{n-1}(-1)^{n-i-1}\cdot{n-1\choose i}\cdot\sum\limits_{k=n}^{\infty}k\cdot (\frac i n)^{k-1}\\
=\sum\limits_{i=1}^{n-1}(-1)^{n-i-1}\cdot{n-1\choose i}\cdot(\frac i n)^{n-1}\cdot(\frac 1 {1-\frac i n})\cdot(n-1+\frac 1 {1-\frac i n})$$
Calculation of first 170 terms yields same results.
Are two formulas same?
A: By way of enrichment here is a proof using Stirling numbers of the
second kind which encapsulates inclusion-exclusion in the generating
function of these numbers.
First let us verify that we indeed have a probability distribution
here. For $T$, the number of draws needed to collect all coupons, we have that
$$P[T=m] = \frac{1}{n^m} \times
n\times {m-1\brace n-1} \times (n-1)!.$$
Recall the OGF of the Stirling numbers of the second kind which says
that
$${n\brace k} = [z^n] \prod_{q=1}^k \frac{z}{1-qz}.$$
This gives for the sum of the probabilities
$$\sum_{m\ge 1} P[T=m]
= (n-1)! \sum_{m\ge 1} \frac{1}{n^{m-1}} {m-1\brace n-1}
\\ = (n-1)! \sum_{m\ge 1} \frac{1}{n^{m-1}}
[z^{m-1}] \prod_{q=1}^{n-1} \frac{z}{1-qz}
\\ = (n-1)! \prod_{q=1}^{n-1} \frac{1/n}{1-q/n}
= (n-1)! \prod_{q=1}^{n-1} \frac{1}{n-q} = 1.$$
This confirms it being a probability distribution.
We then get for the expectation that
$$\sum_{m\ge 1} m\times P[T=m]
= (n-1)! \sum_{m\ge 1} \frac{m}{n^{m-1}} {m-1\brace n-1}
\\ = (n-1)! \sum_{m\ge 1} \frac{m}{n^{m-1}}
[z^{m-1}] \prod_{q=1}^{n-1} \frac{z}{1-qz}
\\ = 1 + (n-1)! \sum_{m\ge 1} \frac{m-1}{n^{m-1}}
[z^{m-1}] \prod_{q=1}^{n-1} \frac{z}{1-qz}
\\ = 1 + (n-1)! \sum_{m\ge 2} \frac{m-1}{n^{m-1}}
[z^{m-1}] \prod_{q=1}^{n-1} \frac{z}{1-qz}
\\ = 1 + \frac{1}{n} (n-1)! \sum_{m\ge 2} \frac{1}{n^{m-2}}
[z^{m-2}] \left(\prod_{q=1}^{n-1}
\frac{z}{1-qz}\right)'
\\ = 1 + \frac{1}{n} (n-1)!
\left.\left(\prod_{q=1}^{n-1}
\frac{z}{1-qz}\right)'\right|_{z=1/n}
\\ = 1 + \frac{1}{n} (n-1)!
\left. \left(\prod_{q=1}^{n-1}
\frac{z}{1-qz}
\sum_{p=1}^{n-1} \frac{1-pz}{z} \frac{1}{(1-pz)^2}
\right)\right|_{z=1/n}
\\ = 1 + \frac{1}{n} (n-1)!
\prod_{q=1}^{n-1} \frac{1/n}{1-q/n}
\left. \sum_{p=1}^{n-1} \frac{1}{z} \frac{1}{1-pz}
\right|_{z=1/n}
\\ = 1 + \frac{1}{n} (n-1)!
\prod_{q=1}^{n-1} \frac{1}{n-q}
\sum_{p=1}^{n-1} \frac{n}{1-p/n}
\\ = 1 + \frac{1}{n}
\sum_{p=1}^{n-1} \frac{n^2}{n-p}
= 1 + n H_{n-1} = n \times H_n.$$
What we have here are in fact two annihilated coefficient extractors (ACE) more of which may be found at this MSE link. Admittedly the EGF better represents inclusion-exclusion than the OGF and could indeed be used here where the initial coefficient extractor would then transform it into the OGF.
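As a numerical sanity check of both identities (a minimal sketch in plain Python; the truncation point $M$ is my choice and the tail it drops is negligible):

    # Sketch: verify sum P[T=m] = 1 and E[T] = n*H_n for n = 6, truncating at M.
    from functools import lru_cache
    from math import factorial

    @lru_cache(maxsize=None)
    def S(m, k):
        # Stirling numbers of the second kind via S(m,k) = k*S(m-1,k) + S(m-1,k-1)
        if m == k == 0:
            return 1
        if m == 0 or k == 0:
            return 0
        return k * S(m - 1, k) + S(m - 1, k - 1)

    n, M = 6, 400
    P = [n * S(m - 1, n - 1) * factorial(n - 1) / n**m for m in range(1, M)]
    print(sum(P))                                  # ~1.0
    print(sum(m * p for m, p in enumerate(P, 1)))  # ~n*H_n = 14.7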
|
40129522d081c7d0fe024ba017a360a0e9cf1e7a
|
Q: Does the normal form of the fold bifurcation have something to do with the Dulac-Poincaré normal form? Maybe this is a silly question, but does the normal form $\dot{x}=\mu\pm x^2$ of the fold bifurcation have something to do with the normal form of Poincaré and Dulac, or are these completely different things?
The Poincaré-Dulac normal form is that we can write $f(x,\mu)$ as the sum of some linear part $Ax$ and a sum of resonant monomials, i.e.
$$
f(x,\mu)=Ax+\sum_{(m,\lambda)=\lambda_k}b_{mk}(\mu)x^m,
$$
where the $\lambda_k=(m,\lambda)$ are the resonances.
Now, if we have $\dot{x}=f(x,\mu)$ fulfilling the conditions to be a fold birfurcation, what is the Poincaré-Dulac normal form of $f(x,\mu)$ and does it coincide with $\mu\pm x^2$?
Edit:
I think, first of all, one can show that $\alpha\pm x^2$ is a normal form, i.e. the simplest form. Then one can show that this is the Poincaré-Dulac normal form (or some truncation of it).
So let's suppose that we have shown that we can ease $\dot{x}=f(x,\alpha)$ to the form $\dot{x}=\alpha\pm x^2$. This is shown in some books.
Then for $\alpha=0$, we have the eigenvalue $\lambda=0$, and have infinitely many resonances: $\lambda=m\lambda$ for $m\geq 2$. If we choose $N=2$, one can use the truncated Dulac normal form, where $\alpha$ is the linear summand and $x^2$ is the resonant monomial. All other summands of the truncated normal form are $0$.
For $\alpha<0$, we have eigenvalues $\pm\sqrt{-\alpha}$ and, as far as I can see, resonances only with $\lvert m\rvert\geq 3$, so for $N=2$ there are no resonances. A theorem tells us that we can write it as $Ax+R(x)$, where $Ax$ is the linear summand and $R(x)$ is a polynomial of resonant monomials of order at most $N$. So here, $Ax=\alpha$, $R(x)=x^2$.
So the normal form $\dot{x}=\alpha\pm x^2$ is the truncated Dulac normal form for $N=2$?
|
d0b4c1959718bf9b8c2f477bd209ebecee336af8
|
Q: Complete as a semimetric space but not as a topological group I shall begin with some definitions.
1) Suppose that $X$ is a topological (additive) group and $(x_{s})\subseteq X$ is a net; we say that $(x_{s})$ is Cauchy if, whenever $U$ is a neighbourhood of $0$, there is some index $r$ such that $x_{s}-x_{t}\in U$ for all $s,t\geq r$. We say that $X$ is complete as a topological group if each Cauchy net converges to some point $x$ in $X$.
2) Suppose that $X$ is a topological group whose topology is induced by a semimetric $d$; we say that $X$ is complete as a semimetric space with respect to $d$ if, whenever a sequence $(x_{n})\subseteq X$ satisfies $d(x_{m},x_{n})\rightarrow 0$ as $m,n\rightarrow\infty$, there is some $x\in X$ such that $d(x_{n},x)\rightarrow 0$ as $n\rightarrow\infty$.
Now I am looking for some semimetric $d$ such that $X$ is complete as a semimetric space but not as a topological group. It is known that such $d$ cannot be translation invariant, can anyone suggest any of those?
A: I think that there is no such example.
For topological vector spaces this had been a question of Banach and was solved in the fifties by Victor Klee. I think that his arguments also work for topological groups:
Let thus $(X,\tau)$ be a topological group such that $\tau$ is the topology of a complete metric $d$ on $X$. Let $(\hat X,\hat \tau)$ be the completion of
$(X,\tau)$ as a topological group. I am optimistic that $\hat \tau$ is completely metrizable (in the case of topological vector spaces this is true and can be found e.g. in Köthe's book; the proof for groups should be similar: the unit element has a countable neighborhood base, which should be used to construct a translation-invariant metric. This, however, needs checking!)
Now, the complete metric space $(X,d)$ is contained as a topological subspace in $(\hat X,\hat \tau)$ and a result of Sierpinski implies that $X$ is a $G_\delta$ set in $\hat X$ which is also dense. Given any $z\in\hat X$ the set
$z-X$ is also dense $G_\delta$ and Baire's theorem implies $X\cap (z-X) \neq \emptyset$, which implies $z\in X+X =X$.
EDIT. The metrizability of topological groups is indeed equivalent to the existence of countable neighborhood bases of the unit element. I found this in Bourbaki's General topology.
|
952c6b2a68dc4bd93dabc75fd743b411f9cf5f60
|
Q: What are the disadvantages of non-standard analysis? Most students are first taught standard analysis, and the might learn about NSA later on. However, what has kept NSA from becoming the standard? At least from what I've been told, it is much more intuitive than the standard. So why has it not been adopted as the mainstream analysis, especially for lower-level students?
A: So far (55 years after Robinson's book) it has not caught on. Working mathematicians have not seen much benefit to learning it. Mathematical research relies on communication with others: so although I know NSA, I normally do not use it when talking to others, since they don't. I believe NSA does have some important uses among logicians.
Nevertheless, amateur mathematicians keep posting here about non-standard analysis.
A: Robinson's framework today is a flourishing field, with its own journal: Journal of Logic and Analysis, and applications to other fields like differential geometry.
The way Robinson originally presented his theory made it appear as if one needs to learn a substantial amount of mathematical logic in order to use infinitesimals. This perception lingers on combined with the feeling, reinforced by the choice of the term nonstandard, that one requires a brave new world of novel axioms in order to do mathematics with infinitesimals.
The fact that Robinson's, as well as Ed Nelson's, frameworks are conservative with respect to the traditional Zermelo-Fraenkel (ZFC) foundations did not "trickle down to the poor" as it should have.
In undergraduate teaching, it is insufficiently realized that just as one doesn't construct the real numbers in freshman calculus, there is no need to introduce the maximal ideals, either. Emphasis on rigorous procedures (rather than set-theoretic foundations) needs to be clarified further.
The proven effectiveness of the infinitesimal approach in the classroom parallels its increasing use around the world, including the US, Belgium, Israel, Switzerland, and Italy.
A: I think there are a number of reasons:
*
*Early reviews of Robinson's papers and Keisler's textbook were done by a prejudiced individual, so most mature mathematicians had a poor first impression of it.
*It appears to have a lot of nasty set theory and model theory in it. Start talking about nonprincipal ultrafilters and see the analysts' eyes glaze over. (This of course is silly: the construction of the hyperreals and the transfer principle is as important to NSA as construction of the reals is for real analysis, and we know how much people love that part of their first analysis course.)
*There is a substantial set of opinion that because NSA and standard analysis are equivalent, there's no point in learning the former.
*Often, the bounds created with NSA arguments are a lot weaker than standard analysis bounds. See Terry Tao's discussion here.
*Lots of mathematicians are still prejudiced by history and culture to instinctively think that anything infinitesimal is somewhere between false and actually sinful, and best left to engineers and physicists.
*As Stefan Perko mentions in the comments, there are a number of other infinitesimal approaches: smooth infinitesimals, nilpotents, synthetic differential geometry, . . . none of these is a standout candidate for replacement.
*It's not a widely-studied subject, so using it in papers limits the audience of your work.
Most of these reasons are the usual ones about inertia: unless a radical approach to a subject is shown to have distinct advantages over the prevalent one, switching over is seen as more trouble than it's worth. And at the end of the day, mathematics has to be taught by more senior mathematicians, so they are the ones who tend to determine the curriculum.
A: NSA is an interesting intellectual game in its own right, but it does not help the student toward a better understanding of multivariate analysis: volume elements, $ds$ versus $dx$, etcetera. The difficulties there reside largely in the geometric intuition, and not in the $\epsilon/\delta$-procedures reformulated in terms of NSA.
We are still awaiting a "new analysis" reconciling the handling of calculus using the notation of engineers (and mathematicians as well, when they are alone) with the sound concepts of "modern analysis".
And while I'm at it: Why should we introduce more orders of infinity than there are atoms in the universe in order to better understand $\int_\gamma \nabla f\cdot dx=f\bigl(\gamma(b)\bigr)-f\bigl(\gamma(a)\bigr)\>$?
|
a31b62e1afef34fbf98abed0b3c7d644588e9c18
|
Q: Find the solutions of the Diophantine equation $x^2(y^2-1)=z^2-1$ How can I solve (i.e. find all the solutions of) the following nonlinear Diophantine equation?
Let $x,y,z$ be positive integers with $x,y,z\ge 2$, and consider
$$x^2=\dfrac{z^2-1}{y^2-1}$$
Here is what I have found so far: $z=7$, $y=2$, $x=4$ is one solution, since $\dfrac{7^2-1}{2^2-1}=16=4^2$.
Thanks in advance for any help.
A: Every $y \geq 2$ works. For each $y,$ we get an infinite sequence of solutions $(z_n, x_n)$ beginning with
$$ (1,0), $$
$$ (y,1), $$
$$ (2y^2 - 1,2y), $$
$$ (4y^3 - 3y,4y^2 - 1), $$
$$ (8y^4 - 8y^2+1,8y^3 - 4y), $$
continuing forever with
$$ z_{n+2} = 2 y z_{n+1} - z_n, $$
$$ x_{n+2} = 2 y x_{n+1} - x_n. $$
The two separate recurrences come from a single combined recurrence by Cayley-Hamilton,
$$ (z_{n+1}, \, x_{n+1}) = \left( y \, z_n + (y^2-1) \, x_n, \; \; z_n + y \, x_n \right) $$
At first I had not found these polynomials as named sequences, although it seemed likely that they have names; there are, for example, the https://en.wikipedia.org/wiki/Fibonacci_polynomials, although we are not using those. As it turns out (from the comment below), these are the Chebyshev polynomials: the $z_n$ are of the first kind, while the $x_n$ are of the second kind.
However, take a real number $t > 0$ so that
$$ y = \cosh t, $$ or
$$ t = \log \left( y + \sqrt{y^2 - 1} \right). $$
Then
$$ (z_n, x_n) = \left( \cosh nt, \; \; \frac{\sinh nt}{\sinh t} \; \right) $$
Here are $z \leq 1000$
z y x
7 2 4
17 3 6
26 2 15
31 4 8
49 5 10
71 6 12
97 2 56
97 7 14
99 3 35
127 8 16
161 9 18
199 10 20
241 11 22
244 4 63
287 12 24
337 13 26
362 2 209
391 14 28
449 15 30
485 5 99
511 16 32
577 3 204
577 17 34
647 18 36
721 19 38
799 20 40
846 6 143
881 21 42
967 22 44
====================
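The table can be regenerated directly from the recurrences (a minimal sketch in plain Python; variable names are mine):

    # Sketch: enumerate solutions with z <= 1000 via
    # z_{n+2} = 2y z_{n+1} - z_n and x_{n+2} = 2y x_{n+1} - x_n.
    solutions = []
    y = 2
    while 2 * y * y - 1 <= 1000:       # first nontrivial z for this y
        z0, z1, x0, x1 = 1, y, 0, 1    # seeds (z_0, x_0) = (1, 0), (z_1, x_1) = (y, 1)
        while True:
            z0, z1 = z1, 2 * y * z1 - z0
            x0, x1 = x1, 2 * y * x1 - x0
            if z1 > 1000:
                break
            solutions.append((z1, y, x1))
        y += 1

    for z, y, x in sorted(solutions):
        assert x * x * (y * y - 1) == z * z - 1   # check the original equation
        print(z, y, x)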
|
ec1005e24e1eeb4c316b0859d35de0d29d2e1bc2
|
Q: Any good Linear Algebra textbook that gives a good geometrical intuition? I want a book that maybe has pictures in it or just the author trying to explain what things mean geometrically.
|
a83fd81640a8ecbc974c359373ecb814b0a2cb74
|
Q: modular exponentiation large power mod composite number $$x^{11} \equiv 12 \pmod{143}$$
How to solve this without any sophisticated methods?
$$x^{11} \equiv 12 \pmod{143}$$
$${x^{120}}\equiv 12^{10}\pmod{143} $$
Is this correct, given that $x$ might not be coprime to $143$?
Thanks in advance.
A: As $143=11\cdot13,$ $$x^{11}\equiv12\pmod{143}$$
$\implies(i)\ x^{11}\equiv12\equiv1\pmod{11}$
But by Fermat's little theorem, $x^{11-1}\equiv1\pmod{11}$, hence $x\equiv x\cdot x^{10}=x^{11}\equiv1\pmod{11}\ \ \ \ (1)$
and $(ii)\ x^{11}\equiv12\equiv-1\pmod{13}$.
Now $12x\equiv x\cdot x^{11}=x^{12}\equiv1\pmod{13}$ by Fermat's little theorem,
$\implies 1\equiv12x\equiv-x\iff x\equiv-1\pmod{13}\ \ \ \ (2)$
Using Chinese remainder theorem/by observation, $$x\equiv12\pmod{\text{lcm}(11,13)}$$
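If one wants to automate this last step, the CRT routine in sympy can be used (a sketch; assumes sympy is available):

    # Sketch: combine x = 1 (mod 11) and x = 12 = -1 (mod 13).
    from sympy.ntheory.modular import crt

    print(crt([11, 13], [1, 12]))  # (12, 143), i.e. x = 12 (mod 143)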
A: $x^{11}\equiv12\pmod{143}\implies x^{22}\equiv1\pmod{143}$, as $12^2=1\pmod{143}$
Euler's theorem gives (note $x$ is invertible mod $143$, since $x^{22}\equiv1$):
$$x^{\varphi(143)}=x^{120}\equiv1\pmod{143}\implies 1=1^6\equiv(x^{22})^6=x^{132}\equiv x^{12}\pmod{143}$$
So, $x^{11}\equiv12$ and $x^{12}\equiv1$. Noting that $x^{12}$ is an invertible element of $\mathbb{Z}_{143}$, we know that $x$ must also be invertible. We thus divide the two equations and see that $x$ must be the inverse of $12\pmod{143}$, which is $12$ itself, since $12^2\equiv1$.
We thus have that $x=12$ is our only solution to this equation.
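A brute-force check over all residues confirms this (a sketch in plain Python):

    # Sketch: search every residue mod 143 for solutions of x^11 = 12.
    print([x for x in range(143) if pow(x, 11, 143) == 12])  # [12]
    print(pow(12, 11, 143))  # 12, consistent with 12^2 = 144 = 1 (mod 143)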
A: Hint:
$$
x^{11\cdot11}=x^{\varphi(143)+1}\equiv x\pmod{143}
$$
Furthermore, note that $12^2\equiv1\pmod{143}$.
|
eef5aa3a479f43d6950470ae907445ac22567f76
|
Q: Can someone explain how I can triangulate using angles and one side of a right-angled triangle? I've been looking around, trying to find a simple explanation of why and how to calculate distance using the triangulation technique, but I'm still pretty confused. I've got some basic math notions, but I lack knowledge of using angles to solve such problems.
I have a simple example, and I'd like to solve it using triangulation.
Any help is appreciated!
P.S.: I'm a layman at math, so I'm sorry in advance.
Triangle specifications:
A side: Unknown
B side: 10 meters
C(hypotenuse): Unknown
AB angle: 90°
BC angle: 70°
UPDATE:
After some searching, I found a website that clarifies:
Cos, Sin, Tan
https://www.mathsisfun.com/sine-cosine-tangent.html
A: I have drawn your triangle, but the sides are named with lower case letters instead of capitals thanks to Geogebra. Side $a$ is $10 \tan (70^\circ)$; then you can get $c$ from either $c=\sqrt {a^2+b^2}$ or $c=\frac {10}{\cos (70^\circ)}$. If you have a right angle, finding the coordinates of $B$ is pretty easy. If you don't, you need to solve a pair of simultaneous equations that represent the lengths from $A$ to $B$ and from $C$ to $B$.
A: A good acronym that we teach in the US is "soh cah toa," which is a made-up Native American word. You can interpret it as
*
*s = o/h, which means sin(angle) = "opposite side" over "hypotenuse"
*c = a/h, which means cos(angle) = "adjacent side" over "hypotenuse"
*t = o/a, which means tan(angle) = "opposite side" over "adjacent side"
In your case, the side $A$ is opposite the angle BC. I recommend you draw a picture to convince yourself that $\tan(70) = \frac{A}{B}$, and therefore $A = (10 {{\rm m}}) \times \tan(70^\circ \times (\frac{\pi {{\rm rad}}}{180^\circ} ) )$.
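A short numerical version of this computation (a sketch in plain Python; variable names are mine):

    # Sketch: solve the right triangle from B = 10 m and the 70 degree angle.
    import math

    B = 10.0                       # known side, metres
    angle = math.radians(70)       # tan/cos expect radians

    A = B * math.tan(angle)        # opposite side, about 27.47 m
    C = B / math.cos(angle)        # hypotenuse, about 29.24 m
    print(A, C, math.hypot(A, B))  # hypot(A, B) agrees with C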
|
a2c3a7dd87018d42adf4cf7c5736bc2e38325064
|
Q: Is it true that for every $k\in\mathbb N$, there exist infinitely many $n \in \mathbb N$ such that $kn+1, (k+1)n+1$ are both perfect squares? Is it true that for every $k\in\mathbb N$, there exist infinitely many $n \in \mathbb N$ such that $kn+1$ and $(k+1)n+1$ are both perfect squares? What I have tried: it seems I necessarily have to solve $a^2-b^2=n$ for a given $n$, but I cannot proceed further. Please help. Thanks in advance.
A: The answer is Yes.
I. (Update)
The solution to,
$$\begin{aligned}
kn+1 &= x^2\\
(k+1)n+1 &= y^2
\end{aligned}$$
is given by,
$$n = \frac{ -(\alpha^2 + \beta^2) + \alpha^{2(2m+1)}+\beta^{2(2m+1)} }{4k(k+1)}$$
where,
$$\alpha = \sqrt{k}+\sqrt{k+1}\\
\beta = \sqrt{k}-\sqrt{k+1}$$
For example, for $m=1,2,3,\dots$ we get,
$$\begin{aligned}
n &= 8 + 16 k \\
n &= 24 + 176 k + 384 k^2 + 256 k^3 \\
n &= 48 + 736 k + 3968 k^2 + 9472 k^3 + 10240 k^4 + 4096 k^5
\end{aligned}$$
and so on.
II. (Old answer)
$$\begin{aligned}
kn+1 &= x^2\\
(k+1)n+1 &= y^2
\end{aligned}\tag1$$
Eliminate $n$ between them and we get the Pell-like,
$$(k+1)x^2-ky^2 = 1$$
We can get an infinite number of solutions using a transformation (discussed in this post). Let $p,\,q = 4k+1,\;4k+3$, then,
$$x = p u^2 + 2 k q u v + k (k+1) p v^2$$
$$y = q u^2 + 2 (k+1)p u v + k (k+1) q v^2$$
and $u,\color{brown}\pm v$ solve the Pell equation,
$$u^2-k(k+1)v^2 = 1$$
This has initial solution,
$$u = 2 k+1,\quad v = 2$$
and an infinite more. Thus,
$$\begin{aligned}
n &= 8 + 16 k \\
n &= 24 + 176 k + 384 k^2 + 256 k^3 \\
n &= 48 + 736 k + 3968 k^2 + 9472 k^3 + 10240 k^4 + 4096 k^5 \\
n &= 80 + 2080 k + 20096 k^2 + 93952 k^3 + 235520 k^4 + 323584k^5 + 229376 k^6 + 65536 k^7
\end{aligned}$$
and so on for an infinite number of $n$ for any $k$.
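As a quick sanity check of the first two families (a sketch in plain Python):

    # Sketch: verify that k*n+1 and (k+1)*n+1 are perfect squares
    # for the first two polynomial families above.
    from math import isqrt

    def is_square(t):
        r = isqrt(t)
        return r * r == t

    for k in range(1, 20):
        for n in (8 + 16*k,
                  24 + 176*k + 384*k**2 + 256*k**3):
            assert is_square(k*n + 1) and is_square((k+1)*n + 1)
    print("both families check out")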
|