Q: Question about definition of some classes of bimodules. Suppose that we have a ring $R$ and an $R$-$R$ bimodule $M$ such that:
For every $r\in R$ and $m\in M$ there exists $r'\in R$ such that $m\cdot r=r'\bullet m.$
Examples of such bimodules can be constructed as follows.
Let $R$ be a commutative ring, let $M$ be a central $R$-$R$ bimodule (that is, $mr=rm$ for all $m\in M,r\in R$), and let $G$ be a group acting on $R$ by automorphisms. Then, for any $g\in G$ we can construct a new $R$-$R$-bimodule $_gM_I$, where
$_gM_I=M$ as sets and the actions of $R$ on $M$ are $r\cdot m=g(r)m$ and $m\bullet r=mr.$ Then we see that $r\cdot m=g(r)m=mg(r)=m\bullet g(r),$ that is, $$r\cdot m=m\bullet g(r).$$
I want to know if there is a name for this kind of bimodule.
Thanks.
Q: Analysis of Steady State Probability for Markov Process I have a balance equation, representing a Markov Chain, which yields
$$
(K - z) \pi(Z_c = z) = (\lambda_c + (z+1)x) \pi(Z_c = z+1)
$$
where $K$ is the maximum state of the server. The terms $\lambda_c$ and $x$ are constants.
From this, the following steady-state probability is obtained:
$$
\pi(Z_c = z) = \pi(Z_c = K) x^{(K-z)} * C
$$
where $C$ is given by
$$
C = \binom{K+\frac{\lambda_c}{x}}{K-z}.
$$
The approximation of $\pi(Z_c = K)$ is given by:
$$
\pi(Z_c = K) = \left(\sum_{y=0}^{K} x^y \binom{K+ \frac{\lambda_c}{x}}{y}\right)^{-1}
$$
Can anybody help me with the approximation of $\pi(Z_c = K)$? How is $\pi(Z_c = K)$ obtained?
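For what it's worth, here is a small numerical sketch (not from the original post; the parameter values are my own) that unrolls the balance equation backwards from $z=K$ and checks it against the claimed closed form; the inverse sum for $\pi(Z_c=K)$ is then exactly the normalization constant:

```python
import numpy as np
from scipy.special import binom  # handles a non-integer top argument

K, lam, x = 6, 0.7, 0.4          # assumed example values

# unnormalized probabilities from the balance equation, starting at pi(K) = 1
pi = np.zeros(K + 1)
pi[K] = 1.0
for z in range(K - 1, -1, -1):
    pi[z] = pi[z + 1] * (lam + (z + 1) * x) / (K - z)

# claimed closed form: pi(z) = pi(K) * x^(K-z) * binom(K + lam/x, K - z)
closed = np.array([x**(K - z) * binom(K + lam / x, K - z) for z in range(K + 1)])
print(np.allclose(pi, closed))   # True

# pi(K) then comes from requiring sum_z pi(z) = 1
pi_K = 1.0 / sum(x**y * binom(K + lam / x, y) for y in range(K + 1))
print(pi_K, 1.0 / pi.sum())      # identical
```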
Q: Null space of matrix product Is the following relation true or false?
$$\ker(ABC)=\ker(B)$$
where $A,B,C\in\mathbb R^{n\times n}$, $A$ and $C$ are nonsingular, and $B$ is singular.
A: Not true. Take for example $$B=\begin{bmatrix}1 & 0\\0 & 0\end{bmatrix}$$
which has a null space spanned by the vector $[0,1]^T$ and take $$A=\begin{bmatrix}1&0\\0&1\end{bmatrix}\\
C = \begin{bmatrix}0&1\\1&0\end{bmatrix}$$
The product $$ABC = \begin{bmatrix}0&1\\0&0\end{bmatrix}$$ has a null space spanned by the vector $[1,0]^T$, which differs from $\ker(B)$.
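A quick numerical check of this counterexample (a sketch I added, not part of the original answer; it uses SciPy's `null_space`):

```python
import numpy as np
from scipy.linalg import null_space

B = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.eye(2)
C = np.array([[0.0, 1.0], [1.0, 0.0]])

print(null_space(B))          # spans [0, 1]^T
print(null_space(A @ B @ C))  # spans [1, 0]^T  -> ker(ABC) != ker(B)
```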
A: Not true in general as the other answer shows. If $C$ is nonsingular and $Z$ is a basis$^{\color{red}*}$ of the kernel of $B$, then $C^{-1}Z$ is a basis of the kernel of $BC$ and hence of $ABC$ provided $A$ is nonsingular as well. So unless $C^{-1}Z$ spans the same space as $Z$, or equivalently $C\,\mathrm{ker}(B)=\mathrm{ker}(B)$, that is, $C$ maps $\mathrm{ker}(B)$ onto itself, one has $\mathrm{ker}(ABC)\neq\mathrm{ker}(B)$.
$^{\color{red}*}$$Z=[z_1,\ldots,z_k]$, $k=\dim\mathrm{ker}(B)$, $\mathcal{R}(Z)=\mathrm{span}\{z_1,\ldots,z_k\}=\mathrm{ker}(B)$.
A: The general statement here is about linear maps between $4$ vector spaces:
$$
V_1\overset{\varphi_1}\longrightarrow V_2
\overset{\varphi_2}\longrightarrow V_3
\overset{\varphi_3}\longrightarrow V_4,
$$
where $\varphi_1$ and $\varphi_3$ are isomorphisms (corresponding to the non-singular matrices) and you want to know about $\ker(\varphi_3\circ\varphi_2\circ\varphi_1)$.
Claim: $\ker(\varphi_3\circ\varphi_2\circ\varphi_1) = \ker(\varphi_2\circ\varphi_1) = \varphi_1^{-1}(\ker \varphi_2).$
Proof. Let $v \in \ker(\varphi_3\circ\varphi_2\circ\varphi_1)$, that is $\varphi_3(\varphi_2(\varphi_1(v)))=0$. Since $\varphi_3$ is an isomorphism, this is equivalent to $\varphi_2(\varphi_1(v))=0$, i.e. $v\in\ker(\varphi_2\circ\varphi_1)$, which proves the first equality. For the second equality, whenever $\varphi_2(\varphi_1(v))=0$ we have $\varphi_1(v)\in\ker \varphi_2$, so $v\in\varphi_1^{-1}(\ker\varphi_2)$. On the other hand, whenever $v\in\varphi_1^{-1}(\ker\varphi_2)$ we have $\varphi_1(v)\in \ker\varphi_2$, so $\varphi_2(\varphi_1(v))=0$, i.e. $v\in\ker(\varphi_2\circ\varphi_1)$.
Translated to the finite-dimensional case represented by matrices $A,B,C$ this says
$$
\ker(ABC) = \ker(BC) = C^{-1} \ker B.
$$
Q: Divisor in $\mathbb{C}[X]$ $\implies$ divisor in $\mathbb{R}[X]$? Let $P \in \mathbb{R}[X]$ be a real polynomial divisible by a polynomial $Q \in \mathbb{R}[X]$ in $\mathbb{C}[X]$. How can I easily show that $P$ is also divisible by $Q$ in $\mathbb{R}[X]$?
A simple argument without using higher algebraic theorems is desirable. If I could use the instruments of higher algebra, the whole exercise I have to do would be done in two lines. But I'm not allowed to use them. I think there should be an easy argument which I can't see yet because of the mental fogginess that I have sometimes.
Thank you beforehand.
A: The division algorithm uses only the field operations on the coefficients of the polynomials. If $P$ and $Q$ have real coefficients, all the computations take place in $\mathbb{R}$; so, if $P=QR$, then $R\in\mathbb{R}[X]$.
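As a quick illustration of this answer (my own sketch using SymPy; the specific polynomials are arbitrary), dividing two real polynomials only ever produces real coefficients, so the quotient computed in $\mathbb{C}[X]$ is the same one:

```python
from sympy import symbols, div, expand

x = symbols('x')
P = expand((x**2 + 1) * (x**2 + x + 2))  # x^2+1 factors over C, but P, Q are real
Q = x**2 + 1

quotient, remainder = div(P, Q, x)
print(quotient)   # x**2 + x + 2  -- real coefficients
print(remainder)  # 0
```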
A: Say you have $P = QR$ where $R \in \mathbb{C}[X]$. Then $\overline{P} = \overline{QR} \Rightarrow P = Q \bar R$ (this is complex conjugation). If $Q$ is the zero polynomial, then so is $P$ and you are done. Otherwise, there is an infinite number of points $x \in \mathbb{R}$ where $Q(x)$ is nonzero, and for every such $x$, $\bar{R}(x) = \frac{\bar P(x)}{\bar Q(x)} = \frac{P(x)}{Q(x)} = R(x)$. The two polynomials $R, \bar{R}$ agree on an infinite number of points and are therefore equal. This means that $R$ has real coefficients, and so $Q$ divides $P$ in $\mathbb{R}[X]$.
More generally the technique I used works for any Galois extension. Suppose $K \subset F$ is a Galois extension, and that $Q \neq 0$ divides $P$ in $F[X]$, ie $P = QR$ with $R \in F[X]$, $P, Q \in K[X]$. Then for every $g \in \operatorname{Gal}(F/K)$, $P = Q R = g(P) = g(Q) g(R) = Q g(R)$ (where $g(P)$ is the polynomial where you apply $g$ to every coefficient). Since $Q$ is nonzero and $F[X]$ is an integral domain, it follows that $g(R) = R$ for all $g$, and therefore all the coefficients of $R$ are in $K$ (by general Galois theory).
A: Hint $\ $ It follows from the uniqueness of the quotient (and remainder) in the division algorithm (which is the same in $\,\Bbb R[x]\,$ and $\,\Bbb C[x],\,$ using the polynomial degree as the Euclidean "size").
Therefore, since dividing $\,P\,$ by $\,Q\,$ in $\,\Bbb C[x]\,$ leaves remainder $\,0,\,$ by uniqueness, the remainder must also be $\,0\,$ in $\,\Bbb R[x].\,$ Thus $\ Q\mid P\, $ in $\,\Bbb C[x]\ $ $\Rightarrow$ $\ Q\ |\ P\ $ in $\,\Bbb R[x].$
This is but one of many examples of the power of uniqueness theorems for proving equalities.
Remark $\ $ More generally, $ $ it follows from persistence of Euclidean gcds in extension domains since, by Bezout, the gcd may be specified (up to unit factor) via the solvability of a system of (linear) equations over $D,\,$ and such solutions persist in extension domains of $D,\,$ i.e. roots in $D\,$ persist as roots in $E\supset D.\,$ Note $\, Q\nmid P\,$ in $\,\Bbb R[x]\,$ iff their gcd $\,(Q,P) = AQ+BP\:$ has smaller degree than $\,Q.\,$ If so, the Bezout equation persists as a witness that $\,Q\nmid P\,$ in $\,\Bbb C[x]$.
Such uniqueness is a characteristic property of polynomial domains over fields. Namely, if $D$ is a Euclidean domain with division algorithm having unique quotient and remainder, then either $D$ is a field or $D = F[x]$ for a field $F.\,$ For proofs see e.g.
M. A. Jodeit, Uniqueness in the division algorithm, Amer. Math. Monthly 74 (1967), 835-836.
T. S. Rhai, A characterization of polynomial domains over a field, Amer. Math. Monthly 69 (1962), 984-986.
A: This can also be done by contradiction as follows:
Suppose that $P=QR$ where $P$ and $Q$ are in $\mathbb{R}[x]$ and $R$ is in $\mathbb{C}[x]\setminus\mathbb{R}[x]$. Let us write
$$
R=\sum_{i=0}^kr_ix^i.
$$
Since $R$ is not a polynomial with real coefficients, there is some $r_i$ which is not real. Let $j$ be the largest index for which $r_j$ is not real. Then,
$$
R=\left(\sum_{i=0}^{j-1}r_ix^i\right)+r_jx^j+\left(\sum_{i=j+1}^kr_ix^i\right).
$$
I claim that, for $x$ sufficiently large, $R(x)$ is not a real number.
* For any real number $x$, the third summand is real since all of the coefficients are real.
* For all nonzero real numbers $x$, $r_jx^j$ is not a real number since $r_j$ is not real.
* For all real numbers $x$ sufficiently large, the first summand is smaller in absolute value than the imaginary part of $r_jx^j$. For a sketch, let $r$ be the maximum of the absolute values of the $r_i$'s. Then, using a geometric sum, the absolute value of the first sum is bounded from above by
$$
r\left(\frac{x^j-1}{x-1}\right).
$$
A short calculation will show that if $x$ is sufficiently large,
$$
r\left(\frac{x^j-1}{x-1}\right)<|\Im(r_j)|x^j.
$$
Combining all of this results in the conclusion that the imaginary part of $r_jx^j$ cannot fully cancel, so $R(x)$ is not real.
This leads directly to our contradiction. After fixing $x$ sufficiently large from above, note that $P(x)$ and $Q(x)$ are real numbers (and we may choose $x$ sufficiently large so that these are nonzero). We then have $P(x)=Q(x)R(x)$, but it is impossible for this equality to hold while $P(x)$ and $Q(x)$ are real, but $R(x)$ is not.
Q: L1 convergence and Lp bounded implies Lq convergence I have tried to solve this problem for almost a week and did not manage to, so I figured to ask it here:
Let $(u_n)\to u$ in $L^1(0,1)$ strongly and let $\{u_n\}_{n\in\mathbb{N}}$ be bounded in $L^p(0,1)$ for some $p>1$. Show that $u_n\to u$ in $L^q(0,1)$ strongly for all $1 \leq q < p$.
Thanks in advance.
A: The case $q = 1$ is immediate from the hypotheses, so let's suppose $1 < q < p$. The idea is to use Hölder's inequality to get an estimate
$$\int_0^1 \lvert u_n(t) - u_m(t)\rvert^q\,dt \leqslant \lVert u_n - u_m\rVert_{L^1}^\alpha \cdot \lVert u_n-u_m\rVert_{L^p}^\beta,$$
which then shows that $(u_n)$ is a Cauchy sequence in $L^q(0,1)$, since $\lVert u_n - u_m\rVert_{L^p} \leqslant \lVert u_n\rVert_{L^p} + \lVert u_m\rVert_{L^p}$ is bounded, and $(u_n)$ is an $L^1$-Cauchy sequence.
Thus we write
$$\int_0^1 \lvert u_n(t) - u_m(t)\rvert^q\,dt = \int_0^1 \lvert u_n(t)-u_m(t)\rvert^\gamma\cdot \lvert u_n(t) - u_m(t)\rvert^{q-\gamma}\,dt,$$
and it remains to determine $\gamma$ so that Hölder's inequality gives the exponent $1$ in the first and $p$ in the second factor, i.e., since for the first factor we raise to the $\frac{1}{\gamma}$-th power, we need $$\frac{q-\gamma}{1-\gamma} = p.$$
I trust the remaining details are not hard to find.
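For completeness, a sketch of the remaining computation (my own addition): solving $\frac{q-\gamma}{1-\gamma} = p$ gives $\gamma = \frac{p-q}{p-1} \in (0,1)$, and Hölder's inequality with the conjugate exponents $\frac{1}{\gamma}$ and $\frac{1}{1-\gamma}$ then yields
$$\int_0^1 \lvert u_n(t) - u_m(t)\rvert^q\,dt \leqslant \lVert u_n - u_m\rVert_{L^1}^{\gamma} \cdot \lVert u_n-u_m\rVert_{L^p}^{p(1-\gamma)},$$
i.e. the estimate at the top with $\alpha = \frac{p-q}{p-1}$ and $\beta = \frac{p(q-1)}{p-1}$.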
Q: Does $\mathrm{Im}(f(z))$ bounded above $\implies$ $|f|$ is bounded, for analytic $f$? If $f$ is analytic on $\Omega$ s.t. $\mathrm{Im}(f(z))$ is bounded from above, then does this imply that $|f|$ is itself bounded?
I know that if $\Omega = \mathbb{C}$, then the result follows as a result of Liouvilles's Theorem; however, I'm here looking at an arbitrary $\Omega$ that isn't necessarily equal to $\mathbb{C}$.
A: No. Let $\Omega = \{z \in \Bbb C : |\operatorname{Im}(z)| < 1\}$, $f(z) = z$. Obviously $\operatorname{Im}(f(z))$ is bounded on $\Omega$, but $|f(z)|$ isn't.
Q: Help with this exercise from Hungerford's book I'm trying to solve this question from Hungerford's book:
Lemma 6.11 says:
Let $T$ be a subgroup of index $2$; if $T$ contains a set that generates $A_n$, then $A_n\subset T$, and thus by Lagrange's theorem $[S_n:A_n]|A_n|=[S_n:T]|T|$, which implies $|A_n|=|T|$, so $A_n=T$ (why, formally? The pigeonhole principle?).
So the solution follows easily using the hint, but I couldn't prove why every subgroup of index $2$ must contain all $3$-cycles of $S_n$. What I know is that every $3$-cycle has order $3$, and I'm trying to find something using this fact and Lagrange's theorem, without any success.
I need help.
Thanks in advance
A: Choose any $3$-cycle $\alpha \in S_n$. We want to show that $\alpha \in T$. To see this, we argue by contradiction. Suppose instead that $\alpha \notin T$. Then since $T$ has index $2$, we know that $S_n = T \cup \alpha T$ with $T \cap \alpha T = \emptyset$. Now consider the element $\alpha^2 \in S_n = T \cup \alpha T$. It can only land in two places:
* Case 1: Suppose that $\alpha^2 \in T$. Then since $\alpha$ has order $3$, we know that $\alpha^{-1} = \alpha^2 \in T$. But groups are closed under inverses, so $\alpha = (\alpha^{-1})^{-1} \in T$, a contradiction.
* Case 2: Suppose that $\alpha^2 \in \alpha T$. Then there is some $\beta \in T$ such that $\alpha^2 = \alpha\beta$. But then left cancellation by $\alpha$ gives us that $\alpha = \beta \in T$, a contradiction.
Thus, we conclude that $T$ contains all $3$-cycles of $S_n$, as desired. $~~~\blacksquare$
Q: non uniform convergence of integrable functions Let $(f_n)\subseteq L(X,\mathbb{X},\mu)$ and $f_n\longrightarrow f$, then I must show that if $\lim_{n}\int \mid f_n-f\mid=0$ then $\int\mid f\mid d\mu =\lim_n \int\mid f_n\mid d\mu$.
I don't know how to use the non-uniform convergence in my favor. I appreciate any suggestions.
A: Use the inequality $$\left|\int_A |f_n|\mu (dx) -\int_A |f| \mu (dx)\right|\leq \int_A |f_n -f|\mu (dx) $$
Q: $\frac {a_{n+1}}{a_n} \le \frac {b_{n+1}}{b_n}$ If $\sum_{n=1}^\infty b_n$ converges then $\sum_{n=1}^\infty a_n$ converges as well
We have two positive series: $\displaystyle\sum_{n=1}^\infty a_n$, $\displaystyle\sum_{n=1}^\infty b_n$ and we know that: $\frac {a_{n+1}}{a_n} \le \frac {b_{n+1}}{b_n}$ (from a certain index). Show that if $\displaystyle\sum_{n=1}^\infty b_n$ converges then $\displaystyle\sum_{n=1}^\infty a_n$ converges as well.
I think I know how to do it with the contrapositive (suppose $\sum a_n$ diverges and show that $\sum b_n$ must then diverge as well), but I don't know how to show that $\frac {a_{n+1}}{a_n} \le \frac {b_{n+1}}{b_n} \Rightarrow a_n \le Cb_n \ \forall n$ for some constant $C>0$.
A: If $M > 0$ and $a_n \le M b_n$, then $a_{n+1} \le \dfrac{a_n b_{n+1}}{b_n} \le M b_{n+1}$. You can use the comparison test.
A: $$\frac {a_{j+1}}{a_j} \le \frac {b_{j+1}}{b_j}$$
$$\log{a_{j+1}}-\log{a_{j}} \leq \log{b_{j+1}}-\log{b_{j}} $$
Sum from $j=1$ to $n$; the sums telescope:
$$\sum_{j=1}^n\log{a_{j+1}}-\log{a_{j}} \leq \sum_{j=1}^n\log{b_{j+1}}-\log{b_{j}} $$
$$\log{a_{n+1}}-\log{a_{1}} \leq \log{b_{n+1}}-\log{b_{1}} $$
Take exponential:
$$\frac{a_{n+1}}{a_1}\leq \frac{b_{n+1}}{b_1}$$
And you have your result: $a_{n+1}\leq \frac{a_1}{b_1}\,b_{n+1}$, so the comparison test applies.
Q: If $f$ is twice differentiable then $f^{-1}$ is twice differentiable
$f:(a,b) \rightarrow (c,d)$ is a bijection and $f$ is differentiable with $f'(x) \neq 0$ for all $x \in (a,b)$; then $f^{-1}$ is also everywhere differentiable.
Show that if $f$ is twice differentiable then so is $f^{-1}$ and write down the formula for $(f^{-1})''$.
Now, my attempt at this question so far:
I do not know how to show that $f$ being twice differentiable implies that $f^{-1}$ is also twice differentiable.
But I did attempt the second part of the question
By inverse function theorem we know $$(f^{-1})'(f(x))= \dfrac{1}{f'(x)}$$
now differentiating by chain rule this we get $$f'(x)(f^{-1})''(f(x))=\dfrac{-f''(x)}{(f'(x))^{2}}$$ and rearranging; $$(f^{-1})''(f(x))=\dfrac{-f''(x)}{(f'(x))^{3}}$$
Any hints or help for the first part of the question would be much appreciated.
REMARK: If somebody could advise me as to how to write fractions where the middle line isn't missing; I have tried \frac and \dfrac, but both to no avail.
A: Let $f^{-1}=: g$ and denote by ${\rm rec}: z\mapsto {1/z}$ the reciprocal function. It seems you have available the formula
$$g'(y)={1\over f'\bigl(g(y)\bigr)}\qquad(c<y<d)\ .\tag{1}$$
This formula can be read as follows:
$$g'={\rm rec}\circ f'\circ g\ .\tag{2}$$
On the right side you have a composition of three differentiable functions: ${\rm rec}$ is differentiable wherever its argument $z$ is $\ne0$ (and the derivative is $-{1\over z^2}$); then $f'$ is differentiable by assumption, and $g$ is differentiable according to $(1)$.
We therefore can apply the chain rule to $(2)$ and obtain
$$g''=\bigl({\rm rec}'\circ f'\circ g\bigr)\cdot\bigl(f''\circ g\bigr)\cdot g'\ .\tag{3}$$
Writing this out in terms of the variable $y$ and using $(1)$ we obtain
$$g''(y)=-{1\over\bigl(f'\bigl(g(y)\bigr)\bigr)^2}\cdot f''\bigl(g(y)\bigr)\cdot{1\over f'\bigl(g(y)\bigr)}=-{f''\bigl(g(y)\bigr)\over\bigl(f'\bigl(g(y)\bigr)\bigr)^3}$$
A: The first order derivatives
For any $y\in(c,d)$ we have a unique $x=f^{-1}(y)\in(a,b)$ and by assumption we know that $f'(x)=\lim_{\Delta x\rightarrow 0}\frac{\Delta y}{\Delta x}=k$ is well defined and non-zero.
The next part can be phrased more technically via epsilon-delta arguments, but basically we then see that for $\Delta x$ small enough $\Delta y$ is non-zero and we can determine
$$
(f^{-1})'(y)=\lim_{\Delta y\rightarrow 0}\frac{\Delta x}{\Delta y}=\frac{1}{k}=\frac{1}{f'(x)}
$$
The second order derivatives
Assume now that $f$ is twice differentiable. So
$$
\begin{align}
(f^{-1})'(y+\Delta y)-(f^{-1})'(y)&=\frac{1}{f'(x+\Delta x)}-\frac{1}{f'(x)}\\
\color{white}{\frac{0}{0}}\\
&=\frac{f'(x)-f'(x+\Delta x)}{f'(x+\Delta x)\cdot f'(x)}
\end{align}
$$
The denominator of this tends to $f'(x)^2$ as $\Delta y$ and $\Delta x$ tend to zero and when divided by $\Delta x$ the numerator tends to $-f''(x)$ so we may write
$$
\frac{f'(x)-f'(x+\Delta x)}{\Delta y}=\frac{f'(x)-f'(x+\Delta x)}{\Delta x}\cdot\frac{\Delta x}{\Delta y}\longrightarrow-f''(x)\cdot\frac{1}{f'(x)}
$$
Applying the rules for limits this then eventually leads to the conclusion
$$
\begin{align}
(f^{-1})''(y)&=\lim_{\Delta y\rightarrow 0}\frac{(f^{-1})'(y+\Delta y)-(f^{-1})'(y)}{\Delta y}\\
\color{white}{\frac{0}{0}}\\
&=\lim_{\Delta y,\Delta x\rightarrow 0}\frac{\frac{f'(x)-f'(x+\Delta x)}{\Delta y}}{f'(x+\Delta x)\cdot f'(x)}\\
\color{white}{\frac{0}{0}}\\
&=\frac{-f''(x)}{f'(x)^3}
\end{align}
$$
Just like you wrote.
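A quick symbolic sanity check of the formula (my own sketch with SymPy, using $f=\exp$, whose inverse is $\log$ on $(0,\infty)$):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = sp.exp(x)        # bijection onto (0, oo) with f' never zero
g = sp.log(y)        # its inverse

lhs = sp.diff(g, y, 2)                                   # (f^{-1})''(y)
rhs = (-sp.diff(f, x, 2) / sp.diff(f, x)**3).subs(x, g)  # -f''(g(y)) / f'(g(y))^3
print(sp.simplify(lhs - rhs))  # 0
```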
Q: Can anyone help me solve the equation $\sqrt{x^2+x+1}+ x^{3}= \sqrt{2x+2}+x^{2}+x$ algebraically? I can't solve this. Can anyone show me a detailed solution?
$$
\sqrt{x^2+x+1}+x^3=\sqrt{2x+2}+x^2+x
$$
A: A brute force method is to isolate the square roots, $\sqrt{x^2+x+1}-\sqrt{2x+2}=\ldots$, and square both sides. You will still have a square root left; isolate it and square again. This will give you an equation of degree twelve(!), but a few solutions spring to your face: When is $\sqrt{2x+2}=0$, and what happens then? When is $\sqrt{x^2+x+1}=\sqrt{2x+2}$, and what happens then? With the right idea, this brings you down to a remaining polynomial of degree $7$, which - unfortunately - is irreducible.
A: Executing Hagen von Eitzen's program you arrive at the equation
$$(1 + x) (-1 - x + x^2)^2 (1 - x - 5 x^2 - x^3 + 2 x^5 - 3 x^6 + x^7)=0\ .$$
The first two factors produce the zeros
$$x_1=-1, \quad x_2={1\over2}(1-\sqrt{5}),\quad x_3={1\over2}(1+\sqrt{5})\ ,$$
and the third factor has three real zeros that Mathematica computes numerically to
$$x_4=-0.550881,\quad x_5=0.350054,\quad x_6=2.39706\ .$$
But not all of these six $x_k$ are solutions of the original problem. We only have proven that the solution set $S$ is a subset of $\{x_1,x_2,\ldots, x_6\}$.
It is obvious that $x_1\in S$. Furthermore $x^2+x+1=2x+2=3\mp\sqrt{5}$ when $x=x_2$, resp. $x=x_3$. From this observation it then follows that $x_2$, $x_3\in S$ as well. Finally it can be verified numerically that $x_4\in S$, too. The candidates $x_5$ and $x_6$ are definitely not in $S$.
To sum it all up, we have $S=\{x_1,x_2,x_3,x_4\}$.
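To double-check this numerically (a sketch I added; the decimal values are the ones quoted above):

```python
import numpy as np

def F(x):
    return np.sqrt(x**2 + x + 1) + x**3 - np.sqrt(2*x + 2) - x**2 - x

accepted = [-1.0, (1 - np.sqrt(5)) / 2, (1 + np.sqrt(5)) / 2, -0.550881]
rejected = [0.350054, 2.39706]

for r in accepted:
    print(f"F({r:+.6f}) = {F(r):+.2e}")  # all ~0
for r in rejected:
    print(f"F({r:+.6f}) = {F(r):+.2e}")  # clearly nonzero
```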
A: This can be solved easily by observing the aid equation once you rearrange the terms.
$$
\sqrt{x^2 + x + 1} - \sqrt{2x+2} = -x^3 + x^2 + x
$$
$$
\sqrt{x^2 + x + 1} - \sqrt{2x+2} = -x(x^2 -x -1)
$$
Notice that the difference of the squares of the two square-root terms on the LHS is $x^2 -x -1$, the same factor that appears on the RHS.
Let's multiply and divide by the conjugate on the LHS
$$
\left(\sqrt{x^2 + x + 1} - \sqrt{2x+2}\right) \times \frac{\sqrt{x^2 + x + 1} + \sqrt{2x+2}}{\sqrt{x^2 + x + 1} + \sqrt{2x+2}} = -x(x^2 -x -1)
$$
$$
\frac{x^2 -x -1}{\sqrt{x^2 + x + 1} + \sqrt{2x+2}}= -x(x^2 -x -1)
$$
Rearranging a few terms,
$$
\frac{x^2 -x -1}{\sqrt{x^2 + x + 1} + \sqrt{2x+2}} + x(x^2 -x -1) = 0
$$
$$
(x^2 -x -1) \times \left ( \frac{1}{\sqrt{x^2 + x + 1} + \sqrt{2x+2}} + x \right ) = 0
$$
Now, either of the factors $ (x^2 -x -1) $, $ \Bigl ( \frac{1}{\sqrt{x^2 + x + 1} + \sqrt{2x+2}} + x \Bigr)$ could vanish.
We know that $x \geq -1$ (the constraint placed by $ \sqrt{2x + 2}$ ), and $ \frac{1}{\sqrt{x^2 + x + 1} + \sqrt{2x+2}} + x = 0$ at $ x = -1$.
The other solutions lie in $ x^2 -x -1 = 0$, which can be solved as a standard quadratic polynomial.
These three values are solutions.
Edit: There is another solution hidden in the equation, which can be found by solving $ \Bigl ( \frac{1}{\sqrt{x^2 + x + 1} + \sqrt{2x+2}} + x \Bigr) = 0$ completely.
Q: Decoupling a system of two partial differential equations I have the following system of PDEs:
$$
u_t+x^2u_{xx}-\dfrac{h_1(t)}{h_0(t)}e^{-(v-u)}-\dfrac{h_0'(t)}{h_0(t)} = 0,\\
v_t-\dfrac{h_0(t)}{h_1(t)}e^{-(u-v)}-\dfrac{h_1'(t)}{h_1(t)} = 0,
$$
where $x\in[-L,L]$ and $t\in (0,T)$, and $(h_0(t),h_1(t))$ is a solution of the system ${\bf h}'(t)={\bf M}{\bf h}(t)$; I have explicit expressions for $h_0(t)$ and $h_1(t)$.
Is there a way of decoupling these PDEs? I think there is a way which could take advantage of the simpler $v$ equation.
Q: How can I find the supremum? We set $f_n(x)=(1+\frac{x}{n})^n$, $x \in \mathbb{R}$. Check the uniform convergence of $f_n$ on the intervals $(-\infty,a)$ and $(a,+\infty)$, where $a$ is an arbitrary real number.
$\lim_{n \to +\infty} {f_n(x)}=e^x$, so $f_n$ converges pointwise to $e^x$.
To check the uniform convergence on $(a,+\infty)$, I have to find $\sup_{x>a} |(1+\frac{x}{n})^n-e^x|$. But how can I calculate this supremum?
Q: Why is this set not a vector space? Let $V =\mathbb{R}^2$ and define the addition and scalar multiplication operations as follows:
$u=(u_1, u_2)$
$v=(v_1, v_2)$
$$u+v=(u_1+v_1,u_2+v_2)$$
$$ku=(u_1k,0)$$
The book says: "the addition operation is the standard one from $\mathbb{R}^2$, but the scalar multiplication is not".
Why not? For example:
$k=1$ and $u=(2,4)$; then $ku=(2,0) \in V$.
So what is the problem?
A: The scalar operation is not the usual one since that one is $k \cdot (a,b) = (ka,kb)$.
The problem with this definition of scalar multiplication is that $1 \cdot u \neq u$. In fact, we have $1 \cdot (u_1, u_2) = (u_1, 0)$.
Do you see why this is a problem?
Q: About the implicit function in a holomorphic situation. Let $f(x,y)$ be a polynomial with integral coefficients which has a zero $(a,b)\in \mathbb{R}^2$ such that the partial derivative with respect to $y$ at this point is nonzero. Then by the implicit function theorem we have a neighborhood $U\subset \mathbb{R}$ and a function $y(x)$, differentiable in $U$, such that
$f(x,y(x))=0.$
My question is the following:
Can I find a neighborhood $V\subset \mathbb{C}$ and a holomorphic function $Y(x)$ in $V$ such that
$f(x,Y(x))=0?$
In particular, is there a holomorphic implicit function theorem?
Thanks
A: If you first forget the complex structure, then this is a standard situation for the implicit function theorem with $F:\Bbb R^4\to \Bbb R^2$. The corollary of the theorem tells you that if $F(x,y)$ is $k$ times continuously differentiable, then so is $Y(x)$.
Here you only need the first derivative. Now, reinserting the complex structure, one finds that the derivative of $Y(x)$ inherits the complex structure from $F$ and its partial derivatives, so that $Y(x)$ satisfies the Cauchy-Riemann partial differential equations and is thus holomorphic.
Q: Proving that a metric space with 3 points can be embedded isometrically into $\mathbb{R}^2$ My definition of an isometric embedding is that if $(M_1,d_1)$ and $(M_2,d_2)$ are metric spaces, then $G:M_1 \to M_2$ is an isometric embedding if $d_2(G(x),G(y)) = d_1(x,y)$ for all $x,y \in M_1$.
I would like to show that any metric space that has 3 points can be embedded isometrically into $\mathbb{R}^2$ with the euclidean metric. My strategy has been to define the maps point by point, but it always seems that whatever first two points' map I define, I can never get the third to work. Does anyone know of a proof?
A: Let's write $\{x_0, x_1, x_2\}$ as your 3-point metric space. Let's say that $y_0 = d(x_0, x_1)$, $y_1 = d(x_0, x_2)$, and $y_2 = d(x_1, x_2)$. We want to define an isometric embedding of this space into $\Bbb R^2$. Because I'm lazy, let's say $f(x_0) = 0$, and $f(x_1)=(0,y_0)$. Everything works so far: $d(f(x_0),f(x_1))=y_0=d(x_0,x_1)$.
Now we want $d(f(x_0),f(x_2))=y_1$; so let's draw a circle of radius $y_1$ around $0$. If we can isometrically embed the 3-point metric space, $f(x_2)$ will have to lie on that circle. Similarly, we want $d(f(x_1),f(x_2))=y_2$; so let's draw a circle of radius $y_2$ around $(0,y_0)$. $f(x_2)$ will have to lie on this circle as well. So it suffices to find where our two circles intersect; if they intersect at all, then have $f(x_2)$ be one of the points of intersection. This is an isometric embedding.
So we need to check that the two circles intersect. But because $y_0+y_1 \geq y_2$ by the triangle inequality, this must be true (why?)
A: Let $X=\{A,B,C\}$ be the metric space with $3$ points and $a=d(B,C)$, $b=d(A,C)$ and $c=d(A,B)$.
Then pick any points $D$, $E$ and $F$ in ${\mathbb R}^2$ such that $|EF|=a$, $|DF|=b$ and $|DE|=c$ (where $|PQ|$ denotes the Euclidean distance between two points $P$ and $Q$). Such a triangle exists since $a\leq b+c$, $b\leq a+c$ and $c\leq a+b$. Then the application $f$ from $X$ into ${\mathbb R}^2$ defined by $f(A)=D$, $f(B)=E$ and $f(C)=F$ is clearly an isometry.
Q: Odds in Pascal's Triangle Let $O(n)$ be the number of odds in rows $0-n$ in Pascal's triangle. Let $E(n)$ be the number of evens in rows $0-n$. I have heard the claim that the $\lim_{n \to \infty} \frac{O(n)}{E(n)}=0$. Does anyone have a proof of this? It seems like the relationship between Pascal's Triangle and the binary representation of the row number could be useful, but I am not seeing it.
A: I think it's easier to show the equivalent claim that $\lim \frac{O}{O+E}=0$.
Let $b(n)$ be the number of $1$s in the binary expansion of $n$. Then it is a well-known fact (alluded to in the question) that there are $2^{b(n)}$ odd numbers in the $n^{\rm th}$ row of Pascal's triangle, and so $O(n)=\sum_{i=0}^n 2^{b(i)}$.
We will start by computing $O(2^n-1)$. There are $\binom{n}{k}$ numbers less than $2^n$ that have $k$ $1$s in their binary expansion. Summing over all $k$, we have
$O(2^n-1)=\sum_{i=0}^{2^{n}-1} 2^{b(i)}=\sum_{k=0}^n 2^k \binom{n}{k}=3^n$.
On the other hand, $O(2^{n-1}-1)+E(2^{n-1}-1)$ is the total number of entries in the first $2^{n-1}$ rows (rows $0$ through $2^{n-1}-1$), and thus is equal to $\frac{2^{n-1}(2^{n-1}+1)}{2}=2^{n-2}(2^{n-1}+1)$. This grows asymptotically like $4^n$, so clearly $\lim_{n\to \infty} \frac{O(2^n-1)}{O(2^{n-1}-1)+E(2^{n-1}-1)}=0$.
But $O$ and $O+E$ are increasing functions. Thus, for any $l$, we may take $n$ with $2^{n-1} \leq l \leq 2^n - 1$, and then $\frac{O(l)}{O(l)+E(l)} < \frac{O(2^{n}-1)}{O(2^{n-1}-1)+E(2^{n-1}-1)}$. As the latter sequence converges to $0$ and the former sequence is positive, it too must converge to $0$.
A: If you look at the structure of the odd elements in Pascal's triangle you'll see a well-known self-similar figure essentially equivalent to the Sierpinski triangle. Once you prove the self-similarity, you can use it to bound the ratio $R(2^n) = \frac{O(2^n)}{E(2^n)}$ by $\left(\frac34+o(1)\right)R(2^{n-1})$ (and in particular, bound it by e.g. $\frac45R(2^{n-1})$ for sufficiently large $n$); this provides a subsequence that converges to $0$. The other part of the proof is to show that the 'intermediate' ratios (i.e., $R(k)$ for $2^{n-1}\lt k \lt 2^n$) never get too much larger; this is a little more complicated, but still relatively straightforward.
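A short numerical illustration of the decay (my own sketch; it uses the $2^{b(i)}$ row-count fact from the first answer):

```python
def O(n):  # number of odd entries in rows 0..n
    return sum(2 ** bin(i).count("1") for i in range(n + 1))

def total(n):  # total number of entries in rows 0..n
    return (n + 1) * (n + 2) // 2

for k in (4, 8, 12, 16):
    n = 2**k - 1
    print(n, O(n) / total(n))  # ratio O/(O+E) shrinks roughly like 2*(3/4)^k
```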
Q: Matrix logarithm, determinant and trace maximization Let $A = A^\ast \in M_n$ be a positive definite matrix ($\lambda_i(A) > 0$). Show that
$$\log\det(A)-\mathrm{tr}(A)$$
is maximized by $A = I$.
A: Note that if $\lambda_i>0$, $i=1,\ldots,n$, are the eigenvalues of $A$, then
$$\tag{$*$}
\log\det(A)-\mathrm{tr}(A)=\log\prod_{i=1}^n\lambda_i-\sum_{i=1}^n\lambda_i=\sum_{i=1}^n f(\lambda_i), \quad f(\lambda)=\log\lambda-\lambda.
$$
The function $f$ has on $(0,\infty)$ the global maximum at $\lambda=1$ and hence ($*$) has the global maximum at $\lambda_1=\cdots=\lambda_n=1$. Hence $A=I$ maximises ($*$).
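A quick numerical check (my own sketch): for random positive definite $A$ the value stays below the maximum $-n$ attained at $A=I$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def objective(A):
    return np.linalg.slogdet(A)[1] - np.trace(A)

print(objective(np.eye(n)))        # -5, the maximum
for _ in range(3):
    M = rng.normal(size=(n, n))
    A = M @ M.T + 0.1 * np.eye(n)  # random positive definite matrix
    print(objective(A))            # always < -5
```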
Q: Bernoulli numbers identity with binomial coefficient The generating function for the Bernoulli numbers $B_k$ is given by $f(z) = \frac{z}{e^z -1}= \sum_{k=0}^{\infty} \frac{B_k}{k!} z^k$. Applying the identity
$$1 = \frac{e^z -1}{z} \cdot \sum_{k=0}^{\infty} \frac{B_k}{k!} z^k$$ yields $\sum_{n=0}^{k} \binom{k+1}{n} B_n= 0$. Recall that we also have that $f(z)= 1- \frac{z}{2}+ \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!} z^{2k}$ which implies that $$\frac{e^z -1}{z} \cdot \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!} z^{2k}= 1- \frac{e^z -1}{z}(1- \frac{z}{2}).$$ I believe the indices got me all confused, so I keep getting stuck on how to show that this above equality gives us the identity $\sum_{n=0}^{k} \binom{2k+1}{2n} B_{2n}= \frac{1}{2} (2k+1)$ after expanding both sides of the equality. How should I proceed?
A: If we don't have to use the equation
$$\frac{e^z -1}{z} \cdot \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!} z^{2k}= 1- \frac{e^z -1}{z}(1- \frac{z}{2}),$$
then we can simply use the already proven
$$\sum_{n=0}^k \binom{k+1}{n}B_n = 0$$
for $k > 0$ and insert $k = 2m$ to obtain
$$\sum_{n=0}^m \binom{2m+1}{2n}B_{2n} = - \sum_{n=0}^{m-1} \binom{2m+1}{2n+1}B_{2n+1},$$
which, since $B_1 = -\frac{1}{2}$ and $B_{2n+1} = 0$ for $n \geqslant 1$, simplifies to
$$\sum_{n=0}^m \binom{2m+1}{2n}B_{2n} = -B_1\binom{2m+1}{1} = \frac{1}{2}(2m+1).$$
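A quick check of the identity with SymPy (my own sketch; SymPy's `bernoulli` uses the convention $B_1=-\frac12$, matching the answer):

```python
from sympy import bernoulli, binomial, Rational

for m in range(1, 8):
    s = sum(binomial(2*m + 1, 2*n) * bernoulli(2*n) for n in range(m + 1))
    assert s == Rational(2*m + 1, 2)
print("identity verified for m = 1..7")
```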
Q: Help understanding Recursive algorithm question We have a function that is defined recursively by $f(0)=f_0$, $f(1)=f_1$ and $f(n+2) = f(n)+f(n+1)$ for $n\geq0$
For $n\geq0$, let $c(n)$ be the total number of additions for calculating
$f(n)$ using $f_0$ and $f_1 $ as input with $c(0) = 0$ and $c(1) = 0$. For $n \geq 2$,
express $c(n)$ using $c(n-1)$ and $c(n-2)$.
Determine whether $c(n)\geq2^{(n-2)/2}$ for $n\geq2$ and prove your answer.
I'm lost as to what to do with this question.
A: This is a badly formulated problem; the claim is probably wrong.
* If the task is just to compute $f(n)$, then one can use matrix powers as
$$
\begin{bmatrix}f(n)\\f(n-1)\end{bmatrix}
=
\begin{bmatrix}1&1\\1&0\end{bmatrix}^{n-1}
\begin{bmatrix}f(1)\\f(0)\end{bmatrix}
$$
These matrix powers can be computed by halving-and-squaring so that one needs $O(\log_2(n))$ additions and multiplications for the computation of the single value $f(n)$.
* If one wants to compute all of $f(0),f(1),f(2),...,f(n)$, then the step increasing $n$ adds one addition to the computational effort, resulting in $O(n)$, more precisely $(n-1)$, additions.
* Of course, to demonstrate the dangers of blind implementation of recursive functions, the exponential estimate results from the worst possible implementation of the computation of $f(n)$.
See the computation of the Fibonacci sequence for further details such as complexity in bit-operations.
A: Hint:
$$c(n+2)=c(n+1)+c(n)+1$$
Solve this to get $c(n)$ and prove that $c(n) \ge 2^{\frac{n-2}{2}}$ by induction.
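A small sketch (mine, not from the answer) that implements this recurrence for the naive recursive evaluation and checks the claimed bound:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def c(n):
    # additions used when f(n) is evaluated by the naive recursion
    return 0 if n < 2 else c(n - 1) + c(n - 2) + 1

print([c(n) for n in range(10)])  # [0, 0, 1, 2, 4, 7, 12, 20, 33, 54]
assert all(c(n) >= 2 ** ((n - 2) / 2) for n in range(2, 40))
```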
A: Try doing it by induction:
Base step:
$$ c(2) = 1 \ge 2^{(2-2)/2} = 1$$
Inductive step:
if $c(n) \ge 2^{(n-2)/2}$ and $c(n-1) \ge 2^{(n-3)/2}$, then $c(n+1) \ge 2^{((n+1)-2)/2}$.
You probably need to think about how $c(n)$ increases with each value of $n$.
A: In reality, only $n-1$ additions are needed. The only formula that we can use is $f(n+2) = f(n+1) + f(n)$ for $n \ge 0$. If $f_0$ and $f_1$ are given, the only value this formula allows us to calculate is
$f_2 = f_1 + f_0$ (applying the formula with $n = 0$).
Now with $f_2$ available as well, the only value the formula allows us to calculate is
$f_3 = f_2 + f_1$ (applying the formula with $n = 1$),
and so on.
I'd express the number of additions as
$c(n) = c(n-1) + 1$,
or, if you insist on using $c(n-2)$ in the formula,
$c(n) = c(n-1) + 1 + 0 \cdot c(n-2)$.
Q: A Horrible looking limit I have the following limit question:
$$\lim_{x \rightarrow 1 }\frac {({\rm log} (1+x)-{\rm log}\space 2)(3\times4^{x-1}-3x)}{[(7+x)^{1/3}-(1+3x)^{1/2}]{\rm sin}\space \pi x}$$
This has the form $\frac {0}{0}$ and hence I can apply L'Hospital's Rule. But this is too unwieldy, and it could go terribly wrong that way. Is there any clever way to do this?
I found it in a MCQ test and it has the following four alternatives:
A. $\frac{9}{\pi}{\rm log}\space \frac{4}{e}$
B. 1
C. $\frac{3}{\pi}{\rm log} \space \frac{2}{e}$
D. $\frac{1}{\pi}$.
A: You can write $\frac {({\rm log} (1+x)-{\rm log}\space 2)(3\times4^{x-1}-3x)}{[(7+x)^{1/3}-(1+3x)^{1/2}]{\rm sin}\space \pi x}=\frac{\log(1+x)-\log(2)}{x-1}\frac{3 4^{x-1}-3x}{x-1}\frac{x-1}{(7+x)^{1/3}-(1+3x)^{1/2}}\frac{x-1}{sin(\pi x)}=\frac{f(x)-f(1)}{x-1}\frac{g(x)-g(1)}{x-1}\frac{x-1}{h(x)-h(1)}\frac{x-1}{k(x)-k(1)}$
with $f(x)=\log(1+x)$, $g(x)=3 4^{x-1}-3x$, $h(x)=(7+x)^{1/3}-(1+3x)^{1/2}$ and $k(x)=sin(\pi x)$.
So $\lim_{x\to1}\frac{f(x)-f(1)}{x-1}=f'(1)=\frac{1}{2}$, $\lim_{x\to1}\frac{g(x)-g(1)}{x-1}=g'(1)=3(\ln(4)-1)$, $\lim_{x\to1}\frac{h(x)-h(1)}{x-1}=h'(1)=\frac{1}{3}8^{-2/3}-\frac{3}{4}=-\frac{2}{3}$ and $\lim_{x\to1}\frac{k(x)-k(1)}{x-1}=k'(1)=\pi\cos(\pi)=-\pi$.
Since $h'(1)\neq 0$ and $k'(1)\neq 0$, Limit Laws give you that the limit you're looking for is $\frac{f'(1)g'(1)}{h'(1)k'(1)}$.
PS: I don't know if $\log$ stands for the logarithm in base $10$ or $e$, so I left it as it is.
A: Hint Apply L'H to both
$$\lim_{x \rightarrow 1 }\frac {{\rm log} (1+x)-{\rm log}\space 2}{{\rm sin}\space \pi x} $$
and
$$\lim_{x \rightarrow 1 }\frac {4^{x-1}-1}{(7+x)^{1/3}-(1+3x)^{1/2}} $$
A: This looks horrible, but it is not that way if you separate the factors. We can proceed in the following manner:
\begin{align}
L &= \lim_{x \to 1}\frac{(\log(1 + x) - \log 2)(3\times 4^{x - 1} - 3x)}{[(7 + x)^{1/3} - (1 + 3x)^{1/2}]\sin \pi x}\notag\\
&= 3\lim_{x \to 1}\frac{(\log(1 + x) - \log 2)(4^{x - 1} - x)}{[(7 + x)^{1/3} - (1 + 3x)^{1/2}]\sin \pi x}\notag\\
&= -3\lim_{h \to 0}\frac{(\log(2 + h) - \log 2)(4^{h} - 1 - h)}{[(8 + h)^{1/3} - (4 + 3h)^{1/2}]\sin \pi h}\notag\\
&= -3\lim_{h \to 0}\frac{(\log(2 + h) - \log 2)(4^{h} - 1 - h)}{[(8 + h)^{1/3} - (4 + 3h)^{1/2}]\pi h}\cdot\frac{\pi h}{\sin \pi h}\notag\\
&= -\frac{3}{\pi}\lim_{h \to 0}\frac{\log(1 + (h/2))(4^{h} - 1 - h)}{[(8 + h)^{1/3} - (4 + 3h)^{1/2}]h}\notag\\
&= -\frac{3}{\pi}\lim_{h \to 0}\frac{\log(1 + (h/2))}{h/2}\cdot\frac{(4^{h} - 1 - h)}{2[(8 + h)^{1/3} - (4 + 3h)^{1/2}]}\notag\\
&= -\frac{3}{2\pi}\lim_{h \to 0}1\cdot\frac{(4^{h} - 1 - h)}{(8 + h)^{1/3} - (4 + 3h)^{1/2}}\notag\\
&= -\frac{3}{2\pi}\lim_{h \to 0}\frac{4^{h} - 1 - h}{h}\cdot\frac{h}{(8 + h)^{1/3} - (4 + 3h)^{1/2}}\notag\\
&= -\frac{3}{2\pi}\lim_{h \to 0}\left(\frac{4^{h} - 1}{h} - 1\right)\cdot\frac{h}{(8 + h)^{1/3} - (4 + 3h)^{1/2}}\notag\\
&= -\frac{3}{2\pi}(\log 4 - 1)\lim_{h \to 0}\frac{h}{(8 + h)^{1/3} - (4 + 3h)^{1/2}}\notag\\
&= -\frac{3}{2\pi}(\log 4 - 1)\lim_{h \to 0}\frac{h}{[(8 + h)^{1/3} - 8^{1/3}] - [(4 + 3h)^{1/2} - 4^{1/2}]}\notag\\
&= -\frac{3}{2\pi}(\log 4 - 1)\lim_{h \to 0}\dfrac{1}{\dfrac{(8 + h)^{1/3} - 8^{1/3}}{h} - 3\cdot \dfrac{(4 + 3h)^{1/2} - 4^{1/2}}{3h}}\notag\\
&= -\frac{3}{2\pi}(\log 4 - 1)\cdot\dfrac{1}{\lim\limits_{t \to 8}\dfrac{t^{1/3} - 8^{1/3}}{t - 8} - 3\cdot \lim\limits_{z \to 4}\dfrac{z^{1/2} - 4^{1/2}}{z - 4}}\notag\\
&= -\frac{3}{2\pi}(\log 4 - 1)\cdot\dfrac{1}{\dfrac{1}{3}8^{-2/3} - 3\cdot \dfrac{1}{2}4^{-1/2}}\notag\\
&= \frac{9(\log 4 - 1)}{4\pi}\notag
\end{align}
Here we have used the following standard limits $$\lim_{x \to 0}\frac{\sin x}{x} = 1,\,\lim_{x \to 0}\frac{a^{x} - 1}{x} = \log a,\,\lim_{x \to 0}\frac{\log(1 + x)}{x} = 1,\,\lim_{x \to a}\frac{x^{n} - a^{n}}{x - a} = na^{n - 1}$$
A: Put $x=y+1$ and take the limit $y\to 0$, so that we can use some standard limits and simplify the expression. The given limit will be:
\begin{align*}
& \lim_{y\to 0} - \frac{\displaystyle \log\left(1+\frac{y}{2}\right)\Big(3\, \left(4^y-1-y\right)\Big)}{\left(\left(8+y\right)^{1/3}-\left(4+3\, y\right)^{1/2}\right)\, \sin\left(\pi\, y\right)}\tag 1\\
\end{align*}
Divide the numerator and denominator by $y^2$ and we will take four separate limits:
\begin{align*}
\lim_{y\to 0} \frac{\displaystyle \log\left(1+\frac{y}{2}\right)}{\displaystyle \frac{y}{2}\cdot 2} &= \frac{1}{2}\\\\
\lim_{y\to 0} \frac{4^y-1}{y}-1 &= \log{4}-1\\\\
\lim_{y\to 0} \frac{\sin\left(\pi\, y\right)}{\pi\, y}\cdot \pi &= \pi
\end{align*}
and for the remaining one we can use L'Hôpital's rule:
\begin{align*}
\lim_{y\to 0} \frac{\left(8+y\right)^{1/3}-\left(4+3\, y\right)^{1/2}}{y} &= \lim_{y\to 0} \frac{1}{3}\left(8+y\right)^{-2/3}-\frac{3}{2}\left(4+3\, y\right)^{-1/2}\\
&= \frac{1}{3\cdot 4}-\frac{3}{4} = -\frac{2}{3}\\
\end{align*}
Combining all these in $(1)$, we see that
\begin{align*}
\lim_{x \to 1 }\frac {({\log} (1+x)-{\log}\space 2)(3\times4^{x-1}-3x)}{[(7+x)^{1/3}-(1+3x)^{1/2}]{\sin}\space \pi x} = \frac{9}{4\, \pi}\log\left(\frac{4}{e}\right)\approx 0.276662956773403
\end{align*}
which means none of the options are correct.
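A numerical cross-check of this value (my own sketch; it evaluates the original expression just off $x=1$ and compares with $\frac{9}{4\pi}\log\frac{4}{e}$):

```python
import numpy as np

def F(x):
    num = (np.log(1 + x) - np.log(2)) * (3 * 4**(x - 1) - 3 * x)
    den = ((7 + x) ** (1 / 3) - (1 + 3 * x) ** 0.5) * np.sin(np.pi * x)
    return num / den

print(F(1 - 1e-5), F(1 + 1e-5))            # both ~0.27666
print(9 / (4 * np.pi) * np.log(4 / np.e))  # 0.276662956...
```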
Q: Rigorous proof of this assertion about Pascal's Triangle I have noticed that it seems that there are no prime numbers in Pascal's Triangle that are not directly adjacent to the number 1. Is there a rigorous proof for this assertion?
A: It is quite easy to show that for $n\ge 4$ and $2\le r \le n-2$ we have $$\binom nr\gt n,$$
and that the prime factors of $\binom nr$ are all less than or equal to $n$. So $\binom nr$ cannot be prime.
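A brute-force confirmation for small $n$ (my own sketch):

```python
from math import comb
from sympy import isprime

for n in range(4, 200):
    assert not any(isprime(comb(n, r)) for r in range(2, n - 1))
print("no prime entries away from the edges for n < 200")
```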
Q: where does the modulus go when cancelling $e$ and $\ln$ in this problem? So I did this problem today:
Show that $\frac{dy}{dx} = yx^2$ can be written as $y = Ae^{\frac{x^3}{3}}$
my solution is shown below:
$$
\frac{dy}{dx} = yx^2
$$
$$
\frac{1}{y} dy = x^2 dx
$$
$$
\int\frac{1}{y}\ dy = \int x^2\ dx
$$
$$
\ln |y| = \frac{x^3}{3} + C
$$
$$
e^{\ln |y|} = e^{\frac{x^3}{3} + C}
$$
$$
y = e^{\frac{x^3}{3} + C}
$$
$$
y = Ae^{\frac{x^3}{3}}
$$
But I don't understand what happens to the modulus around the $y$. Why isn't it $|y| = Ae^{\frac{x^3}{3}}$? What happens if the modulus is in fact left there?
A: Because $e^x> 0\,\forall x\in\mathbb{R}$, the right-hand side is positive; so $|y|=e^{C}e^{\frac{x^3}{3}}$, and dropping the modulus amounts to letting the constant absorb the sign of $y$, i.e. allowing $A=\pm e^C$ to be negative.
A: You have to remember that $e^C$, respectively $A$, is always positive. So the whole expression on the right-hand side is positive: $|y|=y$.
Greetings,
calculus.
Q: Given a diffeomorphism between two surfaces, is there an expression for the pullback of the covariant derivative of a vector field? Let $A$ and $B$ be two surfaces (smooth enough) in an affine space $M$ with metric $g$. Let $g^A$, $g^B$ be the metric tensors on the two surfaces induced by $g$, and $\nabla^A$, $\nabla^B$ the Levi-Civita connections on the two surfaces. Let $f:A\rightarrow B$ be a diffeomorphism.
I'm wondering if, given a vector field $X^B$ on $B$, there is an expression for $f^* (\nabla^{B} X^B)$. I think it should be something including $\nabla^A(f^*X^B)$, but I can't find it.
Thanks for your help.
Q: Prove a statement for the infinite matrix We are given an infinite two-dimensional matrix $\{a_{i,j}\}_{i,j=1}^\infty$.
We know that the matrix contains only natural values and that each number appears in the matrix exactly 8 times.
The task is to prove that $\exists m,n \in \mathbb{N}\space:\space a_{m,n} > m\cdot n $.
We denote the set of $i$ indices by $I$, the set of $j$ indices by $J$, and the set of matrix values by $A$.
Naive exploration of the problem led me to only one conclusion: if $A \sim \mathbb{N}$, then just using the ascending order of elements gives a counterexample. But $A \nsim \mathbb{N}$, since we have 8 entries of each number, so I'm stuck here.
I don't see any way to prove it; maybe I don't know some required theorems. An idea also crossed my mind that predicate analysis may be the right tool for this problem, but I'm not experienced with it.
So I would appreciate some suggestions and pointers on it.
A: Assume on the contrary that $a_{i,j}\le ij$ for all $i,j$. We shall derive a contradiction from this.
Let $N\in\mathbb N$. For fixed $j\ge1$, we must have $a_{i,j}\le N$ for the $\lfloor \frac Nj\rfloor $ entries $a_{i,j}$ with $1\le i\le\lfloor \frac Nj\rfloor$, i.e. $a_{1,j},a_{2,j},\ldots,a_{\lfloor \frac Nj\rfloor,j}$.
So if $f(N)$ denotes the number of entries $\le N$ in the matrix, we have just seen that
$$\tag1 f(N)\ge \sum_{j=1}^N\left\lfloor \frac Nj\right\rfloor.$$
On the other hand, by the given condition, we can compute $f(N)$ exactly:
$$\tag2 f(N)=\sum_{j=1}^N 8 = 8N.$$
It is well-known that the harmonic series diverges, hence for some $M\in \mathbb N$ we have
$$\tag3 \sum_{j=1}^M \frac1j>8.$$
Now let $N=M!$, which is clearly $\ge M$. Then because for all $j\le M$ we have $j\mid N$ and hence can simplify $\left\lfloor\frac{N}{j}\right\rfloor = \frac{N}{j}$, we conclude
$$f(N)\ge \sum_{j=1}^N\left\lfloor \frac Nj\right\rfloor\ge\sum_{j=1}^M\left\lfloor \frac Nj\right\rfloor=\sum_{j=1}^M \frac Nj=N\sum_{j=1}^M\frac1j>8N=f(N),$$
contradiction.
Remark: The above proof is very wasteful in several places. The first $M$ for which $(3)$ holds is $M=1674$. This makes $N=M!\approx 3.7\cdot 10^{4671}$. On the other hand, by computer check the smallest $N$ for which the inequality $(1)$ contradicts $(2)$ is already $N=2550$. It is not a coincidence that the "best" $N$ is not much larger than the value $1674$ found for $M$. In fact, we can do the last step a bit less wastefully, like this:
There exists $M\in\mathbb N$ with
$$\tag{3'} \sum_{j=1}^M \frac1j>\frac{17}2.$$
Let $N=2M$. Then
$$f(N)\ge \sum_{j=1}^N\left\lfloor \frac Nj\right\rfloor\ge\sum_{j=1}^M\left\lfloor \frac Nj\right\rfloor\ge\sum_{j=1}^M\left( \frac Nj-1\right)>\frac{17}2N-M=8N=f(N),$$
contradiction.
(Exact numerical checking will reveal that this leads to $N=5518$, so only about twice as big as the "best" possible $N$; of course, for the sake of the proof these considerations are all moot, as it doesn't matter whether the $N$ we find is close to the optimum - the problem statement asks us only to show the existence of some such $N$.)
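The remark's computer check is easy to reproduce (my own sketch):

```python
def lower_bound(N):  # right-hand side of (1): entries forced to be <= N
    return sum(N // j for j in range(1, N + 1))

N = 1
while lower_bound(N) <= 8 * N:  # compare against (2): f(N) = 8N
    N += 1
print(N)  # 2550, as stated in the remark
```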
Q: How would I evaluate this double integral using change of variable? This question has me stumped and I really don't know what to do..
$B$ is the region in the first quadrant bounded by the curves
$ xy = 1$ , $xy = 3$ , $x^2-y^2 = 1$ and $x^2-y^2 = 4$
How would I evaluate
$\iint_B (x^2 + y^2) dxdy$
using the change of variables
$u = x^2 - y^2$ , $v = xy$
The answer is simply 3. In most questions the substitution would either be linear, or it would be given as $x(u,v)$ instead of $u(x,y)$ as in this question; when I tried solving for $x(u,v)$, I got extremely messy equations.
If someone could guide me that would be greatly appreciated. Thank you.
A: You don't need to solve for $x,y$ in terms of $u,v$. I assume you're doing this in order to calculate the Jacobian determinant:
$$\left|\frac{\partial(x,y)}{\partial(u,v)}\right|,$$
but you don't need to since
$$\left|\frac{\partial(x,y)}{\partial(u,v)}\right| = \left|\frac{\partial(u,v)}{\partial(x,y)}\right|^{-1}.$$
Here, the right side is the reciprocal of (the absolute value of) the determinant of the $2 \times 2$ matrix obtained by differentiating $u,v$ with respect to $x,y$. When you plug this into the change of variables formula it simplifies nicely with the integrand $x^2+y^2$.
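For concreteness, here is the computation the answer alludes to, done with SymPy (my own sketch):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
u = x**2 - y**2
v = x * y

J = sp.Matrix([u, v]).jacobian([x, y]).det()
print(sp.simplify(J))  # 2*x**2 + 2*y**2

# integrand * |d(x,y)/d(u,v)| = (x^2+y^2) / (2(x^2+y^2)) = 1/2, so the
# integral is (1/2) * area of [1,4] x [1,3] in the (u,v)-plane = 3
print(sp.Rational(1, 2) * (4 - 1) * (3 - 1))  # 3
```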
Q: Draw a circle using Bézier curves and the coordinates of control points I want to draw a circle of radius R centered at the origin using Bézier curve segments. I have to draw the circle using four Bézier curve segments - one for each quadrant, as shown in the following figure.
How can I find the coordinates of the four control points for each of the Bézier curve segments? The basis matrix for the Bézier curve is as follows:
$
\begin{bmatrix}
-1 & 3 & -3 & 1 \\
3 & -6 & 3 & 0 \\
-3 & 3 & 0 & 0 \\
1 & 0 & 0 & 0
\end{bmatrix}
$
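One common construction (a sketch of mine, not from the post): place the inner control points a distance $k=\frac{4}{3}(\sqrt2-1)R\approx 0.5523\,R$ along the tangents. A cubic Bézier cannot trace a circle exactly, but with this $k$ the radial error stays below about $0.03\%$:

```python
import numpy as np

R = 1.0
k = 4.0 / 3.0 * (np.sqrt(2.0) - 1.0) * R  # ~0.5523*R

# control points for the first-quadrant segment, from (R,0) to (0,R);
# the other three quadrants follow by flipping coordinate signs
P = np.array([(R, 0.0), (R, k), (k, R), (0.0, R)])

def bezier(t):
    w = np.array([(1 - t)**3, 3 * t * (1 - t)**2, 3 * t**2 * (1 - t), t**3])
    return w @ P

for t in np.linspace(0.0, 1.0, 5):
    pt = bezier(t)
    print(pt, abs(np.hypot(*pt) - R))  # radial error stays tiny
```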
Q: Bounding the number of nodes of a trigonometric polynomial using Bézout's inequality The zero set of a trigonometric polynomial $P$ in 3 variables $x,y,z$ is a two-dimensional manifold which we view as being inside the torus $\mathbb T^3$. For example, here is the zero set of $P(x,y,z) = \cos(2\pi x) + 2\sin(2\pi 2y) - 3\cos(2\pi z) + \cos(2\pi y)$, which I plotted in Mathematica:
My goal is to prove the following:
Proposition. The number of connected components in the intersection of the above zero set and any plane is bounded by $O(D^2)$, where $D = \deg P$. (*)
For example, in the above picture we can clearly see the intersection of the zero set with a plane (the transparent plane with the black frame "near us"). This intersection seems to have 4 components but they are actually 1 component because we're in the torus.
I am not familiar with algebraic geometry and Bézout's inequality, so I ran into some problems trying to prove this.
First question: I tried to follow an approach where the trigonometric polynomial $P$ is converted to a regular polynomial in 6 variables by introducing $c_x = \cos(2\pi x)$, $s_x = \sin(2\pi x)$ and similarly with $y,z$, with the conditions $c_x^2 + s_x^2 = 1$ and similarly with $y,z$. However, it is unclear to me now how to intersect it with a plane, which takes the form $Ax + By + Cz = D$ in the "old" variables and not something in the "new" variables. Maybe if $A,B,C$ are integers then this can be converted to a trigonometric polynomial by taking a cosine or something, but they might not be integers.
Second question: Okay, forget the intersection with a plane for a minute. Can Bézout's inequality be used to bound the number of components of the entire zero set by, let's say, $O(D^3)$? The formulation I've seen of Bézout's inequality counts the number of points in a finite set of points, but here there are 4 equations on 6 variables and the set is 2-dimensional, and I want to count components, not points.
(*) I am generally interested in $d \ge 3$ variables and then it should be $O(D\,^{d-1})$, but for the sake of the question and the ability to draw pictures, it suffices to assume $d = 3$.
Edit - I think I know how to answer the second question:
In the interior of each connected component of the zero set of $P$ there is a minimum or maximum point of $P$ and this must be a critical point, so the number of components is bounded by the number of critical points.
Counting the critical points: We get 3 equations from $c_x^2 + s_x^2 = 1$ (and similarly for $y,z$), and 3 equations from $\frac {\partial P}{\partial x} = 0$ (and similarly for $y,z$) which are polynomials of degree less than $D$ in $c_x, s_x, \ldots$.
These are 6 equations in 6 variables of which 3 are degree 2 and 3 are degree $O(D)$, and critical points are always isolated, so their number may be bounded using Bézout's inequality by $O(D^3)$.
Could you please tell me if my answer is correct or if I made a mistake in using Bézout's inequality? (And how to answer the first question?)
A: Unless I made a mistake, I answered my second question in the edit I made yesterday, and I since discovered that the embedding of the torus in the 6-dimensional space is related to something called the Clifford torus.
About the first question, I "solved" it by changing my approach to my bigger problem (not described here) in which this is a sub-problem:
Instead of counting the intersections between the zero set and a 2-dimensional plane before making the "Clifford" transformation to 6 dimensions, I first transform to 6 dimensions and then intersect with a 5-dimensional hyperplane. This is much better suited for using Bézout's inequality, because it yields a simple degree-1 algebraic equation, $(c_x, s_x, c_y, s_y, c_z, s_z) \cdot v = 0$ where $v$ is some vector perpendicular to the hyperplane.
|
v1
|
2023-04-23T09:04:30.101Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
13babba6f9a581ce38519ec106bb4ae47d3736a2
|
{
"language": "en",
"length": 128,
"provenance": "stackexchange_00001.jsonl.gz:864832",
"question_score": "1",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764557"
}
|
redpajama/stackexchange
|
Q: Finding the limit $\lim_{t \to \infty} \frac{t-t\sqrt{t}}{2t^{3/2}+3t-5} $ Can someone help me solve this limit?
$$\lim_{t \to \infty} \frac{t-t\sqrt{t}}{2t^{3/2}+3t-5} $$
A: Hint: $$t\sqrt t = t^{3/2}$$
Then divide numerator and denominator by $t^{3/2}$ and evaluate to get a limit of $-1/2$.
A: \begin{align}
\lim_{t \to \infty} \frac{t-t\sqrt{t}}{2t^{3/2}+3t-5}&=\lim_{t \to \infty} \frac{-t^{3/2}+t}{2t^{3/2}+3t-5}\\
&=\lim_{t \to \infty} \frac{-\frac{t^{3/2}}{t^{3/2}}+\frac{t}{t^{3/2}}}{2\frac{t^{3/2}}{t^{3/2}}+3\frac{t}{t^{3/2}}-\frac{5}{t^{3/2}}}\\
&=\frac{-1+0}{2+0-0}\\
&=-\frac{1}{2}
\end{align}
A: General rule with polynomials or terms with radical roots: divide by the term with the highest order. If there's one term of this order, the limit is its coefficient, if two - the ratio of their coefficients. All other terms tend to $0$.
A: Thanks @amWHy
$$\lim_{t \to \infty} \frac{t-t\sqrt{t}}{2t^{3/2}+3t-5}
=
\lim_{t \to \infty} \frac{t-t^{3/2}}{2t^{3/2}+3t-5}
=
\lim_{t \to \infty} \frac{\frac{t}{t^{3/2}}-\frac{t^{3/2}}{t^{3/2}}}{\frac{2t^{3/2}}{t^{3/2}}+\frac{3t}{t^{3/2}}-\frac{5}{t^{3/2}}}
=
\frac{0-1}{2+0-0}
=
-\frac{1}{2}$$
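A quick numeric sanity check of the result (illustration only):

```python
f = lambda t: (t - t**1.5) / (2 * t**1.5 + 3 * t - 5)

for t in (1e2, 1e4, 1e8):
    print(t, f(t))   # approaches -0.5 as t grows
```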
|
v1
|
2023-04-23T09:04:30.102Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
c8cefe7a990fcbadd4a659426592fb6a64b79042
|
{
"language": "en",
"length": 831,
"provenance": "stackexchange_00001.jsonl.gz:864833",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764559"
}
|
redpajama/stackexchange
|
Q: Measuring distance on the Poincare disk I've seen several different ways to measure distance on the Poincare disk i.e Riemann metric/manifold (which I don't understand). However the method we're taught is using $\tanh^{-1}$ and complex numbers.
$$d(z_1,z_2)=2\tanh^{-1}\left|\frac{z_1-z_2}{1-\bar{z_1}z_2}\right|$$
Where does this equation come from? I've read that it's from integrating with a distance metric $$d\rho=\frac{2dr}{1-r^2}$$ with $r$ the euclidean distance from the centre of the disk and $\rho$ the hyperbolic distance. So that $$\rho=\int_0^r\frac{2du}{1-u^2}=2\tanh^{-1}r$$ Can anyone explain relatively simply why this comes about (i.e. why we use this distance metric, and whether there are any other suitable alternatives), and how complex numbers come into the equation?
A: There are many possible ways to explain the why, but I'd like to try one.
Start with some basic definitions for the Poincaré disk model. A point of the hyperbolic plane is a point inside the unit disk. A hyperbolic line is a circle arc which is perpendicular to the unit circle. From that one can eventually deduce that the first four Euclidean axioms as well as the hyperbolic version of the axiom of parallels hold. So you have a model of hyperbolic geometry.
What you need to study next is the set of isometries of this model. An isometry has to be a bijection between hyperbolic points, so it maps the interior of the unit disk onto itself. Furthermore, an isometry will preserve angles, and since the Poincaré models are conformal, that means Euclidean angles will be preserved.
The set of angle-preserving (conformal) transformations of the plane (or rather the Riemann sphere, but don't worry about that distinction just now) is the set of all Möbius transformations. And one of the most natural ways to represent Möbius transformations is by interpreting points in the plane as complex numbers. (Actually I consider the projective $\mathbb{CP}^1$ view even more natural, but that's again beside the point.) This is where complex numbers enter the scene. So taken together with mapping the unit disk onto itself, this means that you are after the set of all Möbius transformations which fix the unit disk.
Now one can write down a formula for these, and then do distance comparisons. Suppose you had a hyperbolic line segment from some point $P$ to some nearby point $Q$. You could move that point $P$ into the center, using an isometry which would preserve hyperbolic distances but change Euclidean ones. Applying that transformation to $Q$ would move it close to the center, and then you could measure distances close to the center. You could move your point $Q$ infinitesimally close to $P$, and do the same thing again. What you would get is the distance metric you mentioned, up to a scale factor.
A finite distance is simply an integral over infinitely many infinitesimal distances, so you'd have to integrate from the start point ($z_1$ in your question) to the end point ($z_2$) along a geodesic path. Integrating along geodesics might be tough, though. So you can avoid that by first applying a transformation which moves one of them to the center of the unit disk.
$$
f(z)=\frac{z-z_1}{-\overline{z_1}z+1}\qquad
d\bigl(z_1,z_2\bigr)=d\bigl(0,f(z_2)\bigr)
=2\tanh^{-1}\bigl\lvert f(z_2)\bigr\rvert
=2\tanh^{-1}\bigl\lvert\frac{z_2-z_1}{1-\overline{z_1}z_2}\bigr\rvert
$$
This is the formula you quoted, except for a sign change inside the absolute value which is of no relevance. One can show that all Möbius transformations which fix the unit disk have the form
$$z\mapsto\frac{az+b}{\bar bz+\bar a}$$
so that when you know the numerator for $f$ (which should be zero for $z=z_1$) then you already know the denominator as well. You might choose $a\neq 1$, but the result would simply be the same as above followed by a rotation around the origin.
So now you'd be able to compare distances, like “that hyperbolic segment is twice as long as that other one”. The constant term in that formula is a matter of convention. It is usually chosen in such a way that the Gaussian curvature of the plane is not only constant but actually $-1$. See also this question for details on that convention.
With respect to your question about alternatives, I'd like to point out that cross ratios can be a useful tool there. Take two points in the disk and connect them by a geodesic. That geodesic will intersect the unit circle in two ideal points. Treat all four points as complex numbers and compute their cross ratio. It will not change under Möbius transformations, so it can be used to define distances. Since cross ratios are multiplicative and lengths are additive, you need to take the logarithm. Then choose the correct constant as above. The question I referenced there uses this cross ratio approach, so it may be useful here as well.
Note that my above post so far deliberately ignored the fact that Anti-Möbius transformations which fix the unit disk are hyperbolic isometries as well. This is however irrelevant to the question at hand. You should only keep it in mind when you build up your intuition about the set of all hyperbolic isometries.
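To complement the answer, here is a small numeric check (a sketch, not part of the original argument) that the quoted distance is indeed invariant under a disk automorphism of the form used above:

```python
import math, random

def d(z1, z2):
    return 2 * math.atanh(abs((z1 - z2) / (1 - z1.conjugate() * z2)))

random.seed(1)
def rand_disk():
    while True:
        z = complex(random.uniform(-1, 1), random.uniform(-1, 1))
        if abs(z) < 0.9:
            return z

z1, z2, w = rand_disk(), rand_disk(), rand_disk()
f = lambda z: (z - w) / (1 - w.conjugate() * z)   # Moebius map fixing the unit disk
print(d(z1, z2), d(f(z1), f(z2)))                  # equal up to rounding error
```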
|
v1
|
2023-04-23T09:04:30.102Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
b2e6514a69d2a87475d6ad313527b81297179f34
|
{
"language": "en",
"length": 162,
"provenance": "stackexchange_00001.jsonl.gz:864834",
"question_score": "0",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764560"
}
|
redpajama/stackexchange
|
Q: Finding a Regular Expression with a Specific Length from a language Given this language, I was supposed to find the Regular Expression that represented it. Having given up and getting the answer later (below) I couldn't understand the regular expression.
Given this answer:
Why is {1,2} and {2} part of the regular expression?
EDIT: I don't know if "Understanding Regular Expressions" is a more appropriate title. My issue is that I don't understand the solution for this given problem.
A: It seems that $L\{1,2\}$ in this syntax means that the expression produces words of the form $w$ or $w\cdot v$ for $w,v\in L$. Hence the regular expression you presented produces the empty word ($\epsilon$), all words of length $1$ or $2$ (with $(0\mid 1)\{1,2\}$), and words of length $\geq 3$ with the third character being $0$. Another way to put it: a regular expression that produces the same language with less syntactic sugar is
$$\epsilon + (0+1) + (0+1)(0+1) + (0+1)(0+1)0(0+1)^*$$
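If it helps, the desugared expression can be checked mechanically; a small Python sketch (the test words are mine):

```python
import re

# epsilon, all words of length 1 or 2, and words of length >= 3 with third character 0
pattern = re.compile(r"([01]{1,2}|[01]{2}0[01]*)?")

for w in ["", "1", "10", "110", "1101", "111"]:
    print(repr(w), bool(pattern.fullmatch(w)))
# everything above matches except '111' (length 3 with third character 1)
```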
|
v1
|
2023-04-23T09:04:30.102Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
5a4eb0e7ef4b936b132460269f9f0901faa9a8d7
|
{
"language": "en",
"length": 294,
"provenance": "stackexchange_00001.jsonl.gz:864835",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764563"
}
|
redpajama/stackexchange
|
Q: $p$ divides the sum of the quadratic residues $\bmod p$ Could you help me at the following exercise?
Show that, if $p>3$ is a prime,then $p$ divides the sum of the quadratic residues $\bmod p$.
A: Let $S$ be the sum of the quadratic residues. As $k$ runs from $1$ to $p-1$, the number $k^2$ runs (twice) over the quadratic residues modulo $p$. Thus
$$2S\equiv 1^2+2^2+\cdots +(p-1)^2 \pmod{p}.$$
By using the formula for the sum of the first $n$ consecutive squares, we find that
$$2S\equiv \frac{(p-1)(p)(2p-1)}{6}\pmod{p},$$
and therefore
$$12S\equiv (p-1)(p)(2p-1)\pmod{p}.$$
Since $p$ divides $(p-1)(p)(2p-1)$, we get $p \mid 12S$; and since $p\gt 3$, $p$ is coprime to $12$, so it follows that $p$ divides $S$.
Another way: Let $g$ be a primitive root of $p$. Then the quadratic residues are congruent, modulo $p$, to $g^2,g^4,\dots, g^{p-1}$. Thus if $S$ is the sum of the quadratic residues, we have
$$S\equiv g^2+g^4+\cdots +g^{p-1}\pmod{p}.$$
Multiply both sides by $g^2-1$. We get
$$(g^2-1)S\equiv g^2(g^{p-1}-1)\pmod{p}.$$
By Fermat's Theorem, the right-hand side is congruent to $0$ modulo $p$. And if $p\gt 3$, then $g^2\not\equiv 1\pmod{p}$, and therefore $S\equiv 0\pmod{p}$.
A: Let the quadratic residues mod $p$ be $a_1,a_2,a_3,\cdots$ and the sum is $S$.
Clearly(prove) $p|S\iff p|1^2+2^2+\cdots+(p-1)^2$
The sum is $$\frac{(p-1)p(2p-1)}{6}$$
and we are done.
A: No sum formulas are needed for this one. The non-zero quadratic residues form a subgroup $Q_p$ of the multiplicative group $\Bbb{Z}_p^*$. Because $p>3$ the residue class of $4$ is a quadratic residue. Because $\overline{4}\in Q_p$, we have, by basic properties of groups, that $x$ will range over $Q_p$ as $4x$ does. So in the field $\Bbb{Z}_p$ we have
$$
S=\sum_{x\in Q_p}x=\sum_{x\in Q_p}4x=4S.
$$
This gives the equation (still in the field $\Bbb{Z}_p$)
$$
0=4S-S=3S\quad\Longrightarrow\quad S=0,
$$
as $3$ is also a unit in the field.
Mapping everything back to integers gives then $S\equiv 0\pmod p$ as requested.
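A quick computational sanity check of the statement (illustration only):

```python
def qr_sum_mod(p):
    residues = {k * k % p for k in range(1, p)}   # nonzero quadratic residues mod p
    return sum(residues) % p

for p in [5, 7, 11, 13, 97, 101]:
    print(p, qr_sum_mod(p))   # prints 0 for every prime p > 3
```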
|
v1
|
2023-04-23T09:04:30.103Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
942e4b28b6de7f7695c7820018a80f1bd6a32b1a
|
{
"language": "en",
"length": 468,
"provenance": "stackexchange_00001.jsonl.gz:864836",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764566"
}
|
redpajama/stackexchange
|
Q: Spanning trees in planar dual graph The number of spanning trees in a planar graph $G$ is equal to the number of spanning trees in the dual graph $G^*$.
I would like to prove this. I know it's true, but I would like to show that for every spanning tree in $G$ there exists one and only one co-spanning tree in $G^*$ corresponding to the original spanning tree in $G$.
I made a drawing of the cube graph that illustrates what I'm trying to prove.
Here I have added a vertex inside every face of $G$ to create the dual graph $G^*$.
You can clearly see the idea: if you have a spanning tree in $G$, the co-spanning tree consists of the edges not colored in $G$. But I would like to prove that this always holds.
I have tried a proof by bijection, but it did not work out, and I would like someone to explain to me how I would go about this.
A: Let $T$ be a spanning tree in a connected plane graph $G$. Let $G^*$ be the dual graph corresponding to an embedding of $G$ in the plane, and let $T^*$ be the subgraph of $G^*$ consisting of the edges of $G^*$ that correspond to the edges of $G$ not in $T$. We want to prove $T^*$ is a spanning tree of $G^*$.
First note that $T$ has $|V(G)|-1$ edges, so $T^*$ has $|E(G)|-(|V(G)|-1)$ edges. By Euler's formula, there are exactly $2 - |V(G)| + |E(G)|$ faces in the embedding of $G$, so $|V(G^*)| = 2 - |V(G)| + |E(G)|$, and we have that $|E(T^*)| = |V(G^*)|-1$. It remains to show that $T^*$ is acyclic, since an acyclic subgraph of a graph with 1 fewer edge than the number of vertices in the graph is a spanning tree.
Now suppose $T^*$ has a cycle $C$. Note that $C$ separates the embedding of $G^*$ into two connected pieces, each of which contains at least one face of the embedding. But then the faces of $G^*$ correspond to vertices of $G$, and the edges of $C$ must correspond to an edge cut of $G$. But $T$ does not contain any of the edges in that edge cut, so $T$ cannot be a spanning tree, a contradiction.
Thus $T^*$ is acyclic and $|E(T^*)| = |V(G^*)|-1$, and $T^*$ is a spanning tree in $G^*$.
You might want to look into matroid theory. These notions are quite a bit easier to understand in that light. A spanning tree of a graph is a basis in the cycle matroid $M$ for the graph. The complement of a basis is a basis in the dual matroid. In other words, the complement of a spanning tree in a planar graph is a spanning tree in the dual graph.
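As a concrete illustration of the theorem, here is a short sketch using Kirchhoff's matrix-tree theorem on the cube graph and its planar dual, the octahedron (both counts agree):

```python
import numpy as np

def spanning_trees(adj):
    # Matrix-tree theorem: any cofactor of the graph Laplacian counts spanning trees.
    A = np.array(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return round(np.linalg.det(L[1:, 1:]))

# Cube graph: vertices are 3-bit strings, edges join strings at Hamming distance 1.
cube = [[1 if bin(i ^ j).count("1") == 1 else 0 for j in range(8)] for i in range(8)]
# Octahedron: 6 vertices, each adjacent to all others except its antipode (i ^ 1).
octa = [[1 if (i != j and j != i ^ 1) else 0 for j in range(6)] for i in range(6)]

print(spanning_trees(cube), spanning_trees(octa))   # 384 384
```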
|
v1
|
2023-04-23T09:04:30.103Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
088f9a379f3d0814f6b18b1c41bb1157ca263b20
|
{
"language": "en",
"length": 137,
"provenance": "stackexchange_00001.jsonl.gz:864837",
"question_score": "-2",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764570"
}
|
redpajama/stackexchange
|
Q: Prove that $r^n/n!$ converges? I need to show that $r^n/n!$ converges for $n\ge r$, which is basically showing $\lim_{n\to \infty}\frac{r^n}{n!}=0$. Yotas Trejos told me I need to do this:
Let $N$ be an integer number such that $N> r$. Then for $n>N$ the following holds:
$$\displaystyle\frac{r^n}{n!}=\displaystyle\frac{r}{1}\cdots\displaystyle\frac{r}{N-1}\displaystyle\frac{r}{N}\cdots\displaystyle\frac{r}{n}<\displaystyle\frac{r}{1}\cdots\displaystyle\frac{r}{N-1}(\displaystyle\frac{r}{N})^{n-N} $$
you have that $\displaystyle\frac{r}{1}\cdots\displaystyle\frac{r}{N-1}$ is a constant and $\displaystyle\frac{r}{N}<1$.
But I'm having trouble concluding it. Is it just because it is a geometric sequence then after that?
A: Notice that you've bounded $\frac{r^n}{n!}$ by $\frac{r}{1}\cdots\frac{r}{N-1}\left(\frac{r}{N}\right)^{n-N}$, and that $N$ depends only on $r$, which is fixed in this problem. Thus you have that:
$$\frac{r^n}{n!}\leq C \left(\frac{r}{N}\right)^n$$
where $C$ is a constant that depends only on $r$. Now, $0<r/N<1$, so really you're trying to show that $R:=r/N$ satisfies $R^n\rightarrow 0$. But this is hopefully something you know how to do already.
|
v1
|
2023-04-23T09:04:30.103Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
df4eb97541624e416ace32ca8929862d5ba95fb2
|
{
"language": "en",
"length": 150,
"provenance": "stackexchange_00001.jsonl.gz:864838",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764577"
}
|
redpajama/stackexchange
|
Q: Cohomology after Dehn surgery For $f:S\to M$ a knot in a 3-manifold, we can construct a 3-manifold $N$ by a $0/1$-type Dehn surgery along $f$:
*
*First remove from $M$ a solid torus which is a tubular neighbourhood of the knot $f$;
*Thereafter $N$ is the result of sewing this solid torus back in $M$ such that the meridian disc goes once time along the longitude and no times along the meridian.
Does the integral cohomology groups $H^1(N;{\mathbb{Z}})$ and $H^2(N;{\mathbb{Z}})$ depend only on the integral cohomology of $M$ or do they also depend on how the knot $f$ sits in $M$?
A: They depend on how $f$ sits inside $M$. Suppose $M=S^2\times S^1$. Then if a knot is isotopic to $\{x\}\times S^1$, $0$-framed Dehn filling gives back $S^2\times S^1$. On the other hand, if the knot lies in $S^2\times \{y\}$, then Dehn filling gives $S^2\times S^1 \# S^2\times S^1$.
|
v1
|
2023-04-23T09:04:30.103Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
fd5ee8f333defe4841b627e7b4d35a4faab6189a
|
{
"language": "en",
"length": 132,
"provenance": "stackexchange_00001.jsonl.gz:864839",
"question_score": "1",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764578"
}
|
redpajama/stackexchange
|
Q: What is the flaw in this proof? Below is a proof that straight lines cannot exist in the coordinate plane. Where is the flaw in its reasoning?
It will be shown that the equation of a straight line leads to a mathematical inconsistency. First, write the equation of the line in the form Ax+By=C, where A, B, and C are scalars. Second, choose variables u and v so that u=Ax+By and v is any arbitrary non-constant function of x and y. The equation of the line in the uv-plane can be written as u=C. Differentiate implicitly to achieve the equation 1=0, which is a mathematical inconsistency and hence the line cannot exist in the first place.
A: $u=C$ means $u$ is constant, so after differentiation it should be $0=0$, not $1=0$.
|
v1
|
2023-04-23T09:04:30.103Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
601e24746232fa284882c4c67ca4e8b27211dfff
|
{
"language": "en",
"length": 123,
"provenance": "stackexchange_00001.jsonl.gz:864840",
"question_score": "1",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764579"
}
|
redpajama/stackexchange
|
Q: Question about a sum Why is it that
$$\sum_{k=1}^n(n+k+1)(n+1)=\frac{3}{2}\sum_{k=1}^n\left(3k^2+k\right).$$
I cannot understand it. This is not homework, I am just a little interested in this!
A: These sums are classic and we can prove it by induction
$$\sum_{k=1}^n k=\frac{n(n+1)}{2}$$
and
$$\sum_{k=1}^n k^2=\frac{n(n+1)(2n+1)}{6}$$
so
$$\frac{3}{2}\sum_{k=1}^n3k^2+k=\frac{3n(n+1)^2}{2}$$
and on the other way
$$\sum_{k=1}^n(n+k+1)(n+1)=(n+1)\sum_{k=1}^n(n+k+1)\\=(n+1)\left(n(n+1)+\sum_{k=1}^nk\right)=\frac{3n(n+1)^2}{2}$$
so the equality is proved.
A: Just expand. $(n+k+1)(n+1) = n^2+n+kn+k+n+1 = n^2+n(k+2)+k+1$ so
$\sum_{k = 1}^n n^2+n(k+2)+k+1 = \sum_k n^2+\sum_k n(k+2)+\sum_k k + \sum_k 1$.
In the first term, $n$ is constant so the sum is just $n^2 \cdot n$, etc. Do this for all the sums and compare the two sides.
A: The first sum can be written as$$\sum_{k=1}^n(n+k+1)(n+1)=(n+1)\sum_{k=1}^n(n+k+1)=(n+1)n(n+1)+(n+1)\sum_{k=1}^nk\\=n(n+1)^2+\frac{n(n+1)^2}{2}=\frac{3n(n+1)^2}{2}$$
instead the second one as
$$\frac{3}{2}\sum_{k=1}^n3k^2+k=\frac{3}{2}\left(\sum_{k=1}^n3k^2+\sum_{k=1}^nk\right)=\frac{3}{2}\left(3\sum_{k=1}^nk^2+\sum_{k=1}^nk\right)=\frac{3}{2}\left(3\frac{n(n+1)(2n+1)}{6}+\frac{n(n+1)}{2}\right)=\frac{3}{2}\left(\frac{n(n+1)}{2}(2n+2)\right)=\frac{3}{2}\left(\frac{n(n+1)}{2}2(n+1)\right)=\frac{3n(n+1)^2}{2}$$
which equals the first one!
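For completeness, the identity can also be confirmed symbolically (a small sketch):

```python
import sympy as sp

n, k = sp.symbols('n k', positive=True, integer=True)
lhs = sp.summation((n + k + 1) * (n + 1), (k, 1, n))
rhs = sp.Rational(3, 2) * sp.summation(3 * k**2 + k, (k, 1, n))
print(sp.simplify(lhs - rhs))   # 0; both sides equal 3*n*(n + 1)**2/2
```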
|
v1
|
2023-04-23T09:04:30.104Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
9871bddddaf731f8f5538201629a30158ca15019
|
{
"language": "en",
"length": 79,
"provenance": "stackexchange_00001.jsonl.gz:864841",
"question_score": "2",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764580"
}
|
redpajama/stackexchange
|
Q: What are all the functions satisfying $f(\frac{x+a}{b})=f(\frac{f(x)+a}{b})$? Given $f(\frac{x+a}{b})=f(\frac{f(x)+a}{b})$, where $x$ is a real number, $a$ is an integer and $b$ is a natural number.
What are all the functions satisfying this restriction?
I tried to put in some numbers for $a,b$ but can't see how it helps me.
Thanks!
A: Beside $f(x)=x$, every constant function $f(x)=c$, where $c$ is a real number, also works: both sides then equal $c$. (Note that $f(x)=x+c$ with $c\ne 0$ does not work, since the left side becomes $\frac{x+a}{b}+c$ while the right side becomes $\frac{x+a+c}{b}+c$.)
But I don't know how to find all solutions, and probably there are more functions.
|
v1
|
2023-04-23T09:04:30.104Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
f83a7041376afc855c239864bd49cb44951cc2df
|
{
"language": "en",
"length": 161,
"provenance": "stackexchange_00001.jsonl.gz:864842",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764581"
}
|
redpajama/stackexchange
|
Q: Can Bezout's theorem be generalized to non algebraically closed fields? Bézout's theorem says that the number of intersection points of two curves in $\mathbb{P}^2_k$ (counting multiplicity; $k$ algebraically closed) is equal to the product of their degrees. Can the theorem be generalized to a non algebraically closed field? Where the intersection points are replaced by the intersection of schemes?
A: Let $C, D$ be two curves in $\mathbb P^2_k$, without common irreducible component. For any point $x\in C\cap D$, define the multiplicity $\mathrm{mult}_x(C.D)$ (over $k$) of $C\cap D$ as
the dimension over $k$ of $O_{\mathbb P^2_k, x}/(f, g)$, where $f, g\in O_{\mathbb P^2_k, x}$ are the respective local equations of $C, D$ at $x$. Then
$$ \deg C \deg D=\sum_{x\in C\cap D} \mathrm{mult}_x(C.D).$$
It would be better to define mult$_x$ as the length over $O_{\mathbb P^2,x}$ of the quotient ring, but then in the above formula, one has to multiply mult$_x$ by the degree $[k(x): k]$ of the residue field at $x$.
|
v1
|
2023-04-23T09:04:30.104Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
6e0224d2ba4a27b89851ef513266fada7fb45be0
|
{
"language": "en",
"length": 615,
"provenance": "stackexchange_00001.jsonl.gz:864843",
"question_score": "0",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764585"
}
|
redpajama/stackexchange
|
Q: Min $+$ convolution is associative Although the following question was encountered in a Communication Networking textbook, the problem is still one of algebraic and analytic manipulation.
Define the (min,+) convolution of two real valued functions (domain is $\mathbb{R}^+$) f,g as
$$(f*g)(t)= \inf_{0 \leq s \leq t}\{ f(s) + g(t-s)\}$$
Interested readers may compare it with the usual definition of convolution. Anyhow, I needed to prove that this convolution was commutative and associative. I was able to prove the commutative part, but the associative part seems to elude me.
Here is how far I got:
$$f*(g*h) = \inf_{0 \leq s \leq t}\{ f(t-s) + (g*h)(s)\}$$
$$ = \inf_{0 \leq s \leq t}\{ f(t-s) + \inf_{0 \leq u \leq s}\{ g(u) + h(s-u)\}\}$$
$$= \inf_{0 \leq s \leq t}\{ \inf_{0 \leq u \leq s} \{f(t-s) + g(u) + h(s-u)\}\}$$
I couldn't proceed from here. I have also tried expanding $(f*g)*h$ and trying to see if the steps "meet in the middle" but to no avail.
I'd appreciate it if someone could give me some insight on how to proceed with this.
Additional Info
In case of the convolution integral/summation, I used indicator functions to allow me to swap integrals. But here I need to swap two infima, the inner one dependent on the outer one. Is there an indicator-function-like trick that can be used here?
A: The infimum swapping is more straightforward than you think. In general, let $S\subseteq \mathbb{R}$ be non-empty, and $S=\bigcup_{\alpha\in I} S_\alpha$ for some arbitrary index set $I$.
We have $S_\alpha \subseteq S$ for all $\alpha$, so
$$\inf S \leq \inf S_\alpha\quad \forall\alpha\in I.$$
Further,
$$\forall \epsilon>0\quad\exists s\in S\text{ s.t. }
\inf S + \epsilon > s,$$
and thence
$$\forall \epsilon>0\quad\exists \alpha\in I \text { s.t. } \exists s\in S_\alpha\text{ s.t. }
\inf S + \epsilon > s \geq \inf S_\alpha.$$
So
$$\inf S = \inf\ \{\inf S_\alpha | \alpha\in I\}.$$
Applying this to a function defined on a domain $D\subseteq \mathbb{R}^2$, let $D_u=\{(u,v)\,|\,v\in \mathbb{R},\ (u,v)\in D\}\subseteq D$, and substitute $S=f(D)$ and $S_u=f(D_u)$ above. Then
\begin{align*}
\inf f(D) &= \inf_{u}\ (\inf f(D_u) )\\
&= \inf_{u}\ \inf_{v\in D_u} f(u,v)
\end{align*}
In particular, it means that
\begin{multline}
\inf_{0\leq s\leq t} \inf_{0\leq u\leq s} (f(t-s)+g(u)+h(s-u)) =\\
\inf_{0\leq u \leq s\leq t} (f(t-s)+g(u)+h(s-u)).
\end{multline}
A: Got it. We can use the following identity as shown by halfflat:
$$\inf_{0\leq s \leq t} \inf_{0\leq u \leq s} f(u,s) = \inf_{0\leq u \leq t} \inf_{u \leq s \leq t} f(u,s).$$
It will be proved at the end; let us use it first. Also note that convolution is commutative, the proof of which was easy and so I omit it.
Now
$$f*(g*h) = \inf_{0\leq s \leq t} f(t-s) + \inf_{0\leq u \leq s} g(s-u)+h(u) \\
= \inf_{0\leq u \leq t} \inf_{u \leq s \leq t} f(t-s)+g(s-u)+h(u)\\
= \inf_{0\leq u \leq t} \inf_{0 \leq s-u \leq t-u} f(t-u-(s-u))+g(s-u)+h(u)\\
= \inf_{0\leq u \leq t} \inf_{0 \leq v \leq t-u} f(t-u-v)+g(v)+h(u)\\
= \inf_{0\leq u \leq t} (f*g)(t-u) + h(u)=(f*g)*h$$
Proof of Identity: We use indicator functions.
$$\inf_{0\leq s \leq t} \inf_{0\leq u \leq s} f(u,s) = \inf_{0\leq s \leq t} \inf_{0\leq u \leq t} f(u,s)1_{u\leq s}+f(s,s)1_{u > s} $$
This is true because
$$\inf_{0\leq u \leq s} f(u,s) \leq f(s,s)$$
Now that the infima are no longer dependent, swap them to get
$$ \inf_{0\leq u \leq t} \inf_{0\leq s \leq t} f(u,s)1_{u\leq s}+f(s,s)1_{u > s} \\
=\inf_{0\leq u \leq t} \min \left\{\inf_{u \leq s \leq t} f(u,s),\inf_{0\leq s\leq u}f(s,s)\right\}\\
\leq \inf_{0\leq u \leq t} \inf_{u \leq s \leq t} f(u,s) $$
To get the reverse inequality, observe that
$$\inf_{0\leq u \leq t} \inf_{u \leq s \leq t} f(u,s)=\inf_{0\leq u \leq t} \inf_{0 \leq s \leq t} f(u,s)1_{s\geq u}+f(u,u)1_{s<u}$$
Now repeat the steps above.
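A quick discrete sanity check of associativity (illustration only; on a finite grid the (min,+) convolution is exactly associative):

```python
import numpy as np

def minplus(f, g):
    n = len(f)
    return np.array([min(f[s] + g[t - s] for s in range(t + 1)) for t in range(n)])

rng = np.random.default_rng(0)
f, g, h = (rng.uniform(0, 10, 30) for _ in range(3))
print(np.allclose(minplus(minplus(f, g), h), minplus(f, minplus(g, h))))   # True
```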
|
v1
|
2023-04-23T09:04:30.104Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
e5bab887753b05349246a9c2a918b33bd08b4a95
|
{
"language": "en",
"length": 157,
"provenance": "stackexchange_00001.jsonl.gz:864844",
"question_score": "1",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764587"
}
|
redpajama/stackexchange
|
Q: Area of a Parallelogram
The sides of a parallelogram measure $10$ cm and $18$ cm. One angle of the parallelogram measures $46$ degrees. What is the area of the parallelogram, to the nearest square centimeter?
I'm supposed to use the trigonometric area formula $A = \dfrac{1}{2}a b \sin C $ but I cannot seem to get it right.
Thanks in advance!
A: Area of the parallelogram: $A = 10 \cdot 18 \cdot \sin 46^\circ = 180 \cdot 0.719 = 129.48$
Area of the triangle: $A = \frac{1}{2} \cdot 10 \cdot 18 \cdot \sin 46^\circ = 90 \cdot 0.719 = 64.74$
A: It should be $\dfrac{10\cdot 18}{2}\sin 46^{\circ}=90\sin 46^{\circ}\approx 65\operatorname{cm}^2$ for a triangle, so a parallelogram's area is just double this, which gives $65\cdot 2\approx 129\operatorname{cm}^2$ (after rounding)
The important thing to note is that $\dfrac{ab}{2}\sin C$ gives the area for a triangle, and a parallelogram's area is given by twice this amount: $ab\sin C.$
|
v1
|
2023-04-23T09:04:30.104Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
cc66648a8f43c442fbb0fd5acd6e35ee7ff2e889
|
{
"language": "en",
"length": 219,
"provenance": "stackexchange_00001.jsonl.gz:864845",
"question_score": "0",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764591"
}
|
redpajama/stackexchange
|
Q: Show the equality holds for any $x \in [0, \pi]$ We are considering a $2\pi$ periodic function defined on $x\in(-\pi,\pi)$ by
$$f(x) = \pi - x, 0<x<\pi $$ and 0 otherwise.
I already computed the full Fourier series is equal to:
$$f(x) = {\pi\over4}+ \sum_{p=0}^\infty {2\over{\pi(2p+1)^2}}\cos[(2p+1)x] + \sum_{n=1}^\infty {1\over n}\sin(nx) $$
The next piece is to show that for any $x \in [0, \pi]$ the following equality holds:
$$\sum_{n=1}^\infty {1\over n}\sin(nx)= {\pi\over4}+ \sum_{p=0}^\infty {2\over{\pi(2p+1)^2}}\cos[(2p+1)x]$$
I emailed my professor and he told me to deduce this from the pointwise convergence (limit) of the series. The pointwise limit is:
$$\begin{cases}
\pi-x, & \text{if $x \in (0,\pi)$} \\
0, & \text{if $x\in(-\pi,0)$} \\
{\pi\over2}, & \text{if $x=0$} \\
\end{cases}$$
So how do I show the equality holds?
A: Let $0 < x \leqslant \pi$. By the known pointwise convergence of the Fourier series, we have
$$0 = f(-x) = \frac{\pi}{4} + \sum_{p=0}^\infty \frac{2}{\pi(2p+1)^2}\cos \left[(2p+1)(-x)\right] + \sum_{n=1}^\infty \frac{\sin \left(n(-x)\right)}{n}.$$
Since $\cos$ is an even function and $\sin$ odd, this becomes
$$0 = \frac{\pi}{4} + \sum_{p=0}^\infty \frac{2}{\pi(2p+1)^2}\cos \left[(2p+1)x\right] - \sum_{n=1}^\infty \frac{\sin (nx)}{n},$$
and that is evidently equivalent to
$$\sum_{n=1}^\infty \frac{\sin (nx)}{n} = \frac{\pi}{4} + \sum_{p=0}^\infty \frac{2}{\pi(2p+1)^2}\cos \left[(2p+1)x\right].\tag{1}$$
Note that $(1)$ does not hold for $x = 0$, only for $0 < x \leqslant \pi$ (by periodicity, for $2k \pi < x \leqslant (2k+1)\pi$).
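A numeric illustration of $(1)$ at, say, $x = 1$, where both sides should equal $(\pi-1)/2 \approx 1.0708$ (partial sums only, so agreement is approximate):

```python
import numpy as np

x = 1.0
n = np.arange(1, 200001)
lhs = np.sum(np.sin(n * x) / n)

p = np.arange(0, 100000)
rhs = np.pi / 4 + np.sum(2 * np.cos((2 * p + 1) * x) / (np.pi * (2 * p + 1) ** 2))

print(lhs, rhs, (np.pi - x) / 2)   # all approximately 1.07080
```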
|
v1
|
2023-04-23T09:04:30.104Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
dd7999793a63b5a9d4a18abbd6e1e0303b2e7f65
|
{
"language": "en",
"length": 319,
"provenance": "stackexchange_00001.jsonl.gz:864846",
"question_score": "2",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764598"
}
|
redpajama/stackexchange
|
Q: Proof: Convex set of a quadrilateral is a convex quadrilateral Prove that $\Box ABCD$ is a convex set whenever $\Box ABCD$ is a convex quadrilateral.
Things I know:
*
*A set of points $S$ is said to be a convex set if for every pair of points $A$ and $B$ in $S$, the entire segment $AB$ is contained in $S$.
*The diagonals of the quadrilateral $\Box ABCD$ are the segments $AC$ and $BD$
*The angles of the quadrilateral $\Box ABCD$ are the angles $\angle ABC, \angle BCD, \angle CDA,$ and $\angle DAB$
*A quadrilateral is said to be convex if each vertex of the quadrilateral is contained in the interior of the angle formed by the other three vertices.
How would I start this proof? Assuming that the quadrilateral $\Box ABCD$ is a convex quadrilateral.
A: So I think we might be in the same geometry class. I think the best way to start this problem is to think of the two scenarios that could occur: pick two points in the quadrilateral and consider what happens if they are either the same point or two distinct points. Since you know you are working with a convex quadrilateral, the line segment connecting the two points is also in the interior, so the set must be convex. I'm not sure this will help, but that's how I am thinking about it.
A: Assume that $R=ABCD$ is a convex quadrilateral. Consider the cone at the angle at $A$, $C_A$:
$$ C_A=\bigg\{ u\bigg( s\overrightarrow{AB}+(1-s)
\overrightarrow{AD}\bigg)\ \bigg|\ s\in [0,1],\ u\geq 0 \bigg\} $$
Since $C\in C_A$, there are $u_0>0$ and $s_0\in (0,1)$ s.t. $$
\overrightarrow{AC}=u_0\bigg( s_0\overrightarrow{AB}+(1-s_0)
\overrightarrow{AD}\bigg)
$$
Since $B$ and $D$ are not on the same side of the line containing $[AC]$,
$$ R=
\triangle ACB\cup \triangle ACD$$
and a similar argument implies that $$R=\triangle BDA\cup \triangle BDC$$
Hence $\angle ABC,\ \angle BCD,\ \angle CDA,\ \angle DAB$ are
strictly less than $\pi$, which completes the proof.
|
v1
|
2023-04-23T09:04:30.105Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
5f8cd3744bb089a338a33ef019c8b1569a4247a6
|
{
"language": "en",
"length": 235,
"provenance": "stackexchange_00001.jsonl.gz:864847",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764600"
}
|
redpajama/stackexchange
|
Q: Is the set of convex bodies include in a closed ball compact? I consider the set $\mathcal{K}_B$ of convex bodies (convex and compact) which are include inside the unit closed ball of $\mathbb{R}^d$.
I endow this set with the Hausdorff distance.
Is it compact?
A: Let ${\mathcal C}_B$ denote the space of closed subsets of $B$ equipped with Hausdorff metric. Then ${\mathcal C}_B$ is compact, see e.g. here. Then you check by a direct argument that ${\mathcal K}_B$ is a closed subset of ${\mathcal C}_B$ (limit of a sequence of convex subsets is convex). Now, argue that a closed subset of a compact metric space is compact.
Edit: As requested, here is a proof of convexity of the limit. Suppose that $C_n$ is a sequence of closed convex subsets Hausdorff-converging to a closed subset $C\subset B$. To verify that $C$ is convex, take two points $p, q\in C$; then $p=\lim_{n\to\infty} p_n, q=\lim_{n\to\infty} q_n$ for sequences $p_n\in C_n, q_n\in C_n$. Then the sequence of affine maps
$$
f_n: [0,1]\to C_n, t\mapsto (1-t)p_n+ tq_n
$$
converges uniformly to an affine map
$$
f: [0,1]\to B, t\mapsto (1-t)p+ tq.
$$
In particular, the sequence of images of the $f_n$'s Hausdorff-converges to the image of $f$ (this is a general fact about Hausdorff convergence). Thus, the image of $f$ is contained in $C$. It follows that $C$ contains the interval spanned by $p, q$; hence $C$ is convex. qed
|
v1
|
2023-04-23T09:04:30.105Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
2380d17c879ba34ab00717c92bd26bb332609e42
|
{
"language": "en",
"length": 222,
"provenance": "stackexchange_00001.jsonl.gz:864848",
"question_score": "0",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764601"
}
|
redpajama/stackexchange
|
Q: Inverse of matrix with varying parameters Ok so I need some sort of verification on this. I have run into this matrix $\begin{pmatrix} e^t&3e^{-t}\\e^t&e^{-t} \end{pmatrix}$ and I need to find the inverse of this matrix. The book simply ignores the fact that there is a function in the matrix and computes the inverse normally (without the functions). However when we start throwing in $\sin t$ and $\cos t$ we start to see complicated inverses. What exactly is the procedure to determine the inverse of these types of matrices and how do you know if it is "safe" to take the inverse of the fundamental matrix? These matrices come up in differential equations for solutions to non-homogenous systems. How would you take the inverse had it been $\begin{pmatrix} \sin t& 3\cos t \\ \sin t& \cos t\end{pmatrix}$ ?
A: Hint
Do you know that
$$\begin{pmatrix}a&b\\c&d\end{pmatrix}^{-1}=\frac1{ad-bc}\begin{pmatrix}d&-b\\-c&a\end{pmatrix},\qquad \text{if}\;\; ad-bc\ne0$$
A: To make a long story short, everything you learned before this point with numbers instead of functions is more or less the same. In this case, we use the fact that a matrix can be inverted if it has a nonzero determinant, so the standard inversion formula holds.
$\det = ad - bc = e^t e^{-t} - 3e^t e^{-t} = e^{0} - 3 e^{0} = -2 \neq 0 \quad \forall t \in \mathbb{R}$
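For the trigonometric example in the question, a short symbolic check (a sketch):

```python
import sympy as sp

t = sp.symbols('t')
M = sp.Matrix([[sp.sin(t), 3 * sp.cos(t)], [sp.sin(t), sp.cos(t)]])
print(sp.simplify(M.det()))   # -2*sin(t)*cos(t), i.e. -sin(2*t)
print(sp.simplify(M.inv()))   # valid wherever sin(2*t) != 0
```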
|
v1
|
2023-04-23T09:04:30.105Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
de60166879d794d626ffbc927b2fd85c6df40132
|
{
"language": "en",
"length": 284,
"provenance": "stackexchange_00001.jsonl.gz:864849",
"question_score": "0",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764602"
}
|
redpajama/stackexchange
|
Q: Finding the 'n'th k-permutation of a set, and finding 'n' for a given k-permutation (lexicographical ordering) Suppose you have a set, and want to order the k-permutations of the set (for example, the permutations of 5 elements of the set {1, 2, 3, ..., 16}). Is there a fast way of finding 'n' (the "lexicographical index") for, say {3, 16, 4, 11, 7}, and a fast way of finding the 5-permutation such that n = 100000?
A: When you say fast, what do you mean?
If you mean a faster way than actually computing all the k-permutations, you can treat the problem as finding digits in a mixed radix system (here the place values are going to be permutation counts). For example, if you give me the set $[p]=\{1,\ldots ,p\}$, an integer q and an index i, you must iteratively compute $\lfloor \frac{i}{\binom{p-n-1}{q-1-n}*(q-1-n)!} \rfloor+1$ (where $\binom{p-n-1}{q-1-n}*(q-1-n)!$ appears because you want to choose the other numbers that are going to be in the sequence, and they have an order) and then decrement i to the residue. For example, suppose you want to calculate the 837th 3-permutation of $[16]$. The first element would be $\lfloor \frac{837}{\binom{16-0-1}{3-1-0}*(3-1-0)!}\rfloor +1= \lfloor \frac{837}{\binom{15}{2}*2!}\rfloor+1=\lfloor \frac{837}{210}\rfloor+1=4$, and then you update i as $837-210*\lfloor \frac{837}{210}\rfloor=207$.
$\lfloor \frac{207}{\binom{16-1-1}{3-1-1}*(3-1-1)!}\rfloor +1= \lfloor \frac{207}{\binom{14}{1}*1!}\rfloor+1=\lfloor \frac{207}{14}\rfloor+1=15$ but you put 4, so it's gonna be 16.
update i as $207-14*\lfloor\frac{207}{14}\rfloor=11$ but because you put 4,it's gonna be 13.
So the 3-permutation in $[16]$ with index 837 is $\{4,16,13\}$.
I think the complexity is gonna be $O(p*q)$.
For the inverse problem it's the same. Suppose I give $(a_1,\ldots a_q)$; the index of that permutation in $[p]$ is going to be $(a_1-1)*\binom{p-1}{q-1}*(q-1)!+(a_2-1-[a_1<a_2])*\binom{p-2}{q-2}(q-2)!+\ldots+(a_q-1-|\{j:a_j<a_q\}|)*\binom{p-q}{q-q}*(q-q)!$. I think this must be $O(p*q)$ also, because of the binomial numbers.
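Here is a compact implementation of both directions (a sketch using 0-based ranks and items $0,\dots,n-1$, unlike the 1-based conventions above; `math.perm(n, k)` counts $k$-permutations):

```python
from math import perm

def unrank(n, k, r):
    """The r-th (0-based) k-permutation of range(n) in lexicographic order."""
    items, out = list(range(n)), []
    for i in range(k):
        block = perm(n - 1 - i, k - 1 - i)   # permutations per choice of next element
        idx, r = divmod(r, block)
        out.append(items.pop(idx))
    return out

def rank(n, seq):
    """Inverse of unrank: the lexicographic index of a k-permutation of range(n)."""
    items, r, k = list(range(n)), 0, len(seq)
    for i, x in enumerate(seq):
        idx = items.index(x)
        r += idx * perm(n - 1 - i, k - 1 - i)
        items.pop(idx)
    return r

print(unrank(16, 5, 100000))           # the 100000-th 5-permutation of {0, ..., 15}
print(rank(16, [2, 15, 3, 10, 6]))     # index of {3,16,4,11,7} shifted down by one
```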
|
v1
|
2023-04-23T09:04:30.106Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
5332db4b0f84d58bd2190f336c1cd4ebae81e128
|
{
"language": "en",
"length": 702,
"provenance": "stackexchange_00001.jsonl.gz:864850",
"question_score": "19",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764604"
}
|
redpajama/stackexchange
|
Q: Approximate $\sqrt{e}$ by hand I have seen this question many times as an example of provoking creativity. I wonder how many ways there are to approximate $\sqrt{e}$ by hand as accurately as possible.
The obvious way I can think of is to use Taylor expansion.
Thanks
A: How accurately do you need it? One option is to use binomial expansion:
$$
e^{\frac{1}{2}} \approx \Big(1+\frac{1}{n}\Big)^{\frac{n}{2}}=\sum_{k=0}^{\frac{n}{2}}\binom{\frac{n}{2}}{k}\frac{1}{n^k}
$$
which you can make arbitrarily close to $e^{\frac{1}{2}}$ for various values of $n$.
A: I would use the fact that $e \approx 2.7182818284$ and use Wikipedia on computing square roots. The digit by digit method will get you five decimals fairly quickly
A: As an alternative to series-based methods, there are differential equation based methods you can use.
If we recognize that $y=e^x$ is the solution to $y'=y$ with $y(0)=1$, and use Runge-Kutta with a small step size to approximate $y(1/2)$.
In this case, with just one step (using $h=1/2$ in the link), we obtain $e^{1/2}\approx1.6484375\ldots$, compared to the actual value of $e^{1/2}=1.64872127\ldots$.
With two steps, using $h=1/4$, we obtain $1.648699\ldots$.
With three steps, using $h=1/6$, we obtain $1.648716\ldots$.
With four steps, using $h=1/8$, we obtain $1.648719\ldots$.
In fairness, each "one" step in Runge-Kutta applied to this situation does require about seven multiplications. And since the step size needs to be decided from the start, you don't have the ability to refine your result further like you can with series by adding more terms. On the other hand, a differential equation based method can give more accuracy in exchange for less computation in many cases.
A: And here is another answer. There is a known continued fraction expansion for $e^{1/n}$. Continued fraction sequences converge quickly (although with so many 1s, this particular continued fraction converges on the slower end of things). The downside is that you can't use the $n$th convergent to quickly find the $n+1$st convergent, so you have to make a choice right away how deep to go. As @spin notes in the comments, you can refine your convergent using the previous two convergents and the next integer in the continued fraction expression.
$$e^{1/2}=1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{5+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{9+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{13+\cfrac{1}{1+\cfrac{1}{1+\cdots}}}}}}}}}}}}$$
A: Use the Power series to compute $x_n := \exp\left(-2^{-n}\right)$ for some $n \geqslant 1$ with high accuracy, and then compute
$$\sqrt{e} = \left(\frac{1}{x_n}\right)^{2^{n-1}}.$$
Using the negative exponent, and an exponent of smaller absolute value, gives you (much) faster convergence of the series, and the few operations of squaring and inverting don't lose much precision then. For e.g. $n = 3$, you get pretty good results for the first $10$ terms of the power series already.
A: Newton-Raphson for $\sqrt{e}$, i.e. for the root of $x^{2} - e = 0$: $x_{n + 1} = \frac{1}{2}\left(x_{n} + \frac{e}{x_{n}}\right)\,,\quad x_{0} = 2$.
With $n = 3$:
\begin{align}&\sqrt{e}\approx
\frac{1}{2} \left\{\frac{1}{2} \left[\frac{1}{2} \left(2+\frac{e}{2}\right)+\frac{2 e}{2+\frac{e}{2}}\right]+\frac{2 e}{\frac{1}{2} \left(2+\frac{e}{2}\right)+\frac{2 e}{2+\frac{e}{2}}}\right\} = x_{3}
\approx 1.648721295
\end{align}
$$
\mbox{Relative Error} = \left\vert\,\frac{x_{3}}{\sqrt{e}} - 1\,\right\vert\times 100\ \% = 1.48\times 10^{-6}\ \%
$$
A: I found this series representation of $e$ on Wolfram Mathworld:
$$
e=\left(\sum_{k=0}^\infty\frac{4k+3}{2^{2k+1}(2k+1)!}\right)^2.
$$
Hence
$$
\sqrt{e}=\sum_{k=0}^\infty\frac{4k+3}{2^{2k+1}(2k+1)!}.
$$
Also from Maclaurin series for exponential function
$$
e^{\large\frac{1}{2}}=\sum_{n=0}^\infty\frac{1}{2^n n!}.
$$
A: The rapidly-converging series representation of $\sqrt{e}$ in Tunk-Fey's answer can be derived from simply expressing the Maclaurin series of $e^{x}$ as the sum of its even terms plus the sum of its odd terms.
$$ \begin{align} e^{x}&= \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!} + \sum_{n=0}^{\infty} \frac{x^{2n+1}}{(2n+1)!} = \sum_{n=0}^{\infty} \frac{x^{2n}(2n+1) + x^{2n+1}}{(2n+1)!} \\ &= \sum_{n=0}^{\infty} \frac{x^{2n}(2n+1+x)}{(2n+1)!} = \sum_{n=0}^{\infty} \frac{x^{2n}(4n+2+2x)}{2(2n+1)!} \end{align}$$
A: If you apply the standard series expansion of $e^x$ to the case $x=-1/2$ and then find the reciprocal, it will converge faster than if you use $x=1/2$.
A: On a pocket calculator enter $2048$, ${1\over x}$, $+$, $1$, $=$, $x^2$ ($10$ times).
A: You can use the following identities:
*
*$e=\lim_n(1+1/n)^n$,
*$e=\lim_n\frac{n}{\sqrt[n]{n!}}$.
Put in a large value of $n$ and it should do. Of course you have to take the square root of the results.
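A quick check of the series representation quoted above (illustration only):

```python
from math import exp, factorial

s = sum((4 * k + 3) / (2 ** (2 * k + 1) * factorial(2 * k + 1)) for k in range(10))
print(s, exp(0.5))   # agree to full double precision
```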
|
v1
|
2023-04-23T09:04:30.106Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
9345506de51c6e853143004cbb6dfdcce222d6b6
|
{
"language": "en",
"length": 146,
"provenance": "stackexchange_00001.jsonl.gz:864851",
"question_score": "1",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764608"
}
|
redpajama/stackexchange
|
Q: stochastic process with random variable as time, measurable i have a problem with the following exercise:
Let $X$ be a measurable $\mathbb{R}^d$-valued stochastic process on $(\Omega,\mathcal{F},\mathbb{P})$ and let $T : \Omega\to \big[ 0, +\infty \big)$ be a finite random time.
Show that $X_T$ is $\mathcal{F}$-measurable.
I know how to show $\{\omega\in \Omega \mid X_{T(\omega)}(\omega)\in B\} \in \mathcal{F}$, but only if $T$ takes values in $\mathbb{N}$.
I have no clue how to show it for a real valued random time.
thank you
edit:Yes $X$ is measurable on the space $(\Omega \times [ 0, +\infty),\mathcal{F}\otimes \mathcal{B}([0,\infty))$
A: Thank you Byron Schmuland; the answer to "The strong Markov property with an uncountable index set" answers my question too.
I was at first puzzled about why $T$ should be finite, but since then $\{T<\infty \}=\Omega$, so $X_T$ is well defined for every $\omega$ without having to look at the limits.
|
v1
|
2023-04-23T09:04:30.106Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
353c984654cd8c7cac0ed6aaecf77da2247c8153
|
{
"language": "en",
"length": 420,
"provenance": "stackexchange_00001.jsonl.gz:864852",
"question_score": "2",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764614"
}
|
redpajama/stackexchange
|
Q: Complex Numbers of Unit Modulus if $z_1$, $z_2$ and $z_3$ are Complex Numbers of Unit Modulus Such That:
\begin{equation}
|z_1-z_2|^2+|z_1-z_3|^2=4 \tag{1}
\end{equation} Find the value of $$|z_2+z_3|$$
A: Fix $z_1$ and $z_2$ on the unit circle. The constraint:
\begin{equation}\tag{1} |z_1 - z_2|^2 + |z_1 - z_3|^2 = 4 \end{equation}
Says that $z_3$ must lie on a circle centered at $z_1$ with radius $R = \sqrt{4 - |z_1 - z_2|^2}$. The additional constraint that $z_3$ has unit norm means there are two solutions for $z_3$ which are the intersection points of the two circles (except in the degenerate case when $R = 0$ or $2$ which correspond to $z_2 = \pm z_1$).
Now, the special thing is that this equation is always satisfied when $z_3 = -z_2$. You can see this algebraically, and additionally there is a geometric picture: $z_3 = -z_2$ implies the numbers lie on the same diameter of the unit circle, and so the triangle formed by $z_1,z_2,z_3$ is a right triangle. Then the Pythagorean theorem implies the constraint (1) holds.
From this, you can see that the other solution should be the reflection of $-z_2$ about $z_1$, which is given algebraically by $-z_1^2\bar{z}_2$. In other words: $z_3 = -z_2$ or $z_3 = -z_1^2\bar{z}_2$. In the first case, $|z_2 + z_3| = 0$, and in the second $|z_2 + z_3| = |z_2 - z_1^2\bar{z}_2| = |z_2^2 - z_1^2|$. This can be any value between zero (when $z_2 = \pm z_1$) and two (when $z_2 = \pm i z_1$). So without more specification there is not a unique solution.
A: Tried in this way:
\begin{equation}
|z_1+z_2|^2+|z_1-z_3|^2=4+|z_1+z_2|^2-|z_1-z_2|^2=4+4Re(z_1z_2^*) \tag{2}
\end{equation} $\sim$
\begin{equation}
|z_1+z_3|^2+|z_1-z_2|^2=4+|z_1+z_3|^2-|z_1-z_3|^2=4+4Re(z_1z_3^*) \tag{3}
\end{equation} So
\begin{equation}
|z_2+z_3|^2=|(z_1+z_2)-(z_1-z_3)|^2=\\|z_1+z_2|^2+|z_1-z_3|^2-2Re\left((z_1+z_2)(z_1-z_3)^*\right)=\\4+4Re(z_1z_2^*)-2Re\left((z_1+z_2)(z_1-z_3)^*\right) \tag{4}
\end{equation}
$\sim$
\begin{equation}
|z_2+z_3|^2=|(z_1+z_3)-(z_1-z_2)|^2=\\|z_1+z_3|^2+|z_1-z_2|^2-2Re\left((z_1+z_3)(z_1-z_2)^*\right)=\\4+4Re(z_1z_3^*)-2Re\left((z_1+z_3)(z_1-z_2)^*\right) \tag{5}
\end{equation} Also Expanding Eqn $(1)$ We get
\begin{equation}
Re(z_1(z_2+z_3)^*)=0 \tag{6}
\end{equation} Also
\begin{equation}
Re(z-z*)=0 \tag{7}
\end{equation} Adding Egns $(4)$ and $(5)$ and Using $(6)$ and $(7)$ we get
\begin{equation}
|z_2+z_3|^2=2(1+Re(z_2z_3^*))
\end{equation}
I need further assistance in solving this. I will be happy if there is shorter way to solve preferably using Geometry.
A: The question does not have a well-defined answer. Here are two examples which satisfy the conditions but give different results for $|z_2+z_3|$.
1) $z_2=1$, $z_3=-1$, $|z_1|=1$ (i.e. $z_2$ and $z_3$ are the endpoints of a diameter of the unit circle, and $z_1$ is any point on the circle. By Pythagoras theorem the sum of squares of sides = $(1+1)^2=4$.) Here $|z_2+z_3|=0$.
2) $z_1 = e^{i\pi/2}, z_2 = e^{i\pi/3}, z_3 = e^{-i\pi/3}$. You can verify with WolframAlpha that the condition is satisfied. Here $|z_2+z_3|=2\cos(\pi/3)=1$.
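A numeric confirmation of the two examples (a sketch; the specific points are from the answer above):

```python
import cmath

def check(z1, z2, z3):
    constraint = abs(z1 - z2) ** 2 + abs(z1 - z3) ** 2   # should equal 4
    return constraint, abs(z2 + z3)

# Example 1: z2, z3 antipodal, z1 anywhere on the unit circle
print(check(cmath.exp(0.7j), 1, -1))                                   # (4.0, 0.0)
# Example 2
print(check(cmath.exp(1j * cmath.pi / 2), cmath.exp(1j * cmath.pi / 3),
            cmath.exp(-1j * cmath.pi / 3)))                            # (4.0, 1.0)
```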
|
v1
|
2023-04-23T09:04:30.106Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
f21edbc9735b6440fd8590ff716bf2c1d9b43413
|
{
"language": "en",
"length": 419,
"provenance": "stackexchange_00001.jsonl.gz:864853",
"question_score": "0",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764618"
}
|
redpajama/stackexchange
|
Q: Density of rationals in an ordered field Given an ordered field $F$, is it equivalent to say that $\mathbb{Q}$ is dense in $F$ with respect to the order topology and to say that $\forall x\in F,\forall y\in F,x<y\Rightarrow\exists q\in \mathbb{Q},x<q<y$ ?
A: Yes.
In fact, for any linearly ordered set $(X,\leq)$, a subset $Y \subset X$ is called order-dense if for all $x_1 < x_2 \in X$ there is $y \in Y$ with $x_1 < y < x_2$. Suppose $X$ has at least two points and is order-dense in itself. Then a subset $Y$ of $X$ is order-dense in $X$ if and only if $Y$ is dense in $X$ for the order topology: if $Y$ is order-dense in $X$ and $U \subset X$ is nonempty open, then $U$ contains a nonempty interval of the form $(a,b)$ (for $a < b$), $[a,b)$ (for $a$ the least element of $X$), or $(a,b]$ (for $b$ the greatest element of $X$). The hypotheses imply there is $c \in Y$ with $a < c < b$. If $Y$ is dense in the order topology and $a < b \in X$, then $(a,b)$ contains an element $c$ of $Y$.
The hypothesis that $X$ has at least two points and is order-dense in itself applies to every ordered field $F$: $0 < 1 \in F$, and for $x < y \in F$, we have $x < \frac{x+y}{2} < y$. (This uses that every ordered field has characteristic $0$, or at least characteristic different from $2$. We could get around this by also using $\frac{2x+y}{3}$.)
A: Yes, for ordered fields. We say that a set $S\subset K$ is order-dense in $K$ if and only if $\forall x,y\in K,x<y \Rightarrow \exists q\in S,x<q<y$. The order topology is the one having as a base the sets $]x,\infty[,\ ]x,y[,\ ]-\infty,x[$ with $x,y\in F$. To prove that an order-dense set is dense is straightforward. Let us call this set $S$. Given a neighborhood $V$ of $x$, we have a set $B$ of the base with $x\in B\subset V$, and we can see that in each of the three cases for $B$ there is a $q\in S$ that is in $V$.
The converse requires that the ordered set be an ordered field. First, recall that an ordered field has characteristic $0$. So we have a set $S$ dense in $F$. If $x,y\in F$ are such that $x<y$, then necessarily $]x,y[$ is not empty, since $\frac{x+y}{2}$ is in $]x,y[$, and hence $]x,y[$ is a neighborhood of $\frac{x+y}{2}$. So there is $q\in S$ such that $x<q<y$.
|
v1
|
2023-04-23T09:04:30.106Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
edae68e91f0fc3bab5213260f13476f2a0d69ab4
|
{
"language": "en",
"length": 187,
"provenance": "stackexchange_00001.jsonl.gz:864854",
"question_score": "2",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764619"
}
|
redpajama/stackexchange
|
Q: Algebraic question I've been trying to solve this question for the past few hours with no luck, I'm starting to feel a formula may be involved. Can you please help me out on this problem:
If $a+b+c=3$ $(a>0,b>0,c>0)$, then what is the greatest value of $a^2b^3c^2$?
(If I assume each one of them to be $1$, the value would equal $1$, but I'm not certain whether this answer is right.)
A: You are trying to maximize the function $f(a,b,c)=a^2b^3c^2$ subject to the constraints $a,b,c>0$ and $a+b+c=3$. This last constraint is the equation of a plane, and at the maximum, the gradient of the objective must be parallel to the normal of the plane. The gradient is:
$$ \nabla f = \{ 2ab^3c^2, 3a^2b^2c^2, 2a^2b^3c \}$$
and the normal to the plane is $\{1,1,1\}$, so we want:
$$ \nabla f = \lambda \{1,1,1\} $$
for some $\lambda$. Therefore, we can solve the following set of equations:
$$ 2ab^3c^2 = \lambda \\ 3a^2b^2c^2 = \lambda \\ 2a^2b^3c = \lambda \\ a+b+c=3 $$
This is essentially the method of Lagrange multipliers.
Hint: The solution for $a,b,c$ involves sevenths.
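For reference (my own addition, not from the answer above): solving the system gives $a = c = 6/7$, $b = 9/7$, which also follows from AM-GM applied to the seven parts $\frac a2,\frac a2,\frac b3,\frac b3,\frac b3,\frac c2,\frac c2$. A quick numeric check:

```python
a, b, c = 6 / 7, 9 / 7, 6 / 7
print(a + b + c)             # 3.0
print(a**2 * b**3 * c**2)    # ~1.1472, larger than the value 1 at a = b = c = 1
```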
|
v1
|
2023-04-23T09:04:30.107Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
eb9f85439ddad53ad38c7992d04b7c85b284f817
|
{
"language": "en",
"length": 89,
"provenance": "stackexchange_00001.jsonl.gz:864855",
"question_score": "0",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764621"
}
|
redpajama/stackexchange
|
Q: Galois field splitting a polynomial Can someone explain to me how i would go about doing a problem like this? I don't really know where to start. GF refers to a Galois field. I'm struggling to even understand exactly what they want me to do here.
A: The point here to realize is that all fields of size $p^n$ are isomorphic and that you construct a (and hence the) field of $p^n$ elements by taking ${\mathbb F}_p[x]/(f(x))$ for some irreducible polynomial $f(x) \in {\mathbb F}_p[x]$ of degree $n$.
|
v1
|
2023-04-23T09:04:30.107Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
4f3247446697959fdcac034a73a3ccaee510f0ef
|
{
"language": "en",
"length": 270,
"provenance": "stackexchange_00001.jsonl.gz:864856",
"question_score": "1",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764622"
}
|
redpajama/stackexchange
|
Q: Wave Characteristic diagram question Say $F(x)=v(x)=0$ when $|x| \geq a$ whilst $F(x)=h(x)$ and $v(x)=c \ h'(x)$, both when $|x|<a$. As we know, the wave equation in D'Alembert's form is:
$$
y(x,t)=\frac{1}{2}\left[F(x-ct)+F(x+ct)\right] + \frac{1}{2 c} \int_{x-ct}^{x+ct} v(s) \,ds.
$$
I calculated that $y(a,t)=\frac{h(a)}{2}$ in $t \in (0, \frac{2a}{c})$ but I'm curious how to calculate $y(a,t)$ for $t > \frac{2a}{c}$?
I think the answer is $0$ because the wave moves outwards from the center at speed $c$ so eventually it will pass the point $a$. However, if time is on the $y$ axis, does this mean it stays at position $a$ and just increases on $y$?
Can someone please clarify my intuition of the characteristic diagram involving the time parameter?
A: When dealing with piecewise defined initial conditions, it helps to draw the regions of dependency of each piece. To do this, draw the lines $x=x_0\pm ct$ for each point $x_0$ separating two pieces:
I put big fat $0$s in the regions which do not depend on $[-a,a]$. The rest of space-time is divided into three parts.
In part (iii),
$$y(x,t)=\frac{1}{2}\left[h(x+ct) + h(x-ct)\right] + \frac{1}{2 c} \int_{x-ct}^{x+ct} ch' (s) \,ds = h(x+ct) $$
This describes the wave moving to the left keeping its profile.
In part (i),
$$y(x,t)=\frac{1}{2}\left[h(x+ct)\right] + \frac{1}{2 c} \int_{-a}^{x+ct} ch' (s) \,ds = h(x+ct) - \frac12 h(-a) $$
This is also the wave moving to the left in its original form, but with the added shock from the discontinuity at $-a$ (which doesn't exist if $h(-a)=0$).
Part (ii) is similar and is left as an exercise. It will have $h(x-ct)$, plus the shock from the discontinuity at $a$.
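If you want to sanity-check the region-(iii) formula numerically, here is a sketch with a sample profile of my choosing (the question's $h$ is unspecified; I take one vanishing at $\pm a$):

```python
# Sketch: evaluate d'Alembert's formula by midpoint quadrature and compare
# with the region-(iii) prediction y = h(x + ct). Sample data: a = 1, c = 2.
import math

a, c = 1.0, 2.0
def h(x):  return math.cos(math.pi * x / 2) if abs(x) < a else 0.0
def F(x):  return h(x)
def v(x):  return -c * (math.pi / 2) * math.sin(math.pi * x / 2) if abs(x) < a else 0.0

def y(x, t, n=4000):
    lo, hi = x - c * t, x + c * t
    dx = (hi - lo) / n
    integral = sum(v(lo + (k + 0.5) * dx) for k in range(n)) * dx
    return 0.5 * (F(lo) + F(hi)) + integral / (2 * c)

x, t = 0.1, 0.2                 # both x - ct and x + ct lie in (-a, a)
print(y(x, t), h(x + c * t))    # the two numbers should agree closely
```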
|
v1
|
2023-04-23T09:04:30.107Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
b4ce9180fbccf1a0d0cb2d0fc476c6e2e14e79e1
|
{
"language": "en",
"length": 97,
"provenance": "stackexchange_00001.jsonl.gz:864857",
"question_score": "0",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764623"
}
|
redpajama/stackexchange
|
Q: Partial derivatives of $f(x,y)=\sqrt{|xy|}$ $f(x,y)=\sqrt{|xy|}$
First question: how to find $f_x(0,0)$ and $f_y(0,0)$? I have figured this out using the definition: both are $0$.
My next question is: how to show that $f_x(0,0)$ and $f_y(0,0)$ are the only directional derivatives that exist at $(0,0)$?
Again, using the definition, let the direction be $\mathbf{v}=(u,v)$. $\nabla_\mathbf{v}f(0,0) = \lim_{h\rightarrow 0}(f(hu,hv)-f(0,0))/h = \lim_{h\rightarrow 0}h\sqrt{|uv|} = 0$.
What's wrong with my argument?
A: You miscomputed something.
Note that, given $\mathbf v=(u,v)$,
$$\begin{align}
\nabla_\mathbf{v}f(0,0) &= \lim \limits_{h\rightarrow 0}\left(\dfrac{f(hu,hv)-f(0,0)}{h}\right)\\
&= \lim \limits_{h\rightarrow 0}\left[\dfrac {\left(\left|h^2uv\right|\right)^{1/2}}{h}\right]\\
&= \lim \limits_{h\to 0}\left(\dfrac {\color{red}| h\color{red}|\sqrt{|uv|}}{h}\right).
\end{align}$$
Can you conclude?
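For the record, the quotient equals $\pm\sqrt{|uv|}$ according to the sign of $h$, so the limit exists only when $uv=0$, i.e. along the coordinate axes. A quick numerical illustration (a sketch, with a direction of my choosing):

```python
# Sketch: one-sided difference quotients of f(x,y) = sqrt(|x y|) at the
# origin along the direction (u, v) = (1, 1); they have opposite signs.
import math

def f(x, y): return math.sqrt(abs(x * y))

u, v = 1.0, 1.0
for h in (1e-3, 1e-6, -1e-3, -1e-6):
    print(h, f(h * u, h * v) / h)   # +1 for h > 0, -1 for h < 0: no limit
```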
|
v1
|
2023-04-23T09:04:30.107Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
5e05fffd1a66d9e62cf00a1ceaf226f57b977e7b
|
{
"language": "en",
"length": 163,
"provenance": "stackexchange_00001.jsonl.gz:864858",
"question_score": "2",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764624"
}
|
redpajama/stackexchange
|
Q: From Shortened Code to Reed Muller code [OK people, before scolding me, let me tell you this thing is giving me headaches...]
I have a code $C = C(8,3)$ described by this generator matrix:
$$G=\begin {bmatrix}
1&0&0&1&0&1&1&0\\
0&1&0&1&1&1&1&1\\
0&0&1&1&0&0&1&0\\
\end{bmatrix}$$
I've been asked to get a shortened code of $C$. I got:
$$G'=\begin {bmatrix}
1&0&1&0&1&1&0\\
0&1&1&1&1&1&1\\
\end{bmatrix}$$
(Started from the very beginning in order to show you "where I'm coming from")
I should try to find a Reed-Muller code from $C'$.
As far as I know, $n = 2^m$ and $k = \sum_{i=0}^{r} \binom mi = 1 + \binom m1 + \dots + \binom mr$, with Hamming distance $d = 2^{m-r}$.
But if $d = 4$ and $n = 7$, then $m$ (which should be an integer) cannot satisfy those formulas at all. Am I doing something wrong?
Also, when calculating $k$, since $r$ is the upper limit of the sum, how big should it be?
But my BIG question is: is it possible to get a Reed-Muller code from this matrix?
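In the meantime, one can at least confirm the parameters of $C'$ by brute force; here is a sketch (the matrix is the $G'$ above):

```python
# Sketch: brute-force the minimum distance of the shortened code C'
# generated by G', to confirm the parameters n = 7, k = 2, d = 4.
from itertools import product

G = [(1, 0, 1, 0, 1, 1, 0),
     (0, 1, 1, 1, 1, 1, 1)]

codeword = lambda msg: [sum(m * g for m, g in zip(msg, col)) % 2
                        for col in zip(*G)]
dmin = min(sum(codeword(msg)) for msg in product((0, 1), repeat=2) if any(msg))
print(dmin)   # 4
```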
|
v1
|
2023-04-23T09:04:30.107Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
485d9298a70919cab6cac93de6a07d1c742f35c8
|
{
"language": "en",
"length": 150,
"provenance": "stackexchange_00001.jsonl.gz:864859",
"question_score": "0",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764626"
}
|
redpajama/stackexchange
|
Q: Solving $\lim_{x\rightarrow0}\frac{\log(1+\sin{x})-\log(1+x)}{x-\tan{x}}$ (doubts with Landau notation) I'm trying to solve the following limit:
$$\lim_{x\rightarrow0}\frac{\log(1+\sin{x})-\log(1+x)}{x-\tan{x}}$$
It is pretty straightforward by substituting those expressions with their Taylor polynomials:
Let $y:=\sin{x}$; then the limit is:
$$\lim_{x\rightarrow0}\frac{\log(1+\sin{x})-\log(1+x)}{x-\tan{x}}=\lim_{x\rightarrow0}\frac{\log(1+y)-\log(1+x)}{x-\tan{x}}=\lim_{x\rightarrow0}\frac{[y+o(y)]-[x+o(x)]}{x-[x+\frac{x^3}{3}+o(x^3)]}=\lim_{x\rightarrow0}\frac{[[x-\frac{x^3}{6}+o(x^3)]+o(x)]-[x+o(x)]}{x-[x+\frac{x^3}{3}+o(x^3)]}=\lim_{x\rightarrow0}\frac{-\frac{x^3}{6}+o(x)+o(x^3)}{-\frac{x^3}{3}+o(x^3)}$$
Now, that seems to be $\frac{1}{2}$, and indeed it is, but I think I'm doing something wrong with how I manipulate Landau's notation, because I don't know how to simplify that expression to get the $\frac{1}{2}$. I'm having this same problem in several other limits, and I'm not sure whether I'm using the notation properly; any help on this subject would be greatly appreciated.
A: Your result is correct, but your method isn't quite rigorous: you should write the Taylor series to order $3$ throughout (an $o(x)$ term would swamp the $x^3$ terms). Let's show how to proceed for the numerator:
$$\log(1+\sin x)-\log(1+x)=\log\left(1+x-\frac{x^3}6+o(x^3)\right)-x+\frac{x^2}2-\frac{x^3}3+o(x^3)\\=\left(x-\frac{x^3}6\right)-\frac{\left(x-\frac{x^3}6\right)^2}{2}+\frac{\left(x-\frac{x^3}6\right)^3}{3}-x+\frac{x^2}2-\frac{x^3}3+o(x^3)\\=\left(x-\frac{x^3}6\right)-\frac{x^2}{2}+\frac{x^3}{3}-x+\frac{x^2}2-\frac{x^3}3+o(x^3)=-\frac{x^3}6+o(x^3)$$
and your work for the denominator is correct.
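As a cross-check of both expansions, here is a small sketch with SymPy (assumed available):

```python
# Sketch: verify the third-order expansions and the limit with sympy.
import sympy as sp

x = sp.symbols('x')
num = sp.log(1 + sp.sin(x)) - sp.log(1 + x)
den = x - sp.tan(x)
print(sp.series(num, x, 0, 4))    # -x**3/6 + O(x**4)
print(sp.series(den, x, 0, 4))    # -x**3/3 + O(x**4)
print(sp.limit(num / den, x, 0))  # 1/2
```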
A:
Here is another, simpler method: apply L'Hôpital's rule three times. Enjoy!
|
v1
|
2023-04-23T09:04:30.108Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
c12190e26c6458f6c8fa571b963e550be9500024
|
{
"language": "en",
"length": 276,
"provenance": "stackexchange_00001.jsonl.gz:864860",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764627"
}
|
redpajama/stackexchange
|
Q: Advanced Calculus – (Real Analysis) function f Def. The statement that $f$ is continuous means that $f$ is continuous at each point in its domain.
Def. If $D$ is a subset of $\mathbb{R}$ and $f$ is a real-valued function with domain $D$, then the statement that $f$ is continuous at a point $p$ in $D$ means: if $(a,b)$ contains $f(p)$, then there is an interval $(c,d)$ containing $p$ such that $f(x)$ is in $(a,b)$ for each $x$ in $D\cap(c,d)$.
The question is: there is a function $f$, defined on $[0,1]$, such that $f$ is continuous at the irrational numbers and discontinuous at the rationals.
Work I have done:
Let $f$ be defined as $f(x)=0$ for $x$ irrational; for $x$ rational, $x=\tfrac pq$ with $\gcd(p,q)=1$, define $f(x)=1/q$.
I don't know where to go from here, or if this is right. I thought about using Thomae's function and modifying it, but it is not on the definitions sheet; the only definitions I found were the one above for a continuous function and the one for the domain.
A: You need to prove this:
$f$ is discontinuous at rational arguments: you know that for any $x\in\mathbb Q$ there is a sequence $x_n\in\mathbb R\setminus\mathbb Q$ with $\lim_{n\rightarrow\infty} x_n = x$. Use this sequence to show that $f$ cannot be continuous at $x$.
$f$ is continuous at irrational arguments: take any $x\in\mathbb R\setminus\mathbb Q$. Here you have to prove that for any sequence $x_n\in \mathbb R$ with $\lim_{n\rightarrow\infty} x_n = x$ you have $\lim_{n\rightarrow\infty} f(x_n) = f(x) = 0$. The crucial part is to show that if $x_n$ has a subsequence $y_n=\tfrac{p_n}{q_n}$ consisting only of rational numbers, then $\lim_{n\rightarrow\infty} q_n = \infty$.
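Here is a small illustration of that crucial part (a sketch; I approach the irrational number $\sqrt 2 - 1$ by rationals of my choosing):

```python
# Sketch: Thomae-style function on exact rationals. The denominators of
# rationals near an irrational point blow up, forcing f(x_n) -> 0.
from fractions import Fraction
import math

def f(r: Fraction) -> Fraction:
    return Fraction(1, r.denominator)   # f(p/q) = 1/q in lowest terms

target = math.sqrt(2) - 1               # an irrational in [0, 1]
for d in (10, 100, 1000, 10000):
    r = Fraction(round(target * d), d)  # a rational within 1/(2d) of target
    print(r, f(r))                      # the values 1/q shrink towards 0
```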
|
v1
|
2023-04-23T09:04:30.108Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
210a3531aa8c0629ef837f724c379e6b46a7ac4c
|
{
"language": "en",
"length": 182,
"provenance": "stackexchange_00001.jsonl.gz:864861",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764631"
}
|
redpajama/stackexchange
|
Q: How does the chain rule work for more than one variable? I know that $$\dfrac{d\sqrt{x}}{dt} = \dfrac{d\sqrt{x}}{dx} \dfrac{dx}{dt}$$
In this equation there you only have 1 variable, namely $x$.
But why is the following correct?:
$$T = \frac{1}{2} m \left(v_{x}^2 + v_{y}^2 + v_{z}^2 \right)$$
$$\dfrac{dT}{dt} = m \left( v_{x} \dfrac{dv_{x}}{dt} + v_{y} \dfrac{dv_{y}}{dt} + v_{z} \dfrac{dv_{z}}{dt} \right)$$
How do you use the chain rule with these 3 variables, and what is the mathematical proof of that?
A: $v_x,v_y,v_z$ are three dependent variables; there is only one independent variable, $t$. In fact $v_x$ is a function of both position $x$ and time $t$, but the position $x$ is itself a function of $t$, so all the components of $v$ are just functions of $t$.
If there were some other independent variable $t'$ then we would talk about partial derivatives and the multivariable chain rule.
About the proof: the usual proof of the chain rule is valid, since $v_x$ is just a function of $t$. There might be other functions of $t$ involved, like the acceleration, but the chain rule is still valid.
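A symbolic sanity check of the displayed derivative (a sketch, assuming SymPy is available):

```python
# Sketch: differentiate T = (1/2) m (vx^2 + vy^2 + vz^2) with vx, vy, vz
# treated as functions of the single variable t; the chain rule gives
# m*(vx*vx' + vy*vy' + vz*vz').
import sympy as sp

t, m = sp.symbols('t m')
vx, vy, vz = (sp.Function(n)(t) for n in ('v_x', 'v_y', 'v_z'))
T = sp.Rational(1, 2) * m * (vx**2 + vy**2 + vz**2)
print(sp.expand(sp.diff(T, t)))
```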
|
v1
|
2023-04-23T09:04:30.108Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
5f67ed11b2d0e9b612ed1bde80fcab1f2ba9b5e1
|
{
"language": "en",
"length": 260,
"provenance": "stackexchange_00001.jsonl.gz:864862",
"question_score": "1",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764632"
}
|
redpajama/stackexchange
|
Q: Why is it required to change variable to get the right answer for this question? The question is this :
$$\lim_{x\to-\infty} {\sqrt{x^2+x}+\cos x\over x+\sin x}$$
The solution is $-1$, and it seems this can only be obtained via the change-of-variable strategy, such as $t=-x$.
However, I have no idea why this isn't solved by simply cancelling an $x$ from the numerator and denominator, which generates the value $1$.
It seems that this is related to $x\to-\infty$, but I have no specific idea.
Can anyone help me?
A: Note that the trigonometric terms are negligible as $x \to-\infty $. Hence,
$$\lim_{x\to-\infty}\frac{\sqrt{x^2+x}}{x}$$
You cannot simply take the $x$ inside the square root; the problem is that $x$ is negative. So you must write:
$$\lim_{x\to-\infty}\frac{\sqrt{x^2+x}}{-(-x)}$$
Now, $-x$ is positive and you can take it inside the root where it will become squared.
$$\lim_{x\to-\infty}\frac{\sqrt{1+1/x}}{-1}$$
Clearly, the answer is $-1$.
Substituting $t=-x$ just makes the computation look cleaner; note that you can write $t$ in place of $-x$ throughout the substituted answer.
A: $t = -x$ gives: $L$ = $-\displaystyle \lim_{t \to \infty} \dfrac{\sqrt{t^2 - t} + \cos t}{t + \sin t} = -\displaystyle \lim_{t \to \infty} \dfrac{\sqrt{1 - \dfrac{1}{t}} + \dfrac{\cos t}{t}}{1 + \dfrac{\sin t}{t}} = -1$ because $\displaystyle \lim_{t \to \infty} \dfrac{\cos t}{t} = \displaystyle \lim_{t \to \infty} \dfrac{\sin t}{t} = \displaystyle \lim_{t \to \infty} \dfrac{1}{t} = 0$
A: You are thinking of writing $\sqrt{x^2 + x} = x\sqrt{1+\frac 1 {x}}$; however, $\sqrt{x^2 + x}$ and $\sqrt{1+\frac 1 {x}}$ are (defined to be) non-negative, so how can you do this with a negative $x$?
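A quick numerical check that the limit really is $-1$ (a sketch):

```python
# Sketch: evaluate the expression at large negative x; it tends to -1.
import math

def g(x):
    return (math.sqrt(x * x + x) + math.cos(x)) / (x + math.sin(x))

for x in (-1e2, -1e4, -1e6):
    print(x, g(x))   # values approach -1
```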
|
v1
|
2023-04-23T09:04:30.108Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
ea879f646ce822e05c5059cee4afa0405a10933e
|
{
"language": "en",
"length": 93,
"provenance": "stackexchange_00001.jsonl.gz:864863",
"question_score": "0",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764633"
}
|
redpajama/stackexchange
|
Q: prove that the number $38^n+31$ is composite Prove that for every positive integer $n$, the number $38^n+31$ is composite. For example, $38+31=69$ is composite, and $38^2+31=1475$ is also composite. I have tried modular arithmetic, but it didn't work.
A: For odd $n$, take modulo $3$. For even $n$, if $n \equiv 2 \pmod 4$, take modulo $5$. So only the $n \equiv 0 \pmod 4$ case remains. (Modulo $7$ further handles $n \equiv 4 \pmod{12}$, but the cases $n \equiv 0, 8 \pmod{12}$ remain.)
Anyway, it is hard to get a conclusive answer out of these types of problems....
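For what it's worth, one can at least explore the pattern empirically (a sketch, assuming SymPy for the primality test):

```python
# Sketch: for small n, test 38**n + 31 for primality and record which of
# the small primes 3, 5, 7 (if any) divides it, mirroring the cases above.
import sympy as sp

for n in range(1, 13):
    m = 38**n + 31
    small = next((p for p in (3, 5, 7) if m % p == 0), None)
    print(n, sp.isprime(m), small)   # None: no factor among 3, 5, 7
```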
|
v1
|
2023-04-23T09:04:30.108Z
|
{
"paloma_paragraphs": []
}
|
2023-03-29T00:00:00.000Z
|
93a10a9272f06c056d53f23766545870bcd980f2
|
{
"language": "en",
"length": 160,
"provenance": "stackexchange_00001.jsonl.gz:864864",
"question_score": "2",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://math.stackexchange.com/questions/764643"
}
|
redpajama/stackexchange
|
Q: Finding $\tan B$ and $\tan(A+B)$ So I know that
$$
\tan(A+B) = \frac{\tan(A) + \tan(B)}{1 - \tan(A) \tan(B)},
$$
but I don't know how to find $\tan(B)$ for the following problem:
If $\tan A = 2/3$ and $\sin B = 5/\sqrt{41}$ and angles $A$ and $B$ are in Quadrant I, find the value of $\tan(A+B)$.
Thanks in advance for any help.
A: Note that $\cos^2 B = 1 -\sin^2 B$, so you can find $\cos B$. Armed with this and the information in your question, you can find $\tan B$, and finally $\tan(A + B)$ with your identity.
A: $$\cos^2 B = 1-\sin^2B = 1-\frac{25}{41} = \frac{16}{41}$$
$$\implies \cos B = \frac{4}{\sqrt{41}} \quad (\text{taking the positive root, since } B \text{ is in Quadrant I})$$
$$\implies \tan B = \frac54$$
$$\tan(A+B) = \frac{\frac23+\frac54}{1-\frac23\cdot\frac54} = \boxed{\frac{23}{2}}$$
A: First, draw the right triangle containing angle $B$. Then use the Pythagorean theorem to find the length of the adjacent side. Finally, use $\tan B = \text{opposite}/\text{adjacent}$ to find $\tan B$, and substitute that into the addition formula.
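All three answers can be confirmed numerically (a sketch):

```python
# Sketch: recover A and B from the given data and check tan(A + B) = 23/2.
import math

A = math.atan(2 / 3)                 # tan A = 2/3, A in Quadrant I
B = math.asin(5 / math.sqrt(41))     # sin B = 5/sqrt(41), B in Quadrant I
print(math.tan(A + B), 23 / 2)       # both approximately 11.5
```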
|
v1
|