c22f543a341f9fbdce2e6e37acb7e8d81f021fe6
|
Q: Number of $100$-element subsets with sum congruent to $1$ mod $5$ Given the set of first $2015$ natural numbers $\{1,2,3...,2015\}$. How many $100$-element subsets of this set are there such that the sum of the elements of the subset is congruent to $1$ modulo $5$?
A: Consider all the subsets that contain exactly one of $\{1,2,3,4,5\}$ and $99$ other numbers. For each choice of the $99$ other numbers there are five such subsets, of which one has a sum that is $1 \bmod 5$. Similarly, of all the subsets that contain two of $\{1,2,3,4,5\}$, exactly $\frac 2{10}$ have a sum that is $1 \bmod 5$, because there are $10$ two-element subsets of $\{1,2,3,4,5\}$, two of which realize each residue $\bmod 5$. The same argument works for those containing three or four.
For the subsets that contain no elements or all of $\{1,2,3,4,5\}$ we can make the same argument for $\{6,7,8,9,10\}$: exactly $\frac 15$ of the subsets that contain some but not all of that block have sum $1 \bmod 5$. We continue upward. Each subset gets grouped with the $5$ or $10$ subsets that have the same number of elements in the lowest block of $5$ that is partially filled and exactly the same elements elsewhere. Each such group is equally distributed among the residues $\bmod 5$. Once we consider all the blocks of $5$ we have accounted for all the subsets except those that consist of $20$ complete blocks of $5$ numbers. There are $403$ such blocks of $5$ numbers, so there are ${403 \choose 20}$ subsets that consist of complete blocks, all of which have sum $0 \bmod 5$.
We can partition the ${2015 \choose 100}$ subsets into ${403 \choose 20}$ that have sum $0 \bmod 5$ and ${2015 \choose 100}-{403 \choose 20}$ that are equally distributed among the residues. There are therefore $$\frac 15\left({2015 \choose 100}-{403 \choose 20}\right)\approx 4.717\cdot 10^{170}$$ subsets with sum $1 \bmod 5$. The difference from $\frac 15$ of the total number of $100$-element subsets is only of order $10^{33}$, so it is negligible by comparison.
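The block argument is easy to sanity-check by brute force on a smaller instance. Below is a minimal sketch in C++; the scaled-down choice of $\{1,\ldots,15\}$ with subset size $5$, and the predicted count $\bigl({15 \choose 5}-{3 \choose 1}\bigr)/5 = (3003-3)/5 = 600$, are my own analogue, not from the original answer.
#include <bits/stdc++.h>
using namespace std;

// Count 5-element subsets of {1,...,15} whose sum is 1 (mod 5) and compare
// with the block argument's prediction (3003 - 3)/5 = 600.
int main() {
    const int n = 15, k = 5;
    int count = 0;
    for (int mask = 0; mask < (1 << n); mask++) {
        if (__builtin_popcount(mask) != k) continue;
        int sum = 0;
        for (int i = 0; i < n; i++)
            if (mask & (1 << i)) sum += i + 1;   // elements are 1..15
        if (sum % 5 == 1) count++;
    }
    cout << count << endl;   // prints 600
}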
|
3f34d9fe318a83e7d80b12dc951eec5987bc2bb0
|
Q: Transformation matrix in polar coordinates I'm trying to write a software widget that allows the user to resize the component, so I can write a transformation matrix $\mathbf T_\text{xy}$ that will map $(x,y)$ to a transformed $(x',y')$, that is
$$\left(\begin{array}{cc}
x'\\
y'
\end{array}\right)=
\mathbf T_\text{xy}\left(\begin{array}{cc}
x\\
y
\end{array}\right) \tag{1}$$
But in my application it is easier to use polar instead of Cartesian coordinates. I can convert $(r,\theta)$ to $(x,y)$, apply the above transformation to get $(x',y')$, and convert that to $(r', \theta ')$, but I was wondering if there is a way to find a transformation matrix $\mathbf T_{r\theta}$ that transforms directly from $(r,\theta)$ to $(r', \theta ')$ and is equivalent to the transformation equation above. I failed to derive such a relation since there is no linear relation between $(x,y)$ and $(r,\theta)$.
In short my question is to find $\mathbf T_{\text{r}\theta}$ in
$$\left(\begin{array}{cc}
r'\\
\theta'
\end{array}\right)=
\mathbf T_{\text{r}\theta}\left(\begin{array}{cc}
r\\
\theta
\end{array}\right)$$
such that it transforms my points in the same way as eq(1) was doing
Any help is appreciated.
A: There are two different ways you can describe points in the Euclidean plane. Both ways look very similar as they allow you to describe any point by simply a pair of numbers. I'll try to describe the two methods to show you that the representation of a point by $(r,\theta)$ is different from the representation of a point by $(x,y)$. And thus you can't expect them to behave the same way. In particular you can't expect the points $(r,\theta)$ to transform in the same linear fashion as $(x,y)$.
Consider the Euclidean plane $\Bbb E_2$. Choose any two distinct directions and then consider a set $\mathcal V = \{\mathbf {v_1}, \mathbf {v_2}\}$ where each of those vectors is parallel to one of the two directions.
This set is called a basis for the plane because any vector $\mathbf u\in \Bbb E_2$ can be decomposed uniquely into a component parallel to $\mathbf {v_1}$ and a component parallel to $\mathbf {v_2}$: $$\mathbf u = u_1\mathbf {v_1} + u_2\mathbf {v_2}$$
Then the set of numbers $(u_1,u_2)$ can be used to uniquely specify any given point in the plane with respect to the basis $\mathcal V$.
The representation of a point by the pair $(x,y)$ is this type of object. $x$ and $y$ are just the usual names we give to the coordinates of a point in $\Bbb E_2$ when the basis we chose happened to be an orthonormal basis. Denote the orthonormal basis associated with $(x,y)$ as $\mathcal E= \{\mathbf {\hat e_1}, \mathbf {\hat e_2}\}$.
(In this image, the blue vector is $\mathbf {\hat e_1}$ and the orange vector is $\mathbf {\hat e_2}$.)
Then any vector $\mathbf u\in \Bbb E_2$ can be decomposed as $$\mathbf u = x\mathbf {\hat e_1} + y\mathbf {\hat e_2}$$ or it can be more simply represented by the ordered pair $(x,y)$.
Now let's think of another way of representing a point in $\Bbb E_2$. Consider two families of curves that vary smoothly across the plane.
As you can see from the image, we can still describe points in the plane by a pair of numbers. You just need to describe each curve by a particular number -- then for any point of interest you just find the two curves that cross at that point and read off the numbers associated with each.
This is the type of object that $(r,\theta)$ is.
As an exercise, try to locate the unique point on the plot described by $(r,\theta) = (2,105°)$. Do you see that you could also uniquely locate a point that doesn't happen to be the intersection of two of the curves shown in the plot (it's impossible to draw all infinitely many curves), such as $(r,\theta) = (1.5, 62°)$?
The pair $(r,\theta)$ isn't the coordinates of a vector with respect to some basis vectors; it's the pair of numbers describing which particular pair of curves (one from each of the two infinite families) intersect at the point $P$.
So $(x,y)$ and $(r,\theta)$ really are two specific instances of two completely different ways of representing a given point in the plane. As such, hopefully it won't be a big surprise to you when I say that while $(x,y)$ can be transformed linearly (by matrices) to some other $(x',y')$, $(r,\theta)$ cannot.
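As a practical note, the conversion route the asker already found is straightforward to implement. A minimal sketch in C++ (the $2\times$ horizontal stretch and the sample point are made-up illustrations):
#include <bits/stdc++.h>
using namespace std;

int main() {
    double T[2][2] = {{2.0, 0.0}, {0.0, 1.0}};   // example T_xy: stretch x by 2
    double r = 1.5, theta = M_PI / 3;            // example polar input
    double x = r * cos(theta), y = r * sin(theta);       // polar -> Cartesian
    double xp = T[0][0] * x + T[0][1] * y;               // apply T_xy
    double yp = T[1][0] * x + T[1][1] * y;
    double rp = hypot(xp, yp), thetap = atan2(yp, xp);   // back to polar
    printf("r' = %.4f, theta' = %.4f rad\n", rp, thetap);
    // r' depends on theta as well as r, which is exactly why no single
    // matrix acting on (r, theta) can reproduce a general linear map.
}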
|
9875d1b6641f3f7f0e63b7bca1b522c8aa2d296f
|
Q: Determine if the improper integral converges $\sin(1/x)$ I tried comparing it with $\sin(x)$, but $\sin(1/x)$ is $\geq \sin(x)$ there, and the integral of $\sin(x)$ converges, so that doesn't help me
$$\int_0^{1} \sin (\frac{1}x) dx$$
A: It is not an improper integral. $\sin(1/x)$ is continuous and bounded on $(0,1]$, so it is integrable.
A: The function $x\mapsto \sin(1/x)$ is bounded, and like any bounded function, its integral over a bounded set converges (assuming the integral makes sense, but here your function is continuous on $]0,1]$, so both Lebesgue or Riemann's framework would agree here that your function is integrable).
|
bb968fde84dca0b931b9b89895bc21e4b83c1847
|
Q: Show $e^x$ is irrational for rational $x \neq 0$ I want to show that if $x$ is rational and nonzero then $e^x$ is irrational.
Clearly $e^{\frac{r}{s}} = \frac{p}{q} \Rightarrow q^s e^r = p^s$, but this doesn't seem helpful. The usual proof that $e$ is irrational doesn't look like it can be extended either.
A: You are on the right track.
Knowing that $e$ is transcendental, the algebraic equation $q^sz^r-p^s=0$ cannot have $e$ as a root (here $r,s$ can be taken positive, since for negative $x$ one may pass to $e^{-x}=1/e^x$).
|
21c573750e0b276c160e235be01430f98d69b6f3
|
Q: Spans of subsets and union of two sets Is this true or not? How would I prove or disprove this?
If the set of vectors $\{a_1 \dots a_n\}$ spans a subset $S$ and
the set of vectors $\{b_1 \dots b_n\}$ spans a subset $T$, then $\text{Span} \{a_1 \dots a_n, b_1 \dots b_n\} = S \cup T$
A: Hint: Think about vectors in $\Bbb R^2$. Say $n=1$, $a_1=(1,0)$ and $b_1=(0,1)$. What are $S$ and $T$; what is the span of $a_1,b_1$?
|
e9003de98adc1f888a401a15b5cbedad6612b673
|
Q: $\inf_{a\in\mathbb{C}}\|f-a\|_{L^{\infty}(I)}\le|I|\|f'\|_{L^{\infty}(I)}$ Let $I\subset\mathbb{R}$ be an interval of finite length and $f:\mathbb{R}\to\mathbb{C}$ a function that is differentiable on a neighborhood of $I$. I tried to prove:$$
\inf_{a\in\mathbb{C}}\|f-a\|_{L^{\infty}(I)}\le|I|\|f'\|_{L^{\infty}(I)}
$$
If the range of $f$ and $a$ are both in $\mathbb{R}$, then I can prove this inequality. The key observation is that, at the minimum, $f-a$ must equal $0$ at some point in $I$, and so I can write this function as an integral over a subset of $I$ by the fundamental theorem of calculus. But when it comes to $\mathbb{C}$, I don't have as clear a geometric picture as in the $\mathbb{R}$ case. I tried to work on the real and imaginary parts of $f$ separately, but I can't get back $|f'|$
A: Because $f$ is differentiable on a neighborhood of $I$ we can assume that $I = [c,d] $ with $c < d$. Setting $b = f(c)$ we get for every $x \in I$
\begin{align*}
|f(x) - b|
&= \left| \int_{c}^x f'(t) \,\text{d}t \right|
\leq \int_{c}^x |f'(t)| \,\text{d}t
\leq \int_{c}^x \|f'\|_{L^\infty(I)} \,\text{d}t \\
&= \|f'\|_{L^\infty(I)} (x-c)
\leq |I| \|f'\|_{L^\infty(I)}.
\end{align*}
(Here we estimate the distance of $f(c)=b$ and $f(x)$ by the length of the curve described by $f$). Therefore
$$
\|f-b\|_{L^\infty(I)}
= \sup_{x \in I} |f(x)-b|
\leq |I| \|f'\|_{L^\infty(I)}
$$
and thus
$$
\inf_{a \in \mathbb{C}} \|f-a\|_{L^\infty(I)}
\leq \|f-b\|_{L^\infty(I)}
\leq |I| \|f'\|_{L^\infty(I)}.
$$
|
7341ac9ce0ada0708cb82acf1d901d3410c2a7e0
|
Q: I would like to calculate this limit: $ \lim_{n \to \infty}(n^2+1)\cdot(\ln(n^2-4)-2\ln(n)) $ I would like to calculate this limit:
$$ \lim_{n \to \infty}(n^2+1)\cdot(\ln(n^2-4)-2\ln(n)) $$
but I am a bit lost on how to tackle the logarithm. Any help would be greatly appreciated.
A: Maybe it helps to express it this way, so you only have to solve a limit without $\ln(x)$:
$$ \lim_{n \to \infty}(n^2+1)\cdot(\ln(n^2-4)-2\ln(n)) = \lim_{n \to \infty}(n^2+1)\cdot(\ln(n^2-4)-\ln(n^2)) $$ $$ = \lim_{n \to \infty}(n^2+1)\cdot(\ln\big({n^2-4 \over n^2}\big))= \lim_{n \to \infty}\ln\big(\big({n^2-4 \over n^2}\big)^{n^2+1}\big) = \ln\big(\lim_{n \to \infty}\big({n^2-4 \over n^2}\big)^{n^2+1}\big) $$
A: $2\ln n=\ln(n^2),$
$\implies\ln(n^2-4)-2\ln(n)=\ln\left(1-\dfrac4{n^2}\right)$
Setting $1/n=h$
$$\lim_{n \to \infty}(n^2+1)\cdot(\ln(n^2-4)-2\ln(n))=-4\cdot\lim_{h\to0}(1+h^2)\cdot\lim_{h\to0}\dfrac{\ln(1-4h^2)}{-4h^2}=?$$
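For completeness (this closing step is my own addition): since $\lim_{u\to 0}\frac{\ln(1+u)}{u}=1$, taking $u=-4h^2$ shows the last factor tends to $1$, and therefore
$$\lim_{n \to \infty}(n^2+1)\cdot(\ln(n^2-4)-2\ln(n))=-4\cdot 1\cdot 1=-4.$$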
|
4e542e95c96381a76facb32218afeea1c4d6c56f
|
Q: Multi-variable chain rule example Let $p(x,y,z) = q(q(x^2, xy), q(xyz, sin(x^2y^2z^3)))$ where $q$ is a function of 2-variables. Find all partial derivatives.
What I know/tried: Chain rule needs to be applied but the main issue is how the function is written. I have tried to take the derivative w.r.t $x$ but it is a landscape page wide.
Bonus Question: Does $\frac{\partial p}{\partial z}$ exist?
A: Using the comments I think I have figured it out:
Let $a = q(x^2, xy)$ and $b = q(xyz, \sin(x^2y^2z^3))$.
Quickly take the partials w.r.t $x$ now...
$\frac{\partial a}{\partial x} = \frac{\partial q}{\partial x^2}(2x) + \frac{\partial q}{\partial xy}(y)$
and
$\frac{\partial b}{\partial x} = \frac{\partial q}{\partial (xyz)}(yz) + \frac{\partial q}{\partial (\sin(x^2y^2z^3))}(\cos(x^2y^2z^3)(2xy^2z^3))$
So then rewrite and apply chain rule...
$p(x,y,z) = q(a, b)$
$\frac{\partial p}{\partial x} = \frac{\partial q}{\partial a}\frac{\partial a}{\partial x} + \frac{\partial q}{\partial b}\frac{\partial b}{\partial x}$
After this, we substitute the work we did and since $q$ is arbitrary, we leave the ones with $q$.
$\frac{\partial p}{\partial x} = \frac{\partial q}{\partial a}[\frac{\partial q}{\partial x^2}(2x) + \frac{\partial q}{\partial xy}(y)]+ \frac{\partial q}{\partial b}[\frac{\partial q}{\partial (xyz)}(yz) + \frac{\partial q}{\partial (\sin(x^2y^2z^3))}(\cos(x^2y^2z^3)(2xy^2z^3))]$
Rinse and repeat for y and z.
Bonus question: Yes this derivative does exist.
|
095ed8d7c7381869c578c65296d37b1c3c7682f4
|
Q: How many adjunctions are there between (infinite) categories? The title is a little bit imprecise. Consider "typical" categories $\mathcal{A},\mathcal{B}$, let's say infinite and locally small. A pair of adjointable functors $(F,G)$ is a pair of functors $(F : \mathcal{A} \to \mathcal{B}, G : \mathcal{B} \to \mathcal{A})$ such that $F$ is left-adjoint to $G$. We consider $(F,G)$ and $(F',G')$ the same if $F\cong F'$ or $G\cong G'$.
How many pairs of adjointable functors are there between $\mathcal{A}$
and $\mathcal{B}$?
I mean this very broadly: finitely many, infinitely many, possibly a proper class? If one cannot say anything generally, then what about important examples like $\mathcal{A},\mathcal{B}$ being categories of universal algebras or one being the category of small categories?
A: Let $A, B$ both be $\text{Set}$. For any set $X$ there is an adjunction between the functors $(-) \times X$ and $[X, -]$ (here I mean the set of functions from $X$ into some other set). So in this case there is a proper class of adjunctions, even up to isomorphism. I expect that this is typical. (Note that $F \cong F'$ is equivalent to $G \cong G'$.)
More generally in this argument we can replace $A, B$ with a closed monoidal category which is not essentially small. There are many examples, such as $(\text{Ab}, \otimes)$ or, for that matter, $(\text{Cat}, \times)$.
Edit: It's sometimes possible to classify all left adjoints $F : A \to B$. For example, if $A$ is the category $[C^{op}, \text{Set}]$ of presheaves on an essentially small category $C$, then the category of left adjoints $A \to B$ is equivalent to the category of functors $C \to B$. If $B$ itself is the category $[D^{op}, \text{Set}]$ of presheaves on another essentially small category $D$, then this is in turn the category $[C \times D^{op}, \text{Set}]$ of bimodules over $C$ and $D$.
|
78ca4ac6d890a6da553864ffec3de1ac2dab395b
|
Q: Calculate $\lim\limits_{R\rightarrow\infty}\int_0^\pi \cos(R\cos t)dt$ w.o. Bessel function Im calculating a complex path integral to calculate $\int_0^\infty \frac{\sin x}{x}dx$. I was able to evaluate everything except the arc $\int_0^\pi i~\exp(iR~e^{it})dt$ where $R$ is the radius.
I managed to use the $\lim\limits_{R\rightarrow \infty}$ to break the problem down to calculating $\lim\limits_{R\rightarrow \infty}\int_0^\pi i\cos(R \cos(t))dt$.
I know it evaluates to $\pi$ but we haven't introduced the Bessel function, which solves the integral. Is ther any other way to solve it?
A: Not a strict limit, but leading-order behavior in this limit.
Consider
$$I(R) = 2 \int_0^{\pi/2} dt \, e^{i R \cos{t}} $$
We may use the method of stationary phase: as $R \to \infty$, the value of the integral is dominated by the neighborhoods of the points where the derivative of the exponent vanishes, i.e., $\sin{t}=0$, which on $[0,\pi/2]$ means $t=0$. The reason for this is that, at the very high frequencies in this limit, the rest of the integral tends to cancel by the Riemann-Lebesgue lemma. Thus, we expand out to second order about a small neighborhood near this stationary point and get
$$I(R) \approx 2 \int_0^{\epsilon} dt \, e^{i R (1-t^2/2)} $$
We may show that, with small error, we may expand out the outer integration limit to infinity. Thus,
$$I(R) \approx 2 e^{i R} \int_0^{\infty} dt \, e^{-i R t^2/2} = e^{i R} \sqrt{\frac{2\pi}{i R}} = \sqrt{2 \pi} e^{i (R-\pi/4)} R^{-1/2}$$
Noting that we only want the real part of the above expression, we have
$$I(R) \approx \sqrt{\frac{2 \pi}{R}} \cos{\left (R-\frac{\pi}{4} \right )} $$
A: Regarding your first integral, the usual way to do this is to note that
$$ \lvert e^{iR e^{it}} \rvert = \lvert e^{iR (\cos{t}+i\sin{t})} \rvert = e^{-R \sin{t}}, $$
because the absolute value of the exponential comes from the real part of the argument. You also have the inequality $\sin{t}\geqslant 2t/\pi$ for $0<t<\pi/2$, so
$$ \lvert e^{iR e^{it}} \rvert \leqslant e^{-2Rt/\pi}. $$
Now use the integral triangle inequality, $\lvert \int_a^b f \rvert \leqslant \int_a^b \lvert f \rvert $, to show that
$$ \left\lvert \int_0^{\pi/2} e^{iR e^{it}} \, dt \right\lvert \leqslant \int_0^{\pi} e^{-2Rt/\pi} \, dt \to 0 $$
as $R \to \infty$. The absolute value of the integrand is symmetric about $\pi/2$, so this is sufficient to show that the whole integral tends to zero.
|
43845a5d0469a46694c85eaf66826f32dee8b334
|
Q: Proving Ehrenfest Theorem $m\frac{d}{dt}\langle\vec{\hat{x}}\rangle\;=\; \langle\vec{\hat{p}}\rangle$ I'm trying to prove Ehrenfest Theorem:
$$m\frac{d}{dt}\langle \vec{\hat x}\rangle\;=\; \langle\vec {\hat{p}}\rangle$$
We can just consider one component of $\vec x$, say $x$.
$$m\frac{d}{dt}\langle\hat{x}\rangle\; =m\int x\left (\frac{d\rho}{dt}\right )\,d^3\vec x$$
Now I can reduce the result in question to
$$m\frac{d}{dt}\langle\hat{x}\rangle\; =\frac{i\hbar}{2}\int \left( \frac{d\Psi^*}{dx}\Psi-\Psi^*\frac{d\Psi}{dx}\right)\,d^3\vec x$$
Which I understand the right hand side is:
$$\frac{\langle\hat{p_x}\rangle^*-\langle\hat{p_x}\rangle}{2}$$
However I don't understand how this equates to $\langle\hat{p_x}\rangle$?
I can prove for normalizable states the expectation of the momentum operator is real, which would imply that this gives zero?
A: The time-dependent Schrodinger Equation is given by
$$i \hbar\frac{\partial \psi(\vec r, t)}{\partial t}=H\{\psi(\vec r, t)\}$$
where the Hamiltonian operator is
$$H\{\cdot \}=-\frac{\hbar^2}{2m}\nabla^2\{\cdot\}+V(\vec r,t)\{\cdot\}$$
Therefore, we can write
$$\begin{align}
m\frac d{dt}\int_V \psi^*(\vec r, t)\vec r \psi(\vec r, t)\,dV&=\frac{im}{\hbar}\int_V \vec r\left(H\{\psi^*(\vec r, t)\} \psi(\vec r, t)-H\{\psi(\vec r, t)\} \psi^*(\vec r, t)\right)\,dV\\\\
&=-\frac{i\hbar}{2}\int_V \vec r\left(\psi(\vec r, t)\nabla^2\psi^*(\vec r, t)-\psi^*(\vec r, t)\nabla^2\psi(\vec r, t)\right)\,dV\\\\
&=-\frac{i\hbar}{2}\int_V \vec r\left(\nabla \cdot \left(\psi(\vec r, t)\nabla\psi^*(\vec r, t)-\psi^*(\vec r, t)\nabla\psi(\vec r, t)\right)\right)\,dV\\\\
&=-\frac{i\hbar}{2}\oint_S \vec r\left(\psi(\vec r, t)\frac{\partial \psi^*(\vec r, t)}{\partial n}-\psi^*(\vec r, t)\frac{\partial \psi(\vec r, t)}{\partial n}\right)\,dS\\\\
&+\bbox[5px,border:2px solid #C0A000]{\frac{i\hbar}{2}\int_V\left(\psi(\vec r, t)\nabla\psi^*(\vec r, t)-\psi^*(\vec r, t)\nabla\psi(\vec r, t)\right)\,dV}\,\,\dots \text{OP Starting Point}\\\\
&=-\frac{i\hbar}{2}\oint_S \left(\vec r \psi(\vec r, t)\frac{\partial \psi^*(\vec r, t)}{\partial n}-\vec r \psi^*(\vec r, t)\frac{\partial \psi(\vec r, t)}{\partial n} -\psi^*(\vec r, t)\psi(\vec r, t)\right)\,dS\\\\
&-\frac{i\hbar}{2}\int_V 2 \psi^*(\vec r, t)\nabla \psi(\vec r, t)\,dV
\end{align}$$
If we take $V$ to be all of space, then the surface integral vanishes and we are left with
$$\begin{align}
m\frac d{dt}\int_V \psi^*(\vec r, t)\vec r \psi(\vec r, t)\,dV&=\int_V \psi^*(\vec r, t)\left(-i\hbar \nabla \psi(\vec r, t)\right)\,dV\\\\
&=\langle \vec p \rangle
\end{align}$$
where
$$\langle \vec p \rangle=\int_V \psi^*(\vec r, t)\left(-i\hbar \nabla \psi(\vec r, t)\right)\,dV$$
since the momentum operator in the spatial representation is given by
$$\vec P_{op}\{ \psi(\vec r, t)\}=-i\hbar\nabla \psi(\vec r, t)$$
|
4820fb0cec72be99e5d88a9681e8a96ca035f757
|
Q: How to integrate exponential and power function? I am trying to solve the following integral
$$\int_{0}^{\infty}e^{-(ax+bx^c)}\,dx ; ~~~a,b,c>0.$$
I tried using partial fractions but that didn't lead to anything.
Any suggestion?
|
f74a1ca791cdc07420a7257193f7920dc884e9df
|
Q: Prove that $\mathbb{Z}/(p^n)$ is indecomposable Could anyone please give me a hint about how to prove that?
I guess I should show that the direct sum is not a cyclic group and get a contradiction but I'm not sure how to start.
thanks
A: Assuming you're talking about $\mathbb Z$-modules, your guess is right.
Let $M=\mathbb Z/(p^n)$.
If $M = A \oplus B$ is a non-trivial decomposition, then $A$ and $B$ are finite groups of order $p^a$ and $p^b$, with $a+b=n$ and $0< a,b <n$. But then, with $c=\max(a,b) < n$, we get that $p^c$ annihilates $M$, which is not true, since $M$ has an element of order $p^n > p^c$.
|
b4676799abebe781f92c76b5651c3847bff2b874
|
Q: Orthogonal projection onto the eigenspace of compact, self-adjoint operators. Let $T$ be a compact, self-adjoint operator on a separable Hilbert space H. Suppose that $f\in H$, $||f|| =1$ and $||(T-3)f||\leq 1/2$. Let P be the orthogonal projection onto the direct sum of all the eigenspace of T with eigenvalue $\lambda \in [2,4]$. Show that
$||Pf||\geq \frac{\sqrt{3}}{2}$.
I think what I need to do is use spectral theory to find an orthonormal basis of H and then find the projection using that basis.
A: Write $f=Pf+(I-P)f$; the summands are mutually orthogonal. From the assumption and the spectral theorem you get $$1/2\ge\|(T-3I)f\|\ge \|(T-3I)(I-P)f\|\ge\|(I-P)f\|$$ (the last inequality is true because $(I-P)f$ belongs to the spectral subspace of the set $\{x: \; |x-3|>1\}$, which is the complement of the segment $[2, 4]$). Now the claim follows from the identity $\|f\|^2=\|Pf\|^2+\|(I-P)f\|^2$: explicitly, $\|Pf\|^2 = \|f\|^2-\|(I-P)f\|^2 \ge 1-\tfrac14 = \tfrac34$, so $\|Pf\|\ge\frac{\sqrt{3}}{2}$.
|
8858c270421949c572f048ce7c8ce110a22079dd
|
Q: Distribution function of $\sin X$ , where $X$ is a random variable Let $X$ be a random variable with a continuous distribution function $F$. Find expression of the distribution of $\sin X$.
The solution to this problem says that
if $-1 \le y \le 1$,
$P(\sin X \le y)=\sum_{-\infty}^{\infty} P((2n+1)\pi - \sin^{-1}y \le X \le (2n+2)\pi + \sin^{-1}y)$.
I'm having trouble coming up with this particular formulation of the distribution. Can anyone provide me with an insight to come up with this inequality?
A: It's actually quite simple. Visualize the graph of the function $y = \sin \theta$. Now, if you pick a $y = y_0 \in [-1,1]$, that is like drawing a horizontal line at $y = y_0$, and the inequality $\sin \theta \le y_0$ corresponds to choosing those angles $\theta$ for which the points on the curve are at or below this horizontal line.
Now, because the function is periodic with period $2\pi$, it is clear that we ought to restrict our attention to some interval, say $\theta \in (-\pi, \pi]$, and then any solution for $\theta$ to the inequality $\sin \theta \le y_0$ immediately corresponds to an infinite family of solutions $\{\theta + 2\pi n\}_{n = -\infty}^\infty$. This is why the summation appears in the original claim.
Given the above, it should not be too difficult to do the computation and convince yourself of the claim.
|
5e9eb5d788de8700c1109dd72fbfccbfd62688f4
|
Q: convergence of series in inner product space let $V$ be some inner product space and $\lbrace {e_i\rbrace }_{i\in\mathbb{N}} \subset V$ be some countable orthonormal set. I am wondering if for any $x\in V$ the series $$\sum\limits_{i=1}^{\infty} \langle x,e_i \rangle e_i $$
is convergent in $V$?
If $V$ is a Hilbert space, then the series always converges to some element $z\in V$ due to the Bessel inequality. But my proof makes use of the completeness of the space in an essential way. Furthermore, if the series converges to some $z\in V$, then obviously $\langle x,e_i\rangle=\langle z,e_i\rangle $ for all $i$.
I would like to ask you if you know a counterexample or a proof without using the completeness of $V$?
Best wishes
A: In general, this series will not be convergent.
Let's consider the case where $V$ is a subspace of a Hilbert space which is not closed: for the Hilbert space $L^2(0,2\pi)$, we have an orthonormal basis
$$E= \left\{e_n: x \mapsto \frac{1}{\sqrt{2\pi}}e^{inx}\right\}_{ n \in \mathbb Z}.$$
Let $V= \operatorname{span} E,$ i.e. the set of (finite) linear combinations of elements of $E$.
Then $V$ is a vector space of continuous functions which we can equip with the $L^2$ scalar product, and $E$ is an orthonormal set in $(V,\langle\cdot,\cdot\rangle)$.
However, since $E$ is a Hilbert Basis for $L^2(0,2\pi)$, we can express any element of $L^2(0,2\pi)$ as a series above, in particular discontinuous functions like the characteristic function $\chi_{(0,1)}$. Then we have
$$\chi_{(0,1)}= \sum_{i=1}^\infty\langle\chi_{(0,1)},e_i\rangle e_i,$$
a series which converges in $L^2(0,2\pi)$, but its limit is clearly not an element of $V$, so it cannot converge in $V$ (even though it is Cauchy).
|
973d65c607d492e40a975e9314d7c59ea5d81784
|
Q: Did Ackermann produce a finitary consistency proof of second-order $PRA$? In Wilhelm Ackermann's Doctoral Thesis (it is claimed, by Richard Zach, for one, in his paper "The Practice of Finitism: Epsilon Calculus and Consistency Proofs in Hilbert's Program", arXiv: math/0102189v1 [mathLO] 24 Dec. 2001) there is a finitary proof of second-order $PRA$ (Primitive Recursive Arithmetic--Note also Zach states that Hilbert gave a finitary proof of $PRA$ in his lectures of 1921-22 and 1922-23). Is this true, and did Hilbert and Bernays recognize it as finitary? Also, is there an English translation of his thesis? Third, could someone give a short, precise synopsis of Hilbert's proof of the consistency of $PRA$? That would be very much appreciated.
|
60e4bf1f1963f9b3992e45d7370bb1a1181c49fc
|
Q: Nested absolute-value inequality I try to solve a problem in two ways, but the results are not the same.
Method 1.
$$\lvert \lvert x \rvert + x \rvert \le 2$$
For $x < 0$, we have $\lvert x \rvert = -x$. Therefore:
$$\lvert -x+x \rvert \le 2$$
which is always true.
For $x \ge 0$, we have $\lvert x \rvert = x$. Therefore:
$$
\begin{align}
\lvert x + x \rvert & \le 2 \\
x & \le 1
\end{align}
$$
Combining both answers, we get $x \le 1$.
Method 2.
$$\lvert \lvert x \rvert + x \rvert \le 2$$
Squaring both sides, we get:
$$
\begin{align}
(\lvert x \rvert + x)^2 & \le 4 \\
\lvert x \rvert^2 + 2x\lvert x \rvert + x^2 & \le 4 \\
2x^2 + 2x\lvert x \rvert & \le 4 \\
x\lvert x \rvert & \le 2-x^2 \\
\end{align}
$$
Squaring both sides again:
$$
\begin{align}
x^2 \lvert x \rvert^2 & \le 4 - 4x^2 + x^4 \\
x^4 & \le 4 - 4x^2 + x^4 \\
0 & \le 4 - 4x^2 \\
x^2 & \le 1 \\
-1 & \le x \le 1
\end{align}
$$
The answers are different, but I believe the first one is correct because if we substitute $x=-10$, for instance, we get the correct result. Where did I go wrong?
A: In method 2, the second time you square both sides, you have no guarantee that the sides you square will both be positive. Therefore the squaring will not necessarily preserve the inequality.
A: Solution $1$ is correct.
Solution $2$ is incorrect, the mistake is in the step:
$x|x|\leq 2-x^2 \iff x^2|x|^2\leq 4-4x^2+x^4 $.
Clearly the left-hand inequality holds for every negative value of $x$, while the right-hand one fails for $x < -1$.
A: Squaring cannot eliminate nested absolute inequality.
\begin{align}
(|x| + x)^2 \\
|x| + 2x|x| + x^2 \\
2x^2 + 2x|x|
\end{align}
And if you move $x$ to the other side of inequality, you should make it sign before squaring it in order to maintain the inequality.
or instead of squaring it again:
For $x \lt 0$
\begin{align}
x(-x) &\le 2 - x^2 \\
-x^2 &\le 2 - x^2 \\
0 &\le 2 \tag{always true for $x \lt 0$}
\end{align}
For $x \ge 0$
\begin{align}
x(x) &\le 2 - x^2 \\
x^2 &\le 2 - x^2 \\
2x^2 &\le 2 \\
x^2 &\le 1 \\
-1 \le x &\le 1
\end{align}
Thus, combining both answers, $x \le 1$.
Squaring also eliminates the corner in the graph.
See graphic for $|x + |x||$
and compare with graphic for $(|x + |x||)^2$
|
5a322e704cda16f960913ad4a38dd16492119297
|
Q: Why does P-Integrability imply Q-Integrability for function and mapping Q Let $f : Ω → R_+$ be a measurable function satisfying
$\int_\Omega f(ω)P(dω) = 1$
Define a mapping by
$Q: A → R_+ $
$Q(A) = \int_\Omega 1_A (ω)f(ω)P(dω)$
1) Let $h: Ω → R_+$ be a non-negative, measurable function. Show that if
h(·)f(·) is P-integrable, then h is Q-integrable.
2) Let h: Ω → R be a measurable function. Show by using part (1) that if
h(·)f(·) is P-integrable then h is Q-integrable and satisfies
$\int_\Omega h(ω)Q(dω)=\int_\Omega h(ω)f(ω)P(dω)$
My thoughts:
Let $h$ be a simple function of the form $\sum_{i=1}^n a_i 1_{A_i}$ with the $A_i$ disjoint. Then:
$\int\sum_{i=1}^n a_i 1_{A_i}(ω)f(ω)P(dω) < \infty$ since $h(\cdot)f(\cdot)$ is P-integrable,
so $\int\sum_{i=1}^n a_i 1_{A_i}(ω)f(ω)P(dω) = \sum_{i=1}^n a_i Q(A_i)$ by the definition of $Q$. Hence $h$ is Q-integrable. The general case is then attained by taking limits and using the Monotone Convergence Theorem.
I'm not sure if this is the correct procedure and I'm not sure how to approach part 2. Any help or advice would be much appreciated. Thanks.
|
6827510a9d6aef79f5efafc2cd23070393e8feae
|
Q: Permutations of variable size using the letters of a word I have a word, for example SANTACLAUSE, and want to calculate all permutations of variable length, where each letter can only be used as often as it is used in SANTACLAUSE.
For length = 11 it's $\frac{11!}{3!2!}$.
For length = 0 it's 1; for length = 1 it's 11 - 3 = 8.
But I have no idea how to get a general formula for length n.
I am currently brute forcing it with python to get a feel for the number, but it takes a while and it gets big...
I thought about first selecting the n elements we use from all elements and then permuting the selected elements, that would be $\binom{11}{n} \cdot n!$, but I would get duplicates and I don't know how I can eliminate them...
A: One way (again, not much fun) is to sum up, for $k$ from $1$ to $11$, $k!$ times the coefficient of $x^k$ in
$$(1+x)^6\left(1+x+\frac{x^2}{2!}\right)\left(1+x+\frac{x^2}{2!}+\frac{x^3}{3!}\right)$$
(one factor $(1+x)$ per letter occurring once, and truncated exponential factors for S and A, which occur $2$ and $3$ times).
The method has been explained in another answer here
A: What you want is the multinomial coefficient. Specifically, it describes the number of permutations on a set with repeated elements.
A: In c++
#include <bits/stdc++.h>
using namespace std;

vector<int> W(26);   // multiplicity of each letter in the input word
vector<int> V(26);   // current sub-multiset: 0 <= V[i] <= W[i]
long long F[20];     // factorials 0! .. 19!
string S;

// Advance V to the next sub-multiset, odometer-style; returns 0 once
// every combination of multiplicities has been enumerated.
int push(){
    for(int i=0;i<26;i++){
        if(V[i]<W[i]){
            V[i]++;
            for(int j=i-1;j>=0;j--) V[j]=0;
            return 1;
        }
    }
    return 0;
}

// Number of distinct arrangements of the multiset described by V:
// the multinomial coefficient (sum V)! / prod V[i]!.
long long multi(){
    long long res=0;
    for(int i=0;i<26;i++) res+=V[i];
    res=F[res];
    for(int i=0;i<26;i++) res/=F[V[i]];
    return res;
}

int main(){
    F[0]=1;
    long long res=0;
    for(int i=1;i<20;i++) F[i]=F[i-1]*i;
    cin >> S;
    for(char c : S) W[c-'A']++;    // count letter multiplicities
    while(push()) res+=multi();    // sum over all nonempty sub-multisets
    cout << res << endl;
}
All this does is try all the possible multiplicities for each letter, and count each one with multinomial coefficients. To use it you must enter the word in capital letters. Also, make sure the word does not include any one letter more than $19$ times.
For "SANTACLAUSE" the answer seems to be $9,392,913$.
|
dcdd4b59732b2a21795db678e13453227d385cee
|
Q: Euler's criterion An integer $n$ is a square modulo $p$ if there exists another integer $ x$ such that n $≡$ $x^2$ (mod $p$).
Theorem 1 (Euler’s Criterion). :
$1$. If $n$ is a square modulo $p$ then $n^{p-1\over2}$ $≡$ $1$ (mod $p$).
$2$. If $n$ is not a square modulo $p$ then $n^{p-1\over2}$ $≡$ $−1$ (mod $p$).
Assume that $p$ $≡$ $3$ (mod $4$) and $n$ $≡$ $x^2$ (mod $p$). Given $n$ and $p$, find one possible value of $x$. use Euler’s Criterion
A: If $n$ is a square modulo $p$ and $p$ does not divide $n$, then $n^{\frac{p-1}{2}}\equiv 1\ (\ mod\ p\ )$.
So, we have $n^{\frac{p+1}{2}}\equiv n\ (\ mod\ p\ )$
Since $\frac{p+1}{2}$ is even, $n^{\frac{p+1}{4}}$ is a solution for $x^2\equiv n\ (\ mod\ p\ )$
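A quick computational illustration of this recipe (the modulus $p=23$ and the square $n=13$ are my own test values, and `power_mod` is a hypothetical helper implementing standard binary exponentiation):
#include <bits/stdc++.h>
using namespace std;

// Binary (square-and-multiply) modular exponentiation.
long long power_mod(long long base, long long exp, long long mod){
    long long result = 1 % mod;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = result * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return result;
}

int main(){
    long long p = 23, n = 13;   // p = 3 (mod 4); 13 = 6^2 (mod 23) is a square
    long long x = power_mod(n, (p + 1) / 4, p);
    // x = 6, and 6^2 = 36 = 13 (mod 23), as Euler's criterion predicts
    cout << x << "^2 mod " << p << " = " << x * x % p << endl;
}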
|
5f78046507c33f67bd66e5b960010d2967beeb0f
|
Q: Proof that $[\alpha, \beta]$ is closed in $\mathbb{R}$ I need to prove that $[\alpha, \beta]$ is closed in $\mathbb{R}$ for $\forall \alpha, \beta \in \mathbb{R}$. I think I almost completed the proof, but got stuck at the last step. Some hint(s) would be much appreciated.
Proof:
We need to show that $[\alpha, \beta]^c=(-\infty, \alpha)\bigcup(\beta, \infty)$ is open. Let $x\in [\alpha, \beta]^c$, $\varepsilon_1=\lvert x-\beta\rvert$, $\varepsilon_2=\rvert x-\alpha\rvert$. Pick a $y\in B_\varepsilon(x)$, so that $\rvert y-x\rvert<\min\{\varepsilon_1, \varepsilon_2\}$. We need to show that $\rvert y- \frac{\alpha + \beta}{2}\rvert > \rvert\beta-\frac{\alpha + \beta}{2}\rvert=\rvert \frac{\beta - \alpha}{2} \rvert$.
$\rvert y-\frac{\alpha + \beta}{2}\rvert = \rvert x-\frac{\alpha + \beta}{2}-x+y\rvert \ge \rvert x- \frac{\alpha + \beta}{2}\rvert-\rvert x -y \rvert = \rvert x - \frac{\alpha + \beta}{2}\rvert-\rvert y-x\rvert > \rvert x-\frac{\alpha + \beta}{2}\rvert-\min\{\varepsilon_1, \varepsilon_2\}=\rvert x-\frac{\alpha + \beta}{2}\rvert-\rvert x-\beta \rvert$
or $\rvert x-\frac{\alpha + \beta}{2}\rvert-\rvert x-\alpha \rvert$
$\rvert x - \frac{\alpha + \beta}{2}\rvert-\rvert x-\beta\rvert \ge \rvert x \rvert - \rvert \frac{\alpha + \beta}{2}\rvert-\rvert x\rvert+\rvert\beta\rvert=\rvert\beta\rvert-\rvert \frac{\alpha + \beta}{2}\rvert...$
And that's where I get stuck.
A: You can prove this without calculating so much, by noting that
$$ B_\epsilon(x) = \{ y \in \mathbb R \, | \, \vert x - y \vert < \epsilon \} = (x - \epsilon, x + \epsilon ) \; .$$
So let $\alpha < \beta$. We have $[\alpha, \beta]^{\mathrm c} = (-\infty, \alpha) \cup (\beta, \infty)$. Let $x \in [\alpha, \beta]^{\mathrm c}$. We need to show that there is an $\epsilon > 0$ such that $B_\epsilon(x) \subset [\alpha, \beta]^{\mathrm c}$.
Let's assume that $x \in (-\infty, \alpha)$. We can choose an $\epsilon > 0$ with $0 < \epsilon < (\alpha - x)$. Then we have
$$B_\epsilon(x) = (x-\epsilon, x + \epsilon) \subset (x - \epsilon, \alpha) \subset (-\infty, \alpha) \subset [\alpha, \beta]^{\mathrm c} \; .$$
You can do basically the same thing if $x \in (\beta, \infty)$.
|
b78fd1b3b01fbd2e3ef299d7812f850c465576a9
|
Q: Why is there only a complex conjugate, but no real conjugate? In mathematics one often uses the complex conjugate
$$
\Bbb C\to\Bbb C,\quad z=a+b\cdot\mathrm{i}\;\;\mapsto\;\; \bar z=a-b\cdot\mathrm{i}
$$
This is often described as a reflection along the real axis.
But in analogy one could also define a real conjugate
$$
\Bbb C\to\Bbb C,\quad z=a+b\cdot\mathrm{i}\;\;\mapsto\;\; \tilde z=-a+b\cdot\mathrm{i}
$$
This would be a reflection along the imaginary axis.
However real conjugation is never used. Why is it that complex conjugation is so useful, but real conjugation is not?
A: One definition of conjugate arises from the factoring of
$a^2 - b^2$ into $(a + b)(a - b)$.
But that does not answer the question of why we can obtain the complex conjugate of a complex number only by negating the imaginary part
and never by negating the real part.
(For that matter, it does not explain why we separate the number into
real and imaginary parts in order to obtain a conjugate in the first place.)
But there is another, somewhat different notion of conjugate.
Quoting one writer:
Two elements $\alpha, \beta$ of a field $K$, which is an extension field of a field $F$, are called conjugate (over $F$) if they are both algebraic over $F$ and have the same minimal polynomial.
(Barile, Margherita. "Conjugate Elements." From MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein. http://mathworld.wolfram.com/ConjugateElements.html)
If we take $K$ to be the complex numbers, and $F$ to be the real numbers,
then we can verify that $a + bi$ and $a - bi$ ($a, b$ both real)
are the two roots of a certain polynomial in $z$,
specifically, the solutions for $z$ in the equation
$$ z^2 - 2az + a^2 + b^2 = 0, $$
in which the left-hand side is a polynomial
over the real numbers (that is, over $F$).
That is, $a + bi$ and $a - bi$ fit perfectly the definition of
conjugate elements of $\mathbb C$,
viewed as an extension field of $\mathbb R$.
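A quick expansion confirms this (added here for completeness):
$$\bigl(z-(a+bi)\bigr)\bigl(z-(a-bi)\bigr) = \bigl((z-a)-bi\bigr)\bigl((z-a)+bi\bigr) = (z-a)^2+b^2 = z^2-2az+a^2+b^2.$$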
What about $a + bi$ and $-a + bi$? These are solutions of the
equation $(z - a - bi)(z + a - bi) = 0$; multiplying out the
left-hand side, we see that $a + bi$ and $-a + bi$
are solutions for $z$ in
$$ z^2 - (2bi)z - a^2 - b^2 = 0. $$
The coefficients of the polynomial on the left-hand side are not
all real numbers unless $b = 0$, so it seems that $-a + bi$ cannot be
a conjugate of $a + bi$ in the same interesting way that $a - bi$ can.
Historically, complex numbers arose in the process of trying to
solve polynomials with real-valued coefficients.
Eventually, people decided that complex numbers were actually acceptable
roots of such a polynomial.
When you have a polynomial with real coefficients, it always has
a factorization into polynomials of the form $(ax + b)$
or $(ax^2 + bx + c)$, where all the coefficients $a, b, c$ are real numbers.
Of course $(ax + b)$ has just one real root (and no other roots),
but the roots of an irreducible quadratic factor $(ax^2 + bx + c)$, one with $b^2 - 4ac < 0$,
are precisely a pair of complex conjugates.
A: The complex conjugate is an automorphism that maps $\mathbb C \to \mathbb C$. There are no non-trivial automorphisms $\mathbb R \to \mathbb R$.
Edit: $\mathbb C$ can be defined as the splitting field (https://en.wikipedia.org/wiki/Splitting_field) of $X^2+1$ over $\mathbb R$, i.e. the quotient $\mathbb R[X]/(X^2+1)$. The roots of $X^2+1=0$ are symmetric, so $X=i$ or $X=-i$ is an arbitrary choice.
Edit2: an automorphism $\alpha$ preserves addition and multiplication, so that for every $a,b$ it is true that $\alpha(a+b)=\alpha(a)+\alpha(b)$ and $\alpha(a\cdot b)=\alpha(a)\cdot\alpha(b)$. This does not hold for the OP's proposed "real conjugate".
Edit3: for the proposed conjugate ($\Bbb C\to\Bbb C,\quad z=a+b\cdot\mathrm{i}\;\;\mapsto\;\; \tilde z=-a+b\cdot\mathrm{i}$), this would result into $i=\tilde{i}=\widetilde{1\cdot i}=\tilde{1}\cdot\tilde{i}=-1\cdot i=-i$ when assuming it would work as an automorphism (or at least as an homomorphism). This gives a contradiction.
|
6c1c9f12a0b257e21bb880927c3785e1ae29ee29
|
Q: Evaluate $\sum_{p=1}^{32}(3p+2)\Bigg(\sum_{q=1}^{10}\bigg(\sin\frac{2q\pi}{11}-i\cos\frac{2q\pi}{11}\bigg)\Bigg)^{p} $ Evaluate $$\sum_{p=1}^{32}(3p+2)\Bigg(\sum_{q=1}^{10}\bigg(\sin\frac{2q\pi}{11}-i\cos\frac{2q\pi}{11}\bigg)\Bigg)^{p} $$
A: This is $$\sum_{p=1}^{32}(3p+2)\Bigg(\sum_{q=1}^{10}(-i)\bigg(\cos\frac{2q\pi}{11}+i\sin\frac{2q\pi}{11}\bigg)\Bigg)^{p} $$ Here $$\sum_{q=1}^{10}(-i)\bigg(\cos\frac{2q\pi}{11}+i\sin\frac{2q\pi}{11}\bigg) $$
is $i$ and hence the sum $$\sum_{p=1}^{32}(3p+2)i^p $$ is evaluated by splitting into factors of $i,-1,-i,1$ as $48(1-i)$.
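To spell out that last splitting (my own elaboration): grouping the terms of $\sum_{p=1}^{32}(3p+2)i^p$ in blocks of four consecutive values of $p$, each block $p=4k+1,\ldots,4k+4$ contributes
$$(12k+5)i-(12k+8)-(12k+11)i+(12k+14) = 6-6i,$$
and there are eight such blocks, giving $8(6-6i)=48(1-i)$.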
|
0e07a196a84aedbcbcf0e2cd6850d0e4fc6f4fa8
|
Q: Prob. 4(a), Sec. 13 in Munkres' TOPOLOGY, 2nd ed: Any necessary and sufficient conditions for a union of topologies to be a topology? Let $X$ be a non-empty set, let $$\left\{ \ \mathscr{T}_\alpha \ \colon \ \alpha \in J \ \right\}$$ be a non-empty collection of topologies on $X$, and let $$ \mathscr{T} \colon= \bigcup_{\alpha\in J} \mathscr{T}_\alpha.$$
Then $\mathscr{T}$ may or may not be a topology on $X$.
What is (are) the necessary and sufficient condition(s) for $\mathscr{T}$ to be a topology on $X$?
As a sufficient condition, I can think of the following:
For any $\alpha$ and $\beta \in J$ such that $\alpha \neq \beta$, we must have either $\mathscr{T}_\alpha \subset \mathscr{T}_\beta$ or $\mathscr{T}_\beta \subset \mathscr{T}_\alpha$.
Am I right?
Is the above condition necessary also?
Can we formulate other conditions that are either necessary or sufficient, or both for $\mathscr{T}$ to be a topology?
Can we come up with any condition(s) on the set $X$ that turn out to be either necessary or sufficient, or both, for $\mathscr{T}$ to be a topology?
A: Since you asked me to answer here: http://dbfin.com/topology/munkres/chapter-2/section-13-basis-for-a-topology/problem-5-solution/#comment-2456653263.
Your sufficient condition is not in fact sufficient
Indeed, let $X=\mathbb{R}$, and consider the topologies $\mathcal{T}_n=\{\emptyset,X,I_1,\ldots,I_n\}$, where $I_k=(-1,1-\tfrac{1}{k})$. Each $\mathcal{T}_n$ is clearly a topology on $\mathbb{R}$ (the $I_k$ form an increasing chain), but their union contains every $I_n$, yet not their countable union $\cup_n I_n = (-1,1)$.
A general (not very useful) necessary and sufficient condition
So, given an indexed family of topologies $\{\mathcal{T}_\alpha\}_{\alpha\in J}$ on $X$, the question is, when $\mathcal{T}=\cup_{\alpha\in J} \mathcal{T}_\alpha$ is a topology. Let us check all conditions.
* The empty set and $X$ are in $\mathcal{T}$ because they are in each topology.
* Given $U_\beta\in\mathcal{T}$, $\beta\in K$, we need $\cup_{\beta\in K} U_\beta\in \mathcal{T}$.
For every $\alpha\in J$, let $S(\alpha)$ be the set of indexes $\beta$ such that $U_\beta\in \mathcal{T}_\alpha$. Then, $\cup_{\beta\in K}U_\beta = \cup_{\alpha\in J}\cup_{\beta\in S(\alpha)}U_\beta$, but $\cup_{\beta\in S(\alpha)}U_\beta\in \mathcal{T}_\alpha$.
* Similarly for (finite) intersections, $\cap_{n=1}^N U_n = \cap_{\alpha\in J}\cap_{\beta\in S(\alpha)}U_\beta$, but $\cap_{\beta\in S(\alpha)}U_\beta\in \mathcal{T}_\alpha$ because $\cup_\alpha S(\alpha)=\{1,\ldots,N\}$ is finite.
Therefore, we conclude that a sufficient condition is as follows:
arbitrary unions and finite intersections of sets
from pairwise different topologies must belong to
some topologies in the family.
This condition is also, clearly, necessary.
This condition is not very useful, but about to be the best one can find in general. It can also be useful in specific cases, such as those below.
A specific trivial case
If $X$ has two elements, then the union of an arbitrary family of topologies on $X$ is a topology on $X$.
Finite case
Suppose now $X$ is arbitrary, but the family of topologies contains a finite number of topologies, $\mathcal{T}_n$, $1\le n\le N$. Then, we have the following criterion: $\mathcal{T}=\cup_n\mathcal{T}_n$ is a topology iff for every $U\in\mathcal{T}_i$ and $V\in\mathcal{T}_j$, $U\cup V$ and $U\cap V$ belong to some $\mathcal{T}_k$ and $\mathcal{T}_{k'}$, respectively.
Note, that this condition holds for the above example of topologies on $\mathbb{R}$, even though their union is not a topology. This is, of course, because this criterion is for a finite number of topologies only, and there we have a countable number of topologies.
A: Simply put, a "necessary and sufficient condition for" some statement $\;P\;$ is a statement that is equivalent to $\;P\;$. And presumably that equivalent statement is simpler, or more useful, or more explicit, than $\;P\;$.
For this specific question, let's try the calculational approach.$
\newcommand{\calc}{\begin{align} \quad &}
\newcommand{\op}[1]{\\ #1 \quad & \quad \unicode{x201c}}
\newcommand{\hints}[1]{\mbox{#1} \\ \quad & \quad \phantom{\unicode{x201c}} }
\newcommand{\hint}[1]{\mbox{#1} \unicode{x201d} \\ \quad & }
\newcommand{\endcalc}{\end{align}}
\newcommand{\ref}[1]{\text{(#1)}}
\newcommand{\then}{\Rightarrow}
\newcommand{\when}{\Leftarrow}
\newcommand{\true}{\text{true}}
\newcommand{\false}{\text{false}}
\newcommand{\a}{\alpha}
\newcommand{\b}{\beta}
\newcommand{\c}{\gamma}
\newcommand{\V}{\mathcal V}
\newcommand{\W}{\mathcal W}
\newcommand{\T}[1]{{\mathcal T}_{#1}}
\newcommand{\istop}[2]{#1\text{ is a topology on }#2}
$
We are given a non-empty set $\;X\;$, and a non-empty set $\;J\;$ which is used as an 'index set'. To avoid visual noise, we will implicitly assume that $\;\a,\b,\c \in J\;$. We are given a family $\;\T{-}\;$ where for every $\;\alpha\;$, $\;\istop{\T{\a}}{X}\;$.
The question is now: find an equivalent for the statement
$$
\tag{0}
\istop{\langle \cup \a :: \T{\a} \rangle}{X}
$$
Let's also make explicit the definition of a topology:
\begin{align}
\tag{1}
& \istop{\V}{X} \;\equiv\;
\\ & \qquad
\tag{1a}
X \in \V
\\ & \qquad \land\;
\tag{1b}
\langle \forall A,B : A \in \V \land B \in \V : A \cap B \in \V \rangle
\\ & \qquad \land\;
\tag{1c}
\langle \forall \W : W \subseteq \V : \bigcup \W \in \V \rangle
\end{align}
Now, let's try to calculate an equivalent for $\ref{0}$.
Using definition $\ref{1}$ we can write $\ref{0}$ as $\ref{0a} \land \ref{0b} \land \ref{0c}$, and we will rewrite those three parts in turn.
First part $\ref{1a}$ of the definition:
$$\calc
\tag{0a}
X \in \langle \cup \a :: \T{\a} \rangle
\op=\hint{definition of $\;\cup\;$}
\langle \exists \a :: X \in \T{\a} \rangle
\op=\hint{each $\;\T{\a}\;$ is a topology, using $\ref{1a}$}
\langle \exists \a :: \true \rangle
\op=\hint{the index set $\;J\;$ is not empty}
\true
\tag{A}
\endcalc$$
So this part is always satisfied.
For part $\ref{1b}$ we get:
$$\calc
\tag{0b}
\langle \forall A,B : A \in \langle \cup \a :: \T{\a} \rangle \land B \in \langle \cup \a :: \T{\a} \rangle : A \cap B \in \langle \cup \a :: \T{\a} \rangle \rangle
\op=\hint{definition of $\;\cup\;$, three times; rename dummies}
\langle \forall A,B : \langle \exists \a :: A \in \T{\a} \rangle \land \langle \exists \b :: B \in \T{\b} \rangle : \langle \exists \c :: A \cap B \in \T{\c} \rangle \rangle
\op=\hint{logic: $\;\langle \forall x : \langle \exists y :: P \rangle : Q \rangle\;$ is equivalent to $\;\langle \forall x,y : P : Q \rangle\;$, twice}
\langle \forall A,B,\a,\b : A \in \T{\a} \land B \in \T{\b} : \langle \exists \c :: A \cap B \in \T{\c} \rangle \rangle
\tag{B}
\endcalc$$
In words: For any two open sets in any of the topologies (from the family), their intersection is in some topology (in the family).
Finally part $\ref{1c}$, where we can do even less:
$$\calc
\tag{0c}
\langle \forall \W : \W \subseteq \langle \cup \a :: \T{\a} \rangle : \bigcup \W \in \langle \cup \a :: \T{\a} \rangle \rangle
\op=\hint{definition of $\;\subseteq$; definition of $\;\cup\;$, twice}
\langle \forall \W : \langle \forall Z : Z \in \W : \langle \exists \a :: Z \in \T{\a} \rangle \rangle : \langle \exists \a :: \bigcup \W \in \T{\a} \rangle \rangle
\tag{C}
\endcalc$$
In words: For any collection of sets, where each is open in some topology (in the family), the union of those sets is in some topology (in the family).
Combining the above, we've proven that $\ref{B} \land \ref{C}$ is an equivalent, and more explicit, version of $\ref{0}$.
Without more knowledge about the family of topologies $\;\T{-}\;$, it is not possible to go any further.
The only thing one could still do is to rewrite the last line of part $\ref{0c}$ above, as follows:
$$\calc
\tag{C}
\langle \forall \W : \langle \forall Z : Z \in \W : \langle \exists \a :: Z \in \T{\a} \rangle \rangle : \langle \exists \a :: \bigcup \W \in \T{\a} \rangle \rangle
\op=\hints{set theory: introduce a choice function}
\hint{-- to convert the nested $\;\langle \forall \ldots \rangle\;$ to a $\;\langle \exists \ldots \rangle\;$}
\langle \forall \W : \langle \exists f : f \in \W \to J : \langle \forall Z : Z \in \W : Z \in \T{f(Z)} \rangle \rangle : \langle \exists \a :: \bigcup \W \in \T{\a} \rangle \rangle
\op=\hint{logic: $\;\langle \forall x : \langle \exists y :: P \rangle : Q \rangle\;$ is equivalent to $\;\langle \forall x,y : P : Q \rangle\;$}
\langle \forall \W, f : f \in \W \to J \;\land\; \langle \forall Z : Z \in \W : Z \in \T{f(Z)} \rangle : \langle \exists \a :: \bigcup \W \in \T{\a} \rangle \rangle
\endcalc$$
But that does not seem useful in most contexts...
|
b04925191734ae2f038bc0ebbf010fe947a23c4f
|
Q: Finding a given group in groups twice as large. Given a group $H$ with order $n$, can we determine how many groups $G$ of order $2n$ contain $H$ as a subgroup, and perhaps find these groups? For example, $\mathbb{Z}_4$ is contained in $\mathbb{Z}_8$, $\mathbb{Z}_4 \times \mathbb{Z}_2$, $D_4$, and $Q_8$.
I'm curious if we can find a constant upper bound (or upper bound related to $n$) on the number of groups $G$ that satisfy my constraints for any $H$. I'm not very familiar with group theory, so methods to approach this might be over my head. I'm particularly interested in the case where we only consider cyclic $H$. Feel free to generalize! Thanks.
A: Subgroups of index $2$ are normal, so equivalently you want to classify short exact sequences
$$1 \to H \to G \to \mathbb{Z}_2 \to 1.$$
In other words, you want to classify extensions of $\mathbb{Z}_2$ by $H$. If $H$ has odd order then such a short exact sequence must split (this generalizes to the Schur-Zassenhaus theorem, but in this case we can just appeal to Cauchy's theorem), so $G$ must be a semidirect product
$$G \cong H \rtimes \mathbb{Z}_2$$
and now it suffices to classify actions of $\mathbb{Z}_2$ on $H$. If $H = \mathbb{Z}_n$ where $n$ is odd then write $n = \prod p_i^{e_i}$ where the $p_i$ are odd primes and $k$ primes appear. Then
$$H \cong \prod_i \mathbb{Z}_{p_i^{e_i}}$$
so there are $2^k$ actions of $\mathbb{Z}_2$ on $H$, given by acting by $\pm 1$ on each factor. The corresponding extensions take the form
$$G \cong \mathbb{Z}_a \times D_b$$
where $D_b$ is the dihedral group $\mathbb{Z}_b \rtimes \mathbb{Z}_2$ and $ab = n$.
If $H$ has even order then the answer is more complicated and involves group cohomology. To give some indication of how complicated the answer must be, every finite $2$-group is an iterated extension of copies of $\mathbb{Z}_2$. There are more than 49 billion groups of order $1024 = 2^{10}$, and they make up almost all groups of order less than $2000$. By contrast, there are 10 million groups of order $512$. This means at least one group of order $512$ is a subgroup of at least $5000$ groups of order $1024$. In general, it's known that the number of groups of order $2^n$ is asymptotically
$$2^{\frac{2}{27} n^3 + O(n^{8/3})}$$
so at least one group of order $2^n$ is a subgroup of somewhere around $2^{\frac{2}{9} n^2}$ groups of order $2^{n+1}$; note that this is faster than polynomial growth in the order.
If $H = \mathbb{Z}_n$ where $n$ may be even then this is how the classification begins. First, you still need to classify all actions of $\mathbb{Z}_2$ on $\mathbb{Z}_n$. I'll leave this to you as a nice exercise to figure out how to handle the powers of $2$ dividing $n$. Second, fixing such an action, you need to compute the cohomology group
$$H^2(\mathbb{Z}_2, \mathbb{Z}_n)$$
(which depends on the action; unfortunately this is suppressed by the notation). There is one extension for every pair of an action and a class in this cohomology group. If the class vanishes, then the extension is a semidirect product, but in general it won't.
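As a small worked illustration of that last recipe (my own addition, using the standard formula for the cohomology of a cyclic group in even positive degrees, $H^2(\mathbb{Z}_2, M) \cong M^{\mathbb{Z}_2}/(1+\sigma)M$ for $\sigma$ the generator of $\mathbb{Z}_2$): for the trivial action on $M=\mathbb{Z}_n$ with $n$ even,
$$H^2(\mathbb{Z}_2, \mathbb{Z}_n) \cong \mathbb{Z}_n/2\mathbb{Z}_n \cong \mathbb{Z}_2,$$
so each trivial action carries two extensions. For $n=4$ these are $\mathbb{Z}_4 \times \mathbb{Z}_2$ (the split one) and $\mathbb{Z}_8$ (the non-split one), two of the four groups listed in the question; the other two, $D_4$ and $Q_8$, arise from the inversion action.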
|
fb2dbf8b286761a88e81a22fd0dd50ee58bd9ebb
|
Q: Paper of Paul Erdös I'm trying to understand On Arithmetical Properties of Lambert Series by Erdös, but am stuck on the first page.
He states:
Put $k=\left[(\log n)^{1/10}\right]$ and let $p_1,p_2,\ldots$ be the sequence of consecutive primes greater than $(\log n)^2$. (...) From elementary results about the distribution of primes, it follows that $p_i < 2(\log n)^2$ for $i \leq \frac{k(k+1)}{2}$.
I think the "elementary results" refers to Bertrand's Postulate. But, from that, I can only say $(\log n)^2 < p_1 < 2((\log n)^2)$ -- not the statement about all $p_i$ for the given $i$. How is this result obtained?
He further states:
Put $$A = \left\{\prod_{1\leq i \leq \frac{k(k+1)}{2}} p_i\right\}^t$$
(...) by a simple computation, we obtain $$A < \left\{2(\log n)\right\}^{tk^2} < e^{(\log n)^{1/4}}$$
How does this follow?
Finally, the page ends with a listing of congruences. Let's say the first congruence is true. Is there a guarantee that all other congruences will be true as well?
|
48038f1b468901296dcea2cd9293126861e939b4
|
Q: Approximation theory in Lp spaces (Reference Needed) I am looking for some reference on approximation theory in Lp spaces.
I have found a number of papers like: paper1 , paper2 etc.
I was wondering if there is a book or a monograph that will contain a summary of known results.
Thank you very much.
|
e0b8d9a499cfc40c7d9d5a937bd84fb7857d70fe
|
Q: Get a bounded metric from a metric - triangle inequality for $d'(x,y):=\frac{d(x,y)}{1+d(x,y)}$ This is related to Proof that every metric space is homeomorphic to a bounded metric space but I can remember that if $d$ is a metric, then $d'(x,y):=\frac{d(x,y)}{1+d(x,y)}$ is also a metric that defines the same topology.
I'm stuck on how to prove that $d'$ satisfies the triangle inequality. Am I just missing the trick to prove it, or does my memory play tricks on me?
A: Define $\phi:[0,\infty)\to[0,\infty)$ by $$\phi(t)=\frac{t}{1+t}.$$Since $\phi$ is increasing it's enough to show that $$\phi(a+b)\le\phi(a)+\phi(b)\quad(a,b\ge0).$$This is the same as $$\phi(a+b)-\phi(a)\le\phi(b)-\phi(0),$$which is the same as $$\int_0^b(\phi'(t+a)-\phi'(t))\,dt\le0.$$So it's enough to show that $\phi'$ is decreasing, and indeed $\phi'(t)=\frac{1}{(1+t)^2}$ is decreasing on $[0,\infty)$.
|
d5a34eb8b2aade4f4bddc0601485f42cb7c8da9a
|
Q: Why does $\ln(1+\frac{3}{n^2}+o(\frac{1}{n^2}))=\frac{3}{n^2}+o(\frac{1}{n^2})$? In order to show that a series converges, I want to show that $\sum\ln(\frac{v_{n+1}}{v_n})$ converges, which led me to the first part of the equation below; I couldn't solve it, so I looked in the answer book for the second part, which I didn't understand...
why does
\begin{align*}
\ln(1+\frac{3}{n^2}+o(\frac{1}{n^2}))
&=\frac{3}{n^2}+o(\frac{1}{n^2})
\end{align*}
A: Note that the claimed equality
$$\ln\Big(1+\frac{3}{n^2}+o\Big(\frac{1}{n^2}\Big)\Big) = \frac{3}{n^2}+o\Big(\frac{1}{n^2}\Big)$$
means precisely that
$$\lim_{n \to \infty} \frac{\ln(1+\frac{3}{n^2}+o(\frac{1}{n^2}))}{\frac{3}{n^2}} = 1,$$
since
$$\frac{\frac{3}{n^2}+o(\frac{1}{n^2})}{\frac{3}{n^2}} = 1 + \frac{o(\frac{1}{n^2})}{\frac{3}{n^2}} \to 1+ 0 =1.$$
So you only need to prove that limit. There are many ways to do this, but the simplest is probably the Maclaurin expansion $\ln(1+u)=u+o(u)$ as $u\to 0$: taking $u=\frac{3}{n^2}+o(\frac{1}{n^2})\to 0$ gives
$$\frac{\ln(1+\frac{3}{n^2}+o(\frac{1}{n^2}))}{\frac{3}{n^2}} = \frac{\frac{3}{n^2}+o(\frac{1}{n^2})}{\frac{3}{n^2}} \to 1.$$
|
594bb61b0ff7d53f184b2ab05d2928cb7860a7df
|
Q: Why is $f = f_{0} + \sum_{i}\alpha_{i}X_{i} + \frac{1}{2}\sum_{i}^{n} \sum_{j}^{n}A_{ij}X_{i}X_{j}$ the standard quadratic form in n dimensions? The claim that $$f = f_{0} + \sum_{i}\alpha_{i}X_{i} + \frac{1}{2}\sum_{i}^{n} \sum_{j}^{n}A_{ij}X_{i}X_{j}$$
is the standard quadratic form for $n$ dimensions, where $\alpha$ is a vector of $n$ coefficients, $X$ the $n \times 1$ vector of variables, and $A$ an $n \times n$ matrix,
appears in the first page of the following paper:
http://bioinfo.ict.ac.cn/~dbu/AlgorithmCourses/Lectures/Fletcher-Powell.pdf
I want to understand why this equation holds.
It makes sense that, when the input matrix, $X$, is in one dimension ($n = 1$), the above equation would give the equation of a quadratic in algebra: $f(x) = c + bx + ax^2$.
If we take the quadratic given by ($n = 2$) to be a parabaloid, then the above equation also makes sense.
I do not understand why, for higher dimensions ($n > 2$), the quadratic equation is what the above formula gives.
As an aside, could someone show why the gradient of the equation above is just $AX + \alpha$
For example, when $(n = 2)$, the expansion above is:
$ f_{0} + \alpha_{1}X_{1} + \alpha_{2}X_{2} + \frac{1}{2}(A_{11}X_{1}X_{1} + A_{12}X_{1}X_{2} + A_{21}X_{2}X_{1} + A_{22}X_{2}X_{2})$
What differentiation of this expansion leads to the result for the gradient?
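Regarding the aside, here is a sketch (assuming, as is standard for quadratic forms, that $A$ is symmetric, so $A_{12}=A_{21}$): differentiating the $n=2$ expansion with respect to $X_1$ gives
$$\frac{\partial f}{\partial X_1} = \alpha_1 + \tfrac12\left(2A_{11}X_1 + A_{12}X_2 + A_{21}X_2\right) = \alpha_1 + A_{11}X_1 + A_{12}X_2 = (\alpha + AX)_1,$$
and likewise for $X_2$, so $\nabla f = \alpha + AX$.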
|
b45429d62389c8a326efd610e6c06ccb11f42f2a
|
Q: How does the integral $\int_0^{\infty} \ln\left( 1+\frac{1}{x^2} \right) \,\text{d}x$ converge? I tried using the fact that $\ln(f(x)) < f(x)$ but that doesn't seem to work. It's an improper integral.
$$
\int_0^{\infty} \ln\left( 1+\frac{1}{x^2} \right) \,\text{d}x
$$
A: As $x\to 0$, the integrand behaves as $-2 \log{x}$, which is an integrable singularity.
As $x \to \infty$, the integrand behaves as $1/x^2$, which is integrable in this limit.
There are no other singular points in the integration interval.
A: To prove convergence, you should estimate behavior of the integrand near zero and near infinity.
Near zero $\ln(1 + x^{-2}) \approx \ln(x^{-2}) = -2 \ln x $. It is known that $\int \ln x \; dx$ converges near zero.
Near infinity $\ln(1 + x^{-2}) \approx x^{-2}$. It is known that $\int x^{-2} \; dx$ converges near infinity.
So the integral converges properly. In fact, notation $f(x) \approx g(x)$ should be replaced with more strict $f(x) = \Theta(g(x))$ everywhere.
A: Note that
$$\int \ln(1+\frac{1}{x^2}) dx = x \log \left(\frac{1}{x^2}+1\right)+2 \arctan (x) + C$$ so
$$\begin{align}
\int_0^{\infty} \ln\left(1+\frac{1}{x^2}\right) dx &= \lim_{x \to \infty} \left( x \log \left(\frac{1}{x^2}+1\right)+2 \arctan (x) \right) - \lim_{x \to 0} \left(x \log \left(\frac{1}{x^2}+1\right)+2 \arctan (x) \right) \\ &= \left( 0 + 2 \cdot \frac{\pi}{2} \right) - \left( 0 + 0 \right) \\ &= \pi
\end{align}$$
|
0c4bda24adbb556643e6ec13ac90e63af87ed787
|
Q: Prime ideal theorem for modular lattices? There's a well-known theorem for distributive lattices commonly referred to as the "prime ideal theorem:"
Let $L$ be a distributive lattice, $I$ an ideal of $L$, and $F$ a filter of $L$ such that $I\cap F=\varnothing$. Then there exists a prime ideal $P$ such that $I\subseteq P$ and $P\cap F=\varnothing$.
Is there an analogous theorem for modular lattices? A cursory look through one or two texts and a brief online search seem to indicate that the answer is no, but perhaps I am mistaken.
Does anyone have a counter-example or a reference to an affirmative answer? Thanks!
A: Take the modular lattice $M_3=\{0,a,b,c,1\}$ where $0,1$ are the minimum and maximum elements, respectively, and consider $I=\{0,c\}$ and $F=\{a,1\}$. There is no (proper) prime ideal containing $I$, and hence none disjoint from $F$. E.g., $I$ is not prime since $a\wedge b=0\in I$ but $a,b\notin I$.
|
4f68ad8a0231fefc9f81f15247e712b924a3a807
|
Q: Estimating standard deviation I read in a book that we can estimate the distribution of the number of heads we get if we flip a coin 100 times (so doing 100-flip sessions multiple times and taking the distribution of the number of heads) with the normal distribution $(50, 5)$, where $5$ is the standard deviation. But how do we approximate the standard deviation? (I.e., why is it $5$ when the number of flips is $100$? How are these numbers related?)
A: The number of heads in $100$ fair flips is a Binomial$(100, \frac12)$ random variable, whose variance is $np(1-p) = 100 \cdot \frac12 \cdot \frac12 = 25$, so the standard deviation is $\sqrt{25} = 5$; the normal approximation is the de Moivre-Laplace theorem. This is not a simple result that one can just arrive at by looking at it. If you want a proof, take a look at this pdf.
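A quick simulation illustrating the figure $\sqrt{npq} = 5$ (a Python sketch; the seed and sample sizes are arbitrary choices):

import random, statistics

random.seed(0)
heads = [sum(random.random() < 0.5 for _ in range(100)) for _ in range(20000)]
print(statistics.mean(heads), statistics.stdev(heads))  # approx 50 and approx 5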
|
330e40055126c2e448b9ec43f766962c2e8a5622
|
Q: To find the arrangement of given letters so that there is fixed number of transition between them. A 10 letter word is composed of $P,\ Q,\ R,\ S$. The problem is to find the number of arrangements of these letters which could lead to a fixed number of transitions between each pair of letters.
For example, consider the following arrangement of $P,\ Q,\ R,\ S$:
$PQPRSPQRSS$ has $3\ P$, $2\ Q$, $2\ R$, $3\ S$, and has $3$ transitions between $P$ and $Q$, $1$ transition between $P$ and $R$, $1$ transition between $P$ and $S$, $2$ transitions between $R$ and $S$, and $1$ transition between $Q$ and $R$.
The question here is to find the number of ways the letters can be arranged so that the transition counts between letters are conserved.
A: Consider a $4 \times 4$ symmetric matrix $M$ with entries $m_{ij}$ and diagonal entries $1$.
I'll label the indices $1,2,3,4$ as $P,Q,R,S$. Each entry
of $M^{9}$ is a polynomial of total degree $ \le 9$ in the $m_{ij}$, where the coefficient of each monomial in $M^9_{ij}$ gives the number of $10$-letter words starting with $i$, ending with $j$, and with the numbers of transitions specified by the monomial. Thus for the number with $3$ transitions between $P$ and $Q$, $1$ between $P$ and $R$, $1$ between $P$ and $S$, $1$ between $Q$ and $R$, $2$ between $R$ and $S$, and $0$ for the other pairs, you would take the coefficient of $m_{PQ}^3 m_{PR} m_{PS} m_{QR} m_{RS}^2$. Since you don't care where you start and end, add these for all $i,j$. Using Maple I get $288$.
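Since the alphabet and word length are small, the Maple count can be cross-checked by brute force (a Python sketch enumerating all $4^{10}$ words; it runs in well under a minute):

from itertools import product
from collections import Counter

target = Counter({('P','Q'): 3, ('P','R'): 1, ('P','S'): 1,
                  ('Q','R'): 1, ('R','S'): 2})

def transitions(word):
    c = Counter()
    for a, b in zip(word, word[1:]):
        if a != b:
            c[tuple(sorted((a, b)))] += 1
    return c

print(sum(transitions(w) == target for w in product('PQRS', repeat=10)))  # 288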
|
5173b9550a7d995bfa8e66aa868495b0ff17a0a1
|
Q: Prove that $\mathrm{span}(S) = S$ for a subspace $S$. Prove that if $S$ is a subspace of a vector space $V$, then $\mathrm{span}(S) = S$.
What I tried: I considered using the properties of vector spaces or maybe using an example where $S \subseteq \mathbb{R}^2$, but those strategies didn't amount to much. Any thoughts?
A: $\mathrm{Span}(S)$ is the set of finite linear combinations of elements in $S$. In particular, every element in $S$ is in $\mathrm{Span}(S)$. Thus $S\subseteq\mathrm{Span}(S)$. Now, let $u\in\mathrm{Span}(S)$. Then $u=a_1s_1+...+a_ns_n$ for some scalars $a_i$ and $s_i\in S$. As $S$ is a subspace, it is closed under addition and scalar multiplication. Then $u\in S$. This proves $\mathrm{Span}(S)\subseteq S$.
A: Note that the span of a set $S\subset V$ is the intersection of all subspaces of $V$ that contain $S$, i.e. $$\operatorname{Span}(S) = \bigcap_{S\ \subset\ T\ \leqslant V}T. $$
The intersection of subspaces is again a subspace, so $$\bigcap_{S\ \subset\ T\ \leqslant V}T$$ is a subspace containing $S$, which implies that $$\operatorname{Span}(S)\subset \bigcap_{S\ \subset\ T\ \leqslant V}T. $$ But $\operatorname{Span}(S)$ itself is a subspace containing $S$, therefore
$$\operatorname{Span}(S)\supset \bigcap_{S\ \subset\ T\ \leqslant V}T. $$
It follows immediately that if $S\leqslant V$, then $\operatorname{Span}(S)=S$.
|
f8da4cefa21633f0bc25a894b75c3ce8febb9232
|
Q: Range of a function of two variable How to find $f(\mathbb{R}^2)$ if $f:\mathbb{R}^2 \to \mathbb{R}^2$
$$f(x,y)=(e^{x+y}+e^{x-y},e^{x+y}-e^{x-y} )$$
|
8c82c0d39b70fabb345f46300bf12a806cdc2cc5
|
Q: Proof that $[\alpha, \infty)$ is closed I'm wondering if my approach to this proof is correct. Would appreciate your evaluation.
Prove that $[\alpha, \infty)$ is closed for all $\alpha\in \mathbb{R}$.
Proof
We need to show that $[\alpha, \infty)^c=(-\infty, \alpha)$ $(*)$ is open. Let $x \in *$, $\varepsilon:=\lvert x - \alpha\lvert$. Pick a $y\in B_\varepsilon(x)$ so that $\lvert y-x\lvert<\varepsilon$. We need to show that $\lvert y-\alpha\lvert<2\lvert x-\alpha\lvert$.
Now, $\lvert y - \alpha\lvert=\lvert y -x+x-\alpha\lvert\le \lvert y-x \lvert+\lvert x-\alpha\lvert<\varepsilon+\lvert x-\alpha\lvert=2\varepsilon$.
Hence, $y\in B_\varepsilon(x)\implies y\in[\alpha, \infty)^c\implies[\alpha,\infty)^c$ is open, and $[\alpha, \infty)$ is closed.
A: You need to show that $(x-\epsilon,x+\epsilon)\subseteq(-\infty,\alpha)$, where $\epsilon=\alpha-x$ (it is the convenient choice). Take $y\in(x-\epsilon,x+\epsilon)$. We have $y<x+\epsilon=x+(\alpha-x)=\alpha$. So $(x-\epsilon,x+\epsilon)\subseteq(-\infty,\alpha)$.
A: You appear to be a little confused about what exactly you need to show. Here is my own proof along similar lines.
We need to show that $[\alpha,\infty)^\mathsf{c}=(-\infty,\alpha)=I$ is open. Let $x\in I$, and let $\epsilon=\alpha-x$. If $y\in B_{\epsilon}(x)=(x-\epsilon,x+\epsilon)$, we then have $y<x+\epsilon$, that is, $y<x+\alpha-x=\alpha$. Hence $y\in I$ whenever $y\in B_{\epsilon}(x)$. Since this argument holds for all $x\in I$, it follows that $I$ is open, and hence that $[\alpha,\infty)$ is closed.
Some extra advice:
*
*You are always free to refer to equations by symbols such as $(\ast)$, but you should really avoid calling things like numbers and sets anything like $(\ast)$. In particular, "$x\in\ast$" is highly unusual, and I actually found it quite jarring to read.
*Were I actually asked such a question, I would not use this definition of "closed" to answer it. Rather, I would use the sandwich/squeeze theorem for sequences in conjunction with the following theorem (which hopefully is familiar to you):
A subset $A$ of $\mathbb{R}^{d}$ (for any $d\in\mathbb{N}$) is closed if and only if every convergent sequence $(x_{n})$ contained in $A$ satisfies $\lim_{n\to\infty} x_{n} \in A$.
A: Here is a simple proof which does not need to use complements:
Let $ x \notin [\alpha,\infty) $. Then $x < \alpha$. Let $\delta = |x-\alpha|$. Then the open ball $B_{\delta} (x)$ centred at $x$ will not intersect $[\alpha,\infty)$, therefore $x$ is not a limit point of $[\alpha,\infty)$. This is equivalent to the statement that if $x$ is a limit point of $[\alpha,\infty)$ , then $ x \in [\alpha,\infty) $, which is the same as saying that $[\alpha,\infty)$ is closed.
A: You are on the right track but your logic and continuity are off.
Revision first, critique later.
I'm trying to rewrite your proof in your style and reasoning as much as possible:
=====
We need to show that $[\alpha,\infty)^c=(−\infty,\alpha)(∗)$ is open. Let $x \in ∗, \epsilon :=∣x−\alpha∣$. We need to show that $B_{\epsilon}(x) \subset *$. Pick a $y \in B_{\epsilon}(x)$ so that $∣y−x∣<\epsilon$. We need to show that $y \in *$. Now $|y - x|< \epsilon \implies y \in (x - \epsilon, x + \epsilon) \implies y < x + \epsilon = x + \alpha - x = \alpha \implies y \in (-\infty, \alpha) = *$. Hence, $y \in B_{\epsilon}(x) \implies y \in [\alpha, \infty)^c \implies [\alpha, \infty)^c$ is open $\implies [\alpha, \infty)$ is closed.
=====
Now critique of what you wrote:
We need to show $(-\infty, \alpha) = (*)$ is open. check. Let $x \in (*);$ let $\epsilon = |\alpha - x|$. Check.
Let $y \in B_{\epsilon}(x)$. Okay... but....
1) You should state this point that our goal is to show $B_{\epsilon}(x) \subset (*)$. It's not clear from the rest of the proof that you are aware that this is the goal.
2) We should state that our goal is to show $y \in (*)$. As $y$ is arbitrary that means $B_{\epsilon}(x) \subset (*)$ and as $x$ was arbitrary that'd imply (*) is open.
3) You claim to show instead that $y \in B_{\epsilon}(x)$, which is a) unnecessary, as we just declared that to be true in the first place, b) doesn't in any way show that $(*)$ is open, and ... c) well... your argument isn't actually correct.
Then you claim that we need to show that $|y - \alpha| < 2|x - \alpha|$. This is where your proof goes off the rails. Showing that gives us no information. Let $x = -1$, $\alpha = 0$ and $y = 1/2$. Then $|y -\alpha | < 2|x - \alpha|$, in fact $|y - \alpha| < |x - \alpha|$. But $y \not \in (*)$.
What we need to show is that $y \in (*)$.
Then you correctly show that $|y - \alpha| < 2\epsilon$ and conclude that this implies $y \in B_{\epsilon}(x)$.
1) We didn't need to show that as we declared it when we chose $y$ and
2) It doesn't show that; it shows $y \in B_{2\epsilon}(\alpha)$ which may or may not have anything to do with $B_{\epsilon}(x)$.
Then you conclude $y \in B_{\epsilon}(x) \implies (*)$ is open. It only implies that if we can show $B_{\epsilon}(x) \subset (*)$.
|
f076b4c2c8ced3a2c72a92c77c311427e882a072
|
Q: Is there a standard shorthand notation for typebounds? Let's assume I want to declare a function. The full "signature" (or "typebound") is given by:
$f(a,b,c)\rightarrow d$ where $a\in \mathbb{N}, b\in \mathbb{Q}, c\in \mathbb{N}, d\in \mathbb{N}$
Is there any generally accepted shorthand notation for the line above, without using $\in$ repeatedly? Something like:
$f:<\mathbb{N},\mathbb{Q},\mathbb{N}>\rightarrow\mathbb{N}$
Also, I would need to use structures (e.g. matrices of certain sizes) in that notation as well, not just predefined sets of numbers (like $\mathbb{N}$ and $\mathbb{Q}$).
A: It's called the Cartesian product: if $f$ takes $n$ arguments $x_1, x_2, \dots, x_n$, and $x_i \in X_i \ \forall 1 \le i \le n$, and if $f$ takes values in $Y$, then you write the "signature" of $f$ as $f : X_1 \times X_2 \times \dots \times X_n \to Y$ or, to get rid of the dots, $f : \prod \limits _{i = 1} ^n X_i \to Y$.
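As an aside, this is essentially how statically typed programming languages write signatures; a rough Python analogue of the signature above (the function body is a made-up placeholder, only there so the sketch runs):

from fractions import Fraction

def f(a: int, b: Fraction, c: int) -> int:  # f : N x Q x N -> N
    return a + c + round(b)  # placeholder body

print(f(1, Fraction(1, 2), 3))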
|
3f4d5794e4d8261a7fc7f762fb99cbaad30035f6
|
Q: Implicit Differentiation in multivariate calculus Let $y(x)$ be given implicitly by the equation:
$xy\left( x\right) -\ln y\left( x\right) = 1$
Determine $\dfrac {dy}{dx}$
I'm unsure of how to go about this problem.
A: Differentiating with respect to $x$ we have:
$$
\frac{d}{dx}(xy)-\frac{d}{dx}(\ln y)+\frac{d}{dx}(-1)=0
$$
$$
y+x\frac{dy}{dx}-\frac{1}{y}\frac{dy}{dx}=0
$$
$$
\frac{dy}{dx}\left(x-\frac{1}{y} \right)=-y
$$
$$
\frac{dy}{dx}=\frac{y^2}{1-xy}
$$
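A quick symbolic check of this result (a sympy sketch):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
eq = x*y(x) - sp.log(y(x)) - 1  # x*y - ln(y) - 1 = 0
yp = sp.solve(sp.diff(eq, x), sp.Derivative(y(x), x))[0]
print(sp.simplify(yp))  # equal to y^2/(1 - x*y)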
A: I'm going to abbreviate $y(x)$ as $y$ and $\frac{dy}{dx}$ as $y'$. I assume you mean to differentiate with respect to $x$. Please correct me if I'm wrong.
For $xy -\ln y = 1$, we differentiate both sides as $y+xy'-\frac{1}{y}y'=0.$
Then, we write $-y=y'(x-\frac{1}{y})$.
Finally $$y'=\frac{-y}{x-\frac{1}{y}}=\frac{-y^2}{xy-1}.$$
If you want to write the derivative entirely in terms of $x$, you need to use the Lambert $W$ function, as you cannot solve $xy-\ln y=1$ for $y$ using only "elementary functions" (some consider the Lambert function elementary, but I do not).
A: NOTE - the original question, before it was changed, was to find $\frac{dy}{dt}$, so that is what this answer is answering. Even though $t$ is not a part of the original equation, finding the derivative of one variable with respect to another that isn't present is very commonly used in related rates problems. Since $dt$ is just a value like any other, it can be divided just like any other, just as long as you do it to both sides.
To find the answer, just take a differential and divide by $dt$. Usually, in Calculus, people try to find whole derivatives, but really, finding the differential first and then dividing is best.
Let's start with your problem:
$$xy - \ln y = 1$$
Now, take the differential:
$$x\cdot dy + y\cdot dx - \frac{dy}{y} = 0$$
Now that you have the differential, divide both sides by $dt$:
$$x\frac{dy}{dt} + y\frac{dx}{dt} - \frac{1}{y}\frac{dy}{dt} = 0$$
Now, solve for $\frac{dy}{dt}$:
$$x\frac{dy}{dt} + y\frac{dx}{dt} - \frac{1}{y}\frac{dy}{dt} = 0$$
$$x\frac{dy}{dt} + y\frac{dx}{dt} = \frac{1}{y}\frac{dy}{dt} $$
$$y\frac{dx}{dt} = \frac{1}{y}\frac{dy}{dt} - x\frac{dy}{dt} $$
$$y\frac{dx}{dt} = (\frac{1}{y} - x)\frac{dy}{dt}$$
$$\frac{y}{\frac{1}{y} - x}\frac{dx}{dt} = \frac{dy}{dt}$$
A: I solved $y(x)$ not $y'(x)$, I'm sorry.
$$xy(x)-\ln(y(x))=1\Longleftrightarrow$$
$$\frac{\text{d}}{\text{d}x}\left[xy(x)-\ln(y(x))\right]=\frac{\text{d}}{\text{d}x}\left[1\right]\Longleftrightarrow$$
$$xy'(x)+y(x)-\frac{y'(x)}{y(x)}=0\Longleftrightarrow$$
$$y(x)+y'(x)\left[x-\frac{1}{y(x)}\right]=0\Longleftrightarrow$$
Let $\text{A}(x,y)=y$ and $\text{Q}(x,y)=x-\frac{1}{y}$,
This is an exact equation, because $\frac{\partial\text{A}(x,y)}{\partial y}=1=\frac{\partial\text{Q}(x,y)}{\partial x}$.
Define $f(x,y)$ such that $\frac{\partial f(x,y)}{\partial x}=\text{A}(x,y)$ and $\frac{\partial f(x,y)}{\partial y}=\text{Q}(x,y)$:
Then, the solution will be given by $f(x,y)=\text{C}$, where $\text{C}$ is an arbitrary constant.
Integrate $\frac{\partial f(x,y)}{\partial x}$ with respect to $x$ in order to find $f(x,y)$:
$f(x,y)=\int y\space\text{d}x=yx+g(y)$ where $g(y)$ is an arbitrary function of $y$.
Differentiate $f(x,y)$ with respect to $y$ in order to find $g(y)$:
$\frac{\partial f(x,y)}{\partial y}=\frac{\partial}{\partial y}\left[yx+g(y)\right]=x+\frac{\text{d}g(y)}{\text{d}y}$.
$$x+\frac{\text{d}g(y)}{\text{d}y}=x-\frac{1}{y}\Longleftrightarrow$$
$$\frac{\text{d}g(y)}{\text{d}y}=-\frac{1}{y}\Longleftrightarrow$$
$$g(y)=-\ln|y|\Longleftrightarrow$$
$$f(x,y)=yx-\ln|y|$$
$$yx-\ln|y|=f(x,y)\Longleftrightarrow$$
$$yx-\ln|y|=\text{C}\Longleftrightarrow$$
$$y(x)=-\frac{\text{W}\left(\text{C}x\right)}{x}$$
|
c2b78e26e4282f116167362d83e20a2aa6cd6105
|
Q: Lagrange multipliers - regarding the theory and motivation I am new with Lagrange multipliers , and having trouble understanding what is a necessary condition and what is sufficient.
Assume I want to find global extrema of $f(x,y,z) \quad s.t. \quad g(x,y,z)=0$. As far as I understand, Lagrange multipliers give a necessary condition (right?), so my question is:
Is it possible that the function $f$ has no extrema under the constraint $g=0$ (it does not have any maximum or minimum under this constraint), but when I use Lagrange multipliers, I will obtain solutions (i.e. points where $\nabla L =0$)?
Hope I made myself clear
Thanks a lot in advance
A: $\newcommand{\Reals}{\mathbf{R}}\newcommand{\grad}{\nabla}$Let $f$ and $g$ be continuously-differentiable real-valued functions on $\Reals^{3}$, and assume $\grad g$ is non-vanishing on the (non-empty) level set $\{g = 0\}$, so that $M = \{g = 0\}$ is a smooth surface by the implicit function theorem.
The Lagrange multipliers equation
$$
\grad f = \lambda \grad g
\tag{1}
$$
is the critical point condition for $f$ on $M$. As you say, this is necessary in order that $f$ have an extremum, but it is not generally sufficient, as you suspect.
To give a simple example, if $g(x, y, z) = z$, then $M = \{g = 0\}$ is the $(x, y)$-plane, and a solution of (1) is precisely a critical point of the function $\phi(x, y) = f(x, y, 0)$. It's easy to cook up functions $f$ so that $\phi$ has no extrema (say, $f(x, y, z) = x^{2} - y^{2}$), or has absolute extrema but also critical points that are not absolute extrema.
If $M$ is compact, i.e., closed and bounded, then the extreme value theorem guarantees that $f$ achieves an absolute minimum and an absolute maximum on $M$. Nonetheless, $f$ can still have critical points on $M$ that are not extrema. The prototypical example is perhaps to pick real numbers $a < b < c$, and to consider
$$
f(x, y, z) = ax^{2} + by^{2} + cz^{2},\quad
g(x, y, z) = x^{2} + y^{2} + z^{2} - 1.
$$
The solutions of (1) are easily checked to be the standard basis vectors and their negatives; the absolute minima of $f$ occur at $\pm(1, 0, 0)$, the absolute maxima at $\pm(0, 0, 1)$. The points $\pm(0, 1, 0)$ are non-extremal critical points.
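For concreteness, the critical points in this example can be found symbolically (a sympy sketch with the hypothetical values $a, b, c = 1, 2, 3$):

import sympy as sp

x, y, z, lam = sp.symbols('x y z lam', real=True)
a, b, c = 1, 2, 3  # any a < b < c will do
f = a*x**2 + b*y**2 + c*z**2
g = x**2 + y**2 + z**2 - 1
eqs = [sp.diff(f, v) - lam*sp.diff(g, v) for v in (x, y, z)] + [g]
for s in sp.solve(eqs, [x, y, z, lam], dict=True):
    print(s, '  f =', f.subs(s))  # f = 1 at the minima, 3 at the maxima, 2 at the saddles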
|
02fc9aa9426fa0118813fdcd8e6222ee0664fecd
|
Q: Are there coordinates $u,v$ such that $e^{-x^2-4y^2}dx\wedge dy = du\wedge dv$? Are there coordinates $u,v$ on $\mathbb{R}^2$ such that $e^{-x^2-4y^2}dx\wedge dy = du\wedge dv$?
A: One choice is $u(x) = \int_0^x e^{-s^2}\,ds$, $v(y) = \int_0^y e^{-4t^2}\,dt$, so that $du \wedge dv = e^{-x^2-4y^2}\,dx\wedge dy$.
These integrals cannot be expressed in elementary functions, though they can be written in terms of the error function (e.g. $u(x) = \frac{\sqrt{\pi}}{2}\operatorname{erf}(x)$).
|
b424eaef84791fbf500155d621e748f2fbc44347
|
Q: congruency of triangles in hyperbolic and spherical geometry In Euclidean geometry, we have the following congruencies of triangles: side-side-side, side-angle-side, angle-angle-side = angle-side-angle (because of the angle sum) and side-side-angle (only if the side opposite to the angle is larger than the other given side). Angle-angle-angle never holds.
My question is this: when are triangles congruent in the hyperbolic and the spherical case and what are the conditions under which the congruencies hold?
Does anybody know a reference where all these congruencies are derived from the respective laws of sines and cosines?
|
16be1739c4579991aeafaf65758cc006bef82f3c
|
Q: How to find a planar graph if I know that it has 7 faces with certain sizes? I can't figure out how to find a graph with these properties:
I have to find 2 non-isomorphic plane graphs which (each of them) have 7 faces, two of which are of size 3 and the rest of size 4.
This is my partial solution:
EDIT: I've figured out that this Euler calculation is wrong. Does anybody know how to get the number of vertices from that? Is it possible?
The first thing I thought about is that I could count the number of vertices in this graph using Euler's formula, which says $|V(G)| - 7\ (\text{the number of faces}) - |E(G)| = 2$. So $|V(G)| = 5 + |E(G)|$, and now I'm lost. What to do next?
Another solution (which is obviously incorrect) is this: Since there are 7 faces, 2 of them with size 3 and 5 of them with size 4, the outer face should be of size 4. But if the outer face was of size 4, the "outer edge" of graph should be like square (C4 Graph). But it is not possible to find 2 faces of size 3 and 4 faces of size 4 in the C4 because we can draw only 2 edges there.
Do you know where is the problem? Thank you very much
A: It is easy to show that $\sum\limits_{\text{$f$ is a face}}e(f)=2E$. (Here $e(f)$ is the number of edges of the face $f$.)
So there are $(3+3+4+4+4+4+4)/2=13$ edges
From here we apply Euler's formula, so $V=13-7+2=8$ vertices.
I found one way so far:
|
a617855f79c730e81dd3ec98782a8e1b5e16c511
|
Q: Show that every $\sigma \in S_n$ is of the form $\sigma = \prod_i (1 \; \; x_i)$ Let $n \in \Bbb N$ and let $S_n$ denote the group of permutations of $\{1,2,...,n\}$.
Prove that for all $\sigma \in S_n$, we have:
$$\sigma = \prod_{i=1}^m (1 \ \ x_i), \text{ for some $x_1,...,x_m \in \{1,...,n\}$}$$
here, $(1 \ \ x_i)$ denotes a transposition of $1$ and $x_i$.
I understand why this is true.
If, for instance,
$$\sigma = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 4 & 1 & 3 & 5 & 2 \end{pmatrix}$$
Then, we can write:
$$\sigma = (1 \ \ 2)(1 \ \ 5)(1 \ \ 4)$$
However, I am unable to construct an algorithm for writing a general $\sigma$ in this form.
Can someone offer a proof of that?
A: If $(a \;\; b)$ is a transposition with $a,b \neq 1$ then
$$
(a \;\; b) = (1 \;\; a) (1 \;\; b) (1 \;\; a).
$$
So every transposition can be written as such a product. Now every cycle $(a_1 \;\; \cdots \;\; a_n)$ can be written as the product of transpositions
$$
(a_1 \;\; \cdots \;\; a_n)
= (a_1 \;\; a_2) (a_2 \;\; a_3)
\dotsm (a_{n-2} \;\; a_{n-1}) (a_{n-1} \;\; a_n).
$$
Lastly, every permutation can be written as the product of (disjoint) cycles.
Combining this we can write every permutation as the product of transpositions of the form $(1 \;\; a)$.
(We can also decompose a cycle $(a_1 \;\; \cdots \;\; a_n)$ as
$$
(a_1 \;\; \cdots \;\; a_n)
= (a_1 \;\; a_n) (a_1 \;\; a_{n-1})
\dotsm (a_1 \;\; a_3) (a_1 \;\; a_2).
$$
If $a_1 = 1$, then the transpositions $(1 \;\; a_i)$ are already of the desired form, and no further decomposition is needed.)
PS: As an example take your permutation
$$
\sigma =
\begin{pmatrix}
1 & 2 & 3 & 4 & 5 \\
4 & 1 & 3 & 5 & 2
\end{pmatrix}.
$$
We have
$$
\sigma
= (1 \;\; 4 \;\; 5 \;\; 2)
= (1 \;\; 2) (1 \;\; 5) (1 \;\; 4).
$$
Another example would be
$$
\tau =
\begin{pmatrix}
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\
5 & 7 & 1 & 2 & 3 & 8 & 4 & 6
\end{pmatrix},
$$
for which we have
\begin{align*}
\tau
&= (1 \;\; 5 \;\; 3) (2 \;\; 7 \;\; 4) (6 \;\; 8) \\
&= (1 \;\; 3) (1 \;\; 5) (2 \;\; 7) (7 \;\; 4) (6 \;\; 8) \\
&= (1 \;\; 3) (1 \;\; 5) (1 \;\; 2) (1 \;\; 7) (1 \;\; 2) (1 \;\; 7) (1 \;\; 4) (1 \;\; 7) (1 \;\; 6) (1 \;\; 8) (1 \;\; 6).
\end{align*}
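The last decomposition can be verified mechanically (a Python sketch; permutations act on $\{1,\dots,8\}$ and products are applied right to left, as for function composition):

def transposition(a, b, n=8):
    p = list(range(n + 1))  # one-line notation, 1-indexed (slot 0 unused)
    p[a], p[b] = p[b], p[a]
    return p

def compose(p, q):  # (p o q)(i) = p[q[i]]
    return [p[q[i]] for i in range(len(p))]

tau = [0, 5, 7, 1, 2, 3, 8, 4, 6]  # tau(i) = tau[i]
factors = [(1,3),(1,5),(1,2),(1,7),(1,2),(1,7),(1,4),(1,7),(1,6),(1,8),(1,6)]
prod = list(range(9))  # identity
for a, b in factors:
    prod = compose(prod, transposition(a, b))
print(prod == tau)  # True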
|
34616921061ae24d9ec13560a636b992a62a7eb5
|
Q: Evaluating the integral $\int\frac{x^{1/6}-1}{x^{2/3}-x^{1/2}}dx$. How can I evaluate the following integral $$\int\frac{x^{1/6}-1}{x^{2/3}-x^{1/2}}dx.$$
A: Why not just expand $x^{2/3}-x^{1/2}=x^{1/2}(x^{1/6}-1)$. After cancellation, you're left with $\int x^{-1/2}dx=2\sqrt{x}+C$.
A: Substitute $u={x}^\frac 16\implies u^6=x\implies 6u^5\,du=dx$:
$$\therefore 6\int\frac{(u-1)u^5}{u^4-u^3}\,du=6\int\frac{u^5}{u^3}\,du=6\int u^2\,du=\frac{6u^3}{3}+c=2\sqrt{x}+c$$
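The cancellation can be confirmed symbolically (a sympy sketch of this substitution):

import sympy as sp

x, u = sp.symbols('x u', positive=True)
integrand = (x**sp.Rational(1,6) - 1) / (x**sp.Rational(2,3) - x**sp.Rational(1,2))
g = sp.simplify(integrand.subs(x, u**6) * 6*u**5)  # include dx = 6 u^5 du
print(g, sp.integrate(g, u).subs(u, x**sp.Rational(1,6)))  # 6*u**2  2*sqrt(x)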
|
29f547358b749db3c69874bf43b1befb27315a6c
|
Q: Understanding (proportional to) through an example. This question may be redundant and I apologize in advance, but I am really having a hard time digesting the notion of "proportional to" in mathematics. Kindly, can someone simplify the idea of it and explain when we can say something is proportional to something else?
Thank you for your help.
A: We say that $y$ is proportional to $x$ if there is a constant $c$ for which $y=cx$. Intuitively, this means that
*
*if we double $x$, then we double $y$
*if we triple $x$, then we triple $y$
*if we halve $x$, then we halve $y$
... and indeed, in general, if we scale $x$ multiplicatively by a certain amount, then we scale $y$ multiplicatively by that same amount. The earliest examples of this that we tend to come across are in converting units:
*
*something's length in metres is proportional to its length in cm
*something's weight in kilograms is proportional to its weight in pounds (lbs)
*something's value in US dollars is proportional to its value in pounds sterling (£)
When we start talking more abstractly about proportionality, we should go back to this scaling concept - any 2 squares are proportional, for example, because doubling the length of all sides of a square leads you to another square. However, this is not true of all shapes - to get from a triangle with sidelengths $\{3,4,5\}$ to a triangle with sidelengths $\{5,12,13\}$, you'd necessarily have to scale one side more than one of the others.
One way of showing this is to note that the first triangle has two sides in the ratio $4:3$, and this ought to be preserved by multiplicative scalings - upon a scaling of scale factor $f$, the sides would become $\{3f,4f,5f\}$, and $4f:3f=4:3$ for all nonzero $f$. However, none of the pairs of sides in the second triangle are in this ratio.
A: A quantity is proportional to another if you can multiply that quantity by a number $a$ to get the other quantity. That means both quantities have the same ratio.
For example: the number of human legs is proportional to the number of people, because every person has two legs. Thus, the ratio of people to legs is $1:2$; there are twice as many legs as people. (I do not mean to discriminate against people with leg amputations.)
A: Two variables $x$ and $y$ are said to be proportional provided that: if you multiply $x$ by $2$, then $y$ will also be multiplied by $2$; if you multiply $x$ by $3$, then $y$ will also be multiplied by $3$; in general, if you multiply $x$ by any constant $k\neq 0$, then $y$ will also be multiplied by $k$. Mathematically, this is expressed by saying that there is a constant $k$ such that $y=kx$.
Examples:
*
*(MILK) Assume that the price is $\$3.00$ per $1$L. The total price is proportional to the amount of liters: if we buy $3$ liters, we will pay $\$9.00$; if we buy $6$ liters (which is $2$ times $3$), we will pay $\$18.00$ (which is $2$ times $9$); in general, if we buy $L$ liters, we will pay $3L$. So, there is a constant $k$ (which is $3$) such that total price $=k$ amount of liters.
*(TAXI) Assume that the fare is $\$3.00$ (initial charge) plus $50$ cents per 1/5 mile. The taxi fare isn't proportional to the traveled distance: if we travel $3$ miles, we will pay $\$10.50$, but if we travel $6$ miles (which is $2$ times $3$) we will pay $\$18.00$ (which is not $2$ times $10.50$).
|
86d40bf6d7c92933544a7658e449ca757430edce
|
Q: Description of norms on $\Bbb R$ How could one describe all possible norms on $\Bbb R$, if one views $\Bbb R$ as a $1$-dimensional vector space?
A: Hint: By positive homogeneity, it all depends on $\|1\|$.
|
78e3821ee31fe258f5b907a30d130cc82a83c1b3
|
Q: Finite geometry - how to determine parallel classes I try to learn a little about finite geometry and I have now encountered the following exercise:
Exercise:
Construct the affine plane $\mathrm{AP}(\mathbb{Z}_3)$. Determine its parallel classes and the corresponding Latin squares for the classes of finite nonzero slope.
I understand how you construct the lines
(i) Slope 0: $y=0$, $y=1$, $y=2$
(ii) Slope 1: $y=x$, $y=x+1$, $y=x+2$
(iii) Slope 2: $y=2x$, $y=2x+1$, $y=2x+2$
But then, when I look at the solution, they have drawn a picture but I can't see how the lines relate to each other and why. The picture looks like this (now drawn in Paint ;) )
Hope someone can explain this to me. I read another example from my textbook but I never understood it then. So I thought, perhaps if I do an exercise I will grasp it. But no luck with that either. I'm not really sure I even understand what a parallel class is. Why have they drawn the picture like this? Why does the top-left get the number 3, and so on? Would be happy if someone could answer this. Thanks
A: These three lines form the parallel class of lines having slope 2. I think they are drawn in a somewhat odd way in this picture, but the order they appear in is arbitrary. For example, consider $y = 2x$: this line is satisfied by $(0,0)$, $(1,2)$, $(2,1)$. These three points make up the blue line in the picture.
None of these three lines intersect, and they partition the points of the plane; that is what makes them a parallel class.
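One can check directly that this class partitions the nine points of $\mathrm{AP}(\mathbb{Z}_3)$ (a Python sketch):

points = {(x, y) for x in range(3) for y in range(3)}
lines = [{(x, (2*x + c) % 3) for x in range(3)} for c in range(3)]  # slope-2 lines
print(lines)
print(set.union(*lines) == points,
      all(lines[i].isdisjoint(lines[j]) for i in range(3) for j in range(i + 1, 3)))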
|
545f6718e7d8c27ebe03e93c4fb82c9c434709b9
|
Q: $\delta - \epsilon$ proof that $f(x) = x^2-2$ is continuous for all $x \in \mathbb{R}$ I've tried to proof that $f(x) = x^2-2$ is continuous for all $x \in \mathbb{R}$ using the $\delta -\epsilon$ definiton of continuity, I think my proof is correct. But it's not the same as my textbook's. Would someone be willing to check whether this proof is correct?
Here's my proof:
Let $c \in \mathbb{R}$ be randomly given $f$ is continuous in $c$ if $\forall \epsilon \in \mathbb{R}_{>0} \exists \delta \in \mathbb{R}_{>0}$ so that $\forall x \in \mathbb{R}$ it is the case that $|x - c| < \delta \implies |f(x)-f(c)| < \epsilon$ such a $\delta$ exists, choose namely $\delta = \frac{\epsilon}{2\cdot|x+c|}$. This works, because $|f(x)-f(c)| = |(x^2-2)-(c^2-2)| = |x^2-c^2| = |(x-c)(x+c)| = |x-c| \cdot |x+c|$ But remember that $|x-c| < \frac{\epsilon}{2\cdot|x+c|}$, so $|x-c| \cdot |x+c| \leq \frac{\epsilon}{2\cdot|x+c|} \cdot |x+c|= \frac{\epsilon}{2} <\epsilon$
So it is true that$\forall \epsilon \in \mathbb{R}_{>0} \exists \delta \in \mathbb{R}_{>0}$ $|x - c| < \delta \implies |f(x)-f(c)| < \epsilon$, and considering $c \in \mathbb{R}$ was randomly given, this stament holds $\forall c\in \mathbb{R}$, so f is continuous on the whole of $\mathbb{R}$.
A: As pointed out, you cannot have your $\delta$ depend on $x$. Instead, with these types of proofs, it's easier to start with what you want. delta-epsilon proofs tend to be easier for me when I start by working backward. We can get around the problem with your original proof with the following trick:
WLOG assume that $c>0$.
Goal: $|x^{2}-2-c^{2}+2|<\epsilon$
$|(x+c)(x-c)|<\epsilon$.
$|(x+c)|\cdot|x-c|<\epsilon$.
Suppose that $\delta<1$.
Then $|x-c|<1 \implies 2c-1<x+c<2c+1$
then $|x-c||x+c|<\epsilon \implies \delta\cdot(2c+1)<\epsilon $
...Can you see where this is headed? The idea is to get rid of the $x$ by only dealing with the entire $|x+c|$ term, instead of using $x$ explicitly. By assuming a pretty small delta (somewhat arbitrary), you can use this fact to get a neat expression for $\delta$. Notice my "WLOG", otherwise $|x+c|<2c+1$ isn't justified. Although you could very easily take the max of the two values.
|
bd291498838f2478ebc37cb2f1c0467c39be6445
|
Q: Evaluate$\int_0^{\infty}x^4e^{-3x}\; dx$ using the change of variable: $t=3x$ I have been given the hint that I need to use integration by parts more than once in order to get the answer. However, I can't seem to get a reasonable result.
A: If $t=3x$, then $dt=3dx$ and $x^4=\frac{t^4}{3^4}$, so we have:
$\int x^4e^{-3x}dx = \frac{1}{3^5}\int t^4e^{-t}dt$. And then use integration by parts many times until you solve it.
A: Set $-3x=t$; then we have $x=-\frac{1}{3}t$ and $dx=-\frac{1}{3}\,dt$, and then do integration by parts.
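For reference, the integral evaluates to $4!/3^5 = 8/81$ (after four integrations by parts, or via the Gamma function); a quick sympy check:

import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.integrate(x**4 * sp.exp(-3*x), (x, 0, sp.oo)))  # 8/81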
|
213b29d852bddb16dabc420d57eba116c1fe63cf
|
Q: Convergence of $\sin{\pi\sqrt{n}}$ Revising for an exam:
Let $a_n = \sin{(\pi\sqrt{n})}.$ Show that:
(i) $a_{n+1} - a_{n} \rightarrow 0$
(ii) The sequence $(a_n)$ is bounded.
(iii) $(a_n)$ does not converge.
My attempt:
(i) ???
(ii) min($\sin(x)$) = -1, max($\sin{x}$) = 1, so $-1 \leq a_n \leq 1, \forall n \in \mathbb{N}$. Thus 1 is an upper bound and -1 is a lower bound.
(iii) $a_n$ has a monotonic subsequence which converges to 1 by the Bolzano-Weierstrass theorem. Note that the subsequence $a_{n^2}$ converges to 0. Since $0 \neq 1$, $a_n$ does not converge.
A: Hint: (for (i))
$$
\sin a - \sin b = 2\sin\frac{a-b}{2}\cos\frac{a+b}{2}
$$
and the product of two sequences, one converging to $0$ and the other bounded, converges to $0$.
In more detail:
$$
a_{n+1}-a_n = 2\sin\left(\pi\frac{\sqrt{n+1}-\sqrt{n}}{2}\right)\cos\left(\pi\frac{\sqrt{n+1}+\sqrt{n}}{2}\right)
$$
The second factor is bounded as $\cos$ is, and the first goes to $0$ as
*
*$\sqrt{n+1}-\sqrt{n} = \sqrt{n}\left(\sqrt{1+\frac{1}{n}}-1\right) = \frac{1}{2\sqrt{n}}+o\left(\frac{1}{\sqrt{n}}\right)\xrightarrow[n\to\infty]{}0$.
*$\sin u\xrightarrow[u\to 0]{}0$
Edit: For part (iii), which I hadn't realized was "still open" as well.
Suppose by contradiction $a_n\to\ell\in\mathbb{R}$.
*
*As you noticed by looking at the subsequence $(a_{n^2})_n$, we necessarily have $\ell=0$.
*Now, this implies that $a^2_n \xrightarrow[n\to\infty]{} 0$ as well, and using $\cos^2+\sin^2=1$ we get $\cos(\pi\sqrt{n})^2 \xrightarrow[n\to\infty]{} 1$.
*Suppose for now we have shown that $$\cos(\pi\sqrt{n}) \xrightarrow[n\to\infty]{} \beta\in\{-1,1\} \tag{$\dagger$}$$
*From the above, we have
$$
a_{n+1}-a_n = b_n\cos\left(\pi\frac{\sqrt{n+1}+\sqrt{n}}{2}\right)
$$
where $b_n\operatorname*{\sim}_{n\to\infty}\frac{\pi}{2\sqrt{n}}$. Let's deal with the other term: as
$$
\pi\frac{\sqrt{n+1}+\sqrt{n}}{2} = \pi\sqrt{n}+\frac{\pi}{4\sqrt{n}} + o\left(\frac{1}{\sqrt{n}}\right)
$$
we get
$$\begin{align}
\cos\left(\pi\frac{\sqrt{n+1}+\sqrt{n}}{2}\right)
&= \cos\left(\pi\sqrt{n}+\frac{\pi}{4\sqrt{n}} + o\left(\frac{1}{\sqrt{n}}\right)\right)\\
&= \cos \pi\sqrt{n} \cos\left(\frac{\pi}{4\sqrt{n}} + o\left(\frac{1}{\sqrt{n}}\right)\right) - \sin \pi\sqrt{n} \sin\left(\frac{\pi}{4\sqrt{n}} + o\left(\frac{1}{\sqrt{n}}\right)\right)\\
&= \cos \pi\sqrt{n} \cos(o(1)) - \sin \pi\sqrt{n}\sin(o(1)) \\
&\xrightarrow[n\to\infty]{} \beta\cdot 1 - 0\cdot 0 = \beta.
\end{align}$$
Putting it all together, this leads to
$$
a_{n+1}-a_n \operatorname*{\sim}_{n\to\infty}\frac{\beta\pi}{2\sqrt{n}}
$$
which by comparison implies that the series $$\sum_{n=0}^{\infty} (a_{n+1}-a_n)$$ diverges to $\infty$ (or $-\infty$, depending on $\beta$). But this is a contradiction, since this is a telescoping series, equal (by assumption on $(a_n)_{n\in\mathbb{N}}$ converging) to $\ell - a_0 = 0$. $\square$
The remaining issue, of course, is that we don't actually have proven ($\dagger$). But it is enough for our purposes (handwaving a bit here, but it's not hard to make it formal) to have either:
$$\cos(\pi\sqrt{n}) \xrightarrow[n\to\infty]{} \beta\in\{-1,1\}$$
or two sequences $(k_n)_n$, $(m_n)_n$ partitioning the natural numbers ($\mathbb{N} = \bigcup_n \{k_n\}\cup\{m_n\}$) such that
$$\cos(\pi\sqrt{k_n}) \xrightarrow[n\to\infty]{} -1,
\qquad
\cos(\pi\sqrt{m_n}) \xrightarrow[n\to\infty]{} 1$$
which are the only two cases that can happen knowing that $\cos^2(\pi\sqrt{n}) \xrightarrow[n\to\infty]{}1$.
Indeed, the first case we took care of; and for the second case, we can now restrict the above argument (that's the handwavy part) to either $(a_{k_{n+1}} - a_{k_n})_n$ or $(a_{m_{n+1}} - a_{m_n})_n$, knowing that at least one of the two series $\sum_{n} \frac{1}{\sqrt{k_n}}$ and $\sum_{n} \frac{1}{\sqrt{m_n}}$ has to diverge.
A: (i) Since $\left|\sin(x)\right|\le\left|x\right|$
$$
\begin{align}
\left|a_{n+1}-a_n\right|
&=\left|\sin\left(\pi\sqrt{n+1}\right)-\sin\left(\pi\sqrt{n}\right)\right|\\[6pt]
&=2\left|\cos\left(\pi\frac{\sqrt{n+1}+\sqrt{n}}2\right)\sin\left(\pi\frac{\sqrt{n+1}-\sqrt{n}}2\right)\right|\\[3pt]
&\le2\cdot1\cdot\frac\pi2\left(\sqrt{n+1}-\sqrt{n}\right)\\[3pt]
&=\frac\pi{\sqrt{n+1}+\sqrt{n}}\\[3pt]
&\le\frac\pi{2\sqrt{n}}\tag{1}
\end{align}
$$
(ii) Since $\left|\sin(x)\right|\le1$, we have
$$
\begin{align}
\left|a_n\right|
&=\left|\sin\left(\pi\sqrt{n}\right)\right|\\
&\le1\tag{2}
\end{align}
$$
(iii) The limit of the subsequence
$$
\begin{align}
\lim\limits_{n\to\infty}a_{n^2}
&=\lim\limits_{n\to\infty}\sin\left(\pi\sqrt{n^2}\right)\\[3pt]
&=\lim\limits_{n\to\infty}0\\[3pt]
&=0\tag{3}
\end{align}
$$
Since
$$
\begin{align}
\left|\,n+\tfrac12-\sqrt{n^2+n}\,\right|
&=\frac{\frac14}{n+\frac12+\sqrt{n^2+n}}\\
&\le\frac1{8n}\tag{4}
\end{align}
$$
and $\cos(x)\ge1-\frac12x^2$, we have that
$$
\begin{align}
\left|a_{n^2+n}\right|
&=\left|\sin\left(\pi\sqrt{n^2+n}\right)\right|\\
&\ge\left|\cos\left(\pi\left(n+\frac12-\sqrt{n^2+n}\right)\right)\sin\left(\pi\left(n+\frac12\right)\right)\right|\\
&-\left|\sin\left(\pi\left(n+\frac12-\sqrt{n^2+n}\right)\right)\cos\left(\pi\left(n+\frac12\right)\right)\right|\\
&=\left|\cos\left(\pi\left(n+\frac12-\sqrt{n^2+n}\right)\right)\right|\\
&\ge1-\frac12\frac{\pi^2}{64n^2}\\[6pt]
&\gt0.9\tag{5}
\end{align}
$$
for $n\ge1$.
If the sequence converged, then the limit must be the limit of the subsequence computed in $(3)$. However, $(5)$ precludes the limit from being $0$.
A: To answer the point $iii)$, note that
$$\lim_{n\to \infty} \sin ^2\left(\pi \sqrt{n^2-n}\right)=1,$$
since we have that
$$\sqrt{1-\frac{1}{n}}\approx 1-\frac{1}{2n},$$ when $n$ is large.
Q.E.D. (which you combine with $\sin{\pi\sqrt{n^2}}=0$ to show divergence)
A: Hint:
$$\sin(\pi\sqrt{n+1})-\sin(\pi\sqrt n)=2\sin\left(\pi\frac{\sqrt{n+1}-\sqrt n}2\right)\cos\left(\pi\frac{\sqrt{n+1}+\sqrt n}2\right).$$
As $$\sqrt{n+1}-\sqrt n=\frac1{\sqrt{n+1}+\sqrt n},$$ and $$\sin(x)\le x$$ in the first quadrant, the expression decreases to $0$ like $\dfrac1{\sqrt n}$.
A: (i) The mean value theorem shows
$$\tag 1 a_{n+1}-a_n = (\cos c_n)(\pi \sqrt {n+1} - \pi \sqrt n).$$
Verify that $\pi \sqrt {n+1} - \pi \sqrt n \to 0.$ Because $\cos c_n$ is bounded, $(1)\to 0$ as desired.
(ii) Obvious.
(iii) Lemma: Let $x_1< x_2 < \cdots\to \infty,$ with $x_{n+1}-x_n \to 0.$ Then $\sin x_n$ is dense in $[-1,1].$
Proof: It suffices to show $e^{ix_n}$ is dense in the unit circle. But think about it: As $n\to \infty$, $e^{ix_n}$ makes infinitely many orbits around the circle (because $x_n \to \infty$), in steps of arc length $x_{n+1}-x_n.$ Those arc lengths $\to 0.$ Thus if $A$ is any open arc on the circle, $e^{ix_n}$ has to land in $A$ infinitely many times; you can't avoid $A$ in an orbit once the steps have length less than the length of $A.$ Thus $e^{ix_n}$ is dense in the unit circle as desired.
With $x_n = \pi \sqrt n,$ we have the hypotheses of the lemma. Thus $a_n = \sin x_n$ is dense in $[-1,1].$ So certainly $a_n$ can't converge.
A: (iii) Assume instead that $a_n\to l$ for some $l\in\mathbb{R}.\;\;$ As noted by the OP, $l=0$ since $a_{n^2}\to 0$.
For any $k\ge1, \;\;\left(2k+\frac{5}{6}\right)^2-\left(2k+\frac{1}{6}\right)^2=(4k+1)(\frac{2}{3})>1$, so there is an $n_k\in\mathbb{N}$ such that
$\hspace{.4 in}\left(2k+\frac{1}{6}\right)^2<n_k<\left(2k+\frac{5}{6}\right)^2$.
Then $2k+\frac{1}{6}<\sqrt{n_k}<2k+\frac{5}{6}\implies2\pi k+\frac{\pi}{6}<\pi \sqrt{n_k}<2\pi k+\frac{5\pi}{6}\implies \sin\pi\sqrt{n_k}>\frac{1}{2}$;
and this gives a contradiction since $a_{n_k}=\sin\pi\sqrt{n_k}\not\to0$
A: The first two parts have already been answered in several answers in a pretty standard way:
*
*From $|\sin\pi\sqrt{n+1}-\sin\pi\sqrt{n}| \le \pi(\sqrt{n+1}-\sqrt{n}) = \frac\pi{\sqrt{n+1}+\sqrt{n}}$ we get that $a_{n+1}-a_n \to 0$. (We have used this inequality: Show that $|\sin{a}-\sin{b}| \le |a-b| $ for all $a$ and $b$.)
*Clearly, $|\sin\pi\sqrt{n}|\le1$.
Let me try to argue that $a_n=\sin\pi\sqrt{n}$ is not convergent slightly differently than in other answers.
The first observation is that for $n=k^2$ we have $a_n=\sin\pi\sqrt{n}=\sin\pi{k}=0$. The value zero is attained for $n=(k+1)^2$ again.
What happens for $k^2 \le n \le (k+1)^2$? We can notice that for any such $n$ we have
$$\sqrt{n+1}-\sqrt{n} = \frac1{\sqrt{n+1}+\sqrt{n}} \le \frac1{2\sqrt{n}}\le\frac1{2k}.$$
This means that the numbers $\sqrt{n}$ for $n$'s in this interval are increasing monotonically from $k$ to $(k+1)$ and the difference between two consecutive terms is at most $1/(2k)$. So there exists an $n_k$ such that
$$k+\frac12-\frac1k \le \sqrt{n_k} \le k+\frac12+\frac1k,$$
since every interval of the length $\frac1{2k}$ with the endpoints between $k$ and $(k+1)$ contains some $\sqrt{n}$.
For any $n_k$ with the above properties we have
$$|a_{n_k}|=|\sin\pi{\sqrt{n_k}}| \ge \left|\sin\pi\left(k+\frac12+\frac1k\right)\right| = \left|\sin\pi\left(\frac12+\frac1k\right)\right|.$$
Since
$$\lim_{k\to\infty} \left|\sin\pi\left(\frac12+\frac1k\right)\right| = \left|\sin\frac\pi2\right|=1$$
we see that $\lim\limits_{k\to\infty} a_{n_k} \ne 0$.
We found two subsequences such that one of them converges to zero and the other one does not. So the sequence $(a_n)$ is not convergent.
A: (i) $a_{n+1}-a_{n} \to 0$
Instead of $\sin(\pi\sqrt{n})$ we will be looking at $e^{i\pi\sqrt{n}} = \cos(\pi\sqrt{n}) + i\sin(\pi\sqrt{n})$
So we have $$\lim\limits_{n \to \infty} e^{i\pi\sqrt{n+1}} - e^{i\pi\sqrt{n}} =$$
$$\lim\limits_{n \to \infty} e^{i\pi\sqrt{n}}(e^{i\pi(\sqrt{n+1}-\sqrt{n})} - 1) $$
Now we have that $\lim\limits_{n \to \infty} \sqrt{n+1}-\sqrt{n} = 0$ because for each $\epsilon$ there is $m$ so that for all $n>m$ we have $ \sqrt{n+1}-\sqrt{n} < \epsilon $ In order to find this $m$ it is sufficient to solve $ \sqrt{m+1}-\sqrt{m} = \epsilon $ for which we have
$$m=\frac{(\epsilon^2-1)^2}{4\epsilon^2}$$ so as soon as $n > \frac{(\epsilon^2-1)^2}{4\epsilon^2}$ we have $ \sqrt{n+1}-\sqrt{n} < \epsilon $
Since $e^{i\pi\sqrt{n}}$ is bounded and the second factor tends to $0$, the product of a bounded sequence and a null sequence is null:
$$\lim\limits_{n \to \infty} e^{i\pi\sqrt{n}}(e^{i\pi(\sqrt{n+1}-\sqrt{n})} - 1)=0$$
Taking imaginary parts, $\lim\limits_{n \to \infty} \left(\sin(\pi\sqrt{n+1}) - \sin(\pi\sqrt{n})\right) = 0$, i.e. $a_{n+1}-a_n \to 0$.
(ii) The sequence is bounded
For each element we have $-1 \leq a_{n} \leq 1$ because $-1 \leq \sin(x) \leq 1$. (In fact $a_n = 0$ for infinitely many $n$: whenever $n$ is a perfect square, $\sqrt{n}$ is an integer $k$ and $\sin(k\pi)=0$.)
(iii) Sequence does not converge
We will compare with the associated integral, thus avoiding the problem of the square root. (For a nonnegative decreasing integrand, an infinite series converges if and only if its associated integral converges; here the integrand oscillates, so we examine the integral directly.)
In our case the integral is $\int\limits_{1}^{\infty} e^{i\pi\sqrt{x}} dx$ and we are interested in its imaginary part.
The integral is solved first by the substitution $t=\sqrt{x}$, $dt=\frac{dx}{2\sqrt{x}}$, which makes $2t\,dt=dx$, having
$$\int\limits_{1}^{\infty} e^{i\pi\sqrt{x}} dx=\int\limits_{1}^{\infty} e^{i\pi t} 2tdt$$
then integration by parts with $u=2t$, $dv=e^{i\pi t} dt$, finally having
$$\int\limits_{1}^{\infty} e^{i\pi\sqrt{x}} dx=\frac{2e^{i\pi\sqrt{x}}(1-i\pi\sqrt{x})}{\pi^2}|_{1}^{\infty}$$
Since we are interested in imaginary part we can extract it as
$$Im[\int\limits_{1}^{\infty} e^{i\pi\sqrt{x}} dx]=\frac{2\sin(\pi\sqrt{x})}{\pi^2}-\frac{2\cos(\pi\sqrt{x})\sqrt{x}}{\pi} |_{1}^{\infty}$$
and this does not converge, so the series does not converge either.
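Numerically, the behaviour used in the various answers is easy to observe (a Python sketch: $a_{k^2} = 0$ exactly, $a_{k^2+k}$ stays near $\pm 1$, and consecutive differences shrink like $1/\sqrt{n}$):

import math

a = lambda n: math.sin(math.pi * math.sqrt(n))
for k in (10, 100, 1000):
    n = k * k
    print(k, a(n), a(n + k), a(n + 1) - a(n))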
|
3fd6da81442541f6143d8bbd6c713b2a9dc59229
|
Q: Two Subspaces Being Equal I know that having the same dimension is one of the conditions for 2 subspaces being equal. What other conditions do I need to check to see if 2 subspaces (null space and column space) are equal?
A: Testing if two arbitrary subspaces are equal is one thing, but testing if the null space and the column space of a matrix $A$ are equal is a much more specialized question. I'll answer that question.
The column space of a matrix $A$ is the range of the matrix: the set of all places it could send a vector, i.e. $\{y \mid \mathbf{A}x = y\}$.
The null space of a matrix is the set of all vectors it kills: everything it sends to $0$, i.e. $\{x \mid \mathbf{A}x = 0\}$.
If the null space equals the column space, that means a few things:
*
*the matrix must be $n \times n$ (because you're applying it to its own outputs)
*$n$ must be an even number, by the rank-nullity theorem: a matrix acting on $\mathbb{R}^n$ must have (number of dimensions it kills) + (number of dimensions it preserves) equal to $n$. $\dim{(\text{column space})}$ is the number of dimensions it preserves (its rank) and $\dim{(\text{null space})}$ is the number of dimensions it kills.
*the matrix must be nilpotent (since $\mathbf{A}y = 0 \implies \mathbf{A}(\mathbf{A}x) = 0 \implies \mathbf{A}^2x = 0$)
and additional conditions are given here.
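The smallest example (a numpy sketch): the standard nilpotent $2 \times 2$ matrix, whose column space and null space are both $\operatorname{span}\{(1,0)\}$:

import numpy as np

A = np.array([[0., 1.],
              [0., 0.]])
print(A @ A)  # the zero matrix, so A is nilpotent
# Col(A) = span{(1,0)} (the nonzero column), and A x = 0 forces x2 = 0,
# so Null(A) = span{(1,0)} as well.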
|
a1cdccc1ed244a9d0378dd4bbebf568f3996a436
|
Q: Examine convergence of $\sum_{n=1}^{\infty}(e-(1+\frac{1}{n})^n)^\alpha$ depening on $\alpha$ Determine for what $\alpha$ following series converge:
$$
\sum_{n=1}^{\infty}\Big(e-(1+\frac{1}{n})^n\Big)^\alpha
$$
I tried to do it by using following ineqaulity:
$$
1+1+\frac{1}{2!}+\cdots+\frac{1}{n!}-(1+\frac{1}{n})^n< e-(1+\frac{1}{n})^n<(1+\frac{1}{n})^{n+1}-(1+\frac{1}{n})^n
$$
The right side is easy to work with, but I don't know how to proceed with the left side of this inequality.
A: Write this as
$$\sum_{n=1}^{\infty}\left[e-\left(1+\frac{1}{n}\right)^n\right]^\alpha=e^\alpha\sum_{n=1}^{\infty}(1-a_n)^\alpha,$$
where
$$a_n = e^{-1}\left(1+\frac{1}{n}\right)^n.$$
Note that $0 < a_n < 1 $ and $a_n \to 1$ as $n \to \infty.$
Using the inequality $x < -\ln(1-x) < x/(1-x)$ for $0 < x < 1$, we have
$$1-a_n < -\ln a_n = -\ln (1 - (1-a_n))< \frac{1-a_n}{a_n}.$$
Hence,
$$ 1 < \frac{-\ln a_n}{1-a_n} < \frac{1}{a_n}.$$
It follows from the Squeeze Theorem that
$$\lim_{n \to \infty} \frac{-\ln a_n}{1-a_n} = 1.$$
Therefore, the original series converges if and only if we have convergence of the series
$$\sum_{n=1}^{\infty}(-\ln a_n)^\alpha.$$
Using the Taylor series expansion $\ln(1+x) = x - x^2 /2 +O(x^3)$, it is easy to show that
$$(-\ln a_n)^\alpha = [1 - n \ln (1 + 1/n)]^\alpha \sim (1/2n)^\alpha.$$
Therefore, the series converges for $\alpha > 1$ and diverges for $\alpha \leqslant 1$.
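The asymptotics $e - (1 + 1/n)^n \sim \frac{e}{2n}$ underlying the last step can be seen numerically (a Python sketch):

import math

for n in (10**2, 10**4, 10**6):
    print(n, n * (math.e - (1 + 1/n)**n) / math.e)  # tends to 1/2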
|
e743ab65fe07639bcfc27cf74d6c43089cbe59f2
|
Q: Is it true that taking injective hull commutes with the tensor product? Let $M$ and $N$ be two modules (can assume them to be finitely generated if need be) over the ring $A=k[x_0,...,x_n]$. Denote by $E(M)$ the injective hull of $M$. We work in the category of positively graded modules where $\deg x_i=d_i>0$.
My question:
Is it true that $E(E(M)\otimes_A E(N)) \cong E(M)\otimes_A E(N)$? What hypothesis would be needed to achieve this result?
|
9926a414b2eaf678e1ba1ec44bf98e1a7dcf048b
|
Q: Is $ \operatorname{rank}A =\operatorname{rank} A^T$? Assume A is an $m\times n$ matrix with real-valued entries. Is it always true that $\operatorname{rank} A = \dim \operatorname{Col} A = \dim\operatorname{Row} A = \dim \operatorname{Col} A^T = \operatorname{rank} A^T$?
A: Yes, it's true. You can prove it using the fact that the range of $A^T$ (denoted here by $\text{Im}(A^T)$) is the orthogonal complement of the null space of $A$ (denoted here by $\text{null}(A)$). This tells us that
\begin{equation}
\text{dim null}(A) + \text{dim Im}(A^T) = n.
\end{equation}
From the rank-nullity theorem, we have:
\begin{equation}
\text{dim null}(A) + \text{dim Im}(A) = n.
\end{equation}
Comparing these two equations, we see that $\text{dim Im}(A) = \text{dim Im}(A^T)$.
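A numerical illustration (a numpy sketch with a random rectangular matrix):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))  # both 5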
|
8e814f58cd5a03148b3c813b54010afe6f0c645a
|
Q: How to integrate $\int (\tan x)^{1/ 6} \,\text{d}x$? How do I compute the following integral
$$
I=\int (\tan x)^{1/ 6} \,\text{d}x
$$
A: The final answer may be complicated, as mentioned in the comments. But here are the steps that you may take to get there.
First use the substitution $\tan x=u$ and then use another substitution $u=t^6$ to get
$$\begin{align}
I &= \int (\tan x)^{\frac 16} dx \\
&= \int \frac{u^{\frac 16}}{1+u^2} du \\
&= \int \frac{6t^6}{1+t^{12}} dt \\
&= 6 \int \frac{t^6}{1+t^{12}} dt
\end{align}$$
To tackle the last integral you should use partial fractions. You may worry that the denominator has no real roots; the answer is that the partial fraction decomposition works for complex roots as well.
A: Given you said you "won't even try" to solve this integral, try Mathematica:
$$\frac{-2 \left(\sqrt{3}-1\right) \tan ^{-1}\left(\frac{-2 \sqrt{2} \sqrt[6]{\tan
(x)}+\sqrt{3}-1}{1+\sqrt{3}}\right)+4 \tan ^{-1}\left(1-\sqrt{2} \sqrt[6]{\tan
(x)}\right)-4 \tan ^{-1}\left(\sqrt{2} \sqrt[6]{\tan (x)}+1\right)+2
\left(\sqrt{3}-1\right) \tan ^{-1}\left(\frac{2 \sqrt{2} \sqrt[6]{\tan
(x)}+\sqrt{3}-1}{1+\sqrt{3}}\right)-2 \left(1+\sqrt{3}\right) \tan ^{-1}\left(-2
\sqrt{2+\sqrt{3}} \sqrt[6]{\tan (x)}+\sqrt{3}+2\right)+2 \left(1+\sqrt{3}\right) \tan
^{-1}\left(\left(\sqrt{2}+\sqrt{6}\right) \sqrt[6]{\tan (x)}+\sqrt{3}+2\right)-2 \log
\left(\sqrt[3]{\tan (x)}-\sqrt{2} \sqrt[6]{\tan (x)}+1\right)+2 \log
\left(\sqrt[3]{\tan (x)}+\sqrt{2} \sqrt[6]{\tan (x)}+1\right)-\left(1+\sqrt{3}\right)
\log \left(2 \sqrt[3]{\tan (x)}+\sqrt{2} \left(\sqrt{3}-1\right) \sqrt[6]{\tan
(x)}+2\right)+\left(\sqrt{3}-1\right) \log \left(2 \sqrt[3]{\tan (x)}-2
\sqrt{2+\sqrt{3}} \sqrt[6]{\tan (x)}+2\right)+\left(1+\sqrt{3}\right) \log \left(2
\sqrt[3]{\tan (x)}+\left(\sqrt{2}-\sqrt{6}\right) \sqrt[6]{\tan
(x)}+2\right)-\left(\sqrt{3}-1\right) \log \left(2 \sqrt[3]{\tan
(x)}+\left(\sqrt{2}+\sqrt{6}\right) \sqrt[6]{\tan (x)}+2\right)}{4 \sqrt{2}}$$
...just in case you need the answer.
A: HINT (the integral will become very very big):
$$\int\left(\tan(x)\right)^{\frac{1}{6}}\space\text{d}x=\int\sqrt[6]{\tan(x)}\space\text{d}x=$$
Substitute $u=\tan(x)$ and $\frac{\text{d}u}{\text{d}x}=\sec^2(x)$:
$$\int\frac{\sqrt[6]{u}}{1+u^2}\space\text{d}u=$$
Substitute $s=\sqrt[6]{u}$ and $\frac{\text{d}s}{\text{d}u}=\frac{1}{6\sqrt[6]{u^5}}$:
$$\int\frac{6s^6}{1+s^{12}}\space\text{d}s=6\int\frac{s^6}{1+s^{12}}\space\text{d}s=$$
$$6\int\left[\frac{s^6+s^2}{3(s^8-s^4+1)}-\frac{s^2}{3(s^4+1)}\right]\space\text{d}s=$$
$$6\left[\int\frac{s^6+s^2}{3(s^8-s^4+1)}\space\text{d}s-\int\frac{s^2}{3(s^4+1)}\space\text{d}s\right]=$$
$$6\left[\frac{1}{3}\int\frac{s^6+s^2}{s^8-s^4+1}\space\text{d}s-\frac{1}{3}\int\frac{s^2}{s^4+1}\space\text{d}s\right]=$$
$$6\left[\frac{1}{3}\int\frac{s^6+s^2}{s^8-s^4+1}\space\text{d}s-\frac{1}{3}\int\left[-\frac{s}{2\sqrt{2}\left(-s^2+\sqrt{2}s-1\right)}-\frac{s}{2\sqrt{2}\left(s^2+\sqrt{2}s+1\right)}\right]\space\text{d}s\right]$$
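For the record, the constant produced by the substitution can be double-checked symbolically (a sympy sketch; the factor $6$ comes from $\mathrm{d}u = 6s^5\,\mathrm{d}s$):

import sympy as sp

u, s = sp.symbols('u s', positive=True)
expr = u**sp.Rational(1, 6) / (1 + u**2)  # the integrand after u = tan(x)
print(sp.simplify(expr.subs(u, s**6) * 6*s**5))  # 6*s**6/(s**12 + 1)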
|
12193dc863e18eae550fb17422c7cfb923e21414
|
Q: Prove $P(A|B^c)=P(A)$
Given that $A$ and $B$ are independent, Prove $P(A|B^c)=P(A)$
$P(A|B^c)=P(A\cap B^c)/P(B^c)=P(A)$ if $A$ is independent of $B^c$, which is not given.
Am I wrong?
A: Well, then you will need to show that $A$ and $B^{c}$ are independent if $A$ and $B$ are.
So we need to show $P(A \cap B^{c}) = P(A)P(B^{c})$.
Let's do this using other equations we already know that involve some of the above terms:
*
*$P(A) = P(A \cap B) + P(A \cap B^{c})$, right?
*Also, since $A$ and $B$ are independent, $P(A \cap B) = P(A)P(B)$.
So substituting 2. into 1. gives us $P(A) = P(A)P(B) + P(A \cap B^{c})$, or rearranged: $$P(A) - P(A)P(B) = P(A \cap B^{c}).$$
Now, if only the left hand side (LHS) equaled $P(A)P(B^{c})$, we'd be done!
But wait:
$$\text{LHS} = P(A) - P(A)P(B) = P(A)(1 - P(B))$$ and since $1 - P(B) = P(B^{c})$, we do get that the left hand side equals $P(A)P(B^{c})$.
So, $P(A \cap B^{c}) = P(A)P(B^{c})$, which means $A$ and $B^{c}$ are independent if $A$ and $B$ are.
A: Remember that
$$P(A) = P(B)P(A|B)+P(B^c)P(A|B^c).$$
Now, using the assumption that $A$ and $B$ are independent, $P(A \cap B) = P(A)P(B)$, we get $P(A|B) = P(A \cap B)/P(B) = P(A)$. Now, solving the first equation for $P(A|B^c)P(B^c)$, we get
$$P(A|B^c)P(B^c) = P(A)-P(A)P(B) = P(A)(1-P(B)) = P(A)P(B^c),$$
from which we can solve $P(A|B^c) = P(A)$, which is what we wanted.
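A quick Monte Carlo sanity check of the identity (a Python sketch with $P(A) = 0.3$ and $P(B) = 0.5$, independent by construction; the seed and sample size are arbitrary):

import random

random.seed(0)
N = 10**6
trials = [(random.random() < 0.3, random.random() < 0.5) for _ in range(N)]
p_a = sum(a for a, b in trials) / N
p_a_not_b = sum(a and not b for a, b in trials) / sum(not b for _, b in trials)
print(p_a, p_a_not_b)  # both approximately 0.3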
|
18f6d05aa8006c96ed4455ddacfc88e52b8f6c66
|
Q: Conditional probability - Proof I interpret this as the statement that if A is true then B is more likely.
I have
$P(B \mid A) > P(B)$
Now I want to formulate and prove:
*
*If not-B is true, then A becomes less likely
*If not-A is true, then B becomes less likely
If I get help with the first one I am sure I will be able to do the second one myself.
Thanks!
A: What we want to show for the first is $P(A)>P(A|B^c)$. Let's start by using the law of total probability, as suggested by Alex. The law states that
$$P(A) = P(B^c)P(A|B^c)+P(B)P(A|B).$$
Now, we can solve for $P(A|B^c)$ and we get
$$P(A|B^c) = \frac{P(A)-P(B)P(A|B)}{P(B^c)}.$$
Noting that $P(B)P(A|B) = P(A \cap B) = P(A)P(B|A)$, we get
$$P(A|B^c) = \frac{P(A)-P(A)P(B|A)}{P(B^c)}.$$
Since we assumed that $P(B|A) > P(B)$ and using $P(B^c)=1-P(B)$, we get the inequality
$$P(A|B^c) = \frac{P(A)-P(A)P(B|A)}{P(B^c)} < \frac{P(A)-P(A)P(B)}{1-P(B)} = P(A)\frac{1-P(B)}{1-P(B)} = P(A).$$
For the second statement, just follow similar steps and you should get to an answer.
|
4dd048ca0ebaa29d88a54011a952bc3dc96929fc
|
Q: Conditions for embedding between non-oriented graphs I have the following assignment on my Algorithms Analysis course.
Given two undirected graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$ with $\operatorname{card} (V_1) < \operatorname{card} (V_2)$ and $\operatorname{card} (E_1) < \operatorname{card} (E_2)$, is the graph $G_1$ isomorphic with a subgraph of $G_2$? The proof must be based on a non-deterministic algorithm.
A: Pick, non-deterministically, an injection $f:V_1 \to V_2$ (in other words, pick, once and for all, a numbering of the vertices of $G_1$, and pick, non-deterministically, an ordered list of $|V_1|$ elements from $V_2$). Check whether this is an isomorphism onto its image subgraph. If not, pick another injection. Rinse and repeat.
The isomorphism check is naively linear in $|E_1|\cdot |E_2| < |V_1|^2 \cdot |V_2|^2$: For every pair $(e_1, e_2)$ with $e_1 \in E_1, e_2 \in E_2$, see whether the end points of $e_1$ are sent to the endpoints of $e_2$ by $f$. There are $|E_1| \cdot |E_2|$ such pairs. The check is therefore polynomial time, and thus the whole algorithm shows the problem is in $NP$.
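The deterministic analogue of this guess-and-check loop is a brute-force search over all injections (a Python sketch, exponential overall; the test inside the loop is the polynomial-time verification step):

from itertools import permutations

def embeds(V1, E1, V2, E2):
    E2set = {frozenset(e) for e in E2}
    for image in permutations(V2, len(V1)):  # all injections V1 -> V2
        f = dict(zip(V1, image))
        if all(frozenset((f[a], f[b])) in E2set for a, b in E1):
            return True
    return False

K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(embeds([0, 1, 2], [(0, 1), (1, 2), (0, 2)], range(4), K4))  # a triangle embeds in K4: True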
|
55f0d3b7d9fc580c01dae9a0050496692f098ac7
|
Q: GNS construction of a weight In the theory of quantum groups in the operator algebraic setting, one deals with weights (instead of positive linear functionals).
Definition: A weight is a function $\phi $ : $A^+ \rightarrow [0, \infty]$ such that
1) $\phi(a + b) = \phi(a) + \phi(b)$ for all $a, b \in A^+$
2) $\phi(\lambda a) = \lambda \phi(a)$ for all $a \in A^+$ and all $\lambda \in \mathbb{R}^+$
In the literature it is said that one can make a unique GNS-construction (By considering $\mathcal{N}_\phi = \{a \in A; \phi(a^*a) < \infty \}$) for any weight on a $C^*$-algebra. However, how can one define the "sesquilinear" form $(a| b) = \phi(b^*a)$ with a weight, since it is not linear?
A: What you do is to define $\phi $ by linearity on $$\mathcal M_\phi=\text {span}\,\{a\in A^+:\ \phi (a)<\infty\}. $$ Then the inequality from your comment shows that $b^*a\in \mathcal M_\phi $ whenever $a,b\in\mathcal N_\phi $.
|
70d2c46294afa832a7f9b7cc9a19c0c652f88d4e
|
Q: Do projections preserve closed subspaces Let $H$ be a Hilbert space and let $\pi \colon H \to H$ be an orthogonal projection. Let $E \subset H$ be a closed subspace of $H$.
My question: Is there any hope that one can conclude that $\pi(E)$ is closed?
A: This is not true. Let's take $H=L^2(\mathbb R)$ and $E=PW_1$ as the subspace of all $f$ whose Fourier transform is supported by $[-1,1]$ (the Paley-Wiener space). Let $\pi$ be the projection onto $L^2(-1,1)$.
Then $f=\chi_{(-1,1)}(x)$ is in $\overline{\pi(E)}$: this follows because
$$
\frac{\sin ax}{ax} = \frac{1}{2a}\int_{-a}^a e^{itx}\, dt \in E
$$
for all small $a>0$, and clearly these functions converge to $f$ uniformly on $(-1,1)$ as $a\to 0$.
However, $f\notin\pi(E)$ because the functions from $E$ are entire, but the holomorphic continuation of $f$ from $(-1,1)$ to $\mathbb R$ is identically equal to $1$ and thus not in $L^2(\mathbb R)$.
|
eba94fb6d73ef126859a5aab4ad20704f68c73e2
|
Q: Simple Monte-Carlo approximation of $\pi$ and integration using Matlab. Hi, I am new to programming and the following task that my tutor wants me to handle was a bit too hard to begin with. I hope someone can help me; I need the code in Matlab.
*
*You are throwing darts at a circular dartboard (radius one half of a unit length) that is fixed to a square background (with side one unit length). The dart hits this constellation at a point $(x,y)$, which is a random variable. We want to use this idea to approximate the value of $\pi$.
*I want to use the same idea to approximate $\int \limits _0 ^1 \sqrt[3]{1-x^3} {\rm d} x$.
Thanks in advance.
A: hint :
N = 2^16;                                   % number of random darts
X = rand(2,N)*2-1;                          % points uniform in the square [-1,1]^2
norm2_X = sum(X.*X,1);                      % squared distance of each point from the origin
nb_inside_the_circle = sum(norm2_X <= 1);   % darts that land inside the unit disc
pi_estimated = 3.14;                        % placeholder -- this is the line to replace
fprintf('pi_estimated = %f\n',pi_estimated);
add one more line to get an estimation for $\pi$.
A: Some comments, hints, and code that should help you finish this
assignment.
1. Here is some R code for the first part. Your question is difficult for me to understand.
I hope my comments are relevant.
The idea is to put a million points at random
into a square of area 4 centered at the origin. Then we find the
percentage of points in that data cloud (the translated "constellation"?) that also lie in the inscribed circle. Finally we convert
that percentage to provide an estimate of $\pi.$
m = 10^6; x = runif(m, -1, 1); y = runif(m, -1, 1)
in.circle = (x^2 + y^2 <= 1) # logical variable (values T and F)
mean(in.circle) # mean of logical vector is proportion of T's
## 0.785467 # proportion of pts in square that are also in circle
4*mean(in.circle) # approx. value of pi
## 3.141868
The plot below shows 30,000 points similarly generated in the
square (light grey) and the ones of them also in the circle (dark blue). [A million points would have made a dense "blob" of points---not a readable plot, but more points
give a better numerical estimate of $\pi.$]
I am not a good enough Matlab programmer to write exemplary code,
and, in any case, do not want to do the assignment for you. Together with the
Answer by @user1952009 that gives some Matlab code, maybe you can write suitable code,
and annotate it with explanations.
2. For the second part, you should first sketch the area represented
by the integral. Then you might ponder what fraction of points
in the unit square with vertices $(0,0)$ and $(1,1)$ lies
beneath the curve being integrated. This part illustrates
one of several methods of 'Monte Carlo' integration. It is
sometimes called an 'acceptance-rejection' method because points
below the curve are accepted and those above are rejected.
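A minimal acceptance-rejection sketch for this part (written in Python for brevity; the same few lines translate directly to Matlab or R): draw $(x, y)$ uniformly in the unit square and accept when $y \le \sqrt[3]{1-x^3}$; the acceptance fraction estimates the integral.

import random

random.seed(1)  # arbitrary seed
N = 10**6
hits = 0
for _ in range(N):
    x, y = random.random(), random.random()
    if y <= (1 - x**3) ** (1/3):
        hits += 1
print(hits / N)  # approx 0.8833, matching the deterministic value below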
By contrast, the R code below approximates the area under the curve
by 3000 rectangles, all of width 1/3000. The height of each rectangle is the value
of the function at the midpoint of its base. This is numerical
integration (with no randomness) based on the definition of the
Riemann integral. Your simulation in the second part should
result in an answer close to the one below.
m = 3000; a = 0; b = 1
w = (b-a)/m # width of rectangles
g = seq(a+w/2, b-w/2, length=m) # grid of centers
h = (1 - g^3)^(1/3) # heights at centers of rectangles
sum(w * h)
## 0.8833213 # deterministic approx of integral
Note: For a given number of iterations $m,$ an integral over an interval on the real line is frequently more accurately
evaluated by Riemann approximation
than by Monte Carlo integration. However,
for multiple-dimensional integration, Monte Carlo methods are
frequently superior.
|
ace72d87f31697f7bff0ecbd46f52fdbf889a1a0
|
Q: Show that the set $\{\alpha \in F_{5^6} \mid F_5(\alpha)=F_{5^6}\}$ contains 15480 elements Question: Show that the set $\{\alpha \in F_{5^6} \mid F_5(\alpha)=F_{5^6}\}$ contains $15480$ elements
Since this number is so large, I think there is some trick to get the answer.
Also $15480$ is quite near to $5^6$, since $5^6-15480=145$.
Would these $145$ elements $\alpha$ be extended roots?
Any help would be much appreciated
A: Note that $\Bbb F_{5^6}$ contains exactly one(!) subfield isomorphic to $\Bbb F_{5^3}$ and one isomorphic to $\Bbb F_{5^2}$, and they intersect in $\Bbb F_5$. As every proper subfield must be contained in one of these subfields, we conclude that $\Bbb F_{5}(\alpha)\ne \Bbb F_{5^6}$ iff $\alpha\in\Bbb F_{5^3}\cup \Bbb F_{5^2}$. Hence there are $|\Bbb F_{5^3}|+|\Bbb F_{5^2}|-|\Bbb F_{5^3}\cap\Bbb F_{5^2}|=5^3+5^2-5=145$ "bad" $\alpha$. The set in question therefore contains $5^6-145=15625-145=15480$ elements.
|
af05cc66ca4b7df809c2ca53d3def520be171912
|
Q: Calculating the standard deviation involving a moment generating function An actuary determines that the claim size for a certain class of accidents is a random variable, X, with moment generation function:
$$M_X(t)=\frac{1}{(1-2500t)^4}.$$
Calculate the standard deviation of the claim size for this class of accidents.
(Note: the shortcut for this question is seeing that it's a gamma distribution function with parameters 4 and 2500. Therefore, Var(X) will be $4\cdot 2500^2 = 25,000,000$ and the standard deviation will then be $\sqrt{25,000,000} = 5,000$. However, let's assume we cannot see this.)
From what I understand, the SD can be found by doing this calculation:
$$SD(X)=\sqrt{M''(0)-[M'(0)]^2}$$
The reason for that is that $SD(X)=\sqrt{Var(X)}=\sqrt{E(X^2)-[E(X)]^2}$ and $E(X^n) = M^{(n)}(0)$.
($M^{(n)}(0)$ is the $n$-th derivative of $M_X$ evaluated at $0$.)
Why is $E(X^n) = M^{(n)}(0)$ true?
A: By definition, $M(t)=\int_\Omega e^{tX}dP$. Then $M^{(n)}(t)=\int_\Omega X^ne^{tX}dP$ (this follows by differentiating under the integral sign $n$ times). Then $M^{(n)}(0)=\int_\Omega X^ndP$, and the last expression is precisely $E[X^n]$.
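For the MGF in this question, this gives a direct check of the gamma shortcut:
$$M_X'(t)=10000\,(1-2500t)^{-5},\qquad E(X)=M_X'(0)=10000,$$
$$M_X''(t)=125,000,000\,(1-2500t)^{-6},\qquad E(X^2)=M_X''(0)=125,000,000,$$
$$Var(X)=125,000,000-10000^2=25,000,000,\qquad SD(X)=\sqrt{25,000,000}=5000.$$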
|
a001b387da558dc5f63df42b7527f35b2fcaf717
|
Q: Given a line's parametric equations, and a point, how do I find the closest point on that line to that point? I thought of using the dot product set to $0$, but I'd need two vectors, and I only have one if I use the parametric equations as the $x, y, z$ components of a vector.
This is the example
Line: $l = [(t,14-t,t-5)]$
Point: I have is $A(1,3,4)$
How do I approach this?
A: Minimize using the distance formula. The distance of a point on the line to A at time $t$ is $$d(t)=\sqrt{(1-t)^2+(3-(14-t))^2+(4-(t-5))^2}.$$ Then differentiate to find the minimum. Note: one can minimize the distance squared.
A: Your idea of using a dot product was good. The two vectors that you need for this are the vector from $A$ to a point on the line and the vector from some other arbitrary point on the line to that one. A convenient choice is to set $t=0$, so you end up having to solve $$
(l(t)-[1,3,4])\cdot(l(t)-l(0)) = [t-1,11-t,t-9]\cdot[t,-t,t] = 3t^2-21t = 0.
$$ This has two solutions, but you can discard $t=0$ since that corresponds to the zero vector, which is orthogonal to everything. The other solution, $t=7$ (giving the closest point $(7,7,2)$ on the line), happily agrees with the one you get by minimizing the distance function as suggested by Brad A.M..
|
d87e6d8aa7eb9ea0f84c39051d1416146001ad7e
|
Q: Eliminating the linear term in a cubic Let $f(x)=ax^3+bx^2+cx+d$. It is known that by substituting $$x=t-\frac{b}{3a}$$ we get a depressed cubic, which does not have a quadratic term.
My question is: Is there a substitution that will remove the linear term instead?
A: Hint: take $x=y+k$, $k\in\mathbb{R}$, replace in the equation and find $k$ so that the linear term vanishes.
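Carrying out the hint explicitly:
$$f(y+k)=ay^3+(3ak+b)y^2+(3ak^2+2bk+c)y+f(k),$$
so the linear term vanishes exactly when $3ak^2+2bk+c=0$, i.e. $k=\dfrac{-b\pm\sqrt{b^2-3ac}}{3a}$ (a real such $k$ exists provided $b^2\ge 3ac$).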
|
559f783a8763698568f9fe7e8c9db7ae23528184
|
Q: Divisibility by 101; a problem with induction I was trying to show that $10^{2n}+(-1)^{n+1}$ is divisible by $101$. Would anyone help me with the induction step please?
A: Working modulo $101$: $10^{2n}= 100^n\equiv (-1)^n \pmod{101}$, so $10^{2n}+(-1)^{n+1}\equiv(-1)^n+(-1)^{n+1}=0\pmod{101}$, and the result follows.
A: $$10^{2(n+1)}+(-1)^{(n+1)+1}=100\cdot10^{2n}-(-1)^{n+1}=101\cdot10^{2n}-\underbrace{(10^{2n}+(-1)^{n+1})}_{\text{multiple of }101}$$
A: Use the standard fact that $a-b \mid a^n-b^n$ as follows :
$$101=100-(-1)\mid 100^n-(-1)^n=10^{2n}+(-1)^{n+1}$$
A: Hint. You should try writing out the sequence. For instance, for $n = 1$, we have $101$; for $n = 2$, we have $9999$; and so on. Then try dividing the second term by the first term, the third term by the second term, the fourth term by the third term. You should see a pattern that will help you construct the induction step. (It will be convenient to break it down by even and odd $n$, as hinted at by the $(-1)^{n+1}$.)
A: Notice the following steps of mathematical induction,
*
*Setting $n=1$ in the given number $10^{2n}+(-1)^{n+1}$ gives $$10^{2\cdot 1}+(-1)^{1+1}=101,$$ so the number is divisible by $101$ for $n=1$.
*Assume that $10^{2n}+(-1)^{n+1}$ is divisible by $101$ for $n=k$; then $$10^{2k}+(-1)^{k+1}=101m$$
or $$10^{2k}=101m-(-1)^{k+1}\tag 1$$
where, $m$ is some integer
*setting $n=k+1$, $$10^{2(k+1)}+(-1)^{k+1+1}$$
$$=100\cdot 10^{2k}-(-1)^{k+1}$$
setting the value of $10^{2k}$ from (1),
$$=100(101m-(-1)^{k+1})-(-1)^{k+1}$$
$$=101(100m)-101(-1)^{k+1}$$
$$=101(100m-(-1)^{k+1})$$
Since $(100m-(-1)^{k+1})$ is an integer, the number $101(100m-(-1)^{k+1})$ is divisible by $101$;
thus $10^{2n}+(-1)^{n+1}$ is divisible by $101$ for $n=k+1$.
Hence, $10^{2n}+(-1)^{n+1}$ is divisible by $101$ for all integers $n\ge 1$.
|
d3d250a11424642a498c9e8140e4d84f349b778d
|
Q: What does the quadratic form $0.5x^T Ax-b^Tx$ find the minimum of? I'm trying to work through example 2, from here.
We start by defining a symmetric positive definite matrix $A$:
$\begin{pmatrix}
1.2054 & 0.6593 &1.2299 & 1.2577 & 1.0083\\
0.6593 & 0.5179 & 0.5460 & 0.8562 & 0.4608\\
1.2299 & 0.5460 & 2.6713 & 1.8403 & 1.6162\\
1.2577 & 0.8562 & 1.8403 & 2.5114 & 1.6419\\
1.0083 & 0.4608 & 1.6162 & 1.6419 & 1.4266
\end{pmatrix}$.
Next, we define a vector $b$ with random numbers as entries:
$\begin{pmatrix}
0.0258\\
0.1957\\
0.9065\\
0.3823\\
0.7864
\end{pmatrix}$
Then, we compute $xs=A^{-1}b$.
Finally, we feed this the quadratic form:
$0.5\,xs^T A\,xs-b^T xs$
and, with my numbers, get $-4.46333$, which doesn't appear anywhere in the matrix $A$. So if this is supposed to be an exact minimum, what's it supposed to be an exact minimum of?
A: I think, while $A$ is a matrix and $b$ is a vector, one tries to find a vector $x$ such that $0.5x^TAx-b^Tx$ is minimal. The minimal value of your form might then be $-4.46333$.
Setting the gradient with respect to $x$ to zero (note that $x$ occurs twice in $0.5x^TAx$; because $A$ is symmetric the two contributions are equal and the factor $0.5$ cancels), one gets
$$Ax-b =0$$
$$Ax=b$$
$$x = A^{-1}b,$$
which is exactly the $xs$ computed in the question.
A: It is the minimum value of the quadratic form $J(x) = 0.5 x^T A x - b^T x$.
Consider the vector $b \in \mathbb{R}^n$ and $A \in M_{n\times n}(\mathbb{R})$ a symmetric positive definite matrix; we have the quadratic form $J:\mathbb{R}^n \to \mathbb{R}$, which is a function of $n$ variables.
From Calculus, we know the point of minimum value is a critical point of $J$.
Then, we have to search for $x^*$ such that
$$
\nabla J(x^*) = 0.
$$
It is easy to verify that the gradient of $J$ is given by,
$$
\nabla J(x) = Ax - b.
$$
So we want $x^*$ such that
$$
A x^* - b = 0 \quad \Longrightarrow A x^* = b \quad \Longrightarrow x^* = A^{-1}b.
$$
We can invert the matrix $A$ because it is positive definite and, consequently, non-singular. The linear system has a unique solution.
It remains to be seen whether $x^*$ is actually a point of minimum. But the Hessian matrix of $J(x)$ is $A$, which is positive definite, so $x^*$ is indeed a point of minimum.
Now the minimum value of $J(x)$ is given by
$$J(x^*) = 0.5 (x^*)^T A x^* - b^T x^* = 0.5 (A^{-1}b)^T A A^{-1}b - b^T A^{-1}b = -0.5\, b^T A^{-1} b.$$
There is no reason for this number to appear anywhere in the matrix $A$.
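For a quick numerical sanity check, here is a small Matlab sketch using the $A$ and $b$ from the question (it should reproduce the $-4.46333$ reported above):
A = [1.2054 0.6593 1.2299 1.2577 1.0083;
     0.6593 0.5179 0.5460 0.8562 0.4608;
     1.2299 0.5460 2.6713 1.8403 1.6162;
     1.2577 0.8562 1.8403 2.5114 1.6419;
     1.0083 0.4608 1.6162 1.6419 1.4266];
b = [0.0258; 0.1957; 0.9065; 0.3823; 0.7864];
xs = A\b;                        % minimizer x* = A^{-1} b
Jmin = 0.5*xs'*A*xs - b'*xs;     % value of the quadratic form at x*
closed_form = -0.5*b'*xs;        % -0.5 * b' * A^{-1} * b, should match
fprintf('J(x*) = %f, closed form = %f\n', Jmin, closed_form);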
I hope this helps you.
|
ddcd0e927041efe593a212b4672a737275d9c8dd
|
Q: Monic irreducible polynomials of degree 6 in $F_{5}[X]$ Question A How many monic irreducible polynomials of degree 6 in $F_{5}[X]$
Question B Give an example of an irreducible polynomial of degree 6 in
$F_{5}[X]$
Idea for a Such a polynomial would be of the form: $f(x)=x^6+bx^5+cx^4+dx^3+ex^2+fx+g$ with $b, c, d, e, f, g \in${$0, 1, 2, 3, 4$}. $f(x)$ is irreducible if it has no linear factors, yet there are too many unknowns to find them. So I am not sure how to prove this general statement
Idea for b So I need to find a polynomial of degree 6 such that none of $0, 1, 2, 3, 4$ is a root. Take $f(x)=x^6+b$. I tried $f(x)$ for all values $0, 1, 2, 3, 4$ and narrowed it down to $b=3$ being the only case where $f(x)$ of this form has no roots. Hence $f(x)=x^6+3$. Is this ok?
A: I’ll explain how to find the number of monic irreducibles of degree $6$ over $\Bbb F_5$, and I hope that the general method will be obvious from this example. On the other hand, finding one of them is another kettle of fish.
For ease in typing, I’ll call $k_n$ the field with $5^n$ elements. Then $\Bbb F_5=k_1$, $\Bbb F_{25}=k_2$, etc. I want to consider $k_6$. It has three proper subfields, namely $k_1$, $k_2$, and $k_3$. Let’s let $N_m$ be the number of elements $\alpha$ of $k_6$ with the property that $\alpha\in k_m$ but in no proper subfield of $k_m$.
Let’s notice two things: first, that we have gotten a partition of the set $k_6$ into four disjoint sets, so that $N_1+N_2+N_3+N_6=5^6$, and second, that such an element $\alpha\in k_m$ not in a proper subfield has exactly $m$ conjugates (including itself) over $\Bbb F_5$, and the monic polynomial whose roots are those conjugates will be an $\Bbb F_5$-irreducible polynomial of degree $m$. Furthermore, if we call $I_m$ the number of monic irreducibles of degree $m$, we have $mI_m=N_m$. Therefore, we have:
$$
\begin{align}
5^6&=N_1+N_2+N_3+N_6&\text{and more generally}\\
5^m&=\sum_{d|m}N_d&\text{for any $m$}\\
N_m&=\sum_{d|m}\mu(d)5^{m/d}=mI_m\,,
\end{align}
$$
where $\mu$ is the Möbius function. For our special case $\mu(1)=\mu(6)=1$ and $\mu(2)=\mu(3)=-1$. In particular, the number of irreducible monic sextics over $\Bbb F_5$ is
$$
I_6=\frac16\bigl(5^6-5^3-5^2+5\bigr)=2580
$$
A: Lubin showed you how to calculate the number of sextic monic irreducibles.
Here is one:
$$
p(x)=x^6+x^3+1.
$$
You may (should?) recognize this as the ninth cyclotomic polynomial $\Phi_9(x)=(x^9-1)/(x^3-1)$. Those are always irreducible over $\Bbb{Q}$. Over a finite field their irreducibility is not given, but it is easy to test using the following general result.
Fact. The cyclotomic polynomial $\Phi_n(x)$ of order $n$ and degree $\phi(n)$ remains irreducible modulo a prime $p, p\nmid n,$ if and only if $m=\phi(n)$
is the smallest positive integer $m$ with the property $p^m\equiv1\pmod n$.
Proof. The condition $p\nmid n$ means that there exists an $n$th primitive root of unity $\zeta$ in some extension field $K=\Bbb{F}_{p^m}$ of $\Bbb{F}_p$. The minimal polynomial $m(x)$ of $\zeta$ over $\Bbb{F}_p$ will be a factor of $\Phi_n(x)$, so we only need to find the degree of $m(x)$. But it is a basic fact about finite fields that the multiplicative group of $K$ is cyclic of order $p^m-1$. So $K$ contains a primitive $n$th root of unity if and only if $n\mid (p^m-1)$. Consequently $\deg m(x)$ is the smallest integer $m$ such that $n\mid (p^m-1)$. Q.E.D.
In your example case it is easy to see that with $n=9, p=5$ we get $m=6$. This is equivalent to showing that the residue class $\overline{5}$ modulo nine is a generator of the group $\Bbb{Z}_9^*$ (indeed, the successive powers of $5$ modulo $9$ are $5,7,8,4,2,1$).
Caveat: There is no guarantee that an irreducible cyclotomic polynomial of the desired degree exists. It is just one of the first families to check, because testing is easy using the above fact.
A: Hint: We have $5$ monic polynomials of degree $1$; they are all irreducible. How many reducible monic polynomials of degree $2$ are there? They are $(x-a)(x-b)$ with $a,b \in \mathbb Z_5$, thus there are $15$ of them (taking care of commutations). How many monic polynomials of degree $2$ in total? $25$. So there are $10$ irreducible monic polynomials of degree $2$. How many reducible of degree $3$? How many in total? $\cdots$ Go on this way!
|
49d7cdb33e0e846adeeab60953d5fe4148436f7c
|
Q: Beginner Integration (Substitution) I am pretty new to calculus and would like a nudge in the right direction in order to complete this question properly (Maybe also correct any misrepresentations I have about integration)
So the question is this:
$$\int_{0}^{1}x\sqrt{1-\sqrt{x}}\:dx$$
My attempt at a solution:
let $$u=\sqrt{x}$$
$$u^2={x}$$
thus $$\frac{du}{dx}=\frac{1}{2}x^{-1/2}$$
$$dx=x^{1/2}2du=2udu$$
Now, $$\int_{a}^{b}u^{2}\sqrt{1-u}\:2udu\:$$
$$=2\int_{a}^{b}u^{2}\sqrt{1-u}\:udu$$
$$=2(\frac{1}{3}u^{3}*\frac{2}{3}({1-u})^{3/2}*\frac{1}{2}u^{2}*\frac{1}{2}u^{2})$$
I am able to substitute $\sqrt{x}$ for u, and solve the integral, but i don't believe this gives me the correct solution.
Can anyone point me towards a mistake, mis-conceptualization or different substitution that would allow me to solve this question?
The question is tagged as dual substitution; does that mean I need to substitute two factors, or that multiple substitutions will work for this integral?
Thanks
P.S. Feel free to clean up my LaTex
A: At the line $$=2\int_{0}^{1}u^{2}\sqrt{1-u}\:udu = 2\int_0^1 u^3\sqrt{1-u}\ du$$
(why did I set the bounds $=0$ and $1$?) you could do integration by parts, but since you say you're a "beginner", why not just use another substitution: Let $w=1-u$, then $dw=-du$.
$$=-2\int_1^0 (1-w)^3\sqrt{w}\ dw = 2\int_0^1 (1-w)^3\sqrt{w}\ dw$$ Then if you expand this out (meaning $(1-w)^3\sqrt{w} = (1 -3w + \cdots)\sqrt{w} = w^{1/2}-3w^{3/2}+\cdots$) you'll just be left with several integrals of the type $\int aw^k\ dw$ which hopefully you know how to evaluate. Good luck. :)
A: In my previous answer I ended where I assumed you could take over. I guess that was a false assumption. In this answer I'll try to explain each step.
Start with the integral $$\int_{x=0}^{x=1} x\sqrt{1-\sqrt{x}}\ dx$$
Here I'd be a bit more ambitious than you. I know that anything other than simply a variable under a square root is going to cause me trouble so I'm going to try to substitute everything under the outer square root.
Let $u = 1-\sqrt{x}$. This implies that $$du = \frac{d(1-\sqrt{x})}{dx}\ dx = -\frac{1}{2\sqrt{x}}\ dx = -\frac 1{2(1-u)}\ dx \\ \implies -2(1-u)\ du = dx$$
We also will need to substitute for $x$ so notice that $u=1-\sqrt{x} \implies x=(1-u)^2$.
Now we make this substitution. Notice how I evaluate the bounds:
$$\begin{align}\int_{x=0}^{x=1} (1-u)^2\sqrt{u}\big(-2(1-u)\ du\big) &= -2\int_{u=1-\sqrt{0}}^{u=1-\sqrt{1}} (1-u)^3\sqrt{u}\ du \\ &= -2\int_{u=1}^{u=0} (1-u)^3u^{1/2}\ du \\ &= 2\int_{u=0}^{u=1} (1-u)^3u^{1/2}\ du\end{align}$$
From here we should just simplify $(1-u)^3u^{1/2}$. $$(1-u)^3u^{1/2} = (1-3u+3u^2-u^3)u^{1/2} = u^{1/2}-3u^{3/2}+3u^{5/2}-u^{7/2}$$
Now we can use this and the fact that $\int_a^b (\beta f(x) + \gamma g(x))\ dx = \beta\int_a^b f(x)\ dx + \gamma\int_a^b g(x)\ dx$ to get our integral into a form we know how to evaluate.
$$\begin{align}2\int_{u=0}^{u=1} (1-u)^3u^{1/2}\ du &= 2\int_0^1 (u^{1/2}-3u^{3/2}+3u^{5/2}-u^{7/2})\ du \\ &= 2\left(\int_0^1 u^{1/2}\ du -3\int_0^1 u^{3/2}\ du + 3\int_0^1 u^{5/2}\ du -\int_0^1 u^{7/2}\ du\right) \\ &= 2\int_0^1 u^{1/2}\ du -6\int_0^1 u^{3/2}\ du + 6\int_0^1 u^{5/2}\ du - 2\int_0^1 u^{7/2}\ du\end{align}$$
At this point we remember that the antiderivative of $x^k$ is $\frac 1{k+1}x^{k+1}$ and get the answer:
$$2\int_0^1 u^{1/2}\ du -6\int_0^1 u^{3/2}\ du + 6\int_0^1 u^{5/2}\ du - 2\int_0^1 u^{7/2}\ du\ = \left.\big(2\frac 23u^{3/2} -6\frac25u^{5/2} +6\frac27u^{7/2}-2\frac29u^{9/2}\big)\right|_{\ 0}^{\ 1} \\ = \frac 43-\frac{12}5+\frac{12}7-\frac 49$$
A: attempt numero dos, (thanks to @bye_world) (third time maybe charm)
from $$=2\int_{a}^{b}u^{2}\sqrt{1-u}\:udu$$
$$=2\int_{a}^{b}u^{3}\sqrt{1-u}\:du$$
let $$w=1-u\:$$
$$u=1-w\:$$
and $$\frac{dw}{du}=-1$$
$${-dw}={du}$$
so, $$=2\int_{1-b}^{1-a}({1-w})^{3}\sqrt{w}\:dw$$
$$=2\int_{1-b}^{1-a}(-w^{7/2}+3w^{5/2}-3w^{3/2}+w^{1/2})\:dw$$
Okay, I think I get what you guys are trying very hard to get through to me, $\int{f(x)+g(x)+h(x)}$ can be split up into $\int{f(x)}+\int{g(x)}+\int{h(x)}$ but $\int{f(x)*g(x)}$ does not equal $\int{f(x)}*\int{g(x)}$
Anyways so
$$=2\left[-\frac{2}{9}w^{9/2}+\frac{6}{7}w^{7/2}-\frac{6}{5}w^{5/2}+\frac{2}{3}w^{3/2}\right]^{\,1}_{\,0}=\frac{64}{315}$$
amirite? (With $a=0$ and $b=1$ the $w$-limits run from $0$ to $1$, not $1$ to $0$: $u$ going $0\to1$ sends $w=1-u$ from $1$ to $0$, and the sign from $-dw$ flips the limits back.)
|
0e3d2ded10f9e3010f8130937199da01ecb0eb5d
|
Q: Is $f(x)=\frac{1}{q}$ for $x=\frac{p}{q}$ and $f(x)=0$ else integrable? Let $f:[0,1]\to\mathbb{R}$ be defined as:
$$
f(x)=\frac{1}{q}\space\text{ if }\space x=\frac{p}{q}\space \text{ for some }\space p,q\in\mathbb{N} \space \text{ with } (p,q)=1\\
f(x)=0 \space\text{ else}
$$
Is $f$ integrable over $[0,1]$, i.e. does
$$
\int_0^1f
$$
exist?
I actually have no idea how to prove it, since I don't see any means of bounding the Riemann sums accordingly.
A: If you mean to ask whether it is Lebesgue-integrable, the answer is "yes, with integral $0$" because it is zero almost everywhere (the set of $x$ such that $f(x) \neq 0$ is countable, so it is of Lebesgue measure $0$).
If you mean to ask whether it is Riemann-integrable, a theorem of Lebesgue states that a bounded function on a segment is Riemann-integrable iff its set of points of discontinuity has Lebesgue measure $0$ (furthermore, in this case, the Lebesgue and Riemann integrals coincide). But the set of points of discontinuity is the set of rationals, so it is of Lebesgue measure $0$, so the function is Riemann-integrable with integral $0$.
Edit: if you don't want to use the Lebesgue criterion, it can still be done, but it will be more or less difficult depending on which exact definition of the Riemann integral you use. Here is one nice and fairly standard definition: $f\colon [0,1]\to\mathbb{R}$ is R-integrable iff for every $\varepsilon>0$ there exists $\varphi,\psi$ two "step functions" such that $|f(x)-\varphi(x)|\leq\psi(x)$ for every $x$ and $\int_0^1\psi <\varepsilon$ (where a "step function" means one which is constant on each open interval of a subdivision of $[0;1]$) (furthermore, if this is the case, then $\int_0^1 f$ is defined as the limit of the $\int_0^1 \varphi$ when $\varepsilon$ is made arbitrarily small). If you apply this definition to $\psi = \frac{1}{q}$ with $q$ large enough, and $\varphi(x)$ equal to $f(x)$ when $f(x)\geq\frac{1}{q}$ and $0$ otherwise, it is clear that the condition is satisfied (and that the integral is zero).
In fact, this shows a little bit more: the function $f$ is "regulated", meaning that: for every $\varepsilon>0$ there exists $\varphi$ a "step function" such that $|f(x)-\varphi(x)|\leq \varepsilon$ for every $x$. This is a stronger condition than "Riemann-integrable", and a statement analogous to the Lebesgue criterion is that a bounded function on a segment is regulated iff only has discontinuities of the first kind (that is, its left and right limits always exist).
Edit 2: if you don't like the Lebesgue criterion and you also don't like the definition of Riemann-integrable functions (or the concept of regulated functions) in the first edit above, if you really insist on using Riemann sums, then the right way to do it is to use the criterion given in this Wikipedia article after the paragraph which starts with "Unfortunately, this definition is very difficult to use". Given $\varepsilon>0$, consider $q \geq \frac{2}{\varepsilon}$ integer, let $\{z_i\}$ be the rationals with denominator $\leq q$ (there are at most $q^2$ of them), and let $\{y_i\}$ be the set of $z_i-\frac{\varepsilon}{8 q^2}$ and $z_i+\frac{\varepsilon}{8 q^2}$, sorted in increasing order. Then clearly, for any closed interval between two consecutive $y_i$, either the function takes only values $\leq\frac{1}{q}$ or the interval has length $\leq\frac{\varepsilon}{4 q^2}$: so the Riemann sum on any tagged refinement of the $y_i$ will have value $\leq\frac{1}{q} + \frac{\varepsilon}{2} \leq \varepsilon$, and apply the criterion I just mentioned.
A: This function is called Thomae's function, and it is integrable. It turns out that the integral from $0$ to $1$ is $0$.
In this case, the set of discontinuities (the rationals in $[0,1]$) has measure zero. More simply put: a function is Riemann integrable if it is bounded and its set of discontinuities is countable, since every countable set has measure zero.
A: The function you state is continuous at every irrational number. Thus the set of discontinuity points (which are exactly the rational numbers) is countable, and by the Lebesgue integrability criterion $f$ is Riemann integrable.
|
5990a3e1a5215ee0b67aa46253133f0a7a0f5bd9
|
Q: Position vector of a particle moving with constant speed on a straight line Suppose we have a particle which starts from a point $A$ and moves with constant speed $u$ along the line $AB$.
One wants to show that the position vector $\mathbf{x}$ of the particle at time $t$ is $$\mathbf{x}=\mathbf{a}+ut\frac{\mathbf{b}-\mathbf{a}}{|\mathbf{b}-\mathbf{a}|},$$ where $\mathbf{a}$ and $\mathbf{b}$ are the position vectors of $A$ and $B$ relative to an arbitrary origin.
So, first we write down the vector equation of the straight line passing through two points $A$ and $B$, that is $$\mathbf{x}=\mathbf{a}+k(\mathbf{b}-\mathbf{a}),$$ where $k$ is a parameter. From the position equation of the particle $$\mathbf{x}(t)=\mathbf{v}_0t+\mathbf{x}_0,$$ at any time $t$, we have $\mathbf{x}_0=\mathbf{a}$ at $t=0$.
But I am stuck on how to derive that $\mathbf{v_0}$ is equal to $u\frac{\mathbf{b}-\mathbf{a}}{|\mathbf{b}-\mathbf{a}|}$.
I would appreciate any help or suggestion. Thank you.
A: $\mathbf{b-a}$ is a vector in the direction from $\mathbf{a}$ to $\mathbf{b}$. Dividing by its length produces a unit vector in that direction.
Then multiplying by the (presumably positive) scalar $u$ produces a vector of magnitude $u$ in the direction from $\mathbf{a}$ to $\mathbf{b}$, which is precisely what you want as the velocity vector.
A: You need a unit vector for the direction of motion, otherwise the speed gets multiplied by the length of the vector, so the net speed is off. Dividing $\mathbf b-\mathbf a$ by its length takes care of that.
|
8812038c5e48c954f54529037883ce1cf1e78fd2
|
Q: $P = I − uu^T$ when $u$ is a normalized vector (subtraction of scalar from matrix) I have a vector $u$ that is normalized (i.e., $\|u\|_2=1$). This means that
$$
uu^T = ||u||_2^2 = 1
$$
Does it make sense to define $P=I-uu^T$ given that $uu^T$ is a scalar.
What does it mean to subtract a scalar from a matrix?
A: $uu^T$ is a matrix. Looks like you’ve confused it with $u^Tu$, which is scalar.
A: If $u$ is a column vector with $n$ entries, then $\|u\|^2_2=u^Tu$, but $$uu^T = \begin{pmatrix}u_1\\u_2\\ \vdots\\ u_n\end{pmatrix}\begin{pmatrix}u_1&u_2& \dots& u_n\end{pmatrix}$$ is a matrix whose $ij-$th entry is $u_iu_j$.
Consequently, $I-uu^T$ has $1-u_i^2$ on the diagonal and $-u_iu_j$ as $ij$-th entry if $i\neq j$.
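For a concrete illustration, a small Matlab sketch (the vector $u$ here is an arbitrary normalized example):
u = [1; 2; 2]/3;      % a normalized column vector: norm(u) == 1
s = u'*u;             % 1x1 scalar, equals ||u||_2^2 = 1
M = u*u';             % 3x3 rank-one matrix, NOT a scalar
P = eye(3) - M;       % the P = I - uu^T from the question
disp(P*u);            % zero vector: P annihilates u, projecting onto span(u)'s complement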
|
4f66b24326c5851818ea5cd964a8543accabb70f
|
Q: Probability of choosing two real numbers $a$ and $b$ from $[1,4]$ such that $ ab>4$.
What is the probability that when you pick two real numbers from the closed interval $[1,4]$, their product is greater than 4?
I tried to solve it with integration but I couldn't get the right answer. And I think that this problem can be solved without integration.
A: On the figure below, I have plotted the curve $x * y = 4$, as well as the square $[1, 4] \times [1, 4]$.
plot http://puu.sh/mtgbV/cce4527223.png
Any point above the red curve and within the square has coordinates with product $> 4$. Any point on or below the red curve and within the square has coordinates with product $\le 4$. Thus the probability you are looking for is the area of the part of the square above the red curve, divided by the total area of the square.
A: Supposing the two numbers are chosen independently and uniformly on $[1,4]$, this yields:
$P(XY\le4)=\int_1^4\int_1^{4/x}f_{X,Y}(x,y)dydx=\int_1^4\int_1^{4/x}\frac{1}{3}\frac{1}{3}dydx=\frac{8\log 2}{9}-\frac{1}{3}$.
Then, $P(XY>4)=1-P(XY\le 4)=\frac{4}{3}-\frac{8\log 2}{9}\sim 0.717$
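A quick Monte Carlo sanity check of this value (a Matlab sketch, assuming independent uniform draws on $[1,4]$):
N = 10^6;
a = 1 + 3*rand(1,N);    % uniform on [1,4]
b = 1 + 3*rand(1,N);
fprintf('P(ab > 4) ~ %f\n', mean(a.*b > 4));   % should print roughly 0.717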
A: Hint. Draw the picture of $xy=4$ in the square with corners $(1,1)$ and $(4,4)$.
|
182cd4941ef8d69e4fdaf7d266e5c217657dd931
|
Q: Analytic function on unit disk with $|f(z)|\le1$
Let $f$ be analytic function on $D=\{z\in\mathbb C:\left\lvert z\right\rvert<1\}$. Assume that for each $z\in D$, $\left\lvert f(z)\right\rvert \le1$.
Then, which of the following is not the possible value of $(e^f)''(0)$:
(a) $2$
(b) $6$
(c) $\frac{7}{9} e^\frac{1}{9}$
(d) $\sqrt{2}+i\sqrt{2}$
I have tried the following:
Let $g(z)=e^{f(z)}$ then,
$g''(z)=(f''(z)+(f'(z))^2)e^{f(z)}$
Now, $g''(0)=(f''(0)+(f'(0))^2)e^{f(0)}$
What should I do after this step, or is there an easier way to solve it?
A: So I take a look at the Cauchy Integral Formula: for any $0<r<1$, $$(e^{f})''(0)=\frac{2!}{2\pi i}\oint_{|z|=r}\frac{e^{f(z)}}{z^3}\,dz.$$
Taking the modulus on both sides, and using $|e^{f(z)}|=e^{\operatorname{Re} f(z)}\le e$ (since $|f(z)|\le 1$), we have $$|(e^{f})''(0)| \le \frac{1}{\pi}\cdot\frac{e}{r^3}\cdot 2\pi r=\frac{2e}{r^2}\xrightarrow{\ r\to 1\ } 2e\approx 5.44.$$
Hence (b) cannot be the choice, since $6>2e$.
|
38dad6a5b8c2f928f8b56d0a1dd0636ae1e74f46
|
Q: Fourier series calculation I had an exam question today that stated something along the lines of the following:
"Let $f$ be an even function given by $f(x)=x$ on $[0,\pi]$ and extend $f$ to $\mathbb{R}$ by $2\pi$-periodicity. Find the Fourier series of $f$"
Now for this reason I took $f$ to be $|x|$ on $[-\pi,\pi]$, and at any rate, took my coefficients for $a_n$ and $b_n$ to be $$a_n=\frac{2}{2\pi}\int_0^\pi x\cos(nx)dx$$ for $n\neq 0$, $$a_0=\frac{1}{\sqrt{2\pi}}\int_0^\pi xdx$$ and $b_n=0$ $\forall n\in\mathbb{N}$, with corresponding Fourier series $$|x|\sim\sum_{n=1}^\infty a_n\cos(nx)+b_n\sin(nx).$$
However, I know from a later part of the question that my answer was wrong. Where did I go wrong here? Many thanks in advance for any help that anyone can offer.
|
233324e53f6c4c06993b93e1b0ffb4cee8a9f7f5
|
Q: Improper integrals and convergence Let $C=\{(x,y): x>0,\ y>0\}$. Let $f(x,y) = \frac{1}{(x^2 +\sqrt x )(y^2 + \sqrt y)}$.
Show that $\int_C f$ exists; do not attempt to calculate it.
Attempt at at solution:
I was thinking that since $f(x,y) \leq (xy)^{-\frac{1}{2}}$ we can use a pointwise estimate if we show that the integral of $(xy)^{-\frac{1}{2}}$ converges; it converges on $[0,1]^2$, but I can't show that it converges on $[1, \infty)^2$.
Please help and be as detailed as possible, and if you have a different method I would love to see that too.
Cheers
A: Hints: 1. Verify that $\int_0^\infty 1/(\sqrt t + t^2)\,dt < \infty.$
2. Because $f(x,y) \le (1/\sqrt x)(1/(\sqrt y +y^2))$, you can use 1) to see
$$\int_0^1\int_0^\infty f(x,y)\,dydx <\infty.$$
Same thing for $\int_0^1\int_0^\infty f(x,y)\,dxdy.$
3. So you'll be done if you show $\int_1^\infty\int_1^\infty f(x,y)\,dxdy < \infty.$
|
5d8e86bc61e7f9b9eafc7a2ff2a32e69ce5d1296
|
Q: Infinite product $\prod_k (1-z/a_k)(1-z/b_k)^{-1}$ converges uniformly on compacts of $\{\operatorname{Im} z>0\}$ and is Herglotz Hi everyone, I found the following exercise on infinite products. I'm stuck on it.
Let $\{a_k\}$ and $\{b_k\}$ sequences of positive real numbers such that $b_k<a_k<b_{k+1}$ for all $k$ in $\mathbb N$. Prove that the infinite product $\prod_k (1-z/a_k)(1-z/b_k)^{-1}$ converges uniformly on compacts of $\mathbb C_+:=\{z:\mathbb{Im}z>0\}$ and the function $f(z)=\prod_k (1-z/a_k)(1-z/b_k)^{-1}$ is Herglotz.
Let $K$ a compact set in $\mathbb C_+$ in order to show that the infinite product converges uniformly on $K$ it will suffices to show that $\sum_k [(1-z/a_k)(1-z/b_k)^{-1}-1]$ converges absolutely and uniformly on $K$.
\begin{align*}[(1-z/a_k)(1-z/b_k)^{-1}-1]=\frac{z}{b_k-z}\left(1-\frac{b_k}{a_k}\right) \tag{1}\end{align*}
Since $d=d(K,\mathbb C_+^c)>0$ this is a lower bound for $|b_k-z|$ and so for the denominator of (1) in norm. The $z$ in the numerator is bounded by some $M>0$ on $K$, since $K$ is compact and the identity function is continuous. Then
\begin{align*}|[(1-z/a_k)(1-z/b_k)^{-1}-1]|\le \frac{M}{d}\left(1-\frac{b_k}{a_k}\right) \tag{2}\end{align*}
But from here I'm stuck. I'd appreciate if someone can help me I didn't know how to use the form in which the numbers $\{a_k\}$ and $\{b_k\}$ are defined.
A: You want to show that $\sum_k c_k(z)$ converges absolutely and uniformly on
any compact set $K$ in the upper half-plane, where
$$
c_k(z) = \frac{1-z/a_k}{1-z/b_k} - 1 = \frac{z}{b_k - z}\frac{a_k-b_k}{a_k} \, .
$$
Consider first the case that the sequences $(a_k)$, $(b_k)$ are bounded,
and therefore have a common limit $A$. Since
$$
\left| \frac{z}{z - b_k} \right | \le \frac{|z|}{\text{Im}(z)}
$$
is bounded on $K$ by some constant $M$, we have
$$
|c_k(z)| \le M \frac{a_k-b_k}{a_k} \le M \frac{b_{k+1}-b_k}{b_1}
$$
and therefore
$$
\sum_{k=1}^N |c_k(z)| \le \frac{M}{b_1} \sum_{k=1}^N (b_{k+1}-b_k)
= \frac{M}{b_1} (b_{N+1} - b_1) \le \frac{M}{b_1} (A - b_1)
$$
which implies the absolute and uniform convergence
of $\sum_k c_k(z)$ on $K$.
Now consider the case that the sequences $(a_k)$, $(b_k)$ are
unbounded and therefore converge to $+\infty$.
Let $M$ be an upper bound for $|z|$ on $K$. Then for all sufficiently
large $k \ge k_0$, $b_k > 2M \ge 2|z|$ and therefore
$$
\left| \frac{z}{z - b_k} \right | \le \frac{|z|}{b_k - |z|}
\le \frac{M}{b_k/2} \, .
$$
Then
$$
\sum_{k=k_0}^N |c_k(z)| \le 2M \sum_{k=k_0}^N \frac{a_k-b_k}{a_k b_k}
= 2M \sum_{k=k_0}^N \bigl( \frac{1}{b_k} - \frac{1}{a_k} \bigr)
\le 2M \sum_{k=k_0}^N \bigl( \frac{1}{b_k} - \frac{1}{b_{k+1}} \bigr) \\
= 2M \bigl( \frac{1}{b_{k_0}} - \frac{1}{b_{N+1}} \bigr)
\le 2M \frac{1}{b_{k_0}}
$$
which again implies the absolute and uniform convergence
of $\sum_k c_k(z)$ on $K$.
To prove that the product maps the upper half-plane into itself (i.e.
is a "Herglotz function"), observe that the argument of
$$
\frac{1-z/a_k}{1-z/b_k} = \frac{b_k}{a_k} \frac{a_k-z}{b_k-z}
$$
for $z = x + iy$ is the angle under which the segment
$[b_k, a_k]$ is seen from $z$:
$$
\arctan \frac{a_k-x}{y} - \arctan \frac{b_k-x}{y} <
\arctan \frac{b_{k+1}-x}{y} - \arctan \frac{b_k-x}{y} \, .
$$
The sum of the arguments of all factors in the product can therefore
be estimated by a "telescoping sum" which is less than
$\frac {\pi}{2} -\frac {-\pi}{2} = \pi$.
|
302ebf09af609fbfe7d276e82d74c9a69bc5995c
|
Q: How to solve non-homogeneous recurrences? I am trying to find a way to solve non-homogeneous recurrences by solving the homogeneous and non-homogeneous parts separately. I can use generating functions for the whole thing, but I want to learn the separation method, which can often lead to a quick solution.
For example:
$T(1) = 1$
$T(n) = 2T(n - 1) + n$
The solution is $T(n) = 2^{n+1} - n - 2$. I want to be able to arrive at solutions like these by splitting up $T(n)$ into the homogeneous part, $2T(n-1)$, and the non-homogeneous part $n$.
I've seen several explanations online and on this stackexchange site, but I feel like several steps get skipped and various instructions / implied intentions are not clear to me at all.
I already know how to solve homogeneous recurrences. For example the homogeneous relationship $T(n) = 2T(n-1)$ has characteristic polynomial $x - 2$ with one root, $2$, so the solution of this piece is of form $T(n) = \alpha 2^n$.
So assuming I can get the form of the homogeneous part, how do I then solve the non-homogeneous part?
A: One easy case is when the recurrence is linear with constant coefficients and the inhomogeneity is a polynomial. In this case a particular solution is also a polynomial. Most of the time, it is a polynomial of the same degree as the inhomogeneity. For example:
$$T(n)=-2T(n-1)-T(n-2)+n.$$
If you assume $T(n)=an+b$ then the equation becomes
$$an+b=-2a(n-1)-2b-a(n-2)-b+n.$$
This simplifies to
$$an+b=(-3a+1)n+(4a-3b)$$
So $-3a+1=a,4a-3b=b$, which are two linear equations you can solve.
An exception occurs when $1$ is a root of the characteristic polynomial. Then the solution is still a polynomial, but of a higher degree. For example:
$$T(n)=3T(n-1)-2T(n-2)+n$$
If you try $T(n)=an+b$ you find
$$an+b=3(a(n-1)+b)-2(a(n-2)+b)+n=(a+1)n+\dots$$
and you can't have $a=a+1$. In this case a solution will be a polynomial of a higher degree. Try it with $T(n)=an^2+bn$.
The problem in this case is that the linear map $F(x_n)=x_n-3x_{n-1}+2x_{n-2}$ sends constants to zero. Thus in view of the rank theorem from linear algebra, it cannot map linear polynomials onto all linear polynomials. You need to enlarge the domain to the quadratic polynomials to hit all linear polynomials. The situation gets worse still when $1$ is a multiple root of the characteristic polynomial. For instance, there is neither a linear nor a quadratic solution to $T(n)=2T(n-1)-T(n-2)+n$: you need a cubic solution instead.
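Applying this method to the original recurrence $T(n)=2T(n-1)+n$: since $1$ is not a root of the characteristic polynomial $x-2$, try a particular solution $T_p(n)=an+b$:
$$an+b=2(a(n-1)+b)+n=(2a+1)n+(2b-2a),$$
so $a=2a+1$ and $b=2b-2a$, i.e. $a=-1$ and $b=-2$. The general solution is $T(n)=\alpha 2^n-n-2$, and the initial condition $T(1)=1$ forces $\alpha=2$, recovering $T(n)=2^{n+1}-n-2$.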
|
c9498f8cea4e3741b61c6e96a8cc99cc6d85412d
|
Q: Uniform convergence in Mercer Theorem for bounded kernels Let $\mu$ be a finite, strictly positive measure on $\mathbb{R}$, and let $k$ be a measurable positive-definite kernel. Assume $k$ is bounded, and let $T:L^2(\mu)\rightarrow L^2(\mu)$ be defined by $$ Tf(x):=\int k(x,y) f(y)d\mu(y).$$
Does the representation $$k(x, y) = \sum_{j=1}^\infty \lambda_j \varphi_j(x) \varphi_j(y)$$ hold uniformly on $(x,y)$?
Here, the $(\lambda_j, \varphi_j)$ pairs are defined via $$T\varphi_j = \lambda_j\varphi_j.$$
A: I'm assuming that the functions $\varphi_j$ are real-valued. The Cauchy-Schwarz inequality on $\ell^2(\mathbb{N})$ gives
\begin{align*}\sum_{j=1}^{\infty} |\lambda_j \varphi_{j}(x) \varphi_{j}(y)| &= \sum_{j=1}^{\infty} |\sqrt{\lambda_j} \varphi_{j}(x) \sqrt{\lambda_j}\varphi_{j}(y)| \leq \Big(\sum_{j=1}^{\infty} \lambda_j \varphi_j(x)^2\Big)^{1/2} \Big(\sum_{j=1}^{\infty}\lambda_j \varphi_j(y)^2\Big)^{1/2} \\ &= \big(k(x,x)\, k(y,y)\big)^{1/2} \leq C\end{align*}
as $k(x,y)$ is bounded. This shows uniform convergence, assuming you know already that the equality holds pointwise.
EDIT: I made an implicit assumption in the following argument that I just became aware of now. Let's assume that the $\varphi_j$ are continuous (that would follow if we would assume $k$ to be continuous in both variables). Then we know the equality holds pointwise for some subsequence $\sum_{j=1}^{n_k} \lambda_j \varphi_j(x) \varphi_j(y)$. Applying the argument above shows uniform convergence for this subsequence. This implies that there is a $k_0$ such that $$\sum_{j=n_k}^{\infty} \|\lambda_j \varphi_j(.) \varphi_j(.)\|_{\infty} < \epsilon, \quad k\geq k_0$$ which implies the existence of an $N$ such that
$$ \sum_{j=N}^{\infty} \|\lambda_j \varphi_j(.) \varphi_j(.)\|_{\infty} < \epsilon$$
Now, if the $\varphi_j$ are continuous, we can identify $(\sum_{j=1}^{n}\lambda_j \varphi_j(.)\varphi_j(.))_{n\in\mathbb{N}}$ as a Cauchy-sequence in $C([0,1]\times[0,1])$ with the supremum norm. As this is a complete space, uniform convergence follows.
A: Uniform convergence is established in Theorem 3.a.1 in König's Eigenvalue Distribution of Compact Operators (DOI: 10.1007/978-3-0348-6278-3)
Theorem: Let $(\Omega, \mu)$ be a finite measure space and $k\in L^\infty(\Omega\times\Omega,\mu\times\mu)$ be a kernel such that $T_k\colon L^2(\Omega,\mu)\to L^2(\Omega,\mu)$ is positive. Then the eigenvalues $(\lambda_n)$ of $T_k$ are absolutely summable. The eigenfunctions $f_n\in L^2(\Omega,\mu)$ of $T_k$, associated with those $n$ such that $\lambda_n\neq 0$, and normalized by $\| f_n\|_2 = 1$, actually belong to $L^\infty(\Omega,\mu)$ with $\sup_n \|f_n\|_\infty <\infty$ and $$k(x,y) = \sum_{n\in\mathbb{N}} \lambda_n \overline{f_n(y)}f_n(x)$$ holds $\mu\times\mu$-a.e., where the series converges absolutely and uniformly $\mu\times\mu$-a.e.
|
0f9e76008934c35b98724b42ef8dacbcb949b575
|
Q: Inverse image of a diffeomorphism Let $$f: (0, \infty ) \times (- \pi , \pi) \to \mathbb{R}^2 \setminus ( (- \infty , 0] \times \{0\})$$
$$f(r, \alpha)=(r \cos \alpha , r \sin \alpha) $$
Find $f^{-1 }(A)$, where
$$A = \{ (x,y) \in \mathbb{R}^2 \setminus ( (- \infty , 0] \times \{0\}): (x-\frac{1}{2} )^2+y^2< \frac{1}{4} \}$$
$$(r \cos \alpha-\frac{1}{2})^2+r^2 \sin^2 \alpha < \frac{1}{4} $$
$$r^2-r \cos \alpha< 0 $$
$$r - \cos \alpha< 0$$
Thus
$f^{-1 }(A)= \{(r, \alpha) \in (0, \infty ) \times (- \pi , \pi): r< \cos \alpha \}$
It is correct?
A: Yep, this is correct. I checked this by visualizing, which I find helpful, so here's a way to do that: Note that $A$ is the interior of the circle with radius $\frac{1}{2}$ centered at $(\frac{1}{2}, 0)$. If you do a polar plot of $r < \cos(\alpha)$, that's exactly what you get:
Which makes sense, because $f$ is exactly the transformation we use when doing polar plots (it takes you from $(r, \alpha)$ space to $(x, y)$ space).
Bonus: If you view the $(r, \alpha)$ space as a rectangle with base running from $(-\pi, \pi)$ and height running from $(0, \infty)$, $f$ can be visualized as taking horizontal slices of the rectangle and wrapping them around to form circular regions.
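If you want to reproduce the picture yourself, a two-line Matlab sketch (assuming a version with polarplot; older versions use polar instead):
alpha = linspace(-pi/2, pi/2, 200);   % cos(alpha) >= 0 only on this range
polarplot(alpha, cos(alpha));         % boundary r = cos(alpha): the circle of radius 1/2 centered at (1/2, 0)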
|
87d4cb935c12eafbfa426cd9776736f4aae93bc9
|
Q: Is the value of $2415^n \bmod 2556$ always $711$ or $1989$ for $n\in \mathbb{Z}^{+}$, $n>1$? Is the value of $2415^n \bmod 2556$ always $711$ or $1989$ for $n\in \mathbb{Z}^{+}$, $n>1$?
If it's true, how does one prove it?
I tried to expand $(2556 - 141)^n$ for $n\in \mathbb{Z}^{+}$, $n>1$, but it didn't lead anywhere.
According to WolframAlpha,
It seems like the remainder is $711$ for $n=3,5,7,9,11,13,15,...$
and the remainder is $1989$ for $n=2,4,6,8,10,12,14,16,...$
But I don't know how to prove that; or maybe the values of $2415^n \bmod 2556$ are not only $711$ and $1989$?
Thank you so much for every comment.
A: We have $$1989^2\equiv 1989\ (\ mod\ 2556\ )$$ and therefore $$1989^n\equiv 1989 \ (\ mod\ 2556\ )$$ for all $n\ge 1$
Since $2415^2\equiv 1989\ (\ mod\ 2556\ )$, that means $$2415^{2n}\equiv \ 1989\ (\ mod\ 2556\ )$$ for all $n\ge 1$
Additionally, we have $$2415\times 1989\equiv\ 711\ (\ mod\ 2556\ )$$ showing $$2415^{2n+1}\equiv 711\ (\ mod\ 2556\ )$$ for all $n\ge 1$
A: Peter has answered the question, but here is a little more to the story:
$2556$ is $4\times 639 = 2^2 \times 9\times 71 = 2^2\times 3^2 \times 71$. Therefore, by the Chinese Remainder Theorem, we can tell the behavior of powers of a number mod $2556$ by considering it mod $2^2 = 4$, mod $3^2=9$, and mod $71$.
Now $2415$ is $3=-1$ mod $4$, $3$ mod $9$, and $1$ mod $71$.
Because it is $-1$ mod $4$, its powers alternate between $-1$ and $+1$ mod $4$.
Because it is $3$ mod $9$, all its powers after the first power are $0$ mod $9$.
Because it is $1$ mod $71$, all its powers are $1$ mod $71$.
This explains the situation. The even powers are all $+1$ mod $4$, $0$ mod $9$, and $1$ mod $71$. Mod $2556$, this forces them all to be $1989$. The odd powers beyond the first are $-1$ mod $4$, $0$ mod $9$, and $1$ mod $71$. This forces them to be $711$ mod $2556$. (Check: $1989\equiv1\pmod4$, $1989\equiv0\pmod9$, $1989\equiv1\pmod{71}$; and $711\equiv3\pmod4$, $711\equiv0\pmod9$, $711\equiv1\pmod{71}$.)
|
32d0bc2dffd092ebbd85df7244cf532fd75f1464
|
Q: Points on unit circle with arguments from an arithmetic progression I'm having trouble understanding a small detail from a paper I'm reading about Roth's theorem. Below is the context.
Let $A \subset \mathbb{Z}_{N}$ be such that $\left|\sum_{x \in A}{\exp\left(\frac{2\pi i rx}{N}\right)} \right| \geq 3 \gamma N$ for some $r \neq 0$, and $\gamma$ is such that $1/\gamma$ is a positive integer $n$. Assume that $N$ is a large prime number (in particular so that $r$ and $N$ are coprime).
Now, divide the unit circle in the complex plane into $n$ equal arcs $I_1,\ldots,I_n$ and let $P_{k} = \left\{x \in \mathbb{Z}_{N}: \exp\left(\frac{2\pi i rx}{N}\right) \in I_{k}\right\}$, where $r \neq 0$ and $k \in \left\{1,\ldots,n\right\}$ is a fixed number. Why does it follow that $P_{k}$ is an arithmetic progression (with difference $r^{-1}$, the inverse of $r$ in $\mathbb{Z}_{N}$)?
Thanks!
|
92f2f053a20ed29e91ea4b630ebac6dc19d1a9ab
|
Q: If $\frac{10000x^{2015} - x}{x^2+x+1}$ is odd, then $4x^2 + 3x + 1$ is even. How would I begin to prove this implication?
Starting with the hypothesis $\frac{10000x^{2015} - x}{x^2+x+1} = 2k+1$.
I'm sort of lost on where to go from here.
A: Suppose $x$ is even. Then $10000x^{2015}-x$ is even and $x^2+x+1$ is odd.
So, the fraction is even, contradicting the assumption.
Hence $x$ must be odd, and therefore $4x^2+3x+1$ is even.
A: Look at the denominator first:
$$x^2+x+1=x(x+1)+1$$
At least one of $x$ and $x+1$ is even, so $x(x+1)$ is even and $x(x+1)+1$ is thus odd.
This means that $$10000x^{2015}-x=(2k+1)(x^2+x+1)$$ is odd, because it's the product of two odd numbers.
But $10000x^{2015}$ is even, so $x$ must be odd.
To finish, notice that $4x^2$ is even while $3x$ and $1$ are odd. Thus the number $$4x^2+3x+1$$ is even.
|
711662bcf637ed2febca76a8a127bbc406d6b8c2
|
Q: How many 4-digit numbers can be formed with 2, 2, 2, 3, 3, 3, 1, 1 I cannot find a way to solve this question without writing out all the possible numbers.
Notice that we only have a limited quantity of each number. For example, we can only use 1 twice.
Also, please note that the ones, twos, and threes are non-distinct.
Thank you for sparing the time to read my question.
A: We can have no $1$, one $1$, or two $1$s. With no $1$ we have $2^4$ choices, but $2222$ and $3333$ are forbidden. One $1$ we can place in ${4\choose1}$ ways, and then we can fill the other three places at will. Two $1$s we can place in ${4\choose2}$ ways, and then we can fill the other two places at will. It follows that in all there are
$$(2^4-2)+{4\choose1}\cdot 2^3+{4\choose2}\cdot 2^2=70$$
possibilities.
A: There are three cases to consider: the four-digit number either uses one number three times, two numbers twice, or one number twice and two numbers once each.
In the first case, there are $2$ choices (a $2$ or a $3$ but not a $1$) for the number that gets used three times and $2$ choices for the remaining number, which can appear as any of the $4$ digits, for a total of
$$2\times2\times4=16$$
possibilities.
In the second case, there are ${3\choose2}=3$ ways to pick the two numbers that get used twice, and ${4\choose2}=6$ ways to arrange them as a $4$-digit number for a total of
$$3\times6=18$$
possibilites.
In the third case there are $3$ ways to choose the number that appears twice, ${4\choose2}=6$ ways to assign as two of the four digits, and $2$ ways to insert the other two numbers as digits, for a total of
$$3\times6\times2=36$$
possibilities.
In all, we have
$$16+18+36=70$$
different $4$-digit numbers.
A: All the numbers are, in some sense, of the form aaaa, aaab, aabb, aabc. The first one can't occur, because there are no more than three of each digit.
The second one: choose the a: $\binom{2}{1}$. choose the places for it $\binom{4}{3}$. Choose b: $\binom{2}{1}$. There are $\binom{2}{1}\binom{4}{3}\binom{2}{1}=16$
For the third one, choosing a and b give us $\binom{3}{2}$. Choosing the places for a, automatically are choosing places for b:$\binom{4}{2}$. Thus, there are $\binom{3}{2}\binom{4}{2}=3\times 6=18$ numbers with only two digits.
For the fourth one, choose the repeated digit: $\binom{3}{1}$. Choose the places for it: $\binom{4}{2}$. The other two digits can be arranged in two ways. Then, there are $2\binom{3}{1}\binom{4}{2}=36$.
Therefore, answer is $16+18+36=70$
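As a sanity check, a brute-force Matlab sketch that enumerates all $3^4$ strings over $\{1,2,3\}$ and keeps those respecting the multiplicities (at most two 1s, three 2s, three 3s):
count = 0;
for a = 1:3, for b = 1:3, for c = 1:3, for d = 1:3
    v = [a b c d];
    if sum(v==1) <= 2 && sum(v==2) <= 3 && sum(v==3) <= 3
        count = count + 1;     % string uses no digit more often than available
    end
end, end, end, end
fprintf('count = %d\n', count);   % prints 70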
|
960e8783d1f976a447c233d971f642cd372554e5
|
Q: Looking for a function $g(x)$ such that $g(2x+2) = g(x) + 2x+2$ So recently I got bored in maths class (I'm in tenth grade) and made up a little equation that looked something like this:
$$g(f(x)) = g(x) + f(x) $$
My original goal was to find different $g(x)$ to fulfill this equation for $f(x) = 2x, 2x+1$ and $2x+2$. I found solutions for the first two cases, but the third one keeps hiding its secrets from me and it's slowly taking my sleep.
So if any of you guys out there with some actual knowledge of the topic (I tried asking my maths teacher and he didn't even know an answer for $f(x) = 2x+1$) could sort out how to find a fitting $g(x)$, please tell me. (There might be a general way to solve for any given $f(x)$?)
I tried proving that a polynomial of some sort could solve the equation, but I stopped at third degree because it simply was too much work to write down all the formulas and I don't know how to handle all those terribly professional math programs like Octave etc. (I'm 15)
Looking forward to some informative replies.
Edit: I've already gained some really helpful insights here, but I still don't know why my original solution for $f(x)=2x+1$ wasn't correct. So, if anybody could give me a hint, that would be very appreciated. Here is how I got there:
I assumed that the function looked like $f(x)=$ $(ax^2+bx+c) \over (x+1)$ because I found out that $g(x)$ had no value at $x=-1$. By just putting this function into our initial equation, we get
$${a(2x+1)^2+b(2x+1)+c \over x+1}={ax^2+bx+c \over x+1}+2x+1,$$ after simplifying
$$3ax^2+4ax+a+bx+b=2x^2+3x+1$$
We can now see that $a$ and $b$ have to be solutions to the following three equations in order to be valid parameters for our function $g$:
$$1: 3a = 2$$
$$2: 4a+b = 3$$
$$3: a+b = 1$$
When working out the solutions, one will easily find $a$ to be $2 \over 3$ and $b$ to be $1 \over 3$, thus our function $g(x)$ is defined as ${{2 \over 3}x^2+{1 \over 3}x}\over x+1$.
For the test, we just put this function into our starting problem again:
$${{2 \over 3}(2x+1)^2+{1 \over 3}(2x+1) \over x+1} = {{2 \over 3}x^2+{1 \over 3}x \over x+1} + 2x+1.$$
Multiplying with $x+1$ (but still keeping in mind that $x=-1$ may never hold), we get:
$${2 \over 3}(2x+1)^2+{1 \over 3}(2x+1) = {2 \over 3}x^2+{1 \over 3}x + (2x+1)(x+1)$$
$${2 \over 3}(4x^2+4x+1)+{1 \over 3}(2x+1) = {2 \over 3}x^2+{1 \over 3}x + 2x^2+3x+1$$
$${8 \over 3}x^2+{8 \over 3}x+{2 \over 3}+{2 \over 3}x+{1 \over 3} = {2 \over 3}x^2+{1 \over 3}x+2x^2+3x+1$$
$${8 \over 3}x^2+{10 \over 3}x+1 = {8 \over 3}x^2+{10 \over 3}x+1, $$
which holds for all $x \in \Bbb R \ | \ x \neq -1$.
I really don't know where my error is, and I am very grateful for every piece of help.
A: So, let's start off with $g(2x+2)=g(x)+2x+2$. I suspect that the reason that the $f(x)=2x$ case came more easily was because the arguments of each of the $g$ terms were similar - let's see if we can do the same thing here, by tweaking the $x$ to $x+a$ and comparing:
$$g(2x+2a+2)=g(x+a)+2x+2a+2$$
If we choose $a=-2$, this solves $2a+2=a$, which means that the $g$ terms will be $g(2x-2)$ and $g(x-2)$ respectively. This is good news, because we've pushed the terms of the equation to look a bit more similar to one another. Indeed, if we introduce the notation $h(x)=g(x-2)$ into our equation, we get:
$$h(2x)=h(x)+2x-2$$
and this is a bit more like the first case you solved. Now, with recurrence equations in general (where functions are defined in terms of other values of the function, broadly speaking), one common way of approaching is to try to push your recurrence into the form $F(x+1)=F(x)+h(x)$, for some $F$ and $h$. The reason this is so popular is because it allows us to make use of telescoping sums, which facilitate viewing recursions similarly to sums. For now, let's try to get our recurrence into a form involving only $x$ and $x+1$ as inputs.
Now, as things stand, the arguments of (inputs to) $g$ are $x,2x$, which means that instead of adding $1$ between steps of computing, we're multiplying by $2$. This might hint at us to consider the powers of 2. Indeed, replacing $x$ by $2^x$ in our recurrence for $h$, we obtain:
$$h(2^{x+1})=h(2^x)+2^{x+1}-2$$
Now, this is better! We see an $x+1$ on one side, and an $x$ on the other, which is what we were shooting for. Let's call $j(x)=h(2^x)$ to take charge of this, and we land at:
$$j(x+1)=j(x)+2^{x+1}-2$$
This is good, because it lets us move from $x$ to $x+1$, which means that if we know 1 value of $j$, we know an infinity of them! Observe:
$$j(x+2)=j(x+1)+(2^{x+2}-2)=j(x)+(2^{x+1}-2)+(2^{x+2}-2)$$
If we keep proceeding with this, we arrive at (for integer $n$), that:
$$j(x+n)=(2^{x+1}-2)+(2^{x+2}-2)+...+(2^{x+n}-2)+j(x)$$
If you've come across geometric series, you'll realise that we can collapse each of these sums:
$$2^{x+1}+2^{x+2}+...+2^{x+n}=2^{x+n+1}-2^{x+1}$$
$$(-2)+(-2)+...+(-2)=-2n$$
So, we have the recurrence that $j(x+n)=j(x)+(2^{x+n+1}-2^{x+1})-2n$. Writing $y=x+n$, this gives that for any $x,y$, that $j(y)=j(x)+2^{y+1}-2^{x+1}-2(y-x)$, provided that the difference between $x,y$ is an integer.
This is good, because each of the variables only appears in unmixed terms - that is, there's no $xy,x/y$ terms or anything like that. So we can isolate them as:
$$j(y)-2^{y+1}+2y=j(x)-2^{x+1}+2x$$
The key here is that this holds whenever the difference between $x,y$ is anything, so we can say that it is constant.
[n.b. technically it says that it's a constant + a 1-periodic function, but we'll gloss over this for now]
So, we might now say that $j(x)=2^{x+1}-2x+C$, where $C$ is a constant. Let's now make the long journey back to $g(x)$:
$j(x)=2^{x+1}-2x+C \implies h(2^x)=2^{x+1}-2x+C \implies h(x)=2x-2\log_2{x}+C$
$$ h(x)=2x-2\log_2{x}+C \implies g(x-2)=2x-2\log_2{x}+C$$
$$\implies g(x)=2x+4-2\log_2(x+2)+C=2x-2\log_2(x+2)+C'$$
noting that $C$ could have been any constant, whence $C+4=C'$ is just any constant.
[for those keeping track, our full solution is $g(x)=2x-2\log_2(x+2)+p(\log_2(x+2))$, where $p$ is any 1-periodic function]
So, we have a general solution! It does come with the downside that, due to the nature of logarithms, we need to specify our domain as $\{x \vert x>-2\}$, so that the argument $x+2$ of the logarithm is positive. But otherwise, we have a nice, continuous function which satisfies the functional equation we'd like it to. Hopefully this also hints at how you could approach the general case of $f(x)=2x+b$, or even $f(x)=ax+b$.
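A quick numerical spot-check of this solution (a Matlab sketch, with an arbitrary test point):
g = @(x) 2*x - 2*log2(x+2);
x = 3.7;                                           % any x > -2 will do
fprintf('%f vs %f\n', g(2*x+2), g(x) + 2*x + 2);   % the two values agree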
A: With $h(x)=g(x-n)$,
$$g(2x+n)=g(x)+2x+n$$
becomes
$$h(2(x+n))=h(x+n)+2x+n.$$
Setting $x+n=:2^t$ this gives
$$h(2^{t+1})=h(2^t)+2^{t+1}-n,$$
which is an ordinary recurrence
$$l(t+1)=l(t)+2^{t+1}-n.$$
By summation, a general solution is
$$l(t)=2^{t+1}-nt+C,$$ corresponding to
$$h(x+n)=2(x+n)-n\log_2(x+n)+C=g(x).$$
As one can check,
$$g(2x+n)=4(x+n)-n\log_2(2(x+n))+C=\\
g(x)+2x+n=2(x+n)-n\log_2(x+n)+C+2x+n.$$
And we have
$$\begin{align}0\to& g(x)=2x+C\\
1\to& g(x)=2(x+1)-\log_2(x+1)+C\\
2\to& g(x)=2(x+2)-2\log_2(x+2)+C.\end{align}$$
Actually, as the recurrence relates values of $t$ one unit apart, $C$ can be defined to be any function over $t\in(0,1]$, i.e. $x\in(1-n,2-n]$.
|
91ab18de6d344c4c248ecb669168a1ad56e09130
|
Q: Computational Topology and Lie Group Theory I study Machine Learning and my limited background in math is enough to understand all the popular algorithms and methods.
However, recently, Topology has been successfully applied to Data Analysis and Lie groups have been used to explain why Deep Learning works so well.
My limited math background is:
*
*Analysis 1 & 2
*Linear Algebra
*Discrete Math / Combinatorics
*Probability
*Statistics
*Convex Optimization
I'd like to study both Computational Topology and Lie Group Theory.
I didn't asked two separate questions because I suspect there might be some overlap in the prerequisites.
What books/papers/tutorials should I read and in which order?
I found some interesting recommendations but they're usually tailored to math students who have a different background than mine. I'm willing to fill any gaps in my knowledge but I'm not sure where I should start.
A: I haven't studied Computational Topology or Lie Groups, but I can maybe recommend some stuff to get you started in that direction.
First off, you will need a good understanding of Algebra and Point-Set Topology. Here are some topics you will probably need to know:
Abstract Algebra
*
*Groups: actions, homomorphisms, isomorphisms, subgroups, quotient groups, Fundamental Theorem of Finitely Generated Abelian Groups
*Rings: ring homomorphism, quotient rings, ideals
*Modules
*Fields
*a little bit about Vector Spaces maybe
Point-Set Topology
*
*Definition of Topology
*Continuous Functions
*Homeomorphisms
*Product Topology
*Subspace Topology
*Quotient Space
*Compactness and Completeness
For books I used Dummit and Foote for Algebra and Munkres for Topology. The topics in algebra correspond to parts I, II, and III in D&F. The topics in Point-Set Topology correspond to chapters 1-3 of Munkres.
From there you can study Algebraic Topology (you should be able to start studying this after finishing the initial Topology subjects and knowing groups and rings). I hear Hatcher is pretty good (and it's free online). I'm guessing from that you will most need to study homology (I believe that's an entire chapter).
You will also need to study Differential Geometry for Lie Groups, but I don't know enough about that to suggest a book. But the books above should keep you busy for a while anyway.
But like I said, I don't know too much about the subjects of Computational Topology and Lie Groups, so take this advice with a grain of salt.
|
401ebd2850aee024bf16424d0113f7fa66dd6215
|
Q: Finite sub cover In the definition of a compact set: from any open cover $E=\cup_{i\in I} U_i$ we can find a finite sub-cover $E=\cup_{k=1}^NU_{i_k}$?
Is the finite sub-cover always made up of open sets?
Thank you.
|
534bbf3956563d08b258ccdf4a9542f3e84bcfe2
|
Q: Orientation on a manifold as a sheaf I am thinking about orientation of a connected manifold $M$ of dim $n$ as a sheaf.
There are two definitions I could use, the first is the sheaf associated to the presheaf
$$U\mapsto H_n(M,M-U;R).$$
The second is the sheaf of sections of generators of the fibration $R^*\to \tilde{M}_R\to M$, where $R$ is a ring and $R^*$ is the discrete group of units of $R$ and $\tilde{M}_R$ is the $R$-orientable cover of $M$.
I have the following questions:
1. Are the two definitions the same?
2. Theorem 3.26 of Hatcher seems to translate to that if $M$ is closed, then it is orientable iff the orientation sheaf has a global section that generate stalk-wise, i.e. it is a principal $\underline{R}$-module.
3. Lemma 3.27 seems to be saying if $M$ is closed, then the presheaf above is already a sheaf.
Are these correct? Feel free to tell me more things or give me references.
A: (1) Your two definitions can't be the same because one of them restricts to units in $R$ and the other does not. To be more precise, let $F_0$ be the presheaf of $R$-modules $U\mapsto H_n(M,M-U;R)$ and let $F$ be its sheafification. Let $G$ be the sheaf of continuous sections of the bundle of $R$-modules on $M$ whose fiber at $x$ is $H_n(M,M-\{x\};R)$, and let $G^*\subset G$ be the subsheaf of sections which generate every fiber. Then $F$ is your first sheaf, and $G^*$ is your second sheaf. But these are not isomorphic; rather, $F\cong G$. To get this isomorphism, note that there is a canonical map $F_0\to G$ (given an element of $H_n(M,M-U;R)$, restrict it to $H_n(M,M-\{x\};R)$ for each $x\in U$), and this map induces an isomorphism on stalks (since any point has arbitrarily small neighborhoods on which both $F_0$ and $G$ evaluate to $R$, with the map being the identity). This map thus induces an isomorphism after sheafifying, giving an isomorphism $F\to G$. It is $G^*$ which is normally referred to as the "sheaf of $R$-orientations", not $F$. If you like, you can identify $G^*$ as a subsheaf $F^*$ of $F$ via the isomorphism; it can be described as the subsheaf of sections which generate every stalk as an $R$-module (or, more elegantly, as sheaf of isomorphisms of $\underline{R}$-modules $\operatorname{Iso}_{\underline{R}}(\underline{R},F)$). While $F\cong G$ canonically has the structure of a sheaf of $R$-modules, $F^*\cong G^*$ is merely a sheaf of $R^*$-sets.
(2) Theorem 3.26(a) says that if $M$ is closed, connected, and $R$-orientable, then there is a global section of $F_0$ that makes $F$ a principal $\underline{R}$-module, i.e. it is (noncanonically) isomorphic to the constant sheaf $\underline{R}$ as a sheaf of $R$-modules. Note that Hatcher's definition of "$R$-orientable" is exactly that $G$ is a principal $\underline{R}$-module (or equivalently, that $G^*\cong\operatorname{Iso}_{\underline{R}}(\underline{R},G)$ has a global section), so it is trivial that in that case $F\cong G$ is also principal. The nontrivial content of Theorem 3.26(a) is to say that the global generating section is actually already a section of the presheaf $F_0$.
(3) Lemma 3.27 doesn't quite say that $F_0$ is a sheaf if $M$ is closed, because $A$ is required to be a compact set, rather than an open set. Here is an instructive example. Take $M=S^1$ and let $U\subset M$ be an open set whose complement is countably infinite. Then $U$ is a disjoint union of countably infinitely many open intervals, so $F(U)\cong R^\mathbb{N}$ (since $F(V)=R$ for any open interval $V\subset S^1$). But $F_0(U)$ can be computed directly to be a direct sum of countably infinitely many copies of $R$ (the key point being that $H_0(M-U)$ is the free $R$-module on $M-U$, which is countable). So $F_0(U)\not\cong F(U)$, so $F_0$ is not a sheaf.
|
00ca8ae5fe8749c040f598ad8b56a2433d5413d7
|
Q: convergence of series: $ \sum_{n=1}^\infty(\sqrt{n+1}-\sqrt{n})\cdot(x+1)^n $ I would like to investigate the convergence of the series $$ \sum_{n=1}^\infty(\sqrt{n+1}-\sqrt{n})\cdot(x+1)^n $$ for $x \in \mathbb{R}$. I am a bit lost on this one. I guess I would be interested in
*
*$x<-1$
*$x = -1$
*$x > -1$
Any help would be greatly appreciated.
A: We have $\dfrac{\sqrt{n+2}-\sqrt{n+1}}{\sqrt{n+1}-\sqrt{n}}\cdot\dfrac{|x+1|^{n+1}}{|x+1|^n}\to|x+1|$. By the ratio test, the series converges in $(-2,0)$ and diverges in $(-\infty,-2)\cup(0,\infty)$.
Now, $\sum_{k=1}^n(\sqrt{k+1}-\sqrt{k})=\sqrt{n+1}-1\to\infty$, so the series diverges for $x=0$.
Can you solve for $x=-2$?
A: Hint: you want to use the ratio test, so you want to calculate
$$\lim_{n \to \infty} \left|\frac{a_{n+1}}{a_n}\right| = \lim_{n \to \infty} \dfrac{\sqrt{n+2}-\sqrt{n+1}}{\sqrt{n+1}-\sqrt{n}}\cdot\dfrac{|x+1|^{n+1}}{|x+1|^n} = |x+1|$$
and find the values for which the limit is $< 1$.
So the series converges for all $x$ with
$$|x+1| < 1 \iff -1 < x + 1 < 1 \iff -2 < x < 0.$$
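For a numerical sanity check of the boundary behavior, here is a small Python sketch (partial sums only, so it is suggestive rather than a proof; the sample points $x=-0.5$, $x=-2$, $x=0$ are arbitrary choices inside and at the ends of the interval):
```python
from math import sqrt

def partial_sum(x, N):
    """N-th partial sum of sum_{n>=1} (sqrt(n+1) - sqrt(n)) * (x+1)^n."""
    return sum((sqrt(n + 1) - sqrt(n)) * (x + 1) ** n for n in range(1, N + 1))

# Inside (-2, 0) the partial sums settle quickly.  At x = -2 the series is
# alternating with terms decreasing to 0, so it converges, albeit slowly.
# At x = 0 the partial sums telescope to sqrt(N+1) - 1 and diverge.
for N in (10**2, 10**4, 10**5):
    print(N, partial_sum(-0.5, N), partial_sum(-2.0, N), partial_sum(0.0, N))
```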
|
9a7e125f36c9e9daf1e0125377330a2327d26e09
|
Q: Integral (log and exp)
Question (Someone asked me for help on this integral and I couldn't figure it out myself.)
$$ \int_{-\infty}^{\infty} \log(1+ae^{-t^2})\,dt $$
Even taking the Taylor series such that $\log(1+ae^{-t^2}) \sim \log(ae^{-t^2})$, the integral of $\log(a) + \log(e^{-t^2}) = \log(a) - t^2$ still doesn't converge at infinity.
Any ideas?
A: If $|a| \lt 1$, then using the fact that
$$\int_{-\infty}^{\infty} dt \, e^{-k t^2} = \sqrt{\frac{\pi}{k}} $$
and the Taylor expansion of the log, we get
$$\sqrt{\pi} \sum_{k=1}^{\infty} \frac{(-1)^{k+1} a^{k}}{k^{3/2}} = -\sqrt{\pi} \operatorname{Li}_{3/2}(-a)$$
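As a quick numerical check of this closed form, here is a sketch using mpmath (the value $a=0.7$ is an arbitrary choice with $|a|<1$):
```python
from mpmath import mp, quad, inf, log, exp, sqrt, pi, polylog

mp.dps = 20
a = 0.7  # any value with |a| < 1

lhs = quad(lambda t: log(1 + a * exp(-t**2)), [-inf, inf])
rhs = -sqrt(pi) * polylog(1.5, -a)   # -sqrt(pi) * Li_{3/2}(-a)
print(lhs, rhs)  # the two values should agree to working precision
```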
|
52e056682d3ecb7f99f1bd2e00e189f92c02ba95
|
Q: Prove span(span(s)) = span(s) Let $S$ be a subset of a vector space $V$ . Prove that $\text{Span}(\text{Span}(S)) = \text{Span}(S)$.
What I tried: I know that this proof revolves around the fact that a linear combination of linear combinations is again a linear combination of the underlying elements.
A: If $v\in $ Span(Span(S)), then $v$ can be written as a linear combination of elements of Span(S). Hence, $v = a_1w_1 +...+ a_nw_n$, where $w_i \in$ Span(S). Now, $w_i \in $ Span(S) $\implies$ $w_i = b_1z_1 +...+b_mz_m$ where each $z_j$ is in $S$. Now, just replace each term in the first equation with its expansion and you have written $v$ as a linear combination of elements of $S$.
A: The span $\text{span}(T)$ of some subset $T$ of a vector space $V$ is the smallest subspace containing $T$.
Thus, for any subspace $U$ of $V$, we have $\text{span}(U) = U$. This holds in particular for $U=\text{span}(S)$, since the span of a set is always a subspace.
A: Let $V$ be a vector space over a field $F$. By definition, $\text{Span}(S) = \left\{x \mid x = a_1 s_1 + \dots + a_n s_n,\ a_i \in F,\ s_i \in S \right\}$ and $\text{Span}(\text{Span}(S)) = \left\{x \mid x = a_1 s_1 + \dots + a_n s_n,\ a_i \in F,\ s_i \in \text{Span}(S) \right\}$. So to show equality, you can just show $\text{Span}(S) \subset \text{Span}(\text{Span}(S))$ and $\text{Span}(\text{Span}(S)) \subset \text{Span}(S)$.
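To make the equality concrete, here is a small numerical sketch (the dimensions and the random seed are arbitrary; equality of the spans shows up as equality of ranks):
```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.integers(-3, 4, size=(3, 5)).astype(float)  # rows: a finite set S in R^5

W = rng.normal(size=(6, 3)) @ S    # 6 random elements of Span(S)
V = rng.normal(size=(10, 6)) @ W   # 10 random elements of Span(Span(S))

# Span(Span(S)) = Span(S) means adjoining the vectors V cannot raise the rank.
print(np.linalg.matrix_rank(S), np.linalg.matrix_rank(np.vstack([S, V])))
```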
|
1a42a516b70e2aa92d51c169a521b243f52b2255
|
Q: Find maximal domain and range $f\left( x,y\right) = \ln \left( 1-x^{2}-y^{2}\right)$
I have noticed that
$1-x^{2}-y^{2}\gt 0$
But from here I am unsure
Thanks
A: Hint:
The logarithm takes only positive arguments, so you want $1-x^2-y^2 > 0$.
This is the interior of the unit circle.
For the range, observe that $\log$ is an increasing function, so focus on the center of your domain and see what happens as you move toward the circumference of the unit circle.
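A small numerical sketch of the hint (sampling the open unit disk; the sample size is arbitrary):
```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(100_000, 2))
pts = pts[(pts**2).sum(axis=1) < 1]   # keep only points of the open unit disk

f = np.log(1 - (pts**2).sum(axis=1))
# The values stay just below 0 near the origin and plunge toward -infinity
# near the circumference, suggesting the range (-infinity, 0].
print(f.max(), f.min())
```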
|
e674971b7a08ab69e802afd285edaaf4feb2c9b4
|
Q: Rewrite the expression $a^{n-1}b^{n}-a^{n}b^{n-1}$ I am a computational, high-speed aerodynamics student.
Somewhere through my calculations, I came upon an ugly expression of this kind: (simplified version)
$f=C \left[(a^{n-1})b^{n} - a^{n}(b^{n-1})\right]=C\cdot h(a,b,n)$
I am trying to think of a way to simplify the function $h(a,b,n)$ so as to turn it into a more manageable function. Is such a task possible?
A: Both terms in the $h$ expression contain $(ab)^{n-1}$, so you can factor it out:
$$(a^{n-1})b^{n} - a^{n}(b^{n-1}) = (a^{n-1})(b^{n-1})b-(a^{n-1})(b^{n-1})a= (ab)^{n-1}(b-a)$$
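A short symbolic check of this factorization, sketched with sympy (declaring $a,b$ positive so that $(ab)^{n-1}$ expands cleanly):
```python
from sympy import symbols, expand, expand_power_base

a, b, n = symbols('a b n', positive=True)

h = a**(n - 1) * b**n - a**n * b**(n - 1)
claim = (a * b)**(n - 1) * (b - a)
print(expand(expand_power_base(h - claim)))  # expected output: 0
```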
|
63f39c6a837cc796d3e7fbc7e6cc77620b8787e1
|
Q: Is mathematical Induction possible in this situation? Is mathematical Induction possible with this sigma sign?
$\sum_{k=1}^{n} ((-1)^{n-k} * b^{n-k}) = \frac{b^{n}+1}{b+1}$
with $n = 2s+1$, $s \in \mathbb{N}$
Statement: $\sum_{k=1}^{n} ((-1)^{n-k} * b^{n-k}) = \frac{b^{n}+1}{b+1}$
Assumption: $\sum_{k=1}^{n+2} ((-1)^{n-k} * b^{n-k}) = \frac{b^{n+2}+1}{b+1}$
I already checked the basis but I have problems with splitting up the assumption.
I tried it successfully with:
$\sum_{k=1}^{n+2} ((-1)^{n-k} * b^{n-k}) \equiv \sum_{k=1}^{n} ((-1)^{n-k} * b^{n-k}) + (-1)^{n+1}*b^{n+1} + (-1)^{n}*b^{n}$
$\Leftrightarrow \sum_{k=1}^{n+2} ((-1)^{n-k} * b^{n-k}) \equiv \sum_{k=1}^{n} ((-1)^{n-k} * b^{n-k}) + b^{n+1} - b^{n}$
It is kind of consequentially if you look at this:
$\frac{b^{n}+1}{b+1}$
$\frac{b^{3}+1}{b+1} = b^{2}-b+1$
$\frac{b^{5}+1}{b+1} = b^{4} - b^{3} + b^{2}-b+1$
$
\frac{b^{7}+1}{b+1} = b^{6}-b^{5}+ b^{4} - b^{3} + b^{2}-b+1$
but I can't find a mathematically correct way to split the sum as above.
I am pretty sure that the statement is correct for every odd natural number.
Now I'm not even sure if I used the right assumption for $n \to n+2$.
I would be very happy with some help :)
A: First, it helps to put the statement in a cleaner equivalent form. If you look at your examples, the highest power of $b$ on the right-hand sides is even, namely $n-1$, and the sums end with $1=b^0$, so it is convenient to start the index at $k=0$ and let the exponent $n-k-1$ run from $n-1$ down to $0$. So you are trying to prove $$\sum_{k=0}^{n-1} ((-1)^{n-k-1} * b^{n-k-1}) = \frac{b^{n}+1}{b+1}$$ with $n$ odd. What you call the Statement is the induction hypothesis. You need a base case, presumably $n=3$, which you show at the bottom. Then what you call Assumption is the thing you are trying to prove from the induction hypothesis; note that when you pass from $n$ to $n+2$ you must replace $n$ by $n+2$ inside the summand as well, not only in the upper limit. There are then just two extra terms added to the prior sum, which is how you make use of the induction hypothesis.
$$\begin {align} \sum_{k=0}^{n+1} ((-1)^{n-k+1} * b^{n-k+1}) &=b^{n+1} - b^{n}+\sum_{k=0}^{n-1} ((-1)^{n-k-1} * b^{n-k-1})\\&=b^{n+1}-b^{n}+\frac {b^n+1}{b+1}\\&=\frac{b^{n+2}+1}{b+1}\end {align}$$
A: Let us define
$$A_s = \sum_{k=1}^{2s+1} (-b)^{2s+1-k};$$
we want to show that $A_s = \dfrac{b^{2s+1}+1}{b+1}$.
We start from $A_1$:
$$A_1 = b^2-b+1 = \frac{b^3+1}{b+1},$$ which is $\frac{b^{2s+1}+1}{b+1}$ for $s=1$.
Now we need to find the inductive rule.
$$A_{s+1} = \sum_{k=1}^{2(s+1)+1} (-b)^{2(s+1)+1-k} = \sum_{k=1}^{2s+3} (-b)^{2s+3-k} = \\
= \sum_{k=1}^{2s+1} (-b)^{2s+3-k} + (-b)^{2s+3-(2s+2)} + (-b)^{2s+3-(2s+3)}= \\
= (-b)^2\sum_{k=1}^{2s+1} (-b)^{2s+1-k} + (-b)^{1} + (-b)^{0}= \\
= b^2 A_s -b + 1.$$
Finally:
$$A_{s+1} = b^2 A_s -b + 1 = b^2 \frac{b^{2s+1}+1}{b+1} -b + 1 = \frac{b^{2s+3}+b^2+(-b+1)(b+1)}{b+1} = \\
= \frac{b^{2(s+1)+1}+b^2-b^2 + 1}{b+1} = \frac{b^{2(s+1)+1} + 1}{b+1}.$$
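Either argument can be spot-checked symbolically; here is a small sympy sketch verifying the identity for the first few odd $n$:
```python
from sympy import symbols, simplify

b = symbols('b')
for n in (1, 3, 5, 7, 9):
    lhs = sum((-1)**(n - k) * b**(n - k) for k in range(1, n + 1))
    print(n, simplify(lhs - (b**n + 1) / (b + 1)))  # 0 for every odd n
```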
|
8e7a15c74c96b8b8f41534b2596819665482d8fe
|
Q: Find a matrix from its eigenvalues and corresponding vectors
Suppose $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda_1=-1$, $\lambda_2=0$, and $\lambda_3=1$, and with corresponding eigenvectors $\vec{v_1}=\langle 1,0,2\rangle$, $\vec{v_2}=\langle -1,1,0\rangle$, and $\vec{v_3}=\langle 0,0,1\rangle$.
Find the matrix $A$.
So I made $P=$
$\begin{bmatrix}
1 & -1 & 0 \\
0 & 1 & 0 \\
2 & 0 & 1 \\
\end{bmatrix}$
and got $P^{-1} =$
$\begin{bmatrix}
1 & 1 & 0 \\
0 & 1 & 0 \\
-2 & -2 & 1 \\
\end{bmatrix}$
I am unsure where to go from here, though. I feel as though maybe there is missing information in the question.
A: Hint:
$$\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}\begin{pmatrix}1\\0\\2\end{pmatrix}=\begin{pmatrix}-1\\0\\-2\end{pmatrix}$$
and the same with other eigenvalues and eigenvectors.
A: $A=P \Lambda P^{-1}$, where $\Lambda=\operatorname{diag}(\lambda_1,\lambda_2,\lambda_3)$
A: The desired matrix is $M=PDP^{-1}$ where $D$ is the diagonal matrix that has as diagonal elements the eigenvalues, in the same order as the eigenvectors in $P$ (see here).
$$
M=
\begin{bmatrix}
1&-1&0\\
0&1&0\\
2&0&1
\end{bmatrix}
\begin{bmatrix}
-1&0&0\\
0&0&0\\
0&0&1
\end{bmatrix}
\begin{bmatrix}
1&1&0\\
0&1&0\\
-2&-2&1
\end{bmatrix}
=
\begin{bmatrix}
-1&-1&0\\
0&0&0\\
-4&-4&1
\end{bmatrix}
$$
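A quick numpy sketch that reproduces this computation and confirms the eigenpairs:
```python
import numpy as np

P = np.array([[1., -1., 0.],
              [0.,  1., 0.],
              [2.,  0., 1.]])   # columns are v1, v2, v3
D = np.diag([-1., 0., 1.])      # matching eigenvalues

A = P @ D @ np.linalg.inv(P)
print(A)                        # [[-1, -1, 0], [0, 0, 0], [-4, -4, 1]]

for lam, v in zip((-1., 0., 1.), P.T):
    print(np.allclose(A @ v, lam * v))  # True, three times
```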
|
dd0e3a24f67d97dbb2141f11d0a2feae5708bb63
|
Q: decomposition of $\mathbb{C}[A_3],\mathbb{R}[A_3]$ and $\mathbb{F}_{p}[A_3]$ into simple algebras Let $A_3$ be the alternating group on three symbols (which is the cyclic group of order 3).
I want to know how to write $\mathbb{C}[{A_3}],\mathbb{R}[{A_3}],\mathbb{F}_{p}[A_3]$ as a direct sum of decomposition into simple algebras.
My attempt:-
I think $\mathbb{C}[A_3]=\mathbb{C}\oplus \mathbb{C} \oplus \mathbb{C}$.
$\mathbb{R}[A_3]=\mathbb{R} \oplus \mathbb{C}$
A: If $F$ is any field, then $F[A_3]$ is $F[x]/(x^3 - 1)$. By the Chinese remainder theorem this decomposes as follows:
*
*If $F$ has characteristic $\neq 3$ and $x^2 + x + 1$ is irreducible over $F$, then $F[x]/(x^3 - 1) \cong F \times F[x]/(x^2 + x + 1)$.
*If $F$ has characteristic $\neq 3$ and $x^2 + x + 1$ is reducible over $F$, then $F[x]/(x^3 - 1) \cong F^3$.
*If $F$ has characteristic $3$, then $F[x]/(x^3 - 1) \cong F[x]/(x - 1)^3 \cong F[x]/x^3$.
In particular your answers are correct. More generally, CRT can be used to describe $F[G]$ where $G$ is a finite abelian group. The nonabelian case is harder.
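The three cases can be seen concretely for $F=\mathbb{F}_p$; here is a sympy sketch (the primes $7$, $5$, $3$ are sample representatives of $p\equiv 1$, $p\equiv 2$, and $p=3$, since $x^2+x+1$ splits mod $p$ exactly when $p\equiv 1\pmod 3$):
```python
from sympy import symbols, factor

x = symbols('x')
for p in (7, 5, 3):
    print(p, factor(x**3 - 1, modulus=p))
# 7: three linear factors; 5: linear times an irreducible quadratic; 3: (x - 1)**3
```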
|
d8730808e4b463fac3d34d87062724058f1b4881
|
Q: How do I compute the residue at this pole? Let $$ f(z) = \frac{e^{az}}{1 + e^z} \qquad (0 < a < 1) $$ I know this function has poles at $z = (2k + 1) i \pi$ with $k \in \mathbb{Z}$. Let's say I need to find the residue at the pole $z = \pi i$. How would I compute this? I tried using the formula $$ \operatorname{Res}(f(z), z= \pi i) = \lim_{z \to \pi i} (z - \pi i) f(z)$$ but that doesn't seem to work.
A: Let $\zeta=z-i \pi$. Then the function of $\zeta$ is
$$\frac{e^{a (\zeta+i \pi)}}{1+e^{\zeta+ i \pi}} = e^{i \pi a} \frac{e^{a \zeta}}{1-e^{\zeta}}$$
Now take the Laurent expansion of this about $\zeta=0$ and see that the coefficient of $\zeta^{-1}$ is clearly $-e^{i \pi a}$.
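One can double-check this numerically by integrating $f$ around a small circle centered at $i\pi$; a sketch with mpmath (the values $a=0.3$ and $r=0.5$ are arbitrary, with $r$ small enough to avoid the other poles):
```python
from mpmath import mp, quad, exp, pi

mp.dps = 25
a = 0.3
f = lambda z: exp(a * z) / (1 + exp(z))
z0 = 1j * pi   # the pole at i*pi
r = 0.5

# Residue = (1/(2*pi*i)) * integral of f over the circle z0 + r*e^{it}.
g = lambda t: f(z0 + r * exp(1j * t)) * 1j * r * exp(1j * t)
res = quad(g, [0, 2 * pi]) / (2j * pi)
print(res, -exp(1j * pi * a))  # the two should agree
```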
|
6650b4d6a987dc53193851c7d86a6edab2d67826
|
Q: What is this trick called? Let $f$ be a positive $\mathcal C^2$ function defined on $[-1,1]$, with a maximum of $1$ attained only at $0$, and with second derivative at $0$ equal to $-\alpha$ for some $\alpha > 0$.
Then $$ \int_{-1}^{1}f(x)^n\, \mathrm dx \approx \sqrt{\frac{2\pi}{\alpha n}}$$
I remember that it was proven by approximating the function in a neighborhood of $0$ by a Gaussian function. But I can't remember the name of the approximation.
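In the meantime, the approximation itself is easy to test numerically; for instance with $f(x)=\cos x$ (positive on $[-1,1]$, maximum $1$ attained only at $0$, and $f''(0)=-1$, so $\alpha=1$), a quick mpmath sketch:
```python
from mpmath import mp, quad, cos, sqrt, pi

mp.dps = 20
for n in (10, 100, 1000):
    exact = quad(lambda x: cos(x)**n, [-1, 1])
    approx = sqrt(2 * pi / n)
    print(n, exact, approx, exact / approx)  # the ratio tends to 1 as n grows
```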
|