Tuesday, January 26, 2016

So what is 'speed', really?

I have begun to realize the title of this blog may be slightly ambitious, seeing as, so far, I've averaged $\frac{2}{9}$ posts per day.

On the upside, this state of affairs allows for a smooth segue into what I want to discuss next, namely, the concept of speed.

Speed is something most people are innately familiar with. It is normally understood that running from point $A$ to point $B$ will take less time than walking from point $A$ to point $B$.

So speed is intuitive; we all understand it to some degree or another.

The purpose of this post will be to prove an important fact connecting distance, speed, and velocity for a particle travelling along a continuous path in $n$-dimensional space with continuous velocity.

It is often the case that concepts that are intuitive to us in $1$ or $2$ dimensions become increasingly diffuse when extrapolated upwards; therefore, every effort will be made to appeal to intuition along the way.

Let us start by defining the position vector of some particle as a differentiable function of time: $\vec{r(t)}$. Suppose our particle is travelling in $n$-dimensional space; then $\vec{r(t)} : [a,b]\to \mathbb R^n$.

Now suppose we wish to determine the total distance (NOT displacement) our particle has traveled during the interval of time $[a,b]$. 

Since we don't know how our particle is travelling, we shouldn't make any simplifying assumptions here; for all we know, it could be following a helical trajectory like the one below (picture courtesy of http://www.math.uri.edu/~bkaskosz/flashmo/parcur/).



So it isn't immediately obvious how we should go about attempting to compute the exact distance our particle travels between $t=a$ and $t=b$, but perhaps we can start by trying to approximate.

Distance has one nice property, which is that it is cumulative: if you travel $x$ units between $t=t_0$ and $t=t_1$, and then continue to travel $y$ units between $t=t_1$ and $t=t_2$, you can be sure that in total you've traveled a distance of $x+y$ units between $t=t_0$ and $t=t_2$.

Furthermore, as we are working in Euclidean space, the shortest distance between two points is the straight-line distance.

Both these facts will be useful in deciphering what is to come.

In order to start approximating, we begin by partitioning the interval of time we are working in, $[a,b]$.

A 'partition' is essentially a 'division' of a segment; for example, a partition of the interval $[1,3]$ may be $\{1, 1.2, 1.5, 1.8, 2, 2.5, 3\}$.

Now, consider a partition of $[a,b]$: $P = \{t_0=a, t_1, t_2, \dots, t_{m-1}, t_m = b\}$.

We define a 'refinement' of $P$ as a partition $P'$ with $P\subseteq P'$. This means that $P'$ contains all the points of $P$ and possibly more, and is thus a less 'coarse', or 'finer', partition of our interval of time.

We will also define $A(P) = \sum_{i=1}^m |\vec{r(t_i)} - \vec{r(t_{i-1})}|$ as our 'approximation' of the distance travelled during the interval of time $[a,b]$. It is clear that our approximations depend on the partition of $[a,b]$ we consider: the 'finer' the partition, the larger (or at least no smaller) our approximation. (This is a simple consequence of the fact that the shortest distance between two points is always the straight-line distance, a.k.a. the triangle inequality.) A rough numerical illustration follows.
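
To make this concrete, here is a minimal numerical sketch, assuming NumPy is available (the helix $\vec{r(t)} = (\cos t, \sin t, t)$, echoing the picture above, and the partition sizes are illustrative choices of mine): it computes $A(P)$ for uniform partitions of increasing fineness, and the approximations indeed creep upward toward a limit.

```python
import numpy as np

def r(t):
    # An illustrative curve: a helix in R^3, echoing the picture above.
    return np.array([np.cos(t), np.sin(t), t])

def A(P):
    # A(P): the sum of straight-line distances between consecutive points r(t_i).
    pts = [r(t) for t in P]
    return sum(np.linalg.norm(pts[i] - pts[i - 1]) for i in range(1, len(pts)))

a, b = 0.0, 2 * np.pi
for m in [2, 4, 16, 64, 256]:
    P = np.linspace(a, b, m + 1)  # a uniform partition with m subintervals
    print(f"m = {m:4d}: A(P) = {A(P):.6f}")
# Output increases toward 2*sqrt(2)*pi = 8.885766..., the helix's true length.
```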

At this stage it is important that we are able to visually conceptualise roughly what these approximations look like, so we have some physical sense of what we are doing.

The easiest way to do this is to imagine a smooth curve in $\mathbb {R}^2$ (you can also draw this on a piece of paper). Approximating this curve's length can now be done by drawing points on your smooth curve and joining consecutive points with straight lines. The sum of the lengths of these lines will approximate the length of your curve. It is also not difficult to see that by drawing more points on your curve, and subsequently connecting these points with straight lines, a more accurate approximation is obtained. Indeed, this is much of the basis for the celebrated 'arc-length integral' in two dimensions.

Now, we claim that $\lim_{\|P\| \to 0} A(P)$ exists, where $\|P\| = \max_i |t_i - t_{i-1}|$. That is, our approximations approach a certain number as we make our partitions finer and finer; this number is defined to be the length of our curve. In fact, we will also see that this number is the least upper bound of all our approximations.

In order to prove that this is indeed the case, we will make use of a useful lemma, whose proof can be found in Baby Rudin, page 113 (Theorem 5.19). Intuitively speaking, the lemma states that at some instant you must have been moving at least as fast as your average speed; in particular, if you travelled a certain distance in a very small amount of time, at some point during your motion you must have been travelling 'very fast'.

Theorem 5.19 (Baby Rudin Page 113) - Suppose $f$ is a continuous mapping of $[a,b]$ into $\mathbb {R}^k$ and $f$ is differentiable in $(a,b)$. Then there exists $x \in (a,b)$ such that:

$ |f(b) - f(a)| \leq (b-a)|f'(x)| $.
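
It is worth pausing on why only an inequality can hold here, unlike the equality in the one-dimensional mean value theorem. A standard example (my addition here, not part of the theorem's statement): take $f(t) = (\cos t, \sin t)$ on $[0, 2\pi]$. Then $f(2\pi) - f(0) = \vec{0}$, yet $|f'(t)| = |(-\sin t, \cos t)| = 1$ for every $t$, so no choice of $x$ gives equality in the form $f(2\pi) - f(0) = 2\pi f'(x)$; only the inequality $0 \leq 2\pi \cdot 1$ survives.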

Rather than actually utilizing the full force of this theorem, we will make use of an intermediate result that can be found within the proof of Theorem 5.19 in Baby Rudin. If you wish to prove this intermediate result for yourself, I have provided a sketch of the proof below the result.

Intermediate Result: For some $x \in (a,b)$, $\frac{|f(b) - f(a)|^2}{b-a} = f'(x)\cdot(f(b) - f(a))$.

Proof (Sketch): Consider $g(t) = (f(b) - f(a)) \cdot f(t)$. Clearly $g$ is continuous on $[a,b]$ and differentiable on $(a,b)$, and it is real-valued.

Thus, by the mean value theorem, $\exists x \in (a,b)$ such that $g'(x) = \frac{g(b) - g(a)}{b-a}$. Since $g(b) - g(a) = (f(b) - f(a))\cdot(f(b) - f(a)) = |f(b) - f(a)|^2$ and $g'(x) = (f(b) - f(a))\cdot f'(x)$, the result follows.


Using this fact with $b = t_i$, $a = t_{i-1}$, $x = t_i^* \in (t_{i-1},t_i)$, and $\vec{r(t)}$ in place of $f$, we obtain:

$ \large \frac{|\vec{r(t_i)} - \vec{r(t_{i-1})}|^2}{|t_i - t_{i-1}|} = \vec{r'(t_i^*)} \cdot (\vec{r(t_i)} - \vec{r(t_{i-1})})$

Now, before we continue, we need to establish some facts about continuous vector-valued functions.

Theorem 5.20 - If a vector-valued function $\vec{s(t)} : [t_{i-1},t_i]\to \mathbb R^k$ is continuous on $[t_{i-1}, t_i]$, then its component functions are continuous on $[t_{i-1}, t_i]$.

Proof - $\vec{s(t)} = \sum_{j=1}^k s_j(t)u_j$, where $\{u_1, u_2, \dots , u_k\}$ is the standard basis of $\mathbb {R}^k$. (We use $j$ for the component index, since $i$ is already in use for our partition.)

Recall the definition of continuity of a vector-valued function on an interval: $\vec{s(t)}$ is continuous on $[t_{i-1}, t_i]$ iff for every $t_0 \in [t_{i-1}, t_i]$ and every $\epsilon > 0$ there exists a $\delta>0$ such that if $|t-t_0| < \delta$ then $|\vec{s(t)} - \vec{s(t_0)}| < \epsilon$.

So if $\vec{s(t)}$ is continuous on $[t_{i-1}, t_i]$, then for any $t_0 \in [t_{i-1}, t_i]$ we have:

$|\vec{s(t)} - \vec{s(t_0)}| < \epsilon$ given $|t-t_0|<\delta$. This implies $\sum_{j=1}^k |s_j(t) - s_j(t_0)|^2 < \epsilon^2$, and thus $|s_j(t) - s_j(t_0)| < \epsilon$ for each $j$.

Thus each $s_j(t)$ is continuous on $[t_{i-1},t_i]$.

Since continuity on a closed, bounded interval implies uniform continuity, we can also deduce that each $s_j(t)$ is uniformly continuous on $[t_{i-1}, t_i]$.

Now, back to where we left off: $ \large \frac{|\vec{r(t_i)} - \vec{r(t_{i-1})}|^2}{|t_i - t_{i-1}|} = \vec{r'(t_i^*)} \cdot (\vec{r(t_i)} - \vec{r(t_{i-1})})$


Re-writing in components, we have $ \large \frac{|\vec{r(t_i)} - \vec{r(t_{i-1})}|^2}{|t_i - t_{i-1}|} = \sum_{k=1}^n r'_k(t_i^*)(r_k(t_i) - r_k(t_{i-1}))$, where $r'_k(t)$, $1\leq k \leq n$, are the components of $\vec{r'(t)}$ and $t_i^* \in (t_{i-1}, t_i)$.

Now as $r_k(t)$ is continuous on $[t_{i-1},t_{i}]$ and differentiable on $(t_{i-1},t_i)$ we can employ the Mean Value Theorem.

$\exists p_i^* \in (t_{i-1},t_i)$ such that $r_k(t_i) - r_k(t_{i-1}) = r'_k(p_i^*)(t_i - t_{i-1})$. (Strictly speaking this point depends on $k$ as well as on $i$, but we suppress that in the notation; the estimates below apply to each component's point alike.)

Therefore, $ \large \frac{|\vec{r(t_i)} - \vec{r(t_{i-1})}|^2}{|t_i - t_{i-1}|^2} = \sum_{k=1}^n r'_k(t_i^*)\,r'_k(p_i^*)$.

We will now consider $ |\sum_{k=1}^n r'_k(t_i^*)r'_k(p_i^*) - r'_k(t_i^*)r'_k(t_i^*)| = |\sum_{k=1}^n r'_k(t_i^*)(r'_k(p_i^*) - r'_k(t_i^*))| $. Since each $r'_k(t)$ is continuous on $[a,b]$, it is bounded; say $|r'_k(t)|<M$ for all $k$ and all $t$. Now, as each $r'_k(t)$ is uniformly continuous on $[a,b]$ (being continuous on a closed, bounded interval), given $\|P\|< \delta$ (with $\delta$ chosen to work for all $n$ components at once) we can ensure $|r'_k(p_i^*) - r'_k(t_i^*)| < \frac{\epsilon}{nM}$, because $|p_i^* - t_i^*| < |t_i - t_{i-1}| \leq \|P\|$.

Thus $ |\sum_{k=1}^n r'_k(t_i^*)(r'_k(p_i^*) - r'_k(t_i^*))| < \sum_{k=1}^n M \cdot \frac{\epsilon}{nM} = \epsilon $.

Thus for $\|P\| < \delta$ we have $ \left| |\vec{r'(t_i^*)}|^2 - \frac{|\vec{r(t_i)} - \vec{r(t_{i-1})}|^2}{|t_i - t_{i-1}|^2} \right| < \epsilon $, since $|\vec{r'(t_i^*)}|^2 = \sum_{k=1}^n r'_k(t_i^*)r'_k(t_i^*)$.

Since $x \leq |x|$ for any real $x$, it follows that $ |\vec{r'(t_i^*)}|^2 - \frac{|\vec{r(t_i)} - \vec{r(t_{i-1})}|^2}{|t_i - t_{i-1}|^2} < \epsilon $.

Multiplying through by $|t_i - t_{i-1}|^2$ and factoring the difference of squares, $ (|\vec{r'(t_i^*)}||t_i - t_{i-1}| - |\vec{r(t_i)} - \vec{r(t_{i-1})}|)(|\vec{r'(t_i^*)}||t_i - t_{i-1}| + |\vec{r(t_i)} - \vec{r(t_{i-1})}|) < \epsilon\,|t_i - t_{i-1}|^2 $.

But the first factor is non-negative: applying the Cauchy-Schwarz inequality to the intermediate result gives $|\vec{r(t_i)} - \vec{r(t_{i-1})}| \leq |\vec{r'(t_i^*)}||t_i - t_{i-1}|$ (this is exactly Theorem 5.19, with the explicit point $t_i^*$). And the second factor, being a sum rather than a difference of the same two non-negative terms, is at least as large as the first.

So $ (|\vec{r'(t_i^*)}||t_i - t_{i-1}| - |\vec{r(t_i)} - \vec{r(t_{i-1})}|)^2 < \epsilon\,|t_i - t_{i-1}|^2 $.

It thus follows that $ |\vec{r'(t_i^*)}||t_i - t_{i-1}| - |\vec{r(t_i)} - \vec{r(t_{i-1})}| < \sqrt\epsilon\,|t_i - t_{i-1}| $.

And so $\sum_{i=1}^m \left( |\vec{r'(t_i^*)}||t_i - t_{i-1}| - |\vec{r(t_i)} - \vec{r(t_{i-1})}| \right) < \sqrt\epsilon \sum_{i=1}^m |t_i - t_{i-1}| = \sqrt\epsilon\,(b-a)$.

Moreover, by the inequality noted above, $|\vec{r(t_i)} - \vec{r(t_{i-1})}| \leq |\vec{r'(t_i^*)}||t_i - t_{i-1}|$. Therefore, $A(P) = \sum_{i=1}^m |\vec{r(t_i)} - \vec{r(t_{i-1})}| \leq \sum_{i=1}^m |\vec{r'(t_i^*)}||t_i - t_{i-1}|$.

The right-hand sum is a Riemann sum for $|\vec{r'(t)}|$, and we have just shown it differs from $A(P)$ by less than $\sqrt\epsilon\,(b-a)$ whenever $\|P\| < \delta$. It thus follows that $\lim_{\|P\| \to 0} A(P) = \lim_{\|P\| \to 0} \sum_{i=1}^m |\vec{r'(t_i^*)}||t_i - t_{i-1}| = \int\limits_a^b |\vec{r'(t)}|\,dt$.

Furthermore, we can be sure that $\lim_{\|P\| \to 0} \sum_{i=1}^m |\vec{r'(t_i^*)}||t_i - t_{i-1}| = \int\limits_a^b |\vec{r'(t)}|\,dt$ exists, due to the fact that $\vec{r'(t)}$ is continuous, and so $|\vec{r'(t)}|$ is Riemann integrable over $[a,b]$. (This fact will be justified in the next post.)

Thus we have shown that our approximations converge to $\int\limits_a^b |\vec{r'(t)}|\,dt$.

It now follows that we may naturally define the 'distance traveled' by our particle over $[a,b]$ to be $\int\limits_a^b |\vec{r'(t)}|\,dt$.
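
As a quick sanity check (using the same illustrative helix as in the code sketch above, $\vec{r(t)} = (\cos t, \sin t, t)$): $|\vec{r'(t)}| = |(-\sin t, \cos t, 1)| = \sqrt{\sin^2 t + \cos^2 t + 1} = \sqrt 2$, so the distance traveled over $[0, 2\pi]$ is $\int\limits_0^{2\pi} \sqrt 2\, dt = 2\sqrt 2\,\pi \approx 8.8858$, exactly the value the polygonal approximations were creeping up toward.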

It also follows (by the fundamental theorem of calculus) that we can naturally define 'speed' as the first derivative of distance with respect to time, and that speed is then precisely the magnitude of our particle's velocity vector at any given time: writing $s(t) = \int\limits_a^t |\vec{r'(u)}|\,du$ for the distance traveled up to time $t$, we get $s'(t) = |\vec{r'(t)}|$.
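
To close the loop numerically, here is a minimal sketch of this last fact, assuming NumPy and SciPy are available (the helix, the evaluation point $t = 1.3$, and the step size $h$ are all arbitrary illustrative choices): numerically differentiating the accumulated distance $s(t)$ recovers the speed $|\vec{r'(t)}|$.

```python
import numpy as np
from scipy.integrate import quad

def speed(t):
    # |r'(t)| for the illustrative helix r(t) = (cos t, sin t, t); identically sqrt(2).
    return np.sqrt(np.sin(t)**2 + np.cos(t)**2 + 1)

def s(t, a=0.0):
    # Distance traveled up to time t: the integral of |r'(u)| from a to t.
    value, _ = quad(speed, a, t)
    return value

t, h = 1.3, 1e-5
print((s(t + h) - s(t - h)) / (2 * h))  # central-difference derivative of distance
print(speed(t))                         # both print sqrt(2) = 1.414213...
```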

Tuesday, January 19, 2016

Cosine and 0.739

Playing around with the cosine or sine function, one may find that upon iteration, both functions begin to behave unexpectedly 'nicely'.

What I mean by this is as follows:

0. Make sure you are computing everything in RADIANS.

1. Take any number ($1$, $2$, $0.7$, whatever)
2. Compute $\cos$(your number)
3. Compute $\cos$($\cos$(your number))
4. Compute $\cos$($\cos$($\cos$(your number)))
5. Compute $\cos$($\cos$($\cos$($\cos$(your number))))
6. You get the idea

If you've made sure your calculator is in radians, this sequence should get closer and closer to about $0.739$, and you can perform the same procedure starting from other numbers (it should still work!).
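
Here is a minimal sketch of this experiment in code (the starting values and the iteration count are arbitrary choices of mine):

```python
import math

for x0 in [1.0, 2.0, 0.7, -25.0]:
    x = x0
    for _ in range(100):   # iterate cosine one hundred times
        x = math.cos(x)
    print(f"starting at {x0:6.1f} -> x = {x:.9f}")
# Every run settles on 0.739085133..., the unique solution of cos(x) = x.
```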

Here's a nice graph depicting this process starting at $-1$ (Courtesy of Wikipedia):

Perplexed by this, I did some research, and came across an interesting theorem on Wikipedia.

Theorem (à la Wikipedia) - If a function $f(x)$ has a fixed point at $x = x_0$ and is continuously differentiable in an open neighborhood of $x_0$ with $|f'(x_0)| < 1$, attraction at $x_0$ is guaranteed.

Now, the 'attraction is guaranteed' part seemed a bit vague, so I tried to reformulate the theorem to make it more precise, and then furnished a proof. Both are presented below.

Theorem 1 - Suppose $x_0 \in [a,b]$ is a point such that $f(x_0) = x_0$. Given that $f$ is differentiable in a neighborhood of $x_0$, with $f'$ continuous at $x_0$ and $|f'(x_0)|<1$, there exists a $\delta > 0$ such that $\forall x \in (x_0 - \delta , x_0 + \delta)$, the sequence $f^n(x)$ converges to $x_0$ as $n \to \infty$. (Here $f^n(x)$ denotes $f$ iterated, i.e. composed with itself, $n$ times.)

Proof - $f'$ is continuous at $x_0$. Therefore, there must exist a $\delta >0$ such that $\forall x\in (x_0 - \delta, x_0 + \delta)$, $|f'(x) - f'(x_0)| < \epsilon$, where $\epsilon > 0$ is chosen so that $\epsilon + |f'(x_0)| = \frac{1}{p}$ for some $p>1$ (such a choice is possible precisely because $|f'(x_0)| < 1$).

So by the reverse triangle inequality, we can deduce $|f'(x)| < \frac{1}{p}$ $\forall x\in (x_0 - \delta, x_0 + \delta)$.

Now choose some $z \in (x_0 - \delta, x_0 + \delta) $. Consider $f(z) - f(x_0)$.

By the mean-value theorem, $\exists c_1\in (x_0 - \delta, x_0 + \delta) $ such that $\frac{f(z) - f(x_0)}{z - x_0} = f'(c_1)$.

Now $c_1\in (x_0 - \delta, x_0 + \delta)$, therefore, $|f(z) - f(x_0)| = |z-x_0||f'(c_1)| < |z-x_0|\frac{1}{p}$.

Now suppose, as an inductive hypothesis, that $|f^k(z) - f^k(x_0)| < |z-x_0|\frac{1}{p^k}$ for some integer $k \geq 1$.

Now consider $\frac{f(f^k(z)) - f(f^k(x_0))}{f^k(z) - f^k(x_0)} = f'(c_{k+1})$ by the mean value theorem.

Here $c_{k+1} \in (x_0 - \delta, x_0 + \delta)$, since $c_{k+1}$ lies between $f^k(z)$ and $f^k(x_0)$, where $f^k(x_0) = x_0$ (as $x_0$ is a fixed point) and $|f^k(z) - x_0| < |z-x_0|\frac{1}{p^k} < \delta$ (by the inductive hypothesis).

Then $|f^{k+1}(z) -  f^{k+1}(x_0)| < |f^k(z) - f^k(x_0)|\frac{1}{p} < |z-x_0|\frac{1}{p^{k+1}}$.


The theorem now follows by induction and the definition of convergence, since $\frac{1}{p^n} \to 0$ as $n \to \infty$.

$\Box$

Now how does this explain why $\cos(x)$ behaves so nicely upon iteration?

Well, notice that in the proof above, all that was required for the convergence of $f^n(x)$ was that $|f'(x)| < 1$ for all $x$ in some neighborhood of $f$'s fixed point.

It isn't difficult to see that $\cos(x)$ has a fixed point; that is, $\cos(x)$ intersects $y=x$ at some point (namely $0.739$, to three decimal places).

Furthermore, on the entire interval $[0,\frac{\pi}{2})$ we have the desired fact: $|\cos'(x)| = |\sin(x)|<1$.

Thus, by the theorem we proved above, iterating cosine from any $x$ in $[0,\frac{\pi}{2})$ will yield convergence to approximately $0.739$.
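
In fact, a quick back-of-the-envelope check tells us how fast this convergence is: at the fixed point itself, since $\cos(x_0) = x_0$ and $x_0 \in (0, \frac{\pi}{2})$, the contraction factor is $|\cos'(x_0)| = \sin(x_0) = \sqrt{1 - \cos^2(x_0)} = \sqrt{1 - x_0^2} \approx \sqrt{1 - 0.739^2} \approx 0.674$, so near the limit each iteration shrinks the error to roughly two-thirds of its previous size.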

Now it isn't too hard to extend this to cosine's entire domain (note that a single application of cosine always lands us in $[-1,1]$), using some properties of cosine, but I'll omit this part. If you want to try proving it for yourself, the fact that cosine is an even function may prove to be useful.

After a bit of reflecting, I realized an even 'cheaper' form of Theorem 1 could be proved, which doesn't require $f$ to be continuously differentiable at $x_0$; rather, only differentiability at $x_0$ is needed.

This 'cheaper' version of Theorem 1 is presented and proved below:

Theorem 1 (Cheap version) - Suppose $x_0 \in [a,b]$ is a point such that $f(x_0) = x_0$. Given that $f$ is differentiable at $x_0$ with $|f'(x_0)|<1$, there exists a $\delta > 0$ such that $\forall x \in (x_0 - \delta , x_0 + \delta)$, the sequence $f^n(x)$ converges to $x_0$ as $n \to \infty$.

Proof - Suppose $f'(x_0) = L$ exists. Then there exists a $\delta>0$ such that $\forall x\in (x_0-\delta, x_0 + \delta)$ with $x \neq x_0$, $|\frac{f(x) - f(x_0)}{x-x_0} - L | < k$, where $k>0$ is chosen so that $|L| + k = \frac{1}{p}$ for some $p>1$ (again possible because $|L| < 1$).

Then we have, $\forall x\in (x_0-\delta, x_0+\delta)$, $|\frac{f(x) - f(x_0)}{x-x_0}| < \frac{1}{p}$, so that $|f(x) - f(x_0)| < |x-x_0|\frac{1}{p}$.

Now we similarly employ an inductive hypothesis, namely, $|f^k(x) - f^k(x_0)| < |x-x_0|\frac{1}{p^k}$ for some integer $k \geq 1$.

By the inductive hypothesis (and since $f^k(x_0) = x_0$), we see that $f^k(x) \in (x_0-\delta, x_0 + \delta)$.

Therefore, by the estimate furnished by differentiability at $x_0$, we know $|f(f^k(x)) - f(x_0)| < |f^k(x) - x_0|\frac{1}{p}$.

Now, employing the inductive hypothesis, $|f^{k+1}(x) - f^{k+1}(x_0)| < |x-x_0|\frac{1}{p^{k+1}}$, and the theorem again follows by induction.

$\Box$

Sunday, January 17, 2016

First Post/ Welcome

Hi,

To summarize what sort of things you can expect to read here: the purpose of this blog is for me to keep a record of anything meaningful I've been doing or thinking about each day. Therefore, I'll try to make sure I post one or two things on a daily basis. The subject matter of posts will vary, but will primarily concern math and mathematically related subjects, as well as some general musings about anything on my mind.