Thursday, February 3, 2011

Bug on Band (from Feynman Lectures on Physics)

CHECK THE COMMENT BEFORE YOU READ THIS! (It's pretty awesome.)

This is a problem that i found on this website: http://feynmanlectures.info/ (all credit goes there). The problem is this (and there is a funny story behind it):

An infinitely stretchable rubber band has one end nailed to a wall, while the other end is pulled away from the wall at the rate of 1 m/s;  initially the band is 1 meter long. A bug on the rubber band, initially near the wall end, is crawling toward the other end at the rate of 0.001 cm/s. Will the bug ever reach the other end? If so, when?






Lev Okun gave this problem to Andrei Sakharov to pass the time while they were being driven from Moscow to JINR in Dubna. However, it didn't work (to pass the time): immediately after being told the problem, Sakharov pulled out a pen, took Lev's magazine, and wrote down the solution without any hesitation whatsoever.


So it actually has a pretty nice history, and much respect should be given to Sakharov for his quick solution! Although i see how, after having done the problem, once you've done one, you've done 'em all. I also want to mention another similar problem that i came across some time ago and just remembered when i read this one. It is from a book called Professor Stewart's Cabinet of Mathematical Curiosities by Ian Stewart. The problem is as follows, and is made slightly simpler than the one above (but trivially so, as it turns out - and it is actually kind of nice that way! I'll explain what I mean when I present my solutions below):


The spaceship Indefensible is at the center of a spherical galaxy with radius 1000 lightyears (lyrs). The spaceship travels at a rate of one lightyear every year - the speed of light (assume it's made of photons...). At exactly one year intervals after the Indefensible starts its voyage, the universe expands instantly by 1000 lyrs, and the ship is carried along with the space in which it sits. Will the Indefensible ever reach the edge of the galaxy?


So these two problems are really one and the same, except that the bug is carried along as the band expands continuously, while the spaceship is affected by the expanding space only at discrete intervals.
We'll see how this affects the solutions later.


At this point I kind of want to leave these problems up to you guys to solve, without giving away too many hints. One hint I will give for Stewart's problem (which he gives as well) is that the following link may be helpful: http://en.wikipedia.org/wiki/Harmonic_series_(mathematics) (particularly the section on Rate of Divergence...)
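To get a feel for that hint without giving anything away, here's a quick Python sketch (my own addition, not part of the original post) of just how slowly the harmonic partial sums grow:

```python
from math import log

def H(n):
    """n-th harmonic partial sum: 1 + 1/2 + ... + 1/n"""
    return sum(1.0 / k for k in range(1, n + 1))

# the partial sums grow like ln(n) + 0.5772... (the Euler-Mascheroni constant),
# so they diverge - but agonizingly slowly
for n in (10, 100, 1000, 10**6):
    print(n, H(n), log(n) + 0.5772156649)
```

Even a million terms only gets you to about 14.4 - which is exactly the kind of slowness the problem is built on.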

I'll upload the solutions sometime later, but don't peek! They're quite rewarding problems to work!

I'll note that the entire time I was working the Bug on Band problem I thought that there was absolutely no way that my solution was even remotely correct, until I got the right answer! Just goes to show you sometimes...

Tuesday, August 31, 2010

SHM Solution W/O Small Angle!

for some time now, the small angle approximation has been a pet peeve of mine.
don't get me wrong, i greatly appreciate its uses! there are countless problems which would be unsolvable without it.
but i have always tried to get around using it, or when i have to, get approximate solutions.

i'm actually not positive how much work has been done on solving the Harmonic Motion equation WITHOUT the small angle approximation. i do know that it has no closed-form solution, but approximations have been done. and, in fact, i have one to present today!

it was actually not as much work as i thought it would be to get it into a form which could be approximated, but it definitely was more than is usually called for in any physics class i've been in.

so here goes. first, consider the free-body diagram of a swinging pendulum, and hopefully arrive at this equation:

-Lmg sinθ = Iα
where L is the length of the string, m is the mass of the object attached to it, g is the acceleration of gravity, I is the moment of inertia for the pendulum, and α is the rotational acceleration.
rearrange this and put it into the familiar differential equation form:

θ'' + c²sinθ = 0, where c² = Lmg/I

now let ω = θ', so that ω' = θ'' = -c²sinθ, then
dω/dt = -c²sinθ
also note that dθ/dt = ω. and we can combine the previous two equations to arrive at the following:

ω dω/dθ = -c²sinθ
ω dω = -c²sinθ dθ
∫ω dω = ∫-c²sinθ dθ + C₁
½ ω² = c² cosθ + C₁
ω² = 2c² cosθ + C₂

now to find C₂:
we know that at time t = 0, dθ/dt = ω(0) = 0, and θ(0) = θmax = θ₀
0² = 2c² cos(θ₀) + C₂
C₂ = -2c² cos(θ₀)

so now we have:
ω² = 2c² cosθ - 2c² cos(θ₀)
... skipping a few steps now in the interest of space:
dθ/dt = ± c(√2)√[ cos(θ) - cos(θ₀) ]
this D.E. is separable and leads to the following integral:

∫dθ/√[ cos(θ) - cos(θ₀) ] = ± c(√2) t + C₃
unfortunately, this is an elliptic integral and cannot be evaluated in closed form. so i will just call the left-hand side I and solve for t:
I = ± c(√2) t + C₃
t = (I - C₃)/(± c(√2)).

you can, however, approximate this integral with a Taylor Series. i'll give the first few terms here. but note that  the rest can be found by plugging the integral into Wolfram Alpha's solver. and in the interest of clarity, i will post a picture instead of writing the terms out:
∫dx/√[ cos(x) - a ] ≈ ...


it is nice to see this solution compared to the solution of the equation WITH the small angle approximation, θ(t) = θ₀cos(ct). solving that for t gives:
t = arccos(θ/θ₀)/c
(much simpler than the other form!)

i would have liked to graph these two functions and see how accurately the small angle approximation approximates the actual function, but i couldn't find a grapher that was accurate enough, nor enough terms of the integral to get it sufficiently accurate.
i will, however, tell you that it actually approximates the true solution VERY well, and is alright in my book.
if you are interested in plotting or graphing these values, the Wolfram Alpha site will let you get copy-able text to paste into your software by clicking on the picture.
i found this website convenient for quick graphing for anyone who wants it:
(not very accurate though)

http://www.livephysics.com/ptools/online-function-grapher.php
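one more option, if you'd rather skip the graphers entirely: integrate θ'' = -c²sinθ numerically and compare periods. here's a rough Python sketch of my own (the RK4 scheme, amplitudes, and step size are just illustrative choices, not anything from the post above):

```python
from math import sin, pi

def pendulum_period(theta0, c=1.0, dt=1e-4):
    """time a quarter-swing of theta'' = -c**2 * sin(theta), released
    from rest at angle theta0, using classical RK4; return the full period"""
    theta, omega, t = theta0, 0.0, 0.0
    f = lambda th, om: (om, -c * c * sin(th))   # (dtheta/dt, domega/dt)
    while theta > 0.0:                          # stop when theta first crosses zero
        k1t, k1o = f(theta, omega)
        k2t, k2o = f(theta + dt/2*k1t, omega + dt/2*k1o)
        k3t, k3o = f(theta + dt/2*k2t, omega + dt/2*k2o)
        k4t, k4o = f(theta + dt*k3t, omega + dt*k3o)
        theta += dt/6*(k1t + 2*k2t + 2*k3t + k4t)
        omega += dt/6*(k1o + 2*k2o + 2*k3o + k4o)
        t += dt
    return 4.0 * t                              # a quarter period, times 4

# the small angle approximation says T = 2*pi/c no matter the amplitude
print(pendulum_period(0.1), 2*pi)   # nearly identical at small amplitude
print(pendulum_period(2.0), 2*pi)   # the true period is noticeably longer
```

at θ₀ = 0.1 rad the two periods agree to better than a tenth of a percent, which backs up the claim that the approximation is alright in my book.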

Tuesday, August 24, 2010

Random Walk!

This is something that I discovered in my lab manual for lower div Thermodynamics - we didn't do it in class at all, in fact it's horrible that it wasn't even mentioned (not only because it's very interesting, but because it plays a major role in thermodynamics)!

It is known as the Random Walk "experiment" or "hypothesis".

The experiment is this:
You are standing on a city block in a city such as Philadelphia (so that all the streets are in a rectangular grid), and you hold a coin in your hand.
You flip the coin: if it comes up heads, you walk a block West; if it comes up tails, you walk a block East. (And it is assumed that all the blocks are of equal length.)
So you do this a great number of times (or until you get tired).

Where are you?

Think about this. Most people would think - rather logically - that you will end up in your original place! This seems logical because you could argue that the coin will come up heads an equal number of times as it comes up tails. And therefore, you will walk the same number of blocks West as you will East, with the net result being that you will end up exactly where you began!

Unfortunately, and very surprisingly, this is NOT the case.
You do not end up in the same place. Would you be surprised if I told you that you will end up, on average, about the square root of the number of flips in blocks away from your origin? (As the number of flips and trials you do becomes large.)

How can this be? Well, there is a mathematical proof, which i will present for you. But I also encourage you to run some tests of this phenomenon! (You don't actually have to go outside and do this - a simple coin and pencil will do, or a computer program, which would be much faster.)

Here is the python code that I used to test this:
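[edit: the original listing appears to have gone missing from this post. below is a minimal stand-in of my own - the name manyRandWalk and its argument order (trials first, then flips) are guesses based on the usage described below:]

```python
import random
from math import sqrt

def randWalk(flips):
    """one walk: +1 block west or -1 block east per coin flip"""
    pos = 0
    for _ in range(flips):
        pos += random.choice((1, -1))
    return pos

def manyRandWalk(trials, flips):
    """average |final distance| over many independent walks"""
    avg = sum(abs(randWalk(flips)) for _ in range(trials)) / float(trials)
    print("average distance:", avg)
    print("sqrt(flips):     ", sqrt(flips))
    return avg
```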


Try it yourself! And if you don't have Python, it is extremely easy to download from python.org (I use version 2.6.5).
A good way to see the results is to do manyRandWalk(1000,10000). It'll take a few seconds, but you can see that it is very accurate.

And I actually should note that, on average, this will not exactly approach sqrt(n), where n is the number of flips. For statistical reasons it is about 4/5 of that - the exact factor is sqrt(2/π) ≈ 0.798 - so 0.8*sqrt(n) is closer to correct. Which you will see if you run the program! i got about 81.43 and 77.876 on my first couple tries, and the exact answer should be around 80!

and, if you were wondering, here is the mathematical proof. this was actually a proof by Feynman from his Lectures on Physics. and if you're wondering how this pertains to physics, i'll write a little about that after the short proof:

let D(n) be the distance travelled after n coin tosses, and let a positive number be distance travelled west, a negative one distance travelled east.
then D(1) is guaranteed to be ±1. or we can say
D(1)² = 1

we also know, by the same logic, that
D(n+1) = D(n) ± 1, then
D(n+1)² = D(n)² ± 2D(n) + 1

here, feynman uses some argument (which escapes me at the moment) as to why the "± 2D(n)" can be ignored or eliminated. as best as i can remember, it had to do with the two forms cancelling out for large n, which would seem reasonable. but anyway, continuing thusly (and trusting Feynman at this point):
[edit: i did some work on this, not too long after publishing this post, and it is very clear that, on the average, the ± 2D(n) term will be zero - the sign of each step is independent of D(n), so the + and - cases cancel out. it's actually just painfully obvious and should be intuitive. but note that making this simplification means accepting that all the following work holds only on the average. that is to say, when i show that D(2)² = 2, it seems contradictory to say that D(2) = ±1.414... (an irrational number) because it should obviously be either 0 or ±2! this is an average! so while this may have been a little too detailed, i just thought it was interesting! and now the proof:]

D(n+1)² = D(n)² + 1
and since we know that D(1)² = 1, we can find D(2)²:
D(2)² = D(1)² + 1 = 1 + 1 = 2
and D(3)²:
D(3)² = D(2)² + 1 = 2 + 1 = 3
and D(4)²:
D(4)² = D(3)² + 1 = 3 + 1 = 4
etc...
doing this, we find that
D(n)² = n, or
D(n) = √n

and that is how it is done! i do encourage you to try it out on your computer (or it is even easy enough to program on your calculator).

now just a little bit as to why this is physically relevant. imagine a random walk - not back and forth in 1 dimension - but all around space in 3 dimensions. consider also that a gas molecule travels roughly randomly through space. you should now see that a random walk in 3 dimensions is actually useful for estimating the path - or displacement - of a gas molecule!
while this may not necessarily be as exciting a revelation, random walks are referenced in a variety of other topics such as Path Integration - very Feynman!
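just to back up that 3-dimensional claim with a toy test (my own sketch, not from the lab manual) - a random walk on a 3-d grid spreads out at the same sqrt(n) rate as the 1-d walk:

```python
import random
from math import sqrt

def walk3d(n):
    """n random unit steps along +-x, +-y, or +-z; return final distance from start"""
    pos = [0, 0, 0]
    for _ in range(n):
        axis = random.randrange(3)          # pick a random axis...
        pos[axis] += random.choice((1, -1)) # ...and step one block along it
    return sqrt(pos[0]**2 + pos[1]**2 + pos[2]**2)

trials, n = 2000, 400
avg = sum(walk3d(n) for _ in range(trials)) / float(trials)
print("average distance:", avg)  # close to sqrt(n) = 20 blocks
print("sqrt(n):         ", sqrt(n))
```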

anyway, i hope you found this topic as curious and stimulating as i did!

Y!A: String and Quantum Theory

This is a question i answered for someone who seemed to be having some trouble conceptualizing both String Theory and Quantum Mechanics in the same context. i can't say that i completely understood what he was asking, but i hope i gave him a good idea that these two concepts are closely related.

http://answers.yahoo.com/question/index?qid=20100824133230AAGX5rj&r=w#OpBoM23EKWK18mDD9Dq0

Q:
Simultaniously Conceptualizing String & Quantum Theory?
String theory says there are strings that are 1-dimensional slices of a 2-dimensional membrane vibrating in 11-dimensional space with variations in their vibrations resulting in the creation of all light, matter, gravity etc... in the universe. Quantum Mechanics says particles can exist in a super position until the wave function is collapsed, part of the particle wave duality. How can one stitch these two theories together conceptually? Are all strings in some type of super position as well until collapsed? Or another way of asking; If a particle is in super position, is the string also in a kind of superposition? 


Maybe the question is illogical. Like asking what's the marital status of the number nine. 


Help with conceptualizing this would be much appreciated!

A:
first of all, i think you know very well what strings and string theory are, and i compliment you on that knowledge!
one thing i want to clarify is how these strings create, as you say, "all light, matter, gravity etc... in the universe." the only thing that a variation in a string vibration produces is a different fundamental particle. that is, one specific vibration of a string is an up quark, while another is the gluon. then the interactions between these particles create light, gravity, etc... (and even mass, itself!)
but this i'm sure you know, and i just wanted to clarify.

these two ideas can be easily conceptualized as follows:
quantum theory does state that particles exist in a state of uncertainty until they are 'observed'. so, in short, yes, you can think of the strings being in a sort of quantum state as well. but remember that this quantum state applies only to properties of the particle like position, speed, etc... so it is essentially no different than thinking of the particles in this superposition. since these basic particles are, more fundamentally, strings, it is just fine to think of the strings adhering to the same rules as the particles! although i don't necessarily know what great insight this approach would yield.

one thing which would be wrong to say is that the vibration of the string is also in superposition - that is, that the string is vibrating in many different ways - and is thus many different particles - at the same time. although this may seem theoretically possible, remember that a particle remains in superposition only as long as it is not observed or measured. i would think that the universe is constantly checking on which particle a certain string is behaving as. i may be wrong, however, but i have never heard of quarks suddenly turning into leptons and then turning into photons.

Sunday, August 22, 2010

Alternate Derivation of SHM Equation

here is a derivation of the SHM equation in a slightly simpler form that might be easier to understand for those who don't want to go through all the linear operators and auxiliary equations that come with the other proof.
so here is a different proof where i've just used a few substitutions and a couple integrations.

we all know the basic condition for SHM:
F = -kx
and Newton's Second Law:
F = ma

and from these we construct our differential equation (look to the previous blog post if you need to see how this is put together):
d²x/dt² + c²x = 0,
where c² = k/m

now, let v = dx/dt. then also, d²x/dt² = v * dv/dx (this is a substitution often used for solving differential equations)
v * dv/dx = -c² x
v dv = -c² x dx
∫v dv = -c² ∫x dx + C₁
½ v² = -c² ½ x² + C₁
v² = -c² x² + C₂ ... (C₂ = 2C₁)
(dx/dt)² = -c² x² + C₂

let us now find C₂:
we assume that at time t = 0, the oscillating object is at its maximum displacement, A, and has no velocity. that is, x(0) = A and x'(0) = 0:
x'(0)² = -c² x(0)² + C₂
0² = -c² A² + C₂
C₂ = c²A²

now back to the differential equation:
(dx/dt)² = -c²x² + c²A²
dx/dt = ±√[-c²x² + c²A²]
dx/dt = ± c√[ A² - x²]
dx / √[ A² - x²] = ± c dt
∫dx / √[ A² - x²] = ± c ∫dt + C₃

here we use a trigonometric substitution x = A sinθ to solve the integral. this makes:
√[ A² - x² ] = √[ A² - A² sin²θ ] = A √ (1 - sin²θ) = A √ (cos²θ) = A cosθ
and,
dx = A cosθ dθ
lastly,
sinθ = x/A => θ = arcsin[x/A]

∫A cosθ dθ/ A cosθ = ± c ∫dt + C₃
∫dθ = ± c ∫dt + C₃
θ = ± ct + C₃
arcsin[x/A] = ± ct + C₃
x/A = sin( ±ct + C₃ )
x = A sin( ±ct + C₃ )

now, to find C₃:
recall that at time t = 0, x is a maximum of A:
x(0) = A sin( ±c*0 + C₃)
A = A sin(C₃)
1 = sin(C₃)
C₃ = π/2

then our equation becomes:
x = A sin( ±ct + π/2 ), but we know from trigonometry that sin(θ + π/2) = cos(θ)
x = A cos( ±ct)
and remember also that cos(-θ) = cos(θ), which gives:
x(t) = A cos(ct)

THERE WE GO!
see, that wasn't so difficult! it was actually probably easier to follow than the previous derivation.
and i wrote out literally ALL the steps necessary. a physics textbook wouldn't have to show half of this stuff, and it's not that difficult to put it in the back in an appendix or something, come on!
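and if you want a quick sanity check of the final result, you can plug x(t) = A cos(ct) back into x'' + c²x = 0 numerically. here's a small Python sketch of my own (the values of A and c are just illustrative):

```python
from math import cos

A, c, h = 2.0, 3.0, 1e-4   # amplitude, c = sqrt(k/m), finite-difference step

def x(t):
    """the solution we just derived"""
    return A * cos(c * t)

def residual(t):
    """x'' + c**2 * x, with x'' from a central finite difference"""
    xpp = (x(t + h) - 2.0*x(t) + x(t - h)) / h**2
    return xpp + c*c*x(t)

for t in (0.0, 0.7, 1.5):
    print(t, residual(t))   # all ~0, up to finite-difference error
```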

Derivation of SHM equation

every time i read a physics book (especially the textbooks) and i get to the part about simple harmonic motion (SHM), i get nervous. this is because every single time i have read the section, the author NEVER explains how to - or says it is too difficult to - derive the equation for SHM! everyone knows that it is
x(t) = A cos(ωt) ,
or some of its other variants (like with the +φ, or the sin or what-have-you). but this is just GIVEN to us! and the author never tells us how we got there! sure, we know that it is simple enough to show that it satisfies Hooke's condition:
F = -kx, and can be equated with Newton's Second Law:
F = ma = m (d²x/dt²), yielding:
m (d²x/dt²) = -kx, or
x'' + (k/m) x = 0, and letting k/m = c²
x'' + c²x = 0

but NO ONE has ever explained the process of deriving this! and i know that, at this level of physics, that sort of mathematics isn't necessary, but it shouldn't be difficult to whip together a part in the appendix which has the solution for those who are curious!

it is also worth noting that, way before i could solve these differential equations myself, i found another solution to this one. it came about because i had been staring so long and so hard at the solution and the equation, trying to figure out how it was done, that i came up with it (which, with some small effort, can actually be seen as equivalent to the more common solution):
x = A e^(±ict). taking derivatives, we see that
x' = ±icA e^(±ict), and
x'' = -c²A e^(±ict), or
x'' = -c²x, which satisfies our condition!

so this was one cool (and complex!) solution to the differential equation which, although not very useful (because of the imaginary component), was still exciting at the time!
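(for the curious, this complex solution is easy to check numerically too, with python's cmath. a quick sketch of my own - the finite-difference second derivative just mirrors the derivatives taken above:)

```python
import cmath

c, h = 2.0, 1e-4   # illustrative c, finite-difference step

def x(t):
    """A e^(ict) with A = 1"""
    return cmath.exp(1j * c * t)

def xpp(t):
    """second derivative via a central finite difference"""
    return (x(t + h) - 2.0*x(t) + x(t - h)) / h**2

# x'' should equal -c**2 * x at every t
for t in (0.0, 0.5, 1.0):
    print(abs(xpp(t) + c*c*x(t)))   # ~0, up to finite-difference error
```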

so naturally i, not being able to find the solution anywhere, had to derive it myself.
i did this a couple of ways (and at different times). the first time was in the infancy of my Diff. equ. class, when we had just learned how to handle differential equations with second order derivatives. PERFECT! i made the necessary substitutions and solved the equation that way. it actually was not that difficult and could easily have been done in one section (or appendix) and been understandable by anyone with a calculus background.
but this isn't the solution i'm going to present today; the one i have was derived with some knowledge of linear operators and some more basic calculus (and, of course, our dear friend Euler). i might also post my first solution up here if i get around to re-working that proof.

so here goes!
we start with our fundamental differential equation derived from Hooke's Law:
F = -kx = ma = m d²x/dt²
rearranging, we get:
d²x/dt² + (k/m) x = 0, and i will make the substitution c² = k/m
d²x/dt² + c²x = 0
note that this is a homogeneous differential equation with constant coefficients. it can also be written this way:
D²x + c²x = 0, or
(D² + c²)(x) = 0

now recall what we learned about in the last post about these types of differential equations. we have to write the auxiliary equation and find the roots:
r² + c² = 0, solving this for r:
r² = -c²
r = ±√(-c²)
r = ± ic, where i = √(-1). so we have
r₁ = ic, r₂ = -ic.
note that this is of the more general form r = α ± iβ (where α = 0, β = c)

now that we have this solution, we find that our solution for this case is of the form:
x = e^(αt) [ C₁e^(iβt) + C₂e^(-iβt) ],
where C₁ and C₂ are constants of integration.

and don't think, for a second, that i would just present an esoteric solution to a complicated problem and expect you to blindly accept it! remember that i did post how to arrive at this solution in my previous post. so if you have any questions about how we got there, go one blog post back.

and so, substituting our results into the solution, we get:
x = C₁e^(ict) + C₂e^(-ict).
but this doesn't look at all like the solution we know and love! but if we are just patient, we can do something very interesting with 'i' in the exponent of 'e' - where, again, Euler will save us. and i'm sure you've seen it before. it is the identity:
e^(iθ) = cosθ + i sinθ.
substituting ct (and -ct) for θ, we get:
e^(ict) = cos(ct) + i sin(ct), and
e^(-ict) = cos(-ct) + i sin(-ct). but we can actually use our knowledge of the odd and even qualities of sin and cos, to simplify this second equation. we will use the fact that cos(-θ) = cos(θ), and sin(-θ) = -sin(θ):
e^(-ict) = cos(ct) - i sin(ct)

so then, substituting back in, our solution looks like this:
x = C₁[ cos(ct) + i sin(ct) ] + C₂[ cos(ct) - i sin(ct) ]. and rearranging, we can write it this way:
x(t) = (C₁ + C₂) cos(ct) + i (C₁ - C₂) sin(ct)

huzzah! this is our general solution! wonderful, isn't it? we can simplify it a little bit by giving the initial value conditions x(0) = A and x'(0) = 0, where A is the amplitude, or maximum value of the oscillation. by doing this we've said that at time t = 0, the pendulum (or spring, or what-have-you) is at its maximum displacement, and that it has 0 velocity at that time and place (which, logically, would have to be the case).
this gives us the following:
x(0) = (C₁ + C₂) cos(0) + i (C₁ - C₂) sin(0) = (C₁ + C₂) = A
x'(0) = -c(C₁ + C₂)sin(0) + ci(C₁ - C₂)cos(0) = ci(C₁ - C₂) = 0

solving this system of C₁ and C₂, we get:
C₁ = C₂ = A/2
and plugging these into our equation:

x(t) = A cos(ct), c² = k/m
it is obvious now that this solution is EQUIVALENT to the commonly known solution! so not only have we derived it completely, we have derived it correctly! and with all steps shown and no gimmicks! it's hard to imagine that a physics textbook couldn't fit this somewhere into the text! even a mention in the appendix is not that hard to do!

i hope that, at the very least, people now have some idea that this equation CAN be derived - and rather simply too! (i've actually realized that first solution i worked out for this problem might be even simpler and easier to see than this one, so i will go ahead and post that one too).

Saturday, August 21, 2010

Derivation of D.E. solution


i'm writing this post mostly to show how we will arrive at the solution to the differential equation (D.E.) that we will encounter for SHM (Simple Harmonic Motion) in my next post. this is so that nothing is left out of the proof and my reader(s?) will not feel lost at all! with this proof and the following post, you should have everything you need to solve the SHM differential equation.

but this is not just used to solve the SHM equation! do note that this solution can be applied to many differential equations, and really rather easily! maybe you'll come across one and this post will have helped you solve it! (well, i can only dream).

there probably will not be as much step-by-step explanation in this post, mostly just math. but if you follow carefully, and i don't make any mistakes, the math should speak for itself.
so here goes:

first we need to mention that this is a homogeneous linear D.E. with constant coefficients which can have any order derivative.
i will present mine as a D.E. with a 2nd order derivative because that is the most basic and relevant example, and it can easily be expanded to any higher derivatives with little (or no) extra work.

consider, then the D.E.:
d²x/dt² + 2a dx/dt + bx = 0, or
x'' + 2ax' + bx = 0, where a and b are constants. this can be written in operator notation as:
(D² + 2aD + b) x = 0
and if my convention for operator notation is different than yours, D is the differential operator d/dt, and so dx/dt is written Dx or D(x).

now, in order to proceed, we need to know something about linear D.E.s with constant coefficients - which is where Euler will come in and help us. something known as an 'auxiliary equation' is used to classify this D.E. and quickly arrive at its solution. the aux. equ. is simply:
r² + 2ar + b = 0
regardless of the particular coefficients, this quadratic has 2 roots, r₁ and r₂, such that
(r - r₁) (r - r₂) = 0
(and it may have more for a higher order differential)

it follows then that our D.E. can be written as:
(D - r₁) (D - r₂) x = 0

now, let u = (D - r₂) x, and we can write our equation as:
(D - r₁) u = 0, and solve for u
du/dt - r₁u = 0
du/dt = r₁u
du/u = r₁dt, and integrating...
∫du/u = ∫ r₁dt + C₁''
ln|u| = r₁t + C₁''
|u| = e^( r₁t + C₁''), and let C₁' = ±e^(C₁'') (absorbing the sign from the absolute value)
u = C₁' e^( r₁t)

then, since:
(D - r₂) x = u
dx/dt - r₂x = C₁' e^(r₁t), or
dx/dt + (- r₂) x = C₁' e^(r₁t), which is of a form that has a known solution. but i'm not going to cut any corners even there - i'll walk through it.

let p(t) = -r₂, q(t) = C₁' e^(r₁t), such that our equation is:
x' + px = q
now let μ = e^[∫p(t)dt] = e^[∫-r₂dt] = e^(-r₂t)
note also that dμ/dt = -r₂ e^(-r₂t) = μp
now multiply through our equation with μ (which is okay since μ ≠ 0 ∀t):
μx' + μpx = μq, or
μ (dx/dt) + x (dμ/dt) = μq, which should look like the product rule for differentiation to you, thus:
d(μx)/dt = μq, or
d(μx) = μqdt, and integrating:
∫d(μx) = ∫μqdt + C₂
μx = ∫μqdt + C₂, and substituting back in:
e^(-r₂t) x = ∫ e^(-r₂t) C₁' e^(r₁t) dt + C₂
e^(-r₂t) x = C₁' ∫ e^[(r₁ - r₂) t] dt + C₂

here, we will observe 3 special cases for evaluating the integral ∫ e^[(r₁ - r₂) t] dt:
CASE 1: r₁ = r₂ = r
e^(-rt) x = C₁' ∫ e^[0 t] dt + C₂
e^(-rt) x = C₁'t + C₂, (and let C₁ = C₁')
x(t) = e^(rt) (C₁t + C₂)

CASE 2: r₁ ≠ r₂
e^(-r₂t) x = C₁' ∫ e^[(r₁ - r₂) t] dt + C₂
e^(-r₂t) x = C₁'/(r₁ - r₂) * e^[(r₁ - r₂) t] + C₂, (and let C₁ = C₁'/(r₁ - r₂))
e^(-r₂t) x = C₁ e^[(r₁ - r₂) t] + C₂
x(t) = C₁ e^(r₁t) + C₂ e^(r₂t)

CASE 3: r = α ± iβ (the case for complex roots of the aux. equ.)
let r₁ = α + iβ, r₂ = α - iβ
e^[-(α - iβ) t] x = C₁' ∫ e^[(2iβ) t] dt + C₂
e^[-(α - iβ) t] x = C₁'/2iβ * e^[(2iβ) t] + C₂, (and let C₁ = C₁'/2iβ)
e^[-(α - iβ) t] x = C₁ e^[(2iβ) t] + C₂
x = e^[(α - iβ) t] * C₁ e^[(2iβ) t] + e^[(α - iβ) t] * C₂
x(t) = e^(αt) [ C₁e^(iβt) + C₂e^(-iβt) ]
(to make things a little more interesting still, convert this to sin and cos using Euler's formula! i'll have to do this in the next post anyway)
(hint: just call (C₁ + C₂) and i(C₁ - C₂), C₁* and C₂*, respectively)

and there we go! 3 comprehensive solutions to ANY homogeneous linear D.E. with constant coefficients. but what if you have higher order derivatives and more roots in your aux. equ., you say? well, those can simply be handled like so:
CASE 1 will become:
x(t) = e^(rt) (C₁t² + C₂t + C₃), and so forth for more roots...
CASE 2 will (easily) become:
x(t) = C₁ e^(r₁t) + C₂ e^(r₂t) + C₃ e^(r₃t) ... for more roots
and CASE 3 you don't really have to worry about because you will never have complex roots that don't come in pairs (unless your coefficients are complex)

what'll really get you is higher multiplicities in your roots! consider the equation:
(D² + 3D + 2)² x = 0, which you can write:
(D + 2)² (D + 1)² x = 0. the solutions for this turn out to be a combination of CASE 1 and CASE 2:
x(t) = e^(-2t) (C₄t + C₃) + e^(-t) (C₂t + C₁)
verify yourself!
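here's one way to verify it numerically rather than by hand (a sketch of my own: expand (D^2 + 3D + 2)^2 to D^4 + 6D^3 + 13D^2 + 12D + 4 and apply it to the claimed solution with central finite differences - the constants are arbitrary):

```python
from math import exp

C1, C2, C3, C4 = 1.0, 0.5, 1.0, 0.5   # arbitrary constants of integration

def x(t):
    """the claimed general solution of (D^2 + 3D + 2)^2 x = 0"""
    return exp(-2*t)*(C4*t + C3) + exp(-t)*(C2*t + C1)

def apply_operator(t, h=3e-3):
    """apply D^4 + 6D^3 + 13D^2 + 12D + 4 using central finite differences"""
    d1 = (x(t+h) - x(t-h)) / (2*h)
    d2 = (x(t+h) - 2*x(t) + x(t-h)) / h**2
    d3 = (x(t+2*h) - 2*x(t+h) + 2*x(t-h) - x(t-2*h)) / (2*h**3)
    d4 = (x(t+2*h) - 4*x(t+h) + 6*x(t) - 4*x(t-h) + x(t-2*h)) / h**4
    return d4 + 6*d3 + 13*d2 + 12*d1 + 4*x(t)

for t in (0.0, 0.5, 1.0):
    print(t, apply_operator(t))   # ~0, up to finite-difference error
```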
the cool part is that, with practice, you should be able to solve these differential equations in 5 or 10 seconds in your head! amaze your friends! trust me, it's very fun :] all you need to know how to do is factor polynomials!

what about the following?:
(D² + 4D + 5)² (D² - 5D + 4)³ x = 0
well, it may take you more than 5 or 10 seconds to actually write it, but you should have no problem coming up with an answer very quickly!

also, something else to note as we go through these examples: always make sure the number of constants matches the order of the derivative in your D.E.! if there is a 2nd order derivative in your D.E., you should have a C₁ and a C₂. this is, of course, easily understandable, since you would normally have to do 6 integrations for a D.E. with a 6th order derivative, giving you 6 constants of integration.

so there, i hope that this will completely resolve the question of how to solve the SHM equation, and even give some general solutions for a plethora of differential equations! enjoy! (challenge: solve the differential equation which governs a damped oscillation!)