Scientific Computing Using Python - PHYS:4905 - Fall 2018

Lecture #08 - 9/13/2018 - Prof. Kaaret

These notes borrow from Linear Algebra by Cherney, Denton, Thomas, and Waldron.

Muppets from Space (1999)

We will be considering vectors with an arbitrarily large number of components: n-vectors.  We write them as
      a = \begin{pmatrix} a^1 \\ a^2 \\ ⋮ \\ a^n \end{pmatrix} \;\;\;\;\; or \;\;\;\;\; b = \begin{pmatrix} b^1 \\ b^2 \\ ⋮ \\ b^n \end{pmatrix}

where the individual components are real numbers, a^n ∊ ℝ, and ℝ denotes the set of real numbers.  Note that a^2 means the second element of the vector a and not a squared.  Also, note that in standard mathematical notation, one starts indexing from 1 rather than from zero as in Python.

Vectors exist in a 'vector space'.  The vector space for an n-vector is the set of all possible n-vectors for that particular n, which we can write as ℝ^n.

The Big Cube (1969)

We will initially be working in vector spaces with Euclidean geometry.  This is the geometry that you are familiar with from 2 and 3 dimensional vectors in physics.  Addition of vectors and multiplication of a scalar times a vector work as you expect

      a + b = \begin{pmatrix} a^1 + b^1 \\ a^2 + b^2 \\ ⋮ \\ a^n + b^n \end{pmatrix} \;\;\;\;\; and \;\;\;\;\; λa = \begin{pmatrix} λa^1 \\ λa^2 \\ ⋮ \\ λa^n \end{pmatrix}
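
In NumPy, these componentwise operations are exactly what arrays do; a minimal sketch (the vector values here are arbitrary examples):

```python
import numpy as np

# Two example 4-vectors (arbitrary values, just for illustration)
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([4.0, 3.0, 2.0, 1.0])
lam = 2.5

# Componentwise addition and scalar multiplication, as in the formulas above
s = a + b         # (a^1 + b^1, ..., a^n + b^n)
scaled = lam * a  # (lambda a^1, ..., lambda a^n)
```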


We define the dot product of two vectors as

      a•b = a^1 b^1 + a^2 b^2 + ⋯ + a^n b^n
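
NumPy's np.dot (or the @ operator) computes this sum of componentwise products; a quick sketch with arbitrary example vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Sum of componentwise products: a^1 b^1 + a^2 b^2 + ... + a^n b^n
dot = np.dot(a, b)   # equivalently, a @ b
```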


We define the Euclidean length of an n-vector as

      \begin{Vmatrix} a \end{Vmatrix} = \sqrt{ (a^1)^2 + (a^2)^2 + ⋯ + (a^n)^2 } = \sqrt{a•a}
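
A sketch of computing the length both from the definition and with NumPy's built-in Euclidean norm (the example vector is arbitrary):

```python
import numpy as np

a = np.array([3.0, 4.0])

# ||a|| = sqrt(a . a), from the definition
length = np.sqrt(np.dot(a, a))

# NumPy's built-in Euclidean norm gives the same number
assert np.isclose(length, np.linalg.norm(a))
```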

and we define the angle θ between two vectors as

      \begin{Vmatrix} a \end{Vmatrix} \begin{Vmatrix} b \end{Vmatrix} \cos(θ) = a•b
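
Solving this relation for θ is a one-liner in NumPy; a sketch with two arbitrary example vectors:

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])

# Solve ||a|| ||b|| cos(theta) = a . b for theta
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(cos_theta)   # angle in radians; here about pi/4
```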


The dot product is

- commutative (or symmetric):   a•b = b•a

- distributive:   a•(b + c) = a•b + a•c

- linear in both vectors, or bilinear:   a•(λb + μc) = λ a•b + μ a•c

- and positive definite:   a•a ⩾ 0, with a•a = 0 only for the zero vector
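
These properties can be spot-checked numerically; a sketch using random NumPy vectors (the seed, sizes, and scalars are arbitrary choices):

```python
import numpy as np

# Random vectors to spot-check the dot product properties above
rng = np.random.default_rng(42)
a, b, c = rng.normal(size=(3, 5))   # three random 5-vectors
lam, mu = 2.0, -3.0

assert np.isclose(a @ b, b @ a)                    # commutative
assert np.isclose(a @ (b + c), a @ b + a @ c)      # distributive
assert np.isclose(a @ (lam*b + mu*c),
                  lam*(a @ b) + mu*(a @ c))        # bilinear
assert a @ a >= 0                                  # positive definite
```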

This isn't the only possible way to define the dot product.  If you've done any special relativity, you might have been introduced to a dot product that is not positive definite.  In the Lorentzian inner product, the product of the terms in the time dimension comes into the sum with a minus sign.  This is because spacetime in relativity is not Euclidean.
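
A sketch of a Lorentzian inner product in NumPy, assuming the (−, +, +, +) sign convention (the sign convention differs between texts):

```python
import numpy as np

# Minkowski metric with the (-, +, +, +) sign convention (a convention choice)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def lorentz_dot(u, v):
    """Lorentzian inner product: the time component enters with a minus sign."""
    return u @ eta @ v

# This 4-vector has a negative Lorentzian "length squared" in this convention,
# so the product is not positive definite
u = np.array([2.0, 1.0, 0.0, 0.0])
assert lorentz_dot(u, u) < 0
```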

Hyperplanes, Trains, and Automobiles (1987)

In n-dimensional Euclidean geometry, we have a vector space ℝ^n that is full of points.  We can use n-vectors to label particular points P.  There is a special point, the origin, that we label with the 0 vector, which has all of its elements equal to zero.  The zero vector is the only vector with zero length and no direction.

We can describe a line in ℝ^n in terms of two vectors, a and b, as the set of points  \{ a + λb \,|\, λ ∊ ℝ \}

We can describe a plane in ℝ^n in terms of three vectors, a, b, and c, as the set of points  \{ a + λb + μc \,|\, λ, μ ∊ ℝ \}

We can keep going and describe a hyperplane with k vectors a_1 ... a_k, where k ⩽ n, as the set of points

\left\{ P + \sum_{i=1}^{k} λ_i a_i \;\middle|\; λ_i ∊ ℝ \right\}
where we have replaced the vector pointing to a position with the point P at that position.
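
A sketch of this parametrization in NumPy, using an arbitrary example point P and k = 2 direction vectors in ℝ^4:

```python
import numpy as np

# A k-plane through the point P spanned by k direction vectors (rows of A).
# P, A, and the lambdas are arbitrary illustrative values, with k = 2, n = 4.
P = np.array([1.0, 2.0, 3.0, 4.0])
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
lam = np.array([0.5, -2.0])   # one lambda_i per direction vector

# The point P + sum_i lambda_i a_i
point = P + lam @ A
```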


Office Space (1999)

The vector spaces ℝ^n are very nice vector spaces, but they are not the only possibilities.  We could, for instance, consider the space of functions of one real variable.  One such function is y = x, another is y = 3x^2, another is y = sin(2x).  Each point in this space represents a function.  We need an infinite number of numbers to specify every possible function, so the space is ℝ^ℝ.  Note that the common operations that we use on vectors still work.

For example, we can add two functions f and g:  f(x) + g(x) = (f+g)(x)
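
In Python, this pointwise addition of functions can be written directly; the function names here are just illustrative:

```python
# Functions as "vectors": addition is defined pointwise
def f(x):
    return x          # y = x

def g(x):
    return 3 * x**2   # y = 3x^2

def add_functions(f, g):
    """Return the function (f+g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

h = add_functions(f, g)   # h(x) = x + 3x^2
```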

Addition in this vector space means starting at one vector, adding another vector, and ending up at a final vector.  It works just like vector addition in a Euclidean space, but the points in the space represent different functions.

This space also has a zero, defined as f(x) = 0.  If we add the zero function to another function, we get back our original function.  This is exactly the same as adding the zero vector.

Our more fancy vector spaces still need to follow a bunch of rules.

Field of Dreams (1989)

We defined our vector spaces over the real numbers.  In this context, the real numbers would be called the field or the base field or the baseball field (well, maybe not that last one).  We could instead use a different field.  In quantum mechanics, we use vector spaces to define the possible states of a physical system.  For example, we might have an electron that can have its spin up, represented by \begin{pmatrix} 1 \\ 0 \end{pmatrix}, and its spin down, represented by \begin{pmatrix} 0 \\ 1 \end{pmatrix}.  In a classical description of the electron, one would have some probability (a positive real number) that the electron is in the spin up state and a probability that the electron is in the spin down state.  It would be nice if the two probabilities add to one, so that the electron is in one state or the other.

A very interesting aspect of quantum mechanics is that probabilities alone aren't good enough.  We need to have a probability amplitude of the electron being in the spin up state and another probability amplitude that it is in the spin down state.  One finds the probability of the electron being in a state by taking the modulus squared of the corresponding amplitude (and, again, it is nice if the probabilities add to one).  However, complex amplitudes allow description of phenomena like interference.
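
In NumPy, amplitudes are just complex array entries; a sketch for an arbitrarily chosen example state:

```python
import numpy as np

# A spin state as a 2-vector of complex probability amplitudes
# (this particular state is just an illustrative example)
state = np.array([1.0, 1.0j]) / np.sqrt(2.0)

# Probabilities are the modulus squared of each amplitude
probs = np.abs(state)**2

# The probabilities add to one: the electron is in one state or the other
assert np.isclose(probs.sum(), 1.0)
```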