Manipulating Vectors

- Introduction to Notation, Terminology, and Basics of Latex.
- Introduction to Vectors
- Scalar multiplication with vectors
- Vector Addition and Subtraction
- Linear Combinations, Linear Independence, and Basis

In the following lectures, we'll explore linear algebra. Note that I'll be using some original code written by the professor to generate interactive visualizations of some concepts. To use this code yourself, the `bokeh` module must be installed on your machine (or in your Google Colab instance):

`!pip install bokeh`

In [1]:

```
import numpy as np
import requests
# Read in Visualization code from Github
exec(requests.get('https://raw.githubusercontent.com/edunford/ppol564/master/lectures/visualization_library/visualize.py').content)
vla = LinearAlgebra # assign the class to a simpler name.
```

LaTeX allows for the simple construction and formatting of mathematical statements. The framework extends to a robust typesetting system that eases scientific writing. Moreover, a `.tex` file (the file type for a LaTeX document) is ultimately a plain-text document, making it ideal for version control software such as git.

Writing math in LaTeX is involved and can take some time to master. Please refer to the following link for a useful wiki on composing mathematical statements in LaTeX.

Note that Jupyter Notebooks come equipped with LaTeX support, and we can write LaTeX code directly in Markdown cells, housing our mathematical statements between dollar signs.

`$ <math here> $`

or

`$$ <math here> $$`

The difference between one and two `$` is that the former will render the mathematical statement inline, whereas the latter will print the statement on its own line.

E.g.

*We found that $x = 4$.*

*We found that $$x = 4$$.*

We can also compose equations as follows:

```
\begin{equation}
< math here >
\end{equation}
```

E.g.

\begin{equation} y_i = \beta_0 + \beta_1 x_i + \epsilon_i \end{equation}

There are a number of resources that ease writing mathematical statements in Latex, such as Daum Equation Editor.

A **vector** is an object that has both magnitude and direction.

$$ \vec {v} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} $$
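As a quick numerical sketch of "magnitude and direction": NumPy's `np.linalg.norm` gives the length of a vector, and (in $\Re^2$) `np.arctan2` recovers the angle it makes with the positive $x$-axis.

```python
import numpy as np

v = np.array([1, 2])

# Magnitude (Euclidean length): sqrt(1^2 + 2^2)
magnitude = np.linalg.norm(v)

# Direction: the angle with the positive x-axis, in radians
direction = np.arctan2(v[1], v[0])

print(magnitude)  # sqrt(5), roughly 2.236
print(direction)
```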

In [2]:

```
# Define a vector in R2
vec1 = np.array([1,2])
```

In [3]:

```
plot = vla()
plot.graph()
plot.vector(vec1)
plot.show()
```

Vectors are normally portrayed as shooting out from the origin (0, 0); however, there is nothing intrinsically binding them to this location.

Note that using the zero vector ($\vec{0}$) as an origin is known as the "standard position".

In [4]:

```
plot.change_origin(np.array([-2,-1]))
plot.vector(vec1,add_color="blue",alpha=.3)
plot.change_origin(np.array([1,1]))
plot.vector(vec1,add_color="blue",alpha=.3)
plot.change_origin(np.array([0,-3]))
plot.vector(vec1,add_color="blue",alpha=.3)
plot.show()
```

Note that $\vec{v}$ is a 2-dimensional vector, but we could easily describe vectors in $N$ dimensions (even though we can't visualize them beyond 3 dimensions).
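NumPy handles vectors of any dimension with the same array machinery; for instance, a vector in $\Re^5$:

```python
import numpy as np

# A vector in R^5 -- the same array machinery, just more entries
v5 = np.array([1, 2, 3, 4, 5])
print(v5.shape)   # (5,) -- one axis with five entries

# Operations like scalar multiplication work identically in any dimension
print(2 * v5)     # [ 2  4  6  8 10]
```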

We denote a two-dimensional plane comprising all the real numbers on that plane as $\Re^2$. Here the superscript captures the number of dimensions, which is 2. We'd denote some $n$-dimensional space as $\Re^n$.

For practical reasons, most of the visualizations I'll perform during the lectures will be in $\Re^2$.

\begin{equation} c\vec {v} = c\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} ca \\ cb \end{bmatrix} \end{equation}

where $c$ is an arbitrary scalar in the real numbers, $c \in \Re$.

**Example**
\begin{equation}
2\vec {v} = 2\begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2(1) \\ 2(2) \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \end{bmatrix}
\end{equation}

In [5]:

```
print(vec1)
new_vec = 2*vec1
print(new_vec)
```

In [6]:

```
plot.clear()
plot.graph(10)
plot.vector(vec1,add_color="blue")
plot.vector(new_vec,add_color="blue",alpha=.2)
plot.show()
```

Scalars increase/decrease the length of a vector. Put differently, the vector is scaled by some magnitude.

Thus, we can express any point along $\vec{v}$ by scaling it by some constant $c$, where $c \in \Re$.
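One way to see this numerically: scaling by $c$ multiplies the vector's length by $|c|$ while keeping it on the same line through the origin (flipping its direction when $c < 0$). A quick check with NumPy:

```python
import numpy as np

v = np.array([1, 2])
c = -3

scaled = c * v

# The length scales by |c| ...
print(np.linalg.norm(scaled))        # equals |c| * ||v||
print(abs(c) * np.linalg.norm(v))    # same value

# ... and the scaled vector still lies on the line through v:
# its components remain in the same 1:2 ratio.
print(scaled[1] / scaled[0])         # 2.0
```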

In [7]:

```
for scalar in [-1.2,-.3,-2.3,3.57]:
    new_vec = scalar*vec1
    plot.vector(new_vec,add_color="blue",alpha=.2)
plot.show()
```

Given $\vec{a},~\vec{b} \in \Re^2$

$$\vec{a} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$$ $$\vec{b} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$$

then \begin{equation} \vec{a} + \vec{b} = \begin{bmatrix} a_1+b_1 \\ a_2+b_2 \end{bmatrix} \end{equation}

In [8]:

```
a = np.array([2,1])
b = np.array([1,2])
c = a + b
c
```

Out[8]:

$$\vec{a} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$$ $$\vec{b} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$

then \begin{equation} \vec{a} + \vec{b} = \begin{bmatrix} 1+2 \\ 2+1 \end{bmatrix} = \begin{bmatrix} 3 \\ 3 \end{bmatrix} \end{equation}

In [9]:

```
plot.clear()
plot.graph()
plot.vector(a)
plot.vector(b)
plot.vector(c)
plot.show()
```

What is really going on here when we add two vectors?

In [10]:

```
plot.clear()
plot.graph(8)
plot.add_vectors(a,b)
plot.show()
```

Given $\vec{a},~\vec{b} \in \Re^2$

$$\vec{a} = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$$ $$\vec{b} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$$

then \begin{equation} \vec{a} - \vec{b} = \begin{bmatrix} a_1-b_1 \\ a_2-b_2 \end{bmatrix} \end{equation}

In [11]:

```
a = np.array([2,1])
b = np.array([1,2])
c = a - b
c
```

Out[11]:

$$\vec{a} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$$ $$\vec{b} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$

then \begin{equation} \vec{a} - \vec{b} = \begin{bmatrix} 2-1 \\ 1-2 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \end{equation}

In [12]:

```
plot.clear()
plot.graph()
plot.vector(a)
plot.vector(b)
plot.vector(c,add_color="purple")
plot.show()
```

What is really going on here when we subtract two vectors?

In [13]:

```
plot.clear()
plot.graph()
plot.subtract_vectors(a,b)
plot.show()
```
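Subtraction is just addition of a flipped vector: $\vec{a} - \vec{b} = \vec{a} + (-1)\vec{b}$. We can confirm this with the vectors above:

```python
import numpy as np

a = np.array([2, 1])
b = np.array([1, 2])

# Subtracting b is the same as adding b scaled by -1
print(np.array_equal(a - b, a + (-1) * b))  # True
print(a - b)                                # [ 1 -1]
```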

In [14]:

```
# Two Vectors
a = np.array([2,1])
b = np.array([1,2])
# Scaled by some arbitrary constants
c1 = 2
c2 = -3
# generate a new vector
v = c1*a + c2*b
v
```

Out[14]:

Given that $\vec{a}$ and $\vec{b}$ are linearly independent, we can use some combination of the two to describe every point in $\Re^2$.

In [15]:

```
plot.clear()
plot.graph(20)
plot.vector(a)
plot.vector(b)
plot.vector(-4*a + -2*b)
plot.vector(.3*a + 5*b)
plot.vector(-1*a + -2.2*b)
plot.show()
```

Let's randomly generate a whole bunch of arbitrary constants for $\vec{a}$ and $\vec{b}$.

In [16]:

```
# We can draw random values from known distributions using Numpy
# Here I'm drawing one value from a uniform distribution.
np.random.uniform(low=-5,high=5,size=1)
```

Out[16]:

In [17]:

```
# Let's draw two random constants 50 times
for i in range(50):
    c1,c2 = np.random.uniform(low=-5,high=5,size=2)
    plot.vector(c1*a + c2*b,add_color='black',alpha=.2)
```

In [18]:

```
plot.show()
```

Eventually we could define the entire $\Re^2$ space with just these two vectors.

When we can describe a vector as a linear combination of other vectors, we say that vector is linearly dependent on them. Put differently, adding that vector to our set provides no information that we do not already have from the existing vectors in the set.

This concept is a little tricky at first, so let's examine this visually.

In [19]:

```
# Are these two vectors linearly dependent?
a = np.array([1,4])
b = np.array([2.5,10])
```

In [20]:

```
# Let's visualize them...
plot.clear()
plot.graph(30)
plot.vector(a,alpha=.4)
plot.vector(b,alpha=.4)
plot.show()
```

When visualized, the dependence becomes apparent: no linear combination of these two vectors will allow them to break into a new dimension.
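We can also confirm the dependence numerically. Stacking the two vectors as columns of a matrix, the matrix's rank counts how many independent directions they span (`np.linalg.matrix_rank` is standard NumPy):

```python
import numpy as np

a = np.array([1, 4])
b = np.array([2.5, 10])

# Stack the vectors as columns of a 2x2 matrix
M = np.column_stack([a, b])

# Rank 1 means the two columns span only a line, not the plane:
# b is just 2.5 * a, so it adds no new direction.
print(np.linalg.matrix_rank(M))  # 1
```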

To drive this point home, let's repeat the exercise from above.

In [21]:

```
# Let's draw two random constants 50 times
for i in range(50):
    c1,c2 = np.random.uniform(low=-5,high=5,size=2)
    plot.vector(c1*a + c2*b,add_color='black',alpha=.2)
```

In [22]:

```
plot.show()
```

Linear independence is an important concept because it tells us how much information is contained within a system.

As we've seen, linear combinations of linearly independent vectors can be used to locate any point within the space those vectors describe. In fact, every vector can itself be written as a linear combination of the unit vectors.

A **unit vector** (or standard basis vector) is a one-unit movement along a specific dimension in some $n$-dimensional space $\Re^n$. In $\Re^2$ the unit vectors are denoted $\hat{i}$ and $\hat{j}$ (with $\hat{k}$ added when referring to $\Re^3$).

$$\hat{i} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$ $$\hat{j} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

In [23]:

```
i = np.array([1,0])
j = np.array([0,1])
plot.clear()
plot.graph(8)
plot.vector(i,add_color="blue")
plot.vector(j,add_color="red")
plot.show()
```

Any specific vector can be treated as a linear combination of the unit vectors.

To get to the position $\begin{bmatrix} 2 \\ 3 \end{bmatrix}$, we could just scale and combine $\hat{i}$ and $\hat{j}$ accordingly.

$$2\hat{i} + 3\hat{j}= 2\begin{bmatrix} 1 \\ 0 \end{bmatrix} + 3\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$$

This concept of a unit vector helps us define the "**basis**" of our coordinate system. That is, unit vectors help us establish the fundamental units that describe positions within a specific space.

In [24]:

```
2*i+3*j
```

Out[24]:

In [25]:

```
cnt = 0
while cnt <= 50:
    c1,c2 = np.random.uniform(low=-5,high=5,size=2)
    plot.vector(c1*i + c2*j,add_color='black',alpha=.2)
    cnt += 1
plot.show()
```

As the figure above loosely demonstrates, a set of vectors **spans** a vector space if every vector in that vector space can be written as a linear combination of vectors from that set. A set of linearly independent vectors that span a vector space define a **basis** for that vector space.

The reason linear independence is so important is that we are interested in the minimum number of vectors needed to span a vector space, which tells us the dimension of that space. The **dimension** of a vector space is equal to the number of vectors in its basis.
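As a closing sketch, both ideas can be checked numerically: the rank of a matrix whose columns are our vectors gives the dimension of the space they span, and when those vectors form a basis, `np.linalg.solve` recovers the unique coefficients expressing any target vector in that basis.

```python
import numpy as np

a = np.array([2, 1])
b = np.array([1, 2])

# a and b are linearly independent, so they form a basis for R^2:
B = np.column_stack([a, b])
print(np.linalg.matrix_rank(B))   # 2 -- they span a 2-dimensional space

# Any target vector can be written as c1*a + c2*b; solve for (c1, c2)
target = np.array([5, 7])
c1, c2 = np.linalg.solve(B, target)
print(c1, c2)                     # 1.0 3.0
print(np.allclose(c1 * a + c2 * b, target))  # True
```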