Linear Algebra

Instructor: Prof. Gilbert Strang

Department: Mathematics

Year: Fall 2011

Level: Undergraduate

Week 1

01. The Geometry of Linear Equations

Key Concepts

  • The Problem: Understanding systems of linear equations geometrically
  • The Matrix: A compact way to represent linear equations
  • Vectors: Understanding linear combinations and their geometric meaning
  • Matrix Form: Writing equations as Ax = b
  • Dimensional Space: Extending concepts beyond 3D to n-dimensions

Important Points

Linear algebra provides a framework for solving systems of linear equations through geometric interpretation. The key is understanding how vectors combine and intersect in space, and how matrices represent these relationships systematically.

Linear combinations of vectors can span spaces, creating planes, lines, or entire dimensional spaces depending on their properties.

The matrix equation Ax = b represents a system where we're looking for a point x that satisfies all equations simultaneously.

Example 1: Geometric Interpretation

From Professor Strang's lecture, consider this system of linear equations:

2x - y = 0
x + y = 3

Geometric Analysis:

  • First equation (2x - y = 0): A line through the origin with slope 2
  • Second equation (x + y = 3): A line with slope -1 and y-intercept 3
  • Solution: The two lines intersect at the point (1, 2), so x = 1, y = 2 satisfies both equations

This example demonstrates how linear equations represent lines and their intersection represents the solution to the system.

Example 2: Column Picture

The same system viewed as column vectors:

x[2] + y[-1] = [0]
 [1]    [ 1]   [3]

This shows us that we're looking for the right combination of the column vectors [2;1] and [-1;1] to produce the vector [0;3].

As Professor Strang emphasizes, this column picture is fundamental to understanding linear algebra geometrically - we're finding the right linear combination of vectors to reach a specific point.
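
To make this concrete, here is a quick numerical check of both pictures (a minimal sketch using NumPy; it is an illustration, not part of the lecture):

import numpy as np

A = np.array([[2.0, -1.0],      # row picture: 2x - y = 0
              [1.0,  1.0]])     #              x + y = 3
b = np.array([0.0, 3.0])

x = np.linalg.solve(A, b)
print(x)                        # [1. 2.]  -> the intersection point (1, 2)

# column picture: 1*[2;1] + 2*[-1;1] reproduces the right-hand side [0;3]
print(x[0] * A[:, 0] + x[1] * A[:, 1])   # [0. 3.]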

Key Questions

Can we solve Ax = b for every b?

Not always. The solution exists only when b is in the column space of A (the span of A's columns).

Do the linear combinations of the columns fill 3-D space?

Only if the columns are linearly independent and there are three of them. The columns must point in different directions to span the full space.

When can the combinations go wrong?

Problems occur when the column vectors are linearly dependent (they lie along the same line or in the same plane), so their combinations cannot reach points outside that line or plane. For a square matrix this shows up as a zero determinant.

Dimensional Space

While we can visualize in 2D and 3D, the concepts extend to n-dimensional space. The same principles of linear independence, span, and solvability carry over to higher dimensions.

02. Elimination with Matrices

Key Concepts

  • Gaussian Elimination: Systematic method for solving systems of linear equations
  • Matrix Operations: Understanding how elimination steps can be represented as matrix operations
  • Upper Triangular Form: Converting matrices to a simpler form for solving equations
  • Back Substitution: Process of solving equations after elimination

Important Points

Elimination is a fundamental process in linear algebra that transforms a system of equations into an equivalent, easier-to-solve system. The process involves systematic steps to create zero entries below the diagonal of the matrix.

Each elimination step can be represented as a matrix multiplication, showing how linear algebra connects computational and theoretical aspects.

The final upper triangular form allows for straightforward back substitution to find solutions.

Example 1: Gaussian Elimination Process

From Professor Strang's lecture, consider this system of equations:

x + 2y + z = 2     (equation 1)
3x + 8y + z = 12   (equation 2)
0x + 4y + z = 2    (equation 3)

Step-by-Step Elimination:

  1. First elimination step: Multiply equation 1 by -3 and add it to equation 2:
        3x + 8y + z = 12
       -3x - 6y - 3z = -6
     _____________________
        0x + 2y - 2z = 6
  2. This gives us the new equation 2: 2y - 2z = 6 (equivalently, y - z = 3)
  3. Next, we continue with equation 3, which already has a zero in the first position: subtracting 2 times the new equation 2 from it leaves 5z = -10

As Professor Strang emphasizes, elimination is the systematic process of creating zeros below the pivot elements, transforming the system into an equivalent upper triangular form that's easier to solve.

Example 2: Matrix Representation

The same elimination process in matrix form:

[1 2 1 | 2]     [1 2 1 | 2]     [1 2 1 | 2]
[3 8 1 | 12] → [0 2 -2 | 6] → [0 2 -2 | 6]
[0 4 1 | 2]     [0 4 1 | 2]     [0 0 5 | -10]

Professor Strang shows how this matrix representation reveals the systematic nature of elimination. The pivots are 1, 2, and 5. The final upper triangular form allows for back substitution to find the solution: z = -2, y = 1, x = 2.
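
The same elimination can be sketched in a few lines of code (assuming NumPy; the variable names are just for illustration):

import numpy as np

# augmented matrix [A | b]
M = np.array([[1.0, 2.0, 1.0,  2.0],
              [3.0, 8.0, 1.0, 12.0],
              [0.0, 4.0, 1.0,  2.0]])

M[1] -= 3 * M[0]     # row 2 minus 3 * row 1  ->  [0 2 -2 | 6]
M[2] -= 2 * M[1]     # row 3 minus 2 * row 2  ->  [0 0  5 | -10]

# back substitution on the upper triangular system
z = M[2, 3] / M[2, 2]                                   # 5z = -10    -> z = -2
y = (M[1, 3] - M[1, 2] * z) / M[1, 1]                   # 2y - 2z = 6 -> y = 1
x = (M[0, 3] - M[0, 1] * y - M[0, 2] * z) / M[0, 0]     # x + 2y + z = 2 -> x = 2
print(x, y, z)       # 2.0 1.0 -2.0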

Key Questions

When does elimination fail?

Elimination fails when we encounter a zero pivot (division by zero) or when the system has no unique solution.

What makes a good pivot element?

A good pivot should be non-zero and preferably large in magnitude to minimize numerical errors in calculations.

How do we handle special cases?

Special cases like zero pivots may require row exchanges (pivoting) or indicate that the system has no solution or infinitely many solutions.

03. Multiplication and Inverse Matrices

Key Concepts

  • Matrix Multiplication: Understanding the process and properties
  • Inverse Matrices: Definition and conditions for existence
  • Matrix Properties: Associative and distributive laws
  • Applications: Solving systems using inverse matrices

Important Points

Matrix multiplication is not commutative (AB ≠ BA) but follows specific rules for combining rows and columns. The process involves taking dot products of rows with columns.

A matrix A is invertible if there exists a matrix A⁻¹ such that AA⁻¹ = A⁻¹A = I (identity matrix). Not all matrices have inverses.

The inverse matrix provides a direct way to solve equations: if Ax = b, then x = A⁻¹b (when A is invertible).

Example 1: Matrix Multiplication

From Professor Strang's lecture, consider this matrix multiplication:

[1 0] [2 5]
[3 4] [1 3]

Professor Strang demonstrates that we can view this multiplication in different ways:

  • Row perspective: The first row of the result is [1×2 + 0×1, 1×5 + 0×3] = [2, 5]
  • Column perspective: The first column of the result is 2[1;3] + 1[0;4] = [2;10]
  • The complete result is [2 5; 10 27]

Example 2: Inverse Matrix

For the matrix from the lecture:

A = [1 2]
    [3 7]

A⁻¹ = [7  -2]
      [-3  1]

Professor Strang verifies this is the inverse by showing:

AA⁻¹ = [1 2][7  -2] = [1 0]
       [3 7][-3  1]   [0 1]

This demonstrates a key concept from the lecture: when a matrix is invertible, we can solve Ax = b directly as x = A⁻¹b. The determinant of A is 1×7 - 2×3 = 1, which confirms A is invertible.
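
A short numerical check of this example (a minimal sketch assuming NumPy; the right-hand side b below is an arbitrary choice for illustration):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 7.0]])

A_inv = np.linalg.inv(A)
print(A_inv)                # [[ 7. -2.]
                            #  [-3.  1.]]
print(A @ A_inv)            # the 2x2 identity matrix

b = np.array([1.0, 5.0])    # arbitrary b, only to illustrate x = A^(-1) b
print(A_inv @ b)            # same result as np.linalg.solve(A, b)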

Key Questions

When does a matrix have an inverse?

A matrix has an inverse if and only if it is square and its determinant is not zero (non-singular).

Why isn't matrix multiplication commutative?

The dimensions and arrangement of entries make AB and BA different in general, though they may be equal in special cases.

What are the practical applications?

Inverse matrices are used in computer graphics, economics, and solving complex systems of equations efficiently.

04. Factorization into A = LU

Key Concepts

  • LU Decomposition: Breaking down a matrix into lower and upper triangular matrices
  • Triangular Matrices: Properties and advantages in solving systems
  • Factorization Process: Understanding how elimination leads to LU form
  • Computational Efficiency: Benefits of LU factorization in solving multiple systems

Important Points

LU factorization represents the elimination process as a product of two triangular matrices: L (lower triangular) and U (upper triangular).

The factorization A = LU stores the elimination steps efficiently and allows solving Ax = b by first solving Ly = b, then Ux = y.

This method is particularly efficient when solving multiple systems with the same coefficient matrix but different right-hand sides.

Example 1: LU Decomposition

From Professor Strang's lecture, consider this matrix factorization:

A = [1 2 1]
    [3 8 1]
    [0 4 1]

L = [1   0   0]
    [3   1   0]
    [0   2   1]

U = [1 2 1]
    [0 2 -2]
    [0 0 5]

Professor Strang demonstrates that this factorization captures the elimination process:

  • L contains the multipliers used during elimination (the numbers we multiply rows by)
  • The 3 in L comes from eliminating the first entry in row 2
  • The 2 in L comes from eliminating the second entry in row 3
  • U is the upper triangular matrix resulting from elimination
  • When we multiply L and U, we get back the original matrix A

Example 2: Solving Systems Using LU

Professor Strang explains how LU factorization makes solving multiple systems efficient:

To solve Ax = b using LU factorization:

  1. Factor A = LU (done once)
  2. Solve Ly = b (forward substitution)
  3. Solve Ux = y (back substitution)

For example, with b = [2; 12; 2] (the same right-hand side as the elimination example in Lecture 02), we first solve Ly = b to get y = [2; 6; -10], then solve Ux = y to get x = [2; 1; -2]. This is especially efficient when solving multiple systems with the same coefficient matrix A but different right-hand sides b.
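
A minimal sketch of the two triangular solves (using NumPy; np.linalg.solve does not exploit the triangular structure, so a dedicated triangular solver would be used in practice):

import numpy as np

L = np.array([[1.0, 0.0, 0.0],
              [3.0, 1.0, 0.0],
              [0.0, 2.0, 1.0]])
U = np.array([[1.0, 2.0,  1.0],
              [0.0, 2.0, -2.0],
              [0.0, 0.0,  5.0]])
b = np.array([2.0, 12.0, 2.0])

y = np.linalg.solve(L, b)           # forward substitution: y = [2, 6, -10]
x = np.linalg.solve(U, y)           # back substitution:    x = [2, 1, -2]
print(np.allclose(L @ U @ x, b))    # True, since A = LU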

Key Questions

When is LU factorization possible?

LU factorization is possible when all leading submatrices have non-zero determinants and no row exchanges are needed.

What are the advantages of LU factorization?

It's more efficient than computing the inverse, especially for large systems or when solving multiple systems with the same coefficient matrix.

How does pivoting affect LU factorization?

When pivoting is needed, we get PLU factorization, where P is a permutation matrix representing row exchanges.

05. Transposes, Permutations, Spaces R^n

Key Concepts

  • Matrix Transpose: Properties and operations with A^T
  • Permutation Matrices: Understanding row and column exchanges
  • Vector Spaces: Properties and operations in R^n
  • Subspaces: Understanding subsets that preserve vector operations

Important Points

The transpose A^T is formed by converting rows to columns (or vice versa). For any matrix A, (A^T)^T = A and (AB)^T = B^TA^T.

Permutation matrices represent row exchanges and are invertible, with their inverse being their transpose.

R^n is a vector space with specific properties like closure under addition and scalar multiplication. Understanding its structure is crucial for linear algebra.

Example 1: Transpose Properties

From Professor Strang's lecture, consider this matrix and its transpose:

A = [1 2 3]
    [4 5 6]

A^T = [1 4]
      [2 5]
      [3 6]

Professor Strang demonstrates key properties:

  • (A^T)^T = A: The transpose of the transpose returns the original matrix
  • (AB)^T = B^T A^T: The transpose of a product reverses the order
  • For symmetric matrices, A = A^T (like A = [1 2; 2 3])

Example 2: Permutation Matrices

A key permutation matrix from the lecture:

P = [0 1]
    [1 0]

When P multiplies a matrix A, it exchanges the rows of A. Professor Strang shows that permutation matrices are invertible, and their inverse is their transpose: P^(-1) = P^T.
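
A short NumPy check of these facts (a sketch for illustration; the matrix B below is an assumed example, not from the lecture):

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])          # assumed 3x2 matrix so that AB is defined

print(np.array_equal((A @ B).T, B.T @ A.T))    # True: (AB)^T = B^T A^T

P = np.array([[0, 1],
              [1, 0]])
print(P @ A)                                   # the two rows of A exchanged
print(np.array_equal(np.linalg.inv(P), P.T))   # True: P^(-1) = P^T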

Key Questions

What are the properties of transpose matrices?

Transpose matrices have special properties in terms of multiplication, addition, and inverse operations. For symmetric matrices, A = A^T.

How do permutations affect matrix operations?

Permutations preserve the essential properties of matrices while rearranging rows or columns, useful in optimization and solving systems.

What defines a vector space?

A vector space must satisfy eight axioms including closure under addition and scalar multiplication, existence of zero vector, and additive inverses.

06. Column Space and Nullspace

Key Concepts

  • Column Space: All possible linear combinations of matrix columns
  • Nullspace: All solutions to Ax = 0
  • Rank: Dimension of the column space
  • Fundamental Theorem: Relationship between dimensions

Important Points

The column space C(A) consists of all vectors b such that Ax = b has a solution. It represents all possible outputs of the matrix transformation.

The nullspace N(A) contains all vectors x that satisfy Ax = 0. These vectors reveal the linear dependencies among the columns.

The rank of a matrix determines its column space dimension and is crucial in understanding system solvability.

Example 1: Column Space and Nullspace

From Professor Strang's lecture, consider this matrix:

A = [1  2]
    [2  4]
    [3  6]

Professor Strang analyzes this matrix in detail:

  • Column Space: All linear combinations of the columns [1;2;3] and [2;4;6]
  • Since the second column is exactly twice the first column, the column space is just a line in R³
  • Nullspace: Contains all vectors x where Ax = 0
  • The vector x = [-2;1] is in the nullspace since A[-2;1] = -2[1;2;3] + 1[2;4;6] = [0;0;0]
  • Rank = 1 (only one linearly independent column)
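
These facts can be confirmed numerically (a minimal sketch, assuming NumPy and SymPy are installed; not part of the lecture):

import numpy as np
import sympy as sp

A = np.array([[1, 2],
              [2, 4],
              [3, 6]])

print(np.linalg.matrix_rank(A))     # 1: only one independent column
print(sp.Matrix(A).nullspace())     # [Matrix([[-2], [1]])], the nullspace direction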

Example 2: Fundamental Spaces

Professor Strang emphasizes the fundamental relationship:

For this 3×2 matrix with rank 1:

  • dim(Column Space) = rank = 1
  • dim(Nullspace) = n - rank = 2 - 1 = 1
  • dim(Row Space) = rank = 1

Key Questions

How do we find the column space?

The column space consists of all linear combinations of the columns. Reduced row echelon form helps identify a basis.

What does the nullspace tell us?

The nullspace reveals linear dependencies and helps understand when the system Ax = b has no solution or infinitely many solutions.

How are rank and nullspace related?

The Rank-Nullity Theorem states that rank(A) + dim(N(A)) = n, where n is the number of columns in A.

07. Solving Ax = 0: Pivot Variables, Special Solutions

Key Concepts

  • Pivot Variables: Key variables in solving systems
  • Free Variables: Variables that can take any value
  • Special Solutions: Basic solutions to homogeneous equations
  • Reduced Row Echelon Form: Standard form for solving systems

Important Points

When solving Ax = 0, pivot variables are determined by the leading entries in the reduced row echelon form. Free variables can be assigned any value.

Special solutions are found by setting one free variable to 1 and others to 0, creating a basis for the nullspace.

The number of free variables equals the dimension of the nullspace, while the number of pivot variables equals the rank.

Example: Finding the Nullspace

From Professor Strang's lecture, consider this system Ax = 0:

[1 2 2 2][x1]   [0]
[2 4 6 8][x2] = [0]
[3 6 8 10][x3]   [0]
           [x4]

After row reduction to echelon form:

[1 2 2 2]
[0 0 2 4]
[0 0 0 0]

Professor Strang identifies:

  • Pivot variables: x1 and x3 (corresponding to columns 1 and 3)
  • Free variables: x2 and x4 (corresponding to columns 2 and 4)
  • The rank is 2, so the nullspace dimension is 4-2 = 2

Special solutions (basis for nullspace):

x2 = 1, x4 = 0 gives x1 = -2, x3 = 0, so x = [-2, 1, 0, 0]
x2 = 0, x4 = 1 gives x1 = 2, x3 = -2, so x = [2, 0, -2, 1]

The complete nullspace is all linear combinations of these two special solutions.
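
As a sketch, SymPy's nullspace routine reproduces these special solutions directly (assuming SymPy is available):

import sympy as sp

A = sp.Matrix([[1, 2, 2, 2],
               [2, 4, 6, 8],
               [3, 6, 8, 10]])

for v in A.nullspace():     # one basis vector per free variable
    print(v.T)              # [-2, 1, 0, 0] and [2, 0, -2, 1]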

Key Questions

How do we identify pivot variables?

Pivot variables correspond to leading 1's in the reduced row echelon form. They are determined by the rank of the matrix.

What is the significance of free variables?

Free variables generate the nullspace. Each free variable leads to a special solution, and their linear combinations give all solutions.

How do we find special solutions?

Set one free variable to 1 and others to 0, then solve for pivot variables. Repeat for each free variable to find a basis for the nullspace.

08. Solving Ax = b: Row Reduced Form R

Key Concepts

  • Row Reduced Form: Standard form for solving linear systems
  • Complete Solution: Particular + nullspace solutions
  • Solvability Conditions: When Ax = b has solutions
  • Rank Condition: Relationship to existence of solutions

Important Points

The row reduced form R reveals whether Ax = b has solutions and helps find them efficiently. The complete solution combines a particular solution with the nullspace solution.

Solvability depends on whether b is in the column space of A. This can be checked using the rank condition.

When solutions exist, there may be either a unique solution or infinitely many solutions, determined by the presence of free variables.

Example: Solving Ax = b

From Professor Strang's lecture, consider this system:

[1 2 3][x]   [6]
[2 4 6][y] = [12]
[3 6 9][z]   [18]

Professor Strang shows the row reduced form:

[1 2 3 | 6]
[0 0 0 | 0]
[0 0 0 | 0]

Analysis of the solution:

  • The rank of A is 1 (only one pivot)
  • The augmented matrix [A b] also has rank 1
  • Since rank(A) = rank([A b]), the system is consistent and has solutions
  • The equation becomes: x + 2y + 3z = 6
  • With y and z as free variables, we can write the complete solution as:
x = 6 - 2y - 3z
y = any value
z = any value

Professor Strang emphasizes that this represents a plane of solutions in R³, where any point on the plane satisfies the original system.
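
A short numerical illustration of this solution set (a sketch assuming NumPy; the particular solution with y = z = 0 and the coefficients c1, c2 are arbitrary choices):

import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],
              [3, 6, 9]])
b = np.array([6, 12, 18])

x_p = np.array([6, 0, 0])      # particular solution: set y = z = 0
n1 = np.array([-2, 1, 0])      # nullspace direction from the free variable y
n2 = np.array([-3, 0, 1])      # nullspace direction from the free variable z

c1, c2 = 1.5, -2.0             # any coefficients give another solution
x = x_p + c1 * n1 + c2 * n2
print(np.allclose(A @ x, b))   # True: every point of the plane solves Ax = b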

Key Questions

When does Ax = b have a solution?

A solution exists if and only if b is in the column space of A, which occurs when the rank of [A b] equals the rank of A.

How do we find the complete solution?

Find a particular solution xp where Axp = b, then add any solution xn from the nullspace (Axn = 0). The complete solution is x = xp + xn.

What role does rank play?

The rank determines both existence (through the rank condition) and uniqueness (through the number of free variables) of solutions.

09. Independence, Basis, and Dimension

Key Concepts

  • Linear Independence: Vectors that cannot be expressed as combinations of others
  • Basis: Minimal spanning set of vectors
  • Dimension: Number of vectors in a basis
  • Coordinate Systems: Representing vectors in terms of a basis

Important Points

Vectors are linearly independent if none can be written as a linear combination of the others. This is equivalent to having only the trivial solution to c₁v₁ + c₂v₂ + ... + cₙvₙ = 0.

A basis is a linearly independent set that spans the space. Every vector in the space can be written uniquely as a linear combination of basis vectors.

The dimension of a space is the number of vectors in any basis. This is a fundamental invariant of the space.

Example 1: Testing Linear Independence

From Professor Strang's lecture, consider these vectors:

v₁ = [1]
     [1]
     [2]

v₂ = [2]
     [2]
     [5]

v₃ = [3]
     [1]
     [8]

To test if these vectors are linearly independent, we ask: Does c₁v₁ + c₂v₂ + c₃v₃ = 0 have only the trivial solution?

Professor Strang demonstrates this by creating a matrix with these vectors as columns:

A = [1 2 3]
    [1 2 1]
    [2 5 8]

After row reduction:

[1 2 3]
[0 0 -2]
[0 1 2]

After exchanging the last two rows, the echelon form has 3 pivots, one in each column, so these vectors are linearly independent.
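
A quick rank check confirms this (a minimal sketch assuming NumPy):

import numpy as np

A = np.array([[1, 2, 3],
              [1, 2, 1],
              [2, 5, 8]])

print(np.linalg.matrix_rank(A))   # 3: full rank, so v1, v2, v3 are independent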

Example 2: Finding a Basis

Professor Strang illustrates how to find a basis for the column space of:

A = [1 2 3]
    [1 2 3]
    [2 5 8]

After row reduction to find pivot columns:

[1 2 3]
[0 0 0]
[0 1 2]

The pivot columns are 1 and 2. Going back to the original matrix, the basis for the column space is:

[1]    [2]
[1] and [2]
[2]    [5]

Professor Strang emphasizes that this basis has exactly r = rank(A) = 2 vectors, confirming that dim(C(A)) = rank(A).

Key Questions

How do we test for linear independence?

Form a matrix with the vectors as columns and reduce to echelon form. The vectors are independent if and only if each column has a pivot.

Why is basis important?

A basis provides a coordinate system for the space, allowing us to represent any vector uniquely as a combination of basis vectors.

What determines dimension?

The dimension is determined by the number of vectors needed to span the space while maintaining linear independence.

10. The Four Fundamental Subspaces

Key Concepts

  • The Four Fundamental Subspaces: Column space C(A), Nullspace N(A), Row space C(A^T), Left nullspace N(A^T)
  • Dimension Relations: dim C(A) = dim C(A^T) = rank(A)
  • Fundamental Theorem: dim N(A) + rank(A) = n (number of columns)
  • Solution Analysis: Using subspaces to analyze Ax = b

Important Points

  • The column space contains all possible outputs of the matrix transformation
  • The nullspace reveals all solutions to Ax = 0
  • The row space is orthogonal to the nullspace
  • The left nullspace contains all vectors y where A^T y = 0

Key Questions and Answers

What is the relationship between the four subspaces?

The row space and nullspace are orthogonal complements in R^n, while the column space and left nullspace are orthogonal complements in R^m.

How do we find these subspaces?

Column space: Look at the span of the columns. Nullspace: Solve Ax = 0. Row space: Take the transpose and find its column space. Left nullspace: Solve A^T y = 0.

Why are these subspaces fundamental?

They completely characterize the behavior of a linear transformation, including its solutions, kernel, and image.

Example 1: The Four Fundamental Subspaces

From Professor Strang's lecture, consider this matrix:

A = [1 2 3]
    [2 4 6]

Professor Strang analyzes each of the four subspaces:

Column Space C(A)

The column space consists of all linear combinations of the columns:

c₁[1] + c₂[2] + c₃[3] = [?]
  [2]    [4]    [6]   [?]

Since the second and third columns are multiples of the first (2 times and 3 times [1;2]), the column space is just a line in R². Its basis is {[1;2]} and dim(C(A)) = 1.

Nullspace N(A)

The nullspace contains all solutions to Ax = 0:

[1 2 3][x₁]   [0]
[2 4 6][x₂] = [0]
       [x₃]

After row reduction:

[1 2 3]
[0 0 0]

We have one pivot variable (x₁) and two free variables (x₂, x₃). Setting x₂ = s and x₃ = t:

x₁ = -2s - 3t
x₂ = s
x₃ = t

The nullspace basis vectors are:

[-2]    [-3]
[ 1] and [ 0]
[ 0]    [ 1]

So dim(N(A)) = 2, and we confirm that dim(C(A)) + dim(N(A)) = 1 + 2 = 3 = number of columns.

Row Space C(A^T)

The row space is the span of the rows of A:

span{[1,2,3], [2,4,6]}

Since the second row is twice the first row, the row space has dimension 1 with basis {[1,2,3]}.

Left Nullspace N(A^T)

The left nullspace contains all vectors y where A^T y = 0:

[1 2][y₁] = [0]
[2 4][y₂]   [0]
[3 6]       [0]

This has the solution y = t[-2, 1] for any scalar t, so dim(N(A^T)) = 1.

Professor Strang emphasizes the fundamental relationships:

  • dim(C(A)) = dim(C(A^T)) = rank(A) = 1
  • dim(N(A)) = n - rank(A) = 3 - 1 = 2
  • dim(N(A^T)) = m - rank(A) = 2 - 1 = 1
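
All four dimensions can be confirmed with SymPy (a minimal sketch; SymPy is assumed to be installed):

import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6]])

print(len(A.columnspace()))     # 1 = dim C(A)
print(len(A.nullspace()))       # 2 = dim N(A)
print(len(A.rowspace()))        # 1 = dim C(A^T)
print(len(A.T.nullspace()))     # 1 = dim N(A^T)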

11. Matrix Spaces; Rank 1; Small World Graphs

Key Concepts

  • Matrix Spaces: Vector spaces of matrices with specific dimensions
  • Rank-1 Matrices: Fundamental building blocks of higher rank matrices
  • Small World Graphs: Network structures with specific connectivity patterns
  • Matrix Decomposition: Breaking down matrices into simpler components

Important Points

  • Every rank-1 matrix is the outer product of two vectors
  • Higher rank matrices can be expressed as sums of rank-1 matrices
  • Small world networks combine local and long-range connections
  • The dimension of a matrix space depends on the matrix size (for example, the space of all 3×3 matrices has dimension 9)

Key Questions and Answers

What characterizes a rank-1 matrix?

A rank-1 matrix can always be written as the outer product uv^T of two vectors, where all columns are multiples of a single vector.

How do small world graphs relate to linear algebra?

The adjacency and connectivity matrices of small world graphs can be analyzed using matrix operations to understand network properties and information flow.

Why are matrix spaces important?

They provide a framework for understanding linear transformations between vector spaces and help analyze complex systems through matrix representations.

Example 1: Rank-1 Matrices

From Professor Strang's lecture, he demonstrates how rank-1 matrices are formed:

A = uv^T = [1] [2 3 4] = [2  3  4]
           [2]           [4  6  8]

Professor Strang explains that every rank-1 matrix can be written as the outer product of two vectors. In this case:

  • u = [1; 2] is a column vector
  • v^T = [2 3 4] is a row vector
  • Their outer product creates a 2×3 matrix where each column is a multiple of u
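
In code, the outer product and the rank claim look like this (a minimal sketch assuming NumPy):

import numpy as np

u = np.array([1, 2])
v = np.array([2, 3, 4])

A = np.outer(u, v)                # [[2 3 4]
print(A)                          #  [4 6 8]]
print(np.linalg.matrix_rank(A))   # 1: every column is a multiple of u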

Example 2: Matrix Decomposition

Professor Strang shows how higher-rank matrices can be decomposed into sums of rank-1 matrices:

A = [4 1]
    [2 7]

This matrix can be written as the sum of two rank-1 matrices:

A = [4 0] + [0 1] = [4] [1 0] + [1] [0 1]
    [2 0]   [0 7]   [2]         [7]

Professor Strang emphasizes that this decomposition is fundamental to understanding singular value decomposition (SVD) and other matrix factorizations.

Example 3: Small World Graphs

Professor Strang illustrates small world networks with a simple example:

Consider a ring of nodes where each node is connected to its immediate neighbors. The adjacency matrix has 1's on the sub- and super-diagonals (plus the two corner entries that close the ring). Adding just a few "long-range" connections dramatically reduces the average path length between nodes.

He demonstrates how the powers of the adjacency matrix A reveal connectivity patterns:

  • A^1 shows direct connections
  • A^2 shows two-step connections
  • A^k shows k-step connections

This matrix analysis helps explain the "six degrees of separation" phenomenon in social networks.
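
A tiny sketch of this idea (an assumed 6-node example, not from the lecture): build the adjacency matrix of a ring, add one long-range shortcut, and look at a power of A.

import numpy as np

n = 6
A = np.zeros((n, n), dtype=int)
for i in range(n):                 # ring: each node linked to its two neighbors
    A[i, (i + 1) % n] = 1
    A[(i + 1) % n, i] = 1
A[0, 3] = A[3, 0] = 1              # one long-range "shortcut" edge

A2 = A @ A                         # entry (i, j) counts two-step paths from i to j
print(A2[0])                       # node 0 reaches more of the ring in two steps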

12. Graphs, Networks, Incidence Matrices

Key Concepts

  • Graph Representation: Using matrices to represent network structure
  • Incidence Matrices: Encoding node-edge relationships
  • Network Analysis: Understanding connectivity and flow
  • Graph Properties: Paths, cycles, and components

Important Points

  • Incidence matrices show how nodes connect through edges
  • The nullspace reveals cycles in the graph
  • Connected components appear in the matrix structure
  • Flow conservation is represented by matrix equations

Key Questions and Answers

How do incidence matrices represent graphs?

Each row represents a node, each column an edge. Entries are typically +1, -1 for edge direction, and 0 for no connection.

What can we learn from the nullspace?

The nullspace basis vectors correspond to fundamental cycles in the graph, helping identify independent circular paths.

How do we analyze network flow?

Flow conservation at nodes is expressed through matrix equations, where the incidence matrix multiplied by the flow vector equals the supply/demand vector.

Example 1: Graph Representation with Incidence Matrices

From Professor Strang's lecture, he analyzes a simple graph with 4 nodes and 5 edges:

    1 --- 2
    |\    |
    | \   |
    |  \  |
    |   \ |
    4 --- 3

Professor Strang constructs the incidence matrix A where rows represent nodes and columns represent edges:

        e₁  e₂  e₃  e₄  e₅
node 1  1   1   1   0   0
node 2 -1   0   0   1   0
node 3  0  -1   0  -1   1
node 4  0   0  -1   0  -1

He explains that each column has exactly two non-zero entries (1 and -1) representing the nodes connected by that edge. The signs indicate direction (from +1 to -1).
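
A small check of this structure (a minimal sketch assuming NumPy, with the same node and edge ordering as above):

import numpy as np

# rows = nodes 1..4, columns = edges e1..e5
A = np.array([[ 1,  1,  1,  0,  0],
              [-1,  0,  0,  1,  0],
              [ 0, -1,  0, -1,  1],
              [ 0,  0, -1,  0, -1]])

print(A.sum(axis=0))               # [0 0 0 0 0]: every edge has one +1 and one -1
print(np.linalg.matrix_rank(A))    # 3 = (number of nodes) - (connected components)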

Example 2: Nullspace and Fundamental Cycles

Professor Strang demonstrates that the nullspace of the incidence matrix reveals the cycles in the graph:

For the graph above, he finds a basis for the nullspace of A:

x₁ = [1, -1, 0, 1, 0]ᵀ  (cycle around nodes 1→2→3→1, using e₁, e₄, and e₂ traversed backwards)
x₂ = [0, 1, -1, 0, 1]ᵀ  (cycle around nodes 1→3→4→1, using e₂, e₅, and e₃ traversed backwards)

Each nullspace basis vector corresponds to a fundamental cycle in the graph. The nonzero entries indicate which edges form the cycle, and the sign gives the direction in which each edge is traversed around the loop.

Example 3: Network Flow Analysis

Professor Strang shows how to analyze flow in a network:

If f is a vector of flows on each edge, then Af represents the net flow at each node. For flow conservation:

Af = s

Where s is the supply/demand vector (positive for sources, negative for sinks, zero for transit nodes).

For example, with s = [1, 0, 0, -1]ᵀ (flow from node 1 to node 4), he finds a valid flow vector f that satisfies the conservation law.
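
For this s, one simple valid flow sends a single unit along edge e₃, which runs directly from node 1 to node 4 (a sketch of the check, assuming NumPy and the incidence matrix above; the lecture may use a different flow):

import numpy as np

A = np.array([[ 1,  1,  1,  0,  0],
              [-1,  0,  0,  1,  0],
              [ 0, -1,  0, -1,  1],
              [ 0,  0, -1,  0, -1]])

s = np.array([1, 0, 0, -1])        # source at node 1, sink at node 4
f = np.array([0, 0, 1, 0, 0])      # one unit of flow on edge e3
print(np.array_equal(A @ f, s))    # True: conservation Af = s holds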

13. Quiz 1 Review

Comprehensive Review

1. The Geometry of Linear Equations
  • Visualization of linear equations in 2D and 3D
  • Geometric interpretation of solution sets
  • Row picture vs. column picture
  • Systems with unique solutions, no solutions, and infinite solutions
2. Elimination with Matrices
  • Gaussian elimination process
  • Upper triangular form and back substitution
  • Matrix operations in elimination
  • Permutation matrices and their role
3. Matrix Operations
  • Matrix multiplication and its properties
  • Inverse matrices and their computation
  • Transpose properties
  • Matrix factorization (LU decomposition)
4. Vector Spaces and Subspaces
  • Vector space axioms and properties
  • Subspace criteria and examples
  • Span and linear combinations
  • Linear independence vs. dependence
5. Vector Spaces and Linear Transformations
  • Linear transformations and their properties
  • Matrix representation of transformations
  • One-to-one and onto mappings
  • Kernel and range of transformations
6. Column Space and Nullspace
  • Column space: C(A) and its properties
  • Nullspace: N(A) and its significance
  • Rank and dimension relationships
  • The Rank-Nullity theorem
7. Solving Ax = 0
  • Pivot variables and free variables
  • Special solutions and their construction
  • Reduced row echelon form (RREF)
  • Complete solution set description
Practice Problems

Example 1: Complete Solution to Homogeneous System

From Professor Strang's review, he works through this example:

Find the complete solution to Ax = 0 where:

A = [1  2  2  2]
    [2  4  6  8]
    [3  6  8 10]

Professor Strang demonstrates the systematic approach:

  1. Perform row reduction to get the echelon form:
  2. [1  2  2  2]
    [0  0  2  4]
    [0  0  0  0]
  3. Identify pivot variables (x₁, x₃) and free variables (x₂, x₄)
  4. Express pivot variables in terms of free variables:
  5. x₃ = -2x₄  (from row 2: 2x₃ + 4x₄ = 0)
    x₁ = -2x₂ - 2x₃ - 2x₄  (from row 1)
  6. Substitute x₃ = -2x₄ back to get:
  7. x₁ = -2x₂ - 2(-2x₄) - 2x₄ = -2x₂ + 2x₄
    x₂ = x₂ (free)
    x₃ = -2x₄
    x₄ = x₄ (free)
  8. Write the complete solution as:
  9. x = x₂[-2] + x₄[ 2]
           [ 1]      [ 0]
           [ 0]      [-2]
           [ 0]      [ 1]

Professor Strang emphasizes that this represents all possible solutions to the system, with the nullspace dimension = 4 - 2 = 2 (number of columns minus rank).

Example 2: Basis Verification

From the review lecture, Professor Strang examines this problem:

Determine if these vectors form a basis for R³:

v₁ = [1]
     [2]
     [3]

v₂ = [0]
     [1]
     [1]

v₃ = [2]
     [5]
     [7]

Professor Strang's approach:

  1. Check if the vectors are linearly independent by forming the matrix:
  2. [1  0  2]
    [2  1  5]
    [3  1  7]
  3. Row reduce to determine rank:
  4. [1  0  2]
    [0  1  1]
    [0  0  0]
  5. Since the rank is 2 (only 2 pivots), the vectors are not linearly independent
  6. Therefore, they cannot form a basis for R³, which requires 3 linearly independent vectors

Professor Strang shows that v₃ = 2v₁ + v₂, confirming the linear dependence.
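
A two-line check of this dependence (a minimal sketch assuming NumPy):

import numpy as np

v1, v2, v3 = np.array([1, 2, 3]), np.array([0, 1, 1]), np.array([2, 5, 7])

print(np.array_equal(2 * v1 + v2, v3))                        # True: v3 = 2*v1 + v2
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))   # 2, so not a basis of R^3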

Example 3: Four Fundamental Subspaces

Professor Strang reviews this key example from earlier lectures:

A = [1  0  0]
    [1  1  0]
    [1  1  1]

He identifies all four fundamental subspaces:

  • Column Space: span{[1;1;1], [0;1;1], [0;0;1]} = R³ (full column space)
  • Nullspace: {0} (only the zero vector, since A has full column rank)
  • Row Space: span{[1,0,0], [1,1,0], [1,1,1]} = R³ (full row space)
  • Left Nullspace: {0} (only the zero vector, since A has full row rank)

Professor Strang emphasizes that this triangular matrix is invertible, with all four fundamental subspaces having their maximum possible dimensions.

Key Theorems to Remember
  • Rank-Nullity Theorem: rank(A) + dim(N(A)) = n
  • Dimension Theorem for subspaces
  • Invertible Matrix Theorem conditions
  • Basis Theorem for vector spaces
Common Mistakes to Avoid
  • Forgetting to check all subspace properties
  • Incorrect row reduction in elimination
  • Misidentifying pivot columns
  • Overlooking the importance of linear independence
Additional Topics
  • Orthogonality and Projections
  • Eigenvalues and Eigenvectors
  • Determinants and their properties
  • Symmetric matrices and orthogonal matrices
Study Tips
  • Focus on understanding concepts geometrically
  • Practice solving problems systematically
  • Connect different topics and their relationships
  • Review definitions and theorems regularly