
The Mathematics of Complex Fourier Series and Their Application

3Blue1Brown
Published on Mar 14, 2026


#FourierSeries #ComplexNumbers #HeatEquation #DifferentialEquations #SignalProcessing #MathematicalAnimation

This document explores the mathematical principles behind complex Fourier series, demonstrating their application in animation and their historical significance in solving differential equations like the heat equation.

Introduction to Complex Fourier Series

A complex Fourier series represents a function as a sum of rotating vectors. Each vector rotates at a constant integer frequency, and their combined "tip-to-tail" sum traces out a specific shape over time. By adjusting the initial size and angle of each vector, virtually any desired shape can be drawn.

Consider an animation with 300 rotating arrows. Individually, each arrow's motion is simple: rotation at a steady rate. However, their collective sum generates intricate and complex patterns. This emergent complexity, despite the underlying clockwork rigidity of individual motions, is precisely what Fourier series enable us to describe and control mathematically. By tuning only the starting conditions (initial size and angle) of these vectors, a "swarm" of rotations can conspire to draw any shape, provided a sufficient number of vectors are used. The remarkable aspect is that the underlying formula for this phenomenon is incredibly concise.

Fourier series are often introduced in terms of real-valued functions decomposed into sums of sine waves. This is a special case of the more general rotating vector phenomenon, which we will build up to.

Historical Context: Fourier and the Heat Equation

The development of Fourier series originated from Fourier's work on the heat equation, a partial differential equation (PDE) that describes how temperature distribution evolves over time. This equation also models many other phenomena beyond heat transfer.

Solving the Heat Equation with Cosine Waves

While directly solving the heat equation for an arbitrary heat distribution is challenging, a simple solution exists if the initial function resembles a cosine wave, with its frequency tuned to ensure flat endpoints (a specific boundary condition). Over time, these waves are scaled down exponentially, with higher-frequency waves decaying faster.
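As a concrete sketch of such a solution, the following Python snippet builds one exponentially decaying cosine wave and checks numerically (via finite differences) that it satisfies the heat equation. The diffusion constant, rod length, and mode number are arbitrary illustrative choices, not values from the video:

```python
import numpy as np

# Arbitrary illustrative parameters: diffusion constant, rod length, mode number.
alpha, L, n = 0.25, 1.0, 3

def u(x, t):
    # Cosine mode with flat (zero-slope) endpoints at x = 0 and x = L,
    # scaled down exponentially over time; the decay rate grows like n**2,
    # so higher-frequency waves die out faster.
    return np.cos(n * np.pi * x / L) * np.exp(-alpha * (n * np.pi / L) ** 2 * t)

# Finite-difference check of  du/dt = alpha * d2u/dx2  at one sample point.
x0, t0, h = 0.3, 0.5, 1e-4
du_dt   = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
d2u_dx2 = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
print(abs(du_dt - alpha * d2u_dx2))  # ~0, up to finite-difference error
```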

The heat equation is a linear equation. This means:

  • If $S_1$ and $S_2$ are two solutions, then $S_1 + S_2$ is also a solution.
  • Solutions can be scaled by a constant: if $S$ is a solution, then $kS$ (where $k$ is a constant) is also a solution.

This linearity is crucial. It allows us to construct solutions for new, tailor-made initial conditions by taking an infinite family of exponentially decaying cosine wave solutions, scaling them by custom constants, and combining them.
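Both linearity properties can be confirmed on a discretized version of the equation. A minimal sketch, where the grid sizes and random initial profiles are arbitrary choices for illustration:

```python
import numpy as np

alpha, dx, dt = 1.0, 0.1, 0.001   # arbitrary discretization parameters

def heat_step(u):
    # One explicit finite-difference step of u_t = alpha * u_xx;
    # endpoints are left fixed for simplicity.
    new = u.copy()
    new[1:-1] += alpha * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return new

rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal(11), rng.standard_normal(11)

# Linearity: evolving a combination equals combining the evolutions.
lhs = heat_step(3 * u1 + 5 * u2)
rhs = 3 * heat_step(u1) + 5 * heat_step(u2)
print(np.allclose(lhs, rhs))  # True
```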

An important observation is that when these waves are combined, the higher-frequency components decay faster. Consequently, the sum tends to "smooth out" over time as these high-frequency terms quickly approach zero, leaving the low-frequency terms to dominate. This difference in decay rates for different frequency components captures the complexity in the evolution of the heat distribution.

Fourier's Insight: Representing Arbitrary Functions

Fourier's groundbreaking insight was to ask how any initial distribution, even seemingly non-wavy ones, could be expressed as a sum of sine waves. This question seemed absurd at the time because most real-world distributions do not resemble simple waves.

Consider the example of two rods, each at a uniform temperature, brought into contact. If the left rod is at 1 degree and the right at -1 degree, and the total length is $L = 1$, the initial temperature distribution is a step function. This function is flat, discontinuous, and clearly not a sine wave or a sum of sine waves.

Fourier, however, boldly proposed that even such a function could be expressed as an infinite sum of sine waves. Furthermore, these waves must satisfy specific boundary conditions. For instance, if the endpoints are fixed, one would use sine functions; if the endpoints are flat, cosine functions are used, with frequencies being whole number multiples of a base frequency.

This idea—breaking down functions and patterns into combinations of simple oscillations—is now synonymous with Fourier's name and has proven to be incredibly important and far-reaching across various scientific and engineering disciplines.

Infinite Sums and Approximations

Any finite sum of sine waves will always be continuous and cannot perfectly represent a discontinuous function like a step function. However, Fourier considered infinite sums.

For the step function described (1 for $0 \le t < 0.5$, -1 for $0.5 \le t \le 1$), it can be represented by the infinite sum:

$$f(t) = \frac{4}{\pi} \left( \sin(2\pi t) + \frac{1}{3}\sin(6\pi t) + \frac{1}{5}\sin(10\pi t) + \frac{1}{7}\sin(14\pi t) + \dots \right)$$

(Note: the heat equation example used cosine waves, which fit flat-endpoint boundary conditions, while this step function expands naturally in sines; which family appears is a matter of boundary conditions. The general complex Fourier series handles both sines and cosines implicitly.)

The concept of an "infinite sum" means that as more terms are added, the sequence of partial sums approaches a limit. For functions, this applies to each point in the domain. For the step function:

  • For $t < 0.5$, the sum approaches 1.
  • For $t > 0.5$, the sum approaches -1.
  • At the point of discontinuity ($t = 0.5$), all sine terms are 0, so the sum approaches 0. Thus, for the infinite sum to be strictly true, the function's value at the discontinuity is often defined as the midpoint of the jump (e.g., 0 for a jump from 1 to -1).
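These pointwise limits are easy to probe numerically with partial sums of the step function's odd-harmonic sine series, $\frac{4}{\pi}\sum_{\text{odd } n} \frac{\sin(2\pi n t)}{n}$. A small sketch (the cutoff 2001 is arbitrary):

```python
import numpy as np

def square_partial(t, N):
    # Partial sum of (4/pi) * sum over odd n <= N of sin(2*pi*n*t)/n.
    n = np.arange(1, N + 1, 2)
    return 4 / np.pi * np.sum(np.sin(2 * np.pi * n * t) / n)

# Left of the jump the sum heads to 1, right of it to -1,
# and at t = 0.5 every sine term is exactly 0.
for t in (0.25, 0.5, 0.75):
    print(t, square_partial(t, 2001))
```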

This ability of an infinite sum of continuous functions to represent a discontinuous function highlights the qualitative changes that limits introduce, which finite sums alone cannot achieve.

While there are technical nuances regarding convergence, the ability to represent discontinuous initial conditions with Fourier series allows for exact solutions to the heat equation describing their evolution over time.

Generalizing to Complex Fourier Series

To compute the coefficients for these series, a more general approach involving complex numbers is often used. This not only broadens the applicability but also simplifies computations and provides a clearer understanding of the underlying mechanics.

Functions as 2D Drawings

Instead of functions with real number outputs (like temperature), we consider functions whose output can be any complex number in the 2D plane. The input is still a real number over a finite interval (e.g., $t$ from 0 to 1). Such a function can be visualized as a "drawing," where a pencil tip traces points in the complex plane as the input $t$ varies.

Real-valued functions are essentially "boring drawings" confined to a 1D line (the real axis). When a real-valued function is decomposed into rotating vectors, vectors with frequencies $n$ and $-n$ have the same length and are horizontal reflections of each other. Their sum remains on the real number line and oscillates like a sine wave. Thus, Fourier's original work with real-valued functions and sine waves is a special case of this more general framework of 2D drawings and rotating complex vectors.

The Role of Complex Exponentials

The core of complex Fourier series is the complex exponential, $e^{it}$. As the input $t$ progresses, $e^{it}$ traces a path around the unit circle of the complex plane, covering one radian of arc per unit of time. This function is fundamental because it elegantly describes rotation. While Fourier series can theoretically be described using only 2D vectors, without explicitly mentioning $i$ (the square root of -1), the formulas become more convoluted, and the deep connection to differential equations is obscured. For now, $e^{it}$ can be viewed as a powerful notational shorthand for rotating vectors, but its significance extends far beyond mere convenience.
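A two-line check of this rotation picture, using Python's standard cmath module: the magnitude of $e^{it}$ is always 1, and its angle is exactly $t$ radians.

```python
import cmath

# e^{it} lives on the unit circle: magnitude 1, phase (angle) equal to t.
for t in (0.0, 1.0, 2.5):
    z = cmath.exp(1j * t)
    print(t, abs(z), cmath.phase(z))
```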

Defining the Rotating Vectors

Each rotating vector can be described using the complex exponential. Let's assume the input $t$ ranges from 0 to 1.

  • Constant Vector (Frequency 0): This vector remains at the number 1 (or any complex constant $c_0$). It can be written as $e^{0 \cdot 2\pi i t} = 1$.
  • Vector Rotating One Cycle per Second (Frequency 1): This is $e^{2\pi i t}$. The $2\pi$ ensures one full rotation as $t$ goes from 0 to 1.
  • Vector Rotating One Cycle per Second in the Opposite Direction (Frequency -1): This is $e^{-2\pi i t}$.
  • Vector Rotating $n$ Cycles per Second (Frequency $n$): This is $e^{n \cdot 2\pi i t}$. This formula encompasses all integer frequencies: positive, negative, and zero.

The "dials and knobs" we control are the initial size and direction of each vector. This is achieved by multiplying each exponential by a complex constant, $c_n$.

  • $c_n$ determines the initial magnitude (length) and phase (starting angle) of the $n$-th rotating vector.
  • For example, if $c_0 = 0.5$, the constant vector has a length of 0.5.
  • If $c_1 = 0.3\,e^{i\pi/4}$, the vector with frequency 1 starts at a 45-degree angle with a length of 0.3.

Our goal is to express an arbitrary function $f(t)$ (e.g., a function that draws an eighth note) as a sum of these terms:

$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{n \cdot 2\pi i t}$$

We need a method to determine these complex coefficients $c_n$ given $f(t)$.
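Before computing coefficients from a target drawing, the sum itself can be sketched directly. The coefficients below are made up for illustration:

```python
import numpy as np

# Hypothetical hand-picked coefficients: frequency n -> complex constant c_n.
coeffs = {0: 0.5, 1: 0.3 * np.exp(1j * np.pi / 4), -1: 0.2, 2: 0.1j}

def f(t):
    # Tip-to-tail sum of the rotating vectors c_n * e^{n * 2*pi*i*t}.
    return sum(c * np.exp(2j * np.pi * n * t) for n, c in coeffs.items())

# At t = 0 every exponential equals 1, so f(0) is just the sum of the c_n,
# and since all frequencies are integers, the drawing repeats with period 1.
print(f(0.0), abs(f(0.25) - f(1.25)))
```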

Computing the Fourier Coefficients

The key challenge is to find the coefficients $c_n$.

Finding the Constant Term ($c_0$)

The constant term $c_0$ represents the "center of mass" or average position of the drawing. If we sample many evenly spaced values of $t$ from 0 to 1, the average of the function's outputs for these samples approaches $c_0$ in the limit. This continuous average is an integral:

$$c_0 = \int_0^1 f(t)\,dt$$

To understand why this works, consider $f(t)$ as the sum of all rotating vectors. The integral of a sum is the sum of the integrals:

$$\int_0^1 f(t)\,dt = \int_0^1 \left( \sum_{n=-\infty}^{\infty} c_n e^{n \cdot 2\pi i t} \right) dt = \sum_{n=-\infty}^{\infty} c_n \int_0^1 e^{n \cdot 2\pi i t}\,dt$$

For any $n \ne 0$, the integral $\int_0^1 e^{n \cdot 2\pi i t}\,dt$ represents the average value of a vector that completes a whole number of rotations around the origin. The average value of such a rotating vector over a full cycle (or multiple full cycles) is 0. The only exception is $n = 0$, where $e^{0 \cdot 2\pi i t} = 1$, so $\int_0^1 1\,dt = 1$. Therefore, all terms in the sum vanish except for the $c_0$ term:

$$\int_0^1 f(t)\,dt = c_0 \cdot 1 + \sum_{n \ne 0} c_n \cdot 0 = c_0$$

This integral effectively "kills" all rotating terms, leaving only the constant term $c_0$.
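The vanishing of those averages is easy to confirm numerically. A small sketch, with the sample count and test frequencies chosen arbitrarily:

```python
import numpy as np

# Average e^{n * 2*pi*i*t} over [0, 1) on a fine uniform grid:
# every nonzero integer frequency averages to (nearly) 0; n = 0 gives 1.
t = np.linspace(0, 1, 10_000, endpoint=False)
avgs = {n: np.mean(np.exp(2j * np.pi * n * t)) for n in (0, 1, -3, 7)}
for n, avg in avgs.items():
    print(n, abs(avg))
```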

Finding Any Coefficient ($c_n$)

To find a specific coefficient $c_k$ (for any integer $k$), we employ a clever trick:

  1. "Stop" the desired vector: Multiply the entire function $f(t)$ by $e^{-k \cdot 2\pi i t}$.

    • When multiplying exponentials, their exponents add. So, for any term $c_n e^{n \cdot 2\pi i t}$ in the sum, multiplying by $e^{-k \cdot 2\pi i t}$ transforms it into $c_n e^{(n-k) \cdot 2\pi i t}$.
    • This shifts the frequency of every vector down by $k$.
    • Crucially, the $k$-th vector (with original frequency $k$) now has a frequency of $k - k = 0$, meaning it becomes a constant term $c_k e^{0 \cdot 2\pi i t} = c_k$.
    • All other vectors (with original frequency $n \ne k$) now have a nonzero integer frequency $(n - k)$, meaning they still rotate a whole number of times.
  2. Take the average (integral): Integrate this modified function over the interval $[0, 1]$:

$$\int_0^1 f(t) e^{-k \cdot 2\pi i t}\,dt = \int_0^1 \left( \sum_{n=-\infty}^{\infty} c_n e^{n \cdot 2\pi i t} \right) e^{-k \cdot 2\pi i t}\,dt = \sum_{n=-\infty}^{\infty} c_n \int_0^1 e^{(n-k) \cdot 2\pi i t}\,dt$$

As before, all integrals $\int_0^1 e^{(n-k) \cdot 2\pi i t}\,dt$ will be 0 when $n - k \ne 0$ (i.e., $n \ne k$). The only term that survives is $n = k$:

$$\int_0^1 f(t) e^{-k \cdot 2\pi i t}\,dt = c_k \int_0^1 e^{0 \cdot 2\pi i t}\,dt = c_k \cdot 1 = c_k$$

Thus, the general formula for any Fourier coefficient $c_n$ is:

$$c_n = \int_0^1 f(t) e^{-n \cdot 2\pi i t}\,dt$$

This elegant expression captures all the complexity of decomposing a function into rotating vectors. It instructs us to:

  1. Modify the function (the 2D drawing) to make the $n$-th vector stationary.
  2. Perform an average (integral) that eliminates all other moving vectors, leaving only the now-stationary $n$-th component.
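The two steps above can be verified end to end: build a function from known coefficients, then check that the integral recovers them. A sketch with made-up coefficients and a Riemann-sum integral:

```python
import numpy as np

# Made-up coefficients used to construct a test function f.
true_coeffs = {-2: 0.4j, 0: 1.0, 1: 0.25, 3: -0.6}
t = np.linspace(0, 1, 20_000, endpoint=False)
f_vals = sum(c * np.exp(2j * np.pi * n * t) for n, c in true_coeffs.items())

def coefficient(k):
    # c_k = integral over [0,1] of f(t) * e^{-k * 2*pi*i*t} dt,
    # approximated as the mean over evenly spaced samples.
    return np.mean(f_vals * np.exp(-2j * np.pi * k * t))

print(coefficient(3))  # recovers -0.6
print(coefficient(5))  # ~0: f has no frequency-5 component
```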

Practical Implementation

When rendering animations, computers numerically compute these integrals. For a given path $f(t)$ (often derived from an SVG file), the program calculates $c_n$ for a range of $n$ values (e.g., from -50 to 50 for 101 vectors). Numerical integration involves chopping the interval into small pieces of size $\Delta t$ and summing $f(t) e^{-n \cdot 2\pi i t}\,\Delta t$.

Once these coefficients are computed, each $c_n$ determines the initial angle and magnitude of its corresponding rotating vector. These vectors are then set into motion, added tip-to-tail, and the path traced by the final tip approximates the original function. As the number of vectors approaches infinity, the approximation becomes increasingly accurate.
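A compact version of that pipeline, with a hypothetical stand-in curve in place of an SVG outline:

```python
import numpy as np

N_SAMPLES, FREQS = 5_000, range(-50, 51)   # 101 vectors, as in the example
t = np.arange(N_SAMPLES) / N_SAMPLES       # evenly spaced, dt = 1/N_SAMPLES

# Hypothetical 2D drawing as a complex-valued path (stand-in for SVG data).
path = t + 1j * np.abs(t - 0.5)

# c_n ~= sum over samples of f(t) * e^{-n * 2*pi*i*t} * dt.
coeffs = {n: np.mean(path * np.exp(-2j * np.pi * n * t)) for n in FREQS}

# Set the vectors spinning and add them tip-to-tail to redraw the path.
recon = sum(c * np.exp(2j * np.pi * n * t) for n, c in coeffs.items())
err = np.mean(np.abs(recon - path))
print(err)  # small, and it shrinks as more frequencies are included
```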

Example: The Step Function Revisited

Returning to the step function example (1 for $0 \le t < 0.5$, -1 for $0.5 \le t \le 1$), its Fourier series approximation involves a sum of vectors that stay close to 1 for the first half of the cycle, then quickly jump to -1 for the second half. Each pair of oppositely rotating vectors corresponds to a sine or cosine wave.

To find the coefficients for this step function, one would compute the integral:

$$c_n = \int_0^1 f(t) e^{-n \cdot 2\pi i t}\,dt = \int_0^{0.5} (1)\, e^{-n \cdot 2\pi i t}\,dt + \int_{0.5}^1 (-1)\, e^{-n \cdot 2\pi i t}\,dt$$

This integral can be solved analytically to obtain the exact coefficients. For real-valued functions, pairing off $c_n$ and $c_{-n}$ reveals their relationship to sine and cosine components.
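Carrying out that integral (my own working, not from the transcript) gives $c_n = \frac{2}{i\pi n}$ for odd $n$ and $0$ for even nonzero $n$, which a Riemann sum should approximate:

```python
import numpy as np

# The step function: 1 on [0, 0.5), -1 on [0.5, 1).
t = np.linspace(0, 1, 100_000, endpoint=False)
step = np.where(t < 0.5, 1.0, -1.0)

def c(n):
    # Riemann-sum version of c_n = integral of f(t) * e^{-n * 2*pi*i*t} dt.
    return np.mean(step * np.exp(-2j * np.pi * n * t))

# Odd frequencies carry all the weight (~2/(i*pi*n)); even ones vanish.
for n in (1, 2, 3):
    print(n, c(n))
```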

Conclusion and Broader Implications

This exploration of complex Fourier series concludes the discussion on the heat equation, providing a glimpse into the study of partial differential equations. More broadly, it highlights a profound and recurring idea in mathematics: the critical role of exponential functions (including their complex and matrix generalizations) in differential equations, especially linear ones. The technique of decomposing a function into a combination of exponentials to solve differential equations is a fundamental concept that reappears in various forms, such as the Laplace transform.


Generated by AI-powered TranscribeLecture.com • 3/14/2026
