PHOT 411: Numerical Methods for Photonics

LECTURE 03

Michaël Barbier, Spring semester (2024-2025)

Outline

  • Interpolation:
    • Curve fitting, smoothing, or interpolating data points
    • Piece-wise linear approximation
    • Polynomial (least squares) approximation
    • Applying splines
  • Integration
    • Trapezoidal rule
    • Simpson’s rule
  • 1D Finite difference schemes

Interpolation

Interpolation and curve fitting

  • Only a few “accurate” data points: what are the values in between?
  • Sufficient data, but noisy: what is the underlying curve?
  • Question: must the curve pass through the data points?

When to use Interpolation ?

  • Limited data: use interpolation
    • “Accurate” experimental data, or
    • computationally expensive calculations

When to use curve fitting ?

  • Use curve fitting for:
    • Noisy data: “smoothing” experimental data points
    • A known parameterized curve: optimize its parameters. Examples: \(\,\,f(x) = a \sin(b\, x)\), \(\,\,f(x) = \exp(-a x^2)\)

Piecewise linear interpolation

  • Linear interpolation draws straight lines between the data points
  • Formula to calculate the value of \(x \in [x_i, x_{i+1}]\):
    \[ f_i(x) = y_i + \frac{y_{i+1} - y_i}{x_{i+1} - x_i} \, (x - x_i) \]

Piecewise linear interpolation

  • Advantages: simple and fast to compute, always exists, robust.
  • Disadvantage: the derivative is not continuous
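The formula above can be sketched in a few lines of NumPy; the helper name `linear_interp` is illustrative (in practice `np.interp` does the same job):

```python
import numpy as np

def linear_interp(x_data, y_data, x):
    """Piecewise linear interpolation of (x_data, y_data) at points x.

    x_data must be sorted in increasing order.
    """
    x_data = np.asarray(x_data, dtype=float)
    y_data = np.asarray(y_data, dtype=float)
    # Index i of the interval [x_i, x_{i+1}] containing each query point
    i = np.clip(np.searchsorted(x_data, x, side="right") - 1,
                0, len(x_data) - 2)
    slope = (y_data[i + 1] - y_data[i]) / (x_data[i + 1] - x_data[i])
    return y_data[i] + slope * (np.asarray(x) - x_data[i])

# Usage: interpolate sin(x) from 5 samples on [0, pi]
xs = np.linspace(0.0, np.pi, 5)
ys = np.sin(xs)
print(linear_interp(xs, ys, 0.5))                   # linear estimate of sin(0.5)
print(np.allclose(linear_interp(xs, ys, xs), ys))   # True: passes through the data points
```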

Global polynomials

  • A single polynomial of higher order \(n\) over the whole data range
  • Polynomial function of order \(n\): \[ f(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n \]
  • Polynomial function determined by column vector: \[ \vec{a} = (a_0, a_1, a_2, \dots, a_n)^T \]
  • Requirement to go through data points \((x_i, y_i)\), for every \(i = 1 \dots N\): \[ f(x_i) = a_0 + a_1 x_i + a_2 x_i^2 + \dots + a_n x_i^n = y_i \]

Global polynomials: Matrix equation

\[ f(x_i) = a_0 + a_1 x_i + a_2 x_i^2 + \dots + a_n x_i^n = y_i \] System of equations \(f(x_i) = y_i\) in matrix form \(A\,\vec{a} = \vec{y}\): \[ \begin{pmatrix} 1 & x_1 & x_1^2 & \dots & x_1^n \\ 1 & x_2 & x_2^2 & \dots & x_2^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_i & x_i^2 & \dots & x_i^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_N & x_N^2 & \dots & x_N^n \\ \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_i \\ \vdots \\ y_N \end{pmatrix} \]

Global polynomials: Exact fit

  • \(N\) rows/equations/points, \(\,\,\,n+1\) columns/unknowns
  • Exact fit: use polynomial order \(n = N-1\), so that \(A\) is square: \[ \begin{pmatrix} 1 & x_1 & x_1^2 & \dots & x_1^n \\ 1 & x_2 & x_2^2 & \dots & x_2^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_i & x_i^2 & \dots & x_i^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_N & x_N^2 & \dots & x_N^n \\ \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_i \\ \vdots \\ y_N \end{pmatrix} \]

Example quadratic fit

  • Obtain unknowns \(\,\,\vec{a} = (a_0, a_1, a_2)\,\) by \(\,\vec{a} = A^{-1} \vec{y}\) \[ \begin{pmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ 1 & x_3 & x_3^2 \\ \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} \]
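A minimal NumPy sketch of this three-point exact fit (solving the Vandermonde system with `np.linalg.solve`, which is numerically preferable to forming \(A^{-1}\) explicitly); the data points are invented for illustration:

```python
import numpy as np

# Three data points determine a unique parabola
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 0.0, 3.0])

# Vandermonde matrix with rows (1, x_i, x_i^2)
A = np.vander(x, 3, increasing=True)

# Solve A a = y
a = np.linalg.solve(A, y)
print(a)    # (a0, a1, a2) = (1, -3, 2) for these points
```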

Global polynomials: Approximate curve fitting

  • \(N\) rows/equations/points, \(\,\,\,n+1\) columns/unknowns
  • Approximate fit: use polynomial order \(n < N-1\) \[ \begin{pmatrix} 1 & x_1 & x_1^2 & \dots & x_1^n \\ 1 & x_2 & x_2^2 & \dots & x_2^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_i & x_i^2 & \dots & x_i^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_N & x_N^2 & \dots & x_N^n \\ \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_i \\ \vdots \\ y_N \end{pmatrix} \]

Approximate polynomial fit: error

  • \(N\) rows/equations/points, \(\,\,\,n+1\) columns/unknowns
  • Approximate fit: use polynomial order \(n < N-1\)
  • We cannot fit all points at the same time: \[ A \vec{a} = \vec{y} \quad \Longrightarrow \quad A \vec{a} = \vec{y} + \vec{e} \]
  • Errors \(\vec{e} = (e_1, e_2, \dots, e_N)^T\) represent mismatch \(y_i \longrightarrow y_i + e_i\)
  • Minimize the norm of the unknown errors (least squares): \[ \|\vec{e}\| = \sqrt{\vec{e}^{\,T}\vec{e}} = \sqrt{\sum_{i = 1}^{N} e_i^2} \]

Approximate polynomial fit: solution

  • Minimizing the (squared) norm of the errors with respect to \(\vec{a}\) yields the normal equations of least squares: \[ \|\vec{e}\|^2 = \vec{e}^{\,T}\vec{e} = \sum_{i = 1}^{N} e_i^2 \quad\Longrightarrow\quad A^T A \vec{a} = A^T \vec{y} \]
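A short NumPy sketch of this least-squares fit (the noisy quadratic data are invented for illustration); `np.linalg.lstsq` serves as a cross-check, since in practice a QR-based solver is better conditioned than the explicit normal equations:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)                 # N = 50 points
y = 1.0 + 2.0 * x - 0.5 * x**2 + 0.05 * rng.standard_normal(x.size)

n = 2                                          # polynomial order n < N - 1
A = np.vander(x, n + 1, increasing=True)

# Normal equations: A^T A a = A^T y
a = np.linalg.solve(A.T @ A, A.T @ y)

# Cross-check with NumPy's least-squares solver
a_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(a, a_lstsq))                 # True
print(a)                                       # close to the true (1, 2, -0.5)
```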

Example of splines: Cubic splines

  • Splines: a “local” fit, robust, and continuously differentiable
  • Boundary conditions for a cubic spline \(g(x)\) (stated for the left end; the right end is analogous):
    • Natural: \(g''(x_0) = 0\)
    • Clamped: \(g'(x_0)\) fixed
    • Not-a-knot: \(g'''\) continuous at \(x_1\)
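Assuming SciPy is available, `scipy.interpolate.CubicSpline` exposes these boundary conditions through its `bc_type` argument; a sketch on sampled sine data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0.0, 2.0 * np.pi, 8)
y = np.sin(x)

# Same data, three different boundary conditions
s_natural = CubicSpline(x, y, bc_type="natural")             # g'' = 0 at both ends
s_clamped = CubicSpline(x, y, bc_type=((1, 1.0), (1, 1.0)))  # g' fixed (= cos(0) = cos(2*pi) = 1)
s_notaknot = CubicSpline(x, y, bc_type="not-a-knot")         # g''' continuous at x_1, x_{N-1}

# All three reproduce sin(x) well; the boundary condition
# mainly changes the behaviour near the ends of the interval
xx = np.linspace(0.0, 2.0 * np.pi, 200)
for s in (s_natural, s_clamped, s_notaknot):
    print(np.max(np.abs(s(xx) - np.sin(xx))))
```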

Integration

Mid-point (Riemann) and Trapezium rules

\[ I_\textrm{mid} = \sum_i f\left(\frac{x_{i-1} + x_{i}}{2}\right) h \quad I_\textrm{trap} = \sum_i \frac{f(x_{i-1}) + f(x_{i})}{2} h \]

Similar (second-order) errors

\[ E_\textrm{mid} \propto \frac{1}{24} h^2 \, (B-A) \qquad E_\textrm{trap} \propto \frac{1}{12} h^2 \, (B-A) \]
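Both rules are a few lines of NumPy; the sketch below also shows the predicted \(h^2\) behaviour (the trapezoidal error is roughly twice the midpoint error, matching the \(1/12\) vs \(1/24\) prefactors):

```python
import numpy as np

def midpoint_rule(f, A, B, n):
    """Composite midpoint rule with n equal subintervals."""
    h = (B - A) / n
    mids = A + (np.arange(n) + 0.5) * h
    return np.sum(f(mids)) * h

def trapezoid_rule(f, A, B, n):
    """Composite trapezoidal rule with n equal subintervals."""
    x = np.linspace(A, B, n + 1)
    y = f(x)
    return (0.5 * y[0] + np.sum(y[1:-1]) + 0.5 * y[-1]) * ((B - A) / n)

# Integral of sin on [0, pi] is exactly 2; halving h shrinks both errors ~4x
for n in (10, 20, 40):
    e_mid = abs(midpoint_rule(np.sin, 0.0, np.pi, n) - 2.0)
    e_trap = abs(trapezoid_rule(np.sin, 0.0, np.pi, n) - 2.0)
    print(n, e_mid, e_trap, e_trap / e_mid)   # ratio close to 2
```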

Simpson’s rule

  • Simpson’s rule extends the trapezoidal rule
  • Fit a parabola to each segment

\[ I_\textrm{Simpson} = \sum_i \frac{f(x_{i-1}) + 4f(x_{i-1/2}) + f(x_{i})}{6} h \]

  • Error is 4th order (instead of 2nd for trapezoidal rule):

\[ E_\textrm{Simpson} \propto h^4 \, (B-A) \]
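A sketch of the composite rule above; each segment \([x_{i-1}, x_i]\) uses its two endpoints and its midpoint:

```python
import numpy as np

def simpson_rule(f, A, B, n):
    """Composite Simpson's rule: per segment (f_left + 4 f_mid + f_right) h / 6."""
    h = (B - A) / n
    x = np.linspace(A, B, n + 1)
    mids = x[:-1] + 0.5 * h
    return h / 6.0 * np.sum(f(x[:-1]) + 4.0 * f(mids) + f(x[1:]))

# 4th-order convergence: halving h shrinks the error ~16x
# (integral of sin on [0, pi] is exactly 2)
for n in (4, 8, 16):
    print(n, abs(simpson_rule(np.sin, 0.0, np.pi, n) - 2.0))
```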

Comments on integration schemes

  • Data points may be spaced irregularly: \(h \longrightarrow h_i\)
  • Adaptive integration schemes refine where the function fluctuates more strongly, improving accuracy there
  • Example: adaptive quadrature switches between the trapezoidal rule (rough) and Simpson’s rule (accurate)
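As a concrete sketch of adaptivity, here is the classic adaptive Simpson method (a variant of the idea above: it compares a coarse and a refined Simpson estimate per interval and subdivides only where their difference exceeds the tolerance, rather than pairing the trapezoidal rule with Simpson's rule):

```python
import math

def _simpson(f, a, fa, b, fb):
    """One Simpson estimate on [a, b]; returns midpoint, f(midpoint), estimate."""
    m = 0.5 * (a + b)
    fm = f(m)
    return m, fm, (b - a) * (fa + 4.0 * fm + fb) / 6.0

def adaptive_simpson(f, a, b, tol=1e-8):
    """Refine only where the local error estimate |S_fine - S_coarse| is large."""
    def recurse(a, fa, b, fb, m, fm, whole, tol):
        lm, flm, left = _simpson(f, a, fa, m, fm)
        rm, frm, right = _simpson(f, m, fm, b, fb)
        if abs(left + right - whole) < 15.0 * tol:      # Richardson error estimate
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, fa, m, fm, lm, flm, left, tol / 2.0)
                + recurse(m, fm, b, fb, rm, frm, right, tol / 2.0))

    fa, fb = f(a), f(b)
    m, fm, whole = _simpson(f, a, fa, b, fb)
    return recurse(a, fa, b, fb, m, fm, whole, tol)

print(adaptive_simpson(math.sin, 0.0, math.pi))   # ~2 to within the tolerance
```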

Differentiation

1D Finite difference schemes

  • Formulas for the derivative \(f_i' \equiv f'(x_i)\) at \(x_i\) are:
    • Backward: \(f_i' \approx \frac{f_i - f_{i-1}}{x_i - x_{i-1}} = \frac{f_i - f_{i-1}}{h}\)
    • Forward: \(f_i' \approx \frac{f_{i+1} - f_{i}}{x_{i+1} - x_{i}} = \frac{f_{i+1} - f_i}{h}\)
    • Central: \(f_i' \approx \frac{f_{i+1} - f_{i-1}}{x_{i+1} - x_{i-1}} = \frac{f_{i+1} - f_{i-1}}{2h}\)
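The three schemes side by side; \(f = \exp\) is used here because its exact derivative is known:

```python
import numpy as np

def backward_diff(f, x, h):
    return (f(x) - f(x - h)) / h

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Errors at x = 1 for f = exp (exact derivative is e)
x0, h = 1.0, 1e-3
exact = np.exp(1.0)
for scheme in (backward_diff, forward_diff, central_diff):
    print(scheme.__name__, abs(scheme(np.exp, x0, h) - exact))
# forward/backward errors scale as h, the central error as h^2
```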

Best scheme?

  • For on-the-fly calculations, only “old” values available: use backward scheme
  • At boundaries: forward for left, backward for right boundary
  • Other cases: central difference often has a smaller error

\[ f_{i, \textrm{central}}' = f_i' + f_{i}'''\frac{h^2}{6} + \mathcal{O}(h^4) \quad \Longrightarrow \quad E_\textrm{central} \approx f_{i}'''\frac{h^2}{6} \] \[ f_{i, \textrm{forward}}' = f_i' + f_{i}''\frac{h}{2} + \mathcal{O}(h^2) \quad \Longrightarrow \quad E_\textrm{forward} \approx f_{i}''\frac{h}{2} \]

Example error for sine function

  • Example: \(f(x) = \sin(x)\), \(f' = \cos(x)\), \(f'' = -\sin(x)\), \(f''' = -\cos(x)\)
    • The central-difference error is mostly smaller than the forward-difference error
    • Near inflection points (where the curvature \(f''\) flips sign) the forward difference performs unusually well, since its leading error term \(f_i'' h/2\) vanishes, while the central difference loses its usual advantage
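A quick numerical check of this behaviour (reusing simple forward/central helpers; the magnitudes quoted in the comment follow from the leading error terms \(f'' h/2\) and \(f''' h^2/6\)):

```python
import numpy as np

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

h = 1e-2
for x0 in (1.0, np.pi):    # generic point vs inflection point of sin
    e_fwd = abs(forward_diff(np.sin, x0, h) - np.cos(x0))
    e_cen = abs(central_diff(np.sin, x0, h) - np.cos(x0))
    print(f"x = {x0:.3f}: forward {e_fwd:.1e}, central {e_cen:.1e}")
# At x = 1 central (~9e-6) beats forward (~4e-3); at the inflection point
# x = pi the forward error term f'' h / 2 vanishes and both errors are ~1.7e-5
```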