Brief Notes + Equations
This is just a collection of notes for ES3C5 Signal Processing that I have found useful to have on hand and easily accessible.
The notes made by Adam (MO) cover everything so this is just intended to be an easy to search document.
Download lecture notes here
Use `./generateTables.sh ../src/es2c5/briefnotes.md` in the scripts folder.
Laplace Conversion  

Laplace Table  Insert table here 
Finding Time Domain Output $y(t)$  
Input as Delta Function $δ(t)$  $x(t)=δ(t)$ 
Input as Step Function $u(t)$  $x(t)=u(t)$ 
LTI System Properties  LTI = Linear Time-Invariant.
3  Poles and Zeros  

General Transfer Function as 2 polynomials  $H(s)=\frac{b_{0}s^{M}+b_{1}s^{M-1}+\cdots+b_{M-1}s+b_{M}}{a_{0}s^{N}+a_{1}s^{N-1}+\cdots+a_{N-1}s+a_{N}}$
Factorised Transfer Function  $H(s)=K\frac{(s-z_{1})(s-z_{2})\cdots(s-z_{M})}{(s-p_{1})(s-p_{2})\cdots(s-p_{N})}$
Real system as real  $M≤N$ 
Zero Definition  Roots z of the numerator. When $s=$ any $z$, $H(s)=0$ 
Pole Definition  Poles p of the denominator. When $s=$ any $p$, $H(s)$ approaches $\infty$
Transfer Function Gain  K is the overall transfer function gain. (Coefficient of $s^{M}$ and $s^{N}$ is 1.)
Stable System  A system is considered stable if its impulse response tends to zero or a finite ... 
Components to Response  Real Components $⇒$ Exponential Response $∣$ Imaginary $⇒$ angular f... 
4  Analog Frequency Response  

Frequency Response  Frequency response of a system = output in response to sinusoid input of unit ma... 
Continuous Fourier Transform  $F(j\omega)=\int_{t=0}^{\infty}f(t)\,e^{-j\omega t}\,dt$
Inverse Fourier Transform  $f(t)=\frac{1}{2\pi}\int_{\omega=-\infty}^{\infty}F(j\omega)\,e^{j\omega t}\,d\omega$
Magnitude of Frequency Response (MFR) $|H(j\omega)|$  $|H(j\omega)|=|K|\frac{\prod_{i=1}^{M}|j\omega-z_{i}|}{\prod_{i=1}^{N}|j\omega-p_{i}|}$
Phase Angle of Frequency Response (PAFR) $\angle H(j\omega)$  $K>0$  $\angle H(j\omega)=\sum_{i=1}^{M}\angle(j\omega-z_{i})-\sum_{i=1}^{N}\angle(j\omega-p_{i})$
Phase Angle of Frequency Response (PAFR) $\angle H(j\omega)$  $K<0$  $\angle H(j\omega)=\sum_{i=1}^{M}\angle(j\omega-z_{i})-\sum_{i=1}^{N}\angle(j\omega-p_{i})+\pi$
5  Analog Filter Design  

Ideal Filters  Each ideal filter has unambiguous pass bands and stop bands.
Realisability  A system is unrealisable if it starts to respond before the input is applied, i.e. its impulse response is nonzero for $t<0$.
Causality  Output depends only on past and current inputs, not future inputs. 
Realising Filters  Realise as we seek smooth behaviour. 
Gain $G_{dB}$ (linear $→$ dB)  $G_{dB}=20\log_{10}(G_{linear})$
Gain $G_{linear}$ (dB $→$ linear)  $G_{linear}=10^{G_{dB}/20}$
Transfer Function of Nth Order Butterworth Low Pass Filter  $H(s)=\frac{\omega_{c}^{N}}{\prod_{n=1}^{N}(s-p_{n})}$
Frequency Response of common Low pass Butterworth filter  $|H(j\omega)|=\frac{1}{\sqrt{1+(\omega/\omega_{c})^{2N}}}$
Normalised Frequency Response of common Low pass Butterworth filter  $|H(j\omega)|=\frac{1}{\sqrt{1+\omega^{2N}}}$
Minimum Order for Low Pass Butterworth  $N=\left\lceil\frac{\log\left(\frac{10^{-G_{s}/10}-1}{10^{-G_{p}/10}-1}\right)}{2\log(\omega_{s}/\omega_{p})}\right\rceil$
Low pass Butterworth Cutoff frequency $\omega_{c}$ (Pass)  $\omega_{c}=\frac{\omega_{p}}{\left(10^{-G/10}-1\right)^{\frac{1}{2N}}}$
Low pass Butterworth Cutoff frequency $\omega_{c}$ (Stop)  $\omega_{c}=\frac{\omega_{s}}{\left(10^{-G/10}-1\right)^{\frac{1}{2N}}}$
6  Periodic Analogue Functions  

Exponential Representation from Trigonometric representation  $e^{jx}=\cos x+j\sin x$
Trigonometric from exponential  Real (cos)  $\cos x=\mathrm{Re}\{e^{jx}\}=\frac{e^{jx}+e^{-jx}}{2}$
Trigonometric from exponential  Imaginary (sin)  $\sin x=\mathrm{Im}\{e^{jx}\}=\frac{e^{jx}-e^{-jx}}{2j}$
Fourier Series  $x(t)=\sum_{k=-\infty}^{\infty}X_{k}e^{jk\omega_{0}t}$
Fourier Coefficients  $X_{k}=\frac{1}{T_{0}}\int_{T_{0}}x(t)\,e^{-jk\omega_{0}t}\,dt$
Fourier Series of Periodic Square Wave (Example)  $x(t)=\sum_{k=-\infty}^{\infty}\frac{A\tau}{T_{0}}\,\mathrm{sinc}\left(\frac{k\omega_{0}\tau}{2}\right)e^{jk\omega_{0}t}$
Output of LTI system from Signal with multiple frequency components  $y(t)=\sum_{k=-\infty}^{\infty}H(jk\omega_{0})X_{k}e^{jk\omega_{0}t}$
Filtering Periodic Signal (Example 6.2)  See example 6.2 below... 
8  Signal Conversion between Analog and Digital  

Digital Signal Processing Workflow  See diagram: 
Sampling  Convert signal from continuous-time to discrete-time. Record amplitude of the an...
Oversample  Sampling too often, using more complexity and wasting energy
Undersample  Not sampling often enough, get aliasing
Aliasing  Multiple signals of different frequencies yield the same data when sampled.
Nyquist Rate  $ω_{s}=2ω_{B}$
Quantisation  The mapping of continuous amplitude levels to a binary representation.
Data Interpolation  Convert digital signal back to analogue domain, reconstruct continuous signal fro...
Hold Circuit  Simplest interpolation in a DAC, where amplitude of continuous-time signal match...
Resolution  $\frac{1}{2^{W}}\times 100\%$
Dynamic range  $20\log_{10}2^{W}\approx 6W\,\mathrm{dB}$
9  ZTransforms and LSI Systems  

LSI Rules  Linear Shift-Invariant
Common Components of LSI Systems  For digital systems, only need 3 types of LSI circuit components. 
Discrete Time Impulse Function  Impulse response is very similar in digital domain, as it is the system output w... 
Impulse Response Sequence  $h[n]=F\{δ[n]\}$
LSI Output  $y[n]=\sum_{k=-\infty}^{\infty}x[k]h[n-k]=x[n]*h[n]=h[n]*x[n]$
ZTransform  $\mathcal{Z}\{f[n]\}=F(z)=\sum_{k=0}^{\infty}f[k]z^{-k}$
ZTransform Examples  Simple examples...
Binomial Theorem for Inverse ZTransform  $\sum_{n=0}^{\infty}a^{n}=\frac{1}{1-a}$
ZTransform Properties  Linearity, Time Shifting and Convolution 
Sample Pairs  See example 
ZTransform of Output Signal  $Y(z)=\mathcal{Z}\{y[n]\}=\mathcal{Z}\{x[n]*h[n]\}=\mathcal{Z}\{x[n]\}\,\mathcal{Z}\{h[n]\}=X(z)H(z)$
Finding time-domain output $y[n]$ of an LSI System  Transform, product, inverse.
Difference Equation  Time-domain output $y[n]$ directly as a function of time-domain input $x[n]$ as ...
ZTransform Table  See table... 
10  Stability of Digital Systems  

ZDomain Transfer Function  $H(z)=\frac{b[0]+b[1]z^{-1}+\cdots+b[M-1]z^{1-M}+b[M]z^{-M}}{1+a[1]z^{-1}+\cdots+a[N-1]z^{1-N}+a[N]z^{-N}}$
General Difference Equation  $y[n]=\sum_{k=0}^{M}b[k]x[n-k]-\sum_{k=1}^{N}a[k]y[n-k]$
Poles and Zeros of Transfer Function  $H(z)=K\frac{(z-z_{1})(z-z_{2})\cdots(z-z_{M})}{(z-p_{1})(z-p_{2})\cdots(z-p_{N})}=K\frac{\prod_{i=1}^{M}(z-z_{i})}{\prod_{i=1}^{N}(z-p_{i})}$
Bounded Input and Bounded Output (BIBO) Stability  Stable if bounded input sequence yields bounded output sequence. 
11  Digital Frequency Response  

LSI Frequency Response  Output in response to a sinusoid input of unit magnitude and some specified freq... 
DiscreteTime Fourier Transform (DTFT)  Digital Frequency Response  $F(e^{jΩ})=F(Ω)=\sum_{k=0}^{\infty}f[k]e^{-jkΩ}$
Inverse DiscreteTime Fourier Transform (Inverse DTFT)  $f[k]=\frac{1}{2π}\int_{-π}^{π}F(e^{jΩ})e^{jkΩ}\,dΩ$
LSI Transfer Function  $H(e^{jΩ})=K\frac{\prod_{i=1}^{M}(e^{jΩ}-z_{i})}{\prod_{i=1}^{N}(e^{jΩ}-p_{i})}$
Magnitude of Frequency Response (MFR) $|H(e^{jΩ})|$  $|H(e^{jΩ})|=|K|\frac{\prod_{i=1}^{M}|e^{jΩ}-z_{i}|}{\prod_{i=1}^{N}|e^{jΩ}-p_{i}|}$
Phase Angle of Frequency Response (PAFR) $\angle H(e^{jΩ})$  $K>0$  $\angle H(e^{jΩ})=\sum_{i=1}^{M}\angle(e^{jΩ}-z_{i})-\sum_{i=1}^{N}\angle(e^{jΩ}-p_{i})$
Example 11.1  Simple Digital High Pass Filter  See image... 
12  Filter Difference equations and Impulse responses  

ZDomain Transfer Function  $H(z)=\frac{b[0]+b[1]z^{-1}+\cdots+b[M-1]z^{1-M}+b[M]z^{-M}}{1+a[1]z^{-1}+\cdots+a[N-1]z^{1-N}+a[N]z^{-N}}$
General Difference Equation  $y[n]=\sum_{k=0}^{M}b[k]x[n-k]-\sum_{k=1}^{N}a[k]y[n-k]$
Example 12.1 Proof y[n] can be obtained directly from H[z]  See image... 
Order of a filter  $order=max(N,M)$ 
Taps in a filter  Minimum number of unit delay blocks required. Equal to the order of the filter. 
Example 12.2 Filter Order and Taps  See example... 
Tabular Method for Difference Equations  Given a difference equation, and its input x[n], can write specific output y[n] ... 
Example 12.3 Tabular Method Example  See example 
Infinite Impulse Response (IIR) Filters  IIR filters have infinite-length impulse responses (recursive, with feedback)
Example 12.4 IIR Filter  See example
Finite Impulse Response (FIR) Filters  FIR filters are non-recursive (ie, no feedback components), so $a[k]=0$ for $k\neq 0$...
FIR Difference Equation  $y[n]=\sum_{k=0}^{M}b[k]x[n-k]$
FIR Transfer function  $H(z)=b[0]+b[1]z^{-1}+\cdots+b[M-1]z^{1-M}+b[M]z^{-M}$
FIR Transfer Function  Roots  $H(z)=\frac{b[0]z^{M}+b[1]z^{M-1}+\cdots+b[M-1]z+b[M]}{z^{M}}=\frac{K\prod_{k=1}^{M}(z-z_{k})}{z^{M}}$
FIR Stability  FIR FILTERS ARE ALWAYS STABLE. As in transfer function, all M poles are all on t... 
FIR Linear Phase Response  Often have a linear phase response. The phase shift at the output corresponds to... 
FIR Filter Example  See example 12.5 
Ideal Digital Filters  Four main types of filter magnitude responses (defined over $0\le\Omega\le\pi$...
Realising Ideal Digital Filters  Use poles and zeros to create simple filters. Only need to consider response ove... 
Example 12.6  Simple High Pass Filter Design  See diagram 
13  FIR Digital Filter Design  

Discrete Time Radial Frequency  $\Omega=\frac{\omega}{f_{s}}=\frac{2\pi f}{f_{s}}$
Realising Ideal Digital Filter  Aim is to get as close as possible to ideal behaviour.
Practical Digital Filters  Good digital low pass filter will try to realise the (unrealisable) ideal respon... 
Windowing  Window Method  design process: start with ideal $h_{i}[n]$ and windowing infinite... 
Windowing Criteria  
Practical FIR Filter Design Example 13.2  See example... 
Specification for FIR Filters Example 13.3  See example... 
14  Discrete Fourier Transform and FFT  

Discrete Fourier Transform DFT  $X[k]=\sum_{n=0}^{N-1}x[n]e^{-jnk\frac{2\pi}{N}}$
Inverse DFT  $x[n]=\frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{jnk\frac{2\pi}{N}},\quad n\in\left\{0,1,2,\cdots,N-1\right\}$
Example 14.1 DFT of Sinusoid  See example 
Zero Padding  Artificially increase the length of the time domain signal $x[n]$ by adding zero... 
Example 14.2 Effect of Zero Padding  See example 
Fast Fourier Transform FFT  Family of algorithms that evaluate the DFT with complexity of $O(N\log_{2}N)$ compare...
16  Digital vs Analogue Recap  

Aperiodic (simple periodic) continuous-time signal f(t)  Laplace, Fourier transform.
More Complex Continuous-time signal f(t)  Fourier series, multiples of fundamental, samples of frequency response.
Discrete-time signal f[n] (infinite length)  Z-domain, discrete-time Fourier transform
Discrete-time signal f[n] (finite length)  Finite Length N, convert to frequency domain (DFT), N points distributed over 2 ...
Stability  S-domain: negative real component, Z-domain: poles within unit circle.
Bilinearity  Not core module content.
17  Probabilities and random signals  

Random Variable  A quantity that takes nondeterministic values (ie we don't know what the valu...
Probability Distribution  Defines the probability that a random variable will take some value.
Probability Density Function (PDF)  Continuous random variables  $\int_{x=x_{min}}^{x_{max}}p(x)\,dx=1$
Probability mass function (PMF)  Discrete random variables  $\sum_{x=x_{min}}^{x_{max}}p(x)=1$
Moments  $E[X^{n}]=\sum_{x=x_{min}}^{x_{max}}x^{n}p(x)$ (discrete); $E[X^{n}]=\int_{x=x_{min}}^{x_{max}}x^{n}p(x)\,dx$ (continuous)
Uniform Distribution  Equal probability for a random variable to take any value in its domain, ie over... 
Bernoulli  Discrete probability distribution with only 2 possible values (yes no, 1 0, etc)... 
Gaussian (Normal) Distribution  Continuous probability distribution over $(−∞,∞)$, where values closer... 
Central Limit Theorem (CLT)  Sum of independent random variables can be approximated with Gaussian distributi... 
Independent Random Variables  No dependency on each other (i.e., if knowing the value of one random variable g... 
Empirical Distributions  Scaled histogram by total number of samples. 
Random Signals  Random variables can appear in signals in different ways, eg: 
18  Signal estimation  

Signal Estimation  Signal estimation refers to estimating the values of parameters embedded in a s...
Linear Model  See equation 
Generalised Linear From  See equation 
Optimal estimate  See equation 
Predicted estimate  See equation 
Observation Matrix $Θ$  See below 
Mean Square Error (MSE)  See equation 
Example 18.1  See example 
Example 18.2  See example 
Linear Regression  $\hat{\theta}=\Theta\backslash y$ (MATLAB left division, with $\Theta$ the observation matrix)
Weighted Least Squares Estimate  Weighted least squares, includes a weight matrix W, where each sample associated... 
Maximum Likelihood Estimation (MLE)  See equation 
19  Correlation and Power spectral density  

Correlation  Correlation gives a measure of timedomain similarity between two signals. 
Cross Correlation  $R_{x_{1}x_{2}}[k]\approx\frac{1}{N-k}\sum_{n=0}^{N-k-1}x_{1}[n]x_{2}[k+n]$
Example 19.1  Discrete CrossCorrelation  See example 
Autocorrelation  Correlation of a signal with itself, ie $x_{2}[n]=x_{1}[n]$ or $x_{2}(t)=x_{1}(t)$ 
Example 19.2  Discrete Autocorrelation  See example 
Example 19.3  Correlation in MATLAB  See example 
20  Image Processing  

Types of colour encoding  Binary (0, 1), Indexed (colour map), Greyscale (range 0 to 1), True Colour (RGB)
Notation  See below
Digital Convolution  $y[n]=\sum_{k=-\infty}^{\infty}x[k]h[n-k]=x[n]*h[n]=h[n]*x[n]$
Example 20.1  1D Discrete Convolution  See example 
Example 20.2  Visual 1D Discrete Convolution  See example 
Image Filtering  Determine output y[i][j] from input x[i][j] through filter (kernel) h[i][j] 
Edge Handling  Zeropadding and replicating 
Kernels  Different types of kernels. 
Example 20.3  Image Filtering  See example 
Part 1  Analogue Signals and Systems
Laplace Conversion
Laplace Table
Insert table here
Finding Time Domain Output $y(t)$
 Transform $x(t)$ and $h(t)$ into Laplace domain
 Find product $Y(s)=X(s)H(s)$
 Take inverse Laplace transform $Y(s)$
Input as Delta Function $δ(t)$
$x(t)=δ(t)$ Then $X(s)=1$, so $Y(s)=H(s)$.
Input as Step Function $u(t)$
$x(t)=u(t)$ Then $X(s)=\frac{1}{s}$, so $Y(s)=\frac{H(s)}{s}$.
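The transform, product, inverse recipe above can be sketched with SymPy; the first-order $H(s)=1/(s+1)$ below is an assumed example, not a system from the notes:

```python
# Hedged sketch: step response of an assumed H(s) = 1/(s + 1),
# following the recipe: transform -> product -> inverse transform.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
H = 1 / (s + 1)          # assumed first-order transfer function
X = 1 / s                # Laplace transform of the unit step u(t)
Y = sp.simplify(H * X)   # product in the Laplace domain
y = sp.inverse_laplace_transform(Y, s, t)  # back to the time domain
print(sp.simplify(y))
```

For this example partial fractions give $Y(s)=\frac{1}{s}-\frac{1}{s+1}$, so the printed result is the familiar first-order step response.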
LTI System Properties
LTI = Linear Time-Invariant.
 LTI systems are linear. Given system $F\{\cdot\}$ and signals $x_{1}(t)$, $x_{2}(t)$ etc
 LTI is additive: $F\{x_{1}(t)+x_{2}(t)\}=F\{x_{1}(t)\}+F\{x_{2}(t)\}$
 LTI is scalable (or homogeneous): $F\{αx_{1}(t)\}=αF\{x_{1}(t)\}$
 LTI is time-invariant, ie, if output $y(t)=F\{x_{1}(t)\}$ then:
 $y(t−τ)=F\{x_{1}(t−τ)\}$
3  Poles and Zeros
General Transfer Function as 2 polynomials
$H(s)=\frac{b_{0}s^{M}+b_{1}s^{M-1}+\cdots+b_{M-1}s+b_{M}}{a_{0}s^{N}+a_{1}s^{N-1}+\cdots+a_{N-1}s+a_{N}}$
Factorised Transfer Function
$H(s)=K\frac{(s-z_{1})(s-z_{2})\cdots(s-z_{M})}{(s-p_{1})(s-p_{2})\cdots(s-p_{N})}$ Factorised and rewritten as a ratio of products: $=K\frac{\prod_{i=1}^{M}(s-z_{i})}{\prod_{i=1}^{N}(s-p_{i})}$
Real system as real
$M≤N$ Where the numerator is an $M$th order polynomial with coefficients $b$ and the denominator is an $N$th order polynomial with coefficients $a$. For a system to be real, the order of the numerator polynomial must be no greater than the order of the denominator polynomial, ie: $M≤N$.
Zero Definition
Roots z of the numerator. When $s=$ any $z$, $H(s)=0$
Pole Definition
Poles p of the denominator. When $s=$ any $p$, $H(s)$ approaches $\infty$
Transfer Function Gain
K is the overall transfer function gain. (Coefficient of $s^{M}$ and $s^{N}$ is 1.)
Stable System
A system is considered stable if its impulse response tends to zero or a finite value in the time domain.
Requires all poles to have negative real components, i.e. to lie in the left half of the complex s-plane on a pole-zero plot (left of the imaginary s-axis).
Components to Response
Real components of poles $⇒$ exponential responses $∣$ imaginary components $⇒$ angular frequency of oscillating responses.
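The stability test above (all poles with negative real components) is easy to check numerically; the second-order denominator below is an assumed example, not a system from the notes:

```python
# Hedged sketch: check analogue stability from the denominator polynomial.
# Assumed example H(s) = 1 / (s^2 + 3s + 2).
import numpy as np

a = [1, 3, 2]                    # denominator coefficients of s^2 + 3s + 2
poles = np.roots(a)              # poles are the roots of the denominator
stable = np.all(poles.real < 0)  # stable if every pole is left of the imaginary axis
print(poles, stable)             # poles at -1 and -2 -> stable
```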
4  Analog Frequency Response
Frequency Response
Frequency response of a system = output in response to sinusoid input of unit magnitude and specified frequency, $ω$. Response is measured as magnitude and phase angle.
Continuous Fourier Transform
$F(jω)=\int_{t=0}^{\infty}f(t)\,e^{−jωt}\,dt$
Laplace transform evaluated on the imaginary s-axis at some frequency $s=jω$.
$ω=$ radial frequency, in $\mathrm{rad/s}$
Inverse Fourier Transform
$f(t)=\frac{1}{2π}\int_{ω=−\infty}^{\infty}F(jω)\,e^{jωt}\,dω$
Magnitude of Frequency Response (MFR) $∣H(jω)∣$
$|H(jω)|=|K|\frac{\prod_{i=1}^{M}|jω−z_{i}|}{\prod_{i=1}^{N}|jω−p_{i}|}$
In words, the magnitude of the frequency response (MFR) $∣H(jω)∣$ is equal to the gain multiplied by the magnitudes of the vectors corresponding to the zeros, divided by the magnitudes of the vectors corresponding to the poles.
Phase Angle of Frequency Response (PAFR) $∠H(jω)$  $K>0$
$\angle H(jω)=\sum_{i=1}^{M}\angle(jω−z_{i})−\sum_{i=1}^{N}\angle(jω−p_{i})$
Phase Angle of Frequency Response (PAFR) $\angle H(jω)$  $K<0$
$\angle H(jω)=\sum_{i=1}^{M}\angle(jω−z_{i})−\sum_{i=1}^{N}\angle(jω−p_{i})+π$
In words, the phase angle of the frequency response (PAFR) $\angle H(jω)$ is equal to the sum of the phases of the vectors corresponding to the zeros, minus the sum of the phases of the vectors corresponding to the poles, plus $π$ if the gain is negative.
Each phase vector is measured from the positive real s-axis (or a line parallel to the real s-axis if the pole or zero is not on the real s-axis).
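The MFR and PAFR formulas can be checked numerically against direct evaluation of $H(j\omega)$; the one-zero, one-pole system below is an assumed example:

```python
# Hedged sketch: pole-zero formulas for |H(jw)| and the phase angle, versus
# direct evaluation, for an assumed H(s) = (s + 2)/(s + 1).
import numpy as np

K, zeros, poles = 1.0, [-2.0], [-1.0]
w = 1.0                                  # evaluate at omega = 1 rad/s
jw = 1j * w
num = np.prod([jw - z for z in zeros])   # product of vectors from the zeros
den = np.prod([jw - p for p in poles])   # product of vectors from the poles
mag = abs(K) * abs(num) / abs(den)       # MFR: |K| * prod|jw - z| / prod|jw - p|
phase = np.angle(num) - np.angle(den)    # PAFR for K > 0
H = K * (jw + 2) / (jw + 1)              # direct evaluation of H(jw)
print(mag, phase, abs(H), np.angle(H))
```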
5  Analog Filter Design
Ideal Filters
Each ideal filter has unambiguous pass bands, which are ranges of frequencies that pass through the system without distortion, and stop bands, which are ranges of frequencies that are rejected and do not pass through the system. The transition band between stop and pass bands in ideal filters has a size of 0; transitions occur at single frequencies.
Realisability
A system is unrealisable if it starts to respond to the input before the input is applied, i.e. its impulse response is nonzero for $t<0$.
Causality
Output depends only on past and current inputs, not future inputs.
Realising Filters
Approximate the ideal response, as we seek smooth, causal behaviour.
 Drop $h_{i}(t)$ for $t<0$ (keep $h_{i}(t)u(t)$)
 Would not get suitable behaviour in the frequency domain, as we discarded 50% of the system energy
 But can tolerate delays
 So shift the sinc to the right
 Time-domain shift = scaling by a complex exponential in Laplace
 True in the Fourier transform too, so a delay in time maintains magnitude but changes the phase of the frequency response
 Truncate
 As we can't wait for infinity, truncate the impulse response.
Gain $G_{dB}$ (linear $→$ dB)
$G_{dB}=20log_{10}(G_{linear})$
Gain $G_{linear}$ (dB $→$ linear)
$G_{linear}=10^{G_{dB}/20}$
Transfer Function of Nth Order Butterworth Low Pass Filter
$H(s)=\frac{ω_{c}^{N}}{\prod_{n=1}^{N}(s−p_{n})}$
Butterworth = Maximally flat in pass band (frequency response magnitudes are as flat as possible for given order)
 $p_{n}$ = nth pole
 = $jω_{c}e^{\frac{jπ(2n−1)}{2N}}$
 = $−ω_{c}\sin\left(\frac{π(2n−1)}{2N}\right)+jω_{c}\cos\left(\frac{π(2n−1)}{2N}\right)$
 Form a semicircle to the left of the imaginary s-axis
 $ω_{c}$ = half-power cutoff frequency
 Frequency where filter gain is $G_{linear}=\frac{1}{\sqrt{2}}$ or $G_{dB}=−3\,\mathrm{dB}$
Frequency Response of common Low pass Butterworth filter
$|H(jω)|=\frac{1}{\sqrt{1+(ω/ω_{c})^{2N}}}$
Increasing order improves approximation of ideal behaviour
Normalised Frequency Response of common Low pass Butterworth filter
$|H(jω)|=\frac{1}{\sqrt{1+ω^{2N}}}$
To convert the normalised frequency form to non-normalised = multiply $ω$ by the actual $ω_{c}$
Minimum Order for Low Pass Butterworth
$N=\left\lceil\frac{\log\left(\frac{10^{-G_{s}/10}-1}{10^{-G_{p}/10}-1}\right)}{2\log\left(\frac{ω_{s}}{ω_{p}}\right)}\right\rceil$
Round up as we want to over-satisfy, not under-satisfy
Low pass Butterworth Cutoff frequency $ω_{c}$ (Pass)
$ω_{c}=\frac{ω_{p}}{\left(10^{-G/10}-1\right)^{\frac{1}{2N}}}$
Gain in dB
Low pass Butterworth Cutoff frequency $ω_{c}$ (Stop)
$ω_{c}=\frac{ω_{s}}{\left(10^{-G/10}-1\right)^{\frac{1}{2N}}}$
Gain in dB
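The minimum-order formula can be cross-checked against `scipy.signal.buttord`, which performs the same specification-based design; the band edges and gains below are assumed example values:

```python
# Hedged sketch: minimum Butterworth order from an assumed analogue spec
# (pass edge 1 rad/s at G_p = -3 dB, stop edge 2 rad/s at G_s = -15 dB).
import numpy as np
from scipy import signal

wp, ws = 1.0, 2.0        # band-edge frequencies (rad/s, analogue)
Gp, Gs = -3.0, -15.0     # gains in dB at the band edges
# N = ceil( log((10^{-Gs/10}-1)/(10^{-Gp/10}-1)) / (2 log(ws/wp)) )
N = int(np.ceil(np.log10((10**(-Gs / 10) - 1) / (10**(-Gp / 10) - 1))
                / (2 * np.log10(ws / wp))))
# buttord takes attenuations as positive dB values
N_ref, wc = signal.buttord(wp, ws, -Gp, -Gs, analog=True)
print(N, N_ref)
```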
6  Periodic Analogue Functions
Exponential Representation from Trigonometric representation
$e^{jx}=\cos x+j\sin x$
Trigonometric from exponential  Real (cos)
$\cos x=\mathrm{Re}\{e^{jx}\}=\frac{e^{jx}+e^{−jx}}{2}$
Trigonometric from exponential  Imaginary (sin)
$\sin x=\mathrm{Im}\{e^{jx}\}=\frac{e^{jx}−e^{−jx}}{2j}$
Fourier Series
$x(t)=\sum_{k=−\infty}^{\infty}X_{k}e^{jkω_{0}t}$ Periodic signal = sum of complex exponentials.
Fundamental frequency $f_{0}$, such that all frequencies in the signal are multiples of $f_{0}$.
Fundamental period $T_{0}=1/f_{0}$
$ω_{0}=2πf_{0}=2π/T_{0}$
Fourier spectra only exist at harmonic frequencies (ie integer multiples of the fundamental frequency)
Fourier Coefficients
$X_{k}=\frac{1}{T_{0}}\int_{T_{0}}x(t)\,e^{−jkω_{0}t}\,dt$
An important property of the Fourier series is how it represents real signals $x(t)$:
 Even magnitude spectrum $→∣X_{k}∣=∣X_{−k}∣$
 Odd phase spectrum $→\angle X_{k}=−\angle X_{−k}$
Fourier Series of Periodic Square Wave (Example)
$x(t)=\sum_{k=−\infty}^{\infty}\frac{Aτ}{T_{0}}\,\mathrm{sinc}\left(\frac{kω_{0}τ}{2}\right)e^{jkω_{0}t}$
Where $X_{k}=\frac{Aτ}{T_{0}}\,\mathrm{sinc}\left(\frac{kω_{0}τ}{2}\right)$
Output of LTI system from Signal with multiple frequency components
$y(t)=\sum_{k=−\infty}^{\infty}H(jkω_{0})X_{k}e^{jkω_{0}t}$
Or in other words:
$Y_{k}=H(jkω_{0})X_{k}$
The output of an LTI system due to a signal with multiple frequency components can be found by superposition of the outputs due to the individual frequency components. Ie the system will change the amplitude and phase of each frequency in the input.
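The square-wave coefficient formula can be verified numerically from the coefficient integral; the pulse train below ($A=1$, $T_{0}=1$, $\tau=0.5$, pulse centred on $t=0$) is an assumed example:

```python
# Hedged sketch: numeric Fourier coefficients of an assumed square wave versus
# the closed form X_k = (A*tau/T0) sinc(k w0 tau / 2).
import numpy as np

A, T0, tau = 1.0, 1.0, 0.5
w0 = 2 * np.pi / T0
t = np.arange(-T0 / 2, T0 / 2, T0 / 200000)   # fine grid over one period
x = np.where(np.abs(t) < tau / 2, A, 0.0)     # pulse centred on t = 0

def coeff(k):
    # X_k = (1/T0) * integral over one period of x(t) e^{-j k w0 t} dt,
    # approximated here by a mean over the uniform grid
    return np.mean(x * np.exp(-1j * k * w0 * t))

def closed(k):
    # unnormalised sinc(u) = sin(u)/u; np.sinc is normalised, so rescale
    return (A * tau / T0) * np.sinc(k * w0 * tau / (2 * np.pi))

print(coeff(1).real, closed(1))   # both approximately 1/pi for this example
```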
Filtering Periodic Signal (Example 6.2)
See example 6.2 below...
7  Computing with Analogue Signals
This topic isn't examined as it is MATLAB
8  Signal Conversion between Analog and Digital
Digital Signal Processing Workflow
See diagram:
 Low pass filter applied to time-domain input signal $x(t)$ to limit frequencies
 An analogue-to-digital converter (ADC) samples and quantises the continuous-time analogue signal to convert it to the discrete-time digital signal $x[n]$.
 Digital signal processing (DSP) performs the operations required and generates output signal $y[n]$.
 A digital-to-analogue converter (DAC) uses hold operations to reconstruct an analogue signal from $y[n]$
 An output low pass filter removes high frequency components introduced by the DAC operation to give the final output $y(t)$.
Sampling
Convert signal from continuous-time to discrete-time. Record the amplitude of the analogue signal at specified times. Usually the sampling period is fixed.
Oversample
Sampling too often, using more complexity and wasting energy
Undersample
Not sampling often enough, causing aliasing of our signal (multiple signals of different frequencies yield the same data when sampled.)
Aliasing
Multiple signals of different frequencies yield the same data when sampled.
If we sample the black sinusoid at the times indicated with the blue marker, it could be mistaken for the red dashed sinusoid. This happens when undersampling, and the lower-frequency signal is called the alias. The alias makes it impossible to recover the original data.
Nyquist Rate
$ω_{s}=2ω_{B}$
Minimum anti-aliasing sampling frequency.
Sampling at or above this rate, $ω_{s}≥2ω_{B}$, keeps frequencies distinguishable.
Quantisation
The mapping of continuous amplitude levels to a binary representation.
Ie with $W$ bits there are $2^{W}$ quantisation levels. ADC word length $=W$.
Continuous amplitude levels are approximated to the nearest level (rounding). The resulting error between the nearest level and the actual level = quantisation noise
Data Interpolation
Convert digital signal back to analogue domain, reconstruct continuous signal from discrete-time series of points.
Hold Circuit
Simplest interpolation in a DAC, where the amplitude of the continuous-time signal matches that of the previous discrete-time sample.
Ie hold the amplitude until the next discrete time value. Produces staircase-like output.
Resolution
$\frac{1}{2^{W}}\times 100\%$
Space between levels, often represented as a percentage.
For a $W$-bit DAC, with uniform levels
Dynamic range
$20\log_{10}2^{W}\approx 6W\,\mathrm{dB}$
Range of signal amplitudes that a DAC can resolve between its smallest and largest (undistorted) values.
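A minimal sketch of $W$-bit quantisation together with the resolution and dynamic-range formulas above; the signal and the choice $W=8$ are assumed examples:

```python
# Hedged sketch: W-bit quantisation, resolution, and dynamic range.
import numpy as np

W = 8                                      # assumed ADC word length
levels = 2 ** W                            # number of quantisation levels
resolution_pct = 100 / levels              # space between levels as a percentage
dynamic_range_db = 20 * np.log10(levels)   # approximately 6W dB

# quantise a unit-amplitude signal in [0, 1] to the nearest uniform level
x = 0.5 + 0.5 * np.sin(2 * np.pi * np.linspace(0, 1, 50))
xq = np.round(x * (levels - 1)) / (levels - 1)
noise = x - xq                             # quantisation noise (rounding error)
print(resolution_pct, dynamic_range_db, np.max(np.abs(noise)))
```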
9  ZTransforms and LSI Systems
LSI Rules
Linear Shift-Invariant
Common Components of LSI Systems
For digital systems, only need 3 types of LSI circuit components.
 A multiplier scales the current input by a constant, i.e., $y[n]=b[1]x[n]$.
 An adder outputs the sum of two or more inputs, e.g., $y[n]=x_{1}[n]+x_{2}[n]$.
 A unit delay imposes a delay of one sample on the input, i.e., $y[n]=x[n−1]$.
Discrete Time Impulse Function
Impulse response is very similar in digital domain, as it is the system output when the input is an impulse.
Impulse Response Sequence
$h[n]=F\{δ[n]\}$
LSI Output
$y[n]=\sum_{k=−\infty}^{\infty}x[k]h[n−k]=x[n]*h[n]=h[n]*x[n]$
Discrete Convolution of input signal with the impulse response.
ZTransform
$\mathcal{Z}\{f[n]\}=F(z)=\sum_{k=0}^{\infty}f[k]z^{−k}$
Converts discrete-time domain function $f[n]$ into complex domain function $F(z)$, in the z-domain. Assume $f[n]$ is causal, ie $f[n]=0,∀n<0$
Discrete-time equivalent of the Laplace transform. However it can be written by direct inspection (as we have a summation instead of an integral). The inverse is equally simple.
ZTransform Examples
Simple examples...
Binomial Theorem for Inverse ZTransform
$\sum_{n=0}^{\infty}a^{n}=\frac{1}{1−a}$
Cannot always find the inverse Z-transform by immediate inspection, in particular if the Z-transform is written as a ratio of polynomials of z. Can use the binomial theorem to convert into a single (sometimes infinite length) polynomial of $z$
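The binomial-series pair above says $\frac{1}{1-az^{-1}}$ inverts to $a^{n}$; a quick numerical check with an assumed $a=0.5$, using `scipy.signal.lfilter` to generate the impulse response:

```python
# Hedged sketch: impulse response of H(z) = 1/(1 - 0.5 z^{-1}) should be 0.5^n,
# matching the binomial-series inverse Z-transform.
import numpy as np
from scipy import signal

a_coef = 0.5
impulse = np.zeros(6)
impulse[0] = 1.0
h = signal.lfilter([1.0], [1.0, -a_coef], impulse)  # y[n] = x[n] + 0.5 y[n-1]
print(h)    # [1, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```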
ZTransform Properties
Linearity, Time Shifting and Convolution
Sample Pairs
See example
ZTransform of Output Signal
$Y(z)=\mathcal{Z}\{y[n]\}=\mathcal{Z}\{x[n]*h[n]\}=\mathcal{Z}\{x[n]\}\,\mathcal{Z}\{h[n]\}=X(z)H(z)$
Where $H(z)$ = Pulse Transfer Function (as it is also the system output when the time-domain input is a unit impulse), but by convention we can refer to $H(z)$ as the Transfer Function
Finding timedomain output $y[n]$ of an LSI System
Transform, product, inverse.
 Transform $x[n]$ and $h[n]$ into zdomain
 Find product $Y(z)=X(z)H(z)$
 Take the inverse Z-transform of $Y(z)$
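The transform, product, inverse route is equivalent to time-domain convolution of $x[n]$ with $h[n]$; a sketch comparing the two for an assumed two-tap system:

```python
# Hedged sketch: y[n] via direct convolution versus filtering, for an assumed
# input x = [1, 2, 3] and impulse response h = [1, 0.5].
import numpy as np
from scipy import signal

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, 0.5])          # impulse response of the assumed system
y_conv = np.convolve(x, h)        # y[n] = x[n] * h[n] (discrete convolution)
# lfilter implements the same FIR system; pad x so the tail is produced too
y_filt = signal.lfilter(h, [1.0], np.concatenate([x, np.zeros(len(h) - 1)]))
print(y_conv, y_filt)             # both [1. 2.5 4. 1.5]
```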
Difference Equation
Time-domain output $y[n]$ directly as a function of time-domain input $x[n]$ as well as previous time-domain outputs $y[n−k]$ (ie there can be feedback).
ZTransform Table
See table...
10  Stability of Digital Systems
ZDomain Transfer Function
$H(z)=\frac{b[0]+b[1]z^{−1}+\cdots+b[M−1]z^{1−M}+b[M]z^{−M}}{1+a[1]z^{−1}+\cdots+a[N−1]z^{1−N}+a[N]z^{−N}}$
Negative powers of z.
No constraint relating $M$ and $N$ for the system to be real (unlike analogue), but we often assume $M=N$
General Difference Equation
$y[n]=\sum_{k=0}^{M}b[k]x[n−k]−\sum_{k=1}^{N}a[k]y[n−k]$
Poles and Zeros of Transfer Function
$H(z)=K\frac{(z−z_{1})(z−z_{2})\cdots(z−z_{M})}{(z−p_{1})(z−p_{2})\cdots(z−p_{N})}=K\frac{\prod_{i=1}^{M}(z−z_{i})}{\prod_{i=1}^{N}(z−p_{i})}$
 Coefficient of each $z$ in this form is 1.
 Poles $p_{i}$ and zeros $z_{i}$ carry same meaning as analogue
 Unfortunately symbol for variable $z$ and zeros $z_{i}$ are very similar (take care)
 Insightful to plot
Bounded Input and Bounded Output (BIBO) Stability
Stable if bounded input sequence yields bounded output sequence.
A system is BIBO stable if all of the poles lie inside the $∣z∣=1$ unit circle
A system is conditionally stable if there is at least 1 pole directly on the unit circle.
Explanation:
 An input sequence $x[n]$ is bounded if each element in the sequence is smaller than some value $A$.
 An output sequence $y[n]$ corresponding to $x[n]$ is bounded if each element in the sequence is smaller than some value $B$.
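The BIBO condition above can be checked numerically from the denominator coefficients; the $H(z)$ below is an assumed example, not one from the notes:

```python
# Hedged sketch: BIBO check for an assumed H(z) = 1/(1 - 0.9 z^{-1} + 0.2 z^{-2}).
# Multiplying through by z^2, the poles are the roots of z^2 - 0.9 z + 0.2.
import numpy as np

a = [1.0, -0.9, 0.2]                       # denominator in positive powers of z
poles = np.roots(a)                        # poles at 0.4 and 0.5
bibo_stable = np.all(np.abs(poles) < 1)    # strictly inside the unit circle
print(poles, bibo_stable)
```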
11  Digital Frequency Response
LSI Frequency Response
Output in response to a sinusoid input of unit magnitude and some specified frequency. Shown in two plots (magnitude and phase) as a function of input frequency.
DiscreteTime Fourier Transform (DTFT)  Digital Frequency Response
$F(e^{jΩ})=F(Ω)=\sum_{k=0}^{\infty}f[k]e^{−jkΩ}$
Where angle $Ω$ is the angle of the unit vector measured from the positive real $z$-axis. Denotes digital radial frequency, measured in radians per sample ($\mathrm{rad/sample}$)
$F(e^{jΩ})$ is the spectrum of $f[n]$ (frequency response).
Conventions for writing the DTFT include $F(e^{jΩ})$ or simply $F(Ω)$
Derivation: Using the Z-transform definition. $\mathcal{Z}\{f[n]\}=F(z)=\sum_{k=0}^{\infty}f[k]z^{−k}$
Let $z$ be in polar coordinates ($z=re^{jΩ}$), ie magnitude $r$, angle $Ω$. Hence rewrite $F(re^{jΩ})=\sum_{k=0}^{\infty}f[k]r^{−k}e^{−jkΩ}$
Then let $r=1$, so that any point lies on the unit circle. $F(e^{jΩ})=F(Ω)=\sum_{k=0}^{\infty}f[k]e^{−jkΩ}$
Inverse DiscreteTime Fourier Transform (Inverse DTFT)
$f[k]=\frac{1}{2π}\int_{−π}^{π}F(e^{jΩ})e^{jkΩ}\,dΩ$
LSI Transfer Function
$H(e^{jΩ})=K\frac{\prod_{i=1}^{M}(e^{jΩ}−z_{i})}{\prod_{i=1}^{N}(e^{jΩ}−p_{i})}$
$H(e^{jΩ})$ is a function of vectors from the system's poles and zeros to the unit circle at angle $Ω$. Thus from the pole-zero plot, we can geometrically determine the magnitude and phase of the frequency response.
Magnitude of Frequency Response (MFR) $∣H(e_{jΩ})∣$
$|H(e^{jΩ})|=|K|\frac{\prod_{i=1}^{M}|e^{jΩ}−z_{i}|}{\prod_{i=1}^{N}|e^{jΩ}−p_{i}|}$
In words, the magnitude of the frequency response (MFR) $|H(e^{jΩ})|$ is equal to the gain multiplied by the magnitudes of the vectors corresponding to the zeros, divided by the magnitudes of the vectors corresponding to the poles.
Repeats every $2π$ by Euler's formula. Due to the symmetry of poles and zeros about the real $z$-axis, the frequency response is symmetric about $Ω=π$, so we only need to find it over one interval of $π$
Phase Angle of Frequency Response (PAFR) $∠H(e_{jΩ})$  $K>0$
$\angle H(e^{jΩ})=\sum_{i=1}^{M}\angle(e^{jΩ}−z_{i})−\sum_{i=1}^{N}\angle(e^{jΩ}−p_{i})$
In words, the phase angle of the frequency response (PAFR) $\angle H(e^{jΩ})$ is equal to the sum of the phases of the vectors corresponding to the zeros, minus the sum of the phases of the vectors corresponding to the poles, plus $π$ if the gain is negative.
Repeats every $2π$ by Euler's formula. Due to the symmetry of poles and zeros about the real $z$-axis, the frequency response is symmetric about $Ω=π$, so we only need to find it over one interval of $π$
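The digital frequency response can be evaluated on the unit circle with `scipy.signal.freqz`; the two-tap averaging filter below is an assumed example:

```python
# Hedged sketch: frequency response of an assumed averaging filter h = [0.5, 0.5],
# i.e. H(e^{jΩ}) evaluated at a few points on the unit circle.
import numpy as np
from scipy import signal

b = [0.5, 0.5]                      # FIR coefficients of the assumed filter
# worN as an array evaluates H at those digital frequencies (rad/sample)
omega, H = signal.freqz(b, worN=[0.0, np.pi / 2, np.pi])
print(np.abs(H))                    # gain 1 at Ω = 0, 0 at Ω = π -> low pass
```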
Example 11.1  Simple Digital High Pass Filter
See image...
12  Filter Difference equations and Impulse responses
ZDomain Transfer Function
$H(z)=\frac{b[0]+b[1]z^{−1}+\cdots+b[M−1]z^{1−M}+b[M]z^{−M}}{1+a[1]z^{−1}+\cdots+a[N−1]z^{1−N}+a[N]z^{−N}}$
General Difference Equation
$y[n]=\sum_{k=0}^{M}b[k]x[n−k]−\sum_{k=1}^{N}a[k]y[n−k]$
The real coefficients $b[\cdot]$ and $a[\cdot]$ are the same as in the transfer function. (Note $a[0]=1$, so there is no coefficient corresponding to $y[n]$).
Easier to convert directly between the transfer function $H(z)$ (with negative powers of z) and the difference equation for output $y[n]$, ideal for implementation of the system (rather than using the time-domain impulse response $h[n]$)
Example 12.1 Proof y[n] can be obtained directly from H[z]
See image...
Order of a filter
$\mathrm{order}=\max(N,M)$
Taps in a filter
Minimum number of unit delay blocks required. Equal to the order of the filter.
Example 12.2 Filter Order and Taps
See example...
Tabular Method for Difference Equations
Given a difference equation, and its input x[n], can write specific output y[n] using tabular method.
 Starting with input $x[n]$, make a column for every input and output that appears in the difference equation
 Assume every output and delayed input is initially zero (ie the filter is causal, initially no memory, hence the system is quiescent)
 Fill in the column for $x[n]$ with the given system input for all rows needed, and fill in delayed versions of $x[n]$
 Evaluate $y[0]$ from the initial input, and propagate the value of $y[0]$ to delayed outputs (as relevant)
 Evaluate $y[1]$ from the $x[\cdot]$ values and $y[0]$
 Continue evaluating outputs and propagating delayed outputs.
Can be an alternative method for finding the time-domain impulse response $h[n]$
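The tabular method is the difference equation evaluated row by row; a minimal sketch for an assumed one-feedback-tap example $y[n]=x[n]+0.5y[n-1]$ driven by an impulse:

```python
# Hedged sketch: tabular method as a loop, for the assumed difference equation
# y[n] = x[n] + 0.5 y[n-1], starting quiescent (all delayed values zero).
x = [1.0, 0.0, 0.0, 0.0]     # impulse input, so y[n] is the impulse response h[n]
y = []
y_prev = 0.0                 # delayed-output column starts at zero
for n in range(len(x)):
    yn = x[n] + 0.5 * y_prev # evaluate the current row from the table columns
    y.append(yn)
    y_prev = yn              # propagate the delayed output to the next row
print(y)                     # [1.0, 0.5, 0.25, 0.125]
```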
Example 12.3 Tabular Method Example
See example
Infinite Impulse Response (IIR) Filters
IIR filters have infinite length impulse responses because they are recursive (ie feedback terms associated with nonzero poles in the transfer function, hence $y[n−k]$ terms exist.)
The standard transfer function and difference equation can be used to represent them: $H(z)=\frac{b[0]+b[1]z^{−1}+\cdots+b[M−1]z^{1−M}+b[M]z^{−M}}{1+a[1]z^{−1}+\cdots+a[N−1]z^{1−N}+a[N]z^{−N}}$ $y[n]=\sum_{k=0}^{M}b[k]x[n−k]−\sum_{k=1}^{N}a[k]y[n−k]$
Not possible to have a linear phase response (so there are different delays associated with different frequencies), and they are not always stable (depending on the exact locations of poles).
IIR filters are more efficient than FIR designs at controlling the gain of the response.
Although the response is technically infinite, in practice it decays towards zero or can be truncated to zero (assume the response is $h[n]=0$ beyond some value $n$)
Example 12.4 IIR Filter
See example
Finite Impulse Response (FIR) Filters
FIR filters are non-recursive (ie, no feedback components), so $a[k]=0$ for $k\neq 0$.
Finite in length, and strictly zero beyond that ($h[n]=0$ for $n>M$). Therefore the number of filter taps dictates the length of an FIR impulse response
Since there is no feedback, can write impulse response $h[n]$ as: $h[n]=b[n]$
FIR Difference Equation
$y[n]=\sum_{k=0}^{M}b[k]x[n−k]$
FIR Transfer function
$H(z)=b[0]+b[1]z^{−1}+\cdots+b[M−1]z^{1−M}+b[M]z^{−M}$
Simplified from the general difference equation transfer function.
FIR Transfer Function  Roots
$H(z)=\frac{b[0]z^{M}+b[1]z^{M−1}+\cdots+b[M−1]z+b[M]}{z^{M}}=\frac{K\prod_{k=1}^{M}(z−z_{k})}{z^{M}}$
More convenient to work with positive powers of z, so multiply top and bottom by $z^{M}$ then factorise.
FIR Stability
FIR FILTERS ARE ALWAYS STABLE. As seen in the transfer function, all $M$ poles are at the origin ($z=0$) and so always inside the unit circle.
FIR Linear Phase Response
Often have a linear phase response. The phase shift at the output corresponds to a time delay.
FIR Filter Example
See example 12.5
Ideal Digital Filters
Four main types of filter magnitude responses (defined over $0≤Ω≤π$, mirrored over $π≤Ω<2π$ and repeated every $2π$)
 Low Pass  passes frequencies less than the cutoff frequency $Ω_{c}$ and rejects frequencies greater.
 High Pass  rejects frequencies less than the cutoff frequency $Ω_{c}$ and passes frequencies greater.
 Band Pass  passes frequencies within a specified range, ie between $Ω_{1}$ and $Ω_{2}$, and rejects frequencies that are either below or above the band within $[0,π]$
 Band Stop  rejects frequencies within a specified range, ie between $Ω_{1}$ and $Ω_{2}$, and passes all other frequencies within $[0,π]$
Ideal responses appear to be fundamentally different from the ideal analogue ones, however we only care about the fundamental band $[−π,π)$, where the behaviour is identical
Realising Ideal Digital Filters
Use poles and zeros to create simple filters. Only need to consider response over the fixed $[−π,π)$ band.
Key Concepts:

To be physically realisable, complex poles and zeros need to be in conjugate pairs

Zeros can be placed anywhere, so they are often placed directly on the unit circle when a frequency (or range of frequencies) needs to be attenuated.

Poles on the unit circle should generally be avoided (conditionally stable). Try to keep all poles at the origin so the filter can be FIR; otherwise it is IIR and has feedback. Poles are used to amplify the response in the neighbourhood of some frequency.

Low Pass  zeros at or near $Ω=π$, poles near $Ω=0$ which can amplify the maximum gain, or be placed at a higher frequency to increase the size of the pass band.

High Pass  the inverse of low pass. Zeros at or near $Ω=0$, poles near $Ω=π$ which can amplify the maximum gain, or be placed at a lower frequency to increase the size of the pass band.

Band Pass  Place zeros at or near both $Ω=0$ and $Ω=π$, so the filter must be at least second order. Place poles if needed to amplify the signal in the neighbourhood of the pass band.

Band Stop  Place zeros at or near the stop band. The zeros must be complex, so such a filter must be at least second order. Place poles at or near both $Ω=0$ and $Ω=π$ if needed.
Example 12.6  Simple High Pass Filter Design
See diagram
13  FIR Digital Filter Design
Discrete Time Radial Frequency
$Ω=\frac{ω}{f_{s}}=\frac{2πf}{f_{s}}$
This requires $Ω\leπ$, otherwise there will be an alias at a lower frequency; so $2f\le f_{s}$ to avoid aliasing.
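A minimal numerical check of the conversion, with assumed values of $f$ and $f_s$:

```python
import numpy as np

fs = 8000.0                   # assumed sample rate, Hz
f = 1000.0                    # assumed analogue frequency, Hz
Omega = 2 * np.pi * f / fs    # discrete-time radial frequency

no_alias = Omega <= np.pi     # equivalent to the condition 2*f <= fs
```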
Realising Ideal Digital Filter
The aim is to get as close as possible to ideal behaviour. But using the inverse DTFT, the ideal (low pass) impulse response is $h_{i}[n]=\frac{Ω_{c}}{π}\,\mathrm{sinc}(nΩ_{c})$.
This is analogous to the inverse Fourier transform of the ideal analogue low pass frequency response.
This is a sampled, scaled sinc function with nonzero values for $n<0$. So it needs to respond to an input before the input is applied, and is thus unrealisable.
Practical Digital Filters
A good digital low pass filter will try to approximate the (unrealisable) ideal response. We will try to do this with FIR filters (always stable, and they tend to have greater flexibility to implement different frequency responses).
 Need to induce a delay to capture most of the ideal signal energy in causal time, ie: use $h_{i}[n−k]u[n]$
 Truncate the response to a delay tolerance $k_{d}$, such that $h[n]=0$ for $n>k_{d}$. This also limits the complexity of the filter: shorter = smaller order
 Window the response: scale each sample to attempt to mitigate the negative effects of truncation
Windowing
Window Method design process: start with the ideal $h_{i}[n]$ and window the infinite time-domain response to obtain a realisable $h[n]$ that can be implemented.
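A sketch of the window method under assumed values ($Ω_c=π/4$, delay $k_d=10$, Hamming window — the notes do not prescribe these choices):

```python
import numpy as np

# Window-method sketch: delay the ideal low-pass response by k_d samples,
# truncate to 2*k_d + 1 taps, then apply a window.
Omega_c = np.pi / 4        # assumed cutoff
k_d = 10                   # assumed delay tolerance
n = np.arange(0, 2 * k_d + 1)

# np.sinc is the normalised sinc, sin(pi*x)/(pi*x), so divide the argument by pi
h_ideal = (Omega_c / np.pi) * np.sinc((n - k_d) * Omega_c / np.pi)

w = np.hamming(len(n))     # Hamming window softens the truncation
h = h_ideal * w            # realisable, linear-phase FIR taps
```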
Windowing Criteria
 Main Lobe Width  Width in frequency of the main lobe.
 Rolloff rate  how sharply the main lobe decreases, measured in dB/decade.
 Peak side lobe level  Peak magnitude of the largest side lobe relative to the main lobe, measured in dB.
 Pass Band ripple  The amount the gain over the pass band can vary about unity, between $1-δ_{p}/2$ and $1+δ_{p}/2$
 Pass Band Ripple Parameter, dB  $r_{p}=20\log_{10}\left(\frac{1+δ_{p}/2}{1-δ_{p}/2}\right)$
 Stop Band ripple  The gain over the stop band must be less than the stop band ripple $δ_{s}$
 Transition Band  $Ω_{Δ}=Ω_{s}−Ω_{p}$
Practical FIR Filter Design Example 13.2
See example...
Specification for FIR Filters Example 13.3
See example...
14  Discrete Fourier Transform and FFT
Discrete Fourier Transform DFT
$X[k]=\sum_{n=0}^{N-1} x[n]\,e^{-jnk\frac{2π}{N}}$
For $k=\{0,1,2,\dots,N-1\}$
This is the forward discrete Fourier transform. (It is not the discrete-time Fourier transform, but samples of it over the interval $[0,2π)$.)
Explanation:
The discrete-time Fourier transform (DTFT) takes a discrete time signal and provides a continuous spectrum that repeats every $2π$. It is defined for an infinite length sequence $x[n]$ and gives a continuous spectrum with values at all frequencies.
$X(e^{jΩ})=\sum_{n=0}^{\infty} x[n]\,e^{-jnΩ}$
Digital systems often work with finite length sequences. (The inverse DTFT also uses integration, so it must be approximated.) So assume the sequence $x[n]$ has length $N$.
$X(e^{jΩ})=\sum_{n=0}^{N-1} x[n]\,e^{-jnΩ}$
Sample the spectrum $X(e^{jΩ})$. It repeats every $2π$, so sample over $[0,2π)$.
Take the same number of samples in the frequency domain as the length of the time domain signal, i.e., $N$ evenly spaced samples of $X(e^{jΩ})$ (aka bins).
These occur at multiples of the fundamental frequency $Ω_{0}=2π/N$: $Ω=\{0,\frac{2π}{N},\frac{4π}{N},\cdots,\frac{2π(N-1)}{N}\}$
Substitute into the DTFT:
$X[k]=\sum_{n=0}^{N-1} x[n]\,e^{-jnk\frac{2π}{N}}$
For $k=\{0,1,2,\dots,N-1\}$
The frequency of bin $k$ is $f_{k}=k\frac{f_{s}}{N},\quad 0\le k\le N-1$
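The DFT sum can be evaluated directly and compared against a library FFT; the 4-sample input is an arbitrary example:

```python
import numpy as np

def dft(x):
    """Direct evaluation of X[k] = sum_n x[n] * exp(-j*2*pi*n*k/N)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)              # one row per frequency bin
    return (x * np.exp(-2j * np.pi * n * k / N)).sum(axis=1)

x = np.array([1.0, 2.0, 3.0, 4.0])
X = dft(x)                            # agrees with np.fft.fft(x)
```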
Inverse DFT
$x[n]=\frac{1}{N}\sum_{k=0}^{N-1} X[k]\,e^{jnk\frac{2π}{N}},\quad n=\{0,1,2,\cdots,N-1\}$
Example 14.1 DFT of Sinusoid
See example
Zero Padding
Artificially increase the length of the time domain signal $x[n]$ by adding zeros to the end. Since the DFT provides a sampled view of the DTFT (we only see the DTFT at $N$ frequencies), zero padding lets us see more detail in the DTFT.
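A small sketch of zero padding: padding a 16-sample signal to 64 samples yields a denser sampling of the same DTFT, so every 4th bin of the padded DFT matches the original DFT:

```python
import numpy as np

# 16-sample cosine (frequency 0.125 cycles/sample, an assumed example)
x = np.cos(2 * np.pi * 0.125 * np.arange(16))

X_16 = np.fft.fft(x)          # DFT with 16 bins
X_64 = np.fft.fft(x, n=64)    # zero-pad to 64 samples: 64 bins sample
                              # the same underlying DTFT more densely
```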
Example 14.2 Effect of Zero Padding
See example
Fast Fourier Transform FFT
A family of algorithms that evaluate the DFT with complexity $O(N\log_{2}N)$ compared to $O(N^{2})$. This is achieved with no approximations.
Details are beyond the module, but the FFT can be used in MATLAB with the fft function.
15  Computing Digital Signals
This topic isn't examined as it is MATLAB
16  Digital vs Analogue Recap
Aperiodic (or simple periodic) continuous-time signal f(t)
Laplace and Fourier transforms.
 Aperiodic (or simple periodic) continuous time signal $f(t)$
 Convert to Laplace domain (s domain) via Laplace transform
 For $s=jω$ this is the (continuous) Fourier transform.
 Fourier transform of the signal is its frequency response $F(jω)$, generally defined for all $ω$
 The Laplace and Fourier transforms have corresponding inverse transforms to convert $F(s)$ or $F(jω)$ back to $f(t)$
More Complex Continuoustime signal f(t)
Fourier series, multiples of fundamental, samples of frequency response.
 For a more complex periodic continuous-time signal $f(t)$
 Fourier series representation decomposes the signal into its frequency components $F_{k}$ at multiples of the fundamental frequency $ω_{0}$.
 The coefficients can be interpreted as samples of the frequency response $F(jω)$, which corresponds to the periodicity of $f(t)$ over time.
 The coefficients $F_{k}$ are found over one period of $f(t)$.
Discretetime signal f[n] (infinite length)
ZDomain, Discretetime fourier transform
 Discrete-time signal $f[n]$: we can convert to the z-domain via the Z-transform,
 which for $z=e^{jΩ}$ is the discrete-time Fourier transform (DTFT).
 The discrete-time Fourier transform of the signal is its frequency response $F(e^{jΩ})$,
 which repeats every $2π$ (i.e., sampling in time corresponds to periodicity in frequency).
 There are corresponding inverse transforms to convert $F(z)$ or $F(e^{jΩ})$ back to $f[n]$
Discretetime signal f[n] (finite length)
Finite Length N, convert to frequency domain (DFT), N points distributed over 2 pi (periodic)
 For discretetime signal $f[n]$ with finite length (or truncated to) N,
 Can convert to the frequency domain using the discrete Fourier transform,
 which is also discrete in frequency.
 The discrete Fourier transform also has N points distributed over $2π$ and is otherwise periodic.
 Here, we see sampling in both frequency and time, corresponding to periodicity in the other domain (that we usually ignore in analysis and design because we define both the time domain signal $f[n]$ and frequency domain signal $F[k]$ over one period of length N).
Stability
S-domain: poles have negative real components. Z-domain: poles lie within the unit circle.
BiLinearity
Not core module content.
17  Probabilities and random signals
Random Variable
A quantity that takes nondeterministic values (i.e., we don't know what the value will be in advance).
Probability Distribution
Defines the probability that a random variable will take some value.
Probability Density Function (PDF)  Continuous random variables
$\int_{x=x_{min}}^{x_{max}} p(x)\,dx=1$
For a random variable $X$ taking values between $x_{min}$ and $x_{max}$ (which could be $\pm\infty$), $p(x)$ is the probability density at $X=x$.
The integral of the density over the full range is equal to 1.
Can take the integral over a subset to calculate the probability of $X$ being within that subset.
Probability mass function (PMF)  Discrete random variables
$\sum_{x=x_{min}}^{x_{max}} p(x)=1$
For a random variable $X$ taking values between $x_{min}$ and $x_{max}$ (which could be $\pm\infty$), $p(x)$ is the probability that $X=x$.
The sum of these probabilities is equal to 1.
Can take summation over subset to calculate the probability of X being within that subset.
Moments
$E[X^{n}]=\sum_{x=x_{min}}^{x_{max}} x^{n}p(x)\qquad E[X^{n}]=\int_{x=x_{min}}^{x_{max}} x^{n}p(x)\,dx$
For a PMF and a PDF respectively.
 $n=1$, called the mean $μ_{X}$  the expected (average) value
 $n=2$, called the mean-squared value  describes the spread of the random variable.
We often refer to the second-order central moment as the variance $σ_{X}^{2}$: the mean-squared value with a correction for the mean, $σ_{X}^{2}=E[(X-μ_{X})^{2}]=E[X^{2}]-(E[X])^{2}$
Standard deviation $σ=\sqrt{σ_{X}^{2}}$
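A quick numerical sketch of the moment definitions using simulated Gaussian samples (the parameters $μ_X=2$, $σ_X=3$ are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=3.0, size=100_000)  # mu = 2, sigma = 3

mean = samples.mean()               # first moment, mu_X
mean_sq = (samples ** 2).mean()     # second moment, E[X^2]
var = mean_sq - mean ** 2           # sigma_X^2 = E[X^2] - (E[X])^2
std = np.sqrt(var)
```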
Uniform Distribution
Equal probability for a random variable to take any value in its domain, ie over $x_{min}≤x≤x_{max}$.
PDF continuous version: $p(x)=\frac{1}{x_{max}-x_{min}}$ for $x_{min}\le x\le x_{max}$ (and 0 otherwise).
Discrete uniform distributions: result of a dice roll, coin toss, etc. The mean is the average of the min and max.
Bernoulli
Discrete probability distribution with only 2 possible values (yes no, 1 0, etc). Values have different probabilities, in general $p(1)=1−p(0)$.
Mean: $μ_{X}=p(1)$, Variance: $σ_{X}^{2}=p(1)p(0)$
Gaussian (Normal) Distribution
Continuous probability distribution over $(−∞,∞)$, where values closer to mean are more likely.
Arguably the most important continuous distribution, as it appears everywhere.
PDF is
$p(x)=\frac{1}{\sqrt{2π}\,σ_{X}}\exp\left(-\frac{(x-μ_{X})^{2}}{2σ_{X}^{2}}\right)$
Central Limit Theorem (CLT)
The sum of independent random variables can be approximated with a Gaussian distribution. The approximation improves as more random variables are included in the sum. This holds for any probability distributions (with finite variance).
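A sketch of the CLT: summing 30 uniform random variables produces an approximately Gaussian total whose mean and variance follow from the per-variable moments (the counts are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sum of 30 independent uniform(0, 1) variables, repeated many times
n_vars, n_trials = 30, 50_000
sums = rng.uniform(0.0, 1.0, size=(n_trials, n_vars)).sum(axis=1)

# CLT predicts an approximately Gaussian sum with these moments:
mu_pred = n_vars * 0.5             # each uniform has mean 1/2
var_pred = n_vars * (1.0 / 12.0)   # each uniform has variance 1/12
```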
Independent Random Variables
No dependency on each other (i.e., if knowing the value of one random variable gives you no information to be able to better guess another random variable, then those random variables are independent of each other).
Empirical Distributions
A histogram scaled by the total number of samples.
To observe behaviour that would exactly match a PDF or PMF requires an infinite number of samples; in practice we can build a histogram from the samples we have.
Random Signals
Random variables can appear in signals in different ways, eg:
 Thermal noise  present in all electronics, caused by the agitation of electrons. Often modelled by adding a Gaussian random variable to the signal
 Signal processing techniques introduce noise  aliasing, quantisation, nonideal filters.
 Random variables can be used to store information, e.g., data can be encoded into bits and delivered across a communication channel. A receiver does not know the information in advance and can treat each bit as a Bernoulli random variable that it needs to estimate.
 Signals can be drastically transformed by the world (wireless signals obstructed by buildings trees etc)  Analogue signals passing through unknown system $h(t)$, which can vary with time etc
18  Signal estimation
Signal Estimation
Signal estimation refers to estimating the values of parameters embedded in a signal. Signals contain noise, so the parameters cannot simply be calculated directly.
Linear Model
See equation
$\vec{y}=\Theta\vec{\phi}+\vec{w}$
For polynomial terms, linearity means $y[n]$ must be a linear function of the unknown parameters.
E.g.: $y[n]=A+Bn+w[n]$
 A,B are unknown parameters
 $w[n]$ is noise  assume Gaussian random variables with mean $μ_{w}=0$ and variance $σ_{w}^{2}$  also assume white noise.
Write $y[n]$ as a column vector with one row for each $n$.
Create the observation matrix $\Theta$.
Since there are 2 parameters, $\Theta$ is an $N\times2$ matrix.
Can therefore be written as
$\vec{y}=\Theta\vec{\phi}+\vec{w}$
With optimal estimate $\hat{\vec{\phi}}$: $\hat{\vec{\phi}}=(\Theta^{T}\Theta)^{-1}\Theta^{T}\vec{y}$
Can write the prediction: $\hat{\vec{y}}=\Theta\hat{\vec{\phi}}$
Calculate MSE
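The whole estimation pipeline can be sketched numerically (using the model $y[n]=A+Bn+w[n]$ from above, with assumed true values $A=1.5$, $B=0.3$):

```python
import numpy as np

rng = np.random.default_rng(2)

# Model y[n] = A + B*n + w[n]; the true A, B are assumed for the demo
A_true, B_true, N = 1.5, 0.3, 200
n = np.arange(N)
w = rng.normal(0.0, 0.5, size=N)    # white Gaussian noise, mean 0
y = A_true + B_true * n + w

# Observation matrix: one column per parameter (N x 2)
Theta = np.column_stack([np.ones(N), n])

# Optimal estimate: phi_hat = (Theta^T Theta)^-1 Theta^T y
phi_hat = np.linalg.inv(Theta.T @ Theta) @ Theta.T @ y

y_hat = Theta @ phi_hat             # prediction
mse = np.mean((y_hat - y) ** 2)     # mean square error
```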
Generalised Linear Form
See equation
$\vec{y}=\Theta\vec{\phi}+\vec{s}+\vec{w}$
Where $s$ is a vector of known samples. Convenient when our signal is contaminated by some large interference with known characteristics.
To account for this in the estimator, subtract $\vec{s}$ from both sides:
$\hat{\vec{\phi}}=(\Theta^{T}\Theta)^{-1}\Theta^{T}(\vec{y}-\vec{s})$
Optimal estimate
See equation
$\hat{\vec{\phi}}=(\Theta^{T}\Theta)^{-1}\Theta^{T}\vec{y}$
Predicted estimate
See equation
$\hat{\vec{y}}=\Theta\hat{\vec{\phi}}$
Without noise.
Observation Matrix $Θ$
See below
$N×P$ matrix where $P$ is the number of parameters, and $N$ is the number of time steps.
Each column is the coefficients of the corresponding parameter at the given timestamp (per row).
Mean Square Error (MSE)
See equation
$\mathrm{MSE}(\hat{y})=\frac{1}{N}\sum_{n=0}^{N-1}\left(\hat{y}[n]-y[n]\right)^{2}$
Example 18.1
See example
Example 18.2
See example
Linear Regression
theta = Obs\y;
AKA Ordinary least squares (OLS).
The form of the observation matrix had to be assumed but may be unknown. If so, we can try different forms and pick the simplest one with the best MSE.
Weighted Least Squares Estimate
Weighted least squares includes a weight matrix $W$, where each sample is associated with a positive weight.
Places more emphasis on more reliable samples.
A good choice of weight is $W[n]=\frac{1}{σ_{n}^{2}}$, the inverse of the noise variance for sample $n$.
Therefore resulting in:
$\hat{\vec{\phi}}=(\Theta^{T}W\Theta)^{-1}\Theta^{T}W\vec{y}$
In MATLAB, where W is the column vector of weights:
theta = lscov(Obs, y, W);
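A NumPy sketch of the weighted estimator (the per-sample noise variances are assumed for illustration; in MATLAB, lscov does the equivalent):

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear model y[n] = A + B*n + w[n], but the noise variance differs per sample
N = 100
n = np.arange(N)
sigma2 = np.where(n < 50, 0.1, 2.0)     # second half is much noisier (assumed)
y = 2.0 + 0.5 * n + rng.normal(0.0, np.sqrt(sigma2))

Theta = np.column_stack([np.ones(N), n])
W = np.diag(1.0 / sigma2)               # weight = inverse noise variance

# phi_hat = (Theta^T W Theta)^-1 Theta^T W y
phi_hat = np.linalg.inv(Theta.T @ W @ Theta) @ Theta.T @ W @ y
```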
Maximum Likelihood Estimation (MLE)
See equation
Found by determining the $\hat{\vec{\phi}}$ that maximises the PDF of the signal $y[n]$, which depends on the statistics of the noise $w[n]$.
Given some type of probability distribution, the MLE can be found.
In MATLAB, use the mle function from the Statistics and Machine Learning Toolbox.
19  Correlation and Power spectral density
Correlation
Correlation gives a measure of timedomain similarity between two signals.
Cross Correlation
$R_{x_{1}x_{2}}[k]\approx\frac{1}{N-k}\sum_{n=0}^{N-k} x_{1}[n]\,x_{2}[n+k]$
$R_{x_{1}x_{2}}[k]\approx\frac{1}{N-k}\left(x_{1}[0]x_{2}[k]+x_{1}[1]x_{2}[k+1]+\cdots+x_{1}[N-k]x_{2}[N]\right)$
 $k$ is the time shift of $x_{2}[n]$ sequence relative to the $x_{1}[n]$ sequence.
 This is an approximation because the signal lengths are finite and the signals could be random.
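The cross-correlation estimate can be sketched directly from the sum; here $x_2$ is $x_1$ delayed by 3 samples, so the estimate should peak at $k=3$ (signal values are assumed):

```python
import numpy as np

def xcorr(x1, x2, k):
    """Estimate of R_{x1 x2}[k] for k >= 0: average of x1[n]*x2[n+k]."""
    N = len(x1)
    s = sum(x1[n] * x2[n + k] for n in range(N - k))
    return s / (N - k)

# x2 is x1 delayed by 3 samples, so correlation should peak at k = 3
x1 = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
x2 = np.roll(x1, 3)
R = [xcorr(x1, x2, k) for k in range(6)]
```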
Example 19.1  Discrete CrossCorrelation
See example
Autocorrelation
Correlation of a signal with itself, ie $x_{2}[n]=x_{1}[n]$ or $x_{2}(t)=x_{1}(t)$
 Gives a measure of whether the current value of the signal says anything about a future value. Especially useful for random signals.
Key Properties
 The autocorrelation at zero delay equals the signal's mean-square value; the autocorrelation is never larger at any nonzero delay.
 Autocorrelation is an even function of $k$ or $τ$, i.e., $R_{x_{1}x_{1}}[k]=R_{x_{1}x_{1}}[-k]$
 The autocorrelation of the sum of two uncorrelated signals is the sum of the autocorrelations of the two individual signals.
If $x_{1}[n]$ and $x_{2}[n]$ are uncorrelated,
$y[n]=x_{1}[n]+x_{2}[n]\Rightarrow R_{yy}[k]=R_{x_{1}x_{1}}[k]+R_{x_{2}x_{2}}[k]$
Example 19.2  Discrete Autocorrelation
See example
Example 19.3  Correlation in MATLAB
See example
20  Image Processing
Types of colour encoding
Binary (0, 1), Indexed (colour map), Greyscale (range 0 to 1), True Colour (RGB)
 Binary has value 0 and 1 to represent black and white
 Indexed each pixel has one value corresponding to predetermined list of colours (colour map)
 Greyscale  each pixel has a value between 0 (black) and 1 (white)  often written as whole numbers and then normalised
 True colour  Three associated values, RGB
But we focus on binary and greyscale for hand calculations.
Notation
See below
$f[i][j]$  follows same indexing conventions as MATLAB
ie: $i$ refers to vertical coordinate (row)
$j$ refers to horizontal coordinate (column)
$(i,j)=(1,1)$ is the top left pixel.
Digital Convolution
$y[n]=\sum_{k=-\infty}^{\infty} x[k]\,h[n-k]=x[n]*h[n]=h[n]*x[n]$
Example 20.1  1D Discrete Convolution
See example
Example 20.2  Visual 1D Discrete Convolution
See example
Image Filtering
Determine the output $y[i][j]$ from the input $x[i][j]$ through the filter (kernel) $h[i][j]$.
Filter (kernel) $h[i][j]$: assume a square matrix with odd numbers of rows and columns so there is an obvious middle element.
 Flip the impulse response $h[i][j]$ to get $h^{*}[i][j]$.
 This is achieved by mirroring all elements around the centre element.
 By symmetry, sometimes $h[i][j]=h^{*}[i][j]$.
 Move the flipped impulse response $h^{*}[i][j]$ along the input image $x[i][j]$.
 Each time the kernel is moved, multiply all elements of $h^{*}[i][j]$ by the corresponding covered pixels in $x[i][j]$.
 Add the products together and store the sum in the output $y[i][j]$  this corresponds to the middle pixel covered by the kernel.
 Only consider overlaps between $h^{*}[i][j]$ and $x[i][j]$ where the middle element of the kernel covers a pixel in the input image.
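The filtering steps above can be sketched as a small zero-padded 2D convolution (the test image and averaging kernel are assumed values):

```python
import numpy as np

def filter2d(x, h):
    """Zero-padded 2D convolution, output same size as x.

    Assumes a square kernel h with an odd side length."""
    r = h.shape[0] // 2
    hf = h[::-1, ::-1]                 # flip the kernel in both directions
    xp = np.pad(x, r)                  # zero padding at the edges
    y = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # multiply the flipped kernel by the covered pixels and sum
            y[i, j] = np.sum(hf * xp[i:i + 2*r + 1, j:j + 2*r + 1])
    return y

x = np.zeros((5, 5))
x[2, 2] = 1.0                          # bright centre pixel (assumed image)
h = np.ones((3, 3)) / 9.0              # 3x3 averaging (low-pass) kernel
y = filter2d(x, h)                     # centre pixel is spread over a 3x3 block
```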
Edge Handling
Zero padding and replicating:
 Zero Padding  Treat all off-image pixels as having value 0, i.e., $x[i][j]=0$ beyond the defined image. Simplest, but may lead to unusual artefacts at the edges of the image. The only option available for conv2, and the default for imfilter.
 Replicating the border  Assume off-image pixels have the same values as the nearest element along the edge of the image, e.g., pixels beyond the outside corner take the value of the corner pixel: $x[0][1]=x[0][0]=x[1][0]=x[1][1]$.
Kernels
Different types of kernels.
Larger kernels have increased sensitivity but more expensive to compute.

Values add to 0 = Removes signal strength to accentuate certain details

Values add to 1 = maintain signal strength by redistributing

Low Pass Filter  Equivalent to taking a weighted average around the neighbourhood of a pixel. Elements add to 1.

Blurring filter  Similar to low pass, but the elements add up to more than 1, so it washes out the image more.

High Pass Filter  Accentuates transitions between colours; can be used as simple edge detection (an important task, the first step to detecting objects).

Sobel operator  More effective at detecting edges than a high pass filter, but needs different kernels for different directions: the X-gradient detects vertical edges, the Y-gradient detects horizontal edges.
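A sketch of the Sobel kernels on a hypothetical image with a vertical edge; only the X-gradient responds:

```python
import numpy as np

# Standard Sobel kernels (elements sum to 0: they remove signal strength)
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])   # X-gradient: vertical edges
sobel_y = sobel_x.T                      # Y-gradient: horizontal edges

# Assumed image with a vertical edge: left half dark, right half bright
img = np.zeros((5, 6))
img[:, 3:] = 1.0

# Convolution response at interior pixel (2, 3): flip the kernel,
# overlay it on the 3x3 neighbourhood, multiply, and sum
patch = img[1:4, 2:5]
resp_x = np.sum(sobel_x[::-1, ::-1] * patch)   # strong response at the edge
resp_y = np.sum(sobel_y[::-1, ::-1] * patch)   # no response: edge is vertical
```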
Example 20.3  Image Filtering
See example