author    Naoki Pross <np@0hm.ch>  2021-10-19 18:27:21 +0200
committer Naoki Pross <np@0hm.ch>  2021-10-19 18:27:21 +0200
commit    5b1337d5df92a56c4349bddc302ad228f553ed45 (patch)
tree      a37734336521d1c41b39953b219fc3893361cbea
parent    Create jupyter notebook (diff)
parent    Spectrum replication (diff)
Merge branch 'master' of github.com:NaoPross/DigSig1
Diffstat
-rw-r--r--  DigSig1.tex  |  59
1 file changed, 35 insertions(+), 24 deletions(-)
diff --git a/DigSig1.tex b/DigSig1.tex
index 10379ae..6056b6e 100644
--- a/DigSig1.tex
+++ b/DigSig1.tex
@@ -67,8 +67,7 @@
\subsection{Random variables}
-A \emph{random variable} (RV) is a function \(x : \Omega \to \mathbb{R}\).
-The \emph{distribution function} of a RV is a function \(F_x : \mathbb{R} \to [0,1]\) that is always monotonically increasing and given by
+A \emph{random variable} (RV) is a function \(x : \Omega \to \mathbb{R}\). The \emph{distribution function} of a RV is a function \(F_x : \mathbb{R} \to [0,1]\) that is always monotonically increasing and given by
\[
F_x(\alpha) = \Pr{x \leq \alpha}.
\]
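A minimal numerical sketch of this definition, assuming Python with numpy (the Gaussian RV and the sample size are arbitrary choices for illustration): the distribution function can be estimated as the fraction of samples below a threshold.

import numpy as np

# Sketch: estimate F_x(alpha) = Pr(x <= alpha) from samples of an RV.
rng = np.random.default_rng(0)
samples = rng.standard_normal(10_000)   # example RV: standard normal

def F_hat(alpha):
    # empirical distribution function: fraction of samples <= alpha
    return np.mean(samples <= alpha)

print(F_hat(0.0))    # close to 0.5 for a zero-mean symmetric RV
print(F_hat(1.96))   # close to 0.975 for the standard normal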
@@ -98,8 +97,7 @@ The \emph{variance} of a RV is
\[
\Var{x} = \sigma^2 = \E{(x - \E{x})^2} = \E{x^2} - \E{x}^2,
\]
-where \(\sigma\) is called the \emph{standard deviation}.
-The variance is sometimes also called the \emph{second moment} of a RV, the \emph{\(n\)-th moment} of a RV is \(\E{x^n}\).
+where \(\sigma\) is called the \emph{standard deviation}. The variance is also known as the \emph{second central moment} of a RV, whereas the \emph{\(n\)-th moment} of a RV is \(\E{x^n}\).
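A hedged illustration of these definitions, assuming Python with numpy (the distribution parameters are invented): expectation, variance and higher moments estimated from samples.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=3.0, size=100_000)   # example RV with mu = 2, sigma = 3

mean = np.mean(x)                 # estimate of E{x}
var  = np.mean(x**2) - mean**2    # E{x^2} - E{x}^2, i.e. Var{x}

def moment(x, n):
    # estimate of the n-th moment E{x^n}
    return np.mean(x**n)

print(mean, var, moment(x, 2))    # roughly 2, 9 and 13 (= sigma^2 + mu^2)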
\subsection{Jointly distributed RVs}
@@ -122,9 +120,7 @@ Recall the three important operations for the analysis of analog signals.
The Laplace transform reduces to the Fourier transform under the substitution \(s = j\Omega\).
\subsection{Linear Systems}
-Recall that superposition holds.
-Thus the system is characterized completely by the impulse response function \(h(t)\).
-The output in the time domain \(y(t)\) is given by the convolution product
+Recall that superposition holds. Thus the system is characterized completely by the impulse response function \(h(t)\). The output in the time domain \(y(t)\) is given by the convolution product
\[
y(t) = h(t) * x(t) = \int_\mathbb{R} h(t - t') x(t') \,dt',
\]
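A minimal sketch of the convolution product, assuming Python with numpy (the impulse response and input below are invented examples): the integral is approximated by a Riemann sum over a discretized time axis.

import numpy as np

dt = 1e-3
t  = np.arange(0.0, 5.0, dt)

h = np.exp(-t)                     # example impulse response (first-order system)
x = np.where(t < 1.0, 1.0, 0.0)    # example input: rectangular pulse

# y(t) = (h * x)(t), approximated by a Riemann sum of the convolution integral
y = np.convolve(h, x)[:len(t)] * dt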
@@ -135,15 +131,11 @@ and in the frequency domain \(Y(\Omega) = H(\Omega) X(\Omega)\), where \(H(\Omeg
\section{Sampling and reconstruction}
-To sample a signal \(x(t)\) it means to measure (take) the value at a periodic interval every \(T\) seconds. \(T\) is thus called the \emph{sample interval} and \(f_s =1/T\) is the \emph{sampling frequency}. We will introduce the notation
-\[
- x[n] = x(nT)
-\]
-to indicate that a signal is a set of discrete samples.
+Sampling a signal \(x(t)\) means measuring (taking) its value at a periodic interval, every \(T\) seconds. \(T\) is thus called the \emph{sample interval} and \(f_s = 1/T\) is the \emph{sampling frequency}.
\subsection{Sampling theorem}
-To represent a signal \(x(t)\) by its samples \(x[n]\) two conditions must be met:
+To represent a signal \(x(t)\) by its samples \(\hat{x}(nT)\), two conditions must be met:
\begin{enumerate}
\item \(x(t)\) must be \emph{bandlimited}, i.e. there must be a frequency \(f_\text{max}\) after which the spectrum of \(x(t)\) is always zero.
\item The sampling rate \(f_s\) must be chosen so that
@@ -151,28 +143,47 @@ To represent a signal \(x(t)\) by its samples \(x[n]\) two conditions must be me
f_s \geq 2 f_\text{max}.
\]
\end{enumerate}
-In other words you need at least 2 samples / period to reconstruct a signal.
-When \(f_s = 2 f_\text{max}\), the edge case, the sampling rate is called \emph{Nyquist rate}.
-The interval \(\left[-f_s / 2, f_2 / 2\right]\), and its multiples are called \emph{Nyquist intervals}, as they are bounded by the Nyquist frequencies.
-It would be good to have an arbitrarily high sampling frequency but in reality there is upper limit given by processing time \(T_\text{proc}\). Thus \(2f_\text{max} \leq f_s \leq f_\text{proc}\).
+In other words, you need at least 2 samples per period to reconstruct a signal. When \(f_s = 2 f_\text{max}\), the edge case, the sampling rate is called the \emph{Nyquist rate}. The interval \(\left[-f_s / 2, f_s / 2\right]\) and its multiples are called \emph{Nyquist intervals}, as they are bounded by the Nyquist frequencies. It would be good to have an arbitrarily high sampling frequency, but in reality there is an upper limit given by the processing time \(T_\text{proc}\). Thus \(2f_\text{max} \leq f_s \leq f_\text{proc}\).
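A short sketch of the sampling step, assuming Python with numpy (the signal and the rates are invented for illustration).

import numpy as np

f_max = 100.0          # assumed highest frequency in x(t), in Hz
f_s   = 2.5 * f_max    # satisfies f_s >= 2*f_max (the edge case 2*f_max is the Nyquist rate)
T     = 1.0 / f_s      # sample interval

def x(t):
    # example bandlimited signal with spectrum up to f_max
    return np.cos(2 * np.pi * f_max * t)

n   = np.arange(64)
x_n = x(n * T)         # the samples x(nT)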
\subsection{Discrete-Time Fourier Transform}
-Mathematically speaking, to sample a signal is equivalent multiplying a function with the \emph{impulse train distribution}\footnote{Sometimes it is also called the Dirac comb.}
+Mathematically speaking, to sample a signal is equivalent to multiplying the function with the so-called \emph{impulse train distribution} (also known as the Dirac comb)
+\[
+ s(t) = \sum_{n = -\infty}^{\infty} \delta(t - nT),
+\]
+so we write \(\hat{x}(t) = s(t)\cdot x(t)\) to represent a sampled signal. Because of the sifting property of the Dirac delta, the spectrum of the sampled signal \(\hat{x}\) is
\[
- \Comb_T (t) = \sum_{n = -\infty}^{\infty} \delta(t - nT),
+ \hat{X}(f) = \sum_{n = -\infty}^{\infty} x(nT) e^{-2\pi jfTn}.
\]
-so \(x[n] = \Comb_T(t)\, x(t)\).
-Interestingly the impulse train is periodic, and has thus a Fourier series with all coefficients equal to \(1/T\).
-So the Fourier transform of a comb is also a comb, i.e.
+This can be thought of as a numerical approximation of the real spectrum \(X(f)\), which gets better as \(T \to 0\), i.e.
\[
- \Comb_T(t) \leftrightarrow \Comb_{1/T}(\Omega).
+ X(f) = \lim_{T \to 0} T\hat{X}(f).
\]
-
+If we only have a finite number \(L\) of samples to work with, we truncate the sum to those \(L\) samples and obtain what is known as the \emph{Discrete-Time Fourier Transform} (DTFT), i.e.
+\[
+ \hat{X}(f) \approx \hat{X}_L(f) = \sum_{n = 0}^{L -1} x(nT) e^{-2\pi jfTn}.
+\]
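A hedged numerical sketch of the finite-sample DTFT above, assuming Python with numpy (the sampling rate, tone frequency and number of samples are made up).

import numpy as np

f_s = 1000.0
T   = 1.0 / f_s
L   = 256
n   = np.arange(L)
x   = np.cos(2 * np.pi * 50.0 * n * T)        # example: 50 Hz cosine sampled at f_s

# X_L(f) = sum_{n=0}^{L-1} x(nT) exp(-2j pi f T n), evaluated on a frequency grid
f   = np.linspace(-f_s / 2, f_s / 2, 2001)
X_L = np.array([np.sum(x * np.exp(-2j * np.pi * fk * T * n)) for fk in f])

print(abs(f[np.argmax(np.abs(X_L))]))         # magnitude peaks near 50 Hz, as expected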
+
\subsection{Spectrum replication and aliasing}
+
+Notice that the impulse train is periodic and thus has a Fourier series, whose coefficients are all equal to \(1/T\) (\(= f_s\), the sampling rate). So the Fourier transform of a comb is also a comb. Consequently, because the product \(x(t)\cdot s(t)\) in the time domain becomes the convolution \(X(f) * S(f)\) in the frequency domain, where \(S(f)\) is an impulse train of Dirac deltas spaced \(1/T = f_s\) apart, the spectrum gets copied at every multiple of \(f_s\); this is called \emph{spectrum replication}. Mathematically,
+\[
+ \hat{X}(f)
+  = \sum_{n = -\infty}^{\infty} x(nT) e^{-2\pi jfTn}
+ = \frac{1}{T}\sum_{m = -\infty}^\infty X(f - mf_s).
+\]
+In other words, the modulation property of the Fourier transform copies the baseband spectrum to every integer multiple of the sampling frequency. This is why we need \(f_s \geq 2f_\text{max}\): otherwise the copies overlap (\emph{aliasing}). The important result is that
+\[
+ X(f) = T \hat{X}(f), \quad
+  \text{for} \quad -\frac{f_s}{2} \leq f \leq \frac{f_s}{2},
+\]
+and if the sampling theorem is satisfied, the exact original spectrum can be recovered with a low-pass filter.
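A small sketch of the aliasing caused by overlapping replicas, assuming Python with numpy (the frequencies are invented): a tone above f_s/2 produces exactly the same samples as its alias inside the first Nyquist interval.

import numpy as np

f_s = 100.0
T   = 1.0 / f_s
n   = np.arange(64)

f_true  = 70.0              # violates f_s >= 2*f_max
f_alias = f_true - f_s      # -30 Hz; for a cosine this looks like a 30 Hz tone

x1 = np.cos(2 * np.pi * f_true  * n * T)
x2 = np.cos(2 * np.pi * f_alias * n * T)
print(np.allclose(x1, x2))  # True: after sampling the two are indistinguishable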
+
+
% Alias frequency \(f_a = f \pmod{f_s}\).
% Anti-aliasing: analog LP prefilter cutoff \@ \(f_s/2\)
+\section{Quantization}
\end{document}