Diffstat (limited to 'DigSig1.tex')
-rw-r--r-- | DigSig1.tex | 96 |
1 files changed, 92 insertions, 4 deletions
diff --git a/DigSig1.tex b/DigSig1.tex
index 9c6cbad..2ca6207 100644
--- a/DigSig1.tex
+++ b/DigSig1.tex
@@ -1,15 +1,20 @@
 % !TeX program = xelatex
 % !TeX encoding = utf8
 % !TeX root = DigSig1.tex
+% vim: set ts=2 sw=2 et:
 
-%% TODO: publish to CTAN
-\documentclass[]{tex/hsrzf}
+\documentclass[margin=small]{tex/hsrzf}
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 % Packages
 
-%% TODO: publish to CTAN
 \usepackage{tex/hsrstud}
+\usepackage{tex/docmacros}
+
+%% Font configuration
+\usepackage{fontspec}
+% \usepackage{gfsbaskerville}
+\setmainfont[Ligatures = TeX]{TeX Gyre Pagella}
 
 %% Language configuration
 \usepackage{polyglossia}
@@ -44,13 +49,96 @@
 \maketitle
 \tableofcontents
 
-\section{License}
+\section*{License}
 \doclicenseThis
 
 \twocolumn
 \setcounter{page}{1}
 \pagenumbering{arabic}
+
+\section{Probability and stochastics}
+
+\subsection{Random variables}
+
+A \emph{random variable} (RV) is a function \(x : \Omega \to \mathbb{R}\).
+The \emph{distribution function} of a RV is a monotonically non-decreasing function \(F_x : \mathbb{R} \to [0,1]\) given by
+\[
+  F_x(\alpha) = \Pr{x \leq \alpha}.
+\]
+The probability density function (PDF) is
+\[
+  f_x(\alpha) = \frac{dF_x}{d\alpha}.
+\]
+The \emph{expectation} of a RV is
+\[
+  \E{x} = \int_\mathbb{R} \alpha f_x(\alpha) \,d\alpha,
+\]
+and in the case of a discrete RV
+\[
+  \E{x} = \sum_k \alpha_k \Pr{x = \alpha_k}.
+\]
+In general it holds that
+\[
+  \E{g(x)} = \int_\mathbb{R} g(\alpha) f_x(\alpha) \,d\alpha,
+\]
+for example
+\begin{align*}
+  \E{x^2} &= \int_\mathbb{R} \alpha^2 f_x(\alpha) \,d\alpha \\
+  \E{|x|} &= \int_\mathbb{R} |\alpha| f_x(\alpha) \,d\alpha \\
+  &= \int_0^\infty \alpha \left[ f_x(\alpha) + f_x(-\alpha) \right] \,d\alpha
+\end{align*}
+The \emph{variance} of a RV is
+\[
+  \sigma^2 = \Var{x} = \E{(x - \E{x})^2} = \E{x^2} - \E{x}^2,
+\]
+where \(\sigma\) is called the \emph{standard deviation}.
+The variance is the \emph{second central moment} of a RV; the \emph{\(n\)-th moment} of a RV is \(\E{x^n}\).
+
+\subsection{Jointly distributed RVs}
+
+\section{Analog signals}
+
+\paragraph{Notation} \(\Omega = 2\pi f\) is used for physical analog frequencies (in radians / second), whereas \(\omega\) is for digital frequencies (in radians / sample).
+
+\paragraph{Transformations} Recall the three important operations for the analysis of analog signals.
+\begin{flalign*}
+  \textit{Fourier Transform} &&
+  X(\Omega) &= \int_\mathbb{R} x(t) e^{-j\Omega t} \,dt \\
+  %
+  \textit{Inverse Fourier Transform} &&
+  x(t) &= \int_\mathbb{R} X(\Omega) e^{j\Omega t} \,\frac{d\Omega}{2\pi} \\
+  %
+  \textit{Laplace Transform} &&
+  X(s) &= \int_\mathbb{R} x(t) e^{-st} \,dt
+\end{flalign*}
+The Laplace transform reduces to the Fourier transform under the substitution \(s = j\Omega\).
+
+\paragraph{Linear Systems}
+Recall that superposition holds.
+Thus a linear time-invariant (LTI) system is characterized completely by its impulse response function \(h(t)\).
+The output in the time domain \(y(t)\) is given by the convolution product
+\[
+  y(t) = h(t) * x(t) = \int_\mathbb{R} h(t - t') x(t') \,dt',
+\]
+and in the frequency domain \(Y(\Omega) = H(\Omega) X(\Omega)\), where \(H(\Omega)\) is the Fourier transform of \(h(t)\).
+
+% Analog signals:
+% TODO: FT of eigenfunctions e^{j\Omega_k t}
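+
+%% Illustrative sketch (not from the lecture; \Omega_0 denotes an arbitrary frequency):
+%% complex exponentials are eigenfunctions of LTI systems.
+For the input \(x(t) = e^{j\Omega_0 t}\), substituting \(\tau = t - t'\) in the convolution gives
+\[
+  y(t) = \int_\mathbb{R} h(\tau) e^{j\Omega_0 (t - \tau)} \,d\tau
+       = e^{j\Omega_0 t} \int_\mathbb{R} h(\tau) e^{-j\Omega_0 \tau} \,d\tau
+       = H(\Omega_0) e^{j\Omega_0 t},
+\]
+so the exponential is only scaled by the eigenvalue \(H(\Omega_0)\).
+The Fourier transform of \(e^{j\Omega_0 t}\) itself is the spectral line \(2\pi \delta(\Omega - \Omega_0)\).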
+
 \section{Sampling and reconstruction}
+Sampling theorem: a bandlimited signal can be reconstructed from its samples only if \(f_s \geq 2 f_\text{max}\); the minimum rate \(2 f_\text{max}\) is called the \emph{Nyquist rate}. In other words you need at least 2 samples per cycle of the highest frequency to reconstruct a signal.
+%% TODO: ideal sampler
+
+The Nyquist interval is bounded by the Nyquist frequencies, i.e. \(\left[-f_s / 2, f_s / 2\right]\).
+
+Alias frequency: \(f_a = f \bmod f_s\).
+
+Anti-aliasing: analog LP prefilter with cutoff at \(f_s / 2\).
+
+Processing: the upper limit on the sampling frequency is given by the processing time \(T_\text{proc}\). Thus \(2 f_\text{max} \leq f_s \leq f_\text{proc} = 1 / T_\text{proc}\).
+
+
 
 \end{document}