Learning outcomes of the course unit
Knowledge and understanding
The main goal of this course is to provide students with the ability to
understand the foundations of modern digital communication systems and, in particular,
- digital communication systems operating over channels with memory;
- techniques for complexity reduction;
- advanced techniques for channel coding and decoding;
- performance analysis techniques for these systems.
Applying knowledge and understanding
Students will be able to apply the acquired knowledge and understanding to:
- the design and performance analysis of modern digital communication systems;
- the selection and design of suitable coding schemes for a given communication channel.
Detection and estimation theory
Course contents summary
Transmission systems with memory--General model of modulated signals. Sequence detection. Error probability evaluation for receivers based on sequence detection. Continuous phase modulations. Trellis-coded modulations. Reduced-state sequence detection. Linear and decision-feedback equalization.
Advanced topics--Sequence detection in the presence of unknown parameters. Per-survivor processing. Turbo codes and iterative decoding. Factor graphs and the sum-product algorithm. Low-density parity-check codes. Bit-interleaved coded modulation. Space-time codes.
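Sequence detection via the Viterbi algorithm is the common thread of the first part of the course. As an illustration only (not the course's own notation), the following minimal Python sketch performs maximum-likelihood sequence detection of binary antipodal symbols over a hypothetical known 2-tap ISI channel in the Forney observation model: the trellis state is the previous symbol and the branch metric is the squared Euclidean distance.

```python
def viterbi_isi(r, h, symbols=(-1.0, +1.0)):
    """ML sequence detection over a known 2-tap ISI channel (Forney model).
    r: noisy observations r_k = h0*a_k + h1*a_{k-1} + n_k
    h: channel taps (h0, h1); state = previous symbol a_{k-1}."""
    h0, h1 = h
    # Path metric and survivor path for each trellis state.
    metrics = {s: 0.0 for s in symbols}
    paths = {s: [] for s in symbols}
    for rk in r:
        new_metrics, new_paths = {}, {}
        for a in symbols:            # current symbol = next state
            best = None
            for prev in symbols:     # previous symbol = current state
                m = metrics[prev] + (rk - h0 * a - h1 * prev) ** 2
                if best is None or m < best[0]:
                    best = (m, prev)
            new_metrics[a] = best[0]       # add-compare-select step
            new_paths[a] = paths[best[1]] + [a]
        metrics, paths = new_metrics, new_paths
    # Release the survivor of the state with the smallest path metric.
    return paths[min(metrics, key=metrics.get)]
```

With noiseless observations the transmitted sequence is recovered exactly; longer channels are handled by enlarging the state to the last L symbols, which is precisely where the reduced-state techniques covered later come into play.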
Each class lasts 2 hours.
CLASS 1: Introduction. Transmission systems with memory: Model for signals with finite memory.
CLASS 2: Coded linear modulations with long shaping pulses interpreted as signals with finite memory. Examples.
CLASS 3: Exercises (chapter 1).
CLASS 4: MAP sequence and symbol detection strategies: introduction.
CLASS 5: MAP sequence detection strategy for the general case of signals with finite memory. Viterbi algorithm.
CLASS 6: Implementation aspects of the Viterbi algorithm. MAP sequence detection strategy for coded linear modulations: Ungerboeck observation model.
CLASS 7: Ungerboeck observation model: expression of the branch metrics, special cases, condition for the absence of ISI.
CLASS 8: MAP sequence detection strategy for coded linear modulations: Forney observation model.
CLASS 9: MAP sequence detection strategy for coded linear modulations: introduction to the performance analysis. Upper bounds for the symbol error probability of a MAP sequence detector.
CLASS 10: Pairwise error probability and its computation in the case of the AWGN channel. Distance spectrum. Computation of the distance for the case of coded linear modulations and the Ungerboeck model.
CLASS 11: Computation of the distance for the case of coded linear modulations and the Forney model. Exercises (chapter 2).
CLASS 12: Exercises (chapter 2).
CLASS 13: Introduction to the problem of sequence detection in the presence of unknown parameters. Sequence detection in the presence of unknown parameters: channel models, sufficient statistics, optimal strategy in the presence of an unknown stochastic parameter.
CLASS 14: Sequence detection in the presence of unknown parameters: memory truncation. Examples: noncoherent sequence detection and receivers based on linear prediction. UMP test. Generalized likelihood.
CLASS 15: Approach based on estimation. Estimators' classifications. CRB and MCRB. DA estimation. Example: Rife & Boorstyn algorithm and related MCRB.
CLASS 16: First-order PLL and its analysis.
CLASS 17: Decision-directed PLL, S-curve of a DD PLL. Per-survivor processing and tentative decisions. NDA estimation.
CLASS 18: Soft-decision-directed estimation. Sufficient statistics for channels with unknown parameters.
CLASS 19: Exercises (chapter 3).
CLASS 20: Continuous phase modulations.
CLASS 21: Trellis-coded modulations.
CLASS 22: Trellis-coded modulations: example. Reduced-state sequence detection.
CLASS 23: Adaptive equalization.
CLASS 24: Channel identification. Exercises (chapter 4).
CLASS 25: MAP symbol detection: the BCJR algorithm. Implementation of the BCJR algorithm in the logarithmic domain. Soft-output Viterbi decoder.
CLASS 26: The computation of the information rate. Mismatched detection. Pragmatic capacity.
CLASS 27: Turbo codes and iterative decoding.
CLASS 28: EXIT charts.
CLASS 29: Exercises (chapters 5 and 6).
CLASS 30: Factor graphs (FGs) and the Sum-Product Algorithm (SPA). FG transformations.
CLASS 31: Low density parity check codes and their decoding. BCJR and Viterbi algorithms derived through FG/SPA.
CLASS 32: Application of the FG/SPA framework to detection and estimation problems.
CLASS 33: Exercises (chapter 7).
CLASS 34: Codes for fading channels. Bit-interleaved coded modulation.
CLASS 35: Space-time (ST) codeword design criteria for fading channels. ST block codes. The Alamouti scheme.
CLASS 36: ST trellis codes. Layered ST codes.
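The Alamouti scheme covered in CLASS 35 admits a very compact description. As an illustrative sketch only (hypothetical function and variable names, noiseless single-receive-antenna case), the code below shows the rate-1 encoding of two symbols over two antennas and two time slots, and the linear combining that decouples the two symbols into scalar channels with gain |h1|^2 + |h2|^2.

```python
def alamouti_encode(s1, s2):
    """Alamouti space-time block code: two complex symbols transmitted
    from two antennas over two time slots (rate 1).
    Slot 1 sends (s1, s2); slot 2 sends (-s2*, s1*)."""
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at a single receive antenna with known channel
    gains h1, h2 (assumed constant over the two slots).
    Returns decision statistics scaled by |h1|^2 + |h2|^2."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    return s1_hat, s2_hat
```

The key design point, developed in class, is that the two transmitted columns are orthogonal, so simple linear processing achieves full (second-order) transmit diversity with symbol-by-symbol detection.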
G. Colavolpe, R. Raheli, Lezioni di Trasmissione numerica, Monte Università Parma editore, 2004.
S. Benedetto, E. Biglieri, Principles of Digital Communications with Wireless Applications, Kluwer, 1999.
J. G. Proakis, Digital Communications, McGraw-Hill, 4th ed., 2001.
G. Vitetta, D. P. Taylor, G. Colavolpe, F. Pancaldi, and P. A. Martin, Wireless Communications: Algorithmic Techniques, John Wiley & Sons, August 2013. ISBN: 0-470-51239-3.
G. Ferrari, G. Colavolpe, and R. Raheli, Detection Algorithms for Wireless Communications, John Wiley & Sons, August 2004. ISBN: 0-470-85828-1.
Lectures and exercise sessions, in a ratio of approximately 80% to 20%. In the exercise sessions, the teacher solves on the blackboard exercises assigned to the students one week in advance. In this way, students can first attempt the exercises at home, benefit much more from the interaction with the lecturer, and discuss their own solutions.
Assessment methods and criteria
Written and oral exams. Passing the written exam is required for admission to the oral exam. The final mark is the arithmetic mean of the two marks. The written exam concerns the design and analysis of a digital communication system; the oral exam covers the theoretical aspects. Intermediate written exams may be arranged upon students' request.