Engineering Tripos Part IIB, 4F3: An Optimisation Based Approach to Control, 2023-24

Module Leader and lecturer

Prof G Vinnicombe

Lecturers

Prof G Vinnicombe, Dr I Lestas

Timing and Structure

Lent term. 14 lectures + 2 examples classes. Assessment: 100% exam

Prerequisites

3F1 and 3F2 are useful.

Aims

The aims of the course are to:

  • introduce methods for feedback system design based on the optimisation of an objective, including reinforcement learning and predictive control.
  • demonstrate how such control laws can be computed and implemented in practice.

Objectives

As specific objectives, by the end of the course students should be able to:

  • understand the derivation and application of optimal control methods.
  • appreciate the main ideas, applications and techniques of predictive control and reinforcement learning.

Content

Optimal Control (7L + 1 examples class, Prof F Forni)

  • Formulation of convex optimisation problems
  • Status of theoretical results and algorithms
  • Formulation of optimal control problems. Typical applications
  • Optimal control with full information (dynamic programming)
  • Control of linear systems with a quadratic objective function (see the sketch after this list)
  • Output feedback: ‘LQG’ control
  • Control design with an “H-infinity” criterion
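
A minimal sketch of the dynamic-programming (backward Riccati) recursion for the finite-horizon LQ problem, assuming a discrete-time model x_{k+1} = A x_k + B u_k; the matrices, weights and horizon below are illustrative placeholders rather than course data (Python):

    # Illustrative sketch: finite-horizon LQR gains via the backward Riccati
    # recursion (dynamic programming). A, B, Q, R, Qf, N are placeholder values.
    import numpy as np

    def finite_horizon_lqr(A, B, Q, R, Qf, N):
        """Gains K_0..K_{N-1} minimising sum(x'Qx + u'Ru) + x_N'Qf x_N
        for x_{k+1} = A x_k + B u_k, with the control law u_k = -K_k x_k."""
        P = Qf
        gains = []
        for _ in range(N):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain at this stage
            P = Q + A.T @ P @ A - A.T @ P @ B @ K               # cost-to-go (value function) update
            gains.append(K)
        return gains[::-1]                                       # reorder as k = 0, ..., N-1

    # Example: a double integrator with unit sample time
    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    B = np.array([[0.5], [1.0]])
    K = finite_horizon_lqr(A, B, Q=np.eye(2), R=np.array([[1.0]]),
                           Qf=np.eye(2), N=20)
    print(K[0])   # gain applied at the first step

Re-solving the same finite-horizon problem at every sample and applying only the first gain is the link to the receding-horizon idea developed in the second half of the course.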

Predictive Control and an Introduction to Reinforcement Learning (7L + 1 examples class, Prof G Vinnicombe)

  • What is predictive control? Importance of constraints. Flexibility of specifications. Typical applications
  • Basic formulation of the predictive control problem without constraints and the receding horizon concept. Comparison with the unconstrained Linear Quadratic Regulator
  • Including constraints in the problem formulation. Constrained convex optimisation (see the sketch after this list)
  • Terminal conditions for stability 
  • Emerging applications: advantages and challenges
  • Policy and generalized policy iteration; rollout algorithms and predictive control
  • Approximate dynamic programming
  • Deep neural nets as universal approximators for value and policy
  • Simulation-based vs state-space models: Q-learning
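
A minimal sketch of one receding-horizon (predictive control) step with an input constraint, posed as a convex quadratic programme. The cvxpy modelling package, the model matrices, the horizon and the input bound are assumptions made purely for illustration, not prescribed by the course (Python):

    # Illustrative sketch: receding-horizon control of x_{k+1} = A x_k + B u_k
    # subject to |u| <= u_max, solved as a convex (quadratic) programme.
    import numpy as np
    import cvxpy as cp

    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    B = np.array([[0.5], [1.0]])
    Q, R, N, u_max = np.eye(2), np.array([[1.0]]), 10, 0.5

    def mpc_step(x0):
        """Solve the N-step constrained problem from state x0; return the first input."""
        x = cp.Variable((2, N + 1))
        u = cp.Variable((1, N))
        cost, constraints = 0, [x[:, 0] == x0]
        for k in range(N):
            cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
            constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                            cp.abs(u[:, k]) <= u_max]
        cost += cp.quad_form(x[:, N], Q)   # crude terminal cost; stability needs a proper terminal condition
        cp.Problem(cp.Minimize(cost), constraints).solve()
        return u[:, 0].value

    # Closed loop: re-solve at every sample and apply only the first input
    x = np.array([2.0, 0.0])
    for _ in range(5):
        u0 = mpc_step(x)
        x = A @ x + B @ u0

Applying only the first input of the optimised sequence before re-solving at the next sample is the receding horizon concept listed above; the terminal conditions needed for stability are covered in the lectures.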

 

Booklists

Please refer to the Booklist for Part IIB Courses for references to this module; this can be found on the associated Moodle course.

Examination Guidelines

Please refer to Form & conduct of the examinations.

 
Last modified: 15/09/2023 14:24