Undergraduate Teaching 2025-26


Engineering Tripos Part IIA Project, GG4: Neural Control with Adaptive State Estimation, 2025-26

Leader

Dr Flavia Mancini

Timing and Structure

Students work to their own schedule. A staffed "surgery" runs according to the lab timetable.

Prerequisites

Useful: 3F3 (Inference), 3F1 (Statistical Signal Processing), 3F4 (Systems and Control); Python (NumPy, Matplotlib, Jupyter)

Aims

The aims of the course are to:

  • Introduce students to simulation and control of partially observed dynamical systems.
  • Give practical experience with classical methods for state estimation.
  • Explore optimal feedback control in a closed-loop system.
  • Develop collaborative coding, analysis, and presentation skills.
  • Foster understanding of robustness in estimation and control under noise and model mismatch.

Objectives

As specific objectives, by the end of the course students should be able to:

  • Understand and apply state-space models to simulate dynamic systems.
  • Implement and tune state estimators to decode noisy observations.
  • Design and use controllers for optimal state feedback control.
  • Integrate estimation and control in a closed-loop system.
  • Conduct experiments to assess tracking accuracy, control effort, and robustness.
  • Collaborate effectively to develop shared code and produce a joint presentation.
  • Present technical results clearly using plots, metrics, and structured reports.

Content

This lab explores how brain-machine interface (BMI)-like systems can decode noisy neural activity to control movement. In this design project, small groups will simulate and control a simplified neural interface system. A 2D cursor moves in a plane based on a latent trajectory, observed indirectly through noisy neural-like signals. Students will estimate the cursor's hidden state and control its movement toward a dynamic target. Over four weeks, they will explore estimation accuracy, control performance, and system robustness to disturbances and model mismatch. The project blends inference, control, signal processing, and neural data simulation in a realistic, design-oriented lab. 
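The provided primer covers the estimation background; as a self-contained illustration of the kind of system involved (the dimensions, matrices, and noise levels below are hypothetical choices, not the lab's actual model), a linear-Gaussian state-space model of a cursor observed through noisy neural-like readouts can be simulated and decoded with a Kalman filter:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01

# Cursor state x = [px, py, vx, vy]: constant-velocity dynamics + process noise.
A = np.eye(4)
A[0, 2] = A[1, 3] = dt
Q = 1e-4 * np.eye(4)

# 20 "neurons", each reading out a random linear combination of the state.
C = rng.normal(size=(20, 4))
R = 0.5 * np.eye(20)

def simulate(T=500):
    """Roll out the latent state and its noisy neural-like observations."""
    x, xs, ys = np.zeros(4), [], []
    for _ in range(T):
        x = A @ x + rng.multivariate_normal(np.zeros(4), Q)
        ys.append(C @ x + rng.multivariate_normal(np.zeros(20), R))
        xs.append(x)
    return np.array(xs), np.array(ys)

def kalman_filter(ys):
    """Classical predict/update recursion for the model above."""
    x_hat, P, est = np.zeros(4), np.eye(4), []
    for y in ys:
        x_hat = A @ x_hat                      # predict
        P = A @ P @ A.T + Q
        S = C @ P @ C.T + R                    # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)         # Kalman gain
        x_hat = x_hat + K @ (y - C @ x_hat)    # update with observation y
        P = (np.eye(4) - K @ C) @ P
        est.append(x_hat)
    return np.array(est)

xs, ys = simulate()
x_est = kalman_filter(ys)
```

Tracking accuracy can then be quantified, for instance by comparing the filter's mean estimation error against a per-sample least-squares decode that ignores the dynamics.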

 

Week 1–2 (Group) 

Introduction to classical filtering and control methods (primer provided). 

Groups set up simulation environment and run example trajectories. 

Implement group simulation code with documentation. 

Deliverable: Group simulation code + brief documentation (group mark). 

 

Week 3 (Individual) 

Implement control loops. 

Test closed-loop performance and robustness. 

Continue experiments for final analysis. 
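One standard choice for the control loop is the linear-quadratic regulator (LQR). The sketch below (the system and cost matrices are illustrative assumptions, not the lab's) computes the steady-state gain by iterating the discrete-time Riccati recursion, then closes the loop on a double-integrator cursor:

```python
import numpy as np

dt = 0.01
# Double-integrator cursor: state [px, py, vx, vy], control = acceleration.
A = np.eye(4)
A[0, 2] = A[1, 3] = dt
B = np.zeros((4, 2))
B[2, 0] = B[3, 1] = dt
Qc = np.diag([1.0, 1.0, 0.1, 0.1])  # penalise position error most
Rc = 0.01 * np.eye(2)               # penalise control effort

def lqr_gain(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati recursion until the gain settles."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = lqr_gain(A, B, Qc, Rc)

# Closed loop: u = -K x drives the cursor from (1, -1) toward the origin.
x = np.array([1.0, -1.0, 0.0, 0.0])
for _ in range(2000):
    x = A @ x + B @ (-K @ x)
```

In the project the controller would act on the estimated state rather than the true one, which is where closed-loop robustness to estimation error becomes visible.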

 

Week 4 (Group & Individual) 

Group presentation: approach, results, lessons learned (group mark). 

Individual final report due end of Week 4: methods, results, discussion (individual mark). 

 
 

Coursework

  • Group Simulation Code & Documentation (Week 2): 10 marks  
  • Group Presentation (Week 4): 10 marks  
  • Individual Interim Report (end of Week 2): 20 marks
  • Individual Final Report (Week 4): 40 marks  

 

Examination Guidelines

Please refer to Form & conduct of the examinations.

 
Last modified: 08/01/2026 11:35

Engineering Tripos Part IIA Project, GF5: Animating 3D Characters, 2025-26

Leader

Dr E Wu

Timing and Structure

Fridays 9-11am plus afternoons, and Tuesdays 11-1pm

Aims

The aims of the course are to:

  • Introduce students to the core components of 3D character animation, including rigging, skinning, animation, and rendering
  • Provide hands-on experience with modern 3D graphics and animation tools
  • Give students practical exposure to building, animating, and rendering a 3D character model
  • Enable students, as part of the project, to capture an animatable 3D model of themselves and create a short animation

Objectives

As specific objectives, by the end of the course students should be able to:

  • Understand the concepts of skeleton-based rigging and skinning
  • Construct a simple rig for a 3D character and bind mesh geometry to the skeleton
  • Understand simple animation techniques such as keyframe interpolation
  • Capture a 3D human model and integrate it into an animation pipeline
  • Produce a short animated 3D scene with animated 3D characters

Content

Week 1

  • Introduction to 3D visualization and animation tools (using Python-based packages)
  • Overview of 3D meshes, skeletons, joints, skinning weights, and kinematic chains
  • Basic rig construction and skinning weights assignment on a simple 3D character
  • Implement forward kinematic transformations and pose the 3D character using Linear Blend Skinning (LBS)
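The forward-kinematics and LBS steps above can be sketched in a few lines of NumPy. The two-joint chain, vertex, and weights below are hypothetical illustrations, not the lab's rig:

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def trans(v):
    """4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = v
    return T

# Hypothetical 2-joint chain: joint 0 at the origin, joint 1 one unit along x.
offsets = [np.zeros(3), np.array([1.0, 0.0, 0.0])]

def forward_kinematics(angles):
    """Accumulate local transforms down the chain into world-space transforms."""
    world, T = [], np.eye(4)
    for off, ang in zip(offsets, angles):
        T = T @ trans(off) @ rot_z(ang)
        world.append(T.copy())
    return world

def lbs(vertices, weights, angles):
    """Linear Blend Skinning: each vertex moves by a weighted blend of
    per-joint skinning matrices (posed world transform times the inverse
    of the rest-pose world transform)."""
    rest = forward_kinematics([0.0] * len(offsets))
    posed = forward_kinematics(angles)
    skin = [P @ np.linalg.inv(Bm) for P, Bm in zip(posed, rest)]
    out = np.zeros_like(vertices)
    for i, v in enumerate(vertices):
        vh = np.append(v, 1.0)  # homogeneous coordinates
        out[i] = sum(w * (M @ vh) for w, M in zip(weights[i], skin))[:3]
    return out

# A vertex halfway along the second bone, bound equally to both joints,
# posed with the second joint bent 90 degrees.
verts = np.array([[1.5, 0.0, 0.0]])
wts = np.array([[0.5, 0.5]])
posed = lbs(verts, wts, [0.0, np.pi / 2])
```

The equal-weight blend places the vertex between the two rigid results, which is exactly the averaging behind LBS's well-known volume-loss artefacts at bent joints.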

Week 2

  • Create a simple animation sequence using keyframe interpolation
  • Render the animation into a 2D video
  • Individual interim report
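At its simplest, keyframe interpolation is piecewise-linear sampling between key poses. A minimal sketch (the keyframe values are made up for illustration; production pipelines typically use splines, and quaternion slerp for rotations):

```python
import numpy as np

# Hypothetical keyframes: times (s) and a joint angle (rad) at each key.
key_times = np.array([0.0, 1.0, 2.0])
key_angles = np.array([0.0, np.pi / 2, 0.0])

def sample(t):
    """Piecewise-linear interpolation between keyframes (clamped at the ends)."""
    return np.interp(t, key_times, key_angles)

# Sample 60 frames at 30 fps over the 2-second clip.
frames = sample(np.linspace(0.0, 2.0, 60, endpoint=False))
```

Each sampled angle would then drive the rig's forward kinematics, and the resulting frames are rendered out as a 2D video.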

Week 3

  • Load and animate a skinned 3D human model (SMPL)
  • Explore human motion sequences using the human model
  • Work in groups to capture 3D models of your team members
  • Drive your character models using existing motion sequences and produce animated motion clips

Week 4

  • Refine character animations and integrate them into a coherent 3D scene
  • Produce a 30-second long animation video featuring the virtual characters
  • Final group report

Coursework

Coursework                 Due date                   Marks
Interim report             Friday 29 May 2026 (4pm)   20 (individual)
Interim animation results  Friday 29 May 2026 (4pm)   5 (individual)
Final report               Friday 12 June 2026 (4pm)  40 (50% individual, 50% group)
Final animation results    Friday 12 June 2026 (4pm)  15 (group)

 

Examination Guidelines

Please refer to Form & conduct of the examinations.

 
Last modified: 30/11/2025 18:50

Engineering Tripos Part IIA Project, GF4: Structure from Motion, 2025-26

Leader

Dr A Tewari

Timing and Structure

Thursdays 9-11am plus afternoons, and Mondays 11-1pm

Objectives

As specific objectives, by the end of the course students should be able to:

  • Understand the principles of Structure from Motion, one of the most important computer vision algorithms, through hands-on implementation.
  • Develop intuition for key steps such as feature matching, triangulation, and camera pose estimation.
  • Explore dense 3D reconstruction and visualisation using open-source tools.
  • Gain insight into the challenges and applications of 3D reconstruction from images.
  • See how geometry, optimisation, vision, and graphics combine to form a working 3D pipeline.

Content

The aim of this project is to follow the full pipeline of 3D reconstruction from images using the technique of Structure from Motion (SfM). Students will begin with a sequence of photographs or video frames of a real object or scene and proceed all the way through to a textured 3D model. Along the way, they will learn about multi-view geometry, feature extraction and matching, camera calibration, bundle adjustment, dense reconstruction, and 3D visualisation. The project links concepts in computer vision, geometry, and graphics with hands-on experimentation and investigation.

The first half of the project introduces students to the mathematical and algorithmic foundations of SfM by building a simplified SfM pipeline. They will begin with a set of 2D images, extract visual features, estimate relative camera poses, and triangulate 3D points to obtain a sparse point cloud. I will provide modular Python functions for many components (e.g. feature detection, essential matrix estimation) to allow students to focus on understanding and experimentation, not just software implementation.
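The geometric core of this pipeline can be illustrated end-to-end on synthetic data. The sketch below is a toy under deliberate simplifications (noise-free correspondences in normalized coordinates, the linear eight-point algorithm rather than the robust estimators real pipelines wrap in RANSAC): it estimates the essential matrix, decomposes it into the four candidate poses, and uses cheirality (points must lie in front of both cameras) to pick the right one.

```python
import numpy as np

rng = np.random.default_rng(1)

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Ground-truth scene and relative pose (camera 1 at the origin).
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))
R_true, t_true = rot_y(0.1), np.array([-1.0, 0.0, 0.0])

def normalized_coords(X, R, t):
    """Project to homogeneous normalized image coordinates (K = I)."""
    Xc = X @ R.T + t
    return Xc / Xc[:, 2:]

x1 = normalized_coords(X, np.eye(3), np.zeros(3))
x2 = normalized_coords(X, R_true, t_true)

def eight_point(x1, x2):
    """Linear estimate of the essential matrix from x2^T E x1 = 0."""
    A = np.array([np.kron(b, a) for a, b in zip(x1, x2)])
    E = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt    # enforce essential structure

def triangulate(P1, P2, u1, u2):
    """DLT triangulation of one correspondence."""
    A = np.array([u1[0] * P1[2] - P1[0], u1[1] * P1[2] - P1[1],
                  u2[0] * P2[2] - P2[0], u2[1] * P2[2] - P2[1]])
    Xh = np.linalg.svd(A)[2][-1]
    return Xh[:3] / Xh[3]

def recover_pose(E, x1, x2):
    """Pick the (R, t) candidate that puts points in front of both cameras."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    best, best_count = None, -1
    for R in (U @ W @ Vt, U @ W.T @ Vt):
        for t in (U[:, 2], -U[:, 2]):
            P2 = np.hstack([R, t[:, None]])
            pts = np.array([triangulate(P1, P2, a, b) for a, b in zip(x1, x2)])
            in_front = (pts[:, 2] > 0) & ((pts @ R.T + t)[:, 2] > 0)
            if in_front.sum() > best_count:
                best, best_count = (R, t), in_front.sum()
    return best

E = eight_point(x1, x2)
R_est, t_est = recover_pose(E, x1, x2)
```

The recovered translation is only defined up to scale (here it comes out unit-norm), which is the scale ambiguity mentioned in Week 2.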

The second half of the project turns to a real-world dataset. Students will receive a sequence of photographs taken around a complex object or small scene. Each group will choose a task involving dense reconstruction and rendering: for example, reconstructing a building façade, an archaeological artefact, or a mechanical part. Students will explore open-source tools like COLMAP to achieve a complete reconstruction. They will identify the cases where SfM works well, and where it does not.

The project culminates in a short presentation and a report, showcasing the pipeline, reconstruction quality, and any creative solutions to problems encountered along the way.

Week 1:

  • Introduction to epipolar geometry, camera models, and SfM pipeline.
  • Experiments with feature detection (SIFT, ORB), matching, and fundamental matrix estimation.
  • Pose estimation and triangulation to obtain sparse reconstructions.

Week 2:

  • Bundle adjustment and error analysis.
  • Extensions to include camera calibration, RANSAC, and scale ambiguity resolution.
  • Submit minimal pipeline and preliminary results (interim report).
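Bundle adjustment minimises total reprojection error with Gauss-Newton (or Levenberg-Marquardt) steps. As a toy illustration of the same machinery under simplifying assumptions, the sketch below refines 3D points only, with two hypothetical cameras held fixed and noise-free observations; full bundle adjustment also optimises the camera parameters and exploits the problem's sparsity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two fixed world-to-camera poses (R, t); normalized coordinates, K = I.
poses = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([-1.0, 0.0, 0.0]))]
X_true = rng.uniform([-1, -1, 4], [1, 1, 8], size=(30, 3))

def project(Xw, R, t):
    Xc = R @ Xw + t
    return Xc[:2] / Xc[2]

obs = [[project(Xw, R, t) for Xw in X_true] for R, t in poses]

def refine_point(X0, measurements, iters=10):
    """Gauss-Newton on the reprojection error of a single 3D point."""
    Xw = X0.copy()
    for _ in range(iters):
        Js, rs = [], []
        for (R, t), z in zip(poses, measurements):
            Xc = R @ Xw + t
            px, py, w = Xc
            rs.append(Xc[:2] / w - z)                    # residual
            dproj = np.array([[1 / w, 0, -px / w**2],
                              [0, 1 / w, -py / w**2]])   # d(proj)/d(Xc)
            Js.append(dproj @ R)                         # chain rule: d(Xc)/d(Xw) = R
        J, r = np.vstack(Js), np.concatenate(rs)
        Xw = Xw - np.linalg.solve(J.T @ J, J.T @ r)      # normal equations
    return Xw

# Start from a perturbed guess, as if from noisy triangulation.
X0 = X_true + 0.05 * rng.normal(size=X_true.shape)
X_opt = np.array([refine_point(x0, [o[i] for o in obs])
                  for i, x0 in enumerate(X0)])
```

Because the observations here are exact, the refined points converge back to the ground truth; with real data the residual at the minimum is the reprojection error one would report in the error analysis.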

Week 3:

  • Receive real dataset. Begin full SfM reconstruction using external tools.
  • Prepare intermediate dense point clouds or meshes.

Week 4:

  • Complete model creation and visualisation.
  • Prepare and deliver final presentation and report.

Coursework

Coursework            Due Date           Marks
Interim Report        End of Week 2      25 (individual)
Minimal SfM Pipeline  Mid-week 2         10 (group)
Final Report          Friday of 4th Week  45 (50% individual, 50% group)

 

Examination Guidelines

Please refer to Form & conduct of the examinations.

 
Last modified: 30/11/2025 20:22
