Undergraduate Teaching 2023-24

Engineering Tripos Part IIB, 4G3: Computational Neuroscience, 2023-24

Module Leader

Prof Máté Lengyel

Lecturers

Prof G Hennequin, Dr Y Ahmadian and Prof M Lengyel

Timing and Structure

Lent term. 16 lectures. Assessment: 100% coursework

Prerequisites

3G2 and 3G3 are useful but not essential

Aims

The aims of the course are to:

  • introduce alternative ways of modelling single neurons, and the ways in which these single-neuron models can be integrated into models of neural networks.
  • describe the challenges posed by neural coding and decoding, and the computational methods that can be applied to study them.
  • demonstrate case studies of computational functions that neural networks can implement.
  • describe models of plasticity and learning and how they apply to the basic paradigms of machine learning (supervised, unsupervised, reinforcement) as well as pattern formation in the nervous system.
  • consider control tasks (sensorimotor and other) faced and solved by the nervous system.
  • examine the energy efficiency of neural computations.

Objectives

As specific objectives, by the end of the course students should be able to:

  • understand how neurons and networks of neurons can be modelled in a biomimetic way, and how a systematic simplification of these models can be used to gain deeper insight into them.
  • develop an overview of how certain computational problems can be mapped onto neural architectures that solve them.
  • recognise the essential role of learning in the organisation of biological nervous systems.
  • appreciate the ways in which the nervous system is different from man-made intelligent systems, and their implications for engineering as well as neuroscience.

Content

The course covers basic topics in computational neuroscience, and demonstrates how mathematical analysis and ideas from dynamical systems, machine learning, optimal control, and probabilistic inference can be applied to gain insight into the workings of biological nervous systems. The course also highlights a number of real-world computational problems that need to be tackled by any ‘intelligent’ system, as well as the solutions that biology offers to some of these problems.

Principles of Computational Neuroscience (8L, M Lengyel)

  • how is neural activity generated? mechanistic neuron models
  • how to predict neural activity? descriptive neuron models
  • what should neurons do? normative neuron models
  • how to read neural activity? neural decoding
  • what happens when many neurons are connected? neural networks
  • how to tell a neural network what to do? supervised learning
  • how can neuronal networks learn without being told what to do? unsupervised learning
  • how do neural networks remember? auto-associative memory
  • how can our brains achieve the goal of life? reinforcement learning

Network dynamics & Plasticity (6L, Y Ahmadian)

  • linear and non-linear network dynamics
  • Hebbian plasticity
  • spike timing-dependent plasticity
  • learning receptive fields

Biophysics (2L, T O'Leary)

  • biophysical models of single neurons
  • biophysical models of simple circuits

Further notes

See the Moodle page for the course for more information (e.g. handouts, coursework assignments).

Examples papers

N/A

Coursework

The format, due dates, and marks for each coursework activity are listed with its description below.

Coursework activity #1: network dynamics and plasticity

Most computations in the brain are implemented in networks of recurrently coupled neurons. In this coursework you will build simple neural network models and understand how they give rise to emergent dynamical and computational properties.

Learning objectives:

  • implement simple neural networks and understand the effects of eigenvalues and eigenvectors on the resulting dynamics (a brief illustrative sketch follows this list)
  • implement balanced neural circuits and understand how asynchronous and irregular activity is generated
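
As a purely illustrative sketch (not the official coursework code; the network size, connectivity statistics, and integration settings below are arbitrary choices), the following Python snippet simulates a linear recurrent rate network dx/dt = -x + Wx + h and prints the largest real part of the eigenvalues of W, which determines whether the dynamics are stable (real parts below 1) or unstable.

```python
# Minimal sketch: linear recurrent network dynamics and the role of eigenvalues.
# All parameter values are arbitrary and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                               # number of neurons (arbitrary)
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))   # random recurrent connectivity

# For dx/dt = -x + W x + h, eigenvalues of W with real part > 1 make the dynamics unstable
eigvals = np.linalg.eigvals(W)
print("largest real part of eigenvalues of W:", eigvals.real.max())

dt, T = 1e-3, 1.0                 # Euler step and simulation length (arbitrary units)
x = np.zeros(N)                   # firing-rate deviations from baseline
h = rng.normal(size=N)            # constant external input (arbitrary)
for _ in range(int(T / dt)):
    x = x + dt * (-x + W @ x + h)  # forward-Euler integration

print("norm of steady-state activity:", np.linalg.norm(x))
```

Scaling the random weights by 1/sqrt(N) keeps the spread of the eigenvalue spectrum roughly independent of the network size, which is one common convention in such toy models.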

Format: Individual report, anonymously marked

Posted: Wed week 3; Due: Wed week 5

Marks: 30/60

Coursework activity #2: autoassociative memory and single neuron models

One of the most fundamental functions of the brain is to store and recall memories. In this coursework you will build and analyse a simple, canonical model of a neural network that implements autoassociative memory. You will also implement a simple, biophysical model of single neuron dynamics to study the conditions under which more abstract neuron models used in network simulations may be valid approximations.

Learning objectives:

  • implement an associative memory network and understand how different parameters influence its memory capacity (a brief illustrative sketch follows this list)
  • implement the Hodgkin-Huxley model and understand how it responds to different stimulation patterns
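
As a purely illustrative sketch (again not the official coursework code; the pattern count, network size, and noise level are arbitrary), the snippet below implements a Hopfield-style autoassociative network: random binary patterns are stored with a Hebbian outer-product rule and one pattern is recalled from a corrupted cue.

```python
# Minimal sketch: Hopfield-style autoassociative memory with Hebbian storage.
# All parameter values are arbitrary and for illustration only.
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 10                            # neurons and stored patterns (arbitrary)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian (outer-product) learning rule, with self-connections removed
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

# Corrupt one stored pattern by flipping 10% of its units
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1

# Asynchronous recall: repeatedly update units with a sign threshold
state = cue.copy()
for _ in range(10):
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ patterns[0]) / N       # 1.0 means perfect recall
print("overlap with stored pattern:", overlap)
```

Increasing the number of stored patterns relative to the network size eventually degrades the recall overlap, which is one simple way of probing the notion of memory capacity mentioned above.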

Format: Individual report, anonymously marked

Posted: Wed week 8; Due: Wed two weeks later

Marks: 30/60

Booklists

Please refer to the Booklist for Part IIB Courses for references to this module; it can be found on the associated Moodle course.

Examination Guidelines

Please refer to Form & conduct of the examinations.

 
Last modified: 15/09/2023 14:29