
Engineering Tripos Part IIB, 4G3: Computational Neuroscience, 2018-19


Module Leader

Prof Máté Lengyel

Lecturer

Prof Máté Lengyel, Dr Guillaume Hennequin, Dr Timothy O'Leary

Timing and Structure

Lent term. 16 lectures. Assessment: 100% coursework

Prerequisites

3G2 and 3G3 are useful but not essential

Aims

The aims of the course are to:

  • introduce alternative ways of modelling single neurons, and the way these single neuron models can be integrated into models of neural networks.
  • describe the challenges posed by neural coding and decoding, and the computational methods that can be applied to study them.
  • demonstrate case studies of computational functions that neural networks can implement.
  • describe models of plasticity and learning and how they apply to the basic paradigms of machine learning (supervised, unsupervised, reinforcement) as well as pattern formation in the nervous system.
  • consider control tasks (sensorimotor and other) faced and solved by the nervous system.
  • examine the energy efficiency of neural computations.

Objectives

As specific objectives, by the end of the course students should be able to:

  • understand how neurons and networks of neurons can be modelled in a biomimetic way, and how a systematic simplification of these models can be used to gain deeper insight into them.
  • develop an overview of how certain computational problems can be mapped onto neural architectures that solve them.
  • recognise the essential role of learning in the organisation of biological nervous systems.
  • appreciate the ways in which the nervous system is different from man-made intelligent systems, and their implications for engineering as well as neuroscience.

Content

The course covers basic topics in computational neuroscience, and demonstrates how mathematical analysis and ideas from dynamical systems, machine learning, optimal control, and probabilistic inference can be applied to gain insight into the workings of biological nervous systems. The course also highlights a number of real-world computational problems that need to be tackled by any ‘intelligent’ system, as well as the solutions that biology offers to some of these problems.

Principles of Computational Neuroscience (8L, M Lengyel)

  • how is neural activity generated? mechanistic neuron models
  • how to predict neural activity? descriptive neuron models
  • what should neurons do? normative neuron models
  • how to read neural activity? neural decoding
  • what happens when many neurons are connected? neural networks
  • how to tell a neural network what to do? supervised learning
  • how can neuronal networks learn without being told what to do? unsupervised learning
  • how do neural networks remember? auto-associative memory
  • how can our brains achieve the goal of life? reinforcement learning

Network dynamics & Plasticity (4L, G Hennequin)

  • linear and non-linear network dynamics
  • Hebbian plasticity
  • spike timing-dependent plasticity
  • learning receptive fields

Biophysics (2L, T O'Leary)

  • energetics of information processing
  • the energetic cost of spikes and synapses

Further notes

See the Moodle page for the course for more information (e.g. handouts, coursework assignments).

Examples papers

N/A

Coursework


Coursework activity #1: reinforcement and representational learning

Organisms learn about their environment to build internal representations that allow them to choose actions adaptively so as to maximise future reward. In this coursework, you will build simple models of reinforcement and representational learning and understand how they map onto neural phenomena.

Learning objectives:

  • understand and implement the temporal difference learning algorithm (a minimal illustrative sketch follows this list)
  • learn to interpret the predictions of temporal difference learning for dopaminergic midbrain activity
  • understand the outputs of a simple independent components analysis (ICA) model and their relation to natural image statistics
  • implement a simple divisive normalisation model and interpret its relation to natural image statistics as well as to activity in primary visual cortex (V1)
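
For the first objective, a minimal illustrative TD(0) sketch in Python/NumPy is given below: a single reward arrives at the last time step of each trial, as in the classic conditioning setting used to model dopaminergic responses. The trial length, learning rate, number of trials and reward schedule are assumptions made for this example; the actual task and parameters are those defined in the coursework handout on Moodle.

    import numpy as np

    # Minimal TD(0) sketch (illustrative assumptions, not the coursework specification).
    T, alpha, n_trials = 20, 0.1, 500    # trial length, learning rate, number of trials

    V = np.zeros(T)                      # value estimate V(t) at each time step
    r = np.zeros(T)
    r[-1] = 1.0                          # reward delivered only at the final time step

    for _ in range(n_trials):
        for t in range(T - 1):
            delta = r[t] + V[t + 1] - V[t]   # TD error (discount factor of 1)
            V[t] += alpha * delta            # value update
        V[-1] += alpha * (r[-1] - V[-1])     # the final step has no successor state

    print(np.round(V, 2))  # after learning V(t) is close to 1 throughout: the terminal reward is predicted from the start of the trial

In the standard account it is the TD error (delta above), rather than the value itself, that is compared with phasic dopaminergic midbrain activity, which is what the second objective asks you to interpret.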

Format: individual report, anonymously marked
Posted: Wednesday of week 3
Due: Wednesday of week 5
Marks: 30/60

Coursework activity #2: network dynamics and plasticity

Most computations in the brain are implemented in networks of recurrently coupled neurons. In this coursework you will build simple neural network models and understand how they give rise to emergent dynamical and computational properties.

Learning objectives:

  • implement simple neural networks and understand the effects of eigenvalues and eigenvectors on the resulting dynamics (a minimal illustrative sketch follows this list)
  • implement balanced neural circuits and understand how asynchronous and irregular activity is generated
  • implement an associative memory network and understand how different parameters influence its memory capacity
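
For the first objective, a minimal illustrative linear rate-network sketch in Python/NumPy is given below, showing how the eigenvalues of the connectivity matrix govern the dynamics tau dx/dt = -x + W x. The network size, time constant, integration step and connectivity statistics are assumptions made for this example; the actual models and parameters are those in the coursework handout on Moodle.

    import numpy as np

    # Minimal linear rate network (illustrative assumptions, not the coursework specification).
    # Activity decays to zero if every eigenvalue of W has real part < 1, and grows
    # along any eigenvector whose eigenvalue has real part > 1.
    rng = np.random.default_rng(0)
    N, tau, dt = 50, 0.02, 1e-3                    # neurons, time constant (s), time step (s)
    W = rng.normal(0.0, 0.9 / np.sqrt(N), (N, N))  # random connectivity, spectral radius ~ 0.9

    print("largest Re(eigenvalue):", np.linalg.eigvals(W).real.max())  # < 1, so stable

    x = rng.normal(size=N)                         # random initial activity
    for _ in range(2000):                          # Euler integration of tau dx/dt = -x + W x
        x = x + (dt / tau) * (-x + W @ x)
    print("activity norm after 2 s:", np.linalg.norm(x))  # decays towards zero

Scaling the connectivity up (for example to 1.2 / np.sqrt(N)) typically pushes the leading eigenvalues past Re = 1, and the same simulation then produces exponentially growing activity along the corresponding eigenvectors.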

Format: individual report, anonymously marked
Posted: Wednesday of week 8
Due: Wednesday two weeks later
Marks: 30/60


Booklists

Please see the Booklist for Group G Courses for references for this module.

Examination Guidelines

Please refer to Form & conduct of the examinations.

 
Last modified: 17/05/2018 14:26