Module Leader
Lecturers
Prof G Hennequin, Dr Y Ahmadian and Prof M Lengyel
Timing and Structure
Lent term. 16 lectures. Assessment: 100% coursework
Prerequisites
3G2 and 3G3 are useful but not essential
Aims
The aims of the course are to:
- develop an understanding of the fundamentals of reinforcement learning, and how they relate to neural and behavioural data on the ways in which the brain learns from rewards
- demonstrate the importance of internal models in neural computations, and provide examples for their behavioural and neural signatures
- introduce alternative ways of modelling single neurons, and the way these single neuron models can be integrated into models of neural networks.
- explain how the dynamical interactions between neurons give rise to emergent phenomena at the level of neural circuits
- describe models of plasticity and learning and how they apply to the basic paradigms of machine learning (supervised, unsupervised, reinforcement) as well as pattern formation in the nervous system
- demonstrate case studies of computational functions that neural networks can implement
Objectives
As specific objectives, by the end of the course students should be able to:
- understand how neurons and networks of neurons can be modelled in a biomimetic way, and how a systematic simplification of these models can be used to gain deeper insight into them.
- develop an overview of how certain computational problems can be mapped onto neural architectures that solve them.
- recognise the essential role of learning in the organisation of biological nervous systems.
- appreciate the ways in which the nervous system is different from man-made intelligent systems, and their implications for engineering as well as neuroscience.
Content
The course covers basic topics in computational neuroscience, and demonstrates how mathematical analysis and ideas from dynamical systems, machine learning, optimal control, and probabilistic inference can be applied to gain insight into the workings of biological nervous systems. The course also highlights a number of real-world computational problems that need to be tackled by any ‘intelligent’ system, as well as the solutions that biology offers to some of these problems.
Principles of Computational Neuroscience (9L, M Lengyel)
- introduction: the goals of computational neuroscience, levels of analysis, and module plan
- reinforcement learning: theoretical background and basic theorems, alternative algorithmic solutions and multiple learning & memory systems, model-based vs. model-free computations, the temporal difference learning theory of dopamine responses (see the sketch after this list)
- internal models: theoretical framework, internal models in perception, sensorimotor control, statistical learning, structure learning, neural correlates, neural representations of uncertainty, representational learning
- associative memory: the Hebbian paradigm, attractor neural networks, the Hopfield network, energy function, capacity, place cells, long-term plasticity, navigation, and place cell remapping
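As a flavour of the temporal-difference material listed above, here is a minimal, illustrative sketch of tabular TD(0) value learning on a toy Markov chain; the states, rewards, and parameter values are assumptions chosen for illustration and are not taken from the course materials.

```python
# Tabular TD(0) value learning on a toy chain of states (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

n_states = 5                  # states 0..4; state 4 is terminal
V = np.zeros(n_states)        # value estimates
alpha, gamma = 0.1, 0.9       # learning rate and discount factor (assumed values)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        s_next = min(s + rng.integers(1, 3), n_states - 1)  # random forward step of 1 or 2
        r = 1.0 if s_next == n_states - 1 else 0.0          # reward only on reaching the goal
        delta = r + gamma * V[s_next] - V[s]                # TD error ("dopamine-like" signal)
        V[s] += alpha * delta                               # move estimate towards the TD target
        s = s_next

print(np.round(V, 2))         # values increase towards the rewarded terminal state
```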
Network dynamics & Plasticity (4L, G Hennequin)
- linear and non-linear network dynamics (see the sketch after this list)
- spiking neural network dynamics
- excitatory-inhibitory balance
- chaotic dynamics
- network mechanisms of selective amplification
- orientation tuning in primary visual cortex
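As a flavour of the linear-dynamics material listed above, here is a minimal, illustrative simulation of a linear rate network, tau * dx/dt = -x + W x + h, integrated with forward Euler; the connectivity, input, and time constants below are assumptions chosen for illustration.

```python
# Forward-Euler simulation of a linear recurrent rate network (illustrative only).
import numpy as np

tau, dt = 0.02, 0.001                     # 20 ms time constant, 1 ms integration step
W = np.array([[0.0, -0.6],
              [0.6,  0.0]])               # assumed recurrent coupling between two units
h = np.array([1.0, 0.0])                  # constant external input
x = np.zeros(2)                           # firing-rate state

for _ in range(2000):                     # 2 s of simulated time
    x = x + (dt / tau) * (-x + W @ x + h)

# The fixed point is (I - W)^{-1} h, and it is stable when every
# eigenvalue of W has real part smaller than 1.
print("simulated fixed point:", np.round(x, 3))
print("analytical fixed point:", np.round(np.linalg.solve(np.eye(2) - W, h), 3))
```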
Plasticity & Biophysics (3L, Y Ahmadian)
- Hebbian plasticity (see the sketch after this list)
- spike timing-dependent plasticity
- learning receptive fields
- biophysical models of single neurons
- biophysical models of simple circuits
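As a flavour of the plasticity material listed above, here is an illustrative sketch of Hebbian learning with Oja's normalisation, in which a single linear neuron's weight vector converges (up to sign) to the first principal component of its inputs; the input statistics and learning rate are assumptions chosen for illustration.

```python
# Oja's rule: Hebbian learning with implicit weight normalisation (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

# Correlated 2-D inputs: most variance lies along the (1, 1) direction (assumed statistics).
C = np.array([[1.0, 0.8],
              [0.8, 1.0]])
X = rng.multivariate_normal(np.zeros(2), C, size=5000)

w = rng.normal(size=2)                     # initial synaptic weights
eta = 0.01                                 # learning rate (assumed value)

for x in X:
    y = w @ x                              # linear neuron's output
    w += eta * y * (x - y * w)             # Hebbian term y*x minus a decay that bounds |w|

print("learned weights:", np.round(w / np.linalg.norm(w), 3))
eigvals, eigvecs = np.linalg.eigh(C)       # leading eigenvector = first principal component
print("first PC (up to sign):", np.round(eigvecs[:, -1], 3))
```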
Further notes
See the Moodle page for the course for more information (e.g. handouts, coursework assignments).
Examples papers
N/A
Coursework
| Coursework | Format | Due date & marks |
|---|---|---|
| Coursework activity #1: network dynamics. Most computations in the brain are implemented in networks of recurrently coupled neurons. In this coursework you will build simple neural network models and understand how they give rise to emergent dynamical and computational properties. Learning objective: | Individual report, anonymously marked | Posted week 4; due week 6 [30/60] |
| Coursework activity #2: synaptic plasticity and representational learning. The brain constantly reconfigures itself via synaptic plasticity to develop useful representations of its inputs. In this coursework you will build and analyse simple models to understand some of the basic principles underlying this process. Learning objective: | Individual report, anonymously marked | Posted week 8; due two weeks later [30/60] |
Booklists
Please refer to the Booklist for Part IIB Courses for references to this module; it can be found on the associated Moodle course.
Examination Guidelines
Please refer to Form & conduct of the examinations.
Last modified: 31/05/2024 10:09