Machine Learning Blog

Seminar by Dr. Luke Dickens (10th April, 2015)

In the next Machine Learning Group seminar, we will have a talk by Dr. Luke Dickens who is a Lecturer in the Department of Information Studies at UCL.

Venue: A226 (College Building)

Date & Time: Apr 10, 2015 (12:00-13:00)

Title: Part 1: Efficient Knowledge Acquisition in Crowdsourcing; Part 2: The Human Gamma Project

Abstract:

This talk will be in two parts.

In the first part, I will talk about my work in crowdsourcing and crowdsensing. Crowdsourcing, and its younger sibling crowdsensing, provide ways to harness the time, expertise, intellectual capacity, organisational skill, moral judgement, and distributed nature of large groups of people, from interested laypeople to focused experts. There has been a wealth of recent work investigating how to establish high-quality ground-truth predictions using multiple semi-trusted sources. The idea underlying much of this work is to use correspondence between sources for mutual validation: in simple terms, two sources are more likely to agree on a label if there is a shared cause, such as both sources being reliable. I will discuss when these approaches work and what can cause them to fail, as well as potential mitigation strategies. I will then go on to talk about our methods that use these models to efficiently acquire new labels, and our techniques for fast ground-truth prediction across multiple contexts.
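
To make the mutual-validation idea concrete, here is a minimal illustrative sketch (not the speaker's actual method) of reliability-weighted aggregation of binary labels, in the spirit of Dawid-Skene-style models: estimates of source reliability and of the ground truth are refined together, each informing the other. The function names, data layout, and parameter values are assumptions made for this example.

```python
import numpy as np

def aggregate_labels(votes, n_iters=20):
    """Jointly estimate item labels and source reliabilities.

    votes: (n_items, n_sources) array of 0/1 labels, one vote per source per item.
    Returns soft label estimates in [0, 1] and per-source reliability estimates.
    """
    n_items, n_sources = votes.shape
    truth = votes.mean(axis=1)               # initialise with the majority vote
    reliability = np.full(n_sources, 0.8)    # prior belief in each source

    for _ in range(n_iters):
        # A source's reliability is how often it agrees with the current truth estimate.
        agreement = truth[:, None] * votes + (1 - truth[:, None]) * (1 - votes)
        reliability = agreement.mean(axis=0).clip(1e-3, 1 - 1e-3)

        # Re-estimate each item's label, weighting sources by their reliability.
        log_odds = (votes * np.log(reliability / (1 - reliability))
                    + (1 - votes) * np.log((1 - reliability) / reliability)).sum(axis=1)
        truth = 1.0 / (1.0 + np.exp(-log_odds))

    return truth, reliability

# Toy example: three mostly reliable sources and one near-random source label ten items.
rng = np.random.default_rng(0)
true_labels = rng.integers(0, 2, size=10)
votes = np.column_stack([
    np.where(rng.random(10) < accuracy, true_labels, 1 - true_labels)
    for accuracy in (0.9, 0.9, 0.85, 0.55)
])
estimated, rel = aggregate_labels(votes)
print("estimated labels:       ", (estimated > 0.5).astype(int))
print("estimated reliabilities:", rel.round(2))
```

A full treatment would also handle missing votes, class priors and asymmetric error rates; in particular, the failure mode mentioned above, where sources agree because of a shared cause other than reliability, is not addressed by this simple scheme.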

In the second part of the talk, I will briefly outline my work on behavioural modelling of humans undertaking reinforcement tasks, and on the implicit discounting we use when choosing between small short-term gains and larger long-term rewards. Reinforcement learning models offer a biologically plausible framework in which to study human behaviour in sequential learning tasks. In particular, reward prediction errors found in the brain have a close analogue in the ‘temporal differences’ of the widely used temporal difference (TD) machine learning algorithm. I will discuss our psychophysics experiments, designed to elicit human behaviours and investigate reward-discounting characteristics. I will also present some preliminary findings suggesting that humans adapt their reward discounting to certain features of task complexity. This work may help us to develop new reinforcement learning algorithms with adaptive reward discounting.
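
As a reference point for the TD analogy, here is a minimal sketch of tabular TD(0) value prediction on a toy chain task, showing the temporal-difference error (the analogue of the reward prediction error) and the discount factor gamma that trades off near against distant rewards. The environment and parameter values are illustrative assumptions, not part of the experiments described in the talk.

```python
def td0_prediction(episodes, n_states, alpha=0.1, gamma=0.9):
    """Tabular TD(0): episodes are lists of (state, reward, next_state) transitions,
    with next_state = None at the end of an episode."""
    V = [0.0] * n_states
    for episode in episodes:
        for state, reward, next_state in episode:
            target = reward + (gamma * V[next_state] if next_state is not None else 0.0)
            td_error = target - V[state]   # the temporal difference / reward prediction error
            V[state] += alpha * td_error   # gamma controls how far ahead values look
    return V

# Toy chain: states 0..4, always move right, reward 1.0 only on reaching terminal state 4.
def run_episode():
    state, transitions = 0, []
    while state < 4:
        next_state = state + 1
        reward = 1.0 if next_state == 4 else 0.0
        transitions.append((state, reward, next_state if next_state < 4 else None))
        state = next_state
    return transitions

values = td0_prediction([run_episode() for _ in range(200)], n_states=5)
print([round(v, 2) for v in values])  # a smaller gamma discounts the distant reward more heavily
```

Varying gamma changes how steeply the values of the early states fall off; it is this kind of discounting characteristic that the human experiments described above aim to measure.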

This work was undertaken with a number of researchers at Imperial College and was supported by EPSRC and EIT ICT Labs funding.

Speaker Bio: I am a Machine Learning specialist with a particular interest in reinforcement learning, probabilistic modelling and systems neuroscience. I completed my PhD at Imperial College London under the supervision of Dr Alessandra Russo and Dr Krysia Broda, investigating the use of Reinforcement Learning for non-cooperative multi-agent environments with hidden state.

Since then, I have held a number of post-doctoral posts at Imperial College, engineering and developing machine learning techniques for various application areas, including security & privacy, behavioural modelling, systems neuroscience, and crowdsourcing. My current research focuses on applying probabilistic modelling and information theory to these domains. I now work as a Lecturer in the Department of Information Studies at UCL.
