Machine Learning Blog

ML seminar, Wed 19 June, 2:00pm

News, Seminar.

Machine Learning seminar

When: Wed, 19 June 2019, 2:00pm
Where: A225, College Building

Who: Adam White; City, University of London.

Title: Measurable Counterfactual Explanations for Any Classifier

Abstract: The predictions of machine learning systems need to be explainable to the individuals they affect. Yet the inner workings of many machine learning systems seem unavoidably opaque. In this talk we will introduce a new system, Counterfactual Local Explanations viA Regression (CLEAR). CLEAR is based on the view that a satisfactory explanation of a prediction needs to both explain the value of that prediction and answer ‘what-if-things-had-been-different’ questions. Furthermore, it must also be measurable and state how well it explains a machine learning system. It must know what it does not know. CLEAR generates counterfactuals that specify the minimum changes necessary to flip a prediction’s classification. It then builds local regression models, using the counterfactuals to measure and improve the fidelity of its regressions. By contrast, the popular LIME method, which also uses regression to generate local explanations, neither measures its own fidelity nor generates counterfactuals. When applied to multi-layer perceptrons trained on four datasets, CLEAR improves on the fidelity of LIME by approximately 40%.
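
The counterfactual step can be sketched for the simplest case of a linear classifier, where the minimum change that flips a prediction is an orthogonal projection onto the decision boundary. This is an illustration of the general idea only, not CLEAR itself; the data and model below are invented:

```python
# Minimal counterfactual sketch for a *linear* classifier, in the spirit
# of "minimum changes necessary to flip a prediction's classification".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def counterfactual(x, eps=1e-3):
    """Smallest L2 change that crosses the decision boundary w.x + b = 0."""
    margin = w @ x + b
    # Orthogonal projection onto the boundary, then a small step beyond it.
    return x - (margin / (w @ w)) * w * (1 + eps)

x = np.array([1.0, 1.0])   # a point on one side of the boundary
x_cf = counterfactual(x)
print(clf.predict([x])[0], clf.predict([x_cf])[0])  # the class flips
```

For nonlinear models CLEAR-style approaches have to search for such minimal changes rather than compute them in closed form; the projection here works only because the boundary is a hyperplane.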

Bio: Adam White is currently working as a Research Assistant at City, University of London. His research interests are in explainable AI and causality. Adam received a PhD in philosophy of science from the London School of Economics in 2017. His PhD thesis was on the causal discovery of nonlinear dynamics in biochemistry. He then completed the MSc in Data Science at City in 2017/2018. Adam worked for 17 years as an Operational Research analyst in British Airways and Barclays Bank.

All welcome!

ML seminar, Tue 28 May, 3:30pm

News, Seminar.

Machine Learning seminar

When: Tue, 28 May 2019, 3:30pm
Where: AG07b, College Building

Who: Marco Gori, University of Siena, Italy.

Title: The Principle of Least Cognitive Action

Abstract: In this talk we introduce the principle of Least Cognitive Action with the purpose of understanding perceptual learning processes. The principle closely parallels related approaches in physics, and suggests regarding neural networks as systems whose weights are Lagrangian variables, namely functions depending on time. Interestingly, neural networks “conquer their own life” and there is no neat distinction between learning and testing; their behavior is characterized by the stationarity of the cognitive action, an appropriate functional which contains a potential and a kinetic term. While the potential term is somewhat related to the loss function used in supervised and unsupervised learning, the kinetic term represents the energy connected with the velocity of weight change. Unlike traditional gradient descent, the stationarity of the cognitive action yields differential equations in the connection weights, and gives rise to a dissipative process which is needed to yield ordered configurations. We give conditions under which this learning process reduces to stochastic gradient descent and to Backpropagation. We give examples of supervised and unsupervised learning, and briefly discuss the application to deep convolutional neural networks, where an appropriate Lagrangian term is used to enforce motion invariance in visual feature extraction.
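
In generic form (the symbols below are illustrative stand-ins, not necessarily the authors' exact formulation), an action functional over the weights w(t) with a kinetic and a potential term, and its stationarity condition, read:

```latex
% A[w]: action over weight trajectories; \mu a generic "mass", V a loss-like potential.
A[w] \;=\; \int_{0}^{T} \Big( \underbrace{\tfrac{\mu}{2}\,\|\dot w(t)\|^{2}}_{\text{kinetic}}
 \;+\; \underbrace{V\big(w(t),\,t\big)}_{\text{potential (loss-like)}} \Big)\, dt ,
\qquad
\delta A = 0 \;\Longrightarrow\; \mu\,\ddot w(t) \;=\; \frac{\partial V}{\partial w}\big(w(t),\,t\big).
```

The stationarity condition is the Euler–Lagrange equation of the functional: a second-order differential equation in the weights, rather than the first-order update of gradient descent, which is consistent with the abstract's remark that dissipation must be added to reach ordered configurations.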

Bio: Marco Gori received the Ph.D. degree in 1990 from Università di Bologna, Italy, working partly at the School of Computer Science (McGill University, Montreal). In 1992, he became an Associate Professor of Computer Science at Università di Firenze and, in November 1995, he joined the Università di Siena, where he is currently full professor of computer science.
His main interests are in machine learning with applications to pattern recognition, Web mining, and game playing. He is especially interested in bridging logic and learning and in the connections between symbolic and sub-symbolic representation of information. He was the leader of the WebCrow project for automatic solving of crosswords, which outperformed human competitors in an official competition held during the ECAI-06 conference. As a follow-up to this grand challenge he founded QuestIt, a spin-off company of the University of Siena working in the field of question answering. He is co-author of “Web Dragons: Inside the Myths of Search Engine Technology,” Morgan Kaufmann (Elsevier), 2006, and “Machine Learning: A Constraint-Based Approach,” Morgan Kaufmann (Elsevier), 2018.
Dr. Gori serves (has served) as an Associate Editor of a number of technical journals related to his areas of expertise, has received best paper awards, and has been a keynote speaker at a number of international conferences. He was the Chairman of the Italian Chapter of the IEEE Computational Intelligence Society, and the President of the Italian Association for Artificial Intelligence.
He is a fellow of the IEEE, ECCAI and IAPR, and is in the list of top Italian scientists kept by the VIA-Academy.

All welcome!

ML seminar, Fri 17 May, 2pm

News, Seminar.

Machine Learning seminar

When: Fri, 17 May 2019, 2pm
Where: AG03, College Building

Who: Wang-Zhou Dai, Imperial College London.

Title: Bridging Machine Learning and Logical Reasoning by Abductive Learning

Abstract: Perception and reasoning are two representative abilities of intelligence that are integrated seamlessly during problem-solving processes. In the area of artificial intelligence (AI), perception is usually realised by machine learning and reasoning is often formalised by logic programming. However, the two categories of techniques were developed separately throughout most of the history of AI. This talk will introduce the abductive learning framework targeted at unifying the two AI paradigms in a mutually beneficial way. In this framework, machine learning models learn to perceive primitive logical facts from the raw data, while logical reasoning is able to correct the wrongly perceived facts for improving the machine learning models. We demonstrate that by using the abductive learning framework, computers can learn to recognise numbers and resolve equations with unknown arithmetic operations simultaneously from images of simple hand-written equations. Moreover, the learned models can be generalized to complex equations and adapted to different tasks, which is beyond the capability of state-of-the-art deep learning models.
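
As a toy illustration of the abduction step described above (not the authors' system), suppose a perception model outputs probabilities over symbols for the images in a hand-written equation; logical reasoning then keeps only the label assignments consistent with arithmetic background knowledge and returns the most probable one, which can serve as a corrected training signal for the perception model. All distributions below are invented:

```python
# Toy abduction: pick the most probable symbol assignment that is
# consistent with the background knowledge "a OP b = c".
from itertools import product

# Hypothetical perception output: P(symbol | image) for three digit
# images and one operator image.
digit_probs = [
    {1: 0.6, 7: 0.4},   # image 1
    {2: 0.7, 3: 0.3},   # image 2
    {9: 0.5, 3: 0.5},   # image 3 (ambiguous)
]
op_probs = {"+": 0.8, "*": 0.2}
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def abduce(digit_probs, op_probs):
    """Return the most probable (a, op, b, c) with OPS[op](a, b) == c."""
    best, best_p = None, 0.0
    for (a, b, c), op in product(product(*[d.keys() for d in digit_probs]), op_probs):
        if OPS[op](a, b) != c:
            continue  # inconsistent with the background knowledge
        p = digit_probs[0][a] * digit_probs[1][b] * digit_probs[2][c] * op_probs[op]
        if p > best_p:
            best, best_p = (a, op, b, c), p
    return best, best_p

print(abduce(digit_probs, op_probs))  # -> ((1, '+', 2, 3), 0.168)
```

Here the ambiguity of the third image is resolved by the logic: "1 + 2 = 3" is the most probable consistent reading, so the third image would be relabelled as 3 when retraining the perceiver.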

Bio: Wang-Zhou Dai is a research associate in the Department of Computing, Imperial College London. He completed his PhD in machine learning at Nanjing University and his undergraduate studies in applied maths at Northwestern Polytechnical University, in 2019 and 2010, respectively. His research interests lie in the area of artificial intelligence and machine learning, especially in applying first-order logical background knowledge in general machine learning techniques. He has published multiple research papers at major conferences and in journals in AI and machine learning, including AAAI, ILP, ICDM, ACML and Machine Learning. He was awarded an IBM PhD Fellowship and a Google Excellence Scholarship during his PhD study, and he now serves as a PC member and reviewer for many top AI and machine learning conferences, including IJCAI, AAAI, NeurIPS, ICML, ACML, PRICAI and PAKDD.

All welcome!

ML seminar, Wed 3 Apr, 2pm

News, Seminar.

Machine Learning seminar

When: Wed, 3 Apr 2019, 2pm
Where: A226, College Building

Who: Derek Doran, Wright State University.

Title: Mappers and Manifolds Matter!

Abstract: Topological Data Analysis (TDA) is a branch of data science that estimates and then exploits the “shape” of a dataset for downstream characterization and inference. TDA methods are rising in popularity in the ML community as a tool to theoretically understand the actions of deep neural nets and other algorithms through connections to the Manifold Hypothesis. TDA methods, and in particular the Mapper algorithm, are also finding increased use in applied data science workflows. This talk will introduce the essential definitions and notions from topology needed for audience members to jump into the field of TDA and to build Mappers of datasets. It will then demonstrate the utility of Mapper, and TDA methods generally, for data science tasks including complex data visualization, interpretable dimensionality reduction, and explainable deep learning. The application of Mapper to dataset characterization and semi-supervised learning will also be illustrated.
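
A minimal sketch of the Mapper construction, assuming the simplest possible choices (lens = first coordinate, a cover of overlapping intervals, naive single-linkage clustering); production workflows typically use a library such as KeplerMapper instead:

```python
# Minimal Mapper: cover the lens range with overlapping intervals,
# cluster each preimage, and connect clusters that share points.
import numpy as np

def clusters(points, idx, thresh=0.5):
    """Connected components of idx under 'distance < thresh' (single linkage)."""
    comps = []
    for i in idx:
        merged = [c for c in comps
                  if any(np.linalg.norm(points[i] - points[j]) < thresh for j in c)]
        for c in merged:
            comps.remove(c)
        comps.append(set().union(*merged, {i}) if merged else {i})
    return comps

def mapper(points, n_intervals=4, overlap=0.3):
    lens = points[:, 0]                      # lens/filter function
    lo, hi = lens.min(), lens.max()
    width = (hi - lo) / n_intervals
    nodes = []
    for k in range(n_intervals):             # overlapping interval cover
        a = lo + k * width - overlap * width
        b = lo + (k + 1) * width + overlap * width
        idx = [i for i in range(len(points)) if a <= lens[i] <= b]
        nodes.extend(clusters(points, idx))  # one node per cluster
    edges = [(u, v) for u in range(len(nodes)) for v in range(u + 1, len(nodes))
             if nodes[u] & nodes[v]]         # shared points -> edge
    return nodes, edges

# Two well-separated blobs should yield a disconnected Mapper graph.
pts = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
                 np.random.default_rng(2).normal(3, 0.1, (20, 2))])
nodes, edges = mapper(pts)
```

The output graph summarises the dataset's shape: clusters become nodes, and overlap between cover intervals stitches them together, so the two blobs end up in separate connected components.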

Bio: Derek Doran is an Associate Professor of Computer Science and Engineering at Wright State University, Dayton OH, USA. His research interests are in machine learning and complex systems analysis, with current emphasis on topological methods for explainable AI, deep learning, complex network analysis, and web and geospatial systems mining. He is an author of over 75 publications on these topics, four of which have been recognized with best paper awards and nominations, is an author on multiple patents, and has published a book under the Springer Briefs in Complexity series. He serves on the program committee of major AI and Web conferences, is on the Editorial Board of Social Network Analysis and Mining and has served as process improvement chair at ESWC. Dr. Doran is a National Science Foundation EAPSI Fellow, a graduate research awardee of the U.S. Transportation Research Board, and a twice summer alumnus of Bell Labs. He will be a Fulbright Fellow stationed in Reykjavik University effective January 2020. Please see more information at https://derk–

All welcome!

ML seminar, Wed 13 Mar, 2pm

News, Seminar.

Machine Learning seminar

When: Wed, 13 Mar 2019, 2pm
Where: A226, College Building

Who: Robin Manhaeve, Katholieke Universiteit Leuven, Belgium.

Title: DeepProbLog: Neural Probabilistic Logic Programming

Abstract: We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques can be adapted for the new language. Our experiments demonstrate that DeepProbLog supports 1) both symbolic and subsymbolic representations and inference, 2) program induction, 3) probabilistic (logic) programming, and 4) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive probabilistic-logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.
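
The neural-predicate idea can be hand-worked for the MNIST-addition task often used to demonstrate DeepProbLog: the probability that two digit images sum to s marginalises the two networks' softmax outputs over all consistent digit pairs. The softmax values below are invented for illustration:

```python
# P(digit1 + digit2 == s) from two neural predicates' softmax outputs:
# sum the product of probabilities over every digit pair consistent with s.
def p_sum(p1, p2, s):
    return sum(p1[d1] * p2[d2]
               for d1 in range(10) for d2 in range(10) if d1 + d2 == s)

# Hypothetical softmax outputs concentrated on 3 and 5.
p1 = [0.02] * 10; p1[3] = 0.82
p2 = [0.02] * 10; p2[5] = 0.82
print(round(p_sum(p1, p2, 8), 3))
```

This marginalisation is differentiable in the network outputs, which is what lets the logic layer and the neural networks be trained end-to-end from examples labelled only with the sum.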

Bio: Robin Manhaeve is a PhD student in the Department of Computer Science at KU Leuven. In 2017, he completed his MSc in Engineering Science: Computer Science at KU Leuven. He is currently researching the integration of deep learning and probabilistic logic programming under the supervision of Prof. Luc De Raedt, and is funded by an SB grant from the Research Foundation – Flanders (FWO).

All welcome!

MPhil-PhD transfer seminar – Charitos Charitou

News, Seminar.

MPhil-PhD transfer presentation

When: Fri, 1st Mar 2019, 2.00pm
Where: C323 (3rd Floor, Tait Building)

Who: Charitos Charitou; City, University of London

Title: Deep Learning for Compliance: “Application of machine learning to online gambling data to identify money laundering”

Abstract: Most current online gambling operators use basic handcrafted rules for their anti-money-laundering (AML) strategy. These methods are no longer sufficient for identifying complex fraudulent activities. Kindred Group entered into a research collaboration with City, University of London with the main goal of using machine learning effectively to detect money laundering. Understanding the needs of the industry and the views of its stakeholders was a priority: a series of interviews with various stakeholders in the gambling industry took place, and the findings were published earlier this year in the form of a white paper.
The second part of the research involved the analysis and evaluation of the gambling data provided by Kindred. We present how the imbalanced-dataset problem was tackled, and the new experimental dataset that was created for supervised learning. The performance of Logistic Regression (LR), Random Forest (RF) and Multilayer Perceptron (MLP) models was examined and compared. Our results showed that Random Forest was the best model for predicting normal players, while the MLP detected suspicious players with the highest accuracy. Finally, the sequential structure of the data was investigated using discrete and continuous Hidden Markov Models (HMMs).
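
As a generic sketch of this kind of model comparison (using synthetic data and scikit-learn defaults, not the study's actual pipeline, features or data), class weighting is one common way to handle the imbalance:

```python
# Compare LR, RF and MLP on an imbalanced binary task (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# ~5% positives, standing in for the rare "suspicious player" class.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "LR": LogisticRegression(class_weight="balanced", max_iter=1000),
    "RF": RandomForestClassifier(class_weight="balanced", random_state=0),
    "MLP": MLPClassifier(max_iter=500, random_state=0),  # no class_weight option
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name)
    # Per-class precision/recall matters more than accuracy here.
    print(classification_report(y_te, model.predict(X_te), zero_division=0))
```

With heavy imbalance, accuracy alone is misleading (always predicting "normal" already scores ~95%), which is why per-class metrics are reported for each model.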

All welcome!

ML seminar, Wed 20 Feb, 2pm

News, Seminar.

Machine Learning seminar

When: Wed, 20 Feb 2019, 2pm
Where: AG21, College Building

Who: Dr. Alberto Ferreira De Souza, Universidade Federal do Espírito Santo (UFES), Brazil.

Title: Building IARA – The Intelligent Autonomous Robotic Automobile


Abstract: The Intelligent Autonomous Robotic Automobile (IARA) is one of the most advanced self-driving cars in the world, ranking eighth by the metric of number of interventions per 1,000 miles in 2017. We started building IARA in 2009 and, since then, more than 30 students, including PhD, MSc and undergraduate students, have completed their courses working on the project. IARA is based on the precise-localization paradigm, in which the self-driving car must have a detailed map of the environment to operate autonomously. In this talk, we will present some of the history behind the 10 years of research that led to the current state of development of IARA, and will describe how some of its main software modules work, including the modules responsible for mapping, localization and autonomous navigation.
A demonstration video can be found here.


Bio: Dr. Alberto Ferreira De Souza is a Professor of Computer Science and Coordinator of the Laboratório de Computação de Alto Desempenho (LCAD – High Performance Computing Laboratory) at the Universidade Federal do Espírito Santo (UFES), Brazil. He received a B.Eng. (cum laude) in electronics engineering and an M.Sc. in systems engineering and computer science from Universidade Federal do Rio de Janeiro (COPPE/UFRJ), Brazil, in 1988 and 1993, respectively, and a Ph.D. in computer science from University College London, United Kingdom, in 1999. He has authored/co-authored one USA patent and over 130 publications. He has edited the proceedings of four conferences (two of them IEEE-sponsored), and is a Standing Member of the Steering Committee of the International Conference on Computer Architecture and High Performance Computing (SBAC-PAD).

All welcome!

MPhil-PhD transfer seminar – Fatemeh Najibi

News, Seminar.

MPhil-PhD transfer presentation

When: Fri, 1st Feb 2019, 1.30pm
Where: C103 (1st Floor, Tait Building)

Who: Fatemeh Najibi, City, University of London

Title: Deterministic Microgrid Optimal Operation

Abstract: With the inclusion of renewable energy, traditional power systems face new challenges: the inherent fluctuations and variability of renewable sources pose new problems in modelling uncertainty. Controlling and optimizing the operation cost by adjusting the output of renewable energy resources makes power system operation more reliable and secure. In this work, we aim to solve an optimal microgrid management problem in deterministic and probabilistic frameworks. The microgrid is connected to the utility and comprises different renewable energy generators, such as photovoltaic (PV) and wind generators, batteries, hydroelectric plants and a microturbine.
The objective is to minimize the cost of generation and the voltage deviation from the reference.
The optimization problem is nonlinear, since the AC load flow that appears as a constraint is nonlinear, so we linearize the AC load flow in a first step. Secondly, we model the problem in a deterministic framework without considering the impact of uncertainties on the power system. The other physical constraints taken into account in this work are the equality constraint of load-generation balance, output power limits and voltage limits. Finally, we will model the nonlinear problem in a probabilistic framework to see how uncertainties can affect the system. The first two steps have been completed before transferring to PhD, and the final step will be carried out in the following two years.
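
The deterministic, linearised step can be illustrated with a toy economic-dispatch linear program (all costs and limits below are invented, and the voltage terms are omitted): minimise generation cost subject to the load-balance equality and per-unit output limits:

```python
# Toy dispatch LP: minimise cost subject to generation == load and
# per-unit output limits. Costs/limits are illustrative only.
from scipy.optimize import linprog

# Units: [PV, wind, microturbine, grid import]; marginal costs in £/kWh.
cost = [0.0, 0.0, 0.12, 0.20]
load = 50.0  # kW demand to be met in this interval

res = linprog(
    c=cost,
    A_eq=[[1, 1, 1, 1]], b_eq=[load],              # load-generation balance
    bounds=[(0, 20), (0, 15), (0, 30), (0, 100)],  # output power limits
)
print(res.x)  # cheapest dispatch uses the zero-cost PV and wind first
```

The probabilistic extension mentioned in the abstract would replace the fixed load and renewable limits with random quantities, which is where the linearisation of the AC load flow pays off.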

All welcome!

NeurIPS 2018 and End-of-Year ML party

News, Seminar.

Tillman Weyde, Rahda Kopparti, Dan Philps and Artur Garcez attended and presented papers at NeurIPS 2018 in Montreal, Canada, during the week of 3 Dec 2018. Dan and Artur provided an informal overview of their impressions of NeurIPS to the ML group’s End-of-Year meeting on 14 Dec 2018. Thanks to Benedikt Wagner for organising the meeting.


Research Visit

News, Seminar.

Luciano Serafini (FBK, Trento, Italy) and Michael Spranger (Sony CSL, Tokyo, Japan) visited City’s Research Centre for Machine Learning during the week of 10 Dec 2018. The main focus of the visit was to continue research collaborations on Logic Tensor Networks. LTNs are a deep learning system implemented in TensorFlow, capable of reasoning with first-order many-valued logic. For more information, please check the webpage of the IJCAI’2018 tutorial on LTNs.


Find us

City, University of London

Northampton Square

London EC1V 0HB

United Kingdom

