Machine Learning Blog

Data Bites seminar, Mon 11 Nov, 5:00pm

News, Seminar.

Data Bites seminar

When: Mon, 11 November 2019, 5:00pm
Where: A130, College Building

Who: Kevin Ryan; City, University of London

Title: Deep Learning and Computer Vision in the Property Market – Making the ‘Right’ Move

Abstract: Rightmove is the UK’s largest online real estate portal. The company was started in 2000 by the top four corporate estate agents Countrywide, Connells, Halifax and Royal and Sun Alliance. In 2006 it was floated on the London Stock Exchange and today it boasts a revenue of £267m with an operating profit of £198.6m.
Rightmove offers an Automated Valuation Model (AVM) which predicts the price of a UK-based property based principally on easy-to-measure property metrics such as number of bedrooms, previous sold price, asking price, location, etc. These metrics are generalisable across different property types and are effective in capturing gross differences in price. However, they do not capture more specific differences in marketability between properties, such as the presence or absence of specific features, or style- and decor-based characteristics that can often have a significant effect on sold price.
Property images contain a great deal of unstructured information relating to these more nuanced features of a property. In this talk I will discuss some of the data gathering and deep learning approaches that I used in order to capture marketable information from property images. I will also discuss a little background around how I sourced and obtained my internship at Rightmove as part of my MSc in Data Science.

Bio: Dr Kevin Ryan is currently completing City’s MSc in Data Science. Previously he worked as one of the principal Bioinformaticians at Viapath where he was responsible for implementing an end-to-end analysis platform for the High Throughput DNA sequencing facility at Guy’s Hospital’s Genetics Department. His platform went live in 2015 and formed a central part of the service responsible for serving 3.8 million people in the South Thames area.
Prior to this Dr Ryan was based at the University of Nottingham where he worked as a Postdoctoral research scientist. Here his research involved the development of analysis systems to characterise gene expression networks involved in the regulation of skeletal muscle growth and energy metabolism. Originally trained within the fields of molecular biology and nutritional biochemistry, he completed his PhD in 2005 at the University of Nottingham.
He is currently completing an internship at the Property Portal company Rightmove Plc where his project explores the use of Computer Vision approaches in extracting unstructured data from property images to help inform future property price prediction models.

All welcome!

Psychology Seminar, 23 Oct, 1:00pm

News, Reading Group.

Department of Psychology seminar

When: Wed, 23 October 2019, 1:00pm
Where: D427, Rhind Building

Who: Bert Kappen; Donders Institute, Radboud University Nijmegen (Netherlands)

Title: Path Integral Control Theory

Abstract: Stochastic optimal control theory deals with the problem of computing an optimal set of actions to attain some future goal. Examples are found in many contexts such as motor control tasks for robotics, planning and scheduling tasks or managing a financial portfolio. The computation of the optimal control is typically very difficult due to the size of the state space and the stochastic nature of the problem. For a special class of non-linear stochastic control problems, the solution can be mapped onto a statistical inference problem. For these so-called path integral control problems the optimal cost-to-go solution of the Bellman equation is given by the minimum of a free energy. I will give a high level introduction to the underlying theory and illustrate with some examples from robotics and other areas.
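The mapping from control to inference mentioned in the abstract can be written down compactly. The notation below follows Kappen's published formulation of path integral control and is an assumption on my part, not material from the talk itself:

```latex
% x: state, u: control, \xi: noise with variance \nu\,dt
% S(\tau): path cost accumulated along an uncontrolled trajectory \tau
\begin{align*}
  dx &= \bigl(f(x,t) + u\bigr)\,dt + d\xi
  \\
  C &= \mathbb{E}\left[ \phi(x_T) + \int_t^T
        \Bigl( V(x_s,s) + \tfrac{1}{2}\,u_s^{\top} R\,u_s \Bigr)\,ds \right]
  \\
  J(x,t) &= -\lambda \log
     \mathbb{E}_{\tau}\!\left[ e^{-S(\tau)/\lambda} \right]
\end{align*}
% The log transform J = -\lambda \log \psi linearises the
% Hamilton--Jacobi--Bellman equation; the optimal cost-to-go J is then
% the free energy of the uncontrolled path distribution, which is the
% "minimum of a free energy" statement in the abstract.
```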

Bio: Prof. Bert Kappen conducts theoretical research that lies at the interface between machine learning, control theory, statistical physics, computer science, computational biology and artificial intelligence. He has developed many novel approximate inference methods inspired by methods from statistical physics. He has pioneered the mean field analysis of stochastic neural networks with dynamical synapses, revealing up and down states and rapid switching. He has identified a novel class of non-linear stochastic control problems that can be solved using path integrals. This approach has been adopted by leading robotics groups worldwide, and is recognized as an important novel approach to stochastic control. His work on mean field theory for asymmetric stochastic neural networks is at the basis of current research to find connectivity patterns in neural circuits. He is the author of about 130 peer-reviewed articles in scientific journals and leading conferences. In collaboration with medical experts, he has developed a Bayesian medical expert system, including approximate inference methods, and he has co-founded the company Promedas to commercialize this system. He is director of SNN, the Dutch foundation for Neural Networks. SNN has a long reputation for successfully applying neural network and machine learning methods in collaboration with numerous industrial partners. He has co-founded the company Smart Research bv, which offers commercial services in machine learning and has developed the Bonaparte Disaster Victim Identification software. He is honorary faculty at the Gatsby Unit for Computational Neuroscience at University College London. For more information: http://www.snn.ru.nl/~bertk/

All welcome!

MPhil-PhD transfer seminar – Benedikt Wagner

News, Seminar.

MPhil-PhD transfer presentation

When: Wed, 16th Oct 2019, 12.00 noon
Where: A108 (1st Floor, College Building)

Who: Benedikt Wagner; City, University of London

Title: Reasoning about what has been learned: Knowledge Extraction from Neural Networks

Abstract: Machine Learning-based systems, including Neural Networks, have grown greatly in popularity in recent years. A weakness of models that rely on complex representations is that they are black boxes with respect to explanatory power. In the context of current regulatory initiatives and societal discussions around transparency and the accountability of automated decision systems, work on more interpretable or explainable methods and systems in Artificial Intelligence and Machine Learning is ongoing. As a result, a plethora of methods has been introduced in recent years, a large mixture of approaches and steps towards a better understanding of model behaviour. We have therefore developed a taxonomy that provides a holistic view of and structure for the topic. We further investigate three promising methods in greater depth, based on counterfactuals, Concept Activation Vectors, and knowledge extraction. Concept Activation Vectors target the hidden representation as a useful basis for explanations grounded in conceptual sensitivities. Tree-structured knowledge extraction methods, on the other hand, aim at a global representation in a constrained architecture that illustrates how a decision was made, while achieving reasonable predictive performance. We emphasise the potential benefits and weaknesses of each method before providing an outlook on promising directions for future research.

All welcome!

ML seminar, Wed 07 Aug, 3:00pm

News, Seminar.

Machine Learning seminar

When: Wed, 07 August 2019, 3:00pm
Where: AG22, College Building

Who: Alessandro Daniele; Fondazione Bruno Kessler (Trento, Italy)

Title: Knowledge Enhanced Neural Networks

Abstract: We propose Knowledge Enhanced Neural Networks (KENN), an architecture for injecting prior knowledge, codified by a set of logical clauses, into a neural network. In KENN, clauses are directly incorporated into the structure of the neural network as a new layer that includes a set of additional learnable parameters, called clause weights. As a consequence, KENN can learn the level of satisfiability to impose in the final classification. When training data contradicts a constraint, KENN learns to ignore it, making the system robust to the presence of wrong knowledge. Moreover, the method returns learned clause weights, which give us information about the influence of each constraint on the final predictions, increasing the interpretability of the model. We evaluated KENN on two standard datasets for multilabel classification, showing that the injection of clauses automatically extracted from the training data noticeably improves performance. Furthermore, we apply KENN to the problem of finding relationships between detected objects in images by adopting manually curated clauses. The evaluation shows that KENN outperforms state-of-the-art methods on this task.
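The clause layer described above can be illustrated with a toy sketch. Everything below (the function names, the softmax-based boost, and the restriction to positive literals) is my assumption based on the general idea in the abstract, not code from the talk:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def kenn_boost(preactivations, clause_weight):
    """One KENN-style clause enhancement (sketch): each literal of a
    disjunctive clause receives a positive boost, largest for the
    literal that is already easiest to satisfy, scaled by a learnable
    clause weight.  (The real layer also handles negated literals,
    which receive negative deltas.)"""
    return preactivations + clause_weight * softmax(preactivations)

# pre-activations of the three literals of one clause
z = np.array([0.2, -1.0, 1.5])
z_enhanced = kenn_boost(z, clause_weight=0.8)
```

If the clause weight is learned to be near zero, the clause is effectively ignored, which is how the architecture can stay robust to wrong knowledge.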

Bio: Alessandro Daniele received his master’s degree in Computer Science from Università degli Studi di Padova in 2014. At the end of 2014 he started working for a private company, focusing on the development of Business Intelligence software. In 2015 he worked as a research fellow at the CRIBI Biotechnology Center at Università degli Studi di Padova, continuing his master’s thesis work on Multiple Sequence Alignment, a well-known problem in Bioinformatics.
In 2016 he started his PhD at Università degli Studi di Firenze and at the Data Knowledge and Management (DKM) group at Fondazione Bruno Kessler (Trento, Italy). His main research interest is in Machine Learning and its applications, with a particular focus on Neural-Symbolic Integration.

All welcome!

ML seminar, Wed 19 June, 2:00pm

News, Seminar.

Machine Learning seminar

When: Wed, 19 June 2019, 2:00pm
Where: A225, College Building

Who: Adam White; City, University of London.

Title: Measurable Counterfactual Explanations for Any Classifier

Abstract: The predictions of machine learning systems need to be explainable to the individuals they affect. Yet the inner workings of many machine learning systems seem unavoidably opaque. In this talk we will introduce a new system Counterfactual Local Explanations viA Regression (CLEAR). CLEAR is based on the view that a satisfactory explanation of a prediction needs to both explain the value of that prediction and answer ‘what-if-things-had-been-different’ questions. Furthermore, it must also be measurable and state how well it explains a machine learning system. It must know what it does not know. CLEAR generates counterfactuals that specify the minimum changes necessary to flip a prediction’s classification. It then builds local regression models, using the counterfactuals to measure and improve the fidelity of its regressions. By contrast, the popular LIME method, which also uses regression to generate local explanations, neither measures its own fidelity nor generates counterfactuals. When applied to multi-layer perceptrons trained on four datasets, CLEAR improves on the fidelity of LIME by approximately 40%.
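The counterfactual step described above can be sketched in a few lines. The toy logistic model, the function names, and the one-feature line search are illustrative assumptions, not the actual CLEAR implementation (which additionally fits local regressions and measures their fidelity):

```python
import numpy as np

def predict_proba(x, w, b):
    """Toy logistic classifier standing in for the black box
    (CLEAR itself is model-agnostic; this model is an assumption)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, w, b, feature, step=0.01, max_steps=10000):
    """Minimal-change counterfactual (sketch): nudge one feature until
    the classification flips across the 0.5 boundary."""
    start = predict_proba(x, w, b) >= 0.5
    # move the feature in the direction that pushes the score
    # towards the opposite class
    delta = (-step if start else step) * np.sign(w[feature])
    x_cf = x.astype(float).copy()
    for _ in range(max_steps):
        x_cf[feature] += delta
        if (predict_proba(x_cf, w, b) >= 0.5) != start:
            return x_cf
    return None  # no flip found within the search budget

w, b = np.array([2.0, -1.0]), -0.5
x = np.array([1.0, 0.5])              # classified positive
x_cf = counterfactual(x, w, b, feature=0)
```

The returned point specifies the minimum change to that feature needed to flip the prediction; in CLEAR such counterfactuals are then used to test how faithfully a local regression reproduces the classifier’s boundary.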

Bio: Adam White is currently working as a Research Assistant at City, University of London. His research interests are in explainable AI and causality. Adam received a PhD in philosophy of science from the London School of Economics in 2017. His PhD thesis was on the causal discovery of nonlinear dynamics in biochemistry. He then completed the MSc in Data Science at City in 2017/2018. Adam worked for 17 years as an Operational Research analyst in British Airways and Barclays Bank.

All welcome!

ML seminar, Tue 28 May, 3:30pm

News, Seminar.

Machine Learning seminar

When: Tue, 28 May 2019, 3:30pm
Where: AG07b, College Building

Who: Marco Gori, University of Siena, Italy.

Title: The Principle of Least Cognitive Action

Abstract: In this talk we introduce the principle of Least Cognitive Action with the purpose of understanding perceptual learning processes. The principle closely parallels related approaches in physics, and suggests regarding neural networks as systems whose weights are Lagrangian variables, namely functions depending on time. Interestingly, neural networks “conquer their own life” and there is no neat distinction between learning and testing; their behavior is characterized by the stationarity of the cognitive action, an appropriate functional which contains a potential and a kinetic term. While the potential term is somewhat related to the loss function used in supervised and unsupervised learning, the kinetic term represents the energy connected with the velocity of weight change. Unlike traditional gradient descent, the stationarity of the cognitive action yields differential equations in the connection weights, and gives rise to a dissipative process which is needed to yield ordered configurations. We give conditions under which this learning process reduces to stochastic gradient descent and to Backpropagation. We give examples on supervised and unsupervised learning, and briefly discuss the application to deep convolutional neural networks, where an appropriate Lagrangian term is used to enforce motion invariance in the visual feature extraction.
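For readers who want to connect this to the calculus of variations, a minimal sketch of such a functional is given below. The signs and the exponential discount follow the classical-mechanics convention and are my assumption; the paper’s exact form may differ:

```latex
% w(t): network weights as functions of time (Lagrangian variables)
% V(w,t): potential term, playing the role of a loss
% (m/2)||\dot w||^2: kinetic term, the energy of weight change
\begin{equation*}
  A[w] \;=\; \int_0^T e^{\theta t}
     \left( \frac{m}{2}\,\bigl\|\dot w(t)\bigr\|^2 \;-\; V\bigl(w(t),t\bigr) \right) dt
\end{equation*}
% Stationarity (Euler--Lagrange) gives second-order, dissipative
% dynamics in the weights,
%   m\,\ddot w + m\theta\,\dot w + \nabla_w V = 0,
% which for strong dissipation (large \theta) reduces to
% gradient-descent-like motion \dot w \propto -\nabla_w V.
```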

Bio: Marco Gori received the Ph.D. degree in 1990 from Università di Bologna, Italy, working partly at the School of Computer Science (McGill University, Montreal). In 1992, he became an Associate Professor of Computer Science at Università di Firenze and, in November 1995, he joined the Università di Siena, where he is currently a full professor of computer science.
His main interests are in machine learning with applications to pattern recognition, Web mining, and game playing. He is especially interested in bridging logic and learning and in the connections between symbolic and sub-symbolic representation of information. He was the leader of the WebCrow project for automatic solving of crosswords, which outperformed human competitors in an official competition held during the ECAI-06 conference. As a follow-up to this grand challenge he founded QuestIt, a spin-off company of the University of Siena working in the field of question answering. He is co-author of “Web Dragons: Inside the Myths of Search Engine Technology” (Morgan Kaufmann/Elsevier, 2006) and “Machine Learning: A Constraint-Based Approach” (Morgan Kaufmann/Elsevier, 2018).
Dr. Gori serves (and has served) as an Associate Editor of a number of technical journals related to his areas of expertise, has received best paper awards, and has been a keynote speaker at a number of international conferences. He was the Chairman of the Italian Chapter of the IEEE Computational Intelligence Society, and the President of the Italian Association for Artificial Intelligence.
He is a fellow of the IEEE, ECCAI and IAPR. He is in the list of top Italian scientists maintained by the VIA-Academy (http://www.topitalianscientists.org/top_italian_scientists.aspx).

All welcome!

ML seminar, Fri 17 May, 2pm

News, Seminar.

Machine Learning seminar

When: Fri, 17 May 2019, 2pm
Where: AG03, College Building

Who: Wang-Zhou Dai, Imperial College London.

Title: Bridging Machine Learning and Logical Reasoning by Abductive Learning

Abstract: Perception and reasoning are two representative abilities of intelligence that are integrated seamlessly during problem-solving processes. In the area of artificial intelligence (AI), perception is usually realised by machine learning and reasoning is often formalised by logic programming. However, the two categories of techniques were developed separately throughout most of the history of AI. This talk will introduce the abductive learning framework targeted at unifying the two AI paradigms in a mutually beneficial way. In this framework, machine learning models learn to perceive primitive logical facts from the raw data, while logical reasoning is able to correct the wrongly perceived facts for improving the machine learning models. We demonstrate that by using the abductive learning framework, computers can learn to recognise numbers and resolve equations with unknown arithmetic operations simultaneously from images of simple hand-written equations. Moreover, the learned models can be generalized to complex equations and adapted to different tasks, which is beyond the capability of state-of-the-art deep learning models.
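The correction loop described above — logic repairing wrongly perceived facts — can be sketched for the handwritten-equation example. The function names, the single-symbol repair strategy, and the restriction to one-digit sums are my illustrative assumptions, not the framework’s actual implementation:

```python
from itertools import product

def abduce(perceived, alphabet='0123456789'):
    """Abduction step (sketch): the perception model has read three
    digit images as an equation a + b = c.  If the reading is
    arithmetically inconsistent, search for a single-symbol revision
    that restores consistency.  (Real abductive learning would pick
    the revision the perception model itself finds most probable,
    and then retrain the perception model on the repaired labels.)"""
    a, b, c = perceived
    if int(a) + int(b) == int(c):
        return perceived          # already consistent
    for i, sym in product(range(3), alphabet):
        cand = list(perceived)
        cand[i] = sym
        if int(cand[0]) + int(cand[1]) == int(cand[2]):
            return tuple(cand)    # first single-symbol repair found
    return None

# perception misread the images of "1 + 1 = 2" as 1 + 1 = 3
repaired = abduce(('1', '1', '3'))
```

The repaired facts then serve as corrected training targets for the perception model, which is the mutually beneficial loop the abstract describes.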

Bio: Wang-Zhou Dai is a research associate in the Department of Computing, Imperial College London. He completed his PhD in machine learning at Nanjing University in 2019 and his undergraduate studies in applied maths at Northwestern Polytechnical University in 2010. His research interests lie in the area of Artificial Intelligence and machine learning, especially in applying first-order logical background knowledge in general machine learning techniques. He has published multiple research papers at major conferences and in journals in AI and machine learning, including AAAI, ILP, ICDM, ACML and Machine Learning. He was awarded the IBM PhD Fellowship and the Google Excellence Scholarship during his PhD study, and he serves as a PC member and reviewer for many top AI and machine learning conferences, including IJCAI, AAAI, NeurIPS, ICML, ACML, PRICAI and PAKDD.

All welcome!

ML seminar, Wed 3 Apr, 2pm

News, Seminar.

Machine Learning seminar

When: Wed, 3 Apr 2019, 2pm
Where: A226, College Building

Who: Derek Doran, Wright State University.

Title: Mappers and Manifolds Matter!

Abstract: Topological Data Analysis (TDA) is a branch of data science that estimates and then exploits the “shape” of a dataset for downstream characterization and inference. TDA methods are rising in popularity in the ML community as a tool to theoretically understand the actions of deep neural nets and other algorithms through connections to the Manifold Hypothesis. TDA methods, and in particular the Mapper algorithm, are also finding increased use in applied data science workflows. This talk will introduce the essential definitions and notions from topology needed for audience members to jump into the field of TDA and to build Mappers of datasets. It will then demonstrate the utility of Mapper, and TDA methods generally, for data science tasks including complex data visualization, interpretable dimensionality reduction, and explainable deep learning. The application of Mapper to dataset characterization and semi-supervised learning will also be illustrated.
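The Mapper construction mentioned above — cover the data with overlapping regions of a lens function, cluster within each region, and connect clusters that share points — can be sketched in miniature. The function name, parameters, and the naive gap-based clustering are my simplifications for illustration:

```python
import numpy as np

def mapper_1d(lens, n_intervals=4, overlap=0.3, gap=1.0):
    """Minimal 1-D Mapper (sketch): cover the range of the lens values
    with overlapping intervals, cluster the points inside each interval
    (here by splitting at gaps larger than `gap` — a stand-in for a
    real clustering algorithm), and add an edge wherever two clusters
    share points.  The resulting graph summarises the data's shape."""
    lo, hi = lens.min(), lens.max()
    length = (hi - lo) / n_intervals
    nodes = []                         # each node: a frozenset of point indices
    for i in range(n_intervals):
        a = lo + i * length - overlap * length
        b = lo + (i + 1) * length + overlap * length
        idx = np.where((lens >= a) & (lens <= b))[0]
        if len(idx) == 0:
            continue
        order = idx[np.argsort(lens[idx])]
        cluster = [order[0]]
        for j in order[1:]:
            if lens[j] - lens[cluster[-1]] > gap:   # gap -> new cluster
                nodes.append(frozenset(cluster))
                cluster = []
            cluster.append(j)
        nodes.append(frozenset(cluster))
    edges = {(u, v) for u in range(len(nodes))
             for v in range(u + 1, len(nodes)) if nodes[u] & nodes[v]}
    return nodes, edges

# three well-separated groups of 1-D points, lens = identity
data = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 10.0, 10.1])
nodes, edges = mapper_1d(data, n_intervals=2)
```

On this toy data the overlap makes the middle group appear in both intervals, producing one edge in the Mapper graph; in practice the lens is a projection such as a density estimate or a neural-net activation.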

Bio: Derek Doran is an Associate Professor of Computer Science and Engineering at Wright State University, Dayton OH, USA. His research interests are in machine learning and complex systems analysis, with current emphasis on topological methods for explainable AI, deep learning, complex network analysis, and web and geospatial systems mining. He is an author of over 75 publications on these topics, four of which have been recognized with best paper awards and nominations, is an author on multiple patents, and has published a book under the Springer Briefs in Complexity series. He serves on the program committee of major AI and Web conferences, is on the Editorial Board of Social Network Analysis and Mining and has served as process improvement chair at ESWC. Dr. Doran is a National Science Foundation EAPSI Fellow, a graduate research awardee of the U.S. Transportation Research Board, and a twice summer alumnus of Bell Labs. He will be a Fulbright Fellow stationed in Reykjavik University effective January 2020. Please see more information at https://derk–.github.io.

All welcome!

ML seminar, Wed 13 Mar, 2pm

News, Seminar.

Machine Learning seminar

When: Wed, 13 Mar 2019, 2pm
Where: A226, College Building

Who: Robin Manhaeve, Katholieke Universiteit Leuven, Belgium.

Title: DeepProbLog: Neural Probabilistic Logic Programming

Abstract: We introduce DeepProbLog, a probabilistic logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques can be adapted for the new language. Our experiments demonstrate that DeepProbLog supports (i) both symbolic and subsymbolic representations and inference, (ii) program induction, (iii) probabilistic (logic) programming, and (iv) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive probabilistic-logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples.
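A flavour of what a neural predicate buys you can be given with the classic MNIST-addition example often used with DeepProbLog. The brute-force enumeration below is my simplification of the system’s exact inference, and the “neural” outputs are hand-made stand-ins:

```python
import numpy as np

def prob_addition(p1, p2, total):
    """Probability that two digit images sum to `total`, given the
    neural predicate's output distributions p1 and p2 over digits 0-9.
    Marginalises over all consistent digit pairs; in DeepProbLog the
    same quantity is differentiable, so gradients flow back into the
    networks that produced p1 and p2."""
    return sum(p1[a] * p2[b]
               for a in range(10) for b in range(10) if a + b == total)

# toy "neural" outputs: nearly certain the images show 3 and 5
p1 = np.full(10, 0.01); p1[3] = 0.91
p2 = np.full(10, 0.01); p2[5] = 0.91
p_eight = prob_addition(p1, p2, 8)
```

Training supervises only the sum (e.g. “these two images add to 8”), and the networks learn to recognise individual digits as a by-product — the end-to-end integration the abstract claims.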

Bio: Robin Manhaeve is a PhD student at the Department of Computer Science at KU Leuven. In 2017, he completed his MSc in Engineering Science: Computer Science at KU Leuven. He is currently researching the integration of Deep Learning and Probabilistic Logic Programming under the supervision of Prof. Luc De Raedt, and is funded by an SB grant from the Research Foundation – Flanders (FWO).

All welcome!

Find us

City, University of London

Northampton Square

London EC1V 0HB

United Kingdom
