Professor Alessandra Russo, Fellow of the British Computer Society (FBCS), leads the “Structured and Probabilistic Intelligent Knowledge Engineering” (SPIKE) research group at the Department of Computing, Imperial College London, where she was conferred the title of Professor in Applied Computational Logic in September 2016. Her research expertise covers knowledge-based inference, symbolic learning, modeling the behavior of complex systems, algorithms and systems for distributed inference, probabilistic learning and adaptation, and probabilistic programming. In collaboration with members of her research group, she has pioneered new logic-based machine learning algorithms and systems, some of which are among the current state-of-the-art learning systems. She also has an established track record in applying these systems to security, privacy, policy-based management systems, and software engineering in general. Prof. Russo has co-authored two research monographs and published over 130 articles in flagship conferences and high-impact journals in the areas of Artificial Intelligence and Software Engineering. She is currently a Senior PC member of IJCAI 2019, co-chair of ESEC/FSE 2019, and a PC member for AAAI, KR, ILP, ICLP, and ICSE. She was Editor-in-Chief of IET Software from 2006 to 2016 and Associate Editor of ACM Computing Surveys from 2013 to 2016.
Dr Mark Law is a research associate in the Department of Computing, Imperial College London, where he recently completed his PhD in the area of logic-based machine learning. He is the author of several journal publications on logic-based machine learning. During his PhD, Mark developed ILASP (Inductive Learning of Answer Set Programs), a state-of-the-art system for learning Answer Set Programs (ASP). Mark has applied ILASP to a wide range of application areas, including user preference learning for scheduling and route planning, policy learning, and learning game rules and strategies. Mark’s current research interests include Inductive Logic Programming, Answer Set Programming, and Preference Learning.
Dr Krysia Broda is a Senior Lecturer at Imperial College London. Her research interests currently lie in combining symbolic and neural learning and reasoning, and in how this combination aids interpretability; this builds on her recent work in inductive logic programming. Previously, she worked on theorem proving (particularly tableau methods), hybrid connectionist networks, logic production systems, and teleo-reactive programming. She is the co-author of an undergraduate textbook on program reasoning and two research monographs, as well as over 100 research papers. Dr Broda has served on numerous program committees over the last two decades. For four years she was Programme Director of the HiPEDS Centre for Doctoral Training, and she is passionate about giving PhD students opportunities for networking and for developing their communication skills. She has supervised and co-supervised more than 20 PhD students.
In this lecture, we will first introduce today’s large knowledge bases — both academic projects and industrial projects at Microsoft, IBM, Google, and Apple. We will discuss in particular how knowledge is represented in these knowledge bases: with entities, classes, relations, and links between them. We will then dive into the topic of rule mining: automatically finding patterns in the data such as “If two people are married, they most likely live in the same place”. These rules can serve not just to correct the data, but also to complete it, and, most importantly, to understand it.
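The confidence of a rule such as the one above can be sketched in a few lines of Python. Everything below — the facts, the names, and the `rule_confidence` helper — is invented for illustration, not a system covered in the lecture:

```python
# Toy rule mining example: estimating the confidence of the rule
#   marriedTo(x, y) AND livesIn(x, z)  =>  livesIn(y, z)
# Confidence = (body matches where the head also holds) / (all body matches).

facts = {
    ("Ann",  "marriedTo", "Bob"),
    ("Ann",  "livesIn",   "Paris"),
    ("Bob",  "livesIn",   "Paris"),
    ("Carl", "marriedTo", "Dana"),
    ("Carl", "livesIn",   "Rome"),
    ("Dana", "livesIn",   "Milan"),
}

def rule_confidence(facts):
    body_matches = head_matches = 0
    for (x, r1, y) in facts:
        if r1 != "marriedTo":
            continue
        for (x2, r2, z) in facts:
            if r2 == "livesIn" and x2 == x:
                body_matches += 1          # marriedTo(x, y) and livesIn(x, z) hold
                if (y, "livesIn", z) in facts:
                    head_matches += 1      # ...and livesIn(y, z) holds too
    return head_matches / body_matches if body_matches else 0.0

print(rule_confidence(facts))  # 0.5: the rule holds for Ann/Bob but not for Carl/Dana
```

A confidence below 1 is exactly what the phrase “most likely” in the rule expresses: the pattern holds for most, but not all, matching facts.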
Fabian M. Suchanek is a full professor at Télécom ParisTech in Paris. Fabian developed, inter alia, the YAGO knowledge base, one of the largest public general-purpose knowledge bases, which earned him an honorable mention for the SIGMOD dissertation award. His interests include information extraction, automated reasoning, and knowledge bases. Fabian has published around 80 scientific articles, among others at ISWC, VLDB, SIGMOD, WWW, CIKM, ICDE, and SIGIR, and his work has been cited more than 8,000 times.
Many tasks often regarded as requiring some form of intelligence to perform can be seen as instances of query answering over a semantically rich knowledge base. In this context, two of the main problems that arise are: (i) uncertainty, including both inherent uncertainty (such as events involving the weather) and uncertainty arising from lack of sufficient knowledge; and (ii) inconsistency, which involves dealing with conflicting knowledge. These unavoidable characteristics of real-world knowledge often yield complex models of reasoning; assuming these models are mostly used by humans as decision-support systems, meaningful explainability of their results is a critical feature. This course is divided into two parts, one for each of these basic issues. In Part 1, we present basic probabilistic graphical models and discuss how they can be incorporated into powerful ontological languages; in Part 2, we discuss both classical inconsistency-tolerant semantics for ontological query answering based on the concept of repair, and other semantics that aim towards more flexible yet principled ways to handle inconsistency. Finally, in both parts we ponder the issue of deriving different kinds of explanations that can be attached to query results.
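The repair-based semantics mentioned for Part 2 can be illustrated with a toy example: a repair is a maximal subset of the facts that satisfies the constraints, and (under the so-called AR semantics) an answer is certain if it holds in every repair. The facts, the constraint, and the helper names below are all invented for illustration:

```python
from itertools import combinations

facts = [
    ("alice", "bornIn", "paris"),
    ("alice", "bornIn", "rome"),    # conflicts with the fact above
    ("bob",   "bornIn", "berlin"),
]

def consistent(subset):
    # Denial constraint: each person has at most one birthplace.
    places = {}
    for (person, _, city) in subset:
        if places.setdefault(person, city) != city:
            return False
    return True

def repairs(facts):
    # Maximal consistent subsets of the facts, found largest-first.
    result = []
    for size in range(len(facts), -1, -1):
        for subset in combinations(facts, size):
            if consistent(subset) and not any(set(subset) <= set(r) for r in result):
                result.append(subset)
    return result

def ar_entailed(fact, facts):
    # AR semantics: a fact is entailed iff it holds in every repair.
    return all(fact in r for r in repairs(facts))

print(len(repairs(facts)))                               # 2 repairs
print(ar_entailed(("bob", "bornIn", "berlin"), facts))   # True: survives every repair
print(ar_entailed(("alice", "bornIn", "paris"), facts))  # False: only in one repair
```

The brute-force enumeration is exponential and serves only to make the definition concrete; the course discusses principled semantics and algorithms for this problem.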
Maria Vanina Martinez received her PhD from the University of Maryland, College Park in 2011 and carried out her postdoctoral studies at the University of Oxford in the Information Systems Group. Her main research interests lie at the intersection of Knowledge Representation and Reasoning and Databases; lately she has been working on problems related to uncertain ontological query answering on the Web and preference-based reasoning. Martinez is currently an Adjunct Researcher at CONICET, as a member of the Institute for Research in Computer Science, and an assistant professor in the Department of Computer Science at the University of Buenos Aires in Argentina. In 2017 she was selected as a speaker in the Early Career Spotlight track at IJCAI, where young researchers present their perspectives on the future of Artificial Intelligence. In 2018 she was selected by IEEE Intelligent Systems as one of ten prominent researchers in AI to watch. Recently, she has participated in several events related to ethics in AI, including the panel "Artificial Intelligence: Ethical, Social and Human Rights Challenges" at the T20 Argentina summit organized by UNESCO, and several seminars and events on AI and ethics (the challenge of lethal autonomous weapons) funded by the Campaign to Stop Killer Robots and the Human Security Network in Latin America and the Caribbean (SEHLAC). She currently serves as co-chair of NMR 2020 (International Workshop on Non-monotonic Reasoning).
Gerardo I. Simari is an assistant professor at Universidad Nacional del Sur in Bahía Blanca, and an adjunct researcher at CONICET, Argentina. His research focuses on topics within AI and Databases, with a focus on reasoning under uncertainty. He received a PhD in computer science from the University of Maryland, College Park in 2010; after his doctoral degree, he obtained a postdoctoral researcher position in the Department of Computer Science, University of Oxford (UK), and one year later secured a position as Senior Researcher in the same department. Simari was selected as one of IEEE Intelligent Systems' "AI's Ten to Watch" for 2016, was awarded Best Paper prizes at FOSINT-SI 2016 and RuleML 2015 and the Best Student Paper prize at ICLP 2009, and received an honorable mention for the IJAR Young Researcher Award 2011 for ongoing research in Probabilistic Logic. He currently serves on the editorial board of the Journal of Artificial Intelligence Research (JAIR), and is a former Fulford Junior Research Fellow of Somerville College, University of Oxford.
Model-based approaches to AI are in principle well suited to explainability, given the explicit nature of their world knowledge and of the reasoning performed to take decisions. AI Planning in particular is relevant in this context as a generic approach to action-decision problems. Indeed, explainable AI Planning (XAIP) has received interest for more than a decade, and has recently been gathering speed along with the general trend towards explainable AI.
The lecture offers an introduction to the nascent XAIP area. The first half provides an overview, categorizing and illustrating the different kinds of explanation relevant in AI Planning and placing previous work in this context. The second half of the lecture goes more deeply into one particular kind of XAIP, contrastive explanation, aimed at answering user questions of the kind "Why do you suggest doing A here, rather than B (which seems more appropriate to me)?". Answers to such questions take the form of reasons why A is preferable over B. Covering recent work by the lecturers towards this end, we set up a formal framework that allows such answers to be provided in a systematic way; we instantiate that framework with the special case of questions about goal-conjunction achievability in oversubscription planning (where not all goals can be achieved and a trade-off must therefore be found); and we discuss the compilation of more powerful question languages into that special case. Linking to the state of the art in research on effective planning methods, we briefly cover recent techniques for nogood learning in state-space search, a key enabler of efficiency in the suggested analyses.
Jörg Hoffmann obtained a PhD from the University of Freiburg, Germany, with a thesis that won the ECCAI Dissertation Award 2002. After positions at the Max Planck Institute for Computer Science (Saarbrücken, Germany), the University of Innsbruck (Austria), SAP Research (Karlsruhe, Germany), and INRIA (Nancy, France), he is now a Professor of Computer Science at Saarland University, Saarbrücken, Germany. He has published more than 100 scientific papers, was Program Co-Chair of the AAAI'12 Conference on AI, and has received four Best Paper Awards from the International Conference on Automated Planning and Scheduling, as well as the IJCAI-JAIR Best Paper Prize 2005. His core research area is AI automated planning, but he has also worked in related areas including model checking, semantic web services and business process management, Markov decision processes, natural language generation, and network security testing.
Daniele Magazzeni is Associate Professor of Artificial Intelligence at King’s College London, where he leads the Trusted Autonomous Systems hub and is Co-Director of the Centre for Doctoral Training in Safe and Trusted AI. Dan's research interests are in Safe, Trusted and Explainable AI, with a particular focus on AI planning for robotics and autonomous systems, and on human-AI teaming. Dan is the President-Elect of the ICAPS Executive Council. He was Conference Chair of ICAPS 2016 and Workshop Chair of IJCAI 2017. He is Co-Chair of the IJCAI-19 Workshop on XAI and Co-Chair of the ICAPS-19 Workshop on Explainable Planning.
Pierre Senellart is a Professor in the Computer Science Department at the École normale supérieure (ENS, PSL University) in Paris, France, and an Adjunct Professor at Télécom ParisTech. He is an alumnus of ENS and obtained his M.Sc. (2003) and Ph.D. (2007) in computer science from Université Paris-Sud under the supervision of Serge Abiteboul. Before joining ENS, he was an Associate Professor (2008–2013) and then a Professor (2013–2016) at Télécom ParisTech. He also held secondary appointments as a Lecturer at the University of Hong Kong in 2012–2013 and as a Senior Research Fellow at the National University of Singapore from 2014 to 2016. His research interests center on practical and theoretical aspects of Web data management, including Web crawling and archiving, Web information extraction, uncertainty management, Web mining, and intensional data management.
Sebastian obtained a PhD in Mathematics from TU Dresden in 2006, before joining Rudi Studer’s Knowledge Management Group in Karlsruhe, where he received his habilitation in Computer Science in 2011. Since 2013, he has been Full Professor of Computational Logic at TU Dresden. His main research interests include Artificial Intelligence (especially Knowledge Representation and Reasoning), Database Theory, and NLP, among others. Sebastian has co-authored several textbooks on Semantic Web technologies. He recently received an ERC Consolidator Grant in support of his research on the decidability boundaries of logic-based Knowledge Representation.
Bernhard Ganter is an emeritus professor of mathematics at Technische Universität Dresden. He obtained a PhD in Mathematics from TU Darmstadt in 1974. His research interests were initially in Discrete Mathematics, Universal Algebra, and Mathematical Music Theory. Later he was a member of the team around Rudolf Wille that developed Formal Concept Analysis. As full professor at TU Dresden he was a member of both the Mathematics and the Computer Science Faculty. He is an author of several monographs on Formal Concept Analysis.
Gerd Stumme is Full Professor of Computer Science, leading the Chair on Knowledge and Data Engineering at the University of Kassel. He earned his PhD in 1997 at Darmstadt University of Technology and his Habilitation at the Institute AIFB of the University of Karlsruhe in 2002. In 1999/2000 he was Visiting Professor at the University of Clermont-Ferrand, France, and in 2003 he was Substitute Professor for Machine Learning and Knowledge Discovery at the University of Magdeburg. His research group in Kassel runs the social bookmark and publication sharing system BibSonomy.
Computational methods are emerging that promise to work well despite being based on uncertain information. In order to explain how well they work, probabilistic models need to be built, maintained, and analysed. Markov models are very common behavioural models for studying such operational phenomena. They are often represented with a discrete state space and come in various conceptual flavours, overarched by Markov automata. As such, Markov automata provide the prime ingredients for studying a wide range of quantitative properties related to risk, cost, performance, and strategy.
This tutorial gives an introduction to Markov automata modelling and analysis. We work with the language Modest to represent Markov automata and the various ingredients. We start off by discussing discrete-time Markov models, explaining the underlying concepts and features, and afterwards turn our attention to continuous time. We discuss compositional model construction and verification algorithms, and finally provide a survey of tool support and applications.
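As a warm-up for the discrete-time part, the following sketch (plain Python, not Modest syntax; the chain and helper names are invented) computes the transient state distribution of a small discrete-time Markov chain by repeated vector-matrix multiplication:

```python
# A three-state discrete-time Markov chain with an absorbing state,
# analysed by propagating the state distribution step by step.

P = [  # P[i][j] = probability of moving from state i to state j
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.5],
    [0.0, 0.0, 1.0],  # state 2 is absorbing
]

def step(dist, P):
    """One step of the chain: dist' = dist * P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]  # start deterministically in state 0
for _ in range(3):
    dist = step(dist, P)

print(dist)  # [0.125, 0.375, 0.5]: half the probability mass is absorbed after 3 steps
```

Properties such as "the probability of reaching state 2 within k steps" fall out of exactly this kind of computation; continuous-time models and nondeterminism, as combined in Markov automata, require the richer algorithms surveyed in the tutorial.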
Arnd Hartmanns is an assistant professor in the Formal Methods and Tools group at the University of Twente in Enschede, the Netherlands. He studied at Saarland University, Germany, and Linköping University, Sweden, and received his Ph.D. in computer science from Saarland University in 2015. His research focuses on formal modelling and verification techniques for stochastic timed and cyber-physical systems. He leads the development of the Modest Toolset, which has been used for the design, testing, verification, and optimisation of system designs ranging from networks on chips to control strategies for power microgrids. He has co-chaired the artifact evaluation committees of TACAS 2018 and 2020, and organised the QComp 2019 verification tool competition.
Holger Hermanns is Full Professor in Computer Science at Saarland University, Saarbrücken, Germany and Distinguished Professor at the Institute of Intelligent Software, Guangzhou, China. His research interests include modeling and verification of concurrent systems, resource-aware embedded systems, compositional performance and dependability evaluation, and their applications to energy informatics. He has authored or co-authored more than 200 peer-reviewed scientific papers (h-index 92), co-chaired the program committees of CAV, CONCUR, TACAS and QEST, and serves on the steering committees of ETAPS and TACAS. He is an ERC Advanced Grantee and member of Academia Europaea.
Description Logics (DLs) are a family of languages designed to represent conceptual knowledge in a formal way as a set of ontological axioms. DLs provide a formal foundation of the ontology language OWL, which is a W3C standardized language to represent information in Web applications. The main computational problem in DLs is finding relevant consequences of the information stored in ontologies, e.g., to answer user queries. Unlike related methods, such as those based on machine learning, the notion of consequence is well-defined using a formal logic-based semantics. On the one hand, the semantics provides a baseline for comparison of ontology tools, as each tool must return the same answers to the same query. On the other hand, the semantics can be used to explain to the user why certain consequences hold or do not hold, which can be used, e.g., for debugging of ontologies. In this lecture course, we give an overview of practical reasoning algorithms for computing logic-based consequences of ontologies, and of methods for extracting explanations of the reasoning results computed using such algorithms.
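The idea of computing consequences together with material for explaining them can be illustrated on atomic concept inclusions. The axioms and the `saturate` helper below are invented toy examples, not the actual algorithms covered in the course:

```python
def saturate(axioms):
    """Close a set of atomic inclusions A <= B under transitivity, mapping each
    derived inclusion to one supporting set of axioms (a "justification" that
    can be shown to the user to explain why the consequence holds)."""
    proofs = {ax: {ax} for ax in axioms}
    changed = True
    while changed:
        changed = False
        for (a, b), pab in list(proofs.items()):
            for (c, d), pcd in list(proofs.items()):
                # From A <= B and B <= D, derive A <= D.
                if b == c and (a, d) not in proofs:
                    proofs[(a, d)] = pab | pcd
                    changed = True
    return proofs

proofs = saturate({("Penguin", "Bird"), ("Bird", "Animal")})
print(("Penguin", "Animal") in proofs)        # True: derived by chaining the axioms
print(sorted(proofs[("Penguin", "Animal")]))  # the justification: both input axioms
```

Real DL reasoners apply the same saturation principle to far more expressive inference rules, and explanation extraction amounts to tracing which axioms and rule applications produced a given consequence.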
Dr. Yevgeny Kazakov is a lecturer at the Institute of Artificial Intelligence, University of Ulm. His main research area is Knowledge Representation and Reasoning, and more specifically, automated reasoning in Description Logics. In 2006 he obtained his PhD in Computer Science from Saarland University, based on his work at the Max Planck Institute for Computer Science. After that he held research positions at the University of Manchester (2005-2007) and the University of Oxford (2008-2011), before joining the University of Ulm. While working in Ulm, he received a prestigious Heisenberg fellowship from the German Research Foundation (2012-2017). He has published over 50 scientific papers, one of which received a distinguished paper award at the International Joint Conference on Artificial Intelligence (2009). He co-developed the widely used ontology reasoner ELK, which has won multiple awards at competitions, including the Kurt Gödel Society medal at the FLoC Olympic Games in 2014.
Constraints are ubiquitous in Artificial Intelligence and Operations Research. They appear in logical problems like propositional satisfiability, in discrete problems like constraint satisfaction, and in full-fledged mathematical optimization tasks. Constraint learning enters the picture when the structure or the parameters of the constraint satisfaction or optimization problem to be solved are (partially) unknown and must be inferred from data. The required supervision may come from offline sources or be gathered by interacting with human domain experts and decision makers. With these lecture notes, we offer a brief but self-contained introduction to the core concepts of constraint learning, while sampling from the diverse spectrum of constraint learning methods, covering classic strategies and more recent advances. We will also discuss links to other areas of AI and machine learning, including concept learning, (statistical) relational learning, structured-output prediction, learning from queries, inverse (combinatorial) optimization, and preference elicitation.
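A minimal instance of this setting can be sketched as follows: given feasible assignments (positive examples) over numeric variables, induce the tightest interval constraint per variable that accepts all of them. The data and helper names below are invented for illustration:

```python
# Toy constraint learning: from positive examples, learn per-variable
# bounds lo_v <= x_v <= hi_v that are as tight as possible while still
# accepting every example.

def learn_intervals(examples):
    """Return {variable: (lo, hi)} fitted to the given feasible assignments."""
    variables = examples[0].keys()
    return {v: (min(e[v] for e in examples), max(e[v] for e in examples))
            for v in variables}

def satisfies(assignment, intervals):
    """Check a candidate assignment against the learned constraint."""
    return all(lo <= assignment[v] <= hi for v, (lo, hi) in intervals.items())

examples = [{"x": 1, "y": 4}, {"x": 3, "y": 2}]
intervals = learn_intervals(examples)          # {"x": (1, 3), "y": (2, 4)}

print(satisfies({"x": 2, "y": 3}, intervals))  # True: inside the learned box
print(satisfies({"x": 5, "y": 3}, intervals))  # False: violates the bound on x
```

This is the simplest possible hypothesis space (axis-aligned boxes, positive examples only); the lecture notes cover much richer constraint languages, negative examples, and interactive supervision.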
Stefano Teso received his Ph.D. in Computer Science from the University of Trento, Italy, in 2013. He worked as a postdoctoral researcher at the Fondazione Bruno Kessler, Trento (one year) and at the Computer Science department of the University of Trento (two years). He is currently a post-doc at the DTAI lab KU Leuven, Belgium. His interests include machine learning for structured and relational data, combining learning and constraint satisfaction/optimization, and interactive learning from human advice. He has published in top journals (AI Journal) and conferences (AAAI, IJCAI). Stefano won a Fondazione Caritro grant in 2014 for learning and reasoning over relational data in the tributary and administrative domains. He recently co-presented tutorials on constraint learning at AAAI and IJCAI.