Program Tutorials


Tutorials' Schedule:

Morning T02 T05 T12 T11 T13 T07 T03 T14
Afternoon T08 T06 T11 T04 T10 T01

Bridges with Combinatorial Game Theory (T01)

Reshef Meir

Urban Larsson

Location: Room: MB 3.265

We would like to introduce AAMAS audiences to an external topic that can motivate or use AAMAS research. The topic is Combinatorial Game Theory (CGT).
This is a half-day tutorial consisting of three talks, each covering one of the main topics of CGT. No background beyond basic knowledge of extensive-form games, and an interest in combinatorial aspects of games, is needed. We believe that the introduction of CGT is interesting and relevant to AAMAS, and we see a two-way bridge. (i) Importing the rich set of concepts and tools from CGT for use in economic, multi-agent situations. We demonstrate some such applications in the tutorial (e.g., via bidding games for bargaining) and mention other possible directions. (ii) Almost the entire current literature on combinatorial games is restricted to zero-sum games. We hope that researchers from the AAMAS community will pick up the challenge of extending the theory to more general games, inspired by realistic multiagent scenarios.
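As a flavour of the kind of result CGT offers, classical Sprague-Grundy theory tells us that a Nim position is a loss for the player to move exactly when the XOR (nim-sum) of the heap sizes is zero. A minimal illustration (not part of the tutorial materials):

```python
from functools import reduce

def nim_sum(heaps):
    # XOR of all heap sizes; zero means the player to move loses
    # under optimal play (Sprague-Grundy theory for Nim).
    return reduce(lambda a, b: a ^ b, heaps, 0)

print(nim_sum([1, 2, 3]))  # 0: losing position for the player to move
print(nim_sum([1, 4, 3]))  # 6: a winning move exists
```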

Distributed Ledger Technology and Multi-Agent Systems (T02)

Luke Riley

Grammateia Kotsialou

Patrick McCorry

Peter McBurney

Location: Room: MB 3.255

This tutorial seeks to introduce multi-agent systems researchers to the theoretical and implementation aspects of the emerging topic of Distributed Ledger Technologies, while outlining ways in which multi-agent systems research can be re-applied in this emerging domain. Briefly, Distributed Ledger Technologies (DLT), including blockchains, enable easy dissemination of data between self-interested agents in a tamper-proof way. Distributed ledgers achieve this without a trusted central coordinator, through a peer-to-peer network of agents who reach agreement on which data will be saved and shared (so each agent has a copy of the same accurate data). In this tutorial we will cover: (1) the basic definitions of Distributed Ledger Technology; (2) how agents reach agreement on what data is contained within the ledger; (3) example distributed ledger and multi-agent system crossover research; and (4) implementation issues for blockchain technology, including how to code your own blockchain program.
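As a rough illustration of point (4): a blockchain is, at heart, a list of blocks in which each block commits to the hash of its predecessor, so tampering anywhere breaks the chain. A minimal sketch (illustrative only; the tutorial's own code may differ):

```python
import hashlib
import json

def block_hash(block):
    # Serialize the block deterministically, then hash it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash}

# Each block commits to its predecessor's hash.
genesis = make_block("genesis", "0" * 64)
second = make_block("tx: A pays B 5", block_hash(genesis))

# Any agent holding a copy can detect tampering: changing the first
# block invalidates the hash stored in the second.
print(second["prev_hash"] == block_hash(genesis))  # True
genesis["data"] = "tampered"
print(second["prev_hash"] == block_hash(genesis))  # False
```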

Modelling Planning Tasks (T03)

Roman Bartak

Lukas Chrpa

Location: Room: MB 3.265

Research efforts in the Automated Planning community predominantly focus on developing novel planning techniques and incorporating and/or combining them into domain-independent planning engines that can be exploited in a wide range of real-world applications (e.g., space exploration, manufacturing, urban traffic control). In contrast to domain-dependent approaches, where one has to develop an algorithm for solving planning problems in a specific domain, the domain-independent approach provides a lot of flexibility by decoupling domain models and planning engines. To exploit domain-independent planning engines, one has to develop a planning domain model which, roughly speaking, describes the environment and the agent's actions.

Automated Planning can be exploited as an efficient tool for deliberative reasoning for single agents as well as for teams of multiple agents, which is one of the prominent areas of AAMAS. This tutorial targets AAMAS attendees who are interested in using domain-independent Automated Planning engines in their research. With regard to the domain modelling process, we will introduce the available "machinery", i.e., languages and knowledge engineering tools, that can be exploited, give a "walk-through" of the process, and share our practical experience with developing domain models for real-world applications. Attendees will gain a basic understanding of the domain modelling process, the tools they can exploit, and the challenges they will face. A basic level of knowledge of Automated Planning is recommended (on the level of an undergraduate AI course).
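To make the idea of a domain model concrete: a STRIPS-style model describes actions purely declaratively, as preconditions, add lists, and delete lists, so that any domain-independent engine can search over them. A toy sketch (the predicates and action names here are made up for illustration):

```python
# STRIPS-style domain model: each action has preconditions, an add list,
# and a delete list, all expressed as sets of ground facts.
actions = {
    "pick": {"pre": {"hand-empty", "on-table"},
             "add": {"holding"},
             "del": {"hand-empty", "on-table"}},
    "drop": {"pre": {"holding"},
             "add": {"hand-empty", "on-table"},
             "del": {"holding"}},
}

def applicable(state, name):
    # An action applies iff all its preconditions hold in the state.
    return actions[name]["pre"] <= state

def apply_action(state, name):
    a = actions[name]
    return (state - a["del"]) | a["add"]

s0 = frozenset({"hand-empty", "on-table"})
s1 = apply_action(s0, "pick")
print(applicable(s0, "pick"), "holding" in s1)  # True True
```

A planner never needs to know what "pick" means; it only manipulates these sets, which is exactly the decoupling of domain model and engine described above.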

Solving Games with Complex Strategy Spaces (T04)

Albert Xin Jiang

Fei Fang

Hau Chan

Location: Room: MB 3.255

A central problem of computational game theory is the computation of game-theoretic solution concepts given a description of the game. Classical results focused on solving finite games represented in normal form, where the strategies and utility functions are given explicitly in a tabular representation. However, in many real-world multi-agent domains, including infrastructure security, environmental protection, electronic commerce, and network routing, each agent needs to make a complex decision consisting of multiple components, such as choosing a path in a network, selecting a subset of targets to protect/attack, executing a patrol route for each patrol unit, bidding in multiple auctions, or taking actions in continuous areas. The resulting strategy space may consist of an exponential, or even infinite, number of pure strategies, and as a result the standard normal-form representation and its associated algorithms are inadequate.

This tutorial will summarize recent advances in developing efficient algorithms for games with complex strategy spaces, including the use of marginal probabilities, general frameworks for representing and solving games with structured strategy spaces, and the use of differentiable learning and (multi-agent) deep reinforcement learning. We will cover application domains ranging from infrastructure security to environmental and wildlife protection.
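To illustrate the marginal-probability idea: a defender placing k resources on n targets has C(n, k) pure strategies, but a mixed strategy can often be summarized by just n marginal coverage probabilities, one per target. A small sketch:

```python
from fractions import Fraction
from itertools import combinations

# Defender with k units over n targets: C(n, k) pure strategies.
n, k = 5, 2
pures = list(combinations(range(n), k))  # 10 pure strategies
uniform = {p: Fraction(1, len(pures)) for p in pures}

# Marginal coverage: probability each target is protected under the mix.
marginals = [sum(pr for p, pr in uniform.items() if t in p)
             for t in range(n)]

print(marginals == [Fraction(k, n)] * n)  # True: each target covered w.p. k/n
print(sum(marginals) == k)                # True: marginals sum to k
```

Working with the n marginals instead of the exponentially many pure strategies is what makes many security-game algorithms scale.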

Optimization & Learning Approaches to Resource Allocation for Social Good (T05)

Sanmay Das

John P. Dickerson

Bryan Wilder

Location: Room: MB 3.265

Societies around the world face an array of difficult challenges: preventing and treating disease, confronting poverty and homelessness, and a range of other issues impacting billions of people. In response, governments and communities deploy interventions addressing these problems (e.g., outreach campaigns to enroll patients in treatment or offering subsidized public housing). However, these interventions are always subject to limited resources and are deployed under considerable uncertainty about properties of the system; deciding manually on the best way to deploy an intervention is extremely difficult. At the same time, research in artificial intelligence and multiagent systems has witnessed incredible growth, providing us with unprecedented computational tools with which to contribute to solving societal problems. This tutorial will introduce multiagent systems students and researchers to the use of techniques from optimization and machine learning to enhance the delivery of policy or community-level interventions aimed at addressing social challenges. We will focus in particular on three application areas: public health, social work, and healthcare. On a technical level, the tutorial will introduce methods for aggregating value judgments from multiple agents about an intervention’s goals, discuss the creation of agents which can learn and plan under uncertainty to aid in resource allocation, and showcase examples of how these techniques are used in concrete, deployed applications. The goal of this tutorial is to provide a unified view of computational methods for resource allocation for social good and spark new research cutting across the sub-areas we cover.

Adversarial Machine Learning (T06)

Bo Li

Dawn Song

Yevgeniy Vorobeychik

Location: Room: MB 3.265

Machine learning has seen a remarkable rate of adoption in recent years across a broad spectrum of industries and applications. Many applications of machine learning techniques are adversarial in nature, insofar as the goal is to distinguish instances which are "bad" from those which are "good" (Fawcett and Provost 1997; Mahoney and Chan 2002). Indeed, adversarial use goes well beyond this simple classification example: forensic analysis of malware that incorporates clustering or anomaly detection, and even vision systems in autonomous vehicles, could all potentially be subject to attack. In response to these concerns, there is an emerging literature on adversarial machine learning, which spans both the analysis of vulnerabilities in machine learning algorithms and algorithmic techniques which yield more robust learning. This tutorial will survey a broad array of these issues and techniques from both the cybersecurity and machine learning research areas. In particular, we consider the problems of adversarial classifier evasion, where the attacker changes behavior to escape being detected, and poisoning, where the training data itself is corrupted. We discuss both evasion and poisoning attacks, first on classifiers and then on other learning paradigms, along with the associated defensive techniques. We then consider specialized techniques for both attacking and defending neural networks, particularly focusing on deep learning techniques and their vulnerabilities to adversarially crafted instances.
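As a toy illustration of evasion: for a linear detector with score w · x, the gradient with respect to the input is w itself, so an FGSM-style attacker shifts each feature against the sign of its weight. The weights and instance below are made up purely for illustration:

```python
# Evasion attack on a linear classifier (sketch): score(x) = w . x,
# positive score means "detected". The input gradient of a linear score
# is w, so each feature is nudged against the sign of its weight.
w = [2.0, -1.0, 0.5]   # hypothetical trained detector weights
x = [1.0, 1.0, 1.0]    # a "bad" instance, currently detected

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def evade(w, x, eps):
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

print(score(w, x))              # 1.5 -> detected
x_adv = evade(w, x, eps=0.5)
print(score(w, x_adv))          # -0.25 -> evades the detector
```

The same gradient-following idea, applied to deep networks rather than a linear score, underlies the adversarially crafted instances mentioned above.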

Epistemic Reasoning in Multiagent Systems (T07)

Tristan Charrier

Francois Schwartzentruber

Location: Room: MB 3.255

This tutorial introduces Dynamic Epistemic Logic, which enables reasoning about knowledge and its evolution over time. We will first show how to model epistemic situations. Second, we will discuss algorithmic issues for reasoning tasks. Third, we will present extensions of dynamic epistemic logic. We will use the pedagogical software Hintikka's World.
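To sketch how epistemic situations are modelled: a Kripke model consists of a set of worlds, an indistinguishability relation per agent, and a valuation; an agent knows a proposition at a world iff it holds in every world the agent cannot distinguish from it. A toy example (not taken from the tutorial materials):

```python
# A two-world Kripke model: p holds in w1 but not in w2.
worlds = {"w1", "w2"}
val = {"w1": {"p"}, "w2": set()}

# Indistinguishability per agent, as sets of world pairs:
# agent a can tell the worlds apart; agent b cannot.
indist = {
    "a": {("w1", "w1"), ("w2", "w2")},
    "b": {(u, v) for u in worlds for v in worlds},
}

def knows(agent, prop, w):
    # Knowledge = truth in all worlds indistinguishable from w.
    return all(prop in val[v] for (u, v) in indist[agent] if u == w)

print(knows("a", "p", "w1"), knows("b", "p", "w1"))  # True False
```

Dynamic Epistemic Logic then studies how announcements and other events transform such models, updating what each agent knows.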

Computational Argumentation in the Context of Human-Agent Interaction (T08)

Simon Parsons

Elizabeth Sklar

Nir Oren

Nadin Kokciyan

Isabel Sassoon

Josh Murphy

Location: Room: MB 3.255

Computational Argumentation is an emerging area of research within Artificial Intelligence. It provides a mechanism for reasoning with uncertain, and at times incomplete, information, together with a way of explaining the outcome of the reasoning process. It can also be used to enhance the interaction of humans with intelligent agent-based systems. The main aim of this tutorial is to provide an initial foundation in the concepts underlying computational argumentation, and in its relevance and potential applications in the context of Human-Agent Interaction (HAI). Attendees will gain exposure to the field of computational argumentation, with an emphasis on those aspects of the domain that are most readily applicable to the HAI context.
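As a concrete taste of the formal machinery: Dung's abstract argumentation frameworks consist of arguments and an attack relation, and the grounded extension can be computed as a least fixpoint, iteratively accepting every argument all of whose attackers have been defeated. A minimal sketch:

```python
# Abstract argumentation framework: c attacks b, and b attacks a.
args = {"a", "b", "c"}
attacks = {("c", "b"), ("b", "a")}

def grounded(args, attacks):
    # Least-fixpoint computation of the grounded extension:
    # accept an argument once all of its attackers are defeated.
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for x in args - accepted - defeated:
            attackers = {u for (u, v) in attacks if v == x}
            if attackers <= defeated:
                accepted.add(x)
                changed = True
        defeated |= {v for (u, v) in attacks if u in accepted}
    return accepted

print(sorted(grounded(args, attacks)))  # ['a', 'c']
```

Here c is unattacked, so it is accepted; that defeats b, which in turn reinstates a. This reinstatement pattern is also the basis of argument-based explanations of a conclusion.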

Heuristic Search (T10)

Nathan Sturtevant

Sven Koenig

Daniel Harabor

Location: Room: MB 3.445

The aim of the tutorial is to give a survey of selected recent directions in heuristic search; with three speakers, we will ensure a diverse set of topics and viewpoints. The tutorial will have several focus areas, each presented by a speaker whose own research has focused on that area. In particular, we aim to focus on the following areas:

  1. Background and Fundamental Algorithms. This section will discuss the core algorithms in the field of heuristic search, including the use of heuristics and constraints in search, with live online demos of each of the algorithms. (Speaker: Nathan Sturtevant)
  2. Preprocessing and Heuristics. This section will discuss many different ways in which preprocessing can be used to speed up search. (Speaker: Sven Koenig and Daniel Harabor)
  3. Any-angle Search. This section will discuss how to perform pathfinding between locations on a grid where movement is not restricted to the grid itself. (Speaker: Sven Koenig and Daniel Harabor)

Throughout this tutorial, we aim to highlight the different characteristics of different search problems and indicate which search methods are used for explicit (polynomial) domains and implicit (exponential) domains.
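As a reminder of the fundamentals covered in the first section, A* expands nodes in order of g(n) + h(n), where h is an admissible heuristic such as Manhattan distance on a 4-connected grid. A compact sketch:

```python
import heapq

def astar(start, goal, passable):
    # A* on a 4-connected grid; Manhattan distance is admissible for
    # unit-cost moves, so the first pop of the goal is optimal.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]   # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                    # cost of a shortest path
        if g > best_g.get(node, float("inf")):
            continue                    # stale queue entry
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if passable(nxt) and g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None                         # goal unreachable

# 5x5 open grid: shortest path from (0, 0) to (4, 4) costs 8.
inside = lambda p: 0 <= p[0] < 5 and 0 <= p[1] < 5
print(astar((0, 0), (4, 4), inside))  # 8
```

This is an explicit (polynomial) domain in the terminology above; the same algorithm applies unchanged to implicit, exponentially large state spaces.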

Scalable Deep Learning: From Theory to Practice (T11)

Decebal Constantin Mocanu

Elena Mocanu

Phuong H. Nguyen

Madeleine Gibescu

Zita Vale

Damien Ernst

Location: Room: MB 3.445

A fundamental task for Multi-Agent Systems (MAS) is learning. Deep Neural Networks (DNNs) have proven to cope well with all learning paradigms, i.e. supervised, unsupervised, and reinforcement learning. Nevertheless, traditional deep learning approaches make use of cloud computing facilities and do not scale well to autonomous agents with low computational resources. Even in the cloud, they suffer from computational and memory limitations and cannot properly model large physical worlds for agents, which would require networks with billions of neurons. These issues have been addressed in the last few years by the emerging topics of scalable and efficient deep learning. The tutorial covers these topics focusing on theoretical advancements, practical applications, and hands-on experience, in two parts:

Part I - Scalable Deep Learning: from pruning to evolution. The first part of the tutorial focuses on theory. We first review how many agents make use of deep neural networks nowadays. We then introduce the basic concepts of neural networks and draw a parallel between artificial and biological neural networks from a functional and topological perspective. We continue by introducing the first papers on efficient neural networks, from the early 1990s, which make use either of sparsity-enforcing penalties or of weight pruning of fully connected networks based on various saliency criteria. Afterwards, we review some of the recent works which start from fully connected networks and use prune-retrain cycles to compress deep neural networks and make them more efficient in the inference phase. We then discuss an alternative approach, NeuroEvolution of Augmenting Topologies (NEAT) and its follow-ups, which grows efficient deep neural networks using evolutionary computing. Further on, we introduce the topic of Scalable Deep Learning (SDL), which builds on efficient deep learning and puts all of the above together. Here, we discuss how DNNs are trained using the recently proposed Sparse Evolutionary Training (SET) algorithm. SET-DNNs start from random sparse networks and use an evolutionary process to adapt their sparse connectivity to the data while learning. SET-DNNs offer benefits in both the training and inference phases, having quadratically lower memory footprints and much faster running times than their fully connected counterparts. This makes them a perfect match for autonomous agents, or for the modelling of large physical environments which needs millions (or perhaps billions) of neurons. Up to this point, everything is discussed in the context of supervised and unsupervised learning. We conclude the first part of the tutorial by introducing deep reinforcement learning and paving the ground for scalable deep reinforcement learning.
We describe some very recent progress in the field of deep reinforcement learning that could be used to improve the performance of reinforcement learning agents when confronted with environments that can exhibit sudden changes in their dynamics, as is often the case with energy systems.
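The connectivity-rewiring step at the heart of SET can be sketched as follows: after each training epoch, prune the fraction ζ of connections with smallest weight magnitude and regrow the same number at random positions. This simplified sketch represents a sparse layer as a dictionary of connections and omits the actual gradient-based weight training:

```python
import random

def set_rewire(weights, n_in, n_out, zeta=0.3, rng=None):
    # One SET-style rewiring step (sketch): prune the zeta fraction of
    # connections with smallest |weight|, then regrow the same number
    # of fresh connections at random free positions.
    rng = rng or random.Random(0)
    k = int(len(weights) * zeta)
    ranked = sorted(weights, key=lambda c: abs(weights[c]))
    for conn in ranked[:k]:
        del weights[conn]
    while k > 0:
        conn = (rng.randrange(n_in), rng.randrange(n_out))
        if conn not in weights:
            weights[conn] = rng.gauss(0.0, 0.1)  # fresh small weight
            k -= 1
    return weights

# A toy 10x10 sparse layer with 20 connections keeps its sparsity level
# after rewiring, but 30% of the connections have been replaced.
rng = random.Random(1)
layer = {}
while len(layer) < 20:
    layer[(rng.randrange(10), rng.randrange(10))] = rng.gauss(0.0, 0.1)
updated = set_rewire(dict(layer), 10, 10)
print(len(updated))  # 20
```

Because the number of connections is fixed while their placement evolves, memory stays proportional to the sparse connectivity rather than to the dense layer size.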

Part II - Scalable Deep Learning: agents in smart grids. The second part of the tutorial focuses on practical applications. Distributed generation, demand response, distributed storage, and electric vehicles are bringing new challenges to the power and energy sector. The tutorial addresses the current and envisioned solutions for the management of these distributed energy resources in the context of smart grids. Artificial intelligence based approaches bring important new possibilities, enabling efficient individual and aggregated energy management. Such approaches can provide the different players, each aiming to accomplish individual and common goals in a market-driven environment, with advanced decision support and automated solutions. The first presentation in the afternoon session concludes with a description of MARTINE (Multi-Agent based Real-Time INfrastructure for Energy), a platform to support real-time energy management and simulation of buildings and smart grids. The platform will be used as the basis to present different data-driven and cognitive approaches to support efficient energy management in buildings and smart grids. Overall, in the multi-agent setting we analyse the opportunity of using different types of strategies (e.g., centralized versus decentralized, cooperative versus non-cooperative, and so on). Towards the end, we will argue that the reinforcement learning paradigm can be very powerful for solving many decision-making problems in the energy sector, for example investment problems, the design of bidding strategies for the intraday electricity market, or problems related to the control of microgrids. The last presentation defines resource allocation problems as a sequential stochastic decision-making process in multi-agent learning, considering scalable and efficient deep reinforcement learning agents.
We investigate how multiple learning agents interact and influence each other in the smart grid context, what kind of global system dynamics arise, and how desired electrical behaviour can be obtained by modifying the learning algorithms used. The settings considered range from one-on-one interactions (e.g., games) to small groups (e.g., multi-agent coordination) and large communities (e.g., interactions in social networks).

After the tutorial, participants will have: a basic understanding of scalable deep neural networks for MAS learning and of MAS in the smart grid context; basic hands-on experience using these concepts in various practical applications; and some good ideas for future research directions.

Multi-agent Distributed Constrained Optimization (T12)

Ferdinando Fioretto

Gauthier Picard

Pierre Rust

Long Tran-Thanh

Location: Room: MB 3.430

Teams of agents often have to coordinate their decisions in a distributed manner to achieve both individual and shared goals. Examples include service-oriented computing, sensor network problems, and the coordination of smart home devices. The resulting Distributed Constraint Optimization Problem (DCOP) is NP-hard to solve, and the multi-agent coordination process is non-trivial. This tutorial is composed of two parts and will provide an overview of DCOPs, focusing on their algorithms and their applications to the Internet of Things (IoT). In the first part, we will present an accessible and structured overview of the available optimal and suboptimal approaches to solving DCOPs. We will discuss recent extensions to the DCOP framework to capture agents acting in a dynamic environment and/or using asymmetric costs/rewards. In the second part, we will review the application of DCOP methods that are suitable for the IoT context, illustrate a case study on how to model a real smart home, and, finally, show how to program and deploy DCOP algorithms in a real IoT environment composed of Raspberry Pis. The tutorial will conclude with the most recurrent challenges and open questions.
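To fix ideas, a DCOP consists of variables (one per agent), finite domains, and constraints assigning costs to joint value choices; solving it means finding the assignment of minimum total cost. The toy instance below is solved exhaustively, which is feasible only at this scale (the problem is NP-hard in general, and DCOP algorithms solve it through distributed message passing instead):

```python
from itertools import product

# Toy DCOP: three agents, each controlling one binary variable, with
# binary constraints assigning a cost to each pair of chosen values.
domains = {"a1": [0, 1], "a2": [0, 1], "a3": [0, 1]}
constraints = {
    ("a1", "a2"): lambda x, y: 0 if x != y else 2,  # prefer disagreement
    ("a2", "a3"): lambda x, y: 0 if x == y else 1,  # prefer agreement
}

def total_cost(assignment):
    return sum(f(assignment[u], assignment[v])
               for (u, v), f in constraints.items())

# Exhaustive search over all joint assignments (centralized, for clarity).
best = min(
    (dict(zip(domains, values)) for values in product(*domains.values())),
    key=total_cost,
)
print(total_cost(best))  # 0: e.g. a1 != a2 and a2 == a3 satisfies both
```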

Social Choice and Mechanism Design on Social Networks (T13)

Umberto Grandi

Dengji Zhao

Location: Room: MB 3.435

This tutorial provides an overview of novel approaches put forward in the computational social choice and mechanism design literature for social networks. We start from work analysing the effects of social network phenomena on classical collective decision methods, and move to work designing mechanisms that take into consideration the network structure underlying a social choice or mechanism design setting. We will end with a detailed overview of recent work on information diffusion in social networks, where the diffused information represents preferences, binary views, or auction information and valuations. From a social choice perspective, we investigate the termination of the diffusion process, its resistance to manipulative actions, and conditions for reaching consensus. From a mechanism design perspective, we investigate truthful mechanisms in which truthful auction information is fully propagated and collected via the social network, offering a novel way to increase sellers' revenue by incentivizing participants to invite their neighbours to join the auction.

Designing Agents' Preferences, Beliefs, and Identities (T14)

Vincent Conitzer

Location: Room: MB 3.445

We often assume that each agent has a well-defined identity, well-defined preferences over outcomes, and well-defined beliefs about the world. However, when designing agents, we in fact need to specify where the boundaries between one agent and another in the system lie, what objective functions these agents aim to maximize, and to some extent even what belief formation processes they use. What is the right way to do so? As more and more AI systems are deployed in the world, this question becomes increasingly important. In this tutorial, I will show how it can be approached from the perspectives of decision theory, game theory, social choice theory, and the algorithmic and computational aspects of these fields. (No previous background required.)

An AAAI'19 blue-sky write-up on a subset of these ideas can be found here: