Third International Competition on Computational Models of Argumentation (ICCMA'19)

                                 Call for Solvers


Argumentation is a major topic in the study of Artificial Intelligence. In particular, the problem of solving certain reasoning tasks on Dung's abstract argumentation frameworks is central to many advanced argumentation systems. Since many of the problems to be solved are intractable, efficient algorithms and solvers are required.

The main goals of the competition are to provide a forum for empirical comparison of solvers, to highlight challenges to the community, to propose new directions for research, and to provide a core of common benchmark instances and a representation formalism that can aid in the comparison and evaluation of solvers.

After the success of ICCMA'15 and ICCMA'17, the Third International Competition on Computational Models of Argumentation (ICCMA'19) will be conducted in the first half of 2019. ICCMA'19 will focus on reasoning tasks in abstract argumentation frameworks. Submitted solvers will be tested on a selected collection of benchmark instances (see the separate Call for Benchmarks).

Solvers need to be packaged by participants in a Docker container; a condensed guide on how to create a container with Docker is provided, and for more detailed information please refer to the official Docker documentation. Submission is accomplished simply by communicating the repository link to this container. The main advantage is that each solver is delivered with its complete runtime environment, which makes setup and deployment easier; moreover, a dockerized application can be launched on different platforms (e.g., Windows, Linux, macOS, and in the cloud), making it possible to rerun the experiments anywhere.

Solvers will be evaluated based on their performance in solving the following problems:

(SE)  Given an abstract argumentation framework, determine some extension;
(EE)  Given an abstract argumentation framework, determine all extensions;
(DC)  Given an abstract argumentation framework and some argument, decide whether the given argument is credulously inferred;
(DS)  Given an abstract argumentation framework and some argument, decide whether the given argument is skeptically inferred.

The above computational problems are to be solved with respect to the following standard semantics:

(CO)  Complete Semantics (SE, EE, DC, DS);
(PR)  Preferred Semantics (SE, EE, DC, DS);
(ST)  Stable Semantics (SE, EE, DC, DS);
(SST) Semi-stable Semantics (SE, EE, DC, DS);
(STG) Stage Semantics (SE, EE, DC, DS);
(GR)  Grounded Semantics (only (SE) and (DC));
(ID)  Ideal Semantics (only (SE) and (DC)).
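To make the SE and DC problems concrete for a single-status semantics such as GR: the grounded extension is the least fixed point of Dung's characteristic function. The following toy sketch (our own illustrative code, not a competition solver) computes it by naive fixed-point iteration:

```python
# Illustrative sketch of the SE task under grounded semantics (SE-GR):
# iterate Dung's characteristic function from the empty set until a
# fixed point is reached. The framework below is a made-up example.

def grounded_extension(args, attacks):
    """args: set of arguments; attacks: set of (attacker, target) pairs."""
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in args}
    ext = set()
    while True:
        # an argument is acceptable w.r.t. ext if ext attacks all its attackers
        new = {a for a in args
               if all(any((d, b) in attacks for d in ext)
                      for b in attackers_of[a])}
        if new == ext:
            return ext
        ext = new

# Example: a attacks b, b attacks c  ->  grounded extension is {a, c}
af_args = {"a", "b", "c"}
af_attacks = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(af_args, af_attacks)))  # ['a', 'c']
```

Since the grounded extension is unique, credulous acceptance (DC-GR) reduces to membership in this single extension.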

A task is a problem under a semantics. All the tasks of a particular semantics constitute a separate track. For single-status semantics (GR and ID), only the problems SE and DC are considered (EE is equivalent to SE, and DS is equivalent to DC). Note that DC-CO and DC-PR are equivalent as well, but in order to allow participation in the preferred track without implementing tasks for the complete semantics (or vice versa), we keep both tasks.

In addition, four new tracks will be dedicated to the solution of problems over dynamic argumentation frameworks. In this case, a benchmark consists of an initial framework and an additional file storing a sequence of additions/deletions of attacks. This file will be provided through a simple text format, e.g., a sequence of "+att(a,b)." or "-att(d,e).".
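To illustrate the change-file convention quoted above, the following sketch (helper names are our own, and we assume exactly the "+att(x,y)."/"-att(x,y)." line shape shown in the call) parses the change file and yields the attack relation after each update:

```python
# Hypothetical sketch of applying a dynamic-track change file.
# Assumed line format from the call: "+att(a,b)." adds an attack,
# "-att(d,e)." removes one.
import re

CHANGE = re.compile(r"([+-])att\((\w+),(\w+)\)\.")

def apply_changes(attacks, change_lines):
    """Yield the attack relation after each change in the sequence."""
    attacks = set(attacks)
    for line in change_lines:
        m = CHANGE.fullmatch(line.strip())
        if not m:
            continue  # skip blank or unrecognized lines
        op, src, tgt = m.groups()
        if op == "+":
            attacks.add((src, tgt))
        else:
            attacks.discard((src, tgt))
        yield set(attacks)

# prints [('a', 'b'), ('d', 'e')] and then [('a', 'b')]
for rel in apply_changes({("d", "e")}, ["+att(a,b).", "-att(d,e)."]):
    print(sorted(rel))
```

A dynamic solver would recompute (or incrementally update) the requested extensions after each yielded relation, producing one output per change as required.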

The final output needs to report the solution for the initial framework, followed by as many outputs as there are changes. The four new tracks involve the following semantics and problems:

(CO-D)  Complete Semantics (SE, EE, DC, DS), where D stands for "dynamic";
(PR-D)  Preferred Semantics (SE, EE, DC, DS);
(ST-D)  Stable Semantics (SE, EE, DC, DS);
(GR-D)  Grounded Semantics (only (SE) and (DC)).

Besides the file with changes, we will also provide all the full frameworks (one for each change), in order to allow non-dynamic solvers to participate in these tracks as well. More information is available on the competition website.

Developers of solvers may decide to support only a subset of the above computational tasks and/or tracks (up to a maximum of eleven tracks). For each task and each (dynamic) track, we will provide a ranking of the performance of the submitted solvers. Awards go to the winners of the tracks. Detailed evaluation and ranking rules can be found on the competition website.

Input and output formats are adapted from the last edition and are detailed on the competition website.
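Earlier editions accepted frameworks in the aspartix (apx) format, where "arg(a)." declares an argument and "att(a,b)." an attack; assuming that convention carries over (the authoritative specification is the one published for ICCMA'19), a minimal reader might look like this:

```python
# Assumption-based sketch of a reader for the apx format used in
# previous ICCMA editions; check the ICCMA'19 specification for the
# authoritative input format.
import re

ARG = re.compile(r"arg\((\w+)\)\.")
ATT = re.compile(r"att\((\w+),(\w+)\)\.")

def parse_apx(text):
    """Return (arguments, attacks) parsed from an apx document."""
    args, attacks = set(), set()
    for line in text.splitlines():
        line = line.strip()
        m = ARG.fullmatch(line)
        if m:
            args.add(m.group(1))
            continue
        m = ATT.fullmatch(line)
        if m:
            attacks.add((m.group(1), m.group(2)))
    return args, attacks

args, attacks = parse_apx("arg(a).\narg(b).\natt(a,b).\n")
print(sorted(args), sorted(attacks))  # ['a', 'b'] [('a', 'b')]
```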

The evaluation process will consist of two phases: after the registration phase, competitors will be given a set of representative frameworks on which to test their solvers on their own machines. Authors will then be allowed to submit a final version of their solver by updating its Docker container at the same repository link.

Competitors first need to declare their interest in submitting a solver and participating in the competition (by March 1, 2019).

In order to register a solver, competitors need to prepare a solver description (2-4 pages, using the EasyChair style) and submit it via the submission link, checking the "Solver" button during submission.

This paper also has to indicate the name and affiliation of each team member, the name of the solver, the classical/dynamic tasks that the solver will be able to handle (a complete list can be found on the competition website), the system architecture, the features or functions the system provides, the design choices that were made, and the lessons that were learned. In addition, the paper must report the link to the public Docker repository from which the solver can be pulled. If competitors do not want to offer their solver publicly, links to private repositories can be communicated by email ([log in to unmask]).

Final Submission
Registered competitors will receive a sample of the frameworks on which their solvers will be tested (by mid-March).

Registered competitors are allowed to modify the submitted container with their solver until April 1, when the final version will be pulled for testing.

Note that even though Docker repository links need to be communicated by March 1, 2019, dockerized solvers will not be downloaded and tested before April 1, 2019 (i.e., the repository can stay empty until April 1).

The schedule of the competition activities is:

- March 1, 2019: Registration
- April 1, 2019: Final deadline for modifying containers with solvers
- August, 2019: Presentation of results at TAFA'19

Main contact: [log in to unmask]

Participants and other interested people are welcome to subscribe to [log in to unmask] by sending an email with the header "subscribe argumentationcompetition <your first name> <your surname>" to [log in to unmask], in order to receive information concerning future editions of ICCMA.

The ICCMA'19 organizers:
Stefano Bistarelli, Department of Mathematics and Computer Science, University of Perugia, Italy
Lars Kotthoff, Department of Computer Science, University of Wyoming, USA
Theofrastos Mantadelis, Department of Mathematics and Computer Science, University of Perugia, Italy
Francesco Santini, Department of Mathematics and Computer Science, University of Perugia, Italy
Carlo Taticchi, Gran Sasso Science Institute (GSSI), L'Aquila, Italy

The ICCMA steering committee:
Federico Cerutti, School of Computer Science & Informatics, Cardiff University, UK
Sarah A. Gaggl, Computational Logic Group, TU Dresden, Germany
Nir Oren, Department of Computing Science, University of Aberdeen, UK
Jean-Guy Mailly, LIPADE, Université Paris Descartes, France
Matthias Thimm, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany
Mauro Vallati, School of Computing and Engineering, University of Huddersfield, UK
Serena Villata, WIMMICS Research Team, INRIA Sophia Antipolis, France

Dr. Francesco Santini (Assistant Professor)
Department of Mathematics and Computer Science
University of Perugia
Via Vanvitelli 1
06123 Perugia