Responsible Artificial Intelligence Agents (RAIA) Workshop 2019 CfP

A full-day workshop held as part of AAMAS 2019

The development and use of AI raise fundamental ethical issues for society, which are of vital importance to our future. There is already much debate concerning the impact of AI on labour, social interactions (including healthcare), privacy, fairness and security (including peace initiatives and warfare). The societal and ethical impact of AI encompasses many domains: for instance, machine classification systems raise questions about privacy and bias, and autonomous vehicles raise questions about safety and responsibility. Researchers, policy-makers, industry and society all recognise the need for approaches that ensure the safe, beneficial and fair use of AI technologies, that consider the implications of ethically and legally relevant decision-making by machines, and that address the ethical and legal status of AI. These approaches include the development of methods and tools, consultation and training activities, and governance and regulatory efforts.

The Responsible Artificial Intelligence Agents (RAIA) workshop will bring together researchers from AI, ethics, philosophy, robotics, psychology, anthropology, cognitive science, law, regulatory governance studies and engineering to discuss and work on the complex challenges concerning the design and regulation of AI systems as these become part of our daily life. RAIA focuses on three aspects that together can ensure that AI is developed for societal good (e.g. contributing to the UN sustainable development goals), that it uses verifiable and accountable processes, and that its impact is governed by fair and inclusive mechanisms and institutions.

The workshop invites papers on topics including, among others:

- AI design methodologies taking into account the ethical and social consequences of AI agents
- Computational methods for understanding, developing, and evaluating ethical agency
- Engineering techniques for autonomous systems to incorporate ethical principles and social norms
- Ethically informed design methodologies for AI agents
- Formalisms (logics, algebras, argumentation, case-based reasoning, etc.) for representing and reasoning about ethics, legal constraints, and social norms for AI agents
- Social simulation approaches for the evaluation of AI agents and socio-technical systems
- Verification and validation of ethical behaviour

Important dates

- Deadline for papers: February 10, 2019
- Notification of acceptance: March 10, 2019
- Camera-ready deadline: April 10, 2019

Submission

Submission is via EasyChair at
Submissions should conform to the LNCS Springer format; the style file or Word templates can be found at

Submissions may be of two types:

- Long papers: full-length research papers detailing work in progress or work that could potentially be published at a major conference. These should be no more than 16 pages long in the LNCS format above.
- Short papers: position papers or demo papers that describe a project on human-agent systems, an application that has not yet been evaluated, or initial work. These should be no more than 8 pages long (excluding appendices and assuming the LNCS format above).

Organizing committee

- Virginia Dignum (Umeå University)
- Pablo Noriega (IIIA)
- Harko Verhagen (Stockholm University)
