** Paper deadline: September 3, 2019 **

This workshop will be held as part of the 12th International Conference on Natural Language Generation (INLG 2019), October 29 - November 1, Tokyo, Japan.

CALL FOR PAPERS

The focus of this workshop is the automatic generation of interactive explanations in natural language (NL), as humans naturally produce them, and as a complement to visualization tools. NL technologies, including both NL Generation (NLG) and NL Processing (NLP) techniques, are expected to enhance knowledge extraction and representation through human-machine interaction (HMI).

As remarked in a recent challenge issued by the US Defense Advanced Research Projects Agency (DARPA), "even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans". Accordingly, users without a strong background in AI require a new generation of Explainable AI systems, which are expected to interact naturally with humans and to provide comprehensible explanations of automatically made decisions. The ultimate goal is to build trustworthy AI that benefits people through fairness, transparency and explainability. To achieve this goal, not only technical but also ethical and legal issues must be carefully considered.

We solicit contributions in the form of regular papers (up to 4 pages plus 1 page of references, in the ACL paper format) or demo papers (up to 2 pages) dealing with any aspect of Explainable AI systems.

Submissions should be made through https://easychair.org/conferences/?conf=nl4xai2019

TOPICS (including, but not limited to)

- Definitions and Theoretical Issues in Explainable AI
- Interpretable Models versus Explainable AI Systems
- Explaining Black-Box Models
- Explaining Bayesian Networks
- Explaining Fuzzy Systems
- Explaining Logical Formulas
- Multi-modal Semantic Grounding and Model Transparency
- Explainable Models for Text Production
- Verbalizing Knowledge Bases
- Models for Explainable Recommendations
- Interpretable Machine Learning
- Self-explanatory Decision-Support Systems
- Explainable Agents
- Argumentation Theory for Explainable AI
- Natural Language Generation for Explainable AI
- Interpretable Human-Machine Multi-modal Interaction
- Metrics for Explainability Evaluation
- Usability of Explainable AI Systems/Interfaces
- Applications of Explainable AI Systems

IMPORTANT DATES

Submissions due: September 3, 2019
Notification of acceptance: October 1, 2019
Camera-ready papers due: October 15, 2019
Half-day workshop session: October 29, 2019

PUBLICATION

All submissions will undergo peer review by members of the workshop's program/reviewing committee, who will assess their relevance and originality for the workshop. All accepted papers will be published in the ACL Anthology.

ORGANIZERS

José M. Alonso (Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), University of Santiago de Compostela, Spain)
Alejandro Catala (Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), University of Santiago de Compostela, Spain)

PROGRAM COMMITTEE

Alberto Bugarín, CiTIUS, University of Santiago de Compostela (Spain)
Katarzyna Budzynska, Institute of Philosophy and Sociology of the Polish Academy of Sciences (Poland)
Claire Gardent, CNRS/LORIA, Nancy (France)
Albert Gatt, University of Malta (Malta)
Dirk Heylen, Human Media Interaction, University of Twente (The Netherlands)
Simon Mille, Universitat Pompeu Fabra (Spain)
Martín Pereira-Fariña, Institute of Heritage Sciences (Incipit), Spanish National Research Council (CSIC) (Spain)
Chris Reed, Centre for Argument Technology, University of Dundee (UK)
Ehud Reiter, University of Aberdeen and Arria NLG plc (UK)
Carles Sierra, Artificial Intelligence Research Institute (IIIA), Spanish National Research Council (CSIC) (Spain)
Mariët Theune, Human Media Interaction, University of Twente (The Netherlands)
Nava Tintarev, Delft University of Technology (The Netherlands)
Hitoshi Yano, Minsait, INDRA (Spain)