2nd Workshop on Explainable Smart Systems (ExSS),
Held in conjunction with ACM Intelligent User Interfaces (IUI),
Los Angeles, California, 16-20 March 2019
http://explainablesystems.comp.nus.edu.sg/2019/
Smart systems that apply complex reasoning to make decisions and plan behavior, such as clinical decision support systems, personalized recommendations, and machine learning classifiers, are difficult for users to understand. Following on from the very successful ExSS 2018 workshop held at IUI, this workshop aims to bring together researchers in academia and industry who have an interest in making smart systems explainable to users, and therefore more intelligible and transparent. Interest in this topic continues to grow, as glimpses into the black-box behavior of these systems enable more effective steering or training of the system, better reliability, and improved usability. The workshop will provide a venue for exploring issues that arise in designing, developing, and evaluating smart systems that use or provide explanations of their behavior.
Researchers in academia or industry who have an interest in making smart systems explainable to users are invited to submit papers of up to 6 pages (not including references) in ACM SIGCHI Paper Format (see http://iui.acm.org/2019/call_for_papers.html). Submissions can be of two types: (1) position papers summarizing the authors' existing research in this area and how it relates to the workshop theme, and (2) papers offering an industrial perspective on the workshop theme or describing a real-world approach to providing explanations. Suggested topics include, but are not limited to:
- What is an explanation? What should explanations look like?
- Are explanations always a good idea? Can explanations “hurt” the user experience, and in what circumstances?
- What are the optimal points at which to provide explanations for a particular system?
- How can we measure the value of an explanation and of the way it is provided? What human factors influence the value of explanations?
- What are “more explainable” models that still have good performance in terms of speed and accuracy?
Papers should be submitted via EasyChair (https://easychair.org/conferences/?conf=exss2019) by the end of 14 December 2018 and will be reviewed by committee members. Position papers do not need to be anonymized. At least one author of each accepted position paper must attend the workshop.
Paper authors will present their work as part of a thematic panel. The second part of the workshop will consist of sub-group activities focused on how to design and show explanations for a real-world system.
For further questions please contact the workshop organizers at <[log in to unmask]>.
Important Dates:
Submission deadline: *extended to* 14 December 2018
Notification to Authors: 14 January 2019
Camera-ready copies due: 15 February 2019
Organising committee:
Brian Lim - National University of Singapore
Advait Sarkar - Microsoft Research, Cambridge
Alison Smith-Renner - Decisive Analytics Corporation, USA
Simone Stumpf - City, University of London, UK
Reviewing Committee:
Gagan Bansal - University of Washington, USA
Fan Du - Adobe Research, USA
Malin Eiband - University of Munich, Germany
Melinda Gervasio - SRI, USA
Dave Gunning - DARPA, USA
Judy Kay - University of Sydney, Australia
Bran Knowles - Lancaster University, UK
Per Ola Kristensson - University of Cambridge, UK
Todd Kulesza - Microsoft, USA
Mark W. Newman - University of Michigan, USA
Jim Nolan - Decisive Analytics Corporation, USA
Kenton O'Hara - Microsoft Research Cambridge, UK
Forough Poursabzi-Sangdeh - Microsoft, USA
Stephanie Rosenthal - Carnegie Mellon University, USA
Ramya Srinivasan - Fujitsu, USA
Jo Vermeulen - Aarhus University, Denmark
Jürgen Ziegler - University of Duisburg-Essen, Germany