Joint 3rd Workshop on Explainable Smart Systems (ExSS) and 2nd Workshop on Intelligent User Interfaces for Algorithmic Transparency in Emerging Technologies (ATEC)
Held in conjunction with ACM Intelligent User Interfaces (IUI) Cagliari, Italy, 17 - 20 March 2020

Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. Algorithms allow the exploitation of rich and varied data sources to support human decision-making and/or take direct action; however, there are increasing concerns about their transparency and accountability, as these processes are typically opaque to the user, e.g., because they are too technically complex to explain or are protected trade secrets. The topics of transparency and accountability have attracted increasing interest as a means to more effective system training, better reliability, and improved usability. This workshop will provide a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide system transparency or explanations of their behavior. In addition, our goal is to focus on approaches to mitigating algorithmic biases that researchers can apply even without access to a given system's inner workings, such as awareness, data provenance, and validation.

Researchers in academia or industry who have an interest in these areas are invited to submit papers of up to 6 pages (not including references) in ACM SIGCHI Paper Format (see http://iui.acm.org/2020/call_for_papers.html). Submissions must be original and relevant contributions and can be of two types: (1) position papers summarizing the authors' existing research in this area and how it relates to the workshop theme, and (2) papers offering an industrial perspective on the workshop theme or a real-world approach to system transparency. Suggested topics include, but are not limited to:
- Is transparency (or explainability) always a good idea? Can transparent algorithms or explanations “hurt” the user experience, and in what circumstances?
- What are the optimal points at which explanations are needed for transparency?
- What are explanations? What should they look like?
- What are more transparent models that still have good performance in terms of speed and accuracy?
- How can we detect biases and discrimination in transparent systems?
- What is important in user modeling for system transparency and explanations?
- What are important social aspects in interaction design for system transparency and explanations?
- What are possible metrics that can be used when evaluating transparent systems and explanations?

Carrie Cai, a senior research scientist at Google Brain and PAIR (Google’s People+AI Research Initiative), will be our keynote speaker. Paper authors will then present their work as part of thematic panels or as poster presentations. The second part of the workshop will consist of sub-group activities focused on how to design and evaluate transparent systems.

Papers should be submitted via EasyChair (https://easychair.org/conferences/?conf=exss-atec2020) by December 20, 2019, and will be reviewed by committee members. Position papers do not need to be anonymized. At least one author of each accepted position paper must register for and attend the workshop. For further questions, please contact the workshop organizers at <[log in to unmask]>.

Important Dates
Submission date:         Dec 20, 2019
Notifications sent:     Jan 14, 2020
Camera-ready:            TBA
Workshop Date:           March 17, 2020

Organizing Committee
==============
Alison Smith-Renner - Decisive Analytics Corporation, USA
Styliani Kleanthous - CyCAT - Open University of Cyprus and RISE Research Centre, Cyprus
Brian Lim - National University of Singapore
Tsvi Kuflik - University of Haifa, Israel
Simone Stumpf - City, University of London, UK
Jahna Otterbacher - CyCAT - Open University of Cyprus and RISE Research Centre, Cyprus
Advait Sarkar - Microsoft Research, Cambridge, UK
Casey Dougan - IBM Research, Cambridge, USA
Avital Shulner - University of Haifa, Israel

Program Committee
=============
Tak Yeon Lee, Adobe Research
Alan Hartman, Afeka College of Engineering and Bar Ilan University
Jim Nolan, Decisive Analytics Corporation
Judy Kay, The University of Sydney
Fan Du, Adobe Research
Nava Tintarev, Delft University of Technology
Todd Kulesza, Google
Jon Dodge, Oregon State University
Ramya Srinivasan, Fujitsu Laboratories of America
Martin Schuessler, TU Berlin
Sarah Theres Völkel, Ludwig Maximilian University of Munich
Malin Eiband, Ludwig Maximilian University of Munich
Stephanie Rosenthal, Carnegie Mellon University
Gonzalo Ramos, Microsoft
Melinda Gervasio, SRI International
Gagan Bansal, Allen School of Computer Science & Engineering
Jürgen Ziegler, University of Duisburg-Essen
Bran Knowles, Lancaster University
Forough Poursabzi-Sangdeh, Microsoft
Robin Burke, University of Colorado, Boulder
Mike Terry, Google
Veronika Bogina, University of Haifa
Fausto Giunchiglia, University of Trento
