August 7, 2019

SMC Workshop: Opening the Black Box
An Activity-Driven Workshop on Explainable AI

PURPOSE

Artificial intelligence systems are embedded in our daily lives. We take advantage of their effectiveness, but we also become dependent on them. For example, when we apply for a bank loan, our application is analyzed by intelligent systems, and a yes/no answer is not enough. Data protection laws now in place around the world emphasize the need for explainable systems.

Explanation systems embody processes that allow users to gain insight into a system’s rationale, with the intent of improving the user’s performance on a related task. For example, a system could allow a drone to explain to its operator the situations in which it will deviate from its instructions (e.g., to avoid placing fragile packages in unsafe locations), thus allowing the operator to better manage these drones. Likewise, a decision aid could explain its recommendation for an aggressive surgical intervention (e.g., one made in reaction to a patient’s recent health patterns and to medical breakthroughs) so that a doctor can provide better care. The system’s models could be learned and/or hand-coded, and used for a wide variety of analysis or synthesis tasks. However, while users usually require understanding before committing to decisions, most AI systems do not support the explanation process. Addressing this challenge has become more urgent with the increasing reliance on learned models in deployed applications.

The need for explainable systems raises several questions, such as: how should explainable models be designed? How should user interfaces communicate decision making? What types of user interactions should be supported? How should explanation quality be measured? These questions are of interest to researchers, practitioners, and end-users, independent of what AI techniques are used. Solutions can draw from several disciplines, including cognitive science, human factors, and psycholinguistics.

This workshop will provide a forum for discussing expectations and approaches for explainable AI systems. We will examine the computational perspective, including causal modeling, constraint reasoning, and narrative intelligence, as well as the human perspective, including trust, transparency, and explainability. We are most concerned with opening up sub-symbolic (connectionist), machine-learning-based systems; the sketch below gives a concrete flavor of what this can mean in practice.
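As a minimal, purely illustrative sketch (not part of the workshop program or any specific submission), the following Python code shows one widely used model-agnostic way to "open" a black-box model: permutation feature importance. It assumes scikit-learn is available and uses a synthetic stand-in for application data (e.g., loan-application features).

    # Illustrative sketch: permutation feature importance on a black-box model.
    # Assumes scikit-learn; the dataset is synthetic, standing in for real
    # application data such as loan-application features.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic classification data with 5 numeric features.
    X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A "black-box" model: an ensemble whose internals are hard to inspect.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much held-out accuracy
    # drops; large drops flag features the model's decisions depend on.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"feature {i}: importance {result.importances_mean[i]:.3f}")

Such global importance scores are only one family of explanation techniques; local, example-level explanations and interactive interfaces raise the design and evaluation questions listed above.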

TOPICS

Topics of interest include, but are not limited to, the following:

  • Technologies (the object to be explained)
    • Statistical relational learning
    • Cognitive architectures
    • Commonsense reasoning
    • Data mining
    • Deep learning
    • Intelligent agents (e.g., planning and acting, goal reasoning)
    • Knowledge mining
    • Machine learning
    • Recurrent networks (e.g., LSTMs)
    • Temporal reasoning
  • Applications/Tasks (the context)
    • Ambient intelligence
    • Autonomous control
    • Computer games
    • Image processing (e.g., security/surveillance)
    • Information retrieval and reuse
    • Intelligent decision support systems
    • Intelligent tutoring
    • Medical systems
    • Recommender systems
    • User modeling
    • Visual question-answering

IMPORTANT DATES

  • Submission: August 30
  • Notification: September 15
  • Camera-ready: September 20
  • Workshop: October 6

SUBMISSIONS 

All paper submissions must be in English and must not exceed four (4) pages in length, including references. Papers must be formatted using the IEEE conference proceedings template.

Papers must be submitted as PDF files and sent to SMC.XAIworkshop@gmail.com.

ORGANIZING COMMITTEE

Ana Cristina Bicharra Garcia (Universidade Federal do Estado do Rio de Janeiro, Brazil)

Adriana S. Vivacqua (Universidade Federal do Rio de Janeiro, Brazil)

Luis Correia (Universidade de Lisboa, Portugal)

Jose Manuel Molina Lopez (Universidad Carlos III de Madrid, Spain)