
Special Issue on Explainable Artificial Intelligence for Healthcare

Submission Date: 2021-07-01

Scope and Objective


The use of artificial intelligence techniques is now pervasive and still spreading. However, alongside its opportunities it brings risks and problems that must be addressed so as not to compromise its effective evolution. eXplainable AI (XAI) is one of the answers to these problems, aiming to bring humans closer to machines.


While research discussions on XAI date back a few decades, the concept re-emerged with renewed vigour at the end of 2019 when Google, which had announced its "AI-first" strategy in 2017, released a new XAI toolset for developers.


Nowadays, many machine and deep learning applications do not allow users to fully understand how they work or the logic behind their decisions: this is the so-called "black box" effect, whereby machine learning models behave, for the most part, as opaque black boxes.


This is considered one of the biggest problems in the application of AI techniques: it makes machine decisions opaque and often incomprehensible even to experts or to the developers themselves.


Explainable AI systems can explain the logic of decisions, characterize the strengths and weaknesses of decision making, and provide insights into their future behaviour.
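As a purely illustrative sketch (not part of the call itself), the snippet below shows one common post-hoc explanation technique, permutation feature importance, applied to a generic "black-box" classifier; the synthetic dataset, model and parameters are all placeholder assumptions standing in for tabular clinical records and a real diagnostic model.

    # Illustrative sketch: post-hoc explanation of a black-box classifier via
    # permutation feature importance. Synthetic data stands in for clinical records.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # How much does test accuracy drop when each feature is shuffled?
    # Larger drops indicate features the model relies on for its decisions.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for i, (mean, std) in enumerate(zip(result.importances_mean,
                                        result.importances_std)):
        print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")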


Consider autonomous driving systems, or AI applications used in healthcare and in the financial, legal or military sectors. In these cases it is easy to see that, in order to trust the decisions and the outputs obtained, it is necessary to know how the artificial partner has "reasoned".


The most popular AI approach today is Deep Learning (DL), in which a neural network (NN) with tens or even hundreds of layers of "neurons", or elementary processing units, is used.


The complexity of DL architectures makes them behave like "black boxes", so it is practically impossible to identify the exact mechanism by which the system produces a specific answer.


Applications of artificial intelligence in healthcare, in particular in diagnostic imaging, are growing rapidly. However, the involvement of deep learning architectures turns the spotlight on the "accountability" of these processes.
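As another minimal, purely illustrative sketch (again, not part of the call), the snippet below computes a gradient-based saliency map, one of the attribution techniques commonly used to inspect what a deep image classifier attends to; a random tensor and an untrained ResNet are assumed as placeholders for a real scan and a trained diagnostic model.

    # Illustrative sketch: gradient saliency for an image classifier.
    # A random tensor and an untrained ResNet are placeholders for a real
    # medical image and a trained diagnostic model.
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=None)
    model.eval()

    image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder "scan"

    scores = model(image)
    top_class = scores.argmax(dim=1).item()

    # Backpropagate the top-class score to the input pixels; large gradient
    # magnitudes mark the pixels that most influence the prediction.
    scores[0, top_class].backward()
    saliency = image.grad.abs().max(dim=1).values   # shape: (1, 224, 224)
    print(saliency.shape, saliency.max().item())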


Given the widespread use of DL solutions, this problem will only become more pressing. It must be emphasized that in the medical field the accountability, or responsibility, of the professional is of primary importance: any medical decision must be justifiable a posteriori, ideally through objective evidence.


The same must hold when the outcome of an AI-based process contributes to a clinical decision, which is why "black box" architectures are hardly compatible with the healthcare sector. Furthermore, since such software applications must be certified, it is easy to see how critical that procedure becomes when the underlying algorithm cannot be explained.


Doctors welcome the use of neural networks in the most complex or challenging diagnoses, but they need to understand how these models reach their conclusions in order to validate the resulting report.


The main objective of this special issue is to bring together diverse, novel and impactful research work on Explainable Deep Learning for Medicine, thereby accelerating research in this field.


Topics of Interest


The topics of interest for this special issue include, but are not limited to:


Explainable AI on graph structured medical data;

Real-time Explainable AI for medical image processing;

Intelligent feature selection for interpretable deep learning classification;

Explainable Artificial Intelligence for Internet of Medical Things;

Explainable deep Bayesian learning for medical data;

Fusion of emerging Explainable AI methods with conventional methods;

Explainable Artificial Intelligence methodologies for detecting emerging medical threats from social media;

Relations between Explainability and other quality criteria (such as Interpretability, Accuracy, Stability, etc.);

Hybrid approaches (e.g. Neuro-Fuzzy systems) for Explainable AI.


Evaluation Criteria


Novelty of approach (how is it different from what exists already?)

Technical soundness (e.g., rigorous model evaluation)

Impact (how does it change our current state of affairs?)

Readability (is it clear what has been done?)

Reproducibility and open science: pre-registration if confirmatory claims are being made; open data, materials and code as far as ethically possible.


Important Dates


Submission portal opens: March 1st, 2021

Deadline for paper submission: July 1st, 2021

Reviewing: Continuous basis

Revision deadline: September 15th, 2021

Latest acceptance deadline for all papers: December 1st, 2021


Guest Editors


Francesco Piccialli (lead GE) – University of Naples Federico II, Italy, francesco.piccialli@unina.it

David Camacho - Universidad Politécnica de Madrid, Spain, david.camacho@upm.es

Chun-Wei Tsai - National Sun Yat-sen University, Taiwan, cwtsai@cse.nsysu.edu.tw
