Complex & Intelligent Systems

Title: Interpretation of Deep Learning: Prediction, Representation, Quantification and Visualization

Aims and Scope

While Big Data offers great potential for revolutionizing all aspects of our society, harvesting valuable knowledge from Big Data is an extremely challenging task. The large-scale, rapidly growing information hidden in unprecedented volumes of non-traditional data demands new decision-making algorithms. Deep learning is currently an extremely active research area in the machine learning and pattern recognition community. In contrast to conventional classification methods, deep learning models learn a hierarchy of features, building high-level features from low-level ones and thereby automating feature construction for the problem at hand. The deep learning approach exploits many nonlinear processing layers to develop representations of data at increasing levels of abstraction. It has demonstrated best-in-class performance in a range of applications, including image classification, and has been successfully applied in industry products that take advantage of large volumes of digital data. Companies such as Google, Apple, and Facebook, which collect and analyze massive amounts of data daily, have been aggressively pushing deep learning techniques forward.
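As a toy illustration of this layered feature hierarchy, the sketch below stacks two convolutional stages in PyTorch; the architecture, layer sizes, and image-classification setup are illustrative assumptions, not part of this call.

    # Minimal sketch of hierarchical feature learning, assuming PyTorch;
    # all sizes and names here are illustrative.
    import torch
    import torch.nn as nn

    class SmallConvNet(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            # Early layers capture low-level features (edges, textures).
            self.low = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            # Deeper layers compose them into higher-level abstractions
            # (parts, objects) -- the hierarchy described above.
            self.high = nn.Sequential(
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x):
            x = self.high(self.low(x))
            return self.classifier(x.flatten(1))

    model = SmallConvNet()
    logits = model(torch.randn(1, 3, 32, 32))  # one 32x32 RGB image
    print(logits.shape)  # torch.Size([1, 10])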

While deep learning has achieved unprecedented prediction capabilities, it is often criticized as a black box because it lacks interpretability, which is critical in real-world applications such as healthcare and cybersecurity. For example, healthcare professionals will appropriately trust and effectively manage prediction results only if they can understand why and how a patient is diagnosed with prediabetes. There has recently been an explosion of interest in related research directions, such as (a) analyzing the information bottleneck for efficient learning, (b) inferring and regularizing the network structure for stable and robust prediction, and (c) interpreting the learned representations and generated decisions.
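To make direction (c) concrete, the following minimal sketch computes a gradient-based saliency map, one common post-hoc interpretation technique; the tiny linear model and random input are placeholder assumptions (PyTorch assumed).

    # Minimal sketch of gradient-based saliency: the gradient of the
    # predicted class score with respect to the input highlights the
    # pixels the decision is most sensitive to.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model.eval()

    x = torch.randn(1, 3, 32, 32, requires_grad=True)  # input image
    score = model(x)[0].max()    # score of the predicted class
    score.backward()             # gradient of that score w.r.t. pixels

    # Large-magnitude gradients mark influential pixels -- a crude
    # "why" behind the decision.
    saliency = x.grad.abs().amax(dim=1)  # collapse RGB channels
    print(saliency.shape)  # torch.Size([1, 32, 32])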

This special issue will focus on the interpretability of deep learning from the perspectives of representation, modeling, and prediction, as well as on the deployment of interpretability in various applications. Potential topics include, but are not limited to, the following:

Important Dates:

Best papers will be selected and recommended for potential inclusion in the Special Issue. The following information concerns the preparation and submission of papers for possible publication in the CAIS Special Issue.

  1. Authors are required to register for and present their papers at the ISNN2020 conference (https://conference.cs.cityu.edu.hk/isnn/). The registration deadline is September 1, 2020.
  2. Authors can choose one of the following two options for publishing their papers: (a) publish the paper in both the ISNN2020 proceedings AND the CAIS Special Issue, in which case the extended paper must differ from the conference paper by at least 40%; or (b) publish the paper in the CAIS Special Issue without changing the conference paper, in which case the paper will not be included in the ISNN2020 proceedings.
  3. Manuscripts should be prepared according to the journal format and submitted at: https://www.editorialmanager.com/cais/.

Guest Editors:
Dr. Nian Zhang, University of the District of Columbia, Washington DC, USA. nzhang@udc.edu
Dr. Zhaojie Ju, University of Portsmouth, Portsmouth, UK. zhaojie.ju@port.ac.uk
Dr. Chenguang Yang, University of the West of England, Bristol, UK. charlie.yang@uwe.ac.uk
Dr. Dingguo Zhang, University of Bath, Bath, UK. d.zhang@bath.ac.uk
Dr. Jinguo Liu, Shenyang Institute of Automation, China. liujinguo@sia.cn