==== World Wide Web Journal Special Issue on "Explainability in Web" ====

Guest Editors: Guandong Xu, Hongzhi Yin, Irwin King, Lin Li
https://www.springer.com/journal/11280/updates/17975196

Background

This special issue encourages the submission of high-quality research papers on all topics related to explainability and its applications in the Web. With the aim of providing high-quality information, services and items to users, advanced Web applications such as personalisation and recommendation systems have grown exponentially and advanced rapidly over the past decades. Prior efforts rely extensively on complex data mining, machine learning or latent representation models (e.g., association rule mining, collaborative filtering, matrix factorization and deep learning) to build highly effective and accurate models that capture user preferences and profiles, yet they offer little insight into the cause of, or explanation for, their predictions, or into how those results are reached. This is mainly because, on the one hand, the explanation generation and explainability of such systems remain far from satisfactory, producing explanations that are too coarse-grained, which makes the predicted outcomes less reliable and trustworthy. On the other hand, the increasing complexity of learning algorithms makes such systems work more like black boxes, sacrificing transparency, persuasiveness and user satisfaction with the results. Providing human-understandable explanations and effective explanation generation mechanisms to fully address the problems of explainability, reliability and trust has gained strong momentum in both industry and the research community. In a broader sense, generating highly reliable explanations via advanced artificial intelligence algorithms aims to produce both high-quality predictions and intuitive explanations for users, either in a post-hoc way or by constructing an inherently interpretable model.
The explainability research community has been leveraging a wide range of machine learning techniques, from matrix factorization, topic modelling, graph learning and deep learning to knowledge graph embedding, to generate high-quality explanations for various Web systems. Research on explanation generation and explainability is now essential and has enhanced a diversity of real-world scenarios, such as explainable e-commerce, business intelligence, prescriptive analytics, interpretable modelling and explainable recommendation. In order to gather and present innovative research on explainability and interpretability in Web applications, we solicit submissions of high-quality manuscripts reporting the state-of-the-art techniques and trends in this field.

Topics of interest include but are not limited to:

> Methodology and architectures to improve explainability
o Interpretable machine learning
o Intrinsically interpretable models
o Model-agnostic explainable methods
o Post-hoc explainable models
o Causal inference for explainability
o Explainable recommender systems
o Explainable search systems
> Explanation generation via big data
o Feature/sentence/word-cloud-based explanations
o Social/geo/trust-based explanations
o Knowledge graph derived explanations
o Multi-source data aggregation for explainability
> Explainable recommendation models
o Matrix factorization models
o Sequential models
o Graph- and path-based models
o Deep learning for explainable recommendations
o Model-agnostic explainable recommendations
o Causal learning for explainable recommendations
> Explainable search models
o Deep learning models for search
o Term-based explainable search
o User behaviour analysis for explainable search
o Feature sensitivity analysis for explainable search
o Attention mechanisms for explainable search
> Evaluation of explainability
o Online/offline metrics
o User studies
o Case studies
> Applications
o Trust in recommendation
o Fairness in recommendation
o Social/geo-location/group recommendation
o Conversational recommendation
o Web search
o Question answering
o Chatbot & dialogue systems
> Cross-disciplinary topics involving explainability
o Causal explanation in business
o What-if analysis and targeted marketing
o Prescriptive analytics and intervention
o Generating visual explanations

Submission

Manuscripts should be submitted to: http://WWWJ.edmgr.com. Authors should choose the article type "Explainability in Web" when submitting their papers.

Important Dates

* Manuscript due: October 28, 2020
* First round of reviews: December 20, 2020
* Decision of acceptance: February 15, 2021
* Publication date: mid 2021

Guest Editors

Guandong Xu, University of Technology Sydney, Sydney, Australia, Guandong.Xu@uts.edu.au
Hongzhi Yin, The University of Queensland, Brisbane, Australia, h.yin1@uq.edu.au
Irwin King, Chinese University of Hong Kong, China, king@cse.cuhk.edu.hk
Lin Li, Wuhan University of Technology, China, cathylilin@whut.edu.cn