"Frontiers in Big Data" has launched a new Research Topic, "Adversarial Machine Learning for Robust Prediction". Manuscripts will be peer-reviewed and, if accepted for publication, will be free to access for all readers, and indexed in relevant repositories. 

This is a great opportunity to have your research published in Frontiers in Big Data. Led by Field Chief Editor Huan Liu, the journal is at the forefront of the data-driven sciences, exploring how acquiring intelligence from information can help address humankind's global challenges.

Visit the homepage for this Research Topic for a full description of the project:

The abstract deadline is July 21, 2021.
The submission deadline is September 21, 2021.

More about Frontiers' Research Topics
Frontiers' Research Topics are collections of peer-reviewed articles around an emerging or cutting-edge theme. As a contributing author, you will benefit from:

  •  high visibility and the chance to be included in a downloadable ebook
  •  rigorous, transparent and fast peer-review for your article
  •  publication throughout the year, as soon as your article is accepted
  •  advanced impact metrics

About this Research Topic
With continued advances in science and technology, digital data have grown at an astonishing rate across domains and forms, such as business, geography, health, multimedia, network, text, and web data. Machine learning, a powerful tool for automatically extracting, managing, inferring, and transferring knowledge, has proven extremely useful in understanding the intrinsic nature of real-world big data. Despite their remarkable performance, machine learning models, especially deep learning models, are vulnerable to small adversarial perturbations injected by malicious parties and users. There is an immediate and crucial need for theoretical and practical techniques to identify the vulnerabilities of machine learning models and to explore defense mechanisms and certifiable robustness.

The goal of this Research Topic is to present state-of-the-art methodologies built upon an innovative blend of techniques from computer science, mathematics, and statistics, and to greatly expand the reach of adversarial machine learning from both theoretical and practical points of view, allowing machine learning models to be deployed in safety- and security-critical applications. This Research Topic will focus on three main research tasks: (1) how to develop effective modification ("attack") strategies that tamper with the intrinsic characteristics of data by injecting fake information; (2) how to develop defense strategies that offer sufficient protection to machine learning models against adversarial attacks; and (3) how to verify certifiable robustness to adversarial perturbations for a general class of machine learning models. This Research Topic also aims to identify future challenges and research directions related to adversarial machine learning.
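To make the first of these tasks concrete, a small input perturbation can flip a model's prediction even when the model is otherwise accurate. Below is a minimal sketch in the spirit of the fast gradient sign method (FGSM) against a toy logistic-regression model; the weights, input, and step size `eps` are purely illustrative assumptions, not anything prescribed by this call.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "victim" model (weights chosen for illustration):
# predicts class 1 if sigmoid(w.x + b) > 0.5.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm_perturb(x, y, eps):
    """Shift x by eps in the direction of the sign of the loss gradient.

    For the logistic loss, d(loss)/dx = (sigmoid(w.x + b) - y) * w,
    so each input coordinate moves by +/- eps to increase the loss.
    """
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.2, 0.1])            # clean input, true label 1
print(predict(x))                    # clean prediction: 1
x_adv = fgsm_perturb(x, y=1, eps=0.5)
print(predict(x_adv))                # after the perturbation: 0
```

The same gradient-sign idea underlies many evasion attacks on deep models, where the gradient is obtained by backpropagation rather than in closed form.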

We invite submissions of high-quality manuscripts reporting research on analyzing, characterizing, understanding, and improving the vulnerability and robustness of various machine learning models under different real-world scenarios.

Topics of interest include, but are not limited to:
• White-box Attack, Gray-box Attack, and Black-box Attack
• Poisoning Attack and Evasion Attack
• Targeted Attack and Non-targeted Attack
• Backdoor Attack
• Privacy Attack
• Model-agnostic Attack
• Attack Imperceptibility
• Adversarial Defense
• Attack Detection
• Defensive Distillation
• Privacy Defense
• Model-agnostic Defense
• Certifiable Robustness
• Robustness and Regularization
• Attack and Defense Transferability
• Attack and Defense Automation
• Adversarial Attack/Defense on Image/Graph/Text Data

Keywords: Adversarial Machine Learning, Adversarial Attack, Adversarial Defense, Certifiable Robustness, Big Data Analytics

Topic Editors:
 Dr. Yang Zhou, Auburn University, Auburn, United States
 Dr. Neil Gong, Duke University, Durham, United States
 Dr. Ting Wang, Pennsylvania State University, University Park, United States