Trustworthiness is a critical issue in artificial intelligence (AI), especially for real-world applications: AI cannot be deployed in the real world unless it is trustworthy. However, the precise meaning and scope of trustworthiness are not entirely clear to the scientific community, and no single definition has been accepted by all researchers. Nevertheless, the vast majority of researchers agree that AI trustworthiness should include at least accuracy, reliability, robustness, safety, security, privacy, fairness, transparency, controllability, and maintainability. First, this talk very briefly reviews AI ethics, which is closely related to AI trustworthiness. Second, the talk examines the fairness and explainability of machine learning models. It is argued that many aspects of trustworthiness, such as fairness and explainability, are inherently multi-dimensional and therefore cannot be defined comprehensively by any single numerical measure. This talk advocates multi-objective evolutionary learning as an approach to enhancing AI trustworthiness, using fairness and explainability as two examples to demonstrate how it can improve learned models. Third, the talk illustrates how constrained learning could be used to regulate models that must meet certain criteria, e.g., safety. Finally, the talk ends with some concluding remarks.
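To make the general idea concrete, the following is a minimal, hypothetical sketch of multi-objective evolutionary learning for fairness, not the speaker's actual method or code: the weights of a linear classifier are evolved against two objectives at once, classification error and a demographic-parity gap as one simple fairness measure, so that the surviving population approximates a Pareto front of accuracy-fairness trade-offs. All data, names, and parameters below are illustrative assumptions.

    # Hypothetical sketch of multi-objective evolutionary learning for fairness.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: 2 features, a binary sensitive attribute s, binary label y.
    n = 500
    X = rng.normal(size=(n, 2))
    s = rng.integers(0, 2, size=n)  # sensitive group membership (0 or 1)
    y = (X[:, 0] + 0.5 * s + 0.2 * rng.normal(size=n) > 0).astype(int)

    def objectives(w):
        """Return (error rate, demographic-parity gap) for weight vector w."""
        pred = (X @ w[:2] + w[2] > 0).astype(int)
        error = np.mean(pred != y)
        gap = abs(pred[s == 0].mean() - pred[s == 1].mean())
        return error, gap

    def dominates(a, b):
        """Pareto dominance: a is no worse in all objectives, better in one."""
        return all(ai <= bi for ai, bi in zip(a, b)) and \
               any(ai < bi for ai, bi in zip(a, b))

    # A (mu + lambda) evolutionary loop with Pareto-based survivor selection.
    mu, lam, gens = 20, 40, 50
    pop = [rng.normal(size=3) for _ in range(mu)]
    for _ in range(gens):
        parents = [pop[rng.integers(len(pop))] for _ in range(lam)]
        offspring = [p + 0.3 * rng.normal(size=3) for p in parents]
        combined = pop + offspring
        fits = [objectives(w) for w in combined]
        # Keep the mu least-dominated individuals (a crude dominance ranking).
        counts = [sum(dominates(g, f) for g in fits) for f in fits]
        order = np.argsort(counts)
        pop = [combined[i] for i in order[:mu]]

    # The final population approximates a Pareto front of (error, unfairness);
    # a practitioner then picks the trade-off that matches their priorities.
    for err, gap in sorted(objectives(w) for w in pop)[:5]:
        print(f"error={err:.3f}  parity_gap={gap:.3f}")

The key design point this sketch illustrates is that no weighted sum of accuracy and fairness is ever formed: selection operates on Pareto dominance, so the outcome is a set of trade-off models rather than a single model tuned to one fixed numerical measure.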
Xin Yao is a Chair Professor of Computer Science at the Southern University of Science and Technology (SUSTech), Shenzhen, China, and a part-time Professor of Computer Science at the University of Birmingham, UK. He is an IEEE Fellow and was a Distinguished Lecturer of the IEEE Computational Intelligence Society (CIS). He served as the President (2014-15) of IEEE CIS and the Editor-in-Chief (2003-08) of the IEEE Transactions on Evolutionary Computation. His major research interests include evolutionary computation, neural networks, ensemble learning, and multi-objective learning. His work won the 2001 IEEE Donald G. Fink Prize Paper Award; the 2010, 2016, and 2017 IEEE Transactions on Evolutionary Computation Outstanding Paper Awards; the 2011 IEEE Transactions on Neural Networks Outstanding Paper Award; the 2010 BT Gordon Radley Award for Best Author of Innovation (Finalist); and other best paper awards at conferences. He received a 2012 Royal Society Wolfson Research Merit Award, the 2013 IEEE CIS Evolutionary Computation Pioneer Award, and the 2020 IEEE Frank Rosenblatt Award.