Deep learning has witnessed remarkable success in many application domains and is now shifting towards training super-deep models with extremely large-scale labeled or unlabeled data on expensive computational resources. In this talk, I will present some of the recent progress. Specifically, I will first show the PAC-Bayes generalization bounds and present some practical implications for new algorithm designs. Then, I will propose an efficient architecture design for vision transformers, named ViTAE, which explores intrinsic inductive biases. Next, I will introduce a novel self-supervised training method called RegionCL, which uses a simple region swapping strategy to build effective supervisory signals from rich positive/negative pairs at both the instance level and the region level. It greatly advances the ability of representative self-supervised learning frameworks including MoCo, SimCLR, and SimSiam. Finally, some promising applications of vision transformers and self-supervised learning will be presented, including image classification, object detection, semantic segmentation, and pose estimation.
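As a rough illustration of the region swapping idea behind RegionCL, the following minimal sketch swaps a random rectangular region between two batches of images, so that each mixed image contains a "canvas" part from its own source and a "pasted" part from the other image. This is only an assumed sketch for intuition, not the authors' implementation; the function name region_swap, the single fixed region per batch, and the swap_ratio parameter are all illustrative choices.

```python
import torch

def region_swap(x1, x2, swap_ratio=0.5):
    """Swap a random rectangular region between two image batches.

    Returns the two mixed views and the region mask. In the RegionCL
    spirit, the canvas of each mixed image is a positive for its own
    source image and a negative for the other, and vice versa for the
    pasted region, yielding pairs at both instance and region level.
    """
    _, _, h, w = x1.shape
    rh, rw = int(h * swap_ratio), int(w * swap_ratio)
    top = torch.randint(0, h - rh + 1, (1,)).item()
    left = torch.randint(0, w - rw + 1, (1,)).item()

    # Binary mask marking the swapped region; broadcasts over batch/channels.
    mask = torch.zeros(1, 1, h, w, device=x1.device)
    mask[..., top:top + rh, left:left + rw] = 1.0

    mixed1 = x1 * (1 - mask) + x2 * mask  # canvas from x1, paste from x2
    mixed2 = x2 * (1 - mask) + x1 * mask  # canvas from x2, paste from x1
    return mixed1, mixed2, mask
```

Features pooled separately over the canvas and pasted areas of the mixed views could then be contrasted against the corresponding areas of the source images, which is how such a swap yields both positive and negative pairs from a single composite.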
Dacheng Tao is the Inaugural Director of the JD Explore Academy and a Senior Vice President of JD.com. He is also an advisor and chief scientist of the Digital Science Institute at the University of Sydney. He mainly applies statistics and mathematics to artificial intelligence and data science, and his research is detailed in one monograph and over 200 publications in prestigious journals and proceedings at leading conferences. He received the 2015 Australian Scopus-Eureka Prize, the 2018 IEEE ICDM Research Contributions Award, and the 2021 IEEE Computer Society McCluskey Technical Achievement Award. He is a fellow of the Australian Academy of Science, the World Academy of Sciences, the Royal Society of NSW, AAAS, ACM, IAPR, and IEEE.