I am a Nanyang Associate Professor at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. My research interests lie mainly in Computational Narrative Intelligence, Multi-modal Learning, and Machine Learning.

Prior to NTU, I was a Senior Research Scientist at Baidu Research USA, and a Research Scientist and Group Leader at Disney Research. I received my Ph.D. degree from Georgia Institute of Technology.

Storytelling is a powerful tool for communication, an exhibit of creativity, and a timeless form of entertainment. Computational Narrative Intelligence (CNI) aims to create intelligent machines that can understand and create stories, manage interactive narratives, and respond appropriately to stories told to them. I have contributed to all major areas of CNI, ranging from story generation and interactive narratives to human cognition, and from learning story knowledge to story understanding.

I believe that in order to simulate human intelligence, artificial intelligence must first acquire human-level knowledge from an approximation of the human experience, which is inherently multimodal. Further, inspired by the observation that human intelligence emerges from the interaction of multiple cognitive processes, I am interested in understanding the interactions between different neural network components and developing techniques that coordinate the learning of different neural network components beyond simplistic stochastic gradient descent.

Email: boyang.li@ntu.edu.sg

Selected Papers

Yinan Zhang, Boyang Li, Yong Liu, Hao Wang, Chunyan Miao. Initialization Matters: Regularizing Manifold-informed Initialization for Neural Recommendation Systems. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD). 2021.

TL;DR: If neural recommenders perform poorly (e.g., worse than well-tuned k-nearest-neighbors), it is probably because they do not use this data-dependent, manifold-informed initialization.

Paper Video Code bibtex

Xu Guo, Boyang Li, Han Yu, and Chunyan Miao. Latent-Optimized Adversarial Neural Transfer for Sarcasm Detection. The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT). 2021.

TL;DR: Sarcasm detection is an ideal problem for transfer learning. We identify competition between losses in adversarial transfer learning and propose a modified optimization technique that resolves it, achieving state-of-the-art results on the iSarcasm dataset.

Paper Code bibtex

Chang Liu, Han Yu, Boyang Li, Zhiqi Shen, Zhanning Gao, Peiran Ren, Xuansong Xie, Lizhen Cui, and Chunyan Miao. Noise-resistant Deep Metric Learning with Ranking-based Instance Selection. The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021.

TL;DR: We introduce a simple and efficient technique, which we believe to be the first, for deep metric learning under noisy training data; the method outperforms 12 baselines under both synthetic and natural noise.

Paper Supplemental Video Video (in Chinese) Code bibtex

Yuanyuan Chen, Boyang Li, Han Yu, Pengcheng Wu, and Chunyan Miao. HyDRA: Hypergradient Data Relevance Analysis for Interpreting Deep Neural Networks. The AAAI Conference on Artificial Intelligence (AAAI). 2021.

TL;DR: We provide an approximate hypergradient method for estimating how training data contribute to individual network predictions and a theoretical bound on the approximation error.

Paper Supplemental Code bibtex

Jianan Wang, Boyang Li, Xiangyu Fan, Jing Lin, and Yanwei Fu. Data-efficient Alignment of Multimodal Sequences by Aligning Gradient Updates and Internal Feature Distributions. The IEEE Winter Conference on Applications of Computer Vision (WACV). 2021.

TL;DR: We present several tricks that improve data efficiency of the NeuMATCH network, which aligns video and textual sequences.

Paper Supplemental Video Code & Data bibtex

Adam Noack, Isaac Ahern, Dejing Dou, and Boyang Li. An Empirical Study on the Relation between Network Interpretability and Adversarial Robustness. Springer Nature Computer Science. 2020.

TL;DR: Does the interpretability of neural networks imply robustness against adversarial attack? We provide some positive empirical evidence.

Paper Code bibtex

Hannah Kim, Denys Katerenchuk, Daniel Billet, Jun Huan, Haesun Park, and Boyang Li. Understanding Actors and Evaluating Personae with Gaussian Embeddings. The AAAI Conference on Artificial Intelligence (AAAI). 2019.

TL;DR: We computationally model movie casting decisions and actors' versatility.

Paper Code & Data bibtex

Pelin Dogan, Boyang Li, Leonid Sigal, Markus Gross. A Neural Multi-sequence Alignment TeCHnique (NeuMATCH). The Conference on Computer Vision and Pattern Recognition (CVPR). 2018.

TL;DR: We propose the first end-to-end optimizable network for aligning video and text sequences.

Paper Data bibtex

Ng Annalyn, Maarten Bos, Leonid Sigal, Boyang Li. Predicting Personality from Book Preferences with User-Generated Content Labels. IEEE Transactions on Affective Computing. 2018.

TL;DR: We can infer your personality from the books you read.

Paper bibtex


I have multiple open positions for Ph.D. students, postdocs, and research engineers. If you are interested, please send me your CV.

What's New