Journal Papers
Bao, H. & Takatsu, A. Proper Losses Regret at Least 1/2-order. Journal of Machine Learning Research, 2025. (minor revision, to appear) [arXiv] (alphabetical ordering)
Bao, H. Feature Normalization Prevents Collapse of Non-contrastive Learning Dynamics. Neural Computation 37(11):2079-2124, 2025. [link][arXiv]
Shimada, T., Bao, H., Sato, I., & Sugiyama, M. Classification from Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization. Neural Computation 33(5):1234-1268, 2021. [link][arXiv]
Bao, H., Sakai, T., Sato, I., & Sugiyama, M. Convex Formulation of Multiple Instance Learning from Positive and Unlabeled Bags. Neural Networks 105:132-141, 2018. [link][arXiv]
Conference Papers
Sakaue, S., Bao, H., Tsuchiya, T., & Oki, T. Online Structured Prediction with Fenchel–Young Losses and Improved Surrogate Regret for Online Multiclass Classification with Logistic Loss. In Proceedings of 37th Annual Conference on Learning Theory (COLT2024), PMLR 247:4458-4486, Edmonton, Canada, Jun. 30-Jul. 3, 2024. [link][arXiv]
Takezawa, Y.*, Sato, R.*, Bao, H., Niwa, K., & Yamada, M. Beyond Exponential Graph: Communication-Efficient Topologies for Decentralized Learning via Finite-time Convergence. Advances in Neural Information Processing Systems 36 (NeurIPS2023), 76692-76717, New Orleans, LA, USA, Dec. 10-16, 2023. [link][arXiv][github] (* equal contribution)
Nakamura, S., Bao, H., & Sugiyama, M. Robust Computation of Optimal Transport by β-potential Regularization. In Proceedings of 14th Asian Conference on Machine Learning (ACML2022), PMLR 189:770-785, Hyderabad, India, Dec. 12-14, 2022. [link][arXiv]
Bao, H., Scott, C., & Sugiyama, M. Calibrated Surrogate Losses for Adversarially Robust Classification. In Proceedings of 33rd Annual Conference on Learning Theory (COLT2020), PMLR 125:408-451, online, Jul. 9-12, 2020. [link][arXiv (corrigendum)][slides] (the arXiv version contains a corrigendum revising the definition of calibrated losses)
Kuroki, S., Charoenphakdee, N., Bao, H., Honda, J., Sato, I., & Sugiyama, M. Unsupervised Domain Adaptation Based on Source-guided Discrepancy. In Proceedings of 33rd AAAI Conference on Artificial Intelligence (AAAI2019), 33(01):4122-4129, Honolulu, HI, USA, Jan. 27-Feb. 1, 2019. [link][arXiv]
Preprints
Sakaue, S., Bao, H., & Cao, Y. Non-Stationary Online Structured Prediction with Surrogate Losses. [arXiv]
Liu, W., Bao, H., Yamada, M., Huang, Z., Zheng, N., & Qian, H. Many-to-Many Matching via Sparsity Controlled Optimal Transport. [arXiv]
Zhang, G., Bao, H., & Kashima, H. Online Policy Learning from Offline Preferences. [arXiv]
Sato, R., Takezawa, Y., Bao, H., Niwa, K., & Yamada, M. Embarrassingly Simple Text Watermarks. [arXiv]
Books
Mochihashi, D., & Suzuki, T. (Eds.), Ishiguro, K., Ito, S., Kajino, H., Kuroki, Y., Komiyama, J., Sato, R., Suzuki, T., Bao, H., Teshima, T., Hataya, R., Futami, F., Minami, K., Mochihashi, D., & Yokoi, S. (Trans.) Probabilistic Machine Learning: An Introduction, Asakura Pub., Tokyo, Japan, 2025. (確率的機械学習:入門編,朝倉書店,2025) [link (vol 1)][link (vol 2)] (Japanese translation)
Omata, H. R. (Ed.), Schuab, J.-F., Sato, S., Minaka, N., Matsumoto, T., & Bao, H. Where Facts Intersect, Nakanishiya Pub., Kyoto, Japan, 2025. (「事実」の交差点—科学的対話が生まれる文脈を探して,ナカニシヤ出版,2025) [link]
Sugiyama, M., Bao, H., Ishida, T., Lu, N., Sakai, T., & Niu, G. Machine Learning from Weak Supervision: An Empirical Risk Minimization Approach, MIT Press, Cambridge, MA, USA, 2022. [link]
PhD Thesis
Excess Risk Transfer and Learning Problem Reduction towards Reliable Machine Learning, UTokyo Repository, 2022. Degree granted: March 24, 2022. Japanese title: "信頼性の高い機械学習を目指した剰余リスク転移と学習問題の帰着" [link]