Publications

Books

  1. Sugiyama, M., Bao, H., Ishida, T., Lu, N., Sakai, T., & Niu, G.
    Machine Learning from Weak Supervision: An Empirical Risk Minimization Approach, MIT Press, Cambridge, MA, USA, 2022.
    [link]

PhD Thesis

  • Excess Risk Transfer and Learning Problem Reduction towards Reliable Machine Learning, UTokyo Repository, 2022.
    Date granted: 2022/03/24; Japanese title: "信頼性の高い機械学習を目指した剰余リスク転移と学習問題の帰着"
    [link]

Journal Articles (refereed)

  1. Lin, X., Bao, H., Cui, Y., Takeuchi, K., & Kashima, H.
    Scalable Individual Treatment Effect Estimator for Large Graphs.
    Machine Learning, xx:xx-xx, 2024.
    (Presented at the 16th Asian Conference on Machine Learning (ACML2024), Vietnam, Dec. 5-8, 2024)
    [link]
  2. Takezawa, Y., Bao, H., Niwa, K., Sato, R., & Yamada, M.
    Momentum Tracking: Momentum Acceleration for Decentralized Deep Learning on Heterogeneous Data.
    Transactions on Machine Learning Research, 2023.
    [link][arXiv][github]
  3. Bao, H. & Sakaue, S.
    Sparse Regularized Optimal Transport with Deformed q-Entropy.
    Entropy, 24(11):1634, 2022.
    [link]
  4. Yamada, M., Takezawa, Y., Sato, R., Bao, H., Kozareva, Z., & Ravi, S.
    Approximating 1-Wasserstein Distance with Trees.
    Transactions on Machine Learning Research, 2022.
    [link][arXiv]
  5. Shimada, T., Bao, H., Sato, I., & Sugiyama, M.
    Classification from Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization.
    Neural Computation, 33(5):1234-1268, 2021.
    [link][arXiv]
  6. Bao, H., Sakai, T., Sato, I., & Sugiyama, M.
    Convex Formulation of Multiple Instance Learning from Positive and Unlabeled Bags.
    Neural Networks, 105:132-141, 2018.
    [link][arXiv]

Conference Proceedings (refereed)

  1. Yokoi, S., Bao, H., Kurita, H., & Shimodaira, H.
    Zipfian Whitening.
    Advances in Neural Information Processing Systems 37 (NeurIPS2024), xxx-xxx, Vancouver, BC, Canada, Dec. 9-15, 2024.
    [link][arXiv]
  2. Takezawa, Y., Bao, H., Sato, R., Niwa, K., & Yamada, M.
    Polyak Meets Parameter-free Clipped Gradient Descent.
    Advances in Neural Information Processing Systems 37 (NeurIPS2024), xxx-xxx, Vancouver, BC, Canada, Dec. 9-15, 2024.
    [link][arXiv]
  3. Sakaue, S., Bao, H., Tsuchiya, T., & Oki, T.
    Online Structured Prediction with Fenchel–Young Losses and Improved Surrogate Regret for Online Multiclass Classification with Logistic Loss.
    In Proceedings of 37th Annual Conference on Learning Theory (COLT2024), PMLR 247:4458-4486, Edmonton, Canada, Jun. 30-Jul. 3, 2024.
    [link][arXiv]
  4. Bao, H., Hataya, R., & Karakida, R.
    Self-attention Networks Localize When QK-eigenspectrum Concentrates.
    In Proceedings of 41st International Conference on Machine Learning (ICML2024), PMLR 235:2903-2922, Vienna, Austria, Jul. 22-27, 2024.
    [link][arXiv]
  5. Houry, G., Bao, H., Zhao, H., & Yamada, M.
    Fast 1-Wasserstein Distance Approximations Using Greedy Strategies.
    In Proceedings of 27th International Conference on Artificial Intelligence and Statistics (AISTATS2024), PMLR 238:325-333, Valencia, Spain, May 2-4, 2024.
    [link]
  6. Takezawa, Y.*, Sato, R.*, Bao, H., Niwa, K., & Yamada, M.
    Beyond Exponential Graph: Communication-Efficient Topologies for Decentralized Learning via Finite-time Convergence.
    Advances in Neural Information Processing Systems 36 (NeurIPS2023), 76692-76717, New Orleans, LA, USA, Dec. 10-16, 2023.
    [link][arXiv][github] (* equal contribution)
  7. Hataya, R., Bao, H., & Arai, H.
    Will Large-scale Generative Models Corrupt Future Datasets?
    In Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV2023), 20555-20565, Paris, France, Oct. 2-6, 2023.
    [link][arXiv][dataset]
  8. Lin, X., Zhang, G., Lu, X., Bao, H., Takeuchi, K., & Kashima, H.
    Estimating Treatment Effects Under Heterogeneous Interference.
    In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECMLPKDD2023), LNCS 14169:576-592, Turin, Italy, Sep. 18-22, 2023.
    [link][arXiv]
  9. Bao, H.
    Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds.
    In Proceedings of 36th Annual Conference on Learning Theory (COLT2023), PMLR 195:525-547, Bangalore, India, Jul. 12-15, 2023.
    [link]
  10. Arase, Y., Bao, H., & Yokoi, S.
    Unbalanced Optimal Transport for Unbalanced Word Alignment.
    In Proceedings of 61st Annual Meeting of the Association for Computational Linguistics (ACL2023), 3966-3986, Toronto, Canada, Jul. 9-14, 2023.
    [link][arXiv][github]
  11. Nakamura, S., Bao, H., & Sugiyama, M.
    Robust Computation of Optimal Transport by β-potential Regularization.
    In Proceedings of 14th Asian Conference on Machine Learning (ACML2022), PMLR 189:770-785, Hyderabad, India, Dec. 12-14, 2022.
    [link][arXiv]
  12. Bao, H., Nagano, Y., & Nozawa, K.
    On the Surrogate Gap between Contrastive and Supervised Losses.
    In Proceedings of 39th International Conference on Machine Learning (ICML2022), PMLR 162:1585-1606, Baltimore, MD, USA, Jul. 17-23, 2022.
    [link][arXiv][poster][github] (equal contribution & alphabetical ordering)
  13. Bao, H.*, Shimada, T.*, Xu, L., Sato, I., & Sugiyama, M.
    Pairwise Supervision Can Provably Elicit a Decision Boundary.
    In Proceedings of 25th International Conference on Artificial Intelligence and Statistics (AISTATS2022), PMLR 151:2618-2640, online, Mar. 28-30, 2022.
    [link][arXiv][poster] (* equal contribution)
  14. Dan, S., Bao, H., & Sugiyama, M.
    Learning from Noisy Similar and Dissimilar Data.
    In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECMLPKDD2021), LNCS 12976:233-249, online, Sep. 13-17, 2021.
    [link][arXiv]
  15. Bao, H. & Sugiyama, M.
    Fenchel-Young Losses with Skewed Entropies for Class-posterior Probability Estimation.
    In Proceedings of 24th International Conference on Artificial Intelligence and Statistics (AISTATS2021), PMLR 130:1648-1656, online, Apr. 13-15, 2021.
    [link][poster][github]
  16. Nordström, M., Bao, H., Löfman, F., Hult, H., Maki, A., & Sugiyama, M.
    Calibrated Surrogate Maximization of Dice.
    In Proceedings of 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI2020), LNCS 12264:269-278, online, Oct. 4-8, 2020.
    [link]
  17. Bao, H., Scott, C., & Sugiyama, M.
    Calibrated Surrogate Losses for Adversarially Robust Classification.
    In Proceedings of 33rd Annual Conference on Learning Theory (COLT2020), PMLR 125:408-451, online, Jul. 9-12, 2020.
    [link][arXiv (corrigendum)][slides] (the arXiv version contains a corrigendum that modifies the definition of calibrated losses)
  18. Bao, H. & Sugiyama, M.
    Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification.
    In Proceedings of 23rd International Conference on Artificial Intelligence and Statistics (AISTATS2020), PMLR 108:2337-2347, online, Aug. 26-28, 2020.
    [link][arXiv][slides]
  19. Wu, Y.-H., Charoenphakdee, N., Bao, H., Tangkaratt, V., & Sugiyama, M.
    Imitation Learning from Imperfect Demonstration.
    In Proceedings of 36th International Conference on Machine Learning (ICML2019), PMLR 97:6818-6827, Long Beach, CA, USA, Jun. 9-15, 2019.
    [link][arXiv][poster][github]
  20. Kuroki, S., Charoenphakdee, N., Bao, H., Honda, J., Sato, I., & Sugiyama, M.
    Unsupervised Domain Adaptation Based on Source-guided Discrepancy.
    In Proceedings of 33rd AAAI Conference on Artificial Intelligence (AAAI2019), 33(01):4122-4129, Honolulu, HI, USA, Jan. 27-Feb. 1, 2019.
    [link][arXiv]
  21. Bao, H., Niu, G., & Sugiyama, M.
    Classification from Pairwise Similarity and Unlabeled Data.
    In Proceedings of 35th International Conference on Machine Learning (ICML2018), PMLR 80:461-470, Stockholm, Sweden, Jul. 10-15, 2018.
    [link][arXiv][slides][poster][github]

Preprints

  • Bao, H. & Takatsu, A.
    Proper Losses Regret at Least 1/2-order.
    [arXiv] (alphabetical ordering)
  • Ishikawa, S.*, Yamada, M.*, Bao, H., & Takezawa, Y.
    PhiNets: Brain-inspired Non-contrastive Learning Based on Temporal Prediction Hypothesis.
    [arXiv] (* equal contribution)
  • Zhang, G., Bao, H., & Kashima, H.
    Online Policy Learning from Offline Preferences.
    [arXiv]
  • Sato, R., Takezawa, Y., Bao, H., Niwa, K., & Yamada, M.
    Embarrassingly Simple Text Watermarks.
    [arXiv]
  • Takezawa, Y., Sato, R., Bao, H., Niwa, K., & Yamada, M.
    Necessary and Sufficient Watermark for Large Language Models.
    [arXiv][github]
  • Bao, H.
    Feature Normalization Prevents Collapse of Non-contrastive Learning Dynamics.
    [arXiv]