Past workshop talks (non-refereed unless otherwise noted)
Shing, M., Misaki, K., Bao, H., Yokoi, S., & Akiba, T. TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models. Presented at Machine Learning and Compression Workshop at NeurIPS 2024, Vancouver, BC, Dec. 15, 2024. [link] (refereed)
Bao, H. Feature Normalization Prevents Collapse of Non-contrastive Learning Dynamics. Presented at 26th Information-Based Induction Sciences Workshop (IBIS2023), Kokura, Japan, Oct. 29-Nov. 1, 2023. Presentation award finalist.
Takezawa, Y., Sato, R., Bao, H., Niwa, K., & Yamada, M. Beyond Exponential Graph: Communication-Efficient Topologies for Decentralized Learning via Finite-time Convergence. Presented at 26th Information-Based Induction Sciences Workshop (IBIS2023), Kokura, Japan, Oct. 29-Nov. 1, 2023.
Takezawa, Y., Sato, R., Bao, H., Niwa, K., & Yamada, M. Beyond Exponential Graph: Communication-Efficient Topologies for Decentralized Learning via Finite-time Convergence. IEICE Technical Report 123:83-90, 2023. Presented at 50th Information-Based Induction Sciences and Machine Learning Technical Committee (IBISML050), Okinawa, Japan, Jun. 29-Jul. 1, 2023. [link] Winner of the IEICE TC-IBISML Research Award 2023.
Takezawa, Y., Bao, H., Niwa, K., Sato, R., & Yamada, M. Momentum Tracking: Momentum Acceleration for Decentralized Deep Learning on Heterogeneous Data. Presented at 25th Information-Based Induction Sciences Workshop (IBIS2022), Tsukuba, Japan, Nov. 20-23, 2022.
Bao, H., Nagano, Y., & Nozawa, K. On the Surrogate Gap between Contrastive and Supervised Losses. Presented at 25th Information-Based Induction Sciences Workshop (IBIS2022), Tsukuba, Japan, Nov. 20-23, 2022. Winner of the presentation award.
Bao, H., Scott, C., & Sugiyama, M. Calibrated Surrogate Losses for Adversarially Robust Classification. Presented at 23rd Information-Based Induction Sciences Workshop (IBIS2020), Online, Nov. 23-26, 2020. Winner of the best presentation award.
Bao, H. & Sugiyama, M. Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification. Presented at 22nd Information-Based Induction Sciences Workshop (IBIS2019), Nagoya, Japan, Nov. 20-23, 2019. Winner of the student presentation award.
Shimada, T., Bao, H., Sato, I., & Sugiyama, M. Classification from Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization. Presented at 22nd Information-Based Induction Sciences Workshop (IBIS2019), Nagoya, Japan, Nov. 20-23, 2019.
Bao, H. & Sugiyama, M. Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification. Presented at Joint Workshop of BBDC, BZML, and RIKEN AIP, Berlin, Germany, Sep. 9-10, 2019.
Shimada, T., Bao, H., Sato, I., & Sugiyama, M. Classification from Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization. Presented at 3rd International Workshop on Symbolic-Neural Learning (SNL2019), Tokyo, Japan, Jul. 11-12, 2019.
Bao, H., Niu, G., & Sugiyama, M. Classification from Pairwise Similarity and Unlabeled Data. Presented at 1st Japan-Israel Machine Learning Meeting (JIML-2018), Tel Aviv, Israel, Nov. 19-20, 2018. Winner of the best poster award. [poster]
Kuroki, S., Charoenphakdee, N., Bao, H., Honda, J., Sato, I., & Sugiyama, M. Unsupervised Domain Adaptation Based on Distance between Distributions Using Source-domain Labels. Presented at 21st Information-Based Induction Sciences Workshop (IBIS2018), Sapporo, Japan, Nov. 4-7, 2018.
2022/07/26 Kyoto Machine Learning Workshop (at Kyoto University), Japan. Reliable and Transferrable Machine Learning via Loss Function Perspective.
2022/03/16 Seminar Talk at Matsuura Lab — The University of Tokyo, Japan. Excess Risk Transfer and Learning Problem Reduction towards Reliable Machine Learning. (in Japanese)
2019/09/12 Seminar Talk at Parietal Team — INRIA Paris-Saclay, Paris, France. Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification.