About me:
I am a graduate student doing research on machine learning.
My research interest lies in statistical learning theory, especially loss functions.
I am also interested in transfer learning and similarity learning.
Oct 20, 2020: We are going to hold an online talk event on learning theory and loss functions, with Jessie and Yutong. Registration and further information are available here.
Jun 23, 2020: One paper was accepted at MICCAI2020!
May 26, 2020: Our paper "Calibrated Surrogate Losses for Adversarially Robust Classification" has been accepted by COLT2020! This work studies surrogate losses under adversarial attacks, showing that no convex calibrated surrogate exists. The preprint is uploaded here. (updated on May 29)
Bao, H.*, Shimada, T.*, Xu, L., Sato, I., & Sugiyama, M. Similarity-based Classification: Connecting Similarity Learning to Binary Classification. [arXiv] (* equal contribution)
Dan, S., Bao, H., & Sugiyama, M. Learning from Noisy Similar and Dissimilar Data. [arXiv]
Journal Articles
Shimada, T., Bao, H., Sato, I., & Sugiyama, M. Classification from Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization. Neural Computation, 2020 (to appear). [arXiv]
Bao, H., Sakai, T., Sato, I., & Sugiyama, M. Convex Formulation of Multiple Instance Learning from Positive and Unlabeled Bags. Neural Networks 105:132-141, 2018. [link][arXiv]
Kuroki, S., Charoenphakdee, N., Bao, H., Honda, J., Sato, I., & Sugiyama, M. Unsupervised Domain Adaptation Based on Source-guided Discrepancy. In Proceedings of 33rd AAAI Conference on Artificial Intelligence (AAAI2019), 33 01:4122-4129, Honolulu, HI, USA, Jan. 27-Feb. 1, 2019. [link][arXiv]
Bao, H., Scott, C., & Sugiyama, M. Calibrated Surrogate Losses for Adversarially Robust Classification. Presented at 23rd Information-Based Induction Sciences Workshop (IBIS2020), Online, Nov. 23-26, 2020. The winner of the best presentation award.
Bao, H. & Sugiyama, M. Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification. Presented at 22nd Information-Based Induction Sciences Workshop (IBIS2019), Nagoya, Japan, Nov. 20-23, 2019. The winner of the student presentation award.
Shimada, T., Bao, H., Sato, I., & Sugiyama, M. Classification from Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization. Presented at 22nd Information-Based Induction Sciences Workshop (IBIS2019), Nagoya, Japan, Nov. 20-23, 2019.
Bao, H. & Sugiyama, M. Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification. Presented at Joint Workshop of BBDC, BZML, and RIKEN AIP, Berlin, Germany, Sep. 9-10, 2019.
Shimada, T., Bao, H., Sato, I., & Sugiyama, M. Classification from Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization. Presented at 3rd International Workshop on Symbolic-Neural Learning (SNL2019), Tokyo, Japan, Jul. 11-12, 2019.
Bao, H., Niu, G., & Sugiyama, M. Classification from Pairwise Similarity and Unlabeled Data. Presented at 1st Japan-Israel Machine Learning Meeting (JIML-2018), Tel-Aviv, Israel, Nov. 19-20, 2018. The winner of the best poster award. [poster]
Kuroki, S., Charoenphakdee, N., Bao, H., Honda, J., Sato, I., & Sugiyama, M. Unsupervised Domain Adaptation Based on Distance between Distributions Using Source-domain Labels. Presented at 21st Information-Based Induction Sciences Workshop (IBIS2018), Sapporo, Japan, Nov. 4-7, 2018.
2019/09/12 Seminar Talk at Parietal Team — INRIA Paris-Saclay, Paris, France. Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification.
2019/08/10 9th CLASP! Meeting, Tokyo, Japan. Machine Learning with Small Data (in Japanese).