Talks

Upcoming talks

  • Shing, M., Misaki, K., Bao, H., Yokoi, S., & Akiba, T.
    TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models.
    Presented at Machine Learning and Compression Workshop at NeurIPS 2024, Vancouver, BC, Dec. 15, 2024.
  • 2024/12/01 The 57th Annual Meeting of the Philosophy of Science Society, Japan, Kansai University.
    Language, Intelligence, and Science from the Perspective of AI. (in Japanese)
    [link]

Past workshop talks (non-refereed)

  1. Bao, H.
    Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds.
    Presented at 27th Information-Based Induction Sciences Workshop (IBIS2024), Omiya, Japan, Nov. 4-7, 2024.
  2. Takezawa, Y., Bao, H., Niwa, K., Sato, R., & Yamada, M.
    Parameter-free Optimization Method for Clipped Gradient Descent.
    Presented at 27th Information-Based Induction Sciences Workshop (IBIS2024), Omiya, Japan, Nov. 4-7, 2024.
  3. Yokoi, S., Bao, H., Kurita, H., & Shimodaira, H.
    Zipfian Whitening.
    Presented at 27th Information-Based Induction Sciences Workshop (IBIS2024), Omiya, Japan, Nov. 4-7, 2024.
    The winner of the best presentation award.
  4. Bao, H.
    Feature Normalization Prevents Collapse of Non-contrastive Learning Dynamics.
    Presented at 26th Information-Based Induction Sciences Workshop (IBIS2023), Kokura, Japan, Oct. 29-Nov. 1, 2023.
    Finalist for the presentation award.
  5. Takezawa, Y., Sato, R., Bao, H., Niwa, K., & Yamada, M.
    Beyond Exponential Graph: Communication-Efficient Topologies for Decentralized Learning via Finite-time Convergence.
    Presented at 26th Information-Based Induction Sciences Workshop (IBIS2023), Kokura, Japan, Oct. 29-Nov. 1, 2023.
  6. Takezawa, Y., Sato, R., Bao, H., Niwa, K., & Yamada, M.
    Beyond Exponential Graph: Communication-Efficient Topologies for Decentralized Learning via Finite-time Convergence.
    IEICE Technical Report 123:83-90, 2023.
    Presented at 50th Information-Based Induction Sciences and Machine Learning Technical Committee (IBISML050), Okinawa, Japan, Jun. 29-Jul. 1, 2023.
    [link]
    The winner of IEICE TC-IBISML Research Award 2023.
  7. Yamada, M., Takezawa, Y., Sato, R., Bao, H., Kozareva, Z., & Ravi, S.
    Approximating 1-Wasserstein Distance with Trees.
    Presented at 25th Information-Based Induction Sciences Workshop (IBIS2022), Tsukuba, Japan, Nov. 20-23, 2022.
  8. Takezawa, Y., Bao, H., Niwa, K., Sato, R., & Yamada, M.
    Momentum Tracking: Momentum Acceleration for Decentralized Deep Learning on Heterogeneous Data.
    Presented at 25th Information-Based Induction Sciences Workshop (IBIS2022), Tsukuba, Japan, Nov. 20-23, 2022.
  9. Bao, H., Nagano, Y., & Nozawa, K.
    On the Surrogate Gap between Contrastive and Supervised Losses.
    Presented at 25th Information-Based Induction Sciences Workshop (IBIS2022), Tsukuba, Japan, Nov. 20-23, 2022.
    The winner of the presentation award.
  10. Nakamura, S., Bao, H., & Sugiyama, M.
    Robust Computation of Optimal Transport by β-potential Regularization.
    IEICE Technical Report 122:8-14, 2022.
    Presented at 45th Information-Based Induction Sciences and Machine Learning Technical Committee (IBISML045), Online, Mar. 8-9, 2022.
    [link]
  11. Bao, H. & Sugiyama, M.
    Fenchel-Young Losses with Skewed Entropies.
    Presented at 24th Information-Based Induction Sciences Workshop (IBIS2021), Online, Nov. 10-13, 2021.
  12. Bao, H., Scott, C., & Sugiyama, M.
    Calibrated Surrogate Losses for Adversarially Robust Classification.
    Presented at 23rd Information-Based Induction Sciences Workshop (IBIS2020), Online, Nov. 23-26, 2020.
    The winner of the best presentation award.
  13. Bao, H. & Sugiyama, M.
    Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification.
    IEICE Technical Report 119:71-78, 2020.
    Presented at 39th Information-Based Induction Sciences and Machine Learning Technical Committee (IBISML039), Kyoto, Japan, Mar. 10-11, 2020.
    [link]
  14. Bao, H. & Sugiyama, M.
    Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification.
    Presented at 22nd Information-Based Induction Sciences Workshop (IBIS2019), Nagoya, Japan, Nov. 20-23, 2019.
    The winner of the student presentation award.
  15. Shimada, T., Bao, H., Sato, I., & Sugiyama, M.
    Classification from Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization.
    Presented at 22nd Information-Based Induction Sciences Workshop (IBIS2019), Nagoya, Japan, Nov. 20-23, 2019.
  16. Bao, H. & Sugiyama, M.
    Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification.
    Presented at UK-Japan Robotics and AI Research Collaboration Workshops, Edinburgh, UK, Sep. 17-18, 2019.
  17. Bao, H. & Sugiyama, M.
    Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification.
    Presented at Joint Workshop of BBDC, BZML, and RIKEN AIP, Berlin, Germany, Sep. 9-10, 2019.
  18. Shimada, T., Bao, H., Sato, I., & Sugiyama, M.
    Classification from Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization.
    Presented at 3rd International Workshop on Symbolic-Neural Learning (SNL2019), Tokyo, Japan, Jul. 11-12, 2019.
  19. Bao, H., Niu, G., & Sugiyama, M.
    Classification from Pairwise Similarity and Unlabeled Data.
    Presented at 1st Japan-Israel Machine Learning Meeting (JIML-2018), Tel-Aviv, Israel, Nov. 19-20, 2018.
    The winner of the best poster award.
    [poster]
  20. Kuroki, S., Charoenphakdee, N., Bao, H., Honda, J., Sato, I., & Sugiyama, M.
    Unsupervised Domain Adaptation Based on Distance between Distributions Using Source-domain Labels.
    Presented at 21st Information-Based Induction Sciences Workshop (IBIS2018), Sapporo, Japan, Nov. 4-7, 2018.
  21. Bao, H., Sakai, T., Sugiyama, M., & Sato, I.
    Risk Minimization Framework for Multiple Instance Learning from Positive and Unlabeled Bags.
    Presented at 1st International Workshop on Symbolic-Neural Learning (SNL2017), Nagoya, Japan, Jul. 7-8, 2017.
  22. Bao, H., Sakai, T., Sato, I., & Sugiyama, M.
    Risk Minimization Framework for Multiple Instance Learning from Positive and Unlabeled Bags.
    IEICE Technical Report 117:55-62, 2017.
    Presented at 29th Information-Based Induction Sciences and Machine Learning Technical Committee (IBISML029), Okinawa, Japan, Jun. 23-25, 2017.
    [link]
  23. Bao, H., Usui, T., & Matsuura, K.
    Improving Optimization Level Estimation of Malware by Feature Selection.
    Presented at 32nd Symposium on Cryptography and Information Security (SCIS2015), Kokura, Japan, Jan. 20-23, 2015.

Past invited/organized talks

  1. 2024/08/20 Japanese Conference on Combinatorics and its Applications 2024, Yamagata University, Japan.
    Optimal Transport Meets q-Exponential.
  2. 2024/07/29 Seminar Talk at University of Tübingen.
    Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds.
    [link]
  3. 2024/06/27 Michinoku Communication Science Seminar at Tohoku NLP Lab — Tohoku University.
    Self-attention Networks Localize When QK-eigenspectrum Concentrates.
    [link][slides]
  4. 2023/10/13 Statistics Seminar at University of Bristol.
    Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds.
    [slides]
  5. 2023/07/22 Stories of Science, Science of Stories: An Interdisciplinary Study Tracing Back the Timescales of Science, Kyoto University.
    Artificial Intelligence and the Myth of "Optimality": What Is Technological Power? (in Japanese)
  6. 2023/06/07 Machine Learning and Data Science (MLDS) Unit Seminar, Okinawa Institute of Science and Technology, Japan.
    Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds.
  7. 2023/05/26 The Past, Present, and Future of Cultural Translation, Kyoto University.
    The World as Seen by a Computer Scientist: Through Translatability and Modeling. (in Japanese)
  8. 2023/03/15-17 Workshop OT 2023, The University of Tokyo, Japan.
    Sparse Regularized Optimal Transport with Deformed q-Entropy.
    [link][slides]
  9. 2023/03/14-16 Workshop on Functional Inference and Machine Intelligence, The Institute of Statistical Mathematics, Japan.
    Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds.
  10. 2023/01/06 Seminars on Optimization Methods and Algorithms, The University of Tokyo, Japan.
    The Intersection of Machine Learning and Convex Conjugacy. (in Japanese)
    [slides]
  11. 2022/09/20 Hakubi Seminar (Kyoto University), Japan.
    Loss Function Perspective of Machine Learning: What Does a Machine Learn?
    [link]
  12. 2022/09/09 IEICE Society Conference (Planned Session "Data Science and Information Theory"), Online.
    Exploring the Gap between Training Criteria and Evaluation Criteria. (in Japanese)
    [link]
  13. 2022/07/26 Kyoto Machine Learning Workshop (at Kyoto University), Japan.
    Reliable and Transferable Machine Learning via Loss Function Perspective.
  14. 2022/03/16 Seminar Talk at Matsuura Lab — The University of Tokyo, Japan.
    Excess Risk Transfer and Learning Problem Reduction towards Reliable Machine Learning. (in Japanese)
  15. 2022/03/10 The 7th Student Research Presentation Meeting, JSIAM Young Researchers' Group, Online.
    Exploring the Gap between Training Criteria and Evaluation Criteria. (in Japanese)
  16. 2021/09/29 The 117th Meeting of the JSAI Special Interest Group on Fundamental Problems in Artificial Intelligence (SIG-FPAI), Online.
    Exploring the Gap between Training Criteria and Evaluation Criteria. (in Japanese)
  17. 2021/03/26 Michinoku Communication Science Seminar at Tohoku NLP Lab — Tohoku University.
    Learning from Excess Risk Transfer Perspective. (in Japanese)
  18. 2020/12/22 Toshiba Corporate R&D Center Symposium.
    Calibrated Loss Functions in Adversarial Learning. (in Japanese)
  19. 2020/12/15 Talk event: Learning theory of loss functions.
    Calibrated Surrogate Losses and Robust Learning.
    [link][slides]
  20. 2020/09/10 Talk at RIKEN AIP, Tokyo, Japan.
    Learning Theory Bridges Loss Functions.
    [link][slides]
  21. 2020/09/07 Socio-Global Informatics Research Center Lecture — Institute of Industrial Science, Tokyo.
    Learning Theory Bridges Loss Functions. (in Japanese)
    [slides]
  22. 2020/07/13 Seminar Talk at Kashima Lab — Kyoto University, Kyoto, Japan.
    Learning Theory Bridges Loss Functions.
    [slides]
  23. 2020/02/07 Seminar Talk at Professor Sanmi Koyejo's Group — University of Illinois at Urbana-Champaign, Champaign, IL, USA.
    Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification.
    [slides]
  24. 2019/09/23 Modal Seminar — INRIA Lille Nord Europe, Lille, France.
    Unsupervised Domain Adaptation Based on Source-guided Discrepancy.
    [slides]
  25. 2019/09/19 IPAB Seminar — The University of Edinburgh, Edinburgh, UK.
    Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification.
  26. 2019/09/12 Seminar Talk at Parietal Team — INRIA Paris-Saclay, Paris, France.
    Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification.
  27. 2018/10/29 The 8th Mini-workshop on Brain-inspired Artificial Intelligence and Its Applications — ATR, Kyoto.
    Statistical Classification with Weakly Supervised Data. (in Japanese)
  28. 2018/08/12 The 3rd Statistics and Machine Learning Young Researchers Symposium, Tokyo.
    Classification from Pairwise Similarity and Unlabeled Data. (in Japanese)
  29. 2018/06/18 Science Salon — International Research Center for Neurointelligence, Tokyo, Japan.
    Classification from Pairwise Similarity and Unlabeled Data.
  30. 2017/09/19 Seminar Talk at Sierra Team — INRIA Paris, Paris, France.
    Multiple Instance Learning with Positive and Unlabeled Data.

Outreach (in Japanese)

  1. 2024/03/23 What Shapes a Human Being [Kyoto University Hakubi Center × NHK Culture], Part 6 — NHK Culture, Kyoto.
    Have Machines Become Able to Think?
  2. 2024/03/16 What Is the Hakubi Center, Where Outstanding Young Researchers Gather!? A Look Inside! — ScienceTalks TV, YouTube.
  3. 2021/11/21-23 Explained in 10 Minutes! The Front Line of UTokyo Research — The University of Tokyo, Komaba Campus, Online.
    Computers and Science from a Computer Scientist's Perspective.
  4. 2021/09/19-20 Explained in 10 Minutes! The Front Line of UTokyo Research — The University of Tokyo, Hongo Campus, Online.
    What Does It Mean for Information to Be "Close"?
  5. 2020/11/21-23 Explained in 10 Minutes! The Front Line of UTokyo Research — The University of Tokyo, Komaba Campus, Online.
    Optimization behind Pattern Recognition.
  6. 2019/11/22-24 Explained in 10 Minutes! The Front Line of UTokyo Research — The University of Tokyo, Komaba Campus, Tokyo.
    The Frontier of Sparse Modeling.
  7. 2019/08/10 CLASP! Vol. 9, Tokyo.
    Machine Learning with Small Data.
  8. 2019/08/08 School of Science Open Campus 2019, The University of Tokyo, Tokyo.
    Does Artificial Intelligence Dream of Humans?
  9. 2019/05/18-19 Explained in 10 Minutes! The Front Line of UTokyo Research — The University of Tokyo, Hongo Campus, Tokyo.
    Fixed Points and Algorithms.
  10. 2018/11/23-25 Explained in 10 Minutes! The Front Line of UTokyo Research — The University of Tokyo, Komaba Campus, Tokyo.
    The Mathematics of Privacy.