About me
🔎 For students
I'm open to accepting PhD students and interns at ISM. Feel free to reach out to me. Here are some of my favorite recent papers, but I'm more broadly interested in learning theory.
Research interests
- Learning theory
  - classification-calibrated losses, proper scoring rules, property elicitation
  - class probability estimation, probability calibration
  - learning dynamics, gradient descent
  - convex analysis, information geometry
- Representation learning
  - contrastive learning
  - robust learning
- Online convex optimization
News
- Oct 1, 2025: The Japanese translation of Kevin Murphy’s textbook “Probabilistic Machine Learning: An Introduction” will be published soon by Asakura Publishing [link (vol 1)] [link (vol 2)].
- Sep 19, 2025: Three of our papers have been accepted to NeurIPS 2025: (1) an O(n ln T) regret bound for online inverse linear optimization, (2) gradient descent convergence for Fenchel-Young losses beyond the stable regime (spotlight!), and (3) linear surrogate regret bounds by convex smooth losses (spotlight!).
- Mar 28, 2025: My grant proposal to JST-BOOST (a Japanese governmental five-year research funding program for early-career researchers in the AI field) has been accepted (official info).
- Feb 25, 2025: I moved to the Institute of Statistical Mathematics (ISM) as an associate professor.
- Jan 23, 2025: Three of our new papers have been accepted to AISTATS 2025: (1) a unified understanding of online inverse optimization via Fenchel-Young losses, (2) a non-principal-centric model of inverse optimization via prediction markets, and (3) a new loss class extending proper losses to incorporate the focal loss. Additionally, two of our papers have been accepted to ICLR 2025: (1) hippocampus-inspired self-supervised learning and (2) scheduled knowledge distillation for language modeling.
- (archived)
Upcoming travels
- Nov 12-15: Okinawa (IBIS)
- Dec 2-7: San Diego (NeurIPS)
- (archived)