About me

Research interests

My central research focus is the theoretical understanding of statistical machine learning, particularly from the following perspectives.

  1. Learning theory of loss functions, through which I study robustness to adversarial attacks (COLT2020) and class imbalance (AISTATS2020, AISTATS2021).
  2. Evaluation metrics for predictions and representations. Recently, I have been interested in how good representations can be learned from similarity information in light of a downstream task (ICML2018, preprint).

I am glad to have discussions with those who share these interests! You may have a look at the slides of my past talks, such as this one, to get a sense of my research taste.


News

  • Jan 19, 2022: Our paper “Pairwise Supervision Can Provably Elicit a Decision Boundary” has been accepted to AISTATS2022. We showed that pairwise supervision (i.e., information indicating whether two input vectors belong to the same underlying class) is sufficient to recover a binary decision boundary. The earlier version is available here.
  • Jun 21, 2021: Our paper “Learning from Noisy Similar and Dissimilar Data” has been accepted to ECMLPKDD2021.
  • May 17, 2021: We have published a corrigendum to our COLT2020 paper. The definition of calibrated losses has been corrected, and the proofs of our main results have been modified accordingly.
  • Jan 23, 2021: Our paper “Fenchel-Young Losses with Skewed Entropies for Class-posterior Probability Estimation” has been accepted to AISTATS2021!
  • Jan 8, 2021: Our presentation at IBIS2020 received the best presentation award (1st place out of 116 presentations)!