Publications
Books
- Sugiyama, M., Bao, H., Ishida, T., Lu, N., Sakai, T., & Niu, G.
Machine Learning from Weak Supervision: An Empirical Risk Minimization Approach, MIT Press, Cambridge, MA, USA, 2022.
[link]
Journal Articles (refereed)
- Takezawa, Y., Bao, H., Niwa, K., Sato, R., & Yamada, M.
Momentum Tracking: Momentum Acceleration for Decentralized Deep Learning on Heterogeneous Data.
Transactions on Machine Learning Research, 2023.
[link][arXiv][github]
- Bao, H. & Sakaue, S.
Sparse Regularized Optimal Transport with Deformed q-Entropy.
Entropy, 24(11):1634, 2022.
[link]
- Yamada, M., Takezawa, Y., Sato, R., Bao, H., Kozareva, Z., & Ravi, S.
Approximating 1-Wasserstein Distance with Trees.
Transactions on Machine Learning Research, 2022.
[link][arXiv]
- Shimada, T., Bao, H., Sato, I., & Sugiyama, M.
Classification from Pairwise Similarities/Dissimilarities and Unlabeled Data via Empirical Risk Minimization.
Neural Computation, 33(5):1234-1268, 2021.
[link][arXiv]
- Bao, H., Sakai, T., Sato, I., & Sugiyama, M.
Convex Formulation of Multiple Instance Learning from Positive and Unlabeled Bags.
Neural Networks, 105:132-141, 2018.
[link][arXiv]
Conference Proceedings (refereed)
- Takezawa, Y.*, Sato, R.*, Bao, H., Niwa, K., & Yamada, M.
Beyond Exponential Graph: Communication-Efficient Topologies for Decentralized Learning via Finite-time Convergence.
Advances in Neural Information Processing Systems 36, xxx-xxx, 2023.
[link][arXiv][github] (* equal contribution)
- Hataya, R., Bao, H., & Arai, H.
Will Large-scale Generative Models Corrupt Future Datasets?
In Proceedings of IEEE International Conference on Computer Vision (ICCV2023), 20555-20565, Paris, France, Oct. 2-6, 2023.
[link][arXiv][dataset]
- Lin, X., Zhang, G., Lu, X., Bao, H., Takeuchi, K., & Kashima, H.
Estimating Treatment Effects Under Heterogeneous Interference.
In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECMLPKDD2023), LNCS 14169:576-592, Turin, Italy, Sep. 18-22, 2023.
[link][arXiv]
- Bao, H.
Proper Losses, Moduli of Convexity, and Surrogate Regret Bounds.
In Proceedings of 36th Annual Conference on Learning Theory (COLT2023), PMLR 195:525-547, Bangalore, India, Jul. 12-15, 2023.
[link]
- Arase, Y., Bao, H., & Yokoi, S.
Unbalanced Optimal Transport for Unbalanced Word Alignment.
In Proceedings of 61st Annual Meeting of the Association for Computational Linguistics (ACL2023), 3966-3986, Toronto, Canada, Jul. 9-14, 2023.
[link][arXiv][github]
- Nakamura, S., Bao, H., & Sugiyama, M.
Robust Computation of Optimal Transport by β-potential Regularization.
In Proceedings of 14th Asian Conference on Machine Learning (ACML2022), PMLR 189:770-785, Hyderabad, India, Dec. 12-14, 2022.
[link][arXiv]
- Bao, H., Nagano, Y., & Nozawa, K.
On the Surrogate Gap between Contrastive and Supervised Losses.
In Proceedings of 39th International Conference on Machine Learning (ICML2022), PMLR 162:1585-1606, Baltimore, MD, USA, Jul. 17-23, 2022.
[link][arXiv][poster][github] (equal contribution & alphabetical ordering)
- Bao, H.*, Shimada, T.*, Xu, L., Sato, I., & Sugiyama, M.
Pairwise Supervision Can Provably Elicit a Decision Boundary.
In Proceedings of 25th International Conference on Artificial Intelligence and Statistics (AISTATS2022), PMLR 151:2618-2640, online, Mar. 28-30, 2022.
[link][arXiv][poster] (* equal contribution)
- Dan, S., Bao, H., & Sugiyama, M.
Learning from Noisy Similar and Dissimilar Data.
In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECMLPKDD2021), LNCS 12976:233-249, online, Sep. 13-17, 2021.
[link][arXiv]
- Bao, H. & Sugiyama, M.
Fenchel-Young Losses with Skewed Entropies for Class-posterior Probability Estimation.
In Proceedings of 24th International Conference on Artificial Intelligence and Statistics (AISTATS2021), PMLR 130:1648-1656, online, Apr. 13-15, 2021.
[link][poster][github]
- Nordström, M., Bao, H., Löfman, F., Hult, H., Maki, A., & Sugiyama, M.
Calibrated Surrogate Maximization of Dice.
In Proceedings of 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI2020), LNCS 12264:269-278, online, Oct. 4-8, 2020.
[link]
- Bao, H., Scott, C., & Sugiyama, M.
Calibrated Surrogate Losses for Adversarially Robust Classification.
In Proceedings of 33rd Annual Conference on Learning Theory (COLT2020), PMLR 125:408-451, online, Jul. 9-12, 2020.
[link][arXiv (corrigendum)][slides] (the arXiv version contains a corrigendum: the definition of calibrated losses is modified)
- Bao, H. & Sugiyama, M.
Calibrated Surrogate Maximization of Linear-fractional Utility in Binary Classification.
In Proceedings of 23rd International Conference on Artificial Intelligence and Statistics (AISTATS2020), PMLR 108:2337-2347, online, Aug. 26-28, 2020.
[link][arXiv][slides]
- Wu, Y.-H., Charoenphakdee, N., Bao, H., Tangkaratt, V., & Sugiyama, M.
Imitation Learning from Imperfect Demonstration.
In Proceedings of 36th International Conference on Machine Learning (ICML2019), PMLR 97:6818-6827, Long Beach, CA, USA, Jun. 9-15, 2019.
[link][arXiv][poster][github]
- Kuroki, S., Charoenphakdee, N., Bao, H., Honda, J., Sato, I., & Sugiyama, M.
Unsupervised Domain Adaptation Based on Source-guided Discrepancy.
In Proceedings of 33rd AAAI Conference on Artificial Intelligence (AAAI2019), 33(01):4122-4129, Honolulu, HI, USA, Jan. 27-Feb. 1, 2019.
[link][arXiv]
- Bao, H., Niu, G., & Sugiyama, M.
Classification from Pairwise Similarity and Unlabeled Data.
In Proceedings of 35th International Conference on Machine Learning (ICML2018), PMLR 80:461-470, Stockholm, Sweden, Jul. 10-15, 2018.
[link][arXiv][slides][poster][github]
Preprints
- Sato, R., Takezawa, Y., Bao, H., Niwa, K., & Yamada, M.
Embarrassingly Simple Text Watermarks.
[arXiv]
- Takezawa, Y., Sato, R., Bao, H., Niwa, K., & Yamada, M.
Necessary and Sufficient Watermark for Large Language Models.
[arXiv][github]
- Bao, H.
Feature Normalization Prevents Collapse of Non-contrastive Learning Dynamics.
[arXiv]