Empirical Process in M-Estimation (Chapter 3)

March 21, 2018

In Chapter 3, one of the most important theorems in empirical process theory, namely the Uniform Law of Large Numbers (ULLN), is stated and discussed. Summary: A function class $\mathcal{G}$ is said to satisfy the ULLN if $$ \sup_{g \in \mathcal{G}} \left| \int g \, d(P_n - P) \right| \rightarrow 0, \quad \text{a.s.} $$ The ULLN means that the empirical average converges to the expected average as the number of samples grows, uniformly over $g \in \mathcal{G}$, which is illustrated in the following figure. …
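As a quick numerical illustration of this uniform convergence (a minimal sketch of my own, not from the post, assuming $P = \mathrm{Uniform}(0,1)$ and the indicator class $g_t(x) = \mathbf{1}\{x \le t\}$, for which $Pg_t = t$):

```python
import numpy as np

# ULLN illustration (own sketch): for the class g_t(x) = 1{x <= t},
# the supremum over t of |empirical average - expected average|
# shrinks as the sample size n grows.
rng = np.random.default_rng(0)
thresholds = np.linspace(0.05, 0.95, 19)  # finite grid indexing {g_t}

for n in [100, 1_000, 10_000]:
    x = rng.uniform(0.0, 1.0, size=n)     # samples from P = Uniform(0, 1)
    # empirical average of g_t minus its expectation P g_t = t
    deviations = [abs(np.mean(x <= t) - t) for t in thresholds]
    print(f"n = {n:6d}  sup_t |P_n g_t - P g_t| = {max(deviations):.4f}")
```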

Empirical Process in M-Estimation (Chapter 2)

March 11, 2018

I would like to summarize the contents of van de Geer (2000)¹ chapter by chapter; the book is devoted to the theory of empirical processes and convergence rates. In Chapter 2, the definitions of the empirical process and entropy are given. Empirical Measure: In many cases, the estimation error converges to zero as the number of samples grows. This convergence property is formalized via the empirical process. …
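To make the objects concrete (my own sketch in the notation I use when reading Chapter 2, with an example function $g(x) = x^2$ under $P = \mathrm{Uniform}(0,1)$, so $Pg = 1/3$):

```python
import numpy as np

# Sketch of the empirical measure P_n and the empirical process
# nu_n(g) = sqrt(n) * (P_n g - P g) evaluated at one fixed g.
rng = np.random.default_rng(1)

def g(x):
    return x ** 2  # example function; P g = E[X^2] = 1/3 under Uniform(0, 1)

n = 5_000
x = rng.uniform(0.0, 1.0, size=n)
P_n_g = np.mean(g(x))                 # empirical average  P_n g
P_g = 1.0 / 3.0                       # expectation        P g
nu_n = np.sqrt(n) * (P_n_g - P_g)     # empirical process at g
print(f"P_n g = {P_n_g:.4f},  P g = {P_g:.4f},  nu_n(g) = {nu_n:.3f}")
```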

Optimal Convergence Rate for Empirical Risk Minimization

February 23, 2018

These days, papers such as du Plessis et al. (NIPS2014)¹ and Sakai et al. (ICML2017)² refer to the optimality of the convergence rates of their learning methods. The convergence rate $O(n^{-\frac{1}{2}})$ is referred to as the optimal (parametric) convergence rate for classification. In this article, I investigate why this rate is optimal for empirical risk minimization. Most of this article tries to describe the intuition behind Mendelson's technical report³. I tried to explain things as intuitively as possible, sometimes at the cost of rigor. …
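The $O(n^{-\frac{1}{2}})$ scaling is easy to see empirically in a toy problem (my own experiment, not from the cited papers, assuming squared loss over a constant model class, whose empirical risk minimizer is the sample mean):

```python
import numpy as np

# Rough check of the O(n^{-1/2}) rate: the ERM for squared loss over
# constants is the sample mean, and its error tracks n^{-1/2}.
rng = np.random.default_rng(2)

for n in [100, 400, 1_600, 6_400]:
    # average |sample mean - true mean| over 200 repetitions, true mean = 0
    errors = [abs(np.mean(rng.normal(0.0, 1.0, size=n))) for _ in range(200)]
    print(f"n = {n:5d}  mean |error| = {np.mean(errors):.4f}  "
          f"(n^(-1/2) = {n ** -0.5:.4f})")
```

Quadrupling $n$ should roughly halve the error, matching the $n^{-\frac{1}{2}}$ rate.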

© Han Bao 2018