
Differences

This shows the differences between the two versions of the page.

playground:playground [2021/08/22 14:33] Hideaki IIDUKA → playground:playground [2021/08/22 14:34] (current) Hideaki IIDUKA
Line 15:
 | Proposed |$\displaystyle{\mathcal{O}\left( \frac{1}{T} \right)} + C_1 \alpha + C_2 \beta$|$\displaystyle{\mathcal{O}\left( \frac{1}{\sqrt{T}} \right)}$|$\displaystyle{\mathcal{O}\left( \frac{1}{n} \right)} + C_1 \alpha + C_2 \beta$|$\displaystyle{\mathcal{O}\left( \frac{1}{\sqrt{n}} \right)}$|
  
-Note: $C$, $C_1$, and $C_2$ are constants independent of the learning rates $\alpha, \beta$, the number of training examples $T$, and the number of iterations $n$. The convergence rate for convex optimization is measured in terms of regret as $R(T)/T$, and the convergence rate for nonconvex optimization is measured as the expectation of the squared gradient norm, $\min_{k\in [n]} \mathbb{E}[\|\nabla f(\bm{x})\|^2]$. When constant learning rates are used, SGD \cite{sca2020} and Algorithm \ref{algo:1} can be applied not only to convex but also to nonconvex optimization. When diminishing learning rates are used, SGD \cite{sca2020} and Algorithm \ref{algo:1} have the best convergence rate, $\mathcal{O}(1/\sqrt{n})$. (*) Theorem 1 in \cite{reddi2018} shows that a counter-example to the results in \cite{adam} exists.
+Note: $C$, $C_1$, and $C_2$ are constants independent of the learning rates $\alpha, \beta$, the number of training examples $T$, and the number of iterations $n$. The convergence rate for convex optimization is measured in terms of regret as $R(T)/T$, and the convergence rate for nonconvex optimization is measured as the expectation of the squared gradient norm, $\min_{k\in [n]} \mathbb{E}[\|\nabla f(x)\|^2]$. When constant learning rates are used, SGD \cite{sca2020} and Proposed can be applied not only to convex but also to nonconvex optimization. When diminishing learning rates are used, SGD \cite{sca2020} and Proposed have the best convergence rate, $\mathcal{O}(1/\sqrt{n})$. (*) Theorem 1 in \cite{reddi2018} shows that a counter-example to the results in \cite{adam} exists.
 \end{table*}
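
As a rough illustration of the nonconvex measure used in the note above, the following Python sketch runs plain SGD on a toy nonconvex objective and tracks $\min_{k\in [n]} \|\nabla f(x_k)\|^2$ for a constant and for a diminishing learning rate. The objective, step sizes, and iteration count are hypothetical choices for demonstration only; this is not the proposed algorithm from the table.

<code python>
import numpy as np

# Illustrative sketch only: plain SGD on a toy nonconvex objective,
# tracking min_k ||grad f(x_k)||^2, the nonconvex measure from the note.
# The objective, learning rates, and iteration count are hypothetical.

rng = np.random.default_rng(0)

def grad_estimate(x, noise=0.1):
    # Stochastic gradient of f(x) = sum_i x_i^2 * cos(x_i), plus Gaussian noise.
    true_grad = 2 * x * np.cos(x) - x**2 * np.sin(x)
    return true_grad + noise * rng.standard_normal(x.shape)

def sgd_min_grad_norm(n_iters, lr_schedule):
    x = rng.standard_normal(5)
    min_sq_norm = np.inf
    for k in range(1, n_iters + 1):
        g = grad_estimate(x)
        # Single-sample proxy for E[||grad f(x_k)||^2].
        min_sq_norm = min(min_sq_norm, float(g @ g))
        x = x - lr_schedule(k) * g
    return min_sq_norm

n = 10_000
const = sgd_min_grad_norm(n, lambda k: 1e-2)               # constant learning rate
dimin = sgd_min_grad_norm(n, lambda k: 0.1 / np.sqrt(k))   # diminishing rate ~ 1/sqrt(k)
print(f"constant alpha:    min ||grad||^2 = {const:.3e}")
print(f"diminishing alpha: min ||grad||^2 = {dimin:.3e}")
</code>

Under standard assumptions the diminishing schedule drives the tracked quantity toward zero at roughly the $\mathcal{O}(1/\sqrt{n})$ rate in the table, while the constant rate leaves a residual term depending on the step size, mirroring the $C_1 \alpha + C_2 \beta$ terms in the constant-rate columns.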
  
  