[GoogleML] Adam Optimizer

2023. 9. 20. 22:23 · ArtificialIntelligence/2023GoogleMLBootcamp

Gradient Descent with Momentum

์„ธ๋กœ ์„ฑ๋ถ„์„ ํ‰๊ท ๋‚ด๋ฒ„๋ฆฐ๋‹ค๊ณ  ์ƒ๊ฐํ•ด๋„ ๋  ๊ฒƒ ๊ฐ™๋‹ค! :)

๊ณต์„ ๊ตด๋ฆด ๋•Œ, ์†๋„์™€ ๊ฐ€์†๋„๋กœ ์ƒ๊ฐํ•  ์ˆ˜ ์žˆ๋‹ค

Algorithm
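A minimal sketch of the update from the slide, in the course's notation (v_dW is the velocity; W, dW, and the defaults for alpha and beta are assumptions):

import numpy as np  # W and dW are assumed to be NumPy arrays

def momentum_step(W, dW, v_dW, alpha=0.01, beta=0.9):
    # Exponentially weighted average of the gradients: the oscillating
    # (vertical) components cancel out, while the consistent (horizontal)
    # component keeps accumulating.
    v_dW = beta * v_dW + (1 - beta) * dW
    W = W - alpha * v_dW
    return W, v_dW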

(๋ณด๋ผ์ƒ‰ ๋ฒ„์ „์€) ์กฐ๊ธˆ ๋œ ์ง๊ด€์ ์ด๋‹ค

RMSprop

์ด๊ฒƒ์ด ์–ด๋–ป๊ฒŒ ๋™์ž‘ํ•  ์ˆ˜ ์žˆ๋Š”๊ฐ€?

์„ธ๋กœ ๋ฐฉํ–ฅ์œผ๋กœ๋Š” ์ ๊ฒŒ, 

๊ฐ€๋กœ ๋ฐฉํ–ฅ์œผ๋กœ๋Š” ๋งŽ์ด update ๋˜์–ด์•ผ ํ•œ๋‹ค! (๋งŽ์ด ๊ฐ€์•ผ ํ•œ๋‹ค) 

For numerical stability, add an epsilon (ε) to the denominator.
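Putting it together, a minimal sketch of one RMSprop step (s_dW is the running average of the squared gradients; eps is the epsilon just mentioned; the defaults are my assumption):

import numpy as np

def rmsprop_step(W, dW, s_dW, alpha=0.01, beta=0.999, eps=1e-8):
    # Running average of the *squared* gradients.
    s_dW = beta * s_dW + (1 - beta) * dW ** 2
    # Dividing by sqrt(s_dW) damps directions whose gradients are large
    # and oscillating (vertical) and relatively boosts flat directions
    # (horizontal). eps keeps the denominator away from zero.
    W = W - alpha * dW / (np.sqrt(s_dW) + eps)
    return W, s_dW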

Adam Optimization Algorithm

Adam = momentum + RMSprop, with bias correction.
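A minimal sketch of one Adam step, combining the two sketches above (t is the 1-based iteration count; beta1 = 0.9, beta2 = 0.999, eps = 1e-8 are the commonly recommended defaults, while alpha still needs tuning):

import numpy as np

def adam_step(W, dW, v, s, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    v = beta1 * v + (1 - beta1) * dW        # momentum: average of gradients
    s = beta2 * s + (1 - beta2) * dW ** 2   # RMSprop: average of squared gradients
    v_hat = v / (1 - beta1 ** t)            # bias correction for the
    s_hat = s / (1 - beta2 ** t)            # zero-initialized averages
    W = W - alpha * v_hat / (np.sqrt(s_hat) + eps)
    return W, v, s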

Learning Rate Decay

์ดˆ๋ฐ˜์—๋Š” ํฌ๊ฒŒ ํ™• ํ™•

๋‚˜์ค‘์—๋Š” learning rate ํฌ๊ธฐ๋ฅผ ์ค„์ธ๋‹ค

The Problem of Local Optima

In high-dimensional spaces, a point where the gradient is zero is far more likely to be a saddle point than a local optimum.

wow! saddle point
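A concrete example (my own, not from the slide): f(x, y) = x^2 - y^2 has zero gradient at the origin, yet the origin is a saddle, curving up along x and down along y:

import numpy as np

# Hessian of f(x, y) = x**2 - y**2 at the origin.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
print(np.linalg.eigvals(H))  # [ 2. -2.] -> eigenvalues of both signs: a saddle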

์˜ค๋žซ๋™์•ˆ ๊ธฐ์šธ๊ธฐ๊ฐ€ 0์— ๊ฐ€๊นŒ์šด, plateau๋„ ํ•™์Šต์„ ๋ฐฉํ•ดํ•˜๋Š” ์š”์ธ์ด ๋œ๋‹ค