SohyeonKim(347)
-
[GoogleML] Logistic Regression as a Neural Network
W as the only parameter, an n_x-dimensional vector; b is a real number. Loss func - error on a single training example. Cost func - cost of your params (meaning the average error of the parameters W and b over the entire training set). Gradient Descent - slope of the function. Derivatives - for a straight line (a linear function), regardless of the value of a, the function's increase is 3 times the variable's increase, so the derivative is constant at 3. Computation Graph Derivatives with a Computation Graph Logistic Regression Gradient Descent Gradient Descent on m Exam..
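Written out, the distinction this excerpt draws (the standard course formulation, reproduced here rather than quoted from the post): the loss scores a single training example, and the cost averages the loss over all m examples for parameters w and b.

$$\hat{y} = \sigma(w^{\top}x + b), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}$$
$$\mathcal{L}(\hat{y}, y) = -\big(y \log \hat{y} + (1 - y)\log(1 - \hat{y})\big)$$
$$J(w, b) = \frac{1}{m}\sum_{i=1}^{m}\mathcal{L}\big(\hat{y}^{(i)}, y^{(i)}\big)$$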
2023.09.05 -
[GoogleML] Introduction to Deep Learning
Neural Networks and Deep Learning 1. Introduction to Deep Learning Supervised Learning with Neural Networks Why is Deep Learning taking off? + The English quizzes are a lot trickier than I expected . . .
2023.09.05 -
[Paper reading] Swin Transformer
Swin Transformer Hierarchical Vision Transformer using Shifted Windows Abstract This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high..
2023.09.04 -
[Paper reading] Transformers for image recognition, ViT
Transformers for image recognition Model overview. We split an image into fixed-size patches, linearly embed each of them, add position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. In order to perform classification, we use the standard approach of adding an extra learnable “classification token” to the sequence. Abstract While the Transformer archite..
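A minimal sketch of the pipeline the overview describes (split the image into fixed-size patches, linearly embed each one, prepend a classification token, add position embeddings, and hand the sequence to a Transformer encoder). The sizes (patch size 8, embedding dimension 64) and the NumPy implementation are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's exact configuration)
H = W = 32          # image height / width
C = 3               # channels
P = 8               # patch size -> (H/P) * (W/P) = 16 patches
D = 64              # embedding dimension

image = rng.standard_normal((H, W, C))

# 1. Split the image into non-overlapping P x P patches and flatten each one
patches = image.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(-1, P * P * C)             # (num_patches, P*P*C)

# 2. Linearly embed every flattened patch
W_embed = rng.standard_normal((P * P * C, D)) * 0.02
tokens = patches @ W_embed                           # (num_patches, D)

# 3. Prepend a learnable classification token
cls_token = rng.standard_normal((1, D)) * 0.02
tokens = np.concatenate([cls_token, tokens], axis=0) # (num_patches + 1, D)

# 4. Add (learnable) position embeddings; the result feeds a Transformer encoder
pos_embed = rng.standard_normal(tokens.shape) * 0.02
encoder_input = tokens + pos_embed

print(encoder_input.shape)  # (17, 64) for these sizes
```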
2023.08.28 -
[Apple Developer Academy] Attending the Apple Academy Open Day
2023. 08. 15 Tuesday Apple developer academy @POSTECH Open Day I actually applied thinking the academy Open Day was a gathering of 1st-cohort learners at POSTECH, but it turned out to be a day where people hoping to apply for the 3rd cohort attend sessions and experience the academy . . I hadn't prepared much, but through the Q&A with the 3rd-cohort applicants I still got to look back on my academy days for a bit, and old memories came flooding back. I also got to meet the mentors again after a long time! :) 🤍 I wanted to stay and hang out a little longer, but I had plans the next day, so I headed straight back. 🥲 Next time I'd like to talk for longer and do some proper sightseeing around Pohang before leaving..
2023.08.26 -
Attending the UMC 4th DemoDay
UMC Demoday 2023. 08. 24 Thursday 🥳 # Mullae, 2F https://github.com/DeveloperAcademy-POSTECH/MC2-Team9-MUSE (GitHub - DeveloperAcademy-POSTECH/MC2-Team9-MUSE: Apple Developer Academy 1st-cohort MC2 project) When planning the Muse project, because of the colorful album covers of the tracks, picking a background or a main color ..
2023.08.26 -
[GoogleML] Accepted into the Google Machine Learning Bootcamp 2023 2023.08.26
-
[Paper reading] Attention is all you need, Transformer
Transformer Abstract The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and conv..
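For reference, the core operation the abstract leans on is scaled dot-product attention; this is the standard formula from the paper, reproduced here rather than excerpted from the post:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$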
2023.08.25 -
[Paper reading] DenseNet
DenseNet Abstract Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forw..
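The connectivity pattern the abstract describes has a compact form; in the paper's notation, layer ℓ applies its transformation H_ℓ to the concatenation of the feature maps of all preceding layers (standard DenseNet equation, added here for reference):

$$x_{\ell} = H_{\ell}\big([x_0, x_1, \ldots, x_{\ell-1}]\big)$$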
2023.08.22 -
[Paper reading] GoogLeNet
Inception Module The fundamental way of solving both issues would be by ultimately moving from fully connected to sparsely connected architectures, even inside the convolutions. Adding an alternative parallel pooling path. As these “Inception modules” are stacked on top of each other, their output correlation statistics are bound to vary; as features of higher abstraction are captured by higher l..
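A minimal PyTorch-style sketch of an Inception-style module with parallel 1x1 / 3x3 / 5x5 convolution branches plus the parallel pooling path the excerpt mentions, concatenated channel-wise. The channel sizes and the omission of the 1x1 reduction layers before the larger convolutions are simplifying assumptions, not the GoogLeNet configuration.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions plus a pooling path,
    concatenated along the channel dimension (simplified sketch)."""
    def __init__(self, in_ch, c1, c3, c5, cp):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, c1, kernel_size=1)
        self.branch3 = nn.Conv2d(in_ch, c3, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, c5, kernel_size=5, padding=2)
        # the alternative parallel pooling path, followed by a 1x1 projection
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, cp, kernel_size=1),
        )

    def forward(self, x):
        outs = [self.branch1(x), self.branch3(x),
                self.branch5(x), self.branch_pool(x)]
        return torch.cat(outs, dim=1)  # stack branch outputs channel-wise

x = torch.randn(1, 64, 28, 28)
y = InceptionModule(64, 32, 64, 16, 16)(x)
print(y.shape)  # torch.Size([1, 128, 28, 28])
```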
2023.08.18