KimAnt 🥦

ArtificialIntelligence/2023GoogleMLBootcamp (33)

  • [GoogleML] Python and Vectorization

    Python and Vectorization More Vectorization Examples Vectorizing Logistic Regression X has shape (nx, m): there are m samples, each an nx-dim column vector! :) Since each data point x is stacked vertically as a column vector, the vertical axis runs over the components (dim) and the horizontal axis over the m samples. Vectorizing Logistic Regression's Gradient Output Even after vectorizing, the loop over gradient-descent iterations is still needed (see the NumPy sketch after this list). Broadcasting in Python Can you do this without explicit for-loop? A Note on Python/Numpy Vectors Qu..

    2023.09.07
  • [GoogleML] Logistic Regression as a Neural Network

    W is the only parameter, an nx-dim vector; b is a real number. loss func - the error on a single training example cost func - cost of your params (the average error of parameters W and b over the whole dataset) Gradient Descent slope of the function Derivatives For a straight line (a linear function), the derivative is constant: regardless of the value of a, the function's increase is 3 times the variable's increase, so the derivative is 3 (made concrete in the sketch after this list). Computation Graph Derivatives with a Computation Graph Logistic Regression Gradient Descent Gradient Descent on m Exam..

    2023.09.05
  • [GoogleML] Introduction to Deep Learning

    Neural Networks and Deep Learning 1. Introduction to Deep Learning Supervised Learning with Neural Networks Why is Deep Learning taking off? + The English quizzes are a lot trickier than expected . . .

    2023.09.05
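
The vectorization notes in the first post map directly onto a few lines of NumPy. Below is a minimal sketch, assuming made-up data and a hypothetical learning rate alpha; it follows the shapes from the notes (X is (nx, m) with samples stacked as column vectors, W is an nx-dim vector, b a real number) and keeps the one loop that vectorization cannot remove, the loop over gradient-descent iterations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up illustrative data: m = 4 samples, each an nx = 3 dim column
# vector, stacked so X has shape (nx, m) as in the notes.
nx, m = 3, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(nx, m))
Y = rng.integers(0, 2, size=(1, m))

W = np.zeros((nx, 1))   # the only weight parameter, an nx-dim vector
b = 0.0                 # a real number
alpha = 0.1             # learning rate (assumed value)

# Even with vectorization, the loop over gradient-descent steps remains.
for _ in range(1000):
    Z = W.T @ X + b          # (1, m); b is broadcast across all m samples
    A = sigmoid(Z)           # predictions for all m samples at once
    dZ = A - Y               # (1, m)
    dW = (X @ dZ.T) / m      # (nx, 1): gradient averaged over the dataset
    db = dZ.sum() / m
    W -= alpha * dW
    b -= alpha * db
```

Broadcasting answers the "without explicit for-loop" question: the scalar b is expanded across all m columns of W.T @ X automatically, so no per-sample loop has to be written out.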
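For the second post, here is a small sketch of the loss-versus-cost distinction and of the constant-derivative example; the function names loss and cost are hypothetical, and the probed values of a are illustrative.

```python
import numpy as np

def loss(a, y):
    # Cross-entropy error on a single training example
    # (a = predicted probability, y = label).
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

def cost(A, Y):
    # Cost of the parameters: the loss averaged over the whole dataset,
    # i.e. the mean error of W and b across all m samples.
    return loss(A, Y).mean()

# Derivative example from the notes: for the line f(a) = 3a, the
# function's increase is always 3x the variable's increase, so the
# finite-difference slope is 3 regardless of a.
f = lambda a: 3 * a
da = 1e-6
for a in (0.5, 2.0, 100.0):
    assert abs((f(a + da) - f(a)) / da - 3) < 1e-6
```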