KimAnt 🥦



VIT(2)

  • [Paper reading] Swin Transformer

    Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. Abstract: This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high.. (see the shifted-window sketch after this list)

    2023.09.04
  • [Paper reading] Transformers for image recognition, ViT

    Transformers for image recognition. Model overview: we split an image into fixed-size patches, linearly embed each of them, add position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. In order to perform classification, we use the standard approach of adding an extra learnable “classification token” to the sequence. Abstract: While the Transformer archite.. (see the patch-embedding sketch after this list)

    2023.08.28
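
For the Swin Transformer post above, a minimal PyTorch sketch of the shifted-window idea its title names may help: feature maps are split into non-overlapping windows (attention runs inside each), and alternating blocks cyclically shift the map so windows straddle the previous boundaries. This is my own illustration under assumed toy sizes, not the authors' code; `window_partition` and `window_reverse` are hypothetical helper names.

```python
# Sketch (not the authors' code): Swin-style window partition + cyclic shift.
import torch

def window_partition(x: torch.Tensor, ws: int) -> torch.Tensor:
    """Split a (B, H, W, C) map into (num_windows*B, ws, ws, C) windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws, ws, C)

def window_reverse(windows: torch.Tensor, ws: int, H: int, W: int) -> torch.Tensor:
    """Inverse of window_partition: back to (B, H, W, C)."""
    B = windows.shape[0] // ((H // ws) * (W // ws))
    x = windows.view(B, H // ws, W // ws, ws, ws, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)

# Toy example: 8x8 map, 4x4 windows, shift of ws // 2 = 2.
B, H, W, C, ws = 1, 8, 8, 16, 4
x = torch.randn(B, H, W, C)

# Regular windows (W-MSA): self-attention would run inside each window.
windows = window_partition(x, ws)                      # (4, 4, 4, 16)

# Shifted windows (SW-MSA): roll the map so the next layer's windows
# cross the previous layer's window boundaries.
shifted = torch.roll(x, shifts=(-ws // 2, -ws // 2), dims=(1, 2))
shifted_windows = window_partition(shifted, ws)

# After attention, the shift is undone with the opposite roll.
restored = torch.roll(window_reverse(shifted_windows, ws, H, W),
                      shifts=(ws // 2, ws // 2), dims=(1, 2))
assert torch.allclose(restored, x)
```

The cyclic roll is what lets the same window-attention code serve both layouts; the real model additionally masks attention across rolled-in edges, which this sketch omits.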
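The ViT excerpt above walks through the input pipeline step by step (patchify, linear embedding, [class] token, position embeddings, standard encoder), so here is a minimal PyTorch sketch of those steps. `ViTEmbed` and every size here are illustrative stand-ins, not the paper's code, and `nn.TransformerEncoder` is only a rough proxy for its encoder configuration.

```python
# Sketch (hypothetical names): the ViT input pipeline described in the excerpt.
import torch
import torch.nn as nn

class ViTEmbed(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_ch=3, dim=768):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # A stride-p conv is equivalent to cutting p x p patches and
        # applying one shared linear projection to each.
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))

    def forward(self, x):                              # x: (B, 3, 224, 224)
        x = self.proj(x).flatten(2).transpose(1, 2)    # (B, 196, 768)
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        x = torch.cat([cls, x], dim=1)                 # prepend [class] token
        return x + self.pos_embed                      # learned 1-D position embeddings

embed = ViTEmbed()
tokens = embed(torch.randn(2, 3, 224, 224))           # (2, 197, 768)

# Stand-in for the paper's encoder (ViT uses pre-norm and GELU; this default
# layer differs in those details but shows where the sequence goes).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=2)
out = encoder(tokens)
cls_state = out[:, 0]    # final [class] token state feeds the classification head
```

Classification reads only the final state of the prepended [class] token, which is exactly the "standard approach" the excerpt mentions.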