KimAnt 🥦

  • SohyeonKim (369)
    • ComputerScience (112)
      • ProcessingInMemory (8)
      • FaultTolerance (6)
      • OperatingSystem (21)
      • FreeBSD (23)
      • DesignPattern (1)
      • ComputerNetwork (12)
      • FullStackProgramming (17)
      • DockerKubernetes (16)
      • Database (5)
    • ArtificialIntelligence (72)
      • ECCV2024 (11)
      • WRTNCampusLeader (4)
      • PaperReading (14)
      • 2023GoogleMLBootcamp (33)
      • DeepLearning (10)
    • Programming (27)
      • Swift (17)
      • JAVA (3)
      • CodingTest (2)
      • Algorithms (5)
    • Experiences (37)
      • KIST Europe Internship (15)
      • Activities (8)
      • Competition (6)
      • International (7)
      • Startup (1)
    • iOS (41)
      • AppProject (10)
      • AppleDeveloperAcademy@POSTE.. (9)
      • CoreMLCreateML (8)
      • MC3Puhaha (4)
      • NC2Textinit (10)
      • MACSpaceOver (0)
    • GitHub (5)
    • UniversityMakeUsChallenge (23)
      • UMCActivities (3)
      • UMCiOS (12)
      • UMCServer (7)
      • BonheurAppProject (1)
    • Science (33)
      • 2022GWNRSummer (13)
      • 2023GWNRWinter (8)
      • 2024GWNRWinter (2)
      • Biology (6)
    • Etc (16)
      • StudyPlanner (13)

Tags

pim, gravitational waves, process, OS, swift, app, AI, deep learning, Apple, Container, Programming, ios, biohybrid, umc, CPU, Google, kernel, docker, server, numerical relativity

LLM (1)

  • Transformer Tokenizer, Embedding and LLaMA

    Tokenization and Embedding: The Science Behind Large Language Models. Every input we provide to GPT is nothing but a token (a numerical id) or a sequence of tokens. GPT doesn't understand language the way humans do; it just processes sequences of numerical ids, which we call tokens. But how does it find the associations among words (tokens) and produce human-like responses? Here comes the c..

    2024.07.06
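
    The preview above frames model input as numerical ids that are then mapped into vectors. As a minimal sketch of that pipeline (not code from the post itself), the snippet below uses the tiktoken package to turn text into token ids, then looks each id up in a randomly initialized embedding matrix standing in for a trained one:

```python
import numpy as np
import tiktoken  # pip install tiktoken

# Tokenization: text -> sequence of numerical ids (tokens).
enc = tiktoken.get_encoding("cl100k_base")    # BPE vocabulary used by GPT-4-family models
text = "GPT processes tokens, not words."
token_ids = enc.encode(text)
print(token_ids)                              # the ids depend on the vocabulary
print([enc.decode([t]) for t in token_ids])   # the text piece each id stands for

# Embedding: each id selects one row of a (vocab_size x d_model) matrix.
# A trained model learns this matrix; here it is random, purely for illustration.
d_model = 8
rng = np.random.default_rng(0)
embedding = rng.normal(size=(enc.n_vocab, d_model)).astype(np.float32)
vectors = embedding[token_ids]                # shape: (len(token_ids), d_model)
print(vectors.shape)
```
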
GitHub · LinkedIn