ComputerScience (120)
[UPMEM PIM] UPMEM Checksum Example Code Review
2025. 10. 11. Saturday
UPMEM Official Example Review
2025.10.12
[UPMEM PIM] UPMEM-GEMM Code Review
2025. 10. 10. Friday
Preparing for next week's lab meeting: Code Review · Implementation

📌 References
- UPMEM SDK: https://sdk.upmem.com/stable/030_DPURuntimeService_Tasklets.html
- UPMEM Naive-GEMM: https://github.com/hhessammheidary/UPMEM-GEMM

📌 TODO LIST
- UPMEM Checksum example Code Review
- PIM Embedding Lookup (Python Wrapper)
- MHA Implementation
2025.10.10
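The checksum item on the TODO list refers to the official SDK sample, which sums the bytes of an MRAM buffer split across tasklets. A minimal CPU-side reference of that reduction, as a sketch only (the strided partitioning, the 16-tasklet default, and the 32-bit wrap here are assumptions, not necessarily the sample's exact scheme):

```python
def checksum(buf: bytes, n_tasklets: int = 16) -> int:
    """CPU reference for a tasklet-style checksum: each 'tasklet'
    sums its share of the buffer, then partial sums are reduced."""
    # Tasklet t handles bytes t, t + n_tasklets, t + 2*n_tasklets, ...
    partials = [sum(buf[t::n_tasklets]) for t in range(n_tasklets)]
    # Reduce the per-tasklet partial sums, wrapping to 32 bits.
    return sum(partials) & 0xFFFFFFFF

data = bytes(range(256))
print(checksum(data))  # equals sum(data) & 0xFFFFFFFF
```

Comparing this host-side result against the DPU output is a quick way to validate the example during the code review.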
Cloud Project SPEC Presentation
2025. 10. 01. Wednesday
2025.10.07
Graduation Thesis Preliminary Research Report
2025. 09. 30. Tuesday
2025.10.07
[PIM] UPMEM Simulator Example
UPMEM Hello World! Example
https://sdk.upmem.com/stable/02_HelloWorld.html

0. Installing the UPMEM SDK
https://sdk.upmem.com
UPMEM DPU SDK: the Software Development Kit for programming and using the DPU provided by the UPMEM Acceleration platform. tar -..
2025.09.21
[Paper Review] PAPI: Exploiting Dynamic Parallelism in Large Language Model Decoding with a Processing-In-Memory-Enabled Computing System
PAPI: Exploiting Dynamic Parallelism in Large Language Model Decoding with a Processing-In-Memory-Enabled Computing System
https://arxiv.org/abs/2502.15470
Large language models (LLMs) are widely used for natural language understanding and text generation. An LLM model relies ..
2025.09.16
[Paper Review] Pimba: A Processing-in-Memory Acceleration for Post-Transformer Large Language Model Serving
https://github.com/casys-kaist/pimba
Official code repository for "Pimba: A Processing-in-Memory Acceleration for Post-Transformer Large Language Model Serving [MICRO'25]" (casys-kaist/pimba)
2025.09.15
[Paper Review] Accelerating LLMs using an Efficient GEMM Library and Target-Aware Optimizations on Real-World PIM Devices
Accelerating LLMs using an Efficient GEMM Library and Target-Aware Optimizations on Real-World PIM Devices
* TVM = deep learning compiler framework. Apache TVM is a machine learning compilation framework, following the principle of Python-first development and universal deployment. It takes in pre-trained machine learning models, compiles and generates deployable modules that can be embedded and ..
2025.09.13
[TFLite] Capstone Design Final Presentation
2025. 06. 20. Friday
2025.06.20
[TFLite] Emulation Results
2025. 06. 10. Tuesday

📌 TODO LIST
- Reduce FLIP PROB to around 0.00001
- Repeat 100 runs at the same rate and collect statistics
- Graph whether each value is preserved or changed
- Implement varying the random seed
2025.06.10
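The emulation TODO (flip probability around 1e-5, 100 repeated runs, varying seeds, counting preserved vs. changed values) can be prototyped before touching the emulator itself. A minimal sketch assuming float32 tensors and NumPy; `flip_bits` and the independent per-bit flip model are hypothetical stand-ins, not the project's actual fault injector:

```python
import numpy as np

def flip_bits(values: np.ndarray, flip_prob: float, seed: int) -> np.ndarray:
    """Flip each of the 32 bits of every float32 value independently
    with probability flip_prob, using an explicit seed for repeatability."""
    rng = np.random.default_rng(seed)
    bits = values.astype(np.float32).view(np.uint32)
    mask = np.zeros_like(bits)
    for b in range(32):
        # Bit b flips wherever the draw falls below flip_prob.
        mask |= (rng.random(bits.shape) < flip_prob).astype(np.uint32) << b
    return (bits ^ mask).view(np.float32)

# 100 runs at the same rate, each with a different seed,
# counting how many values survive unchanged.
vals = np.ones(1000, dtype=np.float32)
preserved = [
    int(np.sum(flip_bits(vals, flip_prob=1e-5, seed=s) == vals))
    for s in range(100)
]
print(f"mean preserved: {np.mean(preserved):.1f} / {vals.size}")
```

With 32 bits per value, the expected preserved fraction is (1 - 1e-5)^32, so nearly all 1000 values should survive each run; plotting `preserved` per seed gives the preserved-vs-changed graph from the TODO list.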