ArtificialIntelligence/PaperReading(14)
[Paper reading] Dataset Condensation with Distribution Matching
Dataset Condensation with Distribution Matching Abstract Computational cost of training state-of-the-art deep models in many learning problems is rapidly increasing due to more sophisticated models and larger datasets. A recent promising direction for reducing training cost is dataset condensation that aims to replace the original large training set with a significantly smaller learned synthe..
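As a note on the core objective: distribution matching learns the synthetic set by matching the mean feature embeddings of real and synthetic batches under randomly sampled embedding networks. A minimal PyTorch sketch of that idea (the tiny embedding network, sizes, and learning rate are illustrative assumptions, not the paper's setup):

```python
import torch
import torch.nn as nn

def random_embedder():
    # Freshly initialized feature extractor psi, re-sampled every step;
    # a stand-in for the randomly sampled networks used by the method.
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256),
                         nn.ReLU(), nn.Linear(256, 128))

real = torch.randn(64, 3, 32, 32)                     # stand-in real batch
syn = torch.randn(10, 3, 32, 32, requires_grad=True)  # learnable synthetic set
opt = torch.optim.SGD([syn], lr=1.0)

for step in range(100):
    psi = random_embedder()
    # Match the mean embeddings of real and synthetic data (empirical MMD).
    loss = (psi(real).mean(0) - psi(syn).mean(0)).pow(2).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```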
2023.10.01
[Paper reading] NeRF, Representing Scenes as Neural Radiance Fields for View Synthesis
Abstract Keywords: scene representation, view synthesis, image-based rendering, volume rendering, 3D deep learning We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional)..
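For reference, the compositing step can be summarized in a few lines: given densities and colors predicted by the MLP at samples along a ray, alpha-composite them with the discrete volume-rendering quadrature. A minimal sketch (sample count and spacing are illustrative assumptions):

```python
import torch

def render_ray(sigma, rgb, deltas):
    """sigma: (N,) densities; rgb: (N, 3) colors; deltas: (N,) sample spacings."""
    alpha = 1.0 - torch.exp(-sigma * deltas)   # per-sample opacity
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(dim=0)  # composited ray color

color = render_ray(torch.rand(64), torch.rand(64, 3), torch.full((64,), 0.03))
```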
2023.09.24
[Paper reading] Denoising Diffusion Probabilistic Models
Abstract We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin..
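The training step reduces to a simple noise-regression objective: sample a timestep, corrupt x_0 with the closed-form forward process q(x_t | x_0), and regress the added noise. A minimal sketch (the linear beta schedule follows the paper; the one-layer eps_model, which ignores t for brevity, is a placeholder assumption, not the paper's U-Net):

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear schedule from the paper
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product \bar{alpha}_t

# Placeholder noise predictor; the paper uses a time-conditioned U-Net.
eps_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 3 * 32 * 32))

x0 = torch.randn(8, 3, 32, 32)                 # stand-in clean batch
t = torch.randint(0, T, (8,))                  # random timestep per example
eps = torch.randn_like(x0)
a = alpha_bar[t].view(-1, 1, 1, 1)
x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps   # sample from q(x_t | x_0)
loss = (eps - eps_model(x_t).view_as(x0)).pow(2).mean()  # simplified objective
```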
2023.09.19
[Paper reading] Implicit Neural Representations
Implicit Neural Representations with Periodic Activation Functions Abstract Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals w..
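The core idea is an MLP with sine activations plus a careful initialization. A minimal sketch, assuming omega_0 = 30 as in the paper; layer widths are illustrative:

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, fan_in, fan_out, omega0=30.0, first=False):
        super().__init__()
        self.omega0 = omega0
        self.linear = nn.Linear(fan_in, fan_out)
        # Initialization scheme from the paper: wider range for the first
        # layer, frequency-scaled uniform range for the rest.
        bound = 1.0 / fan_in if first else math.sqrt(6.0 / fan_in) / omega0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))

# Map 2D coordinates to a scalar signal value (e.g., image intensity).
siren = nn.Sequential(SineLayer(2, 256, first=True), SineLayer(256, 256),
                      nn.Linear(256, 1))
out = siren(torch.rand(1024, 2) * 2 - 1)  # coordinates in [-1, 1]
```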
2023.09.18
[Paper reading] Dataset Condensation Keynote
Keynote final version
2023.09.11
[Paper reading] Dataset Condensation reading
Dataset Condensation with Gradient Matching (ICLR 2021) Abstract As the state-of-the-art machine learning methods in many fields rely on larger datasets, storing datasets and training models on them become significantly more expensive. This paper proposes a training set synthesis technique for data-efficient learning, called Dataset Condensation, that learns to condense large dataset into a smal..
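The condensation objective here makes the gradient of the training loss on the synthetic set mimic the gradient on real data for the same network. A minimal sketch (the linear model and the plain squared gradient distance are simplifying assumptions; the paper uses a layer-wise cosine-based distance and also updates the network in an inner loop):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
real_x, real_y = torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,))
syn_x = torch.randn(10, 3, 32, 32, requires_grad=True)
syn_y = torch.arange(10)                 # one synthetic image per class
opt = torch.optim.SGD([syn_x], lr=0.1)

for step in range(50):
    # Gradient of the training loss on real data (treated as a target).
    g_real = torch.autograd.grad(F.cross_entropy(net(real_x), real_y),
                                 net.parameters())
    # Gradient on the synthetic set; keep the graph so we can backprop
    # through it into the synthetic images themselves.
    g_syn = torch.autograd.grad(F.cross_entropy(net(syn_x), syn_y),
                                net.parameters(), create_graph=True)
    loss = sum(((gr - gs) ** 2).sum() for gr, gs in zip(g_real, g_syn))
    opt.zero_grad(); loss.backward(); opt.step()
```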
2023.09.11
[Paper reading] Dataset Condensation Summary
Keynote draft
2023.09.10
[Paper reading] Swin Transformer
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows Abstract This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Transformer from language to vision arise from differences between the two domains, such as large variations in the scale of visual entities and the high..
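The mechanism that keeps attention cost linear in image size is (shifted) window partitioning: self-attention runs only inside fixed-size local windows, with a cyclic shift between consecutive blocks. A minimal sketch of just the partitioning step (feature-map sizes are illustrative; the attention and attention masks are omitted):

```python
import torch

def window_partition(x, ws):
    """x: (B, H, W, C) feature map -> (num_windows * B, ws * ws, C) tokens."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

x = torch.randn(2, 56, 56, 96)
windows = window_partition(x, ws=7)        # regular 7x7 windows

# Shifted windows: cyclically roll the map by half a window, then partition.
shifted = torch.roll(x, shifts=(-3, -3), dims=(1, 2))
shifted_windows = window_partition(shifted, ws=7)
```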
2023.09.04
[Paper reading] Transformers for image recognition, ViT
Transformers for image recognition Model overview. We split an image into fixed-size patches, linearly embed each of them, add position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. In order to perform classification, we use the standard approach of adding an extra learnable “classification token” to the sequence. Abstract While the Transformer archite..
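The input pipeline described in the overview fits in a few lines: patchify with a strided convolution acting as the linear embedding, prepend a learnable class token, and add position embeddings. A minimal sketch (patch size 16 and width 768 follow ViT-Base; the two-layer encoder is a placeholder, not the paper's model):

```python
import torch
import torch.nn as nn

B, C, H, W, P, D = 2, 3, 224, 224, 16, 768
proj = nn.Conv2d(C, D, kernel_size=P, stride=P)  # patchify + linear embed
cls_token = nn.Parameter(torch.zeros(1, 1, D))   # learnable [class] token
num_patches = (H // P) * (W // P)
pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, D))

img = torch.randn(B, C, H, W)
tokens = proj(img).flatten(2).transpose(1, 2)    # (B, 196, D) patch tokens
tokens = torch.cat([cls_token.expand(B, -1, -1), tokens], dim=1)
tokens = tokens + pos_embed                      # ready for the encoder

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(D, nhead=12, batch_first=True), num_layers=2)
out = encoder(tokens)    # a classification head would read out[:, 0]
```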
2023.08.28
[Paper reading] Attention is all you need, Transformer
Transformer Abstract The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and conv..
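At the core is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal sketch with illustrative shapes (masking and the multi-head projections are omitted):

```python
import math
import torch

def attention(q, k, v):
    """q, k: (..., n, d_k); v: (..., n, d_v)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # QK^T / sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 8, 10, 64)  # (batch, heads, tokens, head_dim)
out = attention(q, k, v)               # (2, 8, 10, 64)
```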
2023.08.25