Learning

Bayesian inference key terms
To understand Bayesian inference, it's important to understand the following key terms. Prior probability: this is the probability of an event or parameter before we have observed any data or evidence; it reflects our initial beliefs, knowledge, or assumptions about the event or parameter. Likelihood: this is the probability of the observed data or evidence given the event or parameter..

BUILD THE NEURAL NETWORK
Neural networks consist of layers/modules that perform operations on data. The torch.nn namespace provides all the building blocks you need to build your own neural network. Every module in PyTorch subclasses nn.Module. A neural network is itself a module that consists of other modules (layers). This nested structure makes it easy to build and manage complex architectures. In the foll..

TRANSFORMS
Data does not always come in the final processed form required for training machine learning algorithms. We use transforms to perform some manipulation of the data and make it suitable for training. All TorchVision datasets have two parameters, transform to modify the features and target_transform to modify the labels, that accept callables containing the transformation logic. The torch..

DATASETS & DATALOADERS
Code for processing data samples can get messy and hard to maintain; ideally we want our dataset code to be decoupled from our model training code for better readability and modularity. PyTorch provides two data primitives, torch.utils.data.DataLoader and torch.utils.data.Dataset, that allow you to use pre-loaded datasets as well as your own data. Dataset stores the samples and their corresp..

Tensors
Tensors are a specialized data structure very similar to arrays and matrices. In PyTorch, we use tensors to encode the inputs and outputs of a model, as well as the model’s parameters.
("encode" means to represent data in a standardized way using tensors; encoding data as tensors allows us to represent a wide variety of data in a standardized format that can be efficiently processed by ..

AlexNet
AlexNet is a deep learning neural network architecture introduced in 2012 by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. It was the first large-scale convolutional neural network (CNN) to win the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), and it marked an important milestone in the development of deep learning. The main components of AlexNet are as follows. 1. Convolutional layers: AlexNet has five convolutional layers that detect local patterns and features in the input image. The main goal of these layers is to produce new matrices called feature maps ..

Convolutional Neural Networks
I plan to organize these machine learning posts in the order of the CS231n lectures. Since midterms have just ended, I am starting somewhat awkwardly with CNNs, but I will organize the full lecture list later. Knowledge of perceptrons, backpropagation, loss and activation functions, optimization, and regularization is assumed. These are personal study notes, so I recommend reading other posts where possible :) Fully connected neural networks (the representative neural networks): a fully connected neural network consists of a series of fully connected layers, each connecting every neuron in one layer to every neuron in the next. Their main advantage is that they are "structure agnostic"; that is, the input..
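The prior and likelihood defined in the Bayesian inference entry combine via Bayes' theorem. A minimal numeric sketch in plain Python; the probabilities for this hypothetical diagnostic test are made up for illustration:

```python
# Hypothetical diagnostic test for a condition with 1% prevalence.
prior = 0.01             # P(condition): belief before any evidence
likelihood = 0.95        # P(positive test | condition)
false_positive = 0.05    # P(positive test | no condition)

# Total probability of the evidence (a positive test).
evidence = likelihood * prior + false_positive * (1 - prior)

# Bayes' theorem: posterior = likelihood * prior / evidence
posterior = likelihood * prior / evidence
print(round(posterior, 3))  # → 0.161
```

Even with a 95% accurate test, the low prior keeps the posterior around 16%, which is exactly the prior-versus-likelihood interplay the key terms describe.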
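The nested nn.Module pattern described in the BUILD THE NEURAL NETWORK entry can be sketched as follows; the layer sizes (28×28 inputs, 10 classes) are illustrative assumptions, not fixed by the entry:

```python
import torch
from torch import nn

# Every network subclasses nn.Module; the layers it holds are
# themselves modules, giving the nested structure described above.
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
        )

    def forward(self, x):
        x = self.flatten(x)              # (N, 28, 28) -> (N, 784)
        return self.linear_relu_stack(x)

model = NeuralNetwork()
logits = model(torch.rand(1, 28, 28))
print(logits.shape)  # torch.Size([1, 10])
```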
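The target_transform idea from the TRANSFORMS entry can be illustrated with a small callable that one-hot encodes an integer label; the 10-class size is an assumption for the sketch:

```python
import torch

# Sketch of a target_transform: a callable that turns an integer
# class label into a one-hot tensor.
def one_hot(y, num_classes=10):
    return torch.zeros(num_classes).scatter_(0, torch.tensor(y), value=1)

print(one_hot(3))
# tensor([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])
```

Any callable with this shape can be passed as the target_transform parameter of a TorchVision dataset, just as a tensor-producing callable can be passed as transform for the features.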
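The separation described in the DATASETS & DATALOADERS entry might look like this; ToyDataset and its random contents are invented for illustration:

```python
import torch
from torch.utils.data import Dataset, DataLoader

# Dataset stores the samples and their labels; DataLoader wraps it
# in an iterable that yields shuffled mini-batches.
class ToyDataset(Dataset):
    def __init__(self, n=100):
        self.features = torch.randn(n, 4)
        self.labels = torch.randint(0, 2, (n,))

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)
xb, yb = next(iter(loader))
print(xb.shape, yb.shape)  # torch.Size([16, 4]) torch.Size([16])
```

The training loop only ever sees the DataLoader, so the dataset code stays decoupled from the model code as the entry suggests.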
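A short sketch of tensors encoding inputs and parameters, as described in the Tensors entry; the values here are arbitrary:

```python
import torch

# Tensors can be created directly from Python data, and they carry
# the shape/dtype metadata used throughout a model.
data = [[1, 2], [3, 4]]
x = torch.tensor(data)
print(x.shape, x.dtype)  # torch.Size([2, 2]) torch.int64

# Model parameters are tensors too, e.g. a random weight matrix:
w = torch.rand(2, 2)
print((x.float() @ w).shape)  # torch.Size([2, 2])
```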
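The five convolutional layers mentioned in the AlexNet entry can be sketched with torch.nn. Note the channel sizes below follow the torchvision implementation rather than the original paper, and the classifier layers are omitted; each convolution produces a new feature map:

```python
import torch
from torch import nn

# AlexNet's five convolutional layers (torchvision channel sizes).
features = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
)

# A 224x224 RGB image is reduced to a stack of 256 6x6 feature maps.
out = features(torch.rand(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 256, 6, 6])
```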