[ํŒŒ์ดํ† ์น˜] ํŒŒ์ดํ† ์น˜๋กœ CNN ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•ด๋ณด์ž! (๊ธฐ์ดˆํŽธ + DataLoader ์‚ฌ์šฉ๋ฒ•)

Posted by Euisuk's Dev Log on November 26, 2021

[ํŒŒ์ดํ† ์น˜] ํŒŒ์ดํ† ์น˜๋กœ CNN ๋ชจ๋ธ์„ ๊ตฌํ˜„ํ•ด๋ณด์ž! (๊ธฐ์ดˆํŽธ + DataLoader ์‚ฌ์šฉ๋ฒ•)

Original post: https://velog.io/@euisuk-chung/ํŒŒ์ดํ† ์น˜-ํŒŒ์ดํ† ์น˜๋กœ-CNN-๋ชจ๋ธ์„-๊ตฌํ˜„ํ•ด๋ณด์ž-๊ธฐ์ดˆํŽธ-DataLoader-์‚ฌ์šฉ๋ฒ•

์˜ค๋Š˜์€ MNIST๋ฐ์ดํ„ฐ๋กœ Convolutional Neural Network(์ดํ•˜ CNN)์„ ๊ตฌํ˜„ํ•˜๊ณ  ๋Œ๋ ค๋ณด๋Š” ์‹œ๊ฐ„์„ ๊ฐ–๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค!

๋จผ์ €, CNN์€ ํฌ๊ฒŒ ์•„๋ž˜์™€ ๊ฐ™์€ ๊ตฌ์„ฑ์š”์†Œ๋กœ ์ด๋ฃจ์–ด์ ธ์žˆ์Šต๋‹ˆ๋‹ค.

  • ํ•ฉ์„ฑ๊ณฑ ์—ฐ์‚ฐ(CNN) : ์ด๋ฏธ์ง€์˜ ํŠน์„ฑ ์ถ”์ถœ
  • ๋งฅ์Šคํ’€๋ง(Max Pooling) : ์ด๋ฏธ์ง€์˜ ํŠน์„ฑ ์ถ•์•ฝ
  • ์™„์ „์—ฐ๊ฒฐ ์‹ ๊ฒฝ๋ง(Fully Connected Network) : ์ถ”์ถœ ๋ฐ ์ถ•์•ฝ๋œ ํŠน์ง•์„ ์ž…๋ ฅ์— ์‚ฌ์šฉํ•˜์—ฌ downstream task ์ˆ˜ํ–‰

(Figure: overall CNN architecture)
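The shape annotations in the model code later in this post all follow the standard size formula for convolution and pooling layers: output = floor((input − kernel + 2·padding) / stride) + 1. As a quick sanity check (a minimal sketch; the helper name `conv_out` is mine), tracing MNIST's 28×28 input through the layer stack used below:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a conv/pool layer."""
    return (size - kernel + 2 * padding) // stride + 1

s = conv_out(28, kernel=5)           # Conv2d(kernel_size=5) -> 24
s = conv_out(s, kernel=5)            # Conv2d(kernel_size=5) -> 20
s = conv_out(s, kernel=2, stride=2)  # MaxPool2d(2, 2)       -> 10
s = conv_out(s, kernel=5)            # Conv2d(kernel_size=5) -> 6
s = conv_out(s, kernel=2, stride=2)  # MaxPool2d(2, 2)       -> 3
print(s)  # 3 -- hence the 64*3*3 input to the fully connected layer
```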

Import Library

๊ฐ€๋ณ๊ฒŒ ์–ด๋–ค ๋ชจ๋ธ๋“ค์„ importํ• ์ง€ ์‚ดํŽด๋ณผ๊นŒ์š”?

import torch
import torch.nn as nn       # neural-network building blocks
import torch.optim as optim # optimization algorithms
import torch.nn.init as init # tensor initializers

import torchvision.datasets as datasets # collection of image datasets
import torchvision.transforms as transforms # image transformation tools

from torch.utils.data import DataLoader # batches data and feeds it to the model

import numpy as np
import matplotlib.pyplot as plt

Set Hyperparameter

ํ•™์Šต์— ํ•„์š”ํ•œ Hyperparameter๋ฅผ ์ •์˜ํ•ด๋ณผ๊นŒ์š”?

  • batch_size : the number of data samples fed to the model in a single batch. One iteration processes one batch, so an epoch consists of (dataset size / batch_size) iterations.
  • learning_rate : the step size taken in the direction that decreases the gradient, i.e. how fast training proceeds.
  • num_epoch : one epoch means the network has completed one forward pass/backward pass over the entire dataset, i.e. one full round of training on all the data. num_epoch sets how many times the whole dataset is seen.
batch_size = 100
learning_rate = 0.0002
num_epoch = 10
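With these values, the batch/iteration/epoch relationship from the bullet points works out as follows (MNIST's training split has 60,000 images; drop_last=True, used below, makes this a floor division):

```python
batch_size = 100
num_train = 60000  # size of the MNIST training split

# One iteration processes one batch, so with drop_last=True
# an epoch consists of floor(num_train / batch_size) iterations.
iters_per_epoch = num_train // batch_size
print(iters_per_epoch)  # 600
```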

Load MNIST Data

ํ•™์Šต์šฉ ๋ฐ์ดํ„ฐ์…‹์ธ MNIST๋ฅผ ๊ฐ€์ ธ์™€๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. ๊ฐ๊ฐ ํ•จ์ˆ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ •์˜๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

  • root="desired path"

    • root specifies the directory where the data is stored and loaded from.
  • train=True (or False)

    • train specifies whether the dataset being defined is the training split or the test split.
  • transform=transforms.ToTensor()

    • Specifies what transformation to apply to the data.
    • transforms.ToTensor() converts the images to tensors so they can be fed to the model.
  • target_transform=None

    • Specifies what transformation to apply to the labels (classes).
  • download=True

    • Downloads the data if it is not present at the path specified above.
mnist_train = datasets.MNIST(root="../Data/", train=True, transform=transforms.ToTensor(), target_transform=None, download=True)
mnist_test = datasets.MNIST(root="../Data/", train=False, transform=transforms.ToTensor(), target_transform=None, download=True)

Define Loaders

As mentioned above, DataLoader is a tool for feeding data to the model in batches. Wrapping the datasets defined above in a DataLoader makes it load data according to the conditions we specify when training the model and running inference.

  • batch_size=batch_size

    • Groups the data into batches of batch_size samples before feeding them to the model.
  • shuffle=True

    • Specifies whether to shuffle the data at every epoch.
  • num_workers=2

    • The number of worker processes used to load and batch the data.
  • drop_last=True

    • Specifies whether to drop the last incomplete batch.
train_loader = DataLoader(mnist_train, batch_size=batch_size, shuffle=True, num_workers=2, drop_last=True)
test_loader = DataLoader(mnist_test, batch_size=batch_size, shuffle=False, num_workers=2, drop_last=True)
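The effect of batch_size and drop_last can be seen without downloading MNIST by wrapping a synthetic TensorDataset (a stand-in I made up for illustration) in a DataLoader:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 250 fake 1x28x28 "images" with integer labels, standing in for MNIST
images = torch.randn(250, 1, 28, 28)
labels = torch.randint(0, 10, (250,))
dataset = TensorDataset(images, labels)

loader = DataLoader(dataset, batch_size=100, shuffle=True, drop_last=True)

batches = list(loader)
print(len(batches))         # 2 -- the leftover 50 samples are dropped
print(batches[0][0].shape)  # torch.Size([100, 1, 28, 28])
print(batches[0][1].shape)  # torch.Size([100])
```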

Define CNN(Base) Model

First, let's define the CNN as a class. Using torch's nn.Module, a CNN class that inherits from nn.Module can be defined as follows.

class CNN(nn.Module):
    def __init__(self):
        # super() initializes nn.Module, the parent class of CNN
        super(CNN, self).__init__()
        
        # batch_size = 100
        self.layer = nn.Sequential(
            # [100,1,28,28] -> [100,16,24,24]
            nn.Conv2d(in_channels=1,out_channels=16,kernel_size=5),
            nn.ReLU(),
            
            # [100,16,24,24] -> [100,32,20,20]
            nn.Conv2d(in_channels=16,out_channels=32,kernel_size=5),
            nn.ReLU(),
            
            # [100,32,20,20] -> [100,32,10,10]
            nn.MaxPool2d(kernel_size=2,stride=2),
            
            # [100,32,10,10] -> [100,64,6,6]
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=5),
            nn.ReLU(),
            
            # [100,64,6,6] -> [100,64,3,3]
            nn.MaxPool2d(kernel_size=2,stride=2)          
        )
        self.fc_layer = nn.Sequential(
        	# [100,64*3*3] -> [100,100]
            nn.Linear(64*3*3,100),                                              
            nn.ReLU(),
            # [100,100] -> [100,10]
            nn.Linear(100,10)                                                   
        )       
        
    def forward(self, x):
        # run the operations defined in self.layer
        out = self.layer(x)
        # flatten to [batch, rest]; using x.size(0) instead of the global
        # batch_size keeps this correct for any batch size
        out = out.view(x.size(0), -1)
        # run the operations defined in self.fc_layer
        out = self.fc_layer(out)
        return out
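Before training, it is worth verifying the shape annotations with a dummy batch. The sketch below rebuilds the same layer stack standalone (so it runs without the class above) and confirms that a [N, 1, 28, 28] input comes out as [N, 10]:

```python
import torch
import torch.nn as nn

# the same layer stack as in the CNN class, rebuilt so this snippet runs standalone
layer = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
    nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
)
fc_layer = nn.Sequential(nn.Linear(64 * 3 * 3, 100), nn.ReLU(), nn.Linear(100, 10))

x = torch.randn(4, 1, 28, 28)  # a dummy batch of 4 images
out = layer(x)
print(out.shape)               # torch.Size([4, 64, 3, 3])
out = fc_layer(out.view(x.size(0), -1))
print(out.shape)               # torch.Size([4, 10])
```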

By declaring device as below, the device is set to the GPU when one is available and falls back to the CPU otherwise.

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

The declaration CNN().to(device) creates the model object we defined and moves it onto the specified device.

model = CNN().to(device)

๋ชจ๋ธ์ด ํ•™์Šต์„ ์ˆ˜ํ–‰ํ•˜๋ ค๋ฉด, ์†์‹คํ•จ์ˆ˜์™€ ์ตœ์ ํ™”ํ•จ์ˆ˜๊ฐ€ ํ•„์š”ํ•œ๋ฐ ์ด๋Š” ์•„๋ž˜์™€ ๊ฐ™์ด ์ •์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. (์†์‹คํ•จ์ˆ˜๋Š” Cross Entropy, ์ตœ์ ํ™”ํ•จ์ˆ˜๋Š” Adam Optimizer์„ ์‚ฌ์šฉํ•˜์˜€์Šต๋‹ˆ๋‹ค)

Also, wrapping model.parameters() and lr=learning_rate with torch.optim.Adam() tells the optimizer to update the model's parameters using the predefined learning_rate.

loss_func = nn.CrossEntropyLoss()

optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
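One detail worth noting: nn.CrossEntropyLoss expects raw logits of shape [N, C] and integer class labels of shape [N], and applies log-softmax internally, which is why the model above ends with a plain nn.Linear(100, 10) and no softmax. A minimal sketch with made-up numbers:

```python
import torch
import torch.nn as nn

loss_func = nn.CrossEntropyLoss()

# raw logits of shape [N, C] and integer class labels of shape [N]
logits = torch.tensor([[2.0, 0.1, -1.0],
                       [0.0, 3.0, 0.5]])
targets = torch.tensor([0, 1])

loss = loss_func(logits, targets)
print(loss.item())  # a positive scalar; lower means more confident correct predictions
```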

Train Model

train_loader์—์„œ image์™€ label์˜ ์Œ์„ batch_size๋งŒํผ ๋ฐ›์•„์„œ ๋ชจ๋ธ์— ์ „๋‹ฌํ•˜์—ฌ ์†์‹ค์„ ๊ณ„์‚ฐํ•˜๊ณ , ์†์‹ค์— ๋Œ€ํ•œ ๊ฒฝ์‚ฌํ•˜๊ฐ•๋ฒ•์„ ์ง„ํ–‰ํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋ฐ์ดํŠธํ•ฉ๋‹ˆ๋‹ค. ์ด๋•Œ 1000๋ฒˆ์งธ iteration๋งˆ๋‹ค loss๋ฅผ ์ถœ๋ ฅํ•˜๊ณ , ์ด๋ฅผ loss_arr์— ์ถ”๊ฐ€ํ•˜๋„๋ก ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜์˜€์Šต๋‹ˆ๋‹ค.

  • enumerate(train_loader) yields each batch's index (j) together with [image, label], which we assign to x and y.
  • optimizer.zero_grad() resets the gradients computed in the previous loop to zero.
  • Calling loss.backward() computes the gradient of the loss with respect to each model (weight) parameter, and calling optimizer.step() updates the variables returned by model.parameters() using those gradients; for plain SGD this would mean subtracting learning_rate times the gradient, while Adam additionally rescales the step with running moment estimates.
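The zero_grad → backward → step cycle from the bullets can be seen on a single parameter. The sketch below uses plain SGD so the update rule (subtract learning_rate × gradient) is easy to verify by hand; Adam follows the same cycle but rescales the step:

```python
import torch

w = torch.tensor([1.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

loss = (w ** 2).sum()  # d(loss)/dw = 2w = 2.0 at w = 1.0
opt.zero_grad()        # clear any gradient left over from a previous iteration
loss.backward()        # populate w.grad
print(w.grad)          # tensor([2.])
opt.step()             # w <- w - lr * grad = 1.0 - 0.1 * 2.0
print(w.item())        # ~0.8
```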
loss_arr = []
for i in range(num_epoch):
    for j, [image, label] in enumerate(train_loader):
        x = image.to(device)
        y = label.to(device)
        
        optimizer.zero_grad()
        
        output = model(x)
        
        loss = loss_func(output, y)
        loss.backward()
        optimizer.step()
        
        if j % 1000 == 0:
            print(loss)
            loss_arr.append(loss.cpu().detach().numpy())

Test Model

๋งˆ์ง€๋ง‰ ๋ถ€๋ถ„์€ ํ•™์Šต๋œ ๋ชจ๋ธ์„ ๋ฐ”ํƒ•์œผ๋กœ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•˜์—ฌ ๊ฒ€์ฆํ•˜๋Š” ๋ถ€๋ถ„์ž…๋‹ˆ๋‹ค.

  • model.eval() : puts every layer of the model into eval mode. Concretely, mechanisms used only during training, such as Dropout and BatchNorm's batch statistics, are disabled.
  • torch.no_grad() : with torch.no_grad() disables PyTorch's autograd engine. That is, gradients are no longer tracked, which reduces memory usage and speeds up computation.
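Both behaviors are easy to confirm in isolation (a minimal sketch):

```python
import torch
import torch.nn as nn

# torch.no_grad(): the autograd engine stops tracking operations
x = torch.ones(3, requires_grad=True)
with torch.no_grad():
    y = x * 2
print(y.requires_grad)  # False -- no graph was built

# eval mode: layers like Dropout switch to inference behavior
drop = nn.Dropout(p=0.5)
drop.eval()
z = drop(torch.ones(4))
print(z)  # tensor([1., 1., 1., 1.]) -- dropout is a no-op in eval mode
```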
correct = 0
total = 0

# evaluate model
model.eval()

with torch.no_grad():
    for image, label in test_loader:
        x = image.to(device)
        y = label.to(device)

        output = model(x)
        
        # torch.max returns (max value, index of the max)
        _, output_index = torch.max(output, 1)
        
        # total count += number of labels in this batch
        total += label.size(0)
        
        # add to correct wherever the predicted index matches the label
        correct += (output_index == y).sum().float()
    
    # report accuracy
    print("Accuracy of Test Data: {}%".format(100 * correct / total))

Thanks for reading this long post! ^~^


