*The author selected Code 2040 to receive a donation as part of the Write for DOnations program.*

### Introduction

Machine learning is a field of computer science that finds patterns in data. As of 2021, machine learning practitioners use these patterns to detect lanes for self-driving cars; train a robot hand to solve a Rubik’s cube; or generate images of dubious artistic taste. As machine learning models grow more accurate and performant, we see increasing adoption in mainstream applications and products.

*Deep learning* is a subset of machine learning that focuses on particularly complex models, termed *neural networks*. In later, advanced DigitalOcean articles (like this tutorial on building an Atari bot), we will formally define what “complex” means. Neural networks are the highly accurate and hype-inducing modern-day models you hear about, with applications across a wide range of tasks. In this tutorial, you will focus on one specific task called object recognition, or image classification. Given an image of a handwritten digit, your model will predict which digit is shown.

You will build, train, and evaluate deep neural networks in PyTorch, a framework developed by Facebook AI Research for deep learning. When compared to other deep learning frameworks, like TensorFlow, PyTorch is a beginner-friendly framework with debugging features that aid in the building process. It’s also highly customizable for advanced users, with researchers and practitioners using it across companies like Facebook and Tesla. By the end of this tutorial, you will be able to:

- Build, train, and evaluate a deep neural network in PyTorch
- Understand the risks of applying deep learning

While you won’t need prior experience in practical deep learning or PyTorch to follow along with this tutorial, we’ll assume some familiarity with machine learning terms and concepts such as training and testing, features and labels, optimization, and evaluation. You can learn more about these concepts in An Introduction to Machine Learning.

## Prerequisites

To complete this tutorial, you will need a local development environment for Python 3 with at least 1GB of RAM. You can follow How to Install and Set Up a Local Programming Environment for Python 3 to configure everything you need.

## Step 1 — Creating Your Project and Installing Dependencies

Let’s create a workspace for this project and install the dependencies you’ll need. You’ll call your workspace `pytorch`:

- mkdir ~/pytorch

Navigate to the `pytorch` directory:

- cd ~/pytorch

Then create a new virtual environment for the project:

- python3 -m venv pytorch

Activate your environment:

- source pytorch/bin/activate

Then install PyTorch. On macOS, install PyTorch with the following command:

- python -m pip install torch==1.4.0 torchvision==0.5.0

On Linux and Windows, use the following command for a CPU-only build:

- pip install torch==1.4.0+cpu torchvision==0.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html

With the dependencies installed, you will now build your first neural network.

## Step 2 — Building a “Hello World” Neural Network

In this step, you will build your first neural network and train it. You will learn about two sub-libraries in PyTorch: `torch.nn` for neural network operations and `torch.optim` for neural network optimizers. To understand what an “optimizer” is, you will also learn about an algorithm called *gradient descent*. Throughout this tutorial, you will use the following five steps to build and train models:

1. Build a computation graph
2. Set up optimizers
3. Set up criterion
4. Set up data
5. Train the model

In this first section of the tutorial, you will build a small model with a manageable dataset. Start by creating a new file `step_2_helloworld.py`, using `nano` or your favorite text editor:

- nano step_2_helloworld.py

You will now write a short 18-line snippet that trains a small model. Start by importing several PyTorch utilities:

`step_2_helloworld.py`

```
import torch
import torch.nn as nn
import torch.optim as optim
```

Here, you alias PyTorch libraries to several commonly used shortcuts:

- `torch` contains all PyTorch utilities. However, routine PyTorch code includes a few extra imports. We follow the same convention here, so that you can understand PyTorch tutorials and random code snippets online.
- `torch.nn` contains utilities for constructing neural networks. This is often denoted `nn`.
- `torch.optim` contains training utilities. This is often denoted `optim`.

Next, define the neural network, training utilities, and the dataset:

`step_2_helloworld.py`

```
. . .
net = nn.Linear(1, 1) # 1. Build a computation graph (a line!)
optimizer = optim.SGD(net.parameters(), lr=0.1) # 2. Setup optimizers
criterion = nn.MSELoss() # 3. Setup criterion
x, target = torch.randn((1,)), torch.tensor([0.]) # 4. Setup data
. . .
```

Here, you define several necessary parts of any deep learning training script:

- `net = ...` defines the “neural network”. In this case, the model is a line of the form `y = m * x`; its single parameter is the slope of your line. This *model parameter* will be updated during training. Note that `torch.nn` (aliased with `nn`) includes many deep learning operations, like the fully connected layers used here (`nn.Linear`) and convolutional layers (`nn.Conv2d`).
- `optimizer = ...` defines the optimizer. This optimizer determines how the neural network will learn. We will discuss optimizers in more detail after writing a few more lines of code. Note that `torch.optim` (aliased to `optim`) includes many such optimizers that you can use.
- `criterion = ...` defines the loss. In short, the loss defines *what* your model is trying to minimize. For your basic model of a line, the goal is to minimize the difference between your line’s predicted y-values and the actual y-values in the training set. Note that `torch.nn` (aliased with `nn`) includes many other loss functions you can use.
- `x, target = ...` defines your “dataset”. Right now, the dataset is just one coordinate: one x value and one y value. In this case, the `torch` package itself offers `tensor`, to create a new tensor, and `randn`, to create a tensor with random values.

Finally, train the model by iterating over the dataset ten times. Each time, you adjust the model’s parameter:

`step_2_helloworld.py`

```
. . .
# 5. Train the model
for i in range(10):
    output = net(x)
    loss = criterion(output, target)
    print(round(loss.item(), 2))
    net.zero_grad()
    loss.backward()
    optimizer.step()
```

Your general goal is to minimize the loss by adjusting the slope of the line. To effect this, the training code implements an algorithm called *gradient descent*. The intuition for gradient descent is as follows: Imagine you’re looking straight down at a bowl. The bowl has many points on it, and each point corresponds to a different parameter value. The bowl itself is the loss surface: the center of the bowl (the lowest point) indicates the best model with the lowest loss. This is the *optimum*. The fringes of the bowl (the highest points, and the parts of the bowl closest to you) hold the worst models with the highest loss.
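The bowl analogy can be sketched in a few lines of plain Python. This toy is not part of the tutorial’s scripts: the “bowl” here is the loss `loss(m) = m ** 2`, whose hand-derived gradient is `2 * m`, and the update rule mirrors what `optimizer.step()` will do for you:

```python
# A toy one-dimensional "bowl": loss(m) = m ** 2, lowest at m = 0.
# The gradient is 2 * m, so stepping against the gradient moves m toward 0.
def gradient_descent(m, lr=0.1, steps=10):
    for _ in range(steps):
        grad = 2 * m       # gradient of the loss at the current parameter
        m = m - lr * grad  # step against the gradient, scaled by the step size
    return m

final = gradient_descent(m=1.0)
print(final)  # close to the optimum at 0
```

Each step multiplies `m` by `1 - lr * 2 = 0.8`, so ten steps shrink the parameter to about a tenth of its starting value, just like the loss values you will see shrink below.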

To find the best model with the lowest loss:

1. With `net = nn.Linear(1, 1)`, you initialize a random model. This is equivalent to picking a random point on the bowl.
2. In the `for i in range(10)` loop, you begin training. This is equivalent to stepping closer to the center of the bowl.
3. The direction of each step is given by the gradient. You will skip a formal proof here, but in summary, the negative gradient points to the lowest point in the bowl.
4. With `lr=0.1` in `optimizer = ...`, you specify the *step size*. This determines how large each step can be.

In just ten steps, you reach the center of the bowl, the best possible model with the lowest possible loss. For a visualization of gradient descent, see Distill’s “Why Momentum Really Works,” first figure at the top of the page.

The last three lines of this code are also important:

- `net.zero_grad` clears all gradients that may have been left over from the previous iteration.
- `loss.backward` computes new gradients.
- `optimizer.step` uses those gradients to take steps.

Notice that you didn’t compute gradients yourself. This is because PyTorch, and other deep learning libraries like it, automatically differentiate.
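To see what automatic differentiation computes for you, you can check a hand-derived gradient against a numerical approximation. This sketch (illustrative only, not part of the tutorial’s scripts) uses the line model `y = m * x` with squared-error loss, whose gradient with respect to the slope is `2 * (m * x - target) * x`:

```python
def loss(m, x, target):
    # squared error of the line model y = m * x
    return (m * x - target) ** 2

def grad_m(m, x, target):
    # hand-derived d(loss)/dm: the value loss.backward() fills in for you
    return 2 * (m * x - target) * x

m, x, target = 0.5, 2.0, 0.0
eps = 1e-6
numeric = (loss(m + eps, x, target) - loss(m - eps, x, target)) / (2 * eps)
print(grad_m(m, x, target), numeric)  # the two values agree closely
```

In the training loop, `loss.backward` produces exactly this kind of gradient for every parameter, and `optimizer.step` then subtracts `lr` times that gradient.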

This now concludes your “hello world” neural network. Save and close your file.

Double-check that your script matches `step_2_helloworld.py`. Then, run the script:

- python step_2_helloworld.py

Your script will output the following:

Output

```
0.33
0.19
0.11
0.07
0.04
0.02
0.01
0.01
0.0
0.0
```

Notice that your loss continually decreases, showing that your model is learning. There are two other implementation details to note when using PyTorch:

- PyTorch uses `torch.Tensor` to hold all data and parameters. Here, `torch.randn` generates a tensor with random values, with the provided shape. For example, `torch.randn((1, 2))` creates a 1x2 tensor, or a 2-dimensional row vector.
- PyTorch supports a wide variety of optimizers. This tutorial features `torch.optim.SGD`, otherwise known as stochastic gradient descent (SGD). Roughly speaking, this is the algorithm described in this tutorial, where you took steps toward the optimum. There are more-involved optimizers that add extra features on top of SGD. There are also many losses, with `torch.nn.MSELoss` being just one of them.

This concludes your very first model on a toy dataset. In the next step, you will replace this small model with a neural network and the toy dataset with a commonly used machine learning benchmark.

## Step 3 — Training Your Neural Network on Handwritten Digits

In the previous section, you built a small PyTorch model. However, to better understand the benefits of PyTorch, you will now build a deep neural network using `torch.nn.functional`, which contains more neural network operations, and `torchvision.datasets`, which supports many datasets you can use out of the box. In this section, you will build a *relatively complex, custom* model with a *premade* dataset.

You’ll use *convolutions*, which are pattern-finders. For images, convolutions look for 2D patterns at various levels of “meaning”: convolutions applied directly to the image look for “lower-level” features such as edges, while convolutions applied to the outputs of many other operations may look for “higher-level” features, such as a door. For visualizations and a more thorough walkthrough of convolutions, see part of Stanford’s deep learning course.
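To make “pattern-finder” concrete, here is a from-scratch sketch in plain Python (illustrative only; `nn.Conv2d` does this on tensors, with learned kernels, channels, and batching). A hand-picked vertical-edge kernel slides over a tiny image, and the response is large exactly where a dark region meets a bright one:

```python
def conv2d(image, kernel):
    # slide the kernel over every position and sum the elementwise products
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]  # responds where dark pixels (left) meet bright ones (right)
result = conv2d(image, kernel)
print(result)  # strongest response down the middle column, where the edge is
```

In your network, the kernel values are not hand-picked: they are model parameters that gradient descent tunes, so the network learns which patterns to look for.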

You will now expand on the first PyTorch model you built, by defining a slightly more complex model. Your neural network will now contain two convolutions and one fully connected layer, to handle image inputs.

Start by creating a new file `step_3_mnist.py`, using your text editor:

- nano step_3_mnist.py

You will follow the same five-step algorithm as before:

1. Build a computation graph
2. Set up optimizers
3. Set up criterion
4. Set up data
5. Train the model

First, define your deep neural network. Note this is a pared-down version of other neural networks you may find on MNIST. This is intentional, so that you can train your neural network on your laptop:

`step_3_mnist.py`

```
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR

# 1. Build a computation graph
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.fc = nn.Linear(1024, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 1)
        x = torch.flatten(x, 1)
        x = self.fc(x)
        output = F.log_softmax(x, dim=1)
        return output

net = Net()
. . .
```

Here, you define a neural network class, inheriting from `nn.Module`. All operations in the neural network (including the neural network itself) must inherit from `nn.Module`. The typical paradigm for your neural network class is as follows:

1. In the constructor, define any operations needed for your network. In this case, you have two convolutions and a fully connected layer. (A tip to remember: the constructor always starts with `super().__init__()`.) PyTorch expects the parent class to be initialized before assigning modules (for example, `nn.Conv2d`) to instance attributes (`self.conv1`).
2. In the `forward` method, run the initialized operations. This method determines the neural network architecture, explicitly defining how the neural network will compute its predictions.

This neural network uses a few different operations:

- `nn.Conv2d`: A convolution. Convolutions look for patterns in the image. Earlier convolutions look for “low-level” patterns like edges. Later convolutions in the network look for “high-level” patterns like legs on a dog, or ears.
- `nn.Linear`: A fully connected layer. Fully connected layers relate all input features to all output dimensions.
- `F.relu`, `F.max_pool2d`: These are types of non-linearities. (A non-linearity is any function that is not linear.) `relu` is the function `f(x) = max(x, 0)`. `max_pool` takes the maximum value in every patch of values. In this case, you take the maximum value across the entire image.
- `log_softmax`: applies the logarithm of the softmax function. Softmax normalizes all of the values in a vector so that they sum to 1 and can be read as probabilities.
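The last two operations are simple enough to sketch in plain Python (illustrative only; the `F.*` versions operate on tensors and support autograd). `relu` clips negative values to zero, and `softmax`, whose logarithm `log_softmax` returns, rescales raw scores into probabilities that sum to 1:

```python
import math

def relu(x):
    return max(x, 0.0)  # f(x) = max(x, 0)

def softmax(scores):
    # exponentiate each score, then divide by the total so the results sum to 1
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

print([relu(v) for v in [-2.0, -0.5, 0.0, 3.0]])  # negatives become 0
probs = softmax([1.0, 2.0, 3.0])
print(sum(probs))  # probabilities sum to 1
```

Working with the logarithm of these probabilities is numerically more stable, which is why the network ends with `log_softmax` and pairs it with the `NLLLoss` criterion below.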

Second, like before, define the optimizer. This time, you will use a different optimizer and a different *hyper-parameter* setting. Hyper-parameters configure training, whereas training adjusts model parameters. These hyper-parameter settings are taken from the PyTorch MNIST example:

`step_3_mnist.py`

```
. . .
optimizer = optim.Adadelta(net.parameters(), lr=1.) # 2. Setup optimizer
. . .
```

Third, unlike before, you will now use a different loss. This loss is used for *classification* problems, where the output of your model is a class index. In this particular example, the model will output the digit (possibly any number from 0 to 9) contained in the input image:

`step_3_mnist.py`

```
. . .
criterion = nn.NLLLoss() # 3. Setup criterion
. . .
```
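To see why `nn.NLLLoss` pairs with the `log_softmax` output of your network, here is the arithmetic for a single sample (a hedged sketch; the real criterion runs on tensors and averages over the batch). The loss is the negative log-probability the model assigned to the correct class, so a confident correct prediction yields a small loss:

```python
import math

# Log-probabilities for 3 classes, as log_softmax would produce them.
log_probs = [math.log(0.7), math.log(0.2), math.log(0.1)]
target = 0  # index of the true class

loss = -log_probs[target]  # negative log-likelihood of the correct class
print(round(loss, 2))      # small, because the model put 70% on the right class
```

Had the model assigned only 10% to the correct class, the loss would be `-log(0.1) ≈ 2.3`, which is roughly where your training losses start below.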

Fourth, set up the data. In this case, you will set up a dataset called MNIST, which features handwritten digits. Deep Learning 101 tutorials often use this dataset. Each image is a small 28x28 px image containing a handwritten digit, and the goal is to classify each handwritten digit as 0, 1, 2, … or 9:

`step_3_mnist.py`

```
. . .
# 4. Setup data
transform = transforms.Compose([
    transforms.Resize((8, 8)),
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
train_dataset = datasets.MNIST(
    'data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=512)
. . .
```

Here, you preprocess the images in `transform = ...` by resizing the image, converting the image to a PyTorch tensor, and normalizing the tensor to have mean 0 and variance 1.

In the next two lines, you set `train=True`, as this is the training dataset, and `download=True`, so that you download the dataset if it is not already downloaded.

`batch_size=512` determines how many images the network is trained on at once. Barring ridiculously large batch sizes (for example, tens of thousands), larger batches are preferable for roughly faster training.
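The effect of `batch_size=512` can be sketched without PyTorch (illustrative only; `DataLoader` additionally handles shuffling and collating samples into tensors). MNIST’s training split has 60,000 images, so the loader yields 117 full batches of 512 plus one final, smaller batch:

```python
def batches(samples, batch_size):
    # yield consecutive chunks of at most batch_size samples
    for start in range(0, len(samples), batch_size):
        yield samples[start:start + batch_size]

dataset = list(range(60000))  # stand-in for MNIST's 60,000 training images
sizes = [len(b) for b in batches(dataset, 512)]
print(len(sizes), sizes[0], sizes[-1])  # 118 batches: 117 of size 512, then 96
```

Each iteration of the training loop below consumes one such batch, so one epoch of training takes 118 steps rather than 60,000.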

Fifth, train the model. In the following code block, you make minimal modifications. Instead of running ten times on the same sample, you will now iterate over all samples in the provided dataset once. By passing over all samples once, the following trains for one *epoch*:

`step_3_mnist.py`

```
. . .
# 5. Train the model
for inputs, target in train_loader:
    output = net(inputs)
    loss = criterion(output, target)
    print(round(loss.item(), 2))
    net.zero_grad()
    loss.backward()
    optimizer.step()
. . .
```

Save and close your file.

Double-check that your script matches `step_3_mnist.py`. Then, run the script:

- python step_3_mnist.py

Your script will output the following:

Output

```
2.31
2.18
2.03
1.78
1.52
1.35
1.3
1.35
1.07
1.0
...
0.21
0.2
0.23
0.12
0.12
0.12
```

Notice that the final loss is less than 10% of the initial loss value. This means that your neural network is training correctly.

That concludes training. However, the loss of 0.12 is difficult to reason about: we don’t know if 0.12 is “good” or “bad”. To assess how well your model is performing, you next compute an accuracy for this classification model.

## Step 4 — Evaluating Your Neural Network

Earlier, you computed loss values on the *train* split of your dataset. However, it is good practice to keep a separate *validation* split of your dataset. You use this validation split to compute the accuracy of your model, but you can’t use it for training. Next, you will set up the validation dataset and evaluate your model on it. In this step, you will use the same PyTorch utilities from before, including `torchvision.datasets` for the MNIST dataset.

Start by copying your `step_3_mnist.py` file into `step_4_eval.py`. Then, open the file:

- cp step_3_mnist.py step_4_eval.py
- nano step_4_eval.py

First, set up the validation dataset:

`step_4_eval.py`

```
. . .
train_loader = ...
val_dataset = datasets.MNIST(
    'data', train=False, download=True, transform=transform)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=512)
. . .
```

At the end of your file, after the training loop, add a validation loop:

`step_4_eval.py`

```
. . .
    optimizer.step()

correct = 0.
net.eval()
for inputs, target in val_loader:
    output = net(inputs)
    _, pred = output.max(1)
    correct += (pred == target).sum()
accuracy = correct / len(val_dataset) * 100.
print(f'{accuracy:.2f}% correct')
```

Here, the validation loop performs a few operations to compute accuracy:

- Running `net.eval()` ensures that your neural network is in evaluation mode and ready for validation. Several operations are run differently in evaluation mode than in training mode.
- Iterating over all inputs and labels in `val_loader`.
- Running the model `net(inputs)` to obtain probabilities for each class.
- Finding the class with the highest probability with `output.max(1)`. `output` is a tensor with dimensions `(n, k)` for `n` samples and `k` classes. The `1` means you compute the max along the index `1` dimension.
- Computing the number of images that were classified correctly: `pred == target` computes a boolean-valued vector; `.sum()` casts these booleans to integers and effectively computes the number of true values; `correct / len(val_dataset)` finally computes the percent of images classified correctly.
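The accuracy bookkeeping is easy to verify on toy values (a plain-Python sketch; in the loop above, the same arithmetic runs on tensors for 512 samples at a time). With five predictions, four of which match their targets:

```python
preds   = [7, 2, 1, 0, 4]
targets = [7, 2, 1, 0, 9]

# compare each prediction to its target, sum the matches, then take a percentage
correct = sum(int(p == t) for p, t in zip(preds, targets))
accuracy = correct / len(targets) * 100.
print(f'{accuracy:.2f}% correct')  # 80.00% correct
```

This is exactly what `(pred == target).sum()` followed by `correct / len(val_dataset) * 100.` computes across the full validation set.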

Save and close your file.

Double-check that your script matches `step_4_eval.py`. Then, run the script:

- python step_4_eval.py

Your script will output the following. Note that the specific loss values and final accuracy may vary:

Output

```
2.31
2.21
...
0.14
0.2
89.00% correct
```

You have now trained your very first deep neural network. You can make further modifications and improvements by tuning hyper-parameters for training: this includes different numbers of epochs, learning rates, and different optimizers. We include a sample script with tuned hyper-parameters; this script trains the same neural network but for 10 epochs, obtaining 97% accuracy.

## Risks of Deep Learning

One gotcha is that deep learning does not always obtain state-of-the-art results. Deep learning works well in feature-rich, data-rich scenarios but conversely performs poorly in data-sparse, feature-sparse regimes. Whereas there is active research in deep learning’s weak areas, many other machine learning techniques are already well-suited for feature-sparse regimes, such as decision trees, linear regression, or support vector machines (SVM).

Another gotcha is that deep learning is not well understood. There are no guarantees for accuracy, optimality, or even convergence. On the other hand, classic machine learning techniques are well-studied and are relatively interpretable. Again, there is active research to address this lack of interpretability in deep learning. You can read more in “What Explainable AI fails to explain (and how we fix that)”.

Most importantly, lack of interpretability in deep learning leads to overlooked biases. For example, researchers from UC Berkeley were able to show a model’s gender bias in captioning (“Women also Snowboard”). Other research efforts focus on societal issues such as “fairness” in machine learning. Given that these issues are undergoing active research, it is difficult to recommend a prescribed diagnosis for biases in models. As a result, it is up to you, the practitioner, to apply deep learning responsibly.

## Conclusion

PyTorch is a deep learning framework for enthusiasts and researchers alike. To get acquainted with PyTorch, you have trained a deep neural network and also learned several tips and tricks for customizing deep learning.

You can also use a pre-built neural network architecture instead of building your own. Here is a link to an optional section: Use Existing Neural Network Architecture on Google Colab that you can try. For demonstration purposes, this optional step trains a much larger model with much larger images.

Check out our other articles to dive deeper into machine learning and related fields:

- Is your model complex enough? Too complex? Learn about the bias-variance tradeoff in Bias-Variance for Deep Reinforcement Learning: How To Build a Bot for Atari with OpenAI Gym to find out. In this article, we build AI bots for Atari Games and explore a field of research called Reinforcement Learning. Alternatively, find a visual explanation of the bias-variance trade-off in this Understanding the Bias-Variance Trade-off article.
- How does a machine learning model process images? Learn more in Build an Emotion-Based Dog Filter. In this article, we discuss how models process and classify images in more detail, exploring a field of research called Computer Vision.
- Can a neural network be fooled? Learn how in Tricking a Neural Network. In this article, we explore adversarial machine learning, a field of research that devises both attacks and defenses for neural networks, for more robust real-world deep learning deployments.
- How can we better understand how neural networks work? Read about one class of approaches called “Explainable AI” in How To Visualize and Interpret Neural Networks. In this article, we explore explainable AI, and in particular visualize pixels that the neural network believes are important for its predictions.