1  Overview

This book has three sections. The second and third will explore deep learning applications and essential scientific-computation techniques, respectively. First, though, in this opening part, we are going to learn about torch’s basic building blocks: tensors, automatic differentiation, optimizers, and modules. I’d call this part “torch basics”, or, following a common template, “Getting started with torch”, were it not for a certain false impression this could create. These are basics, true, but basics in the sense of foundations: having worked through the next chapters, you’ll have a solid understanding of how torch works, and you’ll have seen enough code to feel comfortable experimenting with the more involved examples encountered in later sections. In other words, you’ll be, to some degree, fluent in torch.
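To give these four building blocks a concrete shape up front, here is a minimal sketch. It uses the Python torch API (PyTorch); if you are following along in a different torch frontend, the same concepts apply with analogous syntax. The particular values and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn

# Tensors: multi-dimensional arrays, the basic data structure.
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

# Automatic differentiation: operations on tracked tensors are
# recorded, so gradients can be computed by backpropagation.
w = torch.tensor(2.0, requires_grad=True)
loss = (w * 3.0 - 1.0) ** 2   # a toy scalar "loss"
loss.backward()               # d(loss)/dw = 2 * (3w - 1) * 3 = 30 at w = 2

# Optimizers: update parameters based on their gradients.
opt = torch.optim.SGD([w], lr=0.1)
opt.step()                    # w <- w - 0.1 * 30 = -1.0

# Modules: bundle parameters and computation into reusable units.
layer = nn.Linear(2, 1)       # a fully connected layer
out = layer(x)                # output shape: (2, 1)
```

Each of these pieces gets its own chapter in this part, in roughly this order.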

In addition, you’ll have coded a neural network from scratch – twice, even: one version will involve just raw tensors and their built-in capabilities, while the other will make use of dedicated torch structures that encapsulate, in an object-oriented way, functionality essential to neural network training. As a consequence, you’ll be well equipped for part two, where we’ll see how to apply deep learning to different tasks and domains.
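To preview the contrast between those two versions, here is a hedged sketch of a single training step, written in Python torch (PyTorch) syntax; the data, learning rate, and model size are made up for illustration. The first variant manages parameters and updates by hand with raw tensors; the second delegates that bookkeeping to a module and an optimizer.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(8, 3)          # toy inputs
y = torch.randn(8, 1)          # toy targets

# Version 1: raw tensors only -- parameters created and updated by hand.
w = torch.randn(3, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
pred = X @ w + b
loss = ((pred - y) ** 2).mean()
loss.backward()
with torch.no_grad():          # update outside the autograd graph
    w -= 0.01 * w.grad
    b -= 0.01 * b.grad
    w.grad.zero_()
    b.grad.zero_()

# Version 2: dedicated structures -- a module, a loss function, an optimizer.
model = nn.Linear(3, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss = nn.functional.mse_loss(model(X), y)
opt.zero_grad()
loss.backward()
opt.step()                     # the same kind of update, now encapsulated
```

The two loops compute the same kind of gradient-descent step; what changes is how much of the machinery you write yourself.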