Chapter 00

First steps in deep learning: How does AI think?

Find out at a glance what deep learning is and what you'll learn in Ch01–Ch12.

Deep learning diagram by chapter

As you complete each chapter, the diagram below fills in. This is the structure so far.

What you learn in Ch01–Ch12

  • Chapter 01
    Vector dot product: Finding similarity between data

The most basic operation: combining two vectors' directions and magnitudes into a single number that measures how similar they are.

  • Chapter 02
    Matrix multiplication: The magic of computing at once

The product of two matrices is a new matrix filled with the dot products of the rows of the first with the columns of the second.

  • Chapter 03
    Linear layer: Weights that decide importance

Linear layer (or linear transformation layer). A layer that multiplies the input by a weight matrix and adds a bias.

  • Chapter 04
    Activation function: Adding judgment to AI

    Activation function. A function that makes a neuron's output nonlinear.

  • Chapter 05
    Artificial neuron: A unit that gathers information and sends signals

    Artificial neuron. A unit that takes input, computes a weighted sum, and applies an activation function.

  • Chapter 06
    Batch processing: Learning together in one go

Batch. A group of samples processed together in a single computation.

  • Chapter 07
    Weight connections: The countless chains that build intelligence

    Connections. The weighted links between layers and between neurons.

  • Chapter 08
    Hidden layer: The invisible depth of thought

    Hidden. Layers between the input and output layers.

  • Chapter 09
    Deep network: The power to solve more complex problems

    Depth. A network with many hidden layers is called a deep network.

  • Chapter 10
    Width and neurons: Finding more features at once

    Width. A layer with many neurons is called a wide layer.

  • Chapter 11
    Softmax: Turning results into confidence

    Softmax (probability distribution). Transforms output so values are between 0 and 1 and sum to 1.

  • Chapter 12
    Gradient and backpropagation: Learning from mistakes

    Gradient. Tells which direction to move parameters to reduce loss.
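
The building blocks above can be sketched in a few lines of NumPy. This is a minimal illustration with made-up numbers, not code from the course; the variable names and values are chosen only for demonstration.

```python
import numpy as np

# Ch01 - dot product: two vectors collapse into one similarity score
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(a @ b)  # 1*4 + 2*5 + 3*6 = 32.0

# Ch02 - matrix multiplication: many dot products computed at once
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])      # two input samples, two features each
W = np.array([[0.5, -1.0],
              [1.0,  0.5]])     # made-up weight matrix
print(X @ W)  # each entry is (row of X) . (column of W)

# Ch03-04 - linear layer plus activation: weighted sum, then nonlinearity
bias = np.array([0.1, -0.1])
z = X @ W + bias                # linear layer
h = np.maximum(z, 0.0)          # ReLU activation: negatives become 0

# Ch11 - softmax: raw scores become probabilities that sum to 1
def softmax(v):
    e = np.exp(v - v.max())     # subtract the max for numerical stability
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))
```

Every later chapter builds on exactly these few operations; a full network is just many of them chained together.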

What is Deep Learning?

Deep learning is like a smart calculator that learns by itself — Instead of humans defining every rule one by one, it's a way for computers to find rules on their own by looking at huge amounts of data. Inspired by neurons in the brain exchanging signals, small computing units are stacked in many layers, which is why we call it deep learning.

Deep learning is everywhere in our lives — From conversational AI you use every day like ChatGPT and Gemini, to self-driving cars that read the road with cameras, to Netflix and YouTube recommendation systems that know your taste better than you do: they're all products of deep learning. The core idea is turning complex images and sounds into numbers, then adding and multiplying those numbers to find the right answer.

You need the basics to build more powerful AI — Going beyond ready-made models and adapting them to your own goals requires understanding the basic math happening inside. When you understand how numbers are grouped and computed, you can see clearly why an AI made a certain decision and tune it for better performance.

What one layer in deep learning does — Each layer multiplies the incoming numbers by weights (importance), adds them up, and passes the result to the next layer. As the layers get deeper, the AI moves from low-level patterns in the data like dots and lines, to parts like eyes, a nose, and a mouth, and finally to high-level concepts like dog vs. cat. The guide for adjusting those weights precisely toward the right answer is the gradient.
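
A single layer's multiply-and-add, and the gradient's role in adjusting the weights, can be sketched like this. The numbers, learning rate, and squared-error loss here are illustrative assumptions, not the course's code:

```python
import numpy as np

# One layer's job: multiply inputs by weights (importance) and add them up.
x = np.array([1.0, 2.0])         # incoming numbers
w = np.array([0.5, -0.3])        # weights (importance of each input)
out = np.dot(x, w)               # weighted sum passed to the next layer

# The gradient guides the weight adjustment. For a squared-error loss
# L = (out - target)^2, the gradient with respect to w is 2*(out - target)*x.
target = 1.0                     # the "right answer" for this toy example
lr = 0.05                        # learning rate: how big each adjustment is
for _ in range(50):
    out = np.dot(x, w)
    grad = 2.0 * (out - target) * x   # direction in which the loss grows
    w -= lr * grad                    # step against it to reduce the loss

print(np.dot(x, w))  # now very close to the target of 1.0
```

Ch12 develops this idea into full backpropagation, which computes such gradients for every weight in every layer at once.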

This course's learning roadmap — Deep learning is ultimately an efficient repetition of multiplication and addition. You'll learn the basics of how data moves in Ch01 (dot product) and Ch02 (matrix multiplication), work through artificial neurons and activation functions in Ch03–05, and grasp the structure of deep and wide neural networks in Ch06–10. Finally, in Ch11–12, you'll work step by step through the core idea of how AI learns on its own: the gradient.

Follow the roadmap below to see what each chapter aims for. If you work through it step by step, you'll gain the ability to read the mathematical language that state-of-the-art AI systems use internally.