Building Neural Networks from Scratch with Python: A Complete Guide

Master the fundamentals of deep learning by implementing neural networks without frameworks

Introduction

Neural networks have revolutionized the field of artificial intelligence, powering everything from image recognition to natural language processing. While libraries like TensorFlow and PyTorch make it easy to build complex models, understanding the underlying mathematics and implementation details is crucial for becoming a proficient machine learning engineer.

In this comprehensive tutorial, we'll build a neural network from scratch using only NumPy. By the end of this guide, you'll have a deep understanding of forward propagation, backpropagation, and gradient descent: the fundamental building blocks of modern deep learning.

Prerequisites

Before diving in, you should have:

  • Basic Python programming knowledge
  • Understanding of linear algebra (matrices, vectors)
  • Familiarity with calculus concepts (derivatives)
  • NumPy installed in your Python environment

Understanding the Architecture

A neural network consists of layers of interconnected nodes (neurons). Each connection has an associated weight, and each neuron has a bias. The network processes input data through these layers, transforming it step by step until producing an output.

The Perceptron: Building Block of Neural Networks

The simplest neural network unit is the perceptron, which computes:

output = activation(sum(weights * inputs) + bias)

Let's implement this concept in Python:

import numpy as np

class Perceptron:
    def __init__(self, input_size, learning_rate=0.01):
        """
        Initialize a perceptron with random weights.
        
        Args:
            input_size: Number of input features
            learning_rate: Step size for gradient descent
        """
        self.weights = np.random.randn(input_size) * 0.01
        self.bias = 0
        self.learning_rate = learning_rate
    
    def activation(self, x):
        """Sigmoid activation function."""
        return 1 / (1 + np.exp(-np.clip(x, -500, 500)))
    
    def forward(self, X):
        """
        Compute the forward pass.
        
        Args:
            X: Input data of shape (n_samples, n_features)
        
        Returns:
            Predictions of shape (n_samples,)
        """
        linear_output = np.dot(X, self.weights) + self.bias
        return self.activation(linear_output)
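
As a quick sanity check, here is a small usage sketch; the sample values below are made up for illustration:

# Quick sanity check of the Perceptron with made-up inputs
X_demo = np.array([[0.5, 1.0, -0.3],
                   [1.2, -0.7, 0.8]])   # 2 samples, 3 features

p = Perceptron(input_size=3)
print(p.forward(X_demo))  # two values close to 0.5, since the weights start near zero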

Activation Functions: Adding Non-linearity

Activation functions introduce non-linearity into the network, enabling it to learn complex patterns. Without activation functions, a deep neural network would simply compute a linear transformation of the input, regardless of its depth.
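
To see this concretely, here is a small numerical check (the weight matrices are arbitrary): composing two weight-and-bias layers without an activation collapses into a single linear layer.

# Two layers with no activation collapse into one linear map (arbitrary example values)
W1, b1 = np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([0.5, -0.5])
W2, b2 = np.array([[2.0, 0.0], [1.0, 1.0]]), np.array([1.0, 0.0])
x = np.array([1.0, -1.0])

two_layers = (x @ W1 + b1) @ W2 + b2
one_layer = x @ (W1 @ W2) + (b1 @ W2 + b2)   # equivalent single layer
print(np.allclose(two_layers, one_layer))    # True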

Common Activation Functions

Let's implement the most widely used activation functions:

class ActivationFunctions:
    @staticmethod
    def sigmoid(x):
        """
        Sigmoid function: f(x) = 1 / (1 + e^(-x))
        
        Properties:
        - Output range: (0, 1)
        - Smooth gradient
        - Prone to vanishing gradients
        """
        return 1 / (1 + np.exp(-np.clip(x, -500, 500)))
    
    @staticmethod
    def sigmoid_derivative(x):
        """Derivative of sigmoid: f'(x) = f(x) * (1 - f(x))"""
        s = ActivationFunctions.sigmoid(x)
        return s * (1 - s)
    
    @staticmethod
    def relu(x):
        """
        ReLU function: f(x) = max(0, x)
        
        Properties:
        - Output range: [0, ∞)
        - Sparse activation
        - Fast computation
        - Can suffer from "dying ReLU" problem
        """
        return np.maximum(0, x)
    
    @staticmethod
    def relu_derivative(x):
        """Derivative of ReLU: f'(x) = 1 if x > 0, else 0"""
        return (x > 0).astype(float)
    
    @staticmethod
    def tanh(x):
        """
        Hyperbolic tangent: f(x) = (e^x - e^(-x)) / (e^x + e^(-x))
        
        Properties:
        - Output range: (-1, 1)
        - Zero-centered (unlike sigmoid)
        - Still prone to vanishing gradients
        """
        return np.tanh(x)
    
    @staticmethod
    def tanh_derivative(x):
        """Derivative of tanh: f'(x) = 1 - f(x)^2"""
        return 1 - np.tanh(x) ** 2
    
    @staticmethod
    def leaky_relu(x, alpha=0.01):
        """
        Leaky ReLU: f(x) = x if x > 0, else alpha * x
        
        Properties:
        - Allows small gradients for negative values
        - Addresses "dying ReLU" problem
        """
        return np.where(x > 0, x, alpha * x)
    
    @staticmethod
    def leaky_relu_derivative(x, alpha=0.01):
        """Derivative of Leaky ReLU"""
        return np.where(x > 0, 1, alpha)
    
    @staticmethod
    def softmax(x):
        """
        Softmax function for multi-class classification.
        Converts raw scores to probabilities.
        """
        exp_x = np.exp(x - np.max(x, axis=-1, keepdims=True))
        return exp_x / np.sum(exp_x, axis=-1, keepdims=True)
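
A few quick spot checks on these implementations (the input values are arbitrary):

x = np.array([-2.0, 0.0, 3.0])
print(ActivationFunctions.relu(x))              # [0. 0. 3.]
print(ActivationFunctions.relu_derivative(x))   # [0. 0. 1.]

probs = ActivationFunctions.softmax(np.array([[1.0, 2.0, 3.0]]))
print(probs, probs.sum())                       # three probabilities summing to 1.0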

Building a Complete Neural Network Layer

Now let's create a flexible neural network layer that can be stacked to create deep networks:

class Layer:
    def __init__(self, input_size, output_size, activation='relu'):
        """
        Initialize a neural network layer.
        
        Args:
            input_size: Number of input neurons
            output_size: Number of output neurons
            activation: Activation function ('relu', 'sigmoid', 'tanh')
        """
        # He initialization for ReLU, Xavier for others
        if activation == 'relu':
            scale = np.sqrt(2.0 / input_size)
        else:
            scale = np.sqrt(1.0 / input_size)
        
        self.weights = np.random.randn(input_size, output_size) * scale
        self.bias = np.zeros((1, output_size))
        self.activation_name = activation
        
        # Cache for backpropagation
        self.input = None
        self.z = None  # Pre-activation
        self.a = None  # Post-activation
        
        # Gradients
        self.dW = None
        self.db = None
    
    def activate(self, z):
        """Apply the activation function."""
        if self.activation_name == 'relu':
            return ActivationFunctions.relu(z)
        elif self.activation_name == 'sigmoid':
            return ActivationFunctions.sigmoid(z)
        elif self.activation_name == 'tanh':
            return ActivationFunctions.tanh(z)
        elif self.activation_name == 'softmax':
            return ActivationFunctions.softmax(z)
        else:
            return z  # Linear activation
    
    def activate_derivative(self, z):
        """Compute derivative of activation function."""
        if self.activation_name == 'relu':
            return ActivationFunctions.relu_derivative(z)
        elif self.activation_name == 'sigmoid':
            return ActivationFunctions.sigmoid_derivative(z)
        elif self.activation_name == 'tanh':
            return ActivationFunctions.tanh_derivative(z)
        else:
            return np.ones_like(z)
    
    def forward(self, X):
        """
        Perform forward propagation through this layer.
        
        Args:
            X: Input data of shape (batch_size, input_size)
        
        Returns:
            Output of shape (batch_size, output_size)
        """
        self.input = X
        self.z = np.dot(X, self.weights) + self.bias
        self.a = self.activate(self.z)
        return self.a
    
    def backward(self, dA, learning_rate):
        """
        Perform backpropagation through this layer.
        
        Args:
            dA: Gradient of loss with respect to this layer's output
            learning_rate: Learning rate for weight updates
        
        Returns:
            Gradient of loss with respect to this layer's input
        """
        m = self.input.shape[0]
        
        # Compute gradient of activation
        if self.activation_name == 'softmax':
            dZ = dA  # For softmax with cross-entropy, gradient simplifies
        else:
            dZ = dA * self.activate_derivative(self.z)
        
        # Compute gradients
        self.dW = (1/m) * np.dot(self.input.T, dZ)
        self.db = (1/m) * np.sum(dZ, axis=0, keepdims=True)
        
        # Gradient for previous layer
        dA_prev = np.dot(dZ, self.weights.T)
        
        # Update weights and biases
        self.weights -= learning_rate * self.dW
        self.bias -= learning_rate * self.db
        
        return dA_prev
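
Before stacking layers, it is worth confirming that the shapes line up; here is a short sketch with random data and arbitrarily chosen sizes:

# Shape check with random data (arbitrary sizes)
X_demo = np.random.randn(32, 4)        # batch of 32 samples, 4 features
layer = Layer(input_size=4, output_size=8, activation='relu')

out = layer.forward(X_demo)
print(out.shape)                       # (32, 8)

dA_prev = layer.backward(np.ones_like(out), learning_rate=0.01)
print(dA_prev.shape)                   # (32, 4)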

The Complete Neural Network

Now we can combine multiple layers into a complete neural network:

class NeuralNetwork:
    def __init__(self, layer_sizes, activations=None):
        """
        Initialize a neural network with multiple layers.
        
        Args:
            layer_sizes: List of layer sizes [input, hidden1, ..., output]
            activations: List of activation functions for each layer
        """
        self.layers = []
        
        if activations is None:
            # Default: ReLU for hidden layers, softmax for output
            activations = ['relu'] * (len(layer_sizes) - 2) + ['softmax']
        
        for i in range(len(layer_sizes) - 1):
            layer = Layer(
                layer_sizes[i],
                layer_sizes[i + 1],
                activations[i]
            )
            self.layers.append(layer)
        
        self.loss_history = []
    
    def forward(self, X):
        """
        Forward propagation through all layers.
        
        Args:
            X: Input data of shape (batch_size, input_size)
        
        Returns:
            Network output of shape (batch_size, output_size)
        """
        output = X
        for layer in self.layers:
            output = layer.forward(output)
        return output
    
    def compute_loss(self, y_pred, y_true):
        """
        Compute cross-entropy loss.
        
        Args:
            y_pred: Predicted probabilities
            y_true: True labels (one-hot encoded)
        
        Returns:
            Cross-entropy loss value
        """
        m = y_true.shape[0]
        epsilon = 1e-15  # Prevent log(0)
        
        # Clip predictions to prevent numerical instability
        y_pred = np.clip(y_pred, epsilon, 1 - epsilon)
        
        # Cross-entropy loss
        loss = -np.sum(y_true * np.log(y_pred)) / m
        return loss
    
    def backward(self, y_pred, y_true, learning_rate):
        """
        Backpropagation through all layers.
        
        Args:
            y_pred: Predicted probabilities
            y_true: True labels (one-hot encoded)
            learning_rate: Learning rate for updates
        """
        m = y_true.shape[0]
        
        # Gradient of loss with respect to output
        dA = (y_pred - y_true)  # For softmax + cross-entropy
        
        # Propagate backwards through layers
        for layer in reversed(self.layers):
            dA = layer.backward(dA, learning_rate)
    
    def train(self, X, y, epochs, learning_rate, batch_size=32, verbose=True):
        """
        Train the neural network using mini-batch gradient descent.
        
        Args:
            X: Training data
            y: Training labels (one-hot encoded)
            epochs: Number of training epochs
            learning_rate: Learning rate
            batch_size: Size of mini-batches
            verbose: Print training progress
        """
        n_samples = X.shape[0]
        
        for epoch in range(epochs):
            # Shuffle data at the start of each epoch
            indices = np.random.permutation(n_samples)
            X_shuffled = X[indices]
            y_shuffled = y[indices]
            
            epoch_loss = 0
            n_batches = 0
            
            # Mini-batch training
            for i in range(0, n_samples, batch_size):
                X_batch = X_shuffled[i:i + batch_size]
                y_batch = y_shuffled[i:i + batch_size]
                
                # Forward pass
                y_pred = self.forward(X_batch)
                
                # Compute loss
                loss = self.compute_loss(y_pred, y_batch)
                epoch_loss += loss
                n_batches += 1
                
                # Backward pass
                self.backward(y_pred, y_batch, learning_rate)
            
            # Record average epoch loss
            avg_loss = epoch_loss / n_batches
            self.loss_history.append(avg_loss)
            
            if verbose and (epoch + 1) % 100 == 0:
                print(f"Epoch {epoch + 1}/{epochs}, Loss: {avg_loss:.4f}")
    
    def predict(self, X):
        """
        Make predictions on new data.
        
        Args:
            X: Input data
        
        Returns:
            Predicted class indices
        """
        probabilities = self.forward(X)
        return np.argmax(probabilities, axis=1)
    
    def accuracy(self, X, y):
        """
        Compute classification accuracy.
        
        Args:
            X: Input data
            y: True labels (one-hot encoded)
        
        Returns:
            Accuracy percentage
        """
        predictions = self.predict(X)
        true_labels = np.argmax(y, axis=1)
        return np.mean(predictions == true_labels) * 100

Putting It All Together: A Complete Example

Let's use our neural network to solve a real classification problem:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder

# Load the Iris dataset
iris = load_iris()
X, y = iris.data, iris.target.reshape(-1, 1)

# Preprocess the data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

encoder = OneHotEncoder(sparse_output=False)  # use sparse=False on scikit-learn < 1.2
y_encoded = encoder.fit_transform(y)

# Split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y_encoded, test_size=0.2, random_state=42
)

# Create and train the neural network
network = NeuralNetwork(
    layer_sizes=[4, 16, 8, 3],  # 4 input, 16 hidden, 8 hidden, 3 output
    activations=['relu', 'relu', 'softmax']
)

print("Training Neural Network...")
print("-" * 50)

network.train(
    X_train, y_train,
    epochs=1000,
    learning_rate=0.1,
    batch_size=16,
    verbose=True
)

# Evaluate the model
train_acc = network.accuracy(X_train, y_train)
test_acc = network.accuracy(X_test, y_test)

print("-" * 50)
print(f"Training Accuracy: {train_acc:.2f}%")
print(f"Testing Accuracy: {test_acc:.2f}%")

Expected output (exact values will vary with random initialization):

Training Neural Network...
--------------------------------------------------
Epoch 100/1000, Loss: 0.5234
Epoch 200/1000, Loss: 0.2156
Epoch 300/1000, Loss: 0.1423
Epoch 400/1000, Loss: 0.1012
Epoch 500/1000, Loss: 0.0789
Epoch 600/1000, Loss: 0.0634
Epoch 700/1000, Loss: 0.0523
Epoch 800/1000, Loss: 0.0445
Epoch 900/1000, Loss: 0.0387
Epoch 1000/1000, Loss: 0.0342
--------------------------------------------------
Training Accuracy: 98.33%
Testing Accuracy: 96.67%

Advanced Optimization Techniques

Our basic implementation uses vanilla gradient descent, but modern neural networks employ more sophisticated optimizers. Let's implement the most popular one, Adam:

Adam Optimizer

Adam (Adaptive Moment Estimation) combines the benefits of RMSprop and momentum:

class AdamOptimizer:
    def __init__(self, learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8):
        """
        Initialize Adam optimizer.
        
        Args:
            learning_rate: Initial learning rate
            beta1: Exponential decay rate for first moment
            beta2: Exponential decay rate for second moment
            epsilon: Small constant for numerical stability
        """
        self.learning_rate = learning_rate
        self.beta1 = beta1
        self.beta2 = beta2
        self.epsilon = epsilon
        self.t = 0  # Time step
        self.m = {}  # First moment estimates
        self.v = {}  # Second moment estimates
    
    def update(self, params, grads, param_id):
        """
        Update parameters using Adam.
        
        Args:
            params: Current parameter values
            grads: Gradients
            param_id: Unique identifier for this parameter
        
        Returns:
            Updated parameters
        """
        if param_id not in self.m:
            self.m[param_id] = np.zeros_like(params)
            self.v[param_id] = np.zeros_like(params)
        
        self.t += 1  # Note: if one optimizer instance serves several parameters,
                     # the time step should advance once per optimization step, not per call
        
        # Update biased first moment estimate
        self.m[param_id] = self.beta1 * self.m[param_id] + (1 - self.beta1) * grads
        
        # Update biased second moment estimate
        self.v[param_id] = self.beta2 * self.v[param_id] + (1 - self.beta2) * grads**2
        
        # Compute bias-corrected estimates
        m_hat = self.m[param_id] / (1 - self.beta1**self.t)
        v_hat = self.v[param_id] / (1 - self.beta2**self.t)
        
        # Update parameters
        params -= self.learning_rate * m_hat / (np.sqrt(v_hat) + self.epsilon)
        
        return params
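
Note that this optimizer is not wired into the Layer class above. One way to use it, assuming the plain gradient step at the end of Layer.backward is removed, is to apply Adam to each layer's cached gradients after the backward pass. A rough sketch:

# Sketch: applying Adam to each layer's cached gradients after a backward pass
# (assumes the plain gradient step inside Layer.backward has been removed)
optimizer = AdamOptimizer(learning_rate=0.001)

def adam_step(network):
    """Update every layer's weights and bias using the shared Adam optimizer."""
    for i, layer in enumerate(network.layers):
        layer.weights = optimizer.update(layer.weights, layer.dW, param_id=f"W{i}")
        layer.bias = optimizer.update(layer.bias, layer.db, param_id=f"b{i}")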

Regularization Techniques

To prevent overfitting, we can add regularization:

L2 Regularization (Weight Decay)

class RegularizedLayer(Layer):
    def __init__(self, input_size, output_size, activation='relu', l2_lambda=0.01):
        super().__init__(input_size, output_size, activation)
        self.l2_lambda = l2_lambda
    
    def backward(self, dA, learning_rate):
        """Backward pass with L2 regularization."""
        m = self.input.shape[0]
        
        if self.activation_name == 'softmax':
            dZ = dA
        else:
            dZ = dA * self.activate_derivative(self.z)
        
        # Add L2 regularization term to weight gradient
        self.dW = (1/m) * np.dot(self.input.T, dZ) + (self.l2_lambda * self.weights)
        self.db = (1/m) * np.sum(dZ, axis=0, keepdims=True)
        
        dA_prev = np.dot(dZ, self.weights.T)
        
        self.weights -= learning_rate * self.dW
        self.bias -= learning_rate * self.db
        
        return dA_prev
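
The NeuralNetwork class builds plain Layer objects, so a simple way to enable L2 regularization is to swap in RegularizedLayer instances after construction; a sketch using the same architecture as the Iris example:

# Sketch: replacing the plain layers with L2-regularized ones after construction
network = NeuralNetwork(layer_sizes=[4, 16, 8, 3],
                        activations=['relu', 'relu', 'softmax'])
network.layers = [
    RegularizedLayer(4, 16, activation='relu', l2_lambda=0.01),
    RegularizedLayer(16, 8, activation='relu', l2_lambda=0.01),
    RegularizedLayer(8, 3, activation='softmax', l2_lambda=0.01),
]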

Dropout

class DropoutLayer:
    def __init__(self, dropout_rate=0.5):
        """
        Initialize dropout layer.
        
        Args:
            dropout_rate: Probability of dropping a neuron
        """
        self.dropout_rate = dropout_rate
        self.mask = None
        self.training = True
    
    def forward(self, X):
        """Apply dropout during training."""
        if self.training:
            self.mask = np.random.binomial(1, 1 - self.dropout_rate, X.shape)
            self.mask = self.mask / (1 - self.dropout_rate)  # Inverted dropout
            return X * self.mask
        return X
    
    def backward(self, dA, learning_rate):
        """Backpropagate through dropout."""
        if self.training:
            return dA * self.mask
        return dA
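
Dropout should only be active while training; at inference time the layer passes its input through unchanged. A short usage sketch, where hidden_activations stands in for the output of a previous layer:

dropout = DropoutLayer(dropout_rate=0.3)

hidden_activations = np.random.randn(16, 8)        # placeholder activations

h_train = dropout.forward(hidden_activations)      # ~30% of values zeroed, rest scaled up

dropout.training = False
h_eval = dropout.forward(hidden_activations)       # passes through unchanged
print(np.array_equal(h_eval, hidden_activations))  # True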

Visualizing Training Progress

Understanding how your network learns is crucial. Here's how to visualize the training:

import matplotlib.pyplot as plt

def plot_training_history(network, title="Training Progress"):
    """
    Plot the loss history during training.
    
    Args:
        network: Trained neural network
        title: Plot title
    """
    plt.figure(figsize=(12, 5))
    
    # Loss curve
    plt.subplot(1, 2, 1)
    plt.plot(network.loss_history, 'b-', linewidth=2)
    plt.title('Training Loss Over Time')
    plt.xlabel('Epoch')
    plt.ylabel('Cross-Entropy Loss')
    plt.grid(True, alpha=0.3)
    
    # Loss curve (log scale)
    plt.subplot(1, 2, 2)
    plt.semilogy(network.loss_history, 'r-', linewidth=2)
    plt.title('Training Loss (Log Scale)')
    plt.xlabel('Epoch')
    plt.ylabel('Cross-Entropy Loss')
    plt.grid(True, alpha=0.3)
    
    plt.suptitle(title)
    plt.tight_layout()
    plt.show()

def visualize_decision_boundary(network, X, y, resolution=100):
    """
    Visualize the decision boundary for 2D data.
    
    Args:
        network: Trained neural network
        X: Input features (2D)
        y: Labels
        resolution: Grid resolution
    """
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    
    xx, yy = np.meshgrid(
        np.linspace(x_min, x_max, resolution),
        np.linspace(y_min, y_max, resolution)
    )
    
    grid = np.c_[xx.ravel(), yy.ravel()]
    predictions = network.predict(grid)
    predictions = predictions.reshape(xx.shape)
    
    plt.figure(figsize=(10, 8))
    plt.contourf(xx, yy, predictions, alpha=0.4, cmap='viridis')
    plt.scatter(X[:, 0], X[:, 1], c=np.argmax(y, axis=1), 
                cmap='viridis', edgecolors='black')
    plt.title('Decision Boundary')
    plt.xlabel('Feature 1')
    plt.ylabel('Feature 2')
    plt.colorbar(label='Class')
    plt.show()

Understanding Gradient Flow

One of the most important concepts in neural networks is understanding how gradients flow through the network. Let's create a utility to visualize this:

def analyze_gradients(network):
    """
    Analyze gradient statistics across layers.
    
    Args:
        network: Neural network after backward pass
    """
    print("\nGradient Analysis")
    print("=" * 60)
    
    for i, layer in enumerate(network.layers):
        if layer.dW is not None:
            w_grad_mean = np.mean(np.abs(layer.dW))
            w_grad_max = np.max(np.abs(layer.dW))
            w_grad_std = np.std(layer.dW)
            
            print(f"\nLayer {i + 1}:")
            print(f"  Weight gradient - Mean: {w_grad_mean:.6f}, "
                  f"Max: {w_grad_max:.6f}, Std: {w_grad_std:.6f}")
            
            if w_grad_mean < 1e-7:
                print("  鈿狅笍  Warning: Vanishing gradients detected!")
            elif w_grad_max > 1000:
                print("  鈿狅笍  Warning: Exploding gradients detected!")

Best Practices and Common Pitfalls

1. Weight Initialization

Poor weight initialization can lead to vanishing or exploding gradients:

def initialize_weights(shape, method='he'):
    """
    Initialize weights using various methods.
    
    Args:
        shape: Tuple of (input_size, output_size)
        method: 'he', 'xavier', or 'normal'
    
    Returns:
        Initialized weights
    """
    input_size, output_size = shape
    
    if method == 'he':
        # Best for ReLU activation
        return np.random.randn(input_size, output_size) * np.sqrt(2.0 / input_size)
    
    elif method == 'xavier':
        # Best for sigmoid/tanh activation
        return np.random.randn(input_size, output_size) * np.sqrt(1.0 / input_size)
    
    elif method == 'normal':
        return np.random.randn(input_size, output_size) * 0.01
    
    else:
        raise ValueError(f"Unknown initialization method: {method}")

2. Learning Rate Selection

The learning rate is one of the most important hyperparameters:

class LearningRateScheduler:
    """Various learning rate scheduling strategies."""
    
    @staticmethod
    def step_decay(initial_lr, epoch, drop_rate=0.5, epochs_drop=100):
        """Reduce learning rate by factor every epochs_drop epochs."""
        return initial_lr * (drop_rate ** (epoch // epochs_drop))
    
    @staticmethod
    def exponential_decay(initial_lr, epoch, decay_rate=0.96):
        """Exponentially decay the learning rate."""
        return initial_lr * (decay_rate ** epoch)
    
    @staticmethod
    def cosine_annealing(initial_lr, epoch, total_epochs):
        """Cosine annealing schedule."""
        return initial_lr * (1 + np.cos(np.pi * epoch / total_epochs)) / 2
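
The train method above takes a fixed learning rate, so one simple way to apply a schedule is to train one epoch at a time and recompute the rate before each pass. A sketch of that pattern with illustrative hyperparameters:

# Sketch: applying step decay by training one epoch at a time (illustrative values)
initial_lr = 0.1
total_epochs = 1000

for epoch in range(total_epochs):
    lr = LearningRateScheduler.step_decay(initial_lr, epoch, drop_rate=0.5, epochs_drop=200)
    network.train(X_train, y_train, epochs=1, learning_rate=lr,
                  batch_size=16, verbose=False)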

3. Batch Normalization

Normalize activations to stabilize training:

class BatchNormLayer:
    def __init__(self, n_features, epsilon=1e-8, momentum=0.9):
        """
        Initialize batch normalization layer.
        
        Args:
            n_features: Number of features
            epsilon: Small constant for numerical stability
            momentum: Momentum for running statistics
        """
        self.epsilon = epsilon
        self.momentum = momentum
        
        # Learnable parameters
        self.gamma = np.ones((1, n_features))
        self.beta = np.zeros((1, n_features))
        
        # Running statistics for inference
        self.running_mean = np.zeros((1, n_features))
        self.running_var = np.ones((1, n_features))
        
        # Cache for backpropagation
        self.cache = None
        self.training = True
    
    def forward(self, X):
        """Apply batch normalization."""
        if self.training:
            mean = np.mean(X, axis=0, keepdims=True)
            var = np.var(X, axis=0, keepdims=True)
            
            # Update running statistics
            self.running_mean = (self.momentum * self.running_mean + 
                                (1 - self.momentum) * mean)
            self.running_var = (self.momentum * self.running_var + 
                               (1 - self.momentum) * var)
            
            # Normalize
            X_norm = (X - mean) / np.sqrt(var + self.epsilon)
            
            # Scale and shift
            out = self.gamma * X_norm + self.beta
            
            # Cache for backprop
            self.cache = (X, X_norm, mean, var)
        else:
            # Use running statistics during inference
            X_norm = (X - self.running_mean) / np.sqrt(self.running_var + self.epsilon)
            out = self.gamma * X_norm + self.beta
        
        return out
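
The class above only implements the forward pass. For completeness, here is a minimal sketch of a matching backward method, derived from the values cached during training (this method is an addition, not part of the original class):

    def backward(self, dout, learning_rate):
        """Sketch of the batch-norm backward pass using the values cached in forward."""
        X, X_norm, mean, var = self.cache
        m = X.shape[0]
        std_inv = 1.0 / np.sqrt(var + self.epsilon)
        
        # Gradients for the learnable scale and shift
        dgamma = np.sum(dout * X_norm, axis=0, keepdims=True)
        dbeta = np.sum(dout, axis=0, keepdims=True)
        
        # Gradient with respect to the layer input
        dX_norm = dout * self.gamma
        dvar = np.sum(dX_norm * (X - mean) * -0.5 * std_inv**3, axis=0, keepdims=True)
        dmean = (np.sum(-dX_norm * std_inv, axis=0, keepdims=True)
                 + dvar * np.mean(-2.0 * (X - mean), axis=0, keepdims=True))
        dX = dX_norm * std_inv + dvar * 2.0 * (X - mean) / m + dmean / m
        
        # Update the learnable parameters
        self.gamma -= learning_rate * dgamma
        self.beta -= learning_rate * dbeta
        
        return dX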

Conclusion

Congratulations! You've successfully built a neural network from scratch. This implementation covers:

  • Forward Propagation: Computing outputs through layers
  • Backpropagation: Computing gradients for learning
  • Activation Functions: Adding non-linearity
  • Mini-batch Training: Efficient learning with batches
  • Regularization: Preventing overfitting
  • Optimization: Advanced training techniques

Understanding these fundamentals will make you a better deep learning practitioner, even when using high-level frameworks.

Next Steps

  1. Experiment: Modify the architecture, try different activation functions
  2. Extend: Add convolutional layers for image processing
  3. Optimize: Implement GPU acceleration with CuPy
  4. Apply: Use your network on real-world datasets

The code in this tutorial provides a solid foundation for understanding modern deep learning. While production systems use optimized libraries, knowing how everything works under the hood is invaluable.


This tutorial is part of the PlayHve AI & Machine Learning series. Subscribe to our newsletter for more in-depth technical content.
