Visualizing Convolutions

I’m currently learning about Convolutional Neural Nets from deeplearning.ai, and boy are they really powerful. Some of them even have cool names like Inception Network and make use of algorithms like You Only Look Once (YOLO). That is hilarious and awesome. This notebook/post is an exercise in trying to visualize the outputs of the various layers in a CNN. Let’s get to it.

Setup

```python
import numpy as np
import pandas as pd
from math import ceil
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.model_selection import train_test_split
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D
from keras.models import Model
from keras.datasets import fashion_mnist
from keras.optimizers import Adam
from keras.models import load_model
```

To begin with, I’ll use the Fashion-MNIST dataset. ...
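Before the excerpt cuts off, it may help to see what a single convolutional layer actually computes per filter. Here is a minimal NumPy sketch (not the post's Keras code) of a valid-mode 2D cross-correlation — the per-filter operation behind a `Conv2D` layer — applied to a 28×28 image, the Fashion-MNIST size. The image and the edge-detecting kernel are made up for illustration.

```python
import numpy as np

def conv2d_single(image, kernel):
    """Valid-mode 2D cross-correlation of one channel with one kernel,
    i.e. what a Conv2D layer computes for a single filter (no padding, stride 1)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 28x28 "image" whose right half is bright, plus a vertical-edge kernel.
img = np.zeros((28, 28))
img[:, 14:] = 1.0
edge_kernel = np.array([[-1., 0., 1.],
                        [-1., 0., 1.],
                        [-1., 0., 1.]])

fmap = conv2d_single(img, edge_kernel)
print(fmap.shape)  # (26, 26) — a 3x3 valid convolution shrinks each side by 2
print(fmap.max())  # 3.0 — the strongest response sits on the vertical edge
```

Plotting `fmap` with `plt.imshow` is exactly the kind of visualization the post builds up to, with real learned filters instead of a hand-written one.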

September 24, 2018

Summary Notes: Forward and Back Propagation

I recently completed the first course offered by deeplearning.ai, and found it incredibly educational. Going forward, I want to keep a summary of the things I learn (for my future reference) in the form of notes like this. This one covers forward- and back-propagation intuitions.

Setup

A neural network with $L$ layers. Notation:

- $l$ ranges from $0$ to $L$. Zero corresponds to the input activations and $L$ corresponds to the predictions.
- Activation of layer $l$: $a^{[l]}$
- Training examples are represented as column vectors, so $X$ has shape $(n^{[0]}, m)$, where $n^{[0]}$ is the number of input features and $m$ is the number of training examples.
- Weights for layer $l$ have shape $(n^{[l]}, n^{[l-1]})$.
- Biases for layer $l$ have shape $(n^{[l]}, 1)$.

Forward Propagation Intuition (for batch gradient descent)

Forward prop simply takes in the activations $a^{[l-1]}$ from the previous layer, computes the linear and non-linear activations based on layer $l$'s weights and biases, and propagates them to the next layer. ...
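The shapes above can be checked with a minimal NumPy sketch of one forward-prop step, $z^{[l]} = W^{[l]} a^{[l-1]} + b^{[l]}$ followed by a non-linearity. The tanh activation and all the dimensions here are illustrative assumptions, not from the notes.

```python
import numpy as np

def forward_layer(a_prev, W, b, activation=np.tanh):
    """One forward-prop step: linear part z = W @ a_prev + b, then the non-linearity.
    b has shape (n_l, 1) and broadcasts across the m example columns."""
    z = W @ a_prev + b
    return activation(z)

rng = np.random.default_rng(0)
m, n0, n1 = 4, 3, 2                      # 4 examples, 3 input features, 2 units in layer 1

X = rng.standard_normal((n0, m))         # a^[0]: examples as columns, shape (n^[0], m)
W1 = rng.standard_normal((n1, n0))       # shape (n^[1], n^[0])
b1 = np.zeros((n1, 1))                   # shape (n^[1], 1)

A1 = forward_layer(X, W1, b1)
print(A1.shape)  # (2, 4): one activation column per training example
```

Stacking such calls for $l = 1, \dots, L$ is the whole forward pass for batch gradient descent.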

September 15, 2018