The goal of The LSTM Reference Card is to demonstrate how an LSTM forward pass works using just vanilla Python and NumPy. Studying these simple functions alongside the diagram above will build a strong intuition for how and why LSTM networks work. This exercise does not cover backpropagation; the focus is on understanding how the cell uses prior events to make predictions.
To demonstrate how an LSTM makes predictions, below is a small LSTM network. We'll let PyTorch randomly initialize the weights, but they could really be initialized any way - the point is simply to ensure that the PyTorch LSTM and our NumPy LSTM start from the same weights so that their outputs can be compared.
Since our desired output size does not typically match the size of the hidden state, we'll also add a fully connected layer to the network that receives the LSTM output and returns our desired output size.
Next, initialize an LSTM model in PyTorch (including that final fully connected layer) and examine the state dictionary to see the weights it initialized:
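A minimal sketch of that setup, assuming PyTorch is installed; the layer sizes here (1 input feature, hidden state of 4, 1 output value) are illustrative, not taken from the original:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # fix the random initialization so runs are reproducible

# Illustrative sizes: 1 input feature, hidden state of 4, 1 output value
input_size, hidden_size, output_size = 1, 4, 1

lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size, batch_first=True)
fc = nn.Linear(hidden_size, output_size)  # maps the hidden state to the desired output size

# The LSTM's state dict holds four tensors; the linear layer adds a weight and a bias
for name, param in list(lstm.state_dict().items()) + list(fc.state_dict().items()):
    print(name, tuple(param.shape))
```

The printed shapes show PyTorch's convention: `weight_ih_l0` is `(4 * hidden_size, input_size)` and `weight_hh_l0` is `(4 * hidden_size, hidden_size)`, because the four gates are stacked row-wise in a single tensor.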
Don't get overwhelmed! The PyTorch documentation explains all we need to break this down:
We can therefore extract the weights for the NumPy LSTM to use in this way:
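One way to sketch that extraction, assuming the illustrative sizes above: PyTorch stacks the four gates row-wise in the order input (i), forget (f), cell/candidate (g), output (o), so each state-dict tensor splits into four equal chunks.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden_size = 4
lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
sd = lstm.state_dict()

def split_gates(tensor):
    # Gate rows are stacked in the order i, f, g, o
    return np.split(tensor.detach().numpy(), 4, axis=0)

W_ii, W_if, W_ig, W_io = split_gates(sd["weight_ih_l0"])  # input->gate weights
W_hi, W_hf, W_hg, W_ho = split_gates(sd["weight_hh_l0"])  # hidden->gate weights
b_ii, b_if, b_ig, b_io = split_gates(sd["bias_ih_l0"])
b_hi, b_hf, b_hg, b_ho = split_gates(sd["bias_hh_l0"])

print(W_ii.shape, W_hi.shape, b_ii.shape)  # (4, 1) (4, 4) (4,)
```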
Now we have two networks - one in PyTorch, one in NumPy - with access to the same starting weights. We'll put some time series data through each to confirm they behave identically. To do a forward pass with our network, we'll pass the data through the LSTM gates in sequence and print the output after each event:
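The step-through can be sketched like this in plain NumPy. The weights here are random placeholders (in the article they come from the PyTorch state dict), and the gate equations follow PyTorch's i, f, g, o ordering:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_forward(x_t, h_prev, c_prev, W_ih, W_hh, b_ih, b_hh):
    """One LSTM step, matching PyTorch's gate equations and i, f, g, o ordering."""
    gates = W_ih @ x_t + b_ih + W_hh @ h_prev + b_hh
    i, f, g, o = np.split(gates, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c_t = f * c_prev + i * g   # new cell state: forget some memory, add some new
    h_t = o * np.tanh(c_t)     # new hidden state, the cell's output
    return h_t, c_t

# Illustrative sizes and random weights (placeholders for the PyTorch weights)
rng = np.random.default_rng(0)
input_size, hidden_size = 1, 4
W_ih = rng.standard_normal((4 * hidden_size, input_size))
W_hh = rng.standard_normal((4 * hidden_size, hidden_size))
b_ih = rng.standard_normal(4 * hidden_size)
b_hh = rng.standard_normal(4 * hidden_size)

# Feed a short time series through the cell, one event at a time
h, c = np.zeros(hidden_size), np.zeros(hidden_size)
for x_t in [np.array([0.1]), np.array([0.2]), np.array([0.3])]:
    h, c = lstm_cell_forward(x_t, h, c, W_ih, W_hh, b_ih, b_hh)
    print(h)  # hidden state after each event
```

Note that the hidden and cell states carry forward between events; that recurrence is what lets the cell use prior events to shape the next prediction.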
Putting the same data through the PyTorch model shows that it returns identical output:
We can additionally verify that after the data has gone through the LSTM cells, the two models have the same hidden and cell states:
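The comparison in the two steps above can be sketched end to end, assuming PyTorch is installed; the sequence values and sizes are illustrative:

```python
import numpy as np
import torch
import torch.nn as nn

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

torch.manual_seed(0)
hidden_size = 4
lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
sd = {k: v.detach().numpy() for k, v in lstm.state_dict().items()}

# NumPy forward pass over a short sequence, using the PyTorch weights
x = np.array([[0.1], [0.2], [0.3]])
h, c = np.zeros(hidden_size), np.zeros(hidden_size)
for x_t in x:
    gates = (sd["weight_ih_l0"] @ x_t + sd["bias_ih_l0"]
             + sd["weight_hh_l0"] @ h + sd["bias_hh_l0"])
    i, f, g, o = np.split(gates, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)

# Same sequence through PyTorch: shape (batch=1, seq_len=3, features=1)
out, (h_n, c_n) = lstm(torch.tensor(x, dtype=torch.float32).unsqueeze(0))

# Final hidden and cell states agree to floating-point precision
assert np.allclose(h, h_n.detach().numpy().ravel(), atol=1e-5)
assert np.allclose(c, c_n.detach().numpy().ravel(), atol=1e-5)
print("hidden and cell states match")
```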
I hope this helps build an intuition for how LSTM networks make predictions. Below is the full example code:
Issues or Questions? Please reach out! I'd love to hear from you.
Hungry to keep the fun going with backpropagation? Check out Backpropogating an LSTM: A Numerical Example.