# Recurrent neural network – time-series data – part 1


If you are human and curious about your future, then the recurrent neural network (RNN) is definitely a tool to consider. Part 1 will demonstrate some simple RNNs using TensorFlow 2.0 and Keras functional API.

## What is an RNN

An RNN is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence (time series). This allows it to exhibit temporal dynamic behaviour.

RNNs come in many variants, such as fully recurrent networks, Elman and Jordan networks, long short-term memory (LSTM), bi-directional networks, etc. Nevertheless, the basic idea of an RNN is to memorize patterns from the past using recurrent cells in order to predict the future.

## The time-series data

In this demo, we first generate a time series of data using a sine function. We then feed the data into an RNN model for training and obtain some predictions.

First, we enable TensorFlow 2.0 and import some libraries.
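A minimal import block for this setup might look like the following (in a Colab 1.x-era runtime you would additionally run `%tensorflow_version 2.x`; that detail is an assumption about the original environment):

```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras

# TensorFlow 2.x has eager execution and Keras built in,
# so no extra flags are needed.
print(tf.__version__)
```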

The data follows the simple equation:

$$x(t) = 0.5 \times \sin(f(t))$$

$$f(t) = (t_i - o_i)\times(10 f_i + 10)$$ where $$t_i, o_i, f_i \in [0, 1)$$

The following function generates time-series data following the above equations.
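A sketch of such a generator (the function name `generate_time_series` and the batch layout are my assumptions; the formula follows the two equations above):

```python
import numpy as np

def generate_time_series(batch_size, n_steps):
    """Return `batch_size` series of `n_steps` points each, following
    x(t) = 0.5 * sin((t - o) * (10 * f + 10)) with random o, f in [0, 1)."""
    freq, offset = np.random.rand(2, batch_size, 1)  # f_i and o_i, each in [0, 1)
    t = np.linspace(0, 1, n_steps)                   # t_i in [0, 1]
    series = 0.5 * np.sin((t - offset) * (10 * freq + 10))
    return series[..., np.newaxis].astype(np.float32)  # shape (batch, n_steps, 1)
```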

We want to generate train, validation, and test sets in which each window of n_steps data points is paired with the next value in the series (one step ahead, at t+1; see Figure 2).
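One way to build those sets, assuming n_steps = 50 and a 70/20/10 split (the split sizes are my choice): generate series of length n_steps + 1 and use the last point of each as the target.

```python
import numpy as np

n_steps = 50
# Generate 10,000 series of n_steps + 1 points each, following the equations
# above (regenerated inline here so the snippet is self-contained).
freq, offset = np.random.rand(2, 10000, 1)
t = np.linspace(0, 1, n_steps + 1)
series = (0.5 * np.sin((t - offset) * (10 * freq + 10)))[..., np.newaxis].astype(np.float32)

# The first 50 points are the input window; the 51st point is the target (t+1).
x_train, y_train = series[:7000, :n_steps], series[:7000, -1]
x_valid, y_valid = series[7000:9000, :n_steps], series[7000:9000, -1]
x_test, y_test = series[9000:, :n_steps], series[9000:, -1]
```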

The shapes of x_train and y_train confirm this layout.

Fig 2: For every 50 points (.) from 0 to 50, we have one point (X) at (t+1).

## Prediction with the simplest RNN

With Keras, things can be pretty straightforward. We will first try a shallow RNN with only one layer.
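A sketch of that shallow model with the functional API, using a single `SimpleRNN` cell as both the recurrent layer and the output:

```python
from tensorflow import keras

# Input: sequences of any length, one feature per time step.
inputs = keras.Input(shape=[None, 1])
# One SimpleRNN layer with a single unit; its final state is the prediction.
outputs = keras.layers.SimpleRNN(1)(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
model.summary()
```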

That is it. We then compile the model using the Adam optimizer.
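Compilation is one line; mean squared error is the natural loss for this regression task (the learning rate here is Keras's default):

```python
from tensorflow import keras

inputs = keras.Input(shape=[None, 1])
model = keras.Model(inputs, keras.layers.SimpleRNN(1)(inputs))

# Adam optimizer with MSE loss for one-step-ahead regression.
model.compile(loss="mse", optimizer="adam")
```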

After that, we train the model and log the mean squared error at each epoch.
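The training call might look like this (random arrays stand in for the generated series, just to show the shapes; the epoch count is my choice):

```python
import numpy as np
from tensorflow import keras

inputs = keras.Input(shape=[None, 1])
model = keras.Model(inputs, keras.layers.SimpleRNN(1)(inputs))
model.compile(loss="mse", optimizer="adam")

# Stand-in data with the same shapes as the generated series.
x_train = np.random.rand(200, 50, 1).astype(np.float32)
y_train = np.random.rand(200, 1).astype(np.float32)
x_valid = np.random.rand(50, 50, 1).astype(np.float32)
y_valid = np.random.rand(50, 1).astype(np.float32)

# Keras records the loss (MSE) of every epoch in `history.history`.
history = model.fit(x_train, y_train, epochs=2,
                    validation_data=(x_valid, y_valid), verbose=0)
```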

Remember that if you don’t pass validation data, you won’t have val_loss in the history. Now we can plot the training loss using the following function:
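A sketch of such a plotting helper (the name `plot_learning_curves` is my own):

```python
import matplotlib.pyplot as plt

def plot_learning_curves(history):
    """Plot training (and, if present, validation) loss per epoch."""
    plt.plot(history.history["loss"], label="train loss")
    if "val_loss" in history.history:  # only present when validation data was passed
        plt.plot(history.history["val_loss"], label="val loss")
    plt.xlabel("epoch")
    plt.ylabel("MSE")
    plt.legend()
    plt.show()
```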

Predict on validation data and plot the result (the last point on the series):
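Prediction and a quick plot might look like this (again with stand-in data shaped like the validation set, and an untrained model, so only the call pattern matters):

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

inputs = keras.Input(shape=[None, 1])
model = keras.Model(inputs, keras.layers.SimpleRNN(1)(inputs))

x_valid = np.random.rand(20, 50, 1).astype(np.float32)  # stand-in validation data
y_pred = model.predict(x_valid, verbose=0)              # shape (20, 1)

# Plot the first series (.) and its predicted next point (x) at t+1.
plt.plot(range(50), x_valid[0, :, 0], ".", label="series")
plt.plot(50, y_pred[0, 0], "x", label="prediction")
plt.legend()
plt.show()
```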

This is the fun part: predicting some more steps into the future (so you don’t have to go to a fortune teller :D). First, we generate new data.

Next, we will predict 30 steps ahead ($$t_i \to t_{i+30}$$) one by one.
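A sketch of the one-by-one loop: each predicted point is appended to the series, and the model is called again on the latest window (stand-in data and an untrained model, so the values themselves are meaningless; the loop structure is the point):

```python
import numpy as np
from tensorflow import keras

inputs = keras.Input(shape=[None, 1])
model = keras.Model(inputs, keras.layers.SimpleRNN(1)(inputs))

n_steps, ahead = 50, 30
x = np.random.rand(1, n_steps, 1).astype(np.float32)  # stand-in for the new series

for step in range(ahead):
    # Predict t+1 from the last n_steps points, then append it to the series.
    y_next = model.predict(x[:, -n_steps:], verbose=0)       # shape (1, 1)
    x = np.concatenate([x, y_next[:, np.newaxis, :]], axis=1)

y_future = x[:, n_steps:, 0]  # the 30 predicted points
```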

As shown in Figure 3, the accuracy is not great with one-by-one prediction. We can modify the model so that it predicts several steps at once. The new model outputs 10 steps ahead at once:
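One way to build the multi-step model: a wider recurrent layer followed by a `Dense(10)` head that emits all 10 future values in one shot (the 20-unit layer size is my choice):

```python
from tensorflow import keras

inputs = keras.Input(shape=[None, 1])
x = keras.layers.SimpleRNN(20)(inputs)   # final state summarizes the input window
outputs = keras.layers.Dense(10)(x)      # 10 steps ahead, predicted at once
model = keras.Model(inputs, outputs)
model.compile(loss="mse", optimizer="adam")
```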

We then see a better result with this model.

## Note on replacing OutputProjectionWrapper from TF1.x

If you work with TensorFlow 1.x, there is a good chance that you use OutputProjectionWrapper in your RNN model.

Unfortunately, this class was deprecated in TF2.0 and was not included in tf.compat.v1. A dense layer is recommended instead, based on this post. Therefore, I include another example for those who want to convert their models from TF1.x to TF2.x. We simply stack another dense layer at the end (similar to the previous example).
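A sketch of the TF2 replacement: where `OutputProjectionWrapper` applied a linear projection to every cell output, in Keras we return the full sequence and stack a `Dense(1)` layer on top (the 20-unit layer size is my choice):

```python
from tensorflow import keras

inputs = keras.Input(shape=[None, 1])
# return_sequences=True keeps one output per time step, as the wrapper did.
x = keras.layers.SimpleRNN(20, return_sequences=True)(inputs)
# A Dense layer on a 3-D tensor is applied independently at each time step,
# replacing the per-step linear projection of OutputProjectionWrapper.
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
```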

## Conclusion

Working with recurrent neural networks is fun. Nevertheless, the current setting did not fully exploit the power of the RNN. In the next part, we will create more complicated data and apply deeper RNN models with different cell types, such as LSTM or GRU.