[login to view URL] is my first attempt at implementing an autoencoder, but it has turned out to be harder than I expected.
I built an RNN (+ LSTM) model with 80 input features for a medium-frequency trading model, but it is unclear whether I need all of them to train it. Training the main model requires iterating over more than 10 million time intervals, and with 80 input features there is a real risk of overfitting in that kind of setup.
I heard that the best option for me is an autoencoder. Since the model has to handle a large amount of sequential data, is it possible to use an LSTM autoencoder? If so, how can the code above be improved with this technique?
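For reference, here is a minimal sketch of the kind of LSTM autoencoder I have in mind, written with Keras. The window length (`TIMESTEPS = 30`) and latent size (`LATENT_DIM = 16`) are placeholder values, not requirements, and the random data is only there to show the expected tensor shapes; the real model would be fit on the 80-feature time series:

```python
import numpy as np
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.models import Model

TIMESTEPS = 30    # placeholder window length
N_FEATURES = 80   # the 80 input features
LATENT_DIM = 16   # placeholder compressed size

# Encoder: compress each (TIMESTEPS, N_FEATURES) window into one latent vector.
inputs = Input(shape=(TIMESTEPS, N_FEATURES))
encoded = LSTM(LATENT_DIM)(inputs)

# Decoder: repeat the latent vector across time and reconstruct the window.
decoded = RepeatVector(TIMESTEPS)(encoded)
decoded = LSTM(N_FEATURES, return_sequences=True)(decoded)
decoded = TimeDistributed(Dense(N_FEATURES))(decoded)

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Standalone encoder, used after training to feed compressed
# features into the main trading model.
encoder = Model(inputs, encoded)

# Demo on random data just to check the shapes.
X = np.random.rand(64, TIMESTEPS, N_FEATURES).astype("float32")
autoencoder.fit(X, X, epochs=1, batch_size=32, verbose=0)
Z = encoder.predict(X, verbose=0)
print(Z.shape)  # each 30x80 window is compressed to a 16-dim vector
```

After training, `encoder.predict` would replace the raw 80 features as input to the main model, reducing its dimensionality.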
The task is to adapt this autoencoder so that it can handle sequential data (an LSTM autoencoder is needed here). I am offering $40 CAD for this task, since I judge it to be trivial for someone experienced with autoencoders.