I had 3 epiphanies about algorithmic trading…
The first one came yesterday morning: every tutorial on trading strategy optimization is exactly the same. The author introduces the idea of “optimization”, shows some Python code, and ends the article with simulated backtest results. For somebody interested in algorithmic trading, these articles tend to leave a lot to be desired.
The second one came to me last night. I’ve never seen anybody optimize a trading strategy the way one might optimize a neural network. Even though a single-layer neural network is a universal function approximator, nobody actually uses single-layer networks. Our best networks have trillions of parameters and are trained for months across multiple supercomputers. Most trading strategy optimization happens at a far smaller scale.
The third epiphany came to me this morning: genetic algorithms (GAs) can act as universal function approximators. The simplest way I can explain it is with the example of optimizing a neural network with a GA. If the loss function is used as the objective function, then given enough computational power (and a high enough mutation rate), a neural network optimized with a GA should theoretically arrive at solutions similar to those found by gradient descent. We can see evidence for this view in the lottery ticket hypothesis.
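To make the idea concrete, here is a minimal, illustrative sketch of optimizing a tiny one-hidden-layer network with a GA: the loss serves as the fitness function, and selection plus mutation stand in for gradient descent. All hyperparameters (population size, mutation rate, network width) are arbitrary choices for the toy example, not a recommendation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: approximate y = sin(x).
X = np.linspace(-3, 3, 64).reshape(-1, 1)
y = np.sin(X)

HIDDEN = 16
N_PARAMS = 3 * HIDDEN + 1  # w1 (16) + b1 (16) + w2 (16) + b2 (1)

def forward(params, x):
    # Unpack a flat parameter vector into a one-hidden-layer network.
    w1 = params[:HIDDEN].reshape(1, HIDDEN)
    b1 = params[HIDDEN:2 * HIDDEN]
    w2 = params[2 * HIDDEN:3 * HIDDEN].reshape(HIDDEN, 1)
    b2 = params[3 * HIDDEN]
    return np.tanh(x @ w1 + b1) @ w2 + b2

def loss(params):
    # Mean squared error, used directly as the GA's objective.
    return np.mean((forward(params, X) - y) ** 2)

pop = rng.normal(0.0, 1.0, size=(200, N_PARAMS))

for gen in range(300):
    fitness = np.array([loss(p) for p in pop])
    elite = pop[np.argsort(fitness)[:20]]             # selection: keep the 20 best
    parents = elite[rng.integers(0, 20, size=200)]    # resample parents
    pop = parents + rng.normal(0.0, 0.1, size=parents.shape)  # mutation
    pop[:20] = elite                                  # elitism: never lose the best

best = pop[np.argmin([loss(p) for p in pop])]
print(f"final MSE: {loss(best):.4f}")
```

With enough generations the population's best member fits the curve well, without ever computing a gradient; in this sense the GA is doing the same job as backpropagation, only by search rather than calculus.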
With these 3 epiphanies, I posed a question to myself: if we trained a genetic algorithm the way we train neural networks, with more parameters and more computational power, could it arrive at better solutions?
This article will explore this question through an experiment in which I deploy such a strategy. First, I will define the rules of an automated investing strategy. Then, I will optimize it the way one might optimize a neural network, but without gradient descent. Finally, I’ll deploy the strategy for real-time paper trading and post updates as I measure its live performance.