Experiments with GANs for Simulating Returns (Guest post)

By Akshay Nautiyal, Quantinsti

  • Empirical distributions of asset returns show sharp peaks and fat tails that traditional models often fail to capture.
  • The discriminator D is trained to tell real data from generated samples, i.e. to maximize log(D(x)) + log(1 − D(G(z))).
  • log(D(G(z))), maximized by the generator: as observed empirically while training GANs, at the beginning of training G is an extremely poor "truth" generator while D quickly becomes good at identifying real data, so D(G(z)) stays close to 0 and the term log(1 − D(G(z))) saturates. In principle, the job of G is to minimize log(1 − D(G(z))), meaning G creates realistic data that D isn't able to "call out" as fake. But because log(1 − D(G(z))) saturates early in training and gives G almost no gradient, we instead train G to maximize log(D(G(z))) rather than minimize log(1 − D(G(z))).
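The saturation argument above can be sketched numerically. The snippet below (an illustration assumed for this post, not code from it) compares the gradients of the two generator objectives with respect to D(G(z)): when D easily rejects fakes, D(G(z)) is near 0, the saturating loss gives a weak signal, and the non-saturating loss gives a strong one.

```python
import numpy as np

# Early in training D easily rejects fakes, so D(G(z)) is close to 0.
d_gz = np.array([0.001, 0.01, 0.1, 0.5])  # discriminator output on fakes

# Saturating objective: G minimizes log(1 - D(G(z))).
# Gradient w.r.t. D(G(z)) is -1 / (1 - D(G(z))): nearly constant (about -1)
# when D(G(z)) is near 0, so the learning signal for G is weak.
grad_saturating = -1.0 / (1.0 - d_gz)

# Non-saturating objective: G maximizes log(D(G(z))).
# Gradient w.r.t. D(G(z)) is 1 / D(G(z)): very large when D(G(z)) is near 0,
# giving G a strong signal exactly when it performs worst.
grad_nonsaturating = 1.0 / d_gz

for d, gs, gn in zip(d_gz, grad_saturating, grad_nonsaturating):
    print(f"D(G(z))={d:.3f}  saturating grad={gs:8.3f}  non-saturating grad={gn:10.1f}")
```

At D(G(z)) = 0.001 the saturating gradient is about −1.001 while the non-saturating gradient is 1000, which is why the non-saturating trick is used in practice.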
Fig 1. Returns generated by a simple feed-forward GAN.
Fig 2. Empirical distribution of AAPL returns from the 1980s to the present.
Fig 3. Returns generated by geometric Brownian motion (GBM) calibrated to AAPL.
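The GBM baseline behind Fig 3 can be sketched as follows; the drift and volatility values here are placeholder assumptions, not calibrated to AAPL. The sketch also checks the excess kurtosis of the simulated log-returns: GBM draws them i.i.d. Gaussian, so the simulated distribution lacks the sharp peak that the empirical distribution shows.

```python
import numpy as np

# Minimal GBM sketch (illustrative parameters, not fitted to AAPL):
# S_{t+1} = S_t * exp((mu - 0.5*sigma^2)*dt + sigma*sqrt(dt)*Z)
rng = np.random.default_rng(0)
mu, sigma = 0.08, 0.25       # assumed annual drift and volatility
dt, n_steps = 1 / 252, 2520  # daily steps over roughly 10 years
s0 = 100.0

z = rng.standard_normal(n_steps)
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
prices = s0 * np.exp(np.cumsum(log_returns))

# GBM log-returns are Gaussian, so their excess kurtosis is near 0:
# no sharp peak, no fat tails -- the gap the GAN is meant to close.
ex_kurt = ((log_returns - log_returns.mean())**4).mean() / log_returns.var()**2 - 3
print(f"excess kurtosis of simulated GBM returns: {ex_kurt:.3f}")
```

Real daily equity returns typically show strongly positive excess kurtosis, so comparing this statistic between simulated and empirical returns is a quick sanity check on any generator.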
