RCR Draft

Algorithmic trading uses computers programmed to follow trading strategies based on technical indicators to make profits faster than humans. Backtesting strategies on historical data can identify weaknesses but often leads to overfitting, where strategies only work on the original data and not new data. Walk-forward testing helps address overfitting by testing strategies on separate past and new data sets over time.

Final Version:

Algorithmic trading (or simply algo-trading) is the process of using computers programmed to follow a defined set of instructions, or a strategy, for placing a trade. The goal of these strategies is to earn profits by leveraging market opportunities more quickly and more frequently than a human trader could. The defined sets of rules are based on technical indicators, such as trend, momentum, or volatility measures, or on any other mathematical model. Beyond generating profits for the trader, algo-trading also allows for a more systematic approach to trading by eliminating the emotional decisions humans sometimes make when trading (Peters 2017).

Nonetheless, for these trading algorithms to perform effectively, their parameters must be fine-tuned. The process of finding the optimal values of the parameters that govern how the algorithm behaves is called parameter optimization. Traders can test their trading strategies on historical data to see how they would have performed in the past (Patel 2012). This can help traders identify weaknesses in their strategies and make improvements. By adjusting parameters such as the unit return modulus and the observation range, algo-traders can manage and control their trading systems more effectively (Piasecki et al. 2020).
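To make this concrete, the following is a minimal sketch of parameter optimization as a grid search over a historical price series. The moving-average crossover strategy, the parameter ranges, and the function names are illustrative assumptions for this sketch, not the specific parameters (such as the unit return modulus) discussed by Piasecki et al.

```python
import itertools

import pandas as pd


def crossover_returns(prices: pd.Series, fast: int, slow: int) -> float:
    """Total return of a long-only moving-average crossover on a historical price series."""
    fast_ma = prices.rolling(fast).mean()
    slow_ma = prices.rolling(slow).mean()
    # Hold the position on the bar *after* the signal appears to avoid look-ahead bias.
    position = (fast_ma > slow_ma).astype(float).shift(1).fillna(0.0)
    daily_returns = prices.pct_change().fillna(0.0)
    return float((daily_returns * position).sum())


def optimize_parameters(prices: pd.Series) -> tuple:
    """Exhaustively score parameter pairs and keep the best-performing one on the given data."""
    best = None
    for fast, slow in itertools.product(range(5, 30, 5), range(30, 120, 10)):
        score = crossover_returns(prices, fast, slow)
        if best is None or score > best[2]:
            best = (fast, slow, score)
    return best  # (fast, slow, score) of the "optimal" variant
```

In practice, such a search would be run only on a designated in-sample window, for the reasons discussed below.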

According to Bailey et al., the design of a trading strategy usually begins with a prior or belief that a certain pattern may help forecast the future value of a financial variable. For example, a researcher could design a strategy that bets on prices reverting to their usual level after bond prices shift and fall in a specific way. Different models can be used to express such a belief, such as cointegration equations or systems of stochastic differential equations, to name a few. There are countless ways a model can be specified, and the researcher will want to pick the one that gives the best result. Practitioners therefore often rely on historical simulations (also called backtests or backtesting) to discover the optimal specification of a trading strategy.
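As a hedged illustration of such a prior, the rule below buys when the price falls a fixed fraction below its recent average and sells when it rises the same fraction above it. The window and threshold values are hypothetical placeholders, not a specification taken from Bailey et al.

```python
import pandas as pd


def mean_reversion_signal(prices: pd.Series, window: int = 20, threshold: float = 0.02) -> pd.Series:
    """+1 (buy) when price sits `threshold` below its rolling mean, -1 (sell) when above, 0 otherwise."""
    rolling_mean = prices.rolling(window).mean()
    deviation = (prices - rolling_mean) / rolling_mean
    signal = pd.Series(0, index=prices.index)
    signal[deviation < -threshold] = 1   # price unusually low: bet on a move back up
    signal[deviation > threshold] = -1   # price unusually high: bet on a move back down
    return signal
```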

In the field of mathematical finance, a “backtest” is the use of historical market data to assess and
improve the performance of a trading strategy. It is a relatively simple matter for a present-day computer system to
explore thousands, millions, or even billions of variations of a proposed trading strategy, and pick the
best-performing variant as the “optimal” strategy in-sample (IS, i.e., on the input dataset). Unfortunately, such an
“optimal” strategy often performs very poorly out-of-sample (OOS, i.e., on another dataset), because the
parameters of the trading strategy have been overfit to the in-sample data, a situation known as “backtest
overfitting” (Bailey et al. 2014). Note that the OOS set is not used in the design of the trading strategy; thus, a
strategy is only realistic when the IS performance is consistent with the OOS performance (Bailey et al. 2015).
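A minimal sketch of this IS/OOS workflow, reusing the hypothetical helpers from the earlier grid-search example, might look as follows; the 70/30 split is an illustrative assumption.

```python
def backtest_is_oos(prices: pd.Series, split: float = 0.7) -> dict:
    """Choose the 'optimal' variant in-sample (IS), then score that same variant out-of-sample (OOS)."""
    cut = int(len(prices) * split)
    in_sample, out_of_sample = prices.iloc[:cut], prices.iloc[cut:]

    fast, slow, is_score = optimize_parameters(in_sample)       # best variant on the IS window
    oos_score = crossover_returns(out_of_sample, fast, slow)    # same variant on unseen data

    # A large gap between is_score and oos_score is the warning sign of backtest overfitting.
    return {"fast": fast, "slow": slow, "in_sample": is_score, "out_of_sample": oos_score}
```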

Overfitting is a concept borrowed from machine learning, and denotes the situation when a model focuses too much on specific details instead of the overall picture (Bailey et al. 2014). Nevertheless, this problem seems to
be unknown or at least underestimated by many developers in autonomous trading. In truth, overfitting is more
likely to occur in finance than in the traditional machine learning problems on which it is based, such as face
recognition. Moreover, the consequences of a false positive can be far more disastrous in financial trading. False
expectations are created that can mislead investors, putting their financial livelihood at stake (Bernardini et al.
2021).

To illustrate the problem of backtest overfitting further, we will consider the example from Bailey et al. (2014).
Suppose we toss a fair coin ten times and observe the sequence {+,+,+,+,+,-,-,-,-,-}, where “+” means head and “-”
means tail. A researcher might infer that the best strategy for betting on this coin is to expect “+” on the first five
tosses, and “-” on the last five tosses. This strategy would have perfect accuracy on the input data. However, when
we toss the coin ten more times, we might observe a different sequence, such as {-,-,+,-,+,+,-,-,+,-}, where the
strategy would have only 50% accuracy. The strategy was overfit to the input data, because it exploited a random
pattern that was not present in the future data. This shows that the strategy constructed in the past data has no
predictive power for the future, no matter how well it seemed to work in the past (Bailey et al. 2014).
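The coin-toss example can be reproduced in a few lines of Python: a "strategy" that simply memorizes the first ten outcomes is perfectly accurate in-sample but only about 50% accurate on fresh tosses.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

observed = [random.choice("+-") for _ in range(10)]    # the ten historical tosses
memorized = list(observed)                             # the "strategy": predict exactly what was seen

in_sample_accuracy = sum(p == o for p, o in zip(memorized, observed)) / 10        # always 1.0

new_tosses = [random.choice("+-") for _ in range(10)]  # ten fresh tosses of the same fair coin
out_of_sample_accuracy = sum(p == o for p, o in zip(memorized, new_tosses)) / 10  # about 0.5 on average

print(in_sample_accuracy, out_of_sample_accuracy)
```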

Walk-forward testing is a way to make backtesting more realistic and robust. It uses separate sets of older and newer data: a trading strategy is fine-tuned on a portion of past data, and its performance is then checked on a different, later data set. This process is repeated by moving both data sets forward in time, and the strategy's final rating is based on all the test results from the new data sets. Robert Pardo first described this testing method in his book 'Design, Testing, and Optimization of Trading Systems'. Walk-forward testing makes traders less likely to shape their strategies too closely to past data, and it also shows how a strategy would perform in different market scenarios (Pardo 2008).
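A minimal sketch of the walk-forward loop described above, again reusing the hypothetical helpers from the earlier examples, is shown below; the window sizes are illustrative assumptions.

```python
def walk_forward(prices: pd.Series, train_size: int = 250, test_size: int = 60) -> float:
    """Slide a train/test window pair forward through time and aggregate only out-of-sample scores."""
    oos_scores = []
    start = 0
    while start + train_size + test_size <= len(prices):
        train = prices.iloc[start : start + train_size]
        test = prices.iloc[start + train_size : start + train_size + test_size]

        fast, slow, _ = optimize_parameters(train)               # fit parameters on past data only
        oos_scores.append(crossover_returns(test, fast, slow))   # score them on the window that follows

        start += test_size                                       # move both windows forward in time
    return sum(oos_scores)  # the final rating uses out-of-sample results only
```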

Drawbacks of walk-forward…

Forwardtesting…

Drawbacks of forwardtesting…

Research Gap and Tentative Research Aim…
