Cryptocurrency Price Prediction Using Support Vector Machines
Christopher Brookins
Valiendero Digital Assets, LP
Pittsburgh, PA 15213
[email protected]
Abstract
Few assets in financial history have been as notoriously volatile as cryptocurrencies.
While the long-term outlook for this asset class remains unclear, we are successful
in making short-term price predictions for several major crypto assets. Using
historical data from July 2015 to November 2019, we develop a large number of
technical indicators to capture patterns in the cryptocurrency market. We then
test various classification methods to forecast short-term future price movements
based on these indicators. On both positive predictive value (PPV) and negative predictive value (NPV) metrics, our classifiers do well
at identifying up and down market moves over the next hour. Beyond evaluating
classification accuracy, we also develop a strategy for translating 1-hour-ahead
class predictions into trading decisions, along with a backtester that simulates
trading in a realistic environment. We find that support vector machines yield the
most profitable trading strategies, which outperform the market on average for
Bitcoin, Ethereum and Litecoin over the past 22 months, since January 2018.
1 Introduction
The price of Bitcoin has swung wildly over the past several years, drawing interest from both
economists trying to fundamentally understand the phenomenon (Ciaian et al. [2014], Mitchnick and
Athey [2018]) and investors simply trying to predict price swings in order to earn profits (Barnwal
et al. [2019], Jiang and Liang [2016], Kakushadze [2018]). This paper falls into the latter camp. We
abstract away the economic and political contexts in which cryptocurrencies exist, focusing solely
on the problem of future price prediction given past trading data. Even without considering first
principles, it is a daunting task to build an explicit model that accounts for the many possible sources
of variation for a poorly understood asset like Bitcoin. Such a model is likely to be overly complex
and not useful for prediction. Instead, in this paper we opt for a black-box approach that simply
targets prediction accuracy and backtest profit in a time-varying setting.
Bitcoin is a peer-to-peer payment system introduced as open source software in January 2009 by a
computer programmer using the pseudonym Satoshi Nakamoto (Nakamoto [2008]). Bitcoin utilizes
cryptographic principles in transaction verification, privacy, and issuance. Key features of the Bitcoin protocol include decentralized peer-to-peer transaction validation, a fixed issuance schedule, and a public transaction ledger, the blockchain.
Bitcoin has become a worldwide phenomenon and has experienced incredible growth, leaping from a market capitalization of essentially zero to over $114 billion (at time of writing) in less than 10 years. Coupled with this parabolic growth has been Bitcoin’s detachment from
macroeconomic features that have been widely tracked for traditional financial asset pricing. Table
1 shows that Bitcoin has very little correlation with major traditional financial assets (Klein et al.
[2018]).
Table 1: Unconditional pairwise Pearson correlations of Bitcoin with major financial assets over the
period Jul 2, 2011 to Dec 31, 2017. See Klein et al. [2018] for source.
Asset Correlation with Bitcoin
Gold 0.0459
Silver 0.0071
WTI (Crude Oil) 0.0158
S&P 500 0.0491
MSCI World 0.0457
MSCI EM50 0.0042
The nascency of this market, its noise and extreme volatility, decorrelation with existing assets, rapidly shifting sentiment, lack of established fundamental economic valuation, and even
suspected price manipulation (Gandal et al. [2018]) make Bitcoin one of the hardest prediction
problems to solve. Investors with diverse strategies can be found across the spectrum. Some are
purely fundamental investors in the venture capital style, focusing on the technological implications
of the blockchain. (The blockchain is an immutable public ledger that records all transactions for a
given cryptocurrency. We describe blockchain data in more detail in Section 2.2.) Some are applying
high frequency trading strategies that are widely utilized in traditional markets. Some are mining
sentiment data from news and social network platforms to predict market cycles.
Our approach is simple yet effective at beating the market over an almost 2-year period for
Bitcoin, as well as “altcoins” Ethereum (Wood [2014]) and Litecoin (Lee [2011]), which are among
the most widely used cryptocurrencies with the highest capitalizations, next to Bitcoin. In Section
2, we describe the datasets we work with and features we generate. In Section 3, we discuss our
methodology for generating price predictions and translating them into trading decisions. In Section
4, we present our results on classification accuracy and backtested profit of our strategies.
2 Data
2.1 Coinbase market data
Coinbase provides open, high, low, close, and volume data (OHLCV) at 1-hour intervals for
the currency pairs BTCUSD, ETHUSD, and LTCUSD. open and close refer to the asset price
at the start and end of the interval, respectively. high is the highest price the asset attains within
the interval, while low is the lowest price within the interval. volume is the total volume, in units
of cryptocurrency, of trades made on the Coinbase exchange within the interval. Historical data
for BTCUSD is available at 1 minute intervals since July 2015. ETHUSD is available at 1 minute
intervals since Aug 2016, and LTCUSD is available at 1 minute intervals since Sep 2016.
Figure 1: Historical prices for BTC, ETH, and LTC from July 2015 to the present are highly volatile, as well as highly correlated with each other.
Figure 2: Historical volumes for BTC, ETH, and LTC from July 2015 to the present are highly volatile,
as well as highly correlated with each other and with historical prices.
Figures 1 and 2 show that the prices and traded volumes of these cryptocurrencies have
fluctuated dramatically over the past several years. Moreover, the cross-asset time series correlations
are all high, and price and volume tend to be highly correlated as well.
We form various technical features based on historical prices and volumes. In Appendix A,
we describe a few illustrative examples, to give context for the kinds of features one might derive
from historical market data. The basic idea is that we attempt to capture the “direction” prices have been
moving in the recent past. We implicitly assume that recent price patterns do in fact influence future
prices, and that our features capture relevant properties of recent price evolutions.
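To make this concrete, here is a minimal sketch of a few momentum-style features computed from hourly OHLCV data with pandas; the column names, window lengths, and the specific features shown are illustrative and not the exact set used in the paper.

import pandas as pd

def momentum_features(ohlcv: pd.DataFrame) -> pd.DataFrame:
    """Illustrative momentum-style features from hourly OHLCV data.

    Assumes a DataFrame with 'close' and 'volume' columns indexed by timestamp;
    this is a sketch, not the exact feature set used in the paper.
    """
    feats = pd.DataFrame(index=ohlcv.index)
    ret = ohlcv["close"].pct_change()
    # Returns over recent hourly intervals capture the short-term "direction".
    for lag in (1, 2, 3, 6, 12, 24):
        feats[f"ret_lag_{lag}"] = ret.shift(lag - 1)
    # Rolling z-score of the price relative to its 24-hour moving average.
    ma = ohlcv["close"].rolling(24).mean()
    sd = ohlcv["close"].rolling(24).std()
    feats["price_zscore_24h"] = (ohlcv["close"] - ma) / sd
    # Relative volume: current volume vs. its 24-hour average.
    feats["rel_volume_24h"] = ohlcv["volume"] / ohlcv["volume"].rolling(24).mean()
    return feats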
2.2 Blocktap on-chain data
Blocktap is a data platform that provides on-chain data, which refers to all data for a digital asset that is natively stored on that asset’s public blockchain. Blockchain data differs by asset, because each asset has its own unique use case and terminology. Thus, blockchain metrics may differ in name and in significance for price prediction from asset to asset. Some examples of on-chain data terminology include
miners revenue, mining rate, gas, active addresses, native coin emission rate, transaction count,
transaction volume, block size, and more. Furthermore, each metric represents some fundamental
economic component of the asset’s network value, i.e. supply or demand. For example, active
addresses generally denote a demand side feature while native coin emission rate is supply side.
The largest, free data repositories of such information can be found at coinmetrics.io
and blocktap.io, whereas real-time transaction tracking and metric graphics can be found
at blockchain.com (Bitcoin blockchain) and etherscan.io (Ethereum blockchain). These
information sources provide a host of metrics ranging from user adoption rate to investor purchases
to recent transactions. This type of open source and real-time insight into a financial economy and
its players’ movements is the first of its kind. Furthermore, it is hypothesized that certain on-chain
data metrics, such as stock-to-flow (the existing supply of BTC relative to the rate at which new BTC is issued), influence the underlying price of digital assets like Bitcoin. As the asset class matures, we expect greater incorporation of on-chain data into predictive models, as we have begun to explore in this paper.
3 Methodology
First, we define price prediction as a classification problem and propose classifiers to solve it, using
various features we derive from market and on-chain data, as described in Section 2. Specifically, we discretize the 1-hour-ahead return rt+1 into three classes: c1 if rt+1 > 0.005, c2 if rt+1 < -0.005, and c3 otherwise (see Figure 3). Beyond
generating price predictions, we develop a framework for backtesting how well our strategy would
have performed, in a quasi-realistic setting, over the period of available data.
Figure 3: LEFT: The distribution of 1-hour returns for BTCUSD from July 2015 to the present. The
two vertical lines indicate the thresholds for class membership, at 0.005 and -0.005. RIGHT: We
show the 9 month rolling proportion of points in each class, from 2016 to the present, for BTCUSD.
We will use this 9 month rolling window to retrain our classifiers, as described in Section 3.2. These
classes are highly imbalanced, with the majority of points falling in class c3 . We clearly see a higher
proportion of the classes c1 and c2 during the volatile bubble and crash periods of 2017-2018.
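A minimal sketch of this labeling step, using the thresholds from Figure 3 (variable names are illustrative):

import pandas as pd

def label_returns(close: pd.Series, threshold: float = 0.005) -> pd.Series:
    """Assign each hour to class c1 (up), c2 (down), or c3 (flat),
    based on the return over the following hour."""
    fwd_ret = close.shift(-1) / close - 1.0   # 1-hour-ahead return r_{t+1}
    labels = pd.Series("c3", index=close.index)
    labels[fwd_ret > threshold] = "c1"
    labels[fwd_ret < -threshold] = "c2"
    return labels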
Since our goal is prediction, not inference, we incorporate historical information directly
into predictive features, instead of explicitly fitting a time series model. Moreover, Figure 4 shows
the autocorrelations and partial autocorrelations of 1-hour log returns are all close to zero.
Figure 4: The first 50 autocorrelations and partial autocorrelations of 1-hour log returns for BTCUSD are all close to zero. ETHUSD and LTCUSD exhibit similar properties.
Explicitly specifying a high-dimensional parametric model is not only prohibitively complex, but prediction intervals would also tend to be wide and unusable, given the inherent noisiness of the data generating process. Further, any such parametric model would require many assumptions
(e.g. stationarity) that we have no reason to believe are true. All things considered, a black-box
approach seems appropriate. We try several multiclass classification methods, including support
vector machines (SVM), random forest, and XGBoost. We train these classifiers to predict rt+1 given
all the relevant features from Coinbase and Blocktap at time t. Since our dataset is sometimes very
imbalanced (vast majority of returns in a 1-hour period fall into a single class, usually c3 ), we weight
classes inversely by their frequency in the dataset. We find that SVM with a squared hinge loss is our
best performing classifier. See Appendix B for a detailed explanation of SVM.
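A minimal sketch of this training step in scikit-learn; the exact implementation and hyperparameters used in the paper are not specified, so the pipeline below is illustrative only.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# LinearSVC minimizes the squared hinge loss by default;
# class_weight="balanced" reweights classes inversely to their frequency.
svm_model = make_pipeline(
    StandardScaler(),
    LinearSVC(loss="squared_hinge", C=1.0, class_weight="balanced", max_iter=10000),
)
# X_train: feature matrix at times t; y_train: class labels for r_{t+1}.
# svm_model.fit(X_train, y_train)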
We must also decide how much historical data to use to train our classifiers. Including too
much history runs the risk of training on irrelevant information, as the present regime may differ
completely from the past. Including too little history runs the risk of underfitting the classifiers due to
inadequate data. We decide to use a rolling window of 9 months, which we believe is a sensible
balance between training on enough data while avoiding unhelpful data from the distant past. We
also find empirically that using a 9 month window performs well. Every month, we retrain the
classifier on the most recent 9 months of data. Taking this classifier as fixed over the course of the
next month, every hour we use the classifier to generate 1 hour ahead predictions based on the current
data that hour. At the end of the next month, we retrain the classifier, then generate new hour-by-hour
predictions for the following month. Empirically, we did not see any benefit to constantly refitting
the classifiers with the latest hour of information; updating the models monthly was sufficient.
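A sketch of this walk-forward retraining loop, assuming hourly features and labels indexed by timestamp and the svm_model pipeline sketched above (the 270-day window approximates 9 months; names are illustrative):

import pandas as pd

def walk_forward_predict(features: pd.DataFrame, labels: pd.Series, model,
                         train_window: str = "270D", step: str = "30D") -> pd.Series:
    """Retrain the classifier roughly monthly on a rolling ~9-month window and
    predict hourly over the following month. Illustrative sketch only."""
    preds = {}
    start = features.index.min() + pd.Timedelta(train_window)
    end = features.index.max()
    t = start
    while t < end:
        train_mask = (features.index >= t - pd.Timedelta(train_window)) & (features.index < t)
        test_mask = (features.index >= t) & (features.index < t + pd.Timedelta(step))
        if test_mask.any():
            model.fit(features[train_mask], labels[train_mask])
            for ts, yhat in zip(features.index[test_mask], model.predict(features[test_mask])):
                preds[ts] = yhat
        t += pd.Timedelta(step)
    return pd.Series(preds)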
As described in Section 3.2, every hour, SVM generates a class prediction for the return over the next
hour. These predictions often are not “confident” or significant enough to warrant any action. Put
another way, we would rather make fewer trading decisions that are very accurate than many
trades that on average are only slightly better than chance. The former scenario is more profitable,
especially after considering the trading costs a small, non-institutional player would incur (see
Section 3.4.2). To address this, we essentially use a form of classification with rejection (Yuan and
Wegkamp [2010]), saying “we don’t know” when the classifier does not seem “confident” enough in its
prediction.
Formally, we define a tuning parameter γ, which serves as a threshold for making trading
decisions. In the case of SVM, this tuning parameter is a threshold for the signed distance to the
margin (See Appendix B.2). In other words, we will only consider potentially making a buy trade
(i.e. consider a new point to belong to class c1 ) if SVM predicts c1 and the estimated distance to
the margin exceeds γ. Random forest and XGBoost are perhaps more intuitive examples: here,
γ serves as a threshold for the predicted class probability from the regression functions. For
three classes, a naive Bayes rule would set γ = 1/3 and suggest buy trades whenever the predicted
probability weight on c1 exceeds 1/3. If we choose a higher γ > 1/3, we effectively make our
trading strategy more conservative, setting a higher bar to meet before trading actions are taken.
Empirically, we find that using a γ threshold for predicting classes c1 and c2 improves NPV and PPV
(see Figure 5), and in turn profitability (see Section 4.3).
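For the SVM, this rejection rule can be sketched as follows, using the per-class decision_function scores as the confidence measure; the helper and its names are illustrative, not the paper's exact implementation.

import numpy as np

def thresholded_signal(model, X_t, gamma: float = 0.5) -> str:
    """Map a one-vs-rest SVM prediction to 'buy', 'sell', or 'hold',
    abstaining unless the winning class clears the confidence threshold gamma.

    `model` is any fitted classifier exposing decision_function and classes_
    (e.g. the LinearSVC pipeline sketched earlier); `X_t` is a (1, n_features)
    array of features for the current hour."""
    scores = model.decision_function(X_t)[0]      # one signed score per class
    best = int(np.argmax(scores))
    label, score = model.classes_[best], scores[best]
    if label == "c1" and score > gamma:
        return "buy"
    if label == "c2" and score > gamma:
        return "sell"
    return "hold"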
3.4 Backtesting
We translate class predictions into trading decisions, and then we develop a backtester to simulate in
a quasi-realistic setting how well our trading strategy would have done historically.
To simplify things, we assume we always hold either 100% cash or 100% cryptocurrency (only
allowed to take long positions). If we are holding all cash, every hour we evaluate whether the
predicted class is c1 and the signed distance to the margin exceeds γ. If both conditions are met, we
convert all our cash into cryptocurrency, buying at the current market price (we enter into a long
position). If we are holding all cryptocurrency, every hour we evaluate whether the predicted class is
c2 and the signed distance to the margin exceeds γ. If both conditions are met, we convert all our
cryptocurrency into cash, selling at the current market price (we exit the long position).
Algorithm 1 describes our trading strategy. Recall that we retrain the SVM classifier every
month, using a rolling 9 month window, and we use it to make hourly predictions over the next
month, denoted ŷ1 , . . . , ŷT in the algorithm below. We can of course refine this strategy by allowing
shorting, as well as position sizing based on holdings and prediction confidence. But even the crude
“all-in” strategy described here can outperform the market, as we will see in Section 4.3.
Our strategy is slow-moving enough and the market is active enough that we do not need market
orders: limit orders at the best available price should execute within minutes, incurring a fee of 0.00%
to 0.15% of the trade value to Coinbase Prime (https://prime.coinbase.com/). To be conservative, we assume trading costs of 0.25% for each trade. This is because in addition to fees explicitly
paid to the exchange, trading also incurs invisible fees due to market impact (Moro et al. [2009]). The
larger and more quickly one trades, the more expensive it is because one pushes the price further away.
Our backtester does not allow leverage, so the position size is limited to 100% of the port-
folio value. We also include take-profit and stop-loss features to manage risk and modulate the
aggressiveness of our strategy. These define the largest positional gain and loss, respectively, that one
is willing to see before closing the position, either to lock in gains or prevent further losses.
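A stripped-down sketch of the resulting all-in backtest loop, using the 0.25% fee from above and omitting the take-profit and stop-loss logic for brevity (names are illustrative):

import pandas as pd

def backtest_all_in(prices: pd.Series, signals: pd.Series, fee: float = 0.0025) -> float:
    """Simulate the all-in long/cash strategy: a 'buy' signal converts all cash to
    cryptocurrency and a 'sell' signal converts it back, each trade paying `fee`
    of its value. `signals` is indexed by the same timestamps as `prices`.
    Returns the final value of an initial $100 portfolio."""
    cash, coins = 100.0, 0.0
    for t, signal in signals.items():
        price = prices.loc[t]
        if signal == "buy" and cash > 0:
            coins, cash = cash * (1 - fee) / price, 0.0
        elif signal == "sell" and coins > 0:
            cash, coins = coins * price * (1 - fee), 0.0
    return cash + coins * prices.iloc[-1]   # mark any open position to market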
4 Experimental Results
In both classification accuracy and backtest performance, SVM was our best model. Indeed, a trading
strategy based on SVM beats the market on average from January 2018 to present.
Accuracy in key decision moments matters more than overall classification accuracy, especially as
our strategies only decide to trade a small fraction of the time. The overall accuracy, taken over every
hour of every day, is more indicative of “missed opportunities” and less relevant for our trading style.
(See Appendix C for more detailed results.) We focus instead on PPV and NPV, which reflect how
often our model is correct among the times that it chooses to actively buy or sell.
Figure 5 shows the PPV and NPV for SVM, random forest, and XGBoost. Using the decision framework described in Section 3.4, we set a threshold γ, which means the prediction “confidence” must exceed γ for the algorithm to treat it as positive (class c1) or negative (class c2).
SVM consistently has the highest accuracy, and it forms the basis of a viable trading strategy.
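Concretely, PPV and NPV here can be computed as the fraction of correct calls among the hours when the thresholded classifier actually predicts c1 or c2, respectively (a sketch with illustrative names):

import pandas as pd

def ppv_npv(predictions: pd.Series, actual: pd.Series) -> tuple:
    """PPV: fraction of 'c1' (buy) calls whose realized class was c1.
    NPV: fraction of 'c2' (sell) calls whose realized class was c2.
    Hours where the thresholded classifier abstained are excluded."""
    buys = predictions == "c1"
    sells = predictions == "c2"
    ppv = (actual[buys] == "c1").mean() if buys.any() else float("nan")
    npv = (actual[sells] == "c2").mean() if sells.any() else float("nan")
    return ppv, npv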
Figure 5: PPV and NPV values for our classifiers, averaged from Jan 2018 to present.
Figure 6 shows our trading strategy in action in the BTCUSD market in March 2018. Red x’s with
vertical lines denote trades, with green lines indicating buys and red lines indicating sells. Table 2
summarizes the 12 trades that were made. We see this was a particularly good month, where 5 out of
6 buy / sell pairs were profitable, and our algorithm made 22.7% after trading fees while the market
dropped by 35.6%. It is also encouraging to see that visually, our trading decisions occur after sudden
“jumps” in the price series, as expected for a strategy trading on momentum-based features.
Figure 7 shows monthly backtest results for BTCUSD. Each month begins on the 5th and ends
on the 5th, and SVM returns are reported net of trading costs. We focus on the period Jan 2018
to the present as a separate “regime” from earlier years. Prior to 2018, crypto markets were in
their infancy, with perhaps essentially different dynamics than the recent market that emerged after the crypto
bubble burst at the end of 2017. Our features seem to work well over this recent period. Figure 8
shows monthly backtest results for ETHUSD and LTCUSD. Similar to Bitcoin, SVM outperforms
the market somewhat consistently for Ethereum and Litecoin as well, while enjoying lower volatility.
Figure 6: This figure shows the trades that the SVM model makes during Mar 2018 for BTCUSD. Red x’s denote trading points, with vertical lines for visual clarity.
A green line denotes a buy trade, and a red line denotes a sell trade. Visually, these trading decision points seem to correspond to sudden “jumps” in the price series,
consistent with the momentum-based features our model uses. In a month when the price of BTC drops by 35.6%, our algorithm, over the course of 6 pairs of buy/sell
trades, returns 22.7% after trading costs.
Table 2: Trading decisions in BTCUSD market over month of March 2018.
Type Time Unit Price
buy 2018-03-09 05:00:00 8499.90
sell 2018-03-09 06:00:00 8815.08
buy 2018-03-11 00:00:00 8529.96
sell 2018-03-11 17:00:00 9631.78
buy 2018-03-14 17:00:00 8335.12
sell 2018-03-15 02:00:00 7797.50
buy 2018-03-15 05:00:00 7791.95
sell 2018-03-15 08:00:00 8194.61
buy 2018-03-30 00:00:00 6815.01
sell 2018-03-30 05:00:00 7110.39
buy 2018-04-01 15:00:00 6450.01
sell 2018-04-01 16:00:00 6805.01
Figure 7: This table shows monthly backtest returns for SVM for Bitcoin from Jan 2018 to the present,
along with the total compounded return and standard deviation of returns. SVM outperforms the
market somewhat consistently, while having lower volatility. The return correlation of SVM to the
market is 35.5% over this period. The plot shows these returns cumulated over the backtest period,
i.e. the hypothetical growth of $100 in our portfolio vs. the BTC market.
These results are encouraging, as our naive “all-in” strategy performs well even without shorting or position sizing. Our average return over the 22 months from Jan 2018 to the present is
higher than the market’s, while our fluctuations and drawdowns are smaller. This is true for all three
assets we tested on: BTC, ETH, and LTC. It seems that in uncertain conditions when the market
declines, such as the second half of 2018, our algorithm often does not trade at all, which helps
overall performance while reducing overall volatility. In all 3 crypto markets, we are able to avoid the
catastrophic losses of the market, while nimbly finding opportunities to make short term profits.
Figure 8: We show monthly backtest returns for SVM for Ethereum and Litecoin from Jan 2018 to
the present. For both assets, SVM outperforms the market while having lower volatility. The return
correlation of SVM to the market over this period is 0.1% for ETHUSD, and 18.5% for LTCUSD.
Figure 9: Hypothetical growth of $100 in our portfolio vs. the market, for ETH and LTC.
Figure 10 compares the time series of monthly returns from our strategy, rolling market
volatility, and market returns for BTCUSD. In all 3 crypto markets we tested, our strategy seems to
trade more when volatility is higher. (We study this in more detail in Appendix D.) Furthermore, its
returns are relatively uncorrelated with the market, which is a great feature that allows one to, say,
hold a long cryptocurrency position while using our strategy to trade short term around that position.
Figure 10: We compare the monthly returns for SVM and the market in BTCUSD, along with rolling
market volatility. We see that our strategy is more likely to trade when market volatility is high, and
often does not trade at all in low volatility periods, when it is more “uncertain” of any price moves.
(We explore this further in Appendix D.) Our strategy is also relatively uncorrelated with the market.
5 Discussion
Cryptocurrency price prediction is a difficult, open-ended problem with many possible approaches.
We chose a black-box classification approach over the “medium term”, on the order of hours: neither
ultra-short term like high frequency strategies, nor long term like venture capital investing. We
gathered data from Coinbase and Blocktap, related to historical prices and volumes as well as
on-chain metrics underlying the cryptographic networks. We then defined features to feed into a
classification algorithm, and translated the resulting predictions into a trading strategy. While there
remains much to be optimized and improved, it is heartening to see that our simple, straightforward
black-box approach can deliver superior returns over an extended period of time.
We believe there are opportunities in the cryptocurrency markets at other time horizons, both shorter
and longer. For example, Coinbase provides order book information available at 1 minute intervals,
which may be useful for higher frequency trading strategies. Instead of regular intervals, we could
try retraining the classifier in an online fashion using change point detection (Aminikhanghahi and
Cook [2017]). We could also adjust risk limits and position sizing dynamically based on market
volatility. Another idea is to train only on short periods before and after major price spikes and drops,
to try to understand these key events. We could also define the classification problem differently,
or choose a regression approach instead. More powerful algorithms such as ensemble methods and
recurrent neural networks might improve predictions. Also, the authors initially explored conformal
methods for more cautious prediction intervals (Hechtlinger et al. [2018]), and it may prove fruitful
to apply these ideas to our current framework. Outside of historical prices and volumes, sentiment
data, in the form of news articles and social media posts, might reflect or influence investor sentiment
and in turn influence prices. Additionally, macroeconomic factors may play an important role in
cryptocurrency dynamics. Political and currency instability around the world, for example, might
increase the demand for cryptoassets, which are relatively uncorrelated with the macroeconomy.
Overall, we believe there is ample structure and opportunity in this relatively new, exciting asset class,
and we are encouraged by the success of our simple black-box approach.
Acknowledgments
Firstly, I am grateful to my wonderful advisor Alessandro Rinaldo for his direction, guidance and
support. I am also grateful to Christopher Brookins of Valiendero Digital Assets, LP for his wisdom
and abundant industry experience. It has been rewarding to collaborate with dedicated colleagues
Niccolo Dalmasso, Riccardo Fogliato, and Wanshan Li.
References
S. Aminikhanghahi and D. J. Cook. A survey of methods for time series change point detection.
Knowledge and Information Systems, 51(2):339–367, 2017.
G. Appel. Become your own technical analyst. The Journal of Wealth Management, 6(1):27–36,
2003.
A. Barnwal et al. Stacking with neural network for cryptocurrency investment. arXiv preprint
arXiv:1902.07855, 2019.
J. Bollinger. Using bollinger bands. Stocks and Commodities, 10(2):47–51, 1992.
P. Ciaian, M. Rajcaniova, and d’Artis Kancs. The economics of bitcoin price formation. arXiv
preprint arXiv:1405.4498, 2014.
N. Gandal et al. Price manipulation in the bitcoin ecosystem. Journal of Monetary Economics, 95:
86–96, 2018.
Y. Hechtlinger, B. Poczos, and L. Wasserman. Cautious deep learning. arXiv preprint
arXiv:1805.09460, 2018.
Z. Jiang and J. Liang. Cryptocurrency portfolio management with deep reinforcement learning. arXiv
preprint arXiv:1612.01277, 2016.
Z. Kakushadze. Cryptoasset factor models. Algorithmic Finance (forthcoming), 2018.
K. Senthamarai Kannan et al. Financial stock market forecast using data mining techniques. In
Proceedings of the International MultiConference of Engineers and Computer Scientists, volume 1, 2010.
T. Klein, H. P. Thu, and T. Walther. Bitcoin is not the new gold–A comparison of volatility, correlation,
and portfolio performance. International Review of Financial Analysis, 59:105–116, 2018.
C. Lee. Litecoin. https://litecoin.org/, 2011. Online; accessed 1 November 2019.
R. Mitchnick and S. Athey. A fundamental valuation frame-
work for cryptoassets. https://s3-us-west-1.amazonaws.com/
fundamental-valuation-framework-for-cryptoassets/A+Fundamental+Valuation+
Framework+for+Cryptoassets_June+2018.pdf, 2018. Online; accessed 1 November 2019.
E. Moro et al. Market impact and trading profile of hidden orders in stock markets. Physical Review
E, 80(6):066102, 2009.
S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system. 2008.
M. S. Pan, K. C. Chan, and R. Fok. Do currency futures prices follow random walks? Journal of
Empirical Finance, 4(1):1–15, 1997.
G. Wood. Ethereum: A secure decentralised generalised transaction ledger. Ethereum project yellow
paper, 151:1–32, 2014.
M. Yuan and M. Wegkamp. Classification methods with reject option based on convex risk minimiza-
tion. Journal of Machine Learning Research, 11:111–130, 2010.
Appendix A. Momentum-based features
Below, we give a few illustrative examples of the kinds of features we might derive from historical
price and volume data. In particular, we seek features that capture the general “direction” or
“momentum” of the market recently. The features below by no means represent an exhaustive list of
the features used in our experiments in this paper.
Bollinger bands (Bollinger [1992]) compare the current price to rolling mean and volatility. Denote
P(i) as the price at time i, and MA(n, t) as the n-day moving average price at time t. Define upper and lower bands:
U(t) = MA(n, t) + 2 \sqrt{ \frac{1}{n} \sum_{i=t-n+1}^{t} (P(i) - MA(n, t))^2 }
L(t) = MA(n, t) - 2 \sqrt{ \frac{1}{n} \sum_{i=t-n+1}^{t} (P(i) - MA(n, t))^2 }
Two features we derive from these Bollinger bands are %b and bandwidth. At time t, we write:
%b = \frac{P(t) - L(t)}{U(t) - L(t)}
bandwidth = \frac{U(t) - L(t)}{MA(n, t)}
Figure 11 shows an example of Bollinger bands. Intuitively, we measure rolling mean and volatility
in order to compare the current price level to levels in the recent past.
Figure 11: Here is an illustration of Bollinger bands, the Bollinger level (current price relative to
upper and lower bands), and the Bollinger width (rolling volatility).
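A sketch of these two features in pandas; the window length is illustrative, and the population standard deviation follows the formulas above.

import pandas as pd

def bollinger_features(close: pd.Series, n: int = 20) -> pd.DataFrame:
    """Compute %b and bandwidth from an n-period rolling mean and standard deviation."""
    ma = close.rolling(n).mean()
    sd = close.rolling(n).std(ddof=0)   # population std, matching the formula above
    upper = ma + 2 * sd
    lower = ma - 2 * sd
    return pd.DataFrame({
        "percent_b": (close - lower) / (upper - lower),
        "bandwidth": (upper - lower) / ma,
    })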
The moving average convergence-divergence (MACD) metric (Appel [2003]) compares two exponential moving averages (EMA) with different windows, and basically measures the level of the faster one relative to the slower one. Define the MACD line as
M(t) = EMA(P, m, t) - EMA(P, n, t),  m < n
where P denotes price. Typical values for m and n are 12 and 26, respectively. Define the signal line as an exponential moving average of the MACD line:
S(t) = EMA(M, l, t)
A typical value for l is 9. Finally, define the histogram line as
H(t) = M(t) - S(t)
Figure 12 illustrates features derived from the moving-average convergence divergence (MACD)
metric. We compare a “slow” and “fast” exponential moving average, to capture where price levels
have been recently relative to a slightly longer history.
Figure 12: Here is an illustration of the various features of the moving-average convergence diver-
gence (MACD) metric. The blue line is the MACD line, which is the fast EMA minus the slow EMA.
The red line is the signal line, which is an EMA of the MACD line. The histogram line is represented
by the gray histograms. This is computed as the MACD line minus the signal line.
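A corresponding sketch of the MACD features, using the (12, 26, 9) windows mentioned above (names are illustrative):

import pandas as pd

def macd_features(close: pd.Series, m: int = 12, n: int = 26, l: int = 9) -> pd.DataFrame:
    """MACD line (fast EMA minus slow EMA), signal line (EMA of the MACD line),
    and histogram line (MACD minus signal)."""
    fast = close.ewm(span=m, adjust=False).mean()
    slow = close.ewm(span=n, adjust=False).mean()
    macd = fast - slow
    signal = macd.ewm(span=l, adjust=False).mean()
    return pd.DataFrame({"macd": macd, "signal": signal, "histogram": macd - signal})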
Here we give one additional example of a momentum-based feature. The relative strength index
(RSI) metric (Kannan et al. [2010]) compares the magnitudes of recent gains and losses, producing a
number between 0 and 100. Define the average gains and losses at time t:
AG(n, t) = \frac{1}{n} \sum_{i=1}^{n} \max\{0, R_{t-i}\}
AL(n, t) = \frac{1}{n} \sum_{i=1}^{n} |\min\{0, R_{t-i}\}|
where we define the return at time t as
R_t = \frac{P_t - P_{t-1}}{P_{t-1}}
Then define the first relative strength at time t as
FRS(n, t) = \frac{AG(n, t)}{AL(n, t)}
and the smoothed relative strength at time t as
SRS(n, t) = \frac{AG(n, t-1) \cdot (n-1) + \max\{0, R_t\}}{AL(n, t-1) \cdot (n-1) + |\min\{0, R_t\}|}
Now, the first relative strength index is defined as
RSI_F = 100 - \frac{100}{1 + FRS}
and the smoothed relative strength index is defined as
RSI_S = 100 - \frac{100}{1 + SRS}
A typical value for n is 14 days.
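A sketch of the smoothed RSI; pandas' exponentially weighted mean with alpha = 1/n reproduces the recursive (n-1)/n smoothing above, up to initialization (names are illustrative):

import pandas as pd

def smoothed_rsi(close: pd.Series, n: int = 14) -> pd.Series:
    """Smoothed RSI: average gains and losses are updated recursively,
    then mapped to a value between 0 and 100."""
    ret = close.pct_change()
    gains = ret.clip(lower=0.0)
    losses = (-ret).clip(lower=0.0)
    # ewm with alpha=1/n applies the (n-1)/n recursive smoothing of the text
    # (the initialization differs slightly from a simple-average seed).
    avg_gain = gains.ewm(alpha=1.0 / n, adjust=False).mean()
    avg_loss = losses.ewm(alpha=1.0 / n, adjust=False).mean()
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)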
Appendix B. Support vector machines
Suppose data x_1, ..., x_n belong to two classes, with labels t_1, ..., t_n ∈ {1, −1} and feature transformation φ, and we model the predicted class assignment with
y(x) = w^T φ(x) + w_0
We seek parameters w, w_0 such that
t_i y(x_i) ≥ 0, ∀ i = 1, ..., n
and that maximize the minimal signed distance from any point to the boundary, known as the margin:
\hat{w}, \hat{w}_0 = \arg\max_{w, w_0} \Big\{ \frac{1}{\|w\|} \min_{i \in \{1,...,n\}} t_i y(x_i) \Big\}
Equivalently, we solve
\hat{w}, \hat{w}_0 = \arg\min_{w, w_0} \frac{1}{2} \|w\|^2
such that
t_i (w^T φ(x_i) + w_0) ≥ 1, ∀ i = 1, ..., n
Often, the dataset is not exactly linearly separable in the given feature space, and furthermore, a
solution that exactly separates the classes may overfit to training noise. We allow some points to
remain “misclassified” by defining a slack variable ξi for each data point xi :
ξ_i = \begin{cases} 0, & t_i y(x_i) \ge 1 \\ |t_i - y(x_i)|, & t_i y(x_i) < 1 \end{cases}
The slack variable represents the penalty incurred for violating the KKT conditions of the hard margin
problem. If a point is exactly on the decision boundary, then y(xi ) = 0 and ξi = 1. Points for which
ξi > 1 are misclassified, and points for which 0 < ξi < 1 are correctly classified but fall within the
minimum margin. The soft margin problem can then be formulated as follows:
\hat{w}, \hat{w}_0 = \arg\min_{w, w_0} \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{n} ξ_i
such that
t_i (w^T φ(x_i) + w_0) ≥ 1 − ξ_i, ∀ i = 1, ..., n
where C > 0 is a tuning parameter that controls the tradeoff between minimizing training error and
tolerating misclassifications. A larger C more heavily discourages violating the hard margin KKT
conditions. In the limit C → ∞, we recover the hard margin problem.
Introducing Lagrange multipliers a_i ≥ 0 and b_i ≥ 0 for the margin and slack constraints, respectively, the Lagrangian is
L(w, w_0, a, b) = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} ξ_i − \sum_{i=1}^{n} a_i [t_i y(x_i) − 1 + ξ_i] − \sum_{i=1}^{n} b_i ξ_i
The stationarity conditions are:
\frac{\partial L(w, w_0, a, b)}{\partial w} = 0 \implies w = \sum_{i=1}^{n} a_i t_i φ(x_i)
\frac{\partial L(w, w_0, a, b)}{\partial w_0} = 0 \implies 0 = \sum_{i=1}^{n} a_i t_i
\frac{\partial L(w, w_0, a, b)}{\partial ξ_i} = 0 \implies a_i = C − b_i, ∀ i
Observe that ai = C − bi , bi ≥ 0 =⇒ ai ≤ C. Substituting back into the Lagrangian, we obtain
the dual representation:
D(a) = \sum_{i=1}^{n} a_i − \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j t_i t_j k(x_i, x_j)
where k(x_i, x_j) = φ(x_i)^T φ(x_j). Notice that this expression is identical to the hard margin case.
The difference between the soft and hard margin problems lies entirely in the constraints. For the soft
margin problem, we have:
0 ≤ a_i ≤ C, ∀ i = 1, ..., n
\sum_{i=1}^{n} a_i t_i = 0
The soft margin SVM must satisfy the following KKT conditions ∀i = 1, ..., n:
ai ≥ 0
ti y(xi ) − 1 + ξi ≥ 0
ai [ti y(xi ) − 1 + ξi ] = 0
bi ≥ 0
ξi ≥ 0
bi ξi = 0
Every data point satisfies either ai = 0 or ti y(xi ) = 1 − ξi . The points for which ai = 0 are
non-support vectors that do not contribute to the SVM solution at all. Only a sparse subset of points
have ai ≠ 0: these are the support vectors that determine the solution.
SVM easily generalizes to multiple classes. With K classes, we run K different one-against-all
SVMs, where in each case the label is +1 if the point belongs to the given class and −1 otherwise. With functions
ŷ_1, ..., ŷ_K, the final class assignment is then k̂ ≡ \arg\max_{k ≤ K} ŷ_k(x). One issue with this approach is that
we cannot guarantee that ŷ_1, ..., ŷ_K are comparable, since they are trained on different tasks. Also,
the datasets for each classifier are unbalanced, with the proportions of labels roughly 1/K for
+1 and (K − 1)/K for −1. Despite these shortcomings, one-against-all remains a popular method.
Another approach would be to train K(K − 1)/2 pairwise one-against-one classifiers, which is more
computationally expensive, and does not result in better performance on our datasets.
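In code, the one-against-all scheme described above can be sketched as follows (illustrative, using scikit-learn's LinearSVC as the binary SVM; the function and its names are assumptions, not the paper's exact implementation):

import numpy as np
from sklearn.svm import LinearSVC

def one_vs_rest_fit_predict(X_train, y_train, X_test, classes=("c1", "c2", "c3")):
    """Explicit one-against-all scheme: one binary SVM per class with +1/-1 labels,
    final prediction is the class whose decision score is largest."""
    scores = []
    for c in classes:
        t = np.where(y_train == c, 1, -1)
        svm = LinearSVC(class_weight="balanced", max_iter=10000).fit(X_train, t)
        scores.append(svm.decision_function(X_test))   # signed distance to margin
    return np.array(classes)[np.argmax(np.vstack(scores), axis=0)]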
Appendix C. Overall classification accuracy
Figure 13: These tables compare the overall classification accuracy of SVM, random forest, and
XGBoost for the 3 crypto assets we tested. SVM consistently performs better than the other 2
classifiers, and also performs significantly better than chance. It is worth noting that some of these
figures may overstate the effectiveness of our models, since the classes are highly imbalanced, as
described in Section 3.1. Thus, when certain months have very high overall accuracy, this may reflect
the fact that the vast majority of points in that month fall into class c3 , i.e. we rarely make any trades.
Appendix D. Trading activity and market volatility
Figure 14: This figure compares the weekly returns for SVM and the market in ETHUSD, along with rolling weekly market volatility. We see that our strategy is more
likely to trade when market volatility is high, and often does not trade at all in low volatility periods, when it is more “uncertain” of any price moves. Our strategy is
also relatively uncorrelated with the market.