MMS - Overview
Related terms: Market Microstructure
Robert Kissell Ph.D., in The Science of Algorithmic Trading and Portfolio Management, 2014
How exactly has market microstructure research evolved over time? First, research
in the 1970s and 1980s focused mainly on spreads, quotes, and price evolution.
In the following decade, 1990–2000, interest turned to transaction cost analysis
but focused on slippage and cost measurement. The following period, 2000–2010, was marked by decimalization and Reg NMS, and research turned to algorithmic trading. Studies focused on pre-trade analysis in order to improve trading decisions and computer execution rules, and on the development of optimal trading strategies for single stocks and portfolios. The 2010 era so far has been marked by market fragmentation, high frequency trading, algorithmic trading in multi-asset classes, and exchange traded fund research. In addition, many portfolio managers are studying how best to incorporate transaction costs into stock selection and portfolio construction.
These eras of market microstructure research are shown in Table 2.1.
To help analysts navigate the vast amount of market microstructure research and
find an appropriate starting point, we have highlighted some sources that we
have found particularly useful. First, the gold standard is the research by Ananth
Madhavan in two papers: “Market Microstructure: A Survey” (2000) and “Market
Microstructure: A Practitioner’s Guide” (2002). Algorithmic Trading Strategies (Kissell,
2006) provides a literature review in Chapter 1, and Algorithmic Trading & DMA by
Barry Johnson (2010) provides in-depth summaries of research techniques being
incorporated into today’s trading algorithms for both order execution and black
box high frequency trading. Finally, Institutional Investor’s Journal of Trading has
become the standard for cutting edge academic and practitioner research.
Some of the more important research papers for various transaction cost topics and algorithmic trading needs are shown in the following tables. Table 2.2 provides important insight into spread cost components. These papers provide the fundamental needs for order placement and smart order routing logic. Table 2.3 provides important findings pertaining to trade cost measurement and trading cost analysis. These papers provide a framework for measuring, evaluating, and comparing trading costs across brokers, traders, and algorithms. Table 2.4 provides insight into pre-trade cost estimation. These papers have provided the groundwork for trading cost and market impact models across multiple asset classes, as well as insight into developing dynamic algorithmic trading rules. Finally, Table 2.5 provides an overview of different trade cost optimization techniques for algorithmic trading and portfolio construction.
In the simplest form, there is a subset of the agents that are endowed with superior knowledge regarding the value of an asset. These agents are referred to as privately informed or simply informed agents. Agents without superior information are referred to as noise or liquidity traders and are assumed to be indistinguishable from the informed agents. Questions regarding the means by which the asset price transitions to reflect the information of the privately informed agents can be couched in this context. Early theoretical papers utilizing this framework include Glosten and Milgrom (1985), Easley and O'Hara (1992), Copeland and Galai (1983), and Kyle (1985). A very comprehensive review of this literature can be found in O'Hara (1995).
The premise of these models is that market makers optimally update bid and ask prices to reflect all public information and remaining uncertainty. For the NYSE, it was historically the specialist who played the role of market maker. Even in markets without a designated specialist, bid and ask quotes are generally inferred either explicitly or implicitly from the buy and sell limit orders closest to the current price. Informed and uninformed traders are assumed to be indistinguishable when arriving to trade, so the difference between the bid and the ask prices can be viewed as compensation for the risk associated with trading against potentially better informed agents. Informed traders will make profitable transactions at the expense of the uninformed.
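To make this adverse-selection logic concrete, here is a minimal numerical sketch in the spirit of Glosten and Milgrom (1985). It is not taken from the text; the function name and parameter values (the fraction of informed traders, the two possible asset values, and the prior) are illustrative assumptions.

```python
def glosten_milgrom_quotes(v_low, v_high, prior_high, informed_frac):
    """Zero-profit bid and ask: each quote equals the expected asset value
    conditional on the side of the incoming order, given a mix of informed
    and uninformed (noise) traders."""
    p, mu = prior_high, informed_frac

    # Informed traders buy only when the value is high and sell only when
    # it is low; uninformed traders buy or sell with probability 1/2.
    p_buy_given_high = mu + (1 - mu) / 2
    p_buy_given_low = (1 - mu) / 2
    p_sell_given_high = (1 - mu) / 2
    p_sell_given_low = mu + (1 - mu) / 2

    ask_num = p * p_buy_given_high * v_high + (1 - p) * p_buy_given_low * v_low
    ask_den = p * p_buy_given_high + (1 - p) * p_buy_given_low
    bid_num = p * p_sell_given_high * v_high + (1 - p) * p_sell_given_low * v_low
    bid_den = p * p_sell_given_high + (1 - p) * p_sell_given_low
    return bid_num / bid_den, ask_num / ask_den

# Illustrative values only.
bid, ask = glosten_milgrom_quotes(v_low=99.0, v_high=101.0, prior_high=0.5, informed_frac=0.2)
print(f"bid = {bid:.3f}, ask = {ask:.3f}, spread = {ask - bid:.3f}")
```

With these numbers the quotes straddle the prior expected value of 100, and the spread widens as the informed fraction rises, which is exactly the compensation-for-adverse-selection interpretation described above.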
In this asymmetric information setting, two types of prices can be defined: the prices at which trades occur, and a notional fair market value that reflects both public and private information. We call the price at which a trade occurs a “transaction price,” although it could be a posted bid, ask, or midpoint of the bid and ask prices. Following the microstructure literature, we will call the notional fair market value an “efficient price,” in the sense that it reflects all information, both public and private. If the two prices do not coincide, then trades occur away from their efficient values, so there is a gain to one party and a loss to another in a transaction. The larger these deviations, the larger the loss to one party. Measures of market quality can be constructed to reflect the size of these deviations. The simplest example would be half the bid-ask spread. If prices occur at the bid or the ask price and the midpoint of the bid and ask quotes is on average equal to the efficient price, then this distance reflects the average loss to a trader executing a market order and the gain to the trader executing the limit order. Natural questions arise: How do the rules of trade affect market quality? How is market quality affected by the amount of asymmetric information? How does market quality vary across stocks with different characteristics, such as volatility or daily volume?
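As a minimal illustration of the half-spread measure just described, the following sketch uses hypothetical quote and trade arrays (in practice these would come from an intraday feed, with each trade matched to the quote prevailing at execution) and computes the quoted half-spread together with the deviation of each transaction price from the quote midpoint.

```python
import numpy as np

# Hypothetical quotes and trades aligned in event time.
bid = np.array([100.00, 100.01, 100.00, 100.02])
ask = np.array([100.04, 100.05, 100.04, 100.06])
trade_price = np.array([100.04, 100.01, 100.04, 100.02])

midpoint = (bid + ask) / 2.0

# Quoted half-spread: the loss to a market order (and gain to the resting
# limit order) if trades occur at the bid or ask and the midpoint equals
# the efficient price on average.
quoted_half_spread = (ask - bid) / 2.0

# Deviation of each transaction price from the prevailing midpoint,
# a simple proxy for how far trades occur from the efficient price.
deviation = np.abs(trade_price - midpoint)

print("average quoted half-spread:", quoted_half_spread.mean())
print("average |price - midpoint|:", deviation.mean())
```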
Empirical market microstructure plays a role in two larger areas. First, an ideal market would have transaction prices that accurately reflect all information. That is, the efficient price and the transaction prices would coincide. In the absence of this perfect world, the deviation should be as small as possible. Because the rules of trade, that is the market structure, play a role in this relationship, market designers play a large role in determining market quality through the rules they impose. This can be a natural objective or one imposed by government regulations. A more complete understanding of market features across different trading rules and platforms aids in market design.
Clearly, assessing market microstructure effects requires studying data at an intraday, high frequency. Data aggregated over the course of a day will not contain the detailed information about price adjustment discussed in this section. Implementing the econometric analysis, however, is complicated by the data features discussed in the previous section. This chapter provides a review of the techniques and issues encountered in the analysis of high-frequency data.
There has always been a nagging worry, however, that there are many situations in
which broad-brush equilibrium models that do not delve sufficiently deeply into
the process of trading and the strategic nature of its dynamics may not be good
enough to tell us what we need to know; to do better we will ultimately have to roll
up our sleeves and properly understand how prices change from a more microscopic
point of view. Walras himself worried about the process of tâtonnement, the way in
which prices settle into equilibrium. While there are many proofs for the existence
of equilibria, it is quite another matter to determine whether or not a particular
equilibrium is stable under perturbations—that is, whether prices initially out of
equilibrium will be attracted to an equilibrium. This necessarily requires a more
detailed model of the way prices are actually formed. There is a long history of work
in economics seeking to create models of this type (see e.g., Fisher, 1983), but many
would argue that this line of work was ultimately not very productive, and in any
case it has had little influence on modern mainstream economics.
The work reviewed here argues that trading is inherently an incremental process and
that for this reason, prices often respond slowly to new information. The reviewed
body of theory springs from the recent empirical discovery that changes in supply
and demand constitute a long-memory process; that is, that its autocorrelation
function is a slowly decaying power law (Bouchaud et al., 2004; Lillo and Farmer,
2004). This means that supply and demand flow in and out of the market only very
gradually, with a persistence that is observed on time scales of weeks or even months.
We argue that this is primarily caused by the practice of order splitting, in which
large institutional funds split their trading orders into many small pieces. Because
of the heavy tails in trading size, there are long periods where buying pressure
dominates and long periods where selling pressure dominates. The market only
slowly and with some difficulty “digests” these swings in supply and demand. To
keep prices efficient in the sense that they are unpredictable and there are no easy profit-making opportunities, the market has to make significant adjustments in
liquidity. Understanding how this happens leads to a deeper understanding of many
properties of market microstructure, such as volatility, the Bid–Ask spread, and the
market impact of individual incremental trades. It also leads to an understanding
of important economic issues that go beyond market microstructure, such as how
large institutional orders impact the price and in particular how this depends on
both the quantity traded and on time. It implies that the liquidity of markets is a
dynamic process with a strong history dependence.
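The long-memory claim can be examined directly on signed order-flow data. The sketch below uses a synthetic placeholder series rather than real market data; it computes the sample autocorrelation of order signs and fits a power-law decay by least squares in log-log coordinates, in the spirit of the empirical work of Bouchaud et al. (2004) and Lillo and Farmer (2004). It illustrates the estimation idea rather than reproducing those papers' exact procedures.

```python
import numpy as np

def sign_autocorrelation(signs, max_lag):
    """Sample autocorrelation of a +1/-1 order-sign series up to max_lag."""
    s = np.asarray(signs, dtype=float)
    s = s - s.mean()
    var = np.dot(s, s) / len(s)
    return np.array([np.dot(s[:-lag], s[lag:]) / (len(s) * var)
                     for lag in range(1, max_lag + 1)])

def power_law_exponent(acf, lags):
    """Fit acf ~ c * lag**(-gamma) by OLS in log-log coordinates."""
    mask = acf > 0  # the log transform requires positive autocorrelations
    slope, intercept = np.polyfit(np.log(lags[mask]), np.log(acf[mask]), 1)
    return -slope, np.exp(intercept)

# Placeholder: a persistent synthetic sign series; real input would be the
# sequence of buy (+1) / sell (-1) market-order signs from trade data.
rng = np.random.default_rng(0)
signs = np.sign(np.cumsum(rng.standard_normal(100_000)) + rng.standard_normal(100_000))

lags = np.arange(1, 201)
acf = sign_autocorrelation(signs, lags[-1])
gamma, scale = power_law_exponent(acf, lags)
print(f"estimated decay exponent gamma = {gamma:.2f}")
```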
The work reviewed here by no means denies that information plays a role in forming
prices, but it suggests that for many purposes this role is secondary. In the last half of the twentieth century, finance increasingly emphasized information and deemphasized supply and demand. The work we review here brings forward the role
of fluctuations and correlations in supply and demand, which may or may not be
exogenous. As we view it, it is useful to begin the story with a quantitative description
of the properties of fluctuations in supply and demand. Where such fluctuations
come from doesn't really matter; they could be driven by rational responses to
information or they could simply be driven by a demand for liquidity. In either case,
they imply that there are situations in which order arrival can be very predictable.
Orders contain a variable amount of information about the hidden background of
supply and demand. This affects how much prices move and therefore modulates
the way in which information is incorporated into prices. This notion of information
is internal to the market. In contrast to the prevailing view in market microstructure
theory, there is no need to distinguish between “informed” and “uninformed”
trading to explain important properties of markets, such as the shape of market
impact functions or the Bid–Ask spread.
We believe that the work here should have repercussions on a wide gamut of questions.
We do not wish to imply that these questions will be answered here—only that
the work described here bears on all of them. We will return to discussing the
implications in our conclusions.
• The options market is highly liquid, so that market microstructure biases do not adversely affect the forecasts. In illiquid markets, bid-ask spreads are wide; furthermore, option quotes are likely to be based largely on historical volatility forecasts, so that the option quotes add little or no new information.
• Many strike prices are available. In this case, model-free methods can be used
to extract volatility forecasts without assuming particular volatility dynamics or
return distributions.
• Many different maturities are available. In this case the option-implied forecast
for any desired horizon can be constructed with a minimum of modeling
assumptions.
• The underlying asset is uniquely determined in the option contract. Most often this is the case, but in the case of Treasury bond options, for example, many different bonds are deliverable, which in turn complicates option-based forecasting of the underlying return dynamics.
• Options are European, or in the case of American options, the early exercise premium can be easily assessed. The early exercise premium must be estimated and subtracted from the American option price before using the method surveyed in this chapter. Estimating the early exercise premium thus adds noise to the option-implied forecast.
• The underlying asset has highly persistent volatility dynamics. In this case time-series forecasting models may be difficult to estimate reliably.
• The underlying asset undergoes structural breaks. Such breaks complicate the estimation of time-series models and thus heighten the role for option-implied forecasts.
• The focus is on higher moments of the underlying asset return. The non-linear payoff structure in option contracts makes them particularly attractive for higher-moment forecasting.
• Risk premia are small. Risk premia may bias the option-implied forecasts, but option-based forecasts may nevertheless contain important information not captured in historical forecasts.
Related to the last bullet point, we have also summarized the key theoretical relationships between option-implied and physical densities, enabling the forecaster to take into account risk premia and convert the option-implied forecasts to physical forecasts. We hasten to add that it is certainly not mandatory that the option-implied information is mapped into the physical measure to generate forecasts. Some empirical studies have found that transforming option-implied to physical information improves forecasting performance in certain situations (see Shackleton et al., 2010 and Chernov, 2007), but these results do not necessarily generalize to other types of forecasting exercises.
More generally, the existence of a bias does not prevent the option-implied information from being a useful predictor of the future object of interest. Much recent evidence on variance forecasting, for example (see our Table 1, which is Table 1 in Busch et al. (2011)), strongly suggests that this is indeed the case empirically.
Most of the empirical literature has been devoted to volatility forecasting using
simple Black–Scholes implied volatility. Clearly, much more empirical research is
needed to assess the more sophisticated methods for computing option-implied
volatility as well as the other forecasting objects of interest including skewness,
kurtosis, and density forecasting.
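To make the "simple Black–Scholes implied volatility" mentioned above concrete, the sketch below inverts the Black–Scholes call formula for volatility by bisection. The option quote, rate, and maturity are hypothetical values chosen only for illustration.

```python
import math

def black_scholes_call(spot, strike, rate, vol, maturity):
    """European call price under Black-Scholes (no dividends)."""
    norm_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * math.sqrt(maturity))
    d2 = d1 - vol * math.sqrt(maturity)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * maturity) * norm_cdf(d2)

def implied_vol(price, spot, strike, rate, maturity, lo=1e-4, hi=5.0, tol=1e-8):
    """Invert the Black-Scholes formula for volatility by bisection
    (the call price is monotone increasing in volatility)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if black_scholes_call(spot, strike, rate, mid, maturity) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical quote: a 3-month at-the-money call.
iv = implied_vol(price=4.20, spot=100.0, strike=100.0, rate=0.01, maturity=0.25)
print(f"Black-Scholes implied volatility: {iv:.4f}")
```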
4.4 Conclusion
Although U.S. regulation of short sales remained fundamentally unchanged for
nearly its first 70 years of existence, it is unlikely that another several decades will pass
before fundamental changes are again introduced into the way the SEC regulates
short sales. Developments in market microstructure now happen at a faster pace
than in the 20th century, which makes it more likely that any particular regulatory
framework will be rendered ineffective, counterproductive, or even obsolete in the
near future. In addition, ongoing efforts at financial reform under the Dodd–Frank
Act and other initiatives will likely keep the technicalities of short sale regulation
permanently unsettled.
Accordingly, although the SEC has recently returned to its original approach to short
sale regulation in implementing a price test, market participants should be aware
that there is no guarantee that a price test or any of its particular exceptions or
approaches will remain in place for an extended period of time. Rules with respect
to locate and close-out requirements in particular may be amenable to change to
the extent that the SEC is influenced by economic studies specifically analyzing
naked short sales. A study by Fotak and colleagues (2009), for instance, found that
naked short sales generally contribute to price quality, which may undermine the
rationale for relatively strict locate and close-out requirements. The studies that the
SEC must conduct pursuant to Dodd–Frank may also call aspects of the current short
sale regulatory regime into question. In short, market developments, differences
in opinion among commissioners, an extended period of share price volatility or
stability, and continued governmental and academic research into the impact of
short selling and its regulation are all factors likely to cause change to any particular
short sale regime.
A number of papers have discussed the possible origins of bid-ask spread. These
may be broadly categorized as theories focusing on inventory, order placement,
and information asymmetry (see Whitcomb, 1988). Stoll (1978) develops a theory
of inventory holding costs with market makers having the prerogative to use the
bid-ask spread to offset the risk and cost of holding an inventory. Garbade and Silber
(1979) define liquidity risk as the variance of the difference between the equilibrium
value of an asset at the time an investor decides to trade and the transaction price
ultimately realized. A result of their theoretical analysis is that market makers who
trade on their own accounts reduce liquidity risk. Again, dealers’ inventory plays
an important role in buffering transient imbalances in purchase and sale orders.
Combined with Stoll’s (1978) theory, it follows that investors need to compensate
market makers by way of a bid-ask spread since they benefit from liquidity risk
reduction and immediacy of trade.
An alternative exploration by Cohen et al. (1981) demonstrates that the costs of information-related activities induce investors to use other order placement strategies such as limit orders. Their conclusion is that the bid-ask spread stems from the
dynamic interaction between market orders and limit orders. The third path that
attempts to understand the origin of bid-ask spread is pioneered by Bagehot (1971),
Copeland and Galai (1983) as well as Glosten and Milgrom (1985). They approach
it from the perspective of adverse selection. In their studies, the spread is the
transaction cost that the specialists impose to insure themselves against informed
traders.
In view of these problems that invalidate the assumptions, many consider the effective spread instead. It is the difference between the market price and the midpoint of the prevailing bid and ask quotes. The effective spread relaxes Assumption II, as it
does not require the transaction price to equal the bid or the ask price. Nevertheless,
it still requires the validity of Assumption III. For highly liquid stocks with competitive
quoting by market makers, the midpoint is likely to coincide with the efficient price.
When quoted prices and the associated sizes are not updated frequently enough to
account for changing market conditions, there is no guarantee that the midpoint is
still a good proxy for the efficient price.
Since the assumptions underlying traditional quotes-based measures are not without flaws, alternative ways of estimating transaction costs appear necessary. In this chapter, we extend Roll's model by postulating that the unobservable efficient price follows a geometric Brownian motion. This is a common assumption in the market microstructure literature, and one also made by Roll (1984).
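For reference, the sketch below implements the classic Roll (1984) covariance estimator of the effective spread from transaction prices alone; it is not the geometric Brownian motion extension developed in this chapter, and the simulated price series is a placeholder. In Roll's model the first-order autocovariance of price changes equals minus the squared half-spread, which is why the estimator recovers roughly the true spread of 0.10 here.

```python
import numpy as np

def roll_spread(prices):
    """Roll (1984) effective spread estimate: s = 2 * sqrt(-cov(dp_t, dp_{t-1})),
    based on the first-order autocovariance of price changes. Returns NaN when
    the autocovariance is positive, in which case the estimator is undefined."""
    dp = np.diff(np.asarray(prices, dtype=float))
    autocov = np.cov(dp[1:], dp[:-1])[0, 1]
    return 2.0 * np.sqrt(-autocov) if autocov < 0 else float("nan")

# Placeholder data: efficient price as a random walk plus a bid-ask bounce
# of half-spread c, which is the setting of Roll's original model.
rng = np.random.default_rng(1)
c = 0.05
efficient = 100 + np.cumsum(0.02 * rng.standard_normal(10_000))
trades = efficient + c * rng.choice([-1.0, 1.0], size=efficient.size)
print(f"estimated spread: {roll_spread(trades):.4f}  (true spread: {2 * c:.4f})")
```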
23.4 Conclusion
This chapter reviewed the main characteristics of short selling in emerging markets,
discussing how short selling restrictions can affect liquidity in emerging markets
and considerably slow the market recovery after a financial shock. Moreover, long positions cannot be hedged easily and, even with a derivative market in place, derivatives cannot be priced efficiently. In general, short selling allows for
efficient pricing information and a symmetric market microstructure, which makes
the market more robust with respect to financial bubbles.
This evidence makes us think of the famous anecdote of the “boiling frog,” according
to which “if a frog is placed in boiling water, it will jump out, but if it is placed in
cold water that is heated slowly, it will not perceive the danger and will be cooked to
death”: short selling allows the market to react quickly to any information, even at the
cost of some “temporary scalds” (in our case, high temporary volatility). Restricting
or banning short selling practices condemns the market to a much slower recovery
(lower Sharpe ratios, higher market drawdowns) and lower liquidity, which can fatally limit its operation and slowly make it irrelevant for the local economy.
IPO Underpricing*
Alexander Ljungqvist, in Handbook of Empirical Corporate Finance, 2007
4.2.1 How widespread is price support?
Direct evidence of price support is limited because stabilizing activities are generally
notifiable, if at all, only to market regulators, and not to investors at large. Thus it is
hard to identify which IPOs were initially supported, how the intensity of intervention
varied over time, and at what time support was withdrawn. Most work therefore
relies on indirect evidence. For instance, one might investigate after-market microstructure data for behavior indicative of price support, and relate it to the underwriter's premarket activities such as bookbuilding. This is particularly promising on
NASDAQ, where underwriters can, and usually do, become market-makers for the
companies they take public.
The microstructure variables of interest are the bid–ask spreads that underwriters
charge (especially compared to competing market-makers who are not part of
the original IPO syndicate); who provides ‘price leadership’ (by offering the best
bid and ask prices); who trades with whom and in what trade sizes; what risks
underwriters take in the after-market; and how much inventory dealers accumulate
(indicating that they are net buyers). Schultz and Zaman (1994) and Hanley, Kumar,
and Seguin (1993) find microstructure evidence consistent with widespread price
support, especially among weak IPOs. Using proprietary Nasdaq data that identifies
the transacting parties, Ellis, Michaely, and O’Hara (2000) show that the lead IPO
underwriter always becomes the dominant market-maker and accumulates sizeable
inventories over the first 20 trading days. Underwriters buy back substantially more
stock in ‘cold’ offerings (those that opened below their offer prices and never
recovered in the first 20 days) than in ‘hot’ offerings (those that never fell below their
offer prices in the first 20 days). These inventory accumulation patterns are strong
evidence of price support activities, and indicate that such activities persist for a
perhaps surprising length of time.
Liquidity Trading
Liquidity trading is a particularly adept high frequency trading strategy which mimics the role of the traditional market maker. Liquidity traders, or scalpers for short, attempt to make the spread (buy the bid, sell the ask) in order to capture the spread gain. Such efforts allow for profit even if the bid or ask do not move at all. The key difference from traditional day traders is that scalpers establish and liquidate positions extremely quickly.
Contrary to traditional traders who would double down on winners, scalpers gain their advantage by increasing the number of winners. The aim here is to make as many short trades as possible, whilst keeping a close eye on market microstructure. Decimalization has eroded scalpers' profits, so the imperative to keep things simple and to get the market data and execution architecture in place is ever greater. Scalping is the least quantitatively involved strategy that we are covering. To ensure it works well, one needs a good quantitative model of market direction, on which there is a plethora of literature—see for instance the application of neural networks (Resta, 2006), traditional time-series analysis techniques (Dacorogna et al., 2001), and sequential Monte-Carlo methods (Hanif and Smith, 2012b; Javaheri, 2005).
Complementary to the above, there are two further types of scalping (Graifer, 2005). Volume scalping is usually achieved through purchasing a large number of shares for a small gain, betting on an extremely small price improvement. This type of scalping works only on highly liquid stocks. The third type of scalping is closest to traditional trading: the trader enters into a trade and closes out as soon as the first exit condition is met, usually when the 1:1 risk/reward target is reached.
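A minimal sketch of the 1:1 risk/reward exit rule just described is shown below; the function and prices are hypothetical, and fees, slippage, and position sizing are ignored.

```python
def exit_on_one_to_one(entry_price, stop_price, current_price):
    """Close a long position once either the stop is hit or the profit
    target equal in size to the risked amount (1:1 risk/reward) is reached."""
    risk = entry_price - stop_price            # amount risked per share
    target_price = entry_price + risk          # 1:1 profit target
    if current_price <= stop_price:
        return "exit: stop hit"
    if current_price >= target_price:
        return "exit: 1:1 target reached"
    return "hold"

# Example: long at 50.00 with a stop at 49.80 gives a target at 50.20.
print(exit_on_one_to_one(entry_price=50.00, stop_price=49.80, current_price=50.21))
```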
1 Introduction
In their quest for more rewarding investment opportunities, international investors
are continually seeking markets that promise higher returns and more efficient
diversification. However, increasing market linkages and interdependence among
global equity markets have made diversification opportunities more difficult to uncover, and higher returns are invariably associated with higher risk. Emerging markets once provided adequate diversification and higher returns; however, such benefits have gradually diminished as emerging markets have evolved into developed
and advanced emerging markets. This transition was achieved as countries upgraded
their market microstructure—which in turn increased market efficiency—and
developed their economies. Currently, frontier markets have caught the cautious
attention of international investors, replacing emerging markets as a possible venue for achieving the elusive goal of greater diversification with greater returns.
It is widely accepted that frontier markets are less liquid and thinner than most
equity markets, and so tend to be less efficient. Indeed, the efficiency of
financial markets is related to the long-term dependence of price returns. The
existence of long-term dependency among price movements indicates the property
of long memory. The presence of long memory in stock returns suggests that
current returns are dependent on past returns. This violates the efficient market
hypothesis (EMH) and martingale processes which are assumed in most financial
asset pricing models (Fama, 1970). Detecting the long memory property in stock
markets is important for investors, regulators, and policymakers in their attempts to
mitigate market inefficiency.
There is a vast literature on the long memory properties of developed and developing stock markets (Lo, 1991; Cheung and Lai, 1995; Caporale and Gil-Alana, 2002).
However, little is known about the long memory processes in frontier markets, and
the existing literature on frontier markets generally focuses on Africa and the Gulf
region. This paper is motivated by the fact that there are relatively few papers about the frontier markets in Europe. In addition, frontier stock markets in Europe are of particular research interest due to their growth potential and EU membership. Since EU members are subject to EU regulations and economic reforms
due to euro zone criteria, investors may find attractive investment opportunities in
these countries.
The objective of this paper is to investigate the long memory property in stock returns and volatility, using data from the major frontier markets of Slovenia, Slovakia, Romania, Croatia, Estonia, and Lithuania. The sample period is 2012–14. We use the daily closing prices of stock market indices. Since frontier markets grapple with market thinness, rapid changes in the regulatory framework, and unpredictable market responses to information flow, stock returns in frontier markets have distinct properties compared to other markets. Therefore, modeling long memory in return and volatility becomes important in measuring risk in these markets. To test the long memory property, we use the GPH (Geweke and Porter-Hudak) estimator in conjunction with the GSP (Gaussian semiparametric) method. We also estimate ARFIMA–FIGARCH models to examine the presence of long memory in stock returns and volatility.
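As a rough illustration of the GPH step, the sketch below runs the standard Geweke and Porter-Hudak log-periodogram regression on a placeholder return series. The bandwidth choice is a common convention rather than the one used in this paper, and the GSP and ARFIMA–FIGARCH estimations are not reproduced.

```python
import numpy as np

def gph_estimate(returns, bandwidth_power=0.5):
    """Geweke-Porter-Hudak log-periodogram estimate of the fractional
    differencing parameter d, using the first m = T**bandwidth_power
    Fourier frequencies."""
    x = np.asarray(returns, dtype=float)
    x = x - x.mean()
    n = x.size
    m = int(n ** bandwidth_power)

    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m Fourier frequencies.
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2.0 * np.pi * n)

    # Regress the log periodogram on log(4 sin^2(w/2)); d is minus the slope.
    regressor = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
    return -slope  # d > 0 indicates long memory

# Placeholder input: i.i.d. returns, for which d should be close to zero.
rng = np.random.default_rng(2)
print(f"GPH estimate of d: {gph_estimate(rng.standard_normal(2000)):.3f}")
```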
The rest of the paper is organized as follows: Section 2 presents the relevant literature
review, Section 3 presents the data, Section 4 outlines the methodology used for
the study, Section 5 shows the empirical findings and analysis, and Section 6
summarizes and concludes the study.