Recently, while studying for my Life Risk Management exam this coming fall, I went off-syllabus to study market risks. I have always felt that financial models tend to have too many subtle assumptions that should be borne in mind while working with them, and the no-arbitrage principle tops that list.
The section on “Characteristics of financial time series” in the “Financial Enterprise Risk Management” book (Chapter 14) had some interesting information that I felt would serve as rules of thumb in financial modeling. My perceptions on the subject changed dramatically.
Firstly:
“In
spite of the assumptions in many models to the contrary, market returns are
rarely independent and identically distributed.”
I have always felt the same, as markets tend to be driven by common perceptions and a good deal of copycat behaviour. There are also many instances where markets overreact to an idea and subsequent corrections then take place gradually. Mean reversion, too, is very much observable. So does this mean that models assuming a random walk process, as in the log-normal model, are wrong?
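Just to pin down what I mean by that: below is a minimal sketch of the log-normal random-walk model (geometric Brownian motion), where every daily log-return is independent and identically distributed. The drift, volatility and horizon figures are purely illustrative placeholders of my own.

```python
import numpy as np

# A minimal sketch of the log-normal (geometric Brownian motion) price model:
# log-returns are i.i.d. normal, so the price is a random walk in log space.
rng = np.random.default_rng(42)
mu, sigma = 0.07, 0.20            # annualised drift and volatility (illustrative guesses)
dt, n_steps = 1 / 252, 252        # daily steps over one year
s0 = 100.0                        # arbitrary starting price

log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
prices = s0 * np.exp(np.cumsum(log_returns))
print(prices[-1])                 # one simulated year-end price
```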
Then:
“Whilst
there is little obvious evidence of serial correlation between returns, there
is some evidence that returns tend to follow trends over shorter periods and to
correct for excessive optimism and pessimism over longer periods. However, the
prospect of such serial correlation is enough to encourage trading to
neutralise the possibility of arbitrage. In other words, serial correlation
does not exist to the extent that it is possible to make money from it – ……..”
Ummm…. so does that mean the use of a random walk
process is justified?
But then:
“Whilst
there is no apparent serial correlation in a series of raw returns, there is
strong serial correlation in a series of absolute or squared returns: groups of
large or small returns in absolute terms tend to occur together. This implies volatility
clustering. It is also clear that volatility does vary over time, hence the
development of ARCH and GARCH models.”
From Wikipedia, volatility clustering refers to the observation that "large changes tend to be followed by large changes, of either sign, and small changes tend to be followed by small changes" (Mandelbrot, 1963). This I can understand, as it is very apparent in periods of financial crisis and depression; sharp downturns are often followed by strong rebounds. But to me this implies more of a systemic risk.
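As a rough check of this on actual data, one could compare the lag-1 autocorrelation of raw returns with that of squared returns: under volatility clustering the former stays close to zero while the latter is clearly positive. A small sketch (the `clustering_check` helper and its inputs are my own illustration, not something from the book):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D series."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def clustering_check(returns):
    """Compare autocorrelation of raw vs squared returns for a daily return series."""
    returns = np.asarray(returns, dtype=float)
    print("lag-1 autocorrelation of raw returns    :", lag1_autocorr(returns))
    print("lag-1 autocorrelation of squared returns:", lag1_autocorr(returns ** 2))
```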
And then all of a sudden:
“The
distribution of market returns also appears to be leptokurtic, with the degree
of leptokurtosis increasing as the time frame over which returns are measured
falls. This is linked to the observation that extreme values tend to occur
close together. In other words, very bad (and very good) series of returns tend
to follow each other. This effect is also more pronounced over short time horizons.”
This is where I felt I had actually learnt something. My pessimism towards simple stock models that do not embed volatility clustering has been rather immature; there is no reason to call them incorrect, I suppose. The Black-Scholes assumption of risk neutrality, with expected growth equal to the risk-free return, makes sense now. The returns would very much linger around the risk-free interest rate and would not deviate much further.
The return distribution (in normal circumstances) is therefore much narrower than I thought, and so, treating extreme events as outliers, simple models can provide a good fit. However, as risk professionals our interest lies in the tails, which for a leptokurtic distribution are fatter.
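To see the horizon effect for myself, a quick sketch: aggregate daily returns into weekly and monthly blocks and compare the excess kurtosis at each horizon. The fat-tailed daily series here is simulated Student-t noise, purely a stand-in of mine for real return data:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
# Illustrative fat-tailed daily returns (Student-t noise), standing in for real data.
daily = rng.standard_t(df=4, size=252 * 40) * 0.01

def horizon_kurtosis(returns, block):
    """Excess kurtosis of returns aggregated over non-overlapping blocks of `block` days."""
    n = len(returns) // block * block
    agg = returns[:n].reshape(-1, block).sum(axis=1)
    return kurtosis(agg)          # excess kurtosis; 0 for a normal distribution

for label, block in [("daily", 1), ("weekly", 5), ("monthly", 21)]:
    print(label, round(horizon_kurtosis(daily, block), 2))
```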
So, moving on:
“Correlations
do exist between stocks, and also between asset classes and economic variables.
However, these correlations are not stable. They are also not fully descriptive
of the full range of interactions between the various elements. For example, whilst
the correlation between two stocks might be relatively low when market
movements are small, it might increase in volatile markets. This is in part a
reflection of the fact that stock prices are driven by a number of factors. Some
relate only to a particular firm, others to an industry, others still to an entire
market. The different weights of these factors at any particular time will determine
the extent to which two stocks move in the same way.”
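A simple way I might look at this instability myself is a rolling correlation between two return series; the 60-day window and the series below are placeholder choices of mine:

```python
import numpy as np
import pandas as pd

def rolling_correlation(returns_a, returns_b, window=60):
    """Trailing-window correlation between two aligned return series."""
    df = pd.DataFrame({"a": np.asarray(returns_a), "b": np.asarray(returns_b)})
    return df["a"].rolling(window).corr(df["b"])
```

With real stock return series the output typically drifts over time and tends to jump towards one in stressed markets, which is exactly the instability the quote describes.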
This has me asking too many questions at the same time:
- Is it OK to model stock performance based on a recognized index?
- Should we be modeling each type of stock individually, and is that practical / possible?
- Can systemic risk occurrences be built into a probability model, or is scenario testing the only way?
- Are we wasting our time on complex volatility clustering models? Would a random walk process suffice, along with intuitively designed scenarios for changes in the business environment or systemic risk?
- Can economic scenario generators (ESGs) embed that level of complexity, relating economic factors, business trends, commodity prices, etc., for economic capital purposes?
- Should fund managers improvise for cutting-edge performance with clever fund compositions, or should they just match the market beta (market composition) and be happy with it (go with the flow and a trackable index)?
WHY DID I WASTE MY BREATH ON THIS:
Being primarily a Valuation Actuary, I am obsessed with Gross Premium Valuation Models, i.e. models that make real-time sense. However, the asset side has never quite been addressed by us (or other non-finance actuaries). Also, working in a company running a traditional insurance business that credits bonuses based on the returns of the complete asset portfolio makes life a great deal harder. It is imperative that a consistent approach to modeling portfolio performance (consisting of long-term debt instruments, stocks, money market instruments, etc.) is adopted, specifically for Economic Capital purposes.
This has also led me to understand that I should not have been fooling around in Economics; if scenario design is the way to go, I should have my facts straight. Also, I shall be tinkering with ARCH and GARCH soon.
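As a starting point for that tinkering, here is a bare-bones GARCH(1,1) simulation; the parameter values are illustrative guesses of mine rather than fitted estimates:

```python
import numpy as np

def simulate_garch11(n, omega=1e-6, alpha=0.08, beta=0.90, seed=1):
    """Simulate n returns from a GARCH(1,1):
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    sigma2 = np.zeros(n)
    sigma2[0] = omega / (1 - alpha - beta)   # start at the unconditional variance
    r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return r, np.sqrt(sigma2)

returns, vol = simulate_garch11(2000)
```

Plotting the simulated returns should show the clustering discussed above: calm stretches and turbulent stretches, rather than a uniform band of noise.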