
The precise definition of the volatility of an asset is "an annualized measure of dispersion in the stochastic process that is used to model the log returns" (Alexander 2008: 90). However, the true volatility process cannot be observed, because pure volatility is not traded in the market. It can only be estimated and forecasted.

2.1.1. Volatility measurements

Statistically, the term volatility is often used to refer to the standard deviation of returns over the sample period,

\[
\sigma = \sqrt{\frac{1}{T-1}\sum_{t=1}^{T}\left(r_t - \bar{r}\right)^2} \tag{1}
\]

where $r_t$ is the return on day $t$, and $\bar{r}$ is the average return over the $T$-day period (Poon 2005: 1). This sample standard deviation $\sigma$ is a distribution-free parameter representing the second-moment characteristic of the sample.
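As a minimal illustration of formula (1), the sample standard deviation could be computed in Python with NumPy as in the following sketch; the return values are purely illustrative and not taken from any data used in this thesis.

```python
import numpy as np

# Hypothetical daily log returns r_1, ..., r_T (values are illustrative only).
returns = np.array([0.004, -0.012, 0.007, 0.001, -0.003])

T = len(returns)
r_bar = returns.mean()                                      # average return over the T-day period
sigma = np.sqrt(((returns - r_bar) ** 2).sum() / (T - 1))   # formula (1)

# np.std(returns, ddof=1) gives the same result, using the same T - 1 denominator.
print(sigma)
```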

In practice, price variations of financial assets are very hard to predict. The usual approach is therefore to assume that successive returns are distributed approximately independently of each other and that the one-period log returns are normally distributed with mean $\mu$ and standard deviation $\sigma$. Since dispersion increases as the holding period $h$ increases, the standard deviation of $n$-day returns cannot be compared directly with the standard deviation of $m$-day returns. To make them comparable, the sample standard deviation must be transformed into annualized form. The annualized standard deviation is called the annual volatility, or simply the volatility, defined as follows

\[
\text{Annual volatility} = 100\,\sigma\sqrt{A}\,\% \tag{2}
\]

where $A$ is an annualizing factor, the number of returns per year (Alexander 2001: 5).
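Building on the sketch above, formula (2) could be applied as follows. The annualizing factor of 252 trading days per year is a common but assumed choice, not a value taken from the cited sources.

```python
import numpy as np

def annual_volatility(returns, periods_per_year=252):
    """Annual volatility in percent, per formula (2): 100 * sigma * sqrt(A).

    `periods_per_year` is the annualizing factor A; 252 trading days is a
    common assumption for daily data.
    """
    sigma = np.std(returns, ddof=1)            # sample standard deviation, formula (1)
    return 100.0 * sigma * np.sqrt(periods_per_year)

# Hypothetical daily log returns (illustrative values only):
daily_returns = np.array([0.004, -0.012, 0.007, 0.001, -0.003])
print(annual_volatility(daily_returns))
```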

However, the above transformation of the standard deviation is only valid when returns are i.i.d. (independently and identically distributed), which implies that volatility is constant.

The constant-volatility assumption corresponds exactly to the assumptions underlying Black-Scholes-Merton type option pricing models and moving-average statistical volatility, although it is not realistic in the real world. Nonetheless, quoting volatility as an annual volatility has become the market convention and is widely used to forecast the standard deviation, regardless of whether the i.i.d. assumption for returns holds (Alexander 2008: 92).

Even though the annualized sample standard deviation is broadly utilized, some academics question its applicability, especially for small samples. For example, when monthly volatility is needed and daily data are obtainable, it is simple to use formula (1) to calculate the standard deviation. However, when daily volatility is required and only daily data are accessible, a problem appears. Figlewski (1997) points out that the sample mean is a very imprecise estimate of the true mean because of its inherent statistical properties. To address this issue, many researchers instead use the daily squared return, computed from market closing prices, to represent daily volatility. Employing this daily squared return as the measurement of the latent volatility process, Cumby et al. (1993), Figlewski (1997), and Jorion (1995, 1996) find that although they obtain highly significant in-sample parameter estimates, their standard volatility models perform poorly in out-of-sample forecasts. Andersen and Bollerslev (1998: 886) explain this as follows.

Write the return innovation as $r_t = \sigma_t z_t$, where $z_t$ denotes an independent, mean-zero, unit-variance stochastic process and the latent volatility $\sigma_t$ evolves according to the specified model. If the model for $\sigma_t$ is correctly specified, then $E_{t-1}[\sigma_t^2 z_t^2] = E_{t-1}[r_t^2] = \sigma_t^2$, so it appears reasonable to adopt the daily squared return innovation as a proxy for ex-post volatility. However, the error component $z_t^2$ varies substantially from observation to observation, so the squared innovation can be a very noisy measurement. When $r_t^2$ is used as the measure of ex-post volatility, the apparently poor predictability of volatility models may well be due to this inherent noise in the return process rather than to incompetent models.
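The noise argument can be illustrated with a small simulation: if $r_t = \sigma_t z_t$ with standard normal $z_t$, the squared return has conditional mean $\sigma_t^2$ but a dispersion of the same order of magnitude as $\sigma_t^2$ itself. The following sketch uses a constant true volatility purely for simplicity; all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
sigma_t = 0.01                       # assume a constant true daily volatility for simplicity
z = rng.standard_normal(n)           # z_t: independent, mean-zero, unit-variance innovations
r = sigma_t * z                      # return innovations r_t = sigma_t * z_t

squared_returns = r ** 2             # daily squared return used as a volatility proxy

print("true variance sigma_t^2 :", sigma_t ** 2)
print("mean of r_t^2           :", squared_returns.mean())   # close to sigma_t^2: unbiased proxy
print("std. dev. of r_t^2      :", squared_returns.std())    # roughly sqrt(2) * sigma_t^2: very noisy
```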

As mentioned above, daily returns are computed from closing prices. Some researchers consider this insufficient: different prices contain ample information, and using the closing price alone cannot reflect this information set. Parkinson (1980) proposed the extreme value method, also called the high-low measurement, to estimate volatility. The basic idea is to use the highest and lowest prices over a unit time interval to capture this additional information. The specific formula is as follows

\[
\sigma_t^2 = \frac{(\ln H_t - \ln L_t)^2}{4\ln 2} \tag{3}
\]

where $H_t$ and $L_t$ denote the highest and lowest prices over time interval $t$. The extreme value method is very easy to apply in practice, because daily, weekly, and in some cases monthly highs and lows are published for every stock by major newspapers.
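A minimal sketch of the Parkinson estimator in formula (3) could look as follows; the high and low prices are purely illustrative placeholders.

```python
import numpy as np

def parkinson_variance(high, low):
    """Daily variance estimate per formula (3): (ln H_t - ln L_t)^2 / (4 ln 2)."""
    high = np.asarray(high, dtype=float)
    low = np.asarray(low, dtype=float)
    return (np.log(high) - np.log(low)) ** 2 / (4.0 * np.log(2.0))

# Hypothetical daily highs and lows (illustrative values only):
H = [101.2, 102.5, 100.8]
L = [99.7, 100.9, 99.5]
sigma2 = parkinson_variance(H, L)    # one variance estimate per day
print(np.sqrt(sigma2))               # corresponding daily volatility estimates
```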

Although the high-low method captures more information and is convenient to implement, it is still founded on the normal distribution assumption, which is invalid for financial market returns. Financial market returns exhibit long tails, so the method is very sensitive to outliers, and when applied to volatile processes it becomes inefficient.

The pitfalls above motivate researchers to find a new and more accurate way to measure the latent volatility process. Fung and Hsieh (1991) and Andersen and Bollerslev (1998) pioneered the use of the term realized volatility, which means "the sum of intraday squared returns at short intervals such as fifteen or five minutes" (Poon & Granger 2003: 481). This entirely new method reduces the noise dramatically and makes a radical improvement in temporal stability relative to methods based on daily returns (Andersen & Bollerslev 1998: 887). According to Poon (2005), such a volatility estimator has been shown to be an accurate estimate of the latent volatility process.
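As a minimal sketch, realized variance for a single day could be computed as the sum of squared intraday log returns; the five-minute prices below are hypothetical placeholders, not actual market data.

```python
import numpy as np

def realized_variance(intraday_prices):
    """Realized variance for one day: the sum of squared intraday log returns."""
    log_prices = np.log(np.asarray(intraday_prices, dtype=float))
    intraday_returns = np.diff(log_prices)     # e.g. five- or fifteen-minute log returns
    return np.sum(intraday_returns ** 2)

# Hypothetical five-minute prices within a single trading day (illustrative values only):
prices = [100.0, 100.2, 99.9, 100.4, 100.1, 100.3]
rv = realized_variance(prices)
print(rv, np.sqrt(rv))                         # realized variance and realized volatility
```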

However, high-frequency data are not readily accessible and can be impossible to obtain in some illiquid markets. Table 1 summarizes the respective strengths and weaknesses of the different methods of measuring the latent volatility process.

Table 1. Summary of possible measurements of the latent volatility process.

Measurement | Formula | Strengths | Drawbacks
Sample standard deviation | (1), annualized by (2) | Distribution free; market convention | Valid only under i.i.d. returns; imprecise in small samples
Daily squared return | $r_t^2$ | Unbiased proxy when the volatility model is correctly specified | Very noisy measurement
High-low (extreme value) | (3) | Uses intraday range; highs and lows widely published | Relies on normality; sensitive to outliers
Realized volatility | Sum of intraday squared returns | Accurate, low-noise estimate | Requires high-frequency data

2.1.2. Misperceptions of volatility

In financial markets, investors usually tend to interpret volatility as risk. However, the two are not exactly the same. Volatility refers only to the spread of a distribution and says nothing about its shape. Only if the distribution is normal or log-normal is volatility a sufficient statistic for the dispersion of the return distribution; otherwise, the shape, or the entire distribution function, must be known to determine the dispersion. Figure 1 (a) plots the return distribution of the Dow index from 1st October 1928 to 8th March 2010. The normal distribution simulated with the same mean and standard deviation as the asset returns is drawn in Figure 1 (b) to facilitate comparison. Compared with the normal distribution, the distribution of Dow returns has a fat left tail and a higher peak, with skewness -0.596960 and kurtosis 27.81276. Volatility is therefore not the sole determinant of the dispersion of the Dow return distribution. This thesis is only interested in volatility: although volatility is not the sole determinant of the asset return distribution, it is still a key input to many important financial applications.

Figure 1. (a) Return distribution of the Dow index from 1928 to 2010.

Figure 1. (b) Simulated normal distribution with the same mean and standard deviation as the Dow index.
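The moments compared in Figure 1 could be computed along the lines of the following sketch. The return series here is a heavy-tailed placeholder, not the actual Dow data, and `fisher=False` is used so that SciPy reports the raw (Pearson) kurtosis, which equals 3 for a normal distribution.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Placeholder data: heavy-tailed stand-in for the daily Dow log returns used in the thesis.
rng = np.random.default_rng(1)
returns = rng.standard_t(df=3, size=20_000) * 0.01

print("skewness:", skew(returns))
print("kurtosis:", kurtosis(returns, fisher=False))   # Pearson kurtosis; equals 3 for a normal

# Simulated normal benchmark with the same mean and standard deviation, as in Figure 1 (b):
normal_sim = rng.normal(returns.mean(), returns.std(ddof=1), size=returns.size)
print("normal skewness:", skew(normal_sim))
print("normal kurtosis:", kurtosis(normal_sim, fisher=False))
```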