
LAPPEENRANTA-LAHTI UNIVERSITY OF TECHNOLOGY
LUT School of Business and Management

Strategic Finance and Business Analytics

Master’s Thesis

Financial connectedness of the Nordic banking sector: examining time and frequency connectedness in return volatilities of bank shares

7.12.2020
Author: Matias Hällström
Supervisor: Research Fellow Jan Stoklasa
2nd Examiner: Professor Sheraz Ahmed


ABSTRACT

Author: Matias Hällström
Title: Financial connectedness of the Nordic banking sector: examining time and frequency connectedness in return volatilities of bank shares
Faculty: LUT School of Business and Management
Degree: Master of Science in Economics and Business Administration
Master's Program: Strategic Finance and Business Analytics
Year: 2020
Master's Thesis: 74 pages, 15 figures, 11 tables, 6 appendices
Examiners: Research Fellow Jan Stoklasa, Professor Sheraz Ahmed
Keywords: financial connectedness, volatility spillover, Nordic banking, stock markets, frequency connectedness

This thesis aims to analyze the volatility connectedness between Nordic publicly listed banking institutions from 2004 to 2020. Connectedness is assessed on multiple levels of granularity, from system-wide connectedness to connectedness between specific banks. Static full-sample and dynamic over-time connectedness measures are obtained using the spillover index framework of Diebold and Yilmaz (2012). These measures are complemented by the frequency decomposition of connectedness methodology of Baruník and Křehlík (2018). The first framework allows us to assess general connectedness and its time-varying dynamics, while the latter is used to understand whether connectedness is long- or short-term in nature.

Our results show that, on average, about 51 % of the variation in a 10-day forecast is due to volatility spillover from other Nordic banks. This connectedness varies over time, and increases in connectedness are associated with market turbulence. Overall connectedness varies between 36 and 75 %. Generally, Swedish banks are central in the system, as they both emit and receive the most volatility. Although they tend to be among the largest, we find no conclusive evidence that larger banks are more connected. However, connectedness in the Nordics does seem to be higher within countries than across them. Finally, we find that connectedness in the banking sector is long-term in nature, as most of it is created at low-frequency cycles, meaning that shocks do not dissipate immediately but persist.


TIIVISTELMÄ (ABSTRACT IN FINNISH)

Author: Matias Hällström
Title: Interconnectedness of the Nordic banking sector: the spread of bank share return volatility across time and frequencies
Faculty: LUT School of Business and Management
Degree: Master of Science in Economics and Business Administration
Master's Program: Strategic Finance and Business Analytics
Year: 2020
Master's Thesis: 74 pages, 15 figures, 11 tables, 6 appendices
Examiners: Research Fellow Jan Stoklasa, Professor Sheraz Ahmed
Keywords: financial connectedness, volatility spillover, Nordic banking sector, stock markets, frequency connectedness

The purpose of this Master's thesis is to assess the interconnectedness of Nordic banks and the spillover of volatility between 2004 and 2020. Connectedness is evaluated both at the system level and between individual banks. Full-sample and rolling-window connectedness are measured with the spillover index of Diebold and Yilmaz (2012). In addition, connectedness is assessed across frequencies and economic cycles with the methodology of Baruník and Křehlík (2018). The first methodology is used to assess connectedness at a general level along with its typical over-time dynamics; the latter makes it possible to assess whether connectedness is short- or long-term in nature.

Our results show that, on average, about 51 % of the variation in a 10-day forecast can be attributed to volatility shocks spilling over from other banks. Connectedness varies over time, and rising connectedness is usually associated with deteriorating market conditions. The share of variation due to spilled-over volatility shocks ranges between 36 and 75 percent. In general, Swedish banks both transmit and receive the most volatility. Although Swedish banks are on average larger, there is no clear link between a bank's size and the volatility it transmits. By contrast, average connectedness within countries is higher than between them. Finally, the results show that the connectedness between Nordic banks is long-term and occurs at low frequencies, and shocks do not dissipate quickly.


Acknowledgements

As I’m finishing my Master’s thesis, and with it my studies, I have to say that the prevailing emotion is relief. While most of my studies flew by, the same can’t be said about the thesis process, which has had its ups and downs. Trying to complete the thesis alongside work proved to be a futile effort, but thankfully I was able to take some time off, and after a slow start I got the wheels rolling.

I want to acknowledge my thesis supervisor Jan and thank him for all the discussions and feedback along the way. It was great to have such an approachable and helpful supervisor.

I want to also thank my family and girlfriend for their support during my studies and the thesis writing process.

Helsinki, 4th of December 2020
Matias Hällström


Table of Contents

1. INTRODUCTION
   1.1. Motivation and background
   1.2. Research objectives
   1.3. Scope of the study
   1.4. Outline of the paper
2. THEORETICAL FRAMEWORK AND MODELS
   2.1. Volatility and its proxies
   2.2. Financial connectedness
   2.3. Frequency Dynamics of Financial Connectedness
3. LITERATURE REVIEW
   3.1. Connectedness and spillovers in the financial sector
4. DATA AND METHODOLOGY
   4.1. Data
   4.2. Methodology
5. RESULTS
6. CONCLUSIONS
   6.1. Summary and implications
   6.2. Potential further research
REFERENCES
APPENDICES

List of Figures

Figure 1: Bank equity prices
Figure 2: Bank equity log-returns
Figure 3: Bank return volatilities, Yang-Zhang
Figure 4: Bank log-return volatilities, Yang-Zhang
Figure 5: Total connectedness
Figure 6: To others connectedness
Figure 7: From others connectedness
Figure 8: Net-connectedness
Figure 9: Total frequency connectedness, absolute and within
Figure 10: Within frequency connectedness, European debt crisis event study
Figure 11: Horizon robustness
Figure 12: Lag order robustness
Figure 13: Cholesky factorization with 16 random variable orders
Figure 14: Window length robustness
Figure 15: Robustness to the choice of volatility proxy

List of Tables

Table 1: Summary of volatility estimation methods
Table 2: Connectedness table schematic
Table 3: Descriptive statistics for bank equity prices
Table 4: Descriptive statistics for bank log-returns
Table 5: Descriptive statistics for bank return volatilities
Table 6: Descriptive statistics for bank log-return volatilities
Table 7: Model selection, information criteria results
Table 8: Total connectedness, full sample
Table 9: Country subsamples from full-sample connectedness
Table 10: Directional connectedness rankings
Table 11: Rank correlation testing

Abbreviations

ADF     Augmented Dickey-Fuller test
AML     Anti-Money Laundering
ARCH    Autoregressive Conditional Heteroskedasticity
CoVaR   Conditional Value-at-Risk
ECB     European Central Bank
EGARCH  Exponential Generalized Autoregressive Conditional Heteroskedasticity
ES      Expected Shortfall
FEVD    Forecast Error Variance Decomposition
FI      Financial Institution
GARCH   Generalized Autoregressive Conditional Heteroskedasticity
GFEVD   Generalized Forecast Error Variance Decomposition
IMF     International Monetary Fund
JB      Jarque-Bera test
KYC     Know Your Customer
MES     Marginal Expected Shortfall
VAR     Vector Autoregression
VaR     Value-at-Risk

1. INTRODUCTION

Modern-day global markets are increasingly interconnected and, as such, the world we live in is, in a way, much smaller than it once was. Events on the other side of the planet don’t feel so far away when we can read about them as they happen, whereas not so long ago we might never have known about them. Not only do we know what is happening worldwide, but those faraway events affect our lives. Many different factors have driven this interconnectivity. Technological advancements have enabled the instant transfer of information and communication across the planet. Political drivers such as the European Union, the United Nations and the World Trade Organization have encouraged international dialog and brought countries to the negotiation table. Domestic markets have become more saturated, and markets beyond national borders have become more attractive and accessible as businesses have sought room to grow. The increased ease and speed of transferring goods and information has done its part to pave the way for an interconnected global economy. Companies across the world form networks with varying degrees of connectedness between them.

Naturally, this interconnectivity is not limited to traditional markets with tangible products; it extends to the financial markets, where local shocks can have international consequences. Exceptional times often highlight how shocks can spread. The financial crisis of 2008 and onwards is still in recent memory and is a prime example of how shocks disperse across connected markets. This phenomenon, where a crisis in one market spreads to another, is referred to as financial contagion. Contagion is inherently related to the integration of markets and the spread of information across them (ECB, 2005). Contagion and connectedness are distinct but related concepts: contagion refers to the spread of run-like behavior in a system, which can be intensified through connectedness.

Connectedness itself can be defined as the degree of interconnectivity between institutions, enabling a failure in one to spread to another. Two main types of connectedness relate to balance sheet links. The party who owns debt, equity, or derivatives is exposed through asset connectedness to the issuing institution's failures. In turn, liability connectedness is realized when an institution that depends on another for funding suffers from failures of the funder due to disruptions in the funding stream (Scott, 2016).

Much attention has been paid to researching linkages between markets and firms, especially after the recent crisis periods. The spillover effect is often the focus of such studies. Spillover can be defined as the impact an event, even a seemingly unrelated one, has on a particular financial market or economy. Studies of spillover effects often focus on mean return spillovers and volatility spillovers. In simple terms, these are the effects that a change in financial returns or volatility somewhere has on returns and volatility somewhere else.

Regardless of the asset type or financial market, the global financial crisis of 2008 appears to mark the peak in interconnectedness and spillover effects. For example, Angkinand, Barth and Kim (2010) find an increasing degree of interconnectedness over time in stock market returns between advanced economies. Spillover effects were found to be highest just after the US subprime mortgage meltdown in 2007. Diebold and Yilmaz (2012) find a similar increase in volatility spillover during crisis periods between four US asset classes: stocks, bonds, foreign exchange and commodities.

Shocks do not just spread between international markets but within industries too, be they global or local. The financial sector and its subindustries are a common subject in connectedness studies. So much of the world’s day-to-day operations depend on the sector that it deserves the increased scrutiny. Banking, for example, is a crucial piece of the puzzle that is the global economy, and its failures can have significant consequences. We need look no further than the global financial crisis for an example of a banking-system-led crisis, the widely agreed-upon cause of which was the combination, along with other factors, of excessive risk-taking by banks and the burst of the real estate bubble. The trigger for the crisis in 2008 was the bankruptcy of the US investment bank Lehman Brothers (Aliber and Zoega, 2019).

This thesis also focuses on banking and its interconnectedness. We focus specifically on the Nordic banking sector, its system-wide connectedness and cross-spillovers. The region is often overlooked and, to our knowledge, there are no previous peer-reviewed published studies that focus primarily on measuring connectedness in the Nordics. This offers an attractive research gap to fill. The area is often thought to be calm and well regulated, but it is not immune to contagion and systemic risks. Iceland, for example, experienced the largest systemic bank collapse relative to its size of any country ever during the financial crisis (The Economist, 2008). Our study aims to shed light on how vulnerable the Nordic banking sector is to such an event in one of the Nordic banks.

The methodological choice of this thesis is the spillover index framework of Diebold and Yilmaz (2009, 2012, 2014). This relatively recent framework for measuring connectedness allows us to look at the linkages between Nordic banks on multiple levels of granularity, a feature not found in many other models. We look not only at system-wide connectedness but also at pairwise connectedness and at connectedness between specific banks and the system. To provide an even more comprehensive look at connectedness, we also utilize the frequency connectedness framework of Baruník and Křehlík (2018).

The next section provides more detail on the subject and the motivation for why such research is essential. Later chapters detail the methodological choices and their alternatives.

1.1. Motivation and background

Financial connectedness is a somewhat elusive term with competing definitions and alternative measurements. Generally, when we refer to connectedness, we speak about the ways and the degree to which individual variables are linked: how they affect each other and by how much. In the previous section, we touched on asset and liability connectedness, for example. The concept of spillover is also related to connectedness. For example, a shock to one country's economy affecting a seemingly unrelated country's economy is referred to as the spillover effect. Spillovers can be thought of as a symptom of high connectedness. One would be surprised to find significant spillovers between Sweden and Peru, but might expect them between the much more closely connected Sweden and Finland. In this thesis, we define connectedness as the state of being connected to other variables, and measuring connectedness as measuring how interconnected those variables are, i.e. how much past values of variables affect the variation in other variables through existing connections. In our chosen methodology, this is done with variance decomposition, which measures how much information each variable contributes to the others. For the most part, we take no stand on what those connections and linkages are, for example whether there are specific asset- or liability-based connections between the Nordic banks.

Connectedness is also a critical factor in risk measurement. For example, return and portfolio connectedness relate to market risk, default connectedness to credit risk, connectedness due to contractual obligations to counterparty risk and, most importantly, system interconnectedness is a significant factor in systemic risk (Diebold and Yilmaz, 2014). The last is reflected by the fact that the Basel Committee recognizes connectedness as one of the five critical factors in identifying systemically important banks, the others being bank size, the degree of cross-jurisdictional activities, substitutability and complexity (Basel Committee on Banking Supervision, 2020). It is no wonder that the concept of “too connected to fail” exists and that connectedness measures continue to be proposed.

So why study volatility instead of, for example, returns? As we are especially interested in crises and the spread of shocks, volatility presents a better alternative as it is more crisis-sensitive, as noted by Diebold and Yilmaz (2009). Volatility can be thought of as an indicator of investor fears, and its connectedness as fear connectedness (Diebold and Yilmaz, 2015). These fear dynamics are of particular interest to us as we look at the time-varying nature of connectedness. Additionally, volatility dynamics are typically such that they support our methodological choice; this is explained later when the requirements of the generalized framework are discussed. Some attention is also paid to a relatively overlooked factor in measuring connectedness: the choice of volatility proxy.

The Nordics themselves are a group of relatively small export-dependent economies, which makes them vulnerable to global economic fluctuations. Their economies are also intertwined, as they make up a significant portion of each other's imports and exports (Ahoniemi and Putkuri, 2020). The region was susceptible to the financial and European debt crises, and Finland, Sweden and Denmark in particular suffered deep recessions. Since the crisis period, Nordic banks have generally been more profitable than their European counterparts, and their capital adequacy is comparatively strong (Koskinen, Putkuri, Pylkönen and Tölö, 2016). While Iceland is part of the Nordics, in this thesis the term refers specifically to the countries in the dataset: Finland, Sweden, Norway and Denmark. No Icelandic banks are included, as none of them remained publicly listed through and after the country's banking crisis.

While the current banking and financial system is stable, according to Koskinen et al. (2016) some structural vulnerabilities can pose a systemic threat. First of all, the banking sector is very large relative to the Nordic economies. For example, according to IMF data, Denmark, Sweden and Norway ranked in the top three in Europe in bank assets as a percentage of GDP in 2017 (TheGlobalEconomy.com, 2020). The sheer number of banks is also significant. Yet while there are many banks, the sector is relatively concentrated, as a few large banks dominate the system. Nordea, Danske Bank and Handelsbanken in particular hold significant market share in the four Nordic countries covered in this thesis. There are also other notable institutions with a more, although not fully, local focus, such as DNB, SEB and Swedbank. The third major vulnerability relates to a special characteristic of the Nordic banking sector: the importance of mortgage credit institutions and covered bonds, especially in the funding of housing loans, which make up a significant portion of the banks' balance sheets compared to European banks in general. This makes the system especially vulnerable to housing market risks (Koskinen et al., 2016). The COVID-19 pandemic has aggravated this vulnerability, as data indicated a decline in both property sales and prices (Ahoniemi and Putkuri, 2020). While Nordic covered bonds are considered safe, therein lies another vulnerability. Firstly, the banks are very dependent on market funding, as 35-45 % of their funding is market-based, which exposes the sector to changes in the global financial markets. The banks also play an important role as market makers for covered bonds, which has led to Nordic banks holding 20-30 % of them. Refinancing the bonds held by banks in the short-term money markets further subjects them to market disruptions. Finally, because a large portion of investors in the covered bonds are domestic, their systemic importance is further amplified. These are all also factors that increase the interconnectedness of the Nordic banking system. (Koskinen et al., 2016)


In any case, the Nordic countries are thought to be stable, with prudent financial regulation and supervision, which should mitigate risk. Paltalidis, Gounopoulos, Kizys and Koutelidakis (2015) show that systemic risk in the northern eurozone banking sector is less apparent, whereas the banking sector in the southern eurozone is more prone to bank failures due to contagion. While only Finland is part of the euro area, it is not far-fetched to assume that systemic risk in the other Nordics is similar. Even though the risk is not apparent, it does not mean there is none; one might argue that the unknown presents an even more significant threat than the known. In any case, attention needs to be paid to the Nordics as well: they form a culturally and economically tight-knit group of countries that could, at the very least, experience a local systemic crisis and, at worst, affect the broader European or global economy.

Such a connectedness study could also benefit a variety of interest groups. Connectedness, first of all, is inherently related to risk, especially of the systemic kind, which naturally is useful to risk managers and financial supervisors alike. Knowledge of the connectedness level can also benefit portfolio managers as they make diversification choices: highly interconnected assets make for poor diversification options. Heightened connectedness can also act as a signal for regulators to act, as it tends to accompany market turbulence. In any case, providing new information on the connectedness of the Nordic banking sector benefits many parties.

1.2. Research objectives

This thesis aims to assess and analyze the financial connectedness found within the Nordic banking system, between listed banks from Finland, Sweden, Norway and Denmark. The study's timeline spans from before the global financial crisis up until the very recent events related to the global Covid-19 pandemic and its economic consequences. This allows us to pay special attention to the over-time dynamics of connectedness, as the period contains many financial ups and downs. Aside from time, we also focus on the less well understood frequency dynamics of connectedness, to better understand at which frequencies connectedness is created and at which times. We also assess whether size is a factor in bank connectedness.

To summarize, we touch upon the time, frequency and size dynamics of connectedness and upon systemic risk factors in general. We also offer a Nordic perspective on these questions as a point of comparison to research done elsewhere. The main research questions are as follows:

1) How connected are the Nordic banks?

2) Is the connectedness between banks time-varying?

3) Which banks are the major exporters of volatility? Which are its primary receivers?

4) Is there a significant correlation between a bank’s size and its effect on others? Is there a correlation between size and a bank’s susceptibility to spillovers originating from others?

5) Is the connectedness higher within countries than across them?

6) Is most of the connectedness created at low, medium or high frequencies?

1.3. Scope of the study

This study focuses on the financial connectedness found in the Nordic banking sector, on the system level as well as from, to and between individual banks. The set of banks is limited to publicly listed banks in the Nordic countries. While this encompasses most major banks in the respective countries, it rules out some systemically important banks due to the lack of available share price data. For example, the Finnish OP Group is a non-listed co-operative bank, which holds the biggest domestic market share in many key categories: a 40 % share in private housing loans and corporate loans and a 39 % share of deposits (vs. the second-largest Nordea with 29, 30 and 27 % in those categories) (Suomen Pankki, 2020). This limitation also rules out any Icelandic banks, as none of them were publicly listed for a sufficiently long period to be included in the study. These exclusions might affect how representative of the Nordic banking landscape the results are. The full set of 9 banks is introduced later in the paper.

Timewise, the scope of this paper is focused on relatively recent history. More specifically, we focus on the period between 2.1.2004 and 4.9.2020. These limits are set to include the most up-to-date price observations and the most important events in recent history. Most importantly, the goal was to include the financial crisis and the great recession of 2007 to 2009 and the run-up to them. The European debt crisis also lands within this timeframe. By including the most recent available data, we can also assess the effects of the 2020 Covid-19 global pandemic on connectedness. Timeframe selection was constrained by bank listing dates: a very long period would limit the number of banks included, as some of them were listed later than others, whereas a shorter period with more banks might miss something in the time-varying nature of connectedness. The selected timeframe covers a sufficiently long period and allows us to include a decent number of banks.

As we choose to focus on return volatility connectedness and spillover, we face a key limitation. Since volatility itself is not observable in the same way as, for example, returns, we need to choose a way to estimate it. The estimation is done using a volatility proxy calculated from daily opening, closing, high and low prices. Because we estimate volatility, we may introduce additional measurement error into our connectedness measure. This is something we have to accept, and a later section delves deeper into volatility estimation and its accuracy.

Additionally, we are limited to daily data, as high-frequency intraday data was not available to us. We try to compensate for this by choosing a volatility estimation method that considers daily highs and lows, which are much more readily available. However, some intraday volatility might be missed by our estimation.

1.4. Outline of the paper

From this point on, the thesis is divided into five main sections. We start by providing context for the methodological choices of this thesis in the theoretical framework chapter, which covers literature and methodologies on volatility and its proxies and on the measurement of connectedness, with a focus on the spillover index framework of Diebold and Yilmaz (2009); finally, it introduces the frequency connectedness framework of Baruník and Křehlík (2018). The theoretical framework also provides important context for the literature review chapter that follows, which summarizes the existing literature on connectedness within the financial sector. Chapter 4 covers the data and methodology used in the thesis. First, the data is introduced and descriptive statistics are reported; the methodology section then details the methodological parameters and how the empirical results are obtained. In the fifth chapter, the empirical results are presented and analyzed. In the final chapter, conclusions are drawn from the results and the thesis is summarized. We also look at the implications of the study. Finally, some further lines of research are discussed.


2. THEORETICAL FRAMEWORK AND MODELS

We begin with a chapter on the theoretical framework, which introduces the tools and methodologies used in this thesis in particular and gives context to the broader field of connectedness and spillover research. The chapter is thus also relevant to the literature review conducted in chapter 3, where we summarize studies that use the same models and frameworks introduced here. The first section of this chapter concerns volatility, the variable in which we measure connectedness. After introducing volatility and its characteristics, we look at how this seemingly unobservable variable can be estimated and how a volatility proxy can be created for use in modeling. The second section concerns financial connectedness and the methodological choices for measuring it. After a wider look at alternative approaches, we focus on the connectedness framework of Diebold and Yilmaz (2009, 2012, 2014), which is the methodological choice of this thesis. Finally, we look at a more recent extension to that framework, the frequency connectedness framework of Baruník and Křehlík (2018).

2.1. Volatility and its proxies

To be able to measure volatility spillover in a particular system of variables using the methodology introduced by Diebold and Yilmaz in their 2009 and subsequent papers, we first need a way to model or otherwise represent the level of volatility over time. In practice, one needs a time series of, for example, daily, weekly or monthly volatilities.

Let us first define what volatility is, to give context to the variety of methods that could be used to produce the required time series. Put simply, volatility is a measure of the variation in a given instrument's returns over time. For a particular dataset, it measures the dispersion relative to its mean. With a volatile instrument, we expect the price to deviate further from the mean, whereas the value of a low-volatility instrument might barely change over time (Ursone, 2015). Risk is often closely associated with volatility, as a highly volatile instrument is more unpredictable than a low-volatility alternative and is therefore considered riskier. While risk is often thought of in terms of negative outcomes, volatility makes no distinction between the two directions (Poon, 2005). Risk can also be thought of as purely the existence of uncertainty.

Volatility has been the focus of countless scientific studies, and through extensive research several particular characteristics have been identified in asset returns. First of all, volatility tends to cluster, meaning there are periods where volatility is high and periods where it is low. This was first documented by Benoit Mandelbrot (1963, 418), who noticed that large price changes are not isolated but tend to be followed by more large changes of either sign. In summary, any movements are likely to be followed by movements of similar size.

The second important feature of volatility is its asymmetric nature: return volatility responds differently to a large price increase than to a similarly sized drop. Large positive returns do not affect overall volatility as much as negative returns (Bekaert and Wu, 2000). While there is no absolute consensus on the cause, there are alternative explanations for the phenomenon. One of these is the leverage effect hypothesis, which is based on the fact that negative returns increase financial leverage, which in turn increases risk and volatility (Christie, 1982). An alternative hypothesis is the volatility feedback effect, which relies on the fact that if volatility is priced and it increases, the required return of the underlying increases and, as a result, the asset price immediately drops (İnkaya and Yolcu Okur, 2014).

Other known characteristics are that volatility evolves in a continuous manner over time and that volatility does not diverge to infinity. The first refers to the fact that sudden jumps in volatility are rare, although when they happen, they often happen in a series, i.e. volatility clustering. The latter means that there exists a range in which volatility usually varies, and it is therefore often stationary (Tsay, 2002).

The above-mentioned properties of volatility often play an important role in models and measurements of volatility. A common approach to taking clustering into account is to use an ARCH-type model when modeling a price process. Some models have even been created specifically to address particular characteristics of volatility and to correct the modeling errors they might cause in more conventional approaches. For example, the EGARCH model was created to take asymmetric volatility into account (Tsay, 2002).

As the section title implies, we often use proxies to represent volatility. This is because volatility as such is not directly observable, unlike returns, for example. There is a wide variety of approaches for estimating and representing volatility, such as observation-based GARCH models, parameter-based stochastic models, implied volatility and realized volatility (Diebold and Yilmaz, 2015). Of these, we focus on estimating realized volatility. Optimally, we would use high-frequency intraday data; for example, with 5-minute returns we could calculate an efficient estimate of daily volatility (Tsay, 2002, 80). However, in the absence of high-frequency data, other alternatives need to be explored to achieve as accurate a measure of volatility as possible. Therefore, we focus on methods that utilize more readily available data such as daily open and close, as well as daily high and low prices.

The simplest way to calculate historic or so-called realized volatility is to calculate the standard deviation of daily returns. The standard deviation is often expressed as annualized volatility, which is calculated by multiplying the standard deviation by the square root of the number of trading days, commonly 252.

$$\sigma_{\text{annualized}} = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}} \times \sqrt{252} \qquad (1)$$

Annualized volatility is widely utilized as a volatility parameter, but it does not provide enough granularity for the purposes of this paper. To capture the effect of volatility spillover, we need to see immediate reactions in the volatilities. Even if we were to calculate annualized volatility on a rolling basis, daily, weekly or even monthly volatility changes would not be visible enough as the weight of a new observation entering into the sample would be minuscule. This formula can, however, be adapted for shorter periods and daily close-to-close volatility can be calculated with the following equation:

$$\sigma_{CC} = \sqrt{\frac{F}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2} \qquad (2)$$

where $F$ is the number of periods in a year, $N$ is the number of periods used in the estimation, $x_i$ is the log return calculated from adjacent closing prices and $\bar{x}$ is the mean return. As one can see, this is essentially the same as formula (1), applied over shorter periods.
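To make the estimator concrete, below is a minimal Python sketch of formulas (1) and (2). It is an illustration under our stated definitions, not a reference implementation; the function name and the example price series are hypothetical.

```python
import numpy as np

def close_to_close_vol(closes, F=252):
    """Close-to-close volatility, formulas (1)-(2): sample standard
    deviation of log returns scaled by sqrt(F) periods per year."""
    x = np.diff(np.log(closes))              # log returns from adjacent closes
    var = np.sum((x - x.mean()) ** 2) / (len(x) - 1)
    return np.sqrt(F * var)                  # annualized when F = 252

# Hypothetical example: a short closing-price series
closes = np.array([100.0, 101.5, 99.8, 100.7, 102.1])
print(close_to_close_vol(closes))
```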

We have now introduced one possible estimation method, which can produce the required proxy. There does not seem to be a generally agreed-upon practice on which method to use. Therefore, the major choices and their advantages and disadvantages are introduced in the following subsections. It is worth noting that even Diebold and Yilmaz, and the broader field of connectedness research, have chosen to use different proxies in different articles on the spillover methodology they introduced.

As volatility cannot be directly observed and high-frequency data has only recently become more accessible, several alternative methods have been proposed. Historically, many researchers have resorted to very simplistic ways of estimating volatility. Because a daily closing price series is perhaps the most readily available type of dataset, it is attractive to many researchers. This has led to the so-called close-to-close estimations of volatility, of which we already introduced one. Squared, or less commonly absolute, daily returns are an even simpler proxy for daily volatility (Poon, 2005). The use of squared returns has been criticized; for example, Lopez (2001) finds it problematic as a proxy because it is a “noisy”, imprecise estimator due to its asymmetric distribution. It might be palatable on an average basis, but on any specific day the estimate might be far from the truth.

A problem with using only one data point per day, the closing price, is that it completely ignores intraday volatility. Suppose there is a lot of volatility during the day, but the closing price ends very near the previous day's close; in such a case, we would see almost no volatility in the proxy. This could be addressed by using intraday squared returns, which would be a true measure of daily price movements. The problem is that, even today, high-frequency data is often expensive or not readily available. Because of this, close-to-close methods remain somewhat popular. In the context of this study, the optimal choice would be high-frequency tick data. Unfortunately, this was not feasible, as it was only available for part of the dataset and even then only for short periods.

Fortunately, we don’t have to settle for squared returns or any other estimate only based on closing prices. The range-based estimators introduced in the following sections represent better alternatives.

The earliest example of range-based estimation of volatility was introduced by Michael Parkinson (1980). In the method he referred to as the extreme value method, later dubbed high-low or Parkinson's estimation, we utilize the daily extreme values of a particular asset. The required daily high and low prices are readily available for a wide range of asset classes and listed securities, making Parkinson's estimation an attractive option. Citing a large body of previous literature as precedent, even Diebold and Yilmaz (2012) use Parkinson's estimation with their model. Parkinson himself noted that, depending on the method of measuring the difference, his extreme value method produces an estimate about 2.5 to 5 times more accurate than the traditional close-to-close method and is much more sensitive to variation in volatility (Parkinson, 1980). Parkinson's extreme value method can be expressed as below:

$$\sigma_{Park} = \sqrt{\frac{F}{N}}\sqrt{\frac{1}{4\ln(2)}\sum_{i=1}^{N}\left(\ln\frac{h_i}{l_i}\right)^2} \qquad (3)$$

where $F$ is the number of periods within a year, commonly the number of trading days 252, $N$ is the number of periods used in the estimation, $h_i$ is the intraday high on day $i$ and $l_i$ is the corresponding intraday low price.

A clear upgrade compared to the close-to-close methods is that Parkinson's model takes some intraday volatility into account. It does, however, still have shortcomings that are addressed by later models. For example, the resulting proxy does not capture close-to-open, i.e. overnight, volatility, and it assumes geometric Brownian motion with zero drift. Researchers such as Kunitomo (1992) and Fiszeder and Perczak (2013) have shown that alterations that allow for non-zero drift produce more efficient volatility estimates.
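Continuing the illustration, a hedged Python sketch of formula (3), under the same assumptions and hypothetical naming as the close-to-close snippet:

```python
import numpy as np

def parkinson_vol(high, low, F=252):
    """Parkinson's extreme value estimator, formula (3): built on the
    daily high/low range rather than closing prices."""
    hl = np.log(np.asarray(high) / np.asarray(low))   # ln(h_i / l_i)
    N = len(hl)
    return np.sqrt(F / N) * np.sqrt(np.sum(hl ** 2) / (4 * np.log(2)))
```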


The second major model for the estimation of volatility is the Garman-Klass model from 1980, the same year in which Parkinson released his research. The creators, Mark Garman and Michael Klass, aimed to build a model on universally accessible data, mainly that which was available in the newspapers of the time. To the high and low daily prices used by Parkinson, they added opening and closing prices as well. Their aim was not to create “the correct” model for asset fluctuation but to provide a proxy based on readily available data. Like Parkinson's model, the Garman-Klass model assumes geometric Brownian motion with zero drift, meaning price changes over any interval have a mean of zero. Although not perfect, this model can be up to eight times more efficient than the classical close-to-close methodology (Garman & Klass, 1980). The model can be formulated as follows:

$$\sigma_{GK} = \sqrt{\frac{F}{N}}\sqrt{\sum_{i=1}^{N}\left[\frac{1}{2}\left(\ln\frac{h_i}{l_i}\right)^2 - \left(2\ln(2)-1\right)\left(\ln\frac{c_i}{o_i}\right)^2\right]} \qquad (4)$$

where $F$ and $N$ are, as in the Parkinson formula, the number of periods within a year and the number of periods used in the estimation, $h_i$ is again the daily high and $l_i$ the daily low, and alongside these we have the closing price $c_i$ and the opening price $o_i$ of day $i$.

The Garman-Klass model has many of the same shortcomings as Parkinson's model. Even with the inclusion of open and close prices, it does not capture overnight volatility, which the authors also recognize. Assuming Brownian motion with zero drift also introduces bias when drift exists in the data.
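A Python sketch of formula (4) in the same hypothetical style; note the 1/2 weight on the range term, as in the standard Garman-Klass estimator:

```python
import numpy as np

def garman_klass_vol(open_, high, low, close, F=252):
    """Garman-Klass estimator, formula (4): high/low range term plus an
    open-to-close correction weighted by (2 ln 2 - 1)."""
    hl = np.log(np.asarray(high) / np.asarray(low))
    co = np.log(np.asarray(close) / np.asarray(open_))
    N = len(hl)
    per_day = 0.5 * hl ** 2 - (2 * np.log(2) - 1) * co ** 2
    return np.sqrt(F / N) * np.sqrt(np.sum(per_day))
```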

The next advancement in volatility estimation came from Rogers and Satchell (1991), who set out to address a shortcoming in the previously existing alternatives: their goal was a model that does not assume zero drift, using the same readily available data as the other models. Their estimator is expressed as:

$$\sigma_{RS} = \sqrt{\frac{F}{N}}\sqrt{\sum_{i=1}^{N}\left[\ln\left(\frac{h_i}{c_i}\right)\ln\left(\frac{h_i}{o_i}\right) + \ln\left(\frac{l_i}{c_i}\right)\ln\left(\frac{l_i}{o_i}\right)\right]} \qquad (5)$$

where $F$ is the number of yearly periods, $N$ is the number of periods used in the estimation, $h_i$ is the intraday high, $l_i$ the intraday low, $c_i$ the closing price of day $i$ and $o_i$ the opening price of the same day. As we can see, the data needed is the same as in the Garman-Klass volatility estimation method.

Rogers and Satchell (1991) find that their methodology outperforms the previous models when the underlying process follows geometric Brownian motion with a non-zero drift, a finding corroborated by others who have added drift terms to previous models (Kunitomo, 1992; Fiszeder and Perczak, 2013). While the Rogers-Satchell model addresses one shortcoming of the previous literature, it still ignores overnight price jumps between trading sessions.
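A Python sketch of formula (5), again with hypothetical naming:

```python
import numpy as np

def rogers_satchell_vol(open_, high, low, close, F=252):
    """Rogers-Satchell estimator, formula (5): drift-independent
    combination of the high and low ranges relative to open and close."""
    o, h, l, c = map(np.asarray, (open_, high, low, close))
    per_day = (np.log(h / c) * np.log(h / o)
               + np.log(l / c) * np.log(l / o))
    return np.sqrt(F / len(per_day)) * np.sqrt(np.sum(per_day))
```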

The final, and seemingly most comprehensive, widely used measure of realized volatility is the so-called Yang-Zhang estimator. Unlike the others, it can handle both non-zero drift and overnight jumps, and it produces the least variance among the introduced estimators (Yang and Zhang, 2000). Essentially, the produced proxy is the sum of the overnight volatility and a weighted average of the open-to-close and Rogers-Satchell volatilities (Bennett and Gil, 2012). The estimator is calculated as follows:

$$\sigma_{YZ} = \sqrt{F}\sqrt{\sigma^2_{\text{overnight}} + k\,\sigma^2_{\text{open-to-close}} + (1-k)\,\sigma^2_{RS}} \qquad (6)$$

where $F$ is again the number of trading days in a year, the overnight volatility is presented in formula (8), the open-to-close volatility in formula (9) and $\sigma_{RS}$ is the Rogers-Satchell estimate as seen in formula (5). Finally, the variable $k$ is calculated as:

$$k = \frac{0.34}{1.34 + \frac{N+1}{N-1}} \qquad (7)$$

where $N$ is the number of periods used in the estimation. The variable $k$ is used to minimize the variance of the volatility estimate, and in practice we use it to calculate a weighted average of the open-to-close volatility and the Rogers-Satchell volatility (Yang and Zhang, 2000).

The overnight volatility can be formulated as follows:

$$\sigma^2_{\text{overnight}} = \frac{1}{N-1}\sum_{i=1}^{N}\left(\ln\left(\frac{o_i}{c_{i-1}}\right) - \overline{\ln\left(\frac{o_i}{c_{i-1}}\right)}\right)^2 \qquad (8)$$

where $N$ is the same as in the formula for $k$, $o_i$ is the opening price on day $i$ and $c_{i-1}$ is the closing price of the previous day. The second term within the sum represents the mean of the overnight returns.

Finally, the open-to-close volatility can be formulated as follows:

$$\sigma^2_{\text{open-to-close}} = \frac{1}{N-1}\sum_{i=1}^{N}\left(\ln\left(\frac{c_i}{o_i}\right) - \overline{\ln\left(\frac{c_i}{o_i}\right)}\right)^2 \qquad (9)$$

where the notation is very similar to the overnight volatility formula, the difference being that we calculate the volatility from the opening to the closing price and not from the previous close to the following open.
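Putting formulas (6)-(9) together, a minimal Python sketch of the Yang-Zhang estimator; it assumes, following Yang and Zhang (2000), that the Rogers-Satchell component enters as an un-annualized variance, and the function name and inputs are again hypothetical:

```python
import numpy as np

def yang_zhang_vol(open_, high, low, close, F=252):
    """Yang-Zhang estimator, formulas (6)-(9): overnight variance plus a
    k-weighted average of open-to-close and Rogers-Satchell variance."""
    o, h, l, c = map(np.asarray, (open_, high, low, close))
    N = len(o) - 1                       # the first day has no previous close

    on = np.log(o[1:] / c[:-1])          # overnight log returns, formula (8)
    oc = np.log(c[1:] / o[1:])           # open-to-close log returns, formula (9)
    var_on = np.sum((on - on.mean()) ** 2) / (N - 1)
    var_oc = np.sum((oc - oc.mean()) ** 2) / (N - 1)

    # un-annualized Rogers-Satchell variance over the same N days
    rs = (np.log(h[1:] / c[1:]) * np.log(h[1:] / o[1:])
          + np.log(l[1:] / c[1:]) * np.log(l[1:] / o[1:]))
    var_rs = np.sum(rs) / N

    k = 0.34 / (1.34 + (N + 1) / (N - 1))    # formula (7)
    return np.sqrt(F) * np.sqrt(var_on + k * var_oc + (1 - k) * var_rs)
```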

Having introduced some of the most well-known volatility estimation methods, we are left with the decision of which to use. By looking purely at the features of the estimators, Yang-Zhang seems the most attractive, as it is the only choice that handles both drift and overnight jumps, both of which real-world data regularly exhibits. There are two standard metrics for assessing the quality of volatility measures: the efficiency and the bias of the measure. Efficiency is defined by Garman and Klass (1980) as the ratio of the variance of a benchmark estimator, commonly the close-to-close estimate, to the variance of the estimator in question. Therefore, by definition, close-to-close estimation has an efficiency of 1, against which all others are compared. Bias, in turn, refers to the difference between the estimated variance and the true average volatility; bias is caused, for example, by overlooking drift or overnight jumps when they exist in the data.

Table 1 contains a summary of the main range-based estimators of realized volatility. In the required data column, C stands for closing, O for opening, H for high and L for low price. The last column describes the maximum efficiency of each estimate. It is worth noting that excess efficiency decreases when we increase the sample size. Therefore, maximum efficiency is achieved in small samples.


Table 1: Summary of volatility estimation methods

Estimate          Required data   Handles drift?   Overnight jumps?   Efficiency (max)
Close-to-close    C               No               No                 1
Parkinson's       HL              No               No                 5.2
Garman-Klass      OHLC            No               No                 7.4
Rogers-Satchell   OHLC            Yes              No                 8
Yang-Zhang        OHLC            Yes              Yes                14

Note: Bennett and Gil, 2012

Based on all the factors mentioned above, we opt to use the Yang-Zhang estimator of realized volatility. The methodology does not have much precedent in connectedness studies, as most seem to choose either Parkinson's or Garman-Klass estimation, following the example set by Diebold and Yilmaz (2009, 2012). However, they give no substantiated reasons for the choice, and Yang-Zhang seems to be the more efficient proxy. In any case, we compare rolling connectedness estimates obtained with different volatility proxies in the result robustness section. To the best of our knowledge, such a comparison has not been shown previously.
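As a purely illustrative check, the sketches above can be run side by side on a simulated OHLC series; the data-generating process below is hypothetical and serves only to show how the proxies would be compared:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate a toy OHLC series from a log-price random walk
c = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))
o = np.r_[100.0, c[:-1]] * np.exp(rng.normal(0, 0.002, 500))
h = np.maximum(o, c) * np.exp(np.abs(rng.normal(0, 0.004, 500)))
l = np.minimum(o, c) * np.exp(-np.abs(rng.normal(0, 0.004, 500)))

for name, vol in [("close-to-close", close_to_close_vol(c)),
                  ("Parkinson", parkinson_vol(h, l)),
                  ("Garman-Klass", garman_klass_vol(o, h, l, c)),
                  ("Rogers-Satchell", rogers_satchell_vol(o, h, l, c)),
                  ("Yang-Zhang", yang_zhang_vol(o, h, l, c))]:
    print(f"{name:16s} annualized volatility: {vol:.3f}")
```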

2.2. Financial connectedness

At the core of this thesis are the concepts of financial connectedness and spillover. The goal is to measure and interpret connectedness within a specified system of variables, in this case the return volatilities of a set of Nordic banks. While we recognize that connectedness plays a significant role in the financial markets, there is no natural definition of connectedness and, therefore, no singular measure of it. Much has nonetheless been researched when it comes to financial interdependence, and there are multiple alternative methodologies to uncover and measure the effects different variables have on each other. They all have their purposes and limitations, but most do not present a comprehensive picture of connectedness, focusing instead on specific aspects. Some of these methodologies are summarized below, and finally the Diebold and Yilmaz framework, which promises a more comprehensive approach to connectedness, is introduced.

As an example, many papers approach return and volatility spillovers with multivariate GARCH models, a common choice being the bivariate GARCH BEKK model first introduced by Engle and Kroner (1995). These kinds of models are especially suited for measuring pairwise directional connections between variables. Saleem and Fedorova (2010), for example, investigate pairwise linkages in stock and currency markets in eastern Europe, and Maghyereh and Awartani (2012) measure return and volatility spillovers within the financial markets of Dubai and Abu Dhabi. While these pairwise connections are an important aspect of connectedness, they do not tell us how the variables interact with the system as a whole. Another shortcoming of multivariate GARCH models is that the coefficients of the estimated model are not directly interpretable, and there is no intuitive way of presenting them aside from tables, which grow significantly in size with additional lags and variables.

Aside from GARCH-based approaches, there are generally two types of common connectedness measures: correlation-based ones and more modern extremes-based ones that focus on tail events. Correlation is a well-known measure which, in terms of connectedness, mainly focuses on the pairwise relationship between two variables. In general, it is a linear and nondirectional measure of dependence, although some nonlinearity can be captured by conditional time-varying correlation. As with GARCH models, correlation in its typical uses is only a measure of a pairwise connection (Diebold and Yilmaz, 2015, 24-25). However, Engle and Kelly (2012) have proposed a way of aggregating correlations between multiple variables to achieve a system-wide correlation, the so-called equicorrelation, which can be thought of as a measure of system correlation, similar to the concept of total connectedness in the methodology used in this paper.

The second set of connectedness measures, the so-called extreme or tail-based measures, approaches connectedness at a high system level, whereas correlation is typically a low-level pairwise measure. These system connectedness measures are built upon the familiar notions of value-at-risk (VaR) and expected shortfall (ES), widely used especially by risk managers and financial supervisors. The first of these measures is the conditional value-at-risk (CoVaR, not to be confused with CVaR, which is also referred to as conditional value-at-risk, aka expected shortfall) of Adrian and Brunnermeier (2016). Essentially, CoVaR measures the contribution of a single firm to the VaR of a particular system. In the context of this thesis, it could be used to calculate how much risk a single bank contributes to the VaR of the whole set of Nordic banks.

On the other hand, the marginal expected shortfall or MES measures an individual firm's exposure to the entire market, meaning the expected loss of a particular firm when the whole market experiences extreme events (Idier, Lame and Mesonnier, 2014). These measures are similar to the “to others” and “from others” connectedness measures of the framework used in this paper. They are referred to as extreme or tail-based measures because they are usually calculated from the firm or system-specific tail of the distribution of profits and losses. Both CoVaR and MES are often used as measures for systemic risk.

As the previous sections outline, there are multiple ways to measure aspects of connectedness, be it pairwise or system-wide. However, a comprehensive measure of connectedness is attractive for in-depth examinations of interconnectivity within specified systems, as is the case in this thesis. Connectedness exists on many levels, and it is more straightforward to measure them with a unified framework than with a collection of different models that are not necessarily directly comparable.

In 2009, Diebold and Yilmaz proposed a simple and intuitive measure of financial connectedness and interdependence, dubbed by them the spillover index. Their measure of connectedness allows us to decompose system-wide connectedness into lower-level aspects of connectedness. We are able not only to look at directional pairwise connections, but also at connections of individual variables to the system and vice versa, and we are able to divide the system-wide connectedness into subsystems. Diebold and Yilmaz built upon and expanded their original framework in a series of papers and a book (Diebold and Yilmaz, 2009, 2012, 2014, 2015).

At its core, the spillover index framework is based on the vector autoregression (VAR) framework of Sims (1980). VAR is a multivariate linear autoregression model where each of $n$ variables is explained not only by $k$ of its own past values, lags, but also by the $k$ past values of all the other variables in the model. Usually, we are not directly interested in the VAR model results: the coefficients, their significances or $R^2$ statistics. Due to the complicated dynamics of VAR, they do not have straightforward implications. Thus the model is often interpreted through Granger-causality tests, impulse response functions or variance decompositions (Stock and Watson, 2001). Of these, the spillover index framework is built upon the variance decomposition of VAR.

Variance decomposition, also commonly referred to as forecast error variance decomposition (FEVD), is a tool for interpreting variable relationships in fitted VAR models. It essentially tells us how much of the forecast error variance of a variable is caused by shocks to the other variables in the VAR model. The variance is often analyzed for multiple forecast horizons, as some variables might affect others more in the long term than in the short (Lütkepohl, 2012). By aggregating these connections between variables, Diebold and Yilmaz (2009) published the first iteration of their spillover index framework. Essentially, for each variable $i$ we sum up the shares of its forecast error variance coming from shocks to variables $j$, where $i \neq j$, and finally we add across all the variables. For obtaining the variance decomposition, the original methodology relies on so-called Cholesky factorization, which, like VAR, traces back to Sims (1980). The downside of the Cholesky method is that it is not invariant to variable ordering in the VAR, meaning the results are sensitive to the order of the variables in the dataset. While Diebold and Yilmaz (2015) note that the range of total spillover across different orderings is usually quite small, they recognized the need for an approach that produces the same result for any order of variables. Directional connectedness is also more sensitive to ordering (Diebold and Yilmaz, 2015).

To address the issue of variable ordering, we look to the generalized forecast error variance decomposition (GFEVD) of Pesaran and Shin (1998), built in turn on the generalized impulse response of Koop, Pesaran and Potter (1996). Based on this, Diebold and Yilmaz (2012) proposed an order-invariant framework which, alongside total connectedness measures, includes robust directional measures for spillovers. We have to note that the generalized variance decomposition does not come without drawbacks. While it is invariant to ordering, it introduces a requirement of normality. Assessing spillovers of returns, which are rarely normally distributed, is therefore better done with Cholesky factorization. GFEVD is better suited for log-return volatilities, the focus of this thesis, which are well approximated as Gaussian (Diebold & Yilmaz, 2015).

Now that we have introduced the foundation of the Diebold and Yilmaz spillover index framework, let us move on to how the model is actually calculated. As mentioned, the process begins with an $N$-variable autoregression with $p$ lags. We write the covariance stationary VAR($p$), based on the notation in the 2012 Diebold and Yilmaz paper, as:

$$x_t = \sum_{i=1}^{p} \Phi_i x_{t-i} + \varepsilon_t \qquad (10)$$

where $\Phi_1, \dots, \Phi_p$ are coefficient matrices and $\varepsilon_t \sim (0, \Sigma)$ is the error process, which contains identically and independently distributed disturbances. Essentially, we have a regression model where each variable is explained by $p$ lags of all variables, its own and all the others'. The infinite-order moving average representation of the VAR process is as follows:

$$x_t = \sum_{i=0}^{\infty} A_i \varepsilon_{t-i} \qquad (11)$$

where the $A_i$ are $N \times N$ coefficient matrices that follow the recursion $A_i = \Phi_1 A_{i-1} + \Phi_2 A_{i-2} + \dots + \Phi_p A_{i-p}$, with $A_0$ an $N \times N$ identity matrix and $A_i = 0$ for $i < 0$. The coefficients of the moving average are crucial to recognizing the key system dynamics. For this, we use the forecast error variance decomposition to identify forecast error variances arising from shocks to the variables in the system. To construct the order-invariant spillover index, we use the generalized forecast error variance decomposition framework.

Variance decomposition allows us to separate the $H$-step-ahead forecast error variance into own variance and cross variance, the latter also known as spillover. We define the first as the share of the error variance in forecasting variable $x_i$ caused by shocks to itself, and the latter as the share of forecast error variance caused by shocks to $x_j$, where $j = 1, 2, \dots, N$ and $i \neq j$.

Using the GFEVD of Pesaran and Shin (1998), we create an $H$-step-ahead variance decomposition matrix $D^{gH} = [d_{ij}^{gH}]$ in which the entries are calculated as:

$$d_{ij}^{gH} = \frac{\sigma_{jj}^{-1}\sum_{h=0}^{H-1}\left(e_i' A_h \Sigma e_j\right)^2}{\sum_{h=0}^{H-1}\left(e_i' A_h \Sigma A_h' e_i\right)} \qquad (12)$$

where $\Sigma$ is the variance matrix of the error term vector $\varepsilon$, $\sigma_{jj}$ represents the standard deviation of its $j$th equation and $e_i$ is the selection vector with zeros everywhere except for a one as the $i$th element. Finally, $A_h$ is the matrix of moving average coefficients at lag $h$.

The resulting matrix contains, for each variable, the own variance share and the individual cross variance shares of forecast error variance coming from each variable in the system.

As the shocks in the GFEVD environment need not be orthogonal, the variance contributions (own and cross) do not always sum to one, as they do with the standard Cholesky factorization method. To address this, Diebold and Yilmaz (2012) suggest normalizing each row by its row sum. Therefore, instead of the matrix $D^{gH} = [d_{ij}^{gH}]$, we use $\tilde{D}^{gH} = [\tilde{d}_{ij}^{gH}]$ for the spillover index, where:

$$\tilde{d}_{ij}^{gH} = \frac{d_{ij}^{gH}}{\sum_{j=1}^{N} d_{ij}^{gH}} \qquad (13)$$

Due to the normalization, $\sum_{j=1}^{N}\tilde{d}_{ij}^{g} = 1$ and $\sum_{i,j=1}^{N}\tilde{d}_{ij}^{g} = N$, because we have $N$ rows that each sum to one. We can now use the matrix $\tilde{D}^{gH}$ to construct a so-called connectedness table and to calculate the spillover index measures.
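A sketch of formulas (12) and (13), reusing the A_h matrices and Sigma from the VAR sketch above; again an illustration, not a reference implementation:

```python
import numpy as np

def gfevd(A, Sigma, H):
    """Generalized FEVD, formulas (12)-(13): returns the row-normalized
    H-step variance decomposition matrix D-tilde."""
    N = Sigma.shape[0]
    diag_inv = 1.0 / np.diag(Sigma)              # sigma_jj^{-1} for each shock j
    num = np.zeros((N, N))
    den = np.zeros(N)
    for h in range(H):
        AS = A[h] @ Sigma                        # entry (i, j): e_i' A_h Sigma e_j
        num += AS ** 2 * diag_inv[None, :]       # numerator terms of formula (12)
        den += np.diag(A[h] @ Sigma @ A[h].T)    # denominator: e_i' A_h Sigma A_h' e_i
    D = num / den[:, None]                       # formula (12)
    return D / D.sum(axis=1, keepdims=True)      # row normalization, formula (13)

# Hypothetical usage with the earlier sketch:
# A, Sigma = var_ma_coefs(vols, p=4, H=10)
# D = gfevd(A, Sigma, H=10)
```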

From the results of the variance decomposition, we create a connectedness table. A basic schematic of such a table can be seen below in table 2. The connectedness table is at the core of the framework, and from it we can derive the measures of connectedness. In the upper-left of the table, we have the $N \times N$ matrix $\tilde{D}^{gH}$, which contains the normalized results of the forecast error variance decomposition. The left-most column, containing $x_1, x_2, \dots, x_N$, describes the destination of the spillover, and the top row, likewise containing $x_1, x_2, \dots, x_N$, its source. The diagonal elements, where $i = j$, contain the own variance shares of the forecast error variance. The "From others" column contains the sums of the off-diagonal elements ($i \neq j$) in each row, and together with the diagonal element each row sums to one.

Table 2: Connectedness table schematic

            x_1         x_2         ...   x_N         From others
x_1         d_11        d_12        ...   d_1N        Σ_j d_1j, j ≠ 1
x_2         d_21        d_22        ...   d_2N        Σ_j d_2j, j ≠ 2
...         ...         ...         ...   ...         ...
x_N         d_N1        d_N2        ...   d_NN        Σ_j d_Nj, j ≠ N
To others   Σ_i d_i1    Σ_i d_i2    ...   Σ_i d_iN    (1/N) Σ_{i,j} d_ij
            i ≠ 1       i ≠ 2             i ≠ N       i ≠ j

Note: Diebold and Yilmaz, 2015

In turn, the bottom "To others" row contains the sums of the off-diagonal elements in each column. Finally, the bottom-right element contains the grand average of all off-diagonal elements in the matrix.

Now let us define how the different connectedness measures are calculated from the variance decomposition matrix. The degree of pairwise directional connectedness is the simplest of the measures to interpret, as it is directly included in the variance decomposition matrix; we have already defined how it is calculated in equations (12) and (13). We can abbreviate the $\tilde{d}_{ij}^{gH}$ notation to a more intuitive connectedness notation $C_{i \leftarrow j}^{H}$, where $C$ stands for connectedness, $H$ is the $H$-step forecast horizon and the subscript describes the source and destination of shocks (Diebold and Yilmaz, 2015). We can read the same information from the table as:

$$C_{i \leftarrow j} = d_{ij} \qquad (14)$$

where $i$ is the row number and $j$ is the column. For example, the cross variance share from $x_2$ to $x_1$ is $d_{12}$. To use a trade analogy, this is similar to imports and exports: $x_1$ imports $d_{12}$ volatility from $x_2$ (Diebold and Yilmaz, 2015). Note that $C_{i \leftarrow j} \neq C_{j \leftarrow i}$. We can calculate the "balance of trade" between them, the net pairwise directional connectedness, as:

$$C_{ij} = C_{j \leftarrow i} - C_{i \leftarrow j} \qquad (15)$$
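As a final illustration, a sketch of reading the connectedness measures off the normalized matrix, including the net pairwise measure of formula (15); the total-connectedness line corresponds to the grand average in the bottom-right of Table 2:

```python
import numpy as np

def connectedness_measures(D):
    """Connectedness table quantities from the normalized GFEVD matrix D,
    with rows as spillover destinations and columns as sources."""
    N = D.shape[0]
    off = D - np.diag(np.diag(D))        # zero out the own-variance shares
    from_others = off.sum(axis=1)        # row sums: spillover received by each i
    to_others = off.sum(axis=0)          # column sums: spillover emitted by each j
    net = to_others - from_others        # net directional connectedness
    total = off.sum() / N                # grand average of off-diagonal elements
    net_pairwise = D.T - D               # formula (15): C_ij = C_{j<-i} - C_{i<-j}
    return total, from_others, to_others, net, net_pairwise
```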
