Canada Research Chair in Risk Management Scientific Program by Georges Dionne

Scientific program

  • Research report for the project "Applications of machine learning in finance and economics"

      SSHRC grant 435-2019-1183 (2019-2024)

One goal of our research program was to analyze how machine learning can be used in modern financial markets. Our main applications were in bank securitization and high-frequency trading (HFT). Machine learning develops statistical algorithms that can learn new links between variables in large data environments and make potentially more refined predictions without explicit programming. Our main research activity under this goal was to use machine learning to develop algorithms that improve price forecasts in HFT. The other main research subjects were credit risk, insurance pricing, causality tests, insurance, and reinsurance.
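
As an illustration only (the report does not detail the Chair's algorithms), the following minimal Python sketch conveys the flavor of such a forecasting exercise: a random forest predicting the direction of the next mid-price move from simple, hypothetical limit-order-book features on simulated data.

```python
# Illustrative sketch, not the Chair's actual models: predict the direction
# of the next mid-price move from simple limit-order-book features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
# Hypothetical features: order-book imbalance, bid-ask spread, last return.
imbalance = rng.uniform(-1, 1, n)
spread = rng.exponential(0.01, n)
last_ret = rng.normal(0, 0.001, n)
X = np.column_stack([imbalance, spread, last_ret])
# Synthetic label: imbalance weakly predicts the direction of the next move.
y = (imbalance + rng.normal(0, 0.5, n) > 0).astype(int)

# Keep time order when splitting, as one would with market data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"out-of-sample hit rate: {clf.score(X_te, y_te):.3f}")
```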

We published two articles on HFT. The first analysed arbitrage in cross-listed stocks between the NYSE and the TSX, and the second studied arbitrage opportunities between Frankfurt and London. Cédric Poutré wrote his PhD thesis on the subject (Poutré, Cédric, Limit order books in statistical arbitrage and anomaly detection, Université de Montréal, July 2023; M. Morales, codirector), and we published the results in very good journals:

Poutré, C., Dionne, G., Yergeau, G., The profitability of lead-lag arbitrage at high frequency, International Journal of Forecasting 40, 3, 1002-1021, July-September 2024.

Poutré, C., Dionne, G., Yergeau, G., International high-frequency arbitrage for cross-listed stocks, International Review of Financial Analysis 89, article 102777, October 2023.

Yann Bilodeau also wrote a PhD thesis on HFT, but without machine learning: High-frequency data: Information processing and financial analysis, HEC Montréal, December 2021. We published two other articles on the management of the limit order book with HFT: Cenesizoglu, T., Dionne, G., Zhou, X., Asymmetric effects of the limit order book on price dynamics, Journal of Empirical Finance 65, 77-98, December 2021; and The dynamics of ex-ante weighted spread: An empirical analysis, Quantitative Finance 20, 4, 593-617, March 2020.

Our second main research topic was asymmetric information tests in bank securitization and insurance markets. Asymmetric information means that some market participants have more information than others; this asymmetry may cause market failures and inequity in market transactions. Helmi Jedidi completed his PhD thesis on the subject: Jedidi, Helmi, Information asymmetry in the mortgage servicing market, HEC Montréal, May 2020. He developed a nonparametric causality test showing that moral hazard introduced inefficiencies in banks' securitization. We also published an article on road safety in China with a postdoctoral fellow: Dionne, G., Liu, Y., Effects of insurance incentives on road safety: Evidence from a natural experiment in China, Scandinavian Journal of Economics 123, 2, 453–477, April 2021.

We extended our contributions in risk management. In particular, we improved the backtesting literature on Value at Risk (VaR) and Conditional Value at Risk (CVaR) by showing that fat-tailed statistical distributions must be used for market risk with data from the last financial crisis. We also analysed Expected Shortfall computation in portfolio risk management (a sketch of a fat-tailed VaR backtest follows the two references below). Two articles were published:

Saissi Hassani, S., Dionne, G., Using skewed exponential power mixture for VaR and CVaR forecasts to comply with market risk regulation, Journal of Risk 25, 6, August 2023.

Fortin, A.P., Simonato, J.G., Dionne, G., Forecasting expected shortfall: Should we use a multivariate model for stock market factors?, International Journal of Forecasting 39, 1, 314-331, January-March 2023.
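
For readers unfamiliar with such backtests, here is a minimal sketch. It assumes a Student-t distribution as a simple fat-tailed stand-in for the skewed exponential power mixture used in the published paper, and applies Kupiec's unconditional coverage test to simulated returns.

```python
# Hedged sketch: fat-tailed (Student-t) VaR plus Kupiec's unconditional
# coverage backtest.  The published papers use a skewed exponential power
# mixture; the Student t here is only a simple fat-tailed stand-in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns = stats.t.rvs(df=4, scale=0.01, size=1500, random_state=rng)

p = 0.01                                   # 1% VaR level
df, loc, scale = stats.t.fit(returns)      # fit the fat-tailed distribution
var_1pct = stats.t.ppf(p, df, loc, scale)  # left-tail quantile (negative return)

# Kupiec (1995) test: are violations consistent with a Binomial(T, p)?
# (Assumes at least one violation; near-certain with 1,500 observations.)
viol = returns < var_1pct
T, x = len(returns), int(viol.sum())
pi_hat = x / T
lr_uc = -2 * ((T - x) * np.log(1 - p) + x * np.log(p)
              - (T - x) * np.log(1 - pi_hat) - x * np.log(pi_hat))
pval = stats.chi2.sf(lr_uc, df=1)
print(f"violations: {x}/{T}, LR_uc = {lr_uc:.2f}, p-value = {pval:.3f}")
```

A full market-risk backtest would also examine the clustering of violations (conditional coverage), but the unconditional test above captures the core idea.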

We showed how joint hedging may increase firm value compared with hedging two assets independently. This research was completed with a master's student and a research assistant. An article was published:

Dionne, G., El Hraiki, R., Mnasri, M., Determinants and real effects of joint hedging: An empirical analysis of US oil and gas producers, Energy Economics 124, article 106801, June 2023.

With colleagues in Management Science, we studied liquidity in Credit Default Swaps (CDS). CDS are usually associated with credit risk, but they became subject to liquidity risk during the last financial crisis, which affected the management of basis risk. Sahar Guesmi wrote a PhD thesis on the subject: Guesmi, Sahar, Essays on the CDS-bond basis, HEC Montréal, June 2019. We also published an article on CDS management: Akari, M.A., Ben-Abdallah, R., Breton, M., Dionne, G., The impact of central clearing on the market for single-name credit default swaps, North American Journal of Economics and Finance 56, article 101346, April 2021.

We continued our research on accident distributions and insurance pricing:

Desjardins, D., Dionne, G., Lu, Y., Hierarchical random-effects model for insurance pricing of vehicles belonging to a fleet, Journal of Applied Econometrics 38, 2, 242-259, March 2023.

Dionne, G., Desjardins, D., Angers, J.F., Road safety for fleets of vehicles, International Journal of Banking, Finance and Insurance Technologies 1, 1, 31-59, October 2021.

On climate risk: Dionne, G., Desjardins, D., A re-examination of the U.S. insurance market’s capacity to pay catastrophe losses, Risk Management and Insurance Review 25, 4, 515-549, December 2022.

On portfolio risk management: Koumou, G.B., Dionne, G., Coherent diversification measures in portfolio theory: An axiomatic foundation, Risks 10, 11, October 2022.

On reinsurance demand: Desjardins, D., Dionne, G., Koné, N., Reinsurance demand and liquidity creation: A search for bicausality, Journal of Empirical Finance 66, 137-154, January 2022.

Cummins, J.D., Dionne, G., Gagné, R., Nouira, A., The costs and benefits of reinsurance, Geneva Papers on Risk and Insurance – Issues and Practice 46, 177-199, March 2021.

On insurance fraud: Raphael Zerbato is preparing a PhD thesis on the use of machine learning for detecting insurance fraud in an insurer's claims portfolio.

  • Scientific program of the Canada Research Chair in Risk Management

 

Over recent years, we have observed a significant increase in collective and social risks: the risk of natural disasters has greatly increased; the economic and social costs of ecological risks are reaching record levels; and risks linked to food consumption are a concern for several populations. Nor must we forget the September 11 events, which took us into new territory because they were premeditated. Private risks have also increased substantially. The stock market has been very volatile, particularly for stocks related to the new economy, and liquidity risk became very important during the last financial crisis. These trends may be associated with new non-human, exogenous natural phenomena, or they may be considered endogenous, the results of poor private and social management of risks. They may, moreover, be linked to private and social choices that put the concern for prevention far too low on the policy agenda. They are also associated with poor private and public governance.

 

We have been involved in risk analysis for more than three decades. Our current research activities are primarily related to information problems, road safety, decision-making under uncertainty, portfolio choice, integrated risk management, environmental risk, liquidity risk, credit risk, insurance fraud, risk-management regulation, and risk-management governance. Most of our work is empirical, but some of the models developed are primarily or solely theoretical. In the following pages, we address the most significant projects related to our future research on risk management. Work on these issues will be stimulated and broadened in the academic environment of a Canada Research Chair in Risk Management.

The new research program can be summarized as follows. Liquidity risk currently prevails in many markets, high-frequency trading is strongly present on all exchanges, and securitization risks are not well understood. The goal of our new research program is to develop optimal models to improve the management of these risks. Our objectives are to measure liquidity risk more precisely, to isolate the welfare implications of high-frequency trading, and to enhance the financial stability of securitization. We will continue to investigate information problems and to study credit and operational risks. By integrating graduate students into our activities, we can enrich their educational experience and better equip them to become leaders in risk management.

 

  • School’s strategic plan

 

The proposed research program is integrated into HEC Montréal's strategic research plan. It will help HEC Montréal improve its position in risk management by proposing projects in two major areas: 1) the production of wealth and the optimal use of resources, and 2) the market and consumers. Risk management is a field of finance of crucial importance for HEC Montréal, which is now at the forefront of teaching, research, and practice in this field. Since 1996, HEC Montréal has invested significant resources in the development of risk management. The School wants to be very well positioned on the international scene, second to no other Canadian university in this field.

 

  • Research to better understand the decision-making process

 

Individuals and businesses are often called upon to make decisions without knowing all the associated risks involved. How are decisions made under such conditions of uncertainty? What are the factors enabling individuals to manage their risks optimally? The Chair will be looking into all these aspects of risk management.

 

Based on the latest psychological findings on decision-making in a climate of uncertainty (cognitive processes and dissonance), the Chair is striving to better understand how individuals' decisions are made and to find tools to measure the factors that determine whether individuals will manage their risks in an optimal fashion. Decisions about choosing to play a lottery, to drive while inebriated, or simply to smoke are all excellent examples of this research topic. We also contributed to the definition of first-order risk in a recent article in the Journal of Economic Theory (2014).

 

  • Statistical measurement of information problems

 

Wherever risks arise—whether in the environment, transportation, health care, the workplace or financial markets—problems of asymmetric information abound. Over recent years, our principal contributions have focused on the empirical study of such problems in various markets. The methodologies we have developed enable us to identify, isolate, and estimate the effects of moral hazard and adverse selection on the allocation of resources (Review of Economics and Statistics, 1989, 1991, 2001, 2011; Journal of Political Economy, 1994, 2001; Journal of Risk and Uncertainty, 2002; Review of Economic Studies, 2009; Journal of the European Economic Association, 2013; Canadian Journal of Economics, 2015).

 

There are two well-known information problems that feature prominently in the economic literature: moral hazard and adverse selection. These two problems occur in most markets but, over the past twenty years, they have drawn particularly sharp attention in insurance and financial markets. One important two-part question is the following: Are these problems truly significant, and do they really affect the performance of markets? Before answering this question, let us go back over the classical definitions of these two information problems. Adverse selection exists because one party to the insurance or financial contract cannot observe the other party's risk. To remedy this, the insurer or the banker falls back on risk classification and on whatever can be surmised about risks from the choices of contracts or riders. The presence of adverse selection can thus be observed to affect the forms of contracts.

 

Moral hazard, for its part, is associated with the non-observable behaviors of policyholders. There are two forms of moral hazard, depending on whether the non-observable actions affecting contract outcomes occur before or after the random event. Ex-ante moral hazard is more closely linked to activities aimed at preventing accidents, whereas ex-post moral hazard involves the statements of individuals about accidents or results. This second form of moral hazard is now associated with insurance fraud or bank operational risk. Under both forms of moral hazard, it is the form of the contract that may affect risk behavior and incentives.

 

For a researcher, it is difficult to know which of the three problems (adverse selection, ex-ante moral hazard, ex-post moral hazard) weighs most heavily in the portfolio of an insurer or banker, because the researcher has no information at his disposal other than that available to the financial institution being studied. Usually open to observation are the contracts chosen, the claims or financial distresses in play, and the risk classification variables used. Over the past twenty-five years, much progress has been made with regard to measuring the residual information problems present in different markets (see Chiappori and Salanié, 2013, and Dionne, 2013, for surveys). For example, it has been ascertained that risk classification variables are efficient in controlling adverse selection. The results are less spectacular for moral hazard, a more difficult problem to isolate because it arises from endogenous behaviors in constant evolution, unlike adverse selection, where the characteristics of individuals are assumed exogenous and stable.
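
A minimal sketch of this kind of test, in the spirit of the Chiappori-Salanié (2000) positive correlation test (the exact procedures in our papers differ), on simulated data:

```python
# Hedged sketch of a positive correlation test: after conditioning on the
# risk classes the insurer uses, is residual contract choice still
# correlated with residual risk?  Simulated data satisfy the null here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
X = sm.add_constant(rng.normal(size=(n, 3)))       # risk classification variables
coverage = (X @ [0.2, 0.5, -0.3, 0.1] + rng.normal(size=n) > 0).astype(int)
claim = (X @ [-1.0, 0.4, 0.2, -0.2] + rng.normal(size=n) > 0).astype(int)

# Probit residuals for contract choice and for risk occurrence.
res_cov = coverage - sm.Probit(coverage, X).fit(disp=0).predict(X)
res_clm = claim - sm.Probit(claim, X).fit(disp=0).predict(X)

# Under the null of no residual asymmetric information the correlation is 0.
rho = np.corrcoef(res_cov, res_clm)[0, 1]
z = rho * np.sqrt(n)                               # rough large-sample z-stat
print(f"residual correlation = {rho:.4f}, z ~ {z:.2f}")
```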

In the coming years, we plan to apply the models developed to financial contracts and, more particularly, to high-frequency trading. It is now documented that standard debt is not optimal when projects are very risky and when their results depend on the non-observable behavior of many decision-makers. Convertible debt, warrants, and even stocks are used to finance projects, depending on their level of risk. But true risk is not perfectly observable ex ante, and this raises an interesting question: Can these different forms of contract somehow be used as lenses to reveal risk ex ante? We are analyzing this question using a data set obtained in collaboration with a large venture capitalist.

 

We also plan to extend the current statistical tests to high frequency trading activities and securitization.

 

The presence of moral hazard has been highly suspected in the securitization market since the recent financial crisis, although no formal test with an appropriate methodology has yet been applied. This analysis is not directly related to HFT because assets from securitization are not yet traded in this way. However, the project requires high-performance computers that can process very large databases very fast, hence the new infrastructure. The recent financial crisis was caused, in part, by particular characteristics of structured finance, such as agency problems in the securitization market. In their securitization activities, banks and mortgage brokers had little incentive to be vigilant and carefully monitor borrowers' risk because a large proportion (often 100%) of loans' risks were securitized without optimal contracting clauses under potential moral hazard and adverse selection.

The first objective of our research consists of extending the incentive contracting of securitized products under moral hazard in relation to the incentive structure of banks. Potential adverse selection will also be considered. One major theoretical contribution will be to take into account correlations between assets in the original pool of contracts. These correlations were misevaluated before and during the 2007-2009 financial crisis. We already have theoretical results on optimal retention in the presence of moral hazard, but our models do not presently consider systemic risk and adverse selection (Malekan and Dionne, 2014; Dionne and Malekan, 2015).

 

The second objective is to measure the significance of residual moral hazard in the securitization market. Asymmetric information between participants in these markets could be the source of significant resource misallocation. Rejecting the null hypothesis that there is no moral hazard in securitization activity means there is room to improve contract design in this economic sector. Recent regulations in the USA and Europe propose a flat retention rate at the equity level or in each tranche of the securitization product, such as a CDO. It is not clear that such a regulation scheme is optimal because it does not take into account potential moral hazard on loss severity or loss given default. As shown by Dionne and Malekan (2015), a proportional contract like a coinsurance contract may be preferable when the actions of the bank affect loss severity. As in any field of research, the tests performed must be closely related to theory (Chiappori, 2000; Chiappori and Salanié, 2013) and data must be of high quality. Over the last two years, we have prepared a new dataset for this research. Our loan-level data come from MBSData LLC, a provider of data on mortgage-backed securities. The database is mainly divided into static and dynamic datasets: the former contains detailed information on loan origination obtained at the time of original underwriting, while the latter provides historical information on the original deal, updated monthly and including default information.
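
As an illustration of how static and dynamic files can be combined, here is a hedged pandas sketch; the column names (loan_id, fico, status codes, etc.) are hypothetical, not the vendor's actual schema.

```python
# Hedged sketch of combining MBSData-style loan files.  Column names and
# status codes are hypothetical illustrations, not the vendor's schema.
import pandas as pd

static = pd.DataFrame({                     # one row per loan at origination
    "loan_id": [1, 2, 3],
    "fico": [720, 650, 590],
    "ltv": [0.75, 0.90, 0.95],
    "securitized_share": [1.0, 1.0, 0.8],
})
dynamic = pd.DataFrame({                    # one row per loan per month
    "loan_id": [1, 1, 2, 2, 3, 3],
    "month": pd.to_datetime(["2007-01", "2007-02"] * 3, format="%Y-%m"),
    "status": ["C", "C", "C", "90+", "C", "F"],  # C=current, F=foreclosure
})

# Flag loans that ever hit serious delinquency, then join origination traits.
ever_default = (dynamic.assign(bad=dynamic["status"].isin(["90+", "F"]))
                       .groupby("loan_id")["bad"].max()
                       .rename("default").reset_index())
panel = static.merge(ever_default, on="loan_id")
print(panel)
```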

 

  • Risk management in financial and non-financial firms

 

Over recent years, we have also been involved in many activities concerned with firms' risk management. In joint work with graduate students, we have extended Tufano's database (Journal of Finance, 1996) to analyze more carefully the determinants related to the maximization of a firm's value. We have obtained very good preliminary results, recently published in Economics Letters (2003); all the theoretical determinants except one prove to be significant. We have also investigated the effect of risk management on firm value.

 

Up to the early 1990s, risk management in non-financial firms was limited to the demand for insurance. The globalization of markets and the increased volatility of interest rates, exchange rates, commodity prices, and the prices of several resources such as petroleum, natural gas, and gold have pushed non-financial firms to develop risk-management activities. Besides managers' risk aversion, several determining factors associated with the maximization of the firm's value have been considered to explain the varying intensity of risk-management activities. For example, Dionne and Garand (2003) have shown that gold producers who hedge against fluctuations in the price of gold are those who experience the greatest financial difficulties, pay the highest taxes, and are the largest producers. Managers with more shares or actual funds in firms are more active hedgers, but those with options are less so, since the values of such options increase with the volatility of the underlying asset or the price of gold (Tufano, 1996; Dionne and Triki, 2013). This result follows directly from the Black-Scholes formula: higher volatility increases the probability that the option will end up in the money.

One particular difficulty we plan to work on in the coming years is finding ways to separate hedging from speculation. More fundamentally, the different results appear sensitive to the methodology used (Dionne and Triki, 2013): the significant determinants vary with the econometric specification and with the way endogeneity problems are controlled. For example, it has often been proposed in the literature that expected financial distress costs may affect hedging decisions because they reduce the firm's value, and many researchers use debt to approximate these costs as a determinant of risk management. But risk management may itself affect the firm's capital structure, since firms that manage their risks gain access to more debt. Moreover, a manager's decision to hedge his firm's risk may be driven by his own portfolio of stocks, and more particularly of stock options, in this firm; but the decision to hold options may, in turn, be affected by the firm's risk-management policy. These interactions must be analyzed jointly, and we are planning to model the risk-management decision, debt policy, and the number of options held by managers as a system of three simultaneous equations with panel data (a stylized sketch of such an instrumented system follows below).
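
A stylized sketch of the idea, assuming simulated data and illustrative instruments; it estimates only the hedging equation by two-stage least squares, whereas the planned work would estimate the three equations jointly on panel data.

```python
# Hedged sketch: hedging, leverage, and managerial option holdings are
# mutually endogenous, so the hedging equation is instrumented.  Plain
# numpy 2SLS; instruments and variable names are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
z1, z2 = rng.normal(size=(2, n))       # excluded instruments (e.g., tax convexity)
e = rng.normal(size=(3, n))            # structural shocks
leverage = 0.5 * z1 + e[0]
options = -0.3 * z2 + e[1]
hedging = 0.4 * leverage - 0.2 * options + e[2] + 0.5 * e[0]  # endogeneity via e[0]

X = np.column_stack([np.ones(n), leverage, options])   # endogenous regressors
Z = np.column_stack([np.ones(n), z1, z2])              # instrument matrix

# 2SLS: project X on Z, then regress hedging on the fitted values.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta = np.linalg.lstsq(X_hat, hedging, rcond=None)[0]
print("2SLS estimates (const, leverage, options):", beta.round(3))
```

In the actual application, the three equations would be estimated as a system (e.g., by 3SLS) to exploit the cross-equation error correlations, rather than one equation at a time as above.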

 

  • Value of risk management

 

To overcome the major source of inconsistency in the empirical literature's findings (i.e., endogeneity), we use an econometric approach based on instrumental variables applied to models with essential heterogeneity, inspired by the work of Heckman, Urzua, and Vytlacil (2006). This approach controls for individual-specific unobserved heterogeneity when estimating the marginal treatment effects of using high hedging ratios (i.e., upper quartile) versus low hedging ratios (i.e., lower quartile). Heckman, Urzua, and Vytlacil (2006) confirm that the plain method of instrumental variables is inappropriate when there are heterogeneous responses to treatment. In our application of the essential heterogeneity model, we identify a credible instrument from the economic literature on macroeconomic responses to crude oil price shocks: the Kilian (2009) index, which measures the demand for industrial commodities driven by the economic outlook.
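
A minimal local-IV sketch of the estimator on simulated data; the instrument here is only an analogue of the Kilian index, and the cubic specification in the propensity score is an illustrative choice.

```python
# Hedged sketch of the local-IV estimator behind marginal treatment effects
# (Heckman, Urzua, and Vytlacil, 2006): probit propensity to be a high-
# intensity hedger, then MTE(p) = dE[Y | P=p]/dp.  Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5000
kilian = rng.normal(size=n)                        # instrument (Kilian analogue)
x = rng.normal(size=n)                             # firm control
u = rng.normal(size=n)                             # unobserved heterogeneity
d = (0.8 * kilian + 0.3 * x - u > 0).astype(int)   # high- vs low-intensity hedger
y = 1.0 + 0.5 * x + d * (0.4 + 0.6 * u) + rng.normal(size=n)  # heterogeneous effect

Z = sm.add_constant(np.column_stack([kilian, x]))
p = sm.Probit(d, Z).fit(disp=0).predict(Z)         # propensity score

# Outcome as a cubic in p; its derivative with respect to p is the MTE.
W = sm.add_constant(np.column_stack([x, p, p**2, p**3]))
b = sm.OLS(y, W).fit().params                      # [const, x, p, p^2, p^3]
grid = np.linspace(0.05, 0.95, 5)
mte = b[2] + 2 * b[3] * grid + 3 * b[4] * grid**2
print("MTE over p:", dict(zip(grid.round(2), mte.round(3))))
```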

Our evidence suggests that marginal firm financial value (the marginal treatment effect, MTE), as measured by Tobin's q, is increasing in oil producers' propensity to hedge their oil production to a greater extent (i.e., upper quartile). This finding corroborates one strand of the previous literature that argues for the existence of a hedging premium for non-financial firms (Allayannis and Weston, 2001; Carter, Rogers, and Simkins, 2006; Adam and Fernando, 2006; Pérez-González and Yun, 2013, among others). Consistent with the literature (e.g., Guay, 1999; Bartram, Brown, and Conrad, 2011), we find that marginal firm riskiness, as measured by its systematic and idiosyncratic risks, is decreasing with oil producers' propensity to be high-intensity rather than low-intensity hedgers. The oil beta, representing the sensitivity of firms' stock returns to fluctuations in oil prices, is decreasing with the propensity to hedge to larger extents, albeit without statistical significance. Altogether, these findings suggest that any potential positive effects associated with oil hedging should translate into value enhancement for shareholders because of the decrease in the required cost of equity. This is due to the lower riskiness of the oil producers, particularly lower systematic risk, as suggested by Gay, Lin, and Smith (2011). We also find that the firm's marginal accounting performance, as measured by the return on equity, is lower for oil producers that are low-intensity hedgers. Finally, we obtain a significant average treatment effect (ATE) for Tobin's q (positive), idiosyncratic risk (negative), and systematic risk (negative).

 

  • High-frequency trading

 

Two main research questions are at the heart of the new research program on HFT: efficiency and fairness in trading, and asymmetric information. Both are probably interconnected and involve new research approaches that are fundamental to understanding the behavior of trading participants and to making appropriate policy or regulatory recommendations. The structure of exchange markets, such as the Toronto Stock Exchange (TSX), has been radically transformed by high technology over the last 25 years. HFT is executed by extremely fast computers, and their programs are often strategic. Liquidity and price discovery now arise in a more complex way, partly owing to high speed. These changes have affected market microstructure and the formation of capital in financial markets. They may also have reduced fairness between market participants, warranting new regulatory rules. However, more research activities have to be developed to really understand the effects of HFT on the welfare of investors. Up to now, conclusions on the net social benefits of HFT have not always been based on solid academic research (Chordia et al., JFM, 2013).

 

Stock exchanges use different market models, while public firms commonly employ interlisting. Controlling for exchanges' characteristics simultaneously, various dimensions of HFT will be analyzed in the coming years: arbitrage, liquidity, granularity of the limit order book, bid-ask spread, welfare effects of high-frequency traders, behavior during periods of high stress, and asymmetric information. Owing to space limitations, special attention is paid here to the future research activities related to arbitrage and to asymmetric information, including adverse selection and liquidity in HF markets.

One particular problem that we want to tackle is arbitrage between exchanges. The current market structure in North America and Europe is very competitive, fragmented, and fast (Biais and Woolley, WP, 2011; Jones, WP, 2012; Goldstein et al., FinR, 2014; O'Hara, JFE, 2015). In the presence of market fragmentation, traders need to search for liquidity across many venues in the same country or across countries; high speed is thus crucial. The ability of high-frequency traders (HFTs) to enter and cancel orders very rapidly makes it hard to discern where liquidity exists, which creates more opportunities for HFTs to exploit profitable trading opportunities. The existence of multiple venues also means that prices for a given asset need not always be the same across all venues for a very short period of time, opening the door to high-speed arbitrage across markets (Budish et al., QJE, 2015; Foucault et al., 2016). These new forms of arbitrage might reduce fairness because the advantage is often related to quoting and trading speed rather than to market opportunities related to real economic activity. Our new infrastructure will grant us access to many exchanges in North America and Europe at the same time. We plan to identify traders and follow them across exchanges. Menkveld (JFM, 2013) analyzes the behavior of one HFT dealer who is a market maker. He shows that HFTs reduce price variation for the same stock on different exchanges through arbitrage across trading venues. These traders make money with such arbitrage because they move even before humans can observe the price differences. This is perhaps the only study in the literature that performs such an analysis. However, the researcher knew the identity of the trader, an advantage that is not common in these markets, so his result cannot be used to draw conclusions about the entire market.
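
A stylized sketch of how such cross-venue arbitrage windows can be flagged from synchronized quote streams; venues, quotes, and the cost parameter are illustrative, and currency conversion between venues is omitted.

```python
# Hedged sketch: align two quote streams on time and flag moments when one
# venue's bid exceeds the other's ask by more than trading costs.
# Illustrative quotes; a real application would convert currencies too.
import pandas as pd

tsx = pd.DataFrame({
    "time": pd.to_datetime(["2024-01-02 09:30:00.001",
                            "2024-01-02 09:30:00.004",
                            "2024-01-02 09:30:00.009"]),
    "bid": [10.01, 10.03, 10.02],
    "ask": [10.02, 10.04, 10.03],
})
nyse = pd.DataFrame({
    "time": pd.to_datetime(["2024-01-02 09:30:00.002",
                            "2024-01-02 09:30:00.005",
                            "2024-01-02 09:30:00.008"]),
    "bid": [10.00, 10.00, 10.01],
    "ask": [10.01, 10.01, 10.02],
})

# As-of merge: for each TSX quote, take the latest NYSE quote at or before it.
quotes = pd.merge_asof(tsx.sort_values("time"), nyse.sort_values("time"),
                       on="time", suffixes=("_tsx", "_nyse"))
cost = 0.005                                  # assumed round-trip cost per share
quotes["arb"] = ((quotes["bid_tsx"] - quotes["ask_nyse"] > cost) |
                 (quotes["bid_nyse"] - quotes["ask_tsx"] > cost))
print(quotes[["time", "bid_tsx", "ask_nyse", "arb"]])
```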

 

The second objective is to continue developing our empirical research agenda on asymmetric information in HFT (Cenesizoglu et al., 2016). As O’Hara (JFE, 2015) and Budish et al. (QJE, 2015) contend, the study of asymmetric information in the presence of HFT has not really started yet. We plan to develop a new procedure to identify fast and informed traders in order to reach our two objectives. For now, unless we have access to databases from regulators or trading partners, we cannot identify the exact origin of trades continuously.

 

  • Liquidity and credit risk

 

According to M. O'Hara (NFA, 2016), liquidity risk in 2016 is as important as it was during the recent financial crisis. Liquidity risk can be a source of arbitrage when the basis is negative. In the credit risk market, the basis represents the difference between the credit default swap (CDS) premium and the bond spread for the same debt issuer at similar maturities. In the credit derivatives market, the basis can be positive or negative. A negative basis means that the CDS spread is smaller than the bond spread. In the absence of pricing errors, an investor can buy the bond and the CDS at the same time and make money. Yet it seems that such arbitrage was not frequent during the last financial crisis, even though the basis was largely negative. Our research program is devoted to explaining this potential puzzle by analyzing the default and liquidity risks of the products involved.
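
A minimal sketch of the basis computation and of flagging negative-basis candidates; all numbers are illustrative.

```python
# Hedged sketch of the CDS-bond basis: basis = CDS spread - bond credit
# spread for the same issuer and similar maturity.  Illustrative figures.
import pandas as pd

obs = pd.DataFrame({
    "issuer": ["A", "B", "C"],
    "cds_spread_bps": [120, 90, 300],      # 5-year CDS premium
    "bond_yield_pct": [5.4, 4.6, 8.5],     # yield on a comparable 5-year bond
    "riskfree_pct": [3.0, 3.0, 3.0],       # matched risk-free (swap) rate
})
obs["bond_spread_bps"] = (obs["bond_yield_pct"] - obs["riskfree_pct"]) * 100
obs["basis_bps"] = obs["cds_spread_bps"] - obs["bond_spread_bps"]

# A clearly negative basis suggests a candidate trade: buy the bond and buy
# CDS protection, provided funding and liquidity costs do not eat the gap.
print(obs[["issuer", "basis_bps"]])
print("negative-basis candidates:",
      obs.loc[obs["basis_bps"] < -20, "issuer"].tolist())
```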

One goal of the research is to extend the analysis of liquidity by adding a different measure of bond illiquidity to the analysis of basis risk; the new comprehensive measure of illiquidity should be significant in explaining the negative basis observed after the recent financial crisis. Bai and Collin-Dufresne (2012) obtained a significant result with their bond liquidity measure only during the financial crisis, although the negative basis is documented both during and after the crisis. We also plan to introduce a dynamic analysis of basis risk by explicitly considering business cycles in the model, as in the recent literature on credit risk (Chen-Maalaoui, Dionne and François, 2014). Another contribution will be to develop an explicit measure of CDS illiquidity in the statistical model that explains the basis; presently, only a variable for bond illiquidity is used in the literature. It is interesting to observe that the literature does not consider CDS liquidity risk, which was also documented to have been an important problem during the financial crisis (Arakelyan et al., 2012). This extension could become a source of the explanation for why we do not observe arbitrage activities. Finally, we plan to verify how the new central clearing system for the CDS market improves the liquidity of CDS.

 

We plan to investigate in more detail the illiquidity component that was an important source of bond credit spreads during the last financial crisis (Dick-Nielsen et al., 2012). According to the theory, the basis should be slightly positive in regular market conditions for different technical reasons (Duffie, 1999). To explain the negative basis, we will investigate the risk-return trade-off in a basis trade for an investor with limited capital. We will consider different explanatory variables to separate the factors that explain variations in the CDS-bond basis, such as measures of funding cost risk, collateral quality, bond liquidity risk, and CDS liquidity risk.

 

  • Operational risk

 

Over the past few years, the operational risk of financial institutions has become an important concern. Financial institutions and regulators worry about the potential effects that breakdowns in systems such as electronic computing and information exchange could have on the financial health of businesses and even markets.

At the end of the 1990s, special attention was paid to the regulation of capital associated with operational risk. The large losses incurred by investment banks such as Barings and Daiwa Securities in 1995 prompted the Basel Committee to impose directives for the measurement and management of operational risk. Our research project has examined the problem of quantifying this risk and developing a VaR (Value at Risk) for it.

 

The leading difficulty comes from the fact that some occurrences of operational risk are rare and that the losses to be expected from an operational event are difficult to quantify. Moreover, the stochastic process of operational losses can in no way be described by a normal distribution, as is often assumed for market risk. Operational loss data are generally composed of a large number of small events and a small number of major events. For good risk management, we thus need to know the behavior of the tail (events incurring enormous losses) of such a distribution. The appropriate mathematical techniques are therefore those, such as extreme value theory, that treat thick-tailed distributions with a limited number of data points.
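
A minimal peaks-over-threshold sketch using a generalized Pareto tail, with simulated losses and an illustrative threshold choice:

```python
# Hedged sketch of the peaks-over-threshold approach from extreme value
# theory: fit a generalized Pareto distribution (GPD) to losses above a
# threshold and read off a high quantile (a 99.9% operational VaR).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
losses = stats.lognorm.rvs(s=1.8, scale=50_000, size=5000, random_state=rng)

u = np.quantile(losses, 0.95)                   # threshold: 95th percentile
exc = losses[losses > u] - u                    # exceedances over the threshold
xi, _, beta = stats.genpareto.fit(exc, floc=0)  # GPD shape and scale

# POT quantile: VaR_q = u + (beta/xi) * ((n/Nu * (1-q))^(-xi) - 1)
n, nu, q = len(losses), len(exc), 0.999
var_999 = u + beta / xi * ((n / nu * (1 - q)) ** (-xi) - 1)
print(f"threshold u = {u:,.0f}, GPD xi = {xi:.2f}, 99.9% VaR = {var_999:,.0f}")
```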

We present the results of two studies that use the LDA (loss distribution approach), conducted by Dahen and Dionne (2010a) and by Dionne and Saissi Hassani (2017). The model of Dahen and Dionne (2010a) shows how to quantify optimal capital for banks' operational risk by adding external data to the bank's own data. Two main difficulties are linked to operational risk measurement: loss data are still rare, and historical evidence is lacking. Many banks do not really have data on their losses prior to the 2000s. Because extreme losses are very rare, it is practically impossible to apply advanced statistical models that exclusively use internal data; the limited history of internal losses is not representative of a bank's real exposure. In addition, for cost reasons, financial institutions record losses only above a certain threshold, so truncated statistical distributions are necessary. Lastly, for many types of risk, losses are not normally distributed, and their modelling requires distributions intended for the study of rare or extreme events.
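
A hedged sketch of the LDA mechanics (not the calibrations of the cited studies): a Poisson frequency of events, heavy-tailed severities, and the 99.9% quantile of the simulated annual aggregate loss as the capital figure.

```python
# Hedged sketch of the loss distribution approach (LDA): simulate annual
# aggregate losses as a Poisson number of events with heavy-tailed
# severities, then take the 99.9% quantile as the capital figure.
# Parameters are illustrative, not those estimated in the cited studies.
import numpy as np

rng = np.random.default_rng(6)
lam = 25                   # expected number of loss events per year
mu, sigma = 10.0, 2.0      # lognormal severity parameters
years = 20_000

counts = rng.poisson(lam, size=years)
annual = np.array([rng.lognormal(mu, sigma, size=k).sum() for k in counts])

capital = np.quantile(annual, 0.999)        # 99.9% operational-risk VaR
print(f"mean annual loss = {annual.mean():,.0f}, "
      f"99.9% capital = {capital:,.0f}")
```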

 

The objective of the Dahen and Dionne (2010a) study is to obtain a measure of operational regulatory capital by risk type and line of business that reflects a bank's risk level, integrating external loss data with internal bank data to improve the statistical reliability of the model used. The study's results can answer several questions that banks have asked about the Basel II Accord, particularly concerning the use of external loss data from other institutions. The study also shows how to obtain a good fit of statistical distributions by using extreme loss data while controlling for the fact that the loss data are truncated.

The contribution by Dionne and Saissi Hassani (2017) extends the previous study by allowing operational losses to be cyclical. The authors show that operational loss data from American banks are indeed characterized by a Markov regime-switching model: monthly losses follow a symmetric normal distribution in low regimes and an asymmetric distribution with thick tails in high regimes. Various statistical tests do not reject this asymmetry. The regimes obtained are integrated into the estimation of operational losses, and the authors show that their presence affects loss distributions significantly. These results are particularly important for some operational losses, including those related to errors in pricing financial products, for which several large banks were sued during and after the most recent financial crisis.
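
A minimal sketch of such a two-regime model on simulated monthly losses, using the Markov-switching regression in statsmodels with regime-dependent variance; regime numbering in the output is arbitrary and must be checked against the fitted variances.

```python
# Hedged sketch of a two-regime Markov-switching model for monthly aggregate
# losses, in the spirit of Dionne and Saissi Hassani (2017): a calm regime
# and a higher, more volatile regime.  Series is simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
calm = rng.normal(1.0, 0.3, 150)             # low regime: small, stable losses
stress = rng.normal(2.5, 1.2, 50)            # high regime: larger, more volatile
y = pd.Series(np.concatenate([calm, stress, calm[:50]]))

mod = sm.tsa.MarkovRegression(y, k_regimes=2, trend="c",
                              switching_variance=True)
res = mod.fit()
# Smoothed probability of being in regime 1 each month; check the fitted
# variances to see which regime index is the high-variance one.
p_high = res.smoothed_marginal_probabilities[1]
print(res.params.round(3))
print("months most likely in regime 1:", int((p_high > 0.5).sum()))
```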

 

The study by Dionne and Saissi Hassani (2017) notably documents the presence of temporal heterogeneity within the data. If this heterogeneity is not considered in risk management models, the estimations of regulatory risk capital will be biased. Levels of capital set aside will be overestimated during periods where loss amounts are normal, corresponding to low regimes, and underestimated in periods of high regimes.

 

  • Conclusion and impact in the field

 

In this short statement, we have presented our activities and future research projects linked to private and social risk management. We have highlighted the fact that issues in risk theory are associated with four important considerations that we plan to study in detail:

  • Models for evaluating risks generated by individuals and firms are not yet satisfactory; more importance should perhaps be given to cognitive processes.

  • It is important to analyze all risks with a portfolio approach so as to better diversify them.

  • The deregulation of markets must be accompanied by mechanisms motivating commitment to the social objectives of prevention.

  • It is important to develop statistical tools and data sets to measure the different effects and the real issues, including asymmetric information and governance problems related to risk management.

 

We believe that these considerations will be significant for risk-management research in the coming years. We hope the results will enable managers to better understand and qualify new issues emerging in the field of risk management; in particular, we hope they will be able to see which contract forms and market structures best maximize the outcomes of risk management. Some regulatory aspects of risk management under asymmetric information will be re-examined. Finally, new risk-management tools will be developed, in particular new data sets and statistical methodologies designed to measure the real issues more accurately.

 

  • Graduate students and postdoctoral fellows

 

Over the past twenty years at HEC, we have been very involved in training students: 25 have completed their Ph.D. theses (4 in Paris), 77 have completed their master's theses, and we have supervised 8 postdoctoral fellows, 4 from Europe and 1 from China. We are currently supervising seven Ph.D. students at HEC Montréal; four master's students are also working under our supervision. We enjoy working with graduate students, particularly when they are motivated by their research projects. As mentioned above, many Ph.D. students are involved in the different projects. Over the next seven years, we plan to integrate new Ph.D. and master's students into the Chair's research activities.

 

  • Financial support for the purchase of high-performance computer equipment

 

The Canada Research Chair in Risk Management also obtained, in 2017, $1.9 million in joint funding from the Canada Foundation for Innovation, the Quebec Department of Education, and private-sector partners. This additional funding enabled the Chair to build a high-performance infrastructure by purchasing the equipment needed to handle huge databases and rapid calculations. This powerful computer hardware opens up access to fields of research that require processing millions of observations spread over time and across industries: banks' securitization, insurance pricing for vehicle fleets, insurance fraud detection with machine learning, demand for reinsurance, high-frequency trading, banks' operational risk, hedging in different industries, and bond and CDS liquidity risk.

 

The required technological infrastructure for HFT comprises six main components: data acquisition from many exchanges, a database administrator (DBA), servers for data processing and storage, a grid computing cluster, research workstations, and algo-trading software. The acquisition of data from many exchanges is necessary to compare exchanges with different trading systems on many research issues such as liquidity, granularity of the limit order book, bid-ask spread, welfare effects of high-frequency traders, high-stress periods, and asymmetric information. We also need data from different exchanges simultaneously in the new infrastructure to detect arbitrage activities between exchanges and to measure the potential related presence of adverse selection.

 

As discussed in the description of the Chair's proposed research program, the existence of multiple venues means that prices for a given asset need not always be the same across all venues for a very short period of time, opening the door to high-speed arbitrage across markets. These new forms of arbitrage might reduce fairness because the advantage is often related to quoting and trading speed rather than to market opportunities related to real economic activity. Such comparisons and arbitrage activities between exchanges have never been measured, although they are suspected by many researchers. Our new infrastructure will grant us access to many exchanges in North America and Europe simultaneously. We plan to study arbitrage activities starting with cross-listed stocks in the Toronto-NY-NASDAQ group or the London-Frankfurt group, and even across continents. There are 102 cross-listed firms between Germany and Canada, more than 400 between the US and Canada, and 130 between the US and the UK. To our knowledge, we will be among the very few researchers in the world with such infrastructure who are able to perform these analyses. We will also be able to consider other assets such as options and bonds.
