File Exchange

Systemic Risk

version 3.5.0 (9.35 MB) by Tommaso Belluzzo
A framework for systemic risk valuation and analysis.

36 Downloads

Updated 02 Dec 2020

View license on GitHub

# INTRODUCTION #

This script calculates and analyses the following risk measures:

COMPONENT MEASURES
=> AR (Absorption Ratio) by Kritzman et al. (2010) https://doi.org/10.2139/ssrn.1633027
=> CATFIN by Allen et al. (2012) https://doi.org/10.1093/rfs/hhs094
=> CS (Correlation Surprise) by Kinlaw & Turkington (2012) https://doi.org/10.2139/ssrn.2133396
=> TI (Turbulence Index) by Kritzman & Li (2010) https://doi.org/10.2469/faj.v66.n5.3
=> Principal Component Analysis

CONNECTEDNESS MEASURES
=> DCI (Dynamic Causality Index)
=> CIO ("In & Out" Connections)
=> CIOO ("In & Out - Other" Connections)
=> Network Centralities: Betweenness, Degree, Closeness, Clustering, Eigenvector & Katz
--| References: Billio et al. (2011) https://doi.org/10.2139/ssrn.1963216

CROSS-ENTROPY MEASURES
=> JPoD (Joint Probability of Default)
=> FSI (Financial Stability Index)
=> PCE (Probability of Cascade Effects)
=> DiDe (Distress Dependency)
=> SI (Systemic Importance)
=> SV (Systemic Vulnerability)
=> CoJPoDs (Conditional Joint Probabilities of Default)
--| References: Segoviano & Goodhart (2009) http://doi.org/10.5089/9781451871517.001, Radev (2012) https://doi.org/10.2139/ssrn.2048585, Segoviano & Espinoza (2017) http://www.systemicrisk.ac.uk/publications/discussion-papers/consistent-measures-systemic-risk, Cortes et al. (2018) http://doi.org/10.5089/9781484338605.001

CROSS-QUANTILOGRAM MEASURES
=> Full Cross-Quantilograms
=> Partial Cross-Quantilograms
--| References: Han et al. (2016) https://doi.org/10.1016/j.jeconom.2016.03.001

CROSS-SECTIONAL MEASURES
=> Idiosyncratic Metrics: Beta, Value-at-Risk & Expected Shortfall
=> CAViaR (Conditional Autoregressive Value-at-Risk) by White et al. (2015) https://doi.org/10.1016/j.jeconom.2015.02.004
=> CoVaR and Delta CoVaR (Conditional Value-at-Risk) by Adrian & Brunnermeier (2008) https://doi.org/10.2139/ssrn.1269446
=> MES (Marginal Expected Shortfall) by Acharya et al. (2010) https://doi.org/10.2139/ssrn.1573171
=> SES (Systemic Expected Shortfall) by Acharya et al. (2010) https://doi.org/10.2139/ssrn.1573171
=> SRISK (Conditional Capital Shortfall Index) by Brownlees & Engle (2010) https://doi.org/10.2139/ssrn.1611229

DEFAULT MEASURES
=> D2C (Distance To Capital) by Chan-Lau & Sy (2007) https://doi.org/10.1057/palgrave.jbr.2350056
=> D2D (Distance To Default) by Vassalou & Xing (2004) https://doi.org/10.1111/j.1540-6261.2004.00650.x
=> DIP (Distress Insurance Premium) by Black et al. (2012) https://doi.org/10.2139/ssrn.2181645
=> SCCA (Systemic Contingent Claims Analysis) by Jobst & Gray (2013) https://doi.org/10.5089/9781475572780.001

LIQUIDITY MEASURES
=> ILLIQ (Illiquidity Measure) by Amihud (2002) https://doi.org/10.1016/S1386-4181(01)00024-6
=> RIS (Roll Implicit Spread) by Hasbrouck (2009) https://doi.org/10.1111/j.1540-6261.2009.01469.x
=> Classic Indicators: Hui-Heubel Liquidity Ratio, Turnover Ratio & Variance Ratio

REGIME-SWITCHING MEASURES
=> 2-States Model: High & Low Volatility
=> 3-States Model: High, Medium & Low Volatility
=> 4-States Model: High & Low Volatility With Corrections
=> AP (Average Probability of High Volatility)
=> JP (Joint Probability of High Volatility)
--| References: Billio et al. (2010) https://www.bis.org/bcbs/events/sfrworkshopprogramme/billio.pdf, Abdymomunov (2011) https://doi.org/10.2139/ssrn.1972255

SPILLOVER MEASURES
=> SI (Spillover Index)
=> Spillovers From & To
=> Net Spillovers
--| References: Diebold & Yilmaz (2008) https://doi.org/10.1111/j.1468-0297.2008.02208.x, Diebold & Yilmaz (2012) https://doi.org/10.1016/j.ijforecast.2011.02.006, Diebold & Yilmaz (2014) https://doi.org/10.1016/j.jeconom.2014.04.012

TAIL DEPENDENCE MEASURES
=> ACHI (Average Chi) by Balla et al. (2014) https://doi.org/10.1016/j.jfs.2014.10.002
=> ADR (Asymptotic Dependence Rate) by Balla et al. (2014) https://doi.org/10.1016/j.jfs.2014.10.002
=> FRM (Financial Risk Meter) by Mihoci et al. (2020) https://doi.org/10.1108/S0731-905320200000042016

Some of the aforementioned models have been adjusted and improved according to the methodologies described in the V-Lab Documentation (https://vlab.stern.nyu.edu/docs), which represents a great hub for systemic risk measurement.

# MORE INFORMATION #

The full description and documentation of the framework are published in the official GitHub repository:
https://github.com/TommasoBelluzzo/SystemicRisk

In case of issues, feel free to contact me at the following e-mail address:
tommaso [DOT] belluzzo [AT] gmail [DOT] com

Depending on the OS (version, bitness, regional settings), Excel (version, bitness, regional settings) and/or MATLAB version, the dataset parsing process might present issues. Due to the high number of users asking for help, support is no longer guaranteed.
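
Before opening a support request, a quick sanity check of the dataset often reveals the problem. The following is a minimal sketch, assuming the layout of the example datasets (dates in the first column of each sheet, numeric series afterwards); the file path and sheet name are placeholders.

% Read one sheet of the dataset and apply the same checks the parser performs.
file = 'Datasets/Example_Large.xlsx';
opts = detectImportOptions(file,'Sheet','Shares');
t = readtable(file,opts);

if (any(any(ismissing(t))))
    disp('The sheet contains missing values.');
end

if (any(any(~isfinite(t{:,2:end}))))
    disp('The sheet contains non-finite values.');
end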

Cite As

Tommaso Belluzzo (2020). Systemic Risk (https://github.com/TommasoBelluzzo/SystemicRisk/releases/tag/v3.5.0), GitHub.

Comments and Ratings (158)

Tommaso Belluzzo

You're welcome Liu!

xudong liu

Thanks very much.

xudong liu

Tested the 3.2.0 version.
Thanks very much.

Sincerely yours,
Daniel Tulips Liu

Beijing, China.

xudong liu

Version 3.1.0, thanks very much.
Best wishes.

Sincerely yours,
Daniel Tulips Liu

Beijing, China.

Tommaso Belluzzo

Hi DJShark911, unfortunately I have no more time to provide direct support. As stated on the main page of the project: "Due to the high number of users asking for help, support is no longer guaranteed". I receive dozens of requests per month, which means I cannot provide direct support anymore; I'm sorry for that, but I have no time to handle these kinds of requests. Have a look at the GitHub repository guidelines and try to perform some debugging on your own.

DJShark911

Dear Tommaso,
Can you help me modify the data and run it in your MATLAB? My teacher cannot help me and I have already sent my data to your email. My email is axcc88@zjnu.edu.cn. Really, thank you a lot!
Yours sincerely!

Dear Tommaso,
Thanks for your reply! I followed your advice and downloaded version 3.0.0, but the problem isn't solved. I examined the data carefully and found no empty spaces and no missing values. I made three trials with your data in MATLAB R2018a: in the first trial I changed a value in the Shares sheet, in the second a number in Assets, and in the third a number in Equity, but all three trials reported "Error in dataset 'Example_Small_1.xlsx': the 'Assets' sheet contains missing values." I don't know what to do. Another question: what data can be used to replace CDS and separate accounts? I cannot find the relevant data for these companies in the database, so I would like to ask which similar items in the balance sheet can replace CDS and separate accounts.
Yours sincerely!

Tommaso Belluzzo

Hi DJShark911, well known problems with datasets unfortunately. My advice is to check whether your files have used cells outside the data range, like empty spaces. Alternatively, check the output result with debugger to see what's happening. In addition, you may want to download the new release as the new package handles datasets slightly better.

DJShark911

Dear Tommaso,
Thanks for your great work! I have some problems I can't solve, despite having tried many times. The errors are as follows:
error parse_dataset>parse_table_balance (line 526)
Error in dataset 'example_Large_xtxfx70.xlsx': the 'Assets' sheet contains invalid or missing values.

error parse_dataset>parse_dataset_internal (line 175)
tab_assets = parse_table_balance(file,file_name,tab_index,tab_name,date_format_balance,dates_num,firm_names,true);

error parse_dataset (line 38)
ds = parse_dataset_internal(file,file_sheets,version,date_format_base,date_format_balance,shares_type,crises_type);

Line 526 is:
if (any(any(ismissing(tab_partial))) || any(any(~isfinite(tab_partial{:,2:end}))))
error(['Error in dataset ''' file_name ''': the ''' name ''' sheet contains invalid or missing values.']);

Line 175 is:
tab_assets = parse_table_balance(file,file_name,tab_index,tab_name,date_format_balance,dates_num,firm_names,true);
assets = table2array(tab_assets);

Line 38 is:
ds = parse_dataset_internal(file,file_sheets,version,date_format_base,date_format_balance,shares_type,crises_type);
My version is MATLAB R2018a with Excel 2016, and the Excel 2016 editing language is English. Could you give me some advice? Thank you!
Yours sincerely!

DJShark911

Tommaso Belluzzo

Hi Imen, unfortunately not at the moment. You can either dissect my code or calculate all the measures, taking only what you need.

Imen Fredj

Hi, I'm working on systemic risk. Are there codes only to calculate the marginal expected shortfall?

Leon Guo

xudong liu

Tommaso Belluzzo

Hi LuoMin, try to see if the Excel sheet contains empty values on the right margin which might be interpreted as part of the UsedRange by the Office Interop. A debugging session may also help to see what happens. If you have no clue, contact me via e-mail and attach the dataset please.

luomin wei

Dear Tommaso,

Thank you for your work, it helps me a lot. However, when I try to create my own dataset it shows this error
' Error using parse_dataset>parse_table_standard (line 641)
The 'Shares' sheet contains unnamed columns.'
I'm trying to modify the data but have no idea since I have named all the columns. Do you have any suggestions? Thank you so much.

Tommaso Belluzzo

Ok, basically 'QQ yyyy' worked fine for both datetime and datestr until a certain version of MATLAB. I suppose 2019a, because other users with that version also reported this problem. I have to find a generalized solution for this problem... actually, after a certain version I have to accept 'QQQ yyyy' as the date format but treat it as 'QQ yyyy' for "datestr". Any suggestion is more than welcome.

Keli Nazo

Dear Andrea,

Is it possible to contact you? I have encountered the same problems and I need your help.
Please let me know!!

Andrea Renzetti

Yes, I also get an error, but with datetime() things seem different. For example:
date = '01/01/2001'
f1 = datetime(date,'InputFormat','dd/MM/yyyy', 'Format', 'QQ yyyy')
f2 = datetime(date,'InputFormat','dd/MM/yyyy', 'Format', 'QQQ yyyy')

Tommaso Belluzzo

FYI: datestr(now(),'QQQ') gives this on my version: "Error using matlab.internal.datetime.cnv2icudf (line 83). Unrecognized quarter format. Format: QQQ."

Andrea Renzetti

Dear Tommaso,

the problem was that the format QQ yyyy is for quarterly dates such as "01 2001" and not for "Q1 2001", which is recognized in MATLAB as QQQ yyyy. I have changed the quarterly dates in my Excel spreadsheet to QQ yyyy (01 2001, 02 2001, ...) and the code runs smoothly, with the exception of the line ds.MonthlyTicks = length(unique(year(d))) <= 3; which gives errors since year() takes a datetime as input and not a numeric value. Hope these comments are useful for other users and might be helpful to improve the code, which is already a great work. Thank you for your help,

Kind regards,
Andrea
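
A possible adjustment for the line mentioned in the comment above, assuming d holds serial date numbers as described (an untested sketch, not an official fix):

% Convert serial date numbers to datetime before extracting the year.
d_dt = datetime(d,'ConvertFrom','datenum');
ds.MonthlyTicks = length(unique(year(d_dt))) <= 3;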

Tommaso Belluzzo

Hi Andrea, as a first attempt, I would try to see what is the output of "detectImportOptions". It might provide useful information about what's going on. As a second attempt, I would try to remove the "Basic" parameter from the "readtable" call; it might be a source of problems in recent versions.

Tommaso
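
For reference, a minimal sketch of both suggestions; the file path and the 'Assets' sheet name are placeholders, and the readtable call with the 'Basic' flag simply mirrors the one used by the toolbox:

% Inspect what MATLAB detects for the sheet before parsing it.
file = 'Datasets/Example_Large.xlsx';
options = detectImportOptions(file,'Sheet','Assets');
disp(options.VariableNames);
disp(options.VariableTypes);

% Read the sheet with and without basic import mode and compare the results.
tab_with_basic = readtable(file,options,'Basic',true);
tab_without_basic = readtable(file,options);
summary(tab_with_basic);
summary(tab_without_basic);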

Andrea Renzetti

Hi Tommaso,

the problem is in the parse_table_balance() function when it computes tab_partial = readtable(file,options,'Basic',true). I get a first column of NaT (the column should be datetime QQ yyyy). I'm looking for something to overcome the problem; I don't know if you have any suggestion. Thank you for the attention,

Kind regards,
Andrea

Tommaso Belluzzo

Dear Andrea, let me know if you find something during your debugging sessions. Unfortunately I don't have Excel 2019 and MATLAB 2020, so it's impossible for me to understand what's going on blindly. Best, Tommaso

Andrea Renzetti

Dear Tommaso,

thanks for the fast reply. I have tried with the 2016 Excel version and switched to English language but I still get the same error. The problem I think is different since it is enough to delete the MATLAB data file "Example_Large" without editing the Excel file to get the same error. Anyway I will try to debug the code to see what's going on.

Kind Regards,
Andrea

Tommaso Belluzzo

Hi Andrea, read the bottom notes here: https://github.com/TommasoBelluzzo/SystemicRisk/blob/master/README.md
Seems like your Excel 2019 is the prime suspect. You have two options at present: downgrade to Excel 2016 or debug the code to see what's going on.

Regards, Tommaso

Andrea Renzetti

Dear Tommaso,
Thanks a lot for your great work and for sharing it. I have no problem running the code on the "Example_Large" dataset, but as Wang reported before, I get:

Error using parse_dataset>parse_table_balance (line 440)
The 'Assets' sheet contains invalid or missing values.

Error in parse_dataset>parse_dataset_internal (line 155)
tab_assets = parse_table_balance(file,tab_index,tab_name,date_format_balance,dates_num,firm_names,true);

as I add some new observations to the dataset. My MATLAB version is R2020a, Excel is version 2019.

Kind regards,

wang

Dear Tommaso,
I'm sorry to bother you for a long time.
The version 2.2.1 is working.
Thanks very much for your help!
Yours sincerely.

wang

Dear Tommaso,
Thanks for your reply.
Sorry for being late.
I have sent the dataset to your email.
The software I'm currently working on is MATLAB R2018a and Excel2016.
Thanks very much for your help!
Yours, sincerely.

Tommaso Belluzzo

Hi wang, sorry for being late. I'm pretty sure this is a mess caused by how regional settings are handled by Excel, it's not the first time I see this after all. Please, mail me the modified dataset that gives you errors, I'll try to see what happens on my PC when I parse it. My mail is tommaso.belluzzo [AT] gmail [DOT] com. Sorry for the inconvenience.

wang

Dear Tommaso,
I left the data in the "Assets" and "Equity" sheets untouched, but as soon as a single number in any other sheet is touched (even by a simple copy and paste), the following errors appear:

Error using parse_dataset>parse_table_balance (line 376)
The 'Assets' sheet contains invalid or missing values.

Error in parse_dataset>parse_dataset_internal (line 154)
tab_assets = parse_table_balance(file,tab_index,tab_name,date_format_balance,dates_num,firm_names,true);

Error in parse_dataset (line 36)
data =
parse_dataset_internal(file,file_sheets,version,date_format_base,date_format_balance,ipr.shares_type,ipr.forward_rolling);

Error in run (line 114)
data = parse_dataset(file,dataset_version,'dd/MM/yyyy','QQ yyyy','P',3);

The editing language of Excel was set to English, but the data still cannot pass.
Everything looks fine in the Assets sheet, but the dataset parsing can't be completed. The software I'm currently working with is MATLAB 2018 and Excel 2016. Could you give me some advice?
Thanks for your help!
Yours, sincerely.

Tommaso Belluzzo

Hi YaTing, I don't know that source. I retrieved my data from Bloomberg / Thomson Reuters. Have you tried to see if assets are subject to the same difference?

Yating Zhao

Dear Tommaso,
Thanks for your reply.
Now I am clear about the meaning of "Equity". I use https://www.bamsec.com/ to obtain balance sheet values. But I didn't collect all the data, I just viewed the balance sheets on the website, so I'm sorry that I am unable to send you the data. I don't know why there is such a big difference between the sample dataset and the values from the website. Maybe I misunderstood the values in the balance sheet.
Thanks for your help!
Yours, sincerely.

Tommaso Belluzzo

Hi YaTing, I confirm you that for equity I mean the balance sheet total equity volume. Differences may be due to the fact that you are retrieving values from a different source. If you find any sensible difference between what's in my dataset and what you think is the correct value, please feel free to contact me at "tommaso.belluzzo [AT] gmail [DOT] com", so we can share and compare some data.

Yating Zhao

Dear Tommaso,
I have a question about the value of "Equity" in the dataset. "Equity: book value of equity of the firms, with the given balance sheet elements frequency." is what you have written in the readme. So I think "Equity" is the total equity of a firm, which means Assets = Equity + Liabilities. But I checked the balance sheets of these firms and found that "Equity" is not equal to the total equity of the firm.
How should I understand "Equity"? Could you give me a more specific explanation?
Thanks for your help!
Yours sincerely.

gary

Dear Tommaso,
It is also possible that the problem is due to the editing language of Excel.
I tried to find out whether there was an empty cell within the used range of the matrix, and I compared the format of the edited dataset with the original example. No visible exceptions were found under the previous editing language, but the program did not work.
I checked the format of the data in both the previous language and the English environment, and they were different. Then the editing language of Excel was set to English, and the data check passed. This may be helpful for users with non-English editing languages.

Tommaso Belluzzo

Hi gary. The error is pretty self-explanatory: you have missing values in your sheet. Be careful when you edit the Excel file. You may edit an empty cell far away and add empty rows/columns to the used range of the matrix. Data validation is pretty strict here, because I don't really like to set up all the routines required for automated sanitization or cleaning. If you want to edit a cell, be sure to edit exactly that cell without corrupting the matrix structure. Also be sure to comply with the current format of numbers and dates.
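
If you want to locate the offending cells before fixing the file, something along these lines may help (a rough sketch; the file path and sheet name are placeholders):

% Load the sheet that triggers the error and find the offending cells.
file = 'MyDataset.xlsx';
t = readtable(file,'Sheet','Assets');

[r,c] = find(ismissing(t));                   % rows and columns with missing entries
disp(unique([r c],'rows'));

% Also check for stray rows/columns created by cells edited outside the data range.
disp(size(t));                                % compare against the expected dimensions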

gary

Dear Tommaso,
Thanks for your last reply.
An exception occurred with the example dataset. The original dataset undoubtedly makes the program run. I left the data in the "Assets", "Equity" and "Separate Accounts" sheets untouched, but as soon as a single number in any other sheet is touched (even by a simple copy and paste), the following errors appear:

Error using parse_dataset>parse_table_balance (line 376)
The 'Assets' sheet contains invalid or missing values.

Error in parse_dataset>parse_dataset_internal (line 154)
tab_assets = parse_table_balance(file,tab_index,tab_name,date_format_balance,dates_num,firm_names,true);

Error in parse_dataset (line 36)
data =
parse_dataset_internal(file,file_sheets,version,date_format_base,date_format_balance,ipr.shares_type,ipr.forward_rolling);

Error in run (line 114)
data = parse_dataset(file,dataset_version,'dd/MM/yyyy','QQ yyyy','P',3);

Everything looks fine in the Assets sheet, but the dataset parsing can't be completed. The software I'm currently working with is MATLAB 2019 and Excel 2016. Could you give me some advice?
Thanks for your help!
Yours, sincerely.

Tommaso Belluzzo

Hi LiJie, there is no difference. I just made everything easier to handle since most of balance sheet data is provided on quarterly basis.

Lijie Yu

Dear Tommaso,
Thanks a lot for your great work.
The dataset in the new version has changed; for example, in the Shares sheet returns were changed to prices, and in the Assets sheet the daily frequency was changed to quarterly frequency.
I am not very clear about the reason for the change and, if I want to calculate SRISK, which version is better. Could you give me some advice?
Thanks for your help!
Yours sincerely.

Tommaso Belluzzo

Hi gary, nothing extraordinary. Just cross-quantilograms calculated over rolling windows and maximum lag.

gary

Dear Tommaso,
The new feature is quite impressive.
I noticed that the final results of the cross-quantilogram measures include two figures that refer to "Windows", and I'm confused about what the "Cross-Quantilogram (Windows)" and "Partial Cross-Quantilogram (Windows)" figures measure.
Yours, sincerely.

Tommaso Belluzzo

Hi MeiZhen, the first one is just a warning. If you don't provide certain time series, some of the systemic risk metrics cannot be calculated.
The second one is a different type of error. If your Shares sheet looks like "DATES INDEX FIRM_A FIRM_B FIRM_C FIRM_D" you have to make sure the firm columns match between all the sheets. So your "Market Capitalization" must look like "DATES FIRM_A FIRM_B FIRM_C FIRM_D".

Zheng Meizhen

Dear, Tommaso,
I have some questions. I want to change the data in your Excel file "example small 2", but if I change even a small number after the decimal point in it, MATLAB shows me the following error:
Warning : The dataset file does not contain all the sheets required for the computation of default measures ('Market
Capitalization', 'CDS', 'Assets' and 'Equity').
> In parse_dataset>parse_dataset_internal (line 62)
In parse_dataset (line 36)
In run (line 109)
Error using parse_dataset>parse_table_standard (line 526)
The firm names between the 'Shares' sheet and the 'Market Capitalization' sheet are mismatching.

error parse_dataset>parse_dataset_internal (line 145)
tab_capitalization =
parse_table_standard(file,tab_index,tab_name,date_format_base,dates_num,firm_names,true);
error parse_dataset (line 36)
data =
parse_dataset_internal(file,file_sheets,version,date_format_base,date_format_balance,ipr.shares_type,ipr.forward_rolling);
error run (line 109)
data = parse_dataset(file,dataset_version,'dd/MM/yyyy','QQ yyyy','P',3);
And if I undo this change in Excel and redo it, it still shows the same error.
I have to delete this Excel file and download a completely new one, then it works.
I don't know why this happens.

Sorry to bother you. Thanks so much

gary

Dear Tommaso,
I'm sorry to have bothered you for so long; the problems were due to my own software and have been solved.
And version 2.1.8 is also working.
Thanks for your help!
Yours sincerely.

Tommaso Belluzzo

Hi gary. Sorry for the inconvenience. I'll try to answer your issues point by point:
1) Now I'm 100% sure you have script conflicts in your 2016 release. I've checked old MATLAB docs and pca was defined just the same way it is right now. It looks like the script is trying to run this code "https://github.com/bashtage/mfe-toolbox/blob/master/crosssection/pca.m", which is part of the MFE Toolbox, instead of the built-in function.
2) The same may be true for the "sortrows" function. There is nothing wrong with that line of code, even looking at older MATLAB versions like 2015. You must have another "sortrows" function defined somewhere in your MATLAB installation that is being called instead of the built-in one. You can double check that with "help sortrows" and see what input arguments it accepts.
3) The issue with "Index in position 2 exceeds array bounds." is fixed. Please download the latest version, 2.1.8.

gary

Hi Tommaso, there is another problem; maybe you can help. I have disabled the "Cross-Sectional" and "Default" measures, and calculated the last three measures on my own dataset containing "Shares", "Market Capitalization" and "Groups". It worked with v2.1.6, but I got errors in the latest version 2.1.7.

Index in position 2 exceeds array bounds.

Error in parse_dataset>detect_distress (line 542)
eq = equity(:,i);

Error in parse_dataset>parse_dataset_internal (line 180)
[defaults,insolvencies] = detect_distress(returns,equity);

Error in parse_dataset (line 33)
data =parse_dataset_internal(file,file_sheets,date_format_base,date_format_balance,ipr.shares_type,ipr.forward_rolling);

Error in run (line 108)
data = parse_dataset(file,'dd/MM/yyyy','QQ yyyy','P',3);

Could you give me some advice?

gary

Hi Tommaso, here are some issues I found:
1. There are two lines of code that refer to "[coefficients,scores,~,~,explained]" (not in the new version 2.1.7). I modified the code that had been reporting errors, line 466 ("[coefficients,scores,~,~,explained] = pca(data_normalized,'Economy',false)"), changing the input arguments to "pca(randn(300,20))", and got the following responses in 2016:

Error using pca
Too many input arguments.

Error in run_component>calculate_pca (line 448)
[coefficients,scores,~,~,explained] = pca(data,'Economy',false);

Error in run_component>data_finalize (line 219)
[coefficients,scores,explained] = calculate_pca(data.CATFINVaR,false);

Error in run_component>run_component_internal (line 123)
data = data_finalize(data,futures_results);

Error in run_component (line 40)
[result,stopped] =
run_component_internal(data,temp,out,ipr.bandwidth,ipr.k,ipr.f,ipr.q,ipr.analyze);

Error in run>@(data,temp,file,analysis)run_component(data,temp,file,252,0.99,0.2,0.75,analysis)

Error in run (line 131)
[result,stopped] = run_function(data,temp,out,analysis);
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2. The output arguments of the "pca" function are represented differently in R2016 and R2019. The old version has these expressions:

[WEIGHTS,PRINCOMP,EIGENVALS,EXPLVAR,CUMR2] = pca(data,type)
WEIGHTS: a K by K matrix of component weights, where the ith row corresponds to the ith principal component.
PRINCOMP: a T by K matrix of principal components.
EIGENVALS: the eigenvalues associated with each PRINCOMP.
EXPLVAR: the percent of the variance explained by each PRINCOMP.
CUMR2: the cumulative R2 of including PRINCOMP 1,2,...,i.

The new version is "[coeff,score,latent,tsquared,explained,mu] = pca(X)".
Maybe the order of "explained" doesn't match the one in the old version.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
3.The latest v2.1.7 in 2016 reported slightly different errors than before:

Error using parallel.FevalFuture/fetchNext (line 247)
The function evaluation completed with an error.

Error in run_component>run_component_internal (line 87)
[future_index,value] = fetchNext(futures);

Error in run_component (line 40)
[result,stopped] =
run_component_internal(data,temp,out,ipr.bandwidth,ipr.k,ipr.f,ipr.q,ipr.analyze);

Error in run>@(data,temp,file,analysis)run_component(data,temp,file,252,0.99,0.2,0.75,analysis)

Error in run (line 150)
[result,stopped] = run_function(data,temp,out,analysis);

Caused by:
Error using run_component>calculate_catfin_var (line 381)
Too many input arguments.
The line 381 in script "run_component" is "h = sortrows([data w],1,'ascend')"

Tommaso Belluzzo

Hi gary, worry not. Your contribution is more than welcome. So the "pca" function differs between your old version and the new one of MATLAB. Just a little final help, please: open the 2016 version and type "help pca" in the console. Can you check whether the order and the number of output arguments match the ones in the script ("[coefficients,scores,~,~,explained]")? Could you also try to run the function without the "Economy" parameter ("[coefficients,scores,~,~,explained] = pca(randn(300,20))")? Thanks! One of the two things is the cause of the problem.

gary

Hi Tommaso, sorry for not replying in time. I ran the code, but it kept reporting the same errors as before.

Tommaso Belluzzo

Hi gary, thanks for your help. What happens if you run the following code snippet in the MATLAB 2016 console: "[coefficients,scores,~,~,explained] = pca(randn(300,20),'Economy',false)"? Does it give the same error?

gary

Hi Tommaso,
The line 446 in the "run_component" file is "[coefficients,scores,~,~,explained] = pca(data_normalized,'Economy',false)", and the full function for this section is as follows:
function [coefficients,scores,explained] = calculate_pca(data,normalize)

    if (normalize)
        data_normalized = data;

        for i = 1:size(data_normalized,2)
            c = data_normalized(:,i);

            m = mean(c);

            s = std(c);
            s(s == 0) = 1;

            data_normalized(:,i) = (c - m) / s;
        end

        data_normalized(isnan(data_normalized)) = 0;

        [coefficients,scores,~,~,explained] = pca(data_normalized,'Economy',false);
    else
        [coefficients,scores,~,~,explained] = pca(data,'Economy',false);
    end

end

Tommaso Belluzzo

Hi gary, this issue is maybe worth investigating if you can spend a little bit of time on it. The "Error using parallel.FevalFuture/fetchNext" is the outer exception. The inner exception is "Error using run_component>calculate_pca (line 446)". What do you have on line 446? Can you somehow replicate the exception using 2016? I'm glad it works with 2019, anyway.

gary

Dear Tommaso,
Thanks for your reply. I checked the "run_component" code: every call to "calculate_pca" requires 2 input arguments, and the "Economy" argument has been found too. I guess the subsequent errors may come from "Error using parallel.FevalFuture/fetchNext", so I switched to MATLAB 2019b, and the component measures are now working.

Tommaso Belluzzo

Hi gary, pretty strange indeed. How many input arguments does "calculate_pca" require in your code? In the "run_component" code, do you find any call to "calculate_pca" with fewer than that number of arguments? If you type "help pca", does the documentation mention a name-value pair argument called "Economy"?

gary

Dear Tommaso,
Thanks for your toolbox.
I ran the new version of the program on the example dataset; the first four measures worked well, but when calculating the component measures I got these responses:
Error using parallel.FevalFuture/fetchNext (line 247)
The function evaluation completed with an error.

Error in run_component>run_component_internal (line 86)
[future_index,value] = fetchNext(futures);

Error in run_component (line 40)
[result,stopped] =
run_component_internal(data,temp,out,ipr.bandwidth,ipr.k,ipr.f,ipr.q,ipr.analyze);

Error in run>@(data,temp,file,analysis)run_component(data,temp,file,252,0.99,0.2,0.75,analysis)

Error in run (line 131)
[result,stopped] = run_function(data,temp,out,analysis);

Caused by:
Error using run_component>calculate_pca (line 446)
Too many input arguments.

My MATLAB version is 2016b.
Do you have any advice?
Thanks for your reply!

Tommaso Belluzzo

@Bolarinwa: zeroes cannot be the cause, the function being used for the check is "ismissing". So, your best bet is to set a breakpoint where that error is being thrown and see what the content of the table is. If it contains NaNs, remove those lines, fill them with interpolation or insert the known values.

@Nicu: set the value of the second column of the "setup" variable to false.
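
As a practical note on the breakpoint suggestion above, MATLAB's debugger can stop right where the error is raised (tab_partial is the table name used by the parser; a rough sketch):

dbstop if error
% ...then execute run.m as usual; when MATLAB stops inside parse_dataset,
% inspect the offending table from the debug prompt, for example:
% any(any(ismissing(tab_partial)))
% find(any(ismissing(tab_partial),2))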

Nicu Sprincean

Tommaso,

In the run file ("Measures"), how do I adjust the code to run only the Connectedness measures, for instance? At this stage it runs all measures at once.
Thank you.

Bolarinwa Tompson

Hello Tommaso, I ran the code on my data and got these responses:

Error using parse_dataset>parse_dataset_internal (line 76)
The 'Returns' table contains invalid or missing values.

Error in parse_dataset (line 23)
data = parse_dataset_internal(ipr.file,ipr.date_format);

Error in run (line 68)
data = parse_dataset(dataset);

I use MATLAB 2016b with data format dd/mm/yyyy.

Any help on how to resolve these issues would be appreciated. First, I guess that the date format may be an issue here. Is there any way I could convert the dates' format to the one you used?

Secondly, some zeros appear in the returns data; could this be a problem?

Thanks.

Tommaso Belluzzo

tommaso.belluzzo [at] gmail.com

jack lee

Hi, Tommaso
Thank you very much. Could you give me your email? I do not know what your email is.
I want to discuss the issue further with you.
Best wishes.

Tommaso Belluzzo

Hi again Jack, sorry for the issue. The combo Matlab 2019 / Excel 2019 is the principal suspect here.
Just for the sake of trying everything, can you check if your Assets sheet contains NaNs or negative values after the parser extracts the data inside it? This message is also suspicious: "failed to convert the date number to a date vector". Maybe it has something to do with your regional language settings. Is "dd/MM/yyyy" the format you see when you open the Excel spreadsheet?
If everything looks fine to you, please send me the dataset you are currently using on my personal e-mail.

jack lee

Hi, Tommaso
My MATLAB version is 2019a, my OS is Windows and my Excel version is Excel 2019.
I just used your example data from the Dataset folder to run this program, not another dataset.
In fact, this program worked fine at first, but one day this problem happened. Maybe it occurred after the program was updated.
After I removed those checks from the "validate_file" function and continued to run, the following error happened:

error using parse_dataset>parse_table_balance (line 274)
The 'Assets' sheet contains invalid or missing values.

error in parse_dataset>parse_dataset_internal (line 129)
tab_assets = parse_table_balance(file,tab_index,tab_name,date_format_balance,dates_num,firm_names,true);

error in parse_dataset (line 33)
data = parse_dataset_internal(file,file_sheets,date_format_base,date_format_balance,ipr.shares_type,ipr.forward_rolling);

error in run (line 73)
data = parse_dataset(dataset,'dd/MM/yyyy','QQ yyyy','prices',3);

And when I continue to remove checks, this happens:

Error using dateformverify (line 18)
DATESTR failed to convert the date number to a date vector. The date number is out of range.

Error in datestr (line 199)
S = dateformverify(dtnumber, dateformstr, islocal);

Error in parse_dataset>parse_table_balance (line 297)
dates_from = cellstr(datestr(dates_num_current,date_format)).';

Error in parse_dataset>parse_dataset_internal (line 129)
tab_assets = parse_table_balance(file,tab_index,tab_name,date_format_balance,dates_num,firm_names,true);

Error in parse_dataset (line 33)
data = parse_dataset_internal(file,file_sheets,date_format_base,date_format_balance,ipr.shares_type,ipr.forward_rolling);

Error in run (line 73)
data = parse_dataset(dataset,'dd/MM/yyyy','QQ yyyy','prices',3);

After removing all the checks, eventually the above error appears. It has some relationship with the Assets sheet.
I used your data from the dataset folder and the error happened.
So, do you have any advice?
Thank you very much!
Best wishes.

Tommaso Belluzzo

Hi Jack. Can you please tell me your Matlab, OS and Excel versions? Thanks.
To fix the error, just remove those checks from the "validate_file" function and you will be able to continue.

The second error is pretty self-explanatory. Some of your sheets contain missing values (NaNs) or, for example in the case of Assets, negative values.
The script requires a certain level of pre-processing on the time series to work properly.

jack lee

Hi, Tommaso,
Thanks for your reply, it is very useful. Following your advice, I checked the code.
I found that the error comes from the function call strcmp(file_format,'xlOpenXMLWorkbook').
That means file_format is not equal to 'xlOpenXMLWorkbook'.
In fact, file_status is 'Microsoft Excel Spreadsheet' and file_format is an empty 0x0 char array.

I continued to check, and got the following error.

error using parse_dataset>parse_table_balance (line 274)
The 'Assets' sheet contains invalid or missing values.

error in parse_dataset>parse_dataset_internal (line 129)
tab_assets = parse_table_balance(file,tab_index,tab_name,date_format_balance,dates_num,firm_names,true);

error in parse_dataset (line 33)
data = parse_dataset_internal(file,file_sheets,date_format_base,date_format_balance,ipr.shares_type,ipr.forward_rolling);

error in run (line 73)
data = parse_dataset(dataset,'dd/MM/yyyy','QQ yyyy','prices',3);

So, is it possible that the Assets sheet has some problems in the dataset?
Do you have any advice?
Thank you very much.
Best wishes.

Tommaso Belluzzo

Hi Jack, I think you can't receive my mails. I have no idea about what's happening.
It can depend on your MATLAB version, Windows version (or maybe you have another OS) or Excel version (Excel 365 is well known to be a source of issues).

Put a breakpoint in "validate_file" function and follow it. You should be able to see why this error is being thrown.

It can be this:

[~,~,extension] = fileparts(file);
if (~strcmp(extension,'.xlsx'))
    error('The dataset file is not a valid Excel spreadsheet.');
end

Or this:

if (ispc())
    [file_status,file_sheets,file_format] = xlsfinfo(file);

    if (isempty(file_status) || ~strcmp(file_format,'xlOpenXMLWorkbook'))
        error('The dataset file is not a valid Excel spreadsheet.');
    end
else
    [file_status,file_sheets] = xlsfinfo(file);

    if (isempty(file_status))
        error('The dataset file is not a valid Excel spreadsheet.');
    end
end

The first problem is easy to solve. Just make sure your Excel file has the ".xlsx" extension.
The second is harder to solve. You need to find out if "file_status" is empty, or what is the content of "file_format" (the latter only if you use Windows).

But you have to give me more info, otherwise I can't help.

jack lee

Hi, Tommaso
The error message is as follows.

Error using parse_dataset>validate_file (line 224)
The dataset file is not a valid Excel spreadsheet.

Error in parse_dataset (line 27)
[file,file_sheets] = validate_file(ipr.file);

Error in run (line 73)
data = parse_dataset(dataset,'dd/MM/yyyy','QQ yyyy','prices',3);

The program version is v2.1.2.
Could you help me solve it?
Thanks,
Best wishes.

Tommaso Belluzzo

Hi Xu, another user reported a similar error a few comments below. The error informs you that the columns between the Shares sheet and the Market Capitalization sheet are mismatching; in other words, the columns don't have the same order. Example: if in the Shares sheet firm A is in the third column, then it must be in the second column of the Market Capitalization sheet (in the former sheet the first column after the dates defines the benchmark index, so you have one less time series).
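
A quick way to verify the alignment described above is to compare the firm headers of the two sheets directly. A rough sketch, assuming the example dataset layout (readtable normalizes the header names, but it does so consistently for both sheets):

file = 'Datasets/Example_Large.xlsx';
shares = readtable(file,'Sheet','Shares');
caps = readtable(file,'Sheet','Market Capitalization');

% In 'Shares' the first two columns are the dates and the benchmark index;
% in 'Market Capitalization' only the first column holds the dates.
firms_shares = shares.Properties.VariableNames(3:end);
firms_caps = caps.Properties.VariableNames(2:end);

if (isequal(firms_shares,firms_caps))
    disp('Firm columns are aligned.');
else
    disp('Firm columns are mismatching:');
    disp(firms_shares);
    disp(firms_caps);
end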

xudong liu

Dear Tommaso,
with the new version of your toolbox I have some errors:

Error using parse_dataset>parse_table_standard (line 432)
The firm names between the 'Shares' sheet and the 'Market Capitalization' sheet
are mismatching.

Error in parse_dataset>parse_dataset_internal (line 124)
tab_capitalizations =
parse_table_standard(file,tab_index,tab_name,date_format_base,dates_num,firm_names,true);

Error in parse_dataset (line 33)
data =
parse_dataset_internal(file,file_sheets,date_format_base,date_format_balance,ipr.shares_type,ipr.forward_rolling);

Error in run (line 73)
data = parse_dataset(dataset,'dd/MM/yyyy','QQ yyyy','prices',3);

And for three months I have not been able to run your program.
I don't know why.

E:\AIMsystem\TommasoBelluzzo-SystemicRisk-2663c65

>> version

ans =

'9.7.0.1190202 (R2019b)'

thanks very much for your reply

yours: Daniel tulips liu

Tommaso Belluzzo

This is a "well known" problem of gplotmatrix function, which tends to target the currently active figure at a given moment instead of focusing on the figure being passed as a parameter. I'll try to find a fix to solve this as soon as possible, sorry for the inconvenience.

Yang Zhang

Hi Tommaso, When I run the 'run.m' through matlab 2018b, I got the following error message:
Error in gplotmatrix/gplotmatrixLabelCallback (line 530)
if ax(ii,1).YTick(1)- ax(ii,1).YLim(1) < rangeY*0.05 && ii~=rows

Error in using parallel.FevalFuture/fetchNext (line 195)
Error in calculating Figure SizeChangedFcn
Please direct me to solve this problem! Thank you so much!

Tommaso Belluzzo

Hi Wei. I'm glad it's working now. And it's good to know that detectImportOptions has such behavior. I'll integrate this fix in my next release.

Weidong Lin

Hi Tommaso, thanks for your reply! I downloaded the codes from Github and tested it on both Windows 10 and mac (Catalina 10.15.2). The codes run perfectly on windows without any error, but on mac there is always an error when reading data from different sheets. After modifying 'options = detectImportOptions(file,'Sheet',sheet)' to 'options = detectImportOptions(file,'Sheet',name)', the error disappeared. It seems that on mac we cannot use 'tab_index' to identify different sheets inside of an xlsx file, so I just specify the sheet name in options.

Tommaso Belluzzo

Hi Wei, redownload the package from GitHub now. The error was due to a left over from the previous rework unfortunately.
Anyway, the error seems to point out that the columns between your Shares sheet and your Market Capitalization sheet are mismatching.
The columns must always have the same order. If in Shares sheet the firm A is in the third column, then it must be in the second column of Market Capitalization sheet (in Shares sheet you have your first column as benchmark so there is one less time series).

Weidong Lin

Hi Tommaso, when I run the 'run.m' on my mac, I got the following error message:
Unrecognized function or variable 'tab_name'.

Error in parse_dataset>parse_table_standard (line 432)
error(['The firm names between the ''Shares'' sheet and the ''' tab_name ''' sheet are mismatching.']);

Error in parse_dataset>parse_dataset_internal (line 124)
tab_capitalizations = parse_table_standard(file,tab_index,tab_name,date_format_base,dates_num,firm_names,true);

Error in parse_dataset (line 33)
data = parse_dataset_internal(file,file_sheets,date_format_base,date_format_balance,ipr.shares_type,ipr.forward_rolling);

Error in run (line 73)
data = parse_dataset(dataset,'dd/MM/yyyy','QQ yyyy','prices',3);

How can I fix this? Many thanks in advance.

Tommaso Belluzzo

Hi Bolarinwa, life insurers usually accumulate their premiums in investment funds and register those amounts, called separate accounts, as both assets and liabilities in their balance sheets. Since a fluctuation in their value does not affect the book equity, but only the overall balance sheet volume, it's a good idea to use them to reduce the Debt component of SRISK: this avoids overstating the capital requirements and keeps capital ratios coherent with the desired target. They are not calculated, but obtained from a data provider or from the annual reports of the companies.

Bolarinwa Tompson

Hello Tommaso, what variable do the separate accounts capture in the Example_Large Excel data file, and how is that variable measured?

Thanks a lot!

Tommaso Belluzzo

Hi MichiM, it has just been renamed to run_cross_sectional, but it's just the same.

MichiM

Hello Tommaso,
in the current version you have removed the run_stochastic script.
Is it possible to add this script back in the next update?

Thank you very much!!

Tommaso Belluzzo

Hi Omar, honestly, without having a look at your code it's really hard to say what's going wrong with it. I wouldn't go for a quantile regression when calculating the unconditional VaR; I'd rather stick to one of the standard methodologies (RiskMetrics, for example). With Bn representing the n-th beta coefficient of the quantile regression, CoVaR should be calculated as B1 + [B2 * VaR(x)], while Delta CoVaR should be calculated as B2 * [VaR(x) - Median(x)].
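
To make the two relations above concrete, a minimal numeric sketch (illustrative values only, not the toolbox implementation; B1 and B2 are the intercept and slope of the quantile regression, VaR(x) the firm's VaR and Median(x) the firm's median return):

% b1, b2: intercept and slope of the quantile regression at the chosen confidence level;
% var_x: the firm's VaR at the same level; med_x: the median (50th percentile) of the firm's returns.
b1 = -0.002; b2 = 0.35;                       % illustrative values only
var_x = -0.041; med_x = 0.0004;               % illustrative values only

covar = b1 + (b2 * var_x);
dcovar = b2 * (var_x - med_x);

fprintf('CoVaR: %.4f / Delta CoVaR: %.4f\n',covar,dcovar);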

Omar Aburayyan

Hi Tommaso,

Thanks a lot for sharing this amazing work! I am trying to replicate the results using your data in R on a weekly basis for a class at my university, specifically for (Network) CoVaR and Delta CoVaR, but I am getting weird results where the estimated CoVaR is exactly the same as the VaR (probably I am doing something wrong and I hope you can help me find the right direction). I am following the Adrian and Brunnermeier paper to estimate VaR and CoVaR through the following steps:

1 - Convert the data (log-returns, state variables & market cap) to a weekly basis by taking the average.

2 - Calculate VaR: a series of quantile regressions (99%) where each firm's log returns are regressed on standardised state variables. --> Following your MATLAB code, you use GARCH in the estimation of VaR; here I am just estimating it using quantile regression. Would that affect the results?

3 - Calculate CoVaR: a series of quantile regressions (99%) where each firm's VaR is regressed on a chosen reference VaR (e.g. JPM) & standardised state variables.

4 - Delta CoVaR: first I calculate VaR as in step 2 but at the median, second I calculate CoVaR as in step 3 but by using VaR at its median instead, and finally I take CoVaR|VaR_99 - CoVaR|VaR_50.

It would be really great if you could help me spot what's wrong here. Thanks a lot and have a nice day.

Tommaso Belluzzo

@long long: it's been a long time since I reviewed that part of the code and all the related algorithms, which is pretty stable, but as far as I remember, when A[i,j] = 1 it means that J Granger-causes I. If you want, you can obtain a confirmation of this answer by looking at Billio's paper.

@Wei-qiang Huang: that comes from a "recent" review of the SRISK formulation published on V-Lab, follow this link to find your answers https://vlab.stern.nyu.edu/docs/srisk/MES.
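
For context, the closed-form expression asked about in the comment below reduces to a one-liner; a sketch with illustrative numbers, where d is the assumed systemic market decline (40% in the V-Lab formulation) and beta is the firm's beta:

d = 0.40;                                     % assumed systemic market decline (V-Lab uses 40%)
beta = 1.2;                                   % illustrative firm beta
lrmes = 1 - exp(log(1 - d) .* beta);
fprintf('LRMES: %.4f\n',lrmes);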

Wei-qiang Huang

Hi Tommaso,
Thank you for your excellent work!
I am not very clear about the "lrmes" calculation equation in the function "calculate_mes", that is, lrmes = 1 - exp(log(1 - d) .* beta).
Is it a closed-form expression for LRMES in the SRISK estimation? Can you give the references?

long long

There is another question: if A(I,J) = 1 in the "average adjacency matrix", does it mean that J Granger-causes I, or that I Granger-causes J?

Tommaso Belluzzo

@Thach Pham: try a step-by-step debug using my script and your script in parallel in two different MATLAB instances; this should let you understand what is going on and which values are contributing to produce different outputs. Unfortunately I have very limited time to invest in the maintenance and improvement of my script, really not enough to put effort into derivative works.
@long long: 1) make sure the file exists in the dataset directory and its path is written correctly; 2) no, only connectedness measures can optionally specify it; 3) the dataset must already implement the rolling; I was thinking about a built-in function but I never implemented it.

long long

I have two more questions: first, besides being useful in Figure 6 (the network graph), do other calculations need this grouping?
Second, what do you mean by "roll forward liabilities by at least 3 months" in the description? Do I need to "roll forward" the liabilities data in the data file, or is there a roll forward parameter in the program? Can I adjust this parameter?

long long

Hi Tommaso, thank you very much for your program.

I have just used MATLAB 2019a and tried the code. If I use the "example_large" Excel file, the run completes normally. If I replace it with the "example_small1" Excel file, the following error appears:
error using parse_dataset>parse_dataset_internal (line 34)
The dataset file does not exist.

error in parse_dataset (line 21)
data = parse_dataset_internal(ipr.file,ipr.date_format);

error in run (line 68)
data = parse_dataset(dataset);

Could you please help me find the problem?

long long

Hi Tommaso, thank you very much for your program.

I have just used MATLAB 2019a and tried the code with the sample data "Example_Large" or "Example_Small1". Unfortunately, I got these errors:
error using parse_dataset>parse_dataset_internal (line 34)
The dataset file does not exist.

error in parse_dataset (line 21)
data = parse_dataset_internal(ipr.file,ipr.date_format);

error in run (line 68)
data = parse_dataset(dataset);

Could you please help me find the problem?

Thach Pham

I am trying to understand your code, and I think that the best way is to rewrite your code in my own way. I am expecting that the CoVaR results for the AIG firm should be the same using your code (run.m) and my code, as an example.

Tommaso Belluzzo

I think there is definitely something wrong with how the new version of MATLAB handles Excel imports, then. But as long as I can't upgrade to the new version, I cannot find the bug, unfortunately, unless someone helps me out with this. As for your code, your porting of the functions looks fine to me, but I don't know what you are attempting to do and what you expect to obtain.

Thach Pham

Hi Tommaso,
Actually I am using your dataset to run the code. On one computer with MATLAB 2019a it works perfectly; on another one with the MATLAB 2019b Student version, that error happened.
But then, just ignore that error, as I am using the computer that works well with the code.

I am trying to learn the code by replicating the results with data for the AIG firm only. Please see this link for the data and code (https://drive.google.com/open?id=1Y-ioYxICs3JnxH8QyMhmU5jJkqrHPg0d). dcc_gjrgarch and quantile_regression are taken from your code.

However, the results for VaR and CoVaR are different compared to yours. It would be greatly appreciated if you could have a look and advise on this.

Tommaso Belluzzo

Dear Thach, as the error clearly states, it looks like your dataset contains negative values in the MCap sheet. There is nothing to fix on the code side to solve this problem, in my opinion; instead, you may want to double check your dataset to see if it contains invalid values, or if something wrong is going on during the import process, through a step-by-step debugging session. A market capitalization less than or equal to 0 sounds weird to me.

Thach Pham

Hi Tommaso,
Thank you very much for this great work.

I have just downloaded the MatLab2019b Student version and tried the code. However, I got these errors:

Error using parse_dataset>parse_dataset_internal (line 121)
The 'Market Capitalization' sheet contains negative values.

Error in parse_dataset (line 21)
data = parse_dataset_internal(ipr.file,ipr.date_format);

Error in run (line 68)
data = parse_dataset(dataset);

Can you please show me how to fix them?

Bolarinwa Tompson

Just coming across this! My doctoral thesis is targeted at estimating systemic risk in Nigerian banking industry. Thanks for this great work.

Emilio Llorente-Cano

in "parse_dataset.m", line 298, when changing "sheet" by "name" , the issue is solved. Thank you again Tomasso.

Tommaso Belluzzo

Hi Emilio, thanks for your feedback! Honestly, I don't know what's causing this issue since I don't have Matlab 2019b, so the best I can suggest you is to run a few debugging sessions with step-ins to see what's going on under the hood inside the parsing functions. I had a few issues with Matlab 2019a too, but I managed to solve them. Other problems are arising from Excel 365 COM libraries, which are not being handled well by Matlab.

Emilio Llorente-Cano

Extraordinary work. Thank you, Tommaso.
Just a small issue: the latest version from GitHub used to run perfectly fine with 2019a; under 2019b it only parses the first tab (returns), so market capitalization shows negative values, creating an error. Where do you think the problem is?

xudong liu

Downloaded the new toolbox.
Thank you.

Yours, Xudong
English name: Daniel Tulip Liu
China

Tommaso Belluzzo

This is due to a problem concerning the function "mfilename", which does not work well when the package is executed as a live script. A temporary solution would be to move all the .m files from the subfolders to the root level, where the "run.m" script is located. I'm working on a quick fix.
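
A possible alternative workaround, offered only as an untested sketch (not the official fix): resolve the base path explicitly instead of relying on mfilename.

% Locate the framework's root folder via the entry-point script.
path_base = fileparts(which('run.m'));

if (isempty(path_base))
    path_base = pwd();                        % fall back to the current folder
end

if (~strcmpi(path_base(end),filesep()))
    path_base = [path_base filesep()];
end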

Andreas Andresen

@Devon Leukes: Did you find a solution? I think I have the same error in the run.m script.

It states:

Undefined function 'parse_dataset' for input arguments of type 'char'.

Error in Untitled (line 31)
data = parse_dataset(fullfile(path_base,path_dset));

I simply copied Tommaso's newest code and then ran it.

If anyone else has an idea, you're more than welcome!

The entire run.m:

warning('off','all');

close('all');
clearvars();
clc();
delete(allchild(0));

data = xlsread('Example.xlsx')

[path_base,~,~] = fileparts(mfilename('fullpath'));

if (~strcmpi(path_base(end),filesep()))
path_base = [path_base filesep()];
end

paths_base = genpath(path_base);
addpath(paths_base);

path_dset = strrep('Datasets\Example_Large.xlsx','\',filesep());

path_tpro = strrep('Templates\TemplatePRO.xlsx','\',filesep());
file_tpro = fullfile(path_base,path_tpro);
path_rpro = strrep('Results\ResultsPRO.xlsx','\',filesep());
file_rpro = fullfile(path_base,path_rpro);

path_tnet = strrep('Templates\TemplateNET.xlsx','\',filesep());
file_tnet = fullfile(path_base,path_tnet);
path_rnet = strrep('Results\ResultsNET.xlsx','\',filesep());
file_rnet = fullfile(path_base,path_rnet);

data = parse_dataset(fullfile(path_base,path_dset));

main_pro(data,file_tpro,file_rpro,0.95,0.40,0.08,true);
pause(2);
main_net(data,file_tnet,file_rnet,0.05,true,true);

save('data.mat','data');

rmpath(paths_base);

Tommaso Belluzzo

Hi George, sure it will.

George Kladakis

Hi Tommaso,

Will this work with annual frequency in Total Liabilities? (I mean daily frequency but rolled forward by 12 months instead of 3)

Thank you,
George

Tommaso Belluzzo

Dear Xudong, I don't really remember how I came up with that small parfor in the adjacency matrix calculations. At the time I was developing the network measures, it probably seemed a good performance tweak after digging into community forums and reading the technical documentation. At present, with new MATLAB versions released in the meantime, that is probably no longer the case. If you think that it is not optimizing the performance, or is even worsening it, it should not be very difficult to remove it.

xudong liu

Dear Tommaso,
I have some questions about your code. Your program is very good.
Two questions about parfor loops.
One: in dcc_gjrgarch.m, line 68 starts a local parpool worker automatically; this I can understand.
Two: main_net calls the function file calculate_adjacency_matrix, and at line 36 a parfor loop starts a parallel pool (cluster) automatically; I can't understand how this code can start a cluster parpool. Can you tell me the technical details?
This is my last question, thanks for your answer.
Thank you very much.
Yours, Xudong, Chongqing, China

Tommaso Belluzzo

You should upload your dataset somewhere and share it with me. Because I cannot see what's going on with yours with just a stack trace.

teng

Dear Tommaso, I can run your dataset. But when using my dataset, I cannot get the results. I also get the following errors:
Error using calculate_covar (line 32)
The value 'svars' is invalid.The required input should be limited.

Error in main_pro>main_pro_internal (line 67)
[covar,dcovar] = calculate_covar(ret0_m,ret0_x,var_x,data.A,data.StVarsLag);

Error in main_pro (line 34)
main_pro_internal(ip_res.data,res,ip_res.k,ip_res.d,ip_res.l,ip_res.anl);

Error in covar (line 21)
main_pro(data,fullfile(path_base,path_rpro),0.95,0.40,0.08,true);

xudong liu

Thanks a lot

Tommaso Belluzzo

Dear Xudong, this script comes from my Master of Science thesis, which is written in Italian. The papers you're probably looking for are the following ones:
- CoVaR/ΔCoVaR (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1269446)
- MES (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1573171)
- SRISK (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1611229)
- Network Measures (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1963216)

xudong liu

Dear Tommaso, we can run your toolbox; it generates two Excel files and graphs, but my friend and I cannot understand your results. Can you give a link to your working paper, so we can understand what your code calculates? A Chinese guy, Daniel Tulip Liu. My email: xudongliu520@vip.qq.com. Thank you for your answer.

Tommaso Belluzzo

Dear George, dates should be in the format "dd/MM/yyyy". Probably, your local Windows settings are forcing Excel to use another format that the script can somehow handle but that produces errors in the long run. You could try to force your own date format in the parsing function or, alternatively, force Excel to use that format. I'm gonna fix this by allowing users to specify their own format, but this will be ready in no less than two days.

George Kladakis

Thank you Tommaso. As I said, I'm using your dataset for now. I managed to partly solve it by formatting each A column as Date on excel but I still have the same issue with State Variables, even if I copy and paste the column A from Market Capitalization to State Variables. Without State Variables and Groups, I get the ResultsPro but not the ResultsNet and no plots. I also get the following errors:

Error using matlab.graphics.axis.Axes/set
While setting the 'XLim' property of Axes:
Value must be a 1x2 vector of numeric type in which the second element is larger than the first and may be Inf

Error in main_pro>plot_index (line 189)
set(sub_1,'XLim',[data.DatesNum(1) data.DatesNum(end)],'YLim',[(min(data.IdxRet) - 0.01) (max(data.IdxRet) + 0.01)]);

Error in main_pro>main_pro_internal (line 94)
plot_index(data);

Error in main_pro (line 34)
main_pro_internal(ip_res.data,res,ip_res.k,ip_res.d,ip_res.l,ip_res.anl);

Error in run (line 22)
main_pro(data,fullfile(path_base,path_rpro),0.95,0.40,0.08,true);

Tommaso Belluzzo

Dear George, as per the script requirements, "market capitalizations must contain a supplementary observation at the beginning because a one-day lagged version is used in order to calculate weighted averages of probabilistic measures". That error is being thrown because your time series have either a totally mismatching time frame or (more likely) because the first date of the market capitalizations table is not earlier than the first date of the returns table. Also, a check on your current datetime format may reveal another possible reason for this error to occur. Feel free to comment again if you are still experiencing problems.

George Kladakis

Hi Tommaso, Thank you for this important contribution. I'm using Matlab R2015a and unfortunately I get the following errors when using your dataset:

Error using parse_dataset>parse_dataset_internal (line 106)
The 'Returns' table and the 'Market Capitalization' table observation dates are mismatching.

Error in parse_dataset (line 19)
data = parse_dataset_internal(res.file);

Error in run (line 20)
data = parse_dataset(fullfile(path_base,path_dset));

Tommaso Belluzzo

Dear Teng, the script has been created to perform the calculations at the maximum available data granularity; as far as I can remember, all the systemic risk indicators are based on daily frequencies. You ask whether your purpose can be fulfilled, and the answer is that it certainly can. Try running the script on a weekly dataset and see whether it finishes smoothly... it may require several adjustments.

teng

Dear Tommaso, thank you for your answer. Is it possible to modify the program to process weekly financial time series? If so, how?

Tommaso Belluzzo

Dear Teng, the output of value-at-risk models is normally negative, since it represents a loss occurring at -N standard deviations from the mean, where N is determined by the chosen confidence level. CoVaR is no exception to this rule, so the paper is correct. That said, it is common practice to reverse the sign of VaR results, turning them into positive values (this is what my script does), because it makes them easier to compare against other risk measures and metrics. There is also a "semantic" reason behind this convention: when a loss of, let's say, $10 occurs... you say you lost $10, you don't say you lost -$10.
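As a small illustration of the sign convention (a plain historical VaR sketch, not the estimator used by the script; r stands for a vector of daily returns):

% Historical 95% VaR of a daily return series r.
% The raw quantile is negative because it lies in the left tail of the
% return distribution; reporting it as a positive loss is just a sign flip.
a = 0.05;                    % 1 - confidence level
var_raw = quantile(r,a);     % e.g. -0.032, i.e. "a loss of 3.2%"
var_reported = -var_raw;     % positive value, easier to compare across measures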

teng

Dear Tommaso, why are CoVaR and ΔCoVaR positive here, while in the Adrian & Brunnermeier (2009) paper they are negative?

xudong liu

Dear Belluzzo, thank you for the answer; the program can run now.

Tommaso Belluzzo

Dear Liu, did you perhaps change the format of your dates in the Excel spreadsheet? Another possible cause of this issue is that you are using different regional settings, so Excel produces differently formatted dates when the file is parsed. You may try to play a little bit with the "InputFormat" parameter until you find the one that suits your needs.
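For reference, a minimal sketch of what that adjustment could look like (the res.Date assignment is the one shown in the trace below; the alternative format is only an example):

% parse_dataset expects dates in 'dd/MM/yyyy'; if Excel exports them in a
% different regional format, e.g. 2008-01-31, adjust 'InputFormat' accordingly.
res.Date = datetime(res.Date,'InputFormat','yyyy-MM-dd');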

xudong liu

Error using datetime (line 593)
Unable to parse the date/time string using the format 'dd/MM/yyyy'.

Error in parse_dataset>parse_table (line 219)
res.Date = datetime(res.Date,'InputFormat','dd/MM/yyyy');

Error in parse_dataset>parse_dataset_internal (line 71)
rets = parse_table(file,1,'Returns');

Error in parse_dataset (line 19)
data = parse_dataset_internal(res.file);

Error in run (line 20)
data = parse_dataset(fullfile(path_base,path_dset));

Tommaso Belluzzo

Dear Jasper, sorry for the late reply. Yes, my algorithm uses the quantile regression and yes, it does make use of a GARCH method for the volatility computation. To be more specific, it uses a DCC-GJRGARCH. The code is open source and you can browse it on GitHub if necessary. Regards.

Jasper Lim

Hi Tommaso, does your code use quantile regression, or DCC multivariate GARCH to compute CoVaR? I believe that is the reason for my confusion as the quantile regression method proposed by Adrian and Brunnermeier requires state variables to generate time-varying delta-CoVaR. Thank you for your clarification.

Tommaso Belluzzo

@Jasper: I think we are misunderstanding each other. What do you mean when you say the measure is "time-varying"? Were you expecting to obtain a single Delta-CoVaR value when removing the state variables? State variables are just an instrument for improving the model's capability to capture the time-varying risk profile of the economic contingency, nothing else. If you take a look at the "calculate_covar.m" script, you will see that state variables are not taken into account when they are omitted from the dataset, but a time projection of the risk index is performed anyway.

@Michalis: hi and thanks for your feedback. I know the script is very computationally intensive (especially when network measures are being calculated) and, over the past years, I did my best to improve its performance. Alas, I haven't overcome the problem when dealing with big datasets. If you have any proposals, I'd be glad to hear them.

Michalis Ioannides

Salve Tommaso, thanks for this brilliant contribution. I wonder whether you have attempted to reproduce the analysis with many more counterparties and/or more state variables. I have attempted to enlarge the dataset; this certainly pushes the limits of computation, and I wonder whether you have thought of ways to approximate the equations and computations involved using different techniques? Grazie mille.

Jasper Lim

Thank you Tommaso for your answers. I also realised that your code was able to generate time-varying Delta-CoVaR even when I removed the state variables from my dataset. May I know if this is a mistake? To my knowledge, only unconditional CoVaR and Delta-CoVaR can be computed without state variables, according to the original CoVaR paper by Adrian and Brunnermeier.

Tommaso Belluzzo

Dear Jasper, the readme file was a little bit outdated. Actually, that statement is wrong: all you need is a benchmark and at least 3 firms, as you can see at line 73 of "parse_dataset.m" in the current release. I fixed this small error, thanks for the feedback.

For what concerns the state variables, this is more of an econometric question. Their purpose is to capture risk factors over time and provide a better estimate of the relations between the financial institutions you want to analyze. The more you provide, as long as they are meaningful in describing the economic and financial context, the better. Of course, increasing their number increases the overall computational complexity and time.
The best suggestion I can give is to pick between 3 and 6 state variables, sticking to what other academic papers normally use.

Jasper Lim

Hi Tommaso, in the readme file it is stated that a minimum of 5 firms is required. May I enquire if that refers to 5 firms per group or a total of 5 firms in the dataset? Also, is there a minimum number of state variables required for the computations to be meaningful?

Tommaso Belluzzo

Hi Devon, after reviewing my code, I confirm that everything works as expected, at least on my machine. To run a meaningful test, I downloaded a brand new release package directly from GitHub (https://github.com/TommasoBelluzzo/SystemicRisk) and, after launching the run script, everything was processed correctly.

To help you out, I need more details about your current setup. Are you on Windows? Have you tried evaluating the result of "fullfile(path_base,path_dset)" in run.m at line 20? Does the path point to a valid existing file?
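For example, something like the following could be run from the command window, assuming the variables defined at the top of run.m are in scope:

% Evaluate the dataset path built in run.m and verify it points to a file.
dataset_path = fullfile(path_base,path_dset);
disp(dataset_path);
if exist(dataset_path,'file') ~= 2
    warning('The dataset file was not found at the path shown above.');
end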

Tommaso Belluzzo

Hi Devon, as far as I can see, the script can't find the dataset you are targeting. Let me check if something wrong happened with my last update. Otherwise, I'll assist you in debugging this issue.

Devon Leukes

Hi Tommaso, I've downloaded your code as is (no modifications) and am running it in MATLAB R2017b. When I run the run.m script I get the following errors:
-------------------------------------------------------------------------------------------------
Error using parse_dataset>parse_dataset_internal (line 26)
The dataset file does not exist.

Error in parse_dataset (line 19)
data = parse_dataset_internal(res.file);

Error in run (line 20)
data = parse_dataset(fullfile(path_base,path_dset));
------------------------------------------------------------------------------------------------------------------------------------
Can you please assist with debugging?
Many Thanks
Devon

Tommaso Belluzzo

Hi Yulin Li, thanks for your feedback. Could you kindly provide a few examples of assets that show this behavior after being processed through GJR-GARCH? Do they belong to the default dataset? If not, could you upload your data somewhere and link it to me? Thanks!

Yulin Li

Hi Tommaso, thank you for the file. I've been working on some data using this code. I find that, for some assets, the conditional asset volatility (the square root of s) stays the same over time: s iteratively remains equal to the asset variance. I'm not familiar with GJR-GARCH, so I wonder what the reason behind this is. Please let me know your thoughts. Thanks a lot!

Tommaso Belluzzo

Dear Yufei Cao, the bandwidth for the MES kernel density was calculated using a simplified variant of Scott's rule of thumb, since the returns were already squeezed. Thanks to your feedback, I decided to implement a standard version of it (namely, the same one that can be found in MATLAB's built-in ksdensity function). The results have been only slightly affected by the modification; you can download the new version and try it out. For what concerns your second question, I verified the computation and it seems correct as per the current implementation. Again, thanks for supporting this project.
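For context, a normal-reference (Scott/Silverman-type) rule-of-thumb bandwidth has the generic form sketched below; this is the textbook version, not necessarily the exact variant implemented in the MES calculation:

% Generic normal-reference rule-of-thumb bandwidth for a 1-D kernel density,
% h = sigma_hat * (4/(3*n))^(1/5), where x is the return series in use.
n = numel(x);
sigma_hat = std(x);
h = sigma_hat * (4 / (3 * n))^(1/5);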

Yufei Cao

Two questions about MES, regarding your function calculate_mes_internal(): (1) how is h selected? (2) in the computation of f, should it be (u - (c./s_m))./h rather than (c./s_m) - u?

Tommaso Belluzzo

Hi Mugdha. It seems that the dataset file you are trying to process cannot be found at the specified path. Put a breakpoint in run.m at line 12 and retrieve the value of "fullfile(path,'\exampledata\Example.xlsx')". Check whether the path is valid; if not, fix it and run the script again.

MUGDHA PILANKAR

/////////////////////////////////////////////////////////////////////
Error using parse_dataset>parse_dataset_internal (line 26)
The dataset file does not exist.

Error in parse_dataset (line 19)
data = parse_dataset_internal(res.file);

Error in run (line 12)
data = parse_dataset(fullfile(path,'\exampledata\Example.xlsx'));
////////////////////////////////////////////////////////////////////////

I'm getting this error when I run the run.m file as per your instructions. Can you help me understand and debug this error?
Thank you.

Giacomo Mureddu

Hi Tommaso,

thanks for answering, the reference paper is the following:
"The systemic risk of European banks during the financial and sovereign debt crises". Journal of Banking & Finance, 63, 107-125 Black, L., Correa, R., Huang, X., & Zhou, H. (2016)."
You can find it here:
https://www.federalreserve.gov/pubs/ifdp/2013/1083/ifdp1083.pdf

Tommaso Belluzzo

The DIP by Black & Huang is not part of this framework, but during development I stumbled upon it multiple times. I don't have a lot of time at present, but if you provide me with a reference paper I can check whether it could be included in the package with a quick and dirty implementation.

Giacomo Mureddu

Hi Tommaso, thanks for your help.
Do you have any practical framework on the "Distress insurance premium" by Black and Huang?

Thank you

Richard Schmidt

EMB

Hi Nicu, I appreciate your help. Thank you very much. I should update my MATLAB version to R2017b, since my current version is R2017a. Again, thank you for your help.

Nicu Sprincean

@Eufrocinio, try with a 2017 version of Matlab (if possible, R2017b) and create an empty folder called 'Results'. It should work.
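A quick sketch of that folder check, assuming the path variable defined at the top of run.m:

% Create the 'Results' output folder next to run.m if it does not exist yet.
results_path = fullfile(path,'Results');
if exist(results_path,'dir') ~= 7
    mkdir(results_path);
end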

EMB

Hi Tommaso,

Thank you very much for sharing your code. However, when I attempt to run the file (run.m), I encounter these error messages:

Error using parse_dataset>parse_dataset_internal (line 26)
The dataset file does not exist.

Error in parse_dataset (line 19)
data = parse_dataset_internal(res.file)

Error in run (line 12)
data = parse_dataset(fullfile(path,'\Datasets\Example.xlsx'));

I appreciate your time and effort. Thank you

regards,

Eufrocinio

Weidong Lin

I am using Matlab 2017b. This is my run.m file:
warning('off','all');
close('all');
clearvars();
clc();
[path,~,~] = fileparts('C:\Users\29943\Desktop\Systemic risk matlab\TommasoBelluzzo-SystemicRisk-9244888\run.m');
paths = genpath(path);
addpath(paths);
data = parse_dataset(fullfile(path, '\Datasets\Example.xlsx'));
main_pro(data,fullfile(path, '\Results\ResultsPRO.xlsx'),0.95,0.40,0.08,true);
pause(5);
main_net(data,fullfile(path, '\Results\ResultsNET.xlsx'),0.05,true,true);
save('data.mat','data');
rmpath(paths);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
And the errors are:
Error using matlab.graphics.interaction.internal.zoom/setAxesZoomConstraint (line 12)
Axes must be resident in the same figure as the mode

Error in matlab.graphics.interaction.internal.zoom/setAxesZoomMotion (line 5)
setAxesZoomConstraint(hThis,hAx,cons);

Error in gplotmatrix/gplotmatrixLabelCallback (line 454)
setAxesZoomMotion(hz1,ax2,'horizontal');

Error using main_pro>plot_correlations (line 275)
Error while evaluating Figure SizeChangedFcn.

Error using matlab.graphics.interaction.internal.zoom/setAxesZoomConstraint (line 12)
Axes must be resident in the same figure as the mode

Error in matlab.graphics.interaction.internal.zoom/setAxesZoomMotion (line 5)
setAxesZoomConstraint(hThis,hAx,cons);

Error in gplotmatrix/gplotmatrixLabelCallback (line 454)
setAxesZoomMotion(hz1,ax2,'horizontal');

Error using waitbar (line 113)
Error while evaluating Figure SizeChangedFcn.

Index exceeds matrix dimensions.

Error in gplotmatrix/gplotmatrixLabelCallback (line 465)
if ax(ii,1).YTick(1)- ax(ii,1).YLim(1) < rangeY*0.05 && ii~=rows

Error using waitbar (line 113)
Error while evaluating Figure SizeChangedFcn.

Index exceeds matrix dimensions.

Error in gplotmatrix/gplotmatrixLabelCallback (line 465)
if ax(ii,1).YTick(1)- ax(ii,1).YLim(1) < rangeY*0.05 && ii~=rows

Error using waitbar (line 113)
Error while evaluating Figure SizeChangedFcn.

Error using distcomp.remoteparfor/getCompleteIntervals (line 257)
Index exceeds matrix dimensions.

Error in calculate_adjacency_matrix>calculate_adjacency_matrix_internal (line 36)
parfor j = ij_seq

Error in calculate_adjacency_matrix (line 23)
adjm = calculate_adjacency_matrix_internal(ip_res.data,ip_res.sst,ip_res.rob);

Error in main_net>main_net_internal (line 71)
adjm = calculate_adjacency_matrix(win_i,data.SST,data.Rob);

Error in main_net (line 35)
main_net_internal(ip_res.data,res,ip_res.sst,ip_res.rob,ip_res.anl);

Error in run (line 16)
main_net(data,fullfile(path, '\Results\ResultsNET.xlsx'),0.05,true,true);

Weidong Lin

Hi Tommaso,
Many thanks for your code. Could you please tell me what arguments I need to input when I run the 'run.m' file? Especially for the functions 'fileparts', 'parse_dataset' and 'fullfile' in the 'run.m' file.

Many thanks!

YINING XU

Nicu Sprincean

Hi, Tommaso,

Thank you for sharing the files. I have encountered the same problem as Yao. When I run the 'run' file, I get the following errors:

Error in parse_dataset>parse_dataset_internal (line 73)
opts = detectImportOptions(file,'Sheet',1);

Error in parse_dataset (line 19)
data = parse_dataset_internal(res.file);

Error in run (line 12)
data = parse_dataset(fullfile(path,'\Datasets\Example.xlsx'));

I have put the file as you suggested, but there still seems to be a problem. I would appreciate your help.
Thank you.
Nicu

Tommaso Belluzzo

Hi Yao, thanks for your feedback, but your log is not really meaningful since it doesn't show the exception being thrown by the script. Please, try to provide a full version of the error log so I can try to debug it. Thanks!

Yao qu

Hi Alessio. Currently, I am running the example included in this package and it gives the following errors:

Error in parse_dataset>parse_dataset_internal (line 29)
[file_stat,file_shts,file_fmt] = xlsfinfo(file);

Error in parse_dataset (line 19)
data = parse_dataset_internal(res.file);

Error in run (line 12)
data = parse_dataset(fullfile(path,'\Datasets\Example.xlsx'));

Error in run (line 91)
evalin('caller', strcat(script, ';'));

I have tried several MATLAB versions, including R2014b, R2015b and R2017b, but the problem still exists. I would be grateful if you could give me some help.

Yao.

Tommaso Belluzzo

Hello Alessio. This program is exactly what you need. However, your data must be modified so that it is compatible with what the script expects at the input/parsing stage. You can find more information about this on the project's main page ( https://github.com/TommasoBelluzzo/SystemicRisk ) and, if anything is unclear, you can contact me again here (or there) to sort out the problem.

P.S. = when describing your methodology, a reference to my project, although not mandatory, would be much appreciated.

Regards, T.B.

alessio tancredi

Hello. I am writing my thesis and need to calculate ΔCoVaR and SRISK. I would like to know whether it is possible to calculate them by downloading the package found on this page (together with the necessary data). Awaiting your reply. Thanks.

A.T.

MATLAB Release Compatibility
Created with R2018a
Compatible with R2014b to R2020b
Platform Compatibility
Windows macOS Linux
