# Matfile runs incredibly slowly on large files--what might be the problem?


I have a MATLAB file that contains one variable, a 64000x31250 array of singles. I use matfile to pull single columns out of that array. I've done similar operations on smaller (say, 7000x31250) arrays and it worked fine. With this matrix, however, each column read takes 20 (!) seconds. In the profiler, essentially all of the time is spent on matfile.m's line 460:

```matlab
[varargout{1:nargout}] = internal.matlab.language.partialLoad(obj.Properties.Source, varSubset, '-mat');
```

All of this work (saving, matfile'ing, etc.) is done in R2012b with the v7.3 file format.

To set the performance scale: reading in the entire variable with a load command takes 127 seconds (i.e., less than the time matfile takes to read 7 of the 31250 columns).

Edit: a few details I should have included: 24 GB RAM, Windows 7 x64, CPU is an i7-950 (4 cores, 8 with hyperthreading). Disk activity is very, very low during this process, but a single core is running at max speed (i.e., one MATLAB process is using 13% CPU on the "8-core" CPU throughout).

Any ideas why matfile is choking so badly?
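For reference, a minimal sketch of the access pattern being described (array shrunk so it fits comfortably in RAM; file and variable names are made up; timings will vary by machine):

```matlab
% Sketch: time per-column reads through matfile on a large single array.
N = 8000; M = 4000;                      % stand-ins for 64000x31250
big = rand(N, M, 'single');
save('big_test.mat', 'big', '-v7.3');    % matfile partial I/O needs v7.3
clear big

obj = matfile('big_test.mat');
tic
for k = 1:5
    col = obj.big(:, k);                 % one full column per read
end
fprintf('%.3f s per column\n', toc/5);
```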

##### 4 Comments

Isaac Asimov
on 25 Jan 2018

I have also encountered this problem in R2017a.

Reading the data by row or by column does not affect the performance remarkably.

I located the lines that take the most time (lines 459-463 in `InstallPath\MATLAB\toolbox\matlab\iofun\+matlab\+io\MatFile.m`):

```matlab
if obj.Properties.SupportsPartialAccess
    [varargout{1:nargout}] = matlab.internal.language.partialLoad(obj.Properties.Source, varSubset, '-mat');
else
    [varargout{1:nargout}] = inefficientPartialLoad(obj, indexingStruct, varName);
end
```

MAT-files saved with '-v7.3' take the upper branch; MAT-files saved in other formats ('-v7', '-v6') take the lower one.

But I do not know why this line takes so much time.

If you comment out the upper call `matlab.internal.language.partialLoad(...)` and replace it with the lower call `inefficientPartialLoad(...)`, the performance does not change much.

It seems that the "efficient" function is no faster than `inefficientPartialLoad`.

And I hope that this function could be improved by the development team of The MathWorks.

After all, it is impractical to `load` a large data set into the workspace just to speed up the program. (That said, `load` on these MAT-files is also much slower than one would expect.)

### Accepted Answer

per isakson
on 6 Jul 2013

Edited: per isakson
on 25 Jul 2013

Summary: "column-major" does not apply to the matlab.io.MatFile class when it comes to reading speed.

---

Column-major or row-major?

The documentation for hdf5read says:

> [...] HDF5 describes data set dimensions in row-major order; MATLAB stores data in column-major order. However, permuting these dimensions may not correctly reflect the intent of the data and may invalidate metadata. When BOOL is false (the default), the data dimensions correctly reflect the data ordering as it is written in the file — each dimension in the output variable matches the same dimension in the file.

Matlab uses column-major order and HDF5 uses row-major order. The MAT-file 7.3 file format "is" HDF5.

The following test (R2012a 64-bit, 8 GB RAM, Windows 7) shows that for a 5000x5000 single matrix:

- reading one column takes approximately half the time of reading the full matrix
- reading one row is roughly 20 times faster than reading one column

In this case the matrix is so small that my 8 GB of RAM should not be a bottleneck.

```matlab
N = 5e3;
filespec = 'matfile_test.mat';
mat = rand( N, 'single' );
save( filespec, 'mat', '-v7.3' )
obj = matfile( filespec );

% Full matrix: matfile vs. h5read
tic, mfm = obj.mat; toc
tic, h5m = h5read( filespec, '/mat' ); toc
dfm = mfm - mat;
d5m = h5m - mat;
max(abs(dfm(:)))
max(abs(d5m(:)))

% One column
tic, mfm = obj.mat( :, 1 ); toc
tic, h5m = h5read( filespec, '/mat', [1,1], [N,1] ); toc
dfm = mfm - mat( :, 1 );
d5m = h5m - mat( :, 1 );
max(abs(dfm(:)))
max(abs(d5m(:)))

% One row
tic, mfm = obj.mat( 1, : ); toc
tic, h5m = h5read( filespec, '/mat', [1,1], [1,N] ); toc
dfm = mfm - mat( 1, : );
d5m = h5m - mat( 1, : );
max(abs(dfm(:)))
max(abs(d5m(:)))
```

returns

```
Elapsed time is 1.955082 seconds.
Elapsed time is 1.674106 seconds.
ans =
     0
ans =
     0
Elapsed time is 0.984833 seconds.
Elapsed time is 0.822843 seconds.
ans =
     0
ans =
     0
Elapsed time is 0.056097 seconds.
Elapsed time is 0.029657 seconds.
ans =
     0
ans =
     0
```


2013-07-24: Test with R2013a 64-bit, 8 GB, Windows 7; same computer, same OS, new MATLAB release. The results below are from the third run of the script after restarting the computer and MATLAB. There is a small improvement in speed, but nothing comparable to the row-read result that Matt J reports in the comments.

```
>> matfile_h5_script
Elapsed time is 2.626919 seconds.
Elapsed time is 1.219851 seconds.
ans =
     0
ans =
     0
Elapsed time is 0.809362 seconds.
Elapsed time is 0.765147 seconds.
ans =
     0
ans =
     0
Elapsed time is 0.049908 seconds.
Elapsed time is 0.020192 seconds.
ans =
     0
ans =
     0
```

##### 4 Comments

per isakson
on 24 Jul 2013

Edited: per isakson
on 24 Jul 2013

I assume that you obtained the numbers after running the script a few times in a row, i.e. the data were available in the system cache.

The SSD should make a large difference when running the script for the first time after restarting Windows. (I know of no other way to "clear" the system cache.)

There is a difference between our results that I cannot explain. I have to leave it at that.

### More Answers (4)

Isaac Asimov
on 25 Jan 2018

Fortunately, I have finally found a practical solution to this problem:

----------------------------------------------------------

Put your variable in cells and then save it to a MAT-file.

Do not save it directly as a double array/matrix!

----------------------------------------------------------

I have tested this method on my computer, and the result is amazing:

```matlab
tic;
m1 = matfile('var_as_matrix.mat.mat');
x1 = m1.trs_sample(1,:);
toc;
% Elapsed time is 8.686759 seconds.

tic;
m2 = matfile('var_as_cell.mat.mat');
x2 = m2.trs_sample(1,:);
y2 = x2{:};
toc;
% Elapsed time is 0.295925 seconds!
```

Let me explain.

I have a large data set (not sparse) that I read as a matrix. Its size is 1000x250000 (int8).

When I save the matrix directly to a MAT-file ('-v7.3'):

- MATLAB automatically changes the data type to `double`. (You can check this yourself: call matfile('YourMatFilePath') and the console shows the file's properties.)
- Its size is about 400 MB.
- It takes about 8.7 s to assign the first row of the matrix to a variable.

When I put each row into a cell and then save it (with the same settings):

- I get a 1000x1 cell array, and the data keeps its `int8` type.
- Its size is about 200 MB, half of the original.
- It takes about 0.3 s to fetch the first cell and assign its contents to a variable: almost 30x faster than before. (I have repeated the test several times; the speedup stays between 15x and 30x.)
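A minimal sketch of the cell-wrapping trick described above. The variable name `trs_sample` is taken from the timings, but the data, sizes, and file names here are placeholders:

```matlab
% Sketch only: values are random stand-ins, not the original data.
trs_sample = int8(randi([-128 127], 1000, 250000));  % 1000x250000 int8

% Direct save: one large numeric dataset in the v7.3 (HDF5) file.
save('var_as_matrix.mat', 'trs_sample', '-v7.3');

% Wrap each row in its own cell before saving.
trs_sample = num2cell(trs_sample, 2);                % 1000x1 cell, one row per cell
save('var_as_cell.mat', 'trs_sample', '-v7.3');

% Reading row k then becomes one small cell fetch:
m = matfile('var_as_cell.mat');
k = 1;
c = m.trs_sample(k, 1);   % 1x1 cell
row = c{1};               % the int8 row vector
```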

----------------------------------------------------------

But why does this method work?

My guess: when you repack the matrix into cells, the saved MAT-file is "better structured", because you build a higher-level hierarchy above the original matrix.

When MATLAB reads the "better structured" MAT-file from disk, it can parse and fetch the data more efficiently, and I think that is why the method improves performance so remarkably.

----------------------------------------------------------

I hope my method helps you more or less.

Currently, apart from this workaround, I have not found any other useful suggestion for this problem. That is strange, because the problem is so common and critical, and even four years after the question was asked there is still no feasible solution.

If anyone knows more details, please share them and post an answer here. It would be a great help to people who encounter a similar problem.

##### 5 Comments

Mitchell Tillman
on 27 Aug 2021

This method worked for me too! But I suspect it only holds for plain numeric array variables, not arrays inside structs. I checked the method with an 8 GB MAT-file consisting of one struct, and it increased my file size by 30% and the load time as well. The struct is originally formatted as in the code below; I then stored each row of each data stream as a cell. Of course, I also keep a small amount of metadata/other data inside this struct.

Does anyone have any insight as to why @Isaac Asimov's method wouldn't work to save space/speed up loading arrays inside of a struct?

```matlab
for subNum = 1:10                 % 10 subjects
    for trialNum = 1:50           % 50 trials per subject
        for dataStreamNum = 1:50  % 50 data streams per trial
            dataMatrix = rand(3,3000);  % each data stream is 3x3000
            % Data in matrix form
            structName.Subject(subNum).Trial(trialNum).Data(dataStreamNum).Matrix = dataMatrix;
            % Each row of the data stream in its own cell
            structName.Subject(subNum).Trial(trialNum).Data(dataStreamNum).Cell{1,1} = dataMatrix(1,:);
            structName.Subject(subNum).Trial(trialNum).Data(dataStreamNum).Cell{1,2} = dataMatrix(2,:);
            structName.Subject(subNum).Trial(trialNum).Data(dataStreamNum).Cell{1,3} = dataMatrix(3,:);
        end
    end
end
```

per isakson
on 5 Jul 2013

Edited: per isakson
on 14 Sep 2021

I'm neither surprised nor shocked.

Which OS, and how much RAM installed?

Your matrix is large:

```matlab
>> 64000*31250*4/1e9
ans =
     8
```

that is, 8 GB.

The v7.3 file format "is" HDF5.

A little experiment to show that RAM is important. (R2012a 64bit, 8GB, Windows 7)

```matlab
N = 1e4;
filespec = 'matfile_test.mat';
mat = rand( N, 'single' );
save( filespec, 'mat', '-v7.3' )

tic
h5m = h5read( filespec, '/mat', [1,1], [N,1] );
toc

tic
obj = matfile( filespec );
mfm = obj.mat( :, 1 );
toc

d5m = h5m - mat( :, 1 );
dfm = mfm - mat( :, 1 );
max(abs(d5m(:)))
max(abs(dfm(:)))
```

returns

```
Elapsed time is 3.214658 seconds.
Elapsed time is 3.499495 seconds.
ans =
     0
ans =
     0
```

Create a variable just to use up RAM

```matlab
>> buf = zeros( N );
```

and rerun the script, which now returns

```
Elapsed time is 52.967529 seconds.
Elapsed time is 52.730371 seconds.
ans =
     0
ans =
     0
```

Watch the Windows Task Manager | Performance tab during the reading.

Jason Climer
on 11 Apr 2018

Edited: per isakson
on 11 Apr 2018

##### 0 Comments

Thomas Richner
on 8 Jul 2019

Try Tim Holy's savefast

And as a replacement for matfile, try using h5create and h5write directly. They are lower level, which is annoying, but they are faster. You can specify the chunk size and compression with h5create, which lets you pick your trade-off between column and row access. I did some benchmarking and found that a chunk size of [64 64] gives reasonable performance for 2D arrays when you later need to read back rows or columns.
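A sketch of that chunked approach, assuming the [64 64] chunk size suggested above (file and dataset names are made up):

```matlab
% Sketch: explicit chunking so both row and column reads touch few chunks.
A = rand(4096, 4096, 'single');

h5create('chunked.h5', '/A', size(A), ...
    'Datatype', 'single', 'ChunkSize', [64 64], 'Deflate', 1);
h5write('chunked.h5', '/A', A);

% Partial reads take a start index and a count:
col = h5read('chunked.h5', '/A', [1 1],   [4096 1]);   % one column
row = h5read('chunked.h5', '/A', [100 1], [1 4096]);   % one row
```

With [64 64] chunks, a single row or column read crosses 64 chunks instead of the whole file, which is the trade-off the benchmark refers to.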

##### 0 Comments
