MATLAB offers ways to access a fraction of the data in a file without loading the whole file, which can be really useful, particularly when you need to handle large data.
One way is to use matfile for *.mat files written with save. Another is to use memmapfile for flat binary files (typically *.bin or *.dat written with fwrite).
Let us suppose that we'd like to save the following variables into a file:
A = randi([intmin('int16'),intmax('int16')], 1000000, 1, 'int16');
Fs = 1024;
scale = 1.20;
offset = 0.05;
title = 'EEG';  % note: this shadows the built-in title function
Which of the matfile and memmapfile approaches do you think is better? What are the pros and cons? Please give me insights on this matter.
If we save the variables above into myData.mat, partial loading of A can be achieved with matfile as below:
save('myData.mat','A','Fs','scale','offset','title','-v7.3')  % -v7.3 enables efficient partial loading
mat = matfile('myData.mat');
A(10) == mat.A(10)
- The variables stored in the *.mat file can be easily identified through the property names of mat.
- You don't have to worry about the number of bytes used by each variable.
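As a minimal sketch (assuming myData.mat has been saved as above), matfile can also read, and even write, just a sub-range of A without touching the rest of the array on disk:

```matlab
mat = matfile('myData.mat','Writable',true);
chunk = mat.A(1:100,1);        % read only the first 100 samples from disk
mat.A(1:100,1) = chunk * 2;    % overwrite just that range in the file
```

Writing back in place like this is one thing the memmapfile approach can also do, but here you never have to compute a byte offset yourself.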
If we save the variables above into myData.bin, partial loading of A can be achieved with memmapfile as below:
% Write the fixed-size metadata first, then the int16 samples
fid = fopen('myData.bin','w');
fwrite(fid, Fs, 'double');
fwrite(fid, scale, 'double');
fwrite(fid, offset, 'double');
fwrite(fid, title, 'char');    % 3 bytes for 'EEG'
fwrite(fid, A, 'int16');
fclose(fid);

% Read the metadata back with fread
fid = fopen('myData.bin','r');
Fs_ = fread(fid,[1 1],'double')
scale_ = fread(fid,[1 1],'double')
offset_ = fread(fid,[1 1],'double')
title_ = char(fread(fid,[1 3],'char'))
fclose(fid);

% Map A, skipping the 3*8 + 3 = 27 metadata bytes
m = memmapfile('myData.bin','Offset',27,'Format','int16');
A(10) == m.Data(10)
- Because the data is stored in a bare binary file, as long as the metadata is stored somewhere, you are guaranteed to be able to open the file in the future.
- Writing and reading with the low-level functions fwrite and fread, as well as memmapfile, are A LOT more laborious than using the high-level save and matfile.
- You cannot access variables in the file by their names.
- You need to know the exact number of bytes used by each variable in the file.
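One partial mitigation for the last two points, sketched here under the assumption that myData.bin has the layout described above: memmapfile also accepts an N-by-3 cell array as Format, where each row is {type, size, fieldname}, so every region of the file gets a name. Note that char is not a supported memmapfile type, so the title bytes are mapped as uint8:

```matlab
m = memmapfile('myData.bin', 'Format', { ...
    'double', [1 1],       'Fs';     ...
    'double', [1 1],       'scale';  ...
    'double', [1 1],       'offset'; ...
    'uint8',  [1 3],       'title';  ...   % raw bytes of 'EEG'
    'int16',  [1000000 1], 'A'});
m.Data.Fs            % 1024
char(m.Data.title)   % 'EEG'
m.Data.A(10)         % same value as A(10)
```

You still have to spell out the exact sizes once when building the map, but after that the fields are addressed by name, much like the matfile approach.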