Create HDF5 dataset
Create a fixed-size 100-by-200-by-300 dataset 'myDataset' whose full path is specified as '/g1/g2/myDataset'.
h5create('myfile.h5','/g1/g2/myDataset',[100 200 300])
Write data to 'myDataset'. Since the dimensions of 'myDataset' are fixed, the amount of data to be written to it must match its size.
mydata = ones(100,200,300);
h5write('myfile.h5','/g1/g2/myDataset',mydata)
h5disp('myfile.h5')
HDF5 myfile.h5
Group '/'
    Group '/g1'
        Group '/g1/g2'
            Dataset 'myDataset'
                Size:  100x200x300
                MaxSize:  100x200x300
                Datatype:   H5T_IEEE_F64LE (double)
                ChunkSize:  []
                Filters:  none
                FillValue:  0.000000
Create a single-precision 1000-by-2000 dataset and apply the highest level of compression. Chunk storage must be used when applying HDF5 compression.
h5create('myfile.h5','/myDataset2',[1000 2000],'Datatype','single', ...
    'ChunkSize',[50 80],'Deflate',9)
Display the contents of the entire HDF5 file.
h5disp('myfile.h5')
HDF5 myfile.h5
Group '/'
    Dataset 'myDataset2'
        Size:  1000x2000
        MaxSize:  1000x2000
        Datatype:   H5T_IEEE_F32LE (single)
        ChunkSize:  50x80
        Filters:  deflate(9)
        FillValue:  0.000000
Create a two-dimensional dataset '/myDataset3' that is unlimited along the second dimension. ChunkSize must be specified to set any dimension of the dataset to Inf.
h5create('myfile.h5','/myDataset3',[200 Inf],'ChunkSize',[20 20])
Write data to '/myDataset3'. You can write data of any size along the second dimension to '/myDataset3', since its second dimension is unlimited.
mydata = rand(200,500);
h5write('myfile.h5','/myDataset3',mydata,[1 1],[200 500])
Display the entire contents of the HDF5 file.
h5disp('myfile.h5')
HDF5 myfile.h5
Group '/'
    Dataset 'myDataset3'
        Size:  200x500
        MaxSize:  200xInf
        Datatype:   H5T_IEEE_F64LE (double)
        ChunkSize:  20x20
        Filters:  none
        FillValue:  0.000000
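Because the second dimension of '/myDataset3' is unlimited, you can also append more columns later by writing at an offset beyond the current extent. A sketch, assuming the 500 columns written above are already in the file:

```matlab
% Append 250 more columns starting at column 501;
% the dataset grows automatically along its unlimited dimension.
morecols = rand(200,250);
h5write('myfile.h5','/myDataset3',morecols,[1 501],[200 250])
```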
filename — File name
File name, specified as a character vector or string scalar containing the name of an HDF5 file.
Depending on the location you are writing to, filename can take one of these forms.
| Location | Form |
|---|---|
| Current folder | To write to the current folder, specify the name of the file in filename. |
| Other folders | To write to a folder different from the current folder, specify the full or relative path name in filename. |
| Remote location | To write to a remote location, specify filename as a uniform resource locator (URL). Based on your remote location, the form of the URL differs. For more information, see Work with Remote Data. |
If filename does not already exist,
h5create creates it.
If you specify an existing HDF5 file name and a new dataset name,
then h5create will add the new dataset to the
existing HDF5 file.
ds — Dataset name
Dataset name, specified as a character vector or string scalar containing the full path name of the dataset to be created. If you specify intermediate groups in the dataset name that do not already exist, then h5create creates them.
sz — Dataset size
Dataset size, specified as a row vector. To specify an unlimited dimension, set the corresponding element of sz to Inf.
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Example: 'Deflate',9
'Datatype' — Datatype
'double' (default) | 'single' | 'uint64' | 'uint32' | 'uint16' | …
Datatype of the dataset, specified as the comma-separated pair consisting of 'Datatype' and any of the following MATLAB® datatypes:
'double'
'single'
'uint64'
'int64'
'uint32'
'int32'
'uint16'
'int16'
'uint8'
'int8'
'string'
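For instance, a minimal sketch of creating an integer dataset (the file and dataset names here are illustrative, not from the examples above):

```matlab
% Create a 50-by-50 uint8 dataset; data written to it must be uint8.
h5create('example.h5','/img',[50 50],'Datatype','uint8')
h5write('example.h5','/img',uint8(randi(255,50,50)))
```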
'ChunkSize' — Chunk size
Chunk size, specified as the comma-separated pair consisting of 'ChunkSize' and a row vector containing the dimensions of the chunk. The length of 'ChunkSize' must equal the length of the dataset size sz. ChunkSize must be specified to set any dimension in sz to Inf.
'Deflate' — gzip compression level
gzip compression level, specified as a numeric value between 0 and 9, where 0 is the lowest compression level and 9 is the highest.
'FillValue' — Fill value for missing data
0 (default) | numeric value
Fill value for missing data in numeric datasets, specified as a numeric value.
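As a sketch (with illustrative file and dataset names), elements that have not yet been written read back as the fill value:

```matlab
% Create a dataset whose unwritten elements default to -1.
h5create('example.h5','/filled',[10 10],'FillValue',-1)
% Reading before any write returns a 10-by-10 array of -1.
data = h5read('example.h5','/filled');
```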
'Fletcher32' — 32-bit Fletcher checksum filter
false or 0 (default) | true or 1
32-bit Fletcher checksum filter, specified as the comma-separated pair consisting of 'Fletcher32' and a numeric or logical 1 (true) or 0 (false). A Fletcher checksum filter is designed to verify that the transferred data in a file is error-free.
'Shuffle' — Shuffle filter
false or 0 (default) | true or 1
Shuffle filter, specified as the comma-separated pair consisting of 'Shuffle' and a numeric or logical 1 (true) or 0 (false). A shuffle filter is an algorithm designed to improve the compression ratio by rearranging the byte order of data stored in memory.
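The shuffle filter is typically combined with deflate compression, and like compression it requires chunked storage. An illustrative sketch (the file and dataset names are assumptions):

```matlab
% Shuffle bytes before deflate compression to improve the ratio.
h5create('example.h5','/compressed',[1000 1000], ...
    'ChunkSize',[100 100],'Shuffle',true,'Deflate',9)
```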
'TextEncoding' — Text encoding
'UTF-8' (default) | 'system'
Text encoding, specified as the comma-separated pair consisting of 'TextEncoding' and one of these values:
'UTF-8' — Represent characters using UTF-8 encoding.
'system' — Represent characters as bytes using the system encoding (not recommended).
h5create does not support creating files stored remotely in
HDFS™.
Chunk storage refers to a method of storing a dataset in the HDF5 file by dividing it into smaller pieces of data known as "chunks". Chunking a dataset can improve performance when operating on a subset of the dataset, since the chunks can be read from and written to the HDF5 file individually.