Code covered by the BSD License  

3.0 | 2 ratings · 20 Downloads (last 30 days) · File Size: 7.76 KB · File ID: #11984
16 Aug 2006 (Updated)

A different version of bench.m with additional options and output.


File Information

Benchmark is a modification of BENCH, a routine developed by Stephen Lord and Cleve Moler, both at The MathWorks, Inc., that times six different MATLAB tasks.

These times can be used to check MATLAB performance on different computers or over time. Comparisons between different MATLAB versions should be made carefully, as the tasks may change from version to version; even so, such comparisons can be useful for discovering changes in the software. A bar chart shows performance relative to other computers stored in a database, bench.dat, which is maintained by Stephen Lord. Both bench.m and bench.dat are available on the File Exchange as well as in the folder $matlabroot\toolbox\matlab\demos.

Differences from BENCH.M:
Output includes the means and standard deviations of the measured times.
The mean values are used in the display of results.
Speed performance is rated in percent versus the fastest machine.
The tests can be individually selected.
The bar graph compares the sum of times of the selected tests only. This may change the relative order of machines.
The execution count is displayed during the test.
The first execution time is routinely dropped from the results, as it may be affected by compilation time.
Minor improvements in figure handling and labelling.
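The timing scheme listed above (drop the first run, report mean and standard deviation, rate speed in percent versus the fastest machine) can be illustrated with a short sketch. This is not the MATLAB source of Benchmark; it is a minimal Python illustration, and the function names time_task and relative_speed are hypothetical:

```python
import statistics
import time

def time_task(task, count):
    """Run task() `count` times and return (mean, std) of the timings,
    dropping the first run, which may include compilation/JIT cost."""
    times = []
    for _ in range(count):
        start = time.perf_counter()
        task()
        times.append(time.perf_counter() - start)
    kept = times[1:]  # the first execution time is routinely dropped
    return statistics.mean(kept), statistics.stdev(kept)

def relative_speed(my_time, machine_times):
    """Rate speed in percent versus the fastest machine
    (a smaller time is faster, so the fastest machine scores 100%)."""
    fastest = min(machine_times)
    return 100.0 * fastest / my_time
```

For example, a machine that takes 2.0 s on a test where the fastest recorded machine takes 1.0 s would be rated at 50 percent.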

This version adds flexibility to handle format changes in the currently released bench.dat. The program should still be able to read former versions of bench.dat, with the caveat that the data in older files might not be comparable to current benchmarks.
The bar graph now marks in red a band of two standard deviations calculated from the results.


Bench.Dat inspired this file.

MATLAB release MATLAB 7.4 (R2007a)
Other requirements None, though the 3-D graphics test may hang on some systems with multiple monitors using the extended desktop.
Comments and Ratings (6)
12 Jan 2010 Alan

Jim, the description is for Benchmark, not bench; bench is just given as a point of reference that some users might know. Also, as a user, I would like to know whether the same task (i.e., sparse manipulation or symbolic solving) takes longer on different versions, so I can choose which version to use.

26 Jul 2007 Tim Davis

You state in the description "These times can be used to check Matlab performance on ... different versions". This is incorrect. Different versions of bench.m do different things with different problem sizes. In MATLAB 7.1 and earlier, the sparse bench worked on a matrix from a 120-by-120 mesh and turned off the default fill-reducing ordering. In MATLAB 7.2, the mesh size increased to 300-by-300, and the default fill-reducing ordering was left on. In addition, the default ordering changed from symmmd to amd, and the sparse Cholesky changed from the original one to CHOLMOD. The latter is about 10 times faster than the old one. Even running the old bench.m file in the new MATLAB will not help, since the flag that turns off the fill-reducing ordering does that for the old chol, not the new one.

With all these changes to the problem size and task (without ordering in the old, with ordering in the new), using "bench" to compare different versions of MATLAB is completely misleading.

Please update your description. You should also add a warning to your code that says "bench" cannot be used to compare different versions of MATLAB ... and in particular, "sparse" changed dramatically from v7.1 to v7.2.

27 Nov 2006 Tim Davis

The grid size of the mesh was increased from 120 to 300 because "chol" in MATLAB is about 10 times faster than it used to be (chol is now based on CHOLMOD). The n=120 mesh was too small to adequately test the new chol.

25 Oct 2006 Costa Georgana  
03 Oct 2006 Mirko Hrovat

Zheng, yes, the mesh size used for the sparse computation was increased from 120 to 300. This seems to have been done so that the sparse computation time was more in line with the other tests. It seems to me that the original concept was for each test to take about the same length of time on a "typical" system; an unusually high time for any one test, compared to the others, could indicate a problem.

03 Oct 2006 Zheng Hui

Is the size of the sparse computation correct? The performance of my notebook is quite close to other machines except on the sparse computation, where it takes about 10 times the computation time. My original bench.m uses 120, while the new bench.m, as well as this m-file, uses 300. Thanks.
