Asked by huda nawaf
on 26 Apr 2013

*Hi,

I have a similarity matrix of size 17770*17770. When I process it, I run out of memory.

In fact, I first built this similarity matrix by segmenting the original matrix into 7 parts, each of size 2500*17770, and then collecting these parts to get the final matrix. But I cannot do the next step part by part, because I want to cluster based on this similarity matrix, so processing it in parts is impossible.

Is there a way to process this similarity matrix?*

Thanks in advance

## 22 Comments

## Matt J (view profile)


How sparse is the matrix? How is it computed?

## per isakson (view profile)


data type?

## huda nawaf (view profile)


to Matt J,

The 17770*17770 matrix is a co-occurrence matrix; I got it by computing it in 7 segments.

I have 17770 files; each file is a movie that has been seen by a number of users. I wanted to compute the co-occurrence of all movies, so I computed this matrix in parts. But now that I have the full matrix, I would like to cluster the movies based on the co-occurrence matrix.

thanks

## huda nawaf (view profile)


to per,

It is a co-occurrence matrix; the values are integers.

## Matt J (view profile)


How sparse is the matrix?

## Anand (view profile)


What kind of processing do you want to do on it? If it doesn't fit into memory and you need to do neighborhood-like processing on it, write it to a TIFF file and use something like blockproc on it.
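
A sketch of the blockproc route (requires the Image Processing Toolbox; the matrix, block size, and reduction function here are all illustrative, not from the thread):

```matlab
% Process a large matrix strip by strip instead of all at once.
C = rand(5000, 5000, 'single');            % stand-in for the similarity matrix
fun = @(bs) sum(bs.data, 2);               % bs.data holds the current block
rowTotals = blockproc(C, [2500, size(C,2)], fun);  % one call per 2500-row strip
```

For a matrix that does not fit in memory at all, blockproc can also take a TIFF file name as its first argument, so only one block is ever loaded at a time.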

## Cedric Wannaz (view profile)


Use NNZ on a 2500x17770 block to tell how sparse it is, and if the number is significantly inferior to the product of sizes, go for a solution based on SPARSE matrices.
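
A quick density check along these lines (the variable name and threshold are illustrative):

```matlab
block = rand(2500, 17770) > 0.99;        % stand-in for one mostly-zero block
density = nnz(block) / numel(block);     % fraction of nonzero entries
if density < 0.1                         % sparse pays off only when most entries are zero
    S = sparse(double(block));           % store just the nonzeros and their indices
end
```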

## Walter Roberson (view profile)


What is the maximum co-occurrence count? Could you represent it as int16? That would only be on the order of 600 MB, which could probably be processed even on a 32-bit MS Windows system if /3GB was in effect.
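
The arithmetic behind that 600 MB estimate (int16 is 2 bytes per element):

```matlab
nBytes = 17770^2 * 2;                  % 17770*17770 elements, 2 bytes each
fprintf('%.0f MiB\n', nBytes / 2^20);  % prints about 602
```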

## huda nawaf (view profile)


To Anand,

I am trying to cluster the co-occurrence matrix using the Ward method (following code), but I also tried another method, Newman's spectral clustering, and I got out of memory with it too.

thanks

## huda nawaf (view profile)


to Cedric, when I used NNZ on the 17770*17770 matrix, I got:

Undefined function or method 'NNZ' for input arguments of type 'double'.

Walter, the max count is 232940; it is a diagonal element, i.e. the co-occurrence of an item with itself.

thanks

## Walter Roberson (view profile)


nnz should be lower-case.

## Cedric Wannaz (view profile)


Hi Huda, nnz should be applied to a variable that contains one of the 2500*17770 blocks of data. As Walter mentions, the function name is lower case (but some of us tend to write function names in upper case on the forum, to differentiate them from the rest of the text).

## huda nawaf (view profile)


nnz for co_occurance_mat is 315077904

The product of the sizes is 315772900.

thanks

## huda nawaf (view profile)


Walter, I have to normalize the matrix before clustering, so the max count will be 1.

## Cedric Wannaz (view profile)


Ok, it is dense. A full matrix of size 17770*17770 stored as a _double_ (class/type) array takes a little more than 2.5GB. How much RAM do you have, and are you working on a 32 or 64 bit system? If, for any reason, 2.5GB is too large for your system, you can either go on operating on blocks or, as Walter mentions, work with a lower-precision class/type of array (_double_ is 8 bytes, and there are less precise 4- and 2-byte classes available).

## huda nawaf (view profile)


My system is 64-bit, with 6 GB of RAM.

In this case, must I use blocks, or what Walter suggested? If so, please give me an idea how to use blocks, or how to work with lower precision.

thanks in advance

## Walter Roberson (view profile)


When you are initializing the integer co-occurrence matrix, instead of initializing it as zeros(17770,17770), initialize it as zeros(17770,17770,'int32').

Then, when you want to normalize it, convert the matrix to a floating-point type and divide by its maximum value.

That might still cause you to run out of memory because of the temporary space needed to do the conversion and division. If it does, then probably the formation of the distance matrix during clustering would also run out of memory.
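
A sketch of what that initialization plus normalization could look like (the variable names are made up):

```matlab
C = zeros(17770, 17770, 'int32');      % 4 bytes/element, about 1.2 GB (vs ~2.5 GB as double)
% ... accumulate the co-occurrence counts into C, strip by strip ...
Cn = single(C) ./ single(max(C(:)));   % convert and divide: max becomes 1, but peak
                                       % memory roughly doubles during the conversion
```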

## huda nawaf (view profile)


Hi Cedric, why do I have to use nnz on a 2500*17770 block? We need to know the number of nonzeros in the total matrix, right?

Anyway, I want someone to tell me how to deal with blocks of the matrix in order to cluster the total matrix.

## Walter Roberson (view profile)


We needed to see the result of nnz to know whether the matrix was sparse or dense. It turns out to be dense, so the idea of using sparse calculations to save memory will not work.

## Matt J (view profile)


*Anyway, I want someone to tell me how to deal with blocks of the matrix in order to cluster the total matrix.*

That question becomes unnecessary if it turns out that the majority of your matrix elements are zeros. In that case, you don't have to break the matrix into blocks; you would use the SPARSE command to make the entire matrix fit into memory. Since you seem unaware of SPARSE and what it does, the others want to make sure you consider it before proceeding.
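
For reference, a minimal SPARSE illustration (the indices and values are made up; this only pays off when most entries are zero, which turned out not to be the case here):

```matlab
% Build the sparse matrix directly from (row, col, value) triplets,
% without ever allocating the dense 17770x17770 array.
i = [1; 5; 17770];                   % row indices of the nonzeros
j = [2; 9; 17770];                   % column indices
v = [3; 7; 1];                       % co-occurrence counts
S = sparse(i, j, v, 17770, 17770);   % stores only the 3 nonzeros plus indices
whos S                               % compare Bytes with 17770^2*8 for a dense double
```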

## Walter Roberson (view profile)


It appears to me that you could save memory during the clustering by *not* using pdist yourself, and instead calling linkage directly on the data with its 'savememory' option.

## huda nawaf (view profile)


Walter,

Ward did cluster when I used: L = linkage(d, 'ward', 'euclidean', 'savememory', 'on');

But I cannot predict the running time, maybe 4-5 hours. Anyway, the running time is not important, because I only run it once.

You resolved a big problem, many many thanks.

Walter, what if I want to use spectral clustering instead of Ward, to show the difference between them in terms of clustering? Earlier I faced the same problem (out of memory) with spectral clustering. What do I have to change in the following code? In the code, one function calls another function, but the out-of-memory error happens before the other function is even called.
