Hi, I'm trying to process a big dataset that is stored on Amazon S3. My code architecture is as follows:
The MATLAB client calls MATLAB Parallel Cloud (my default cluster is Parallel Cloud, 16 workers):
r = zeros(100,1);
readTimes = zeros(100,1);
parfor i = 1:100
    [ri,readTimesi] = myProcess(i);
    r(i) = ri;
    readTimes(i) = readTimesi;
end
fprintf('Mean Read Time %.1f sec\n',mean(readTimes));
Each worker accesses Amazon S3 independently to retrieve its data for processing, using datastore.
function [r,readTime] = myProcess(i)
fp = ['s3://mybucket/data/file' num2str(i) '.data'];
tic; data = AWSRead(fp); readTime = toc;
r = mean(data);
function data = AWSRead(fileName)
fid = fopen(fileName);
data = fread(fid,'double');
fclose(fid);
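Since I mentioned datastore above, here is a minimal sketch of the datastore-based variant of my read, assuming the same hypothetical bucket layout and file format (fileDatastore accepts s3:// locations and takes a custom read function):

```
% Sketch: read the same hypothetical s3://mybucket/data/file*.data files
% through a fileDatastore instead of calling fopen directly.
ds = fileDatastore('s3://mybucket/data/', ...
    'ReadFcn', @readOne, 'FileExtensions', '.data');
while hasdata(ds)
    data = read(ds);   % returns the contents of one file per call
end

function data = readOne(fileName)
fid = fopen(fileName);
data = fread(fid,'double');
fclose(fid);
end
```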
I'm trying to troubleshoot why my Mean Read Time is slow, and how I can speed it up.
I noticed that the Mean Read Time is much faster when I use my local machine as the parallel worker pool (parpool('local')) rather than MATLAB Parallel Cloud. I read in MATLAB's documentation that MATLAB Parallel Cloud runs on EC2, which should integrate with S3 and give very good data transfer speeds when EC2 and S3 are in the same region.
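To isolate where the time goes, here is a rough benchmark sketch I could run: time a single read of one file (the same hypothetical file name as above) on one pool worker versus on the client.

```
% Rough benchmark sketch: time one S3 read on a pool worker (via
% parfeval) and on the local client, then compare the two.
fp = 's3://mybucket/data/file1.data';   % hypothetical file name
f = parfeval(@timeRead, 1, fp);         % runs on a pool worker
tWorker = fetchOutputs(f);
tLocal  = timeRead(fp);                 % runs on the client
fprintf('worker: %.1f s, client: %.1f s\n', tWorker, tLocal);

function t = timeRead(fp)
tic
fid = fopen(fp);
fread(fid,'double');
fclose(fid);
t = toc;
end
```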
My questions are: Which region should I use for maximal data transfer performance? Where is MATLAB Parallel Cloud hosted? And how else can I speed up data transfer (other than running locally, since I need many more workers)?
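For the "where is it hosted" question, one thing I could try is a sketch like the following, assuming the workers really are EC2 instances and the standard instance metadata endpoint is reachable from them:

```
% Sketch: ask one pool worker for its EC2 availability zone (and hence
% region) via the instance metadata service. Only works if the worker
% is an EC2 instance with metadata access.
f = parfeval(@() system( ...
    'curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone'), 2);
[status, az] = fetchOutputs(f);
if status == 0
    fprintf('Workers run in availability zone: %s\n', az);
end
```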
I did not use MATLAB Drive to host my files, as they are too big and will not fit within Drive's 5 GB maximum allocation.