Hi everyone, I'm wondering if there is any way to optimize this code to run faster, for example with vectorization or by avoiding loops. I have a very large text file (up to 2.5 GB) that must be read and compared line by line with another file in .xlsx format. It takes forever to run, and I'm also worried that memory will not be enough, because the result of the calculation will be much bigger than the .txt file.
The text file looks like this:
1567683075.081675 800002C1 1100000000000000
1567683075.082312 80000189 7437060000843B00
It always has this structure with 3 columns (a timestamp and two hex values) and about 10 million rows, which comes to about 2 GB.
The Excel file has 800 rows and 17 columns, like this:
'1' '800002CA' 'EBC1' 'nodata' 'ASRECA_1' 'nodata ' '1' '0' 'NaN' 'NaN' 'NaN' 'NaN' 'NaN' 'NaN' 'NaN' '4' '0,5'
As I said, the second column of the text file is compared with the second column of the Excel file and some calculation happens. This has to be done for all rows of the text file, and the result is stored in a structure.
This is how far I am with the code, and I want to know how I can replace these two for loops, because the inner one will be iterated 10 million times (the length of c is the same as the text file).
fileID = fopen('day_29_08.txt');
data = textscan(fileID, '%s %s %s');   % renamed from "text", which shadows the built-in text() function
fclose(fileID);                        % close the file handle after reading

excel_data = readtable('List.xlsx');
excel_id = table2cell(excel_data(:,2));           % column 2: CAN IDs to match against
excel_signal_name = table2cell(excel_data(:,5));  % column 5: signal names
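For reference, this is the kind of vectorized lookup I am hoping to end up with instead of the nested loops. It is only a sketch under my assumptions (the variable names follow the code above, and I assume the IDs in both files are plain hex strings): `ismember` maps every ID in the text file to its row in the Excel list in one call, so no inner loop over the 800 Excel rows is needed.

```matlab
% Hypothetical vectorized replacement for the double loop (a sketch, not tested
% on the real files): match all 10 million IDs against the Excel list at once.
txt_id = data{2};                           % cell array of hex ID strings from the text file
[found, row] = ismember(txt_id, excel_id);  % row(k) = matching Excel row for txt_id{k}, 0 if none

% Keep only the messages whose ID appears in the Excel list,
% and pull the corresponding signal name for each matched message.
matched_time    = data{1}(found);
matched_payload = data{3}(found);
matched_signal  = excel_signal_name(row(found));
```

The per-message calculation can then be done on `matched_payload` as whole arrays (e.g. with `cellfun` or `hex2dec` on batches) rather than one row at a time inside a loop.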