Hi, I am trying to use dwt2 to extract features from a signal.

Assume my signal is d1.data, with size 118*50*1400. I should mention that the signal has already been passed through a function that computes the common average reference.

P=unfold(d1.data,ndims(d1.data))';

[P]=SVD_Func(P); % calculates 99% SVD

[xdim,ydim]=size(P); % xdim=184 and ydim=1400

dwtmode('zpd');

[cA,cH,cV,cD]=dwt2(P,'db10');

[cAx,cAy]=size(cA); %cAx=101,cAy=709

cH, cV, and cD have the same size as cA.

The problem is that I want to use the results of dwt2 as input to a neural network for classification. Since my target matrix T is 2*1400 (two classes, 1400 samples), the input matrix must have 1400 as its second dimension.

I tried the inverse DWT (P2=idwt2(cA,cH,cV,cD,'db10');), but my understanding is that it just returns the original P, which would undo the feature extraction that the DWT was supposed to perform.

I would appreciate any suggestions.

Adham

"Adham " <adham.atyabi@flinders.edu.au> wrote in message <i452kg$994$1@fred.mathworks.com>...


Hi Adham, you're correct that simply taking the inverse 2-D DWT will return P. Typically some thresholding is applied to the DWT coefficients before inverting, to preserve the important (in some sense) features of the signal while removing or attenuating unimportant ones.
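For example, a minimal sketch (the 90th-percentile cutoff here is an arbitrary illustration, not a recommendation):

```matlab
[cA,cH,cV,cD] = dwt2(P,'db10');
% collect all detail magnitudes and pick an assumed percentile cutoff
detail = sort(abs([cH(:); cV(:); cD(:)]));
thr = detail(round(0.9*numel(detail)));
cH = wthresh(cH,'s',thr);        % soft-threshold each detail subband
cV = wthresh(cV,'s',thr);
cD = wthresh(cD,'s',thr);
P2 = idwt2(cA,cH,cV,cD,'db10');  % same size as P, small details removed
```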

Can you explain a little more about your data? Are your data really just 1-D signals that you have collected into a 3-D matrix? For example, if you acquired 1400-sample signals at 118x50 spatial locations, you might just be interested in the (118*50) 1-D signals. Can you provide the details about your data and what you are trying to classify?

Wayne

"Wayne King" <wmkingty@gmail.com> wrote in message <i45uvb$9j0$1@fred.mathworks.com>...


Dear Wayne

The signal comes from EEG recorded at fs = 1000 Hz. Generally speaking, I have 118 channels, each with recordings of 2.5 seconds (2500 samples) over 280 stimuli (which we call epochs). So the data is 118*2500*280. Later, the data is resampled and sub-epoched at fs = 100 Hz, which gives me a matrix of 118*50*1400.

This is essential, since the number of epochs (referred to as samples in NN terminology) is so low that the NN cannot give good results on the data. Previously I used abs(fft) and SVD to convert the voltage data to the frequency domain and to shrink the data to a size more convenient for the NN. However, after examining different preprocessing methods and different frequency bands in the signal, I came to the idea that features extracted from the time-frequency domain might give better results than the voltage and frequency features, so I started working with the DWT. I can use the dwt function, but it seems to treat the signal as 2-D, since dwt returns a 3-D matrix with 3 as the size of the third dimension. In any case, once I have cA, cD, cH, and cV, I have no idea how to use them for classification. I certainly cannot just pass half of my target matrix to the NN.

Thanx

Adham

"Adham " <adham.atyabi@flinders.edu.au> wrote in message <i463p4$imf$1@fred.mathworks.com>...


Hi Adham, are you sure that you don't want to process the data as 1-D time series (using dwt, not dwt2) at the 118 electrode sites? If I understand your description correctly, you have a time series of voltage recordings at each electrode (118 of them), each of length 2500 samples, and each of these 2500-sample series is recorded in one of 280 stimulus conditions.

Further, if I correctly understand you, you are saying that since you only have one recording per stimulus condition, there's not enough data to infer anything meaningful about the response to that one stimulus type.

If what I have stated is accurate, I would encourage you to keep the time series lengths at 2500 samples and think about a couple things:

1.) Perhaps there is a way to naturally group the responses based on the stimulus used that would justify analyzing a subset as representative of one response type, for example speech stimuli vs. non-speech auditory, etc. This is a kind of supervised learning approach since you know under what conditions the data were obtained.

2.) Use the results of 1d wavelet, or 1d wavelet packet analysis on the responses to classify the responses themselves. This is a kind of unsupervised approach in the sense that you let the data classify themselves based on their wavelet or wavelet packet decompositions.
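As one sketch of the second approach (subband energies are just one common feature choice; the level and wavelet here are assumptions, not a prescription):

```matlab
x = randn(2500,1);                        % stand-in for one electrode's recording
[C,L] = wavedec(x,3,'db7');               % level-3 DWT decomposition
feat = zeros(1,4);
feat(1) = sum(appcoef(C,L,'db7',3).^2);   % energy in cA3
for k = 1:3
    d = detcoef(C,L,k);                   % level-k detail coefficients
    feat(k+1) = sum(d.^2);                % energy in cD1..cD3
end
feat = feat/sum(feat);                    % relative subband energies as features
```

Each response then reduces to a short, fixed-length feature vector that can be fed to a classifier or clustered.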

I would be hesitant to subject the data to a data reduction method that makes you lose the essential time-dependent voltage nature of your data. I would encourage you to think of a way to analyze the raw time data with some time-frequency (scale) technique.

In any event, I don't think you want to use dwt2(). dwt2 is really for image data. I don't see how the vertical, horizontal, and diagonal details are going to help you in this case.

Wayne

"Wayne King" <wmkingty@gmail.com> wrote in message <i465l7$iap$1@fred.mathworks.com>...


Dear Wayne

Thanks for the detailed info. Just to clarify: the important thing for me is to keep the size of the third dimension untouched. So even if I pass the 118*2500*280 data to dwt, I would end up with cA and cD with a third-dimension size of approximately 124. I could also unfold my data into a 2-D matrix of 295000*280, in which case I need the output of dwt to have size 280 in the second dimension. Is there something I should do with cA and cD to generate the representative time-frequency signal? Is there something I am missing here? I know that cA and cD represent the low and high bands, but how should I combine them at the end, if I actually have to do something like that? By the way, the other issue is: what if I use swt? I have not tried it yet, but it looks like it does not change the size of the 2nd or 3rd dimension.
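For example, if I understand swt correctly, a quick check would be something like this (swt needs the signal length to be divisible by 2^level; 1400/2^3 = 175):

```matlab
x = randn(1400,1);        % stand-in for one 1400-sample series
swc = swt(x,3,'db7');     % 4-by-1400: rows 1-3 are cD1..cD3, row 4 is cA3
size(swc)                 % every subband keeps the original length
```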

Adham

"Adham " <adham.atyabi@flinders.edu.au> wrote in message <i467kr$nsc$1@fred.mathworks.com>...

> "Wayne King" <wmkingty@gmail.com> wrote in message <i465l7$iap$1@fred.mathworks.com>...

> > "Adham " <adham.atyabi@flinders.edu.au> wrote in message <i463p4$imf$1@fred.mathworks.com>...

> > > "Wayne King" <wmkingty@gmail.com> wrote in message <i45uvb$9j0$1@fred.mathworks.com>...

> > > > "Adham " <adham.atyabi@flinders.edu.au> wrote in message <i452kg$994$1@fred.mathworks.com>...

> > > > > hi, I am trying to use dwt2 to extract features from a signal.

> > > > > assuming that my signal is d1.data with the size of 118*50*1400. I should mention that the signal is already passed to a function to calculate common average reference.

> > > > >

> > > > > P=unfold(d1.data,ndims(d1.data))';

> > > > > [P]=SVD_Func(P); % calculates 99% SVD

> > > > > [xdim,ydim]=size(P); % xdim=184 and ydim=1400

> > > > > dwtmode('zpd');

> > > > > [cA,cH,cV,cD]=dwt2(P,'db10');

> > > > > [cAx,cAy]=size(cA); %cAx=101,cAy=709

> > > > >

> > > > > cH, cV, cD also have the same size as cA.

> > > > >

> > > > > the problem is that I want to use the results of dwt2 function as input data for a neural network function to classify them. therefor, since the size of my Target matrix is T= 2*1400 , meaning two classes with 1400 samples, it is essential to have an input matrix with second dimension size of 1400.

> > > > > I tried the inverse dwt (P2=idwt2(cA,cH,cV,cD,'db10');) but my understanding is that it will return the original P which means it will ignore the feature extraction part which was suppose to be done with dwt.

> > > > >

> > > > > I appreciate it if someone could give me some suggestions about this.

> > > > > Adham

> > > >

> > > > Hi Adham, you're correct that just taking the inverse 2-D dwt will simply return P. Typically some thresholding is done on the dwt coefficients before taking the inverse to preserve the important (in some sense) features of the signal, while removing or attenuating unimportant ones.

> > > >

> > > > Can you explain a little more about your data? Are your data really just 1-D signals that you have collected in a 3-D matrix? For example, if you acquired 1400 sample signals at 118x50 locations spatial locations, you might just be interested in the (118*50) 1-D signals. Can you provide the details about your data and what you are trying to classify?

> > > >

> > > > Wayne

> > >

> > > Dear Wayne

> > >

> > > The signal is coming from EEG with 1000hz fs. Generally speaking, I have 118 channels, which has recordings of 2.5 seconds (2500 samples) with 280 stimuli (which we call them epochs). So the data is 118*2500*280. later on, the data is resampled and subepoched with 100hz fs which gaved me a matrix of 118*50*1400.

> > > This is essential since the number of epochs ( they refer to them as samples in NN) is so low that NN can not give any good results on the data. in the next stage, I used to use abs(fft) and SVD so I can convert the voltage data to frequency and also shrink the data in a smaller size which is more convenient for the NN. How ever, after looking different preprocessing methods and also after looking at different bands in the signal, I reached to this idea that it might be the case that using extracted features from time-frequency domain give me a better results in compare to the voltage and frequency features. so I start working with the dwt. I can use dwt function but it looks like the signal is 2d since dwt returns a 3d matrix which has 3 as the size of the 3rd dimention. how ever, after getting to the point that I have cA, cD, cH, and cV, I have no idea about how to use them for

> > > classification. I can not just pass half of my target matrix to the NN for sure.

> > >

> > > Thanx

> > > Adham

> >

> > Hi Adham, are you sure that you don't want to process the data as 1D time series (using dwt and not dwt2) at the 118 electrode sites? If I understand your description correctly, you have time series of voltage recordings at each electrode (118 of them) of length 2500 samples. Each of these 2500 samples is recorded in one of 280 stimulus conditions.

> >

> > Further, if I correctly understand you, you are saying that since you only have one recording per stimulus condition, there's not enough data to infer anything meaningful about the response to that one stimulus type.

> >

> > If what I have stated is accurate, I would encourage you to keep the time series lengths at 2500 samples and think about a couple things:

> >

> > 1.) Perhaps there is a way to naturally group the responses based on the stimulus used that would justify analyzing a subset as representative of one response type, for example speech stimuli vs. non-speech auditory, etc. This is a kind of supervised learning approach since you know under what conditions the data were obtained.

> > 2.) Use the results of 1d wavelet, or 1d wavelet packet analysis on the responses to classify the responses themselves. This is a kind of unsupervised approach in the sense that you let the data classify themselves based on their wavelet or wavelet packet decompositions.

> >

> > I would be hesitant to subject the data to a data reduction method that makes you lose the essential time-dependent voltage nature of your data. I would encourage you to think of a way to analyze the raw time data with some time-frequency (scale) technique.

> > In any event, I don't think you want to use dwt2(). dwt2 is really for image data. I don't see how the vertical, horizontal, and diagonal details are going to help you in this case.

> >

> > Wayne

>

> Dear Wayne

>

> thanks for the detail info. just to clarify, the important thing for me is to keep the size of the third dimension untouched. So, even if I pass the 118*2500*280 to dwt, I would be ended up with a cA and cD with 3rd dimension size of approximately 124. I can also unfold my data in a way that it become a 2d matrix of 295000*280 and in this case I need the out put of dwt to have the size of 280 on the second dimension. Is there something that I should do with cA and cD to generate the representative time-frequency signal? is there something that I am missing here? I know that cA and cD are representing low and high bands, but how should I mix them at the end if I actually have to do something like that. by the way, the other issue is what if I use swt? I still did not try it but it looks like it does not change the 2rd or 3rd dimension size.

>

> Adham

Hi Adham, I would extract the 1D time series from the matrix and feed those 1D time series to dwt, or wpt (see my comment in your other post about the wavelet packet transform offering better frequency resolution than the dwt).

So for example

X = randn(10,10,2500);

ts = squeeze(X(1,1,:));

% obtain the DWT of ts down to level three

[C,L] = wavedec(ts,3,'sym4');

Also, if you have access to the R2010b pre-release, there is a new feature in the Wavelet Toolbox for measuring the wavelet coherence between pairs of time series.

Wayne


Dear Wayne

Sorry for asking this again. First, just for confirmation, do you mean something like this:

for i=1:size(d1.data,1)

for j=1:size(d1.data,2)

ts=squeeze(d1.data(i,j,:));

[c{i,j},l{i,j}]=wavedec(ts,3,'db7');

% the literature states that db7-db10 are the best for EEG data

end

end

Now, if d1.data is 118*50*1400, each c{i,j} would be 1437*1, which means that at the end I would have a matrix of 118*50*1437. However, I am not sure how I should cut the extra 37 coefficients, since my target vector has length 1400.

I know that if I used db1 instead of db7, db10, or db11, I would get exactly the right size, but as far as I know, the best results on EEG data are achieved with db7 to db11.

Adham

"Adham " <adham.atyabi@flinders.edu.au> wrote in message <i47mtv$rp9$1@fred.mathworks.com>...


Hi Adham, I think you have a problem because you are attempting to use cell syntax

[c{i,j},l{i,j}]=wavedec(ts,3,'db7');

for something that is not a cell array.

If you use wavedec on a time series of 2500 samples with the db7 wavelet, the default boundary extension gives you a vector of length 2536.

There are known boundary effects with the DWT and you need to extend the time series to account for these effects.
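One option, if you need the coefficient count to match the signal length exactly, is periodized extension (a sketch only; note that dwtmode sets a global state, so restore it afterwards):

```matlab
dwtmode('per');               % periodized extension: no extra coefficients
ts = randn(1400,1);
[C,L] = wavedec(ts,3,'db7');
length(C)                     % 1400 = 175 (cA3) + 175 + 350 + 700
dwtmode('sym');               % restore the default extension mode
```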

so for example:

ts = randn(2500,1);

[C,L] = wavedec(ts,3,'db7');

length(C)

The L vector tells you how many wavelet coefficients you have at each level.

L(1) is the number of approximation coefficients, L(2) is the number of level 3 wavelet coefficients, etc.

Use detcoef() to extract the coefficients.
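For example, a minimal sketch of pulling the individual subbands out of the wavedec output:

```matlab
ts = randn(2500,1);
[C,L] = wavedec(ts,3,'db7');
cA3 = appcoef(C,L,'db7',3);   % the L(1) approximation coefficients
cD3 = detcoef(C,L,3);         % the L(2) level-3 detail coefficients
cD2 = detcoef(C,L,2);
cD1 = detcoef(C,L,1);         % finest-scale details
```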

Wayne


Dear Wayne

Thanks for your help and advice. I wrote a function following your advice, but my results are still very poor. I am not sure what is wrong here. Can you please advise me on this?

%size(d1.data)= 60(channel)*50(sample)*1400(epoch)

d1.data=permute(d1.data,[3 1 2]);

[c1,l1,h1]=DWT_Func(d1,'db1',3);

h1=permute(h1,[2 3 1]);

P=unfold(h1,ndims(h1))';

[T]=Creat_Target(d1);% this function generates the target vector for classification

% Size(T)= 4*1400 which means 4 classes for 1400 epochs

k=10;

% 10 fold cross validation

for i=1:10

[traindex,testindex,valindex,resindex]=Divide_kfold(T,P,k,Origin_Dim);

% here the idea is to generate separate folds based on trials, to guarantee that a trial is counted for either training or testing, never both

for j=1:10

Val.P=P(:,cell2mat(valindex(j)));% validation

Val.T=T(:,cell2mat(valindex(j)));

Tr.P=P(:,cell2mat(traindex(j)));%training

Tr.T=T(:,cell2mat(traindex(j)));

Ts.P=P(:,cell2mat(testindex(j)));%testing

Ts.T=T(:,cell2mat(testindex(j)));

net = newff(P,T,40,{'tansig' 'logsig'},'trainscg','learngdm','msereg');

net.inputs{1}.processFcns = {};

net.outputs{end}.processFcns = {};

net.trainParam.showWindow = false;

net = init(net);

[net,trainrecord] = train(net,Tr.P,Tr.T,[],[],Val);

Y = sim(net,Ts.P);

cm = full(compet(Y)*compet(Ts.T)');

acc{i,j} = sum(diag(cm)) / sum(cm(:));

bm{i,j}=bookmaker(cm);

data1.cm{i,j}=cm;

data1.acc{i,j}=acc{i,j};% accuracy

data1.bm{i,j}=bm{i,j};% book maker

data1.trainrec{i,j}=trainrecord;

data1.Y{i,j}=Y;

end

end

function [c,l,h]=DWT_Func(d1,mother_wavelet,level)

for i=1:size(d1.data,1)

for j=1:size(d1.data,2)

ts=squeeze(d1.data(i,j,:));

[cx,lx]=wavedec(ts,level,mother_wavelet);

[H]=detcoef(cx,lx,level);

c(i,j,:)=cx(:);

l(i,j,:)=lx(:);

h(i,j,:)=H(:);

end

end

end

Thanks

Adham

"Adham " <adham.atyabi@flinders.edu.au> wrote in message <i4onrg$ro$1@fred.mathworks.com>...

>

> > > > X = randn(10,10,2500);

> > > > ts = squeeze(X(1,1,:));

> > > > % obtain the DWT of ts down to level three

> > > > [C,L] = wavedec(ts,3,'sym4');

> > > >

> > > >

> > > > Also, if you have access to the R2010b pre-release, there is a new feature in the Wavelet Toolbox for measuring the wavelet coherence between pairs of time series.

> > > >

> > > > Wayne

> > >

> > > Dear Wayne

> > >

> > > Sorry for asking this again. First, just for confirmation, do you mean something like this:

> > >

> > > for i=1:size(d1.data,1)

> > > for j=1:size(d1.data,2)

> > > ts=squeeze(d1.data(i,j,:));

> > > [c{i,j},l{i,j}]=wavedec(ts,3,'db7');

> > > % it is stated in the literature that db7-db10 are the best for EEG data

> > > end

> > > end

> > > now, if d1.data is 118*50*1400, each c{i,j} would be 1437*1.

> > > which means that at the end I would have a matrix of 118*50*1437. However, I am not sure how I should cut the extra 37 coefficients, since my target vector has length 1400.

> > > I know that if I use db1 instead of db7, db10, or db11, I would get exactly the right size, but as far as I know, the best results on EEG data are achieved with db7 to db11.

> > >

> > > Adham

> >

> > Hi Adham, I think you have a problem because you are attempting to use cell syntax

> >

> > [c{i,j},l{i,j}]=wavedec(ts,3,'db7');

> >

> > for something that is not a cell array.

> >

> > If you use wavedec on a time series with 2500 samples with the db7 wavelet, then with the default symmetric boundary extension you obtain a vector of length 2536.

> >

> > There are known boundary effects with the DWT and you need to extend the time series to account for these effects.

> >

> > so for example:

> >

> > ts = randn(2500,1);

> > [C,L] = wavedec(ts,3,'db7');

> > length(C)

> >

> > The L vector tells you how many wavelet coefficients you have at each level.

> >

> > L(1) is the number of approximation coefficients, L(2) is the number of level 3 wavelet coefficients, etc.

> >

> > Use detcoef() to extract the coefficients.

> >

> > Wayne

>

> Dear Wayne

>

> Thanks for your help and advice. I wrote a function following your advice, but my results are still very poor. I am not sure what is wrong here. Can you please advise me on this?

>

> %size(d1.data)= 60(channel)*50(sample)*1400(epoch)

>

>

> d1.data=permute(d1.data,[3 1 2]);

> [c1,l1,h1]=DWT_Func(d1,'db1',3);

> [code snipped; identical to the listing above]

>

>

> Thanks

> Adham

Hi Adham, can you send me a small sample of your data, say 10x10x2500? Just save it as a .mat file and mail it to me. Try to include a short description of what you are trying to do, in terms of the features you are trying to enhance or detect in your signal.

Thanks,

Wayne
