PREFACE

(This is an offshoot from the following newsgroup discussion: https://se.mathworks.com/matlabcentral/newsreader/view_thread/345189. The discussion below covers only part of the items discussed in the NEWSGROUP thread. (Greg, I will get back to you with improved code as soon as I can, probably sometime next week.) I am posting on Answers here so that I can attach figures.)

INTRODUCTION

Dear all,

With this question I would like to start an open discussion on the feedback delays (FD) used in NARXNET and NARNET. Normally a post here contains a question; instead, I would like to start off with the following statement: it is not possible to predict wavelengths longer than the maximum index value of the FD in NARXNET and NARNET.
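For reference, a minimal sketch (my own illustration, not from the thread) of the defaults the statement refers to: NARNET with feedback delays 1:2 and 10 hidden nodes.

```matlab
% Default NARNET configuration: feedback delays 1:2, 10 hidden nodes
net = narnet(1:2, 10);   % equivalent to calling narnet() with no arguments
view(net)                % inspect the open-loop architecture
```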

DETAILS

If you think I’m wrong, please share your thoughts. My reasoning is as follows:

Let’s start with a periodogram of the simplenar dataset:

We see that the most significant wavelengths have a period length of 3-4 samples. For an open-loop network, increasing the default FD from 1:2 to 1:3, 1:4 or 1:5 allows the most significant wavelengths to be included and reduces NMSEo (the normalised mean square error for the open-loop network). (The error is investigated using the code written by Greg; the code can be found via the NEWSGROUP link in the preface above.)
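The FD comparison described above can be sketched roughly as follows. This is my own loose reconstruction, not a copy of Greg's NEWSGROUP code; the loop bounds, the rng seed and the NMSEo normalisation (population variance of the target) are my assumptions:

```matlab
% Compare open-loop NMSEo on simplenar_dataset for growing FD ranges
T = simplenar_dataset;
for maxFD = 2:5
    net = narnet(1:maxFD, 10);               % default 10 hidden nodes
    [Xs,Xi,Ai,Ts] = preparets(net,{},{},T);  % shift data for the delays
    rng(0)                                   % reproducible initial weights
    net = train(net,Xs,Ts,Xi,Ai);
    Ys  = net(Xs,Xi,Ai);
    e   = cell2mat(gsubtract(Ts,Ys));        % open-loop errors
    t   = cell2mat(Ts);
    fprintf('FD = 1:%d, NMSEo = %.4g\n', maxFD, mean(e.^2)/var(t,1))
end
```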

The figure below shows the periodogram of the solar dataset; this dataset is slightly more complicated than the simplenar dataset.

I don’t see how it is possible to obtain a small error without using a rather large set of FDs. So far in my trial-and-error process, I get a relatively small error by using an FD with 147 values and a maximum index value of 152 (please let me know if you would like me to submit these values). The FD used for the solar dataset is only one example; the significant point is that the values cover the most significant wavelengths. I do not see how to reduce NMSEo using a significantly smaller number of values. This discussion could also be about removing outliers, model selection, Ntrials etc., but I think we can discuss FD separately: first find the FD that gives the smallest NMSEo (altering only the FD), and then resort to other measures for further decreasing NMSEo. The code for generating the two plots above:

T = solar_dataset;                 % or simplenar_dataset
t = cell2mat(T);
[pxx,f] = periodogram(t,[],[],1);  % normalised sampling frequency fs = 1
plot(1./f,pxx)                     % frequency converted to wavelength in samples
% xlim([2 6])                      % simplenar
xlim([66 250])                     % solar
xlabel('Wavelength (number of samples)')
ylabel('Power spectral density')
title('Periodogram of the Solar Dataset')
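One hedged way to turn the periodogram/autocorrelation picture into a concrete set of delays (the maximum lag, the 95% significance bound and the thresholding are my assumptions, not taken from the thread):

```matlab
% Candidate feedback delays from significant autocorrelation lags
T  = solar_dataset;
t  = zscore(cell2mat(T));              % standardize the target series
N  = numel(t);
[acf,lags] = xcorr(t, 200, 'coeff');   % autocorrelation up to lag 200
keep = lags > 0;                       % positive lags only
acf  = acf(keep);
lags = lags(keep);
sig  = 1.96/sqrt(N);                   % approximate 95% significance bound
FD   = lags(abs(acf) > sig);           % candidate feedback delays
fprintf('%d candidate delays, maximum index %d\n', numel(FD), max(FD))
```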

DISCUSSION AND SUGGESTIONS FOR FUTURE WORK

The figure below is not ideal for what I would like to describe, but please see it as a representation of the FD used to predict the value at x+1 (image source: http://www.obitko.com/tutorials/neural-network-prediction/neural-network-training.html).

If the weights for the x+1 value are prevented from having knowledge of the most significant wavelength, how can those weights then be used to make predictions that include the most significant wavelengths? If it is necessary to use a small number of FD then, sure, it might be possible to predict wavelengths that are shorter than the most significant wavelength, but this is not what I think people generally want to do.

The other default option for NARXNET and NARNET is the number of hidden nodes (10). That default may well apply to a wide range of problems, but the default FD (1:2) probably has very limited applicability.

I think a good idea when designing a measurement system is to choose a sampling frequency with some safety margin, i.e. somewhat higher than what is needed to characterise the smallest wavelengths of interest. It may well be that the smallest wavelengths that can be characterised at that sampling frequency (in this example, smaller than the wavelengths of interest) contain a large portion of noise, since in my experience the sampling frequency is often chosen close to the performance limit of the measurement system.

The first thing one might think of is that it would be sound to simply decrease the sampling rate of the original measurement, perhaps even retrieving the reduced-rate signal from a filtered signal. I am fine with this approach, but in a first iteration I would start with the original signal. One might be lucky enough to obtain a NN model with a small error from the original signal, i.e. from a signal that has not endured any information-reducing procedures.
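A sketch of the reduced-sampling-rate alternative mentioned above (the decimation factor of 4 is an arbitrary example; decimate applies an anti-aliasing lowpass filter before downsampling):

```matlab
% Filtered, downsampled copy of the solar series for a second iteration
t  = cell2mat(solar_dataset);
r  = 4;                   % example decimation factor
td = decimate(t, r);      % lowpass filter + downsample by r
Td = num2cell(td);        % back to cell form for narnet/preparets
```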

SUMMARY

This is an attempt to begin a discussion on whether it is possible to predict wavelengths longer than the maximum index value of the FD in NARXNET and NARNET. Analysis of different datasets suggests that, in order to obtain a small NMSEo, an FD that spans the most significant wavelengths of the data is required.

Greg Heath on 15 May 2016

AHA!!!

You have just discovered the other side of the time/frequency conspiracy.

From most of my time-series tutorials and examples one could conclude the following master plan:

1. Use zscore to standardize the data

2. Detect and modify outliers

3. Find the significant feedback lags from the peaks in the target autocorrelation function.

OH! ... Wait a minute! ... The autocorrelation function is just the fft of the Power Spectrum!

That could mean that by looking at the target power spectrum I could determine the significant lags!

Definitely! However, if you had to pick just one of the two methods, which would you pick?

Which one is easier?

Which one yields the most useful info?
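The equivalence Greg points to is the Wiener–Khinchin relation: the autocorrelation is the (inverse) Fourier transform of the power spectrum. A minimal sketch of it (my own illustration, using the circular autocorrelation of the standardized series):

```matlab
% Autocorrelation recovered from the power spectrum via the IFFT
t = zscore(cell2mat(simplenar_dataset));
N = numel(t);
P = abs(fft(t)).^2;            % (unnormalised) power spectrum
r = real(ifft(P))/N;           % circular autocorrelation
r = r/r(1);                    % normalise so that lag 0 equals 1
stem(0:20, r(1:21))            % peaks suggest significant feedback lags
```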

Staffan,

Thanks for the enlightening post.

Greg
