> "Gabriele " <gabman82@gmail.comwrote in message ...
> <lelst5$ikq$1@newscl01ah.mathworks.com>.
> Hi,
>
> I have a set of N time series each being the 'condition indicator' (derived
> from vibration signals) of a component (bearing) until failure.
> All components are assumed identical, but, nevertheless, they fail at
> different times (due to the probabilistic nature of the failure process).
>
> The idea is to develop a (nn) prognostic model based on this historical
> data that would be able, given a new time series (from a bearing in
> operation that has not yet failed), to predict future values of the condition
> indicator of the bearing.
How are the condition indicator values obtained from the vibrations?
A failing bearing indicator should show signs of a different Fourier spectrum.
Are there visually identifiable regions where the indicators are predominantly
monotonic vs. constant-amplitude or decaying oscillations?
Typically, what are the different frequency ranges in each region?
Is there any visual difference in plot characteristics just before failure and
the remainder of the signal?
> The condition indicator time series is bounded between 0 and 1. It goes
> to 0 as time goes to Inf.
What does 0 mean? What does 1 mean?
Why do they go to 0 as t -> Inf? Are they 1 at t = 0?
What are typical values at failure?
What criterion do you use for predicting a failure?
What is the minimum amount of time that you want before predicting an
imminent failure?
What are the similarities in subsets of the last M (M <= 750) timesteps of the
8 examples below?
Means? Variances? Correlation coefficients? Significant (95% confidence)
autocorrelation lags? Characteristic frequency intervals?
> I have encountered the following problems when starting to build up my
> nn model:
>
> 1. EXOGENOUS INPUT
> I have MATLAB 2009. I don't have exogenous inputs, but the only nn
> available for this kind of problem in the 2009 release of the software is
> the narx network (which requires the external input). I think I can get
> around this problem by taking the time series of the condition indicator
> of the operating bearing itself and treating it as an exogenous input. I
> should be able to do this also because the narx function is
> y(t) = f(y(t-1),...,u(t-1),...), and thus doesn't require the current
> input value to estimate the output.
I don't agree. The current narx function requires BOTH an input AND
a feedback delay from the output. I assume this is also true for the 2009
version.
I think your best bet is timedelaynet or its predecessor newfftd.
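For concreteness, a minimal sketch of what that could look like (assuming one training series, ci1 from your list below; note that timedelaynet and preparets come from later toolbox releases, so on a 2009 release substitute newfftd; the delay range 1:10 and the 10 hidden neurons are placeholders you would have to tune):

```matlab
% Sketch only: focused time-delay network predicting the condition
% indicator from its own past values.
X = con2seq(ci1);                 % 1xN double -> cell array of timesteps
net = timedelaynet(1:10, 10);     % input delays 1..10, 10 hidden neurons
[Xs, Xi, Ai, Ts] = preparets(net, X, X);  % target = the series itself
net = train(net, Xs, Ts, Xi, Ai);
Y = net(Xs, Xi, Ai);              % one-step-ahead predictions
```

With delays 1:10 and T = X, the net learns y(t) = f(x(t-1),...,x(t-10)), i.e. one-step-ahead prediction with no exogenous input needed.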
> However, please advise me on better ways.
Instead of a time series for predicting future values, have you considered
a classifier with target values {0,1} and a failure criterion of output <= a
specified threshold? In this scenario, you would have to find a window
length that only predicts failure near the end of the signal.
You could also consider spectral characteristics using windowed ffts.
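A rough sketch of that classifier setup, under assumed choices (the window length W, the Hann window from the Signal Processing Toolbox, and the labeling rule that marks windows ending within the last W samples as "near failure" are all placeholders to experiment with):

```matlab
% Sketch only: windowed-FFT features plus {0,1} near-failure labels
% for one training series.
W    = 64;                        % window length (placeholder, to tune)
ci   = ci1;                       % one training series
N    = numel(ci);
nWin = N - W + 1;
feat = zeros(W/2, nWin);          % one-sided spectrum per window
lab  = zeros(1, nWin);
for k = 1:nWin
    seg       = ci(k:k+W-1) .* hann(W)';   % Hann-windowed segment
    S         = abs(fft(seg));
    feat(:,k) = S(1:W/2)';                 % keep one-sided magnitudes
    lab(k)    = (k+W-1 > N - W);           % 1 if window ends near failure
end
```

The columns of feat and lab could then be fed to any classifier (e.g. a patternnet-style nn) instead of doing time-series regression.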
> 2. TRAINING SET COMPOSED OF N TIME SERIES
> I have no clue how to train the network on multiple time series (I
> cannot treat them as multi-input, since when the model is deployed
> only one time series will be available (from the operating bearing)).
Train in a loop where the training of series i+1 begins with the weights
obtained from series i. (Do not call the function configure between series;
it would reinitialize the weights!)
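A minimal sketch of that loop (assuming timedelaynet as above; on the first pass train configures the net from the data automatically, and on later passes it continues from the weights left by the previous series):

```matlab
% Sketch only: warm-start training across the 8 run-to-failure series.
% configure is NOT called between series, so weights carry over.
series = {ci1, ci2, ci3, ci4, ci5, ci6, ci7, ci8};
net = timedelaynet(1:10, 10);     % placeholder delays / hidden size
for i = 1:numel(series)
    X = con2seq(series{i});
    [Xs, Xi, Ai, Ts] = preparets(net, X, X);
    net = train(net, Xs, Ts, Xi, Ai);   % continues from current weights
end
```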
> 3. MULTISTEP AHEAD PREDICTION
> 3.a My approach to this would be to create a for loop and use the
> estimated values of the output as inputs for the next iteration. I have
> seen your post about smarter ways to do so without the for loop,
> but if this gives the same results, then I am happy with the for
> loop as well.
> Please advise me also on this.
Closed-loop predictions are a nice idea. However, most of the examples
I have seen eventually fall apart from accumulated prediction errors.
I would stick with timedelaynet (unless, of course, it proves inadequate).
> 3.b Moreover, I would like to ask you if there is a smarter way to
> approach this multistep prediction. An idea could be to use a
> feedforward network with multiple outputs (where each output
> represents an additional time-step prediction) embedded in a for loop.
Again, to avoid accumulated errors, it may be best to just predict a certain
time interval ahead without feedback. You will need to look at the
autocorrelation significant lags to get a feel for how long you can predict
ahead without feedback.
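One way to get at those significant lags, sketched with xcorr from the Signal Processing Toolbox and the usual +/- 1.96/sqrt(N) 95% bounds (treating max(sigLags) as a feedback-free prediction horizon is only a rule of thumb, not a guarantee):

```matlab
% Sketch only: largest lag at which the sample autocorrelation is
% significant at the 95% level.
x  = ci1 - mean(ci1);             % remove the mean first
N  = numel(x);
r  = xcorr(x, 'coeff');           % normalized autocorrelation, 2N-1 values
r  = r(N:end);                    % keep nonnegative lags; r(1) is lag 0
conf = 1.96 / sqrt(N);            % 95% significance bound
sigLags  = find(abs(r) > conf) - 1;  % lags above the bound
maxAhead = max(sigLags);          % rough feedback-free horizon
```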
> 3.c Point 3.b raises another question: are a narx network and a
> feedforward network embedded in a for loop the same (in the sense
> that they will turn out the same after training, or at least return
> comparable results)?
Not sure why you are fixated on a narxnet. You only have one signal.
Therefore a timedelaynet should suffice.
However, to answer your question: yes, you can construct a loop
to allow newff to simulate certain time-series results.
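For example, a bare-bones version of that loop, assuming net is a feedforward net already trained to map the last d values of the series onto the next one (d and the horizon nAhead are placeholders):

```matlab
% Sketch only: iterated (closed-loop) multistep prediction with a plain
% for loop. Predictions are fed back as inputs, so errors can accumulate.
d      = 10;                       % number of delayed inputs (placeholder)
hist   = ci_operating_component(end-d+1:end);  % last d observed values
nAhead = 300;                      % prediction horizon (placeholder)
yhat   = zeros(1, nAhead);
for k = 1:nAhead
    yhat(k) = sim(net, hist');     % feedforward net with d inputs
    hist    = [hist(2:end), yhat(k)];  % slide window, append prediction
end
```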
> Answering my questions would be very kind of you and much
> appreciated. Providing me with the code for doing so, given the
> following inputs, would be just fantastic!
I give you permission, without conditions, to use any posted code of
mine that you find helpful (well, I guess you could mention my name).
> Time series for training
>
> ci1 = 1x800 double
> ci2 = 1x2200 double
> ci3 = 1x1400 double
> ci4 = 1x3000 double
> ci5 = 1x750 double
> ci6 = 1x2100 double
> ci7 = 1x2800 double
> ci8 = 1x1700 double
>
> Time series for prediction
>
> ci_operating_component = 1x700 double
>
> Problem: predict ci_operating_component(701:10000)
> Thank you in advance,
> Gabriele
You can also treat it as
x(t-d1:t-1) predicting x(t:t+d2).
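Sketched, with placeholder window sizes d1 and d2 and a single training series (newff here uses the newer P,T signature available in 2009 releases):

```matlab
% Sketch only: direct multistep mapping. One net maps a d1-sample
% history onto the next d2 samples, with no feedback at all.
d1 = 50;  d2 = 25;                 % placeholder window sizes
ci = ci1;
n  = numel(ci) - d1 - d2 + 1;
P  = zeros(d1, n);  T = zeros(d2, n);
for k = 1:n
    P(:,k) = ci(k : k+d1-1)';          % history window
    T(:,k) = ci(k+d1 : k+d1+d2-1)';    % the d2 values to predict
end
net = newff(P, T, 10);             % one hidden layer, 10 neurons (placeholder)
net = train(net, P, T);
```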
Impossible to say more without seeing data, code, error messages, etc.
Hope this helps.
Greg
P.S. Sorry for the unavoidably delayed reply.
