Is it possible for a forecast to produce negative values when using a neural network? I'm experimenting with daily precipitation data ranging from 0.0 to 97.1. Negative forecast values appear when there is a big gap/difference between one day's record and the next day's. Could the learning rate and momentum factors be causing the negative results, or is it something else?
Why not? Of course this is possible. Computers don't understand what you are doing. They have no concept of precipitation, or that a negative amount of rain is impossible.
For example, suppose you observed various amounts of rain in an always-increasing sequence: 1 inch today, 2 inches tomorrow, 3 inches the next day. Now suppose you fit a model to that precipitation and tried to predict the amount on day 4. Surely any reasonable model will predict 4 inches on day 4, and 5 inches on day 5.
But suppose we had that same sequence going in the opposite direction: 3 inches, then 2, then 1. So, 3 days of a nice linear progression. Is there ANY reason in the world your computer modeling tool will not predict ZERO rain on day 4, and a negative amount, -1 inches, on day 5? And if it is reasonable to predict 0 inches on day 4, then why not -1 inches on day 5?
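You don't even need a neural network to see this. A minimal sketch, using a plain least-squares line fitted by hand to the made-up 3, 2, 1 sequence above:

```python
# Fit a straight line (ordinary least squares, computed by hand)
# to three days of steadily falling rainfall.
days = [1, 2, 3]
rain = [3.0, 2.0, 1.0]  # inches

n = len(days)
mean_x = sum(days) / n
mean_y = sum(rain) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, rain)) / \
        sum((x - mean_x) ** 2 for x in days)
intercept = mean_y - slope * mean_x

print(slope * 4 + intercept)  # day 4: 0.0
print(slope * 5 + intercept)  # day 5: -1.0
```

The line has no idea these numbers are rainfall, so it cheerfully extrapolates straight through zero. A neural network trained on the same pattern can do exactly the same thing.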
A model is just a model. If you use it to predict behavior, it looks at what it has seen and extrapolates. -1 is just as far below 0 as 4 is above 3. I'm pretty sure I learned that in 4th grade (or so; I have no idea when, though).
So exactly where in your model does your computer understand that numbers can never go below zero? In my first example, it seemed perfectly reasonable to assume that an increasing sequence will continue to increase ABOVE what we had seen ever in our data. In the second example, it seems perfectly reasonable to want to predict 0, then -1 inches on days 4 and 5. Remember that numbers are just numbers to a computer model. They are not inches of rain. Just numbers, no more, no less.
The point of all this is that a model is only as good as the understanding of the process that you build into it. And computers have no real intelligence (well, not yet, except on TV or in the movies.) They do as they are told to do.
So you might consider that precipitation is a process that calls for a transformation before you build that model. Thus, work on a log scale. For example, suppose you are taking data from the Amazon rain forest, and one month you see 100 inches of precipitation, the next month 101 inches. Is the difference between those two months as significant as the difference between 0 inches and 1 inch of rain in Death Valley? One inch of rain in Death Valley is about what you might expect in a whole year. (Just a guess there.)
What I am saying is that rainfall is something best viewed on a log scale, and that is the scale on which you should be modeling it, too. Now everything works in terms of proportional differences.
So work with log(precipitation), or log10, which is a bit easier to think in. Then when you want to predict an actual amount, raise 10 to that power. Voila! The result is always non-negative, and the problem goes away.
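In code, the round trip looks like this. A minimal sketch with made-up rainfall values and a made-up network output; the point is only the transform and the back-transform:

```python
import math

# Hypothetical daily totals (inches); model log10 of the amounts, not the raw inches.
rain = [0.5, 1.2, 97.1]
log_rain = [math.log10(r) for r in rain]  # feed these to the network instead

# Suppose the trained network outputs -0.3 in log space (made-up value):
pred_log = -0.3
pred_inches = 10 ** pred_log  # back-transform; always strictly positive
```

However negative the network's output in log space, `10 ** pred_log` is still a positive number of inches.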
Finally, you will need to recognize that log10(0) is a problem. So either decide that a day with no rain at all is just a day with no data, or assign some tiny amount of rain to that day: whatever the smallest amount of rain is that you can measure. So if your rain gauge reads in increments of 0.01 inch, then a day with no rain at all by your measurement might get 0.005 inches assigned for that day.
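The floor-for-dry-days idea can be sketched like so (the 0.01-inch gauge resolution is the assumption from the text; the daily values are made up):

```python
import math

GAUGE_RESOLUTION = 0.01               # inches; assumed smallest readable increment
NO_RAIN_FLOOR = GAUGE_RESOLUTION / 2  # 0.005, stand-in value for a dry day

daily = [0.0, 0.31, 0.0, 1.75]        # made-up measurements, including dry days
log_daily = [math.log10(max(r, NO_RAIN_FLOOR)) for r in daily]
```

Every dry day now maps to log10(0.005) instead of blowing up, while measured amounts pass through unchanged.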