I have a list of numbers that, when plotted on a graph, clearly demonstrate trends such as rising, dropping, repeating, etc. When a human sees the graph, they can easily make out what's happening. What I'm trying to do is achieve the same thing numerically and have the system detect what trends are occurring in the graph. My initial idea is to have a window that shifts along the data, and only the values within this window are used to calculate the trend. A window that is too small would exaggerate small changes, and one that is too large would dilute large changes, so finding a suitable window size could be tricky, which might make this approach unsuitable.
For example, if I have this set (I've added the square brackets to match the description below, but they're purely illustrative; the data is still only one set):
{[0,1,0,1,0,0],[1,1,1,1,1,1,1],[2,2,4,6,7,8,9,10,9,0],[0,1,1,0,1]}
We can easily see (especially if drawn on a graph) that it starts off relatively stable, then becomes completely stable, starts rising, suddenly drops, then becomes relatively stable again.
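To illustrate the sliding-window idea from above, here's a minimal sketch that labels each window by the slope of a least-squares line fitted to it (the window size and the slope threshold are arbitrary placeholder values, not tuned ones):

    # Sliding-window trend labelling (window size and threshold are guesses)
    data = [0, 1, 0, 1, 0, 0,
            1, 1, 1, 1, 1, 1, 1,
            2, 2, 4, 6, 7, 8, 9, 10, 9, 0,
            0, 1, 1, 0, 1]

    WINDOW = 5
    THRESHOLD = 0.5  # slope magnitudes below this count as "stable"

    def slope(values):
        """Least-squares slope of the values against their index positions."""
        n = len(values)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(values) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
        den = sum((x - mean_x) ** 2 for x in xs)
        return num / den

    for i in range(len(data) - WINDOW + 1):
        s = slope(data[i:i + WINDOW])
        if s > THRESHOLD:
            label = "rising"
        elif s < -THRESHOLD:
            label = "falling"
        else:
            label = "stable"
        print(i, label)

This shows the window-size trade-off directly: with WINDOW = 5 the noisy [0,1,0,1,0,0] section mostly comes out "stable", but a smaller window would start flagging those wiggles as rising/falling.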
What techniques and topics should I read up on to find ways of having the system detect those types of patterns efficiently? One approach would be to have rules. For example, a counter that increases every time the number increases and decreases when the number decreases; if the counter passes a high threshold, the system reports rising. Is this rule-based approach a good way, or are there better approaches? (Given a trade-off, I prefer efficiency over accuracy.)
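For concreteness, a rough sketch of that counter rule (the threshold values here are placeholders and would need tuning):

    # Counter-based rule: step the counter up/down with the data and
    # report a trend once it crosses a threshold.
    def detect(data, high=3, low=-3):
        counter = 0
        for prev, curr in zip(data, data[1:]):
            if curr > prev:
                counter += 1
            elif curr < prev:
                counter -= 1
            if counter >= high:
                return "rising"
            if counter <= low:
                return "falling"
        return "stable"

    print(detect([2, 2, 4, 6, 7, 8, 9, 10]))  # "rising"
    print(detect([0, 1, 0, 1, 0, 0]))         # "stable"

It's cheap (one pass, constant memory), which is why I lean towards something like this if accuracy can be sacrificed a little.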

The trends are rising, falling and stable, so I could fit local models there. However, I need to also know at what point they stop. So, for example, the rising trend stopped on the 10th day. I'm not sure if this is also possible with model fitting methods? – keyboardP Sep 27 '11 at 20:14