Original link: tecdat.cn/?p=22071

Original source: the Tuoduan Data Tribe WeChat official account

 

There are at least two kinds of non-stationary time series: series with a deterministic trend and series with a unit root (so-called integrated time series). Unit root tests cannot assess stationarity in general; they can only detect integrated series. The same is true of seasonal unit roots.
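To see concretely what such a test measures, here is a minimal sketch of the Dickey-Fuller idea in base R on simulated data: a bare regression of the first difference on the lagged level, without the lag augmentation or the proper critical values that tseries::adf.test uses.

```r
set.seed(1)
n <- 500
x_stat <- arima.sim(list(ar = 0.5), n = n)   # stationary AR(1)
x_rw   <- cumsum(rnorm(n))                   # random walk (unit root)

# Bare-bones Dickey-Fuller step: regress diff(x) on lag(x);
# a strongly negative t-statistic speaks against a unit root
df_t <- function(x){
  dx  <- diff(x)
  lx  <- x[-length(x)]
  summary(lm(dx ~ lx))$coefficients["lx", "t value"]
}
df_t(x_stat)   # strongly negative: evidence against a unit root
df_t(x_rw)     # much closer to zero: consistent with a unit root
```

This is only the intuition; in practice one relies on the tabulated Dickey-Fuller distribution, as adf.test and pp.test do below.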

Here, consider the monthly average temperature data.

> mon=read.table("temp.txt")

> plot(mon)

We can now compute, for every year, the p-values of three different stationarity tests (Phillips-Perron, KPSS, and augmented Dickey-Fuller):

 

library(tseries)
D = matrix(NA, length(1955:2013), 3)
for(y in 1955:2013){
  Zc = Temp[which(Year==y)]
  D[y-1954,1] = as.numeric(pp.test(Zc)$p.value)
  D[y-1954,2] = as.numeric(kpss.test(Zc)$p.value)
  D[y-1954,3] = as.numeric(adf.test(Zc)$p.value)
}

Graphically, with red for non-stationary and blue for stationary, we get

polygon(y,col=CL[1+(D[y-1954,i]==1)*5],border=NA)}}

It can be seen that most series cannot reject the null hypothesis at the 5% significance level, suggesting that the series are non-stationary.

Winter and summer temperatures are completely different, which we can visualize:

 
> plot(month, tsm)
> lines(1:12, apply(M, 2, mean))

 

or

   plot(tsm)

 

 

> 3D(tsm)

Our time series looks periodic: every year repeats the same seasonal pattern. The autocorrelation function confirms this:
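On a simulated monthly series with a 12-month cycle, the ACF shows the same signature one expects here: a strong positive peak at lag 12 and a negative dip at lag 6 (the half-cycle).

```r
# Simulated monthly series: a yearly sinusoid plus a little noise
set.seed(7)
x <- sin(2 * pi * (1:240) / 12) + rnorm(240, sd = 0.2)

a <- acf(x, lag.max = 24, plot = FALSE)$acf
a[13]   # autocorrelation at lag 12: strongly positive (yearly cycle)
a[7]    # autocorrelation at lag 6: strongly negative (half-cycle)
```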

 

 

Now the question is: is there a seasonal unit root? That suggests a model of the form

X_t = μ + φ X_{t-12} + ε_t

If we set aside additional autoregressive and moving-average components, we can estimate φ. If there is a seasonal unit root, φ should be close to 1.


arima(x = tsm, order = c(0, 0, 0), seasonal = list(order = c(1, 0, 0), period = 12))

Coefficients:
        sar1  intercept
      0.9702     6.4566
s.e.  0.0071     2.1515
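As a sanity check of this reasoning, we can simulate a series that has a seasonal unit root by construction, X_t = X_{t-12} + ε_t, and verify that the estimated seasonal coefficient comes out close to 1. This is a base-R sketch on simulated data, not the temperature series.

```r
# Simulated seasonal random walk: X_t = X_{t-12} + eps_t (seasonal unit root)
set.seed(42)
n   <- 600
eps <- rnorm(n)
x   <- numeric(n)
x[1:12] <- eps[1:12]
for(t in 13:n) x[t] <- x[t-12] + eps[t]

# Regressing X_t on X_{t-12} estimates the seasonal AR coefficient
phi_hat <- as.numeric(coef(lm(x[13:n] ~ 0 + x[1:(n-12)])))
phi_hat   # very close to 1, as expected under a seasonal unit root
```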

The estimate is about equal to 1. In fact, it cannot get too close to 1; if it does, the fitting routine returns an error message… To illustrate the model, let us consider quarterly temperatures.

persp(1:4, N, theta=-50, col="yellow", shade=TRUE)

 

 

A VAR model for quarterly temperatures

A VAR model describes n variables (the endogenous variables) over the same sample period as linear functions of their own past values.

A VAR(p) model can be written as

y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + … + A_p y_{t-p} + e_t

where c is an n × 1 vector of constants, each A_i is an n × n matrix, and e_t is an n × 1 error vector satisfying:

  1. The error term has mean zero: E(e_t) = 0.
  2. The covariance matrix of the error term is Ω, an n × n positive definite matrix: E(e_t e_t′) = Ω.
  3. The error terms are serially uncorrelated: E(e_t e_{t-k}′) = 0 for all non-zero k.
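Stationarity of a VAR can be checked through the eigenvalues of its coefficient matrices: a VAR(1) is stationary when every eigenvalue of A has modulus strictly below 1. A minimal sketch with a hypothetical 2 × 2 matrix (purely illustrative, not the one estimated from the temperature data):

```r
# Hypothetical VAR(1) coefficient matrix (not estimated from the data)
A <- matrix(c(0.5, 0.2,
              0.1, 0.4), nrow = 2, byrow = TRUE)

mods <- Mod(eigen(A)$values)
mods                      # all moduli < 1 => this VAR(1) is stationary
stationary <- all(mods < 1)
```

The same Mod(eigen(A)$values) check is applied to the estimated 4 × 4 matrix below.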

 

where A is a 4 × 4 matrix. The model is easy to estimate:

library(vars)
model = VAR(df)

The matrix A is then

> A = rbind(
+   coefficients(model$varresult$y1)[1:4],
+   coefficients(model$varresult$y2)[1:4],
+   coefficients(model$varresult$y3)[1:4],
+   coefficients(model$varresult$y4)[1:4])

Since the stationarity of this multivariate time series is closely related to the eigenvalues of this matrix, let's look at them:

> eigen(A)$values
[1]  0.35834830 -0.32824657 -0.14042175  0.09105836
> Mod(eigen(A)$values)
[1] 0.35834830 0.32824657 0.14042175 0.09105836

Periodic autoregression (PAR) model

It does not look like there is a stationarity problem here. The constrained model is called a periodic autoregressive (PAR) model.

For quarterly data, a PAR(1) model lets the autoregressive coefficient depend on the quarter s(t):

X_t = φ_{s(t)} X_{t-1} + ε_t,   s(t) ∈ {1, 2, 3, 4}

Stacking the four quarters of each year into a vector turns this into a VAR(1) model.

We can estimate this model

library(partsm)
model = fit.ar.par(wts=tsq, type="PAR", p=1)
model

The characteristic equation of this VAR(1) representation shows that there is a (seasonal) unit root if the product of the four seasonal coefficients equals 1:

φ_1 φ_2 φ_3 φ_4 = 1
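This product condition is easy to check numerically. A tiny sketch with made-up quarterly coefficients (purely illustrative, not the estimates from the temperature data):

```r
# Hypothetical quarterly PAR(1) coefficients, chosen so that their
# product is exactly 1 (i.e. the process has a seasonal unit root)
phi <- c(1.25, 0.80, 1.10, 1/(1.25*0.80*1.10))

prod(phi)              # equals 1: seasonal unit root
unit_root <- isTRUE(all.equal(prod(phi), 1))

# Shrinking any one coefficient breaks the condition
phi2 <- phi * c(1, 1, 1, 0.9)
prod(phi2) < 1         # TRUE: no unit root
```

Note that individual coefficients may exceed 1; only the product matters.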

That is certainly not the case here. We can also run the Canova-Hansen (CH) test, which is mainly used to test for seasonal differences: the null hypothesis is that the seasonal pattern is stable over the sampling period, against the alternative that it changes over time.

The output of the test is as follows:

> CH.test(tsm)

It looks like we reject the seasonal unit root hypothesis. We can also use the following testing procedure:

> nsdiffs(tsm, test="ch")
[1] 0

A value of 1 indicates a seasonal unit root; 0 indicates none. Easy to read, isn't it? If we consider a periodic autoregressive model of the monthly data, the output is

> model

So, whatever the test, we always reject the hypothesis of a seasonal unit root. That does not mean our series cannot be periodic! In fact, the series is almost periodic. But there is no unit root. So all of this makes sense.

To make sure we get this right, consider two time series: the first is a periodic series (with very little noise), the second an integrated series.

> Xp1 = Xp2 = as.numeric(t(M))
> for(t in 13:length(M)){
+   Xp2[t] = Xp2[t-12] + rnorm(1, 0, 2)
+ }

 

 

To view them:

3D(tsp1)
3D(tsp2)

 

If we take a quick look at these series, I would say that the first has no unit root (it is not stationary, but only because it is periodic) and that the second has a unit root. The Canova-Hansen (CH) test gives

> CH.test(tsp1)

Consider

> nsdiffs(tsp1, 12,test="ch")
[1] 0
> nsdiffs(tsp2, 12,test="ch")
[1] 1

Here we reach the same conclusion: the first series has no unit root, but the second one does. Using the Osborn-Chui-Smith-Birchenhall (OCSB) test:

> nsdiffs(tsp1, 12,test="ocsb")
[1] 1
> nsdiffs(tsp2, 12,test="ocsb")
[1] 1

Here the OCSB test also finds a unit root in our periodic series.

So here, at low frequencies, we reject the hypothesis of unit roots in our temperature series, including seasonal unit roots.

