Formulas for ACF and PACF



I want to write code to plot the ACF and PACF from time-series data. I generated the plots below with Minitab.

ACF plot

PACF plot

I tried to look up the formulas, but I still don't understand them well. Could you explain the formulas and how they are used? Also, what are the horizontal red lines in the ACF and PACF plots above? What formula do they come from?

Thank you,


@javlacalle Is the formula you provide correct?
$$\rho(k) = \frac{\frac{1}{n-k}\sum_{t=k+1}^n (y_t-\bar{y})(y_{t-k}-\bar{y})}{\sqrt{\frac{1}{n}\sum_{t=1}^n (y_t-\bar{y})}\sqrt{\frac{1}{n-k}\sum_{t=k+1}^n (y_{t-k}-\bar{y})}}\,,$$
It would not work if
$$\sum_{t=1}^n (y_t-\bar{y}) < 0 \quad\text{and/or}\quad \sum_{t=k+1}^n (y_{t-k}-\bar{y}) < 0,$$
right? Should it be as follows?
$$\rho(k) = \frac{\frac{1}{n-k}\sum_{t=k+1}^n (y_t-\bar{y})(y_{t-k}-\bar{y})}{\sqrt{\frac{1}{n}\sum_{t=1}^n (y_t-\bar{y})^2}\sqrt{\frac{1}{n-k}\sum_{t=k+1}^n (y_{t-k}-\bar{y})^2}}\,?$$
conighion

@conighion You are right, thank you. I had not seen it before. I have fixed it.
javlacalle

Answers:



Autocorrelation

The correlation between two variables $y_1, y_2$ is defined as:

$$\rho = \frac{E[(y_1-\mu_1)(y_2-\mu_2)]}{\sigma_1\sigma_2} = \frac{\mathrm{Cov}(y_1, y_2)}{\sigma_1\sigma_2}\,,$$

where $E$ is the expectation operator, $\mu_1$ and $\mu_2$ are the means of $y_1$ and $y_2$ respectively, and $\sigma_1, \sigma_2$ are their standard deviations.

In the context of a single variable, i.e. autocorrelation, $y_1$ is the original series and $y_2$ is a lagged version of it. Upon the definition above, the sample autocorrelation of order $k=0,1,2,...$ can be obtained by computing the following expression with the observed series $y_t$, $t=1,2,...,n$:

$$\rho(k) = \frac{\frac{1}{n-k}\sum_{t=k+1}^n (y_t-\bar{y})(y_{t-k}-\bar{y})}{\sqrt{\frac{1}{n}\sum_{t=1}^n (y_t-\bar{y})^2}\sqrt{\frac{1}{n-k}\sum_{t=k+1}^n (y_{t-k}-\bar{y})^2}}\,,$$

where $\bar{y}$ is the sample mean of the data.
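As a rough sketch, the formula can be implemented directly; here in Python rather than the R used later in this answer, and with an arbitrary illustrative series:

```python
import numpy as np

def sample_acf(y, k):
    """Sample autocorrelation of order k, term by term as in the formula."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    ybar = y.mean()
    # numerator: (1/(n-k)) * sum_{t=k+1}^n (y_t - ybar)(y_{t-k} - ybar)
    num = np.sum((y[k:] - ybar) * (y[:n - k] - ybar)) / (n - k)
    # denominator: product of the two square-rooted sums of squares
    den = np.sqrt(np.sum((y - ybar) ** 2) / n) * \
          np.sqrt(np.sum((y[:n - k] - ybar) ** 2) / (n - k))
    return num / den

y = np.sin(np.arange(50) / 3.0)  # illustrative series
print(sample_acf(y, 1))          # close to 1 for a slowly varying series
```

By construction, `sample_acf(y, 0)` equals 1, since the numerator and denominator coincide at lag 0.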

Partial autocorrelation

The partial autocorrelation measures the linear dependence of one variable after removing the effect of the other variable(s) that affect both. For example, the partial autocorrelation of order 2 measures the effect (linear dependence) of $y_{t-2}$ on $y_t$ after removing the effect of $y_{t-1}$ on both $y_t$ and $y_{t-2}$.

Each partial autocorrelation can be obtained as a series of regressions of the form:

$$\tilde{y}_t = \phi_{21}\tilde{y}_{t-1} + \phi_{22}\tilde{y}_{t-2} + e_t\,,$$

where y~t is the original series minus the sample mean, yty¯. The estimate of ϕ22 will give the value of the partial autocorrelation of order 2. Extending the regression with k additional lags, the estimate of the last term will give the partial autocorrelation of order k.
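A minimal Python sketch of this regression approach (a least-squares fit with no intercept plays the role of R's `lm`; the AR(1) test series used below is an arbitrary illustration):

```python
import numpy as np

def pacf_by_regression(y, k):
    """Partial autocorrelation of order k: regress the demeaned series
    on its first k lags (no intercept) and keep the last coefficient."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    # Design matrix with columns y_{t-1}, ..., y_{t-k}
    X = np.column_stack([y[k - j: n - j] for j in range(1, k + 1)])
    target = y[k:]
    coefs, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coefs[-1]
```

For an AR(1) process, the order-1 partial autocorrelation estimates the AR coefficient, while higher orders should be close to zero.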

An alternative way to compute the sample partial autocorrelations is by solving the following system for each order k:

$$\begin{pmatrix} \rho(0) & \rho(1) & \cdots & \rho(k-1) \\ \rho(1) & \rho(0) & \cdots & \rho(k-2) \\ \vdots & \vdots & \ddots & \vdots \\ \rho(k-1) & \rho(k-2) & \cdots & \rho(0) \end{pmatrix} \begin{pmatrix} \phi_{k1} \\ \phi_{k2} \\ \vdots \\ \phi_{kk} \end{pmatrix} = \begin{pmatrix} \rho(1) \\ \rho(2) \\ \vdots \\ \rho(k) \end{pmatrix}\,,$$

where ρ() are the sample autocorrelations. This mapping between the sample autocorrelations and the partial autocorrelations is known as the Durbin-Levinson recursion. This approach is relatively easy to implement for illustration. For example, in the R software, we can obtain the partial autocorrelation of order 5 as follows:

# sample data
x <- diff(AirPassengers)
# autocorrelations
sacf <- acf(x, lag.max = 10, plot = FALSE)$acf[,,1]
# solve the system of equations
res1 <- solve(toeplitz(sacf[1:5]), sacf[2:6])
res1
# [1]  0.29992688 -0.18784728 -0.08468517 -0.22463189  0.01008379
# benchmark result
res2 <- pacf(x, lag.max = 5, plot = FALSE)$acf[,,1]
res2
# [1]  0.30285526 -0.21344644 -0.16044680 -0.22163003  0.01008379
all.equal(res1[5], res2[5])
# [1] TRUE

Confidence bands

Confidence bands can be computed as the value of the sample autocorrelations $\pm \frac{z_{1-\alpha/2}}{\sqrt{n}}$, where $z_{1-\alpha/2}$ is the quantile $1-\alpha/2$ of the Gaussian distribution, e.g. 1.96 for 95% confidence bands.

Sometimes confidence bands that increase as the order increases are used. In these cases the bands can be defined as $\pm z_{1-\alpha/2}\sqrt{\frac{1}{n}\left(1 + 2\sum_{i=1}^k \rho(i)^2\right)}$.
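Both kinds of bands can be sketched in Python as follows; `statistics.NormalDist` supplies the Gaussian quantile, and the sample autocorrelations `rho` would come from a computation like the one above:

```python
import numpy as np
from statistics import NormalDist

def acf_conf_bands(rho, n, alpha=0.05, increasing=False):
    """Half-widths of confidence bands for sample autocorrelations rho(1..k)."""
    rho = np.asarray(rho, dtype=float)
    z = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    if increasing:
        # Bands that widen with the order, per the second formula above
        return z * np.sqrt((1 + 2 * np.cumsum(rho ** 2)) / n)
    # Constant bands, per the first formula
    return np.full(len(rho), z / np.sqrt(n))

print(acf_conf_bands([0.5, 0.3], n=100))                   # constant half-widths
print(acf_conf_bands([0.5, 0.3], n=100, increasing=True))  # widening half-widths
```

The increasing bands are never narrower than the constant ones, since the summed $\rho(i)^2$ terms are non-negative.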


(+1) Why the two different confidence bands?
Scortchi - Reinstate Monica

@Scortchi Constant bands are used when testing for independence, while the increasing bands are sometimes used when identifying an ARIMA model.
javlacalle

The two methods for calculating confidence bands are explained in a little more detail here.
Scortchi - Reinstate Monica

Perfect explanation!
Jan Rothkegel

@javlacalle, does the expression for ρ(k) miss squares in the denominator?
Christoph Hanck


"I want to create a code for plotting ACF and PACF from time-series data".

Although the OP is a bit vague, it may possibly be more targeted to a "recipe"-style coding formulation than a linear algebra model formulation.


The ACF is rather straightforward: we have a time series, and basically make multiple "copies" (as in "copy and paste") of it, understanding that each copy is going to be offset by one entry from the prior copy, because the initial data contains $t$ data points, while the previous time series length (which excludes the last data point) is only $t-1$. We can make virtually as many copies as there are rows. Each copy is correlated to the original, keeping in mind that we need identical lengths, and to this end, we'll have to keep on clipping the tail end of the initial data series to make them comparable. For instance, to correlate the initial data to $ts_{t-3}$ we'll need to get rid of the last 3 data points of the original time series (the first 3 chronologically).

Example:

We'll concoct a times series with a cyclical sine pattern superimposed on a trend line, and noise, and plot the R generated ACF. I got this example from an online post by Christoph Scherber, and just added the noise to it:

x=seq(pi, 10 * pi, 0.1)
y = 0.1 * x + sin(x) + rnorm(x)
y = ts(y, start=1800)

[plot of the simulated series]

Ordinarily we would have to test the data for stationarity (or just look at the plot above), but we know there is a trend in it, so let's skip this part, and go directly to the de-trending step:

model=lm(y ~ I(1801:2083))
st.y = y - predict(model)

[plot of the detrended series]

Now we are ready to tackle this time series by first generating the ACF with the acf() function in R, and then comparing the results to the makeshift loop I put together:

ACF = 0                  # Starting an empty vector to capture the auto-correlations.
ACF[1] = cor(st.y, st.y) # The first entry in the ACF is the correlation with itself (1).
for(i in 1:30){          # Took 30 points to parallel the output of `acf()`
  lag = st.y[-c(1:i)]    # Introducing lags in the stationary ts.
  clipped.y = st.y[1:length(lag)]    # Compensating by reducing length of ts.
  ACF[i + 1] = cor(clipped.y, lag)   # Storing each correlation.
}
acf(st.y)                            # Plotting the built-in function (left)
plot(ACF, type="h", main="ACF Manual calculation"); abline(h = 0) # and my results (right).

[acf() output (left) vs. manual ACF calculation (right)]


OK. That was successful. On to the PACF. Much more tricky to hack... The idea here is to again clone the initial ts a bunch of times, and then select multiple time points. However, instead of just correlating with the initial time series, we put together all the lags in-between, and perform a regression analysis, so that the variance explained by the previous time points can be excluded (controlled). For example, if we are focusing on the PACF ending at time $ts_{t-4}$, we keep $ts_t$, $ts_{t-1}$, $ts_{t-2}$ and $ts_{t-3}$, as well as $ts_{t-4}$, and we regress $ts_t \sim ts_{t-1} + ts_{t-2} + ts_{t-3} + ts_{t-4}$ through the origin, keeping only the coefficient for $ts_{t-4}$:

PACF = 0          # Starting up an empty storage vector.
for(j in 2:25){   # Picked up 25 lag points to parallel R `pacf()` output.
  cols = j
  rows = length(st.y) - j + 1 # To end up with equal length vectors we clip.

  lag = matrix(0, rows, j)    # The storage matrix for different groups of lagged vectors.

  for(i in 1:cols){
    lag[ ,i] = st.y[i : (i + rows - 1)]  # Clipping progressively to get lagged ts's.
  }
  lag = as.data.frame(lag)
  fit = lm(lag$V1 ~ . - 1, data = lag) # Running an OLS for every group.
  PACF[j] = coef(fit)[j - 1]           # Getting the slope for the last lagged ts.
}

And finally plotting again side-by-side, R-generated and manual calculations:

[pacf() output vs. manual PACF calculation]

That the idea is correct, besides probable computational issues, can be seen by comparing PACF to pacf(st.y, plot = F).


code here.



Well, in practice the data contain error (noise), represented by $e_t$. The confidence bands help you figure out whether a given level can be considered as only noise (because roughly 95% of the values would fall inside the bands).


Welcome to CV, you might want to consider adding some more detailed information on how the OP would go about doing this specifically. Maybe also add some information on what each line represents?
Repmat


Here is Python code to compute the ACF:

import numpy as np

def shift(x, b):
    if b <= 0:
        return x
    d = np.asarray(x)
    d1 = d.copy()
    d1[b:] = d[:-b]
    d1[:b] = 0
    return d1

# One way of doing it using bare bones
# - you divide by the first value to normalize, because corr(x,x) = 1
x = np.arange(0, 10)
n = len(x)
xo = x - x.mean()

cors = np.array([np.correlate(xo, shift(xo, i))[0] for i in range(n)])
print(cors / cors[0])

# -- Here is another way - you divide by the first value to normalize
cors = np.correlate(xo, xo, 'full')[n - 1:]
print(cors / cors[0])

Sada
Licensed under cc by-sa 3.0 with attribution required.