Why do linear systems exhibit sinusoidal fidelity?


9

I am looking for a proof of sinusoidal fidelity. In DSP we study linear systems a great deal. Linear systems are homogeneous and additive. One more condition they are said to satisfy is that if the input is a sine or cosine wave, the output only changes in phase or amplitude. Why? Why doesn't a sinusoidal input produce a completely different output?


1
Welcome to DSP. Great question.
Phonon

5
Your understanding is incomplete. Linear (homogeneous and additive) systems do not necessarily have the property that an input sinusoid produces a sinusoid of the same frequency with possibly different amplitude and phase. You need the further restriction that the system is also time-invariant. For example, if input x(t) produces output x(t)cos(2π·10^9 t), the system is homogeneous and additive, and hence linear, but it does not satisfy the SISO (sinusoid in, sinusoid out) property.
Dilip Sarwate, 2012
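As a quick numerical sketch of that counterexample (the carrier is lowered here from 10^9 Hz to an assumed 10 Hz purely so the sampling stays readable; `system` is a hypothetical name for the modulator x(t) → x(t)·cos(2π·f_c·t)), the system passes a superposition check but turns a 3 Hz sinusoid into components at 7 Hz and 13 Hz:

```python
import numpy as np

fs = 1000.0                         # sampling rate (Hz), chosen for the sketch
t = np.arange(0, 1.0, 1 / fs)       # one second of samples
fc = 10.0                           # carrier, lowered from 1e9 Hz for readability

def system(x):
    """Multiply the input by a carrier: homogeneous and additive, but time-varying."""
    return x * np.cos(2 * np.pi * fc * t)

x1 = np.sin(2 * np.pi * 3 * t)
x2 = np.cos(2 * np.pi * 7 * t)

# Linearity holds: T{a*x1 + b*x2} == a*T{x1} + b*T{x2}
a, b = 2.0, -0.5
print(np.allclose(system(a * x1 + b * x2), a * system(x1) + b * system(x2)))  # True

# But SISO fails: a 3 Hz input comes out as energy at fc-3 and fc+3 Hz, not at 3 Hz
y = system(x1)
spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
print(freqs[spectrum > 0.1])        # ~[7. 13.] -> new frequencies, no 3 Hz component
```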

Dilip (or someone) should post "They don't" as an answer. Only time-invariant linear systems do.
hotpaw2

2
Just as a note, another way of phrasing this question is "Why are exponentials the eigenfunctions of linear time-invariant systems?"
Jason R

Answers:


8

A visual complement to the other answers

You are talking about linear, time-invariant systems.

Exponential functions have one peculiar property (in fact, they can be defined by it): performing a time translation gives the same function multiplied by a constant. So

e^{t - t_0} = e^{-t_0} e^{t}

[Mathematica graphics]

The red exponential could just as well be the blue one divided by e, or the blue one translated one second to the right.

In general, this is also true of complex exponentials.

Can you picture in your mind the plot of a complex harmonic such as x(t) = e^{j2πt}? If so, you will see that it looks like a spring: it rotates around the complex plane as time passes.

[Mathematica graphics]

Rotating that spring (multiplying it by a complex number on the unit circle) is equivalent to translating it. You have probably come across this visual effect at some point in your life:

[animation]

It is also the principle behind an ordinary screw.

Suppose you feed this spring into a linear time-invariant system and get an output y. Now feed in a rotated version of the spring. By linearity, the output must be y rotated by the same amount. But a rotation is equivalent to a time translation, and the system is time-invariant, so the output must also be y translated in time by that same amount. So y has to satisfy the same property as the input: rotating it must be equivalent to a particular time translation. That only happens when the output is a multiple of the original spring.

How much translation? Well, it is directly proportional to the rotation, just as it is for the spring. The tighter the loops of the spring (the faster it rotates), the smaller the time translation for a given rotation. The tighter the loops of a screw, the more turns you need to drive it all the way in; and when half the turns are done, the screw is halfway in. The output must satisfy the same relationship, so the output spring y rotates at the same frequency as the input.

Finally, as a reminder,

cos(t) = (e^{jt} + e^{-jt}) / 2

sin(t) = (e^{jt} - e^{-jt}) / (2j)

So, what happens with exponentials does not necessarily have to happen with cosines and sines in the most general case. But if the system is also real, that is another story...

In general, and for this same reason, exponentials are the "eigenfunctions" of linear time-invariant systems (the output is proportional to the input). That is why the Z transform and the Laplace transform are so useful for these systems.
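A short numerical check of this eigenfunction property, assuming an arbitrary 4-tap FIR filter as the LTI system (the taps below are made up for illustration):

```python
import numpy as np

# An arbitrary LTI system: a short FIR filter (the specific taps don't matter)
h = np.array([0.5, 0.3, -0.2, 0.1])

n = np.arange(200)
omega = 0.3                                   # rad/sample
x = np.exp(1j * omega * n)                    # the complex-exponential "spring"

y = np.convolve(x, h)[: len(n)]               # LTI output (causal part)

# After the filter's start-up transient, y[n] / x[n] is one constant complex number:
ratio = y[len(h):] / x[len(h):]
print(np.allclose(ratio, ratio[0]))           # True: output = K(omega) * input

# That constant is the frequency response H(e^{j*omega}) = sum_k h[k] e^{-j*omega*k}
K = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))
print(np.allclose(ratio[0], K))               # True
```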


How/where did you get that animation?
Spacey

@Mohammad I adapted it from the Wikipedia page on the Archimedes' screw.
Rojo

Where did you get the corkscrew plot? :) math.stackexchange.com/q/144268/2206
endolith

@endolith Oh, I made that one in Mathematica. Yours is better ;)
Rojo

4

Consider a system with input x(t) and output y(t). Borrowing notation from Lars1's answer, we denote this relationship by x(t) → y(t). We say that the system is a linear time-invariant (LTI) system if it satisfies the following properties (a small numerical check of these properties follows the list):

H. If x(t) → y(t), then αx(t) → αy(t).

A. If x1(t) → y1(t) and x2(t) → y2(t), then x1(t) + x2(t) → y1(t) + y2(t).

T. If x(t) → y(t), then x(t − τ) → y(t − τ) for any real number τ.

Properties H and A together are equivalent to Property L:

L. If x1(t) → y1(t) and x2(t) → y2(t), then αx1(t) + βx2(t) → αy1(t) + βy2(t).
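As a quick sanity check of Properties H, A and T, one can take circular convolution with a 5-point moving average as a stand-in LTI system (circular shifts are assumed here so that Property T holds exactly on a finite record; the kernel and inputs are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256

# A concrete LTI system: circular convolution with a fixed kernel k
k = np.zeros(N)
k[:5] = [0.2, 0.2, 0.2, 0.2, 0.2]            # 5-point moving average

def S(x):
    """Circular convolution with k: linear and (circularly) time-invariant."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

x1, x2 = rng.standard_normal(N), rng.standard_normal(N)
alpha, tau = 2.7, 13

# Property H (homogeneity), Property A (additivity), Property T (time invariance)
print(np.allclose(S(alpha * x1), alpha * S(x1)))             # H
print(np.allclose(S(x1 + x2), S(x1) + S(x2)))                # A
print(np.allclose(S(np.roll(x1, tau)), np.roll(S(x1), tau))) # T
```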


A periodic input to a time-invariant system produces a periodic output

Suppose that x(t) is a periodic signal with period T, that is, x(t − nT) = x(t) for all integers n. Then it follows immediately from Property T that y(t) is also a periodic signal with period T. We can therefore express y(t) as a Fourier series:

y(t) = a_0/2 + Σ_{n=1}^{∞} [a_n cos(nωt) + b_n sin(nωt)]

where ω = 2π/T is the fundamental frequency.
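A small numerical illustration of this periodic-in/periodic-out claim, assuming a 9-tap moving average as the LTI system and a 20 Hz fundamental (both choices are arbitrary):

```python
import numpy as np

fs = 1000                      # samples per second
T = 0.05                       # input period: 50 ms (20 Hz fundamental)
t = np.arange(0, 20 * T, 1 / fs)

# A T-periodic input: a couple of harmonics of 1/T
x = 1.0 * np.cos(2 * np.pi * 20 * t) + 0.4 * np.sin(2 * np.pi * 60 * t)

# Any LTI system will do; here a 9-tap moving average
h = np.ones(9) / 9.0
y = np.convolve(x, h)[: len(t)]

P = int(round(T * fs))                     # samples per period
steady = y[len(h):]                        # drop the start-up transient
one, two = steady[:P], steady[P:2 * P]
print(np.allclose(one, two))               # True: the output repeats with period T
```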

Since cos(ωt) and sin(ωt) are periodic signals with period T, we have for any time-invariant system, whether linear or not, that

cos(ωt) → p_0/2 + Σ_{n=1}^{∞} [p_n cos(nωt) + q_n sin(nωt)],
sin(ωt) → r_0/2 + Σ_{n=1}^{∞} [r_n cos(nωt) + s_n sin(nωt)].

In fact, for a linear time-invariant (LTI) system, all the p_n, q_n, r_n, and s_n are zero except for p_1, q_1, r_1, s_1. To see why this is so, let us compute the response of the LTI system to cos(ωt − θ) in two different ways and compare the results.

Since cos(ωt − θ) = cos(θ)cos(ωt) + sin(θ)sin(ωt), we get from Property L and the expansions above that

cos(ωt − θ) → (p_0 cos(θ) + r_0 sin(θ))/2 + Σ_{n=1}^{∞} (p_n cos(θ) + r_n sin(θ)) cos(nωt) + Σ_{n=1}^{∞} (q_n cos(θ) + s_n sin(θ)) sin(nωt).

On the other hand, since cos(ωt − θ) = cos(ω(t − θ/ω)) is just a delayed version of cos(ωt), we get from Property T that

cos(ωt − θ) → p_0/2 + Σ_{n=1}^{∞} [p_n cos(nωt − nθ) + q_n sin(nωt − nθ)]
= p_0/2 + Σ_{n=1}^{∞} (p_n cos(nθ) − q_n sin(nθ)) cos(nωt) + Σ_{n=1}^{∞} (q_n cos(nθ) + p_n sin(nθ)) sin(nωt).

These two Fourier series must be identical no matter which value of θ we choose. Comparing the coefficients, we see that p_0/2 cannot equal (p_0 cos(θ) + r_0 sin(θ))/2 for all θ unless p_0 = r_0 = 0. Similarly, for any n > 1, p_n cos(nθ) − q_n sin(nθ) cannot equal p_n cos(θ) + r_n sin(θ), etc., for all θ unless p_n = q_n = r_n = s_n = 0. However, for n = 1, p_1 cos(θ) − q_1 sin(θ) = p_1 cos(θ) + r_1 sin(θ) implies that r_1 = −q_1, and similarly s_1 = p_1. In other words, for an LTI system,

cos(ωt) → p_1 cos(ωt) + q_1 sin(ωt),
sin(ωt) → −q_1 cos(ωt) + p_1 sin(ωt).

Now, p_1 cos(ωt) + q_1 sin(ωt) = B cos(ωt − ϕ) where B = √(p_1² + q_1²) and ϕ = arctan(q_1/p_1). Therefore, Properties T and H give us that

A cos(ωt − θ) → AB cos(ωt − ϕ − θ).
Any sinusoid of frequency ω rad/s can be expressed as A cos(ωt − θ) for an appropriate choice of A and θ, and so the above result is what we need.

SISO property of linear time-invariant systems: If the input to an LTI system is a sinusoid, the output is a sinusoid of the same frequency but possibly different amplitude and phase.
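A numerical sketch of this statement, assuming an arbitrary made-up FIR filter as the LTI system: the measured output matches A·B·cos(ωn − ϕ − θ), with B and ϕ read off the filter's frequency response at ω.

```python
import numpy as np

h = np.array([0.4, 0.25, -0.1, 0.05])        # any LTI (FIR) system; taps are made up
omega = 0.35                                  # rad/sample
A, theta = 1.7, 0.6                           # input amplitude and phase

n = np.arange(400)
x = A * np.cos(omega * n - theta)
y = np.convolve(x, h)[: len(n)]

# Predicted output: same frequency, amplitude scaled by B, phase shifted by phi,
# where B and phi come from the filter's response H at omega
H = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))
B, phi = np.abs(H), -np.angle(H)
y_pred = A * B * np.cos(omega * n - phi - theta)

print(np.allclose(y[len(h):], y_pred[len(h):]))   # True once the transient has passed
```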

This is not quite the result that the OP wanted -- he wanted a proof that a linear system (one in which Properties H and A (equivalently, Property L) hold but not necessarily Property T) has the SISO property, but as the development above shows, Property T must hold in order to prove even the weaker result that periodic input results in periodic output.


As a final comment, note that it is not necessary to use complex numbers, convolution theorems, Fourier or Laplace transforms, impulses, eigenfunctions, etc. to prove the SISO property. It follows from Properties L and T and the trigonometric identity

cos(α − β) = cos(α)cos(β) + sin(α)sin(β).

What would happen if x(t) is not periodic (which can happen, e.g., for incommensurate frequencies)? Does T need to be finite? Could we gain something in terms of generality by requiring x(t) to be square integrable over the observation time interval?
Lars1

@Lars1 If the input to an LTI system is not periodic, the output is not periodic either. As a specific case, if x(t) = A_1 cos(ω_1 t) + A_2 cos(ω_2 t) where ω_1/ω_2 is irrational (and so the input is not periodic), then from Property L we have that
A_1 cos(ω_1 t) + A_2 cos(ω_2 t) → A_1 B_1 cos(ω_1 t − ϕ_1) + A_2 B_2 cos(ω_2 t − ϕ_2),
and this output is not periodic either. So there is no problem.
Dilip Sarwate

@Sarwate: Not quite what I meant to say, sorry. I was wondering if e.g. x(t) = cos(πt) + cos(2t) would be handled by the case above. If we require a finite observation time interval with t ∈ T = [0; T], any square-integrable signal can be written as a Fourier series in the observation interval. For finite T this is likely the most general approach, and your derivations still hold as far as I can see. Obviously the Fourier series approach forces periodicity outside T, but if we only care about the signal on t ∈ T this does not really matter.
Lars1

@Lars1 I don't agree with your comment that the enforced periodicity outside [0, T] does not matter. If input x(t) produces output y(t) in an LTI system, then applying the SISO property to the Fourier series does not give y(t) restricted to [0, T]. Instead, what is obtained is one period of the periodic response ŷ(t) to the periodic signal x̂(t), where for each time instant t, −∞ < t < ∞,
x̂(t) = x(t mod T),
in other words, the T-second segment of x(t) repeated periodically (with period T) along the time axis.
Dilip Sarwate

E.g. in nonlinear RF systems we often choose a sum of incommensurate sinusoids to ensure a unique frequency mapping from input to output. These result in a non-periodic signal, and I was just curious as to why you had to assume periodicity above, which to me seems to exclude most practically relevant signals. Square-integrable x(t) and y(τ) in finite observation intervals can be written as Fourier series. I did not (intend to) claim that t was defined on the same interval for x and y, BTW, and y could be a time-offset version. I'll stop here to avoid further confusion.
Lars1

3

Here's the idea of the proof. Let's assume we can describe the output of a system by a convolution,

y(t) = ∫ k_t(t − τ) f(τ) dτ

Notice that the function (aka "kernel") k_t(t) as I've written it here may change as t varies. However, we usually make an important assumption about k_t(t): that it doesn't change with time. This is called "linear time-invariance" (also check out the Wikipedia page on Toeplitz matrices). If our system is linear time-invariant, k_t is the same for any t, and so we'll just ignore the subscript and write

y(t) = ∫ k(t − τ) f(τ) dτ

Now, let's say f(t) is a sinusoid, say f(t) = e^{iωt}. So, we have

y(t) = ∫ k(t − τ) e^{iωτ} dτ = ∫ k(τ) e^{iω(t − τ)} dτ = e^{iωt} ∫ k(τ) e^{−iωτ} dτ

Notice that the integral in the last expression has no dependence on t! As a result, let's define K(ω) := ∫ k(τ) e^{−iωτ} dτ.

Thus, we've discovered that

y(t) = K(ω) e^{iωt}

or, in other words, y(t) is a sinusoid oscillating at the same frequency as the input, but weighted by a complex number K(ω) which is constant with respect to t (and thus may shift the amplitude and phase of the output with respect to the input).
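As a concrete check of this derivation, one can pick an assumed kernel k(τ) = e^(−τ) for τ ≥ 0 (so that K(ω) = 1/(1 + iω) in closed form), evaluate the integrals numerically, and confirm that the output is K(ω)·e^{iωt}:

```python
import numpy as np

def integrate(f, x):
    """Trapezoidal rule on a uniform grid -- enough for this sketch."""
    return np.sum((f[1:] + f[:-1]) * 0.5 * (x[1] - x[0]))

# Assumed kernel: k(tau) = exp(-tau) for tau >= 0, so K(w) = 1/(1 + i*w) in closed form
tau = np.linspace(0, 40, 40001)           # long enough for the tail to be negligible
k = np.exp(-tau)
w = 2.0                                   # input frequency (rad/s)

# K(w) = integral of k(tau) * exp(-i*w*tau) d tau
K = integrate(k * np.exp(-1j * w * tau), tau)
print(np.allclose(K, 1 / (1 + 1j * w), atol=1e-5))      # matches the closed form

# y(t) = integral of k(tau) * exp(i*w*(t - tau)) d tau should equal K(w) * exp(i*w*t)
for t in (0.0, 0.7, 3.1):
    y_t = integrate(k * np.exp(1j * w * (t - tau)), tau)
    print(np.allclose(y_t, K * np.exp(1j * w * t)))      # True at every t tested
```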

EDIT: The comments noted this answer was pretty loose. My goal was to avoid details like different forms of the Fourier transform, but I ended up conflating the Fourier and Laplace transforms. What I called Fourier transform previously was only the Fourier transform if s was purely imaginary. I decided that clarifying this route would necessarily add too much notation, so I'm relegating it to italics.

Now, take the Laplace transform, to end up with (since Laplace transform takes convolution to multiplication),

Y(s)=K(s)F(s)

Now, if f is a sinusoid, say f(t) = e^{iωt}, its Laplace transform is a delta function at that ω. That is, F(s) = δ_ω(s). So, the Laplace transform of the output is also a delta function at that frequency:

Y(s) = K(s) δ_ω(s) = K(ω) δ_ω(s)

Since K(ω) is just some complex number that depends on the input frequency, the output y(t) will be a sinusoid with the same frequency as the input, but with potentially different amplitude and phase.

Incidentally, I just noticed you can find the same idea written out in the time domain at Wikipedia. A higher-level explanation (which you can ignore if it's too mathy) is that linear systems theory is defined through the convolution operation, which is diagonalized by the Fourier transform. Thus, a system whose input is an eigenvector of the Fourier transform operator will output only a scaled version of its input.
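One finite-dimensional way to see the "convolution is diagonalized by the Fourier transform" remark, assuming a made-up length-8 kernel: the circulant matrix that implements circular convolution has the DFT basis vectors as eigenvectors, with the DFT of the kernel as the eigenvalues.

```python
import numpy as np

N = 8
h = np.array([0.5, 0.3, -0.2, 0.1, 0.0, 0.0, 0.0, 0.0])   # made-up kernel

# Circulant matrix whose action on a vector is circular convolution with h
C = np.stack([np.roll(h, j) for j in range(N)], axis=1)    # C[i, j] = h[(i - j) mod N]

# The DFT basis vectors diagonalize C: C @ f_k == H[k] * f_k with H = DFT(h)
H = np.fft.fft(h)
n = np.arange(N)
print(all(np.allclose(C @ np.exp(2j * np.pi * k * n / N),
                      H[k] * np.exp(2j * np.pi * k * n / N)) for k in range(N)))  # True
```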


-1 What is s and how does it relate to ω? And could you explain what is meant by δ_ω(s)? Your equation Y(s) = K(s)δ_ω(s) is sheer nonsense.
Dilip Sarwate

@DilipSarwate I suspect he's using Laplace transform notation instead of Fourier notation.
Jim Clay

@sydeulissie The problem is that you assert that K(w) is "just some complex number", but you haven't said why it's just a complex number at each frequency. That's the heart of the proof.
Jim Clay

3
This has a correct outline but many problems in the details. Not downvoting, but it should be fixed.
Phonon

1

Say we have a system with input x1(t) which generates the output y1(t) = G(x1(t)), and with an input x2(t) we get the output y2(t) = G(x2(t)). The system is linear if:

a x1(t) + b x2(t) → y(t) = G(a x1(t) + b x2(t)) = a G(x1(t)) + b G(x2(t)) = a y1(t) + b y2(t)

where a and b are (real or complex) constants. If the equation above is not fulfilled, the system is nonlinear. The equation can be used for real and complex signals in the time and frequency domains. This is the same as requiring the superposition principle to hold. As Sarwate illustrates in a comment, this does not prevent the system from generating new frequencies. We are probably often just used to implicitly assuming time invariance. The reason is likely that it is often possible to map a time-varying system to a time-invariant one by applying one or more external controlling signals.
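One way to probe the superposition condition numerically is a randomized spot check (a necessary condition only, not a proof of linearity); with made-up inputs, a scale-and-delay system passes while the squarer used later in this answer fails:

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.standard_normal(256), rng.standard_normal(256)
a, b = 1.5, -2.0

def is_linear(G):
    """Single randomized check of the superposition condition (necessary, not sufficient)."""
    return np.allclose(G(a * x1 + b * x2), a * G(x1) + b * G(x2))

print(is_linear(lambda x: 3 * x + np.roll(x, 1)))  # True: scaling plus a (circular) delay
print(is_linear(lambda x: x ** 2))                 # False: the squarer breaks superposition
```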

From the definition of linearity, and further requiring a time-invariant system, we can directly see that two (or more) signals cannot interfere and generate new frequency components while still complying with the linearity requirement. The principle of superposition also follows directly from the linearity definition.

Also from the linearity definition, the concept of convolution for linear time-invariant systems follows. For nonlinear systems we have, for example, the Volterra series, which is a multi-dimensional convolution integral; the 1-dimensional convolution integral is a special case of the Volterra series. This is far more complicated than linear techniques, though. Based on the convolution integral for a linear system, the derivation follows the one shown by @sydeulissie.

To demonstrate a simple counterexample of a nonlinear relation where new frequencies are generated, we can use G: y(t) = x²(t). Let us first show that this is indeed nonlinear. If we apply the input x1(t) we get the output y1(t) = x1²(t), and if we apply the input x2(t) we get the output y2(t) = x2²(t). The output y(t) for the combined input a x1(t) + b x2(t) is then:

y(t) = {a x1(t) + b x2(t)}² = a² x1²(t) + b² x2²(t) + 2ab x1(t) x2(t)

or:

y(t) = a² y1(t) + b² y2(t) ± 2ab √(y1(t) y2(t)) ≠ a y1(t) + b y2(t)

and we have thus proved x² to be nonlinear (which can hardly be surprising). If we apply a single sinusoidal signal x(t) = A cos(2π f_0 t + ϕ_0) to the system G, we get the output:

y(t) = x²(t) = A² cos²(2π f_0 t + ϕ_0) = A²/2 + (A²/2) cos(2π·2f_0 t + 2ϕ_0)

The output here contains a DC component and another component at the frequency 2f_0. The nonlinear function x² thus generates new frequency components.
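A short numerical confirmation of this spectrum, with assumed values A = 2, f_0 = 50 Hz and a 1 kHz sampling rate; the output energy shows up only at DC and 2·f_0:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
f0, A, phi0 = 50.0, 2.0, 0.3

x = A * np.cos(2 * np.pi * f0 * t + phi0)
y = x ** 2                                    # the nonlinear system G: y = x^2

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
print(freqs[spectrum > 0.5])                  # [0. 100.] -> a DC term and 2*f0, no 50 Hz
```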

In conclusion, it can be observed that a linear system may generate frequency components not present in the input (if the system is time-varying). If the system is linear and time-invariant, the output cannot contain frequency components that are not present in the input.

Thanks to @Sarwate for the most relevant comment.


You are right. I forgot to mention that I was referring to time-invariant systems. The example you provide is a time-varying system, for which the claim does not hold. Normally a signal such as cos(t) is applied at an external port, in which case the linearity is not fulfilled. I have noted the time-invariant requirement in the answer above.
Lars1

@DilipSarwate So is it that only LTI systems have that property?
Phonon

Just checked a couple of books to be on the safe side. Actually there seems to be some difference in the details. One definition in Yang and Lee's book on circuit systems from 2007 says: "A system is said to be linear if the superposition principle holds, i.e. its output to a linear combination of several arbitrary inputs is the same as the linear combination of the outputs to individual inputs". In that respect Sarwate's example is linear - but not time invariant. Other refs are less precise though. Thanks to @Sarwate.
Lars1

1
Comment referred to by Lars1 with typographical errors corrected: Consider the system that produces output x(t)cos(t) from input x(t). Then, ax1(t)+bx2(t) produces output
(ax1(t)+bx2(t))cos(t)=ax1(t)cos(t)+bx2(t)cos(t)
so that the system is linear but without the claimed property.
Dilip Sarwate

@Sarwate How is the system which produces output x(t) cos(t) time-varying? I am a beginner in DSP.
Hobyist

1

As Dilip Sarwate pointed out, only linear shift-invariant (LSIV) systems have the SISO (sinusoid in, sinusoid out) property.

The short answer to your question is that the complex exponentials e^{jωt} are the eigenfunctions of an LSIV system. By the definition of an eigenfunction, if the input is an eigenfunction (a sine/cosine can be represented by complex exponentials according to Euler's formula), the output is just the product of the input and the corresponding eigenvalue, which can be a complex number; that is where the changes in phase/amplitude come from.

Licensed under cc by-sa 3.0 with attribution required.