Why do microcontrollers need a clock?


31

Why do instructions need to be processed at set time intervals (that is, using a clock)? Can't they just be executed in sequence, as soon as the previous instruction has completed?

I would find an analogy for why a microcontroller needs a clock particularly helpful.


6
There are asynchronous processors.
Leon Heller

6
How do you determine "when the previous instruction has completed"? If you think about it, you need to know when the "has the previous instruction completed?" calculation has completed, and when the calculation that checks *that* has completed, and so on... It's much simpler to just say "an instruction takes 0.4 nanoseconds to complete."
user253751

4
Logic gates don't tell you when they're done. A signal is indeterminate for a period of time until it settles to a stable value. In essence, the clock lets the design know that the logic has settled to the correct values. More advanced strategies such as micro-instructions help by splitting machine instructions into smaller pieces, so an ADD might take 4 clock ticks while a memory access might take hundreds of ticks.

Answers:


31

One or two examples may help here. Look at the following hypothetical circuit:

schematic

simulate this circuit - Schematic created using CircuitLab

Suppose both A and B are high (1). The output of the AND is therefore 1, and since both inputs to the XOR are 1, its output is 0.

Logic elements do not change their state instantaneously; there is a small but significant propagation delay while a change at the inputs is processed. Suppose B goes low (0). The XOR sees the new state on its second input immediately, but on its first input it still sees the "old" 1 coming from the AND gate. As a result, the output briefly goes high, until the change propagates through the AND gate, both inputs to the XOR are low, and the output goes low again.

The glitch is not a desired part of the circuit's operation, but glitches like this occur whenever there is any difference in propagation speed through different parts of the circuit, whether due to the amount of logic or the length of the wiring.
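To put rough numbers on this, here is a minimal Python sketch of the two-gate circuit above; the 2 ns and 1 ns propagation delays are invented purely for illustration.

```python
# Minimal time-stepped sketch of the AND -> XOR glitch described above.
# The gate delays below are made-up illustrative values, not from the answer.

AND_DELAY = 2   # ns from an input change to the AND output changing
XOR_DELAY = 1   # ns from an input change to the XOR output changing

# A stays high; B falls from 1 to 0 at t = 10 ns.
def a(t): return 1
def b(t): return 1 if t < 10 else 0

def xor_out(t):
    """XOR output at time t, with each gate seeing inputs one delay earlier."""
    t_in = t - XOR_DELAY                                  # when the XOR sampled its inputs
    and_out = a(t_in - AND_DELAY) & b(t_in - AND_DELAY)   # AND output as of t_in
    return and_out ^ b(t_in)                              # second XOR input comes straight from B

for t in range(9, 15):
    print(f"t = {t:2d} ns  out = {xor_out(t)}")
# The output shows out = 1 at t = 11..12 ns: the XOR already sees B low on its
# direct input while the slower AND path still presents the "old" 1.
```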

One really simple way to deal with this is to put an edge-triggered flip-flop on the output of the combinational logic, like this:

schematic

simulate this circuit

Now, any glitches that occur are hidden from the rest of the circuit by the flip-flop, which only updates its state when the clock goes from 0 to 1. As long as the interval between rising clock edges is long enough for signals to propagate all the way through the combinational logic chain, the results will be reliably deterministic and glitch-free.
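A small sketch of that edge-triggered behaviour (again with invented timing, continuing the numbers from the previous sketch) shows why the downstream logic never sees the glitch:

```python
# Sketch (with invented timing) of an edge-triggered D flip-flop hiding a
# glitch: the flip-flop samples its D input only on rising clock edges.

# Combinational output containing a 2 ns glitch at t = 11..12 ns (see above).
def d_input(t):
    return 1 if 11 <= t < 13 else 0

CLOCK_PERIOD = 10  # ns; longer than the slowest combinational path

def q_output(t_end):
    """Replay rising clock edges up to t_end and return the sampled values."""
    samples = []
    for t in range(0, t_end, CLOCK_PERIOD):   # rising edges at t = 0, 10, 20, ...
        q = d_input(t)                        # Q updates only at the edge
        samples.append((t, q))
    return samples

print(q_output(40))
# [(0, 0), (10, 0), (20, 0), (30, 0)] -- the glitch at 11..12 ns falls between
# clock edges, so the rest of the circuit never observes it.
```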


6
Thank you for actually mentioning propagation delay almost immediately, that is probably 99% of the answer.

1
A working example of this in action can be observed on Microchip (and other) microcontrollers' digital I/O peripherals. If you use the PORT registers to update outputs (rather than the LATCH) using consecutive Read-Modify-Write instructions, it is possible to read the state of the pin whilst it is changing state. See section 10.2.2 of the dsPIC33E/24E documentation for more detail.
Evil Dog Pie

Do I understand it right that sequential circuits critically need clocking not only because they'll get glitches, but also because, due to this glitch, some flip-flop may end up storing the wrong value?
lakesare

20

I feel a lot of these answers are not exactly hitting on the core question. The micro-controller has a clock simply because it executes (and is driven by) sequential logic.

In digital circuit theory, sequential logic is a type of logic circuit whose output depends not only on the present value of its input signals but on the sequence of past inputs, the input history. This is in contrast to combinational logic, whose output is a function of only the present input. That is, sequential logic has state (memory) while combinational logic does not. Or, in other words, sequential logic is combinational logic with memory.

As well:

The main advantage of synchronous logic is its simplicity. The logic gates which perform the operations on the data require a finite amount of time to respond to changes to their inputs. This is called propagation delay. The interval between clock pulses must be long enough so that all the logic gates have time to respond to the changes and their outputs "settle" to stable logic values, before the next clock pulse occurs. As long as this condition is met (ignoring certain other details) the circuit is guaranteed to be stable and reliable. This determines the maximum operating speed of a synchronous circuit.
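To make the quoted constraint concrete, here is a back-of-the-envelope calculation; the delay figures are hypothetical examples, not taken from the answer above.

```python
# Back-of-the-envelope timing budget for a synchronous register-to-register
# path. All numbers are hypothetical examples, not measurements.

t_clk_to_q = 0.5e-9   # s: flip-flop clock-to-output delay
t_logic    = 3.0e-9   # s: worst-case combinational (propagation) delay
t_setup    = 0.4e-9   # s: flip-flop setup time at the receiving register

t_min_period = t_clk_to_q + t_logic + t_setup
f_max = 1.0 / t_min_period

print(f"minimum clock period: {t_min_period * 1e9:.1f} ns")
print(f"maximum clock rate:   {f_max / 1e6:.0f} MHz")
# ~3.9 ns period -> roughly 256 MHz: the clock can only run as fast as the
# slowest path between two registers allows.
```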


15

Short answer: managers want a simple, testable PROOF of function before committing millions (or more) of dollars to a design. Current tools just do not give asynchronous designs those answers.

Microcomputers and microcontrollers typically utilize a clocking scheme to ensure timing control. All process corners have to maintain timing across all the voltage, temperature, process, etc. effects on signal propagation speed. No current logic gate changes state instantly: each gate switches depending on the voltage it is supplied, the drive it gets, the load it drives, and the size of the devices used to make it (and of course the process node (device size) it is made in, and how fast THAT process is actually performing on THIS pass through the fab). In order to get to "instant" switching, you'd have to use quantum logic, and that assumes that quantum devices can switch instantly (I am not sure they can).

Clocked logic makes it possible to PROVE that the timing across the entire processor works across the expected voltage, temperature and process variables. There are many software tools available that help measure this timing, and the overall process is called "timing closure". Clocking can (and, in my experience, does) take somewhere between 1/3 and 1/2 of the power used in a microprocessor.

So, why not asynchronous design? There are few, if any, timing closure tools to support this design style. There are few, if any, automated place and route tools that can deal with, and manage, a large asynchronous design. If nothing else, managers do NOT approve anything that does not have a straightforward, computer generated, PROOF of functionality.

Comments that asynchronous design requires "lots of" synchronization signals, which in turn require "lots of" transistors, ignore the cost of routing and synchronizing a global clock, and the cost of all the flip-flops the clocking system requires. Asynchronous designs are (or should be) smaller and faster than clocked ones. (One simply takes the ONE slowest signal path and uses it to feed a "ready" signal back to the preceding logic.)

Asynchronous logic is faster because it never has to wait for a clock that had to be stretched to accommodate some other block somewhere else. This is especially true for register-to-logic-to-register functions. Asynchronous logic does not have the multitude of "setup" and "hold" issues of pipelined logic with flip-flops scattered throughout, where the propagation delay of each stage of logic has to be fitted to the clock boundaries.

Can it be done? Sure, even in a billion-transistor design. Is it harder? Yes. But proving that it works across an entire chip (or even a system) is just that much more complicated. Getting the timing right on paper is reasonably straightforward for one block or subsystem. Controlling that timing in an automated place-and-route system is much harder, because the tools are not set up to handle the potentially enormous set of timing constraints.

Microcontrollers also have a potentially large set of other blocks that interface to (comparatively) slow external signals, on top of all the complexity of the microprocessor core. That complicates the timing a bit, but not by much.

Achieving a "first to arrive" / "lockout" signal mechanism is a circuit design issue, and there are known ways of dealing with it. Race conditions are a sign of 1) poor design practice, or 2) external signals coming into the processor. Clocking in fact introduces signal-to-clock race conditions of its own, related to "setup" and "hold" violations.

Personally, I do not understand how an asynchronous design gets into a stall or any other race condition. That may be my limitation, but unless it arises from data entering the processor, it should never be possible in a properly designed logic system; and even then, since it can happen as signals come in, you design to handle it.

(I hope this helps.)

All that said, if you have the money ...


Of course, it depends on the chip you're building - for example, neural networking hardware tends to be asynchronous, because that's actually the easiest thing - the thing they're emulating is asynchronous. We're mostly building synchronous sequential hardware, because the software/firmware is also mostly synchronous and sequential (especially on the "sequential" part - asynchronous code is used more and more commonly). In fact, it's a lot easier to wrap your head around a sequential, synchronous system, especially when most programming is done in languages that encourage sequential code.
Luaan

Events in the real world happen at unpredictable times. If a device has a button, and is supposed to execute one code path if it's pushed "soon enough" and execute another code path if it isn't, then in the absence of quantum-mechanical limitations there would be, between a moment when a button push would happen soon enough to trigger the alternate code path and a moment when a push would be "too late", some precise moment where a button push would cause some behavior "between" the two (e.g. causing some bits of the program counter to get changed...
supercat

...but not others). In the absence of quantum-mechanical limitations, the time between the last moment when the push would cause the branch, and the first moment when a push would cleanly fail to do so, could be made arbitrarily small but not reduced to zero. Quantum-mechanical limits may make it likely that any button push would happen either earlier enough to register or late enough to fail cleanly, but proving that there will never be a quantum state that would allow a button push in the deadly intermediate time would generally be infeasible.
supercat

Using synchronous logic greatly simplifies the analysis of situations where the system will need to respond to a truly-asynchronous event by ensuring that race conditions will have a very low probability of escaping a very small portion of the overall device. Analyzing that small portion of the device to ensure that race conditions are unlikely to escape is apt to be a much more tractable problem than allowing race conditions to occur almost anywhere and trying to analyze their effects to prove they're acceptably unlikely to cause trouble.
supercat

10

Microcontrollers need to use a clock because they need to be able to respond to events that may occur at any time, including nearly simultaneously with either other external events or events generated by the controllers themselves, and will often have multiple circuits that need to know whether one event X precedes another event Y. It may not matter whether all such circuits decide that X preceded Y, or all such circuits decide that X did not precede Y, but it will often be critical that if any of the circuits decides that X preceded Y, then all must do so. Unfortunately, it's difficult to ensure that circuits will within a bounded time reach a guaranteed consensus as to whether X precedes Y, or even reach a consensus on whether or not they have reached a consensus. Synchronous logic can help enormously with that.

Adding a clock to a circuit makes it possible to guarantee that a subsystem will not experience any race conditions unless an input to the system changes in a very small window relative to the clock, and also to guarantee that if the output of one device is fed into another, the first device's output will not change in the second device's critical window unless the input to the first device changes within an even smaller critical window. Adding another device before that first device will ensure that the input to the first device won't change in that small window unless the input to the new device changes within a really really tiny window. From a practical perspective, unless one is deliberately trying to cause a consensus failure, the probability of a signal changing within that really really tiny window can be reduced to be smaller than the probability of the device suffering some other uncontrollable failure such as a meteor strike.
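One way to put numbers on that shrinking window is the standard textbook metastability estimate for a synchronizer chain; the constants in this sketch are invented for illustration only.

```python
import math

# Textbook metastability MTBF estimate for a synchronizer:
#   MTBF = exp(t_resolve / tau) / (T0 * f_clk * f_data)
# All constants below are invented for illustration only.

tau    = 200e-12   # s: flip-flop resolution time constant
T0     = 100e-12   # s: effective metastability capture window
f_clk  = 100e6     # Hz: sampling clock
f_data = 10e6      # Hz: rate of asynchronous input transitions

def mtbf(t_resolve):
    """Mean time between synchronization failures for a given settling time."""
    return math.exp(t_resolve / tau) / (T0 * f_clk * f_data)

one_flop  = mtbf(2e-9)           # only ~2 ns left to settle before the next stage
two_flops = mtbf(2e-9 + 10e-9)   # a second flop adds a full clock period of settling

print(f"one flop : {one_flop:10.3e} s")
print(f"two flops: {two_flops:10.3e} s  (~{two_flops / 3.15e7:.1e} years)")
# Each extra flip-flop multiplies the MTBF by exp(T_clk / tau): this is the
# "really really tiny window" shrinking toward a negligible probability.
```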

It's certainly possible to design fully-asynchronous systems that run "as fast as possible", but unless a system is extremely simple it will be hard to avoid having a design get tripped up by a race condition. While there are ways of resolving race conditions without requiring clocks, race conditions can often be solved much more quickly and easily by using clocks than would be the case without them. Although asynchronous logic would often be able to resolve race conditions faster than clocked logic, the occasions where it can't do so pose a major problem, especially given the difficulty of having parts of a system reach consensus on whether or not they have reached consensus. A system which can consistently run one million instructions per second will generally be more useful than one which may sometimes run four million instructions per second, but could potentially stall for milliseconds (or longer) at a time because of race conditions.


It's worth noting that the states being decided on can equally be internal ones - such as the result of an arithmetic operation. Delays due to line length can result in one part of the MCU seeing the result - and, without a clock, acting on it - before other parts.
Nick Johnson

@NickJohnson: If the sequence in which operations are performed is never dependent upon things that aren't computed yet, those issues can be resolved without difficulty if each section like an ALU has "valid" inputs and a "valid" output, and things can be arranged so as to happen in deterministic sequence. Where the wheels fall off is when the order in which operations occur should depend upon the timing (e.g. if one has a number of parallel operations which need to use a shared memory bus and two of them issue near-simultaneous requests, arbitration of which one should go first...
supercat

...and which one should wait may be intractable. If one decides beforehand which one is going to go first, such problems can be avoided, but if it turns out that the unit which was designated to go first isn't ready until long after the other one, performance may severely suffer as a result.
supercat

This is why going to space is so hard, the probabilities change unfavourably.
Magic Smoke

6

MCUs are only one very complex example of a synchronous sequential logic circuit. The simplest form is probably the clocked D-flip-flop (D-FF), i.e. a synchronous 1 bit memory element.

There are memory elements that are asynchronous, for example the D-latch, which is (in a sense) the asynchronous equivalent of the D-FF. An MCU is nothing more than a bunch of millions of such basic memory elements (D-FF) glued together with tons of logic gates (I'm oversimplifying).

Now let's get to the point: why do MCUs use D-FFs instead of D-latches as memory elements internally? It's essentially for reliability and ease of design: D-latches react as soon as their inputs change and their outputs are updated as fast as possible. This allows for nasty unwanted interactions between different parts of a logic circuit (unintended feedback loops and races). Designing a complex sequential circuit using asynchronous building blocks is inherently more difficult and error-prone. Synchronous circuits avoid such traps by restricting the operation of the building blocks to the time instants when the clock edges are detected. When the edge arrives, a synchronous logic circuit acquires the data at its inputs, but doesn't update its outputs yet. Only once the inputs are acquired are the outputs updated. This avoids the risk that an output signal is fed back to an input which hasn't been completely acquired and messes things up (said simply).

This strategy of "decoupling" input data acquisition from output updating allows simpler design techniques, which translates into more complex systems for a given design effort.
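As a purely behavioral illustration of that decoupling (not modelling any particular silicon), compare a transparent latch with an edge-triggered flip-flop:

```python
# Behavioral sketch of the two memory elements discussed above.

class DLatch:
    """Transparent latch: output follows D whenever the enable is high."""
    def __init__(self):
        self.q = 0
    def update(self, d, enable):
        if enable:
            self.q = d          # reacts immediately while enable is high
        return self.q

class DFlipFlop:
    """Edge-triggered: output changes only on a rising clock edge."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0
    def update(self, d, clk):
        if clk and not self._prev_clk:   # rising edge detected
            self.q = d
        self._prev_clk = clk
        return self.q

latch, ff = DLatch(), DFlipFlop()
# (D, clk) pairs: D wiggles while the clock/enable is high
stimulus = [(0, 0), (1, 1), (0, 1), (1, 1), (1, 0), (0, 1)]
for d, clk in stimulus:
    print(f"D={d} clk={clk}  latch Q={latch.update(d, clk)}  ff Q={ff.update(d, clk)}")
# The latch tracks every change of D while clk is high; the flip-flop captures
# D only at the instants clk rises (the 2nd and last stimulus steps here).
```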


5

What you're describing is called asynchronous logic. It can work, and when it does it's often faster and uses less power than synchronous (clocked) logic. Unfortunately, asynchronous logic has some problems that prevent it from being widely used. The main one I see is that it takes a lot more transistors to implement, since you need a ton of independent synchronization signals. (Microcontrollers do a lot of work in parallel, as do CPUs.) That's going to drive up cost. The lack of good design tools is a big up-front obstacle.

Microcontrollers will probably always need clocks since their peripherals usually need to measure time. Timers and PWMs work at fixed time intervals, ADC sampling rates affect their bandwidth, and asynchronous communication protocols like CAN and USB need reference clocks for clock recovery. We usually want CPUs to run as fast as possible, but that's not always the case for other digital systems.
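As a small, generic illustration of why peripherals want a time base, the clock and target values below are arbitrary examples, not tied to any specific part:

```python
# Generic illustration (not tied to any specific microcontroller) of how
# peripheral timing is derived from the system clock.

f_sys = 16_000_000          # Hz: system clock, example value

# PWM: count clock ticks to get a desired switching frequency.
f_pwm_target = 20_000       # Hz
pwm_period_ticks = f_sys // f_pwm_target
print(f"PWM period register: {pwm_period_ticks} ticks "
      f"-> {f_sys / pwm_period_ticks:.0f} Hz actual")

# UART: a typical 16x-oversampling baud generator divides the clock down.
baud_target = 115_200
divisor = round(f_sys / (16 * baud_target))
baud_actual = f_sys / (16 * divisor)
error_pct = 100 * (baud_actual - baud_target) / baud_target
print(f"UART divisor: {divisor} -> {baud_actual:.0f} baud ({error_pct:+.2f}% error)")
# Without a stable clock there is nothing to divide, so timers, PWM and
# baud-rate generation have no time base to work from.
```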


3

Actually, you are seeing the MCU as a complete unit, but the truth is that it is itself made of different gates and TTL and RTL logic, often flip-flop arrays, which all need the clock signal individually.

To be more specific, think about simply accessing an address in memory; this simple task may itself involve multiple operations, like making the bus available for the data lines and the address lines.
The best way to put it is that instructions themselves occur as small units of operation that require clock cycles; these combine into machine cycles, which account for various MCU properties like speed (FLOPS in complicated MCUs), pipelining, etc.

Response to OP's comment

To be very precise, let me give you an example: there is a signal called ALE (Address Latch Enable), used for the purpose of multiplexing the lower address bus so that both address and data are transmitted on the same pins; we use an oscillator (the Intel 8051 uses an 11.059 MHz crystal as its clock) to fetch first the address and then the data.

As you may know, the basic parts of an MCU are the CPU, the ALU, internal registers and so on. The CPU (the controlling unit) sends the address to all the address pins (16 in the case of the 8051); this occurs at timing instant T1, and after the address is out, the corresponding matrix of storage capacitors (holding charge as a signal, i.e. the memory mapping) is activated and selected.

After selection, the ALE signal is activated, i.e. the ALE pin is driven high at the next clock, say T2 (usually a high signal, but this varies with the processing unit's design). After this, the lower address bus lines act as data lines, and data is written or read (depending upon the state of the RD/WR pin of the MCU).
You can clearly see that all of these events are sequential in time.
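A rough, non-cycle-accurate sketch of that multiplexed-bus idea (illustrative values only, not real 8051 timing) might look like this:

```python
# Rough behavioral sketch (illustrative, not cycle-accurate 8051 timing) of
# demultiplexing a shared address/data bus with an ALE-clocked latch.

class AddressLatch:
    """External latch: captures the bus on the falling edge of ALE."""
    def __init__(self):
        self.q = 0
        self._prev_ale = 0
    def update(self, bus, ale):
        if self._prev_ale and not ale:   # falling edge of ALE
            self.q = bus
        self._prev_ale = ale
        return self.q

latch = AddressLatch()
address, data = 0x34, 0xA5

# (bus value, ALE) over successive steps of one bus cycle
cycle = [(address, 1),   # CPU drives the low address byte, ALE high
         (address, 0),   # ALE falls: the latch captures the address
         (data,    0),   # the same pins now carry the data byte
         (data,    0)]

for bus, ale in cycle:
    low_addr = latch.update(bus, ale)
    print(f"bus=0x{bus:02X} ALE={ale}  latched address low byte=0x{low_addr:02X}")
# After the ALE falling edge the latch holds 0x34 even though the shared pins
# have moved on to carrying the data value 0xA5 -- the clocked latch is what
# makes the time-multiplexing work.
```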

What would happen if we didn't use a clock? We would then have to use an asynchronous clocking method (ASQC); this would make each gate dependent on the others and may result in hardware failures. It also makes pipelining of instructions impossible, and tasks would take long, dependent and irregular times to complete.
So it is something undesirable.


That kind of makes sense. But why do these various compartments of the MCU need the clock signal to operate? What theoretically would occur if they didn't use a clock?
M-R

1
@Martin, logic gates change state immediately when their input changes. Clocked, sequential logic only evaluates its inputs during a clock event. This is the basic principle that drives digital memory circuits. It gives us the ability to selectively move data from one place to another with absolute control, allowing the creation of general purpose hardware that can be programmed via software to do - well, anything.
Sean Boddy

3
@SeanBoddy: Logic gates do not change state immediately, there is a short lag which is viewable on an oscilloscope. If we didn't use a clock, the differences in these timings between components could cause race-conditions producing the wrong results.
BlueRaja - Danny Pflughoeft

@BlueRaja - well good golly gumdrops, how about that. Maybe I'll go back through 4 years of power electronics notes and 8 years of navy training to find out where I missed that one thing.
Sean Boddy

2

The fundamental problem that a clock solves is that transistors are not really digital devices: they use analogue voltage levels on the inputs to determine the output and take a finite length of time to change state. Unless, as has been mentioned in another answer, you get into quantum devices, there will be a period of time in which the input transitions from one state to another. The time this takes is affected by capacitive loading, which will be different from one device to the next. This means that the different transistors that make up each logic gate will respond at slightly different times. The clock is used to 'latch' the outputs of the component devices once they have all stabilised.
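A tiny numeric illustration of the capacitive-loading point, using an idealized RC model with invented component values:

```python
import math

# Idealized RC model (invented values) of how capacitive loading changes the
# time a gate output takes to cross the receiving gate's switching threshold.

R_drive = 1_000            # ohm: effective output resistance of the driver
loads_fF = [10, 50, 200]   # femtofarads: light, medium, heavy capacitive load

for c_fF in loads_fF:
    c = c_fF * 1e-15
    t_50 = R_drive * c * math.log(2)   # time for an RC charge to reach 50% of Vdd
    print(f"load {c_fF:4d} fF -> threshold crossed after {t_50 * 1e12:6.1f} ps")
# 10 fF: ~6.9 ps, 200 fF: ~138.6 ps -- the same gate is 20x slower into the
# heavier load, which is exactly the kind of spread a clock period must cover.
```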

As an analogy, consider the SPI (Serial Peripheral Interface) communications transport layer. A typical implementation of this will use three lines: Data In, Data Out and Clock. To send a byte over this transport layer the master will set its Data Out line and assert the Clock line to indicate that the Data Out line has a valid value. The slave device will sample its Data In line only when instructed to do so by the Clock signal. If there were no clock signal, how would the slave know when to sample the Data In line? It could sample it before the line was set by the master or during the transition between states. Asynchronous protocols, such as CAN, RS485, RS422, RS232, etc. solve this by using a pre-defined sampling time, fixed bit rate and (overhead) framing bits.
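Here is a generic bit-banged sketch of that idea (SPI mode 0, illustrative only, not a specific driver API): the slave shifts a bit in only when the clock edge tells it the data line is valid.

```python
# Bit-banged sketch (generic, illustrative) of SPI mode 0: the master changes
# MOSI while SCK is low and asserts SCK to say "sample now"; the slave shifts
# a bit in only on the rising edge of SCK.

def master_waveform(byte_out):
    """Yield (mosi, sck) samples for one byte, MSB first."""
    wave = []
    for i in range(7, -1, -1):
        bit = (byte_out >> i) & 1
        wave.append((bit, 0))   # set up the data line while the clock is low
        wave.append((bit, 1))   # raise the clock: the data line is now valid
    wave.append((0, 0))         # return the clock low at the end of the byte
    return wave

def slave_receive(wave):
    """Sample MOSI only on rising edges of SCK."""
    received, prev_sck = 0, 0
    for mosi, sck in wave:
        if sck and not prev_sck:              # rising edge: the clock says "now"
            received = (received << 1) | mosi
        prev_sck = sck
    return received

data = 0xA7
assert slave_receive(master_waveform(data)) == data
print(hex(slave_receive(master_waveform(data))))   # 0xa7
# Without SCK the slave could sample MOSI mid-transition; the clock removes
# that ambiguity by defining exactly when the line is meaningful.
```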

In other words, there is some kind of Common Knowledge required to determine when all the transistors in a set of gates have reached their final state and the instruction is complete. In the (100 blue eyes) puzzle stated in the link above, and explained in some detail in this question on Maths Stack Exchange, the 'oracle' acts as the clock for the people on the island.

Licensed under cc by-sa 3.0 with attribution required.