Why do instructions need to be processed at fixed time intervals (that is, using a clock)? Can't they simply be executed in order, each one starting as soon as the previous instruction has finished?
I would find an analogy for why a microcontroller needs a clock particularly useful.
Answers:
An example or two might help here. Take a look at the following hypothetical circuit:
(simulate this circuit – Schematic created using CircuitLab)
Say both A and B are high (1). The output of the AND is therefore 1, and since both inputs to the XOR are 1, its output is 0.
Logic elements don't change state instantaneously; there is a small but significant propagation delay while a change on the inputs is processed. Now say B goes low (0). The XOR sees the new state on its second input immediately, but on its first input it still sees the "old" 1 coming from the AND gate. As a result, the output briefly goes high, until the signal has propagated through the AND gate, both inputs to the XOR are low, and the output goes low again.
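A minimal C sketch of that scenario (my own illustration, not part of the original answer): each gate output lags its inputs by one time step to stand in for the propagation delay, and running it prints the one-step glitch on the XOR output when B drops.

```c
#include <stdio.h>

/* Toy model: A and B feed an AND gate; the AND output and B feed an XOR.
 * Each gate output is computed from the values seen on the previous time
 * step, which mimics one gate delay. */
int main(void) {
    int A = 1, B = 1;
    int and_out = A & B;            /* settled state: 1 */
    int xor_out = and_out ^ B;      /* settled state: 0 */

    for (int t = 0; t < 4; t++) {
        if (t == 1) B = 0;          /* B goes low at t = 1 */

        int new_xor = and_out ^ B;  /* XOR sees the new B immediately,
                                       but the OLD AND output            */
        int new_and = A & B;        /* AND output catches up a step later */

        and_out = new_and;
        xor_out = new_xor;
        printf("t=%d  A=%d B=%d  AND=%d  XOR=%d\n", t, A, B, and_out, xor_out);
    }
    return 0;                       /* XOR briefly prints 1 at t = 1 */
}
```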
The glitch is not a desired part of the circuit's behavior, but such glitches happen whenever there is a difference in propagation speed through different parts of the circuit, whether because of the amount of logic or the length of the wiring.
One really simple way of dealing with this is to put an edge-triggered flip-flop on the output of the combinational logic, like this:
Now any glitch that occurs is hidden from the rest of the circuit by the flip-flop, which updates its state only when the clock goes from 0 to 1. As long as the clock period is long enough for the signals to propagate all the way through the combinational logic chain, the results will be reliably deterministic and glitch-free.
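A rough software analogy of that flip-flop (again just an illustration): the output is copied from the input only on a rising clock edge, so anything the input does between edges never reaches it.

```c
#include <stdio.h>

/* Edge-triggered register sketch: q changes only on a 0 -> 1 transition of
 * clk, so glitches on d between clock edges never reach q. */
typedef struct {
    int q;
    int prev_clk;
} dff_t;

int dff_update(dff_t *ff, int clk, int d) {
    if (clk && !ff->prev_clk)   /* rising clock edge */
        ff->q = d;              /* capture d; otherwise q holds its value */
    ff->prev_clk = clk;
    return ff->q;
}

int main(void) {
    dff_t ff = {0, 0};
    /* d glitches high while the clock is low: q never sees it */
    printf("%d\n", dff_update(&ff, 0, 1));  /* clock low, d glitches -> q = 0 */
    printf("%d\n", dff_update(&ff, 0, 0));  /* glitch is gone        -> q = 0 */
    printf("%d\n", dff_update(&ff, 1, 0));  /* rising edge, d = 0    -> q = 0 */
    return 0;
}
```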
I feel a lot of these answers are not exactly hitting on the core question. The micro-controller has a clock simply because it executes (and is driven by) sequential logic.
In digital circuit theory, sequential logic is a type of logic circuit whose output depends not only on the present value of its input signals but on the sequence of past inputs, the input history. This is in contrast to combinational logic, whose output is a function of only the present input. That is, sequential logic has state (memory) while combinational logic does not. Or, in other words, sequential logic is combinational logic with memory.
As well:
The main advantage of synchronous logic is its simplicity. The logic gates which perform the operations on the data require a finite amount of time to respond to changes to their inputs. This is called propagation delay. The interval between clock pulses must be long enough so that all the logic gates have time to respond to the changes and their outputs "settle" to stable logic values, before the next clock pulse occurs. As long as this condition is met (ignoring certain other details) the circuit is guaranteed to be stable and reliable. This determines the maximum operating speed of a synchronous circuit.
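To put rough numbers on that quote (all three delays below are invented for illustration), the period budget of a synchronous path might be sketched like this:

```c
#include <stdio.h>

/* The clock period must be at least clock-to-Q delay + worst-case
 * combinational delay + setup time.  The values are hypothetical. */
int main(void) {
    double t_clk_to_q = 0.5e-9;   /* flip-flop clock-to-Q delay */
    double t_comb     = 3.0e-9;   /* worst-case combinational path */
    double t_setup    = 0.3e-9;   /* setup time of the capturing flip-flop */

    double t_min = t_clk_to_q + t_comb + t_setup;
    printf("minimum period %.2f ns -> max clock roughly %.0f MHz\n",
           t_min * 1e9, 1.0 / t_min / 1e6);
    return 0;
}
```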
Short answer: managers want a simple, testable PROOF of function before committing millions (or more) of dollars to a design. Current tools just do not give asynchronous designs those answers.
Microcomputers and microcontrollers typically use a clocking scheme to ensure timing control. All process corners have to maintain timing across all voltage, temperature, process, etc., effects on signal propagation speed. No current logic gates switch instantly: each gate switches depending on the voltage it is supplied, the drive it gets, the load it drives, and the size of the devices used to make it (and of course the process node (device size) it is made in, and how fast THAT process is actually performing on THIS pass through the fab). To get "instant" switching, you'd have to use quantum logic, and that assumes quantum devices can switch instantly (I am not sure they can).
Clocked logic makes it practical to PROVE that the timing across the entire processor works across the expected voltage, temperature and process variables. There are many software tools available that help measure this timing, and the overall process is called "timing closure". Clocking can (and, in my experience, does) take somewhere between 1/3 and 1/2 of the power used in a microprocessor.
So, why not asynchronous design? There are few, if any, timing-closure tools to support this design style. There are few, if any, automated place-and-route tools that can deal with, and manage, a large asynchronous design. If nothing else, managers do NOT approve anything that does not have a straightforward, computer-generated PROOF of functionality.
The comment that asynchronous designs need "lots" of synchronization signals, requiring "lots" more transistors, ignores the cost of routing and synchronizing a global clock, and the cost of all the flip-flops the clocking scheme requires. Asynchronous designs are (or should be) smaller and faster than clocked ones. (One simply takes the ONE slowest signal path and uses it to feed a "ready" signal back to the preceding logic.)
Asynchronous logic is faster because it never has to wait for a clock that has had to be stretched to accommodate some other block elsewhere; this is especially true for register-to-logic-to-register functions. Asynchronous logic does not have the multiple "setup" and "hold" issues that arise at every flip-flop boundary in logic that has been pipelined, with flip-flops scattered through it to split its propagation delay into clock intervals.
Can it be done? Sure, even in a billion-transistor design. Is it harder? Yes. But proving that it works across a whole chip (or even a whole system) is just more complex. Getting the timing right on paper is reasonably straightforward for a single block or subsystem; controlling that timing in an automated place-and-route system is much harder, because the tools are not set up to handle the potentially very large set of timing constraints.
Microcontrollers also add a potentially large set of other blocks, which connect to (relatively) slow external signals, on top of all the complexity of the microprocessor core. That complicates the timing a little, but not by much.
Implementing a "first to arrive" / "lockout" signalling mechanism is a circuit-design problem, and there are known ways of handling it. Race conditions are a sign of 1) poor design practice, or 2) external signals coming into the processor. Clocking actually introduces its own signal-to-clock race conditions, in the form of "setup" and "hold" violations.
I personally do not understand how an asynchronous design falls into stalls or other race conditions. That may be my limitation, but unless it happens with data coming into the processor, it should never be possible in a properly designed logic system; and even then, since it can happen as signals come in, you design to handle it.
(I hope this helps.)
All that said, if you have the money ...
Microcontrollers need to use a clock because they need to be able to respond to events that may occur at any time, including nearly simultaneously with other external events or with events generated by the controllers themselves, and will often have multiple circuits that need to know whether one event X precedes another event Y. It may not matter whether all such circuits decide that X preceded Y, or all such circuits decide that X did not precede Y, but it will often be critical that if any of the circuits decides that X preceded Y, then all must do so. Unfortunately, it's difficult to ensure that circuits will, within a bounded time, reach a guaranteed consensus as to whether X precedes Y, or even reach a consensus on whether or not they have reached a consensus. Synchronous logic can help enormously with that.
Adding a clock to a circuit makes it possible to guarantee that a subsystem will not experience any race conditions unless an input to the system changes in a very small window relative to the clock, and also guarantee that if the output of one device is fed into another, the first device's output will not change in the second device's critical window unless the input to the first device changes within an even smaller critical window. Adding another device before that first device will ensure that the input to the first device won't change in that small window unless the input to the new device changes within a really, really tiny window. From a practical perspective, unless one is deliberately trying to cause a consensus failure, the probability of a signal changing within that really, really tiny window can be reduced to be smaller than the probability of the device suffering some other uncontrollable failure such as a meteor strike.
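As a rough illustration of how quickly that probability shrinks, here is a sketch using the standard synchronizer MTBF estimate, MTBF = exp(t_r / tau) / (T0 * f_clk * f_data); the device parameters below are made up for the example.

```c
#include <math.h>
#include <stdio.h>

/* All parameter values are hypothetical, chosen only to show the scale. */
int main(void) {
    double tau    = 50e-12;  /* regeneration time constant of the flip-flop */
    double T0     = 1e-9;    /* metastability aperture                      */
    double f_clk  = 100e6;   /* sampling clock                              */
    double f_data = 10e6;    /* rate of asynchronous input transitions      */
    double t_r    = 5e-9;    /* resolution time allowed before the next stage */

    double mtbf = exp(t_r / tau) / (T0 * f_clk * f_data);
    printf("estimated MTBF: %.3g seconds\n", mtbf);  /* astronomically large */
    return 0;
}
```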
It's certainly possible to design fully-asynchronous systems that run "as fast as possible", but unless a system is extremely simple it will be hard to avoid having a design get tripped up by a race condition. While there are ways of resolving race conditions without requiring clocks, race conditions can often be solved much more quickly and easily by using clocks than would be the case without them. Although asynchronous logic would often be able to resolve race conditions faster than clocked logic, the occasions where it can't do so pose a major problem, especially given the difficulty of having parts of a system reach consensus on whether or not they have reached consensus. A system which can consistently run one million instructions per second will generally be more useful than one which may sometimes run four million instructions per second, but could potentially stall for milliseconds (or longer) at a time because of race conditions.
MCUs are only one very complex example of a synchronous sequential logic circuit. The simplest form is probably the clocked D-flip-flop (D-FF), i.e. a synchronous 1 bit memory element.
There are memory elements that are asynchronous, for example the D-latch, which is (in a sense) the asynchronous equivalent of the D-FF. An MCU is nothing more than a bunch of millions of such basic memory elements (D-FF) glued together with tons of logic gates (I'm oversimplifying).
Now let's get to the point: why do MCUs use D-FFs instead of D-latches as memory elements internally? It's essentially for reliability and ease of design: D-latches react as soon as their inputs change, and their outputs are updated as fast as possible. This allows for nasty unwanted interactions between different parts of a logic circuit (unintended feedback loops and races). Designing a complex sequential circuit using asynchronous building blocks is inherently more difficult and error-prone. Synchronous circuits avoid such traps by restricting the operation of the building blocks to the time instants when the clock edges are detected. When the edge arrives, a synchronous logic circuit acquires the data at its inputs but doesn't update its outputs yet. Only once the inputs have been acquired are the outputs updated. This avoids the risk that an output signal is fed back to an input that hasn't been completely acquired and messes things up (said simply).
This strategy of "decoupling" input data acquisition from output updating allows simpler design techniques, which translates into more complex systems for a given design effort.
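A minimal software sketch of that difference (my own illustration, not part of the original answer): a D-latch is level-sensitive and transparent while its enable is high, so input changes pass straight through, whereas the edge-triggered flip-flop modelled earlier on this page only samples on the clock edge.

```c
#include <stdio.h>

/* Level-sensitive D-latch sketch: while enable is high the output follows d
 * immediately (transparent), so glitches on d pass straight through; when
 * enable is low the output holds its last value.  Compare with the
 * edge-triggered dff_t sketch further up the page. */
typedef struct { int q; } dlatch_t;

int dlatch_update(dlatch_t *latch, int enable, int d) {
    if (enable)
        latch->q = d;     /* transparent: output tracks the input */
    return latch->q;      /* opaque: output holds its last value  */
}

int main(void) {
    dlatch_t lat = {0};
    printf("%d\n", dlatch_update(&lat, 1, 1));  /* enable high: q follows d -> 1 */
    printf("%d\n", dlatch_update(&lat, 1, 0));  /* still transparent        -> 0 */
    printf("%d\n", dlatch_update(&lat, 0, 1));  /* enable low: q holds      -> 0 */
    return 0;
}
```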
What you're describing is called asynchronous logic. It can work, and when it does it's often faster and uses less power than synchronous (clocked) logic. Unfortunately, asynchronous logic has some problems that prevent it from being widely used. The main one I see is that it takes a lot more transistors to implement, since you need a ton of independent synchronization signals. (Microcontrollers do a lot of work in parallel, as do CPUs.) That's going to drive up cost. The lack of good design tools is a big up-front obstacle.
Microcontrollers will probably always need clocks since their peripherals usually need to measure time. Timers and PWMs work at fixed time intervals, ADC sampling rates affect their bandwidth, and asynchronous communication protocols like CAN and USB need reference clocks for clock recovery. We usually want CPUs to run as fast as possible, but that's not always the case for other digital systems.
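For example, a fixed-interval timer tick boils down to simple arithmetic against that reference clock; the numbers below are made-up example values, not any particular device's registers.

```c
#include <stdio.h>

/* Deriving a timer reload value from the peripheral clock, as one would for
 * a fixed-interval timer tick or a PWM period. */
int main(void) {
    unsigned long f_clk  = 16000000UL;          /* 16 MHz timer clock (example) */
    unsigned long f_tick = 1000UL;              /* desired 1 kHz tick           */
    unsigned long reload = f_clk / f_tick - 1;  /* counter counts 0..reload     */
    printf("timer reload value: %lu\n", reload);/* prints 15999                 */
    return 0;
}
```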
Actually, you are seeing the MCU as a complete unit, but the truth is that it is itself made of different gates and TTL/RTL logic, often arrays of flip-flops, which all need the clock signal individually.
To be more specific, think about simply accessing an address in memory: this simple task may itself involve multiple operations, like making the bus available for the data lines and the address lines.
The best way to put it is that the instructions themselves are made up of small units of operation that require clock cycles; these combine into machine cycles, which account for various MCU properties like speed (FLOPS in more complicated MCUs), pipelining, etc.
Response to OP's comment
To be more precise, let me give you an example: there is a signal called ALE (Address Latch Enable), usually used for multiplexing the lower address bus so that both address and data can be transmitted on the same pins. We use an oscillator (the Intel 8051 commonly uses an 11.0592 MHz crystal as its clock) to fetch the address and then the data.
As you may know, the basic parts of an MCU are the CPU, ALU, internal registers and so on. The CPU (the controlling unit) sends the address to the address pins (16 of them in the case of the 8051); this occurs at timing instant T1, and once the address is valid, the corresponding matrix of storage cells (which hold charge as the signal; this is the memory mapping) is activated and selected.
After selection, the ALE signal is activated, i.e. the ALE pin is driven high at the next clock, say T2 (usually an active-high signal, but this varies with the design of the processing unit). After this, the lower address bus lines act as data lines, and data is written or read (depending on the state of the RD/WR pins of the MCU).
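A rough C sketch of that bus cycle, written as if it were bit-banged through hypothetical GPIO helpers (on a real 8051 the hardware sequences this itself, and the high address byte goes out on a separate port):

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical helpers standing in for the multiplexed AD0..AD7 port and
 * the control pins; here they just print the bus activity. */
static void bus_write(uint8_t v) { printf("AD0..AD7 <- 0x%02X\n", v); }
static uint8_t bus_read(void)    { return 0x42; /* pretend data byte */ }
static void ale(int level)       { printf("ALE <- %d\n", level); }
static void rd(int level)        { printf("/RD <- %d\n", level); }

/* One external read cycle as described above: address first, held by an
 * external latch on the falling edge of ALE, then the same pins carry data. */
uint8_t external_read(uint16_t addr) {
    bus_write(addr & 0xFF);  /* low address byte on the shared pins      */
    ale(1);                  /* ALE high: external latch is transparent  */
    ale(0);                  /* ALE low: latch holds the address         */
    rd(0);                   /* /RD active: memory drives AD0..AD7       */
    uint8_t data = bus_read();
    rd(1);                   /* end of the read cycle                    */
    return data;
}

int main(void) {
    printf("read 0x%02X\n", external_read(0x1234));
    return 0;
}
```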
You can clearly see that all of these events are sequential in time.
What would happen if we didn't use a clock?
Then we would have to use an asynchronous clocking method (ASQC). This would make each gate dependent on the others and may result in hardware failures. It also makes pipelining of instructions impossible, and tasks take long, dependent and irregular times to complete.
So it is something undesirable.
The fundamental problem that a clock solves is that transistors are not really digital devices: they use analogue voltage levels on the inputs to determine the output and take a finite length of time to change state. Unless, as has been mentioned in another answer, you get into quantum devices, there will be a period of time in which the input transitions from one state to another. The time this takes is affected by capacitive loading, which will be different from one device to the next. This means that the different transistors that make up each logic gate will respond at slightly different times. The clock is used to 'latch' the outputs of the component devices once they have all stabilised.
As an analogy, consider the SPI (Serial Peripheral Interface) communications transport layer. A typical implementation of this will use three lines: Data In, Data Out and Clock. To send a byte over this transport layer the master will set its Data Out line and assert the Clock line to indicate that the Data Out line has a valid value. The slave device will sample its Data In line only when instructed to do so by the Clock signal. If there were no clock signal, how would the slave know when to sample the Data In line? It could sample it before the line was set by the master or during the transition between states. Asynchronous protocols, such as CAN, RS485, RS422, RS232, etc. solve this by using a pre-defined sampling time, fixed bit rate and (overhead) framing bits.
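A minimal bit-banged sketch of the master side (hypothetical pin helpers, roughly SPI mode 0) makes the role of the clock line explicit: the data line is set up first, and the slave only samples it on the clock edge.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical pin helpers: on real hardware these would write GPIO
 * registers; here they just print what they would do. */
static int miso_level = 0;                 /* pretend the slave answers 0 */
static void set_clock(int level) { printf("SCK  <- %d\n", level); }
static void set_mosi(int level)  { printf("MOSI <- %d\n", level); }
static int  get_miso(void)       { return miso_level; }

/* Bit-banged SPI master transfer: the slave samples MOSI on the rising
 * clock edge, so it never reads the line mid-transition. */
uint8_t spi_transfer(uint8_t out) {
    uint8_t in = 0;
    for (int bit = 7; bit >= 0; bit--) {
        set_mosi((out >> bit) & 1);   /* set up the data bit first       */
        set_clock(1);                 /* rising edge: both sides sample  */
        in = (uint8_t)((in << 1) | get_miso());
        set_clock(0);                 /* falling edge: prepare next bit  */
    }
    return in;
}

int main(void) {
    spi_transfer(0xA5);               /* send an example byte */
    return 0;
}
```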
In other words, there is some kind of Common Knowledge required to determine when all the transistors in a set of gates have reached their final state and the instruction is complete. In the (100 blue eyes) puzzle stated in the link above, and explained in some detail in this question on Maths Stack Exchange, the 'oracle' acts as the clock for the people on the island.