Automatic Controller

automatic controller

[¦ȯd·ə¦mad·ik kən¦trōl·ər] (control systems) An instrument that continuously measures the value of a variable quantity or condition and then automatically acts on the controlled equipment to correct any deviation from a desired preset value. Also known as automatic regulator; controller.

Automatic Controller


a device or set of devices by means of which automatic control is accomplished. The automatic controller uses a sensing element, or sensor, to measure, depending on the control principle, either the controlled variable or the disturbing action. With the aid of a converting or computing device, the automatic controller applies an action to the control element of the controlled system in accordance with the control law, or control algorithm. The automatic controller may also include amplifiers and adjustable correcting devices. The correcting devices ensure the stability and required quality of the control process. The amplifiers increase the power of the controller’s output signal—the manipulated variable—to a value sufficient to operate the actuating mechanism, which controls the state of the control element. An actuating mechanism that effects the mechanical motion of the control element is usually called a servomotor.

Figure 1. Block diagrams of the structure of automatic control systems: (a) deviation-stimulated control, (b) disturbance-stimulated control, (c) combined deviation- and disturbance-stimulated control; (x0) set point, (ε) dynamic error, (u) control action, (f) disturbing action (load), (x) controlled variable; the circles divided into sectors are comparators
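The feedback structure described above (sensor, comparison device, amplifier, actuating mechanism) can be sketched in a few lines of code. The first-order plant model dx/dt = -x + u, the gain values, and all function names below are illustrative assumptions, not part of the encyclopedia entry.

```python
# A minimal sketch of the controller chain: a sensing element reads the
# controlled variable, a comparison device forms the error against the
# set point, and an amplifier raises the weak signal to a manipulated
# variable u strong enough to drive the actuating mechanism, which in
# turn moves the control element. The plant model and gains are invented
# purely for this demonstration.

def step_controller(x, set_point, amplifier_gain=2.0):
    """One pass through the controller chain; returns manipulated variable u."""
    measured = x                      # sensing element (assumed ideal sensor)
    error = set_point - measured      # comparison device (null detector)
    u = amplifier_gain * error        # amplifier increases signal power
    return u                          # u drives the servomotor / control element

def simulate(set_point=1.0, steps=200, dt=0.05):
    """Close the loop around an assumed first-order plant dx/dt = -x + u."""
    x = 0.0                           # controlled variable
    for _ in range(steps):
        u = step_controller(x, set_point)
        x += (-x + u) * dt            # controlled system responds to u
    return x

print(round(simulate(), 3))           # settles near 0.667, not at 1.0
```

Note that with this purely proportional law the controlled variable settles short of the set point; this residual deviation is the defining trait of the static controllers discussed below.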

The usual feedback controller has a comparison device (null detector), which subtracts the current value x of the controlled variable from the desired value x0, as indicated by the set-point mechanism. A distinction is made between static controllers—for example, the proportional controller—and floating controllers. Because of the time lags associated with the elements of the controller, the manipulated variable u is described by a differential equation of the form u = f(ε, ε′, ε″, …), where ε = x0(t) – x(t). If f is continuous, the controller is called a continuous-action controller. If signal quantization occurs in the controller, we speak of a discrete-action controller. The various types of discrete-action controllers include pulse controllers (with time quantization), relay controllers (with level quantization), and digital controllers (with time and level quantization).

Controllers in which the output signal of the sensing element acts directly on the control element are called direct-action controllers; controllers having power amplifiers that control the intake of energy from external sources are known as indirect-action controllers. Extremal controllers are a special type of controller. Controllers are sometimes classified according to the type of controlled variable, which may be, for example, voltage, frequency, speed, temperature, pressure, or concentration. The common name of a controller often draws attention to a characteristic feature of the controller—for example, the principle of operation or the material of the sensing element (electronic, carbon-pile), the type of external energy source (hydraulic, pneumatic), or design features (vibration-type, chopper bar). Sometimes a double name is given to a controller to describe both the physical nature of the controlled variable and the energy of the actuating mechanism, for example, electromechanical or electrohydraulic.
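The static-versus-floating distinction can be made concrete with a small simulation. The plant model, gains, and time step below are illustrative assumptions: a proportional (static) law leaves a residual deviation, while adding integral action, characteristic of a floating controller, drives the dynamic error to zero.

```python
# A sketch contrasting a static (proportional) control law with a
# floating one obtained by adding integral action, so that
# u = f(error, accumulated error). The first-order plant dx/dt = -x + u
# and all gain values are invented for this demonstration.

def run(kp, ki, set_point=1.0, steps=4000, dt=0.01):
    """Simulate the closed loop and return the final controlled variable x."""
    x, integral = 0.0, 0.0
    for _ in range(steps):
        e = set_point - x             # dynamic error from the comparator
        integral += e * dt            # accumulated (integral of) error
        u = kp * e + ki * integral    # control law
        x += (-x + u) * dt            # controlled system responds to u
    return x

print(round(run(kp=2.0, ki=0.0), 3))  # static law: settles near 0.667
print(round(run(kp=2.0, ki=1.0), 3))  # floating law: settles near 1.0
```

The integral term keeps changing the manipulated variable as long as any error persists, which is exactly why floating controllers eliminate the steady-state deviation that static controllers leave behind.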

The enormous variety of controllers being produced by industry has required the standardization of control mechanisms and the use of the principle of construction from interchangeable parts (see INTEGRATED UNIFIED SYSTEM).

REFERENCES

Ivashchenko, N. N. Avtomaticheskoe regulirovanie, 3rd ed. Moscow, 1973.
Ustroistva i elementy sistem avtomaticheskogo regulirovaniia i upravleniia, book 1. Edited by V. V. Solodovnikov. Moscow, 1973.
Oppelt, W. Kleines Handbuch technischer Regelvorgänge, 5th ed. Weinheim, 1972.

A. A. VORONOV