Book Details
The great advances in large-scale semiconductor integration, and the resulting cost-effective digital processors and data-storage devices, are determining the present development of automation. The application of digital techniques to process automation began around 1960, when the first process computer was installed. From about 1970, process computers with cathode-ray tube displays became standard equipment for larger automation systems, and until about 1980 the number of installed process computers grew by roughly 20 to 30% annually. Even then, hardware costs tended to decrease, whereas the relative cost of user software tended to increase.

Because of the high total cost, the first phase of digital process automation was characterized by the centralization of many functions in a single (though sometimes in several) process computer, and application was largely restricted to medium-sized and large processes. Since a breakdown of the central computer had far-reaching consequences, parallel standby computers or parallel back-up systems had to be provided, which substantially increased cost. The tendency to overload the computer's capacity, together with software problems, caused further difficulties.

In 1971 the first microprocessors were marketed; combined with large-scale integrated semiconductor memories and input/output modules, they can be assembled into cost-effective microcomputers. These microcomputers differ from process computers in having fewer but more highly integrated modules and in the adaptability of their hardware and software to specialized, less comprehensive tasks.