Data transmission networks in the automated control system. Communication protocols in the APCS. Purpose and conditions for the use of the APCS "VP"

Modern methods of designing the activities of users of automated control systems (ACS) have developed within the framework of the system-engineering concept of design, in which consideration of the human factor is limited to coordinating the "inputs" and "outputs" of the human and the machine. At the same time, an analysis of user dissatisfaction with ACS shows that it is often explained by the lack of a single, integrated approach to the design of interaction systems: a comprehensive, interconnected, proportional consideration of all the factors, ways and methods involved in the complex, multifactorial and multivariant problem of designing an interaction interface. This covers functional, psychological, social and even aesthetic factors.

At present it can be considered proven that the main task of user interface design is not to rationally "fit" a person into the control loop but, proceeding from the tasks of managing the object, to develop a system of interaction between two equal partners (the human operator and the hardware-software complex of the ACS) that rationally manage the control object. The human operator is the closing link of the control system, i.e. the subject of management, while the HSC (hardware-software complex) of the ACS is the tool through which his managerial (operational) activity is implemented, i.e. the control object. According to the definition of V.F. Venda, an ACS is a hybrid intellect in which the operational (management) staff and the HSC of the ACS are equal partners in solving complex management problems. The interface of human interaction with the technical means of the ACS can be depicted structurally (see Fig. 1).

Fig. 1. Information-logical scheme of the interaction interface

The rational organization of the work of ACS operators is one of the most important factors determining the effective functioning of the system as a whole. In the overwhelming majority of cases managerial work is indirect activity: under the conditions of an ACS the operator manages without "seeing" the real object. Between the real control object and the human operator stands an information model of the object (the means of displaying information). Hence the problem arises of designing not only the means of displaying information but also the means of interaction between the human operator and the technical means of the ACS, i.e. a system design problem whose result we call the user interface.

The interface consists of the HSC and the communication protocols. The hardware-software complex provides the following functions:

    conversion of the data circulating in the ACS into information models displayed on monitors (SDI, the means of displaying information);

    regeneration of the information models (IM);

    support of interactive interaction between the human and the technical means of the ACS;

    transformation of the inputs coming from the HO (human operator) into data used by the control system;

    physical implementation of the interaction protocols (coordination of data formats, error control, etc.).

The purpose of the protocols is to provide a mechanism for the reliable and correct delivery of messages between the human operator and the SDI, and hence between the HO and the control system. A protocol is a rule defining interaction: a set of procedures for exchanging information between parallel processes in real time. These processes (the functioning of the technical means of the ACS and the operational activity of the subject of management) are characterized, first, by the absence of fixed time relationships between the occurrence of events and, second, by the absence of interdependence between events and actions when they occur.

The protocol functions are concerned with the exchange of messages between these processes. The format and content of these messages form the logical characteristics of the protocol. The rules for the execution of procedures determine the actions that are performed by the processes that jointly participate in the implementation of the protocol. The set of these rules is a procedural characteristic of the protocol. Using these concepts, we can now formally define a protocol as a set of logical and procedural characteristics of a communication mechanism between processes. The logical definition makes up the syntax, and the procedural definition makes up the semantics of the protocol.
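To make the distinction concrete, the logical characteristic (syntax) of such a protocol can be pictured as the format of a message, and the procedural characteristic (semantics) as the rule executed when the message arrives. The following console sketch in Delphi (Object Pascal) is purely illustrative; the message fields and the reactions printed here are hypothetical and not part of any particular ACS.

program ProtocolSketch;
{$APPTYPE CONSOLE}

uses
  SysUtils;

type
  // Syntax: the format and content of a message (logical characteristic).
  TMsgKind = (mkStateUpdate, mkOperatorCommand, mkAck, mkError);

  TInterfaceMessage = record
    Kind    : TMsgKind;  // what the message carries
    Address : Word;      // which IM element or control object it refers to
    Value   : Double;    // payload: a parameter value or a command code
  end;

// Semantics: the rule executed by the receiving process (procedural characteristic).
procedure HandleMessage(const Msg: TInterfaceMessage);
begin
  case Msg.Kind of
    mkStateUpdate:     Writeln(Format('IM element %d updated to %g', [Msg.Address, Msg.Value]));
    mkOperatorCommand: Writeln(Format('command %g passed to the control system for object %d', [Msg.Value, Msg.Address]));
    mkAck:             Writeln(Format('delivery to node %d confirmed', [Msg.Address]));
    mkError:           Writeln(Format('transmission error reported by node %d', [Msg.Address]));
  end;
end;

var
  M: TInterfaceMessage;
begin
  M.Kind := mkStateUpdate;
  M.Address := 101;
  M.Value := 36.6;
  HandleMessage(M);
end.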

Image generation with the help of the HSC makes it possible to obtain not only two-dimensional images projected onto a plane but also to realize three-dimensional, pictorial graphics using second-order planes and surfaces, with rendering of the surface texture of the image.

When creating complex ACS, software development is of great importance, because it is software that creates the intelligence of the computer that solves complex scientific problems and controls the most complex technological processes. At present the role of the human factor, and consequently of the ergonomic support of the system, is growing significantly. The main task of ergonomic support is to optimize the interaction between man and machine, not only during operation but also during the manufacture and disposal of the technical components. Thus, systematizing the approach to user interface design, we can state several basic functional tasks and construction principles that the system must satisfy.

The principle of minimum working effort for the software developer and the user, which has two aspects:

    minimization of resource costs on the part of the software developer, achieved by creating a definite methodology and technology of development similar to conventional production processes;

    minimization of resource costs on the part of the user, i.e. the user should only have to do work that is necessary and cannot be done by the system, and work already done should not have to be repeated.

The task of maximum mutual understanding between the user and the HSC as represented by the software developer: the human operator should not, for example, have to search for information, and the information output to the display device should not require recoding or additional interpretation by the user.

The user should have to memorize as little information as possible, since keeping information in memory reduces the human operator's capacity to make operational decisions.

The principle of maximum concentration of the user on the problem being solved, with localization of error messages.

The principle of taking into account the professional skills of the human operator. This means that when the system is developed, a "human component" is designed on the basis of initial data about the possible contingent of candidates specified in the terms of reference, taking into account the requirements and features of the whole system and its subsystems. Forming a conceptual model of the interaction between the human and the technical means of the ACS means understanding and mastering the algorithms of functioning of the "human - technical means" subsystem and acquiring the professional skills of interacting with the computer.

The key to creating an effective interface is to give the operator a simple conceptual model of the interface as quickly as possible. General User Access achieves this through consistency. The concept of consistency is that, while working with the computer, the user forms a system of expectations: the same actions produce the same reactions, which constantly reinforces the user's model of the interface. Consistency in the dialogue between the computer and the human operator reduces the time the user needs both to learn the interface and to use it to get the job done.

Consistency is the property of an interface to reinforce the user's expectations. Another property of the interface is its concreteness and visibility. This is achieved through the layout of the panel, the use of color and other expressive techniques; ideas and concepts are thereby given physical expression on the screen with which the user interacts directly.

In practice, high-level user interface design is preceded by an initial design stage that identifies the required functionality of the application being created and the characteristics of its potential users. This information can be obtained by analyzing the terms of reference for the automated control system (ACS) and the operation manual (OM) for the control object, as well as from the users themselves: for this purpose a survey is carried out among potential operators and among operators currently working on the non-automated control object.

After the users' goals and tasks have been defined, the next design stage begins: the creation of user scenarios. A scenario is a description of the actions performed by the user while solving a specific task on the way to his goal. Obviously, a given goal can be reached by solving a number of tasks, and the user can solve each of them in several ways, so several scenarios must be formed. The more of them there are, the lower the likelihood that some key objects and operations will be missed.

At this point the developer has the information needed to formalize the functionality of the application, and once the scenarios have been formed the list of individual functions becomes known. In the application a function is represented by a functional block with its corresponding screen form (or forms); several functions may be combined into one functional block. Thus, at this stage the required number of screen forms is established. It is also important to define the navigation relationships between functional blocks; in practice the most appropriate number of links for one block is three. Sometimes, when the sequence of execution of functions is rigidly defined, a process link can be established between the corresponding functional blocks, in which case their screen forms are called sequentially one from another. Such cases do not always occur, so navigation links are otherwise formed either from the logic of processing the data with which the application works or from user views (card sorting). The navigation links between the individual functional blocks are shown on the navigation system diagram, and the navigation capabilities of the application are communicated to the user through various navigation elements.

The main navigation element of the application is the main menu. The role of the main menu is also great because it provides dialog interaction in the "user-application" system. In addition, the menu indirectly performs the function of teaching the user how to work with the application.

Menu formation begins with an analysis of the application's functions. Within each function, separate elements are distinguished: the operations performed by users and the objects on which these operations are performed. It thus becomes known which functional blocks should allow the user to perform which operations on which objects. It is convenient to select operations and objects on the basis of the user scenarios and the application's functionality. The selected items are grouped into the general sections of the main menu according to their logical connections. In this way the main menu can have cascading menus that drop down when a section is selected; a cascading menu shows the list of subsections belonging to a primary section.

One of the requirements for menus is standardization, whose purpose is to form a stable user model of working with the application. Standardization requirements concern the placement of section headings, the content of sections frequently used in different applications, the form of headings, the organization of cascading menus, and so on. The most general recommendations are as follows:

    groups of functionally related sections are separated by separators (a bar or blank space);

    section titles should not be long phrases (preferably no more than two words);

    section titles begin with a capital letter;

    the names of menu sections that call dialog boxes end with an ellipsis;

    the names of menu sections that own cascading menus end with an arrow;

    shortcut (mnemonic) keys are used to access individual menu sections; the corresponding letters are underlined;

    "hot keys" are allowed, and the corresponding key combinations are shown in the captions of the menu sections;

    icons may be included in the menu;

    a changed (dimmed) color shows that some menu sections are unavailable at the current point of work with the application;

    inaccessible sections may also be made invisible.

The unavailability of some menu sections is explained as follows. The main menu is static and is present on the screen during the entire session of work with the application, so when the user works with different screen forms (interacts with different functional blocks) not all menu sections make sense; such sections are made inaccessible. Depending on the context of the tasks being solved (and sometimes on the user himself), the main menu of the application therefore looks different, and it is customary to speak of these different external representations as different states of the menu. Unlike the navigation system diagram, which is compiled earlier and is needed mainly by the developer, the menu is something the user interacts with directly. The menu determines the number of windows and their types, and the whole interface is accompanied by warning windows, hint windows and wizard windows that set the sequence of user actions when certain operations must be performed.
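Several of the conventions above map directly onto Delphi menu components. The sketch below is only an illustration and assumes a hypothetical VCL form class TMainForm with a field MainMenu1: TMainMenu; it shows separators, mnemonic underlining, the ellipsis for dialog-calling items, a hot key, a cascading submenu, and the Enabled/Visible properties used for menu states.

// Requires Menus in the unit's uses clause (TMainMenu, TMenuItem, NewItem, TextToShortCut).
procedure TMainForm.FormCreate(Sender: TObject);
var
  mFile, miOpen, miSep, miExport, miExit: TMenuItem;
begin
  MainMenu1 := TMainMenu.Create(Self);
  Menu := MainMenu1;

  mFile := TMenuItem.Create(MainMenu1);
  mFile.Caption := '&File';                     // short title, capitalised, mnemonic underlined
  MainMenu1.Items.Add(mFile);

  miOpen := TMenuItem.Create(mFile);
  miOpen.Caption := '&Open...';                 // ellipsis: the item calls a dialog box
  miOpen.ShortCut := TextToShortCut('Ctrl+O');  // "hot key" shown in the item caption
  mFile.Add(miOpen);

  miSep := TMenuItem.Create(mFile);
  miSep.Caption := '-';                         // separator between functional groups
  mFile.Add(miSep);

  miExport := TMenuItem.Create(mFile);
  miExport.Caption := '&Export';                // owns a cascading (drop-down) submenu
  mFile.Add(miExport);
  miExport.Add(NewItem('As &text', 0, False, True, nil, 0, 'miExportText'));

  miExit := TMenuItem.Create(mFile);
  miExit.Caption := 'E&xit';
  mFile.Add(miExit);

  miExport.Enabled := False;                    // a section unavailable in the current menu state
  // miExport.Visible := False;                 // or make the inaccessible section invisible
end;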

INTRODUCTION

Modern methods of designing the activities of users of automated control systems have developed within the framework of the system-engineering concept of design, in which consideration of the human factor is limited to coordinating the "inputs" and "outputs" of the human and the machine. At the same time, an analysis of user dissatisfaction with ACS shows that it is often due to the lack of a single, integrated approach to the design of interaction systems.

The use of a systematic approach allows one to take into account many factors of a very different nature, to single out those that have the greatest impact in terms of existing system-wide goals and criteria, and to find ways and methods of effectively influencing them.
The systematic approach is based on the application of a number of basic concepts and provisions, among which one can single out the concepts of a system, the subordination of goals and criteria of subsystems to system-wide goals and criteria, etc. The system approach allows us to consider the analysis and synthesis of objects that are different in nature and complexity from a single point of view, while identifying the most important characteristic features of the functioning of the system and taking into account the most significant factors for the entire system. The value of the system approach is especially great in the design and operation of systems such as automated control systems (ACS), which are essentially human-machine systems, where a person plays the role of a control subject.

A systematic approach to design is a comprehensive, interrelated, proportional consideration of all factors, ways and methods for solving a complex multifactorial and multivariant task of designing an interaction interface. Unlike classical engineering design, when using a systematic approach, all factors of the designed system are taken into account - functional, psychological, social, and even aesthetic.

Automation of control inevitably entails the implementation of a systematic approach, since it implies the existence of a self-regulating system with inputs, outputs and a control mechanism. The very concept of an interaction system indicates the need to consider the environment in which it must function. Thus, the interaction system should be considered as part of a larger system - a real-time automated control system, while the latter is a system of a controlled environment.

At present it can be considered proven that the main task of user interface design is not to rationally "fit" a person into the control loop but, proceeding from the tasks of managing the object, to develop a system of interaction between two equal partners (the human operator and the hardware-software complex of the ACS) that rationally manage the control object.
SUBJECT AREA

So, the human operator is clearly the closing link of the control system, i.e. the subject of management, while the HSC (hardware-software complex) of the ACS is the tool through which his managerial (operational) activity is implemented, i.e. the control object. According to the definition of V.F. Venda, an ACS is a hybrid intellect in which the operational (management) staff and the HSC of the ACS are equal partners in solving complex management problems.

The rational organization of the work of operators at their automated workstations (AWS) is one of the most important factors determining the effective functioning of the system as a whole. In the overwhelming majority of cases managerial work is indirect activity: under the conditions of an ACS the operator manages without "seeing" the real object. Between the real control object and the human operator stands an information model of the object (the means of displaying information). Hence the problem arises of designing not only the means of displaying information but also the means of interaction between the human operator and the technical means of the ACS, i.e. a system design problem whose result we call the user interface.

The interface of human interaction with the technical means of the ACS can be depicted structurally (see Fig. 1). It consists of the HSC and the communication protocols. The hardware-software complex provides the following functions:

1. conversion of the data circulating in the ACS into information models displayed on monitors (SDI, the means of displaying information);

2. regeneration of the information models (IM);

3. support of interactive interaction between the human and the technical means of the ACS;

4. transformation of the inputs coming from the HO (human operator) into data used by the control system;

5. physical implementation of the interaction protocols (coordination of data formats, error control, etc.).

The purpose of the protocols is to provide a mechanism for the reliable and correct delivery of messages between the human operator and the SDI, and hence between the HO and the control system. A protocol is a rule defining interaction: a set of procedures for exchanging information between parallel processes in real time. These processes (the functioning of the technical means of the ACS and the operational activity of the subject of management) are characterized, first, by the absence of fixed time relationships between the occurrence of events and, second, by the absence of interdependence between events and actions when they occur.

The protocol functions are concerned with the exchange of messages between these processes. The format and content of these messages form the logical characteristics of the protocol. The rules for the execution of procedures determine the actions that are performed by the processes that jointly participate in the implementation of the protocol. The set of these rules is a procedural characteristic of the protocol. Using these concepts, we can now formally define a protocol as a set of logical and procedural characteristics of a communication mechanism between processes. The logical definition makes up the syntax, and the procedural definition makes up the semantics of the protocol.

Image generation with the help of the HSC makes it possible to obtain not only two-dimensional images projected onto a plane but also to realize three-dimensional, pictorial graphics using second-order planes and surfaces, with rendering of the surface texture of the image.

Depending on the type of image being reproduced, requirements can be distinguished for the IM alphabet, for the method of character formation and for the way image elements are used. The alphabet characterizes the type of model and its pictorial possibilities; it is determined by the class of tasks being solved and is specified by the number and type of characters, the number of brightness gradations, the orientation of characters, the image refresh (flicker) rate, and so on.

The alphabet must ensure the construction of any information models within the displayed class. It is also necessary to strive to reduce the redundancy of the alphabet.

Sign formation methods are classified according to the image elements used and are divided into modeling, synthesizing and generating. For a character that is formed on a CRT screen, a matrix format is preferred.

Observation of the monitor allows the user to build an image of the system's mode of operation, formed on the basis of education, training and experience (a conceptual model); this image can then be compared with a theoretical image corresponding to the situation. The effectiveness of the model is determined by the requirement of adequacy, isomorphism and similarity of the space-time structure of the displayed control objects and environment.

An image is reproduced based on its digital representation, which is contained in a memory block called the refresh buffer.
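As a minimal illustration of this idea (the names and the grey-scale encoding are assumptions, not part of the source), the refresh buffer can be seen as a plain array holding the digital representation of the image, from which the visible picture is regenerated:

// Uses the VCL units Windows and Graphics (TCanvas, TColor, RGB).
const
  ScreenW = 320;
  ScreenH = 200;

type
  TRefreshBuffer = array[0..ScreenH - 1, 0..ScreenW - 1] of Byte;  // brightness per pixel

// Regeneration pass: the visible image is rebuilt from the buffer contents.
procedure RegenerateImage(const Buf: TRefreshBuffer; Target: TCanvas);
var
  X, Y: Integer;
  V: Byte;
begin
  for Y := 0 to ScreenH - 1 do
    for X := 0 to ScreenW - 1 do
    begin
      V := Buf[Y, X];
      Target.Pixels[X, Y] := TColor(RGB(V, V, V));  // grey level taken from the digital representation
    end;
end;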

Fig. 1. Information-logical scheme of the interaction interface.

INFORMATION MODEL: INPUT AND OUTPUT INFORMATION

The information model, being a source of information for the operator, on the basis of which he forms an image of the real situation, as a rule, includes a large number of elements. Given the different semantic nature of the elements used, the information model can be represented as a set of interrelated elements:

D = {D_n}, n = 1, ..., N, where D_n is the set of elements of the information model belonging to the n-th group and each group contains K elements (k = 1, ..., K).

The number of groups of elements of the information model is determined by the degree of detail in the description of the states and operating conditions of the control object. As a rule, an element of the information model is associated with some parameter of the control object. At the same time, an information model of the graphic type can be considered a complex graphic image, whose elements are the elements of the information model. Any image consists of a certain set of graphic primitives, i.e. arbitrary graphic elements with geometric properties; letters (alphanumeric and any other symbols) can also act as primitives.

The set of graphic primitives, which the operator can manipulate as a whole, is called a segment of the displayed information. Along with a segment, the concept of a graphic object is often used, which is understood as a set of primitives that have the same visual properties and status, and are also identified by one name.
When organizing the processing of information in display systems we will operate with the following concepts:

1. Static information: information that is relatively stable in content and used as a background, for example a coordinate grid, a plan, or a terrain image.

2. Dynamic information: information whose content or position on the screen varies within a certain time interval. Truly dynamic information is often a function of some random parameters.

This division is rather arbitrary; nevertheless, in the design of real information display systems it causes no particular difficulty.
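A hedged sketch of how these concepts might be carried into code is given below; the record fields and the procedure are illustrative assumptions, not a prescribed structure.

type
  TElementClass = (ecStatic, ecDynamic);  // background vs. time-varying information

  // One element of the information model, tied to a parameter of the control object.
  TIMElement = record
    GroupIndex : Integer;        // which of the N groups the element belongs to
    ParamTag   : string;         // identifier of the linked control-object parameter
    ElemClass  : TElementClass;  // static (grid, plan) or dynamic (measured value)
    Value      : Double;         // current value, meaningful for dynamic elements
  end;

  // A segment: a set of elements the operator can manipulate as a whole.
  TIMSegment = array of TIMElement;

// Only dynamic elements have to be refreshed when new data arrive from the object.
procedure UpdateDynamicElements(var Seg: TIMSegment; const Tag: string; NewValue: Double);
var
  I: Integer;
begin
  for I := Low(Seg) to High(Seg) do
    if (Seg[I].ElemClass = ecDynamic) and (Seg[I].ParamTag = Tag) then
      Seg[I].Value := NewValue;
end;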

When creating complex ACS, software development is of great importance, because it is software that creates the intelligence of the computer that solves complex scientific problems and controls the most complex technological processes. At present the role of the human factor, and consequently of the ergonomic support of the system, is growing significantly. The main task of ergonomic support is to optimize the interaction between man and machine, not only during operation but also during the manufacture and disposal of the technical components. So, systematizing the approach to user interface design, we can state several basic functional tasks and construction principles that a modern programming environment should address and that Delphi copes with successfully:

The principle of minimum working effort, which has two aspects:

1. minimization of resource costs on the part of the software developer, achieved by creating a definite methodology and technology of development similar to conventional production processes;

2. minimization of resource costs on the part of the user, i.e. the user should only have to do work that is necessary and cannot be done by the system, and work already done should not have to be repeated.

The task of maximum mutual understanding: the human operator should not, for example, have to search for information, and the information displayed on the screen should not require recoding or additional interpretation by the user.

The user should have to memorize as little information as possible, since keeping information in memory reduces the human operator's capacity to make operational decisions.

The principle of maximum concentration of the user on the task being solved, with localization of error messages.
WHAT TO UNDERSTAND BY INTERFACE

The user interface is the communication between a human and a computer. General User Access is a set of rules that describe this dialogue in terms of common elements, such as rules for presenting information on the screen, and of interactive technology, such as rules for how the human operator reacts to what is presented on the screen. In this course project we consider the IBM OPD standard developed jointly with Microsoft for PC-AT class machines.

INTERFACE COMPONENTS

On a practical level, an interface is a set of standard techniques for interacting with technology. At a theoretical level, an interface has three main components:

1. The way the machine communicates with the human operator.

2. The way the human operator communicates with the machine.

3. Way of user interface presentation.

MACHINE TO USER

The way the machine communicates with the user (representation language) is determined by the machine application (application software system).
The application manages access to information, information processing, presentation of information in a user-friendly way.

USER TO MACHINE

The user must recognize the information that the computer presents, understand (analyze) it, and move on to a response. The response is implemented through interactive technology, whose elements can be actions such as selecting an object with a key or the mouse. All this makes up the second part of the interface, the action language.

HOW THE USER THINKS

Users can have an idea about the machine interface, what it does and how it works. Some of these beliefs are formed by users as a result of their experience with other machines, such as a printer, calculator, video games, and a computer system. A good user interface makes use of this experience. More advanced views are formed from the experience of users with the interface itself. The interface helps users to develop views that can be further used when working with other application interfaces.

CONSISTENT INTERFACE

The key to creating an effective interface is to develop a simple conceptual interface model for operators as quickly as possible. Shared User Access does this through consistency. The concept of consistency is that when working with a computer, the user forms a system of expecting the same reactions to the same actions, which constantly reinforces the user interface model. Consistency, by providing a dialogue between the computer and the human operator, can reduce the amount of time it takes for the user to both learn the interface and use it to get the job done.

Consistency is the property of an interface to reinforce the user's expectations. Another component of the interface is the property of its concreteness and visibility. This is achieved through the layout of the panel, the use of color and other expressive techniques; ideas and concepts are thereby given physical expression on the screen with which the user interacts directly.

CONSISTENCY - THREE DIMENSIONS:

Saying that an interface is consistent is like saying that something is greater than something. We are forced to ask: "More than what?". When we say that an interface is consistent, we are forced to ask, "Consistent with what?". It is necessary to mention some dimension.

An interface can be aligned with three broad categories or dimensions: physical, syntactic, and semantic.

1. Physical consistency refers to hardware: keyboard layouts, key layouts, mouse usage. For example, there will be physical consistency for the F3 key if it is always in the same location regardless of system usage. Likewise, it will be physically consistent to select a button on a mouse if it is always located under the index finger.

2. Syntactic consistency refers to the sequence and order in which elements appear on the screen (view language) and the sequence of requests for action requirements (action language).

For example: there will be syntactic consistency if you always place the panel title in the center and at the top of the panel.

3. Semantic consistency refers to the meaning of the elements that make up the interface. For example, what does "Exit" mean? Where do users "Log out" and what happens next?

INTER-SYSTEM CONSISTENCY

General User Access contains definitions of all elements and interactive technology. But these definitions can be implemented in different ways due to the technical capabilities of specific systems. So, the common interface cannot be identical for all systems.

Composite systems consistency is a balance between physical, syntactic, semantic consistency and the desire to take advantage of the system's optimal capabilities.

BENEFITS OF A CONSISTENT USER INTERFACE

A consistent interface saves time and money for both users and developers. Users benefit because they need less time to learn how to use the applications and then less time to get their work done. Additional benefits for the user show up in their attitude towards the applications.

A consistent interface reduces user error, increases task satisfaction, and makes the user feel more comfortable with the system.

A consistent user interface also benefits application developers by highlighting common blocks of elements for an interface through standardization of interface elements and interactive technology. These building blocks can allow programmers to create and modify applications more easily and quickly. For example, because the same panel can be used on many systems, application developers can use the same panels in different projects.

While the user interface sets rules for interface elements and interactive technology, it allows for a fairly high degree of flexibility. For example, five types of panels are defined for the interface, but it is assumed that application-specific panels can be used. General User Access recommends the use of certain panels, but if this is not possible, then specific elements of certain panels should be used.


SOFTWARE AND HARDWARE: IMPLEMENTATION AND CREATION OF A CUSTOM INTERFACE

MS-Windows provides users with a graphical user interface (GUI) shell that offers a standard environment for users and programmers. The GUI offers a more sophisticated and friendly user environment than the command-driven interface of DOS: working in Windows is based on intuitive principles, and you can easily switch from task to task and exchange information between them. However, application developers have traditionally faced programming challenges, because the organization of the Windows environment is extremely complex.

Delphi is a programming language and environment belonging to the RAD (Rapid Application Development) class of CASE technologies. Delphi has made the development of powerful Windows applications a fast and enjoyable process: Windows applications whose creation used to require a great deal of human effort, for example in C++, can now be written by a single person using Delphi.

The Windows interface makes it possible to carry CASE technologies over completely into an integrated system that supports work on the creation of an applied system through all phases of its life cycle and design.

Delphi has a wide range of features, from a form designer to support for all popular database formats. The environment eliminates the need to program such general-purpose Windows components as labels, icons and even dialog boxes. Working in Windows, you have repeatedly seen the same "objects" in many different applications: dialog panels such as Choose File and Save File are examples of reusable components built right into Delphi, which lets you tailor these components to the task at hand so that they work exactly as the application you are building requires. There are also predefined visual and non-visual objects, including buttons, data objects, menus and ready-made dialog boxes. With these objects you can, for example, provide data entry with just a few clicks of the mouse, without resorting to programming. This is a visual embodiment of the use of CASE technologies in modern application programming; the part that relates directly to programming the system's user interface is called visual programming.
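For instance, the standard "Choose File" panel is available as the ready-made TOpenDialog component, which is only parameterised for the task at hand. The handler below is a sketch; the form, the button and the LoadReport routine are hypothetical names.

// Requires Dialogs in the unit's uses clause (TOpenDialog, ofFileMustExist).
procedure TMainForm.btnOpenClick(Sender: TObject);
var
  Dlg: TOpenDialog;
begin
  Dlg := TOpenDialog.Create(Self);
  try
    Dlg.Title   := 'Choose a report file';
    Dlg.Filter  := 'Report files (*.rpt)|*.rpt|All files (*.*)|*.*';
    Dlg.Options := Dlg.Options + [ofFileMustExist];
    if Dlg.Execute then
      LoadReport(Dlg.FileName);  // hypothetical application routine
  finally
    Dlg.Free;
  end;
end;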

Benefits of designing workstations in the Windows environment using Delphi:

1. the need to re-enter data is eliminated;

2. consistency between the project and its implementation is ensured;

3. development productivity and program portability increase.

Visual programming, as it were, adds a new dimension to the creation of applications, making it possible to depict these objects on the monitor screen before executing the program itself. Without visual programming, the display process requires writing a piece of code that creates and sets up an object in place. It was possible to see the encoded objects only during the execution of the program. With this approach, getting objects to look and behave the way you want becomes a tedious process that requires you to repeatedly fix the code, then run the program and see what happens.

With visual design tools you can work with objects right before your eyes and get results almost immediately. The ability to see objects as they will appear during program execution removes the need for much of the manual work that is typical of non-visual environments, whether object-oriented or not. Once an object has been placed on a form in the visual programming environment, all of its attributes are immediately reflected in code that corresponds to the object as an entity executed while the program runs.

Object placement in Delphi involves a tighter relationship between objects and the actual program code. Objects are placed on the form, and the code corresponding to them is automatically written to the source file. This code is compiled, which gives significantly better performance than a visual environment that merely interprets information while the program is running.
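Roughly, the kind of code pair Delphi maintains looks like the sketch below (the names are illustrative): the component fields and the empty event handler are generated by the environment, the visual attributes live in the accompanying .dfm file, and only the handler body is written by hand.

// Requires Classes, Forms and StdCtrls in the unit's uses clause.
type
  TAlarmForm = class(TForm)
    lblStatus: TLabel;   // placed on the form in the designer
    btnAck:    TButton;  // its OnClick is wired to the handler below
    procedure btnAckClick(Sender: TObject);
  end;

procedure TAlarmForm.btnAckClick(Sender: TObject);
begin
  lblStatus.Caption := 'Alarm acknowledged';  // the only hand-written line
end;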

The three main parts of interface design are panel design, dialog design and window presentation. For General User Access the conditions of the applied system architecture must also be taken into account, along with other conditions: whether the input devices on the terminals are keyboards or pointing devices, and whether the applications will be character-based or graphical.

PANEL DESIGN DEVELOPMENT

Let's establish the basic terms related to the development of the panel.

A screen is the surface of a computer workstation or terminal on which the information intended for the user is placed.
A panel is a predefined grouping of information that is structured in a specific way and arranged on the screen. General User Access defines five panel schemes, called panel types; different panel types should be used to present different kinds of information. Among these panel types are:

1. Information;

2. List;

3. Logical.

You can also mix parts of these panel types to create mixed panels. You should think of each panel as a kind of space, divided into three main parts, each of which contains a separate type of information:

1. Action menu and pull-down menus;

2. Panel body;

3. Function key area.

Fig. 2 shows the position of the three panel areas, stacked from top to bottom:

| Action menu       |
| Panel body        |
| Function key area |

Fig. 2. The three panel areas.

The action menu appears at the top of the panel. This gives users access to a group of actions that the application supports. The action menu contains a list of possible actions to choose from. When users make a selection, a list of possible actions appears on the screen in the form of a pull-down menu. The pull down menu is an extension of the action menu.

The word "actions" in "action menu" does not imply that all commands must be verbs. Nouns are also allowed. The meaning of action in the term "action menu" comes from the fact that the selection of an action menu item is performed by the application through user actions. For example, in a text editor, selecting "Fonts" from the action menu is a noun and allows the user to require font selection actions.

Some panels will have an action menu while others won't.

The action menu and the dropdown menu provide two great benefits for users.

The first advantage is that these actions are made visible to users and can be requested to be performed through a simple interactive technique. "Request" means initiating an action.
The way the human operator initiates an action is by pressing a function key, making a selection from a pull-down menu, or typing (typing) a command. The action menu and drop-down menu provide visibility that helps users find the actions they need without having to remember and type the name of the action.

The second advantage is that a selection in the action menu only leads to the pull-down menu, i.e. it never causes an immediate action. Users see that such selections have no fatal consequences, so they do not develop a fear of doing the wrong thing.

The action menu and drop-down menu provide a two-level hierarchy of actions. You can provide an additional level by using the pop-up windows that appear when an operator makes a selection from a drop-down menu. Then, when the operator makes a selection in the pop-up window, a series of pop-up windows may appear as the actions are performed. General User Access recommends that you limit the number of pop-up levels to three, as many users have difficulty understanding the hierarchy of menus that have many levels.

The body of the panel is below the action menu and above the function key area. Every panel you create has a body, which can be split into several areas if the application needs to show users more than one group of information at a time, or allows users to enter or update more than one group of information at the same time.

The panel body may also contain a command area in which users type application or system commands, and a message area in which messages appear.

The command pane is a means of providing users with a command interface, which is an alternative to prompting for actions via the action menu and pull-down menu. The message area gives you a place to put messages on the screen, different from windows, as it is important that the messages do not collide with the information on the panel or with the action request.

The function key area is located at the bottom of the panel; the operator can choose to display it in short form, in long form, or not at all. It contains a list of function keys. Some panels may contain both an action menu and function key captions. The function key area should be provided for all panels, even though the user may choose not to display it. Fig. 3 shows a general view of a user panel of the system, and a small Delphi sketch of such an area follows the figure.
Choice of Communication
Select one of the following types of communication:
 1. Receiving mail
 2. Receiving messages
 3. Sending mail
 4. Mail journal
 5. Operations
 6. Mail status
Esc=Cancel   F1=Help   F3=Exit

Fig. 3. A panel with a function key area. The function key area is displayed in short form and contains the selections Cancel, Help and Exit.
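In Delphi terms such a panel could be sketched as follows; the form class TCommForm, its KeyPreview setting and the handlers are assumptions made for illustration only.

// Requires Windows, Forms, Controls, Dialogs and ExtCtrls in the unit's uses clause.
procedure TCommForm.FormCreate(Sender: TObject);
var
  KeyArea: TPanel;
begin
  KeyPreview := True;              // let the form see key presses before its controls do
  KeyArea := TPanel.Create(Self);
  KeyArea.Parent := Self;
  KeyArea.Align := alBottom;       // the function key area sits at the bottom of the panel
  KeyArea.Caption := 'Esc=Cancel   F1=Help   F3=Exit';
end;

procedure TCommForm.FormKeyDown(Sender: TObject; var Key: Word; Shift: TShiftState);
begin
  case Key of
    VK_ESCAPE: ModalResult := mrCancel;                         // cancel (when the panel is shown modally)
    VK_F1:     ShowMessage('Help for the communication panel'); // placeholder help action
    VK_F3:     Close;                                           // exit the panel
  end;
end;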

Panel elements are the smallest parts of a panel design.
Some elements belong exclusively to certain areas of the panel, while others can be used in different areas.

General User Access provides a number of symbols and visual cues, such as pseudo-buttons and contact buttons, that you can use to tell users which of the select boxes or actions they are working with.

DESIGN PRINCIPLES: OBJECT - ACTION

The division of the panel into areas that contain information objects or action selections is based on the object-action principle of panel design. This principle allows users to first make a selection of an object on the body of the panel, and then select the appropriate action to work with the selected object from the action menu or from the function key area.

This object-action matching makes it possible to build action menus and pull-down menus that include only the actions valid for the corresponding objects. Applying the object-action concept helps to minimize the number of modes, a large number of which inconveniences users and makes the application difficult to learn and use. The object-action principle is preferred, but in most cases an action-object relationship can also be applied, in which the operator selects the action first and then the objects.
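A small sketch of the object-action principle in Delphi terms (the list box, the menu items and the even/odd convention are invented for illustration): when the selected object changes, only the actions valid for it remain enabled.

// Assumes a form with lstObjects: TListBox and menu items miReply, miForward, miPrint.
procedure TMailForm.lstObjectsClick(Sender: TObject);
var
  HasSelection, IsMessage: Boolean;
begin
  HasSelection := lstObjects.ItemIndex >= 0;
  // Purely illustrative convention: even indexes hold messages, odd ones hold journals.
  IsMessage := HasSelection and (lstObjects.ItemIndex mod 2 = 0);

  miReply.Enabled   := IsMessage;     // "Reply" is valid only for message objects
  miForward.Enabled := IsMessage;
  miPrint.Enabled   := HasSelection;  // "Print" only needs some object to be selected
end;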

USER WORK WITH THE PANEL

The user interacts with the elements of the panel using the selection cursor, one of the selection forms of which is a color bar used to highlight the selection fields and input fields. The selection cursor shows where and with what the user is going to work. Users move the cursor around the panel using the keyboard or mouse.

DIRECT INTERACTION

Shared User Access includes design concepts such as the concept of step-by-step guidance, visual cue, and interactive technique.
However, advanced users may not require this level of ease of use; they may want more direct interaction with the application. For such users Shared User Access also provides fast interactive techniques:

1. Assigning actions to function keys.

2. Fast exit from high-level actions.

3. Using mnemonics and numbers to select objects and actions.

4. The command area allows the user to enter application and system commands.

5. Using the mouse speeds up the selection of actions.

BUILDING A DIALOGUE

A dialogue is a sequence of requests between the user and the computer: the user's request, the computer's response and request, and the computer's final action.

While the user and the computer are exchanging messages, the dialog, under the operator's control, moves along one of the paths provided by the application: the user essentially navigates through the application by means of specific actions that are part of the conversation. These dialog actions do not necessarily require the computer to process information; they may only cause transitions from one panel to another, or from one application to another if more than one application is running. Dialog actions also determine what happens to the information users type on a particular panel: whether it is retained or saved when the user decides to move to a different application panel.

So, the dialogue consists of two parts:

Each step of the dialog is accompanied by a decision to save or not save the new information.

With several possible paths through the dialog, the operator can move forward step by step using common dialog actions such as enter, cancel and exit. Common dialog actions are a set of actions, defined in Shared User Access, that have the same meaning in all applications. Using these actions, the user can move:

1. forward one step (the enter action);

2. back one step (the cancel action);

3. back to a specific point in the application (the exit-function action);

4. out of the application (the exit-application action).

As dialog steps, the enter and cancel actions usually present a new panel to the operator, or may present the same panel with significant changes. At any point in the dialog the cancel and exit actions work in the same way, no matter how many exit points the application has; some applications have only one exit point, while others have several. A set of common dialog actions is illustrated in Fig. 4, and a small Delphi sketch of the enter and cancel actions follows the figure.

The figure illustrates the navigation capabilities of a typical dialog when moving from panel to panel; the panels are depicted as rectangles. The Forward and Backward operations are scrolling operations, not navigational ones, and are used to move within a panel.

Fig. 4. Dialog actions.
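In Delphi terms the enter and cancel actions of such a dialog can be sketched as a modal form returning mrOk or mrCancel; TParamForm and ApplyParameterChanges are hypothetical names used only for illustration.

procedure TMainForm.EditParameters;
var
  Dlg: TParamForm;  // an ordinary TForm descendant with OK and Cancel buttons
begin
  Dlg := TParamForm.Create(Self);
  try
    case Dlg.ShowModal of
      mrOk:     ApplyParameterChanges(Dlg);  // enter: accept the input, move forward one step
      mrCancel: ;                            // cancel: discard the input, go back one step
    end;
  finally
    Dlg.Free;
  end;
end;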

RETAINING AND SAVING INFORMATION

While users navigate the application, something must happen to the information being changed in a panel: it may be retained at the panel level or it may be saved.

Retained information belongs to the panel level of the application. When the user leaves a panel by canceling it, the application either discards or keeps the changes made to the information in the panel; retained information can be shown as default values the next time the user views the panel. This does not mean that the information has been saved: each application decides whether to retain or save such information.

Saving information means placing it in a memory area specified by the operator. Navigation actions that take the user through the application do not save information until the user specifically specifies that these actions should end with saving information.

If the user's actions could result in the loss of information, General User Access recommends requiring the user to confirm that they do not want to save it, or allowing them to save it, or letting them cancel the last request and go back one step.
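One way this recommendation is commonly realised in Delphi is an OnCloseQuery handler with a Yes/No/Cancel confirmation; the FModified flag and the SaveData routine below are assumed names.

// Requires Dialogs in the unit's uses clause (MessageDlg and related constants).
procedure TEditorForm.FormCloseQuery(Sender: TObject; var CanClose: Boolean);
begin
  if not FModified then Exit;  // nothing would be lost, so no question is needed
  case MessageDlg('Save changes before leaving this panel?',
                  mtConfirmation, [mbYes, mbNo, mbCancel], 0) of
    mrYes:    begin SaveData; CanClose := True; end;  // save, then leave
    mrNo:     CanClose := True;                       // the user confirmed the loss of information
    mrCancel: CanClose := False;                      // cancel the last request, go back one step
  end;
end;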

WINDOWS

Your application can run in windowed mode. This means that panels are placed in separate, bounded parts of the screen called windows. A windowing system allows the user to divide the screen into windows, each containing its own panel. Using several windows at once, the user can watch several panels of one or of different applications on the screen simultaneously.

If the screen contains one or two windows, the user may not see the entire panel in each window. It depends on the size of the window.
The user can move or resize each window to fit the information they need. Also, users can scroll the contents of windows by moving the information on the panel within the screen area bounded by the window.

Windowing features are provided by the operating system or its services and tools; otherwise the applications themselves must implement this mode.

THREE TYPES OF WINDOWS

The primary window is the window from which the user and computer begin their dialogue. For example, in a text editor, the primary window contains the text to be edited. In the spreadsheet editor, the primary window contains the table. On systems without the ability to create windows, consider the entire screen as the primary window. Each primary window can contain as many panels as needed, one after the other, to carry on the dialogue. Users can switch the primary window to another primary or secondary window.

Secondary windows are called from primary windows. These are windows in which the user and the computer conduct a dialog parallel to the dialog in the primary window. For example, in a text editor the secondary window may contain a panel with which the user changes the format of the document, while the primary window contains the text being edited. Secondary windows are also used to provide auxiliary information related to the dialog in the primary window. Users can switch from primary windows to secondary windows and back. Primary and secondary windows have title bars at the top; the title is associated with the window by the application.

Pop-up windows are portions of the screen containing displayable panels that extend the user's dialog with primary and secondary windows. Pop-up windows are associated with other windows and appear when an application wants to extend a dialog beyond a given window; one of their uses is to convey various messages. Before continuing a dialog with a window, the user must complete the work in the pop-up window associated with it.
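In Delphi terms the three window types can be sketched as follows (TFormatForm and the message text are illustrative): the primary window is the main form, a secondary window is shown non-modally so its dialog runs in parallel, and a pop-up is modal and must be answered first.

// Requires Dialogs in the unit's uses clause (MessageDlg).
procedure TMainForm.OpenFormatWindow;
var
  Secondary: TFormatForm;  // hypothetical TForm descendant holding the formatting panel
begin
  Secondary := TFormatForm.Create(Self);
  Secondary.Show;  // non-modal: a dialog parallel to the one in the primary window
end;

procedure TMainForm.WarnAboutConnection;
begin
  // Pop-up: the user must answer before continuing with the window beneath it.
  MessageDlg('Connection to the controller has been lost.', mtWarning, [mbOK], 0);
end;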

INPUT DEVICES: KEYBOARD, MOUSE AND OTHERS

Shared User Access supports the coordinated use of a keyboard and mouse, or any other device that acts like a mouse. We will further assume that the mouse is the main pointing device.

Users should be prepared to switch between keyboard and mouse at almost any stage of the conversation without having to change application modes. One device may be more efficient than another in a given situation, hence the user interface allows users to easily switch from one device to another.

All personal computer applications must take into account the use of the mouse. However, applications on non-programmable terminals cannot support a mouse. Mouse support is not required on these terminals.

KEYBOARD SUPPORT

Let's take as a de facto standard General User Access, designed with one type of keyboard in mind, namely the extended IBM keyboard.

Keys must be assigned to application functions according to the rules and specifications of the IBM standard. The key assignments below refer to the IBM Enhanced Keyboard; for other keyboard types, use the appropriate technical documentation, for example for the IBM modifiable keyboards.

Key assignment rules (a short sketch illustrating some of them follows the list):

1. Any keys can be used in applications, both keys pressed without Shift and combinations with Shift+, Ctrl+ and Alt+, provided that the programmable workstation or non-programmable terminal allows the application to access those keys. Avoid using keys assigned by the operating system under which the application will run.

2. If the application will be translated into other languages, you should not assign alphanumeric key combinations with Alt. Where possible, however, users may assign other functions to these keys.

3. To change the original meaning of a key, use it in combination with the Alt, Ctrl and Shift keys. The Alt, Ctrl and Shift keys are not used on their own.

4. Do not remap or duplicate key assignments.

5. Users may be given the opportunity to change key assignments as an additional function of the application. Users should be able to assign actions and options to any function keys and to change their designation on the screen.

6. If the same function is assigned to a function key in several applications, that key should be assigned to that function in all of them.

7. If the user presses a key that is not assigned at the current panel level, there should be no effect unless otherwise specified.
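A short sketch of some of these rules in Delphi (a hypothetical form with KeyPreview = True; SaveCurrentPanel and ShowHelpPanel are assumed routines): a modifier changes the original meaning of a letter key, the same F-keys keep the same meaning everywhere, and unassigned keys do nothing.

// Requires Windows and Classes in the unit's uses clause (VK_ codes, TShiftState).
procedure TMainForm.FormKeyDown(Sender: TObject; var Key: Word; Shift: TShiftState);
begin
  if (ssCtrl in Shift) and (Key = Ord('S')) then
    SaveCurrentPanel   // Ctrl+S: the original meaning of the key is changed only via a modifier
  else if Key = VK_F1 then
    ShowHelpPanel      // F1 = Help, the same function in every panel and application
  else if Key = VK_F3 then
    Close              // F3 = Exit, again consistent across panels
  else
    ;                  // a key not assigned at the current panel level has no effect
end;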
CONCLUSION

In modern conditions, finding an optimal solution to the problem of organizing the interaction interface becomes a complex task, whose solution is significantly complicated by the need to optimize the functional interaction of operators with each other and with the technical means of the ACS, given the changing nature of their professional activities.

In this regard I would like to emphasize the particular urgency of the problem of modeling the interaction of the human operator with the technical means of the ACS. Today there is a real opportunity, by modeling with modern multifunctional information processing and display tools such as Delphi, to specify the type and characteristics of the information models used, to identify the main features of the operators' future activities, to formulate requirements for the parameters of the interaction interface hardware and software, and so on.

Speaking about the problems of human interaction with the technical means of the ACS and the practical implementation of the interaction interface, one cannot omit such an important issue as unification and standardization. The use of standard solutions and of the modular principle in designing information display and processing systems is becoming ever more widespread, which is quite natural.

Particular emphasis in implementing these tasks should, of course, be placed on modern CASE tools for program development, since they are best suited to designing solutions based, first of all, on the requirements of a consistent user interface such as the Windows interface. No other company's products available today offer the ease of use, performance and flexibility that Delphi does. This language bridged the gap between third- and fourth-generation languages, combining their strengths into a powerful and productive development environment.

LITERATURE

Organization of human interaction with technical means of automated control systems, vol. 4: "Information Display", ed. V.N. Chetverikov. Moscow: Higher School, 1993.

Organization of human interaction with technical means of automated control systems, vol. 7: "System design of human interaction with technical means", ed. V.N. Chetverikov. Moscow: Higher School, 1993.

I.P. Kuznetsov. "Cybernetic dialogue systems".

"Common User Interface Guidelines", Microsoft edition, 1995.

John Matcho, David R. Faulkner. "Delphi", trans. from English. Moscow: Binom, 1995.


Industrial data transmission networks are one of the main elements of modern industrial control systems. The advent of industrial communication protocols marked the beginning of the introduction of geographically distributed control systems that can cover many technological installations, unite entire workshops, and sometimes factories. Today, the field of industrial communications is developing by leaps and bounds: more than 50 communication network standards are known, specially adapted for industrial applications, new progressive data transmission technologies appear every year. This is not surprising, because it is communication networks that largely determine the quality, reliability and functionality of the APCS as a whole.

Data transmission networks used in APCS can be divided into two classes:

  1. Field buses (Field Buses);
  2. Top-level networks (operator level, Terminal Buses).


1. Field buses

The main function of the field bus is to provide network communication between controllers and remote peripherals (eg I/O nodes). In addition, the field bus can be connected to various instrumentation and actuators (Field Devices), equipped with appropriate network interfaces. Such devices are often called intelligent (Intelligent Field Devices), as they support high-level network communication protocols.

As already noted, there are many fieldbus standards, the most common of which are:

  1. Profibus DP;
  2. Profibus PA;
  3. Foundation Fieldbus;
  4. Modbus RTU;
  5. HART;
  6. DeviceNet.

Despite the implementation differences between these standards (data transfer rate, frame format, physical medium), they have one thing in common: the network data exchange algorithm, based on the classic Master-Slave principle or slight modifications of it. Modern fieldbuses meet stringent technical requirements that make them suitable for harsh industrial environments. These requirements include:

1. Determinism. This means that the transmission of a message from one network node to another takes a strictly bounded period of time. Office networks built on Ethernet technology are a classic example of a non-deterministic network: the CSMA/CD medium-access algorithm does not bound the time within which a frame from one node will be delivered to another and, strictly speaking, gives no guarantee that the frame will reach its destination at all. For industrial networks this is unacceptable. The transmission time of a message must be bounded and, in the general case, can be calculated in advance from the number of nodes, the data transfer rate and the message length (a rough calculation of this kind is sketched after this list).

2. Support for long distances. This is an essential requirement, because the distance between control objects can sometimes reach several kilometers. The protocol used should be oriented to use in long-distance networks.

3. Protection against electromagnetic interference. Long lines are particularly susceptible to electromagnetic interference emitted by various electrical equipment, and strong interference in the line can distort the transmitted data beyond recognition. To protect against such interference, special shielded cables are used, as well as optical fiber, which is inherently insensitive to electromagnetic interference because the signal is carried by light. In addition, industrial networks should use digital coding methods that prevent data from being corrupted during transmission or, at least, allow the receiving node to reliably detect corrupted data.

4. Reinforced mechanical construction of cables and connectors. Here, too, there is nothing surprising if you imagine the conditions under which communication lines often have to be laid. Cables and connectors must be strong, durable and suitable for use in the most difficult conditions (including aggressive atmospheres, high vibration levels, humidity).
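As an illustration of such an advance calculation, the sketch below (Python) estimates the worst-case polling cycle of a hypothetical Master-Slave field bus. The bit rate, per-frame overhead and inter-frame gap are assumed values, not figures from any particular fieldbus specification.

    # Rough worst-case cycle estimate for a hypothetical Master-Slave field bus.
    # All parameters are illustrative assumptions, not values from a real standard.
    BIT_RATE = 500_000          # line speed, bits per second (assumed)
    OVERHEAD_BITS = 11 * 8      # per-frame overhead: sync, address, function,
                                # length and check bytes (assumed)
    INTER_FRAME_GAP = 100e-6    # bus silence between frames, seconds (assumed)

    def frame_time(payload_bytes):
        """Time to transmit one frame with the given payload, in seconds."""
        return (OVERHEAD_BITS + payload_bytes * 8) / BIT_RATE

    def worst_case_cycle(nodes, payload_bytes):
        """Upper bound on one polling cycle: the master sends a request to every
        slave and receives a reply of the same length from each of them."""
        per_node = 2 * frame_time(payload_bytes) + 2 * INTER_FRAME_GAP
        return nodes * per_node

    print("Worst case, 32 nodes, 16-byte payloads: "
          f"{worst_case_cycle(32, 16) * 1000:.2f} ms")

Because every exchange is initiated by the master, the sum over all nodes gives a hard upper bound on the cycle time, which is exactly the property an office CSMA/CD network cannot offer.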

According to the type of physical data transmission medium, field buses are divided into two types:

  1. Field buses based on fiber optic cable. The advantages of optical fiber are obvious: the possibility of building long communication lines (10 km and more); large bandwidth; insensitivity to electromagnetic interference; the possibility of laying in hazardous areas. Disadvantages: relatively high cost of the cable; the complexity of making physical connections and splices, which must be carried out by qualified specialists.
  2. Field buses based on copper cable. As a rule, this is a two-wire twisted-pair cable with special insulation and shielding. Advantages: acceptable price; ease of laying and making physical connections. Disadvantages: susceptibility to electromagnetic interference; limited length of cable lines; lower bandwidth than optical fiber.

An example of a module that connects a Simatic S7-300 controller to a Profibus DP network with fiber optic cable is the CP 342-5 FO communication processor. The CP 342-5 module can be used to connect the S7-300 to a Profibus DP network with a copper cable.


2. Top-level networks

The APCS top-level networks are used to transfer data between controllers, servers and operator workstations. Such networks sometimes include additional nodes (a central archive server, an industrial application server, an engineering station, etc.), but these are optional.

What networks are used at the top level of the process control system? Unlike the fieldbus standards, there is not much variety here. In fact, most of the top-level networks used in today's industrial control systems are based on the Ethernet standard (IEEE 802.3) or its faster variants Fast Ethernet and Gigabit Ethernet, as a rule with the TCP/IP communication protocol. In this respect, top-level networks are very similar to conventional LANs used in office applications. The widespread industrial use of Ethernet networks is explained by the following points:

1) High-level industrial networks combine many operator stations and servers, which in most cases are personal computers. The Ethernet standard is well suited for organizing such LANs; to do this, it is necessary to equip each computer with only a network adapter (NIC, network interface card). Many modern controllers have communication modules for connecting to Ethernet networks (for example, the CP 343-1 communication processor allows you to connect an S7-300 to an Industrial Ethernet network).

2) There is a large selection of affordable communication equipment for Ethernet networks on the market, including equipment specially adapted for industrial applications.

3) Ethernet networks provide a high data transfer rate. For example, the Gigabit Ethernet standard allows data to be transferred at up to 1 Gbit/s over Category 5 twisted pair. High network throughput is becoming increasingly important for industrial applications.

4) The use of Ethernet at the top level of the APCS makes it simple to connect the APCS network to the local network of the plant (or enterprise). As a rule, the existing plant LAN is itself based on the Ethernet standard, and using a single network standard simplifies the integration of process control systems into the common enterprise network.

However, industrial networks of the upper level of industrial control systems have their own specifics, due to the conditions of industrial use. Typical requirements for such networks are:

1. Large bandwidth and data transfer rate. The volume of traffic depends directly on many factors: the number of archived and visualized technological parameters, the number of servers and operator stations, the applications used, etc. Unlike field networks, there is no strict determinism requirement here: strictly speaking, it does not matter whether transferring a message from one node to another takes 100 ms or 700 ms, as long as the delay stays within reasonable limits. The main thing is that the network as a whole copes with the total volume of traffic in an acceptable time. The heaviest traffic runs over the network sections connecting servers and operator stations (clients): technological information at an operator station is updated on average once per second, and several thousand technological parameters may be transmitted (a rough traffic estimate of this kind is sketched after this list). Even here there are no hard time limits: the operator will not notice if the information is updated, say, every one and a half seconds instead of the prescribed one second. By contrast, if a controller with a 100 ms scan cycle receives new data from a sensor with a 500 ms delay, this may lead to incorrect execution of the control algorithms.

2. Fault tolerance. It is achieved, as a rule, by duplicating the communication equipment and communication lines according to a 2*N scheme, so that in the event of a switch failure or a broken channel the control system can localize the point of failure within the shortest possible time (no more than 1-3 s), automatically rebuild the topology and redirect traffic to the redundant routes.

3. Compliance of network equipment with industrial operating conditions. This includes such measures as: protection of network equipment from dust and moisture; an extended operating temperature range; an extended life cycle; convenient mounting on a DIN rail; low-voltage power supply with the possibility of redundancy; rugged, wear-resistant sockets and connectors.
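For a rough idea of the traffic involved, the sketch below estimates the load on a server-to-operator-station link; the tag count, per-tag message size and update period are assumed illustrative figures, not requirements of any standard.

    # Back-of-the-envelope traffic estimate for a server -> operator-station link.
    # Tag count, per-tag size and update period are assumed, illustrative figures.
    PARAMETERS = 5000        # visualized/archived tags per operator station
    BYTES_PER_PARAM = 16     # value + timestamp + quality + framing (assumed)
    UPDATE_PERIOD_S = 1.0    # screen update interval, seconds

    traffic_bps = PARAMETERS * BYTES_PER_PARAM * 8 / UPDATE_PERIOD_S
    print(f"Approximate steady load per station: {traffic_bps / 1e6:.2f} Mbit/s")
    # Even several such stations stay far below Fast/Gigabit Ethernet capacity,
    # which is why hard determinism is less critical here than in field buses.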

The functions of industrial network equipment differ little from those of its office counterparts, but because of the special design it costs somewhat more. Figure 1 shows, as an example, industrial network switches that support a redundant network topology.

Fig. 1. Industrial switches SCALANCE X200 from Siemens (left) and LM8TX from Phoenix Contact (right), mounted on a DIN rail

When talking about industrial networks based on Ethernet technology, the term Industrial Ethernet is often used, thereby hinting at their industrial purpose. There are now extensive discussions about making Industrial Ethernet a separate industry standard, but at the moment Industrial Ethernet is only a list of technical recommendations for networking in a production environment, and is, strictly speaking, an informal addition to the physical layer specification of the Ethernet standard.

There is another point of view on what Industrial Ethernet is. Recently, many communication protocols have been developed on top of the Ethernet standard and optimized for the transmission of time-critical data. Such protocols are conventionally called real-time protocols, meaning that they can be used to organize data exchange between distributed applications that are time-critical and require precise time synchronization. The ultimate goal is to achieve relative determinism in data transfer. Examples of such Industrial Ethernet protocols are:

  • Profinet;
  • EtherCAT;
  • Ethernet Power Link;
  • Ethernet/IP.

These protocols modify the standard TCP/IP protocol to varying degrees, adding new networking algorithms, diagnostic functions, self-correction methods, and synchronization functions to it. At the same time, the Ethernet link and physical layers remain unchanged. This allows new communication protocols to be used on existing Ethernet networks using standard communications equipment.

User's manual

1. Introduction
1.1. Application area
1.2. Brief description of features
1.3. Level of user training

2. Purpose and conditions for the use of APCS "VP"

3. Solution of the APCS "VP" system

4. Starting the system

1. Introduction.

1.1. Application area

The requirements of this document apply during:

· preliminary complex tests;

· trial operation;

· acceptance tests;

· industrial operation.

1.2. Brief Description of Features

The "Weight Flow" software product is designed for analytical work and for automating and optimizing document management and interdepartmental logistics processes in the various departments of an enterprise. The system also provides operational monitoring and adjustment of technical processes at enterprises that use weighing equipment: elevators, oil and gas storage facilities, railway freight stations and other industrial facilities.

The software and hardware-software complex of the APCS "Weight Flow" have a modular structure.

When working with reporting, the following are often used: the 1C OLE software with an online synchronization function (allows weighing to be initiated from the accounting system) and the SAP RFC software with an online synchronization function (creates weighing records in the accounting system). These provide the following:

· checking whether a vehicle may pass onto the territory of the enterprise;

· creation of a document in 1C upon the fact of weighing a vehicle at the enterprise;

· return of the cash balance on the counterparty's account in the 1C system;

· search for a document by vehicle number and return of the document number (if there are several documents, the output order is determined by the developer; the function always returns one document);

· return of information about a document; return of a directory element; entry of the weight of the goods into a document; output of the list of documents for a given date.

1.3. User experience level

The user must have experience with MS Windows (95/98/NT/2000/XP, XP-7), be familiar with MS Office, and must:

· know the relevant subject area;

· know the principle of operation of automobile scales;

· be able to connect peripheral devices.

2. Purpose and conditions for the use of APCS "VP".

Dispatching of production, transport and roads is successfully applied in many areas of activity, from commercial roads, crossings and automatic parking lots to the automation of the gas production industry.

The software and hardware complex of the APCS "Weight Flow" is designed to automate industrial weighing systems (automatic scales, truck scales, etc.) and document workflow, with configuration that takes into account the industry and accounting specifics of the enterprise.

All systems can be easily integrated with other systems, for example accounting systems (1C, Turbobukhgalter, SAP, BAAN, etc.). The systems are also available with a remote control option. All our projects include advanced software and hardware solutions using RFID (radio frequency identification) technologies, both active and passive.

APCS "Weight Flow" includes the installation of security and video surveillance systems, access control systems at industrial facilities for various purposes and any level of complexity, with their integration into the enterprise's technological processes and document management, as well as the use of modern RFID technologies, (active / passive) .

3. Solution of the APCS "VP" system

Typical configuration options for the APCS "Weight Flow" systems

Event identification options. An "event" is an important component that allows the system to operate without human involvement, eliminating the "risks" associated with the actions of dishonest employees.

1. Intelligent video analytics: recognition of vehicles and of vehicle/wagon/container numbers;
2. RFID: radio frequency identification (active or passive);
3. Various sensors: induction sensors, thermal sensors;
4. Manual entry of event data.

Actuating devices:
- any digital devices whose design includes connection ports (COM, USB, RS-232/485, IP network, etc.);
- any analog devices with on/off functions (traffic lights, motors, lamps, barriers, gates, etc.);
- digital sensors/analyzers, electronic or with dry contacts.

Software components of APCS "VP"
Several APCS modules are available; their functionality is briefly described in the specification and in more detail in the manual. The main software components of the APCS "Weight Flow" are listed below. Each module has certain basic functions:

1. Server: APCS "Weight Flow" central weighing server (WEB, SQL, URBD).

2. Weighing program: APCS "Weight Flow" auto weighing / railway weighing module.

3. Use of various devices in the system: APCS "Weight Flow" "Controller +" module.

4. Corrections, visible/invisible: APCS "VP" "Laboratory" module.

5. Additional workstation: APCS "VP" additional workstation module (can be connected remotely or over the network to the AWP PVK).

4. Starting the system


Fig. 2. Interface of APCS "Weight Flow"

The interface consists of the following elements:

1. Navigation menu. Used to set up and manage the system.

2. Buttons for switching between scales. Serve to switch the display of the status of different scales and to indicate the currently active scale when more than one scale is connected to the system.

3. Operator menu. Serves to manage weighing, documents and access control system. Switches the appearance and functions of the operator panel.

4. Operator panel. Serves to manage weighing, documents and access control system. Appearance and functions depend on the currently selected tab in the operator menu (item 3). When the system starts, the scale control panel is displayed (as in Fig. 2).

5. Calendar. Serves to select, by date, the weighing results displayed on the weighing protocol panel (item 7) and to display the current date.

6. Button "Write document". Used to create a new document.

7. Weighing protocol panel. Serves to display the weighing results for the specific date selected in the calendar (item 5).

8. Video panel. Displays the video broadcast from CCTV cameras.

The navigation menu (Fig. 3) is located in the upper left corner of the monitor and consists of the following sections: "File", "Configuration", "Modules", "Windows", "About".


Fig. 4. File menu.

Menu "Configuration" (Fig. 5)

Provides access to system service parameters

"Print Form Designer" - used to register document layouts

"System settings" - serves to configure the system in accordance with the required parameters


Fig. 6. Menu "Modules".

Menu "Window" (Fig. 7)

Displays a list of open windows and allows you to switch between them.



STATE STANDARD OF THE USSR

INTERFACE
FOR AUTOMATED
CONTROL SYSTEMS
OF DISPERSED OBJECTS

GENERAL REQUIREMENTS


K.I. Didenko, cand. tech. sciences; Yu.V. Rosen; K.G. Karnaukh; M.D. Gafanovich, cand. tech. sciences; K.M. Usenko; J.A. Gusev; L.S. Lanina; S.N. Kiyko

INTRODUCED by the Ministry of Instrumentation, Automation and Control Systems

Member of the Board N.I. Gorelikov

APPROVED AND INTRODUCED BY Decree of the USSR State Committee for Standards dated March 30, 1984 No. 1145

STATE STANDARD OF THE USSR


until 01.01.90

Non-compliance with the standard is punishable by law

This standard applies to an interface that regulates the general rules for organizing the interaction of local subsystems as part of automated control systems for dispersed objects using a backbone communication structure (hereinafter referred to as the interface).

As part of the physical implementation, the standard applies to the interfaces of aggregate tools that use electrical signals to transmit messages.

1. PURPOSE AND SCOPE

1.1. The interface is designed to organize communication and information exchange between local subsystems as part of automated control systems for technological processes, machines and equipment in various industries and non-industrial areas.


interface with operational and technological personnel;

interfacing with upper-tier control computing complexes in hierarchical systems.

2. MAIN FEATURES

2.1. The interface implements a bit-serial synchronous method for transmitting digital data signals over a two-wire trunk channel.

2.2. The total attenuation of the signal between the output of the transmitting station and the input of the receiving station should be no more than 24 dB; the attenuation introduced by the communication line (main channel and branches) should be no more than 18 dB, and the attenuation introduced by each device coupling to the line no more than 0.1 dB.

Note. When using cable type RK-75-4-12, the maximum length of the communication line (including the length of the taps) is 3 km.
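A possible way to read this budget (our own illustration, not a figure stated in the standard): the margin between the 18 dB line allowance and the 24 dB total allowance bounds the number of 0.1 dB couplers that can be attached to the line.

    # Illustrative use of the attenuation budget from clause 2.2. The per-coupler
    # count below is our own inference, not a value given by the standard.
    TOTAL_BUDGET_DB = 24.0   # transmitter output to receiver input
    LINE_BUDGET_DB = 18.0    # trunk channel plus branches
    PER_COUPLER_DB = 0.1     # each device coupling to the line

    max_couplers = round((TOTAL_BUDGET_DB - LINE_BUDGET_DB) / PER_COUPLER_DB)
    print(f"The budget admits up to {max_couplers} line couplers")   # -> 60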


(New edition, Rev. No. 1).

2.5. To represent signals, two-phase modulation with phase-difference coding should be used.

2.6. For code protection of transmitted messages, a cyclic code with the generating polynomial X^16 + X^12 + X^5 + 1 should be used.
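A minimal sketch of such a check code is shown below; the polynomial x^16 + x^12 + x^5 + 1 corresponds to the value 0x1021, while the initial register value, the bit order and the exact set of bytes covered by the check are assumptions, since the clause fixes only the polynomial.

    # Bitwise CRC-16 with generating polynomial x^16 + x^12 + x^5 + 1 (0x1021).
    # Initial value, bit order and the span covered are assumptions for the sketch.
    def crc16(data, init=0x0000):
        crc = init
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                if crc & 0x8000:
                    crc = ((crc << 1) ^ 0x1021) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    body = bytes([0x05, 0x02, 0x01, 0x03])       # AB, CF, AS, DS - illustrative values
    check = crc16(body)
    kb1, kb2 = check >> 8, check & 0xFF          # control bytes KB1, KB2, high byte first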

2.7. In order to eliminate random errors, it should be possible to retransmit messages between the same local subsystems.

2.8. The transmission of messages between local subsystems must be carried out using a limited set of functional bytes, the sequence of which is determined by the message format. The interface establishes two types of message formats (fig. 1).

Format 1 has a fixed length and is intended for transmission of interface messages only.

Format 2 includes a data part variable in length for data transmission.

Format 2, depending on the transmission rate (low-speed or high-speed range), should look like 2.1 or 2.2, respectively.

Fig. 1. Message format types

2.9. Message formats must include the following function bytes:

the synchronizing byte CH;

the address of the called local subsystem, AB;

the code of the performed function, CF;

the own address of the local subsystem, AS;

the number of data bytes in the information part, DS, DS1 or DS2;

the information bytes DN1 - DNp;

the control code bytes KB1 and KB2.

2.8, 2.9. (Revised edition, Rev. No. 1).

2.9.1. The CH sync byte is used to indicate the start and end of a message. The sync byte is assigned the code 01111110.

2.9.2. The AB byte specifies the address of the local subsystem to which the message is directed.

2.9.3. The performed function CF byte defines the operation that is performed in this communication cycle. The assignment of bits within the CF byte is shown in Fig. 2.

Fig. 2. CF byte structure

2.9.4. The CF function codes correspond to the following operations:

multicast (with general addressing);

write-read;

centralized polling of controllers;

transfer of control of the main channel;

return of control of the main channel, message with general address not accepted;

return of control of the main channel, message with general address accepted;

decentralized polling of controllers: no channel capture request, message with general address not accepted;

request to capture the main channel, message with general address not accepted;

request to capture the main channel, message with general address accepted;

passing the token;

message acknowledgment;

message confirmation;

acknowledgment of receipt and subsequent issuance of a message;

responses to centralized polling: no channel capture request, message with general address not accepted; no channel capture request, message with general address accepted; channel capture request, message with general address not accepted; channel capture request, message with general address accepted.

Bit 0 determines the type of the message (request or response) transmitted over the trunk channel.

Bit 1 is set to 1 when the subsystem is busy (for example, while forming a data buffer).

Bit 2 is set to 1 if a Format 2 message is transmitted in this cycle.

Bit 3 is set to 1 in a retransmitted message to the same local subsystem in the event of an error or no response.

(Revised edition, Rev. No. 1).
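As an illustration (not part of the standard), the sketch below packs the service bits described above into a CF byte; the placement of the function-code field within the byte is an assumption, since the actual layout is fixed by Fig. 2, which is not reproduced here.

    # Packing the CF service bits described above. The position of the function
    # code within the byte is an assumption; Fig. 2 of the standard fixes the
    # real layout.
    def pack_cf(code, response, busy, format2, retry):
        cf = (code & 0x0F) << 4          # assumed placement of the function code
        cf |= int(response) << 0         # bit 0: message type (request/response)
        cf |= int(busy) << 1             # bit 1: subsystem busy
        cf |= int(format2) << 2          # bit 2: a format 2 message in this cycle
        cf |= int(retry) << 3            # bit 3: retransmission flag
        return cf

    cf = pack_cf(code=0x2, response=False, busy=False, format2=True, retry=False)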

2.9.5. The own address AS of the local subsystem generating the message is transmitted in order to inform the called subsystem of the address for its response and to allow the correctness of its selection to be checked.

2.9.6. The DS byte determines the length of the information part in format 2.1: the value of the binary code of the DS byte determines the number of DN bytes. The exception is the code 00000000, which means that 256 information bytes are transmitted.

Bytes DS1, DS2 determine the length of the information part in format 2.2.

(Revised edition, Rev. No. 1).

2.9.7. The DN data bytes represent the information part of the format 2 message. The data encoding must be established by the normative documents for the interfaced local subsystems.

2.9.8. The control bytes KB1, KB2 form the control part and are used to determine the validity of the transmitted messages.
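To show how the listed bytes fit together, here is a sketch that assembles a format 2 message in the order CH, AB, CF, AS, DS, DN1..DNp, KB1, KB2, CH. The sync value 0x7E, the one-byte DS field (as in format 2.1), the field values and the assumption that the check covers everything between the sync bytes are our own; the zero-insertion rule of clause 5.1.4 is not applied here.

    # Sketch of assembling a format 2.1 message from the bytes of clause 2.9.
    # The sync value, the scope of the check bytes and the field values are
    # assumptions made for illustration only.
    SYNC = 0x7E                                  # assumed CH flag, 01111110

    def crc16(data):                             # same polynomial as in clause 2.6
        crc = 0
        for b in data:
            crc ^= b << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    def build_format2(ab, cf, as_addr, data):
        ds = len(data) & 0xFF                    # DS = 0 would stand for 256 data bytes
        body = bytes([ab, cf, as_addr, ds]) + bytes(data)
        kb = crc16(body)
        return bytes([SYNC]) + body + bytes([kb >> 8, kb & 0xFF, SYNC])

    frame = build_format2(ab=0x05, cf=0x22, as_addr=0x01, data=b"\x10\x20\x30")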

3. STRUCTURE OF THE INTERFACE

3.1. The interface provides the ability to build dispersed systems with a backbone communication structure (Fig. 3).

Fig. 3. The structure of the connection of local subsystems: LC1 - LCn - local subsystems; MK - main channel; PC - terminating resistor

3.2. All interfaced local subsystems must be connected to the main channel through which information is exchanged.

3.3. To interface local subsystems with the main channel, they must include communication controllers. Communication controllers must implement:

converting information from the form of representation accepted in the local subsystem into the form required for transmission over the main channel;

adding and highlighting synchronization characters;

recognition and reception of messages addressed to a given local subsystem;

generation and comparison of control codes to determine the validity of received messages.

3.4. The exchange of messages between local subsystems must be organized in the form of cycles. A cycle is a procedure for transmitting one message of format 1 or 2 to the main channel. Several interconnected cycles form the transmission process.

3.5. The transfer process must be organized according to the asynchronous principle: the calls sent to the main channel must be answered by the local subsystem (with the exception of group operations).

4. INTERFACE FUNCTIONS

4.1. The interface establishes the following types of functions, which differ in the levels of control that local subsystems occupy during the messaging process:

passive reception;

reception and response;

decentralized management of the main channel;

request to capture the main channel;

centralized management of the main channel.

(Revised edition, Rev. No. 1).

4.2. The set of interface functions implemented by a local subsystem is determined by the range of tasks solved by that subsystem and by its functional characteristics.

4.3. The type of a local subsystem is determined by the highest-level function it provides. A local subsystem is considered active with respect to the function it performs in the current cycle.

4.4. In accordance with the composition of the implemented interface functions, the following types of local subsystems are distinguished:

passive controlled subsystem;

controlled subsystem;

control subsystem;

initiative control subsystem;

leading subsystem.

4.4.1. The passive controlled subsystem only recognizes and receives messages addressed to it.

4.4.2. The controlled subsystem receives messages addressed to it and generates a response message in accordance with the received function code.

4.4.3. The control subsystem must be able to:

take control of the exchange through the main channel in a centralized and decentralized mode;

form and transmit messages over the main channel;

receive and analyze response messages;

return or transfer control of the trunk channel after the end of the transfer process.

(Revised edition, Rev. No. 1).

4.4.4. The initiative control subsystem, in addition to the function according to clause 4.4.3, must be able to generate a request signal to capture the trunk channel, receive and send appropriate messages when performing the search procedure for the requesting subsystem.

4.4.5. The leading subsystem coordinates the work of all local subsystems in the mode of centralized control of the trunk channel. It carries out:

arbitration and transfer of control of the main channel to one of the control local subsystems;

central control of all local subsystems;

control of the operation of the active control local subsystem;

transmission of messages with a common address for all (or several) local subsystems.

Only one subsystem that has an active master function can be connected to a trunk channel.

(Revised edition, Rev. No. 1).

5. COMMUNICATIONS

5.1. Each cycle of message transmission over the trunk channel must begin with the synchronization of all interfaced subsystems.

5.1.1. To perform synchronization, the master or active control subsystem must send a CH sync byte to the trunk channel. It is allowed to send several sync bytes in sequence. Additional sync bytes are not included in the message format.

5.1.2. After all subsystems have synchronized, the master or active control subsystem sends a format 1 or format 2 message, including its own CH bytes, to the trunk channel.

5.1.3. All bytes, with the exception of the control bytes KB1 and KB2, are transmitted to the main channel starting from the least significant bit.

The bytes KB1, KB2 are transmitted starting from the most significant bit.

5.1.4. To exclude from the message transmitted to the main channel any bit sequence that matches the CH byte code, each message must be converted so that after 5 consecutive "1" characters one additional "0" character is inserted. The receiving subsystem must correspondingly remove this character from the message.
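A minimal sketch of this zero-insertion rule is given below; representing the bit stream as a text string of "0"/"1" characters is purely for illustration.

    # Zero insertion per clause 5.1.4: after five consecutive "1" bits of the
    # message body an extra "0" is inserted so the body can never imitate the CH
    # sync byte; the receiver removes the inserted zeros.
    def stuff(bits):
        out, ones = [], 0
        for b in bits:
            out.append(b)
            ones = ones + 1 if b == "1" else 0
            if ones == 5:
                out.append("0")
                ones = 0
        return "".join(out)

    def unstuff(bits):
        out, ones, i = [], 0, 0
        while i < len(bits):
            b = bits[i]
            out.append(b)
            if b == "1":
                ones += 1
                if ones == 5:
                    i += 1          # the next bit is an inserted "0"; drop it
                    ones = 0
            else:
                ones = 0
            i += 1
        return "".join(out)

    assert unstuff(stuff("0111111101111110")) == "0111111101111110"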

5.1.5. After the transmission of the message, including the end CH byte, the transmitting subsystem must send at least 2 more CH bytes to complete the receive operations, after which the transmission cycle ends.

5.2. The trunk channel control procedure defines the sequence of operations for activating one of the control subsystems to perform the message transfer process. Subsystems connected by interface can operate in the mode of centralized control of the main channel.

5.2.1. The procedure for centralized control of the main channel provides for the presence of a leading subsystem that coordinates the interaction of subsystems by controlling the transfer of control of the main channel.

5.2, 5.2.1. (New edition, Rev. No. 1).

5.2.2. When transferring control of a trunk link, the master subsystem designates the active control subsystem to perform the message passing process. To do this, the leading subsystem must send to the selected control subsystem a message of format 1 with function code KF6.

5.2.3. The control subsystem, after receiving a message with function code CF6, must become active and can perform several message exchange cycles in one transmission process. The number of exchange cycles must be controlled and limited by the leading subsystem.

5.2.4. After transferring control of the main channel, the leading subsystem must activate its passive receive function and start the control time countdown. If within the set time (the response waiting time should be no more than 1 ms) the designated active subsystem does not start transmitting messages over the main channel, the leading subsystem resends to the control subsystem a format 1 message with function code KF6 and the retransmission flag set.

5.2.5. If the control subsystem does not start transmitting messages (does not become active) even during repeated access, the leading subsystem determines it as faulty and implements the procedures provided for such a situation.

5.2.6. At the end of the transfer process, the active control subsystem must perform the function of returning control of the trunk channel. To do this, it must send a message to the leading subsystem with the function code CF7 or CF8.

5.2.7. The procedure for decentralized control of the main channel provides for the sequential transfer of the active function to other control subsystems by passing the token. The subsystem that received the token is the active subsystem.

5.2.8. For the initial capture of the token, all subsystems connected to the trunk channel must include interval timers, and the time interval values must be different for all subsystems. A subsystem with a higher priority should be assigned a smaller time interval.

5.2.9. If, after the subsystem's own time interval expires, the trunk channel is free, this subsystem shall consider itself the owner of the token and begin the transfer process as an active control subsystem.

5.2.10. After the transfer process is completed, the active control subsystem must transfer control of the trunk channel to the next control subsystem, with address AB = AS + 1; to do this it must issue the token, activate its passive receive function and start the control time countdown.

As a marker, a message of format 1 (Fig. 1) is used with the function code KF13 and the address AB.

If the subsystem that received the token does not start the transfer process within the specified time, the subsystem that sent it must attempt to transfer the token to the subsystems with the subsequent addresses AB = AS + 2, AB = AS + 3, etc., until the token is accepted. The address of the subsystem that accepted the token must be remembered by the transmitting subsystem as the next address until the initial capture procedure is performed again.
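The following sketch illustrates the token logic of clauses 5.2.8 - 5.2.10; the idle intervals, the address range and the set of live subsystems are illustrative assumptions.

    # Decentralized token logic of clauses 5.2.8 - 5.2.10 (illustrative values).
    # A smaller idle interval means a higher priority for the initial capture.
    idle_intervals_ms = {1: 5, 2: 10, 3: 15, 4: 20}

    def initial_token_owner(intervals):
        """The subsystem whose interval timer expires first captures the token."""
        return min(intervals, key=intervals.get)

    def next_token_holder(current, alive, max_addr):
        """Try AB = current+1, current+2, ... until a live subsystem accepts."""
        for offset in range(1, max_addr):
            candidate = (current + offset - 1) % max_addr + 1
            if candidate in alive:
                return candidate
        return None

    owner = initial_token_owner(idle_intervals_ms)                    # -> 1
    successor = next_token_holder(owner, alive={1, 3, 4}, max_addr=4) # -> 3, node 2 is down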

5.2.11. Any active subsystem that has detected an unauthorized access to the communication channel must perform the actions according to clause 5.2.8.

5.2.12. In decentralized trunk control mode, all subsystems must have an active passive receive function. In case of loss of the token (for example, in case of failure of the active control subsystem), the mechanism for the initial capture of the token (clauses 5.2.8, 5.2.9) should be triggered and the operation should be restored.

5.2.13. Any subsystem that owns the token and has an active leading function may take over centralized control of the trunk channel and retain it until its leading function is deactivated.

5.2.7 - 5.2.13. (Introduced additionally, Amendment No. 1).

5.3. In the centralized control mode, the transfer of control over the main channel can be organized upon requests from initiative control subsystems.

5.3.1. Subsystems must have an active function of requesting the capture of the main channel to organize the transfer of control on demand.

5.3.2. There are two ways to organize the search for a subsystem requesting access to the main channel - centralized and decentralized.

5.3, 5.3.1, 5.3.2. (New edition, Rev. No. 1).

5.3.3. With centralized polling, the leading subsystem must sequentially poll all initiative control subsystems connected to the main channel. The leading subsystem must send to each initiative control subsystem a message of format 1 with function code CF5.

The initiative control subsystem must send a response message to the leading subsystem with one of the function codes CF21 - CF24, depending on its internal state. The sequence of operations in the centralized polling procedure is shown in Fig. 4.

5.3.4. Decentralized polling provides a fast process for determining the initiative control subsystems that have raised a request for access to the main channel. The leading subsystem addresses only the first initiative control subsystem in the queue, with a format 1 message carrying function code CF9.

Each initiative control subsystem must receive the message addressed to it and send its own message to the main channel, addressed to the next subsystem in the queue. The generated message must carry one of the function codes KF9 - KF12, characterizing the state of this subsystem. The decentralized polling procedure is illustrated in Fig. 5.

5.3.5. The leading subsystem, after starting the decentralized poll, activates its passive reception function and receives all messages sent by the initiative control subsystems. This allows the leading subsystem, after the decentralized poll has ended, to have information about requests for access to the main channel from all initiative control subsystems.

Fig. 4. Centralized polling of subsystems

Fig. 5. Decentralized polling of subsystems

The last initiating control subsystem in the decentralized polling chain must address its message to the leading subsystem, which means the end of the decentralized polling procedure.

5.3.6. If any subsystem does not issue a message to the main channel after being called, the leading subsystem must become active and send it a repeated message identical to the previous one. If there is no response (or errors occur) on the repeated call, the leading subsystem starts the decentralized poll from the next subsystem in the queue, and the failed subsystem is excluded from the poll.

5.4. The data transfer procedure can be performed as one of the following processes:

group write;

write;

read;

write-read.

5.4.1. Group write must be performed by the leading subsystem. When group write is performed, the leading subsystem issues a format 2 message to the main channel in which the code 11111111 (255) is written as the AB address, together with the function code CF1.

5.4.2. All subsystems responding to a multicast address must receive a message from the trunk link and latch a state indicating that the message with the shared address has been received. Response messages during group recording are not issued by receiving subsystems.

5.4.3. Acknowledgment of receipt of a group message is carried out in the process of centralized or decentralized polling, as well as when the control of the main channel is returned, for which a bit of the corresponding state is included in the function codes KF7, KF8, KF9 - KF12 and KF21 - KF24.

5.4.4. During the write process, the leading subsystem or the active control subsystem sends to the main channel a format 2 message with function code KF2, intended for reception by a specific controlled subsystem whose address is indicated in the AB byte. After issuing the message, the active control subsystem starts the control time and waits for a response message.

5.4.5. The addressed subsystem recognizes its address and receives the message sent to it. In the event that the message is received without error, the receiving subsystem must issue a response to the main channel in the form of a message of format 1 with the function code KF18.

5.4.6. If an error is detected in the received message, the receiving subsystem shall not issue a response.

5.4.7. The active control subsystem, if there is no response within the control time interval, must retransmit the same message.

5.4.8. If there is no response to a repeated message, this subsystem is considered to be faulty and the active control subsystem must perform the procedure prescribed for such a situation (switching on an alarm, removing a subsystem from circulation, switching on a reserve, etc.).
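A compact sketch of the write procedure of clauses 5.4.4 - 5.4.8 is given below; the transport callbacks send and wait_ack are placeholders, and the 1 ms control time is borrowed from clause 5.2.4 as an assumption.

    # Write with acknowledgment and a single retry (clauses 5.4.4 - 5.4.8).
    # `send` and `wait_ack` are placeholder transport callbacks; the control time
    # value is an assumption borrowed from clause 5.2.4.
    def write_with_retry(send, wait_ack, address, payload, control_time_s=0.001):
        for attempt in (1, 2):
            send(address, payload, retry=(attempt == 2))   # second try sets CF bit 3
            if wait_ack(address, control_time_s):          # True when a KF18 reply arrives
                return True                                # write completed
        return False   # no response twice: treat the subsystem as faulty (clause 5.4.8)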

5.4.9. In the mode of centralized control of the main channel, the dialogue between the controlling and managed subsystems must be constantly controlled by the leading subsystem, which at this time performs the function of passively receiving messages.

(New edition, Rev. No. 1).

5.4.10. The reading process must be started by sending a Format 1 message with function code CF3 by the active control subsystem.

5.4.11. The subsystem to which this message is addressed, in case of its correct reception, must issue a response message of format 2 with the function code KF19.

5.4.12. If the called subsystem cannot issue data within the set waiting time, then after receiving the message with the read function it must set the subsystem-busy flag and proceed to form the data array to be issued.

5.4.13. The controlled subsystem must remember the address of the active control subsystem that addressed it (the one for which data is being prepared) and set the busy flag in response messages to other control subsystems.

5.4.14. To read the prepared data, the active control subsystem must again apply to the controlled subsystem with a message of format 1 with function code CF3. If the data has been prepared by this time, then the managed subsystem must issue a response message of format 2 with the function code CF19.

The subsystem busy indicator shall be removed only after the transmission of the format 2 response message.

5.4.15. If the response message is received by the active control subsystem without error, then the reading process is completed.

5.4.16. If an error is detected or there is no response, the active control subsystem repeats the call and then takes measures similar to those given in clauses 5.4.7, 5.4.8.

5.4.17. Write-read is a combination of the processes described in clauses 5.4.4 - 5.4.15.

5.4.18. The active control subsystem sends a format 2 message with function code KF4 to the main channel.

5.4.19. The addressed subsystem must accept the message sent to it and generate a response.

5.4.20. The response message in this process must be represented by format 2 (contain readable data) and have the function code KF20.

5.4.21. The control of the validity of transmitted messages and the actions taken by the active control subsystem should be similar to those given for the write and read processes.

6. PHYSICAL IMPLEMENTATION

6.1. Physically, the interface is implemented in the form of communication lines that form a backbone channel, and communication controllers that provide direct connection to the communication lines.

6.2. Communication controllers must be implemented in the form of functional units that are part of the subsystem, or in the form of structurally separate devices.

6.3. The rules for pairing and interaction of communication controllers with the functional part of the subsystem are not regulated by this standard.

6.4. For communication lines of the main channel, a coaxial cable with a characteristic impedance of 75 ohms should be used.

6.5. The coaxial cable must be terminated at both ends with (75 ± 3.75) ohm terminating resistors. The power of the terminating resistors must be at least 0.25 W.

Termination resistors must be connected to the ends of the communication lines using RF connectors.

Grounding or connection of communication lines with the housings of devices in mating subsystems is not allowed.

6.6. The attenuation along the communication line of the main channel should be no more than 18 dB for a speed of 500 kbps.

6.7. The total attenuation introduced by each branch from the main channel communication line should not exceed 0.1 dB, including the attenuation determined by the quality of the connection point, the attenuation on the branch and the attenuation depending on the input-output parameters of the matching circuits.

6.8. Branches from the communication line of the main channel must be carried out with a coaxial cable with a characteristic impedance of 75 ohms. The length of each branch is no more than 3 m. The total length of all branches is included in the total length of the main channel. Connection to the communication line must be carried out using RF connectors. Disabling any of the subsystems should not lead to a break in the communication line.

6.9. Communication controllers must contain transceiver amplifiers that provide:

reception sensitivity: not worse than 240 mV;

output level: 4 to 5 V;

output impedance: (37.50 ± 1.88) ohms.

6.10. Electrical signals for transmission to the main channel are formed by modulating the clock frequency with the signals of the transmitted message. Each bit of the transmitted message corresponds to a full period of the clock frequency, and the rising and falling edges of the transmitted signal must coincide with the zero crossings of the clock frequency (Fig. 6). The correspondence of the symbols received from the trunk channel to meaningful states is established as follows:

the symbol "0" corresponds to the opposite phase relative to the previous symbol,