Client-server technology. What are a client and a server

"Client-server" is a model of interaction between computers on a network.

As a rule, computers in this configuration are not equal: each has its own purpose, different from the others, and plays its own role.

Some computers on the network own and manage information and computing resources such as processors, file systems, mail services, printing services, and databases. Other computers access these resources by using the services of the former. The computer that manages a given resource is called the server for that resource, and a computer that wants to use the resource is called its client (Fig. 4.5).

A specific server is defined by the type of resource it manages. If the resource is a database, we speak of a database server, whose purpose is to serve client requests for data processing; if the resource is a file system, we speak of a file server, and so on.

On a network, the same computer can act as both a client and a server. For example, in an information system that includes personal computers, a mainframe, and a minicomputer, the minicomputer can act both as a database server, serving requests from clients (the personal computers), and as a client, sending requests to the mainframe.

The same principle applies to the interaction of programs. If one of them performs certain functions, providing others with a corresponding set of services, then such a program acts as a server. Programs that use these services are called clients.
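The request-response relationship between a client program and a server program can be sketched with ordinary TCP sockets. This illustrative example is not taken from the text: the server "owns" a trivial service (converting text to upper case), the client connects, sends a request, and waits for the reply; all names here are hypothetical.

```python
import socket
import threading

HOST = "127.0.0.1"

def handle(conn):
    # Server side: read one request, compute a reply, send it back.
    # Toy protocol: a single recv is enough for short messages.
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"UPPER:{request.upper()}".encode())

def serve(sock):
    conn, _addr = sock.accept()
    handle(conn)

def run_once(request: str) -> str:
    # One-shot demo: one server thread answers one client.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=serve, args=(srv,))
    t.start()
    # Client side: connect, send the request, wait for the response.
    with socket.create_connection((HOST, port)) as cli:
        cli.sendall(request.encode())
        response = cli.recv(1024).decode()
    t.join()
    srv.close()
    return response

print(run_once("hello"))  # the server returns the transformed request
```

The same pair of roles could just as well be played by two programs on one machine or on opposite sides of a network; nothing in the scheme changes.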

Data processing is based on database technology and data banks. In a database, information is organized according to certain rules and represents an integrated set of interrelated data. This organization speeds up the processing of large volumes of data. Data processing at the machine level is the execution of a sequence of operations specified by an algorithm. Processing technology has come a long way.

Today, data processing is carried out by computers or their systems. The data is processed by user applications. Of primary importance in organizational management systems is the processing of data for the needs of users, and primarily for top-level users.

In the evolution of information technology there is a noticeable drive to simplify computers, their software, and the processes performed on them, and to reduce their cost to users. At the same time, users receive ever broader and more complex services from computing systems and networks, which has led to the emergence of the technologies called client-server.


Limiting the number of complex subscriber systems in a local network leads to the division of computers into servers and clients. Implementations of client-server technology may differ in the efficiency and cost of information and computing processes, in the levels of software and hardware, in the mechanism connecting the components, in the speed of access to information, in its diversity, and so on.

Receiving a rich and complex service organized on the server makes users more productive and costs them less than equipping many client computers with complex software and hardware. Client-server technology, being more powerful, replaced file-server technology: it combines the advantages of single-user systems (a high level of interactive support, a user-friendly interface, low price) with the advantages of larger computer systems (integrity support, data protection, multitasking).

In the classical sense, a DBMS is a set of programs that allow you to create a database and keep it up to date. Functionally, a DBMS consists of three parts: the kernel (the database engine), the language, and programming tools. The programming tools form the client interface, or front end; they may include a query-language processor.

A language is a collection of procedural and non-procedural commands supported by a DBMS.

The most commonly used languages are SQL and QBE. The kernel performs all the other functions included in the concept of "database processing".

The main idea of client-server technology is to place the server on a powerful machine, while the client applications that use its language run on less powerful machines. This way both the resources of the more powerful server and those of the less powerful client machines are used. Input/output to the database is based not on physical but on logical fragmentation of the data: the server sends clients not a complete copy of the database but only the logically necessary portions, thereby reducing network traffic.

Network traffic is the flow of messages on the network. In client-server technology, client programs and their queries are stored separately from the DBMS. The server processes client requests, selects the required data from the database, sends it to the clients over the network, updates information, and ensures the integrity and safety of the data.
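The point about "logically necessary portions" can be illustrated with an embedded SQLite database standing in for the server-side DBMS. This is a simplified sketch: a real client-server DBMS runs in a separate process, but the query/result pattern is the same, and the table and names here are made up.

```python
import sqlite3

# In-memory database stands in for the server-side DBMS.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, "Ivanov", 120.0), (2, "Petrov", 75.5), (3, "Ivanov", 40.0)])
db.commit()

# The client sends a query; only the matching rows travel back,
# never a copy of the whole table.
rows = db.execute(
    "SELECT id, total FROM orders WHERE customer = ?", ("Ivanov",)
).fetchall()
print(rows)  # two rows instead of the full table
```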

The main advantages of client-server systems are as follows:

Low network load (the workstation sends a request to the database server to search for certain data, the server itself performs the search and returns over the network only the result of processing the request, i.e. one or more records);

High reliability (DBMS based on client-server technology support transaction integrity and automatic recovery in case of failure);

Flexible adjustment of user rights (some users may be allowed only to view data, others to view and edit it, and still others may see no data at all);

Support for large fields (data types whose size can be measured in hundreds of kilobytes and megabytes are supported).
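The rights scheme mentioned in the list above can be sketched with a hypothetical role table; real DBMSs express the same idea with GRANT/REVOKE statements and views, so this is only an illustration of the principle.

```python
# Hypothetical role table mirroring the rights described above:
# viewers only read, editors read and write, blocked users see nothing.
PERMISSIONS = {
    "viewer":  {"read"},
    "editor":  {"read", "write"},
    "blocked": set(),
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed the given action."""
    return action in PERMISSIONS.get(role, set())

print(can("editor", "write"), can("viewer", "write"), can("blocked", "read"))
```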

However, client-server systems also have disadvantages:

Difficulty in administration due to territorial disunity and heterogeneity of computers at workplaces;

Insufficient degree of information protection from unauthorized actions;

A closed protocol for communication between clients and servers, specific to a given information system.

To eliminate these shortcomings, the Intranet architecture is used, which concentrates and combines the best qualities of centralized systems and traditional client-server systems.

Client-server architecture is used in a large number of network technologies used to access various network services. Let's briefly look at some types of such services (and servers).

Web servers

Initially, they provided access to hypertext documents via HTTP (HyperText Transfer Protocol). Now they support advanced capabilities, in particular working with binary files (images, multimedia, etc.).
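A toy web server answering HTTP GET requests can be put together from Python's standard library alone. This sketch only illustrates the request-response cycle, not a production configuration:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every GET gets the same small HTML document back.
        body = b"<h1>Hello from a tiny web server</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client (here, urllib standing in for a browser) fetches the page.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    page = resp.read().decode()
server.shutdown()
print(page)
```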

Application Servers

Designed for the centralized solution of applied problems in a certain subject area. Users are given the right to run server programs for execution. Using application servers reduces the requirements on client configurations and simplifies overall network management.

Database servers

Database servers are used to process user queries in SQL language. In this case, the DBMS is located on the server, to which client applications connect.

File servers

File server stores information in the form of files and provides users with access to it. As a rule, a file server also provides a certain level of protection against unauthorized access.

Proxy server

First, it acts as an intermediary, helping users obtain information from the Internet while protecting the network.

Secondly, it stores frequently requested information in a cache on the local disk, quickly delivering it to users without having to access the Internet again.
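The caching behaviour described above can be sketched in a few lines. `CachingProxy` and its `fetch` callback are hypothetical names; a real proxy would also honour cache-control headers and expiry times.

```python
class CachingProxy:
    """Toy proxy: answers from a local cache when possible,
    otherwise forwards the request to the origin via `fetch`."""

    def __init__(self, fetch):
        self.fetch = fetch      # function that contacts the "Internet"
        self.cache = {}
        self.origin_hits = 0    # how often we actually went upstream

    def get(self, url):
        if url not in self.cache:
            self.origin_hits += 1
            self.cache[url] = self.fetch(url)
        return self.cache[url]

# Hypothetical origin: pretend every fetch is slow and expensive.
proxy = CachingProxy(fetch=lambda url: f"content of {url}")
proxy.get("http://example.org/")   # first request: goes upstream
proxy.get("http://example.org/")   # repeat request: served from cache
print(proxy.origin_hits)           # upstream was contacted only once
```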

Firewalls

Firewalls analyze and filter passing network traffic to ensure network security.

Mail servers

Provide services for sending and receiving electronic mail messages.

Remote Access Servers (RAS)

These systems provide communication with the network via dial-up lines. A remote employee can use the resources of a corporate LAN by connecting to it using a regular modem.

These are just a few types of the entire variety of client-server technologies used in both local and global networks.

To access certain network services, clients are used whose capabilities are characterized by the concept of "thickness": it determines the hardware configuration and the software available to the client. Let us consider the two boundary cases:

Thin client

This term denotes a client whose computing resources are only sufficient to run the required network application through a web interface. The user interface of such an application is formed by means of static HTML (JavaScript execution is not provided); all application logic is executed on the server.
For the thin client to work, it is enough just to provide the ability to launch a web browser, in the window of which all actions are carried out. For this reason, a web browser is often called a "universal client".

"Thick" client

This is a workstation or personal computer running its own disk operating system and having the necessary set of software. Thick clients turn to network servers mainly for additional services (for example, access to a web server or corporate database).
A “thick” client also means a client network application running under the local OS. Such an application combines a data presentation component (OS graphical user interface) and an application component (the computing power of the client computer).

Recently another term has come into use: the "rich" client. The "rich" client is a compromise between the "thick" and "thin" clients. Like the "thin" client, the "rich" client presents a graphical interface, described using XML tools, but it also includes some functionality of thick clients (for example, a drag-and-drop interface, tabs, multiple windows, drop-down menus, etc.).

The application logic of the “rich” client is also implemented on the server. Data is sent in a standard exchange format, based on the same XML (SOAP, XML-RPC protocols) and is interpreted by the client.
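On the client side, interpreting such an XML payload amounts to parsing it and walking the element tree. A minimal sketch with a made-up server reply:

```python
import xml.etree.ElementTree as ET

# Hypothetical server reply in an XML-based exchange format.
payload = """
<response>
  <user id="7">
    <name>Ivanov</name>
    <role>editor</role>
  </user>
</response>
"""

# The "rich" client parses the payload and extracts what it needs.
root = ET.fromstring(payload)
user = root.find("user")
print(user.get("id"), user.findtext("name"), user.findtext("role"))
```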

Some basic XML-based rich client protocols are given below:

  • XAML (eXtensible Application Markup Language) - developed by Microsoft, used in applications on the .NET platform;
  • XUL (XML User Interface Language) is a standard developed within the Mozilla project, used, for example, in the Mozilla Thunderbird email client or the Mozilla Firefox browser;
  • Flex is an XML-based multimedia technology developed by Macromedia/Adobe.

Conclusion

So, the main idea of the client-server architecture is to divide a network application into several components, each of which implements a specific set of services. The components of such an application can run on different computers, performing server and/or client functions. This improves the reliability, security, and performance of network applications and of the network as a whole.

Control questions

1. What is the main idea of client-server interaction?

2. What are the differences between the concepts of “client-server architecture” and “client-server technology”?

3. List the components of C-S interaction.

4. What tasks does the presentation component perform in the client-server architecture?

5. For what purpose are database access tools presented as a separate component in the client-server architecture?

6. Why is business logic identified as a separate component in the client-server architecture?

7. List the models of client-server interaction.

8. Describe the file-server model.

9. Describe the database server model.

10. Describe the application server model

11. Describe the terminal server model

12. List the main types of servers.

Using client-server technology

Over time, the rather limited file-server (FS) model for local networks was replaced by the successively appearing client-server models: RDA, DBS, and AS.

Client-server technology, which first took hold in database systems, became the main technology of the global Internet. Later, as Internet ideas were transferred to the sphere of corporate systems, Intranet technology emerged. Unlike client-server technology, it is oriented not toward data but toward information in its final, ready-to-consume form. Computing systems built on the Intranet include central information servers and components for presenting information to the end user (browsers, or navigator programs). Interaction between server and client on an Intranet is carried out using web technologies.

Today client-server technology is very widespread, but the technology itself offers no universal recipes. It only gives a general idea of how a distributed information system should be organized, and implementations of the technology in particular software products, and even in types of software, differ quite significantly.

Classic two-tier client-server architecture

As a rule, network components do not have equal rights: some have access to resources (for example, a database management system, processor, printer, or file system), while others gain access to those resources only through them.

"Client-server" technology is an architecture of a software complex in which an application program is distributed across two logically different parts (server and client) that interact according to a request-response scheme, each solving its own tasks.

A program (or computer) that controls and/or owns a resource is called a server for that resource.

A program (or computer) that requests and uses a resource is called a client of that resource.

A given software module may simultaneously implement the functions of a server with respect to one module and of a client with respect to another.

The main principle of the Client-Server technology is to divide the application functions into at least three parts:

User interface modules;

This group is also called presentation logic. With its help, users can interact with applications. Regardless of the specific characteristics of presentation logic (command line interface, proxy interfaces, complex graphical user interfaces), its purpose is to provide a means for more efficient exchange of information between the information system and the user.

Data processing modules;

This group is also called business logic. Business logic determines what a particular application is actually for (for example, the application functions specific to its domain). Separating an application along these boundaries provides a natural basis for distributing it across two or more computers.

Data storage and access modules (resource management functions);

This group is also called data access logic, data access algorithms, or simply data access. Data access algorithms act as an application-specific interface to a persistent storage device such as a DBMS or a file system. Through the data access modules, the application obtains a specific interface to the DBMS: it can manage database connections and queries (translating application-specific requests into SQL, retrieving results, and translating those results back into the application's data structures). Each of the listed links can be implemented independently of the others. For example, without changing the programs used to process and store data, you can change the user interface so that the same data is displayed as tables, histograms, or graphs. In the simplest applications all three parts are often combined in a single program, but even then the division corresponds to functional boundaries.
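The three-way split can be sketched in a few dozen lines. The layer names and the `ProductStore` class are illustrative, and SQLite stands in for the resource manager; note that only the data access layer contains SQL and only the presentation layer formats output.

```python
import sqlite3

# --- Data access layer: the only code that speaks SQL ---
class ProductStore:
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE products (name TEXT, price REAL)")

    def add(self, name, price):
        self.db.execute("INSERT INTO products VALUES (?, ?)", (name, price))

    def all(self):
        return self.db.execute("SELECT name, price FROM products").fetchall()

# --- Business logic layer: domain rules, no SQL, no formatting ---
def total_with_vat(store, rate=0.2):
    return round(sum(price for _name, price in store.all()) * (1 + rate), 2)

# --- Presentation layer: formatting only ---
def render(total):
    return f"Order total (incl. VAT): {total}"

store = ProductStore()
store.add("keyboard", 50.0)
store.add("mouse", 25.0)
print(render(total_with_vat(store)))
```

Because the layers only meet at narrow interfaces, the presentation could be swapped (a GUI, a web page) without touching the other two, exactly as the paragraph above describes.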

In accordance with the division of functions in each application, the following components are distinguished:

  • - data presentation component;
  • - application component;
  • - resource management component.

The classic two-tier client-server architecture distributes these three main parts of the application across two physical modules. Typically, the application component resides on the server (for example, on the database server), the data presentation component on the client side, and the resource management component is split between the server and client parts. This is the main disadvantage of the classic two-tier architecture.

In a two-tier architecture, when the data processing algorithms are split up, developers must have complete information about the latest changes made to the system and must understand them. This creates considerable difficulties in developing, maintaining, and installing client-server systems, since great effort must be spent coordinating the work of different groups of specialists. Contradictions often arise in the developers' actions, which slows down development and forces changes to finished, proven elements.

To avoid inconsistency between different elements of the architecture, two modifications of the two-tier “Client - Server” architecture were created: “Thick Client” (“Thin Server”) and “Thin Client” (“Fat Server”).

In this architecture, developers tried to perform data processing on one of two physical parts - either on the client side ("Thick Client") or on the server ("Thin Client").

Each approach has significant disadvantages. In the first case, the network is unjustifiably overloaded because unprocessed, i.e. redundant, data is transmitted over it. In addition, maintaining and changing the system becomes harder, because correcting an error or replacing a calculation algorithm requires simultaneously replacing all the interface programs; otherwise data inconsistencies or errors may occur. If all information processing is performed on the server, the problem of writing and debugging stored procedures arises, and a system whose processing lives on the server is practically impossible to move to another platform (OS), which is a serious drawback.

When building a two-tier classic client-server architecture, the following should be kept in mind.

The "Fat Server" architecture is similar to the "Thin Client" architecture

A request is transferred from the client to the server, processed by the server, and the result is transmitted back to the client. However, this architecture has the following disadvantages:

  • - implementation becomes more complicated, since languages like SQL are not well suited to developing such software, and good debugging tools are lacking;
  • - the performance of programs written in languages like SQL is significantly lower than that of programs created in other languages, which matters most for complex systems;
  • - programs written in DBMS languages usually do not work very reliably; an error in them can bring down the entire database server;
  • - the resulting programs are completely unportable to other platforms and systems.

The "Thick Client" architecture is similar to the "Thin Server" architecture

The request is processed on the client side, that is, all raw data is transferred from the server to the client. This architecture has the following drawbacks:

  • - updating the software becomes more complicated, because it must be replaced simultaneously throughout the entire system;
  • - the distribution of powers becomes more complicated, because access is limited not by actions, but by tables;
  • - the network is overloaded due to the transmission of unprocessed data through it;
  • - weak data protection, since it is difficult to correctly distribute powers.

To solve these problems, multi-tier (three or more tiers) client-server architectures must be used.

Three-tier model

Since the mid-1990s, the three-tier client-server architecture has gained popularity among specialists. It divides an information system by functionality into three distinct links: presentation logic, business logic, and data access logic. Unlike the two-tier architecture, the three-tier architecture contains an additional link, the application server, designed to implement the business logic. The client, which only sends requests to this middle tier, is thereby completely unloaded, while the servers' capabilities are used to the maximum.

In a three-tier architecture, the client is, as a rule, not burdened with data processing functions but performs its main role as a system for presenting information coming from the application server. Such an interface can be implemented with standard web technologies (a browser, CGI, Java). This reduces the volume of data exchanged between the client and the application server, allowing client computers to connect even over slow lines such as telephone lines. The client part can therefore be so simple that in most cases it is implemented with a universal browser, and if it does have to be changed, the change can be made quickly and painlessly.

An application server is software that acts as an intermediate layer between client and server. Several categories of such middleware products exist:

  • - Message oriented - prominent representatives are MQSeries and JMS;
  • - Object Broker - prominent representatives are CORBA and DCOM;
  • - Component based - prominent representatives are .NET and EJB.

Using an application server brings many benefits: for example, the load on client computers falls, because the application server distributes the load and provides protection against failures. And since the business logic is stored on the application server, changes in reporting or calculations do not affect the client programs at all.

There are a number of application servers from well-known companies such as Sun Microsystems, Oracle, IBM, and Borland, and each differs in the set of services provided (performance is not considered here). These services make it easier to program and deploy enterprise-scale applications. Typically an application server provides the following services:

  • - WEB Server - the package most often includes the powerful and popular Apache;
  • - WEB Container - allows you to run JSP and servlets. For Apache, this service is Tomcat;
  • - CORBA Agent - can provide a distributed directory for storing CORBA objects;
  • - Messaging Service - message broker;
  • - Transaction Service - as the name suggests, a transaction-management service;
  • - JDBC - drivers for connecting to databases, because it is the application server that will have to communicate with the databases and it needs to be able to connect to the database used in your company;
  • - Java Mail - this service can provide SMTP service;
  • - JMS (Java Messaging Service) - processing of synchronous and asynchronous messages;
  • - RMI (Remote Method Invocation) - invocation of remote methods.

Multi-tier client-server systems can be moved to web technology fairly easily: the client part is replaced with a specialized or universal browser, and the application server is supplemented with a web server and small server-side procedure-call programs. To develop these programs, you can use both the Common Gateway Interface (CGI) and the more modern Java technology.

In a three-tier system, the communication channel between the application server and the DBMS can use the fastest lines at minimal cost, since both servers are usually located in the same room (server room), and the transfer of large amounts of information between them will not overload the rest of the network.

From all of the above it follows that the two-tier architecture is markedly inferior to the multi-tier one; therefore today the multi-tier client-server architecture predominates, and three model variants are distinguished: RDA, DBS, and AS.

Various models of Client-Server technology

The first major technology underlying local area networks was the file-server (FS) model. At the time, this technology was very common among domestic developers, who used systems such as FoxPro, Clipper, Clarion, Paradox, and so on.

In the FS model, the functions of all three components (the presentation component, the application component, and the resource access component) are combined in code executed on the server computer (host). There is no client computer as such: data display and entry are performed by a terminal, or by a computer in terminal-emulation mode. Applications are usually written in a fourth-generation language (4GL). One of the computers on the network acts as the file server and provides file processing services to the other computers; it runs a network OS and plays the role of the component for access to information resources. On the other PCs on the network runs an application whose code combines the application component and the presentation component.

The interaction between client and server proceeds as follows: a request is sent to the file server, which transmits the required block of data to the DBMS located on the client computer, where all processing is performed.

An exchange protocol is a set of calls that provide an application with access to the file system on a file server.

The positive aspects of this technology are:

  • - ease of application development;
  • - ease of administration and software updates
  • - low cost of workplace equipment (terminals or cheap computers with low specifications in terminal emulation mode are always cheaper than full-fledged PCs).

However, the disadvantages of the FS model outweigh its advantages:

Because of the considerable amount of data sent over the network, response time becomes critical: every character entered by the client at the terminal must be transmitted to the server, processed by the application, and returned to be displayed on the terminal screen. In addition, there is the problem of distributing the load between several computers.

  • - expensive server hardware, since all users share its resources;
  • - lack of a graphical interface.

To address the problems inherent in file-server technology, a more advanced technology appeared, called client-server.

For modern DBMSs, the client-server architecture has become the de facto standard. If a planned network technology has a client-server architecture, this means that the application programs implemented within it are distributed: some application functions are implemented in the client program, others in the server program.

Differences in the implementation of applications within the Client-Server technology are determined by four factors:

  • - what types of software are in logical components;
  • - what software mechanisms are used to implement the functions of logical components;
  • - how logical components are distributed by computers on the network;
  • - what mechanisms are used to connect components with each other.

Based on this, three approaches are distinguished, each of which is implemented in the corresponding Client-Server technology model:

  • - the remote data access model (Remote Data Access, RDA);
  • - the database server model (DataBase Server, DBS);
  • - the application server model (Application Server, AS).

In the RDA model, the presentation and application components run on the client, which sends SQL queries to a remote database server. A significant advantage of the RDA model is the wide choice of application development tools, which allow rapid development of desktop applications working with SQL-oriented DBMSs. Such tools typically support a graphical user interface in the OS as well as automatic code-generation facilities, in which presentation and application functions are mixed.

Despite its wide distribution, the RDA model is giving way to the more technologically advanced DBS model.

The Database Server (DBS) model is a network architecture of client-server technology based on the stored-procedure mechanism, which implements the application functions. In the DBS model, the concept of an information resource narrows to the database, precisely because the stored-procedure mechanism is provided by DBMSs, and not even by all of them.

The advantages of the DBS model over the RDA model are obvious: centralized administration of the various functions; reduced network traffic, because calls to stored procedures are transmitted over the network instead of SQL queries; the ability to share a procedure between several applications; and savings in computer resources through reuse of a once-created procedure execution plan.
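SQLite has no stored procedures, but registering a user-defined function gives a rough feel for the idea: the logic runs inside the database engine, and clients invoke it by name instead of shipping the computation in every query. A sketch under that caveat, with made-up names:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (name TEXT, price REAL)")
db.executemany("INSERT INTO items VALUES (?, ?)",
               [("a", 100.0), ("b", 200.0)])

# A rough analogue of a stored procedure: register the pricing rule
# inside the engine once, then call it by name from any query.
def discounted(price):
    return round(price * 0.9, 2)

db.create_function("discounted", 1, discounted)

rows = db.execute("SELECT name, discounted(price) FROM items").fetchall()
print(rows)  # [('a', 90.0), ('b', 180.0)]
```

In a real DBS system the procedure would be written in the DBMS's own language (for example, PL/SQL or T-SQL) and stored on the server itself.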

In the Application Server (AS) model, a process running on the client computer is responsible for the user interface (data input and display). The most important element of this model is the application component, called the application server, which runs on a remote computer (or several computers). The application server is implemented as a group of application functions designed as services: each service provides certain operations to any programs that wish to use them.

Having examined all the models of client-server technology, we can draw the following conclusion: the RDA and DBS models both rest on a two-tier scheme of function separation. In the RDA model, the application functions are given to the client; in the DBS model, their execution is carried out by the DBMS kernel. In the RDA model, the application component merges with the presentation component; in the DBS model, it is integrated into the resource access component.

The AS model implements a three-tier function separation scheme, where the application component is highlighted as the main isolated element of the application, which has standardized interfaces with two other components.

The results of the analysis of the “File Server” and “Client - Server” technology models are presented in Table 1.

Despite its name, client-server technology is also a distributed computing system. In this case distributed computing means a client-server architecture involving several servers. In the context of distributed processing, the term "server" simply means the program that responds to requests and performs the actions the client asks for. Since distributed computing is a kind of client-server system, users gain the same benefits, such as increased overall throughput and the ability to multitask. Integrating discrete network components so that they work as a single unit also improves efficiency and reduces costs.

Because processing can occur anywhere on the network, distributed computing in a client-server architecture scales efficiently. To balance server and client, an application component should run on the server only when centralized processing is more efficient. If the program logic that works with centralized data resides on the same machine as the data, the data does not need to be transferred over the network, so the demands on the network environment are reduced.

As a result, we can draw the following conclusion: for small information systems that do not require a graphical user interface, the FS model can be used. The question of a graphical interface is readily solved by the RDA model. The DBS model is a very good option for database management systems (DBMS). The AS model is the best option for building large information systems and for working over low-speed communication channels.

Client-server technology provides for the presence of two independent interacting processes - a server and a client, communication between which is carried out over the network.

The server is the process responsible for maintaining a resource, such as the file system; clients are processes that send requests and wait for the server's response.
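The request/response cycle between these two processes can be sketched with Python's standard socket module; the host, port, and message format below are assumptions chosen purely for illustration:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5901  # hypothetical local address/port for this sketch

# Server side: owns the resource and answers requests.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((HOST, PORT))
srv.listen(1)

def serve_one():
    conn, _ = srv.accept()  # wait for a client to connect
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"result for: {request}".encode())  # send the response

t = threading.Thread(target=serve_one)
t.start()

# Client side: sends a request and waits for the server's response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"list files")
    reply = cli.recv(1024).decode()

t.join()
srv.close()
print(reply)  # result for: list files
```

The same two roles appear regardless of transport: the server blocks waiting for requests, the client initiates the exchange.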

The client-server model is used when building a system based on a DBMS, as well as mail systems. There is also a so-called file-server architecture, which differs significantly from the client-server architecture.

Data in a file-server system is stored on a file server (Novell NetWare or Windows NT Server) and processed on workstations by "desktop DBMSs" such as Access, Paradox, FoxPro, etc.

The DBMS resides on each workstation, and data is manipulated by several independent, uncoordinated processes. All data is transferred from the server over the network to the workstation, which slows down information processing.

Client-server technology is implemented by (at least) two cooperating applications, a client and a server, which divide the functions between them. The server is responsible for storing and directly manipulating the data; examples include SQL Server, Oracle, Sybase and others.

The user interface is formed by the client, which is based on special tools or desktop DBMSs. Logical data processing is performed partly on the client and partly on the server. Requests are sent to the server by the client, usually in SQL. Received requests are processed by the server and the result is returned to the client(s).

In this case, the data is processed in the same place where it is stored - on the server, so a large volume of it is not transmitted over the network.
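As an illustration of this point, the sketch below uses Python's embedded sqlite3 module purely as a stand-in for a network DBMS such as SQL Server or Oracle: the client formulates an SQL request, the engine executes it where the data lives, and only the small result set crosses the boundary, not the whole table:

```python
import sqlite3

# sqlite3 is an embedded engine, used here only to illustrate the pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 100.0), (2, 250.0), (3, 75.5)])

# The "request" is a single SQL string; only the aggregate comes back.
(total,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
print(total)  # 425.5
conn.close()
```

In a real client-server DBMS the `connect` call would address a remote host, but the division of labor is the same.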

Advantages of client-server architecture

Client-server technology brings the following qualities to an information system:

  • Reliability

Data modification is carried out by the database server using the transaction mechanism, which gives a set of operations the following properties: 1) atomicity, which ensures data integrity however the transaction completes; 2) independence (isolation) of transactions belonging to different users; 3) resistance to failures (durability): the results of a completed transaction are preserved.
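The atomicity property can be demonstrated with a small sketch (again with sqlite3 standing in for a database server): when a failure occurs in the middle of a transfer, the whole transaction rolls back and the data stays intact:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100.0), ("bob", 50.0)])
conn.commit()

try:
    with conn:  # one transaction: all statements commit together, or none do
        conn.execute("UPDATE accounts SET balance = balance - 80 "
                     "WHERE name = 'alice'")
        # The second statement fails, so the whole transaction rolls back.
        conn.execute("UPDATE no_such_table SET balance = balance + 80")
except sqlite3.OperationalError:
    pass  # the partial update above was undone automatically

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100.0, 'bob': 50.0}: integrity preserved
```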

  • Scalability, i.e. the ability of the system to be independent of the number of users and volumes of information without replacing the software used.

Client-server technology supports thousands of users and gigabytes of information with the appropriate hardware platform.

  • Security, i.e. reliable protection of information from unauthorized access.
  • Flexibility. Applications that work with data contain three logical layers: the user interface; the logical processing (business) rules; and data management.

As already noted, in file-server technology all three layers are combined into one monolithic application running on a workstation. Any change in any layer therefore requires modifying the application; the client and server versions then diverge, and the application must be updated on every workstation.

In a two-tier application, client-server technology assigns all user-interface (presentation) functions to the client and all database-management functions to the server; business rules can be implemented either on the server or on the client.

A three-tier application allows for a middle tier that implements business rules, which are the most changeable components.

Several levels allow you to flexibly and cost-effectively adapt your existing application to constantly changing business requirements.

Almost all models of user interaction with a database are built on client-server technology. Such applications differ in how they distribute functions: the client part performs targeted data processing and organizes the interaction with the user, while the server part provides data storage, processes requests, and sends the results to the client for further processing. A typical client-server architecture is shown in Fig. 4.1:

Fig. 4.1. Typical architecture of client-server technology

Some of the functions of central computers were taken over by local computers. Any software application in this case is represented by three components: a presentation component that implements the interface with the user; an application component that provides application functions; a component for accessing information resources (resource manager), which accumulates information and manages data.

Based on the distribution of these components between the workstation and the network server, client-server architecture models are distinguished:

· remote data access model (Fig. 4.2). Only the data resides on the server; the presentation and application components run on the workstations:

Fig. 4.2. Remote data access model

This model has low productivity, since all information is processed on the workstations; in addition, throughput remains low when large amounts of information are transferred from the server to the workstations;

· data management server model (Fig. 4.3):

Fig. 4.3. Data management server model

Features of this model: the amount of information transmitted over the network is reduced, because the selection of the required data elements is performed on the server rather than on the workstations; application-creation tools are unified and varied; there is no clear separation between the presentation component and the application component, which makes it difficult to improve the computing system. The model is advisable when moderate volumes of information are processed and the complexity of the application component is low;

· complex server model (Fig. 4.4):

Fig. 4.4. Complex server model

Advantages of the model: high performance, centralized administration, saving network resources. Such a server is optimal for large networks focused on processing large volumes of information that increase over time;

· three-tier client-server architecture (Fig. 4.5). Used when the complexity and resource intensity of an application component increases.

Fig. 4.5. Three-tier architecture

An application server can implement several application functions, each designed as a separate service that provides certain services to all programs. There may be several such servers, each focused on providing a particular set of services. This architecture is based on further specialization of the components: the client organizes only the interface with the user, the database server performs only standard data processing, and a separate layer, the business logic layer, implements the data-processing logic; it can be either a dedicated server (application server) or a library placed on the client.
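A minimal sketch of this three-tier split, with each tier reduced to a plain Python function; in a real system each tier would run as a separate process, and all names and data here (fetch_rows, billing_service, the discount rule) are hypothetical:

```python
DATABASE = {"orders": [("alice", 120.0), ("bob", 80.0)]}  # data tier

def fetch_rows(table):
    """Database server: standard data access only, no business rules."""
    return DATABASE[table]

def billing_service():
    """Application server: the business logic layer lives here."""
    rows = fetch_rows("orders")
    # Illustrative business rule: orders over 100 get a 10% discount.
    return [(name, total * 0.9 if total > 100 else total)
            for name, total in rows]

def show(rows):
    """Client: user-interface (presentation) layer only."""
    return "\n".join(f"{name}: {total:.2f}" for name, total in rows)

print(show(billing_service()))  # alice: 108.00 / bob: 80.00
```

Because the discount rule sits only in `billing_service`, changing it touches neither the client nor the data tier, which is exactly the flexibility claimed for the middle tier.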

Within the client-server architecture, there are two main concepts:

· a "thin" client. A powerful database server and a library of stored procedures perform the calculations that implement the basic data-processing logic directly on the server. The client application accordingly places low demands on workstation hardware;

· a “thick” client implements the main processing logic on the client, and the server is a pure database server that ensures the execution of only standard requests for data manipulation (usually reading, writing, modifying data in relational database tables).
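The difference can be sketched by computing the same total both ways, with sqlite3 once more standing in for the database server (the table and query are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("north", 10.0), ("north", 20.0), ("south", 5.0)])

# "Thin" client: the processing logic runs on the server side of the
# boundary; one aggregate query, one small result row comes back.
(thin_total,) = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE region = 'north'").fetchone()

# "Thick" client: only standard reads are requested; every raw row is
# shipped to the client, which applies the logic itself.
rows = conn.execute("SELECT region, amount FROM sales").fetchall()
thick_total = sum(amount for region, amount in rows if region == "north")

print(thin_total, thick_total)  # same answer, different network traffic
conn.close()
```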

Network IT

Email. The first form of electronic messaging (e-mail) to emerge, it demonstrated the very possibility of near-instantaneous communication through computer networks. Although architecturally designed for exchanging messages between two subscribers, it also allowed groups of people to exchange information: groups, or mailing lists, became such a modification. Using email software, you can create email messages and attach files to them. The attachment function is used to send documents of any type by mail, such as text documents, spreadsheets, multimedia files, database files, etc. Text-filtering software developed later extended the capabilities of email, helping the user structure, route, and filter messages. The need for these services stems from the constantly growing amount of mail that is of little or no use to the user (spam). Filtering software can ensure that users receive only personalized messages containing news that matters to them, and can also help them find the information they need for decision-making.
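The routing-and-filtering idea can be sketched in a few lines; the spam markers, interest keywords, and message format below are assumptions made for illustration only:

```python
# Hypothetical marker lists for the sketch.
SPAM_MARKERS = {"free prize", "click now", "winner"}

def route(messages, interests):
    """Sort message subjects into inbox, other, and spam buckets."""
    inbox, other, spam = [], [], []
    for subject in messages:
        lowered = subject.lower()
        if any(marker in lowered for marker in SPAM_MARKERS):
            spam.append(subject)          # matched a spam marker
        elif any(topic in lowered for topic in interests):
            inbox.append(subject)         # personalized: matches an interest
        else:
            other.append(subject)         # neither spam nor clearly relevant
    return inbox, other, spam

inbox, other, spam = route(
    ["Project meeting moved", "You are a WINNER", "Weekend sale"],
    interests={"project", "meeting"})
print(inbox)  # ['Project meeting moved']
```

Real filtering software applies far richer rules (headers, sender reputation, learned models), but the structure-route-filter pipeline is the same.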

Teleconferences or newsgroups. Teleconferences are the next stage in the development of communication systems. Their features were, firstly, storing messages and providing interested parties with access to the entire exchange history, and secondly, various ways of thematic grouping of messages. Such conferencing systems enable a group of co-working but geographically separated people to exchange opinions, ideas or information online when discussing any issue, overcoming time and spatial barriers. Currently, there are many types of conferencing systems, including computer conferencing (meetings conducted via e-mail), conference calls with the ability to connect mobile subscribers, conferences using desktop PCs, multimedia, television and video conferencing.

Interactive communication (chats). With the development of telecommunications, a growing number of users work on the Internet in constant-presence mode, so a real-time communication service has appeared, in which a subscriber receives a message within a short time after the interlocutor sends it.

The most common modern means of interactive communication are Web applications that support the following forms of communication:

o Guest books. The first and simplest form. The simplest guest book is a list of messages, shown from latest to first, each of which was left in it by a visitor.

o Forums. The first forums appeared as an improvement on guest books and organized messages into threads - much like newsgroups. User messages in forums are grouped by topics, which are usually set by the first messages. All visitors can see the topic and post their own message in response to those already written. Topics are grouped into thematic forums, and the system is managed by informal administrators and moderators. The most developed forums begin to show the first signs of social networks: long-term, interest-based social connections can form between participants.

o Blogs (Web log). In these services, each participant keeps his own journal, i.e. leaves entries in chronological order. Entry topics can be anything; the most common approach is to keep the blog as a personal diary. Other visitors can leave comments on these entries. The user, in addition to keeping his own journal, can view a listing of entries from the journals of "friends", regulate access to entries, and search for interlocutors by interests. On the basis of such systems, communities of interest are created: journals maintained collectively. In such a community, any member can freely post any message related to the community's activities.
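The thread grouping described for forums above can be sketched by grouping a flat message list by topic; the message records here are hypothetical:

```python
from collections import defaultdict

# Hypothetical flat records: (topic, author, text). The first message
# under a topic opens the thread; later ones are replies.
posts = [
    ("backup tools", "ann", "Which tool do you use?"),
    ("backup tools", "bob", "rsync works for me"),
    ("new release", "cat", "Version 2.0 is out"),
]

threads = defaultdict(list)
for topic, author, text in posts:
    threads[topic].append((author, text))

for topic, messages in threads.items():
    print(f"{topic}: {len(messages)} message(s)")
```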

In general, all modern systems for ensuring the operation of online communities have several common features:

· The vast majority of communities require user registration, i.e. each participant must have an account. When registering, the user provides some information about himself for identification. Almost all systems ask for an email address and verify it by sending a message with an account activation code. If the address is incorrect, only the system administrator can activate the account. This approach guarantees, to a certain extent, the uniqueness and identifiability of the participant.

· Work in the environment is carried out in sessions. Each session begins with the user entering his name and confirming his identity with a password. For convenience, the session mechanics are usually hidden from the user by technical means, but the user is nevertheless identified at all times.

· In addition to credentials, the user customizes the environment - appearance, additional data about himself, indicates his interests, desired contacts, topics for communication, etc.

· Social networks and the services that support them have proven to be an extremely effective method of ensuring site traffic and feedback; they have gradually become one of the means of filling the site with content that has real commercial and social value.
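The registration, activation, and session-login flow described in the points above can be sketched as follows; the storage layout and field names are assumptions, and a real system would use a dedicated password-hashing scheme (bcrypt, argon2) rather than bare SHA-256:

```python
import hashlib
import secrets

accounts = {}  # toy in-memory store; a real system uses a database

def register(name, email, password):
    """Create an inactive account and return the emailed activation code."""
    code = secrets.token_hex(8)
    accounts[name] = {
        "email": email,
        "pw_hash": hashlib.sha256(password.encode()).hexdigest(),
        "active": False,
        "code": code,
    }
    return code

def activate(name, code):
    """Activate the account only if the code from the e-mail matches."""
    acct = accounts[name]
    acct["active"] = acct["code"] == code

def login(name, password):
    """Each session starts with this name/password check."""
    acct = accounts.get(name)
    return bool(acct and acct["active"] and
                acct["pw_hash"] == hashlib.sha256(password.encode()).hexdigest())

code = register("ann", "ann@example.com", "s3cret")
assert not login("ann", "s3cret")  # cannot log in before activation
activate("ann", code)
print(login("ann", "s3cret"))  # True
```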

Based on the latter approach, a fairly large number of social Web services, united under the general name Web 2.0 services, appeared and quickly gained popularity. Some such resources:

o Social bookmarks. Some websites allow users to share a list of bookmarks or popular websites with others. Such sites can also be used to find users with common interests. Example: Delicious.

o Social directories resemble social bookmarks, but are focused on use in the academic field, allowing users to work with a database of citations from scientific articles. Examples: Academic Search Premier, LexisNexis Academic University, CiteULike, Connotea.

o Social libraries are applications that allow visitors to leave links to their collections, books, and audio recordings that are available to others. Support for a system of recommendations and ratings is provided. Examples: discogs.com, IMDb.com.

o Multiplayer online games simulate virtual worlds with different scoring systems, levels, competition, winners and losers. Example: World of Warcraft.

o Multilingual social networks allow you to establish social connections between people speaking different languages. In this case, special software is used that translates phrases from one language to another in real time. Example: Dudu.

o Geosocial networks form social connections based on the user’s geographic location. In this case, various geolocation tools are used (for example, GPS or hybrid systems such as AlterGeo technology), which make it possible to determine the current location of a user and correlate his position in space with the location of various places and people around.

o Professional social networks are created for communication on professional topics, exchange of experience and information, search and offer of vacancies, development of business connections. Examples: Doctor at work, Professionals.ru, MyStarWay.com, LinkedIn, MarketingPeople, Viadeo.

o Service social networks allow users to unite online around their common interests, hobbies, or for various reasons. For example, some sites provide services through which users can post personal information needed to find partners. Examples: LinkedIn, VKontakte.

o Commercial social networks are focused on supporting business transactions and building people's trust in brands by taking into account their opinions about the product, thereby allowing consumers to participate in the promotion of the product and increasing their awareness.