Client-server architecture of web applications. Architectural solutions used when implementing multi-user databases

Regardless of how the concept of client-server architecture is defined (and the literature offers many definitions), the basis of the concept is a distributed computing model. In the most general case, a client and a server are understood as two interacting processes, one of which provides some service to the other.

The term “client-server” denotes an architecture of a software package whose functional parts interact according to a request-response scheme. Of the two interacting parts, one (the client) performs the active function, that is, it initiates requests, while the other (the server) passively responds to them. As the system evolves, the roles may change: for example, a software block may simultaneously act as a server with respect to one block and as a client with respect to another.
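The request-response scheme can be sketched in a few lines of Python: a passive server waits for a request, and an active client initiates the exchange. The "uppercase" service below is a made-up example, not tied to any real protocol.

```python
# A minimal sketch of the request-response scheme: the server passively
# waits; the client actively initiates the exchange.
import socket
import threading

def serve_once(server_sock: socket.socket) -> None:
    """Accept one connection, answer one request, then exit (the server role)."""
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024)          # wait for the client's request
        conn.sendall(request.upper())      # respond to it

# The server binds first and listens; port 0 lets the OS pick a free port.
server = socket.create_server(("127.0.0.1", 0))
port = server.getsockname()[1]
thread = threading.Thread(target=serve_once, args=(server,))
thread.start()

# The client performs the active role: it connects and sends the request.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, server")
    reply = client.recv(1024)

thread.join()
server.close()
print(reply.decode())  # HELLO, SERVER
```

Note that the two roles here are just functions in one process; as the text says, client and server are roles, and nothing prevents them from living on the same machine.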

A server is one or more multi-user processors with a single memory field that, in accordance with users' needs, provides them with computation, communication, and database access. A server can also mean a program that provides services to other programs. Examples of servers are the Apache web server; database servers such as MySQL and Oracle; and network file and printer services in Windows.

A client is a workstation for a single user that provides login and the other functions needed at the workplace: computation, communication, database access, and so on. A client can also mean a program that uses a service provided by a server program. Examples of clients are MS Internet Explorer and an ICQ client.

Often people simply refer to the computer on which one of these programs runs as a client or server.

In essence, client and server are roles played by programs. Clients and servers can physically reside on the same computer, and the same program can be both a client and a server at the same time; these are just roles.

To draw an analogy with society: a bank or a store is a “server”, providing services to its clients. But the bank may at the same time be a client of some other company, and so on.

Client-server processing is an environment in which application processing is distributed between a client and a server. Machines of different types often take part in the processing, and the client and server communicate using a fixed set of standard exchange protocols and procedures for accessing remote platforms.

Desktop DBMSs (such as Clipper, dBase, FoxPro, Paradox, and Clarion) have network versions that simply share database files of the same PC formats, implementing network locks to restrict access to tables and records. In this case, all the work is carried out on the PC, and the server is used merely as a shared remote disk of large capacity. This way of working carries the risk of data loss due to hardware failures.

Compared to such systems, systems built on the client-server architecture have the following advantages:

    they allow larger and more complex programs to run on a workstation;

    they transfer the most labor-intensive operations to the server, a machine with greater computing power;

    they minimize the possibility of losing information in the database by using the internal data-protection mechanisms available on the server, such as transaction logging, rollback after failure, and data-integrity facilities;

    they reduce the amount of information transmitted over the network severalfold.

In a client-server architecture, the database server not only provides access to shared data but also performs all processing of that data. The client sends the server requests to read or change data, formulated in SQL. The server itself makes all the necessary changes or selections while maintaining the integrity and consistency of the data, and sends the results (a set of records or a return code) to the client's computer.

This allows the computing load to be distributed optimally between client and server, which in turn affects many characteristics of the system: cost, performance, and maintainability.
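The exchange described above can be sketched with Python's embedded sqlite3 module standing in for a networked database server (sqlite itself is not client-server; the point is only that the client side ships SQL text and gets back either records or a return code, while integrity rules live with the data).

```python
# The client formulates SQL; the "server" (the database engine) performs
# the selection or change itself and enforces integrity.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts ("
    " id INTEGER PRIMARY KEY,"
    " balance INTEGER NOT NULL CHECK (balance >= 0))"  # integrity rule on the server side
)
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")

# "Request": SQL goes in, a set of records comes back.
rows = conn.execute("SELECT id, balance FROM accounts WHERE balance > 60").fetchall()
print(rows)  # [(1, 100)]

# An invalid change is rejected by the engine's own CHECK constraint,
# and the client receives an error code instead of corrupt data.
try:
    conn.execute("UPDATE accounts SET balance = balance - 200 WHERE id = 2")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

In a true client-server DBMS the same query text and result set would cross the network, but the division of labor is identical.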

    1.2. History

    The client-server architecture, and the term itself, first appeared in the early 1980s. The first client-server applications were databases.

    Before that, there was no clear division: a program usually did everything itself, including working with data in the file system and presenting data to the user. Over time, the volume of data and its criticality for business grew, and this began to cause problems (performance, security, and others).

    Then came the idea that it would be convenient to install the database on a powerful separate computer (the server) and let many users on small computers (clients) use it over the network, which is what was done.

    Essentially, the “explosion” in the popularity of client-server technology was triggered by IBM's invention of SQL, a simple query language for relational databases. Today SQL is the universal standard for working with databases. This “explosion” has continued with the advent of the Internet, where literally every interaction follows the client-server architecture.

    1.3. Protocols

    The server and client on the network “talk” to each other in a “language” (in the broad sense of the word) that is understandable to both parties. This “language” is called a protocol.

    In the bank analogy, the protocol corresponds to the forms that the client fills out.

    In our case, examples of protocols:

    FTP (File Transfer Protocol)

    HTTP (Hyper Text Transfer Protocol)

    SMTP (Simple Mail Transfer Protocol)

    IP (Internet Protocol)

    MySQL Client/Server Protocol

    Note that protocols can belong to different levels. Level classification systems differ, but one of the best known is the OSI (Open Systems Interconnection) model, which has seven layers.

    For example, HTTP is an application-layer (seventh, the highest) protocol, and IP is a network-layer (third) protocol.
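An application-layer protocol like HTTP is just structured text carried by the layers below it. As a rough illustration, the sketch below composes a minimal HTTP/1.1 request by hand (example.com is a placeholder host); these are exactly the bytes a client would write to a TCP socket, while TCP and IP underneath carry them unchanged.

```python
# Composing the text of a minimal HTTP/1.1 GET request by hand.
def build_get_request(host: str, path: str) -> bytes:
    """Return the raw bytes a client would write to a TCP socket."""
    lines = [
        f"GET {path} HTTP/1.1",   # request line: method, resource, version
        f"Host: {host}",          # mandatory header in HTTP/1.1
        "Connection: close",
        "",                       # a blank line terminates the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = build_get_request("example.com", "/index.html")
print(request.decode())
```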

    1.4. Distribution of functions in the client-server architecture

    In the classical client-server architecture, the three main parts of an application have to be distributed across two physical modules. Typically, the data storage software resides on the server (for example, a database server), the user interface is on the client side, and data processing has to be divided between the client and server parts. This is the main drawback of the two-tier architecture, and it gives rise to several unpleasant features that greatly complicate the development of client-server systems.

    The development of such systems is quite complex, and one of the most important tasks is deciding how the application's functionality should be distributed between the client and server parts. Attempts to solve this problem have produced two-tier, three-tier, and multi-tier architectures, depending on how many intermediate links are placed between the client and the server.

    The main task of the client application is to provide the interface with the user, i.e., to enter data, present results in a user-friendly form, and manage application scenarios.

    The main functions of a server DBMS are ensuring reliability, consistency and security of data, managing client requests, and fast processing of SQL queries.

    The entire logic of the application - application tasks, business rules - in a two-tier architecture is distributed by the developer between two processes: client and server (Fig. 1).

    At first, most of the application's functions were handled by the client, with the server only processing SQL queries. This architecture is called “thick client, thin server.”

    The ability to create stored procedures on the server, i.e., compiled programs with internal operating logic, led to a tendency to move an increasing share of the functions to the server. The server grew “fatter” and the client “thinner.”

    This solution has obvious advantages, for example, it is easier to maintain, since all changes need to be made in only one place - on the server.
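The idea of keeping logic next to the data can be sketched even with Python's embedded sqlite3. It has no stored procedures, but its create_function call registers a function with the engine, so the logic runs inside the "server" during query processing and the client only ships a short call; the net_price business rule below is a made-up example.

```python
# A rough analogue of moving application logic onto the server:
# the function is registered with the database engine and executed there.
import sqlite3

def net_price(gross: float, discount_pct: float) -> float:
    # business rule kept next to the data rather than in every client
    return round(gross * (1 - discount_pct / 100), 2)

conn = sqlite3.connect(":memory:")
conn.create_function("net_price", 2, net_price)
conn.execute("CREATE TABLE orders (id INTEGER, gross REAL, discount REAL)")
conn.execute("INSERT INTO orders VALUES (1, 200.0, 10.0)")

# The client's query merely invokes the server-side function.
result = conn.execute("SELECT net_price(gross, discount) FROM orders").fetchone()[0]
print(result)  # 180.0
```

If the discount rule changes, only the registered function changes: the advantage the text describes, that all changes are made in one place.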

    The models discussed above have the following disadvantages.

    1. “Thick” client:

    – complexity of administration;

    – updating the software becomes more complicated, since it must be replaced simultaneously across the entire system;

    – the distribution of powers becomes more complicated, since access is limited not by actions, but by tables;

    – the network is overloaded due to the transmission of unprocessed data through it;

    – weak data protection, since it is difficult to correctly distribute powers.

    2. “Fat” server:

    – implementation becomes more complicated, since languages like PL/SQL are not well suited to developing such software and lack good debugging tools;

    – the performance of programs written in languages like PL/SQL is significantly lower than that of programs created in other languages, which matters for complex systems;

    – programs written in DBMS languages usually do not work reliably; an error in them can lead to failure of the entire database server;

    – the resulting programs are completely unportable to other systems and platforms.

    To solve these problems, multi-tier (three or more tiers) client-server architectures are used. A multi-tier client-server architecture can significantly simplify distributed computing, making it not only more reliable but also more accessible.

    However, the language in which stored procedures are written is not powerful or flexible enough to easily implement complex application logic.

    Then a tendency emerged to entrust the execution of application tasks and business rules to a separate application component (or several components), which can run either on a specially dedicated computer (the application server) or on the same computer where the database server runs. This is how three-tier and multi-tier client-server architectures emerged.


    Fig. 1. Distribution of functions between client and server

    Special software, middleware, appeared to make the many components of such a multi-component application work together. Such applications are flexible and scalable, but difficult to develop.



BASIC CONCEPTS: DATABASE, DBMS, ENTITY, ATTRIBUTE, RELATIONSHIP (ONE-TO-ONE, ONE-TO-MANY, MANY-TO-MANY), RELATION, PRIMARY KEY

DBs operating using FILE SERVER technology;

DBs operating using CLIENT-SERVER technology.

File server


Processing a single user request:
- Access to the database (query)
- Transfer of the data, while access by other users is blocked
- Data processing on the user's computer

For clarity, let's consider a specific example. Suppose you need to view the payment orders sent between May 19 and May 25 for the amount of 5,000 rubles. The user launches a client application on his computer that works with the payment-order database and enters the necessary selection criteria. After that, a file containing all documents of this type, for the entire period and for any amount, is downloaded from the database server to the user's computer and loaded into RAM. The client application processes (sorts through) this information itself and then presents the answer: a list of payment orders matching the criteria appears on the screen.

Suppose you then select the desired payment order and try to edit (change) one field in it, for example, the date. While it is being edited, the data source is locked: that is, the entire file containing this document. The file is then either entirely unavailable to other users or available only in view mode. Moreover, the lock is not taken at the level of a record (one document); the whole file, that is, the whole table containing similar documents, is locked. Only after the field has been fully processed and the editing mode exited is the file released for other users. If the data is stored in larger objects, for example, one file holds payment orders both for incoming and for outgoing funds, then even more information becomes unavailable. You work with one date field in one document, and the rest of the company's employees wait until you finish.
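The file-server half of this scenario can be sketched in Python: the whole table travels to the client, and only then is the selection applied locally. The payment records below are made up for illustration.

```python
# File-server scheme: the entire data set is shipped to the client,
# which then filters it on its own CPU.
from datetime import date

# Imagine this entire list was just transferred over the network as one file.
payment_orders = [
    {"id": 1, "date": date(2024, 5, 20), "amount": 5000},
    {"id": 2, "date": date(2024, 5, 22), "amount": 1200},
    {"id": 3, "date": date(2024, 6, 1), "amount": 5000},
]

# Only now, on the user's computer, is the selection criterion applied.
selected = [
    p for p in payment_orders
    if date(2024, 5, 19) <= p["date"] <= date(2024, 5, 25) and p["amount"] == 5000
]
print([p["id"] for p in selected])  # [1]
```

Every record crossed the network even though only one was wanted, which is exactly the load problem the next list describes.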

The disadvantages of the FILE SERVER system are obvious:

    A very heavy load on the network and increased bandwidth requirements. In practice, this makes simultaneous work by a large number of users with large volumes of data almost impossible.

    Data processing is carried out on the users' computers. This entails increased hardware requirements for each user: the more users, the more money must be spent on equipping their computers.

    Locking data when editing by one user makes it impossible for other users to work with this data.

    Security. To work with such a system, each user must be given full access to an entire file, even if he is interested in only one field of it.

    Client-server

    Processing a single user request:
    - Access to the database (SQL query)
    - Data processing on the server
    - Transmission of the response: the result of processing


    If information stored in the database needs to be processed, a client application running on the user's computer forms a query in SQL (from Structured Query Language). The database server accepts the query and processes it itself. No data array (file) is transmitted over the network. After processing, only the result is sent to the user's computer: in the previous example, the list of payment orders meeting the necessary criteria. The file in which the source data was stored remains unblocked, accessible to the server itself for other users' requests.
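The same payment-order query, pushed into SQL: only the matching rows come back, while the table itself stays put. Here, as before, Python's embedded sqlite3 stands in for a networked DBMS, so the "network" boundary is only simulated.

```python
# Client-server scheme: the client transmits only query text,
# the server returns only the matching records.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payment_orders (id INTEGER, day TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO payment_orders VALUES (?, ?, ?)",
    [(1, "2024-05-20", 5000), (2, "2024-05-22", 1200), (3, "2024-06-01", 5000)],
)

# The client sends only this query text over the "network"...
rows = conn.execute(
    "SELECT id FROM payment_orders "
    "WHERE day BETWEEN '2024-05-19' AND '2024-05-25' AND amount = 5000"
).fetchall()

# ...and receives only the matching records, never the whole table.
print(rows)  # [(1,)]
```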

    Serious client-server DBMSs have additional mechanisms that reduce the load on the network and lower the requirements on user computers. As an example, take stored procedures: entire programs for processing data that are stored in the database. In this case not even SQL expressions are transferred from the user to the server, but a function call with its parameters. The user's workstation is thus further simplified: the program logic moves to the server, and the user's side becomes merely a means of displaying information. All this further reduces the load on the network and on user workstations.

    So, all the above disadvantages of the FILE-SERVER scheme are eliminated in the CLIENT-SERVER architecture:

      Data arrays are not transferred over the network from the database server to the user's computer. Network bandwidth requirements are reduced. This makes it possible for a large number of users to work simultaneously with large amounts of data.

      Data processing is carried out on the database server, and not on the users’ computer. This allows the use of simpler, and therefore cheaper, computers at client sites.

      Data is not blocked (captured) by one user.

      The user is provided with access not to the entire file, but only to the data from it that the user has the right to work with.

    Having considered the difference between a FILE SERVER and a CLIENT SERVER, we can complete our consideration of the concept of “information storage.” It is important to emphasize that the operation of a corporate system depends heavily on the type of DBMS used. It is quite obvious that for large enterprises, with a large number of users and a huge number of records in the database, the file-server scheme is completely unacceptable. Databases also differ in other parameters and capabilities:

    the types of data that can be stored in the database (numbers, dates, text, pictures, video, sound, etc.);

    the technologies the database itself provides for accessing its data, and the level of protection of information from unauthorized access;

    the development tools and methods that can be used to design an information system based on the database;

    the tools and methods provided for analyzing information (data) in an information system based on the database;

    reliability and stability, that is (roughly) the number of records (filled fields) in the database at which reliable and uninterrupted access, modification, and analysis of the information is still ensured;

    speed, i.e. the time spent accessing and processing information;

    the ability to work on computers from different manufacturers, that is, compatibility with other platforms and operating systems;

    the level of support (service) provided by the database developer or its authorized dealer;

    the availability of good tools for creating applications that use the database, etc.

Why is it not profitable to invest in a file-server solution today? The future path of database development is already obvious: multi-tier client-server systems with very thin clients are appearing, removing any restrictions from client stations, both in performance and in platform and operating system. While the further development of a client-server solution seems entirely clear, and the transition from client-server to multi-tier client-server is not a problem, for a file-server system even a simple transition to client-server is a huge problem and an enormous amount of labor, if it turns out to be possible at all.

Vladimir, a web developer at Noveo, says:

Most developers of websites, web services, and mobile applications sooner or later have to deal with client-server architecture: namely, to develop a web API or integrate with one. In order not to invent something new every time, it is important to develop a relatively universal approach to web API design based on experience with similar systems. We bring to your attention a series of articles devoted to this question.

Approximation one: Characters

At one point, in the process of creating another web service, I decided to collect all my knowledge and thoughts on the topic of designing a web API to serve the needs of client applications and put it in the form of an article or a series of articles. Of course, my experience is not absolute, and constructive criticism and additions are more than welcome.

The reading turned out more philosophical than technical, but fans of the technical part will still find something to think about. I doubt I will say anything fundamentally new here, anything you have never heard of, read, or thought about yourself. I am simply trying to fit everything into a unified system, first of all in my own head, and that alone is worth a lot. Nevertheless, I will be glad if my thoughts turn out to be useful in your practice. So, let's go.

Client and server

By server we mean here an abstract machine on the network capable of receiving an HTTP request, processing it, and returning a correct response. In the context of this article, its physical nature and internal architecture are completely unimportant, be it a student's laptop or a huge cluster of industrial servers scattered around the world. Likewise, it doesn't matter what's under the hood: who greets the request at the door (Apache or Nginx), what unknown beast (PHP, Python, or Ruby) processes it and generates the response, or what data storage is used (PostgreSQL, MySQL, or MongoDB). The main thing is that the server follows the main rule: to hear, understand, and forgive.

A client, likewise, can be anything capable of forming and sending an HTTP request. Until a certain point in this article, we will not be particularly interested in the goals the client pursues when sending the request, nor in what it does with the response. The client can be a JavaScript script running in the browser, a mobile application, an evil (or not so evil) daemon running on a server, or an overly wise refrigerator (such things already exist).

For the most part, we will talk about how these two communicate, in such a way that they understand each other and neither is left with any questions.

REST Philosophy

REST (Representational State Transfer) was originally conceived as a simple and unambiguous interface for data management, involving only a few basic operations on a direct network storage (server): data retrieval (GET), saving (POST), modification (PUT/PATCH), and deletion (DELETE). This list was, of course, always accompanied by options such as handling errors in the request (was the request well formed?), restricting access to data (you're not supposed to know this), and validating incoming data (you wrote nonsense): in general, all the checks the server performs before fulfilling the client's wish.

In addition, REST rests on a number of architectural principles, a list of which can be found in any other article about REST. Let's go over them briefly, so that they are at hand and you don't have to go anywhere:

Independence of the server from the client: servers and clients can be instantly replaced by others, independently of each other, since the interface between them does not change. The server does not store client state.
Uniqueness of resource addresses: each unit of data (of any degree of nesting) has its own unique URL, which is, in fact, entirely a unique resource identifier.

Example: GET /api/v1/users/25/name

Independence of the data storage format from the transmission format: the server can support several different formats for transmitting the same data (JSON, XML, etc.) but stores the data in its own internal format, regardless of those supported.

Presence of all necessary metadata in the response: in addition to the data itself, the server must return the details of request processing, for example error messages and various resource properties needed for further work with it, such as the total number of records in a collection for correctly displaying pagination navigation. We'll go over the different types of resources later.
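To make this concrete, here is a sketch of one possible response envelope carrying such metadata. The field names (data, meta, errors) and the helper are my own illustrative convention, not a standard.

```python
# One possible shape for a collection response that carries the
# metadata a client needs alongside the data itself.
import json

def make_collection_response(items, total_count, page, per_page):
    """Wrap a page of records with pagination metadata."""
    return {
        "data": items,
        "meta": {
            "total": total_count,   # total records, for rendering page links
            "page": page,
            "per_page": per_page,
        },
        "errors": [],               # empty on success
    }

body = make_collection_response(
    [{"id": 25, "name": "Alice"}], total_count=83, page=1, per_page=20
)
print(json.dumps(body, indent=2))
```

With total, page, and per_page in hand, the client can draw pagination controls without a second request.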

What are we missing?

Classic REST has the client working with the server as with a flat data store, and says nothing about the connectivity and interdependence of the data. By default, all of that falls entirely on the shoulders of the client application. However, modern subject areas for which data management systems are developed, be they social services or online marketing systems, imply complex relationships between the entities stored in the database. Supporting those connections, i.e., data integrity, is the responsibility of the server side, while the client is only an interface for accessing the data. So what are we missing in REST?

Function calls

In order not to change the data and the connections between them manually, we simply call a function on a resource and “feed” it the necessary data as an argument. This operation does not fit the REST standard; there is no dedicated verb for it, which forces us developers to improvise.

The simplest example is user authorization: we call the login function, pass it an object containing credentials as an argument, and receive an access key in response. What happens to the data on the server side does not concern us.

Another option is creating and breaking connections between data, for example, adding a user to a group: we call the group entity's addUser function, pass a user object as a parameter, and get the result.

And there are also operations not directly related to data storage as such, for example, sending notifications, or confirming and rejecting operations (closing a reporting period, etc.).

Multiple Operations

It often happens, and client developers will know what I mean, that it is more convenient for a client application to create/change/delete several homogeneous objects at once with one request, and each object may receive a different verdict from the server. There are at least several outcomes here: all changes were applied, they were applied partially (for some of the objects), or an error occurred. There are also several strategies: apply the changes only if all of them succeed, apply them partially, or roll everything back on any error, and the last of these already leads to a full-fledged transaction mechanism.

For a web API striving for the ideal, I would also like to somehow bring such operations into the system. I'll try to do this in one of the sequels.
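As a starting point, here is one possible shape for such per-object verdicts, with the "all-or-nothing" strategy sketched on top. The status values and the response structure are assumptions for illustration, not an established convention.

```python
# A batch operation that validates every object first and applies the
# changes only if all of them pass (the "all-or-nothing" strategy).
def apply_batch(objects, validate):
    """Return a per-object verdict plus an overall applied/rejected flag."""
    results = [
        {"index": i, "status": "ok" if validate(obj) else "invalid"}
        for i, obj in enumerate(objects)
    ]
    all_ok = all(r["status"] == "ok" for r in results)
    # In a real system the changes would be committed here inside a
    # transaction; this sketch only reports the would-be outcome.
    return {"applied": all_ok, "results": results}

outcome = apply_batch(
    [{"name": "Alice"}, {"name": ""}],     # the second object is invalid
    validate=lambda obj: bool(obj["name"]),
)
print(outcome["applied"])  # False
```

The partial-application and rollback strategies mentioned above would change only the decision around all_ok, not the per-object verdict list.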

Statistical queries, aggregators, data formatting

It often happens that, based on the data stored on the server, we need a statistical summary or data formatted in a special way, for example to build a graph on the client side. Essentially, this is data generated on demand, more or less on the fly, and it is read-only, so it makes sense to put it in a separate category. One of the distinctive features of statistical data, in my opinion, is that it has no unique ID.

I am sure that this is not everything that can be encountered when developing real applications, and I will be glad to see your additions and adjustments.

Types of data

Objects

The key data type in communication between client and server is the object. Essentially, an object is a list of properties and their corresponding values. We can send an object to the server in a request and receive the result of a request as an object. An object will not necessarily be a real entity stored in the database, at least not in the form in which it was sent or received. For example, authorization credentials are passed as an object but are not an independent entity. Even objects stored in the database tend to acquire additional properties of an intra-system nature, for example, creation and modification dates, various system labels and flags. Object properties can either be scalar values of their own or contain related objects and collections of objects, which are not part of the object. Some object properties are editable, some are system properties available read-only, and some can be statistical in nature and calculated on the fly (for example, the number of likes). Some object properties may be hidden, depending on the user's rights.

Collections of objects

When we talk about collections, we mean the type of server resource that provides work with a list of homogeneous objects, i.e., adding, deleting, and changing objects and selecting among them. In addition, a collection may theoretically have properties of its own (for example, the maximum number of elements per page) and functions of its own (I hesitated here, but it has happened).

Scalar values

In their pure form, scalar values as a separate entity have been extremely rare in my experience. They usually appear as properties of objects or collections, and in that capacity they can be readable or writable. For example, the username can be retrieved and changed individually with GET /users/1/name . In practice this feature is rarely useful, but if needed I would like to have it at hand. This is especially true of collection properties, such as the number of records (with or without filtering): GET /news/count .

In one of the following articles I will try to classify these operations and offer options for possible requests and responses, based on those I have encountered in practice.

Approximation two: The right path

In this approximation I would like to talk separately about approaches to building unique paths to the resources and methods of your web API, and about the architectural features of the application that affect the appearance of such a path and its components.

What to think about while standing on the shore

Versioning

Sooner or later, any system in operation begins to evolve: it develops, becomes more complex, scales, and modernizes. For REST API developers this means, first of all, the need to launch new versions of the API while the old ones are still running. Here I am talking not about architectural changes under the hood of your system, but about the fact that the data format itself and the set of operations on it change. In any case, versioning must be provided for, both in the original organization of the source code and in the construction of URLs. For URLs, there are two most popular ways to indicate the version of the API a request is addressed to: the example-api.com/v1/ path prefix, and version separation at the subdomain level, v1.example-api.com. You can use either of them, depending on your needs and requirements.
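The two conventions can be captured as tiny URL builders (example-api.com is the placeholder domain from the text; the helper names are mine):

```python
# Version in the path prefix vs. version in the subdomain.
def path_versioned(version: int, path: str) -> str:
    """Version as a path prefix: example-api.com/v1/..."""
    return f"https://example-api.com/v{version}{path}"

def subdomain_versioned(version: int, path: str) -> str:
    """Version as a subdomain: v1.example-api.com/..."""
    return f"https://v{version}.example-api.com{path}"

print(path_versioned(1, "/users/25"))       # https://example-api.com/v1/users/25
print(subdomain_versioned(1, "/users/25"))  # https://v1.example-api.com/users/25
```

The subdomain variant makes it easy to route versions to entirely different machines; the path prefix keeps everything under one host and certificate.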

Autonomy of components

The web API of a complex system supporting multiple user roles often has to be divided into parts, each serving its own range of tasks. In fact, each part can be a standalone application running on different physical machines and platforms. In the context of describing the API, it does not matter at all how the server processes the request and what forces and technologies are involved. For the client, the API is an encapsulated system. However, different parts of the system can have completely different functionality, for example, the administrative and user parts, and the methodology for working with what appear to be the same resources can differ significantly. Therefore, such parts should be separated at the level of the admin.v1.example-api.com domain or the example-api.com/v1/admin/ path prefix. This requirement is not mandatory, and much depends on the complexity of the system and its purpose.

Data exchange format

The most convenient and functional data exchange format, in my opinion, is JSON, but nothing prohibits using XML, YAML, or any other format that can store serialized objects without losing data types. If desired, you can support several input/output formats in the API. It is enough to use the HTTP request headers: Accept to indicate the desired response format and Content-Type to indicate the format of the data transmitted in the request. Another popular way is to add an extension to the resource URL, for example GET /users.xml , but this method seems less flexible and elegant, if only because it makes the URL heavier and applies more to GET requests than to all possible operations.
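Header-based negotiation can be sketched as a dispatch on the Accept value. The sketch below handles only two formats and matches naively; a real implementation would also honour quality values such as q=0.9, and the hand-rolled XML is purely illustrative.

```python
# Choosing the response serialization from the Accept header.
import json

def serialize(data: dict, accept: str) -> tuple[str, str]:
    """Return (content_type, body) for a deliberately tiny subset of Accept values."""
    if "application/xml" in accept:
        items = "".join(f"<{k}>{v}</{k}>" for k, v in data.items())
        return "application/xml", f"<user>{items}</user>"
    # default to JSON, the format recommended above
    return "application/json", json.dumps(data)

ctype, body = serialize({"id": 25, "name": "Alice"}, accept="application/json")
print(ctype)  # application/json
```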

Localization and multilingualism

In practice, multilingual API support most often comes down to translating service and error messages into the required language for direct display to the end user. Multilingual content also has its place, but storing and serving content in different languages should, in my opinion, be distinguished more clearly: for example, if you have the same article in different languages, these are in fact two different entities, grouped by unity of content. Various methods can be used to identify the expected language. The simplest is the standard HTTP header Accept-Language. I have seen other ways, such as adding a GET parameter language="en" , using a path prefix example-api.com/en/ , or even the domain-name level en.example-api.com . It seems to me that the choice of how to specify the locale depends on the specific application and the tasks facing it.
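A minimal sketch of picking a response language from the Accept-Language header follows. It deliberately ignores quality values and region subtags (ru-RU is treated simply as ru), which a production parser must handle properly.

```python
# Naive Accept-Language matching against a list of supported languages.
def pick_language(accept_language: str, supported: list[str], default: str = "en") -> str:
    """Return the first supported primary language tag, or the default."""
    for chunk in accept_language.split(","):
        # strip ";q=..." and the region subtag: "ru-RU" -> "ru"
        lang = chunk.split(";")[0].strip().split("-")[0].lower()
        if lang in supported:
            return lang
    return default

print(pick_language("ru-RU,ru;q=0.9,en;q=0.8", ["en", "ru"]))  # ru
print(pick_language("fr-FR", ["en", "ru"]))                    # en
```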

Internal routing

So we've reached the root node of our API (or one of its components). All further routes will pass directly inside your server application, according to the set of resources it supports.

Paths to collections

To specify the path to a collection, we simply use the name of the corresponding entity; for example, if this is a list of users, the path will be /users. Two methods are applicable to a collection as such: GET (receiving a limited list of entities) and POST (creating a new element). In requests for lists, we can use many additional GET parameters for paging the output, sorting, filtering, searching, and so on, but they must be optional, i.e. these parameters must not be passed as part of the path!

Collection elements

To access a specific element of a collection, we use its unique identifier in the route: /users/25. This is the unique path to it. The methods GET (obtaining the object), PUT/PATCH (changing it), and DELETE (deleting it) are applicable to an object.

Unique objects

Many services have objects that are unique for the current user, such as the current user's profile /profile or personal settings /settings. On the one hand, these are elements of one of the collections, but they serve as the starting point for the client application's use of our Web API, and they also allow a much wider range of operations on the data. At the same time, the collection storing user settings may be completely unavailable for security and privacy reasons.

Properties of objects and collections

In order to get to any of the properties of an object directly, it is enough to add the property name to the path to the object, for example, get the username /users/25/name . The methods GET (getting a value) and PUT/PATCH (changing a value) are applicable to a property. The DELETE method is not applicable because a property is a structural part of an object, as a formalized unit of data.
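The path conventions above can be sketched as a small parser. The function name and the returned structure are assumptions for illustration; this is not a full router, just the collection/identifier/property decomposition described in the text.

```python
def parse_resource_path(path):
    """Split an API path like /users/25/name into its parts.
    Returns a dict with the collection name, an optional numeric id,
    and an optional property name."""
    segments = [s for s in path.strip("/").split("/") if s]
    result = {"collection": segments[0], "id": None, "property": None}
    if len(segments) > 1:
        if segments[1].isdigit():
            result["id"] = int(segments[1])
            if len(segments) > 2:
                result["property"] = segments[2]
        else:
            # a non-numeric second segment is a collection property,
            # e.g. /users/count
            result["property"] = segments[1]
    return result
```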

In the previous part, we talked about how collections, like objects, can have their own properties. In my experience, the only property I've found useful is the count property, but your application may be more complex and specific. Paths to the properties of collections are built according to the same principle as to the properties of their elements: /users/count . For collection properties, only the GET (getting a property) method is applicable, since a collection is just an interface for accessing a list.

Collections of related objects

One type of object property is related objects and collections of related objects. Such entities, as a rule, are not the object's own property but only references to its connections with other entities, for example, the list of roles assigned to a user: /users/25/roles. We will talk in detail about working with nested objects and collections in one of the following parts; at this stage it is enough that we can access them directly, like any other property of an object.

Functions of objects and collections

To build the path to the function-call interface of a collection or object, we use the same approach as for accessing a property, for example /users/25/sendPasswordReminder for an object or /users/disableUnconfirmed for a collection. For function calls, we always use the POST method. Why? Let me remind you that classic REST has no special verb for calling functions, so we have to use one of the existing ones. In my opinion, the POST method is most suitable because it allows you to pass the necessary arguments to the server, is not expected to be idempotent (i.e. repeated calls may produce different results), and is the most abstract in its semantics.
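A sketch of this convention in Python; the route table and the handler are hypothetical. Functions are reachable only through POST, any other verb gets a 405 response, and an unknown path gets a 404.

```python
# Hypothetical function routes; the path mirrors the example in the text.
FUNCTION_ROUTES = {
    "/users/disableUnconfirmed": lambda args: {"success": True},
}

def call_function(method, path, args=None):
    """Dispatch a function call: POST only, 404 for unknown paths,
    405 for any other verb. A sketch of the convention, not a framework."""
    if path not in FUNCTION_ROUTES:
        return 404, {"success": False,
                     "error": {"code": 404, "message": "Not found"}}
    if method != "POST":
        return 405, {"success": False,
                     "error": {"code": 405, "message": "Method not allowed"}}
    return 200, FUNCTION_ROUTES[path](args or {})
```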

I hope that everything fits into the system more or less :) In the next part we will talk in more detail about requests and responses, their formats, and status codes.

Approximation Three: Requests and Responses

In previous approximations, I talked about how the idea came to collect and summarize the existing experience in web API development. In the first part, I tried to describe what types of resources and operations on them we deal with when designing a web API. The second part touched on the issues of constructing unique URLs for accessing these resources. And in this approximation I will try to describe the possible options for requests and responses.

Universal answer

We have already said that the specific format of communication between server and client can be anything, at the discretion of the developer. To me the JSON format seems the most convenient and visual, although a real application may support several formats. Now let's focus on the structure and the necessary attributes of the response object. We will wrap all data returned by the server in a special container, a generic response object, which carries all the service information needed for further processing. So, what does this information include:

Success - marker of the success of the request

In order to understand immediately upon receiving a response from the server whether the request was successful, and to pass it to the appropriate handler, it is enough to use the "success" marker. The simplest server response, containing no data, would look like this:

POST /api/v1/articles/22/publish
{"success": true}

Error - error information

If the request fails (we will talk about the reasons and types of negative server responses a little later), the "error" attribute is added to the response, containing the HTTP status code and the text of the error message. Please do not confuse these with validation-error messages for specific fields. It is most correct, in my opinion, to return the status code in the response header, but I have also seen another approach: always return status 200 (success) in the header and send the details and possible error data in the response body.

GET /api/v1/user
{"success": false, "error": {"code": 401, "message": "Authorization failed"}}

Data - data returned by the server

Most server responses are meant to return data. Depending on the type of request and its success, the expected data set varies; nevertheless, the "data" attribute will be present in the vast majority of responses.

Example of data returned if successful. In this case, the response contains the requested user object.

GET /api/v1/user
{
  "success": true,
  "data": {
    "id": 125,
    "email": "[email protected]",
    "name": "John",
    "surname": "Smith"
  }
}

An example of the data returned in case of an error. In this case, it contains field names and validation error messages.

PUT /api/v1/user
{
  "success": false,
  "error": {
    "code": 422,
    "message": "Validation failed"
  },
  "data": {
    "email": "Email could not be blank."
  }
}
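The envelope described above can be produced by two small helpers. This is a Python sketch with assumed function names, not part of any library.

```python
def success_response(data=None):
    """Wrap data in the generic success envelope described above."""
    response = {"success": True}
    if data is not None:
        response["data"] = data
    return response

def error_response(code, message, data=None):
    """Error envelope: HTTP status code plus a human-readable message.
    Per-field validation details, if any, go into "data"."""
    response = {"success": False,
                "error": {"code": code, "message": message}}
    if data is not None:
        response["data"] = data
    return response
```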

Pagination - information necessary to organize page navigation

In addition to the data itself, responses that return a set of collection elements must include the information needed for page navigation (pagination) based on the query results.

The minimum set of values for pagination consists of:

  • total number of records;
  • number of pages;
  • current page number;
  • number of records per page;
  • the maximum number of records per page supported by the server side.

Some web API developers also include in the pagination block a set of ready-made links to the adjacent pages, as well as the first, last, and current ones.

GET /api/v1/articles

Response:
{
  "success": true,
  "data": [
    {"id": 1, "title": "Interesting thing"},
    {"id": 2, "title": "Boring text"}
  ],
  "pagination": {
    "totalRecords": 2,
    "totalPages": 1,
    "currentPage": 1,
    "perPage": 20,
    "maxPerPage": 100
  }
}
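The pagination block itself is easy to derive from the query results. A sketch, with field names taken from the example above:

```python
import math

def build_pagination(total_records, current_page, per_page, max_per_page=100):
    """Compute the pagination block from the query results.
    Field names follow the example response above."""
    per_page = min(per_page, max_per_page)
    total_pages = max(1, math.ceil(total_records / per_page))
    return {
        "totalRecords": total_records,
        "totalPages": total_pages,
        "currentPage": current_page,
        "perPage": per_page,
        "maxPerPage": max_per_page,
    }
```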

Work on mistakes

As mentioned above, not all requests to a web API succeed, but this is also part of the game. The error-reporting system is a powerful tool that eases the client's work and guides the client application along the right path. The word "error" in this context is not entirely accurate; "exception" fits better, since the request was in fact successfully received and parsed, and an adequate response was returned explaining why the request could not be fulfilled.

What are the potential reasons for the exceptions you receive?

500 Internal server error - everything is broken, but we will fix it soon

This is exactly the case when the problem occurred on the server side, and the client application can only sigh and notify the user that the server is tired and has lain down to rest. For example, the connection to the database has been lost, or there is a bug in the code.

400 Bad request - and now everything is broken for you

The exact opposite of the previous response. It is returned when the client application sends a request that cannot, in principle, be processed correctly: it lacks required parameters or contains syntax errors. This can usually be resolved by re-reading the web API documentation.

401 Unauthorized - stranger, identify yourself

Authorization is required to access this resource. Of course, having authorization does not guarantee that the resource will become available, but without it you definitely won't find out. This occurs, for example, when trying to access the private part of the API or when the current token has expired.

403 Forbidden - you are not allowed here

The requested resource exists, but the user does not have sufficient rights to view or modify it.

404 Not found - no one lives at this address

Such a response is returned, as a rule, in three cases: the path to the resource is incorrect; the requested resource has been deleted and no longer exists; or the current user's rights do not allow him even to know that the requested resource exists. For example, while we were browsing the list of products, one of them suddenly went out of fashion and was removed.

405 Method not allowed - you can’t do this

This type of exception is directly related to the verb used in the request (GET, PUT, POST, DELETE), which in turn indicates the action we are trying to perform on the resource. If the requested resource does not support the specified action, the server says so explicitly.

422 Unprocessable entity - correct and send again

One of the most useful exceptions. It is returned whenever the request data contains logical errors. By request data we mean either the set of parameters and their values passed in a GET request, or the fields of an object passed in the request body by the POST, PUT, and DELETE methods. If the data fails validation, the server returns in the "data" section a report on which parameters are invalid and why.

The HTTP protocol supports many more status codes for all occasions, but in practice they are rarely used and, in the context of a web API, have little practical value. In my memory, I have never had to go beyond the list of exceptions above.

Requests

Retrieving Collection Items

One of the most common requests is the request for collection elements. The client application displays information feeds, product lists, various informational and statistical tables, and much more by accessing collection resources. To make this request, we access the collection with the GET method, passing additional parameters in the query string. As indicated above, in response we expect to receive an array of homogeneous collection elements and the information necessary for pagination: loading the continuation of the list or a specific page of it. The contents of the selection can additionally be limited and ordered using extra parameters, which are discussed below.

Page navigation

The page parameter indicates which page should be returned. If it is not passed, the first page is returned. From the first successful server response it will be clear how many pages the collection has under the current filtering parameters. If the value exceeds the maximum number of pages, it is best to return a 404 Not found error.

GET /api/v1/news?page=1

The perPage parameter indicates the desired number of elements per page. Typically the API has its own default value, which it returns as the perPage field in the pagination section, but in some cases it allows this value to be increased within reasonable limits, advertising the maximum as maxPerPage:

GET /api/v1/news?perPage=100
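Reading and clamping these two parameters on the server side might look like this; the defaults are assumptions for the example.

```python
def normalize_paging(params, default_per_page=20, max_per_page=100):
    """Read page/perPage from the query parameters, falling back to
    defaults and clamping perPage to the advertised maximum."""
    page = max(1, int(params.get("page", 1)))
    per_page = int(params.get("perPage", default_per_page))
    per_page = max(1, min(per_page, max_per_page))
    return page, per_page
```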

Sorting results

Often you want to sort the selection results by ascending or descending values of certain fields that allow comparative (numeric fields) or alphabetical (string fields) ordering. For example, we need to order a list of users by name, or products by price. In addition, we can set the sorting direction, from A to Z or the reverse, and it can differ from field to field.

sortBy: there are several approaches to passing complex sorting instructions in GET parameters. In any case, both the order of the sort fields and the direction for each must be stated clearly.

Some APIs suggest doing this as a string:

GET /api/v1/products?sortBy=name.desc,price.asc

Other options suggest using an array:

GET /api/v1/products?sortBy[0][field]=name&sortBy[0][direction]=desc&sortBy[1][field]=price&sortBy[1][direction]=asc

In general, both options are equivalent, since they convey the same instructions. In my opinion, the array option is more universal, but, as they say, tastes differ.
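Parsing the string variant into (field, direction) pairs is straightforward; a sketch:

```python
def parse_sort_string(sort_by):
    """Parse the string form "name.desc,price.asc" into (field, direction)
    pairs; the direction defaults to ascending when omitted."""
    result = []
    for chunk in sort_by.split(","):
        chunk = chunk.strip()
        if not chunk:
            continue
        field, _, direction = chunk.partition(".")
        result.append((field, direction or "asc"))
    return result
```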

Simple filtering by value

To filter a selection by the value of a field, in most cases it is enough to pass the field name and the required value as a filter parameter. For example, we want to filter articles by author ID:

GET /api/v1/articles?authorId=25

Advanced filtering options

Many interfaces require more complex filtering and searching systems. I will list the main and most common filtering options.

Filtering by upper and lower bounds, using comparison suffixes appended to the field name: From (greater than or equal to), Higher (greater than), To (less than or equal to), Lower (less than). Applicable to fields whose values can be ranked.

GET /api/v1/products?priceFrom=500&priceTo=1000

Filtering by several possible values from a list. Applicable to fields whose set of possible values is limited, for example, a filter by several statuses:

GET /api/v1/products?status=1&status=2

Filtering by partial string match. Applicable to fields containing text, or data that can be treated as text, such as product numbers, phone numbers, etc.

GET /api/v1/users?name=John
GET /api/v1/products?code=123
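On the server side, these three kinds of filters can be applied to an in-memory list as follows. This is only a sketch; a real implementation would translate the parameters into database queries.

```python
def apply_filters(items, *, equals=None, ranges=None, contains=None):
    """Filter a list of dicts three ways: exact value match, from/to
    bounds, and case-insensitive substring match."""
    equals, ranges, contains = equals or {}, ranges or {}, contains or {}
    out = []
    for item in items:
        if any(item.get(f) != v for f, v in equals.items()):
            continue
        if any(not (lo <= item.get(f, lo) <= hi)
               for f, (lo, hi) in ranges.items()):
            continue
        if any(s.lower() not in str(item.get(f, "")).lower()
               for f, s in contains.items()):
            continue
        out.append(item)
    return out
```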

Named Filters

In some cases, when certain sets of filter parameters are used frequently and are treated by the system as a single whole, especially if they involve complex internal selection mechanics, it is advisable to group them into so-called named filters. It is enough to pass the filter name in the request, and the system builds the selection automatically.

GET /api/v1/products?filters=recommended

Named filters can also have their own parameters.

GET /api/v1/products?filters=kids

In this subsection, I tried to cover the most popular options and methods for obtaining the required selection. In your practice there will most likely be many more examples and nuances on this topic. If you have anything to add to my material, I will only be glad. In the meantime, the post has already grown considerably, so we will analyze other types of requests in the next approximation.

The term "client-server" can also describe hardware, in which case it means networked server and client computers, or a way of organizing software and services on a network.

Client-server model (client/server): a computing model in which the processing load of application programs is distributed between a client computer and a server computer that share information via a network. This model combines the advantages of centralized computing and the client model. Typically, the client is end-user software running on a workstation and capable of communicating with a server (usually a database server). Performance in the client-server model is higher than usual, since the client and server share the data-processing load. The model works best when accessing large amounts of data.

Client-server architecture is a way of organizing the interaction of programs or components of a multi-component program, implying the presence of a program or program component called a server and one or more other components called clients.

A client is a local network component that requests services from some server, and a server is a local network component that provides services to some clients. The local network server provides resources (services) to workstations and/or other servers. In a client-server system, the client sends a request to the server, and all information processing occurs on the server.

The core of the client/server architecture is the database server: a system that receives requests from client programs over a computer network and responds with the requested data (a set of responses). Each database server consists of a computer, an operating system, and the DBMS server software, an application that carries out a set of data-management actions: executing queries, storing and backing up data, tracking referential integrity, checking user rights and privileges, and maintaining a transaction log. Typically, clients send requests to the server over the network in the form of statements in the SQL language; the server interprets them and sends the corresponding data back to the client.

The client has the ability to initiate the execution of server procedures asynchronously for the server and receive the results of their execution. Typically, a client-server architecture allows multiple clients to interact with a server in parallel and independently of each other.

In its simplest form, client-server architecture consists of three main components:

A database server that manages data storage, access and security, backups, monitors data integrity according to business rules and, most importantly, fulfills client requests;

A client that provides a user interface, executes application logic, validates data, sends requests to the server, and receives responses from it;

Network and communications software that provides communication between the client and the server using network protocols.

The client application sends the database a query in SQL (Structured Query Language), the structured query language that is the industry standard in the world of relational databases. The remote server accepts the request and forwards it to the SQL database server, a special program that manages the remote database. The SQL server interprets the query, executes it against the database, generates the result, and delivers it to the client application. The resources of the client computer are not involved in the physical execution of the query; the client computer only sends the request to the database server and receives the result, which it then interprets as necessary and presents to the user. Since only the result of the query is sent to the client application, only the data the client actually needs "travels" over the network, so the load on the network is reduced. Because the query is executed where the data is stored (on the server), there is no need to send large batches of data. In addition, the SQL server optimizes the received query where possible so that it is executed in the minimum time with the least overhead [[3.2], [3.3]]. The structure of such a system is shown in Fig. 3.3.


All this increases system performance and reduces the time spent waiting for query results. When queries are executed by the server, the degree of data security is significantly increased, since the data-integrity rules are defined in the database on the server and are the same for all applications that use this database. This eliminates the possibility of defining conflicting integrity rules. The powerful transaction mechanism supported by SQL servers makes it possible to prevent simultaneous changes to the same data by different users and provides the ability to roll back to the original values when changes to the database end abnormally [[3.2], [3.3]].

  • Fig. 3.3. Client-server architecture.
  • There is a local network consisting of client computers, each of which has a client application installed for working with the database.
  • On each of the client computers, users can run the application. Using the user interface provided by the application, the user initiates a call to the DBMS located on the server to retrieve or update information. The special query language SQL is used for communication, i.e. only the text of the query is transmitted over the network from the client to the server.
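The point that only the query text and the result rows travel between client and server can be illustrated with Python's built-in sqlite3 module standing in for a networked SQL server; the table and data are invented for the example.

```python
import sqlite3

# sqlite3 stands in for a networked SQL server: the "client" hands over
# only the text of the query and gets back only the matching rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("John",), ("Mary",), ("Smith",)])

# The query is interpreted and executed where the data lives;
# the full table is never shipped to the client.
rows = conn.execute("SELECT name FROM users WHERE id > ?", (1,)).fetchall()
```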

Let's look at what the separation of functions between the server and client looks like.

  • Client application functions:
    • Sending requests to the server.
    • Interpretation of query results received from the server.
    • Presenting the results to the user in some form (user interface).
  • Server side functions:
    • Receiving requests from client applications.
    • Interpretation of requests.
    • Optimization and execution of database queries.
    • Sending results to the client application.
    • Ensuring a security system and access control.
    • Database integrity management.
    • Implementation of stability of multi-user operating mode.

So-called "industrial" DBMSs operate in the client-server architecture. They are called industrial because DBMSs of this class can support the operation of information systems at the scale of a medium or large enterprise, organization, or bank. The category of industrial DBMSs includes MS SQL Server, Oracle, Gupta, Informix, Sybase, DB2, InterBase, and a number of others [[3.2]].

As a rule, a SQL server is maintained by an individual employee or a group of employees (SQL server administrators). They manage the physical characteristics of the databases, perform optimization, configuration, and redefinition of various database components, create new databases, modify existing ones, and so on, and also grant privileges (permissions for a certain level of access to specific databases or to the SQL server) to various users [[3.2]].

Let's look at the main advantages of this architecture compared to the file-server architecture:

  • Network traffic is significantly reduced.
  • The complexity of client applications is reduced (most of the load falls on the server part), and, consequently, the requirements for the hardware capacity of client computers are reduced.
  • The availability of a special software tool, the SQL server, means that a significant part of the design and programming tasks has already been solved.
  • The integrity and security of the database is significantly increased.

Disadvantages include higher financial costs for hardware and software, as well as the fact that a large number of client computers located in different places causes certain difficulties with the timely updating of client applications on all of them. Nevertheless, the client-server architecture has proven itself in practice, and a large number of databases are currently built in accordance with it.

3.4. Three-tier (multi-tier) client-server architecture.

A three-tier (in some cases multi-tier, or N-tier) architecture moves the business logic out of the client applications into a separate middle tier, the application server. What does this give us compared to the two-tier client-server architecture? Now, when the business logic changes, there is no longer any need to change the client applications and update them for all users. In addition, the requirements for user equipment are reduced as much as possible.

So, as a result, the work is structured as follows:

  • The database in the form of a set of files is located on the hard drive of a specially dedicated computer (network server).
  • The DBMS is also located on the network server.
  • There is a specially dedicated application server hosting the business-logic software [[3.1]].
  • There are many client computers, each of which has a so-called “thin client” installed - a client application that implements the user interface.
  • On each of the client computers, users can run the application (the thin client). Using the user interface provided by the application, the user initiates a call to the business-logic software located on the application server.
  • The application server analyzes user requirements and generates queries to the database. For communication, a special query language SQL is used, i.e. Only the request text is transmitted over the network from the application server to the database server.
  • The DBMS encapsulates within itself all information about the physical structure of the database located on the server.
  • The DBMS initiates calls to data located on the server, as a result of which the result of the query is copied to the application server.
  • The application server returns the result to the client application (user).
  • The application, using the user interface, displays the result of the queries.
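The division of labour between the three tiers can be sketched in a few lines of Python. All names and the "adults" business rule below are purely illustrative; each function stands in for a tier that would run on its own machine.

```python
# A toy sketch of the three tiers: thin client -> application server -> DBMS.

def dbms_query(query, db):
    """Tier 3: the database server executes the query where the data lives."""
    table = db[query["table"]]
    return [row for row in table if row["age"] >= query["min_age"]]

def application_server(request, db):
    """Tier 2: business logic turns a user-level request into a DB query.
    Business rule (assumed for the example): "adult" means age >= 18."""
    query = {"table": "users", "min_age": 18 if request == "adults" else 0}
    return dbms_query(query, db)

def thin_client(request, db):
    """Tier 1: the thin client only forwards requests and renders results."""
    rows = application_server(request, db)
    return [row["name"] for row in rows]

db = {"users": [{"name": "Ann", "age": 17}, {"name": "Bob", "age": 30}]}
names = thin_client("adults", db)
```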