MySQL replication of individual tables. Setting up master-slave replication in MySQL

My report is intended for people who know the word “replication”, know that MySQL has it, and perhaps once set it up, spent 15 minutes on it and forgot. They know nothing else about it.

The report will not cover the specific setup syntax — all of that is on the Internet, and there is no point in walking through it here.

We'll go through a little theory, try to explain how it all works inside, and after that you can dive into the documentation yourself with renewed energy.

What is replication, in principle? This is copying changes. We have one copy of the database, we want another copy for some purpose.

Replication comes in different types. Different comparison axes:

  • degree of synchronization of changes (sync, async, semisync);
  • number of recording servers (M/S, M/M);
  • change format (statement-based (SBR), row-based (RBR), mixed);
  • theoretically, a model for transferring changes (push, pull).

Fun fact: if you think about it a little, replication theoretically helps us scale only reads, for fundamental reasons. This is a somewhat non-obvious conclusion. If we need to apply a certain number of changes to the same copy of the data, and that copy is served by a single server, then that server can withstand only so many updates per second and no more. Say the server is capable of updating 1000 records per second, but not 2000. What changes if you attach a replica to this server, whether in master-slave or master-master mode? Will you be able to pour a second thousand updates per second onto that replica? The correct answer is no.

Of course, you can send additional updates to a replica in master-master mode, but sooner or later they arrive at the first master, and when it tries to apply that second thousand updates, its capacity will no longer be enough. You need to keep two almost obvious points separate: replication is one thing, but splitting the data up is another. If you need to scale writes rather than reads, you will have to do something else, and replication will not really help.

I.e., replication is mostly about reads.

About synchronization.

Synchronicity is a guarantee of durability and availability. Durability in the sense that our commit has gone through, the transaction is committed, everything is fine, this data is visible to one or more nodes in the cluster, and they can serve it in subsequent requests. Availability means that the data exists on more than one server, so the transaction is not lost, but it may not yet be visible for subsequent queries.

“The commit succeeded” — what does that mean? A synchronous commit means that both the local and the remote commit (on at least one replica) have completed: we committed something on our machine, and in synchronous replication mode these changes were successfully committed on the remote machine too; they are visible for subsequent queries locally and on the remote machine (at least one). This means that if a standard emergency occurs — a crowbar flies into one of the servers and pierces everything right through, from the processor to the disk — the data has not only been copied to some remote server, but can also instantly, without any additional delays, participate in subsequent transactions.

This is all general terminology and has absolutely nothing to do with MySQL. In any distributed system it will be arranged like this.

Asynchronous commit - no additional guarantees, depending on your luck.

A semi-synchronous commit is a nice intermediate option: our local commit has gone through, and nothing is known about the remote commit itself — maybe the slave has caught up, maybe not — but at least we received confirmation that the data flew off somewhere, was accepted there and, most likely, written down.
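For reference, this is roughly how semi-synchronous mode is switched on in stock MySQL 5.5+ (a sketch; the plugin file names are the ones shipped with the server, the timeout value is illustrative):

```sql
-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 1000; -- ms to wait for a slave ACK before falling back to async

-- On the slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
-- Restart the IO thread so the setting takes effect:
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;
```

Note the timeout: if no slave acknowledges in time, the master silently degrades to plain asynchronous mode, which is exactly the “depending on your luck” caveat above.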

About recording servers. What are the types of replication?

Master-slave classic, changes are all poured onto one server, after which they are copied to a lot of replicas.

True master-master — when changes flow onto a bunch of masters simultaneously and spread somehow from one to another, from the second to a third and between all of them, which gives rise to both a number of joys and a number of automatic problems. It is clear that when you have one “golden copy” and several replicas of it, which should (ideally instantly) repeat that “golden copy”, everything is relatively simple from the point of view of how to move data back and forth and what to do on each specific copy. With master-master an interesting “headache” begins — and, I emphasize, not specifically in MySQL, but purely in theory. What do you do if two nodes simultaneously try to run transactions that change the same data and, for simplicity of the example, change it in different ways? Clearly we cannot apply both changes at once. At the moment we start changing something on one node, there is nothing yet on the second node. Conflict. One of the transactions will have to be rolled back. On top of that, separate “dances” begin with clock synchronization, etc.

An interesting point: even the scheme where ultimately all the changes from all the masters gradually spread everywhere still will not help with that same write bandwidth. It's a shame, but that's how it is.

A nice option is called “master-slave + request routing”. It is nice because it is easy to program: you have one main copy and you replicate it to a bunch of machines. This is much simpler than a master-master setup where everyone has equal rights, etc., yet from the application's point of view it still looks as if you have many write points: you come to any node, it knows where to route you, and routes you successfully. Well, and reads scale — that is the joy of replication: you can read everything from all points, always.

Now closer to databases, “magic” statement-based, row-based, etc. formats. About the format of changes.

What can you do? You can transmit the queries themselves, or you can transmit only the modified rows. I emphasize that, while we have not yet dived into the MySQL jungle, this choice exists in any DBMS that has queries generating a large (or not so large) number of changes, i.e. updating a lot of data. The question arises: what exactly are we going to copy? We can send the queries themselves back and forth between nodes, or we can send only the changed data. Interestingly, both ways are quite bad! You can also try mixing them.

One more point about the kinds of replication: the distribution model. Probably somewhere the push-based model has not yet completely died out — where the node that made the changes is obliged to push them to all other nodes. From the point of view of programming and state tracking this is a hassle, which is why pull-based rules: pulling updates from a given node is much easier to program than having one node monitor a chaotic cluster of its replicas.

Some general terms were introduced. Let's move on to how we did it in MySQL.

MySQL is, in itself, a kind of deception. There is a logical layer called MySQL, which handles all sorts of general things isolated from data storage: the network, the optimizer, caches, etc. The specific physical layer responsible for storing data lies one floor below. There are several built-in engines, and some are installed as plugins, but even the built-in MyISAM, InnoDB, etc. live on the physical layer. The plugin architecture is cool — you can plug in a new engine — but a certain suboptimality immediately arises. In principle, the transactional write-ahead log (WAL), which the physical storage layer writes anyway, would be good to reuse for replication, and if the system knew there is one specific physical layer, or were coupled tightly enough with it, it would be possible not to write a separate log at the logical level but to use the same WAL. But in MySQL this is conceptually impossible, and if you change the pluggable storage engine interface so that it becomes conceptually possible, there will be a lot of work.

Replication is implemented at the level of MySQL itself. There is an upside to this: in addition to the one log containing the deeply internal data of the storage engine, there is a more or less logical log, perhaps at the statement level, maintained separately from the engine. That is “extra” safety, etc., and since there are no internal restrictions, you can do creative things like swapping the engine on the fly.

In these terms, MySQL 4.1 implemented: master-slave, pull-based, strictly async and strictly SBR. If you're stuck in the ancient 4.x era, then things are probably bad for you. Versions 5.x are already almost 10 years old - it’s time to update.

It's funny to follow the versions and watch people step on all kinds of rakes and then, when nothing could be undone, bolt new rakes onto the old ones so that life would not be so painful. In version 5.1 they added RBR to compensate for the inevitable problems with SBR, plus a mixed mode. In version 5.6 they added a few more nice things: semi-sync, delayed slave, GTID.
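A quick sketch of those 5.6-era niceties (the hostname is invented; the option names are the ones documented in MySQL 5.6):

```sql
-- my.cnf on every server, [mysqld] section, to enable GTIDs:
--   gtid_mode = ON
--   enforce-gtid-consistency = ON
--   log-slave-updates = ON
-- A slave can then be pointed at the master without file/position arithmetic:
CHANGE MASTER TO MASTER_HOST = 'master.example.com', MASTER_AUTO_POSITION = 1;

-- Delayed slave: deliberately lag N seconds behind the master,
-- a "time machine" against accidental deletes.
STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY = 3600;
START SLAVE;
```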

One more thing. Since MySQL is a kind of common layer, on the one hand, and a bunch of pluggable engines, on the other hand, including built-in ones, from a certain point there is a divine NDB cluster, about which they say cool things. There is completely synchronous master-master replication, a very accessible in-memory database... But there is one caveat - as soon as you start looking for people who use NDB cluster in production, there are very few such people.

What does the master do when you decide to enable replication? There are quite a few additional movements going on in the master. As usual, we receive requests over the network, parse them, send transactions, record them, etc. In addition to this, at the logical level, the MySQL master begins to maintain a binary log - a file, not quite a text one, into which all changes are poured. The master can also send these logs over the network. It's all very simple and seems to work.

What does a slave do? It's better not to send your changes to a slave — you might step into something incomprehensible. The slave has a bit more work. In addition to keeping one more log and sending it out on request, there is a thread that goes to a remote master, possibly even more than one, and downloads the binary log from there. The decision “let's download different logs from several remote masters” is ambiguous: on the one hand it's not bad, on the other there is an instant discrepancy. You can't just physically copy files over SCP: the log on the master contains its own positions, so we drag the events over the network and put them into a separate relay log, and yet another thread runs and tries to replay these local logs. The most infernal thing, in my opinion, is that up until version 5.6 a particular transaction was identified in the log by the file name and position on the master. An interesting solution.

Here is the write path that a simple INSERT takes without replication: the application connects to the server, writes the row into the table, and disconnects.

With replication there are several additional steps: the writer application goes to the master in the same way, but in addition this data ends up, in one form or another, in the binary log, is then downloaded over the network into the relay log, and is then gradually replayed from the relay log (if we are lucky and the slave is not lagging, replayed immediately) into a table on the slave, after which it becomes visible to the reader.

What exactly ends up in the binary log depends on the SBR/RBR/mixed settings. Where does all this grow from? Let's imagine ourselves as a database. We received a simple request “update one specific record” – UPDATE users SET x=123 WHERE id=456

What should we write to the binary log? In principle, here it hardly matters. We can write down the short query itself, or (since it updated one record) we can record the change itself in one format or another.

Another situation. Let's imagine that we received the same request, which in itself is small, but changes a lot of data - UPDATE users SET bonus=bonus+100

There is only one efficient option here — write the query itself, because the query is some 32 bytes, while it can update an arbitrary number of records: 1,000, 100,000, 1,000,000, as many as you like. Writing the changed records to the log is inefficient.

What happens if we put a similarly simple query into the log — “disable all users who have not logged in for a long time” – UPDATE users SET disabled=1 WHERE last_login < NOW() - …

Suddenly, horror sets in. The problem is that even if the query itself is replicated perfectly, firstly, time is never synchronized between two nodes, and secondly, because the write path is so long, by the time of replay this “NOW” will have diverged. The replica suddenly diverges from the master, and all subsequent changes, formally speaking, are no longer safe and can lead to anything.

Generally speaking, for such queries, regardless of the amount of changed data, you would ideally have to copy the rows themselves. In this particular case, though, you can avoid copying the rows: you can pin the constant and write into the log not “NOW” but the specific timestamp the master used when executing the statement.
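This pinning of the constant is exactly what MySQL's statement-based logging does. A sketch of what `mysqlbinlog` output looks like for such a statement (the timestamp value and the interval condition are invented for illustration):

```sql
-- Excerpt of `mysqlbinlog mysql-bin.000001` output:
SET TIMESTAMP=1445000000/*!*/;
UPDATE users SET disabled=1 WHERE last_login < NOW() - INTERVAL 90 DAY
/*!*/;
-- On replay the slave derives NOW() from the pinned TIMESTAMP,
-- not from its own clock, so the result matches the master.
```

(SYSDATE(), by contrast, always reads the real clock and therefore stays unsafe for statement-based replication.)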


Fun facts you accidentally learn while diving into the jungle of replication — and you can dive shallowly, you'll run into them right away. In random order:

  • The master is multi-threaded, but the slave is not. It is clear that if the master pours load into four cores, the slave has no time to apply that load with one core. Things are pretty bad here;
  • The state of the slave is determined by a file name and a position on the master. Think about it: the state of one node in the cluster is determined by the name of a file and a position in that file on another node of the cluster, to which anything can happen for any reason!
  • The “economy” of RBR. It turns out that by default full before/after row images are written. We changed one column in a five-kilobyte row — oops! — about 10 KB of traffic plus 20-40 bytes of overhead for that row: first the full row from the previous version, then the version with the new values. Administrators howl in unison! On the other hand, this is wonderful for certain perverse applications, for example external readers that hook up to the MySQL server, pull data out of it and do something with it — say, feed it into a full-text index. As bad as this is for database administration, where a three-byte change generates 10 KB of disk traffic and then 10 KB of network traffic per slave, it is just as good for systems such as full-text search (Sphinx, for example) that keep no local copy of the data and have no desire to reimplement MySQL from scratch. In MySQL 5.6 this was recognized and binlog_row_image was added (but the default is full, not minimal or noblob).
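A minimal sketch of taming that RBR traffic with the setting just mentioned (MySQL 5.6+; the values are the documented ones):

```sql
-- my.cnf fragment, [mysqld] section:
--   binlog_format    = ROW
--   binlog_row_image = minimal   -- log only the PK plus changed columns instead of full row images
-- The variable is dynamic, so it can also be changed at runtime:
SET GLOBAL binlog_row_image = 'minimal';
```

With `minimal`, the three-byte change from the example above logs roughly the primary key and the new value rather than two full 5 KB row images.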

In short, everything is arranged quite simply: a stick, a rope, one log, another log. And even this log has had its share of rather funny “childhood” diseases.

For a person who has been using replication for two days, all this is scary and difficult. But once you know how simple it really is, it's clear how to live with it:

  • First of all, we don’t believe in defaults;
  • We carefully look at the settings, think about what we want - SBR, RBR, etc.

And it's better to set it up correctly right away, so you don't have to untangle a strange mess later.

In the situation “the log is corrupted, the position has diverged, nobody knows what is happening”, there is a certain toolkit: we look at the events, try to understand which transaction has already gone through and which has not, and whether the whole thing can be saved or restored, etc. If you managed to turn GTID on in advance, life becomes easier.
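In practice that toolkit amounts to a handful of commands, roughly like this (the log file name is an invented example):

```sql
SHOW BINARY LOGS;                                   -- which binlog files exist on the master
SHOW BINLOG EVENTS IN 'mysql-bin.000003' LIMIT 10;  -- peek at events around the suspect position
SHOW SLAVE STATUS\G                                 -- compare Read_Master_Log_Pos vs Exec_Master_Log_Pos
-- With GTIDs enabled you compare transaction sets instead of file offsets:
SELECT @@GLOBAL.gtid_executed;
```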

Another point about observing replication: it is interesting how the crooked internal structure provokes not just competition but the creation of additional products. The “magic” Tungsten Replicator, they say, solves the problem called “a single-threaded slave is bad” quite well, and if not for these inherent difficulties, there would be no extra product that lets you use this mechanism to transfer data to other systems on the one hand, and at the same time fixes a number of problems built into the existing system on the other.

As usual, it is impossible to give universal advice: it helps some, others will spit plenty. But they say there are situations in which Tungsten copes well with the inevitable single-threaded lag. I'm sure there are other interesting tricks out there, but an internal single-threaded slave is hard.

What should you do if for some reason you were using replicas as backups? I think you should bang your head against the wall, because a replica and a backup are two different things. Still, if you are creative and run a fairly new version, delayed replication will save you — on the one hand; on the other, if you do not make full backups, nothing will save you anyway.

Next is another element of creativity. It's not hard to imagine a situation where the master has filled an entire 10 PB cloud disk with logs or saturated the network sending them, while we don't need 90% of those updates, because we are only interested in replicating, say, one table or one database — and by default everything pours into the binary log: all changes across all databases, all tables, everything. The solution again amazes with its creativity. On the one hand, there are four settings — (binlog|replicate)_(do|ignore)_db — that let you filter on the master what is written to the log and what is ignored, and do the same on the slave. I.e., on the master we can filter what goes into the binary log — into this funnel that then merges into the network — and on the slave we can put an incoming filter on what arrives from the network, or write only part of the data to disk and then replay only part of it. Suddenly, even in this simple story, horror sets in, because for the combination “we USE one database but update a table in another database with fully qualified syntax”, the behavior is somehow... and exactly how it will behave is unknown, because the different filters fire at different moments.
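A sketch of those four filter knobs in my.cnf (the `shop` and `logs` database names are invented; note that binlog-do-db filters by the default database chosen with USE, which is precisely the trap described above):

```sql
-- Master side: limit what even enters the binary log ([mysqld] section):
--   binlog-do-db     = shop        -- log only statements whose default database is `shop`
--   binlog-ignore-db = logs        -- ...or blacklist instead
-- Slave side: filter what the SQL thread replays from the relay log:
--   replicate-do-db         = shop
--   replicate-wild-do-table = shop.orders%   -- table-level pattern, LIKE-style wildcards
```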

There is no built-in nicety called “re-electing the master if it suddenly dies”; you have to do that by hand. The lack of cluster-management tools — in my opinion this is a good thing — breeds competition and the creation of additional products. Indeed, if truly cool master-master replication worked perfectly in stock MySQL, or at least automatic recovery after failures, why would Galera, Percona/MariaDB Cluster, etc. be needed at all?

A few more tricks. Replication is implemented as simply as a stick and a rope — without any checks on the one hand, and without any tools to make managing the cluster of replicating slaves pleasant on the other. This is bad. But you can hand-sculpt such interesting configurations out of it that everyone who later comes to take them apart will shudder.

Configuration No. 1. A master-master “on the knee”, MySQL-style, is done like this:
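A hedged sketch of what such an “on the knee” configuration usually looks like — two servers, each simply made a slave of the other (addresses and credentials invented):

```sql
-- my.cnf on A:  server-id = 1, log-bin, auto_increment_increment = 2, auto_increment_offset = 1
-- my.cnf on B:  server-id = 2, log-bin, auto_increment_increment = 2, auto_increment_offset = 2
-- On A, point replication at B:
CHANGE MASTER TO MASTER_HOST='b.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret',
                 MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
START SLAVE;
-- On B: the same CHANGE MASTER TO, pointing back at a.example.com.
-- There is no conflict detection whatsoever: update the same row on both
-- sides at once and replication simply breaks.
```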


What's scary is how many idiots there are in the world! Google “Master-master MySQL replication” - every second link is like this. Hell and the Holocaust.

Trick No. 2 — the catch-all slave — is more pleasant. There are no unnecessary checks of what flies from whom, who receives it, and what to do with it. Thanks to this you can make funny things like a slave onto which part of the data from a bunch of servers is precisely funneled, or all of the data from all servers — a server with all the backups. But, I repeat, replication is still just a basic tool that copies table A into table B and that's it.

And finally, trick No. 3 — we replace everything. Remember that replication lives at the logical level, which is in no way tied to the physical storage level. Thanks to this, you can build extremely interesting oddities. You can change the engine “on the fly” for unclear purposes — here is a true story that, they say, some replicate from InnoDB databases into MyISAM tables just to make full-text search work at least somehow. There is a creative trick called “schema modification via replication”. I refuse to understand what the point is, but such tricks exist. And there is a perfectly clear and interesting mode of operation called “paranoid version upgrade through replication”.

During the report we went over how all of this is arranged inside. Nevertheless, you can live with this hell if you at least roughly understand how it works.

The main message: understand at least roughly how replication works inside — then the documentation becomes far less frightening.
In 2015, at the HighLoad++ Junior conference, Andrey Aksenov read a new version of his report on the replication device in MySQL. We also deciphered it on our blog.

The term replication is used to refer to a mechanism for synchronizing multiple copies of data, which increases information security, fault tolerance and system performance. A striking example is database replication between two servers.

Master-Slave MySQL replication

In Master-Slave terminology, the master is the primary server with the database; it writes to the database, but reading is distributed between the master and slave depending on the load on the system, which increases fault tolerance and performance. In addition, thanks to this approach, a copy of the database is always at hand and can be restored if one of the servers fails.

In what situations might a slave server be needed? For example, when a large stream of writes arrives and the master simply has no time left to serve reads, so the client has to wait for the writes to finish — which a slave server helps avoid.

Situations are possible when the master server fails; in this case, the slave server takes over all the functions of the master and works alone until it is restored. The client most likely won’t notice anything, and he certainly won’t wait an hour or two or three for the technician to fix it.

Setting up replication is not at all difficult, since the mechanism was built into MySQL from the very beginning.

Setting up on the Master server

Let's start by editing the configuration file my.cnf, which is most often located at /etc/mysql/my.cnf. You need to find and uncomment (remove the #), or add, the following lines.

bind-address = 0.0.0.0
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log

Important! If bind-address has already been registered, it needs to be changed, otherwise it will not be possible to establish a connection between the servers.

Immediately after this, we will restart the database on the server.

/etc/init.d/mysql restart

Now we need to create a user with rights to replicate our database; this can be done from root in the MySQL console using the command

GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY 'slave_password';
FLUSH PRIVILEGES;

Where instead of “slave_user” and “slave_password” you need to write the login and password for the slave.

Now let's look at the master data

SHOW MASTER STATUS;

Remember the values of the File and Position columns — they will be used when setting up the slave, which is what we move on to now.

Setting up on the Slave server

The first step is to create a database with the same name as the one we are going to replicate. This is an important step that should not be neglected. Next, open the my.cnf configuration file already familiar to us and add the settings.

server-id = 2
relay-log = /var/log/mysql/mysql-relay-bin.log
log_bin = /var/log/mysql/mysql-bin.log

Important! log_bin here is the slave's own binary log, not the master's; the master's log file will be specified later in CHANGE MASTER TO. The server ID must be different from the master's ID; it is convenient to set it to one more.

CHANGE MASTER TO MASTER_HOST='1.1.1.1', MASTER_USER='slave_user', MASTER_PASSWORD='slave_password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
START SLAVE;

Here MASTER_HOST is the master's IP address, the login and password match those we created on the master, and MASTER_LOG_FILE and MASTER_LOG_POS are filled in from the SHOW MASTER STATUS output in the last step of the master setup.

From this very moment, all changes in the database will be transferred from the master to the slave.

Checking replication status

In addition to the SHOW MASTER STATUS command on the master, there is a similar one for the slave, SHOW SLAVE STATUS\G, which displays a table of information. The main sign that the servers have connected and are working correctly is the presence of the corresponding status lines.

Not long ago I was asked to talk about replication in MySQL. I decided that this topic could be useful to many, so in this article I will talk about

what replication in MySQL is, when it is needed and how to configure it. The main task of replication is to combine the power of several servers [2]. Let's say your website has a dedicated server, but over time it becomes heavily visited and can no longer withstand the load. As a result, the server begins to slow down and crash regularly. The easiest way is to buy a more powerful server, and this is what most people do. But sooner or later there comes a time when the growth in a server's price no longer matches the growth in its performance, so it becomes more profitable to buy several

different servers for less money.

As a result, your database will be on two servers at once. When the main server (aka the head server) can no longer cope, it switches over to a spare one. All database update requests always go to the head server. After an update, the head server places information about it in a separate file, from which the slave servers take all the changes. Select operations, which usually make up the majority and are the slowest, can then be offloaded to the slave servers, since the data is the same on both. Now let's figure out how to configure replication in MySQL:

  1. Install the latest versions of MySQL on all servers.
  2. On the main server, create a user with the REPLICATION SLAVE privilege. For the address from which this user may connect, specify “all” (%).
  3. Stop all servers.
  4. In the MySQL settings (the my.cnf file), in the [mysqld] section, add the following lines: log-bin
    server-id=1
    Note that server-id must be different on all servers. In fact, this is what distinguishes one server from another.
  5. On the slave servers, add the following lines to the MySQL settings: master-host=master_host_name
    master-user=login_of_the_created_user
    master-password=password_of_the_created_user
    master-port=port_for_connecting_to_the_master_server
    server-id=id_of_this_slave_server
  6. Copy all databases from the head server to the slaves.
  7. Start the head server, then all the slaves.

Replication is a mechanism for synchronizing the contents of multiple copies of an object. This process refers to copying data from one source to many others and vice versa.

Designations:

  • master — the main server whose data needs to be duplicated;
  • replica — the slave server that stores a copy of the master's data.

To set up replication in MySQL, follow the sequence of actions described below; it is not a dogma, and the parameters may change depending on the circumstances.

On the main server, edit the my.cnf file and add the following lines to the mysqld section:

server-id = <N>
log-bin = mysql-bin
log-bin-index = mysql-bin.index
log-error = mysql-bin.err
relay-log = relay-bin
relay-log-info-file = relay-bin.info
relay-log-index = relay-bin.index
expire_logs_days = 7
binlog-do-db = <database_name>

  • server-id — a unique identifier of the MySQL server, a number that must differ on every server;
  • binlog-do-db — the name of the database whose changes will be written to the binary log; if there are several databases, each requires a separate line with the binlog-do-db parameter.

On the slave, edit the my.cnf file and add the following lines to the mysqld section:

server-id = <N>
master-host = master
master-user = replication
master-password = password
master-port = 3306
relay-log = relay-bin
relay-log-info-file = relay-log.info
relay-log-index = relay-log.index
replicate-do-db = <database_name>

On the main server, add the replication user with data replication rights:

GRANT REPLICATION SLAVE ON *.* TO 'replication'@'replica' IDENTIFIED BY 'password';

Let's block replicated databases on the main server from changing data, programmatically or using the MySQL functionality:

mysql@master> FLUSH TABLES WITH READ LOCK;
mysql@master> SET GLOBAL read_only = ON;

To unlock, use the command:

mysql@master> SET GLOBAL read_only = OFF;

Let's make backup copies of all databases on the main server (or those that we need):

root@master# tar -czf mysqldir.tar.gz /var/lib/mysql/

or using the mysqldump utility:

root@master# mysqldump -u root -p --lock-all-tables --all-databases > dbdump.sql

Let's stop both servers (in some cases you can do without it):

root@master# mysqladmin -u root -p shutdown
root@replica# mysqladmin -u root -p shutdown

Let's restore replicated databases on the slave server by copying the directory. Before replication begins, the databases must be identical:

root@replica# cd /var/lib/mysql
root@replica# tar -xzf mysqldir.tar.gz

or, using mysql functionality (in which case there was no need to stop mysql on the slave server):

root@replica# mysql -u root -p < dbdump.sql

Let's run mysql on the master server (and then on the slave server, if necessary):

root@master# /etc/init.d/mysql start
root@replica# /etc/init.d/mysql start

Let's check the operation of the master and slave servers:

mysql@replica> START SLAVE;
mysql@replica> SHOW SLAVE STATUS\G
mysql@master> SHOW MASTER STATUS\G

On the slave server, check the relay logs (the relay-bin.* files); they should contain the data-change statements received from the master. The relay log is a binary file, so it must first be converted to text format:

root@replica# mysqlbinlog relay-bin.000001 > relay_log.sql

If errors occur, you can use the commands:

mysql@replica> STOP SLAVE;
mysql@replica> RESET SLAVE;
mysql@master> RESET MASTER;

and repeat all steps starting with blocking the databases.

To hot add replication servers you can use the following syntax:

mysql@replica> SHOW SLAVE STATUS\G
mysql@master> SHOW MASTER STATUS\G
mysql@replica-2> CHANGE MASTER TO MASTER_HOST='master', MASTER_USER='replication', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=155;
mysql@replica-2> START SLAVE;

Information from the statuses will show the position and name of the current log file.

With asynchronous replication, an update made on one replica propagates to the others after some time rather than within the same transaction. Asynchronous replication thus introduces latency, a waiting period during which individual replicas may not be effectively identical. But this type of replication also has upsides: the master does not have to wait for data synchronization, and you can lock the database on a slave machine (for example, to create a backup) without any problems for users.

List of sources used

  1. Habrahabr.ru - Basics of replication in MySQL (http://habrahabr.ru/blogs/mysql/56702/)
  2. Wikipedia (http://ru.wikipedia.org/wiki/Replication_(computing_technology))


MySQL replication lets you maintain an exact copy of a database from one server, the master, on one or more other servers, the slaves. By default, MySQL replication is asynchronous: the master does not control and does not know whether the slave servers are reading the log file, or whether they are doing it correctly.
There are also other types of synchronization, synchronous and semi-synchronous, in which these processes are controlled.
Depending on the settings, you can replicate entire databases or individual tables.

What replication can be used for:
1. Load distribution between hosts to improve performance. In this scheme the master node handles both reads and writes, while the nodes subscribed to it serve reads only, relieving the master of read traffic.
2. Data security and ease of maintenance. Since a slave node holds read-only data, changes on the subscriber are limited; maintenance is easier because database-serving processes can run without interrupting applications.
3. Distribution of data over long distances. You can create a copy of the data on any host, regardless of its location.
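The load-distribution scheme in point 1 is usually implemented as read/write splitting in the application or a proxy. A toy sketch (Python; the class and host names are hypothetical, real deployments use a connection pool or a proxy such as ProxySQL) that sends writes to the master and spreads reads round-robin over replicas:

```python
import itertools

class ReplicationRouter:
    """Route writes to the master and reads round-robin across replicas."""

    def __init__(self, master: str, replicas: list):
        self.master = master
        self._reads = itertools.cycle(replicas)

    def host_for(self, query: str) -> str:
        verb = query.lstrip().split(None, 1)[0].upper()
        # Only SELECT may go to a replica; everything else must hit the master.
        return next(self._reads) if verb == "SELECT" else self.master

router = ReplicationRouter("master", ["replica-1", "replica-2"])
print(router.host_for("SELECT * FROM users"))      # one of the replicas
print(router.host_for("UPDATE users SET x = 1"))   # always the master
```

Note that because replication is asynchronous, a read routed to a replica may briefly see stale data; applications that read their own writes immediately should direct those reads to the master.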
MySQL supports the following replication methods:
Traditional: based on replicating events from the master's binary log; the log file names and positions must be kept synchronized between the master and slave servers.
Using global transaction identifiers, GTIDs (the transactional method).
MySQL supports the following types of synchronization:
asynchronous (one-way synchronization)
semi-synchronous (partial control of subscribers)
synchronous (full control of subscribers)

Setting up MySQL database replication, the traditional method

Principle of operation
The master server keeps binary log (bin-log) files recording all changes made to the master database, a file listing the binary log names, and the position in the log up to which the last master data was written.
The slave node receives data from the master using the binary log file names and the position in the log file.

Master setup
my.ini must contain a unique identifier server-id, a number from 1 to 2^32 - 1.
By default server-id=0, which means subscriptions from slave servers are not accepted.

log-bin=mysql-bin
server-id=1

These two lines are enough to start
Note: however, if InnoDB is used, it is additionally recommended to add
innodb_flush_log_at_trx_commit=1
sync_binlog=1

Also check that networking is not disabled: the skip-networking parameter must not be set.
The slave server connects to the master using a username and password, so first create a user on the master server:
CREATE USER 'repl'@'%.mydomain.com' IDENTIFIED BY 'slavepass';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%.mydomain.com';

Check the state:
SHOW MASTER STATUS;
If binary logging is already running, then for InnoDB tables you must first lock the tables in one of the sessions:
FLUSH TABLES WITH READ LOCK;
If you exit the session, the table lock is released automatically.
In another session, get the binary log name and position.
Together these two values are the replication coordinates at which the slave server must begin reading the file to start replication.
The next step depends on whether the slave server already holds data from the master.
If it does, leave the tables locked and create a dump (this is the recommended way when using InnoDB).
You can find out the storage engine with the command:
mysqlshow -u mysql_user -p -i database-name
If the database is stored in binary files, they can be copied from the master to the slave server.
Make a dump:
mysqldump --all-databases --master-data > dbdump.db
or, for selected databases:
mysqldump --databases db_name --master-data > dbdump.db
The --master-data option automatically adds the CHANGE MASTER TO statement needed on the slave node; without it, all tables in the session would have to be locked manually.
Unlock
UNLOCK TABLES;
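Because of --master-data, the dump file itself carries the replication coordinates in its header. A small sketch (Python; the helper name and sample header are illustrative) that pulls MASTER_LOG_FILE and MASTER_LOG_POS back out of a dump, for example to double-check them before subscribing the slave:

```python
import re

def dump_coordinates(dump_text: str):
    """Find the CHANGE MASTER TO line that --master-data writes into a dump."""
    m = re.search(
        r"CHANGE MASTER TO MASTER_LOG_FILE='([^']+)',\s*MASTER_LOG_POS=(\d+)",
        dump_text,
    )
    if m is None:
        return None
    return m.group(1), int(m.group(2))

# Sample dump header with made-up coordinates.
header = ("-- MySQL dump\n"
          "CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000004', "
          "MASTER_LOG_POS=155;\n")
print(dump_coordinates(header))  # ('mysql-bin.000004', 155)
```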

Slave node configuration
Add to my.ini a server-id distinct from the master's and from the other nodes':

server-id=2

Create a subscription:
CHANGE MASTER TO
MASTER_HOST='master_host_name',
MASTER_USER='replication_user_name',
MASTER_PASSWORD='replication_password',
MASTER_LOG_FILE='recorded_log_file_name',
MASTER_LOG_POS=recorded_log_position;

When setting up replication with existing data, you must transfer a snapshot from the master to the slave before replication begins. We use mysqldump:
1. Start the slave node with the --skip-slave-start option to prevent replication from starting.
2. Import the dump file:
mysql < fulldb.dump
3. Start the subscription process:
START SLAVE;
Check the replication status:
SHOW SLAVE STATUS\G
Slave_IO_State: current state of the slave thread
Slave_IO_Running: whether the data stream is being read from the master; should be Yes
Slave_SQL_Running: whether SQL queries are being applied; should be Yes
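These fields can be checked mechanically for monitoring. A sketch (Python; the parsing helpers and sample values are illustrative) that parses `SHOW SLAVE STATUS\G` text and reports whether both replication threads are running:

```python
import re

def parse_vertical(output: str) -> dict:
    """Parse the 'Name: value' lines of a \\G-formatted result."""
    fields = {}
    for line in output.splitlines():
        m = re.match(r"\s*([A-Za-z_]+):\s*(.*)", line)
        if m:
            fields[m.group(1)] = m.group(2).strip()
    return fields

def replication_healthy(status: dict) -> bool:
    # Both the IO thread and the SQL thread must be running
    # for the slave to receive and apply changes.
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes")

# Sample output with made-up values.
sample = """*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
"""
print(replication_healthy(parse_vertical(sample)))  # True
```

A real monitor would also watch Seconds_Behind_Master, since with asynchronous replication a running slave can still lag behind the master.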

Example. Let's configure the master server, ip 11.11.11.10. In my.ini:

[mysqld]
log-bin=mysql-bin
server-id=1

Create a user:
mysql -u root -p
GRANT REPLICATION SLAVE ON *.* TO 'replica'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
Next, lock all tables in the database:
FLUSH TABLES WITH READ LOCK;
Check the status:
SHOW MASTER STATUS;
Note the file name and position; we will use them on the slave server for the subscription.

On the slave, in my.ini:

log-bin=mysql-bin
server-id=2

Create a subscription:
CHANGE MASTER TO MASTER_HOST='11.11.11.10', MASTER_PORT=3306,
MASTER_USER='replica', MASTER_PASSWORD='password',
MASTER_LOG_FILE='server-mysql-bin.000002',
MASTER_LOG_POS=1151664, MASTER_CONNECT_RETRY=10;
START SLAVE;
Check the replication status: SHOW SLAVE STATUS\G