PostgreSQL replication settings from a MySQL user perspective

Hello.
I'm Mandai, in charge of Wild on the development team.

This is a summary of the settings for synchronous replication with PostgreSQL 9.4.
At our company we mostly use MySQL, so even though PostgreSQL is also an RDBMS,
working with it was a bit confusing at first.

It worked surprisingly smoothly and felt really satisfying, so
I'd like to summarize it here and encourage you to try it as well.

The process is a little long and involved this time, so here is an overview of the steps first.

PostgreSQL replication (master DB side work)

  1. Create a replication user
  2. Addition to pg_hba.conf
  3. Editing postgresql.conf
  4. Restart master DB

Slave DB side work

  1. Copy master with pg_basebackup
  2. Editing postgresql.conf (slave side)
  3. Creating recovery.conf
  4. Editing recovery.conf
  5. Restart slave DB

Verifying replication

I would like to proceed with the steps above.

 

PostgreSQL replication (master DB side work)


In truly wild fashion, we set up replication on a DB that was already in full production. (As a side note, this kind of situation actually gets me fired up.)

The rough flow of the work is as follows:

  • Complete the settings on the master DB side and restart
  • Copy the master DB's data to the slave DB side
  • Apply the settings on the slave DB side and restart

    Assume that the master DB on the local network has the IP address 192.168.1.1 and the slave DB has 192.168.1.2.

     

    Create a replication user

    First, let's start with the simplest step: creating a user called repl_user for replication.
    After logging into the master DB, create the user with the query below.

    CREATE USER repl_user REPLICATION PASSWORD '[password]';
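To confirm the user was created with the replication attribute, you can query pg_roles (a quick sanity check, not part of the original steps):

```sql
-- rolreplication should be 't' for repl_user
SELECT rolname, rolreplication FROM pg_roles WHERE rolname = 'repl_user';
```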

     

    Addition to pg_hba.conf

    Next, edit pg_hba.conf.
    Here, we allow connections from the slave DB using the repl_user user created earlier.

    host replication repl_user 192.168.1.2/32 md5


     Change the IP address to that of the slave.
    If you create multiple slave DBs, you may need to modify the IP mask, but
    if you have only one slave DB, I think /32 is fine.

     

    Editing postgresql.conf

    Next, move on to editing postgresql.conf.
    Here, we will add and change settings related to replication.
    All the necessary settings are already present but commented out, so you can enable them just by uncommenting.
    In this respect it is friendlier than MySQL.

    The setting items required for replication are as follows.

     

    wal_level

    This item sets how much information is written to the WAL files.

    There are three wal_level values: minimal, archive, and hot_standby.
    The default is minimal, which is fine during normal operation.

    Replication requires at least the archive level; this time we will set it to hot_standby so the standby can also serve read queries.

    wal_level = hot_standby

     

    synchronous_commit

    This setting determines whether, when a transaction completes, the server waits for the WAL write to finish before returning success to the client.

    The default is on, which returns the response to the client only after the WAL has been written.

    If turned off, a failure to write the WAL loses the transaction's data, leaving a state different from what was reported to the client.
    In terms of the data, the transaction is simply lost; it does not cause an inconsistency in what remains.

    This means a problem can occur where success is returned to the client, but the commit did not actually succeed.

    Response time to the client improves, but the downside (is it the only one?) is that the server sometimes (how often depends on how often such writes fail) tells the client a lie.

    This time, leave it on.

    synchronous_commit = on

     

    archive_mode

    By turning on archive mode, you can save wal files in an area different from the normal wal storage area.

    If replication falls behind, the slave catches up by applying the unapplied WAL files in order.
    However, WAL files in the default storage area are recycled surprisingly quickly, so long delays cannot be tolerated.
    If a delay occurs that cannot be recovered from, the data has to be reloaded, which takes time and effort.

    Copying WAL files outside the default storage area solves this: the archived copies are unaffected by that life cycle, so long replication delays can be handled.

    The archived WAL files do not keep piling up forever (though be careful: they will keep accumulating while a delay persists), so there is no need to rm them with a separate batch job.
    This time, turn it on.

    archive_mode = on

     

    archive_command

    If archive_mode = on, set the command used to save the WAL files. Normally this would be handled with something like the cp command.

    Several placeholder strings are available in the command to refer to the WAL file being copied:

    • %f ・・・ file name of the WAL file
    • %p ・・・ path of the WAL file to be archived

    archive_command = '[console command to save wal file]'
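As a concrete sketch (not from the original article; /var/lib/pgsql/9.4/archive is a hypothetical directory that the postgres user must be able to write to), the pattern shown in the PostgreSQL documentation looks like this:

```
archive_command = 'test ! -f /var/lib/pgsql/9.4/archive/%f && cp %p /var/lib/pgsql/9.4/archive/%f'
```

The test ! -f guard ensures an already-archived file is never overwritten.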

     

    max_wal_senders

    Set the number of slave connections allowed.
    It seems common to use the number of slave DBs + 1, leaving one slot free for connections from the pg_basebackup command.

    max_wal_senders = 2 # number of slaves +1

     

    synchronous_standby_names

    Enumerate the names of the DBs that will connect as slaves.
    A slave whose name is not listed here cannot connect as a synchronous standby.
    Note that string values in postgresql.conf are written in single quotes.

    synchronous_standby_names = 'db_slave1' # for multiple slaves, add the names separated by commas
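For example (db_slave2 is a hypothetical second standby), listing several names gives the first connected standby in the list the highest synchronous priority, with the others acting as potential standbys:

```
synchronous_standby_names = 'db_slave1, db_slave2'
```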

     

     

    Restart master DB

    This completes the settings on the master DB side.
    It is also possible to write the slave-side settings here (the reason will become clear later), but that makes postgresql.conf harder to read, so this time each side is configured separately.

    At this point, restart PostgreSQL for the settings to take effect.

    As a side note, when I looked into PostgreSQL startup commands, I found several ways to start it, though some may not work depending on how PostgreSQL was installed.
    The commands I usually use are as follows.

    # via init.d
    /etc/init.d/postgresql-9.4 [start|stop|restart]

    # via pg_ctl
    su postgres -c "/usr/pgsql-9.4/bin/pg_ctl [start|stop|restart] -D /var/lib/pgsql/9.4/data -m fast"

     
    Both are run as root, and both worked with a PostgreSQL installed via yum.

    With pg_ctl you can also specify the location of $PGDATA, so if you want to run two PostgreSQL instances on one server,
    you need to use pg_ctl.

     

    Slave DB side work

    Let's proceed with the settings on the slave side.
    We assume PostgreSQL is already installed on the slave DB server,
    and that there is no version mismatch with the master.

     

    Copy master with pg_basebackup

    First, copy the master's data set using pg_basebackup.
    You can place $PGDATA somewhere other than the default location, but in that case, if you start via init.d, you will need to change the $PGDATA path in the /etc/init.d/postgresql script.
    This time, we will copy to the default /var/lib/pgsql/9.4/data.
    Rename the existing data directory beforehand to keep it as a backup.

    The pg_basebackup command to copy data from the master is as follows.

    pg_basebackup -h 192.168.1.1 -U repl_user -D /var/lib/pgsql/9.4/data -P --xlog


     Depending on the size of the DB, the copy will take a while to complete.
    When you enter the copied data directory, you will notice that pg_hba.conf and postgresql.conf have also been copied.

    As mentioned in the section on editing postgresql.conf on the master DB side, the slave settings can also be made on the master side, because the master's postgresql.conf is copied over.
    A DB operating as a master simply ignores the settings needed only for slave operation, and a DB operating as a slave ignores the master-only settings, so it works out neatly.

     

    Editing postgresql.conf (slave side)

    Configure the slave settings in postgresql.conf.

     

    hot_standby

    This setting runs PostgreSQL in hot standby mode.
    A server running in hot standby mode cannot perform any update processing and becomes a read-only database.

    However, this item alone does not establish replication; the replication function will be enabled if hot_standby is on and recovery.conf, which will be created later, exists at the same time.

    hot_standby = on
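Once the slave is running, you can confirm it is in recovery (standby) mode with the built-in pg_is_in_recovery() function (an extra check, not part of the original steps); it returns true on a standby and false on the master:

```sql
SELECT pg_is_in_recovery();
```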

     

     

    Creating recovery.conf

    There is no need to create this file from scratch; a sample exists, so copy it and modify it.
    The recovery.conf sample is located under /usr/pgsql-9.4/share, so copy it to /var/lib/pgsql/9.4/data.

    cp /usr/pgsql-9.4/share/recovery.conf.sample /var/lib/pgsql/9.4/data/recovery.conf

     

     

    Editing recovery.conf

    There are two places to edit in recovery.conf.
    Once this is done, the replication settings are complete!

     

    standby_mode

    Setting standby_mode to on makes the server keep running and wait for the next WAL file even after it has received and applied the last one.

    standby_mode = on

     

    primary_conninfo

    Enter the information for connecting to the master DB.
    The point to pay particular attention to is whether the application_name part is included in the names listed in synchronous_standby_names in postgresql.conf of the master DB.

    The other parameters mean just what they say.

    primary_conninfo = 'host=192.168.1.1 port=5432 user=repl_user password=[password] application_name=db_slave1'

     

     

    From restarting the slave DB to checking replication

    After completing the above settings, you are ready to restart the slave DB.
    The restart command is the same as for the master DB.

    Once the restart is complete, replication should be in progress, so check the replication progress from the master DB side.

    Log in to the master DB and run the following query.

    select * from pg_stat_replication;
    -[ RECORD 1 ]----+------------------------------
    pid              | 12345
    usesysid         | 12345
    usename          | repl_user
    application_name | db_slave1
    client_addr      | 192.168.1.2
    client_hostname  |
    client_port      | 43210
    backend_start    | 2016-06-12 17:01:07.923879+09
    backend_xmin     |
    state            | streaming
    sent_location    | 27/B4EED5F8
    write_location   | 27/B4EED5F8
    flush_location   | 27/B4EED5F8
    replay_location  | 27/B4EED5C0
    sync_priority    | 0
    sync_state       | async

     
    If the state item is streaming, it means that replication is being executed.

    You can see how well the data is synchronized using sent_location, write_location, flush_location, and replay_location.

    • sent_location ・・・ how far WAL has been sent from the master DB to the slave DB
    • write_location ・・・ how far WAL has been written to the buffer area on the slave side
    • flush_location ・・・ how far WAL has been flushed to disk on the slave side
    • replay_location ・・・ how far WAL has been replayed (applied to the data) on the slave DB

    The four addresses increase with the data, so you can see the progress at a glance.

    In the example above, sent_location, write_location, and flush_location are identical, so the slave is up to date that far.
    replay_location lags slightly behind, but since replication is currently running in asynchronous mode, this is nothing to worry about unless it falls significantly behind.
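To put a number on the delay, PostgreSQL 9.4 provides pg_xlog_location_diff, which returns the byte difference between two WAL locations (this query is an addition, not from the original article):

```sql
-- bytes of WAL sent to the slave but not yet replayed there
SELECT application_name,
       pg_xlog_location_diff(sent_location, replay_location) AS replay_lag_bytes
  FROM pg_stat_replication;
```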

     
    That's it.


    About the author

    Yoichi Bandai

    My main job is developing web APIs for social games, but I'm also fortunate to be able to do a lot of other work, including marketing.
    Furthermore, my portrait rights at Beyond are treated as CC0.