How to Tune SQL on a System Running MySQL - Finding the Bottleneck -
This is Haraoka from Beyond.
Today we will talk about SQL tuning of MySQL.
- The system is slow!
- The server suddenly became heavy!
SQL tuning can have a dramatic effect in such cases.
This time, we will explain how to find the bottleneck.
・Why SQL tuning?
About half of sudden system slowdowns are caused by the DB.
And when the DB is the problem, I'd say about 70% of the time the cause is SQL.
100% × 0.5 × 0.7 = 35%
So, if you learn SQL tuning,
you should be able to solve about 1/3 of the problems where the system suddenly becomes heavy! !
*Based on past experience
・Flow of SQL tuning
Let's do it right away.
Broadly speaking, SQL tuning goes like this:
① Identify heavy SQL
↓
② SQL analysis
↓
③ Modify SQL or add index
Identifying heavy SQL
First, identify heavy SQL statements.
・Slowlog output
First, enable the slow query log (slowlog) so that heavy SQL gets recorded.
This lets you output any SQL that takes longer than a set time to a log file.
To enable the slowlog, add the following to my.cnf:
```
[mysqld]
slow_query_log  = 1   # enable slowlog
long_query_time = 1   # output SQL that takes longer than this (seconds)
```
It is easiest to start with long_query_time = 1.
long_query_time can be set in finer increments such as 0.1 seconds, but a threshold that low may produce a huge amount of output, so it is better to first weed out the SQL statements that take longer than 1 second.
Since version 5.1 these are global variables that can be changed without a restart, so you can set them while the server is running.
```
mysql> set global slow_query_log = 1;
mysql> set global long_query_time = 1;
mysql> show variables like "%query%";
+-----------------+----------+
| Variable_name   | Value    |
+-----------------+----------+
| long_query_time | 1.000000 |
| slow_query_log  | OFF      |
+-----------------+----------+
```
If you only use SET GLOBAL, the settings revert after a restart, so also add them to the config file.
*The above does not work on the 5.0 series, so there you must add the settings and restart.
```
$ vi /etc/my.cnf
---------------------
[mysqld]
slow_query_log = 1
long_query_time = 1
---------------------
$ service mysqld restart
```
If you do not specify a file path, the output will be in the data directory.
```
$ ll /var/lib/mysql/
mysql-slow.log
```
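If you want the log to go somewhere other than the data directory, my.cnf also accepts slow_query_log_file. A minimal sketch (the path below is just an example, not from the original setup):

```
[mysqld]
slow_query_log      = 1
long_query_time     = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
```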
・How to read Slowlog
If you look inside, the following log will be output.
$ tail -100 mysql-slow.log
```
(1) # Time: 150926 4:30:04
(2) # User@Host: hogeuser[hogedb] @ [172.16.0.1]
(3) # Query_time: 1.644283  Lock_time: 0.542217  Rows_sent: 3  Rows_examined: 9971
    use hogedb;
(4) SET timestamp=1424145644;
    SELECT 1 FROM aaa_history WHERE id = 26 AND code = 'bbbb';
```
(1) | Log output date and time
---|---
(2) | User and connecting host
(3) Query_time | Query execution time
Lock_time | Lock time
Rows_sent | Number of rows returned
Rows_examined | Number of rows read
(4) | The SQL statement
The important thing here is Rows_examined (the number of rows read).
This is how much data the database has read.
In other words, the higher this value, the more data is being read.
Furthermore, by dividing by Rows_sent, you can find out how many rows were read to output one row.
In this case:
Rows_examined 9971 ÷ Rows_sent 3 ≈ 3323
This means roughly 3,300 rows had to be read to output a single row.
So, take a quick look at the log and pick up the SQL with a large Rows_examined value.
SQL with a large Rows_examined value is often the bottleneck.
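As a quick way to surface these header lines, the log can be scanned with a small script. This is only a sketch, assuming the default slowlog header format shown above and a log file named mysql-slow.log:

```shell
# Print Rows_examined, Rows_sent, and their ratio for each query header
# in the slowlog, heaviest first. Ratio = rows read per row returned.
awk '/^# Query_time/ {
    for (i = 1; i <= NF; i++) {
        if ($i == "Rows_sent:")     sent = $(i + 1)
        if ($i == "Rows_examined:") exam = $(i + 1)
    }
    ratio = (sent > 0) ? int(exam / sent) : exam
    printf "%s examined, %s sent, ratio %d\n", exam, sent, ratio
}' < mysql-slow.log | sort -rn
```

Running this against the example entry above yields `9971 examined, 3 sent, ratio 3323`, matching the division done by hand.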
Running mysqldumpslow
If you want to aggregate the slowlog statistics and identify the heavier SQL,
use the mysqldumpslow command.
It makes it easy to find SQL that is moderately heavy but executed many times.
・Log cutting
You can run mysqldumpslow against the slowlog as is, but
if the log has accumulated over a long period of operation, it is easier to identify recent heavy SQL by first cutting out an appropriate number of lines.
$ tail -10000 /var/lib/mysql/mysql-slow.log > /home/hogeuser/160721_mysql-slow.log
・Execute mysqldumpslow
Analyze the log extracted above using mysqldumpslow and write it to a separate file.
$ mysqldumpslow -st /home/hogeuser/160721_mysql-slow.log > /home/hogeuser/160721_mysqldumpslow.log
-s sort_type determines how the output is sorted.
The sort options are as follows:
t, at | Sort by query time or average query time
---|---
l, al | Sort by lock time or average lock time
r, ar | Sort by rows sent or average rows sent
c | Sort by count
The above command will output slowlog statistics.
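When mysqldumpslow itself is not at hand, a very rough stand-in for sorting by count (`-s c`) can be improvised with standard tools. This is only a sketch: like mysqldumpslow, it collapses numeric literals to N so that queries differing only in their values group together, after dropping the `#` metadata lines and the `use` / `SET timestamp` lines:

```shell
# Rank query "shapes" in the slowlog by how often they occur, most frequent first.
grep -v '^#' mysql-slow.log \
  | grep -ivE '^(use |SET timestamp)' \
  | sed 's/[0-9][0-9]*/N/g' \
  | sort | uniq -c | sort -rn | head
```

This is far cruder than mysqldumpslow (it ignores timing entirely and only normalizes numbers, not string literals), but it is enough for a first look on a machine without the MySQL client tools.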
・Check output results
If you check the files that were output earlier, they will be output in order of query execution time.
$ more 160721_mysqldumpslow.log
```
① Count: 762  Time=14.12s (10761s)  Lock=0.00s (0s)  Rows=45.9 (34974), 5users@13hosts
   use aaa_db;
   SET timestamp=N;
② select * from aaa_history where id = N and mid = N and kid in (N,N,N,N,N,N,N,N,N,N,N,N,N) and status = N
```
①
Count | Number of occurrences in the slowlog
---|---
Time | Average query execution time (cumulative execution time)
Lock | Average lock time (cumulative lock time)
Rows | Average number of rows returned (total rows returned)
SQL at the very top of this output is a bottleneck that is easy to find even from the raw slowlog.
But SQL near the top of the results with a high count, a short average time, and a long cumulative execution time is somewhat heavy and executed many times.
This kind of bottleneck is difficult to find without using mysqldumpslow.
Next, we will explain how to analyze SQL.
How to Tune SQL on a System Running MySQL, Part 2 ~SQL Analysis~