Category Archives: MySQL

Automatic MySQL backup to Amazon S3


#!/bin/sh

# Updates etc at: https://github.com/woxxy/MySQL-backup-to-Amazon-S3
# Under a MIT license

# change these variables to what you need
MYSQLROOT=XXX
MYSQLPASS=XXXX
S3BUCKET=xx-mysql-backup
# DATABASE='--all-databases'
DATABASE=${1-default}
FILENAME=${DATABASE}
HOST=master
DUMPOPTION='--quick --single-transaction'
# the following line prefixes the backups with the defined directory. it must be blank or end with a /
S3PATH=mysql-backup/${DATABASE}/
# when running via cron, the PATHs MIGHT be different. If you have a custom/manual MYSQL install, you should set this manually like MYSQLDUMPPATH=/usr/local/mysql/bin/
MYSQLDUMPPATH=
#tmp path.
TMP_PATH=/tmp/

DATESTAMP=$(date +".%Y-%m-%d.%H:%M:%S")
# use a date-only prefix so it matches any backup taken 15 days ago
DELDATESTAMP=$(date -d "15 days ago" +".%Y-%m-%d")
DAY=$(date +"%d")
DAYOFWEEK=$(date +"%A")

PERIOD=${2-day}
if [ ${PERIOD} = "auto" ]; then
    if [ ${DAY} = "01" ]; then
        PERIOD=month
    elif [ ${DAYOFWEEK} = "Sunday" ]; then
        PERIOD=week
    else
        PERIOD=day
    fi
fi

echo "Selected period: $PERIOD."

echo "Starting backing up the database to a .gz file..."

# dump all databases
${MYSQLDUMPPATH}mysqldump -h${HOST} ${DUMPOPTION} --user=${MYSQLROOT} --password=${MYSQLPASS} ${DATABASE} | gzip > ${TMP_PATH}${FILENAME}.gz

echo "Done backing up the database to a file."
# echo "Starting compression..."

# tar czf ${TMP_PATH}${FILENAME}${DATESTAMP}.tar.gz ${TMP_PATH}${FILENAME}.sql
mv ${TMP_PATH}${FILENAME}.gz ${TMP_PATH}${FILENAME}${DATESTAMP}.gz

# echo "Done compressing the backup file."

# upload all databases
echo "Uploading the new backup..."
s3cmd put -f --check-md5 -s --continue-put ${TMP_PATH}${FILENAME}${DATESTAMP}.gz s3://${S3BUCKET}/${S3PATH}${PERIOD}/
# check s3cmd's exit status right away; an echo in between would overwrite $?
if [ $? -ne 0 ]; then
    echo "Upload failed, re-uploading the backup file..."
    s3cmd put -f --check-md5 -s --continue-put ${TMP_PATH}${FILENAME}${DATESTAMP}.gz s3://${S3BUCKET}/${S3PATH}${PERIOD}/
    echo "Re-upload done."
fi
echo "New backup uploaded."

echo "Moving the backup from past $PERIOD to another folder..."
s3cmd mv --recursive s3://${S3BUCKET}/${S3PATH}${PERIOD}/${FILENAME}${DELDATESTAMP} s3://${S3BUCKET}/${S3PATH}previous_${PERIOD}/
echo "Past backup moved."

# we want at least two backups, two months, two weeks, and two days
echo "Removing old backup (15 ${PERIOD}s ago)..."
s3cmd del --recursive s3://${S3BUCKET}/${S3PATH}previous_${PERIOD}/
echo "Old backup removed."

# remove databases dump
# rm ${TMP_PATH}${FILENAME}.sql
echo "Removing the gz files..."
rm ${TMP_PATH}${FILENAME}${DATESTAMP}.gz
echo "Files removed."
echo "All done."

Changing the MariaDB data directory with SELinux enabled: the SELinux command to run

Job for mariadb.service failed because the control process exited with error code. See "systemctl status mariadb.service" and "journalctl -xe" for details.


MySQL 8.0 fails to start with the same problem (even after running chown on the new directory):

mysqld: File './binlog.index' not found (OS errno 13 - Permission denied)

chcon -R -t mysqld_db_t /mnt/mysql
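Note that chcon changes can be lost when the filesystem is relabeled. A more persistent sketch, assuming the new data directory is /mnt/mysql as above, is to record a file-context rule and then apply it:

semanage fcontext -a -t mysqld_db_t "/mnt/mysql(/.*)?"
restorecon -Rv /mnt/mysql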

Reference: https://www.thegeekstuff.com/2016/05/move-mysql-directory/

Locking in MySQL with the GET_LOCK function

GET_LOCK(str,timeout)

Tries to obtain a lock with a name given by the string str, using a timeout of timeout seconds. A negative timeout value means infinite timeout. The lock is exclusive. While held by one session, other sessions cannot obtain a lock of the same name.

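A quick sketch of how two sessions interact (the lock name 'my_job' is arbitrary):

-- Session 1: acquire the lock, waiting up to 10 seconds; returns 1 on success
SELECT GET_LOCK('my_job', 10);

-- Session 2, while session 1 holds the lock: returns 0 immediately (timeout 0)
SELECT GET_LOCK('my_job', 0);
-- also returns 0 here, meaning the lock is held by another session
SELECT IS_FREE_LOCK('my_job');

-- Session 1: release the lock; returns 1 if this session held it
SELECT RELEASE_LOCK('my_job');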

Reference: https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_get-lock
Reference: https://blog.csdn.net/tangtong1/article/details/51792617

Cleaning up MySQL binary logs

1. Check the current expiration days

show variables like '%logs%';

2. Set the expiration days (the maximum value is 99)

set global expire_logs_days = 99;

MySQL 8.0 adds the binlog_expire_logs_seconds parameter, which, as the name suggests, sets the binary log expiration in seconds.

For expire_logs_days, the value is the number of days after which binary log files are removed automatically; the default of 0 means no automatic removal. Removal may happen at startup and at binary log flush.
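On MySQL 8.0, for example, a seven-day retention could be set like this (a sketch; 604800 seconds = 7 days):

SET GLOBAL binlog_expire_logs_seconds = 604800;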

2.1 If the setting does not take effect

flush logs;

If you find the setting has not taken effect, you may need to run this command first.

3. List the existing binary log files

SHOW BINARY LOGS;

4. Manually purge older logs

PURGE BINARY LOGS TO 'mysql-bin.000007';

This purges all logs before mysql-bin.000007. Alternatively:

PURGE BINARY LOGS BEFORE '2018-01-06 12:12:12';

This purges all logs from before 2018-01-06 12:12:12.

5. Make the setting permanent in the MySQL configuration file

expire_logs_days = 99
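
In my.cnf the line belongs under the [mysqld] section. A sketch, with the MySQL 8.0 equivalent shown as a comment (99 days = 8553600 seconds):

[mysqld]
expire_logs_days = 99
# on MySQL 8.0 use this instead:
# binlog_expire_logs_seconds = 8553600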

A CREATE TABLE gotcha with master-slave replication

Correct:
CREATE TABLE `chapter_lock_reason` (
  `id` INT(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
);

Wrong (schema-qualified table name):
CREATE TABLE xxx.`chapter_lock_reason` (
  `id` INT(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
);

The likely reason: with statement-based replication, filter rules such as replicate-do-db match against the default (USE) database, so a statement that qualifies the table with an explicit schema name can be skipped or misapplied on the slave.

MySQL ARCHIVE engine: the slave suddenly reports Can't write; duplicate key in table 'nopay_record' on query

This is a bug in the ARCHIVE engine.

Error 'Can't write; duplicate key in table 'nopay_record'' on query. Default database: 'doufu'. Query: 'insert into `nopay_record` (`user_id`, `novel_id`, `chapter_id`, `channel`, `created_at`) values (3927416, 365027, 2499984, NULL, '2017-12-29 23:38:01')'

Table structure:

CREATE TABLE `nopay_record` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`novel_id` int(11) NOT NULL,
`chapter_id` int(11) NOT NULL,
`channel` tinyint(4) DEFAULT NULL,
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`id`)
) ENGINE=ARCHIVE AUTO_INCREMENT=440357 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;

The fix: convert the table to InnoDB.
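The conversion is a single statement (it rebuilds the table, so expect a full copy on a large table):

ALTER TABLE `nopay_record` ENGINE=InnoDB;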

Yet another MySQL gotcha

SELECT * FROM novel where title='一间客栈(连载)' ;
SELECT * FROM novel where title='一间客栈(连载) ' ;

SELECT * FROM novel where title=binary('一间客栈(连载)');
Without binary, the comparison follows the column's collation: case-insensitive (_ci) collations ignore letter case, and PAD SPACE collations ignore trailing spaces, which is why the two queries above return the same rows; casting to binary forces an exact byte-for-byte comparison.
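A quick way to see the padding behavior (on MySQL 5.7's default collations, which are PAD SPACE):

SELECT 'abc' = 'abc ';          -- returns 1: the trailing space is ignored
SELECT BINARY 'abc' = 'abc ';   -- returns 0: exact byte comparison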