NDPF MySQL configurations


MySQL configuration for bedstee and the NDPF data services

The /etc/my.cnf (actually, /var/lib/mysql/my.cnf) is now:

[mysqld]

innodb_file_per_table
innodb_open_files=1024

log-slow-queries = /var/log/mysql-slow.log
long_query_time = 8

key_buffer_size = 384M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 4M
read_rnd_buffer_size = 32M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 64M
tmp_table_size=128M
# Try number of CPUs * 2 for thread_concurrency
thread_concurrency = 8

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /var/lib/mysql/
#innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
#innodb_log_group_home_dir = /var/lib/mysql/
#innodb_log_arch_dir = /var/lib/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 3072M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 100M
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
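
After a restart one can check that the server actually picked up these settings by querying the running instance; a quick sanity check, assuming the root password is at hand:

mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb%'"
mysql -u root -p -e "SHOW VARIABLES LIKE '%cache_size'"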


Monitoring the MySQL state

On bedstee, the MySQL tables have now been migrated to an innodb_file_per_table configuration, so that the total disk space usage of each database can easily be monitored from the outside. This monitoring is integrated into Ganglia with the following script, which is invoked every minute from cron:

#! /bin/sh
#
# @(#)$Id$
#
MYSQL=/var/lib/mysql

# report the disk usage (in kByte) of each database directory to Ganglia:
# /var/lib/mysql/<db> becomes metric mysql_<db>, and the top-level
# directory itself becomes mysql_Total
find $MYSQL/ -type d -exec du -sk {} \; | while read size name
do
  base=`echo $name | sed -e 's/.var.lib.//;s/\//_/;s/_$/_Total/'`
  gmetric -n $base -v $size -u kByte -d 120 -t uint32
done
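
The script runs every minute from cron; a matching /etc/cron.d entry could look like this (the install path and file name are illustrative):

# /etc/cron.d/mysql-ganglia -- report MySQL disk usage to Ganglia every minute
* * * * * root /usr/local/sbin/mysql-gmetric.sh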

Legacy

Originally, all of the databases were in a single InnoDB file. This file is still there and cannot be compacted, so the 9 GByte file "ibdata1" in /var/lib/mysql/ remains, happily wasting space.

Ad-hoc MySQL backup script

#! /bin/sh
#
DIR="${DIR:-/project/backup/MySQL}"
DATEMARK=`date '+%Y-%V-%A'|tr A-Z a-z`
UTCTIME=`date -u '+%Y%m%d%H%M%SZ'`
DATEINFO=`date '+%Y-%m-%d %H:%M:%S %Z %z (%A, week %V)'`

. /etc/mysql   # sets PW, the MySQL root password

MYSQL="mysql -u root --password="$PW""
MYSQLDUMP="mysqldump -u root --password="$PW" --single-transaction -q"

databases=`echo "show databases" | $MYSQL | egrep -v '^(Database|test)'`

for db in $databases
do
  ARCH="$DIR/db.$db.$DATEMARK.sql"
  (
  echo "--- Backup of database $db" ;
  echo "--- Dumped on $UTCTIME" ;
  echo "--- Compression: directly-compressed" ;
  echo "--- Dateinfo: $DATEINFO" ;
  $MYSQLDUMP --databases $db ) | gzip -c > ${ARCH}.gz
done

# remove dump files older than 30 days
find "$DIR" -type f -mtime +30 -exec rm {} \;

MySQL restore script

#! /bin/bash

MYSQL_USER=root
MYSQL_PASSWORD=

# prompt for the password without echoing it to the terminal
echo -ne "password for user $MYSQL_USER: "
stty -echo
read MYSQL_PASSWORD
stty echo
echo

function restore() {
   echo "Restoring $1 (assume compressed)";
   (
       # disable consistency checks while the dump loads, for speed;
       # they are re-enabled below and the load ends with a COMMIT
       echo "SET AUTOCOMMIT=0;"
       echo "SET UNIQUE_CHECKS=0;"
       echo "SET FOREIGN_KEY_CHECKS=0;"
       gunzip -dc "$1"
       echo "SET FOREIGN_KEY_CHECKS=1;"
       echo "SET UNIQUE_CHECKS=1;"
       echo "SET AUTOCOMMIT=1;"
       echo "COMMIT;"
   ) | mysql -u"$MYSQL_USER" -p"$MYSQL_PASSWORD"
}

restore "$1"

Monitoring restore progress

Pick up the PID of gunzip and (on EL6 this now works) look at its offset in the input file; dividing that offset by the total size of the compressed dump gives the fraction restored so far. In the one-liner below, 18641 is the PID of gunzip and 2047780797 the size in bytes of the compressed dump file.

while :; do lsof -p 18641 -o -i|awk '/accounting/ { print strtonum($7)/2047780797 }' ; sleep 10 ; done
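
The same loop in a slightly more general form, with the PID and the total size filled in automatically instead of hard-coded (the dump file name is illustrative; strtonum needs gawk, and the 0t prefix that lsof puts on decimal offsets is stripped first):

DUMP=/project/backup/MySQL/db.accounting.2013-20-monday.sql.gz
PID=`pgrep -n gunzip`      # PID of the running decompressor
SIZE=`stat -c %s "$DUMP"`  # total size of the compressed dump in bytes
while :
do
  lsof -p "$PID" -o | \
    awk -v total="$SIZE" '$NF ~ /sql\.gz$/ { sub(/^0t/, "", $7); print strtonum($7) / total }'
  sleep 10
done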