The Oracle ZFS Storage Appliance supports NFS, iSCSI, and Fibre Channel access to a MySQL database. By consolidating the storage of MySQL databases onto the Oracle ZFS Storage Appliance, the following goals can be achieved:

1. Expand and contract storage space easily in a pooled environment.
2. Focus high-end caching on the Oracle ZFS Storage Appliance to simplify provisioning.
3. Eliminate network overhead by leveraging InfiniBand and Fibre Channel connectivity.


This blog post focuses specifically on NFS; a follow-up post will cover iSCSI and Fibre Channel.

Configuring the Oracle ZFS Storage Appliance

Each database should be contained in its own project.

1. From the ZFS controller's CLI, create a project called 'mysql'.

        zfs:> shares project mysql

2. Set logbias to latency to leverage write flash capabilities:

        zfs:shares mysql (uncommitted)> set logbias=latency
                               logbias = latency (uncommitted)

3. Set the default user to mysql and default group to mysql:

        zfs:shares mysql (uncommitted)> set default_user=mysql
                          default_user = mysql (uncommitted)
        zfs:shares mysql (uncommitted)> set default_group=mysql
                            default_group = mysql (uncommitted)

Note: If a name service such as LDAP or NIS is not being used, change these to the actual UID and GID found in /etc/passwd and /etc/group on the host.
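If you need to look up those numeric IDs, a quick check from the host works; a sketch (assumes the mysql user and group are resolvable locally or through the configured name service):

```shell
# Print the numeric UID and GID of the mysql user.
# getent consults /etc/passwd and /etc/group, as well as
# LDAP/NIS when a name service is configured.
getent passwd mysql | cut -d: -f3
getent group mysql | cut -d: -f3
```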

4. Disable 'Update access time on read':

        zfs:shares mysql> set atime=false
                                 atime = false (uncommitted)

5. Commit the changes:

        zfs:shares mysql> commit

6. Create a filesystem called innodb-data to hold data files:

        zfs:shares mysql> filesystem innodb-data

7. Set the record size to 16K to match InnoDB's default page size:

        zfs:shares mysql/innodb-data (uncommitted)> set recordsize=16K
                             recordsize = 16K (uncommitted)
        zfs:shares mysql/innodb-data (uncommitted)> commit

8. Create a filesystem called 'innodb-log' to hold redo logs:

        zfs:shares mysql> filesystem innodb-log

9. Set the database record size to 128K:

        zfs:shares mysql/innodb-log (uncommitted)> set recordsize=128K
                            recordsize = 128K (uncommitted)
        zfs:shares mysql/innodb-log (uncommitted)> commit

Configuring the server

This example assumes the MySQL database runs on a Linux server; the commands are roughly the same on Solaris:

1. A directory structure should be created to contain the MySQL database:

        # mkdir -p /mysql/nas/innodb-data
        # mkdir -p /mysql/nas/innodb-log
        # chown -R mysql:mysql /mysql/nas

2. Each filesystem provisioned on the Oracle ZFS Storage Appliance should be mounted with the following options:

        rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,
        actimeo=0,nolock

3. These options should be supplied in /etc/fstab so the filesystems mount automatically at boot, or the mounts can be run manually from a shell like so:

        # mount -t nfs -o rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,
          timeo=600,tcp,actimeo=0,nolock zfs:/export/innodb-data /mysql/nas/innodb-data
        # mount -t nfs -o rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,
          timeo=600,tcp,actimeo=0,nolock zfs:/export/innodb-log /mysql/nas/innodb-log
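For reference, the corresponding /etc/fstab entries might look like the following sketch (assuming, as in the mount commands above, that the appliance resolves as `zfs`):

```
zfs:/export/innodb-data  /mysql/nas/innodb-data  nfs  rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock  0 0
zfs:/export/innodb-log   /mysql/nas/innodb-log   nfs  rw,bg,hard,rsize=1048576,wsize=1048576,vers=3,nointr,timeo=600,tcp,actimeo=0,nolock  0 0
```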

Configuring the MySQL database

The option file my.cnf should be modified to offload the database onto the Oracle ZFS Storage Appliance and to apply additional tunings for optimal performance. Stop MySQL before changing this file, and start it again once the changes are complete:

        # service mysql stop
        # service mysql start

Important my.cnf changes

1. innodb_doublewrite = 0

The doublewrite buffer guards against partial page writes. Because ZFS's copy-on-write transactional semantics guarantee that partial writes never reach disk, the buffer can be safely disabled.

2. innodb_flush_method = O_DIRECT

Causes InnoDB to bypass the operating system's page cache for its data files (O_DIRECT on Linux, directio() on Solaris). This avoids double buffering and OS-level read-ahead.

3. innodb_data_home_dir = /path/to/innodb-data

The data filesystem for InnoDB should be located on its own share or LUN on the Oracle ZFS Storage Appliance.

4. innodb_log_group_home_dir = /path/to/innodb-log

The log filesystem for InnoDB should be located on its own share or LUN on the Oracle ZFS Storage Appliance.

5. innodb_data_file_path = ibdatafile:1G:autoextend

This configures a single large, auto-extending tablespace for InnoDB. The ZFS controller is then responsible for managing the growth of new data, which eliminates the complexity of administering multiple tablespace files.

An example my.cnf file with these settings can serve as a starting point.
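Taken together, the changes above amount to a small [mysqld] fragment; a sketch, assuming the mount points created in the server-configuration section:

```
[mysqld]
innodb_doublewrite = 0
innodb_flush_method = O_DIRECT
innodb_data_home_dir = /mysql/nas/innodb-data
innodb_log_group_home_dir = /mysql/nas/innodb-log
innodb_data_file_path = ibdatafile:1G:autoextend
```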

Testing with Sysbench

Sysbench is a handy benchmark tool for generating a database workload. To fill a test database, run the following command:

        # sysbench \
          --test=oltp \
          --oltp-table-size=1000000 \
          --mysql-db=test \
          --mysql-user=root \
          --mysql-password= \
          prepare
Once filled, create an OLTP workload with the following command and parameters:

        # sysbench \
          --test=oltp \
          --oltp-table-size=1000000 \
          --oltp-test-mode=complex \
          --oltp-read-only=off \
          --num-threads=128 \
          --max-time=3600 \
          --max-requests=0 \
          --mysql-db=test \
          --mysql-user=root \
          --mysql-password= \
          --mysql-table-engine=innodb \
          run
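To compare throughput at different concurrency levels, the run invocation can be scripted. A sketch that only prints the commands it would execute (remove the echo to actually run them; the flags match the sysbench 0.4-style syntax used above, and the thread counts are arbitrary examples):

```shell
#!/bin/sh
# Print one sysbench OLTP "run" command per thread count.
# Drop the leading "echo" to execute the benchmarks for real.
for threads in 1 8 32 128; do
    echo sysbench --test=oltp --oltp-table-size=1000000 \
        --oltp-test-mode=complex --oltp-read-only=off \
        --num-threads="$threads" --max-time=600 --max-requests=0 \
        --mysql-db=test --mysql-user=root --mysql-password= \
        --mysql-table-engine=innodb run
done
```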

Analytics

The Analytics feature of the Oracle ZFS Storage Appliance offers an unprecedented level of observability into your database workload. It can help identify performance bottlenecks based on the utilization of your network, hardware, and storage protocols. Its drill-down functionality can also narrow the focus to a single MySQL instance's workload: operation type (read/write), I/O pattern (sequential/random), response latency, and I/O size for both the data and log files. At any point in time, a DBA can track a database instance at an incredibly granular level.

Once you have a single database installed, you can try creating more instances to analyze your I/O patterns. Run separate sysbench processes for each database and then use Analytics to monitor the differences between workloads.



posted by paulie
17:37 PST - February 14, 2015