Sunday, January 12, 2014

OpenStack 101: How to Set Up OpenStack Swift (OpenStack Object Storage Service)

In this tutorial we'll set up OpenStack Swift, the object storage service. Swift can be used to store data with high redundancy. The nodes in a Swift cluster can be broadly classified into two categories:
  • Proxy Node: This is the public-facing node. It handles all the HTTP requests for various Swift operations such as uploading objects and managing or modifying metadata. We can set up multiple proxy nodes and then load balance them using a standard load balancer.
  • Storage Node: This node actually stores the data. It is recommended to keep this node private, accessible only via the proxy node and never directly. Besides the object service, this node also houses the container service and the account service, which manage the mappings of containers and accounts respectively. 
For a small-scale setup, both the proxy and storage nodes can reside on the same machine, but avoid doing so for a bigger setup.

Step 1: Let us install all the required packages for Swift:
# yum install openstack-swift openstack-swift-proxy openstack-swift-account openstack-swift-container openstack-swift-object memcached
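
A quick sanity check that the packages landed (nothing is assumed here beyond the package names from the yum line above):
# rpm -qa | grep openstack-swift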

Step 2: Attach a disk to be used for storage, or carve out some space from an existing disk.
Using additional disks:
Most likely this is done when there is a large amount of data to be stored. XFS is the recommended filesystem and is known to work well with Swift. If the additional disk is attached as /dev/sdb, then the following will do the trick (in fdisk, create a single partition /dev/sdb1 spanning the disk):
# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1
# echo "/dev/sdb1 /srv/node/partition1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
# mkdir -p /srv/node/partition1
# mount /srv/node/partition1
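
To verify that the new filesystem is mounted with the intended options:
# mount | grep partition1
# xfs_info /srv/node/partition1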

Chopping off disk space from the existing disk:
We can also carve out space from an existing disk by backing a loopback device with a file. This is usually done for smaller installations or at the "proof-of-concept" stage. We can use XFS like before, or ext4 works as well.
# truncate --size=2G /tmp/swiftstorage
# DEVICE=$(losetup --show -f /tmp/swiftstorage)
# mkfs.ext4 $DEVICE
# mkdir -p /srv/node/partition1
# mount $DEVICE /srv/node/partition1 -t ext4 -o noatime,nodiratime,nobarrier,user_xattr
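
Note that the losetup mapping above does not survive a reboot. A minimal way to make it persistent (assuming the backing file /tmp/swiftstorage from above; /tmp may be cleared on reboot on Fedora, so a path such as /var/swiftstorage would be safer) is to use the loop mount option in /etc/fstab:
# echo "/tmp/swiftstorage /srv/node/partition1 ext4 loop,noatime,nodiratime,nobarrier,user_xattr 0 0" >> /etc/fstab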

Step 3 (optional): Set up rsync to replicate the objects. If replication or redundancy is not required, this step can be skipped. Create /etc/rsyncd.conf with the following content:
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = <storage_local_net_ip>

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

Note that there can be multiple account, container and object sections (each with a unique module name) if we wish to use multiple disks or partitions.
Enable rsync and start the service. The /etc/default/rsync file shown here is the Debian/Ubuntu convention; on Fedora/CentOS/RHEL the rsync daemon is instead run from xinetd, so enable it there:
# vim /etc/default/rsync
RSYNC_ENABLE = true
# service rsync start
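
Once the daemon is running, listing the modules is a quick way to confirm the configuration is being read (use the address you configured above):
# rsync rsync://<storage_local_net_ip>/
The account, container and object modules should appear in the output.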

Step 4: Set up the proxy node. The default config shipped with Fedora 20 is good with minor changes. Open /etc/swift/proxy-server.conf and edit the [filter:authtoken] section as below:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = admin
admin_user = admin
admin_password = ADMIN_PASS
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-swift

Keep in mind that the admin_password, admin_tenant_name and admin_user values should be the same as those used while setting up Keystone. If you have not installed and set up Keystone already, then check out this tutorial before you proceed.
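
Before proceeding, it is worth confirming that these credentials actually work against Keystone (same user, tenant and password as in the config above):
# keystone --os-username admin --os-tenant-name admin --os-password ADMIN_PASS --os-auth-url http://127.0.0.1:5000/v2.0 token-get
A valid token in the output means the authtoken filter will be able to authenticate as well.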

Step 5: Now we will create the rings. Rings are mappings between the storage node components and the actual physical drives. Note that the create commands below have 3 numeric parameters at the end. The first parameter is the partition power: the ring will contain 2^power Swift partitions (not the same as disk partitions). A higher number of partitions ensures even distribution, but also puts a higher strain on the server, so we have to find a good trade-off. The rule of thumb is to create about 100 Swift partitions per drive; for our single drive the first parameter is therefore 7, since 2^7 = 128 is the closest power of two to 100. The second parameter defines the number of copies of the data to keep for the sake of replication. For a small instance with no rsync, set it to one, but three is the recommended value. The last number is the minimum time in hours before a given partition can be moved again in succession. Set it to a low number for testing, but 24 is recommended for production instances.
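As a worked example of scaling this up: a hypothetical cluster with 5 drives at ~100 partitions each needs about 500 partitions, and the closest power of two is 2^9 = 512, so the partition power there would be 9.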
# cd /etc/swift
# swift-ring-builder account.builder create 7 1 1
# swift-ring-builder container.builder create 7 1 1
# swift-ring-builder object.builder create 7 1 1

Add the device created above to the ring:
# swift-ring-builder account.builder add z1-127.0.0.1:6002/partition1 100
# swift-ring-builder container.builder add z1-127.0.0.1:6001/partition1 100
# swift-ring-builder object.builder add z1-127.0.0.1:6000/partition1 100
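
The format of the device string is z<zone>-<ip>:<port>/<device>, followed by a weight. If a second storage node were added later (the z2 zone, 10.0.0.2 address and partition2 device below are hypothetical), it would look like:
# swift-ring-builder object.builder add z2-10.0.0.2:6000/partition2 100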

Rebalance the rings. This will ensure even distribution with minimal partition moves.
# swift-ring-builder account.builder rebalance
# swift-ring-builder container.builder rebalance
# swift-ring-builder object.builder rebalance
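
Running a builder file without a subcommand prints its current state, which is a handy way to verify that the device was added and the rebalance took effect:
# swift-ring-builder object.builder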

Set the owner and the group for the Swift config directory and the storage partition:
# chown -R swift:swift /etc/swift /srv/node/partition1

Step 6: Create the service and endpoint using Keystone.
# keystone service-create --name=swift --type=object-store --description="Object Store Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |       Object Store Service       |
|      id     | b230a3ecd12e4a52954cb24502be9d07 |
|     name    |              swift               |
|     type    |           object-store           |
+-------------+----------------------------------+

Copy the id from the output of the command above and use it to create the endpoint.
# keystone endpoint-create --region RegionOne --service_id b230a3ecd12e4a52954cb24502be9d07 --publicurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --adminurl http://127.0.0.1:8080/v1 --internalurl http://127.0.0.1:8080/v1
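
A quick way to verify that the service and endpoint were registered correctly:
# keystone service-list
# keystone endpoint-list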

Step 7: Start the services and test it:
# service memcached start
# for srv in account container object proxy; do service openstack-swift-$srv start; done
# swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K ADMIN_PASS stat
   Account: AUTH_939ba777082a4f988d5b70dc886459e3
Containers: 0
   Objects: 0
     Bytes: 0
Content-Type: text/plain; charset=utf-8
X-Timestamp: 1389435011.63658
X-Put-Timestamp: 1389435011.63658
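
If everything looks good, you may also want the services to come up at boot. On Fedora 20 this is done through systemd (assuming the unit names match the service names used above):
# systemctl enable memcached
# for srv in account container object proxy; do systemctl enable openstack-swift-$srv; done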

Upload a file abc.txt to a Swift container myfiles like this:
# swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K ADMIN_PASS upload myfiles abc.txt
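
To confirm the upload worked, list the container's contents and download the file back:
# swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K ADMIN_PASS list myfiles
# swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K ADMIN_PASS download myfiles abc.txt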


OpenStack Swift is now ready to use.
