Sunday, December 7, 2014

Docker Quick Start Guide

Here is a short and sweet guide to Docker for absolute beginners. I have added a few FAQs as well.

Q. What is a container?
A. A container is an isolated Linux system running on a Linux host. Containers are lightweight and consume fewer resources than a virtual machine. They rely on the kernel's cgroups and namespace features to create isolation for CPU, memory, etc.

Q. What is Docker?
A. Docker is a container-based platform to build and ship applications. Docker makes containers easy to use by providing a lot of automation and tooling for container management.

Q. Why would I use Docker?
A. If you have any of the following concerns then you should use Docker:
  • My production environment needs to be homogeneous
  • I need to ship my entire environment to a colleague
  • My hypervisor ate all the CPU (or RAM)
  • "It works on my machine, but not in production"

How to play with Docker
Step 1: Let us install and start Docker first:
# yum install docker-io
# systemctl start docker
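If Docker should also come up automatically after a reboot, enabling the service and checking that the daemon responds is a quick sanity test (this assumes a systemd-based distribution like Fedora, as used above):
# systemctl enable docker
# docker version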

Step 2: Docker has something called registries. A registry stores container images from which we can download and run containers. These registries can be public or private. Docker.io maintains a public registry, which is the default when we want to download an image. The command below will download an image named fedora-busybox, contributed by user adimania:
# docker pull adimania/fedora-busybox
Pulling repository adimania/fedora-busybox
605bfcc0af5d: Download complete

Step 3: Let us check out the image that we just downloaded.
# docker images
REPOSITORY                   TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
adimania/fedora-busybox      latest              605bfcc0af5d        7 minutes ago       1.309 MB

Step 4: Once we have the image, we would want to run a container off it. The command below will take care of that and drop us into the container's shell:
# docker run -i -t adimania/fedora-busybox /sbin/sh

The run command takes certain parameters and runs the image provided as an argument. The arguments "-i" and "-t" tell the run command to keep STDIN open and allocate a pseudo-TTY. The last argument is the command that runs inside the container in the foreground. One thing to note here is that Docker always needs a process to run in the foreground; as soon as this process exits, the container shuts down. For certain containers, this foreground process is implicit and we may not need to tell Docker what to run. However, for certain other containers, like the one we are using, we specify "/sbin/sh" as the foreground process. The docker run command supports several other arguments and flags; it is advisable to run docker run --help to check out all the options.
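For instance, a minimal sketch of keeping a container alive in the background: the "-d" flag detaches the container, and the looping shell command is just a stand-in foreground process so that the container does not exit immediately.
# docker run -d adimania/fedora-busybox /sbin/sh -c "while true; do sleep 60; done"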

Step 5: We can see more information about the containers that are currently running by using the docker ps command:
# docker ps
CONTAINER ID    IMAGE                      COMMAND          CREATED             STATUS              PORTS            NAMES
3af04d663b3d      adimania/fedora-busybox:latest   "/sbin/sh"         25 seconds ago      Up 24 seconds          furious_leakey

The docker ps command shows all the running containers along with other useful info like uptime, the foreground command, etc. This command takes an optional argument "-a" which shows all containers, including stopped ones.

Step 6: Let us stop and start the container again. We'll need the container ID obtained from the docker ps command:
# docker stop 3af04d663b3d
3af04d663b3d

# docker start 3af04d663b3d
3af04d663b3d
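Once a container is not needed anymore, it can be removed as well; note that docker rm works only on stopped containers:
# docker stop 3af04d663b3d
# docker rm 3af04d663b3d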

The commands above are part of a workshop I have conducted at Flock and CentOS Dojo. Check out the slides here.

Thursday, November 27, 2014

Encrypt Everything: Encrypt data using GPG and save passwords

Data security is an important concern these days and encryption is a very powerful tool to secure the data. In my previous post I talked about how to encrypt a disk. Now we are going to talk about how to encrypt files using GNU Privacy Guard (GPG).

GPG uses public key cryptography. This means that instead of having one key to encrypt and decrypt, there are two keys. One of these keys can be publicly shared and hence is known as the public key. The other key is to be kept secret and is known as the private key. Anything encrypted with the public key can only be decrypted with the private key.

How to encrypt files?
Assume a scenario where user "test" wants to send an encrypted file to me. The user just has to find my public key, encrypt the data and send it to me, and I will be able to decrypt the file using my private key and obtain the data. Note that user "test" doesn't need to have GPG keys generated in order to encrypt and send data to me.

Step 1: Let us create a text file which we'll encrypt:
test$ echo "This is a secret message." > secret.txt

Step2: User "test" needs to find my keys. There are many public servers where one can share their public key in case someone else wants to encrypt the data. One such server is run by MIT at http://pgp.mit.edu.
test$ gpg --keyserver pgp.mit.edu --search-keys aditya@adityapatawari.com

Step 3: Once the user obtains my public key, encrypting the data is really easy.
test$ gpg --output secret.txt.gpg --encrypt --recipient aditya@adityapatawari.com secret.txt

The command above will create an encrypted file named secret.txt.gpg which can be shared via email or any other means. Once I get the encrypted file, I can decrypt it using my private key:
aditya$ gpg --output secret.txt --decrypt secret.txt.gpg

How to create GPG keys to receive data?
Now assume a scenario where "test" user wants to create a set of GPG keys in order to share the public key and receive encrypted data.

Step 1: Generate a key pair. The command will present you with some options (stick to the defaults if you are not sure) and ask for some data like your name, email address, etc.
test$ gpg --gen-key

Step 2: Check the keys.

test$ gpg --list-secret-keys
/home/test/.gnupg/secring.gpg
-----------------------------
sec   2048R/E46749BB 2014-11-23
uid                  Aditya TestKeys (This is not a valid key) <adimania+test@gmail.com>
ssb   2048R/C5E57FF2 2014-11-23


Step 3: Upload the key to a public server using the key ID from the output above.
test$ gpg --keyserver pgp.mit.edu --send-key E46749BB

Now others can search for the key, use it to encrypt data and send it to the "test" user.
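If the key ID is already known, the key can also be fetched directly from the key server instead of searching for it:
$ gpg --keyserver pgp.mit.edu --recv-keys E46749BB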

To use GPG for saving passwords, have a look at the pass utility. It uses GPG to encrypt passwords and other data and stores them in a hierarchical format.
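For the curious, a minimal sketch of how pass is typically used, assuming a GPG key ID like the one generated above (the entry name is arbitrary):
test$ pass init E46749BB       # initialize the password store with the GPG key id
test$ pass insert email/gmail  # prompts for the password and stores it encrypted
test$ pass email/gmail         # decrypts and prints the password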

Saturday, November 22, 2014

Encrypt Everything: How to encrypt the disk to protect the data

Recently, at BrowserStack.com, some of our services got compromised. We use Amazon Web Services extensively. The person (or group) who attacked us mounted one of our backups and managed to steal some of the data. We could have prevented this simply by using encrypted disks, which would have made this attack useless. Learning from our mistakes, we have recently started encrypting everything, and I am going to show you how to do that. One point worth noting here is that Amazon AWS does provide encryption support for EBS volumes, but that is transparent and would not help in case the account itself gets compromised. I am going to use dm-crypt, which is supported by the Linux kernel, so the steps are quite generic and will work on any kind of disk in any kind of environment, including Amazon AWS, Google Compute Engine, or physical disks in your datacenter.

Our goal is to encrypt /home. To achieve this, we'll attach a disk, encrypt it, move the entire /home data to this disk and create a symbolic link to /home.

Step 1: We are going to use Linux Unified Key Setup (LUKS). For that we need to install the cryptsetup package.
# yum install cryptsetup

Step 2: While using AWS, never attach the volume to be encrypted while launching the instance. If we do so, the instance will fail to boot up next time because it will ask for the decryption password during boot, which is not possible to supply in AWS. If it is absolutely mandatory to do this, then I suggest trying to remove the entries from fstab and crypttab, but it is much easier to just attach the disk after the instance has launched. Assuming that the attached disk is available at /dev/xvdf, we'll set up the encryption now.
# cryptsetup -y -v luksFormat /dev/xvdf
WARNING!
========
This will overwrite data on /dev/xvdf irrevocably.

Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase:
Verify passphrase:

Command successful.


We can verify the encryption parameters as well. The default is AES with a 256-bit key.
# cryptsetup luksDump /dev/xvdf

Step 3: We'll open the device and map it to /dev/mapper/home so that we can use it.
# cryptsetup luksOpen /dev/xvdf home
Enter passphrase for /dev/xvdf:


Step 4: This step is optional. To further protect our data, we can zero out the entire disk before even creating the filesystem.
# dd if=/dev/zero of=/dev/mapper/home
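Writing zeros over a large volume can take a long time. A minor variation (assuming GNU dd) is to use a larger block size to speed things up; dd will eventually exit with a "No space left on device" error once the whole device has been written, which is expected here:
# dd if=/dev/zero of=/dev/mapper/home bs=1M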

Step 5: Now we'll create a filesystem:
# mkfs.ext4 /dev/mapper/home

Step 6: Let us mount the encrypted device and copy the data from /home:
# mkdir /myhome
# mount /dev/mapper/home /myhome
# cp -a /home/* /myhome/
# rm -rf /home
# ln -s /myhome /home
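To confirm that the mapping is active and the new filesystem is mounted where we expect, a quick check:
# cryptsetup status home
# df -h /myhome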

Great! Our /home directory is encrypted. But wait a minute... this approach has a shortcoming. We have deliberately designed it so that the disk won't auto-mount during boot, because there is no way to supply a password during boot in a cloud environment. Since the disk won't mount, we won't be able to ssh into the machine, because the authorized_keys file is kept inside the home directory of the user. To address this problem, either change "AuthorizedKeysFile" in sshd_config, or create a user with a home directory in /var/lib or /opt and grant it sudo for the cryptsetup and mount commands. After a reboot, with the first approach we can ssh without any problem; with the second approach we ssh as the other user, mount the encrypted drive and then use the machine normally.
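For the first approach, a minimal sketch of the sshd_config change (the directory is only an example; the keys must be copied to a location outside the encrypted /home, and sshd needs a restart afterwards):
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
# systemctl restart sshd

With the second approach, after a reboot we log in as the separate user and unlock the disk manually: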

$ ssh mountuser@<ip>
$ sudo /sbin/cryptsetup luksOpen /dev/xvdf home
$ sudo /bin/mount /dev/mapper/home /myhome/


A couple of points to remember:
  • Do not forget the LUKS password. It cannot be retrieved if lost.
  • Try it a couple of times on staging machines before doing it on the machines that matter.


Wednesday, October 15, 2014

How to check for SSL POODLE / SSLv3 bug? How to fix Nginx?

Google has just disclosed the SSL POODLE vulnerability, which is a design flaw in SSLv3. Since it is a design flaw in the protocol itself and not an implementation bug, there will be no patches. The only way to mitigate it is to disable SSLv3 in your web server or application using SSL.

How to test for SSL POODLE vulnerability?
$ openssl s_client -connect google.com:443 -ssl3
If there is a handshake failure, then the server does not support SSLv3 and is safe from this vulnerability. Otherwise, SSLv3 support needs to be disabled.

How to disable the SSLv3 support on Nginx?
In the nginx configuration, just after the "ssl on;" line, add the following to allow only the TLS protocols:
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
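For context, a minimal sketch of where the directive sits (the server name and certificate paths are placeholders), followed by a config test and reload:
server {
    listen 443;
    server_name example.com;
    ssl on;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
}
# nginx -t
# service nginx reload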

Hacker News: Discuss and upvote on Hacker News.

Friday, September 5, 2014

Types of NAT and How to determine the NAT Type

I am going to do a couple of posts on NAT (Network Address Translation) to discuss their classification and how to create a NAT on a Linux machine. This post will cover NAT types.

Generally NAT is used to allow private IPs to talk to the Internet. There are certain security aspects to it as well since outsiders cannot directly access the machines inside the NAT (well, not easily at least).


In general there are 4 kinds of NAT. Let us understand them one by one.
  • Full cone NAT: This is also known as one-to-one NAT. It is basically simple port forwarding where there is a static binding from the client's ip:port to the NAT's ip:port, and anyone from the Internet can write to the NAT's ip:port and it will be forwarded to the client. This kind of NAT is used very infrequently.
  • (Address) Restricted cone NAT: In this scenario, the client can only receive packets from a host to which it has already sent packets. For example, if the client from the diagram above sends a packet to a server with address 8.8.8.8, then the NAT will accept the reply from any port of the server as long as the source IP (8.8.8.8) remains the same.
  • Port Restricted cone NAT: In this scenario, the client can only receive packets from a host to which it has already sent packets, and only if they come from the same server port. For example, if the client from the diagram above sends a packet to a server with address 8.8.8.8 on port 5555, then the NAT will only accept replies originating from port 5555 of that server. This NAT is more restrictive than Address Restricted cone NAT.
  • Symmetric NAT: In general, all the above NAT types preserve the port. For example, if the client is sending a packet from 192.168.0.2:54321 to 8.8.8.8:80, then the NAT will usually map 192.168.0.2:54321 to 1.2.3.4:54321, preserving the port number. But in symmetric NAT, a random port is chosen for every new connection. This makes port prediction very difficult, and techniques like UDP hole punching fail in this scenario.
How do you tell what kind of NAT you are behind? I have written a set of scripts to determine that.


Run server.py on a publicly accessible server and client.py on the client inside the NAT. Make sure that UDP traffic is allowed to the public server's port 5005 (or you can change the port in the code).
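Before running the scripts, it can help to confirm that UDP traffic actually reaches the server. A quick check with nc, assuming the server's public IP and the default port 5005:
server$ nc -u -l 5005
client$ echo "hello" | nc -u <server_public_ip> 5005
If "hello" shows up on the server, UDP is getting through.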

If you see any bug in the scripts then please let me know in the comments or on GitHub.

Friday, August 22, 2014

Introduction To Ansible

I recently gave a talk about Ansible at Flock, Prague. Here is a YouTube video of the same.

Paul W. Frields has written a summary of the talk on the Fedora Magazine.

Wednesday, April 16, 2014

A Simple Netcat How-To for Beginners

There are tonnes of tutorials on Netcat already. This one is to remind me and my colleagues about the awesomeness of nc, which we forget on a regular basis.
Common situations where nc can be used:
  • Check connectivity between two nodes. I had to learn the hard way that ping (read: all ICMP) based checks are not always the best way to judge connectivity. Often ISPs set ICMP to a lower priority and drop it.
  • Single file transfer (see the example after the client list below).
  • Testing of network applications. I have written several clients and loggers for logstash and graphite which couldn't have been easier to test without nc.
  • Firing commands to remote servers where running a conventional tcp/http server is not possible (like VMware ESXi).
Basic Netcat servers:
  • nc -l <port>
    Netcat starts listening for TCP sockets at the specified port. A client can connect and write arbitrary strings to the socket which will be reflected here.
  • nc -u -l <port>
    Netcat starts listening for UDP sockets at the specified port. A client can write arbitrary strings to the socket which will be reflected here.
  • nc -l <port> -e /bin/bash
    Netcat starts listening for TCP sockets at the specified port. A client can connect and write arbitrary commands which will be passed to /bin/bash and executed. Use with extreme caution on remote servers. The security here is nil.
  • nc -l -k <port> -e /bin/bash
    The problem with the above command is that nc terminates as soon as the client disconnects. The -k option forces nc to stay alive and listen for subsequent connections as well.
Basic Netcat Clients:
  • nc <address> <port>
    Connect as client to the server running on <address>:<port> via TCP.
  • nc -u <address> <port>
    Connect as client to the server running on <address>:<port> via UDP.
  • nc -w <seconds> <address> <port>
    Connect as client to the server running on <address>:<port> via TCP and timeout after <seconds> of being idle. I used it a lot to send data to graphite using shell scripts.
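As a concrete example of the single file transfer mentioned earlier, a minimal sketch (the port and file name are arbitrary; depending on the nc variant, the sending side may need a Ctrl-C once the transfer finishes):
receiver$ nc -l 1234 > myfile.txt
sender$ nc <address> 1234 < myfile.txt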

A cool example to stream any file's content live (mostly used for logs) can be found at commandlinefu.

Monday, January 20, 2014

Using OpenStack Swift as ownCloud Storage Backend

ownCloud helps us access our files from anywhere in the world without taking control of the data away from us. Traditionally, the server's local hard disks have been used as the storage backend, but these days, as network latency is decreasing, storing data over the network is becoming cheaper and safer (in terms of recovery). ownCloud is capable of using SFTP, WebDAV, SMB, OpenStack Swift and several other storage mechanisms. We'll see the usage of OpenStack Swift with ownCloud in this tutorial.

At this point, the assumption is that we already have admin access to an ownCloud instance and that we have set up OpenStack Swift somewhere. If not, follow this tutorial to set up OpenStack Swift.

Step 1: External storage facilities are provided by an app known as "External storage support", written by Robin Appelman and Michael Gapczynski, which ships with ownCloud and is available on the apps dashboard. It is disabled by default; we need to enable it.

Step 2: We need to go to the Admin page of the ownCloud installation and locate the "External Storage" configuration area. We'll select "OpenStack Swift" from the drop-down menu.

Step 3: We need to fill in the details and credentials. We'll need the following information (a quick way to verify these credentials from the command line is shown after the list):
  • Folder Name: A user-friendly name for the storage mount point.
  • user: Username of the Swift user (required).
  • bucket: Bucket can be any random string (required). It is a container where all the files will be kept.
  • region: Region (optional for OpenStack Object Storage).
  • key: API key (required for Rackspace Cloud Files). This is not required for OpenStack Swift; leave it empty.
  • tenant: Tenant name (required for OpenStack Object Storage). The tenant is the one of which the Swift user is a part. It is created using OpenStack Keystone.
  • password: Password of the Swift user (required for OpenStack Object Storage).
  • service_name: Service name (required for OpenStack Object Storage). This is the same name which was used while creating the Swift service.
  • url: URL of the identity endpoint (required for OpenStack Object Storage). It is the Keystone endpoint against which authorization will be done.
  • timeout: Timeout of HTTP requests in seconds (optional).
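Before saving the form, the same credentials can be verified from the command line with the swift client, mirroring the command used in the Swift setup tutorial (the endpoint URL, user and password below are placeholders):
$ swift -V 2.0 -A http://<keystone_host>:5000/v2.0 -U <user> -K <password> stat
If this prints the account summary, the values are good to go into ownCloud.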

Just to get a better hold on things, check out the image of an empty configuration form, and here is a filled-up one.

Notice that if ownCloud is successfully able to connect and authorize, then a green circle appears on the left side of the configuration. In case things don't work out as expected, check out owncloud.log in the data directory of the ownCloud instance.

That is it. ownCloud is now ready to use OpenStack Swift to store data.

Sunday, January 12, 2014

OpenStack 101: How to Setup OpenStack Swift (OpenStack Object Storage Service)

In this tutorial we'll set up OpenStack Swift, which is the object store service. Swift can be used to store data with high redundancy. The nodes in Swift can be broadly classified into two categories:
  • Proxy Node: This is a public-facing node. It handles all the HTTP requests for various Swift operations like uploading, managing and modifying metadata. We can set up multiple proxy nodes and then load balance them using a standard load balancer.
  • Storage Node: This node actually stores the data. It is recommended to keep this node private, accessible only via the proxy node and not directly. Besides the storage service, this node also houses the container service and the account service, which manage the mappings of containers and accounts respectively.
For a small-scale setup, both the proxy and the storage node can reside on the same machine, but avoid doing so for a bigger setup.

Step 1: Let us install all the required packages for Swift:
# yum install openstack-swift openstack-swift-proxy openstack-swift-account openstack-swift-container openstack-swift-object memcached

Step 2: Attach a disk which will be used for storage, or carve out some disk space from the existing disk.
Using additional disks:
Most likely this is done when there is a large amount of data to be stored. XFS is the recommended filesystem and is known to work well with Swift. If the additional disk is attached as /dev/sdb, then the following will do the trick:
# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1
# echo "/dev/sdb1 /srv/node/partition1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
# mkdir -p /srv/node/partition1
# mount /srv/node/partition1

Carving out disk space from the existing disk:
We can carve out space from existing disks as well. This is usually done for smaller installations or for the "proof-of-concept" stage. We can use XFS like before, or we can use ext4 as well.
# truncate --size=2G /tmp/swiftstorage
# DEVICE=$(losetup --show -f /tmp/swiftstorage)
# mkfs.ext4 $DEVICE
# mkdir -p /srv/node/partition1
# mount $DEVICE /srv/node/partition1 -t ext4 -o noatime,nodiratime,nobarrier,user_xattr

Step 3 (optional): Set up rsync to replicate the objects. If replication or redundancy is not required, this step can be skipped. The configuration below goes into the rsync daemon config file (typically /etc/rsyncd.conf):
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = <storage_local_net_ip>

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

Note that there can be multiple account, container and object sections if we wish to use multiple disks or partitions.
Enable rsync in the defaults file and start the service:
# vim /etc/default/rsync
RSYNC_ENABLE = true
# service rsync start
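To confirm that the rsync daemon is up and serving the modules defined above, listing them is a quick check (assuming the daemon listens on the storage IP configured earlier):
# rsync <storage_local_net_ip>::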

Step 4: Set up the proxy node. The default config shipped with Fedora 20 is good with minor changes. Open /etc/swift/proxy-server.conf and edit the [filter:authtoken] section as below:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = admin
admin_user = admin
admin_password = ADMIN_PASS
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-swift

Keep in mind that admin_tenant_name, admin_user and admin_password should be the same as the ones used while setting up Keystone. If you have not installed and set up Keystone already, then check out this tutorial before you proceed.

Step 5: Now we will create the rings. Rings are mappings between the storage node components and the actual physical drives. Note that the create commands below have three numeric parameters at the end. The first parameter signifies the number of Swift partitions (not the same as disk partitions). A higher number of partitions ensures even distribution, but it also puts more strain on the server, so we have to find a good trade-off. The rule of thumb is to create about 100 Swift partitions per drive. For that, the first numeric parameter would be 7 (2^7 = 128, which is closest to 100). The second parameter defines the number of copies to create for the sake of replication; for a small instance with no rsync, set it to one, but three is recommended. The last number is the time in hours before a specific partition can be moved in succession. Set it to a low number for testing, but 24 is recommended for production instances.
# cd /etc/swift
# swift-ring-builder account.builder create 7 1 1
# swift-ring-builder container.builder create 7 1 1
# swift-ring-builder object.builder create 7 1 1

Add the device created above to the ring:
# swift-ring-builder account.builder add z1-127.0.0.1:6002/partition1 100
# swift-ring-builder container.builder add z1-127.0.0.1:6001/partition1 100
# swift-ring-builder object.builder add z1-127.0.0.1:6000/partition1 100

Rebalance the ring. This will ensure even distribution and minimal partition moves.
# swift-ring-builder account.builder rebalance
# swift-ring-builder container.builder rebalance
# swift-ring-builder object.builder rebalance
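At any point the ring layout can be inspected; running the builder without a sub-command prints the partitions, replicas and devices it knows about:
# swift-ring-builder account.builder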

Set the owner and the group for the partitions:
# chown -R swift:swift /etc/swift /srv/node/partition1

Step 6: Create the service and end point using Keystone.
# keystone service-create --name=swift --type=object-store --description="Object Store Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |       Object Store Service       |
|      id     | b230a3ecd12e4a52954cb24502be9d07 |
|     name    |              swift               |
|     type    |           object-store           |
+-------------+----------------------------------+

Copy the id from the output of the command above and use it to create the endpoint.
# keystone endpoint-create --region RegionOne --service_id b230a3ecd12e4a52954cb24502be9d07 --publicurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --adminurl http://127.0.0.1:8080/v1 --internalurl http://127.0.0.1:8080/v1
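To double-check the registration, the service and endpoint listings should now show the object-store entry:
# keystone service-list
# keystone endpoint-list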

Step 7: Start the services and test it:
# service memcached start
# for srv in account container object proxy  ; do sudo service openstack-swift-$srv start ; done
# swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K pass stat
   Account: AUTH_939ba777082a4f988d5b70dc886459e3
Containers: 0
   Objects: 0
     Bytes: 0
Content-Type: text/plain; charset=utf-8
X-Timestamp: 1389435011.63658
X-Put-Timestamp: 1389435011.63658

Upload a file abc.txt to a Swift container myfiles like this:
# swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K pass upload myfiles abc.txt
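Listing the container is a quick way to confirm that the upload worked:
# swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K pass list myfiles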


OpenStack Swift is now ready to use.

Saturday, January 11, 2014

OpenStack 101: How to Setup OpenStack Keystone (OpenStack Identity Service)

OpenStack Keystone is an identity and authorization service. Before we can do anything on other OpenStack components, we have to authenticate ourselves, and only then can the operation proceed. Let us get acquainted with some terminology before we proceed.
  • Token: An alphanumeric string which allows access to a certain set of services depending upon the access level (role) of the user.
  • Service: An OpenStack service like Nova, Swift and Keystone itself.
  • Tenant: A group of users. 
  • Endpoint: A URL (may be private) used to access the service.
  • Role: The authorization level of a user.
Let us go ahead and build the Keystone service for our use.

Step 1: Fedora 20 has OpenStack Havana in its repositories, so installing it is not a pain at all. Additionally, we need MySQL (replaced by MariaDB in Fedora 20) where Keystone will save its data.
# yum install openstack-utils openstack-keystone mysql-server

Step 2: Once the packages above are installed, we need to set a few things in the Keystone config. Find the following lines and edit them to look like this:
# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = ADMIN_TOKEN
.
.
.
[sql]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone

Note that ADMIN_TOKEN and KEYSTONE_DBPASS should be long and difficult to guess. Remember that ADMIN_TOKEN is the almighty token which has full access to create and destroy users and services. Also, several tutorials and the official docs use the command openstack-config --set /etc/keystone/keystone.conf to make the changes that we just did manually. I do not recommend using the command; it created duplicate sections and entries for me, which can be confusing down the line.

Step 3: Set up MySQL/MariaDB (only required for the first run of MySQL) to set the root password.
# mysql_secure_installation

Now we need to create the required database and tables for Keystone to work. The command below will do that for us. It will ask for the root password in order to create the keystone user and the database.
# openstack-db --service keystone --init --password KEYSTONE_DBPASS

Step 4: Create the signing keys and certificates for the tokens.
# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

Step 5: Set the file owners (just in case something got messed up) and start the service.
# chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log
# service openstack-keystone start
# chkconfig openstack-keystone on

Step 6: Setup the required environment variables. This will save the effort of supplying all the information every time a Keystone command is executed. Note that by default the Keystone admin port is 35357. This can be changed in /etc/keystone/keystone.conf.
# cat > ~/.keystonerc <<EOF

> export OS_SERVICE_TOKEN=ADMIN_TOKEN
> export OS_SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0
> export OS_USERNAME=admin
> export OS_PASSWORD=ADMIN_PASS
> export OS_TENANT_NAME=admin
> export OS_AUTH_URL=http://127.0.0.1:35357/v2.0
> EOF
# . ~/.keystonerc

Step 7: Create the tenants, users and the Keystone service with endpoint.
Creating the tenant:
# keystone tenant-create --name=admin --description="Admin Tenant"

Creating the admin user:
# keystone user-create --name=admin --pass=ADMIN_PASS --email=admin@example.com

Creating and adding admin user to admin role:
# keystone role-create --name=admin
# keystone user-role-add --user=admin --tenant=admin --role=admin
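A quick way to confirm that the tenant, user and role were created as expected (this uses the environment variables from Step 6):
# keystone tenant-list
# keystone user-list
# keystone role-list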

Creating Keystone service and endpoint:
# keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| description | Keystone Identity Service            |
| id          | c3dbb8aa4b27492f9c4a663cce0961a3     |
| name        | keystone                             |
| type        | identity                             |
+-------------+--------------------------------------+

Copy the id from the command above and use it in the command below:
# keystone endpoint-create --service-id=c3dbb8aa4b27492f9c4a663cce0961a3 --publicurl=http://127.0.0.1:5000/v2.0 --internalurl=http://127.0.0.1:5000/v2.0 --adminurl=http://127.0.0.1:35357/v2.0

Step 8: Test the keystone service.
# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
# keystone --os-username=admin --os-password=ADMIN_PASS --os-auth-url=http://127.0.0.1:35357/v2.0 token-get

A token with id, validity and other information will be returned.

Keystone is up and running. We'll create some services in the next tutorial.

OpenStack 101: What is OpenStack?

OpenStack, in simple words, is an open source project which enables us to build our own cloud computing setup. In other words, it creates an Infrastructure as a Service (IaaS) on our own infrastructure. We can have an Amazon AWS-like service up and running quite fast and painlessly wherever we want. A lot of effort has been put in to ensure that code written for Amazon AWS can be ported to any OpenStack installation easily.

Below is a small comparison (not exhaustive) between major OpenStack services and Amazon AWS to give you an idea about the compatibility.

OpenStack Service    Amazon AWS Service
Nova                 EC2
Cinder               EBS
Swift                S3
Keystone             IAM
Glance               AMI
Horizon              AWS Web Console
Neutron              EC2 network components

OpenStack 101 is a tutorial series to simplify using OpenStack and integrating OpenStack with simple applications. It'll help you create OpenStack installations for the "proof-of-concept" stage or for hosting a small IaaS service. For the most part I have tried to keep the tutorials as close to the official documentation as possible. Let me also state this loud and clear: OpenStack's documentation is really great. If you can, then please go through it. If you are done with the "proof-of-concept" stage and are going to run production-ready machines, then go through the official documentation. These tutorials will help you get started but are not a replacement for the docs.

I am going to use OpenStack Havana and will run it on Fedora 20 (the latest at the time of writing, January 2014). All the commands and code are tested well before putting them up here, but if you see any errors, please point them out to me.

Contents:
OpenStack 101: What is OpenStack?
OpenStack 101: How to Setup OpenStack Keystone (OpenStack Identity Service)
OpenStack 101: How to Setup OpenStack Swift (OpenStack Object Storage Service)