Learning Linux System Administration, by Aditya Patawari
<center><b>Linux - Docker - Ansible - Fedora - CentOS - Enterprise Linux - Python - TCP/IP - DevOps - System Administration - Internet - Scaling - Hacking - Load Balancing - Uptime - High Availability - Cloud - Puppet</b></center>
<b>Big Panda's community panel on Cloud monitoring</b> (2016-02-26)<div dir="ltr" style="text-align: left;" trbidi="on">
On Wednesday, February 10th, I participated in an online panel on the subject of <span style="color: #1155cc; font-family: "arial"; font-size: 14.6667px; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;"><a href="https://bigpanda.io/blog/monitoringscape-cloud-monitoring" style="text-decoration: none;">Cloud Monitoring</a></span>, as part of MonitoringScape Live (#MonitoringScape), a series of community panels about everything that matters to DevOps, ITOps, and the modern NOC.<br />
<br />
Watch a recording of the panel:
<iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/I3Mnbmfqpzk" width="560"></iframe><br />
<br />
Points to note from the session above:<br />
<br />
<ul style="text-align: left;">
<li><b>What is cloud?</b><br />Most of the panelists agreed that cloud is a way to get resources on demand. I personally think that a scalable, practically infinite pool of highly available resources can be termed a cloud.</li>
<li><b>How have cloud-based architectures impacted user experience?</b><br />There are mixed feelings about this. While a lot of clutter and noise is generated because getting resources to build and host applications has become easier by virtue of the cloud, I do not think that is a bad thing. The cloud has reduced the barrier to entry for a lot of application developers. It also helps shield users from bad experiences during high volumes of requests or processing. In a way, the cloud has helped to serve users more consistently.</li>
<li><b>What is the business case for moving to the cloud?</b><br />It is easy to scale, not only out and up but also down and in. Scaling out and up helps maintain a consistent user experience and ensures that the app does not die under high load. Scaling down and in reduces the expense that would otherwise be incurred by underutilized resources lying around.</li>
<li><b>What is different about monitoring cloud applications?</b><br />Cloud is dynamic. So, in my opinion, monitoring hosts is less important than monitoring services. One should focus on figuring out the health of the service rather than the health of individual machines. Alerting was a pain point that every panelist pointed out. I think we need to change the way we alert for cloud systems. We need to measure parameters like the response time of the application rather than CPU cycles on an individual machine.</li>
<li><b>What technology will impact cloud computing the most in the next 5 years?</b><br />This is a tricky question. While I would bet that containers are going to change the way we deploy and run our applications, it was pointed out, and I accept, that predicting technology is hard. So we just need to wait and watch, and be prepared to adapt and evolve to whatever comes.</li>
<li><b>Will we ever automate people out of datacenters?</b><br />I think we are almost there. As I see it, there are only two manual tasks left to get a server online: connecting it to the network and powering it on. From there, thanks to network boot and technologies like kickstart, taking things forward is not too difficult and does not need a human inside the datacenter.
</li>
</ul>
This was a summary of the panel discussion. I recommend going through the video and listening to what the different panelists had to say about cloud monitoring.<br />
I would like to thank <a href="https://www.bigpanda.io/integrations/nagios-the-alternative-to-a-flood-of-alerts" target="_blank">Big Panda</a> for organizing this. More community panels with different panelists are coming up. <a href="http://bigpanda.io/community-panels" target="_blank">Do check them out</a>.</div>
<b>Common iptables commands</b> (2015-10-26)<div dir="ltr" style="text-align: left;" trbidi="on">
A few years ago I wrote an <a href="http://blog.adityapatawari.com/2011/12/ip-packet-filtering-iptables-explained.html" target="_blank">iptables tutorial</a> explaining the basics of the topic. This post is a small cheat sheet of the simplest and most effective commands that I end up using frequently. I am assuming that we are operating on IP 1.1.1.1 and using the INPUT chain of iptables.<br />
<ol style="text-align: left;">
<li><b>Block an IP</b><br /><code>iptables -A INPUT -s 1.1.1.1 -j DROP</code></li>
<li><b>Block an IP range</b><br /><code>iptables -A INPUT -s 1.1.1.0/24 -j DROP</code></li>
<li><b>List current rule set</b><br /><code>iptables --list</code></li>
<li><b>List the current rule set with line numbers</b><br /><code>iptables --line-numbers --list</code></li>
<li><b>Delete a rule by specification</b><br /><code>iptables -D INPUT -s 1.1.1.1 -j DROP</code></li>
<li><b>Delete a rule by number</b><br /><code>iptables -D INPUT 7</code></li>
<li><b>Save iptables rules to a file</b><br /><code>iptables-save > iptables.dat</code></li>
<li><b>Load iptables rules from a file</b><br /><code>iptables-restore < iptables.dat</code></li>
</ol>
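These rules live only in memory and are lost on reboot. On Fedora/CentOS, a common way to persist them (assuming the stock iptables service is in use) is to save them to the file that the service loads at boot:<br />
<code>iptables-save > /etc/sysconfig/iptables</code><br />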
</div>
<b>The LVM Beginner's Guide</b> (2015-10-22)<div dir="ltr" style="text-align: left;" trbidi="on">
<a href="https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)" target="_blank">Logical Volume Manager (LVM)</a> helps in managing disk partitions irrespective of underlying disk layouts. In simple terms, this helps in extending a filling partition easily by just adding a disk. This is very useful in cloud based environments where adding a disk is very easy but extending a partition, if it is not LVM, might be very difficult.<br />
<br />
Components of LVM:<br />
<ol style="text-align: left;">
<li><b>Physical Volume (PV):</b> These are the underlying disk partitions that build up a volume group. </li>
<li><b>Volume Group (VG):</b> Analogous to an actual disk drive. A bunch of partitions (PVs) combine to build a volume group. </li>
<li><b>Logical Volume (LV):</b> Analogous to partitions on a disk. They are carved out of volume groups. </li>
<li><b>Physical Extent (PE):</b> The unit from which logical volumes are built. It is the smallest amount of disk that can be given to a logical volume, and further additions are done in multiples of a physical extent.</li>
</ol>
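Once volumes exist, each layer can be inspected at a glance with the summary commands shipped with lvm2:<br />
<code># pvs<br /># vgs<br /># lvs</code><br />
These list the physical volumes, volume groups and logical volumes respectively.<br />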
<br />
<b>Installation:</b><br />
On Fedora, CentOS, or Red Hat Enterprise Linux, do the following:<br />
<code># yum install lvm2</code><br />
<br />
<b>How to create an LVM volume?</b><br />
<ol style="text-align: left;">
<li>To prepare a disk for use with LVM, we need to create an actual partition and set its type to LVM. Assuming that the disk is attached to the system at /dev/sdb, the steps are as follows:<br /><code># fdisk /dev/sdb</code><br />This command will open the fdisk prompt. Type "<i>n</i>" followed by "<i>p</i>" to create a new primary partition. On a new disk this would be the first partition, so hit "<i>1</i>". Accepting the defaults for the next prompts, until we reach "<i>Command (m for help):</i>", is fine. Now we have a new partition.<br />To set the type of the partition to LVM, hit "<i>t</i>" followed by "<i>8e</i>".<br />Finally, to write these changes to the disk, hit "<i>w</i>".</li>
<br />
<li>The above exercise would produce a partition /dev/sdb1. We will use this to create a PV.<br /><code># pvcreate /dev/sdb1<br /> Physical volume "/dev/sdb1" successfully created</code><br />Let us check out what we created:<br /><code># pvdisplay<br /> "/dev/sdb1" is a new physical volume of "15.00 GiB"<br />
--- NEW Physical volume ---<br />
PV Name /dev/sdb1<br />
VG Name <br />
PV Size 15.00 GiB<br />
Allocatable NO<br />
PE Size 0 <br />
Total PE 0<br />
Free PE 0<br />
Allocated PE 0<br />
PV UUID tAo1Xk-1N5g-Q9EM-1s7h-EinR-lFv5-DSgkLe</code><br /><br />Note that the VG Name field is empty, which signifies that this PV is currently not part of any VG.</li>
<br />
<li>Now let us create a volume group and add the PV created in the previous step.<br /><code># vgcreate testvg /dev/sdb1<br /> Volume group "testvg" successfully created</code><br />Let us check out the VG we just created:<br />
<code># vgdisplay<br />
--- Volume group ---<br />
VG Name testvg<br />
System ID <br />
Format lvm2<br />
Metadata Areas 1<br />
Metadata Sequence No 1<br />
VG Access read/write<br />
VG Status resizable<br />
MAX LV 0<br />
Cur LV 0<br />
Open LV 0<br />
Max PV 0<br />
Cur PV 1<br />
Act PV 1<br />
VG Size 15.00 GiB<br />
PE Size 4.00 MiB<br />
Total PE 3839<br />
Alloc PE / Size 0 / 0 <br />
Free PE / Size 3839 / 15.00 GiB<br />
VG UUID d2i9eU-4cXQ-cytm-dsLG-EOzb-1e6M-AkjKIb</code></li>
<br />
<li>Let us create a logical volume now.<br /><code># lvcreate --name testlv --size 5G testvg<br /> Logical volume "testlv" created.</code><br />Let us check out our LV<br /><code># lvdisplay <br />
--- Logical volume ---<br />
LV Path /dev/testvg/testlv<br />
LV Name testlv<br />
VG Name testvg<br />
LV UUID ZSrEP2-ibK6-wrbq-8ckc-5SxL-WppL-4QY3Sq<br />
LV Write Access read/write<br />
LV Creation host, time localhost, 2015-10-22 18:34:57 +0000<br />
LV Status available<br />
# open 0<br />
LV Size 5.00 GiB<br />
Current LE 1280<br />
Segments 1<br />
Allocation inherit<br />
Read ahead sectors auto<br />
- currently set to 8192<br />
Block device 253:1</code></li>
<br />
<li>
Our logical volume is ready. Let us create a filesystem on it. For most regular usage, ext4 is a reasonable choice.
<br /><code># mkfs.ext4 /dev/testvg/testlv</code><br />We can mount it and use it now.</li>
</ol>
<b>How to extend an LVM volume or add a disk to it?</b><br />
LVM offers the flexibility of adding disks over a period of time without taking down the processes that might be using the disk. So let us see how to add a new disk and extend the logical volume. Check out steps 1 and 2 from "How to create an LVM volume?"; they are the same when adding a new disk to LVM.<br />
<ol style="text-align: left;">
<li>Once we are done with the first two steps, we have the new disk added as a PV. Now let us extend the volume group.<br />
<code># vgextend testvg /dev/sdc1 <br />
Volume group "testvg" successfully extended</code>
</li>
<li>
After extending the volume group, we need to extend the logical volume.
<br /><code># lvextend /dev/testvg/testlv /dev/sdc1<br />
Size of logical volume testvg/testlv changed from 5.00 GiB (1280 extents) to 9.00 GiB (2303 extents).<br />
Logical volume testlv successfully resized<br />
</code>
</li>
<li>Once the logical volume has more space, we can extend our filesystem to claim that space.<br /><code># resize2fs /dev/testvg/testlv</code></li>
</ol>
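Note that recent versions of lvm2 can also grow the filesystem in the same step, combining steps 2 and 3 above. A minimal sketch, using the -r (--resizefs) flag of lvextend:<br />
<code># lvextend -r -L +4G /dev/testvg/testlv</code><br />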
</div>
<b>Fixing Gummi "Compilation program is missing" error</b> (2015-05-09)<div dir="ltr" style="text-align: left;" trbidi="on">
I use <a href="http://www.latex-project.org/" target="_blank">LaTeX</a>, mostly <a href="https://bitbucket.org/rivanvx/beamer/wiki/Home" target="_blank">beamer</a>, for my slides. I really like the Warsaw theme and it has been the default for almost all my presentations for quite some time now. <a href="http://gummi.midnightcoding.org/" target="_blank">Gummi</a> is my choice of editor for this since it is dead simple and I can see the preview as the slides develop in the side pane.<br />
However, installing Gummi on Fedora never pulls in all the dependencies for me, so I always get a compilation error on a fresh installation. In this tutorial I am going to write about how to set up Gummi to fix that issue.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9-OYtsoClFHG2BfNwq5PzyacmD58a-4p7z6slEW8_TmbMp4vSM_gnLWMhy8uwEEmUmK7jhAlobVqpH6IrsY1Ja4Vy0spmPSCDxORwsxL1-VF11m04eNqYZ3ZuPEh4NNYRA7UQQicp6cY/s1600/Screenshot+from+2015-05-09+01:18:26.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="280" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9-OYtsoClFHG2BfNwq5PzyacmD58a-4p7z6slEW8_TmbMp4vSM_gnLWMhy8uwEEmUmK7jhAlobVqpH6IrsY1Ja4Vy0spmPSCDxORwsxL1-VF11m04eNqYZ3ZuPEh4NNYRA7UQQicp6cY/s400/Screenshot+from+2015-05-09+01:18:26.png" width="500" /></a></div>
<br />
<br />
<b>Step 1:</b> Install Gummi<br />
<code># yum install gummi</code><br />
<br />
<b>Step 2:</b> Install the compilation tools<br />
<code># yum install rubber latexmk texlive-xetex</code><br />
<br />
<b>Step 3:</b> Install beamer for the Warsaw and other themes<br />
<code># yum install texlive-beamer</code><br />
<br />
<b>Step 4:</b> For presentations, I usually need SI units.<br />
<code># yum install texlive-siunitx-svn31333.2.5s</code><br />
<br />
And this is about it!</div>
<b>Setting up Yubikey for SSH two-factor with public key authentication</b> (2015-04-25)<div dir="ltr" style="text-align: left;" trbidi="on">
I just bought a <a href="https://www.yubico.com/products/yubikey-hardware/yubikey-neo/" target="_blank">Yubikey Neo</a>. It is a tiny USB device which can be used for multi-factor authentication with many applications. But setting it up can be tricky at times due to lack of documentation. Here is what I did to set up SSH with Yubikey two-factor authentication on Fedora 20 and Fedora 21:<br />
<br />
<b>Step 1:</b> Install the pam module for yubikey auth.<br />
<code># yum install pam_yubico </code><br />
<br />
<b>Step 2:</b> We need to create a mapping of users and the Yubikeys associated with them. We'll need the key id of the Yubikey. To obtain that, just open any text editor, plug the key into a USB slot and touch the golden button on the Yubikey. The first 12 characters are the id. Also note that there can be multiple keys associated with one user. We'll create a file /etc/yubi-map:<br />
<code># cat /etc/yubi-map</code><br />
<code>aditya:scwechdueeuv</code><br />
<br />
<b>Step 3:</b> Now we have to add the yubico pam module to the sshd auth. Change /etc/pam.d/sshd so that the first few lines look like this:<br />
<code>#%PAM-1.0<br />
auth required pam_sepermit.so use_first_pass<br />
auth sufficient pam_yubico.so id=1 authfile=/etc/yubi-map debug</code><br />
<br />
Note that I have added the pam_yubico as a sufficient auth and also modified the pam_sepermit to use the user's initial password.<br />
<br />
<b>Step 4:</b> We'll modify the /etc/ssh/sshd_config to allow challenge response and define the authentication method.<br />
<code>
ChallengeResponseAuthentication yes<br />
AuthenticationMethods publickey,keyboard-interactive<br />
</code><br />
Optionally, disable the password auth as well<br />
<code>PasswordAuthentication no</code><br />
<br />
Restart the sshd.<br />
<code># systemctl restart sshd</code><br />
<br />
<div>
<b>Step 5:</b> Now, here is the real catch. The Yubikey needs to contact an authentication server before it can proceed. As far as I understand, we cannot use the Yubikey when there is no internet (alternatively, you can run an authentication server in your own infra, but more on that later). This also creates a problem: SELinux denies any network request during authentication. To handle the situation, the <a href="https://developers.yubico.com/yubico-pam/Yubikey_and_SELinux_on_Fedora_18_and_up.html" target="_blank">Yubikey docs</a> suggest setting a boolean.</div>
<code># setsebool -P authlogin_yubikey 1</code>
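<br />
Before logging out, it is wise to keep the current session open and verify the whole setup from a second terminal, so that a PAM mistake does not lock us out (the user and host below are placeholders):<br />
<code>$ ssh aditya@server</code>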
</div>
<b>How to check for SSL FREAK Vulnerability?</b> (2015-03-04)<div dir="ltr" style="text-align: left;" trbidi="on">
The research group behind the SMACK TLS attacks has disclosed a vulnerability known as <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-0204" target="_blank">FREAK</a> which can be used for a man-in-the-middle (MITM) attack. The vulnerability is due to an old ghost created by the US government (the NSA, more specifically): years ago, they convinced several organizations to use weaker keys, known as export-grade keys, for any software that was to be used outside the borders of the USA. While the use of strong keys is widespread now, several servers still support the weaker keys.<br />
<br />
The group discovered that this vulnerability can be exploited by using a client to make a connection via a weak key. Once the key is generated by the server, it is reused until the server is restarted, which can potentially be months. The group was able to crack this weak server key in 7.5 hours using Amazon EC2. Once this is cracked, potentially all the communication can be downgraded to use weak keys and MITM'ed.<br />
<br />
<b>How to check if a server is vulnerable or not?</b><br />
Fire the following command:<br />
<code>$ openssl s_client -connect www.google.com:443 -cipher EXPORT</code>
<br />
<code><br /></code>
A handshake failure signifies that export ciphers are not enabled on the server and it is safe.<br />
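<br />
If the handshake succeeds instead, the export ciphers should be disabled. On nginx, for example, this can be done by excluding them from the cipher list (a minimal sketch; tune the rest of the string to your setup):<br />
<code>ssl_ciphers HIGH:!aNULL:!EXPORT;</code><br />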
<br />
<u style="background-color: white; color: #222222; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 15.3999996185303px; line-height: 21.5599994659424px;"><a href="https://news.ycombinator.com/item?id=9143218" style="color: #888888; text-decoration: none;" target="_blank">Hacker News: Discuss and upvote on Hacker News.</a></u><br />
<br /></div>Aditya Patawarihttp://www.blogger.com/profile/04110480749979714191noreply@blogger.com1tag:blogger.com,1999:blog-5556854748152045563.post-70847858748062358322015-01-05T06:48:00.000-08:002018-10-22T01:23:33.354-07:00Basic Docker Orchestration with Google Kubernetes on Fedora<div dir="ltr" style="text-align: left;" trbidi="on">
<a href="http://kubernetes.io/" target="_blank">Kubernetes</a> is new framework by Google to manage Linux container clusters. I started playing with it today and it seems like a cool, powerful tool to manage a huge barrage of containers and to ensure that a predefined number of containers are always running. Installation and configuration on <a href="https://github.com/adimania/kubernetes/blob/master/docs/getting-started-guides/fedora/fedora_manual_config.md" target="_blank">Fedora</a> and many other distributions can be found at these <a href="https://github.com/adimania/kubernetes/tree/master/docs/getting-started-guides" target="_blank">Getting Started Guides</a>. I recommend using two machines for this experiment (one physical and one VM is fine). Kubelet (or Minion) is the one where Docker containers will run, so use more powerful machine for that.<br />
<br />
After the installation we'll see something like below when we look for minions from kube master:<br />
<code>master# kubectl get minions<br />NAME LABELS<br />fed-minion <none></code><br />
<code><br /></code>
Now we move on to the <a href="https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/walkthrough/README.md" target="_blank">Kubernetes 101 Walkthrough</a>, where we will run a container using the yaml from the Intro section.<br />
<code>master# kubectl create -f kubeintro.yaml</code><br />
<br />
<strike>.. except, (as on 25 Dec 2014) it won't run. It will give an error like this:</strike><br />
<strike><code><b>the provided version "v1beta1" and kind "" cannot be mapped to a supported object</b></code></strike><br />
<strike><br /></strike><strike>Turns out that a field "kind" is empty. So the kubectl won't be able to run the container. Correct this so that</strike> kubeintro looks like this:<br />
<br />
<code>master# cat kubeintro.yaml<br />apiVersion: v1beta1<br /><b>kind: Pod</b><br />id: www<br />desiredState:<br /> replicas: 2<br /> manifest:<br /> version: v1beta1<br /> id: www<br /> containers:<br /> - name: nginx<br /> image: dockerfile/nginx</code><br />
<code><br /></code>
<b>Optional:</b> Now, I do not exactly know what is there inside the image "dockerfile/nginx". So I would replace it with something that I want to spawn like "<a href="https://registry.hub.docker.com/u/adimania/flask/" target="_blank">adimania/flask</a>" image. The dockerfile for my flask image can be found in <a href="https://github.com/fedora-cloud/Fedora-Dockerfiles/tree/master/flask" target="_blank">Fedora-Dockerfiles</a> repo.<br />
<br />
Once kubeintro.yaml is fixed, we can run it on the master and we'll see that a container is started on the minion. We can stop the container on the minion using the <code>docker stop</code> command and we'll see that Kubernetes starts the container again.<br />
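<br />
To see what Kubernetes is managing at any point, we can list the pods from the master (a quick check, consistent with the kubectl commands used above):<br />
<code>master# kubectl get pods</code><br />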
<br />
The example above doesn't do much. We need to publish the container's ports so that we can access the webpage served by it. Modify kubeintro.yaml to tell it to publish ports like this:<br />
<br />
<code>master# cat kubeintro.yaml<br />apiVersion: v1beta1<br />kind: Pod<br />id: www<br />desiredState:<br /> replicas: 2<br /> manifest:<br /> version: v1beta1<br /> id: www<br /> containers:<br /> - name: nginx<br /> image: dockerfile/nginx<br /> ports:<br /> - containerPort: 80<br /> hostPort: 8080</code><br />
<code><br /></code>
Now delete the older pod named www and start a new one from the new kubeintro.yaml file.<br />
<code>master# kubectl delete pod www<br />master# kubectl create -f kubeintro.yaml</code><br />
<code><br /></code>
We can now browse to localhost:8080 on the minion and we'll see Nginx serving the default page. (If we had used the "adimania/flask" image, we would have seen "Hello from Fedora!" instead.)<br />
<br />
If you need any help with managing Kubernetes, check out <a href="https://devopsnexus.com/" target="_blank">my consulting services</a>.</div>
<b>Docker Quick Start Guide</b> (2014-12-07)<div dir="ltr" style="text-align: left;" trbidi="on">
Here is a short and sweet guide to <a href="https://www.docker.com/" target="_blank">Docker</a> for absolute beginners. I have added a few FAQs as well.<br />
<br />
<b>Q. What is a container?</b><br />
A. A container is an isolated Linux system running on a Linux machine itself. Containers are lightweight and consume fewer resources than a virtual machine. They rely on the kernel's cgroups and namespaces features to create isolation for CPU, memory, etc.<br />
<br />
<b>Q. What is Docker?</b><br />
A. Docker is a container-based platform to build and ship applications. Docker makes containers easy to use by providing a lot of automation and tools for container management.<br />
<br />
<b>Q. Why would I use Docker?</b><br />
A. If you have any of the following concerns then you should use Docker:<br />
<ul style="text-align: left;">
<li>My production needs to be homogeneous</li>
<li>I need to ship entire environment to my colleague</li>
<li>My hypervisor ate all the CPU (or RAM)</li>
<li>.. it works on my machine, but not in production ..</li>
</ul>
<div>
<br /></div>
<div>
<b>How to play with Docker</b></div>
<div>
<b>Step 1:</b> Let us install and start Docker first:</div>
<div>
<code># yum install docker-io</code></div>
<div>
<code># systemctl start docker</code></div>
<code>
</code>
<br />
<div>
<b>Step 2:</b> Docker has something called registries. A registry stores container images from which we can download and run containers. These registries can be public or private. Docker.io maintains a public registry, which is the default when we want to download an image. The command below will download an image named fedora-busybox, contributed by user adimania:</div>
<div>
<code># docker pull adimania/fedora-busybox</code></div>
<div>
<code>Pulling repository adimania/fedora-busybox</code></div>
<div>
<div>
<code>605bfcc0af5d: Download complete</code></div>
</div>
<br />
<b>Step 3:</b> Let us check out the image that we just downloaded.<br />
<code># docker images<br />
</code>
<div>
<div>
<code>REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE</code></div>
</div>
<code>
</code>
<br />
<div>
<code>adimania/fedora-busybox latest 605bfcc0af5d 7 minutes ago 1.309 MB</code></div>
<code>
</code>
<br />
<div>
<br /></div>
<div>
<b>Step 4:</b> Once we have the image, we would want to run a container off it. The command below will take care of that and drop us into the container's shell:</div>
<code># docker run -i -t adimania/fedora-busybox /sbin/sh</code><br />
<code>
</code>
<br />
<div>
The run command takes certain parameters and runs the image provided as an argument. The arguments "-i" and "-t" tell the run command to open STDIN and allocate a pseudo-TTY. The last argument is the command that runs inside the container in the foreground. <b>One thing to note here is that docker always needs a process to run in the foreground</b>. As soon as this process exits, the docker container shuts down. For certain containers, this foreground process is implicit and we may not need to tell docker what to run. However, for certain other containers, like the one which we are using, we specify "/sbin/sh" to run as the foreground process. The docker run command supports several other arguments and flags. It is advisable to run docker run --help to check out all the options.</div>
<div>
<br /></div>
<div>
<b>Step 5:</b> We can see more information about the containers that are currently running by using the docker ps command:</div>
<code># docker ps<br />
</code>
<div>
<div>
<code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES</code></div>
<div>
<code>3af04d663b3d adimania/fedora-busybox:latest "/sbin/sh" 25 seconds ago Up 24 seconds furious_leakey</code></div>
</div>
<code>
</code>
<br />
<div>
The docker ps command shows all the containers that are running, along with other useful info like uptime, foreground command, etc. This command takes an optional argument "-a" which shows all the containers, including the stopped ones. </div>
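<div>
For example, a container that has just been stopped no longer appears in plain docker ps, but it will show up with:</div>
<div>
<code># docker ps -a</code></div>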
<div>
<br /></div>
<div>
<b>Step 6:</b> Let us stop and start the container again. We'll need the container id obtained from the docker ps command.</div>
<code># docker stop 3af04d663b3d</code><br />
<div>
<code>3af04d663b3d</code></div>
<code>
</code>
<br />
<div>
<code># docker start 3af04d663b3d</code></div>
<div>
<code>3af04d663b3d</code></div>
<code>
</code>
<br />
<div>
The commands above are part of a workshop which I have conducted at Flock and CentOS Dojo. Check out the slides <a href="http://www.slideshare.net/AdityaPatawari/docker-centosdojo" target="_blank">here</a>.</div>
</div>Aditya Patawarihttp://www.blogger.com/profile/04110480749979714191noreply@blogger.com0tag:blogger.com,1999:blog-5556854748152045563.post-65199251914450349082014-11-27T07:00:00.000-08:002014-11-27T07:00:51.294-08:00Encrypt Everything: Encrypt data using GPG and save passwords<div dir="ltr" style="text-align: left;" trbidi="on">
Data security is an important concern these days and encryption is a very powerful tool to secure the data. In my previous post I talked about <a href="http://blog.adityapatawari.com/2014/11/encrypt-everything-how-to-encrypt-disk.html" target="_blank">how to encrypt a disk</a>. Now we are going to talk about how to encrypt files using <a href="https://www.gnupg.org/" target="_blank">GNU Privacy Guard (GPG)</a>.<br />
<div>
<br /></div>
<div>
GPG uses <a href="http://en.wikipedia.org/wiki/Public-key_cryptography" target="_blank">public key cryptography</a>. This means that instead of having one key to encrypt and decrypt, there are two keys. One of these keys can be publicly shared and hence is known as public key. The other key is to be kept secret and is known as private key. Anything encrypted with public key can only be decrypted with private key.</div>
<div>
<br /></div>
<div>
<b>How to encrypt files?</b></div>
<div>
Assuming a scenario where user "test" wants to send an encrypted file to me, the user just has to find my public key, encrypt the data and send it to me; I will then be able to decrypt the file using my private key and obtain the data. Note that user "test" doesn't need to have GPG keys generated in order to encrypt and send data to me.</div>
<div>
<br /></div>
<div>
<b>Step 1:</b> Let us create a text file which we'll encrypt:</div>
<div>
<code>test$ echo "This is a secret message." > secret.txt</code></div>
<div>
<br /></div>
<div>
<b>Step2:</b> User "test" needs to find my keys. There are many public servers where one can share their public key in case someone else wants to encrypt the data. One such server is run by MIT at <a href="http://pgp.mit.edu/" target="_blank">http://pgp.mit.edu</a>.</div>
<div>
<code>test$ gpg --keyserver pgp.mit.edu --search-keys aditya@adityapatawari.com</code></div>
<div>
<br /></div>
<div>
<b>Step 3:</b> Once the user obtains my public key, encrypting the data is really easy.</div>
<div>
<code>test$ gpg --output secret.txt.gpg --encrypt --recipient aditya@adityapatawari.com secret.txt</code></div>
<div>
<br /></div>
<div>
The command above will create an encrypted file named secret.txt.gpg which can be shared via email or any other means. Once I get the encrypted file, I can decrypt it using my private key</div>
<div>
<code>aditya$ gpg --output secret.txt --decrypt secret.txt.gpg</code></div>
<div>
<br /></div>
<div>
<b>How to create GPG keys to receive data?</b></div>
<div>
Now assume a scenario where "test" user wants to create a set of GPG keys in order to share the public key and receive encrypted data.</div>
<div>
<br /></div>
<div>
<b>Step 1:</b> Generate a key pair. The command will present some options (stick to the defaults if you are not sure) and ask for some data like your name and email address.</div>
<div>
<code>test$ gpg --gen-key</code></div>
<div>
<br /></div>
<div>
<b>Step 2:</b> Check the keys.</div>
<code>
</code>
<br />
<div>
<div>
<code>test$ gpg --list-secret-keys</code></div>
<div>
<code>/home/test/.gnupg/secring.gpg</code></div>
<div>
<code>-----------------------------</code></div>
<div>
<code>sec 2048R/<b>E46749BB</b> 2014-11-23</code></div>
<div>
<code>uid Aditya TestKeys (This is not a valid key) <adimania+test@gmail.com></code></div>
<div>
<code>ssb 2048R/C5E57FF2 2014-11-23</code></div>
</div>
<code>
<br />
</code>
<br />
<div>
<b>Step 3:</b> Upload the key to a public server using the id from the output above.</div>
<div>
<code>test$ gpg --keyserver pgp.mit.edu --send-key E46749BB</code></div>
<div>
<br /></div>
<div>
Now others can search for the key, use it to encrypt the data and send it to the "test" user. </div>
<div>
<br /></div>
<div>
To use GPG for saving passwords, have a look at the <a href="http://www.passwordstore.org/" target="_blank">pass</a> utility. It uses GPG to encrypt passwords and other data and stores them in a hierarchical format. </div>
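<div>
<br />A quick sketch of typical pass usage, assuming the key id generated above (the entry name is just an example):<br />
<code>test$ pass init E46749BB<br />
test$ pass insert mail/example<br />
test$ pass mail/example</code></div>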
</div>
<b>Encrypt Everything: How to encrypt the disk to protect the data</b> (2014-11-22)<div dir="ltr" style="text-align: left;" trbidi="on">
Recently, at BrowserStack.com, some of our services got <a href="http://www.browserstack.com/attack-and-downtime-on-9-November" target="_blank">compromised</a>. We use Amazon Web Services extensively. The person (or group) who attacked us mounted one of our backups and managed to steal some of the data. We could have prevented this simply by using encrypted disks, which would have made this attack useless. Learning from our mistakes, we have recently started encrypting everything and I am going to show you how to do that. One point worth noting here is that Amazon AWS does provide encryption support for EBS volumes, but that is transparent and would not help in case of the account getting compromised. I am going to use dm-crypt, which is supported by the Linux kernel, so the steps are quite generic and will work on any kind of disk in any kind of environment, including Amazon AWS, Google Compute Engine, or physical disks in your datacenter.<br />
<br />
Our goal is to encrypt /home. To achieve this, we'll attach a disk, encrypt it, move the entire /home data to this disk and create a symbolic link to /home.<br />
<br />
<b>Step 1:</b> We are going to use <b style="background-color: white; color: #252525; font-family: sans-serif; font-size: 14px; line-height: 22.3999996185303px;"><a href="https://code.google.com/p/cryptsetup/" target="_blank">Linux Unified Key Setup (LUKS)</a>. </b>For that we need to install the cryptsetup package.<br />
<code># yum install cryptsetup</code><br />
<br />
<b>Step 2:</b> While using AWS, never attach the volume to be encrypted while launching the instance. If we do so, the instance will fail to boot next time, because it'll ask for the decryption password during boot, which is not possible to supply in AWS. If it is absolutely mandatory to do this, then I suggest removing the entries from fstab and crypttab, but it is much easier to just attach the disk after the instance has launched. Assuming that the attached disk is available at /dev/xvdf, we'll set up the encryption now.<br />
<code># cryptsetup -y -v luksFormat /dev/xvdf</code><br />
<code>
WARNING!<br />
========<br />
This will overwrite data on /dev/xvdf irrevocably.<br />
<br />
Are you sure? (Type uppercase yes): YES<br />
Enter LUKS passphrase:<br />
Verify passphrase:<br />
<br />
Command successful.<br />
</code><br />
<code><br /></code>
We can verify the encryption parameters as well. Default is AES 256 bit.<br />
<code># cryptsetup luksDump /dev/xvdf</code><br />
<br />
<b>Step 3:</b> We'll open the device and map it to /dev/mapper/home so that we can use it.<br />
<code># cryptsetup luksOpen /dev/xvdf home<br />
Enter passphrase for /dev/xvdf:</code><br />
<br />
<b>Step 4:</b> This step is optional. To further protect our data, we can zero out the entire disk before creating the filesystem. Since the writes go through the encryption layer, this fills the underlying disk with random-looking data.<br />
<code># dd if=/dev/zero of=/dev/mapper/home</code><br />
<br />
<b>Step 5:</b> Now we'll create a filesystem.<br />
<code># mkfs.ext4 /dev/mapper/home</code><br />
<br />
<b>Step 6:</b> Let us mount and copy the data from /home:<br />
<code>
# mkdir /myhome<br />
# mount /dev/mapper/home /myhome<br />
# cp -a /home/* /myhome/<br />
# rm -rf /home<br />
# ln -s /myhome /home<br />
</code>
<br />
Great! Our /home directory is encrypted. But wait a minute... this approach has a shortcoming. We have deliberately designed it so that the disk won't auto-mount during boot, because there is no way to supply a password in a cloud environment at boot time. Since the disk won't mount, we won't be able to ssh into the machine, because the authorized_keys file is kept inside the home directory of the user. To address this problem, either change "AuthorizedKeysFile" in sshd_config, or create a user with a home directory in /var/lib or /opt and grant sudo for the cryptsetup and mount commands. After a reboot, with the first approach we will be able to ssh without any problem; with the second, we'll ssh as the other user, mount the encrypted drive and then use it normally.<br />
<br />
<code>
$ ssh mountuser@<ip><br />
$ sudo /sbin/cryptsetup luksOpen /dev/xvdf home<br />
$ sudo /bin/mount /dev/mapper/home /myhome/<br />
</code>
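<br />
A minimal sudoers sketch for that second approach, assuming the user is called mountuser (add it via visudo):<br />
<code>mountuser ALL=(ALL) NOPASSWD: /sbin/cryptsetup, /bin/mount</code><br />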
<br />
<br />
A couple of points to remember:<br />
<ul style="text-align: left;">
<li>Do not forget the LUKS password. It cannot be retrieved, if lost.</li>
<li>Try it a couple of times on staging machines before doing it on the machines that matter.</li>
</ul>
<br />
<div>
<br /></div>
</div>
<b>How to check for SSL POODLE / SSLv3 bug? How to fix Nginx?</b> (2014-10-15)<div dir="ltr" style="text-align: left;" trbidi="on">
Google has just <a href="https://www.openssl.org/~bodo/ssl-poodle.pdf" target="_blank">disclosed</a> the SSL POODLE vulnerability, which is a design flaw in SSLv3. Since it is a design flaw in the protocol itself and not an implementation bug, there will be no patches. The only way to mitigate this is to disable SSLv3 in your web server or application using SSL.<br />
<br />
<b>How to test for SSL POODLE vulnerability?</b><br />
<code>$ openssl s_client -connect google.com:443 -ssl3</code><br />
If there is a handshake failure, then the server does not support SSLv3 and it is safe from this vulnerability. Otherwise it is required to disable SSLv3 support.<br />
<br />
<b>How to disable the SSLv3 support on Nginx?</b><br />
In the nginx configuration, just after the "ssl on;" line, add the following to allow only TLS protocols:<br />
<code>ssl_protocols TLSv1.2 TLSv1.1 TLSv1;</code><br />
<code><br /></code>
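For Apache httpd, the equivalent is the SSLProtocol directive:<br />
<code>SSLProtocol all -SSLv2 -SSLv3</code><br />
<br />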
<u><a href="https://news.ycombinator.com/item?id=8458292" target="_blank">Hacker News: Discuss and upvote on Hacker News.</a></u></div>
<b>Types of NAT and How to determine the NAT Type</b> (2014-09-05)<div dir="ltr" style="text-align: left;" trbidi="on">
I am going to do a couple of posts on <a href="http://en.wikipedia.org/wiki/Network_address_translation" target="_blank">NAT (Network Address Translation)</a> to discuss their classification and how to create a NAT on a Linux machine. This post will cover NAT types.<br />
<br />
Generally NAT is used to allow private IPs to talk to the Internet. There are certain security aspects to it as well since outsiders cannot directly access the machines inside the NAT (well, not easily at least). <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtlGeYiJ_YILN3cA0PwfKZngcQHswjEm3Ks-7i3nUh2B2kYiCOE_lRcJb-brOS9q12EP6QAuMtXJYaAW0l_dDHzBnx5hMbuweq04zV7CILwpk55St0cPM-3SlswsqARiHq6D9boc4JHOI/s1600/NAT.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtlGeYiJ_YILN3cA0PwfKZngcQHswjEm3Ks-7i3nUh2B2kYiCOE_lRcJb-brOS9q12EP6QAuMtXJYaAW0l_dDHzBnx5hMbuweq04zV7CILwpk55St0cPM-3SlswsqARiHq6D9boc4JHOI/s1600/NAT.jpg" height="237" width="400" /></a></div>
<br />
<br />
In general there are 4 kinds of NAT. Let us understand them one by one.<br />
<ul style="text-align: left;">
<li><b>Full cone NAT:</b> This is also known as one to one NAT. It is basically simple port forwarding where there is a static binding from client ip:port to NAT's ip:port and any one from Internet can write to NAT's ip:port and it will be forwarded to the client. This kind of NAT is used very infrequently. </li>
<li><b>(Address) Restricted cone NAT:</b> In this scenario, the client can only receive packets from a host to which it has already sent packets. For example, if the client from the diagram above sends a packet to a server with address 8.8.8.8, then the NAT will accept replies from any port of the server as long as the source IP (8.8.8.8) remains the same.</li>
<li><b>Port Restricted cone NAT:</b> In this scenario, the client can only receive packets from a host to which it has already sent packets, and only as long as they come from the same server port. For example, if the client from the diagram above sends a packet to a server with address 8.8.8.8 on port 5555, then the NAT will only accept replies originating from port 5555 of the server. This NAT is more restrictive than the Address Restricted NAT.</li>
<li><b>Symmetric NAT:</b> In general, all the above NAT types preserve the source port. For example, if the client is sending a packet from 192.168.0.2:54321 to 8.8.8.8:80, then the NAT will usually map 192.168.0.2:54321 to 1.2.3.4:54321, preserving the port number. But in a Symmetric NAT, a random port is chosen for every new connection. This makes port prediction very difficult, and techniques like UDP hole punching fail in this scenario.</li>
</ul>
<div>
How do you tell what kind of NAT you are in? I have written a set of <a href="https://github.com/adimania/NATDiscovery" target="_blank">scripts</a> to determine that. </div>
<div>
<br />
<script src="http://gist-it.appspot.com/github/adimania/NATDiscovery/blob/master/server.py"></script>
</div>
<div>
<br />
<script src="http://gist-it.appspot.com/github/adimania/NATDiscovery/blob/master/client.py"></script>
</div>
<div>
Run server.py on a publicly accessible server and client.py on the client inside the NAT. Make sure that UDP is allowed to the public server's port 5005 (or you can change the port in the code).<br />
<br />
If you see any bug in the scripts then please let me know in the comments or on the <a href="https://github.com/adimania/NATDiscovery/issues" target="_blank">Github</a>. </div>
</div>
<b>Introduction To Ansible</b> (2014-08-22)<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
I recently gave a talk about Ansible at Flock, Prague. Here is a <a href="http://youtu.be/sCXCgsmQuSY?t=4m" target="_blank">YouTube video</a> of the same.<br />
<br /></div>
<iframe allowfullscreen="" frameborder="0" height="315" src="//www.youtube.com/embed/sCXCgsmQuSY?t=4m" width="560"></iframe>
Paul W. Frields has written a summary of the talk on the <a href="http://fedoramagazine.org/flock-2014-day-2-orchestration-with-ansible-at-fedora-project/" target="_blank">Fedora Magazine</a>.</div>
<b>A Simple Netcat How-To for Beginners</b> (2014-04-16)<div dir="ltr" style="text-align: left;" trbidi="on">
There are tonnes of tutorials on <a href="http://netcat.sourceforge.net/" target="_blank">Netcat</a> already. This one is to remind me and my colleagues about the awesomeness of nc, which we forget on a regular basis.<br />
Common situations where nc can be used:<br />
<ul style="text-align: left;">
<li>Check connectivity between two nodes. I had to learn the hard way that ping-based (read: all ICMP-based) checks are not always the best way to judge connectivity. ISPs often set ICMP to a lower priority and drop it.</li>
<li>Single file transfer.</li>
<li>Testing network applications. I have written several clients and loggers for logstash and graphite which would have been much harder to test without nc.</li>
<li>Firing commands to remote servers where running a conventional tcp/http server is not possible (like VMWare ESXi)</li>
</ul>
<div>
Basic Netcat servers:</div>
<div>
<ul style="text-align: left;">
<li><b>nc -l <port></b><br />Netcat starts listening for TCP connections on the specified port. A client can connect and write arbitrary strings to the socket, which will be reflected here.</li>
<li><b>nc -u -l <port></b><br />Netcat starts listening for UDP packets on the specified port. A client can write arbitrary strings to the socket, which will be reflected here.</li>
<li><b>nc -l <port> -e /bin/bash</b><br />Netcat starts listening for TCP connections on the specified port. A client can connect and write arbitrary commands, which will be passed to /bin/bash and executed. Use with extreme caution on remote servers; the security here is nil.</li>
<li><b>nc -l -k <port> -e /bin/bash</b><br />The problem with the above command is that nc terminates as soon as the client disconnects. The -k option forces nc to stay alive and listen for subsequent connections as well.</li>
</ul>
<div>
Basic Netcat Clients:</div>
<div>
<ul style="text-align: left;">
<li><b>nc <address> <port></b><br />Connect as a client to the server running on <address>:<port> via TCP.</li>
<li><b>nc -u <address> <port></b><br />Connect as a client to the server running on <address>:<port> via UDP.</li>
<li><b>nc -w <seconds> <address> <port></b><br />Connect as a client to the server running on <address>:<port> via TCP and time out after <seconds> of being idle. I used this a lot to send data to graphite from shell scripts.</li>
</ul>
</div>
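<div>
<br />
The single file transfer mentioned above needs no extra tooling; a minimal sketch (port and filenames are arbitrary):<br />
<code>receiver$ nc -l 9999 > file.copy<br />
sender$ nc <receiver_address> 9999 < file</code></div>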
<div>
<br />
A cool example of streaming a file's content live (mostly used for logs) can be found at <a href="http://www.commandlinefu.com/commands/view/11873/tail-a-log-file-over-the-network" target="_blank">commandlinefu</a>.</div>
</div>
</div>
<b>Using OpenStack Swift as ownCloud Storage Backend</b> (2014-01-20)<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
</div>
<a href="http://owncloud.org/" target="_blank">ownCloud</a> helps us to access our files from anywhere in the world, without take the control of data from us. Traditionally server's local hard disks have been used to act as storage backend but these days, as the latency of networks is decreasing, storing data over network is becoming cheaper and safer (in terms of recovery). ownCloud is capable of using SFTP, WebDAV, SMB, OpenStack Swift and several other storage mechanisms. We'll see the usage of <a href="http://swift.openstack.org/" target="_blank">OpenStack Swift</a> with ownCloud in this tutorial<br />
<br />
At this point, the assumption is that we already have admin access to an ownCloud instance and have set up OpenStack Swift somewhere. If not, follow <a href="http://blog.adityapatawari.com/2014/01/openstack-101-how-to-setup-openstack_12.html" target="_blank">this tutorial</a> to set up OpenStack Swift.<br />
<br />
<b>Step 1:</b> External storage facilities are provided by an app known as "External storage support", written by Robin Appelman and Michael Gapczynski, which ships with ownCloud and is available on the apps dashboard. It is disabled by default; we need to enable it.<br />
<br />
<b>Step 2:</b> We need to go to Admin page of the ownCloud installation and locate "External Storage" configuration area. We'll select "OpenStack Swift" from the drop down menu.<br />
<br />
<b>Step 3:</b> We need to fill in the details and credentials. We'll need the following information:<br />
<ul style="text-align: left;">
<li>Folder Name: A user friendly name for the storage mount point.</li>
<li>user: Username of the Swift user (required)</li>
<li>bucket : Bucket can be any random string (required). It is a container where all the files will be kept.</li>
<li>region: Region (optional for OpenStack Object Storage).</li>
<li>key: API Key (required for Rackspace Cloud Files). This is not required for OpenStack Swift. Leave it empty.</li>
<li>tenant: Tenant name (required for OpenStack Object Storage). Tenant name would be the same tenant of which the Swift user is a part of. It is created using OpenStack Keystone.</li>
<li>password: Password of the Swift user (required for OpenStack Object Storage)</li>
<li>service_name: Service Name (required for OpenStack Object Storage). This is the same name which was used while creating the Swift service</li>
<li>url: URL of identity endpoint (required for OpenStack Object Storage). It is the Keystone endpoint against which authorization will be done.</li>
<li>timeout: Timeout of HTTP requests in seconds (optional)</li>
</ul>
<br />
Just to get a better hold on things, check out the image of an <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9vfNhMm49UNWtq28FxwdptuauM_CJWK_MKb6RSxEujFgNNNUBeG9AM7j_50w_Xa8WnK-dcblGPh4uPFHrLZiVaeB9ozfCpSm-FNb58N0WEKh47rZE4-noXGSk5sO-I9HCUpe2pDo5uWE/s1600/ownCloud_OpenStack_Swift1.png" target="_blank">empty configuration</a> form and here is a <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_m2oGU5Vzf2sA6kGDOK-AFvUXpTr0H4R4UqV6XuMf6iMgVYs2gPGJOsKraduHzvCQZ3b3EiRd9SefMk12qLJiKjZ7PndA6oEJl14kqpbe3e2Vn5P1cnYqZtM5IfYUyrsr9hFLzUR07ZY/s1600/ownCloud_OpenStack_Swift2.png" target="_blank">filled up</a> one.<br />
<br />
Notice that if ownCloud is successfully able to connect and authorize, then a green circle appears on the left side of the configuration. In case things don't work out as expected, check out owncloud.log in the data directory of the ownCloud instance.<br />
<br />
That is it. ownCloud is now ready to use OpenStack Swift to store data.</div>
<b>OpenStack 101: How to Setup OpenStack Swift (OpenStack Object Storage Service)</b> (2014-01-12)<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
In this tutorial we'll set up <a href="http://docs.openstack.org/developer/swift/" target="_blank">OpenStack Swift</a>, which is the object store service. Swift can be used to store data with high redundancy. The nodes in Swift can be broadly classified into two categories:<br />
<ul style="text-align: left;">
<li><b>Proxy Node:</b> This is a public facing node. It handles all the http requests for various Swift operations like uploading, managing and modifying metadata. We can set up multiple proxy nodes and then load balance them using a standard load balancer.</li>
<li><b>Storage Node:</b> This node actually stores the data. It is recommended to keep this node private, accessible only via the proxy node and not directly. Other than the storage service, this node also houses the container service and the account service, which manage the mappings of containers and accounts respectively. </li>
</ul>
<div>
For a small-scale setup, both the proxy and storage node can reside on the same machine, but avoid doing so for a bigger setup.</div>
<div>
<br /></div>
<b>Step 1:</b> Let us install all the required packages for Swift:<br />
<code># yum install openstack-swift openstack-swift-proxy openstack-swift-account openstack-swift-container openstack-swift-object memcached</code><br />
<br />
<b>Step 2:</b> Attach a disk to be used for storage, or chop off some space from the existing disk.<br />
Using additional disks:<br />
Most likely this is done when there is a large amount of data to be stored. <a href="http://en.wikipedia.org/wiki/XFS" target="_blank">XFS</a> is the recommended filesystem and is known to work well with Swift. If the additional disk is attached as /dev/sdb, then the following will do the trick:<br />
<code>
# fdisk /dev/sdb<br />
# mkfs.xfs /dev/sdb1<br />
# echo "/dev/sdb1 /srv/node/partition1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
<br />
# mkdir -p /srv/node/partition1<br />
# mount /srv/node/partition1<br />
<br />
</code>
Chopping off disk space from an existing disk:<br />
We can chop off space from existing disks as well. This is usually done for smaller installations or at the "proof-of-concept" stage. We can use XFS like before, or we can use <a href="http://en.wikipedia.org/wiki/Ext4" target="_blank">ext4</a> as well.<br />
<code>
# truncate --size=2G /tmp/swiftstorage<br />
# DEVICE=$(losetup --show -f /tmp/swiftstorage)<br />
# mkfs.ext4 $DEVICE<br />
# mkdir -p /srv/node/partition1<br />
# mount $DEVICE /srv/node/partition1 -t ext4 -o noatime,nodiratime,nobarrier,user_xattr<br />
<br />
</code>
<b>Step 3 (optional):</b> Set up <a href="http://en.wikipedia.org/wiki/Rsync" target="_blank">rsync</a> to replicate the objects. If replication or redundancy is not required, this step can be skipped. The rsync daemon reads its configuration from /etc/rsyncd.conf; add the following to it:<br />
<div>
<code>
uid = swift<br />
gid = swift<br />
log file = /var/log/rsyncd.log<br />
pid file = /var/run/rsyncd.pid<br />
address = <storage_local_net_ip><br />
<br />
[account]<br />
max connections = 2<br />
path = /srv/node/<br />
read only = false<br />
lock file = /var/lock/account.lock<br />
<br />
[container]<br />
max connections = 2<br />
path = /srv/node/<br />
read only = false<br />
lock file = /var/lock/container.lock<br />
<br />
[object]<br />
max connections = 2<br />
path = /srv/node/<br />
read only = false<br />
lock file = /var/lock/object.lock<br />
</code></div>
</div>
<br />
Note that there can be multiple account, container and object sections if we wish to use multiple disks or partitions.<br />
Enable rsync in the defaults file and start the service:<br />
<code>
# vim /etc/default/rsync<br />
RSYNC_ENABLE = true
<br />
# service rsync start<br />
<br />
</code>
<b>Step 4:</b> Set up the proxy node. The default config shipped with Fedora 20 is good with minor changes. Open /etc/swift/proxy-server.conf and edit the [filter:authtoken] section as below:<br />
<code>
[filter:authtoken]<br />
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory<br />
admin_tenant_name = admin<br />
admin_user = admin<br />
admin_password = ADMIN_PASS<br />
auth_host = 127.0.0.1<br />
auth_port = 35357<br />
auth_protocol = http<br />
signing_dir = /tmp/keystone-signing-swift<br />
</code><br />
<div>
Keep in mind that the admin token, admin_tenant_name and admin_user should be the same as those used while setting up <a href="http://docs.openstack.org/developer/keystone/" target="_blank">Keystone</a>. If you have not installed and set up Keystone already, then check out <a href="http://blog.adityapatawari.com/2014/01/openstack-101-how-to-setup-openstack.html" target="_blank">this tutorial</a> before you proceed.</div>
<div>
<br /></div>
<b>Step 5:</b> Now we will create the rings. Rings are mappings between the storage node components and the actual physical drives. Note that the create commands below have three numeric parameters at the end. The first parameter is the partition power: the ring will be built with 2^power Swift partitions (not the same as disk partitions). A higher number of partitions ensures even distribution, but it also puts a higher strain on the server, so we have to find a good trade-off. The rule of thumb is to create about 100 Swift partitions per drive, for which the first parameter would be 7 (2^7 = 128, closest to 100). The second parameter defines the number of copies to keep for the sake of replication. For a small instance with no rsync, set it to one, but three is recommended. The last number is the minimum time in hours before a partition can be moved again; set it to a low number for testing, but 24 is recommended for production instances.<br />
<code># cd /etc/swift<br /># swift-ring-builder account.builder create 7 1 1<br /># swift-ring-builder container.builder create 7 1 1<br /># swift-ring-builder object.builder create 7 1 1<br />
</code><br />
Add the device created above to the ring:<br />
<code># swift-ring-builder account.builder add z1-127.0.0.1:6002/partition1 100<br /># swift-ring-builder container.builder add z1-127.0.0.1:6001/partition1 100<br /># swift-ring-builder object.builder add z1-127.0.0.1:6000/partition1 100<br />
</code>
<br />
Rebalance the ring. This will ensure even distribution and minimal partition moves.<br />
<code># swift-ring-builder account.builder rebalance<br /># swift-ring-builder container.builder rebalance<br /># swift-ring-builder object.builder rebalance<br />
</code>
<br />
Set the owner and the group for the partitions<br />
<code># chown -R swift:swift /etc/swift /srv/node/partition1<br />
</code><br />
<b>Step 6:</b> Create the service and endpoint using Keystone.<br />
<code># keystone service-create --name=swift --type=object-store --description="Object Store Service"<br />
+-------------+----------------------------------+<br />
| Property | Value |<br />
+-------------+----------------------------------+<br />
| description | Object Store Service |<br />
| id | b230a3ecd12e4a52954cb24502be9d07 |<br />
| name | swift |<br />
| type | object-store |<br />
+-------------+----------------------------------+<br />
</code><br />
<div>
Copy the id from the output of the command above and use it to create the endpoint.</div>
<div>
</div>
<code># keystone endpoint-create --region RegionOne --service_id b230a3ecd12e4a52954cb24502be9d07 --publicurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --adminurl http://127.0.0.1:8080/v1 --internalurl http://127.0.0.1:8080/v1</code>
<br />
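As an aside, instead of copying the id by hand, you can capture it in a shell variable and reuse it (a small convenience sketch; it assumes exactly one service named swift is registered):<br />
<code># SWIFT_ID=$(keystone service-list | awk '/ swift / {print $2}')<br /># echo $SWIFT_ID<br />b230a3ecd12e4a52954cb24502be9d07</code><br />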
<div>
<br /></div>
<div>
<b>Step 7: </b>Start the services and test it:</div>
<code># service memcached start<br /># for srv in account container object proxy ; do sudo service openstack-swift-$srv start ; done<br /># swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K ADMIN_PASS stat<br />
Account: AUTH_939ba777082a4f988d5b70dc886459e3<br />
Containers: 0<br />
Objects: 0<br />
Bytes: 0<br />
Content-Type: text/plain; charset=utf-8<br />
X-Timestamp: 1389435011.63658<br />
X-Put-Timestamp: 1389435011.63658<br />
</code><br />
Upload a file abc.txt to a Swift container myfiles like this:<br />
<code># swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K ADMIN_PASS upload myfiles abc.txt</code>
<br />
<code><br /></code>
<br />
<div>
OpenStack Swift is now ready to use.</div>
</div>
Aditya Patawarihttp://www.blogger.com/profile/04110480749979714191noreply@blogger.com4tag:blogger.com,1999:blog-5556854748152045563.post-11067340853748751612014-01-11T10:22:00.000-08:002014-01-11T10:29:17.511-08:00OpenStack 101: How to Setup OpenStack Keystone (OpenStack Identity Service)<div dir="ltr" style="text-align: left;" trbidi="on">
<a href="http://docs.openstack.org/developer/keystone/">OpenStack Keystone</a> is an identity and authorization service. Before we can do anything with the other OpenStack components, we have to authenticate ourselves, and only then can the operation proceed. Let us get acquainted with some terminology before we proceed.<br />
<ul style="text-align: left;">
<li>Token: An alphanumeric string which allows access to a certain set of services depending upon the access level (role) of the user.</li>
<li>Service: An OpenStack service like Nova, Swift and Keystone itself.</li>
<li>Tenant: A group of users. </li>
<li>Endpoint: A URL (may be private) used to access the service.</li>
<li>Role: The authorization level of a user.</li>
</ul>
<div>
Let us go ahead and build the Keystone service for our use.</div>
<div>
<br /></div>
<div>
<b>Step 1:</b> Fedora 20 has OpenStack Havana in its repositories, so installing it is not a pain at all. Additionally, we need MySQL (replaced by MariaDB in Fedora 20) where Keystone will save its data.</div>
<div>
<code># yum install openstack-utils openstack-keystone mysql-server</code></div>
<br />
<div>
<div>
<b>Step 2:</b> Once the packages above are installed, we need to set a few things in the keystone config. Find the following lines and edit them to look like these:</div>
<div>
<code># vim /etc/keystone/keystone.conf</code></div>
<div>
<code>[DEFAULT]</code></div>
<div>
<code>admin_token = <i>ADMIN_TOKEN</i></code></div>
<div>
<code>.</code></div>
<div>
<code>.</code></div>
<div>
<code>.</code></div>
<div>
<code>[sql]</code></div>
<div>
<code>connection = mysql://keystone:<i>KEYSTONE_DBPASS</i>@127.0.0.1/keystone</code></div>
<br />
<div>
Note that <i>ADMIN_TOKEN</i> and <i>KEYSTONE_DBPASS</i> should be long and difficult to guess. Remember that <i>ADMIN_TOKEN</i> is the almighty token which will have full access to create and destroy users and services. Also, several tutorials and the official docs use the command <code>openstack-config --set /etc/keystone/keystone.conf</code> to make the changes that we just made manually. I do not recommend using that command. It created duplicate sections and entries for me, which can be confusing down the line. </div>
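If you suspect that openstack-config (or repeated hand edits) has already littered the file, a quick way to spot duplicate sections is to list all section headers (purely diagnostic; it changes nothing):<br />
<code># grep -n '^\[' /etc/keystone/keystone.conf</code><br />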
<div>
<b><br /></b>
<b>Step 3:</b> Secure MySQL/MariaDB and set the root password (only required on the first run of MySQL). </div>
<div>
<code># mysql_secure_installation</code></div>
<div>
<br /></div>
<div>
Now we need to create the required database and tables for Keystone to work. The command below will do that for us. It will ask for the MySQL root password in order to create the keystone user and the database.<br />
<code># openstack-db --service keystone --init --password KEYSTONE_DBPASS</code></div>
<div>
<br />
<b>Step 4:</b> Create the signing keys and certificates for the tokens.</div>
<div>
<code>
# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone</code></div>
</div>
<div>
<br />
<b>Step 5:</b> Set the file owners, just in case something messed up and start the service.<br />
<code>
# chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log<br />
# service openstack-keystone start<br />
# chkconfig openstack-keystone on<br />
</code>
</div>
<div>
<br />
<b>Step 6:</b> Set up the required environment variables. This will save us the effort of supplying all the information every time a Keystone command is executed. Note that by default the Keystone admin port is 35357. This can be changed in <span style="font-family: monospace;">/etc/keystone/keystone.conf</span>.</div>
<div>
<code># cat > ~/.keystonerc <<EOF</code></div>
<code>
</code>
<br />
<div>
<code>> export OS_SERVICE_TOKEN=ADMIN_TOKEN</code></div>
<code>
</code>
<div>
<code>> export OS_SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0</code></div>
<code>
<div>
> export OS_USERNAME=admin</div>
<div>
> export OS_PASSWORD=ADMIN_PASS</div>
<div>
> export OS_TENANT_NAME=admin</div>
<div>
> export OS_AUTH_URL=http://127.0.0.1:35357/v2.0</div>
<div>
> EOF</div>
<div>
# . ~/.keystonerc<br />
<br /></div>
</code><b>
Step 7:</b> Create the tenants, users and the Keystone service with its endpoint.<br />
Creating the tenant:<br />
<div>
<code># keystone tenant-create --name=admin --description="Admin Tenant"</code></div>
<div>
<br />
Creating the admin user:</div>
<div>
<code># keystone user-create --name=admin --pass=ADMIN_PASS --email=admin@example.com</code><br />
<br />
Creating and adding admin user to admin role:</div>
<div>
<code># keystone role-create --name=admin</code></div>
<code>
</code>
<div>
<code># keystone user-role-add --user=admin --tenant=admin --role=admin</code></div>
<code>
</code>
<br />
<div>
Creating Keystone service and endpoint:<br />
<code>
# keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"<br />
+-------------+--------------------------------------+<br />
| Property | Value |<br />
+-------------+--------------------------------------+<br />
| description | Keystone Identity Service |<br />
| id | c3dbb8aa4b27492f9c4a663cce0961a3 |<br />
| name | keystone |<br />
| type | identity |<br />
+-------------+--------------------------------------+<br />
<br />
</code>
Copy the id from the command above and use it in the command below:</div>
<div>
<code># keystone endpoint-create --service-id=c3dbb8aa4b27492f9c4a663cce0961a3 --publicurl=http://127.0.0.1:5000/v2.0 --internalurl=http://127.0.0.1:5000/v2.0 --adminurl=http://127.0.0.1:35357/v2.0</code><br />
<br />
<b>Step 8:</b> Test the keystone service.</div>
<div>
<code># unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT</code></div>
<code>
</code>
<div>
<code># keystone --os-username=admin --os-password=ADMIN_PASS --os-auth-url=http://127.0.0.1:35357/v2.0 token-get</code></div>
<code>
</code>
<br />
A token with id, validity and other information will be returned.<br />
<br />
Keystone is up and running. We'll create some services in the next tutorial.
</div>
Aditya Patawarihttp://www.blogger.com/profile/04110480749979714191noreply@blogger.com4tag:blogger.com,1999:blog-5556854748152045563.post-8225298459592944162014-01-11T10:15:00.000-08:002014-01-12T09:15:39.005-08:00OpenStack 101: What is OpenStack?<div dir="ltr" style="text-align: left;" trbidi="on">
<a href="http://www.openstack.org/" target="_blank">OpenStack</a>, in simple words, is an open source project which facilitates building our own cloud computing setup. In other words, it creates an Infrastructure as a Service (IaaS) on our own infrastructure. We can have an <a href="http://aws.amazon.com/" target="_blank">Amazon AWS</a>-like service up and running quite fast and painlessly wherever we want. A lot of effort has gone into ensuring that code written for Amazon AWS can be ported to any OpenStack installation easily.<br />
<br />
Below is a small comparison (not exhaustive) between major OpenStack services and Amazon AWS to give you an idea about the compatibility.<br />
<br />
<center>
<table border="2" bordercolor="#000000" cellpadding="3" cellspacing="3" style="width: 100%px;">
<tbody>
<tr>
<th>OpenStack Service</th>
<th>Amazon AWS Service</th>
</tr>
<tr>
<td>Nova</td><td>EC2</td>
</tr>
<tr>
<td>Cinder</td>
<td>EBS</td>
</tr>
<tr>
<td>Swift</td>
<td>S3</td>
</tr>
<tr>
<td>Keystone</td>
<td>IAM</td>
</tr>
<tr>
<td>Glance</td>
<td>AMI</td>
</tr>
<tr>
<td>Horizon</td>
<td>AWS Web Console</td>
</tr>
<tr>
<td>Neutron</td>
<td>EC2 network components</td>
</tr>
</tbody></table>
</center>
<br />
OpenStack 101 is a tutorial series to simplify using OpenStack and integrating OpenStack with simple applications. It'll help you create OpenStack installations for the "proof-of-concept" stage or for hosting a small IaaS service. For the most part I have tried to keep the tutorials as close to the official documentation as possible. Let me also state this loud and clear: <a href="http://docs.openstack.org/" target="_blank">OpenStack's documentation</a> is really great. If you can, then please go through it. If you are done with the "proof-of-concept" and are going to run production-ready machines, then go through the official documentation. These tutorials will help you get started but are not a replacement for the docs.<br />
<br />
I am going to use <a href="http://www.openstack.org/software/havana/" target="_blank">OpenStack Havana</a> and will run it on <a href="https://fedoraproject.org/" target="_blank">Fedora 20</a> (the latest at the time of writing, January 2014). All the commands and code are well tested before being put up here, but if you see any errors, please point them out to me.<br />
<br />
Contents:<br />
<a href="http://blog.adityapatawari.com/2014/01/openstack-101-what-is-openstack.html" target="">OpenStack 101: What is OpenStack?</a><br />
<a href="http://blog.adityapatawari.com/2014/01/openstack-101-how-to-setup-openstack.html">OpenStack 101: How to Setup OpenStack KeyStone (OpenStack Identity Service)</a><br />
<a href="http://blog.adityapatawari.com/2014/01/openstack-101-how-to-setup-openstack_12.html">OpenStack 101: How to Setup OpenStack Swift (OpenStack Object Storage Service)</a></div>
Aditya Patawarihttp://www.blogger.com/profile/04110480749979714191noreply@blogger.com0tag:blogger.com,1999:blog-5556854748152045563.post-45419957155409496922013-08-24T11:29:00.001-07:002014-01-11T15:33:37.637-08:00Installing ownCloud on Raspberry Pi<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div>
<h3 style="text-align: left;">
<b>Presenting a ready-to-install image of ownCloud for Raspberry Pi </b></h3>
<b style="background-color: #cfe2f3;">A small introduction to ownCloud</b></div>
<span style="background-color: #cfe2f3;"><a href="http://owncloud.org/" target="_blank">ownCloud</a> is an application which enables users to share their data without giving control to any third party posing as a facilitator. While sharing data without losing control is the main objective, ownCloud is much more than that. It can also rapidly sync data, contacts, calendar events etc. from several devices. It can work with several custom backends and it is highly flexible.</span><br />
<div>
<br /></div>
<div>
Many of us have a Raspberry Pi and we love playing with it. In the past I have written posts on <a href="http://blog.adityapatawari.com/2013/05/arch-linux-on-raspberry-pi-running-xfce.html">how to install Arch Linux on it</a> and <a href="http://blog.adityapatawari.com/2013/07/converting-raspberry-pi-into-media.html">how to install OpenELEC to convert the Raspberry Pi into a media center</a>. This time I plan to go a little further: I have made a custom image which comes preinstalled with ownCloud and some tweaks to improve the ownCloud experience on the Raspberry Pi. This image is based on Raspbian Wheezy.<br />
<br /></div>
</div>
Just follow the steps below and you'll be good to go in no time:<br />
<ol style="text-align: left;">
<li>Download the archived image in either <a href="http://sourceforge.net/projects/owncloud-raspberrypi/files/owncloud-raspberrypi-0.2.img.zip/download" target="_blank">zip format</a> (usually for Windows) or <a href="http://sourceforge.net/projects/owncloud-raspberrypi/files/owncloud-raspberrypi-0.2.img.gz/download" target="_blank">gzip format</a> (usually for Linux and Unix-like platforms).<br />Since I am running on Linux, I would download the gzip format.</li>
<li>Extract it and put it on an SD card using dd or any other tool or command. Check out this article on <a href="http://elinux.org/RPi_Easy_SD_Card_Setup" target="_blank">elinux</a> if you need any help with this. Although a 2 GB SD card would be fine, I would recommend using 4 GB or more.<br />I would run the following commands:<br /><code>$ gunzip owncloud-raspberrypi-0.2.img.gz # to extract the gz archive<br />$ sudo dd bs=1M if=owncloud-raspberrypi-0.2.img of=/dev/mmcblk0 # to write to the SD card. /dev/mmcblk0 can be obtained from the output of the df command.</code></li>
<li>Put this SD card in your Raspberry Pi and boot. The default credentials are:<br />user: pi<br />password: owncloud</li>
<li>Run raspi-config and follow the directions to expand the filesystem to enjoy maximum disk space. Reboot, if required.</li>
<li>Run ifconfig to get the ip address of the Raspberry Pi.</li>
</ol>
<br />
That is it. Just open http://&lt;ip_address&gt;/owncloud and create the admin user and explore ownCloud on Raspberry Pi.</div>
<br />
In this image the PHP execution time has been increased to 60 seconds and the upload limit has been bumped up to 500M. Apache is set to allow .htaccess for the protection of the data directory. Also, SSH has been enabled by default.<br />
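For reference, these tweaks map onto standard php.ini directives roughly like this (a sketch, not a dump of the image's actual config; on Raspbian Wheezy the file usually lives at /etc/php5/apache2/php.ini, and post_max_size typically has to be raised along with the upload limit):<br />
<code>max_execution_time = 60<br />upload_max_filesize = 500M<br />post_max_size = 500M</code><br />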
<br />
The official page for the image can be found at <a href="http://www.owncloudbook.com/owncloud-on-raspberry-pi/" target="_blank">ownCloud on Raspberry Pi</a>. A Hacker News discussion is also going on <a href="https://news.ycombinator.com/item?id=6268271" target="_blank">here</a>.<br />
<br />
If you like this image and you are interested in knowing more about ownCloud, then please consider buying my book, <a href="http://www.owncloudbook.com/" target="_blank">Getting Started with ownCloud</a>. It is available from <a href="http://www.amazon.com/gp/product/1782168257/ref=as_li_tf_il?ie=UTF8&amp;camp=1789&amp;creative=9325&amp;creativeASIN=1782168257&amp;linkCode=as2&amp;tag=owncloudbook-20" target="_blank">Amazon.com</a>, <a href="http://www.amazon.co.uk/gp/product/1782168257/ref=as_li_qf_sp_asin_tl?ie=UTF8&camp=1634&creative=6738&creativeASIN=1782168257&linkCode=as2&tag=owncloudbook-21" target="_blank">Amazon.co.uk</a>, <a href="http://www.barnesandnoble.com/w/getting-started-with-owncloud-aditya-patawari/1116059136" target="_blank">Barnes & Nobles</a> and on <a href="http://www.amazon.com/dp/B00E4O3JKQ?tag=owncloudbook-20&creative=384345&linkCode=kin" target="_blank">Kindle</a>.</div>
Aditya Patawarihttp://www.blogger.com/profile/04110480749979714191noreply@blogger.com20tag:blogger.com,1999:blog-5556854748152045563.post-47779707787484781082013-07-29T12:40:00.002-07:002013-08-24T01:49:04.856-07:00Converting Raspberry Pi into Media Center Using OpenELEC (XBMC)<div dir="ltr" style="text-align: left;" trbidi="on">
One of the most discussed and common uses of the Raspberry Pi is to turn it into a media center for music and videos. The Raspberry Pi uses very little power, so it is an ideal device for people who listen to music all the time. Talking about media centers, <a href="http://xbmc.org/" target="_blank">XBMC</a> is a very popular choice with Linux users these days. It is a full-fledged media center operating system, running Linux at its core. It comes packed with all the common codecs and presents a very pleasing user interface and playlists to organize videos and music.<br />
Sounds good? But the catch is that it can be too heavy for a Raspberry Pi. So we need to look for a lighter alternative.<br />
<br />
<a href="http://openelec.tv/" target="_blank">OpenELEC (Open Embedded Linux Entertainment Center)</a> is an appliance, which means that almost everything comes pre-configured. OpenELEC takes an already easy XBMC and makes it even easier to install and maintain. So let us start installing it on our SD card for the Raspberry Pi. The latest version of OpenELEC can be downloaded from their downloads page in .tar.bz2 form. Once downloaded, we need to extract this archive to obtain the .img file.<br />
<code>$ tar -xjvf OpenELEC-RPi.arm-3.0.6.tar.bz2</code><br />
<br />
After the extraction of the img file is done, we need to dd this file onto the SD card. To do this, put the SD card into the right slot in the computer. Run the df command to see if it gets auto-mounted and note the path of the device file. If it does not get auto-mounted then, on a terminal, type <code>ls /dev/mm*</code>. This will list all the memory cards in your system. Once you have this information, run dd to install OpenELEC onto the SD card.<br />
<code># dd bs=4M if=OpenELEC-RPi.arm-3.0.6.img of=/dev/mmcblk0</code><br />
<br />
This may take a few minutes since it performs a byte-by-byte copy of the img file to the SD card. Once we are done with this step, we can just insert the SD card into the slot and fire up the Raspberry Pi. The first thing I noticed is that OpenELEC is quite fast to boot. Now just insert a thumb drive with any music or videos into the Raspberry Pi and use a standard keyboard/mouse to browse and choose the media.<br />
<br />
I had a chance to go through Mikkel Viager's <a href="https://www.packtpub.com/create-media-center-with-openelec-starter/book" target="_blank">Instant OpenELEC Starter</a>. It has a much more detailed explanation of installing and maintaining OpenELEC. It also talks about installation on non-Raspberry Pi platforms and provides handy tips to manage XBMC. I liked the auto-indexing of movies and TV shows and the remote XBMC management using an Android phone.<br />
<br />
Off to watch a movie now! :-D</div>
Aditya Patawarihttp://www.blogger.com/profile/04110480749979714191noreply@blogger.com1tag:blogger.com,1999:blog-5556854748152045563.post-7274872649246522712013-06-18T08:05:00.000-07:002013-06-18T08:13:18.040-07:00Deploying Big Using BitTorrent [Sharing Files Using BitTorrent]<div dir="ltr" style="text-align: left;" trbidi="on">
If you just want to share some files without concern for privacy, please check out this short tutorial on <a href="http://www.bittorrent.com/help/guides/send-files" target="_blank">bittorrent.com</a>. This article will talk a bit about BitTorrent's basic internals and its usage for large code/application deploys.<br />
<br />
<b>Scenario:</b> I have to deploy some application(s) across many co-located data centers. The collective size of the deploy will be of the order of tens of GB.<br />
<br />
<b>Conventional methods like scp, rsync and http fail</b>:<br />
<ul style="text-align: left;">
<li>scp will not resume if it breaks at any point. Every time, I will have to start all over again.</li>
<li>rsync works well with text files, not so well with binaries (it works nonetheless). The amount of CPU it eats is unacceptable though.</li>
<li>http can resume most of the time, but as more servers try to download the application, bandwidth limitations slow down the entire process.</li>
</ul>
<div>
<b>Enter BitTorrent!</b> </div>
<div>
<ul>
<li>Resumes the download every time. No problem if the connection breaks.</li>
<li>Does not eat my CPU.</li>
<li>As more servers download, they can act as seeders and actually increase the collective bandwidth.</li>
</ul>
<div>
Now let us get into the technical details. For a torrent to work, you will need to create a torrent file (also known as a metafile). You'll also need a tracker. The tracker keeps track of which leechers and seeders (collectively known as peers) are present and helps in general coordination by announcing the available peers periodically. Finally, you will need a torrent client which can seed the files you are going to share. </div>
</div>
<div>
Now the problem is that the BitTorrent client is no longer open source. So either you have to get a license from BitTorrent, Inc., which can be very costly (I am not sure), or you can use the older code which was once open source and still works like a charm.</div>
<div>
<br /></div>
<div>
For CentOS/Red Hat/Scientific Linux, you should try the NauLinux School repo:<br />
<code># vim /etc/yum.repos.d/naulinux-school.repo:<br />
[naulinux-school]<br />
name=NauLinux School<br />
baseurl=http://downloads.naulinux.ru/pub/NauLinux/6.2/$basearch/sites/School/RPMS/<br />
enabled=0<br />
gpgcheck=1<br />
gpgkey=http://downloads.naulinux.ru/pub/NauLinux/RPM-GPG-KEY-linux-ink<br />
</code><br />
Install the bittorrent rpm package:<br />
<code># yum --enablerepo=naulinux-school install bittorrent</code></div>
<div>
<br />
For Fedora, you can try downloading the rpm from the Fedora build system <a href="http://koji.fedoraproject.org/koji/packageinfo?packageID=1396" target="_blank">koji</a> and installing it manually.<br />
<code># yum localinstall ./bittorrent-4.4.0-16.fc15.noarch.rpm</code><br />
<br />
Also install mktorrent which will be used to create torrent meta files.<br />
<code># yum install mktorrent</code><br />
<br />
<b>Creating a torrent tracker</b><br />
As I have mentioned before, the tracker is a critical piece of the bittorrent setup. It helps in coordinating between the peers and maintains a list of them. It also keeps a record of all the seeds along with the checksum of the torrent. Needless to say, without a torrent tracker the entire bittorrent setup will fail.<br />
You can setup a tracker for yourself easily. Just run the following command on CentOS:<br />
<code>$ bittorrent-tracker --port 8080 --dfile dstate --logfile tracker.log</code>
<br />
<br />
For Fedora, you can use the bttrack command after installing the bittorent package:<br />
<code>$ bttrack --port 8080 --dfile dstate --logfile tracker.log</code>
<br />
<br />
Alternatively, you can use one of the public trackers like <a href="http://openbittorrent.com/" target="_blank">OpenBitTorrent</a>. This may save you some time.<br />
<br />
<b>Creating a torrent metafile</b><br />
Once we have the tracker up, we need to create the actual torrent file to distribute. A torrent file contains <a href="http://en.wikipedia.org/wiki/Bencode" target="_blank">bencoded</a> data about the files and the announce URL of the tracker along with some other information.<br />
Creating a torrent using mktorrent is easy, but if you prefer a GUI, you can use <a href="http://www.transmissionbt.com/" target="_blank">transmission</a> or any other bittorrent client.<br />
<code>$ mktorrent -a http://tracker.example.com:8080/announce -l 18 -v /path/to/the/app</code>
<br />
<br />
Here -a specifies the tracker's announce URL which we created before. The -l flag specifies the piece size as a power of two (2^18 = 256 KB here), i.e. the size of each chunk of the file which will be transferred at a time, and the -v flag is for verbosity.<br />
<br />
Once the torrent metafile is created, you need to seed the torrent so that other peers can download it. I like to use rtorrent for this:<br />
<code># yum install rtorrent<br />
$ rtorrent <path to the torrent metafile></code><br />
<br />
Here is an easy-to-follow <a href="http://harbhag.wordpress.com/2010/06/30/tutorial-using-rtorrent-on-linux-like-a-pro/" target="_blank">tutorial</a>, if you are more interested in rtorrent.<br />
<br />
<b>Tips for peaceful life</b><br />
There are certain parameters that can be tweaked for better performance. While making the torrent, try adjusting the -l flag to a higher value if you have really good bandwidth. Since my deployment was for a bunch of data centers which have really good bandwidth, I usually set it to 20 (i.e. 1 MB pieces).<br />
<br />
If you do the deploys without taking the machines out of production, it is possible to limit the bandwidth usage of the torrent client. This comes in really handy and helps avoid clogging the network pipes. Check out the tutorials and docs of your torrent client to learn about these controls; an example for rtorrent follows below.<br />
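For instance, rtorrent reads rate caps from ~/.rtorrent.rc (values in KB/s; a sketch using the classic option names, so check your rtorrent version's docs):<br />
<code>download_rate = 10240<br />upload_rate = 10240</code><br />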
<br />
Before initiating the transfer, always make sure that you inform the relevant data center technicians and network operations folks. I did not, the first time, and due to the huge spike in network traffic, one of the data center ops teams thought that we were under some sort of DoS attack and cut off connectivity to all our servers, resulting in a minor service disruption.<br />
<br />
Happy deploying!<br />
<br />
Discuss this post on <a href="https://news.ycombinator.com/item?id=5899660" target="_blank">Hacker News</a>.</div>
</div>
Aditya Patawarihttp://www.blogger.com/profile/04110480749979714191noreply@blogger.com2tag:blogger.com,1999:blog-5556854748152045563.post-36584492417137908612013-05-02T12:58:00.000-07:002013-05-02T13:09:40.572-07:00Arch Linux on Raspberry Pi Running XFCE [Version 2]<div dir="ltr" style="text-align: left;" trbidi="on">
<b><i>I have created a one-liner to install XFCE on Arch Linux, Raspberry Pi. Find it in the last line.</i></b><br />
<br />
I <a href="http://blog.adityapatawari.com/2013/01/arch-linux-on-raspberry-pi.html" target="_blank">wrote</a> about installing XFCE on <a href="https://www.archlinux.org/" target="_blank">Arch Linux</a> running on the Raspberry Pi, but it seems that those instructions are no longer valid for the new version of Arch as hosted on the Raspberry Pi downloads page. So below are the new instructions for installing XFCE on Arch on the Raspberry Pi. Please note that the initial steps are similar for previous versions of Arch too.<br />
<br />
First off, download the latest Arch Linux ARM image from the Raspberry Pi <a href="http://www.raspberrypi.org/downloads" target="_blank">downloads</a> page and unzip it to extract the img file. Once you have the img file, you need to write it to an SD card. You can use the dd command or tools like ImageWriter. There are more options available on the <a href="http://elinux.org/RPi_Easy_SD_Card_Setup" target="_blank">elinux</a> page. Let us use the dd command for now:<br />
<code># dd bs=4M if=~/archlinux-hf-2012-09-18.img of=/dev/mmcblk0<br />
</code><br />
No, the cp command is not supposed to be used here, because cp copies over the file system and we have to work at a much lower level. In case you are wondering how I got the /dev/mmcblk0 bit, I just mounted the SD card and checked the output of the df -h command. If you are using an SD card larger than 2 GB, then I recommend using gparted or a similar tool to expand the file system, since by default it'll be just about 2 GB and the rest of your space will go unused. Once you are done here, insert the SD card into your Pi and fire it up.<br />
Now you can see the awesome black login screen. The default password for the root user is 'root'. Log in as root and initialize the keyring for pacman, the Arch package manager:<br />
<code># pacman-key --init<br />
</code><br />
Some randomness (entropy) would be helpful here, so hit ALT+F2 to go to another tty and execute some random commands like ls, echo, cd etc. Switch back to the previous tty by hitting ALT+F1 and wait till the initialization is done. Now you can update your repositories:<br />
<code># pacman -Syu<br />
</code><br />
Let us install Xorg libraries first:<br />
<code># pacman -S xorg-xinit xorg-server xorg-server-utils xterm</code><br />
This will get us the basic X server and related dependencies.<br />
<br />
Next, we will install XFCE:<br />
<code># pacman -S xfce4</code><br />
The CLI will ask you if you want to install all or only selected packages. I chose to install everything since it looked like the bare minimum anyway, but you can be choosy here.<br />
<br />
Now we may need the display drivers:<br />
<code># pacman -S mesa xf86-video-fbdev xf86-video-vesa<br />
</code><br />
Also we will need a login manager. I use SLiM since it is lightweight:<br />
<code># pacman -S slim<br />
</code><br />
Next we need to enable SLiM and the graphical target (systemd lingo for runlevel 5):<br />
<code>
# systemctl enable slim.service<br />
# systemctl enable graphical.target<br />
</code><br />
And we have to create a .xinitrc in the user's home directory. This file reads the X server configs and starts the XFCE environment:<br />
<code>
# vim ~/.xinitrc </code><br />
<code><br />#!/bin/sh </code><br />
<code>if [ -d /etc/X11/xinit/xinitrc.d ]; then <br />
for f in /etc/X11/xinit/xinitrc.d/*; do <br />
[ -x "$f" ] && . "$f" <br />
done <br />
unset f <br />
fi <br />
exec startxfce4<br />
</code>
<br />
Also, we need a ~/.bash_profile to execute startx, initiating the X server as soon as the user (root in this case) logs in:<br />
<code># vim ~/.bash_profile<br />
[[ -z $DISPLAY && $XDG_VTNR -eq 1 ]] && exec startx<br />
</code><br />
That is it! Reboot and enjoy XFCE on Raspberry Pi.<br />
<br />
To save you some time, I have combined these commands in a small shell script and put it on <a href="https://github.com/adimania/arch-desktop-environments" target="_blank">github</a> (fork it). So now, to install XFCE on your Pi, you need to fire just one command:<br />
<code><br /></code>
<code><b>curl https://raw.github.com/adimania/arch-desktop-environments/master/XFCE-Arch-RPi.sh | bash</b></code><br />
<code><b><br /></b></code>
<br />
Discuss this post on <a href="https://news.ycombinator.com/item?id=5646845" target="_blank">Hacker News</a>.</div>
Aditya Patawarihttp://www.blogger.com/profile/04110480749979714191noreply@blogger.com16tag:blogger.com,1999:blog-5556854748152045563.post-4139919610967889122013-03-28T13:42:00.000-07:002013-03-31T11:06:46.119-07:00All About inodes, Hard Links and Soft Links<div dir="ltr" style="text-align: left;" trbidi="on">
Open your terminal and fire "ls -i" and you will see that each file is associated with a number.<br />
<code>$ ls -i</code><br />
<code>2889973 users.sh 2889972 fedoraRepo.sh 2889970 sfs.sh <br />
2889969 bigFile.sh 2889971 dbBackup.sh 2889714 tree-clone.py
</code>
<br />
<code><br /></code>
Ever wondered what this number is?<br />
Ever thought what happens when a file is deleted?<br />
How does the system knows the owner of the file or it's last modification time?<br />
What are hard links?<br />
What is the difference between hard links and soft links?<br />
<br />
I'll try to answer these questions and probably more, but first I want to stress a point: everything in Linux is a file, including the directories and the attached devices.<br />
Also install a package called sleuthkit using yum or apt to obtain a tool called <a href="http://www.sleuthkit.org/sleuthkit/man/istat.html" target="_blank">istat</a>.<br />
<br />
When a filesystem (considering ext3/<a href="http://kernelnewbies.org/Ext4" target="_blank">ext4</a> for now) is created on a disk, a special data structure is created. We'll call this the inode table (technically it is an array of structures). It is indexed from 1 to n, where n is the maximum number of inodes in the filesystem. Details like the maximum number of inodes are usually decided while creating the filesystem. We can run "df -i" to check the number of inodes used and available.<br />
Now whenever we create a file or a directory, an unallocated inode number is assigned to it, and this is where several details about the file or the directory are stored. The POSIX standard requires an inode to contain the following information (borrowed from <a href="http://en.wikipedia.org/wiki/Inode" target="_blank">Wikipedia</a>):<br />
<ul style="text-align: left;">
<li>The size of the file in bytes.</li>
<li>Device ID (this identifies the device containing the file).</li>
<li>The User ID of the file's owner.</li>
<li>The Group ID of the file.</li>
<li>The file mode which determines the file type and how the file's owner, its group, and others can access the file.</li>
<li>Additional system and user flags to further protect the file (limit its use and modification).</li>
<li>Timestamps telling when the inode itself was last modified (ctime, inode change time), the file content last modified (mtime, modification time), and last accessed (atime, access time).</li>
<li>A link count telling how many hard links point to the inode.</li>
<li>Pointers to the disk blocks that store the file's contents (see inode pointer structure).</li>
</ul>
<div>
Notice that this does not include the filename. Surprised? In fact, inodes never hold that information. So an obvious question arises: when we open a file, how does the system know which inode it is associated with? To understand this, we need to understand what exactly a directory is. As I have mentioned, everything in Linux is a file, which implies that even a directory is a file. Every directory consists of a list of entries, each of which pairs an inode number with a file name. So when we try to perform any operation on a file, a traversal is performed to look up the inode number against that file name, and that is how the inode is obtained. See the quick demonstration below.<br />
<br />
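You can actually see both halves of this: a directory is a file with an inode of its own, and each of its entries pairs an inode number with a name (the numbers will of course differ on your system):<br />
<code>$ ls -id /etc # the directory's own inode<br />$ ls -i /etc # inode number stored against each entry</code><br />
<br />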
Among the information contained in an inode, a link count and a size are maintained. When we delete a file, the link count is decremented; once it reaches zero, the inode is deallocated and the size is also set to zero. See the example:<br />
<code>
# ls -i abc<br />
2891791 abc<br />
<br />
# istat /dev/sda5 2891791<br />
inode: 2891791<br />
Allocated<br />
Group: 353<br />
Generation Id: 3534721592<br />
uid / gid: 1000 / 1000<br />
mode: rrw-rw-r--<br />
Flags: <br />
size: 6<br /><b>
num of links: 1</b><br />
<br />
Inode Times:<br />
Accessed: 2013-03-29 01:16:46 (IST)<br />
File Modified: 2013-03-29 01:16:46 (IST)<br />
Inode Modified: 2013-03-29 01:16:46 (IST)<br />
<br />
Direct Blocks:<br />
127754 <br />
<br />
# ln abc def<br />
# istat /dev/sda5 2891791<br />
inode: 2891791<br />
Allocated<br />
Group: 353<br />
Generation Id: 3534721592<br />
uid / gid: 1000 / 1000<br />
mode: rrw-rw-r--<br />
Flags: <br />
size: 6<br /><b>
num of links: 2</b><br />
<br />
Inode Times:<br />
Accessed: 2013-03-29 01:18:41 (IST)<br />
File Modified: 2013-03-29 01:16:46 (IST)<br />
Inode Modified: 2013-03-29 01:18:34 (IST)<br />
<br />
Direct Blocks:<br />
127754 <br />
<br />
# rm abc<br />
# istat /dev/sda5 2891791<br />
inode: 2891791<br />
Allocated<br />
Group: 353<br />
Generation Id: 3534721592<br />
uid / gid: 1000 / 1000<br />
mode: rrw-rw-r--<br />
Flags: <br />
size: 6<br /><b>
num of links: 1</b><br />
<br />
Inode Times:<br />
Accessed: 2013-03-29 01:18:41 (IST)<br />
File Modified: 2013-03-29 01:16:46 (IST)<br />
Inode Modified: 2013-03-29 01:18:57 (IST)<br />
<br />
Direct Blocks:<br />
127754 </code><br />
<code><br /></code>
So the "num of links" increased by one when we created a hard link using the ln command. When we deleted the file using the rm command, the "num of links" decreased by one. If we delete the def file as well, then the count and the size will be set to zero.<br />
<code><br /></code>
<code>
# istat /dev/sda5 2891791<br />
inode: 2891791<br /><b>
Not Allocated</b><br />
Group: 353<br />
Generation Id: 3534721592<br />
uid / gid: 1000 / 1000<br />
mode: rrw-rw-r--<br />
Flags: <br /><b>
size: 0<br />
num of links: 0</b><br />
<br />
Inode Times:<br />
Accessed: 2013-03-29 01:18:41 (IST)<br />
File Modified: 2013-03-29 01:30:10 (IST)<br />
Inode Modified: 2013-03-29 01:30:10 (IST)<br />
Deleted: 2013-03-29 01:30:10 (IST)<br />
<br />
Direct Blocks:</code><br />
<code><br />
</code>
This brings us to our next topic of discussion: what are hard links and what are soft links? Simply put, a hard link of a file points to the same inode as that file, whereas a soft link is just a reference to another file name. If we delete the original file but have a hard link to it, then we can still access the contents using the hard link. If we delete the original file, the soft link is pretty much useless, since all it did was point to the original name which held the inode. A crude way to depict what I am saying is below. See how both "Original Name" and "Hard Link" point to the inode but "Soft Link" does not.<br />
Original Name ---------> inode <--------- Hard Link<br />
Soft Link ----------> Original Name -----------> inode<br />
<br />
Now, soft links have their own importance. We cannot use hard links to point to files across different filesystems, but with soft links we can. They also come in really handy when you want to maintain one name regardless of version differences. See how /usr/bin/python actually points to another binary.<br />
<code><br /></code>
<code>
# ls -l /usr/bin/python<br />
lrwxrwxrwx. 1 root root 7 Jan 9 22:45 /usr/bin/python -> python2<br />
</code>
<br />
<code><br /></code>
<br />
Honestly, there are many more creative uses of links. If you are interested, then I recommend that you check out how BusyBox implements a lot of commands using a single binary.<br />
(Hint: $0 is the name of the script which is passed as a variable to the binary)<br />
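BusyBox does this in C by dispatching on argv[0], but the same trick can be sketched in a few lines of shell (purely illustrative; the applet names are made up):<br />
<code>#!/bin/sh<br /># multi.sh: act differently depending on the name used to invoke us<br />case "$(basename "$0")" in<br />  hello) echo "Hello!" ;;<br />  bye) echo "Goodbye!" ;;<br />  *) echo "unknown applet: $0" ;;<br />esac</code><br />
Hard link it under different names and each name behaves like its own command:<br />
<code>$ chmod +x multi.sh<br />$ ln multi.sh hello ; ln multi.sh bye<br />$ ./hello<br />Hello!</code><br />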
<br />
<b>Update</b>: As mentioned by <a href="https://news.ycombinator.com/user?id=reirob" target="_blank">reirob</a> on <a href="https://news.ycombinator.com/item?id=5457489" target="_blank">HackerNews</a>, there is a particular case where deleting the file does not immediately free the inode. This usually happens when a process still has the file open (for example, writing to it). I have encountered this a few times myself when I delete a log file but the server writing to it hasn't been restarted.</div>
</div>
Aditya Patawarihttp://www.blogger.com/profile/04110480749979714191noreply@blogger.com10tag:blogger.com,1999:blog-5556854748152045563.post-35009943211223822592013-01-13T19:56:00.000-08:002013-05-02T12:59:21.481-07:00Arch Linux on Raspberry Pi Running XFCE<div dir="ltr" style="text-align: left;" trbidi="on">
<b><i>Instructions in this post are no longer valid. Please find the updated post <a href="http://blog.adityapatawari.com/2013/05/arch-linux-on-raspberry-pi-running-xfce.html" target="_blank">here</a>.</i></b><br />
<br />
I recently got a Raspberry Pi from the RS online store. I wanted one so badly, and it took so long before I got to play with it, that by the time it arrived I was pretty much drooling over it. I started off by installing <a href="http://www.raspbian.org/" target="_blank">Raspbian</a>, which worked out of the box (what fun is that! :( ). I then moved on to try Arch, and the fun began. The Arch Linux install guide at <a href="http://elinux.org/ArchLinux_Install_Guide" target="_blank">elinux</a> is pretty good, but it only helps you get a bare-bones Arch up and running. After that you are on your own. So here I am going to discuss how I managed to get Arch up and running with XFCE, a login manager and a web browser.<br />
<br />
First off, download Arch Linux from the Raspberry Pi <a href="http://www.raspberrypi.org/downloads" target="_blank">downloads</a> page. The Raspberry Pi's processor is ARMv6, so you cannot just use any Arch variant. Once you are done with the download, you need to extract it and transfer the .img file to an SD card. Either use the dd command for this or use a tool like ImageWriter. Check out <a href="http://elinux.org/RPi_Easy_SD_Card_Setup" target="_blank">elinux</a> for more choices. I'll use the dd command here:<br />
<code># dd bs=4M if=~/archlinux-hf-2012-09-18.img of=/dev/mmcblk0</code><br />
<br />
No, the cp command is not supposed to be used here, because cp copies over the file system and we have to work at a much lower level. In case you are wondering how I got the /dev/mmcblk0 bit, I just mounted the SD card and checked the output of the df -h command. If you are using an SD card larger than 2 GB, then I recommend using gparted or a similar tool to expand the file system, since by default it'll be just about 2 GB and the rest of your space will go unused. Once you are done here, insert the SD card into your Pi and fire it up.<br />
<br />
Now you can see the awesome black login screen. The default password for the root user is 'root'. Log in as root and initialize the keyring for pacman, the Arch package manager:<br />
<code># pacman-key --init</code><br />
Some randomness (entropy) would be helpful here, so hit ALT+F2 to go to another tty and execute some random commands like ls, echo, cd etc. Switch back to the previous tty by hitting ALT+F1 and wait till the initialization is done. Now you can update your repositories:<br />
<code># pacman -Syu</code><br />
<br />
Now first we will install the xorg libraries:<br />
<code># pacman -S xorg-xinit xorg-server xorg-server-utils</code><br />
This will install the X server and pull some common dependencies.<br />
<br />
To install XFCE now, fire:<br />
<code># pacman -S xfce4</code><br />
It'll ask you to "Enter a selection" after listing some packages. I installed all of them, since they looked quite necessary (Thunar, the top panel etc.), but you can be choosy if you want.<br />
<br />
Is your GUI working? You may be missing display drivers. Install them:<br />
<code># pacman -S mesa xf86-video-fbdev xf86-video-vesa</code><br />
<br />
We still need a login manager. I used <a href="http://slim.berlios.de/" target="_blank">SLiM</a>, the Simple Login Manager. Remember, it is a Pi, so we are trying to keep everything lightweight.<br />
<code># pacman -S slim</code><br />
<br />
Reboot after this and you will be shown a GUI interface to enter your user id and password to log in. Do that and open a terminal. We'll install a web browser now. You might be tempted to install Firefox or Chrome, but remember, this is ARMv6 and none of the mainstream browsers support this architecture out of the box. So either you can compile a binary from the Firefox or Chromium code, or install a browser like Midori or Arora. I installed Midori because I am more familiar with it.<br />
<code># pacman -S midori</code><br />
<br />
That is it. You have Arch in a quite usable state with a working XFCE. Have fun!<br />
<br />
PS: The memory footprint with XFCE up and running is about 140 MB on my Pi.</div>
Aditya Patawarihttp://www.blogger.com/profile/11007675457270523326noreply@blogger.com19tag:blogger.com,1999:blog-5556854748152045563.post-63170265202662049962012-11-05T08:53:00.000-08:002012-11-05T08:53:00.188-08:00Testing Network And TCP Optimizations<div dir="ltr" style="text-align: left;" trbidi="on">
This post is more like a "note to self" for certain TCP parameters which I usually modify (or plan to modify) on production servers.<br />
<br />
Some good to know terms:<br />
<ul style="text-align: left;">
<li><a class="zem_slink" href="http://en.wikipedia.org/wiki/Round-trip_delay_time" rel="wikipedia" target="_blank" title="Round-trip delay time">Round Trip Time</a> (RTT): It is the time taken by a packet from source machine to reach destination and come back. You can use ICMP ping to get the RTT.</li>
<li><a class="zem_slink" href="http://en.wikipedia.org/wiki/Latency_%28engineering%29" rel="wikipedia" target="_blank" title="Latency (engineering)">Latency</a>: The time from the source sending a packet to the destination receiving it. This is often confused with RTT. Clarify what you are talking about before interpreting anything.</li>
<li><a class="zem_slink" href="http://en.wikipedia.org/wiki/Bandwidth-delay_product" rel="wikipedia" target="_blank" title="Bandwidth-delay product">Bandwidth Delay Product</a> (BDP): It is the amount of data that can be in transit in the network or simply the product of link bandwidth and RTT.</li>
</ul>
Say you want to test your app or benchmark hardware you just bought. The first thing you need to do is to add it to the network; even a <a class="zem_slink" href="http://en.wikipedia.org/wiki/Local_area_network" rel="wikipedia" target="_blank" title="Local area network">local network</a> will do. Please avoid wireless networks, because RTT varies a lot on a wireless network and it becomes difficult to tell whether the hardware is at fault or the wireless.<br />
<br />
<b>Adding Latency Or RTT Delay To The Network</b><br />
If you are serious about testing hardware then you may need to test at various RTT/latency values to evaluate the experience of your customers from various locations across the world. To introduce this RTT delay you can use the Network Emulator, or simply netem, and fire the following command:<br />
<code><br /></code>
<code>tc qdisc add dev eth0 root netem delay 100ms</code><br />
<br />
The command above will introduce an RTT delay of 100ms on the eth0 interface. Now you can play around with it to check various values of RTT delay. When you are done, remove the delay by deleting the rule.<br />
<code><br /></code>
<code>tc qdisc del dev eth0 root</code><br />
<br />
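To double-check what is currently applied on an interface, list its queueing disciplines (a harmless, read-only check):<br />
<code>tc qdisc show dev eth0</code><br />
<br />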
An awesome tutorial on netem can be found at <a href="http://www.linuxfoundation.org/collaborate/workgroups/networking/netem" target="_blank">LinuxFoundation.org</a>. The netem <a href="https://lists.linux-foundation.org/pipermail/netem/" target="_blank">mailing list archives</a> might help with debugging in several cases.<br />
<br />
<b>Server Setup For Testing</b><br />
If you plan to test the hardware then I suggest running a simple, no-frills http server on it, like Python's single-threaded server. Using scp for testing is not a good idea, since OpenSSH itself has some application-level congestion-controlling mechanisms. To run Python's single-threaded server, fire the following commands on your terminal:<br />
<code><br /></code>
<code>cd <<i>doc_root_of_http_server</i>><br />
python -m SimpleHTTPServer</code><br />
<br />
Make sure a large file is present in the document root of the server and that curl or wget is present on the client. Do not use any browser or download manager to download from the server; a way to create such a file is shown below.<br />
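If you need to create such a file, dd does the job (a 1 GB file of zeros here; adjust count to taste):<br />
<code>dd if=/dev/zero of=bigfile.bin bs=1M count=1024</code><br />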
Of course, if you are testing your app then the above might not be applicable to you. In that case set up the server and client depending upon your app.<br />
<br />
<b>Testing and Recording The Defaults</b><br />
Recording the defaults is important in case you need to revert anything. A full backup can be obtained easily with the <a class="zem_slink" href="http://en.wikipedia.org/wiki/Sysctl" rel="wikipedia" target="_blank" title="Sysctl">sysctl</a> command:<br />
<code>sysctl -A > sysctl.bak</code><br />
<br />
Now download the file from the server without introducing any latency. This is the baseline performance at 0ms added latency. Then let us start the serious testing and introduce latency: add an RTT delay of 100ms and download the file using curl or wget and note the speed.<br />
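A typical client-side download looks like this (port 8000 is SimpleHTTPServer's default; server.example.com stands in for your test machine, and curl's progress meter reports the average speed):<br />
<code>curl -o /dev/null http://server.example.com:8000/bigfile.bin</code><br />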
<br />
<b>Various TCP Optimizations and Parameters To Check</b><br />
<i>I just found out during this experiment that new kernels have great default settings for TCP; still, cross-checking won't hurt.</i><br />
<i><br /></i>
First and foremost, get acquainted with /proc on your machine, specifically the /proc/sys/net/ directory. I would also encourage you to go through the man page of tcp and understand the parameters.<br />
<br />
<b><i>The changes I am going to suggest depend heavily on kernel version and the distribution. If not done correctly, these changes can degrade networking performance or may harm your machine in any other way. You have been warned. You are on your own.</i></b><br />
<ul style="text-align: left;">
<li>First of all we'll examine whether <a href="http://tools.ietf.org/rfc/rfc2018.txt" target="_blank">TCP selective ack</a> is turned on, and turn it on if it is off. It is a boolean, so just set the value to 1 and you are good to go:<br /><code>sysctl -w net.ipv4.tcp_sack=1</code></li>
<li>We need to make sure that <a href="http://www.ietf.org/rfc/rfc1323.txt" target="_blank">TCP window can scale</a> to utilize maximum buffer possible:<br /><code>sysctl -w net.ipv4.tcp_window_scaling=1</code></li>
<li>Fix the read and write buffers for tcp to an optimum value. It is an array of 3 values which defines minimum, default and maximum values of memory that can be utilized. Also note that this overwrites the values defined for generic (non-tcp) connections in the following files:<br /><br /><code>/proc/sys/net/core/rmem_max<br />/proc/sys/net/core/wmem_max<br />/proc/sys/net/core/rmem_default<br />/proc/sys/net/core/wmem_default</code><br /><br />Setting this is usually heuristic and depends largely on your network. Also with auto scaling on, it can scale up to the maximum value defined. Set it up by using the following command:<br /><code>sysctl -w net.ipv4.tcp_rmem='4096 87380 4194304'<br />sysctl -w net.ipv4.tcp_wmem='4096 16384 4194304'</code><br />Here default memory allocated to receive buffer for <b>each TCP</b> connection would be 87380 bytes and can scale up to 4194304 depending upon the connection. I suggest that you experiment with the values a bit to find the most optimum combination.<br />If you are doing non-tcp optimizations as well then set <code>net.core.rmem_max, net.core.wmem_max, net.core.rmem_default, net.core.wmem_default</code> as well to similar values.</li>
<li>Enable the <a href="http://tools.ietf.org/rfc/rfc1185.txt" target="_blank">TCP time_wait reuse</a>. This would allow the reuse of connections that are in time_wait state. This generally increases performance if your machine is going to make a lot of short lived connections.<br /><code>sysctl -w net.ipv4.tcp_tw_reuse=1</code></li>
<li>The maximum number of concurrent connections can sometimes play a role on servers handling high traffic. This can be estimated by dividing the width of the port range in the file <code>/proc/sys/net/ipv4/ip_local_port_range</code> by the value in <code>/proc/sys/net/ipv4/tcp_fin_timeout</code>. For my system it is (61000-32768)/60, which turns out to be 470. You can increase the range of the ports or you can reduce tcp_fin_timeout, but experiment first before deploying to production. (A one-liner to compute this number is shown after this list.)</li>
</ul>
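Here is that computation as a quick one-liner (it merely does the arithmetic described in the last bullet above; the values will differ per machine):<br />
<code>echo $(( $(awk '{print $2-$1}' /proc/sys/net/ipv4/ip_local_port_range) / $(cat /proc/sys/net/ipv4/tcp_fin_timeout) ))</code><br />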
<div>
There are a lot of other parameters that can be tweaked for higher performance. You can try all of them out but do not march straight into production servers with these tweaks. Experiment in your staging boxes first.</div>
</div>
Aditya Patawarihttp://www.blogger.com/profile/04110480749979714191noreply@blogger.com2