Linux, Unix, and whatever they call that world these days

Setting Up Private Internet Access on CentOS 6.4

Illustration

I am a big fan of Private Internet Access. It gives you an anonymous and secure connection to the Internet. I personally value my privacy and thus I find such a VPN service very valuable.

Under Windows and a huge variety of other platforms (Android, iOS, Ubuntu, …) installation is very simple and it hardly ever fails. But some platforms don’t come with instructions. Unfortunately, one of them is CentOS. Fortunately, setting it all up is not that hard.

First we can do the easy stuff. Download PIA’s OpenVPN configuration files and extract them to a directory of your choice. I kept them in /home/MyUserName/pia.
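
If you prefer the terminal, and assuming the archive is still published under the same name (the exact URL might have changed since this was written), the whole thing looks roughly like this:

mkdir -p /home/MyUserName/pia
cd /home/MyUserName/pia
wget https://www.privateinternetaccess.com/openvpn/openvpn.zip
unzip openvpn.zip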

The next easy step is setting up DNS resolving. For that we go to System, Preferences, Network Connections. Just click Edit on the connection you are using and go to the IPv4 Settings tab. Change Method to Automatic (DHCP addresses only). Under DNS servers enter 209.222.18.222 209.222.18.218 (the Private Internet Access DNS servers).
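
If you would rather skip the GUI, the equivalent can be done by editing the ifcfg file of your connection (ifcfg-eth0 here is just an assumption; use whichever interface you actually edited):

# in /etc/sysconfig/network-scripts/ifcfg-eth0
PEERDNS=no
DNS1=209.222.18.222
DNS2=209.222.18.218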

All other commands are to be executed in a terminal and most of them require root privileges. It might be best if you just become root for a while:

su - root

CentOS repositories are not known for their extensive software collection. But we can always add a repository of our choice:

wget http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm

This repository has the OpenVPN package that we need:

yum install openvpn

The next step is getting the configuration in place (replace username and password with yours):

cp /home/MyUserName/pia/ca.crt /etc/openvpn/ca.crt
cp /home/MyUserName/pia/US\ Midwest.ovpn /etc/openvpn/client.conf
echo "auth-user-pass /etc/openvpn/login.pia" >> /etc/openvpn/client.conf
echo "username" > /etc/openvpn/login.pia
echo "password" >> /etc/openvpn/login.pia

Now we can test our connection (after restarting the network in order to activate the DNS changes):

service network restart
openvpn --config /etc/openvpn/client.conf

Assuming that this last step ended with Initialization Sequence Completed, we just need to verify that this connection is actually used. I found whatismyipaddress.com quite helpful here. If you see some mid-west town on the map, you are golden (assuming that you don’t actually live in the US mid-west).

Now you can stop the test connection with Ctrl+C in order to start it properly as a service. In addition, you can specify that it should start on each system startup:

service openvpn start
chkconfig openvpn on

And that is all.

CentOS 6.4 and VirtualBox Additions

When you install CentOS 6.4 in VirtualBox, you might quite quickly be annoyed by the lack of mouse integration. The usual cure in the form of VirtualBox Guest Additions simply fails with

Building the main Guest Additions module   [FAILED]

Fortunately this message comes with some additional information which points to a lack of compiler and kernel headers. The easiest way to install them is in the terminal:

su - root
yum install gcc
yum install kernel-devel-`uname -r`

After this you can retry the Guest Additions installation and you should see better results.
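
If you are doing this from the terminal, re-running the installer amounts to mounting the Guest Additions CD again and starting the script from it (the mount point below is just an example):

mkdir -p /mnt/cdrom
mount /dev/cdrom /mnt/cdrom
sh /mnt/cdrom/VBoxLinuxAdditions.run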

PS: This method probably works for RedHat also.

PostgreSQL on CentOS

It is not really possible to be a Windows-only developer these days. Chances are that you will end up connecting to one Linux server or another. And the best way to prepare is to have a test environment ready. For cold days there is nothing better than one nice database server.

For this particular installation I (again) opted for a minimal install of CentOS 6.3. I will assume that it is installed (just a bunch of Nexts) and your network interfaces are already set up (e.g. DHCP).

The first step after this is actually installing PostgreSQL:

yum install postgresql postgresql-server
 Complete!

The next step is initializing the database:

service postgresql initdb
 Initializing database:                                     [  OK  ]

Start the service and the basic setup is done:

chkconfig postgresql on
service postgresql start
 Starting postgresql service:                               [  OK  ]

The next step is allowing TCP/IP connections. For that we need to edit postgresql.conf:

su - postgres
vi /var/lib/pgsql/data/postgresql.conf

There we find the listen_addresses and port parameters and un-comment them (along with a small change of the listen_addresses value to '*'):

listen_addresses = '*'
port = 5432

While we are at it, we might as well add all hosts as friends in pg_hba.conf (yes, don’t do this in production):

vi /var/lib/pgsql/data/pg_hba.conf

Add the following line at the bottom:

host    all         all         0.0.0.0/0             trust
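
If you want to be a little less generous, at least restrict access to your test subnet (the subnet here is just an example; adjust it to your network):

host    all         all         192.168.56.0/24       trust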

Finish up editing and restart the service:

exit
 logout
/etc/init.d/postgresql restart
 Stopping postgresql service:                               [  OK  ]
 Starting postgresql service:                               [  OK  ]

A quick check with psql is in order (notice that \q is used to exit):

psql -h 192.168.56.101 -U postgres -d postgres
 psql (8.4.13)
 Type "help" for help.
\q
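
For a slightly more convincing smoke test you can create a throwaway database and query it (the IP address is just the one of my virtual machine):

psql -h 192.168.56.101 -U postgres -d postgres -c "CREATE DATABASE testdb;"
psql -h 192.168.56.101 -U postgres -d testdb -c "SELECT version();"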

If external connections are needed, we must handle the firewall. And the easiest way to do this is disabling it. For a production environment this is a big no-no; for simple testing of a virtual machine it will suffice:

/etc/init.d/iptables stop
chkconfig iptables off
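
Alternatively, if you would rather keep the firewall running, punching a hole for PostgreSQL alone should be enough:

iptables -I INPUT -p tcp --dport 5432 -j ACCEPT
service iptables save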

And with this we are ready to accept clients.

Simplest LDAP Server

One application I am working on needed LDAP authorization support. In order to test it before actually deploying it, I decided to create a local LDAP server in a virtual machine.

I decided to use the CentOS minimal install as a starting point. It is an extremely small distribution to start with and it allows for a virtual machine with only 256 MB of RAM (although it needs 512 MB in order to install, go figure).

Installation of CentOS is uneventful. Just go next, next, next and it is done. Although it might be wise to skip the media check since it takes ages. In a matter of minutes the OS will boot up and then the fun starts.

Since we need network access both for using the machine as an LDAP server and for getting packages off the Internet, getting it to work is the first order of business. It is as easy as running ifup eth0. In order to make this change permanent, just edit /etc/sysconfig/network-scripts/ifcfg-eth0 and change the line starting with ONBOOT to ONBOOT="yes". It is that easy (if you disregard the annoyance of the vi editor).
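
If you want to spare yourself the vi experience, the same change can be done with a one-liner (assuming eth0 is your interface):

sed -i 's/^ONBOOT=.*/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-eth0
ifup eth0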

Now we need to install our directory server. First install the package (answer y to everything):

yum install 389-ds-base

And then run setup (answer yes to first two questions and just use default for others):

setup-ds.pl

That should leave us with values totally unsuitable for anything but testing (which is exactly what we want):

Computer name ...............: localhost.localdomain
System User .................: nobody
System Group ................: nobody
Directory server network port: 389
Directory server identifier .: localhost
Suffix ......................: dc=localdomain
Directory Manager DN ........: cn=Directory Manager

A quick search will prove that our directory server is up and running:

ldapsearch -h 127.0.0.1 -x -b "dc=localdomain"
 ...
 # search result
 search: 2
 result: 0 Success
 # numResponses: 10
 # numEntries: 9

Well, now we are ready to add our first user. In order to do this just create a user.ldif file with the following content:

dn: uid=jdoe,ou=People,dc=localdomain
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
uid: jdoe
cn: John Doe
displayName: John Doe
givenName: John
sn: Doe
userPassword: test

Not all of these attributes are mandatory but I find this to be the minimum acceptable set for my use. This is not enough if you want to use the LDAP server for logons but it is enough for basic password checking. We add the user with:

ldapadd -h 127.0.0.1 -x -D "cn=Directory Manager" -W -f user.ldif
 adding new entry "uid=jdoe,ou=People,dc=localdomain"

If something is messed up, just delete the user and add it again:

ldapdelete -h 127.0.0.1 -x -D "cn=Directory Manager" -W "uid=jdoe,ou=people,dc=localdomain"
ldapadd -h 127.0.0.1 -x -D "cn=Directory Manager" -W -f user.ldif
 adding new entry "uid=jdoe,ou=People,dc=localdomain"

Yes, there is an ldapmodify operation but I find it better to start with a clean slate during testing.
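
For completeness, here is roughly how changing the user’s password with ldapmodify would look (newpassword is obviously just a placeholder):

ldapmodify -h 127.0.0.1 -x -D "cn=Directory Manager" -W <<EOF
dn: uid=jdoe,ou=People,dc=localdomain
changetype: modify
replace: userPassword
userPassword: newpassword
EOF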

Another test to verify that our user authentication works and we are good. The password asked for here is not your root LDAP password but the password of the user (test in my example):

ldapsearch -h 127.0.0.1 -x -D "uid=jdoe,ou=People,dc=localdomain" -W -b "ou=people,dc=localdomain" "uid=jdoe"
 dn: uid=jdoe,ou=People,dc=localdomain
 objectClass: top
 objectClass: person
 objectClass: organizationalPerson
 objectClass: inetOrgPerson
 uid: jdoe
 cn: John Doe
 displayName: John Doe
 givenName: John
 sn: Doe
 search: 2
 result: 0 Success

Congratulations, you have just made your first LDAP authorization.

Since, in its current state, our LDAP server cannot talk with the outside world, we can think of dropping the firewall (not something that you should do in a production environment):

iptables -F INPUT
service iptables save
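
If flushing the whole INPUT chain feels too radical even for a test machine, opening just the LDAP port works as well:

iptables -I INPUT -p tcp --dport 389 -j ACCEPT
service iptables save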

And the last step would be to ensure that our directory server gets started as soon as the machine is booted up:

chkconfig dirsrv on

With this, the LDAP test server configuration is done.

SHA-1 Sum Every File

The easiest way to check whether a file is valid after download is to grab its SHA-1 sum. Most commonly it has the same name as the file but with an additional .sha1 extension (e.g. temp.zip would have its SHA-1 sum in temp.zip.sha1). One annoyance is how to generate all those .sha1 files…
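
For a single file this is a one-liner (storing just the hash, the same format the script below expects):

sha1sum temp.zip | cut -d' ' -f1 > temp.zip.sha1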

To make my life a little bit easier, I made a bash script. This script will go through the given directories and create all SHA-1 sums. I use content and download directories in this case:

#!/bin/bash
# Goes through the given directories, creates missing .sha1 files,
# fixes stale ones, and removes orphaned ones.
for file in ~/public_html/{content,download}/*
do
    if [ -f "$file" ]
    then
        if [ "${file: -5}" != ".sha1" ]
        then
            file1="$file"
            file2="$file1.sha1"
            file1Sum=`sha1sum "$file1" | cut --delimiter=' ' -f 1`
            if [ -e "$file2" ]
            then
                file2Sum=`cat "$file2"`
                if [ "$file1Sum" == "$file2Sum" ]
                then
                    echo "  $file1"             # sum exists and matches
                else
                    echo "X $file1"             # sum exists but differs; rewrite it
                    echo "$file1Sum" > "$file2"
                fi
            else
                echo "+ $file1"                 # no sum yet; create it
                echo "$file1Sum" > "$file2"
            fi
        else
            file1="${file%.sha1}"
            file2="$file"
            if [ ! -e "$file1" ]
            then
                echo "- $file1"                 # orphaned sum; remove it
                rm "$file2"
            fi
        fi
    fi
done

Probably some explanation is in order. The script checks each file in the content and download directories. If a file ends in .sha1 (bottom of the script) and the original file it belongs to no longer exists, the script removes the orphaned sum and logs the action with a minus (-) sign. This serves as clean-up for orphaned SHA-1 sums.

For every other file, the script checks for an existing SHA-1 sum. If there is no sum, the script will just create one and log it with a plus (+) sign. If a sum does exist, the script compares it with the newly generated value. If both match, there is nothing to do; if they do not match, the sum is rewritten and that is logged with an X character.

Example output would be:

  /public_html/download/qtext301.exe
+ /public_html/download/qtext310.exe
X /public_html/download/seobiseu110.exe
- /public_html/download/temp.zip

Here we can see that the sum for qtext301.exe was valid and no action was taken. The sum for qtext310.exe was added and the one for seobiseu110.exe was fixed (its value didn’t match). File temp.zip.sha1 was removed since temp.zip does not exist anymore.

P.S. While this code is not perfect and it might not be the best solution, it does work for me. :)

DD-WRT on WL-330GE

Illustration

I recently bought an Asus WL-330GE wireless router. I needed a small travel router and I needed to run DD-WRT on it. It seemed like a perfect match.

The upgrade to DD-WRT went without a hitch. However, as soon as it booted I noticed that I was getting a DHCP address from my hotel’s server instead of from the router. A quick investigation revealed that there was no WAN port configured. The single port was in my LAN segment and thus it was leaking everything. The solution for that ought to be simple - I just went to Setup -> Networking and changed the WAN assignment there. It seemed to work but, as soon as I rebooted the router, everything went back to the original state. Quite annoying.

Quick googling revealed something that looked quite close to a solution but it didn’t work for me. I could get it to work sometimes but every time after a restart it would put my WAN port back into the default bridge. Not quite what I wanted.

In order to debug this I executed nvram show after a clean install and then I executed it again after everything got working. That gave me the delta that I had to apply. And, as far as bridges go, I decided to manually remove eth0 (the WAN port) from the default bridge.
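
If you want to repeat that exercise, the delta can be captured with something along these lines (diff is not present in every DD-WRT build, so you might need to copy the files off and compare them elsewhere):

nvram show | sort > /tmp/nvram-before.txt
# change the WAN assignment via the web interface, do not reboot yet
nvram show | sort > /tmp/nvram-after.txt
diff /tmp/nvram-before.txt /tmp/nvram-after.txt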

The final result was this start-up script (Administration > Commands):

brctl delif br0 eth0
nvram set lan_ifnames="eth1"
nvram set wan_ifname="eth0"
nvram set wan_ifname2="eth0"
nvram set wan_ifnames="eth0"
nvram set wanup=0
nvram unset dhcpc_done
nvram commit
udhcpc -i eth0 -p /var/run/udhcpc.pid -s /tmp/udhcpc &

The first line just ensures that the WAN port is thrown out of the bridge. All those nvram lines sort out minor differences. The last line starts the DHCP client on the WAN interface. After startup that should produce the bridge state as displayed in the picture. Just what I wanted. :)

The only thing that might look funny afterward is that both the WLAN and LAN interfaces have the same MAC address. To solve this we need to telnet (or ssh) to the machine and execute the following commands:

nvram get lan_hwaddr
nvram get wan_hwaddr
nvram get wl0_hwaddr

Each command will give you the MAC address of the respective interface. In my case this was:

lan_hwaddr: F4:6D:06:94:02:39
wan_hwaddr: F4:6D:06:94:02:39
wl0_hwaddr: F4:6D:06:94:02:3B

From that we can interpolate that wan_hwaddr should be F4:6D:06:94:02:3A (just before wireless and just after LAN). The only thing to do now is to enhance our startup script (somewhere BEFORE nvram commit) with:

nvram set wan_hwaddr=F4:6D:06:94:02:3A
nvram set et0macaddr=F4:6D:06:94:02:3A

This game with MAC addresses is not strictly necessary but I like to set it anyhow.

I tested this on build 14896 (recommended for the Asus WL-330GE in the router database) and on special build 15962 (recommended on the forums as stable).

P.S. Next time remember not to take router advice from a Windows programmer.

IPv6 in Your Local Network Via DD-WRT

Illustration

After sorting out tunneling on my computer, it came time to set up my router too. The idea is not to configure each client with a separate tunnel but to have one tunnel on the router which all connected computers use transparently. Hurricane Electric gives a /64 prefix and that ought to be enough.

As the router I will use my trusty DD-WRT. The exact version used in this example is DD-WRT v24-sp2 (12/08/11) std-nokaid (SVN revision 17990M NEWD-2 Eko). Your mileage may vary depending on the version of your choosing.

The obvious first step is to enable IPv6. It is easy enough to do. Under Administration -> Management find IPv6 support and enable IPv6 and Radvd. Radvd is configured as simply as it can be:

interface br0
{
   AdvSendAdvert on;
   prefix 2001:db8:9:10ee::/64
   {
   };
};

Notice that the prefix is the same text as “Routed /64” under your tunnel details.

Unfortunately this alone will not do. There is a need for a small script:

insmod ipv6

SERVER_IPV4_ADDRESS="216.66.22.2"
SERVER_IPV6_ADDRESS="2001:db8:8:10ee::1"
CLIENT_IPV4_ADDRESS=$(ip -4 addr show dev eth0 | awk '/inet / {print $2}' | cut -d/ -f1)
CLIENT_IPV6_ADDRESS="2001:db8:8:10ee::2"
ROUTED_IPV6_ADDRESS="2001:db8:9:10ee::1"

if [ -n "$CLIENT_IPV4_ADDRESS" ]
then
    ip tunnel add he-ipv6 mode sit remote $SERVER_IPV4_ADDRESS local $CLIENT_IPV4_ADDRESS ttl 255
    ip link set he-ipv6 up
    ip addr add $CLIENT_IPV6_ADDRESS/64 dev he-ipv6
    ip route add ::/0 dev he-ipv6
    ip -6 addr add $ROUTED_IPV6_ADDRESS/64 dev br0
    kill $(ps | awk '/radvd / { print $1}')
    radvd -C /tmp/radvd.conf
    wget "http://ipv4.tunnelbroker.net/ipv4_end.php?ip=AUTO&pass=9fc4d3d26b6ba921226c53e6c664c1ab0&apikey=tb4f139f1c342fgbd4.44123289860&tid=5511235463"
fi

The script sets some variables, brings the tunnel interface up, adds some routes, and restarts the radvd daemon in order to pick up the latest settings. The last line is needed only for users with a dynamic IP (a full explanation can be had once you load the page in a browser). This script needs to be saved with Save Firewall (under Administration -> Commands).
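
Before moving on to the clients you can sanity-check the tunnel from the router’s shell with something like this (whether ping6 is available depends on the busybox build):

ip -6 addr show dev he-ipv6
ip -6 route show
ping6 -c 3 ipv6.google.com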

Once the router gets restarted you will notice that all clients get an IPv6 address alongside IPv4 (they have to support IPv6, of course). The easiest way to check is to run ping -6 ipv6.google.com, or to load ipv6.google.com or one of the many IPv6 test pages in a browser.

Windows 7 works just beautifully with IPv6.

P.S. In case you are wondering where I pulled those IPv6 addresses from, here is what Hurricane Electric gave me:

IPv6 Tunnel Endpoints
    Server IPv4 Address: 216.66.22.2
    Server IPv6 Address: 2001:db8:8:10ee::1/64
    Client IPv4 Address: 174.78.144.123
    Client IPv6 Address: 2001:db8:8:10ee::2/64

Routed IPv6 Prefixes
    Routed /64:          2001:db8:9:10ee::/64

It's Not Natty, It Is Nutty

On my “standard” work day I interact with Unix a lot so it seemed like a logical solution to have Linux on my machine. I used Ubuntu 10.10 and the world was beautiful. Yes, some things did bother me but it worked. And after work I could always enjoy Windows 7.

Two days ago I upgraded to the newest Ubuntu (11.04, nicknamed Natty). After the upgrade, boot seemed a little slow so I decided to do what I should have done the first time - a clean install. I spent one full day on a fresh installation of Nutty and here are some of the problems I had in a span of eight hours:

  • If your secondary monitor is on the left, it is almost impossible to pinpoint the single dot at which the Unity launcher will appear.
  • Turning off auto-hide for the Unity launcher causes it to sit over all maximized windows and hide their left side.
  • Applications closed without warning and without any form of crash dialog. Since this includes even the calculator, I would dare to suggest that the new Unity interface is to blame.
  • Once the system started swapping, it took Ctrl+Alt+F1 and the top command to find that compiz took a whopping 4.5 GB of RAM (out of 6 GB).
  • LibreCalc died every five minutes while editing a relatively simple document. Recovery worked in almost 50% of cases.
  • The laptop could not wake from sleep.
  • The network connection was breaking every half an hour or so.
  • Booting took ages - easily double the time that 10.10 needed.
  • … and do notice that there were a lot of smaller problems still …

I started using Linux in the nineties with Slackware and went through my share of different distributions. I was never completely satisfied and people working with me know that I curse Linux a lot because of small bugs. But this is the first time ever that I was actually unable to do my work. In the end I just rebooted into Windows XP (company issued) and did my work there.

My next step will be to take this Natty piece of shit off the laptop and go back to Ubuntu 10.10.

P.S. I actually liked the search interface in Unity. Unfortunately everything else sucks.

Cleaning Up

What to do if your script needs to kill all processes that it started? Just kill everything that has the same parent as your current shell ($$):

#!/bin/bash
kill `ps -ef | awk '$3 == '$$' {print $2}'`
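
A minimal sketch of how this fits into a script - assuming the children are ordinary background jobs - would be:

#!/bin/bash
# start a few background jobs (purely illustrative)
sleep 300 &
sleep 300 &

# ...do the actual work here...

# kill everything whose parent is this shell
kill `ps -ef | awk '$3 == '$$' {print $2}'` 2>/dev/null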

Extracting Sparse Tar

I had to move some files from Unix. The file was big, I had a small USB drive - one thing led to another…

A gzipped tar was the obvious solution. In addition to that, a friend of mine recommended also using the --sparse argument with it. The theory behind sparse files says that blocks of zeros should be saved extremely efficiently, thus making my file smaller even before the zipping part gets involved. This made my command look like “tar cfzS somefile.tar.gz somefile”. It all worked as advertised.

The next day I got to extract this on Windows. My trusty WinRAR had no idea how to proceed. I just got “The archive is corrupt” message. My next effort went into searching for a Win32 version of tar. Since GNU tools like to be small and focused, of course this was not sufficient - I needed a Win32 gzip also. Notice that I might be wrong here and there might be a Win32 tar somewhere with everything integrated - I just haven’t found it.

Since (on Win32) extracting this tar.gz needed temporary files, I did it in two steps: first with gzip (gzip -d < somefile.tar.gz > somefile.tar) and then with tar (tar xSf somefile.tar). Even with all this, the resulting file was just too small.

After testing a few more programs I gave up and recreated the archive without the --sparse option. It turns out that the size difference (with compression on) is not that big after all, but the final result is much more portable.
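
If you are curious whether --sparse buys you anything for a particular file, a quick comparison settles it (somefile is a placeholder):

tar cfzS somefile-sparse.tar.gz somefile
tar cfz somefile-plain.tar.gz somefile
ls -l somefile-sparse.tar.gz somefile-plain.tar.gz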

Here are the tools I used: