Linux, Unix, and whatever they call that world these days

PostgreSQL on CentOS

It is not really possible to be a Windows-only developer these days. Chances are you will end up connecting to one Linux server or another, and the best way to prepare is to have a test environment ready. For cold days there is nothing better than one nice database server.

For this particular installation I (again) opted for a minimal install of CentOS 6.3. I will assume that it is already installed (just a bunch of Nexts) and that your network interfaces are set up (e.g. via DHCP).

First step after this is actually installing PostgreSQL:

yum install postgresql postgresql-server
 Complete!

The next step is initializing the database:

service postgresql initdb
 Initializing database:                                     [  OK  ]

Start the service and the basic setup is done:

chkconfig postgresql on
service postgresql start
 Starting postgresql service:                               [  OK  ]

The next step is allowing TCP/IP connections. For that we need to edit postgresql.conf:

su - postgres
vi /var/lib/pgsql/data/postgresql.conf

There we find the listen_addresses and port parameters and un-comment them (along with a small change of listen_addresses from localhost to *):

listen_addresses = '*'
port = 5432
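If you prefer to script this instead of editing by hand, the same change can be made with sed. This is just a sketch; the function name is made up and the patterns assume the stock commented-out defaults found on CentOS 6:

```shell
# enable_tcp: un-comment listen_addresses and port in the given postgresql.conf
# (assumes the stock "#listen_addresses = 'localhost'" and "#port = 5432" lines)
enable_tcp() {
    sed -i -e "s|^#listen_addresses = 'localhost'|listen_addresses = '*'|" \
           -e "s|^#port = 5432|port = 5432|" "$1"
}

# usage: enable_tcp /var/lib/pgsql/data/postgresql.conf
```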

While we are at it, we might as well add all hosts as friends in pg_hba.conf (yes, don’t do this in production):

vi /var/lib/pgsql/data/pg_hba.conf

Add the following line at the bottom:

host    all         all         0.0.0.0/0             trust
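If you want even the test box to be slightly less wide open, a more restrictive variant (an illustration, not part of the original setup) limits access to one subnet and asks for the account password:

```
host    all         all         192.168.56.0/24       md5
```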

Finish up editing and restart the service:

exit
 logout
/etc/init.d/postgresql restart
 Stopping postgresql service:                               [  OK  ]
 Starting postgresql service:                               [  OK  ]

A quick check with psql is in order (notice that \q is used to exit):

psql -h 192.168.56.101 -U postgres -d postgres
 psql (8.4.13)
 Type "help" for help.
\q

If external connections are needed, we must deal with the firewall. The easiest way is to disable it. For a production environment this is a big no-no; for simple testing on a virtual machine it will suffice:

/etc/init.d/iptables stop
chkconfig iptables off

And with this we are ready to accept clients.

Simplest LDAP Server

One application I am working on needed LDAP authentication support. In order to test it before actually deploying, I decided to create a local LDAP server in a virtual machine.

I decided to use a CentOS minimal install as the starting point. It is an extremely small distribution to start with and it allows for a virtual machine with only 256 MB of RAM (although it needs 512 MB in order to install, go figure).

Installation of CentOS is uneventful: just go next, next, next and it is done. It might be wise to skip the media check, though, since it takes ages. In a matter of minutes the OS will boot up and then the fun starts.

Since we need network access both for using the machine as an LDAP server and for getting packages off the Internet, the first task is to bring it up. Getting it to work is as easy as running ifup eth0. To make the change permanent, edit /etc/sysconfig/network-scripts/ifcfg-eth0 and change the line starting with ONBOOT to ONBOOT="yes". It is as easy as that (if you disregard the annoyance of the vi editor).
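For reference, a minimal ifcfg-eth0 for DHCP might look like this after the edit (the values other than ONBOOT are assumptions typical of a default install):

```
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
```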

Now we need to install our directory server. First install the package (answer y to everything):

yum install 389-ds-base

And then run setup (answer yes to the first two questions and just use defaults for the others):

setup-ds.pl

That should leave us with values totally unsuitable for anything but testing (which is exactly what we want):

Computer name ...............: localhost.localdomain
System User .................: nobody
System Group ................: nobody
Directory server network port: 389
Directory server identifier .: localhost
Suffix ......................: dc=localdomain
Directory Manager DN ........: cn=Directory Manager

A quick search will prove that our directory server is up and running:

ldapsearch -h 127.0.0.1 -x -b "dc=localdomain"
 ...
 # search result
 search: 2
 result: 0 Success
 # numResponses: 10
 # numEntries: 9

Well, now we are ready to add our first user. Just create a user.ldif file with the following content:

dn: uid=jdoe,ou=People,dc=localdomain
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
uid: jdoe
cn: John Doe
displayName: John Doe
givenName: John
sn: Doe
userPassword: test

Not all of these attributes are mandatory, but I find this to be the minimum acceptable set for my use. It is not enough if you want to use the LDAP server for logons, but it will do for basic password checking. We add the user with:

ldapadd -h 127.0.0.1 -x -D "cn=Directory Manager" -W -f user.ldif
 adding new entry "uid=jdoe,ou=People,dc=localdomain"

If something is messed up, just delete the user and add it again:

ldapdelete -h 127.0.0.1 -x -D "cn=Directory Manager" -W "uid=jdoe,ou=people,dc=localdomain"
ldapadd -h 127.0.0.1 -x -D "cn=Directory Manager" -W -f user.ldif
 adding new entry "uid=jdoe,ou=People,dc=localdomain"

Yes, there is an ldapmodify operation, but I find it better to start with a clean slate during testing.
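For completeness, changing just the password in place would look something like this (a sketch; the file name modify.ldif and the new password are made up):

```
# modify.ldif: replace jdoe's password without deleting the entry
dn: uid=jdoe,ou=People,dc=localdomain
changetype: modify
replace: userPassword
userPassword: newtest
```

Apply it with ldapmodify -h 127.0.0.1 -x -D "cn=Directory Manager" -W -f modify.ldif.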

One more test to verify that user authentication works and we are good. The password asked for here is not your Directory Manager password but the password of the user (test in my example):

ldapsearch -h 127.0.0.1 -x -D "uid=jdoe,ou=People,dc=localdomain" -W -b "ou=people,dc=localdomain" "uid=jdoe"
 dn: uid=jdoe,ou=People,dc=localdomain
 objectClass: top
 objectClass: person
 objectClass: organizationalPerson
 objectClass: inetOrgPerson
 uid: jdoe
 cn: John Doe
 displayName: John Doe
 givenName: John
 sn: Doe
 search: 2
 result: 0 Success

Congratulations, you have just performed your first LDAP authentication.

Since, in its current state, our LDAP server cannot talk with the outside world, we can think of dropping the firewall (not something you should do in a production environment):

iptables -F INPUT
service iptables save

And the last step is to ensure that our directory server gets started as soon as the machine boots up:

chkconfig dirsrv on

With this, the LDAP test server configuration is done.

SHA-1 Sum Every File

The easiest way to check whether a file is valid after download is to compare it against its SHA-1 sum. Most commonly the sum has the same name as the file but with an additional .sha1 extension (e.g. temp.zip would have its SHA-1 sum in temp.zip.sha1). The only annoyance is generating all those .sha1 files…

To make my life a little bit easier, I made a bash script. It goes through the given directories and creates all the SHA-1 sums. I use the content and download directories in this case:

#!/bin/bash
for file in ~/public_html/{content,download}/*
do
    if [ -f "$file" ]
    then
        if [ "${file: -5}" != ".sha1" ]
        then
            file1="$file"
            file2="$file1.sha1"
            file1Sum=$(sha1sum "$file1" | cut --delimiter=' ' -f 1)
            if [ -e "$file2" ]
            then
                file2Sum=$(cat "$file2")
                if [ "$file1Sum" == "$file2Sum" ]
                then
                    echo "  $file1"
                else
                    echo "X $file1"
                    echo "$file1Sum" > "$file2"
                fi
            else
                echo "+ $file1"
                echo "$file1Sum" > "$file2"
            fi
        else
            file1="${file%.sha1}"
            file2="$file"
            if [ ! -e "$file1" ]
            then
                echo "- $file1"
                rm "$file2"
            fi
        fi
    fi
done

Some explanation is probably in order. The script checks each file in the content and download directories. If a file ends in .sha1 (bottom of the script) and the file it belongs to no longer exists, the script removes it and logs the action with a minus (-) sign. This serves as clean-up for orphaned SHA-1 sums.

For any other file, the script checks its existing SHA-1 sum. If there is no sum yet, the script creates one and logs it with a plus (+) sign. If a sum does exist, the script compares it with the newly generated value. If both match, there is nothing to do; if they do not match, the sum is rewritten and the action is logged with an X character.

Example output would be:

  /public_html/download/qtext301.exe
+ /public_html/download/qtext310.exe
X /public_html/download/seobiseu110.exe
- /public_html/download/temp.zip

Here we can see that the sum for qtext301.exe was valid and no action was taken. The sum for qtext310.exe was added and the one for seobiseu110.exe was fixed (its value didn’t match). temp.zip.sha1 was removed since temp.zip does not exist anymore.
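As an aside, verifying one of these files later can lean on sha1sum -c; since the script stores only the bare sum, the expected “SUM  NAME” line has to be rebuilt first. A small helper (hypothetical, not part of the script above):

```shell
# check_sha1 FILE: verify FILE against the bare sum stored in FILE.sha1
# (sha1sum -c expects "SUM  NAME" lines, so we rebuild that format on the fly)
check_sha1() {
    echo "$(cat "$1.sha1")  $1" | sha1sum -c -
}

# usage: check_sha1 ~/public_html/download/qtext310.exe
```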

P.S. While this code is not perfect and might not be the best solution, it does work for me. :)

DD-WRT on WL-330GE

Illustration

I recently bought an Asus WL-330GE wireless router. I needed a small travel router and I needed to run DD-WRT on it. It seemed like a perfect match.

The upgrade to DD-WRT went without a hitch. However, as soon as it booted I noticed that I was getting a DHCP address from my hotel’s server instead of from the router. A quick investigation revealed that there was no WAN port configured. The single port was in my LAN segment and thus it was leaking everything. The solution ought to be simple - I just went to Setup -> Networking and changed the WAN assignment there. It seemed to work but, as soon as I rebooted the router, everything went back to its original state. Quite annoying.

Quick googling revealed something that looked quite close to a solution but it didn’t work for me. I could get it to work sometimes, but after every restart it would put my WAN port back into the default bridge. Not quite what I wanted.

In order to debug this I executed nvram show after a clean install and then again after everything got working. That gave me the delta that I had to apply. And, as far as bridges go, I decided to manually remove eth0 (the WAN port) from the default bridge.

The final result was this start-up script (Administration > Commands):

brctl delif br0 eth0
nvram set lan_ifnames="eth1"
nvram set wan_ifname="eth0"
nvram set wan_ifname2="eth0"
nvram set wan_ifnames="eth0"
nvram set wanup=0
nvram unset dhcpc_done
nvram commit
udhcpc -i eth0 -p /var/run/udhcpc.pid -s /tmp/udhcpc &

The first line just ensures that the WAN port is thrown out of the bridge. All those nvram lines sort out the minor differences. The last line enables DHCP renewal on the WAN interface. After startup this should produce the bridge state displayed in the picture. Just what I wanted. :)

The only thing that might look funny afterward is that both the WLAN and LAN interfaces have the same MAC address. To solve this we need to telnet (or ssh) into the machine and execute the following commands:

nvram get lan_hwaddr
nvram get wan_hwaddr
nvram get wl0_hwaddr

Each command will give you the MAC address of one interface. In my case this was:

lan_hwaddr: F4:6D:06:94:02:39
wan_hwaddr: F4:6D:06:94:02:39
wl0_hwaddr: F4:6D:06:94:02:3B

From that we can infer that wan_hwaddr should be F4:6D:06:94:02:3A (just before wireless and just after LAN). The only thing to do now is to extend our startup script (somewhere BEFORE nvram commit) with:

nvram set wan_hwaddr=F4:6D:06:94:02:3A
nvram set et0macaddr=F4:6D:06:94:02:3A

This game with MACs is not strictly necessary but I like to set it anyhow.
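The “just after LAN” arithmetic can also be done in a shell instead of by hand. A tiny helper sketch (next_mac is a made-up name; run this in bash on a desktop, not necessarily on the router’s busybox shell):

```shell
# next_mac: print the MAC address that is numerically one higher than the input
next_mac() {
    # strip colons, add 1 as hex, re-pad to 12 digits, re-insert colons
    printf '%012X\n' $(( 0x$(echo "$1" | tr -d ':') + 1 )) | sed 's/../&:/g; s/:$//'
}

# next_mac F4:6D:06:94:02:39 prints F4:6D:06:94:02:3A
```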

I tested this on build 14896 (recommended for the Asus WL-330GE in the router database) and on special build 15962 (recommended on the forums as stable).

P.S. Next time remember not to take router advice from Windows programmer.

IPv6 in Your Local Network Via DD-WRT

Illustration

After sorting out tunneling on my computer, it was time to set up my router too. The idea is not to configure each client with a separate tunnel, but to have one tunnel on the router that all computers use transparently. Hurricane Electric gives a /64 prefix and that ought to be enough.

As the router I will use my trusty DD-WRT. The exact version used in this example is DD-WRT v24-sp2 (12/08/11) std-nokaid (SVN revision 17990M NEWD-2 Eko). Your mileage may vary depending on the version of your choosing.

The obvious first step is to enable IPv6. It is easy enough to do. Under Administration -> Management find IPv6 support and enable both IPv6 and Radvd. Radvd is configured as simply as it can be:

interface br0
{
   AdvSendAdvert on;
   prefix 2001:db8:9:10ee::/64
   {
   };
};

Notice that the prefix is the same text as “Routed /64” under your tunnel details.

Unfortunately this alone will not do. There is need for a small script:

insmod ipv6

SERVER_IPV4_ADDRESS="216.66.22.2"
SERVER_IPV6_ADDRESS="2001:db8:8:10ee::1"
CLIENT_IPV4_ADDRESS=$(ip -4 addr show dev eth0 | awk '/inet / {print $2}' | cut -d/ -f1)
CLIENT_IPV6_ADDRESS="2001:db8:8:10ee::2"
ROUTED_IPV6_ADDRESS="2001:db8:9:10ee::1"

if [ -n "$CLIENT_IPV4_ADDRESS" ]
then
    ip tunnel add he-ipv6 mode sit remote $SERVER_IPV4_ADDRESS local $CLIENT_IPV4_ADDRESS ttl 255
    ip link set he-ipv6 up
    ip addr add $CLIENT_IPV6_ADDRESS/64 dev he-ipv6
    ip route add ::/0 dev he-ipv6
    ip -6 addr add $ROUTED_IPV6_ADDRESS/64 dev br0
    kill $(ps | awk '/radvd / { print $1}')
    radvd -C /tmp/radvd.conf
    wget "http://ipv4.tunnelbroker.net/ipv4_end.php?ip=AUTO&pass=9fc4d3d26b6ba921226c53e6c664c1ab0&apikey=tb4f139f1c342fgbd4.44123289860&tid=5511235463"
fi

The script sets some variables, brings the interface up, adds routes, and restarts the radvd daemon so it picks up the latest settings. The last line is needed only for users with a dynamic IP (a full explanation appears once you load the page in a browser). This script needs to be saved with Save Firewall (under Administration -> Commands).

Once the router restarts, you will notice that all clients get an IPv6 address alongside IPv4 (they have to support IPv6, of course). The easiest way to check is to run ping -6 ipv6.google.com. Or load one of the many IPv6 test pages.

Windows 7 works just beautifully with IPv6.

P.S. In case you are wondering where I pulled those IPv6 addresses from, here is what Hurricane Electric gave me:

IPv6 Tunnel Endpoints
    Server IPv4 Address: 216.66.22.2
    Server IPv6 Address: 2001:db8:8:10ee::1/64
    Client IPv4 Address: 174.78.144.123
    Client IPv6 Address: 2001:db8:8:10ee::2/64

Routed IPv6 Prefixes
    Routed /64:          2001:db8:9:10ee::/64

It's Not Natty, It Is Nutty

On my “standard” work day I interact with Unix a lot, so it seemed logical to have Linux on my machine. I used Ubuntu 10.10 and the world was beautiful. Yes, some things did bother me, but it worked. And after work I could always enjoy Windows 7.

Two days ago I upgraded to the newest Ubuntu (11.04, nicknamed Natty). After the upgrade, booting seemed a little slow, so I decided to do what I should have done the first time - a clean install. I spent one full day on a fresh installation of Nutty and here are some of the problems I had in the span of eight hours:

  • If your secondary monitor is on the left, it is almost impossible to pinpoint the single dot at which the Unity launcher will appear.
  • Turning off auto-hide for the Unity launcher causes it to sit over all maximized windows and hide their left side.
  • Applications closed without warning and without any form of crash dialog. Since this includes even the calculator, I would dare to suggest that the new Unity interface is to blame.
  • Once the system started swapping, it took Ctrl+Alt+F1 and the top command to find that compiz took a whopping 4.5 GB of RAM (out of 6 GB).
  • LibreCalc died every five minutes while editing a relatively simple document. Recovery worked in almost 50% of cases.
  • The laptop could not wake from sleep.
  • The network connection was breaking every half an hour or so.
  • Booting took ages - easily double the time that 10.10 needed.
  • … and do notice that there were a lot of smaller problems still …

I started using Linux in the nineties with Slackware and went through my share of different distributions. I was never completely satisfied, and people working with me know that I curse Linux a lot because of small bugs. But this is the first time ever that I was actually unable to do my work. In the end I just rebooted into Windows XP (company issued) and did my work there.

My next step will be to take this Natty piece of shit off the laptop and go back to Ubuntu 10.10.

P.S. I actually liked the search interface in Unity. Unfortunately everything else sucks.

Cleaning Up

What to do if your script needs to kill all the processes it started? Just kill everything that has the same parent as your current shell ($$):

#!/bin/bash
kill `ps -ef | awk '$3 == '$$' {print $2}'`
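On systems where the procps tools are available, pkill can express the same idea as the ps/awk one-liner more directly. A sketch (assuming pkill is installed, which it usually is):

```shell
#!/bin/bash
# Start a couple of background jobs, then kill every process
# whose parent is the current shell ($$).
sleep 60 &
sleep 60 &
pkill -P $$
wait   # reap the killed children
```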

Extracting Sparse Tar

I had to move some files from a Unix machine. The file was big, I had a small USB drive - one thing led to another…

A gzipped tar was the obvious solution. In addition, a friend of mine recommended also using the --sparse argument. The theory behind sparse files says that blocks of zeros should be stored extremely efficiently, making my file smaller even before the zipping part gets involved. This made my command look like “tar cfzS somefile.tar.gz somefile”. It all worked as advertised.

The next day I had to extract this on Windows. My trusty WinRAR had no idea how to proceed; I just got “The archive is corrupt”. My next efforts went into searching for a Win32 version of tar. Since GNU tools like to be small and focused, of course that was not sufficient - I needed a Win32 gzip too. Note that I might be wrong here and there might be a Win32 tar somewhere with everything integrated - I just haven’t found it.

Since (on Win32) extracting this tar.gz needed temporary files, I did it in two steps: first with gzip (gzip -d < somefile.tar.gz > somefile.tar) and then with tar (tar xSf somefile.tar). Even with all this, the resulting file was just too small.

After testing a few more programs I gave up and recreated the archive without the --sparse option. It turns out that the size difference (with compression on) is not that big after all, and the final result is much more portable.
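The effect of --sparse is easy to demonstrate locally. A quick experiment with GNU tar (all file names here are made up):

```shell
#!/bin/bash
# Compare archive sizes for a fully sparse 10 MB file,
# with and without tar's --sparse (-S) handling.
dir=$(mktemp -d)
truncate -s 10M "$dir/sparse.img"                   # a file that is all holes
tar -C "$dir" -cSf "$dir/with-S.tar"  sparse.img    # sparse-aware archive
tar -C "$dir" -cf  "$dir/without.tar" sparse.img    # plain archive
ls -l "$dir/with-S.tar" "$dir/without.tar"
rm -r "$dir"
```

On a filesystem that supports holes, the -S archive stays tiny while the plain one is a little over the full 10 MB.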

Here are the tools I used:

2032

Illustration

After the annual maintenance of the power grid in my neighborhood, with a few hours without power, my trusty file server went down. It wasn’t the first time it went down. It was the first time it stayed down.

This was an alix1d embedded PC running FreeNAS, so my first thoughts went to file corruption. And I was right, there was some, but nothing a simple fsck could not solve. However, the boot process still had issues.

I will not detail everything I tried. Suffice it to say that I wasted a whole day playing with this thing. As a last resort I decided to reinstall the system.

As I went into the BIOS to set my boot device, I noticed that my BIOS password was missing. As I went through the settings, everything seemed to be at defaults. And default is not a state you wish your alix1d board to be in.

FreeNAS has some issues with ACPI on this board. It will simply not boot if you have it turned on. And I had it turned on in my BIOS. Fixing it was easy - just turn it OFF. All that wasted time amounted to an issue I already knew about.

The reason the BIOS settings were changed was a simple CR2032 battery. It usually keeps BIOS settings nice and fresh, but mine was dead. Any power outage would cause the same issues. It was an accident waiting to happen.

I checked old invoices and it turns out this system is only two years old. I find it quite peculiar that the battery is already gone. Something on this motherboard is drinking battery like mad.

Anyhow, everything works perfectly with a new battery. I only hope I will remember this issue when everything fails again in two years. :)

Private Mercury

Illustration

Sharing source with Mercurial is not hard. There is quite a good guide at Martin’s Blog and, indeed, the first part of this post will mostly follow his setup.

Sharing sources with password authentication is still not hard but (at least from my perspective) not obvious. This post documents my efforts at creating private, password-protected Mercurial storage.

This procedure was tested on Ubuntu 10.04.1 LTS but I would expect it to work on older versions as well.

First we need to install its package.

sudo apt-get install mercurial
 Reading package lists... Done
 Building dependency tree
 Reading state information... Done
 The following extra packages will be installed:
   mercurial-common
 Suggested packages:
   qct vim emacs kdiff3 tkdiff meld xxdiff python-mysqldb python-pygments
 The following NEW packages will be installed:
   mercurial mercurial-common
 0 upgraded, 2 newly installed, 0 to remove and 63 not upgraded.
 Need to get 1,182kB of archives.
 After this operation, 4,956kB of additional disk space will be used.
 Do you want to continue [Y/n]? Y
 Get:1 http://hr.archive.ubuntu.com/ubuntu/ lucid/universe mercurial-common 1.4.3-1 [1,131kB]
 Get:2 http://hr.archive.ubuntu.com/ubuntu/ lucid/universe mercurial 1.4.3-1 [50.7kB]
 Fetched 1,182kB in 3s (388kB/s)
 Selecting previously deselected package mercurial-common.
 (Reading database ... 124142 files and directories currently installed.)
 Unpacking mercurial-common (from .../mercurial-common_1.4.3-1_all.deb) ...
 Selecting previously deselected package mercurial.
 Unpacking mercurial (from .../mercurial_1.4.3-1_i386.deb) ...
 Processing triggers for man-db ...
 Setting up mercurial-common (1.4.3-1) ...
 Setting up mercurial (1.4.3-1) ...
 Creating config file /etc/mercurial/hgrc.d/hgext.rc with new version
 Processing triggers for python-support ...

Create a location for Mercurial repositories at /srv/hg with cgi-bin as a subdirectory:

sudo mkdir -p /srv/hg/cgi-bin
sudo cp /usr/share/doc/mercurial-common/examples/hgweb.cgi /srv/hg/cgi-bin/

Additionally we need “/srv/hg/cgi-bin/hgweb.config” (do not forget to sudo) with the following lines:

[collections]
/srv/hg/ = /srv/hg/

In newer Mercurial installations you also need to edit “/srv/hg/cgi-bin/hgweb.cgi” in order to fix the config parameter. Just replace the example config line with:

config = "/srv/hg/cgi-bin/hgweb.config"

The next thing to do is installing the Apache web server:

sudo apt-get install apache2
 Reading package lists... Done
 Building dependency tree
 Reading state information... Done
 The following extra packages will be installed:
   apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1
   libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libssl0.9.8
 Suggested packages:
   apache2-doc apache2-suexec apache2-suexec-custom
 The following NEW packages will be installed:
   apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common
   libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap
 The following packages will be upgraded:
   libssl0.9.8
 1 upgraded, 9 newly installed, 0 to remove and 62 not upgraded.
 Need to get 6,343kB of archives.
 After this operation, 10.2MB of additional disk space will be used.
 Do you want to continue [Y/n]? Y
 Get:1 http://hr.archive.ubuntu.com/ubuntu/ lucid-updates/main libssl0.9.8 0.9.8k-7ubuntu8.1 [3,015kB]
 Get:2 http://hr.archive.ubuntu.com/ubuntu/ lucid/main libapr1 1.3.8-1build1 [116kB]
 Get:3 http://hr.archive.ubuntu.com/ubuntu/ lucid/main libaprutil1 1.3.9+dfsg-3build1 [85.4kB]
 Get:4 http://hr.archive.ubuntu.com/ubuntu/ lucid/main libaprutil1-dbd-sqlite3 1.3.9+dfsg-3build1 [27.1kB]
 Get:5 http://hr.archive.ubuntu.com/ubuntu/ lucid/main libaprutil1-ldap 1.3.9+dfsg-3build1 [25.1kB]
 Get:6 http://hr.archive.ubuntu.com/ubuntu/ lucid-updates/main apache2.2-bin 2.2.14-5ubuntu8.2 [2,622kB]
 Get:7 http://hr.archive.ubuntu.com/ubuntu/ lucid-updates/main apache2-utils 2.2.14-5ubuntu8.2 [159kB]
 Get:8 http://hr.archive.ubuntu.com/ubuntu/ lucid-updates/main apache2.2-common 2.2.14-5ubuntu8.2 [290kB]
 Get:9 http://hr.archive.ubuntu.com/ubuntu/ lucid-updates/main apache2-mpm-worker 2.2.14-5ubuntu8.2 [2,366B]
 Get:10 http://hr.archive.ubuntu.com/ubuntu/ lucid-updates/main apache2 2.2.14-5ubuntu8.2 [1,484B]
 Fetched 6,343kB in 14s (440kB/s)
 Preconfiguring packages ...
 (Reading database ... 124530 files and directories currently installed.)
 Preparing to replace libssl0.9.8 0.9.8k-7ubuntu8 (using .../libssl0.9.8_0.9.8k-7ubuntu8.1_i386.deb) ...
 Unpacking replacement libssl0.9.8 ...
 Setting up libssl0.9.8 (0.9.8k-7ubuntu8.1) ...
 Processing triggers for libc-bin ...
 ldconfig deferred processing now taking place
 Selecting previously deselected package libapr1.
 (Reading database ... 124530 files and directories currently installed.)
 Unpacking libapr1 (from .../libapr1_1.3.8-1build1_i386.deb) ...
 Selecting previously deselected package libaprutil1.
 Unpacking libaprutil1 (from .../libaprutil1_1.3.9+dfsg-3build1_i386.deb) ...
 Selecting previously deselected package libaprutil1-dbd-sqlite3.
 Unpacking libaprutil1-dbd-sqlite3 (from .../libaprutil1-dbd-sqlite3_1.3.9+dfsg-3build1_i386.deb) ...
 Selecting previously deselected package libaprutil1-ldap.
 Unpacking libaprutil1-ldap (from .../libaprutil1-ldap_1.3.9+dfsg-3build1_i386.deb) ...
 Selecting previously deselected package apache2.2-bin.
 Unpacking apache2.2-bin (from .../apache2.2-bin_2.2.14-5ubuntu8.2_i386.deb) ...
 Selecting previously deselected package apache2-utils.
 Unpacking apache2-utils (from .../apache2-utils_2.2.14-5ubuntu8.2_i386.deb) ...
 Selecting previously deselected package apache2.2-common.
 Unpacking apache2.2-common (from .../apache2.2-common_2.2.14-5ubuntu8.2_i386.deb) ...
 Selecting previously deselected package apache2-mpm-worker.
 Unpacking apache2-mpm-worker (from .../apache2-mpm-worker_2.2.14-5ubuntu8.2_i386.deb) ...
 Selecting previously deselected package apache2.
 Unpacking apache2 (from .../apache2_2.2.14-5ubuntu8.2_i386.deb) ...
 Processing triggers for man-db ...
 Processing triggers for ufw ...
 Processing triggers for ureadahead ...
 ureadahead will be reprofiled on next reboot
 Setting up libapr1 (1.3.8-1build1) ...
 Setting up libaprutil1 (1.3.9+dfsg-3build1) ...
 Setting up libaprutil1-dbd-sqlite3 (1.3.9+dfsg-3build1) ...
 Setting up libaprutil1-ldap (1.3.9+dfsg-3build1) ...
 Setting up apache2.2-bin (2.2.14-5ubuntu8.2) ...
 Setting up apache2-utils (2.2.14-5ubuntu8.2) ...
 Setting up apache2.2-common (2.2.14-5ubuntu8.2) ...
 Enabling site default.
 Enabling module alias.
 Enabling module autoindex.
 Enabling module dir.
 Enabling module env.
 Enabling module mime.
 Enabling module negotiation.
 Enabling module setenvif.
 Enabling module status.
 Enabling module auth_basic.
 Enabling module deflate.
 Enabling module authz_default.
 Enabling module authz_user.
 Enabling module authz_groupfile.
 Enabling module authn_file.
 Enabling module authz_host.
 Enabling module reqtimeout.
 Setting up apache2-mpm-worker (2.2.14-5ubuntu8.2) ...
  * Starting web server apache2
 apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
 Setting up apache2 (2.2.14-5ubuntu8.2) ...
 Processing triggers for libc-bin ...
 ldconfig deferred processing now taking place

We need a new configuration for the web interface of our repositories (“/etc/apache2/sites-available/hg”) with the following content:

NameVirtualHost *
<VirtualHost *>
    ServerAdmin webmaster@localhost
    DocumentRoot /srv/hg/cgi-bin/
    <Directory "/srv/hg/cgi-bin/">
        SetHandler cgi-script
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog /var/log/apache2/hg.log
    <Location />
        AuthType Basic
        AuthName "Mercurial"
        AuthUserFile  /srv/hg/.htpasswd
        Require valid-user
    </Location>
</VirtualHost>

The lines under Location are the ones that ensure the privacy of our repository.

We can now disable the default web site and enable the new one (ignoring any warnings), together with changes of ownership and permissions:

sudo chown -R www-data /srv/hg

sudo chmod a+x /srv/hg/cgi-bin/hgweb.cgi

sudo a2dissite default
 Site default disabled.
 Run '/etc/init.d/apache2 reload' to activate new configuration!

sudo a2ensite hg
 Enabling site hg.
 Run '/etc/init.d/apache2 reload' to activate new configuration!

sudo /etc/init.d/apache2 reload
  * Reloading web server config apache2
 apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
 [warn] NameVirtualHost *:80 has no VirtualHosts

If we try to access “http://localhost” now, we should be greeted with a password prompt.

The thing we are still missing is the “/srv/hg/.htpasswd” file. We can create all the users we need with the htpasswd command:

sudo htpasswd -c /srv/hg/.htpasswd testuser
 New password:
 Re-type new password:
 Adding password for user testuser

All further users are added with a slightly modified command (notice that -c is missing):

sudo htpasswd /srv/hg/.htpasswd testuser2
 New password:
 Re-type new password:
 Adding password for user testuser2

After creating the repository itself

sudo hg init /srv/hg/TestRepo

we must also create a “/srv/hg/TestRepo/.hg/hgrc” file with the following content:

[web]
push_ssl=false
allow_push=testuser

This allows using http (instead of https) and grants push access to our “testuser” (if there are no restrictions, just put * for the user name). The very last step is actually allowing Apache to write to our repository. The easiest thing to do is transfer ownership to it:

sudo chown -R www-data /srv/hg/TestRepo

Finally we can use “http://192.168.0.2/hgweb.cgi/TestRepo/” for pushing and pulling data from any Mercurial client.
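To avoid typing the password on every push, the client can store credentials in its own ~/.hgrc using Mercurial’s [auth] section. An illustrative fragment (the alias name “example” and the password are made up; note that the password is stored in plain text, so this is a convenience for test setups):

```ini
[auth]
example.prefix = 192.168.0.2/hgweb.cgi
example.username = testuser
example.password = mypassword
```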

P.S. To use https, check the second post of this series.