Linux Articles

Hosting your own Git-based shared repositories using SSH

Git has become one of the most important tools in a developer's toolkit. To a Drupal developer it is even more critical, as nearly everyone in the community has standardized on it. While there are many great Git hosting services out there, sometimes clients need their code kept entirely in-house - and since Git makes every copy a complete repository in its own right... why not create your own Git host on your own server? That is what we are here to do today.


This recipe is for any Linux host that has Git installed. It requires SSH, which will be used for managing the connections with your users. By default, SSH authenticates against the Linux system's user accounts, but if your needs require it, you can also use SSH modules to plug into your local LDAP or ActiveDirectory® authentication systems.

One thing that will be of great importance in this tutorial is permissioning the users correctly and setting up a deployment action that suits your needs best. The strategies we use here may be adapted to your own use cases.

Getting Started with Git as a Host

By now you have probably heard the philosophy behind Git: every repo contains all the history of a project, and any copy can become the master copy if the original is lost. While this is great in principle, in reality, to share Git with others we will need to set up a special type of repository - a bare, shared one - that is accessible to your system's users.

Installing Applications

First, let's make sure we have Git and SSH installed. On Debian or Ubuntu the command to install Git is as follows:

apt-get install git-core openssh-server

There is no special version of Git for shared repositories; the standard one will do it all. (On newer Debian and Ubuntu releases the package is simply called git rather than git-core.)


You will need to create yourself a folder where your repositories will be stored. In my case I'm creating a new directory right in the root of the server so that my users will have a nice path to work with when I give them access to the server.

The storage should *not* be your production webserver. You need to put it somewhere that is not live to your users, as the shared repository contains a bunch of files you don't want in a production environment.

mkdir /projects

I actually created my projects folder under /var/projects and just created a symlink here, to better integrate with our existing backup processes.

Grouping the Users

Make sure that we have a group for our users. I'm using the group name "webmasters" but you may already have a group established for your team. If that is the case, use the group you are already using.

addgroup webmasters

We will have to do additional work on the user account to make this work... but for now this is enough.

Initializing the Shared Repository

Now we will create a new project called "newsite". With this configuration, when your colleagues connect to the server the path will be /projects/newsite.git.

cd /projects
git init --bare --shared newsite.git
chgrp -R webmasters newsite.git
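The --shared flag tells Git to make the repository group-writable and to set the setgid bit on its directories, so new files inside it inherit the group. Here is a small sketch of what those permission bits look like, using a scratch directory in place of /projects/newsite.git:

```shell
# Scratch directory standing in for /projects/newsite.git.
DEMO=$(mktemp -d)
mkdir "$DEMO/newsite.git"
chmod 2775 "$DEMO/newsite.git"            # rwxrwsr-x: group-writable plus setgid
ls -ld "$DEMO/newsite.git" | cut -c1-10   # the "s" in the group slot is the setgid bit
```

The same drwxrwsr-x pattern should show up on the real newsite.git directory after the commands above.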

Adding Users to the Mix

If you already have users on your site, great. If not...

adduser newdeveloper
usermod -a -G webmasters newdeveloper

The usermod step is necessary so that newdeveloper is a member of the webmasters group; combined with the shared repository settings, each file newdeveloper creates there will belong to the group, allowing the other users to modify it.

There is one last step to get the permissions structure just right. By default, most Linux systems only allow a file to be edited by the user who created it, even when the file belongs to a shared group. There are many strategies for overriding this. My personal favourite is to change the system umask value so that the owner's permissions apply to the group as well.

To make a global change to enable "group writeable" by default in Debian or Ubuntu do the following:

Edit the file /etc/profile with your favourite text editor.
Add umask 002 to the end of the file. If you already have a umask value, you can change it rather than adding a new line.

You can also add the umask 002 line to the user's ~/.bashrc file if you wish to do per-user setup for this.
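Before editing any system files you can sketch the effect of the umask in a scratch directory: permissions for new files start at 666 and the umask bits are masked off, so 002 yields group-writable 664 while the common default of 022 yields 644.

```shell
DIR=$(mktemp -d)
( umask 002 && touch "$DIR/group-write" )   # 666 & ~002 = 664 (rw-rw-r--)
( umask 022 && touch "$DIR/owner-only" )    # 666 & ~022 = 644 (rw-r--r--)
stat -c '%a %n' "$DIR/group-write" "$DIR/owner-only"
```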

Be sure to test that this is working. Log in as your new user with su - newdeveloper (the - gives you a login shell, so the new umask from /etc/profile is picked up), then type cd to go to their home folder. In the user's home directory, try doing touch testfile followed by ls -la | grep testfile. You should see the following output:

-rw-rw-r-- 1 newdeveloper webmasters    0 2012-12-20 13:17 testfile

In particular, look at the permission codes at the start. If you see -rw-r--r-- then the umask is not set correctly for some reason. You should also see newdeveloper and webmasters as the user and group respectively. If not, go back to the step where you added the user to the webmasters group.

Does it all look ok? Then rm testfile and log out of your new user's account. The Control-D key will get you out of their account fast. ;)

Keep in mind there are other methods for doing this. If you already have a different system for managing group ownership of files, you will probably want to stick to the system you are already using if it is appropriate for your use case.

Accessing the Repository

Your repository can now be accessed using the following paths. Keep in mind, the first time you clone the repository Git will warn you that you are cloning an empty repository. That is ok! You can add some files and push them up to the server so that the next person to clone does not get that message.

From the same (local) machine:

git clone file:///projects/newsite.git
cd newsite

From a remote computer anywhere on the Internet:

git clone newdeveloper@yourserver:/projects/newsite.git
cd newsite

If you are using a remote computer, you will be asked for your password unless you have added your public key from your remote computer to the user's account on the server.
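To get rid of that empty-repository warning, make a first commit and push. The following self-contained sketch replays the whole round trip in a scratch directory (standing in for /projects on the server), with a throwaway Git identity supplied on the command line:

```shell
WORK=$(mktemp -d)
git init --bare --shared "$WORK/newsite.git"          # the "server side"
git clone "file://$WORK/newsite.git" "$WORK/newsite"  # warns: empty repository
cd "$WORK/newsite"
echo "newsite" > README
git add README
git -c user.name=dev -c user.email=dev@example.com commit -m "Initial commit"
git push origin HEAD                                  # after this, fresh clones no longer warn
```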

For Bonus Points, auto-checkout into stage

There is one critical thing that you will want to consider before you go live: how are you going to update your staging environment? Git can perform an action when users push new updates, defined in the shared repository's hooks folder (under /projects/newsite.git/hooks in the file system), in the post-receive file. One word of warning here though - the hook runs as the user who does the push. So your staging environment will constantly have permission errors. Ideally your stage environment has one user who is in control of it.

To fix this, in a really crude way, I rigged up a script that checks for updates every 30 seconds. Eventually I'll come up with something better - an action that can be taken by any user that doesn't involve giving everyone sudo access to the stage user. Run this "daemon" as a script from cron as the user you want to be responsible for stage:

#!/bin/sh
cd /stage || exit 1
while true; do
  git pull origin master
  sleep 30
done

WARNING: it should be obvious that this code won't scale... and will waste some resources unnecessarily; you've been warned!

In my second crude attempt at solving this issue I have taken the following approach:

1. Create an executable hooks/post-receive file inside your repo.
2. Have that file echo your destination path into a queue file:
echo "/var/www/newsite" >> /projects/queue
3. Create the /projects/queue file: touch /projects/queue && chown root:webmasters /projects/queue && chmod 660 /projects/queue
4. Create a checkout script (save it as /usr/local/bin/checkout and make it executable):

#!/bin/sh
while read TARGET; do    # don't name this variable PATH - that would break command lookup
    cd "$TARGET" || continue
    /bin/su target_username -c "/usr/bin/git pull origin master"
done

5. Then create a watcher to trigger that checkout script:

: > /projects/queue  # empty the queue first
tail -f /projects/queue | /usr/local/bin/checkout
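Putting steps 1 and 2 together, the hook itself is only a couple of lines. This sketch writes it into a scratch hooks directory (standing in for /projects/newsite.git/hooks); note the file must be executable or Git will skip it:

```shell
HOOKS=$(mktemp -d)   # stands in for /projects/newsite.git/hooks
cat > "$HOOKS/post-receive" <<'EOF'
#!/bin/sh
# Runs on the server after every push: queue this site for the watcher.
echo "/var/www/newsite" >> /projects/queue
EOF
chmod +x "$HOOKS/post-receive"
```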

This solves the issue of having multiple users accessing the repository because a single, specified user performs the checkout. All the users are able to write to the queue file, and the watcher just keeps an eye on that file. Since the watcher uses su to switch into the stage user's account to do the checkout, we run the watcher as root - and there is no possibility of any of our users figuring out they can sudo as someone else, because we don't use sudo at all.

You should add your watcher to your startup scripts.

More Bonus Points, disable SSH interactive mode for some users, and allow logins without passwords

All the Drupal people in the house be rolling their eyes right now. This can be considered a sort of cruel and unusual punishment by some... however, in some cases it is handy, for example, when you have a designer changing theme files but who shouldn't be able to get into all your databases and other things.

This is really simple to accomplish, but it comes with another caveat:

usermod -s /usr/bin/git-shell newdeveloper

So what is the caveat? It ignores your public keys! So if you get the user to generate a public key on their desktop/laptop/whatever, and they give it to you to put in their /home/newdeveloper/.ssh/authorized_keys on the git server... it will totally not do a thing. Jerk!

If you use this method the user must type their password for every single push and pull, even if you set up public keys. Can be annoying...

The recommendation for dealing with public keys is to have the user log in to SSH normally, then drop the user into git-shell. I'll be rolling this out soon so I can collect some of these bonus points.

Have your user generate a public key. Some server-wide sshd_config files restrict which key types are accepted, so check yours if logins fail. I have used DSA in this example; if you use a different key type, just make sure you use the associated file for that type.

On the developer's machine, grab the existing key pair from ~/.ssh/ or generate one using:

ssh-keygen -t dsa

Be sure to leave the passphrase blank. Then copy the contents of the public key file (~/.ssh/id_dsa.pub) to the server. The contents should be appended to the user's .ssh/authorized_keys file on the server... then... the important stuff:

Back on the server:

chmod 700 /home/newdeveloper/.ssh
chmod 600 /home/newdeveloper/.ssh/authorized_keys
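With sshd's default StrictModes setting, key authentication is refused outright if the .ssh directory or authorized_keys file is writable by anyone but the owner, which is why those two chmod lines matter. Here is the same fix replayed in a scratch directory, if you want to see the resulting modes without touching a real account:

```shell
ACCT=$(mktemp -d)   # stands in for /home/newdeveloper
mkdir "$ACCT/.ssh"
touch "$ACCT/.ssh/authorized_keys"
chmod 700 "$ACCT/.ssh"
chmod 600 "$ACCT/.ssh/authorized_keys"
stat -c '%a %n' "$ACCT/.ssh" "$ACCT/.ssh/authorized_keys"
```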

That is it! Now the user should be able to authenticate automatically, and they will not be able to get an interactive shell on the host... only use git to push and pull files.

Want to be a Linux admin? Start here.

This collection of links originally appeared on my consulting website.  If you are a developer new to using Linux or Unix systems these guides will probably come in handy at some point.  Enjoy.

Migration to Debian 5, aka, Lenny

Today's post is a quick review of the upgrade process to get your existing Debian system up-to-date with version 5.  For those who are unfamiliar with Linux, Debian is a variation of the free computer operating system that is well suited for server usage.  It also happens to strip out most of the branding you would find in other Linux distributions, which is one of the reasons I like it.

The Upgrade Process

Moving up to the new version of Debian was as simple as running the distribution update command:

apt-get dist-upgrade

... except for the fact that my OS partition is now getting quite full.  So eventually it would stop, complain about disk space and ask me to resume later.  Apparently Lenny, the codename for Debian 5, requires more space.  Go figure.

apt-get clean

Ok, we've dumped all the installer files for these packages that have been installed.  Resume installation.  Everything goes pretty smoothly from here.

At the end I need to run Lilo and I really need a new Kernel (I had been shamefully running an ancient Kernel on this box - over a year old at least).  So I asked the system for a new kernel and got it. 

One blip: the video driver.  I have a strange Intel-based motherboard - the Intel D201GLY - which has an integrated SiS graphics card with little or no support anywhere to be found.  I had to compile the drivers myself (against the new kernel, of course), and now, having done this process twice, I will be more diligent about kernel updates because it really doesn't take that long to fix.

The User Experience

The difference in performance was profound.  The combination of a new kernel, a more modern browser (Firefox 3 - packaged as Iceweasel 3) and the vast array of other updates have made the first few minutes an amazingly refreshing adventure.  The few hours I spent debugging were well worth the effort.

The icons changed, themes changed, and in some cases syntax changed (which of course means fixing a variety of scripts).  Overall the system feels more integrated, nicer to look at and less invasive.  I can see myself getting very comfortable with this.

The Missing Link

The one change that did catch me really off guard is the disappearance of the original Xmms.  I had been using it for years as a secondary player and I appreciated the support it had for changing output devices on the fly.  No other gui player seems to have that feature implemented as far as I have been able to tell.  Why not just use Xmms2, you ask?  As far as I am concerned Xmms2 is a nightmare.

With Xmms2 it seems the developers wanted to make a server-based player.  Fair enough, but I already use Moosic for this and it does a good job.  I wanted to drag and drop from Nautilus - no more.  Of all the gui interfaces to Xmms2, none of them seem to support drag and drop from external apps.  Further to this, Gxmms2 can't seem to load the queue with files from within its own interface.  Abraca is the same.  Esperanza will load the files into the list but runs the KDE interface - the only app on my desktop that does.

The main issue I have with Xmms2 is the lack of support for changing devices in the gui apps.  I have to run commands to update text files.  Frustrating!  The volume controls do not seem to want to associate with the proper device no matter what I try and no features other than the bare minimum are documented.  What seems to have happened here is that Xmms2 became a radio streaming program and lost sight of why it was created in the first place.  I don't mean to be harsh - I'm sure I will like all those new features when I get to them... but in the meantime, how do I listen to tunes on my second audio card without a bunch of hassle?  Suggestions?

Last Word

Debian 5 is a nice update to a great OS.  There are small improvements everywhere and it makes my old desktop a lot more fun to use.  Hardware support seems to be gradually improving over time and this is a good thing.  I was thinking about jumping ship to Ubuntu a few months ago but it was worth the wait.  I use Debian on the desktop and the server so the consistency is a huge advantage to me.


Today, the clock strikes 1234567890 - on Friday the 13th no less!

This Friday the 13th is a special one to your humble desktop computer.  Today the clock will strike 1234567890 seconds since the birth of time as we know it.

If you can think back as far as 1234567890 seconds ago you will possibly recall that computers were a big thing in 1970.  Big companies were starting to use them and programmers were getting tired of devising different ways of counting time and settled on one.  With that, the Unix Epoch began on January 1, 1970 and has been counting every second since.

Have you ever wondered why Macs often revert back to 1969 if you reset the system?  That was the beginning of time as far as your computer is concerned, adjusted to our time zone here in the Cascades of course.  Windows systems rely on different code but the majority of other computers all derive from this initial concept.  Even today, Windows systems rely on external clocks to set themselves and these servers are probably doing things the Unix way too.

Officially your computer will strike this magic second on Friday, February 13th, 2009 at 11:31:30pm UTC, or about 3:31:30pm here on the west coast.
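You can verify the magic second yourself with GNU date: the -d @N form interprets N as seconds since the epoch, and -u prints the result in UTC.

```shell
date -u -d @1234567890    # the magic second: Friday, February 13, 2009 at 23:31:30 UTC
date -u -d @0             # the epoch itself: midnight UTC, January 1, 1970
```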

What can you expect to happen to your stuff?  Well, this is no Y2K, folks.  It makes no difference to your computer whether it is currently 1234567890 or 1234567999, so you have nothing to fret about.  The weirdness that is Friday the 13th, however, will probably live on, so good luck with those Valentine's Day dates you have lined up for tonight.


Winter technology meanderings - web and email system upgrades

Today I'm nearing the end of a full week off. It's been great sleeping in, snowboarding and working on some projects I've been putting off for a long time.

As you probably already know I have a web server I own that I maintain as a hobby. Recently a few friends who have accounts on the server started requesting new features and I figured it was probably time to bite the bullet and start the upgrades.

So in the past few weeks I did a lot of research and performed the following upgrades:
  • Migrated 12 live websites to a newer web server
  • Rolled out a secure webmail platform
  • Deployed an IMAP server to synchronize mail and read/replied status across devices
  • Added or improved secure mechanisms for hosting content and backing up files

Upon doing this I discovered another great thing I have recently accomplished:
  • I am no longer dependent on the "Blackberry Internet Service" provided by Rogers in partnership with RIM. This service is horribly slow for personal email accounts and is better served by running an IMAP client like LogicMail, or to use another device altogether. Periodic or "on demand" message checking works better than the "push" message service on the Blackberry which really only checks your messages every 15 minutes.

Good luck planning lunch with that kind of delay.

I would highly recommend that small businesses skip the fanfare around the Blackberry and get themselves access to an IMAP server, it's much more accommodating than I had expected. Synchronization is a great tool to have at your disposal. Doing the same tasks over and over again is definitely not in most creative people's interest so I'm happy to be settling into the new configuration.

Oh yeah, and for those of you on the RSS feed, yes it's been awhile. Now you know why! Stay tuned for more fun in the coming weeks.

Preparing your network for the fall business season

This week marks the start of my fall hardware purchasing series that will bring new servers to my apartment. It's the first step in starting up a business that I have been planning for the better part of a year now.

The old configuration was getting dated. Here's a quick summary of my existing network architecture:

  • An iBook from 2001, running Debian (no Mac OS for me, thanks). This machine has now suffered the loss of a second LCD screen, is running on its second battery, and has routine hard disk failures among other bugs. It acts as a gateway to the servers since it is incapable of maintaining files reliably.
  • The original Linux server, built in 2004 with recycled materials generously provided by members of the Linux Users Group of Vancouver. The processor is by far the slowest of the lot, coming in at an awesome 350 MHz, but works just fine with nearly any task you throw at it. It serves up files and applications remotely and powers my in-house radio station I started back in 2003.
  • The DMZ server, built in 2005 with more donated hardware from co-workers and personal friends. This machine acts as a firewall and NAT in addition to providing web services and email hosting. It is much faster than the other two machines.
  • A Blackberry, purchased in 2006 to remotely manage the machines. Its slow data throughput makes it an ideal candidate for replacement as well, though the user interface is generally pretty nice.

These computers make for a very busy household with wires abounding from every angle. The processing power is minimal on the user side of things and productivity wanes on two of the boxes due to vastly insufficient memory installed in the boxes.

The roadmap for the fall includes a new desktop system, upgrades to the servers and repurposing of the laptop to become a media server for my TV.

All three "desktop" systems (2 servers, 1 desktop, laptop excluded) will be rebuilt using Mini-ITX hardware and tiny cases that will move the servers onto the bookshelf. It will reduce energy consumption in my house, almost eliminate the noise from the computers (moving it below the ambient noise level of the street), and allow more flexibility with application development by separating the testing environment from production areas of work. Lastly, the new setup will remove many of the wires that are currently under siege from the new kitten and allow serious media work to commence in the new studio.

Earlier this week I purchased a new LCD monitor which will facilitate the work on all of these new machines. Now that it's up and running the cleanup of the old servers has begun and ordering of the Mini-ITX components is set to begin. It's nice to finally be organized again.

Ready to start talking about your upcoming business plans? I'd love to hear from you.


New hardware coming soon

Well after many deliberations I think I have finally decided on my new computer. I'm going to buy a mini-itx case on eBay and a small motherboard to match it. The unit will have a riser card so I can plug in an alternate sound card, tuner card, or network card should the need arise.

This computer will replace my server which I built in 2004 out of spare parts from around Vancouver. Back then I just needed something that I could churn out resumes with while my laptop was in repair. It's still going strong and now manages my entire media collection and all of my communication related archives.

The new unit will sit on the book shelf as opposed to on the floor and should be much quieter than its predecessor. I will also be purchasing an LCD display so I can retire my laptop.

Automatic Upgrade

I had an automatic upgrade on my computer this week. I had troubles with my laptop and fearing the worst (virus) I decided to investigate. Upon running my update I noticed a huge amount of activity so I let the system update.

Within the evening I had unintentionally upgraded my computer to Debian 4.0, the latest version which until now had escaped my notice. For the Windows and Mac users out there, that's like a major upgrade (like from XP to Vista but without the scary hardware considerations).

A few things broke but I was able to fix them all really quickly. Though I had to change some configurations I was well aware of which areas of the system that needed the changes and I'm already working away and enjoying the new stability.

Another bonus to this new release is that my Wacom tablet finally works on Linux for PowerPC. Previously I had only been able to use it with my Mac desktop (which is extremely slow) or with a Windows computer (which I only use at work). Now I can have my cake and eat it too.

This laptop is not long for this world; the display and the hard disk each routinely fail. I doubt it will survive until the next Debian upgrade. To those who contributed to the release, thank you. Once again your efforts have gone above and beyond my expectations. My next PC will be running an x86 chipset but for now my PowerPC is still going strong. Nice.


Typing in Chinese on Debian

Months ago I installed the software necessary to type Chinese in Linux, called SCIM (Smart Common Input Method), but I could not figure the program out. I am an absolute beginner with the language so finding the characters I wanted was a lot of trouble.

After visiting Montréal I have a newfound desire to learn more languages, so I'm back at the table. Working in the Gimp I started typing. Rather than use the full pinyin (romanized words), I started with only the first character. The words suddenly appeared. It makes typing Chinese faster.

I also ran some tests in Inkscape, the vector graphics program for Linux. I drafted some signs, outlined the fonts, and sent them out to the printer. Nice work. I was surprised it all worked so well. Now I'm going to make some flash cards to send out to my blackberry and desktops.


After years of Photoshop, the Gimp is rocking my world

Years of Photoshop skills wasted. Or perhaps just a precursor to the destination I have found with the Gimp. The interface is different but the workflow is nice. In Photoshop I was an intermediate user; with the Gimp I find I'm becoming an expert faster. Maybe I'm just too excited.


As a novice user my favorite feature was the detaching menus, so you can leave the filters menu and others on the screen for convenient clicking. This is great for new users who just need to get things done. Another love is the right mouse button menu. It lists all of the menu options from the top of the screen. Much quicker if you have been working in a tiny area of the screen for some time as artists often do.


The downfall with the Gimp is the text rendering. It doesn't seem to support letter spacing at all; the only adjustment allowed is line spacing. Scribus does a better job of this for print layouts, but for web work I almost want to stay in the Gimp as much as possible. Designing website comps outside of a graphics suite would shock and amaze the people in my office. We only get Photoshop files. This file dependency also causes a problem, as the newer versions of Photoshop don't layer right. Always get a TIFF to verify you're looking at the same thing as your clients.


Most people don't use Photoshop that much but they still need some basic graphics editing. If this sounds like you the Gimp will do your task. For the super math kids in the crowd, you probably already know how fun this software suite is. Have fun everybody.