Zach Burlingame
Programming, Computers, and Other Notes on Technology

Limiting an Application to a Single Instance with Named Events

June 21st, 2011

Sometimes it’s desirable to allow only a single instance of an application per user or even per system. There are several ways you can do this.

FindWindow

In the case of a single instance per user session, you can use FindWindow to enumerate through the window handles for the current user session based on the window class and window title. This is actually how Microsoft Outlook does it. The drawback is that this only supports limiting the current session and it requires a window handle (i.e. doesn’t work in Console applications without creating a hidden window).

Use a File or Registry as a Lock

This method is used by VMware to establish whether a .vmdk file is locked by another process. Sometimes after an unclean shutdown of the owning VMware process, the file lock hangs around and the user must manually delete it in order to boot the VM. This solution does not rely on a window handle and is thus applicable to any application that can access the disk, which is good. However, just as with VMware, using this as a single-instance mechanism could leave us in a state where the user can’t run the app at all until they delete the lock file – not good.

CreateMutex

This is one of the most prevalent and well-documented techniques. It places a uniquely named mutex in either the global namespace (for a single system-wide instance) or the local namespace (for a single session-wide instance) using the CreateMutex Win32 API call, and then detects whether the object already exists when the application first starts. The mutex is torn down when the last process that holds a handle to the object exits. This prevents us from getting into a stale state where we can’t start any instances of the application at all. Since this solution doesn’t require a handle to a window, it’s suitable for any type of Windows application (e.g. Windows Application/WinForm, Console, Service).

CreateEvent

This technique uses the same concept of a uniquely named event as the mutex technique. Also like the mutex solution, it’s suitable for any type of Windows application, and the event is torn down when the last process that holds a handle to the event exits. The reason I chose this method over the global mutex, however, is that I overload the use of this event to serve as my shutdown signal. This allows me to use the same object to determine if an instance of an application is running as well as signal all instances of the application to terminate if necessary.

Based on my signal terminate solution here, you can limit an application to a single instance by removing this from initialize_terminate_event in signal_terminate.c

  // Make sure our instance of the application didn't already initialize the event
  if( fh_terminate_event != NULL )
  {
    return SUCCESS;
  }

and calling it at the beginning of your application’s main routine like this.

// Library Includes
#include <Windows.h>
#include <stdio.h>

// Application Includes
#include "error_codes.h"
#include "terminate.h"
#include "types.h"

Int32 main( Int32 argc, Char* argv[] )
{
  Int32 return_code = initialize_terminate_event( );
  // If the event already exists or if there is an error 
  // creating the event, just exit. 
  if( return_code != SUCCESS )
  {
     return return_code;
  }

  // Main routine here
}

Simple Reader-Writer Lock in C on Win32

June 20th, 2011

In a particular native C Win32 application, I have a few threads that regularly read a particular set of information while performing their main work routines and a single thread that updates that information. A reader-writer lock is well suited to a workload that is heavily read-based with scarce writes.

The Win32 API provides a Slim Reader-Writer Lock. The problem is that it wasn’t added until Vista and I still need to support Windows XP. I wasn’t too keen on writing my own as writing thread-safe code – particularly synchronization objects – is notoriously tricky. A quick search turned up several solutions for a reader-writer lock in C++, but not too many in C. I was even less keen on using a fully-featured RWLock that wasn’t from a mature and active project or porting an implementation from C++. Fortunately, a basic RWL is not that difficult as far as synchronization objects go.

I decided to roll my own and I’ve placed the project on Bitbucket here. As I come across other threading needs, I’ll add any functions and utilities to it. There are certainly no guarantees that my implementation is bug-free, but I did at least give it a bit of thought. If you find anything, please share so that I can fix it!

HOWTO: Generate and Publish Doxygen Documentation in TeamCity

June 12th, 2011

I’ve started using Doxygen and JavaDoc style comments on my native C/C++ applications for documentation generation. In keeping with my goal to have everything “just work” on checkout with minimal dependencies (ideally just Visual Studio and version control) I wanted to get it integrated directly into the project. That way anyone can generate the latest version of the documentation from their working copy whenever they need it. Since I use TeamCity for my continuous integration server, it was natural to have the CI server generate and publish the latest documents during the build process.

Setup Doxygen

Setup Doxygen in a Working Copy of your Project

  1. Download the Windows Doxygen zip package
  2. Create build/doxygen/bin and place doxygen.exe in it
  3. Place DoxyFile in build/doxygen/
  4. Create build.xml in build/doxygen/

     The working copy layout should look like this:

     +---build
     |   \---doxygen
     |       \---bin
     +---code
     \---documentation

     build.xml

     <?xml version="1.0" encoding="utf-8"?>
     <Project ToolsVersion="3.5" DefaultTargets="Doxygen" 
     xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
       <PropertyGroup>
         <OutputDirectory>..\..\documentation\doxygen\</OutputDirectory>
       </PropertyGroup>
       <Target Name="Doxygen">
         <MakeDir Directories="$(OutputDirectory)" />
         <Exec Command="bin\doxygen.exe" />
       </Target>
     </Project>

  5. Test the script using MSBuild in a Visual Studio Command Prompt:

     cd /path/to/project/build/doxygen
     msbuild build.xml

  6. Add the documentation/doxygen output folder to your VCS ignore pattern (e.g. for Mercurial, add the following to the .hgignore file):

     glob:documentation/doxygen/*

  7. Add all the files (including doxygen.exe, build.xml, and DoxyFile) to version control, commit the changes, and publish them to the master repo (e.g. for Mercurial):

     hg add
     hg commit -m"Added Doxygen"
     hg push ssh://user@reposerver://srv/hg/project

Setup Public Key Authentication

The following steps must be done on the BUILD SERVER. If you have multiple build servers/build agents for TeamCity, then you’ll need to duplicate most of these steps on each one. Alternatively, you can use a shared RSA key.

Generate an RSA public/private key pair

  1. Download and Install PuTTY. Look under “A Windows installer for everything except PuTTYtel”
  2. Open puttygen
  3. Click Generate
  4. Click Save Private Key
  5. Choose a location to save (I’ll use C:\keys for this example)
  6. Name the file (I’ll use buildserver.ppk for this example)
  7. Click Save Public Key
  8. Choose the same location used for the private key
  9. Name the file (I’ll use buildserver.pub for this example)

The following steps must be done on the WEB SERVER

Add an account for the buildserver

sudo useradd buildserver

Setup public key authentication for the user

  1. Setup the necessary files, directories, and permissions:

     su buildserver
     mkdir ~/.ssh
     chmod 700 ~/.ssh
     touch ~/.ssh/authorized_keys
     chmod 600 ~/.ssh/authorized_keys
     vim ~/.ssh/authorized_keys

  2. In PuTTYGen, copy the entire contents of the box labeled: “Public key for pasting into OpenSSH authorized_keys file:”
  3. Paste the contents into authorized_keys on the Web Server and save the file.

Setup Rsync

The following steps must be done on the BUILD SERVER.
Since Windows doesn’t natively have rsync, I use the cygwin packaged cwrsync.

  1. Download cwrsync http://www.itefix.no/i2/cwrsync
  2. I ran into problems with cwrsync being used in conjunction with plink where I received the following error:

    “Unable to read from standard input: The parameter is incorrect.”

    The problem apparently is when cygwin programs use redirected stdin/stdout handles. The solution I found to this was to use cygnative. From their website:

    Cygnative.exe is a wrapper for cygwin programs which call native Win32 programs with stdin and/or stdout handle redirected. Without cygnative.exe it is possible that the native program can not read from the handle and receives a “invalid handle” error.

  3. Download cygnative
  4. Create a script which will call cwrsync and pipe the rsync over an SSH connection from the build server to the web server.
  5. I placed this script during testing in project\build\doxygen\cwrsync.cmd. I was only using it for testing before moving it into a TeamCity build step, so I had no plans of committing it to source control, since it ultimately needs the private key, which I don’t want in version control. If you aren’t going to use a TeamCity build step to publish the documentation, you can use this script as a starting point for your solution.

    @ECHO OFF
    REM *****************************************************************
    REM
    REM CWRSYNC.CMD – Batch file template to start your rsync command (s).
    REM
    REM By Tevfik K. (http://itefix.no)
    REM *****************************************************************
    REM Make environment variable changes local to this batch file
    SETLOCAL

    SET LOCAL_DIR=../../documentation/doxygen/html
    SET REMOTE_SERVER=yer_remote_machine_name
    SET REMOTE_USER=yer_remote_user_name
    SET REMOTE_DIR=/var/cache/doxygen/yer_project_name
    SET SSH_KEY=yer_ssh_key
    SET RSYNC_ARGS=-arz
    SET PLINK_CMD=cygnative plink -i %SSH_KEY% -batch
    SET REMOTE_CHMOD=chmod -R a+rx %REMOTE_DIR%

    REM Specify where to find rsync and related files (C:\CWRSYNC)
    SET CWRSYNCHOME=%PROGRAMFILES(x86)%\CWRSYNC

    REM *****************************************************************
    REM Don’t Change Below This Line
    REM *****************************************************************

    REM Set HOME variable to your windows home directory. That makes sure
    REM that ssh command creates known_hosts in a directory you have access.
    SET HOME=%HOMEDRIVE%%HOMEPATH%

    REM Make cwRsync home as a part of system PATH to find required DLLs
    SET CWOLDPATH=%PATH%
    SET PATH=%CWRSYNCHOME%\BIN;%PATH%

    REM Publish the files
    rsync %RSYNC_ARGS% -e "%PLINK_CMD%" "%LOCAL_DIR%" %REMOTE_USER%@%REMOTE_SERVER%:%REMOTE_DIR%

    REM Fix the permissions on the files
    %PLINK_CMD% %REMOTE_USER%@%REMOTE_SERVER% %REMOTE_CHMOD%

    In a command prompt, cd to the directory that the cwrsync.cmd script is and run it

    cd /path/to/cwrsync/script/
    cwrsync.cmd
    

It should ‘just work’. If you get an error running the script or your Web Server isn’t serving up the content, try turning up the verbosity of plink and rsync by adding -v like this:

    SET RSYNC_ARGS=-arzvvvv
    SET PLINK_CMD=cygnative plink -v -i %SSH_KEY% -batch
    

Configure TeamCity

  1. Create a build step to generate the documentation using the build.xml file created earlier in your project’s build configuration:

     Runner type: MSBuild
     Build file path: build\doxygen\build.xml
     Working Directory:
     MSBuild version: Microsoft .NET Framework 4.0
     MSBuild ToolsVersion: 4.0
     Run platform: x86
     Targets: Doxygen
     Command line parameters:
     Reduce test failure feedback time:
     .NET Coverage tools:
     Report type:

     Click “Save”

  2. Create a build step to publish the documentation to the web server.

     Rather than use a CMD file in version control or pushing it out to all the build agents, I prefer to use a build step in the build configuration for the project in TeamCity. To use the script in the TeamCity build step, you have to use %% rather than % because TeamCity will treat the % as a TeamCity build property.

     Runner type: Command Line
     Working directory:
     Run: Custom Script
     Custom script: < the contents of your cwrsync.cmd from earlier, with every '%' replaced with '%%' >
     Report type:

     Click “Save”

  3. Run your build and verify that everything works!


Creating Temporary Files on Win32 in C – Part 2

June 6th, 2011

Last post I talked about the existing options for creating a temporary file on Win32 and the pros and cons of each. This time I’m going to show you the solution that I normally use.

I stated that a temporary file solution would ideally be:

  • Cross-platform
  • Guarantee a unique name
  • Support SBCS/MBCS/Unicode
  • Allow you to have some control over where the file goes
  • Automatically delete itself when you close it or the process exits.

I pretty much gave up on the cross-platform goal after reading through the various existing options. The level of complexity that was going to be required to handle all the nuances just wasn’t worth it to me.

A Partial Solution

I settled on a partial solution. One function which returns a temporary filename, supports SBCS/MBCS/Unicode, checks whether the filename already exists, and allows you to specify either the basepath or the filename (or both). Automatically deleting the file when you close it or the process exits is achieved via CreateFile and FILE_FLAG_DELETE_ON_CLOSE.

Ignoring the cross-platform goal, there are still two problems with this implementation.

  1. By separating the filename creation from the file creation, we still suffer from Race condition 2 “The function only generates filenames which are unique when they are created. By the time the file is opened, it is possible that another process has already created a file with the same name.”
  2. CreateFile returns a HANDLE, not a FILE*, so you have to use the Win32 API calls WriteFile, CloseHandle, etc. rather than the CRT calls fwrite, fclose, etc. [1]
[1] It may be possible to convert a Win32 HANDLE to a FILE* based on the information in this article.

Getting the Temporary Filename

#include <Windows.h>
#include <errno.h>
#include <stdlib.h>
#include <tchar.h>

#define SUCCESS                               +0
#define FAILURE_NULL_ARGUMENT                 -1         
#define FAILURE_INSUFFICIENT_BUFFER           -2
#define FAILURE_API_CALL                      -3
#define FAILURE_INVALID_PATH                  -4
#define FAILURE_FILE_ALREADY_EXISTS           -5

BOOL directory_exists( LPCTSTR p_path )
{
  DWORD attributes = GetFileAttributes( p_path );
  return ( attributes != INVALID_FILE_ATTRIBUTES &&
         (attributes & FILE_ATTRIBUTE_DIRECTORY) );
}

BOOL file_exists( LPCTSTR p_path )
{
  DWORD attributes = GetFileAttributes( p_path );
  return ( attributes != INVALID_FILE_ATTRIBUTES &&
         !(attributes & FILE_ATTRIBUTE_DIRECTORY) );
}

int get_tmp_filename( LPCTSTR p_filename,
                        LPCTSTR p_basepath,
                        LPTSTR  p_tmp_filename,
                        DWORD   tmp_filename_size )
{
  TCHAR   tmp_path[MAX_PATH]  = { 0 };
  TCHAR   tmp_name[MAX_PATH]  = { 0 };

  // Parameter Validation
  if( p_tmp_filename == NULL )
  {
    return FAILURE_NULL_ARGUMENT;
  }

  // Get a basepath
  if( p_basepath != NULL )
  {
    _tcscpy_s( tmp_path, MAX_PATH, p_basepath );
  }
  else
  { // Use the CWD if a basepath wasn't supplied
    _tcscpy_s( tmp_path, MAX_PATH, TEXT(".\\") );
  }
  if( !directory_exists( tmp_path ) )
  {
    return FAILURE_INVALID_PATH;
  }

  // Form the full filename
  if( p_filename != NULL )
  {
    _tcscpy_s( tmp_name, MAX_PATH, tmp_path );
    _tcscat_s( tmp_name, MAX_PATH, TEXT("\\") ); 
    _tcscat_s( tmp_name, MAX_PATH, p_filename );
  }
  else
  { // Get a temporary filename if one wasn't supplied
    if( GetTempFileName( tmp_path, NULL, 0, tmp_name ) == 0 )
    {
      _ftprintf( stderr, TEXT("Error getting temporary filename in %s.\n"), tmp_path );
      return FAILURE_API_CALL;
    }
  }

  // Copy over the result
  switch( _tcscpy_s( p_tmp_filename, tmp_filename_size, tmp_name ) )
  {
  case 0:
    // Make sure that the file doesn't already exist before we suggest it as a tempfile.
    // They will still get the name in-case they intend to use it, but they have been warned.
    if( file_exists( tmp_name ) )
    {
      return FAILURE_FILE_ALREADY_EXISTS;
    }
    return SUCCESS;
    break;
  case ERANGE:
    return FAILURE_INSUFFICIENT_BUFFER;
    break;
  default:
    return FAILURE_API_CALL;
    break;
  }
}

Create a File that is Automatically Deleted when the Last Handle is Closed or the Program Terminates Normally

HANDLE h_file = CreateFile( tmpfilename, 
                          GENERIC_READ, 
                          FILE_SHARE_READ, 
                          NULL,
                          OPEN_EXISTING, 
                          FILE_FLAG_DELETE_ON_CLOSE,
                          NULL );

An Example of Putting It All Together

  HANDLE h_file;
  int    return_code;
  TCHAR  tmpfilename[_MAX_PATH] = { 0 };

  return_code = get_tmp_filename( NULL, NULL, tmpfilename, _MAX_PATH );
  switch( return_code )
  {
  case FAILURE_FILE_ALREADY_EXISTS:
    break;
  case SUCCESS:
    break;
  default:
    return return_code;
  }

  // Create the self-deleting temporary file
  h_file = CreateFile( tmpfilename, 
                         GENERIC_READ, 
                         FILE_SHARE_READ, 
                         NULL,
                         OPEN_EXISTING, 
                         FILE_FLAG_DELETE_ON_CLOSE,
                         NULL );
  if( h_file == INVALID_HANDLE_VALUE )
  {
    _ftprintf( stderr, TEXT("Error creating temporary file %s.\n"), tmpfilename );
    return GetLastError();
  }

Creating Temporary Files on Win32 in C – Part 1

June 2nd, 2011

So you wanna create a temporary file?

You’re in C, on Windows, and you want to create a temporary file. Ideally it would be:

  • Cross-platform
  • Guarantee a unique name
  • Support SBCS/MBCS/Unicode
  • Allow you to have some control over where the file goes
  • Automatically delete itself when you close it or the process exits.

You wish. There are at least four primary ways of creating a temporary file (if you include the Secure CRT, Unicode, MBCS, and TCHAR versions then there are at least 12)! Each of these provides one or two of the ideal features above, with a few providing more when used in combination with other functions. None of them provides all of these features.

In this post we discuss what our basic options are when creating a temporary file on Windows in C. In Part 2 we’ll discuss which method I prefer and how I’ve implemented it.

So tell me, What are my Options?

tmpnam

Creates a unique filename for the current-working directory of the process

  • Pros:
    • Part of the ISO C Standard
  • Cons
    • No unicode support
    • Potentially unsafe if the parameter is non-NULL and insufficiently sized
    • Race condition 1 – If the str parameter is NULL, the returned str points to an internal static buffer that will be overwritten by subsequent calls from the same process.
    • Race condition 2 – The function only generates filenames which are unique when they are created. By the time the file is opened, it is possible that another process has already created a file with the same name.

_wtmpnam

Unicode version of tmpnam

Pros/Cons are the same as tmpnam, except it supports UNICODE instead of SBCS/MBCS and is Windows-only.

_ttmpnam

Generic-Text Routine Mapping. Used with TCHAR to map to tmpnam in MBCS builds and _wtmpnam in UNICODE builds.

Pros/Cons are the same as tmpnam/_wtmpnam except it can support either MBCS/UNICODE at build time and is Windows-only

tmpnam_s

Security-Enhanced CRT version of tmpnam

  • Pros:
    • Security enhancements (avoids buffer overflow and ensures null termination of string)
  • Cons:
    • Unique filenames in CWD only
    • No unicode support
    • Race condition 1 (see tmpnam above)
    • Race condition 2 (see tmpnam above)
    • Windows-only

_wtmpnam_s

Unicode version of tmpnam_s

Pros/Cons are the same as tmpnam_s, except it supports UNICODE instead of SBCS/MBCS.

_ttmpnam_s

Generic-Text Routine Mapping. Used with TCHAR to map to tmpnam_s in MBCS builds and _wtmpnam_s in UNICODE builds.

Pros/Cons are the same as tmpnam_s/_wtmpnam_s except it can support either MBCS/UNICODE at build time

_tempnam

From MSDN:

“_tempnam will generate a unique file name for a directory chosen by the following rules:

– If the TMP environment variable is defined and set to a valid directory name, unique file names will be generated for the directory specified by TMP.
– If the TMP environment variable is not defined or if it is set to the name of a directory that does not exist, _tempnam will use the dir parameter as the path for which it will generate unique names.
– If the TMP environment variable is not defined or if it is set to the name of a directory that does not exist, and if dir is either NULL or set to the name of a directory that does not exist, _tempnam will use the current working directory to generate unique names. Currently, if both TMP and dir specify names of directories that do not exist, the _tempnam function call will fail.

The name returned by _tempnam will be a concatenation of prefix and a sequential number, which will combine to create a unique file name for the specified directory. _tempnam generates file names that have no extension. _tempnam uses malloc to allocate space for the filename; the program is responsible for freeing this space when it is no longer needed.”

  • Pros:
    • There is a way to use a directory other than the default
    • Allocates memory for the return call so you don’t have to guess the size ahead of time
  • Cons:
    • Using a directory other than the default requires changing environment variables for the entire process
    • Holy crap, did you see how complex the rules are just to get a stupid temporary file name?!
    • Only creates a filename, not a file so Race Condition 2 applies again.
    • No Unicode support
    • Allocates memory that the caller has to remember to free (I like to keep my mallocs and frees matched as close together as possible)
    • Windows-only

_wtempnam

Unicode version of _tempnam

Pros/Cons are the same as _tempnam, except it supports UNICODE instead of SBCS/MBCS

_ttempnam

Generic-Text Routine Mapping. Used with TCHAR to map to tempnam in MBCS builds and _wtempnam in UNICODE builds.

Pros/Cons are the same as tempnam/_wtempnam except it can support either MBCS/UNICODE at build time

tmpfile

Creates a temporary file

  • Pros:
    • Part of the ISO standard
    • Creates a file (not a filename) and thus avoids Race Condition 2
    • The temporary file is automatically deleted when the file is closed, the program terminates normally, or when _rmtmp is called (assuming that the CWD doesn’t change)
  • Cons:
    • Creates a temporary file in the root directory – WTH?! This of course, requires Admin privs on Vista and later.

tmpfile_s

Windows-only version of tmpfile with the Secure-CRT enhancements.

Pros/Cons are otherwise the same as tmpfile.

GetTempFileName

Creates a name for a temporary file. If a unique file name is generated, an empty file is created and the handle to it is released; otherwise, only a file name is generated.

MSDN has an article on “Creating and Using a Temporary File” that uses this function. Note that it uses CreateFile which returns a HANDLE not a FILE*.

  • Pros:
    • Supports both Unicode (via GetTempFileNameW macro resolution) and MBCS (via GetTempFileNameA macro resolution)
    • Allows the caller to specify the path (yay!)
    • Allows the caller to specify a filename prefix (up to three characters)
  • Cons:
    • Caller needs to make sure the out buffer is MAX_PATH chars to avoid buffer overflow
    • While it can create the file, it releases the handle, which the caller has to reopen. This can create a security vulnerability where someone else can get to the file before the intended caller does.
    • Windows-only


HOWTO: Upgrade from Subversion 1.4 to 1.6 on CentOS 5

May 27th, 2011

How to upgrade the packages and existing repositories from Subversion 1.4 to 1.6.6 on CentOS 5.

# File: Subversion_1.6_Upgrade.notes
# Auth: burly
# Date: 12/01/2009
# Refs: http://svnbook.red-bean.com/nightly/en/index.html
#       http://dev.antoinesolutions.com/subversion
# Desc: Upgrading from subversion 1.4 to 1.6.6 on CentOS 5
#       NOTE:These instructions are actually fairly generic 
#       in regards to the version of SVN you are upgrading
#       from/to. At the time of writing, it just happened
#       to be 1.4 -> 1.6.6

# Backup each repository
svnadmin dump /srv/svn/<repo> > /backup/svn/<Repo>_20091201_rXXXX.dump

# Backup any hooks or configuration files in 
# /srv/svn/<repo>/hooks and /srv/svn/conf

# Setup yum to allow the package to come in from
# RPMforge (must setup RPMforge repo first).
vim /etc/yum.repos.d/Centos-Base.repo

# Add the following line at the end of each section
# in the Centos-Base.repo
exclude=subversion mod_dav_svn

# Restart the yum update daemon
service yum-updatesd restart

# Upgrade subversion
yum upgrade subversion

# For each repository
#    delete the existing repo
rm -rf /srv/svn/<repo>

# Create a new repo
svnadmin create /srv/svn/<repo> --fs-type fsfs

# Import the data
svnadmin load /srv/svn/<repo> < /backup/srv/<Repo>_20091201_rXXXX.dump

# Restore any hooks or configuration files in 
# /srv/svn/<repo>/hooks and /srv/svn/<repo>/conf

# If you are using Trac, you'll need to resync the repo
trac-admin /srv/trac/<repo> resync

HOWTO: Migrate an Existing RAID Array to a New Array

May 25th, 2011

How to migrate from an existing software RAID 1 array to a new RAID 1 array on CentOS 5.5

# File: Migrate_to_new_RAID_Array_on_CentOS_5.5.notes
# Auth: burly
# Date: 11/20/2010
# Refs: 
# Desc: How migrate from one RAID 1 array to a new one
#       on CentOS 5.5

# I booted from a Knoppix CD to do this. In retrospect,
# I should have used a CentOS LiveCD because the
# tooling, versions, and layout of Knoppix are different 
# which caused some issues. Also, because my OS is x86-64
# but Knoppix is x86, I could not chroot into my system 
# environment, which are ultimately required to create the
# initrd files.

# Boot from the Knoppix CD and drop to a shell

# Start up the existing RAID Array (one of the 2 drives
# from the existing RAID 1 array was on sdc for me)
mdadm --examine --scan /dev/sdc1 >> /etc/mdadm/mdadm.conf
mdadm --examine --scan /dev/sdc2 >> /etc/mdadm/mdadm.conf
mdadm --examine --scan /dev/sdc3 >> /etc/mdadm/mdadm.conf
/etc/init.d/mdadm start
/etc/init.d/mdadm-raid start

# Partition first SATA drive in whatever partition numbers
# and sizes you want. Make sure all partitions that 
# will be in an RAID array use ID type "fd" for RAID 
# autodetect and type "82" for swap. Make sure /boot
# is marked with the bootable flag
fdisk /dev/sda
 
# Repeat for the other disks OR if you are using the
# identical setup on each, you can use sfdisk to 
# simplify your life.
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Create the new boot array
# NOTE: If you don't use metadata 0.90 (but instead 
#       1.0 or 1.1) you'll run into problems with grub.
#       In RAID 1, with metadata 0.90, you can mount
#       the fs on the partition without starting RAID.
#       With newer versions of metadata the superblock
#       for RAID gets written at the beginning of the 
#       partition where the filesystem superblock
#       normally would go. This results in the inability
#       to mount the filesystem without first starting
#       RAID. In the case of your boot partition, this 
#       results in the inability to setup grub and thus boot.
mdadm --create --verbose --metadata=0.90 /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Copy everything over for /boot
mkdir /mnt/oldBoot
mkdir /mnt/newBoot
mkfs.ext3 /dev/md0
mount --options=ro /dev/md0 /mnt/oldBoot
cd /mnt/oldBoot
find . -mount -print0 | cpio -0dump /mnt/newBoot

# Make the new swap
mkswap /dev/sda2
mkswap /dev/sdb2

# Create the new array for LVM. I used metadata
# 0.90 again for consistency AND because I believe
# the version of mdadm in CentOS won't handle newer
# versions of it
mdadm --create --verbose --metadata=0.90 /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# Setup LVM2
pvcreate /dev/md1
vgcreate vg /dev/md1
lvcreate -L8G -nroot vg
lvcreate -L10G -nhome vg
lvcreate -L250G -nvm vg

# Format the filesystems.
# NOTE: I fixed the reserved space to 1% (default is 5%)
#       for the VM LV to save some space and 
#       because in larger, non-root partitions, you
#       don't need all that reserved space.
mkfs.ext3 /dev/vg/root
mkfs.ext3 /dev/vg/home
mkfs.ext3 -m 1 /dev/vg/vm


# Copy everything over for /
mkdir /mnt/oldRoot
mkdir /mnt/newRoot
mount --options=ro /dev/vgOS/lvRoot /mnt/oldRoot
mount /dev/vg/root /mnt/newRoot
cd /mnt/oldRoot
find . -mount -print0 | cpio -0dump /mnt/newRoot

# Copy everything over for /home
mkdir /mnt/oldHome
mkdir /mnt/newHome
mount --options=ro /dev/vgOS/lvHome /mnt/oldHome
mount /dev/vg/home /mnt/newHome
cd /mnt/oldHome
find . -mount -print0 | cpio -0dump /mnt/newHome

# Copy everything over for the VM LV
mkdir /mnt/oldVM
mkdir /mnt/newVM
mount --options=ro /dev/vgOS/lvVM /mnt/oldVM
mount /dev/vg/vm /mnt/newVM
cd /mnt/oldVM
find . -mount -print0 | cpio -0dump /mnt/newVM

# Remove any existing/stale lines in the mdadm.conf file

# Setup the mdadm config on the new /
mdadm -Esb /dev/sda1 >> /mnt/newRoot/etc/mdadm.conf
mdadm -Esb /dev/sda3 >> /mnt/newRoot/etc/mdadm.conf

# Update fstab on the new machine to use the new 
# mount points (e.g. if you changed VolumeGroup or 
# LogicalVolume names)
vim /mnt/newRoot/etc/fstab

# REBOOT TO A CENTOS LIVECD (if you weren't already on one)

# First we chroot
mkdir /mnt/sysimage
mount /dev/vg/root /mnt/sysimage
mount /dev/vg/home /mnt/sysimage/home
mount /dev/md0 /mnt/sysimage/boot
mount --bind /dev /mnt/sysimage/dev
mount -t proc none /mnt/sysimage/proc
mount -t sysfs none /mnt/sysimage/sys
chroot /mnt/sysimage

# Make a new initrd to boot from
cd /boot
mv initrd-2.6.18-194.26.1.el5.img initrd-2.6.18-194.26.1.el5.img.bak
mkinitrd initrd-2.6.18-194.26.1.el5.img  2.6.18-194.26.1.el5

# Setup grub on both of the drives
grub
root(hd0,0)
setup(hd0)
root(hd1,0)
setup(hd1)
quit

# Reboot!

HOWTO: Create a Local Repository Mirror on Ubuntu

May 23rd, 2011

How to create and use a local repository mirror on Ubuntu 9.10. These instructions should work with minor modifications for other versions of Ubuntu.

# File: HOWTO Create a Local Repository Mirror on Ubuntu.notes
# Date: 2010/03/17
# Refs: https://help.ubuntu.com/community/Debmirror
#       http://ubuntuforums.org/archive/index.php/t-599479.html
#       http://www.arsgeek.com/2007/02/14/how-to-set-up-your-own-local-repositories-with-apt-mirror/
#       http://pwet.fr/man/linux/commandes/debmirror
# Desc: How to create a local repository for 
#       Ubuntu 9.10 Karmic Koala.

# -------------------------------------
#           Setup the Server
# -------------------------------------
# Install Ubuntu (I used 9.10) on a machine with plenty of 
# free storage space (I used an 8GB OS vmdk and an 80GB data 
# vmdk used through LVM so that I could easily add/grow to
# it in the future if necessary).

# Create the mirror user, I'll be using ubuntu.
# NOTE: You don't have to add this user to the wheel but if you don't, the steps below that require sudo
#       will require you to run them from an account with root or wheel access and may also require
#       that you change the ownership/group of files/directories afterwards.
sudo useradd -m ubuntu -Gusers,wheel
sudo passwd ubuntu

# UPDATE 2012/01/30: As Dave points out below, you'll need to create your mirrorkeyring folder with the correct user account.
#                    If you aren't already running as that user, you can change your shell using su
su - ubuntu

# Update your apt-get package listing
sudo apt-get update

# Install debmirror
sudo apt-get install debmirror

# Create the location for the repo data to live
sudo mkdir -p /mirror/ubuntu

# Set the permissions for the repo data
sudo chown -R ubuntu:ubuntu /mirror/ubuntu
sudo chmod -R 771 /mirror/ubuntu

# Setup the keyring for correctly verifying Release signatures
mkdir -p /home/ubuntu/mirrorkeyring
gpg --no-default-keyring --keyring /home/ubuntu/mirrorkeyring/trustedkeys.gpg --import /usr/share/keyrings/ubuntu-archive-keyring.gpg

# Create the mirrorbuild.sh script
vim /home/ubuntu/mirrorbuild.sh

# NOTE: The ubuntu community documentation has you using 
#       the HTTP protocol for the mirror build script
#       however, I prefer rsync because we can rate limit.
#       When the download is going to take days,
#       I'd like to be able to use my connection in
#       the interim.

# --------------------------------------------
# BEGIN MIRRORBUILD.SH SCRIPT
# --------------------------------------------

#!/bin/bash

## Setting variables with explanations.

#
# Don't touch the user's keyring, have our own instead
#
export GNUPGHOME=/home/ubuntu/mirrorkeyring

# Arch=         -a      # Architecture. 
# For Ubuntu this can be i386, amd64, powerpc and/or sparc (sparc support begins with dapper)
# 
# Comma separated values
arch=i386,amd64

# Minimum Ubuntu system requires main, restricted
# Section=      -s      # Section
# (One of the following - main/restricted/universe/multiverse).
# You can add extra file with $Section/debian-installer.
# ex: main/debian-installer,universe/debian-installer,multiverse/debian-installer,restricted/debian-installer
section=main,restricted,universe,multiverse

# Release=      -d      # Release of the system
# (Dapper, Edgy, Feisty, Gutsy, Hardy, Intrepid, Jaunty, Karmic),
# and the -updates and -security ( -backports can be added if desired)
dist=karmic,karmic-updates,karmic-security

# Server=       -h      # Server name,
# minus the protocol and the path at the end
# Set this to the mirror you want to sync from, e.g.
# au.archive.ubuntu.com in Australia or ca.archive.ubuntu.com
# in Canada. A suitable mirror can be found in your own
# /etc/apt/sources.list file, assuming you have Ubuntu installed.
server=us.archive.ubuntu.com

# Dir=          -r      # Path from the main server,
# so http://my.web.server/$dir, server dependent.
# Lead with a '/' for everything but rsync,
# where we lead with a ':'
inPath=:ubuntu

# Proto=        -e      # Protocol to use for transfer
# (http, ftp, hftp, rsync)
# Choose one - http is the most common, and the chosen
# service must be available on the server you point at.
# NOTE: debmirror uses -aIL --partial by default.
#       However, if you provide the --rsync-options
#       parameter (which we do) then you HAVE to provide
#       it -aIL --partial in addition to whatever you
#       want to add (e.g. --bwlimit). If you don't,
#       debmirror will exit with thousands of files
#       missing.
proto=rsync
rsyncoptions="-aIL --partial --bwlimit=100"

# Outpath=              # Directory to store the mirror in
# Make this a full path to where you want to mirror the material.
#
outPath=/mirror/ubuntu/

# The --nosource option only downloads debs and not deb-src's
# The --progress option shows files as they are downloaded
# --source \ in the place of --no-source \ if you want sources also.
# --nocleanup  Do not clean up the local mirror after mirroring
# is complete. Use this option to keep older repository
# Start script
#
debmirror       -a $arch \
                --no-source \
                -s $section \
                -h $server \
                -d $dist \
                -r $inPath \
                --progress \
                -e $proto \
                --rsync-options="$rsyncoptions" \
                $outPath

# -----------------------------------------------------
# END MIRRORBUILD.SH SCRIPT
# -----------------------------------------------------

# Add execute permissions on the mirrorbuild.sh script
chmod +x mirrorbuild.sh

# Run the script
./mirrorbuild.sh

# Go home, kick back, have a beer while it downloads 43GB
# (in the case of karmic, karmic-updates, karmic-security for
# i386 and amd64)
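While debmirror runs, the simplest progress gauge is the size of the mirror tree. A sketch is below, demonstrated on a scratch directory so it is safe to run anywhere; on the real server, point du at /mirror/ubuntu instead.

```shell
# Build a small scratch tree standing in for /mirror/ubuntu
demo=/tmp/mirror-du-demo
mkdir -p "$demo/pool"
dd if=/dev/zero of="$demo/pool/demo.deb" bs=1024 count=64 2>/dev/null

# Report total KB on disk so far (du -sk prints size in KB)
kb=$(du -sk "$demo" | awk '{print $1}')
echo "${kb} KB mirrored"
rm -rf "$demo"
```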

# --------------------------------------
#          Setup the mirror
# --------------------------------------
# Install apache2
sudo apt-get install apache2

# Symlink the mirror data into the web root
sudo ln -s /mirror/ubuntu /var/www/ubuntu

# Point your browser at http://localhost/ubuntu and
# you should see your pool!

# -------------------------------------
#        Updating the Repo Mirror
# -------------------------------------
# To update the repo mirror, just execute the mirrorbuild.sh
# script used to initially build it.
./mirrorbuild.sh

# -------------------------------------
#   Configure Clients to Use this Repo
# -------------------------------------
# Update the apt sources list
cd /etc/apt
sudo mv sources.list sources.list.orig
sudo sensible-editor sources.list

# Replace 'mirrorbox' with your server's DNS name 
# (e.g. karmic-repo.test.com)
# -----------------------------------------------------------------------------
# BEGIN SOURCES.LIST
# -----------------------------------------------------------------------------
# Local network mirror sources.
deb http://mirrorbox/ubuntu karmic main restricted universe multiverse
deb http://mirrorbox/ubuntu karmic-updates main restricted universe multiverse
deb http://mirrorbox/ubuntu karmic-security main restricted universe multiverse
# -----------------------------------------------------------------------------
# END SOURCES.LIST
# -----------------------------------------------------------------------------

# Test to see if you are able to pull down updates 
# from the new mirror
sudo apt-get update
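If you'd rather not hand-edit sources.list on every client, a sed one-liner can retarget existing archive.ubuntu.com entries at the mirror. This is a sketch against a demo file; "mirrorbox" is the same placeholder hostname used above.

```shell
# Demo sources.list containing a stock archive entry
printf 'deb http://us.archive.ubuntu.com/ubuntu karmic main restricted universe multiverse\n' \
    > /tmp/sources.list.demo

# Point any *.archive.ubuntu.com entry at the local mirror
sed -i 's|http://[a-z.]*archive\.ubuntu\.com/ubuntu|http://mirrorbox/ubuntu|' /tmp/sources.list.demo

cat /tmp/sources.list.demo
# deb http://mirrorbox/ubuntu karmic main restricted universe multiverse
```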

Erasing a hard drive using a Linux LiveCD

May 20th, 2011

Normally when I need to erase a hard drive, I use dban. Recently however, I’ve run into issues with dban not detecting disks (I’m guessing it doesn’t support the I/O controller/drivers). While it isn’t as secure, a decent and easy way is to just zero out the hard drive using a Linux LiveCD (besides, if you really want it done securely, physically destroy the drive). Ubuntu is my usual distro of choice but there are tons out there that will work.

dd if=/dev/zero of=/dev/sda bs=4096

To get an update on its progress, you can signal it from another terminal using

pkill -USR1 ^dd
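The same wipe can be rehearsed on a scratch file before pointing dd at a real disk. A sketch below zeroes an 8 KB file with the same dd invocation (plus a count so dd stops at end of file) and verifies with cmp that every byte is zero afterwards.

```shell
# Create an 8 KB scratch file of random data standing in for the disk
scratch=$(mktemp)
head -c 8192 /dev/urandom > "$scratch"

# Zero it in place (conv=notrunc overwrites rather than truncating;
# count=2 x bs=4096 covers the whole 8 KB file)
dd if=/dev/zero of="$scratch" bs=4096 count=2 conv=notrunc 2>/dev/null

# Compare the first 8192 bytes against /dev/zero
if cmp -s -n 8192 "$scratch" /dev/zero; then
    result="wiped"
else
    result="data remains"
fi
echo "$result"
rm -f "$scratch"
```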

HOWTO: Setup a DHCP Server on Ubuntu 9.10

May 20th, 2011

Setting up a DHCP server on a LAN with Ubuntu 9.10. These instructions should also basically work on Ubuntu 10.x.

# File:	HOWTO Configure a DHCP Server on Ubuntu.notes
# Date:	2010/03/24
# Refs: https://help.ubuntu.com/community/dhcp3-server
#       http://www.ubuntugeek.com/how-to-install-and-configure-dhcp-server-in-ubuntu-server.html
# Desc:	Setting up a DHCP server on a LAN with Ubuntu 9.10

# Install DHCP3
sudo apt-get install dhcp3-server

# Specify the interface(s) that dhcp3-server should manage
# in /etc/default/dhcp3-server
INTERFACES="eth0"

# Set a static IP for the DHCP server itself on the
# interfaces that it will manage in /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 192.168.72.1
    netmask 255.255.255.0
    network 192.168.72.0
    gateway 192.168.72.254
    broadcast 192.168.72.255

# Edit the /etc/dhcp3/dhcpd.conf configuration file
# I'm going to be running a 192.168.72.0 host-only 
# vmnet on eth0 with fixed addresses for several machines 
# in my example here
ddns-update-style none;
log-facility local7;
authoritative;

subnet 192.168.72.0 netmask 255.255.255.0 {

    option routers              192.168.72.254;
    option subnet-mask          255.255.255.0;
    option broadcast-address    192.168.72.255;
    option domain-name-servers  192.168.72.1;
    option ntp-servers          192.168.72.1;
    default-lease-time          7200;
    max-lease-time              86400;

    host helium {
            hardware ethernet 00:0c:29:c6:de:09;
            fixed-address 192.168.72.2;
    }
    host lithium {
            hardware ethernet 00:0c:29:d8:d5:7f;
            fixed-address 192.168.72.3;
    }
    host beryllium {
            hardware ethernet 00:0c:29:b6:93:41;
            fixed-address 192.168.72.4;
    }
    host boron {
            hardware ethernet 00:0c:29:3f:c6:f3;
            fixed-address 192.168.72.5;
    }
}
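The host blocks above all follow one fixed template, so adding machines can be scripted. Below is a hypothetical helper (gen_host is my name for it, not part of dhcpd) that emits entries in the same format; append its output inside the subnet block.

```shell
# Hypothetical helper: print a dhcpd.conf host entry for a
# hostname, MAC address, and fixed IP, matching the format above.
gen_host() {
    printf '    host %s {\n' "$1"
    printf '            hardware ethernet %s;\n' "$2"
    printf '            fixed-address %s;\n' "$3"
    printf '    }\n'
}

# Example: generate an entry for a new machine
gen_host carbon 00:0c:29:aa:bb:cc 192.168.72.6
```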