Installing Thoughtworks Mingle on the Amazon cloud

I had to install an instance of Thoughtworks Studios' agile project management tool, Mingle. My favorite tracking tool is Excel, but such is life.

The instance does not need to be highly available, but I don't want to risk data loss and I want to minimize the cost of hosting. So I decided to put the instance out on the Amazon cloud.

What follows are instructions targeted at relative newbies to PostgreSQL and Amazon Web Services (AWS) who have some basic knowledge of Unix. I've tried to give credit wherever I based what I did on content from the web.

The whole process takes several hours with a clean set of notes, but it took me a couple of days of trial and error to get through.

Create and deploy an Amazon Machine Image (AMI)

I used Elastic Server to build a clean Ubuntu Linux AMI with the recommended versions of the Java JDK (1.5) and PostgreSQL (8.3). Through the Elastic Server site I deployed the AMI to a new small instance in our Amazon EC2 account.

Note: An implementation detail of the Elastic Server image is that the shell account user is different from the PostgreSQL user account, so in the following instructions, most bash shell commands run as postgres or root via sudo.

Assign an Amazon Elastic IP to the instance and register a domain name to that IP

Since this is the cloud, I can’t rely on the permanence of the instance. Amazon supplies the Elastic IP service to ensure you have a consistent way to reference your sites regardless of the particular instance the site is hosted on.

Once the AMI was running on a new small instance within our EC2 account, I logged into our AWS console and created and assigned an Elastic IP address to the instance and registered a DNS entry against that IP address.

Move PostgreSQL data to an Elastic Block Store (EBS) volume

The small instance created by Elastic Server uses the instance store for the boot volume. This is fine for the AMI itself and for the Mingle installation, which is easy to install and upgrade, but not for the database storage itself.

We decided the best approach for us was to locate the data store on an EBS volume with copies of custom config files and scripts. We also wanted to create nightly database dumps on that volume. Finally, we'd enable snapshots of the EBS volume, giving us a stable store with convenient backups and restores.

Create an EBS Volume (AWS Management Console)

  1. Create a 10GB volume in the same availability zone as the instance
  2. Create a snapshot of the volume
  3. Attach the volume to the instance (button in the console UI Volumes tab – takes a while…)
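The three console steps can also be scripted with the aws CLI (a tool that postdates this writeup and is not what I used); the zone and instance ID below are placeholders you would substitute with your own. A sketch, not the procedure from this article:

```shell
# Sketch: create, snapshot, and attach a 10GB EBS volume via the aws CLI.
# Zone and instance ID are placeholder arguments.
create_and_attach_volume() {
  zone=$1         # e.g. us-east-1a – must match the instance's zone
  instance_id=$2  # e.g. i-0123456789abcdef0

  # 1. Create a 10GB volume in the same zone as the instance
  volume_id=$(aws ec2 create-volume --size 10 --availability-zone "$zone" \
      --query VolumeId --output text)

  # 2. Create a snapshot of the volume
  aws ec2 create-snapshot --volume-id "$volume_id"

  # 3. Attach the volume to the instance as /dev/sdf (takes a while to complete)
  aws ec2 attach-volume --volume-id "$volume_id" \
      --instance-id "$instance_id" --device /dev/sdf
}
```

The function is only defined here; run it from a machine with AWS credentials configured.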

Format and mount the volume from within the instance

Requirements: An attached EBS Volume

Assumptions:

  • Your EBS volume is attached as device /dev/sdf and you are creating one partition. The device name is listed in the AWS console in the Attachment Information column of the EBS Volumes tab.
  • You want to mount the volume at /data
  1. SSH into your instance. AWS Console -> Instances -> select the specific instance -> Instance Management -> Connect -> a terminal opens; log in as cftuser
  2. Type sudo fdisk /dev/sdf
  3. At “Command” type n
  4. At “Primary or Extended” type p
  5. At “Partition number” type 1
  6. Press “Return” two times to accept the default cylinder values
  7. Type w to write the partition table and exit
  8. Type sudo mkfs.ext3 /dev/sdf1 to format the new partition with an ext3 file system
  9. Type sudo mkdir /data to create the directory where we will mount the new volume
  10. Type sudo mount /dev/sdf1 /data to mount the drive
  11. (Optional) Edit /etc/fstab to mount the drive automatically at boot. Note this will not automatically attach the volume to your instance at boot; you still need to do that through the AWS console.
    • sudo vi /etc/fstab
    • Add a line: /dev/sdf1 /data ext3 defaults 0 0
    • Type :wq and press Return
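The interactive steps above can be collected into one script; the printf line feeds fdisk the same keystrokes the list walks through (n, p, 1, two Returns, w). A sketch assuming the device is /dev/sdf; it is only defined here and would be run as root on the instance:

```shell
# Compose the /etc/fstab entry for a given partition and mount point
fstab_line() {
  echo "$1 $2 ext3 defaults 0 0"
}

# Sketch of steps 2-11: partition, format, mount, persist (run as root).
format_and_mount() {
  device=/dev/sdf
  part=${device}1
  mountpoint=/data

  # n, p, 1, two Returns (default cylinders), w – the same answers as above
  printf 'n\np\n1\n\n\nw\n' | fdisk "$device"

  mkfs.ext3 "$part"            # step 8: ext3 file system on the new partition
  mkdir -p "$mountpoint"       # step 9: create the mount point
  mount "$part" "$mountpoint"  # step 10: mount the drive

  # step 11: automount at boot (the volume must still be attached at boot)
  fstab_line "$part" "$mountpoint" >> /etc/fstab
}
```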

Instructions based on notes from https://learningspaces.njit.edu/content/how-format-and-mount-ebs-volume

Move PostgreSQL data directory in PostgreSQL config

  1. Stop the PostgreSQL server: sudo /etc/init.d/postgresql-8.3 stop
  2. Edit postgresql.conf: sudo vi /etc/postgresql/8.3/main/postgresql.conf
    • Change from: data_directory = '/var/lib/postgresql/8.3/main'
    • To: data_directory = '/data/postgresql/'
  3. Copy the data directory to the volume: sudo cp -r /var/lib/postgresql/8.3/main /data
  4. Rename it to match the config: sudo mv /data/main /data/postgresql
  5. Fix ownership: sudo chown -R postgres:postgres /data/postgresql/
  6. Move the old main directory aside: sudo mv /var/lib/postgresql/8.3/main /var/lib/postgresql/8.3/data_bak
  7. Restart the server: sudo /etc/init.d/postgresql-8.3 start
  8. Copy the edited postgresql.conf to /data/config_copies
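The steps above can be sketched as a single script. Paths follow the list; the function is only defined here and would run as root on the instance after postgresql.conf has been edited:

```shell
# Sketch: relocate the PostgreSQL data directory onto the EBS volume.
# Assumes postgresql.conf already points data_directory at /data/postgresql/.
move_pg_data() {
  /etc/init.d/postgresql-8.3 stop                # stop the server first

  cp -r /var/lib/postgresql/8.3/main /data       # copy data onto the volume
  mv /data/main /data/postgresql                 # rename to match the config
  chown -R postgres:postgres /data/postgresql    # postgres must own its data

  # keep the old directory around until the new location is proven good
  mv /var/lib/postgresql/8.3/main /var/lib/postgresql/8.3/data_bak

  /etc/init.d/postgresql-8.3 start               # restart on the new directory

  # keep a copy of the edited config on the volume
  mkdir -p /data/config_copies
  cp /etc/postgresql/8.3/main/postgresql.conf /data/config_copies/
}
```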

Basic PostgreSQL server setup

Change the PostgreSQL postgres user password in order to access the server. As the postgres Linux user execute the psql command:

  1. sudo -u postgres psql template1
  2. \password postgres
  3. Ctrl-D (to exit the PostgreSQL prompt)

Based on notes from https://help.ubuntu.com/community/PostgreSQL

Create a PostgreSQL database called “mingle”

  • sudo -u postgres createdb mingle
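A quick sanity check (my addition, not part of the original steps) is to confirm the new database shows up in the server's listing:

```shell
# Check that a "mingle" database is listed; prints a confirmation if so.
# Run on the instance after createdb.
verify_mingle_db() {
  sudo -u postgres psql -lqt | cut -d '|' -f 1 | grep -qw mingle \
    && echo "mingle database present"
}
```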

Turn off the software firewall via the Elastic Server Management Console on your instance at port 2999

Mingle needs access to traffic across multiple ports, which are blocked by rule by the Ubuntu firewall. In a situation where you need to lock this server down, you'll have to figure out which specific ports to open. I just turned off the Ubuntu firewall; our instance of Mingle ain't that mission critical.

  1. Open up access to tcp traffic to port 2999 in AWS.
    • Enable port 2999 in the security group to which your instance is assigned in AWS. This is listed in the Security Group column of the instance list in the console. Go to Security Groups, select the group and add a rule to allow tcp traffic on port 2999.
  2. Browse to the Elastic Server manager and turn off the firewall: http://<your-host>:2999/firewall

Install Mingle http://www.thoughtworks-studios.com/mingle/3.1/help/installing_mingle.html

  1. Download Mingle
  2. Secure copy the install package to your small instance:
    scp mingle_unix_<version>.tar cftuser@<your-host>:~
  3. sudo mv mingle_unix_<version>.tar /usr/local
  4. cd /usr/local
  5. sudo tar -xvf mingle_unix_<version>.tar
  6. sudo ln -s mingle_3_1/ mingle
  7. sudo ln -s /usr/local/mingle/MingleServer /usr/local/bin/MingleServer
  8. sudo mkdir /home/mingle
  9. sudo mkdir /home/mingle/data
  10. sudo chown cftuser /home/mingle
  11. sudo chown cftuser /home/mingle/data
  12. Edit the Mingle config to set the swap directory and to run on port 80:

    -Dmingle.swapDir=/home/mingle/data/tmp
    -Dmingle.appContext=/
    -Dmingle.memcachedPort=11211
    -Dmingle.port=80
    -Dmingle.logDir=/usr/local/log
    -Dmingle.memcachedHost=127.0.0.1

  13. Start the instance (see below) and wait for the server to start up – takes minutes
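The on-instance steps can be sketched as one script. The tarball name is a placeholder for whichever version you downloaded; the function is only defined here and would run as root on the instance:

```shell
# Sketch: unpack Mingle under /usr/local and prepare its data directory.
# MINGLE_TAR is a placeholder name – substitute your actual download.
install_mingle() {
  MINGLE_TAR=mingle_unix_3_1.tar   # placeholder, adjust to your tarball

  mv "/home/cftuser/$MINGLE_TAR" /usr/local
  cd /usr/local
  tar -xvf "$MINGLE_TAR"

  ln -s mingle_3_1/ mingle                                  # versionless symlink
  ln -s /usr/local/mingle/MingleServer /usr/local/bin/MingleServer

  mkdir -p /home/mingle/data                                # swap/data directory
  chown -R cftuser /home/mingle                             # owned by the shell user
}
```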

Starting Mingle: sudo MingleServer --mingle.dataDir=/home/mingle/data start

Stopping Mingle: sudo MingleServer stop

Run the Mingle setup (refer to thoughtworks studio help)

  1. Browse to the server
  2. Run the install wizard
  3. Create your first user
  4. Enter your license key

Manually managing dumps of the mingle database

  • Dumping a single database (from the cftuser home directory)
    • sudo -u postgres /usr/bin/pg_dump --create --oids --format=c --verbose --file="dump/mingle.backup" mingle
  • Renaming the mingle database
    • Stop the mingle server: sudo MingleServer stop
    • sudo -u postgres psql template1
    • ALTER DATABASE mingle RENAME TO mingleback;
  • Dropping the mingle database
    • sudo -u postgres /usr/bin/dropdb -i -e mingleback
  • Restoring single database
    • sudo -u postgres createdb mingle
    • sudo -u postgres /usr/bin/pg_restore --verbose --dbname=mingle dump/mingle.backup (note: since the database was just created with createdb, --create is not needed here)
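The rename/drop/restore commands above chain into a refresh cycle. Sketched below as a function (only defined here; it would run on the instance by a user with sudo rights, and assumes a dump file path is passed in):

```shell
# Sketch: replace the live mingle database with a dump – stop Mingle,
# park the old database under a new name, restore the dump into a fresh "mingle".
refresh_mingle_db() {
  dump=$1   # path to a pg_dump custom-format file, e.g. dump/mingle.backup

  sudo MingleServer stop

  # park the current database, then recreate and restore
  sudo -u postgres psql -d template1 -c 'ALTER DATABASE mingle RENAME TO mingleback;'
  sudo -u postgres createdb mingle
  sudo -u postgres pg_restore --verbose --dbname=mingle "$dump"

  # once satisfied with the restore, drop the parked copy
  sudo -u postgres dropdb -i -e mingleback
}
```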

Setting up a scheduled cron job to dump the mingle database nightly at 7pm

  1. Create a backup script and place it in /data named backupmingledb.sh

  2. #!/bin/sh

    #Based on PostgreSQL DUMP RECIPE by Frederik Dannemare
    # Must run as postgres user!

    # Script to automate database backups onto the EBS volume
    # at /data/backupmingledb.sh with owner/group postgres
    # dump dir = /data/database_backups/ again owner/group postgres
    # entered as crontab for postgres user

    export PGPASSWORD=
    export PATH=$PATH:/usr/bin
    DATE=`date +%y%m%d-%H%M`

    echo "Beginning to dump mingle database..."
    DUMP_DIR=/data/database_backups/
    DUMP_FILE=mingledump-$DATE.backup
    pg_dump --username=postgres --create --oids --format=c --verbose --file="$DUMP_DIR$DUMP_FILE" mingle 2> "${DUMP_DIR}pg_dump.out"
    echo "Mingle database has been dumped to $DUMP_DIR$DUMP_FILE"

    exit 0

  3. Change the owner/group to postgres: sudo chown postgres:postgres backupmingledb.sh
  4. Make the script executable by postgres user: sudo -u postgres chmod 755 backupmingledb.sh
  5. Create a cron job to invoke the script
    • sudo -u postgres crontab -e
    • To append a new line type: o
    • Type: 0 19 * * * /data/backupmingledb.sh
