I had to install an instance of ThoughtWorks Studios' agile project management tool, Mingle. My favorite tracking tool is Excel, but such is life.
The instance does not need to be highly available, but I don't want to risk data loss and I want to minimize hosting costs, so I decided to put the instance out on the Amazon cloud.
What follows are instructions aimed at relative newbies to PostgreSQL and Amazon Web Services (AWS) who have some basic knowledge of Unix. I've tried to give credit wherever I based what I did on content from the web.
The whole process takes several hours with a clean set of notes, but it took me a couple of days of trial and error to get through.
Create and deploy an Amazon Machine Image (AMI)
I used Elastic Server to build a clean Ubuntu Linux AMI with the recommended versions of the Java JDK (1.5) and PostgreSQL (8.3). Through the Elastic Server site, I deployed the AMI to a new small instance in our Amazon EC2 account.
Note: An implementation detail of the Elastic Server image is that the shell account user is different from the PostgreSQL user account, so in the instructions that follow most shell commands are run as postgres or root using sudo.
Assign an Amazon Elastic IP to the instance and register a domain name to that IP
Since this is the cloud, I can’t rely on the permanence of the instance. Amazon supplies the Elastic IP service to ensure you have a consistent way to reference your sites regardless of the particular instance the site is hosted on.
Once the AMI was running on a new small instance within our EC2 account, I logged into our AWS console, created an Elastic IP address, assigned it to the instance, and registered a DNS entry against that IP address.
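If you'd rather script this step than click through the console, the EC2 API tools can do the same thing. This is just a sketch; it assumes the tools and your AWS credentials are already set up on your workstation, and the instance ID and IP address below are placeholders:

# Allocate a new Elastic IP address (the command prints the address)
ec2-allocate-address
# Associate the new address with your running instance
# (i-xxxxxxxx and 203.0.113.10 are placeholders)
ec2-associate-address -i i-xxxxxxxx 203.0.113.10

The DNS entry itself is still registered with your DNS provider, pointing at the Elastic IP.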
Move PostgreSQL data to an Elastic Block Store (EBS) volume
The small instance created by Elastic Server uses the instance store for the boot volume. That is fine for the AMI itself and for the Mingle installation, which is easy to install and rev, but not for the database storage.
We decided the best approach for us was to locate the data store on an EBS volume along with copies of custom config files and scripts. We also wanted to create nightly database dumps on that volume. Finally, we'd enable snapshots of the EBS volume, giving us a stable store with convenient backup and restore.
Create an EBS Volume (AWS Management Console)
- Create a 10GB volume in the same availability zone as the instance
- Create a snapshot of the volume
- Attach the volume to the instance (button on the Volumes tab of the console UI – takes a while…)
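The same volume work can be scripted with the EC2 API tools if you prefer. A sketch, assuming the tools and credentials are configured; the zone, volume ID, and instance ID are placeholders:

# Create a 10GB volume in the same availability zone as the instance
ec2-create-volume --size 10 --availability-zone us-east-1a
# Take an initial snapshot of the volume
ec2-create-snapshot vol-xxxxxxxx
# Attach the volume to the instance as /dev/sdf
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf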
Format and mount the volume from within the instance
Requirements: An attached EBS Volume
Assumptions:
- Your EBS volume is attached as /dev/sdf and you are creating a single partition on it. This device name is listed in the Attachment Information column of the Volumes tab in the AWS console.
- You want to mount the volume at /data
- SSH into your instance: AWS Console -> Instances -> select the specific instance -> Instance Management -> Connect -> a terminal opens; log in as cftuser
- type sudo fdisk /dev/sdf
- At “Command” type n
- At “Primary or Extended” type p
- At “Partition number” type 1
- Press “Return” two times to accept the default cylinder values
- Press “w” to write the partition table and exit
- Type sudo mkfs.ext3 /dev/sdf1 to format the new partition with an ext3 file system
- Type sudo mkdir /data to create a directory called data, which we will use as the mount point for the new volume
- Type sudo mount /dev/sdf1 /data to mount the volume
- (Optional) Edit /etc/fstab to automount the volume at boot. Note that this will not automatically attach the EBS volume to your instance at boot; that still has to be done through the AWS console.
- sudo vi /etc/fstab
- add a line: /dev/sdf1 /data ext3 defaults 0 0
- type :wq and press Return to save and exit
Instructions based on notes from https://learningspaces.njit.edu/content/how-format-and-mount-ebs-volume
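Before moving on, it's worth confirming the volume really is formatted and mounted. A quick check, assuming the single partition /dev/sdf1 created above:

# Should report an ext3 filesystem on the partition
sudo file -s /dev/sdf1
# Should show the new volume mounted at /data with roughly 10GB of space
df -h /data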
Move PostgreSQL data directory in PostgreSQL config
- Stop the PostgreSQL server: sudo /etc/init.d/postgresql-8.3 stop
- Edit postgresql.conf: sudo vi /etc/postgresql/8.3/main/postgresql.conf
- Change from: data_directory = '/var/lib/postgresql/8.3/main'
- To: data_directory = '/data/postgresql/'
- Copy the data directory to the EBS volume: sudo cp -r /var/lib/postgresql/8.3/main /data
- Rename it: sudo mv /data/main /data/postgresql
- Fix ownership: sudo chown -R postgres:postgres /data/postgresql/
- Move the original main directory out of the way: sudo mv /var/lib/postgresql/8.3/main /var/lib/postgresql/8.3/data_bak
- Restart the server: sudo /etc/init.d/postgresql-8.3 start
- Copy the edited postgresql.conf to /data/config_copies (see the sketch below)
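For that last step, something like the following works; this is just a sketch assuming the default Ubuntu config path used above:

# Keep a copy of the edited config on the EBS volume
sudo mkdir -p /data/config_copies
sudo cp /etc/postgresql/8.3/main/postgresql.conf /data/config_copies/
# Confirm PostgreSQL is actually serving out of the new data directory
sudo -u postgres psql -c "SHOW data_directory;"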
Basic PostgreSQL server setup
Change the PostgreSQL postgres user password in order to access the server. As the postgres Linux user execute the psql command:
- sudo -u postgres psql template1
- \password postgres
- Ctrl-D (to exit the psql prompt)
Based on notes from https://help.ubuntu.com/community/PostgreSQL
Create a PostgreSQL database called “mingle”
- sudo -u postgres createdb mingle
Turn off the software firewall via the Elastic Server Management Console on your instance at port 2999
Mingle needs traffic across multiple ports, which the Ubuntu firewall blocks by default. If you need to lock your server down, you'll have to figure out which specific ports to open; I just turned off the Ubuntu firewall. Our instance of Mingle ain't that mission critical.
- Open up access to tcp traffic to port 2999 in AWS.
- Enable port 2999 in the security group to which your instance is assigned in AWS. This is listed in the Security Group column of the instance list in the console. Go to Security Groups, select the group and add a rule to allow tcp traffic on port 2999.
- Browse to the Elastic Server manager and turn off the firewall: http://<your-instance-address>:2999/firewall
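If you want to script the AWS side of this, the security group rule can be added with the EC2 API tools as well. A sketch: the group name is a placeholder, port 80 is included because Mingle will be running there, and the ufw line is only an assumption about how the image's firewall is managed:

# Allow traffic to the Elastic Server console (2999) and to Mingle itself (80)
# ("default" is a placeholder for your security group name)
ec2-authorize default -P tcp -p 2999 -s 0.0.0.0/0
ec2-authorize default -P tcp -p 80 -s 0.0.0.0/0
# Alternative to the Elastic Server console, if the image uses ufw:
sudo ufw disable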
Install Mingle http://www.thoughtworks-studios.com/mingle/3.1/help/installing_mingle.html
- Download Mingle
- Secure copy the install package to your small instance: scp mingle_unix_<version>.tar cftuser@<your-instance-address>:~
- sudo mv mingle_unix_<version>.tar /usr/local
- cd /usr/local
- sudo tar -xvf mingle_unix_<version>.tar
- sudo ln -s mingle_3_1/ mingle
- sudo ln -s /usr/local/mingle/MingleServer /usr/local/bin/MingleServer
- sudo mkdir /home/mingle
- sudo mkdir /home/mingle/data
- sudo chown cftuser /home/mingle
- sudo chown cftuser /home/mingle/data
- Edit the Mingle config to set the swap directory and to run on port 80:
-Dmingle.swapDir=/home/mingle/data/tmp
-Dmingle.appContext=/
-Dmingle.memcachedPort=11211
-Dmingle.port=80
-Dmingle.logDir=/usr/local/log
-Dmingle.memcachedHost=127.0.0.1
- Start the instance (see below) and wait for the server to start up – takes minutes
Starting Mingle: sudo MingleServer --mingle.dataDir=/home/mingle/data start
Stopping Mingle: sudo MingleServer stop
Run the Mingle setup (refer to thoughtworks studio help)
- Browse to the server
- Run the install wizard
- Create your first user
- Enter your license key
Manually managing dumps of the mingle database
- Dumping a single database (from cftuser folder)
- sudo -u postgres /usr/bin/pg_dump --create --oids --format=c --verbose --file="dump/mingle.backup" mingle
- Renaming the mingle database
- Stop the mingle server: sudo MingleServer stop
- sudo -u postgres psql template1
- ALTER DATABASE mingle RENAME TO mingleback;
- Dropping the mingle database
- sudo -u postgres /usr/bin/dropdb -i -e mingleback
- Restoring single database
- sudo -u postgres createdb mingle
- sudo -u postgres /usr/bin/pg_restore --create --verbose --dbname=mingle dump/mingle.backup
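If you end up doing this restore dance often, the same steps can be wrapped in a small script. This is only a sketch: the script name and the dump-file argument are hypothetical, and it drops --create since the database is created explicitly first.

#!/bin/sh
# restoremingledb.sh - sketch of the manual restore steps above
# Usage: sudo ./restoremingledb.sh /data/database_backups/mingledump-YYMMDD-HHMM.backup
DUMP_FILE=$1
# Stop Mingle so nothing is writing to the database
MingleServer stop
# Keep the current database around under a different name
sudo -u postgres psql -c "ALTER DATABASE mingle RENAME TO mingleback;"
# Recreate an empty mingle database and restore the dump into it
sudo -u postgres createdb mingle
sudo -u postgres /usr/bin/pg_restore --verbose --dbname=mingle "$DUMP_FILE"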
Setting up a scheduled cron job to dump the mingle database nightly at 7 PM
- Create a backup script and place it in /data named backupmingledb.sh
- Change the owner and group to postgres: sudo chown postgres:postgres /data/backupmingledb.sh
- Make the script executable by the postgres user: sudo chmod 755 /data/backupmingledb.sh
- Create a cron job to invoke the script
- sudo -u postgres crontab -e
- To append a new line type: o
- Type: 0 19 * * * /data/backupmingledb.sh
#!/bin/sh
#Based on PostgreSQL DUMP RECIPE by Frederik Dannemare
# Must run as postgres user!
# Script to automate database backups onto the EBS volume
# at /data/backupmingledb.sh with owner/group postgres
# dump dir = /data/database_backups/ again owner/group postgres
# entered as crontab for postgres user
export PGPASSWORD=
export PATH=$PATH:/usr/bin
DATE=`date +%y%m%d-%H%M`
echo "Beginning to dump mingle database..."
DUMP_DIR=/data/database_backups/
DUMP_FILE=mingledump-$DATE.backup
pg_dump --username=postgres --create --oids --format=c --verbose --file="$DUMP_DIR$DUMP_FILE" mingle 2> "${DUMP_DIR}pg_dump.out"
echo "Mingle database has been dumped to $DUMP_DIR$DUMP_FILE"
exit 0
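To get the EBS snapshots mentioned earlier onto a schedule as well, a similar cron entry can call the EC2 API tools from the instance. This is only a sketch: it assumes the API tools and your AWS credentials are set up on the box, the script name is hypothetical, and the volume ID is a placeholder.

#!/bin/sh
# /data/snapshotebsvolume.sh - sketch of a nightly EBS snapshot
# Assumes the EC2 API tools are installed and EC2_PRIVATE_KEY/EC2_CERT are exported
# Example crontab entry: 30 19 * * * /data/snapshotebsvolume.sh
VOLUME_ID=vol-xxxxxxxx
ec2-create-snapshot $VOLUME_ID
exit 0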