So you’re developing your hot new application and you say to yourself “Self, we should use Redis because <insert-one-of-many-reasons-redis-is-awesome>”. But, like me, you’re primarily a Windows person, and every time you see instructions that look like
you say “Ewww, I just threw up a little bit in my mouth”.
Not to worry! I’ve got you covered. Using the fantastic Chocolatey package manager for Windows, you can just do:
C:\> cinst redis
…and BAM, you’ve got a local Redis instance running as a Windows service. You can hit it from any Redis client at localhost:6379 (Redis speaks its own protocol, not HTTP).
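If you want a quick sanity check that the service is actually up, you can ping it from the command line. This is just a sketch: it assumes `redis-cli` is on your PATH, which the Chocolatey package may or may not set up for you.

```shell
# Ping the local Redis instance; a healthy server replies PONG.
redis-cli -h 127.0.0.1 -p 6379 ping

# Set and read back a key as a round-trip smoke test.
redis-cli set greeting "hello from windows"
redis-cli get greeting
```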
I can only take credit for packaging this up for Chocolatey. All of the really hard work is done by:
If you’re not familiar with Redis, you should probably Google it and make sure you’re using it because it actually solves your problem, not just because someone on the internet told you it was awesome.
I recently started using Octopus Deploy to manage deployment across environments. They provide a NuGet package called OctoPack that makes it easy to create the required packages to deploy your app. OctoPack is basically just a set of MSBuild targets that get added as an import directive in your project file. This is all fine and good unless you are using the NuGet Package Restore workflow where you don’t commit your NuGet packages to source control. What you end up with is a chicken-and-egg scenario: Your project is set to use an MSBuild pre-build task to restore packages, but your project imports MSBuild tasks from one of those packages so you get a compilation error before it can restore the packages. To solve this problem you need to copy the MSBuild targets file for OctoPack to your project and commit it to source control.
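One way to sidestep the chicken-and-egg problem on a build server is to restore packages explicitly before MSBuild ever parses the project file, so the OctoPack targets already exist on disk when the import is evaluated. This is a sketch, not the official fix: `MySolution.sln` is a placeholder, it assumes a recent `nuget.exe` on the PATH, and `RunOctoPack=true` is the property OctoPack documents for enabling packaging from the command line.

```shell
# Restore NuGet packages first, so the OctoPack .targets file under
# packages\ exists before MSBuild tries to import it.
nuget restore MySolution.sln

# Now the <Import> in the project file resolves and OctoPack can run.
msbuild MySolution.sln /p:RunOctoPack=true
```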
Luckily, the OctoPack project is available on GitHub, so I forked it and updated its install script to handle all of this for you.
I have submitted a pull request to the author but if you just can’t wait I have also published an update to the package on the Geek Indulgence MyGet feed: http://www.myget.org/F/geekindulgence/. Just add this to your VS2010 Package Manager (via Tools > Options > Package Manager > Package Sources) and you can install the updated version of the package. UPDATE: Paul Stovell of Octopus Deploy has merged my pull request and pushed a new version of OctoPack to the official NuGet feed.
Octopus Deploy is an awesome tool and I’ll be putting together a detailed post about using it very soon. Stay tuned!
Jason Seifer has a nice post on getting OS X set up in a developer-friendly manner after you finish the install. He goes through setting up rvm, git, and some nice tweaks to the bash prompt. This is mostly just a personal bookmark so I can find it again later.
So…you’re using MVC3, right? Good. And you’re using the awesome new server debugging/troubleshooting tool Glimpse, right? Naturally! And you’re deploying to the fantastic AppHarbor platform, right? Of course you are! And they all go together like peanut butter and chocolate, right? WRONG!
While all 3 of these things are quite awesome, you’ll be quite disappointed when you push your site to AppHarbor and then try to get a Glimpse into what’s happening on the server side. This is because Glimpse, by default, only allows you to use it from localhost, and if you want to use it from any other host you have to specify the IPs in web.config. OK, that’s cool, I’ll just add my public IP and we’ll be in business, right? Nope. That’s because the IP restrictions are enforced by this code:
So, what’s wrong with that? Nothing. The problem lies in the architecture of AppHarbor. They use load balancers to send requests to the server your app is running on. That means that Request.UserHostAddress is going to be the IP of the load balancer rather than the actual client.
At this point you have two options:
Both of these result in any client being allowed to turn on Glimpse on your site. That’s not good; it reveals too much information about your server. The code could be updated to also check the X-Forwarded-For header value, but that would be pretty easy to fake in a non-load-balanced environment.
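To see why trusting that header is risky, here’s a sketch of how trivially a client can forge it (the URL is a placeholder for your own AppHarbor app):

```shell
# Any client can claim to be localhost by setting X-Forwarded-For itself.
# If the server trusts this header for access control, the check is bypassed.
curl -H "X-Forwarded-For: 127.0.0.1" http://yourapp.apphb.com/
```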
Creating a custom AMI from scratch can be a daunting task, not to mention time consuming. There are a lot of public AMIs that are a pretty good starting point for many tasks, so it might be easier and quicker to just customize one.
I’ll assume that you have already:
So you’ve got your image all polished up just the way you want it. Now what? Well, first you need to save your X.509 EC2 certificate to the image. The most straightforward way to accomplish this is to open your certificate locally and copy all the text to your clipboard. Now hop over to PuTTY, type
vi /mnt/cert.pem
and hit enter. This will create a file called cert.pem in /mnt and open it for editing. (NOTE: The image bundling utility will ignore certain folders when it creates the image. One of those is /mnt, which makes it a good place to store things like private key files, and the new image itself, that you wouldn’t want bundled into the image.) Press ESC followed by i to enter INSERT mode. Now you can paste the text of your certificate into PuTTY by simply right-clicking in the PuTTY window.
Do the same for your private key file, saving it to /mnt/privatekey.pem.
OK, now you’re ready to bundle the AMI and save the image. Run the following command from the console of the instance you customized:
ec2-bundle-vol -d <path to save the image> -k <path to private key file> -c <path to certificate file> -u <user account number>
<path to save the image> = Where you want to save the AMI that you are bundling. I suggest something like /mnt/ami so that the image won’t be included in the bundled image. (That would be rather redundant!)
<path to private key file> = The path to your private key file on the image. In this example we used /mnt/privatekey.pem.
<path to certificate file> = The path to your certificate file on the image. In this example we used /mnt/cert.pem.
<user account number> = Your Amazon Web Services account number. You can find this by logging into the AWS site and clicking on Access Identifiers. Your account number is listed near the top right corner right under “Welcome, Your Name”.
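Putting those parameters together, a complete invocation might look like this (the account number is a placeholder; substitute your own values):

```shell
# Bundle the running volume into an image under /mnt/ami,
# signing it with the key and certificate we saved earlier.
ec2-bundle-vol -d /mnt/ami -k /mnt/privatekey.pem -c /mnt/cert.pem -u 1234-5678-9012
```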
Alright, now we need to upload the image to S3 so that it will be usable. From the console of the instance, run the following command:
ec2-upload-bundle -b <bucket name> -m <path to manifest file> -a <access key> -s <secret key>
<bucket name> = The bucket in your S3 account that you want to save the image to.
<path to manifest file> = Path to manifest.xml created by the image bundling tool. In this example /mnt/ami/manifest.xml.
<access key> = Your AWS access key from the Access Identifiers page of AWS.
<secret key> = Your AWS secret key from the Access Identifiers page of AWS.
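For example (the bucket name and keys here are placeholders):

```shell
# Upload the bundled image parts and manifest to the named S3 bucket.
ec2-upload-bundle -b my-ami-bucket -m /mnt/ami/manifest.xml \
  -a YOUR_ACCESS_KEY -s YOUR_SECRET_KEY
```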
We’re almost done! Now you just need to register the image with EC2. From your desktop (not the instance you customized), run the following command:
ec2-register <bucket name>/manifest.xml
Congratulations! You now have a customized AMI ready to be launched.
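As a final sanity check: ec2-register prints the ID of the new AMI, which you can then launch directly. The AMI ID and key pair name below are placeholders.

```shell
# ec2-register prints a line like "IMAGE ami-1a2b3c4d".
# Launch one instance of the newly registered image with your key pair.
ec2-run-instances ami-1a2b3c4d -k my-keypair
```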
Haven’t had to use this trick in a while since I’m running Vista on all my machines these days. I was over at my Mom’s house using her ancient XP box that was almost out of disk space. I fired up the Disk Cleanup wizard and then I remembered why I don’t like it. It takes about 3 years to scan every file on the disk and suggest that I compress the old ones. Follow the steps below and you can use the Disk Cleanup wizard without it trying to compress all the old term papers still on your hard drive!
Of course, if you’re looking for something that will help you get a better handle on what files are taking up so much space and where they are, you should check out WinDirStat. It will scan your drives and draw a picture of every file on your hard drive so you can see where the big ones are and delete them.
Trying to install Ubuntu on VPC 2007 I kept getting the following error:
An unrecoverable processor error has been encountered.
The virtual machine will reset now.
After much searching I landed on a blog post by Robert Cain over at Arcane Code. He has a nice step-by-step for installing Ubuntu 8.04 on VPC 2007; however, like me, many folks were still getting this error. Reading down the comments, there were several suggested workarounds. This one from SteveZ fixed the problem:
- At the prompt, press F4 and select “Safe graphics mode”.
- Then press F6 and delete the part that says “quiet splash --” and replace it with “vga=791 noreplace-paravirt”.
You can read Robert’s excellent step by step at:
And be sure to check all the way down the comments if you are still having issues. Lots of smart folks chiming in over there.
PS: Don’t forget to edit GRUB to use those options after you install, or you won’t be able to boot.
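To make the change permanent, the kernel line in GRUB’s config needs the same options. Ubuntu 8.04 uses GRUB legacy, so the edit goes in /boot/grub/menu.lst; this is just a sketch, and the kernel version and UUID below are illustrative, not yours.

```shell
# In /boot/grub/menu.lst, replace "quiet splash" on the kernel line with
# the VPC-friendly options so every boot uses them:
kernel /boot/vmlinuz-2.6.24-16-generic root=UUID=xxxx ro vga=791 noreplace-paravirt
```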