Paul Dowman created something really useful for the Ruby on Rails community: a pre-configured Ruby on Rails stack AMI that "just simply works". Public AMIs are pre-packaged, pre-configured, compressed file-system blobs stored on Amazon S3 that any Amazon EC2 user can instantiate.
Ruby on Rails cuts development time significantly, thanks to all the freebies and code generators that come with it. With public AMIs like this one, it can cut deployment time too.
With Paul's public AMI, developers get all of the following pre-configured and ready to go:
Automatic backup of MySQL database to S3 every 10 minutes.
MySQL and Apache configured to write logs to /mnt/log so you don't fill up EC2's small root filesystem.
Hostname set correctly to the public hostname.
A script to re-bundle, save and register your own copy of this image in one step (if you want to).
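A periodic backup like the one baked into the AMI can be sketched roughly as follows. This is a hypothetical script, not Paul's actual implementation: the bucket name and key layout are made up, and it uses boto3 as a modern stand-in for the S3 libraries of the time.

```python
import datetime
import subprocess

def backup_key(db_name, now=None):
    """Build a timestamped S3 key so successive backups don't overwrite each other."""
    now = now or datetime.datetime.utcnow()
    return "backups/%s/%s.sql.gz" % (db_name, now.strftime("%Y%m%d-%H%M%S"))

def backup_to_s3(db_name, bucket_name):
    # Dump the database (assumes mysqldump is on the PATH and can connect).
    dump = subprocess.run(["mysqldump", db_name], capture_output=True, check=True)
    import gzip, boto3  # boto3 is an assumption; any S3 client with a put-object call works
    body = gzip.compress(dump.stdout)
    boto3.client("s3").put_object(Bucket=bucket_name, Key=backup_key(db_name), Body=body)

# A cron entry would invoke this every 10 minutes, e.g.:
# */10 * * * * /usr/bin/python /usr/local/bin/backup_to_s3.py
```

Timestamped keys mean every dump lands in its own object, so a bad backup never clobbers a good one.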
In the blog post, Paul includes a nice instruction manual to get you started. There had been significant confusion in the Rails community about whether to use HAProxy, Apache mod_proxy_balancer, nginx, or one of the many other alternatives. Paul was smart enough to simply put the right ingredients in his recipe.
I can't wait to see an entire network topology, from the load balancer to a MySQL master-master replication setup and a cluster monitoring solution, packaged in a single AMI that we can instantiate with simple commands using the Parameterized Launches feature of Amazon EC2.
Imagine a zero-configuration world, where we simply reuse other developers' once-done configurations and even their optimizations!
Thanks to Paul for taking the lead on this and sharing it with the world! Travis Reeder shares his expertise with an example (in the comments section of the blog post) on how to normalize your database by storing big blobs on Amazon S3, making the database lighter and easier to manage.
This is what I like about this "social computing" era: the willingness to share and collaborate. Hats off to all the developers who share their development experiences so that others don't have to go through the same pain of configuring, tweaking, and patching components.
This tool incorporates the latest pricing changes including the tiered pricing model for download bandwidth.
Use this tool to estimate your monthly bill, to determine your best- and worst-case scenarios (if you get Slashdotted, Dugg, etc.), to identify areas of development where you can reduce your monthly costs, and even to compare it with other service providers who do not offer utility-style (pay-as-you-go) billing.
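The mechanics of a tiered bandwidth bill, where each slice of usage is charged at its own tier's rate, can be illustrated with a small sketch. The tier boundaries and rates below are invented for illustration only; use the calculator for real figures.

```python
def tiered_cost(gb, tiers):
    """Charge each slice of usage at its tier's rate.

    tiers: list of (gb_in_tier, price_per_gb) pairs; the last tier
    can use float('inf') to cover all remaining usage.
    """
    total = 0.0
    remaining = gb
    for size, rate in tiers:
        slice_gb = min(remaining, size)
        total += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return total

# Hypothetical tiers: first 10 TB at $0.18/GB, next 40 TB at $0.16/GB, the rest at $0.13/GB.
tiers = [(10240, 0.18), (40960, 0.16), (float("inf"), 0.13)]
print(round(tiered_cost(500, tiers), 2))  # 500 GB falls entirely in the first tier
```

The key point is that crossing a tier boundary only discounts the usage above the boundary, not the whole bill.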
Just a quick post to act as a heads up on two additions to the AWS Developer Resource Center.
First, I recently interviewed Doug Kaye, CTO of GigaVox Media, about their experience with Amazon Simple Queue Service (aka Amazon SQS). The interview runs approximately 9 minutes and offers some insight from Doug. I married the audio with some visuals to make a screencast, then posted it here. You can watch it in either Windows Media or Flash format.
Second, a while back I posted a screencast about how to extend an ASP.NET site to deliver photos from Amazon S3 while the rest of the site runs on your machine at home (on a DSL line). I've just added sample code in VB.NET, in case that's a language you use.
The first-ever AWS Developer Chat in Second Life was a big success and a whole lot of fun to boot. We had about 20 people (avatars) in attendance and we talked about Amazon S3, EC2, and all kinds of other related subjects. Here's a picture taken by my friend Betsy Weber (who has a really good blog):
As more and more people move their production apps onto Amazon S3, we are getting emails from CEOs and CTOs about their success and how Amazon S3 helped them sleep better at night. Last week I blogged about live-blogging backed by Amazon S3 and their 2-hour-$10-scaling app. This week it's Pictogame. As Louis Choquel, President of zSlide (of Podmailing fame), puts it in his own words:
- with S3 we could sleep better, spend totally cool week-ends watching our Digg score climb in total serenity.
- without S3 we would have spent much more money but slept badly anyhow, had nightmares, and actually seen our server crashed, probably at the worst time of night as these things usually do.
Pictogame launched on May 25 - it's one of those user-created game widgets that you can create dynamically and embed on your MySpace blog. Technically, the app consists of several general-purpose SWF Flash files (game loader, skin, game template), user media files (currently only pictures: JPEG, GIF, PNG), and an XML file describing how to mix all of that into a customized game. All of these are stored in a single bucket on Amazon S3. The SWF files are stored once, since they are common to all games; user media files are first uploaded to zSlide's servers for processing and then pushed to Amazon S3; an XML file is uploaded to Amazon S3 each time a new game is created. A background PHP process stores all of these files asynchronously on Amazon S3, while the Amazon S3 key is kept in their database for later editing and deletion.
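The pattern zSlide describes, push the blob to S3 and keep only its key in the database, can be sketched like this. This is Python rather than their PHP, the table and key layout are made up, and the asynchronous hand-off is elided: the `upload` and `remove` callables stand in for whatever the background worker does against S3.

```python
import sqlite3
import uuid

def store_game_xml(db, xml_bytes, upload):
    """Save a game description: the blob goes to S3, only the key goes in the database."""
    key = "games/%s.xml" % uuid.uuid4().hex
    upload(key, xml_bytes)          # e.g. an S3 put-object call, done by a background worker
    db.execute("INSERT INTO games (s3_key) VALUES (?)", (key,))
    db.commit()
    return key

def delete_game(db, game_id, remove):
    """Look up the stored key, remove the S3 object, then drop the row."""
    row = db.execute("SELECT s3_key FROM games WHERE rowid = ?", (game_id,)).fetchone()
    if row:
        remove(row[0])              # e.g. an S3 delete-object call
        db.execute("DELETE FROM games WHERE rowid = ?", (game_id,))
        db.commit()
```

Keeping only the key in the database is what makes the database "light": the heavy bytes live on S3, and every later edit or deletion is just a key lookup away.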
Their stats are all the more impressive: in the three weeks since launch, they got "dugg" several times and spent a few hours in the Top Ten. Overall, more than 75 GB were downloaded and 1 million widgets were served (each game widget weighs an average of 150 KB), and total costs were less than $20 (for storage and bandwidth).
Now, since the widgets are served directly from Amazon S3, they don't have to worry about scaling. No more post-production headaches!
Here's the game - try building the famous "AWS building block" (my best time: 400 seconds).
A couple of cool widgets are available that enable you to earn Amazon Associates commissions.
The first is the Amazon Browser Widget, which their website describes as "a mini-Amazon browser that allows you to search for items at Amazon. As a user scrolls through the displayed items, it automatically loads additional items as necessary. The widget keeps track of browsing history, and the forward/back buttons can be used to view previous searches."
The other is the Amazon List Widget, which displays Amazon lists, including wedding and baby registries, Listmania lists, and wish lists. As a user scrolls through the displayed items, it automatically loads additional items as necessary.
Middlepost manages the entire creation, revision, and signing process for documents.
During creation and revision, Middlepost tracks all of the changes and comments associated with the document, supporting both internal and external collaborators. Each collaborator has a profile with verified personal data, and plays a role in the process (creator, signer, approver, or observer).
Interested parties receive notification whenever a document is changed, ensuring that everyone is always looking at the latest version of the document.
Once the document is complete, Middlepost routes the document to the designated signers for on-line signing, and can then produce a complete, digitally signed document.
Finished documents are stored for a specified time period (the default is 10 years) and cannot be modified once signed. The documents are signed with 128-bit encryption and stored redundantly using Amazon S3.
Travis Reeder, CTO of Middlepost, told me that SQS allowed them to build the system in a loosely coupled fashion. He also told me that:
From a cost and maintenance perspective, we're really excited about the fact that we'll never really have to think about storage limits and the costs are so minimal that it's barely a concern. That takes a huge load off our shoulders since we're storing very important and critical documents.