Comments on "Storage Space, The Final Frontier"


"Needless to say, you can use these volumes to host a relational database."

... and that's the line we've been waiting for. w00t!

Though I have to ask: how reliable are these volumes? Do they get the same redundancy/replication as normal S3 data?
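The quoted use case is worth making concrete. A rough sketch of the workflow with the EC2 command-line API tools of the era (the volume/instance IDs, availability zone, and device/mount paths below are all illustrative placeholders):

```shell
# Create a 100 GB volume in the same availability zone as the target instance
ec2-create-volume --size 100 --availability-zone us-east-1a

# Attach it to a running instance as a block device
# (the volume and instance IDs here are placeholders)
ec2-attach-volume vol-12345678 -i i-12345678 -d /dev/sdh

# Then, on the instance itself: create a filesystem, mount it,
# and point the database's data directory at the new volume
mkfs -t ext3 /dev/sdh
mkdir -p /data
mount /dev/sdh /data
```

Whether the volume gets S3-style replication underneath is exactly the open question above.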

Thorsten - CTO RightScale

Having tested the new storage volumes, I can say only one thing: you'll love them! They really raise the EC2 offering to the next level. It will surpass non-cloud computing not only in scale and price but also in features. Yay!
More thoughts on how the storage volumes will change the game in my blog post at

Paul Stamatiou

On behalf of everyone using EC2, and everyone who used EC2 in the past but wrote it off due to its various limitations around data persistence, I would like to say thank you. Google App Engine... what's that?

Mirko Sciachero

Is it also possible to attach the same storage to more than one EC2 instance?

Victor Boctor

Great feature. That boosts the usability of the EC2 service since it is now much easier to use for already built applications and tools.

1. Are these volumes going to be resizable? For example, can you start off with a 100 GB volume, then later resize it to 200 GB?

2. I wonder if the cost is going to be based on read/writes vs. just the size of the volume.

3. What is the cost of a snapshot? Is it based on the volume size, the size of the data on the volume, or incremental changes from the previous snapshot?
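On question 3, one common design is incremental snapshots: the first snapshot stores every block, and each later snapshot stores only blocks that changed since the previous one, so cost tracks changed data rather than volume size. Here is a toy model of that accounting, with made-up block contents (nothing here is real EBS code):

```python
# Toy model of incremental snapshots: the first snapshot stores every
# block; each later snapshot stores only blocks that changed since the
# previous one. Everything here is illustrative, not real EBS code.

def take_snapshot(volume_blocks, previous=None):
    """Return {block_index: data} of blocks that differ from `previous`."""
    if previous is None:
        return dict(enumerate(volume_blocks))  # full first snapshot
    return {i: b for i, b in enumerate(volume_blocks)
            if previous.get(i) != b}

volume = ["a", "b", "c", "d", "e", "f"]   # a six-block volume

snap1 = take_snapshot(volume)             # stores all 6 blocks
volume[2] = "C"                           # one block changes
snap2 = take_snapshot(volume, snap1)      # stores only the 1 changed block

print(len(snap1), len(snap2))             # 6 1
```

Under this model, a snapshot's cost scales with the data that changed since the last one, i.e. the "incremental" option in the question.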

Przemyslaw Rudzki

Simply beautiful. I just hope that pricing will be set in an "affordable" range ;-).


James Hill

Yep, this has made me ecstatically happy!


This is amazing. I'm curious, is there any plan to allow a single volume to be mounted read-only across several EC2 instances?


Is there any timeline on future availability of this service? I am launching a site within days, and the final frontier for me was installing JungleDisk to have an S3 filesystem.

Obviously, this news is the much preferred solution. I would hold off if there were a chance this functionality would become available in the next few weeks.

Nicolas Lehuen

What about building an EC2 instance cluster with a shared file system using Red Hat Global File System?

Would this be possible?

Amit Sudharshan

For read-only access you may be able to mount using SSHFS, which will be secure.
Securing NFS in this environment may be more trouble than it's worth for read-only access.
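The SSHFS suggestion can be sketched in one command. Assuming one instance has the volume mounted at /data and is reachable over SSH (the host name and paths here are illustrative, not a real setup):

```shell
# On an instance that does NOT have the volume attached:
# mount the volume-owning instance's /data read-only over SSH
sshfs -o ro ec2-user@volume-owner-host:/data /mnt/shared-data

# Later, unmount it
fusermount -u /mnt/shared-data
```

This gives each reader a secure read-only view without touching NFS, at the cost of routing all reads through the one instance that owns the volume.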

Alex Kerr

This is excellent. I'd like to add my voice to the chorus of calls (also on the forum post) asking for individual mounted volumes to be read/writeable by multiple instances. I think this really is a crucial feature. All sorts of back-end processing of data, files, DB contents, etc. becomes much, much more feasible, easy, and reliable if the volume the data is mounted on is accessible from any of the multiple EC2 instances that exist to do the processing. Please make this possible at release, Amazon!


Thorsten - CTO RightScale

Paul, as stated on Werner Vogels' blog, you can attach a volume to only one instance. I guess this is a feature we'll have to hope for in V2.

Victor, unlike all the other AWS services, which are priced very aggressively, it looks like the storage volumes will be horribly expensive (JOKE). Seriously, before introducing Elastic IPs, AWS had been discussing a number of pricing options, and I believe everyone was surprised by the pricing model they chose: free while you use it, and pay while you don't. If you think about it, it makes a lot of sense for the EIPs. So I expect to be surprised by the storage volume pricing structure, and to find that it makes sense.

Felix: I wouldn't hold off if I were you. If Jungle Disk works for you, the price is right and it'll get you off the ground. You can then move over to the storage volumes once they become available and all the tools are there.


Compliment: This is truly a great frontier. Thanks for listening to the consumers! Listening is your competitive advantage in the future of cloud computing!

Request for clarification: The S3 environment has data redundancy, with data stored across multiple machines. My perception is that this new service has its data stored on a single physical disk. Is this correct?

Request for feature: The geographic zone feature for EC2 is an important part of redundancy. It would be nice for developers to be able to build a system of these new disks across multiple pieces of hardware (different physical drives) and geographic zones. There should also be clarification (a service level agreement) that the disk is not on the same physical disk as another raw disk one is using.

Coining a name: How about "point storage", "raw drive", "plain disk", "true disk", or "mount point"?


Guan Yang

"I'd like to add my voice to the chorus of calls (also on the forum post) asking for individual mounted volumes to be read/writeable by multiple instances. I think this really is a crucial feature."

I agree that this would be useful, but is it really that crucial?

Think of how difficult it is to have multiple servers have read/write access to the same raw disk today. I'm not even sure what hardware could be used, apart from FireWire. Can standard Fibre Channel hardware do this?

And it would probably also be a pain to set up. Oracle clustering uses shared disks, but it's just not that common, especially not in open source software. I wouldn't put this at the top of the todo list.


Whether shared storage is crucial really depends on your app; for some it is.

FireWire isn't the common way to do it. Usually it's Fibre Channel, or Ethernet (iSCSI). It's not that hard to set up. RHEL has it built in (DLM, GFS, etc.), and v5 supports a 128-node quorum.

If there isn't a (really) quick way to attach your storage to another node, it almost becomes essential, or you'll be dead in the water while you wait for your locked-up node to release the storage and attach it to another node.

Alex Kerr

Guan, yes, I would say the ability for multiple EC2 instances to access the same storage volume is indeed "crucial". That's one reason why so many people are asking for it (across the blogs/forums I've seen this posted on already). So far there have been various third-party attempts at creating a single "disk" across multiple instances (e.g. s3dfs and various others). I don't care how Amazon actually implements it, just as long as it's possible. If this doesn't happen, it means we are still almost at square one, and data (files, DBs, etc.) being processed by a farm of multiple instances has to be continually replicated across all storage volumes. This is a major, major pain. Multiple-instance access to a volume solves this instantly. The fact that these third-party solutions were much asked for, and are much used, shows how needed this feature is.
You seem to be thinking of the difficulties associated with physical network hardware; I don't think these need apply in the AWS environment.


It would be great if we could attach the same EC2 storage to more than one EC2 instance (ideally for both read and write, but if that's not possible, then at least for read).

And while I am posting here: when do you expect European EC2 data centers to be supported (for lower latency)?


You guys rock. Watching stuff like this develop makes me think I'm watching something very big happen. Keep it up.


I also hope this will be affordable. I had some ideas for solutions that I posted in the AWS developer forums, which I hope made it to someone at Amazon. To be brief: I really hope there's some sort of "included" persistent storage with EC2. Having to pay separately for persistent storage, backup (S3), AND the EC2 server is just out of hand and out of line, unless of course you're tying up a ton of space. I think allowing people to simply create partitions of any size is over the top, but you should at least get some amount of included persistent storage with your EC2 instance (based on the size of the instance) just to cover things. On the other hand, I think it's abusive to store media that will see heavy downloads; that's best suited for S3 (or another paid solution). But normal site content, sure, it should be protected. I guess there's no way to police that other than setting a limit on the "included" space.

My other idea was keeping EC2 data around for a set amount of time after an instance goes down, giving the server admin enough time to bring up another instance and make sure nothing is lost. So, set some sort of expiry after X days or so. Or, better yet, charge for storage from the moment the instance is down. This is probably the BEST solution, as it doesn't tie up disk space in Amazon's cloud and it doesn't screw over the user. It's fair.

Anyway, hope Amazon continues to be smart here and this all works out.

Matthew Lanham

Well, all I can say is I can't wait to find out more and get my hands dirty. It's one of the missing links to make everything come together, and I can see it being very effective...

Stu Thompson

I am absolutely salivating for this; just cannot wait to get my greedy little hands on EC2 volumes! I signed up moments after receiving the email a few weeks ago, but nothing yet. :(

Pick me, pick me!


Laith Zraikat

Great move by Amazon and I second the request for sharing a drive across instances.

Might I suggest a name: Elastic Density Drive (ED2)


The comments to this entry are closed.

Brought to You By: Jeff Barr (@jeffbarr) and Jinesh Varia (@jinman)