Comments on "Amazon DynamoDB - Internet-Scale Data Storage the NoSQL Way"

Interesting that you have not mentioned SimpleDB, as there seems to be some overlap.


Thanks for this writeup! I'm having trouble understanding how to create a unique attribute id for use as the hash key.

For example, let's say I'm storing comments. Coming from the SQL world, I just let the database auto-increment the unique CommentID. With Dynamo, how would I create a unique CommentID attribute for use as the hash key? Do I need to do that in the application code using php's uniqid function? That doesn't seem right...what am I missing? Thanks!

Phil Smith

Well done! Do the growable read and write throughput parameters apply to consistent read and conditional writes as well?

Jeff Barr

Phil - The throughput parameters apply to all reads and all writes.

Alexander Dimitriyadi

Well SimpleDB was always in beta so I presume this is the successor to it.

In terms of SDKs, the documentation doesn't mention Ruby. Do we have a timeline for this?

Jeff Barr

Alexander, the AWS SDK for Ruby supports DynamoDB!


Ahh! There's a cool Christmas present! Except it's not available in my region. Aww. :(

I didn't even bother touching SimpleDB in the past because of the 10GB domain limit thing. However, the unlimited and seamless growth of this new service is definitely appealing.


Ismael Juma

This looks excellent. When can we have it in the EU region? :)



I hope boto gets support soon!

Ruben Orduz

I, for one, am really excited about this announcement. I was dreading having to deal with MongoDB replica sets and all that noise. This is a good fit for a couple of our apps. Keep 'em coming.

Jeff Barr

Chris, there's no simple answer to this question. If you used PHP's uniqid function, you would have to add further randomness to ensure that inserting two comments in close succession, or from two different application servers, doesn't cause a collision.

You will, of course, want to choose a key that allows you to retrieve the comments later.
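Jeff's advice above can be sketched in code. This is just an illustration (not from the post): Python's uuid4 is one common way to get the extra randomness he describes, since it is built from random bits rather than a timestamp like PHP's uniqid:

```python
import uuid

def new_comment_id():
    # uuid4 draws 122 random bits, so two application servers
    # inserting comments at the same instant will not collide
    # in practice (unlike a timestamp-derived id).
    return str(uuid.uuid4())

# A purely random id makes retrieval awkward on its own; per Jeff's
# note about choosing a retrievable key, a comment item might use the
# post id as the hash key and keep the generated id as an attribute.
comment_id = new_comment_id()
print(len(comment_id))  # 36-character string such as '1b9d6bcd-bbfd-...'
```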


Does the free tier apply to every table created? Or is it 10 read units and 5 write units for the entire customer account?

Jeff Barr

GK, those units are really per table. The DynamoDB detail page calls it out as follows:

"DynamoDB customers get 100 MB of free storage, as well 5 writes/second and 10 reads/second of ongoing throughput capacity."

Ramon Salvadó

Is DynamoDB open source or are there any plans to open source it?

Jeff Barr

Ramon - It is not open source, it is a web service. As far as I know there are no plans to open source it.

Account Deleted

DynamoDB looks great. Data storage management is something I want someone else to handle while I concentrate on the application. That said, the throughput pricing makes the initial costs on DynamoDB high.

If you have 7 tables to start with, you must pay for 5 reads and writes per table. That is 35 reads and 35 writes. 10 reads cost $0.01/hour and 5 writes cost $0.01/hour, which makes it kind of costly, since a small application will not need 5 writes per second.

A lower throughput option (say 2) would be much better for experimental development when you do not have paying clients. You could argue for putting all data in one table, but that becomes ugly; we lose even a basic data schema. I am still trying to figure out detailed pricing for our application. Your thoughts are welcome.

Sumit Datta

Sorry about my earlier comment. I mentioned the wrong read and write throughput rates. The actual rates are:
* Write Throughput: $0.01 per hour for every 10 units of Write Capacity
* Read Throughput: $0.01 per hour for every 50 units of Read Capacity
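Using the corrected rates, Sumit's seven-table scenario works out roughly as follows. This is only a back-of-envelope sketch that assumes cost scales linearly with provisioned units; actual AWS billing granularity may differ:

```python
WRITE_RATE = 0.01 / 10  # $/hour per unit of write capacity (rates quoted above)
READ_RATE = 0.01 / 50   # $/hour per unit of read capacity

def hourly_cost(tables, writes_per_table, reads_per_table):
    # Linear proration assumed, purely for illustration.
    return (tables * writes_per_table * WRITE_RATE
            + tables * reads_per_table * READ_RATE)

# Seven tables, each provisioned at 5 writes/s and 10 reads/s:
print(round(hourly_cost(7, 5, 10), 3))  # 0.049 -- about 5 cents per hour
```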

This pricing does look much better for experimental stuff. There are many libraries popping up on GitHub already. There will be frameworks supporting DynamoDB in a couple of months, I guess. It seems to be an exciting way ahead. Congrats again!

Aahan Krish

Hi, as per the FAQ:

"How long does it take to change the provisioned throughput level of a table?

In general, decreases in throughput will take anywhere from a few seconds to a few minutes, while increases in throughput will typically take anywhere from a few minutes to a few hours. We strongly recommend that you do not try and schedule increases in throughput to occur at almost the same time when that extra throughput is needed. We recommend provisioning throughput capacity sufficiently far in advance to ensure that it is there when you need it."

But someone here mentions that it takes "about 1.5 minutes per GB when scaling up." That implies it could take days to scale up a database that's TBs in size?! Is that true? Please clarify.


Jeff Barr

Aahan - The scale-up factor that you quoted is not correct. We responded to this as follows in the DynamoDB forum:

"The overall time is not linear as the Google Groups poster suggests. In most cases it will be between a few minutes to a few hours regardless of total size. Larger data sets may take a bit longer than smaller data sets simply because there is often more data movement to perform and coordination to be made across a greater number of machines. Rest assured though, we make use of parallelism where we can so the curve is far from linear."

Akbar Ali Butt


This is a very good tutorial and it helped me a lot.
I need a little help: when I tried to create a table, I got the following response in the body section:


Will you please update me about this exception?


The comments to this entry are closed.
