Our customers have been making great use of Amazon DynamoDB's provisioned throughput model! They are provisioning tables that handle hundreds of thousands of reads or writes per second to millions and even billions of items. They are adjusting provisioned throughput on the fly, in order to cope with changes in requirements, and paying only for the throughput that they have provisioned.
As a refresher, DynamoDB's provisioning is expressed in terms of read and write capacity units, each sufficient to handle an item that is at most 1 KB in size. A single unit of read capacity enables you to read one strongly consistent item per second or two eventually consistent items per second. A single unit of write capacity enables you to write one item per second.
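The capacity-unit arithmetic above can be sketched in a few lines of Python. This is an illustrative helper, not an AWS API; the function names and the round-up-to-whole-units behavior for larger items are my assumptions layered on the rules described here (1 KB per unit, one strongly consistent or two eventually consistent reads per unit per second).

```python
import math

ITEM_UNIT_KB = 1  # each capacity unit covers an item of at most 1 KB

def read_capacity_units(items_per_second, item_size_kb, strongly_consistent=True):
    """Illustrative estimate of read units needed for a workload."""
    # Assumption: items larger than 1 KB consume one unit per started KB.
    units_per_item = math.ceil(item_size_kb / ITEM_UNIT_KB)
    if not strongly_consistent:
        # One unit covers two eventually consistent reads per second.
        return math.ceil(items_per_second * units_per_item / 2)
    return items_per_second * units_per_item

def write_capacity_units(items_per_second, item_size_kb):
    """Illustrative estimate of write units needed for a workload."""
    return items_per_second * math.ceil(item_size_kb / ITEM_UNIT_KB)
```

For example, reading ten 1 KB items per second needs 10 units with strong consistency, but only 5 with eventual consistency.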
Until now, we've focused on scaling up to large tables with plenty of read and write capacity. Today, we are heading in the other direction, toward more modestly sized and provisioned tables. We've lowered the minimum read and write capacities as follows:
|Unit|Old Minimum|New Minimum|
|---|---|---|
|Read Capacity|5 / second|1 / second|
|Write Capacity|5 / second|1 / second|
With this change, you can start out at the absolute bare minimum (with respect to both cost and throughput) and then scale up your usage of DynamoDB as your application grows and your requirements expand. If your application makes use of hundreds of tables, this may result in significant cost savings for you.
The AWS Free Usage Tier allows you to consume up to 100 MB of DynamoDB storage, 5 read capacity units, and 5 write capacity units per month. As a very beneficial side effect of today's announcement, you can now create up to 5 tables within the Free Usage Tier. This has been a very frequent customer request in the DynamoDB forum.
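The arithmetic behind that side effect is straightforward, and can be sketched as follows (the variable names are mine; the figures come from the free tier and the old and new per-table minimums described above):

```python
# Free tier allowance per month.
FREE_TIER_READ_UNITS = 5
FREE_TIER_WRITE_UNITS = 5

OLD_MIN_PER_TABLE = 5  # former per-table minimum (reads and writes alike)
NEW_MIN_PER_TABLE = 1  # new per-table minimum

# How many tables fit inside the free tier at each minimum.
tables_before = min(FREE_TIER_READ_UNITS // OLD_MIN_PER_TABLE,
                    FREE_TIER_WRITE_UNITS // OLD_MIN_PER_TABLE)
tables_after = min(FREE_TIER_READ_UNITS // NEW_MIN_PER_TABLE,
                   FREE_TIER_WRITE_UNITS // NEW_MIN_PER_TABLE)
print(tables_before, tables_after)  # prints "1 5"
```

In other words, the free tier used to accommodate a single minimally provisioned table; it now accommodates five.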
Visit the DynamoDB home page and get started today. You may also enjoy my recent post on DynamoDB libraries, mappers, and mock implementations.