Graphical user interfaces (e.g. the AWS Management Console) are great, but nothing beats the expressiveness of the command line!
Today we are releasing the AWS Command Line Interface (CLI). The AWS CLI provides a single, unified interface to a very large collection of AWS services. After downloading and configuring the CLI you can drive Amazon EC2, Amazon S3, Elastic Beanstalk, the Simple Workflow Service, and twenty other services (complete list) from your Linux, OS X, or Windows command line.
Download and Configure
The Getting Started page contains the information that you need to know in order to download and configure the AWS CLI. You'll need to have Python (any version from 2.6.x up to 3.3.x) installed if you are on Linux or OS X, but that's about it. You can install the CLI using easy_install, pip, or from a Windows MSI.
You can set your AWS credentials for the CLI using environment variables or a configuration file. If you are running the CLI on an EC2 instance, you can also use an IAM role.
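As a minimal sketch, the environment-variable approach looks like this (the key values shown are the standard placeholder examples, not real credentials):

```shell
# Placeholder credentials -- substitute your own IAM user's keys
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1
```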
I recommend that you create an IAM user (I called mine awscli) so that you have full control of the AWS operations that can be performed from the command line. For testing purposes, I used the all-powerful Administrator Access policy template. In practice I would definitely use a more restrictive template.
The AWS CLI commands take the form:
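```shell
aws SERVICE OPERATION [PARAMETERS]
```

For example, `aws ec2 describe-instances` retrieves information about your EC2 instances.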
The SERVICE is the name of the service, except in the case of Amazon S3, where the API-level commands live under s3api. The separate s3 service invokes a very powerful set of higher-level file manipulation commands that I will describe in a moment.
The OPERATION is the name of the corresponding AWS API function -- describe-instances, list-buckets, and so forth. You can issue the help operation to see a list of all available operations for the service:
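For example, to see every operation that the EC2 service supports:

```shell
aws ec2 help
```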
Each operation generates its output in JSON format by default. You can use --output text and --output table to request text and tabular output, respectively. Here are samples of all three:
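As a sketch, here is the same read-only call in each of the three formats (describe-regions is simply a convenient example operation):

```shell
aws ec2 describe-regions --output json
aws ec2 describe-regions --output text
aws ec2 describe-regions --output table
```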
The jq (JSON Query) tool makes it easy for you to process JSON data. For example, a recent post on Reddit asked about extracting tag data for an EC2 instance. Here's how to solve that problem by using the AWS CLI and jq:
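One way to solve it (the instance ID below is a placeholder) is to pipe the JSON output through a jq filter that selects the tag you want:

```shell
# Pull the value of the "Name" tag for a single instance
aws ec2 describe-instances --instance-ids i-1234567890abcdef0 \
  | jq -r '.Reservations[].Instances[].Tags[] | select(.Key == "Name") | .Value'
```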
S3 File Operations
The AWS CLI also supports a set of S3 file operations. You can list (ls), copy (cp), move (mv), and sync (sync) files (S3 objects). You can also make (mb) and remove (rb) S3 buckets. Here are a couple of examples:
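A quick sketch of a typical session (the bucket name is a placeholder):

```shell
aws s3 mb s3://my-example-bucket          # make a bucket
aws s3 cp local-file.txt s3://my-example-bucket/   # upload a file
aws s3 ls s3://my-example-bucket          # list the bucket's contents
aws s3 sync . s3://my-example-bucket/backup        # sync the current directory
```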
The file operations will automatically make use of parallelized multi-part uploads to Amazon S3 for large files and for groups of files. You can also use the --recursive option on the cp, mv, and rm commands in order to process the current directory and all directories inside of it.
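For example (again with a placeholder bucket name):

```shell
aws s3 cp s3://my-example-bucket/logs ./logs --recursive   # download a whole prefix
aws s3 rm s3://my-example-bucket/logs --recursive          # then remove it
```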
A Few More Options
Some of the AWS CLI commands accept or require large amounts of text or JSON content as parameters. You can store parameter values of this type in a file or at a web-accessible location and then reference them as follows:
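For example, you can keep an IAM policy document in a local file and pass it with the file:// prefix (the user, policy name, and file name here are hypothetical):

```shell
aws iam put-user-policy \
  --user-name awscli \
  --policy-name my-policy \
  --policy-document file://policy.json   # loads the JSON from the local file
```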
The AWS CLI is an open source project and the code is available on GitHub at http://github.com/aws/aws-cli. You can browse the source, enter suggestions, raise issues, and submit pull requests for desired changes.