
Comments


Colin Percival

The Backups section on page 3 is confusing. First it says that data in S3, SDB, and EBS is redundantly stored; then it says that Amazon does not "perform backups" of data stored on EC2 instances or in S3 or SDB. As far as S3 and SDB are concerned, I'm guessing this means that there are multiple "live" copies of data but no offline copies -- fair enough, but this could probably be stated more clearly. But does EBS have backups, or was it merely an oversight that it is not listed in the "does not perform backups" sentence? For that matter, my understanding is that data on EBS is far less stable than data on S3 -- so the first sentence talking about data being redundantly stored should probably also be clarified lest people not appreciate the distinction.

Tim Freeman

Thanks for providing this. May I ask whether the firewall discussed on page 4 uses a hardware/firmware-rooted solution or something more along the lines of iptables/pf boxes? Thanks.

Tim Freeman

Oh, I see the question is answered later on, sorry:

"In addition, the aforementioned firewall resides within the hypervisor layer, between the physical interface and the instance's virtual interface."

Is there any border DoS control? And may I ask whether the VMM hosts sit "directly" on the net? Any worries or experience with dom0 being overwhelmed?

Thanks

Jim Jones

I found this paper interesting to read, albeit a bit light on the details. Thanks for posting.

On a related note, and in the hope that someone responsible at Amazon might read this:
There is a serious security flaw in the AWS authentication model. Each auxiliary service (S3, SimpleDB, SQS) requires all requests to be signed with the AWS secret key. This implies that each instance that needs to access such a service must know the AWS secret key.

The problem is that there is only *one* AWS secret key; there seems to be no separation of concerns implemented whatsoever. This means that if any instance that uses any of these services is compromised, the attacker will gain complete control over your AWS account. Not only will he gain access to the services that the compromised instance legitimately had access to, he will also gain access to all other AWS functionality, including the ability to list/start/stop instances!

I think Amazon should *urgently* introduce separate keys for the separate services. In the case of SimpleDB, there should probably even be an option to use separate keys for separate domains.

Most importantly, the "master key", the one that allows starting and stopping instances, needs to be separate from any "service keys".
There is no reason to give that kind of power to any instance (much less to *every* instance, as it is now) by default.
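
To make the exposure concrete, here is a minimal sketch of how request signing works with these services (the string-to-sign layout is simplified and the parameter names are only illustrative, not the exact documented format): every call is authenticated with an HMAC computed with the one account-wide secret key, so that key has to live, readable, on any instance that makes such calls.

    # Simplified sketch of how a request to S3/SimpleDB/SQS is signed with the
    # single account-wide secret key (Signature Version 2 style; the real
    # canonicalization rules differ slightly per service).
    import base64
    import hashlib
    import hmac
    import urllib.parse

    AWS_ACCESS_KEY = "AKIDEXAMPLE"        # public identifier, sent with the request
    AWS_SECRET_KEY = b"wJalrXUtnFEMI..."  # the ONE secret that signs everything

    def sign_request(host, path, params):
        # Canonical query string: sorted, URL-encoded key=value pairs.
        query = "&".join(
            "%s=%s" % (urllib.parse.quote(k, safe=""), urllib.parse.quote(v, safe=""))
            for k, v in sorted(params.items())
        )
        string_to_sign = "\n".join(["GET", host, path, query])
        digest = hmac.new(AWS_SECRET_KEY, string_to_sign.encode(), hashlib.sha256).digest()
        return base64.b64encode(digest).decode()

    # The same key signs a SimpleDB query from an instance...
    print(sign_request("sdb.amazonaws.com", "/", {
        "Action": "Select",
        "AWSAccessKeyId": AWS_ACCESS_KEY,
        "SelectExpression": "select * from mydomain",
    }))
    # ...and, if that instance is compromised, an EC2 API call just as easily.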

Colin Percival

Jim, there's a workaround for the problem you describe: create multiple accounts. Amazon has no problem with billing the same credit card several times each month for different accounts.

Alexey Bokov

Interesting paper :-) I also have a few questions:
- Is it right that data stored in EBS will never be damaged or lost?
- Could you explain what "unintentionally leaving data on disk devices is only one possible breach of confidentiality" (in the "Instance Isolation" section) means? In other words, does this note mean that another instance may have access to data which I store on a disk device? Does the same thing happen with mounted EBS volumes?
Thanks ,-)

M. David Peterson

@Jim Jones,

The other solution is to use front-facing load balancers which do not have access to your private key, passing all requests to backend servers which are completely sealed off from the outside world. By using security groups to lock down internal access to all ports on the backend servers except those handling the load-balancing requests, you can help eliminate the possibility of a front-end server being compromised and used to gain access to the backend servers for anything other than a simple "please perform this operation and return to me the results".

I do agree with your overall point: you should /never/ store a private key that provides "root"-level control of your entire account on an Internet-facing node. But maintaining an internal policy of delegating all AWS API requests to non-public-facing nodes will go a long way toward ensuring the smallest possible surface area is exposed to external attackers trying to gain control of your account.
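
As a rough sketch of the security-group half of that setup (the group names and ports here are just examples, not anything from the paper, and the exact method signatures may vary between boto versions):

    # Sketch of the security-group setup described above, using the classic
    # boto EC2 interface. Group names and ports are illustrative.
    import boto

    conn = boto.connect_ec2()  # credentials taken from the environment

    frontend = conn.create_security_group(
        "frontend", "Internet-facing load balancers")
    backend = conn.create_security_group(
        "backend", "Application servers with no public access")

    # The world may reach the load balancers on port 80 only.
    frontend.authorize(ip_protocol="tcp", from_port=80, to_port=80,
                       cidr_ip="0.0.0.0/0")

    # The backend accepts traffic on the application port only from members
    # of the frontend group; nothing is ever opened to 0.0.0.0/0 here.
    backend.authorize(ip_protocol="tcp", from_port=8080, to_port=8080,
                      src_group=frontend)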

Another policy to consider is the regular rotation of private keys, using the private key regeneration feature provided by Amazon to minimize the effect an attacker can have if he/she were to gain access to your private key. If anything, the one feature I would /love/ to see Amazon expose would be a private key regeneration feature that requires a specific x509 public/private certificate which has been given extended permissions to call the private key regeneration API. This would then allow the generation of a new private key to be automated from a single node/IP that has been granted permission to use that x509 public/private certificate with extended permissions.

Actually, come to think of it, extending the IP-level security controls provided by the EC2 API to the public/private keys, so that even if a key is compromised it can't be used outside of a given set/range of IPs, would be an /excellent/ way to achieve the security level you are suggesting. At least this way, regardless of whether a key has been compromised, the only way to use it would be from a node which has been granted permission. And if a node that has been granted permission starts making suspicious requests, at least you know where those requests are coming from, and you can quarantine the node to prevent any further damage while you investigate the attack. This approach would maintain backwards compatibility with the existing API and public/private keypair, using an opt-in approach similar to the way Amazon handled adding vanity domains and logging support to S3 a while back.
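
Purely as a hypothetical illustration of what such a check might look like on Amazon's side (nothing like this exists in today's API; the key-to-CIDR mapping and function names are invented):

    # Hypothetical illustration of IP-bound keys: a signed request is honored
    # only if it arrives from an address range tied to that access key.
    # Nothing like this exists in the current EC2 API; the mapping is invented.
    import ipaddress

    KEY_ALLOWED_RANGES = {
        "AKIDEXAMPLE": [ipaddress.ip_network("10.0.1.0/24")],
    }

    def request_allowed(access_key, source_ip, signature_valid):
        if not signature_valid:
            return False
        allowed = KEY_ALLOWED_RANGES.get(access_key, [])
        return any(ipaddress.ip_address(source_ip) in net for net in allowed)

    # A stolen but IP-bound key would be useless from an attacker's own machine:
    print(request_allowed("AKIDEXAMPLE", "10.0.1.17", True))    # True
    print(request_allowed("AKIDEXAMPLE", "203.0.113.9", True))  # False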

Hey Amazon: Something to consider?
