You can now manage the listeners, SSL certificates, and SSL ciphers for an existing Elastic Load Balancer from within the AWS Management Console. This enhancement makes it even easier to get started with Elastic Load Balancing and simpler to maintain a highly available application using Elastic Load Balancing. While this functionality has been available via the API and command line tools, many customers told us that it was critical to be able to use the AWS Console to manage these settings on an existing load balancer.
With this update, you can add a new listener with a front-end protocol/port and back-end protocol/port:
If the listener uses encryption (HTTPS or SSL listeners), then you can create or select the SSL certificate:
In addition to selecting or creating the certificate, you can now update the SSL protocols and ciphers presented to clients:
We have also expanded IPv6 support for Elastic Load Balancing to include the US West (Northern California) and US West (Oregon) regions.
Introduction
Amazon CloudFront's network of edge locations (currently 30, with more in the works) gives you the ability to distribute static and streaming content to your users at high speed with low latency.
Today we are introducing a set of features that, taken together, allow you to use CloudFront to serve dynamic, personalized content more quickly.
What is Dynamic Personalized Content?
As you know, content on the web is identified by a URL (Uniform Resource Locator) such as http://media.amazonwebservices.com/blog/console_cw_est_charge_service_2.png. A URL like this always identifies a unique piece of content.
A URL can also contain a query string. This takes the form of a question mark ("?") followed by additional information that the server can use to personalize the request. Suppose that we had a server at www.example.com that could return information about a particular user by invoking a PHP script that accepts a user name as an argument, with URLs like http://www.example.com/userinfo.php?jeff or http://www.example.com/userinfo.php?tina.
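To make that concrete, here is a small Python sketch (purely illustrative; CloudFront's real cache key is internal) showing why the query string matters to a cache:

```python
from urllib.parse import urlsplit

def cache_key(url, include_query_string=False):
    """Build a simplified cache key the way an edge cache might:
    host + path, optionally extended with the query string."""
    parts = urlsplit(url)
    key = f"{parts.netloc}{parts.path}"
    if include_query_string and parts.query:
        key += f"?{parts.query}"
    return key

jeff = "http://www.example.com/userinfo.php?jeff"
tina = "http://www.example.com/userinfo.php?tina"

# Ignoring query strings, both URLs collapse to the same key,
# so per-user responses could not be cached separately.
assert cache_key(jeff) == cache_key(tina)

# Including the query string gives each user a distinct cache entry.
assert cache_key(jeff, include_query_string=True) != \
       cache_key(tina, include_query_string=True)
```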
Up until now, CloudFront did not use the query string as part of the key that it uses to identify the data that it stores in its edge locations.
We're changing that today, and you can now use CloudFront to speed access to your dynamic data at our current low rates, making your applications faster and more responsive, regardless of where your users are located.
With this change (and the others that I'll tell you about in a minute), Amazon CloudFront will become an even better component of your global applications. We've put together a long list of optimizations, each of which will increase the performance of your application on its own, but which work even better when you use them in conjunction with other AWS services such as Route 53, Amazon S3, and Amazon EC2.
Tell Me More
OK, so here's what we've done:
Persistent TCP Connections - Establishing a TCP connection takes some time because each new connection requires a three-way handshake between the server and the client. Amazon CloudFront makes use of persistent connections to each origin for dynamic content. This obviates the connection setup time that would otherwise slow down each request. Reusing these "long-haul" connections back to the server can eliminate hundreds of milliseconds of connection setup time. The connection from the client to the CloudFront edge location is also kept open whenever possible.
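A bit of back-of-the-envelope arithmetic (the round-trip time and request count below are assumed, not measured CloudFront figures) shows where those savings come from:

```python
# A new TCP connection costs one full round trip (SYN, SYN-ACK, ACK)
# before the HTTP request can even be sent.
rtt_ms = 150          # assumed edge-to-origin round-trip time
requests = 20         # assumed origin-bound requests in a session

# Without persistent connections, every request pays the handshake.
total_without_reuse = requests * rtt_ms

# With persistent connections, only the first request pays it.
total_with_reuse = 1 * rtt_ms

saved_ms = total_without_reuse - total_with_reuse
print(f"Connection reuse saves ~{saved_ms} ms across {requests} requests")
assert saved_ms == 2850
```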
Support for Multiple Origins - You can now reference multiple origins (sources of content) from a single CloudFront distribution. This means that you could, for example, serve images from Amazon S3, dynamic content from EC2, and other content from third-party sites, all from a single domain name. Being able to serve your entire site from a single domain will simplify implementation, allow the use of more relative URLs within the application, and can even get you past some cross-site scripting limitations.
Support for Query Strings - CloudFront now uses the query string as part of its cache key. This optional feature gives you the ability to cache content at the edge that is specific to a particular user, city (e.g. weather or traffic), and so forth. You can enable query string support for your entire website or for selected portions, as needed.
Variable Time-To-Live (TTL) - In many cases, dynamic content is either not cacheable or cacheable for a very short period of time, perhaps just a few seconds. In the past, CloudFront's minimum TTL was 60 minutes since all content was considered static. The new minimum TTL value is 0 seconds. If you set the TTL for a particular origin to 0, CloudFront will still cache the content from that origin. It will then make a GET request with an If-Modified-Since header, thereby giving the origin a chance to signal that CloudFront can continue to use the cached content if it hasn't changed at the origin.
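This revalidation flow can be sketched as a toy model (illustrative only, not CloudFront's implementation) of how a zero-second TTL still benefits from caching via conditional GETs:

```python
class Origin:
    """A pretend origin server that honors If-Modified-Since."""
    def __init__(self):
        self.body, self.last_modified = "v1", "Mon, 01 Jan 2012 00:00:00 GMT"
    def get(self, url, ims=None):
        if ims == self.last_modified:
            return None                         # 304 Not Modified
        return self.body, self.last_modified

class Edge:
    """A pretend edge cache with a minimum TTL of zero."""
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}                         # url -> (body, last_modified)
    def get(self, url):
        if url not in self.cache:
            self.cache[url] = self.origin.get(url)
            return self.cache[url][0], "miss"
        body, last_modified = self.cache[url]
        # TTL of 0: always revalidate with a conditional GET.
        fresh = self.origin.get(url, ims=last_modified)
        if fresh is None:                       # origin replied 304
            return body, "revalidated"          # cached bytes reused
        self.cache[url] = fresh
        return fresh[0], "updated"

origin = Origin()
edge = Edge(origin)
assert edge.get("/") == ("v1", "miss")
assert edge.get("/") == ("v1", "revalidated")   # no body re-downloaded
origin.body, origin.last_modified = "v2", "Tue, 02 Jan 2012 00:00:00 GMT"
assert edge.get("/") == ("v2", "updated")       # change picked up at once
```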
Large TCP Window - We increased the initial size of CloudFront's TCP window to 10 back in February, but we didn't say anything at the time. This enhancement allows more data to be "in flight" across the wire at a given time, without the usual waiting time as the window grows from the older value of 2.
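To see what the larger window buys you, a quick calculation (using a typical 1460-byte TCP segment size; actual values vary by path):

```python
mss = 1460                      # typical TCP maximum segment size, in bytes

def first_flight_bytes(initcwnd):
    """Bytes a sender can put on the wire before the first ACK returns."""
    return initcwnd * mss

old = first_flight_bytes(2)     # the older common initial window
new = first_flight_bytes(10)    # CloudFront's larger initial window

print(f"old: {old} bytes, new: {new} bytes in the first round trip")
assert (old, new) == (2920, 14600)
# A ~14 KB first flight is enough to deliver many small objects, or a
# typical HTML document, in a single round trip.
```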
API and Management Console Support - All of the features listed above are accessible from the CloudFront APIs and the CloudFront tab of the AWS Management Console. You can now use URL patterns to exercise fine-grained control over the caching and delivery rules for different parts of your site.
Working Together
Let's take a look at the ways that various AWS services work together to make delivery of static and dynamic content as fast, reliable, and efficient as possible:
From Application / Client to CloudFront - CloudFront’s request routing technology ensures that each client is connected to the nearest edge location as determined by latency measurements that CloudFront continuously takes from internet users around the world. Route 53 may be optionally used as a DNS service to create a CNAME from your custom domain name to your CloudFront distribution. Persistent connections expedite data transfer.
Within the CloudFront Edge Locations - Multiple levels of caching at each edge location speed access to the most frequently viewed content and reduce the need to go to your origin servers for cacheable content.
From Edge Location to Origin - The nature of dynamic content requires repeated back and forth calls to the origin server. CloudFront edge locations collapse multiple concurrent requests for the same object into a single request. They also maintain persistent connections to the origins (with the large window size). Connections to other parts of AWS are made over high-quality networks that are monitored by Amazon for both availability and performance. This monitoring has the beneficial side effect of keeping error rates low and window sizes high.
Cache Behaviors
In order to give you full control over query string support, TTL values, and origins, you can now associate a set of Cache Behaviors with each of your CloudFront distributions. Each behavior includes the following elements:
Path Pattern - A pattern (e.g. "*.jpg") that identifies the content subject to this behavior.
Origin Identifier - The identifier for the origin to which CloudFront should forward user requests that match this path pattern.
Query String - A flag to enable support for query string processing for URLs that match the path pattern.
Trusted Signers - Information to enable other AWS accounts to create signed URLs for this URL path pattern.
Protocol Policy - Either allow-all or https-only, also applied only to this path pattern.
MinTTL - The minimum time-to-live for content subject to this behavior.
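As a sketch of how these elements combine (Python; the field names, patterns, and origins here are illustrative, not the exact CloudFront API element names), routing a request to a behavior by path pattern looks roughly like this:

```python
from fnmatch import fnmatch

# Hypothetical distribution configuration: each behavior pairs a path
# pattern with the settings described above.
behaviors = [
    {"path_pattern": "*.jpg",  "origin": "s3-images", "query_string": False, "min_ttl": 86400},
    {"path_pattern": "/api/*", "origin": "ec2-app",   "query_string": True,  "min_ttl": 0},
]
default_behavior = {"origin": "s3-static", "query_string": False, "min_ttl": 3600}

def match_behavior(path):
    """Return the first behavior whose pattern matches, else the default."""
    for b in behaviors:
        if fnmatch(path, b["path_pattern"]):
            return b
    return default_behavior

assert match_behavior("/photos/cat.jpg")["origin"] == "s3-images"
assert match_behavior("/api/userinfo.php")["min_ttl"] == 0
assert match_behavior("/index.html")["origin"] == "s3-static"
```

This is how a single distribution can serve images from S3, dynamic content from EC2, and everything else from a default origin, each with its own caching rules.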
And Here You Go
Together with CloudFront's cost-effectiveness (no minimum commitments or long-term contracts), these features add up to a content distribution system that is fast, powerful, and easy to use.
So, what do you think? What kinds of applications can you build with these powerful new features?
Introduction
Because the AWS Cloud operates on a pay-as-you-go model, your monthly bill will reflect your actual usage. In situations where your overall consumption can vary from hour to hour, it is always a good idea to log in to the AWS portal and check your account activity on a regular basis. We want to make this process easier and simpler because we know that you have more important things to do.
To this end, you can now monitor your estimated AWS charges with our new billing alerts, which use Amazon CloudWatch metrics and alarms.
What's Up?
We regularly estimate the total monthly charge for each AWS service that you use. When you enable monitoring for your account, we begin storing the estimates as CloudWatch metrics, where they'll remain available for the usual 14-day period. The following variants on the billing metrics are stored in CloudWatch:
Estimated Charges: Total
Estimated Charges: By Service
Estimated Charges: By Linked Account (if you are using Consolidated Billing)
Estimated Charges: By Linked Account and Service (if you are using Consolidated Billing)
You can use this data to receive billing alerts (which are simply Amazon SNS notifications triggered by CloudWatch alarms) at the email address of your choice. Since the notifications use SNS, you can also route them to your own applications for further processing.
It is important to note that these are estimates, not predictions. The estimate approximates the cost of your AWS usage to date within the current billing cycle and will increase as you continue to consume resources. It includes usage charges for things like Amazon EC2 instance-hours and recurring fees for things like AWS Premium Support. It does not take trends or potential changes in your AWS usage pattern into account.
So, what can you do with this? You can start by using the billing alerts to let you know when your AWS bill will be higher than expected. For example, you can set up an alert to make sure that your AWS usage remains within the Free Usage Tier or to find out when you are approaching a budget limit. This is a very obvious and straightforward use case, and I'm sure it will be the most common way to use this feature at first. However, I'm confident that our community will come up with some more creative and more dynamic applications.
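If you'd rather script your alerts than click through the console, here's a hedged sketch of the CloudWatch alarm that sits behind a billing alert (Python, boto3-style parameters; the account ID, topic ARN, and alarm name are placeholders):

```python
# Build the parameters for a CloudWatch billing alarm. The AWS/Billing
# namespace, the EstimatedCharges metric, and its Currency dimension are
# what the billing-alert form configures on your behalf.
def billing_alarm_params(threshold_usd, topic_arn):
    return {
        "AlarmName": f"aws-bill-over-{threshold_usd}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 6 * 60 * 60,          # estimates update a few times a day
        "EvaluationPeriods": 1,
        "Threshold": threshold_usd,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],    # the SNS notification target
    }

params = billing_alarm_params(
    100.0, "arn:aws:sns:us-east-1:123456789012:billing-alerts")
assert params["Namespace"] == "AWS/Billing"
assert params["Threshold"] == 100.0
# With boto3 installed and credentials configured, you would pass these
# straight through:
#   boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(**params)
```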
Here are some ideas to get you started:
Relate the billing metrics to business metrics such as customer count, customer acquisition cost, or advertising spending (all of which you could also store in CloudWatch, as custom metrics) and use them to track the relationship between customer activity and resource consumption. You could (and probably should) know exactly how much you are spending on cloud resources per customer per month.
Update your alerts dynamically when you change configurations to add or remove cloud resources. You can use the alerts to make sure that a regression or a new feature hasn't adversely affected your operational costs.
Establish and monitor ratios between service costs. You can establish a baseline set of costs, and set alarms on the total charges and on the individual services. Perhaps you know that your processing (EC2) cost is generally 1.5x your database (RDS) cost, which in turn is roughly equal to your storage (S3) cost. Once you have established the baselines, you can easily detect changes that could indicate a change in the way that your system is being used (perhaps your newer users are storing, on average, more data than the original ones).
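Here's one way you might sketch that ratio monitoring in Python (all dollar figures, baselines, and tolerances below are made up for illustration):

```python
def ratio_alerts(charges, baseline_ratios, tolerance=0.25):
    """Flag service-cost ratios that drift more than `tolerance`
    (25% here) from their established baselines."""
    alerts = []
    for (a, b), expected in baseline_ratios.items():
        actual = charges[a] / charges[b]
        if abs(actual - expected) / expected > tolerance:
            alerts.append((a, b, round(actual, 2), expected))
    return alerts

# Baselines from the example above: EC2 is 1.5x RDS, RDS roughly equals S3.
baseline = {("EC2", "RDS"): 1.5, ("RDS", "S3"): 1.0}

# A normal month: nothing to report.
assert ratio_alerts({"EC2": 150, "RDS": 100, "S3": 100}, baseline) == []

# S3 spend doubles (newer users storing more data): the RDS/S3 ratio trips.
alerts = ratio_alerts({"EC2": 150, "RDS": 100, "S3": 200}, baseline)
assert alerts == [("RDS", "S3", 0.5, 1.0)]
```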
Enabling and Setting a Billing Alert
To get started, visit your AWS Account Activity page and enable monitoring of your AWS charges. Once you've done that, you can set your first billing alert on your total AWS charges. Minutes later (as soon as the data starts to flow into CloudWatch) you'll be able to set alerts for charges related to any of the AWS products that you use.
We've streamlined the process to make setting up billing alerts as easy and quick as possible. You don't need to be familiar with CloudWatch alarms; just fill out this simple form, which you can access from the Account Activity page:
You'll receive a subscription notification email from Amazon SNS; be sure to confirm it by clicking the included link to make sure you receive your alerts. You can then access your alarms from the Account Activity page or the CloudWatch dashboard in the AWS Management Console.
Going Further
If you have already used CloudWatch, you are probably thinking about some even more advanced ways to use this new information. Here are a few ideas to get you started:
Publish the alerts to an SNS topic, and use them to recalculate your business metrics, possibly altering your Auto Scaling parameters as a result. You'd probably use the CloudWatch APIs to retrieve the billing estimates and to set new alarms.
Use two separate AWS accounts to run two separate versions of your application, with dynamic A/B testing based on cost and ROI.
I'm sure that your ideas are even better than mine. Feel free to post them, or (better yet), implement them!
We are continuing to simplify the Windows development experience on AWS, and today we are excited to announce Amazon RDS for SQL Server and .NET support for AWS Elastic Beanstalk. Amazon RDS takes care of the tedious aspects of deploying, scaling, patching, and backing up of a relational database, freeing you from time-consuming database administration tasks. AWS Elastic Beanstalk is an easy way to deploy and manage applications in the AWS cloud and handles the deployment details of capacity provisioning, load balancing, auto scaling, and application health monitoring.
Today we are extending the manageability benefits of Amazon RDS to SQL Server customers. Amazon RDS now supports Express, Web, Standard, and Enterprise Editions of SQL Server 2008 R2. We plan to add support for SQL Server 2012 later this year.
If you are a new Amazon RDS customer, you can use Amazon RDS for SQL Server (Express Edition) under the free usage tier for a full year. After that, you can use the service under multiple licensing models, with prices starting as low as $0.035/hour. Refer to Amazon RDS for SQL Server pricing for more details.
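As a quick back-of-the-envelope check on that starting price (illustrative only; actual charges depend on instance class, license model, and region, so see the pricing page):

```python
# Monthly cost of an always-on instance at the quoted starting rate.
hourly_rate = 0.035             # the quoted starting price, $/hour
hours_per_month = 24 * 30       # a simplifying assumption

monthly = hourly_rate * hours_per_month
print(f"~${monthly:.2f}/month at the starting rate")
assert round(monthly, 2) == 25.20
```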
Today, we are extending Elastic Beanstalk to our Windows developers who are building .NET applications. Elastic Beanstalk leverages the Windows Server 2008 R2 AMI and IIS 7.5 to run .NET applications. You can run existing applications on AWS with minimal changes. There is no additional charge for Elastic Beanstalk—you pay only for the AWS resources needed to store and run your applications. And if you are eligible for the AWS free usage tier, you can deploy and run your application on Elastic Beanstalk for free.
AWS Toolkit for Visual Studio Enhancements
We are also updating the AWS Toolkit for Visual Studio so you can deploy your existing web application projects to AWS Elastic Beanstalk. You can also use the AWS Toolkit for Visual Studio to create Amazon RDS DB Instances and connect to them directly, so you can focus on building your applications without leaving your development environment.
Deploy Your Application to AWS Elastic Beanstalk
To get started, simply install the AWS Toolkit for Visual Studio and make sure you have signed up for an AWS account. You can deploy any Visual Studio Web project to AWS Elastic Beanstalk, including ASP.NET MVC projects and ASP.NET Web Forms. As an example, I will use the NerdDinner MVC sample application.
To deploy to AWS Elastic Beanstalk, right-click the project, and then click Publish to AWS. Provide the details and complete the wizard. This will launch a new Elastic Beanstalk environment and create the AWS resources to run your application. That’s it; NerdDinner is now running on Elastic Beanstalk.
Create and Connect to an Amazon RDS Database Instance
By default, NerdDinner connects to a local SQL Server Express database, so we’ll need to make a few changes to connect it to an Amazon RDS for SQL Server instance. Let’s start by creating a new Amazon RDS for SQL Server instance using the AWS Explorer view inside Visual Studio.
We will also need to create the schema that NerdDinner expects. To do so, simply use the Publish to Provider wizard in Visual Studio to export the schema and data to a SQL script. You can then run the SQL script against the RDS for SQL Server database to recreate the schema and data.
Update Your Running Application
Now that the Amazon RDS for SQL Server database is set up, let’s modify the application’s connection string to use it. To do so, you simply modify the ConnectionString.config file in your NerdDinner project and provide the connection details of your RDS instance.
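If you prefer to script the edit, here's a hedged sketch in Python (the connection-string name and endpoint below are stand-ins; the real NerdDinner config may differ in detail). The change boils down to rewriting one attribute:

```python
import xml.etree.ElementTree as ET

# A minimal stand-in for NerdDinner's ConnectionString.config; the real
# file uses the same <connectionStrings>/<add> shape.
config = """<connectionStrings>
  <add name="NerdDinnerConnectionString"
       connectionString="Data Source=.\\SQLEXPRESS;Initial Catalog=NerdDinner;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>"""

def point_at_rds(xml_text, endpoint, db, user, password):
    """Rewrite every connection string to target the RDS endpoint."""
    root = ET.fromstring(xml_text)
    for add in root.iter("add"):
        add.set("connectionString",
                f"Data Source={endpoint},1433;Initial Catalog={db};"
                f"User ID={user};Password={password}")
    return ET.tostring(root, encoding="unicode")

updated = point_at_rds(config, "mydb.abc123.us-east-1.rds.amazonaws.com",
                       "NerdDinner", "admin", "secret")
assert "rds.amazonaws.com,1433" in updated   # now points at RDS, port 1433
assert "SQLEXPRESS" not in updated           # local instance no longer used
```

(In practice you would keep credentials out of source control rather than hard-coding them as above.)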
Finally, you will republish these changes to AWS Elastic Beanstalk. Using incremental deployments, the AWS Toolkit for Visual Studio uploads only the modified file, and the RDS-backed NerdDinner becomes available a few seconds later.
I hope that you enjoy these new AWS features!
-- Jeff (with lots of help from Saad Ladki of the Elastic Beanstalk team)
Many customers are taking advantage of the ability to run many different types of workloads on the AWS Cloud. After listening to customer feedback (as we always like to do) and feature requests, today we're happy to announce some updates to our Microsoft SQL Server offerings. Here they are.
Support for Additional Instance Types
You can now launch Microsoft SQL Server on m1.small (1 ECU, 1.7 GB RAM) and m1.medium (2 ECU, 3.75 GB RAM) instance types. With several instance types now available, you might also want to take a look at the instance type details to find the best fit for your workload.
Support for Microsoft SQL Server Web Edition
For customers who run web-facing workloads with Microsoft SQL Server software, we are introducing support for Microsoft SQL Server Web Edition, which brings together affordability, scalability, and manageability in a single offering. SQL Server Web will be supported across all Amazon EC2 instance types, all AWS regions, and On-Demand and Reserved Instance offerings.
Support for Microsoft SQL Server 2012
Last, but definitely not least, we now support Microsoft SQL Server 2012 on Amazon EC2. Customers now have immediate access to Amazon published (official) AMIs for:
Using the new AWS Marketplace, you can easily find, compare, and start using an array of software systems and products. We've streamlined the discovery, deployment, and billing steps to make the entire process of finding and buying software quick, painless, and worthwhile for application consumers and producers.
Here's what it looks like:
We are launching the AWS Marketplace with a wide selection of development and IT software, grouped into three categories:
Business Software - Business Intelligence, Collaboration, Content Management, CRM, eCommerce, High Performance Computing, Media, Project Management, and Storage & Backup.
The AWS Marketplace includes pay-as-you-go products that are available in Amazon Machine Image (AMI) form and hosted software with a variety of pricing models. When you launch an AMI, the product will run on your own private EC2 instance and the usage charges (monthly and/or hourly) will be itemized on your AWS Account Activity report. Hosted software is run by the seller and accessed through a web browser.
The Details
Each product in the marketplace is described by a detail page. The page contains the information you'll need to make an informed decision, including an overview, a rating, versioning data, details on the product's support model, a link to the EULA (End User License Agreement), and pricing for each AWS Region.
For this example, I will focus on the Zend Server. I can find it by browsing or by searching:
I can then choose from among a list of matching products:
I can read all about the product, and I can check on the pricing. I'll pay for the software and for the AWS resources separately:
The software pricing can vary by EC2 instance type:
1-Click Launch
When I am ready to go, I click the Continue button. I then have two launch options: 1-Click and EC2 Console:
The 1-click launch process starts with sensible default values (as recommended by the software provider) that I can customize as desired by expanding the section of interest:
As you can see from the screen shot above, the Marketplace can use an existing EC2 security group or it can create a new one that's custom tailored to the application's requirements. Once everything is as I like it, I need only click on the Accept Terms and Launch button:
I can visit the Your Software section of the AWS Marketplace to see all of my subscriptions and all of the EC2 instances that they are running on:
The Access Software link routes directly to the admin page for the Zend Server. After accepting the license agreement and entering a password, I can proceed to the Zend Server console:
EC2 Console Launch
I can also choose to launch the Zend Server AMI through the EC2 console. You can do this if you want to launch multiple instances at the same time, exercise additional control over the security groups, launch the software within a VPC or on Spot Instances, or perform other types of customization:
The AWS Marketplace distributes and then tracks AMIs for each product across Regions. These AMIs are versioned and the versions are tracked; you have the ability to select the version of your choice when launching a product.
Selling on the AWS Marketplace
If you are an ISV (Independent Software Vendor) and you want to list your products in the AWS Marketplace, start here! Check out our listing guidelines and best practices guides, and then get in touch with us via the email address on that page. Products that fit within one of the existing categories will be given the highest priority. As I noted earlier, we'll add additional categories over time.
This white paper discusses general concepts regarding how to use SharePoint services on AWS and provides detailed technical guidance on how to configure, deploy, and run a SharePoint Server farm on AWS. It illustrates reference architectures for common SharePoint Server deployment scenarios and discusses their network, security, and deployment configurations so you can run SharePoint Server workloads in the cloud with confidence. This white paper is targeted at IT infrastructure decision-makers and administrators. After reading it, you should have a good idea of how to set up and deploy the components of a typical SharePoint Server farm on AWS.
Here's what you will find inside:
SharePoint Server Farm Reference Architecture
Common SharePoint Server Deployment Scenarios
Intranet SharePoint Server Farm
Internet Website or Service Based on SharePoint Server
Implementing SharePoint Server Architecture Scenarios in AWS
Amazon VPC setups for Intranet and Public Website Scenarios
AD DS Setup and DNS Configuration
Server Setup and Configuration
Mapping SharePoint Server Roles and Servers to Amazon EC2 AMIs and Instance Types
Let's face it. Sometimes you just need a local server. Perhaps your office is too cold, or you have the urge to pull the cover off and reseat the memory. Or, you might have some data on floppy disks that you simply cannot live without.
Because we will leave no stone unturned in our efforts to bring on-demand computing to the masses, I would like to tell you about today's release, the Amazon Fresh Server.
Starting today, if you live within 45° North or South of the Equator, we can deliver a fresh EC2 server to you in 15 minutes or less. This is a genuine, physical server. We've launched (literally) some brand new technology in order to make this a reality. Read on to learn a lot more.
There are two delivery modes: terrestrial and atmospheric.
Terrestrial Delivery
If you live in a densely populated urban area, a uniformed delivery person will have your new server on your doorstep in a matter of minutes. As I write this, trucks loaded with servers are circling the 100 largest cities in the country. Here's one of our delivery people in action:
Actual delivery person delivering actual server.
Atmospheric Delivery
The Atmospheric Delivery model is a lot more interesting. In conjunction with our friends at NASA JPL, we've launched a fleet of satellites into low Earth orbit. Each satellite is stocked with a considerable number of Cluster Compute Eight Extra Large (cc2.8xlarge) servers, individually packaged in our proprietary re-entry shields.
When you order a server (currently limited to one per customer) using the new Deliver Instance button, we'll select a satellite and place your order in the appropriate delivery queue. After a set of careful (checked, double-checked, and then re-checked) ballistic calculations, the satellite will release your order on a trajectory that will deliver it to the latitude and longitude of your choice, accurate to a 1 meter radius, within 10 minutes. You need do nothing more than fill out this dialog:
Actual fake picture of genuine AWS Management Console.
So far so good, right? Read on, it gets even better!
As you probably know, a satellite in LEO is traveling at approximately 7.8 km/second. The amount of heat generated on re-entry to the atmosphere is considerable since the payload must lose all of that speed within a few minutes. We capture that "delta-v" energy and use it to power the server for up to two weeks. Because the server includes a built-in Wi-Fi card and a preconfigured Elastic IP Address, you don't have to connect any cables. You can simply leave it where it lands and start using it. In fact, under optimal conditions, you can start using it while it is still decelerating. You'll be up and running in minutes.
Actual conceptual diagram by genuine artist.
As part of the pre-release beta, my son Stephen ordered a server and took delivery in the courtyard of his graduate student housing complex. It was up and running when it landed and we were happily coding away in no time flat:
Actual developers using a genuine Amazon Fresh Server.
When you are done with your server, you can initiate the return process via a single click in the AWS Management Console.
This is a pilot program, and we'll be taking orders starting today. Get your server today!
We have received tremendous positive feedback from customers and partners since we launched Amazon DynamoDB two months ago. Amazon DynamoDB enables customers to offload the administrative burden of operating and scaling a highly available distributed database cluster while paying only for the actual system resources they consume. We also received a ton of great feedback about how simple it is to get started and how easy it is to scale the database. Since Amazon DynamoDB introduced the new concept of a provisioned throughput pricing model, we also received several questions about how to think about its Total Cost of Ownership (TCO).
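To illustrate how provisioned throughput pricing behaves (the capacity figures and rates in this Python sketch are placeholders, not actual DynamoDB prices), the monthly capacity cost is a simple, predictable product:

```python
# You pay for the read/write capacity you provision, by the hour,
# whether or not you fully use it -- which makes the cost easy to
# forecast up front.
def monthly_throughput_cost(read_units, write_units,
                            read_rate_per_unit_hour,
                            write_rate_per_unit_hour,
                            hours=720):
    return hours * (read_units * read_rate_per_unit_hour
                    + write_units * write_rate_per_unit_hour)

cost = monthly_throughput_cost(read_units=1000, write_units=500,
                               read_rate_per_unit_hour=0.0002,
                               write_rate_per_unit_hour=0.001)
print(f"${cost:.2f}/month for the provisioned capacity")
assert cost == 720 * (1000 * 0.0002 + 500 * 0.001)
```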
We are very excited to publish our new TCO whitepaper: The Total Cost of (Non) Ownership of a NoSQL Database service. Download PDF.
In this whitepaper, we attempt to explain the TCO of Amazon DynamoDB and highlight the different cost factors involved in deploying and managing a scalable NoSQL database, whether on-premises or in the cloud.
When calculating TCO, we recommend that you start with a specific use case or application that you plan to deploy in the cloud instead of relying on generic comparison analysis. Hence, in this whitepaper, we walk through an example scenario (a social game to support the launch of a new movie) and highlight the total costs for three different deployment options over three different usage patterns. The graph below summarizes the results of our white paper.
When determining the TCO of a cloud-based service, it’s easy to overlook several cost factors, such as administration and redundancy costs, which can lead to inaccurate and incomplete comparisons. Additionally, in the case of a NoSQL database solution, people often forget to include database administration costs. Hence, in the paper, we provide a detailed breakdown of costs over the lifecycle of an application.
It’s challenging to do the right apples-to-apples comparison between on-premises software and a cloud service, especially since some costs are up-front capital expenditures while others are ongoing operating expenditures. In order to simplify the calculations and the cost comparison between options, we have amortized the costs over a three-year period for the on-premises option. We have clearly stated our assumptions for each option so you can adjust them based on your own research or on quotes from your hardware vendors and co-location providers.
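The amortization itself is simple arithmetic; here's a Python sketch with made-up dollar figures (substitute your own quotes and assumptions):

```python
# Spread up-front (capital) costs over 36 months and add them to the
# recurring (operating) costs, matching the paper's approach.
def monthly_tco(capex, monthly_opex, months=36):
    return capex / months + monthly_opex

on_premises = monthly_tco(capex=180_000,      # servers, racks, setup
                          monthly_opex=6_000) # power, space, admin time
cloud       = monthly_tco(capex=0,
                          monthly_opex=9_500) # pay-as-you-go charges

print(f"on-premises: ${on_premises:,.0f}/month, cloud: ${cloud:,.0f}/month")
assert on_premises == 11_000
```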
Amazon DynamoDB frees you from the headaches of provisioning hardware and systems software, setting up and configuring a distributed database cluster, and managing ongoing cluster operations. There are no hardware administration costs since there is no hardware to maintain. There are no NoSQL database administration costs such as patching the OS and managing and scaling the NoSQL cluster, since there is no software to maintain. This is an important point because NoSQL database admins are not that easy to find these days.
We hope that the whitepaper provides you with the necessary TCO information you need so you can make the right decision when it comes to deploying and running a NoSQL database solution. If you have any questions, comments, suggestions and/or feedback, feel free to reach out to us.