Amazon releases alternative way to upload into the cloud

With cloud computing, sometimes the biggest hurdle is getting all of your data up into the cloud for the initial upload (or subsequent syncs), or downloading it all at a later date. If you are talking about multiple terabytes then it could take days to transfer, and if you are paying for bandwidth like you are in Australia then it could become very expensive in data transfer fees. Since last year Amazon have been trialling a new “Import/Export” service where you can send in a hard drive with your data, and Amazon will upload the data directly into their data centre. As reported by Network World, this service is now generally available and has been used by at least 3 high profile companies http://www.networkworld.com/news/2010/061010-amazon-cloud-fedex.html
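To see why shipping a drive can beat uploading, here is a rough back-of-the-envelope calculation. The dataset size and link speed below are assumptions for illustration, not figures from Amazon:

```python
# Rough estimate of how long a large upload takes over a typical link.
# Both figures below are illustrative assumptions.
data_tb = 2        # terabytes to upload (assumed)
link_mbps = 10     # sustained uplink in megabits per second (assumed)

data_bits = data_tb * 1e12 * 8            # decimal TB -> bits
seconds = data_bits / (link_mbps * 1e6)   # bits / (bits per second)
days = seconds / 86400

print(f"{data_tb} TB at {link_mbps} Mbit/s is roughly {days:.1f} days")
```

At those assumed numbers the upload takes over two weeks of sustained transfer, before counting per-gigabyte bandwidth charges, which is exactly the case the Import/Export service targets.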

You can currently send your drives to data centres in Seattle, Virginia and Dublin, Ireland for the Import/Export.

This is good, because as Ted Stevens reminds us, “The internet isn’t like a truck. It is a series of tubes, and those tubes can be blocked”.


Architecting for the cloud – Best practices from Amazon

Amazon released a great “best practices” guide on how to architect for the cloud. It can be found at http://jineshvaria.s3.amazonaws.com/public/cloudbestpractices-jvaria.pdf

It covers a lot of great information. Some of it is Amazon-specific, as you would expect, but much of it is generic enough to apply to any cloud architecture.

  • Starts with the benefits of using the cloud (reduced up front cost, quicker provisioning of additional resources, etc.)
  • Explains why your application needs to be created in a way that you can scale the work out, not up
  • Explains the concept of elasticity
  • The concept of changing your thought process from “this one server doesn’t have enough RAM to handle all the users”, to thinking of the resources as abstract components for you to use
  • Expecting that servers will fail and to compensate for it
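The last point, expecting servers to fail and compensating for it, often boils down to wrapping calls to other nodes in retries with backoff. A minimal sketch of that idea (the function names and the simulated flaky call are made up for illustration, not from the guide):

```python
import time

def call_with_retries(operation, attempts=3, delay=0.01):
    """Call an unreliable operation, retrying with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                            # out of retries: surface the failure
            time.sleep(delay * (2 ** attempt))   # back off before the next attempt

# Simulate a node that fails twice, then recovers.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("node unavailable")
    return "ok"

print(call_with_retries(flaky, attempts=5))  # succeeds on the third attempt
```

The same pattern applies whether the “operation” is a queue read, a storage call or a request to another instance: assume it can fail, and make the failure handling explicit.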

If you haven’t read it yet, I highly recommend that you do.

By David Burela

New cloud computing community website

I would like to announce the launch of a new Cloud Computing community website!

www.AllYourClouds.com

This is a new community website which focuses on having the answers for all your clouds.

Got a question about Amazon EC2, Azure, Google App Engine, GoGrid, Rackspace, etc.? Need to know how to modify your code? Wondering how to migrate?
Just post the question and someone in the community will answer it for you.

The best part is that the site uses OpenID, so there is no need to sign up. Just click to log in with your existing credentials (Google, WordPress, Blogger, etc.)

By David Burela

Analysis of Windows Azure virtual machine sizes

*Update* Microsoft have updated their FAQ with instance pricing

In my previous post on the newly released Azure SDK I touched on the ability to set a size for your VM instance.

Let’s delve into what size virtual machines are available (values from http://msdn.microsoft.com/en-us/library/ee814754.aspx)

VM Size       CPU cores   Memory    Disk space for local storage
Small         1           1.7 GB    250 GB
Medium        2           3.5 GB    500 GB
Large         4           7 GB      1,000 GB
Extra Large   8           15 GB     2,000 GB

The sizes are easy to follow: they are all multiples of the base VM size. Microsoft have said in their FAQ that pricing is also based on multiples of the small VM size. It is charged per CPU core per hour, so $0.12 per hour for the small VM, $0.24 for medium, $0.48 for large, and $0.96 for extra large.
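Because the rate scales linearly with cores, the hourly price of every size can be derived from the small instance alone. A quick sketch using the rates quoted above (check Microsoft’s current price list before relying on these figures):

```python
# Hourly Azure compute cost derived from the per-core rate quoted in the FAQ.
SMALL_RATE = 0.12  # USD per hour for the 1-core small instance

vm_cores = {"Small": 1, "Medium": 2, "Large": 4, "Extra Large": 8}

for size, cores in vm_cores.items():
    print(f"{size}: {cores} core(s) -> ${SMALL_RATE * cores:.2f}/hour")
```

Running a single extra large instance therefore costs the same as running eight small ones, which is why the scale-out-not-up advice from the best practices guide also makes pricing easier to reason about.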

Let’s draw up a matrix to compare the Microsoft Azure and Amazon EC2 pricing side by side:
