Amazon Hit With Friday The 13th Cloud Glitch, But It's Not A Horror Show

Amazon Web Services was hit with network connectivity problems in its US-EAST-1 region Friday morning. But the incident, which reportedly affected Heroku, GitHub, CMSWire and other sites, doesn't appear to have caused nearly as much pain and suffering as some of the Friday The 13th series of horror films (especially the later ones).

The network issues lasted from 7:04 a.m. to 8:54 a.m., Amazon said in a post to the AWS Service Health Dashboard. During that time, EC2 instances that were unreachable via their public IP addresses could still communicate with other instances in the same Availability Zone using private IP addresses.
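For customers caught by this, the practical fallback was private addressing within the Availability Zone. As a rough illustration only (assuming the boto3 SDK, valid AWS credentials and a placeholder instance ID, none of which come from Amazon's post), looking up an instance's private IP so a peer instance in the same zone can reach it directly might look like this:

# Illustrative sketch: fetch an EC2 instance's private IP address so another
# instance in the same Availability Zone can reach it when the public IP is down.
# The instance ID below is a placeholder, not one referenced in Amazon's post.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
instance = resp["Reservations"][0]["Instances"][0]

print("Public IP: ", instance.get("PublicIpAddress", "none/unreachable"))
print("Private IP:", instance.get("PrivateIpAddress"))
print("AZ:        ", instance["Placement"]["AvailabilityZone"])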

[Related: Tech Data Adds VMware vCloud Hybrid To Its Cloud Mix]

After the network issues, Amazon "experienced increased provisioning times for new load balancers, which did not impact existing load balancers," the company said in the dashboard post.

Also affected were Amazon's Relational Database Service, Simple Email Service and CloudHSM (Hardware Security Module). Redshift, the data warehousing service Amazon made generally available in February, also saw "a small number" of affected instances, Amazon said.

Amazon also experienced "increased error rates and latencies" for the Elastic Block Store (EBS) APIs, as well as increased error rates for EC2 instances that use the block storage service, the company said. These issues were resolved by 10:04 a.m.
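Elevated API error rates of this kind are typically transient, and the standard client-side response is to retry with exponential backoff. A minimal sketch of that pattern (the function name, attempt limits and delays here are illustrative assumptions, not part of Amazon's report) might look like:

# Illustrative retry-with-exponential-backoff pattern for transient API errors,
# the usual client-side response to the kind of elevated error rates Amazon
# reported for the EBS APIs. The wrapped call and parameters are placeholders.
import random
import time

def call_with_backoff(api_call, max_attempts=5, base_delay=0.5):
    for attempt in range(max_attempts):
        try:
            return api_call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Sleep for an exponentially growing, jittered interval before retrying.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))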

All of the issues affected load balancers in a single Availability Zone in the US-EAST-1 region, Amazon said.

Amazon gets a lot of heat for outages, and that's only fair given its status as the top cloud IaaS player. And there have been cases where EC2 has gone down and taken a lot of startups with it.

But cloud outages and glitches are pretty much unavoidable at this stage of the game. In this case, it looks like Amazon did everything it could to keep customers in the loop.

PUBLISHED SEPT. 13, 2013