Join me at the xMatters Flow 18 conference on October 22-24 at the Radisson Blu Aqua Hotel in Chicago, where I will be giving one of the conference keynotes. I will be presenting my newly created “Keeping Modern Applications Performing – Driving Insights to Action within the Enterprise” talk, which will be making its North American debut. This comes hot on the heels of giving the talk down under in Sydney and Melbourne, Australia, the week before.
Join me in New York on Nov 13 for the 10th annual Cloud Expo at the Javits Center, where I will be giving my talk “Dynamic Infrastructure and The Cloud: Adventures in Keeping Your Application Running…at Scale”. This will be my second appearance and third presentation at this conference.
To register for the conference and my presentation, please click here.
The #1 book on their list is “Architecting for Scale” by Lee Atchison. As the article says:
Whenever we discuss cloud adoption with enterprise companies curious about making the move, one of the first questions is, which is better: public cloud or private cloud? Cloud adopters want to know which approach is most likely to give them better performance, greater flexibility, stronger security, and the lowest cost to operate. While these are important requirements, they miss a critical issue: So you want to share your cloud with others? If you’re working towards an effective cloud adoption strategy, you’d be wise to consider whether you want a multi-tenant or single-tenant cloud offering.
If you still think multi-cloud is all about deliberately choosing several cloud providers to avoid vendor lock-in, you may be missing the point. That’s just one key takeaway from the latest episode of the New Relic Modern Software Podcast, which delves into the complex world of running—and monitoring—applications in multi-cloud environments.
Does this story sound familiar? It’s the day of the big game. You invite a bunch of your friends over to watch on your brand-new 75-inch Ultra-HD Super Deluxe TV. You’ve got the beer. You’ve got your snacks laid out. Everything’s ready to go. The game is about to start. When, all of a sudden, the power goes out, the lights flicker off, and the TV goes dark. For you and your friends, it’s game over.
This guide will help you determine whether a multi-cloud environment is right for your app and offer some advice on choosing the right cloud model for you.
Major bug? Human error? Neither. The AWS S3 outage last week was more like a minor bug in an otherwise solid availability plan executed by AWS. Read my article at The New Stack.
As applications grow, two things begin to happen: they become significantly more complicated (and hence brittle), and they handle significantly larger traffic volumes (which require more novel and complex mechanisms to manage). This can lead to a death spiral for an application, with users experiencing brownouts, blackouts, and other quality-of-service and availability problems. “But your customers don’t care. They just want to use your application to do the job they expect it to do. If your application is down, slow, or inconsistent, customers will simply abandon it and seek out competitors that can handle their business.” That’s how my new book, Architecting for Scale: High Availability for Your Growing Applications, begins.
I had the rewarding opportunity of being a guest on theCUBE on Silicon Angle TV at the AWS Summit in Santa Clara, CA.
I was interviewed recently by O’Reilly Media about my book Architecting for Scale. This interview was recorded during the O’Reilly Velocity conference in Santa Clara, CA, on June 23, 2016.
Software Engineering Daily Podcast. Listen to Jeff Meyerson talk to Lee Atchison about Lee’s new book, “Architecting for Scale”, published by O’Reilly Media.
What a great day!
On Tuesday, July 5, we officially expanded our annual FutureStack user conference beyond San Francisco, kicking off our new FutureStack16 Tour “across the pond” in London.
Take a look at the article I wrote, Customer Successes Take Center Stage at FutureStack London, which shows what a truly great day it was.
As enterprises increasingly move to the cloud, they are discovering a wide variety of routes to get there. In a recent series of blog posts, I’ve addressed several of these routes.
For many companies, the goal isn’t to share their applications across both their own data center and the public cloud. Rather, they want to move some of their applications lock, stock, and barrel to the cloud. If some of the company’s apps live in the cloud while others remain in the on-premises data center, then, intentionally or not, these companies also have hybrid clouds.
As hybrid clouds become more and more common in enterprise IT settings, a number of different use cases and journeys are becoming apparent. In part 1 of this series on how enterprise IT is using the hybrid cloud, we looked at how the hybrid cloud can be a faster and more economical way to add new data center or server capacity—or even an entire new or better data center.
The term “hybrid cloud” has found its way into common usage among IT operations folk, but not everyone agrees on exactly what it means. Basically, a hybrid cloud refers to any situation in which you have an application running partially in your company’s data center, and partially in one or more public clouds, such as AWS, Microsoft Azure, or Google Cloud Platform.
As our applications grow, keeping them operational can be challenging. High growth means more data, more computation, and more opportunities for problems. The cloud offers us the ability to improve our scalability, while maintaining and improving our availability. During this talk, we’ll show you the “keep two mistakes high” principle and how to use the cloud to prevent availability issues, keeping our applications healthy and growing while keeping costs in line.
We all know the value of distributing an application across multiple data centers. The same philosophy applies to the cloud. As we put our applications into the cloud we need to watch where in the cloud they are located. How geographically and network topologically distributed our applications are is just as important as with normal data centers. While Amazon AWS won’t tell you specifically where your application is running, they do give you enough information to make diversification decisions. Interpreting and understanding this information, and using it to your advantage, requires an understanding of how AWS is architected. In part 1 of this article, we talked about the AWS architecture of regions and availability zones. In part 2, we went into more detail about how availability zones are structured, and how we can utilize this information. In this final part, we discuss the availability zone-to-data-center mapping, why it is important, and how to use all this information to make sure you have as much diversification as possible for your application.
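As a quick way to sanity-check that diversification, here is a small sketch of my own (not code from the article; it assumes Python with the boto3 library and AWS credentials already configured) that counts your running EC2 instances per availability zone. A heavy skew toward a single zone is a sign you are less spread out than you could be.

```python
# Illustrative sketch only: count running EC2 instances per Availability Zone
# to see how evenly an application is spread. Assumes boto3 is installed and
# AWS credentials are configured.
from collections import Counter

import boto3


def instances_per_zone(region_name="us-east-1"):
    """Return a Counter of running instances keyed by Availability Zone."""
    ec2 = boto3.client("ec2", region_name=region_name)
    counts = Counter()
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                counts[instance["Placement"]["AvailabilityZone"]] += 1
    return counts


if __name__ == "__main__":
    for zone, count in sorted(instances_per_zone().items()):
        print(f"{zone}: {count} running instance(s)")
```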
We all know the value of distributing an application across multiple data centers. The same philosophy applies to the cloud. As we put our applications into the cloud we need to watch where in the cloud they are located. How geographically and network topologically distributed our applications are is just as important as with normal data centers. While Amazon AWS won’t tell you specifically where your application is running, they do give you enough information to make diversification decisions. Interpreting and understanding this information, and using it to your advantage, requires an understanding of how AWS is architected. In part 1 of this article, we talked about the AWS Architecture of regions and availability zones. In part 2, we will go into more detail about how availability zones are structured, and how we can utilize this information.
We all know the value of distributing an application across multiple data centers. The same philosophy applies to the cloud. As we put our applications into the cloud we need to watch where in the cloud they are located. How geographically and network topologically distributed our applications are is just as important as with normal data centers. However, the cloud makes knowing where your application is located harder. The cloud also makes it harder to proactively make your application more distributed. Some cloud providers don’t even expose enough information to let you know where, geographically, your application is running. Luckily, larger providers like AWS are better. No, AWS won't tell you specifically where, geographically, your application is running, since they do not disclose their actual data center locations (I worked at AWS, and I have no idea, specifically, where the data centers are located). While they won’t tell you specifically where your application is running, they do give you enough information to make diversification decisions. Interpreting and understanding this information, and using it to your advantage, requires an understanding of how AWS is architected.
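To make this concrete, here is a minimal sketch of my own (not code from the article; it assumes Python with the boto3 library and configured AWS credentials) that pulls the availability zone information AWS does expose for a region, the kind of data you can feed into a diversification decision:

```python
# Minimal illustrative sketch: list the Availability Zones this account can
# see in a region. Assumes boto3 is installed and AWS credentials are set up.
import boto3


def list_availability_zones(region_name="us-east-1"):
    """Return the Availability Zones visible to this account in one region."""
    ec2 = boto3.client("ec2", region_name=region_name)
    response = ec2.describe_availability_zones()
    zones = []
    for zone in response["AvailabilityZones"]:
        zones.append({
            # Zone names like "us-east-1a" are mapped differently per account.
            "name": zone["ZoneName"],
            "state": zone["State"],  # "available" if the zone is usable
            # Newer API versions also return a zone ID that is consistent
            # across accounts; use .get() in case it is absent.
            "zone_id": zone.get("ZoneId"),
        })
    return zones


if __name__ == "__main__":
    for zone in list_availability_zones():
        print(zone)
```

Because zone names are mapped differently for each AWS account, the names alone tell you how many zones you are spread across, not which physical facilities they correspond to; that mapping question is exactly what the rest of this series digs into.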
Join me at Cloud Expo 2016 at the Javits Center in New York, NY on June 7-9, 2016, where I will be speaking on maintaining high availability in the cloud.
An updated copy of my book, Architecting for Scale, published by O’Reilly Media, is available for download. This is the second version under the early release program. The full book is scheduled to be released in May.
One of the most important topics in architecting for scalable systems is availability. While there are some companies and some services where a certain amount of downtime is reasonable and expected, most businesses cannot have any downtime at all without it impacting their customers’ satisfaction, and ultimately their company’s bottom line. How do you keep your customers happily using your service and keep your company’s revenue coming in? You keep your service operational as much as possible. There is a direct and meaningful correlation between system availability and customer satisfaction.