Articles providing techniques, guidance, and best practices for how to build web applications that scale to significant traffic volumes.
Serverless computing is one of the hottest trends in tech, but it’s also one of the most misunderstood. From the article:
Major bug? Human error? Neither. The AWS S3 outage last week was more like a minor bug in an otherwise solid availability plan executed by AWS. Read my article at The New Stack.
Agile development and DevOps processes are in vogue now. It seems that most well-run development organizations either have these processes ingrained in their culture, or are trying to build them into their culture.
In the world of applications, services are standalone components that, when connected and working together, create an application that performs some business purpose. But services come in a wide variety of sizes, from tiny, super-specialized microservices up to services big and complete enough to form their own monolithic applications.
Technological innovation drives every business, industry, and sector – mostly positively, but not always. 2016 was no exception: from the first long-haul driverless cargo delivery, to automated retail locations, to the stiffening competition among ‘smart assistants’, we’re seeing big technological leaps at a breakneck pace.
When I look back at my career over the last 30 years, it’s amazing to see how much the world has changed when it comes to building, running, and managing software. At my first job, for example, our company was trying to reduce its development cycle down to less than a year. Nowadays with cloud architectures we’re seeing development cycles of just weeks, days, or even hours. But that’s not to say that all cloud environments are dynamic and rapidly changing.
As applications grow, two things begin to happen: they become significantly more complicated (and hence brittle), and they handle significantly larger traffic volumes (which require more novel and complex mechanisms to manage). This can lead to a death spiral for an application, with users experiencing brownouts, blackouts, and other quality-of-service and availability problems. But your customers don’t care. They just want to use your application to do the job they expect it to do. If your application is down, slow, or inconsistent, customers will simply abandon it and seek out competitors that can handle their business. That’s how my new book, Architecting for Scale: High Availability for Your Growing Applications, begins.
I had the rewarding opportunity of being a guest on theCUBE on Silicon Angle TV at the AWS Summit in Santa Clara, CA.
I was interviewed recently by O’Reilly Media about my book Architecting for Scale. This interview was recorded during the O’Reilly Velocity conference in Santa Clara, CA, on June 23, 2016.
Software Engineering Daily Podcast. Listen to Jeff Meyerson talk to Lee Atchison about Lee’s new book, “Architecting for Scale”, by O’Reilly Media.
What a great day!
On Tuesday, July 5, we officially expanded our annual FutureStack user conference beyond San Francisco, kicking off our new FutureStack16 Tour “across the pond” in London.
Take a look at the article I wrote, Customer Successes Take Center Stage at FutureStack London, which shows what a truly great day it was.
I arrived at the hotel, a typical Residence Inn, and checked in. “You have a view of Fenway from your room.” Oh, that’s cool. I went to the room, looked outside, and there she was: the green, industrial-looking complex that is Fenway Park. It didn’t look all that special from where I was. That would change. I arrived early in the day and didn’t “officially” have any scheduled events until the next day, but I had to prepare for what I was going to do the following day, and I was sure I would be meeting up with some of the other New Relic and MLB folks for dinner later. But that still left me time to look around.
As enterprises increasingly move to the cloud, they are discovering a wide variety of routes to get there. In a recent series of blog posts, I’ve addressed
For many companies, the goal isn’t to share their applications across both their own data center and the public cloud. Rather, they want to move some of their applications lock, stock, and barrel to the cloud. If some of the company’s apps live in the cloud while others remain in the on-premises data center, then intentionally or not, these companies also have hybrid clouds.
As hybrid clouds become more and more common in enterprise IT settings, a number of different use cases and journeys are beginning to become apparent. In part 1 of this series on how enterprise IT is using the hybrid cloud, we looked at how the hybrid cloud can be a faster and more economical way to add new data center or server capacity—or even an entire new or better data center.
The term “hybrid cloud” has found its way into common usage among IT operations folk, but not everyone agrees on exactly what it means. Basically, a hybrid cloud refers to any situation in which you have an application running partially in your company’s data center, and partially in one or more public clouds, such as AWS, Microsoft Azure, or Google Cloud Platform.
How has the cloud changed how we think about and build applications? The changes are foundational: the cloud has forced a complete rethink of how we architect our applications.
As our applications grow, keeping them operational can be challenging. High growth means more data, more computation, and more opportunities for problems. The cloud offers us the ability to improve our scalability while maintaining and improving our availability. During this talk, we’ll show you the “keep two mistakes high” principle and how to use the cloud to prevent availability issues, keeping our applications healthy and growing while keeping costs in line.
We all know the value of distributing an application across multiple data centers. The same philosophy applies to the cloud. As we put our applications into the cloud, we need to watch where in the cloud they are located. How geographically and network-topologically distributed our applications are is just as important as with traditional data centers. While Amazon AWS won’t tell you specifically where your application is running, it does give you enough information to make diversification decisions. Interpreting and understanding this information, and using it to your advantage, requires an understanding of how AWS is architected. In part 1 of this article, we talked about the AWS architecture of regions and availability zones. In part 2, we went into more detail about how availability zones are structured and how we can utilize this information. In this final part, we discuss the availability zone to data center mapping, why it is important, and how to use all this information to make sure you have the highest possible diversification for your application.
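As a minimal sketch of the diversification idea above: AWS randomizes availability zone names (like “us-east-1a”) per account, but zone IDs (like “use1-az1”) identify the same physical location in every account, which is why they are the right key for checking how spread out a fleet really is. The helper and the sample inventory below are hypothetical illustrations, not part of the article; in practice you would assemble the instance-to-zone mapping from the EC2 API (e.g. boto3’s describe_instances and describe_availability_zones).

```python
# Sketch: measuring availability-zone diversification by zone ID.
# Zone IDs (e.g. "use1-az1") are stable across AWS accounts, unlike
# zone names (e.g. "us-east-1a"), which are shuffled per account.
from collections import Counter

def diversification_report(instances):
    """Count instances per zone ID.

    A heavily skewed count means poor diversification: losing one
    physical zone would take out most of the fleet.
    `instances` is a list of dicts with a 'ZoneId' key (hypothetical
    shape, as you might assemble it from the EC2 API).
    """
    return dict(Counter(inst["ZoneId"] for inst in instances))

# Hypothetical fleet inventory for illustration:
fleet = [
    {"InstanceId": "i-1", "ZoneId": "use1-az1"},
    {"InstanceId": "i-2", "ZoneId": "use1-az1"},
    {"InstanceId": "i-3", "ZoneId": "use1-az2"},
]
print(diversification_report(fleet))  # {'use1-az1': 2, 'use1-az2': 1}
```

Here two of three instances share a physical zone, so this fleet would be a candidate for rebalancing.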