Articles tagged with Architecture, providing techniques, guidance, and best practices for building web applications that scale to significant traffic volumes.
It’s simple, really — services call other services, and they take actions based on the responses from those services. Sometimes that action is a success, sometimes it’s a failure. But whether it is a success or a failure depends on whether the interaction meets certain requirements. In particular, the response must be predictable, understandable, and reasonable for the given situation. This matters because the service reading the response must be able to make appropriate decisions and not propagate garbage results. When a service gets a response it does not understand, it can take actions based on that garbage response, and those actions can have dangerous side effects on your service and your application.
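As a hypothetical sketch of this idea, a caller might validate a dependency’s response against basic expectations before acting on it, failing fast rather than propagating garbage. The field names and bounds below are illustrative assumptions, not part of any particular API:

```python
# Hypothetical sketch: validate a downstream service's response before
# acting on it, so unexpected ("garbage") results fail fast instead of
# propagating. The field names and ranges here are illustrative only.

def validate_inventory_response(resp: dict) -> int:
    """Return the item count from a response, or raise if the response
    is not predictable, understandable, and reasonable for this call."""
    if resp.get("status") != "ok":
        raise ValueError(f"unexpected status: {resp.get('status')!r}")
    count = resp.get("count")
    if not isinstance(count, int):
        raise ValueError("count missing or not an integer")
    if count < 0 or count > 1_000_000:  # sanity bound for this domain
        raise ValueError(f"count out of reasonable range: {count}")
    return count

# The caller acts only on validated responses:
print(validate_inventory_response({"status": "ok", "count": 12}))  # 12
```

The point is not the specific checks, but that the caller decides up front what a reasonable response looks like, and refuses to act on anything else.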
Continuing the series, I’ve added a third article to my set of articles on serverless computing published this summer in ComputerWorld. Here is a summary:
I’ve published two articles in ComputerWorld this month, both on the topic of serverless. They are:
On July 17th, I took part in a podcast jointly sponsored by Electric Cloud and DZone titled “Continuous Discussions: The DevOps Toolchain”. The podcast was a panel discussion with a variety of DevOps experts from around the industry, and I was fortunate enough to be included on the panel.
Migrating to the cloud is easy, right? What could possibly go wrong? There are at least four things I can think of. Often, when we begin a cloud migration, we come in with lofty expectations. As the migration progresses, however, we often find that moving to the cloud isn’t necessarily as easy as we would like it to be - or as easy as we were led to believe it would be. Sometimes, the cloud doesn’t meet our expectations. Promises we’ve been given may not hold true. Promises we’ve made to our stakeholders can turn out to be impossible to keep. Migrating to the cloud is not necessarily the slam dunk we expected it to be.
The concept of “serverless” is on the minds of many developers and operations teams these days. The technology is definitely hot, but is serverless really ready for prime time in production environments? To find out, we invited a pair of New Relic experts, senior director of strategic architecture Lee Atchison and developer advocate Clay Smith, back to the show to debate the issue. Listen in to the podcast on New Relic’s Modern Software Podcast, below or on iTunes: You can also read an edited transcript of the discussion on the New Relic Blog.
Having been involved in cloud computing for more than a decade, I’ve heard from many IT executives working to move key enterprise applications to the public cloud. In several cases, their teams have struggled or had only limited success in their cloud migrations. But they never gave up and they used the lessons they learned to improve their results in subsequent attempts.
Join me in learning best practices and understanding key challenges you face when moving a modern software application to the cloud.
Cloud computing is mainstream. That’s a fact. Chances are if your company isn’t already extensively using the cloud, it is planning on doing so in the very near future. But be careful. There are many mistakes that companies new to the cloud make when they begin looking into cloud adoption. Here are three of the main ones.
I wrote not that long ago (see article in Diginomica) that the future of serverless is not Lambda, but technologies such as AWS Fargate. I truly believe this. Lambda is very useful for some kinds of computing needs, but it is not suitable as a general serverless solution to replace standard programming methodologies for building services and systems.
I'd like to invite you all to join me in my new online training course with O'Reilly Media called "Building a Cloud Roadmap". It's part of the new O'Reilly Media live online training series and is delivered as part of their Safari program. The first time the course will be given is 10:00am PT on May 1, 2018. Here is the course description:
Last year I wrote an article on what serverless computing is all about. In that article, I described that while serverless computing doesn’t remove servers, it moves the management of servers to the cloud computing provider, away from your development and IT organization. It removes complexity from application management and enables easier and more significant scaling by sharing server resources across a larger set of consumers. But last year, when you said ‘serverless computing’, you were almost exclusively referring to Function-as-a-Service (FaaS) technologies such as AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions. While there are other serverless technologies – such as serverless data stores and databases – these functional computing services were usually what you meant when you were referring to ‘serverless computing’.
The #1 book on their list is “Architecting for Scale” by Lee Atchison. As the article says:
Whenever we discuss cloud adoption with enterprise companies curious about making the move, one of the first questions is: which is better, public cloud or private cloud? Cloud adopters want to know which approach is most likely to give them better performance, greater flexibility, stronger security, and lower operating costs. While these are important requirements, they miss a critical question: do you want to share your cloud with others? If you’re working towards an effective cloud adoption strategy, you’d be wise to consider whether you want a multi-tenant or single-tenant cloud offering.
If you still think multi-cloud is all about deliberately choosing several cloud providers to avoid vendor lock-in, you may be missing the point. That’s just one key takeaway from the latest episode of the New Relic Modern Software Podcast, which delves into the complex world of running—and monitoring—applications in multi-cloud environments.
“The yin-yang of dynamic apps and DevOps may come into a new balance in 2018. Container orchestration will be less important, while monitoring live deployments will become the crucial focus. This shift comes in large part due to big steps in Amazon Web Services, says Lee Atchison, senior director of strategic architecture at New Relic. IDN explores.” Read this interview with Lee Atchison on idevnews.
Does this story sound familiar? It’s the day of the big game. You invite a bunch of your friends over to watch on your brand-new 75-inch Ultra-HD Super Deluxe TV. You’ve got the beer. You’ve got your snacks laid out. Everything’s ready to go. The game is about to start. When, all of a sudden, the power goes out, the lights flicker off, and the TV goes dark. For you and your friends, it’s game over.
During the months of October and November, I will be undertaking a four week, ten city, six country, worldwide Cloud Roadshow. During this trip I will be visiting key customers and speaking at various events across the globe. I’ll be visiting Australia, New Zealand, England, Netherlands, Germany, and Switzerland.
Compare the Cloud speaks to Senior Director for New Relic, Lee Atchison at Futurestack. Lee speaks about his previous experience at AWS and the future of e-commerce platforms.
We’ve heard the buzzword, we hear the excitement, but what exactly is serverless computing and why should I care about it?
Launched in parallel two and a half years ago by Amazon Web Services (AWS), AWS Lambda and Amazon EC2 Container Service (ECS) are two distinct services that each offer a new, leaner way of accessing compute resources. Amazon ECS lets developers tap into container technology on a pay-as-you-go basis. AWS Lambda offers what is often known as ‘serverless’ computing, or function-as-a-service — the ability to access specific functions, again on pay-as-you-go terms.
Agile development and DevOps processes are in vogue now. It seems that most well-run development organizations either have these processes ingrained in their culture, or are trying to build them into their culture.
In the world of applications, services are standalone components that, when connected and working together, create an application that performs some business purpose. But services come in a wide variety of sizes, from tiny, super-specialized microservices up to services big and complete enough to form their own monolithic applications.
Technological innovation drives every business, industry and sector - mostly positively, but not always. 2016 was no exception – from the first long-haul driverless cargo delivery, to automated retail locations, to the stiffening competition among ‘smart assistants’, we’re seeing big technological leaps at a breakneck pace.
When I look back at my career over the last 30 years, it’s amazing to see how much the world has changed when it comes to building, running, and managing software. At my first job, for example, our company was trying to reduce its development cycle down to less than a year. Nowadays with cloud architectures we’re seeing development cycles of just weeks, days, or even hours. But that’s not to say that all cloud environments are dynamic and rapidly changing.
As applications grow, two things begin to happen: they become significantly more complicated (and hence brittle), and they handle significantly larger traffic volumes (which require more novel and complex mechanisms to manage). This can lead to a death spiral for an application, with users experiencing brownouts, blackouts, and other quality-of-service and availability problems. But your customers don’t care. They just want to use your application to do the job they expect it to do. If your application is down, slow, or inconsistent, customers will simply abandon it and seek out competitors that can handle their business. That’s how my new book, Architecting for Scale: High Availability for Your Growing Applications, begins.
I had the rewarding opportunity of being a guest on theCUBE on Silicon Angle TV at the AWS Summit in Santa Clara, CA.
I was interviewed recently by O’Reilly Media about my book Architecting for Scale. This interview was recorded during the O’Reilly Velocity conference in Santa Clara, CA, on June 23, 2016.
Software Engineering Daily Podcast. Listen to Jeff Meyerson talk to Lee Atchison about Lee’s new book, “Architecting for Scale”, by O’Reilly Media.
What a great day!
On Tuesday, July 5, we officially expanded our annual FutureStack user conference beyond San Francisco, kicking off our new FutureStack16 Tour “across the pond” in London.
Take a look at the article Customer Successes Take Center Stage at FutureStack London, which I wrote and which shows what a truly great day it was.
I arrived at the hotel, a typical Residence Inn, and checked in. “You have a view of Fenway from your room”. Oh, that’s cool. I went to the room, looked outside, and there she was, the green, industrial-looking complex that is Fenway Park. It didn’t look all that special from where I was. That would change. I arrived early in the day, and didn’t “officially” have any scheduled events until the next day, but I had to prepare what I was going to do the following day, and I’m sure I would be meeting up with some of the other New Relic and MLB folks for dinner later. But that still left me time to look around.
Take a look at the article Microservice Architectures: What They Are and Why You Should Use Them, written by me and published by New Relic.
It’s an increasingly common scenario: As a company grows, it finds that it needs to move away from the monolithic software architecture that powered its initial success. The alternative? A microservices approach that provides more speed and flexibility. That’s the story told by both our guests on the latest episode of The New Stack @ Scale Podcast: Tung Nguyen, vice-president of engineering at Bleacher Report, and our own Lee Atchison, principal cloud architect & advocate at New Relic. Listen to it on New Relic’s Blog.
An updated copy of my book, Architecting for Scale, published by O’Reilly Media, is available for download. This is the second version under the early release program. The full book is scheduled to be released in May.
One of the most important topics in architecting for scalable systems is availability. While there are some companies and some services where a certain amount of downtime is reasonable and expected, most businesses cannot have any downtime at all without it impacting their customers’ satisfaction, and ultimately their company’s bottom line. How do you keep your customers happily using your service and keep your company’s revenue coming in? You keep your service operational as much as possible. There is a direct and meaningful correlation between system availability and customer satisfaction.
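To make the stakes concrete, here is a short illustrative calculation (my own arithmetic, not taken from the article) showing how little downtime each common availability target actually permits per year:

```python
# Illustrative arithmetic: allowed downtime per year at common
# availability targets ("nines"). Numbers are simple percentages of a
# 365-day year, not from any specific SLA.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in [0.99, 0.999, 0.9999]:
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.2%} availability allows "
          f"{downtime_min:.1f} minutes of downtime per year")
```

At 99% availability a service can be down roughly 5,256 minutes (about 3.6 days) a year; at 99.99%, only about 53 minutes. That gap is why availability targets deserve explicit architectural attention rather than being left to chance.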
Traditionally, software companies created large, monolithic applications. The single monolith encompasses all business activities for a single application. As the company grew, so did the monolith. In this model, implementing an improved piece of business functionality requires developers to make changes within the single application, often with many other developers attempting to make changes to the same single application at the same time. Developers can easily step on each other’s toes and make conflicting changes that result in problems and outages. Development organizations get stuck in the muck, and applications slow down and become unreliable. The companies, as a result, end up losing customers and money. But the muck is not inevitable: you can build and rearchitect your application to scale with your company, not against it.