The 2018 Pivot for Dynamic Apps, DevOps: Live Deployment Monitoring Takes Center Stage Away From Container Orchestration

"The yin-yang of dynamic apps and DevOps may come into a new balance in 2018. Container orchestration will be less important, while monitoring live deployments will become the crucial focus. This shift comes in large part due to big steps in Amazon Web Services, says Lee Atchison, senior director of strategic architecture at New Relic. IDN explores. "

Read this interview with Lee Atchison on idevnews.


The Dynamic Cloud: Availability and Scalability for Your Biggest Days

Does this story sound familiar?

It’s the day of the big game. You invite a bunch of your friends over to watch on your brand-new 75-inch Ultra-HD Super Deluxe TV. You’ve got the beer. You’ve got your snacks laid out. Everything’s ready to go. The game is about to start.

When, all of a sudden, the power goes out, the lights flicker off, and the TV goes dark. For you and your friends, it’s game over. Well, not for your friends—they just head over to somebody else’s house to watch. OK for them, not so much for you.

This was supposed to be your big day, the day that you wanted to show off and have fun with your friends, and it didn’t work. Obviously, you’re upset, so you call up the power company and ask, “What the heck happened?”

Not surprisingly, you get little sympathy. After all, they say, you had power most of the time!

Read More

The 6 Levels of Cloud Maturity

For many enterprises, finding success in the cloud is still a daunting challenge. Too often, organizations set overly high expectations for the benefits while underestimating the amount of work required. An unfortunate result can be a vicious cycle of blame, finger pointing, and grasping for something—anything—that could be considered a victory.


Read More

World Tour 2017 - Stop #8: Zürich, Switzerland

The last public stop on the world tour was Zürich, Switzerland.

At this event, we had the opportunity to listen to Pasi Katajainen, CTO Germany of Nordcloud, one of our partners. He continued on from my presentation on migration maturity, and added his take on how cloud migration is as much a cultural transformation as a technical transformation. He further talked about issues around security and system architecture patterns to enable a secure cloud infrastructure.

The crowd was small and quiet during the presentation, but there were good questions and follow-up afterwards. In discussions after the event, it seemed that most of the customers and prospects present could see where they were in the cloud maturity process I presented. This process seems to be hitting home in companies, especially those enterprises that are not as far along on their cloud journey. The newer you are in your cloud journey, the more useful it can be to understand the maturity process ahead of you. It can save you time, aggravation, and failure in the future.

Tomorrow is a private customer event in Stuttgart, then Frankfurt and home. This is the end of the world tour. It’s been a long trip, but it’s been very valuable in seeing how customers in different cultures and different environments have similar problems but potentially very different takes on them. In future articles, I hope to discuss some of these cultural patterns, how companies can avoid the problems they create, and how companies can proactively leverage them to take better advantage of the dynamic cloud and dynamic infrastructure to build highly scalable, highly available applications.


World Tour 2017 - Stop #6: Düsseldorf, Germany (via Frankfurt, Germany)

After a quick stop for an important customer visit in Frankfurt, I headed to Düsseldorf for the next leg in our cloud roadshow.

Here we had a small crowd, a half dozen people or so. Honestly, I was worried the session would be dry and flat with such a small crowd. On the contrary, the audience was very interactive and we had good discussions about the cloud and cloud migration.


Among these customers, the public cloud was not as much of an immediate concern. Regulatory and other issues have kept most of the folks we talked to in their own data centers. Concerns about security and data sovereignty are important considerations for this group of customers. Keeping their company and customer data in country…or at least in the EU…was of serious concern for them. Resiliency and the ability to fail over to new data centers during outages was also a frequent discussion point. Doing that while maintaining the ability to stay in country was a nagging concern. Data sovereignty isn’t solved if contingency plans involve taking data out of country.

All-in-all, most of these folks were much earlier in their cloud migration than the typical customer I talk to. This helped remind me that the ability to use the public cloud is still not a given in every industry and in every culture. Yes, I knew this before, but this event helped it sink in.

There was great general interest in the cloud and how dynamic infrastructures can help them in the future. Moving “faster” was not necessarily a compelling goal, but consistent progress was important. DevOps processes are one set of tools that can help formalize development practices in a universal way.

Overall, a great, albeit small, event with well-informed and engaged attendees.

World Tour 2017 - Stop #4: London, UK

Wednesday was stop #4 on my world tour in London, UK. I gave one of my dynamic cloud presentations to a room full of customers. Our friends at Sumo Logic were also there to talk about our integration with them, and customer Direct Line Group gave us a case study review of their use of New Relic.

Read More

World Tour 2017 - Stop #3: Auckland, New Zealand

I love Auckland. I arrived in Auckland Friday night after finishing my meetings in Melbourne, Australia earlier in the day. I’ve spent the whole weekend here island hopping, wine tasting, and picture taking. New Zealand is as beautiful as you imagine it to be.

On Monday was our executive breakfast. We had around 30 people from all over New Zealand attend. I presented a longer version of my FutureStack talk on enabling cloud migrations and dynamic infrastructures. Afterwards, we had a more intimate meeting with key individuals from several of those attending the morning event. This was a Q&A session and several good questions were asked. I was pleasantly surprised to see how open execs from different companies could be with each other in such a casual setting. It really demonstrated the open and friendly nature that is New Zealand.

I also saw demonstrated on multiple occasions how, even though NZ is a small and relatively isolated nation, the people you know and the connections you make are as important to NZ culture as the services you provide. I learned the real meaning of the NZ “two degrees of separation”…chances are high that someone you know, knows almost anyone else in NZ. It really is a small world here. Small, but friendly and very productive.

After that, I had private meetings with a few of our key customers in a few different industries.

It’s clear to me that the cloud is a critical component for businesses in the middle of a digital transformation in New Zealand. Many of their struggles are the same as other companies across the world. But they also have some unique requirements. New Zealand is a small country that is physically isolated from the rest of the world. This gives tremendous opportunity for local businesses to fill the gap of larger enterprises that cover much of the rest of the globe. For these companies, fast and nimble execution is even more critical than it is for their more global counterparts, and they must be fast and nimble with reduced resources and reduced customer opportunity. There is no room for error and no room for waste.

So far on my world tour, New Zealand is my greatest surprise. It’s a beautiful country with warm, friendly, and inviting people. But people that take their business seriously.

This was my first trip to NZ…pronounced “InZed” here…but I am absolutely certain it won’t be my last.


Join Me on World Cloud Evangelism Tour

During the months of October and November, I will be undertaking a four week, ten city, six country, worldwide Cloud Roadshow. During this trip I will be visiting key customers and speaking at various events across the globe. I'll be visiting Australia, New Zealand, England, Netherlands, Germany, and Switzerland. Here are the cities I will be visiting and the dates I will be there:

  • Sydney, Australia - Oct 23 to Oct 25. This will include presenting at New Relic's FutureStack/Sydney show on October 24th.
  • Melbourne, Australia - Oct 26 to Oct 27.
  • Auckland, New Zealand - Oct 30.
  • London, England - Nov 7 to Nov 8.
  • Amsterdam, Netherlands - Nov 9 to Nov 10.
  • Dusseldorf, Germany - Nov 13.
  • Munich, Germany - Nov 14.
  • Zurich, Switzerland - Nov 15.
  • Stuttgart, Germany - Nov 16.
  • Frankfurt, Germany - Nov 17.

As details are finalized, more information will be available on my website at


Everything you ever wanted to know about serverless computing but were afraid to ask

We’ve heard the buzzword and we’ve heard the excitement, but what exactly is serverless computing, and why should you care about it?

Serverless computing means running an application in the cloud in such a way that the application owner does not have to manage the underlying servers that run the application. The servers are still there, but they are managed completely, and invisibly, by the cloud service provider. From the standpoint of the application owner, the servers do not exist, hence the term serverless.

While it is also commonly referred to as ‘Function-as-a-Service’, a better name for serverless computing in my opinion would be ‘Compute-as-a-Service’ (CaaS – if it wasn’t taken already) because it offers the ability to purchase compute in small increments, not functions in small increments.

Understanding serverless computing is critical as it is rapidly becoming a component of enterprise digital strategies. In fact, at New Relic we recently surveyed more than 500 customers about their adoption of dynamic cloud technologies and found that 64% of respondents had deployed serverless technologies in some form of production or pilot, with another 13% investigating with an eye toward a pilot.

What’s servers got to do with it?

One of the burdens that most IT organizations within fast-growing digital enterprises must deal with is deciding how many servers to allocate for a given application in the cloud. They must allocate enough servers for the application to run effectively for however many users may try to use it at once. If they allocate too many servers, they waste money and resources. If they allocate too few servers, the application may fail by not functioning properly or by crashing completely for their users.

Additionally, if an application sees a sudden spike in traffic for some unforeseen reason – such as a news site suddenly getting a surge in visitors because of a breaking story – the additional load can overwhelm the existing servers and make the application unresponsive. We’ve all experienced this as digital consumers. We go to a website that is currently very popular, and the website is slow to respond or doesn’t respond at all. The process of making sure enough servers are available to the application at any time is called ‘application scaling’.

If the application is run on a serverless cloud, however, IT does not have to worry about how many servers are needed to run the application. The cloud service provider will make sure that sufficient servers are always available to handle the application’s needs. As the needs of the application change, the number of allocated compute resources can be adjusted automatically.

The cloud service provider typically does this by maintaining a shared pool of servers across all of its customers, allocating those computing resources to a particular customer’s applications only when they are needed. When the application no longer needs the server capacity, the computing resources are pulled back into the shared pool and made available for other customers’ use.
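The shared-pool model above can be sketched in a few lines. This is a toy illustration, not a real cloud provider API; the class and unit names are invented for the example.

```python
# Toy simulation of the shared-pool model: the provider holds one pool of
# compute capacity, lends slices to whichever customer's application needs
# them, and reclaims the slices when the need subsides.

class SharedComputePool:
    def __init__(self, total_units: int):
        self.free_units = total_units
        self.allocations = {}  # customer -> units currently borrowed

    def allocate(self, customer: str, units: int) -> bool:
        """Lend capacity to a customer if the pool can cover it."""
        if units > self.free_units:
            return False  # pool exhausted; a real provider would add servers
        self.free_units -= units
        self.allocations[customer] = self.allocations.get(customer, 0) + units
        return True

    def release(self, customer: str) -> None:
        """The customer's traffic subsided: return capacity to the pool."""
        self.free_units += self.allocations.pop(customer, 0)

pool = SharedComputePool(total_units=100)
pool.allocate("news-site", 60)   # breaking-news traffic spike
pool.allocate("shop", 30)
pool.release("news-site")        # spike over; capacity returns to the pool
assert pool.free_units == 70
```

The key point the sketch captures is that no customer owns specific servers; capacity simply flows to whoever needs it at that moment.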

Server shuffle – bearing the cost of servers

For IT organizations, there are two main advantages of this approach. First, the application can respond to sudden spikes in traffic automatically without the IT team involved in the scaling of the application. This is especially useful for applications that often see sudden and unforeseeable traffic increases, such as a news site covering breaking news.

Second, IT pays only for the compute resources actually consumed; there is no charge for idle servers sitting unused during periods of low traffic. When the application is busy, they pay more for the needed compute resources. When the application is less busy, they pay less.

There are benefits for cloud service providers too. Serverless computing allows them to manage computing resources across a larger customer set, which averages out traffic more and makes it easier for them to predict demand. This is because the larger the number of customers, the more uniform the average traffic needs are, and the better they can optimize usage. Additionally, by ‘hiding’ the servers and implementation of the service from the consumer, they can optimize the implementation based on their predicted needs and requirements.

From a financial standpoint, the cloud service provider’s ability to predict demand accurately is critical to supporting its customers while maintaining very thin financial margins. Additionally, due to the extra flexibility provided to their customers, cloud service providers can usually charge a premium price for these compute resources.

When to go serverless

Like any tool, serverless computing is only effective when you know when it is useful and when it is not. Understanding when and how to use it involves three main considerations.

Cost & traffic

Serverless computing works best when a company’s computing needs are quite variable, with very high highs and very low lows in traffic volume. In that case, companies pay only for the resources they actually consume: more at times of higher utilization, less at times of lower utilization. For very spiky applications, this saves money in the long term. However, if an application’s use of computing is much more uniform, the advantages of serverless are less dramatic, and the premium price for the resources can make serverless computing significantly more expensive than managing your own servers. So, serverless computing is useful mainly for applications with variable traffic profiles.

Setup & operation

Serverless computing is often seen as harder to set up and manage than traditional server-based computing. This is mostly because the existing tools that IT professionals have relied on for years are optimized for deploying applications to server-based environments. Newer tools are needed to make better use of serverless computing and to make large serverless applications easier to manage. Those tools will eventually be created. Today, however, they are mostly immature or non-existent.

Additionally, the diagnostic tools needed to solve problems in serverless applications are fundamentally different from those used for standard server-based applications. This means that new tools and capabilities must be developed to keep serverless applications running optimally. While there are tools on the market that currently support serverless computing, they must continue to evolve to meet the needs of these new compute paradigms before they can provide the same level of support as they do for server-based applications.

Standardization & portability

There are also no standards today for how application owners interface with serverless computing. Each cloud service provider offers a different and unique method for delivering serverless computing: AWS Lambda works very differently from Microsoft Azure Functions, which works very differently from Google Cloud Functions. This means that an application owner who wants to take advantage of serverless computing can be locked into a single cloud service provider to a greater degree than with more standardized traditional server-based computing.
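To see the lock-in concretely, compare the Python entry points two providers expect. The signatures below reflect the general shape of AWS Lambda handlers and Google Cloud Functions HTTP handlers; the bodies are illustrative, and framework imports are omitted so the sketch stays self-contained.

```python
# The same trivial "greet" logic written against two providers'
# function signatures.

# AWS Lambda: the handler receives a plain event dict plus a context object.
def lambda_handler(event, context):
    return {"statusCode": 200, "body": f"Hello, {event['name']}"}

# Google Cloud Functions (HTTP): the handler receives a Flask-style
# request object and returns the response body directly.
def gcf_handler(request):
    name = request.args.get("name")
    return f"Hello, {name}"

# Neither handler can be deployed to the other platform unchanged;
# this interface divergence is the lock-in described above.
assert lambda_handler({"name": "Ada"}, None)["body"] == "Hello, Ada"
```

Even for a one-line function, the event shape, return convention, and deployment packaging all differ per provider, so porting means rewriting the integration layer.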

Different flavors of serverless services

When thinking about serverless, it is easy to focus on serverless computing, such as the capabilities provided by AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions. However, there are many other cloud-based services that offer similar advantages, meaning they allow the application owner to scale use of the service without having to worry about allocating reserved servers for the service to use.

Classic examples of this include serverless databases such as Amazon DynamoDB and Google Cloud Datastore. But there are other services, such as object stores (Amazon S3), queuing and notification services (SQS, SNS), and email distribution services that offer similar scalable capabilities without the need for allocating and managing servers. Using these services involves the same sets of considerations as does serverless computing.

The bottom line

Serverless computing offers a valuable toolset digital enterprises can use in building their applications, especially applications with huge variability in traffic. However, like any tool, it has specific uses, and it typically does not make sense to use serverless for all of an IT organization’s computing needs. Traditional server-based computing still has its advantages and uses, and that will likely remain the case for some time to come.

Used properly, serverless computing can help you build your application to scale to your greatest needs without breaking the bank. But it should be used in conjunction with, not as a replacement for, other tools and computing capabilities to form a complete application solution.

Article, written by me, originally appeared in Diginomica, Aug 2017.

AWS Lambda v Amazon ECS — two paths to one goal, which is best?

Launched in parallel two and a half years ago by Amazon Web Services (AWS), AWS Lambda and Amazon EC2 Container Service (ECS) are two distinct services that each offer a new, leaner way of accessing compute resources. Amazon ECS lets developers tap into container technology on a pay-as-you-go basis. AWS Lambda offers what is often known as ‘serverless’ computing, or function-as-a-service — the ability to access specific functions, again on pay-as-you-go terms.

On the surface, they both serve the same goal — provide a compute environment for applications, services and microservices that allows developers to focus on the application, not on the infrastructure.

But why are there two distinct services? What’s the difference between them? And, most importantly, when would I use one versus the other?

Great questions. Let’s take a look at each service … But first, for clarity, a quick explanation. To avoid confusion of the term ‘service’ in this article, I will refer to applications, whether they are monolithic or elementally broken into services, as application services or simply applications. I will refer to the AWS services such as AWS Lambda and Amazon ECS generically simply as cloud services or AWS services. OK, now that’s clear, let’s move on.

What is AWS Lambda?

AWS Lambda allows custom code to execute in response to triggers caused by activity from other AWS resources, services, and web apps. AWS Lambda provides this capability by allowing specially constructed code segments (called Lambda functions) to execute in an environment where the infrastructure becomes totally invisible and irrelevant.

Scaling and server management are handled transparently by AWS. The user isn’t even aware of, and has no visibility into, how the servers are organized to execute the functions — this is all hidden from view by AWS.

The downside of this approach is that the code segments (functions) that run in AWS Lambda are quite limited in what they can do — they must be relatively small and simple. These requirements are enforced not only by the execution environment provided, but by the pricing model put in place for the cloud service.
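As a concrete illustration of the trigger model described above, here is a minimal sketch of a Lambda function handling an S3 “object created” event. The function logic is illustrative, and the event is abbreviated from the real S3 notification format.

```python
# Minimal sketch of a Lambda function reacting to a trigger: an S3
# "object created" notification. Lambda delivers one or more records
# per invocation describing what happened.

def handler(event, context):
    keys = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        keys.append(f"{bucket}/{key}")
    # In a real function this is where the small, quick unit of work
    # happens (resize an image, index a document, etc.).
    return {"processed": keys}

# Invoked locally with a hand-built event for illustration:
event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "report.csv"}}}]}
assert handler(event, None) == {"processed": ["uploads/report.csv"]}
```

Note how the function is a small, self-contained unit of work: exactly the size and simplicity the Lambda execution environment and pricing model push you toward.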

What is Amazon ECS?

Amazon ECS allows running Docker containers in a standardized, AWS-optimized environment. The containers can contain any code or application module written in any language.

Rather than being handled by AWS, scaling and server management have to be set up by the user. The containers themselves run on standard Amazon EC2 instances configured with special Amazon ECS software. The underlying Amazon EC2 instances within an individual cluster can be of any size or quantity, depending on your application’s scaling needs. The Amazon ECS software manages and configures the underlying cluster, determining where, how, and how many copies of each container execute. The Amazon EC2 instances in the cluster must be sized and scaled by the user to handle the quantity and execution demands of the containers.
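The fleet-sizing responsibility described above comes down to arithmetic the user must own: given what each container needs and what each instance provides, how many instances must the cluster contain? A minimal sketch, with all resource numbers invented for the example:

```python
import math

# Illustrative fleet-sizing arithmetic for an ECS-style cluster. With ECS,
# the user (not AWS) decides how many EC2 instances to run; this sketch
# computes the minimum fleet that can host a given number of containers.

def instances_needed(containers: int,
                     cpu_per_container: int, mem_per_container: int,
                     instance_cpu: int, instance_mem: int) -> int:
    # Containers per instance is limited by whichever resource runs out
    # first (CPU units or memory).
    per_instance = min(instance_cpu // cpu_per_container,
                       instance_mem // mem_per_container)
    return math.ceil(containers / per_instance)

# 40 containers, each needing 256 CPU units and 512 MiB, on instances
# offering 1024 CPU units and 4096 MiB: CPU is the bottleneck, so each
# instance fits 4 containers and the cluster needs 10 instances.
assert instances_needed(40, 256, 512, 1024, 4096) == 10
```

With Lambda this calculation simply never happens; with ECS, it must be redone (or automated by the user) every time traffic or container footprints change.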

AWS Lambda v Amazon ECS

AWS Lambda and Amazon ECS are similar in many regards. The code that the two AWS services execute does not need any visibility into the underlying infrastructure. The infrastructure decisions you must make in operating the service can be made independently of application coding decisions. If constructed properly, the code on either AWS service can deliver significant and valuable scaling capabilities.

However, the two services differ in some very substantial ways. AWS Lambda does not provide any visibility into the server infrastructure environment used to run the application code, while Amazon ECS actively exposes the servers used in the cluster as standard Amazon EC2 instances and allows (or more correctly requires) the user to size and scale their fleet themselves.

AWS Lambda functions must be written in one of a handful of supported languages and are restricted in the type of actions they can perform. Amazon ECS, on the other hand, can run any container using any code that is capable of running in a container (which is almost any application that runs on a typical Linux operating system).

AWS Lambda is optimized for simple, quick functions. Larger and more complex functions add execution complexity (and significant execution cost) for the user. Amazon ECS, on the other hand, can run containers of any reasonable size and complexity.

With AWS Lambda, all scaling and sizing decisions are made automatically and continuously by AWS. This allows a complete hands-off solution where the user can ignore most scaling issues. Amazon ECS, on the other hand, requires the user to knowingly understand the required server fleet sizing and make active decisions to resize the fleet as necessary as scaling needs change.

Which AWS service should I use?

Either one of these services can be used to run applications or application services. So, which AWS service should you use for a particular purpose? The answer depends on the needs of the application. If you want to run very small actions that are relatively simple in complexity, AWS Lambda provides a compelling hands-off path to a highly scalable application. If your application or application services have any real complexity to them, Lambda may be too restrictive and too expensive to operate, and Amazon ECS may provide better options for you.

Of course, it is perfectly reasonable for different application services within a single application to separately use either of these two AWS services. As such, some of your application may run in AWS Lambda, and other parts of your application run in Amazon ECS.

I personally would like to see another option. I believe AWS should support a hybrid service. That is, a service with the infrastructure opacity and ease of management that Lambda provides, but which allows the code that is executed to be written and executed within a container environment. This will allow the best of each offering: versatility of container-based applications with the simplified infrastructure management available from AWS Lambda. This would be the best of both worlds, and I hope AWS is considering such a service.

Originally published at on June 29, 2017.

The London Sunday Times: Raconteur: Serverless computing

Serverless computing is one of the hottest trends in tech, however it’s also one of the most misunderstood. From the article:

Lee Atchison, senior director at analytics platform New Relic, warns: “Each service provides a different and unique method for offering serverless computing. This means that an IT professional who wants to take advantage of serverless computing will find they are locked into a single cloud service provider to a greater degree than if they use more standardised traditional server-based computing.”

Read More