The following is a list of major presentations that Lee has given at various events.
Bringing down an application is easy. All it takes is the failure of a single service, and the entire set of services that make up the application can come crashing down like a house of cards. Just one minor error in a non-critical service can be disastrous for the entire application. There are, of course, many ways to prevent dependent services from failing. However, adding extra resiliency to non-critical services also adds complexity and cost, and sometimes it is not needed.
Application availability is best served by focusing your energies and processes on your most critical systems while working to minimize the impact of non-critical systems. Service Tiers are a way to accomplish this.
In this talk, we will learn what service tiers are and how they can be applied to service-based applications. Then we will show how to use service tiers to keep your application available and functioning as designed, walking through example service definitions to illustrate the approach.
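The idea behind service tiers can be sketched in a few lines. This is a minimal illustration, not a specific product's schema; the tier numbers, service names, and the `can_degrade` rule are all hypothetical assumptions.

```python
# Hypothetical sketch: tagging services with a criticality tier so that
# non-critical services can be shed under load to protect critical ones.
from dataclasses import dataclass


@dataclass
class Service:
    name: str
    tier: int  # 1 = most critical, 3 = least critical (illustrative scale)


SERVICES = [
    Service("checkout", tier=1),         # revenue-critical: must stay up
    Service("search", tier=2),           # degraded experience if down
    Service("recommendations", tier=3),  # cosmetic: safe to shed first
]


def can_degrade(service: Service) -> bool:
    """Only the least critical tier may be shed to protect tier-1 services."""
    return service.tier >= 3


sheddable = [s.name for s in SERVICES if can_degrade(s)]
print(sheddable)  # ['recommendations']
```

The point of the sketch is that failure handling becomes a policy decision per tier, rather than per service: a tier-3 outage triggers graceful degradation, while a tier-1 outage triggers paging.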
Your application is the key. You work for a company that has applications that you build and manage. They are critical to your business.
Applications that scale…applications that must stay operational.
You must manage your application to scale to your biggest day, and your biggest day grows and grows every year.
Your application needs grow and grow…and ebb and flow…and spike…and have unpredictable needs.
Your company is dependent on your application working. Your customers ASSUME your application will work. Your customers will not tolerate outages.
Temperature probes monitoring crops? Micro drones monitoring wind speed in the atmosphere? You don't have to turn to these novel uses to see edge computing in action; look no further than the point-of-sale device at your local grocery store or the app on your mobile phone that lets you order a cup of coffee.
Edge computing is all about taking the specific timing-sensitive parts of your application and moving them closer to where they are needed…whether that need is an end user or a source of interesting data, it’s all the same thing.
What really is the edge and how do we deal with it? How do we decide what computing should occur at the edge and what computing should occur in the cloud? How do you verify that your application is doing what it is expected to do? How do you know if you are meeting your performance expectations in the edge? How do you keep visibility in your entire application, whether it’s in the cloud or at the edge?
It's your big day: the day of the year your company either makes it or breaks it. Your customers expect your system to work, always. Excuses are unacceptable.
To meet this new challenge, your application must use modern tools and techniques. Serverless, containers, and cloud technologies combine with new DevOps processes and risk management concepts to build a dynamic, highly scalable, highly available application that meets your customers' needs.
And central to all of this is the modern analytics necessary to determine how your system is running and what you need to do to keep it running…at scale.
Your customers demand modern applications, and modern applications demand modern tools and modern analytics.
Are you ready to meet these modern challenges?
Monitoring applications running in a typical data center is a pretty static process. Monitoring in the cloud is a very different endeavor. Why? The dynamic nature of cloud computing makes keeping track of resources being monitored a non-trivial activity. Additionally, examining the dynamic changes of the cloud environment itself is a valuable tool for detecting and diagnosing problems, yet often is difficult to actually accomplish in a useful and compelling way.
In this session, we will discuss best practices, learned both internally at New Relic and from observing and working with customers, for monitoring applications running in this dynamic environment and for taking advantage of the dynamic nature of the cloud to gain additional insight into your application performance.
What happens when your migration goes sideways? How do you make sure it doesn't happen again? In this session, we discuss how to migrate your application to the cloud so it doesn't go sideways. We cover KPIs and acceptance criteria, how to tell when your migration is complete and your system is operating as it should in the cloud, and how to understand what your application is made of so that you can properly plan and execute your cloud migration.
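The "acceptance criteria" idea above can be made concrete: compare post-migration KPIs against the pre-migration baseline. This is a hedged sketch; the KPI names, baseline values, and 10% tolerance are illustrative assumptions, not recommendations from the talk.

```python
# Hypothetical migration acceptance check: the migration is "complete"
# only when every tracked KPI is within tolerance of its baseline.

BASELINE = {"p95_latency_ms": 250, "error_rate": 0.002}  # pre-migration values


def migration_accepted(current: dict, tolerance: float = 0.10) -> bool:
    """Accept only if every KPI is no more than `tolerance` worse than baseline."""
    return all(current[k] <= BASELINE[k] * (1 + tolerance) for k in BASELINE)


print(migration_accepted({"p95_latency_ms": 260, "error_rate": 0.002}))  # True
print(migration_accepted({"p95_latency_ms": 400, "error_rate": 0.002}))  # False
```

Capturing the baseline before the migration starts is the key step: without it, "operating as it should in the cloud" has no objective definition.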
The teams at AWS and MLBAM joined New Relic to tell the story of how we enabled the team at MLB Advanced Media to deliver their business on the biggest days in baseball: 25 million fans and a World Series that made history.
Dynamic Infrastructure and the Cloud.
There are two ways that companies use the cloud: The static method is simply to use the cloud as a better data center. The dynamic method is to take full advantage of the dynamic capabilities of the cloud. This involves allocating resources only when necessary, whether those resources are EC2 instances, queues, files, or Lambda functions.
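The dynamic method described above boils down to sizing resources from current demand rather than provisioning a fixed fleet. Here is a minimal sketch of that decision logic; the thresholds and the queue-depth signal are illustrative assumptions, and real deployments would feed this into an autoscaling mechanism rather than print the result.

```python
# Hypothetical dynamic-allocation policy: derive the desired number of
# workers from queue depth, clamped to a safe operating range.

def desired_workers(queue_depth: int, msgs_per_worker: int = 100,
                    min_workers: int = 1, max_workers: int = 20) -> int:
    """Scale worker count with queue depth, never below min or above max."""
    needed = -(-queue_depth // msgs_per_worker)  # ceiling division
    return max(min_workers, min(max_workers, needed))


print(desired_workers(0))     # 1  (idle, but never scale to zero here)
print(desired_workers(950))   # 10
print(desired_workers(5000))  # 20 (clamped at the maximum)
```

The static method would instead run 20 workers all day; the dynamic method pays for 20 only during the spike, which is the cost advantage the abstract is pointing at.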
I fly radio controlled model planes. There is an old adage: “Keep your plane at least two mistakes high”. The same applies when building highly available, high scale applications.
How to keep your web application functioning and highly available.
Strategies for adopting the cloud by enterprises. This presentation discusses the process enterprises go through in deciding when and if to move an application to the cloud.