From Vision to Venture Ep. 05: Mike Amundsen, Amundsen.com

In this episode, Derric Gilling, CEO of Moesif, sits down with Mike Amundsen, a leader in the API ecosystem, to explore the evolving role of observability in API design and customer experience.

They break down the differences between machine vs. customer observability, best practices for tracking key metrics, and the importance of designing observability into APIs from the start. Mike also shares insights on API product management, the lifecycle of APIs, and how organizations can adapt to AI-driven changes in observability. Whether you’re a developer, product manager, or business leader, this episode is packed with valuable takeaways on optimizing API-driven businesses.


Listen to the episode on SoundCloud, Apple Podcasts, YouTube Music, or wherever you listen to podcasts. You can also watch the video on our YouTube Channel.


Introduction

Derric Gilling: Alright, welcome to the Moesif podcast, where we learn about APIs and customer observability. Joining us today is Mike Amundsen, a leader and expert within the API ecosystem, especially around machine observability, but also product observability. So Mike, I'd love to hear a little bit about yourself, and what comes to mind when you hear the term observability, especially customer observability?

Mike Amundsen: Yeah. Well, first of all, it’s great to join you, Derric. I always enjoy talking with you. We don’t get to do it often enough, so I’m happy to be here and share. You and I have talked about this before. I really think of observability as that way to figure out what’s really going on, how it can affect a customer, how it can affect a product, and how that can affect the experience. I tend to think of observability a lot from the design perspective. How can we design, early on in the process, for the ability to monitor, observe and react in ways that positively affect the product? So I tend to think of machines, services and products as different versions of observability, but also the design side as well.

Machine vs. Customer Observability

Derric: Awesome. You mentioned customer observability and some of this machine-level stuff. How do you differentiate between these different types of metrics? What would be some key examples of metrics to track at each level?

Mike: Yeah, so starting from the machine side of things, I think people are pretty familiar with metrics like latency, memory usage, disk space, response time or error rates. These are just the nuts and bolts of getting a system up and running, but they don’t really say much about what’s going on with your product or services between machines. So think about service-level observability: how many completions of a cart checkout, how often carts are abandoned, how many times there are errors connecting to another API. That’s the service layer. And then you think about the product itself: how long it takes a customer to complete a task, do they abandon? Do they not understand, or make common mistakes over and over again? Observing the way people behave and the way your service reacts to them is also really critical in helping you build a good product, I think.
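To make the three layers Mike describes concrete, here is a minimal sketch of recording metrics tagged by layer. The `emit()` helper and in-memory `EVENTS` list are hypothetical stand-ins for a real metrics pipeline (Prometheus, Moesif, or similar); the metric names are illustrative, not from the episode.

```python
# Hypothetical in-memory sink; a real system would ship these
# events to a metrics backend instead of a list.
EVENTS = []

def emit(layer, name, value, **tags):
    """Record one metric, tagged by layer: machine, service, or product."""
    EVENTS.append({"layer": layer, "name": name, "value": value, **tags})

# Machine layer: nuts-and-bolts health of the process.
emit("machine", "response_time_ms", 42)
emit("machine", "error_rate", 0.01)

# Service layer: behavior between services and APIs.
emit("service", "cart_checkout_completed", 1, cart_id="c-123")
emit("service", "upstream_api_error", 1, upstream="payments")

# Product layer: what the customer actually experienced.
emit("product", "task_duration_s", 87.5, task="checkout", abandoned=False)

assert {e["layer"] for e in EVENTS} == {"machine", "service", "product"}
```

The point of the tagging is that one dashboard can then slice the same event stream three ways, rather than machine metrics living in one tool and product metrics in another.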

Designing for Observability

Derric: Makes sense. And does this change the approach to observability? It’s one thing to track uptime, but with something like a cart checkout, there are lots of ways to run into issues with the experience. So how do you wrap your head around what to be tracking, and make sure we’re tracking the right things?

Mike: Right. I think that’s an excellent question. Like you pointed out, people sort of understand how to do the basics for machines, but how do you start tracking the other bits? I think that’s where the design part comes in early. When I’m designing an API itself, I’m going to want to pay attention to this transaction, this notion of completing a payment or selecting shipping, collecting the information I need in order to serve this customer better. Or the amount of time it takes a customer to fill out a complex onboarding experience. So the way I design very often has that monitoring built in. There’s a constant thread through the activity: you design by identifying the customer experience itself, not just the user interface but also the service they’re using, or authentication or authorization. Just having a tag on this service experience, and then collecting that data, gives us an opportunity to say, did you notice it takes a lot of time? Everybody seems to slow down when we get to this point. Maybe we need to re-engineer or redesign the product or service to improve it. So I think putting on that design hat is really important.

Derric: I really like that, taking it from the initial design versus treating observability as an afterthought. Because otherwise you start trying to bolt on and band-aid observability onto a product, maybe not even tracking the right things. You’re just kind of throwing everything but the kitchen sink at it.

Mike: Yeah.

Best Practices for API Design and Observability

Derric: How do you incorporate that into a design process? What are the best practices, especially for design and architecture teams who are thinking about shipping new APIs?

Mike: Yeah, I try to encourage teams to think the way I was saying, about there being an experience. Onboarding is an experience, checkout is an experience. Selecting a product is an experience, approving a contract is an experience. Anything that touches that should be monitored as part of the product. So anything touching a particular flow or experience in the chain is something I want to monitor. For example, I want to monitor the amount of time it takes, but also things like: did they abandon the process? Did they stop? Did they have to save for later and come back? Did they get hung up on some feature that seems complicated, like creating a list or something? So really identifying those things is important. I’ll say another thing about observability and monitoring. I mentioned how long it takes, or did they abandon. These are what I call proxies, and they may not be very good proxies. So one of the other things that’s really important is to be prepared to say maybe measuring time isn’t the right measure. That’s not the pain point for the customer. Maybe the way we’re asking for information is the pain point. So sometimes at the product or customer level, you need to be pretty creative. You need to get customers in a room and see how they behave, and that becomes a much better proxy or representation for what you want to monitor, and can affect the way you create your product.
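One way to read Mike's "tag the experience" advice is as a small wrapper that marks a named flow, times it, and records whether it completed or was abandoned. This is a minimal sketch under assumptions: the `experience` context manager and `RECORDS` list are hypothetical illustrations, not an API from the episode or from Moesif.

```python
import time
from contextlib import contextmanager

RECORDS = []

@contextmanager
def experience(name):
    """Tag a customer experience (checkout, onboarding, ...) and record
    its duration and whether it completed or was abandoned."""
    start = time.monotonic()
    record = {"experience": name, "completed": False}
    try:
        yield record
        record["completed"] = True
    finally:
        record["duration_s"] = time.monotonic() - start
        RECORDS.append(record)

# A completed flow: the body runs to the end.
with experience("checkout"):
    pass  # ... payment, shipping selection, etc.

# An abandoned flow: any exception leaves completed=False.
try:
    with experience("onboarding"):
        raise RuntimeError("user closed the tab")
except RuntimeError:
    pass

assert RECORDS[0]["completed"] and not RECORDS[1]["completed"]
```

Because every flow emits the same record shape, questions like "where does everybody slow down?" become a simple aggregation over `RECORDS` rather than a custom instrument per feature.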

Who Owns Customer Observability?

Derric: That’s a really good point. And we speak on things like time to first API call, or time to first “Hello World”. These types of metrics are very cross-functional and can impact everything from documentation to the actual API experience. What does that mean from an ownership standpoint? Who owns customer observability, and how do you ensure you have the right process to track against the metrics you’re defining?

Mike: Yeah, that’s a good one too. I encourage the notion of product ownership. Whoever your designated product owner or team needs to look out for these experiences I just mentioned. They often need to be the customer’s voice in the room, thinking about not just user interfaces but APIs. Often, the API Dev Rel person is the customer in the room and they need to be pretty vocal and attentive. If we’ve got a target audience of enterprise developers, I need to spend time in the enterprise space and own that experience. That person needs to translate whatever those enterprise people say into things that make sense for our product team, developer team and design team. That goes for all kinds of audiences. I tend to think of this as a way for people to inform products on how they can better support customers.

The Role of an API Product Manager

Derric: Awesome, and just changing gears a little. We’ve heard this term API product manager, and more recently AI product managers are becoming a thing. What does that mean? Should you have a product manager for your APIs? If so, what do their day-to-day responsibilities look like? What are their goals?

Mike: Yeah, that’s a good one. My short answer is yes, you definitely should have a product owner, even for internal APIs. Say my job is to create a set of APIs that make other developers in our organization more productive and effective, reduce mistakes, and get things done with higher quality in a shorter amount of time. That’s my product. Often I think of product owners as the ones that carry water for everyone else. What can I do to make your job better? How can I improve this experience, reduce time and improve quality without added effort? And that doesn’t matter whether it’s APIs you’re creating or third-party ones, like the address management example: let’s get a third party to help us do address management, or taxes, and make sure we’re doing things correctly. But now I’m responsible for making sure those APIs work well in our ecosystem.

API Lifecycle and Deprecation

Derric: I like that: treating your APIs as products even in internal organizations, where other teams and business units might be the customer, and you need to treat them just like an external customer. But as the API product manager, what should I be tracking in terms of business outcomes? As my API matures and eventually reaches a potential end-of-life scenario, we could talk about different experience metrics and adoption metrics. Is it always the same, or evolving? How do I think about what I should be looking at from an observability standpoint?

Mike: Well, you know, I think of this again with a product owner hat on. There’s a lifecycle to every product. First it’s a new product, an experiment: we’re not sure if it solves the problem. We think it will get uptake, or maybe it does and we’re realizing its potential, its earnings potential or whatever the payback is, like reduced time for internal tasks and so on. Eventually, that API will start paying off; it’s doing what it should do. That’s probably a good time to leave it alone as a product owner. It’s not really a good idea for me to invest more time and money into changing it; that would probably take it further from its original purpose. If we need some other behaviors, maybe we create a new API and leave this one alone to keep running. A great design can keep a great audience. Some products have been on the shelf for 60, 70 or 80 years and they’re still serving their customers fine. They don’t need a new box, label or ingredients; they just stay the way they are. My goal as a product owner is to get to that maintenance life: keep it healthy and clean, maybe fix some bugs, but not introduce new material. Eventually that product’s gonna get passed over; people don’t use it as much anymore, they don’t need it that much. Sometimes you build an API to solve a problem and within a year or two, the problem is solved. So it’s time to deprecate the whole thing.

Measuring API Value and Sunset Planning

Derric: I really like that approach to being pragmatic. We don’t need to replace and recode every single service or API out there; there’s no value in doing that. That’s just creating work for the sake of it. Speaking more around delivering value through APIs: how do you measure that? How do you ensure the APIs you shipped two or three years ago are still creating value? And then how do you determine when it makes sense to deprecate? At a certain point, you do have to deprecate.

Mike: Yeah, that’s good. The value proposition is kind of baked in at the beginning, right? You need to reduce onboarding time, or increase customer engagement by 10% over last year. We have some kind of engagement metric, and then we need to see if that metric really got realized. Did you change the amount of onboarding time? Are people using it more now? So I have to come up with observability elements that make that observable. Now, eventually, if I’m paying attention, I might find the number of people using this API is dropping off. Sometimes that’s because we’re getting errors or a problem was introduced; sometimes it’s just that the audience is getting smaller. Once I know what those value propositions are, I know whether we’re keeping up a steady stream. If it’s starting to wane, it may be time to start the deprecation process. I need to figure out if there’s going to be a replacement: do we have one that’s better, more appealing, or solves the problem in a better way? Or is it just not needed anymore? Then I’ve got to give people a chance to get used to the notion that this is not going to be here anymore, and that depends on the audience. If you’ve got a large audience of thousands or tens of thousands of sites, not just developers but organizations, that’s going to take some time. They can’t just stop on a dime, and you need to offer them alternatives. You need to say we have a new API that’s much faster and better, or that there are other options out there. You need to give them a way to replace what’s left. Then you stop updating it, stop doing security fixes. And finally, you’re just not taking any traffic anymore. That’s the lifecycle. It’s a circle of life, and every organization needs to face it.

Derric: You have to face it. As long as you’re not pissing people off by shutting something off tomorrow, it’s usually okay. As long as there’s a plan of action someone can take, and hopefully a good enough time period to do it within, that usually helps a lot.

Common Pitfalls in Observability

Derric: Speaking more around what can go wrong: what do you see in terms of determining the right metrics? Maybe tracking too much, or not enough?

Mike: Right, that’s a good point. There’s the classic case where you want to make sure you’re not collecting so much transactional information and so many internal metrics that you affect the performance of the product itself. You want to make sure you don’t change your design to be more amenable to observability than to customers. Observability and monitoring are a behind-the-scenes job; that’s the hard part. It should be super easy for developers. And a lot of times, you’ve got to select your proxies well. You figure out where the value is. Think of a cloud service where people upload a lot of photos and then have social interaction. The value is that they get a lot of social interaction, so you start monitoring that and designing the product to pay off on the social interaction. Well, it may turn out that storage is actually a better proxy for success than interaction. If you pick the wrong proxy, you might create a billing model or marketing campaign that really doesn’t add value for your customer. So you have to be prepared to change proxies, or rethink them. You need to constantly talk to customers.

The Impact of AI on Observability

Derric: That’s a good point about understanding and being open to changing proxy metrics. We’re seeing this especially within the Gen AI space. APIs used to be relatively cheap, with some storage costs or CRUD apps behind them, but now with more data and Gen AI APIs there’s real cost to consider. So getting the visibility to understand whether you’re upside down, whether you’re underwater in terms of generating revenue for the business, is becoming more important.

Mike: Yeah. And you bring up a good point. We’re still very early in the curve, or hype phase, on these Gen AI situations. But there’s a big push on AI-driven agents; they’re becoming a whole new customer for your API. They have their own pains and troubles. Most of these AI agents really have a limited amount of memory; you can’t just keep shoving lots of data at them to get them to figure out how to use your API. You’re going to have to start designing APIs specifically for the Gen AI agent community, like designing for mobile or another community. This is going to be important, and it’s a great example where, when bots get involved with your API, they can really throw off your metrics and value add. You have to pay attention now. I’m telling customers to be very observant, monitor exactly who’s making these calls, and watch for AI bots, because that could be a new market or a threat.
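Mike's advice to "monitor exactly who's making these calls" can be sketched as a simple classifier over request logs. This is an illustrative toy, not an endorsed detection method: the `flag_ai_traffic` function, the user-agent markers, and the rate threshold are all assumptions for the example; real detection would combine many more signals.

```python
from collections import Counter

# Hypothetical request log entries: (api_key, user_agent)
requests = [
    ("key-1", "Mozilla/5.0"),
    ("key-2", "gpt-agent/0.3"),
    ("key-2", "gpt-agent/0.3"),
    ("key-2", "gpt-agent/0.3"),
]

BOT_MARKERS = ("bot", "agent", "gpt", "crawler")  # illustrative list
RATE_THRESHOLD = 3  # calls in the observed window; tune per API

def flag_ai_traffic(reqs):
    """Flag callers that look like AI agents, either by a bot-like
    user-agent marker or by unusually high call volume."""
    counts = Counter(key for key, _ in reqs)
    flagged = set()
    for key, ua in reqs:
        if any(m in ua.lower() for m in BOT_MARKERS) or counts[key] >= RATE_THRESHOLD:
            flagged.add(key)
    return flagged

print(flag_ai_traffic(requests))  # key-2 is flagged on both signals
```

As Mike notes, a flagged caller is not automatically an abuser; the same signal can reveal a new market segment worth designing for.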

Derric: Indeed, and sometimes watching for that unintentional abuse. It’s not necessarily a bad actor, just someone accidentally using your API the wrong way, but your bill still goes up. Making sure you’re able to detect that and prevent a massive overage or spend is now becoming more important.

Mike: Or it works the other way. It’s not just you as the host facing a massive spend; all of a sudden your customers, who are using your API to generate revenue or manage costs, are suddenly upside down. You don’t want them to end up with a huge bill they don’t understand because someone is using their API in unintended ways. Often those unintended uses are great gifts, but you can only spot them if you’re really observing your API.

Derric: Sounds like with great power comes great responsibility. APIs already give you a lot of ways to integrate, but with AI agents there’s an explosion of different use cases and ways to get value out of the platform. So I like your point that you have to be even more diligent around observability for these types of new products.

Future of Observability and APIs

Derric: My last set of questions is really around how observability and the API space are changing, especially with the proliferation of Gen AI.

Mike: Yeah. The line is, there is no AI without APIs, right? A truism we all need to remember. Years ago, we used to focus so much on machine observability because we were writing all our own code, consuming and hosting it ourselves. But now we’ve got so many cloud services, many run by AI, and what’s really happened in this observability space is that it’s gotten more complex. Not just complicated, but with more and more different kinds of players interacting with each other. So the job of being a product manager, whether internal or external, really involves observing and finding value in your own APIs, other people’s APIs, cloud platforms, and client applications. That’s a huge job, and organizations that pay attention to that whole span, and have someone or a team look at it, will get ahead in this innovation round involving mostly AI.

Derric: That makes sense. There are more and more use cases, more data to look at, and you have to interpret it. You have to understand: is this the right decision to grow my team or business, and where do I need to invest?

Final Takeaways and Advice

Derric: Any last takeaways in terms of customer observability and what’s happening in the API space?

Mike: I guess more than anything else, pay attention. Make sure you’re paying attention, because things are changing so quickly. Unintended use is happening more and more, and it’s often a gift; if you pay attention, you can be on the leading edge of the curve. You won’t get so surprised, and sometimes you might end up well ahead of your competitors because you saw it before others did. Observability is a fantastic tool for improving your business, so pay attention all the time.

Derric: That’s a really good point. It’s one thing to have observability in place, but you have to look at it, set up the right alerts and dashboards, and make sure there’s a process at the organization to get the most value out of observability.

Mike: That’s a great idea. Is there a process to vet this material? Not just figure out what’s happening but what does it mean and are we willing to experiment with that process? That’s really good.

Derric: Well, thank you so much for coming on the podcast. Looking forward to sharing this with our audience.

Mike: I’m looking forward to it too, and I’d be happy to come back anytime. Thanks a lot.

Derric: Appreciate it.
