The POPCAST with Dan POP

Episode 61 - Austin 4:14 says you need Observability and OpenTracing

Episode Summary

In this episode of the POPCAST, the amazing Austin Parker talks about his journey, observability, OpenTracing, and how he first put together the truly unique Deserted Island DevOps, a single-day virtual event set in the popular game Animal Crossing on the Nintendo Switch. You will learn a ton from Austin's vast experience in observability and tracing, and from an in-depth discussion of the CNCF project OpenTelemetry and its usage. We also discuss at length our mutual love of professional wrestling, favorite matches, WWE, ECW, AEW, and much, much more.

Episode Notes

Austin Parker is an Open Source Software Engineer at LightStep, where he works as a core contributor and maintainer of the OpenTracing project. Prior to LightStep, he was a Software Architect at Apprenda, building enterprise platforms using Kubernetes.

Timeline/Topic

00:00 - Sponsor - Stormforge - stormforge.io/popcast

00:59 - Opener

01:05 - Intro to Austin Parker and his journey

08:47 - Developer Advocacy - do you need to have a prior life in SRE or Operations?

12:12 - Observability and Tracing - How do you do this well?

18:04 - OpenTelemetry Project in DEPTH - Why should I use it? What does it do?

33:19 - What is LightStep... and what does it do for me?

41:30 - Deserted Island DevOps  

47:07 - Professional Wrestling....and Austin Parker 3:16 Says

57:26 - What work is Austin most proud of

Episode Links

OpenTelemetry - https://github.com/open-telemetry

LightStep - https://lightstep.com/popcast

Deserted Island DevOps - https://desertedisland.club/

Follow on Twitch - https://twitch.tv/desertedislandtv

Glossary of wrestling terms - https://en.wikipedia.org/wiki/Glossary_of_professional_wrestling_terms

Episode Transcription

Announcer (00:00:00):

This month's POPCAST sponsor is StormForge. StormForge's Kubernetes performance testing and optimization platform is the easiest way to ensure your applications behave the way you want them to, while cutting out unnecessary resources and time spent manually tuning. In alignment with this, StormForge is asking for your help reducing the amount of unused cloud resources, making both a financial and an environmental impact in the world today. We want you to help make that impact with us, so visit https://stormforge.io/popcast. That's stormforge.io/popcast to learn more about how you can help erase cloud waste and take the cloud waste pledge. While you're there, try out the free tier of the machine learning-backed service to start saving resources and getting better performance today.

 

POP (00:01:01):

Hello everyone, and welcome to the popcast. Today, I have a friend of mine. This is Austin Parker. He's the principal developer advocate for LightStep. Welcome, Austin Parker.

 

Austin Parker (00:01:15):

Hi, it's great to be here.

 

POP (00:01:17):

It's great having you, man. We were in a panel together. I'm like, "I like this dude." I'm like, "I want to have him on the show." And I like LightStep. I've liked LightStep for a bunch of years, and we're going to get into that, folks.

 

Austin Parker (00:01:26):

Yeah.

 

POP (00:01:26):

I want to talk about your journey, dude. I want to talk about how you started computing in general, and just all the way to where you are now.

 

Austin Parker (00:01:37):

Wow. Yeah, first off, thanks for having me again. I love to talk to people about this stuff. I really got started pretty young. I was a computer nerd growing up. A lot of kids like to play sports or do things exciting like that, and I decided to stay inside and play around on an old Apple II. And that got me into... Really actually, first program I ever wrote was, I wrote a database in BASIC to keep track of baseball cards. I wanted to see quickly what I had.

 

POP (00:02:20):

Where did you grow up, out of curiosity, if you don't mind me asking?

 

Austin Parker (00:02:22):

When or where?

 

POP (00:02:23):

Where. Where.

 

Austin Parker (00:02:25):

Eastern Kentucky.

 

POP (00:02:26):

Okay.

 

Austin Parker (00:02:27):

Southern boy. Culturally Southern, I guess.

 

POP (00:02:30):

We talked about the baseball cards. And what was your team?

 

Austin Parker (00:02:32):

Reds.

 

POP (00:02:33):

You were there [crosstalk 00:02:34]. Okay, so you liked the Cincinnati Reds.

 

Austin Parker (00:02:36):

Yeah, that's like the Pete Rose era. Yeah.

 

POP (00:02:37):

Okay. All right. Big Red Machine.

 

Austin Parker (00:02:39):

Yeah, Big Red Machine. I remember actually, I saw Ken Griffey Jr. when he was going through the Reds before Seattle. Or was it after Seattle? Been a while.

 

POP (00:02:52):

Okay.

 

Austin Parker (00:02:56):

So yeah, I got started really early with that. And honestly, funny enough, for the longest time I didn't want it to be a career. This was the '80s, '90s, and there was a very negative stereotype about nerds, and you didn't want to be a nerd. So I spent a lot of time in my 20s trying to do things other than computers, and never really lost the habit, never really lost the nerve for it. So at some point, things worked out. I had the opportunity to go back and finish college in my late 20s. And got out of there, got into a software job. It was a company called Apprenda, which, if you are a long-time cloud native observer, you might recognize the name of Apprenda. If you've been to the second KubeCon, you might have a t-shirt with that name on it.

 

Austin Parker (00:03:58):

But I got into Apprenda as a software developer in test, and then went from there into DevOps, really. We didn't have a DevOps program, or really a lot of... It was very much a siloed thing. There were QA and ops people on one side, there were devs on the other side. They would write a bunch of changes, throw the code over the wall, and it was our job to run it. So that's what got me down this DevOps observability path. Apprenda, unfortunately, is no longer with us as a going corporate concern. And I joined LightStep several years ago, originally to work on open source, and it went in a different direction. Decided, "Well, hey, let's try this DevRel thing out." And here I am.

 

POP (00:04:54):

Incredible. And so, what drew you to observability and distributed tracing and all of that? What drove you to that?

 

Austin Parker (00:05:02):

So, the primary thing-

 

POP (00:05:05):

Because look, anybody can go be a software developer and you can just work on the infrastructure, work on the operations, but there's a special person... Because again, you know my history, I was [inaudible 00:05:14]. And so, it's always about telemetry, it's aspects [inaudible 00:05:19] being able to correlate things that SREs care about. But it takes a special person to get involved with that. What drew you to it? Let's go that route.

 

Austin Parker (00:05:31):

So I think that, ironically enough, it was my time at Apprenda that really made me understand the difficulty of not having it. So from an early age, the first thing that got me interested in using computers as a tool was, I want to keep track of stuff. I have this information and I need to make it accessible to me somehow. I have these giant boxes of baseball cards. I want to know which ones do I have, which ones do I not have, so when I open a new pack, I can find it. That desire to use computing as a tool to help you understand information, and understand information at scale, really stuck with me.

 

Austin Parker (00:06:12):

And it became really personal when I was at Apprenda, because I eventually wound up being one of the primary maintainers, really, of our entire dev, test, QA, and release life cycle. And a huge part of that for us was these very long and painful test runs. Apprenda was a platform as a service. It ran on .NET and Java, and it was really this big enterprise product. It was extremely customizable, and well over 80% of the tests for this were integration tests. You had to deploy the entire thing and you had to deploy it to a lot of really specific configurations. We had this whole 20x10 matrix of features that had to be enabled and disabled to cover all of these different test cases.

 

Austin Parker (00:07:08):

This was not too long ago, but it was long enough ago that we were testing on Windows Server 2008, Windows Server 8.1, I think CentOS 5 or 6. Really kind of older setups where you didn't have a lot of the nice automation tools you do now. So the whole thing is held together with spit and baling wire and prayer. And the worst thing was, you had these 16-hour test runs, and it's the middle of the night and something breaks, and you don't know about it until the next day. And you come in the next morning, everyone's under the gun to get a release done, and whoops, turns out something crashed at 3:00 AM. We don't know why. But it wipes out that entire test cycle, so now you have to do it all over again. And now you can't make progress.

 

Austin Parker (00:08:03):

So I really started to understand, because it personally affected me, how valuable, how essential having that kind of data is and that kind of insight is into your system, into your software, and how useful it is to be able to take that data and those insights and funnel them back into the team. It's not just about, "I want to know what's going on to scratch some itch." It's like, "This is hurting everyone's life and the way we make decisions, because we don't have this information." We can't communicate it in reasonable ways. Everyone gets super mad because people are yelling at each other. It was extremely stressful to try to run the systems in that data vacuum.

 

POP (00:08:47):

Can I detach from that a little bit? Do you think that being good at developer advocacy takes a certain amount of time in the field? And what I mean by that is, getting to deal with outages and be able to say, "Look, I can sympathize with you and I'm going to write an article or I'm going to do a talk about this, because I've literally got shrapnel from dealing with the Apprendas of the world, or doing SRE work for Kubernetes," or anything like that. Would you say it's a useful thing to have?

 

Austin Parker (00:09:22):

I want to draw a distinction because I think, is it useful? Yeah, it's absolutely useful. Experiencing things for yourself is maybe the best way to be able to talk about them, but I wouldn't say it's a requirement. I think you can be an excellent developer advocate even if you don't have the battle scars, because that's going to give you a unique perspective too. I think one of the ways we do ourselves a disservice in the tech industry is by almost requiring everyone to go through all this trauma of, "Well, I did it this shitty way, and so now you have to do it this shitty way too or you're not a real dev," or you're not this or that or the other. And one, it's kind of self-defeating, because it means that you're putting a bunch of people in a situation where they're incentivized to not change things for the better, or to not listen, I guess, when it comes down to it.

 

Austin Parker (00:10:29):

If we put such a high value on effectively hazing people into the industry by making them go through negative experiences, then one, that drives a lot of people out because they don't necessarily get it, they don't get that, "Hey, it's okay. You're supposed to go through this," and two, I just don't think that it's a super healthy way to consider a job or an industry or a role. Taking the pager shouldn't be this scary thing. Responding to an incident or an outage or whatever shouldn't be traumatic. And because of the way it is, and this goes to a lot of things, it influences our language.

 

Austin Parker (00:11:16):

There was a really interesting thread on Twitter the other day by, I think, Ian Coldwater, about the prevalence of military language in red teaming and in security communications, where you need to ask yourself, "Well, why does that happen?" How did we get to the point where cybersecurity, and security in general, is often described in these complex... Or not even complex, these blunt military metaphors? You're at war with attackers, you're doing defense in depth. And that leads to a lot of things. It kind of sets people's minds in a certain way, and I think it gets reinforced with the fact that you look around and a lot of ex-military people tend to go into security. Is there a reason for that? Well, maybe it's because it feels like something that they're comfortable with because of things like this. Yeah, it's something to keep in mind, something to consider.

 

POP (00:12:12):

So let's get back into just observability and distributed tracing in general.

 

Austin Parker (00:12:17):

Yeah.

 

POP (00:12:20):

Let's just talk about observability in general. Tell me the core tenets of how you do this well.

 

Austin Parker (00:12:28):

Yeah, that's a great question. One of the things that I think is important to understand about observability is that, to me at least, observability is a lot more about the culture you have around understanding systems. Let's step back for a second. I work for a vendor, I have a lot of friends that work for vendors, and we all have our marketing lines on this. Everyone's going to have their own particular spin on what observability is. And I think it's silly for everyone to go to war or go to the mat over marketing terminology, because at the end of the day, observability is a state more than a set of checkboxes. It's, "I have this potentially infinitely complex system and I want to be able to ask infinite questions about that system, and I want useful answers too." That's the part that tooling helps with.

 

Austin Parker (00:13:37):

And then the next part of it, once you've got your infinite system and your infinite questions, and you've got your infinite answers, then you want to do something about them. And that's where the cultural part of observability comes in. And that's where you're using all of this data, you're connecting it from engineers to product people, to CS, to executives, to whoever, and you're using that to really drive decision-making and to let people feel like they have control over what's going on. And that creates a virtuous cycle where it's like, oh, now that we can do this, we want to keep doing this. We want to do more of this. We want to get more and more observability. We want to make sure that when it's 3:00 AM and something crashes, we know why and we don't have to spend a bunch of time trying to figure out why. Preferably, we would have a tool that we could just ask, "Hey, this crashed. Why?"

 

POP (00:14:31):

And again, I think from a vendor perspective, that's how we're able to pull out our goods. But I think what I see out there as well is, I think it's a cultural shift. It's where the application owners need to understand there are key things that you have to tool to be able to ensure this. And especially with something like cloud native and cloud apps, it's super difficult to understand that, because it's completely different than something running in a monolithic state at an enterprise level. It's just a completely different thing. So, I'm sure you spent-

 

Austin Parker (00:15:05):

Yeah, there's too many layers of abstraction.

 

POP (00:15:07):

I'm sure you spent at least the first couple of years at LightStep just talking about how to do... Because folks were like, "What does this mean? What is latency? What are percentiles? Why should I care about that?" and stuff like that.

 

Austin Parker (00:15:25):

Yeah. Yeah, you have to understand the value, and I think the hard part about understanding the value sometimes is because we approach observability as... There's a couple ways you can approach it. One is looking at what you already have. Well, we all have logs and we're all pretty comfortable with... Well, I say we're all pretty comfortable with logs, but for as widely used as they are, the battle for structured logging still continues. We certainly haven't reached this point of, "Hey-

 

Austin Parker (00:16:03):

We certainly haven't reached this point of, hey, everybody is using structured logging or everyone's using the same logging format. Too many people I know, people maybe listening to this, in recent memory have probably [grepped 00:16:16] through stuff on a remote server because they're not aggregating stuff, they're not collecting it and centralizing it... But when you start from that point where people are looking at this, and I think this is a challenge that I don't know if we've actually done as well as we can as an industry on, but you start out by saying... You tell people that maybe are having this sort of struggle with logging, where it's like, "I don't even understand logging that well," or, "I know we could be doing better, but we're not doing great." And you show them [Prometheus 00:16:52] and then you show them tracing, and then you come back and say, "Well, all right, all that stuff, that's junk. None of that's actually observability."

 

Austin Parker (00:16:59):

We do a lot to discourage folks, I think, inadvertently, by looking at this leading edge of, hey, here's all the people that are doing it extremely right by our metrics, and that's what you should be going for. Meanwhile, it's so hard to actually do any of this stuff in a useful way that we have thousands and thousands of people that are stuck back here on just, "I have some logs, those are text files, and those get rotated every 24 hours. And when someone calls me and tells me the app is down, then I start searching through those." And this is one reason that I think OpenTelemetry can be very useful, because it can start to address some of these systemic problems.

 

POP (00:17:46):

Yeah, because folks think that... You have Prometheus, I have my metrics and I'm exporting this data, but then it's like, okay, that's just a ton of data or a ton of logs and I need to correlate it. And distributed tracing is yet another tooling piece that's getting you there. Interesting thing about tracing, and I love the OpenTelemetry project, I think it's fantastic. I had [Liz Fong-Jones 00:18:09] on last year and last season, and again, that is the high watermark of what a good SRE practice is and person is. And it's interesting, OpenTelemetry. There are obviously a lot of cool things that folks are using, but give me the why. If I'm [inaudible 00:18:34] just starting out in the field, why should I care about OpenTelemetry in general?

 

Austin Parker (00:18:37):

Yeah, so I think the thing to keep in mind about OpenTelemetry is that it's one of those stories in technology where how we got here is more important than what we're doing, if that makes sense.

 

POP (00:18:49):

Mm-hmm (affirmative).

 

Austin Parker (00:18:51):

If you think back even five years ago, six years ago, if you wanted to go and you wanted to... You say, "I'm tired, I need something. I need metrics or logs or traces or whatever." You had really two broad options. One was to try the open source route. And you had tools like [Jaeger 00:19:15], which was just starting out, Prometheus was out there, Elasticsearch... well, was more open source maybe than it is today, but you had these options. But the thing that always held you back was the support. You needed all these different plugins, you needed something for your specific... It's like, "Oh, I've got to get host metrics out of here. I've got to get my syslog stuff. I need this particular thing for my SQL client to trace that, and my web framework to trace that." And it was a lot of work, and a lot of it wasn't done for you. It was a lot of "I've got to go and manually write this stuff." And so, maybe you look at the vendor space and see a bunch of different companies that all said, "Look, it's easy. You pay us whatever, you drop this agent in on a host, and it does it all for you. It'll get your logs, it'll get your traces, the APM, it'll do your metrics. It'll do this, that, and the other." And literally every vendor has their own one of these. You go in maybe a few years after that, and some of this stuff starts to be open-sourced, and you go and look at the code bases, and you start to realize that a lot of the code for "how are we actually scraping this data" looks very similar, because there's really only one way to instrument a SQL client like JDBC. There's really only one way to get stuff out of [/proc 00:20:54]. It all looks the same at the end of the day, but we've got all this duplicated work going on across the industry to make a bunch of proprietary agents. It's silly. It really was silly. And I don't think anybody liked it.
I don't think customers liked it, because the people that are signing the checks certainly don't like it. It's like, "Well, we can't get out. We can't stop using this company or that company, because we've deployed them everywhere."

 

Austin Parker (00:21:28):

If you don't have the budget for it, then you're stuck back in a prehistoric age where it's like, well, I have to do all this work myself. I'm spending more time trying to instrument my software and do the plumbing to see what's going on, rather than actually fixing problems in my stuff. And I think if you were a developer, or someone that's interested in instrumentation, someone maybe who is creating things like Kubernetes or framework code or open-source licensed libraries, and you have users that say, "Well, I want to monitor this." How do you choose? Which do you write an integration for? Do you integrate with this or that or the other? No, you don't. You can't integrate with one proprietary thing and not the other proprietary thing. And the maturity level of a lot of these open source projects was different.

 

Austin Parker (00:22:24):

Now in 2021, it's easy to say, "Yeah, Prometheus basically, you know, has won for some certain definition of won-

 

POP (00:22:35):

They're for metrics, though-

 

Austin Parker (00:22:37):

For metrics-

 

POP (00:22:38):

For metrics, yeah.

 

Austin Parker (00:22:39):

But even with that, if you go to the Prometheus exporters and you look at how many of those... There's a lot of stuff that doesn't export native Prometheus. It's just that the community has stepped up and said, "Well, we're going to treat Prometheus as the [lingua franca 00:22:54] for our metrics, and we're going to write the exporter that lets you scrape, you know, whatever. Your F5 BIG-IP."

 

Austin Parker (00:23:04):

OpenTelemetry is really the story of how all those different stakeholders, the end users, the vendors, the open-source people, got together and said, "It's really silly for us to duplicate our work like this." And the goal of OpenTelemetry in five... If I'm still in OpenTelemetry in five years, and I hope I will be, and I hope everything still goes great, but five years from now I want to come back on here and say... If someone asks, "What's OpenTelemetry?" I'm going to say, "You're already using it," because it's just the standard. It's built in to everything. Did you [kubectl 00:23:40] apply a deployment? Congratulations, you're using OpenTelemetry. Did you spin up MySQL? Congratulations, you're using OpenTelemetry. Are you using Express or ASP.NET Core or Spring? Congratulations, it's already there.

 

POP (00:23:58):

Meaning it's basically like the tracing functions are already deployed as part of just the underlying framework for anything-

 

Austin Parker (00:24:06):

For anything [crosstalk 00:24:07].

 

POP (00:24:07):

And it's agnostic to any vendor, because they're contributing to open telemetry as the basis versus having to do offshoots and all of this-

 

Austin Parker (00:24:17):

Right

 

POP (00:24:18):

Okay.

 

Austin Parker (00:24:19):

That's the goal, that OpenTelemetry just becomes a part of the cloud native ecosystem, so that if you're writing cloud native software, your frameworks have this built in. And not just for traces either, but also for metrics, and eventually for logs. The three big things that OpenTelemetry provides: one is the specification and standards work. We've done a lot of work over the past couple of years to define semantic conventions for things like, well, what should the metric name be for garbage collection? What should the metric name be for memory utilization or disk free? Once there's an open standard on that, all of these other tools that are doing analysis on the raw data can start to adopt it and start to become smarter, because they know that no matter what it is, it's going to send me CPU data in this format.
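For illustration, the semantic-conventions work described here standardizes metric names across languages and tools. A few example names drawn from the OpenTelemetry semantic conventions (exact names vary by specification version):

```text
system.cpu.utilization    # fraction of CPU in use (0..1)
system.memory.usage       # memory in use, in bytes, broken down by state
system.disk.io            # bytes read from / written to disk
jvm.gc.duration           # time spent in JVM garbage collection
```

Because every SDK and exporter emits the same names, a backend can recognize "CPU data" regardless of which language or agent produced it.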

 

Austin Parker (00:25:23):

Beyond that, the second big part of this is the API work: creating an API, a pretty low-level API admittedly, for creating traces and metrics, and at some point associating logs with that.

 

Austin Parker (00:25:40):

And then the third thing is really this idea of context, underpinning the traces, metrics, and logs, so that when I'm in a process and I'm creating a trace, it has a context associated with it. If I create, when I-

 

POP (00:25:54):

As in if you're running in Kubernetes, it's in this pod, in this cluster and-

 

Austin Parker (00:25:58):

Right. That's all those labels, really, that [affect 00:26:01] it, but also that you could use to associate, hey, these metric measurements are from this specific request versus that specific request. So not only do I know what was the... maybe I see the error rate go up on something because I have a metric for that. I could actually see the exact request that it comes back to, rather than using exemplars or other things that are correlated versus causal relationships. And the same thing on my logs. And so, that gives you this idea of, oh, now I have all of my data interconnected. That's the idea behind the software side of this.

 

Austin Parker (00:26:43):

And then the next thing, I think, is the idea of the tools around it. The tools are things like the OpenTelemetry Collector, and also the OpenTelemetry automatic instrumentation. The collector is like a Swiss Army knife. It lets you take data in a bunch of different formats and then translate it into the OpenTelemetry format, or take data [crosstalk 00:27:03]-

 

POP (00:27:03):

Versus having to inject it in your code to be able to get it, right?

 

Austin Parker (00:27:05):

Right.

 

POP (00:27:06):

Because that's the normal function, because if you think about... And I'm talking APM. We're not going to go into metrics on this whole thing, but APM functions to me, where you have to inject your code... You think about vendors that inject something in your code, or you had to push it out to some type of either Prometheus or StatsD or something like that. Versus, like you said, that collector function, and then that automatic instrumentation. APM in the past was just so involved. But again, talk to me about that wrinkle with OpenTelemetry, like with the collector. I want to understand that, because that's really cool stuff.

 

Austin Parker (00:27:48):

Yeah. So the collector and the automatic instrumentation, they don't necessarily work hand in hand, but they're really part of the same story. Right now... the APM part of this, the getting your code traced, that's what the automatic instrumentation does. And before, you'd have to use a proprietary agent for this, something that your vendor gave you. Now, what you can do is you can take a package and you can install it. It depends on the language. The great thing, I think, about OpenTelemetry is one of the decisions we made when we started the project: you should [respect 00:28:38] the... The word I'm looking for is... conventions. You should respect the conventions of your language. So your API should feel like a C# API. If you're writing Java, it should feel like a Java API. If you're using Go, it should feel like a Go API. We don't want one thing for everyone. We want stuff that feels natural for you.

 

Austin Parker (00:28:59):

So if you're using C# and you want automatic instrumentation, you have an ASP.NET app and you want it traced, cool. Right now you can go in, you can add some usings, you can add a little bit of code into your setup, your Startup.cs file, add it in through dependency injection. And now every request you make is traced. Every time your app calls to the database through Entity Framework, that's traced. You talk to Redis, that's traced.

 

Austin Parker (00:29:34):

For Java, we have a similar thing. Now, you can do it either by importing it... And this is the great thing, it's not the same way for everyone. For Java, it's like, okay, we can throw a Java agent in through the JVM invocation, you pass [-javaagent 00:29:48]. You download the jar, you put it into the class path through that, you give it some config through -D flags, and it works.
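For reference, the JVM invocation described here looks roughly like this; the jar path, service name, and endpoint are illustrative placeholders, and the flag names follow the OpenTelemetry Java agent's documented `-javaagent`/`-D` conventions:

```shell
# Attach the OpenTelemetry Java agent at startup; no code changes needed.
# The jar path, service name, and endpoint are placeholders.
java -javaagent:/path/to/opentelemetry-javaagent.jar \
     -Dotel.service.name=my-service \
     -Dotel.exporter.otlp.endpoint=http://localhost:4317 \
     -jar my-app.jar
```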

 

POP (00:30:00):

And there's a convention for it. That's the whole point of this-

 

Austin Parker (00:30:03):

Yeah, it's all standard.

 

POP (00:30:04):

Right. Once you go [crosstalk 00:30:07] and say, "Extend this," because if you want to, you can take it and grow it out. Is that ...

 

Austin Parker (00:30:12):

Yes, you can take it and you can ... It's all, everything I'm talking about is 100% open source. You can go look at the source code for this. You can make pull requests, you can write documentation. Please God help me with my documentation.

 

POP (00:30:24):

By the way, everyone, we'll have a link to OpenTelemetry.

 

Austin Parker (00:30:27):

Yeah, opentelemetry.io.

 

POP (00:30:31):

Opentelemetry.io, and we'll have that so people can contribute, because again... And hopefully this season I really want to impart this about contribution, because a lot of folks are taking... We could probably spend an hour on that right there. A lot of heart and soul from a lot of maintainers went into writing this, and you're taking advantage of it, but give back. Either write documentation, or come back with issues, or come back and write new specs and conventions and all that fun stuff.

 

Austin Parker (00:30:57):

Yeah.

 

POP (00:30:57):

So, I'm going to get off my soap box real quick.

 

Austin Parker (00:30:59):

Oh, you're cool man.

 

POP (00:31:00):

Yeah.

 

Austin Parker (00:31:02):

So yeah, the second part is, once you've... The other half of that, from the [automatic instrumentation 00:31:08], is the collector, because the collector can work like one of those host-based agents where you put it in your EC2 instance, or you run it as a sidecar, and it can gather those resources, things like: what pod am I in, what node am I on, what's my CPU, what's memory utilization like. And your automatic instrumentation sends that data to the collector. And this is the really cool part: the collector is also extensible, so there are exporters on this. You can send it to any number of companies. So not only can you send it to open source backends, like Jaeger or Zipkin for tracing or Prometheus for metrics, but you can send it to anyone that comes in and writes a plugin.

 

Austin Parker (00:32:02):

Anyone that comes in and writes a plugin, or you can send it out through the OpenTelemetry format, which a lot of people are starting to natively accept. So no more, "Oh, we went with big vendor A five years ago and everything is done in that format. We can't switch because it's too big." Now it can be, "Cool, we put OpenTelemetry everywhere and we are no longer locked into anything."
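A minimal sketch of the kind of collector pipeline described here, receiving data in the OpenTelemetry (OTLP) format and fanning it out to open source backends; component names and endpoints are illustrative and vary by collector version:

```yaml
# Illustrative OpenTelemetry Collector config: receive OTLP,
# send traces to Jaeger and expose metrics for Prometheus.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  jaeger:
    endpoint: jaeger:14250        # placeholder backend address
  prometheus:
    endpoint: 0.0.0.0:8889        # scrape endpoint the collector exposes

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```

Swapping backends then means editing this file, not re-instrumenting the application.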

 

Austin Parker (00:32:33):

I've seen, like in the past six months alone, I've seen two or three new open source projects or similar ones come out in this space. Things like Tempo from Grafana. And there was one I saw just the other day, I can't remember the name of it, but I saw another tracing-plus-metrics thing that's built on open source stuff like Prometheus and Grafana, and their whole thing is like, "Yeah, we accept OpenTelemetry. So if you can take your data and get it into the OpenTelemetry format, it'll go to all these different things and you no longer have to worry about, 'Is this going to be the best thing for me tomorrow?' You can just put it into OTel and sort of forget about it."

 

POP (00:33:17):

Got it. So riddle me this then: we have OpenTelemetry. Again, it's conventions, it's awesome, it's open source. Somebody could just kick the tires right now, get the instrumentation, the collector aspect. What is LightStep and what does it do for me? How does it fit into this mix? For anybody listening and watching.

 

Austin Parker (00:33:35):

So LightStep is, a little background on LightStep, our founders were actually at Google back in the day. They helped build Dapper, which, if you've heard of it, was really one of the first extremely large scale distributed tracing systems. And they also helped build Monarch, which is their time series metrics database. So the original idea of LightStep is, "Hey, we've got this technology called Dapper, and it's very cool, and it's actually extremely applicable." If you think all the way back to my 3:00 AM "Apprenda, what happened" calls, the hard part is that even six hours after an incident, or especially in the middle of an incident, you want to have a bunch of data at hand and you want an easy way to see what's changed, because things generally don't break for no reason. There's some sort of contributing factor or whatnot. So the question that LightStep is really trying to answer for people is: what has changed? What's changed in your system? And how we do this is we take metric and tracing data and logs, we feed all that into LightStep, and we provide a lot of features built on the confluence of these things.

 

POP (00:35:02):

Is that SaaS, or is it on-prem?

 

Austin Parker (00:35:08):

So it's a combination. Without getting too much into the weeds, there are three ways you can use LightStep. One is you can use it through our community tier, which is free. It has some rate limits on it, and you can try out all the stuff I'm about to talk about. The second is our teams tier, and that's for people that don't want to run their own infrastructure, people that are kind of growing, they're going into prod. They want something that's pay as you go and they don't want to be under contract. And then we have an enterprise option that has some options for moving parts of this on premise, but primarily it's a SaaS product. The actual backend is all SaaS.

 

Austin Parker (00:35:53):

The basic experience is this: you have your application. You're sending traces and you're sending metrics. You see something on a dashboard. You see squiggly line go up or squiggly line go down. You say, "That's interesting." So you click on that, and we pop up a little thing that says what's changed. You click on that. What LightStep is able to do for you is correlate and analyze 100% of the unsampled trace data that's coming from your application. We're able to look at all of that. We're able to look at these time series metrics. And we're able to not only look at what's wrong in the service that you clicked on, but we're also able to analyze the dependency graph of that service and drill down. The way I describe it is, maybe I'm a front end dev and I get paged because my checkout route is failing a lot. Back before all this, I could look at the logs. Maybe I could go into my browser and open up a web inspector and say, "Oh yeah, that's a 500. Or it's only a 500 sometimes."

 

Austin Parker (00:37:09):

But who knows where that's coming from? There could be 200 services in that request chain from that API all the way back to the database and back. And then think of all the other things that could confound this. Because if you're in this big, modern, fast paced dev environment, there could be feature flags. There could be people doing chaos experiments. There could be weird network flaps and other outages based on cloud weirdness, for lack of a better term. There's too much information for me as an individual dev to really understand every part of this chain and be able to figure out, "Well, is this a problem I need to worry about, or is this a problem someone else needs to worry about?"

 

Austin Parker (00:37:52):

Now, my boss might be right on my butt about this because I'm the one getting alerted. I'm the one being paged, so I need to be able to give him an answer. In LightStep, rather than not knowing or having to go dig through 20 different runbooks, you go to the graph. You see, "Ah, that's interesting." You click on it. Say, "What's changed?" We're able to look through the traces of that request and find interesting things that are happening, even if they're not at the service where you saw the error. So you saw the error on the front end, but maybe the actual problem is in a checkout service or an external API that's 10 or 15 hops away in the service graph. We can find that, identify it for you, and say, "Hey, we think this is contributing." A lot of people talk about AI and things like that, and I heavily disagree with the idea of calling this stuff AI. It's statistical analysis. I've looked at the code. It is advanced stats, but it's still just stats.

 

POP (00:38:57):

And that's the whole point, though. It's based on your experience. If you think about your founders. If you think about, these are folks that were looking at a ton of data from different huge systems and stuff like that.

 

Austin Parker (00:39:08):

Oh, yeah.

 

POP (00:39:08):

So that's just practical. Again, AI, or statistical stuff, is only as good as the practical experience put behind it. And so that's awesome, though, just being able to call out, "Hey, this is out of whack, and here's what we think the issue is based on all the practical experience we have." So that's phenomenal.

 

Austin Parker (00:39:31):

And we're going to give you all the options. The whole idea, and this goes back to what I was saying really early, is it's not just about the computer telling you there's a problem. It's about having this cycle of being able to ask these questions. Because you might find, maybe, that we're wrong. Who knows? We're going to show you something. We're going to tell you, "Yeah, this is certainly correlated," or, "Hey, there's this thing that pops up more frequently here than in other places." But that might be the proximate cause. That might be, "Oh, that's why this is going wrong, but why is that going wrong?" And so that starts you down another, "Okay, now I'm going to go look at this." And you can keep asking those arbitrary questions and getting those answers, and that's what LightStep is doing for you. It is a force multiplier. It is a, "Hey, I don't have to grep across 25 different pods. I don't have to go wake someone up. I don't have to go read Confluence for an hour to understand this chain of dependencies or whatever's going on here. I just start clicking around, looking at data, exploring that data, and learning for myself." And I think that's a better way to do it.

 

Austin Parker (00:40:44):

The reason I'm at LightStep, primarily, is because when I talked to someone here years ago, in 15 minutes they pitched this whole vision of what this technology could do. And I thought to myself, "I would be so much happier if, during all of those middle of the night or morning-after calls at Apprenda, when something broke and all we had was a bunch of graphs and a bunch of log files to go through, I had had this. Oh, it would have saved me so much time. I'd have been so much happier. It would have changed my life." And that's what I'm here to do now. I'm here to tell people about how great it can be when you do have these sorts of tools.

 

POP (00:41:29):

Let's move on now. Let's talk about Deserted Island DevOps. Okay, I'm going to give you some background. I'm pretty much family with Ian Coldwater. They are one of my really deep friends. And they were like, "I'm doing a talk." And so when I had them on the podcast, we talked about you. And the cloud native world is very small. It's a small world. And so I've always heard of you and you've heard of me. And it was always like, "Okay, let's talk." And I'm like, this is the coolest thing ever. So for everybody who's listening, this is the coolest thing. Tell me about the way you got the idea for this, for Animal Crossing as a conference mechanism, because it's amazing, dude. It's amazing.

 

Austin Parker (00:42:17):

I'll give you the abbreviated version. I could do a day on this alone. The abbreviated version is, this was last year, last March, end of February ... yeah, end of March. And it was kind of the first two weeks of the global "shit, this is going to be bad" lockdown. And I'm looking at my schedule and everything's being canceled. All my talks, all the conferences are getting canceled. Everyone's just stuck inside. And hey, kismet, Animal Crossing had come out and everyone was playing Animal Crossing. Why wouldn't you? It's a game where you get to virtually travel, and it's cute, and it keeps you from thinking about how awful the world outside is.

 

Austin Parker (00:43:11):

So at some point, there's an editor in there where you can make your own designs, and I made a LightStep logo and put it on something. And I was like, "What if you did a conference in Animal Crossing?" And like four or five people liked it on Twitter. And that was kind of all I really needed. That was like, "Huh, maybe this will work." So I put up a landing page for it on April 1st last year, as a, "Here's this thing coming, register here for more information." And my thought kind of was, "All right, this is a really silly idea, but it's technically feasible to have a conference in Animal Crossing and stream it out. And if I put this up and no one signs up for it, then, okay, we tried." Or then it's like, ah, this is a fun April Fools, right? Who would ever do something this stupid? And then a hundred people signed up the first day. And so I turned around, I go back, I'm like, "A hundred people signed up for this. What do we do? What do I do?" They're like, "Oh, I guess you have to do it now." And I'm like, "I guess I do." So that is kind of how it started. 30 days later ...

 

POP (00:44:32):

There was a high turnout for that.

 

Austin Parker (00:44:36):

14,000 people viewed, watched.

 

POP (00:44:40):

This is incredible. Here's why it's incredible. These days, again, we have a pandemic, and there was that Super Bowl commercial, "Oh, life gave you lemons," you know the one. But it was like, yeah, you turned the thing around. It's very similar to what I had to do with the podcast. It was like, "Look, I know I'm not going to get to go out there. I'm not going to see my friends. I'm not going to be able to do that." You've got to turn it around and be like, "I'm going to do something different. I'm not going to do the usual go-do-a-webinar, go-do-a-meetup thing." It's like, "Let's do something fun, cute, that's different from what's going on." It took so much bravery.

 

Austin Parker (00:45:18):

I feel like a lot of people started podcasts because of the pandemic.

 

POP (00:45:22):

But there's the entertainment aspect of it. The best talks or the best conferences, if you think about rejects or some of the best keynotes at [inaudible 00:45:33] cons, are always the ones that entertain beyond the technology and make people think. Think about what Ian presented at yours. It was great. So long story short, I was super like, "This guy's awesome." Props for doing Deserted Island DevOps. I thought it was amazing.

 

Austin Parker (00:45:54):

Wow. Yeah. The call for speakers is up for Deserted Island DevOps 2021. You can find that link off of the website. And Deserted Island DevOps 2021 will happen again on April 30th.

 

POP (00:46:14):

This is going to air in March, everyone. So we'll have a link to it.

 

Austin Parker (00:46:19):

The CFP might still be open by the time you listen to this. It closes the second or third week of March, so it'll either be about to close or it will have just closed, and you'll have missed your opportunity. But you should still come to Deserted Island DevOps 2021. It's absolutely free. There's no registration. A lot of it is really just me saying, "Well, if I wanted to go to a virtual event and not feel like I wasted my day, what would I do?" And that's kind of the motivating factor for a lot of the decisions about how it gets run. So yeah, it's going to be a different experience, but you can go back and watch what it was like last year on YouTube, and it's going to be pretty similar to that, I think.

 

POP (00:47:04):

We'll have a link to both last year and this year. So I want to talk about something, and we kind of glanced at this when we were preparing the interview. I'm a wrestling fan as well. So sometimes you've got to kind of hold it inside and stuff like that because ... I think [crosstalk 00:47:21]

 

Austin Parker (00:47:21):

I feel there's a lot more wrestling fans in this industry than you'd think, I think.

 

POP (00:47:26):

Oh really? Okay, cool.

 

Austin Parker (00:47:27):

I think so. I don't know. I think everyone keeps it quiet.

 

POP (00:47:31):

So I'm going to ask you this. There's a couple of things. In your memory, what's your earliest memory of wrestling? And I'll tell you mine.

 

Austin Parker (00:47:42):

Oh gosh, my earliest earliest is like the '80s.

 

POP (00:47:47):

And by the way, this isn't Greco Roman. This is World Wrestling Entertainment. This is pro, yeah.

 

Austin Parker (00:47:52):

WWE now. My absolute earliest is going to be like late eighties, because when I was little and it was Hulk, Warrior ...

 

Austin Parker (00:48:03):

... when I was little and it was Hulk, Warrior, Savage, people like that. It was that era.

 

POP (00:48:07):

Did you have a favorite wrestler growing up?

 

Austin Parker (00:48:13):

Probably Macho Man.

 

POP (00:48:16):

Let me tell you something. Let's talk about that. So WrestleMania III, it was the one with Andre, and my favorite match ever, I remember it to this day, and it impacted me, is Ricky the Dragon versus Macho Man. I'm kind of a historian. I read behind the scenes. They choreographed that. Macho Man was very big on choreographing every nuance of it.

 

Austin Parker (00:48:41):

Very detail oriented.

 

POP (00:48:43):

Yeah, and you wouldn't think it, because he's got this persona, whatever, but I mean, the guy just literally, to the [inaudible 00:48:48], and that is still one of the greatest matches I've ever seen by far.

 

Austin Parker (00:48:53):

I would say that's probably one of the best WrestleMania matches ever actually.

 

POP (00:48:56):

Yeah, without a doubt. What do you think of-

 

Austin Parker (00:49:01):

But when I was a kid, that was-

 

POP (00:49:03):

That was your favorite?

 

Austin Parker (00:49:04):

I think my favorite from when I was younger in general though, is I guess I was kind of basic, because I really liked Austin. I was a big Stone Cold mark back in the day.

 

POP (00:49:20):

Now we're using the lingo. We're using mark, kayfabe. We're going to throw [inaudible 00:49:23] out there.

 

Austin Parker (00:49:24):

Yeah. We're killing the business. I actually have a really strong talk that I want to give sometime, when we can do in-person events again. It's basically about how if you want to be a better [inaudible 00:49:43], go watch pro wrestling, because it's incredibly instructive on a lot of levels. I don't want to give the whole thing away, but seriously, everything I have learned, everything I feel like I need to know about this job, in a lot of cases, I got it through watching pro wrestling.

 

POP (00:50:06):

Well, there's a swagger. If you think about Ric Flair and stuff like that, he just walked down the aisle and he had that flair. And Austin, Stone Cold Austin, just literally was like, "Don't take it from anybody. Don't take any crap off anybody." It was almost reality-based wrestling, even though it obviously was fake.

 

Austin Parker (00:50:30):

Yeah. I mean, there was a lot. I would probably not recommend Attitude Era WWE to anyone. I mean, I don't know how much I'd recommend WWE to anyone these days.

 

POP (00:50:45):

Yeah, agreed.

 

Austin Parker (00:50:46):

You want something to watch, All Elite Wrestling-

 

POP (00:50:49):

Yeah, let's talk about that.

 

Austin Parker (00:50:51):

AEW is real, real good.

 

POP (00:50:55):

Here's what I love about it: the indie spirit. It's almost like an open source thing. Think about All In. It's very similar to what you did with Deserted Island DevOps. These guys and ladies got together and said, "You know what? We're doing all the indies. Let's go together and do this thing." They did it and they made a promotion out of it, and it's phenomenal. I mean, you had Kenny Omega, and these were guys who were in New Japan and all of that. But they were all [crosstalk 00:51:27]

 

Austin Parker (00:51:30):

They're incredibly talented individuals. Any one of these people individually could have gone to WWE and become one of the highest paid people in the company.

 

POP (00:51:37):

And they turned it all down because it's the art of it. It's the art of it and now they have their own place. I watched a lot of ECW growing up as well. In Queens, I used to go to the House of Hardcore over there and stuff like that. That time was amazing because it was revolutionary. And now what they're doing is revolutionary. Think about the Bullet Club and stuff like that. It was banding together and being like, okay, we're going to take this industry by storm. I love that.

 

Austin Parker (00:52:09):

It's great. I think one of the really interesting parts about what AEW is doing, and this is actually a really good analogy, I feel, if you want a business takeaway from this, or an anything takeaway. The really remarkable thing about AEW is that they've gone around and formed relationships with other promotions. Was it last week or the week before last? Well, okay, this is airing in March. They recently opened up a relationship with New Japan Pro-Wrestling, which is legitimately world-class; these people are huge not only in Japan, but worldwide. It's like, now we have a working relationship with that company too, so one of their guys is on our show, comes out of nowhere. There's a whole angle right now where you've got Kenny Omega reuniting with people he used to wrestle with in Japan that are on a completely different promotion now, Impact Wrestling-

 

POP (00:53:13):

And Impact, which is run by one of the people who was almost like his uncle growing up, which is [Don Callis 00:53:19].

 

Austin Parker (00:53:18):

Don Callis. A lot of this is being played as storyline. A lot of this is being played up as, this is a story [inaudible 00:53:28]. There are a lot of parts about AEW that are hard to recommend to people that don't know wrestling that well. And one of the biggest ones is that AEW is very smarky, which is to say they play to an audience of people that get wrestling as kind of this weird meta art form.

 

POP (00:53:48):

Smark means smart mark.

 

Austin Parker (00:53:49):

Smart mark.

 

POP (00:53:50):

Mark is somebody who's really into the business of wrestling. There'll be [crosstalk 00:53:55]

 

Austin Parker (00:53:55):

You're going to have so many of these captions and subtitles here. The takeaway is this, though: the big incumbent player here, WWE, is extremely unfriendly to this idea of working together. Their stars, their performers, their athletes, whatever you want to call them, their sports entertainers do not do things outside of that company. A lot of times they'll get fined for it, or they will lose opportunities, because it's like, oh, you were seen on someone else's thing. It's a very closed loop. They don't acknowledge the rest of the world.

 

Austin Parker (00:54:40):

And then over here, you've got kind of the smaller upstarts that are banding together and working together to promote each other. I think that that's one of the real takeaways I have is that you can think of the world as a zero-sum game or not. In reality, the world is very rarely zero-sum, even in business, especially in technology.

 

Austin Parker (00:55:05):

I think one of the biggest things I do at LightStep is I spend a lot of time talking to people that don't work at LightStep. You can make a huge impact by working with people outside of your circle, working with people outside your company, through things like open source, but also through doing a podcast together. Do a stream together, write some docs together, create information, create stuff that is bigger than just your walled garden, as it were. Do it for the community, because one, people notice, and two, you'll get better stuff that way. You get better quality if you work together rather than doing everything on your own.

 

POP (00:55:57):

It's same team, different company. You hear that adage all the time. I believe in it. I'm like, look, it is what it is. If I would've started my show and made it just about a product or whatever ... to me, again, what I think is so successful about Deserted Island DevOps as well is it's a way to connect people from the community in a different way from the traditional methods that you normally have.

 

POP (00:56:26):

What is it? We go to a [KubeCon 00:56:28] or we go to a Black Hat or whatever it might be, and then we break off. It's like, look, now we're all kind of isolated, sheltering, whatever. And as a mechanism, you're playing Animal Crossing already.

 

Austin Parker (00:56:46):

You might as well do something fun, productive, I don't know.

 

POP (00:56:50):

Right. And so, it's new ways of staying community-based, but not preaching and selling and all of that. To me, it was genius and dude, I applaud you because it was fantastic.

 

Austin Parker (00:57:02):

Thank you. None of it would have been possible without the extremely tireless efforts of my co-host Katie, @TheKaterTot on the Twitters. But yeah, Katie really made that show what it was, I think. I was just the guy with the stream deck punching buttons.

 

POP (00:57:23):

Shout out to you, Katie. Good, good friend of mine as well. All right. So last question for you. And then I know we could probably talk wrestling for another hour and a half, but we'll do that in person someday. That's a good angle. Maybe we got a different show. We can just do wrestling and DevOps.

 

Austin Parker (00:57:35):

Yeah, we'll do a wrestling podcast. There's not enough of those in the world.

 

POP (00:57:41):

So what work are you most proud of, Austin?

 

Austin Parker (00:57:48):

In my entire life? That's a tough one. When I was a kid, when I was a teenager, my dad forced me to stay in Boy Scouts because he thought it would be culturally good for me. I think your parents always do that when you hit that surly teenager stage. I hated Boy Scouts, I'll be quite honest. I did not enjoy my time there a lot, but there's one thing I did that I was very proud of. It's actually still there to this day.

 

Austin Parker (00:58:39):

Because I didn't have anything better to do, I decided to go for my Eagle Scout. And part of going for Eagle Scout is you have to do a service project. You have to do something that gives back to the community. I decided to put a walkway in at the church I went to at the time, the church I grew up in, because there was a path that was very old and disused.

 

Austin Parker (00:59:02):

And so, I went out and I came up with the plans, and I raised some money, and I procured the materials, organized some people. And we went out and we built us a walkway from the church to where the pastor lives. I don't know, there's a name for it. The little house next to the church, let's say. That is still there. I don't get home often, for a lot of reasons, but when I do, I've gone by, and it's still in one piece, and it's actually been maintained by the people that have come after me.

 

Austin Parker (00:59:42):

So I think overall in my life, that's probably the thing I'm most proud of. I mean, Deserted Island DevOps is great, but that's an idea anyone could have had. I just had to be the one [crosstalk 01:00:04]

 

POP (01:00:03):

Anybody could have had it, but you were the one who executed it well, dude. You shouldn't be so humble on that one, man. That was great.

 

Austin Parker (01:00:09):

I am humble about it. I'm proud of it, yes, but there's a difference in my mind. I've never told that story, the one about the walkway, because I'm proud of it even though it's not something that matters in some grand way. It matters a lot to people in the moment, but there's no greater moral to this. It's just, I saw a problem, I went and fixed it.

 

Austin Parker (01:00:43):

The things that I feel contribute the most to the world are people solving those kinds of problems. I think that if we spent more time on the lower-ego work of tending gardens, then we would have a society that was a much better place to live in. That got a little weird and preachy at the end, but-

 

POP (01:01:09):

It's all good.

 

Austin Parker (01:01:11):

Take it in the spirit it was meant.

 

POP (01:01:14):

No worries.

 

Austin Parker (01:01:14):

Do good things.

 

POP (01:01:15):

Austin, I appreciate you being on the show, man. This was great, and you always have a spot anytime you want, man. This was a great talk. Thank you so much.

 

Austin Parker (01:01:26):

All right, [inaudible 01:01:27] for having me on. Thank you.