Observe CEO Jeremy Burton: Still ‘Early Days’ With The Channel, But The ‘Value-Add For The Partner Is Very Clear’

‘I think once you’ve got that repeatable go-to-market, which I actually think we’re pretty close to, then I think you’re ready to look at partners. And partners are also going to want to know that we’ve got market fit with the product, and we’ve got velocity,’ Observe CEO Jeremy Burton tells CRN.

What Observe Does

Observe, a San Mateo, Calif.-based developer of SaaS-based technology for turning machine-generated data from distributed sources into information that enterprises can use to detect and resolve system issues, just closed its Series A-2 funding round, adding $70 million to bring that round to a total of $114.5 million. The company, according to Jeremy Burton, who joined as CEO in 2018 after serving as a top marketing executive at Symantec Veritas and at Dell EMC, is what one would get by putting Splunk or Elastic, Datadog, and an AppDynamics or a New Relic in a blender.

“There’s no reason why you should have to have three discrete products to troubleshoot problems in your applications or infrastructure,” Burton told CRN. “And today, most companies, particularly large companies, have at least three.”

Observe was founded in late 2017 by Sutter Hill Ventures, which was also an early investor in Snowflake, the Bozeman, Mont.-based cloud data warehousing company that serves as the back-end warehouse for data collected by Observe. The company’s first customer signed on 18 months ago, and Observe currently has about 50 enterprise customers, Burton said. Other investors include Capital One Ventures, Madrona Ventures, Michael Dell, former Pure Storage CEO Scott Dietzen and Snowflake CEO Frank Slootman.

[Related: Observe Exits Stealth; Targets Splunk, Datadog For Observability: CEO Jeremy Burton]

Observe initially went direct to find its first customers, but its business is well suited to enterprise solution providers looking to help their customers make sense of all the data they collect, Burton said.

“Once you get Observe hooked up to the various different data pipelines, someone’s got to come into the account, sit down with the customer, and ask them, ‘What questions do you want to ask about your application infrastructure?’” he said. “And there’s some work to do on the data in order to answer those questions. And for me, that’s perfect for a partner to provide.”

Here is more of what Burton had to say in an interview with CRN.

What exactly is Observe? What is it that you do?

Maybe the simple nontechnical answer is, if you were to put something like Splunk or Elastic with Datadog in a blender with an AppDynamics or a New Relic, that’s the product. Our belief is that these discrete products for analyzing logs or monitoring or APM [application performance management], they’re all going to collapse into this new segment called observability. There’s no reason why you should have to have three discrete products to troubleshoot problems in your applications or infrastructure. And today, most companies, particularly large companies, have at least three. And those three products tend to vary by infrastructure team or application team. I think the 451 Group did a report on this. It’s not uncommon for an enterprise to have eight or nine different tools. And the problem with that is, the data is fragmented. And when data is fragmented, it’s hard to see what’s going on. You need people to look at this tool, and then look at this one, and be like, ‘Oh, I see that over here, and so now let me look over here and, OK.’ What we’ve found is, if you put the data in one place, it’s much easier to figure out what’s going on. So our big idea at Observe was, take all of the event data, put it in one big database, in this case a Snowflake database, and once the data is in one place, it’s much easier to ask questions about what’s going on. And because the data is in one place, it’s easy to relate it.
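To make the ‘one database’ idea concrete, here is a minimal sketch of what asking a question across consolidated event data could look like. It is not Observe’s actual schema or API; the events table, its columns and the connection details are hypothetical, and the point is simply that log and metric events sitting side by side in a single Snowflake database can be related in one query instead of across three tools.

```python
# Hypothetical illustration: correlating logs and metrics that share one event store.
# The schema (an "events" table and its columns) is invented for this example;
# Observe's real data model is not described in the interview.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    user="REPORTING_USER",       # placeholder credentials
    password="...",
    account="my_account",
    warehouse="ANALYTICS_WH",
    database="OBSERVABILITY",
    schema="PUBLIC",
)

# One query over one store: error logs joined to CPU metrics for the same
# container in the same minute.
sql = """
SELECT l.container_id,
       DATE_TRUNC('minute', l.ts) AS minute,
       COUNT(*)                   AS error_count,
       AVG(m.value)               AS avg_cpu
FROM   events l
JOIN   events m
  ON   m.container_id = l.container_id
 AND   DATE_TRUNC('minute', m.ts) = DATE_TRUNC('minute', l.ts)
WHERE  l.kind = 'log'    AND l.severity = 'ERROR'
  AND  m.kind = 'metric' AND m.name = 'cpu_pct'
GROUP  BY 1, 2
ORDER  BY error_count DESC
LIMIT  20;
"""

for row in conn.cursor().execute(sql):
    print(row)
conn.close()
```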

Nobody else is doing that?

Nobody else is doing that. What you tend to find is that the bigger companies like Splunk, for example, started off with logs. And then they bought SignalFX to do metrics. And they’ve made a string of acquisitions to try and build out all of the functionality. And Datadog is doing the same thing. They’re trying to build a suite of products which allow you to look at the different kinds of data. And so their mantra is very much, ‘Well, you can buy those three products from us instead of getting it from three separate vendors.’

Our approach is much more like ‘No, no, you don’t need three products. You need one.’ Because if you’ve got three products, let me guess: you’re going to have three users. And someone is still going to have to correlate what they see to figure out what’s really happening. And you can see this time and time again in tech. At one point, we had a phone, and we had a web browser, and we had a music device. And then magically, now we have an iPhone, and it does it all. And we don’t even think about it.

I think the same thing is happening in the world of observability. You know, you don’t need an APM product, a logging product and a monitoring product. You need an observability product. And our advantage, if you like, is that we started later, in 2018. Splunk started in 2003. Datadog started in 2010. Because we started later, we get to take advantage of newer things like Snowflake, which has a new architecture that makes all these things possible that really weren’t possible back in 2010 or 2003.

Does the company work with solution providers, or does it have a big direct sales focus?

We’re doing early work with solution providers. ... The first move with the solution provider tends to be to get the technology into the hands of their technical folks to evaluate it. But at least in the first year, most of the go-to-market really was direct-sales-led, which I think it has to be because you’ve got to figure out a repeatable go-to-market: Who do we sell to? Which size customers? What’s the sweet spot? I think once you’ve got that repeatable go-to-market, which I actually think we’re pretty close to, then I think you’re ready to look at partners. And partners are also going to want to know that we’ve got market fit with the product, and we’ve got velocity. They’re going to make an investment to get up to speed on our product. And so they want to make sure if they’re going to make that investment, [they are asking], ‘Is there a real business there?’ And I think that’s for us to prove really before we go big on the channel. So it’s very early days. It’s in the hands of some technical folks. But as we mature the business and establish those routes to market and the market fit, I think it’s a natural place for us to go because the value-add for the partner in what we’re doing is very clear. Once you get Observe hooked up to the various different data pipelines, someone’s got to come into the account, sit down with the customer, and ask them, ‘What questions do you want to ask about your application infrastructure?’ And there’s some work to do on the data in order to answer those questions. And for me, that’s perfect for a partner to provide.

Do you expect the primary business in the future will be through indirect channels? Or do you expect direct channels to be the primary driver going forward?

I think it’s going to be a mix. Clearly, we’re going to have a small sales team, so purely from a reach standpoint, expanding that reach through partners makes sense. And I do think the partners are going to be value-added partners, because there is always going to be good repeatable business for the partner as the customer brings on more applications and wants insight into more of them. There’s a little bit of work to be done to shape the data to answer the questions that they want answered. And the nice thing is, for infrastructure we should just be able to troubleshoot that out of the box because we know the format of the data in advance. But an application is specific to a customer. So there’s some work to understand the structure of the customer’s event data and then transform that data into something meaningful to the customer so they can answer the questions they have about their application. And for me at least, that’s a great repeatable business.
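As an illustration of that shaping work, here is a hypothetical sketch of turning a customer’s raw application events into structured fields they can ask questions against. The event format and field names are invented for this example, not taken from any real customer or from Observe.

```python
# Hypothetical example of shaping raw, customer-specific event data into
# structured fields. The input format and field names are invented for
# illustration; every customer's application events look different.
import json
from datetime import datetime, timezone

raw_events = [
    '{"ts": 1700000000, "msg": "checkout failed", "ctx": {"user": "u42", "cart_total": 99.5}}',
    '{"ts": 1700000005, "msg": "checkout ok",     "ctx": {"user": "u43", "cart_total": 12.0}}',
]

def shape(line: str) -> dict:
    """Extract the fields the customer actually wants to query on."""
    event = json.loads(line)
    return {
        "time":       datetime.fromtimestamp(event["ts"], tz=timezone.utc).isoformat(),
        "user_id":    event["ctx"]["user"],
        "cart_total": event["ctx"]["cart_total"],
        "failed":     "failed" in event["msg"],
    }

shaped = [shape(line) for line in raw_events]
failure_rate = sum(e["failed"] for e in shaped) / len(shaped)
print(f"checkout failure rate: {failure_rate:.0%}")  # 50% for this toy sample
```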

Has Observe put in place any training or certification programs yet?

We haven’t got a full certification program yet. But one of the things that we are announcing is our first sort of Observability 101 certification. So your question is right on the money because I think observability is very confusing for people. I think it’s difficult for them to really understand what it is. I think it’s difficult for them to understand the journey to observability. And so I feel some kind of certification is going to be critical. The first course is introductory, to try and educate people on the basic concepts and how to think about observability differently. … It’s a free course. There is a certification at the end of it, but the way I look at it, that’s the first course of many that we’ll introduce.

Observe’s technology provides data. Do you have plans to add automated actions based on that data?

Some folks that we’re working with do this today. And the way that works is, there’d be some insight that Observe would discover. And then we would fire off an event or an alert to another product like PagerDuty or something like that. And then they catch the events. And then the customer would configure that product to go and take the automated action. So we’re not planning on building out that capability within Observe right now. We’re staying very focused on the analysis of the data, and maybe discovering insight that people have never seen before. And the automated action for us would be a partner play. And the nice thing is, it’s pretty easy through webhooks and things like that to connect to partner products like Slack or PagerDuty or whatever, and folks could automate their actions from that.
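For illustration, here is a minimal sketch of the kind of webhook hand-off Burton describes: an insight or alert gets posted to an incoming webhook, and the receiving product (Slack, PagerDuty, an automation tool) decides what automated action to take. The URL and payload shape are placeholders, not Observe’s or any vendor’s actual integration.

```python
# Minimal, hypothetical sketch of forwarding an alert over a webhook so a
# downstream product can act on it. The URL and payload are placeholders.
import requests  # pip install requests

WEBHOOK_URL = "https://hooks.example.com/services/PLACEHOLDER"

alert = {
    "text": "High error rate on checkout-service: 4.2% over the last 5 minutes",
    "severity": "critical",
    "source": "observability-pipeline",
}

resp = requests.post(WEBHOOK_URL, json=alert, timeout=10)
resp.raise_for_status()  # the receiving system decides what automated action to run
```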

One of your investors is Capital One Ventures. What is it doing with Observe?

They were early investors in Snowflake, and they were very good at shepherding Snowflake when it was much earlier in its life. Back in 2015, it wasn’t anywhere near as good as it is today. But they did a really good job of shepherding the product to the right parts of the bank to allow it to be successful. And I think today Capital One is one of Snowflake’s biggest customers. And so we are running a [proof of concept] at the bank, and Capital One really is our first enterprise customer.

One of their goals is to use Observe within the bank. It’s still early days. But they were enamored with our ability to relate and correlate their data, which you can imagine matters in a large enterprise: they’ve got lots of data silos, and they’ve got to try and bring those together to figure out what’s going on. So the big news is the Capital One investment, and yes, hopefully they can be as big a customer for Observe as they are for Snowflake.

How important is Snowflake to Observe, not just from the investment point of view but in terms of technology?

It’s very important. We use Snowflake as the data store for all of the event data that we collect. We ingest about 40 terabytes a day, and have about 10 petabytes of data under management. We execute 2.5 million queries a day, which actually is just over 1 percent of Snowflake’s daily query volume. So it’s big. And without the scale and the reliability of that platform, I don’t think we could build Observe. And we certainly wouldn’t have the differentiation of being able to put all the data in one place. They do structured data, and they do these joins of million-row tables very quickly. When you think about observability, lots of data, and providing context for folks, in database terms you’re performing a giant join. And being able to do those joins at scale is a game-changer in itself. And maybe we fight like brothers at times, but it is brotherly love and brotherly fighting. And we’re always challenging them to do more for us. And they’ve been very good partners so far.