According to Gartner, there's been a big shift in the way it sees the whole enterprise BI space. Can you walk us through the key observations from each year, and then let's talk about the trends and what's behind them?
Sure, of course. If you look back at the last three years of Gartner Magic Quadrants for Business Intelligence and Analytics Solutions, starting in 2014 on the left side, you see what I would refer to as a standard distribution for one of these Gartner charts. What that means is it's a linear distribution from the lower-left corner to the upper-right corner. You can draw a straight line right through the middle of those contenders, and they all fall roughly on either side of that line. It's a very linear distribution, and you've got players in all of the different quadrants, in varying numbers. You also have a significant number of players, over a dozen, in the upper half. The upper half is challengers and leaders, which is where you want to be.
Moving forward to 2015, everyone drops down. You have fewer players in the upper half and everyone's clustered around the center, the crosshairs of this chart if you will. You've got a significant move down and everyone's bunching together in the middle, so a very non-linear distribution. Then as you move ahead one more year to 2016, you have everybody essentially collapsing into the lower half of the chart. You have a completely empty challenger quadrant and only three contenders.
If you recall, we had twelve contenders in the upper half of the chart, the challenger and leader quadrants, in 2014. Just two years later that number is three, so it's dropped by a factor of four. Almost everyone is now clustered down in the lower half of the chart.
What that means is you've seen a steady and dramatic drop from the upper half to the lower half of the chart in just two years, from 2014 to 2016. What that demonstrates is a lack of innovation and leadership.
It's certainly not like parts falling off the car. Their products aren't deteriorating, so it's something else. It's a lack of innovation relative to the context. Can you talk a little bit about what businesses were doing in 2014 and what they're doing now?
What's driving this is not that their tools have changed much. They're actually very similar to what they were two years ago. It's that the data they're trying to access and analyze has gotten significantly more complicated, much more complex. You're seeing a proliferation of new kinds of data stores, NoSQL data stores, and even cloud APIs. Cloud APIs are becoming the new databases, if you will. You've got over 200,000 APIs on the web today, and behind each of those APIs are massive amounts of data, in many cases data that people want to access.
Again, what's changed from 2014 to 2016 is that the complexity and variety of data that enterprises need to be able to access efficiently has gone way up.
The tools are operating in essentially the same mode they were before that happened, and that's what's causing the drop in satisfaction with, and perceived capability of, these tools.
In 2017 what are we going to see? What are the innovators that are going to occupy that top half going to be doing?
I think what the market needs right now is a new approach to solving these problems. The existing approaches, based on last-generation technology, obviously aren't working, which is why we're seeing this drop in satisfaction with the existing vendors. What the market needs is an approach that can be easily adapted to modern data sources. I view data sources and data models as an N-plus-one problem: we've added a bunch of new ways to store and consume data in the last few years, and we'll continue to do that over the next ten, twenty, thirty years. In terms of data complexity and volume, the problem will only get worse, not better.
What's required is a different, more agile approach, one that allows a solution like SlamData to connect to any source of data quickly, regardless of what that source is or where it exists, whether it's a new database, a new data model, or a cloud API.
I think architecturally we're seeing a shift in how people think about accessing data: from the first generation, which is largely what you see in the existing Magic Quadrants and was based around a fairly narrowly defined relational data model, to something that's much more agile and flexible. I would argue that that's the age of the data compiler, or the analytic compiler, that we're promoting.
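To make the N-plus-one idea concrete, here is a minimal sketch of the architectural shift being described: each new kind of store plugs in behind one common record interface, while the query logic lives in the engine rather than in any connector. All names here (`DataSource`, `records`, `select`) are hypothetical illustrations, not SlamData's actual API.

```python
import json
from abc import ABC, abstractmethod

# Hypothetical uniform "data source" abstraction: adding an (N+1)-th
# kind of store means writing one new connector, not a new engine.

class DataSource(ABC):
    @abstractmethod
    def records(self):
        """Yield rows as plain dicts, whatever the backing store is."""

class InMemorySource(DataSource):
    """Connector for data already in memory (e.g. an API response)."""
    def __init__(self, rows):
        self.rows = rows

    def records(self):
        yield from self.rows

class JsonLinesSource(DataSource):
    """Connector for a newline-delimited JSON file on disk."""
    def __init__(self, path):
        self.path = path

    def records(self):
        with open(self.path) as f:
            for line in f:
                yield json.loads(line)

def select(source, predicate):
    # One generic operator that works over any source: the engine,
    # not the connector, owns the query logic.
    return [row for row in source.records() if predicate(row)]

orders = InMemorySource([{"id": 1, "total": 40}, {"id": 2, "total": 120}])
big_orders = select(orders, lambda r: r["total"] > 100)
```

Under this sketch, supporting a new cloud API or NoSQL store only requires a new `DataSource` subclass; every existing operator like `select` works unchanged.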