Enterprise Business Intelligence Is Failing. And It’s Going to Get Worse
Enterprise Business Intelligence solutions are failing, and the reasons are obvious. Leading analyst firms including Gartner and G2 have published rankings that show virtually no leaders and scant challengers in the market. One of the most glaring examples is Domo, a fairly recent entrant that has raised an eye-popping $600M in venture capital yet can't escape the "niche" vendor quadrant at either firm. On these analyst charts, "niche" is a polite way of saying "loser."
Why? Simple: a total lack of innovation.
They just throw armies of bodies at projects and rely on a basket of legacy technologies combined with some newer open source tools. That isn't real innovation; it's smoke and mirrors.
New DB Tech + Legacy BI Tools = Analytics Purgatory
Over the last six to eight years, CIOs have seen their data explode in size and variety. New databases are adopted daily for specific use cases, and the age of "polyglot persistence" has fully arrived. The days of a company standardizing on a single database for all its data needs are gone.
What caused this change? Companies are being forced to develop and scale new types of applications that weren't even considered a decade ago. Online capabilities of every kind within a company, and the ability to extract and use data from dozens of APIs and other sources, have contributed to the data chaos in the enterprise.
Unfortunately, enterprise BI tools simply have not kept up with these rapid changes in data volume and complexity, in spite of the marketing hype. BI vendors routinely claim support for every imaginable kind of data, but this is flat-out false. Analyst firms have been reluctant to rate BI vendors highly precisely because they spend hours interviewing CIOs, and a common theme quickly emerges: CIOs love BI tool XX, but it only really works well with their legacy data sources (RDBMS). As soon as they need support for Hadoop, NoSQL, or API data, it becomes a giant project: extracting data, building ETL machinery, and spending weeks to months on "data prep" before the current BI tools can even be considered, and the results are often poor quality and out of date.
All this added ETL and data prep increases costs and dramatically decreases analytics agility. That is not what CIOs want or need, and it is the reason for the so-so ratings current BI tools receive from analyst firms and enterprise CIOs.
If You Have Lemons, Sell Lemonade (Or, Why Existing BI Vendors Haven’t Stepped Up)
Simply put, enterprise BI is failing due to a lack of innovation. Existing vendors have massive investments in their current architectures, and changing them to meet the needs of rapidly changing data is hard and expensive, so they do the opposite: they force users to make ALL their data fit their tools.
The core issue for legacy BI vendors is that their approach to data is 40 years old. Each solution relies on some version of an analytic "engine" to perform analysis. This engine expects data to be completely tabular and homogeneous with a fixed schema, since that was the standard RDBMS data model for decades. Unfortunately, modern data sources support "schemaless" and highly heterogeneous data models. Practically speaking, imagine a table in an RDBMS where every row differs from the one before it and the one after it, and the table schema is constantly changing. Existing tools have zero chance of doing anything useful with this data! Adding support for a single new data format (like JSON) to an existing BI tool's data engine can easily take 12-18 months of work. And if the last 10 years are any indication, new data formats will keep coming, so vendors will continue to struggle to keep up.
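To make the mismatch concrete, here is a small, hypothetical sketch (the records and field names are invented for illustration) of what happens when schemaless documents are forced onto a fixed, union-of-all-columns schema: every record gets padded with NULLs, and no column keeps a consistent type.

```python
# Three records from the same "collection", each with a different shape.
# Fields appear, disappear, and even change type from record to record --
# exactly the situation a fixed-schema analytic engine cannot handle.
records = [
    {"user": "ana", "age": 34, "tags": ["admin"]},
    {"user": "bo", "address": {"city": "Denver", "zip": "80302"}},
    {"user": "cy", "age": "thirty-four", "last_login": "2016-10-31"},
]

# Flattening these into one relational table forces a union of all columns.
all_columns = sorted({key for rec in records for key in rec})

def to_row(rec):
    """Project a schemaless record onto the union-of-columns schema,
    filling missing fields with None (i.e., SQL NULL)."""
    return tuple(rec.get(col) for col in all_columns)

rows = [to_row(r) for r in records]
```

Note that the "age" column ends up holding an integer in one row, NULL in another, and a string in a third, so even after this lossy flattening, a traditional engine still cannot type the column.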
So we have created dozens of tools to "fix" the data: ETL tools, data prep tools, data wrangling tools, you name it. CIOs spend massive amounts of money on these tools and still get mediocre results from incumbent BI solutions. It is not hard to understand why they don't give glowing reviews to the analyst firms.
On Thinking Like A Beginner: How Waking Up to Polyglot Could Only Mean One Thing
The most difficult aspect of real innovation is thinking outside the box: determining an entirely new approach to solving a problem, not just tweaking at the margins of existing solutions, but truly rethinking both the problem and the solution.
This is exactly the approach needed to reinvent enterprise BI for the next 40 years.
The Age of the Analytic Compiler
The days of the analytic "engine" described above are numbered; we are entering the age of the analytic compiler. What is an analytic compiler? Simply put, it "compiles" queries to any structure of data and executes them where the data lives. In the age of polyglot persistence and enterprise big data, the data-engine approach just can't keep up: the constant extracting and reformatting of data to fit an analytic "engine" is too slow and loses analytic fidelity.
New formats of data will keep coming, and we don't know what the future of data will look like, so the compiler approach is the only one that solves this problem. Unlike the data engine, which requires massive changes to support any new data source, the compiler can adapt queries to new data with a small amount of effort. Add visualization and reporting on top of the compiler approach and you have a BI solution that works for any data and any user: no ETL machinery, no data prep required, just connect and go.
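As a toy illustration of the idea (this is not SlamData's actual implementation, and the backend names and query shapes are simplified assumptions), an analytic compiler takes one logical query and translates it into the native query language of whichever store holds the data, so the query runs where the data lives instead of being pulled into a central engine:

```python
# Toy "analytic compiler": one logical query, many native targets.
# Supporting a new data source means adding one compilation target,
# not rebuilding a central data engine.
def compile_query(query, backend):
    """Translate a logical filter-and-count query into a backend's
    native form. `query` is {"filter": (field, value)}."""
    field, value = query["filter"]
    if backend == "sql":
        # Relational target: emit a SQL statement.
        return f"SELECT COUNT(*) FROM events WHERE {field} = '{value}'"
    if backend == "mongodb":
        # Document target: emit an aggregation pipeline.
        return [{"$match": {field: value}}, {"$count": "n"}]
    raise ValueError(f"no compiler target for backend: {backend}")

logical_query = {"filter": ("status", "error")}
sql_plan = compile_query(logical_query, "sql")
mongo_plan = compile_query(logical_query, "mongodb")
```

The design point is that the logical query never changes; each backend gets its own small translation, which is why adding a new data source is weeks of work rather than a rewrite of the analysis layer.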
The Future’s So Bright I Gotta Wear Shades
A lot of readers are probably thinking, "How can this be right? Why hasn't somebody already done this?" Two reasons. First, innovation in the face of decades of entrenched technology is hard. Second, this approach is technically difficult to build even when starting without the shackles of legacy technology, and it's virtually impossible to graft onto an existing solution supporting thousands of users.
I'm sure I'll hear from lots of vendors claiming they have a better approach, or that their legacy BI tools work "great" with modern complex data. This is simply not the case, and enterprise CIOs know it. Solutions that require users to extract data from the original source in every case, and that require the data to be completely flat and homogeneous, cannot succeed against the ever-changing face of modern data. Existing vendors need 12+ months to add first-class support for a new data model; the analytic compiler approach can add support in a matter of weeks, for any data model or source.
Change is hard, and innovation is harder. The "data engine" architectures of the last 30 years won't disappear overnight, and the compiler approach will need to mature like any radical departure from entrenched technology, but it's the only solution to an ever more complicated world of data.
Enterprise BI is failing day by day, and companies that cling to the antiquated approach are digging themselves deeper and deeper. If only they'd put down the shovels for a minute….
Jeff Carr, November 1, 2016