Enterprise Business Intelligence Is Failing. And It’s Going to Get Worse

Jeff Carr, CEO & Co-Founder

Enterprise Business Intelligence solutions are failing, and the reasons are very obvious. Leading analyst firms including Gartner and G2 have published rankings that show virtually no leadership and scant challengers in the market. One of the most glaring examples is Domo, a fairly recent entrant that has raised an eye-popping $600M in venture capital yet can't escape the "niche" vendor quadrant for either firm mentioned above. Niche is a nice way of saying loser on these analyst charts.

Why? Simple: a total lack of innovation. Vendors throw armies of bodies at projects and rely on a basket of legacy technologies combined with some newer open source tools. There's not much real innovation in this approach, just smoke and mirrors.

New DB Tech + Legacy BI Tools = Analytics Purgatory

Over the last 6-8 years CIOs have seen their data explode in size and variety. New databases are adopted daily for specific use cases, and the age of "Polyglot Persistence" has fully arrived. The age of a company standardizing on a single database for all its data needs has vanished.


What has caused this change? Companies are being forced to develop and scale new types of applications not even considered just a decade ago. Online capabilities of every kind within a company, plus the ability to extract and use data from dozens of APIs and other sources, have contributed to the problem of data chaos in the enterprise.

Unfortunately, enterprise BI tools simply have not kept up with these rapid changes in data volume and complexity, in spite of the marketing hype. BI vendors routinely claim support for every imaginable kind of data, but this is flatly false. The reason analyst firms have been reluctant to rate BI vendors highly is precisely because they spend hours interviewing CIOs, and a common theme quickly emerges: CIOs love BI tool X, but it only works well with their legacy relational (RDBMS) data sources. As soon as they need support for Hadoop, NoSQL, or API data, it becomes a giant project to extract data and build ETL pipelines, or weeks to months of "data prep," before they can even consider using their current BI tools, and the results are often poor quality and out of date.


And all this added ETL and data prep increases costs and decreases analytics agility tremendously.

If You Have Lemons, Sell Lemonade (Or, Why Existing BI Vendors Haven't Stepped Up)

Simply put, Enterprise BI is failing due to a lack of innovation. Existing vendors have massive investments in their current architectures, and changing them to meet the needs of rapidly changing data is hard and expensive, so they do the opposite. They force users to make ALL their data fit their tools.

The core issue for legacy BI vendors is that their approach to data is 40 years old. Each solution relies on some version of an analytic "engine" to perform analysis. This engine expects data to be completely tabular and homogeneous with a fixed schema, since that was the standard RDBMS data model for decades. Unfortunately, modern data sources support "schemaless" and highly heterogeneous data models. Practically speaking, imagine a table in an RDBMS where every row differs from the one before it and the one after it, and the table schema constantly changes. Existing tools have zero chance of doing anything useful with this data! Adding support for a single new data format (like JSON) to an existing BI tool's data engine can easily take 12-18 months of work. And if the last 10 years are any indication, new data formats will keep coming, so vendors will continue to struggle to keep up.
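To make the mismatch concrete, here is a minimal sketch (the collection and field names are hypothetical) of the kind of heterogeneous documents a NoSQL store happily accepts, and what happens when a fixed-schema engine tries to force them into one flat table:

```python
# Three documents from the same (hypothetical) "events" collection.
# Each record has a different shape: fields appear, disappear,
# change type, and nest arbitrarily -- all legal in a document store.
events = [
    {"user": "alice", "action": "login", "ts": 1493000000},
    {"user": "bob", "action": "purchase",
     "items": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}]},
    {"user": {"id": 42, "name": "carol"}, "action": "login",
     "geo": {"lat": 40.0, "lon": -105.3}},
]

# A fixed-schema engine must first agree on one set of columns:
# the union of every top-level key seen across the collection.
columns = sorted({key for doc in events for key in doc})
print(columns)  # ['action', 'geo', 'items', 'ts', 'user']

# Flattening forces NULLs for missing fields, and nested values
# (lists, sub-documents) become opaque blobs the engine cannot
# aggregate over. Note that "user" is a string in one row and a
# sub-document in another -- type heterogeneity a tabular engine
# has no answer for.
rows = [[doc.get(col) for col in columns] for doc in events]
for row in rows:
    print(row)
```

The point of the sketch: before the engine can run a single query, someone has to decide a global schema, and every schema change in the source breaks that decision again.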

So we have created dozens of tools to "fix" the data: ETL tools, data prep tools, data wrangling tools, you name it. CIOs spend massive amounts of money on new tools to "fix" their data and still get mediocre results from incumbent BI solutions. It's not hard to understand why they don't give glowing reviews to the analyst firms.

On Thinking Like A Beginner: How Waking Up to Polyglot Could Only Mean One Thing

The most difficult aspect of real innovation is thinking outside the box. It means determining an entirely new approach to solving a problem: not just tweaking at the margins of existing solutions, but truly rethinking both the problem and the solution.

This is exactly the approach needed to reinvent enterprise BI for the next 40 years.

The Age of the Analytic Compiler

The days of the analytic "engine" described previously are numbered. We are entering the age of the analytic compiler. What is an analytic compiler? Simply put, it "compiles" queries to any structure of data and executes them where the data lives. In the age of polyglot persistence and enterprise big data, the data-engine approach just can't keep up. The constant extracting and reformatting of data to fit an analytic "engine" falls apart because it's too slow and sacrifices analytic fidelity.

New formats of data will keep coming, and we don't know what the future of data will look like, so the compiler approach is the only one that solves this problem. Unlike the data engine, which requires massive changes to support any new data source, the compiler can adapt queries to any data with a small amount of effort. Add visualization and reporting features on top of the compiler approach and you have a BI solution that works for any data and any user. No ETL machines, no data prep required: just connect and go.
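A toy sketch can illustrate the difference between the two approaches (this is an illustration of the idea, not SlamData's actual compiler; the helper names are hypothetical). The engine approach drags every record out of the database and filters locally; the compiler approach translates the query into the store's native dialect (here, a MongoDB-style aggregation pipeline) and ships it to where the data lives:

```python
# Engine approach: pull every document over the wire, then
# filter and reshape locally inside the BI tool.
def engine_query(fetch_all, min_qty):
    docs = fetch_all()  # full extract -- slow, and schema must be fixed
    return [d for d in docs if d.get("qty", 0) >= min_qty]

# Compiler approach: translate the same logical query into the
# database's native form and let the database execute it in place.
def compile_to_pipeline(min_qty):
    return [
        {"$match": {"qty": {"$gte": min_qty}}},  # pushed-down filter
        {"$project": {"sku": 1, "qty": 1}},      # pushed-down projection
    ]

sample = [{"sku": "A1", "qty": 5}, {"sku": "B7", "qty": 1}]
print(engine_query(lambda: sample, 3))  # filtered client-side
print(compile_to_pipeline(3))           # shipped to the database
```

Supporting a new data source under the compiler model means writing a new query translator, not rebuilding the analytic core, which is why new sources can land in weeks rather than a year.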

The Future’s So Bright I Gotta Wear Shades

Lots of folks at this point are thinking, "How can this be right? Why hasn't somebody already done this?" Two reasons: first, innovation in the face of decades of entrenched technology is hard, and second, this approach is technically difficult to build, even starting without the shackles of decades of legacy technology. And it's virtually impossible to build if you need to graft it onto an existing solution supporting thousands of users.

I'm sure I'll hear from lots of vendors claiming they have a better approach, or that their legacy BI tool works "great" with modern complex data. This is simply not the case, and enterprise CIOs know it. Solutions that always require users to extract data from the original source, and that require the data to be completely flat and homogeneous, cannot succeed in the ever-changing face of modern data. Existing vendors need 12+ months to add first-class support for any new data model; the analytic-compiler approach can add support in a matter of weeks, for any data model or source.

Change is hard, and innovation is harder. The “Data Engine” architectures of the last 30 years won’t disappear overnight. The compiler approach will need to mature like any radical departure from entrenched technology, but it’s the only solution to an ever more complicated world of data.

Enterprise BI is failing day by day, and companies that cling to the antiquated approach are digging themselves deeper and deeper. If only they'd put down the shovels for a minute….
