Enterprise Business Intelligence Is Failing. And It’s Going to Get Worse

Jeff Carr, CEO & Co-Founder

Enterprise business intelligence solutions are failing, and the reasons are obvious. Leading analyst firms, including Gartner and G2, have published rankings that show virtually no leadership and scant challengers in the market. One of the most glaring examples is Domo, a fairly recent entrant that has raised an eye-popping $600M in venture capital yet can’t escape the “niche” vendor quadrant for either firm mentioned above. On these analyst charts, “niche” is a polite way of saying “loser.”

Why? Simple: a total lack of innovation. They just throw armies of bodies at projects and rely on a basket of legacy technologies combined with some newer open-source tools. There is not much real innovation in this approach, just smoke and mirrors.

New DB Tech + Legacy BI Tools = Analytics Purgatory

Over the last 6–8 years, CIOs have seen their data explode in size and variety. New databases are adopted daily for specific use cases, and the age of “polyglot persistence” has fully arrived. The days of a company standardizing on a single database for all its data needs have vanished.


What has caused this change? Companies are being forced to develop and scale new types of applications that weren’t even considered a decade ago. Online capabilities of every kind within a company, and the ability to extract and use data from dozens of APIs and other sources, have contributed to the problem of data chaos in the enterprise.

Unfortunately, enterprise BI tools have simply not kept up with these rapid changes in data volume and complexity, in spite of what their marketing hype may claim. BI vendors routinely claim support for every imaginable kind of data, but this is flat-out false. The reason analyst firms have been reluctant to rate BI vendors highly is precisely because they spend hours interviewing CIOs, and a common theme quickly emerges: CIOs love BI tool XX, but it only really works well with their legacy relational (RDBMS) data sources. As soon as you need support for Hadoop, NoSQL, or API data, it becomes a giant project to extract data, build ETL machinery, or spend weeks to months on “data prep” before you can even consider using the current BI tools, and the results are often poor quality and out of date.
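To make that extract-and-prep burden concrete, here is a minimal sketch in Python, with invented field names, of the flattening step legacy BI pipelines force on nested API data before a tabular engine can touch it:

```python
# A minimal sketch (hypothetical JSON fields) of legacy-BI "data prep":
# flatten each nested record into the fixed, flat columns the tool's
# engine was configured with, discarding anything that doesn't fit.
import json

def flatten(record):
    # Pull nested fields up into flat columns; everything else is lost.
    return {
        "id": record.get("id"),
        "name": record.get("user", {}).get("name"),
        "city": record.get("user", {}).get("address", {}).get("city"),
    }

api_payload = '''[
  {"id": 1, "user": {"name": "Ann", "address": {"city": "Denver"}}},
  {"id": 2, "user": {"name": "Bo"}, "tags": ["trial", "mobile"]}
]'''

rows = [flatten(r) for r in json.loads(api_payload)]
# Record 2's "tags" array has no column to land in, so it silently
# disappears, and the flattened extract goes stale the moment the
# source data changes shape.
```

The second record’s array field never makes it into the extract at all, which is exactly the kind of quality loss CIOs complain about.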

And all this added ETL and data prep increases costs and decreases analytics agility tremendously.

If You Have Lemons, Sell Lemonade (Or, Why Existing BI Vendors Haven’t Stepped Up)

Simply put, enterprise BI is failing due to a lack of innovation. Existing vendors have massive investments in their current architectures, and changing them to meet the needs of rapidly changing data is hard and expensive, so they do the opposite: they force users to make ALL their data fit their tools.

The core issue for legacy BI vendors is that their approach to data is 40 years old. Each solution relies on some version of an analytic “engine” to perform analysis. This engine expects data to be completely tabular and homogeneous with a fixed schema, since that was the standard RDBMS data model for decades. Unfortunately, modern data sources support “schemaless” and highly heterogeneous data models. Practically speaking, imagine a table in an RDBMS where every row is different from the one before it and the one after it, and the table schema is constantly changing. Existing tools have zero chance of doing anything useful with this data! Adding support for a single new data format (like JSON) to an existing BI tool’s data engine can easily take 12–18 months of work. And if the last 10 years are any indication, new data formats will keep coming, so vendors will continue to struggle to keep up.
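Here is a toy illustration in Python (the documents are invented) of what “every row different from the last” looks like in a schemaless store, and why there is no fixed schema for a tabular engine to latch onto:

```python
# A toy illustration (invented documents) of the engine/schemaless
# mismatch: every record can carry a different shape, so the "schema"
# is just whatever keys happen to exist today.
docs = [
    {"sku": "A1", "price": 9.99},
    {"sku": "A2", "price": {"amount": 12, "currency": "EUR"}},  # same field, new type
    {"sku": "A3", "variants": [{"size": "M"}, {"size": "L"}]},  # nested arrays
]

# The observed "schema" is merely the union of keys seen so far...
observed_schema = sorted({key for d in docs for key in d})
print(observed_schema)  # ['price', 'sku', 'variants']

# ...and even a shared field like "price" has no single consistent type.
price_types = sorted(type(d["price"]).__name__ for d in docs if "price" in d)
print(price_types)  # ['dict', 'float']
```

A fixed-schema engine would have to either reject two of these three records or mangle them into NULL-padded rows, which is the failure mode described above.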

So we have created dozens of tools to “fix” the data: ETL tools, data prep tools, data wrangling tools, you name it. CIOs spend massive amounts of money on new tools to “fix” their data and then still get mediocre results from incumbent BI solutions. It’s not hard to understand why they don’t give glowing reviews to the analyst firms.

On Thinking Like A Beginner: How Waking Up to Polyglot Could Only Mean One Thing

The most difficult aspect of real innovation is thinking outside the box: determining an entirely new approach to solving a problem, not just tweaking at the margins of existing solutions, but truly rethinking both the problem and the solution.

This is exactly the approach needed to reinvent enterprise BI for the next 40 years.

The Age of the Analytic Compiler

The days of the analytic “engine” described previously are numbered. We are entering the age of the analytic compiler. What is an analytic compiler? Simply put, it “compiles” queries to any structure of data and executes them where the data lives. In the age of polyglot persistence and enterprise big data, the data-engine approach just can’t keep up! The constant extracting and reformatting of data to fit an analytic “engine” falls apart because it’s too slow and lacks analytic fidelity.

New formats of data will keep coming, and we don’t know what the future of data will look like, so the compiler approach is the only one that solves this problem. Unlike the data engine, which requires massive changes to support any new data source, the compiler can adapt queries to any data with a small amount of effort. Once you add visualization and reporting features on top of the compiler approach, you have a BI solution that works for any data and any user. No ETL machinery, no data prep required: just connect and go.
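As a rough sketch of the idea (not SlamData’s actual implementation; the backend names and operator tables below are invented), here is a toy “compiler” in Python that turns one abstract filter into whatever the target store natively executes:

```python
# A highly simplified sketch of the analytic-compiler idea: one abstract
# query, compiled to the target store's native form and pushed down to
# run where the data lives, instead of extracting data into a central
# engine. Backend names and operator tables are hypothetical.
def compile_filter(field, op, value, target):
    if target == "mongodb":
        # Emit a native aggregation stage that MongoDB executes itself.
        mongo_ops = {"eq": "$eq", "gt": "$gt", "lt": "$lt"}
        return [{"$match": {field: {mongo_ops[op]: value}}}]
    if target == "sql":
        # Emit plain SQL for a relational backend.
        sql_ops = {"eq": "=", "gt": ">", "lt": "<"}
        return f"SELECT * FROM data WHERE {field} {sql_ops[op]} {value!r}"
    raise ValueError(f"no backend for {target}")

# The same logical query, pushed down to two very different stores:
print(compile_filter("age", "gt", 30, "mongodb"))
# [{'$match': {'age': {'$gt': 30}}}]
print(compile_filter("age", "gt", 30, "sql"))
# SELECT * FROM data WHERE age > 30
```

Supporting a new store means adding one more translation target, not rebuilding a central engine, which is why the compiler approach can adapt so much faster.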

The Future’s So Bright I Gotta Wear Shades

Lots of folks at this point are thinking, “How can this be right? Why hasn’t somebody already done this?” Two reasons: first, innovation in the face of decades of entrenched technology is hard; second, this approach is technically difficult to build, even when starting without the shackles of decades of legacy technology. And it’s virtually impossible to build if you need to graft it onto an existing solution supporting thousands of users.

I’m sure I’ll hear from lots of vendors claiming they have a better approach, or that their legacy BI tools work “great” with modern complex data. This is simply not the case, and enterprise CIOs know it. Solutions that always require users to extract data from the original source, and that require the data to be completely flat and homogeneous, cannot succeed in the ever-changing face of modern data. Existing vendors need 12+ months to add first-class support for any new data model; the analytic-compiler approach can add support in a matter of weeks, for any data model or source.

Change is hard, and innovation is harder. The “data engine” architectures of the last 30 years won’t disappear overnight. The compiler approach will need to mature, like any radical departure from entrenched technology, but it’s the only solution to an ever more complicated world of data.

Enterprise BI is failing day by day, and companies that cling to the antiquated approach are digging themselves deeper and deeper. If only they’d put down the shovels for a minute….
