Enterprise Business Intelligence Is Failing. And It’s Going to Get Worse

Nov 1, 2016 | Analysis, Business Intelligence

Enterprise Business Intelligence solutions are failing, and the reasons are plain to see. Leading analyst firms including Gartner and G2 have published rankings showing virtually no leaders and scant challengers in the market. One of the most glaring examples is Domo, a fairly recent entrant that has raised an eye-popping $600M in venture capital yet can’t escape the “niche” vendor quadrant at either firm. “Niche” is a polite way of saying “loser” on these analyst charts.

Why? Simple: a total lack of innovation.

Vendors just throw armies of bodies at projects and rely on a basket of legacy technologies combined with some newer open source tools. There is not much real innovation in this approach, just smoke and mirrors.

New DB Tech + Legacy BI Tools = Analytics Purgatory

Over the last 6-8 years, CIOs have seen their data explode in size and variety. New databases are adopted daily for specific use cases, and the age of “polyglot persistence” has fully arrived. The age of a company standardizing on a single database for all its data needs has vanished.

What has caused this change? Companies are being forced to develop and scale new types of applications that were not even considered a decade ago. Online capabilities of every kind, plus the ability to extract and use data from dozens of APIs and other sources, have contributed to the problem of data chaos in the enterprise.

Unfortunately, enterprise BI tools simply have not kept up with these rapid changes in data volume and complexity, whatever their marketing hype may claim. BI vendors routinely claim support for every imaginable kind of data, but this is flat out false. Analyst firms have been reluctant to rate BI vendors highly precisely because they spend hours interviewing CIOs, and a common theme quickly emerges: CIOs love a given BI tool, but it only really works well with their legacy data sources (RDBMSs). As soon as they need support for Hadoop, NoSQL, or API data, it becomes a giant project to extract data, build ETL machinery, or spend weeks to months on “data prep” before the current BI tools can even be considered, and the results are often poor quality and out of date.
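
To make that pain concrete, here is a minimal sketch in Python, using an invented API payload, of the flattening step a legacy tabular pipeline forces on nested data. Note how the nested structure gets mangled to fit rows and columns:

```python
# Hypothetical illustration: the flattening a legacy BI pipeline
# forces on nested API data before a tabular engine can touch it.
import json

api_response = json.loads("""
{"user": "alice",
 "orders": [
   {"sku": "A1", "qty": 2, "tags": ["gift", "rush"]},
   {"sku": "B7", "qty": 1}
 ]}
""")

def flatten(doc):
    """Explode nested arrays into flat rows for a tabular engine.
    Structure (and analytic fidelity) is lost along the way."""
    for order in doc.get("orders", []):
        yield {
            "user": doc["user"],
            "sku": order["sku"],
            "qty": order["qty"],
            # Arrays don't fit a fixed schema, so they get
            # stringified (or silently dropped).
            "tags": ",".join(order.get("tags", [])),
        }

for row in flatten(api_response):
    print(row)
```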

And all this added ETL and data prep increases costs and dramatically decreases analytics agility. Not what CIOs want or need. Hence the so-so ratings of current BI tools from analyst firms and enterprise CIOs.

If You Have Lemons, Sell Lemonade (Or, Why Existing BI Vendors Haven’t Stepped Up)

Simply put, enterprise BI is failing due to a lack of innovation. Existing vendors have massive investments in their current architectures, and changing those architectures to meet the needs of rapidly changing data is hard and expensive, so they do the opposite: they force users to make ALL their data fit their tools.

The core issue for legacy BI vendors is that their approach to data is 40 years old. Each solution relies on some version of an analytic “engine” to perform analysis. This engine expects data to be completely tabular and homogeneous with a fixed schema, since this was the standard RDBMS data model for decades. Unfortunately, modern data sources support “schemaless” and highly heterogeneous data models. Practically speaking, imagine a table in an RDBMS with every row different from the one before and after it, and the table schema constantly changing. Existing tools have zero chance of doing anything useful with this data! Adding support for a single new data format (like JSON) to an existing BI tool’s data engine can easily take 12-18 months of work. And if the last 10 years are any indication, new data formats will keep coming, so vendors will continue to struggle to keep up.
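
To make the mismatch concrete, here is a minimal sketch in Python, using three invented documents from a single hypothetical collection, of what a fixed-schema engine sees when no two rows share a shape:

```python
# Three documents from the same (hypothetical) collection:
# no two alike, and types shift from row to row.
docs = [
    {"id": 1, "price": 9.99, "tags": ["new"]},
    {"id": 2, "price": "9.99 USD", "dims": {"w": 4, "h": 2}},
    {"event": "click", "ts": 1478000000},
]

# A fixed-schema engine needs one column set and one type per
# column. Here even the key set differs per document...
key_sets = [set(d) for d in docs]
print("shared columns:", set.intersection(*key_sets))  # prints set(): nothing shared
print("union of columns:", set.union(*key_sets))

# ...and the same field changes type across rows.
print({type(d.get("price")).__name__ for d in docs})   # {'float', 'str', 'NoneType'}
```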

So we have created dozens of tools to “fix” the data: ETL tools, data prep tools, data wrangling tools, you name it. CIOs spend massive amounts of money on these new tools and still get mediocre results from incumbent BI solutions. It is not hard to understand why they don’t give glowing reviews to the analyst firms.

On Thinking Like A Beginner: How Waking Up to Polyglot Could Only Mean One Thing

The most difficult aspect of real innovation is thinking outside the box: determining an entirely new approach to solving a problem, not just tweaking at the margins of existing solutions, but truly rethinking both the problem and the solution.

This is exactly the approach needed to reinvent enterprise BI for the next 40 years.

The Age of the Analytic Compiler

The days of the analytic “engine” described above are numbered. We are entering the age of the analytic compiler. What is an analytic compiler? Simply put, it “compiles” queries to any structure of data and executes them where the data lives. In the age of polyglot persistence and enterprise big data, the data-engine approach just can’t keep up: the constant extracting and reformatting of data to fit an analytic “engine” falls apart because it is too slow and loses analytic fidelity.

New formats of data will keep coming, and we don’t know what the future of data will look like, so the compiler approach is the only one that solves this problem. Unlike the data engine, which requires massive changes to support any new data source, the compiler can adapt queries to any data with a small amount of effort. Add visualization and reporting features on top of the compiler and you have a BI solution that works for any data and any user. No ETL machinery, no data prep required; just connect and go.
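
As a toy illustration of the idea (not SlamData’s actual compiler; the query shape and the compile_to_mongo helper are invented for this sketch), here is how a simple filter-and-aggregate query might be compiled into a native MongoDB aggregation pipeline, so the work executes where the data lives:

```python
# A toy sketch of the "analytic compiler" idea: translate a simple
# query plan into a native MongoDB aggregation pipeline instead of
# extracting and reformatting the data for an external engine.

def compile_to_mongo(filter_field, filter_value, group_field, agg_field):
    """Compile a filter -> group -> average plan into a pipeline."""
    return [
        {"$match": {filter_field: filter_value}},
        {"$group": {
            "_id": f"${group_field}",
            "avg": {"$avg": f"${agg_field}"},
        }},
    ]

pipeline = compile_to_mongo("status", "shipped", "region", "total")
print(pipeline)
# With pymongo, this would run inside the database, no ETL required:
#   db.orders.aggregate(pipeline)
```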

The Future’s So Bright I Gotta Wear Shades

Lots of folks at this point are thinking, “How can this be right? Why hasn’t somebody already done this?” Two reasons: first, innovation in the face of decades of entrenched technology is hard; second, this approach is technically difficult to build, even when starting without the shackles of decades of legacy technology. And it is virtually impossible to build if you need to graft it onto an existing solution supporting thousands of users.

I’m sure I’ll hear from lots of vendors claiming they have a better approach, or that their legacy BI tools work “great” with modern complex data. This is simply not the case, and enterprise CIOs know it. Solutions that require users to extract data from the original source in every case, and that require the data to be completely flat and homogeneous, cannot succeed in the ever-changing face of modern data. Existing vendors need 12+ months to add first-class support for any new data model; the analytic compiler approach can add support in a matter of weeks, for any data model or source.

Change is hard, and innovation is harder. The “data engine” architectures of the last 40 years won’t disappear overnight. The compiler approach will need to mature, like any radical departure from entrenched technology, but it is the only solution to an ever more complicated world of data.

Enterprise BI is failing day by day, and companies that stick with the antiquated approach are digging themselves deeper and deeper. If only they’d put down the shovels for a minute….

Whitepaper: The Characteristics of NoSQL Analytics Systems

by John De Goes, CTO and Co-Founder of SlamData

Overview

Semi-structured data, called NoSQL data in this paper, is growing at an unprecedented rate. This growth is fueled, in part, by the proliferation of web and mobile applications, APIs, event-oriented data, sensor data, machine learning, and the Internet of Things, all of which are disproportionately powered by NoSQL technologies and data models.

This paper carves out a single concern by focusing on the system-level capabilities required to derive maximum analytic value from a generalized model of NoSQL data. This approach leads to eight well-defined, objective characteristics, which collectively form a precise, capabilities-based definition of a NoSQL analytics system.

These capabilities are inextricably motivated by use cases, but other considerations are explicitly ignored. They are ignored not because they are unimportant (quite the contrary), but because they are orthogonal to the raw capabilities a system must possess to derive analytic value from NoSQL data.

Table of Contents

  • Overview
  • The Nature of NoSQL Data
    • APIs
    • NoSQL Databases
    • Big Data
    • A Generic Data Model for NoSQL
  • Approaches to NoSQL Analytics
    • Coding & ETL
    • Hadoop
    • Real-Time Analytics
    • Relational Model Virtualization
    • First-Class NoSQL Analytics
  • Characteristics of NoSQL Analytics Systems
    • Generic Data Model
    • Isomorphic Data Model
    • Multi-Dimensionality
    • Unified Schema/Data
    • Post-Relational
    • Polymorphic Queries
    • Dynamic Type Discovery & Conversion
    • Structural Patterns
