Can Data Virtualization Address the Data Integration Bottleneck?

We all understand the business value of BI and analytics: they enable growth, help attract and retain customers, drive innovation, and reduce costs.

CIOs do as well. Both Gartner's Amplifying the Enterprise: The 2012 CIO Agenda and IBM's Global CIO Study 2011 place BI and analytics atop CIOs' technology priorities.

The Data Integration Bottleneck

Providing analytics and BI solutions with the data they require has always been difficult; data integration has long been considered the biggest bottleneck in any analytics or BI project.

For the past two decades, the default solution has been to first consolidate the data into a data warehouse, and then provide users with tools to analyze and report on this consolidated data.

But data integration based on these traditional replication and consolidation approaches has numerous moving parts that must be kept in sync, and synchronizing them properly extends project lead times.
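To make those moving parts concrete, here is a minimal sketch of the consolidation pattern in Python, with SQLite standing in for a production source and a warehouse. All file names, table names, and schemas are hypothetical, invented for illustration only.

```python
import sqlite3

# Illustrative source system: a CRM database with a customers table.
# (Names and schema are hypothetical, for this sketch only.)
src = sqlite3.connect("crm.db")
src.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT OR REPLACE INTO customers VALUES (1, 'Acme Corp')")
src.commit()

# The warehouse keeps its own physical copy of the data.
wh = sqlite3.connect("warehouse.db")
wh.execute("CREATE TABLE IF NOT EXISTS dim_customer (id INTEGER PRIMARY KEY, name TEXT)")

# Extract from the source, load into the warehouse. This copy step is
# one of the moving parts: it must be scheduled, monitored, and revised
# whenever the source schema changes.
rows = src.execute("SELECT id, name FROM customers").fetchall()
wh.executemany("INSERT OR REPLACE INTO dim_customer VALUES (?, ?)", rows)
wh.commit()

src.close()
wh.close()
```

Every new source means another job like this, and every job adds to the synchronization burden.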

The Data Warehousing Institute (TDWI) confirms this lack of agility. Its recent studies found that the average time needed to add a new data source to an existing BI application was 8.4 weeks in 2009, 7.4 weeks in 2010, and 7.8 weeks in 2011, and that 33% of organizations needed more than three months to add a new data source.

Meanwhile, increasingly complex data landscapes, diverse data types, and new sources such as big data and the cloud have created requirements that further challenge traditional data integration approaches.

Simplify to Succeed

Rick van der Lans, in Data Virtualization for Business Intelligence Systems: Revolutionizing Data Integration for Data Warehouses, describes how the architectures of most business intelligence systems are based on a complex chain of data stores: production databases, data staging areas, a data warehouse, dependent data marts, and personal data stores. Simply maintaining this complexity overwhelms IT today.

According to van der Lans, “these classic BI architectures served business well for the last twenty years. However, considering the need for more agility, they have some disadvantages:

  • Duplication of data
  • Non-shared meta data specifications
  • Limited flexibility
  • Decrease of data quality
  • Limited support for operational reporting
  • Limited support for reporting on unstructured and external data.”

Data virtualization provides a more streamlined data integration approach, a more iterative development process, and a more adaptable change management process than traditional data integration based on consolidation and replication.
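As a contrast to the copy job above, here is a minimal sketch of the virtualization idea, again in Python, with SQLite's ATTACH mechanism standing in for a data virtualization server. The file, table, and view names are hypothetical; a real deployment would use a dedicated virtualization platform rather than SQLite.

```python
import sqlite3

# Two illustrative source systems (hypothetical names and schemas).
crm = sqlite3.connect("crm.db")
crm.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")
crm.execute("INSERT OR REPLACE INTO customers VALUES (1, 'Acme Corp')")
crm.commit()
crm.close()

sales = sqlite3.connect("orders.db")
sales.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
sales.execute("INSERT OR REPLACE INTO orders VALUES (10, 1, 250.0)")
sales.commit()
sales.close()

# The virtualization layer: one connection attaches the live sources
# instead of copying their rows into a central store.
dv = sqlite3.connect(":memory:")
dv.execute("ATTACH DATABASE 'crm.db' AS crm")
dv.execute("ATTACH DATABASE 'orders.db' AS sales")

# A virtual, integrated view: consumers query one logical table, but
# the rows stay in the sources and the join runs at query time.
dv.execute("""
    CREATE TEMP VIEW customer_orders AS
    SELECT c.name, o.total
    FROM crm.customers AS c
    JOIN sales.orders AS o ON o.customer_id = c.id
""")

for name, total in dv.execute("SELECT name, SUM(total) FROM customer_orders GROUP BY name"):
    print(name, total)  # prints: Acme Corp 250.0
```

Adding a new source here means attaching it and amending the view definition, rather than building and scheduling another copy job. That, in miniature, is the agility argument.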

By using data virtualization as a complement to existing data integration approaches, the ten organizations profiled in Data Virtualization: Going Beyond Traditional Data Integration to Achieve Business Agility cut analytics and BI project times in half or more.
