This article answers the question: What is Big Data?
Big Data is getting bigger. This market growth has led to a proliferation of technology vendors touting themselves as “Big Data solution providers.” In fact, the volume and variety of Big Data vendors seem to rival the volume and variety of Big Data itself, which can make the selection of technology partners a daunting task for any business pursuing a digital transformation strategy.
To help ensure you have a productive and mutually beneficial conversation with Big Data technology vendors, it’s critical that you first fully understand:
- What is Big Data?
- The key components of the Big Data ecosystem, which include infrastructure, analytics, applications, data sources and application programming interfaces (APIs)
Here’s everything you need to know.
What is Big Data?
A helpful way to think about Big Data is to compare it to its forerunner: the Enterprise Data Warehouse (EDW). An EDW ingests multiple data sources into a common format using Extraction/Transformation/Load (ETL) tools and best practices. The inbound data tends to be well-structured and needs to be converted from its native format to the EDW format. The processing of data into and provisioning of data out of an EDW tend to happen in the same technical environment (usually the same server).
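To make the ETL pattern concrete, here is a minimal sketch in Python: two hypothetical source systems with different field names are extracted, transformed into a common warehouse schema, and loaded into a single table. All system names, fields and records are illustrative.

```python
import sqlite3

# Hypothetical raw records from two source systems, each with its own field names.
pos_rows = [{"sku": "A1", "amt": "19.99", "ts": "2024-01-05"}]
crm_rows = [{"product_id": "A1", "total": 19.99, "date": "2024-01-06"}]

def transform(row, mapping):
    """Rename source fields to the warehouse schema and normalize types."""
    out = {target: row[source] for target, source in mapping.items()}
    out["amount"] = float(out["amount"])  # source may store amounts as strings
    return out

# Load both sources into a single warehouse table with a common format.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE sales (product TEXT, amount REAL, sale_date TEXT)")

for row in pos_rows:
    r = transform(row, {"product": "sku", "amount": "amt", "sale_date": "ts"})
    warehouse.execute("INSERT INTO sales VALUES (:product, :amount, :sale_date)", r)
for row in crm_rows:
    r = transform(row, {"product": "product_id", "amount": "total", "sale_date": "date"})
    warehouse.execute("INSERT INTO sales VALUES (:product, :amount, :sale_date)", r)

total = warehouse.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

The point of the mapping dictionary is the E and T of ETL: each source keeps its native format right up until it is renamed and type-normalized into the shared schema.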
In contrast, Big Data processing can happen in many different environments and can ingest many different types of structured and unstructured data. When thinking about Big Data, consider the “seven V’s”:
Volume
Big Data is, well … big! With the dramatic growth of the internet, mobile devices, social media, and Internet of Things (IoT) technology, the amount of data generated by all these sources has grown accordingly.
Velocity
In addition to getting bigger, data is being generated, and processed by organizations, at an accelerating pace.
Variety
In earlier times, most data types could be neatly captured in rows on a structured table. In the Big Data world, data often comes in unstructured formats like social media posts, server log data, lat-long geo-coordinates, photos, audio, video and free text.
Variability
The meaning of words in unstructured data can change based on context; the same phrase might be praise in one social media post and sarcasm in the next.
Veracity
With many different data types and data sources, data quality issues invariably pop up in Big Data sets. Veracity deals with exploring a data set for data quality and systematically cleansing that data to be useful for analysis.
Visualization
Once data has been analyzed, it needs to be presented in a visualization that end users can understand and act upon.
Value
Data has no inherent worth on its own; it must be combined with rigorous processing and analysis to be useful.
Big Data Infrastructure
Broadly speaking, Big Data infrastructure includes the hardware and software “plumbing” that gathers and organizes data. These foundational pieces include data collection, data storage, data access and data processing.
Data Collection
Collecting data simply means bringing it into your enterprise for further organization, processing and analysis. Much of this data you may already have in-house, such as data from point-of-sale systems, customer databases, customer relationship management systems, and financials. You may also need to collect external data from social media threads or third-party data sources, which could require a bit of technical development to acquire.
Data Storage
Storage can be as simple as a computer hard drive or even a flash memory stick. But in the Big Data world, there is almost always a need to manage multiple points (or nodes) of data storage and processing. This is where a toolset like Hadoop comes into play. Hadoop components provide a framework for keeping track of stored data files on multiple nodes, breaking up data for large computations to multiple nodes, and assembling the results, as well as node management functions like scheduling automated jobs.
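The “break up, compute, assemble” pattern that Hadoop applies across nodes can be illustrated in miniature. The sketch below simulates it in plain Python with a word count: each block is counted independently (the map step), then the partial counts are merged (the reduce step). The blocks and words are, of course, made up.

```python
from collections import Counter, defaultdict

# Hypothetical document split into blocks, as a distributed file system
# would spread it across storage nodes.
blocks = ["big data is big", "data needs processing", "big processing"]

def map_block(block):
    """One 'node' counts words in its own block, independently of the others."""
    return Counter(block.split())

partials = [map_block(b) for b in blocks]

# Reduce step: partial per-node counts are assembled into one final result.
totals = defaultdict(int)
for partial in partials:
    for word, n in partial.items():
        totals[word] += n
```

In a real Hadoop cluster, the map step runs in parallel on the nodes that already hold each block, so the computation moves to the data rather than the other way around.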
Data Access
Before the advent of Big Data, Structured Query Language (SQL) was the common language of the data world. SQL enables users to access structured, relational databases to retrieve data with emphasis on consistency and reliable transactions. Of course, with Big Data, much of the data is unstructured as described above. This is where newer data access tools like NoSQL (short for “Not Only SQL”) come into play. While SQL databases are rigorously structured and reside on a single server, NoSQL databases house unstructured data across multiple servers and emphasize speed and scalability.
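The contrast can be sketched with Python’s built-in sqlite3 module standing in for a relational database, and plain dictionaries standing in for the documents a NoSQL store would hold. A real document store such as MongoDB would distribute these across servers; the records here are illustrative.

```python
import sqlite3

# Structured, relational access with SQL: a fixed schema, queried declaratively.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT, city TEXT)")
db.execute("INSERT INTO customers VALUES (1, 'Ada', 'London')")
row = db.execute("SELECT name FROM customers WHERE city = 'London'").fetchone()

# Document-style access: schemaless records, here simulated with plain dicts.
documents = [
    {"id": 1, "name": "Ada", "posts": ["hello"]},  # nested, unstructured fields
    {"id": 2, "name": "Grace", "city": "DC"},      # a different shape entirely
]
match = next(d["name"] for d in documents if "posts" in d)
```

Notice that the two documents don’t even share the same fields; that flexibility is exactly what relational schemas forbid and what document stores are built to accommodate.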
Data Processing
A tool like Massively Parallel Processing (MPP) breaks up the processing of large data sets across multiple nodes and concurrently manages the needed processing for a single application. You could think of MPP as the general contractor for a house. The general contractor is in charge of the entire project but farms out the foundation work, framing, electrical, plumbing, HVAC, roofing and interior work to subcontractors. In a similar manner, MPP manages the completion of a large data processing task, but farms out the processing of certain tasks to separate nodes, only to reassemble the results when all processing is complete.
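A rough sketch of the general-contractor pattern, using Python threads as stand-ins for MPP nodes (a real MPP system distributes work across separate machines; this only shows the split / farm out / reassemble flow):

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """One 'subcontractor' node: handle its share of the data set."""
    return sum(chunk)

data = list(range(1_000))

# The 'general contractor': split the job into chunks, farm them out to
# workers, then reassemble the partial results into the final answer.
chunks = [data[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(process_chunk, chunks))
total = sum(partial_sums)
```

The key property is that no worker ever sees the whole data set; only the coordinating step knows how the pieces fit back together.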
Big Data Analytics
With the foundational infrastructure and the ability to retrieve and process large amounts of data from many different sources in place, Big Data can now be analyzed for business insight. Analytics involves data preparation, modeling and visualization.
Data Preparation
Data preparation involves combining disparate data sources into a common format. Successfully preparing data requires in-depth, exploratory analysis to understand the data types and data quality issues of the source data, as well as a well-thought-out target data model.
Modeling
With properly prepared data in hand, analysts and data-savvy business users can develop their analytics model. This is where the business problem meets the data. The model can be something as basic as summarizing sales data by region. But more sophisticated analyses, like machine-learning algorithms, are well-suited to Big Data’s ability to manage and process massive amounts of information.
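A “sales by region” summary really can be as simple as a few lines of Python; the records below are hypothetical:

```python
from collections import defaultdict

# Hypothetical sales records after data preparation.
sales = [
    {"region": "East", "amount": 120.0},
    {"region": "West", "amount": 80.0},
    {"region": "East", "amount": 50.0},
]

# A basic analytics model: total sales by region.
by_region = defaultdict(float)
for sale in sales:
    by_region[sale["region"]] += sale["amount"]
```

The same shape of computation, grouping records by a key and aggregating, scales from three rows on a laptop to billions of rows on a cluster; only the machinery underneath changes.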
Visualization
Bringing analytic insights to life for non-technical users is the domain of visualization. In the hands of skilled business intelligence authors, visualization tools take on an artistic quality while presenting conclusions from the aforementioned analytic insights.
Big Data Applications
Industry-specific applications bring business insights to life in an actionable and relevant way for business users. These applications combine the technical architecture, data engineering, data science and analytics of Big Data with industry insight to provide compelling solutions to real-world business problems. The examples below are a taste of what Big Data can accomplish.
Faster Time to Market
Like all pharmaceutical companies, Bristol-Myers Squibb (BMS) is interested in reducing the length of clinical trials to bring new products to market more quickly. Using an Amazon Web Services (AWS) portal to host computationally heavy simulations, BMS removed the bottleneck of its slow and congested internal research system in favor of a cloud-based, scalable solution. Prior to moving to a Big Data environment, BMS scientists took 60 hours to run a few hundred simulations. After the move to AWS, scientists can run up to 2,000 simulations in only 1.2 hours.
Increased Sales Through Personalized Marketing
Kroger accomplished a rare feat in retail marketing: they realized a 70% direct mail return rate when the average is 3.7%. Using a Big Data approach enabled Kroger to record the purchase history of each individual customer. Then, using predictive analytics, they were able to create mailers specific to that customer. According to Nishat Mehta, an executive at dunnhumby, Kroger’s customer data partner, “We make decisions not based on what you bought today but what you have bought over the last two years. We can recommend a product you buy every four months. You don’t have to know, but we know.”
Optimizing Customer Value
Avis Budget Group’s foray into Big Data has increased market share and improved customer loyalty. Using a remarkable combination of diverse data sources, Big Data technologies and data science, Avis Budget can make a fact-based assessment of a customer’s value and market segment. Avis Budget’s Big Data platform combines corporate affiliation data, demographics, rental history, service history, customer feedback forms and even social media data to feed a predictive model that estimates each customer’s lifetime value. Armed with these insights, Avis Budget marketers created graduated incentives to move each customer segment along the path of greater customer loyalty. This effort yielded an estimated $200M in additional revenue.
Data Sources and APIs
One of the things that makes Big Data powerful is its ability to pull multiple data sources together into a single analysis to produce unique insights. These disparate data sources include:
- Traditional structured data
- Internal documents
- Multimedia files
- Transaction data
- Social media data
- Sensor data
- Public web sources
- Machine log data
Many of these data sources can be accessed using application programming interfaces (APIs). These digital gateways enable Big Data storage and processing tools to access data sources in an automated way, freeing human analysts from the burden of trafficking data from source to target, and speeding time to insight.
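In practice, calling an API often amounts to fetching JSON over HTTP and keeping only the fields your pipeline needs. The sketch below uses Python’s standard library; the URL, payload shape and field names are all hypothetical, and the sample payload lets the parsing step run offline.

```python
import json
from urllib.request import urlopen

def fetch_json(url):
    """Pull a data source through its API. The URL is a placeholder:
    real APIs also require authentication, paging and rate-limit handling."""
    with urlopen(url) as response:
        return json.loads(response.read())

def extract_posts(payload):
    """Keep only the fields the downstream pipeline needs."""
    return [{"id": p["id"], "text": p["text"]} for p in payload.get("posts", [])]

# Offline example of the kind of payload a social media API might return:
sample = {"posts": [{"id": 1, "text": "great service!", "likes": 4}]}
records = extract_posts(sample)
```

Separating the fetch from the parse, as above, is what lets jobs like this run unattended: the same extraction logic works whether the payload comes from a live endpoint or a stored file.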
We are living in truly remarkable times. The dramatic increase in the amount of data produced is nearly matched by the ability to collect, store and manage it. The challenge lies in applying smart analysis to all that data and generating actionable insight from it.
Understanding what Big Data is, and how infrastructure, analytics, applications, data sources and APIs relate to one another, will help businesses create a sustainable strategy for building their own Big Data ecosystem.