Ad Hoc Data Analysis on Big Data Sets
The potential for ad hoc analytics is growing as technologies such as Hadoop make it possible to query big data sets. In this short article you'll see that potential through an example of creating a 360-degree view of a customer. You'll also learn why ad hoc analytics on an on-premises Hadoop deployment limits performance, scalability, and usability, and why the cloud is a real game changer: it enables fast, interactive queries accessible to business analysts, with virtually unlimited capacity to scale.
What is ad hoc analysis on big data sets?
Ad hoc analytics is the practice of analyzing data on an as-needed basis. It has always been challenging, but performing it on big data sets rather than relational databases adds a new layer of complexity: larger data volumes, faster data velocity, greater data variety, and more sophisticated data models.
What is driving ad hoc analysis on big data sets?
Organizations are experiencing an increasing need to enable ad hoc analytics on big data sets to optimize their sales and marketing initiatives, discover new revenue opportunities, enhance customer service and improve operational efficiency.
Let's look at the role of ad hoc analytics on big data sets in achieving a 360-degree view of customers, using the example of an organization trying to understand why its customer churn has increased. By querying its structured, internal data, the company can determine which products are losing customers, whether price changes contributed to defection, and how customer service metrics have changed. But these findings tell only part of the story.
With the help of big data, analysts can use data sources like social media, mobile data, click stream data or purchase history to gain a 360-degree view of their customer. This will help answer important questions, such as:
What are customers saying about your company?
How do customers interact with you?
How well do you serve them?
How can you improve every interaction?
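A 360-degree view like this is usually assembled by joining internal transactional data with behavioral sources. The sketch below shows the idea with Python's built-in sqlite3 standing in for the real stores (in practice the internal tables would live in a data warehouse and the behavioral data in Hadoop); all table and column names are hypothetical.

```python
import sqlite3

# In-memory stand-in for the real data stores.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE purchases       (customer_id TEXT, product TEXT, amount REAL);
CREATE TABLE support_tickets (customer_id TEXT, resolved INTEGER);
CREATE TABLE social_mentions (customer_id TEXT, sentiment REAL);  -- -1.0 .. 1.0

INSERT INTO purchases VALUES ('c1', 'widget', 120.0), ('c2', 'widget', 80.0);
INSERT INTO support_tickets VALUES ('c1', 1), ('c2', 0), ('c2', 0);
INSERT INTO social_mentions VALUES ('c1', 0.6), ('c2', -0.7);
""")

# One row per customer: total spend, unresolved tickets, average sentiment.
cur.execute("""
SELECT p.customer_id,
       SUM(p.amount) AS total_spend,
       (SELECT COUNT(*) FROM support_tickets t
         WHERE t.customer_id = p.customer_id AND t.resolved = 0) AS open_tickets,
       (SELECT AVG(s.sentiment) FROM social_mentions s
         WHERE s.customer_id = p.customer_id) AS avg_sentiment
FROM purchases p
GROUP BY p.customer_id
ORDER BY p.customer_id
""")
rows = cur.fetchall()
for row in rows:
    print(row)
# c2 stands out: low spend, two open tickets, negative sentiment --
# exactly the kind of at-risk customer a churn analysis wants to surface.
```

The point is not the specific schema but the pattern: each source answers one of the questions above, and only the joined view answers all of them at once.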
Hadoop and its core processing framework, MapReduce, are designed for batch-oriented processing of big data sets. Apache Hive is a data warehousing system for large volumes of data stored in Hadoop. It is considered a de facto standard tool for Hadoop because it provides a SQL-based query language that makes it easy to query big data sets. However, queries performed with Hive are usually very slow because it translates them into MapReduce jobs.
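Hive's query language (HiveQL) is close enough to standard SQL that a typical churn aggregation would run largely unchanged on a Hive table. The sketch below uses Python's sqlite3 as a stand-in engine so it runs anywhere; the table and column names are hypothetical.

```python
import sqlite3

# sqlite3 stands in for the Hive engine here; the SELECT below is the kind
# of SQL that HiveQL accepts nearly verbatim over a table in Hadoop.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customer_events (customer_id TEXT, product TEXT, event TEXT);
INSERT INTO customer_events VALUES
  ('c1', 'widget', 'cancel'),
  ('c2', 'widget', 'renew'),
  ('c3', 'gadget', 'cancel'),
  ('c4', 'widget', 'cancel');
""")

# Which products are losing the most customers?
cur.execute("""
SELECT product, COUNT(*) AS cancellations
FROM customer_events
WHERE event = 'cancel'
GROUP BY product
ORDER BY cancellations DESC
""")
churn_by_product = cur.fetchall()
print(churn_by_product)
```

On Hive, a query like this is compiled into one or more MapReduce jobs, which is exactly why even a simple aggregation can take minutes rather than seconds.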
To gain real-time interactive query functionality, organizations must deploy interactive processing engines such as Apache Spark and Presto (originally developed at Facebook) alongside Hadoop. Unfortunately, these open source tools can be very difficult for some organizations to deploy and support.
With on-premises deployments, fixed clusters mean that ad hoc queries can easily run out of capacity or take far too long to process. As a result, companies either limit the number of queries they run, try to avoid processing queries during peak usage, or spend too much money over-provisioning capacity to guarantee acceptable performance under any condition.
Specialized Skill Requirements
Creating and executing ad hoc analytics in an on-premises Hadoop environment requires developers and data scientists with specialized MapReduce, Pig, and Hive skills. Users also need technical assistance to start and stop clusters every time they run a query.
Why is ad hoc data analytics easier in the cloud?
A Big Data as a Service (BDaaS) solution, available on Amazon Web Services, Google Compute Engine, and Microsoft Azure, removes the ad hoc analysis obstacles associated with on-premises Hadoop. In fact, we advocate "Everything as a Service": MapReduce, Hive, Pig, Oozie, and Sqoop, plus Spark and Presto, the open source cluster computing frameworks for real-time interactive queries on data stored in Hive, HDFS, HBase, and Amazon S3. Our solution allows users to launch and provision Spark or Presto clusters and start running queries in minutes.