Scaling Big Data with Hadoop and Solr – A new how-to guide – #bigdata #java #bookreview

Scaling Big Data with Hadoop and Solr

Learn new ways to build efficient, high performance enterprise search repositories for Big Data using Hadoop and Solr
Hrishikesh Karambelkar
(Packt – paperback, Kindle)

This well-presented, step-by-step guide shows how to use Apache Hadoop and Apache Solr to work with Big Data.  Author and software architect Hrishikesh Karambelkar does a good job of explaining Hadoop and Solr, and he illustrates how they can work together to tackle Big Data enterprise search projects.

“Google faced the problem of storing and processing big data, and they came up with the MapReduce approach, which is basically a divide-and-conquer strategy for distributed data processing,” Karambelkar notes. “MapReduce is widely accepted by many organizations to run their Big Data computations. Apache Hadoop is the most popular open source Apache licensed implementation of MapReduce….Apache Hadoop enables distributed processing of large datasets across a commodity of clustered servers. It is designed to scale up from single server to thousands of commodity hardware machines, each offering partial computational units and data storage.”
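To give a feel for what that divide-and-conquer strategy looks like in practice for the book's target readers, here is a minimal word-count sketch written against Hadoop's MapReduce Java API. It is a generic illustration, not code from the book, and the input and output paths are simply whatever HDFS directories you pass on the command line.

    // Minimal word-count sketch using Hadoop's MapReduce Java API (illustrative, not from the book).
    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map step: split each input line into words and emit (word, 1) pairs.
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reduce step: sum the counts emitted for each word across all mappers.
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

The map tasks run in parallel across the cluster's worker nodes, and the reduce tasks then combine their partial counts, which is exactly the divide-and-conquer pattern Karambelkar describes.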

Meanwhile, Karambelkar adds, “Apache Solr is an open source enterprise search application which provides user abilities to search structured as well as unstructured data across the organization.”
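For a sense of how a Java programmer drives Solr, here is a small SolrJ sketch that indexes one document and runs a keyword query. The server URL, the core name "collection1," and the field names are assumptions chosen for illustration, not examples taken from the book.

    // Minimal SolrJ sketch: index one document, then run a keyword query (illustrative only).
    // Assumes a Solr server at localhost:8983 with a core named "collection1" and a "text" field.
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;
    import org.apache.solr.common.SolrInputDocument;

    public class SolrSearchExample {
        public static void main(String[] args) throws Exception {
            SolrClient solr = new HttpSolrClient.Builder(
                    "http://localhost:8983/solr/collection1").build();

            // Add a document with an id and a free-text field, then commit so it becomes searchable.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "1");
            doc.addField("text", "Scaling Big Data with Hadoop and Solr");
            solr.add(doc);
            solr.commit();

            // Query the "text" field for a keyword and print the ids of matching documents.
            QueryResponse response = solr.query(new SolrQuery("text:hadoop"));
            for (SolrDocument d : response.getResults()) {
                System.out.println(d.getFieldValue("id"));
            }
            solr.close();
        }
    }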

His book (128 pages in print format) is structured with five chapters and three appendices:

  • Chapter 1: Processing Big Data Using Hadoop MapReduce
  • Chapter 2: Understanding Solr
  • Chapter 3: Making Big Data Work for Hadoop and Solr
  • Chapter 4: Using Big Data to Build Your Large Indexing
  • Chapter 5: Improving Performance of Search while Scaling with Big Data
  • Appendix A: Use Cases for Big Data Search
  • Appendix B: Creating Enterprise Search Using Apache Solr
  • Appendix C: Sample MapReduce Programs to Build the Solr Indexes

Where the book falls short (and I have noted this about many works from computer-book publishers) is that the author simply assumes everything will go well during the process of downloading and setting up the software, and he gives almost no troubleshooting hints. This can happen with books written by software experts that also are reviewed by software experts. Their systems likely are already optimized and may not throw the error messages that less-experienced users may encounter.

For example, the author states: “Installing Hadoop is a straightforward job with a default setup….” Unfortunately, there are many “flavors” and configurations of Linux running in the world. And Google searches can turn up a variety of problems others have encountered when installing, configuring and running Hadoop.  Getting Solr installed and running likewise is not a simple process for everyone.

If you are ready to plunge in and start dealing with Big Data, Scaling Big Data with Hadoop and Solr definitely can give you some well-focused and important information.  But heed the “Who this book is for” statement on page 2: “This book is primarily aimed at Java programmers, who wish to extend Hadoop platform to make it run as an enterprise search without prior knowledge of Apache Hadoop and Solr.”

And don’t be surprised if you have to seek additional how-to details and troubleshooting information from websites and other books, as well as from co-workers and friends who may know Linux, Java and NoSQL databases better than you do (whether you want to admit it or not).

Si Dunn