Why Hadoop on the Forward! Fabrics

Disruptive IT Trends | 7 minute read | Jan 20th, 2014

Rest assured, you are not the first to ask this question! It comes up with some regularity as we talk about Forward! by Unisys™ with our ClearPath customers and with potential Forward! by Unisys customers.

At first glance, it might seem that Hadoop, which caters to the failure modes of low-cost commodity servers (let’s call them “nodes”), is a poor fit for the mission-critical Forward! fabric, where hardware failures are rare and stability is at a premium. Resiliency in the face of node failures is certainly desirable, but what’s the point when nodes don’t fail very often? Well, Hadoop offers a lot besides node failure resiliency, and it is these other characteristics that make it an attractive denizen of the Forward! fabric.

Let’s have a look at what Hadoop offers, and how its features can be useful on Forward! fabric.

To start with, let’s be clear about one thing:  Hadoop is a distributed file system, not a database management system.  Although Hadoop clusters are used in many database management-like scenarios, they do not offer the full range of features traditional DBMSs are well known for providing.  For example, Hadoop itself does not support the ACID transaction semantics and in-place update features of modern DBMSs.  You can sometimes augment Hadoop clusters with open source or commercial software products that add DBMS-like characteristics, but don’t let this disguise Hadoop’s true distributed file system nature.
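To make the distinction concrete, here is a minimal sketch of writing a file through Hadoop’s standard Java file system API (org.apache.hadoop.fs).  The path and record contents are hypothetical; the point is that the operations Hadoop offers are file operations – create, read, append – not SQL statements or in-place updates.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsIsAFileSystem {
      public static void main(String[] args) throws Exception {
        // Picks up the cluster's settings (core-site.xml, etc.) from the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);  // connect to the cluster's file system

        // Write a file.  HDFS files are write-once (plus append); there is no
        // "UPDATE ... WHERE" equivalent -- a key DBMS feature Hadoop lacks.
        Path path = new Path("/data/example/readings.txt");  // hypothetical path
        try (FSDataOutputStream out = fs.create(path)) {
          out.writeBytes("sensor-42,2014-01-20,18.5\n");
        }
      }
    }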

So, if you are building a transaction processing application, Hadoop probably isn’t for you.  But if your goal is batch-style analysis, read-mostly data analytics, or online big data search, Hadoop’s lack of traditional DBMS features won’t be much of a problem for you.

Hadoop clusters are assembled from many (and sometimes really a lot of) commodity servers linked by standard Ethernet.  These nodes are typically what the database folks call “shared nothing,” meaning the nodes are independent servers, each with its own processor, memory, and persistent storage resources.

Yahoo, which fostered the creation of the Apache Hadoop open source project, is thought to have around 42,000 nodes devoted to Hadoop clusters, with the largest cluster having about 4,500 nodes (as of July 2011), but many other companies run clusters with far fewer nodes.  The Apache Hadoop “PoweredBy” wiki page lists cluster sizes and the companies that use them.  Often the number of nodes in a Hadoop cluster reflects the type of application the cluster serves – search engine-like applications and massive data stores typically have many nodes, whereas data analytics applications often have fewer.

Hadoop clusters are made up of two kinds of nodes, called “name nodes” and “data nodes.”  A cluster has one name node and many data nodes.  The name node knows where data are located and orchestrates cluster resiliency in the face of data node failures, while a companion master service (the JobTracker in classic Hadoop) coordinates the cluster’s data processing jobs, which are constructed using a programming pattern known as “map/reduce.”  In practice, a cluster’s name node may be replicated (i.e., two name nodes) so the cluster can survive name node failures by “failing over” to the backup name node.

The name node spreads data across all of the cluster’s data nodes (files are split into blocks and distributed, a technique often called “sharding”) in a way that promotes parallel access during retrieval operations.  With many data nodes working simultaneously on their own parts of a map/reduce job, the overall answer can be produced much faster than it would be on a single server.  This is why your web searches seem to return results almost as fast as you can start the search.
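To make the map/reduce pattern concrete, here is the classic word-count job, a minimal sketch against Hadoop’s standard Java API (org.apache.hadoop.mapreduce).  Each data node runs the mapper over its own blocks of the input in parallel; the framework then gathers each word’s counts and the reducers sum them.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: runs on each data node against its local shard of the
      // input, emitting a (word, 1) pair for every word it sees.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
              word.set(token);
              context.write(word, ONE);
            }
          }
        }
      }

      // Reduce phase: all counts for a given word arrive together and are summed.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // pre-sums on each node
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Packaged as a jar and submitted with the hadoop jar command, the job reads its input from, and writes its results to, directories in the cluster’s file system; the framework takes care of routing each word’s counts from the mappers to the reducers.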

For resiliency in the face of data node failures, the name node directs each data item to more than one data node (often three).  When a data node fails for any reason, the name node takes it offline and uses the surviving copies to construct a replacement copy of the data on another data node.  So processing continues even in the face of data node failures (neat!).  The multiple data copies also allow yet more parallelism during normal job processing.  In practice, then, Hadoop clusters can survive data node failures that would take down more conventional data processing systems; they just don’t stop working.
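Replication, by the way, is ordinary Hadoop configuration rather than special application coding.  Here is a minimal sketch using stock Hadoop’s dfs.replication property and FileSystem.setReplication call (the file path is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("dfs.replication", "3");  // default copy count for new files

        // Raise the copy count for one heavily read file; more copies mean
        // more data nodes that can serve it in parallel.
        FileSystem fs = FileSystem.get(conf);
        fs.setReplication(new Path("/data/example/hot-lookup.txt"), (short) 5);
      }
    }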

So, Hadoop provides applications with

  • Resiliency in the face of hardware failures, and
  • Parallelism for improved performance.

With this background, let’s return to our original question and see how these Hadoop features are relevant on Forward! fabric.

Let’s get node count out of the way first.  Today’s Forward! fabrics do not support enough partitions1 to make them useful for applications that need very many data nodes.  So, for now at least, deploy your search engine application on another platform.  But don’t despair; many interesting Hadoop use cases fit within the up to 96 partitions available on Forward! today.  Take another look at the “PoweredBy” list mentioned above and see how many of the applications fall within today’s Forward! capabilities – there are a lot that do!  And as the partition capacity of our fabrics increases over the next few years, more and more Hadoop applications will come within the range of the Forward! fabric.

Today’s Forward! fabrics, like all previous Unisys computing platforms, are designed and built for reliable, resilient, secure operation.  On such platforms, Hadoop’s resiliency in the face of data node failures is just not as important as it would be on everyday commodity servers, because it’s not needed as often.  Nevertheless, Hadoop can survive any rare partition failures that do occur.  And it’s a good thing that data node failures are rare on Forward!: when they do happen, data node recoveries can be resource “spiky” and disruptive to smooth system operation until the failed node is rebuilt.  Finally, the inherent speed of the Forward! interconnect helps reduce the duration of data node recoveries.

That said, the most important reason for Hadoop on the Forward! fabric is the speed-up that parallel access to data sharded across many nodes delivers for applications.  We have seen many cases where the throughput of retrieval-bound applications was significantly improved by re-engineering them to move the retrieval-bound portions to a Hadoop cluster.  Look around, talk to your industry colleagues, search the internet; you’ll find such cases too!

Finally, the Forward! fabric’s innovative partition commissioning capabilities allow you to repurpose your fabric as business conditions change.  To illustrate, after you have deployed Hadoop on your fabric, you can later suspend operation of the cluster, uninstall the Hadoop nodes, and reconfigure your fabric for other needs.  When you need your Hadoop cluster again, simply reconfigure your fabric and reload your Hadoop cluster.  You can resume your Hadoop operations exactly where you left off.  We use this same commissioning ability to initially deploy Hadoop on Forward! fabric, and you can use it to change the configuration of yours.  Try it!

So in summary, here’s why Hadoop makes sense on Forward!:

  • Inherent parallelism and high-speed interconnect are leveraged to deliver big data analytics applications that can drive competitive advantage for your business.
  • A range of modern application speed-up scenarios (needing no more than 96 partitions) can leverage Forward! fabric today.
  • Innovative commissioning services allow you to repurpose your Forward! fabric as needed without disrupting your Hadoop cluster configuration.

Move Forward! with Hadoop today!

1 For our discussion, “partition” and “node” both refer to independent, shared nothing data processing components.  One Hadoop data node occupies one partition on a fabric.