Workflow Engines for Hadoop
Over the past 2 years, I've had the opportunity to work with two open-source workflow engines for Hadoop. I used and contributed to Azkaban, written and open-sourced by LinkedIn, for over a year while I worked at Adconion. Recently, I've been working with Oozie, which is bundled as part of Cloudera's CDH3. Both systems have a lot of great features but also a number of weaknesses. The strengths and weaknesses of both systems don't always overlap, so I hope that each can learn from the other to improve the tools available for Hadoop.
In that vein, I'm going to produce a head-to-head comparison of the two systems across a number of different features. In the following comparisons, I'm considering the version of Azkaban found in master on GitHub (with exceptions noted) and Oozie from CDH3u3.
Job Definition
Both systems support defining a workflow as a DAG (directed acyclic graph) made up of individual steps.
Azkaban
In Azkaban, a "job" is defined as a Java properties file. You specify a job type, any parameters, and any dependencies the job has. Azkaban doesn't have any notion of a self-contained workflow -- a job can depend on any other job in the system. Each job has a unique identifier, which other jobs use to declare their dependencies.
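For illustration, a job file might look something like this (the job names are made up; type, command, and dependencies are the standard keys):

    # daily-report.job
    type=command
    command=java -cp ./lib/* com.example.DailyReport
    dependencies=extract-logs,load-dimensions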
Oozie
In Oozie, "jobs" are referred to as "actions". A workflow is defined in an XML file, which specifies a start action. There are special actions such as fork and join (which fork and join the dependency graph), as well as the ability to reference a "sub-workflow" defined in another XML file.
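A skeletal workflow definition looks roughly like this (names illustrative; the action body is elided):

    <workflow-app name="example-wf" xmlns="uri:oozie:workflow:0.2">
      <start to="first-step"/>
      <action name="first-step">
        <map-reduce>
          <!-- job-tracker, name-node, configuration, etc. -->
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
      </action>
      <kill name="fail">
        <message>Workflow failed</message>
      </kill>
      <end name="end"/>
    </workflow-app>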
Job Submission
Azkaban
To submit a job to Azkaban, one creates a tar.gz or zip archive and uploads it via Azkaban's web interface. The archive contains any jars necessary to run the workflow, which are automatically added to the job's classpath at launch time.
It's possible to bypass the archive upload (this is what we did at Adconion) by placing the files directly on the filesystem and then telling Azkaban to reload the workflow definitions. I liked this approach because it let us install workflows via RPMs, which gave us the ability to roll back to a previous version.
Oozie
Oozie comes with a command-line program for submitting jobs. This command-line program interacts with the Oozie server via REST. Unfortunately, the REST API (at least in our version of Oozie) doesn't have very good error reporting. It's actually very easy to cause the server to return a 500, in which case you have to dig through Oozie's logs to guess at the problem.
Before submitting a job, the job definition, which is a folder containing XML and JAR files, must be uploaded to HDFS. Any jars needed by the workflow should be placed in the "lib" directory of the workflow folder. Optionally, Oozie can include "system" libraries by setting a system library path in oozie-site and adding a property setting. Note that *only* HDFS is supported, which makes testing an Oozie workflow cumbersome since you must spin up a MiniDFS cluster.
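For reference, the workflow folder that gets uploaded looks something like this (paths illustrative):

    my-workflow/
      workflow.xml
      lib/
        my-udfs.jar

    $ hadoop fs -put my-workflow /user/someuser/apps/my-workflow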
Running a Job
Azkaban
Azkaban provides a simple web interface for running a job. Each job is given a name in its definition, and one can choose the appropriate job in the UI and click "Run Now". It's also easy to construct an HTTP POST to kick off a job via curl or some other tool.
Determining which job to launch, though, can be quite confusing. With Azkaban, you don't launch via the "first" or "start" node in your DAG; rather, you find the last node in your DAG and run it, which causes all of its (recursive) dependencies to run. This model means that you sometimes have to jump through hoops to prevent duplicate work when you have multiple sinks sharing a common DAG.
Azkaban runs the driver program as a child process of the Azkaban process. This means that you're constrained by the resources of the Azkaban box, which caused us to DoS our box a few times. (Azkaban does have a feature to limit the number of simultaneous jobs, which we did use to alleviate this problem, but then job submission effectively turns into FIFO.)
Oozie
Once a workflow is uploaded to HDFS, one submits or runs a job using the Oozie client. You must give Oozie the full path to your workflow.xml file in HDFS as a parameter to the client. This can be cumbersome since the path changes if you version your workflows (and if you don't version your workflows, a re-submission could cause a running job to fail). Job submission typically references a Java properties file that contains a number of parameters for the workflow.
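A typical submission looks something like this; oozie.wf.application.path is the real property, while hosts and paths are illustrative:

    $ cat job.properties
    nameNode=hdfs://namenode-host:8020
    jobTracker=jobtracker-host:8021
    oozie.wf.application.path=${nameNode}/user/someuser/apps/my-workflow/workflow.xml

    $ oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run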
Oozie runs the "driver" program (e.g. PigMain, HiveMain, or your MapReduce program's main) as a map task. This has a few implications:
- If you have the wrong scheduler configuration, it's possible to end up with all map task slots occupied by nothing but these "driver" tasks (one mitigation is sketched after this list).
- If the launcher map task dies (so disable preemption and hope that your TaskTrackers don't die), you end up with an abandoned MapReduce job. Unless you kill that job, retrying at the Oozie level will likely fail.
- There's another level of indirection to determine what happened if your job failed: you have to navigate from Oozie to the Hadoop job to the map task to the map task's output to see what went wrong.
- On the plus side, there's no single box that you might DoS.
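For the scheduler issue in particular, Oozie propagates any configuration property prefixed with oozie.launcher. to the launcher job, so one mitigation is to route the launchers to a dedicated pool. A minimal sketch, assuming the fair scheduler and an illustrative pool name:

    <configuration>
      <property>
        <!-- send Oozie launcher ("driver") tasks to their own fair-scheduler pool -->
        <name>oozie.launcher.mapred.fairscheduler.pool</name>
        <value>oozie-launchers</value>
      </property>
    </configuration>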
Scheduling a Job
Azkaban
Azkaban provides a web UI for scheduling a job with cron-like precision. It's rather easy to recreate this HTTP POST from the command line.
Oozie
Oozie has a great feature called "coordinators". A coordinator is an XML file that optionally describes the datasets a workflow consumes (and their frequency), as well as how often the workflow should run. For example, you can tell it that your dataset is created daily at 1am. If input datasets are described, then your workflow will only be launched once those datasets are available.
A coordinator requires a "startDate", which is rather annoying in the usual case (I just want to launch this workflow today and going forward… we have taken to making the startDate a parameter since we don't necessarily know when the coordinator will be released), but it also makes it very easy to do a backfill of your data. E.g. if you have a new workflow that you want to run over all data from the first of the year onwards, just specify a startDate of Jan 1st.
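A stripped-down coordinator, with the startDate parameterized as described above (names, paths, and dates illustrative):

    <coordinator-app name="daily-agg" frequency="${coord:days(1)}"
                     start="${startDate}" end="2020-01-01T00:00Z" timezone="UTC"
                     xmlns="uri:oozie:coordinator:0.1">
      <action>
        <workflow>
          <app-path>${nameNode}/user/someuser/apps/daily-agg</app-path>
        </workflow>
      </action>
    </coordinator-app>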
Azkaban doesn't include anything like Oozie's coordinators. At Adconion, we wrote our own version, which also supported some nice features like reruns when data arrives late.
Security
Azkaban
Azkaban doesn't support secure Hadoop. That means that if you're running CDH3 or Hadoop 0.20.200+, all of your jobs will be submitted to Hadoop as a single user. There have been discussions about fixing this, and I know that Adconion was working on something. Even so, with the fair scheduler it's possible to assign jobs to different pools.
Oozie
Oozie has built-in support for secure Hadoop, including Kerberos. We haven't used this, but it does mean that you have to configure Hadoop to allow Oozie to proxy as other users. Thus, jobs are submitted to the cluster as the user who submitted the job to Oozie (although it's possible to override this in a non-Kerberos setting).
Property Management
Azkaban
Azkaban has a notion of "global properties" that are embedded within Azkaban itself. These global properties can be referenced from within a workflow, so a generic workflow can be built as long as different values for the global properties are specified in each environment (e.g. testing, staging, production). Typical examples of global properties are things like the location of Pig and database usernames and passwords.
Azkaban determines which Hadoop cluster to talk to by checking HADOOP_HOME and HADOOP_CONF_DIR for the directories containing core-site.xml and mapred-site.xml. This also lets you specify things like the default number of reducers very easily.
Global properties are nice, because if you need to tweak one you don't have to redeploy the workflows that depend on them.
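As a sketch, assuming the usual ${...} substitution and made-up property names: the environment-specific values live in Azkaban's global properties, and individual jobs just reference them.

    # in Azkaban's global properties (one set per environment)
    db.url=jdbc:mysql://db-host:3306/reporting
    db.user=etl

    # in an individual job definition
    type=command
    command=run-report.sh --db ${db.url} --user ${db.user}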
Oozie
Oozie doesn't have a notion of global properties. All properties must be submitted as part of every job run. This includes the jobtracker and the namenode (so make sure you have CNAMEs set up for those in case they ever change!). Also, Oozie doesn't let you refer to anything with a relative path (including sub-workflows!), so we've taken to setting a property called workflowBase that our tooling provides.
At foursquare, we've had to build a bunch of tooling around job submission so that we don't have to keep all of these properties around in each of our workflows. We're still stuck resubmitting all coordinators, though, if we have to make a global change. Also, the jobtracker/namenode settings are extra annoying because you *must* specify them in each and every workflow action. Talk about boilerplate. I assume Yahoo has use-cases that call for supporting multiple clusters from a single Oozie instance, but the design over-complicates things for the typical case.
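Concretely, every single action carries a preamble like this (the workflowBase property is our own convention, as noted above; other names illustrative):

    <action name="some-step">
      <pig>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <script>${workflowBase}/scripts/some-step.pig</script>
      </pig>
      <ok to="next-step"/>
      <error to="fail"/>
    </action>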
Reruns
Azkaban
A neat feature of Azkaban is partial reruns -- i.e. if your 10-step workflow fails on step 8, you can pick up from step 8 and just run the last 3 steps, all via the UI. This was an attractive feature of Azkaban, but we didn't use it.
Oozie
To get a similar feature in Oozie, each action in your workflow must be a sub-workflow, which you can then run individually. At least in theory -- it turns out that you have to set so many properties that it becomes untenable, and even with the right magic incantation, I couldn't get this to work well.
Reruns of failed days in a coordinator are easy, but only in an all-or-nothing sense -- if only the last step of the workflow failed, there's no easy way to rerun just that step.
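The all-or-nothing rerun itself is at least a single client call (exact flags vary by Oozie version, so treat this as a sketch):

    $ oozie job -rerun <coordinator-id> -action 42

where 42 is the number of the coordinator action (day) to rerun.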
UI
Azkaban
Azkaban has a phenomenal UI for viewing workflows (including visualizing the DAG!) and run histories, submitting workflows, creating schedules, and more. The UI does have some bugs -- for example, when you run multiple instances of the same workflow, the history page gets confused -- but in general, it's very easy to tell what the state of the system is.
Oozie
The Oozie UI, on the other hand, is not very useful. It's all Ajax, but formatted in a window sized for a 1999 monitor. It's laggy, double-clicks don't always work, and things that should be links aren't. It's nearly impossible to navigate once you have a non-trivial number of jobs because jobs aren't named in any human-readable form and the UI doesn't support proper sorting.
Monitoring
Azkaban
Azkaban supports a global email notification whenever a job finishes. This is a nice, cheap mechanism for detecting failures. Also, my Adconion colleague Don Pazel contributed a notification system that can be stitched up to detect failures, run times, etc., and expose these via JMX or HTTP. That's what we did at Adconion, but that piece wasn't open-sourced.
Oozie
With Oozie, it's possible to have an email action that mails on success or failure, but such an action has to be defined in each workflow. Since there's no good way to detect failure globally, we've written a workflow that uses the Oozie REST API to check the status of jobs and then sends us a daily email. This is far from ideal since we sometimes don't learn about a failure until hours after it occurred.
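The per-workflow pattern looks roughly like this: route each action's error transition to an email action ahead of the kill node. The EL functions are standard Oozie; the address is illustrative.

    <action name="notify-failure">
      <email xmlns="uri:oozie:email-action:0.1">
        <to>oncall@example.com</to>
        <subject>Workflow ${wf:id()} failed</subject>
        <body>Failed node: ${wf:lastErrorNode()}</body>
      </email>
      <ok to="fail"/>
      <error to="fail"/>
    </action>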
Testing
Azkaban
Testing with Azkaban can be achieved by instantiating the Azkaban JobRunner and using the Java API to submit a job. We had a lot of success with this at Adconion, and tests ran in a matter of seconds.
Oozie
Oozie has a LocalOozie utility, but it requires spinning up an HDFS cluster (since Oozie has lots of hard-coded checks that data lives in HDFS). Thus, integration testing is slow (on the order of a minute for a single workflow).
Oozie also has a class that can validate a workflow against its schema, which we've incorporated into our build. But that doesn't catch things like parameter typos or references to non-existent actions.
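The same schema check is also a one-liner with the Oozie client, which makes it easy to wire into a build:

    $ oozie validate workflow.xml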
Custom Job Types
Azkaban
Writing a custom job is fairly straightforward. The Azkaban API has some abstract classes you can subclass. Unfortunately, you must recompile Azkaban to expose a new job type.
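From what I remember, the shape is roughly the following; treat the class and method names as hypothetical, since the exact API differs between Azkaban versions:

    // Hypothetical sketch of a custom Azkaban job type. The base class
    // name and constructor signature are from memory and may differ in
    // your version; the import is omitted for the same reason.
    public class MyCustomJob extends AbstractJob {
        public MyCustomJob(String name) {
            super(name);
        }

        @Override
        public void run() throws Exception {
            // the actual work of the job goes here
        }
    }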
Oozie
Admittedly, I haven't tried this. But the action classes that I've seen are well into the hundreds of lines of code.
Baked-in support
Azkaban
Azkaban has baked-in support for Pig, Java, shell, and MapReduce jobs.
Oozie
Oozie has baked-in support for Pig, Hive, and Java. The shell and ssh actions have been deprecated. In addition, though, an Oozie action can have a "prepare" statement to clean up directories to which you might want to write. But these directories *must* be in HDFS, which means that if you use <prepare> then your code is less testable.
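The prepare block itself is simple enough (path illustrative):

    <prepare>
      <delete path="${nameNode}/user/someuser/output/daily-agg"/>
    </prepare>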
Storing State
Azkaban
Azkaban uses JSON files on the filesystem to store state. It caches these files in memory using an LRU cache, which keeps the UI responsive. The JSON files are also easy to inspect if you want to poke around while bypassing the UI. Creating a backup is as simple as snapshotting a single directory on the filesystem.
Oozie
Oozie uses an RDBMS for storing state, so to create a backup one must back up that RDBMS via whatever mechanism it provides.
Documentation
Both systems feature rich documentation. Oozie's documentation tends to be much longer since it includes XML fragments as well as details of the XML schemas. Over the years, Azkaban's documentation has at times fallen out of sync with the implementation, but the general docs are still maintained.
A note on Oozie versions
To be fair to Oozie, I haven't tried the latest version yet. Hopefully many of the issues I've noted are fixed; if not, it'll be easier to file bug reports once we're on the latest version. A number of bugs I found in the CDH3u3 version of Oozie were either fixed or not applicable in trunk, so it became difficult to keep track of what was what.
Summary
Both Azkaban and Oozie offer substantial features and are powerful workflow engines. The weaknesses and strengths of the two systems tend to complement one another, and it'd be fantastic if each integrated the strengths of the other.
It's worth noting that there are a number of other workflow systems available that I haven't used. I'm not discounting these systems; I simply have no authority to speak on them. Lots of folks also seem to be using in-house engines, and it'd be fantastic to see more of that work open-sourced.
Writing a general-purpose workflow engine is very hard, and there are certainly remnants of the "LinkedIn" or "Yahoo" way of doing things in each system. As communities grow, hopefully these engines will start to lose those types of annoyances.
The second half of 2012 could be very interesting for workflow-engine improvements. For example, there is talk of a new UI for building Oozie workflows, HCatalog and Oozie integration could be interesting, and YARN integration could make for a better solution for distributing the "driver" programs of workflows (I've heard rumblings that Oozie could go down this path).
Lastly, I realize that this post more than likely has mistakes or oversights. These are inadvertent -- if you find any, please make a note in the comments, and I will try to correct them.