Monday, May 25, 2015

Oozie hive action job stuck in PREP state.

Problem:

An Oozie-triggered workflow containing a Hive action is stuck in the PREP state:
[gpadmin@pccadmin ~]$ oozie job -oozie http://pccadmin.phd.local:11000/oozie -info 0000000-140619225354896-oozie-oozi-W
Job ID : 0000000-140619225354896-oozie-oozi-W
------------------------------------------------------------------------------------------------------------------------------------
Workflow Name : hive-wf
App Path : hdfs://test/user/gpadmin/examples/apps/hive
Status : RUNNING
Run : 0
User : gpadmin
Group : -
Created : 2014-06-20 05:54 GMT
Started : 2014-06-20 05:54 GMT
Last Modified : 2014-06-20 05:54 GMT
Ended : -
CoordAction ID: -
 
Actions
------------------------------------------------------------------------------------------------------------------------------------
ID                                           Status      Ext ID     Ext Status      Err Code
------------------------------------------------------------------------------------------------------------------------------------
0000001-140619235104359-oozie-oozi-W@:start:   OK           -          OK             -
------------------------------------------------------------------------------------------------------------------------------------
0000001-140619235104359-oozie-oozi-W@hive-node PREP         -           -              -
------------------------------------------------------------------------------------------------------------------------------------

Cause: When an Oozie job is stuck in the PREP state, start by examining the logs to identify the cause. Begin with oozie.log; in the instance described in this article, execution of the Oozie Hive action threw a NoClassDefFoundError for the HiveConf class, after which the action remained stuck.
0:24:52,497 ERROR ActionStartXCommand:536 - USER[nweissi] GROUP[-] TOKEN[] APP[hive-wf] JOB[0000000-140619202307001-oozie-oozi-W] ACTION[0000000-140619202307001-oozie-oozi-W@hive-node] Error,
java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf
 at org.apache.oozie.action.hadoop.HCatCredentialHelper.getHCatClient(HCatCredentialHelper.java:79)
 at org.apache.oozie.action.hadoop.HCatCredentialHelper.set(HCatCredentialHelper.java:52)
 at 
..
..
 at java.lang.Thread.run(Thread.java:744)
Exception in thread "pool-2-thread-9" java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf
 at org.apache.oozie.action.hadoop.HCatCredentialHelper.getHCatClient(HCatCredentialHelper.java:79)

Fix:
The common.loader variable in catalina.properties sets the classpath for the Oozie launcher jobs. The file is located under the $CATALINA_BASE/conf directory; use /etc/default/oozie to identify the CATALINA_BASE path in PHD 1.x / 2.x, or whichever file sets this configuration in your install.
Ensure that catalina.properties lists the required libraries. In this case the missing class was HiveConf, which is packaged in hive-common.jar and hive-exec.jar; the Hive library directory was missing from the common.loader variable, so the launcher job failed to retrieve the HCat credentials referenced in workflow.xml on a secured cluster.
Another way to verify is to open the Oozie Web Console UI at http://<oozie-server-host>:11000/oozie/, browse to the "System Info" -> "Java System Props" tab, and look for the common.loader parameter. It must include the required jars.
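As a quick check from the shell (a sketch only; CATALINA_BASE and the Hive library path vary by install, so substitute your own):

# Identify CATALINA_BASE (PHD 1.x / 2.x sets it in /etc/default/oozie)
grep CATALINA_BASE /etc/default/oozie

# Inspect the classpath used by the Oozie launcher jobs
grep '^common.loader' ${CATALINA_BASE}/conf/catalina.properties

# Confirm the jars that carry HiveConf exist in the directory referenced above
# (the path below is an assumption for a PHD install)
ls /usr/lib/gphd/hive/lib/hive-common*.jar /usr/lib/gphd/hive/lib/hive-exec*.jar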

How to enable kerberos trace logging with HAWQ ?

Environment
  • KRB5_TRACE only works with MIT Kerberos 1.9.0 and later.
  • HAWQ 1.1.4 and above
Question:
How do you enable Kerberos trace logging for the HAWQ database to help identify Kerberos issues?

Solution:
Step 1: Edit /usr/local/hawq/greenplum_path.sh file to include the below entry:
export KRB5_TRACE=<File Name>
Note: Use a regular file rather than a device such as /dev/stdout for the value above; writing trace output to the terminal slows down the initialization process, and the messages would not be available for future reference.
Enabling KRB5_TRACE will send tracing information for kinit to the specified file.

Step 2: Source greenplum_path.sh file
source /usr/local/hawq/greenplum_path.sh

Step 3: Restart HAWQ database
Now you can open the file created and review any Kerberos errors encountered.
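A minimal sketch of the full sequence, assuming gpadmin can edit greenplum_path.sh and a trace file under /tmp:

# Step 1: add the trace setting (file name is only an example)
echo 'export KRB5_TRACE=/tmp/krb5_trace.log' >> /usr/local/hawq/greenplum_path.sh

# Step 2: pick up the change in the current shell
source /usr/local/hawq/greenplum_path.sh

# Step 3: restart HAWQ, then review the trace output
gpstop -ar
less /tmp/krb5_trace.log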

How to configure queues using YARN capacity-scheduler.xml ?

In this article, we will go through the important aspects of setting up queues with the YARN Capacity Scheduler. 
Before we set up the queues, let's first see how to configure the maximum amount of memory available to the YARN NodeManagers. To configure a PHD cluster to dedicate a specific amount of memory to the NodeManagers, edit the parameter "yarn.nodemanager.resource.memory-mb" in the YARN configuration file "/etc/gphd/hadoop/conf/yarn-site.xml". After the desired value has been defined, the YARN services must be restarted for the change to take effect.
yarn-site.xml
 <property>
     <name>yarn.nodemanager.resource.memory-mb</name>
     <value>16384</value>
 </property>
In the above example, we have assigned 16 GB of memory per server for use by the YARN NodeManager.
We can now define multiple queues, based on the operations to be performed, and give each a share of the cluster resources defined above. However, for YARN to use the Capacity Scheduler, the parameter "yarn.resourcemanager.scheduler.class" in yarn-site.xml must be set to CapacityScheduler. In PHD the value is set to CapacityScheduler by default, so we can define the queues straight away.
 yarn-site.xml
 <property>
     <name>yarn.resourcemanager.scheduler.class</name>
 <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
 </property> 
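If you want to confirm which scheduler is configured before defining queues, a quick check from the shell (assuming the PHD configuration path shown above) is:

grep -A 1 'yarn.resourcemanager.scheduler.class' /etc/gphd/hadoop/conf/yarn-site.xml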
Setting up the queues:
The CapacityScheduler has a pre-defined queue called root. All queues in the system are children of the root queue. In capacity-scheduler.xml, the parameter "yarn.scheduler.capacity.root.queues" is used to define the child queues. For example, to create 3 queues, specify their names in a comma-separated list. 
<property>
     <name>yarn.scheduler.capacity.root.queues</name>
     <value>alpha,beta,default</value>
     <description>
       The queues at this level (root is the root queue).
     </description>
 </property>
With the above change in place, we can proceed to specify the queue-specific parameters. Queue-specific properties follow a standard naming convention and include the path of the queue to which they apply.
The general syntax is: yarn.scheduler.capacity.<queue-path>.<parameter>
where :
<queue-path> : identifies the queue (its full path under root).
<parameter>  : identifies the parameter whose value is being set.

Please refer to the Apache YARN Capacity Scheduler documentation for the complete list of configurable parameters.
Link: http://hadoop.apache.org/docs/r2.0.5-alpha/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html 

Let's move on to setting some of the major parameters.
1) Queue Resource Allocation parameters:
a) yarn.scheduler.capacity.<queue-path>.capacity
To set the percentage of cluster resources allocated to each queue, edit the value of the parameter yarn.scheduler.capacity.<queue-path>.capacity in capacity-scheduler.xml accordingly. In the example below, we set the queues to use 50%, 30% and 20% of the cluster resources that were made available earlier through "yarn.nodemanager.resource.memory-mb" on each NodeManager.

Example below:
 <property>
     <name>yarn.scheduler.capacity.root.alpha.capacity</name>
     <value>50</value>
     <description>Alpha queue target capacity.</description>
   </property>
 
 <property>
     <name>yarn.scheduler.capacity.root.beta.capacity</name>
     <value>30</value>
     <description>Beta queue target capacity.</description>
   </property>
 
 <property>
     <name>yarn.scheduler.capacity.root.default.capacity</name>
     <value>20</value>
     <description>Default queue target capacity.</description>
   </property>
There are a few other parameters that define resource allocation and can be set as well; refer to the Apache YARN Capacity Scheduler documentation for them.

2) Queue Administration & Permissions parameters: 
a) yarn.scheduler.capacity.<queue-path>.state
For a queue to accept job / application submissions, its state must be RUNNING; otherwise you may receive an error message stating that the queue is STOPPED. RUNNING and STOPPED are the permissible values for this parameter.

Example below:
  <property>
     <name>yarn.scheduler.capacity.root.alpha.state</name>
     <value>RUNNING</value>
     <description>
       The state of the alpha queue. State can be one of RUNNING or STOPPED.
     </description>
   </property>
 
 <property>
    <name>yarn.scheduler.capacity.root.beta.state</name>
     <value>RUNNING</value>
     <description>
       The state of the beta queue. State can be one of RUNNING or STOPPED.
     </description>
   </property>
 
 <property>
     <name>yarn.scheduler.capacity.root.default.state</name>
     <value>RUNNING</value>
     <description>
       The state of the default queue. State can be one of RUNNING or STOPPED.
     </description>
   </property>
b) yarn.scheduler.capacity.<queue-path>.acl_submit_applications 
To allow particular users to submit jobs / applications to a specific queue, list the usernames / groups for this parameter in a comma-separated list. The special value * allows all users to submit jobs / applications to the queue.

Example format for specifying the list of users:
1) <value>user1,user2</value> : This indicates that user1 and user2 are allowed.
2) <value>user1,user2 group1,group2</value> : This indicates that user1, user2 and all the users from group1 & group2 are allowed.
3) <value>group1,group2</value>: This indicates that all the users from group1 & group2 are allowed.
The first thing to define is the value for the non-leaf root queue, for example "hadoop,yarn,mapred,hdfs"; this ensures that only those special users can use all the queues. Child queues inherit the permissions of their parent, and the root queue's default is "*", so if you do not restrict the list at the root queue, every user may still be able to run jobs on any of the queues. By specifying "hadoop,yarn,mapred,hdfs" for the non-leaf root queue, you can then control user access per child queue.
Non-Leaf Root queue :
Example below:
 <property>
    <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
     <value>hadoop,yarn,mapred,hdfs</value>
     <description>
       The ACL of who can submit jobs to the root queue.
     </description>
   </property>
Child Queue under root queue / Leaf child queue: 
Example below:
<property>
   <name>yarn.scheduler.capacity.root.alpha.acl_submit_applications</name>
   <value>sap_user hadoopusers</value>
     <description>
      The ACL of who can submit jobs to the alpha queue.
     </description>
   </property>
 
 <property>
     <name>yarn.scheduler.capacity.root.beta.acl_submit_applications</name>
     <value>bi_user,etl_user failgroup</value>
     <description>
       The ACL of who can submit jobs to the beta queue.
     </description>
   </property>
 
   <property>
     <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
     <value>adhoc_user hadoopusers</value>
     <description>
       The ACL of who can submit jobs to the default queue.
     </description>
   </property>
c) yarn.scheduler.capacity.<queue-path>.acl_administer_queue
To set the list of administrators who can manage applications on a queue, specify the usernames in a comma-separated list for this parameter. The special value * allows all users to administer applications running on the queue.
These properties are defined the same way as acl_submit_applications and follow the same syntax.

Example below:
 <property>
     <name>yarn.scheduler.capacity.root.alpha.acl_administer_queue</name>
     <value>sap_user</value>
     <description>
       The ACL of who can administer jobs on the alpha queue.
     </description>
   </property>
 
  <property>
     <name>yarn.scheduler.capacity.root.beta.acl_administer_queue</name>
     <value>bi_user,etl_user</value>
     <description>
       The ACL of who can administer jobs on the beta queue.
     </description>
   </property>
 
  <property>
     <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
     <value>adhoc_user</value>
     <description>
       The ACL of who can administer jobs on the default queue.
     </description>
   </property>
3) There are also queue parameters related to "Running and Pending Application Limits" which can be defined, but they are not covered in this article.
Bringing the queues into effect:
Once you have defined the required parameters in the capacity-scheduler.xml file, run the command below to bring the changes into effect.
yarn rmadmin -refreshQueues

After the command completes successfully, you can verify that the queues are set up using the 2 options below:

1) hadoop queue -list
[root@phd11-nn ~]# hadoop queue -list
 DEPRECATED: Use of this script to execute mapred command is deprecated.
 Instead use the mapred command for it.
 
 14/01/16 22:10:25 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
 14/01/16 22:10:25 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
 ======================
 Queue Name : alpha
 Queue State : running
 Scheduling Info : Capacity: 50.0, MaximumCapacity: 1.0, CurrentCapacity: 0.0
 ======================
 Queue Name : beta
 Queue State : running
 Scheduling Info : Capacity: 30.0, MaximumCapacity: 1.0, CurrentCapacity: 0.0
 ======================
 Queue Name : default
 Queue State : running
 Scheduling Info : Capacity: 20.0, MaximumCapacity: 1.0, CurrentCapacity: 0.0
2) By opening the YARN ResourceManager GUI and navigating to the Scheduler tab. The ResourceManager GUI is available at http://<Resourcemanager-hostname>:8088,
where 8088 is the default port; replace <Resourcemanager-hostname> with the hostname from your PHD cluster. The Scheduler tab lists the queues created, for example "alpha".
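If you prefer the command line, the ResourceManager in Hadoop 2.x also exposes the same queue information through its REST API (a quick sketch; replace the hostname and port as per your cluster):

curl -s http://<Resourcemanager-hostname>:8088/ws/v1/cluster/scheduler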
Get ready to execute a job by submitting it to a specific queue:
Before you execute any Hadoop job, use the command below to identify the queues to which you can submit your jobs.
[fail_user@phd11-nn ~]$ id
 uid=507(fail_user) gid=507(failgroup) groups=507(failgroup)
 
 [fail_user@phd11-nn ~]$ hadoop queue -showacls
 Queue acls for user :  fail_user
 
 Queue  Operations
 =====================
 root  ADMINISTER_QUEUE
 alpha  ADMINISTER_QUEUE
 beta  ADMINISTER_QUEUE,SUBMIT_APPLICATIONS
 default  ADMINISTER_QUEUE
As the output above shows, fail_user can submit applications only to the beta queue, since it is part of "failgroup", which was granted access only to the beta queue in capacity-scheduler.xml as described earlier.

Let's move a bit closer to running our first job in this article. To submit an application to a specific queue, pass the parameter -Dmapreduce.job.queuename=<queue-name> (or the older -Dmapred.job.queue.name=<queue-name>).

The examples below illustrate how to run a job on a specific queue.
[fail_user@phd11-nn ~]$ yarn jar /usr/lib/gphd/hadoop-mapreduce/hadoop-mapreduce-examples-2.0.5-alpha-gphd-2.1.1.0.jar wordcount -D mapreduce.job.queuename=beta /tmp/test_input /user/fail_user/test_output
14/01/17 23:15:31 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
 14/01/17 23:15:31 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
 14/01/17 23:15:31 INFO input.FileInputFormat: Total input paths to process : 1
 14/01/17 23:15:31 INFO mapreduce.JobSubmitter: number of splits:1
 In DefaultPathResolver.java. Path = hdfs://phda2/user/fail_user/test_output
 14/01/17 23:15:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1390019915506_0001
 14/01/17 23:15:33 INFO client.YarnClientImpl: Submitted application application_1390019915506_0001 to ResourceManager at phd11-nn.saturn.local/10.110.127.195:8032
 14/01/17 23:15:33 INFO mapreduce.Job: The url to track the job: http://phd11-nn.saturn.local:8088/proxy/application_1390019915506_0001/
 14/01/17 23:15:33 INFO mapreduce.Job: Running job: job_1390019915506_0001
 2014-01-17T23:15:40.702-0800: 11.670: [GC2014-01-17T23:15:40.702-0800: 11.670: [ParNew: 272640K->18064K(306688K), 0.0653230 secs] 272640K->18064K(989952K), 0.0654490 secs] [Times: user=0.06 sys=0.04, real=0.06 secs]
 14/01/17 23:15:41 INFO mapreduce.Job: Job job_1390019915506_0001 running in uber mode : false
 14/01/17 23:15:41 INFO mapreduce.Job:  map 0% reduce 0%
 14/01/17 23:15:51 INFO mapreduce.Job:  map 100% reduce 0%
 14/01/17 23:15:58 INFO mapreduce.Job:  map 100% reduce 100%
 14/01/17 23:15:58 INFO mapreduce.Job: Job job_1390019915506_0001 completed successfully
While the job is executing, you can also monitor the ResourceManager GUI to see which queue the job was submitted to. In the Scheduler view, the green bar indicates the queue being used by the word count application above.


Now, let's see what happens when a queue is used to which fail_user is not allowed to submit applications. This should fail.
 [fail_user@phd11-nn ~]$ yarn jar /usr/lib/gphd/hadoop-mapreduce/hadoop-mapreduce-examples-2.0.5-alpha-gphd-2.1.1.0.jar wordcount -D mapreduce.job.queuename=alpha /tmp/test_input /user/fail_user/test_output_alpha
14/01/17 23:20:07 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is inited.
 14/01/17 23:20:07 INFO service.AbstractService: Service:org.apache.hadoop.yarn.client.YarnClientImpl is started.
 14/01/17 23:20:07 INFO input.FileInputFormat: Total input paths to process : 1
 14/01/17 23:20:07 INFO mapreduce.JobSubmitter: number of splits:1
 In DefaultPathResolver.java. Path = hdfs://phda2/user/fail_user/test_output_alpha
 14/01/17 23:20:08 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1390019915506_0002
 14/01/17 23:20:08 INFO client.YarnClientImpl: Submitted application application_1390019915506_0002 to ResourceManager at phd11-nn.saturn.local/10.110.127.195:8032
 14/01/17 23:20:08 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/fail_user/.staging/job_1390019915506_0002
 14/01/17 23:20:08 ERROR security.UserGroupInformation: PriviledgedActionException as:fail_user (auth:SIMPLE) cause:java.io.IOException: Failed to run job : org.apache.hadoop.security.AccessControlException: User fail_user cannot submit applications to queue root.alpha
 java.io.IOException: Failed to run job : org.apache.hadoop.security.AccessControlException: User fail_user cannot submit applications to queue root.alpha
      at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:307)
      at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:395)
      at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
      at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
      at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215)
      at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1236)
      at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
      at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
      at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

Oozie - A Workflow Scheduler

What is Oozie ?
  • Oozie is a server-based workflow engine specialized in running workflow jobs.
  • A workflow is a collection of nodes arranged in a control-dependency DAG (Directed Acyclic Graph).
  • Workflow nodes are classified in 2 types:
    • Control flow nodes: Nodes that control the start and end of the workflow and workflow job execution path.
    • Action nodes: Nodes that trigger the execution of a computation/processing task.
  • Oozie workflow definitions are written in hPDL (an XML Process Definition Language)
  • Oozie supports actions to execute jobs for below hadoop components:
    • Apache MapReduce
    • Apache Pig
    • Apache Hive
    • Apache Sqoop
    • System specific jobs (java programs, shell scripts etc.)
  • Oozie is installed on top of Hadoop, so the Hadoop components must be installed and running
Oozie - What are its components ?
  • Tomcat server - A web server that includes a JSP engine and a variety of connectors; its core component is called Catalina. Catalina provides Tomcat's implementation of the servlet specification, so when you start your Tomcat server you are actually starting Catalina.
  • Database: Oozie stores workflow jobs details in the database.
    • Default: Derby
    • Others supported: HSQL, MySQL, Oracle and PostgreSQL
  • ExtJS - Optional. A JavaScript application framework for building interactive web applications; it is used to enable the Oozie web console UI, from which Oozie workflow job details can be viewed.
  • Client CLI - Utility providing command line functions to interact with Oozie. It is used to submit, monitor and control Oozie jobs.
Oozie - Derby database & Web Console UI
  • Oozie uses the Apache Derby RDBMS by default to store workflow execution details.
  • By default, the Derby database log is stored at /var/log/gphd/oozie/derby.log
  • “ij” is an interactive SQL scripting tool that can be used to connect to derby
  • How to connect to derby database: /var/lib/gphd/oozie/oozie-db is the database path
[gpadmin@pccadmin]$ sudo -u oozie /usr/java/jdk1.7.0_45/db/bin/ij
ij version 10.8
ij> connect 'jdbc:derby:/var/lib/gphd/oozie/oozie-db'
  • The Oozie Web Console UI pulls details from the configured (Derby) database; logging in to the database is not required.
Putting it all together !!
  • User installs Oozie, configures it with an appropriate database
  • User creates a workflow XML and the required configuration files, which specify the job properties and the type of action to be used
  • User submits the job to the oozie server using Oozie CLI
  • User can monitor the status of job execution using Oozie Web Console UI or Oozie CLI options
Let’s understand a workflow !!
A workflow consists of 2 types of nodes:
  1. Action nodes to execute tasks: each of the action types below invokes a job for the respective application. For example, the hive action is used to invoke a Hive SQL script.
    • Map Reduce
    • Pig
    • Hive
    • FS (HDFS)
    • Sub-Workflow
    • Java program
    • ssh
  2. Control nodes to provide the functions below:
    • Start : Entry point for a workflow job; it indicates the first workflow node the job must transition to.
    • End : The end of a workflow job; it indicates that the workflow job has completed successfully.
    • Kill : The kill node allows a workflow job to kill itself. When a workflow job reaches a kill node, it finishes in error (KILLED).
    • Decision : A decision node enables a workflow to select which execution path to follow; its behavior is similar to a switch-case statement.
    • Fork and join : A fork node splits one path of execution into multiple concurrent paths of execution. A join node waits until every concurrent execution path from a previous fork node arrives at it.
The diagram below illustrates some of these action and control nodes.
What we need to build an application workflow !!
1. On local filesystem :
  • job.properties
2. On HDFS :
  • Application workflow directory
    • All configuration files and scripts (Pig, hive, shell) needed by the workflow action nodes should be under the application HDFS directory.
    • workflow.xml  
      • Controls the execution of the application
    • “lib” named directory (Optional)
      • Any additional jars required for program execution are placed here by the user. During execution, all jars under lib are added to the classpath (a layout sketch follows this list).
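As a quick orientation, the HDFS layout of an application directory typically looks like the sketch below (an illustration only; the placeholder paths should be replaced with your own):

hdfs dfs -ls -R /user/<user>/<application-dir>
# <application-dir>/workflow.xml   -> controls the execution of the application
# <application-dir>/script.q       -> any scripts referenced by the action nodes
# <application-dir>/lib/           -> optional directory holding extra jars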

Here is an example of a main workflow with a sub-workflow that runs a Hive action.
Ex 1 : Create a workflow to execute the below 2 actions:
  • Execute a FS action (Main Workflow)
  • Execute a sub workflow with hive action (Sub Workflow)
In order to perform the above task, we will need to create the below files:
  1. job.properties (On local filesystem)
  2. workflow.xml for Main workflow (On HDFS)
  3. workflow.xml for Sub Workflow (On HDFS)
  4. script.q (A text file with Hive SQL)
Once the workflow.xml files are created, they should be placed under the required directories on HDFS.
job.properties - Let's have it under examples/apps/hive/
nameNode=hdfs://test
jobTracker=pccadmin.phd.local:8032
queueName=default
examplesRoot=examples
oozie.use.system.libpath=true
#queueName=batch
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/hive
workflow.xml (Parent workflow)
hdfs dfs -ls /user/gpadmin/examples/apps/hive/workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.2.5" name="hive-wf">
  <start to="pwf"/>
  <action name='pwf'>
  <fs>
      <delete path="${nameNode}/user/gpadmin/examples/output-data/hivesub"/>
  </fs>
      <ok to="swf"/>
     <error to="fail"/>
  </action>
  <action name='swf'>
     <sub-workflow>
        <app-path>${nameNode}/user/gpadmin/${examplesRoot}/apps/hivesub</app-path>
        <propagate-configuration/>
     </sub-workflow>
     <ok to="end"/>
     <error to="fail"/>
   </action>
   <kill name="fail">
      <message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
   </kill>
  <end name="end"/>
</workflow-app>
workflow.xml (sub workflow) 
hdfs dfs -ls /user/gpadmin/examples/apps/hivesub/workflow.xml
<workflow-app xmlns="uri:oozie:workflow:0.2.5" name="start-swf">
    <credentials>
        <credential name='hive_auth' type='hcat'>
        <property>
              <name>hcat.metastore.uri</name>
              <value>thrift://hdw2.phd.local:9083</value>
        </property>
        <property>
              <name>hcat.metastore.principal</name>
              <value>hive/_HOST@MYREALM</value>
        </property>
       </credential>
   </credentials>
         <start to="hive-node"/>
   <action name="hive-node" cred='hive_auth' >
         <hive xmlns="uri:oozie:hive-action:0.2" >
               <job-tracker>${jobTracker}</job-tracker>
               <name-node>${nameNode}</name-node>
               <prepare>
                     <delete path="${nameNode}/user/gpadmin/examples/output-data/hivesub"/>
                     <mkdir path="${nameNode}/user/gpadmin/examples/output-data"/>
               </prepare>
                <job-xml>${nameNode}/user/oozie/hive-oozie-site.xml</job-xml>
                <configuration>
                      <property>
                           <name>mapred.job.queue.name</name>
                           <value>${queueName}</value>
                       </property>
                 </configuration>
                 <script>script.q</script>
                 <param>INPUT=${nameNode}/user/gpadmin/examples/input-data/table</param>
                 <param>OUTPUT=${nameNode}/user/gpadmin/examples/output-data/hive</param>
         </hive>
         <ok to="end"/>
         <error to="fail"/>
      </action>
   <kill name="fail">
       <message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
   </kill>
 <end name="end"/>
</workflow-app>
[gpadmin@pccadmin ~]$ hdfs dfs -cat examples/apps/hivesub/script.q
DROP Table if exists testa;
CREATE EXTERNAL TABLE testa (a INT) STORED AS TEXTFILE LOCATION '${INPUT}';
INSERT OVERWRITE DIRECTORY '${OUTPUT}' SELECT * FROM testa;
[gpadmin@pccadmin ~]$ oozie job -oozie http://localhost:11000/oozie -config examples/apps/hive/job.properties -run
job: 0000000-140709224809456-oozie-oozi-W
[gpadmin@pccadmin ~]$ oozie job -oozie http://localhost:11000/oozie -info 0000000-140709224809456-oozie-oozi-W
Job ID : 0000000-140709224809456-oozie-oozi-W
------------------------------------------------------------------------------------------------------------------------------------
Workflow Name : hive-wf
App Path : hdfs://test/user/gpadmin/examples/apps/hive
Status : RUNNING
Run : 0
User : gpadmin
Group : -
Created : 2014-07-10 06:58 GMT
Started : 2014-07-10 06:58 GMT
Last Modified : 2014-07-10 06:58 GMT
Ended : -
CoordAction ID: -
Actions
------------------------------------------------------------------------------------------------------------------------------------
ID Status Ext ID Ext Status Err Code
------------------------------------------------------------------------------------------------------------------------------------
0000000-140709224809456-oozie-oozi-W@:start: OK - OK -
------------------------------------------------------------------------------------------------------------------------------------
0000000-140709224809456-oozie-oozi-W@pwf OK - OK -
------------------------------------------------------------------------------------------------------------------------------------
0000000-140709224809456-oozie-oozi-W@swf RUNNING 0000001-140709224809456-oozie-oozi-W - -
------------------------------------------------------------------------------------------------------------------------------------
[gpadmin@pccadmin ~]$ oozie job -oozie http://localhost:11000/oozie -info 0000000-140709224809456-oozie-oozi-W
Job ID : 0000000-140709224809456-oozie-oozi-W
------------------------------------------------------------------------------------------------------------------------------------
Workflow Name : hive-wf
App Path : hdfs://test/user/gpadmin/examples/apps/hive
Status : SUCCEEDED
Run : 0
User : gpadmin
Group : -
Created : 2014-07-10 06:58 GMT
Started : 2014-07-10 06:58 GMT
Last Modified : 2014-07-10 06:59 GMT
Ended : 2014-07-10 06:59 GMT
CoordAction ID: -
Actions
------------------------------------------------------------------------------------------------------------------------------------
ID Status Ext ID Ext Status Err Code
------------------------------------------------------------------------------------------------------------------------------------
0000000-140709224809456-oozie-oozi-W@:start: OK - OK -
------------------------------------------------------------------------------------------------------------------------------------
0000000-140709224809456-oozie-oozi-W@pwf OK - OK -
------------------------------------------------------------------------------------------------------------------------------------
0000000-140709224809456-oozie-oozi-W@swf OK 0000001-140709224809456-oozie-oozi-W SUCCEEDED -
------------------------------------------------------------------------------------------------------------------------------------
0000000-140709224809456-oozie-oozi-W@end OK - OK -
------------------------------------------------------------------------------------------------------------------------------------

Logs useful for HAWQ troubleshooting

This article will help you locate the logs required when troubleshooting HAWQ-related issues. 

Let's quickly list the different logs available that are helpful for troubleshooting. There are primarily 5 logs that can provide insight into a problem:

1. HAWQ Master Logs - Clients enter the HAWQ database through the HAWQ master. All activity related to HAWQ access, query processing and related operations is logged in the HAWQ master logs.

2. HAWQ Segment Logs - The HAWQ master dispatches query execution tasks to its segments. Each segment is a database in itself, and logs for any operation on the segment database are tracked in the HAWQ segment logs.
There is a correlation between the HAWQ master logs and the HAWQ segment logs. If a query is issued on the HAWQ master and logging is enabled at the segments, both the master and the segments record information about query processing. Often there are events specific to the segment executing the query that are written only to the HAWQ segment logs, but they can be easily correlated using a unique identifier, the session id.
Every session to the HAWQ database is assigned a unique session id, so any query run in that session carries the session id as its identifier, and master and segment logs can be correlated using it. For example, con9866 can be used to identify related activity on the segments.
2014-07-15 09:40:22.899924 PDT,"gpadmin","gpadmin",p91220,th699058016,"[local]",,2014-07-15 09:38:35 PDT,33851,con9866,cmd16,seg-1,,,x33851,sx1,"LOG","00000","2014-07-15 09:40:22:898757 PDT,THD000,TRACE,""CAutoTrace.cpp:53: [OPT]: Search terminated at stage 1/1"",",,,,,,"select gp.hostname, pe.fselocation from pg_filespace_entry pe, gp_segment_configuration gp where pe.fsedbid = gp.dbid and fselocation !~ 'hdfs://';",0,,"COptTasks.cpp",463,
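For example, to pull everything logged for that session (a sketch using the con9866 session from the line above; the pg_log directories are located as described in the "How to locate the logs" section below):

# On the HAWQ master host
grep con9866 /data1/master/gpseg-1/pg_log/gpdb-2014-07-15_*.log

# On each segment host
grep con9866 /data1/primary/gpseg*/pg_log/gpdb-2014-07-15_*.log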

3. Namenode Logs - HAWQ is an MPP database engine that runs on Hadoop, so the NameNode is a critical entity in the HAWQ ecosystem. When an HDFS operation is requested, the HAWQ segments connect to the NameNode to retrieve the location of the files on HDFS and use that information for further processing.

4. Datanode Logs - DataNodes are the daemons that actually store HAWQ user table data. If an HDFS operation fails, HAWQ queries will fail, so the DataNode logs help identify critical information when query execution fails.

5. HAWQ Administration Logs - Any maintenance operation performed with the HAWQ management utilities against the HAWQ database is tracked in these logs, for example start, stop, etc.

How to locate the logs:
HAWQ master logs.
1. Go to the HAWQ master (ssh <hawq_master>).
2. Verify that the HAWQ master process is running.
[gpadmin@hawq-master ~]$ ps -ef | egrep silent | egrep master
gpadmin 480392 1 0 Jul04 ? 00:00:05 /usr/local/hawq-1.2.0.1/bin/postgres -D /data1/master/gpseg-1 -p 5432 -b 1 -z 6 --silent-mode=true -i -M master -C -1 -x 8 -E
3. The value of the -D option in the example above is "/data1/master/gpseg-1", which is known as the master data directory for HAWQ. The term MASTER_DATA_DIRECTORY is often used to reference it.
4. Based on the above identified MASTER_DATA_DIRECTORY, HAWQ Master Logs can be located under:
 "/data1/master/gpseg-1/pg_log
Or you can login to the database and run the below command:
[gpadmin@hdw3 pg_log]$ psql
gpadmin=# show data_directory ;
 data_directory
-----------------------
 /data1/master/gpseg-1
(1 row)
5. Based on the date of the issue, the relevant HAWQ log files can be located easily. HAWQ log files are named in the format gpdb-YYYY-MM-DD_<timestamp>.log, which identifies the date of each file.  
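For example, to list the master logs by date and scan today's file for errors (a sketch based on the MASTER_DATA_DIRECTORY identified above):

ls -ltr /data1/master/gpseg-1/pg_log/
grep -iE 'ERROR|FATAL|PANIC' /data1/master/gpseg-1/pg_log/gpdb-$(date +%F)_*.log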

HAWQ segment logs:
1. Log in to the HAWQ database (psql <databasename>). The catalog tables used to locate the HAWQ segment directories are common to all databases, so you can log in to any database.
2. The SQL below can be used to identify the hostname, segment status, content id (an identifier for the segment) and location:
gpadmin=# select gp.hostname,gp.status,gp.content, pe.fselocation from pg_filespace_entry pe, gp_segment_configuration gp where pe.fsedbid = gp.dbid and fselocation !~ 'dfs';
 hostname | status | content | fselocation
--------------------+--------+---------+-----------------------
hdw3.phd.local | u | -1 | /data1/master/gpseg-1
hdw1.phd.local | u | 2 | /data1/primary/gpseg2
pccadmin.phd.local | u | -1 | /data1/master/gpseg-1
Translating the above into a statement:
Logs for content id 2 are located on host hdw1.phd.local under the /data1/primary/gpseg2/pg_log directory, and the segment is currently marked up.
Note: In the example above, content id = -1 appears twice; both entries represent the HAWQ master (active and standby). However, you can only log in to the database through the active master.

Namenode
You can use the PCC UI -> Topology tab to identify the NameNode and DataNode hostnames. On the NameNode and DataNodes, the files under /etc/default hold the parameters that specify the log locations for a PHD cluster. If your cluster is secured, use the location specified by HADOOP_SECURE_DN_LOG_DIR for the DataNodes.
[gpadmin@hdm1 ~]$  egrep LOG /etc/default/hadoop-hdfs-namenode
export HADOOP_LOG_DIR=/var/log/gphd/hadoop-hdfs


Datanode
[gpadmin@hdw3 pg_log]$ egrep LOG /etc/default/hadoop-hdfs-datanode
export HADOOP_LOG_DIR=/var/log/gphd/hadoop-hdfs
# export HADOOP_SECURE_DN_LOG_DIR=$HADOOP_LOG_DIR/hdfs

Note: You can also run the ps command and grep for the Hadoop NameNode/DataNode daemon processes, whose command lines also show the location of the logs.
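For example (a sketch; the exact java command line varies by release, but the daemons are normally started with -Dhadoop.log.dir=... set):

ps -ef | grep -i [n]amenode | grep -o 'hadoop.log.dir=[^ ]*'
ps -ef | grep -i [d]atanode | grep -o 'hadoop.log.dir=[^ ]*'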

HAWQ administration logs
These logs are available on the HAWQ master and segment hosts under the directory below; the log file name depends on the type of operation performed.
/home/gpadmin/gpAdminLogs
Log ex:
gpstart_YYYYMMDD.log
gpstop_YYYYMMDD.log

How to run a Map Reduce jar using Oozie workflow

In this article we will go through the steps to use Oozie to execute MapReduce programs / jar files provided with the Hadoop distribution. You may use this as a base and set up your own applications / workflows accordingly.
Environment
  • PHD 1.1.1 / Oozie 3.3.2
Prerequisite
  • Basic Oozie knowledge & a working Oozie service on a PHD cluster
Example used
  • Wordcount program from hadoop-mapreduce-examples.jar
  • hadoop-mapreduce-examples.jar is a symbolic link to the jar provided in the Pivotal Hadoop Distribution.
Below is the list of steps
1) Extract the jar hadoop-mapreduce-examples.jar. You can find this jar under the /usr/lib/gphd/hadoop-mapreduce directory on a Pivotal Hadoop cluster.
[hadoop@hdm1 test]$ jar xf hadoop-mapreduce-examples.jar
[hadoop@hdm1 test]$ ls
hadoop-mapreduce-examples.jar META-INF org
2) Navigate to the directory to see the list of class files associated with WordCount. 
[hadoop@hdm1 test]$ cd org/apache/hadoop/examples/
[hadoop@hdm1 examples]$ ls WordCount*
WordCount.class WordCount$IntSumReducer.class WordCount$TokenizerMapper.class
In the WordCount program, the mapper class is WordCount$TokenizerMapper.class and the reducer class is WordCount$IntSumReducer.class. We will use these classes when defining the Oozie workflow.xml.
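If you want to double-check the class names before wiring them into workflow.xml, javap can list them straight from the jar (a quick check; requires a JDK on the path and uses the PHD jar location mentioned above):

javap -classpath /usr/lib/gphd/hadoop-mapreduce/hadoop-mapreduce-examples.jar \
    'org.apache.hadoop.examples.WordCount$TokenizerMapper' \
    'org.apache.hadoop.examples.WordCount$IntSumReducer'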
3) Create a job.properties file. The parameters for the Oozie job are provided in a Java properties file (.properties) or a Hadoop configuration XML file (.xml); in this case we use a .properties file.
nameNode=hdfs://phdha
jobTracker=hdm1.phd.local:8032
queueName=default
examplesRoot=examplesoozie
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/map-reduce
outputDir=map-reduce

where:
nameNode = Variable defining the namenode path through which HDFS can be accessed. Format: hdfs://<nameservice> or hdfs://<namenode_host>:<port>
jobTracker = Variable defining the ResourceManager address in a YARN implementation. Format: <resourcemanager_hostname>:<port>
queueName = Name of the queue as defined by the Capacity Scheduler, Fair Scheduler, etc. By default it is "default".
examplesRoot = Environment variable for the workflow.
oozie.wf.application.path = Environment variable defining the path on HDFS that holds the workflow.xml to be executed.
outputDir = Variable defining the output directory.

Note: You can define the parameter oozie.libpath, under which all the libraries required by the MapReduce program can be stored. However, in this example we do not use it.
Ex: oozie.libpath=${nameNode}/${user.name}/share/lib
4) Create a workflow.xml. The workflow.xml defines a set of actions to be performed as a sequence or in a control-dependency DAG (Directed Acyclic Graph). A "control dependency" from one action to another means that the second action can't run until the first action has completed. 
Refer to the documentation at http://oozie.apache.org/docs/3.3.2/WorkflowFunctionalSpec.html for details
<workflow-app xmlns="uri:oozie:workflow:0.1" name="map-reduce-wf">
 <start to="mr-node"/>
 <action name="mr-node">
     <map-reduce>
       <job-tracker>${jobTracker}</job-tracker>
       <name-node>${nameNode}</name-node>
       <prepare>
         <delete path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data/${outputDir}"/>
       </prepare>
 
   <configuration>
     <property>
       <name>mapred.mapper.new-api</name>
       <value>true</value>
     </property>
     <property>
       <name>mapred.reducer.new-api</name>
       <value>true</value>
     </property>
     <property>
       <name>mapred.job.queue.name</name>
       <value>${queueName}</value>
     </property>
     <property>
       <name>mapreduce.map.class</name>
       <value>org.apache.hadoop.examples.WordCount$TokenizerMapper</value>
     </property>
     <property>
       <name>mapreduce.reduce.class</name>
       <value>org.apache.hadoop.examples.WordCount$IntSumReducer</value>
     </property>
     <property>
       <name>mapreduce.combine.class</name>
       <value>org.apache.hadoop.examples.WordCount$IntSumReducer</value>
     </property>
     <property>
       <name>mapred.output.key.class</name>
       <value>org.apache.hadoop.io.Text</value>
     </property>
     <property>
       <name>mapred.output.value.class</name>
       <value>org.apache.hadoop.io.IntWritable</value>
     </property>
     <property>
       <name>mapred.input.dir</name>
       <value>/user/${wf:user()}/${examplesRoot}/input-data/text</value>
     </property>
     <property>
       <name>mapred.output.dir</name>
       <value>/user/${wf:user()}/${examplesRoot}/output-data/${outputDir}</value>
     </property>
   </configuration>
  </map-reduce>
  <ok to="end"/>
  <error to="fail"/>
 </action>
   <kill name="fail">
   <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
   </kill>
   <end name="end"/>
</workflow-app>
5. Create a directory on HDFS under which all the files related to the Oozie job will be kept, and push the workflow.xml created in the previous step into this directory.
[hadoop@hdm1 map-reduce]$ hdfs dfs -mkdir -p /user/hadoop/examplesoozie/map-reduce
[hadoop@hdm1 map-reduce]$ hdfs dfs -copyFromLocal workflow.xml /user/hadoop/examplesoozie/map-reduce/workflow.xml
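Optionally, you may sanity-check the local copy of workflow.xml against the Oozie schema with the Oozie CLI before or after uploading it (a quick check):

oozie validate workflow.xml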
6. Under the directory created for the Oozie job, create a folder named lib in which the required library / jar files will be kept.
[hadoop@hdm1 map-reduce]$ hdfs dfs -mkdir -p /user/hadoop/examplesoozie/map-reduce/lib
7. Once the directory is created, copy the hadoop-mapreduce-examples jar into it.  
[hadoop@hdm1 map-reduce]$ hdfs dfs -copyFromLocal /usr/lib/gphd/hadoop-mapreduce/hadoop-mapreduce-examples.jar /user/hadoop/examplesoozie/map-reduce/lib/hadoop-mapreduce-examples.jar
8. Now you can execute the workflow and use it to run the Hadoop MapReduce wordcount program:
[hadoop@hdm1 ~]$ oozie job -oozie http://localhost:11000/oozie -config examplesoozie/map-reduce/job.properties -run
9. You can view the status of the job as shown below:
[hadoop@hdm1 ~]$ oozie job -oozie http://localhost:11000/oozie -info 0000009-140529162032574-oozie-oozi-W
Job ID : 0000009-140529162032574-oozie-oozi-W
------------------------------------------------------------------------------------------------------------------------------------
Workflow Name : map-reduce-wf
App Path      : hdfs://phdha/user/hadoop/examplesoozie/map-reduce
Status        : SUCCEEDED
Run           : 0
User          : hadoop
Group         : -
Created       : 2014-05-30 00:31 GMT
Started       : 2014-05-30 00:31 GMT
Last Modified : 2014-05-30 00:32 GMT
Ended         : 2014-05-30 00:32 GMT
CoordAction ID: -

Actions
------------------------------------------------------------------------------------------------------------------------------------
ID                                                                            Status    Ext ID                 Ext Status Err Code
------------------------------------------------------------------------------------------------------------------------------------
0000009-140529162032574-oozie-oozi-W@:start:                                  OK        -                      OK         -
------------------------------------------------------------------------------------------------------------------------------------
0000009-140529162032574-oozie-oozi-W@mr-node                                  OK        job_1401405229971_0022 SUCCEEDED  -
------------------------------------------------------------------------------------------------------------------------------------
0000009-140529162032574-oozie-oozi-W@end                                      OK        -                      OK         -
------------------------------------------------------------------------------------------------------------------------------------
10. Once the job has completed, you can review the output in the directory specified by workflow.xml.
[hadoop@hdm1 ~]$ hdfs dfs -cat /user/hadoop/examplesoozie/output-data/map-reduce/part-r-00000
SSH:/var/empty/sshd:/sbin/nologin 1
Server:/var/lib/pgsql:/bin/bash 1
User:/var/ftp:/sbin/nologin 1
Yarn:/home/yarn:/sbin/nologin 1
adm:x:3:4:adm:/var/adm:/sbin/nologin 1
bin:x:1:1:bin:/bin:/sbin/nologin 1
console 1
daemon:x:2:2:daemon:/sbin:/sbin/nologin 1
ftp:x:14:50:FTP 1
games:x:12:100:games:/usr/games:/sbin/nologin 1
gopher:x:13:30:gopher:/var/gopher:/sbin/nologin 1
gpadmin:x:500:500::/home/gpadmin:/bin/bash 1
hadoop:x:503:501:Hadoop:/home/hadoop:/bin/bash 1
halt:x:7:0:halt:/sbin:/sbin/halt 1
Miscellaneous
1. You may see an exception like the one below if workflow.xml is not specified correctly, for instance if mapreduce.map.class is misspelled as mapreduce.mapper.class, or mapreduce.reduce.class as mapreduce.reducer.class.
2014-05-29 16:29:26,870 ERROR [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
2014-05-29 16:29:26,874 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
2. Refer to the documentation at https://github.com/yahoo/oozie/wiki/Oozie-WF-use-cases to view several examples of workflow.xml.
3. The WordCount.java program, for reference:
package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper 
       extends Mapper<Object, Text, Text, IntWritable>{
    
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
      
    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }
  
  public static class IntSumReducer 
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, 
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount  ");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
Problem:
On HiveServer2 (Hive 0.13), you may see the notification below, but the query still returns a response.
Snippet:
hive> show tables;
FAILED: Error in semantic analysis: Lock manager could not be initialized, check hive.lock.manager Check hive.zookeeper.quorum and hive.zookeeper.client.port
OK
foo
employee

Solution:
Once you have enabled Hive's Table Lock Manager to support concurrency with HiveServer2 by setting hive.support.concurrency = true, you may see this error. [hive.lock.manager in the message relates to the lock manager enabled by hive.support.concurrency]
In order to isolate the cause, verify that the parameters below are set in hive-site.xml:
1. hive.zookeeper.quorum
2. hive.zookeeper.client.port (if not present, the default is 2181) 
Even when the configuration is otherwise correct, you will still see the error if there are spaces in the hive.zookeeper.quorum list, so ensure that there are no spaces.
It should look like the example below:
<property>
 <name>hive.zookeeper.quorum</name>
 <description>Zookeeper quorum used by Hive's Table Lock Manager</description>
 <value>host1.com,host2.com,host3.com</value>
</property>
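To confirm what a Hive session actually resolves for these settings, you can print them from the CLI (a quick check; the set command only displays the current values):

hive -e "set hive.support.concurrency; set hive.zookeeper.quorum; set hive.zookeeper.client.port;"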
Symptom:
A MapReduce job failed with the error below:
Error: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/attempt_1426272186088_184484_m_000232_4/file.out
 at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:398)

Cause:
During execution, MapReduce tasks store intermediate data in local directories. These directories are specified by the parameter "mapreduce.cluster.local.dir" (old, deprecated name: mapred.local.dir) in mapred-site.xml.
During job processing, the MapReduce framework looks at the directories specified by mapreduce.cluster.local.dir and checks whether any of them has enough space to create the intermediate file; if no directory has the required space, the MapReduce job fails with the error shown above.
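A quick way to check is to read the configured directories and verify the free space on each (a sketch; the configuration path assumes a PHD install, and the directory names below are placeholders to be replaced with your own):

# Read the configured local directories
grep -A 1 'mapreduce.cluster.local.dir' /etc/gphd/hadoop/conf/mapred-site.xml

# Check free space on each of them (example paths)
df -h /data1/yarn/local /data2/yarn/local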

Workaround:
1. Ensure that there is enough space in the local directories for the amount of data to be processed.
2. Compress the intermediate output files to minimize space consumption.
3. If it still does not work after trying the points above (space and compression), review the job itself and either improve its design or reduce the amount of data processed.