Oozie: how do I pass a generated SQL script file?

I want to move files in S3 using Oozie on AWS. I want to run:
aws s3 mv s3://temp/*.zip s3://temp/processed_files/. --recursive
How can I do this in Oozie?
10:18:55,758
WARN ShellActionExecutor:542 - USER[hadoop] GROUP[-] TOKEN[] APP[rad_workflow] JOB[-oozie-oozi-W] ACTION[-oozie-oozi-W@sh] Launcher exception: Cannot run program "move.sh" (in directory "/mnt1/yarn/usercache/hadoop/appcache/application_7_0421/container_7_002"): error=2, No such file or directory
java.io.IOException: Cannot run program "move.sh" (in directory "/mnt1/yarn/usercache/hadoop/appcache/application_7_0421/container_7_002"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at org.apache.oozie.action.hadoop.ShellMain.execute(ShellMain.java:93)
at org.apache.oozie.action.hadoop.ShellMain.run(ShellMain.java:55)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:37)
at org.apache.oozie.action.hadoop.ShellMain.main(ShellMain.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:226)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:65)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:452)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:344)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:171)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:166)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:187)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
... 17 more
10:18:55,838
INFO ActionEndXCommand:539 - USER[hadoop] GROUP[-] TOKEN[] APP[rad_workflow] JOB[-oozie-oozi-W] ACTION[-oozie-oozi-W@sh] ERROR is considered as FAILED for SLA
10:18:55,880
INFO ActionStartXCommand:539 - USER[hadoop] GROUP[-] TOKEN[] APP[rad_workflow] JOB[-oozie-oozi-W] ACTION[-oozie-oozi-W@killemail] Start action [-oozie-oozi-W@killemail] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
Screenshot of the Hue Oozie screen and error log.
Solution: Write a Java class that moves the files in S3 and implement it as a Java action in Oozie.
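Before (or instead of) the Java action, note that the `aws s3 mv` call itself needs adjusting: the CLI does not expand shell globs such as `*.zip` in S3 URIs; the documented way to match a suffix is `--recursive` with `--exclude`/`--include` filters. A minimal sketch of a `move.sh` for the shell action, assuming the AWS CLI is installed on every NodeManager host (the bucket paths and the helper name are illustrative):

```shell
#!/bin/sh
# Sketch of a move.sh for the Oozie shell action. "aws s3 mv" does not
# expand globs like *.zip; filter with --exclude/--include instead.
SRC="s3://temp/"
DST="s3://temp/processed_files/"

move_zips() {
  # $1 = source prefix, $2 = destination prefix
  aws s3 mv "$1" "$2" --recursive --exclude "*" --include "*.zip"
}

# Uncomment to actually run (left commented so the sketch is inert):
# move_zips "$SRC" "$DST"
```

Remember that the original error above is the script not being found in the container, so however the script is written, it still has to be shipped with the action.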
Oozie 4.0.1 Detailed Installation Tutorial
1. Building Oozie
Environment requirements: a Unix box (tested on Mac OS X and …), Java JDK 1.6+, Maven 3.0.1+, Hadoop 0.20.2+, Pig 0.7+. Note: the JDK commands (java, javac) must be in the command path, and the Maven command (mvn) must be in the command path.
1.1 Install Maven: unpack the Maven package, put the extracted folder under /opt/, and add /opt/apache-maven-3.2.3/bin to the user's PATH.
1.2 Install Pig: unpack pigxx.tar.gz, put the extracted folder under /opt/, and add ${PIG_HOME}/bin to the user's PATH.
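The PATH requirements above can be checked up front. A small sketch (the helper name is ours, not part of the Oozie tooling):

```shell
# Report which of the required build commands are missing from PATH.
check_cmds() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
  done
}

# The tutorial's prerequisites: JDK, Maven, and Pig commands on the PATH.
check_cmds java javac mvn pig
```

Run it before `mkdistro.sh`; an empty output means everything the build needs is resolvable.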
Edit the pom.xml in the Oozie root directory:
// change the JDK to the version your system uses; compiling with JDK 1.8 fails
<javaVersion>1.7</javaVersion> <targetJavaVersion>1.7</targetJavaVersion>
// defaults to 2.3.0; changing it to 2.5.0 broke the build
<hadoop.version>2.3.0</hadoop.version>
1.3 Run the mkdistro script in the bin directory of the unpacked oozie-4.0.1.tar.gz:
./mkdistro.sh -DskipTests
Once the build succeeds, move on to step 2.
2. Installing Oozie
2.1 Unpack oozie-4.0.1-distro.tar.gz from the distro/target directory under oozie-4.0.1 into /usr/local/ and rename it to oozie (personal preference; renaming is optional).
2.2 In /usr/local/oozie/, unpack the share, examples, and client tarballs: oozie-client-4.0.1.tar.gz (the Oozie client, used to submit workflow jobs), oozie-examples.tar.gz (Oozie's example workflows), and oozie-sharelib-4.0.1.tar.gz.
2.3 Create a /user/hu directory in HDFS ('hu' should be replaced with the name of the user running Oozie) and upload the share directory extracted from oozie-sharelib-4.0.1.tar.gz to /user/hu in HDFS:
:/usr/local/hadoop/bin$ hadoop fs -mkdir /user/hu (create the /user/hu directory)
:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /usr/local/oozie/share /user/hu (upload the share directory to /user/hu)
:/usr/local/hadoop$ bin/hadoop dfs -ls /user/hu (list the files under /user/hu in HDFS to confirm the upload succeeded)
2.4 Create a libext directory under /usr/local/oozie and copy all the jars from oozie-4.0.1/hadooplibs/target/oozie-4.0.1-hadooplibs/oozie-4.0.1/hadooplibs/hadooplib-2.3.0.oozie-4.0.1 into it:
cp hadooplib-2.3.0.oozie-4.0.1/* /usr/local/oozie/libext/
2.5 Copy mysql-connector-java-5.1.27.jar (match your MySQL version) and ext2.2.zip into /usr/local/oozie/lib and /usr/local/oozie/libext.
2.6 Repackage ext-2.2.0 and the Hadoop jars into a new war, otherwise startup will fail. You can verify this worked by checking that /usr/local/oozie/oozie-server/webapps/ contains oozie.war. Run the following in /usr/local/oozie/bin (it bundles the jars under /usr/local/oozie/libext into a war file placed in /usr/local/oozie/oozie-server/webapps):
./oozie-setup.sh prepare-war
2.7 Set environment variables. Edit /etc/profile and add:
export OOZIE_HOME=/usr/local/oozie
export CATALINA_HOME=/usr/local/oozie/oozie-server
export PATH=${CATALINA_HOME}/bin:${OOZIE_HOME}/bin:$PATH
export OOZIE_URL=http://localhost:11000
export OOZIE_CONFIG=/usr/local/oozie/conf
2.8 Edit /usr/local/oozie/conf/oozie-site.xml as follows:
<property>
  <name>oozie.db.schema.name</name>
  <value>oozie</value>
  <description>Oozie DataBase Name</description>
</property>
<property>
  <name>oozie.service.JPAService.create.db.schema</name>
  <value>false</value>
  <description></description>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.driver</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>JDBC driver class.</description>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.url</name>
  <value>jdbc:mysql://localhost:3306/${oozie.db.schema.name}</value>
  <description>JDBC URL.</description>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.username</name>
  <value>oozie</value>
  <description>DB user name.</description>
</property>
<property>
  <name>oozie.service.JPAService.jdbc.password</name>
  <value>oozie</value>
  <description>
    DB user password. IMPORTANT: if password is empty leave a 1 space string,
    the service trims the value; if empty Configuration assumes it is NULL.
  </description>
</property>
2.9 Configure the MySQL database and generate the Oozie database script file (this produces oozie.sql under /usr/local/oozie/bin):
mysql -u root -p (enter the mysql command line)
create database oozie; (create a database named oozie)
grant all privileges on oozie.* to 'oozie'@'localhost' identified by 'oozie'; (grant access to the oozie database; creates user oozie with password oozie)
grant all privileges on oozie.* to 'oozie'@'%' identified by 'oozie'; (grant access to the oozie database)
FLUSH PRIVILEGES;
In /usr/local/oozie/bin, run:
./ooziedb.sh create -sqlfile oozie.sql
Then run the generated database script, which creates Oozie's tables in the oozie database:
./oozie-setup.sh db create -run -sqlfile /usr/local/oozie/bin/oozie.sql
2.10 Edit core-site.xml under the Hadoop install directory (hu is the user name and the group that user belongs to; restart Hadoop after the change):
<property>
  <name>hadoop.proxyuser.hu.hosts</name>
  <value>192.168.168.101</value> (replace 192.168.168.101 with the Hadoop master node's IP)
</property>
<property>
  <name>hadoop.proxyuser.hu.groups</name>
  <value>hu</value>
</property>
2.11 Edit /usr/local/oozie/conf/hadoop-conf/core-site.xml and add the following (the values must match your Hadoop configuration, same below):
<property>
  <name>yarn.resourcemanager.address</name>
  <value>192.168.168.101:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>192.168.168.101:8030</value>
</property>
2.12 Run bin/oozie-start.sh to start Oozie. Check its status with the command below; a healthy instance reports NORMAL:
./oozie admin -oozie http://localhost:11000/oozie -status
If startup still fails, check the error log in /usr/local/oozie/logs/catalina.out.
2.13 Running Oozie's map-reduce example
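The database bootstrap in step 2.9 is easier to repeat if the SQL is kept in a file. A sketch that only writes the statements out (the credentials mirror the tutorial and should be changed for real use):

```shell
# Write the step-2.9 bootstrap SQL to a file; run it later with
#   mysql -u root -p < init-oozie-db.sql
# (requires a local MySQL server; the file name is our choice).
cat > init-oozie-db.sql <<'SQL'
CREATE DATABASE oozie;
GRANT ALL PRIVILEGES ON oozie.* TO 'oozie'@'localhost' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON oozie.* TO 'oozie'@'%' IDENTIFIED BY 'oozie';
FLUSH PRIVILEGES;
SQL
```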
Upload the example programs to the /user/hu folder in HDFS:
bin/hadoop fs -copyFromLocal /usr/local/oozie/examples /user/hu
Edit /usr/local/oozie/examples/apps/map-reduce/job.properties (YARN no longer has a jobTracker; set jobTracker below to the value of yarn.resourcemanager.address; oozie.wf.application.path is the HDFS path of the Oozie example):
nameNode=hdfs://master:9000
jobTracker=master:8032
queueName=default
examplesRoot=examples
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce
outputDir=map-reduce
Invoke the oozie script in /usr/local/oozie/oozie-client-4.0.1/bin to run the workflow:
./oozie job -oozie http://localhost:11000/oozie -config /usr/local/oozie/examples/apps/map-reduce/job.properties -run
Note: if this reports java.net.ConnectException: Connection refused, it means Oozie has not been started.
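The job.properties edits above can be generated rather than typed (the host name master and the ports are the tutorial's values and are assumptions for your cluster):

```shell
# Generate the example's job.properties. On YARN the old jobTracker key
# takes the yarn.resourcemanager.address value, as the tutorial notes.
cat > job.properties <<'EOF'
nameNode=hdfs://master:9000
jobTracker=master:8032
queueName=default
examplesRoot=examples
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce
outputDir=map-reduce
EOF

# Submit the workflow with it:
# oozie job -oozie http://localhost:11000/oozie -config job.properties -run
```

The quoted heredoc delimiter ('EOF') keeps `${nameNode}` and friends literal so Oozie, not the shell, expands them.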
Hadoop Oozie Study Notes: Running the examples from the command line
Oozie ships with many examples for testing; you can also load the source into Eclipse and submit from there. Trying this surfaced a few problems, solved one by one below.
Run the Oozie map-reduce example with the command:
$OOZIE_HOME/bin/oozie job -oozie
I expected this to go smoothly, but the console reported the following error:
Error: E0902 : E0902: Exception occured: [java.net.ConnectException: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused]
The log at $OOZIE_HOME/logs/oozie.log showed:
10:42:30,721 WARN V1JobsServlet:539 - USER[?] GROUP[users] TOKEN[-] APP[-] JOB[-] ACTION[-] URL[//localhost:11000/oozie/v1/jobs?action=start] error[E0902], E0902: Exception occured: [java.net.ConnectException: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused]
org.apache.oozie.servlet.XServletException: E0902: Exception occured: [java.net.ConnectException: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused]
    at org.apache.oozie.servlet.BaseJobServlet.checkAuthorizationForApp(BaseJobServlet.java:196)
    at org.apache.oozie.servlet.BaseJobsServlet.doPost(BaseJobsServlet.java:89)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
    at org.apache.oozie.servlet.JsonRestServlet.service(JsonRestServlet.java:281)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
    at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
    at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
    at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
    at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.oozie.service.AuthorizationException: E0902: Exception occured: [java.net.ConnectException: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused]
    at org.apache.oozie.service.AuthorizationService.authorizeForApp(AuthorizationService.java:320)
    at org.apache.oozie.servlet.BaseJobServlet.checkAuthorizationForApp(BaseJobServlet.java:185)
    ... 16 more
Caused by: org.apache.oozie.service.HadoopAccessorException: E0902: Exception occured: [java.net.ConnectException: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused]
    at org.apache.oozie.service.KerberosHadoopAccessorService.createFileSystem(KerberosHadoopAccessorService.java:208)
    at org.apache.oozie.service.AuthorizationService.authorizeForApp(AuthorizationService.java:285)
    ... 17 more
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:8020 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1131)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy22.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:111)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:213)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:180)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1514)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1548)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1530)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
    at org.apache.oozie.service.KerberosHadoopAccessorService$3.run(KerberosHadoopAccessorService.java:200)
    at org.apache.oozie.service.KerberosHadoopAccessorService$3.run(KerberosHadoopAccessorService.java:192)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
    at org.apache.oozie.service.KerberosHadoopAccessorService.createFileSystem(KerberosHadoopAccessorService.java:192)
    ... 18 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:425)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:532)
    at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:210)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1244)
    at org.apache.hadoop.ipc.Client.call(Client.java:1075)
    ... 37 more
The Oozie FAQ supplied the fix: copy the oozie.services property from oozie-default.xml into oozie-site.xml.
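One way to apply that fix without hand-copying the block is to extract the whole `<property>` element by name. A sketch (the helper name is ours, and the paths below assume the default install layout):

```shell
# extract_property FILE NAME -> prints the full <property> block whose
# <name> equals NAME, so it can be appended to oozie-site.xml.
extract_property() {
  awk -v name="$2" '
    /<property>/   { buf = $0; keep = 0; next }
                   { buf = buf "\n" $0 }
    $0 ~ ("<name>" name "</name>") { keep = 1 }
    /<\/property>/ { if (keep) print buf }
  ' "$1"
}

# Intended use (paths assume the tutorial layout):
# extract_property /usr/local/oozie/conf/oozie-default.xml oozie.services \
#   >> /usr/local/oozie/conf/oozie-site.xml
```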
Suppose a workflow contains two action nodes, shell and hive, where hive needs a value produced by the shell node. The shell script is as follows:
day=`date '+%Y%m%d%H'`
echo "day:$day"
The hive node must receive day as a parameter. This relies on the shell node's <capture-output/> element, as follows:
<action name="shell-118a">
    <shell xmlns="uri:oozie:shell-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>${queueName}</value>
            </property>
        </configuration>
        <exec>${shell}</exec>
        <file>${shell}#${shell}</file>
        <capture-output/>
    </shell>
    <ok to="hive_node"/>
    <error to="fail"/>
</action>
<action name="hive_node">
    <hive xmlns="uri:oozie:hive-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <job-xml>${apps_hdfs_home}/common/conf/hive-site.xml</job-xml>
        <script>${sql}</script>
        <param>day=${wf:actionData('shell-118a')['day']}</param>
    </hive>
    <ok to="end"/>
    <error to="Kill"/>
</action>
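For reference, this is roughly what the script behind `${shell}` has to do: with `<capture-output/>`, Oozie parses the action's stdout as Java properties, so a `key=value` line (a `key:value` separator also parses, since Java properties accept both) is what makes `${wf:actionData('shell-118a')['day']}` resolve in the hive node. A sketch:

```shell
#!/bin/sh
# Emit the hour-stamped day value on stdout in Java-properties form so
# the shell action's <capture-output/> can pick it up.
day=$(date '+%Y%m%d%H')
echo "day=$day"
```

Keep the output small: Oozie caps captured output (2 KB by default), so print only the key/value pairs downstream actions need.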