Installing and testing Flume, and collecting logs to HDFS
h1..com        10.0.0.71  DNS+NTP+NFS   oracle
h5.hadoop.com  10.0.0.75  hadoop-1.1.2  oracle linux 6.6  flume-1.5.0
h6.hadoop.com  10.0.0.76  hadoop-1.1.2  oracle linux 6.6
1. Unpack and install:
[grid@h5 soft]$ tar -zxf apache-flume-1.5.0-bin.tar.gz
[grid@h5 soft]$ ls apache-flume-1.5.0-bin
bin  CHANGELOG  conf  DEVNOTES  docs  lib  LICENSE  NOTICE  README  RELEASE-NOTES  tools
[grid@h5 soft]$ mv apache-flume-1.5.0-bin /home/grid/flume-1.5.0
[grid@h5 soft]$ cd /home/grid/
[grid@h5 ~]$ ls -lrt flume-1.5.0/
-rw-r--r--  1 grid grid  1779 Mar 29  2014 README
-rw-r--r--  1 grid grid   249 Mar 29  2014 NOTICE
-rw-r--r--  1 grid grid  6172 Mar 29  2014 DEVNOTES
-rw-r--r--  1 grid grid 22517 May  7  2014 LICENSE
-rw-r--r--  1 grid grid 61364 May  7  2014 CHANGELOG
-rw-r--r--  1 grid grid  1585 May  7  2014 RELEASE-NOTES
drwxr-xr-x 10 grid grid  4096 May  8  2014 docs
drwxr-xr-x  2 grid grid  4096 May 11 11:00 conf
drwxr-xr-x  2 grid grid  4096 May 11 11:00 bin
drwxrwxr-x  2 grid grid  4096 May 11 11:00 tools
drwxrwxr-x  2 grid grid  4096 May 11 11:00 lib
2. Set the environment variables:
[grid@h5 ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export HADOOP_HOME=/home/grid/hadoop-1.1.2/
export FLUME_HOME=/home/grid/flume-1.5.0/
export FLUME_CONF_DIR=$FLUME_HOME/conf
export PATH=$HADOOP_HOME/bin/:$FLUME_HOME/bin:/usr/local/jdk1.8.0_74/bin/:$PATH
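The exports take effect only in a new login shell; to apply them to the current session, source the profile (a quick sanity check, with the output expected for this setup):
[grid@h5 ~]$ source .bash_profile
[grid@h5 ~]$ echo $FLUME_HOME
/home/grid/flume-1.5.0/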
3. Edit the configuration:
[grid@h5 ~]$ cd flume-1.5.0/conf/
[grid@h5 conf]$ pwd
/home/grid/flume-1.5.0/conf
[grid@h5 conf]$ ls
flume-conf.properties.template  flume-env.sh.template  log4j.properties
[grid@h5 conf]$ cp flume-env.sh.template flume-env.sh
[grid@h5 conf]$ cat flume-env.sh | grep -v '#' | sort | uniq
JAVA_HOME=/usr/local/jdk1.8.0_74/
JAVA_OPTS="-Xms100m -Xmx200m -Dcom.sun.management.jmxremote"
[grid@h5 conf]$ cp flume-conf.properties.template flume-conf
[grid@h5 conf]$ cat flume-conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
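This first agent wires a netcat source listening on localhost:44444 through an in-memory channel to a logger sink that prints each event to the console. Of the channel settings, capacity is the maximum number of events the memory channel can buffer, while transactionCapacity is the maximum number of events a source can put or a sink can take in a single transaction, so it must not exceed capacity.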
[grid@h5 flume-1.5.0]$ cd bin/
[grid@h5 bin]$ ls
flume-ng  flume-ng.cmd  flume-ng.ps1
[grid@h5 bin]$ ./flume-ng agent --conf ../conf/ --conf-file ../conf/flume-conf --name a1 -Dflume.root.logger=INFO,console
Info: Sourcing environment configuration script /home/grid/flume-1.5.0/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/grid/hadoop-1.1.2/bin/hadoop) for HDFS access
Warning: $HADOOP_HOME is deprecated.
Error: Could not find or load main class org.apache.flume.tools.GetJavaProperty
Warning: $HADOOP_HOME is deprecated.
Info: Excluding /home/grid/hadoop-1.1.2/libexec/../lib/slf4j-api-1.4.3.jar from classpath
Info: Excluding /home/grid/hadoop-1.1.2/libexec/../lib/slf4j-log4j12-1.4.3.jar from classpath
+ exec /usr/local/jdk1.8.0_74//bin/java -Xms100m -Xmx200m -Dcom.sun.management.jmxremote -Dflume.root.logger=INFO,console -cp '/home/grid/flume-1.5.0/conf:/home/grid/flume-1.5.0//lib/*:/home/grid/hadoop-1.1.2/libexec/../conf:/usr/local/jdk1.8.0_74//lib/tools.jar:/home/grid/hadoop-1.1.2/libexec/..:/home/grid/hadoop-1.1.2/libexec/../hadoop-core-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/asm-3.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/aspectjrt-1.6.11.jar:/home/grid/hadoop-1.1.2/libexec/../lib/aspectjtools-1.6.11.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-beanutils-1.7.0.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-cli-1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-codec-1.4.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-collections-3.2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-configuration-1.6.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-daemon-1.0.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-digester-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-el-1.0.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-httpclient-3.0.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-io-2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-lang-2.4.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-logging-1.1.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-logging-api-1.0.4.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-math-2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-net-3.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/core-3.1.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hadoop-capacity-scheduler-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hadoop-fairscheduler-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hadoop-thriftfs-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hdb-1.8.0.10.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jasper-compiler-5.5.12.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jasper-runtime-5.5.12.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jdeb-0.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jersey-core-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jersey-json-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jersey-server-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jets3t-0.6.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jetty-6.1.26.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jetty-util-6.1.26.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jsch-0.1.42.jar:/home/grid/hadoop-1.1.2/libexec/../lib/junit-4.5.jar:/home/grid/hadoop-1.1.2/libexec/../lib/kfs-0.2.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/log4j-1.2.15.jar:/home/grid/hadoop-1.1.2/libexec/../lib/mockito-all-1.8.5.jar:/home/grid/hadoop-1.1.2/libexec/../lib/oro-2.0.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/servlet-api-2.5-.jar:/home/grid/hadoop-1.1.2/libexec/../lib/xmlenc-0.52.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/home/grid/hadoop-1.1.2/myclass' -Djava.library.path= org.apache.flume.node.Application --conf-file ../conf/flume-conf --name a1
11:25:30,464 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
11:25:30,477 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:../conf/flume-conf
11:25:30,483 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:930)] Added sinks: k1 Agent: a1
11:25:30,483 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
11:25:30,484 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
11:25:30,495 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:140)] Post-validation flume configuration contains configuration for agents: [a1]
11:25:30,495 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:150)] Creating channels
11:25:30,501 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:40)] Creating instance of channel c1 type memory
11:25:30,507 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:205)] Created channel c1
11:25:30,508 (conf-file-poller-0) [INFO - org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:39)] Creating instance of source r1, type netcat
11:25:30,519 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:40)] Creating instance of sink: k1, type: logger
11:25:30,524 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:119)] Channel c1 connected to [r1, k1]
11:25:30,531 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:r1,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@2e604876 counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
11:25:30,539 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:145)] Starting Channel c1
11:25:30,540 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
11:25:30,541 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: CHANNEL, name: c1 started
11:25:30,543 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink k1
11:25:30,544 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source r1
11:25:30,544 (lifecycleSupervisor-1-4) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:150)] Source starting
11:25:30,585 (lifecycleSupervisor-1-4) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:164)] Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/127.0.0.1:44444]
The error above can be ignored:
Error: Could not find or load main class org.apache.flume.tools.GetJavaProperty
It appears because the flume-ng launcher script uses the hadoop command to run org.apache.flume.tools.GetJavaProperty when probing for Hadoop's java.library.path, and that Flume class is not on Hadoop's classpath; the only consequence is the empty -Djava.library.path= visible in the exec line above.
In another terminal, telnet to port 44444:
[root@h5 ~]# su - grid
[grid@h5 ~]$ telnet localhost 44444
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
hello flume
In the agent's console you can see the message has been captured:
11:30:56,617 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{} body: 68 65 6C 6C 6F 20 66 6C 75 6D 65 0D             hello flume. }
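If telnet is not available, nc drives the same test (a sketch, assuming the agent above is still running; the netcat source acknowledges each received line with OK by default):
[grid@h5 ~]$ echo "hello flume" | nc localhost 44444
OK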
[root@h5 ~]# ps -ef | grep flume
grid& && &&&0 11:25 pts/0& & 00:00:02 /usr/local/jdk1.8.0_74//bin/java -Xms100m -Xmx200m -Dcom.sun.management.jmxremote -Dflume.root.logger=INFO,console -cp /home/grid/flume-1.5.0/conf:/home/grid/flume-1.5.0//lib/*:/home/grid/hadoop-1.1.2/libexec/../conf:/usr/local/jdk1.8.0_74//lib/tools.jar:/home/grid/hadoop-1.1.2/libexec/..:/home/grid/hadoop-1.1.2/libexec/../hadoop-core-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/asm-3.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/aspectjrt-1.6.11.jar:/home/grid/hadoop-1.1.2/libexec/../lib/aspectjtools-1.6.11.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-beanutils-1.7.0.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-cli-1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-codec-1.4.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-collections-3.2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-configuration-1.6.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-daemon-1.0.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-digester-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-el-1.0.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-httpclient-3.0.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-io-2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-lang-2.4.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-logging-1.1.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-logging-api-1.0.4.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-math-2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-net-3.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/core-3.1.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hadoop-capacity-scheduler-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hadoop-fairscheduler-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hadoop-thriftfs-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hsqldb-1.8.0.10.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jasper-compiler-5.5.12.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jasper-runtime-5.5.12.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jdeb-0.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jersey-core-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jersey-json-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jersey-server-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jets3t-0.6.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jetty-6.1.26.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jetty-util-6.1.26.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jsch-0.1.42.jar:/home/grid/hadoop-1.1.2/libexec/../lib/junit-4.5.jar:/home/grid/hadoop-1.1.2/libexec/../lib/kfs-0.2.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/log4j-1.2.15.jar:/home/grid/hadoop-1.1.2/libexec/../lib/mockito-all-1.8.5.jar:/home/grid/hadoop-1.1.2/libexec/../lib/oro-2.0.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/servlet-api-2.5-.jar:/home/grid/hadoop-1.1.2/libexec/../lib/xmlenc-0.52.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/home/grid/hadoop-1.1.2/myclass -Djava.library.path= org.apache.flume.node.Application --conf-file ../conf/flume-conf --name a1
root         0 11:35 pts/1    00:00:00 grep flume
======================================================================== Collecting logs to HDFS with Flume ====================================================
1. Edit the configuration file:
[grid@h5 conf]$ pwd
/home/grid/flume-1.5.0/conf
[grid@h5 conf]$ cp flume-conf.properties.template flume-conf-hdfs
[grid@h5 conf]$ cat flume-conf-hdfs | grep -v '#' | uniq
a1.sources = r1
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.command = tail -F /home/grid/hadoop-1.1.2/logs/hadoop-grid-namenode-h5.hadoop.com.log
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://h5.hadoop.com:9000/outputs
a1.sinks.k1.hdfs.filePrefix = eventsa1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
a1.sinks.k1.hdfs.rollSize = 4000000
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.batchSize = 10
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
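Two things are worth noting here. First, the filePrefix and round lines above were accidentally joined into one; since a properties value may itself contain '=', filePrefix literally became the string "eventsa1.sinks.k1.hdfs.round = true" (exactly the file name that appears in HDFS below), and round was never actually set. Second, on how to understand round: round, roundValue, and roundUnit only act on time escape sequences such as %y, %m, %d, %H, %M in hdfs.path. With round = true, the event timestamp is rounded down to the nearest multiple of roundValue roundUnits before substitution, so events are bucketed into one directory per interval (here, per 10 minutes). The path above contains no escape sequences, so these settings would have had no visible effect even if they had been parsed. The intended lines, plus an illustrative time-bucketed path (the commented path and useLocalTimeStamp line are assumptions for illustration, not from the original config):
a1.sinks.k1.hdfs.filePrefix = events
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
# round only matters with a path like the following; events stamped 12:00-12:09 all land in .../12-00/
# a1.sinks.k1.hdfs.path = hdfs://h5.hadoop.com:9000/outputs/%y-%m-%d/%H-%M
# an exec source adds no timestamp header, so time escapes would also need:
# a1.sinks.k1.hdfs.useLocalTimeStamp = true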
2. Check that Hadoop is running:
[grid@h5 bin]$ jps
4741 JobTracker
4461 NameNode
4638 SecondaryNameNode
[grid@h6 ~]$ jps
3804 DataNode
3884 TaskTracker
3. Run Flume to collect the log into HDFS:
[grid@h5 conf]$ cd /home/grid/flume-1.5.0/bin
[grid@h5 bin]$ pwd
/home/grid/flume-1.5.0/bin
[grid@h5 bin]$ ./flume-ng agent --conf ../conf/ --conf-file ../conf/flume-conf-hdfs --name a1 -Dflume.root.logger=INFO,console
Info: Sourcing environment configuration script /home/grid/flume-1.5.0/conf/flume-env.sh
Info: Including Hadoop libraries found via (/home/grid/hadoop-1.1.2/bin/hadoop) for HDFS access
Warning: $HADOOP_HOME is deprecated.
Error: Could not find or load main class org.apache.flume.tools.GetJavaProperty
Warning: $HADOOP_HOME is deprecated.
Info: Excluding /home/grid/hadoop-1.1.2/libexec/../lib/slf4j-api-1.4.3.jar from classpath
Info: Excluding /home/grid/hadoop-1.1.2/libexec/../lib/slf4j-log4j12-1.4.3.jar from classpath
+ exec /usr/local/jdk1.8.0_74//bin/java -Xms100m -Xmx200m -Dcom.sun.management.jmxremote -Dflume.root.logger=INFO,console -cp '/home/grid/flume-1.5.0/conf:/home/grid/flume-1.5.0//lib/*:/home/grid/hadoop-1.1.2/libexec/../conf:/usr/local/jdk1.8.0_74//lib/tools.jar:/home/grid/hadoop-1.1.2/libexec/..:/home/grid/hadoop-1.1.2/libexec/../hadoop-core-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/asm-3.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/aspectjrt-1.6.11.jar:/home/grid/hadoop-1.1.2/libexec/../lib/aspectjtools-1.6.11.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-beanutils-1.7.0.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-cli-1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-codec-1.4.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-collections-3.2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-configuration-1.6.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-daemon-1.0.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-digester-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-el-1.0.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-httpclient-3.0.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-io-2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-lang-2.4.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-logging-1.1.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-logging-api-1.0.4.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-math-2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/commons-net-3.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/core-3.1.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hadoop-capacity-scheduler-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hadoop-fairscheduler-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hadoop-thriftfs-1.1.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/hsqldb-1.8.0.10.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jasper-compiler-5.5.12.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jasper-runtime-5.5.12.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jdeb-0.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jersey-core-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jersey-json-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jersey-server-1.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jets3t-0.6.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jetty-6.1.26.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jetty-util-6.1.26.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jsch-0.1.42.jar:/home/grid/hadoop-1.1.2/libexec/../lib/junit-4.5.jar:/home/grid/hadoop-1.1.2/libexec/../lib/kfs-0.2.2.jar:/home/grid/hadoop-1.1.2/libexec/../lib/log4j-1.2.15.jar:/home/grid/hadoop-1.1.2/libexec/../lib/mockito-all-1.8.5.jar:/home/grid/hadoop-1.1.2/libexec/../lib/oro-2.0.8.jar:/home/grid/hadoop-1.1.2/libexec/../lib/servlet-api-2.5-.jar:/home/grid/hadoop-1.1.2/libexec/../lib/xmlenc-0.52.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/grid/hadoop-1.1.2/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:/home/grid/hadoop-1.1.2/myclass' -Djava.library.path= org.apache.flume.node.Application --conf-file ../conf/flume-conf-hdfs --name a1
12:01:39,634 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
12:01:39,641 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:../conf/flume-conf-hdfs
12:01:39,648 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
12:01:39,649 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
12:01:39,649 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:930)] Added sinks: k1 Agent: a1
12:01:39,650 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
12:01:39,650 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
12:01:39,650 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
12:01:39,650 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
12:01:39,650 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
12:01:39,650 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
12:01:39,651 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
12:01:39,651 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
12:01:39,651 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1016)] Processing:k1
12:01:39,666 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:140)] Post-validation flume configuration contains configuration for agents: [a1]
12:01:39,666 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:150)] Creating channels
12:01:39,676 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:40)] Creating instance of channel c1 type memory
12:01:39,681 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:205)] Created channel c1
12:01:39,681 (conf-file-poller-0) [INFO - org.apache.flume.source.DefaultSourceFactory.create(DefaultSourceFactory.java:39)] Creating instance of source r1, type exec
12:01:39,688 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:40)] Creating instance of sink: k1, type: hdfs
12:01:39,829 (conf-file-poller-0) [INFO - org.apache.flume.sink.hdfs.HDFSEventSink.authenticate(HDFSEventSink.java:555)] Hadoop Security enabled: false
12:01:39,832 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:119)] Channel c1 connected to [r1, k1]
12:01:39,840 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:r1,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7c0e0b2c counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
12:01:39,850 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:145)] Starting Channel c1
12:01:39,853 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
12:01:39,853 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: CHANNEL, name: c1 started
12:01:39,853 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink k1
12:01:39,854 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source r1
12:01:39,856 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.source.ExecSource.start(ExecSource.java:163)] Exec source starting with command:tail -F /home/grid/hadoop-1.1.2/logs/hadoop-grid-namenode-h5.hadoop.com.log
12:01:39,857 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
12:01:39,857 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: SINK, name: k1 started
12:01:39,860 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:119)] Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
12:01:39,860 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: SOURCE, name: r1 started
12:01:43,894 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:58)] Serializer = TEXT, UseRawLocalFileSystem = false
12:01:43,943 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:261)] Creating hdfs://h5.hadoop.com:9000/outputs/eventsa1.sinks.k1.hdfs.round = true.5.tmp
12:01:44,116 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.BucketWriter.getRefIsClosed(BucketWriter.java:210)] isFileClosed is not available in the version of HDFS being used. Flume will not attempt to close files if the close fails on the first attempt
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem.isFileClosed(org.apache.hadoop.fs.Path)
        at java.lang.Class.getMethod(Class.java:1786)
        at org.apache.flume.sink.hdfs.BucketWriter.getRefIsClosed(BucketWriter.java:207)
        at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:295)
        at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:554)
        at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:426)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:745)
12:02:14,118 (hdfs-k1-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.BucketWriter.close(BucketWriter.java:409)] Closing hdfs://h5.hadoop.com:9000/outputs/eventsa1.sinks.k1.hdfs.round = true.5.tmp
12:02:14,120 (hdfs-k1-call-runner-6) [INFO - org.apache.flume.sink.hdfs.BucketWriter$3.call(BucketWriter.java:339)] Close tries incremented
12:02:14,142 (hdfs-k1-call-runner-7) [INFO - org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:669)] Renaming hdfs://h5.hadoop.com:9000/outputs/eventsa1.sinks.k1.hdfs.round = true.5.tmp to hdfs://h5.hadoop.com:9000/outputs/eventsa1.sinks.k1.hdfs.round = true.5
12:02:14,145 (hdfs-k1-roll-timer-0) [INFO - org.apache.flume.sink.hdfs.HDFSEventSink$1.run(HDFSEventSink.java:402)] Writer callback called.
12:02:18,962 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:58)] Serializer = TEXT, UseRawLocalFileSystem = false
12:02:19,004 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:261)] Creating hdfs://h5.hadoop.com:9000/outputs/eventsa1.sinks.k1.hdfs.round = true.2.tmp
12:02:19,026 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.BucketWriter.getRefIsClosed(BucketWriter.java:210)] isFileClosed is not available in the version of HDFS being used. Flume will not attempt to close files if the close fails on the first attempt
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem.isFileClosed(org.apache.hadoop.fs.Path)
        at java.lang.Class.getMethod(Class.java:1786)
        at org.apache.flume.sink.hdfs.BucketWriter.getRefIsClosed(BucketWriter.java:207)
        at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:295)
        at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:554)
        at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:426)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:745)
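This NoSuchMethodException is also harmless: DistributedFileSystem.isFileClosed() exists only in Hadoop 2.x, so against hadoop-1.1.2 Flume simply skips the extra close-verification step, exactly as the WARN line says.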
[grid@h5 bin]$ ./hadoop fs -ls /outputs/eventsa1*
-rw-r--r--   1 grid supergroup         12:01 /outputs/eventsa1.sinks.k1.hdfs.round = true.5
-rw-r--r--   1 grid supergroup     986 12:02 /outputs/eventsa1.sinks.k1.hdfs.round = true.2
-rw-r--r--   1 grid supergroup         12:02 /outputs/eventsa1.sinks.k1.hdfs.round = true.7
-rw-r--r--   1 grid supergroup     986 12:03 /outputs/eventsa1.sinks.k1.hdfs.round = true.1
-rw-r--r--   1 grid supergroup         12:03 /outputs/eventsa1.sinks.k1.hdfs.round = true.3.tmp
[grid@h5 bin]$ ./hadoop fs -ls /outputs/eventsa1*tmp
Found 1 items
-rw-r--r--   1 grid supergroup         12:03 /outputs/eventsa1.sinks.k1.hdfs.round = true.3.tmp
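The many small files follow from the roll settings: rollCount = 0 disables count-based rolling and rollSize = 4000000 bytes is never reached by this trickle of namenode log lines, but hdfs.rollInterval was left at its default of 30 seconds, so the sink closes and renames a .tmp file about every half minute (created at 12:01:43 and closed at 12:02:14 in the log above). A sketch of rolling purely by size (not part of the original config):
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.rollSize = 4000000
a1.sinks.k1.hdfs.rollCount = 0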
[grid@h5 bin]$ ./hadoop fs -cat /outputs/eventsa1*tmp
12:03:53,183 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.0.0.76:50010 is added to blk_-4 size 986
12:03:53,196 INFO org.apache.hadoop.hdfs.StateChange: Removing lease on file /outputs/eventsa1.sinks.k1.hdfs.round = true.1.tmp from client DFSClient_NONMAPREDUCE_
12:03:53,196 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /outputs/eventsa1.sinks.k1.hdfs.round = true.1.tmp is closed by DFSClient_NONMAPREDUCE_
12:03:53,196 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 26 Total time for transactions(ms): 4 Number of transactions batched in Syncs: 0 Number of syncs: 16 SyncTimes(ms): 443
12:04:00,994 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /outputs/eventsa1.sinks.k1.hdfs.round = true.3.tmp. blk_-5
12:04:01,010 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.fsync: file /outputs/eventsa1.sinks.k1.hdfs.round = true.3.tmp for DFSClient_NONMAPREDUCE_