HBase RegionServer fails to start from the Master node — asking for help!
hostname??? Do you mean the HMaster's? That dot is part of the log content, not something I typed. Below is the HMaster log:
14:29:22,417 ERROR [FifoRpcScheduler.handler1-thread-6] master.HMaster: Region server hadoop-,0148426 reported a fatal error:
ABORTING region server hadoop-,0148426: Unhandled: Call From hadoop-/192.168.2.244 to yiqirong:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
java.net.ConnectException: Call From hadoop-/192.168.2.244 to yiqirong:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
        at org.apache.hadoop.ipc.Client.call(Client.java:1351)
        at org.apache.hadoop.ipc.Client.call(Client.java:1300)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:294)
        at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:398)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.setupWALAndReplication(HRegionServer.java:1522)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1286)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:862)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
        at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
        at org.apache.hadoop.ipc.Client.call(Client.java:1318)
        ... 28 more
Apart from INFO, DEBUG, and WARN entries, everything in the HMaster log is an ERROR about failing to connect to the HRegionServer.
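The key line in that trace is the WAL setup call (`setupWALAndReplication`) failing to reach the NameNode RPC address `yiqirong:8020` with Connection refused. A quick sanity check to run on the RegionServer host — the hostname and port below are taken from the error message, substitute your own:

```shell
# Check DNS/hosts resolution and TCP reachability of the NameNode RPC endpoint
# from the RegionServer host. Host and port come from the error message above.
NN_HOST=yiqirong
NN_PORT=8020

getent hosts "$NN_HOST" \
  || echo "cannot resolve $NN_HOST: add it to /etc/hosts (or DNS) on this node"

if timeout 2 bash -c "echo > /dev/tcp/$NN_HOST/$NN_PORT" 2>/dev/null; then
  echo "$NN_HOST:$NN_PORT is reachable"
else
  echo "cannot reach $NN_HOST:$NN_PORT (NameNode down, standby, or wrong address)"
fi
```

If the hostname does not resolve from the RegionServer, fix /etc/hosts on every node; if it resolves but the port refuses connections, check that the NameNode process is running and is the active one.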
What is your configuration, and how many machines? Describe your setup and what you have already tried for the connection failure. Care matters here; don't overlook the details.
I have seven servers in total (KVM virtual machines, each with 1 core, 1 GB RAM, 40 GB disk). Hostnames: hadoop-01 through hadoop-07.
Software versions:
jdk-8u20-linux-x64.gz
hadoop-2.4.1.tar.gz
hbase-0.98.6.1-hadoop2-bin.tar.gz
hadoop-01 and hadoop-02 run the NameNodes and HMaster.
hadoop-03 through hadoop-07 run the DataNodes and HRegionservers.
hadoop-03 through hadoop-05 run the QJournalNodes.
hadoop-05 through hadoop-07 run ZooKeeper.
Firewalls and SELinux are disabled on every node.
HDFS HA has been tested and works, including automatic failover.
Here is the content of hbase-site.xml:
<configuration>
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://xfzhou:8020/hbase</value>
        </property>
        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>hadoop-,hadoop-,hadoop-</value>
        </property>
        <property>
                <name>hbase.zookeeper.property.clientPort</name>
                <value>2181</value>
        </property>
</configuration>
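One thing stands out given the HA setup described above: hbase.rootdir points at a single NameNode host and port (hdfs://xfzhou:8020/hbase). In an HA cluster, whenever that host is the standby (or down), every RegionServer gets exactly this kind of ConnectException while setting up its WAL. The usual fix is to point hbase.rootdir at the HDFS nameservice ID instead of a host:port — a sketch, where "mycluster" is a placeholder for whatever dfs.nameservices is set to in your hdfs-site.xml (HBase also needs that hdfs-site.xml visible on its classpath so it can resolve the nameservice):

```
<property>
        <name>hbase.rootdir</name>
        <value>hdfs://mycluster/hbase</value>
</property>
```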
The backup-masters file contains:
[root@hadoop-01 conf]# cat backup-masters
192.168.2.242
The regionservers file contains:
[root@hadoop-01 conf]# cat regionservers
[root@hadoop-01 conf]#
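Worth flagging in the paste above: `cat regionservers` printed nothing, i.e. the conf/regionservers file is empty. start-hbase.sh launches a RegionServer on every host listed in that file, so with it empty the master starts no RegionServers at all. Given the topology described earlier (HRegionserver on hadoop-03 through hadoop-07), the file would normally contain one hostname per line:

```
hadoop-03
hadoop-04
hadoop-05
hadoop-06
hadoop-07
```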
Regarding your hbase.zookeeper.quorum:

<property>
        <name>hbase.zookeeper.quorum</name>
        <value>hadoop-,hadoop-,hadoop-</value>
</property>

Add 03 and 04 to it and try again.
But I don't have ZooKeeper installed on 03 and 04, so they shouldn't be added. I wonder whether the original poster ever solved this.
11:31:55,523 DEBUG [main] util.DirectMemoryUtils: Failed to retrieve nio.BufferPool direct MemoryUsed attribute.
javax.management.InstanceNotFoundException: java.nio:type=BufferPool,name=direct
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:662)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:639)
        at org.apache.hadoop.hbase.util.DirectMemoryUtils.<clinit>(DirectMemoryUtils.java:72)
        at org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:369)
        at org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:166)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:576)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2323)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:61)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2340)
11:31:55,529 INFO  [main] hfile.CacheConfig: Allocating LruBlockCache with maximum size 1.2 G
11:31:55,647 INFO  [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
11:31:55,774 INFO  [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
11:31:55,780 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
11:31:55,780 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
11:31:55,822 INFO  [main] http.HttpServer: Jetty bound to port 60030
11:31:55,822 INFO  [main] mortbay.log: jetty-6.1.26
11:31:56,429 INFO  [main] mortbay.log: Started :60030
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/ GMT
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:host.name=zntd2
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.home=/opt/jdk1.6.0_45/jre
11:31:56,474 INFO&&[regionserver60020] zookeeper.ZooKeeper: Client environment:java.class.path=/spo_market/hbase-0.96.2-hadoop2/bin/../conf:/opt/jdk1.6.0_45/lib/tools.jar:/spo_market/hbase-0.96.2-hadoop2/bin/..:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/activation-1.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/aopalliance-1.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/asm-3.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/avro-1.7.4.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-beanutils-1.7.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-beanutils-core-1.8.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-cli-1.2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-codec-1.7.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-collections-3.2.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-compress-1.4.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-configuration-1.6.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-daemon-1.0.13.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-digester-1.8.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-el-1.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-httpclient-3.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-io-2.4.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-lang-2.6.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-logging-1.1.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-math-2.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/commons-net-3.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/findbugs-annotations-1.3.9-1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/gmbal-api-only-3.0.0-b023.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/grizzly-framework-2.1.2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/grizzly-http-2.1.2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/grizzly-http-server-2.1.2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/grizzly-http-servlet-2.1.2.jar:/spo_marke
t/hbase-0.96.2-hadoop2/bin/../lib/grizzly-rcm-2.1.2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/guava-12.0.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/guice-3.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/guice-servlet-3.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-annotations-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-auth-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-client-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-common-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-hdfs-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-hdfs-2.2.0-tests.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-mapreduce-client-app-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-mapreduce-client-common-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-mapreduce-client-core-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-mapreduce-client-jobclient-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-mapreduce-client-shuffle-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-yarn-api-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-yarn-client-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-yarn-common-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-yarn-server-common-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hadoop-yarn-server-nodemanager-2.2.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hamcrest-core-1.3.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-client-0.96.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-common-0.96.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-common-0.96.2-hadoop2-tests.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-examples-0.96.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-hadoop2-compat-0.96
.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-hadoop-compat-0.96.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-it-0.96.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-it-0.96.2-hadoop2-tests.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-prefix-tree-0.96.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-protocol-0.96.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-server-0.96.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-server-0.96.2-hadoop2-tests.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-shell-0.96.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-testing-util-0.96.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/hbase-thrift-0.96.2-hadoop2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/htrace-core-2.04.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/httpclient-4.1.3.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/httpcore-4.1.3.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jackson-core-asl-1.8.8.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jackson-jaxrs-1.8.8.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jackson-xc-1.8.8.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jamon-runtime-2.3.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jasper-compiler-5.5.23.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jasper-runtime-5.5.23.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/javax.inject-1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/javax.servlet-3.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/javax.servlet-api-3.0.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jaxb-api-2.2.2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jaxb-impl-2.2.3-1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jersey-client-1.9.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jersey-core-1.8.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jersey-grizzly2-1.9
.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jersey-guice-1.9.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jersey-json-1.8.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jersey-server-1.8.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jersey-test-framework-core-1.9.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jersey-test-framework-grizzly2-1.9.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jets3t-0.6.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jettison-1.3.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jetty-6.1.26.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jetty-sslengine-6.1.26.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jetty-util-6.1.26.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jruby-complete-1.6.8.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jsch-0.1.42.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jsp-2.1-6.1.14.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jsp-api-2.1-6.1.14.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/jsr305-1.3.9.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/junit-4.11.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/libthrift-0.9.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/log4j-1.2.17.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/management-api-3.0.0-b012.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/metrics-core-2.1.2.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/netty-3.6.6.Final.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/paranamer-2.3.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/protobuf-java-2.5.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/servlet-api-2.5-6.1.14.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/slf4j-api-1.6.4.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/slf4j-log4j12-1.6.4.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/snappy-java-1.0.4.1.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/xmlenc-0.52.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/xz-1.0.jar:/spo_market/hbase-0.96.2-hadoop2/bin/../lib/zookeeper-3.4.5.jar:
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.library.path=/opt/jdk1.6.0_45/jre/lib/amd64/server:/opt/jdk1.6.0_45/jre/lib/amd64:/opt/jdk1.6.0_45/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.name=Linux
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.arch=amd64
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.el6.x86_64
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.name=spo_market
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.home=/spo_market
11:31:56,474 INFO  [regionserver60020] zookeeper.ZooKeeper: Client environment:user.dir=/spo_market/hbase-0.96.2-hadoop2
11:31:56,475 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=zntd2:2181,zntd1:2181,zntd3:2181 sessionTimeout=90000 watcher=regionserver:60020, quorum=zntd2:2181,zntd1:2181,zntd3:2181, baseZNode=/hbase
11:31:56,525 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=regionserver:60020 connecting to ZooKeeper ensemble=zntd2:2181,zntd1:2181,zntd3:2181
11:31:56,540 INFO  [regionserver60020-SendThread(zntd3:2181)] zookeeper.ClientCnxn: Opening socket connection to server zntd3/172.21.0.47:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
11:31:56,559 INFO  [regionserver60020-SendThread(zntd3:2181)] zookeeper.ClientCnxn: Socket connection established to zntd3/172.21.0.47:2181, initiating session
11:31:56,586 INFO  [regionserver60020-SendThread(zntd3:2181)] zookeeper.ClientCnxn: Session establishment complete on server zntd3/172.21.0.47:2181, sessionid = 0x40036, negotiated timeout = 40000
11:31:57,163 INFO  [main] regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:regionserver60020
11:32:01,420 INFO  [regionserver60020] zookeeper.ZooKeeper: Initiating client connection, connectString=zntd2:2181,zntd1:2181,zntd3:2181 sessionTimeout=90000 watcher=hconnection-0x690ff62a, quorum=zntd2:2181,zntd1:2181,zntd3:2181, baseZNode=/hbase
11:32:01,421 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x690ff62a connecting to ZooKeeper ensemble=zntd2:2181,zntd1:2181,zntd3:2181
11:32:01,424 INFO  [regionserver60020-SendThread(zntd3:2181)] zookeeper.ClientCnxn: Opening socket connection to server zntd3/172.21.0.47:2181. Will not attempt to authenticate using SASL (Unable to locate a login configuration)
11:32:01,426 INFO  [regionserver60020-SendThread(zntd3:2181)] zookeeper.ClientCnxn: Socket connection established to zntd3/172.21.0.47:2181, initiating session
11:32:01,463 INFO  [regionserver60020-SendThread(zntd3:2181)] zookeeper.ClientCnxn: Session establishment complete on server zntd3/172.21.0.47:2181, sessionid = 0x40038, negotiated timeout = 40000
11:32:01,611 DEBUG [regionserver60020] catalog.CatalogTracker: Starting catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@
11:32:01,621 INFO  [regionserver60020] regionserver.HRegionServer: ClusterId : 9cc177b2-8fde-43e3-825e-e9fed68509d2
11:32:01,665 INFO  [regionserver60020] zookeeper.RecoverableZooKeeper: Node /hbase/online-snapshot/acquired already exists and this is not a retry
11:32:01,694 INFO  [regionserver60020] regionserver.MemStoreFlusher: globalMemStoreLimit=1.2 G, globalMemStoreLimitLowMark=1.1 G, maxHeap=2.9 G
11:32:01,702 INFO  [regionserver60020] regionserver.HRegionServer: CompactionChecker runs every 10sec
11:32:01,741 INFO  [regionserver60020] regionserver.HRegionServer: reportForDuty to master=zntd1,5242195 with port=60020, startcode=3
11:32:02,485 FATAL [regionserver60020] regionserver.HRegionServer: Master rejected startup because clock is out of sync
org.apache.hadoop.hbase.ClockOutOfSyncException: org.apache.hadoop.hbase.ClockOutOfSyncException: Server zntd2,5115253 Reported time is too far out of sync with master.  Time difference of 129337ms > max allowed of 30000ms
        at org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:314)
        at org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:215)
        at org.apache.hadoop.hbase.master.HMaster.regionServerStartup(HMaster.java:1292)
        at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:5085)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
        at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1889)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:277)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:1955)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:794)
        at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ClockOutOfSyncException): org.apache.hadoop.hbase.ClockOutOfSyncException: Server zntd2,5115253 Reported time is too far out of sync with master.  Time difference of 129337ms > max allowed of 30000ms
        at org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:314)
        at org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:215)
        at org.apache.hadoop.hbase.master.HMaster.regionServerStartup(HMaster.java:1292)
        at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:5085)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
        at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1889)
        at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1454)
        at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1658)
        at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1716)
        at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:5402)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:1953)
        ... 2 more
11:32:02,489 FATAL [regionserver60020] regionserver.HRegionServer: ABORTING region server zntd2,5115253: Unhandled: org.apache.hadoop.hbase.ClockOutOfSyncException: Server zntd2,5115253 Reported time is too far out of sync with master.  Time difference of 129337ms > max allowed of 30000ms
        at org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:314)
        at org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:215)
        at org.apache.hadoop.hbase.master.HMaster.regionServerStartup(HMaster.java:1292)
        at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:5085)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
        at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1889)
org.apache.hadoop.hbase.ClockOutOfSyncException: org.apache.hadoop.hbase.ClockOutOfSyncException: Server zntd2,5115253 Reported time is too far out of sync with master.  Time difference of 129337ms > max allowed of 30000ms
        at org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:314)
        at org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:215)
        at org.apache.hadoop.hbase.master.HMaster.regionServerStartup(HMaster.java:1292)
        at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:5085)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
        at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1889)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:277)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:1955)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:794)
        at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ClockOutOfSyncException): org.apache.hadoop.hbase.ClockOutOfSyncException: Server zntd2,5115253 Reported time is too far out of sync with master.  Time difference of 129337ms > max allowed of 30000ms
        at org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:314)
        at org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:215)
        at org.apache.hadoop.hbase.master.HMaster.regionServerStartup(HMaster.java:1292)
        at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:5085)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
        at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1889)
        at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1454)
        at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1658)
        at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1716)
        at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:5402)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:1953)
        ... 2 more
11:32:02,494 FATAL [regionserver60020] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
11:32:02,494 INFO  [regionserver60020] regionserver.HRegionServer: STOPPED: Unhandled: org.apache.hadoop.hbase.ClockOutOfSyncException: Server zntd2,5115253 Reported time is too far out of sync with master.  Time difference of 129337ms > max allowed of 30000ms
        at org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:314)
        at org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:215)
        at org.apache.hadoop.hbase.master.HMaster.regionServerStartup(HMaster.java:1292)
        at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:5085)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
        at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1889)
11:32:02,494 INFO  [regionserver60020] ipc.RpcServer: Stopping server on 60020
11:32:02,497 INFO  [regionserver60020] regionserver.HRegionServer: Stopping infoServer
11:32:02,509 INFO  [regionserver60020] mortbay.log: Stopped :60030
11:32:02,613 INFO  [regionserver60020] snapshot.RegionServerSnapshotManager: Stopping RegionServerSnapshotManager abruptly.
11:32:02,614 INFO  [regionserver60020] regionserver.HRegionServer: aborting server null
11:32:02,614 DEBUG [regionserver60020] catalog.CatalogTracker: Stopping catalog tracker org.apache.hadoop.hbase.catalog.CatalogTracker@
11:32:02,615 INFO  [regionserver60020] client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x40038
11:32:02,623 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
11:32:02,623 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x40038 closed
11:32:02,627 INFO  [regionserver60020] regionserver.HRegionServer:
all regions closed.
11:32:02,728 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closing leases
11:32:02,728 INFO  [regionserver60020] regionserver.Leases: regionserver60020 closed leases
11:32:02,729 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Split Thread to finish...
11:32:02,729 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Merge Thread to finish...
11:32:02,729 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Large Compaction Thread to finish...
11:32:02,729 INFO  [regionserver60020] regionserver.CompactSplitThread: Waiting for Small Compaction Thread to finish...
11:32:02,748 INFO  [regionserver60020-EventThread] zookeeper.ClientCnxn: EventThread shut down
11:32:02,748 INFO  [regionserver60020] zookeeper.ZooKeeper: Session: 0x40036 closed
11:32:02,748 INFO  [regionserver60020] regionserver.HRegionServer:
zookeeper connection closed.
11:32:02,749 INFO  [regionserver60020] regionserver.HRegionServer: regionserver60020 exiting
11:32:02,749 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: HRegionServer Aborted
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:66)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:85)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2340)
11:32:02,754 INFO  [Thread-9] regionserver.ShutdownHook: Sh hbase.shutdown.hook= fsShutdownHook=org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer@51f88fbd
11:32:02,754 INFO  [Thread-9] regionserver.HRegionServer: STOPPED: Shutdown hook
11:32:02,755 INFO  [Thread-9] regionserver.ShutdownHook: Starting fs shutdown hook thread.
11:32:02,763 INFO  [Thread-9] regionserver.ShutdownHook: Shutdown hook finished.

How do I fix this?
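This log shows a different root cause from the original poster's: ClockOutOfSyncException. The master compares each RegionServer's reported time with its own and rejects registration when the difference exceeds hbase.master.maxclockskew (default 30000 ms); here the skew was 129337 ms. The check amounts to this (numbers taken from the log above):

```shell
# Numbers from the log above; 30000 ms is the default hbase.master.maxclockskew.
skew_ms=129337
max_ms=30000
if [ "$skew_ms" -gt "$max_ms" ]; then
  echo "clock skew ${skew_ms} ms > ${max_ms} ms: master rejects the RegionServer"
fi
```

The real fix is to synchronize clocks across the cluster (run ntpd/chronyd on every node, or do a one-shot `ntpdate <your-ntp-server>`) and then restart the RegionServer; raising hbase.master.maxclockskew only hides the drift.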
But I don't have ZooKeeper installed on 03 and 04, so they shouldn't be added. I wonder whether the original poster ever solved this.

It was never solved. I later stopped using the plain open-source build and switched to Hortonworks; the problem doesn't occur there.
I hit this problem too and finally cracked it after several days: it was caused by hbase.regionserver.wal.codec.
Edit $HBASE_HOME/conf/hbase-site.xml and remove this property:

<property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
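For context: IndexedWALEditCodec is the WAL codec installed by Apache Phoenix for secondary indexing, so removing the property is safe only if you don't rely on Phoenix indexes. If you do need it, this startup failure usually means the RegionServer cannot load the class. A hedged check — the phoenix-*-server.jar name pattern is an assumption and varies by Phoenix version:

```shell
# If you keep the codec, the RegionServer must be able to load the class.
# Assumption: Phoenix ships it in a phoenix-*-server.jar under $HBASE_HOME/lib.
HBASE_LIB="${HBASE_HOME:-/opt/hbase}/lib"
ls "$HBASE_LIB"/phoenix-*-server.jar 2>/dev/null \
  || echo "no Phoenix server jar found in $HBASE_LIB; IndexedWALEditCodec cannot load"
```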
I'm hitting this problem too. Did you ever solve it? How, in the end?
Same symptom, but the root cause can differ. Try @xuxian's fix: remove the hbase.regionserver.wal.codec property (value org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec) from $HBASE_HOME/conf/hbase-site.xml.