Which JARs does a Java client need to connect to a CDH HBase cluster (1.0.0-cdh5.5.1) with Maven?
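As a short answer to the title question: with Maven, the usual approach is not to collect individual JARs by hand but to declare the hbase-client artifact for the matching CDH version (for example org.apache.hbase:hbase-client:1.0.0-cdh5.5.1 from Cloudera's Maven repository) and let it pull in the Hadoop, ZooKeeper and protobuf dependencies transitively. Below is a minimal sketch of a client connection to verify the classpath; the ZooKeeper quorum addresses and the table name test_table are placeholders, not values taken from this cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseClientSmokeTest {
    public static void main(String[] args) throws Exception {
        // Client-side configuration: the ZooKeeper quorum is the one setting a
        // remote client cannot do without (addresses below are placeholders).
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "172.23.27.45,172.23.27.46,172.23.27.47");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("test_table"))) {
            // Read one row as a smoke test that the classpath and quorum are correct.
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            System.out.println("row1 -> " + (result.isEmpty() ? "not found" : result));
        }
    }
}

If this compiles and runs, the transitive dependencies of hbase-client are sufficient; only Kerberos-secured clusters need additional login code (see the MapReduce/Kerberos sketch further down).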

CDH 5.3 setup reference:
An HBase cluster can be installed in either of the following two ways.
Method 1: install from the tar package
1. HBase runs on nodes 39/40/41/42/43/44/45/46/47, so upload hbase-0.98.6-cdh5.3.0.tar.gz to node 39, finish the configuration there, and then sync it to the other nodes.
2. Modify hbase-site.xml as follows:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://cdh5-test/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>172.23.27.39:60000</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/tmp/hbase-${user.name}</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>172.23.27.45:2181,172.23.27.46:2181,172.23.27.47:2181</value>
</property>
3. Edit regionservers to list the region-server nodes:
JXQ-23-27-40.h.chinabank.com.cn
JXQ-23-27-41.h.chinabank.com.cn
JXQ-23-27-42.h.chinabank.com.cn
JXQ-23-27-43.h.chinabank.com.cn
JXQ-23-27-44.h.chinabank.com.cn
JXQ-23-27-45.h.chinabank.com.cn
JXQ-23-27-46.h.chinabank.com.cn
JXQ-23-27-47.h.chinabank.com.cn
4. hbase-env.sh:
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export JAVA_HOME=/usr/java/jdk1.7.0_51
export HBASE_LOG_DIR=/var/hbase/logs
export HBASE_MANAGES_ZK=false
export HBASE_SSH_OPTS="-p 51899"
5. Sync the directory to the other nodes and fix ownership/permissions, e.g.:
scp -P 51899 -r hbase-0.98.6-cdh5.3.0 pe@172.23.27.47:/tmp/
/tmp/hbase-0.98.6-cdh5.3.0 /export/server/
hbase-0.98.6-cdh5.3.0/
Start on each of these nodes:
start-hbase.sh
Method 2: install HBase with yum
Node 39: install the master
yum install -y hbase-master hbase-rest hbase-thrift
// package names actually installed:
hbase-master.x86_64 0:0.98.6+cdh5.3.2+83-1.cdh5.3.2.p0.17.el6
hbase-rest.x86_64 0:0.98.6+cdh5.3.2+83-1.cdh5.3.2.p0.17.el6
hbase-thrift.x86_64 0:0.98.6+cdh5.3.2+83-1.cdh5.3.2.p0.17.el6
hbase.x86_64 0:0.98.6+cdh5.3.2+83-1.cdh5.3.2.p0.17.el6
Nodes 40/41/42/43/44/45/46/47: install the region server
yum install -y hbase-regionserver
// package names actually installed:
hbase-regionserver.x86_64 0:0.98.6+cdh5.3.2+83-1.cdh5.3.2.p0.17.el6
hbase.x86_64 0:0.98.6+cdh5.3.2+83-1.cdh5.3.2.p0.17.el6
start-hbase.sh
This command can be executed on any node, but note that whichever node it is run on automatically becomes the master (unlike ZooKeeper, HBase's configuration files offer no option for designating the master). If you want one or more backup masters, start them separately on other nodes with hbase-daemon.sh start master.
Commands for starting individual services:
Start a master:
cd /usr/lib/hbase/bin
./hbase-daemon.sh start master
Start a region server:
cd /usr/lib/hbase/bin
./hbase-daemon.sh start regionserver
Once all services are up, visit:
http://YOUR-MASTER:60010
Common problems:
1. start-hbase.sh cannot reach the other nodes over SSH:
172.23.27.46: ssh: connect to host 172.23.27.46 port 22: Connection refused
JXQ-23-27-41.h.chinabank.com.cn: ssh: connect to host JXQ-23-27-41.h.chinabank.com.cn port 22: Connection refused
In the current environment root cannot SSH directly between the nodes, so the daemons have to be started manually, one node at a time.
2. HMaster stops again within a few minutes of starting:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://cdh5-test/hbase</value>
</property>
The cause was that the hbase directory had not been created. hbase.rootdir is the directory shared by the region servers, where HBase persists its data. The URL must be fully qualified, including the filesystem scheme.
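One way to check the point above is to resolve hbase.rootdir through the HDFS client API and create the directory if it is missing. This is an illustrative sketch only: cdh5-test is the HDFS nameservice from the configuration above, and running it as the hbase user (or adjusting ownership afterwards) is assumed.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EnsureHBaseRootDir {
    public static void main(String[] args) throws Exception {
        // The value of hbase.rootdir must carry the filesystem scheme,
        // exactly as it appears in hbase-site.xml.
        URI rootDir = URI.create("hdfs://cdh5-test/hbase");

        // Picks up core-site.xml / hdfs-site.xml from the classpath so the
        // "cdh5-test" nameservice can be resolved.
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(rootDir, conf)) {
            Path path = new Path(rootDir.getPath());
            if (!fs.exists(path)) {
                fs.mkdirs(path);
                System.out.println("Created " + rootDir);
            } else {
                System.out.println(rootDir + " already exists");
            }
        }
    }
}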
1. You can tune the hbase.hregion.max.filesize property in hbase-site.xml.
Larger regions mean fewer regions on the cluster overall, and in general fewer regions make the cluster run more smoothly. (You can always split a large region manually, so that a single hot region gets spread across more nodes of the cluster.) By default a region is 256 MB; you can raise this to 1 GB, and some people use even larger values, 4 GB or more.
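For the manual-split case mentioned in the parenthesis, the HBase Admin API can trigger a split programmatically, just like the shell's split command. A minimal sketch, assuming the HBase 1.0 client API from the title and a placeholder table name big_table (pass a row key as a second argument to split at an explicit point):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ManualRegionSplit {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            TableName table = TableName.valueOf("big_table");
            // Asks the master to split every splittable region of the table.
            admin.split(table);
            // admin.split(table, Bytes.toBytes("row-midpoint")); // explicit split point
        }
    }
}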
CDH 5.0.2 upgrade to CDH 5.2.0
Why upgrade:
1. To support Spark's Kerberos security mechanism.
2. To get the Impala trunc function.
3. To fix Impala hanging when a query runs concurrently with an import.
Upgrade steps
Reference: /content/cloudera/en/documentation/core/latest/topics/installation_upgrade.html
Upgrade Cloudera Manager first, then CDH.
1. Preparation:
Unify the root password across the cluster (needs help from the ops team).
Disable automatic agent restart.
Download the parcel packages in advance.
Log in to the host where the CM server is installed and run:
cat /etc/cloudera-scm-server/db.properties
Back up the CM data:
pg_dump -U scm -p 7432 > scm_server_db_backup.bak
Check that the dump file was generated under /tmp, and make sure nothing under /tmp is deleted in the meantime.
2. Upgrade Cloudera Manager
Stop the CM server:
sudo service cloudera-scm-server stop
Stop the database the CM server depends on:
sudo service cloudera-scm-server-db stop
If an agent is also running on the CM server host, stop it as well:
sudo service cloudera-scm-agent stop
Edit the cloudera-manager.repo yum file:
sudo vim /etc/yum.repos.d/cloudera-manager.repo
[cloudera-manager]
# Packages for Cloudera Manager, Version 5, on RedHat or CentOS 6 x86_64
name=Cloudera Manager
baseurl=/cm5/redhat/6/x86_64/cm/5/
gpgkey=/cm5/redhat/6/x86_64/cm/RPM-GPG-KEY-cloudera
gpgcheck=1
Install the upgrade:
sudo yum clean all
sudo yum upgrade 'cloudera-*'
Check:
rpm -qa 'cloudera-manager-*'
Start the CM server database:
sudo service cloudera-scm-server-db start
Start the CM server:
sudo service cloudera-scm-server start
Log in to http://172.20.0.83:7180/
Install the agents (steps omitted).
If the JDK is upgraded as part of the upgrade, the JAVA_HOME path changes and Java-dependent services stop working, so JAVA_HOME has to be reconfigured.
After upgrading CM, CDH needs to be restarted.
3. CDH upgrade
Stop all cluster services.
Back up the namenode metadata:
cd into the namenode dir and run:
tar -cvf /root/nn_backup_data.tar ./*
Download the parcels.
Distribute the parcels -> activate them -> shut down the cluster (shut down, not restart).
Start the ZooKeeper service.
Go to the HDFS service -> Upgrade HDFS Metadata; this
starts the metadata upgrade on the namenode,
starts the remaining HDFS roles,
waits for the namenode to respond to RPCs,
and waits for HDFS to leave safe mode.
Back up the Hive metastore database:
mysqldump -h172.20.0.67 -ucdhhive -p111111 cdhhive > /tmp/database-backup.sql
Go to the Hive service -> Update Hive Metastore Database Schema.
Update the Oozie ShareLib: Oozie -> Install Oozie ShareLib (this creates the Oozie user ShareLib and the Oozie user dir).
Update Sqoop: go to the Sqoop service -> Update Sqoop, then update the Sqoop2 server.
Update Spark (details omitted; you can uninstall the old version first and install the new one directly after the upgrade).
Start all cluster services: zk -> hdfs -> spark -> flume -> hbase -> hive -> impala -> oozie -> sqoop2 -> hue.
Deploy the client configuration files:
deploy hdfs client configuration
deploy spark client configuration
deploy hbase client configuration
deploy yarn client configuration
deploy hive client configuration
Remove the old packages:
sudo yum remove bigtop-utils bigtop-jsvc bigtop-tomcat hue-common sqoop2-client
Restart the agents:
sudo service cloudera-scm-agent restart
Finalize the HDFS metadata update:
HDFS service -> Instances -> NameNode -> Actions -> Finalize Metadata Upgrade
Main problems encountered during the upgrade:
com.cloudera.server.cmf.FeatureUnavailableException: The feature Navigator Audit Server is not available.
    at com.cloudera.ponents.LicensedFeatureManager.check(LicensedFeatureManager.java:49)
    at com.cloudera.ponents.OperationsManagerImpl.setConfig(OperationsManagerImpl.java:1312)
    at com.cloudera.ponents.OperationsManagerImpl.setConfigUnsafe(OperationsManagerImpl.java:1352)
    at com.cloudera.api.dao.impl.ManagerDaoBase.updateConfigs(ManagerDaoBase.java:264)
    at com.cloudera.api.dao.impl.RoleConfigGroupManagerDaoImpl.updateConfigsHelper(RoleConfigGroupManagerDaoImpl.java:214)
    at com.cloudera.api.dao.impl.RoleConfigGroupManagerDaoImpl.updateRoleConfigGroup(RoleConfigGroupManagerDaoImpl.java:97)
    at com.cloudera.api.dao.impl.RoleConfigGroupManagerDaoImpl.updateRoleConfigGroup(RoleConfigGroupManagerDaoImpl.java:79)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.cloudera.api.dao.impl.ManagerDaoBase.invoke(ManagerDaoBase.java:208)
    at com.sun.proxy.$Proxy82.updateRoleConfigGroup(Unknown Source)
    at com.cloudera.api.v3.impl.RoleConfigGroupsResourceImpl.updateRoleConfigGroup(RoleConfigGroupsResourceImpl.java:69)
    at com.cloudera.api.v3.impl.MgmtServiceResourceV3Impl$RoleConfigGroupsResourceWrapper.updateRoleConfigGroup(MgmtServiceResourceV3Impl.java:54)
    at com.cloudera.cmf.service.upgrade.RemoveBetaFromRCG.upgrade(RemoveBetaFromRCG.java:80)
    at com.cloudera.cmf.service.upgrade.AbstractApiAutoUpgradeHandler.upgrade(AbstractApiAutoUpgradeHandler.java:36)
    at com.cloudera.cmf.service.upgrade.AutoUpgradeHandlerRegistry.performAutoUpgradesForOneVersion(AutoUpgradeHandlerRegistry.java:233)
    at com.cloudera.cmf.service.upgrade.AutoUpgradeHandlerRegistry.performAutoUpgrades(AutoUpgradeHandlerRegistry.java:167)
    at com.cloudera.cmf.service.upgrade.AutoUpgradeHandlerRegistry.performAutoUpgrades(AutoUpgradeHandlerRegistry.java:138)
    at com.cloudera.server.cmf.Main.run(Main.java:587)
    at com.cloudera.server.cmf.Main.main(Main.java:198)
03:17:42,891 INFO ParcelUpdateService:com.ponents.ParcelDownloade
The cluster had been running on the 60-day Enterprise trial, which had expired, so the Navigator service could not start during the upgrade and the whole Cloudera Manager server failed to start.
Problems after the upgrade
a. The third-party JARs previously provided to Flume were gone after the upgrade and had to be put back under /opt/....
b. The MySQL driver JAR used by Sqoop imports could not be found and had to be put back under /opt/....
c. The HBase service failed:
Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User hbase/ip-10-1-33-20.ec2. (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.protocol.ClientProtocol, expected client Kerberos principal is null
at org.apache.hadoop.ipc.Client.call(Client.java:1409)
at org.apache.hadoop.ipc.Client.call(Client.java:1362)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:594)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2224)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:993)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:977)
at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:432)
at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:851)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:435)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:146)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:127)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:789)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:606)
at java.lang.Thread.run(Thread.java:744)
Removing hbase.rpc.engine=org.apache.hadoop.hbase.ipc.SecureRpcEngine from the safety-valve configuration in CM allowed HBase to restart successfully.
It later turned out to be a CM server problem: a hostname had been changed but the Cloudera Manager server had not been restarted. After restarting the CM server, adding the setting back and restarting HBase worked without problems.
d. The Service Monitor and ZooKeeper showed warnings, and some other services showed red alerts:
Exception in scheduled runnable.
java.lang.IllegalStateException
at com.google.common.base.Preconditions.checkState(Preconditions.java:133)
at com.cloudera.cmon.firehose.polling.CdhTask.checkClientConfigs(CdhTask.java:712)
at com.cloudera.cmon.firehose.polling.CdhTask.updateCacheIfNeeded(CdhTask.java:675)
at com.cloudera.cmon.firehose.polling.FirehoseServicesPoller.getDescriptorAndHandleChanges(FirehoseServicesPoller.java:615)
at com.cloudera.cmon.firehose.polling.FirehoseServicesPoller.run(FirehoseServicesPoller.java:179)
at com.cloudera.enterprise.PeriodicEnterpriseService$UnexceptionablePeriodicRunnable.run(PeriodicEnterpriseService.java:67)
at java.lang.Thread.run(Thread.java:745)
As above, the root cause was the CM server: a hostname had been changed without restarting the Cloudera Manager server; after restarting it the problem went away.
e. MapReduce access to HBase under the security mechanism fails.
Removing hbase.rpc.protection=privacy from the client hbase-site safety-valve configuration fixed it. The old version required this setting, and the new version's documentation still says it is needed, but in testing the setting produced the exceptions logged below.
14/11/27 12:38:26 INFO zookeeper.ClientCnxn: Socket connection established to ip-10-1-33-24.ec2.internal/10.1.33.24:2181, initiating session
14/11/27 12:38:26 INFO zookeeper.ClientCnxn: Session establishment complete on server ip-10-1-33-24.ec2.internal/10.1.33.24:2181, sessionid = 0x549ef, negotiated timeout = 60000
14/11/27 12:38:41 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2. to hbase/ip-10-1-34-31.ec2.
14/11/27 12:38:55 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2. to hbase/ip-10-1-34-31.ec2.
14/11/27 12:39:15 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2. to hbase/ip-10-1-34-31.ec2.
14/11/27 12:39:34 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2. to hbase/ip-10-1-34-31.ec2.
14/11/27 12:39:55 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2. to hbase/ip-10-1-34-31.ec2.
14/11/27 12:40:19 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2. to hbase/ip-10-1-34-31.ec2.
14/11/27 12:40:36 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2. to hbase/ip-10-1-34-31.ec2.
Caused by: java.io.IOException: Couldn't setup connection for hbase/ip-10-1-33-20.ec2. to hbase/ip-10-1-34-32.ec2.
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$1.run(RpcClient.java:821)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleSaslConnectionFailure(RpcClient.java:796)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:898)
at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1543)
at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1442)
at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:30014)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1623)
at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:93)
at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:90)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
... 31 more
Caused by: javax.security.sasl.SaslException: No common protection layer between client and server
at com.sun.security.sasl.gsskerb.GssKrb5Client.doFinalHandshake(GssKrb5Client.java:252)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:187)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:210)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupSaslConnection(RpcClient.java:770)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.access$600(RpcClient.java:357)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:891)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:888)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:888)
... 40 more
<property>
  <name>hbase.rpc.engine</name>
  <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>
In MR jobs, load the HBase dependencies with TableMapReduceUtil.addDependencyJars(job); as described in /content/cloudera/en/documentation/cdh5/v5-0-0/CDH5-Installation-Guide/cdh5ig_mapreduce_hbase.html,
and pass the security settings in through the user API, for example:
hbase.master.kerberos.principal=hbase/ip-10-1-10-15.ec2.
hbase.keytab.path=/home/dev/1015q.keytab
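Putting the two points above together, a driver for a kerberized HBase MapReduce job might look roughly like the sketch below. This is a hedged illustration, not the original author's code: the principal realm, keytab path, and table/job names are placeholders, UserGroupInformation.loginUserFromKeytab is used for the Kerberos login, and only the mapper side is shown.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureHBaseScanJob {

    // Trivial mapper that just counts rows; replace with real logic.
    public static class RowCountMapper
            extends TableMapper<NullWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context) {
            context.getCounter("hbase", "rows").increment(1);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Client-side security settings, mirroring the two properties above
        // (principal and keytab path are placeholders).
        conf.set("hbase.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                "hbase/ip-10-1-10-15.ec2.internal@EXAMPLE.COM",
                "/home/dev/1015q.keytab");

        Job job = Job.getInstance(conf, "secure-hbase-scan");
        job.setJarByClass(SecureHBaseScanJob.class);
        job.setOutputFormatClass(NullOutputFormat.class);

        Scan scan = new Scan();
        scan.setCaching(500);
        scan.setCacheBlocks(false);

        // initTableMapperJob ships the HBase client JARs with the job
        // (the addDependencyJars approach recommended by the Cloudera doc).
        TableMapReduceUtil.initTableMapperJob(
                "test_table", scan, RowCountMapper.class,
                NullWritable.class, NullWritable.class, job);
        // Obtain HBase delegation tokens for the job under Kerberos.
        TableMapReduceUtil.initCredentials(job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}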
f. After the upgrade, Impala over JDBC is unusable under the security mechanism:
java.sql.SQLException: Could not open connection to jdbc:hive2://ip-10-1-33-22.ec2.internal:21050/ym_principal=impala/ip-10-1-33-22.ec2.: GSS initiate failed
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:187)
at org.apache.hive.jdbc.HiveConnection.&init&(HiveConnection.java:164)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:233)
at com.cloudera.example.ClouderaImpalaJdbcExample.main(ClouderaImpalaJdbcExample.java:37)
Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:221)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:297)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:185)
... 5 more
Solution: roll the following two JARs back to their previous versions:
hadoop-auth-2.5.0-cdh5.2.0.jar
hive-shims-common-secure-0.13.1-cdh5.2.0.jar
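For reference, the kind of Kerberos JDBC connection the trace above comes from (com.cloudera.example.ClouderaImpalaJdbcExample) looks roughly like this sketch. The host, database and realm are placeholders; it assumes a valid Kerberos ticket (kinit) or a prior UserGroupInformation keytab login, and uses the Hive JDBC driver shown in the stack trace.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ImpalaKerberosJdbcSketch {
    // Impala daemon host/port plus the service principal; values are placeholders.
    private static final String CONNECTION_URL =
            "jdbc:hive2://ip-10-1-33-22.ec2.internal:21050/ym"
            + ";principal=impala/ip-10-1-33-22.ec2.internal@EXAMPLE.COM";

    public static void main(String[] args) throws Exception {
        // The Hive JDBC driver used by the Cloudera Impala JDBC example.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        try (Connection conn = DriverManager.getConnection(CONNECTION_URL);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}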
Hadoop-2.3.0-cdh5.1.0 fully distributed setup (on CentOS)
Source: Linux community; author: jameshadoop
First see the earlier article on the Hadoop-2.3.0-cdh5.1.0 pseudo-distributed installation.
Note: this example is set up as the root user.
Operating system: CentOS 6.5, 64-bit
Note: Hadoop 2.0 and later use JDK 1.7; uninstall the JDK bundled with Linux and install JDK 1.7.
Download:
Software versions: hadoop-2.3.0-cdh5.1.0.tar.gz, zookeeper-3.4.5-cdh5.1.0.tar.gz
Download:
1. Cluster hosts:
c1:192.168.58.11
c2:192.168.58.12
c3:192.168.58.13
2. Install the JDK (omitted; see the reference article above)
3. Configure environment variables (for the JDK and Hadoop)
4. System configuration
1) Disable the firewall:
chkconfig iptables off (permanently disables it)
Configure the hostname and the hosts file.
2) Passwordless SSH configuration
Hadoop needs to manage its daemons remotely: the NameNode connects to each DataNode over SSH to start or stop their processes, so SSH must work without a password. We therefore configure passwordless SSH between the NameNode and the DataNodes, and likewise from the DataNodes back to the NameNode.
On every machine:
open vi /etc/ssh/sshd_config and enable
RSAAuthentication yes  # enable RSA authentication
PubkeyAuthentication yes  # enable public/private key authentication
On Master01 run: ssh-keygen -t rsa -P ''  (press Enter, no passphrase)
The keys are stored under /root/.ssh by default.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@master01 .ssh]# ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
scp authorized_keys c2:~/.ssh/
scp authorized_keys c3:~/.ssh/
5. Configure the following files (the same on every node)
5.1. hadoop/etc/hadoop/hadoop-env.sh — add:
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest
# Assuming your installation directory is /usr/local/hadoop
export HADOOP_PREFIX=/usr/local/hadoop
5.2. etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://c1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/cdh/hadoop/data/tmp</value>
  </property>
</configuration>
5.3. etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <!-- enable WebHDFS -->
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/cdh/hadoop/data/dfs/name</value>
    <description>Local directory where the namenode stores the name table (fsimage); change as needed.</description>
  </property>
  <property>
    <name>dfs.namenode.edits.dir</name>
    <value>${dfs.namenode.name.dir}</value>
    <description>Local directory where the namenode stores the transaction file (edits); change as needed.</description>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/cdh/hadoop/data/dfs/data</value>
    <description>Local directory where the datanode stores blocks; change as needed.</description>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
5.4 etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
5.5 etc/hadoop/yarn-env.sh
# some Java parameters
export JAVA_HOME=/usr/local/java/jdk1.7.0_67
5.6 etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>c1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>c1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>c1:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>c1:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>c1:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
5.7. etc/hadoop/slaves
6. Start the cluster and verify the installation
Format HDFS first:
bin/hdfs namenode -format
Then start the daemons:
sbin/start-dfs.sh
sbin/start-yarn.sh
On the namenode (c1):
[root@c1 hadoop]# jps
3250 Jps
2491 ResourceManager
2343 SecondaryNameNode
2170 NameNode
On a datanode:
[root@c2 ~]# jps
4196 Jps
2061 DataNode
2153 NodeManager
Open a browser and check the NameNode web UI:
NameNode - http://localhost:50070/
Create the HDFS directories:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
$ bin/hdfs dfs -put etc/hadoop input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0-cdh5.1.0.jar grep input output 'dfs[a-z.]+'
$ bin/hdfs dfs -get output output
$ cat output/*