
Running Hadoop, HBase, and Hive together on a single Windows machine

Published: 2020-12-13 21:09:57 · Category: Windows · Source: compiled from the web

There isn't room to list the full configuration in detail, so I'm recording the key changes here. The following changes were made to HBase:

hbase-env.cmd

Append at the end: set HBASE_MANAGES_ZK=false

Change the contents of hbase-site.xml to the following:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
 hdfs://localhost:9000
hdfs://127.0.0.1:9000/hbase/
-->
<configuration>
    <property>
        <name>hbase.master</name>
        <value>localhost</value>
    </property>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://127.0.0.1:9000/hbase/</value>
    </property>
    <property>
        <name>hbase.tmp.dir</name>
        <value>D:/hbase-1.2.5/tmp</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>127.0.0.1</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>D:/hbase-1.2.5/zoo</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>false</value>
    </property>
    <property>
        <name>hbase.master.info.port</name>
        <value>60010</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2185</value>
    </property>
</configuration>
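With HDFS, HBase, and Hive all sharing one machine, the fixed ports in this setup (9000 for the namenode, 2181 and 2185 for the two ZooKeepers, 60010 for the HBase master UI) are easy to get wrong. As a small sketch (not part of the original article), a Python check of which of those ports are already occupied:

```python
import socket

# Ports used by the single-machine setup in this article:
# 9000 -> HDFS namenode, 2181 -> Hive's standalone ZooKeeper,
# 2185 -> HBase's ZooKeeper client port, 60010 -> HBase master web UI.
PORTS = [9000, 2181, 2185, 60010]

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for p in PORTS:
        print(f"port {p}: {'in use' if port_in_use(p) else 'free'}")
```

Running it before starting each daemon makes it obvious whether a previous instance (or another service) is still holding a port.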

To start hiveserver2, Hive needs a standalone ZooKeeper (zookeeper-3.4.5) listening on port 2181, which is why the HBase ZooKeeper port above was changed to 2185. Change Hive's hive-site.xml to the following:

<configuration>

	<!-- WARNING!!! This file is provided for documentation purposes ONLY!     -->
	<!-- WARNING!!! Any changes you make to this file will be ignored by Hive. -->
	<!-- WARNING!!! You must make your changes in hive-site.xml instead.       -->

	<!-- config mysql connection -->
	<property>
		<name>javax.jdo.option.ConnectionURL</name>
		<value>jdbc:mysql://127.0.0.1:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
		<description>JDBC connect string for a JDBC metastore</description>
	</property>

	<property>
		<name>javax.jdo.option.ConnectionDriverName</name>
		<value>com.mysql.jdbc.Driver</value>
		<description>Driver class name for a JDBC metastore</description>
	</property>

	<property>
		<name>javax.jdo.option.ConnectionUserName</name>
		<value>root</value>
		<description>username to use against metastore database</description>
	</property>

	<property>
		<name>javax.jdo.option.ConnectionPassword</name>
		<value>root</value>
		<description>password to use against metastore database</description>
	</property>
	<property>
		<name>hive.metastore.schema.verification</name>
		<value>false</value>
	</property>

 
	<property>
		<name>hive.metastore.warehouse.dir</name>
		<value>/user/hive/warehouse</value>
	</property>

	<property>
		<name>javax.jdo.option.DetachAllOnCommit</name>
		<value>true</value>
		<description>detaches all objects from session so that they can be used after transaction is committed</description>
	</property>

	<property>
		<name>javax.jdo.option.NonTransactionalRead</name>
		<value>true</value>
		<description>reads outside of transactions</description>
	</property>

 	<property>
        <name>datanucleus.readOnlyDatastore</name>
        <value>false</value>
    </property>
    <property> 
        <name>datanucleus.fixedDatastore</name>
        <value>false</value> 
    </property>

    <property> 
        <name>datanucleus.autoCreateSchema</name> 
        <value>true</value> 
    </property>
    
    <property>
        <name>datanucleus.autoCreateTables</name>
        <value>true</value>
    </property>
    <property>
        <name>datanucleus.autoCreateColumns</name>
        <value>true</value>
    </property>
    <!-- HiveServer2 service -->
		<property>
		   <name>hive.support.concurrency</name>
		   <value>true</value>
		</property>
		<property>
		   <name>hive.zookeeper.quorum</name>
		   <value>localhost</value>
		</property>
		<property>
		   <name>hive.server2.thrift.min.worker.threads</name>
		   <value>5</value>
		</property>
		<property>
		   <name>hive.server2.thrift.max.worker.threads</name>
		   <value>100</value>
		</property>
		<!-- CUSTOM, NONE -->
		<!--
		<property>  
		   <name>hive.server2.authentication</name>  
		   <value>NONE</value>  
		</property>  
	
		<property>  
		   <name>hive.server2.custom.authentication.class</name>  
		   <value>tv.huan.hive.auth.HuanPasswdAuthenticationProvider</value>  
		</property>  
		<property>  
		   <name>hive.server2.custom.authentication.file</name>  
		   <value>D:/apache-hive-2.1.1-bin/conf/user.password.conf</value>  
		</property>
		-->
 		<property>
    	<name>hive.server2.transport.mode</name>
    	<value>binary</value>
  	</property>
  	<property>
    	<name>hive.hwi.listen.host</name>
    	<value>0.0.0.0</value>
  	</property>

 		<property>
    	<name>hive.server2.webui.host</name>
    	<value>0.0.0.0</value>
    </property>
     <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
     </property>
     <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
     </property>
     
     
		<property>
	    <name>hive.server2.thrift.client.user</name>
	    <value>root</value>
 	  </property>
	  <property>
	    <name>hive.server2.thrift.client.password</name>
	    <value>123456</value>
 	  </property>
	       
    
     <!--
	  <property>
	    <name>hive.metastore.uris</name>
	    <value>thrift://127.0.0.1:9083</value>
	    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
	  </property>    
	  --> 
     <property>
        <name>hive.server2.thrift.http.port</name>
        <value>11002</value>
     </property>
     <property>
        <name>hive.server2.thrift.port</name>
        <value>11006</value>
     </property>    	  
 

 		<property>
			<name>hbase.zookeeper.quorum</name>
      <value>0.0.0.0</value>
    </property>

</configuration>
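Hand-editing these Hadoop-style XML files is error-prone; for example, an unescaped `&` in the JDBC connection URL makes the whole file unparseable. A minimal sketch (my addition, not from the article) that loads a `*-site.xml` into a dict so you can sanity-check values like the thrift ports:

```python
import xml.etree.ElementTree as ET

def load_site_xml(path):
    """Parse a Hadoop/Hive *-site.xml into a {name: value} dict.
    ElementTree raises ParseError on malformed XML, which catches
    mistakes like a raw '&' in the JDBC connection URL."""
    props = {}
    for prop in ET.parse(path).getroot().iter("property"):
        name = prop.findtext("name")
        if name is not None:
            props[name] = prop.findtext("value")
    return props
```

For this article's hive-site.xml you would expect, e.g., `hive.server2.thrift.port` to come back as `11006`.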

Change Hadoop's core-site.xml to the following (to get past permission problems, you also need to adjust HDFS directory permissions with hadoop fs -chmod):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/D:/hadoop-2.5.2/workplace/tmp</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/D:/hadoop-2.5.2/workplace/name</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hive.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hive.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.Administrator.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.Administrator.groups</name>
        <value>*</value>
    </property>
</configuration>
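Most of this core-site.xml is the same `hadoop.proxyuser.<user>.hosts` / `hadoop.proxyuser.<user>.groups` pair repeated for each user (hadoop, hive, root, Administrator). A small generator sketch (my addition; the rendering helper is illustrative, not a Hadoop API) removes the copy-paste:

```python
def proxyuser_properties(users, hosts="*", groups="*"):
    """Build the hadoop.proxyuser.<user>.{hosts,groups} pairs that
    core-site.xml repeats for every user allowed to impersonate others."""
    props = {}
    for u in users:
        props[f"hadoop.proxyuser.{u}.hosts"] = hosts
        props[f"hadoop.proxyuser.{u}.groups"] = groups
    return props

def to_site_xml(props):
    """Render a {name: value} dict as Hadoop-style configuration XML."""
    body = "\n".join(
        f"    <property>\n        <name>{n}</name>\n"
        f"        <value>{v}</value>\n    </property>"
        for n, v in props.items()
    )
    return f"<configuration>\n{body}\n</configuration>"
```

For example, `to_site_xml(proxyuser_properties(["hadoop", "hive", "root", "Administrator"]))` reproduces the eight proxyuser properties above.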
Many other things were changed as well; the various errors that came up along the way were fixed by following solutions found online.

Since the Hive version I'm using is 2.1.1, whose lib directory ships HBase jars from version 1.1, while the HBase I installed is 1.2.5, some of HBase's jars need to be copied into hive/lib to replace the existing ones. (The original article listed the files to copy, and the files copied over from HBase, in screenshots that are not preserved here.)

(Editor: Li Datong)

