① Download the package
Official download address: http://mirror.bit.edu.cn/apache/hive/
Choose apache-hive-2.3.4-bin.tar.gz and download it on Windows.

② Transfer the package from Windows to the current directory on Linux
SecureCRT: [File] → [Connect SFTP Session] to open an sftp session

③ Extract
Extract into /opt/module (/opt is a system directory; /module under it was created by hand).
Rename the extracted directory to hive.
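Steps ①–③ can be sketched as the shell commands below. The first three lines only fabricate a stand-in archive so the snippet can be tried anywhere; skip them when you have the real tarball. $PREFIX stands in for the article's /opt/module.

```shell
# Stand-in archive so the sketch is runnable anywhere (skip these
# three lines when you have the real apache-hive-2.3.4-bin.tar.gz).
mkdir -p apache-hive-2.3.4-bin/bin
tar -zcf apache-hive-2.3.4-bin.tar.gz apache-hive-2.3.4-bin
rm -r apache-hive-2.3.4-bin

# The actual steps: extract into the target directory, then rename.
PREFIX=${PREFIX:-.}            # the article uses /opt/module
mkdir -p "$PREFIX"
tar -zxf apache-hive-2.3.4-bin.tar.gz -C "$PREFIX"
mv "$PREFIX/apache-hive-2.3.4-bin" "$PREFIX/hive"
```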

④ Set environment variables
Edit /etc/profile and add the HIVE_HOME install path.
Run source /etc/profile to make the change take effect.
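The lines appended to /etc/profile might look like this (install path from step ③; adding $HIVE_HOME/bin to PATH is assumed so the hive command is found):

```shell
# Appended to /etc/profile, then activated with: source /etc/profile
export HIVE_HOME=/opt/module/hive
export PATH=$PATH:$HIVE_HOME/bin
```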

⑤ Configure hive-env.sh
Go to /opt/module/hive/conf and rename hive-env.sh.template to hive-env.sh (cp or mv both work):
cp hive-env.sh.template hive-env.sh
Set the Hadoop install path:
HADOOP_HOME=/opt/module/hadoop-2.7.3
Set the path of Hive's conf directory:
export HIVE_CONF_DIR=/opt/module/hive/conf

⑥ Configure hive-site.xml
Go to /opt/module/hive/conf and copy hive-default.xml.template to hive-site.xml (cp or mv both work):
cp hive-default.xml.template hive-site.xml
Add the following properties at the end (inside <configuration>):
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://bigdata131:3306/hivedb?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root</value>
  <description>password to use against metastore database</description>
</property>
Note:
Hive's default HDFS directory for databases and tables is /user/hive/warehouse, set by:
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
Installing MySQL Connector/J
① Download the package
Official download address: http://ftp.ntu.edu.tw/MySQL/Downloads/Connector-J/
mysql-connector-java-5.1.47.tar.gz

② Transfer the package from Windows to the current directory on Linux
SecureCRT: [File] → [Connect SFTP Session] to open an sftp session


③ Extract
Extract into /opt/module (/opt is a system directory; /module under it was created by hand).

④ Copy the driver jar
Copy the driver jar mysql-connector-java-5.1.47-bin.jar into /opt/module/hive/lib.
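Steps ③–④ for the connector as a hedged sketch. The first four lines only fabricate a stand-in tarball so the snippet can be tried anywhere; $MODULE stands in for the article's /opt/module.

```shell
# Stand-in tarball so the sketch is runnable anywhere (skip these four
# lines when you have the real mysql-connector-java-5.1.47.tar.gz).
mkdir -p mysql-connector-java-5.1.47
touch mysql-connector-java-5.1.47/mysql-connector-java-5.1.47-bin.jar
tar -zcf mysql-connector-java-5.1.47.tar.gz mysql-connector-java-5.1.47
rm -r mysql-connector-java-5.1.47

# The actual steps: extract, then drop the driver jar into Hive's lib/.
MODULE=${MODULE:-.}            # the article uses /opt/module
mkdir -p "$MODULE/hive/lib"
tar -zxf mysql-connector-java-5.1.47.tar.gz -C "$MODULE"
cp "$MODULE/mysql-connector-java-5.1.47/mysql-connector-java-5.1.47-bin.jar" "$MODULE/hive/lib/"
```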

Starting Hive
① Start Hadoop: start-all.sh
② Initialize the metastore schema: schematool -dbType mysql -initSchema
③ Start Hive: hive
hive>    (you are now in the hive shell)
④ Create/drop/alter/show databases, tables, and views; load data into tables; query data; and so on.
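A few example statements of the kind step ④ refers to; the database and table names here are made up for illustration.

```sql
-- Hypothetical session inside the hive shell.
create database demo;
use demo;
create table users (id int, name string);
show tables;
alter table users add columns (age int);
drop table users;
drop database demo;
```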

Note:
① Starting hive fails with: Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir}/${system:user.name}. The cause is that the temporary directories in hive-site.xml are not set up. Create a temporary directory of your own (here /opt/module/hive/tmp) and replace ${system:java.io.tmpdir} with it in the following properties:
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/opt/module/hive/tmp/${system:user.name}</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/opt/module/hive/tmp/${hive.session.id}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/opt/module/hive/tmp/${system:user.name}/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/opt/module/hive/tmp/${system:user.name}</value>
  <description>Location of Hive run time structured log file</description>
</property>
② schematool -dbType mysql -initSchema fails with: Schema initialization FAILED! Metastore state would be inconsistent !!
Most online guides say to add the MySQL connection properties at the top of the file, but hive-default.xml.template already contains Derby settings, so MySQL values placed above them get overridden by the Derby configuration further down and initialization fails. Instead, put the MySQL properties at the very bottom, or delete the Derby properties.
Also delete the metastore_db directory that the Derby configuration created.
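Cleaning up the Derby leftovers before re-running the initialization might look like this (metastore_db is created in whatever directory hive or schematool was last run from):

```shell
# Remove the Derby metastore left behind by a failed initialization.
rm -rf metastore_db
# schematool -dbType mysql -initSchema   # re-run after cleanup
```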

③ Hive commands (such as show databases, show tables) fail with: Failed with exception java.io.IOException: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:user.name}. In hive-site.xml, find the value of <name>hive.exec.local.scratchdir</name> and change ${system:user.name} to ${user.name}.

Hive example: wordcount
① Create a source data file and upload it to the /user/input directory on HDFS
② Create the source table t1: create table t1 (line string);

③ Load the data: load data inpath '/user/input' overwrite into table t1;

④ Write a HiveQL statement implementing the wordcount algorithm, saving the result into table wct1:
create table wct1 as select word, count(1) as count from (select explode(split(line, ' ')) as word from t1) w group by word order by word;

⑤ View the wordcount result: select * from wct1;