1. Prepare four VMs with mutual passwordless SSH
Passwordless login:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
Append the public keys of all four machines to a single authorized_keys file, then distribute that file to every machine (see the sketch after the hosts list below).
Edit /etc/hosts so the machines can reach one another by hostname:
192.168.226.130 jq1
192.168.226.131 jq3
192.168.226.132 jq2
192.168.226.133 jq4
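A minimal sketch of distributing the merged authorized_keys and verifying passwordless login, assuming the jq1..jq4 hostnames above and the same user on every node:

# Run from the machine where all four public keys were merged.
for h in jq1 jq2 jq3 jq4; do
  scp ~/.ssh/authorized_keys "$h":~/.ssh/
  ssh "$h" 'chmod 0600 ~/.ssh/authorized_keys'
done

# Verify: each command should print the remote hostname without a password prompt.
for h in jq1 jq2 jq3 jq4; do ssh "$h" hostname; done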
2. Versions
JDK 1.8+
ZooKeeper 3.4.*
Hadoop 2.7.*
Hive 2.3.*
Spark 2.4.*
3. ZooKeeper installation and configuration
zoo.cfg (hostnames may not resolve here; if they don't, replace them with IPs):
server.1=192.168.226.130:2888:3888
server.2=192.168.226.131:2888:3888
server.3=192.168.226.132:2888:3888
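Each server.N line must be matched by a myid file in ZooKeeper's dataDir on that node. A minimal sketch, assuming dataDir=/opt/zookeeper/data (adjust to whatever your zoo.cfg sets):

# On 192.168.226.130 (server.1); write 2 and 3 on the other two nodes.
mkdir -p /opt/zookeeper/data
echo 1 > /opt/zookeeper/data/myid

# Start and check on every ZooKeeper node:
zkServer.sh start
zkServer.sh status    # one node should report "leader", the others "follower"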
4. Hadoop installation and configuration
hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>yshadoop</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.yshadoop</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.yshadoop.nn1</name>
    <value>jq1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.yshadoop.nn2</name>
    <value>jq2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.yshadoop.nn1</name>
    <value>jq1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.yshadoop.nn2</name>
    <value>jq2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://jq2:8485;jq3:8485;jq4:8485/yshadoop</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.yshadoop</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <!-- prefer an absolute path here; Hadoop does not expand ~ -->
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>~/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://yshadoop</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/hadoop/jn/data</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>jq1:2181,jq2:2181,jq3:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/hadoop2</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hdfs.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hdfs.groups</name>
    <value>*</value>
  </property>
</configuration>
slaves
jq2
jq3
jq4
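With both configuration files and the slaves list in place, the first start of an HA cluster follows a fixed order. A sketch, assuming Hadoop's bin/ and sbin/ directories are on PATH:

# 1. Start the JournalNodes on jq2, jq3, jq4 (per dfs.namenode.shared.edits.dir):
hadoop-daemon.sh start journalnode

# 2. On jq1 (nn1): format HDFS, then start the NameNode.
hdfs namenode -format
hadoop-daemon.sh start namenode

# 3. On jq2 (nn2): copy nn1's metadata instead of formatting again.
hdfs namenode -bootstrapStandby

# 4. On jq1: initialize the failover znode in ZooKeeper, then start everything.
hdfs zkfc -formatZK
start-dfs.sh    # brings up NameNodes, DataNodes, JournalNodes, and ZKFCs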
5. Hive installation and configuration
hive-site.xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hivedb?createDatabaseIfNotExist=true&amp;useSSL=false</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive/warehouse</value>
    <description>hive default warehouse; change it if necessary</description>
  </property>
  <property>
    <name>hive.server2.custom.authentication.class</name>
    <value>org.apache.hadoop.hive.contrib.auth.CustomPasswdAuthenticator</value>
  </property>
  <property>
    <name>hive.jdbc_passwd.auth.hadoop</name>
    <value>123456789</value>
    <description/>
  </property>
</configuration>
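The last two properties wire in a custom HiveServer2 authenticator that checks a user's password against hive.jdbc_passwd.auth.&lt;username&gt;; note that hive.server2.authentication must also be set to CUSTOM for the class to take effect. Assuming HiveServer2 runs on jq1 with the default port 10000, a quick test of the hadoop/123456789 pair configured above:

beeline -u "jdbc:hive2://jq1:10000/default" -n hadoop -p 123456789 -e "show databases;"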
hive-env.sh
export HIVE_CONF_DIR=/data/apache-hive-2.3.7-bin/conf
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/data/hadoop-2.7.2
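Before the first start, the metastore schema must be created in the MySQL database named in the JDBC URL above; Hive ships schematool for this. A sketch, assuming the MySQL connector jar has already been copied into $HIVE_HOME/lib:

# Create the metastore tables in hivedb.
schematool -dbType mysql -initSchema

# Start the metastore service and HiveServer2 in the background.
hive --service metastore &
hive --service hiveserver2 &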
6. Spark installation and configuration
spark-env.sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/data/hadoop-2.7.2
export HADOOP_CONF_DIR=/data/hadoop-2.7.2/etc/hadoop
export SPARK_WORKER_MEMORY=2g
export SPARK_WORKER_CORES=2
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=jq1:2181,jq2:2181,jq3:2181 -Dspark.deploy.zookeeper.dir=/opt/hadoop/spark"
slaves
jq1
jq2
jq3
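With spark.deploy.recoveryMode=ZOOKEEPER, several masters can run at once and ZooKeeper elects the active one. A start-up sketch, assuming masters on jq1 and jq2:

# On jq1: start a master plus the workers listed in slaves.
$SPARK_HOME/sbin/start-all.sh

# On jq2: start a second, standby master.
$SPARK_HOME/sbin/start-master.sh

# Clients list both masters; ZooKeeper-based failover picks the active one.
$SPARK_HOME/bin/spark-shell --master spark://jq1:7077,jq2:7077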
hive-site.xml
Copy Hive's hive-site.xml into spark/conf/ so Spark SQL can reach the Hive metastore; a quick test follows below.
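A quick check that Spark sees the Hive metastore, using the spark-sql CLI (this assumes the MySQL connector jar is also on Spark's classpath, e.g. copied into $SPARK_HOME/jars/):

$SPARK_HOME/bin/spark-sql --master spark://jq1:7077,jq2:7077 -e "show databases;"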