
Hadoop HA + ZooKeeper + HBase

一、環(huán)境


1. OS: Red Hat Enterprise Linux Server release 6.4

2. Required packages

    hadoop-2.2.0.tar.gz  

    hbase-0.98.2-hadoop2-bin.tar.gz  

    jdk-7u67-linux-x64.tar.gz  

    zookeeper-3.4.6.tar.gz

3. Services running on each node

192.168.10.40 master1 namenode resourcemanager   ZKFC   hmaster  

192.168.10.41 master2 namenode                   ZKFC   hmaster(backup)

192.168.10.42 slave1  datanode nodemanager  journalnode  hregionserver  zookeeper

192.168.10.43 slave2  datanode nodemanager  journalnode  hregionserver  zookeeper

192.168.10.44 slave3  datanode nodemanager  journalnode  hregionserver  zookeeper

II. Installation steps (to keep the nodes in sync, most commands are run on master1)

1. Passwordless SSH login

(create the key directory with the correct permissions first: mkdir -m700 .ssh)
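The passwordless-login step can be sketched as follows. The `richmail` user and the hostnames are taken from the rest of this guide; on a real cluster the public key is pushed with `ssh-copy-id`, which the sketch simulates against a local scratch directory so it can run anywhere:

```shell
# Scratch directory stands in for the remote ~/.ssh so the sketch runs anywhere.
WORK=$(mktemp -d)

# Generate a passphrase-less key pair (on a real node just run: ssh-keygen -t rsa).
ssh-keygen -q -t rsa -N '' -f "$WORK/id_rsa"

# On a real cluster you would push the key with:
#   ssh-copy-id richmail@slave1   # and slave2, slave3, master2
# which appends the public key to the remote authorized_keys file
# (mode 600, inside a mode-700 ~/.ssh). Simulated here:
mkdir -m700 "$WORK/dot_ssh"
cat "$WORK/id_rsa.pub" >> "$WORK/dot_ssh/authorized_keys"
chmod 600 "$WORK/dot_ssh/authorized_keys"
```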

2. JDK installation (on every node)

1) Extract

tar zxf jdk-7u67-linux-x64.tar.gz 

ln -sf jdk1.7.0_67 jdk

2) Configure

sudo vim /etc/profile

export JAVA_HOME=/home/richmail/jdk

export PATH=$JAVA_HOME/bin:$PATH

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

3) Apply the changes

source /etc/profile
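A quick way to confirm the exports took effect is shown below. This is a sketch: the same three lines are written to a scratch file and sourced, so it runs without touching /etc/profile:

```shell
# Write the same exports to a scratch profile and source it.
PROFILE=$(mktemp)
cat > "$PROFILE" <<'EOF'
export JAVA_HOME=/home/richmail/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
EOF
. "$PROFILE"

# After `source /etc/profile` on a real node, `java -version` should
# report JDK 1.7.0_67.
echo "JAVA_HOME=$JAVA_HOME"
```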

3. ZooKeeper installation

1) Extract

tar zxf zookeeper-3.4.6.tar.gz 

ln -sf zookeeper-3.4.6 zookeeper

2) Configure

vim zookeeper/bin/zkEnv.sh

ZOO_LOG_DIR="/home/richmail/zookeeper/logs"

cd zookeeper/conf

cp zoo_sample.cfg zoo.cfg

vim zoo.cfg

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/home/richmail/zookeeper/data

dataLogDir=/home/richmail/zookeeper/logs

clientPort=2181

server.1=slave1:2888:3888

server.2=slave2:2888:3888

server.3=slave3:2888:3888

mkdir -p /home/richmail/zookeeper/{data,logs}

3) Copy to slave1, slave2, and slave3

cd

scp -rv zookeeper slave1:~/
ssh slave1 'echo 1 > /home/richmail/zookeeper/data/myid'

scp -rv zookeeper slave2:~/
ssh slave2 'echo 2 > /home/richmail/zookeeper/data/myid'

scp -rv zookeeper slave3:~/
ssh slave3 'echo 3 > /home/richmail/zookeeper/data/myid'
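The three scp/ssh pairs follow one pattern and are easy to get wrong when copy-pasted (each node must receive a distinct myid matching its server.N line in zoo.cfg). They can be written as a loop; in the sketch below the remote calls are shown as comments, and the myid files are written into scratch directories standing in for the three slaves, so it runs anywhere:

```shell
ROOT=$(mktemp -d)   # stands in for /home/richmail on each slave
i=1
for host in slave1 slave2 slave3; do
    # Real cluster:
    #   scp -rv zookeeper "$host":~/
    #   ssh "$host" "echo $i > /home/richmail/zookeeper/data/myid"
    mkdir -p "$ROOT/$host/zookeeper/data"
    echo "$i" > "$ROOT/$host/zookeeper/data/myid"
    i=$((i + 1))
done
```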

4) Start ZooKeeper

On slave1, slave2, and slave3 respectively:

cd ~/zookeeper/bin

./zkServer.sh start

4. Hadoop installation

1) Extract

tar zxf hadoop-2.2.0.tar.gz

ln -sf hadoop-2.2.0 hadoop

2) Configure

cd /home/richmail/hadoop/etc/hadoop

vim core-site.xml

<configuration>

<property>

<name>fs.defaultFS</name>

<value>hdfs://cluster</value>

</property>

<property>

<name>hadoop.tmp.dir</name>

<value>/home/richmail/hadoop/storage/tmp</value>

</property>

<property>

<name>ha.zookeeper.quorum</name>

<value>slave1:2181,slave2:2181,slave3:2181</value>

</property>

</configuration>

mkdir -p /home/richmail/hadoop/storage/tmp

vim hadoop-env.sh 

export JAVA_HOME=/home/richmail/jdk

export HADOOP_PID_DIR=/var/hadoop/pids  # the default is under /tmp

vim hdfs-site.xml 

<configuration>

<property>

<name>dfs.nameservices</name>

<value>cluster</value>

</property>

<property>

<name>dfs.ha.namenodes.cluster</name>

<value>master1,master2</value>

</property>

<property>

<name>dfs.namenode.rpc-address.cluster.master1</name>

<value>master1:9000</value>

</property>

<property>

<name>dfs.namenode.rpc-address.cluster.master2</name>

<value>master2:9000</value>

</property>

<property>

<name>dfs.namenode.http-address.cluster.master1</name>

<value>master1:50070</value>

</property>

<property>

<name>dfs.namenode.http-address.cluster.master2</name>

<value>master2:50070</value>

</property>

<property>

<name>dfs.namenode.shared.edits.dir</name>

<value>qjournal://slave1:8485;slave2:8485;slave3:8485/cluster</value>

</property>

<property>

<name>dfs.ha.automatic-failover.enabled</name>

<value>true</value>

</property>

<property>

<name>dfs.client.failover.proxy.provider.cluster</name>

<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>

</property>

<property>

<name>dfs.ha.fencing.methods</name>

<value>sshfence</value>

</property>

<property>

<name>dfs.ha.fencing.ssh.private-key-files</name>

<value>/home/richmail/.ssh/id_rsa</value>

</property>

<property>

<name>dfs.journalnode.edits.dir</name>

<value>/home/richmail/hadoop/storage/journal</value>

</property>

</configuration>

mkdir -p /home/richmail/hadoop/storage/journal

vim mapred-site.xml

<configuration>

<property>

<name>mapreduce.framework.name</name>

<value>yarn</value>

</property>

</configuration>

vim yarn-env.sh

export YARN_PID_DIR=/var/hadoop/pids

vim yarn-site.xml

<configuration>

<property>

<name>yarn.resourcemanager.hostname</name>

<value>master1</value>

</property>

<property>

<name>yarn.nodemanager.aux-services</name>

<value>mapreduce_shuffle</value>

</property>

</configuration>

vim slaves

slave1

slave2

slave3

3) Copy to the other nodes

cd

scp -rv hadoop master2:~/

scp -rv hadoop slave1:~/

scp -rv hadoop slave2:~/

scp -rv hadoop slave3:~/

4) Start Hadoop

1) On slave1, slave2, and slave3, start the JournalNode

cd ~/hadoop/sbin

./hadoop-daemon.sh start journalnode

2) On master1, run

cd ~/hadoop/bin

./hdfs zkfc -formatZK

./hdfs namenode -format

cd ../sbin

./hadoop-daemon.sh start namenode

./start-all.sh

3) On master2, run

cd ~/hadoop/bin

hdfs namenode -bootstrapStandby

cd ../sbin

hadoop-daemon.sh start namenode

5) Verify

Open 192.168.10.40:50070 and 192.168.10.41:50070 in a browser; both NameNodes should be visible, one active and one standby.

Or run the following on a NameNode:

hdfs haadmin -getServiceState master1

hdfs haadmin -getServiceState master2

Running hdfs haadmin -failover --forceactive master1 master2 swaps the active/standby states of the two nodes.

5. HBase installation

1) Extract

tar zxf hbase-0.98.2-hadoop2-bin.tar.gz 

ln -sf hbase-0.98.2-hadoop2 hbase 

2) Configure

cd ~/hbase/conf

vim hbase-env.sh

export JAVA_HOME=/home/richmail/jdk

export HBASE_MANAGES_ZK=false

export HBASE_PID_DIR=/var/hadoop/pids

vim regionservers

slave1

slave2

slave3

vim hbase-site.xml 

<configuration>

<property>

<name>hbase.rootdir</name>

<value>hdfs://cluster/hbase</value>

</property>

<property>

<name>hbase.master</name>

<value>60000</value>

</property>

<property>

<name>hbase.zookeeper.quorum</name>

<value>slave1,slave2,slave3</value>

</property>

<property>

<name>hbase.zookeeper.property.clientPort</name>

<value>2181</value>

</property>

<property>

<name>hbase.zookeeper.property.dataDir</name>

<value>/home/richmail/hbase/zkdata</value>

</property>

<property>

<name>hbase.cluster.distributed</name>

<value>true</value>

</property>

<property>

<name>hbase.tmp.dir</name>

<value>/home/richmail/hbase/data</value>

</property>

</configuration>

mkdir ~/hbase/{zkdata,data}

Note: HBase has a startup error that is resolved only by copying Hadoop's configuration file hdfs-site.xml into hbase/conf, so that HBase can resolve the HA nameservice hdfs://cluster.
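That fix is one copy per HBase node, sketched below with a scratch directory standing in for /home/richmail so it runs anywhere:

```shell
HOME_DIR=$(mktemp -d)   # stands in for /home/richmail
mkdir -p "$HOME_DIR/hadoop/etc/hadoop" "$HOME_DIR/hbase/conf"
printf '<configuration>\n</configuration>\n' \
    > "$HOME_DIR/hadoop/etc/hadoop/hdfs-site.xml"

# Real cluster (repeat on every HBase node, or do it before step 3's scp):
#   cp ~/hadoop/etc/hadoop/hdfs-site.xml ~/hbase/conf/
cp "$HOME_DIR/hadoop/etc/hadoop/hdfs-site.xml" "$HOME_DIR/hbase/conf/"
```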

3) Copy to the other nodes

cd

scp -rv hbase master2:~/

scp -rv hbase slave1:~/

scp -rv hbase slave2:~/

scp -rv hbase slave3:~/

4) Start HBase

On master1, run

cd ~/hbase/bin

./start-hbase.sh

On master2, run

./bin/hbase-daemon.sh start master --backup

At this point the cluster deployment is complete.

Article URL: http://chinadenli.net/article46/gehhhg.html
