

[Linux][Hadoop] Getting Hadoop up and running


The installation steps will be covered in a separate post. With Hadoop installed, we can now run the relevant commands and get it up and running.

 

Start all services with this command:

hadoop@Ubuntu:/usr/local/gz/hadoop-2.4.1$ ./sbin/start-all.sh

The hadoop-2.4.1/sbin directory contains quite a few startup scripts:

[Image: listing of the startup scripts under hadoop-2.4.1/sbin]

Each service has its own startup script there, while start-all.sh starts everything at once. Here is the content of that script:

#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Start all hadoop daemons.  Run this on master node.

echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"
# The message above tells us this script is deprecated and that we should
# use start-dfs.sh and start-yarn.sh instead.

bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`

DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
# This sources hadoop/libexec/hadoop-config.sh, which is all path setup:
# CLASSPATH and the related exports.

# The real work happens in the two blocks below: the script just runs
# start-dfs.sh and start-yarn.sh, so from now on it is better to run those
# two commands yourself.

# start hdfs daemons if hdfs is present
if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
fi

# start yarn daemons if yarn is present
if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
  "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
fi
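Since the script is deprecated, a cleaner habit is to start the two layers yourself. A minimal sketch, run from the same install directory as above:

./sbin/start-dfs.sh    # starts NameNode, DataNode and SecondaryNameNode
./sbin/start-yarn.sh   # starts ResourceManager and NodeManager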

Once that finishes, run jps to check whether all the services have come up:

[Image: jps output listing the running Hadoop daemons]
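For reference, output along these lines is what you want to see (the process IDs below are illustrative only; note that jps lists itself as one of the entries):

hadoop@Ubuntu:/usr/local/gz/hadoop-2.4.1$ jps
2722 NameNode
2854 DataNode
3041 SecondaryNameNode
3197 ResourceManager
3319 NodeManager
3634 Jps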

Be careful here: there must be six entries. When I first started Hadoop there were only five. Both web pages, http://192.168.1.107:50070/ and http://192.168.1.107:8088/, opened fine, but when I ran the WordCount example it failed. Only after tracking down the cause did I discover that the DataNode service had not started.
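For completeness, the WordCount check mentioned above can be run with the examples jar that ships with the release; the HDFS paths /input and /output below are placeholders of my own, not from the original run:

./bin/hdfs dfs -mkdir -p /input
./bin/hdfs dfs -put etc/hadoop/*.xml /input
./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar wordcount /input /output
./bin/hdfs dfs -cat /output/part-r-00000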

The following shows the running applications:

[Image: YARN web UI at port 8088 showing running applications]

The following shows the DFS health status:

[Image: HDFS web UI showing DFS health status]

The page at http://192.168.1.107:50070/ also lets you view Hadoop's startup and runtime logs:

[Image: log listing page on the NameNode web UI]

This is exactly where I found the cause of my problem. The logs page lists a separate log file for each service:

[Image: per-service log files]
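The same logs are also on disk under the install directory, so you can read them from a shell as well. A sketch assuming the default log location and the default naming scheme hadoop-<user>-<daemon>-<hostname>.log, so the exact file name may differ on your machine:

ls /usr/local/gz/hadoop-2.4.1/logs/
tail -n 50 /usr/local/gz/hadoop-2.4.1/logs/hadoop-hadoop-datanode-ubuntu.log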

Opening the datanode-ubuntu.log file revealed the exception being thrown:

2014-07-21 22:05:21,064 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/gz/hadoop-2.4.1/dfs/data/in_use.lock acquired by nodename 3312@ubuntu
2014-07-21 22:05:21,075 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting.
java.io.IOException: Incompatible clusterIDs in /usr/local/gz/hadoop-2.4.1/dfs/data: namenode clusterID = CID-2cfdb22e-07b2-4ab8-965d-fdb27645bd62; datanode clusterID = ID-2cfdb22e-07b2-4ab8-965d-fdb27645bd62
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:477)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:226)
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:974)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:945)
	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
	at java.lang.Thread.run(Thread.java:722)
2014-07-21 22:05:21,084 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2014-07-21 22:05:21,102 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2014-07-21 22:05:23,103 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-07-21 22:05:23,106 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-07-21 22:05:23,112 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ubuntu/127.0.1.1
************************************************************/
Searching on this error turned up the cause: formatting the NameNode again after Hadoop has already been started leaves the DataNode and the NameNode with inconsistent clusterIDs.
Locate the DataNode and NameNode directories configured in hadoop/etc/hadoop/hdfs-site.xml and compare the clusterID in each directory's ./current/VERSION file. They will not match; change the clusterID in the DataNode's file to the NameNode's value, restart, and everything comes up normally.
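A minimal sketch of that fix, assuming the dfs/name and dfs/data directories under the install path from this post are the ones configured in hdfs-site.xml (the dfs/data path appears in the log above; dfs/name is my assumption, so check dfs.namenode.name.dir and dfs.datanode.data.dir first):

# Compare the two clusterIDs (the VERSION files are plain key=value text)
grep clusterID /usr/local/gz/hadoop-2.4.1/dfs/name/current/VERSION
grep clusterID /usr/local/gz/hadoop-2.4.1/dfs/data/current/VERSION

# If they differ, edit the DataNode's VERSION file so its clusterID matches
# the NameNode's value, then restart HDFS
vi /usr/local/gz/hadoop-2.4.1/dfs/data/current/VERSION
./sbin/stop-dfs.sh
./sbin/start-dfs.sh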
Reference:
http://www.CUOXin.com/kinglau/p/3796274.html
