I. Environment Overview

This walkthrough installs the stable Hadoop release, version 2.7.3, downloaded from:
http://www-eu.apache.org/dist/hadoop/common/
The lab consists of three machines:
    
    
    [hadoop@hadoop1 hadoop]$ cat /etc/hosts
    127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.56.21 hadoop1
    192.168.56.22 hadoop2
    192.168.56.23 hadoop3
Roles:
hadoop1: NameNode, SecondaryNameNode, ResourceManager
hadoop2/3: DataNode, NodeManager

The filesystem layout is shown below; the /hadoop directory will hold the Hadoop installation and its data:
     
     
    $ df -h
    Filesystem                     Size  Used Avail Use% Mounted on
    /dev/mapper/vg_basic-lv_root    18G  5.5G   11G  34% /
    tmpfs                          499M     0  499M   0% /dev/shm
    /dev/sda1                      485M   34M  427M   8% /boot
    /dev/mapper/hadoopvg-hadooplv   49G  723M   46G   2% /hadoop
II. Create the hadoop User

    Create a hadoop user to own and run the Hadoop installation:
        
        useradd hadoop
        chown -R hadoop:hadoop /hadoop

III. Set Up Passwordless SSH for the hadoop User

    Run the following on hadoop1, hadoop2, and hadoop3:
    su - hadoop
    ssh-keygen -t rsa
    ssh-keygen -t dsa
    cd /home/hadoop/.ssh
    cat *.pub >authorized_keys

    On hadoop2, run:
    scp authorized_keys hadoop1:/home/hadoop/.ssh/hadoop2_keys

    On hadoop3, run:
    scp authorized_keys hadoop1:/home/hadoop/.ssh/hadoop3_keys

    On hadoop1, run:
    su - hadoop
    cd /home/hadoop/.ssh
    cat hadoop2_keys >> authorized_keys
    cat hadoop3_keys >> authorized_keys
    Then copy the merged authorized_keys back to the other machines:
    scp ./authorized_keys hadoop2:/home/hadoop/.ssh/
    scp ./authorized_keys hadoop3:/home/hadoop/.ssh/

    Note: authorized_keys must have permissions 644; if it does not, fix it with chmod, otherwise passwordless login will fail!
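
    A quick way to verify the setup is a loop over all three hosts; this is a minimal sketch that assumes the hostnames above. With BatchMode enabled, ssh fails instead of prompting, so any leftover password prompt surfaces as an error:

        # run as the hadoop user; prints each hostname on success
        for h in hadoop1 hadoop2 hadoop3; do
            ssh -o BatchMode=yes "$h" hostname
        done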

IV. Add the Java Environment Variables

    Java download: http://www.oracle.com/technetwork/java/javase/downloads/
    Upload and extract the JDK under /usr/local, then add the Java environment variables to .bash_profile:

        export JAVA_HOME=/usr/local/jdk1.8.0_131
        export PATH=$JAVA_HOME/bin:$PATH:$HOME/bin
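
    To confirm the variables took effect (a minimal check; the version string matches the JDK path assumed above):

        source ~/.bash_profile
        echo $JAVA_HOME       # should print /usr/local/jdk1.8.0_131
        java -version         # should report 1.8.0_131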

V. Edit the Hadoop Configuration Files

    Extract the Hadoop distribution into /hadoop. The files to edit are:
    ~/hadoop/etc/hadoop/hadoop-env.sh
    ~/hadoop/etc/hadoop/yarn-env.sh
    ~/hadoop/etc/hadoop/slaves
    ~/hadoop/etc/hadoop/core-site.xml
    ~/hadoop/etc/hadoop/hdfs-site.xml
    ~/hadoop/etc/hadoop/mapred-site.xml
    ~/hadoop/etc/hadoop/yarn-site.xml
    Some of these files do not exist by default; copy the corresponding .template file to create them, as shown below.
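
    For example, mapred-site.xml ships only as a template (a minimal sketch, assuming the distribution was unpacked to /hadoop/hadoop):

        cd /hadoop/hadoop/etc/hadoop
        cp mapred-site.xml.template mapred-site.xml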

    Create the following directories to hold the data blocks, the NameNode metadata, and temporary files:

        $ cd /hadoop
        $ mkdir data tmp name

    1. hadoop-env.sh and yarn-env.sh

        cd /hadoop/hadoop/etc/hadoop

    The only change hadoop-env.sh and yarn-env.sh need is the JAVA_HOME variable; if JAVA_HOME is already exported in your profile, no edit is required.
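
    If you prefer to set it explicitly anyway, the line to adjust in both files is (reusing the JDK path from section IV):

        # in hadoop-env.sh and yarn-env.sh
        export JAVA_HOME=/usr/local/jdk1.8.0_131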

    2. slaves

    The slaves file lists the DataNode hosts:

        $ cat slaves
        hadoop2
        hadoop3

    3. core-site.xml

         
         
        $ cat core-site.xml
        <?xml version="1.0" encoding="UTF-8"?>
        <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
        <!-- Licensed under the Apache License, Version 2.0; see the accompanying LICENSE file. -->
        <!-- Put site-specific property overrides in this file. -->
        <configuration>
            <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoop1:9000</value>
            </property>
            <property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
            </property>
            <property>
                <name>hadoop.tmp.dir</name>
                <value>file:/hadoop/tmp</value>
                <description>Abase for other temporary directories.</description>
            </property>
            <property>
                <name>hadoop.proxyuser.hduser.hosts</name>
                <value>*</value>
            </property>
            <property>
                <name>hadoop.proxyuser.hduser.groups</name>
                <value>*</value>
            </property>
        </configuration>
    4. hdfs-site.xml

        $ cat hdfs-site.xml
        <configuration>
            <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>hadoop1:9001</value>
            </property>
            <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/hadoop/name</value>
            </property>
            <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/hadoop/data</value>
            </property>
            <property>
                <name>dfs.replication</name>
                <value>1</value>
            </property>
            <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
            </property>
        </configuration>

    5. mapred-site.xml

        $ mv mapred-site.xml.template mapred-site.xml
        $ cat mapred-site.xml
        <?xml version="1.0"?>
        <configuration>
            <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
            </property>
            <property>
                <name>mapreduce.jobhistory.address</name>
                <value>hadoop1:10020</value>
            </property>
            <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>hadoop1:19888</value>
            </property>
        </configuration>

    6. yarn-site.xml

        $ cat yarn-site.xml
        <configuration>
            <!-- Site specific YARN configuration properties -->
            <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
            </property>
            <property>
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
            </property>
            <property>
                <name>yarn.resourcemanager.address</name>
                <value>hadoop1:8032</value>
            </property>
            <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>hadoop1:8030</value>
            </property>
            <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>hadoop1:8031</value>
            </property>
            <property>
                <name>yarn.resourcemanager.admin.address</name>
                <value>hadoop1:8033</value>
            </property>
            <property>
                <name>yarn.resourcemanager.webapp.address</name>
                <value>hadoop1:8088</value>
            </property>
        </configuration>
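
    The files above are edited on hadoop1, but the same configuration must also be present on hadoop2 and hadoop3. A minimal sketch for pushing the configuration directory out, assuming an identical /hadoop/hadoop layout on every node:

        for h in hadoop2 hadoop3; do
            scp -r /hadoop/hadoop/etc/hadoop "$h":/hadoop/hadoop/etc/
        done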
VI. Format the NameNode

    Before starting HDFS and YARN, the NameNode must be formatted:
        $ bin/hdfs namenode -format htest
        17/05/14 23:50:22 INFO namenode.NameNode: STARTUP_MSG:
        /************************************************************
        STARTUP_MSG: Starting NameNode
        STARTUP_MSG:   host = hadoop1/192.168.56.21
        STARTUP_MSG:   args = [-format, htest]
        STARTUP_MSG:   version = 2.7.3
        STARTUP_MSG:   classpath = /hadoop/hadoop/etc/hadoop:... (long classpath omitted)
        STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
        STARTUP_MSG:   java = 1.8.0_131
        ************************************************************/
        17/05/14 23:50:22 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
        17/05/14 23:50:22 INFO namenode.NameNode: createNameNode [-format, htest]
        Formatting using clusterid: CID-3cf41172-e75f-4bfb-9f8d-32877047a551
        17/05/14 23:50:22 INFO namenode.FSNamesystem: No KeyProvider found.
        ... (block manager, GSet and retry cache INFO lines omitted)
        17/05/14 23:50:23 INFO namenode.FSImage: Allocated new BlockPoolId: BP-102874337-192.168.56.21-1494777023841
        17/05/14 23:50:23 INFO common.Storage: Storage directory /hadoop/name has been successfully formatted.
        17/05/14 23:50:24 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop/name/current/fsimage.ckpt_0000000000000000000 using no compression
        17/05/14 23:50:24 INFO namenode.FSImageFormatProtobuf: Image file /hadoop/name/current/fsimage.ckpt_0000000000000000000 of size 353 bytes saved in 0 seconds.
        17/05/14 23:50:24 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
        17/05/14 23:50:24 INFO util.ExitUtil: Exiting with status 0
        17/05/14 23:50:24 INFO namenode.NameNode: SHUTDOWN_MSG:
        /************************************************************
        SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.56.21
        ************************************************************/
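
    A quick sanity check after formatting (assuming dfs.namenode.name.dir points at /hadoop/name as configured above):

        $ ls /hadoop/name/current    # should contain an fsimage file, seen_txid and VERSION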
VII. Start Hadoop

    Start HDFS:

        $ ./sbin/start-dfs.sh
        Starting namenodes on [hadoop1]
        hadoop1: starting namenode, logging to /hadoop/hadoop/logs/hadoop-hadoop-namenode-hadoop1.out
        hadoop2: starting datanode, logging to /hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop2.out
        hadoop3: starting datanode, logging to /hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop3.out
        Starting secondary namenodes [hadoop1]
        hadoop1: starting secondarynamenode, logging to /hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-hadoop1.out
    Check the processes on hadoop1:

        $ jps
        8568 NameNode
        8764 SecondaryNameNode
        8873 Jps
    Start YARN:

        $ ./sbin/start-yarn.sh
        starting yarn daemons
        starting resourcemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-resourcemanager-hadoop1.out
        hadoop2: starting nodemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-nodemanager-hadoop2.out
        hadoop3: starting nodemanager, logging to /hadoop/hadoop/logs/yarn-hadoop-nodemanager-hadoop3.out
        $ jps
        8930 ResourceManager
        9187 SecondaryNameNode
        ...
    Check the processes on the DataNodes:

        [hadoop@hadoop2 hadoop]$ jps
        7909 DataNode
        8039 NodeManager
        8139 Jps
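
    Before using the cluster, a short smoke test is worthwhile; a minimal sketch, run as the hadoop user from /hadoop/hadoop on hadoop1 (the HDFS paths are illustrative):

        $ ./bin/hdfs dfsadmin -report             # both DataNodes should be listed as live
        $ ./bin/hdfs dfs -mkdir -p /user/hadoop
        $ ./bin/hdfs dfs -put etc/hadoop/core-site.xml /user/hadoop/
        $ ./bin/hdfs dfs -ls /user/hadoop         # the uploaded file should appear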
    To stop Hadoop:

        ./sbin/stop-yarn.sh
        ./sbin/stop-dfs.sh

VIII. Web Interfaces

    Once the Hadoop cluster is up and running, check the web UIs of its components as described below:

    Daemon                       Web Interface           Notes
    NameNode                     http://nn_host:port/    Default HTTP port is 50070.
    ResourceManager              http://rm_host:port/    Default HTTP port is 8088.
    MapReduce JobHistory Server  http://jhs_host:port/   Default HTTP port is 19888.
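
    A quick reachability check from the shell, using the hostname and default ports configured above (expect HTTP 200):

        curl -s -o /dev/null -w "%{http_code}\n" http://hadoop1:50070/
        curl -s -o /dev/null -w "%{http_code}\n" http://hadoop1:8088/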

IX. References
    http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/ClusterSetup.html
