hadoop on swarm
version: 2.8.1
Create the swarm network
```shell
docker network create --driver overlay bigdata
```
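An overlay network can only be created once swarm mode is active; a minimal sketch of the full sequence, including a check that the network exists afterwards:

```shell
# Initialize swarm mode (skip if this node already belongs to a swarm).
docker swarm init

# Create the overlay network used by the Hadoop services.
docker network create --driver overlay bigdata

# Verify: "bigdata" should appear among the overlay networks.
docker network ls --filter driver=overlay
```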
Create the hadoop-swarm cluster
```shell
docker service create \
  --name hadoop-master \
  --hostname hadoop-master \
  --network bigdata \
  --replicas 1 \
  --detach=true \
  --endpoint-mode dnsrr \
  --mount type=bind,source=/etc/localtime,target=/etc/localtime \
  cppla/hadoop-docker:latest

docker service create \
  --name hadoop-slave1 \
  --hostname hadoop-slave1 \
  --network bigdata \
  --replicas 1 \
  --detach=true \
  --endpoint-mode dnsrr \
  --mount type=bind,source=/etc/localtime,target=/etc/localtime \
  cppla/hadoop-docker:latest

docker service create \
  --name hadoop-slave2 \
  --hostname hadoop-slave2 \
  --network bigdata \
  --replicas 1 \
  --detach=true \
  --endpoint-mode dnsrr \
  --mount type=bind,source=/etc/localtime,target=/etc/localtime \
  cppla/hadoop-docker:latest

docker service create \
  --name hadoop-slave3 \
  --hostname hadoop-slave3 \
  --network bigdata \
  --replicas 1 \
  --detach=true \
  --endpoint-mode dnsrr \
  --mount type=bind,source=/etc/localtime,target=/etc/localtime \
  cppla/hadoop-docker:latest
```
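Since the four service definitions differ only in the node name, they can be generated with a small loop. A sketch as a dry run: the `echo` prints each command instead of executing it, so you can review the output before removing the `echo`:

```shell
# Generate the "docker service create" command for the master and three slaves.
# The echo makes this a dry run; delete it to actually create the services.
for node in hadoop-master hadoop-slave1 hadoop-slave2 hadoop-slave3; do
  echo docker service create \
    --name "$node" \
    --hostname "$node" \
    --network bigdata \
    --replicas 1 \
    --detach=true \
    --endpoint-mode dnsrr \
    --mount type=bind,source=/etc/localtime,target=/etc/localtime \
    cppla/hadoop-docker:latest
done
```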
Run the initialization inside the hadoop-master container
```shell
# stop HDFS services
sbin/stop-dfs.sh

# format HDFS meta data
bin/hadoop namenode -format

# restart HDFS services
sbin/start-dfs.sh
```
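To run the commands above, you first need a shell inside the container backing the hadoop-master service. A sketch from the host node running that task (the `name=hadoop-master` filter assumes Swarm's default task-naming, which prefixes the service name):

```shell
# Find the container backing the hadoop-master service on this node.
CID=$(docker ps -q -f name=hadoop-master)

# Open an interactive shell in it, then run the stop/format/start
# commands from the Hadoop installation directory.
docker exec -it "$CID" bash
```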
Test
```shell
hdfs dfs -mkdir -p /user/hadoop/test/
echo "hello1,hello2,hello3" >> hello.txt
hdfs dfs -put hello.txt /user/hadoop/test/
```
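To confirm the upload actually landed in HDFS, the file can be listed and read back (a sketch; run inside the master container like the commands above):

```shell
# List the test directory; hello.txt should appear with a nonzero size.
hdfs dfs -ls /user/hadoop/test/

# Read the file back; should print hello1,hello2,hello3
# (assuming hello.txt was freshly created, since ">>" appends).
hdfs dfs -cat /user/hadoop/test/hello.txt
```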
A few days ago a friend needed to spin up a big-data test cluster quickly (hadoop on swarm), so I'm passing this setup along. base: newnius Dockerfiles