References:
https://kubernetes.io/docs/tasks/tools/install-kubeadm/
https://blog.csdn.net/yitaidn/article/details/79937316
https://www.jianshu.com/p/9c7e1c957752
https://www.jianshu.com/p/3ec8945a864f

Environment: three CentOS 7.3 virtual machines

10.10.31.202 k8s-master
10.10.31.203 k8s-node1
10.10.31.204 k8s-node2

Environment setup:

1. The system is CentOS 7.3. Do not run yum update; do not upgrade it to 7.5.

[root@szy-k8s-node2 ~]# yum update   # better not to update to 7.5; this tutorial targets 7.3

2. Disable the firewall, SELinux, and swap (on all nodes)

setenforce 0   # disables SELinux temporarily; the change is lost after a reboot
swapoff -a     # required for kubelet to run correctly
systemctl stop firewalld
systemctl disable firewalld   # disable the firewall

Sample output:
[root@szy-k8s-master ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@szy-k8s-master ~]# swapoff -a 
[root@szy-k8s-master ~]# setenforce 0
[root@szy-k8s-master ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config   # permanent change via the config file; takes effect after a reboot
[root@szy-k8s-master ~]# /usr/sbin/sestatus   # check the SELinux status
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          disabled
Policy deny_unknown status:     allowed
Policy MLS status:              enabled
Max kernel policy version:      28

Component installation:

1. Install Docker. This step often runs into problems (version/OS incompatibilities and the like).
Either install it from yum, or use the docker-packages.tar archive. It must be installed on every node.

yum install -y docker  
systemctl enable docker && systemctl start docker 


I used the second (offline) installation method:
Link: https://pan.baidu.com/s/1nV_lOOJhNJpqGBq9heNWug  Password: zkfr

tar -xvf docker-packages.tar
cd docker-packages
rpm -Uvh *            # or: yum localinstall *.rpm
docker version        # check the version once the install finishes


Note: if the Docker installation fails, reinstall it as follows:
yum remove docker
yum remove docker-selinux
If that does not remove everything cleanly, run the steps below.

1. List the Docker packages that are already installed; the output looks like this:
[root@szy-k8s-node2 docker-packages]# rpm -qa|grep docker
docker-common-1.13.1-63.git94f4240.el7.centos.x86_64
docker-client-1.13.1-63.git94f4240.el7.centos.x86_64
2. Remove each of them:
yum -y remove docker-common-1.13.1-63.git94f4240.el7.centos.x86_64
yum -y remove docker-client-1.13.1-63.git94f4240.el7.centos.x86_64
3. Delete the Docker image data:
rm -rf /var/lib/docker
Then reinstall.
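
If many docker packages are left behind, the per-package removal shown above can be collapsed into a single pass; a minimal sketch assuming GNU xargs (as shipped with CentOS 7):

rpm -qa | grep -i docker | xargs -r yum -y remove   # remove every installed package whose name contains "docker"
rm -rf /var/lib/docker                              # wipe Docker's data directory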

2. Configure the Aliyun registry mirror (accelerator) for Docker. Aliyun's console provides the snippet shown below.


Run docker info and note the Cgroup Driver:
Cgroup Driver: cgroupfs
Docker and kubelet must use the same cgroup driver. If Docker's driver is not cgroupfs, run:

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://XXXX.mirror.aliyuncs.com"],"exec-opts": ["native.cgroupdriver=cgroupfs"] 
}
EOF
systemctl daemon-reload && systemctl restart docker
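
To confirm the mirror and the cgroup driver settings took effect after the restart (field names as printed by docker info; wording can vary slightly between Docker versions):

docker info | grep -i "cgroup driver"          # expect: Cgroup Driver: cgroupfs
docker info | grep -A1 -i "registry mirrors"   # expect the aliyuncs mirror URL configured above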

Offline installation of kubeadm, kubectl, and kubelet

Link: https://pan.baidu.com/s/13zfZKfARUN2s96fPil-8VQ  Password: 10am
Use the file kube-packages-1.10.1.tar; it must be installed on every node.
kubeadm is the cluster deployment tool.
kubectl is the cluster management tool; it manages the cluster through commands.
kubelet is the per-node service that manages Docker (the containers) on every node of the k8s cluster.

tar -xvf kube-packages-1.10.1.tar
cd kube-packages-1.10.1
rpm -Uvh *            # or: yum localinstall *.rpm
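
After installing the rpms, it helps to enable the kubelet service now (the kubeadm pre-flight check later prints "[WARNING Service-Kubelet]" if it is not enabled) and to confirm the installed versions; a quick check:

systemctl enable kubelet        # avoids the Service-Kubelet warning during kubeadm init/join
kubeadm version                 # expect v1.10.1
kubelet --version               # expect Kubernetes v1.10.1
kubectl version --client        # client version only; the API server is not up yet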

On all Kubernetes nodes, set kubelet to use cgroupfs so that it matches dockerd; otherwise kubelet will fail to start.

By default kubelet uses cgroup-driver=systemd; change it to cgroup-driver=cgroupfs:
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Reload the unit files and restart the kubelet service:
systemctl daemon-reload && systemctl restart kubelet
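
To double-check that the sed edit took effect in the drop-in file used above:

grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # expect --cgroup-driver=cgroupfs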

Disable swap and adjust the iptables bridge settings, otherwise kubeadm will report errors later.

swapoff -a
vi /etc/fstab   # comment out the swap line
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
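
If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter module is probably not loaded. The fstab edit can also be done non-interactively; the sed one-liner below is an assumption about the fstab layout, so review the file first:

modprobe br_netfilter                         # load the bridge netfilter module if it is not built in
lsmod | grep br_netfilter                     # verify it is present
sysctl net.bridge.bridge-nf-call-iptables     # should now print 1
sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab  # comment out the swap line instead of editing by hand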

Import the images

Run this on every node.

docker load -i k8s-images-1.10.tar.gz 
# 11 images in total:
k8s.gcr.io/etcd-amd64:3.1.12 
k8s.gcr.io/kube-apiserver-amd64:v1.10.1 
k8s.gcr.io/kube-controller-manager-amd64:v1.10.1 
k8s.gcr.io/kube-proxy-amd64:v1.10.1 
k8s.gcr.io/kube-scheduler-amd64:v1.10.1 
k8s.gcr.io/pause-amd64:3.1 
k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8 
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 
k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 
k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3 
quay.io/coreos/flannel:v0.9.1-amd64
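
A quick count can confirm that all 11 images listed above were loaded; a minimal check:

docker images | grep -cE 'k8s\.gcr\.io|quay\.io/coreos/flannel'   # expect: 11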


kubeadm init: deploy the master node (run only on the master)

This is the simplest and quickest deployment option. The etcd, apiserver, controller-manager, and scheduler services all run as containers on the master. etcd is a single instance without certificates, and its data is mounted at /var/lib/etcd on the master node.
The init command must specify the Kubernetes version and the pod network range:

kubeadm init --kubernetes-version=v1.10.1 --pod-network-cidr=10.244.0.0/16

[root@szy-k8s-master kubernetes]# kubeadm init --kubernetes-version=v1.10.1 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.10.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.04.0-ce. Max validated version: 17.03
        [WARNING Service-Kubelet]: kubelet service is not enabled,please run 'systemctl enable kubelet.service'
        [WARNING Firewalld]: firewalld is active,please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [szy-k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.31.202]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [szy-k8s-master] and IPs [10.10.31.202]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 23.002425 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node szy-k8s-master as master by adding a label and a taint
[markmaster] Master szy-k8s-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: ou9izo.w3o32jgx1kg7lypl
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.10.31.202:6443 --token ou9izo.w3o32jgx1kg7lypl --discovery-token-ca-cert-hash sha256:a1e4b696d1bfd4a86b4e352dec58a44facbbe0022bb65200c1d9f8e9414f53c8

Write down the join command; it will be needed later when the worker nodes join.
Run the commands from the output to save the kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
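
Optionally, a quick sanity check that kubectl can reach the API server with the copied kubeconfig:

kubectl cluster-info   # should print the API server URL https://10.10.31.202:6443
kubectl get cs         # scheduler, controller-manager, and etcd-0 should report Healthy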

Running kubectl get node at this point already shows the master node; it is NotReady because the network plugin has not been deployed yet.

[root@szy-k8s-master k8s]# kubectl get node
NAME             STATUS     ROLES     AGE       VERSION
szy-k8s-master   NotReady   master    3m        v1.10.1
[root@szy-k8s-master k8s]# 

List all pods with kubectl get pod --all-namespaces.
kube-dns also depends on the container network, so its Pending status is normal at this point.

[root@szy-k8s-master k8s]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                     READY     STATUS    RESTARTS   AGE
kube-system   etcd-szy-k8s-master                      1/1       Running   0          2m
kube-system   kube-apiserver-szy-k8s-master            1/1       Running   0          2m
kube-system   kube-controller-manager-szy-k8s-master   1/1       Running   0          2m
kube-system   kube-dns-86f4d74b45-gp8zc                0/3       Pending   0          3m
kube-system   kube-proxy-kqnfs                         1/1       Running   0          3m
kube-system   kube-scheduler-szy-k8s-master            1/1       Running   0          2m

Configure the KUBECONFIG variable

[root@szy-k8s-master k8s]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
[root@szy-k8s-master k8s]# source /etc/profile
[root@szy-k8s-master k8s]# echo $KUBECONFIG
/etc/kubernetes/admin.conf

Deploy the flannel network

k8s supports several network plugins: flannel, calico, Open vSwitch, and more.
flannel is used here. Once you are familiar with deploying k8s, you can try the other network plugins.

The current k8s directory already contains kube-flannel.yml:

[root@szy-k8s-master k8s]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
[root@szy-k8s-master k8s]# kubectl get node
NAME             STATUS    ROLES     AGE       VERSION
szy-k8s-master   Ready     master    6m        v1.10.1
[root@szy-k8s-master k8s]# 
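
To confirm the flannel DaemonSet has rolled out (the object names come from the apply output above):

kubectl get daemonset kube-flannel-ds -n kube-system     # DESIRED should equal the number of nodes
kubectl get pod -n kube-system -o wide | grep flannel    # one kube-flannel-ds pod per node, all Running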

kubeadm join: add the worker nodes

  1. Join the nodes to the cluster
     Use the join command generated earlier by kubeadm init. After it succeeds, go back to the master and check whether the node joined.
kubeadm join 10.10.31.202:6443 --token ou9izo.w3o32jgx1kg7lypl --discovery-token-ca-cert-hash sha256:a1e4b696d1bfd4a86b4e352dec58a44facbbe0022bb65200c1d9f8e9414f53c8
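
The bootstrap token above expires after 24 hours by default. If it has expired by the time a node joins, a new token and the CA cert hash can be regenerated on the master; a sketch based on the standard kubeadm workflow (verify the exact flags on this kubeadm build):

kubeadm token list      # show existing tokens and their expiry
kubeadm token create    # prints a fresh token to use with kubeadm join
# recompute the value for --discovery-token-ca-cert-hash:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'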

The output on node1 is as follows:

[root@szy-k8s-node1 ~]# kubeadm join 10.10.31.202:6443 --token ou9izo.w3o32jgx1kg7lypl --discovery-token-ca-cert-hash sha256:a1e4b696d1bfd4a86b4e352dec58a44facbbe0022bb65200c1d9f8e9414f53c8
[preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.04.0-ce. Max validated version: 17.03
        [WARNING Service-Kubelet]: kubelet service is not enabled,please run 'systemctl enable kubelet.service'
        [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.10.31.202:6443"
[discovery] Created cluster-info discovery client,requesting info from "https://10.10.31.202:6443"
[discovery] Requesting info from "https://10.10.31.202:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots,will use API Server "10.10.31.202:6443"
[discovery] Successfully established connection with API Server "10.10.31.202:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

The output on node2 is as follows:

[root@szy-k8s-node2 ~]# kubeadm join 10.10.31.202:6443 --token ou9izo.w3o32jgx1kg7lypl --discovery-token-ca-cert-hash sha256:a1e4b696d1bfd4a86b4e352dec58a44facbbe0022bb65200c1d9f8e9414f53c8
[preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.04.0-ce. Max validated version: 17.03
        [WARNING Service-Kubelet]: kubelet service is not enabled,will use API Server "10.10.31.202:6443"
[discovery] Successfully established connection with API Server "10.10.31.202:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Troubleshooting:
I ran into the error: [discovery] Failed to request cluster info, will try again:

[discovery] Failed to request cluster info, will try again: [Get https://10.10.31.202:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.10.31.202:6443: getsockopt: no route to host]

Solution: the firewall had not actually been disabled (or it was stopped at the beginning but the change never took effect).

systemctl stop firewalld
systemctl disable firewalld   # disable the firewall

Problem 2: [discovery] Failed to request cluster info, will try again: [Get https://10.10.31.202:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
Solution: the master node is missing the KUBECONFIG variable.

# run on the master
export KUBECONFIG=$HOME/.kube/config
# on the node, run kubeadm reset and then join again
kubeadm reset
kubeadm join 10.10.31.202:6443 --token ou9izo.w3o32jgx1kg7lypl --discovery-token-ca-cert-hash sha256:a1e4b696d1bfd4a86b4e352dec58a44facbbe0022bb65200c1d9f8e9414f53c8
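
If a node is stuck in a bad state, it can also be removed on the master and re-joined from scratch; a hedged sketch using this cluster's node name (adjust as needed):

# on the master: evict workloads and remove the node object
kubectl drain szy-k8s-node1 --ignore-daemonsets --delete-local-data
kubectl delete node szy-k8s-node1
# on the node: wipe the kubeadm state, then run the kubeadm join command again
kubeadm reset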

Deploy the Kubernetes UI (Dashboard)

The dashboard is the official k8s management UI; it can display application information and deploy applications. Its language is detected automatically from the browser.
The official dashboard defaults to HTTPS, which Chrome refuses to open. This deployment was modified to use HTTP for convenience, and Chrome opens it normally.
Three yaml files need to be applied in total:

kubectl apply -f kubernetes-dashboard-http.yaml
kubectl apply -f admin-role.yaml
kubectl apply -f kubernetes-dashboard-admin.rbac.yaml

The actual output is as follows:

[root@szy-k8s-master k8s]# ll
total 1066600
-rwxr-xr-x. 1 root root       357 Jun 25 11:18 admin-role.yaml
drwxr-xr-x. 2 root root      4096 Apr 11 19:30 docker-packages
-rwxr-xr-x. 1 root root  39577600 Jun 25 11:18 docker-packages.tar
-rwxr-xr-x. 1 root root 999385088 Jun 25 11:18 k8s-images-1.10.tar.gz
-rwxr-xr-x. 1 root root      2801 Jun 25 11:18 kube-flannel.yml
drwxr-xr-x. 2 root root       190 Apr 25 16:27 kube-packages-1.10.1
-rwxr-xr-x. 1 root root  53207040 Jun 25 11:18 kube-packages-1.10.1.tar
-rwxr-xr-x. 1 root root       281 Jun 25 11:18 kubernetes-dashboard-admin.rbac.yaml
-rwxr-xr-x. 1 root root      4267 Jun 25 11:18 kubernetes-dashboard-http.yaml
[root@szy-k8s-master k8s]# kubectl apply -f kubernetes-dashboard-http.yaml
serviceaccount "kubernetes-dashboard" created
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
deployment.apps "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@szy-k8s-master k8s]# kubectl apply -f admin-role.yaml
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
[root@szy-k8s-master k8s]# kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io "dashboard-admin" created

Final check:

[root@szy-k8s-master kubelet]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                     READY     STATUS    RESTARTS   AGE
kube-system   etcd-szy-k8s-master                      1/1       Running   0          5h
kube-system   kube-apiserver-szy-k8s-master            1/1       Running   0          5h
kube-system   kube-controller-manager-szy-k8s-master   1/1       Running   0          5h
kube-system   kube-dns-86f4d74b45-2zw26                3/3       Running   162        5h
kube-system   kube-flannel-ds-ht6pl                    1/1       Running   0          5h
kube-system   kube-flannel-ds-npgfq                    1/1       Running   1          24m
kube-system   kube-flannel-ds-rfxv9                    1/1       Running   0          24m
kube-system   kube-proxy-8pg78                         1/1       Running   0          24m
kube-system   kube-proxy-gxcvh                         1/1       Running   0          24m
kube-system   kube-proxy-jgqtp                         1/1       Running   0          5h
kube-system   kube-scheduler-szy-k8s-master            1/1       Running   0          5h
kube-system   kubernetes-dashboard-5c469b58b8-hr4bk    1/1       Running   0          21m
[root@szy-k8s-master kubelet]# 
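
The dashboard Service is exposed on NodePort 31000 in this setup (see the URLs below). To confirm the port and that the pod is up:

kubectl get svc kubernetes-dashboard -n kube-system    # the NodePort column should show 31000
kubectl get pod -n kube-system | grep dashboard        # the kubernetes-dashboard pod should be Running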

Access the web UI:

http://10.10.31.202:31000
http://10.10.31.203:31000
http://10.10.31.204:31000

[root@szy-k8s-master kubelet]# kubectl version
Client Version: version.Info{Major:"1",Minor:"10",GitVersion:"v1.10.1",GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b",GitTreeState:"clean",BuildDate:"2018-04-12T14:26:04Z",GoVersion:"go1.9.3",Compiler:"gc",Platform:"linux/amd64"}
Server Version: version.Info{Major:"1",BuildDate:"2018-04-12T14:14:26Z",Platform:"linux/amd64"}
[root@szy-k8s-master kubelet]#
