Installing Kubernetes v1.2 on CentOS 7.2

Abstract

After building a Docker cluster with Swarm we ran into quite a few problems. Swarm is nice, but it is still maturing and lacks features, so we use Kubernetes to solve this.

Kubernetes vs. Swarm

Advantages

  • Replica sets and health maintenance

  • Service discovery and load balancing

  • Rolling (canary) upgrades

  • Garbage collection: failed containers and unused images are reclaimed automatically

  • Decoupled from the container engine; not limited to Docker containers

  • User authentication and resource isolation

Disadvantages

Being large and complete also means higher complexity: Kubernetes is considerably harder to deploy and use than Swarm. Swarm, by comparison, is lightweight and integrates better with the Docker engine. In spirit I prefer Swarm, but right now it lacks too many features. A few days ago SwarmKit was released and adds quite a bit of management functionality; once it matures I may well return to Swarm.

K8s core concepts

pod

The pod is the smallest deployable unit in k8s; containers run inside pods. A pod can run multiple containers, and the containers inside one pod share network and storage and can reach each other directly.

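To make the pod concept concrete, a minimal pod manifest might look like the sketch below (the pod name, image and port are hypothetical placeholders, not part of this installation):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.9
    ports:
    - containerPort: 80

It would be created with kubectl create -f pod.yaml, although in practice pods are usually created through a replication controller, described next.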

replication controller

Replication controller: creates identical replicas of a pod and runs them on different nodes. You normally do not create bare pods; you create them through an rc, which controls and manages the pod lifecycle and keeps the pods healthy.
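As an illustration, an rc wrapping the pod sketched above could look like this (names and image are again placeholders); the rc keeps the requested number of replicas running and replaces any pod that dies:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3              # keep three copies running at all times
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.9
        ports:
        - containerPort: 80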

service

A container gets a new IP address every time it is recreated, so we need service discovery and load balancing to cope with that. A service does exactly this: once created, it exposes a fixed port and is bound to the matching pods.
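A minimal service matching the rc sketched above might look like this (illustrative only); the selector binds it to the pods, and the cluster assigns it a stable virtual IP and port:

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 80
    targetPort: 80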

K8s core components


apiserver

Provides the external REST API. It runs on the master node, validates incoming requests and then updates the data stored in etcd.

scheduler

The scheduler runs on the master node. It watches for data changes through the apiserver and, when a pod needs to run, picks a suitable node using its scheduling algorithm.

controller-manager

The controller manager runs on the master node and periodically runs several controllers:

1) replication controller manager: manages and maintains the state of all rcs

2) service endpoint manager: keeps the pods bound to each service up to date and unbinds pods that have failed

3) node controller: periodically checks and monitors the health of the cluster nodes

4) resource quota manager: tracks cluster resource usage

kubelet (minion node)

Manages and maintains all containers on its node, e.g. creating new containers and garbage-collecting unused images.

kube-proxy (minion node)

Load-balances client requests across the pods behind a service; it is the piece that actually implements a service and keeps it working while pod IPs change. The proxy sets up iptables rules to forward the traffic.
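To see what kube-proxy actually does on a node, you can inspect the NAT table once a service exists; in userspace mode (the mode used later in this install) kube-proxy adds redirect rules there. A rough sketch (chain names can differ between versions):

iptables -t nat -L -n | grep -i kube
# in userspace mode, look for the KUBE-PORTALS-CONTAINER / KUBE-PORTALS-HOST chains,
# which redirect each service IP:port to a local port owned by kube-proxy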


Workflow

(workflow diagram omitted)


K8s installation procedure:

I. Host plan

192.168.20.60 (k8s-master, also acts as a minion)
  Packages: kubernetes-v1.2.4, etcd-v2.3.2, flannel-v0.5.5, docker-v1.11.2
  Services, in start order: etcd, flannel, docker, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy

192.168.20.61 (k8s-minion1)
  Packages: kubernetes-v1.2.4, etcd-v2.3.2, flannel-v0.5.5, docker-v1.11.2
  Services, in start order: etcd, flannel, docker, kubelet, kube-proxy

192.168.20.62 (k8s-minion2)
  Packages: kubernetes-v1.2.4, etcd-v2.3.2, flannel-v0.5.5, docker-v1.11.2
  Services, in start order: etcd, flannel, docker, kubelet, kube-proxy

II. Environment preparation

OS: CentOS 7.2


#yum update

# Stop firewalld and install iptables-services
systemctl stop firewalld.service
systemctl disable firewalld.service
yum -y install iptables-services
systemctl restart iptables.service
systemctl enable iptables.service

# Disable SELinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0

# Add the Docker yum repo

#tee /etc/yum.repos.d/docker.repo <<-'EOF'

[dockerrepo]

name=Docker Repository

baseurl=https://yum.dockerproject.org/repo/main/centos/7/

enabled=1

gpgcheck=1

gpgkey=https://yum.dockerproject.org/gpg

EOF

# yum install docker-engine

# Point docker at the internal private registry
# vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon --insecure-registry=192.168.4.231:5000 -H fd://
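Because an existing unit file was just edited, systemd has to re-read it before the change takes effect; a minimal sketch (enabling at boot is optional):

systemctl daemon-reload
systemctl enable docker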

# Start docker

systemctl start docker

III. Install the etcd cluster (provides storage and strong consistency guarantees for k8s)

tar zxf etcd-v2.3.2-linux-amd64.tar.gz
cd etcd-v2.3.2-linux-amd64
cp etcd* /usr/local/bin/

# Register the systemd service
# vi /usr/lib/systemd/system/etcd.service

[Unit]
Description=etcd

[Service]
# Node name, must be unique; on the minion nodes use that host's name instead
Environment=ETCD_NAME=k8s-master
# Data directory; if the cluster gets into a bad state, this directory can be removed and the cluster re-initialised
Environment=ETCD_DATA_DIR=/var/lib/etcd
# Peer listen/advertise addresses; change to the local IP on the other machines
Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=http://192.168.20.60:7001
Environment=ETCD_LISTEN_PEER_URLS=http://192.168.20.60:7001
# Client listen/advertise addresses; change to the local IP on the other machines
Environment=ETCD_LISTEN_CLIENT_URLS=http://192.168.20.60:4001,http://127.0.0.1:4001
Environment=ETCD_ADVERTISE_CLIENT_URLS=http://192.168.20.60:4001
# Cluster token, identical on all three nodes
Environment=ETCD_INITIAL_CLUSTER_TOKEN=etcd-k8s-1
# Cluster member list
Environment=ETCD_INITIAL_CLUSTER=k8s-master=http://192.168.20.60:7001,k8s-minion1=http://192.168.20.61:7001,k8s-minion2=http://192.168.20.62:7001
Environment=ETCD_INITIAL_CLUSTER_STATE=new
ExecStart=/usr/local/bin/etcd

[Install]
WantedBy=multi-user.target

# Start the service

systemctl start etcd
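If systemd does not pick up the newly created unit, reload it first; enabling etcd at boot on all three nodes is also sensible (a small sketch):

systemctl daemon-reload
systemctl enable etcd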

# Check that the etcd cluster is healthy
[root@k8s-minion2 etcd]# etcdctl cluster-health
member 2d3a022000105975 is healthy: got healthy result from http://192.168.20.61:4001
member 34a68a46747ee684 is healthy: got healthy result from http://192.168.20.62:4001
member fe9e66405caec791 is healthy: got healthy result from http://192.168.20.60:4001
cluster is healthy    # this output means the cluster started correctly


# Then set the overlay network range that the containers will use

etcdctl set /coreos.com/network/config '{ "Network": "172.20.0.0/16" }'
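The key can be read back from any node to confirm it was written, for example:

etcdctl get /coreos.com/network/config
# should print: { "Network": "172.20.0.0/16" }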





IV. Install and start Flannel (connects the container network across hosts)

tar zxf flannel-0.5.5-linux-amd64.tar.gz
mv flannel-0.5.5 /usr/local/flannel
cd /usr/local/flannel

# Register the systemd service
# vi /usr/lib/systemd/system/flanneld.service

[Unit]
Description=flannel
After=etcd.service
After=docker.service

[Service]
EnvironmentFile=/etc/sysconfig/flanneld
ExecStart=/usr/local/flannel/flanneld \
    -etcd-endpoints=${FLANNEL_ETCD} $FLANNEL_OPTIONS

[Install]
WantedBy=multi-user.target


# Create the config file

#vi /etc/sysconfig/flanneld

FLANNEL_ETCD="http://192.168.20.60:4001,http://192.168.20.61:4001,http://192.168.20.62:4001"


# Start the service
systemctl start flanneld
mk-docker-opts.sh -i
source /run/flannel/subnet.env
ifconfig docker0 ${FLANNEL_SUBNET}
systemctl restart docker

# Verify

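One way to verify by hand (a sketch; the interface names assume flannel's default udp backend):

cat /run/flannel/subnet.env           # FLANNEL_SUBNET should be a /24 inside 172.20.0.0/16
ip addr show flannel0 | grep inet     # the flannel tunnel interface
ip addr show docker0 | grep inet      # docker0 should now sit inside the same flannel subnet

Containers started on different hosts should then be able to ping each other across 172.20.0.0/16.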

V. Install Kubernetes

1. Download the release

cd /usr/local/

git clone https://github.com/kubernetes/kubernetes.git

cd kubernetes/server/

tar zxf kubernetes-server-linux-amd64.tar.gz

cd kubernetes/server/bin

cp kube-apiserver kubectl kube-scheduler kube-controller-manager kube-proxy kubelet /usr/local/bin/




2. Register the systemd services

# vi /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/kubelet
User=root
ExecStart=/usr/local/bin/kubelet \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_ALLOW_PRIV \
    $KUBELET_ADDRESS \
    $KUBELET_PORT \
    $KUBELET_HOSTNAME \
    $KUBELET_API_SERVER \
    $KUBELET_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# vi /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-proxy Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/kube-proxy
User=root
ExecStart=/usr/local/bin/kube-proxy \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_MASTER \
    $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

3. Create the config files

mkdir /etc/kubernetes

vi /etc/kubernetes/config

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.20.60:4001"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
#KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_ALLOW_PRIV="--allow-privileged=true"

vi /etc/kubernetes/kubelet

# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# (on the master and on each minion, fill in that machine's own IP)
KUBELET_HOSTNAME="--hostname-override=192.168.20.60"

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.20.60:8080"

# Add your own! (these two are needed later by the DNS add-on)
KUBELET_ARGS="--cluster-dns=192.168.20.64 --cluster-domain=cluster.local"

vi /etc/kubernetes/kube-proxy

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"

# Add your own!
# userspace mode is used here; iptables mode is more efficient, but make sure your
# kernel and iptables versions meet its requirements, otherwise it will fail.
KUBE_PROXY_ARGS="--proxy-mode=userspace"

For more background on choosing the proxy mode, see this explanation on Stack Overflow:

http://stackoverflow.com/questions/36088224/what-does-userspace-mode-means-in-kube-proxys-proxy-mode?rq=1

4. The services above must be started on every node; the master node additionally needs the following services:

kube-apiserver

kube-controller-manager

kube-scheduler



4.1 Configure the services

# vi /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=root
ExecStart=/usr/local/bin/kube-apiserver \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_ETCD_SERVERS \
    $KUBE_API_ADDRESS \
    $KUBE_API_PORT \
    $KUBELET_PORT \
    $KUBE_ALLOW_PRIV \
    $KUBE_SERVICE_ADDRESSES \
    $KUBE_ADMISSION_CONTROL \
    $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


# vi /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=root
ExecStart=/usr/local/bin/kube-controller-manager \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_MASTER \
    $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# vi /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=root
ExecStart=/usr/local/bin/kube-scheduler \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_MASTER \
    $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4.2 Config files

vi /etc/kubernetes/apiserver

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.20.0/24"
#KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""

vi /etc/kubernetes/controller-manager

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""

vi /etc/kubernetes/scheduler

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"

# Add your own!

KUBE_SCHEDULER_ARGS=""

More options are documented in the official docs:

http://kubernetes.io/docs/admin/kube-proxy/

# Start the services on the master
systemctl start kubelet
systemctl start kube-proxy
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler

# Start the services on the minions
systemctl start kubelet
systemctl start kube-proxy

# Check that the nodes registered correctly
[root@k8s-master bin]# kubectl get no
NAME            STATUS    AGE
192.168.20.60   Ready     24s
192.168.20.61   Ready     46s
192.168.20.62   Ready     35s
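Besides node registration, the master components can also be checked with a couple of read-only commands (a sketch):

kubectl get componentstatuses    # scheduler, controller-manager and the etcd members should report Healthy
kubectl cluster-info             # prints the master URL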

# Restart commands
systemctl restart kubelet
systemctl restart kube-proxy
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler

# Pitfall
gcr.io is blocked from here, and without the pause image k8s cannot start any pod; you will see this error:

image pull failed for gcr.io/google_containers/pause:2.0

Use the image from Docker Hub instead, or pull it into the local registry; either way re-tag it, and every node needs the image:

docker pull kubernetes/pause
docker tag kubernetes/pause gcr.io/google_containers/pause:2.0
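Since every node needs the image, one option that fits the private registry configured earlier is to push it there once and then pull and re-tag it on each node (a sketch; the registry address matches the --insecure-registry setting above):

docker tag kubernetes/pause 192.168.4.231:5000/pause:2.0
docker push 192.168.4.231:5000/pause:2.0
# on every node:
docker pull 192.168.4.231:5000/pause:2.0
docker tag 192.168.4.231:5000/pause:2.0 gcr.io/google_containers/pause:2.0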


[root@k8s-master addons]# docker images
REPOSITORY                        TAG   IMAGE ID       CREATED        SIZE
192.168.4.231:5000/pause          2.0   2b58359142b0   9 months ago   350.2 kB
gcr.io/google_containers/pause    2.0   2b58359142b0   9 months ago   350.2 kB

5. The official source tree ships several add-ons, such as the dashboard and DNS

cd /usr/local/kubernetes/cluster/addons/


5.1 Dashboard add-on

cd /usr/local/kubernetes/cluster/addons/dashboard

There are two files in this directory:

=============================================================

dashboard-controller.yaml    # deployment settings: replica count, image, resource limits, and so on

apiVersion: v1
kind: ReplicationController
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-v1.0.1
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    version: v1.0.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1          # number of replicas
  selector:
    k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: v1.0.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.4.231:5000/kubernetes-dashboard:v1.0.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://192.168.20.60:8080   # without this flag the dashboard defaults to localhost instead of the master; also mind the indentation (spaces) in this file
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

========================================================

dashboard-service.yaml    # exposes the dashboard for external access

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

=========================================================

kubectl create -f ./                               # create the rc and the service

kubectl --namespace=kube-system get po             # check that the system pods are running


kubectl --namespace=kube-system get po -o wide     # see which node each system pod runs on


# To remove the dashboard again, run:
kubectl delete -f ./

Open in a browser: http://192.168.20.60:8080/ui/


5.2 DNS add-on

# Plain IP addresses are hard to remember; inside the cluster, DNS can bind names to service IPs and keep them updated automatically.
cd /usr/local/kubernetes/cluster/addons/dns
cp skydns-rc.yaml.in /opt/dns/skydns-rc.yaml
cp skydns-svc.yaml.in /opt/dns/skydns-svc.yaml


# The /opt/dns/skydns-rc.yaml file

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: 192.168.4.231:5000/etcd-amd64:2.2.1   # the official images were pulled into the local registry first
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: 192.168.4.231:5000/kube2sky:1.14   # from the local registry
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters; this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            # Kube2sky watches all pods.
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain=cluster.local                         # pitfall: must match the domain in /etc/kubernetes/kubelet
        - --kube_master_url=http://192.168.20.60:8080    # the master node
      - name: skydns
        image: 192.168.4.231:5000/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters; this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local.                         # another pitfall!! the trailing "." is required
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: 192.168.4.231:5000/exechealthz:1.0        # image from the local registry
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null   # same domain pitfall again
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default   # Don't use cluster DNS.


========================================================================

# The /opt/dns/skydns-svc.yaml file

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 192.168.20.100
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

==================================================================

# Start

cd /opt/dns/

kubectl create -f ./


# Check the pod status; once it shows 4/4 containers running, move on to the validation step.
kubectl --namespace=kube-system get pod -o wide



Validation steps, taken from the official docs:

URL: https://github.com/kubernetes/kubernetes/blob/release-1.2/cluster/addons/dns/README.md

How do I test if it is working?

First deploy DNS as described above.

1 Create a simple Pod to use as a test environment.

Create a file named busybox.yaml with the following contents:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

Then create a pod using this file:

kubectl create -f busybox.yaml

2 Wait for this pod to go into the running state.

You can get its status with:

kubectl get pods busybox

You should see:

NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          <some-time>

3 Validate DNS works

Once that pod is running, you can exec nslookup in that environment:

kubectl exec busybox -- nslookup kubernetes.default

You should see something like:

Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      kubernetes.default
Address 1: 10.0.0.1

If you see that, DNS is working correctly.


5.3 k8s-manager add-on

Reference: http://my.oschina.net/fufangchun/blog/703985


mkdir /opt/k8s-manage

cd /opt/k8s-manage

================================================

# cat k8s-manager-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: k8s-manager
  namespace: kube-system
  labels:
    app: k8s-manager
spec:
  replicas: 1
  selector:
    app: k8s-manager
  template:
    metadata:
      labels:
        app: k8s-manager
    spec:
      containers:
      - image: mlamina/k8s-manager:latest
        name: k8s-manager
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 80
          name: http

=================================================

# cat k8s-manager-svr.yaml

apiVersion: v1
kind: Service
metadata:
  name: k8s-manager
  namespace: kube-system
  labels:
    app: k8s-manager
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: k8s-manager

=================================================

# Start

kubectl create -f ./


Open in a browser:

http://192.168.20.60:8080/api/v1/proxy/namespaces/kube-system/services/k8s-manager




Worked example

1. Deploy zookeeper, ActiveMQ, redis and mongodb services

mkdir /opt/service/

cd /opt/service

==========================================================

# cat service.yaml


apiVersion: v1
kind: Service
metadata:
  name: zk-amq-rds-mgd          # service name
  labels:
    run: zk-amq-rds-mgd
spec:
  type: NodePort
  ports:
  - port: 2181                  # service port
    nodePort: 31656             # port exposed on the nodes for external access
    targetPort: 2181            # port inside the container
    protocol: TCP               # protocol
    name: zk-app                # port name
  - port: 8161
    nodePort: 31654
    targetPort: 8161
    protocol: TCP
    name: amq-http
  - port: 61616
    nodePort: 31655
    targetPort: 61616
    protocol: TCP
    name: amq-app
  - port: 27017
    nodePort: 31653
    targetPort: 27017
    protocol: TCP
    name: mgd-app
  - port: 6379
    nodePort: 31652
    targetPort: 6379
    protocol: TCP
    name: rds-app
  selector:
    run: zk-amq-rds-mgd
---
#apiVersion: extensions/v1beta1
apiVersion: v1
kind: ReplicationController
metadata:
  name: zk-amq-rds-mgd
spec:
  replicas: 2                   # two replicas
  template:
    metadata:
      labels:
        run: zk-amq-rds-mgd
    spec:
      containers:
      - name: zookeeper         # application name
        image: 192.168.4.231:5000/zookeeper:0524    # image from the local registry
        imagePullPolicy: IfNotPresent               # pull the image only if it is not already present on the chosen node
        ports:
        - containerPort: 2181   # port inside the container
        env:
        - name: LANG
          value: en_US.UTF-8
        volumeMounts:
        - mountPath: /tmp/zookeeper    # mount point inside the container
          name: zookeeper-d            # must match a volume name defined below
      - name: activemq
        image: 192.168.4.231:5000/activemq:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8161
        - containerPort: 61616
        volumeMounts:
        - mountPath: /opt/apache-activemq-5.10.2/data
          name: activemq-d
      - name: mongodb
        image: 192.168.4.231:5000/mongodb:3.0.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /var/lib/mongo
          name: mongodb-d
      - name: redis
        image: 192.168.4.231:5000/redis:2.8.25
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /opt/redis/var
          name: redis-d
      volumes:
      - hostPath:
          path: /mnt/mfs/service/zookeeper/data   # mount point on the host; shared distributed storage (MooseFS) keeps the data consistent across replicas
        name: zookeeper-d
      - hostPath:
          path: /mnt/mfs/service/activemq/data
        name: activemq-d
      - hostPath:
          path: /mnt/mfs/service/mongodb/data
        name: mongodb-d
      - hostPath:
          path: /mnt/mfs/service/redis/data
        name: redis-d

===========================================================================================

# Create the services

kubectl create -f ./
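A few checks to confirm the example is up (a sketch):

kubectl get svc zk-amq-rds-mgd     # shows the assigned cluster IP and the nodePorts
kubectl get rc zk-amq-rds-mgd      # DESIRED and CURRENT should both be 2
kubectl get pods -o wide           # shows which minions the two replicas landed on
# e.g. probe zookeeper through its nodePort from any machine that can reach a node:
# echo ruok | nc 192.168.20.60 31656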


Reference: http://my.oschina.net/jayqqaa12/blog/693919

