Installing Kubernetes (k8s) with yum

This example deploys one master and three nodes; you can of course adjust the number of nodes to suit your own environment.

192.168.1.130 master
192.168.1.131 node1
192.168.1.132 node2
192.168.1.133 node3

Enable NTP on master and all nodes :

[root@k-master ~]# yum -y install ntp
[root@k-master ~]# systemctl start ntpd
[root@k-master ~]# systemctl enable ntpd
[root@k-master ~]# hwclock --systohc
[root@k-node1 ~]# yum -y install ntp
[root@k-node1 ~]# systemctl start ntpd
[root@k-node1 ~]# systemctl enable ntpd
[root@k-node1 ~]# hwclock --systohc
[root@k-node2 ~]# yum -y install ntp
[root@k-node2 ~]# systemctl start ntpd
[root@k-node2 ~]# systemctl enable ntpd
[root@k-node2 ~]# hwclock --systohc
[root@k-node3 ~]# yum -y install ntp
[root@k-node3 ~]# systemctl start ntpd
[root@k-node3 ~]# systemctl enable ntpd
[root@k-node3 ~]# hwclock --systohc
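
Before moving on, it can be worth confirming that every host is actually synchronized. The loop below is only a sketch and assumes passwordless SSH as root from the master to each node, which this lab does not set up:

for HOST in k-master k-node1 k-node2 k-node3
do
  echo "=== $HOST ==="
  # ntpstat reports whether ntpd is synchronised to an upstream server
  ssh root@$HOST "ntpstat"
done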

Add entries to “/etc/hosts” or records in your DNS :

[root@k-master ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
[root@k-node1 ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
[root@k-node2 ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
[root@k-node3 ~]# grep "k-" /etc/hosts
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
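
If the records are not in your DNS, the same four lines can be appended on each machine with a heredoc. This is a sketch and assumes the entries are not already present in /etc/hosts:

# run once on the master and on each node
cat >> /etc/hosts <<'EOF'
192.168.1.130 k-master
192.168.1.131 k-node1
192.168.1.132 k-node2
192.168.1.133 k-node3
EOF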

Install the required RPMs :

  • On master :
[root@k-master ~]# yum -y install etcd kubernetes
...
...
...
Installed:
  etcd.x86_64 0:2.1.1-2.el7                       kubernetes.x86_64 0:1.0.3-0.2.gitb9a88a7.el7

Dependency Installed:
  audit-libs-python.x86_64 0:2.4.1-5.el7                   checkpolicy.x86_64 0:2.1.12-6.el7
  docker.x86_64 0:1.8.2-10.el7.centos                      docker-selinux.x86_64 0:1.8.2-10.el7.centos
  kubernetes-client.x86_64 0:1.0.3-0.2.gitb9a88a7.el7      kubernetes-master.x86_64 0:1.0.3-0.2.gitb9a88a7.el7
  kubernetes-node.x86_64 0:1.0.3-0.2.gitb9a88a7.el7        libcgroup.x86_64 0:0.41-8.el7
  libsemanage-python.x86_64 0:2.1.10-18.el7                policycoreutils-python.x86_64 0:2.2.5-20.el7
  python-IPy.noarch 0:0.75-6.el7                           setools-libs.x86_64 0:3.3.7-46.el7
  socat.x86_64 0:1.7.2.2-5.el7

Complete!
  • On nodes :
[root@k-node1 ~]# yum -y install flannel kubernetes
[root@k-node2 ~]# yum -y install flannel kubernetes
[root@k-node3 ~]# yum -y install flannel kubernetes
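
To check what actually landed on a given machine, list the relevant packages (a quick sanity check, not required for the lab; the exact list differs between the master and the nodes):

rpm -qa | egrep 'kubernetes|etcd|flannel|docker'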

Stop the firewall

For convenience, we will stop the firewall on every machine during this lab :

[root@k-master ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k-node1 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k-node2 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@k-node3 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
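
To double-check that firewalld is really stopped and will not come back at the next boot, run on any host:

systemctl is-active firewalld    # expected: inactive
systemctl is-enabled firewalld   # expected: disabled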

On the Kubernetes master

  • Configure “etcd” distributed key-value store :
[root@k-master ~]# egrep -v "^#|^$" /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
  • Kubernetes API server configuration file :
[root@k-master ~]# egrep -v "^#|^$" /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
  • Start all Kubernetes services :
[root@k-master ~]# for SERVICE in etcd kube-apiserver kube-controller-manager kube-scheduler
 > do
 > systemctl restart $SERVICE
 > systemctl enable $SERVICE
 > done
 Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
 Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
 Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
 Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service

We now have the following listening ports :

[root@k-master ~]# netstat -ntulp | egrep -v "ntpd|sshd"
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      2913/kube-scheduler
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      2887/kube-controlle
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      2828/etcd
tcp        0      0 127.0.0.1:7001          0.0.0.0:*               LISTEN      2828/etcd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2356/master
tcp6       0      0 :::2379                 :::*                    LISTEN      2828/etcd
tcp6       0      0 :::8080                 :::*                    LISTEN      2858/kube-apiserver
tcp6       0      0 ::1:25                  :::*                    LISTEN      2356/master
  • Create “etcd” key :
[root@k-master ~]# etcdctl mk /frederic.wou/network/config '{"Network":"172.17.0.0/16"}'
{"Network":"172.17.0.0/16"}
[root@k-master ~]# etcdctl ls /frederic.wou --recursive
/frederic.wou/network
/frederic.wou/network/config
[root@k-master ~]# etcdctl get /frederic.wou/network/config
{"Network":"172.17.0.0/16"}
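
The master can now be sanity-checked: the API server should answer on port 8080 and etcd should report a healthy cluster. A minimal check (sketch):

curl -s http://localhost:8080/version   # API server build information
etcdctl cluster-health                  # should report "cluster is healthy"
kubectl cluster-info                    # shows the master URL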

On each minion node

  • flannel configuration:
[root@k-node1 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.1.130:2379"
FLANNEL_ETCD_KEY="/frederic.wou/network"
[root@k-node2 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.1.130:2379"
FLANNEL_ETCD_KEY="/frederic.wou/network"
[root@k-node3 ~]# egrep -v "^#|^$" /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.1.130:2379"
FLANNEL_ETCD_KEY="/frederic.wou/network"
  • Kubernetes common configuration :
[root@k-node1 ~]# egrep -v "^#|^$" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.1.130:8080"
[root@k-node2 ~]# egrep -v "^#|^$" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.1.130:8080"
[root@k-node3 ~]# egrep -v "^#|^$" /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.1.130:8080"
  • kubelet :
[root@k-node1 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=k-node1"
KUBELET_API_SERVER="--api_servers=http://k-master:8080"
KUBELET_ARGS=""
[root@k-node2 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=k-node2"
KUBELET_API_SERVER="--api_servers=http://k-master:8080"
KUBELET_ARGS=""
[root@k-node3 ~]# egrep -v "^#|^$" /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname_override=k-node3"
KUBELET_API_SERVER="--api_servers=http://k-master:8080"
KUBELET_ARGS=""
  • Start all services :
[root@k-node1 ~]# for SERVICE in kube-proxy kubelet docker flanneld
> do
> systemctl start $SERVICE
> systemctl enable $SERVICE
> done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k-node2 ~]# for SERVICE in kube-proxy kubelet docker flanneld
> do
> systemctl start $SERVICE
> systemctl enable $SERVICE
> done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k-node3 ~]# for SERVICE in kube-proxy kubelet docker flanneld
> do
> systemctl start $SERVICE
> systemctl enable $SERVICE
> done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
A dependency job for kubelet.service failed. See 'journalctl -xe' for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
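
The docker and flanneld failures above are mostly an ordering issue: in this packaging docker is started with the --bip/--mtu options that flanneld writes only after it has fetched its subnet from etcd, so the very first start may time out before that file exists. Once the etcd key is reachable from the node, restarting the services in dependency order normally clears it (a sketch, to be run on each node):

systemctl restart flanneld            # fetches the subnet from etcd and writes /run/flannel/docker
systemctl restart docker              # now picks up the flannel bridge options
systemctl restart kubelet kube-proxy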

Kubernetes is now ready

[root@k-master ~]# kubectl get nodes
NAME            LABELS                                 STATUS
192.168.1.131   kubernetes.io/hostname=192.168.1.131   Ready
192.168.1.132   kubernetes.io/hostname=192.168.1.132   Ready
192.168.1.133   kubernetes.io/hostname=192.168.1.133   Ready
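
Each node can also be inspected in more detail from the master, for example (sketch):

kubectl describe node 192.168.1.131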

Troubleshooting

Unable to start Docker on minion nodes

[root@k-node1 ~]# systemctl start docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details

Check the “ntp” service :

[root@k-node1 ~]# ntpq -p
     remote           refid           st t when poll reach   delay   offset  jitter
==============================================================================
+173.ip-37-59-12 36.224.68.195        2 u    -   64     7   32.539  -0.030   0.477
*moz75-1-78-194- 213.251.128.249      2 u    4   64     7   30.108  -0.988   0.967
-ntp.tuxfamily.n 138.96.64.10         2 u   67   64     7   25.934  -1.495   0.504
+x1.f2tec.de     10.2.0.1             2 u   62   64     7   32.307  -0.044   0.466

Is “flanneld” up & running ?

[root@k-node1 ~]# ip addr show dev flannel0
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
 link/none
 inet 172.17.85.0/16 scope global flannel0
 valid_lft forever preferred_lft forever

Is this node able to connect to the “etcd” master ?

[root@k-node1 ~]# curl -s -L http://192.168.1.130:2379/version
{"etcdserver":"2.1.1","etcdcluster":"2.1.0"}[root@k-node1 ~]
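
The flannel key itself can also be read from the node through the etcd v2 HTTP API, to confirm that flanneld will find the network it expects (sketch):

curl -s -L http://192.168.1.130:2379/v2/keys/frederic.wou/network/config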

Is “kube-proxy” service running ?

[root@k-node1 ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2016-02-03 14:50:25 CET; 1min 0s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 2072 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           └─2072 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.1.130:8080

Feb 03 14:50:25 k-node1 systemd[1]: Started Kubernetes Kube-Proxy Server.
Feb 03 14:50:25 k-node1 systemd[1]: Starting Kubernetes Kube-Proxy Server...

Try to start the Docker daemon manually :

[root@k-node1 ~]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=172.17.85.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.85.1/24 --ip-masq=true --mtu=1472 "
[root@k-node1 ~]# /usr/bin/docker daemon -D --selinux-enabled --bip=172.17.85.1/24 --ip-masq=true --mtu=1472
...
...
...
INFO[0001] Docker daemon                                 commit=a01dc02/1.8.2 execdriver=native-0.2 graphdriver=devicemapper version=1.8.2-el7.centos
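
If the daemon comes up cleanly with those options, the problem is usually only start-up ordering. Stop the manually started daemon (Ctrl-C) and let systemd start it again so the flannel options are injected from /run/flannel/docker (sketch):

systemctl daemon-reload
systemctl restart flanneld docker
systemctl status docker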

Reposted from: http://frederic-wou.net/kubernetes-first-step-on-centos-7-2/

Original post: https://www.cnblogs.com/rutor/p/10524722.html

