
Ubuntu 20.04 kubernetes install

CITTA 2023. 2. 3. 19:00

To learn how a Kubernetes cluster is built and how containers work, I went through a simple installation and test.

The references I followed were scattered all over, so I reorganized the steps in my own way; even so, some parts may still be incomplete.

 

* Everything was set up as the root account.

hostname     ip                  role                   resource
k8s-master   192.168.130.131/23  control-plane, master  Disk 30GB, 2 Core, 4GB Mem
k8s-worker1  192.168.130.132/23  worker                 Disk 30GB, 2 Core, 4GB Mem
k8s-worker2  192.168.130.133/23  worker                 Disk 30GB, 2 Core, 4GB Mem
1. Master node

  1. Configure hosts
$ hostnamectl set-hostname [name]

$ cat <<EOF >> /etc/hosts
192.168.130.131 k8s-master
192.168.130.132 k8s-worker1
192.168.130.133 k8s-worker2
EOF
  2. Firewall
## Check the firewall status
$ ufw status
Status: active

## Disable the firewall
$ ufw disable
Firewall stopped and disabled on system startup
  3. Configure the repository and install docker and containerd
$ apt-get update   # refresh the list of installable packages
$ apt-get upgrade  # apply the actual upgrades
$ apt-get install ca-certificates curl gnupg lsb-release

## Add Docker's official GPG key
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

## Install Docker Engine and containerd
$ apt-get update
$ apt-get install docker-ce docker-ce-cli containerd.io

## Check the Docker version
$ docker version
Client: Docker Engine - Community
 Version:           23.0.0
...
Server: Docker Engine - Community
 Engine:
  Version:          23.0.0
...

## Limit Docker log size
$ mkdir -p /etc/docker
## typical json-file log rotation settings; adjust max-size/max-file as needed
$ cat <<EOF > /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
$ systemctl restart docker
  4. Configuration before installing kubernetes
## Let iptables see bridged traffic
$ cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

$ cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

$ sysctl --system
  5. Disable swap
$ swapoff -a
$ vi /etc/fstab
## comment out the swap entry
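Instead of editing /etc/fstab by hand, the swap line can be commented out non-interactively. A sketch, demonstrated on a temp file so it is safe to try; on the real host, point the sed at /etc/fstab:

```shell
# Build a sample fstab-style file
fstab=$(mktemp)
printf '%s\n' 'UUID=abcd / ext4 defaults 0 1' '/swap.img none swap sw 0 0' > "$fstab"

# Comment out every uncommented line whose fields include "swap"
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$fstab"

cat "$fstab"
# UUID=abcd / ext4 defaults 0 1
# #/swap.img none swap sw 0 0
```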
  6. Check ports on the master node
## Ports 6443 and 8080 must be free; no other service should be listening on them
## The kubernetes API server uses ports 6443 and 8080 for its HTTP services
## Port 8080: an insecure port for testing and bootstrapping; other master components (scheduler, controller manager) use it to talk to the API
## Port 6443: uses TLS; the certificate is set with the --tls-cert-file flag and the key with the --tls-private-key-file flag

$ telnet [master ip] 6443
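telnet is often not preinstalled; the same check can be sketched with bash's built-in /dev/tcp redirection (the `check_port` helper name is my own):

```shell
# Print "open" if something is listening on HOST:PORT, else "closed"
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

# Before kubeadm init, nothing should be listening on the API server port
check_port 127.0.0.1 6443
```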
  7. Install kubectl, kubeadm, kubelet
## Update the repository and install required packages
$ apt-get update
$ apt-get install apt-transport-https ca-certificates curl

## Download the Google Cloud public key and add the kubernetes repository
$ curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

$ apt-get update
$ apt-get install kubelet kubeadm kubectl

$ apt-mark hold kubelet kubeadm kubectl

## Register the kubernetes service and restart it
$ systemctl daemon-reload
$ systemctl restart kubelet
  8. Enable ip_forward
## Controls whether the kernel forwards packets between network interfaces; the default is 0, meaning NO FORWARD
## Setting it to 1 enables packet forwarding between networks
$ echo 1 | tee /proc/sys/net/ipv4/ip_forward
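Writing to /proc only affects the running kernel, so this setting is lost on reboot. A sketch to persist it, reusing the /etc/sysctl.d convention from the k8s.conf step (the file name ip_forward.conf is my own choice):

```shell
# Persist ip_forward across reboots via a sysctl.d drop-in (requires root)
cat <<EOF | tee /etc/sysctl.d/ip_forward.conf
net.ipv4.ip_forward = 1
EOF

# Reload all sysctl settings
sysctl --system
```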
  9. Configure the control plane - master node

(1) control plane 구성

$ kubeadm init
...
kubeadm join 192.168.130.131:6443 --token 5vq2ed.7rzhrrg97ezalngh \
        --discovery-token-ca-cert-hash sha256:f53f8fdbbc1efc5979a020ce8d8d8b61f8a78396c1f2bd393b59ac3c11a0df6b

## Save the kubeadm join command (with the token) printed at the end; it is needed later
## If kubeadm init fails (1)
The cri option in the containerd daemon appears to trigger a bug; the issue shows up from containerd 1.3.7 onward
$  kubeadm init
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: time="2023-02-01T07:29:46Z" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

## Reset with the commands below, then rerun init
$ rm /etc/containerd/config.toml
$ systemctl restart containerd.service

## If kubeadm init fails (2)
Happens because the br_netfilter module is not loaded
$  kubeadm init
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

## Run the command below, then rerun init
$ modprobe br_netfilter 

(2) Set up permissions

## Let root and regular users run kubernetes commands
# When running as root
$ export KUBECONFIG=/etc/kubernetes/admin.conf

# When running as a regular user
$ mkdir -p $HOME/.kube

## admin.conf is generated during kubeadm init
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

(3) Install a CNI (master node)

## Install a pod network add-on (calico, canal, flannel, romana, weave net, etc.)
## For weave net, tcp 6783 and udp 6783/6784 must not be blocked by a firewall

$ kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
## Output
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created

## Reference
weaveworks: https://www.weave.works/docs/net/latest/kubernetes/kube-addon/

(4) Check that the weave-net pod is running

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS      AGE
kube-system   coredns-787d4945fb-frc8t             1/1     Running   0             44h
kube-system   coredns-787d4945fb-mxdk6             1/1     Running   0             44h
kube-system   etcd-k8s-master                      1/1     Running   1 (28m ago)   44h
kube-system   kube-apiserver-k8s-master            1/1     Running   1 (28m ago)   44h
kube-system   kube-controller-manager-k8s-master   1/1     Running   1 (28m ago)   44h
kube-system   kube-proxy-s9hj7                     1/1     Running   1 (28m ago)   44h
kube-system   kube-scheduler-k8s-master            1/1     Running   1 (28m ago)   44h
kube-system   weave-net-b8g94                      2/2     Running   1 (23m ago)   23m
2. Worker node

  1. Configure hosts
$ hostnamectl set-hostname [name]

$ cat <<EOF >> /etc/hosts
192.168.130.131 k8s-master
192.168.130.132 k8s-worker1
192.168.130.133 k8s-worker2
EOF
  2. Firewall
## Check the firewall status
$ ufw status
Status: active

## Disable the firewall
$ ufw disable
Firewall stopped and disabled on system startup
  3. Configure the repository and install docker and containerd
$ apt-get update   # refresh the list of installable packages
$ apt-get upgrade  # apply the actual upgrades
$ apt-get install ca-certificates curl gnupg lsb-release

## Add Docker's official GPG key
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

## Install Docker Engine and containerd
$ apt-get update
$ apt-get install docker-ce docker-ce-cli containerd.io

## Check the Docker version
$ docker version
Client: Docker Engine - Community
 Version:           23.0.0
...
Server: Docker Engine - Community
 Engine:
  Version:          23.0.0
...

## Limit Docker log size
$ mkdir -p /etc/docker
## typical json-file log rotation settings; adjust max-size/max-file as needed
$ cat <<EOF > /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
$ systemctl restart docker
  4. Configuration before installing kubernetes
## Let iptables see bridged traffic
$ cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

$ cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

$ sysctl --system
  5. Disable swap
$ swapoff -a
$ vi /etc/fstab
## comment out the swap entry
  6. Install kubectl, kubeadm, kubelet
## Update the repository and install required packages
$ apt-get update
$ apt-get install apt-transport-https ca-certificates curl

## Download the Google Cloud public key and add the kubernetes repository
$ curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

$ apt-get update
$ apt-get install kubelet kubeadm kubectl

$ apt-mark hold kubelet kubeadm kubectl

## Register the kubernetes service and restart it
$ systemctl daemon-reload
$ systemctl restart kubelet
  7. Join the worker node
## Run the kubeadm join command that was printed at the end of kubeadm init on the master node
$ kubeadm join 192.168.130.131:6443 --token 5vq2ed.7rzhrrg97ezalngh \
        --discovery-token-ca-cert-hash sha256:f53f8fdbbc1efc5979a020ce8d8d8b61f8a78396c1f2bd393b59ac3c11a0df6b
## If the join fails like this
$ kubeadm join 192.168.130.131:6443 --token 5vq2ed.7rzhrrg97ezalngh --discovery-token-ca-cert-hash sha256:f53f8fdbbc1efc5979a020ce8d8d8b61f8a78396c1f2bd393b59ac3c11a0df6b
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "5vq2ed"
To see the stack trace of this error execute with --v=5 or higher

## kubeadm tokens expire after 24 hours, after which a new one must be created
## If the old token still appears in the list
$ kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
rspbgu.bs06ttvl5g2nijji   23h         2023-02-04T05:57:21Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

## Delete the old token with the command below (if 24 hours have passed it may no longer be in the list)
$ kubeadm token delete rspbgu.bs06ttvl5g2nijji
bootstrap token "rspbgu" deleted
## Confirm deletion
$ kubeadm token list
## Create a new token
$ kubeadm token create --print-join-command
kubeadm join 192.168.130.131:6443 --token l85ypt.kpwrps300qtqu9mi --discovery-token-ca-cert-hash sha256:f53f8fdbbc1efc5979a020ce8d8d8b61f8a78396c1f2bd393b59ac3c11a0df6b
## On a successful join
$ kubeadm join 192.168.130.131:6443 --token l85ypt.kpwrps300qtqu9mi --discovery-token-ca-cert-hash sha256:f53f8fdbbc1efc5979a020ce8d8d8b61f8a78396c1f2bd393b59ac3c11a0df6b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

## Verify on the master
$ kubectl get nodes
NAME          STATUS   ROLES           AGE    VERSION
k8s-master    Ready    control-plane   47h    v1.26.1
k8s-worker1   Ready    <none>          106m   v1.26.1
k8s-worker2   Ready    <none>          75m    v1.26.1