User:NonerKao

Welcome to my user page! Please feel free to comment on my talk page. Any discussion or chat is welcome here, whether about GNU/Linux, my research, games on Linux, learning Chinese, or politics in Asia!

若是我的翻譯有不詳盡或是錯誤的地方,也請不吝給予指正! / If any of my translations are incomplete or wrong, please do not hesitate to correct them!

Personal Profile

關於我 / About me

  • 正體中文使用者 / Traditional Chinese user.
  • 晶心科技軟體工程師 / A software engineer at Andes Technology, Hsinchu, Taiwan

想做的事情 / Wish list

  • 將重要的文章翻譯成正體中文 / Translate important articles into Traditional Chinese
  • 幫助更多初學者進入Archlinux的門檻 / Help more Traditional Chinese beginners get started with Arch Linux
  • 更加了解Arch社群 / Get familiar with the Arch community

聯絡方式 / Contact

  • alankao@andestech.com
  • s101062801@m101.nthu.edu.tw

Computer-related Interests

  • Container technology
  • Go
  • Linux Kernel, especially the RISC-V port
  • Operating system and Virtualization
  • Cloud computing
  • Heterogeneous computing

Basic configuration

You may either use the kubeadm helper or manually configure a Kubernetes cluster.

Using kubeadm

The following guide is for a one-master, one-node setup, where both machines are on the 192.168.122.0/24 network and the master hosts the Kubernetes cluster at 192.168.122.1. Note that pods have their own CIDR, assumed to be 192.168.123.0/24 here.

Master

First, set up the configuration file for the kubelet service:

/etc/kubernetes/kubelet
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
              --kubeconfig=/etc/kubernetes/kubelet.conf \
              --config=/var/lib/kubelet/config.yaml \
              --network-plugin=cni \
              --pod-infra-container-image=k8s.gcr.io/pause:3.1"

Do not worry about the files in the arguments that do not exist yet; they will be created during the kubeadm initialization process. Note that if you are in a proxy environment or have special DNS settings, you should specify the resolv.conf to be used inside containers by adding one more argument:

--resolv-conf=/the/path/to/the/resolv.conf

Then, run

# kubeadm init --advertise-address=192.168.122.1 --pod-network-cidr=192.168.123.0/24

It will show the progress of initialization and then get stuck, complaining about something like:

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

At this point, start kubelet.service in a second terminal:
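
# systemctl start kubelet.service

kubelet will then launch the remaining Kubernetes components, kubeadm will confirm that they are up, and initialization will continue. If it finishes successfully, there should be a message like: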

Your Kubernetes master has initialized successfully!

Then you can configure your account as the administrator of this newly created Kubernetes cluster:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
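
To verify that kubectl is now pointing at the new cluster, list its nodes. The master should show up, although it may stay NotReady until a pod network is deployed:

$ kubectl get nodes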

Then you can deploy a pod network add-on; many choices are listed in the Kubernetes documentation. Note that each option comes with its own default pod network CIDR, so you should modify its settings according to what was given in --pod-network-cidr.
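
For example, with flannel (one of those add-ons; the manifest URL below is an assumption and may have moved), download the manifest, replace its default pod CIDR with the one used above, and apply it:

$ curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ sed -i 's|10.244.0.0/16|192.168.123.0/24|' kube-flannel.yml
$ kubectl apply -f kube-flannel.yml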

Finally, check the health of this master:

$ kubectl get componentstatus
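
If everything is healthy, the output should look something like:

NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}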

Node

Join the cluster by simply typing in the final line of the master's success message:

kubeadm join --token <token> 192.168.122.1:6443 --discovery-token-ca-cert-hash sha256:<hash>
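
If the token has expired or the original output was lost, a fresh join command can be generated on the master:

# kubeadm token create --print-join-command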

Troubleshooting

Settings behind a proxy

kubeadm reads the https_proxy, http_proxy, and no_proxy environment variables. Kubernetes internal networking should be included in the last one, for example:

export no_proxy="192.168.122.0/24,10.96.0.0/12,192.168.123.0/24"

where the second one is the default service network CIDR.
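
To test the proxy settings before running the full initialization, recent kubeadm versions can pre-pull the control-plane images:

# kubeadm config images pull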

You may also need extra CNI plugins:

$ go get -d github.com/containernetworking/plugins
$ cd ~/go/src/github.com/containernetworking/plugins
$ bash ./build_linux.sh 
# cp bin/* /opt/cni/bin/

fatal error: runtime: out of memory

This might happen when building Kubernetes from source. A known trick is to set up a zram-backed swap area:

# modprobe zram
# echo lz4 > /sys/block/zram0/comp_algorithm
# echo 16G > /sys/block/zram0/disksize
# mkswap --label zram0 /dev/zram0
# swapon --priority 100 /dev/zram0
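
Once the build has finished, the temporary swap area can be removed again:

# swapoff /dev/zram0
# echo 1 > /sys/block/zram0/reset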

error when creating "xxx.yaml": No API token found for service account "default"

Please check the details on Stack Overflow.

Error: unable to load server certificate

This might happen when starting a service. Check the permissions of the *.key files the service loads.
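
For example, in a kubeadm-based setup the certificates usually live under /etc/kubernetes/pki. Assuming the failing service runs as user kube, as in the unit file below, something like the following could reveal and fix an unreadable key (the exact file names are assumptions):

# ls -l /etc/kubernetes/pki/
# chown kube:kube /etc/kubernetes/pki/apiserver.key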

Deprecated

Manual configuration

Note: This turned out to be a tedious yet still incomplete guide. It is kept only for archival purposes.

A basic configuration of one master, with the key-value store embedded, and one node is presented in this section. Assume that they are connected to each other in a private network 192.168.122.0/24, where the master's IP is 192.168.122.1 and the node's is 192.168.122.10.

This guide uses kubernetesAUR, but one should be able to apply the following steps just as easily with kubernetes-binAUR. Note that this is an insecure setup, which should only be run for testing or in a fully trusted environment.

Master

A kubernetes master machine hosts three services:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

It also requires a key-value store (or multiple ones for high availability). The de facto choice is etcdAUR.

etcd

Install etcdAUR and start etcd.service. By default, it listens on http://127.0.0.1:2379.
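
You can verify that etcd is up through its health endpoint; it should answer with something like:

$ curl http://127.0.0.1:2379/health
{"health": "true"}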

kube-apiserver

A simple example of the service file looks like this:

/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://kubernetes.io/docs/concepts/overview/components/#kube-apiserver https://kubernetes.io/docs/reference/generated/kube-apiserver/
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Two environment configuration files, /etc/kubernetes/config and /etc/kubernetes/apiserver, are shown here. The former is shared by all of the master's and nodes' components:

/etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.122.1:8080"

and the latter holds settings specific to the apiserver:

/etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=192.168.122.1"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.122.0/24"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS="--log-dir=/var/log/kubernetes --service-node-port-range=1-65535"
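
After both files are in place, reload systemd and start the API server. As a quick sanity check (assuming the addresses above), its health endpoint on the insecure port should answer with ok:

# systemctl daemon-reload
# systemctl start kube-apiserver.service
$ curl http://192.168.122.1:8080/healthz
ok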

kube-controller-manager

kube-scheduler

Nodes