Revision as of 08:56, 14 January 2019
Welcome to my user page! Please feel free to comment on my talk page. Any discussion or chatting is welcome here, whether about GNU/Linux, my research, games on Linux, learning Chinese, or politics in Asia!
- 1 Personal Profile
- 2 Computer-related Interests
- 3 Basic configuration
- 4 Troubleshooting
關於我 / About me
- 正體中文使用者 / Traditional Chinese user.
- 晶心科技軟體工程師 / A software engineer at Andes Technology, Hsinchu, Taiwan
想做的事情 / Wish list
- 將重要的文章翻譯成正體中文 / Translate important articles into Traditional Chinese
- 幫助更多初學者進入Archlinux的門檻 / Help more Traditional Chinese beginners get started with Arch Linux
- 更加了解Arch社群 / Get familiar with Arch community
聯絡方式 / Contact
- Container technology
- Linux Kernel, especially the RISC-V port
- Operating system and Virtualization
- Cloud computing
- Heterogeneous computing
You may either use the kubeadm helper or configure a Kubernetes cluster manually.
The following guide is for a one-master-one-slave build, where both nodes are in the
192.168.122.0/24 network and the master hosts the Kubernetes cluster at
192.168.122.1. Note that pods have their own CIDR; this guide assumes 192.168.123.0/24.
First, set up the configuration file for the kubelet service,
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
              --kubeconfig=/etc/kubernetes/kubelet.conf \
              --config=/var/lib/kubelet/config.yaml \
              --network-plugin=cni \
              --pod-infra-container-image=k8s.gcr.io/pause:3.1"
Don't worry about the files in the arguments that do not yet exist. They will be created during the
kubeadm initialization process. Note that if you are in a proxy environment or have special DNS settings, you should specify the
resolv.conf to be used in containers by adding one more argument.
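For example, kubelet accepts a --resolv-conf flag for this purpose; a sketch of the extended configuration (the path below is a placeholder, substitute your own resolver file):

```shell
# /etc/kubernetes/kubelet -- same file as above, with one extra argument.
# /etc/kubernetes/resolv.conf is a hypothetical path; point it at the
# resolver configuration your containers should actually use.
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
              --kubeconfig=/etc/kubernetes/kubelet.conf \
              --config=/var/lib/kubelet/config.yaml \
              --network-plugin=cni \
              --pod-infra-container-image=k8s.gcr.io/pause:3.1 \
              --resolv-conf=/etc/kubernetes/resolv.conf"
```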
# kubeadm init --advertise-address=192.168.122.1 --pod-network-cidr=192.168.123.0/24
It will show the progress of initialization and then get stuck, complaining about something like
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
At this moment, start
kubelet.service. Kubelet is expected to launch some Kubernetes components, which will be confirmed by
kubeadm. If done successfully, there should be a message like:
Your Kubernetes master has initialized successfully!
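The "start kubelet" step above can be sketched as follows (run on the master while kubeadm init is waiting):

```shell
# Start kubelet so kubeadm's health check on port 10248 can pass
systemctl start kubelet.service
# Optionally enable it so it comes back after a reboot
systemctl enable kubelet.service
```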
Then you can configure your account as the administrator of this newly created Kubernetes cluster,
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then you can deploy a pod network. Many choices can be found here. Note that each option has its own default pod network CIDR, so you should modify those settings according to the --pod-network-cidr value given to kubeadm init.
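As one example (assuming Flannel as the pod network; the manifest URL and its default CIDR are assumptions that may have changed since this was written):

```shell
# Fetch Flannel's deployment manifest (URL is an assumption and may have moved)
curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Replace Flannel's default pod CIDR with the one passed to kubeadm init
sed -i 's|10.244.0.0/16|192.168.123.0/24|' kube-flannel.yml
# Deploy the pod network into the cluster
kubectl apply -f kube-flannel.yml
```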
Finally, check the health of this master,
$ kubectl get componentstatus
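On a healthy master, the output looks roughly like this (component names and messages vary by version):

```
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
```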
Join the cluster by simply typing in the final line of the master's success message,
kubeadm join --token <token> 192.168.122.1:6443 --discovery-token-ca-cert-hash sha256:<hash>
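If the token has expired or the line was lost, a fresh join command can be printed on the master (assuming a kubeadm version that provides this subcommand):

```shell
# Run on the master: creates a new bootstrap token and prints the full join command
kubeadm token create --print-join-command
```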
- Note: This turned out to be a tedious but still incomplete guide. It is kept here only for the archive.
A basic configuration of one master, with the key-value storage embedded, and one node is presented in this section. Assume that they connect to each other in a private network
192.168.122.0/24, where the master's IP is
192.168.122.1 and the node takes another address in that subnet.
This guide uses an AUR package, but one should be able to apply the following steps easily using another AUR package. Note that this is an insecure setting, which should be run only for testing or in a fully trusted environment.
A Kubernetes master machine hosts three services: kube-apiserver, kube-scheduler, and kube-controller-manager;
it also requires a key-value store (or multiple ones for high availability). A de facto choice is etcd.
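Assuming etcd is installed and listening on its default client port (as the apiserver configuration below expects), starting and probing it can be sketched as:

```shell
# Start etcd (service name assumed to be etcd.service)
systemctl start etcd.service
# etcd of that era answers a plain-HTTP health probe on its client port
curl http://127.0.0.1:2379/health
```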
A simple example of the service file looks like:
[Unit]
Description=Kubernetes API Server
Documentation=https://kubernetes.io/docs/concepts/overview/components/#kube-apiserver https://kubernetes.io/docs/reference/generated/kube-apiserver/
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Two environment configuration files,
/etc/kubernetes/config and /etc/kubernetes/apiserver, are shown here. The former is shared by all of the master's and the nodes' components,
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.122.1:8080"
and the latter one is a setting specific to the apiserver,
KUBE_API_ADDRESS="--insecure-bind-address=192.168.122.1"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.122.0/24"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS="--log-dir=/var/log/kubernetes --service-node-port-range=1-65535"
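With the apiserver running, the insecure HTTP port configured above can be probed directly to verify the settings took effect (assuming the service has already been started):

```shell
# kube-apiserver answers plain HTTP on the insecure port configured above
curl http://192.168.122.1:8080/healthz
```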
fatal error: runtime: out of memory
This might happen when building Kubernetes from source. A known trick is to set up a zram-backed swap device:
# modprobe zram
# echo lz4 > /sys/block/zram0/comp_algorithm
# echo 16G > /sys/block/zram0/disksize
# mkswap --label zram0 /dev/zram0
# swapon --priority 100 /dev/zram0
error when creating "xxx.yaml": No API token found for service account "default"
Please check the details on stackoverflow.
Error: unable to load server certificate
This might happen when starting a service. Check the permission settings of the
*.key files.
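Since the services above run as the kube user, a sketch of a fix (the directory is an assumption; adjust it to wherever your certificates actually live):

```shell
# Let the kube user read the private keys, and nobody else
chown kube:kube /etc/kubernetes/pki/*.key
chmod 400 /etc/kubernetes/pki/*.key
```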