30

I am configuring a Kubernetes cluster with 2 nodes on CoreOS as described in https://coreos.com/kubernetes/docs/latest/getting-started.html, without flannel. Both servers are on the same network.

But while running the kubelet on the worker, I am getting: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-ca").

I configured the TLS certificates properly on both servers, as described in the doc.
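
One way to sanity-check this (a sketch with openssl, assuming the file names from the CoreOS guide, i.e. ca.pem, apiserver.pem and worker.pem under /etc/kubernetes/ssl):

# On the master: confirm the apiserver certificate chains to kube-ca
openssl verify -CAfile /etc/kubernetes/ssl/ca.pem /etc/kubernetes/ssl/apiserver.pem

# On the worker: confirm the worker certificate chains to the same kube-ca
openssl verify -CAfile /etc/kubernetes/ssl/ca.pem /etc/kubernetes/ssl/worker.pem

# The ca.pem copied to both machines must be identical; the checksums should match
md5sum /etc/kubernetes/ssl/ca.pem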

The master node is working fine, and kubectl is able to launch containers and pods on the master.

Question 1: How can I fix this problem?

Question 2: Is there any way to configure a cluster without TLS certificates?

CoreOS version:
VERSION=899.15.0
VERSION_ID=899.15.0
BUILD_ID=2016-04-05-1035
PRETTY_NAME="CoreOS 899.15.0"

Etcd conf:

 $ etcdctl member list          
ce2a822cea30bfca: name=78c2c701d4364a8197d3f6ecd04a1d8f peerURLs=http://localhost:2380,http://localhost:7001 clientURLs=http://172.24.0.67:2379

Master: kubelet.service

[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
Environment=KUBELET_VERSION=v1.2.2_coreos.0
ExecStart=/opt/bin/kubelet-wrapper \
  --api-servers=http://127.0.0.1:8080 \
  --register-schedulable=false \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --hostname-override=172.24.0.67 \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target

Master: kube-controller.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
    - --leader-elect=true 
    - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --root-ca-file=/etc/kubernetes/ssl/ca.pem
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 1
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Master: kube-proxy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Master: kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://172.24.0.67:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/24
    - --secure-port=443
    - --advertise-address=172.24.0.67
    - --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
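
Since the apiserver serves TLS with apiserver.pem, it may also be worth confirming (a side check, not from the guide) that this certificate carries the expected subject alternative names, i.e. the master IP 172.24.0.67 and the first service IP of the 10.3.0.0/24 range:

# Print the SANs baked into the apiserver certificate
openssl x509 -in /etc/kubernetes/ssl/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"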

Master: kube-scheduler.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - scheduler
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 1

Slave: kubelet.service

[Service]
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests

Environment=KUBELET_VERSION=v1.2.2_coreos.0 
ExecStart=/opt/bin/kubelet-wrapper \
  --api-servers=https://172.24.0.67:443 \
  --register-node=true \
  --allow-privileged=true \
  --config=/etc/kubernetes/manifests \
  --hostname-override=172.24.0.63 \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local \
  --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
  --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
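
The worker kubelet talks to the apiserver at https://172.24.0.67:443, so the certificate the apiserver presents there has to verify against the ca.pem the worker trusts (typically referenced from worker-kubeconfig.yaml). A quick test of exactly that, run from the worker (a sketch, not part of the guide):

# Grab the certificate the apiserver actually serves on port 443
openssl s_client -connect 172.24.0.67:443 -showcerts </dev/null 2>/dev/null | openssl x509 -out /tmp/apiserver-served.pem

# "OK" here means the worker's CA matches; anything else reproduces the kube-ca error
openssl verify -CAfile /etc/kubernetes/ssl/ca.pem /tmp/apiserver-served.pem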

Slave: kube-proxy.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.2.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=https://172.24.0.67:443
    - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
    - --proxy-mode=iptables
    securityContext:
      privileged: true
    volumeMounts:
      - mountPath: /etc/ssl/certs
        name: "ssl-certs"
      - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
        name: "kubeconfig"
        readOnly: true
      - mountPath: /etc/kubernetes/ssl
        name: "etc-kube-ssl"
        readOnly: true
  volumes:
    - name: "ssl-certs"
      hostPath:
        path: "/usr/share/ca-certificates"
    - name: "kubeconfig"
      hostPath:
        path: "/etc/kubernetes/worker-kubeconfig.yaml"
    - name: "etc-kube-ssl"
      hostPath:
        path: "/etc/kubernetes/ssl"

6 Answers

39
mkdir -p $HOME/.kube   
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Welcome to SO! Please add some details explaining what your answer does, it will be more helpful for the OP and future readers of the post. – EcologyTom Feb 17 '19 at 08:27
  • I confirm that this solution works. I created a cluster via kubeadm init, deleted it via kubeadm, and then recreated it via kubeadm init, but I didn't delete the old config from $HOME. I then got the error described above, because I was trying to use the new cluster with the old config file, i.e. with the old k8s cert. After I replaced the config in $HOME with the one from /etc, everything is fine. So in my opinion, if you get the x509 error it means you are trying to use an old config in your $HOME from some old cluster. – Alex Sep 21 '19 at 18:15
  • I confirm this solution worked for me too, even if some explanations would be very welcome for beginners like me. – RemiGaudin Jun 10 '20 at 08:16
  • This should be accepted as the answer, it worked for me too. – mhn_namak Feb 16 '21 at 21:14
  • This is what kubeadm says after kubeadm init: "Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube....cp -i...." – Serve Laurijssen Jun 03 '21 at 05:46
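
Building on Alex's comment above: the x509 error typically means the kubeconfig in $HOME/.kube is left over from an older cluster and embeds a CA that no longer matches the running one. A quick way to confirm (a sketch, assuming a kubeadm layout where the current CA sits at /etc/kubernetes/pki/ca.crt):

# Fingerprint of the CA embedded in the (possibly stale) user kubeconfig
grep certificate-authority-data $HOME/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -fingerprint

# Fingerprint of the CA the cluster is actually using; the two must match
sudo openssl x509 -noout -fingerprint -in /etc/kubernetes/pki/ca.crt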
19

From the Kubernetes official site:

  1. Verify that the $HOME/.kube/config file contains a valid certificate, and regenerate a certificate if necessary

  2. Unset the KUBECONFIG environment variable using:

    unset KUBECONFIG

    Or set it to the default KUBECONFIG location:

    export KUBECONFIG=/etc/kubernetes/admin.conf

  3. Another workaround is to overwrite the existing kubeconfig for the “admin” user:

    mv  $HOME/.kube $HOME/.kube.bak
    mkdir $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

Reference: official site link reference

3

Please see this as a reference; it may help you resolve the issue by re-exporting your cluster credentials with kops. Note that KOPS_STATE_STORE needs to point at your state store before the export runs:

export KOPS_STATE_STORE=s3://"paste your S3 store"
kops export kubecfg "your cluster-name"

Hope that will help.

1

The regular method mentioned above did not work for me. I had to run the complete sequence of commands below to end up with a working certificate.

$ sudo kubeadm reset
$ sudo swapoff -a 

$ sudo kubeadm init --pod-network-cidr=10.244.10.0/16 --kubernetes-version "1.18.3"
$ sudo rm -rf $HOME/.kube

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

$ sudo systemctl enable docker.service
$ sudo service kubelet restart

$ kubectl get nodes

Notes:

If the connection to the port is refused, also run the following command.

$ export KUBECONFIG=$HOME/admin.conf
0

Well, to answer your first question, I think you have to do a few things to resolve your problem.

First, run the command given in this link: kubernetes.io/docs/setup/independent/create-cluster-kubeadm/…

Then complete the setup with these commands:

  • mkdir -p $HOME/.kube
  • sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  • sudo chown $(id -u):$(id -g) $HOME/.kube/config

This admin.conf needs to be known to kubectl for it to work properly.
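
To double-check which kubeconfig kubectl actually picks up (a quick sanity check, not part of the original answer):

# An empty result means kubectl falls back to $HOME/.kube/config
echo $KUBECONFIG

# Show the configuration kubectl has loaded for the current context (secrets are redacted)
kubectl config view --minify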

0

In my case the problem persisted even after:

mkdir -p $HOME/.kube   
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   
sudo chown $(id -u):$(id -g) $HOME/.kube/config

In that case, restarting kubelet solved the problem:

systemctl restart kubelet