
I'm trying to set up Helm for the first time and I'm having trouble.

First, I created a service account with the cluster-admin role (following https://github.com/kubernetes/helm/blob/master/docs/rbac.md#example-service-account-with-cluster-admin-role ).
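
For reference, this is roughly the manifest I applied (it mirrors the example from the linked doc; the rbac-config.yaml file name is just what I used):

    # tiller service account in kube-system, bound to the cluster-admin role
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system

    $ kubectl create -f rbac-config.yaml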

After that I initialized a brand new Tiller with helm init --service-account=tiller, which was successful.

Now, when I try to install something:

  • First try:

    $ helm repo add gitlab https://charts.gitlab.io
    $ helm install --name gitlab-runner -f gitlab-runner-values.yaml gitlab/gitlab-runner
    

    where gitlab-runner-values.yaml looks like this:

    gitlabUrl: https://my-gitlab.domain.com
    runnerRegistrationToken: "MY_GITLAB_RUNNER_TOKEN"
    concurrent: 10
    
  • Second try (I wasn't sure whether the issue was with the custom repo, so I tried the official one):

    $ helm install stable/kibana
    

I'm getting this error:

Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 192.168.0.18:10250: getsockopt: connection timed out

I noticed that 192.168.0.18 is visible in the pod list:

kube-system   kube-proxy-kzflh                        1/1       Running   0          7d        192.168.0.18   kube-worker-7
kube-system   weave-net-jq4n4                         2/2       Running   2          7d        192.168.0.18   kube-worker-7

and that tiller is running on the same node:

kube-system   tiller-deploy-5b48764ff7-qtv9v          1/1       Running   0          3m        10.38.0.1      kube-worker-7

I was told that I probably don't have permission for pods/portforward and to list pods, but kubectl auth can-i create pods/portforward tells me that I can (and the same goes for listing pods).
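
Roughly what I checked (I ran it against kube-system, where Tiller runs; adjust the namespace to your setup):

    $ kubectl auth can-i create pods/portforward --namespace kube-system
    yes
    $ kubectl auth can-i list pods --namespace kube-system
    yes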

Also, helm list throws the same error as install.
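
Since the error points at port 10250 (the kubelet) on that node rather than at anything RBAC-related, one quick sanity check I can run from the master is whether that port is reachable at all (assuming nc is installed; the IP is taken from the error message):

    $ nc -vz -w 5 192.168.0.18 10250
    # if this also times out, it looks like a network/routing problem rather than a permissions one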

Morishiri

2 Answers


The problem was that the nodes, when registered with kubeadm, were reporting their private IPs to the cluster master. This caused problems, because the master was trying to reach 192.168.*.* addresses, which were not reachable from its point of view.

I needed to edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and specify the public IP of the node via the --node-ip=<public-node-ip> parameter, then reload the service configuration and restart it, as mentioned in https://github.com/kubernetes/kubeadm/issues/203#issuecomment-335416377.
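
For reference, a rough sketch of the change, assuming the standard kubeadm drop-in location and the KUBELET_EXTRA_ARGS variable (adjust the path and variable name to your setup):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt)
    Environment="KUBELET_EXTRA_ARGS=--node-ip=<public-node-ip>"

    # then reload the unit files and restart kubelet
    $ sudo systemctl daemon-reload
    $ sudo systemctl restart kubelet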

Then I registered the nodes again and everything worked fine.
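
Re-registering a node with kubeadm is roughly a reset followed by a fresh join on each affected worker; the values below are placeholders, use the ones from your own kubeadm init output:

    # on each affected worker node
    $ sudo kubeadm reset
    $ sudo kubeadm join <master-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>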

Morishiri

You have run into Kubernetes issue #22770, where a workaround is mentioned; it goes as follows:

What you're experiencing is a known issue with k8s where for some operations it expects to be able to resolve your node names in the global DNS.

The suggested workaround (a rough shell sketch follows the list) is to:

  • Add entries to /etc/hosts on the master mapping your hostnames to their public IPs
  • Install dnsmasq on the master (e.g. apt install -y dnsmasq)
  • Kill the k8s api server container on master (kubelet will recreate it)
  • Then systemctl restart docker (or reboot the master) for it to pick up the /etc/resolv.conf changes
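
A rough shell sketch of those steps, assuming a Docker-based kubeadm master; the node hostname, public IP and container name are placeholders taken from this question:

    # 1. map the node hostname to its public IP on the master
    $ echo "<node-public-ip>  kube-worker-7" | sudo tee -a /etc/hosts

    # 2. install dnsmasq so containers can resolve those names too
    $ sudo apt install -y dnsmasq

    # 3. kill the kube-apiserver container; kubelet recreates it
    $ sudo docker kill $(sudo docker ps -q --filter name=kube-apiserver)

    # 4. restart docker (or reboot the master) so the /etc/resolv.conf changes are picked up
    $ sudo systemctl restart docker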

The last comment on issue #22770 says:

I can confirm that adding the names of our nodes to /etc/hosts on the kubernetes master resolves this problem for us too :)

so it does not look like a fix for this is coming in any upcoming k8s version, at least not from this particular ticket.

mico
  • Thank you very much! That was not the issue, but you led me to it. I will provide the proper solution in a separate answer. Thanks for turning my attention to the IPs! – Morishiri Mar 12 '18 at 09:34