
Does anyone know of a solution for provisioning (non-managed) autoscaling Kubernetes clusters in the cloud?

I am looking for a generic solution that will work in any public cloud and in private data centers.

After quite a bit of research, none of the known options meets the requirement above:

  • kops is AWS-specific and uses its own AMI images.

  • kubespray (aka kargo) has no notion of autoscaling; it executes Ansible against a fixed inventory of hosts.

  • kubeadm expects a known list of hosts and does not take care of OS setup such as installing Docker, the kubelet, etc.

  • the Kubernetes cluster-autoscaler controls existing autoscaling groups, but does not prepare the images used by those groups.

Is there some generic way to make an autoscaling group deploy a Kubernetes cluster on itself? Possibly by building cloud images (like AMIs in AWS) with Packer for this purpose.
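For the Packer route, the idea would be to bake the container runtime and the Kubernetes node binaries into a generic image, so that an autoscaling group on any cloud boots identical, ready-to-join nodes. A minimal sketch of what a shell provisioner script would cover (the package names and the Debian base are assumptions, not part of the question):

```shell
#!/usr/bin/env sh
# Hypothetical Packer shell-provisioner sketch: lists what would be baked
# into a generic node image. Package names assume a Debian-based image;
# a real provisioner would first add the upstream Kubernetes apt repository.
set -eu

PACKAGES="docker.io kubelet kubeadm kubectl"

for pkg in $PACKAGES; do
  # A real image build would run:
  #   apt-get install -y "$pkg" && apt-mark hold "$pkg"
  echo "would bake into image: $pkg"
done
```

The same provisioning script can be reused across Packer builders (amazon-ebs, googlecompute, bare-metal ISO builds), which is what keeps the process cloud-agnostic.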

I am looking for a method that is not specific to one public cloud and allows using an elastic environment that scales automatically based on prepared images.

Evgeny Zislis
    As far as I know, kops allows deployments on AWS or GCE, and some effort is being made to allow vSphere deploys as well (that last point is far from stable, that said). Did you hit a roadblock on GCE or another cloud provider? – Tensibai Nov 12 '18 at 16:32
    I need a similar process that can be used with multiple clouds; AWS and AliCloud come first, but bare-metal deployments like Packet.com are also an option. Having a known working process that is not coupled to a specific cloud is what I am looking for. For example, kops uses its own custom-built AMIs on AWS, which is not really acceptable for our purposes. – Evgeny Zislis Nov 12 '18 at 17:16
    Ok, I didn't properly understand these requirements. Maybe looking at Rancher could be an option for baking your own images? – Tensibai Nov 12 '18 at 17:22
    I'm assuming you're aware of the built-in horizontal pod autoscaling; can you describe what that doesn't do that you're looking for? – Xiong Chiamiov Nov 20 '18 at 20:53
  • @XiongChiamiov I am not talking about scaling pods, I am talking about scaling nodes. – Evgeny Zislis Nov 21 '18 at 09:47
    Maybe elaborating on why you need autoscaling nodes would help. It's pretty atypical to need autoscaling nodes in a private datacentre, as you have already paid ahead of time for your capacity. – user2640621 Dec 21 '18 at 08:02
  • Elastic infrastructure is not atypical, adding nodes and having these register themselves once power and network cables are connected without waiting for a "sysadmin" is quite typical. Same goes when a power failure happens, or network problems disconnect some capacity from the master servers. – Evgeny Zislis Dec 23 '18 at 14:03
  • Could you indicate whether you solved the issue? Could you update the question? – 030 Dec 24 '19 at 14:25
  • @030 eventually solved it with a wrapper around kubeadm and a lot of custom orchestration, written as scripts (bash, but it could be any other language) that execute it on the various servers of the environment. So with the exception of this (completely new) thing, I know of no other solution as of yet: https://cluster-api.sigs.k8s.io/ – Evgeny Zislis Dec 24 '19 at 19:18
  • @Evgeny thanks for your reply. Could you post this as an answer? – 030 Dec 24 '19 at 19:20
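The wrapper approach described in the comments can be sketched as a boot-time script baked into the node image: every autoscaled instance runs the same script and joins the cluster itself, so no per-host inventory is needed. The endpoint, token, and CA hash below are placeholders (assumptions); in practice they would be injected via user-data, instance metadata, or a secrets store:

```shell
#!/usr/bin/env sh
# Hypothetical boot-time wrapper for autoscaled worker nodes. All values
# below are placeholders, not real cluster credentials.
set -eu

API_ENDPOINT="${API_ENDPOINT:-k8s-api.internal:6443}"
JOIN_TOKEN="${JOIN_TOKEN:-abcdef.0123456789abcdef}"
CA_CERT_HASH="${CA_CERT_HASH:-sha256:<hash-of-cluster-ca>}"

# Every node in the autoscaling group runs the identical command, which is
# what makes the group elastic without a per-host inventory.
JOIN_CMD="kubeadm join ${API_ENDPOINT} --token ${JOIN_TOKEN} --discovery-token-ca-cert-hash ${CA_CERT_HASH}"
echo "node bootstrap would run: ${JOIN_CMD}"

# Guarded so the sketch is harmless on a machine without kubeadm installed.
if command -v kubeadm >/dev/null 2>&1; then
  ${JOIN_CMD}
fi
```

The same script works unchanged on AWS, AliCloud, or bare metal, because the only cloud-specific part is how the three variables are delivered to the instance.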

1 Answer


Recently Kubernetes started preview work on https://cluster-api.sigs.k8s.io, which is meant to become the standard way to deploy high-availability clusters.

The kubeadm tool has also gained features for deploying multiple masters since this question was first asked, which helps tremendously. With the exception of cloud-specific configuration, the documentation for the kubeadm configuration YAML file takes you quite far towards a properly deployed cluster.
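As an illustration of that configuration file, here is a minimal sketch of a multi-master setup (the endpoint, version, and pod subnet are placeholder assumptions): the `controlPlaneEndpoint` pointing at a load-balanced address is what allows additional control-plane nodes to join later.

```shell
#!/usr/bin/env sh
# Sketch: generate a minimal kubeadm config for a multi-master cluster.
# Endpoint, version and subnet are placeholders, not values from the answer.
cat > /tmp/kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
# Point at a load-balanced address so more control-plane nodes can join.
controlPlaneEndpoint: "k8s-api.internal:6443"
networking:
  podSubnet: "10.244.0.0/16"
EOF

# On the first master one would then run (guarded here so the sketch is
# safe to execute on a machine without kubeadm):
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm init --config /tmp/kubeadm-config.yaml --upload-certs
fi
```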

docs for kubeadm: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Evgeny Zislis