
I'm new to Kubernetes, and hence Helm, so I've been reading a lot and teaching myself by setting up a cluster, services, deployments, and pods. In my case I use AWS EKS.

I now at least understand each piece from a technical point of view. But now I want to put it all together to manage an "environment." I believe this is the idea behind GitOps.

Say I'm a QA person testing a "staging" environment. This environment comprises a number of microservices, each with its own version. So logically I can represent it this way:

/staging-2021-03-06-4 # 4th build of the staging environment on 2021-03-06
- microservice1:1.1.0
- microservice2:1.2.0
- microservice3:1.3.0

When we build the next staging environment, e.g., staging-2021-03-06-5, any one of those microservice versions can change.

But now I, the QA guy, find an issue with the above staging-2021-03-06-4 build and ask a developer to look at it. I want to be able to just tell the developer to "check out" the staging-2021-03-06-4 environment so he/she has exactly the environment that I have, to reproduce the issue.

I'm thinking I can set up this idea of an "environment" with Helm, since the "environment" is nothing but a package of packages. Is this the correct use of Helm? If so, do I just set up a staging-2021-03-06-4 Helm chart that has other charts as dependencies, one for each microservice?
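For example, I imagine the umbrella chart's Chart.yaml looking something like this (the chart repository URL is made up):

# Chart.yaml of the umbrella chart "staging-2021-03-06-4".
# Each dependency pins one microservice chart at the exact version
# that made up this build of the environment.
apiVersion: v2
name: staging-2021-03-06-4
version: 1.0.0
dependencies:
  - name: microservice1
    version: 1.1.0
    repository: https://charts.example.com   # hypothetical chart repo
  - name: microservice2
    version: 1.2.0
    repository: https://charts.example.com
  - name: microservice3
    version: 1.3.0
    repository: https://charts.example.com

Installing that one chart (after helm dependency update) would then pull in the three pinned microservice charts.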

I'm also reading How to manage 10+ team environments with Helm and Kubernetes?, since it applies to me as well, but that describes a different (additional) problem I'm trying to solve with Helm.

Update: I think I'm essentially wanting GitOps, so I'll read up on that.

Chris F
  • We're building a tool as a layer on top of GitOps - https://relizahub.com - with the specific functionality you're asking about described here - https://github.com/relizaio/reliza-cli#72-use-case-replace-tags-on-deployment-templates-to-inject-correct-artifacts-for-gitops-using-instance-and-revision . It's an early stage for us, so feel free to connect if this resonates. – taleodor Feb 06 '21 at 21:26
  • @taleodor I briefly checked it out, looks very useful so far. Thanks. – Chris F Feb 07 '21 at 22:20
  • I'm currently studying https://github.com/fluxcd/helm-operator-get-started – Chris F Feb 08 '21 at 14:12
  • Check ArgoCD as well, i.e. we have a sample Helm CD based on Argo here - https://gitlab.com/taleodor/sample-helm-cd/ Also feel free to connect with me - would be happy to discuss more - https://www.linkedin.com/in/pshukhman/ – taleodor Feb 08 '21 at 14:49
  • @taleodor, I sent you a LI connection request. I do have questions. – Chris F Feb 09 '21 at 18:31

2 Answers


Helm charts should not be built for a specific environment, IMHO. You can see that many of the official Helm charts ship a set of default values that are configurable from the outside. You can create your Helm chart the same way and declare the deployment plus environment-specific values with another tool such as Helm Operator or ArgoCD. This way you both simplify your build pipeline and increase dev/prod parity: you use that one chart and promote the same build into different environments, so you can be sure the environments are identical except for their configuration.
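As a rough sketch, the chart itself would only carry environment-agnostic defaults in its values.yaml (the keys below are just examples, not taken from any official chart), and each environment overrides only what differs:

# values.yaml of the "yourservice" chart -- defaults only,
# no environment-specific assumptions baked in.
environment: development
replicas: 1
image:
  repository: yourorganisation/yourservice
  tag: "1.1.0"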

Example ArgoCD application manifests:

production-application.yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-yourservice
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/yourorganisation/helm-charts
    path: yourservice
    targetRevision: master
    helm:
      values: |
        environment: production
        replicas: 10
  destination:
    server: https://kubernetes.default.svc
    namespace: production-yourservice

staging-application.yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: staging-yourservice
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/yourorganisation/helm-charts
    path: yourservice
    targetRevision: master
    helm:
      values: |
        environment: staging
        replicas: 1
  destination:
    server: https://kubernetes.default.svc
    namespace: staging-yourservice
yozel
  • I personally do NOT like this solution at all since now I have to maintain multiple manifest files for multiple applications, different only in the environment they're deployed in. – Chris F Feb 22 '21 at 20:49
  • @chris-f yes, duplicating the manifest files is not the way. You can use a Helm chart of charts (I'm not a big fan) or put together & override the manifest files with Kustomize, which I like to combine with ArgoCD. – David Lukac Mar 28 '22 at 20:10
  1. You could set up the "environments" as a chart of charts using Helm,
  2. or (my preferred way) utilize ArgoCD with Kustomize - keeping a "default" manifest for each service and creating the "environment" with a Kustomize file that references the "default" manifests and overrides the needed values (see the sketch below).

(also see my other answer on how to use ArgoCD)
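A minimal sketch of that layout, assuming a base/ directory holding the "default" manifests and one overlay directory per environment (the names and values below are just placeholders):

# overlays/staging/kustomization.yaml -- builds the "staging" environment
# by referencing the default manifests and overriding what differs.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: staging-yourservice
resources:
  - ../../base
images:
  - name: yourorganisation/yourservice
    newTag: "1.1.0"            # pin the exact build under test
patches:
  - target:
      kind: Deployment
      name: yourservice
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 1

The ArgoCD Application then points its source path at overlays/staging instead of a Helm chart directory.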

David Lukac