
10/11/2023 in DevOps

GitOps Repository Structures and Patterns Part 6: Example Repositories

Johannes Schnatterer

Technical Lead

In this final part of the series on GitOps repository structures and patterns, I show example repositories that provide templates, ideas and tips for your own projects. Some recurring themes emerge, albeit sometimes under different names: structures for applications or teams, structures for cluster-wide resources, and structures for bootstrapping.

The examples also show that, in principle, the structures hardly differ between Argo CD and Flux; the differences are limited to bootstrapping and linking. Since both Argo CD and Flux understand Kustomize via kustomization.yaml, it turns out to be an operator-agnostic tool.

You can get an overview of GitOps repository patterns and structures in the first part of this series. In the second part I introduce operator deployment patterns, in the third part repository patterns, in the fourth part promotion patterns, and in the fifth part wiring patterns.

Example 1: Argo CD Autopilot

  • Repo pattern: Monorepo
  • Operator pattern: “Instance per Cluster” or “Hub and Spoke”
  • Operator: Argo CD
  • Bootstrapping: argocd-autopilot CLI
  • Linking: Application, ApplicationSet, Kustomize
  • Features:
    • Automatic generation of the structure and YAML via CLI
    • Manage Argo CD itself via GitOps
    • Solution for cluster-wide resources
  • Source: argoproj-labs/argocd-autopilot

Figure 1: Repo structure for argocd-autopilot

Argo CD autopilot is a command line interface (CLI) tool that is intended to simplify installation and entry into Argo CD. To this end, it provides the ability to bootstrap Argo CD in the cluster as well as create repo structures.

The bootstrapping of Argo CD is done with a single command: argocd-autopilot repo bootstrap. In order to also deploy applications with the resulting structure, an AppProject (command project create) and an Application (command app create) are also required. Figure 1 shows the resulting repo structure. This structure can be viewed on GitHub at schnatterer/argocd-autopilot-example. The relationships are described below using the numbers in the figure:

  1. The Application autopilot-bootstrap manages the bootstrap folder and thus ties together all other Applications in this list. It is not itself under version control, but is applied imperatively to the cluster during bootstrapping.
  2. The Application argo-cd manages Argo CD itself via GitOps.
  3. To this end, it contains a kustomization.yaml that includes additional resources from the internet: it directly references a kustomization in the autopilot repo, which in turn fetches all resources necessary for installing Argo CD from the Argo CD repo itself. In doing so, it points to the stable branch of Argo CD.
  4. The ApplicationSet cluster-resources references all JSON files under the path bootstrap/cluster-resources/ using git generator for files. This can be used to manage cluster-wide resources, such as namespaces used by multiple applications. By default, only the in-cluster.json file is located here, which contains values for the name and server variables. In the ApplicationSet template, these variables are used to create an Application that references the manifests underneath bootstrap/cluster-resources/in-cluster/. This creates the namespace argocd in the cluster where Argo CD is deployed. This is suitable for the instance per cluster pattern, but is extensible to other clusters to implement the hub and spoke pattern.
  5. The application root is responsible for including all AppProjects and Applications that are created below projects/. After executing the bootstrap command, this folder is still empty.
  6. Each time the project command is executed, an AppProject and associated ApplicationSet are generated in a file. These are intended for the implementation of different environments. The ApplicationSet references all config.json files located in subfolders of the apps folder for the respective environment, for example apps/my-app/overlays/staging/config.json, using the git generator for files. However, the apps folder is initially empty and no applications are generated.
  7. By executing the command app, the folder apps is filled with the structure for an application in an environment. This includes the config.json described in the last point, by means of which the ApplicationSet located in the projects folder generates an Application that deploys the folder itself, for example apps/my-app/overlays/staging. This folder can be used to deploy config that is specific to an environment.
  8. In addition, a kustomization.yaml is created pointing to the base folder. This folder can be used to deploy config that is the same in all environments. This split avoids redundant config.

Analogously to 6. to 8., further environments can be added. Figure 1 shows a subfolder production in the folders apps and projects.
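The git generator for files used in 4. and 6. can be sketched roughly as follows. This is an illustrative sketch, not the exact YAML that autopilot generates; the repo URL is a placeholder:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-resources
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/example/gitops-repo.git
        revision: main
        files:
          # Creates one Application per JSON file,
          # e.g. bootstrap/cluster-resources/in-cluster.json
          - path: bootstrap/cluster-resources/*.json
  template:
    metadata:
      # "name" and "server" are variables read from the JSON files
      name: cluster-resources-{{name}}
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops-repo.git
        targetRevision: main
        path: bootstrap/cluster-resources/{{name}}
      destination:
        server: '{{server}}'
```

Adding another JSON file with a different server value would register a further cluster, which is how this structure extends towards the hub and spoke pattern.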

Finally, it should be mentioned that there are reasons to be cautious when using Autopilot in production. The project does not describe itself as stable; it is still at a “0.x” version. It is also not part of the official “argoproj” organization on GitHub, but is under “argoproj-labs”. The commits come mainly from one company: Codefresh. So it is conceivable that the project will be discontinued or that breaking changes will occur. This makes it inadvisable to use it in production.

Also, by default, the Argo CD version is not pinned. Instead, the kustomization.yaml (3. in Figure 1) ultimately references the stable branch of the Argo CD repo. Here we recommend referencing a deterministic version via Kustomize. A non-deterministic version invites trouble: upgrades of Argo CD could go unnoticed. What about breaking changes in Argo CD? Which version does one restore in a disaster recovery case?
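Pinning could be done by pointing the kustomization.yaml at a fixed release tag instead of the stable branch. A minimal sketch using Kustomize's remote-target syntax (the version number is just an example):

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
  # Deterministic: a release tag instead of the moving "stable" branch
  - https://github.com/argoproj/argo-cd/manifests/cluster-install?ref=v2.9.3
```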

The repository structure that Autopilot creates is complicated, i.e. difficult to understand and maintain. The amount of concentration required to understand Figure 1 and its description speaks for itself. In addition, there are less obvious issues: Why is the autopilot-bootstrap application (1. in Figure 1) not in the GitOps repository, but only in the cluster?
The approach of an ApplicationSet inside the AppProject's YAML pointing to a config.json is hard to understand (4. and 6. in Figure 1). This is compounded by the mixture of YAML and JSON. The cluster-resources ApplicationSet is generally a well-scalable approach for managing multiple clusters via the hub and spoke pattern. However, JSON must be written here as well (4. in Figure 1).

Autopilot models environments via Argo CD Projects (6. and 7. in Figure 1). With this monorepo structure, how would it be feasible to separate different teams of developers? One idea would be to use multiple Argo CD instances according to the “instance per cluster” pattern. In this case, each team would have to manage its own Argo CD instance.

Many organizations like to outsource such tasks to platform teams and implement a repo per team pattern. This is not intuitive with Autopilot. Example 2 shows an alternative.

Example 2: GitOps Playground

  • Repo pattern: “Repo per team” mixed with “Repo per app”
  • Operator pattern: Instance per Cluster (“Hub and Spoke” also possible)
  • Operator: Argo CD (Flux also possible)
  • Bootstrapping: Helm, kubectl
  • Linking: Application
  • Features:
    • Env per app Pattern
    • Manage Argo CD itself via GitOps
    • Config update via CI server
    • Mixed repo patterns
    • Solution for cluster-wide resources
    • Examples for Argo CD and Flux
  • Source: cloudogu/gitops-playground

Figure 2: Relationship of the GitOps repos in the GitOps Playground (Argo CD)

The GitOps Playground provides an OCI image that can be used to provision a Kubernetes cluster with everything needed for operation via GitOps, illustrated using sample applications. The tools installed include a GitOps operator, a Git server, monitoring and secrets management. For the GitOps operator, you have the choice between Argo CD and Flux.
In what follows, we focus on Argo CD because (unlike Flux) it does not itself make suggestions about repo structure. Also, there are fewer public examples of repo structures using Argo CD that are ready for production.

In the GitOps Playground, Argo CD is installed so that it can run itself via GitOps. In addition, a “repo per team” pattern mixed with a “repo per app” pattern is implemented. Figure 2 shows how the GitOps repos are wired.

The GitOps Playground performs some imperative steps once for bootstrapping Argo CD during installation. In the process, three repos are created and initialized:

  • argocd (management and configuration of Argo CD itself),
  • example-apps (example GitOps repository of a developer/application team) and
  • cluster-resources (example GitOps repo of a cluster administrator or an infra/platform team).

Argo CD is installed once by means of a Helm chart. Here helm template can be used. An alternative is to use helm install or helm upgrade -i. Afterwards, however, the secrets in which Helm manages its state should be deleted. Argo CD does not use them, so they would become obsolete and only cause confusion.

To complete the bootstrapping, two resources are also imperatively applied to the cluster: an AppProject named argocd and an Application named bootstrap. These are also contained in the argocd repository.

From there, everything is managed via GitOps. The following describes the relationships using the numbers in the figure:

  1. The Application bootstrap manages the applications folder, which also contains bootstrap itself.
    This allows changes to bootstrap to be made via GitOps. Using bootstrap, other Applications are deployed (App of Apps pattern).
  2. The Application argocd manages the argocd folder, which contains the resources of Argo CD as an umbrella Helm chart. Here, the values.yaml contains the actual config of the Argo CD instance. Additional resources (for example secrets and Ingresses) can be deployed via the templates folder. The actual Argo CD chart is declared in the Chart.yaml.
  3. The Chart.yaml contains the Argo CD Helm chart as a dependency. It references a deterministic version of the chart (pinned via Chart.lock) that is pulled from the chart repository on the internet. This mechanism can be used to update Argo CD via GitOps.
  4. The Application projects manages the projects folder, which in turn contains the following AppProjects:
  5. argocd, which is used for bootstrapping,
  6. the default AppProject built into Argo CD (whose permissions are restricted from the default behavior to reduce the attack surface),
  7. an AppProject per team (to implement least privilege and notifications per team): cluster-resources (for platform admins, needs more privileges on the cluster) and example-apps (for developers, needs fewer privileges on the cluster).
  8. The Application cluster-resources points to the argocd folder in the cluster-resources repo. This repo has the typical folder structure of a GitOps repo (explained in the next step). This way, administrators use GitOps in the same way as their “customers” (the developers) and can provide better support.
  9. The Application example-apps points to the argocd folder in the example-apps repo. Like cluster-resources, it also has the typical folder structure of a GitOps repo:
  10. apps - contains the Kubernetes resources of all applications (the actual YAML),
  11. argocd - contains Argo CD Applications pointing to subfolders of apps (App of Apps pattern),
  12. misc - contains Kubernetes resources that do not belong to specific applications (for example namespaces and RBAC).
  13. The Application misc points to the misc folder.
  14. The Application my-app-staging points to the apps/my-app/staging folder within the same repo. This provides a folder structure for promotion. The Applications with the my-app- prefix implement the “env per app” pattern. This allows each application to use individual environments, e.g. production and staging, or none at all. The actual YAML can be pushed here either manually or automatically, as described in “Config Update”. The GitOps Playground contains examples that implement the config update via CI server based on an app repo. This approach is an example of mixing the “repo per team” and “repo per app” patterns.
  15. The associated production environment is implemented via the my-app-production Application, which points to the apps/my-app/production folder within the same repo. In general, it is recommended to protect all production folders from manual access, if the SCM used allows this. Instead of the separate YAML files used in the diagram, these Applications could also be implemented as follows:
    1. two Applications in the same YAML file,
    2. two Applications with the same name in different Kubernetes namespaces (these namespaces must be configured in Argo CD),
    3. an ApplicationSet that uses the git generator for folders.
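The variant using an ApplicationSet with the git generator for folders could look roughly like this (a sketch; repo URL and paths are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/example/example-apps.git
        revision: main
        directories:
          # One Application per environment folder,
          # e.g. apps/my-app/staging and apps/my-app/production
          - path: apps/my-app/*
  template:
    metadata:
      # {{path.basename}} resolves to "staging" or "production"
      name: my-app-{{path.basename}}
    spec:
      project: example-apps
      source:
        repoURL: https://github.com/example/example-apps.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: my-app-{{path.basename}}
```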

The GitOps Playground itself uses a single Kubernetes cluster for simplicity, implementing the “instance per cluster” pattern. However, the repo structure shown can also be used for multiple clusters using the “hub and spoke” pattern: additional clusters can be defined either in values.yaml or as secrets using the templates folder.
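Such a cluster definition as a secret could be sketched like this; server URL and credentials are placeholders. The label is what makes Argo CD treat the secret as a cluster:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: spoke-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: spoke-cluster
  server: https://spoke.example.com:6443
  config: |
    {
      "bearerToken": "<token>",
      "tlsClientConfig": {
        "caData": "<base64-encoded CA certificate>"
      }
    }
```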

Example 3: Flux Monorepo

  • Repo pattern: Monorepo
  • Operator pattern: Instance per Cluster
  • Operator: Flux (in principle also Argo CD)
  • Bootstrapping: flux CLI
  • Linking: Flux Kustomization, Kustomize
  • Features:
    • Manage Flux itself via GitOps
    • Solution for cluster-wide resources
  • Source: fluxcd/flux2-kustomize-helm-example

Figure 3: Repo structure of flux2-kustomize-helm-example

After these non-trivial examples in the context of Argo CD, our practical insight into the world of Flux begins with a positive surprise: no preliminary considerations or external tools are required for installation here. The Flux CLI provides a bootstrap command that performs the bootstrapping of Flux in the cluster as well as the creation of repo structures. Additionally, Flux provides official examples that implement various patterns. We begin with the monorepo. Figure 3 shows the relationships. In the following, we describe them using the numbers in the figure:

  1. In the flux-system folder, the flux bootstrap command generates all the resources needed to install Flux, as well as a GitRepository and a Kustomization. For bootstrapping, flux applies these once imperatively to the cluster. The Kustomization then references its own parent folder production. From here on, everything is managed via GitOps.
  2. The flux-system Kustomization also deploys another Kustomization, infrastructure, pointing to the folder of the same name. This can be used to deploy cluster-wide resources such as ingress controllers and network policies.
  3. In addition, flux-system deploys the Kustomization apps. This points to the subfolder of the respective environment under apps, for example apps/production.
  4. In this folder there is a kustomization.yaml which includes the folders of all applications. In the folder of each application there is another kustomization.yaml, which compiles the actual resources for each application in an environment: as a basis, typical for Kustomize, there is a subfolder of base (for example apps/base/app1). This contains config that is the same in all environments. In addition, there is config that is specific to each environment; this is overlaid on top of base using patches from the environment's respective folder (for example apps/production/app1).
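The overlay mechanism from 4. can be sketched with a kustomization.yaml like this one (app and patch file names are examples):

```yaml
# apps/production/app1/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Config that is identical in all environments
  - ../../base/app1
patches:
  # Production-specific config overlaid on top of the base,
  # e.g. replica count or resource limits
  - path: deployment-patch.yaml
```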

Analogously to the apps folder, there is also one folder per environment in the clusters folder. So here Flux implements an instance per cluster pattern: one Flux instance per environment.

The public example itself only shows the management of a single application, and it is not obvious how more are added. Our real-world experience is that one Flux instance usually manages multiple applications. Hence, Figure 3 shows an extended variant that supports multiple applications, based on insights gained from an issue discussion. This structure can also be viewed and tried out on GitHub at schnatterer/flux2-kustomize-helm-example.

This structure has the disadvantage that all applications under apps are deployed from a single Kustomization per environment. For example, when using the graphical interface of Weave GitOps, all resources contained therein are displayed as one “app” (see Figure 4). This quickly becomes confusing. Analogous to the use of Applications with Argo CD (see previous examples), it is also conceivable to create one Kustomization per application instead of a single Kustomization in the apps.yaml file. This requires more maintenance, but provides clearer structures.
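One Flux Kustomization per application, as suggested above, could look roughly like this (a sketch; names, paths and the interval are examples):

```yaml
# One Kustomization per application instead of a single one in apps.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app1
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  # Points at a single application instead of the whole apps folder
  path: ./apps/production/app1
  prune: true
```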

Figure 4: Multiple applications in one Kustomization (screenshot from Weave GitOps)

The repo structure described here would also work for Argo CD with a few changes: instead of the Flux Kustomizations, Argo CD Applications would have to be used for linking. The kustomization.yaml files are understood by both tools.

Example 4: Flux repo per team

  • Repo pattern: Repo per team
  • Operator pattern: Instance per Cluster
  • Operator: Flux (in principle also Argo CD)
  • Bootstrapping: flux CLI
  • Linking: Flux Kustomization, Kustomize
  • Features: same as example 3
  • Source: fluxcd/flux2-multi-tenancy

Figure 5: Relationship of the GitOps repos at flux2-multi-tenancy

If you prefer to use one repo per team for your organization, you will find an official example in the Flux project. In Flux, this is referred to as “multi-tenancy”. Instead of the more general term “tenant”, we use the term “team” here, which fits the pattern.

Some points are already known from the previous example. These include bootstrapping using the clusters folder and the cluster-wide resources in the infrastructure folder. Figure 5 shows the relationships. In the following, these are described using the numbers in the figure:

  1. The Kustomization flux-system deploys a Kustomization tenants pointing to the folder with the same name.
  2. In this folder there is a kustomization.yaml which includes the folders of all teams in an environment, for example tenants/production/team1.
  3. In the folder of each team there is another kustomization.yaml which compiles the resources for each team in an environment. As a basis, typical for Kustomize, there is a subfolder of base (for example tenants/base/team1). This contains config that is the same in all environments. In addition, there is config that is specific to each environment; this is overlaid on the base using patches from the environment's respective folder (for example tenants/production/team1).
  4. Concretely, there can be several resources in the base folder, which are joined together via yet another kustomization.yaml.
  5. The team repo is included via the sync.yaml file, which contains a GitRepository and yet another Kustomization. Specific to each environment is then only the path within the team repo, which is overlaid by means of a patch from the respective environment folder, for example tenants/production/team1/path.yaml.

The structure of the team repo then corresponds exactly to that of the apps folder from the previous example. This structure can also be viewed and tried out on GitHub at schnatterer/flux2-multi-tenancy. As in the previous example, the disadvantage is that all applications in the team repo are deployed from one Kustomization per environment, which becomes confusing.
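The sync.yaml described in 5. contains, roughly, a GitRepository and a Kustomization like the following (a sketch; URLs, names and intervals are placeholders, and the service account restriction is an assumption based on Flux's multi-tenancy approach):

```yaml
# tenants/base/team1/sync.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: team1
  namespace: team1
spec:
  interval: 1m
  url: https://github.com/example/team1-gitops
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: team1
  namespace: team1
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: team1
  # Placeholder path; overridden per environment via a patch,
  # e.g. from tenants/production/team1/path.yaml
  path: ./
  prune: true
  # Restricts the team to the rights of its own service account
  serviceAccountName: team1
```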

Example 5: The Path to GitOps

  • Repo pattern: Monorepo
  • Operator pattern: Instance per Cluster
  • Operator: Argo CD (or Flux)
  • Bootstrapping: kubectl
  • Linking: Application, ApplicationSet, Kustomize
  • Features:
    • Solution for cluster-wide resources
    • Env per app pattern
    • Examples for Argo CD and Flux
  • Source: christianh814/example-kubernetes-go-repo

Figure 6: The repo structure at example-kubernetes-go-repo

In his book “The Path to GitOps”, Christian Hernandez, who has been involved with GitOps for years at Akuity, Red Hat and Codefresh, devotes a chapter to the topic of repo and folder structures. Figure 6 shows his example of a monorepo. It can also be viewed on GitHub, for both Argo CD and Flux. Some of the folder names differ between the book and the repo on GitHub. This fits with a tip from the book: the names are not important, but the concepts they represent are.

In this example, a lot of what is already known in the previous examples can be found again:

  • There is a folder apps for applications. In the repo at GitHub, this is called tenants, so here it is related to teams as in example 4.
  • For cluster-wide resources, there is a cluster-config folder, which is called core in the repo at GitHub.
  • In the folder bootstrap the bootstrapping of the operator is implemented. Here, similar to Autopilot (see example 1), Argo CD is installed directly from the public Argo CD repo over the Internet using Kustomize.

To avoid repetition, the interrelationships of the repo structure are not described in detail here. However, the following points are interesting:

  • This repo splits the config of the operator into two folders: The already known bootstrap folder and a components folder.
  • In addition, the entire structure resides in a folder cluster-XXXX, which suggests that the entire structure, including Argo CD, refers to one cluster. So the instance per cluster pattern is implemented here.
  • For promotion, an “Env per app” pattern is implemented here via Kustomize, see the apps folder in figure 6. This is described in the book, but is not implemented in the GitHub repo.

As mentioned, the same repo structure is also available for Flux. It is generally interesting that the same structure can be used, with minor changes, for both Argo CD and Flux. However, for Flux it is recommended to use the structure of the flux bootstrap command instead. Since this is implemented in Flux itself, it can be considered good practice for Flux and makes the setup easier to understand and maintain, for example when updating Flux.

Example 6: Environment Variants

Figure 7: The folder structure for gitops-environment-promotion

This last example differs from the previous ones in that it does not describe the structure of an entire repo, but only that of a single application. It can therefore be combined with the other examples. This example focuses on the implementation of a large number of environments. It shows how different environments (integration, load, prod, qa and staging) can be rolled out in different regions (asia, eu, us) without creating much redundant config. In total, there are 11 environments. Among the environments, a distinction is also made between prod and non-prod. This shows that Kustomize is well suited to implement such an extensive structure without redundancies. Although the example originates from the Argo CD ecosystem, it can also be used unchanged with Flux, since the linking is accomplished exclusively via kustomization.yaml.

Figure 7 shows the structure, simplified to five environments. The starting point is the subfolders of the envs folder, one per environment, for example envs/prod-eu. These subfolders would be included by an Argo CD Application or Flux Kustomization. In each subfolder there is a kustomization.yaml, which uses the base folder as a basis; it contains config that is identical in all environments. In addition, the config of the variants (folder variants) is included, for example eu and prod. Finally, the config specific to each environment is added by means of patches from the respective subfolder of envs, for example envs/prod-eu. In principle, this example could also be implemented with Helm, but that would be more complicated and would require special CRDs instead of the universal kustomization.yaml.
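The composition of one environment could be sketched as follows, assuming the variants are implemented as Kustomize components (folder names follow Figure 7, the patch file name is an example):

```yaml
# envs/prod-eu/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Config that is identical in all environments
  - ../../base
components:
  # Config shared by all "prod" environments and all "eu" environments
  - ../../variants/prod
  - ../../variants/eu
patches:
  # Config specific to this one environment
  - path: replicas.yaml
```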

This example also provides an idea for simplifying the promotion: in each folder there are many YAML files, one file per property. The advantage is that the promotion can then be done by simply copying one file. There is no need to cut and paste text, which simplifies the process and reduces the risk of errors. Also, the diffs are easier to read.
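For illustration, such a per-property file could be a strategic-merge patch that only sets the image tag; promotion then amounts to copying the file (names and the tag are examples):

```yaml
# envs/staging-eu/version.yaml - the only file that changes on a release
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: example/my-app:1.2.3
```

Promotion from staging to production would then be, for example, `cp envs/staging-eu/version.yaml envs/prod-eu/version.yaml`, committed via the usual GitOps flow.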