Empowering

We define this quadrant as the set of techniques and technologies that give developers using our infrastructure a smooth experience, making them more productive.

Adopt

50

ArgoCD

ArgoCD is a GitOps Continuous Delivery (CD) platform that enables the declarative deployment of applications in Kubernetes clusters.

ArgoCD is an open-source Continuous Delivery (CD) tool specific to Kubernetes. Following a GitOps approach, it enables resources to be deployed in clusters, using one or more Git repositories as the source of truth. The use of ArgoCD is centered around a Custom Resource called Application.

In an Application manifest, you can define many parameters, such as the source repository for deployed resources, the destination Kubernetes namespace and cluster, and the reconciliation strategy to be adopted. ArgoCD supports several sources for deploying resources in Kubernetes: Helm charts, Kustomize applications, Jsonnet files, or simple manifests. 
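As an illustration, a minimal Application manifest might look like this (the repository URL and names are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                      # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app.git   # placeholder repository
    targetRevision: main
    path: deploy/helm               # a Helm chart, Kustomize app, or plain manifests
  destination:
    server: https://kubernetes.default.svc           # the in-cluster API server
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes made in the cluster
```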

For more advanced requirements, the Custom Resource ApplicationSet and its associated controller can be used to generate multiple ArgoCD Applications from a single manifest. Using ApplicationSet generators and templates, it is then possible to deploy multiple applications in multiple Kubernetes clusters, all within a centralized specification!
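For instance, a hypothetical ApplicationSet using a list generator to deploy the same application to two clusters could be sketched as follows (all names and endpoints are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app-all-clusters
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - cluster: staging
        url: https://staging.example.com:6443      # placeholder API endpoints
      - cluster: production
        url: https://production.example.com:6443
  template:
    metadata:
      name: 'my-app-{{cluster}}'   # one Application is generated per element
    spec:
      project: default
      source:
        repoURL: https://github.com/example/my-app.git
        targetRevision: main
        path: deploy
      destination:
        server: '{{url}}'
        namespace: my-app
```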

ArgoCD's appeal lies in the fact that managing applications and resources in Kubernetes by hand can lead to human error or inconsistency. ArgoCD is simple to use and configure: what is defined in your Git repository represents what is deployed in your Kubernetes cluster. ArgoCD also checks the status of resources at regular intervals, automatically reconciling them if necessary. As a result, it's a reassuring tool for developers and administrators alike.

ArgoCD is also designed to be modular: beyond Kubernetes resources, you can manage and automatically update the container image versions deployed in your cluster by adding ArgoCD Image Updater, which follows the same GitOps principles. Using ArgoCD Notifications, you can also get monitoring and alerting on the deployment of your applications, although this feature is still immature in our view.


With just a few clicks, we quickly understand that this Application deploys several resources, such as a Service or a Deployment. In turn, a Deployment triggers the creation of one or more Pods, which is also reflected in the interface. You can even click on the Pod in question to view its logs! Practical, isn't it?


For these reasons, at Padok, ArgoCD has become the standard when deploying applications in Kubernetes. For many of our customers, implementing ArgoCD has enabled developers to gain a foothold in the world of infrastructure, using a reassuring environment and day-to-day use that is proving to be simple.

51

ELK

The ELK (Elasticsearch, Logstash, Kibana) stack offers a complete solution for managing the logs and performance of your applications.


The ELK stack (Elasticsearch, Logstash, Kibana) is the popular suite of open-source tools developed by the Elastic team to store, index, visualize and analyze the logs and performance of your applications and tools.


The stack addresses many of the challenges of modern architectures and applications:

  • Centralization of application and infrastructure logs
  • Real-time visualization
  • Application and tool performance monitoring (APM)
  • Business dashboards built with Kibana
  • Anomaly detection and alerting

The suite can be complex to deploy and maintain. Long-term expertise in the various tools is required to update and evolve the platform in line with your needs. In addition to the maintenance cost, the complete suite may require a lot of resources (CPU, RAM, and disk space) depending on the quantity (history) of logs you will process. 


In this context, we cannot recommend highly enough that you use the ELK Kubernetes operator to deploy the stack. It makes management much easier, especially for small-scale deployments. In the long term, you'll need solid technical skills to set up advanced log management configurations (rotation, backup, purging, etc.).
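As a sketch of the operator approach, assuming the official ECK operator is installed in the cluster, deploying Elasticsearch and Kibana boils down to a couple of custom resources (version and sizing are illustrative):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: logging
spec:
  version: 8.13.4        # illustrative version
  nodeSets:
  - name: default
    count: 3             # illustrative sizing
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: logging
spec:
  version: 8.13.4
  count: 1
  elasticsearchRef:
    name: logging        # wires Kibana to the Elasticsearch cluster above
```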


It's well worth the effort because once in place, it's a formidable tool that will improve your developers' experience and efficiency in producing higher-quality applications.


The ELK stack is a powerful, technically demanding tool that delivers in the long term, provided you have the necessary technical skills. It's a more flexible and scalable choice than cloud providers' native tools, which can be an attractive alternative at the outset but quickly become limiting.

52

Helm

Helm is a packaging solution for deploying containerized applications in Kubernetes.


When you use Kubernetes to deploy your applications, the number of manifests (Kubernetes resource configuration files) written for the resources to be created mechanically increases. Several complications will naturally emerge:


  • The code becomes very large and costly to maintain
  • Managing the dependencies of your Kubernetes application, such as associating a Pod with your Service, requires you to create some logic
  • The same applies to managing application dependencies, such as running an application alongside Redis, a database, or a reverse proxy
  • Versioning a set of Kubernetes resources as a single application is very complex
  • Deploying the same application in several environments requires code duplication

Helm is a tool capable of solving all these problems. Its major strength lies in using the Go language's templating engine. Thanks to this engine, the creation of Kubernetes manifests is transformed into the application of values (variables depending on the deployment context) to templates (Kubernetes manifests enhanced with templating tags).


The set of templates and default values file constitutes a chart. A chart is versioned and can be deployed in a Kubernetes cluster (a Helm release is then deployed). Other charts can be declared as dependencies, enabling complete application stacks to be deployed.
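A minimal sketch of the idea: a template consumes values, and the same chart can be deployed with different values per environment (all names are illustrative):

```yaml
# values.yaml — default values (overridable per environment)
replicaCount: 2
image:
  repository: nginx               # illustrative image
  tag: "1.25"
---
# templates/deployment.yaml (excerpt) — templating tags consume the values
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web   # the release name is injected at deploy time
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: web
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Running `helm install my-release ./chart --values values-production.yaml` then renders the templates with environment-specific values before applying them to the cluster.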


The Helm CLI lets you deploy charts from local sources or remote ones, as Helm supports chart registries. It enables you to deploy specific versions of your applications, update them, or roll them back. We use charts to deploy Prometheus/Grafana, Cert-Manager, Cluster Autoscaler, and Nginx Ingress Controller, among others.


At Padok, we created a Helm chart library for our customers' most frequently used applications. It shortens the time it takes to set up complex sets of applications. Helm is, therefore, a tool we recommend to all Kubernetes users.

53

Integrated CI/CD

Integrated CI/CD includes GitHub Actions and GitLab CI services. They are incorporated directly into the eponymous platforms, as close as possible to the code.


CI/CD (Continuous Integration / Continuous Delivery or Continuous Deployment) is the practice of automating the steps involved in putting an application into production. This includes testing, build, release, and deployment. Jenkins, CircleCI, and TeamCity are examples of tools used to build CI/CD pipelines. We'll refer to them here as "external" because they don't host the application's source code on which the pipelines are based.


On the other hand, GitHub and GitLab are today's biggest platforms for collaborative Git development, with around 130 million users combined. Every developer visits these platforms daily to code new applications, approve pull requests, or triage issues.


Both platforms now offer their own CI/CD engines: GitHub Actions and GitLab CI. Using YAML files, it's very quick to create automation pipelines that will save a lot of development time. What's more, the free tier of these platforms is quite generous, so there's no need to pay or create a dedicated infrastructure if your development team is small.
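As an illustration, a minimal GitHub Actions workflow lives next to the code it tests (the build command is a placeholder):

```yaml
# .github/workflows/ci.yml
name: ci
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fetch the repository
      - run: make test              # placeholder build/test command
```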


Today, these tools probably include all the features you could possibly need. You can always subscribe to a premium access package to access advanced functions if you have specific requirements.


Beyond that, both solutions support self-hosted runners. The GitLab CI runner system in Kubernetes is particularly well-proven: it makes it possible to rapidly build a highly scalable job execution platform. However, we caution you against hosting your own runners if your needs don't lend themselves to it. Maintenance and computing costs can skyrocket. Moreover, as a key function in the application lifecycle, an unstable CI/CD can make life very difficult for your developers.


Despite this, we recommend that all our customers turn to one of these two platforms to create their CI/CD. Their close integration with the code makes them highly versatile and greatly reduces the feedback loop.

Trial

55

GitHub Copilot

56

Loki

Loki is the Grafana stack's native logging tool.


Loki is a logging tool developed by Grafana Labs. It enables you to collect logs from your applications deployed in Kubernetes and upload them to Grafana for querying.


This logging tool is very easy to install in your Kubernetes clusters. It's already packaged in the community's prom-stack chart: all you have to do is activate one option, and you're done, with virtually no additional configuration required. If you're already using Grafana, you can create complete dashboards, mixing metrics and logs from your applications for monitoring and debugging... Nothing could be better than making your developers completely autonomous in managing their applications!


Setting up long-term log retention is also possible by sending history to object storage buckets. However, the tool is not optimized for searching old logs: searching the logs directly accessible in your Kubernetes cluster is more efficient than searching previously archived logs, which need to be fetched and decompressed. Moreover, the language used to query logs in Loki, LogQL, is not the easiest to master. Its syntax differs from traditional query languages and requires a clear understanding of range vectors.
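To give an idea of the syntax, here is a sketch of a LogQL query combining a label selector, a line filter, and a range vector (label names are illustrative):

```logql
# per-application error rate over the last 5 minutes
sum by (app) (rate({namespace="production"} |= "error" [5m]))
```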


If you're already using Grafana for dashboarding and are looking for an easy-to-implement solution for retrieving logs from your Kubernetes clusters for debugging purposes, Loki is for you! Note that the tool isn't designed for searching long-term log history, and its querying system requires a bit of learning.

57

Platform Engineering

Platform Engineering is a way of putting the DevOps philosophy into practice 😉

The original promise of the Cloud was to simplify system administration by abstracting its complexity: if your team included a resourceful backend developer, then NoOps was almost a reality! But the growing number of services (over 200 on AWS) and technologies has created a demand for specific skills. No worries, you might say: one or more DevOps experts integrated into the development teams, and you're all set!


However, DevOps is now reaching its limits and is unfortunately (re)becoming the delivery bottleneck, not least because:

  • As mentioned above, technologies and services are increasingly complex to use
  • To keep the adage "You build it, you run it" true, developers need expertise that often exceeds their field of competence

Some Digital Factories have an approach that comes close to Platform Engineering, as they produce common tools for all their group's business units. However, this is difficult to pull off, because they take only superficial account of developers' needs and are incentivized to standardize and secure on a massive scale.


For us, Platform Engineering means considering infrastructure as a product for developers. This has 3 implications: 

  • Collect user needs and feedback
  • Identify the critical jobs to be done and measure the infrastructure's performance against them
  • Rethink the operating model. We advise you to split the DevOps team in two: an Enabler team delivering new products to the tech teams, and an Operator team ensuring the reliability and consistency of the underlying platform.


At Padok, we believe that DevOps is at the developer's service, and without being revolutionary, Platform Engineering helps reinforce this mindset. The community needs to develop or adapt tools from the product world in order to move from a beautiful concept to a reality. A word of warning, however: Platform Engineering only becomes worthwhile once your Dev and Ops teams reach a minimum size.

58

Preview Environments

Preview environments allow developers to test changes in a real environment to reduce the risk of error and increase efficiency.


Preview environments allow developers to preview changes to an application before deploying it in production. These environments are often used to test new features or code updates, including interoperability between the subsystems involved, and ensure that everything works as expected. This minimizes the risk of errors and the time needed to correct any problems. 


These environments also improve the efficiency of developers, who can work simultaneously on several functionalities in isolated environments, which differs from a "classic" staging environment. They can be used to facilitate collaboration between different team members. They enable developers to easily share and test their changes before publishing.


These environments can be created in a variety of ways. Here's the method Padok recommends today: 

  1. Each time a PR is opened on an application, we deploy the version from the branch alongside the "stable" version
  2. For each new version of the application deployed, a header is defined, enabling requests to be routed to it
  3. To test a PR, all you have to do is send requests with this header
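As a sketch of the header-based routing step, assuming a service mesh such as Istio is available, the rule could be expressed with a VirtualService (the service names and the header are hypothetical):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app                       # hypothetical service name
spec:
  hosts:
  - my-app.staging.example.com       # hypothetical host
  http:
  - match:
    - headers:
        x-app-version:               # hypothetical routing header
          exact: pr-1234             # the PR's preview version
    route:
    - destination:
        host: my-app-pr-1234         # Service for the branch version
  - route:                           # default route: the "stable" version
    - destination:
        host: my-app-stable
```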

This works perfectly in a Kubernetes cluster but may be more complicated to implement without. And while this method works perfectly for applications running over HTTP, we don't yet have a solution for event-driven applications.


Preview environments are beneficial for increasing the efficiency of your organization's developers. However, they are a little complicated to set up and maintain. We therefore recommend them mainly for teams undergoing significant expansion.

Assess

54

Sentry

Sentry is a tool that helps developers track application errors.


Sentry is a tool that helps developers track application errors. Using its web interface, they can easily identify and correct bugs in their code to improve the quality of the services they develop. To do this, simply install the Sentry SDK corresponding to your programming language, such as Node.js, Python, Ruby, Go, Android, and many more. The SDK will capture errors and exceptions and provide detailed information on the root cause of each error.


For some time now, Sentry has been gaining in popularity for its ability to provide actionable information to improve code quality and application reliability. It is also used to track application performance metrics in real time, which can help identify SPOFs and scalability problems.


To integrate Sentry into your ecosystem, you have two options: use the SaaS solution (for a fee) or install it self-hosted on your Kubernetes cluster, for example. We strongly recommend using the SaaS solution, which is ultimately the most cost-effective approach. 


Indeed, Sentry's application architecture is highly complex, with no fewer than 10 components to manage, even though it can be deployed with a Helm chart developed by the community (and not by Sentry). To provide an acceptable level of service availability, you'll need to devote a great deal of time and energy to working around bugs in the product that are known to the vendor. Many of us have pulled our hair out over this, believe us!


At Padok, we therefore recommend Sentry's SaaS solution to help your developers improve application quality and reliability on a daily basis.

59

Argo Rollouts

Argo Rollouts makes it easy to set up complex deployment modes (Blue-Green / Canary).


Argo Rollouts, like Flagger, is a solution for what is commonly known as "Blue-Green Deployment" or "Canary Release." Before these tools existed, such complex deployment methods were not within reach of all organizations, as they required developing a custom tool and/or implementing complex workflows via tools such as Jenkins.


Like all the tools in the Argo ecosystem, its implementation requires Kubernetes and new resources (the Rollout CRD). The tool wisely takes advantage of the support for CRD sub-resources introduced in Kubernetes 1.10 to provide a new resource comparable in every way to a Deployment (using the /scale sub-resource) but with a more complex update strategy specification.


You can now write "staged deployments" using:

  • Scaling steps (increasing the number of pods running the new version)
  • Pause steps
  • Background analysis to determine whether the deployment should continue (e.g., a query to Prometheus)
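The stages above map directly onto the steps of a canary Rollout; here is a hypothetical excerpt (weights, durations, and the analysis template name are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app                 # hypothetical name
spec:
  replicas: 5
  strategy:
    canary:
      steps:
      - setWeight: 20          # scale the new version up to 20% of the pods
      - pause: {duration: 10m} # break time before going further
      - analysis:
          templates:
          - templateName: success-rate   # e.g., backed by a Prometheus query
      - setWeight: 100         # promote the new version fully
  # the selector and pod template are written exactly as in a Deployment
```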

For complex deployment modes, we prefer Flagger, as we have tested it more on our projects. However, we believe that Argo Rollouts has a future in addressing this issue. Both tools benefit from integration with their respective ecosystems (FluxCD for one, ArgoCD for the other).


However, implementing this kind of deployment mode requires a high level of maturity and a very good observability system.

60

Backstage

Backstage is a Swiss army knife that, if used correctly, can become the central point of your Internal Developer Platform (IDP).


Backstage is a technology open-sourced in 2020 by Spotify, aiming to create an extensible developer portal for your in-house platform.


The idea behind Backstage is to become the mandatory single entry point for interacting with your internal platform. Even without additional plugins, Backstage already offers several interesting features:


  • TechDocs, which aggregates all the Markdown documentation in your Backstage instance
  • Software Templates, for creating boilerplates to start new services (e.g., repository initialization)
  • Software Catalog, for referencing the various services/utilities you use or develop; it can also aggregate all the OpenAPI or Swagger specs for your web services
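Services are referenced in the Software Catalog through a descriptor committed alongside the code; a minimal sketch (names and owner are illustrative):

```yaml
# catalog-info.yaml — committed at the root of the service's repository
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-service              # illustrative name
  description: Example web service
  annotations:
    backstage.io/techdocs-ref: dir:.   # serve this repo's docs via TechDocs
spec:
  type: service
  lifecycle: production
  owner: team-platform          # illustrative owning team
```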

Backstage's main strength lies in its extensibility. Numerous plugins have been developed by the community (e.g., integration with ArgoCD), and it's entirely possible to imagine developing your own plugins to satisfy internal needs.


However, it isn't suitable for every organization, and only becomes useful once your Tech/R&D team exceeds 50 people. This "critical size" corresponds to the point at which keeping track of all your platform's evolutions starts to become very complex.


The first benefit we've observed with Backstage is the easier onboarding of developers and DevOps staff, thanks to the centralization of technical documentation in a single location. This significantly improves the key "Time to first PR" KPI for large technical teams.


We're still observing Backstage's gains on tech teams of different sizes, which is why it remains in the "Assess" dial for the time being.

61

Kubernetes native CI/CD

Kube native CI/CDs are CI/CD tools fully integrated with Kubernetes.


The extensibility of Kubernetes via CRDs (Custom Resource Definitions) has driven innovation in many fields, and CI/CD is no exception. Even if ArgoCD could fit into this category, it currently specializes in deployment on Kubernetes and doesn't address the CI half of this blip.


So today, we're seeing the emergence of Kubernetes-based tools that come close to the workflow engines we use or have used in the past (e.g., Jenkins), allowing you to declaratively define CI/CD pipelines or any other workflow you might need.


We will look at 2 tools that don't quite have the same target audience: Tekton and Argo Workflows.


Tekton is a Kubernetes extension (operator) that lets you define workflows via new resources (Task and Pipeline) and trigger these workflows via any type of event (EventListener and Trigger). Its simple integration into OpenShift makes it a widely used tool for CI/CD, and it can be seen as the Kubernetes-native successor to Jenkins.
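As a sketch, a Tekton Task and the Pipeline that references it could look like this (the image and commands are illustrative):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: run-tests
spec:
  steps:
  - name: test
    image: golang:1.22          # illustrative build image
    script: |
      go test ./...
---
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: ci
spec:
  tasks:
  - name: tests
    taskRef:
      name: run-tests           # reuses the Task defined above
```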


Argo Workflows achieves much the same thing as Tekton. Today, it is mainly adopted by teams working on data processing (ETL) or machine learning.


We're not yet completely convinced that these approaches can fully replace the tools integrated into your git providers (e.g., GitHub Actions, Gitlab-CI), given their already established adoption. But if you have a heterogeneous infrastructure (e.g., VM-Based, Kubernetes, etc.), these tools will enable you to homogenize the way you test and deploy.

Hold

62

FluxCD

63

Jenkins

Jenkins is a very generic workflow orchestrator that allows a lot of flexibility but requires a lot of customization.


Jenkins is, first and foremost, an orchestrator of automated workflows. It allows you to declare projects in which you can define sequences of actions, which you can then execute. It is also possible to visualize the evolution of status over time ("success," "failure," etc.).


Both the interface and the mechanisms available for workflows are highly customizable via plugins. This lets you use matrix variables for actions, project "weather" indicators, dashboards, and even general-purpose scripts to perform actions on your test platforms.


Jenkins also integrates natively with the Groovy language to take workflow definition further.


Among the many possibilities available, you can define CI/CD pipelines via integration plugins with code version managers, for example.
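For example, a minimal declarative Jenkinsfile pipeline could be sketched as follows (stage names and shell commands are illustrative):

```groovy
// Jenkinsfile — declarative pipeline syntax
pipeline {
    agent any                           // run on any available executor
    stages {
        stage('Build') {
            steps { sh 'make build' }   // illustrative build command
        }
        stage('Test') {
            steps { sh 'make test' }    // illustrative test command
        }
    }
}
```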


However, we recommend that our customers turn to integrated CI/CDs (e.g., GitHub Actions or GitLab-CI) instead. These natively provide all the functionality required for modern web development and offer a more turnkey experience. Jenkins, by contrast, involves a lot of initial customization effort, even if it proves stable in use.


In addition, CI and/or CD tools specific to certain environments (e.g., ArgoCD for deployments in Kubernetes) reduce the interest of a generalist tool like Jenkins, which also requires its own maintenance.