
Evaluate Argo
Closed, ResolvedPublic


I found out about this project while attending the Kubernetes Office Hours. Two of its developers were on the panel.

It seems similar to Tekton in that it defines a handful of k8s CRDs. However, it appears to have better tooling (CLI and web UI). We'll see.

Event Timeline

zeljkofilipin renamed this task from Evaluate ArgoCD to Evaluate Argo CD.Mar 22 2019, 1:51 PM
dduvall renamed this task from Evaluate Argo CD to Evaluate Argo Workflow.Mar 22 2019, 5:51 PM

First, some clarification about the various Argo projects.

From IRC (2019-03-21 UTC-7):

2:59 PM <marxarelli> Dan Duvall argo projects are starting to make sense finally
3:00 PM argo cd is for "the tail end of the pipeline" where an image has already been published and you want teams to be able to easily control the deployment
3:01 PM argo-events is for consuming events from external systems (e.g. could be gerrit event stream) and triggering things like:
3:02 PM argo-workflow which is for running workflows (pipeline like workloads) on k8s
3:03 PM argo-ci looks to be sort of defunct
3:04 PM but events + workflow seems to constitute what ci would typically do, and is quite flexible
3:04 PM anyway, i'm thinking out loud... :)
3:05 PM took a while to figure all these projects out!
3:07 PM conceptually anyway. practically it set the record in my evals for ease of setup and getting blubber built, and there's also a ui, so that's kinda neat
3:09 PM now time to synthesize into my writeup...


Argo comprises a few different projects that have well-defined concerns and would work well together to provide a fully functional CI system. Similar to Tekton (see T217912: Evaluate Tekton), it provides Kubernetes CRDs that delegate workload scheduling and execution to k8s. Unlike Tekton, however, it provides a nice CLI (argo), a specialized controller for workflow triggering, a separate project for consuming and propagating external events (Argo Events), and a simple but functional web UI. Benefits and drawbacks include:

  • Benefit: It's ridiculously easy to get installed and running. Getting it installed and Blubber building on it took only about 15 minutes or so.
  • Benefit: As a k8s native solution, it's straightforward to operate given you have knowledge of k8s and kubectl.
  • Benefit: The Workflow CRD that Argo provides is simple to understand with its concepts of inputs/outputs and containerized steps, and supports serial or DAG style execution. I could see these workflow manifests maintained either directly by teams or generated from our .pipeline/config.yaml.
  • Benefit: Very little overhead. Again, like Tekton, these CRDs essentially spin off Pods and k8s does the workload scheduling. In addition, Argo supplies two controllers, one for workflow triggering and integration with Argo Events, and one for the UI.
  • Benefit: It supplies a web UI. Granted, it's a very simple read-only UI, but I actually quite like that it provides the things that are needed 99% of the time and nothing else: workflow build status and history, logs, and links to artifacts.
  • Benefit: The team that maintains it seems invested and responsive so far. They are writing a lot of code, giving talks, and participating in k8s office hoursβ€”which is where I actually discovered the project. I joined the Slack channel to ask some questions and they were respectful, helpful, and responsive.
  • Benefit: The Argo Events gateways provide well defined interfaces for consumption of events from external systems. According to the developers, we have a few decent options for evented Gerrit integration, using either webhooks, kafka, or a custom gateway that would maintain a connection over SSH.
  • Benefit: Multiple external artifact stores are supported and integrated into the UI. I also consider the decoupled design a benefit.
  • Drawback: The web UI is limited. If complete control over workflow builds (CRUD operations) is what we need, we would need to modify the existing UI or create our own.
  • Drawback: Debugging of operational problems might be difficult for developers given Argo's k8s-native model, though I'm not sure debugging of operational issues by end users is really a requirement.
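To illustrate the serial/DAG-style execution mentioned above, here is a minimal sketch of a Workflow using Argo's dag template type. This is not taken from the evaluation itself — the task names, image, and messages are hypothetical placeholders:

```yaml
# Hypothetical sketch of DAG-style execution in an Argo Workflow.
# Task names and the container image are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-sketch-
spec:
  entrypoint: diamond
  templates:
  - name: echo
    inputs:
      parameters:
      - name: message
    container:
      image: alpine:3.9
      command: [echo, "{{inputs.parameters.message}}"]
  - name: diamond
    dag:
      tasks:
      - name: lint
        template: echo
        arguments:
          parameters: [{name: message, value: lint}]
      - name: test
        template: echo
        arguments:
          parameters: [{name: message, value: test}]
      - name: build
        dependencies: [lint, test]  # runs only after both lint and test succeed
        template: echo
        arguments:
          parameters: [{name: message, value: build}]
```

The dag template declares dependencies explicitly, so independent tasks (lint, test) run in parallel while build waits on both; the steps template type covers the serial case.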

I definitely recommend Argo for further consideration.


(Note that for the initial evaluation documented here I skipped artifact setup, though I did go back and complete that part as well and it didn't require much more effort.)
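For reference, the artifact setup amounts to pointing the workflow-controller configmap at an artifact store. A sketch assuming a local minio deployment — the bucket, endpoint, and secret names here are hypothetical, not the ones from my setup:

```yaml
# Sketch: configuring a default S3-compatible (minio) artifact repository
# for the workflow controller. Bucket/endpoint/secret names are assumed.
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  config: |
    artifactRepository:
      s3:
        bucket: argo-artifacts
        endpoint: minio:9000
        insecure: true
        accessKeySecret:
          name: minio-cred
          key: accesskey
        secretKeySecret:
          name: minio-cred
          key: secretkey
```

With this in place, workflow input/output artifacts are stored in the bucket and linked from the UI.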

Following the demo, installation was basically running kubectl apply. There's also a Helm chart, but it didn't immediately play nicely with my minikube setup, so I skipped it.

Additionally, I installed the argo command from Homebrew.

$ brew install argoproj/tap/argo
$ minikube start
πŸ˜„  minikube v0.35.0 on darwin (amd64)
πŸ”₯  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
πŸ“Ά  "minikube" IP address is
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.13.4 ...
πŸš€  Launching Kubernetes v1.13.4 using kubeadm ...
βŒ›  Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns
πŸ”‘  Configuring cluster permissions ...
πŸ€”  Verifying component health .....
πŸ’—  kubectl is now configured to use "minikube"
πŸ„  Done! Thank you for using minikube!
$ kubectl create ns argo
namespace/argo created
$ kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo/stable/manifests/install.yaml
serviceaccount/argo created
serviceaccount/argo-ui created
...
configmap/workflow-controller-configmap created
service/argo-ui created
deployment.apps/argo-ui created
deployment.apps/workflow-controller created
$ kubectl create rolebinding default-admin --clusterrole=admin --serviceaccount=default:default # <- may not have been necessary for the limited blubber workflow
rolebinding.rbac.authorization.k8s.io/default-admin created
$ kubectl -n argo get pods
NAME                                   READY   STATUS    RESTARTS   AGE
argo-ui-f499c69d6-2tfgf                1/1     Running   0          2m48s
workflow-controller-67bdd477b9-vqqcn   1/1     Running   0          2m48s

Building Blubber

I did another Makefile + manifest for this one to show what was needed for the above installation and building Blubber.

all: build

install:
	kubectl create ns argo
	kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo/stable/manifests/install.yaml
	kubectl create rolebinding default-admin --clusterrole=admin --serviceaccount=default:default
	kubectl patch svc argo-ui -n argo -p '{"spec": {"type": "LoadBalancer"}}'

uninstall:
	kubectl delete rolebinding default-admin
	kubectl delete -n argo -f https://raw.githubusercontent.com/argoproj/argo/stable/manifests/install.yaml
	kubectl delete ns argo

build:
	argo submit --watch blubber.yaml

clean:
	kubectl delete wf --all
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: build-blubber-
spec:
  entrypoint: git-clone
  templates:
  - name: git-clone
    inputs:
      artifacts:
      - name: blubber-source
        path: /src
        git:
          repo: https://gerrit.wikimedia.org/r/blubber
          revision: master
    container:
      image: golang:1.11
      command: [make]
      workingDir: /src
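If an artifact store is configured, the template above could also export its build result. This is a hypothetical addition to the git-clone template, not part of my actual manifest — the artifact name and binary path are guesses:

```yaml
# Hypothetical outputs section for the git-clone template above;
# the artifact name and path are assumed, not taken from the actual manifest.
    outputs:
      artifacts:
      - name: blubber-binary
        path: /src/bin/blubber
```

The controller would then upload the file at that path to the configured artifact repository and link it from the workflow in the UI.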

CLI Demo

If you want to poke around the UI:

$ minikube service -n argo --url argo-ui
dduvall renamed this task from Evaluate Argo Workflow to Evaluate Argo.Mar 22 2019, 8:07 PM