Back when we first discussed moving the Toolforge CLI to an API architecture, there was a decision request about this, and another discussing what programming languages we should use for different components. However, the overarching architecture of a "Toolforge 2.0" wasn't significantly explored, perhaps due to our focus on the build service beta.
Recently, our efforts have expanded beyond the new build system, including a system for environment variables and secrets, a potential deploy subcommand, and more. A pattern is emerging, with each new subsystem being developed as an independent CLI-API pair. I’d like to argue for a simplified architecture that combines the benefits of a backend powered by microservices with the simplicity of a monolithic frontend. In the foreseeable future, this might extend to multiple frontends, if we aim to create a Toolforge UI.
The (simplified) diagram below shows how I imagine this might look:
CLI
- One unified, user-facing CLI presenting a monolithic frontend, which would eventually absorb all the other pre-build-system CLIs (jobs, webservice)
- A single package/binary
- Should follow good practices like modularity and separation of concerns, with the codebase organized so that each functionality lives in its own module or package: all build-related commands in one module, all commands for deploying web applications in another, and so on. This would make development, testing, and maintenance easier
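To make the "monolithic but modular" idea concrete, here is a minimal sketch using only the standard library's argparse. The subsystem names mirror the ones discussed in this post, but the registration functions and handlers are hypothetical, not the actual Toolforge CLI code:

```python
# Sketch: a single CLI binary where each subsystem registers its own
# command group. In a real codebase, register_build and register_webservice
# would live in separate modules/packages.
import argparse


def register_build(subparsers):
    """Build-related commands, contributed by a 'build' module."""
    p = subparsers.add_parser("build", help="build images from source")
    p.add_argument("source_url")
    p.set_defaults(handler=lambda args: f"building {args.source_url}")


def register_webservice(subparsers):
    """Webservice commands, contributed by a 'webservice' module."""
    p = subparsers.add_parser("webservice", help="manage web services")
    p.add_argument("action", choices=["start", "stop", "status"])
    p.set_defaults(handler=lambda args: f"webservice {args.action}")


def make_cli():
    parser = argparse.ArgumentParser(prog="toolforge")
    subparsers = parser.add_subparsers(dest="command", required=True)
    # Each subsystem module contributes its commands to the one parser.
    register_build(subparsers)
    register_webservice(subparsers)
    return parser


if __name__ == "__main__":
    args = make_cli().parse_args()
    print(args.handler(args))
```

Each module can be developed and tested in isolation, while users install and invoke a single `toolforge` command.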
Gateway API
- Acts as a single entrypoint for client-side applications (currently only the CLI, but why not also a UI in the future?)
- Decouples the client-side applications from the backend microservices, delegating all business logic to those services
- Small and focused codebase, mainly dealing with request routing, applying cross-cutting concerns, and sometimes aggregating responses from downstream services
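The core of such a gateway is little more than a routing table from URL prefixes to backend services. Here is a sketch of that logic; the service names and internal URLs are invented for illustration:

```python
# Sketch: map a URL prefix to the backend microservice that owns it.
# In the real gateway, the resolved backend would receive the proxied
# HTTP request; here we only show the routing decision.
ROUTES = {
    "/build": "http://build-api.internal",
    "/jobs": "http://jobs-api.internal",
    "/webservice": "http://webservice-api.internal",
}


def resolve_backend(path):
    """Return (backend_base_url, remaining_path) for an incoming request."""
    for prefix, backend in ROUTES.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend, path[len(prefix):] or "/"
    raise LookupError(f"no backend registered for {path}")
```

Cross-cutting concerns (authentication, rate limiting, logging) would wrap around this single resolution point instead of being reimplemented in every service.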
Some benefits of this design:
- Simplicity for Clients: Clients can treat the API gateway as a single point of interaction, without needing to know the details of the microservices architecture behind it.
- Cross-Cutting Concerns: The API gateway can handle things like authentication, rate limiting, request logging, etc., which reduces duplication since these things would otherwise need to be handled in each microservice.
- Isolation of Microservices: The API gateway can help protect the microservices by validating requests before passing them on, ensuring that only valid, authorized requests reach the microservices.
- Aggregation of Responses: If a client request needs data from multiple microservices, the API gateway can call all the necessary services and aggregate the responses into one. For instance, if we implement toolforge deploy, we’d need to make calls to both the build and the webservice microservices.
- Routing and Versioning: The API gateway can handle the routing of requests to different versions of microservices, or to different instances for load balancing and fault tolerance.
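The aggregation benefit can be sketched for the hypothetical `toolforge deploy` mentioned above: the gateway fans out to the build and webservice backends and returns one merged payload. The two client functions below are stand-ins for real HTTP calls to those services:

```python
# Sketch: the gateway aggregates two backend calls behind one endpoint.
# start_build and restart_webservice stand in for HTTP calls to the
# build and webservice microservices.
def start_build(tool_name):
    return {"build_id": f"{tool_name}-build-1", "status": "queued"}


def restart_webservice(tool_name, image):
    return {"webservice": tool_name, "image": image, "status": "restarting"}


def deploy(tool_name):
    build = start_build(tool_name)
    webservice = restart_webservice(tool_name, image=build["build_id"])
    # The client sees a single aggregated response, not two backend ones.
    return {"tool": tool_name, "build": build, "webservice": webservice}
```

The client issues one request and never learns that two services were involved.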
In this setup, each of our subsystems (build, webservice, jobs framework, etc.) would be independent microservices, each with its own separate responsibility, just as they are now. The API gateway would route requests from the clients to the appropriate service. This allows each system to be developed, deployed, scaled, and updated independently, while the clients only need to interact with the main API gateway. All the CLIs would be consolidated into one for a simplified development and distribution experience.
Tl;dr: I’m advocating for a monolithic (but modular) Toolforge CLI, and the introduction of a Gateway API to deal with the necessary decoupling of the client-side applications from the backend microservices.
