
Buildpack pipeline: do not require a 'web' command to be present
Closed, Resolved (Public)

Description

Building an image without a web command in Procfile fails with:

[taavi@toolsbeta-sgebastion-05 ~] $ kubectl -n image-build logs test-buildpacks-pipelinerun-jv2ck-build-from-git-pod -c step-export
Warning: Buildpack 'heroku/python@0.0.0' requests deprecated API '0.4'
Adding layer 'heroku/python:profile'
Adding layer 'buildpacksio/lifecycle:launch.sbom'
Adding 1/1 app layer(s)
Adding layer 'buildpacksio/lifecycle:launcher'
Adding layer 'buildpacksio/lifecycle:config'
Adding layer 'buildpacksio/lifecycle:process-types'
Adding label 'io.buildpacks.lifecycle.metadata'
Adding label 'io.buildpacks.build.metadata'
Adding label 'io.buildpacks.project.metadata'
ERROR: failed to export: determining entrypoint: tried to set web to default but it doesn't exist

This was seen with this repository: https://gitlab.wikimedia.org/taavi/test-cli-tool

However, the same error is not seen when building the image locally.
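Until the pipeline stops requiring it, a workaround is for the tool to declare an explicit `web` entry so the lifecycle has a default process type to set. A hypothetical minimal Procfile (the command is illustrative, not from the failing repository):

```
# Procfile — a 'web' entry gives the buildpack lifecycle a default process type
web: python3 app.py
```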

Event Timeline

dcaro changed the task status from Open to In Progress. May 19 2023, 9:25 AM
dcaro claimed this task.
dcaro moved this task from Backlog to Doing on the Toolforge Build Service (Beta release) board.

Trying to consolidate here the topics that have been mentioned in a couple other places.

I have been thinking that the Procfile semantics could be augmented and further tied into the jobs framework like this.

Imagine a Procfile with content:

web: whatever
action1: ./some-script.sh --with-args
action2: ./some-python.py

Then, this tool could run any of:

  • toolforge webservice start (reads the web entry)
  • toolforge job run action1 (reads the action1 entry)
  • toolforge job run action2 --schedule "* * * * *"
  • etc

I.e., the combination of Procfile + buildpacks could be used to hide many of the parameters of the current jobs framework implementation.
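As a sketch of what reading those entries could look like, here is a minimal Procfile parser (the function name and regex are assumptions for illustration, not the actual buildservice code):

```python
import re

# Procfile lines look like "name: command"; entry names are typically
# alphanumeric with dashes or underscores.
ENTRY_RE = re.compile(r"^([A-Za-z0-9_-]+):\s*(.+)$")

def parse_procfile(text):
    """Return a mapping of Procfile entry name -> command string."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and comments.
        if not line or line.startswith("#"):
            continue
        match = ENTRY_RE.match(line)
        if match:
            entries[match.group(1)] = match.group(2)
    return entries

procfile = """\
web: whatever
action1: ./some-script.sh --with-args
action2: ./some-python.py
"""
print(parse_procfile(procfile)["action1"])  # ./some-script.sh --with-args
```

With a table like this in hand, `toolforge job run action1` would only need to look up the command for the requested entry.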

On the other hand, these additional Procfile semantics may lead us to introduce something similar to heroku.yaml, which in turn also resembles the load.yaml semantics a bit.

> I have been thinking that the Procfile semantics could be augmented and further tied into the jobs framework like this. [...] I.e., the combination of Procfile + buildpacks could be used to hide many of the parameters of the current jobs framework implementation.

I really like that idea :)

I think that the current cli uses `--command action1` to do that, right?

At that point I think it would be better to consolidate `job` and `webservice`, as the distinction is not needed anymore, so we could simplify even further, with something like:

toolforge run web -> starts the web procfile entry
toolforge run action1 -> starts the action1 procfile entry
toolforge run action2 --schedule "...."  -> starts the action2 procfile entry with that schedule
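A rough sketch of how such a unified `run` command might dispatch on a Procfile entry plus an optional schedule (the entry table, flag names, and return strings are hypothetical, not the actual cli):

```python
import argparse

# Hypothetical Procfile entries, as they would be parsed from the repository.
ENTRIES = {
    "web": "whatever",
    "action1": "./some-script.sh --with-args",
    "action2": "./some-python.py",
}

def run(argv):
    """Dispatch a 'toolforge run <entry>' invocation to the right command."""
    parser = argparse.ArgumentParser(prog="toolforge run")
    parser.add_argument("entry", choices=sorted(ENTRIES))
    parser.add_argument("--schedule", help="cron expression for periodic jobs")
    args = parser.parse_args(argv)
    command = ENTRIES[args.entry]
    if args.schedule:
        return f"scheduling {command!r} with cron {args.schedule!r}"
    return f"starting {command!r}"

print(run(["action2", "--schedule", "* * * * *"]))
```

The point of the sketch is that the user-facing interface only needs an entry name; the actual command line stays hidden in the Procfile.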

In my mind the web one could also stop being special; instead, you could specify whether you want to expose it or not:

> toolforge run myfrontend --public
Your service is now running, you can access it at https://mytool.toolforge.org/

# and for others
> toolforge run mybackend --port 8123
Your service is now running, and accessible internally by other components at mybackend.mytool.svc.toolforge.org:8123  # or whichever is the domain

This binds really well with T337191: Toolforge: consider introducing a command line for creating reverse proxies (partially at least).
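For illustration, the internal address in the example above could be derived like this (the `svc.toolforge.org` suffix is taken from the example and is an assumption, not a documented Toolforge domain):

```python
def internal_url(tool, service, port, domain="svc.toolforge.org"):
    """Build the internal address a non-public backend would be reachable at.

    The <service>.<tool>.<domain> layout mirrors the example in this
    discussion and is hypothetical.
    """
    return f"{service}.{tool}.{domain}:{port}"

print(internal_url("mytool", "mybackend", 8123))
# mybackend.mytool.svc.toolforge.org:8123
```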

> On the other hand, these additional Procfile semantics may lead us to introduce something similar to heroku.yaml, which in turn also resembles the load.yaml semantics a bit.

I'm not sure about this one, though. I think the Procfile might be enough, and we can make it more complicated later if needed. It would be interesting, though, to be able to define your setup in YAML in your repo, but I think that would be a bigger effort.

> In my mind the web one could also stop being special; instead, you could specify whether you want to expose it or not: [...]

Big +1 for introducing the ability to define non-public backing services in the Procfile, and in general, create deployments consisting of multiple pods within the same tool.

Mentioned in SAL (#wikimedia-cloud) [2023-05-24T12:26:01Z] <dcaro> deploy latest buildservice (T336050)

Mentioned in SAL (#wikimedia-cloud) [2023-05-24T12:28:40Z] <dcaro> deploy latest buildservice (T336050)

dcaro moved this task from Doing to Done on the Toolforge Build Service (Beta release) board.