
Report results from SonarCloud to Gerrit
Closed, ResolvedPublic

Assigned To
Authored By
zeljkofilipin
Feb 25 2019, 11:20 AM

Description

This will be implemented with an app on Toolforge (https://tools.wmflabs.org/sonarqubebot/).

Implementation:

  • Application to listen to POST requests from SonarQube
    • Validate that the requests come from SonarQube and not some rando
  • Create a SonarQubeBot user in gerrit
  • Craft a comment and post it to the gerrit patchset (a rough sketch of these pieces follows below)
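
A minimal sketch of how those pieces could fit together, assuming a Flask app on Toolforge, SonarQube's HMAC signature header for request validation, and an HTTP password for the SonarQubeBot account (the endpoint path, environment variable names, and message format below are illustrative, not the tool's actual code):

# Hypothetical Toolforge listener; names, paths and env vars are illustrative.
import hashlib
import hmac
import os

import requests
from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_SECRET = os.environ["SONAR_WEBHOOK_SECRET"]  # assumed shared secret

@app.route("/webhook", methods=["POST"])
def webhook():
    # SonarQube signs the raw body with HMAC-SHA256 when a secret is configured.
    sent = request.headers.get("X-Sonar-Webhook-HMAC-SHA256", "")
    computed = hmac.new(WEBHOOK_SECRET.encode(), request.data, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sent, computed):
        abort(403)  # not SonarQube: reject the request
    payload = request.get_json()
    # The short-lived branch name is the Gerrit change number
    # (see the example payload later in this task).
    post_gerrit_comment(payload["branch"]["name"], payload)
    return "", 204

def post_gerrit_comment(change, payload):
    # Set a review on the current revision via the Gerrit REST API.
    message = "SonarQube quality gate: {}\n{}".format(
        payload["qualityGate"]["status"], payload["branch"]["url"])
    requests.post(
        "https://gerrit.wikimedia.org/r/a/changes/{}/revisions/current/review".format(change),
        json={"message": message},
        auth=("SonarQubeBot", os.environ["GERRIT_HTTP_PASSWORD"]),  # assumed creds
    ).raise_for_status()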

Event Timeline

zeljkofilipin renamed this task from "Report results from sonar cloud to gerrit" to "Report results from SonarCloud to Gerrit". Feb 25 2019, 12:45 PM
zeljkofilipin moved this task from Backlog 🪒 to Q4 👔 on the User-zeljkofilipin board.

That seems OK as long as the URL includes the branch (Gerrit patchset number) so that clicking the link takes you to a relevant page on SonarCloud.

Later on it would probably make sense to implement a webhook for reporting: https://docs.sonarqube.org/latest/project-administration/webhooks/

@zeljkofilipin I'm interested in setting up a simple tool on Toolforge that:

  1. Is configured to listen for webhook data from SonarQube
  2. Is responsible for posting a comment in Gerrit with the link to the build, the pass/fail status, and (maybe) a quick summary of the new issues found

If you don't have time to work on this in the next two weeks, I might get started on it myself; please let me know what you think.

I really don't know how much time I'll have in the next week or two. If you have the time, go ahead. Toolforge tool sounds like a good idea.

Real-world POST from the SonarCloud webhook:

{
  "serverUrl": "https://sonarcloud.io",
  "taskId": "AWm6bF2mtK8xldclsL0c",
  "status": "SUCCESS",
  "analysedAt": "2019-03-26T15:31:29+0100",
  "changedAt": "2019-03-26T15:31:29+0100",
  "project": {
    "key": "mediawiki-core",
    "name": "mediawiki-core",
    "url": "https://sonarcloud.io/dashboard?id=mediawiki-core"
  },
  "branch": {
    "name": "490363",
    "type": "SHORT",
    "isMain": false,
    "url": "https://sonarcloud.io/project/issues?branch=490363&id=mediawiki-core&resolved=false"
  },
  "qualityGate": {
    "name": "Sonar way",
    "status": "ERROR",
    "conditions": [
      {
        "metric": "new_reliability_rating",
        "operator": "GREATER_THAN",
        "value": "3",
        "status": "ERROR",
        "errorThreshold": "1"
      },
      {
        "metric": "new_security_rating",
        "operator": "GREATER_THAN",
        "value": "2",
        "status": "ERROR",
        "errorThreshold": "1"
      },
      {
        "metric": "new_maintainability_rating",
        "operator": "GREATER_THAN",
        "value": "1",
        "status": "OK",
        "errorThreshold": "1"
      },
      {
        "metric": "new_coverage",
        "operator": "LESS_THAN",
        "status": "NO_VALUE",
        "errorThreshold": "80"
      },
      {
        "metric": "new_duplicated_lines_density",
        "operator": "GREATER_THAN",
        "status": "NO_VALUE",
        "errorThreshold": "3"
      }
    ]
  },
  "properties": {}
}
kostajh added a subscriber: mmodell.

@mmodell could you possibly help with "Create a SonarQubeBot user in gerrit" and "Add SonarQubeBot user to stream-events group" from the above, or could you point me to someone who could?

I think @thcipriani is working on cleaning up Gerrit groups at the moment. He might be able to help.

I met with @thcipriani about this today. In the short-term what we'll do is:

  1. Modify the wmf-sonar-scanner-{name} job template to poll for analysis completion

In the java8-sonar-scanner job we'll want to pipe the output to /log/scanner-output.txt.

Then we'll add a shell script step which will parse that output:

- job-template:
    name: 'wmf-sonar-scanner-{name}'
    node: DebianJessieDocker
    concurrent: false
    branch: '$ZUUL_BRANCH'
    properties:
     - build-discarder:
         days-to-keep: 15
    triggers:
     - zuul
    builders:
    - docker-log-dir
    - docker-src-dir
    - docker-cache-dir
    - docker-ci-src-setup-simple
    - docker-run-with-log-cache-src:
       image: 'docker-registry.wikimedia.org/releng/java8-sonar-scanner:0.4.0'
       logdir: '/log'
       args: |
         -Dsonar.projectKey={projectname} \
         -Dsonar.projectName={projectname} \
         -Dsonar.organization=wmftest \
         -Dsonar.host.url=https://sonarcloud.io \
         -Dsonar.branch.target="$ZUUL_BRANCH" \
         -Dsonar.branch.name={branch} \
         -X
    - shell: |
        # Code goes here

In the "code goes here" section we will look for a line like this: "INFO: More about the report processing at https://sonarcloud.io/api/ce/task?id=AWm7SW1T9XBIOzEJZvP0".

We will need to extract the ID from that string, then we can set up a polling script that calls https://sonarcloud.io/web_api/api/ce?query=task with the ID, say, every 30 seconds for 5 minutes.

Once we see that the task has completed (the status field in the API response), we can make another API call to the quality gates endpoint (https://sonarcloud.io/web_api/api/qualitygates?query=analysisId). That gives us a JSON document we can dump to STDOUT, and if the status in that document is ERROR we can return a non-zero exit code so the Jenkins job shows a failure. We can also output a link to the branch in SonarQube using the format https://sonarcloud.io/dashboard?branch={gerritChangeNumber}&id={ProjectKey}, e.g. https://sonarcloud.io/dashboard?branch=490363&id=mediawiki-core.
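
A hedged sketch of that polling step (the log path comes from the job above; the retry loop, field names, and error handling are my reading of the SonarCloud web API docs linked here, not a tested script):

# Hypothetical polling script: extract the task id from the scanner log,
# poll the CE task endpoint, then check the quality gate for the analysis.
import re
import sys
import time

import requests

SONAR = "https://sonarcloud.io"

def main(log_path="/log/scanner-output.txt", interval=30, timeout=300):
    match = re.search(r"api/ce/task\?id=(\S+)", open(log_path).read())
    if not match:
        sys.exit("No SonarCloud task id found in scanner output")
    task_id = match.group(1)

    # Poll every `interval` seconds until the analysis finishes or we time out.
    deadline = time.time() + timeout
    task = {}
    while time.time() < deadline:
        task = requests.get(f"{SONAR}/api/ce/task", params={"id": task_id}).json()["task"]
        if task["status"] in ("SUCCESS", "FAILED", "CANCELED"):
            break
        time.sleep(interval)
    if task.get("status") != "SUCCESS":
        sys.exit("Analysis did not complete: %s" % task.get("status"))

    # The completed task carries the analysis id the quality gate endpoint needs.
    gate = requests.get(f"{SONAR}/api/qualitygates/project_status",
                        params={"analysisId": task["analysisId"]}).json()
    print(gate)  # dump the JSON document to STDOUT, as described above
    if gate["projectStatus"]["status"] == "ERROR":
        sys.exit(1)  # non-zero exit so the Jenkins job shows a failure

if __name__ == "__main__":
    main()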

I forgot to mention: being able to post meaningful comments on the task with information about issues uncovered, for example, is going to take longer, as we need new infrastructure set up. One idea @thcipriani had was a Jenkins user that can build jobs and comment in gerrit; have the SonarQube webhook ping Jenkins with a token, and have that trigger a job build which accesses the POSTed data from SonarQube and posts a comment with that info to gerrit.

Change 501465 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[integration/config@master] wmf-sonar-scanner: Report back quality gate result using polling

https://gerrit.wikimedia.org/r/501465

Change 501465 merged by jenkins-bot:
[integration/config@master] wmf-sonar-scanner: Report back quality gate result using polling

https://gerrit.wikimedia.org/r/501465

@thcipriani wondering if the code health group could move ahead with this. We have the polling script approach working but we would like to use a bot so that we can post a small summary of the specific codehealth issues (rather than just a link to sonarcloud). I know you had also mentioned triggering a job build directly from the webhook. Please let us know if you have thoughts on a preferred approach.

> @thcipriani wondering if the code health group could move ahead with this. We have the polling script approach working but we would like to use a bot so that we can post a small summary of the specific codehealth issues (rather than just a link to sonarcloud). I know you had also mentioned triggering a job build directly from the webhook. Please let us know if you have thoughts on a preferred approach.

Hrm, so I mentioned triggering a job via the webhook/remote token, but that's not currently well supported by our setup; i.e., you would have to add an auth-token to a job definition and we don't currently have a sane way to keep that info private. Plus it's a very Jenkins-specific approach, which may not be very future-proof.

The polling setup is non-optimal since it ties up a worker while it's running; however, that currently seems like the least bad option. We could mitigate the impact of tying up a worker machine by having the current job trigger a job that does the polling and posting to Gerrit on a dedicated, small instance. We could create an integration machine for this purpose without a lot of resources (or use the existing trigger machine).

We'd still need a Gerrit user that can comment on tasks (it can be set up as a bot account on Wikitech), put the creds in our Jenkins somewhere (you'd need a relenger's help for that), have the current job trigger a polling job, poll until complete, then trigger a script that parses the output and comments on Gerrit.

Posting comments on Gerrit through the change API (https://gerrit-review.googlesource.com/Documentation/rest-api-changes.html) isn't too bad -- the API is one of Gerrit's strong points (I wrote a library in Python with the idea to do this at one point, https://github.com/thcipriani/grrit -- might be helpful).
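
For reference, a minimal sketch of what setting a review with an inline comment through that API could look like (the change number, file path, and credentials are made up; the message, comments, line, and message fields come from the ReviewInput / CommentInput entities in the linked docs):

# Sketch: set a review with an inline comment via the Gerrit changes API.
import requests

review = {
    "message": "SonarQube found 1 new issue.",
    "comments": {
        "includes/Foo.php": [  # hypothetical file path
            {"line": 42, "message": "Remove this unused local variable."}
        ]
    },
}
resp = requests.post(
    "https://gerrit.wikimedia.org/r/a/changes/556830/revisions/current/review",
    json=review,
    auth=("SonarQubeBot", "http-password"),  # assumed bot credentials
)
resp.raise_for_status()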

Change 557001 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[integration/config@master] jjb: Modify branch name passed to sonar-scanner

https://gerrit.wikimedia.org/r/557001

kostajh updated the task description. (Show Details)

I've updated https://github.com/kostajh/sonarqubebot so that a more informative message will be posted (example https://gerrit.wikimedia.org/r/c/mediawiki/extensions/Echo/+/556830#message-df768acbc9f195ec681804105a8e4906c3baf584) and will follow up later with adding inline code comments for specific violations.

Change 557001 merged by jenkins-bot:
[integration/config@master] jjb: Modify branch name passed to sonar-scanner

https://gerrit.wikimedia.org/r/557001

Change 557138 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[integration/config@master] jjb: Add back codehealth messages and adjust success/failure pattern

https://gerrit.wikimedia.org/r/557138

Change 557138 merged by jenkins-bot:
[integration/config@master] jjb: Add back codehealth messages and adjust success/failure pattern

https://gerrit.wikimedia.org/r/557138

Change 559419 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[integration/config@master] Codehealth pipeline: Don't report back to gerrit

https://gerrit.wikimedia.org/r/559419

Change 559419 merged by jenkins-bot:
[integration/config@master] Codehealth pipeline: Don't report back to gerrit

https://gerrit.wikimedia.org/r/559419

Mentioned in SAL (#wikimedia-releng) [2019-12-19T09:40:59Z] <hashar> Deploying Zuul change "Codehealth pipeline: Don't report back to gerrit" https://gerrit.wikimedia.org/r/559419 for T217008

Change 559441 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[integration/config@master] dockerfiles: Drop poll-sonar-for-response script

https://gerrit.wikimedia.org/r/559441

Change 559441 merged by jenkins-bot:
[integration/config@master] dockerfiles: Drop poll-sonar-for-response script

https://gerrit.wikimedia.org/r/559441

Change 572852 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[labs/tools/sonarqubebot@master] Add inline commenting support

https://gerrit.wikimedia.org/r/572852

Change 572852 merged by jenkins-bot:
[labs/tools/sonarqubebot@master] Add inline commenting support

https://gerrit.wikimedia.org/r/572852

image.png (617×1 px, 85 KB)

This is what the inline comments look like if I use robot_comment support. The Run ID label is supposed to be the unique ID of the bot run, but it might be more usable if we change it to "View more details" even though that's technically wrong.
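
For context, an inline robot comment goes in the same review input under robot_comments; per the Gerrit REST docs, robot_id and robot_run_id are required, and robot_run_id is presumably what the UI renders as that "Run ID" label, with the url field as the clickable link next to it. A sketch with made-up values:

# Sketch: RobotCommentInput entries keyed by file path (values are made up).
review = {
    "message": "SonarQube analysis complete.",
    "robot_comments": {
        "includes/Foo.php": [
            {
                "line": 42,
                "message": "Remove this unused local variable.",
                "robot_id": "sonarqubebot",
                "robot_run_id": "AWm6bF2mtK8xldclsL0c",  # rendered as "Run ID"
                "url": "https://sonarcloud.io/project/issues?branch=573995&id=mediawiki-core",
            }
        ]
    },
}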

Change 572857 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[labs/tools/sonarqubebot@master] Fix bot URL

https://gerrit.wikimedia.org/r/572857

Change 572857 merged by jenkins-bot:
[labs/tools/sonarqubebot@master] Fix bot URL, add view details link

https://gerrit.wikimedia.org/r/572857

Change 572862 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[labs/tools/sonarqubebot@master] Don't attempt to format a nice link

https://gerrit.wikimedia.org/r/572862

Change 572862 merged by jenkins-bot:
[labs/tools/sonarqubebot@master] Don't attempt to format a nice link

https://gerrit.wikimedia.org/r/572862

Change 572887 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[labs/tools/sonarqubebot@master] Use project key, not name, for querying issues

https://gerrit.wikimedia.org/r/572887

Change 572887 merged by jenkins-bot:
[labs/tools/sonarqubebot@master] Use project key, not name, for querying issues

https://gerrit.wikimedia.org/r/572887

The gerrit documentation around robot comments is pretty opaque. AFAICT when you post a robot comment, you get the notice in your email, but the comments are not visible in the UI unless you happen to click directly on the file (and patchset number) where the comment was made. That's unlike human comments, which have clickable links in the summary message and are also visible via the dropdown picker.

So, for example, on this patch: https://gerrit.wikimedia.org/r/c/mediawiki/extensions/GrowthExperiments/+/573995

With robot_comments, the end result in the UI is like this:

image.png (341×578 px, 49 KB)

Note that even though it says 7 comments, there is no summary below showing links to the comments, nor can you see them in the UI.

But when I use human comments (comments) then I see what I would expect:

image.png (531×639 px, 76 KB)

When looking at the detail view, robot comments are rendered differently in the UI than human comments.

image.png (316×935 px, 62 KB)

The item in blue is the robot comment and the one in yellow is a regular human comment.

The robot comment has some advantages: it's clearer from the UI that it's generated by a machine, and you can embed a clickable link (the blue underline next to Run ID), although I'm not sure most people would know to click on the link next to Run ID, so I added an explicit "View details" link in the message as well. I could add the "View details" text and link to the yellow human comment too; I left it out while doing some testing. The blue robot comment also has a "Please fix" label, which is confusing at best -- who is being asked to fix it?

It's really hard to tell from gerrit upstream (docs or code review tracker) in which direction robot comments are going. Some of the issues I've noted above do seem to be getting addressed (for example https://gerrit-review.googlesource.com/c/gerrit/+/253669), but that won't help us any time soon, as it looks like those fixes are in version 3.

tl;dr

  • the robot comments conceptually make sense but functionally they don't seem really useful to us in gerrit 2.15 (unless I did something totally wrong in how they're submitted)
  • if we use human comments then people might complain that they can't easily filter out machine generated comments from code review.
    • counter-argument: we should have sonarqube tuned such that the only comments that are made are the ones we really do want addressed, and are not noise

Given all the above I lean towards implementing the inline commenting feature as human comments, but we should really finalize and document how we want people to be able to mark false positives directly on sonarcloud.io or use suppress annotations.

cc @Gehel @Jrbranaa

@kostajh check out the "tag" field on https://gerrit.wikimedia.org/r/Documentation/rest-api-changes.html#comment-input - it appears you can tag comments with "autogenerated" and have them filterable, without actually using the half-baked "robot comments" feature.

Ah right. Thanks for pointing that out. I am already using that tag so pressing “Show comments only” will hide the sonar bot comments even when they don’t use the robot_comments field. So, that’s probably good enough for now then, and maybe someday we can revisit using robot comments proper when it’s more completely implemented.

The doc says "Robot comments are only supported with NoteDb, but not with ReviewDb", so presumably that's why it's broken.

Bots making normal comments is definitely a no-go, it would make human comments impossible to find. You can't expect people to fix all the style / test issues before asking for human review, that's exactly the wrong order for code review.

The very nice thing about bot comments would be that they can include fix suggestions which presumably can be applied via the UI. (That's unrelated to the "Please fix" thing, which just creates a comment with that text - presumably meant so that the maintainer can ask the commit owner to fix some non-voting issues, but it seems fairly pointless in practice.) But the new UI doesn't support commit editing, and I doubt the old UI supports anything involving NoteDb, so we'll probably have to wait a bit more for that.

> The doc says "Robot comments are only supported with NoteDb, but not with ReviewDb", so presumably that's why it's broken.

Good catch. Yes, probably that's why.

> Bots making normal comments is definitely a no-go, it would make human comments impossible to find.

To clarify, if we want to move ahead with this, we would use the autogenerated:codehealth tag for any comments which makes them filterable in the UI (press "Show comments only" to hide them), they are just not visually distinguishable (well, other than "SonarQube Bot" as the commenter name) because they would be using comments instead of robot_comments. Is this trade-off acceptable to you?

> You can't expect people to fix all the style / test issues before asking for human review, that's exactly the wrong order for code review.

I think I disagree with this although I can see exceptions to it. We currently fail builds for style issues (phpcs), and while the codehealth pipeline isn't going to fail a build it mostly does provide feedback that I would want addressed or at least commented on before spending a lot of time on a review (e.g. the patch attempts to use a variable before it's defined -- if I see that warning I feel like I want that fixed before committing to do a code review).

> The very nice thing about bot comments would be that they can include fix suggestions which presumably can be applied via the UI. (That's unrelated to the "Please fix" thing, which just creates a comment with that text - presumably meant so that the maintainer can ask the commit owner to fix some non-voting issues, but it seems fairly pointless in practice.) But the new UI doesn't support commit editing, and I doubt the old UI supports anything involving NoteDb, so we'll probably have to wait a bit more for that.

Yes, that is definitely nice, but I'm not sure we could turn any of the feedback items from Sonar into automatic fixes to apply. (I looked very briefly into whether phpcs could generate single-line replacement fixes, to possibly use with another bot, but it doesn't seem easy.)

Change 574429 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[labs/tools/sonarqubebot@master] Use normal comments instead of robot_comments

https://gerrit.wikimedia.org/r/574429

Change 574429 merged by jenkins-bot:
[labs/tools/sonarqubebot@master] Use normal comments instead of robot_comments

https://gerrit.wikimedia.org/r/574429

> The doc says "Robot comments are only supported with NoteDb, but not with ReviewDb", so presumably that's why it's broken.

> Good catch. Yes, probably that's why.

We use notedb for comments and are almost entirely on notedb (with the exception of groups). Documentation for our version of gerrit also mentions, "Robot comments are not displayed in the web UI yet." -- so what you're seeing is likely expected.

FWIW, this is also true of 2.16.16 which is our next upgrade.

> Bots making normal comments is definitely a no-go, it would make human comments impossible to find.

> To clarify, if we want to move ahead with this, we would use the autogenerated:codehealth tag for any comments which makes them filterable in the UI (press "Show comments only" to hide them), they are just not visually distinguishable (well, other than "SonarQube Bot" as the commenter name) because they would be using comments instead of robot_comments. Is this trade-off acceptable to you?

The comment thread is not terribly useful in the first place because 1) most bots are not flagged as autogenerated (although that's fixable) and 2) you can't see what has / hasn't been resolved. Future versions of gerrit will allow filtering for that but for now, the only way to stay sane for large patches with dozens or hundreds of versions is to use the version dropdown which shows you the total / resolved comment count per version (and then once you select a version you get the same numbers per-file). The autogenerated tag doesn't help with that.

> I think I disagree with this although I can see exceptions to it. We currently fail builds for style issues (phpcs), and while the codehealth pipeline isn't going to fail a build it mostly does provide feedback that I would want addressed or at least commented on before spending a lot of time on a review (e.g. the patch attempts to use a variable before it's defined -- if I see that warning I feel like I want that fixed before committing to do a code review).

If you are trying to provide a patch that's ready to merge then yes. I'm not sure that's a healthy approach though, it makes discussions about architecture infeasible because you need to put in so much work into details that would be thrown away if the architecture were changed. That's not an issue for most team-level reviews currently, because there is not much culture of high-level discussions anyway; but even if that doesn't change, it is still an issue for guiding less experienced developers through the code contribution process, and for discussing "proof of concept" patches, which is a common way to have discussions around RfCs and similar complex changes.

> Yes, that is definitely nice, but I'm not sure we could turn any of the feedback items from Sonar into automatic fixes to apply. (I looked very briefly into whether phpcs could generate single-line replacement fixes, to possibly use with another bot, but it doesn't seem easy.)

Yeah, that would be mainly for phpcs and the other style checkers. phpcs can report in a number of structured formats; in theory those seem straightforward to convert into the FixReplacementInfo data gerrit wants. No idea at what future version gerrit will actually start using that data, though.
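
To make that concrete: per the Gerrit REST docs, a fix suggestion on a robot comment is a FixSuggestionInfo whose replacements are FixReplacementInfo objects. A hypothetical phpcs-derived example (path, range, and messages are made up):

# Sketch: fix_suggestions on a robot comment (all values hypothetical).
robot_comment = {
    "line": 42,
    "message": "Line indented with tabs, expected spaces.",
    "robot_id": "sonarqubebot",
    "robot_run_id": "run-123",
    "fix_suggestions": [
        {
            "description": "Replace tabs with spaces",
            "replacements": [
                {
                    "path": "includes/Foo.php",
                    # Zero-based character offsets on one-based lines,
                    # matching gerrit's CommentRange semantics.
                    "range": {"start_line": 42, "start_character": 0,
                              "end_line": 42, "end_character": 2},
                    "replacement": "        ",
                }
            ],
        }
    ],
}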

> The comment thread is not terribly useful in the first place because 1) most bots are not flagged as autogenerated (although that's fixable) and 2) you can't see what has / hasn't been resolved. Future versions of gerrit will allow filtering for that but for now, the only way to stay sane for large patches with dozens or hundreds of versions is to use the version dropdown which shows you the total / resolved comment count per version (and then once you select a version you get the same numbers per-file). The autogenerated tag doesn't help with that.

I think I am misunderstanding. Here is an example of a comment left using a "human" comment (the comments field, not robot_comments) and with the autogenerated:codehealth tag: https://gerrit.wikimedia.org/r/c/mediawiki/extensions/GrowthExperiments/+/573331/6. Note that 1) the link to the comment appears below the review message and 2) the unresolved comment count appears in the dropdown.

image.png (405×716 px, 66 KB)

image.png (374×490 px, 41 KB)

If we use robot_comments we lose both of those things. A hacky workaround for 1 would be to manually embed links to each comment in the main review message (the bit that reports which parts of the quality gate passed/failed), if we really want to use robot_comments.


Overall, I'm uncertain of how we want to proceed. I see the three options as 1) don't do inline commenting until we're on Gerrit 3, 2) do inline commenting with robot_comments and accept that it's clunky and half-baked, and manually add links to inline comments in the main review comment so that they are semi-findable via the UI even though they won't show up in the dropdown picker, 3) use human comments with the autogenerated:codehealth flag

Personally option 3 seems reasonable given the problems / shortcomings of options 1 and 2.

> If we use robot_comments we lose both of those things.

Losing the comment counts from the dropdown would be a feature IMO. Otherwise it will be full of bot-generated warnings about code coverage and whatnot, and it will be impossible to find human comments.
For complex patches it is a normal pattern that you get a number of review comments, and you fix some of them and disagree about others, and then you'll end up with debate threads somewhere in the middle of the patchset list, and it's important to be able to find them easily. Lots and lots of code style comments would just drown them out.

If we can flag all the bots (not just SonarQube) as autogenerated, maybe filtering the conversation view would be a replacement for that. Although that does not show which comments have been resolved. (Future versions of gerrit allow filtering for that, but then future versions of gerrit support bot comments anyway.)

> Overall, I'm uncertain of how we want to proceed. I see the three options as 1) don't do inline commenting until we're on Gerrit 3, 2) do inline commenting with robot_comments and accept that it's clunky and half-baked, and manually add links to inline comments in the main review comment so that they are semi-findable via the UI even though they won't show up in the dropdown picker, 3) use human comments with the autogenerated:codehealth flag

> Personally option 3 seems reasonable given the problems / shortcomings of options 1 and 2.

Do we have any idea what our Gerrit upgrade timeline looks like? FWIW I think this is bug 5902 / 849133e, which is fixed in 2.16 (we are on 2.15). (And then 9a4dfcc in 3.1 seems to hide them again, and the as-yet-unreleased b41e20e creates the separate Findings tab for them, if I'm reading the code correctly.)

Manually adding would be a decent option if we can generate HTML (which I don't think is the case), otherwise it would probably look pretty ugly.

> Manually adding would be a decent option if we can generate HTML (which I don't think is the case), otherwise it would probably look pretty ugly.

In our meeting today I agreed to press ahead with trying to generate links to individual comments in the review message, but now that I'm sitting down to do it (side note: the docs for what is supported are lacking / missing completely) it seems like our markup options are:

  • unordered and ordered bullets
  • code block

Given that, I don't see a nice way to surface the inline comments in the review message. Those who are tagged on the changeset will receive the comments in their email notification, if they read it. Otherwise, I think we need to wait for 2.16 or 3.1 with b41e20e8e9 as @Tgr said.

For robot comments to work for the java projects, T247243: Fix gerritProjectName assignment in run-java.sh needs to be sorted out.

Change 578340 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[labs/tools/sonarqubebot@master] Fix assignment of file name

https://gerrit.wikimedia.org/r/578340

Change 578340 merged by jenkins-bot:
[labs/tools/sonarqubebot@master] Fix assignment of filename

https://gerrit.wikimedia.org/r/578340

That looks great! I guess the "View details" part is now obsolete as "Run Details" points to the same URL. (Although it could use a better name; "Run Details" sounds as if clicking it would execute something.)
The discoverability of the "Findings" tab could be better (would be nice if it showed a warning when there are non-zero findings for the current patchset) but even so it is a huge improvement.

Should we switch robot comments on for all projects and see what feedback we get, or enable incrementally? Right now it's just on for mediawiki/extensions/GrowthExperiments|wikidata/query/rdf|search/extra

I think I would be OK with enabling for all projects.

Change 608287 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[labs/tools/sonarqubebot@master] Remove "View details" link in comment body

https://gerrit.wikimedia.org/r/c/labs/tools/sonarqubebot/+/608287

Change 608289 had a related patch set uploaded (by Kosta Harlan; owner: Kosta Harlan):
[labs/tools/sonarqubebot@master] Remove inline comment safelist

https://gerrit.wikimedia.org/r/c/labs/tools/sonarqubebot/+/608289

Change 608287 merged by jenkins-bot:
[labs/tools/sonarqubebot@master] Remove "View details" link in comment body

https://gerrit.wikimedia.org/r/c/labs/tools/sonarqubebot/+/608287

Change 608289 merged by jenkins-bot:
[labs/tools/sonarqubebot@master] Remove inline comment safelist

https://gerrit.wikimedia.org/r/c/labs/tools/sonarqubebot/+/608289

> Should we switch robot comments on for all projects and see what feedback we get, or enable incrementally? Right now it's just on for mediawiki/extensions/GrowthExperiments|wikidata/query/rdf|search/extra

> I think I would be OK with enabling for all projects.

I am definitely in favor of that feature :-]

> Should we switch robot comments on for all projects and see what feedback we get, or enable incrementally? Right now it's just on for mediawiki/extensions/GrowthExperiments|wikidata/query/rdf|search/extra

> I think I would be OK with enabling for all projects.

> I am definitely in favor of that feature :-]

Alright, it's live!

Do you still need:

  • Add SonarQubeBot user to Gerrit stream-events group

Or is that solely triggered via CI/Zuul?

> Do you still need:

>   • Add SonarQubeBot user to Gerrit stream-events group

> Or is that solely triggered via CI/Zuul?

Nope, not needed, thanks for checking!

All the checkboxes are checked—is there anything left to do here?

> All the checkboxes are checked—is there anything left to do here?

The only thing left is to resolve this task. Thanks everyone!