Build Image:
docker build -f .pipeline/revertrisk/multilingual.yaml --target production --platform=linux/amd64 -t multilingual:events .
I was experimenting with the option:
Another option: make a .v2 stream with a different/new schema, or just a new major version 2.0.0 schema, that supports multiple model predictions per event, either via an array of them or a map of them. The downside would be that evolving the items in the array or map would not be easily supported (it's complicated).
I found it somewhat complicated. I think we can go with the option of creating a different (dedicated) stream for the rr-multilingual predictions, something like EVENTGATE_STREAM=mediawiki.page_revert_risk_multilingual_prediction_change.v1. This would separate the streams, right?
This way we have two different streams pointing to the same schema, and in the deployment charts we set the corresponding EVENT_STREAM value for each of the rr models. We also set the correct values under changeprop so we maintain two different streams.
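A minimal sketch of how each deployment could resolve its stream. The dictionary keys/values mirror the stream names discussed above; `resolve_event_stream` and the fallback logic are my own illustration, not the actual chart or service code:

```python
# Illustrative only: each deployment sets EVENT_STREAM in its chart; the
# per-model defaults below just show the intended mapping.
import os

STREAM_BY_MODEL = {
    "revertrisk-language-agnostic": "mediawiki.page_revert_risk_prediction_change.v1",
    "revertrisk-multilingual": "mediawiki.page_revert_risk_multilingual_prediction_change.v1",
}

def resolve_event_stream(model_name: str) -> str:
    # Prefer an explicit EVENT_STREAM from the deployment chart,
    # fall back to the per-model default.
    return os.environ.get("EVENT_STREAM", STREAM_BY_MODEL[model_name])
```

Both streams can then point at the same schema while keeping the prediction events separated per model.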
@Ottomata thank you for the comments.
We are not in the state for deploying this on production. I just built it like this in order to understand the flow and test it on staging as well.
Currently many people from our team are absent, so we will make the final decisions when they are back.
For now I just implemented this and we can test things on staging.
I will experiment with the alternatives as well:
In T415892#11580587, @Ottomata wrote:we may need to produce predictions to a separate stream instead of mediawiki.page_revert_risk_prediction_change.
Another option: make a .v2 stream with a different/new schema, or just a new major version 2.0.0 schema, that supports multiple model predictions per event, either via an array of them or a map of them. The downside would be that evolving the items in the array or map would not be easily supported (it's complicated).
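For illustration only, this is the shape such a hypothetical v2 event could take with the map variant; all field names here are assumptions, not a real schema:

```python
# Hypothetical v2 event: one event per revision, carrying the predictions
# of several models keyed by model name. Field names are illustrative.
event_v2 = {
    "rev_id": 1333904928,
    "predictions": {
        "revertrisk-language-agnostic": {"model_version": "3", "probability_true": 0.27},
        "revertrisk-multilingual": {"model_version": "1", "probability_true": 0.41},
    },
}
```

The evolution concern above is that changing the per-model item structure (the inner dicts) later is hard to version cleanly.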
Finished the implementation of the event mechanism in inference-services for the rr-multilingual model.
This is the local testing on my machine:
Since task T406217 is finished, we have a first version of the end-to-end pipeline including all the basic steps of an ML lifecycle: Data Generation -> Model Training -> Export model to S3 bucket.
More info could be found here: https://phabricator.wikimedia.org/T398970
Generate Data (SparkSubmitOperator) -> Train/Validation/Test split (SparkSubmitOperator) -> Copy from HDFS to a PVC (WMFKubernetesPodOperator) -> Train model on GPU pod (WMFKubernetesPodOperator) -> Copy retrained model to S3 (PythonOperator)
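The chain above can be sketched like this; `Task` is a toy stand-in mimicking Airflow's `>>` dependency operator, not the real Airflow API, and the task names are my own shorthand:

```python
# Toy model of the DAG's task ordering; comments note the real operator types.
class Task:
    def __init__(self, name):
        self.name = name
        self.downstream = []

    def __rshift__(self, other):
        # Mimic Airflow's `a >> b` dependency syntax.
        self.downstream.append(other)
        return other

generate = Task("generate_data")            # SparkSubmitOperator
split = Task("train_val_test_split")        # SparkSubmitOperator
copy_to_pvc = Task("copy_hdfs_to_pvc")      # WMFKubernetesPodOperator
train = Task("train_model_gpu")             # WMFKubernetesPodOperator
to_s3 = Task("copy_model_to_s3")            # PythonOperator

generate >> split >> copy_to_pvc >> train >> to_s3
```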
Hey, I am working on this. I think I have finished the implementation for publishing the predictions as events, and I am now testing it locally.
Based on this: https://wikitech.wikimedia.org/wiki/Machine_Learning/LiftWing/Streams I think there are these steps:
Hey @Isaac, this ticket is assigned to @klausman but he is currently on sabbatical. He will start working on this when he is back, I think around next month (not 100% sure).
I am tagging @DPogorzelski-WMF here for visibility, maybe he has something more to add.
The end-to-end tone-check retraining pipeline succeeded; we solved the Multi-Attach PVC issues.
| 1 | tone-check-training-dag-move-model-to-s3-nv8wgsew |
|---|---|
| 2 | ▶ Log message source details |
| 3 | [2026-01-28, 22:24:03 UTC] {local_task_job_runner.py:123} ▶ Pre task execution logs |
| 4 | [2026-01-28, 22:24:04 UTC] {crypto.py:82} WARNING - empty cryptography key - values will not be stored encrypted. |
| 5 | [2026-01-28, 22:24:05 UTC] {tone_check_training_dag.py:101} INFO - [+] S3 client loaded ! |
| 6 | [2026-01-28, 22:24:05 UTC] {tone_check_training_dag.py:103} INFO - Searching files in /mnt/model-training/tone_check/20260128T134152/output_model: |
| 7 | [2026-01-28, 22:24:05 UTC] {tone_check_training_dag.py:109} INFO - - File: /mnt/model-training/tone_check/20260128T134152/output_model/checkpoint-21530/config.json | wmf-ml-models | retrained-models/tone-check/checkpoint-21530/config.json |
| 8 | [2026-01-28, 22:24:05 UTC] {tone_check_training_dag.py:109} INFO - - File: /mnt/model-training/tone_check/20260128T134152/output_model/checkpoint-21530/model.safetensors | wmf-ml-models | retrained-models/tone-check/checkpoint-21530/model.safetensors |
| 9 | [2026-01-28, 22:24:12 UTC] {tone_check_training_dag.py:109} INFO - - File: /mnt/model-training/tone_check/20260128T134152/output_model/checkpoint-21530/special_tokens_map.json | wmf-ml-models | retrained-models/tone-check/checkpoint-21530/special_tokens_map.json |
| 10 | [2026-01-28, 22:24:12 UTC] {tone_check_training_dag.py:109} INFO - - File: /mnt/model-training/tone_check/20260128T134152/output_model/checkpoint-21530/rng_state.pth | wmf-ml-models | retrained-models/tone-check/checkpoint-21530/rng_state.pth |
| 11 | [2026-01-28, 22:24:12 UTC] {tone_check_training_dag.py:109} INFO - - File: /mnt/model-training/tone_check/20260128T134152/output_model/checkpoint-21530/tokenizer_config.json | wmf-ml-models | retrained-models/tone-check/checkpoint-21530/tokenizer_config.json |
| 12 | [2026-01-28, 22:24:13 UTC] {tone_check_training_dag.py:109} INFO - - File: /mnt/model-training/tone_check/20260128T134152/output_model/checkpoint-21530/vocab.txt | wmf-ml-models | retrained-models/tone-check/checkpoint-21530/vocab.txt |
| 13 | [2026-01-28, 22:24:13 UTC] {tone_check_training_dag.py:109} INFO - - File: /mnt/model-training/tone_check/20260128T134152/output_model/checkpoint-21530/tokenizer.json | wmf-ml-models | retrained-models/tone-check/checkpoint-21530/tokenizer.json |
| 14 | [2026-01-28, 22:24:13 UTC] {tone_check_training_dag.py:109} INFO - - File: /mnt/model-training/tone_check/20260128T134152/output_model/checkpoint-21530/training_args.bin | wmf-ml-models | retrained-models/tone-check/checkpoint-21530/training_args.bin |
| 15 | [2026-01-28, 22:24:14 UTC] {tone_check_training_dag.py:109} INFO - - File: /mnt/model-training/tone_check/20260128T134152/output_model/checkpoint-21530/scheduler.pt | wmf-ml-models | retrained-models/tone-check/checkpoint-21530/scheduler.pt |
| 16 | [2026-01-28, 22:24:14 UTC] {tone_check_training_dag.py:109} INFO - - File: /mnt/model-training/tone_check/20260128T134152/output_model/checkpoint-21530/trainer_state.json | wmf-ml-models | retrained-models/tone-check/checkpoint-21530/trainer_state.json |
| 17 | [2026-01-28, 22:24:14 UTC] {tone_check_training_dag.py:109} INFO - - File: /mnt/model-training/tone_check/20260128T134152/output_model/checkpoint-21530/optimizer.pt | wmf-ml-models | retrained-models/tone-check/checkpoint-21530/optimizer.pt |
| 18 | [2026-01-28, 22:24:29 UTC] {tone_check_training_dag.py:112} INFO - [+] Files uploded correctly at: s3://wmf-ml-models/retrained-models/tone-check// |
| 19 | [2026-01-28, 22:24:29 UTC] {python.py:240} INFO - Done. Returned value was: None |
$ s3cmd -c /etc/s3cmd/cfg.d/ml-team.cfg ls -H s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/
2026-01-28 22:24    865   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/config.json
2026-01-28 22:24   678M   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/model.safetensors
2026-01-28 22:24  1357M   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/optimizer.pt
2026-01-28 22:24    13K   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/rng_state.pth
2026-01-28 22:24   1064   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/scheduler.pt
2026-01-28 22:24    695   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/special_tokens_map.json
2026-01-28 22:24     2M   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/tokenizer.json
2026-01-28 22:24   1330   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/tokenizer_config.json
2026-01-28 22:24     9K   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/trainer_state.json
2026-01-28 22:24     5K   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/training_args.bin
2026-01-28 22:24   972K   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-21530/vocab.txt
We currently do not store the predictions from the rr-multilingual model anywhere, so we cannot export them in the same way we do for the rr-language-agnostic one.
If needed, I can open a new Phabricator task to start developing the first step of saving the slice of the rr-multilingual predictions into the event stream; then we can add them to refinery and export them into event_sanitized as we do for rr-language-agnostic.
In T405358#11557401, @kostajh wrote:@gkyziridis I'm testing this out today but only seeing revertrisk-language-agnostic for an example revision on enwiki, is that expected?
spark-sql (default)> select predicted_classification from event.mediawiki_page_revert_risk_prediction_change_v1 where revision.rev_id = 1333904928;
predicted_classification
{"model_name":"revertrisk-language-agnostic","model_version":"3","predictions":["false"],"probabilities":{"false":0.7348057627677917,"true":0.26519423723220825}}
I also checked the PVC using kubectl and I see that its access mode is "RWO" (ReadWriteOnce). I am not sure if this causes the problem:
$ kube_env airflow-ml-deploy dse-k8s-eqiad
$ kubectl get pvc airflow-ml-model-training -n airflow-dev
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
airflow-ml-model-training   Bound    pvc-8a6a2920-8d7e-4616-8ab6-a6a70b26d116   20Gi       RWO            ceph-rbd-ssd   151d
$ s3cmd -c /etc/s3cmd/cfg.d/ml-team.cfg ls -H --recursive s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/
2026-01-20 13:33    865   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/config.json
2026-01-20 13:33   678M   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/model.safetensors
2026-01-20 13:33  1357M   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/optimizer.pt
2026-01-20 13:33    13K   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/rng_state.pth
2026-01-20 13:33   1064   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/scheduler.pt
2026-01-20 13:33    695   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/special_tokens_map.json
2026-01-20 13:33     2M   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/tokenizer.json
2026-01-20 13:33   1330   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/tokenizer_config.json
2026-01-20 13:33    24K   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/trainer_state.json
2026-01-20 13:33     5K   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/training_args.bin
2026-01-20 13:33   972K   s3://wmf-ml-models/retrained-models/tone-check/checkpoint-63618/vocab.txt
In T406179#11510890, @kevinbazira wrote:Weekly Update:
- The Wikimedia Enterprise team conducted load tests to simulate their traffic and shared results in T409388#11483570
- We are working on optimizing the revertrisk-wikidata inference service to achieve the Enterprise team's latency target in T414060
curl -s -X POST "https://inference.svc.eqiad.wmnet:30443/v1/models/revertrisk-language-agnostic:predict" \
  -d '{"rev_id": 2, "lang": "test"}' \
  -H "Host: revertrisk-language-agnostic.revertrisk.wikimedia.org"
Things we need to keep in mind:
testwiki is not a canonical Wikipedia; it is a testing environment where articles can be written in any language. It wasn't part of the RR model's training data, so we excluded it from the list of canonical Wikipedias. As a result, the RR model doesn't support testwiki.
We can easily see this in the following two requests to enwiki and testwiki, respectively:
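As a sketch (nothing is actually sent here), the two requests differ only in the `lang` parameter and rev_id; `build_predict_request` is my own illustrative helper, following the curl shape shown earlier in this task:

```python
# Assemble the predict request for a given wiki; builds the URL, Host header,
# and JSON body only -- it does not perform any network call.
import json

def build_predict_request(lang: str, rev_id: int,
                          model: str = "revertrisk-language-agnostic") -> dict:
    return {
        "url": f"https://inference.svc.eqiad.wmnet:30443/v1/models/{model}:predict",
        "headers": {"Host": f"{model}.revertrisk.wikimedia.org"},
        "body": json.dumps({"rev_id": rev_id, "lang": lang}),
    }

enwiki_req = build_predict_request("en", 1333904928)  # supported wiki
testwiki_req = build_predict_request("test", 2)       # testwiki: unsupported
```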
In T410663#11447291, @Isaac wrote:Nevertheless, this combination of versions seems to fix the issue using the GPU image, so your curiosity is in a super good shape towards the correct direction :P.
Haha, always happy to be accidentally helpful :) Once it's deployed on ml-lab1002, happy to test but definitely looking promising!
In T410663#11444307, @Isaac wrote:Thanks @gkyziridis for digging into this! Out of curiosity, why not jump to the current stable versions (2.9.1 for torch and 6.4 for AMD)? I see you commented that line out in the initial file that at least had torch at 2.9.1.
I built the image using: docker build --network=host -t torch_rocm3 .
In T409438#11428348, @Kgraessle wrote:Model configuration and threshold configuration have been deployed.
The next step is to backport adding thwiki to the ORES dblist: https://gerrit.wikimedia.org/r/c/operations/mediawiki-config/+/1207923.
Let's discuss in engineering weekly when we would like to go ahead and do that along with the backfill script.
When we start the actual deployment:
Because we have a huge number of wikis that need to be deployed, I suggest doing it in batches. Right now, the patch above only sets the thresholds for each wiki, which means that if it is merged and deployed nothing will change. In the next iterations, when we start deploying the wikis, we need to enable the ORES model and enable the UI as well; only then will the thresholds configured in the patch become functional. So I suggest enabling the ORES model in batches, e.g. 4-5 wikis per batch. This will take some time to finish all batches, but we can easily handle issues that could occur during the backport deployments.
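The batching itself is trivial; a toy sketch of splitting the rollout into batches of 4 (the wiki names here are placeholders, not the actual rollout order):

```python
# Split the full wiki list into fixed-size deployment batches.
def batches(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

wikis = ["arwiki", "dewiki", "frwiki", "itwiki", "ptwiki", "ruwiki", "zhwiki"]
rollout = list(batches(wikis, 4))
# Each inner list would be one backport-deployment window.
```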
I configured all the rr thresholds for all the wikis and enabled the model for all of them in this patch: https://gerrit.wikimedia.org/r/c/operations/mediawiki-config/+/1212086 .
I excluded thwiki from the above patch since you are using it for the MVP.
I also avoided running composer manage-dblist add {wiki_name} ores for all the wikis, which means that whenever we deploy these wikis we need to run the composer command for each of them.
I think there is one more step that needs to be done, which is to run: composer manage-dblist add {wiki_name} ores. I do not see thwiki added under the "dblists/ores.dblist" file in this patch -> https://gerrit.wikimedia.org/r/c/operations/mediawiki-config/+/1207932
In T409388#11405117, @kevinbazira wrote:The revertrisk-wikidata inference service production endpoint uses similar scaling configs that other revertrisk inference-services use: https://github.com/wikimedia/operations-deployment-charts/blob/8412fc655d3b1e10b38cf0c954d910b820e93a05/helmfile.d/ml-services/revertrisk/values.yaml#L145-L150
IMO the prod endpoint should scale well unless results from the WME folks say otherwise.
Progress update on the hypothesis for the week, including if something has shipped:
Hey @kevinbazira, thank you very much for running the load tests for Revert-Risk wikidata.
I think we should adjust the configuration a bit in order to simulate a more realistic scenario.
We also need to run heavier tests spawning more users in order to check our API's capacity and the maximum RPS it can handle.
I ran three different locust tests with heavier configuration, you can see the results in the following phab paste:
| 1 | # 500 users | 5 per second |
|---|---|
| 2 | $ MODEL_LOCUST_DIR="revertrisk_wikidata" make run-locust-test |
| 3 | [2025-11-24 13:19:16,836] stat1010/INFO/locust.main: Run time limit set to 120 seconds |
| 4 | [2025-11-24 13:19:16,837] stat1010/INFO/locust.main: Starting Locust 2.31.5 |
| 5 | [2025-11-24 13:19:16,837] stat1010/INFO/locust.runners: Ramping to 500 users at a rate of 5.00 per second |
| 6 | [2025-11-24 13:20:55,994] stat1010/INFO/locust.runners: All users spawned: {"RevertriskWikidata": 500} (500 total users) |
| 7 | [2025-11-24 13:21:16,348] stat1010/INFO/locust.main: --run-time limit reached, shutting down |
| 8 | Load test results are within the threshold |
| 9 | [2025-11-24 13:21:16,556] stat1010/INFO/locust.main: Shutting down (exit code 1) |
| 10 | Type Name # reqs # fails | Avg Min Max Med | req/s failures/s |
| 11 | --------|----------------------------------------------------------------------------|-------|-------------|-------|-------|-------|-------|--------|----------- |
| 12 | POST /v1/models/revertrisk-wikidata:predict 1202 33(2.75%) | 18076 472 46826 12000 | 10.05 0.28 |
| 13 | --------|----------------------------------------------------------------------------|-------|-------------|-------|-------|-------|-------|--------|----------- |
| 14 | Aggregated 1202 33(2.75%) | 18076 472 46826 12000 | 10.05 0.28 |
| 15 | |
| 16 | Response time percentiles (approximated) |
| 17 | Type Name 50% 66% 75% 80% 90% 95% 98% 99% 99.9% 99.99% 100% # reqs |
| 18 | --------|--------------------------------------------------------------------------------|--------|------|------|------|------|------|------|------|------|------|------|------ |
| 19 | POST /v1/models/revertrisk-wikidata:predict 12000 26000 32000 35000 41000 42000 44000 45000 47000 47000 47000 1202 |
| 20 | --------|--------------------------------------------------------------------------------|--------|------|------|------|------|------|------|------|------|------|------|------ |
| 21 | Aggregated 12000 26000 32000 35000 41000 42000 44000 45000 47000 47000 47000 1202 |
| 22 | |
| 23 | Error report |
| 24 | # occurrences Error |
| 25 | ------------------|--------------------------------------------------------------------------------------------------------------------------------------------- |
| 26 | 33 POST /v1/models/revertrisk-wikidata:predict: BadStatusCode('https://inference-staging.svc.codfw.wmnet:30443/v1/models/revertrisk-wikidata:predict', code=502) |
| 27 | ------------------|--------------------------------------------------------------------------------------------------------------------------------------------- |
| 28 | |
| 29 | +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ |
| 30 | |
| 31 | # 500 users | 2 per second |
| 32 | $ MODEL_LOCUST_DIR="revertrisk_wikidata" make run-locust-test |
| 33 | [2025-11-24 13:13:03,964] stat1010/INFO/locust.main: Run time limit set to 120 seconds |
| 34 | [2025-11-24 13:13:03,964] stat1010/INFO/locust.main: Starting Locust 2.31.5 |
| 35 | [2025-11-24 13:13:03,965] stat1010/INFO/locust.runners: Ramping to 500 users at a rate of 2.00 per second |
| 36 | [2025-11-24 13:15:03,496] stat1010/INFO/locust.main: --run-time limit reached, shutting down |
| 37 | Load test results are within the threshold |
| 38 | [2025-11-24 13:15:03,651] stat1010/INFO/locust.main: Shutting down (exit code 1) |
| 39 | Type Name # reqs # fails | Avg Min Max Med | req/s failures/s |
| 40 | --------|----------------------------------------------------------------------------|-------|-------------|-------|-------|-------|-------|--------|----------- |
| 41 | POST /v1/models/revertrisk-wikidata:predict 879 9(1.02%) | 10939 474 25179 11000 | 7.35 0.08 |
| 42 | --------|----------------------------------------------------------------------------|-------|-------------|-------|-------|-------|-------|--------|----------- |
| 43 | Aggregated 879 9(1.02%) | 10939 474 25179 11000 | 7.35 0.08 |
| 44 | |
| 45 | Response time percentiles (approximated) |
| 46 | Type Name 50% 66% 75% 80% 90% 95% 98% 99% 99.9% 99.99% 100% # reqs |
| 47 | --------|--------------------------------------------------------------------------------|--------|------|------|------|------|------|------|------|------|------|------|------ |
| 48 | POST /v1/models/revertrisk-wikidata:predict 11000 14000 16000 17000 20000 21000 23000 24000 25000 25000 25000 879 |
| 49 | --------|--------------------------------------------------------------------------------|--------|------|------|------|------|------|------|------|------|------|------|------ |
| 50 | Aggregated 11000 14000 16000 17000 20000 21000 23000 24000 25000 25000 25000 879 |
| 51 | |
| 52 | Error report |
| 53 | # occurrences Error |
| 54 | ------------------|--------------------------------------------------------------------------------------------------------------------------------------------- |
| 55 | 9 POST /v1/models/revertrisk-wikidata:predict: BadStatusCode('https://inference-staging.svc.codfw.wmnet:30443/v1/models/revertrisk-wikidata:predict', code=502) |
| 56 | ------------------|--------------------------------------------------------------------------------------------------------------------------------------------- |
| 57 | |
| 58 | +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ |
| 59 | |
| 60 | # 100 users | 5 per second |
| 61 | $ MODEL_LOCUST_DIR="revertrisk_wikidata" make run-locust-test |
| 62 | [2025-11-24 13:26:48,568] stat1010/INFO/locust.main: Run time limit set to 120 seconds |
| 63 | [2025-11-24 13:26:48,568] stat1010/INFO/locust.main: Starting Locust 2.31.5 |
| 64 | [2025-11-24 13:26:48,569] stat1010/INFO/locust.runners: Ramping to 100 users at a rate of 5.00 per second |
| 65 | [2025-11-24 13:27:07,640] stat1010/INFO/locust.runners: All users spawned: {"RevertriskWikidata": 100} (100 total users) |
| 66 | [2025-11-24 13:28:48,102] stat1010/INFO/locust.main: --run-time limit reached, shutting down |
| 67 | Load test results are within the threshold |
| 68 | [2025-11-24 13:28:48,215] stat1010/INFO/locust.main: Shutting down (exit code 1) |
| 69 | Type Name # reqs # fails | Avg Min Max Med | req/s failures/s |
| 70 | --------|----------------------------------------------------------------------------|-------|-------------|-------|-------|-------|-------|--------|----------- |
| 71 | POST /v1/models/revertrisk-wikidata:predict 1742 4(0.23%) | 3314 81 6776 3400 | 14.58 0.03 |
| 72 | --------|----------------------------------------------------------------------------|-------|-------------|-------|-------|-------|-------|--------|----------- |
| 73 | Aggregated 1742 4(0.23%) | 3314 81 6776 3400 | 14.58 0.03 |
| 74 | |
| 75 | Response time percentiles (approximated) |
| 76 | Type Name 50% 66% 75% 80% 90% 95% 98% 99% 99.9% 99.99% 100% # reqs |
| 77 | --------|--------------------------------------------------------------------------------|--------|------|------|------|------|------|------|------|------|------|------|------ |
| 78 | POST /v1/models/revertrisk-wikidata:predict 3400 3800 4000 4200 4600 4900 5400 5700 6500 6800 6800 1742 |
| 79 | --------|--------------------------------------------------------------------------------|--------|------|------|------|------|------|------|------|------|------|------|------ |
| 80 | Aggregated 3400 3800 4000 4200 4600 4900 5400 5700 6500 6800 6800 1742 |
| 81 | |
| 82 | Error report |
| 83 | # occurrences Error |
| 84 | ------------------|--------------------------------------------------------------------------------------------------------------------------------------------- |
| 85 | 4 POST /v1/models/revertrisk-wikidata:predict: BadStatusCode('https://inference-staging.svc.codfw.wmnet:30443/v1/models/revertrisk-wikidata:predict', code=502) |
| 86 | ------------------|--------------------------------------------------------------------------------------------------------------------------------------------- |
For dewiki we spotted an issue, described in T407155#11311194, regarding the many English samples used for training the model on dewiki. To overcome this, I used translation only where English samples exist inside the dewiki dataset.
You can try tweaking the filters in the notebook, such as loosening the diff size conditions, expanding the revert time periods, or asking the community for more signals if possible.
The issue I am facing in reproducing the error is that we log the incoming request if it is successful (status code 200), but we do not log it if it is not.
We need to log it immediately after we receive it in order to reproduce the error.
I will open a ticket for upgrading the logging on the model server side: https://phabricator.wikimedia.org/T409931
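What I have in mind looks roughly like this; the handler and function names are illustrative stand-ins, not the actual inference-services code:

```python
# Sketch: log the raw payload before any validation or inference can fail,
# so failing requests are captured too. run_model is a stand-in stub.
import logging

logger = logging.getLogger("revertrisk")

def run_model(payload: dict) -> dict:
    # Stand-in for the real prediction step; raises like a failing request.
    if "rev_id" not in payload:
        raise ValueError("missing rev_id")
    return {"rev_id": payload["rev_id"], "prediction": None}

def handle_predict(payload: dict) -> dict:
    # Log first, before anything can go wrong.
    logger.info("incoming request: %s", payload)
    try:
        result = run_model(payload)
        logger.info("request succeeded (200)")
        return result
    except Exception:
        logger.exception("request failed")
        raise
```

With the current code the payload is only visible on the success path, so a 5xx leaves no trace of what was sent; moving the log to the top of the handler fixes that.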
In T409657#11358844, @elukey wrote:@gkyziridis I am not 100% sure if the rev-id in the task's description is the problematic one, I thought it was when checking the logs but you may need to review /home/elukey/T409657 on deploy2002 to get other testing samples :(
# Request
$ curl -i -X POST localhost:8080/v1/models/revertrisk-multilingual:predict \
  -d '{"lang": "ru", "rev_id": 149673768}'