Refinery Spark HiveExtensions schema merge should support merging of arrays with struct elements
Closed, Resolved · Public · 8 Story Points

Description

We just closed T197000: Modify revision-score schema so that model probabilities won't conflict and started Refining revision-score events. The new revision-score schema has an array of scores, each with a struct schema.

Currently, our schema merging code will only merge sub struct schemas, NOT arrays that have StructType elements. We need to add detection for array fields with StructType elements, and then merge those element StructTypes as well.
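The needed recursion can be sketched abstractly. This is an illustrative Python sketch using plain dicts to stand in for Spark's `StructType`/`ArrayType`; the actual HiveExtensions code is Scala, and the helper name here is hypothetical:

```python
def merge_schemas(a, b):
    """Recursively merge two schema nodes.

    dicts stand in for StructType (field name -> type),
    single-element lists stand in for ArrayType (element type).
    """
    if isinstance(a, dict) and isinstance(b, dict):
        # struct: union of fields, merging shared fields recursively
        merged = dict(a)
        for name, typ in b.items():
            merged[name] = merge_schemas(a[name], typ) if name in a else typ
        return merged
    if isinstance(a, list) and isinstance(b, list):
        # array: merge the element types -- the case this task adds,
        # so arrays of structs gain new struct fields too
        return [merge_schemas(a[0], b[0])]
    if a != b:
        raise ValueError(f"incompatible types: {a!r} vs {b!r}")
    return a

table = {"scores": [{"model_name": "string"}]}
incoming = {"scores": [{"model_name": "string",
                        "error": {"message": "string", "type": "string"}}]}

merged = merge_schemas(table, incoming)
# merged["scores"][0] now has both model_name and the error struct
```

Without the array branch, the two `scores` types compare unequal and the merge falls through, which is what produces the cast failure in the log below.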

This was detected via the following error; you can reproduce it with this command:

sudo -u hdfs /usr/bin/spark2-submit \
--name refine_eventlogging_eventbus_otto_revision_score --driver-java-options='-Drefine.log.level=DEBUG' \
--class org.wikimedia.analytics.refinery.job.refine.Refine \
--files /etc/hive/conf/hive-site.xml,/etc/refinery/refine/refine_eventlogging_eventbus.properties,/srv/deployment/analytics/refinery/artifacts/hive-jdbc-1.1.0-cdh5.10.0.jar,/srv/deployment/analytics/refinery/artifacts/hive-service-1.1.0-cdh5.10.0.jar --conf spark.driver.extraClassPath=/usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-common.jar:hive-jdbc-1.1.0-cdh5.10.0.jar:hive-service-1.1.0-cdh5.10.0.jar --conf spark.dynamicAllocation.maxExecutors=64  \
/srv/deployment/analytics/refinery/artifacts/org/wikimedia/analytics/refinery/refinery-job-0.0.75.jar \
--config_file /etc/refinery/refine/refine_eventlogging_eventbus.properties \
--since 2018-11-26T18:00:00 --until 2018-11-26T19:00:00 \
--table_whitelist_regex='^mediawiki_revision_score$' --ignore_failure_flag=true

...


18/11/26 22:54:57 DEBUG Refine: Applying org.wikimedia.analytics.refinery.job.refine.deduplicate_eventbus$@157856e0 to `event`.`mediawiki_revision_score` (datacenter="eqiad",year=2018,month=11,day=26,hour=18)
18/11/26 22:54:57 DEBUG deduplicate_eventbus: Dropping duplicates based on `meta.id` field in `event`.`mediawiki_revision_score` (datacenter="eqiad",year=2018,month=11,day=26,hour=18)
18/11/26 22:55:02 DEBUG HiveExtensions: Converting DataFrame using SQL query:
SELECT database AS database, NAMED_STRUCT('domain', meta.domain, 'dt', meta.dt, 'id', meta.id, 'request_id', meta.request_id, 'schema_uri', meta.schema_uri, 'topic', meta.topic, 'uri', meta.uri) AS meta, page_id AS page_id, page_namespace AS page_namespace, page_title AS page_title, rev_id AS rev_id, rev_parent_id AS rev_parent_id, rev_timestamp AS rev_timestamp, CAST(scores AS array<struct<model_name:string,model_version:string,prediction:array<string>,probability:array<struct<name:string,value:double>>>>) AS scores, datacenter AS datacenter, year AS year, month AS month, day AS day, hour AS hour FROM t_fd3875eb220e422793b7373f1c7a1718
18/11/26 22:55:02 ERROR Refine: Failed refinement of dataset hdfs://analytics-hadoop/wmf/data/raw/event/eqiad_mediawiki_revision-score/hourly/2018/11/26/18 -> `event`.`mediawiki_revision_score` (datacenter="eqiad",year=2018,month=11,day=26,hour=18).
org.apache.spark.sql.AnalysisException: cannot resolve 't_fd3875eb220e422793b7373f1c7a1718.`scores`' due to data type mismatch: cannot cast array<struct<error:struct<message:string,type:string>,model_name:string,model_version:string,prediction:array<string>,probability:array<struct<name:string,value:double>>>> to array<struct<model_name:string,model_version:string,prediction:array<string>,probability:array<struct<name:string,value:double>>>>; line 1 pos 366;
'Project [database#186 AS database#315, named_struct(domain, meta#187.domain, dt, meta#187.dt, id, meta#187.id, request_id, meta#187.request_id, schema_uri, meta#187.schema_uri, topic, meta#187.topic, uri, meta#187.uri) AS meta#316, page_id#188L AS page_id#317L, page_namespace#189L AS page_namespace#318L, page_title#190 AS page_title#319, rev_id#191L AS rev_id#320L, rev_parent_id#192L AS rev_parent_id#321L, rev_timestamp#193 AS rev_timestamp#322, cast(scores#194 as array<struct<model_name:string,model_version:string,prediction:array<string>,probability:array<struct<name:string,value:double>>>>) AS scores#323, datacenter#195 AS datacenter#324, year#196L AS year#325L, month#197L AS month#326L, day#198L AS day#327L, hour#199L AS hour#328L]
+- SubqueryAlias t_fd3875eb220e422793b7373f1c7a1718
   +- Repartition 3, true
      +- LogicalRDD [database#186, meta#187, page_id#188L, page_namespace#189L, page_title#190, rev_id#191L, rev_parent_id#192L, rev_timestamp#193, scores#194, datacenter#195, year#196L, month#197L, day#198L, hour#199L], false

	at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:93)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:85)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
	at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
	at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:95)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:95)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:107)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:107)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
	at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:106)
	at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:118)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1$1.apply(QueryPlan.scala:122)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.immutable.List.map(List.scala:285)
	at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:122)
	at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:127)
	at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
	at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:127)
	at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:95)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:85)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:80)
	at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:80)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:92)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:641)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
	at org.wikimedia.analytics.refinery.spark.sql.HiveExtensions$DataFrameExtensions.convertToSchema(HiveExtensions.scala:631)
	at org.wikimedia.analytics.refinery.spark.connectors.DataFrameToHive$.apply(DataFrameToHive.scala:155)
	at org.wikimedia.analytics.refinery.job.refine.Refine$$anonfun$refineTargets$1.apply(Refine.scala:441)
	at org.wikimedia.analytics.refinery.job.refine.Refine$$anonfun$refineTargets$1.apply(Refine.scala:436)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
	at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:74)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
	at scala.collection.AbstractTraversable.map(Traversable.scala:104)
	at org.wikimedia.analytics.refinery.job.refine.Refine$.refineTargets(Refine.scala:436)
	at org.wikimedia.analytics.refinery.job.refine.Refine$$anonfun$8.apply(Refine.scala:347)
	at org.wikimedia.analytics.refinery.job.refine.Refine$$anonfun$8.apply(Refine.scala:341)
	at scala.collection.parallel.AugmentedIterableIterator$class.map2combiner(RemainsIterator.scala:115)
	at scala.collection.parallel.immutable.ParHashMap$ParHashMapIterator.map2combiner(ParHashMap.scala:76)
	at scala.collection.parallel.ParIterableLike$Map.leaf(ParIterableLike.scala:1054)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
	at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
	at scala.collection.parallel.ParIterableLike$Map.tryLeaf(ParIterableLike.scala:1051)
	at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
	at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
	at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinTask.doJoin(ForkJoinTask.java:341)
	at scala.concurrent.forkjoin.ForkJoinTask.join(ForkJoinTask.java:673)
	at scala.collection.parallel.ForkJoinTasks$WrappedTask$class.sync(Tasks.scala:378)
	at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.sync(Tasks.scala:443)
	at scala.collection.parallel.ForkJoinTasks$class.executeAndWaitResult(Tasks.scala:426)
	at scala.collection.parallel.ForkJoinTaskSupport.executeAndWaitResult(TaskSupport.scala:56)
	at scala.collection.parallel.ExecutionContextTasks$class.executeAndWaitResult(Tasks.scala:558)
	at scala.collection.parallel.ExecutionContextTaskSupport.executeAndWaitResult(TaskSupport.scala:80)
	at scala.collection.parallel.ParIterableLike$ResultMapping.leaf(ParIterableLike.scala:958)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply$mcV$sp(Tasks.scala:49)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
	at scala.collection.parallel.Task$$anonfun$tryLeaf$1.apply(Tasks.scala:48)
	at scala.collection.parallel.Task$class.tryLeaf(Tasks.scala:51)
	at scala.collection.parallel.ParIterableLike$ResultMapping.tryLeaf(ParIterableLike.scala:953)
	at scala.collection.parallel.AdaptiveWorkStealingTasks$WrappedTask$class.compute(Tasks.scala:152)
	at scala.collection.parallel.AdaptiveWorkStealingForkJoinTasks$WrappedTask.compute(Tasks.scala:443)
	at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
18/11/26 22:55:02 ERROR Refine: The following 1 of 1 dataset partitions for table `event`.`mediawiki_revision_score` failed refinement:
	Failure(org.wikimedia.analytics.refinery.job.refine.RefineTargetException: Failed refinement of hdfs://analytics-hadoop/wmf/data/raw/event/eqiad_mediawiki_revision-score/hourly/2018/11/26/18 -> `event`.`mediawiki_revision_score` (datacenter="eqiad",year=2018,month=11,day=26,hour=18). Original exception: org.apache.spark.sql.AnalysisException: cannot resolve 't_fd3875eb220e422793b7373f1c7a1718.`scores`' due to data type mismatch: cannot cast array<struct<error:struct<message:string,type:string>,model_name:string,model_version:string,prediction:array<string>,probability:array<struct<name:string,value:double>>>> to array<struct<model_name:string,model_version:string,prediction:array<string>,probability:array<struct<name:string,value:double>>>>; line 1 pos 366;
'Project [database#186 AS database#315, named_struct(domain, meta#187.domain, dt, meta#187.dt, id, meta#187.id, request_id, meta#187.request_id, schema_uri, meta#187.schema_uri, topic, meta#187.topic, uri, meta#187.uri) AS meta#316, page_id#188L AS page_id#317L, page_namespace#189L AS page_namespace#318L, page_title#190 AS page_title#319, rev_id#191L AS rev_id#320L, rev_parent_id#192L AS rev_parent_id#321L, rev_timestamp#193 AS rev_timestamp#322, cast(scores#194 as array<struct<model_name:string,model_version:string,prediction:array<string>,probability:array<struct<name:string,value:double>>>>) AS scores#323, datacenter#195 AS datacenter#324, year#196L AS year#325L, month#197L AS month#326L, day#198L AS day#327L, hour#199L AS hour#328L]
+- SubqueryAlias t_fd3875eb220e422793b7373f1c7a1718
   +- Repartition 3, true
      +- LogicalRDD [database#186, meta#187, page_id#188L, page_namespace#189L, page_title#190, rev_id#191L, rev_parent_id#192L, rev_timestamp#193, scores#194, datacenter#195, year#196L, month#197L, day#198L, hour#199L], false
Restricted Application added a subscriber: Aklapper. · Mon, Nov 26, 10:55 PM

Input schema has:

|-- scores: array (nullable = true)
|    |-- element: struct (containsNull = true)
|    |    |-- error: struct (nullable = true)
|    |    |    |-- message: string (nullable = true)
|    |    |    |-- type: string (nullable = true)
|    |    |-- model_name: string (nullable = true)
|    |    |-- model_version: string (nullable = true)
|    |    |-- prediction: array (nullable = true)
|    |    |    |-- element: string (containsNull = true)
|    |    |-- probability: array (nullable = true)
|    |    |    |-- element: struct (containsNull = true)
|    |    |    |    |-- name: string (nullable = true)
|    |    |    |    |-- value: double (nullable = true)

whereas Hive table schema has:

|-- scores: array (nullable = true)
|    |-- element: struct (containsNull = true)
|    |    |-- model_name: string (nullable = true)
|    |    |-- model_version: string (nullable = true)
|    |    |-- prediction: array (nullable = true)
|    |    |    |-- element: string (containsNull = true)
|    |    |-- probability: array (nullable = true)
|    |    |    |-- element: struct (containsNull = true)
|    |    |    |    |-- name: string (nullable = true)
|    |    |    |    |-- value: double (nullable = true)
|-- datacenter: string (nullable = true)

Change 476093 had a related patch set uploaded (by Joal; owner: Joal):
[analytics/refinery/source@master] Update the unit-test for Dataframe conversion

https://gerrit.wikimedia.org/r/476093

Ottomata added a subscriber: Pchelolo. · Edited · Tue, Nov 27, 8:49 PM

Hmm ok this is complicated.

Complex type changes are always hard, but it seems they are extra hard when complex types are nested inside other complex types. In this case, we started with scores as an array of structs that did not have this error field. The code now needs to add the error field to the struct type inside of the array. I can fix this part and am working on it.

However, there's a problem going the other way. If the incoming JSON data is missing a field in a struct inside of an array, Hive won't be able to insert it. E.g.:

CREATE TABLE `arr2`(
  `a1` array<struct<f1:string,f2:string>>)

insert into table arr2  select array(NAMED_STRUCT('f1', "hi")) as a1;

FAILED: SemanticException [Error 10044]: Line 1:18 Cannot insert into target table because column number/types are different 'arr2': Cannot convert column 0 from array<struct<f1:string>> to array<struct<f1:string,f2:string>>.

But

insert into table arr2  select array(NAMED_STRUCT('f1', "hi", 'f2', "yo")) as a1;

Works fine.

There's also no way to cast the missing field.

@Pchelolo this basically means that all fields inside of an object in an array are required, even if they are set to null. If there is no error, the error field should be null, and if there is no probability (because of an error), it should be null too.
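For example, a hypothetical score entry where the model errored would have to carry the unused fields explicitly (values here are made up for illustration):

```
{
  "model_name": "itemquality",
  "model_version": "0.2.0",
  "prediction": null,
  "probability": null,
  "error": {
    "type": "JSONDecodeError",
    "message": "Expecting property name enclosed in double quotes"
  }
}
```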

Ah, rats, and in order to even support setting those to null, we need to change the schema to allow nulls.

error:
  type: [object, "null"]
  properties:
    type:
      description: The short name of this error
      type: string
    message:
      description: A human-readable explanation of what went wrong
      type: string

Change 476179 had a related patch set uploaded (by Ottomata; owner: Ottomata):
[analytics/refinery/source@master] [WIP] HiveExtensions StructType .merge supports Arrays and Maps with complex types

https://gerrit.wikimedia.org/r/476179

Ottomata added a comment. · Edited · Wed, Nov 28, 6:09 PM

@Pchelolo, we discussed this more in standup today, and we thought juuust a bit more about the scores model.

We think that it would be best to separate out errors from actual scores. E.g.

scores: [
{
  model_name: ...
  prediction: ...
  probability: ...
},
...
],
errors: [
{
  model_name: ...
  type: ...
  message: ...
}]

That way you don't need any query conditionals to figure out whether an entry in the scores array holds a prediction or an error. This makes it clear that scores contains actual scores, and any models that caused errors during scoring are listed separately. It will also solve this bug and not require us to always set error: null.

This means we need to change the revision-score schema again, as well as the CP (ChangeProp) code that builds the event.

Change 476381 had a related patch set uploaded (by Ottomata; owner: Ottomata):
[mediawiki/event-schemas@master] Move errors out of scores object into its own array

https://gerrit.wikimedia.org/r/476381

Change 476381 merged by jenkins-bot:
[mediawiki/event-schemas@master] Move errors out of scores object into its own array

https://gerrit.wikimedia.org/r/476381

Change 476179 merged by jenkins-bot:
[analytics/refinery/source@master] HiveExtensions StructType .merge supports Arrays and Maps with complex types

https://gerrit.wikimedia.org/r/476179

Change 476601 had a related patch set uploaded (by Ottomata; owner: Ottomata):
[operations/puppet@production] Use refinery-job 0.0.81 for refine jobs

https://gerrit.wikimedia.org/r/476601

Change 476601 merged by Ottomata:
[operations/puppet@production] Use refinery-job 0.0.81 for refine jobs

https://gerrit.wikimedia.org/r/476601

Change 476615 had a related patch set uploaded (by Ottomata; owner: Ottomata):
[operations/puppet@production] Blacklist mediawiki_revision_score from refine again until we fix problem

https://gerrit.wikimedia.org/r/476615

Change 476615 merged by Ottomata:
[operations/puppet@production] Blacklist mediawiki_revision_score from refine again until we fix problem

https://gerrit.wikimedia.org/r/476615

Mentioned in SAL (#wikimedia-analytics) [2018-12-03T20:05:41Z] <ottomata> dropping and recreating hive event.mediawiki_revision_score table and data - T210465

Everything looks perfect from the CP side. Please close whenever you feel like it.

Ah, @Pchelolo one more (hopefully easy) change please.

@JAllemandou, we should have thought of this. If errors is an empty [] array, what will spark.json infer its element type as? It turns out: string. Since an hour of revision-score events might not contain any errors, it's possible that the Hive table that gets created has errors as an array of strings. This results in:

CAST(errors AS array<string>) AS errors

and an errors element like:

"[JSONDecodeError: Failed to process datasource.wikibase.revision.item_doc: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)\nTraceback (most recent call last):\n  File \"/srv/deployment/ores/deploy-cache/revs/9b9ba06265c9191a0087ecdf25fbef712c642953/venv/lib/python3.5/site-packages/revscoring/dependencies/functions.py\", line 244, in _solve\n    value = dependent(*args)\n  File \"/srv/deployment/ores/deploy-cache/revs/9b9ba06265c9191a0087ecdf25fbef712c642953/venv/lib/python3.5/site-packages/revscoring/dependencies/dependent.py\", line 54, in __call__\n    return self.process(*args, **kwargs)\n  File \"/srv/deployment/ores/deploy-cache/revs/9b9ba06265c9191a0087ecdf25fbef712c642953/venv/lib/python3.5/site-packages/revscoring/features/wikibase/datasources/revision_oriented.py\", line 108, in _process_item_doc\n    return json.loads(text)\n  File \"/usr/lib/python3.5/json/__init__.py\", line 319, in loads\n    return _default_decoder.decode(s)\n  File \"/usr/lib/python3.5/json/decoder.py\", line 339, in decode\n    obj, end = self.raw_decode(s, idx=_w(s, 0).end())\n  File \"/usr/lib/python3.5/json/decoder.py\", line 355, in raw_decode\n    obj, end = self.scan_once(s, idx)\njson.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)\n, itemquality, 0.2.0, CaughtDependencyError]"

So, the job doesn't fail, but the errors elements will always be cast to strings.

The best fix would be to enforce the constraint that if an array is present, it always has at least one element.
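That constraint amounts to dropping empty arrays from the event before it is produced. A minimal sketch of the idea, in Python for illustration (the actual ChangeProp code is Node.js; the helper name is hypothetical):

```python
def drop_empty_arrays(event):
    """Return a copy of the event with empty array fields removed,
    so downstream schema inference never sees a type-less [] value."""
    return {k: v for k, v in event.items()
            if not (isinstance(v, list) and len(v) == 0)}

event = {"rev_id": 12345,
         "scores": [{"model_name": "itemquality"}],
         "errors": []}
cleaned = drop_empty_arrays(event)
# "errors" is omitted entirely; "scores" is kept since it has an element
```

With the empty array absent, spark.json simply sees no `errors` field for that event and keeps the element type declared by events that do carry errors.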

@Pchelolo, we were wrong. Can we deploy a change to CP to not provide scores or errors at all if there are no elements?

Mentioned in SAL (#wikimedia-operations) [2018-12-04T17:45:23Z] <ppchelko@deploy1001> Started deploy [changeprop/deploy@e1aeb27]: Do not initialize scores and errors arrays in advance T210465

Mentioned in SAL (#wikimedia-operations) [2018-12-04T17:46:36Z] <ppchelko@deploy1001> Finished deploy [changeprop/deploy@e1aeb27]: Do not initialize scores and errors arrays in advance T210465 (duration: 01m 13s)

Change 477605 had a related patch set uploaded (by Ottomata; owner: Ottomata):
[operations/puppet@production] Re-enable revision-score refinement (again)

https://gerrit.wikimedia.org/r/477605

Change 477605 merged by Ottomata:
[operations/puppet@production] Re-enable revision-score refinement (again)

https://gerrit.wikimedia.org/r/477605

LOOKING GOOD!

18/12/04 22:20:58 INFO DataFrameToHive: Writing DataFrame to /wmf/data/event/mediawiki_revision_score/datacenter=eqiad/year=2018/month=12/day=4/hour=19 with schema:
root
 |-- database: string (nullable = true)
 |-- errors: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- message: string (nullable = true)
 |    |    |-- model_name: string (nullable = true)
 |    |    |-- model_version: string (nullable = true)
 |    |    |-- type: string (nullable = true)
 |-- meta: struct (nullable = true)
 |    |-- domain: string (nullable = true)
 |    |-- dt: string (nullable = true)
 |    |-- id: string (nullable = true)
 |    |-- request_id: string (nullable = true)
 |    |-- schema_uri: string (nullable = true)
 |    |-- topic: string (nullable = true)
 |    |-- uri: string (nullable = true)
 |-- page_id: long (nullable = true)
 |-- page_namespace: long (nullable = true)
 |-- page_title: string (nullable = true)
 |-- rev_id: long (nullable = true)
 |-- rev_parent_id: long (nullable = true)
 |-- rev_timestamp: string (nullable = true)
 |-- scores: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- model_name: string (nullable = true)
 |    |    |-- model_version: string (nullable = true)
 |    |    |-- prediction: array (nullable = true)
 |    |    |    |-- element: string (containsNull = true)
 |    |    |-- probability: array (nullable = true)
 |    |    |    |-- element: struct (containsNull = true)
 |    |    |    |    |-- name: string (nullable = true)
 |    |    |    |    |-- value: double (nullable = true)
 |-- datacenter: string (nullable = true)
 |-- year: long (nullable = true)
 |-- month: long (nullable = true)
 |-- day: long (nullable = true)
 |-- hour: long (nullable = true)
Ottomata triaged this task as Normal priority. · Tue, Dec 4, 10:42 PM
Ottomata claimed this task.
Ottomata added a project: Analytics-Kanban.
Ottomata set the point value for this task to 8.
Ottomata moved this task from Next Up to Done on the Analytics-Kanban board.
fdans closed this task as Resolved. · Mon, Dec 10, 5:08 PM