While working on uploading the first datasets, write down an outline of all the steps involved, from raw data to live bot run. This outline can then serve as a framework for working with WLM datasets on Phabricator.
Have a look at T139335 for how this has been done for batch uploads.
Related: T156889.
-------
* Set up a milestone on Phabricator under #Connected-Open-Heritage-Wikidata-migration, named after the db table -- e.g. `se-arbetsl`.
* Set up a page under https://www.wikidata.org/wiki/Wikidata:WikiProject_WLM/Mapping_tables.
** Fill it out with sample data (a sketch for inspecting the source table follows this list).
** Note: as of now, these are all created and filled out by [[ https://github.com/Vesihiisi/COH-tools/blob/master/create_mapping_tables.py | this script ]], which only needs to be rerun when a new table is added to the WLM db.
* Identify the **heritage status**. Do all the items represent the same type of heritage protection (e.g. //national monument in <country>//)?
** If not, how can the heritage status of each item be inferred?
** Create or edit any necessary items, e.g. [[ https://www.wikidata.org/wiki/Q385405 | cultural monument of the Czech Republic (Q385405) ]]. Such an item should at least have a country assigned and be a subclass of cultural property / national heritage site. (A query sketch for finding existing candidate items follows this list.)
...
* Identify the **identifier** -- which field holds it, and is it present and unique for every row?
* Identify **`P31`** (instance of) -- what type of object does each row describe?
* Are any fields problematic due to language? (A profiling sketch covering these last three points follows.)
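
A minimal sketch of the table inspection mentioned above, assuming Toolforge access; the host, database and table names follow the usual monuments-db conventions but are assumptions here, not confirmed:

```
# Sketch: inspect a WLM monuments table before filling out the
# mapping-table page. The host, database and table names below are
# assumptions -- check the heritage tool's config for the real ones.
import pymysql

conn = pymysql.connect(
    host="tools.db.svc.wikimedia.cloud",   # assumed Toolforge DB host
    db="s51138__heritage_p",               # assumed monuments DB name
    read_default_file="~/replica.my.cnf",  # Toolforge credentials file
    charset="utf8mb4",
)
with conn.cursor() as cur:
    # List the columns that will each need a row in the mapping table.
    cur.execute("DESCRIBE `monuments_se-arbetsl_(sv)`")
    for name, col_type, *_ in cur.fetchall():
        print(name, col_type)
    # Pull a few rows to use as sample data on the mapping page.
    cur.execute("SELECT * FROM `monuments_se-arbetsl_(sv)` LIMIT 3")
    for row in cur.fetchall():
        print(row)
conn.close()
```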
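For the heritage-status step, something like the following SPARQL query (wrapped in Python here) can show whether a suitable status item already exists before a new one is created. //cultural property// is assumed to be Q2065736 and the Czech Republic is Q213; double-check both QIDs on Wikidata:

```
# Sketch: list items that are subclasses of cultural property and tied
# to a given country, to check for an existing heritage-status item.
import requests

QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P279* wd:Q2065736 .  # subclass of: cultural property (assumed QID)
  ?item wdt:P17 wd:Q213 .        # country: Czech Republic
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
"""

r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "WLM-mapping-check/0.1"},  # WDQS wants a UA string
)
for binding in r.json()["results"]["bindings"]:
    print(binding["item"]["value"], binding["itemLabel"]["value"])
```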
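And a sketch for the last three points (identifier, `P31`, language), using the same assumed connection details as the first sketch; the column names `id`, `type` and `name` are hypothetical placeholders for whatever the table actually uses:

```
# Sketch: profile a monuments table to answer the identifier / P31 /
# language questions. Column names ("id", "type", "name") are
# hypothetical -- substitute the table's real columns.
import unicodedata

import pymysql

TABLE = "`monuments_se-arbetsl_(sv)`"  # assumed naming convention, as above


def non_latin(text):
    """Rough check: does the string contain non-ASCII, non-Latin characters?"""
    return any(
        ord(ch) > 127 and not unicodedata.name(ch, "").startswith("LATIN")
        for ch in text
    )


conn = pymysql.connect(
    host="tools.db.svc.wikimedia.cloud",   # assumed Toolforge DB host
    db="s51138__heritage_p",               # assumed monuments DB name
    read_default_file="~/replica.my.cnf",  # Toolforge credentials file
    charset="utf8mb4",
)
with conn.cursor() as cur:
    # Identifier: every row should have one, and it should be unique.
    cur.execute(f"SELECT COUNT(*), COUNT(DISTINCT id) FROM {TABLE}")
    total, distinct = cur.fetchone()
    print(f"{total} rows, {distinct} distinct ids")

    # P31: how many distinct object types will need mapping to items?
    cur.execute(f"SELECT type, COUNT(*) FROM {TABLE} GROUP BY type")
    for value, count in cur.fetchall():
        print(f"type {value!r}: {count} rows")

    # Language: flag names containing non-Latin characters, a rough
    # signal that a text field mixes languages or scripts.
    cur.execute(f"SELECT name FROM {TABLE} WHERE name IS NOT NULL")
    flagged = [n for (n,) in cur.fetchall() if non_latin(n)]
    print(f"{len(flagged)} names contain non-Latin characters")
conn.close()
```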