21:03:05 #startmeeting ArchCom content model storage (T105652) 21:03:05 Meeting started Wed Aug 17 21:03:05 2016 UTC and is due to finish in 60 minutes. The chair is robla. Information about MeetBot at http://wiki.debian.org/MeetBot. 21:03:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. 21:03:05 The meeting name has been set to 'archcom_content_model_storage__t105652_' 21:03:05 T105652: RfC: Content model storage - https://phabricator.wikimedia.org/T105652 21:03:15 #topic Please note: Channel is logged and publicly posted (DO NOT REMOVE THIS NOTE) | Logs: http://bots.wmflabs.org/~wm-bot/logs/%23wikimedia-office/ 21:03:31 #link https://phabricator.wikimedia.org/E261 21:03:47 #link https://phabricator.wikimedia.org/T105652 21:03:48 #link https://phabricator.wikimedia.org/T105652 21:03:54 #link https://phabricator.wikimedia.org/T142980 21:03:58 oops :) 21:04:08 * aude waves 21:04:16 #link https://phabricator.wikimedia.org/T142980 (DanielK_WMDE_ 's revised proposal) 21:04:33 smallint = 16 bits? 21:04:46 So, today I would like to discuss if and how we want to modify the way we store meta-data about revision content, in particular, how and where we store the content model and format. 21:04:56 The RFC legoktm proposed last year aimed to make the storage of content model and format more efficient (T105652). I'm concerned that the solution that was approved then would need to be reverted to add support for multiple content slots per revision (T107595). 21:04:57 T105652: RfC: Content model storage - https://phabricator.wikimedia.org/T105652 21:04:57 T107595: [RFC] Multi-Content Revisions - https://phabricator.wikimedia.org/T107595 21:05:06 TimStarling: Yes, per https://dev.mysql.com/doc/refman/5.5/en/integer-types.html 21:05:23 While the problems are unrelated, the solutions overlap. 
So I propose to kill two birds with one stone, and add a new table for content meta-data that will use the new efficient way to represent content model and format (T142980). 21:05:24 T142980: RFC: Create a content meta-data table - https://phabricator.wikimedia.org/T142980 21:05:28 The new table will also allow us to have more than one content "slot" per revision. And we won't have to add any columns to page, revision, or archive. 21:05:51 My goal for this meeting is to decide whether we want to implement legoktm's original proposal, or the modified one with the extra table. 21:05:59 is jynus around? 21:06:13 I am 21:06:22 ah, great! 21:06:41 #info primary question to resolve: do a) legoktm 's original T105652 b) DanielK_WMDE_ 's modification T142980 c) none of the above (stay with status quo) 21:06:41 So I'm not sure we'd need to add anything to page anyway (can join to revision) but the duplication between archive and revision is annoying and I like removing it by partially normalizing 21:06:42 do we have jcrespo here? 21:06:43 T105652: RfC: Content model storage - https://phabricator.wikimedia.org/T105652 21:06:43 T142980: RFC: Create a content meta-data table - https://phabricator.wikimedia.org/T142980 21:06:57 I am the one that you call jcrespo 21:07:08 jynus: do you expect us to use compression in the foreseeable future, and do you expect the extra joins to be faster than decompression? 21:07:21 compression? 21:07:22 jynus: ah, sorry. good to have you here! 21:07:30 oh, right... /me is clearly consused 21:07:32 ;) 21:07:40 :) 21:07:45 confused, even 21:07:59 anyway, that would be my main question about either proposal 21:08:21 jynus: thanks for being here this evening! 21:08:27 do you mean normalization?
21:08:34 my original plan wasn't to do joins, but to store the id => string mapping in a cache like APC since it would be mostly static once initialized 21:08:50 there are a few things that concern me 21:09:05 yea, i also foresee no joins for resolving the numeric ids. 21:09:22 if we have an extra content table, we'd have an extra join though, for many use cases. 21:09:24 #info 14:08:34 my original plan wasn't to do joins, but to store the id => string mapping in a cache like APC since it would be mostly static once initialized 21:09:26 without joins, it sounds like we would need to manually maintain some mapping 21:09:32 So just joins on (revision, content) 21:09:33 one is whether adding a new table with hundreds of millions of rows is justified, considering the performance implications due to loss of locality, compared to the proposal on the wiki page 21:09:48 how would this work for extensibility? 21:09:57 TimStarling: if we want MCR, we will have to do that anyway, sooner or later 21:10:03 another is the fact that ar_rev_id is not fully populated, there are 500k rows in enwiki.archive with ar_rev_id=null 21:10:07 i.e., custom models etc 21:10:16 gwicke: similar to namespace ids, though hopefully more consistently managed 21:10:31 Or else some config map 21:10:40 so we'd reserve numeric ranges to avoid conflicts? 21:10:42 whatever doesn't require storing useless strings will be faster 21:10:52 some of those rows were created before MW 1.5 and so there was never any rev_id for them in the first place 21:10:53 TimStarling: good point - but those are all legacy rows with default content model 21:11:01 So perhaps ok to have no matching row 21:11:01 TimStarling: could we just assign them an unused rev_id? 21:11:25 Heh 21:11:28 yes 21:11:42 i'd vote for that, then 21:12:05 That raises the specter perhaps of killing the demoralization between archive and revision by folding archive into revision.
That's a bigger issue tho 21:12:24 excellent Freudian slip there 21:12:28 brion, I am all for that, but maybe out of the scope of this RFC 21:12:29 brion: yea, i didn't really want to touch that today 21:12:30 :-) 21:12:34 Yeah 21:12:40 could we test the performance options ahead of time, to validate our intuition? 21:12:41 Baby steps! 21:12:41 (there's a different RfC for that! https://www.mediawiki.org/wiki/Requests_for_comment/Page_deletion) 21:13:19 gwicke, sure, I can set up a demo if you need it, if you provide the code 21:13:26 Gwicke good idea perhaps, since we expect to need this second table for multi content revisions in future even if we don't start it off now 21:13:40 Would be a similar but slightly different join 21:13:49 On the rev id and the role 21:14:03 do we have a db with a large-enough dataset that we could play with? 21:14:16 DanielK_WMDE_: unused or reserved? setting ar_rev_id to something that later gets assigned to an unrelated revision seems like asking for trouble 21:14:22 but, please do not fear joins without reasoning (even if your favorite storage system does not support them) 21:14:28 in some cases, we will be able to join page directly to content, and skip the revision table. in such cases, we'd not even add a join. 21:14:45 just insert rows into revision and immediately delete them 21:14:57 or maybe even insert and rollback, maybe that works 21:15:01 jynus: not fear, but it's good to measure before making decisions 21:15:04 tgr: rev_id would have to be bumped to something greater than any of the ids we used for ar_rev_id. 21:15:21 Aaaaanyway we're not doing that bit yet ;) 21:15:31 #info ar_rev_id is not fully populated on enwiki.
we can assign fresh revision ids though (and bump rev_id accordingly) 21:15:39 gwicke, sure, although I can guarantee no slowdown, but not a huge improvement right now 21:16:17 I'm especially curious if generic compression would achieve the same effect with less effort 21:16:18 TimStarling: i think you can even just insert the number you want, and the auto-increment keeps going after that. so just insert & delete one row. 21:16:31 the main issues happen when dataset doesn't fit into memory, which is exactly what I blocked (as the initial rolling in was going to do) 21:16:55 Ah DanielK_WMDE_ that reminds me 21:16:57 memory being the page cache? 21:17:08 or result set held in memory? 21:17:20 Do we need legacy rows or are empty left joins assumed to mean namespace default? 21:17:36 * jynus recommends gabriel reading about InnoDB buffer pool 21:18:10 jynus: I am aware of that, but am not sure if we configure that to use most memory, or rely on page cache 21:18:17 brion: we will need to construct legacy rows eventually, when we move the blob address into the content table. 21:18:18 also, does it hold compressed pages, or decompressed ones? 21:18:28 brion: whether we want/need it right away is up for discussion. 21:18:40 Ah good 21:18:54 yes, this is the only thing I can give to this discussion 21:19:08 doing the schema change 21:19:19 #info we will need to construct legacy rows eventually, when we move the blob address into the content table. 21:19:20 seems like this content table including MCR will be functionally equivalent to the text table 21:19:27 from one version to the other 21:19:28 #info 14:16:31 the main issues happen when dataset doesn't fit into memory, which is exactly what I blocked (as the initial rolling in was going to do) 21:19:36 is trivial 21:19:41 Hmmmm 21:19:42 if 21:19:42 except with a one-to-many mapping from revision to content 21:19:48 a) the table is small 21:19:53 which I think it will be 21:19:55 TimStarling: Yes.
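The insert-and-delete trick for bumping the auto-increment counter, mentioned above, can be sketched roughly as follows. This is a minimal stand-in using SQLite instead of MariaDB; the table shape and the 500000 figure are illustrative assumptions, not the real enwiki schema or data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# With the AUTOINCREMENT keyword, SQLite keeps counting past deleted rows,
# loosely mirroring the MySQL auto_increment behaviour described above.
conn.execute(
    "CREATE TABLE revision ("
    "  rev_id INTEGER PRIMARY KEY AUTOINCREMENT,"
    "  rev_comment TEXT)"
)

# Insert a row at the highest id we plan to hand out to archive rows...
conn.execute(
    "INSERT INTO revision (rev_id, rev_comment) VALUES (500000, 'bump')"
)
# ...and immediately delete it; the counter keeps going from there,
# so newly assigned ar_rev_id values can never collide with new revisions.
conn.execute("DELETE FROM revision WHERE rev_id = 500000")

cur = conn.execute("INSERT INTO revision (rev_comment) VALUES ('next')")
print(cur.lastrowid)  # 500001
```

Whether MySQL preserves the counter across a rollback (the "insert and rollback" variant floated above) is a separate question this sketch does not answer.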
21:20:04 TimStarling: do you envision changing text table instead? 21:20:07 b) the transactions that use them are small 21:20:20 ok... can I assume that there is still consensus on representing content model and format as integers, and have a mapping in the db and in memory? This was approved last year with legoktm's original proposal. 21:20:28 this is problematic, for example, for tables such as revision or commons image 21:20:37 DanielK_WMDE_: yes I've been assuming that holds :) 21:20:44 TimStarling: Well, the content table doesn't allow storing the revision data directly. It's either revision→content→text, or revision→content→(external something) 21:20:54 this is the only thing that I can say that may be relatively helpful, the rest is for you to decide 21:20:57 brion: no 21:21:09 ok 21:21:32 DanielK_WMDE_: I'm still wishing we had performance data to back up the assertion that this is the most efficient way to improve performance, and is worth the hassle of managing ids manually 21:21:35 anomie: right, so you still need the text table, but do you keep text_flags etc.? 21:21:57 gwicke: nobody is suggesting to manage them manually. 21:22:04 does ExternalStore continue to work only with text rows, or can it also work with content rows? 21:22:18 gwicke: if no mapping is found in the in-memory mapping, you assign a new one from an auto-increment field. done. 21:22:19 gwicke: so the alternatives are probably enums or another join to a table of mappings. They're all roughly equivalent theoretically but may perform differently, dunno 21:22:26 DanielK_WMDE_: previous suggestion was to set them up in the config like namespaces 21:22:44 say no to enums on a table as large as revision 21:22:50 ok in other cases 21:23:12 gwicke: that's not how i understand the approved rfc. let me check again... 21:23:14 jynus: I always hear enums are cheap to change. Lies?
:) 21:23:20 #info Discussion of DanielK_WMDE_'s question: "can I assume that there is still consensus on representing content model and format as integers, and have a mapping in the db and in memory?" 21:23:23 TimStarling: Is text_flags used for anything when text.old_text actually contains the content instead of an external-store address? If so, then it would still be used for that purpose when the text table is used at all. When external-store is used for everything, the address would be in cont_address and the text table shouldn't be needed at all. 21:23:24 they are cheap to add, brion 21:23:24 so compression is definitely not achieving similar savings? 21:23:31 (items) 21:23:53 Heh 21:23:54 but if you want to delete, it would be one of our most complex changes 21:23:57 (although "external store" is now named "blob store", I think) 21:23:58 Yikes 21:24:17 gwicke: well, yeah, storing them in config like namespaces would be more performant. but then you end up with a wikipage where extensions write down the ids their content models use, and you just cross your fingers and hope you don't conflict with anyone. Anyways, we discussed and rejected that last year.... 21:24:25 anomie: 1. yes, compression and legacy charset mapping 2. good question 21:24:54 gwicke: hm, the old rfc doesn't really say. but it would be easy to do, i already wrote some code for this. no need for manually managing ids. 21:25:07 DanielK_WMDE_, legoktm: okay, so it would be stored in the db, but some background task would stash the db data into some cache & update that when needed? 21:25:35 so, I said this many times: do not fear joins just for the sake of an extra table (but I do *not* care, config/table/whatever) 21:25:35 gwicke: no background task. on a cache miss, check the db. if the db doesn't have it, add it. done. 21:25:40 Why join when you can put a blob in memcached :) 21:25:58 what DanielK_WMDE_ said.
21:25:59 Quite effective for small sets like this yeah 21:26:01 DanielK_WMDE_: that's "when needed" ;) 21:26:14 #info jynus: I always hear enums are cheap to change. Lies? :) they are cheap to add [...] but if you want to delete, it would be one of our most complex changes 21:26:15 but yeah, that sounds doable 21:26:26 #info re managing ids for content models etc: on a cache miss, check the db. if the db doesn't have it, add it. 21:27:14 for example, 2 selects, I guarantee you will be slower than 1 single query with a join 21:27:20 I think people working with raw db replicas would appreciate having the mapping in a db table even if it's not used for joins in production 21:27:38 but again, not against memcache/config/whatever 21:27:52 brion: also, you need to persist the mapping, in case you lose the cache 21:27:54 so... tentative agreement to content model and format as int? can we rule out option (c) then, and look at (a) vs (b)? 21:27:58 Yeah 21:28:09 Definitely 21:28:12 well, ... 21:28:15 (just kidding.) 21:28:17 Lol 21:28:20 heh :P 21:28:38 #info tentative agreement to content model and format as int; we rule out option (c) then 21:28:39 which link has a b and c, sorry?
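The get-or-insert pattern recorded in the #info above (check a local cache, fall back to the database, assign a fresh auto-increment id on a true miss) can be sketched roughly like this. SQLite and a Python dict stand in for MariaDB and APC; the content_models table and the acquire_model_id name are illustrative assumptions, not the final schema or API.

```python
import sqlite3

# Hypothetical mapping table: string model names to small numeric ids.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE content_models ("
    "  model_id INTEGER PRIMARY KEY AUTOINCREMENT,"
    "  model_name TEXT NOT NULL UNIQUE)"
)

_cache = {}  # process-local cache, standing in for APC

def acquire_model_id(name):
    """Return the numeric id for a model name, assigning one on first use."""
    if name in _cache:
        return _cache[name]
    row = conn.execute(
        "SELECT model_id FROM content_models WHERE model_name = ?", (name,)
    ).fetchone()
    if row is None:
        # Cache miss and db miss: insert, letting auto-increment pick the id.
        cur = conn.execute(
            "INSERT INTO content_models (model_name) VALUES (?)", (name,)
        )
        model_id = cur.lastrowid
    else:
        model_id = row[0]
    _cache[name] = model_id
    return model_id

print(acquire_model_id("wikitext"))       # 1
print(acquire_model_id("wikibase-item"))  # 2
print(acquire_model_id("wikitext"))       # 1 (cache hit, no query)
```

Persisting the mapping in a table rather than in config is what lets extensions register models without coordinating reserved id ranges, and gives replica users something to join against.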
21:28:55 jynus: #info primary question to resolve: do a) legoktm 's original T105652 b) DanielK_WMDE_ 's modification T142980 c) none of the above (stay with status quo) 21:29:03 at this point I'm assuming that compression was tested & found to not be competitive 21:29:04 if you remember, the use of strings in the first place was quite a fraught compromise 21:29:17 Yes, let's rule out c) 21:29:19 T105652: RfC: Content model storage - https://phabricator.wikimedia.org/T105652 21:29:23 I asked for integers from the outset, and Daniel bluntly refused 21:29:36 so obviously yes, I am still in favour of using integers 21:29:39 T142980: RFC: Create a content meta-data table - https://phabricator.wikimedia.org/T142980 21:29:45 I think a) -> b) is easy to do, why do we want to do b directly (genuine question) 21:29:59 TimStarling: o_O i actually had that implemented, and changed it upon request... from... uh... i don't recall. 21:30:12 TimStarling: my implementation did expose the int ids though. not as nice as we are planning for now 21:30:18 #info I think a) -> b) is easy to do, why do we want to do b directly (genuine question) 21:30:19 my main concern is getting too blocked on a relatively complex feature 21:30:21 yeah, you started with integers, switched to strings, then came to me and I said "use integers" 21:31:04 and you said it was some reviewer who told you to use strings, and I said go back to them and tell them I said use integers 21:31:09 jynus: To avoid having to change the primary key on the table from (cont_revision) to (cont_revision,cont_role) once the table has millions of rows. You'd probably be in the best position to say whether that'd be a big deal or not. 21:31:26 and you said you didn't really like integers in the first place 21:31:28 are we talking about the small table? 21:31:37 TimStarling: i kind of remember it the other way... but whatever. no need to fight about that now. let's get rid of them. 21:31:43 the one that contains formats?
21:31:50 Ok so something like b will be needed later for multi content, but not strictly yet. If transitions from current to a, and a to b-prime are easy, then there's not a huge need to start with b 21:31:55 yea, i didn't like them to be exposed... 21:32:07 jynus: Oh, wait. Actually, (a) is putting rev_format_id and rev_model_id into revision, while (b) is making the content table. 21:32:38 then I vote for d 21:32:51 * anomie got confused between (a) and (b)'s basic versus medium versions 21:33:01 jynus: what's (d)? 21:33:22 stick to the original plan, then introduce the slot stuff afterwards 21:33:52 I do not think a and b are exclusive? 21:33:58 jynus: a) -> b) involved adding columns to page, revision, and archive, and then removing the same columns again later. i assumed that's not something you like. 21:34:09 jynus: So that's adding rev_format_id/rev_model_id (and similar to other tables) now, then later on make the content table and migrate to that? 21:34:11 it's definitely disruptive for tools on labs, and for extensions 21:34:27 WHAT? 21:34:46 ah, the extension thing could be a valid reason 21:35:04 er, why is it disruptive for extensions? 21:35:05 finally I get a good answer to my original question 21:35:20 I do not know if it is valid, the rest can tell me 21:35:23 it also means messing with the same code again, first changing it to use a different column, then changing it to use a different table 21:35:55 legoktm: for extensions that look at the database directly. rare, i agree. not rare for tools on labs. 21:35:57 wait, but the original change doesn't involve a schema change, right? 21:36:01 actually, it's the reason we have labs 21:36:07 just a new table? 21:36:31 jynus: the original change calls for columns to be added to page, revision, and archive. two columns each.
21:36:37 to the largest tables we have 21:36:40 DanielK_WMDE_: when I grepped a year ago, the only places that queried those columns directly were in core 21:36:54 jynus: which would then be removed again, when we have the content table. 21:37:12 weren't we going to reuse the existing columns and redefine their meaning? 21:37:16 legoktm: good to know, yea. extensions probably wouldn't 21:37:20 jynus: no. 21:37:25 if we plan MCRs anytime soon, I think doing work that will have to be redone with MCRs now is not smart 21:37:34 jynus: that would be even more disruptive 21:37:45 SMalyshev: We do. 21:37:45 but I will do that work on production (?) 21:37:55 and labs 21:38:12 obviously you are the code magicians 21:38:35 jynus: the problem with labs is not applying the schema change. the problem is breaking the tools that use the schema. 21:38:51 more changes -> more breakage 21:39:00 SMalyshev: I think transition work would be similar difficulty in either case on the mw internals 21:39:05 I call you this now, labs is not an issue 21:39:14 and MCR will break it anyway 21:39:19 James_F: we do what? 21:39:29 DanielK_WMDE_: I don't think we've ever promised database stability for labs. and I don't think we should worry about it tbh. 21:39:35 database schema stability* 21:39:36 +1 21:39:54 labs is too broken anyway (do not quote me on that) 21:39:56 brion: can't we make a model now that will make it easier? I mean sure we'll have to do work, but we could prepare for it 21:40:02 (it is all DBA's fault) 21:40:25 i'd like incremental steps, instead of doing one step, then undoing half of it again to do the second step. 21:40:34 I think the important question is 21:40:36 jynus: All the options add two small tables to map int->string for model and for format. Then option (a) adds new int columns to several tables, populates them based on the existing string columns, then drops the string columns.
Option (b) puts the new int columns into a "content" table with a FK back to the revision/archive tables, populates it from the existing string columns, then drops the string columns. Eventually we'll need to do option 21:40:36 (b) for the multi-content revisions anyway. 21:40:52 independently of what could be done 21:41:06 who is willing to work on this? 21:41:11 on both cases 21:41:13 ? 21:41:35 or on either case, I mean 21:41:41 me for option (b). not sure i can justify spending time on option (a), but i might. 21:41:48 willing as in "interested" or "has time" or 1&2? ;) 21:41:51 i can definitely help with reviewing in either case 21:42:00 :) 21:42:01 (note that I will do what devs told me, no matter what, infrastructure wise) 21:42:10 There's a side question as to whether option (b)'s primary key should start out as just (cont_revision), or if we should make it (cont_revision,cont_role) right away and just let cont_role always be 1 until we make the infrastructure for different values. 21:42:28 I'll pitch in if needed either way, but I've got other projects backing up :) 21:42:36 I am volunteering to do the implementation for (a), and can help review (b)/whatever 21:42:49 anomie: actually, i updated that - (cont_revision,cont_role) would be a unique key, we'd add an auto-increment field as a primary, for later use 21:43:24 legoktm: would you also be willing to do the bit that is the same for (a) and (b), namely the actual mapping stuff? 21:43:25 DanielK_WMDE_: Any particular reason to add an id field instead of a two-int PK? 21:44:06 anomie: yes, we can re-use content rows for multiple revisions that way. some slots will update only rarely. it's another bit of normalization. 21:44:10 give your thoughts, I do not have much to add, I personally incline towards small incremental changes if it was possible, but I cannot fairly enter here if I am not going to work on the code 21:44:15 anomie: we can also add that later, but the table is big.
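The table shape being discussed above can be sketched as follows: an auto-increment cont_id as surrogate primary key plus a UNIQUE (cont_revision, cont_role) key, with cont_role held at 1 until MCR lands. The column names, the integer model/format values, and the 'tt:' blob address are illustrative assumptions rather than the final schema, and SQLite here stands in for MariaDB.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE revision (
  rev_id INTEGER PRIMARY KEY,
  rev_page INTEGER NOT NULL
);
-- Auto-increment surrogate key plus a unique (cont_revision, cont_role)
-- key, as proposed above; one row per revision while cont_role is always 1.
CREATE TABLE content (
  cont_id INTEGER PRIMARY KEY AUTOINCREMENT,
  cont_revision INTEGER NOT NULL,
  cont_role INTEGER NOT NULL DEFAULT 1,
  cont_model INTEGER NOT NULL,
  cont_format INTEGER NOT NULL,
  cont_address TEXT NOT NULL,
  UNIQUE (cont_revision, cont_role)
);
""")

conn.execute("INSERT INTO revision VALUES (7, 1)")
conn.execute(
    "INSERT INTO content (cont_revision, cont_role, cont_model, cont_format,"
    " cont_address) VALUES (7, 1, 1, 1, 'tt:1234')"
)

# The extra join this adds for fetching a revision's (single) main slot:
row = conn.execute(
    "SELECT c.cont_model, c.cont_address FROM revision r"
    " JOIN content c ON c.cont_revision = r.rev_id AND c.cont_role = 1"
    " WHERE r.rev_id = 7"
).fetchone()
print(row)  # (1, 'tt:1234')
```

Keeping the surrogate cont_id open is what would later allow a revision→content mapping table that re-uses content rows across revisions, the option anomie questions below.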
21:44:22 So as long as we have realistic transition plan, and there's no major db-related reason to prefer one or the other, my main concern is we get the updates done. 21:44:49 If we make the content table in a way that we don't have to change it when adding multi content then that is a certain niceness 21:44:58 jynus: i'm with you regarding incremental changes. i just feel this isn't incremental, but two forward, one back, two forward... that's what i'm trying to avoid 21:45:05 DanielK_WMDE_: yeah. 21:45:08 DanielK_WMDE_: How would that (re-using content rows) work? 21:45:10 small changes would be fine except that DanielK_WMDE_ is working on this because he is interested in MCR 21:45:34 putting the MCR fields in the table from the outset would better respect that motivation 21:45:35 TimStarling, but he just said he wouldn't help on b, only lego would on a 21:45:39 anomie: another table, relating revisions to content. it's an option for later though. 21:46:02 sorry, I got confused, but I hope I got understood 21:46:20 jynus: i would help with a, but i probably can't drive it. 21:46:22 me for option (b). not sure i can justify spending time on option (a), but i might. 21:46:25 legoktm: any concerns on later difficulties if we do two steps? (A, then later b when we need multi)? 21:46:34 DanielK_WMDE_: All problems can be solved by adding another layer of indirection? ;) 21:46:38 Daniel will work on option b because it is a step towards MCR, which is fair enough 21:46:50 DanielK_WMDE_, but does b include all a functionality, would you work on a's functionality on b? 21:46:51 It's the Java way™! Oh, wait. ;-) 21:46:51 brion: not off the top of my head 21:46:54 anomie: and then you have a problem pointer... 21:46:57 Ok 21:47:29 jynus: i would, but i'd be grateful if legoktm would help 21:47:41 ok, what does lego have to say about that?
21:47:44 * robla plans to move this RFC (T105652) to the "in progress" column on the ArchCom-RFC board at the conclusion of this meeting, pointing to this meeting (E261) 21:47:51 * anomie wonders whether the complexity of revision→revision_content_mapping→content would be worth the savings in not duplicating cont_address and other fields. 21:48:12 anomie: yes, i wonder about that too. it's an option, not a plan 21:48:31 me helping in b)? I think so yeah 21:48:38 jynus: I am volunteering to do the implementation for (a), and can help review (b)/whatever 21:48:48 \o/ 21:48:53 Woo 21:48:58 it seems to me there is very cautious consensus around option "b", with some skepticism about who will do the work 21:49:01 ok, we still have anomie's issue 21:49:06 what about that 21:49:17 robla: the tricky part is the migration code 21:49:21 can the scope be reduced to contemplate that? 21:49:24 the rest should be pretty easy 21:49:43 jynus: which issue? 21:49:55 anomie wonders whether the complexity of revision→revision_content_mapping→content would be worth the savings in not duplicating cont_address and other fields. 21:49:59 DanielK_WMDE_: that's why I'm proposing "in progress" as a state, rather than "approved" 21:50:41 jynus: the current proposal is revision -> content. revision -> r-c-mapping -> content is a possibility for later 21:50:45 we are not committing to that 21:50:48 I don't know what revision_content_mapping is 21:51:25 TimStarling: a way to re-use content entries for multiple revisions. outside the scope of this rfc. 21:51:40 For the record, I'm sure I'll do some stuff on this, although whether that stuff involves writing code or just reviewing it I don't know at this point. Too many people all writing code can get in each others' way. 21:51:43 it might be nice, or it might be horrible to go that way, not sure yet. 21:51:48 ah right 21:51:53 anomie: thanks! 21:52:03 ok, we need more "buts", anyone?
21:52:12 well, the existing rev_text_id allows that, it is used for null revisions 21:52:19 we have not discussed whether the new table should just have the minimum fields for now, or the full set needed for MCR 21:52:34 but I guess I can sort that out with jynus later. or we have another session on that 21:52:46 adding new columns on a small table with low traffic is easy 21:53:16 it will be a small table? with left joins? 21:53:18 TimStarling: that allows the re-use of blobs for multiple content entries. not quite the same. but yea - re-using content meta-data may not be worth the trouble. 21:53:30 (it is a bit of a simplification, go to https://wikitech.wikimedia.org/wiki/Schema_changes for the full version) 21:53:39 jynus: the content table is going to be LARGE! 21:53:42 jynus: What about on a table with individually small rows, but with as many rows as the revision table? 21:53:42 #info we have not discussed whether the new table should just have the minimum fields for now, or the full set needed for MCR adding new columns on a small table with low traffic is easy 21:53:43 It'll have lots of entries but small rows 21:53:45 for MCR it presumably needs to be fully populated 21:53:47 jynus: larger than revision. that's the point! 21:54:02 small per row though 21:54:07 yes, true 21:54:10 shouldn't we have visible code at that point? 21:54:17 And as active as the revision table. 21:54:44 jynus: for the database? sure, there's a patch on gerrit, and i sent you sample db dumps. they are not exactly like the proposal, but quite close. 21:54:53 no, no 21:54:56 mediawiki code 21:55:11 Great All - thanks! 21:55:34 jynus: i have been working on that with brion. lots of moving parts. it's a bit of a chicken-and-egg thing 21:55:41 we will deploy whatever we have, I suppose, and whatever is easy to migrate? 21:55:41 :) 21:55:41 you can't write the code before the db schema is final.
21:55:55 the migration part is the key 21:55:57 you can't decide on a schema if it's not clear how it will impact the php code 21:56:32 I am all for converting tables rather than migrating them (I think we discussed that on the image issue) 21:56:34 jynus: yes, i agree. that's the tricky part. luckily, i have some experience with that, but i will be needing your help. 21:57:04 I do not think we can give a decision with a plan(?) 21:57:12 *without 21:57:38 jynus: well you can't make a plan when there is no decision on the goal :) 21:58:07 can I just repeat that I am putting my 2c in for MCR fields in the initial content table, with slot=1 always 21:58:17 I am not blocking that, I am asking what is the general mood, I said I will not take part on this decision, db is not a blocker here 21:58:17 jynus: writing complete migration code before it's even clear whether the migration is wanted is not a thing i like to do 21:58:20 because I think that will help motivate Daniel to actually write MCR 21:58:31 :)))) 21:58:42 TimStarling is a smart guy :) 21:59:05 * anomie likes TimStarling's reasoning 21:59:12 #info 14:58:08 can I just repeat that I am putting my 2c in for MCR fields in the initial content table, with slot=1 always 21:59:13 Aye :) 21:59:16 decision on the last question, not on the whole issue 21:59:26 so, i'll work on finalizing the schema, and propose migration code 21:59:32 (I was only talking about the last question, I cannot answer that) 21:59:34 then we talk again, here or on wikitech-l 21:59:54 because I genuinely do not know 22:00:30 sorry, which last question? 22:00:43 the one about the extra columns 22:00:53 ok, I think this was a good meeting; we'll keep using T105652 and wikitech-l for followup. sound good? 22:00:58 T105652: RfC: Content model storage - https://phabricator.wikimedia.org/T105652 22:01:05 Yay 22:01:05 robla: yep :) 22:01:23 jynus: yea... i think i'll shoot for the "medium" proposal for now.
22:01:32 so can we deploy the schema change soon? 22:02:05 jynus: can you confirm that adding fields to a table that is much like the revision table isn't a problem? 22:02:19 jynus: if/when you understand and agree; we don't need to try to force that now 22:02:34 I cannot confirm it will not be a problem 22:02:43 schema changes on revision are hard 22:02:53 DanielK_WMDE_: Can we qualify that? revision has large rows and many rows. The new table will have small(er) rows, but still many. 22:02:55 I think we've settled that it's worth DanielK_WMDE_ to keep working on a proposal and some code/etc 22:03:09 jynus: ok. the content table will have roughly the same dimensions as revision. that's why i want to add all fields right away, instead of incrementing. 22:03:11 (with legoktm , et al) 22:03:15 let's discuss that on the list 22:03:26 * robla will hit #endmeeting in 60 seconds 22:03:39 (it might be easier to answer that question with a straw schema) 22:03:54 feel free to continue discussion on #wikimedia-tech 22:04:24 thanks everyone! 22:04:26 o/ 22:04:31 #endmeeting