(Re)implement logging output for scap targets
Closed, ResolvedPublic1 Story Points

Description

Since landing T113085: Support event logging propagation from target to deployment host and T113779: Limit structured logging output for now, we've lost output from the deploy targets.

@thcipriani and I came up with a decent idea for logging in today's meeting. It should be simple enough to implement by the middle of next week, and it seems modular (bingo!) enough to plug in to a more sophisticated iscap/tmux once that lands.

  1. Main deploy process initializes a new log file (under scap/log/ perhaps?) for the current revision and sets up the default log handler to write all deploy-local events to it.
  2. Implement a deploy-log command that can tail/parse the structured log (à la varnishlog) in a separate terminal. We should try to keep this component's implementation general (outside the cli.Application subclass) so it can later be used by an iscap/tmux wrapper.
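The two steps above could be sketched roughly as follows. This is a minimal illustration, not scap's actual code: the formatter class, function names, and the per-revision file layout under scap/log/ are all assumptions.

```python
import json
import logging


class JSONLineFormatter(logging.Formatter):
    """Write each record as one JSON object per line -- an illustrative
    stand-in for scap's real structured formatter."""
    def format(self, record):
        return json.dumps({
            'name': record.name,
            'levelname': record.levelname,
            'msg': record.getMessage(),
        })


def setup_revision_log(rev, log_dir='scap/log'):
    """Step 1: attach a default handler that writes all deploy-local
    events for this revision to a file under scap/log/ (path layout
    is a guess)."""
    handler = logging.FileHandler('%s/%s.log' % (log_dir, rev))
    handler.setFormatter(JSONLineFormatter())
    logging.getLogger().addHandler(handler)
    return handler


def parse_log_lines(stream):
    """Step 2, the core of a deploy-log command: parse the structured
    log back into record dicts, one per line, varnishlog-style."""
    for line in stream:
        yield json.loads(line)
```

A deploy-log process would then tail the file written in step 1 and feed its lines through parse_log_lines before filtering/formatting.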
dduvall created this task. Oct 9 2015, 7:27 PM
dduvall updated the task description.
dduvall raised the priority of this task to Normal.
dduvall claimed this task.
dduvall added projects: Scap, Deployments.
dduvall added subscribers: dduvall, thcipriani, mmodell, demon.
Restricted Application added a subscriber: Aklapper. Oct 9 2015, 7:27 PM
mmodell added a comment (edited). Oct 9 2015, 7:40 PM

@dduvall: iscap output multiplexing works by writing directly to the pty devices that tmux creates. So all a log viewer for iscap needs to do is read the structured log and output ANSI-formatted lines.

Ideally there would be a simple CLI syntax for configuring an instance of the log viewer with the filters a user cares about. Then we could save a session with all the tmux panes, log viewer args, etc., and restore that session when the same user wants to deploy the same project again (e.g. store sessions in the user's home directory, I guess...).

So, the DeployLog class should be implemented as a stream filter that reads from one stream and writes to another. It would be passed a structured log stream for input and some arbitrary stream for output. The output would be stdout for a standalone utility; for iscap we'd pass a file output stream connected to a pty that belongs to the corresponding tmux pane.
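The stream-filter shape described here could look something like the sketch below. The class name DeployLog comes from the comment, but the constructor signature, output format, and filter hook are assumptions for illustration, not scap's actual API.

```python
import json
import sys


class DeployLog(object):
    """Stream filter: read JSON-lines log records from one stream,
    write formatted lines to another (illustrative sketch)."""

    def __init__(self, instream, outstream, record_filter=None):
        self.instream = instream
        self.outstream = outstream
        # Default filter passes everything through.
        self.record_filter = record_filter or (lambda rec: True)

    def run(self):
        for line in self.instream:
            rec = json.loads(line)
            if self.record_filter(rec):
                self.outstream.write(
                    '%s: %s\n' % (rec.get('levelname', '?'),
                                  rec.get('msg', '')))

# Standalone use: DeployLog(open('scap/log/some.log'), sys.stdout).run()
# Under iscap, outstream would instead be a stream opened on the pty of
# the corresponding tmux pane.
```

Because input and output are just file-like objects, the same class works unchanged for both the standalone and the iscap case.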

I'm really not sure whether it would be better for iscap to run the log filter in-process with Python threading or out-of-process as a separate OS process that uses blocking I/O. @thcipriani, @dduvall, anyone: opinions welcome...

dduvall added a comment (edited). Oct 9 2015, 9:23 PM

So, the DeployLog class should be implemented as a stream filter that reads from one stream and writes to another. It would be passed a structured log stream for input and some arbitrary stream for output. The output would be stdout for a standalone utility; for iscap we'd pass a file output stream connected to a pty that belongs to the corresponding tmux pane.

That sounds sane and pretty close to what we were thinking. I'm not sure whether that logic belongs inside DeployLog itself or outside it (if it has no purpose elsewhere, it can probably live there), but we can bikeshed on that in review.

The only thing that's bugging me about this plan (not your comment, Mukunda, but the plan in general) is how many times the structured messages get serialized and deserialized. The targets serialize their log records with log.JSONFormatter; the main Deploy process reads and deserializes those with ssh.JSONOutputHandler and feeds the structured messages back into its logger, which has a handler writing them back to a file; then various subscribers (DeployLog in this case) tail that file, deserializing/filtering/routing messages elsewhere.
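The chain of redirection can be made concrete by tracing one record through it. The handler names in the comments refer to scap's log.JSONFormatter and ssh.JSONOutputHandler as described above; the record dict itself is illustrative.

```python
import json

# One record's journey through the chain described above:
record = {'name': 'target.sync', 'levelname': 'INFO', 'msg': 'fetch complete'}

wire = json.dumps(record)       # 1. target serializes (log.JSONFormatter)
received = json.loads(wire)     # 2. deploy host deserializes (ssh.JSONOutputHandler)
on_disk = json.dumps(received)  # 3. the main logger's file handler re-serializes
tailed = json.loads(on_disk)    # 4. a subscriber like DeployLog deserializes again

# Two full serialize/deserialize round trips -- redundant work, but the
# record survives intact.
assert tailed == record
```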

There are clear advantages to routing everything through the main process's logger, mainly the ability to route/filter/consume ALL log messages through a consistent interface, but it also seems like a cacophony of redirection. Then again, it's old-school Unix-y, which I like. Hell, let's see how it goes with this last piece and then we can see about simplifying.

I'm really not sure whether it would be better for iscap to run the log filter in-process with Python threading or out-of-process as a separate OS process that uses blocking I/O. @thcipriani, @dduvall, anyone: opinions welcome...

If we implement some decent options for deploy-log, I kinda like the idea of launching the command as a subprocess that writes to the tmux pane's pty (e.g. deploy-log -r scap/log/{rev}.log -w /dev/pty{n} expr, where expr gets parsed into a log.Filter).
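The proposed invocation could be wired up with argparse roughly as below. The -r/-w flags and the trailing expr come from the comment above; the long option names, defaults, and help strings are guesses, and parsing expr into a log.Filter is left as a stub.

```python
import argparse


def parse_args(argv):
    """Sketch of a deploy-log CLI matching the proposed invocation
    (hypothetical details; the filter-expression parser is stubbed)."""
    parser = argparse.ArgumentParser(prog='deploy-log')
    parser.add_argument('-r', '--read',
                        help='structured log file to read')
    parser.add_argument('-w', '--write', default='-',
                        help="output stream, e.g. a tmux pane's pty "
                             "('-' for stdout)")
    parser.add_argument('expr', nargs='?', default='',
                        help='filter expression, parsed into a log.Filter')
    return parser.parse_args(argv)
```

The parent process (iscap or the main deploy) would then spawn deploy-log with -w pointed at the pane's pty and let it run independently.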

@dduvall I kinda like the idea of the subprocess as well, though doing it in-process would directly address your concern about serializing/parsing JSON multiple times. At least it would avoid one round of unnecessary serialization overhead.

@dduvall: Maybe we can make the underlying DeployLog class accept native Python objects as well as a JSON stream; then the same filtering code would work in either context, avoiding the extra serialization when it runs inside iscap (as a log handler installed in the main iscap process, writing to the tmux pty).
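Accepting both input shapes mostly comes down to normalizing each item before filtering. A minimal sketch, with a hypothetical helper name:

```python
import json


def normalize_record(item):
    """Accept either a JSON-encoded line (standalone deploy-log reading
    a file) or a native dict (in-process handler inside iscap), so the
    same filtering code works in both contexts."""
    if isinstance(item, (str, bytes)):
        return json.loads(item)
    return item
```

With this in front of the filter, the in-process path skips the serialize/deserialize round trip entirely, since records stay as native objects.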

@dduvall I kinda like the idea of the subprocess as well, though doing it in-process would directly address your concern about serializing/parsing json multiple times. At least it would avoid one round of unnecessary serialization overhead.

Very true. Come to think of it, though, the main process doesn't need to thread/fork at all. It just needs to add a logging.StreamHandler (on a stream opened for /dev/pty{n}) with a log.Filter attached to the logger. The deploy-log command should probably do the same and then just start reading the input log file.
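The handler-plus-filter arrangement can be sketched with the stdlib logging module. The level-threshold filter below is an illustrative stand-in for scap's log.Filter, and the function name is hypothetical; the point is that no fork or thread is needed.

```python
import logging


class LevelFilter(logging.Filter):
    """Illustrative stand-in for scap's log.Filter: only pass records
    at or above a threshold."""
    def __init__(self, level):
        super(LevelFilter, self).__init__()
        self.level = level

    def filter(self, record):
        return record.levelno >= self.level


def attach_pane_handler(stream, level=logging.INFO):
    """The main process just adds a filtered StreamHandler whose stream
    is the tmux pane's pty (or any writable file object)."""
    handler = logging.StreamHandler(stream)
    handler.addFilter(LevelFilter(level))
    logging.getLogger().addHandler(handler)
    return handler
```

In iscap this would be called with a stream opened on the pane's pty; a standalone deploy-log would do the same thing and then loop over the input log file.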

mmodell set Security to None. Oct 12 2015, 4:14 PM
mmodell edited a custom field.
dduvall closed this task as Resolved. Oct 21 2015, 4:50 PM
dduvall moved this task from Services MVP to Done on the Scap board.