What has happened?

We completed Subplot issue 68 and Subplot issue 12. Subplot issue 20 remains open.

Lars has started to deal with the fallout of renaming the default branch to main in his ick configuration.

We’ve agreed to dogfood this for a while in case something breaks, and to think about what else we might rename (e.g. the website repo) in the future.

We agreed we were OK with all this and so closed Subplot milestone 10.

Issue review

We reviewed the situation for all the open issues. For the most part nothing changed, but the following tweaks were made:

  • Lars had been reading an article about abstraction and coupling between modules, which argued that inter-crate APIs are cleaner, since Cargo enforces a DAG on crates. He created Subplot issue 53 a while ago and has updated that issue with a note on this.
    • After some discussion, we decided to create Subplot issue 69 to cover the splitting up of ast.rs and the factoring out of tests into independent files.
  • We linked Subplot issue 27 and Subplot issue 36 because they’re both related to styling.
  • We determined that Subplot issue 37 had already been completed, and so closed it.

We agreed to keep our current issue-filing and closing flow for now.

Milestone planning

The new iteration is Subplot milestone 11.


Debugging test programs

Lars started changing ewww so it would actually serve files. This involved learning new things about warp, which wasn’t too bad; but while learning, his initial guesses about how to work with it were wrong, and he had a devil of a time debugging the failing generated test program.

The fundamental problem was that each scenario got a generated data directory, and since these consume disk space, the generated test program deleted them when it ended. This is fine when a scenario passes, but makes it harder to debug a failure. The problem is common to other tools, such as yarn.

We should attempt to make this experience as smooth as possible because it’s not going to be unusual for people to need to diagnose test failures.

Daniel suggested adding ‘keep on failure’ or ‘shell on failure’ type approaches.

Lars had spoken to a friend who suggested:

  1. ‘on failure, make a data tarball/copy/whatever’
  2. ‘on failure, open a shell prompt’
  3. ‘on failure, snapshot a container/vm/whatever’
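To make the first of these concrete, here is a minimal sketch of a ‘keep on failure’ policy for a generated runner. It is not Subplot’s actual runner code; the `run_scenario` function and directory naming are hypothetical, chosen only to illustrate the idea: delete the scenario’s data directory on success, leave it in place on failure so it can be inspected afterwards.

```rust
use std::fs;
use std::path::PathBuf;

// Hypothetical sketch of a 'keep on failure' runner policy: each
// scenario runs in its own data directory, which is deleted only if
// the scenario passes.
fn run_scenario(name: &str, scenario: impl Fn(&PathBuf) -> bool) -> bool {
    let datadir = std::env::temp_dir().join(format!("scenario-{}", name));
    fs::create_dir_all(&datadir).expect("create scenario data directory");

    let ok = scenario(&datadir);

    if ok {
        // Success: reclaim disk space, as the generated program does today.
        fs::remove_dir_all(&datadir).expect("remove scenario data directory");
    } else {
        // Failure: keep the directory and tell the user where it is,
        // so they can debug the failed scenario post-mortem.
        eprintln!("scenario '{}' failed; data kept in {}", name, datadir.display());
    }
    ok
}

fn main() {
    // A toy scenario that writes a file into its data directory.
    let passed = run_scenario("serve-files", |dir| {
        fs::write(dir.join("evidence.txt"), "hello").is_ok()
    });
    println!("scenario passed: {}", passed);
}
```

The ‘tarball on failure’ and ‘shell on failure’ variants would slot into the same failure branch; the essential design choice is that cleanup is conditional on the scenario’s outcome rather than unconditional.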

We decided that Lars will write a story about a successful debugging experience with tooling which did all the right things, and from there we can derive some initial requirements for subplot’s built-in runner templates.