Toolhub progress reports week ending 2021-06-25
Report on activities in the Toolhub project for the 2 weeks ending 2021-06-25.
Srishti implemented a UI for the patrolling API. Users in the "Patrollers" group will now be able to see the patrolled/unpatrolled status of each revision when looking at the edit history for a tool.
Bryan has a patch up for review which will also allow us to show unpatrolled edits in the audit log view. Unfortunately we have not yet figured out a reasonable way to filter this view down to only unreviewed changes so that it can be used as a global patrolling work queue. The current system tracks this information in the database tied to the revisions themselves, in a way that the audit log search cannot make use of at database query time. We can, however, introduce a targeted API endpoint and related UI screen in the future that will make this feature possible.
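As a rough illustration of what such a targeted endpoint could serve, here is a minimal sketch in plain Python. All names here (the `Revision` fields, the function) are hypothetical stand-ins for illustration, not Toolhub's actual models or API; the point is just the query shape the audit log search cannot currently express: unpatrolled revisions only, newest first.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Revision:
    """Hypothetical stand-in for a stored tool revision."""
    id: int
    tool: str
    timestamp: datetime
    patrolled: bool


def unpatrolled_revisions(revisions):
    """Select only unpatrolled revisions, newest first -- the shape a
    dedicated patrolling work-queue endpoint could return."""
    return sorted(
        (r for r in revisions if not r.patrolled),
        key=lambda r: r.timestamp,
        reverse=True,
    )


# Example data: two unpatrolled edits and one already-patrolled edit.
revisions = [
    Revision(1, "toolA", datetime(2021, 6, 20), True),
    Revision(2, "toolB", datetime(2021, 6, 22), False),
    Revision(3, "toolA", datetime(2021, 6, 24), False),
]
queue = unpatrolled_revisions(revisions)
print([r.id for r in queue])  # newest unpatrolled revisions first
```

In a real implementation this filter would run at database query time rather than in application code, which is exactly what the revision-centric storage currently prevents the audit log search from doing.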
The final major feature that we are hoping to implement at least partially prior to our initial production deployment is lists of tools. This is one of the things that we really hope will be useful to various subgroups of the Wikimedia community. Imagine the value for a wiki project like Women in Red of creating a list of recommended tools for new folks joining the project, and hopefully you will be excited about lists too.
For the initial release we will not have all the fun things that we have been imagining for lists. This is about time more than anything else. We hope to have the API for creating lists and the UI for displaying them implemented. Editing and patrolling of list edits will likely come as a feature update from our post-launch, post-Wikimania work on Toolhub.
Working out a timeline until the 1.0 deployment
Our July/August revised deployment target dates are coming up really, really fast! We would ideally like to have Toolhub deployed by the time that Wikimania 2021 starts. With that operational goal in mind, 2021-08-12 is the last day we could deploy and meet that goal.
Working backwards from that date, we need 2-3 weeks of feature freeze for the Security review and at least a week for post-review remediation. Let's call that 4 weeks to be safe. That puts our "pencils down" date for 1.0 features at 2021-07-15 at the latest. That gives us a bit less than 3 weeks to add the basic plumbing and display for lists and clean up any other outstanding issues that can't be done in parallel with the security review code freeze.
The Foundation is taking the week of 2021-07-05 through 2021-07-09 off this year, but those days can be time-shifted for folks who have time-sensitive things to work on. It is probable that some folks on the Toolhub team will work some of those days and rest during the code freeze. Time will be tight either way, but it seems reasonably possible that we can get the work done in time to meet the desired launch. Ultimately, however, we will not be "launching no matter what". If we end up needing a bit more time to ensure that the community sees a stable, functional tool on the day we announce it, we will take that time. No fancy launch announcement is worth the loss of reputation that a new product takes when it fails to meet basic user expectations on first use.
The Foundation's fiscal year 2020/2021 comes to a close next week. Back in April 2021 we announced our goals for the fourth quarter of the fiscal year: content moderation, lists of tools, and annotations. A few weeks ago we amended those goals to remove annotations. Our hopes exceeded our reach, but we know that we will continue to work on the Toolhub project in fiscal year 2021/2022 so the feature will be added when we have the capacity to do so. As we get to the last few weeks before the planned launch we will have to make more difficult choices like this. That's a normal part of building a product. Good ideas are easy to generate, but good implementations take more time.
I hope all of you following along can understand our explanations of what is happening and why. If not, please do ask questions! We cannot guarantee that you will agree with our analysis and decisions, but we hope that we can at least show how we thought about the problems and what motivated us to choose one option over another. :)