Research talk:Onboarding new Wikipedians/OB6
Work log
Wednesday, February 12th
Today, I'm building a report to track the usage of GettingStarted and GuidedTour. There are a few resources that I can draw from:
- Schema:GettingStartedRedirectImpression - Tracks impressions of the GS CTA
- Schema:GuidedTour - Tracks events for GuidedTours where shouldLog=true. I assume that this includes tours within GS
- mw:Manual:Change_tag_table - ct_tag = "gettingstarted edit" tracks edits that are made through GS (see the query sketch below the list)
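For the change_tag source, here's a minimal sketch of the kind of count I have in mind. It assumes a DB-API cursor on a wiki database replica; the column names come from mw:Manual:Change_tag_table and the revision table manual, but the function name and timestamp bounds are just illustrative.

```python
def count_gs_edits(cursor, start_ts, end_ts):
    """Count edits tagged as made through GettingStarted.

    `cursor` is a DB-API cursor on a wiki database replica. Column names
    follow mw:Manual:Change_tag_table; timestamps use MediaWiki's
    14-character format (e.g. '20140201000000').
    """
    cursor.execute(
        """
        SELECT COUNT(*)
        FROM change_tag
        INNER JOIN revision ON ct_rev_id = rev_id
        WHERE ct_tag = 'gettingstarted edit'
          AND rev_timestamp BETWEEN %s AND %s
        """,
        (start_ts, end_ts)
    )
    return cursor.fetchone()[0]
```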
Things that would be nice to know (a rough query sketch follows the list):
- What proportion of R:newly registered users see the CTA?
  - Break down by type of CTA.
- What proportion of R:new editors make an edit through GS? Should be an early indicator of issues.
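A rough sketch for the first question. The EventLogging table name (schemas land in tables named <Schema>_<revision id> in the log database) and the event field names are placeholders I'd need to verify against Schema:GettingStartedRedirectImpression. The GS-edit proportion in the last question would combine the same registration count with the change_tag count from the sketch above.

```python
# Placeholder: the revision id suffix and the event field names below are
# guesses, to be checked against the actual schema.
IMPRESSION_TABLE = "GettingStartedRedirectImpression_1234567"

def cta_impression_rate(wiki_cursor, el_cursor, start_ts, end_ts):
    """Proportion of newly registered users who saw the GS CTA."""
    # Registrations from the wiki's `logging` table.
    wiki_cursor.execute(
        """
        SELECT COUNT(*)
        FROM logging
        WHERE log_type = 'newusers'
          AND log_action = 'create'  -- self-created accounts only
          AND log_timestamp BETWEEN %s AND %s
        """,
        (start_ts, end_ts)
    )
    registrations = wiki_cursor.fetchone()[0]

    # Distinct users with a CTA impression in the same window.
    el_cursor.execute(
        """
        SELECT COUNT(DISTINCT event_userId)  -- field name is a guess
        FROM {0}
        WHERE timestamp BETWEEN %s AND %s
        """.format(IMPRESSION_TABLE),
        (start_ts, end_ts)
    )
    impressions = el_cursor.fetchone()[0]
    return impressions / float(registrations) if registrations else None
```

Breaking down by CTA type would just add a GROUP BY on whatever event field encodes the CTA variant.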
--Halfak (WMF) (talk) 18:22, 12 February 2014 (UTC)
So, this is going to be difficult. My log data from those schemas is only available to be joined with data from enwiki, but I'd like to be able to perform this analysis for all wikis where GS is turned on. Right now, that list is small, but it could grow. I think I'm going to have to build up a whole environment to help me gather this data. This could be something that supports a lot of other work too; a rough sketch of the core loop appears below.
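A minimal sketch of that environment's core loop, assuming pymysql, replicas for every GS wiki reachable from a single host, and credentials in ~/.my.cnf. The host name and wiki list are placeholders.

```python
import os

import pymysql

# Placeholder list: the wikis where GettingStarted is currently deployed.
GS_WIKIS = ["enwiki", "dewiki", "frwiki"]

def for_each_gs_wiki(sql, params=()):
    """Run the same query against every GS wiki and yield (wiki, rows)."""
    for wiki in GS_WIKIS:
        conn = pymysql.connect(host="analytics-store.example",  # placeholder host
                               db=wiki,
                               read_default_file=os.path.expanduser("~/.my.cnf"))
        try:
            with conn.cursor() as cursor:
                cursor.execute(sql, params)
                yield wiki, cursor.fetchall()
        finally:
            conn.close()

# e.g. GS-tagged edit counts per wiki:
# for wiki, rows in for_each_gs_wiki(
#         "SELECT COUNT(*) FROM change_tag WHERE ct_tag = %s",
#         ('gettingstarted edit',)):
#     print(wiki, rows[0][0])
```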
Checking with DarTar on that strategy now. --Halfak (WMF) (talk) 18:58, 12 February 2014 (UTC)
Discussion
Subcohorts
Like we discussed on IRC, using subcohorts introduces confounds about the reason for any observed effect. However, I would like to observe basic edit rates (at least) for the following subcohorts. Explanation is inline; a query sketch follows the list.
- 1. control users with gettingstarted-specialpage-impression vs. test users with redirect-invite-impression.
Looking at basic funnel analysis, it seems that while 99% of control users got an impression, only 89-90% of all bucketed test users got a redirect-invite-impression. Aaron, your proportion might be different, since you also filtered out accounts that weren't self-created. These two subcohorts are the most important, I think, and have fewer potential confounds because membership doesn't depend on the user taking an action; we just know for sure they saw the UX we intended. I am concerned that by examining total cohorts we are unfairly disadvantaging the test condition, where fewer users got the correct UX due to poorer browser support and so on.
- 2. control and test users with appropriate click actions (so gettingstarted-specialpage-click in the former, redirect-invite-click in the latter) by funnel (meaning redirect funnel for test, and the usual gettingstarted-* for all)
This will tell us if there are different activation rates and quality according to sub-funnel. It is likely that while some sub-funnels are substantially similar (e.g. gettingstarted-copyedit), others are complete unknowns (particularly redirect click users)
- 3. test users who completed the 'firstedit' guided tour
What proportion of users in the redirect funnel / test condition actually completed the tour? Does the tour lead to more high-quality edits, or does it encourage low quality contributions?
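A rough sketch of how these subcohorts could be pulled; the same helper covers the click subcohorts in (2). All table names, the event_* field names, and the 'complete' action value are assumptions that would need to be checked against Schema:GettingStartedRedirectImpression and Schema:GuidedTour.

```python
def users_with_action(el_cursor, table, action):
    """Distinct user ids that fired a given funnel action.

    `event_action` and `event_userId` are assumed field names; the table
    names used at the call sites below are placeholders.
    """
    el_cursor.execute(
        "SELECT DISTINCT event_userId FROM {0} WHERE event_action = %s".format(table),
        (action,)
    )
    return set(row[0] for row in el_cursor.fetchall())

def firstedit_completers(el_cursor, gt_table):
    """Users who completed the 'firstedit' tour (assumed fields and values)."""
    el_cursor.execute(
        ("SELECT DISTINCT event_userId FROM {0} "
         "WHERE event_tourName = %s AND event_action = %s").format(gt_table),
        ('firstedit', 'complete')
    )
    return set(row[0] for row in el_cursor.fetchall())

# Subcohort (1), e.g.:
#   control = users_with_action(cur, EXPERIMENT_TABLE, 'gettingstarted-specialpage-impression')
#   test = users_with_action(cur, EXPERIMENT_TABLE, 'redirect-invite-impression')
# Subcohort (3), completion rate within the test condition:
#   rate = len(firstedit_completers(cur, GUIDEDTOUR_TABLE) & test) / float(len(test))
```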
Does this make sense, Halfak? Steven Walling (WMF) • talk 23:48, 15 October 2013 (UTC)