Wikimedia monthly activities meetings/Quarterly reviews/Reading and Advancement, October 2015
Please keep in mind that these minutes are mostly a rough paraphrase of what was said at the meeting, rather than a source of authoritative information. Consider referring to the presentation slides, blog posts, press releases and other official material instead.
Present (in the office): Toby Negrin, Jon Katz, Terry Gilbey, Josh Minor, Luis Villa, Pats Pena, Caitlin Virtue, Lila Tretikov, Moiz Syed, Megan Hernandez, Lisa Gruwell, Danny Horn, Katie Horn, Tilman Bayer (taking minutes), Geoff Brigham, Sheree Chang, Wes Moran, Tomasz Finc, Zhou Zhou, Sylvia Ventura, Kevin Leduc; participating remotely: Rachel diCerbo, Arthur Richards, Dan Garry, Joaquin Oltra Hernandez, Kristen Lans, Trevor Parscal, Anne Gomez, Dmitry Brant, Gabriel Wicke, Katherine Maher, Bryan Davis, Moushira Elamrawy
Excerpt and small gallery, a gist of the linked article
saw increased engagement in Beta (about 16%), pushed to production
but did not see the same increase in prod
takeaway: plenty more opportunity here
Lila: can we track total active users etc. for apps?
Toby: we do [see below]
Jon: what can we do to improve the reader experience most?
created speed dashboard in our first quarter as team (Q4)
Lila: what is absolute number we try to be under?
Jon: in 6 months from now, want to speed up (first paint of Barack Obama on slow connection in India?) from 50 sec to 10 sec
--> Lila: measure speed for average/median(?) page size with common industry metrics
use those two as our yardstick
because industry numbers getting really small (=fast)
let's have really clear goals here
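The "common industry metrics" yardstick discussed here usually means summary statistics such as median and high-percentile load times over real-user samples. A minimal sketch of that kind of summarization, purely illustrative (the sample timings and function name below are hypothetical, not from the team's actual speed dashboard):

```python
# Sketch: summarizing page-load samples the way RUM dashboards commonly do.
# The numbers below are made-up illustrations, not real Wikimedia data.

def percentile(samples, p):
    """Nearest-rank percentile of a list of numbers (0 < p <= 100)."""
    ordered = sorted(samples)
    k = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[k]

# Hypothetical load-time samples in seconds, including one slow-connection outlier:
load_times_sec = [2.1, 3.4, 2.8, 50.2, 4.0, 3.1, 2.9, 5.5]

median = percentile(load_times_sec, 50)
p95 = percentile(load_times_sec, 95)
print(f"median={median}s p95={p95}s")  # → median=3.1s p95=50.2s
```

The median gives the "typical" experience, while a high percentile (p95/p99) captures the slow-connection cases like the 50-second India example mentioned above.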
Toby: experimenting with getting first section up first, loading rest later
Lila: that's fine, but:
--> Lila: want to see clear performance goal: this much content in this much time
Jon: time to render equal to desktop on mobile (with same connection speed)
actually got mobile a bit faster than desktop
this was work with perf team
saw 5-10% increase in some parts, but no evidence it's related
Lila: compared to normal growth?
Jon: hard to say
Terry: if we have decreased readership because of perf decrease, would it mean that it takes a while to get them back even after perf increase?
Jon: hard to tell, should test with bucketing
Jon: dashboard already saved us from perf regression caused by other team
Terry: why not caught in QA?
Jon: perf testing not part of QA
Adam: hard to measure reliably, QA tech stack is complicated
Terry: estimate how much it would cost to have that
Lila: set up framework
--> talk about performance QA framework with Greg and Ori
Toby: want to see impact in terms of pageviews
Lila: may not have been comparing apples to apples
want to see us within industry standard at minimum
Jon: over to Bryan for Reading infrastructure work
the dashboard gives us greater insight into auth attempts and outcomes
e.g. logins, account creations, captchas shown / solved
preparing for rollout of auth changes in core, when we do that we want to be able to catch regressions
Toby: will also cast light on API usage
Bryan: pushing this into Graphite (shared infrastructure for timeseries data)
which is not well resourced, pushed it over its limit
general: operational monitoring has consisted of ad-hoc efforts by individuals across the org
would be great to resource a small team (maybe just 0.5 FTE) for that
Toby: this is a callout for Mark - a lot of people use Graphite
also needed for perf work
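Graphite, the shared timeseries backend mentioned above, ingests datapoints via a simple plaintext protocol: one `metric value timestamp` line per datapoint, sent over TCP (conventionally port 2003 on a carbon-cache listener). A minimal sketch, assuming hypothetical metric names and host (these are not the actual auth-dashboard metric paths):

```python
import socket
import time

def graphite_line(metric, value, timestamp=None):
    """Format one datapoint in Graphite's plaintext protocol."""
    ts = int(timestamp if timestamp is not None else time.time())
    return f"{metric} {value} {ts}\n"

def send_to_graphite(lines, host="graphite.example.org", port=2003):
    """Push formatted datapoints to a carbon-cache plaintext listener."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall("".join(lines).encode("ascii"))

# Hypothetical auth counters, like the logins/captcha metrics discussed above:
lines = [
    graphite_line("auth.login.success", 42, 1444000000),
    graphite_line("auth.captcha.solved", 7, 1444000000),
]
# send_to_graphite(lines)  # commented out: requires a reachable carbon host
```

Each push is cheap for the sender, but every new metric adds load on the shared Graphite cluster, which is the resourcing concern raised above.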
Lila: are we looking at graphs, to reduce error rates?
Bryan: not looked into it yet
but interesting to look at captcha rates
Jon: on apps, there are apparently no bots at all - seeing very high solve rate
Toby: weekly metrics report
web team focused on browser tests
removed alpha site - moved features to beta
iOS: 5.0 release is one of the Q2 goals
banner campaign promoting Android app in Finland
tabbed browsing (requested feature)
"Back to school" promo in Google Play: collab with Comms
--> Lila: weekly report great to have, should go to foundation-wide list
highlight changes made in previous week
highlight delta to previous week
Toby: talked with Rosemary about her team handling/administering quicksurveys
Adam: formalizing team around Content Services
APIs: a lot of bots use them, large part of Brad Jorsch's work
get logging up to industry standards - need collab with Ops
Reading team has been performing a strategy process over Q1
transparent with community and rest of WMF
feedback: too broad
I'll call it a good start
identified issues beyond Reading team
2. (Internet evolving) WMF not taking leadership on open standards, e.g. video
Jon: different platforms and how they relate to our strategy
Toby: got feedback from Lila about faster iteration
I'm torn on the issue of tracking, but came to the conclusion that we need these
need another designer
our products will be better, look better
Lila: so you have 3 PMs open?
Toby: right now have 3, Anne as 0.5
--> Lila: let's take this offline (talk about staffing)
Jon: App Distribution Experiments
Finland banner experiment (a country where we can't fundraise)
Toby: also, apps was a frequent request by users in strategy consultation [people didn't know we have them]
Lila: in Banner 2, why did we not show a local motif (more related to Finnish users' experience)?
Lila: apps are important e.g. because they offer push instead of pull options (for distributing content)
we want to get people into the habit, pull people into the experience
Lila: so installs normalized back afterwards?
Lila: usually Google asks you to open the app you downloaded?
Jon: once, yes - but need to confirm
Toby: installs don't yet drive engagement - matches experience from other parts
Geoff: how do our apps compare to others on that?
Toby: compared to Games, that open rate is high
Lila: but retention is not good enough
Toby: certain times are good for campaigns, but in general we are not ready yet
Lila: yes, working on top of funnel only, get people in and then they drop out - not good for brand either
(Jon:) web performance
mobile now faster than desktop
Toby: Services important
Jon: Health Metrics
Mobile catching up to desktop
Lila: compare Q4 over Q4
(Jon:) Global South/North
Lila: growth opportunity
--> Lila: start monitoring what happens [to traffic] in certain countries
(discussion about content preferences possibly differing by geographies/demographies)
Josh: iOS app
bump in Q4 due to update, media coverage
but unfortunately had lots of bugs
negative reviews don't only impact installs, but also downstream usage
Lila: on Android, due to space constraints, I don't keep an app if I don't use it daily
Terry: what lessons from buggy release?
Adam: most significant source of bugs is the database the app is using
with 5.0 release we will be very cautious about this
with future releases, need to invest to fix this tech debt
Terry: when will it definitely negatively affect users?
Adam/Toby: when we can't ship new features because of this
probably need 1 FTE engineer for a quarter to fix this
Terry: so probably Q4 until we can confidently say we avoid these bugs? yes
Toby: have external QA too
Terry: what else procedurally?
Josh: our process: 4-week alpha, 6-week beta
limited to 1000 beta users by Apple, and we have broad usage patterns
iOS users demanding about quality and polish
Lila: no release since August?
Josh: yes, 5.0 is major release
Jon: needed to improve UI and retention
did a lot at once
Lila: that's typically not a good thing, want to release incrementally
Dan: when I was still PM, we ...
Terry: should perhaps firm up dates
Lila: see ratings decrease again in September?
Toby, Jon: we are out of time, please take a look at Q2 outlook and rest of slides
Terry: thanks to team, a lot of hard work done (since it was formed), more to do
Lisa: Q1 for FR is mostly prep work, summer is not fundraising season
that said, we did enter a new market: Brazil
decided not to run the Italy campaign in September to be supportive of WLM
Because of the change in timing, we missed a quarterly goal for the first time
Pats: Amazon e-wallet
went well, already see 3% increase in conversion rate
Geoff: reasons for success?
Pats: embedded option makes the donor feel safer that they are on the wiki website and don't need to leave the flow to donate
Online Fundraising
Lila: we are not giving our community members good tools to advertise things like WLM
Megan: problem: impact of these banner campaigns is not monitored, there are ways we could measure and optimize the impact. Other tools could be more appropriate than using banners for all participation campaigns.
Lisa: what we did was to limit impressions while maintaining impact
Katie: we in FR can do things by going into backend that community can't do for their banners
been on our agenda for a long time to make impressions data more accessible
Tilman: even within the org - e.g. I needed to ask the FR team for favors to get impressions data for my banners ;)
Megan: We had a good collaboration with the Italian chapter to make some changes to the WLM campaign. They were very open to feedback and data to improve the campaign.
Luis: it's not just the Italians, it's everybody
Pats: can lower our chargeback rate
Megan: banners we ran
first one in july
world map idea
this performed best for a few weeks
these had lower perf
this one [dark banner, see slide 15] was big win, best of q
tried random sample of quality/featured images from Commons
Commons banners did not do well
Lila: so they read the text?
seems there is a magical combination of facts people need to know
(no ads, nonprofit, etc)
this is where we're at now
20% is a big gain
we still have some months for testing
20% gain not enough yet to reach goal
put best style into desktop banner
did message tests based on community feedback
Luis: I love that we make that switch
how do we tell this to people who complained about this?
--> Lila: make summary of things we took from suggestions and tested/implemented in banners
Lisa: we have a good story to tell about how we took suggestions, even though many of the suggestions are not in the banner
did reader research survey to address other issues
some of the concerns brought up by community were not covered in last survey, so we'll do another one
that research is really important, because it's difficult for both us and community to step back and see through eyes of reader who does not think about Wikipedia very often.
Fundraising Tech
Katie: main thing: code freeze by October 1, in time for Megan's benchmarking in October
i.e. a lot of "strengthen & focus", not a lot of "experiment"
as mentioned, total reintegration of Amazon, done on time
wanted to do France campaign in October
have only one Ops person, who has to put out fires before working on infrastructure changes
Lila: did it have impact on revenue?
Katie: no, figured out another way
France is special when it comes to credit cards etc.
had a lot of perf issues with CiviCRM
new person will fix this or look at other options
banner history: for first time will be able to see which banners donors had seen before donating
have all the foundational stuff in there, now need to get data out
Major Gifts and Foundations
CaitlinV: major gifts
modernized, no longer using Google Form
Strategic Partnerships
Sheree: Strategic Partnerships
--> Lila: action item for Reading team: monitor impact of app preinstall rollout
Toby: Dmitry and ... work on that
Sheree: unfortunately not seeing a lot of impact from "Back to school"