Requests for comment/Password policy for users with certain advanced permissions/comments

Discussion about Requests for comment/Password policy for users with certain advanced permissions:

  1. I can really recommend using CrackLib to validate passwords. Simple rules like "at least eight chars long with both digits and capitals" are too rigid to be useful. A 32-char lowercase-only password is stronger than a seven-char word with a digit appended to the end. CrackLib is superior when it comes to verifying password strength. /Esquilo (talk) 08:57, 21 December 2015 (UTC)[reply]
    1. I think I can support the use of cracklib, at least on the sensitive accounts. — Finn Årup Nielsen (fnielsen) (talk) 09:53, 21 December 2015 (UTC)[reply]
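    A minimal sketch of what CrackLib-based validation could look like, using the Python cracklib binding (the binding and its FascistCheck function are real, but this is only an illustration; MediaWiki itself is PHP, so an actual integration would look different):

```python
import cracklib  # Python binding for CrackLib (e.g. pip install cracklib)

def is_acceptable(candidate: str) -> bool:
    """Return True if CrackLib considers the password strong enough."""
    try:
        # FascistCheck raises ValueError with a reason ("it is based on a
        # dictionary word", "it is too short", ...) for weak passwords.
        cracklib.FascistCheck(candidate)
        return True
    except ValueError as reason:
        print(f"Rejected: {reason}")
        return False

is_acceptable("Passw0rd")           # likely rejected: dictionary-based
is_acceptable("xkqvjrmwzelpnotua")  # likely accepted: long and random
```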
  2. I would like to suggest that in addition to the above, the passwords should include at least one upper-case letter, one lower-case letter and at least one number. Preferably 2 of each. I would also add that this suggestion isn't about a lack of trust in the individual with the account on the WMF sites; it's the reality that some outside the sites do not care about site policy. Reguyla (talk) 20:30, 13 December 2015 (UTC)[reply]
    Is this irony? --MF-W 21:51, 13 December 2015 (UTC)[reply]
    Not sure what you mean? But I assume it's directed at my comments about certain admins and editors not following policy and nothing being done about it. Which is out of scope of this RFC! Reguyla (talk) 22:17, 13 December 2015 (UTC)[reply]
    I wasn't sure whether your suggestion of such a precise prescription (2 occurrences each of some kinds of characters) was serious or not. Partially I thought so because only recently I read a "meme" somewhere on the internet about "Swabians choosing a safe password" (in German), where the victim of the joke tries to set a new password, but his choices are always rejected because something (e.g. a number, an uppercase letter) is missing. --MF-W 00:17, 14 December 2015 (UTC)[reply]
    Oh ok thanks for the clarification. Those are common practices in the industry is all. In fact most recommend using special characters as well such as %, ^ and & but I think that would be overkill for this site. Reguyla (talk) 14:40, 14 December 2015 (UTC)[reply]
    This sounds like dictating the style of password, which might be more than one wants to do. I don't recall what upper limit there is on password length, but I recall some advice from some time back that passwords ought to be long natural-language phrases, which is an entirely different style of password that could be extremely hard to crack but might be all lower-case with no numbers. Just a passing thought.

    Re trusting individuals, trust is always a relative thing. If you have 100 people and you're 99% sure of each of them, there's roughly a 63% chance (1 − 0.99^100 ≈ 0.63) that you're wrong about at least one of them. Likewise for 1000 people you're 99.9% sure of. --Pi zero (talk) 23:30, 13 December 2015 (UTC)[reply]

    I don't think we should make complexity requirements, or if we did, I think we'd want something much more complicated than that (as xkcd says: correct horse battery staple is a good password). Length matters much more than complexity. Re Pi zero: the upper limit on password length is 4096 bytes. Which should be enough for anyone, unless you use an epic pass-poem. BWolff (WMF) (talk) 23:59, 13 December 2015 (UTC)[reply]
    It's a good point. A long enough password doesn't need so much variety of characters. Platonides (talk) 00:08, 14 December 2015 (UTC)[reply]
    The random password generation algorithm I usually use has a tendency to produce passwords lacking punctuation and digits. When the site I’m registering at states that the password must have a digit, I invariably append ‘1’ to the string. When the site requires punctuation as well, I put ‘!’ (that is: Shift+1) at the end. Now, may I doubt that these measures actually improve my password strength? (That said, only a few sites I’ve needed to register at use such a formal approach to security, but still.) — Ivan Shmakov (dc) 17:45, 22 December 2015 (UTC)[reply]
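    To put rough numbers on the length-versus-complexity point above, here is a back-of-the-envelope entropy comparison (a sketch; the symbol-pool sizes are illustrative assumptions, and it only applies to passwords chosen uniformly at random):

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy of a uniformly random string: length * log2(pool size)."""
    return length * math.log2(pool_size)

# "Eight chars with digits and capitals": 8 symbols from a ~62-symbol pool.
print(f"{entropy_bits(62, 8):.1f} bits")     # ~47.6 bits
# 32 random lowercase-only characters.
print(f"{entropy_bits(26, 32):.1f} bits")    # ~150.4 bits
# xkcd-style passphrase: 4 words from a 7776-word Diceware list.
print(f"{entropy_bits(7776, 4):.1f} bits")   # ~51.7 bits
# A dictionary word with a digit appended is far below all of these,
# since it isn't drawn uniformly from anything like so large a space.
```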
  3. I can't support the “for users with certain advanced permissions”: this should apply to all users without exception. Let me adapt what Brion said here for HTTPS: “If we can require [it] for some […], it would be irresponsible not to require [it] for *all* […].” Also, I have a very high esteem of most advanced right holders I know, but that doesn't prevent me from not trusting them when it comes to password security. No offense to be taken, really: security only works when you trust nobody. And for what it's worth, on frwiki, we've already had a sysop whose password was so simple another user was able to find it to impersonate him (it was in 2010). — Arkanosis 00:03, 14 December 2015 (UTC)[reply]
    Users with advanced permissions are a subset of all users :) Platonides (talk) 00:08, 14 December 2015 (UTC)[reply]
    Users with advanced permissions, if compromised, can harm Wikipedia in ways that an ordinary user cannot. If an account with no advanced permissions is compromised, it can cause little additional harm beyond what a sock would, other than to the reputation of the legitimate account holder. As the primary harm is to the person who selected the weak password, the case for forcing a stronger one, for their own good is weaker than for say an Admin. Monty845 (talk) 04:21, 16 December 2015 (UTC)[reply]
    If an ordinary user's account is compromised, the only harm that can be done is to the user's reputation. If an admin's account is compromised, there are ways they can shut down a wiki entirely for a few minutes (possibly longer, on some of the smaller wikis). --Carnildo (talk) 02:06, 17 December 2015 (UTC)[reply]
    Someone who knows what they are doing can do a lot worse with an admin account than shut down the wiki for a couple of minutes. BWolff (WMF) (talk) 11:47, 22 December 2015 (UTC)[reply]
    Thanks for bringing this up; a long-standing question of mine is: apart from increasing resource usage, what goal has the HTTPS switch actually achieved? What purpose will it serve once (and if) the measures documented in http://ietf.org/mail-archive/web/perpass/current/msg01979.html become reality for a certain country some two weeks from now? Thanks. — Ivan Shmakov (dc) 17:45, 22 December 2015 (UTC)[reply]
    The news from Kazakhstan is indeed very concerning. Many of the threat assumptions around HTTPS assume that the user has some method of eventually getting around the adversary (i.e., the adversary does not control all network links into/out of the country), or that the user considers simply not communicating in the case of an active attack. Kazakhstan breaks these assumptions. However, most governments are politically unwilling to go to the extremes that Kazakhstan has gone to, at least so far (and hopefully it will stay that way). Anyway, HTTPS achieves the following goals:
    1. Making the site faster (Most browsers require HTTPS for HTTP/2 [Edit: while the statement is true for HTTP/2, in practice we are still using SPDY afaik]. SPDY better utilizes the network, and results in improved load times. This was never a primary goal for us with HTTPS, but it's an important benefit to keep in mind. Often people frame HTTPS discussions as a cost vs. benefit of security. It's important to remember that even if there were no security benefits, HTTPS would still be a net win as it makes things faster).
    2. Protection against eavesdropping from a purely passive attack (There are reasons to believe that several governments engage in mass deep packet inspection to see what people are doing on the internet (not to mention the person sitting beside you on the same wifi), but aren't willing/able to resort to active attacks in the general case. HTTPS significantly improves user privacy in these cases [yeah yeah, side channels exist with packet length/timing, but that's significantly harder to pull off than just DPI])
    3. Protection from a shared-medium session-stealing attack (AKA Firesheep), or sniffing passwords, against a purely passive adversary, or an active adversary that isn't capable of getting a rogue certificate. If we ignore the big evil governments spying on everyone for a second, and instead concentrate on the shady character using the same public wifi as you, this is the most realistic attack against someone using our site. HTTPS prevents it.
    4. Being able to detect, in principle at least, that an active attack is taking place - Even in the case of a rogue certificate, it would be at least possible for someone to detect what's going on (e.g. by comparing certificate hashes à la EFF's SSL Observatory; see the sketch after this list). Knowledge of an attack is the first step to dealing with an attack.
    5. Providing integrity to Wikimedia content against attackers who aren't willing to do what Kazakhstan is doing. A key part of Wikipedia is its neutrality. If an attacker could modify the contents of Wikipedia articles in transit, this could seriously undermine Wikipedia's credibility as a neutral source of information. I personally think this is an extremely important goal, which often gets forgotten in discussions around https.
    6. Making censorship of Wikimedia an all or nothing game (for people not willing to do what Kazakhstan is doing). With HTTPS, you can't censor by keyword. The most precise thing you can do is DNS poisoning to censor only a specific language of a specific project. Thus instead of allowing someone to censor a few key articles they care about, this forces censors to choose between censoring everything, or nothing. Whether or not this is a positive depends on who you ask. Some people think that this will tip the balance in favour of less censorship of Wikipedia, as it requires much more political capital to censor all of Wikipedia instead of just a few key articles (see for example GitHub in China). Or it might result in a larger amount of collateral damage, forcing censors to censor everything. I certainly know people who think this is a bad thing, and others who think it's a good thing. [To be clear, I'm talking about goals that various people have which I've heard of. I wasn't involved with the foundation's efforts for the HTTPS rollout, and I'm not sure precisely what their security goals are. Based on public statements other people have made, I think the foundation's main goals are probably user privacy against passive attackers [including nation-states], security of passwords/session tokens against a low-skill adversary on the same network, and performance gains from HTTP/2] BWolff (WMF) (talk) 23:03, 22 December 2015 (UTC)[reply]
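    As an illustration of the detection idea in point 4 above, here is a minimal sketch (plain Python standard library, not WMF tooling) of computing a server-certificate fingerprint that could be compared against the fingerprint seen from a different network path:

```python
import hashlib
import ssl

def cert_fingerprint(host: str, port: int = 443) -> str:
    """SHA-256 fingerprint of the certificate the server presents to us."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

# If an active attacker with a rogue certificate sits on our path, this
# value will differ from what clients on an unaffected network observe.
print(cert_fingerprint("en.wikipedia.org"))
```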
    To clarify: my objection was primarily on the for all users part; that is: the inclusion of the casual (as in: no account) readers. It wasn’t my intent to question the value of HTTPS for either the readers versed in communication security or non-IP contributors. Protection against session and password stealing, as well as the ability to detect tampering, are indeed valid uses for HTTPS.
    1. Making site faster (Most browsers require HTTPS for SPDY […]) – first, I see no reason for WMF to promote reliance on such a software deficiency. Then, are there any numbers to support that HTTP/2 or SPDY over HTTPS outperforms plain HTTP (possibly with compression) when a LAN-accessible caching proxy is involved – in, say, class-room use cases (multiple clients obtaining the same HTML and images), and assuming uplink bandwidth in the order of 512 kbit/s? (Which I believe may still be widespread in Kazakhstan, as well as its neighbors, including the rural areas in Russia, etc.) The near-impossibility of site-wide caching is still a drawback of HTTPS – as is its higher CPU cost. (Both client- and server-side. Granted, I know of no example of a country with a significant portion of Internet users relying on i586 or older hardware, but I don’t know for sure that no such country exists, either.)
    2. As an aside, in The Road Ahead, it’s suggested that software should “advertise itself” – that is, that an average user should be able to discern the new version from the old at first glance. I believe that the switch to the Australis theme in Firefox back in 2013 (to much chagrin of those of us who have used the browser since its Netscape Navigator days) is perfectly in line with this approach. And although I may be wrong, my guess is that the refusal to support SPDY or HTTP/2 on plain-HTTP connections may be an attempt to earn an easy “PR point” for the software involved. (“Because we care about your privacy” sounds like a good slogan, indeed, even if all of their care is doomed to fail due to the user’s own carelessness.)
    3. The Foundation’s own decision regarding HTTPS doesn’t look dissimilar, actually – trying to enforce privacy (or whatever we call it) doesn’t do any good for the average Joe; in the best case, there is little to no difference (apart from a possible performance boost, which should be achievable irrespective of TLS, as per the above); in the worst, the Wikimedia project that’s found to host content outlawed in the country becomes inaccessible in its entirety. And the credibility may be impaired regardless, thanks to whatever mass media the state (or other interested party) in question happens to own.
    4. Another question is whether we consider the Internet “the library of the future”. The last time I was at a conventional library, it had large reading rooms, and the readers were passing by the tables occupied by other readers – with a pretty good chance of violating their privacy by spotting the particular material being read. When HTTPS became mandatory for the Foundation resources, we in effect forbade the reading of Wikipedia in libraries that do not provide “private space” for their entire readership. Now, doesn’t it sound a bit… unusual (for lack of a better word) when stated this way?
    5. Finally, I’d appreciate Wikimedia projects becoming accessible as Tor “hidden services”, for such a setup can offer much better privacy than any TLS service that does not utilize onion routing of some kind.
    6. Making censorship of Wikimedia an all or nothing game – indeed, it’s debatable whether that’s an improvement or the exact opposite. As the recent events have shown, the Russian authorities and judges do not mind censoring specific articles on Russian Wikipedia even if that, as the law requires, means censoring Wikipedia as a whole. And it makes me wonder how long a Wikimedia project could persist should the majority of its readers be filtered off by the applicable law. (And won’t that lead to the rise of some rival, perhaps state-sponsored, and thus quite likely non-neutral, projects, like Baidu Baike, which has long flourished in the PRC?) Thus, I guess I’m in the camp of those who believe that the “nothing” here by far outweighs the “all”. (Then again, with Wikipedia content being free, state-sponsored mirrors are quite a possibility. As well as, why, mere HTTP-to-HTTPS proxies.)
    Ivan Shmakov (dc) 19:17, 23 December 2015 (UTC)[reply]
    1. I'm not sure how common caching proxies are in the modern internet. However, in our case we want edits to show up immediately, so we send headers disabling proxies not under our control (WMF has proxy server clusters in Amsterdam and San Francisco). Any conforming HTTP proxy will not cache Wikimedia pages (they might possibly have cached images, though. It's unclear what impact HTTPS would have in that regard, although I'll note that users are generally much more tolerant of latency loading images than of latency loading text). In regards to compression - we always use HTTP compression (gzip) if the client supports it. For the average user, there was definitely a significant improvement in load time with HTTPS for everyone. See load time graphs from June 2015 [You may need to scroll a little bit. If I'm interpreting the graph right, there was a 20% improvement, which is extremely significant] (HTTPS for everyone was deployed the week of June 11-16 2015). I'm doubtful that the CPU performance cost of TLS is noticeable. On the client side, the user only has to do a very small number of expensive encryption operations, so I doubt it's noticeable. On the server side, which is likely to be more affected by the CPU cost of encryption, Google reports that encryption for everybody only used 1% of their CPU [1], so it's really not the case that HTTPS has a high CPU cost.
    2. You're right that SPDY-only-over-HTTPS could be seen as a cheap political trick to force SSL adoption (I don't think it's an attempt to get PR points. Users are totally unaware of whether SPDY or HTTP/2 is in use, and most users have a poor understanding of the privacy implications of internet communication. I think it's a trick to try to force SSL adoption on a wide scale, in order to make passive surveillance on a wide scale impractical. This is a political goal that many (not all) people in the internet community have. Basically to implement the agenda suggested by RFC 7258). That said, ultimately we are not the ones in control of that. While many people in Wikimedia support such a goal, even if everyone in Wikimedia disliked that goal, we'd still have to play ball.
    3. Wikipedia as a library is an interesting comparison that comes up a lot. I'm not sure your analogy is apt. Sure, people can look at what you're reading in a library. But I think this is closer to people looking over your shoulders when you browse Wikipedia in public. Traditionally, libraries are very protective of their lending records, and don't release them. A massive passive attack from an internet backbone seems closer to a library (or all libraries) giving away their lending records than it does to someone in the same room seeing what book you're reading.
    4. I think the more important bit for Tor is to come up with some sort of compromise to allow editing from Tor nodes (without needing a GIPBE flag). I've never really understood why people want hidden-service nodes for services with public locations. As far as I know, that doesn't increase the anonymity, and adds 3 extra hops and a lot of extra latency (OTOH, it means that one doesn't need to use an exit node, which are in short supply. Maybe avoiding exit-node bandwidth limitations outweighs the extra hops needed for a hidden service. I have no idea) BWolff (WMF) (talk) 00:01, 24 December 2015 (UTC)[reply]
    1. I’m not sure how common caching proxies are in the modern internet. – my current employer uses a proxy for virtually all the employees’ (numbering no less than 750, I guess, although the official Web site remains secretive on this) Internet traffic for the purposes of authentication. (The logs are kept for several months, as required by the law, and were disclosed to the authorities upon request on several occasions.) I believe that the proxy in use does cache.
    2. Any conforming HTTP proxy will not cache Wikimedia pages – as long as we talk about free software, the decision of whether or not to conform, and to what extent, lies ultimately in the hands of the user (that is: the proxy administrator.) I certainly would try to configure my proxies to cache for a couple of minutes or more, when the use case warrants it. Then, however, HTTP provides enough facilities to check whether the resource has changed or not, which could be used to cache for extended periods of time, yet still have the edits show up immediately. If MediaWiki cannot properly utilize these facilities as of yet, that’s an issue in itself.
    3. If I’m interpreting the graph right, there was a 20% improvement. – ACK, thanks for the pointer, I’ll check it out somewhat later. I guess I should note, however, that none of the browsers I typically use for Web-reading (apart from the Commons and similar resources, I mostly stick to a customized version of EWW and Lynx) appear to support HTTP/2 or SPDY. Curiously, cURL does support both “h2” and “h2c”, so obviously there’re those of us who are still concerned about plain HTTP.
    4. But I think this is closer to people looking over your shoulders when you browse Wikipedia in public. – I don’t see it being much different from the case when one reads a plain-HTTP resource over public Wi-Fi. A massive passive attack from an Internet backbone – I know of no evidence that any but a few nations actually engage in such activities. Moreover, I did not suggest that we drop support for HTTPS; rather, my proposal is to bring plain-HTTP support back, so that the users can decide for themselves; and those who would opt in still ought to be more or less safe from such an attack.
    5. That said, ultimately we are not the ones in control of that. […] even if everyone in Wikimedia disliked that goal, we’d still have to play ball. – well, we aren’t in control of the state laws across the world, either. I don’t think that means we should mindlessly cooperate with the authorities when some state court decides that some specific information is harmful to minors and should not be distributed over the Internet (as was the case with the Russian Wikipedia block this year); or does it?
    6. OTOH, it means that one doesn’t need to use an exit node, which are in short supply. – and even more so in the case the resource happens to be filtered for one or more of these nodes. As for being able to edit via Tor – sure, count me in.
    Ivan Shmakov (dc) 12:32, 26 December 2015 (UTC)[reply]
    Not allowing plain HTTP is primarily about concerns regarding downgrade attacks AFAIK (e.g. sslstrip). It's important that the people who do want HTTPS get it even in the face of an active attacker. Lynx is unlikely to get much benefit from SPDY/HTTP/2, as they are optimized for webpages that load a lot of subresources (JS, CSS, images, etc.), and Lynx doesn't support images, CSS, JS, etc. BWolff (WMF) (talk) 07:05, 28 December 2015 (UTC)[reply]
    1. Per sslstrip’s page linked above, it seems to be designed to intercept traffic on TCP port 80, which should be plain-HTTP already. Or do you mean that someone may want HTTPS but type an http: URI by accident or due to habit, and be prevented from being “upgraded” to HTTPS via a redirect? (Thanks for the link, BTW; seems like something I was loosely looking for recently.)
    2. Lynx is less likely to benefit from this as it doesn't support images, css, js, etc. – while the lack of any CSS support whatsoever in Lynx is certainly a problem that I hope to put some effort into one day (and EWW – or rather SHR – isn’t much better in this regard), I’d like to note that I tend to use NoScript whenever possible, and thus I’m unsure if the “server push” feature these new protocols implement will actually result in quite as much saved bandwidth (or reduced latency.) I didn’t delve into the details as of yet, though.
    3. JFTR, I’ve checked the HTTP/1.1 headers of the Wikimedia responses, and their Cache-Control: has private and must-revalidate, but not no-store, which means (AIUI) that caching is possible, as long as the cache does not reuse a response sent to one client’s request to serve another’s (and also always uses an If-Modified-Since: request to check if the resource cached was updated.) Moreover, the logs for the MediaWiki instance I run myself indicate successful generation of the 304 status code, which indicates that at least the browser cache is properly utilized.
    Ivan Shmakov (dc) 18:00, 29 December 2015 (UTC)[reply]
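    For anyone who wants to reproduce the header check described in point 3 above, here is a small sketch using Python's standard library (the printed values are whatever the server actually sends; the conditional-request behaviour is plain HTTP/1.1, nothing Wikimedia-specific):

```python
import urllib.error
import urllib.request

URL = "https://en.wikipedia.org/wiki/Main_Page"

# First request: inspect the caching-related response headers.
with urllib.request.urlopen(URL) as resp:
    cache_control = resp.headers.get("Cache-Control")
    last_modified = resp.headers.get("Last-Modified")
    print("Cache-Control:", cache_control)
    print("Last-Modified:", last_modified)

# Conditional re-request: a 304 means a cached copy could be reused.
if last_modified:
    req = urllib.request.Request(
        URL, headers={"If-Modified-Since": last_modified})
    try:
        with urllib.request.urlopen(req) as resp:
            print(resp.status)      # 200: the resource has changed
    except urllib.error.HTTPError as err:
        print(err.code)             # 304: Not Modified
```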
    Well, I’ve lost the point entirely. As I read it, the whole idea behind sslstrip is to transparently proxy requests intended for http://example.com/ to https://example.com/ – and that only makes sense if the site in question has plain-HTTP support disabled (by means of a redirect.) For one thing, this allows user agents lacking TLS support (Dillo had TLS support in the works last time I checked, and it’s certainly optional in Lynx) to be used to read Wikimedia sites, as well as a number of others. — Ivan Shmakov (dc) 14:25, 30 December 2015 (UTC)[reply]
  4. While I am certainly fine with this proposal - I've never had a Wikimedia-related password that doesn't meet these requirements - it is worthwhile to note that the discussion of account security arose out of a situation where administrators re-used passwords that they'd used on other sites instead of creating a fresh Wikimedia-only password. There have been all kinds of system hacks including password theft over the last several years, as we all know. However, there is no practical way to force administrator or higher-access accounts to have a password unique to their Wikimedia account. It's clearly Password Security 101, but even good people sometimes forget these things. Risker (talk) 05:10, 14 December 2015 (UTC)[reply]
    While the events at EN Wikipedia have caused a lot of attention to be focused on passwords, and really kicked this discussion into high gear, I would note that this isn't entirely a reaction to the events at Wikipedia (And of course, this would not have directly prevented the events at EN Wikipedia, although it appears that at least one of those users had a password that would not meet the proposed requirements [2], so it might have indirectly prevented the incident insomuch as the user would not be able to use the weak password that was shared on our site). The code for increasing the minimum length was completed way back in June. It was turned off on Wikimedia pending having some sort of discussion about it[3]. Figuring out what sort of discussion to have kind of got pushed to the back-burner while other more critical things got priority. So this discussion is much overdue, and would have happened even without the incident on English Wikipedia. BWolff (WMF) (talk) 08:30, 14 December 2015 (UTC)[reply]
    Maybe the WMF could ask the companies who make password managers to provide a whole lot of free or discounted copies for active Wikipedians (or just admins/CUs etc.)? Perhaps a discount on LastPass Premium or a voucher code for 1Password. There are FOSS alternatives as well. Tom Morris (talk) 11:57, 21 December 2015 (UTC)[reply]
    Personally, I would think that FOSS solutions would be much more trustworthy for this type of application. BWolff (WMF) (talk) 12:03, 22 December 2015 (UTC)[reply]
    Dubious, given that closed-source solutions have a more effective business model, thus more programmer hours to improve security. There is some nice discussion about that in this (looong) thread. --Tgr (WMF) (talk) 04:36, 1 January 2016 (UTC)[reply]
  5. "Once the restrictions are put into place, users who don't have a strong enough password will be asked to change it next time they log in." To clarify: will this force the user to change the password before proceeding, or can they dismiss and ignore it each time? Equinox (talk) 07:33, 21 December 2015 (UTC)[reply]
    Initially they will be allowed to dismiss the warning every time (although just having the warning will hopefully serve to entice users to do the right thing). Eventually, once everyone is used to the warning and we're sure the warning has not caused any unexpected hardships, I believe the plan is to switch to a system where users will be forced to change their password to continue logging in, although I'm not exactly sure about the details of that switch. ( ping @User:CSteipp (WMF): Do you know better what the exact plans with that are?). BWolff (WMF) (talk) 09:07, 21 December 2015 (UTC)[reply]
    The mechanism we use currently just prompts the user each time they login, and they can continue to click "cancel" every time. We could expire their password when they have a non-compliant password, so they could click "cancel" until their password hard-expires, and then they would have to reset the password before they could complete their login. CSteipp (WMF) (talk) 21:25, 22 December 2015 (UTC)[reply]
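    A minimal sketch of the soft-then-hard expiry flow described above (the names, fields, and grace period are hypothetical illustrations, not the actual MediaWiki implementation):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

GRACE = timedelta(days=30)  # hypothetical grace period before hard expiry

@dataclass
class Account:
    password_is_compliant: bool
    password_expires_at: Optional[datetime] = None

def on_login(account: Account, now: datetime) -> str:
    """Decide what the login flow does with a (non-)compliant password."""
    if account.password_is_compliant:
        return "proceed"
    if account.password_expires_at is None:
        # First login with a non-compliant password: start the clock.
        account.password_expires_at = now + GRACE
    if now < account.password_expires_at:
        return "warn-dismissable"  # user may click "cancel" and continue
    return "force-reset"           # hard expiry: must reset to log in
```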
  6. Isn't the ultimate solution for the problem described to enable and enforce 2FA (two-factor authentication) for those with the rights? Kenrick95 (talk) 12:04, 21 December 2015 (UTC)[reply]
    2FA should not happen, as it currently is only practical through the use of cell telephony, which bears its own share of privacy risks – through being predominantly based on non-free software first, but also because of the very design of this particular communication medium. (For instance, RMS explains it briefly in http://stallman.org/rms-lifestyle.html.) — Ivan Shmakov (dc) 17:45, 22 December 2015 (UTC)[reply]
    rms has some odd views on passwords (or at least did historically). While cell phones are the most common means of doing 2FA, they are hardly the only one. It wouldn't surprise me if somebody sells dedicated hardware for TOTP. There are also command-line TOTP programs you can use on your free-software-filled GNU/Linux box if you so desire. BWolff (WMF) (talk) 20:58, 22 December 2015 (UTC)[reply]
    I do not share the entirety of RMS’s views, but I find it curious that we tend to offer password-less access to the machines intended to be used mainly by our students. Moreover, I do not oppose 2FA on principle, but rather what indeed is the most common means of doing it. (Although frankly, I don’t seem to understand the value of 2FA should both the password and the TOTP key happen to reside on the same host – which I believe is going to be the case for most users, unless cell phones – or some dedicated hardware, presumably rare – are to be put into the mix.) — Ivan Shmakov (dc) 21:40, 22 December 2015 (UTC)[reply]
    Indeed, for best security one wants a machine for TOTP totally separate from the one used for logging in. But even if using cell phones, there's a high likelihood that at some point or another people will log in from their cell phone (assuming it's a smartphone). There are still benefits to TOTP, even if on the same machine [e.g. in the case of password re-use, or poorly chosen passwords] BWolff (WMF) (talk) 23:03, 22 December 2015 (UTC)[reply]
    If we introduce two-factor, we should push for U2F as that eliminates the possibility of man-in-the-middle attacks. (This is a good summary of the various attacks and defenses around web authentication.) --Tgr (WMF) (talk) 04:36, 1 January 2016 (UTC)[reply]
    I do see your concern about using a mobile phone (either via less-secure SMS or via an authenticator app), but it is happening everywhere across the big sites (Microsoft, Google, Facebook, WordPress, Dropbox, etc). Just because there is no free software to do 2FA does not mean it couldn't happen. Kenrick95 (talk) 10:58, 31 December 2015 (UTC)[reply]
    That may or may not be true (of the above, I currently use Google Mail – no 2FA necessary, – and plan to try out WordPress.com as well), but I certainly hope that Wikimedia will not become “just another big site” in the foreseeable future. — Ivan Shmakov (dc) 18:42, 31 December 2015 (UTC)[reply]
  7. The length and complexity of the passwords isn't nearly as important as the number of guesses allowed from recognized and unrecognized devices. According to mw:Manual:$wgPasswordAttemptThrottle, the system defaults to allowing 1 attempt per minute. That's 1,440 attempts per day and 43,200 attempts per month. There's no reason to allow more than 10 attempts total from a device I've never used before, and no more than 20 attempts from a device I use regularly (recognized by cookie or IP). Secure the system first against attacks, then focus on password strength and 2FA. -- Dave Braunschweig (talk) 15:37, 21 December 2015 (UTC)[reply]
    Shared IP setups (be it a proxy or a NAT box) are not at all uncommon today, and so the or IP part above readily opens a way for a malicious party to use the limit suggested to deny access to a specific user on the same network (that is: using the same proxy or NAT), as long as the login of the latter is known. Hence, the applicability of this approach may depend on the user in question. — Ivan Shmakov (dc) 14:25, 30 December 2015 (UTC)[reply]
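    A model of the kind of per-device/per-account throttle proposed above ($wgPasswordAttemptThrottle itself is MediaWiki PHP configuration, so this Python is only an illustrative sketch with made-up limits). Note that the denial-of-service concern is visible in the model: anyone behind the same NAT/proxy, or anyone who merely knows a username, can exhaust the counters for the legitimate user:

```python
import time
from collections import defaultdict
from typing import Optional

# Illustrative limits, not MediaWiki's defaults: (max failures, window in s).
LIMITS = {"per_ip": (10, 3600), "per_account": (20, 3600)}

failures = defaultdict(list)  # throttle key -> timestamps of failed logins

def allow_attempt(username: str, ip: str, now: Optional[float] = None) -> bool:
    """True if a login attempt for (username, ip) is not yet throttled."""
    now = time.time() if now is None else now
    for key, (limit, window) in ((f"ip:{ip}", LIMITS["per_ip"]),
                                 (f"acct:{username}", LIMITS["per_account"])):
        recent = [t for t in failures[key] if now - t < window]
        failures[key] = recent          # drop entries outside the window
        if len(recent) >= limit:
            return False                # throttled
    return True

def record_failure(username: str, ip: str) -> None:
    now = time.time()
    failures[f"ip:{ip}"].append(now)
    failures[f"acct:{username}"].append(now)
```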
  8. The password list mentioned in this RfC contains only Latin letters and Arabic digits. Of the 10000 most common passwords, only 3000-4000 are 8 bytes or longer, therefore the other 6000 should be replaced with common passwords from Cyrillic, CJK, Arabic and other alphabets. — There is nothing said about using a unique password. I think accounts with additional permissions should be required to state to the Foundation that they use a unique password (I know this RfC is about software and this cannot be enforced by software or otherwise, but there are already other policies that cannot be enforced). And the password change dialog should include a request to use a unique password for all accounts. — An echo notification should be added to display the number of unsuccessful login attempts to all accounts. — Passwords can be reset by requesting a new password by email. A hacker can wikimail an administrator with a plausible request, and the answer email of the administrator will reveal the very email address this administrator uses to recover a lost password. The hacker can proceed with this approach until he finds one with a vulnerable email provider. Therefore a security question should be asked when an emailed password is used (mandatory for admins, opt-in for all accounts). --° (Gradzeichen) 18:51, 21 December 2015 (UTC)[reply]
    I had trouble locating a list of non-English common passwords. If you're aware of any, please let me know. The echo notice is a good idea. I've filed phab:T122123 for it. I've filed phab:T122124 for instructing users to use a unique password. I agree that emailing a new password is a potential weak spot, although I'm not sure if a security question is a good solution (or if there is a good solution). See some of the comments at phab:T122013. BWolff (WMF) (talk) 11:45, 22 December 2015 (UTC)[reply]
    @BWolff (WMF): As my answer is somewhat lengthy, I have put it on this page: User:°/some ideas on common passwords. --° (Gradzeichen) 21:20, 22 December 2015 (UTC)[reply]
    100 most common Hungarian passwords. The approach used there (scan password dumps from published breaches which include emails and filter by domain) could easily be adapted to other languages. --Tgr (WMF) (talk) 04:36, 1 January 2016 (UTC)[reply]
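    A sketch of what checking a candidate against such a common-password list might look like, including a normalization step so that trivial "password1!"-style decorations of a listed word are also caught (the file name and the stripping rule are illustrative assumptions, not MediaWiki's actual blacklist logic):

```python
def load_common(path: str = "common-passwords.txt") -> set:
    """One password per line, e.g. assembled from published breach dumps."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_common(candidate: str, common: set) -> bool:
    normalized = candidate.lower()
    # Also strip trailing digits/punctuation, so "Passwort123" is caught
    # when only "passwort" is on the list (illustrative rule).
    stripped = normalized.rstrip("0123456789!@#$%^&*")
    return normalized in common or stripped in common
```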
  9. Does everybody realize we have more secure ballots for English ArbCom voting than the login used to vote? Or OAuth? Any reason this is not implemented for login, other than inertia? Slowking4 (talk) 20:09, 21 December 2015 (UTC)[reply]
    Can you clarify what you mean by this? If I remember correctly, we use SecurePoll for ArbCom (I think. Not an en Wikipedian, so I didn't vote), which uses PGP to encrypt ballots. Is that what you're referring to? Are you suggesting to use public/private key pairs for authentication? (Client-side SSL certificates would be the logical implementation of that). I think it's unrealistic to expect most users to be able to do that, given the current state of client-side SSL certificates on the web in general (support exists, but I can't think of a single site that uses them). BWolff (WMF) (talk) 11:33, 22 December 2015 (UTC)[reply]
    I'm saying login is the weak link. PGP is currently implemented for voting and OAuth. No reason it could not be implemented for bureaucrats, in time admins, and eventually editors. Yes, making peer-to-peer encryption user-friendly is the task, but it does not seem to me more off-putting than the current wall of captchas. See also w:Transport Layer Security - Slowking4 (talk) 15:58, 22 December 2015 (UTC)[reply]
    I think you're conflating encryption and authentication, which are really separate (but highly related) things. Encryption is making sure that nobody can eavesdrop; authentication is making sure you are talking to the right person. PGP encryption in SecurePoll is primarily used for encrypting the votes. We don't really use it for identity management, or at least not in a way that really translates to logging in (a small number of users hold the decryption key for the list of votes, obviously). Key management in PGP is most closely associated with the web-of-trust model, which is rather inapplicable to our use case (web-of-trust is kind of like verifying you're talking to the right person by playing six degrees of Kevin Bacon. For example, my key has six people who've vouched for it. If someone wants to talk to me, the hope is that they know one of the people who've (potentially transitively, depending on how much faith you put in the internet) vouched for me). One can of course potentially use other key-management models with PGP, which may translate better. The aspect of PGP that we primarily use in SecurePoll is the plain encryption facilities. The encryption facilities of PGP are roughly equivalent to the facilities in TLS. Both can support a variety of ciphers, AES being one of the most common. (TLS = Transport Layer Security = the S in HTTPS). We use TLS to secure your password in transit so that nobody can eavesdrop on it. Specifically, when you submit a password, it is encrypted with TLS until it gets to the Wikimedia data center (if your nearest data center is a caching data center, the last leg of the trip is encrypted using IPsec). So in one sense, your password is treated with the same amount of encryption as SecurePoll votes are.
    Authentication involves proving who you are. Broadly speaking there are 3 ways of doing this:
    1. Something you know. Most typically this is a password, which is what we are doing currently. This method has the benefit of being easy to change if someone finds out your secret. The con of this method is that humans are very bad at memorizing high-entropy secrets, and generally don't follow security best practices.
    2. Something you are, aka biometrics. Not really directly applicable to us, as we're not in physical control of the user's device. If someone wants to use biometrics, the typical workflow would be to use it to unlock some sort of private key, which would be entirely on the user end. The benefit of this method is that it's hard to steal a biometric.
    3. Something you have. There are two subtypes here - First we have TOTP [sometimes called OATH] (when you combine it with a normal password, you get the system many people mean when they say "2FA"; a minimal implementation is sketched after this comment). This is when you have an app on your phone which generates a new code every 30 seconds or so. To log in you need this code, and since it is only good for a short window, someone who steals it has a very limited time frame to make use of it. So in order for someone to impersonate you, they need to somehow steal/hack your phone. The people who are in a position to steal your phone are often not the same people who are in a position to crack your password. Requiring both makes it much harder to break into someone's account. Supporting this method of authentication is on the roadmap for the future.
    The other approach to "something you have" is certificates (or some sort of cryptographic keys). I think this is most likely the type of thing you are thinking of. In a web context, we're probably talking about TLS client-certificate authentication (sometimes referred to as TLS CCA, or TLS client-side certificates). This is where there is a file on your computer, and to log in to a site, your web browser uses this file (sometimes the file is additionally password-protected. Really paranoid people put the certificate on a smart-card device). One of the cool things about this method is that none of the secret material is ever transmitted to the server. All that's transmitted is enough to log you in a single time. So even if someone takes over Wikimedia servers, they can't compromise the original key. This sort of approach is popular in things like SSH (people with a Tool Labs account will be familiar with it), but really hasn't taken off in the web. The main issues are: getting users to understand certificates is complicated (much of the UI is kind of less than ideal. This is the sort of thing that has to be implemented in web browsers, so we have no control over the UI); very few people use it (although apparently it's popular in Estonia [4]), so many of the server-side implementations are less tested than one would normally like for security software (some of the issues mentioned in this pdf about the mod_ssl implementation of CCA really raise red flags about the maturity of the implementation. Of course that's from 3 years ago...). Also, there is no standardized way of logging a user out when authenticating with this method (which is just kind of insane). Last of all, certificates have to be installed in any web browser the user uses. This means that people would never be able to log in from a computer other than their own. I feel like a lot of people would object to such a restriction. Thus, while client-certificate authentication has some very nice properties, I don't think it's the best fit for authentication on Wikimedia. See also http://www.browserauth.net/tls-client-authentication . [This is just my personal opinion about the suitability of TLS-CCA. I haven't actually discussed it with other people, so there's a possibility that the rest of the WMF security team feels differently about it]. BWolff (WMF) (talk) 07:33, 23 December 2015 (UTC)[reply]
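    To make the TOTP mechanism in point 3 concrete, here is a compact RFC 6238 implementation using only Python's standard library (the secret shown is a made-up example, and real validators typically also accept codes from adjacent time steps):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 per RFC 4226)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # current 30-second step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; output changes each step
```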
    I see there are "Authenticator" apps [5] and Yubico's "security key", i.e. encrypted authentication. Slowking4 (talk) 17:32, 23 December 2015 (UTC)[reply]
    "Google authenticator" is just an implementation of the TOTP/OATH algorithm. It is highly likely it will be supported as part of the 2FA support that's getting added sometime in a couple months (Its already supported on https://wikitech.wikimedia.org, but there's still more work to be done in order to make it work on normal wikis, with SUL, and the UI experiance on wikitech is a bit rough around the edges). Yubico is a hardware token product most commonly associated with U2F. This is basically another way of doing 2FA (With even better security than TOTP. Actually probably the best security of any option I've ever heard of. Combines many of the cool properties of doing TLS CCA, without the stupidness, and all the key material is safely on a separate single purpose device, so very difficult to compromise.). Downside of course if you have to spend $20-$40 to buy a hardware token. And browser support is a bit shaky (But apparently is going to improve in the near future). While I don't think there's been any decision about what 2FA methods to support as of yet, I'm hopeful that U2F will be one of them. BWolff (WMF) (talk) 21:59, 23 December 2015 (UTC)[reply]
    TOTP link fix en:Time-based One-time Password Algorithm --TitoDutta 15:38, 24 December 2015 (UTC)[reply]
  10. BWolff (WMF) and other WMF guys, I cannot understand this very well: why not impose this policy on everyone (or no one)? I agree with all those editors who have said experienced editors know well how to protect their accounts. --TitoDutta 15:36, 24 December 2015 (UTC)[reply]
    I'm only worried about accounts which, if compromised, can do real damage that might not be easy to undo. A normal account can't really do much more than an anonymous user (even with high but non-admin rights, the damage possible is orders of magnitude less than if people have admin). On the other hand, if someone compromises oversight/checkuser, they can retrieve very sensitive information which is impossible to make the malicious person un-see. If a malicious person takes over an admin account and has access to editinterface (what I'm really worried about), then that person can (for example) turn a Wikimedia site into a place to distribute computer viruses, extract private information from pretty much any user, and take over pretty much any other user account (including, in certain circumstances, user accounts on wikis other than the compromised one).
    Ultimately, restrictions like this have both a cost and a benefit. While the cost isn't all that high, it's still annoying for some users to have to use a secure password. What does it matter if someone who never edits and has an account only to, say, set language preferences in Special:Preferences has a secure account? It does matter if someone with admin rights has a secure password. BWolff (WMF) (talk) 16:47, 24 December 2015 (UTC)[reply]
  11. I should say that I Oppose applying this to local sysops, but Support it for the rest, because, I would argue, normal sysops should have the right to be a white hat. --Liuxinyu970226 (talk) 10:47, 25 December 2015 (UTC)[reply]
  12. Generically supportive of this with the obvious exception that WMF should not target a single class of user. WMF should be concerned with the security/privacy of all user accounts, not solely those which - if compromised - might harm the WMF. - Amgine/meta wikt wnews blog wmf-blog goog news 17:25, 26 December 2015 (UTC)[reply]
  13. One thing seems to be missing in the proposal: what about accounts gaining additional rights in the future? A 'normal' user can have a single-byte password, now and in the future. What happens if this user becomes an admin, checkuser or whatever in the future? Although it's not a bad idea to change your passwords now and then, forcing users to change them when they gain these rights in the future is not part of the proposal. RonnieV (talk) 00:50, 28 December 2015 (UTC)[reply]
    If the user gets new rights, and their password doesn't meet requirements, they will be prompted to change their password next time they log in. BWolff (WMF) (talk) 06:59, 28 December 2015 (UTC)[reply]
  14. … Speaking of the Yair rand comment (15115192.) If my math is still good, there’re just 20,160 anagrams for “password”, and my guess is that the vast majority of them are not going to be blacklisted. Which may or may not have certain implications. — Ivan Shmakov (dc) 14:25, 30 December 2015 (UTC)[reply]
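    Ivan's count checks out: "password" has eight letters with one letter ("s") repeated twice, so there are 8!/2! = 20,160 distinct arrangements. A quick verification in Python:

```python
from collections import Counter
from math import factorial

word = "password"
n = factorial(len(word))
for repeats in Counter(word).values():
    n //= factorial(repeats)   # divide out permutations of repeated letters
print(n)  # 20160
```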