IP Editing: Privacy Enhancement and Abuse Mitigation


What is IP Masking and why is the Wikimedia Foundation masking IPs?

IP masking hides the IP addresses of unregistered editors on Wikimedia projects, fully or partially, from everyone except those who need access to fight spam, vandalism, harassment and disinformation.

Currently, anyone can edit Wikimedia wikis without a Wikimedia account or without logging in. MediaWiki, the software behind Wikimedia projects, will record and publish your IP address in its public log. Anyone seeking your IP address will find it.

Wikimedia projects have a good reason for storing and publishing IP addresses: they play a critical role in keeping vandalism and harassment off our wikis.

However, your IP address can tell where you are editing from and can be used to identify you or your device. This is of particular concern if you are editing from a territory where our wikis are deemed controversial. Publishing your IP address may allow others to locate you.

With changes to privacy laws and standards (e.g., the General Data Protection Regulation and the global conversation about privacy that it started), the Wikimedia Foundation Legal team has decided to protect user privacy by hiding IPs from the general public. However, we will continue to give access to users who need to see the addresses to protect the wikis.

We're aware that this change will impact current anti-abuse workflows. We are committed to developing tools or maintaining access to tools that can identify and block vandals, sock puppets, editors with conflicts of interest and other bad actors after IPs are masked.


Refocusing work on IP Masking (16 February 2023)

Hi all. We’re officially refocusing our work on the core IP Masking project, now that we have completed the first phase on IP Info Feature and other related projects. We are moving forward with technical planning to understand what will need to change when IP Masking goes into effect. We will be reaching out to our technical volunteers to help evaluate changes and migrate tools, as needed. Some of this planning work has already started on Phabricator, and you may reach out to us there if you have questions about specific tasks.

I will follow this up with another post shortly to share an outline of the MVP (Minimum Viable Product) we have landed on. This MVP is based on the conversations we have had with the community in the past, through this page and other mediums. Please feel free to peruse those previous conversations and read through the past updates on this page. If you have questions or concerns, you can reach out to us on the talk page.

Implementation Strategy and next steps (25 February 2022)

Hello all. We have an update on the IP Masking implementation strategy.

First off, thank you to everyone who arrived on this page and offered their feedback. We heard from a lot of you about how this page is not easy to read and we are working on fixing that. We genuinely want to thank you for taking the time to go through the information here and on the talk page. We took every comment on the talk page into consideration before the decision about the implementation plan was made.

We want to preface this also by saying that there are still a lot of open questions. There is a long road ahead of us on this project and we would like you to voice your opinion in more of these discussions as they come up. If you haven’t already, please go through this post about who will continue to have IP address access before reading further.

We received mixed feedback from the community about the two proposed implementation ideas without a clear consensus either way. Here are some quotes taken from the talk page:

  • For small wikis, I think the IP-based approach is better, because it is unlikely that two anonymous users will have the same IP, and for a vandal, changing their IP is more difficult than erasing cookies.
  • The session-based system does seem better, and would make it easier to communicate with anonymous editors. I'm an admin on English Wikipedia, and my main interaction with IP editors is reverting and warning them against vandalism. In several cases recently I haven't even bothered posting a warning, since it seems unlikely the right person would receive it. In one case I was trying to have a conversation about some proposed change, and I was talking to several different IP addresses, and it was unclear that it was actually the same person, and I had to keep asking them about that.
  • As an admin in German-language Wikipedia, of the two paths described here (IP based identity vs. session-based identity) I clearly prefer the IP based approach. It's just too easy to use a browser's privacy mode or to clear the cookies (I'm doing it myself all the time); changing your IP address at least requires a bit more effort, and we have already a policy against using open proxies in place. I agree with Beland that the session-based identity approach could probably make communication with well-meaning unregistered editors easier, but it just doesn't seem robust enough.
  • I prefer the session-based approach. It provides more value in being able to identify and communicate with legitimate anonymous editors. However, at the same time, we need abuse filter options to be able to identify multiple new sessions from a single IP. These could be legitimate (from a school, for example), but will most likely represent abuse or bot activity. One feature I haven't seen mentioned yet. When a session user wants to create an account, it should default to renaming the existing session ID to the new name of their choice. We need to be able to see and/or associate the new named user with their previous session activity.
  • I am leaning towards the IP-based identities, even if encrypted, as cookies seem more complicated to deal with, and it is very bothersome to keep dismissing their annoying pop-ups (very standard in Europe). I have to mention that I prefer that, to this day, one can use Wikipedia without cookies, unless they want to log in to edit with their username.
  • The ability to perform purely session-based blocks in addition to the existing IP+session blocking would be an interesting upgrade. Being able to communicate with IPv6 users through their session instead of their repeatedly changing IP address would also be a benefit.

In summary, the main argument against the session-based approach was that cookies are easy to get rid of and the user may change their identity very easily.

And the main arguments against the IP-based approach were:

  • the encryption method can be compromised, hence compromising the IP addresses themselves
  • this approach does not provide the benefit of improved communication with unregistered editors
  • it does not allow for session-based blocking (in addition to IP-based blocking)

In light of the above and the discussions with our technical team about the feasibility and wide-ranging implications of this implementation, we have decided to go with the session-based approach with some important additions to address the problem of users deleting their cookies and changing their identity. If a user repeatedly changes their username, it will be possible to link their identities by looking at additional information in the interface. We are still working out the details of how this will work – but it will be similar to how sockpuppet detection works (with some automation).

We are working out a lot of the technical details still and will have another update for you shortly with more specifics. This includes LTA documentation, communication about IPs, AbuseFilters, third-party wikis, gadgets, user-scripts, WMF cloud tools, restrictions for IP-viewer rights etc. We appreciate your input and welcome any feedback you may have for us on the talk page.

IP Masking and how it protects the wikis

9 December 2021 Update

We have discussed the two different approaches to IP Masking that we are considering. Following up on that, we have looked at a few different workflows and how they would change under each of the two implementations.

Note that in both alternatives admins, stewards, checkusers and users with the IPViewer role will be able to unmask IPs on pages like Recent changes and History for anti-vandalism purposes.

Editing experience for unregistered editors

Current behavior: Currently, unregistered editors can edit without logging in (on most wikis). Before making an edit, they see a notice telling them that their IP address will be permanently recorded and publicly published.

IP-based identity: Unregistered editors can edit just like they do now. Before making an edit, they see a message telling them that their edits will be attributed to an encrypted version of their IP address. The IP address itself will be visible to admins and patrollers, and will be retained for a limited period of time.

Session-based identity: This is similar to the above, except that editors are told that their edits will be attributed to an automatically generated username.

Communicating about unregistered editors

Current behavior: Unregistered editors are referred to by their IP addresses or if they are a known long-term abuser, they are often given a name based on their behavior.

IP-based identity: Patrollers and admins will not be able to refer to IP addresses publicly but will be able to refer to their encrypted IP address or the long-term abuser name. They can share the IP with others who have access to it.

Session-based identity: Patrollers and admins will not be able to refer to IP addresses publicly but will be able to refer to their auto-generated usernames. They can share the IP with others who have access to it. This can help identify a specific actor, but it can also be confusing if there are multiple IPs behind the username, similar to how many persons can be behind an IP today. To ease this concern, we are building a tool that will be able to surface information about all the different IP addresses an editor is active from.

Talk page experience for unregistered editors

Current behavior: An unregistered editor can receive messages on the talk page for their IP. When the editor's IP address changes, they receive messages on the talk page of the new IP address. This fragments conversations and makes it difficult to communicate with a specific unregistered editor.

IP-based identity: In this implementation, the current behavior is retained. Unregistered editors will receive messages on the talk pages of their encrypted IPs, and when their IP changes, the talk page associated with them changes as well.

Session-based identity: In this implementation, unregistered editors receive messages on a talk page that is associated with a cookie in their browser. Even if their IP changes, they can still receive messages on the same talk page. If their browser cookie is cleared, they lose their session identity and receive a new cookie, with a new talk page associated with it. Since IPs change more frequently than cookies, it is likely that many users will end up with a semi-permanent talk page unless they specifically try to avoid it. Another advantage to note is that talk page messages are far less likely to end up with the wrong recipient.

Talk page notification

Blocking unregistered editors

Current behavior: An admin can block an IP address or an IP range directly. Additionally, they can turn it into an autoblock which can retain a cookie on the end user’s browser which prevents them from editing even if they change IP addresses. This functionality was introduced a few years ago.

IP-based identity: The behavior remains the same as current. IPs are masked by default, but admins and patrollers with the right privileges can access them.

Session-based identity: This implementation allows us to retain the current behavior of blocking by IP address. It also allows us to perform cookie-only blocks. This can be helpful in scenarios where people share devices (like a library or cybercafé) and blocking the IP address or IP range would cause unnecessary collateral damage. We want to point out that this will not work in cases where vandals are experienced editors who can evade cookie blocks.

IP Masking implementation approaches (FAQ)

October 2021 Update

This FAQ helps answer some likely questions the community will have about the various implementation approaches we can take for IP Masking and how each of them will impact the community.

Q: Following implementation of IP Masking, who will be able to see IP addresses?

A: Checkusers, stewards and admins will be able to see complete IP addresses by opting in to a preference where they agree not to share them with others who don't have access to this information.

Editors who partake in anti-vandalism activities, as vetted by the community, can be granted a right to see IP addresses to continue their work. This user right would be handled like other user rights by the community, and require a minimum number of edits and days spent editing.

All users with accounts over a certain age and with a minimum number of edits (to be determined) will be able to access partially unmasked IPs without permission. This means an IP address will appear with its tail octet(s) – the last parts – hidden. This will be accessible via a preference where they agree not to share it with others who don't have access to this information.
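The "tail hidden" behaviour described above can be sketched in a few lines. This is an illustrative Python sketch only: how many IPv4 octets or IPv6 groups get hidden is an assumption on our part, since the actual thresholds are still to be determined.

```python
import ipaddress

def partially_mask(ip_string, v4_octets_hidden=1, v6_groups_hidden=4):
    """Illustrative sketch: hide the tail of an IP address.

    The number of hidden octets/groups here is an assumption;
    the proposal leaves the exact policy to be determined.
    """
    ip = ipaddress.ip_address(ip_string)  # validates the input
    if ip.version == 4:
        parts = ip_string.split(".")
        kept = parts[: 4 - v4_octets_hidden]
        return ".".join(kept + ["x"] * v4_octets_hidden)
    # IPv6: normalise to the full eight groups, then hide the tail.
    groups = ip.exploded.split(":")
    kept = groups[: 8 - v6_groups_hidden]
    return ":".join(kept + ["x"] * v6_groups_hidden)

print(partially_mask("192.0.2.153"))   # -> 192.0.2.x
print(partially_mask("2001:db8::1"))   # -> 2001:0db8:0000:0000:x:x:x:x
```

Hiding more of the tail trades identification power for privacy; for IPv6 the interface identifier occupies the last 64 bits, which is why several groups would likely need to be hidden rather than just one.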

All other users will not be able to access IP addresses for unregistered users.

Q: What are the potential technical implementation options?

A: Over the last few weeks we have been engaged in multiple discussions about the technical possibilities to accomplish our goal for IP Masking while minimizing impact to our editors and readers. We gathered feedback from across different teams and gained varying perspectives. Below are the two key paths.

  • IP-based identity: In this approach, we keep everything as is, but replace existing IP addresses with a hashed version. This preserves a lot of our existing workflows but does not offer any new benefits.
  • Session-based identity: In this approach, we create an identity for unregistered editors based on a browser cookie that identifies their browser. The cookie persists even when their IP address changes, so their session does not end.

Q: How does the IP-based identity path work?

A: At present, unregistered editors are identified by their IP addresses. This model has worked for our projects for many years. Users well-versed in IP addresses understand that a single IP address can be used by multiple different users, depending on how dynamic that IP address is. This is more true for IPv6 addresses than for IPv4.

An unregistered user may also change IP addresses if they are commuting or editing from a different location.

If we pursue the IP-based identity solution for IP Masking, we would be preserving the way IP addresses function today by simply masking them with an encrypted identifier. This solution will keep the IPs distinct while maintaining user privacy. For example, an unregistered user such as User: may appear as User:ca1f46.

Advantages of this approach: Preserves existing workflows and models with minimal disruption

Disadvantages of this approach: Does not offer any advantages in a world moving rapidly towards more dynamic, less useful IP addresses
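As a rough illustration of the IP-based path, a stable masked identifier like "ca1f46" could be derived from an IP with a keyed hash. Every specific here (HMAC-SHA-256, the secret key, the six-character truncation) is our assumption for illustration; the project has not specified an actual algorithm.

```python
import hashlib
import hmac

# Hypothetical server-side secret. Keyed hashing matters: an unkeyed
# hash of an IPv4 address is trivially reversible by brute force
# (only ~4.3 billion candidates), which relates to the "encryption
# method can be compromised" concern raised on the talk page.
SECRET_KEY = b"server-side-secret"

def masked_identity(ip_string, length=6):
    """Illustrative sketch: derive a stable masked identifier from an IP.

    The same IP always maps to the same identifier, so distinct
    unregistered editors stay distinguishable without exposing the IP.
    """
    digest = hmac.new(SECRET_KEY, ip_string.encode(), hashlib.sha256).hexdigest()
    return "User:" + digest[:length]
```

With a scheme like this, rotating the secret key would change all identifiers at once, which is one possible mitigation if the mapping were ever compromised.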

Q: How does the session-based identity path work?

A: The path is to create a new identity for unregistered editors based on a cookie placed in their browser. In this approach there is an auto-generated username which their edits and actions are attributed to. For example, User: might be given the username: User:Anon3406.

In this approach, the user’s session will persist as long as they have the cookie, even when they change IP addresses.
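The session mechanics can be sketched as follows. This is only an illustration under our own assumptions (token format, naming scheme, server-side storage); the "Anon" naming style is taken from the example above, and nothing here reflects the actual MediaWiki implementation.

```python
import secrets

# Server-side mapping from cookie token to the public auto-generated
# username. In a real system this would live in a database.
sessions = {}

def new_session_identity(counter):
    """Create a cookie token and an auto-generated public username."""
    token = secrets.token_hex(16)      # value stored in the browser cookie
    username = f"User:Anon{counter}"   # identity that edits are attributed to
    return token, username

def attribute_edit(cookie_token, next_counter):
    """Return the (token, username) an incoming edit is attributed to.

    If the cookie is missing or was cleared, a fresh identity is
    created -- this is the "identity only persists as long as the
    browser cookie does" drawback listed below.
    """
    if cookie_token not in sessions:
        cookie_token, username = new_session_identity(next_counter)
        sessions[cookie_token] = username
    return cookie_token, sessions[cookie_token]
```

Note how the identity survives an IP change (the cookie token is unchanged) but not a cleared cookie (the token is gone, so a new identity is minted).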

Advantages of this approach:

  • Ties the user-identity to a device browser, offering a more persistent way to communicate with them.
  • User identity does not change with changing IP addresses
  • This approach can offer a way for unregistered editors to have access to certain preferences which are currently only available to registered users
  • This approach can offer a way for unregistered editors to convert to a permanent account while retaining their edit history

Disadvantages of this approach:

  • Significant change in the current model of what an unregistered editor represents
  • The identity for the unregistered editor only persists as long as the browser cookie does
  • Vandals in privacy mode or who delete their cookies would get a new identity without changing their IP
  • May require rethinking of some community workflows and tools

Q: Does the Foundation have a preferred path or approach?

A: Our preferred approach is to go with the session-based identity, as that would open up a lot of opportunities for the future. We could address communication issues we've had for twenty years. While someone could delete the cookie to get a new identity, the IP would still be visible to all active vandal fighters with the new user right. We do acknowledge that deleting a cookie is easier than switching an IP, of course, and we recognize the effects that would have.

Proposal for sharing IP addresses with those who need access

10 June 2021 Update

Hello everyone. It has been a few months since the last update on this project. In that time, we have talked with a lot of people, both across the editing community and within the Foundation. We have taken particular care to evaluate all the concerns raised in our conversations with experienced community members about the impact this will have on anti-vandalism efforts across our projects. We have also heard from a significant number of people who support this proposal as a step towards improving the privacy of unregistered editors and reducing the legal risk that publishing IP addresses to the world poses to our projects.

When we talked about this project in the past, we did not have a clear idea of the shape the project would take. Our intention was to understand how IP addresses are useful to our communities. Since then, we have received a lot of feedback on this front from a number of conversations in different languages and in different communities. We are very grateful to all the community members who took the time to tell us how they go about the work of maintaining their wikis or their particular cross-wiki environment.

We now have a more concrete proposal for this project that we hope will allow most anti-vandalism work to continue unhindered, while restricting access to IP addresses from people who do not need to see them. I want to emphasize the word "proposal", because it is in no way, shape or form a final verdict on what will happen. Our intention is to hear your feedback on this idea: what do you think will work? What do you think won't work? What other ideas can make this better?

We developed these ideas following several conversations with experienced community members, and refined them in collaboration with the Foundation's Legal department. Here is the outline:

  • Checkusers, stewards and admins should be able to see complete IP addresses by opting in to a preference where they agree not to share them with others who don't have access to this information.
  • Editors who partake in anti-vandalism activities, as vetted by the community, could be granted a right to see IP addresses to continue their work. This could be handled in a way similar to adminship on our projects. Community approval is important to ensure that only editors who genuinely need this access can obtain it. Editors would need an account that is at least a year old and at least 500 edits to acquire this right.
  • All users with accounts over a year old and at least 500 edits would be able to see partially unmasked IPs without acquiring that right. This means the last parts of the IP address would be hidden. This would be accessible via a preference where the user agrees not to share the IP address with others who don't have access to this information.
  • All other users would not be able to see the IP addresses of unregistered users.

Access to IP addresses will be logged so that due scrutiny can be performed if and when needed. This is similar to the log that is kept of checkuser access to private information. This is how we hope to balance the need for privacy with the communities' need for access to information to fight spam, vandalism and harassment. We want to give the information to those who need it, but we need a process, we need only a select group of people with a genuine need to be able to see IP addresses, and we need access to this information to be logged.

We would like to hear your thoughts on this proposed approach. Please share your feedback with us on the talk page.

  • What do you think will work?
  • What do you think won't work?
  • What other ideas can improve this?

Data from Portuguese Wikipedia on disabling IP editing

Portuguese Wikipedia’s metrics following restriction

30 August 2021 Update

Hello. This is a brief update about Portuguese Wikipedia’s metrics since they started requiring registration to edit. We have a comprehensive report on the Impact report page. This report includes metrics captured through data as well as a survey that was conducted among active Portuguese Wikipedia contributors.

All in all, the report presents the change in a positive light. We have not seen any significant disruption over the time period these metrics have been captured. In light of this, we are now encouraged to run an experiment on two more projects to see if we observe similar impact. All projects are unique in their own ways and what holds true for Portuguese Wikipedia might not hold true for another project. We want to run a limited-time experiment on two projects where registration will be required in order to edit. We estimate that it will take approximately 8 months for us to collect enough data to see significant changes. After that time period, we will return to not requiring registration to edit while we analyse the data. Once the data is published, the community will be able to decide for themselves whether or not they want to continue to disallow unregistered editing on the project.

We are calling this the Login Required Experiment. You will find more detail as well as a timeline on that page. Please use that page and its talk page to discuss this further.

Portuguese Wikipedia IP editing restriction


Portuguese Wikipedia banned unregistered editing on the project a year ago. Over the last few months, our team has been collecting data about the repercussions of this move on the general health of the project. We have also talked with several members of the project about their experience. We are working on the final steps of putting together all the data that will present an accurate picture of the state of the project. We hope to have an update on this in the near future.


Tool development

Update 02
As you might already know, we are working on building some new tools, partly to soften the impact of IP Masking, but also simply to build better anti-vandalism tools for everyone. It is no secret that the state of moderation tools on our projects does not give the communities the tools they deserve. There is a lot of scope for improvement. We want to build tools that make it easier for anti-vandalism fighters to work effectively and efficiently. We also want to reduce the barriers that keep contributors with less technical expertise from joining these groups.

We have talked about ideas for these tools in the past, and I will provide a brief overview of them below. Note that progress on these tools has been slow over the last few months, as our team has been working on overhauling SecurePoll to meet the needs of the upcoming Wikimedia Foundation Board elections.

IP Info feature

Mockup for IP Info

We are building a tool that will display important information about an IP address which is commonly sought during investigations. Patrollers, admins and checkusers usually rely on external websites to obtain this information. We hope to make this process easier for them by integrating information from reliable IP providers within our websites. We recently built a prototype and conducted a round of user testing to validate our approach. We found that a majority of the editors in the interview group found the tool helpful and indicated that they would like to use it in the future. There is an update on the project page that I would like to draw your attention to. The key questions we would like your feedback on, on the project talk page, are:

  • When investigating an IP address, what kinds of information do you look for? Which page are you likely to visit to find this information?
  • What kinds of information about IPs do you find useful?
  • What kinds of IP information do you think could put our anonymous editors at risk if published?

Editor matching feature

In early conversations, this project has also been referred to as "Nearby editors" and "Sockpuppet detection". We are trying to find a suitable name for this feature that is understandable even to people who are not familiar with the term "sock".

We are in the early stages of this project. The Wikimedia Foundation's Research team has a project that could help identify similarity in editing behavior between two editors. This could help uncover connections between different unregistered users when they edit under automatically generated usernames. When we started talking about this project last year, we heard a lot of support for it. We also heard a lot about the risks of developing such a feature.

We are planning to build a prototype in the near future and share it with the community. There is a project page for this project, and we hope to post an update there soon. We welcome your thoughts about this project on its talk page.

Update 01

As mentioned previously, our foremost goal is to provide better anti-vandalism tools for our communities, which will provide a better moderation experience for our vandal fighters while also working towards making the IP address string less valuable to them. Another important reason to do this is that IP addresses are hard to understand and are really only useful to tech-savvy users. This creates a barrier for new users without a technical background to enter functionary roles, as there is a steep learning curve for working with IP addresses. We hope to get to a place where we have moderation tools that anyone can use without much prior knowledge.

The first thing we decided to focus on was to make the CheckUser tool more flexible, powerful and easy to use. It is an important tool that services the need to detect and block bad actors (especially long-term abusers) on a lot of our projects. The CheckUser tool was not very well maintained for many years and as a result it appeared quite dated and lacked necessary features.

We also anticipated an uptick in the number of users who opt-in to the role of becoming a CheckUser on our projects once IP Masking goes into effect. This reinforced the need for a better, easier CheckUser experience for our users. With that in mind, the Anti-Harassment Tools team spent the past year working on improving the CheckUser tool – making it much more efficient and user-friendly. This work has also taken into account a lot of outstanding feature requests by the community. We have continually consulted with CheckUsers and stewards over the course of this project and have tried our best to deliver on their expectations. The new feature is set to go live on all projects in October 2020.

The next feature that we are working on is IP Info. We decided on this project after a round of consultation on six wikis, which helped us narrow down the use cases for IP addresses on our projects. It became apparent early on that IP addresses provide some critical pieces of information which need to remain available for patrollers to do their roles effectively. The goal of IP Info, thus, is to quickly and easily surface significant information about an IP address. IP addresses provide important information such as location, organization, the possibility of being a Tor/VPN node, rDNS, and listed range, to mention a few examples. By being able to show this quickly and easily, without the need for external tools that not everyone can use, we hope to make it easier for patrollers to do their job. The information provided is high-level enough that we can show it without endangering the anonymous user, while still being enough for patrollers to make quality judgements about an IP address.

After IP Info we will be focusing on a finding similar editors feature. We’ll be using a machine learning model, built in collaboration with CheckUsers and trained on historical CheckUser data to compare user behavior and flag when two or more users appear to be behaving very similarly. The model will take into account which pages users are active on, their writing styles, editing times etc. to make predictions about how similar two users are. We are doing our due diligence in making sure the model is as accurate as possible.
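The actual model described above is a machine-learning model trained on historical CheckUser data; as a toy illustration of just one of the signals mentioned (editing times), similarity between two editors' hour-of-day activity could be measured with a cosine similarity over 24-bin histograms. Everything in this sketch is our own simplification, not the Research team's model.

```python
import math
from collections import Counter

def hour_profile(edit_hours):
    """24-bin normalized histogram of the hours at which a user edits.

    `edit_hours` is a list of hour-of-day values (0-23), one per edit.
    """
    counts = Counter(h % 24 for h in edit_hours)
    total = sum(counts.values())
    return [counts.get(h, 0) / total for h in range(24)]

def cosine_similarity(a, b):
    """1.0 for identical activity patterns, 0.0 for fully disjoint ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

A real model would combine many such signals (pages edited, writing style, and so on) and weight them with learned parameters rather than treating any single histogram as conclusive.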

Once it’s ready, there is a lot of scope for what such a model can do. As a first step we will be launching it to help CheckUsers detect socks easily without having to perform a lot of manual labor. In the future, we can think about how we can expose this tool to more people and apply it to detect malicious sockpuppeting rings and disinformation campaigns.

You can read more and leave comments on our project page for tools.


IP masking impact report

IP addresses are valuable as a semi-reliable partial identifier, one that is not easily manipulated by the associated user. Depending on provider and device configuration, IP address information is not always accurate or precise, and deep technical knowledge and fluency are needed to make the best use of it, though administrators are not currently required to demonstrate such fluency to have access. This technical information is used to support additional information (referred to as "behavioural knowledge") where possible, and the information taken from IP addresses significantly impacts the course of administrative action taken.

A Wikimedia Foundation-supported report on the impact that IP masking will have on our community.

On the social side, the issue of whether to allow unregistered users to edit has been a subject of extensive debate. So far, the projects have erred on the side of allowing unregistered users to edit. The debate is generally framed around a desire to halt vandalism versus preserving the ability to edit pseudo-anonymously and lowering the barrier to edit. There is a perception of bias against unregistered users because of their association with vandalism, which also appears as algorithmic bias in tools such as ORES. Additionally, there are major communication issues when trying to talk to unregistered users, largely due to the lack of notifications and because there is no guarantee that the same person will be reading the messages sent to an IP talk page.

In terms of the potential impact of IP masking, it will significantly impact administrator workflows and may increase the burden on CheckUsers in the short term. If or when IP addresses are masked, we should expect our administrators' ability to manage vandalism to be greatly hindered. This can be mitigated by providing tools with equivalent or greater functionality, but we should expect a transitional period marked by reduced administrator efficacy. In order to provide proper tool support for our administrators’ work, we must be careful to preserve or provide alternatives to the following functions currently fulfilled by IP information:

  • Block efficacy and collateral estimation
  • Some way of surfacing similarities or patterns among unregistered users, such as geographic similarity, certain institutions (e.g. if edits are coming from a high school or university)
  • The ability to target specific groups of unregistered users, such as vandals jumping IPs within a specific range
  • Location or institution-specific actions (not necessarily blocks); for example, the ability to determine if edits are made from an open proxy, or public location like a school or public library.
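Targeting vandals who jump between IPs within a specific range is typically done with CIDR range blocks. As an illustrative sketch (not the actual MediaWiki range-block implementation), matching addresses against a block can be done with Python's standard `ipaddress` module; the addresses and the `/24` range below are made-up documentation examples:

```python
import ipaddress

def in_range(ip: str, cidr: str) -> bool:
    """Return True if the address falls inside the given CIDR block."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)

# A vandal hopping addresses within one provider's /24 block:
edits = ["203.0.113.7", "203.0.113.42", "198.51.100.9"]
hits = [ip for ip in edits if in_range(ip, "203.0.113.0/24")]
# Only the first two addresses fall inside 203.0.113.0/24.
```

This is the kind of range-level reasoning that any masked-identifier scheme would need to preserve: without access to the underlying addresses, grouping related unregistered edits by network becomes impossible.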

Depending on how we handle temporary accounts or identifiers for unregistered users, we may be able to improve communication to unregistered users. Underlying discussions and concerns around unregistered editing, anonymous vandalism, and bias against unregistered users are unlikely to significantly change if we mask IPs, provided we maintain the ability to edit projects while logged out.

CheckUser workflow

We interviewed CheckUsers on multiple projects throughout our process for designing the new Special:Investigate tool. Based on interviews and walkthroughs of real-life cases, we broke down the general CheckUser workflow into five sections:

  • Triaging: assessing cases for feasibility and complexity.
  • Profiling: creating a pattern of behaviour which will identify the user behind multiple accounts.
  • Checking: examining IPs and useragents using the CheckUser tool.
  • Judgement: matching this technical information against the behavioural information established in the Profiling step, in order to make a final decision about what kind of administrative action to take.
  • Closing: reporting the outcome of the investigation on public and private platforms where necessary, and appropriately archiving information for future use.

We also worked with staff from Trust and Safety to get a sense for how the CheckUser tool factors into Wikimedia Foundation investigations and cases that are escalated to T&S.

The most common and obvious pain points all revolved around the CheckUser tool's unintuitive information presentation, and the need to open up every single link in a new tab. This caused massive confusion as tab proliferation quickly got out of hand. To make matters worse, the information that CheckUser surfaces is highly technical and not easy to understand at first glance, making the tabs difficult to track. All of our interviewees said that they resorted to separate software or physical pen and paper in order to keep track of information.

We also ran some basic analyses of English Wikipedia's Sockpuppet Investigations page to get some baseline metrics on how many cases they process, how many are rejected, and how many sockpuppets a given report contains.

Patroller use of IP addresses

Previous research on patrolling on our projects has generally focused on the workload or workflow of patrollers. Most recently, the Patrolling on Wikipedia study focused on the workflows of patrollers and on identifying potential threats to current anti-vandal practices. Older studies, such as the New Page Patrol survey and the Patroller work load study, focused on English Wikipedia. They also looked solely at the workload of patrollers, and more specifically at how bot patrolling tools have affected patroller workloads.

Our study tried to recruit from five target wikis, which were

  • Japanese Wikipedia
  • Dutch Wikipedia
  • German Wikipedia
  • Chinese Wikipedia
  • English Wikiquote

They were selected for known attitudes towards IP edits, percentage of monthly edits made by IPs, and any other unique or unusual circumstances faced by IP editors (namely, use of the Pending Changes feature and widespread use of proxies). Participants were recruited via open calls on Village Pumps or the local equivalent. Where possible, we also posted on Wiki Embassy pages. Unfortunately, while we had interpretation support for the interviews themselves, we did not extend translation support to the messages, which may have accounted for low response rates. All interviews were conducted via Zoom, with a note-taker in attendance.

Supporting the findings from previous studies, we did not find a systematic or unified use of IP information. Additionally, this information was only sought out after a certain threshold of suspicion. Most further investigation of suspicious user activity begins with publicly available on-wiki information, such as checking previous local edits, Global Contributions, or looking for previous bans.

Precision and accuracy were less important qualities for IP information: upon seeing that one chosen IP information site returned three different results for the geographical location of the same IP address, one of our interviewees mentioned that precision in location was not as important as consistency. That is to say, so long as an IP address was consistently exposed as being from one country, it mattered less if it was correct or precise. This fits with our understanding of how IP address information is used: as a semi-unique piece of information associated with a single device or person, that is relatively hard to spoof for the average person. The accuracy or precision of the information attached to the user is less important than the fact that it is attached and difficult to change.

Our findings highlight a few key design aspects for the IP info tool:

  • Provide at-a-glance conclusions over raw data
  • Cover key aspects of IP information:
    • Geolocation (to a city or district level where possible)
    • Registered organization
    • Connection type (high-traffic, such as data center or mobile network versus low-traffic, such as residential broadband)
    • Proxy status as binary yes or no
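The fields listed above could be collected into a minimal at-a-glance summary. The sketch below is purely illustrative; the class and field names are hypothetical and do not reflect the actual IP Info tool's schema:

```python
from dataclasses import dataclass

@dataclass
class IPInfoSummary:
    """Hypothetical at-a-glance summary an IP info tool might surface."""
    geolocation: str      # city or district level where possible
    organization: str     # registered owner of the address block
    connection_type: str  # e.g. "data center", "mobile network", "residential broadband"
    is_proxy: bool        # proxy status as binary yes/no

def render(summary: IPInfoSummary) -> str:
    """Collapse the raw fields into a single human-readable conclusion."""
    proxy = "proxy" if summary.is_proxy else "not a proxy"
    return (f"{summary.geolocation} · {summary.organization} · "
            f"{summary.connection_type} · {proxy}")

example = IPInfoSummary("Berlin (district level)", "Example ISP GmbH",
                        "residential broadband", False)
```

The design point is that patrollers see the rendered conclusion rather than raw WHOIS or geolocation data, matching the finding that consistency matters more to them than precision.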

As an ethical point, it will be important to be able to explain how any conclusions are reached, and the inaccuracy or imprecisions inherent in pulling IP information. While this was not a major concern for the patrollers we talked to, if we are to create a tool that will be used to provide justifications for administrative action, we should be careful to make it clear what the limitations of our tools are.

Statement from the Wikimedia Foundation Legal department


July 2021

First of all, we’d like to thank everyone for participating in these discussions. We appreciate the attention to detail, the careful consideration, and the time that has gone into engaging in this conversation, raising questions and concerns, and suggesting ways that the introduction of masked IPs can be successful. Today, we’d like to explain in a bit more detail how this project came about and the risks that inspired this work, answer some of the questions that have been raised so far, and briefly talk about next steps.


To explain how we arrived here, we’d like to briefly look backwards. Wikipedia and its sibling projects were built to last. Sharing the sum of all knowledge isn’t something that can be done in a year, or ten years, or any of our lifetimes. But while the mission of the communities and Foundation was created for the long term, the technical and governance structures that enable that mission were very much of the time they were designed. Many of these features have endured, and thrived, as the context in which they operate has changed. Over the last 20 years, a lot has evolved: the way societies use and relate to the internet, the regulations and policies that impact how online platforms run as well as the expectations that users have for how a website will handle their data.

In the past five years in particular, users and governments have become more and more concerned about online privacy and the collection, storage, handling, and sharing of personal data. In many ways, the projects were ahead of the rest of the internet: privacy and anonymity are key to users’ ability to share and consume free knowledge. The Foundation has long collected little information about users, not required an email address for registration, and recognized that IP addresses are personal data (see, for example, the 2014–2018 version of our Privacy policy). More recently, the conversation about privacy has begun to shift, inspiring new laws and best practices: the European Union’s General Data Protection Regulation, which went into effect in May 2018, has set the tone for a global dialogue about personal data and what rights individuals should have to understand and control its use. In the last few years, data protection laws around the world have been changing—look at the range of conversations, draft bills, and new laws in, for example, Brazil, India, Japan, or the United States.

Legal risks

The Foundation’s Privacy team is consistently monitoring this conversation, assessing our practices, and planning for the future. It is our job to look at the projects of today, and evaluate how we can help prepare them to operate within the legal and societal frameworks of tomorrow. A few years ago, as part of this work, we assessed that the current system of publishing IP addresses of non-logged-in contributors should change. We believe it creates risk to users whose information is published in this way. Many do not expect it—even with the notices explaining how attribution works on the projects, the Privacy team often hears from users who have made an edit and are surprised to see their IP address on the history page. Some of them are in locations where the projects are controversial, and they worry that the exposure of their IP address may allow their government to target them. The legal frameworks that we foresaw are in operation, and the publication of these IP addresses poses real risks to the projects and users today.

We’ve heard from several of you that you want to understand more deeply what the legal risks are that inspired this project, whether the Foundation is currently facing legal action, what consequences we think might result if we do not mask IP addresses, etc. (many of these questions have been collected in the expanded list at the end of this section). We’re sorry that we can’t provide more information, since we need to keep some details of the risks privileged. “Privileged” means that a lawyer must keep something confidential, because revealing it could cause harm to their client. That’s why privilege is rarely waived; it’s a formal concept in the legal systems of multiple countries, and it exists for very practical reasons—to protect the client. Here, waiving the privilege and revealing this information could harm the projects and the Foundation. Generally, the Legal Affairs team works to be as transparent as possible; however, an important part of our legal strategy is to approach each problem on a case by case basis. If we publicly discuss privileged information about what specific arguments might be made, or what risks we think are most likely to result in litigation, that could create a road map by which someone could seek to harm the projects and the communities.

That said, we have examined this risk from several angles, taking into account the legal and policy situation in various countries around the world, as well as concerns and oversight requests from users whose IP addresses have been published, and we concluded that IP addresses of non-logged-in users should no longer be publicly visible, largely because they can be associated with a single user or device, and therefore could be used to identify and locate non-logged-in users and link them with their on-wiki activity.

Despite these concerns, we also understood that IP addresses play a major part in the protection of the projects, allowing users to fight vandalism and abuse. We knew that this was a question we’d need to tackle holistically. That’s why a working group from different parts of the Wikimedia Foundation was assembled to examine this question and make a recommendation to senior leadership. When the decision was taken to proceed with IP masking, we all understood that we needed to do this with the communities—that only by taking your observations and ideas into account would we be able to successfully move through this transition.

I want to emphasize that even when IP addresses are masked and new tools are in place to support your anti-vandalism work, this project will not simply end. It’s going to be an iterative process—we will want feedback from you as to what works and what doesn’t, so that the new tools can be improved and adapted to fit your needs.


Over the past months, you’ve had questions, and often, we’ve been unable to provide the level of detail you’re hoping for in our answers, particularly around legal issues.

Q: What specific legal risks are you worried about?

A: We cannot provide details about the individual legal risks that we are evaluating. We realize it’s frustrating to ask why and simply get, “that’s privileged” as an answer. And we’re sorry that we cannot provide more specifics, but as explained above, we do need to keep the details of our risk assessment, and the potential legal issues we see on the horizon, confidential, because providing those details could help someone figure out how to harm the projects, communities, and Foundation.

There are settled answers to some questions.

Q: Is this project proceeding?

A: Yes, we are moving forward with finding and executing on the best way to hide IP addresses of non-logged-in contributors, while preserving the communities’ ability to protect the projects.

Q: Can this change be rolled out differently by location?

A: No. We strive to protect the privacy of all users to the same standard; this will change across the Wikimedia projects.

Q: If other information about non-logged-in contributors is revealed (such as location, or ISP), then it doesn’t matter if the IP address is also published, right?

A: That’s not quite the case. In the new system, the information we make available will be general information that is not linked to an individual person or device—for example, providing a city-level location, or noting that an edit was made by someone at a particular university. While this is still information about the user, it’s less specific and individual than an IP address. So even though we are making some information available in order to assist with abuse prevention, we are doing a better job of protecting the privacy of that specific contributor.

Q: If we tell someone their IP address will be published, isn’t that enough?

A: No. As mentioned above, many people have been confused to see their IP address published. Additionally, even when someone does see the notice, the Foundation has legal responsibilities to properly handle their personal data. We have concluded that we should not publish the IP addresses of non-logged-in contributors because it falls short of current privacy best practices, and because of the risks it creates, including risks to those users.

Q: How will masking impact CC-BY-SA attribution?

A: IP masking will not affect CC license attribution on Wikipedia. The 3.0 license for text on the Wikimedia projects already states that attribution should include “the name of the Original Author (or pseudonym, if applicable)” (see the license at section 4c), and use of an IP masking structure rather than an IP address functions equally well as a pseudonym. IP addresses already may vary or be assigned to different people over time, so using them as a proxy for unregistered editors is not different in quality from an IP masking structure, and both satisfy the license's pseudonym structure. In addition, section 7 of our Terms of use specifies that, as part of contributing to Wikipedia, editors agree that links to articles (which include article history) are a sufficient method of attribution.

And sometimes, we don’t know the answer to a question yet, because we’d like to work with you to find the solution.

Q: What should the specific qualifications be for someone to apply for this new user right?

A: There will be an age limit; we have not made a definitive decision about the limit yet, but it’s likely they will need to be at least 16 years old. Additionally, they should be active, established community members in good standing. We’d like to work through what that means with you.

I see that the team preparing these changes is proposing to create a new userright for users to have access to the IP addresses behind a mask. Does Legal have an opinion on whether access to the full IP address associated with a particular username mask constitutes nonpublic personal information as defined by the Confidentiality agreement for nonpublic information, and will users seeking this new userright be required to sign the Access to nonpublic personal data policy or some version of it?
  1. If yes, then will I as a checkuser be able to discuss relationships between registered accounts and their IP addresses with holders of this new userright, as I currently do with other signatories?
  2. If no, then could someone try to explain why we are going to all this trouble for information that we don't consider nonpublic?
  3. In either case, will a checkuser be permitted to disclose connections between registered accounts and unregistered username masks?

A: This is a great question. The answer is partially yes. First, yes, anyone who has access to the right will need to acknowledge in some way that they are accessing this information for the purposes of fighting vandalism and abuse on the projects. We are working on how this acknowledgement will be made; the process to gain access is likely to be something less complex than signing the access to non-public personal data agreement.

As to how this would impact CUs, right now, the access to non-public personal data policy allows users with access to non-public personal data to share that data with other users who are also able to view it. So a CU can share data with other CUs in order to carry out their work. Here, we are maintaining a distinction between logged-in and logged-out users, so a CU would not be able to share IP addresses of logged-in users with users who have this new right, because users with the new right would not have access to such information.

Presuming that the CU also opts in to see IP addresses of non-logged-in users, under the current scheme, that CU would be able to share IP address information demonstrating connections between logged-in users and masked non-logged-in users with other CUs who had also opted in. They could also indicate to users with the new right that they detected connections between logged-in and non-logged-in users. However, the CU could not directly share the IP addresses of the logged-in users with non-CU users who only have the new right.

Please let us know if this sounds unworkable. As mentioned above, we are figuring out the details, and want to get your feedback to make sure it works.

Next steps

Over the next few months, we will be rolling out more detailed plans and prototypes for the tools we are building or planning to build. We’ll want to get your feedback on these new tools that will help protect the projects. We’ll continue to try to answer your questions when we can, and seek your thoughts when we should arrive at the answer together. With your feedback, we can create a plan that will allow us to better protect non-logged-in editors’ personal data, while not sacrificing the protection of Wikimedia users or sites. We appreciate your ideas, your questions, and your engagement with this project.

October 2020
This statement from the Wikimedia Foundation Legal department was written on request for the talk page, and it reflects that context. For greater transparency, we wanted you to be able to read it here as well.

Hello everyone. This is a note from the Legal Affairs team. First, we would like to thank everyone for their thoughtful comments. Please know that, in some cases, we as lawyers are unable to publicly share all the details of our thinking; however, we read your comments and perspectives, and they are very helpful to us in advising the Foundation.

In some cases, due to the rules of legal ethics and attorney-client privilege that govern how lawyers handle information about work in progress, we are required to keep our work, or our advice to the organization, confidential. We recognize that our inability to spell out exactly what we are thinking, and why we may or may not do something, can be frustrating in some instances, including this one. Although we are not always able to disclose the details, we can confirm that our overall goals are to do the best we can to protect the projects and the communities, while ensuring that the Foundation complies with applicable law.

Within the Legal Affairs team, the privacy group focuses on ensuring that the Foundation-hosted sites and our data collection and handling practices are in line with relevant law, with our own privacy-related policies, and with our privacy values. We believe that individual privacy for contributors and readers is necessary to enable the creation, sharing, and consumption of free knowledge worldwide. As part of that work, we look first at applicable law, further informed by a mosaic of user questions, concerns, and requests, public policy concerns, organizational policies, and industry best practices to help steer privacy-related work at the Foundation. We take these inputs, and we design a legal strategy for the Foundation that guides our approach to privacy and related issues. In this particular case, careful consideration of these factors has led us to this effort to mask IPs of non-logged-in editors from exposure to all visitors to the Wikimedia projects. We can’t spell out the precise details of our deliberations, or the internal discussions and analyses that lay behind this decision, for the reasons discussed above regarding legal ethics and privilege.

We want to emphasize that the specifics of how we do this are flexible; we are looking for the best way to achieve this goal in line with supporting community needs. There are several potential options on the table, and we want to make sure that we find the implementation in partnership with you. We realize that you may have more questions, and we want to be clear upfront that in this dialogue we may not be able to answer the ones that have legal aspects. Thank you to everyone who has taken the time to consider this work and provide your opinions, concerns, and ideas.

Best regards,
Anti-Harassment Tools Team

Please use the talk page for discussions on the matter. For any issues concerning this release, please don't hesitate to contact Niharika Kohli, Product Manager – niharika wikimedia.org and cc Sandister Tei, Community Relations Specialist – stei wikimedia.org or leave a message on the talk page.

For more information or documentation on IP editing, masking and an overview of what has been done so far including community discussions, please see the links below.