Community Wishlist Survey 2023/Multimedia and Commons

Multimedia and Commons
27 proposals, 312 contributors, 816 support votes
The survey has closed. Thanks for your participation :)



Enable a search by image in Commons

  • Problem: Sometimes, e.g. when you have a file on your computer but don't know its original name, or aren't even sure whether it is a file from Commons, searching for it can be tiring, because you have to guess its title.
  • Proposed solution: Enable a basic search-by-image engine, just like Google Lens, showing thumbnails of the best matches for the wanted image, if such a feature doesn't exist yet.
  • Who would benefit: Users who need to find the origin of their images, or have forgotten it.
  • More comments:
  • Phabricator tickets:
  • Proposer: De un millón (talk) 21:49, 4 February 2023 (UTC)[reply]

Discussion

  • Images on Commons are usually well crawled by image search engines (to the point that it can hinder your effort to find out if an upload is a copyvio), so I doubt there's much demand for this, which I imagine wouldn't be a small undertaking. Nardog (talk) 13:30, 6 February 2023 (UTC)[reply]
    I have a suggestion for this proposal. What about searching by metadata? Example: a photo has a bowling ball or is related to bowling, and it was taken in 2010. I search for: Bowling 2010. Goliv04053 (talk) 06:45, 11 February 2023 (UTC)[reply]
  • The Wikimedia API can already check for file presence by checksum. The Commons Android app uses that to show you which files you have already uploaded. Syced (talk) 08:38, 7 March 2023 (UTC)[reply]
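    A minimal sketch of the checksum lookup Syced describes, assuming Python with the requests library (the file name is a placeholder; error handling is omitted): the aisha1 parameter of list=allimages finds files by their SHA-1 digest.

import hashlib
import requests

API = "https://commons.wikimedia.org/w/api.php"

def find_on_commons(path):
    # Compute the local file's SHA-1 and ask list=allimages for files
    # with the same digest (the index the duplicate warning is based on).
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    r = requests.get(API, params={
        "action": "query", "list": "allimages",
        "aisha1": digest, "format": "json"}).json()
    return [img["title"] for img in r["query"]["allimages"]]

print(find_on_commons("photo.jpg") or "not on Commons")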

Voting

Tool to copy images from archives to Commons with metadata

  • Problem: Many GLAM institutions make images available on their websites which could be copied to Commons. These have to be manually downloaded and re-uploaded one by one, with the metadata copied across.
  • Proposed solution: A way for Wikimedia Commons to take a URL and copy the image to Commons with descriptions and relevant metadata, including links back to the source.
  • Who would benefit: Everyone.
  • More comments: GLAM institutions like Libraries Tasmania / Tasmanian Archives have thousands of public domain images on their website (example). To add each one manually to Commons would take forever. A tool like this would help users of Wikimedia projects add more media, help GLAM institutions quickly share their own content, and make sharing images more accessible to newcomers during training events.
  • Phabricator tickets: T193526
  • Proposer: Jimmyjrg (talk) 03:59, 24 January 2023 (UTC)[reply]

Discussion

  • It looks like the example above is using a catalogue product from SirsiDynix; I've not been able to find any API info. I think one aspect of this proposal is likely to be whether we can build a general-purpose tool that works with many libraries, or a single-purpose tool. For example, many archival catalogue systems support OAI-PMH, so if we built something that worked with that then it'd perhaps be more widely used (a sketch of such a harvest appears at the end of this thread). For site-specific scraping requests, there's a register of them at commons:Commons:Batch uploading. SWilson (WMF) (talk) 07:09, 24 January 2023 (UTC)[reply]
    Yes, I'd like something that adapts to the website/database that is being looked at. Some libraries use Spydus (example: Stonnington), which I think has a public API. Ideally it'd be best if there was some way to have it learn how a website works (the first time you visit you have to manually copy and paste all the information) but then it knows how to do it itself after.--Jimmyjrg (talk) 22:05, 24 January 2023 (UTC)[reply]
    @Jimmyjrg Double checking I understand the problem correctly: the proposal is to create a workaround for resources that are available online from institutions that do not have APIs or data dumps that can facilitate sharing data in bulk. Is that correct? __VPoundstone-WMF (talk) 16:55, 26 January 2023 (UTC)[reply]
    Yes @VPoundstone-WMF: That’s a good explanation. Basically I’d like something quicker than downloading and uploading everything myself (and copying/inputting metadata) when there’s a few images to move to Commons. Jimmyjrg (talk) 08:00, 27 January 2023 (UTC)[reply]
    Without commenting on the specific example above, in my experience, creating a generic tool to reliably scrape random websites with the sort of detail required for Commons is probably technically infeasible. c:Commons:Batch uploading exists for a reason. -FASTILY 22:33, 28 January 2023 (UTC)[reply]
  • I think there is one big problem, and that is that the source data always comes in different formats, so every time you have to change your program. That's why there is a service on Commons which helps with such mass transfers; just now, I cannot find the link. Juandev (talk) 19:14, 9 February 2023 (UTC)[reply]
    I was inspired by the Web2Cit project which can learn to add citations using different formats. But you're right, it's likely more difficult for catalogues of images. Jimmyjrg (talk) 23:24, 21 February 2023 (UTC)[reply]
  • en:GLAM (cultural heritage), an acronym for galleries, libraries, archives, and museums, the cultural heritage institutions --Error (talk) 15:56, 13 February 2023 (UTC)[reply]
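Following up on the OAI-PMH idea in the thread above, a rough Python sketch of a harvester; the endpoint URL and set name are hypothetical placeholders, since each catalogue publishes its own:

import requests
import xml.etree.ElementTree as ET

NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "dc": "http://purl.org/dc/elements/1.1/"}

def harvest(endpoint, set_spec=None):
    # Yield Dublin Core records, following resumption tokens page by page.
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    if set_spec:
        params["set"] = set_spec
    while True:
        root = ET.fromstring(requests.get(endpoint, params=params).content)
        for rec in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
            yield {"title": rec.findtext(".//dc:title", namespaces=NS),
                   "url": rec.findtext(".//dc:identifier", namespaces=NS),
                   "rights": rec.findtext(".//dc:rights", namespaces=NS)}
        token = root.findtext(".//oai:resumptionToken", namespaces=NS)
        if not token:
            break
        params = {"verb": "ListRecords", "resumptionToken": token}

for record in harvest("https://catalogue.example.org/oai"):  # placeholder URL
    print(record["title"], record["url"])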

Voting

Allow category in EXIF data

  • Problem: Using "[[Category:Exif model: $1]]" as the value for "MediaWiki:Exif-model-value" currently adds the category link to the file description page, but the file doesn't get categorized in the category. If the file description page already contains an identical category, the category is displayed twice.
  • Proposed solution: see phab:T23795#248878
  • Who would benefit: anyone
  • More comments:
  • Phabricator tickets: T23795
  • Proposer: Shizhao (talk) 02:41, 2 February 2023 (UTC)[reply]

Discussion

  • Would it be helpful to add operators to the search feature to search for images taken by specific cameras? Similar to how you can put filemime:image/png into the search bar? Because that is probably easier. Bawolff (talk) 05:52, 2 February 2023 (UTC)[reply]
    • @Bawolff: I think one of the benefits of the category approach is that photos can be added there even if they don't have the right EXIF data. That said, it does seem that what's wanted here is a way to search by camera, and probably the SDC captured with (P4082) property, combined with a bot that converts from EXIF to that, would be a good way to go. Then the info template could do the categorizing. Hacking system messages to do this doesn't feel all that solid. @Shizhao: Is the problem here, put more abstractly, that it's not possible to automatically categorize based on camera model? SWilson (WMF) (talk) 06:21, 6 February 2023 (UTC)[reply]
      If the native method is difficult to implement technically, the bot method is also a temporary solution. Also, the mw:Manual:File metadata handling API seems to provide incomplete EXIF data? (See the query sketch at the end of this thread.) Shizhao (talk) 03:27, 8 February 2023 (UTC)[reply]
      @Shizhao: You're right, I was looking at this too broadly, sorry. There's definitely a bug in how the wikitext is being treated: it should either correctly add the category and not have it be duplicated, or not allow categories to be added there at all. SWilson (WMF) (talk) 04:04, 8 February 2023 (UTC)[reply]
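For reference, the metadata MediaWiki has extracted for a file can be inspected through the imageinfo API; a minimal Python sketch (the file name is a placeholder), which also makes it easy to check how complete the extracted EXIF actually is:

import requests

API = "https://commons.wikimedia.org/w/api.php"

def file_metadata(title):
    # iiprop=metadata is the raw extracted tag list, commonmetadata the
    # format-independent subset, extmetadata the template-derived fields.
    r = requests.get(API, params={
        "action": "query", "titles": title, "prop": "imageinfo",
        "iiprop": "metadata|commonmetadata|extmetadata",
        "format": "json"}).json()
    page = next(iter(r["query"]["pages"].values()))
    return page["imageinfo"][0]

for tag in file_metadata("File:Example.jpg")["metadata"]:
    print(tag["name"], "=", tag["value"])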

Voting

Improve patrolling new uploads in Commons

  • Problem: Currently very few uploads on Commons get patrolled, somewhere between 4% and 10%.
  • Proposed solution: I made a tool that helps, but I think we need something better; I don't have the capacity to maintain it long-term.
  • Who would benefit: Patrollers in Commons.
  • More comments:
  • Phabricator tickets:
  • Proposer: Amir (talk) 07:20, 29 January 2023 (UTC)[reply]

Discussion

Voting

Advanced sorting on Commons

  • Problem: Commons has no advanced sorting parameters.
  • Proposed solution: Add sorting features for parameters such as date to Wikimedia Commons.
  • Who would benefit: Wikimedia Commons users
  • More comments: With advanced sorting parameters, we would be able to find images better and faster.
  • Phabricator tickets: phab:T329961
  • Proposer: Kurmanbek💬 16:40, 2 February 2023 (UTC)[reply]

Discussion

Voting

Special media search to allow filters in a category

  • Problem: Some categories have many images that take a long time to sort through by eye.
  • Proposed solution: Being able to filter a category by image size, licence, etc.
  • Who would benefit: Those searching on Wikimedia Commons for images to use
  • More comments:
  • Phabricator tickets:
  • Proposer: JRennocks (talk) 22:46, 23 January 2023 (UTC)[reply]

Discussion

  • I'd like to note that this kind of filtering is already possible in commons:Special:MediaSearch. Adding something like "is in Category: ________" to that interface might already do the trick. (That probably should include a setting/slider for how many levels of subcategories to include.) Make it accessible from each category page with a link titled something like "search in this Category". Shouldn't be too hard to implement, I guess? --El Grafo (talk) 16:07, 26 January 2023 (UTC)[reply]
    Trick: in a category, click on the tab "More" and then on "Search not in category". Now you get a search page. Remove the first part of the search string, including the "-" in -incategory:"Name_category". You are left with incategory:"Name_category" and can edit your search the way you want to (a combined example follows after this thread). JopkeB (talk) 10:05, 11 February 2023 (UTC)[reply]
  • Without this feature, a lot of users try to do this by creating additional layers of categorization that are fine for one perspective of filtering, but can make the categories a mess for everyone else. Hopefully, a good filtering tool will help reduce the urge to create a bunch of extra categorization. Ideally, if done well and integrating structured data, over time we could greatly simplify the category structure, but that remains to be seen after this feature is implemented. In any case, I think this would be a very good tool to add. Joshbaumgartner (talk) 22:50, 10 February 2023 (UTC)[reply]
    I would like to piggyback on this idea of an advanced search engine using structured data. I don't understand why we have structured data if we can't use it to make complex searches. Or am I missing a tool? BotaFlo (talk) 19:20, 19 February 2023 (UTC)[reply]
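For illustration, CirrusSearch already understands a few filter keywords that can be combined with incategory: on Commons; the category names and the Q-item below are only examples:

incategory:"Lighthouses in Tasmania" filemime:image/jpeg filew:>1000
incategory:"Cats" haswbstatement:P180=Q146

The second line filters on structured data (depicts = house cat), which is one way the complex searches asked about above can already be expressed; a per-category filter UI would mostly be a front end for queries like these.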

Voting

Make Commonist work again

Discussion

So this proposal wants the WMF to take over a random tool that isn't being maintained? Why would they want to do that when many other alternatives exist? Aaron Liu (talk) 18:58, 23 January 2023 (UTC)[reply]

The WMF came to the conclusion last year that they would develop a stand-alone upload tool. One year has passed and I haven't heard about any progress. That's why it's on the table again. And we can put it on the table again for other stand-alone upload tools like Pattypan or Vicuna. --Juandev (talk) 21:13, 23 January 2023 (UTC)

Commonist isn't working because of the WMF; them developing their own solution after breaking some of the major community-developed upload tools would, to say the least, not be very nice. Let's have them get back to basics instead. They could improve the APIs and the half-baked structured data support, and allow for uploading large files. Such changes would make it much easier to maintain and support the many community-built upload tools. --Abbe98 (talk) 20:25, 23 January 2023 (UTC)[reply]

What exactly is wrong with the current list of upload tools? With the exception of Commonist, they all work fine, if not better. -FASTILY 01:27, 24 January 2023 (UTC)[reply]
At least for Vicuna, it hasn't been smooth sailing either lately. For a while, nobody felt responsible for it and major bugs were not addressed. With a Rapid Grant, a couple of the most gaping holes have been plugged, but that's about it. Without proper maintenance, it will break again. PattyPan was patched up too after last year's proposal, but how long will it last? I don't see the point in trying to CPR Commonist back into existence. We don't need more poorly maintained community tools, we need one fully featured, cross-platform mass upload tool. And we need it to be maintained properly, long-term (i.e. not by an unpaid volunteer who may just disappear tomorrow). --El Grafo (talk) 13:58, 26 January 2023 (UTC)[reply]
Yeah, but all of the above-mentioned tools are currently working, right? You're speculating that something will break. Upload tools are non-trivial to create/maintain, and imo if it's not broken, then it doesn't need fixing. Also, shameless plug: I've created an upload tool: c:Commons:Sunflower. -FASTILY 22:15, 26 January 2023 (UTC)[reply]
This is not about developing a tool and leaving it. I think that if the WMF takes over Commonist or creates a new app, they will also continue to maintain it. But as I indicated above, the question is whether it will happen. One year has passed and no sign of it. Juandev (talk) 19:10, 9 February 2023 (UTC)[reply]

@Fastily actually Rillke's bigchunkedupload.js script is the only free (free as in everyone can use it without having to install software first) tool that allows uploading files of more than 1.2GB (I use my own tool, but UploadWizard should be able to do that) --C.Suthorn (talk) 11:27, 25 January 2023 (UTC)[reply]

  • If the WMF decided to choose an "official" upload tool, Commonist doesn't seem like a good choice. It's written in Java, which makes it hard to install and has little expertise overlap with other WMF-maintained software (parts of the search system are in Java, but that's the only thing, I think); the author seems to have rolled his own library for everything from JSON parsing to build tools; and in general, it would be hard to justify putting effort into a desktop app as opposed to something that just works via the web, when there's very little benefit to a local app for an upload workflow. Probably UploadWizard should just be improved instead. Also, "maintain tool X forever" isn't really a fit for the wishlist.
    If the goal were to just make the minimum effort to make Commonist work again, that would be a more sensible wish. I think that would just involve a simple change in API parameters (#25) and maybe some decoding fixes (#24). --Tgr (talk) 00:51, 1 February 2023 (UTC)[reply]
    Commons has two main purposes: 1) providing media files so that they can be used, and 2) enabling people who want to add such media files to do so. Point 1 works largely, and largely reliably. Point 2 does not work so well. If someone already knows their way around MW and wants to upload a small number of small images with a clear source and unambiguous copyright, that usually works (if awkwardly) with the Upload Wizard. The problems begin when a large number of media, or very large media, are to be uploaded. For a significant number of users who contribute significantly to Commons, Commonist is the tool of choice for this (not for me). Of course a better alternative is conceivable. But Commonist exists, and if Commonist works, a number of productive users are satisfied for the time being. Why do popular and in some cases excellent tools like Commonist, CropTool, V2C, F2C either not run at all or run only with errors? Primarily because the maintainers, who put a great deal of work into creating these tools in the first place, were in a number of cases needlessly driven away, and because no new maintainers can be found who want to take on something like that. One person uploaded more than 6 million files and created important tools, but was doxxed and threatened to such an extent that they fled the project. There have been many declarations of what a great loss that is, but as far as I can see no effort at all to stop the doxxing, to create conditions in which that person can feel safe, or to apologize to them. C.Suthorn (talk) 08:37, 1 February 2023 (UTC)[reply]

Voting

Fix CropTool

  • Problem: Lossless cropping of images using CropTool does not work. Cropping of TIFF, GIF, and PDF files does not work, and files with high resolution can only be rotated by 90, 180, or 270 degrees.
  • Proposed solution: Find a new maintainer for the CropTool.
  • Who would benefit: Everyone
  • More comments:
  • Phabricator tickets:
  • Proposer: C.Suthorn (talk) 10:50, 30 January 2023 (UTC)[reply]

Discussion

Voting

Add status messages to assembling and publishing stages of upload process

  • Problem: An upload to Commons consists of three stages: uploading, assembling, and publishing. During the assembling and publishing stages (which can take minutes each), the server does not send progress reports to the upload tool (like UploadWizard).
  • Proposed solution: Let the upload tool query the server for progress reports (JSON) during the assembling and publishing stages.
  • Who would benefit: Users, uploaders, and developers, who would get better error reports for failed uploads.
  • More comments:
  • Phabricator tickets: T309094
  • Proposer: C.Suthorn (talk) 17:44, 28 January 2023 (UTC)[reply]

Discussion

This already exists, and can be retrieved via the API: mw:API:Upload#Additional notes. -FASTILY 22:37, 28 January 2023 (UTC)[reply]

No. What you are linking to are the results of a (failed) upload. I mean progress messages like:
"getting EXIF from file"
"inserting XXX in database table 1"
"inserting XXX in database table 2"
"checking for malicious code in uploaded file"
"adding file to database"
"updating file counter"
"writing EXIF to database"
"creating file description page"
"moving uploaded file to file system"
"updating 'patrolled' entry"
or whatever "assembling" and "publishing" actually do, while you wait 5 minutes for your upload to appear, until it does not for some reason. C.Suthorn (talk) 00:41, 29 January 2023 (UTC)[reply]
You should spend some time familiarizing yourself with how chunked uploads work. "publishing" and "assembling" are meaningful statuses. A robust API client will make use of this endpoint. -FASTILY 01:15, 29 January 2023 (UTC)[reply]
I should not need to do this. The upload process should work. It is not meaningful if the upload is stuck for 5 minutes in "assembling" and, when you poll for information, you neither get a progress report like "7%, 23%, 51%" nor any information about what the server is actually doing; only ever "assembling", "assembling", "assembling". BTW: have you tried in the last 6 months to upload a file of 600MB, 1.2GB or 4GiB with the Upload Wizard, Rillke's tool, and your own upload tool? Did you succeed on the first try, the second try, at all? C.Suthorn (talk) 12:23, 29 January 2023 (UTC)[reply]
  • This feels like a solution in search of a problem? Error reports need a unique identifier for the upload process that can be correlated with system logs (not sure if this exists but you should at least get a unique error ID which is close enough), not random status labels. --Tgr (talk) 00:43, 1 February 2023 (UTC)[reply]
    @Tgr have you uploaded a file of more than 1.2GB in the last 6 months? About "unique identifiers": in the past I have created phab tasks with the available identifiers from failed uploads, but either these are not helpful, or there is no interest in fixing uploading to MW. C.Suthorn (talk) 08:19, 1 February 2023 (UTC)[reply]
    @C.Suthorn let me rephrase: this suggestion feels like the politician's syllogism to me. Commons large file upload is broken; here is a random feature that wouldn't make it any less broken; it is something so we must do it. There are all kinds of things that might help; I'm not convinced that trying to get the user interested in what specific technical steps are being taken in the background is one of those things. (A single end-to-end progress bar is nice, when the total timespan of the operation can be reasonably well estimated. For file uploads, that's probably not the case.) Tgr (talk) 20:19, 1 February 2023 (UTC)[reply]
    Wrt phab tasks, I think it's mostly the latter: it isn't anyone's job to deal with upload bugs, and it's both more complicated and arguably less productive than other kinds of technical improvements so few people spend time on it. Figuring out the immediate reason an upload failed is usually not hard (and often something mundane along the lines of "the file was too big and something timed out / ran out of memory"). Fixing uploads that ended up in some half-broken state does tend to be hard, but that would require an entirely different kind of logging. Tgr (talk) 20:24, 1 February 2023 (UTC)[reply]
    I had asked whether you had uploaded a very large file recently. Apparently only 15 users in total uploaded webm, ogv or tif files of between 4.2GB and 4.0GiB from 2017 to 2023, half of them before 2020 and some via server-side upload. Since 31 March 2022 only @PantheraLeo1359531 and I have uploaded such files; I with my own tool, PantheraLeo1359531 with bigChunkedUpload (and that must have been an ordeal, uploading a video in 10 parts, because bigchunkedupload only ever uploads one file at a time, when it works at all). Why are hardly any such files uploaded? Because it works with almost no tool (most recently not even via server-side upload, see @Urbanec). One reason is likely to be timeouts, deadlocks and livelocks. These also keep occurring with smaller files, and when that happens, the user (often a newcomer) usually gives up, and a possibly important file is lost to Commons forever. Yes, it would be nice if you could then call a developer who looks at the logs, finds the problem, identifies the cause and fixes it, or at least documents it in phab so that some developer does it later. Unfortunately my experience (paid for with a lot of my own lifetime) says that it does not work that way. Of course most users could not care less about what goes wrong and which error messages are shown. But not all of them. And if a message from the assembling or publishing stage is reported, then there is at least a starting point developers can work with, even when the logs are long gone. Everyone will profit from that, because these errors also occur with small files, far more rarely but still. C.Suthorn (talk) 22:02, 1 February 2023 (UTC)[reply]

The stages you mentioned above (well, the ones that actually exist, and it depends how loosely I interpret what you are saying) should already be reported by the API, e.g. you would get an error code of stashfailed, but the error info would be different depending on the cause. I suspect you never see them because that is not where the upload is failing. Anyways, chunked upload is really fragile and could use lots of love. Bawolff (talk) 06:37, 2 February 2023 (UTC)[reply]

There are three stages (or four, if you count "queueing" as a stage): "uploading", "assembling" and "publishing". While "uploading" you can poll for information and get the number of bytes already uploaded. If you poll while "assembling" or "publishing" you only get "assembling" or "publishing" as the answer. After "assembling" has finished you get the metadata. If the upload fails for a valid reason ("user has no right", "database is down"), then you get this reason and that is fine. But if the upload fails even though it should have succeeded (and it actually will succeed if you try once more (with luck) or a thousand times more (with less luck)), you don't get any status at all. You say chunked upload is really fragile. I don't think so. I think it actually is very stable, but it fails under specific circumstances. As I have written before: Upload Wizard will succeed with files of up to 600MB, it may succeed with 600MB to 1.2GB, and it will always fail with 1.2GB+. Rillke's tool (and my own tool) will (at the moment) upload 4.0GiB in most cases on the first try (because chunked upload is actually stable). However, in both Rillke's tool and my tool you can see that "assembling" and "publishing" each take minutes, and while you are waiting you only get the status "still assembling"/"still publishing" (a minimal polling sketch follows at the end of this thread) until it either fails without any status message or succeeds with either a published file or an error like (filename not allowed, already uploaded, user has no right, ...).
And I repeat: upload a file of 4GiB (or at least more than 1.2GB) and see what happens. It will succeed with Rillke's tool and with mine, but fail with every other tool, without any information why it failed.
And then: why do I actually discuss this? My own uploads work, and what do I care if the uploads of other users fail? C.Suthorn (talk) 09:52, 2 February 2023 (UTC)[reply]
I agree with the points that C.Suthorn brought up. The upload process for larger files sometimes fails. For example, I upload some public-domain textures that are sometimes larger (above 1 GiB), with recurring errors (server didn't respond within a timespan, ...). This sometimes needs much time. And the point is that some files (especially videos and more detailed meshes) will have file sizes above 1 GiB more often in the near future, as recording systems get more capable (especially videos in 4K longer than 5-10 minutes). And it is sad that it is sometimes buggy, which leads some users to give up. I think it is very important to support uploads of larger files (for example as suggested), and also higher file size limits, to be prepared for the future. --PantheraLeo1359531 (talk) 15:23, 2 February 2023 (UTC)[reply]
Addendum: while uploading, the error FAILED: stashfailed: Could not connect to storage backend "local-swift-codfw". occurs --PantheraLeo1359531 (talk) 15:43, 2 February 2023 (UTC)[reply]
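For reference, a rough sketch of what the polling discussed in this thread looks like from a client, based on mw:API:Upload (login, the CSRF token, and the chunked upload itself are omitted; the filekey comes from a publish request made with async=1):

import time
import requests

API = "https://commons.wikimedia.org/w/api.php"
session = requests.Session()  # assumed to be logged in already

def poll_async_upload(filekey, csrf_token):
    # Poll until the upload leaves the queue. While assembling/publishing,
    # the server only answers result=Poll plus a coarse stage value,
    # which is exactly the granularity this proposal wants improved.
    while True:
        r = session.post(API, data={
            "action": "upload", "filekey": filekey, "checkstatus": "1",
            "token": csrf_token, "format": "json"}).json()
        upload = r.get("upload", {})
        if upload.get("result") != "Poll":
            return r  # Success, Warning, or an error structure
        print("server reports stage:", upload.get("stage"))
        time.sleep(10)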

Voting

Make Special:Search on Commons show all requested thumbnails

  • Problem: Special:Search on Commons does not show all thumbnails (example).
  • Proposed solution: Make it show all expected thumbnails after one user request, for example by building in an automatic Ctrl+F5 page refresh until all are loaded ... but better: clean up the code that prevents pages from loading fully in one go. Make PDF search optional by default. Investigate endless loops.
  • Who would benefit: Categorizers of large collections of images
  • More comments: Thumbnail display changes and loading issues were already discussed in 2022 here, but the issue that large searches (100-500 items) never load all of their thumbnails has still not been solved.
  • Phabricator tickets: T266155
  • Proposer: Peli (talk) 18:54, 23 January 2023 (UTC)[reply]

Discussion

Thank you. Peli (talk) 13:44, 20 February 2023 (UTC)[reply]

Voting

Provide example code for adding depict statements with a command line script

Discussion

Did some digging, and the page we'll probably want to improve is mw:Wikibase/API. It appears that pywikibot is also capable of adding structured data: phab:T213904, phab:T223796, release notes, proof of concept -FASTILY 22:48, 28 January 2023 (UTC)[reply]

Example for python3: commons:User:SchlurcherBot/commonsapiaddclaimsoauth (please read the wikitext, the page does not render well). Let me know if you need another working example. --Schlurcher (talk) 18:07, 29 January 2023 (UTC)[reply]
After a quick look: it adds copyright and license. But I am after an example of getting the ID for a category and then adding the depicts statement. For license and copyright the P-numbers are known and do not change, so they can be hard-coded in the script. C.Suthorn (talk) 00:10, 30 January 2023 (UTC)[reply]
What exactly does "the ID for a category" mean? --Matěj Suchánek (talk) 16:52, 30 January 2023 (UTC)[reply]
Upload a picture of a blue-red striped cat that is hunting an Uber car. While uploading, you assign "Category:Images of blue-red cats hunting Uber cars". As the uploader you have the knowledge of which depicts statements are useful, and you can derive them (with programmatic support) from the category. A script would be able to fetch the Q-numbers and add the depicts statements to the image; adding statements is straightforward if the Q-numbers are already known. C.Suthorn (talk) 22:50, 30 January 2023 (UTC)[reply]
Is there actually a way to do that? I.e., is the relation between the category and suitable entity IDs for depicts stored somewhere? Because I believe that's the key, and I believe there is not at the moment. Maybe bringing this up to c:Commons talk:Structured data can make it clear. By coincidence, a similar topic is being discussed there right now: c:Commons talk:Structured data#Using categories as a proxy for P180 depicts values. --Matěj Suchánek (talk) 09:13, 31 January 2023 (UTC)[reply]

@C.Suthorn: Could you elaborate a bit more on the requirements? I could provide a compiled "command line tool that adds depict statements" in case the above code does not work. The call would look like this:

APISchlurcherBot.exe "Username" "Password" "M66593822" "Q68"

This code would perform this edit: [1] Questions:

  • Windows or Linux (the code I have is Windows)
  • Is password provided in command line ok (only ok for non-shared PCs, but easier)
  • Is M id ok, or title of page better (M id easier, but conversion can be done as well)

If that fulfills the need, I can share an executable for this task. If more complex is envisioned, then not. --Schlurcher (talk) 10:46, 4 February 2023 (UTC)[reply]

While I could integrate such a tool (on Linux; user/password is no problem), I think it would be better not to have a complete working program, but example code (which could be in pseudocode, or basically any language like PHP, Java, Perl, Python, C, whatever, as this is not about high-performance operations like computing ray-traced images or implementing social-scoring algorithms) for these basic tasks:
1. computing Q, M, and P numbers from a text string (like page name to Q number, ...)
2. setting an SDC value for a file
3. testing for values already set
From such examples, other people with other tools could profit too. C.Suthorn (talk) 12:47, 4 February 2023 (UTC)[reply]
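Along those lines, a rough Python sketch of the three tasks against the standard Wikibase action API (login and CSRF-token handling are omitted, and the hard part noted earlier in this thread, mapping a category to suitable Q-items, is left to the caller):

import json
import requests

COMMONS = "https://commons.wikimedia.org/w/api.php"
WIKIDATA = "https://www.wikidata.org/w/api.php"
S = requests.Session()

def m_number(file_title):
    # Task 1: page title -> M-number (the MediaInfo ID is "M" + page ID).
    r = S.get(COMMONS, params={"action": "query", "titles": file_title,
                               "format": "json"}).json()
    return "M" + str(next(iter(r["query"]["pages"].values()))["pageid"])

def q_number(label):
    # Task 1: text string -> best-matching Q-number on Wikidata.
    r = S.get(WIKIDATA, params={"action": "wbsearchentities", "search": label,
                                "language": "en", "type": "item",
                                "format": "json"}).json()
    return r["search"][0]["id"] if r["search"] else None

def existing_depicts(mid):
    # Task 3: list the Q-numbers already set as depicts (P180) on a file.
    r = S.get(COMMONS, params={"action": "wbgetentities", "ids": mid,
                               "format": "json"}).json()
    statements = r["entities"][mid].get("statements") or {}
    return [s["mainsnak"]["datavalue"]["value"]["id"]
            for s in statements.get("P180", [])]

def add_depicts(mid, qid, csrf_token):
    # Task 2: add a depicts statement, skipping values that are already set.
    if qid in existing_depicts(mid):
        return
    S.post(COMMONS, data={
        "action": "wbcreateclaim", "entity": mid, "property": "P180",
        "snaktype": "value",
        "value": json.dumps({"entity-type": "item",
                             "numeric-id": int(qid.lstrip("Q"))}),
        "token": csrf_token, "format": "json"})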

Voting

Native SVG support

  • Problem: SVG files are currently rendered out as PNG files on pages that use the image.
  • Proposed solution: Embed the SVG into the page output, only using PNG as a fallback on devices that don't support SVG.
  • Who would benefit: Readers on various devices when zooming in on the image within an article.
  • More comments: Currently SVG files are rendered into a fixed-resolution PNG file which, when scaled, suffers from the usual issues of zooming into a raster image. Embedded SVG (letting clients/web browsers render the images instead) would allow these images to be scaled infinitely, with the only restriction being the client/browser (a minimal markup sketch follows after this list).
  • Phabricator tickets: T5593 (see related tickets)
  • Proposer: —Locke Coletc 18:59, 23 January 2023 (UTC)[reply]
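The fallback mechanism in the proposed solution is essentially what the HTML picture element already provides; a minimal sketch of the markup the parser could emit (paths are illustrative):

<picture>
  <!-- Browsers that understand SVG fetch the vector original... -->
  <source type="image/svg+xml" srcset="/images/Example.svg">
  <!-- ...everyone else falls back to the rasterized thumbnail. -->
  <img src="/thumb/Example.svg/300px-Example.svg.png" width="300" alt="Example diagram">
</picture>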

Discussion

  • I wonder how worthwhile it would be to give users the option to forcibly use the PNG fallback for individual images if performance improvements are necessary, such as [[File:Example.svg|forcepng]]. -BRAINULATOR9 (TALK) 19:52, 23 January 2023 (UTC)[reply]
    We can't rely on that. If someone adds a 28MB SVG to the front page... there has to be some sort of technical protection against that. Also, we have the problem of fonts not being consistent across browsers, so you also need some sort of processing before files can be included (these details are already mentioned in the relevant tickets). —TheDJ (talkcontribs) 19:54, 23 January 2023 (UTC)[reply]
    Just a clarification. I think this wish is very much doable. But it's going to be a little bit more involved than 'just switch to the original svg'. Will easily take a few weeks of work. —TheDJ (talkcontribs) 23:48, 26 January 2023 (UTC)[reply]
    Huh? I'm talking about, if this proposal is implemented and SVGs default to being rendered as SVGs, then the option should exist to use the PNG format as a fallback. -BRAINULATOR9 (TALK) 18:16, 16 February 2023 (UTC)[reply]
    @Brainulator9: I understand what you're asking for, but is there a reason someone concerned about that couldn't just simply upload a PNG rendering of the SVG for such a use-case? I think @TheDJ and I are thinking of technical restrictions to (as TheDJ suggested) limit gigantic SVG files from being sent as-is. But if there's some edge-case where a PNG is desired, simply uploading a PNG and then the SVG source for future editors to make changes to as-needed would suffice, no? —Locke Coletc 18:41, 16 February 2023 (UTC)[reply]
    That could work, as long as the end users remember to update the PNG fallback alongside the SVG original. -BRAINULATOR9 (TALK) 02:01, 17 February 2023 (UTC)[reply]
    I definitely think there should be a cutoff for how large of an SVG we'd want sent to clients. For all SVG < 32KB *or* where the SVG is smaller than a PNG, I'd presume we just send the SVG. For situations where the SVG is larger than 32KB AND the PNG is smaller, then it might make sense to send the PNG still. 32KB is a purely arbitrary number, I'd hope we'd look at the existing average sizes of SVG files throughout the projects and do some data crunching to come up with a number that makes sense. —Locke Coletc 19:59, 23 January 2023 (UTC)[reply]
  • There's a script I used to test how native SVGs would work on Wikipedia... and they worked okay unless they were detailed geo maps with many tiny polygons, in which case large parts of the wikipage would stop rendering, text and pics. This is the code for your common.js: //mw.loader.load( '/w/index.php?title=User:Opencooper/svgReplace.js&action=raw&ctype=text/javascript' ); ponor (talk) 21:27, 23 January 2023 (UTC)[reply]
  • Why can't Wikipedia clean up SVG pictures before uploading them to the server? There are resources with an open license, such as SVGo. This would ensure the safety of this image format. It is also possible to make a limit on the size of uploaded svg for example in 56Kb (talk) 12:07, 26 January 2023 (UTC)
    Limiting the size of SVGs is not a good idea. Or at least, if limited, it should be some reasonably big number (at least 25-30 MB). I have some maps in SVG that require details and easily go over 10-15 MB. Ikonact (talk) 08:33, 27 January 2023 (UTC)[reply]
  • It will be awesome, because SVG can contain not only vector graphics but also animated and interactive graphics. Although it would be a security and performance issue if WP embedded every SVG image. Maybe we need a special tag or something for SVG images, checked for issues to decide whether to embed the image into the page or make a PNG thumbnail. --Tucvbif (talk) 15:38, 11 February 2023 (UTC)[reply]
    I understood embedding SVGs to mean "using the original SVG wherever PNG thumbnails are currently being used". That being said, there's no risk of interactive graphics screwing with the page, as an SVG's code needs to be embedded directly into the page in order to respond to user activity. Moreover, this feature wouldn't introduce new security issues, as WP already blocks SVG uploads that contain scripts and other "problematic" content. Alhadis (talk)

Voting

Update Vega to the latest build

  • Problem: MediaWiki currently uses an old version of the graph-rendering library Vega (our build was released seven years ago) that has issues with accessibility, syntax, functionality, and security. To work with it, you have to resurrect outdated manuals; this, in turn, prevents users from creating cool new graphics.
  • Proposed solution: Update Vega to the latest build
  • Who would benefit: Readers and Editors
  • More comments: Related previous wishes: 2022, 2021, 2019.
  • Phabricator tickets: T165118
  • Proposer: Iniquity (talk) 19:32, 25 January 2023 (UTC)[reply]

Discussion

Voting

Show the number of items in "What links here"

  • Problem: If you want to know which pages/items link to a specific page, you can click on "What links here" in the left column under Tools (example). But you can't see the number of linking items until you click on "500 search results per page", or sometimes you need "5000", to see the line: "Displayed 1,846 items."
  • Proposed solution: Always display this line (the number of items), independently of the number of search results shown per page (a counting sketch follows after this proposal).
  • Who would benefit: probably every editor
  • More comments: Thank you!
  • Phabricator tickets: T6394
  • Proposer: W like wiki (talk) 04:28, 5 February 2023 (UTC)[reply]
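There is no single API field for this total, which is presumably why the interface only shows the "Displayed ... items" line once enough results fit on one page; a gadget would have to count by paging through the backlinks list, roughly like this Python sketch (the title is a placeholder):

import requests

API = "https://commons.wikimedia.org/w/api.php"

def count_links_here(title):
    # Page through list=backlinks and count, following API continuation.
    count, params = 0, {"action": "query", "list": "backlinks",
                        "bltitle": title, "bllimit": "max", "format": "json"}
    while True:
        r = requests.get(API, params=params).json()
        count += len(r["query"]["backlinks"])
        if "continue" not in r:
            return count
        params.update(r["continue"])

print(count_links_here("Template:Information"))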

Discussion

Voting

Ease the creation of diagrams

  • Problem: Creating a diagram in Wikipedia is tedious. You basically have to create an SVG file, which makes updating diagrams difficult. SVG code is quite verbose.
  • Proposed solution: Use a high-level language for describing diagrams, for instance PGF/TikZ, which is more or less the mainstream solution for making figures/diagrams in a LaTeX document.
  • Who would benefit: Any contributor that wants to create/update a diagram/picture for a Wikipedia article.
  • More comments: Supporting the PGF/TikZ language would be, in some sense, in the same spirit as the already existing support for the Lilypond language (for music notation). The benefit of PGF/TikZ over other diagram languages is that it also supports LaTeX formulas directly. Of course, we should discuss the choice of the supported language; we could also offer the possibility to describe a diagram in several languages (PGF/TikZ, graphviz, etc.). A small example follows after this list. In case this wish is selected, I would be glad to participate in the development.
  • Phabricator tickets:
  • Proposer: Fschwarzentruber (talk) 18:37, 5 February 2023 (UTC)[reply]
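For a flavor of what such a high-level description looks like, a small self-contained PGF/TikZ example (two labelled nodes and an annotated arrow); the wish is that a wiki page could carry source like this instead of an exported SVG:

\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{positioning}
\begin{document}
\begin{tikzpicture}
  \node[draw, circle] (a) {$q_0$};
  \node[draw, circle, right=2cm of a] (b) {$q_1$};
  \draw[->] (a) -- node[above] {$x$} (b); % edge label uses LaTeX math directly
\end{tikzpicture}
\end{document}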

Discussion

Voting

Can a deletion request for a category be as easy as for a file?

  • Problem: Making a deletion request for a Commons category, when it is not a speedy deletion, requires a lot of effort and steps; see c:Commons:Deletion requests/Listing a request manually. Not only do you need to nominate the category for deletion, you also have to (1) manually create a deletion-request subpage, (2) manually add the newly created subpage to Commons:Deletion requests/YYYY/MM/DD, and (3) manually notify the uploader on his/her talk page. These extra steps are performed automatically when you nominate a file for deletion.
  • Proposed solution: Automate the extra steps, as is done when nominating a file for deletion.
  • Who would benefit: Editors who nominate categories for deletion, where it is not a speedy deletion.
  • More comments:
  • Phabricator tickets:
  • Proposer: JopkeB (talk) 05:48, 25 January 2023 (UTC)[reply]

Discussion

  • Sounds like a solid and good idea, we will only move this to a different category. KSiebert (WMF) (talk) 11:47, 25 January 2023 (UTC)[reply]
  • @JopkeB: See commons:Help:Nominate for deletion. There is a "Nominate for deletion" link under the "Tools" menu available that automates the process you describe. For this reason I'm going to archive this proposal as a solution already exists. Thanks for participating in the survey, MusikAnimal (WMF) (talk) 21:29, 8 February 2023 (UTC)[reply]
    Sorry, I see now you wanted this for categories. The same gadget should work there, though. Does it not for you? When viewing a category, the link instead will read "Nominate category for discussion". MusikAnimal (WMF) (talk) 21:31, 8 February 2023 (UTC)[reply]
    @MusikAnimal (WMF): Yes, this request is indeed for categories. Yes, I can nominate a category for discussion, but that does not delete the category. Once I closed a discussion about a category and asked for a speedy deletion, and I was told this was not the proper way: I first had to nominate it properly for deletion. So after that I did so, several times (also for other categories), and I found out it is a time-consuming process, not as easy as nominating a file for deletion. That is why I added this request to the wishlist. Now I would like to ask you to bring this request back to the Wishlist 2023. JopkeB (talk) 10:45, 9 February 2023 (UTC)[reply]
    @JopkeB Do not worry, we can move this back (voting doesn't start until tomorrow), but I'm still not sure what you're asking for doesn't already exist. You can use the gadget to "Nominate category for discussion". From there, that discussion can lead to deletion. For example commons:Category:Anonymous painters which was deleted as part of a category for discussion. You should not have to list a request manually. More info at commons:Commons:Categories for discussion#Starting requests. Or am I still misunderstanding you?
    If I don't hear back by tomorrow, I'll just move the proposal back to the survey. Apologies for the hasty archiving to begin with. Thanks, MusikAnimal (WMF) (talk) 16:34, 9 February 2023 (UTC)[reply]
    This commons:Commons:Categories for discussion#Starting requests is exactly my point: there are four steps you have to take before a deletion request (DR) for a category is completed, while I would like to have it done in just one step, just like a DR for a file. So yes, please move the proposal back to the survey. Apologies accepted. Thanks. JopkeB (talk) 02:55, 10 February 2023 (UTC)[reply]
    Okay, no problem :) I'm still reading the four steps are the "by hand" method, and the gadget automates it. But clearly I'm missing something! I'll move it back now. Thanks for participating in the survey, MusikAnimal (WMF) (talk) 04:30, 10 February 2023 (UTC)[reply]
  • This would be great, as right now we get several categories nominated for discussion each month by users who want the contents of those categories deleted. Usually this is by users using the "Nominate category for discussion" gadget under the mistaken presumption that a CfD can do much about it. Even if discussion participants all agree that the category and files need to be deleted, someone still needs to go make the deletion request for the files, and only once they have been deleted, then the category can be deleted. We cannot delete a category that has contents as that leaves the files orphaned (and may prompt re-creation of the category as a result). If there is a better tool for users to nominate the contents of a category for deletion, directing the list of contents to COM:DR first, and then only after completion of that process, speedy deleting the then-empty category, all without clogging COM:CfD with a bunch of posts we really cannot do much about, that would be outstanding. Joshbaumgartner (talk) 22:23, 10 February 2023 (UTC)[reply]

Voting

Prevent Flickr2Commons from uploading duplicate files

  • Problem: Flickr2Commons is the main source of duplicate files.
  • Proposed solution: Fix Flickr2Commons by checking the SHA-1 hash of a Flickr file to see whether it is already available on Commons.
  • Who would benefit: Everyone
  • More comments:
  • Phabricator tickets:
  • Proposer: C.Suthorn (talk) 09:30, 30 January 2023 (UTC)[reply]

Discussion

  • I would have guessed the SHA1 hash check happens server-side. I've definitely run into the warning myself using the native Special:Upload interface. That tells me the hash values probably don't match, and whatever image Flickr2Commons is trying to upload must have subtle differences from the duplicates already on Commons. I'm not sure how this could be resolved without machine learning. MusikAnimal (WMF) (talk) 16:48, 30 January 2023 (UTC)[reply]
    Yes, there is a server-side check. It produces a "warning" (not an "error"). In case of an error, the file will not be published. In case of a warning, the upload tool can decide to stop the upload or to go on and publish. It is possible to compute the SHA1 and check it against the server, then not upload if the file is already there. Or it is possible to upload a file, then check for the warning, then stop the upload if there is a duplicate. You could even do both: check before uploading and check again before publishing the upload. F2C only checks for an identical filename (not an identical Flickr ID, but an identical filename including the Flickr ID). If the file gets renamed (or was uploaded with another filename scheme, like Upload Wizard uses), the check by F2C will fail. C.Suthorn (talk) 22:57, 30 January 2023 (UTC)[reply]
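    A sketch of the second check described above, assuming Python and a session that already holds a stashed upload (filekey) and a CSRF token: publish without ignorewarnings first, and inspect the documented duplicate warning before deciding whether to publish.

def publish_if_new(session, api, filekey, filename, text, csrf_token):
    # Without ignorewarnings=1 the upload API answers result=Warning and,
    # for a known checksum, lists the existing files under warnings.duplicate.
    r = session.post(api, data={
        "action": "upload", "filekey": filekey, "filename": filename,
        "text": text, "token": csrf_token, "format": "json"}).json()
    upload = r.get("upload", {})
    duplicates = upload.get("warnings", {}).get("duplicate")
    if duplicates:
        return "skipped; duplicate of: " + ", ".join(duplicates)
    if upload.get("result") == "Warning":
        return upload["warnings"]  # some other warning: resolve and retry
    return upload.get("result")  # "Success" on a clean publish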
  • What are the benefits of Flickr2Commons compared to the Flickr-import functionality in UploadWizard? It seems like it might be better to incorporate anything missing there, rather than having multiple things do the same work. SWilson (WMF) (talk) 01:01, 8 February 2023 (UTC)[reply]
    @SWilson (WMF) it can bulk-upload groups, photosets etc. Would be great to have this in UW. Strainu (talk) 21:35, 10 February 2023 (UTC)[reply]
    @Strainu: Yes, good idea. UploadWizard does support photosets/albums though. I'm not sure how it handles really large ones, and I don't think it does groups. SWilson (WMF) (talk) 07:43, 16 February 2023 (UTC)[reply]
    When I still tried to use the UW, it allowed uploading 150/500 files in one go, but I never found any way to upload more than 100 files in one go without the web browser crashing. Also, in the second batch of uploads it always said the first file of the second batch was already uploading, and it falsely claimed it was more than 150/500 files if the sum of the first and second batches was larger than 150/500 files. C.Suthorn (talk) 17:55, 16 February 2023 (UTC)[reply]
  • Flickr has announced that they're taking over maintenance of Flickr2Commons, so this bug might be fixed as part of that. Discussion thread here: commons:Commons:Village pump#Flickr_Foundation_adopts_Flickr2CommonsSam Wilson 04:49, 2 June 2023 (UTC)[reply]
    Yes! Please see our project page here. Jessamyn (talk) 17:20, 7 July 2023 (UTC)[reply]

Voting

Display the metadata of video files in the metadata section of the file description

  • Problem: MediaWiki shows only the video and audio encoder versions in the metadata section of the file description page, and removes all metadata from transcoded video files.
  • Proposed solution: Show the metadata just as for image files. The metadata is stored in the MediaWiki database; it just does not get displayed on the file description page.
  • Who would benefit: Everyone who likes metadata info.
  • More comments:
  • Phabricator tickets: T49487, T237154
  • Proposer: C.Suthorn (talk) 00:51, 29 January 2023 (UTC)[reply]

Discussion

Voting

Include more metadata in thumbnails

  • Problem: In thumbnails, only the "author" and "copyright" lines from the EXIF are included. Copyright info from XMP and IPTC metadata is not included, and other relevant info from the EXIF is also left out.
  • Proposed solution: Change this thumbnailing process to include more of the EXIF in the thumb
  • Who would benefit: Everyone who wants to avoid ignoring copyright
  • More comments:
  • Phabricator tickets: phab:T5361
  • Proposer: C.Suthorn (talk) 00:47, 29 January 2023 (UTC)[reply]

Discussion

  • This is primarily to reduce data size. EXIF can contain full images by itself, and can sometimes be bigger than the thumbnail, kinda defeating the point of thumbnailing. I believe it's currently done automatically by ImageMagick (the thumbnailer), not by ourselves? I'd have to check. —TheDJ (talkcontribs) 23:01, 29 January 2023 (UTC)[reply]
    • P.S. what we used to do, was add a link in the comment metadata back to the file page. This functionality got lost when we switched to thumbor, but it was also very limited, as only jpg and png thumbnailing supported this to begin with. —TheDJ (talkcontribs) 23:05, 29 January 2023 (UTC)[reply]
    This is not about keeping all EXIF (like thumbs, f-number, APEX, flash or exposure time), but about imageTitle, keywords, location, webStatement, creator, author, copyright holder, IPTC, XMP, etc. C.Suthorn (talk) 23:54, 29 January 2023 (UTC)[reply]
  • If we were to do more metadata-writing to files (which I think is a great idea), it might be better to look at doing so based on the info in the file page and/or SDC. These are considered the canonical data, and we don't really expect file's EXIF to be accurate (e.g. wrong dates, authors, etc. are allowed to remain in EXIF but will be fixed in the file page). In fact, it'd be nice to be able to download the full-sized file with the correct metadata as well. Sam Wilson 03:17, 8 February 2023 (UTC)[reply]
    A file with conflicting copyright info in its EXIF and in the Commons metadata would need to be deleted or fixed. If a file has "copyright Reuters" in its EXIF/XMP/IPTC, then either it needs to be deleted, as it is not freely licensed, or it is not by Reuters (in which case it will in most cases also need to be deleted, or be fixed, if it is not actually by Reuters). But adding EXIF from CommonsMetaData to the EXIF of thumbs is a good idea. C.Suthorn (talk) 05:28, 8 February 2023 (UTC)[reply]
There are numerous photos of mine that are with both Reuters and Commons. Ralf Roletschek (talk) 12:55, 13 February 2023 (UTC)[reply]
@Ralf Roletschek do you have an example on Commons of what the EXIF data looks like there? Does Reuters appear in it too? C.Suthorn (talk) 14:14, 13 February 2023 (UTC)[reply]
The EXIF data is deleted completely before the files go to Reuters. --Ralf Roletschek (talk) 16:59, 13 February 2023 (UTC)[reply]
@Ralf Roletschek Do I understand correctly: in the EXIF data of your files on Commons there is no mention of Reuters?? C.Suthorn (talk) 17:39, 13 February 2023 (UTC)[reply]
Or are you saying that you delete the EXIF data, give the images to Reuters, then download them from Reuters and upload the images with the EXIF data (strictly speaking, probably IPTC) to Commons without cleaning the Reuters data out of the EXIF? That would be a very cumbersome way of working. C.Suthorn (talk) 17:42, 13 February 2023 (UTC)[reply]
I'll come over to your talk page on Commons; this is getting too far afield here. --Ralf Roletschek (talk) 18:18, 13 February 2023 (UTC)[reply]
It was an interesting discussion, but hasn't to do with the proposal (closed). C.Suthorn (talk) 18:58, 13 February 2023 (UTC)[reply]

Voting

Support linking to lexemes in structured data on Commons

Discussion

I think that this is going to help Lexemes improve using the info on Wikidata. I strongly support it. Egezort (talk) 00:48, 11 February 2023 (UTC)[reply]

There are also a growing number of images depicting written words (gathered in 'xxxx (text)' categories) and it would be handy to link these as well as the pronunciation files to lexemes. Joshbaumgartner (talk) 16:56, 11 February 2023 (UTC)[reply]

Voting

Add "search results per page" at the top of Special:Search

  • Problem: If you use Special:Search (example), you can define the number of search results per page only at the bottom, not at the top.
  • Proposed solution: Do it like the tool "What links here" (example), where the selection "View (newer 50 | older 50) (20 | 50 | 100 | 250 | 500)" also appears at the top.
  • Who would benefit: probably all editors and readers
  • More comments: Thank you!
  • Phabricator tickets:
  • Proposer: W like wiki (talk) 04:54, 5 February 2023 (UTC)[reply]

Discussion

Voting

Improve error handling of video2commons

Discussion

Support. We need this badly, for both uploading videos from devices and importing videos from a URL. --RZuo (talk) 15:40, 26 January 2023 (UTC)[reply]

Voting

Make MediaViewer actually show all authors, instead of just the first one

  • Problem: Media Viewer cannot handle attributions properly, and can, in relatively simple and common circumstances, only show one of several authors of an image. This can lead to copyvio on the WMF's part.

    Commons uses c:Template:Creator templates to make attributing authors easier. They provide a Wikidata link and the birth and death years of the author. The problem triggers in any situation where there are multiple authors and at least one has a "Creator" template, which is not particularly uncommon. Media Viewer searches for the first Creator template and strips all other text. It can't even gracefully handle situations with multiple "Creator" templates: it will only return the first and ignore the rest. However, ironically, if no Creator templates are used, it's perfectly capable of just returning the entire Author/Artist field.

    This seems to be treated as an edge case, but it really isn't. Lithographs can easily have three main authors (c:File:Edward Duncan - The Explosion of the United States Steam Frigate Missouri - Original.tiff), and collaborative works aren't that rare (say, c:File:Humanité René Philastre and Charles-Antoine Cambon - Set design for the second part of Victor Hugo's Les Burgraves, première production - Original.jpg). One could also easily get this situation in a photograph of a sculpture or any other 3D work, any collage, etc.

    While the WMF can say (and has said) that the file description page is the only true place for someone to get file info, I've seen sites link back to a MediaViewer on a Wikipedia page as their source (Cracked magazine used to do that a lot, for instance). Given a CC-licensed image with more than one author, Media Viewer's failure could either set our reusers up for a lawsuit, or set the WMF up for one.

    Quite frankly, after 9 years of being aware of this (see Phabricator bug report), it feels like a basic attribution bug in what's being used as a key software component should have been fixed long ago.

  • Proposed solution: Fix MediaViewer to handle such cases better; or, if it can't be fixed readily, provide a failstate for when it's not sure. For instance, "See file description page for details" is a valid choice when the software is not sure, and certainly better than inaccuracy.
  • Who would benefit: Creators, reusers, and arguably, it could save the WMF from a lawsuit if a Creative Commons license is misattributed because of the faulty software
  • More comments:
  • Phabricator tickets: phab:T68606 - this is 9 years old.
  • Proposer: Adam Cuerden (talk) 17:52, 28 January 2023 (UTC)[reply]

Discussion

  • It's not so much that we considered it as an edge case, but as a case where the Commons community needs to figure out how to represent multiple authors in a machine-readable way. (Back when MediaViewer still had official maintainers, anyway. These days, I don't think there is anyone considering it at all.) Authorship metadata is maintained by the community, and developers have very little influence over how it happens. --Tgr (talk) 03:17, 5 February 2023 (UTC)[reply]

Voting

A warning popup when adding a DAB cat while using HotCat

Discussion

If you're going to take on improvements to HotCat, it would be nice if you'd also do something about people directly adding the tracking categories that are supposed to be added by templates like en:w:Template:Citation needed instead. Anomie (talk) 13:51, 6 February 2023 (UTC)[reply]

Voting

Easier drag-and-drop uploading with licensing options

  • Problem: It's hard to upload images and provide the correct licensing information. A prime example is screenshots. They could be more frequently shared at technical support venues like the English Wikipedia's Village Pump (technical), but many users don't because they either don't know how to upload images or are confused by the licensing.
  • Proposed solution: Expand the VisualEditor drag-and-drop upload tool to have a dropdown that lets you select the license.

    A similar solution could also be explored for the 2010 wikitext editor.

  • Who would benefit: Users helping others with technical issues on the wiki, and users who simply want to upload and use images more easily.
  • More comments:
  • Phabricator tickets:
  • Proposer: Qwerfjkl (talk) 18:30, 23 January 2023 (UTC)[reply]

Discussion

  • FWIW, this would be amazing, although I'm not entirely certain how much utility it would have outside of one's first few uploads. That being said, anything that makes it easier to contribute good content to the project is a win in my book. Foxtrot620 (talk) 02:27, 24 January 2023 (UTC)[reply]
  • @Qwerfjkl: To help me get a better understanding of the problem, what is it specifically about uploading screenshots that makes it harder than uploading other images? Thanks, DWalden (WMF) (talk) 08:25, 24 January 2023 (UTC)[reply]
    Nothing; images are hard to upload in general (I haven't tried much, actually, apart from screenshots), although screenshots are frequently stored in the clipboard, which makes saving them as a file and then uploading them a hassle. I just picked screenshots because it's something I frequently have to do. This could, I suppose, be extended to images in general. Qwerfjkl (talk) 16:53, 24 January 2023 (UTC)[reply]
    @Qwerfjkl Are you primarily referring to screenshots of Wikipedia or MediaWiki software? I ask because I see value in specific functionality for that, for example to report bugs at technical forums like w:WP:VPT. On phab: we can just click and drag and don't have to bother with licensing. Commons isn't built for that, so I think a "screenshot uploader" would be nifty as it'd prefill all the licensing info and categorization. A more generalized upload tool would have to be as complicated as our default upload forms to accommodate other use cases. Of course, the screenshot uploader could be abused/misused as well, but that could be curtailed if it was limited to only certain pages like technical forums. MusikAnimal (WMF) (talk) 16:24, 25 January 2023 (UTC)[reply]
    @MusikAnimal, yes, I'm referring to screenshots of Wikipedia. Qwerfjkl (talk) 16:28, 25 January 2023 (UTC)[reply]
  •   Question: Isn't this already mostly integrated with the Visual Editor? If you edit a page in VE, you can PASTE from your clipboard which starts the upload screen. — xaosflux Talk 15:05, 26 January 2023 (UTC)[reply]
    Interesting. This is pretty much what I'd like, but outside of VE. (You can try this by adding ?veaction=edit to the URL.) Qwerfjkl (talk) 16:56, 26 January 2023 (UTC)[reply]
    @Qwerfjkl: I took inspiration from this and wrote a script. (It's pretty hacky.) Nardog (talk) 18:19, 7 February 2023 (UTC)[reply]
  •   Comment: Just thinking out loud, this sounds like a request for a pastebin-like tool for images, maybe one which you sign in to via OAuth, which uploads the image for you and sets the licence/category templates? ~TheresNoTime-WMF (talk) 17:54, 26 January 2023 (UTC)[reply]
  • There is a secondary issue here in that we lack social discussion for how to manage screenshots. The only template we have for copyright tagging is {{Wikimedia screenshot}}, which by default assumes that screenshots contain copyrightable content. Often screenshots should have CC0 licenses, particularly when someone is screenshotting to create instructions or tutorials rather than to showcase Wikipedia article content. A consequence of the use of that license is the inappropriate propagation of Creative Commons licenses where they do not apply. Since so many minority and underserved groups are the beneficiaries of new and specialized tutorials, getting some upload support for Commons could be a helpful part of the tool. Bluerasberry (talk) 20:16, 26 January 2023 (UTC)[reply]
  • Visual Editor -> insert -> images and media -> upload -> drag n drop works really well for quickly uploading images, including screenshots. There's a major downside though: it's very hard to pick the correct license. Perhaps a solution to this (and perhaps what this wish should be converted to) is adding a combo box to this workflow that lets you choose common image licenses, including Wikipedia screenshots. –Novem Linguae (talk) 07:36, 30 January 2023 (UTC)[reply]
    That and possibly a drag and drop solution for the 2010 editor. @Qwerfjkl What do you think? With your permission, I'm happy to reword your proposal to what we think will do better in voting and be in scope. Then we can run it by you before accepting the wish for voting. MusikAnimal (WMF) (talk) 21:16, 3 February 2023 (UTC)[reply]
    @MusikAnimal, go ahead, no need to run it by me. Qwerfjkl (talk) 23:24, 3 February 2023 (UTC)[reply]
    The same upload tool is available in the 2010 wikitext editor, but you have to find it in the menus first: mw:Upload dialog#In wikitext editor (unlike in VE, you can't drag-and-drop onto the page editor itself, but you can drag-and-drop onto the dialog once it's open). Matma Rex (talk) 20:41, 11 February 2023 (UTC)[reply]
  • Uploading to Commons comes with expectations that you label the image, set a license etc, which doesn't make sense for "throwaway" use cases like debugging. Just upload to Phabricator or to some popular image sharing site like imgur. For many such sites there are even browser plugins which let you take and share screenshots with a single click. --Tgr (talk) 05:36, 5 February 2023 (UTC)[reply]
    Yeah, and this is what I've previously argued for over at en:WP:VPT (as a volunteer, to be clear). However many still feel the screenshots belong on Commons or the local file repo. Some say it's because they don't want to have to visit external sites, and others say they want to see images embedded in the wiki and not just via a link. So I think there's something in this proposal to work with. MusikAnimal (WMF) (talk) 16:56, 6 February 2023 (UTC)[reply]
  •   Comment Thinking about how to automate the licensing checks: it would be nice if there were a screenshot tool that scraped the website and its image licensing. The other option would be to look up the licensing using reverse image search, if the Google API for reverse image search (now deprecated, I think) worked for us, or if Google classified our images using their google image metadata system. Wakelamp (talk) 03:00, 7 February 2023 (UTC)[reply]
  • phab:T249591 seems related. –Novem Linguae (talk) 06:02, 1 March 2023 (UTC)[reply]

Voting

Allow access to SDC from other wikis

  • Problem: It is not possible to access Structured Data on Commons (SDC) data via Lua modules or parser functions from other wikis.
  • Proposed solution: Enable cross-wiki arbitrary access to MediaInfo entity data from wikis other than Commons, in the same way that Wikidata data can be accessed.
  • Who would benefit: Editors and readers of all Wikimedia projects, since use of Commons media is universal
  • More comments: Enabling this access is necessary to fully realize the value of Structured Data on Commons (SDC). The growing body of SDC data needs to be accessible and usable outside of Commons, just like the images themselves are. This would allow templates to be coded using data derived from SDC statements, in the same way infoboxes and many other templates rely on Wikidata. There are currently millions of SDC file captions and hundreds of millions of SDC statements. Arbitrary access to SDC has long been assumed, such as in "What are the benefits of captions?" in the Commons help page, but never implemented.

    Example uses:

    1. Populate an image's caption with the SDC caption stored in the MediaInfo entity. File captions are prominently encouraged in the Wikimedia Commons UI, including in the Upload Wizard, but they are currently displayed only within Commons itself and not used for much across the wikis. Cross-wiki access could allow captions to be displayed in the user's own language on multilingual projects, as long as the captions are added centrally on Wikimedia Commons for each language.
    2. Populate alt text for an image using the alt text property's value, if one is present, or even by listing all of the things depicted in the image with P180 (depicts). (A sketch of fetching captions and depicts values over today's web API follows after this proposal.)
    3. Use descriptive metadata fields like title, creator, date, and collection to build formatted citations for images that are historical artifacts from online catalogs, as suggested in the "Image credits in Wikipedia: Can we do better?" Wikimania talk in 2022.
  • Phabricator tickets: T238798
  • Proposer: Dominic (talk) 21:37, 29 January 2023 (UTC)[reply]
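
The sketch referenced in the example uses above: a minimal illustration of what this data looks like over the web API, currently the only cross-site route. It assumes the file already has SDC data (error handling omitted); a Lua analogue would expose the same entity shape:

    import requests

    API = "https://commons.wikimedia.org/w/api.php"

    def media_info(file_title: str, lang: str = "en") -> dict:
        """Fetch a file's MediaInfo entity: caption (label) and depicts statements."""
        # MediaInfo entity IDs are "M" plus the file page's ID.
        pages = requests.get(API, params={
            "action": "query", "titles": file_title,
            "format": "json", "formatversion": "2",
        }).json()["query"]["pages"]
        mid = "M" + str(pages[0]["pageid"])

        entity = requests.get(API, params={
            "action": "wbgetentities", "ids": mid, "format": "json",
        }).json()["entities"][mid]

        caption = entity.get("labels", {}).get(lang, {}).get("value")
        depicts = [s["mainsnak"]["datavalue"]["value"]["id"]
                   for s in entity.get("statements", {}).get("P180", [])
                   if s["mainsnak"].get("datavalue")]
        return {"caption": caption, "depicts": depicts}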

Discussion

  • A somewhat different proposal: T325949 --Tgr (talk) 02:18, 1 February 2023 (UTC)[reply]
    In somewhat more detail: I think there should be some sort of intermediary format between SDC and the tools using the data because:
    1. Often we want to be able to reuse those tools for local (non-Commons) images, but those aren't going to have SDC support for the foreseeable future. OTOH, if we had an exchange format which, instead of the full richness of SDC Wikibase claims, only supports the specific metadata types for which there's a use case for displaying them outside the file description page, coming up with a different way of providing them on non-SDC file pages (e.g. parser functions) would be easy.
    2. Similarly, in many cases we want the tools to be usable outside Wikimedia wikis, on third-party MediaWiki installations (which typically don't use Wikibase, which is a pretty unwieldy extension).
    3. It could also be used for cross-wiki access of non-SDC information (like EXIF data).
    4. If the specific PIDs and whatnot get encoded into a zillion tools and Lua modules, any kind of changes to SDC data (e.g. switching from monolingual text to multilingual text once that data type gets implemented in Wikibase) gets extremely hard to coordinate. In contrast, an exchange format would make it easy to provide backwards compatibility for however long it is deemed useful.
    5. From a developer usability point of view, some kind of simple key-value metadata format is much easier to understand and use than SDC with its many levels of complexity (properties, Wikidata items as values that need to be processed further, qualifiers, special values like partial dates, etc.). When special handling is required, having to implement it in every tool separately would be a lot of wasted effort and result in lots of sub-par functionality.
    6. Also, from a maintenance point of view, we don't want lots of different media-related tools and extensions to directly depend on Wikibase, which is hard to set up and has many fragile tests.
    7. Cache invalidation tends to be the hard part of cross-wiki data access, and that might be easier and more efficient for a data exchange format where you know what the use case for the various fields is than for raw SDC data.
    So IMO the way to go here is:
    • Define an exchange format, on a general level something simple like key => JSON value, with each key in use and the corresponding data semantics documented somewhere. (This wouldn't be completely dissimilar to what we have now with the GetExtendedMetadata hook, just more powerful; a hypothetical illustration follows below.)
    • Define a hook or service system whereby SDC (WikibaseMediaInfo) can fill in and invalidate these values as they get queried and changed.
    • Add MediaWiki core functionality for caching structured media metadata in this data exchange format, accessing it cross-wiki via FileRepo, and exposing it via web API, Lua API and PHP API.
    Tgr (talk) 02:41, 5 February 2023 (UTC)[reply]
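
A hypothetical illustration of the flat key => JSON value format sketched above; every field name here is invented for the example and would need to be documented as part of the format:

    # Hypothetical exchange format: flat, documented keys with JSON-serializable
    # values, filled from SDC where available and from EXIF/templates elsewhere.
    file_metadata = {
        "caption": {"en": "Aerial view of Hobart, Tasmania"},
        "alt_text": {"en": "A coastal city seen from the air"},
        "license": "cc-by-sa-4.0",
        "attribution": "Jane Example",
        "depicts": ["Q1234"],          # already-resolved Wikidata item IDs (placeholder)
        "date_created": "2010-03-14",  # simplified from Wikibase's partial-date values
        "source_url": "https://example.org/catalogue/1234",
    }

    # Consumers (Lua modules, extensions, third-party tools) read these keys
    # without needing to understand Wikibase claims, qualifiers, or datatypes.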
  • I see only gains, no pains! Beireke1 (talk) 08:50, 1 February 2023 (UTC)[reply]
  • In the project Wiki Loves Living Heritage, I would like to display a set of images depicting each heritage element, as in this manually created example. Dominic and their team have created ViewIt!, which produces a JSON list of images matching a given topic, using many different methods. However, there is no way I can use this result on those pages. This proposed solution would make that possible. – Susanna Ånäs (Susannaanas) 🦜 14:28, 11 February 2023 (UTC)[reply]
  • We are not realizing the benefits of all the hard work on SDC - we are doing so much to input and ingest, yet we are finding it extremely hard to read and derive the benefits of it because of the authentication issues. Let's open things up and let great things happen. - Fuzheado (talk) 11:10, 14 February 2023 (UTC)[reply]

Voting

Enable 180°–360° metadata detection and embed/navigate capabilities in Commons/MediaViewer

  • Problem: When uploading, Commons can't appropriately detect the metadata of 180°–360° JPEG photos. That’s why we're getting fewer educational 180°–360° panoramas on Commons and fewer Facebook 360-style photos in articles.
 
  • Proposed solution: 1. Whitelist .jps and .mpo: allow these two JPEG-based formats to be uploaded and stored on Commons.
    2. Just like animated GIFs, detect all JPEG 180°–360° metadata at upload time (automatically treating anything 180° or wider as 360°), extract it with the JPEGMetadataExtractor class, and store it in a metadata field (just like Facebook 360). A naive detection sketch follows after this proposal.
    3. Next, enable Commons/MediaViewer to handle, embed, and navigate 180°–360° photos.
  • Who would benefit: Editors who want to convey the surroundings or interior architecture of a place in an article. Instead of showing multiple photos, an editor can show a single 360° photo that contains all of the information they want to show. Readers who like an interactive experience can explore the surroundings or interior architecture of a place in any wiki article they read. It puts the reader in control of what to look at within an image, much like being present at the moment the photograph was captured: they can spin around, look up or down, zoom in, and control where to look, all from their smartphone. Nearly every smartphone default camera app released in 2022 has a ‘panorama’ option, so we could gain a huge number of Commons- and Wikipedia-appropriate educational 180°/360° photos if we implement this community wish.
  • More comments: The capability to perform metadata injection manually would be nice. To keep this proposal simple, we shouldn’t mix it with any proposals for viewing three-dimensional computer graphics, model objects, stereoscopy, VR, or AR content. This wish has appeared in previous surveys and narrowly missed the Top 10 three times: proposed by MASUM THE GREAT in the 2016 and 2019 surveys (ranking #15 and #13) and by TheDJ in 2017 (ranking #11).
  • Phabricator tickets: T151749, T138933, T70719
  • Proposer: --MASUM THE GREAT (talk) 00:54, 1 February 2023 (UTC)[reply]
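
The naive detection sketch referenced in the proposed solution, assuming the photos carry the Photo Sphere (GPano) XMP metadata that Facebook 360 and most phone panorama modes write; proper support would parse the APP1/XMP segment rather than just scanning for markers:

    # Heuristic check for Photo Sphere (GPano) XMP metadata in a JPEG.
    GPANO_NS = b"http://ns.google.com/photos/1.0/panorama/"

    def looks_like_360(path: str) -> bool:
        """Heuristically detect an equirectangular 360-degree JPEG."""
        with open(path, "rb") as f:
            head = f.read(256 * 1024)  # XMP normally sits near the start of the file
        return GPANO_NS in head and b"equirectangular" in head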

Discussion

Video of current work in progress (duration: 4 minutes 26 seconds)
  • I have actually done some work on this since last year. The metadata parsing part of it is done (and merged in core); it only needs to be fed into the generated HTML, and then an extension needs to add the appropriate JS to initiate the view when you click it. The exact latest state is captured in my Dutch Wikimedia Hackathon summary. I'm now 2 hackathons into this project. That's, however, also the only time I've spent on it this year. Progress is slow, since I generally just don't have a spare 2+ hours available to spend on this project, but it should be pretty easy to complete with some dedicated attention. Especially the 360 and stereo views are achievable. Panoramas are actually turning out to be a bit more complex than I had thought, and are possibly better off left out of an initial version. —TheDJ (talk · contribs) 13:37, 2 February 2023 (UTC)[reply]

Voting