Global user page migration
Hello Lucas Werkmeister. I deleted your local user pages on all wikis as you requested via Synchbot, and your global user page is already active. You can see the deletion log on your archive page. :) —Pathoschild 23:26, 26 April 2016 (UTC)
You can propose OAuth consumers on beta meta. Also, you can test OAuth consumers in production before they are approved (with your own user only, but that's typically enough for testing the three-legged flow). --Tgr (WMF) (talk) 08:55, 18 May 2018 (UTC)
- @Tgr (WMF): Thanks, I didn’t know there was a beta meta :) I managed to complete the test with the unapproved consumer, yes, but then I couldn’t find a way to withdraw the proposal – but if you can do it as an admin, feel free to reject it or something. --Lucas Werkmeister (talk) 08:57, 18 May 2018 (UTC)
- @Tgr (WMF): Sorry to bother you again, but next I’ll probably work on OAuth in Wikidata-Toolkit, which unfortunately seems to have Wikidata hard-coded everywhere so far, so I’m not sure if I’ll be able to test that with beta meta and beta wikidata. Do you know if owner-only consumers support the full three-legged OAuth flow, so I could test with an owner-only consumer? --Lucas Werkmeister (talk) 13:51, 18 May 2018 (UTC)
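(For anyone reading along: as far as I understand, an owner-only consumer is issued its access token directly, so the authorize step is skipped, but requests are signed exactly as in the three-legged flow, i.e. OAuth 1.0a with HMAC-SHA1. A minimal standard-library sketch of that signing step; the function names are my own, not from any particular client library:)

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def oauth_escape(s):
    """RFC 5849 percent-encoding: only ALPHA / DIGIT / "-" / "." / "_" / "~" stay bare."""
    return quote(s, safe="")


def signature_base_string(method, url, params):
    """Build the base string: METHOD & escaped URL & escaped normalized parameters."""
    pairs = sorted((oauth_escape(k), oauth_escape(v)) for k, v in params.items())
    normalized = "&".join(f"{k}={v}" for k, v in pairs)
    return "&".join([method.upper(), oauth_escape(url), oauth_escape(normalized)])


def hmac_sha1_signature(base_string, consumer_secret, token_secret=""):
    """Sign the base string; the key is the two escaped secrets joined by '&'."""
    key = f"{oauth_escape(consumer_secret)}&{oauth_escape(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

(In the three-legged flow the token secret starts out empty for the request-token call and is then replaced by the request-token and finally the access-token secret; an owner-only consumer signs with its access-token secret from the start.)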
The speedpatrolling tool
Re blog and structured data ...IAUPLOAD
Hi. Thanks for the article. I would love to see some integration of toollabs:ia-upload to pump data through to Wikidata to create the edition of the work at the time of upload, with the other structured data.
At the moment, the process is
- identify the file at IA, plug it into IAUPLOAD with the IA string, where it does a light rendering of that data into c:template:book, which we then often manually jiggle around
- import file to Commons, in book template
- at the Wikisource, then create the Index: ns page (we have a js gadget that extracts the book template data and fills it into the respective fields of the Index: page)
- (sometime down the track usually following the transcription process) we will transclude the transcription pages to main namespace
- then create the Wikisource data and flow it through to Wikidata, either manually at WD, or maybe with the WEF framework tool at the WS. Or, as I see regularly, often nothing is done at all.
It is quite an onerous and repetitive process, let alone adding the structured data that you desire. What you are talking about in your blog seems to me a clear opportunity to have the Commons and Wikidata items created and immediately linked, pumped and primed at the beginning of the process, rather than left as just a possibility at the end.
Then at some point we need a good tool to create a book item from an edition, and an edition item from a book. So much manual work at Wikidata makes it a giant PITA, to the point that one can just shut one's eyes and try to ignore it altogether. — billinghurst sDrewth 06:06, 4 September 2019 (UTC)
36c3 Live Querying
The Live Querying is a really great idea and teaching instrument. Do you by any chance have a screen recording of your session? Since these are CC BY 4.0, I'm happy to provide a more readable video edit. --YaguraStation (talk) 09:45, 17 January 2020 (UTC)