Abstract Wikipedia/Updates/2021-02-18



Development has been very active. We are deep into phase γ, working on support for the core Wikifunctions types, including functions, implementations, testers, errors, and more. We are clearing some of the main obstacles to further development.

At the same time, we have already begun work on the larger system architecture, in particular on our evaluation engine with support for one native programming language. The evaluation engine is the part of Wikifunctions responsible for evaluating function calls. That is, it is the part that gets asked, “Hey, what’s the sum of 3 and 5?” and answers, “8”.

Our evaluation engine is separated into two main parts: the function orchestrator, which receives the calls and collates the functions and any data needed to process and evaluate them; and the function evaluator (or executor), which runs the contributor-written code as instructed by the orchestrator. Because the executor can run uncontrolled native code, it lives in a tightly controlled environment and has only minimal permissions, beyond limited use of compute and memory resources.
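To make the split concrete, here is a toy sketch of the orchestrator/executor separation. This is not the actual Wikifunctions code; the `FUNCTIONS` store, the string-based implementations, and both function names are hypothetical stand-ins. The real executor would be an isolated process with enforced CPU and memory limits, not just a restricted `exec`.

```python
# Hypothetical in-memory "wiki" of contributor-written implementations.
FUNCTIONS = {
    "add": "result = args[0] + args[1]",
}

def executor(code: str, args: list):
    """Runs contributor-written code with minimal permissions.
    (A stand-in for the sandboxed executor process.)"""
    scope = {"args": args, "result": None}
    # Empty builtins approximate "no access to the outside world".
    exec(code, {"__builtins__": {}}, scope)
    return scope["result"]

def orchestrator(call: dict):
    """Receives a call, collates the needed code and data, delegates execution."""
    code = FUNCTIONS[call["function"]]
    return executor(code, call["args"])

print(orchestrator({"function": "add", "args": [3, 5]}))  # prints 8
```

The key design point the sketch illustrates: the orchestrator never runs contributor code itself; it only decides *what* to run and hands that to the locked-down executor.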

The orchestrator will also rely heavily on caching: if we have just calculated the sum of 3 and 5, and someone else asks for that too, we’ll just take it from the cache instead of re-running the computation. We'll also cache function definitions and inputs within the orchestrator, so that if someone asks for the sum of 3 and 6 we can answer more swiftly.
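The result-caching behaviour described above amounts to memoizing calls on the function name and arguments. A minimal sketch, assuming a simple in-memory cache (the production orchestrator's cache design may differ):

```python
from functools import lru_cache

call_count = 0  # counts actual evaluations, so cache hits are visible

@lru_cache(maxsize=None)
def evaluate(function_name: str, args: tuple):
    """Evaluate a call; identical (name, args) pairs are served from cache."""
    global call_count
    call_count += 1
    if function_name == "add":
        return sum(args)
    raise KeyError(function_name)

evaluate("add", (3, 5))  # computed: call_count becomes 1
evaluate("add", (3, 5))  # cache hit: call_count stays 1
evaluate("add", (3, 6))  # new arguments, computed: call_count becomes 2
```

Caching the function definitions and inputs separately (as the text describes) is what lets even the *miss* for “3 and 6” be faster: the definition of “sum” is already at hand and only the new arguments need processing.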

The current working top-level architectural model for how Wikifunctions will work.

But this is just our production evaluation engine. We are hoping that several other evaluation engines will be built, like the GraalVM-based engine on which Lucas Werkmeister is already working.

In order to support the development of evaluation engines, we are working on a test suite that other evaluation engines can use for conformance testing. If you’re interested in joining that effort, drop a note on this task. The test suite, as well as the common code used by several parts of our system to handle ZObjects, will live in a new library repository, function-schemata.
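A conformance suite like this typically boils down to a shared list of serialized calls with expected results, which any candidate engine can be run against. The shape below is purely illustrative; it is not the actual format used by function-schemata.

```python
# Hypothetical shared test cases: a serialized call plus the expected result.
TEST_CASES = [
    {"call": {"function": "add", "args": [3, 5]}, "expected": 8},
    {"call": {"function": "add", "args": [3, 6]}, "expected": 9},
]

def run_conformance(evaluate):
    """Runs every shared test case against a candidate evaluation engine.

    `evaluate` is any callable mapping a serialized call to a result, so
    independently built engines can all be checked against the same cases.
    """
    for case in TEST_CASES:
        got = evaluate(case["call"])
        assert got == case["expected"], f"{case['call']} -> {got}"
    return True

# Any engine exposing evaluate(call) -> result can be checked:
toy_engine = lambda call: sum(call["args"]) if call["function"] == "add" else None
print(run_conformance(toy_engine))  # prints True
```

Because the cases are plain data rather than code, the same suite can exercise a JavaScript engine, a GraalVM-based engine, or any other implementation without modification.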

This development has been a bit out of order from the original plan we conceived last August.

In fact, we are thinking of changing the order of some of the developments, and we expect to do significant parts of it in parallel. Having the evaluation engine available earlier makes it possible to start the security and performance reviews in a timely manner, and to validate our architectural plans.

Originally, we had planned to introduce an evaluation engine that understands a programming language only in Phase θ, and to support a single programming language (JavaScript) until after launch. We have now moved that work much earlier, and we plan to support at least two programming languages right at launch. This change will help us avoid the pitfall of getting stuck with a design that works for only one programming language. Having two or more will better commit us to a multi-environment project, in terms of programming languages.

In other news

The deadline for submissions to the Wikifunctions logo concept is coming closer: submissions are accepted until Tuesday, 23 February, followed by a two-day discussion before the voting on which concept to develop starts on Thursday, 25 February. Currently, we have 17 submissions (and some additional variants).

There have been a number of talks and external articles which may be of interest:

We gave a presentation at the Graph Technologies in the Humanities: 2021 Virtual Symposium. You can watch our pre-recorded presentation for the symposium. It was followed by ample time to discuss the project; unfortunately, the discussion itself will not be published.

We also presented at the NSF Convergence Accelerator Series. The talk is very similar to the previous talk, but this recording includes the discussion following the talk.

Issue 322 of the Tool Box Journal: A Computer Journal for Translation Professionals reports on Abstract Wikipedia, Wikifunctions, and Wikidata. I found it very interesting to see how the projects are perceived by professional translators, and their comparison of Wikidata to a termbase.

The German magazine Der Spiegel published an interview with Denny about Abstract Wikipedia. They also published a more comprehensive article in their 16 January print issue, which is available in their archive for subscribers. Both the interview and the article are in German.