# Abstract Wikipedia/Semantics of Wikifunctions

This document will provide a specification of the evaluation and validation rules of Wikifunctions.

## Objective

The system behind Wikifunctions, known internally as “the function model”, is already described on this page. However, that description does not go into enough detail about how objects are evaluated and validated, and the missing parts are not easy to guess, as the model is quite atypical. It is therefore important to clarify these ambiguities for the development of Wikifunctions. The end result should be a shared understanding of the meaning of all the functions that our users will create.

### Goals

• Clarify the semantics of Wikifunctions and understand what properties the model has.
• Have something we can refer to for what our implementation should be doing (instead of relying on “the implementation as the definition”).
• Uncover potential problems in the model, and propose potential solutions.
• Determine whether the model fits the goals of Wikifunctions and Abstract Wikipedia.

In order to achieve this, we need to:

1. Understand what the original design had in mind and what the current implementation is doing.
2. Identify cases that are conflicting and cases that are underspecified.
3. Reach a common agreement on the correct intended behavior for those cases.

The result is a description of the evaluation and validation rules of Wikifunctions, which this document attempts to provide.

### Intended audience

This document is intended for the Wikifunctions developers, programming language enthusiasts who are curious about the inner workings of Wikifunctions, and potentially power users who need to understand some edge cases. Normal users do not need to understand the function model to be able to intuitively use Wikifunctions through the UI. Some use-cases like higher-order functions are also currently (as of September 2022) not possible through the UI and can only be achieved by direct interaction with the backend, which requires a good understanding of the model.

The document assumes familiarity with the function model and its use of ZObjects (syntax, normal vs canonical form, built-in types, ...). These are covered in some detail in the Abstract Wikipedia pages.

We will be using a mixture of natural language and formal notations to describe the semantics of the model. Although not required, it would be useful to have some familiarity with operational semantics, the branch of programming language theory that describes the steps in the evaluation of programs. The basics are well documented online; see for example this comprehensive introduction: CSCI 131 Fall '17.

## Background

Wikifunctions allows users to define and run functions in various supported programming languages (currently Python and JavaScript). These are run by the evaluator service. On top of that, the system allows the user to define functions by composing other functions together in a language agnostic way. These are run by the orchestrator service.

This extra composition layer is in fact an entirely distinct language on its own. It is a complex system, invented for the purpose of Wikifunctions. As such, it is important to understand this language and give it a proper definition.

The syntax of the language is given by ZObjects. These represent pieces of data as well as programs (function calls) that should be evaluated to produce a result. The meaning of what these objects represent and what values they produce is the semantics of the language. Together, the syntax and the semantics define the system as a whole, which is called “the function model”.

While the syntax is covered to a good degree in Abstract Wikipedia pages, the semantics is much less clear. The function model has atypical features that are not commonly found in other languages, and it is not easy to guess how they work. In this document, we attempt to clarify what various constructs of the system mean and how they are evaluated.

### Assumptions

The model was originally designed to satisfy the following requirements:

• Purely functional: Functions should have no side-effects and should always return the same value. They should be referentially transparent.
• First-class functions: It should be possible to take functions as arguments, define local/anonymous functions, and return functions as results. In particular, it should be possible to do currying.
• Lexical scoping: A variable (called argument reference in the model) should evaluate to the original value it had when it was first defined. There should be no additional capture through dynamic scoping.
• Turing complete: While this is easy to achieve (we allow global functions to call themselves), it means in particular that we should not restrict the computational power of the language through typing.

## Quick introduction to operational semantics

Operational semantics describe the sequence of steps in the evaluation of a program. In this section we give a brief overview of what operational semantics look like for the lambda calculus. Readers familiar with it can skip ahead to the Evaluation section.

### Small-step rules

Small-step semantics define the evaluation in terms of single step relations ${\displaystyle M\to M'}$ , meaning that the term ${\displaystyle M}$  reduces in a single step to ${\displaystyle M'}$ . Full evaluation is the result of a sequence of such steps ${\displaystyle M\to ^{*}V}$  where ${\displaystyle V}$  is a value which cannot be evaluated further.

In ${\displaystyle \lambda }$ -calculus, evaluation rules are usually defined in terms of substitution:

${\displaystyle (\lambda x.\ M)\ N\to \ M[x\ :=\ N]}$

${\displaystyle {\frac {M\to M'}{M\ N\to M'\ N}}}$

The first is a rule without any assumption, and the second is a rule with one assumption, written above the horizontal line. These read as:

• The application of an abstraction ${\displaystyle \lambda x.\ M}$  to an argument ${\displaystyle N}$  reduces in a single step to the body ${\displaystyle M}$  where ${\displaystyle x}$  is replaced by ${\displaystyle N}$ .
• If the function part ${\displaystyle M}$  of an application ${\displaystyle M\ N}$  reduces to ${\displaystyle M'}$ , then the application reduces to ${\displaystyle M'\ N}$ . In other words, we first evaluate the left subterm of an application recursively. This is necessary in sequences of applications (i.e. currying), where the function part must be evaluated first before we can evaluate the function call.
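
The two small-step rules above can be sketched as a minimal interpreter. This is an illustrative sketch, not part of the Wikifunctions implementation: terms are plain Python tuples, and capture-avoiding renaming is omitted for brevity.

```python
# Terms: ("var", name), ("lam", name, body), ("app", fun, arg).

def subst(term, name, value):
    """M[x := N]: replace free occurrences of `name` in `term` by `value`.
    (Capture-avoiding renaming is omitted for brevity.)"""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        # An inner binding of the same name shadows the outer one.
        if term[1] == name:
            return term
        return ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """One small step M -> M', or None if no rule applies."""
    if term[0] == "app":
        fun, arg = term[1], term[2]
        if fun[0] == "lam":                      # beta rule
            return subst(fun[2], fun[1], arg)
        reduced = step(fun)                      # evaluate the function part first
        if reduced is not None:
            return ("app", reduced, arg)
    return None

def evaluate(term):
    """M ->* V: iterate small steps until no rule applies."""
    while (nxt := step(term)) is not None:
        term = nxt
    return term

# (\x. x) (\y. y)  ->  \y. y
identity = ("lam", "y", ("var", "y"))
print(evaluate(("app", ("lam", "x", ("var", "x")), identity)))
```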

### Big-step rules

The previous rules are so called “small-step” semantics because they define evaluation one step at a time. Evaluation can also be described with big-step semantics, which define evaluation all the way down to a value in one relation.

${\displaystyle {\frac {M\Downarrow \lambda x.\ M_{1}\ \ \ \ \ M_{1}[x\ :=\ N]\Downarrow V}{M\ \ N\Downarrow V}}}$

This reads as: “if ${\displaystyle M}$  evaluates to the function ${\displaystyle \lambda x.\ M_{1}}$  and if ${\displaystyle M_{1}}$ , where ${\displaystyle x}$  is replaced by ${\displaystyle N}$ , evaluates to the value ${\displaystyle V}$  then the application ${\displaystyle M\ N}$  evaluates to ${\displaystyle V}$ ”.

Alternatively, the rule can be seen as a relation between input and output of an interpreter: “to evaluate the application ${\displaystyle M\ N}$ , first evaluate ${\displaystyle M}$ , which gives ${\displaystyle \lambda x.\ M_{1}}$ , then evaluate ${\displaystyle M_{1}}$  where ${\displaystyle x}$  is replaced by ${\displaystyle N}$ , which gives the result ${\displaystyle V}$ ”.
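
The big-step reading maps directly onto a recursive interpreter. As a sketch (same illustrative tuple representation as before, capture-avoiding renaming omitted):

```python
# Terms: ("var", x), ("lam", x, body), ("app", f, a).

def subst(term, name, value):
    """M[x := N] (capture-avoiding renaming omitted for brevity)."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        return term if term[1] == name else ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def evaluate(term):
    """M ⇓ V, in one recursive pass rather than one step at a time."""
    if term[0] != "app":
        return term                          # variables and lambdas are values here
    fun = evaluate(term[1])                  # M ⇓ \x. M1
    assert fun[0] == "lam"
    return evaluate(subst(fun[2], fun[1], term[2]))   # M1[x := N] ⇓ V

# (\x. x x) (\y. y)  ⇓  \y. y
print(evaluate(("app", ("lam", "x", ("app", ("var", "x"), ("var", "x"))),
               ("lam", "y", ("var", "y")))))
```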

The two concepts of small-step and big-step semantics are linked together as one would expect: big-step evaluation is the result of small-step evaluation repeated until we get a final value.

${\displaystyle {\frac {M\to ^{*}V}{M\ \Downarrow V}}}$

(where ${\displaystyle \to ^{*}}$  is a sequence of zero or more ${\displaystyle \to }$  steps)

Equivalently:

${\displaystyle {\frac {M\nrightarrow }{M\Downarrow M}}}$

${\displaystyle {\frac {M\to M'\ \ \ \ \ M'\Downarrow V}{M\Downarrow V}}}$

(where ${\displaystyle \nrightarrow }$  means there are no more ${\displaystyle \to }$  steps that can be applied)

### Environments

Alternatively, rules can be equivalently defined in terms of evaluation environments (a.k.a. scopes) instead of substitutions: we evaluate a term ${\displaystyle M}$  in an environment ${\displaystyle \sigma }$  that maps free variables to their value. The evaluation step becomes a relation on pairs of terms and their environment, ${\displaystyle (\sigma ;M)\to (\sigma ';M')}$ , meaning that the term ${\displaystyle M}$  in the scope ${\displaystyle \sigma }$  reduces in a single step to the term ${\displaystyle M'}$  in the scope ${\displaystyle \sigma '}$ . A pair of a term and its scope is called a closure, and ${\displaystyle \sigma }$  is itself a mapping from variables to closures. The rules for the ${\displaystyle \lambda }$  calculus can thus be written as:

${\displaystyle {\frac {(\sigma ;M)\Downarrow (\sigma _{1};\lambda x.\ M_{1})\ \ \ \ \ (\sigma _{1}[x\ :=\ (\sigma ;N)];M_{1})\Downarrow V}{(\sigma ;M\ N)\Downarrow V}}}$

${\displaystyle {\frac {\ }{(\sigma ;\lambda x.\ M)\Downarrow (\sigma ;\lambda x.\ M)}}}$

${\displaystyle {\frac {\sigma [x]=(\sigma _{x};N)\ \ \ \ \ (\sigma _{x};N)\Downarrow V}{(\sigma ;x)\Downarrow V}}}$

(where ${\displaystyle \sigma _{1}[x\ :=\ (\sigma _{2};M)]}$  means ${\displaystyle \sigma _{1}}$  is extended with ${\displaystyle x}$  mapping to the closure ${\displaystyle (\sigma _{2};M)}$ , with newer additions of the same variable shadowing older ones)

Note that the results of evaluation here are closures. If the result of an evaluation is a ${\displaystyle \lambda }$ -abstraction, we do not evaluate inside the function, but instead keep the environment with it.

Notice also that, in an application, we add to the scope the argument ${\displaystyle N}$  together with its original scope ${\displaystyle \sigma }$ , and that we restore that scope when retrieving the argument ${\displaystyle x}$ . The precise bookkeeping of scopes is crucial to get correct results and avoid issues like dynamic scoping. An easy way to think about this is that each term should always be attached to its own environment, which captures the values of all of its free variables.

Environments can be shared but are immutable and they can be thought of as a persistent list. They can be extended to bind more variables for new terms but that does not modify the environment of terms that have been previously attached. Implementations can still use interior mutability in practice, for example to cache evaluation results, but the exterior immutability provides invariants that simplify reasoning about environments and their implementation.
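
The three environment rules above can be sketched as a closure-based evaluator. This is an illustrative sketch (terms as Python tuples, environments as immutable-by-convention dicts), not the Wikifunctions implementation:

```python
# Terms: ("var", x), ("lam", x, body), ("app", f, a).
# A closure is a pair (env, term); env maps variable names to closures.

def evaluate(env, term):
    """(env; term) ⇓ closure."""
    kind = term[0]
    if kind == "lam":
        # A lambda is already a value: keep its environment with it.
        return (env, term)
    if kind == "var":
        # Retrieve the closure bound to x and restore its own saved scope.
        var_env, var_term = env[term[1]]
        return evaluate(var_env, var_term)
    # Application: evaluate the function part, then the body in an extended scope.
    fun_env, fun = evaluate(env, term[1])
    assert fun[0] == "lam"
    # Bind the argument together with the *caller's* scope (call-by-name);
    # copying the dict keeps previously shared environments untouched.
    new_env = {**fun_env, fun[1]: (env, term[2])}
    return evaluate(new_env, fun[2])

# (\x. \y. x) A B evaluates to A's closure, not B's: lexical scoping.
k = ("lam", "x", ("lam", "y", ("var", "x")))
a = ("lam", "a", ("var", "a"))
b = ("lam", "b", ("var", "b"))
result_env, result = evaluate({}, ("app", ("app", k, a), b))
print(result)
```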

#### Advantages of using environments

While this definition might appear more complex, it is closer to how an interpreter for the lambda calculus would be implemented, as using environments is more efficient than substitutions. Evaluation using substitution requires traversing the same term multiple times, while evaluation using environments doesn’t (and in fact it can be shown that it only ever needs to operate on terms directly written by the user).

Another advantage is that there is no need to worry about alpha-renaming and free variable capture because there is no substitution or evaluation inside the body of functions.

### Properties

All these different approaches to defining the operational semantics of the lambda calculus are equivalent; it can be proved that if a term ${\displaystyle M}$  evaluates to a value ${\displaystyle V}$  according to one of the semantics above then it also evaluates to ${\displaystyle V}$  according to the others.[1] This is useful because it shows for example that the implementation of an interpreter with evaluation environments gives the correct result and does not change the meaning of programs.

Using those definitions, one can go further and show various properties of the language. For example, they can be used to prove progress (valid terms do not get stuck during evaluation) and type preservation (valid terms evaluate to a value of the same type), which together provide type safety. While we will not go as far with the function model, it goes to show the potential of taking a rigorous approach.

### Summary

The concepts presented in this section are very useful for describing the semantics of programming languages in a precise and succinct way. They are standard in programming language theory and are well documented online. See for example https://cs.pomona.edu/~michael/courses/csci131f17/lec/Lec16.html for a comprehensive introduction.

## Evaluation

We now try to define similar evaluation rules for Wikifunctions. The implementation mixes small-step (e.g. in the resolve() method) and big-step (e.g. in the execute() method) rules, so we will freely mix and match both styles. We also sometimes use normal form instead of canonical form as a short-hand in certain parts to save space, but unless explicitly stated otherwise, the rules are defined on objects in canonical form.

### Use of types in evaluation

The main thing to note about evaluation in Wikifunctions is that it is guided by the Z1K1/type field of ZObjects. These types are different from the traditional notion of types. In particular, there are three special types – Z7/Function call, Z9/Reference, and Z18/Argument reference – that guide the evaluation. For example, if a ZObject has type Z7/Function call, then it represents a function call and will evaluate to the result of that call (which might have a different type).

This is fundamentally different from traditional programming languages, where function calls and variables are syntactic constructs. They do not usually have their own type; a function call takes a function of type ${\displaystyle A\to B}$  and an argument of type ${\displaystyle A}$  but the whole construct has type ${\displaystyle B}$ ; a reference to an object of type ${\displaystyle A}$  has type ${\displaystyle A}$ , etc. This is the entire premise behind type theory and type checking.

The current representation as ZObjects leads to some weird situations and questions: can a function expect an argument of type Z7/Function call or of type Z18/Argument reference? What would that even mean? Doesn’t that break confluence?

One could say that we forbid such objects. In fact it is easier to think of these as special types that are only allowed to appear as literal references in the Z1K1/type field of ZObjects.

### Environments

Evaluation takes place in a global scope ${\displaystyle \Sigma }$  mapping Zids to ZObjects and a local scope ${\displaystyle \sigma }$  mapping keys to ZObjects. The global scope corresponds to the objects stored on MediaWiki and the local scope corresponds to the value of function arguments. The global scope is constant throughout the evaluation of a single object, while the local scope is extended as function calls are evaluated.

### Values

The first thing we need to answer before writing down the evaluation rules is, "What are the values of the language?" What do objects look like after they have been evaluated? These are called normal forms in programming language theory, but to avoid confusion with the notion of normal form syntax of ZObjects, we will just call these values.

As with lambda-calculus, we have to differentiate between different forms of values corresponding to the different stages of evaluation.

The first form we define corresponds to reduction at the outermost layer of a ZObject, and we will call this head-value. A ZObject is a head-value when it is either:

1. A string literal, or
2. ${\displaystyle \{Z1K1/type\ :\ O_{type},\ k_{1}\ :\ O_{1},\ldots ,\ k_{n}\ :\ O_{n}\}}$
where ${\displaystyle O_{type}}$  is not Z9/Reference, Z18/Argument reference, or Z7/Function call

The second form we define corresponds to reduction in the subobjects, to ensure that all fields have been evaluated before we return the result to the user, and we call this full-value. An object is in full value form when it is either:

1. A string literal, or
2. ${\displaystyle \{Z1K1/type\ :\ O_{type},\ k_{1}\ :\ O_{1},\ldots ,\ k_{n}\ :\ O_{n}\}}$
where
a) ${\displaystyle O_{type}}$  is not Z9/Reference, Z18/Argument reference, or Z7/Function call
b) each other field ${\displaystyle O_{i}}$  is a full-value, except if:
i) ${\displaystyle k_{i}}$  is an identity field (e.g. Z4K1/identity (Type), Z8K5/identity (Function), Z40K1 (Boolean))

Note that we do not require the type field itself to be evaluated. Doing so can lead to many expansions, resulting in very big objects that are not very readable. Therefore, we only require that the type field does not trigger reduction of the outer object. This allows us to return objects where the type field is not fully evaluated, while retaining flexibility as to how evaluated it is (from partially to not at all).

The 2.b.i condition is currently needed for technical reasons, but we might remove it in the future if we end up not relying on identity fields.
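
The two definitions above can be sketched as predicates over ZObjects modelled as Python strings and dicts. The ZIDs follow the document; the representation itself is illustrative:

```python
SPECIAL_TYPES = {"Z9", "Z7", "Z18"}          # Reference, Function call, Argument reference
IDENTITY_KEYS = {"Z4K1", "Z8K5", "Z40K1"}    # identity fields exempt from rule 2.b

def is_head_value(obj):
    """True if no head-reduction rule applies at the outermost layer."""
    if isinstance(obj, str):
        return True
    type_field = obj.get("Z1K1")
    return not (isinstance(type_field, str) and type_field in SPECIAL_TYPES)

def is_full_value(obj):
    """A head value whose fields, except Z1K1/type and identity fields, are full values."""
    if isinstance(obj, str):
        return True
    if not is_head_value(obj):
        return False
    return all(is_full_value(value) for key, value in obj.items()
               if key != "Z1K1" and key not in IDENTITY_KEYS)

# A type given by an (unevaluated) function call does not block head-value-ness:
list_type = {"Z1K1": "Z7", "Z7K1": "Z881", "Z881K1": "Z6"}
pending = {"Z1K1": "Z7", "Z7K1": "Z10000", "Z10000K1": "13"}      # unevaluated call
lst = {"Z1K1": list_type, "K1": pending, "K2": {"Z1K1": list_type}}
print(is_head_value(lst), is_full_value(lst))   # True False
```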

For example, the following ZObject is a full-value representing the list containing the element “42”:

{
  Z1K1/type: {
    Z1K1/type: Z7/Function call
    Z7K1/function: Z881/Typed list
    Z881K1/type: Z6/String
  }
  K1: "42"
  K2: {
    Z1K1/type: {
      Z1K1/type: Z7/Function call
      Z7K1/function: Z881/Typed list
      Z881K1/type: Z6/String
    }
  }
}


The following is a head-value but not a full-value, because the inner field K1 is not a full-value: it is still a function call that needs to be evaluated:

{
  Z1K1/type: {
    Z1K1/type: Z7/Function call
    Z7K1/function: Z881/Typed list
    Z881K1/type: Z6/String
  }
  K1: {
    Z1K1/type: Z7/Function call
    Z7K1/function: Z10000
    Z10000K1/first number: "13"
    Z10000K2/second number: "29"
  }
  K2: {
    Z1K1/type: {
      Z1K1/type: Z7/Function call
      Z7K1/function: Z881/Typed list
      Z881K1/type: Z6/String
    }
  }
}


### Evaluation rules

#### References and argument references

The rules for references and argument references can be described as follows:

${\displaystyle {\frac {\Sigma [z]=O_{z}}{\Sigma ;\sigma ;\{Z1K1/type\ :\ Z9/Reference,\ Z9K1/Reference\ id\ :\ z\}\to _{ref}\ \Sigma ;\emptyset ;O_{z}}}}$

${\displaystyle {\frac {\sigma [x]=(\sigma _{x};O_{x})}{\Sigma ;\sigma ;\{Z1K1/type\ :\ Z18/Argument\ reference,\ Z18K1/key\ id\ :\ x\}\to _{arg}\ \Sigma ;\sigma _{x};O_{x}}}}$

(Here Z18/Argument reference, in canonical notation to save space, stands for Z1K1/type : Z9/Reference, Z9K1/reference id : Z18/Argument reference)

• “If the global environment ${\displaystyle \Sigma }$  maps the ZID ${\displaystyle z}$  to the ZObject ${\displaystyle O_{z}}$ , then evaluating the Z9/Reference ${\displaystyle z}$  in a local environment ${\displaystyle \sigma }$  reduces to evaluating ${\displaystyle O_{z}}$  in the empty local environment”.
• Alternatively: “to evaluate a Z9/Reference ${\displaystyle z}$  in a local environment ${\displaystyle \sigma }$ , look up the ZObject ${\displaystyle O_{z}}$  that ${\displaystyle z}$  maps to in the global environment, and proceed to evaluating ${\displaystyle O_{z}}$  in the empty local environment”.
• “If the local environment ${\displaystyle \sigma }$  maps the argument key ID ${\displaystyle x}$  to the closure ${\displaystyle (\sigma _{x};O_{x})}$ , then evaluating the argument reference ${\displaystyle x}$  in the local environment ${\displaystyle \sigma }$  reduces to evaluating ${\displaystyle O_{x}}$  in the local environment ${\displaystyle \sigma _{x}}$ ”.
• Alternatively: “to evaluate a Z18/Argument reference ${\displaystyle x}$ , look up the closure ${\displaystyle (\sigma _{x};O_{x})}$  that ${\displaystyle x}$  maps to in the local environment, and proceed to evaluating ${\displaystyle O_{x}}$  in the local environment ${\displaystyle \sigma _{x}}$ ”.

Using the correct environment in the right place is crucial to getting the correct result. Notice how we clear the local environment when we dereference a global reference and restore the original environment when we dereference an argument reference. This has been the source of bugs in the past, and these rules help us make sense of the problem and fix and prevent such bugs (see task T309195 and its subtasks for example).
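
The scope bookkeeping described above can be sketched as a single head-reduction step. ZObjects are modelled as Python strings and dicts; the ZIDs follow the document, while `GLOBAL`'s contents and the closure representation are illustrative.

```python
GLOBAL = {"Z41": {"Z1K1": "Z40", "Z40K1": "Z41"}}   # e.g. the Boolean true object

def step(local, obj):
    """One head-reduction step on (local; obj); returns (local', obj') or None."""
    if not isinstance(obj, dict):
        return None
    if obj.get("Z1K1") == "Z9":                  # Z9/Reference
        # Rule ->ref: look up the ZID, continue in the *empty* local scope.
        return {}, GLOBAL[obj["Z9K1"]]
    if obj.get("Z1K1") == "Z18":                 # Z18/Argument reference
        # Rule ->arg: retrieve the closure and *restore* its saved scope.
        saved_scope, saved_obj = local[obj["Z18K1"]]
        return saved_scope, saved_obj
    return None

# An argument is bound to a closure carrying its own scope:
scope = {"Z10001K1": ({}, {"Z1K1": "Z9", "Z9K1": "Z41"})}
local, obj = step(scope, {"Z1K1": "Z18", "Z18K1": "Z10001K1"})
print(obj)           # the saved closure's object: a Z9 reference to Z41
local, obj = step(local, obj)
print(local, obj)    # local scope cleared; obj is the global Z41 object
```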

##### Literal reference types vs expanded types

See how we use the type field to determine whether an object is a reference or an argument reference. Note that currently only literal ZIDs are accepted as the type in these rules. For example, Z18/Argument reference, which is itself a reference, must not be expanded to its definition. Otherwise the argument will not be resolved and the evaluation would get stuck. This might also have an impact on generics.

This also means that we cannot evaluate arbitrarily inside objects; the following simple property does not hold, as it would break confluence:

${\displaystyle {\frac {\Sigma ;\sigma ;O_{1}\to O'_{1}}{\Sigma ;\sigma ;\{x_{1}\ :\ O_{1},\ldots ,x_{n}\ :\ O_{n}\}\to \{x_{1}\ :\ O'_{1},\ldots ,x_{n}\ :\ O_{n}\}}}}$

Would it be possible to have different rules such that the above holds? This might be possible by defining an equivalence relation on objects:

${\displaystyle {\frac {O_{type}\equiv Z9/Reference\ \ \ \ \ \Sigma [z]=O_{z}}{\Sigma ;\sigma ;\{Z1K1/type\ :\ O_{type},\ Z9K1/Reference\ id\ :\ z\}\to _{ref}\ \Sigma ;\emptyset ;O_{z}}}}$

${\displaystyle {\frac {O_{type}\equiv Z18/Argument\ reference\ \ \ \ \ \sigma [x]=(\sigma _{x};O_{x})}{\Sigma ;\sigma ;\{Z1K1/type\ :\ O_{type},\ Z18K1/key\ id\ :\ x\}\to _{arg}\ \Sigma ;\sigma _{x};O_{x}}}}$

Because continuously expanding references would not terminate, this relation would need to be defined in a recursive way, comparing the left and right side, expanding them as needed if they don’t match and recursively comparing each field. This problem might not be decidable (Post correspondence problem?), in which case this strategy would only be an incomplete heuristic, but it would be similar to how types are compared in systems that allow computations in types.

#### Function calls

The rule for function calls can roughly be described as

${\displaystyle {\frac {\begin{cases}\Sigma ;\sigma ;O_{function}\to _{head}^{*}\sigma ';O'_{function}=\{Z1K1/type\ :\ Z8/Function,\ldots ,\ Z8K4/implementations\ :\ O_{implementations}\}\\\Sigma ;\sigma ';O_{implementations}\to _{head}^{*}\sigma '';[O_{implementation_{1}},\ldots ,\ O_{implementation_{m}}]\\\Sigma ;\sigma '';O_{implementation_{k}}\ (x_{1}\ :=\ (\sigma ;O_{x_{1}}),\ldots ,\ x_{n}\ :=\ (\sigma ;O_{x_{n}}))\ \Downarrow \ V\\\end{cases}}{\Sigma ;\sigma ;\{Z1K1/type\ :\ Z7/Function\ call,\ Z7K1/function\ :\ O_{function},\ x_{1}\ :\ O_{x_{1}},\ldots ,\ x_{n}\ :O_{x_{n}}\}\to _{call}\ V}}}$

We first evaluate the Z7K1/function field down to a Z8/Function. This is needed because it can be a reference, or even itself a function application in the case of currying. We then resolve the Z8K4/implementations field down to a list of Z14/Implementation. In practice this requires chaining multiple evaluations, as lists are represented as linked lists of heads and tails, but we do not need to go over that in detail here. Finally, an implementation is chosen and executed with the arguments of the function call added to the scope. We write this ${\displaystyle O_{implementation_{k}}(x_{1}\ :=\ (\sigma ;O_{x_{1}}),\ldots ,\ x_{n}\ :=\ (\sigma ;O_{x_{n}}))}$  and explain it in the next section.

The way the implementation ${\displaystyle k}$  is chosen is not defined for now. Eventually it will take into account the results of testers as well as performance metrics.

As noted previously, there is no need to worry about alpha-renaming or free variable capture because we are using environments so there is no substitution or evaluation inside the body of functions.

Once again, it is important to use the correct scope in the correct context. Notice that evaluating the Z7K1/function field results in a modified environment ${\displaystyle \sigma '}$ , and it is in that environment that the list of implementations must be resolved, as they may refer to free variables that have been added there. Similarly, resolving the list of implementations results in each implementation having its own scope, and it is to that scope that we should add the arguments of the function.

#### Implementations

A Z14/implementation can be defined in one of three ways:

• By composition (the Z14K2/composition field is set)
• In a supported programming language (the Z14K3/code field is set)
• As a builtin (the Z14K4/builtin field is set)

##### Compositions

A composition implementation defines the body of a function as the ZObject that it should return, possibly referring to the arguments of the function through Z18/Argument reference.

This is where local scopes come into play. To evaluate such an implementation, we just add the arguments to the current scope, and evaluate the body.

${\displaystyle {\frac {\Sigma ;\sigma [x_{1}\ :=\ (\sigma _{1};O_{x_{1}}),\ldots ,\ x_{n}\ :=\ (\sigma _{n};O_{x_{n}})];O_{composition}\ \Downarrow \ V}{\Sigma ;\sigma ;\{Z14K2/composition\ :\ O_{composition}\}(x_{1}\ :=\ (\sigma _{1};O_{x_{1}}),\ldots ,\ x_{n}\ :=\ (\sigma _{n};O_{x_{n}}))\ \Downarrow \ V}}}$

##### Code

Code implementations are defined in a supported programming language such as Python or JavaScript. The evaluation of such an implementation is done by delegating to the interpreter or compiler of that language.

Arguments are serialized to the target language (an operation we write as ${\displaystyle [[.]]}$ ), the code is executed, and the result is deserialized back to a ZObject in Wikifunctions (an operation we write as ${\displaystyle [.]}$ ). As such, it is a form of compilation or denotational semantics.

${\displaystyle {\frac {\Sigma ;\sigma ;O_{code}\Downarrow s_{code}\ \ \ \Sigma ;\sigma _{i};O_{x_{i}}\Downarrow V_{i}\ \ \ \forall i=1\ldots n\ \ \ [[[s_{code}]]([[V_{1}]],\ldots ,[[V_{n}]])]\Downarrow V}{\Sigma ;\sigma ;\{Z14K3/code\ :\ O_{code}\}(x_{1}\ :=\ (\sigma _{1};O_{x_{1}}),\ldots ,x_{n}\ :=\ (\sigma _{n};O_{x_{n}}))\Downarrow V}}}$

For this, arguments need to be fully evaluated. No closures can remain, and as such, some types cannot be represented, such as functions. We can fail if we ever encounter such an argument.

The serialization of basic types such as strings is straightforward, but more complex types are currently just translated as records representing the ZObjects. This is probably not very usable in its current form and there is probably a lot of room for improvements there (libraries? custom serialization/deserialization?).
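
As an illustration of the ${\displaystyle [[.]]}$ direction, here is a minimal sketch assuming ZObjects are modelled as Python strings and dicts and typed lists use the head/tail (K1/K2) layout described earlier. The function name and the record fallback are illustrative, not the actual orchestrator serializer.

```python
def to_native(obj):
    """Serialize a ZObject to a native Python value (illustrative sketch)."""
    if isinstance(obj, str):
        return obj                       # Z6/String maps to a native string
    if isinstance(obj, dict) and isinstance(obj.get("Z1K1"), dict) \
            and obj["Z1K1"].get("Z7K1") == "Z881":
        # Typed list: a linked list of head (K1) and tail (K2).
        items = []
        while "K1" in obj:
            items.append(to_native(obj["K1"]))
            obj = obj["K2"]
        return items
    return obj                           # other types: passed through as a record

list_type = {"Z1K1": "Z7", "Z7K1": "Z881", "Z881K1": "Z6"}
zlist = {"Z1K1": list_type, "K1": "42", "K2": {"Z1K1": list_type}}
print(to_native(zlist))   # ['42']
```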

##### Builtins

Built-in implementations are just entry points for the interpreter to do arbitrary computations. They allow the definition of basic building blocks that would be hard, impossible, or too inefficient to define using composition alone.

We can define rules for each of the builtin implementations (for example, the builtin implementation for the Z802/if function evaluates the first argument, and if that argument is Z41/true, reduces to the second argument, otherwise, it reduces to the third argument), but we will not go into details here.
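
The Z802/if example above can be sketched as follows. `builtin_if`, `evaluate`, and the closure shape are simplified stand-ins, not the real orchestrator API; only the ZIDs (Z802, Z40, Z41, Z42) come from the document.

```python
def evaluate(closure):
    """Stand-in for full evaluation of a (scope, ZObject) closure."""
    scope, obj = closure
    return obj   # illustration only: assume the object is already a value

def builtin_if(cond, then_branch, else_branch):
    """Z802/if: evaluate the condition, then reduce to exactly one branch."""
    cond_value = evaluate(cond)
    # Z41/true and Z42/false are the two Boolean instances.
    chosen = then_branch if cond_value["Z40K1"] == "Z41" else else_branch
    return evaluate(chosen)   # the other branch is never touched: laziness matters

true_obj = {"Z1K1": "Z40", "Z40K1": "Z41"}
print(builtin_if(({}, true_obj), ({}, "then"), ({}, "else")))
```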

#### Generic types

A generic type is represented by a function that returns a type. For example, the type of a list of strings is given by the application of a function Z881/Typed list to the type Z6/String, which returns the definition of the type of lists of strings.

This raises the question: what happens if the returned type object is a Z7/Function call, Z9/Reference, or Z18/Argument reference? Should it trigger the corresponding evaluation rule? The same question comes up if the type is a Z9/Reference to one of these special types, or a Z18/Argument reference that resolves to one of them.

An easy answer to this would be no: the special type should be there from the start. However, this breaks confluence. While one could say that this is an improper use of ZObjects, the result does come off as surprising. In particular, this effect is used in one of the proposals for generic function types.

An alternative could be to resolve the type field and trigger the corresponding rule if it resolves to Z7/Function call, Z9/Reference, or Z18/Argument reference.

${\displaystyle {\frac {\Sigma ;O_{Type}\to O'_{Type}}{\Sigma ;\{Z1K1/type\ :\ O_{Type},\ k_{1}\ :\ O_{k_{1}},\ldots ,\ k_{n}\ :\ O_{k_{n}}\}\to \{Z1K1/type\ :\ O'_{Type},\ k_{1}\ :\ O_{k_{1}},\ldots ,\ k_{n}\ :\ O_{k_{n}}\}}}}$

This can become a bit tricky. In particular:

• We need to modify the definition of values to disallow non-resolved Z1K1/type fields.
• Z7/Function call, Z9/Reference, and Z18/Argument reference are themselves references that could be expanded, so we need to prevent that, for example by not expanding builtin types.
• The Z1K1/type field might now be more expanded than we would have liked.
• We have also deliberately left local scopes out for readability, but they should be carefully handled as well.

For example, here is an alternate definition of head values that could work for this purpose:

1. A string literal, or
2. ${\displaystyle \{Z1K1/type\ :\ O_{Type},\ k_{1}\ :\ O_{k1},\ldots ,\ k_{n}\ :\ O_{kn}\}}$  where
a) ${\displaystyle O_{Type}}$  does not reduce to Z9/Reference, Z18/Argument reference, or Z7/Function call

And here is the alternate definition of full values:

1. A string literal, or
2. ${\displaystyle \{Z1K1/type\ :\ O_{Type},\ k_{1}\ :\ O_{k1},\ldots ,\ k_{n}\ :\ O_{kn}\}}$  where
a) ${\displaystyle O_{Type}}$  does not reduce to Z9/Reference, Z18/Argument reference, or Z7/Function call, and
b) each other field ${\displaystyle O_{i}}$  is a full-value, except if:
i) ${\displaystyle k_{i}}$  is an identity field (e.g. Z4K1/identity (Type), Z8K5/identity (Function), Z40K1 (Boolean)).

#### Head-reduction and full evaluation

We define head-reduction as the union of the rules for references, argument references, and function calls: ${\displaystyle \to _{head}=\to _{ref}\cup \to _{arg}\cup \to _{call}}$

We repeatedly apply head-reduction for as long as possible. Once there are no head-reduction rules that can apply, we have a head value. We then need to recursively evaluate each field to obtain a full value:

${\displaystyle {\frac {\Sigma ;\sigma ;O\to _{head}\sigma ';O'\ \ \ \ \ \Sigma ;\sigma ';O'\Downarrow V}{\Sigma ;\sigma ;O\Downarrow V}}}$

${\displaystyle {\frac {\Sigma ;\sigma ;\{Z1K1/type\ :\ O_{Type},\ k_{1}\ :\ O_{k_{1}},\ldots ,\ k_{n}\ :\ O_{k_{n}}\}\nrightarrow _{head}\ \ \ \ \ \Sigma ;\sigma ;O_{k_{i}}\Downarrow V_{i}\ \forall i=1\ldots n}{\Sigma ;\sigma ;\{Z1K1/type\ :\ O_{Type},\ k_{1}\ :\ O_{k_{1}},\ldots ,\ k_{n}\ :\ O_{k_{n}}\}\Downarrow \{Z1K1/type\ :\ O_{Type},\ k_{1}\ :\ V_{1},\ldots ,\ k_{n}\ :\ V_{n}\}}}}$

Notice that we do not evaluate the Z1K1/type field. Doing so naively would lead to infinite recursion, as there will always be a reference to expand (e.g. Z4/Type). Alternatively, we could define a different evaluation rule for types, which would not expand references to builtin types, but that remains to be done. Moreover, it might be more readable to return types in an unevaluated form.

#### Evaluation strategy

The evaluation strategy corresponds roughly to normal-order reduction to weak head-normal form, which informally means argument evaluation is call-by-name (except for native code and some builtins). Optimizations can be done in the implementation as long as it gives the same result. Currently the orchestrator uses caching to avoid evaluating the same argument twice (similar to Haskell thunks); this makes the strategy call-by-need (a.k.a. lazy).
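
The caching that turns call-by-name into call-by-need can be sketched with a memoizing thunk. `Thunk` is an illustrative name, not the orchestrator's actual class.

```python
class Thunk:
    """A suspended argument evaluation that is forced at most once."""

    def __init__(self, compute):
        self._compute = compute   # zero-argument function: the suspended evaluation
        self._value = None
        self._forced = False

    def force(self):
        """Evaluate at most once; later calls return the cached value."""
        if not self._forced:
            self._value = self._compute()
            self._forced = True
            self._compute = None   # drop the suspension so it can be collected
        return self._value

calls = []
used = Thunk(lambda: calls.append("evaluated") or "42")
unused = Thunk(lambda: calls.append("never") or "oops")   # never forced
print(used.force(), used.force(), calls)   # forced twice, evaluated once
```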

Laziness is necessary because “if” expressions are represented as function calls. Some fields are also not meant to ever be reduced, such as identity fields (Z4K1, Z8K5, etc.), and Z1K1/type fields also benefit from not being fully resolved to avoid huge objects.

An eager/strict call-by-value evaluation strategy could also be done. We would have to be careful how to handle fields that need to be lazy; for example, by hiding them behind lambdas (thunks) or introducing some native support for laziness.

A strict evaluation strategy would also affect results because of errors (see next section): under call-by-value, if evaluating an argument fails, the entire computation fails, whereas under call-by-name the failure never surfaces if the argument is not used.

#### Errors

Errors in Wikifunctions are themselves structured ZObjects. Errors are represented as Z5/Error and a success result or error is represented as Z22/Evaluation result.

If at any point we encounter an error (e.g. an expected field is missing, an object fails validation, or even some internal problem such as the evaluator backend not being reachable!), we produce an error and propagate it all the way to the top, possibly chaining it with other errors. This makes errors similar to exceptions.

Note, however, that it is currently not possible to intercept errors (i.e. there is no exception handling): a function that expects a Z22/Evaluation result does not intercept errors.

It is possible to give precise semantics to errors, but we feel it is not worth going into the details at this point, so we leave them implicit.
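The propagation behavior described above resembles a result type threaded through evaluation. A minimal sketch, simplifying Z22/Evaluation result to a pair where Z22K1 carries the value and Z22K2 the error (the real record is richer; `ok`, `err`, and `bind` are our names, not the system's):

```python
def ok(value):
    # Successful Z22: a value and no error.
    return {"Z22K1": value, "Z22K2": None}

def err(error):
    # Failed Z22: no value, a structured Z5/Error object.
    return {"Z22K1": None, "Z22K2": error}

def bind(result, fn):
    """Propagate: if the result already carries an error, skip fn
    entirely, so errors bubble to the top like exceptions (possibly
    chained with further errors along the way)."""
    if result["Z22K2"] is not None:
        return result
    return fn(result["Z22K1"])
```

Chaining with `bind` means that once a step fails, every later step is skipped and the original Z5/Error is returned unchanged.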

### Confluence

Confluence describes whether evaluating terms in different ways always yields the same result. It is a fundamental property of lambda calculus, and gives the freedom to choose any evaluation strategy we want. In particular it allows us to evaluate any subterm without worrying about the evaluation order.

While we do think that the Wikifunctions model is mostly confluent, there are multiple reasons why confluence does not always hold:

• If evaluating a function argument yields an error, the error is propagated. However with lazy evaluation that argument might not be evaluated, therefore missing the error entirely.
• Expanding the Z1K1 field of a reference, argument reference, or function call might prevent that object from further evaluating because we do not recognize the type anymore, as described previously.
• Using weird argument types in functions such as Z9/Reference, Z18/ArgumentReference, and Z7/Function call would cause validation to fail if we evaluate the argument before validating it, as described previously.

Non-confluence can be partly addressed by specifying the evaluation order, and in some sense this is what we have been doing so far in this document. For example, currently the system does lazy evaluation of function arguments. However it does not address all problems, and whether this evaluation strategy is the right one remains to be decided.

In particular, lazy evaluation could lead to surprising results in the presence of side effects (such as errors) or non-confluence, since a slight change in evaluation order (due e.g. to refactoring of a term) could change the result. A strict call-by-value strategy might be more appropriate.

## Validation

### Use of types in validation

Most compiled languages (e.g. C++, Java, Haskell) check types statically (i.e. ahead of evaluation), which provides type safety. Most interpreted languages (e.g. Python, Javascript) only check types at runtime (if at all), which provides very few guarantees: nothing prevents you from eventually finding a string as an argument to your number addition during evaluation.

In Wikifunctions, ZObjects are validated against Z4/Type objects to make sure they are well-formed. A ZObject is valid with respect to a type when:

1. It has the fields described in the Z4K2/keys of the type, with those fields being recursively valid.
• It might not be necessary to have all the fields described in the type; they could be optional, in particular to accommodate union types. For example, this is currently used in Lists, where a list object either has both K1/head and K2/tail fields, or neither, to indicate that it is empty.
• It should not have fields that are not described in the type though, unless it is a Z7/Function call. That is because function arguments are supplied as fields in the object.
2. It passes the Z4K3/validator of that type (for example the type of positive integers would verify that the value corresponds indeed to a positive integer).

The first case can be seen as a special builtin version of the second case (and is in fact how it is currently implemented).

Since a validator can be any arbitrary function, validation has to be performed dynamically at runtime and is in general undecidable.
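The two validity conditions above can be sketched as follows. This is a toy representation, not the real implementation: Z4K2 holds a list of (key id, expected type) pairs, Z4K3 is an arbitrary Python predicate standing in for the validator function, and for simplicity all declared fields are required (ignoring the optional-field and extra-field caveats above). Note also that the type to validate against is passed in from the context, mirroring the fact that validation does not consult the object's own Z1K1.

```python
def validate(obj, ztype):
    """Condition 1: the object has the fields declared in Z4K2/keys,
    each recursively valid against its declared type. Condition 2:
    the type's Z4K3/validator, an arbitrary function, accepts the
    object. Since Z4K3 is arbitrary code, this check is inherently
    dynamic and, in general, undecidable."""
    for key_id, key_type in ztype.get("Z4K2", []):
        if key_id not in obj:
            return False                    # missing declared field
        if key_type is not None and not validate(obj[key_id], key_type):
            return False                    # field invalid for its type
    validator = ztype.get("Z4K3")
    return validator(obj) if validator else True

# A toy "positive integer" type: no declared keys, only a validator.
positive_int = {"Z4K2": [], "Z4K3": lambda o: isinstance(o, int) and o > 0}
```

For example, a toy pair type `{"Z4K2": [("K1", positive_int), ("K2", positive_int)]}` accepts `{"K1": 1, "K2": 2}` but rejects an object missing `K2`.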

In addition, types are also useful for other reasons:

• They help users when writing and debugging implementations.
• They guide the UI in how to display ZObjects to the user.
• They can theoretically be used by the system itself, e.g. for optimizations.

### Validation rules

NOTE: Validation is currently underspecified and ambiguous. It is not clear what it means for an object to be valid and how validation should be done in a lazy evaluation setting.

There are four different situations where an object can be validated:

• Against its own Z1K1/type, to make sure it has the type that it says that it has.
• Against the declared type of the field in the parent object.
• In a function call, to make sure that arguments have the type that the function expects in its signature.
• When a function returns, to make sure that the returned object has the type that the function declares in its signature.

In each of these, we validate one ZObject against one type. We represent this by the following relation: ${\displaystyle \Sigma ;\sigma _{object};O_{object}\ :\ (\sigma _{Type}\ :\ O_{Type})}$ , noting that the type can have its own scope ${\displaystyle \sigma _{Type}}$ , which could be different from the scope of the object ${\displaystyle \sigma _{object}}$ .

#### Subobjects

The main rule for default validation is:

${\displaystyle {\frac {\begin{cases}\Sigma ;\sigma _{Type};O_{Type}\to ^{*}\ \sigma '_{Type};\{Z1K1/type\ :\ Z4/Type,\ldots ,Z4K2/Keys\ :\ O_{keys}\}\\\Sigma ;\sigma '_{Type};O_{keys}\to ^{*}\ [(\sigma _{key_{1}};O_{key_{1}}),\ldots ,(\sigma _{key_{n}};O_{key_{n}})]\\\Sigma ;\sigma _{key_{i}};O_{key_{i}}\to ^{*}\ \sigma '_{key_{i}};\{Z1K1/type\ :\ Z3/Key,\ Z3K1/value\ type\ :\ O_{type_{i}},\ Z3K2/key\ id\ :\ k_{i},\ldots \}&\forall i=1\ldots n\\\Sigma ;\sigma ;O_{i}\ :\ (\sigma '_{key_{i}};O_{type_{i}})&\forall i=1\ldots n\\\end{cases}}{\Sigma ;\sigma ;\{Z1K1/type\ :\ O_{Type},k_{1}\ :\ O_{1},\ldots ,k_{n}\ :\ O_{n}\}\ :\ (\sigma _{Type};O_{Type})}}}$

That is, we evaluate the type ${\displaystyle O_{Type}}$  into a type definition ${\displaystyle \{Z1K1/type\ :\ Z4/Type,\ldots ,Z4K2/Keys\ :\ O_{keys}\}}$  to extract the list of key declarations ${\displaystyle [(\sigma _{key_{1}};O_{key_{1}}),\ldots ,(\sigma _{key_{n}};O_{key_{n}})]}$  (notice that each key declaration has its own scope), and we recursively check that each field in the object validates against the type in the key declaration.

Notice that we completely ignore the Z1K1/type field here. That is because we can validate the ZObject against different types, for example when passing the object to a function. The type that is checked against therefore depends on the context.

#### Non-resolved objects

Because the evaluation strategy is call-by-name (a.k.a. lazy), objects might not be fully resolved during validation. It is not clear how to handle this.

This problem might be avoidable in a call-by-value (a.k.a. eager, strict) evaluation strategy, because we could then be sure that we always run validation on fully evaluated objects.

#### Types

Notice that when we validate objects against a type, we do not validate that the type is itself a valid Z4/Type. In systems with higher-order/dependent types, this is addressed by either adding a check on the type (e.g. when it comes from the user), or showing via meta-theoretic argument that the type being used must necessarily be valid (e.g. if ${\displaystyle M\ :\ A}$  then it must be the case that ${\displaystyle A\ :\ Type}$ ), thereby avoiding an infinite validation problem.

Whether a similar result can be achieved in the Wikifunctions model remains to be determined. We could add constraints that validate the type against Z4, but we need to be careful to do it in a constrained enough way to avoid infinite validation.

#### Compositional validation

The validation rules can lead to a lot of re-validation of the same object, which causes performance issues. It is not clear how to make validation compositional, i.e. how to re-use the validation of an object in one context in another context. This is something we are currently working on.

One way to do this is to have a notion of type equivalence: we only validate an object once, when we evaluate it, for example against its declared type, then whenever we need to validate it against another type (for example when we pass the ZObject as an argument to a function), we just check that the types are equivalent and avoid validating the object again. This is similar to the common typing rule in dependently-typed lambda calculus:

${\displaystyle {\frac {M\ :\ A\to B\ \ \ \ \ N\ :\ A'\ \ \ \ \ A\equiv A'}{M\ \ N\ :\ B}}}$

(If ${\displaystyle M}$  is a function of type ${\displaystyle A\to B}$  and the argument ${\displaystyle N}$  has type ${\displaystyle A'}$  then if ${\displaystyle A}$  is equivalent to ${\displaystyle A'}$  then the function application ${\displaystyle M\ \ N}$  is well-typed and has type ${\displaystyle B}$ )

In its simplest form, equivalence could be checking that the types are equal, field by field; however, this would probably be too restrictive. Another way would be to compare the identity fields of the types, but not all types have a well-defined identity.

An alternative is to recursively define an equivalence relation that potentially evaluates either side when needed (this is similar to what is done in dependently-typed languages). It would only be an approximation as the general problem is probably undecidable, but it might be good enough in practice. This is still work in progress.
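A hedged sketch of such an approximate check, combining the two ideas above: compare Z4K1 identity fields when both sides have them, and fall back to field-by-field structural comparison otherwise (evaluation of unresolved sides is left out of the sketch; the function name and representation are ours):

```python
def equivalent(type_a, type_b):
    """Approximate type equivalence. A fuller version would also
    evaluate non-resolved sides on demand, as dependently-typed
    languages do; the general problem is probably undecidable, so
    any such check is a heuristic approximation."""
    id_a, id_b = type_a.get("Z4K1"), type_b.get("Z4K1")
    if id_a is not None and id_b is not None:
        return id_a == id_b                 # both have a well-defined identity
    if type_a.keys() != type_b.keys():
        return False                        # structurally different shapes
    for k in type_a:
        va, vb = type_a[k], type_b[k]
        if isinstance(va, dict) and isinstance(vb, dict):
            if not equivalent(va, vb):
                return False                # recurse into subobjects
        elif va != vb:
            return False                    # leaf fields must match exactly
    return True
```

With such a check, an object validated once against its declared type could be passed to a function whose parameter type is merely equivalent, without re-running the full validation.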

### Relation to other type systems

Even though there is no static type checking, and despite the types in the function model not corresponding to the usual notion of types in type theory, it is still interesting to ask what types we can express and what other type systems are similar, in particular those of the lambda calculus.

Given that the type of Z4/Type is Z4/Type and that we allow arbitrary computations in types, this corresponds roughly to the lambda-star system (the universal pure type system; see also system U). This means that type-checking is in general undecidable. Other type systems (e.g. system F, which is the basis for functional programming languages such as Haskell and OCaml, or systems with dependent types such as the Coq theorem prover, etc.) avoid this by restricting the types that can be formed in a way that ensures that computation in types always terminates.

It is interesting to note that all forms of type dependencies (i.e. dimensions of the lambda cube) are actually in use:

• Types that depend on types, to form generic types such as TypedList,
• Terms that depend on types, to form polymorphic functions such as those that operate on TypedLists[2],
• Types that depend on terms, to form the type of error values from Error Types.

It is not clear what properties we can derive from the types. For example, type preservation clearly does not hold, as mentioned in a previous section. It would be interesting to see if one can define an alternative system on top of ZObjects to do proper type checking, and see what properties it would have.

## Observations and recommendations

We attempted to provide a specification for the semantics of the function model by studying the implementation of the orchestrator while departing from it as little as possible. Many difficulties were encountered along the way, some stemming from fundamental design decisions.

There are various changes one could bring to the design of the function model and its implementation, varying in scope and impact. The present section attempts to summarize them. It is the sole opinion of the author.

### The ZObjects system

A lot of complexity comes from using ZObjects as the basis for everything. Mixing syntactic and semantic roles in types is confusing. The questions and problems that arise from Z7/Function call, Z9/Reference, and Z18/Argument reference are difficult to resolve. Having types be just like any other ZObject requires arbitrary computation for any form of type checking. Despite the drive for simplicity, even type uniqueness is not guaranteed (hence the Z1 type). All of this adds a lot of complexity for little apparent benefit.

It would be much simpler to take a standard and proven system rather than invent a new esoteric one with few guarantees. Various systems based on the lambda calculus exist. For the purposes of Wikifunctions, if polymorphism is needed then something like system F could be sufficient (more precisely system Fω to also include type operators, or a weaker variant such as Hindley–Milner, although type inference is not needed). Moreover, system F has been successfully extended with various mechanisms, such as algebraic datatypes (e.g. lists), records, and exceptions for handling errors, that would fit the needs of Wikifunctions.

Separating objects and types into different layers could greatly simplify the system. As tempting as it is to represent objects and types using the same concepts, it introduces a lot of complexity, and systems with dependent types have extensive metatheoretic results and smart implementations developed to make them work.

Using different syntactic constructs than ZObjects to represent function calls, references, and argument references would also be beneficial. It would help deal with the confluence problems and give less surprising results.

At the very least, I think the following points should be considered and consciously decided upon:

• Non-confluent cases: are we ok with having them? What should the behavior be?
• Evaluation strategy: is the lazy strategy the one we want? Is it part of the specification? If not, how do we reconcile and deal with the non-confluent cases?
• How should lazy fields be treated? Are identity fields really necessary or would it be better to let that be a meta-property that is computed and kept track of by the orchestrator?
• The intended semantics of validation: what does validation mean? What should the behavior be on partially evaluated objects? Would a strict evaluation strategy be more well-suited? Should types themselves be validated?

### Choice of programming language

Javascript is not at all well-suited for writing production-quality interpreters (especially for complex systems such as this one): type checking is only dynamic (if done at all); there is very little code hygiene and there are few invariants we can depend on; and refactoring is very difficult to get right, at least without overly extensive testing.

Typescript would be an obvious first alternative and one that I would strongly encourage the team to try. While I do not have experience with it, the similarity with Javascript would obviously help, and having stronger type guarantees would go a very long way in providing a maintainable implementation. It might even be possible to adopt it gradually (e.g. by incrementally refining the type annotations), which would also smooth the migration process.

The lack of builtin sum/union types also makes it difficult to represent ASTs. Functional programming languages are much better suited for that (ML was even invented with that purpose in mind), and even modern imperative languages such as Rust support them. However, limited experience in the team and limited support in the Wikimedia ecosystem could be a big hurdle to adopting them.

A possible alternative I came across could be Reason, which appears to be a JavaScript-like OCaml dialect. From its documentation, “Reason is a programming language powered by OCaml's strong type system, and has a syntax designed to feel familiar to people coming from JavaScript or C-family languages. [...] The powerful type system underlying Reason will reduce bugs, increase maintainability, and improve the refactorabilty of your code.” Again, no prior experience here, but something that might be worth looking into.

### Structure of the interpreter

The orchestrator is currently written in a top-down recursive style. This is natural and intuitive as it is close to what we think of when we evaluate a program. However it is important to note that this might lead to stack overflows in long evaluations.

There are various techniques that can be used to avoid this. In particular converting the orchestrator code to continuation-passing style would make the code tail-recursive, which would already be enough in languages with tail-recursion optimization. Defunctionalization would go one step further and get rid of higher-order functions. The result would essentially be the implementation of an abstract machine for the language, which can be implemented efficiently in an imperative paradigm without using recursion.
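As a hedged miniature of the transformation (not the orchestrator's actual code), here is a deeply recursive computation rewritten in continuation-passing style and driven by a trampoline, so the host stack stays flat regardless of input depth:

```python
def trampoline(thunk):
    """Drive a CPS-converted computation: keep calling zero-argument
    thunks until one returns a final (non-callable) result. The host
    stack never grows with the input, which is the effect tail-call
    optimization or an abstract machine would give directly."""
    while callable(thunk):
        thunk = thunk()
    return thunk

def depth_cps(obj):
    """Nesting depth of singly-nested lists, in continuation-passing
    style. The naive recursive version (1 + depth(obj[0])) overflows
    the stack on deeply nested input; here every recursive call and
    every continuation invocation returns a thunk instead of nesting."""
    def step(obj, k):
        if isinstance(obj, list):
            # "Recurse" by returning a thunk; resume via continuation k.
            return lambda: step(obj[0], lambda d: lambda: k(d + 1))
        return lambda: k(0)
    return trampoline(step(obj, lambda d: d))
```

This handles nesting tens of thousands of levels deep, far beyond Python's default recursion limit, at the cost of considerably less readable code.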

Note that the resulting code might be significantly less intuitive than the original and harder to modify (without going through the transformation again). Therefore I would only resort to these techniques if addressing performance issues such as stack overflows becomes a priority.

## Notes

1. We did not go into all the details, but if the body is a function, the results will obviously not be exactly the same when using environments, since with environments the result is a closure. In practice, there are two main ways this is dealt with: either the result is opaque to the user (e.g. in Python it is displayed as <function <lambda> at 0x7f6b3cfa1ea0>), or the body of the function is opened and recursively evaluated through a process called reification, which needs careful alpha-renaming to avoid variable capture (e.g. in Coq and other systems with dependent types).
2. Although in order for this to truly be the case we are still missing generic function types.