I’m going to use this post to checkpoint some of my recent research. I realized the other day that I don’t know any practical motivations for this work, so I’m going to summarize it in this post and seek feedback regarding its worthwhileness.
Rosetta is a specification language with full-spectrum dependent types and first-class records. This combination turns out not to play very well with a particular import (or “opening”) construct we hope to support in Rosetta.
The syntax for the import mechanism we’re after is `use t1 in t2`. In-house, we call it a use-expression. Such a term is well-formed if the term `t1` is a well-formed term denoting a record and the term `t2` is well-formed in a scope where the fields exported by `t1` are bound as variables. Thus we say that a use-expression binds the fields of `t1` in `t2`.
In more general vocabulary, I characterize use-expressions as an implicit import mechanism, to emphasize that the names they bring into scope do not themselves occur in their syntax. This is a key distinction: an explicit import mechanism is much simpler to formalize, since the names are immediately available.
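The contrast can be sketched in a toy model (my own, not Rosetta’s implementation): with an explicit import, the bound names are written at the import site; with a use-expression, they must be read off the *type* of the used term. The `exposing`-style syntax in the comment is hypothetical.

```python
# Toy contrast between explicit and implicit imports. Scopes are modeled as
# sets of names, and a record type as a dict from field names to type strings.

def explicit_import(names, outer_scope):
    # Explicit: the names appear in the syntax of the import site itself,
    # e.g. something like `import r exposing (f, g)` (hypothetical syntax).
    return set(outer_scope) | set(names)

def implicit_import(record_type, outer_scope):
    # Implicit: the names come from the used term's type; nothing at the
    # import site mentions them.
    return set(outer_scope) | set(record_type.keys())

record_type = {"f": "integer -> type", "g": "bool"}
scope_a = explicit_import(["f", "g"], {"x"})
scope_b = implicit_import(record_type, {"x"})
assert scope_a == scope_b == {"x", "f", "g"}
```

The point of the sketch is that `implicit_import` cannot even be evaluated until `record_type` is known, which is exactly where metavariables cause trouble below.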
As a consequence of the dependent types and the fact that Rosetta is a logic, we have a lot of metavariables in our metatheory. These can arise for a variety of reasons. One example is what Rosetta jargon deems “uninterpreted values”: Rosetta declarations can bring a value into scope by specifying its type without specifying its value. So, given the declaration `F :: integer -> type`, all the rest of the spec sees is that `F` is a type constructor. The value of `F` is represented as a metavariable. Furthermore, there’s not yet enough information to know anything but the type of an application of `F`. Thus, the tools also generate a metavariable for each application of `F`. Until more details are available about `F` (i.e. it is refined, perhaps via user interaction), all we know about such a metavariable is that it is equivalent to the corresponding application of `F`. The same (or an equivalent) metavariable represents all applications of `F` to the same (or an equivalent) argument.
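The sharing discipline for application metavariables can be sketched as follows. This is an assumed toy model, not Rosetta’s actual machinery; in particular, argument “equivalence” is modeled as plain equality, whereas a real prover would compare terms up to its definitional equality.

```python
import itertools

class MetaStore:
    """Toy store of metavariables for uninterpreted values and their
    applications: equivalent applications share one metavariable."""

    def __init__(self):
        self._fresh = itertools.count()
        self._apps = {}  # (head, argument) -> metavariable name

    def fresh(self):
        return f"?m{next(self._fresh)}"

    def apply(self, head, argument):
        # All applications of `head` to an equivalent argument map to the
        # same metavariable; a new argument gets a fresh one.
        key = (head, argument)
        if key not in self._apps:
            self._apps[key] = self.fresh()
        return self._apps[key]

store = MetaStore()
assert store.apply("F", 3) == store.apply("F", 3)  # same argument: shared
assert store.apply("F", 3) != store.apply("F", 4)  # new argument: fresh
```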
I sought to answer the question: what happens when you `use` a term that has a metavariable as its type? The essence of the problem is that the names being bound are not immediately available. Since the type of the used term is just a metavariable, the names of its exported fields cannot necessarily be determined. This complicates the metatheory, since conventional term representations tacitly assume that names can be resolved immediately (i.e. that occurrences can be related to their corresponding declarations).
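A minimal sketch of the resolution problem, under my own modeling assumptions: a known record type is a dict of fields, a metavariable type is anything else, and resolution must be deferred in the latter case because we cannot tell whether a name is a field of `t1` or a reference to the outer scope.

```python
def resolve(name, used_type, outer_scope):
    """Resolve a name occurring in the body t2 of `use t1 in t2`,
    where `used_type` is the type of t1."""
    if isinstance(used_type, dict):       # known record type: fields listed
        if name in used_type:
            return ("field", name)        # resolves to a projection from t1
        if name in outer_scope:
            return ("outer", name)
        raise NameError(name)
    # `used_type` is a metavariable: resolution must wait for refinement.
    return ("deferred", name)

assert resolve("f", {"f": "type"}, {"x": 1}) == ("field", "f")
assert resolve("x", "?m0", {"x": 1}) == ("deferred", "x")
```

The `("deferred", name)` outcome is exactly what conventional term representations have no slot for.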
I have experimented with various context formers (e.g. `…, using u# = x :: T, … |- …` where `u#` is a fresh name), typing and term-equivalence rules (e.g. use-bound names become projections when the name is known, or else all name occurrences under a use-binding become a metavariable that is suitably related to the used term), restrictions on well-formedness that would partially simplify the metatheory (e.g. names bound by a use-clause do/must not eclipse existing names), and term representations (e.g. locally nameless with explicit substitutions). The metatheory either gets pretty complicated, i.e. lots of metavariables everywhere (disclaimer: perhaps it’s already this complicated in most provers?), or the restrictions are unnatural for the user (e.g. some kinds of shadowing are ignored: `(let x :: integer = 4 in x) == (let x :: integer = 4 in use <x :: bool = True> in x)`).
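The last representation mentioned, locally nameless with explicit substitutions, can be hinted at with a toy encoding (my own, far simpler than anything Rosetta would need): free variables are named, use-bound occurrences refer to a field of the enclosing `use` rather than to any name in scope, and opening a `use` applies the substitution explicitly.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FVar:        # ordinary free variable (locally nameless: frees are named)
    name: str

@dataclass(frozen=True)
class UseVar:      # use-bound occurrence: names a field of the enclosing use
    field: str

@dataclass(frozen=True)
class Use:         # `use record in body`, record as a field -> term mapping
    record: dict
    body: object

def open_use(term):
    """Apply the explicit substitution of a `use`: replace a use-bound
    occurrence in the body with the corresponding field of the record.
    (Toy version: only handles a body that is a single variable; a real
    implementation would recurse through all term formers.)"""
    body = term.body
    if isinstance(body, UseVar):
        return term.record[body.field]
    return body

t = Use({"x": FVar("True")}, UseVar("x"))
assert open_use(t) == FVar("True")
```

The appeal of this representation is that `UseVar` occurrences carry no commitment about where the name resolves until the substitution is applied, which is exactly the deferral the metavariable case demands.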
I neglected to motivate implicit import mechanisms above. Their major benefit is that they decouple the import site (i.e. the use-expression) from the interface of the imported record/module (i.e. the definition of the used value). This decoupling improves modularity in a traditional software-engineering way. They’re quite similar to Java’s wildcard imports (`import lib.piece.*;`).
There’s actually a dual to this benefit that is itself perhaps a more significant drawback: if the used value is extended with another field, then every import site suddenly incurs an extra binding that might not have been foreseen. This is akin to unwanted variable capture! The drawback can only be safely ignored when the interface of the imported record/module/library is very stable or standardized. But in those cases, it’s also highly unlikely that the type of the record/module/library would be a metavariable.
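The capture hazard can be made concrete with a small toy model (my own; the shadowing rule, fields eclipse the outer scope, is the conventional one):

```python
def resolve(name, record_fields, outer_scope):
    """Resolve a name in the body of a use-expression; use-bound fields
    shadow the outer scope."""
    if name in record_fields:
        return "field of the used record"
    if name in outer_scope:
        return "outer binding"
    raise NameError(name)

outer = {"x"}
v1 = resolve("x", {"f", "g"}, outer)        # record exports f and g
v2 = resolve("x", {"f", "g", "x"}, outer)   # record later gains a field x
assert v1 == "outer binding"
assert v2 == "field of the used record"     # x silently captured
```

Nothing at the import site changed between `v1` and `v2`, yet the meaning of `x` did.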
Thus, the interplay between metavariable types and implicit imports that this research set out to formalize has the following two properties.
- It either complicates the metatheory (e.g. an explosion of metavariables) or incurs behavior that is unnatural to the user (e.g. use-bound names not being allowed to eclipse other bindings).
- It doesn’t actually arise often in practice: you tend to import only quite stable interfaces. Otherwise, importing is either not very beneficial (since there aren’t many names in the interface) or not very safe (because of the risk of unintended variable capture).
So, supporting implicit imports complicates the language’s metatheory, and the feature both doesn’t really come up that often and is kind of risky to use in the first place. That’s why I think it’s probably not very motivatable.