ezyang's research log

What I did today. High volume, low context. Sometimes nonsense.

You can’t do accurate strictness types without dependent types

What is the strictness of this function?

(!!) :: [a] -> Int -> a

The answer is: it’s spine strict up to the index specified by the Int, and value strict in the Int itself.
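
For concreteness, here is roughly the Prelude definition (the real one also rejects negative indices) together with the behavior that claim describes:

import Prelude hiding ((!!))

(!!) :: [a] -> Int -> a
(x:_)  !! 0 = x
(_:xs) !! n = xs !! (n - 1)
[]     !! _ = error "index too large"

main :: IO ()
main = do
  print ((1 : 2 : undefined) !! 1)   -- fine: the spine past index 1 is never forced
  -- print ((1 : undefined) !! 1)    -- diverges: spine strict up to the index
  -- print ([1, 2, 3] !! undefined)  -- diverges: strict in the Int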

So, how do you specify this function’s strictness in the types? http://h2.jaguarpaw.co.uk/posts/strictness-in-types/ and most actual systems that have been used (including what you put in strictness analysis) approximate the true strictness… because it’s really hard to say what the strictness actually is.

Comments

One weird trick that will recursively git-new-workdir your submodules

git-new-workdir: it’s my new favorite Git trick. The most important thing about it is not that it lets you conveniently get another checkout tracking the original remote of a repository without having to copy it (although that is useful), but that all of your branches are still kept track of in one central place. This (1) improves branch hygiene (you won’t keep everything named master anymore, nosirree) and (2) makes it far easier to keep track of the various patches you have in flight.

One thing that’s bothered me, however, is that git-new-workdir will not “workdir’ify” your submodules, so if you git submodule init && git submodule update Git will still spend a bunch of time re-cloning your submodules.

So I thought, “Why don’t I just git-new-workdir the submodules manually myself!” Here is the command, done from the top-level of the new workdir (assuming that the submodules of the original have been checked out):

for i in `git submodule status | cut -d' ' -f2`; do \
rmdir $i; git-new-workdir /path/to/orig/$i $i; done

Alas, on recent versions of Git (2.1.0 is the one I tried), this doesn’t work! You get this mysterious error:

fatal: Could not chdir to '../../../submodule-name': No such file or directory

Zounds! Why? All of the symlinks were setup properly, but if we look at the .git/config of our submodule we’ll soon see the problem:

[core]
  worktree = ../../../submodule-name

The reason is that, as of commit 501770e1bb5d132ae4f79aa96715f07f6b84e1f6, Git stores submodules in .git/modules of the top-level checkout, rather than in the children. The stated reason is to allow users to delete submodule directories without worrying about accidentally deleting history. But when we git-new-workdir, the config file gets symlinked over, and its relative worktree path is complete nonsense from the new workdir’s location.

Ouch! What’s a man to do? Well, if you manually check out your submodules old-style, Git will not relocate them. I don’t have a good script for this yet, but the relevant info is in .gitmodules, so after a pile of git clones you should be on your way!

I would report this to the Git developers, but their email list seems to blackhole all email from my email addres…

Comments

Compiling dependently typed languages to System FC

One of the most annoying things about creating a new programming language is generating code for it. Optimizations take a long time to develop, so it takes a long time for a compiler that does code generation from scratch to catch up to a highly optimizing compiler like GHC.

When your language is sufficiently similar to an existing one, you can often compile into its intermediate representation and let its optimization passes take care of things from there. A number of languages do this with GHC Core: they compile their code into Core and then hand it off to GHC’s optimizer.

However, how do you go about implementing this transformation when your language has a more expressive type system than System F with coercions? The trouble arises when you have types which you know are equal, but cannot express the evidence of equality in the coercion language.

Traditional extraction to Haskell/ML from dependently typed languages often makes use of unsafeCoerce/Object.magic to manage these cases, but too much use of these impedes optimization. Another possibility is to generate additional axioms to represent coercions that would otherwise not be safe.
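
As a minimal sketch of the unsafeCoerce fallback (not any particular compiler’s output; the Vec type and names are illustrative), consider extracting a length-indexed vector append whose index arithmetic cannot be expressed as an FC coercion:

{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

import GHC.TypeLits
import Unsafe.Coerce (unsafeCoerce)

data Vec (n :: Nat) a where
  VNil  :: Vec 0 a
  VCons :: a -> Vec n a -> Vec (n + 1) a

-- The source language proved the index arithmetic, but System FC has no
-- corresponding coercion axiom, so the extracted code papers over it with
-- unsafeCoerce, which the optimizer must treat as opaque.
append :: Vec n a -> Vec m a -> Vec (n + m) a
append VNil         ys = unsafeCoerce ys
append (VCons x xs) ys = unsafeCoerce (VCons x (append xs ys))

Generating fresh axioms instead (the other option above) would let these stay ordinary casts, which the optimizer can see through.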

Anecdotally, integrating with the type system of the target language can be quite important. Edwin tried compiling Idris to GHC Core a while ago (five years?), and he either got segfaults (because the optimizer was assuming invariants about his code which didn’t hold) or he completely defeated the optimizer (because he made all of his objects opaque, leaving it with no type information).

Comments

SmartCheck: Automatic and Efficient Counterexample Reduction and Generalization - Lee Pike

Implementing shrink is hard! Shrinking is important (esp. bug reports)
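
(Not from the talk, but for a sense of the tedium: a hand-written Arbitrary instance with shrink for a small expression type. The type and names are illustrative.)

import Test.QuickCheck

data Expr = Lit Int | Neg Expr | Add Expr Expr
  deriving (Show)

instance Arbitrary Expr where
  arbitrary = sized gen
    where
      gen 0 = Lit <$> arbitrary
      gen n = oneof
        [ Lit <$> arbitrary
        , Neg <$> gen (n - 1)
        , Add <$> gen (n `div` 2) <*> gen (n `div` 2)
        ]
  -- Try immediate subterms first, then shrink the pieces recursively.
  shrink (Lit n)   = Lit <$> shrink n
  shrink (Neg e)   = e : (Neg <$> shrink e)
  shrink (Add a b) = [a, b] ++ [Add a' b' | (a', b') <- shrink (a, b)]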

IDEA: generate a family of counterexamples, not a single one

Basic idea: breadth-first search and shrinking

"What I would like to have done is run this on the SPEC2000 suite of testing, but there is no such test suite"

Delta-debugging and randomization; developed simultaneously with CSmith

Shrinking/generalization is not enough: many fields are exactly the same, but it’s a large expression. What would be helpful is CSE. Also, it would be good if we could make it easier to visualize and manipulate large algebraic data expressions.

Q: (samth) How competitive would it be to take a QC shrinking approach, but randomly find counterexamples. Because if you try one, then you could get stuck on the hill.

A: I don’t know definitely, but really it’s pseudorandom. You might get lucky but this is a bit more systematic.

Q: What if you had a custom show expression?

A: Bad programming practice.

Q: What about forall?

A: It’s some string hackery, it’s terrible.

Q: About shrinking strategy, the nice thing about QC deterministic shrinking, you know all the possible shrinkings did not provoke the bug; local minimum. Do you have this property too, when you do random?

A: You don’t have this property, and when you stop, e.g. if your parameter runs out, or when it has exhaustively gone through each subterm…

Q: You could have a hybrid, interleaving QC and SC shrinking

A: It’s easier, implementation-wise, to shrink before you SC

Q: But the stopping point is important! I.e., the obvious things don’t provoke the bug.

Q: You have made SmartCheck, which makes the other system DumbCheck…. but for the example you showed, does that include QC shrinking?

A: They didn’t implement shrink for all types. I should have used GHC’s automatically derived shrinking… it’s just a motivation. There’s more proper benchmarks in the paper.

Q: What often works is you have the structural way of doing shrinking, that gets you a long way, and then later, when you look at a counterexample, you add custom shrink rules. This is what you do in most projects. The randomness gives you some of that; you do other things. What you should do is what Ulf suggested, interleave them.

Q: The universal quantification seems useful, but the existential seems less so. I was hoping you’d find it was independent of the five and minus, but… correlating subexpressions

A: That is a difficult problem to do in any general setting. That could be related if you did some CSE, you might find a relation.

Q: (Wadler) “Not that SmartCheck”

Comments

LiquidHaskell: Refinement Types for the Real World - Eric Seidel

Q: Can we put these in the standard library?

A: Yes, though our libraries are on old versions and need some porting.

Q: IO seems hard, esp. state. Do you do anything?

A: No. We’re thinking about it.

Q: You had a non-example where we couldn’t figure it out, and the constraints were big. Is it hard to debug, do you get used to it?

A: Yes and Yes. It infers the strongest possible types for all subexpressions, so with a large context, it’s overwhelming. Something that is really helpful, however, is that it doesn’t just dump out a set of type binders; we annotate the source code and dump an HTML file, and insert popups, so you can hover over it. I found that invaluable. Something else we’re looking at is generating counterexamples by generating tests that satisfy the preconditions and see if it empirically fails.

Q: You defined the measure function in a funny comment. Could you define it as a Haskell function?

A: Not at the moment. Measures are not Haskell functions; they’re a very restricted subset of inductively defined Haskell functions, with one equation per data constructor, pattern matching only on one argument: we translate them into refined types. But as you saw with length, the definition looks identical, so it would be good to identify a subset of Haskell functions which are measures.
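
(For reference, a measure in the “funny comment” looks roughly like this; syntax from memory of LiquidHaskell around the time of this talk, names illustrative, so details may differ.)

{-@ measure llen :: [a] -> Int
    llen ([])   = 0
    llen (x:xs) = 1 + (llen xs)
  @-}

{-@ safeHead :: {v:[a] | llen v > 0} -> a @-}
safeHead :: [a] -> a
safeHead (x:_) = x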

Comments

Type-checking Polymorphic Units for Astrophysics Research in Haskell - Takayuki Muranushi

Closed type families are critical to making this work! Nice example of adjusting units to keep floating point from overflowing.
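
(Not the paper’s actual code, but a rough sketch of the style: a closed type family that merges a unit into an exponent map, leaning on the ordered, non-linear matching that only closed families allow. Real unit maps need integer exponents so division works; Nat keeps the sketch small.)

{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, KindSignatures, UndecidableInstances #-}

import GHC.TypeLits

type family Insert (u :: Symbol) (e :: Nat) (m :: [(Symbol, Nat)]) :: [(Symbol, Nat)] where
  Insert u e '[]             = '[ '(u, e) ]
  Insert u e ('(u, e') ': m) = '(u, e + e') ': m   -- same unit: merge exponents
  Insert u e (p ': m)        = p ': Insert u e m   -- different unit: keep looking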

Q: To compare computations with one system to another, do you have to keep things polymorphic until you get to the final value? E.g. in the example 88 % Miles :/ Hour

A: So you can emit your internal representation in whatever you want, but you can also

Q: You’ve stressed polymorphism in the unit system, but in Andrew Kennedy’s system, he stressed polymorphism in the length dimension. So what’s the type of divide?

A: ‘Qu (Normalize (a @- b)) l n’, which does the normalize on the maps.

Q: OK, so Andrew Kennedy had a lot on how normalize works… supposing I fix l (it’s uniform), but I want to do lots of different combinations, a * b + c, you have a lot of calls of normalize building up. The types of just a simple looking arithmetic program gets large.

A: Well, it takes a long time to compile…

A: (Eisenberg) Normalize tries to get things small again. So if you have something really polymorphic, it would become big, but our experiments never encountered it…

A: The size of the map only gets as large as the number of units you have.

Q: Yesterday, I asked Stephanie why you’d put all this dependently typed stuff in Haskell. You’ve answered that.

Q: The types can get complex; can you compare how this goes in comparison to F# where you build in the constraint solving?

A: I have tried F#… one thing that is really good is that the error messages are quite readable. Built-in is surely an alternative.

Q: (Andrew Kennedy) I’m afraid I missed the beginning, but following onto SPJ’s question; in F#, it makes use of constraint solving doing Gaussian elimination, to get a most general type in all situations. Is it doing it?

A: The Haskell type system looks superficially like the value level… I don’t know. You don’t need Gaussian elimination; you just have addition and multiplication

Q: Well, to make it polymorphic, you need to deal with the exponentials

A: The polymorphism is over l, it could be anything, it’s Int!

Q: I guess I should try some examples.

Q: So one thing F# prevents you doing is adding bad units.

A: Yes, and also there is only one possible unit for length in a system.

Comments

The Next 1100 Haskell Programmers - Lars Hupel

  • QuickCheck defaulting to unit is not a good thing for testing polymorphic functions (see the GHCi sketch after this list)
  • The payback of running a competition is to motivate good students and discover them
  • Some students ignored the exercises just for the competition!
  • We introduced list comprehensions early on, and they thought it was the ultimate solution to everything
  • What’s hard? Type inference, higher-order functions, evaluation order (well, actually, ANY evaluation order is hard to understand)
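
A quick GHCi illustration of the defaulting pitfall in the first bullet (standard QuickCheck/GHCi behaviour; second output abbreviated): with a polymorphic property the element type defaults to (), so even a false property appears to pass.

ghci> quickCheck (\xs -> reverse xs == xs)            -- xs defaults to [()]
+++ OK, passed 100 tests.
ghci> quickCheck (\xs -> reverse xs == (xs :: [Int]))
*** Failed! ... with a counterexample such as [0,1]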

Q: Do you have data correlating competition results with exam results?

A: At the end of the semester, there were not many people doing the competition: 20-30, but they all passed

Q: Did you use Safe Haskell?

A: We run them on a VM in a sandbox; we didn’t test anything else. If people wrote something clever enough to escape, we think they’ve earned their grade.

Q: You asked how to teach type inference better; I’ll flip it around: is there a way to do Haskell type inference better?

A: I don’t know. Maybe it would be better if there were tools to visualize it. A way to say, “Give me a trace of type inference which is human readable,” and then we can try to come up with visualizations of it. So they see the real thing and not some abstract theory.

Comments

Embedding Effect Systems - Dominic Orchard

Q: “Never never under any circumstance use bubblesort”. Say you have read effects and it reads the variable as bool and another as int, what happens?

A: You get a type error… I think the sort figures it out but I’m not sure.

Q: Given two sorted lists, you append, sort and nub. Why don’t you just merge?

A: Yeah, that would be an optimization… and you could add it to the library. I did the simplest thing to make the paper easy to read.

Q: Harping on some more… shouldn’t it be a partial monoid, since it’s not a total operation?

A: You could give lots of cases for your type family, and then the partiality comes out… this could be useful for program logic.

Q: I haven’t been in the Haskell Symposium for a few years… I thought you’d extended the language, but… I feel like I blinked, types which are sexy, to types which are absolutely “slutty.” (in the best possible way)

A: The user written types are not that bad.

Q: For the counter example, I was surprised that the multiplication could show up.

A: It worked because… you just did the right thing: multiply is similar to the definition of map, which is why it unifies. I cheated slightly, describing it using the Nat kind… that doesn’t work; we need our own version of Nat. GHC’s Nat has plus and times, but the type checker doesn’t know enough about them. So you use the normal inductive plus/times.
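
(The “own version of Nat” is roughly the usual inductive encoding; a sketch with illustrative names, not the paper’s code. The type checker can reduce Plus and Times by ordinary unfolding of the equations.)

{-# LANGUAGE DataKinds, TypeFamilies, KindSignatures, UndecidableInstances #-}

data Nat = Z | S Nat

type family Plus (n :: Nat) (m :: Nat) :: Nat where
  Plus 'Z     m = m
  Plus ('S n) m = 'S (Plus n m)

type family Times (n :: Nat) (m :: Nat) :: Nat where
  Times 'Z     m = 'Z
  Times ('S n) m = Plus m (Times n m)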

Comments

1ML — core and modules as one (Or: F-ing first-class modules) - Andreas Rossberg

KEY IDEA: predicativity is the essence of ML module stratification!

An attempt to rethink modules in ML, in a cleaner way.

We think of ML as a language, but it’s not really one language: it’s two languages, the value level and the module language (with different syntax but much the same structure). (And if you like, there’s a type-level language too.) They’re all syntactically different for no good reason at all. “Who likes the type application syntax of ML?”

One reason for this is modules are second class: there are different tradeoffs for having modules (more fancy types, but more verbose and not computationally interesting); the core has type inference. Second-class means you can’t compute with modules—a limitation that has been long observed.

CLAIM: being able to package a module as a value is not first-class modules.

"What I really want is classless society, a revolution". Incoherence, verbosity, redundancy, begone!

1ML is a triple pun: unified ML, first-class module language, and a question that is dear to the JavaScript community

IDEA: start with the module language, add everything else

Example: types (encoded as a module, a common trick in recent module ILs), type constructors (encoded as an applicative functor, obviously!)

Example: Map functor (standard ML)

Example: Ability to select a module at runtime (standard example). You can do this with packaged modules, but it’s a bit more tedious.

Example: Associated types, the “collection class” which comes with a key and a value. (In OCaml, apparently you have to phase-split this class manually).

(Edward: These examples… I think work in Haskell?)
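
(They do, more or less: the associated-types example, for instance, is an ordinary Haskell class with associated type families. A sketch with illustrative names.)

{-# LANGUAGE TypeFamilies, FlexibleInstances #-}

class Collection c where
  type Key c
  type Val c
  empty  :: c
  insert :: Key c -> Val c -> c -> c
  find   :: Key c -> c -> Maybe (Val c)

-- e.g. an association-list collection
instance Eq k => Collection [(k, v)] where
  type Key [(k, v)] = k
  type Val [(k, v)] = v
  empty        = []
  insert k v c = (k, v) : c
  find k c     = lookup k c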

Example: Higher-kinded polymorphism with the monad type class.

Technically: take F-ing modules (including implicit quantifier shuffling), collapse syntactic levels, semantic levels, and otherwise the F-ing rules work unchanged. So F-ing modules is already first class, except…

  1. Decidability (esp. subtyping)
  2. Phase separation (you don’t want to introduce real dependent types)
  3. Type inference
  4. Avoidance problem (well, already solved in F-ing)

Fact: abstract signatures lead to undecidability of subtyping. Abstractly, this matching can lead to an infinite set of substitutions, related to contravariance of functors. It also breaks F-ing semantics invariants.

IDEA: Don’t do it! Borrow predicativity from dependently typed languages: disallow substituting type itself for an abstract type. bool : type, but type /: type (btw, Full-XML already has this with polytypes and monotypes). Signature matching only allows small types: the ONLY restriction you really need!

In fact, that was what the syntactic abstraction was intended to do.

Phase separation is not a problem.

So, what about type inference? Obviously it’s not going to be complete. How do we get some of it? IDEA: Reuse the distinction with small and large types, and infer SMALL TYPES. Add a “type wildcard” which compiler infers (always small), and “implicit” functions, which recover ML style polymorphism. Introduction (structure fields)/elimination is implicit. Why did we pick a tick? Because it almost resembles the syntax you’re used to: “empty ‘a : map a”

Main observation: for subtyping in small types, it’s basically type equivalence (the quantifier problems go away), so unification logic is incorporated. But there’s exceptions:

  1. Width subtyping. One solution is row variables to do inference…
  2. Include for record concatenation. Maybe you can do even more complicated row variables.
  3. Value restriction (kind of obscure)

Fortunately, these issues are not NEW to 1ML, they already existed. Practical completeness is still as good as ML.

Future directions: more general applicative functors (the current semantics are limited, following Shao’99, where you need to seal at a transparent functor; this is easy to do in the explicitly typed language, but hard with type inference). Generalise implicit functors from types to type classes (also tricky). Row polymorphism… first-class polymorphism inference? Recursive modules?

Wondering: if I took this approach and applied it to MixML, would I get Scala, but cleaner?

Q: Don’t you think predicativity is a strong restriction we might regret?

A: Of course you can have impredicative abstraction, but you have to be explicit about it. It’s like pack: you wrap a big type to small type.

Q: So there’s no reason you couldn’t do more, any ideas about more inference?

A: I don’t know. You want to keep it understandable; this small type restriction is easy to understand, but something more general would be interesting.

Q: (Derek Dreyer) When you put up the type class syntax, I thought, this might be nicer syntax for modular type classes. The point was you can encode type classes with modules, but the syntax was heavyweight.

A: I’ve always thought of type classes as first-class modules plus inference magic. Here we have the first-class modules; you’d still need the inference magic to get type classes.

Q: Tell us about phase separation.

A: It’s a bit technical. The point is, where do you allow types to be projected from? If you make it arbitrary, you might depend on runtime computation. In F-ing, this is quite easy, because projectibility is equal to transparency. The structure of the language enforces that this is always the case. Esp. for applicative functors, which are so limited. Also, conditionals are not a problem because you need to annotate them, and if you picked different types in the branches, then it would need an existential…

Q: There has been a movement to putting dependent types into languages, but here you’re avoiding it at all costs.

A: This is pushing the boundary further. It’s a fair question. Why don’t we just accept we want dependent types? Maybe the answer is we should. But what do we buy into?

Q: About modules… in user terminology, they tend to be things you can independently compile and then link together. But you didn’t address this. What about separate compilation?

A: The usual formal explanation for separate compilation is for functors: a module is separately compiled as a functor over its inputs. You just add more constructs to the module language. (So functors can be separately compiled?) Yes; a closed functor is the model you have. (If you seriously try to compile down to machine code, you’ll run into big problems.) I agree, but these problems are mostly orthogonal to what I showed. Changing the language doesn’t change these problems.

Comments