ezyang's research log

What I did today. High volume, low context. Sometimes nonsense. (Archive)

Rebase and submodules

One of the things that I simply cannot get over is that Git, by default, does not change where HEAD is pointing on a submodule when you do various working-tree operations in the parent repository, such as ‘git checkout’, ‘git rebase’, etc. There’s a good reason for this: the operation of ‘git submodule update’ could fail if Git doesn’t know where to get the commit hash.

Unfortunately, this interacts badly with another UI decision: when your submodule pointer gets “left behind”, it shows up as a CHANGE in your local working directory. To make matters worse, there is no way to distinguish this change from a legitimate change to the submodule which you actually wanted to add and put into a commit.

The epitome of this badness is doing a rebase on a tree with submodules. When you hop back to a version earlier than a submodule change, the submodule lags behind and now SHOWS UP as a change. If you blithely “git commit --amend -a”, you’ll pick the submodule up as a change and accidentally commit the submodule bump too early.

Oh, and it gets better: git submodule update will blithely blow away TRUE changes to your submodule. So you can’t invoke that willy nilly either!
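The pitfall is easy to reproduce in a few commands. This is a hypothetical sketch: the paths, commit messages, and user identities are all made up, and the `protocol.file.allow` override is only needed on newer versions of Git that restrict file-protocol submodule clones.

```shell
# Hypothetical repro of the "left behind" submodule pointer.
set -e
tmp=$(mktemp -d)
g() { git -c user.email=you@example.com -c user.name=you "$@"; }

# A standalone repo to serve as the submodule, with one commit (A).
g init -q "$tmp/sub"
g -C "$tmp/sub" commit -q --allow-empty -m A

# A parent repo that records the submodule at commit A.
g init -q "$tmp/parent"
cd "$tmp/parent"
g -c protocol.file.allow=always submodule --quiet add "$tmp/sub" sub
g commit -q -m "parent: submodule at A"

# Advance the submodule to commit B and record that in the parent.
g -C sub commit -q --allow-empty -m B
g add sub
g commit -q -m "parent: submodule at B"

# Hop back to the earlier parent commit.  The parent now expects the
# submodule at A, but Git leaves the submodule's working tree at B...
g checkout -q HEAD~1
git status --porcelain   # the lagging submodule SHOWS UP as a change
```

A careless `git commit -a` at this point would record the B pointer into the old commit, which is exactly the rebase accident described above.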

I’m not sure what the fix here is, but it’s just not good.


Self-bootstrapping metatheory

A while ago I wrote a post about F*’s ability to bootstrap itself: http://blog.ezyang.com/2011/10/the-new-reflections-on-trusting-trust/ The gist was that F*’s language is sufficiently powerful that it is possible to write down its own metatheory inside itself. That is, F* is powerful enough to explain itself, without needing to appeal to anything external: we’d say its metatheory is self-bootstrapping.

Some language features, however, make it impossible to self-bootstrap the metatheory. One classic example is input/output. Now, by input/output, I don’t mean an abstract syntax tree of IO actions to take and what to do given their possible responses. Such a tree explains what a program could do, but not what the program actually will do. It’s the difference between saying the program will print out the front page of CNN, and saying the program prints out “HASKELLER INITIATES MISSILE STRIKE, THIRTY KILLED.” The only way to determine the semantics in the latter case is to actually consult the universe (i.e. fire the missiles). As it turns out, the universe is playing the role of your metatheory!
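To make the distinction concrete, here is a minimal Haskell sketch (the type and names are made up for illustration) of an abstract syntax tree of IO actions. Because every possible response is represented inside the tree, it can be interpreted by a pure function: no universe required.

```haskell
-- An AST of teletype-style IO actions: what a program *could* do,
-- for every possible response.
data Teletype a
  = Done a
  | PutLine String (Teletype a)      -- print a line, then continue
  | GetLine (String -> Teletype a)   -- read a line, continue with it

-- A pure interpreter: feed in a list of input lines, get back the
-- output lines and the result.  This is a "metatheory" for Teletype,
-- and it fits comfortably inside the theory.
run :: [String] -> Teletype a -> ([String], a)
run _      (Done x)      = ([], x)
run ins    (PutLine s k) = let (out, x) = run ins k in (s : out, x)
run (i:is) (GetLine k)   = run is (k i)
run []     (GetLine k)   = run [] (k "")  -- out of input: feed ""

greet :: Teletype ()
greet = GetLine (\name -> PutLine ("hello, " ++ name) (Done ()))
```

Real IO admits no such interpreter: the “responses” come from the world, which is why its semantics live in the metatheory.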

Another example is the Mtac monad. This monad makes nonterminating programs permissible in Coq; this remains sound, however, because even though you might write a nonterminating program, when you attempt to typecheck it, Coq will hang and never actually claim that the proof is true. But if the theory infinite-loops during typechecking, that means the metatheory infinite-loops during evaluation (after all, that’s what makes it a metatheory): there is nontermination in the metatheory. This is bad news for self-bootstrapping: our theory is terminating (and indeed must be, otherwise it would be unsound), but our metatheory requires nontermination! (It’s still type safe in an appropriate sense, but you can prove false.) We could make the metatheory terminating by, of course, requiring its typechecking phase to infinite loop in those situations, but that’s only pushing the problem to the metametatheory.

To phrase things differently, when you are in a situation where you have a monad that you cannot seem to make a monad transformer (there is no IO transformer, Mtac as a monad transformer is unsound), this is probably because the true semantics of your monad have been punted to the metatheory, and not the theory itself. (When chatting about this with Jason, we wondered if it made any sense at all to consider infinite regress of monad transformers by treating it as some sort of equation, which admitted any model which satisfied itself. Weird.)

By the way, doesn’t Gödel’s incompleteness theorem say something about the inability of theories to talk about themselves? So why isn’t all of this self-bootstrapping impossible? (Editor’s note: some incorrect speculation about this, see comment thread for more.)


Selective generativity


Generative module semantics acts quite a bit like newtyping. And everyone knows that if you need multiple instances for the same logical “type”, you should newtype.
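For concreteness, here is the standard newtype trick in Haskell (a sketch; the names are arbitrary): the same underlying type gets two different instances of the same class by hiding behind two nominal wrappers.

```haskell
newtype Sum  = Sum  Int
newtype Prod = Prod Int

-- Two different Semigroup instances for what is logically the same
-- type, Int, made legal by giving each its own nominal wrapper.
instance Semigroup Sum  where Sum  a <> Sum  b = Sum  (a + b)
instance Semigroup Prod where Prod a <> Prod b = Prod (a * b)
```

Crucially, both wrappers share Int’s representation; the point below is that generative module instantiation would need the same property.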

The implication here is that selective generativity of dependencies might be used to arrange for multiple instances for the same data type to coexist. Suppose you have

import Data.Bool
import Data.SomeTypeClass
instance SomeTypeClass Bool where

and some other user now needs to integrate this library with some of their code which also defines the type class for Bool. Intuitively, what we’d like the elaborated version of this code to look like is:

import Data.MyBool
import Data.SomeTypeClass
instance SomeTypeClass MyBool where

with all of the relevant instances of Bool replaced with MyBool (bonus points if you can leave alone the bits that don’t rely on the type class). Now the two Bools don’t unify, there’s no overlap, and YOU GET A TYPE ERROR WHEN YOU TRY TO USE ONE IN THE OTHER CONTEXT. You can coerce if it’s used in a representational context, but not in a nominal context. Everything is great! The trick, though, is that GENERATIVE MODULE INSTANTIATION SHOULD REUSE REPRESENTATION. That might be tricky.
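The representational/nominal distinction here is the one Data.Coerce tracks. A sketch, with MyBool standing in for the elaborated type:

```haskell
import Data.Coerce (coerce)

newtype MyBool = MyBool Bool

-- The list type uses its argument representationally, so a list of
-- Bool can be reinterpreted as a list of MyBool at zero cost.  A type
-- that used its parameter nominally (say, a Set ordered by a Bool
-- instance) would refuse this coercion.
relabel :: [Bool] -> [MyBool]
relabel = coerce
```

This is exactly the behavior wanted above: free conversion where only the representation matters, a type error where the instance matters.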



The key insight of the apartness check, in Conor McBride’s words, is not to test against a minimal model, but the maximal model. (That’s why it rejects things even if they only unify due to an infinite unifier.)


Andy Gordon - Strategic Thinking for Researchers

[Alan Perlis quote, 1966]

It’s really about what you do beyond writing single papers.

At Simon’s talk yesterday, he gave a lot of good tips about how to write a paper, come up with ideas, etc. This talk is about longer term strategies: your PhD, postdocs, your whole career. It’s worth thinking about this, because people who are passionate about research really make it their lives. It can be long hours in your early and mid career. If you’re putting that much effort in, think strategically about how to make the most of it.

There’s no original contribution in this talk, and I feel well off the leash, pontificating about ideas that have impacted me, that I think I’ve tried at one time or another with success, but that I have not rigorously evaluated. Of course, there’s no one correct strategy, and I don’t think people follow all of these.

A few years ago, with a colleague, Thore, I had fun here organizing a workshop for researchers at Microsoft about impact. We called it “Making a Difference, by Research.” The goal was: how to have impact? How can our work change the world? Let’s move away from the paper you’re thinking about right now, your final report, your dissertation… This workshop was a lot of fun, so the next year we did the MSR Speed Dating Society (there were some misunderstandings about what this should be; one researcher said, “Andy, I can’t come, I’m a happily married man”). The actual idea was that social links would lay foundations for transfer of expertise: have some fun, 90 minutes of meeting… now that we’re on separate floors, there are some researchers you don’t see that often. It would be good to break down barriers. Some of the work I’ve done that was most influential was taking an idea from one area and putting it in another.

Most important thing: know what you’re trying to do. [Two quotes, which seem a bit contradictory: Bjarne Stroustrup exhorting that it should be clear what you’re trying to build, and Wernher von Braun (rocket scientist): research is what I’m doing when I don’t know what I’m doing.] The reconciliation: you should have some conception of what you’re trying to do! What’s the most important problem in your field? (Responses: P=NP? Trans [indistinct] computer architecture? Artificial general intelligence?) Follow-up question: what are you working on?

I’ve had this happen to me in job interviews, and I’ve said… “Well, I’m not actually working on that.” Maybe you can’t drop everything at once to work on it, but you can move towards it.

Serendipity (chance connections): this happens, but we shouldn’t rely on it. “Chance favors the prepared mind” (Pasteur). He was ready for the luck, so he could take advantage of it.

Richard Hamming: you have to work on the important problems. His essay “You and Your Research” is about working in corporate labs and what you really need to do. He set aside Friday afternoons for asking “what are the big questions?” But don’t follow a “peanut butter strategy”, where you spread yourself thinly over multiple different problems. Don’t do lots of little things.

Steve Johnson (TED talk) has analyzed where famous inventions in history came from, and we can take comfort: a lot of them came down to collective effort. Example: double-entry bookkeeping (from Florence), but the idea of tallying outgoings and incomings had arisen in lots of places, although it hadn’t been written down. Another kind of invention is combinatorial (Gutenberg: ink, paper, press, movable type; all of these things existed previously, but Gutenberg brought them together). A third kind is “sheer individual insight”: Willis Carrier, who in 1905 invented the air conditioning machine. Some customers said it was hot and humid, and he figured out how to solve that. His eureka moment: it was misty out, and he thought, maybe I can create artificial mist, and he figured out how to do that.

When I was a PhD student, I thought, “how am I going to come up with big ideas?”, expecting a eureka moment from nowhere. In Johnson’s analysis, individual insight very rarely comes up; instead, collective effort and combinatorial ideas are far more common. Most inventions are not original: it’s just that nobody had thought of how to combine the existing pieces in a particular way. I find that quite comforting, because I feel good at combining ideas, but not necessarily at coming up with things from scratch.

As for where good ideas come from: “exploring the adjacent possible”, the set of ideas that are about to be found. “Liquid networks” facilitate combinations of ideas. Great advice for scientists: live in a city; if you live in London, you’re more likely to go to parties with a broad range of people and come into contact with different ideas. And the slow hunch: Darwin discovered evolution, and if you read his memoirs, he believed Sept 28, 1838 was when he came up with the theory of evolution. But interestingly, one of his biographers read his diaries for the year before, and it turns out he had essentially gotten the idea in some form before the eureka moment; and even afterwards, he hadn’t really completely got it. The point is: ideas come up, you have a hunch for a while that something is possible, and eventually things fall into place. It’s quite rare for it to be a single eureka moment, and even when people have them, they didn’t really! As for serendipity: LSD was medicinal, Teflon was for the mission to the moon, and Viagra was for hypertension, before they figured out it had, well, other applications. (laughter) Finally: error. People hate making mistakes, but you should make them; you’ll learn from them. Lee de Forest, a colorful businessman, invented electronics: the Audion (the prelude to the triode and vacuum tube). He made an amplifier out of it. He thought it worked because of rarefied gas (true vacuums weren’t possible then): amplification due to ions, as opposed to electrons flying through. So he was wrong about how it worked, but he still managed to make it work, and became rich doing business in it (and he defrauded people, people defrauded him, four wives). I recommend this book.

Changing tack… Life in research labs. Don Syme invented F#. This lecture is about all stages of your career, and the nice thing about a research lab is you can take some time off to do something really substantial in the company. Don Syme is the posterchild for a great tech transfer success. Joining in the late 90s, he did .NET generics (in the runtime and the original C#), going “all in” 93-03, when he wasn’t doing anything else besides hacking C# code; and then he built a new programming language to show off what you could do with generics. (A bunch of things on the slide.) He was here speaking to us researchers, encouraging colleagues to get involved; it’s quite a challenge as researchers to reach out to people in Redmond, where they’re building products with short timelines, whereas here we have a lot of freedom and we’re in a different management chain. For them to trust us is a big deal: we don’t report to them. Don’s advice: you need to go in, be fully dedicated, get respect, find actual problems they care about, and apply your ideas to enterprises the company cares about. Also, often the company has a shared vision from the top (Gates was behind .NET, when Microsoft needed an alternative to the Java ecosystem: a managed code runtime for webservers using C# and F#). He’s got a lot of stories about this; sometimes it’s difficult due to different values, but it can make a really big difference. There’s a lot of upside.

Related to that: George Heilmeier, also a character. He was at DARPA (the American funding body) in the 70s and a great engineer who pioneered the liquid crystal display. There are hundreds of LCDs in this room: talk about impact! But when he was at DARPA, he had a checklist for projects:

  • What are you trying to do? (GOAL)
  • How is it done today, and what are the limits of current practice?
  • What is the new approach, and why do you think it will be successful?
  • Who cares?
  • If you’re successful, what difference will it make? (IMPACT)
  • What are the risks and the payoffs? How much will it cost? How long will it take?
  • What are the midterm and final exams to check for success? (REVIEW)

It’s the “Heilmeier catechism”. But this is a bit abstract, so let’s imagine I was Don Syme. What would I say?

What are you trying to do? Allow the benefits of typed functional programming on the .NET platform. This is a little jargony, but for people inside Microsoft it is jargon-free.

How is it done today, and what are the limits? C# had no generics, no first-class functions, no type inference. None of the benefits of code reuse.

What is the new approach, and why will it be successful? Simple syntax, as opposed to C#. He had a good start: the CLR was going to support multiple languages, so there were in fact enough instructions that compiling F# would be feasible.

Who cares? Well, people making websites maybe don’t really care. But there was a potential market in the finance industry: quants who wanted automated trading, and these people were super interested in functional programming and F#. When it turned into an actual product, this argument made people decide to go with it.

If you’re successful, what difference will it make? Here was a business setting: Microsoft tries to lock customers into its technology, and this would help bring financial institutions onto .NET.

What are the risks and payoffs? Risks: little support from groups (and this happened, but Don stuck with it). The payoff, transferring ideas.

How much will it cost? Don went all in, but this was just about the most a single person could do, before getting other people involved.

How long will it take? A year. But it took 8-9 years before it became a real product. There were a number of midterms: compiling itself, the free download, customers…

If you’re writing a grant proposal, think about these questions and argue for what you’re going to do.

Seek criticism. (beat) This definitely applies as a PhD student: you know what you’re trying to do and you’ve got some ideas on how to get there, but you’re inexperienced and don’t know the literature; you should put yourself out there and get some feedback. Write proposals; they’re great, despite the complaints. It’s really good to write down what you’re trying to do and get feedback from your formal committee. Also get people together and force them to listen to your talk. If you don’t get feedback, maybe they’re critical but loath to say so in public; take them aside at some point and get the feedback later. John Wheeler: make mistakes as fast as possible.

Reviews and planning of projects: it’s easy to fool yourself, hard to fool your peers. Get the ideas out there. Get the feedback. Don’t worry about failures. I love this picture: the two people who gave me my big break, Needham and Gates. Bill went to Roger and said, “Hire the best people you can, let them do what they want, and if all the research projects have succeeded, you have failed!” The idea is, if people propose that they’re going to do something, and then at the review it has all happened, that means they’re not pushing themselves hard enough. It’s as if people said, “This year, we’re going to go to Sainsbury’s.” What you really want is, “This year, we’re going to go to the moon.” And then at the end they say, “We didn’t make it to the moon… but we made it to the space station.” In terms of numbers, maybe we’ve had 100-200 projects, big enough that a few people are working on each, and most of them successfully produced papers. But we’ve had 2-3 successful moonshots: F#, Kinect. That’s why we set up this lab: he wanted people to feel empowered to go for big things. He’d already got things in development; he wanted researchers to dream big and occasionally have big impact. That should apply to research in all settings, universities and corporate labs. The reason the government wants universities to do research is big innovations for new businesses, etc.

Don’t be seduced by proxies. As time goes on, you’ll find you’re invited onto PCs, people cite you, you’re asked to give invited talks, you have software. Maybe this is part of why you came into research (e.g. you want to get software out there), but you really wanted to change the world, not to sit on PCs. Pay these things their due, as a necessary evil, but don’t confuse the proxies for what you’re trying to do.

Work in collaboration. Collaboration is great! When you do your PhD it’s individual, but when you become more senior you get to work with people. Edsger Dijkstra’s slogan: “Only do what only you can do.” Figure out what your unique contribution is, and do it. If you’re on a project where anyone else could have done it, you’re wasting your talents. Your team needs pigs, not chickens: the degree of commitment people have to a project, where some people are driving and some are helping a little bit. The analogy is the breakfast plate: the chicken contributes an egg, but the pig is committed. Across disciplines? Walter Scott: “One half the world thinks the other daft.” We always divide into different groups and think the other side is daft. That’s a thing to be wary of; it means the paper you write for POPL is not the kind of paper you’d send to an OS conference, even if it applies PL ideas to OSes. Communities have different values. Subjects are conveniences for administrators: it’s just science, at the end of the day (e.g. machine learning, which goes by statistics, ML, statistical chemistry, or… lots of different names).

Do interdisciplinary work… but AFTER YOUR PHD. It’s pretty risky to do it in your PhD, you need to master one discipline first. And only have one specialist per discipline, or the two PL experts will argue about irrelevant nonsense in the discipline rather than collaborating.

More slides: two about theoreticians, one about practitioners. Theoreticians: Robin Milner, a professor when I was an undergrad in Edinburgh, established the laboratory for foundations of computer science. His emphasis, which was unusual in theory: he wanted interplay between theory and practice. The design of computing systems can only succeed if it’s well grounded in theory, and important concepts in theory can only emerge through protracted exposure to practice. Test theory in practice, like physical scientists with controlled experiments; take theories and try to do practical things with them. LCF was a theorem proving system, which needed a typed programming language, because he needed to formulate unproved formulas as goals to be proved, and then have an abstract type of proved theorems, which could only be constructed through the inference rules of the logic. He built up a grand apparatus of functional programming, and in the course of figuring all that out, he invented ML. ML has gone on to be hugely influential. If you’re a theoretician, it’s great to explore math, but you need to figure out how it applies to practical things.

Moving along: Eric Ries has a book, “The Lean Startup.” It’s not too much of a stretch to take his ideas for startups and apply them to your research. This doesn’t apply to everything (maybe don’t do this for theory), but for actual artifacts: create new products and services under conditions of extreme uncertainty. Learning what your customers want, and what will work, is what a startup is trying to do. It’s really easy to kid yourself about what customers want (inventing something that only works for you). Validated learning: the “minimum viable product”. You shouldn’t wait until the software is completely ready before giving it to actual people; as soon as possible, get feedback from people to guide what’s worth investing in. He wants a version that lets you iterate: build, measure, learn, build again. One of his examples was a video: Dropbox was a hugely successful startup, and early on they didn’t know what features they needed, but they cooked up a video of the experience you could have, and a lot of people wanted it. Another company, Food on the Table, wanted a website for shopping locally, fulfilling menus: the site would suggest a menu and figure out which local suppliers would give you the ingredients. They weren’t sure if this would work, so what the CEO did instead: he put out some adverts for people who wanted the service, found one person prepared to pay $10/week, went down and sat beside her, asked what she would like for supper this Saturday, had a conversation, figured out what things she would like, and figured out how to source them from local supermarkets. No software, no investment, it was just him; it was key that the person was paying him. Validated learning: someone interested in what you’re going to give. It was a success, even though it took a while to get the website going.

If you’re trying to build something practical, put a minimal product in front of someone, or possibly not a product at all. Another example: a website to answer questions (AI), a Wizard of Oz type of thing. Behind the scenes, there were people who actually answered the questions. I don’t know if that worked out, but they got some information about what questions people asked. Think out of the box.

Work with the system. Sometimes people don’t. There are huge arrays of resources at your disposal. Clay Shirky has a book, “Cognitive Surplus”; he’s a thinker about the internet. His thesis: in the developed world, white collar workers have hours free in the evening in which they might contribute to an enterprise, such as open source coding. These people you can exploit… well, not exploit, but they’re there. Don Syme did that: his project, in the last few years, has benefited from the fact that people play with F# in their hobby time and contribute code. Lots of stories about open source. It applies to science too: there’s a lot of citizen science going on, which you might tap into. Galaxy Zoo shows people images of distant galaxies, and humans are trained to classify them in different ways, sourcing data that way. Maybe you can come up with some crowdsourcing idea… Also, if you fight the system, be very careful. Do you want to change the system, or do top class science? Some guy wanted blackboards with chalk, made a huge fuss, fought the system, and prevailed, but he spent days and hours complaining, and it was kind of pointless; he told me afterwards that, in the end, he’d have been much better off teaching students and doing research. Another example: someone joined MSR some time ago, and he wanted f.bloggs in his email address. Richard made a big mistake: no one had a dot in their email address, so none of the systems (personnel, HR) were tested on it, and sure enough, some crucial HR system failed and he didn’t get a bonus. Maybe I exaggerate, but he wasted a lot of time. Go with the system!

Finally, invite yourself places. This handsome young man: in 1992, when I ceased to be a student and got my PhD, I got a paper into a conference in Boston. “Sure Andy, we’ll pay for the airfare.” And I thought, it’s a shame to go for only four days, so: “Why don’t I invite myself to a few universities?” Sure enough, it was great; the universities said yes. I flew to Boston, then Yale, then DC (the Watergate building), in the middle of the Andy Gordon North American Lecture Tour (you’ve got to put yourself forward, because no one else will), went all the way up to Calgary (chased by a bear), and eventually back to NJ… put yourself forward. Invite yourself. The larger point behind the specific suggestion: if you get money to go somewhere, don’t just go to one university; ask around. People love to have you visit and give talks. They often pay for accommodation… some universities even paid a fee! $100 checks; it was great. Do that! It works! I talk to a lot of grad students, and no one has ever heard of this… you should do it.

Now I’m going to get spiritual. These are great jobs, and a lot of fun, but they are demanding, stressful, and anxiety-inducing. No one will tell you what to do, but if things don’t work out, they were your ideas. Strategies: get some exercise (picture of Turing as a marathon runner); make sure it’s fun (picture of Perlis); and (picture of Kathleen Fisher) be philosophical about how to pursue a career as a professional, in the sense of balancing work, family, and community; she has a lot of advice, check the PDF.

Five Regrets of the Dying. Top five regrets, one includes “I wish I hadn’t worked so hard.” Take it easy.

12 resolutions for grad students. Maybe Matt Might works too hard, but he’s got some great resolutions; check them out. January: map out the year. February: improve productivity. Embrace the uncomfortable (prove a theorem?). Upgrade your tools. Stay healthy. Update your CV/web site. Network. (Put yourself forward, because no one else will.) Say thanks. (If there’s something you loved, drop them an email. You’ll be glad on your deathbed.) (Simon said the same!) Volunteer for a talk. Practice writing. (If you’re happier coding than writing, do some writing, maybe not at the paper deadline.) Check in with your committee. Think about the job market; a good time to think about internships (December). Email someone. Don’t imagine the system will automatically figure out that it should pull you in.

First part of homework, a time management thing: manage email time better. Delete/delegate/do/defer. Process it really quickly. Switch off notifications that email has arrived. No email on weekends: no sending email. If you’re senior, DEFINITELY don’t do that. Heads of departments are bad at this. YOU choose a boundary.

Second part of homework: organize a speed dating society. “Andy, I’m just a grad student, no one would pay attention to me.” But they would pay attention to you. Above a certain size, institutions become siloed. Tell your head of department or your advisor; they’ll say go for it. Lead a meeting: professional initiative. YOU cross a boundary. If you do this, email Andy and say it was a huge amount of fun.

(summary slide)

Q: What advice do you find people disagree with?

A: Email on weekends. Also, criticism will often not be imposed on you, you will just be met by deathly silence. So seek people out and ask them for their opinions.

Q: In the Heilmeier catechism, you bring up “Who cares?” But in research, you don’t necessarily have that foresight.

A: There must be someone who cares. Ask your advisor why they care, and they should have an answer.

Q: What if it’s just pushing to the next level?

A: Then the people who care are other people in the community. We’ve got customers at different levels. The immediate customers of theory are often people in the community.

Q: What are some activities you did in the speed dating society?

A: It was very simple. 24 people, a 1-minute surprise on a slide, and then we paired people up and had 5-8 speed dates, with 5 minutes of conversation each.

(Heard afterwards: “Everyone likes to give an opinion, so ask them for their opinion.”)


Sketch: Concept for an intergalactic restaurant

The trope of an intergalactic restaurant (or bar, or whatever) is a motley mix of species from many different places. I.e. it’s an opportunity for the special effects crew and the alien species designers to show off.

What if you didn’t have a budget for that, and had to cast it all as human actors? That would be a very boring intergalactic restaurant, wouldn’t it?

Suppose that the year is 2401, intergalactic travel is common, but the majority of alien species (humans included) have not gotten over a deep-seated weirded-out-ness about interacting with aliens. This poses an interesting problem for the proprietors of major travel nexus points: one can’t arrange for foreign species to simply never be in sight. So, in a typically overengineered fashion, the designers of these crossroads have decided to use virtual reality technology to provide every traveler the illusion… that every single one of their fellow travelers is their own species!

Imagine a backwaters tourist who has just touched down at their local spaceport. The illusion is in full effect: the scene he sees is by no means dissimilar from the hustle and bustle of the airports he has been familiar with, men and women in suits walking by. And in the crowded anonymity of a space like that, all of these other characters might as well be philosophical zombies. But the illusion is only skin deep: they may look like humans, but they certainly won’t act like it. And that could lead to some deeply disorienting interactions…


Bridge 2014-06-26


Board 1: When declarer leads low out of hand in D, DUCK IT. Either declarer has Kx, in which case you’re not losing anything, or partner has the K and will win the trick for you.

Board 2: Interesting defensive problem for S, given the auction 2D 3N. Should they switch to a heart? If N has AQx, that beats the contract. But clubs could also lead to victory.

Board 8: Even if partner is not doing what you want, play your best game. In an ending of JT52 versus declarer AKQ3, DON’T LEAD THE 2!!

Board 10: ALWAYS ANALYZE THE LEAD. In this case, 6D lead by E; rule of 11 will tell you that you should let it RIDE.

Board 21: You can make inferences about lengths from opp leads. Prefer to lead low to an honor on dicey combinations, you can pick up the T doubleton that way. It would be better to get the count, although in this case you can’t. (NB: low to the J unblocks, so it might be better)

Board 23: This is a pretty weird hand, but on the play, I went for a ruff-sluff in spades without cashing our side club winner. This slipped us a trick. Also, the club position is difficult. Qxx looking at dummy with xxxx, small is right if declarer has KJ, but Q is right if partner has AJ. The trouble with small clubs is that it can induce a misguess in the latter case.

Board 27: Dodgy question about the diamond raise after auction proceeds 1D (1S); P (2C); 2D (2S) all pass. With worse spade spots, raising diamonds is clear, but KT942 is fairly chunky. With a diamond stiff, might be worth doubling.

Board 28: Never mind partner’s lack of a double. Count in clubs is important, because holding up once is needed to prevent declarer from enjoying the good clubs. Partner needs to exit a spade after cashing hearts.


Cod Poached in Court Bouillon

Court Bouillon sounds complicated but it’s actually very simple. I didn’t have any saffron (oops) and subbed new potatoes. Cod was pleasantly flaky. Very easy, would cook again!! Paired with white wine and sauteed shallots and asparagus.


The relationship of GHC and Cabal

Yesterday, while chatting with Simon Peyton Jones, I got a better picture of Simon’s mental model of how GHC and Cabal (the library) fit together. Essentially, Simon has an imaginary firewall between GHC and Cabal, where if there is any package-related complexity that the core GHC doesn’t need to know about, it can be pushed into Cabal. GHC has a low-level interface that can be implemented simply, and Cabal is responsible for “pushing the buttons” on this interface so that GHC does the right thing.

Thus, one road to understanding how Cabal works (and perhaps, why it’s failing to build some package of yours) is to understand what knobs it has available from GHC.
