More Compatible Packages

The Problem

A few days ago, Greg Weber posted “Transcending to dependency heaven”, describing the latest version of Cabal, which promises to greatly improve the reliability of installing a set of Cabal packages. However, this is not entirely satisfying – we still have the fundamental problems aptly described in a blog post by cdsmith.

Short summaries of some potential solutions:

  • Improve Cabal’s dependency resolution behavior.
  • Isolate build environments.
  • “Create lists of blessed package-sets”.
  • Reduce external dependencies by identifying internal dependencies (imports that do not leak into the interface). This has the downside of potentially shipping multiple versions of libraries in your linked binary.

These solutions all help, but ignore the fundamental nature of the diamond dependency issue.

We need to look closely at the purpose and effects of these upper-bound constraints. The intention is to prevent targeting a new, unforeseen version of an API, which may change the interfaces and behaviors. By addressing this with the PVP’s “proper”, conservative upper bounds, we usually avoid the problem of spewing a bunch of compilation errors.

However, this comes at a huge cost: packages that would otherwise compile and function do not. An upgrade to one package that many of your dependencies depend on can cause many packages to need corresponding updates. This makes a portion of package maintenance proportional to the sum of the major-version velocities of the dependencies. This is awful, particularly as small, task-specific packages are encouraged. The last thing we want to do in such an ecosystem is to disincentivize adding dependencies and splitting packages.


Solution: Delta Modules

My proposed solution is to create a convention for versioning the modules in packages in order to enable far less conservative package version upper bounds. The idea is that Haskell has enough ways to create synonyms for named things that, for most superficial changes, the old API can be expressed straightforwardly in terms of the new.

For example, data and newtype declarations can be renamed and have parameters re-ordered by type synonyms. Functions can be proxied by declaring functions of the form “old = new” and, in the case of parameter re-ordering, “old x y z = new z y x”.
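
To make this concrete, here is a minimal, self-contained sketch (all names hypothetical) of recovering an old API purely in terms of a new one, using a type synonym and a proxy definition:

module Sketch where

-- The "new" API: the type's parameters and the function's arguments
-- were re-ordered relative to the old API.
data Tuple b a = Tuple b a

merge :: b -> a -> Tuple b a
merge = Tuple

-- The "old" API, recovered purely in terms of the new:
type Pair a b = Tuple b a

combine :: a -> b -> Pair a b
combine x y = merge y x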

Many basic refactorings are analogous to plain Haskell code. When applying the refactorings that a module represents, you are just inlining all of the elements of the module necessary to express the old API in terms of the new.

It makes sense to package up these synonyms in a module that shares a namespace prefix with the module being versioned. For example, the diagrams-lib package would export Diagrams.Prelude.V0_5 as well as Diagrams.Prelude. From here on, such modules will be referred to as “delta modules”, as they express the change in API from version to version.


Example

Let’s say we are going through a number of iterations of a library. For familiarity’s sake, I chose a well-known, simple module: Data.Maybe.


Version 0.0.1

module Data.Maybe ( Maybe(Nothing, Just), maybe ) where

data Maybe a = Nothing | Just a

-- Note: this intentionally deviates from the typical definition
maybe _ d Nothing  = d
maybe f _ (Just x) = f x

In order for people to start using the delta API, preemptively avoiding breaking changes, there needs to be a delta module that straight re-exports the current version:

module Data.Maybe.V0_0_1
  ( Maybe(Nothing, Just), maybe ) where
import Data.Maybe


Version 0.0.2

What happens if we add isNothing / isJust to this?

module Data.Maybe ( Maybe(Nothing, Just), maybe, isNothing, isJust ) where

data Maybe a = Nothing | Just a

-- Note: this intentionally deviates from the typical definition
maybe _ d Nothing  = d
maybe f _ (Just x) = f x

isNothing = maybe (const False) True
isJust    = maybe (const True)  False

This is a benign API change – just additions of top-level declarations. However, it is still a change that matters, as it could break compilation for modules that import Data.Maybe without qualification or explicit import lists. Ideally other changes would be bundled in, as it seems a bit silly to introduce a module just to remove elements of the API, but at least the delta module is very straightforward to create.

Since Data.Maybe.V0_0_1 only differs from the latest version by hiding some exports, we just add a WARNING pragma, and import the latest delta module.

module Data.Maybe.V0_0_1 {-# WARNING "V0_0_1 is not the latest version" #-}
  ( Maybe(Nothing, Just), maybe ) where
import Data.Maybe.V0_0_2

module Data.Maybe.V0_0_2
  ( Maybe(Nothing, Just), maybe, isNothing, isJust ) where
import Data.Maybe

Making the import structure into a linked list of delta modules allows us to avoid modifying any module but the most recent. This works because if a delta module exported the correct API when it was written against the then-latest version, then it ought to still be correct, provided that all subsequent delta modules are too.


Version 0.1.0

Now a breaking change! We’ll fix the definition of maybe to correspond to the typical API:

module Data.Maybe ( Maybe(Nothing, Just), maybe, isNothing, isJust ) where

data Maybe a = Nothing | Just a

-- Now matches the typical definition
maybe d _ Nothing  = d
maybe _ f (Just x) = f x

isNothing = maybe True  (const False)
isJust    = maybe False (const True)

Here’s what the delta modules would look like:

module Data.Maybe.V0_0_2 {-# WARNING "V0_0_2 is not the latest version" #-}
  ( Maybe(Nothing, Just), maybe, isNothing, isJust ) where
import Data.Maybe.V0_1_0
  ( Maybe(Nothing, Just),        isNothing, isJust )
import qualified Data.Maybe.V0_1_0 as N

-- The old argument order, expressed in terms of the new:
maybe a b = N.maybe b a

module Data.Maybe.V0_1_0
  ( Maybe(Nothing, Just), maybe, isNothing, isJust ) where
import Data.Maybe



Version 1.0.0

Now a really breaking change! As I will explain later, setting the version number to “1.0.0” should be a strong indication that the pre-1.0.0 delta modules are removed or imperfect.

module Data.Maybe ( Maybe, maybe, mkJust, mkNothing, isNothing, isJust ) where

import Data.Either ( Either(..), either )

type Maybe a = Either () a

maybe d f = either (const d) f

mkNothing = Left ()
mkJust    = Right

-- Other definitions as before
isNothing = maybe True  (const False)
isJust    = maybe False (const True)
module Data.Maybe.V0_1_0 {-# WARNING "V0_1_0 is not the latest version" #-}
  ( Maybe(Nothing, Just), maybe, isNothing, isJust ) where
import qualified Data.Maybe.V1_0_0 as N

{-# DEPRECATED Maybe "The old representation of Maybe may require version-coercion" #-}
data Maybe a = Nothing | Just a

instance View (Maybe a) (N.Maybe a) where
  view Nothing   = Left ()
  view (Just  x) = Right x

instance View (N.Maybe a) (Maybe a) where
  view (Left  _) = Nothing
  view (Right x) = Just x

maybe d f = N.maybe d f . view
isNothing = N.isNothing . view
isJust    = N.isJust    . view

module Data.Maybe.V1_0_0
  ( module Data.Maybe ) where
import Data.Maybe

This is pretty ugly. We had to export new versions of all of the functions that mention the data type. Worse yet, libraries that target version 0.1.0 or before won’t be directly compatible with those that come after. Also, this code references a class which is a (likely intentionally) unimplemented portion of the ViewPatterns proposal. The View class looks like this:

class View a b where
  view :: a -> b

These view functions are then used to do pattern matching on the ADT.

type Iso a b = (View a b, View b a)

Iso is a constraint synonym that I made up for when we have views to and from a datatype. Ideally these would form an isomorphism, though that may not be possible for all data types.
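
Here is a minimal, self-contained sketch of how these pieces fit together, reusing the 1.0.0 representation from above (the annotation in the last function pins down the multi-parameter instance):

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, ConstraintKinds #-}
module ViewSketch where

import Prelude hiding (Maybe(..))

class View a b where
  view :: a -> b

type Iso a b = (View a b, View b a)

data Maybe a = Nothing | Just a  -- the old representation
type MaybeNew a = Either () a    -- the new representation

instance View (Maybe a) (MaybeNew a) where
  view Nothing  = Left ()
  view (Just x) = Right x

instance View (MaybeNew a) (Maybe a) where
  view (Left ()) = Nothing
  view (Right x) = Just x

-- Matching the old constructors against the new representation:
isJustOld :: MaybeNew Int -> Bool
isJustOld m = case view m :: Maybe Int of
  Just _  -> True
  Nothing -> False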

Thankfully, if we know that we will be versioning our code in this way, we can pre-empt this in the design:

module A (ThingADT(..), Thing, mkThing) where

data ThingADT = ThingADT Double

newtype Thing = Thing ThingADT

-- One for each constructor
mkThing = Thing . ThingADT

instance View ThingADT Thing    where { view adt         = Thing adt }
instance View Thing    ThingADT where { view (Thing adt) = adt       }

Then, we write all of the library functions in terms of the Thing wrapper. This is a slightly cumbersome solution, but view patterns, particularly implicit ones, really paper over the syntactic impact. mkConstructor functions are used to avoid writing newtype wrappers everywhere.
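
For example, continuing the module above (a sketch, assuming the ViewPatterns extension is enabled), a library function can match through view while constructing with mkThing:

-- A library function written against the wrapper; the view pattern
-- unwraps Thing to ThingADT, and mkThing re-wraps the result.
scale :: Double -> Thing -> Thing
scale k (view -> ThingADT x) = mkThing (k * x)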

In the event that implicit view patterns are added to the language, it may also make sense to add a shorthand for composing view with a constructor, e.g., #ThingADT :: Double -> Thing. I’m not sure what a good constructor pre / postfix would be – for better or worse, Haskell is rather short on spare symbol sequences for use by language extensions.


Class Instances

Thinking in terms of what we can and can’t change with delta modules can guide API design, in order to avoid irrevocable decisions. What kinds of things might we change in class instances?

  • Remove an instance. This is supported pretty directly – just define the instance in the delta module for the last version for which it should exist.

  • Add an instance. This is also supported, though has caveats regarding orphan instances, described below.

  • Remove a constraint from an instance. This is fine, as reducing the constraints on usage still allows it to be used in all of the same places.

  • Add a constraint to an instance. This is a breaking change, because it causes these constraints to need to be added to anything that uses that particular instance. This isn’t really resolvable, but I’ll talk about a mitigating strategy later.


Adopt the Orphans!

Classes and their instances are a leaky part of the delta module abstraction. Namely, instances are inherited from their imports, so instances in the latest version will come with the older delta modules. This would be entirely fine if we could be sure that none of our client packages define or import instances that conflict with these added instances.

In “A world without orphans”, Luke Palmer points out that eliminating orphans allows us to create super-class instances. This is one way to be able to rework hierarchies (such as the numeric hierarchy) – retrospectively split classes while maintaining compatibility.

Orphan instances can be nice – in the comments of the above post, augustss makes a good point that newtype wrappers are clunky. The problem of course is that orphan instances lead to the same benefits and problems as multiple inheritance: “what do we do about conflicting instances?”

The problem with the newtype solution is that it composes badly, making it difficult to write two independent newtypes over a datatype, and easily combine the instances of the two. This can be done, I think, but only if both newtypes are designed to support it (by deriving instances of Control.Newtype and having each newtype be an existential wrapper like forall a. Newtype a Concrete => a). But then it has to be done with a data declaration and not a newtype! I wonder how the performance compares?
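
Here is a rough, self-contained sketch of that existential-wrapper idea, with a minimal stand-in for Control.Newtype’s class (hypothetical names throughout):

{-# LANGUAGE ExistentialQuantification, MultiParamTypeClasses, FunctionalDependencies #-}
module NewtypeWrap where

-- A minimal stand-in for Control.Newtype's class:
class Newtype n o | n -> o where
  pack   :: o -> n
  unpack :: n -> o

newtype Concrete = Concrete Int

-- As noted above, this must be a 'data' declaration, not a 'newtype',
-- because of the class constraint on the wrapped value:
data AnyConcrete = forall n. Newtype n Concrete => AnyConcrete n

underlying :: AnyConcrete -> Concrete
underlying (AnyConcrete n) = unpack n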

So, in lieu of a good, composable solution with newtype(s), the only solution is to attempt to control and mitigate the problems orphan instances cause. I have an idea of one way to do this:

Create an orphan “adoption” registry, and allow package authors to endorse canonical instances. The dependent packages can then annotate with a pragma that proudly declares canonicality, suppressing the warning. Then, when GHC is compiling a non-canonical instance of something that has one, it will throw an error unless “-XNonCanonicalOrphans” is supplied. When building with cabal, using this flag will need to coincide with appropriate markings on the package, so that it’s clear to all users that it exports non-canonical orphan instances.

The most important aspect of this orphan registry is that the packages involved do not need to be imported, and therefore do not expand the package’s dependencies.


Classes

As things stand, almost no change to a typeclass can be properly expressed in a delta module. The good news is, with Constraint Kinds, we are much closer! Constraint Kinds allows us to create constraint synonyms, giving one name for many type-classes.
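
For example, a constraint synonym (this one is illustrative, not from any particular library) gives a single name to a group of classes:

{-# LANGUAGE ConstraintKinds #-}
module ConstraintSyn where

import Data.List (sort)

-- One name for several classes at once:
type Sortable a = (Ord a, Show a)

showSorted :: Sortable a => [a] -> String
showSorted = show . sort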

However, one thing is missing: the ability to declare instances for these synonyms. This feature is necessary for any of the changes to classes to be properly versioned. There’d be a couple of caveats to these type-synonym instances:

  • Not all constraint synonyms can be instantiated – a synonym might mention a class twice. This could be resolved by “sub-instances” – instances within the typing context of an instance, disambiguating which methods are associated with which class.

  • Instances that are part of the type synonym, and declared in the same module, should override anything that would be generated by default. This is discussed in the Default Superclass Instances Proposal. I like my proposal better because, as this wiki page points out, it’s strange to conflate defaults and instances of class aliases.

One nice thing about instances of constraint groups is that they let you consolidate a bunch of instances that need the same constraint context. Once we have this, we can start thinking about having delta modules deal with type classes. Here are all of the things I can imagine you wanting to do to a type class:

  • Rename a type class. This is a simple application of a type constraint synonym.

  • Split a type class. This is an extremely important feature, as it allows for re-organization of typeclass hierarchies. We split a typeclass by having the older version module export the union of the two classes which provide the operations (see the sketch after this list).

  • Combine type classes. This works too – just define the two type classes in the latest version, and use a constraint synonym to merge them.

  • Change method API. If breaking instance declarations, but not usages, is acceptable, then a function declaration works for this. It’s possible to not break instance declarations, but it’s not very pretty – you’d need to preserve both operations in the class, named different things. Each would have default implementations in terms of the other, and the methods would be selectively re-exported.

  • Remove a super-class constraint. This is quite doable – just use a constraint synonym in the delta module to specify that the removed superclass is still required by the class of that name. This would have been the way of expressing the refactoring for the GHC 7.4.1 removal of the Eq / Show superclasses of Num.

  • Add a super-class constraint. This is pretty much out of the picture if the superclass constraint is being added for use in default methods. If it isn’t, then a constraint synonym in the most-recent API would work.

Adding / removing a method is pretty much the same thing as combining / splitting a typeclass. Except in order to effectively remove a method while supporting backwards compatibility, you do still need to implement it.
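
Here is a self-contained sketch of the class-splitting case (all names hypothetical): the new version splits a monolithic class in two, and the delta module recovers the old name as a constraint synonym:

{-# LANGUAGE ConstraintKinds #-}
module SplitSketch where

-- The new, split hierarchy:
class Emptyable f where
  empty :: f a

class Insertable f where
  insert :: a -> f a -> f a

-- The old, monolithic class, recovered as a constraint synonym:
type Container f = (Emptyable f, Insertable f)

-- Code written against the old API still compiles:
singleton :: Container f => a -> f a
singleton x = insert x empty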


Method Defaulting – Enter TH

Above, I make the claim that it’s strange to conflate the defaulting mechanism with the typing mechanism. Superclass constraints give us a couple of things that constraint synonyms cannot emulate:


  • Default implementations in terms of the super-classes.

  • Algebraic properties in terms of operations that are certainly defined, given a single class constraint.

  • Allows you to re-use the same symbol for a particular concept. With Functor / Applicative / Monad, which should have been made more hierarchical than they currently are, we have the following sets of identical functions: [pure, return], [fmap, liftM, map, liftA], [(<*>), ap], and [concat, join].


These are very nice things to have! However, I think that this “feature” is a rather ugly side of typeclasses, for the following reasons:


  • They are irrevocable API decisions.

  • The defaulting is such that there is no good way of checking if the user’s definitions form an appropriately minimal subset of the class.

  • I feel like algebraic properties specified about a constraint group synonym are equally as weighty as algebraic properties specified about a class. This is particularly the case when all of the classes come from the same package.

  • There are many use cases for more elaborate defaulting. For example, it might be interesting to be able to define an instance for a class when given a function of a particular type.

  • People often face the “only one instance per class per datatype” problem, which isn’t really a problem, as the solution is to wrap it in a newtype! Introducing superclass instances (like in Default Superclass Instances) brings about the “only one instance per superclass per class” problem. This is even worse, though, as there is no way to specialize on a per-data-type basis.


Doing things this way is fundamentally different, in that there is not a mandatory implication from one class to another. Instead, we use distinctly named derivers, which allow you to generate some or all definitions for an instance. Since this deriver is picked by the user, and can take additional parameters, the user has full control.

There’s a package of Template Haskell derivers for datatypes. I haven’t seen a full system for doing default instances from other instances, but James Cook’s flexible-defaults package looks like an interesting take on this problem.

The deriver package has a DSL for expressing derivations, and ideally defaulting derivations would also be expressible with instance syntax. I’d like to see them look very similar to Luke Palmer’s superclass instance example:

class Additive a where
    (^+^) :: a -> a -> a

$(mkDeriver "AdditiveNum" [d|
instance (Num a) => Additive a where
    (^+^) = (+)
|] )

Possible now:

data Foo = -- ...

instance Num Foo where { ... }

$(deriveAdditiveNum)

The following would require quasi-quoters to be updated in order to be particularly convenient for defining stuff within instances (providing information about the context). The type of this part of the quasi-quoter would be Dec -> String -> [Dec], where the first Dec is the declaration of the class that has been accumulated so far.

instance Num Foo where { ... }

instance Additive Foo where
  $[deriveFrom| Num Foo]

I think that Template Haskell is a very good way to experiment with better ways of defaulting class methods. Ideally something like this would make it into the language, so that most code doesn’t have any mysterious invocations of TH. But first, exploring the design space, and figuring out what’s actually needed, is best.

It would also be quite straightforward to use Template Haskell to implement instances for type-synonym constraints. It would reify each class involved in the synonym, and extract which methods are needed by each, and then split all of the provided definitions into the appropriate instances.
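
A rough sketch of the first step – looking up which methods belong to each class in the synonym – so that each provided definition could be routed to the right instance (the helper name is made up):

{-# LANGUAGE TemplateHaskell #-}
module SplitInstanceTH (methodsOf) where

import Language.Haskell.TH

-- Reify a class and extract its method names; a full implementation
-- would do this for every class in the synonym, then split the
-- user-provided definitions accordingly.
methodsOf :: Name -> Q [Name]
methodsOf cls = do
  info <- reify cls
  case info of
    ClassI (ClassD _ _ _ _ decs) _ ->
      return [ n | SigD n _ <- decs ]
    _ -> fail (show cls ++ " is not a class")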


Applying Deltas

This idea not only greatly reduces the pain of Cabal upper-bounds, but also removes much of the pain of changing identifier names. This is an excellent property – less resistance to package evolution leads to better packages.

Even with these delta modules, there is still quite a bit of resistance to API evolution, partially due to package users needing to modify their code when the API changes. The main point of this post is that we do not need a special format to store refactorings – Haskell is currently very close to being able to encode a powerful subset of the refactorings associated with API changes.

These refactoring modules do not need to be restricted to expressing the difference in API between versions. It’s also imaginable that you could represent the relationship between similarly designed libraries!


Generating Deltas

Writing these delta modules by hand wouldn’t be that difficult, but Haskellers are notoriously lazy. We need a tool that attempts to infer what definitions the delta module needs, and informs the user of the remaining differences.

In order to appropriately mark the definitions that cannot be generated / defined, I’d like to request an additional pragma in the vein of WARNING / DEPRECATED – ERROR. This would allow for a place to put the boilerplate error of “Automated generation of delta function ‘foobar’ infeasible”. Usually functions marked with ERROR would have undefined as their definition.
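
Modeled on the existing WARNING / DEPRECATED syntax, usage might look like this (hypothetical – this pragma does not exist in GHC, and foobar and its type are placeholders):

-- Hypothetical ERROR pragma, by analogy with WARNING / DEPRECATED:
{-# ERROR foobar "Automated generation of delta function 'foobar' infeasible" #-}
foobar :: OldThing -> NewThing
foobar = undefined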


Interactions with existing infrastructure


Cabal

Cabal packages work as before, and support this use case very nicely, via the “hs-source-dirs:” field, which can be set to “src, vers”. This lets you have two module hierarchies, one for your normal code, and one for your delta modules. This keeps delta modules from cluttering up the source tree and making it feel less manageable. It’s particularly nice when changing the namespaces of modules, as only the directory structure of “vers” needs to be mucked up.

Another directory that could conceivably exist is “imps”, in order to reduce the ugliness of having versioned imports everywhere. Instead of import Data.Maybe.V1_0_0 you’d import Imp.Data.Maybe, which would re-export the appropriate version.
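
Such a shim module would be a one-liner, following the hypothetical delta modules used above:

-- Hypothetical: pick the targeted API version in one place.
module Imp.Data.Maybe ( module Data.Maybe.V1_0_0 ) where

import Data.Maybe.V1_0_0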


Haddock

It would be nice if Haddock were aware of this convention, and intelligently hid all of the “Vn_n_n” form modules from the index page. The docs for the delta modules would still be accessible, but only through a link from the haddock of the module that they version.

It would also be nice if orphan instances were prominently specified and distinguished in the Haddock, particularly with clear indication of canonical / non-canonical annotations if there are orphan registries. I wouldn’t mind if there was a complete listing of orphans on the contents page of the package, after the exported namespaces, as somewhat of a “wall of shame”.


The PVP

One question is “What happens to the PVP?”. The PVP page on the wiki specifies the semantic meaning of the version numbers.

The PVP is compatible with this idea; however, we now have an additional class of change:

  1. A.B.C.D – Non-API-changing edits
  2. A.B.C.0 – Addition of entity
  3. A.B.0.0 – Removal or breaking modification of API
  4. (New) – Probable removal or breakage of delta modules

Since we are introducing a new type of breakage we are presented with a couple of choices:

  1. A.0.0.0 – Increment A on delta module breakage. The problem with this is that the library designer no longer has control over the major-major version number.

  2. Another option is to stick with the intention of the PVP. Now that we have versioned modules, most removal / breaking changes are equivalent to adding a definition.
    1. A.B.C.D – Non-API-changing edits
    2. A.B.C.0 – Modification of API
    3. A.B.0.0 – Probable removal or breakage of delta modules

    The problem with this is that it deviates from the traditional PVP when we export the “latest version” modules. A middle-ground that seems reasonable to me is to use this convention when forcing usage of the versioned modules, and use the prior convention otherwise.

  3. Another alternative is to make version numbers use five numbers, to accommodate every type of change, and still provide two digits of major version. I think this is too verbose.

Determining what constitutes a breaking delta module change, and what to do with the delta modules, is entirely up to the library designer. A strict library designer would increment the version, and deprecate old delta modules, every time the behavior of old functionality substantially changes.


Relationship to the ECT

When looking at the PVP page in order to write this post, I noticed that this idea has been brought up before, in a somewhat different form. I had forgotten about that page, but I read it several years ago, when first learning Haskell, so the seed for this idea was likely planted then. Clearly, this didn’t catch on, maybe because the cure was uglier than the ailment. It seems like the growth and maturation of Hackage in the seven years since Issue 2 of the Monad Reader necessitates implementation of this, or similar schemes.

Differences between my proposal and the ECT:


  • The ECT recommends duplicating module code in the event of a behavioral change (see “Bugs, Behaviour, Semantics” section). I don’t think there’s a point to having the package contain two copies of the code. Not only will their data structures be incompatible, but the semantic link between the old and new version will be lost.

  • The ECT doesn’t address incompatible classes / mention problems related to orphans.

  • The ECT doesn’t address incompatible ADTs.

  • The ECT uses “DEPRECATED” for old versions. This will break packages that build with “-Werror”. (TODO: check this!)

  • This is a pretty long post, at least by my standards! However, if you remove all of the contextualization / speculation, the description of this idea is fairly concise. The policy document for the “PVP’” or “MCP” (More Compatible Packages) will ideally be concise and understandable, to encourage adoption.


Conclusion

I realize that this is quite a multi-faceted proposal – changing package conventions and majorly altering typeclasses. However, all of these things are just refinements to attempt to make delta modules feasible.

I think that this solution is quite attractive, because it can easily be tried now. With community consensus and buy-in, something like this can also be incrementally deployed, without breaking any packages, and already cover most API changes.

I hope that this use case will help encourage and motivate resolution of the orphan instances problem. The class / module system is very close to being able to express API-deltas in a way that composes properly, and that is a very sweet, Haskell-ey quality.

© Michael Sloan
