Category: Haskell

Template Haskell Lens Idea

I recently ran into the problem of fclabels partial lenses being partial in both directions. This was not a critical problem, but it was annoying that a type had to be wrapped in Maybe when the code would never use the “Nothing” constructor. There’s a new, interesting lens library called YALL that inspired me to think about it a bit more. There are some potential issues with YALL’s perspective – it is no longer clear that the usual algebraic lens laws hold. I think that there’s a possibility that this might be resolved by adding some restrictions on the relationship between m and w. Anyway, the point is that lens library design is not a settled issue.

I’m interested in trying another way of using Template Haskell to express lenses:

fstLens = [mkLens| \(a, _) -> a |]

sndLens = [mkLens| \(_, b) -> b |]

fooLens = [mkLens| \(Just (a, b)) -> [a, b] |]

tupListIso = [mkIso| \(a, b) = [a, b] |]

Each lens is specified in terms of the implementation of its get. We can do this because construction literals are bidirectional – they can also be used for pattern matching. The right-hand side of each lens needs to have variables in every position in order to preserve the lens laws (otherwise a portion of the set would not be reflected in the corresponding get).
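
For instance, the quasi-quoter might expand fstLens into an fclabels-style getter / setter pair, with the setter derived by running the construction literal in reverse. A sketch – the generated names and the use of fclabels’ lens constructor are my assumptions:

fstLens :: (a, b) :-> a
fstLens = lens (\(a, _) -> a)          -- get: the lambda, verbatim
               (\a' (_, b) -> (a', b)) -- set: the pattern, run in reverse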

The partiality of the lenses depends on whether any of the types used have multiple constructors – in other words, whether a match could fail. If the pattern on the left could fail to match, then the lens is partial in both directions (so fooLens is fully partial). If the construction literal on the right could fail to match during set, then the lens is at least partial in the setter.
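
To make that concrete, here is roughly what fooLens’s two directions could look like, with partiality encoded as Maybe (a sketch; the names and the Maybe-encoding are assumptions):

-- get fails when the outer constructor doesn't match the pattern:
fooGet :: Maybe (x, x) -> Maybe [x]
fooGet (Just (a, b)) = Just [a, b]
fooGet Nothing       = Nothing

-- set fails when the new value doesn't match the literal [a, b]:
fooSet :: [x] -> Maybe (x, x) -> Maybe (Maybe (x, x))
fooSet [a, b] _ = Just (Just (a, b))
fooSet _      _ = Nothing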

We can also bring in function application:

plusOneLens = lens (+1) (const . subtract 1)

switchPlus = [mkLens| \(a, b) -> (plusOneLens b, plusOneLens a) |]

This is moving towards a full-blown embedded language for creating bidirectional transformations! It’d be interesting to target the feature set of the Boomerang project, which has a particular focus on doing bidirectional operations with text, and can do so with regexes as well as more powerful grammars. I’ve already written a TH quasi-quoter that allows you to use regular expressions in patterns and expressions: rex. Incorporating this into lens generation, by adding canonical serialization to regexes, would be really cool.

I think that this way of working with lenses / isos would really help to popularize their use in Haskell. While fclabels is quite excellent, the Applicative instance is not a very clear way to construct lenses on compound structures. This approach is even nice for typical lenses, as it avoids using typical records in the first place. Though, a record accessor like foobar_ can be nicer than get foobar.

Plumbers Pointless?

In my last post, I attempted to sarcastically / humorously introduce the plumbers package. I probably should have saved it for April 1st, but I also don’t think the idea is meritless. I don’t think that becoming a plumbers expert, adept at large plumbing pipelines, would be a very good way to spend your time, just as I don’t think becoming a pointfree combinator ninja is very valuable (though fun!).

The use-case that this is practical / useful for is the very same that the arrow combinators are usually applied to. Let’s face it – most of the time you’re dealing with the (->) arrow, and use (***), (&&&), first, and second. Perhaps I should take a look at making plumber operators for arrows / categories – I have an inkling that they may be useful for lenses.

Anyway, with this particular arrows use case, you don’t really see long chains of arrow combinators – just one at a time, applied to one or two functions. This is the use case I see for the provided plumbers – just take two functions, and apply, bind, or pair the results after giving them the auxiliary parameters. I don’t think that this is too awful – the types of the auxiliary parameters will often make it pretty clear what’s going on. To a reader skimming code, a plumbing operator indicates “combine these two functions, providing these arguments as an environment for their execution”; they can look closer at the plumber to see what’s really happening to the arguments.

The reddit discussion was interesting!

Particularly interesting is this discussion between ehird and cdsmith. I probably should have commented with my thoughts, but I figured out my opinion a few days later, and it was a little bit longwinded, so I figured a followup post was in order. Thank you, ehird, for defending the idea! Thank you cdsmith, for your well informed assessment!

One thing that’s brought up is the “implementation issues” of this idea. I’d like to note that the binary-size overhead of including all of these operators can be mitigated by using Control.Plumbers.TH.implementPlumber to generate only the ones you need. There may be some overhead from invoking the function – I should really add INLINE pragmas!

The other criticism is that “Point-free style is useful when it helps you think at a higher level of abstraction… but I can’t see how these operators lead to higher levels of abstraction.” This is a fair point; however, it holds these operators to the standard of “must increase abstraction”. I would argue that point-free form does not significantly increase abstraction either – or, as ehird points out, “They’re an abstraction of various forms of composition and pipe plumbing. It’s not like not using point-free style lets you escape the plumbing; you just write it in another way.”

f1 = g . h
f2 x = g (h x)

-- Manipulating with f1:
--             f1 $ 1 / 3
-- (Subst)  g . h $ 1 / 3
-- (inline) g ( h ( 1 / 3 ) )

-- Manipulating with f2:
--           f2                $ 1 / 3
-- (inline)  (\x -> g (h x))     1 / 3
-- (apply)          g (h (1 / 3))

If we view things from a value-centric perspective, then our code during evaluation will be full of lambdas, in order to bind these values to names. If we instead view them with a function-centric perspective, we often end up being able to reason about code by direct substitution without beta reduction. I think that the plumbing operators lead to similar substitutional reasoning, and can be good when used tastefully. The question is whether the rules of plumbers (which I should probably write down in a post) are too confusing for reasoning to be effective. It’s quite possible!
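
For example, an application of the pair plumber (*&) from the package reduces by the same kind of direct substitution:

-- Given (*&) f1 f2 a = (f1 a, f2 a):
--             ((+1) *& (*2)) 10
-- (inline)    ((+1) 10, (*2) 10)
-- (apply)     (11, 20)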

The plumbers experiment led me to think about language support for such “classes of identifiers”. It’d be interesting to support using a context free grammar to specify all of the operators / names that something can generate. Then, importing this would import the infinite set of operators generated. They need to be context free, such that we can test for the intersection of identifiers when re-exporting such generators. This would be a huge change to Haskell, for not very much pay off – but interesting to think about!

Pointless Plumbers

From this recent reddit comment thread / blog post, I had the idea of generalizing the operators found in the Data.Composition package. This could be a bad idea, as it encourages code to have larger, scarier operators, but I think I decided upon some interesting conventions. These operators can be used to construct pointfree expressions in a somewhat more straightforward, less nested fashion.

I brought it up on #haskell to mixed reactions. The following quote is now attributed to me via lambdabot by ski:

 (on pointless black magic)
<mgsloan> welcome to excessively pointless plumbing operators :)
<byorgey> mgsloan: that's... terrifying
<DanBurton> you should put it on hackage

So I did! I cleaned up the library and put it on hackage. Here’s how it works:

Pair Plumber

(*^) ::       r'  ->       r''  ->  a     -> (r', r'')
(*<) :: (a -> r') ->       r''  ->  a     -> (r', r'')
(*>) ::       r'  -> (a -> r'') ->  a     -> (r', r'')
(*&) :: (a -> r') -> (a -> r'') ->  a     -> (r', r'')
(**) :: (a -> r') -> (b -> r'') -> (a, b) -> (r', r'')

(*^) f1 f2  _     = (f1,   f2  )  -- Drop parameter
(*<) f1 f2  a     = (f1 a, f2  )  -- Left gets parameter
(*>) f1 f2  a     = (f1,   f2 a)  -- Right gets parameter
(*&) f1 f2  a     = (f1 a, f2 a)  -- Both get parameter
(**) f1 f2 (a, b) = (f1 a, f2 b)  -- Split tuple

The first two parameters are functions which are applied to the remainder of the parameters, in a fashion requested by the symbols after the initial “*”. These symbols specify a routing – which functions each parameter is routed to – leading to the name “plumbing”. Here’s one downside of this naming scheme: (**) is Floating exponentiation in the Prelude, so modules that fully import this library need to use “import Prelude hiding ((**))”.
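
So a module using these operators starts out something like this (a minimal sketch; the hiding clause only matters if you also want Prelude exponentiation elsewhere):

import Prelude hiding ((**))
import Control.Plumbers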

If these operators were generalized to arrows, which they could be, then (**) would be the same thing as (***), and (*&) would be the same thing as (&&&). So what’s (***) being used for now?

(***) :: (a -> c -> r') -> (b -> d -> r'') -> (a, b) -> (c, d) -> (r', r'')
(***) f1 f2 (a, b) (c, d) = (f1 a c, f2 b d)

It’s the generic zip on tuples! The additional ‘*’ indicates that an additional tuple parameter should be split between the functions. This version of (***) is something I often want, and I have added it, under the name zipT (though bizip is probably a better name), to project-specific utility libraries a few times.

If this is the extended version of (Control.Arrow.***), then what’s the extended version of (Control.Arrow.&&&)?

(*&&) :: (a -> b -> r') -> (a -> b -> r'') -> a -> b -> (r', r'')
(*&&) f1 f2 a b = (f1 a b, f2 a b)

We can also mix & and * in a couple ways:

(*&*) :: (a -> b -> r') -> (a -> c -> r'') -> a -> (b, c) -> (r', r'')
(*&*) f1 f2 a (b, c) = (f1 a b, f2 a c)

(**&) :: (a -> c -> r') -> (b -> c -> r'') -> (a, b) -> c -> (r', r'')
(**&) f1 f2 (a, b) c = (f1 a c, f2 b c)

These are never-before-seen combinators, as far as I know, but I think they are reasonably understandable with a little practice. In theory, I’m defining a naming scheme for an infinite set of related function definitions. In practice, only plumbers up to arity 3 are defined by default – you can invoke a Template Haskell function to generate more if you need them.
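
To summarize the conventions so far (a comment-form recap, not library code):

-- The prefix '*' names the combining operation: pair the two results with (,).
-- Each following symbol routes one parameter:
--   ^  dropped
--   <  goes to the left function only
--   >  goes to the right function only
--   &  goes to both functions
--   *  is a tuple, split between the two functions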

Examples

Some examples of using these functions:

λ> (+1) ** (*2) $ (9, 4)
(10, 8)

λ> ((++) *** (++)) ("a", "b") (" forest", "ird")
("a forest", "bird")

λ> (maybe (:[]) replicate *<& length) (Just 3) "hi"
(["hi","hi","hi"], 2)

(11, 20) == ((+1) *&   (*2)) 10
(12, 20) == ((+)  *&&  (*) ) 10 2
(13, 20) == ((+)  *&>< (*) ) 10 2 3
(12, 30) == ((+)  *&<> (*) ) 10 2 3
(12, 40) == ((+)  *&<  (*4)) 10 2
(14, 20) == ((+4) *&>  (*) ) 10 2

Composition Plumber

($^) :: (     r'' -> r') ->       r''  -> a      -> r'  -- Drop parameter
($<) :: (a -> r'' -> r') ->       r''  -> a      -> r'  -- Left gets parameter
($>) :: (     r'' -> r') -> (a -> r'') -> a      -> r'  -- Right gets parameter
($&) :: (a -> r'' -> r') -> (a -> r'') -> a      -> r'  -- Both get parameter
($*) :: (a -> r'' -> r') -> (b -> r'') -> (a, b) -> r'  -- Split tuple

($^) f1 f2  _     = f1   $ f2    -- Drop parameter
($<) f1 f2  a     = f1 a $ f2    -- Left gets parameter
($>) f1 f2  a     = f1   $ f2 a  -- Right gets parameter
($&) f1 f2  a     = f1 a $ f2 a  -- Both get parameter
($*) f1 f2 (a, b) = f1 a $ f2 b  -- Split tuple

The definitions are exactly the same as in the pair plumber, except using the ($) function to combine the two sides, instead of (,). Ordinary composition is "$>" in this system, as it combines the functions using "$", and provides the parameter to the function on the right. All of these operators have "infixr 9" fixity, to match ordinary composition.

λ> :t (.)
(.) :: (b -> c) -> (a -> b) -> a -> c

λ> :t ($>)
($>) :: (r'' -> r') -> (a -> r'') -> a -> r'


PNorm Example

Let's say we want to implement the p-norm on lists. This works by raising each element of the list to the power p, summing, and raising the result to the power 1 / p. Standard Cartesian distance is the p = 2 norm.

pow = flip (Prelude.**)
pnorm p xs = pow (1 / p) (sum (map (pow p) xs))

Here's how I'd normally write this function:

pnorm p = pow (1 / p) . sum . map (pow p)

But now we can go further! Without descending into the full points-free madness of

pnorm = ap ((.) . pow . (1 /)) ((sum .) . map . pow)

we can instead use the plumbers variant:

pnorm = (pow $> (1/)) $&> sum $>> map $> pow

--      (pow $> (1/)) $&> (sum $>> (map $> pow))  -- infixr 9

--            /----^  p xs        ^    /---^
--            |       | |         |    |
--            \-------| \------=--/    |
--                    |                |
--                    \---------=------/

The ascii illustration shows how the plumbing operators route the parameters. The arrow for xs may be a little confusing. That parameter is being provided to the result of (map $> pow), because ($>>) is expecting something that uses two arguments (apart from functions) and ($>) = (.) only has one.
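
To make the routing concrete, here is the definition expanded by hand (following the operator definitions above):

-- pnorm p xs
--   = (pow $> (1/)) p $ (sum $>> (map $> pow)) p xs  -- ($&>) sends p to both sides
--   = pow (1/p) $ sum ((map $> pow) p xs)            -- ($>) composes; ($>>) feeds p and xs
--   = pow (1/p) (sum (map (pow p) xs))               -- the original definition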

What if we want to normalize according to a given pnorm?

pnormalized = (flip map $<> (/)) $>& pnorm

--                    ^          p xs  ^ ^
--                    |          | |   | |
--                    \--=-------|-\---/ |
--                               |       |
--                               \-------/


List Cons Example

On IRC, ski suggested that such higher-arity combinators should be systematically decomposable into the others, and gave the following points-free example:

list3 :: a -> a -> a -> [a]
list3 = ($ []) .:: (.: (:)) .: (.: (:)) . (.: (:)) id

(.:) :: (b -> c) -> (a -> a1 -> b) -> a -> a1 -> c
(.:) = (.) . (.)

(.::) :: (b -> c) -> (a -> a1 -> a2 -> b) -> a -> a1 -> a2 -> c
(.::) = (.) . (.) . (.)

Instead of figuring out how to express these combinators in terms of the others (I might give some of these definitions / identities in a later post - omitted for brevity and convenience), I gave an equivalent definition of list3 using plumbers:

list3 = ((:) $<>> (:) $<> (:[]))

(.:) = ($>>)
(.::) = ($>>>)

Generalizing this to four is easy. However, the module doesn't currently export arity greater than three, as the compile time was longish, and the binary was 1MB. If you want these operators, you can use Control.Plumbers.TH to request their implementation.
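
Something along these lines should request them (a sketch – I'm assuming the spec record can be updated this way; see the PlumberSpec definitions below):

{-# LANGUAGE TemplateHaskell #-}
import Control.Plumbers.TH
import Control.Plumbers.Specs (compositionSpec)

-- Generate just the arity-4 composition plumbers ($<>>> and friends):
$(implementPlumbers compositionSpec { plumberArities = [4] })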

list4 = ((:) $<>>> (:) $<>> (:) $<> (:[]))

Turns out that the expression of this can get even more uniform:

<ski> (could you separate `(:[])' into a `(:)' and a `[]', for uniformity ?)
<mgsloan> ((:) $<>>> (:) $<>> (:) $<> (:) $< []) 1 2 3 4
***ski claps

Something interesting to observe is that when using plumbing operators on cons, just by changing the operators involved, we can get out any 3-list that consists of the passed parameters:

λ> ((:) $<>> (:) $<> (:[])) 1 2 3
[1,2,3]

λ> ((:) $<>> (:) $<> (:) $< []) 1 2 3
[1,2,3]

λ> ((:) $<>> (:) $>< (:) $< []) 1 2 3
[1,3,2]

λ> ((:) $><> (:) $>< (:) $< []) 1 2 3
[2,3,1]

λ> ((:) $>>< (:) $>< (:) $< []) 1 2 3
[3,2,1]

λ> ((:) $>&^ (:) $>< (:) $< []) 1 2 3
[2,2,1]

λ> ((:) $&>^ (:) $>< (:) $< []) 1 2 3
[1,2,1]

Implementation

Here's the main body of Control.Plumbers:

$(implementPlumbers compositionSpec)

infixr 9 $^, $<, $>, $&, $*
infixr 9 $^^, $^<, $^>, $^&, $^*, $<^, $<<, $<>, $<&, $<*, $>^, $><, $>>, $>&, $>*, $&^, $&<, $&>, $&&, $&*, $*^, $*<, $*>, $*&, $**
infixr 9 $^^^, $^^<, $^^>, $^^&, $^^*, $^<^, $^<<, $^<>, $^<&, $^<*, $^>^, $^><, $^>>, $^>&, $^>*, $^&^, $^&<, $^&>, $^&&, $^&*, $^*^, $^*<, $^*>, $^*&, $^**, $<^^, $<^<, $<^>, $<^&, $<^*, $<<^, $<<<, $<<>, $<<&, $<<*, $<>^, $<><, $<>>, $<>&, $<>*, $<&^, $<&<, $<&>, $<&&, $<&*, $<*^, $<*<, $<*>, $<*&, $<**, $>^^, $>^<, $>^>, $>^&, $>^*, $><^, $><<, $><>, $><&, $><*, $>>^, $>><, $>>>, $>>&, $>>*, $>&^, $>&<, $>&>, $>&&, $>&*, $>*^, $>*<, $>*>, $>*&, $>**, $&^^, $&^<, $&^>, $&^&, $&^*, $&<^, $&<<, $&<>, $&<&, $&<*, $&>^, $&><, $&>>, $&>&, $&>*, $&&^, $&&<, $&&>, $&&&, $&&*, $&*^, $&*<, $&*>, $&*&, $&**, $*^^, $*^<, $*^>, $*^&, $*^*, $*<^, $*<<, $*<>, $*<&, $*<*, $*>^, $*><, $*>>, $*>&, $*>*, $*&^, $*&<, $*&>, $*&&, $*&*, $**^, $**<, $**>, $**&, $***

$(implementPlumbers productSpec)

infixr 9 *^, *<, *>, *&, **
infixr 9 *^^, *^<, *^>, *^&, *^*, *<^, *<<, *<>, *<&, *<*, *>^, *><, *>>, *>&, *>*, *&^, *&<, *&>, *&&, *&*, **^, **<, **>, **&, ***
infixr 9 *^^^, *^^<, *^^>, *^^&, *^^*, *^<^, *^<<, *^<>, *^<&, *^<*, *^>^, *^><, *^>>, *^>&, *^>*, *^&^, *^&<, *^&>, *^&&, *^&*, *^*^, *^*<, *^*>, *^*&, *^**, *<^^, *<^<, *<^>, *<^&, *<^*, *<<^, *<<<, *<<>, *<<&, *<<*, *<>^, *<><, *<>>, *<>&, *<>*, *<&^, *<&<, *<&>, *<&&, *<&*, *<*^, *<*<, *<*>, *<*&, *<**, *>^^, *>^<, *>^>, *>^&, *>^*, *><^, *><<, *><>, *><&, *><*, *>>^, *>><, *>>>, *>>&, *>>*, *>&^, *>&<, *>&>, *>&&, *>&*, *>*^, *>*<, *>*>, *>*&, *>**, *&^^, *&^<, *&^>, *&^&, *&^*, *&<^, *&<<, *&<>, *&<&, *&<*, *&>^, *&><, *&>>, *&>&, *&>*, *&&^, *&&<, *&&>, *&&&, *&&*, *&*^, *&*<, *&*>, *&*&, *&**, **^^, **^<, **^>, **^&, **^*, **<^, **<<, **<>, **<&, **<*, **>^, **><, **>>, **>&, **>*, **&^, **&<, **&>, **&&, **&*, ***^, ***<, ***>, ***&, ****

$(implementPlumbers ...) invokes a Template Haskell function which generates all of the function declarations. All of those "infixr 9" declarations should really be unnecessary – but you can't create fixity declarations with Template Haskell yet. See this GHC bug – which simonpj recently created a fix for! Props to him for fixing stuff like that! Until that fix is included in a GHC release, though, I'll leave these fixity declarations around.

You can create your own plumbing operators by using the following interface from Control.Plumbers.TH:

-- | Specifies all of the information needed to construct type declarations
--   for the plumber.
data PlumberTypes = PlumberTypes
 { leftType   :: Type  -- ^ Type of the left argument's result
 , rightType  :: Type  -- ^ Type of the right argument's result
 , resultType :: Type  -- ^ Result type.  This needs to be wrapped in a
                       --   forall naming all of the utilized type variables.
 }

-- | A basic set of types, which make r' the left type, and r'' the right type.
--   The resultType is a forall that introduces these type variables, and has
--   undefined content.  Therefore any implementation in terms of baseTypes
--   needs to redefine resultType, as the Forall has undefined as its content.
baseTypes :: PlumberTypes
baseTypes = PlumberTypes
  { leftType   = mkVT "r'"
  , rightType  = mkVT "r''"
  , resultType = ForallT [mkVB "r'", mkVB "r''"] [] undefined
  }

-- | Specifies all of the information needed to implement a plumber.
data PlumberSpec = PlumberSpec
 { plumberOpE     :: Exp -> Exp -> Exp  -- ^ The plumber implementation
 , plumberTypes   :: Maybe PlumberTypes -- ^ Optional explicit type signatures
 , plumberArities :: [Int]              -- ^ Arities to generate - 26 is max
 , plumberPrefix  :: String             -- ^ Prefix to use for operator
 }

-- | Creates a plumber spec for the given prefix for the generated operators,
--   and the name of the infix operator to use to construct the implementation.
baseSpec :: String -> String -> PlumberSpec
baseSpec p e = PlumberSpec
  { plumberOpE      = (\l r -> InfixE (Just l) (mkVE e) (Just r))
  , plumberTypes    = Nothing
  , plumberArities  = [1..3]
  , plumberPrefix   = p
  }

The operators, along with those exported by Control.Plumbers.Monad, are defined via the specs in Control.Plumbers.Specs as follows:

productSpec :: PlumberSpec
productSpec     = (baseSpec "*" "_") { plumberTypes = Just productTypes
                                     , plumberOpE   = (\l r -> TupE [l, r]) }

compositionSpec :: PlumberSpec
compositionSpec = (baseSpec "$" "$") { plumberTypes = Just compositionTypes }

lbindSpec  :: PlumberSpec
lbindSpec  = (baseSpec "<=" "=<<")   { plumberTypes = Just lbindTypes }

rbindSpec  :: PlumberSpec
rbindSpec  = (baseSpec ">=" ">>=")   { plumberTypes = Just rbindTypes }

frbindSpec :: PlumberSpec
frbindSpec = (baseSpec ">>" ">>")    { plumberTypes = Just $ fbindTypes False }

flbindSpec :: PlumberSpec
flbindSpec = (baseSpec "<<" "<<")    { plumberTypes = Just $ fbindTypes True  }

productTypes :: PlumberTypes
productTypes = addBaseContext $ baseTypes
  { resultType = tuplesT [leftType baseTypes, rightType baseTypes] }

compositionTypes :: PlumberTypes
compositionTypes = addBaseContext $ baseTypes
  { leftType   = arrowsT [rightType baseTypes, leftType baseTypes]
  , resultType = leftType baseTypes
  }

This leaves the library open to others defining plumbing operators following the same conventions.
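
For example, a hypothetical family of plumbers that sums the two results needs nothing but baseSpec (plusSpec and the "+" prefix are made up for illustration):

-- Hypothetical: operators prefixed with "+" that combine results using (+).
plusSpec :: PlumberSpec
plusSpec = baseSpec "+" "+"

$(implementPlumbers plusSpec)

-- e.g. (+&) f1 f2 a = f1 a + f2 a, so ((*2) +& (+1)) 10 == 31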

Thoughts?

I think that this family of operators has a very memorable and visual notation, and can be put to reasonable use. I'm not set on all of these decisions, though - the notation may change in order to avoid collisions with (**) and the arrow operators.

What do people think? Is this awful? Useful? Are the symbol choices good? Other suggestions? It's something I've often itched for when point-free style reaches slightly beyond its reasonable limit.

Visualizing the Haskell AST

I’m very enthused by the potential for development environments that offload more of the uninteresting minutiae of programming onto the computer. This has been looked at a lot in the past, by a lot of very smart people. From structure editors to the visualization of the results of a variety of static analysis techniques, tons of work has been done. Despite this, these ideas have not yet revolutionized popular development as we know it, with auto-completion being the main widely utilized language-aware convenience tool.

Much more is known about a Haskell program at compile time than about a program written in most run-of-the-mill programming languages. It seems like it would be a good idea to provide more of this information to the programmer, in a live, interactive, context-dependent form. Examples include depicting the parse tree, types of subexpressions, applicable semantics-preserving transformations, and example-evaluations, right in the programming editor. Rather than attempting to address this problem directly, on limited free time, I intend to build a number of toy programs to play with the problem.

This post is literate Haskell, and so should be copy-pastable into a *.lhs file. It depends on two libraries that are not yet hackage-ready:

http://github.com/mgsloan/curve
http://github.com/mgsloan/gtk-toy

Gtk-Toy is a wrapper over GTK / Cairo that processes inputs into more Haskell-ey data types and provides a few convenience data structures. I intend to grow it as I write more “toys” for various purposes. The ‘Curve’ library and the ToyFramework are partial ports / re-imaginings of the lib2geom project, which I was more active in several years ago. While working on that library, we established a habit of creating an interactive toy to exercise particular features or to provide a prototyping sandbox to play with a new idea. In order to encourage this development pattern, the infrastructure for toy-making had to be convenient and straightforward, which is what I attempt to achieve with the Haskell equivalent.

I’ll intersperse explanation between chunks of code, for some of the trickier bits, but familiarity with Haskell is assumed. The source on this page is available on github: http://github.com/mgsloan/ast-vis/blob/master/Main.hs

Here’s a script to make trying it out convenient:

#!/bin/bash

# Download and locally install curve library
wget -O curve.tar.gz http://github.com/mgsloan/curve/tarball/c621e5e6b405801a69dbeb1e1ecdd4edcef28199
tar -xvf curve.tar.gz
rm curve.tar.gz
cd mgsloan-curve-c621e5e
cabal configure --user
cabal install
cd ../

# Download and locally install toyframework
wget -O toyframework.tar.gz http://github.com/mgsloan/toyframework/tarball/354c0225ec6d21c24b7696468b81ddb37aa099f2
tar -xvf toyframework.tar.gz
rm toyframework.tar.gz
cd mgsloan-toyframework-354c022
cabal configure --user
cabal install
cd ../

# Download and run the simple AST-vis
git clone git@github.com:mgsloan/ast-vis.git
cd ast-vis
runhaskell Main.hs
> {-# LANGUAGE FlexibleInstances, TemplateHaskell,
>              TupleSections, TypeOperators #-}
>
> import Control.Arrow ((&&&))
> import Control.Monad (liftM, zipWithM_)
> import Data.Curve
> import Data.Data
> import Data.Function (on)
> import Data.Generics.Aliases
> import Data.Label
> import Data.List (groupBy)
> import Data.Maybe
> import Graphics.ToyFramework
> import Language.Haskell.Exts.Annotated
> import qualified Graphics.Rendering.Cairo as C

Now that the imports are out of the way, we define the state representation for the AST-visualization toy. It’s very simple – this is not intended to be anywhere near a real text editor – and so just stores the code in a plain string, the current cursor position, and a cache of the parsed representation. It also stores the current mouse location, in order to provide vertical scrolling of the AST visualization (as it can easily get quite large).

Following the data declaration is a Template Haskell fclabels invocation which provides lenses for the different fields of the state. Lenses allow you to construct views on data structures, and to get / set / modify the projection (oftentimes – and always, in this post – these projections are just ADT fields).

> data State = State
>   { _code :: String
>   , _cursor :: Int
>   , _parsed :: (ParseResult (Decl SrcSpanInfo))
>   , _mouseCursor :: (Double, Double)
>   }
>
> $(mkLabels [''State])
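
For reference, here is roughly how the generated lenses are used with fclabels’ get / set / modify (a non-literate sketch; codeOf and moveRight are made-up examples):

codeOf :: State -> String
codeOf = get code               -- project a field through its lens

moveRight :: State -> State
moveRight = modify cursor (+1)  -- update a field through its lens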

First, a few convenience fclabels-related utilities. modM and setM lift modify and set, respectively, to yield monadic values. lensed provides a more generic self-modification, allowing the new value for some label to be derived from the projection of another. updateParse is an example usage of lensed which will soon become useful.

> modM :: Monad m => (b :-> a) -> (a -> a) -> b -> m b
> modM l f = return . modify l f
>
> setM :: Monad m => (b :-> a) -> a -> b -> m b
> setM l x = return . set l x

> lensed :: (f :-> a) -> (f :-> a') -> (a -> a') -> f -> f
> lensed l l' f s = set l' (f $ get l s) s
>
> updateParse :: State -> State
> updateParse = lensed code parsed parseDecl

This is what most toy main functions will look like – an initial value for the state of the toy, followed by references to the functions which handle events and drawing. The mouse handler just sets the mouseCursor field of the state to the mouse position. This will later allow for adjustment of the vertical position of the AST diagram.

> main :: IO ()
> main = runToy $ Toy
>  { initialState = updateParse $
>      State "fibs = 0 : 1 : zipWith (+) fibs (tail fibs)" 0 undefined (0, 220)
>   , mouse   = const $ setM mouseCursor
>   , key     = handleKey
>   , display = handleDisplay
>   , tick    = const return
>   }

Definition of Toy from the “toyframework” source code, for reference:

data Toy a = Toy
  { initialState :: a

  -- Given the current keyboard state, perform a single 30ms periodic execution
  , tick    :: KeyTable                              -> a -> IO a

  -- Display using cairo, based on the canvas size and dirty region.
  , display :: IPnt -> IRect                         -> a -> C.Render a

  -- Handle mouse presses (first parameter is (pressed?, which)) and motion.
  , mouse   :: Maybe (Bool, Int) -> (Double, Double) -> a -> IO a

  -- Handle key-presses, first parameter is "pressed?", second is (Left string)
  -- to give the names of non-character keys, and (Right char) for the rest.
  , key     :: Bool -> Either String Char            -> a -> IO a
  }

Definition of the key-handler follows. It handles basic motion, deletion, and insertion.

> handleKey :: Bool -> Either [Char] Char -> State -> IO State
> handleKey True (Right k) (State xs ix p m) =
>   return . updateParse $ State (pre ++ (k : post)) (ix + 1) p m
>  where
>   (pre, post) = splitAt ix xs
>
> handleKey True (Left k) s@(State xs ix _ _) = liftM updateParse $ (case k of
>     "Left"  -> modM cursor (max 0 . subtract 1)
>     "Right" -> modM cursor (min endPos . (+1))
>     "Home"  -> setM cursor 0
>     "End"   -> setM cursor endPos
>     "BackSpace" -> modM cursor (max 0 . subtract 1)
>                  . set code (delIx (ix - 1))
>     "Delete" -> setM code (delIx ix)
>     "Escape" -> const $ error "The user escaped!"
>     _ -> return) s
>   where endPos = length xs
>         delIx i | (pre, (_:post)) <- splitAt i xs = pre ++ post
>                 | otherwise = xs
>
> handleKey _ _ s = return s

The handleDisplay function below draws the text and cursor, followed by either the parse tree or an error message. (^+^) and (^-^) are vector-space operators, in this case operating on 2D vectors.

> handleDisplay :: IPnt -> IRect -> State -> C.Render State
> handleDisplay _ (tl, br) s@(State txt ix p (_, ypos)) = do
>   let textPos = (50.5, 100.5)
>       height = (fromIntegral . snd $ br ^-^ tl) * 0.5
>       astPos = textPos ^+^ (0.0, ypos - height)
>
>   move textPos
>   C.showText txt
>
>   -- Draw the text cursor.
>   C.setLineWidth 1
>   draw . offset (textPos ^+^ (-1, 0)) . rside 1 . expandR 2
>        =<< textRect txt 0 ix
>   C.stroke
>
>   case p of
>     ParseOk decl -> drawSpans astPos txt (getSpans decl)
>     f@(ParseFailed _ _) -> C.showText (show f)
>   C.stroke
>
>   return s

We’re done with all the little support bits! Only the meat of the problem – the definitions of drawSpans and getSpans – remains. In fact, if we set them to be no-ops, the above code is a functioning single-line text editor. Not too bad for around 75 SLOC!

drawSpans _ _ _ = return ()
getSpans = undefined

Next, we display a horizontal-spans based visualization of the abstract syntax tree of the code that the user has typed. This proceeds in a fairly straightforward manner, as a pipeline of transformations to draw the source-spans as a stack of labelled lines:

> drawLabeledLine :: String -> DLine -> C.Render ()
> drawLabeledLine txt lin = do
>   draw lin
>   relText 0.5 (lin `at` 0.5 ^-^ (0, 7)) txt
>
> spanLine :: String -> (Int, Int) -> C.Render (Linear Double, Linear Double)
> spanLine txt (f, t) = liftM (rside 2 . expandR 2) $ textRect txt f (t - 1)
>
> drawSpans :: DPoint  -> String -> [((Int, Int), String)] -> C.Render ()
> drawSpans pos txt =
>       -- Draw each labeled line, with each subsequent line 15 pixels lower.
>   (>>= zipWithM_ (\d (lin, name) -> drawLabeledLine name . (`offset` lin)
>                                   $ pos ^+^ (0, 15) ^* fromIntegral d)
>                  [0..])
>
>       -- Turn each span into an appropriately sized line segment.
>   . mapM (\(s, n) -> liftM (, n) $ spanLine txt s)
>
>       -- Prefer last of all identically-spanned tokens.  Pretty arbitrary.
>   . map last . groupBy ((==) `on` (\(x,_)->x))

On the left is the diagram resulting from commenting out the line “. map last . groupBy ((==) `on` (\(x,_)->x))”. As illustrated, it mostly removes information that the user wouldn’t care about for understanding Haskell’s parse tree.

So, how did we manage to get the spans from the abstract syntax tree of the declaration? The haskell-src-exts documentation has tons of ADTs, each representing a different potential member of Haskell’s AST. In order to collect the source-span information, we could write a function for each type, pattern matching on every single case, recursing into the children of each node. What saves us from such drudgery is that every ADT has a derived Data and Typeable instance! We will do something much nicer using SYB.

First off, the SrcSpanInfo annotations indicating source location on the AST nodes contain a lot of information we don’t need. For this simple single-file, single-line case, we discard everything but the column range, and so define a convenient accessor for it. If you aren’t familiar with arrows: when operating on functions, the (&&&) operator has type “(a -> b) -> (a -> c) -> a -> (b, c)”. In other words, it applies two functions to the same input, and wraps the results in a tuple.
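
A quick example in GHCi:

λ> (subtract 1 &&& (+1)) 5
(4,6)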

> srcSpan :: SrcSpanInfo -> (Int, Int)
> srcSpan = (srcSpanStartColumn &&& srcSpanEndColumn) . srcInfoSpan

Here’s the exciting part! How can we get the source span from an arbitrary data type? SYB makes it very easy. Data.Data.gmapQ applies a generic function to every field, and yields the results as a list. Data.Generics.Aliases.extQ allows us to use “Just . srcSpan” whenever it can be applied (when the types are compatible), and otherwise “const Nothing”. So, in whole the following function Just yields a span tuple if the given data type has a SrcSpanInfo field.

> getSpan :: (Data a) => a -> Maybe (Int, Int)
> getSpan = listToMaybe . catMaybes
>         . gmapQ (const Nothing `extQ` (Just . srcSpan))

Next we need to be able to get all of the spans paired up with the names of the constructors. We use gmapQ again, but this time to recursively traverse the entire tree in preorder. Applying show to the constructor representation yielded by Data.Data.toConstr allows us to get the name of the current node in a generic fashion.

> getSpans :: (Data a) => a -> [((Int, Int), String)]
> getSpans x = maybeToList (fmap (, show $ toConstr x) $ getSpan x)
>           ++ concat (gmapQ getSpans x)

Admittedly, this isn’t very useful yet. It’s fun to modify code and see the AST change in realtime, and perhaps might even be marginally useful for those writing code manipulating Haskell-Src-Exts ASTs. What this does do, however, is lay out an initial skeleton for more useful and compact visualizations of the meta-data of Haskell source code, to come in subsequent posts.

© Michael Sloan
