> type R = Float
> data Complex a = C a a deriving (Eq,Show)
> instance Num a => Num (Complex a) where
>     fromInteger a = C (fromInteger a) 0
>     (C a b)+(C a' b') = C (a+a') (b+b')
>     (C a b)-(C a' b') = C (a-a') (b-b')
>     (C a b)*(C a' b') = C (a*a'-b*b') (a*b'+b*a')
> data Matrix a = M a a a a deriving (Eq,Show)
> instance Num a => Num (Matrix a) where
>     fromInteger a = M (fromInteger a) 0 0 (fromInteger a)
>     (M a b c d)+(M a' b' c' d') = M (a+a') (b+b')
>                                     (c+c') (d+d')
>     (M a b c d)-(M a' b' c' d') = M (a-a') (b-b')
>                                     (c-c') (d-d')
>     (M a b c d)*(M a' b' c' d') = M (a*a'+b*c') (a*b'+b*d')
>                                     (c*a'+d*c') (c*b'+d*d')
> data Quaternion a = Q a a a a deriving (Eq,Show)
> instance Num a => Num (Quaternion a) where
>     fromInteger a = Q (fromInteger a) 0 0 0
>     (Q a b c d)+(Q a' b' c' d') = Q (a+a') (b+b')
>                                     (c+c') (d+d')
>     (Q a b c d)-(Q a' b' c' d') = Q (a-a') (b-b')
>                                     (c-c') (d-d')
>     (Q a b c d)*(Q a' b' c' d') = Q (a*a'-b*b'-c*c'-d*d')
>                                     (a*b'+b*a'+c*d'-d*c')
>                                     (a*c'+c*a'+d*b'-b*d')
>                                     (a*d'+d*a'+b*c'-c*b')
> data Split a = S a a deriving (Eq,Show)
> instance Num a => Num (Split a) where
>     fromInteger a = S (fromInteger a) (fromInteger a)
>     (S a b)+(S a' b') = S (a+a') (b+b')
>     (S a b)-(S a' b') = S (a-a') (b-b')
>     (S a b)*(S a' b') = S (a*a') (b*b')
> type Cliff1 = Complex R
> type Cliff1' = Split R
> type Cliff2 = Quaternion R
> type Cliff2' = Matrix R
> type Cliff3 = Quaternion Cliff1'
> type Cliff3' = Matrix Cliff1
> type Cliff4 = Quaternion Cliff2'
> type Cliff4' = Matrix Cliff2
> class Clifford b where
>     i :: R -> b
>     e :: Int -> R -> b
> instance Clifford R where
>     i a = a
>     e _ _ = error "Cliff(0) has no generators"
> instance Clifford (Complex R) where
>     i a = C a 0
>     e 1 a = C 0 a
> instance Clifford (Split R) where
>     i a = S a a
>     e 1 a = S a (-a)
> instance (Num b,Clifford b) => Clifford (Quaternion b) where
>     i a = Q (i a) 0 0 0
>     e 1 a = Q 0 (i a) 0 0
>     e 2 a = Q 0 0 (i a) 0
>     e n a = Q 0 0 0 (e (n-2) a)
> instance (Num b,Clifford b) => Clifford (Matrix b) where
>     i a = let b = i a in M b 0 0 b
>     e 1 a = let b = i a in M (-b) 0 0 b
>     e 2 a = let b = i a in M 0 b b 0
>     e n a = let b = e (n-2) a in M 0 b (-b) 0
> type Cliff5 = Quaternion Cliff3'
> type Cliff5' = Matrix Cliff3
> type Cliff6 = Quaternion Cliff4'
> type Cliff6' = Matrix Cliff4
> type Cliff7 = Quaternion Cliff5'
> type Cliff7' = Matrix Cliff5
> type Cliff8 = Quaternion Cliff6'
> type Cliff8' = Matrix Cliff6
> type Cliff9 = Quaternion Cliff7'
> type Cliff9' = Matrix Cliff7
> t i = e i 1 :: Cliff9
> t' i = e i 1 :: Cliff9'
>
> test a t n = map (\(i,j) -> (t i)*(t j)+(t j)*(t i)==0) [(i,j) | i <- [1..n], j <- [1..(i-1)]] ++
>              map (\i -> (t i)*(t i)==a) [1..n]
> fulltest = and $ (test (-1) t 9) ++ (test 1 t' 9)
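As a quick sanity check on the lowest-dimensional cases, here is a self-contained sketch (it restates just the Complex and Split instances from above; the names e1C and e1S are mine) verifying that the generator e 1 squares to -1 in Cliff(1) but to +1 in Cliff(1)':

```haskell
data Complex a = C a a deriving (Eq, Show)

instance Num a => Num (Complex a) where
    fromInteger a = C (fromInteger a) 0
    (C a b)+(C a' b') = C (a+a') (b+b')
    (C a b)-(C a' b') = C (a-a') (b-b')
    (C a b)*(C a' b') = C (a*a'-b*b') (a*b'+b*a')

data Split a = S a a deriving (Eq, Show)

instance Num a => Num (Split a) where
    fromInteger a = S (fromInteger a) (fromInteger a)
    (S a b)+(S a' b') = S (a+a') (b+b')
    (S a b)-(S a' b') = S (a-a') (b-b')
    (S a b)*(S a' b') = S (a*a') (b*b')

-- e 1 in Cliff(1) and in Cliff(1)' respectively
e1C :: Complex Double
e1C = C 0 1

e1S :: Split Double
e1S = S 1 (-1)
```

Here e1C * e1C is C (-1) 0, i.e. -1, while e1S * e1S is S 1 1, i.e. +1, matching the signs checked by fulltest.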
One last thing: let's consider the complexity of this code. The naive algorithm multiplies two elements of Cliff(n) in time O(2^(2n)). It's an easy exercise to see that the multiplication time here is O(2^(7n/4)). Not bad for something that was free. We can actually do quite a bit better with some small changes - for example by using Strassen-like tricks for all but the Split multiplications.
But if you really want to do this stuff properly check out this paper, which I haven't yet read properly.
I'd also like to add that Cliff(n+8) = R(16)⊗Cliff(n), so we have a kind of repeat with period 8. This is known as Bott periodicity and it lies at the heart of many of the most beautiful things in mathematics, including some fascinating properties of the number 8.
Labels: haskell, mathematics
posted by sigfpe at Wednesday, August 30, 2006. 24 comments
So the goal is this: use the standard chain complex from simplicial homology to compute the Betti numbers.
First we need some code to compute the rank of a matrix. As this is general purpose code I'm not counting it in my 20 line budget. It took me a little while to find a recursive implementation of this suitable for Haskell. Basically, the idea is that a zero matrix has zero rank and the following operations leave the rank of a matrix unchanged:
import Data.List
rank m = let m' = removeZeroRows $ removeZeroColumns m in
         if m' == []
             then 0
             else let (zero,(pivot:rest)) = break ((0 /=) . head) m' in
                  1 + rank (map (cancelZero pivot) (zero ++ rest))
    where
        removeZeroColumns x = let n = foldl1 min (map countZeros x) in
                              map (drop n) x
        removeZeroRows = filter (or . map (0 /=))
        countZeros = length . fst . break (0 /=)
        cancelZero (a:as) (b:bs) = zipWith (\x y -> a*y - b*x) as bs
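To see the algorithm in action, here it is restated as a standalone sketch with a couple of spot checks (the example matrices are my own):

```haskell
-- The rank function from the post, restated so this sketch stands alone:
-- strip all-zero rows and leading zero columns, then peel off one pivot
-- row per recursive call, cancelling its leading entry from every other row.
rank :: (Eq a, Num a) => [[a]] -> Int
rank m =
    let m' = removeZeroRows (removeZeroColumns m)
    in if null m'
         then 0
         else let (zero, pivot : rest) = break ((0 /=) . head) m'
              in 1 + rank (map (cancelZero pivot) (zero ++ rest))
  where
    removeZeroColumns x = let n = foldl1 min (map countZeros x)
                          in map (drop n) x
    removeZeroRows = filter (any (0 /=))
    countZeros = length . fst . break (0 /=)
    cancelZero (a:as) (b:bs) = zipWith (\x y -> a * y - b * x) as bs
```

For instance, rank [[1,2],[2,4]] is 1 because the second row is a multiple of the first, while the 3x3 identity matrix has rank 3.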
d x = zip (d' x) (cycle [1,-1]) where
    d' [] = []
    d' (x:xs) = xs : (map (x:) (d' xs))
matrix basis1 basis2 op = map (coefficients basis1 . op) basis2 where
    coefficients basis x = map (flip coefficient x) basis
    coefficient e = sum . lookup e
    lookup k [] = fail ""
    lookup k ((k',v):xs) = if k == k' then return v else lookup k xs
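As a quick check on d, the boundary of the triangle [1,2,3] should be its three edges with alternating signs, i.e. ∂[1,2,3] = [2,3] - [1,3] + [1,2]. A standalone restatement:

```haskell
-- The boundary operator d from the post, restated standalone:
-- drop each vertex in turn and pair the resulting faces with alternating signs.
d :: [a] -> [([a], Int)]
d x = zip (d' x) (cycle [1,-1])
  where
    d' [] = []
    d' (y:ys) = ys : map (y:) (d' ys)
```

So d [1,2,3] evaluates to [([2,3],1),([1,3],-1),([1,2],1)], matching the usual simplicial boundary formula.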
grade n = filter ((==(n+1)) . length)
dim c = foldl1 max (map length c)-1
dmatrix _ 0 = []
dmatrix c n = matrix (grade (n-1) c) (grade n c) d
betti c = let d = dim c
              ranks = map rank $ map (dmatrix c) [0..d]
              dims = map (\i -> length (grade i c)) [0..d] in
          h dims ranks where
    h (c:cs) (d:d':ds) = c-d-d' : h cs (d':ds)
    h [c] [d] = [c-d]
At this point I suggest looking at something like these notes on simplicial complexes. My d function corresponds to ∂ defined at the bottom of page 1.
Let's take the example at the bottom of page 2 and the top of page 3:
example = [
[1,2,3],
[1,2],[1,3],[2,3],[2,4],[3,4],
[1],[2],[3],[4],[5]]
And now we can fire up ghci or hugs and get:
*Main> dmatrix example 1
[[-1,1,0,0,0],[-1,0,1,0,0],[0,-1,1,0,0],[0,-1,0,1,0],[0,0,-1,1,0]]
*Main> betti example
[2,1,0]
Here are some more topological spaces to play with:
point = [[0]]
line = [[0],[1],[0,1]]
ball n = sublists [0..n] \\ [[]]
sphere n = ball (n+1) \\ [[0..(n+1)]]
torus = [[1],[2],[3],[4],[5],[6],[7],[8],[9],
[1,2],[2,3],[1,3],
[5,9],[8,9],[5,8],
[4,6],[6,7],[4,7],
[1,5],[4,5],[1,4],
[2,9],[6,9],[2,6],
[3,8],[7,8],[3,7],
[1,9],[3,9],[1,8],
[4,9],[6,8],[5,7],
[1,6],[2,7],[1,7],
[1,5,9],[1,2,9],[2,3,9],[3,8,9],[1,3,8],[1,5,8],
[4,5,9],[4,6,9],[6,8,9],[6,7,8],[5,7,8],[4,5,7],
[1,4,6],[1,2,6],[2,6,7],[2,3,7],[1,3,7],[1,4,7]]
*Main> betti (sphere 3)
[1,0,0,1]
*Main> betti (sphere 0)
[2]
*Main> betti torus
[1,2,1]
sublists [] = [[]]
sublists (a:as) = let b = sublists as in b ++ map (a:) b
properSublists x = sublists x \\ [x]
chain f x = [x:xs| x'<- f x, xs<- chain f x'] ++ [[x]]
bary x = x >>= chain (filter (not . null) . properSublists)
*Main> betti (bary torus)
[1,2,1]
*Main>
And here's my laundry list of things to do with this project:
euler c = sum (zipWith (*) (betti c) (cycle [1,-1]))
How can Haskell not be the programming language that all mathematicians should learn?
Update: I just found some slides on the "fat graph" thing I mentioned above. I rate it as one of the most beautiful pieces of mathematics ever. Some time I'll devote a whole post to it.
Labels: haskell, mathematics
posted by sigfpe at Saturday, August 26, 2006. 14 comments
I'll come back to this in a moment. But there's another approach to String Theory. Again, I'm not going to go into any kind of detail. But part of the calculations require looking at the vibration modes of a string. Just like with a musical instrument you get a fundamental 'note' and higher overtones. When a quantum string is in its lowest energy state it has some of its energy stored in each mode of vibration. If the lowest mode of vibration has 1 unit of energy, the first overtone has 2, the second has 3 and so on. So when we try to find the total energy of a string in its lowest energy state, part of the computation requires summing 1+2+3+4+…. It's one of these apparently meaningless sums, but physicists can use a trick called zeta regularisation to get the result -1/12. It's the same number as above but arrived at in a completely different way.
So here's my theory about what's going on. (Unlike other people out there who are crackpots, I have the decency to warn you when I might be grossly misunderstanding other people or even making stuff up. :-) The problem is that physicists keep getting infinities because they have to compute these infinite sums. And they start to calculate these sums by 'creeping up on them', i.e. by taking the limit of the partial sums. They're trying to use the analyst's definition of summation using deltas and epsilons. Unfortunately, the things physicists are trying to sum over are so large there simply isn't a way to creep up on infinity like this. So instead, I think we have to redefine what we mean by counting and summation. Instead we should be using the Euler characteristic. The -1/12 above is a clue that this is the right track - because when physicists try to use their weird renormalisation tricks to get answers it's not unusual for the result to end up having a mysterious Euler number connection. So it's as if the weird tricks are just a shadow of some deeper kind of mathematics that one day will make sense. But we can't just sum over any old set in this way - there needs to be some other structure that tells us how to compute these sums. In the case of Hadwiger's theorem, if we assume the sums are dimensionless and invariant under rotations and translations then that is enough information to pin down precisely one possible meaning of the sum - the Euler characteristic. What John Baez and co. are doing is trying to find other kinds of structures that are relevant for physical theories, and I'm hoping that one day these will lead to new notions of summation that work for them.
I've drifted a bit from my original topic - a chapter in a course from a graphics convention, so I'll stop there. But here's the one paragraph summary of what I'm saying: just as we can count the elements in a set, there is also a notion of counting for other kinds of structure. We don't really have a proper theory of what this means but we do have some partial results that suggest the Euler characteristic is connected somehow. And maybe when we have a proper theory, and literally a new way to count, the problems with infinities that plague any kind of quantum field theory will go away.
Labels: mathematics, physics
posted by sigfpe at Thursday, August 17, 2006. 2 comments
Labels: mathematics
posted by sigfpe at Sunday, August 13, 2006. 0 comments
A more permanent version of this article is stored at GitHub. Please link to that rather than to here.
If you hadn't guessed, this is about monads as they appear in pure functional programming languages like Haskell. They are closely related to the monads of category theory, but are not exactly the same because Haskell doesn't enforce the identities satisfied by categorical monads.
Writing introductions to monads seems to have developed into an industry. There's a gentle Introduction, a Haskell Programmer's introduction with the advice "Don't Panic", an introduction for the "Working Haskell Programmer" and countless others that introduce monads as everything from a type of functor to a type of space suit.
All of these introduce monads as something esoteric in need of explanation. But what I want to argue is that they aren't esoteric at all. In fact, faced with various problems in functional programming you would have been led, inexorably, to certain solutions, all of which are examples of monads. In fact, I hope to get you to invent them now if you haven't already. It's then a small step to notice that all of these solutions are in fact the same solution in disguise. And after reading this, you might be in a better position to understand other documents on monads because you'll recognise everything you see as something you've already invented.
Many of the problems that monads try to solve are related to the issue of side effects. So we'll start with them. (Note that monads let you do more than handle side-effects, in particular many types of container object can be viewed as monads. Some of the introductions to monads find it hard to reconcile these two different uses of monads and concentrate on just one or the other.)
In an imperative programming language such as C++, functions behave nothing like the functions of mathematics. For example, suppose we have a C++ function that takes a single floating point argument and returns a floating point result. Superficially it might seem a little like a mathematical function mapping reals to reals, but a C++ function can do more than just return a number that depends on its arguments. It can read and write the values of global variables as well as writing output to the screen and receiving input from the user. In a pure functional language, however, a function can only read what is supplied to it in its arguments and the only way it can have an effect on the world is through the values it returns.
So consider this problem in a pure functional language: we have functions f and g that both map floats to floats, but we'd like to modify these functions to also output strings for debugging purposes. In Haskell, f and g might have types given by
f,g :: Float -> Float
How can we modify the types of f and g to admit side effects? Well there really isn't any choice at all. If we'd like f' and g' to produce strings as well as floating point numbers as output, then the only possible way is for these strings to be returned alongside the floating point numbers. In other words, we need f' and g' to be of type
f',g' :: Float -> (Float,String)
We can draw this diagrammatically as
x
|
+---+
| f'|
+---+
| \
| |
f x "f was called."
We can think of these as 'debuggable' functions.
But suppose now that we'd like to debug the composition of two such functions. We could simply compose our original functions, f and g, to form f . g. But our debuggable functions aren't quite so straightforward to deal with. We'd like the strings returned by f' and g' to be concatenated into one longer debugging string (the one from f' after the one from g'). But we can't simply compose f' and g' because the return value of g' is not of the same type as the argument to f'. We could write code in a style like this:
let (y,s) = g' x
(z,t) = f' y in (z,s++t)
Here's how it looks diagramatically:
x
|
+---+
| g'|
+---+
| \
+---+ | "g was called."
| f'| |
+---+ |
| \ |
| \ |
| +----+
| | ++ |
| +----+
| |
f (g x) "g was called.f was called."
This is hard work every time we need to compose two functions, and if we had to implement this kind of plumbing all the way through our code it would be a pain. What we need is to define a higher order function to perform this plumbing for us. As the problem is that the output from g' can't simply be plugged into the input to f', we need to 'upgrade' f'. So we introduce a function, 'bind', to do this. In other words we'd like
bind f' :: (Float,String) -> (Float,String)
which implies that
bind :: (Float -> (Float,String)) -> ((Float,String) -> (Float,String))
bind must serve two purposes: it must (1) apply f' to the correct part of g' x and (2) concatenate the string returned by g' with the string returned by f'.
Write the function bind.
bind f' (gx,gs) = let (fx,fs) = f' gx in (fx,gs++fs)
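Putting bind to work, here is a runnable sketch; f' and g' are made-up debuggable functions of my own, chosen so the log matches the diagrams above:

```haskell
bind :: (Float -> (Float, String)) -> ((Float, String) -> (Float, String))
bind f' (gx, gs) = let (fx, fs) = f' gx in (fx, gs ++ fs)

-- Two hypothetical debuggable functions for illustration.
f', g' :: Float -> (Float, String)
f' x = (x + 1, "f was called.")
g' x = (2 * x, "g was called.")

-- Compose them: run g' first, then upgrade f' with bind.
composed :: Float -> (Float, String)
composed = bind f' . g'
```

Evaluating composed 3 gives (7.0, "g was called.f was called."): the value flows through g' then f', and the debug strings are concatenated in call order.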
Given a pair of debuggable functions, f' and g', we can now compose them together to make a new debuggable function bind f' . g'. Write this composition as f'*g'. Even though the output of g' is incompatible with the input of f' we still have a nice easy way to concatenate their operations. And this suggests another question: is there an 'identity' debuggable function? The ordinary identity has these properties: f . id = f and id . f = f. So we're looking for a debuggable function, call it unit, such that unit * f = f * unit = f. Obviously we'd expect it to produce the empty debugging string and otherwise act a bit like the identity.
Define unit.
unit x = (x,"")
The unit allows us to 'lift' any function into a debuggable one. In fact, define
lift f x = (f x,"")
or more simply, lift f = unit . f. The lifted version does much the same as the original function and, quite reasonably, it produces the empty string as a side effect.
Show that lift f * lift g = lift (f.g)
In summary: the functions, bind and unit, allow us to compose debuggable functions in a straightforward way, and compose ordinary functions with debuggable functions in a natural way.
Believe it or not, by carrying out those two exercises you have defined your first monad. At this point it's probably not clear which of the structures we've looked at is the monad itself, or what other monads might look like. But rather than defining monads now I'll get you to do some more easy exercises that will introduce other monads so that you'll see for yourself that there is a common structure deserving of its own name. I'm also pretty confident that most people, faced with the original problem, would eventually have come up with the function bind as a way to glue their debuggable functions together. So I'm sure that you could have invented this monad, even if you didn't realise it was a monad.
Consider the functions sqrt and cbrt that compute the square root and cube root, respectively, of a real number. These are straightforward functions of type Float -> Float (although sqrt will throw an exception for negative arguments, something we'll ignore).
Now consider a version of these functions that works with complex numbers. Every complex number, besides zero, has two square roots. Similarly, every non-zero complex number has three cube roots. So we'd like sqrt' and cbrt' to return lists of values. In other words, we'd like
sqrt',cbrt' :: Complex Double -> [Complex Double]
We'll call these 'multivalued' functions.
Suppose we want to find the sixth root of a real number. We can just compose the cube root and square root functions. In other words we can define sixthroot x = sqrt (cbrt x).
But how do we define a function that finds all six sixth roots of a complex number using sqrt' and cbrt'? We can't simply compose these functions. What we'd like is to first compute the cube roots of a number, then find the square roots of all of these numbers in turn, combining together the results into one long list. What we need is a function, called bind say, to compose these functions, with declaration
bind :: (Complex Double -> [Complex Double]) -> ([Complex Double] -> [Complex Double])
Here's a diagram showing how the whole process looks. We only want to write cbrt' once but still have it applied to both sqrt' values.
64
|
+------+
| sqrt'|
+------+
+8 / \ -8
+------+ +------+
| cbrt'| | cbrt'|
+------+ +------+
| | | | | |
2 . . -2 . .
Write an implementation of bind.
bind f x = concat (map f x)
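Here is bind at work on two toy multivalued functions (my own examples, chosen so the output is easy to verify by hand):

```haskell
-- bind upgrades a multivalued function so it accepts a list of inputs,
-- applying it to each and concatenating all the answers.
bind :: (a -> [b]) -> ([a] -> [b])
bind f x = concat (map f x)

-- Two hypothetical multivalued functions for illustration.
f, g :: Int -> [Int]
f x = [x, -x]     -- two 'answers', like a square root
g x = [x, x + 1]

composed :: Int -> [Int]
composed x = (bind f . g) x
```

Evaluating composed 3 first produces [3,4] from g, then f is applied to each, giving [3,-3,4,-4].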
How do we write the identity function in multivalued form? The identity returns one argument, so a multivalued version should return a list of length one. Call this function unit.
Define unit.
unit x = [x]
Again, define f * g = bind f . g and lift f = unit . f. lift does exactly what you might expect. It turns an ordinary function into a multivalued one in the obvious way.
Show that f * unit = unit * f = f and lift f * lift g = lift (f.g)
Again, given the original problem, we are led inexorably towards this bind function.
If you managed those exercises then you've defined your second monad. You may be beginning to see a pattern develop. It's curious that these entirely different looking problems have led to similar looking constructions.
The Haskell random function looks like this
random :: StdGen -> (a,StdGen)
The idea is that in order to generate a random number you need a seed, and after you've generated the number you need to update the seed to a new value. In a non-pure language the seed can be a global variable so the user doesn't need to deal with it explicitly. But in a pure language the seed needs to be passed in and out explicitly - and that's what the signature of random describes. Note that this is similar to the debugging case above because we are returning extra data by using a pair. But this time we're passing in extra data too.
So a function that is conceptually a randomised function a -> b can be written as a function a -> StdGen -> (b,StdGen) where StdGen is the type of the seed.
We now must work out how to compose two randomised functions, f and g. The first element of the pair that g returns, its 'real' return value, needs to be passed in as the first argument of f. Meanwhile the seed returned by g needs to be passed in as f's seed. So we can give this signature for bind:
bind :: (a -> StdGen -> (b,StdGen)) -> (StdGen -> (a,StdGen)) -> (StdGen -> (b,StdGen))
Implement bind
bind f x seed = let (x',seed') = x seed in f x' seed'
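To see the seed actually being threaded, here is a runnable sketch that substitutes a toy linear-congruential generator of my own for StdGen, so the behaviour is deterministic:

```haskell
-- A toy generator standing in for StdGen (my own, for determinism).
type Seed = Int

-- Step the generator: return a digit 0-9 and the new seed.
next :: Seed -> (Int, Seed)
next s = let s' = (1103515245 * s + 12345) `mod` 2147483648
         in (s' `mod` 10, s')

bind :: (a -> Seed -> (b, Seed)) -> (Seed -> (a, Seed)) -> (Seed -> (b, Seed))
bind f x seed = let (x', seed') = x seed in f x' seed'

unit :: a -> (Seed -> (a, Seed))
unit x g = (x, g)

-- A 'randomised' function: append a random digit to n.
addDigit :: Int -> Seed -> (Int, Seed)
addDigit n seed = let (d, seed') = next seed in (10 * n + d, seed')

-- Two calls composed with bind: each call sees a fresh seed.
twoDigits :: Seed -> (Int, Seed)
twoDigits = bind addDigit . addDigit 0
```

Because bind passes the updated seed from the first addDigit into the second, the two digits come from successive generator states, and running twoDigits with the same seed always gives the same two-digit number.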
Now we need to find the 'identity' randomised function. This needs to be of type
unit :: a -> (StdGen -> (a,StdGen))
and should leave the seed unmodified.
Implement unit.
unit x g = (x,g)
or just
unit = (,)
Yet again, define f * g = bind f . g and lift f = unit . f. lift does exactly what you might expect - it turns an ordinary function into a randomised one that leaves the seed unchanged.
Show that f * unit = unit * f = f and lift f * lift g = lift (f.g)
It's now time to step back and discern the common structure.
Define
type Debuggable a = (a,String)
type Multivalued a = [a]
type Randomised a = StdGen -> (a,StdGen)
Use the variable m to represent Debuggable, Multivalued or Randomised. In each case we are faced with the same problem. We're given a function a -> m b but we need to somehow apply this function to an object of type m a instead of one of type a. In each case we do so by defining a function called bind, of type (a -> m b) -> (m a -> m b), and introducing a kind of identity function unit :: a -> m a. In addition, we found that these identities held: f * unit = unit * f = f and lift f * lift g = lift (f.g), where '*' and lift were defined in terms of unit and bind.
So now I can reveal what a monad is. The triple of objects (m,unit,bind) is the monad, and to be a monad they must satisfy a bunch of laws such as the ones you've been proving. And I think that in each of the three cases you'd have eventually come up with a function like bind, even if you might not have immediately noticed that all three cases shared a common structure.
So now I need to make some contact with the usual definition of Haskell monads. The first thing is that I've flipped the definition of bind and written it as the word 'bind' whereas it's normally written as the operator >>=. So bind f x is normally written as x >>= f. Secondly, unit is usually called return. And thirdly, in order to overload the names >>= and return, we need to make use of type classes. In Haskell, Debuggable is the Writer monad, Multivalued is the List monad and Randomised is the State monad. If you check the definitions of these you'll see that apart from some syntactic fluff they are essentially the definitions you wrote for the exercises above. Debugging used the Writer monad, the multivalued functions used the List monad and the random number generator used the State monad. You could have invented monads!
I don't want to spend too long on this (and you can skip this section) because there are plenty of excellent introductions out there.
You've already seen how the bind function can provide a nice way to plumb functions together to save you writing quite a bit of ugly code. Haskell goes one step further, you don't even have to explicitly use the bind function, you can ask Haskell to insert it into your code automatically.
Let's go back to the original debugging example, except we'll now use the official Haskell type classes. Where we previously used a pair like (a,s) we now use Writer (a,s), of type Writer String a. And to get the pair back out of one of these objects we use the function runWriter. Suppose we want to increment, double and then decrement 7, at each stage logging what we have done. We can write
return 7 >>= (\x -> Writer (x+1,"inc."))
>>= (\x -> Writer (2*x,"double."))
>>= (\x -> Writer (x-1,"dec."))
If we apply runWriter to this we should get (15,"inc.double.dec."). But it's still pretty ugly. Instead we can use Haskell do syntax. The idea is that
do x <- y
more code
is automatically rewritten by the compiler to
y >>= (\x -> do
more code).
Similarly,
do
let x = y
more code
is rewritten as
(\x -> do
more code) y
and
do
expression
is just left as the expression.
We can now rewrite the code above as:
do
let x = 7
y <- Writer (x+1,"inc\n")
z <- Writer (2*y,"double\n")
Writer (z-1,"dec\n")
The notation is very suggestive. When we write y <- ... it's as if we can pretend that the expression on the right hand side is just x+1 and that the side-effect just looks after itself.
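For the do block above to compile, Writer has to be an actual Monad instance. Here is a minimal self-contained version (a simplification of the library's Writer, specialised to String logs; modern GHC also requires Functor and Applicative instances), applied to the increment/double/decrement example:

```haskell
newtype Writer a = Writer { runWriter :: (a, String) }

instance Functor Writer where
    fmap f (Writer (a, s)) = Writer (f a, s)

instance Applicative Writer where
    pure a = Writer (a, "")
    Writer (f, s) <*> Writer (a, t) = Writer (f a, s ++ t)

instance Monad Writer where
    -- Exactly the debuggable bind from earlier: thread the value,
    -- concatenate the logs.
    Writer (a, s) >>= f = let Writer (b, t) = f a in Writer (b, s ++ t)

example :: Writer Int
example = do
    let x = 7
    y <- Writer (x + 1, "inc.")
    z <- Writer (2 * y, "double.")
    Writer (z - 1, "dec.")
```

Here runWriter example evaluates to (15,"inc.double.dec."), matching the hand-plumbed version.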
Another example. Write our sixth root example in the cumbersome form:
return 64 >>= (\x -> sqrt' x) >>= (\y -> cbrt' y)
We can now rewrite this as the readable
do
let x = 64
y <- sqrt' x
z <- cbrt' y
return z
We're able to write what looks like ordinary non-multivalued code and the implicit bind functions that Haskell inserts automatically make it multivalued.
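Since Haskell's list type is already a Monad, the do block above runs as-is once sqrt' and cbrt' are defined. The post doesn't spell those out, so in this sketch rootsOf is my own helper computing all n-th roots via polar form:

```haskell
import Data.Complex

-- All n-th roots of a complex number, via polar form (my helper,
-- not from the post).
rootsOf :: Int -> Complex Double -> [Complex Double]
rootsOf n z = [ mkPolar (magnitude z ** (1 / fromIntegral n))
                        ((phase z + 2 * pi * fromIntegral k) / fromIntegral n)
              | k <- [0 .. n - 1] ]

sqrt', cbrt' :: Complex Double -> [Complex Double]
sqrt' = rootsOf 2
cbrt' = rootsOf 3

-- The sixth-root do block, running in the built-in list monad.
sixthRoots :: [Complex Double]
sixthRoots = do
    let x = 64
    y <- sqrt' x
    z <- cbrt' y
    return z
```

The implicit binds fan the two square roots out through cbrt', so sixthRoots contains six values, each of which raised to the sixth power gives back 64 (up to floating point error).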
Inventing this syntax was a work of genius. Maybe you could have invented it, but I'm sure I wouldn't have. But this extra stuff is really just syntactic sugar on top of monads. I still claim that you could have invented the underlying monads.
There's now one last thing we have to look at before you're fully qualified in monadicity. Interaction with the outside world. Up until now everything I have said might apply to any pure functional language. But now consider lazy pure functional languages. In such a language you have no idea what order things will be evaluated in. So if you have a function to print the message "Input a number" and another function to input the number, you might not be able to guarantee that the message is printed before the input is requested! Go back to the randomised function example. Notice how the random seed gets threaded through your functions so that it can be used each time random is called. There is a kind of ordering going on. Suppose we have x >>= f >>= g. Because g uses the seed returned by f, we know for sure that f will generate its random number before g. This shows that in principle, monads can be used to order computations.
Now consider the implementation of random in the compiler. It's typically a C or assembler routine linked into the final Haskell executable. If this routine were modified to perform I/O we could guarantee that the I/O in f was performed before that in g. This is exactly how I/O works in Haskell: we perform all of the I/O in a monad. In this case, a function that conceptually is of type a -> b, but also has a side-effect in the real world, is actually of type a -> IO b. The IO type is a black box; we don't need to know what's in it. (Maybe it works just like the random example, maybe not.) We just need to know that x >>= f >>= g performs the I/O in f before that in g.
One last thing. Monads were originally developed in the context of category theory. I'll leave the connection for another day.
import System.Random
bind :: (a -> StdGen -> (b,StdGen)) -> (StdGen -> (a,StdGen)) -> (StdGen -> (b,StdGen))
bind f x seed = let (x',seed') = x seed in f x' seed'
unit x g = (x,g)
lift f = unit . f
addDigit n g = let (a,g') = random g in (n + a `mod` 10,g')
shift = lift (*10)
test :: Integer -> StdGen -> (Integer,StdGen)
test = bind addDigit . bind shift . addDigit
g = mkStdGen 666
main = print $ test 0 g
Labels: haskell
posted by sigfpe at Monday, August 07, 2006. 66 comments
I think that over the years the complexity of the mathematics used in graphics has been creeping upwards and I think the field is moving into a more mature phase. However, there are usually very few results that are of purely mathematical interest. One area that does look like it might be of general mathematical interest was the course on discrete differential geometry (DDG). Unfortunately I didn't actually attend the course but I can say a tiny bit on why this might be interesting. (This is all second hand from people who did attend the course.)
In many graphics applications (including physical simulation) we have to use calculus on manifolds. For example when solving the Navier-Stokes equations for fluid dynamics or for smoothing the surfaces of 3D models. Unfortunately, in graphics we tend to work with triangulations of surfaces and use discrete approximations to derivatives. As a result, we can only use our usual theorems of calculus to make approximate statements about these triangulations (usually based on some form of finite differencing).
The DDG approach appears to be an alternative well principled approach to these derivatives. Although they still only form approximations to the continuum limit, discrete analogues of the usual theorems of calculus hold exactly. For example it is possible to define discrete versions of differential forms and the exterior derivative leading to a discrete version of de Rham cohomology.
One place where DDG appears to perform well is in fluid dynamics. In the graphics world we don't care about how accurate these simulations are. What we need is for simulations to be stable and plausible. Often standard approximation methods lead to effects like mass loss and vorticity dissipation. These quantities are often salient to the eye. The effect may be small but over a period of time the problems can accumulate and it can soon become apparent that the simulation has errors. What's cool about DDG is that discrete analogues of the usual conservation equations are provable. As a result, even though the simulation is an approximation to the continuum, the discrete analogues of the conserved quantities remain good approximations to the continuum values. Even though the simulation may end up being inaccurate, it remains looking good.
The other good thing about SIGGRAPH is the publishers selling heavily discounted books. (Bonus discounts if you've published a paper with them.) Among other books I picked up paper copies of Generatingfunctionology, Synthetic Differential Geometry (both available in electronic form, but I like paper), Knuth's History of Combinatorial Generation (I was aware of Abulafia's interest in combinatorics (though with Madonna's recent interest in Jewish Mysticism I ought to distance myself from that) but not the many other developments from other cultures) and this book, which almost brought tears of nostalgia to my eyes.
By the way, I have two free days in the Boston area. Anyone have any suggestions for what to do here? I'm already planning to (re)visit the MIT museum and some of the art galleries. Any other suggestions?
posted by sigfpe at Thursday, August 03, 2006. 0 comments