What's the difference between recursion and corecursion?

  • What's the difference between these two? http://en.wikipedia.org/wiki/Recursion_%28computer_science%29 and http://en.wikipedia.org/wiki/Corecursion. On Wikipedia there is little information and no clear code explaining these terms. What are some very simple examples explaining them? How is corecursion the dual of recursion? Are there any classic corecursive algorithms?

  • Answer:

There are a number of good ways of looking at this. The easiest for me is to think about the relation between inductive and coinductive definitions.

An inductive definition of a set goes like this: the set Nat is defined as the smallest set such that Zero is in Nat, and if n is in Nat then Succ n is in Nat. This corresponds to the following OCaml:

```ocaml
type nat = Zero | Succ of nat
```

One thing to note about this definition is that the number

```ocaml
omega = Succ(omega)
```

is NOT a member of this set. Why? Assume that it was, and consider the set N that has all the same elements as Nat except that it does not have omega. Clearly Zero is in N, and if y is in N then Succ(y) is in N, but N is smaller than Nat, which is a contradiction. So omega is not in Nat.

Or, perhaps more usefully for a computer scientist: given some set a, the set "list of a" is defined as the smallest set such that Nil is in it, and if xs is in it and x is in a then Cons x xs is in it. This corresponds to something like

```ocaml
type 'a list = Nil | Cons of 'a * 'a list
```

The operative word here is "smallest". If we didn't say "smallest" we would have no way of telling whether the set Nat contained a banana! Again,

```ocaml
zeros = Cons(Zero, zeros)
```

is not a valid definition for a list of nats, just as omega was not a valid nat.

Defining data inductively like this allows us to define functions that work on it using recursion:

```ocaml
let rec plus a b =
  match a with
  | Zero    -> b
  | Succ(c) -> let r = plus c b in Succ(r)
```

We can then prove facts about this, like "plus a Zero = a", using induction (specifically, structural induction). Our proof proceeds by structural induction on a. For the base case, let a be Zero:

```ocaml
plus Zero Zero
= match Zero with
  | Zero    -> Zero
  | Succ(c) -> let r = plus c Zero in Succ(r)
= Zero
```

So we know plus Zero Zero = Zero. Now let a be any nat, and assume the inductive hypothesis that plus a Zero = a.
We now show that plus (Succ(a)) Zero = Succ(a). This is immediate:

```ocaml
plus (Succ(a)) Zero
= match Succ(a) with
  | Zero    -> Zero
  | Succ(a) -> let r = plus a Zero in Succ(r)
= let r = a in Succ(r)   (* by the inductive hypothesis *)
= Succ(a)
```

Thus, by induction, plus a Zero = a for all a in nat. We can of course prove more interesting things, but this is the general idea.

So far we have dealt with inductively defined data, which we got by taking the "smallest" set. Now we want to work with coinductively defined codata, which we get by taking the biggest set.

Let a be a set. The set "stream of a" is defined as the largest set such that every x in it consists of an ordered pair (head, tail) such that head is in a and tail is in stream of a. In Haskell we would express this as

```haskell
data Stream a = Stream a (Stream a)   -- "data", not "newtype"
```

Actually, in Haskell we normally use the built-in lists, which can be an ordered pair or an empty list:

```haskell
data [a] = [] | a:[a]
```

Banana is not a member of this type either, since it is not an ordered pair or the empty list. But now we can say

```haskell
ones = 1 : ones
```

and this is a perfectly valid definition. What's more, we can perform corecursion on this codata. (Actually, it is possible for a function to be both corecursive and recursive.) While recursion is defined by the function having a domain consisting of data, corecursion just means it has a codomain (also called the range) that is codata. Primitive recursion means always "calling oneself" on smaller data until reaching some smallest data. Primitive corecursion always "calls itself" on data greater than or equal to what you had before. ones = 1 : ones is primitively corecursive, while the function map (kind of like "foreach" in imperative languages) is both primitively recursive (sort of) and primitively corecursive.
```haskell
map :: (a -> b) -> [a] -> [b]
map f []     = []
map f (x:xs) = f x : map f xs
```

The same goes for the function zipWith, which takes a function and a pair of lists and combines them together using that function:

```haskell
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith f (a:as) (b:bs) = f a b : zipWith f as bs
zipWith _ _      _      = []   -- base case
```

The classic example in functional languages is the Fibonacci sequence:

```haskell
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
```

which is primitively recursive, but can be expressed more elegantly as an infinite list:

```haskell
fibs   = 0 : 1 : zipWith (+) fibs (tail fibs)
fib' n = fibs !! n   -- !! is Haskell syntax for indexing
```

An interesting example of induction/coinduction is proving that these two definitions compute the same thing. This is left as an exercise for the reader.
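The proof is the real exercise, but the claim can at least be checked empirically. The sketch below is self-contained and simply restates both definitions from the answer, then compares them on small inputs:

```haskell
-- Empirical check (not a proof) that the recursive and the
-- corecursive Fibonacci definitions agree on small inputs.
fib :: Int -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib' :: Int -> Integer
fib' n = fibs !! n

agree :: Bool
agree = map fib [0..20] == map fib' [0..20]
```

In GHCi, `agree` evaluates to True; turning that observation into a proof by induction/coinduction is the exercise.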

user167908 at Programmers

Other answers

Check this at http://blog.kovanovic.info/corecursion-and-lazy-evaluation/. I found it to the point:

Lazy evaluation is one very nice feature found in languages with functional programming capabilities such as Lisp, Haskell, Python, etc. It means that evaluation of a variable's value is delayed until the actual usage of that variable. For example, if you wanted to create a list of a million elements with something like

```clojure
(def x (range 1000000))
```

it is not actually created; it is just specified, and when you really use that variable for the first time, for instance when you want the 10th element of that list, the interpreter creates only the first 10 elements. So the first run of

```clojure
(take 10 x)
```

actually creates these elements, and all subsequent calls to the same function work with already-existing elements. This is very useful because you can create infinite lists without out-of-memory errors; the list will be only as large as what you requested. Of course, if your program works with large data collections, it can still hit the memory limit in the usage of these infinite lists.

On the other hand, http://en.wikipedia.org/wiki/Corecursion is the dual of recursion. What does this mean? Well, just like recursive functions, which are expressed in terms of themselves, corecursive variables are expressed in terms of themselves. This is best shown by example. Let's say we want a list of all prime numbers...
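The blog post's primes example is cut off above, but the idea it builds toward can be sketched in Haskell (the post itself uses Clojure; this translation is mine, a minimal trial-division sieve in which the list of primes is corecursively defined in terms of itself):

```haskell
-- Corecursive primes: each candidate n is tested against the primes
-- already produced (those whose square does not exceed n), so the
-- list refers to its own earlier elements while it is being built.
primes :: [Int]
primes = 2 : [ n | n <- [3,5..]
                 , all (\p -> n `mod` p /= 0)
                       (takeWhile (\p -> p * p <= n) primes) ]
```

Because evaluation is lazy, `take 10 primes` forces only as much of the list as needed, exactly as the post describes for `range`.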

http://cs-www.cs.yale.edu/homes/carsten/classes/f02/handouts/RAC8.pdf is a series of slides that gives a fairly clear answer. Per those notes, a corecursive definition is similar to a recursive definition except that it has no base case.
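A minimal Haskell illustration of that point (my example, not one from the slides): the definition below has no base case at all, yet any finite prefix of it is computable.

```haskell
-- No base case: nats is defined entirely in terms of itself.
-- Laziness lets consumers take any finite prefix they need.
nats :: [Integer]
nats = 0 : map (+ 1) nats
```

Here `take 5 nats` yields [0,1,2,3,4]; compare a recursive function, which must bottom out at a base case in order to terminate.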

John Bode

Basically, corecursion is recursion accumulator-style, building its result on the way forward from the starting case, whereas regular recursion builds its result on the way back from the base case. (Speaking Haskell now.) That's why foldr (with a strict combining function) expresses recursion, and foldl' (with a strict combining function) / scanl / until / iterate / unfoldr / etc. express corecursion. Corecursion is everywhere. foldr with a non-strict combining function expresses http://en.wikipedia.org/wiki/Tail_call#Tail_recursion_modulo_cons. And Haskell's guarded recursion http://stackoverflow.com/a/8882745/849891 is like http://en.wikipedia.org/wiki/Tail_call#Tail_recursion_modulo_cons.

This is recursion:

```haskell
fib n | n==0 = 0
      | n==1 = 1
      | n>1  = fib (n-1) + fib (n-2)

fib n = snd $ g n
  where g n | n==0 = (1,0)
            | n>0  = let (b,a) = g (n-1) in (b+a,b)

fib n = snd $ foldr (\_ (b,a) -> (b+a,b)) (1,0) [n,n-1..1]
```

(read $ as "of"). This is corecursion:

```haskell
fib n = g n (0,1) 0
  where g n (a,b) i | i==n      = a
                    | otherwise = g n (b,a+b) (i+1)

fib n = fst.snd $ until ((==n).fst) (\(i,(a,b)) -> (i+1,(b,a+b))) (0,(0,1))
      = fst $ foldl (\(a,b) _ -> (b,a+b)) (0,1) [1..n]
      = fst $ last $ scanl (\(a,b) _ -> (b,a+b)) (0,1) [1..n]
      = fst (fibs!!n) where fibs = scanl (\(a,b) _ -> (b,a+b)) (0,1) [1..]
      = fst (fibs!!n) where fibs = iterate (\(a,b) -> (b,a+b)) (0,1)
      = (fibs!!n) where fibs = unfoldr (\(a,b) -> Just (a, (b,a+b))) (0,1)
      = (fibs!!n) where fibs = 0:1:map (\(a,b)->a+b) (zip fibs $ tail fibs)
      = (fibs!!n) where fibs = 0:1:zipWith (+) fibs (tail fibs)
      = (fibs!!n) where fibs = 0:scanl (+) 1 fibs
      = .....
```

Folds: http://en.wikipedia.org/wiki/Fold_(higher-order_function)
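One link of that chain, pulled out as a self-contained sketch: unfoldr grows the list forward from the seed (0,1) with no base case, and we can check it against the plainly recursive definition.

```haskell
import Data.List (unfoldr)

-- Corecursive: each step maps the state (a,b) to the next output a
-- and the next state (b, a+b); the unfolding itself never stops,
-- so callers take only the prefix they need.
fibs :: [Integer]
fibs = unfoldr (\(a, b) -> Just (a, (b, a + b))) (0, 1)

-- Recursive, for comparison: builds its result coming back
-- from the base cases.
fibRec :: Int -> Integer
fibRec n | n == 0    = 0
         | n == 1    = 1
         | otherwise = fibRec (n - 1) + fibRec (n - 2)
```

In GHCi, `map fibRec [0..10] == take 11 fibs` evaluates to True.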

Will Ness
