The λ-calculus was designed to deal with substituting variables into functional expressions. Thus λx.x is the lambda expression for "right back at you" and λx.xy is the lambda expression for "right back at you, applied to y." You can encode numbers this way, so λf.λx.x means "take in a function f and a symbol x, and apply f to the symbol precisely zero times." Similarly, λf.λx.fx means "take in a function f and a symbol x and apply f to it exactly once." If we adopt the notation nf to mean "apply the function f n times", then the rather clever expression λm.λn.λf.λx.mf nf x means "take in m and n, a function f, and a symbol x, then apply f to x n times, and subsequently apply f to the result m more times."
What I would like to explore, in a suitable framing, is the development of a μ-calculus that deals with the establishment, maintenance, and evolution of relationship(s). Whereas the λ-calculus confines itself to very specific kinds of basically-linguistic relationships between terms, I think the μ-calculus should be much more physical. I am partly inspired by contemporary ideas in quantum mechanics which suggest that, as time goes by, everything is becoming more entangled with everything else. I am also inspired by my reading of Deleuze's Difference and Repetition. A key concept in that work is the idea of a manifold, or, as he puts it, a multiplicity.
So, it's possible to define a function for "n+1" by λn.λf.λx.f nf x -- and so on and so forth. In fact, any computation you can do can be expressed this way; this is the Church-Turing thesis.
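Since Python has first-class functions, the numerals above can be mimicked almost symbol for symbol. The following is just an illustrative sketch; the names zero, succ, plus, and to_int are my own, not standard terminology:

```python
# Church numerals: a numeral n takes a function f and a symbol x
# and applies f to x exactly n times.

zero = lambda f: lambda x: x                                   # λf.λx.x
succ = lambda n: lambda f: lambda x: f(n(f)(x))                # λn.λf.λx.f (nf x)
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))   # λm.λn.λf.λx.mf (nf x)

def to_int(n):
    """Decode a Church numeral by counting how many times f gets applied."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))                  # 3
print(to_int(plus(three)(three)))     # 6
```

Running to_int really does "stick in all the f's and x's and see what comes out": it hands the numeral a concrete increment function and a concrete starting value.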
I like the λ-calculus particularly because it is so close to the programming language LISP, which I find just great for practical programming. That said, programming is based on the metaphor of language and of "effective procedures", whereas when we think about context, I think we have to be prepared to think in a very different way. Sure, we could imagine that a given lambda expression describes a "context of evaluation" -- i.e. we could really stick in all the f's and x's and m's and n's and see what comes out -- but when we think about context as it applies in our own world, it's much richer than just a linguistic notion.
This may help to explain why various efforts to "formalize" context within a computational, er, setting, have not been entirely satisfactory. Hirst 1997 writes that "the notion of 'context' can be defined only in terms of its effects in a particular situation." This seems to evade the point in a "micro-reductive" way.
Gregory Bateson was interested in thinking about cybernetics, but didn't seem to feel constrained to think about it using a strictly computational or information-theoretic paradigm, while still being informed by those ideas. This gave him the freedom to talk about ideas like "context", "relationship", "learning", and "communication" without needing to define them in precise computational terms. Nevertheless, he handles the ideas fairly rigorously. The first upshot of his analysis is that communication depends on learning. That is perhaps not so surprising, since it's basically an information-theoretic claim. The second key point, which I think makes his writing interesting, is that communication depends on a relationship. If we were to read the word "channel" instead of "relationship", then there would be nothing new here either. But a relationship is not really the same as a channel. At the final point in this tower is "context" -- that is, relationships exist within a context. Bateson says this means that there is no communication without a meta-communication that classifies the other communication. But I think the concepts of relationship and context are still richer.
In any case, a simple example is the idea of a telos -- a goal or final cause. Thus, a somewhat abstracted moth "relates" to a candle, or to the moon, as its goal. Human beings set their sights still further and proceed per aspera ad astra.
Now, relationships themselves do not come out of nowhere -- they are themselves "voiced". I put this word in quotes because the voicing may not be linguistic or vocal at all. Thus, an adult wolf voices its relationship of pack-dominance by gently biting down on the scruff of the neck of a junior wolf, in order to say "I am your adult, you are my puppy." And a cat voices its relationship to its owner by saying "mew, mew" in front of the refrigerator -- which Bateson translates as "dependency, dependency."
This becomes the prototype for an entire class of behaviors that "voice" relationship -- these are Bateson's μ-functions. I think that the idea that Rasmus Rebane advances in the following quote nicely illustrates how this works in the context of human communication:
In other words, the "channel" that some phatic utterances are "about" are not the given contact between A and B but the contact that A and B both have with C. This is almost the reverse of a metachannel, perhaps something like a parachannel (although these communication theory terms are perhaps not the best ones for the occasion). -- soul searching: phatic labor post

When I say "Nice weather we're having" the remark may be overtly about the weather, but it's also, more importantly, about something we at least nominally have in common. A possible reply would be "Oh, I don't know, I think we could do with a bit of rain." This need not be seen as a combative remark, and may indeed be an opening for a conversation with greater intimacy.
Indeed, this business with "C" opens up onto a huge space of potential commonality. As context is being shaped and reshaped through use -- rather like the moguls on a ski slope -- there is every opportunity to relate to it in different ways.
"Infrastructure is a classic 'public good,' as a set of resources available to all. [...] The creation and maintenance of infrastructure is not itself directly productive of value and yet is essential to the capitalist system of production." (Elyachar 2010: 455)

I've been quite interested in questions about the creation and maintenance of public goods: they seem about as close as we get in this secular age to "sacred" things. Well, "sacred" also means that which is set aside, as opposed to that which is common. So there is an interesting tension here, to say the least.
The provision of a public good may be very unlike what we tend to think of when we think "public" or "good." Here's an image from the comic Miracleman illustrating the fictional destruction of London through the effect of god-like wrath. This isn't simply for shock value. If we think of the destruction of the natural environment, we get something equally bad, just less directly visual. The fact that the earth's environment is under serious threat is another reason to take Rasmus Rebane's idea of a phatic turn very seriously. Bateson said similar things, and sadly, the environmental situation has hardly gotten any better since his time. We get into real-world moral dilemmas that would probably make Malthus squirm, and that seem to require theoretical treatment that goes further than mere utilitarianism. Maybe a μ-calculus could help with this, but I don't mean that it would specifically be a "moral calculus."
The doings of Kid Miracleman: Absolute power corrupts
By dimensions, we mean the variables or co-ordinates upon which a phenomenon depends; by continuity, we mean the set of relations between changes in these variables -- for example, a quadratic form of the differentials of the co-ordinates; by definition, we mean the elements reciprocally determined by these relations, elements which cannot change unless the multiplicity changes its order and its metric. -- p. 231 of Difference and Repetition (Bloomsbury Ed.)

At the same time I would also like the work to have an at least "quasi-"computational flavor. It might not deal in algorithms per se, but if the theory were developed with regard to discrete structures, it could be computable. I would say that it should be possible to give a λ-calculus "reading" of μ structures, in much the same way that a network can be decomposed into trees.
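That last remark about decomposing a network into trees can be made concrete. Here is a minimal sketch, entirely my own illustration (the graph is hypothetical, and this is only an analogy, not anything from Deleuze or Bateson), which "reads" an undirected network as a tree by extracting a breadth-first spanning tree from a chosen root:

```python
from collections import deque

# A small undirected network, given as an adjacency list (hypothetical data).
graph = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c"],
}

def spanning_tree(graph, root):
    """Map each node to its parent in a BFS spanning tree (root maps to None)."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in parent:     # first visit fixes the tree edge
                parent[neighbor] = node
                queue.append(neighbor)
    return parent

print(spanning_tree(graph, "a"))   # {'a': None, 'b': 'a', 'c': 'a', 'd': 'b'}
```

The same network admits many different spanning trees, depending on the root and traversal order -- which is the point: the tree "reading" is a perspective on the structure, not the structure itself.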
There's something just so elegant about this kind of stuff:
(1.) To evaluate a combination (a compound expression other than a special form), evaluate the subexpressions and then apply the value of the operator subexpression to the values of the operand subexpressions. (2.) To apply a compound procedure to a set of arguments, evaluate the body of the procedure in a new environment. To construct this environment, extend the environment part of the procedure object by a frame in which the formal parameters of the procedure are bound to the arguments to which the procedure is applied. -- Structure and Interpretation of Computer Programs, 4th Ed.

...but even so I wonder whether this isn't just one example of Deleuze's understanding of multiplicity.
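The two rules above can be sketched as a toy evaluator. What follows is my own drastic simplification for a tiny lambda-calculus fragment, not SICP's actual code, but it follows the same environment model: applying a procedure evaluates its body in an environment extended by a frame binding parameters to arguments:

```python
# Expressions: a variable is a str; ("lambda", param, body) is an abstraction;
# ("call", op, arg) is a combination of one operator and one operand.

def evaluate(expr, env):
    if isinstance(expr, str):                  # variable: look it up
        return env[expr]
    tag = expr[0]
    if tag == "lambda":                        # procedure object: code + environment
        _, param, body = expr
        return ("closure", param, body, env)
    if tag == "call":                          # rule (1): evaluate subexpressions, then apply
        _, op, arg = expr
        return apply_proc(evaluate(op, env), evaluate(arg, env))
    raise ValueError(f"unknown expression: {expr!r}")

def apply_proc(proc, arg):
    _, param, body, env = proc
    frame = dict(env)                          # rule (2): extend the procedure's environment
    frame[param] = arg                         # with a frame binding parameter to argument
    return evaluate(body, frame)

# (λx.x) applied to (λy.y): the identity returns its argument unchanged.
identity = ("lambda", "x", "x")
print(evaluate(("call", identity, ("lambda", "y", "y")), {}))
# prints ('closure', 'y', 'y', {})
```

Even this toy version exhibits the eval/apply mutual recursion, with the environment playing exactly the role of a "context of evaluation."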