ideas-computer-jasper-jasperControlNotes1


i guess references mark bits of state that persist even if you invoke a continuation.

--

we gotta do something about foldl, foldr. having the user keep track of which of these will stack-overflow when they sum a bunch of numbers is like manual memory management.
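for concreteness, the standard Haskell illustration of the problem (just a sketch of the well-known behavior, not Jasper):

import Data.List (foldl')

-- three ways to sum; only one is safe on a ten-million-element list
sumR, sumL, sumL' :: [Int] -> Int
sumR  = foldr  (+) 0   -- builds 1 + (2 + (3 + ...)): stack grows with the list
sumL  = foldl  (+) 0   -- builds ((0 + 1) + 2) + ... as one giant thunk
sumL' = foldl' (+) 0   -- forces the accumulator each step: constant space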

http://www.haskell.org/haskellwiki/Foldr_Foldl_Foldl%27

http://en.wikipedia.org/wiki/Fold_%28higher-order_function%29#Evaluation_order_considerations

http://en.wikibooks.org/wiki/Haskell/Performance_Introduction

ah, i think this is called "automatic strictness analysis".

--- the comments on this also suggest syntaxes etc. for dealing with strictness:

http://pchiusano.blogspot.com/2009/06/perfect-strictness-analysis-part-2.html

--

https://www.google.com/search?client=ubuntu&channel=fs&q=foldl+foldr+automatic+strictness+analysis&ie=utf-8&oe=utf-8#bav=on.2,or.r_qf.&channel=fs&fp=48beb17d465ad959&q=+automatic+strictness+analysis

--

mb something like garbage collection for the stack? keep track of which parts of the stack resulted from which parts of code. if the stack is close to overflowing, then find a loop, and try switching folds (or whatever is causing the recursion) between foldl, foldl', foldr, foldr'

--

is clojure doseq an example of this powerful macro thang that everyone talks about? the test is whether or not it needs to execute before its arguments are evaluated (to rearrange the symbols). i guess you have to do the pattern matching at the symbolic level, so maybe so.

interesting, todo: think about that.

--

" The Conduit library is a second generation approach to the problem of guaranteeing bounded memory usage in the presence of lazy evaluation. The first generation of these ideas were libraries like Iteratee, Enumerator, and IterIO?. All of these first generation libraries use the the term enumerator for data producers and iteratee for data consumers. The new Conduit library calls data producers "sources" and data consumers "sinks" to make them a little more approachable.

The other big difference between Conduit and the early libraries in this space is to do with guaranteeing early clean up of potentially scarce resources like sockets. Although I have not looked in any detail at the IterIO? library, both Iteratee and Enumerator simply rely on Haskell's garbage collector to clean up resources when they are no longer required. The Conduit library on the other hand uses Resource transformers to guarantee release of these resources as soon as possible. " http://www.mega-nerd.com/erikd/Blog/CodeHacking/Haskell/telnet-conduit.html

--

" no stack traces

This is less of a bother than one would think when coming from another language, but still a problem. Simon Marlow recently stated he may be able to come up with a solution, which was very exciting news. Yesod now supports logging statements that contain file and line number information using Template Haskell. I released a package that applies the same technique. This allows the programmer to record the file and line number location, but not a stack trace. There is also a package that is designed to give a stack trace for an exception. "

--

named break and named continue

"long after Dijkstra's essay, we could still occasionally find new places where goto was really the best way to do it: for instance, if you wanted to "break" out of multiple loops at once. So someone invented named break (likewise continue), and removed yet another use of the unconstrained goto in favour of a more constrained, structured construct."

--

"

wrl 7 hours ago

link

On the contrary, there are still perfectly good uses for goto. The two I can name right off the top of my head are stack-like error unwinding in C (which comes with an endorsement from CERT recommending its use) and computed goto dispatch tables in threaded interpreters.

"

---

hmm.. thinking about how the exception-like, antibody-like, event-driven-like 'backwards computation mode' would fit in with things.. how much is the handler tree distinct from the call stack?

could unify the conception of call stack and handler tree. e.g. so a return from a function is a message propagating upwards that gets immediately caught. wouldn't want to implement it that way though as you don't want returning from a function to be the least bit inefficient (esp. in a functional-ish language) -- so under the covers that common case would be optimized to a normal fn return.

also want to be able to have multiple handler trees, so e.g. you can have the call stack plus some sort of spreadsheet event-driven handler tree plus a GUI element handler tree.

as a special case of that, we might have those multiple threads of control in a single function, with multiple 'returns'

and of course note that handlers and handler trees are first-class objects.

---

hmm, reading about dataflow architectures, where "When all of the tagged operands of an instruction become available (that is, output from previous instructions and/or user input), the instruction is marked as ready for execution by an execution unit." -- http://en.wikipedia.org/wiki/Dataflow_architecture . There is also "Dataflow is a software architecture based on the idea that changing the value of a variable should automatically force recalculation of the values of variables which depend on its value. Dataflow embodies these principles, with spreadsheets perhaps the most widespread embodiment of dataflow. " -- http://en.wikipedia.org/wiki/Dataflow . Interesting that dataflow, the hardware architecture, is backwards looking (like promises), and dataflow, the software architecture, is forward looking (like event-driven programming). A more major difference between the two seems to be that dataflow, the hardware arch, is about how you sequence a set of steps in a computation which may only be run once, whereas dataflow, the software arch, is about how you implicitly maintain invariants relating multiple pieces of state, which may presumably lead to the need to redo a calculation many times.

and of course the bigger distinction is between either of these and typical programming.

reminds me of the eager/lazy distinction, and also of promises, and also of multiple entry point functions, and also of event-driven programming.

should unify these.

--

also, with the problem of foldl and foldr and stack overflows due to unevaluated thunks, my intuition is to explore:

(a) a way for the programmer to specify that certain pieces of INPUT data are 'eager', e.g. any functions applied to that data are executed eagerly. This is because one of the main reasons for having all functions be 'lazy by default' is to allow infinite data structures; but if the user of a function knows that the input is not infinite, they can tell the function to go ahead and be eager on that input. Note that this eagerness is carried forward recursively (as opposed to Haskell's eagerness via seq, which is propagated backwards somewhat recursively, and deepseq, which is even more so; although i think we need seq and deepseq also).

(b) note that even a function which takes all finite (and eager) input may not want to work eagerly and produce all eager outputs; e.g. consider a function which produces an infinite sequence from a constant input.

(c) a sort of TCO for lazy functions/data; in the case of summing a large list via fold, the compiler should be able to realize that the thunks will ultimately be consumed (e.g. that 'if this value is requested, then that implies that ultimately that one will be too'), and evaluate them strictly. in fact, even in cases where the compiler cannot deduce this, the programmer should have a way to annotate the function (or parts of it) and specify that.

(d) should we encourage the habit (and make the standard libraries this way) of annotating so that any function which, when given eager inputs, produces eager outputs without the possibility of more than constant unevaluated thunks in the middle, does so? e.g. "fold op list" would look at list, and if list is marked as eager, it should use an eager foldl.

An argument against (d) might be, what if we want to use lazy lists not because we are dealing with an infinite list, but just a very long one? But we aren't checking whether the inputs are finite, we are checking whether they are marked eager, which is a choice made by the caller (or whoever called the caller, etc). So if the caller doesn't want this, they won't mark the list as eager.

i think the answer to (d) is "yes". in fact, i think that by default, we should assume that a function that is given all 'eager' arguments will not produce a lazy result unless it declares that it does; and hence, in that case, the compiler or interpreter should always evaluate the function as soon as it can (e.g. whenever it is at the head of a thunk; this is like wrapping each call f x as let y = f x in y `seq` y in haskell, i think). if a function in addition wants to declare that it still gives an eager result even if some particular inputs are lazy (e.g. f x y = y + (head x) can produce an eager result even if x is lazy), then it can if it wants to, but this is not required.

A note on (c); http://stackoverflow.com/questions/6872898/haskell-what-is-weak-head-normal-form says "GHC has a strictness analyzer that will detect some situations where a subexpression is always needed and it can then evaluate it ahead of time. This is only an optimization, however, and you should not rely on it to save you from overflows". We want something similar, but like TCO, we want the language to GUARANTEE that certain things will be evaluated strictly.

--

so, regarding (d), a proposal:

mark data as 'eager' or 'lazy'.

eager data should be data that, if it is found inside a thunk, can safely be (and should be) immediately recursively evaluated (deepseq in haskell)

the programmer can manually change the flag of data to eager or to lazy

eagerness/laziness is propagated forward, from function input to function output, as computation is done.

a function can declare its outputs as eager or lazy. an eager output means that, if the function is not given any inputs marked lazy, then that output will be marked eager. by default all outputs are eager, that is, we assume that a func returns all 'eager' return values if it is not given any lazy inputs.

any eager-outputting function applied to all eager values is performed immediately (e.g. as soon as the function is at the head of a thunk, it is further evaluated if this condition is true of its inputs), but any function applied to a lazy value generates a thunk (unless the function declares that it can accept a lazy input on that input and still run eagerly).

it is a mistake for a programmer to fail to declare as 'lazy' a function that can produce an infinite result from all finite inputs. such a function can cause an infinite loop/stack overflow when it returns, as the interpreter tries to immediately recursively evaluate its result. however, really this declaration facility is just a convenience, because the caller of a function can cause it to execute lazily by providing a lazy input (so, for example, if we have a function that produces the entire (finite but very large) lookahead for a game of chess given the current position, it could declare its output lazy (on the assumption that no one would ever actually want the full table), or whoever calls it could coerce the input that they are feeding to it (e.g. the current position) to lazy).

the lazy declaration can use a pattern, e.g. a struct that contains both eager and lazy subcomponents (e.g. if function f(x) returns a struct Y with two fields, a and b, such that Y.a = sum(x) and Y.b = [0...], then (if x is eager), Y.a should be eager and Y.b should be lazy).
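a minimal Haskell sketch of the flag-propagation idea, purely as illustration (Eager/Lazy and 'apply' are hypothetical names, and 'immediately recursively evaluated' is modeled with deepseq from the deepseq package):

import Control.DeepSeq (NFData, deepseq)

-- the flag the proposal attaches to data
data Marked a = Eager a | Lazy a

-- the flag propagates forward through application: an eager input
-- yields an output that is recursively forced as soon as it is
-- touched, while a lazy input just builds the usual thunk
apply :: NFData b => (a -> b) -> Marked a -> Marked b
apply f (Eager x) = let y = f x in y `deepseq` Eager y
apply f (Lazy x)  = Lazy (f x)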

in practice, is this usually much different from clojure's lazy-seq macro and similar lazy-* macros, which form a lazy iterator over code that returns a list? http://clojure.org/lazy ?

Haskell's Fibonacci sequence:

fib = 1 : 1 : zipWith (+) fib (tail fib)

can be written in Clojure:

(def fib (lazy-cat [1 1] (map + (rest fib) fib)))

hmm, this post: http://debasishg.blogspot.com/2010/05/laziness-in-clojure-some-thoughts.html

points out that if you have a large list, and you filter it, and then you map over the result, without laziness you iterate over the entire list twice, but with laziness, only once. So you will end up wanting to do imperative looping if you don't have laziness.
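e.g. in Haskell the composed pipeline streams in a single pass, and even works on an infinite list:

-- laziness fuses the passes: each element flows through filter and
-- then map before the next element is demanded
firstTen :: [Int]
firstTen = take 10 (map (* 2) (filter even [1 ..]))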

hmm.. it seems that filter isn't a 'dangerous' lazy function in the way that foldr is, because it doesn't require stack space proportional to the length of the list.

maybe we need to combine the above proposal with the "TCO" type thing; the evaluation of the supposedly 'eager' result happens only if some sort of TCO-like analysis determines that we're effectively forming thunks of thunks recursively via tail calls

i dunno how any of that would help if the user tries to do

foldr f z []     = z
foldr f z (x:xs) = x `f` foldr f z xs

sum1 = foldr (+) 0

try1 = sum1 veryBigList

as http://www.haskell.org/haskellwiki/Foldr_Foldl_Foldl%27 shows, the way the parentheses are placed, you never get two integers next to each other in the same parens; you always get something like 1 + (2 + (3 + (foldr (+) 0 [4..1000000]))), so you have to recurse on foldr before doing any addition.

perhaps the solution there is to declare functions as associative, so that the compiler could add 1+2+3 in this case (assuming we have the other eagerness stuff above).

alternately, the TCO-ish stuff could somehow realize that this function is like a loop (well, i guess that requires knowing the associativity, actually).

alternately/in addition, we could put this big thunk on the heap instead of on the stack, so at least it only takes up memory rather than crashing.

alternately, we could do the analysis specified in http://www.haskell.org/haskellwiki/Stack_overflow for choosing between foldl, foldr, foldl', foldr': "A function strict in its second argument will always require linear stack space with foldr, so foldl' should be used instead in that case. If the function is lazy/non-strict in its second argument we should use foldr to 1) support infinite lists and 2) to allow a streaming use of the input list where only part of it needs to be in memory at a time. Okay, both here and in the one-line summary, there is no mention of foldl. When should foldl be used? The pragmatic answer is: by and far, it shouldn't be used. A case where it makes a difference is if the function is conditionally strict in its first argument depending on its second, where I use conditionally strict to mean a function that is strict or not in one argument depending on another argument(s)."

in order to support the kind of reasoning in the last alternative, we need to distinguish between two kinds of laziness in our declarations: (a) if we are trying to evaluate the function but we may not have all of the arguments yet, which arguments does the function need evaluated before it can completely execute? (you might replace this with the other defn from http://www.haskell.org/haskellwiki/Stack_overflow concerning the possibility of nontermination; see below) (b) if we have evaluated the function and it has produced a result, will that result be infinite?

--

btw http://www.haskell.org/haskellwiki/Stack_overflow has another defn of strictness:

"* A strict function is a function f , such that f⊥ = ⊥.

Typically, we think of a function "being strict" in an argument as a function that "forces" its argument, but the above definition of strict should immediately suggest another function that is strict and doesn't "force" it's argument in the intuitive sense, namely id. ([]++) = id and therefore is a strict function. Sure enough, if you were to evaluate (concat (repeat [])) it would not terminate. As such, (++) is a conditionally strict function. This also makes the "always" slightly imprecise, a function that is strict because it just returns it's argument, will not use up stack space (but is, as mentioned, still an issue for infinitely long lists). "

--

in any case, wouldn't the Clojure example from http://debasishg.blogspot.com/2010/05/laziness-in-clojure-some-thoughts.html , where 'filter' and 'map' are composed in a way such that the list is only iterated over once, fail to work unless they are foldr-ish? i guess it works because 'filter' is "lazy/non-strict in its second argument", as http://www.haskell.org/haskellwiki/Stack_overflow puts it.

--

btw it seems that TCO-like strictness analysis should definitely be able to coerce foldl to foldl' in http://stackoverflow.com/questions/6872898/haskell-what-is-weak-head-normal-form

--

ok this is crazy too:

http://www.haskell.org/haskellwiki/Stack_overflow " Common newbie stack overflowing code:

myAverage = uncurry (/) . foldl' (\(acc, len) x -> (acc+x, len+1)) (0,0)

People who understand seq and weak head normal form (whnf) can immediately understand what goes wrong here. (acc+x, len+1) is already in whnf, so the seq (in the definition of foldl' ), which reduces a value to whnf, does nothing to this. This code will build up thunks just like the original foldl example, they'll just be inside a tuple. "

yarg, what do we do about that? haskell would let you make a 'strict tuple' and use that here. but that's just the kind of thinking that we want to avoid making the user do.
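for reference, the usual Haskell-level fix is to force the tuple's components, e.g. with bang patterns; this is exactly the extra thinking complained about above:

{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- the bangs force acc and len each time the lambda's argument is
-- pattern-matched, so additions can't pile up inside the (WHNF) tuple
myAverage :: [Double] -> Double
myAverage = uncurry (/) . foldl' (\(!acc, !len) x -> (acc + x, len + 1)) (0, 0)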

if we really have to do that, then maybe clojure soln of limiting laziness to data structures that explicitly support it is better; e.g. tuple etc would be strict by default.

again, maybe strictness analysis could determine that we will eventually need to reduce these guys and so would reduce them immediately

--

also dont forget my idea about doing something like 'garbage collection' on large thunks

dunno if that mixes with just putting them on the heap

btw in a language like haskell where you use folds for iteration, i guess it would be way inefficient to put this stuff on the heap (but maybe only move it there if it gets too big?)

---

and then what about this:

http://www.haskell.org/haskellwiki/Stack_overflow " Scans

A subtle stack-overflow surprise comes when

print (scanl (+) 0 [1..1000000])

completes successfully but

print (last (scanl (+) 0 [1..1000000]))

causes a stack overflow.

The latter stack overflow is explained exactly as before, namely,

last (scanl (+) 0 [1..5]) -> ... several steps ... -> ((((0+1)+2)+3)+4)+5

This is exactly like foldl , building a deep thunk, then evaluating, needing much stack.

Most puzzling is why the former succeeds without a stack overflow. This is caused by a combination of two factors:

    thunks in the list produced by scanl enjoy sharing: late thunks build upon early thunks

    printing a list of numbers evaluates early thunks and then late thunks "

i guess it's not really any more of a problem than the previous..

--

also, what about this one?

http://www.haskell.org/haskellwiki/Performance/Accumulating_parameter

first, we have the 'almost tail recursive' version, and we have to use an accumulating parameter to make it tail recursive. second, we have the same strictness problem as with foldl, above.

shouldn't the compiler do the 'almost tail recursive' transformation for us?

this seems like the foldl vs foldr problem.

using the naive implementation on that page we'll get a stack that looks like 1 + (1 + (1 + (len xs)))

using the tail-recursive version we get instead just the call len' xs ((1 + 1) + 1)

looking at the accumulating vs. naive implementations, we see that the naive one has a call "len xs + 1" and the accumulating one has "len' xs (1 + acc)". So we started with one where the recursive call was buried under the '+', and we just 'bubbled it up' to the top level of the AST. But we need the associativity of the operations 'above' the recursive call in the AST in order to bubble up the recursive call to the top, as you can see by looking at the two expansions above.
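roughly the two versions from that page, for reference (the bang on the accumulator also heads off the foldl-style thunk buildup):

{-# LANGUAGE BangPatterns #-}

-- naive: the recursive call is buried under (+), so each element
-- leaves a pending "1 +" behind
len :: [a] -> Int
len []       = 0
len (_ : xs) = 1 + len xs

-- accumulating: the call is in tail position, and the bang keeps
-- the accumulator evaluated instead of building (1 + (1 + ...))
len' :: [a] -> Int
len' = go 0
  where
    go !acc []       = acc
    go !acc (_ : xs) = go (acc + 1) xs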

--

this is a great read for Haskell's evaluation strategy:

http://stackoverflow.com/questions/6872898/haskell-what-is-weak-head-normal-form

--

related: GHC also has "bang pattern bindings", which means you prefix an exclamation point to a variable in a pattern, and then it's like 'seq' was applied to the assignment to that variable (e.g. to the value being assigned just before it is assigned)
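e.g. (a small illustration, assuming GHC's BangPatterns extension):

{-# LANGUAGE BangPatterns #-}

-- without the bangs, s and n would be bound to thunks; with them,
-- each right-hand side is forced (as with seq) when the binding
-- group is evaluated
mean :: [Double] -> Double
mean xs = let !s = sum xs
              !n = fromIntegral (length xs)
          in s / n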

--

http://www.haskell.org/haskellwiki/Stack_overflow

http://www.haskell.org/haskellwiki/Foldr_Foldl_Foldl%27

'Stack_overflow' is the better read but 'Foldr_Foldl_Foldl%27' is the easier read

" There is no call stack in Haskell. Instead we find a pattern matching stack whose entries are essentially case expressions waiting for their scrutinee to be evaluated enough that they can match a constructor (WHNF).

When GHC is evaluating a thunked expression it uses an internal stack. This inner stack for thunk evaluation is the one that can overflow in practice. "

--

in fact, Jasper should allow one to program in an 'active data structure' mode if one wishes (e.g. one specifies for various pieces of data what these pieces of data do, that is, how they combine/interact with their neighbors to produce new data, and what that data does -- i guess this would be programmed like stem cell differentiation in tissue development!)

--

"

2 Controversy!

Note that seq is the only way to force evaluation of a value with a function type (except by applying it, which is liable to cause other problems). As such, it is the only reason why Haskell programs are able to distinguish between the following two values:

undefined :: a -> b
const undefined :: a -> b

This violates the principle from lambda calculus of extensionality of functions, or eta-conversion, because f and \x -> f x are distinct functions, even though they return the same output for every input. For this reason, seq, and this distinction, is sometimes ignored e.g. when assessing the correctness of optimisation techniques or type class instances. "
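to make the quote concrete, seq can tell these two values apart even though applying them can't:

demo1, demo2 :: ()
demo1 = (undefined :: Int -> Int) `seq` ()        -- bottom
demo2 = (const undefined :: Int -> Int) `seq` ()  -- (), since const undefined is already in WHNF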

--

i guess there are 2 differences between dataflow spreadsheets and lazy evaluation:

in lazy evaluation, a function's inputs are parameters

in lazy evaluation, a function is executed just when it is needed by another function (top-down propagation of need)

in dataflow, a function (or rather, an object to which a function is attached)'s inputs are pieces of state

in dataflow, a function (or rather, an object to which a function is attached) is executed when its input states change

however, one could also imagine a lazy spreadsheet, which wouldn't re-execute a cell unless both its inputs had changed and its value was desired

some spreadsheet cells might be lazy in some of their inputs and eager in others, in that they execute immediately when the eager inputs change
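a minimal sketch of such a lazy cell (made-up names; a real version would also have to mark downstream cells dirty when something changes):

import Data.IORef

-- a cell caches its value; recomputation happens only when the cell
-- is dirty AND someone demands the value, rather than eagerly on
-- every input change
data Cell a = Cell
  { cached  :: IORef a
  , dirty   :: IORef Bool
  , compute :: IO a       -- demands upstream cells
  }

demand :: Cell a -> IO a
demand c = do
  d <- readIORef (dirty c)
  if d
    then do
      v <- compute c
      writeIORef (cached c) v
      writeIORef (dirty c) False
      pure v
    else readIORef (cached c)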

you could represent the spreadsheet by calling methods to attach listeners, but it would be neater and probably more concise to represent it as a graph where 'a listens to b' is 'a -> b'

in fact it's as if each node had a handler tree for its changes. but if we're going to go there, don't we want each node to be able to spout off various messages more descriptive than 'i changed'? and then aren't we like smalltalk? not that that's bad.

maybe dataflow is a special case of actors, one in which the actors are just containers for a value and the only messages are 'what is your value' and 'my value changed' (and pub/sub, i guess, although in our model that's outside the individual actors).

--

forwhile loop (my idea): do a (possibly nested) for loop, but set a flag and 'break' (all the way out) if a condition becomes true; at the end, do one thing if the condition was true, another thing otherwise

bounded while loop (my idea): a 'safe' while loop with a max number of iterations, and also a minimum amount of 'progress' (percentage or absolute change in any/all of various variables), e.g. for writing numerical iterative algorithms that have an escape hatch to ensure termination in this fashion
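a sketch of the bounded while idea in Haskell (hypothetical names; 'progress' here is just the absolute change in a single numeric state variable):

-- Left = stopped by the escape hatch, Right = condition became false
boundedWhile :: Int -> Double -> (Double -> Bool) -> (Double -> Double)
             -> Double -> Either Double Double
boundedWhile maxIter minProgress cond step = go maxIter
  where
    go n x
      | not (cond x) = Right x        -- normal termination
      | n <= 0       = Left x         -- hit the iteration cap
      | otherwise    =
          let x' = step x
          in if abs (x' - x) < minProgress
               then Left x'           -- insufficient progress
               else go (n - 1) x'

-- e.g. Newton's method for sqrt 2, guaranteed to terminate:
-- boundedWhile 100 1e-15 (\x -> abs (x*x - 2) > 1e-9) (\x -> (x + 2/x) / 2) 1.0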

---

http://users.aber.ac.uk/afc/stricthaskell.html

--

great essay on the problems with laziness:

http://www.megacz.com/thoughts/on.laziness.html

in summary:

the bit about concurrency: " Minor: Full Abstraction / Concurrency

Laziness breaks full abstraction, and adding in the necessary operator to restore full abstraction forces you into accepting a sort of crippled concurrency. People often forget that lazy PCF isn't fully abstract without an interleaving operator. This operator is either missing or else a hacky second-class citizen in all implementations.

Lazy PCF can in fact be used as a semantics for concurrency (for example, Kahn process networks use basically the same non-flat denotational domain as lazy PCF), but the more I learn about this the less I think that it's a complete semantics for concurrency. If you've decided that you want to have concurrency, you probably want operators like “fair merge” which have no denotational semantics in the lazy PCF domain. So I think that “models concurrency well” is a terrible argument for laziness. "

--

https://github.com/dmbarbour/Sirea

http://awelonblue.wordpress.com/about/

https://github.com/dmbarbour/awelon

" Wednesday, September 5, 2012 Reactive Demand Programming David Barbour has created a very promising and exciting paradigm for writing interactive, networked applications: Reactive Demand Programming (RDP).

RDP is very sophisticated and I can't really do it justice here, but its salient points are:

    An RDP application is a dynamically-changing set of semi-permanent, bidirectional data exchanges, called behaviors. What pipes are to Unix, behaviors are to RDP.
    A signal is a monodirectional channel carrying a value, and varies discretely over time.
    A behavior is made up of one or more input signals, called demands, and a single output signal.
    RDP builds in the notion of anticipation - every signal update comes with the expected duration the new value is valid. This allows low latency through smart scheduling.
    [Update: See David's corrections in the comments.]

An example for a behavior would be a camera receiving move and zoom commands (or rather demands) with discrete time intervals (e.g. as long as a user moves the camera joystick) from one or more users on input signals, interpolating these commands using some deterministic decision procedure (such as averaging all commands), and outputting camera frames on the output signal, with the anticipation measure telling clients something about the rate at which the camera refreshes, allowing them to smartly perform display reprocessing.

The best way to get started is the README of David's RDP implementation. David has also just posted a progress report on his blog, which contains many articles on RDP.

I think RDP is one of the most exciting developments in interactive application development and is worth checking out.

..

dmbarbour said...

    I'm glad to have your interest, Manuel. As I've done a poor job communicating about RDP, I'll try to clarify a few things:
    Rather than "many input signals, one output signal", a behavior may have a distinct output signal for each distinct input signal. Much like a function can have a distinct value for each input. A behavior corresponds closely to a function. RDP behaviors are not pure functions, but far more restricted in side-effects than a typical functional/imperative hybrid. The constraints on effects are such that behaviors can be treated very much like pure functions, e.g. with respect to commuting expressions or eliminating duplicate expressions. One thing you can do with pure functions that you cannot do with RDP behaviors is completely eliminate a behavior just because you drop the output.
    The essential property of RDP is enforcing declarative effects while (unlike most logic and concurrent constraint paradigms) also supporting procedural abstraction and encapsulation. This puts RDP at a very sweet spot between logic and procedural. Dynamic behaviors allow RDP an OO-like aspect, staged binding of resources. (Staged is somewhere between early binding and late binding. I like to call it `punctual` binding. :)
    Via declarative effects, multiple input signals to a behavior can interact. It is possible, for example, to combine a set of signals into signal of sets - a useful pattern that became the basis for demand monitors.
    Signals for RDP don't need to vary discretely. But discrete-varying signals are a helluva lot easier to implement and reason about. Still, I plan to eventually support a limited subset of continuous-varying signals, e.g. via trigonometric polynomials. (Right now, I lean towards modeling continuous signals as a layer above the discrete-varying signals.)
    I had not thought of signals as channels before. I guess that may be useful, since there is a tight correspondence in RDP (signal updates are necessarily communicated via some sort of data plumbing).
    I agree that anticipation is a significant feature of RDP. It reduces latency and synchronization overheads. I can actually anticipate more than one future value at a time, which (assuming a system that is predictable for a few seconds at a time) can potentially save an order of magnitude in communication and processing costs - i.e. I can tell you about the next ten updates, then correct you if I later realize I was wrong about the seventh update, and thus benefit from reduced communication and batched computations. It's also valuable for chained composition of constraint, planning or prediction systems, and resource acquisition.
    Best,
    David
    Wed Sep 05, 08:05:00 PM GMT+2

    Manuel Simoni said...

    Could you please explain how a programmer can make a behavior return different independent outputs for different inputs?

    Wed Sep 05, 11:36:00 PM GMT+2

    dmbarbour said...
    An RDP programmer does not need to do anything special to make a behavior return distinct outputs for distinct inputs. Consider two cases:
    1) A simple behavior `bfmap (+ 1)` takes the input signal and adds one to produce an output signal. Clearly, this behavior can be used in any number of locations, and every distinct input signal will generate a distinct output signal. (`bfmap` is one of many primitive constructors for behaviors provided by Sirea.)
    2) A behavior representing access to a filesystem resource (perhaps a particular directory) may allow a developer to provide an input signal containing a filename, and respond with an output signal containing that file's contents (which might change over time). Obviously, it is essential that each distinct filename produce the appropriate file contents (which may be distinct), even if there are many concurrent requests.
    I wonder if we're speaking past one another. What is it you mean by `many input signals`? I suppose it could refer to a complex signal like (x :&: (y :|: z). But I think this is not the case.
    For RDP, it hugely helps to distinguish behaviors from resources. Resources (including sensors, actuators, state, demand monitors, etc.) exist outside of RDP. Some behaviors may provide access to external resources. If you use a behavior that accesses a resource multiple times, you do not replicate the resource; you just end up accessing the same resource many times.
    Resources like a filesystem can provide a distinct response to each input because, when setting up the signal connection for the demand, one also sets up a signal connection for the response. Demand and response are tightly coupled in RDP (modulo dead-code optimizations).
    If we want to guarantee that every signal has the same response, we can use a unit signal. Since all active unit signals are the same, at least in the short term, it is very easy to reason that all the responses should be the same.
    Thu Sep 06, 08:44:00 AM GMT+2

    Manuel Simoni said...

    "What is it you mean by `many input signals`?"

    For example, the inputs from the camera operators' joysticks.

    Thu Sep 06, 11:13:00 AM GMT+2

    dmbarbour said...
    Yes, that is what I understood. Did the prior explanation help?
    Taking your camera example in particular: a camera can have more than one behavior associated with it - e.g. one behavior to control the actuation (pan, tilt, zoom) and another behavior to request the video stream.
    Obviously, most simple cameras cannot return different video streams to different clients. Well, I suppose one could request variations like down-sampling the resolution or update frequency. But, assuming a simple camera model that only replies with one video signal to all active clients... makes a rather poor example if you want to see a different response for each client!
    Even assuming all active clients effectively receive the same video signal from the camera, one must think of this as a duplicated signal - i.e. each client can (via distinct behaviors composed with the video acquisition) manipulate the signal through different post-processing filters and deliver the video to different monitors.
    The control signals are a different story. As you mentioned earlier, one option is to `average` the requests to influence the camera. Another option is to respond to some control signals with "unable to comply". Note that the response to the control signal doesn't need to be a video stream.
    (Note: "averaging" demands is not preferred in RDP. For idempotence, duplicate demands must not be counted twice. So if you want to average, you must ensure there are no duplicate demands - e.g. by including with each demand a unique URL identifying the client. Similar idioms apply to tallying votes.)
    Thu Sep 06, 04:48:00 PM GMT+2 

"

--

extensible primitive types: unboxed stuff like int, string, float could be thought of as inscrutable 'atoms', which are treated in a uniform way, and which can have primitive 'operations' (platform primitives or external libraries) attached to them

this also allows us to represent zero-ary functions, by having the 'atom' operator serve as 'suspension' when applied to a Jasper function

--

a selection primitive:

i guess either 'case' or 'get'.

GHC core seems to choose case. GHC core uses the trick of passing types explicitly as parameters, so that case can be used for polymorphic dispatch too. We could use typeOf for that.

i'm thinking we want to use 'get' rather than case, because it's more graph-y.

--

http://en.wikipedia.org/wiki/Conditional_(computer_programming)

http://users.dickinson.edu/~wahlst/356/ch8.pdf

--

one instantiation of the generalization of list comprehensions could even be a parallel control structure: a large number of threads are generated; in each one, potential actions are randomly Created by some function, then Transformed by another (or even executed within an undo-able transaction), and then Filtered by a third; actions which survive the Filter are actually executed (or, if there was an undo-able transaction, committed)

the Creation per-thread could be random (a generate-and-test stochastic algorithm), or deterministic concurrent (differing per thread in a deterministic way given thread identity; such a computation could also be executed serially in one thread, like an ordinary Python list comprehension), or deterministic fully serial (the subsequent item depends on all previous items), or deterministic accumulator serial (the subsequent item depends on the state of some accumulator variable; this is like a fold, and Transform might be called Integrate)

or, sequencing could be based on the middle step (Integrate/Pattern), rather than the Create step.
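a serial sketch of the Create/Transform/Filter shape (hypothetical names; this is the deterministic-concurrent case, where Creation depends only on a thread index, run here in one thread as an ordinary comprehension):

-- actions that survive the Filter are the ones that would actually
-- be executed (or committed); here we just collect them
runCTF :: Int -> (Int -> a) -> (a -> b) -> (b -> Bool) -> [b]
runCTF nThreads create transform keep =
  [ y | i <- [0 .. nThreads - 1], let y = transform (create i), keep y ]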

--

there's a similarity between macro application, logic-based production rule firing, and PEG grammar application: in each case you have a control structure that looks over the thing to be matched (a state), applies a set of matching rules to it in order to find a potential match, and then mutates the state according to what the rule tells you to do, which may be a function of the part of the state which matched
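the shared shape, as a minimal sketch (a rule either fails to match or returns a mutated state; the control structure tries rules in order and repeats to a fixed point, which of course may not terminate):

type Rule s = s -> Maybe s   -- Nothing = rule doesn't match this state

-- try the rules in order; return the first successful rewrite
step :: [Rule s] -> s -> Maybe s
step rules s = foldr (\r rest -> maybe rest Just (r s)) Nothing rules

-- fire rules until none match; macro expansion, production systems,
-- and PEG-driven rewriting all instantiate this loop differently
rewrite :: [Rule s] -> s -> s
rewrite rules s = maybe s (rewrite rules) (step rules s)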

--